
[MTD] Rework the out of band handling completely

Hopefully the last iteration on this!

The handling of out of band data on NAND was accompanied by tons of fruitless
discussions and half-arsed patches to make it work for a particular
problem. Sufficiently annoyed by all those "I know it better" mails and the
reasonable amount of discarded "it solves my problem" patches, I finally decided
to go for the big rework. After removing the _ecc variants of the mtd read/write
functions, the solution to satisfy the various requirements was to refactor the
read/write _oob functions in mtd.

The major change is that read/write_oob now takes a pointer to an operation
descriptor structure, struct mtd_oob_ops, instead of taking at least seven
separate arguments.
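For reference, the descriptor as this patch uses it looks roughly like the
sketch below. The field names are taken from the diff; the struct is reduced
to what the commit exercises and is not the full in-tree definition:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the operation descriptor introduced by this patch,
 * limited to the fields the diff below touches. */
enum { MTD_OOB_PLACE, MTD_OOB_AUTO, MTD_OOB_RAW };

struct mtd_oob_ops {
	int	mode;		/* oob handling mode (one of the three above) */
	size_t	len;		/* number of data bytes to read/write */
	size_t	retlen;		/* bytes actually transferred (output) */
	size_t	ooblen;		/* number of oob bytes to read/write */
	size_t	ooboffs;	/* offset within the oob area (MTD_OOB_PLACE) */
	uint8_t	*datbuf;	/* data buffer; NULL selects oob-only handling */
	uint8_t	*oobbuf;	/* oob buffer */
};
```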

read/write_oob, which should probably be renamed to something more descriptive,
can do the following tasks:

- read/write out of band data
- read/write data content and out of band data
- read/write raw data content and out of band data (ecc disabled)

struct mtd_oob_ops has a mode field, which determines the oob handling mode.

Aside from the MTD_OOB_RAW mode, which is intended especially for diagnostic
purposes and some internal functions, e.g. bad block table creation, the other
two modes are for mtd clients:

MTD_OOB_PLACE puts/gets the given oob data exactly to/from the place which is
described by the ooboffs and ooblen fields of the mtd_oob_ops structure. It's
up to the caller to make sure that the byte positions are not used by the ECC
placement algorithms.

MTD_OOB_AUTO puts/gets the given oob data automatically to/from the places in
the out of band area which are described by the oobfree tuples in the ecclayout
data structure which is associated with the device.
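A hypothetical oob-only caller would fill the descriptor for the two client
modes roughly as follows. The helper names are made up for illustration and
the struct is a reduced sketch, not the kernel definition:

```c
#include <stddef.h>
#include <stdint.h>

/* Reduced sketch of the descriptor from this patch. */
enum { MTD_OOB_PLACE, MTD_OOB_AUTO, MTD_OOB_RAW };

struct mtd_oob_ops {
	int mode;
	size_t len, retlen, ooblen, ooboffs;
	uint8_t *datbuf, *oobbuf;
};

/* MTD_OOB_PLACE: the caller pins the exact byte positions via ooboffs
 * and must keep clear of the bytes claimed by the ECC placement. */
static void setup_place(struct mtd_oob_ops *ops, uint8_t *oob, size_t len)
{
	ops->mode = MTD_OOB_PLACE;
	ops->ooboffs = 8;	/* absolute position within the oob area */
	ops->ooblen = len;
	ops->oobbuf = oob;
	ops->datbuf = NULL;	/* oob only */
}

/* MTD_OOB_AUTO: no ooboffs needed; the bytes land in the free slots
 * described by the device's ecclayout oobfree tuples. */
static void setup_auto(struct mtd_oob_ops *ops, uint8_t *oob, size_t len)
{
	ops->mode = MTD_OOB_AUTO;
	ops->ooboffs = 0;
	ops->ooblen = len;
	ops->oobbuf = oob;
	ops->datbuf = NULL;
}
```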

Whether data plus oob or oob-only handling is done depends on the setting of
the datbuf member of the data structure. When datbuf == NULL, the internal
read/write_oob functions are selected; otherwise the read/write data routines
are invoked.
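The selection rule can be sketched as a trivial dispatcher. The stub
functions and their return values are made up for illustration only; this is
not the kernel code:

```c
#include <stddef.h>
#include <stdint.h>

struct mtd_oob_ops {
	uint8_t *datbuf;	/* NULL means oob-only operation */
	uint8_t *oobbuf;
};

/* Stubs standing in for the two internal paths. */
static int do_read_oob_only(struct mtd_oob_ops *ops) { (void)ops; return 1; }
static int do_read_data_oob(struct mtd_oob_ops *ops) { (void)ops; return 2; }

/* The dispatch rule described above: a NULL datbuf selects the
 * internal oob-only path, anything else the data+oob path. */
static int dispatch_read(struct mtd_oob_ops *ops)
{
	if (ops->datbuf == NULL)
		return do_read_oob_only(ops);
	return do_read_data_oob(ops);
}
```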

Tested on a few platforms with all variants. Please be aware of possible
regressions for your particular device / application scenario.

Disclaimer: Any whining will be ignored from those who just contributed "hot
air blurb" and never sat down to tackle the underlying problem of the mess in
the NAND driver grown over time and the big chunk of work to fix up the
existing users. The problem was not the holiness of the existing MTD
interfaces. The problem was the lack of time to go for the big overhaul. It's
easy to add more mess to the existing one, but it takes a lot of effort to go
for a real solution.

Improvements and bugfixes are welcome!

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

+1044 -606
+24 -15
drivers/mtd/devices/doc2000.c
··· 59 59 size_t *retlen, u_char *buf, u_char *eccbuf, struct nand_oobinfo *oobsel); 60 60 static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len, 61 61 size_t *retlen, const u_char *buf, u_char *eccbuf, struct nand_oobinfo *oobsel); 62 - static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 63 - size_t *retlen, u_char *buf); 64 - static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 65 - size_t *retlen, const u_char *buf); 62 + static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, 63 + struct mtd_oob_ops *ops); 64 + static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, 65 + struct mtd_oob_ops *ops); 66 66 static int doc_write_oob_nolock(struct mtd_info *mtd, loff_t ofs, size_t len, 67 67 size_t *retlen, const u_char *buf); 68 68 static int doc_erase (struct mtd_info *mtd, struct erase_info *instr); ··· 959 959 return 0; 960 960 } 961 961 962 - static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 963 - size_t * retlen, u_char * buf) 962 + static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, 963 + struct mtd_oob_ops *ops) 964 964 { 965 965 struct DiskOnChip *this = mtd->priv; 966 966 int len256 = 0, ret; 967 967 struct Nand *mychip; 968 + uint8_t *buf = ops->oobbuf; 969 + size_t len = ops->len; 970 + 971 + BUG_ON(ops->mode != MTD_OOB_PLACE); 972 + 973 + ofs += ops->ooboffs; 968 974 969 975 mutex_lock(&this->lock); 970 976 ··· 1011 1005 1012 1006 DoC_ReadBuf(this, &buf[len256], len - len256); 1013 1007 1014 - *retlen = len; 1008 + ops->retlen = len; 1015 1009 /* Reading the full OOB data drops us off of the end of the page, 1016 1010 * causing the flash device to go into busy mode, so we need 1017 1011 * to wait until ready 11.4.1 and Toshiba TC58256FT docs */ ··· 1126 1120 1127 1121 } 1128 1122 1129 - static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 1130 - size_t * retlen, const u_char * buf) 1123 + static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, 1124 + 
struct mtd_oob_ops *ops) 1131 1125 { 1132 - struct DiskOnChip *this = mtd->priv; 1133 - int ret; 1126 + struct DiskOnChip *this = mtd->priv; 1127 + int ret; 1134 1128 1135 - mutex_lock(&this->lock); 1136 - ret = doc_write_oob_nolock(mtd, ofs, len, retlen, buf); 1129 + BUG_ON(ops->mode != MTD_OOB_PLACE); 1137 1130 1138 - mutex_unlock(&this->lock); 1139 - return ret; 1131 + mutex_lock(&this->lock); 1132 + ret = doc_write_oob_nolock(mtd, ofs + ops->ooboffs, ops->len, 1133 + &ops->retlen, ops->oobbuf); 1134 + 1135 + mutex_unlock(&this->lock); 1136 + return ret; 1140 1137 } 1141 1138 1142 1139 static int doc_erase(struct mtd_info *mtd, struct erase_info *instr)
+23 -11
drivers/mtd/devices/doc2001.c
··· 43 43 static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len, 44 44 size_t *retlen, const u_char *buf, u_char *eccbuf, 45 45 struct nand_oobinfo *oobsel); 46 - static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 47 - size_t *retlen, u_char *buf); 48 - static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 49 - size_t *retlen, const u_char *buf); 46 + static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, 47 + struct mtd_oob_ops *ops); 48 + static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, 49 + struct mtd_oob_ops *ops); 50 50 static int doc_erase (struct mtd_info *mtd, struct erase_info *instr); 51 51 52 52 static struct mtd_info *docmillist = NULL; ··· 662 662 return ret; 663 663 } 664 664 665 - static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 666 - size_t *retlen, u_char *buf) 665 + static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, 666 + struct mtd_oob_ops *ops) 667 667 { 668 668 #ifndef USE_MEMCPY 669 669 int i; ··· 672 672 struct DiskOnChip *this = mtd->priv; 673 673 void __iomem *docptr = this->virtadr; 674 674 struct Nand *mychip = &this->chips[ofs >> this->chipshift]; 675 + uint8_t *buf = ops->oobbuf; 676 + size_t len = ops->len; 677 + 678 + BUG_ON(ops->mode != MTD_OOB_PLACE); 679 + 680 + ofs += ops->ooboffs; 675 681 676 682 /* Find the chip which is to be used and select it */ 677 683 if (this->curfloor != mychip->floor) { ··· 714 708 #endif 715 709 buf[len - 1] = ReadDOC(docptr, LastDataRead); 716 710 717 - *retlen = len; 711 + ops->retlen = len; 718 712 719 713 return 0; 720 714 } 721 715 722 - static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 723 - size_t *retlen, const u_char *buf) 716 + static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, 717 + struct mtd_oob_ops *ops) 724 718 { 725 719 #ifndef USE_MEMCPY 726 720 int i; ··· 730 724 struct DiskOnChip *this = mtd->priv; 731 725 void __iomem *docptr = this->virtadr; 732 726 struct Nand 
*mychip = &this->chips[ofs >> this->chipshift]; 727 + uint8_t *buf = ops->oobbuf; 728 + size_t len = ops->len; 729 + 730 + BUG_ON(ops->mode != MTD_OOB_PLACE); 731 + 732 + ofs += ops->ooboffs; 733 733 734 734 /* Find the chip which is to be used and select it */ 735 735 if (this->curfloor != mychip->floor) { ··· 787 775 if (ReadDOC(docptr, Mil_CDSN_IO) & 1) { 788 776 printk("Error programming oob data\n"); 789 777 /* FIXME: implement Bad Block Replacement (in nftl.c ??) */ 790 - *retlen = 0; 778 + ops->retlen = 0; 791 779 ret = -EIO; 792 780 } 793 781 dummy = ReadDOC(docptr, LastDataRead); 794 782 795 - *retlen = len; 783 + ops->retlen = len; 796 784 797 785 return ret; 798 786 }
+23 -11
drivers/mtd/devices/doc2001plus.c
··· 47 47 static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len, 48 48 size_t *retlen, const u_char *buf, u_char *eccbuf, 49 49 struct nand_oobinfo *oobsel); 50 - static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 51 - size_t *retlen, u_char *buf); 52 - static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 53 - size_t *retlen, const u_char *buf); 50 + static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, 51 + struct mtd_oob_ops *ops); 52 + static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, 53 + struct mtd_oob_ops *ops); 54 54 static int doc_erase (struct mtd_info *mtd, struct erase_info *instr); 55 55 56 56 static struct mtd_info *docmilpluslist = NULL; ··· 868 868 return ret; 869 869 } 870 870 871 - static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 872 - size_t *retlen, u_char *buf) 871 + static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, 872 + struct mtd_oob_ops *ops) 873 873 { 874 874 loff_t fofs, base; 875 875 struct DiskOnChip *this = mtd->priv; 876 876 void __iomem * docptr = this->virtadr; 877 877 struct Nand *mychip = &this->chips[ofs >> this->chipshift]; 878 878 size_t i, size, got, want; 879 + uint8_t *buf = ops->oobbuf; 880 + size_t len = ops->len; 881 + 882 + BUG_ON(ops->mode != MTD_OOB_PLACE); 883 + 884 + ofs += ops->ooboffs; 879 885 880 886 DoC_CheckASIC(docptr); 881 887 ··· 947 941 /* Disable flash internally */ 948 942 WriteDOC(0, docptr, Mplus_FlashSelect); 949 943 950 - *retlen = len; 944 + ops->retlen = len; 951 945 return 0; 952 946 } 953 947 954 - static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, size_t len, 955 - size_t *retlen, const u_char *buf) 948 + static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, 949 + struct mtd_oob_ops *ops) 956 950 { 957 951 volatile char dummy; 958 952 loff_t fofs, base; ··· 961 955 struct Nand *mychip = &this->chips[ofs >> this->chipshift]; 962 956 size_t i, size, got, want; 963 957 int ret = 0; 958 + uint8_t 
*buf = ops->oobbuf; 959 + size_t len = ops->len; 960 + 961 + BUG_ON(ops->mode != MTD_OOB_PLACE); 962 + 963 + ofs += ops->ooboffs; 964 964 965 965 DoC_CheckASIC(docptr); 966 966 ··· 1042 1030 printk("MTD: Error 0x%x programming oob at 0x%x\n", 1043 1031 dummy, (int)ofs); 1044 1032 /* FIXME: implement Bad Block Replacement */ 1045 - *retlen = 0; 1033 + ops->retlen = 0; 1046 1034 ret = -EIO; 1047 1035 } 1048 1036 dummy = ReadDOC(docptr, Mplus_LastDataRead); ··· 1055 1043 /* Disable flash internally */ 1056 1044 WriteDOC(0, docptr, Mplus_FlashSelect); 1057 1045 1058 - *retlen = len; 1046 + ops->retlen = len; 1059 1047 return ret; 1060 1048 } 1061 1049
+87 -24
drivers/mtd/inftlcore.c
··· 151 151 */ 152 152 153 153 /* 154 + * Read oob data from flash 155 + */ 156 + int inftl_read_oob(struct mtd_info *mtd, loff_t offs, size_t len, 157 + size_t *retlen, uint8_t *buf) 158 + { 159 + struct mtd_oob_ops ops; 160 + int res; 161 + 162 + ops.mode = MTD_OOB_PLACE; 163 + ops.ooboffs = offs & (mtd->writesize - 1); 164 + ops.ooblen = len; 165 + ops.oobbuf = buf; 166 + ops.datbuf = NULL; 167 + ops.len = len; 168 + 169 + res = mtd->read_oob(mtd, offs & ~(mtd->writesize - 1), &ops); 170 + *retlen = ops.retlen; 171 + return res; 172 + } 173 + 174 + /* 175 + * Write oob data to flash 176 + */ 177 + int inftl_write_oob(struct mtd_info *mtd, loff_t offs, size_t len, 178 + size_t *retlen, uint8_t *buf) 179 + { 180 + struct mtd_oob_ops ops; 181 + int res; 182 + 183 + ops.mode = MTD_OOB_PLACE; 184 + ops.ooboffs = offs & (mtd->writesize - 1); 185 + ops.ooblen = len; 186 + ops.oobbuf = buf; 187 + ops.datbuf = NULL; 188 + ops.len = len; 189 + 190 + res = mtd->write_oob(mtd, offs & ~(mtd->writesize - 1), &ops); 191 + *retlen = ops.retlen; 192 + return res; 193 + } 194 + 195 + /* 196 + * Write data and oob to flash 197 + */ 198 + static int inftl_write(struct mtd_info *mtd, loff_t offs, size_t len, 199 + size_t *retlen, uint8_t *buf, uint8_t *oob) 200 + { 201 + struct mtd_oob_ops ops; 202 + int res; 203 + 204 + ops.mode = MTD_OOB_PLACE; 205 + ops.ooboffs = offs; 206 + ops.ooblen = mtd->oobsize; 207 + ops.oobbuf = oob; 208 + ops.datbuf = buf; 209 + ops.len = len; 210 + 211 + res = mtd->write_oob(mtd, offs & ~(mtd->writesize - 1), &ops); 212 + *retlen = ops.retlen; 213 + return res; 214 + } 215 + 216 + /* 154 217 * INFTL_findfreeblock: Find a free Erase Unit on the INFTL partition. 155 218 * This function is used when the give Virtual Unit Chain. 
156 219 */ ··· 290 227 if ((BlockMap[block] != 0xffff) || BlockDeleted[block]) 291 228 continue; 292 229 293 - if (mtd->read_oob(mtd, (thisEUN * inftl->EraseSize) 294 - + (block * SECTORSIZE), 16 , &retlen, 295 - (char *)&oob) < 0) 230 + if (inftl_read_oob(mtd, (thisEUN * inftl->EraseSize) 231 + + (block * SECTORSIZE), 16, &retlen, 232 + (char *)&oob) < 0) 296 233 status = SECTOR_IGNORE; 297 234 else 298 235 status = oob.b.Status | oob.b.Status1; ··· 367 304 memset(&oob, 0xff, sizeof(struct inftl_oob)); 368 305 oob.b.Status = oob.b.Status1 = SECTOR_USED; 369 306 370 - nand_write_raw(inftl->mbd.mtd, (inftl->EraseSize * targetEUN) + 371 - (block * SECTORSIZE), SECTORSIZE, &retlen, 372 - movebuf, (char *)&oob); 307 + inftl_write(inftl->mbd.mtd, (inftl->EraseSize * targetEUN) + 308 + (block * SECTORSIZE), SECTORSIZE, &retlen, 309 + movebuf, (char *)&oob); 373 310 } 374 311 375 312 /* ··· 500 437 silly = MAX_LOOPS; 501 438 502 439 while (thisEUN <= inftl->lastEUN) { 503 - mtd->read_oob(mtd, (thisEUN * inftl->EraseSize) + 504 - blockofs, 8, &retlen, (char *)&bci); 440 + inftl_read_oob(mtd, (thisEUN * inftl->EraseSize) + 441 + blockofs, 8, &retlen, (char *)&bci); 505 442 506 443 status = bci.Status | bci.Status1; 507 444 DEBUG(MTD_DEBUG_LEVEL3, "INFTL: status of block %d in " ··· 588 525 nacs = 0; 589 526 thisEUN = inftl->VUtable[thisVUC]; 590 527 if (thisEUN != BLOCK_NIL) { 591 - mtd->read_oob(mtd, thisEUN * inftl->EraseSize 592 - + 8, 8, &retlen, (char *)&oob.u); 528 + inftl_read_oob(mtd, thisEUN * inftl->EraseSize 529 + + 8, 8, &retlen, (char *)&oob.u); 593 530 anac = oob.u.a.ANAC + 1; 594 531 nacs = oob.u.a.NACs + 1; 595 532 } ··· 610 547 oob.u.a.parityPerField = parity; 611 548 oob.u.a.discarded = 0xaa; 612 549 613 - mtd->write_oob(mtd, writeEUN * inftl->EraseSize + 8, 8, 614 - &retlen, (char *)&oob.u); 550 + inftl_write_oob(mtd, writeEUN * inftl->EraseSize + 8, 8, 551 + &retlen, (char *)&oob.u); 615 552 616 553 /* Also back up header... 
*/ 617 554 oob.u.b.virtualUnitNo = cpu_to_le16(thisVUC); ··· 621 558 oob.u.b.parityPerField = parity; 622 559 oob.u.b.discarded = 0xaa; 623 560 624 - mtd->write_oob(mtd, writeEUN * inftl->EraseSize + 625 - SECTORSIZE * 4 + 8, 8, &retlen, (char *)&oob.u); 561 + inftl_write_oob(mtd, writeEUN * inftl->EraseSize + 562 + SECTORSIZE * 4 + 8, 8, &retlen, (char *)&oob.u); 626 563 627 564 inftl->PUtable[writeEUN] = inftl->VUtable[thisVUC]; 628 565 inftl->VUtable[thisVUC] = writeEUN; ··· 673 610 if (BlockUsed[block] || BlockDeleted[block]) 674 611 continue; 675 612 676 - if (mtd->read_oob(mtd, (thisEUN * inftl->EraseSize) 677 - + (block * SECTORSIZE), 8 , &retlen, 613 + if (inftl_read_oob(mtd, (thisEUN * inftl->EraseSize) 614 + + (block * SECTORSIZE), 8 , &retlen, 678 615 (char *)&bci) < 0) 679 616 status = SECTOR_IGNORE; 680 617 else ··· 774 711 "block=%d)\n", inftl, block); 775 712 776 713 while (thisEUN < inftl->nb_blocks) { 777 - if (mtd->read_oob(mtd, (thisEUN * inftl->EraseSize) + 778 - blockofs, 8, &retlen, (char *)&bci) < 0) 714 + if (inftl_read_oob(mtd, (thisEUN * inftl->EraseSize) + 715 + blockofs, 8, &retlen, (char *)&bci) < 0) 779 716 status = SECTOR_IGNORE; 780 717 else 781 718 status = bci.Status | bci.Status1; ··· 809 746 if (thisEUN != BLOCK_NIL) { 810 747 loff_t ptr = (thisEUN * inftl->EraseSize) + blockofs; 811 748 812 - if (mtd->read_oob(mtd, ptr, 8, &retlen, (char *)&bci) < 0) 749 + if (inftl_read_oob(mtd, ptr, 8, &retlen, (char *)&bci) < 0) 813 750 return -EIO; 814 751 bci.Status = bci.Status1 = SECTOR_DELETED; 815 - if (mtd->write_oob(mtd, ptr, 8, &retlen, (char *)&bci) < 0) 752 + if (inftl_write_oob(mtd, ptr, 8, &retlen, (char *)&bci) < 0) 816 753 return -EIO; 817 754 INFTL_trydeletechain(inftl, block / (inftl->EraseSize / SECTORSIZE)); 818 755 } ··· 853 790 memset(&oob, 0xff, sizeof(struct inftl_oob)); 854 791 oob.b.Status = oob.b.Status1 = SECTOR_USED; 855 792 856 - nand_write_raw(inftl->mbd.mtd, (writeEUN * inftl->EraseSize) + 857 - blockofs, 
SECTORSIZE, &retlen, (char *)buffer, 858 - (char *)&oob); 793 + inftl_write(inftl->mbd.mtd, (writeEUN * inftl->EraseSize) + 794 + blockofs, SECTORSIZE, &retlen, (char *)buffer, 795 + (char *)&oob); 859 796 /* 860 797 * need to write SECTOR_USED flags since they are not written 861 798 * in mtd_writeecc ··· 883 820 "buffer=%p)\n", inftl, block, buffer); 884 821 885 822 while (thisEUN < inftl->nb_blocks) { 886 - if (mtd->read_oob(mtd, (thisEUN * inftl->EraseSize) + 823 + if (inftl_read_oob(mtd, (thisEUN * inftl->EraseSize) + 887 824 blockofs, 8, &retlen, (char *)&bci) < 0) 888 825 status = SECTOR_IGNORE; 889 826 else
+16 -11
drivers/mtd/inftlmount.c
··· 43 43 44 44 char inftlmountrev[]="$Revision: 1.18 $"; 45 45 46 + extern int inftl_read_oob(struct mtd_info *mtd, loff_t offs, size_t len, 47 + size_t *retlen, uint8_t *buf); 48 + extern int inftl_write_oob(struct mtd_info *mtd, loff_t offs, size_t len, 49 + size_t *retlen, uint8_t *buf); 50 + 46 51 /* 47 52 * find_boot_record: Find the INFTL Media Header and its Spare copy which 48 53 * contains the various device information of the INFTL partition and ··· 112 107 } 113 108 114 109 /* To be safer with BIOS, also use erase mark as discriminant */ 115 - if ((ret = mtd->read_oob(mtd, block * inftl->EraseSize + 116 - SECTORSIZE + 8, 8, &retlen, 117 - (char *)&h1) < 0)) { 110 + if ((ret = inftl_read_oob(mtd, block * inftl->EraseSize + 111 + SECTORSIZE + 8, 8, &retlen, 112 + (char *)&h1) < 0)) { 118 113 printk(KERN_WARNING "INFTL: ANAND header found at " 119 114 "0x%x in mtd%d, but OOB data read failed " 120 115 "(err %d)\n", block * inftl->EraseSize, ··· 368 363 return -1; 369 364 370 365 if (check_oob) { 371 - if(mtd->read_oob(mtd, address, mtd->oobsize, 372 - &retlen, &buf[SECTORSIZE]) < 0) 366 + if(inftl_read_oob(mtd, address, mtd->oobsize, 367 + &retlen, &buf[SECTORSIZE]) < 0) 373 368 return -1; 374 369 if (memcmpb(buf + SECTORSIZE, 0xff, mtd->oobsize) != 0) 375 370 return -1; ··· 438 433 uci.Reserved[2] = 0; 439 434 uci.Reserved[3] = 0; 440 435 instr->addr = block * inftl->EraseSize + SECTORSIZE * 2; 441 - if (mtd->write_oob(mtd, instr->addr + 8, 8, &retlen, (char *)&uci) < 0) 436 + if (inftl_write_oob(mtd, instr->addr + 8, 8, &retlen, (char *)&uci) < 0) 442 437 goto fail; 443 438 return 0; 444 439 fail: ··· 616 611 break; 617 612 } 618 613 619 - if (mtd->read_oob(mtd, block * s->EraseSize + 8, 620 - 8, &retlen, (char *)&h0) < 0 || 621 - mtd->read_oob(mtd, block * s->EraseSize + 622 - 2 * SECTORSIZE + 8, 8, &retlen, 623 - (char *)&h1) < 0) { 614 + if (inftl_read_oob(mtd, block * s->EraseSize + 8, 615 + 8, &retlen, (char *)&h0) < 0 || 616 + inftl_read_oob(mtd, 
block * s->EraseSize + 617 + 2 * SECTORSIZE + 8, 8, &retlen, 618 + (char *)&h1) < 0) { 624 619 /* Should never happen? */ 625 620 do_format_chain++; 626 621 break;
+39 -20
drivers/mtd/mtdchar.c
··· 408 408 case MEMWRITEOOB: 409 409 { 410 410 struct mtd_oob_buf buf; 411 - void *databuf; 412 - ssize_t retlen; 411 + struct mtd_oob_ops ops; 413 412 414 413 if(!(file->f_mode & 2)) 415 414 return -EPERM; ··· 416 417 if (copy_from_user(&buf, argp, sizeof(struct mtd_oob_buf))) 417 418 return -EFAULT; 418 419 419 - if (buf.length > 0x4096) 420 + if (buf.length > 4096) 420 421 return -EINVAL; 421 422 422 423 if (!mtd->write_oob) ··· 428 429 if (ret) 429 430 return ret; 430 431 431 - databuf = kmalloc(buf.length, GFP_KERNEL); 432 - if (!databuf) 432 + ops.len = buf.length; 433 + ops.ooblen = mtd->oobsize; 434 + ops.ooboffs = buf.start & (mtd->oobsize - 1); 435 + ops.datbuf = NULL; 436 + ops.mode = MTD_OOB_PLACE; 437 + 438 + if (ops.ooboffs && ops.len > (ops.ooblen - ops.ooboffs)) 439 + return -EINVAL; 440 + 441 + ops.oobbuf = kmalloc(buf.length, GFP_KERNEL); 442 + if (!ops.oobbuf) 433 443 return -ENOMEM; 434 444 435 - if (copy_from_user(databuf, buf.ptr, buf.length)) { 436 - kfree(databuf); 445 + if (copy_from_user(ops.oobbuf, buf.ptr, buf.length)) { 446 + kfree(ops.oobbuf); 437 447 return -EFAULT; 438 448 } 439 449 440 - ret = (mtd->write_oob)(mtd, buf.start, buf.length, &retlen, databuf); 450 + buf.start &= ~(mtd->oobsize - 1); 451 + ret = mtd->write_oob(mtd, buf.start, &ops); 441 452 442 - if (copy_to_user(argp + sizeof(uint32_t), &retlen, sizeof(uint32_t))) 453 + if (copy_to_user(argp + sizeof(uint32_t), &ops.retlen, 454 + sizeof(uint32_t))) 443 455 ret = -EFAULT; 444 456 445 - kfree(databuf); 457 + kfree(ops.oobbuf); 446 458 break; 447 459 448 460 } ··· 461 451 case MEMREADOOB: 462 452 { 463 453 struct mtd_oob_buf buf; 464 - void *databuf; 465 - ssize_t retlen; 454 + struct mtd_oob_ops ops; 466 455 467 456 if (copy_from_user(&buf, argp, sizeof(struct mtd_oob_buf))) 468 457 return -EFAULT; 469 458 470 - if (buf.length > 0x4096) 459 + if (buf.length > 4096) 471 460 return -EINVAL; 472 461 473 462 if (!mtd->read_oob) ··· 474 465 else 475 466 ret = 
access_ok(VERIFY_WRITE, buf.ptr, 476 467 buf.length) ? 0 : -EFAULT; 477 - 478 468 if (ret) 479 469 return ret; 480 470 481 - databuf = kmalloc(buf.length, GFP_KERNEL); 482 - if (!databuf) 471 + ops.len = buf.length; 472 + ops.ooblen = mtd->oobsize; 473 + ops.ooboffs = buf.start & (mtd->oobsize - 1); 474 + ops.datbuf = NULL; 475 + ops.mode = MTD_OOB_PLACE; 476 + 477 + if (ops.ooboffs && ops.len > (ops.ooblen - ops.ooboffs)) 478 + return -EINVAL; 479 + 480 + ops.oobbuf = kmalloc(buf.length, GFP_KERNEL); 481 + if (!ops.oobbuf) 483 482 return -ENOMEM; 484 483 485 - ret = (mtd->read_oob)(mtd, buf.start, buf.length, &retlen, databuf); 484 + buf.start &= ~(mtd->oobsize - 1); 485 + ret = mtd->read_oob(mtd, buf.start, &ops); 486 486 487 - if (put_user(retlen, (uint32_t __user *)argp)) 487 + if (put_user(ops.retlen, (uint32_t __user *)argp)) 488 488 ret = -EFAULT; 489 - else if (retlen && copy_to_user(buf.ptr, databuf, retlen)) 489 + else if (ops.retlen && copy_to_user(buf.ptr, ops.oobbuf, 490 + ops.retlen)) 490 491 ret = -EFAULT; 491 492 492 - kfree(databuf); 493 + kfree(ops.oobbuf); 493 494 break; 494 495 } 495 496
+37 -53
drivers/mtd/mtdconcat.c
··· 231 231 } 232 232 233 233 static int 234 - concat_read_oob(struct mtd_info *mtd, loff_t from, size_t len, 235 - size_t * retlen, u_char * buf) 234 + concat_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops) 236 235 { 237 236 struct mtd_concat *concat = CONCAT(mtd); 238 - int err = -EINVAL; 239 - int i; 237 + struct mtd_oob_ops devops = *ops; 238 + int i, err; 240 239 241 - *retlen = 0; 240 + ops->retlen = 0; 242 241 243 242 for (i = 0; i < concat->num_subdev; i++) { 244 243 struct mtd_info *subdev = concat->subdev[i]; 245 - size_t size, retsize; 246 244 247 245 if (from >= subdev->size) { 248 - /* Not destined for this subdev */ 249 - size = 0; 250 246 from -= subdev->size; 251 247 continue; 252 248 } 253 - if (from + len > subdev->size) 254 - /* First part goes into this subdev */ 255 - size = subdev->size - from; 256 - else 257 - /* Entire transaction goes into this subdev */ 258 - size = len; 259 249 260 - if (subdev->read_oob) 261 - err = subdev->read_oob(subdev, from, size, 262 - &retsize, buf); 263 - else 264 - err = -EINVAL; 250 + /* partial read ? 
*/ 251 + if (from + devops.len > subdev->size) 252 + devops.len = subdev->size - from; 265 253 254 + err = subdev->read_oob(subdev, from, &devops); 255 + ops->retlen += devops.retlen; 266 256 if (err) 267 - break; 257 + return err; 268 258 269 - *retlen += retsize; 270 - len -= size; 271 - if (len == 0) 272 - break; 259 + devops.len = ops->len - ops->retlen; 260 + if (!devops.len) 261 + return 0; 273 262 274 - err = -EINVAL; 275 - buf += size; 263 + if (devops.datbuf) 264 + devops.datbuf += devops.retlen; 265 + if (devops.oobbuf) 266 + devops.oobbuf += devops.ooblen; 267 + 276 268 from = 0; 277 269 } 278 - return err; 270 + return -EINVAL; 279 271 } 280 272 281 273 static int 282 - concat_write_oob(struct mtd_info *mtd, loff_t to, size_t len, 283 - size_t * retlen, const u_char * buf) 274 + concat_write_oob(struct mtd_info *mtd, loff_t to, struct mtd_oob_ops *ops) 284 275 { 285 276 struct mtd_concat *concat = CONCAT(mtd); 286 - int err = -EINVAL; 287 - int i; 277 + struct mtd_oob_ops devops = *ops; 278 + int i, err; 288 279 289 280 if (!(mtd->flags & MTD_WRITEABLE)) 290 281 return -EROFS; 291 282 292 - *retlen = 0; 283 + ops->retlen = 0; 293 284 294 285 for (i = 0; i < concat->num_subdev; i++) { 295 286 struct mtd_info *subdev = concat->subdev[i]; 296 - size_t size, retsize; 297 287 298 288 if (to >= subdev->size) { 299 - size = 0; 300 289 to -= subdev->size; 301 290 continue; 302 291 } 303 - if (to + len > subdev->size) 304 - size = subdev->size - to; 305 - else 306 - size = len; 307 292 308 - if (!(subdev->flags & MTD_WRITEABLE)) 309 - err = -EROFS; 310 - else if (subdev->write_oob) 311 - err = subdev->write_oob(subdev, to, size, &retsize, 312 - buf); 313 - else 314 - err = -EINVAL; 293 + /* partial write ? 
*/ 294 + if (to + devops.len > subdev->size) 295 + devops.len = subdev->size - to; 315 296 297 + err = subdev->write_oob(subdev, to, &devops); 298 + ops->retlen += devops.retlen; 316 299 if (err) 317 - break; 300 + return err; 318 301 319 - *retlen += retsize; 320 - len -= size; 321 - if (len == 0) 322 - break; 302 + devops.len = ops->len - ops->retlen; 303 + if (!devops.len) 304 + return 0; 323 305 324 - err = -EINVAL; 325 - buf += size; 306 + if (devops.datbuf) 307 + devops.datbuf += devops.retlen; 308 + if (devops.oobbuf) 309 + devops.oobbuf += devops.ooblen; 326 310 to = 0; 327 311 } 328 - return err; 312 + return -EINVAL; 329 313 } 330 314 331 315 static void concat_erase_callback(struct erase_info *instr)
+15 -14
drivers/mtd/mtdpart.c
··· 78 78 part->master->unpoint (part->master, addr, from + part->offset, len); 79 79 } 80 80 81 - static int part_read_oob (struct mtd_info *mtd, loff_t from, size_t len, 82 - size_t *retlen, u_char *buf) 81 + static int part_read_oob(struct mtd_info *mtd, loff_t from, 82 + struct mtd_oob_ops *ops) 83 83 { 84 84 struct mtd_part *part = PART(mtd); 85 + 85 86 if (from >= mtd->size) 86 - len = 0; 87 - else if (from + len > mtd->size) 88 - len = mtd->size - from; 89 - return part->master->read_oob (part->master, from + part->offset, 90 - len, retlen, buf); 87 + return -EINVAL; 88 + if (from + ops->len > mtd->size) 89 + return -EINVAL; 90 + return part->master->read_oob(part->master, from + part->offset, ops); 91 91 } 92 92 93 93 static int part_read_user_prot_reg (struct mtd_info *mtd, loff_t from, size_t len, ··· 134 134 len, retlen, buf); 135 135 } 136 136 137 - static int part_write_oob (struct mtd_info *mtd, loff_t to, size_t len, 138 - size_t *retlen, const u_char *buf) 137 + static int part_write_oob(struct mtd_info *mtd, loff_t to, 138 + struct mtd_oob_ops *ops) 139 139 { 140 140 struct mtd_part *part = PART(mtd); 141 + 141 142 if (!(mtd->flags & MTD_WRITEABLE)) 142 143 return -EROFS; 144 + 143 145 if (to >= mtd->size) 144 - len = 0; 145 - else if (to + len > mtd->size) 146 - len = mtd->size - to; 147 - return part->master->write_oob (part->master, to + part->offset, 148 - len, retlen, buf); 146 + return -EINVAL; 147 + if (to + ops->len > mtd->size) 148 + return -EINVAL; 149 + return part->master->write_oob(part->master, to + part->offset, ops); 149 150 } 150 151 151 152 static int part_write_user_prot_reg (struct mtd_info *mtd, loff_t from, size_t len,
+337 -239
drivers/mtd/nand/nand_base.c
··· 81 81 .length = 38}} 82 82 }; 83 83 84 - /* This is used for padding purposes in nand_write_oob */ 85 - static uint8_t ffchars[] = { 86 - 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 87 - 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 88 - 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 89 - 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 90 - 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 91 - 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 92 - 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 93 - 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 94 - }; 95 - 96 - static int nand_write_oob(struct mtd_info *mtd, loff_t to, size_t len, 97 - size_t *retlen, const uint8_t *buf); 98 84 static int nand_get_device(struct nand_chip *chip, struct mtd_info *mtd, 99 85 int new_state); 86 + 87 + static int nand_do_write_oob(struct mtd_info *mtd, loff_t to, 88 + struct mtd_oob_ops *ops); 100 89 101 90 /* 102 91 * For devices which display every fart in the system on a seperate LED. Is ··· 347 358 { 348 359 struct nand_chip *chip = mtd->priv; 349 360 uint8_t buf[2] = { 0, 0 }; 350 - size_t retlen; 351 361 int block; 352 362 353 363 /* Get block number */ ··· 359 371 return nand_update_bbt(mtd, ofs); 360 372 361 373 /* We write two bytes, so we dont have to mess with 16 bit access */ 362 - ofs += mtd->oobsize + (chip->badblockpos & ~0x01); 363 - return nand_write_oob(mtd, ofs, 2, &retlen, buf); 374 + ofs += mtd->oobsize; 375 + chip->ops.len = 2; 376 + chip->ops.datbuf = NULL; 377 + chip->ops.oobbuf = buf; 378 + chip->ops.ooboffs = chip->badblockpos & ~0x01; 379 + 380 + return nand_do_write_oob(mtd, ofs, &chip->ops); 364 381 } 365 382 366 383 /** ··· 733 740 } 734 741 735 742 /** 743 + * nand_read_page_raw - [Intern] read raw page data without ecc 744 + * @mtd: mtd info structure 745 + * @chip: nand chip info structure 746 + * @buf: buffer to store read data 747 + */ 748 + static int nand_read_page_raw(struct mtd_info *mtd, struct nand_chip *chip, 749 + uint8_t *buf) 750 + 
{ 751 + chip->read_buf(mtd, buf, mtd->writesize); 752 + chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 753 + return 0; 754 + } 755 + 756 + /** 736 757 * nand_read_page_swecc - {REPLACABLE] software ecc based page read function 737 758 * @mtd: mtd info structure 738 759 * @chip: nand chip info structure ··· 763 756 uint8_t *ecc_code = chip->buffers.ecccode; 764 757 int *eccpos = chip->ecc.layout->eccpos; 765 758 766 - chip->read_buf(mtd, buf, mtd->writesize); 767 - chip->read_buf(mtd, chip->oob_poi, mtd->oobsize); 768 - 769 - if (chip->ecc.mode == NAND_ECC_NONE) 770 - return 0; 759 + nand_read_page_raw(mtd, chip, buf); 771 760 772 761 for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) 773 762 chip->ecc.calculate(mtd, p, &ecc_calc[i]); ··· 885 882 } 886 883 887 884 /** 888 - * nand_do_read - [Internal] Read data with ECC 885 + * nand_transfer_oob - [Internal] Transfer oob to client buffer 886 + * @chip: nand chip structure 887 + * @ops: oob ops structure 888 + */ 889 + static uint8_t *nand_transfer_oob(struct nand_chip *chip, uint8_t *oob, 890 + struct mtd_oob_ops *ops) 891 + { 892 + size_t len = ops->ooblen; 893 + 894 + switch(ops->mode) { 895 + 896 + case MTD_OOB_PLACE: 897 + case MTD_OOB_RAW: 898 + memcpy(oob, chip->oob_poi + ops->ooboffs, len); 899 + return oob + len; 900 + 901 + case MTD_OOB_AUTO: { 902 + struct nand_oobfree *free = chip->ecc.layout->oobfree; 903 + size_t bytes; 904 + 905 + for(; free->length && len; free++, len -= bytes) { 906 + bytes = min(len, free->length); 907 + 908 + memcpy(oob, chip->oob_poi + free->offset, bytes); 909 + oob += bytes; 910 + } 911 + return oob; 912 + } 913 + default: 914 + BUG(); 915 + } 916 + return NULL; 917 + } 918 + 919 + /** 920 + * nand_do_read_ops - [Internal] Read data with ECC 889 921 * 890 922 * @mtd: MTD device structure 891 923 * @from: offset to read from 892 - * @len: number of bytes to read 893 - * @retlen: pointer to variable to store the number of read bytes 894 - * @buf: the databuffer to 
put data 895 924 * 896 925 * Internal function. Called with chip held. 897 926 */ 898 - int nand_do_read(struct mtd_info *mtd, loff_t from, size_t len, 899 - size_t *retlen, uint8_t *buf) 927 + static int nand_do_read_ops(struct mtd_info *mtd, loff_t from, 928 + struct mtd_oob_ops *ops) 900 929 { 901 930 int chipnr, page, realpage, col, bytes, aligned; 902 931 struct nand_chip *chip = mtd->priv; ··· 936 901 int blkcheck = (1 << (chip->phys_erase_shift - chip->page_shift)) - 1; 937 902 int sndcmd = 1; 938 903 int ret = 0; 939 - uint32_t readlen = len; 940 - uint8_t *bufpoi; 904 + uint32_t readlen = ops->len; 905 + uint8_t *bufpoi, *oob, *buf; 941 906 942 907 stats = mtd->ecc_stats; 943 908 ··· 950 915 col = (int)(from & (mtd->writesize - 1)); 951 916 chip->oob_poi = chip->buffers.oobrbuf; 952 917 918 + buf = ops->datbuf; 919 + oob = ops->oobbuf; 920 + 953 921 while(1) { 954 922 bytes = min(mtd->writesize - col, readlen); 955 923 aligned = (bytes == mtd->writesize); 956 924 957 925 /* Is the current page in the buffer ? */ 958 - if (realpage != chip->pagebuf) { 926 + if (realpage != chip->pagebuf || oob) { 959 927 bufpoi = aligned ? buf : chip->buffers.databuf; 960 928 961 929 if (likely(sndcmd)) { ··· 977 939 memcpy(buf, chip->buffers.databuf + col, bytes); 978 940 } 979 941 942 + buf += bytes; 943 + 944 + if (unlikely(oob)) { 945 + /* Raw mode does data:oob:data:oob */ 946 + if (ops->mode != MTD_OOB_RAW) 947 + oob = nand_transfer_oob(chip, oob, ops); 948 + else 949 + buf = nand_transfer_oob(chip, buf, ops); 950 + } 951 + 980 952 if (!(chip->options & NAND_NO_READRDY)) { 981 953 /* 982 954 * Apply delay or wait for ready/busy pin. 
Do ··· 1000 952 else 1001 953 nand_wait_ready(mtd); 1002 954 } 1003 - } else 955 + } else { 1004 956 memcpy(buf, chip->buffers.databuf + col, bytes); 957 + buf += bytes; 958 + } 1005 959 1006 - buf += bytes; 1007 960 readlen -= bytes; 1008 961 1009 962 if (!readlen) ··· 1030 981 sndcmd = 1; 1031 982 } 1032 983 1033 - *retlen = len - (size_t) readlen; 984 + ops->retlen = ops->len - (size_t) readlen; 1034 985 1035 986 if (ret) 1036 987 return ret; ··· 1051 1002 static int nand_read(struct mtd_info *mtd, loff_t from, size_t len, 1052 1003 size_t *retlen, uint8_t *buf) 1053 1004 { 1005 + struct nand_chip *chip = mtd->priv; 1054 1006 int ret; 1055 1007 1056 - *retlen = 0; 1057 1008 /* Do not allow reads past end of device */ 1058 1009 if ((from + len) > mtd->size) 1059 1010 return -EINVAL; 1060 1011 if (!len) 1061 1012 return 0; 1062 1013 1063 - nand_get_device(mtd->priv, mtd, FL_READING); 1014 + nand_get_device(chip, mtd, FL_READING); 1064 1015 1065 - ret = nand_do_read(mtd, from, len, retlen, buf); 1016 + chip->ops.len = len; 1017 + chip->ops.datbuf = buf; 1018 + chip->ops.oobbuf = NULL; 1019 + 1020 + ret = nand_do_read_ops(mtd, from, &chip->ops); 1066 1021 1067 1022 nand_release_device(mtd); 1068 1023 1024 + *retlen = chip->ops.retlen; 1069 1025 return ret; 1070 1026 } 1071 1027 1072 1028 /** 1073 - * nand_read_oob - [MTD Interface] NAND read out-of-band 1029 + * nand_do_read_oob - [Intern] NAND read out-of-band 1074 1030 * @mtd: MTD device structure 1075 1031 * @from: offset to read from 1076 - * @len: number of bytes to read 1077 - * @retlen: pointer to variable to store the number of read bytes 1078 - * @buf: the databuffer to put data 1032 + * @ops: oob operations description structure 1079 1033 * 1080 1034 * NAND read out-of-band data from the spare area 1081 1035 */ 1082 - static int nand_read_oob(struct mtd_info *mtd, loff_t from, size_t len, 1083 - size_t *retlen, uint8_t *buf) 1036 + static int nand_do_read_oob(struct mtd_info *mtd, loff_t from, 1037 + 
struct mtd_oob_ops *ops) 1084 1038 { 1085 1039 int col, page, realpage, chipnr, sndcmd = 1; 1086 1040 struct nand_chip *chip = mtd->priv; 1087 1041 int blkcheck = (1 << (chip->phys_erase_shift - chip->page_shift)) - 1; 1088 - int readlen = len; 1042 + int direct, bytes, readlen = ops->len; 1043 + uint8_t *bufpoi, *buf = ops->oobbuf; 1089 1044 1090 1045 DEBUG(MTD_DEBUG_LEVEL3, "nand_read_oob: from = 0x%08x, len = %i\n", 1091 1046 (unsigned int)from, (int)len); 1092 - 1093 - /* Initialize return length value */ 1094 - *retlen = 0; 1095 - 1096 - /* Do not allow reads past end of device */ 1097 - if ((from + len) > mtd->size) { 1098 - DEBUG(MTD_DEBUG_LEVEL0, "nand_read_oob: " 1099 - "Attempt read beyond end of device\n"); 1100 - return -EINVAL; 1101 - } 1102 - 1103 - nand_get_device(chip, mtd, FL_READING); 1104 1047 1105 1048 chipnr = (int)(from >> chip->chip_shift); 1106 1049 chip->select_chip(mtd, chipnr); ··· 1101 1060 realpage = (int)(from >> chip->page_shift); 1102 1061 page = realpage & chip->pagemask; 1103 1062 1104 - /* Mask to get column */ 1105 - col = from & (mtd->oobsize - 1); 1063 + if (ops->mode != MTD_OOB_AUTO) { 1064 + col = ops->ooboffs; 1065 + direct = 1; 1066 + } else { 1067 + col = 0; 1068 + direct = 0; 1069 + } 1106 1070 1107 1071 while(1) { 1108 - int bytes = min((int)(mtd->oobsize - col), readlen); 1072 + bytes = direct ? ops->ooblen : mtd->oobsize; 1073 + bufpoi = direct ? 
buf : chip->buffers.oobrbuf; 1109 1074 1110 1075 if (likely(sndcmd)) { 1111 1076 chip->cmdfunc(mtd, NAND_CMD_READOOB, col, page); 1112 1077 sndcmd = 0; 1113 1078 } 1114 1079 1115 - chip->read_buf(mtd, buf, bytes); 1080 + chip->read_buf(mtd, bufpoi, bytes); 1116 1081 1117 - readlen -= bytes; 1082 + if (unlikely(!direct)) 1083 + buf = nand_transfer_oob(chip, buf, ops); 1084 + else 1085 + buf += ops->ooblen; 1086 + 1087 + readlen -= ops->ooblen; 1118 1088 if (!readlen) 1119 1089 break; 1120 1090 ··· 1141 1089 else 1142 1090 nand_wait_ready(mtd); 1143 1091 } 1144 - 1145 - buf += bytes; 1146 - bytes = mtd->oobsize; 1147 - col = 0; 1148 1092 1149 1093 /* Increment page address */ 1150 1094 realpage++; ··· 1160 1112 sndcmd = 1; 1161 1113 } 1162 1114 1163 - /* Deselect and wake up anyone waiting on the device */ 1164 - nand_release_device(mtd); 1165 - 1166 - *retlen = len; 1115 + ops->retlen = ops->len; 1167 1116 return 0; 1168 1117 } 1169 1118 1170 1119 /** 1171 - * nand_read_raw - [GENERIC] Read raw data including oob into buffer 1120 + * nand_read_oob - [MTD Interface] NAND read data and/or out-of-band 1172 1121 * @mtd: MTD device structure 1173 - * @buf: temporary buffer 1174 1122 * @from: offset to read from 1175 - * @len: number of bytes to read 1176 - * @ooblen: number of oob data bytes to read 1123 + * @ops: oob operation description structure 1177 1124 * 1178 - * Read raw data including oob into buffer 1125 + * NAND read data and/or out-of-band data 1179 1126 */ 1180 - int nand_read_raw(struct mtd_info *mtd, uint8_t *buf, loff_t from, size_t len, 1181 - size_t ooblen) 1127 + static int nand_read_oob(struct mtd_info *mtd, loff_t from, 1128 + struct mtd_oob_ops *ops) 1182 1129 { 1130 + int (*read_page)(struct mtd_info *mtd, struct nand_chip *chip, 1131 + uint8_t *buf) = NULL; 1183 1132 struct nand_chip *chip = mtd->priv; 1184 - int page = (int)(from >> chip->page_shift); 1185 - int chipnr = (int)(from >> chip->chip_shift); 1186 - int sndcmd = 1; 1187 - int cnt = 0; 
1188 - int pagesize = mtd->writesize + mtd->oobsize; 1189 - int blockcheck; 1133 + int ret = -ENOTSUPP; 1134 + 1135 + ops->retlen = 0; 1190 1136 1191 1137 /* Do not allow reads past end of device */ 1192 - if ((from + len) > mtd->size) { 1193 - DEBUG(MTD_DEBUG_LEVEL0, "nand_read_raw: " 1138 + if ((from + ops->len) > mtd->size) { 1139 + DEBUG(MTD_DEBUG_LEVEL0, "nand_read_oob: " 1194 1140 "Attempt read beyond end of device\n"); 1195 1141 return -EINVAL; 1196 1142 } 1197 1143 1198 - /* Grab the lock and see if the device is available */ 1199 1144 nand_get_device(chip, mtd, FL_READING); 1200 1145 1201 - chip->select_chip(mtd, chipnr); 1146 + switch(ops->mode) { 1147 + case MTD_OOB_PLACE: 1148 + case MTD_OOB_AUTO: 1149 + break; 1202 1150 1203 - /* Add requested oob length */ 1204 - len += ooblen; 1205 - blockcheck = (1 << (chip->phys_erase_shift - chip->page_shift)) - 1; 1151 + case MTD_OOB_RAW: 1152 + /* Replace the read_page algorithm temporary */ 1153 + read_page = chip->ecc.read_page; 1154 + chip->ecc.read_page = nand_read_page_raw; 1155 + break; 1206 1156 1207 - while (len) { 1208 - if (likely(sndcmd)) { 1209 - chip->cmdfunc(mtd, NAND_CMD_READ0, 0, 1210 - page & chip->pagemask); 1211 - sndcmd = 0; 1212 - } 1213 - 1214 - chip->read_buf(mtd, &buf[cnt], pagesize); 1215 - 1216 - len -= pagesize; 1217 - cnt += pagesize; 1218 - page++; 1219 - 1220 - if (!(chip->options & NAND_NO_READRDY)) { 1221 - if (!chip->dev_ready) 1222 - udelay(chip->chip_delay); 1223 - else 1224 - nand_wait_ready(mtd); 1225 - } 1226 - 1227 - /* 1228 - * Check, if the chip supports auto page increment or if we 1229 - * cross a block boundary. 
1230 - */ 1231 - if (!NAND_CANAUTOINCR(chip) || !(page & blockcheck)) 1232 - sndcmd = 1; 1157 + default: 1158 + goto out; 1233 1159 } 1234 1160 1235 - /* Deselect and wake up anyone waiting on the device */ 1161 + if (!ops->datbuf) 1162 + ret = nand_do_read_oob(mtd, from, ops); 1163 + else 1164 + ret = nand_do_read_ops(mtd, from, ops); 1165 + 1166 + if (unlikely(ops->mode == MTD_OOB_RAW)) 1167 + chip->ecc.read_page = read_page; 1168 + out: 1236 1169 nand_release_device(mtd); 1237 - return 0; 1170 + return ret; 1171 + } 1172 + 1173 + 1174 + /** 1175 + * nand_write_page_raw - [Intern] raw page write function 1176 + * @mtd: mtd info structure 1177 + * @chip: nand chip info structure 1178 + * @buf: data buffer 1179 + */ 1180 + static void nand_write_page_raw(struct mtd_info *mtd, struct nand_chip *chip, 1181 + const uint8_t *buf) 1182 + { 1183 + chip->write_buf(mtd, buf, mtd->writesize); 1184 + chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 1238 1185 } 1239 1186 1240 1187 /** ··· 1248 1205 const uint8_t *p = buf; 1249 1206 int *eccpos = chip->ecc.layout->eccpos; 1250 1207 1251 - if (chip->ecc.mode != NAND_ECC_NONE) { 1252 - /* Software ecc calculation */ 1253 - for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) 1254 - chip->ecc.calculate(mtd, p, &ecc_calc[i]); 1208 + /* Software ecc calculation */ 1209 + for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) 1210 + chip->ecc.calculate(mtd, p, &ecc_calc[i]); 1255 1211 1256 - for (i = 0; i < chip->ecc.total; i++) 1257 - chip->oob_poi[eccpos[i]] = ecc_calc[i]; 1258 - } 1212 + for (i = 0; i < chip->ecc.total; i++) 1213 + chip->oob_poi[eccpos[i]] = ecc_calc[i]; 1259 1214 1260 - chip->write_buf(mtd, buf, mtd->writesize); 1261 - chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 1215 + nand_write_page_raw(mtd, chip, buf); 1262 1216 } 1263 1217 1264 1218 /** ··· 1382 1342 return 0; 1383 1343 } 1384 1344 1345 + /** 1346 + * nand_fill_oob - [Internal] Transfer client buffer to oob 1347 + * @chip: nand 
chip structure 1348 + * @oob: oob data buffer 1349 + * @ops: oob ops structure 1350 + */ 1351 + static uint8_t *nand_fill_oob(struct nand_chip *chip, uint8_t *oob, 1352 + struct mtd_oob_ops *ops) 1353 + { 1354 + size_t len = ops->ooblen; 1355 + 1356 + switch(ops->mode) { 1357 + 1358 + case MTD_OOB_PLACE: 1359 + case MTD_OOB_RAW: 1360 + memcpy(chip->oob_poi + ops->ooboffs, oob, len); 1361 + return oob + len; 1362 + 1363 + case MTD_OOB_AUTO: { 1364 + struct nand_oobfree *free = chip->ecc.layout->oobfree; 1365 + size_t bytes; 1366 + 1367 + for(; free->length && len; free++, len -= bytes) { 1368 + bytes = min(len, free->length); 1369 + memcpy(chip->oob_poi + free->offset, oob, bytes); 1370 + oob += bytes; 1371 + } 1372 + return oob; 1373 + } 1374 + default: 1375 + BUG(); 1376 + } 1377 + return NULL; 1378 + } 1379 + 1385 1380 #define NOTALIGNED(x) (x & (mtd->writesize-1)) != 0 1386 1381 1387 1382 /** 1388 - * nand_write - [MTD Interface] NAND write with ECC 1383 + * nand_do_write_ops - [Internal] NAND write with ECC 1389 1384 * @mtd: MTD device structure 1390 1385 * @to: offset to write to 1391 - * @len: number of bytes to write 1392 - * @retlen: pointer to variable to store the number of written bytes 1393 - * @buf: the data to write 1386 + * @ops: oob operations description structure 1394 1387 * 1395 1388 * NAND write with ECC 1396 1389 */ 1397 - static int nand_write(struct mtd_info *mtd, loff_t to, size_t len, 1398 - size_t *retlen, const uint8_t *buf) 1390 + static int nand_do_write_ops(struct mtd_info *mtd, loff_t to, 1391 + struct mtd_oob_ops *ops) 1399 1392 { 1400 1393 int chipnr, realpage, page, blockmask; 1401 1394 struct nand_chip *chip = mtd->priv; 1402 - uint32_t writelen = len; 1395 + uint32_t writelen = ops->len; 1396 + uint8_t *oob = ops->oobbuf; 1397 + uint8_t *buf = ops->datbuf; 1403 1398 int bytes = mtd->writesize; 1404 - int ret = -EIO; 1399 + int ret; 1405 1400 1406 - *retlen = 0; 1407 - 1408 - /* Do not allow write past end of device */ 1409 - if 
((to + len) > mtd->size) { 1410 - DEBUG(MTD_DEBUG_LEVEL0, "nand_write: " 1411 - "Attempt to write past end of page\n"); 1412 - return -EINVAL; 1413 - } 1401 + ops->retlen = 0; 1414 1402 1415 1403 /* reject writes, which are not page aligned */ 1416 - if (NOTALIGNED(to) || NOTALIGNED(len)) { 1404 + if (NOTALIGNED(to) || NOTALIGNED(ops->len)) { 1417 1405 printk(KERN_NOTICE "nand_write: " 1418 1406 "Attempt to write not page aligned data\n"); 1419 1407 return -EINVAL; 1420 1408 } 1421 1409 1422 - if (!len) 1410 + if (!writelen) 1423 1411 return 0; 1424 - 1425 - nand_get_device(chip, mtd, FL_WRITING); 1426 1412 1427 1413 /* Check, if it is write protected */ 1428 1414 if (nand_check_wp(mtd)) 1429 - goto out; 1415 + return -EIO; 1430 1416 1431 1417 chipnr = (int)(to >> chip->chip_shift); 1432 1418 chip->select_chip(mtd, chipnr); ··· 1463 1397 1464 1398 /* Invalidate the page cache, when we write to the cached page */ 1465 1399 if (to <= (chip->pagebuf << chip->page_shift) && 1466 - (chip->pagebuf << chip->page_shift) < (to + len)) 1400 + (chip->pagebuf << chip->page_shift) < (to + ops->len)) 1467 1401 chip->pagebuf = -1; 1468 1402 1469 1403 chip->oob_poi = chip->buffers.oobwbuf; 1470 1404 1471 1405 while(1) { 1472 1406 int cached = writelen > bytes && page != blockmask; 1407 + 1408 + if (unlikely(oob)) 1409 + oob = nand_fill_oob(chip, oob, ops); 1473 1410 1474 1411 ret = nand_write_page(mtd, chip, buf, page, cached); 1475 1412 if (ret) ··· 1493 1424 chip->select_chip(mtd, chipnr); 1494 1425 } 1495 1426 } 1496 - out: 1497 - *retlen = len - writelen; 1498 - nand_release_device(mtd); 1427 + 1428 + if (unlikely(oob)) 1429 + memset(chip->oob_poi, 0xff, mtd->oobsize); 1430 + 1431 + ops->retlen = ops->len - writelen; 1499 1432 return ret; 1500 1433 } 1501 1434 1502 1435 /** 1503 - * nand_write_raw - [GENERIC] Write raw data including oob 1504 - * @mtd: MTD device structure 1505 - * @buf: source buffer 1506 - * @to: offset to write to 1507 - * @len: number of bytes to write 
1508 - * @buf: source buffer 1509 - * @oob: oob buffer 1510 - * 1511 - * Write raw data including oob 1512 - */ 1513 - int nand_write_raw(struct mtd_info *mtd, loff_t to, size_t len, size_t *retlen, 1514 - const uint8_t *buf, uint8_t *oob) 1515 - { 1516 - struct nand_chip *chip = mtd->priv; 1517 - int page = (int)(to >> chip->page_shift); 1518 - int chipnr = (int)(to >> chip->chip_shift); 1519 - int ret; 1520 - 1521 - *retlen = 0; 1522 - 1523 - /* Do not allow writes past end of device */ 1524 - if ((to + len) > mtd->size) { 1525 - DEBUG(MTD_DEBUG_LEVEL0, "nand_read_raw: Attempt write " 1526 - "beyond end of device\n"); 1527 - return -EINVAL; 1528 - } 1529 - 1530 - /* Grab the lock and see if the device is available */ 1531 - nand_get_device(chip, mtd, FL_WRITING); 1532 - 1533 - chip->select_chip(mtd, chipnr); 1534 - chip->oob_poi = oob; 1535 - 1536 - while (len != *retlen) { 1537 - ret = nand_write_page(mtd, chip, buf, page, 0); 1538 - if (ret) 1539 - return ret; 1540 - page++; 1541 - *retlen += mtd->writesize; 1542 - buf += mtd->writesize; 1543 - chip->oob_poi += mtd->oobsize; 1544 - } 1545 - 1546 - /* Deselect and wake up anyone waiting on the device */ 1547 - nand_release_device(mtd); 1548 - return 0; 1549 - } 1550 - EXPORT_SYMBOL_GPL(nand_write_raw); 1551 - 1552 - /** 1553 - * nand_write_oob - [MTD Interface] NAND write out-of-band 1436 + * nand_write - [MTD Interface] NAND write with ECC 1554 1437 * @mtd: MTD device structure 1555 1438 * @to: offset to write to 1556 1439 * @len: number of bytes to write 1557 1440 * @retlen: pointer to variable to store the number of written bytes 1558 1441 * @buf: the data to write 1559 1442 * 1560 - * NAND write out-of-band 1443 + * NAND write with ECC 1561 1444 */ 1562 - static int nand_write_oob(struct mtd_info *mtd, loff_t to, size_t len, 1445 + static int nand_write(struct mtd_info *mtd, loff_t to, size_t len, 1563 1446 size_t *retlen, const uint8_t *buf) 1564 1447 { 1565 - int column, page, status, ret = -EIO, chipnr; 
1448 + struct nand_chip *chip = mtd->priv; 1449 + int ret; 1450 + 1451 + /* Do not allow reads past end of device */ 1452 + if ((to + len) > mtd->size) 1453 + return -EINVAL; 1454 + if (!len) 1455 + return 0; 1456 + 1457 + nand_get_device(chip, mtd, FL_READING); 1458 + 1459 + chip->ops.len = len; 1460 + chip->ops.datbuf = (uint8_t *)buf; 1461 + chip->ops.oobbuf = NULL; 1462 + 1463 + ret = nand_do_write_ops(mtd, to, &chip->ops); 1464 + 1465 + nand_release_device(mtd); 1466 + 1467 + *retlen = chip->ops.retlen; 1468 + return ret; 1469 + } 1470 + 1471 + /** 1472 + * nand_do_write_oob - [MTD Interface] NAND write out-of-band 1473 + * @mtd: MTD device structure 1474 + * @to: offset to write to 1475 + * @ops: oob operation description structure 1476 + * 1477 + * NAND write out-of-band 1478 + */ 1479 + static int nand_do_write_oob(struct mtd_info *mtd, loff_t to, 1480 + struct mtd_oob_ops *ops) 1481 + { 1482 + int chipnr, page, status; 1566 1483 struct nand_chip *chip = mtd->priv; 1567 1484 1568 1485 DEBUG(MTD_DEBUG_LEVEL3, "nand_write_oob: to = 0x%08x, len = %i\n", 1569 - (unsigned int)to, (int)len); 1570 - 1571 - /* Initialize return length value */ 1572 - *retlen = 0; 1486 + (unsigned int)to, (int)ops->len); 1573 1487 1574 1488 /* Do not allow write past end of page */ 1575 - column = to & (mtd->oobsize - 1); 1576 - if ((column + len) > mtd->oobsize) { 1489 + if ((ops->ooboffs + ops->len) > mtd->oobsize) { 1577 1490 DEBUG(MTD_DEBUG_LEVEL0, "nand_write_oob: " 1578 1491 "Attempt to write past end of page\n"); 1579 1492 return -EINVAL; 1580 1493 } 1581 - 1582 - nand_get_device(chip, mtd, FL_WRITING); 1583 1494 1584 1495 chipnr = (int)(to >> chip->chip_shift); 1585 1496 chip->select_chip(mtd, chipnr); ··· 1577 1528 1578 1529 /* Check, if it is write protected */ 1579 1530 if (nand_check_wp(mtd)) 1580 - goto out; 1531 + return -EROFS; 1581 1532 1582 1533 /* Invalidate the page cache, if we write to the cached page */ 1583 1534 if (page == chip->pagebuf) 1584 1535 
chip->pagebuf = -1; 1585 1536 1586 - if (NAND_MUST_PAD(chip)) { 1537 + if (ops->mode == MTD_OOB_AUTO || NAND_MUST_PAD(chip)) { 1538 + chip->oob_poi = chip->buffers.oobwbuf; 1539 + memset(chip->oob_poi, 0xff, mtd->oobsize); 1540 + nand_fill_oob(chip, ops->oobbuf, ops); 1587 1541 chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize, 1588 1542 page & chip->pagemask); 1589 - /* prepad 0xff for partial programming */ 1590 - chip->write_buf(mtd, ffchars, column); 1591 - /* write data */ 1592 - chip->write_buf(mtd, buf, len); 1593 - /* postpad 0xff for partial programming */ 1594 - chip->write_buf(mtd, ffchars, mtd->oobsize - (len + column)); 1543 + chip->write_buf(mtd, chip->oob_poi, mtd->oobsize); 1544 + memset(chip->oob_poi, 0xff, mtd->oobsize); 1595 1545 } else { 1596 - chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize + column, 1546 + chip->cmdfunc(mtd, NAND_CMD_SEQIN, 1547 + mtd->writesize + ops->ooboffs, 1597 1548 page & chip->pagemask); 1598 - chip->write_buf(mtd, buf, len); 1549 + chip->write_buf(mtd, ops->oobbuf, ops->len); 1599 1550 } 1551 + 1600 1552 /* Send command to program the OOB data */ 1601 1553 chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1); 1602 1554 ··· 1607 1557 if (status & NAND_STATUS_FAIL) { 1608 1558 DEBUG(MTD_DEBUG_LEVEL0, "nand_write_oob: " 1609 1559 "Failed write, page 0x%08x\n", page); 1610 - ret = -EIO; 1611 - goto out; 1560 + return -EIO; 1612 1561 } 1613 - *retlen = len; 1562 + ops->retlen = ops->len; 1614 1563 1615 1564 #ifdef CONFIG_MTD_NAND_VERIFY_WRITE 1616 - /* Send command to read back the data */ 1617 - chip->cmdfunc(mtd, NAND_CMD_READOOB, column, page & chip->pagemask); 1565 + if (ops->mode != MTD_OOB_AUTO) { 1566 + /* Send command to read back the data */ 1567 + chip->cmdfunc(mtd, NAND_CMD_READOOB, ops->ooboffs, 1568 + page & chip->pagemask); 1618 1569 1619 - if (chip->verify_buf(mtd, buf, len)) { 1620 - DEBUG(MTD_DEBUG_LEVEL0, "nand_write_oob: " 1621 - "Failed write verify, page 0x%08x\n", page); 1622 - ret = -EIO; 1623 - goto out; 
1570 + if (chip->verify_buf(mtd, ops->oobbuf, ops->len)) { 1571 + DEBUG(MTD_DEBUG_LEVEL0, "nand_write_oob: " 1572 + "Failed write verify, page 0x%08x\n", page); 1573 + return -EIO; 1574 + } 1624 1575 } 1625 1576 #endif 1626 - ret = 0; 1627 - out: 1628 - /* Deselect and wake up anyone waiting on the device */ 1629 - nand_release_device(mtd); 1577 + return 0; 1578 + } 1630 1579 1580 + /** 1581 + * nand_write_oob - [MTD Interface] NAND write data and/or out-of-band 1582 + * @mtd: MTD device structure 1583 + * @from: offset to read from 1584 + * @ops: oob operation description structure 1585 + */ 1586 + static int nand_write_oob(struct mtd_info *mtd, loff_t to, 1587 + struct mtd_oob_ops *ops) 1588 + { 1589 + void (*write_page)(struct mtd_info *mtd, struct nand_chip *chip, 1590 + const uint8_t *buf) = NULL; 1591 + struct nand_chip *chip = mtd->priv; 1592 + int ret = -ENOTSUPP; 1593 + 1594 + ops->retlen = 0; 1595 + 1596 + /* Do not allow writes past end of device */ 1597 + if ((to + ops->len) > mtd->size) { 1598 + DEBUG(MTD_DEBUG_LEVEL0, "nand_read_oob: " 1599 + "Attempt read beyond end of device\n"); 1600 + return -EINVAL; 1601 + } 1602 + 1603 + nand_get_device(chip, mtd, FL_READING); 1604 + 1605 + switch(ops->mode) { 1606 + case MTD_OOB_PLACE: 1607 + case MTD_OOB_AUTO: 1608 + break; 1609 + 1610 + case MTD_OOB_RAW: 1611 + /* Replace the write_page algorithm temporary */ 1612 + write_page = chip->ecc.write_page; 1613 + chip->ecc.write_page = nand_write_page_raw; 1614 + break; 1615 + 1616 + default: 1617 + goto out; 1618 + } 1619 + 1620 + if (!ops->datbuf) 1621 + ret = nand_do_write_oob(mtd, to, ops); 1622 + else 1623 + ret = nand_do_write_ops(mtd, to, ops); 1624 + 1625 + if (unlikely(ops->mode == MTD_OOB_RAW)) 1626 + chip->ecc.write_page = write_page; 1627 + out: 1628 + nand_release_device(mtd); 1631 1629 return ret; 1632 1630 } 1633 1631 ··· 2289 2191 case NAND_ECC_NONE: 2290 2192 printk(KERN_WARNING "NAND_ECC_NONE selected by board driver. 
" 2291 2193 "This is not recommended !!\n"); 2292 - chip->ecc.read_page = nand_read_page_swecc; 2293 - chip->ecc.write_page = nand_write_page_swecc; 2194 + chip->ecc.read_page = nand_read_page_raw; 2195 + chip->ecc.write_page = nand_write_page_raw; 2294 2196 chip->ecc.size = mtd->writesize; 2295 2197 chip->ecc.bytes = 0; 2296 2198 break;
+137 -47
drivers/mtd/nand/nand_bbt.c
··· 230 230 return 0; 231 231 } 232 232 233 + /* 234 + * Scan read raw data from flash 235 + */ 236 + static int scan_read_raw(struct mtd_info *mtd, uint8_t *buf, loff_t offs, 237 + size_t len) 238 + { 239 + struct mtd_oob_ops ops; 240 + 241 + ops.mode = MTD_OOB_RAW; 242 + ops.ooboffs = 0; 243 + ops.ooblen = mtd->oobsize; 244 + ops.oobbuf = buf; 245 + ops.datbuf = buf; 246 + ops.len = len; 247 + 248 + return mtd->read_oob(mtd, offs, &ops); 249 + } 250 + 251 + /* 252 + * Scan write data with oob to flash 253 + */ 254 + static int scan_write_bbt(struct mtd_info *mtd, loff_t offs, size_t len, 255 + uint8_t *buf, uint8_t *oob) 256 + { 257 + struct mtd_oob_ops ops; 258 + 259 + ops.mode = MTD_OOB_PLACE; 260 + ops.ooboffs = 0; 261 + ops.ooblen = mtd->oobsize; 262 + ops.datbuf = buf; 263 + ops.oobbuf = oob; 264 + ops.len = len; 265 + 266 + return mtd->write_oob(mtd, offs, &ops); 267 + } 268 + 233 269 /** 234 270 * read_abs_bbts - [GENERIC] Read the bad block table(s) for all chips starting at a given page 235 271 * @mtd: MTD device structure ··· 277 241 * We assume that the bbt bits are in consecutive order. 
278 242 * 279 243 */ 280 - static int read_abs_bbts(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *td, struct nand_bbt_descr *md) 244 + static int read_abs_bbts(struct mtd_info *mtd, uint8_t *buf, 245 + struct nand_bbt_descr *td, struct nand_bbt_descr *md) 281 246 { 282 247 struct nand_chip *this = mtd->priv; 283 248 284 249 /* Read the primary version, if available */ 285 250 if (td->options & NAND_BBT_VERSION) { 286 - nand_read_raw(mtd, buf, td->pages[0] << this->page_shift, mtd->writesize, mtd->oobsize); 251 + scan_read_raw(mtd, buf, td->pages[0] << this->page_shift, 252 + mtd->writesize); 287 253 td->version[0] = buf[mtd->writesize + td->veroffs]; 288 - printk(KERN_DEBUG "Bad block table at page %d, version 0x%02X\n", td->pages[0], td->version[0]); 254 + printk(KERN_DEBUG "Bad block table at page %d, version 0x%02X\n", 255 + td->pages[0], td->version[0]); 289 256 } 290 257 291 258 /* Read the mirror version, if available */ 292 259 if (md && (md->options & NAND_BBT_VERSION)) { 293 - nand_read_raw(mtd, buf, md->pages[0] << this->page_shift, mtd->writesize, mtd->oobsize); 260 + scan_read_raw(mtd, buf, md->pages[0] << this->page_shift, 261 + mtd->writesize); 294 262 md->version[0] = buf[mtd->writesize + md->veroffs]; 295 - printk(KERN_DEBUG "Bad block table at page %d, version 0x%02X\n", md->pages[0], md->version[0]); 263 + printk(KERN_DEBUG "Bad block table at page %d, version 0x%02X\n", 264 + md->pages[0], md->version[0]); 296 265 } 297 - 298 266 return 1; 267 + } 268 + 269 + /* 270 + * Scan a given block full 271 + */ 272 + static int scan_block_full(struct mtd_info *mtd, struct nand_bbt_descr *bd, 273 + loff_t offs, uint8_t *buf, size_t readlen, 274 + int scanlen, int len) 275 + { 276 + int ret, j; 277 + 278 + ret = scan_read_raw(mtd, buf, offs, readlen); 279 + if (ret) 280 + return ret; 281 + 282 + for (j = 0; j < len; j++, buf += scanlen) { 283 + if (check_pattern(buf, scanlen, mtd->writesize, bd)) 284 + return 1; 285 + } 286 + return 0; 287 + } 
288 + 289 + /* 290 + * Scan a given block partially 291 + */ 292 + static int scan_block_fast(struct mtd_info *mtd, struct nand_bbt_descr *bd, 293 + loff_t offs, uint8_t *buf, int len) 294 + { 295 + struct mtd_oob_ops ops; 296 + int j, ret; 297 + 298 + ops.len = mtd->oobsize; 299 + ops.ooblen = mtd->oobsize; 300 + ops.oobbuf = buf; 301 + ops.ooboffs = 0; 302 + ops.datbuf = NULL; 303 + ops.mode = MTD_OOB_PLACE; 304 + 305 + for (j = 0; j < len; j++) { 306 + /* 307 + * Read the full oob until read_oob is fixed to 308 + * handle single byte reads for 16 bit 309 + * buswidth 310 + */ 311 + ret = mtd->read_oob(mtd, offs, &ops); 312 + if (ret) 313 + return ret; 314 + 315 + if (check_short_pattern(buf, bd)) 316 + return 1; 317 + 318 + offs += mtd->writesize; 319 + } 320 + return 0; 299 321 } 300 322 301 323 /** ··· 367 273 * Create a bad block table by scanning the device 368 274 * for the given good/bad block identify pattern 369 275 */ 370 - static int create_bbt(struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *bd, int chip) 276 + static int create_bbt(struct mtd_info *mtd, uint8_t *buf, 277 + struct nand_bbt_descr *bd, int chip) 371 278 { 372 279 struct nand_chip *this = mtd->priv; 373 - int i, j, numblocks, len, scanlen; 280 + int i, numblocks, len, scanlen; 374 281 int startblock; 375 282 loff_t from; 376 - size_t readlen, ooblen; 283 + size_t readlen; 377 284 378 285 printk(KERN_INFO "Scanning device for bad blocks\n"); 379 286 ··· 389 294 390 295 if (!(bd->options & NAND_BBT_SCANEMPTY)) { 391 296 /* We need only read few bytes from the OOB area */ 392 - scanlen = ooblen = 0; 297 + scanlen = 0; 393 298 readlen = bd->len; 394 299 } else { 395 300 /* Full page content should be read */ 396 301 scanlen = mtd->writesize + mtd->oobsize; 397 302 readlen = len * mtd->writesize; 398 - ooblen = len * mtd->oobsize; 399 303 } 400 304 401 305 if (chip == -1) { 402 - /* Note that numblocks is 2 * (real numblocks) here, see i+=2 below as it 403 - * makes shifting and 
masking less painful */ 306 + /* Note that numblocks is 2 * (real numblocks) here, see i+=2 307 + * below as it makes shifting and masking less painful */ 404 308 numblocks = mtd->size >> (this->bbt_erase_shift - 1); 405 309 startblock = 0; 406 310 from = 0; ··· 418 324 for (i = startblock; i < numblocks;) { 419 325 int ret; 420 326 421 - if (bd->options & NAND_BBT_SCANEMPTY) 422 - if ((ret = nand_read_raw(mtd, buf, from, readlen, ooblen))) 423 - return ret; 327 + if (bd->options & NAND_BBT_SCANALLPAGES) 328 + ret = scan_block_full(mtd, bd, from, buf, readlen, 329 + scanlen, len); 330 + else 331 + ret = scan_block_fast(mtd, bd, from, buf, len); 424 332 425 - for (j = 0; j < len; j++) { 426 - if (!(bd->options & NAND_BBT_SCANEMPTY)) { 427 - size_t retlen; 333 + if (ret < 0) 334 + return ret; 428 335 429 - /* Read the full oob until read_oob is fixed to 430 - * handle single byte reads for 16 bit buswidth */ 431 - ret = mtd->read_oob(mtd, from + j * mtd->writesize, mtd->oobsize, &retlen, buf); 432 - if (ret) 433 - return ret; 434 - 435 - if (check_short_pattern(buf, bd)) { 436 - this->bbt[i >> 3] |= 0x03 << (i & 0x6); 437 - printk(KERN_WARNING "Bad eraseblock %d at 0x%08x\n", 438 - i >> 1, (unsigned int)from); 439 - break; 440 - } 441 - } else { 442 - if (check_pattern(&buf[j * scanlen], scanlen, mtd->writesize, bd)) { 443 - this->bbt[i >> 3] |= 0x03 << (i & 0x6); 444 - printk(KERN_WARNING "Bad eraseblock %d at 0x%08x\n", 445 - i >> 1, (unsigned int)from); 446 - break; 447 - } 448 - } 336 + if (ret) { 337 + this->bbt[i >> 3] |= 0x03 << (i & 0x6); 338 + printk(KERN_WARNING "Bad eraseblock %d at 0x%08x\n", 339 + i >> 1, (unsigned int)from); 449 340 } 341 + 450 342 i += 2; 451 343 from += (1 << this->bbt_erase_shift); 452 344 } ··· 463 383 int bits, startblock, block, dir; 464 384 int scanlen = mtd->writesize + mtd->oobsize; 465 385 int bbtblocks; 386 + int blocktopage = this->bbt_erase_shift - this->page_shift; 466 387 467 388 /* Search direction top -> down ? 
*/ 468 389 if (td->options & NAND_BBT_LASTBLOCK) { ··· 493 412 td->pages[i] = -1; 494 413 /* Scan the maximum number of blocks */ 495 414 for (block = 0; block < td->maxblocks; block++) { 415 + 496 416 int actblock = startblock + dir * block; 417 + loff_t offs = actblock << this->bbt_erase_shift; 418 + 497 419 /* Read first page */ 498 - nand_read_raw(mtd, buf, actblock << this->bbt_erase_shift, mtd->writesize, mtd->oobsize); 420 + scan_read_raw(mtd, buf, offs, mtd->writesize); 499 421 if (!check_pattern(buf, scanlen, mtd->writesize, td)) { 500 - td->pages[i] = actblock << (this->bbt_erase_shift - this->page_shift); 422 + td->pages[i] = actblock << blocktopage; 501 423 if (td->options & NAND_BBT_VERSION) { 502 424 td->version[i] = buf[mtd->writesize + td->veroffs]; 503 425 } ··· 565 481 int nrchips, bbtoffs, pageoffs, ooboffs; 566 482 uint8_t msk[4]; 567 483 uint8_t rcode = td->reserved_block_code; 568 - size_t retlen, len = 0, ooblen; 484 + size_t retlen, len = 0; 569 485 loff_t to; 486 + struct mtd_oob_ops ops; 487 + 488 + ops.ooblen = mtd->oobsize; 489 + ops.ooboffs = 0; 490 + ops.datbuf = NULL; 491 + ops.mode = MTD_OOB_PLACE; 570 492 571 493 if (!rcode) 572 494 rcode = 0xff; ··· 673 583 "bad block table\n"); 674 584 } 675 585 /* Read oob data */ 676 - ooblen = (len >> this->page_shift) * mtd->oobsize; 677 - res = mtd->read_oob(mtd, to + mtd->writesize, ooblen, 678 - &retlen, &buf[len]); 679 - if (res < 0 || retlen != ooblen) 586 + ops.len = (len >> this->page_shift) * mtd->oobsize; 587 + ops.oobbuf = &buf[len]; 588 + res = mtd->read_oob(mtd, to + mtd->writesize, &ops); 589 + if (res < 0 || ops.retlen != ops.len) 680 590 goto outerr; 681 591 682 592 /* Calc the byte offset in the buffer */ ··· 725 635 if (res < 0) 726 636 goto outerr; 727 637 728 - res = nand_write_raw(mtd, to, len, &retlen, buf, &buf[len]); 638 + res = scan_write_bbt(mtd, to, len, buf, &buf[len]); 729 639 if (res < 0) 730 640 goto outerr; 731 641
drivers/mtd/nftlcore.c | +76 -16
@@
 	kfree(nftl);
 }

+/*
+ * Read oob data from flash
+ */
+int nftl_read_oob(struct mtd_info *mtd, loff_t offs, size_t len,
+		  size_t *retlen, uint8_t *buf)
+{
+	struct mtd_oob_ops ops;
+	int res;
+
+	ops.mode = MTD_OOB_PLACE;
+	ops.ooboffs = offs & (mtd->writesize - 1);
+	ops.ooblen = len;
+	ops.oobbuf = buf;
+	ops.datbuf = NULL;
+	ops.len = len;
+
+	res = mtd->read_oob(mtd, offs & ~(mtd->writesize - 1), &ops);
+	*retlen = ops.retlen;
+	return res;
+}
+
+/*
+ * Write oob data to flash
+ */
+int nftl_write_oob(struct mtd_info *mtd, loff_t offs, size_t len,
+		   size_t *retlen, uint8_t *buf)
+{
+	struct mtd_oob_ops ops;
+	int res;
+
+	ops.mode = MTD_OOB_PLACE;
+	ops.ooboffs = offs & (mtd->writesize - 1);
+	ops.ooblen = len;
+	ops.oobbuf = buf;
+	ops.datbuf = NULL;
+	ops.len = len;
+
+	res = mtd->write_oob(mtd, offs & ~(mtd->writesize - 1), &ops);
+	*retlen = ops.retlen;
+	return res;
+}
+
+/*
+ * Write data and oob to flash
+ */
+static int nftl_write(struct mtd_info *mtd, loff_t offs, size_t len,
+		      size_t *retlen, uint8_t *buf, uint8_t *oob)
+{
+	struct mtd_oob_ops ops;
+	int res;
+
+	ops.mode = MTD_OOB_PLACE;
+	ops.ooboffs = offs;
+	ops.ooblen = mtd->oobsize;
+	ops.oobbuf = oob;
+	ops.datbuf = buf;
+	ops.len = len;
+
+	res = mtd->write_oob(mtd, offs & ~(mtd->writesize - 1), &ops);
+	*retlen = ops.retlen;
+	return res;
+}
+
 #ifdef CONFIG_NFTL_RW

 /* Actual NFTL access routines */
@@
 	targetEUN = thisEUN;
 	for (block = 0; block < nftl->EraseSize / 512; block ++) {
-		mtd->read_oob(mtd, (thisEUN * nftl->EraseSize) +
+		nftl_read_oob(mtd, (thisEUN * nftl->EraseSize) +
 			      (block * 512), 16 , &retlen,
 			      (char *)&oob);
 		if (block == 2) {
@@
 		   longer one */
 		oob.u.c.FoldMark = oob.u.c.FoldMark1 = cpu_to_le16(FOLD_MARK_IN_PROGRESS);
 		oob.u.c.unused = 0xffffffff;
-		mtd->write_oob(mtd, (nftl->EraseSize * targetEUN) + 2 * 512 + 8,
+		nftl_write_oob(mtd, (nftl->EraseSize * targetEUN) + 2 * 512 + 8,
 			       8, &retlen, (char *)&oob.u);
 	}
@@
 		memset(&oob, 0xff, sizeof(struct nftl_oob));
 		oob.b.Status = oob.b.Status1 = SECTOR_USED;

-		nand_write_raw(nftl->mbd.mtd, (nftl->EraseSize * targetEUN) +
-			       (block * 512), 512, &retlen, movebuf,
-			       (char *)&oob);
-
+		nftl_write(nftl->mbd.mtd, (nftl->EraseSize * targetEUN) +
+			   (block * 512), 512, &retlen, movebuf, (char *)&oob);
 	}

 	/* add the header so that it is now a valid chain */
 	oob.u.a.VirtUnitNum = oob.u.a.SpareVirtUnitNum = cpu_to_le16(thisVUC);
 	oob.u.a.ReplUnitNum = oob.u.a.SpareReplUnitNum = 0xffff;

-	mtd->write_oob(mtd, (nftl->EraseSize * targetEUN) + 8,
+	nftl_write_oob(mtd, (nftl->EraseSize * targetEUN) + 8,
 		       8, &retlen, (char *)&oob.u);

 	/* OK. We've moved the whole lot into the new block. Now we have to free the original blocks. */
@@
 		lastEUN = writeEUN;

-		mtd->read_oob(mtd,
+		nftl_read_oob(mtd,
 			      (writeEUN * nftl->EraseSize) + blockofs,
 			      8, &retlen, (char *)&bci);
@@
 			nftl->ReplUnitTable[writeEUN] = BLOCK_NIL;

 			/* ... and on the flash itself */
-			mtd->read_oob(mtd, writeEUN * nftl->EraseSize + 8, 8,
+			nftl_read_oob(mtd, writeEUN * nftl->EraseSize + 8, 8,
 				      &retlen, (char *)&oob.u);

 			oob.u.a.VirtUnitNum = oob.u.a.SpareVirtUnitNum = cpu_to_le16(thisVUC);

-			mtd->write_oob(mtd, writeEUN * nftl->EraseSize + 8, 8,
+			nftl_write_oob(mtd, writeEUN * nftl->EraseSize + 8, 8,
 				       &retlen, (char *)&oob.u);

 			/* we link the new block to the chain only after the
@@
 				/* Both in our cache... */
 				nftl->ReplUnitTable[lastEUN] = writeEUN;
 				/* ... and on the flash itself */
-				mtd->read_oob(mtd, (lastEUN * nftl->EraseSize) + 8,
+				nftl_read_oob(mtd, (lastEUN * nftl->EraseSize) + 8,
 					      8, &retlen, (char *)&oob.u);

 				oob.u.a.ReplUnitNum = oob.u.a.SpareReplUnitNum
 					= cpu_to_le16(writeEUN);

-				mtd->write_oob(mtd, (lastEUN * nftl->EraseSize) + 8,
+				nftl_write_oob(mtd, (lastEUN * nftl->EraseSize) + 8,
 					       8, &retlen, (char *)&oob.u);
 			}
@@
 	memset(&oob, 0xff, sizeof(struct nftl_oob));
 	oob.b.Status = oob.b.Status1 = SECTOR_USED;

-	nand_write_raw(nftl->mbd.mtd, (writeEUN * nftl->EraseSize) +
-		       blockofs, 512, &retlen, (char *)buffer,
-		       (char *)&oob);
+	nftl_write(nftl->mbd.mtd, (writeEUN * nftl->EraseSize) + blockofs,
+		   512, &retlen, (char *)buffer, (char *)&oob);
 	return 0;
 }
 #endif /* CONFIG_NFTL_RW */
@@
 	if (thisEUN != BLOCK_NIL) {
 		while (thisEUN < nftl->nb_blocks) {
-			if (mtd->read_oob(mtd, (thisEUN * nftl->EraseSize) +
+			if (nftl_read_oob(mtd, (thisEUN * nftl->EraseSize) +
 					  blockofs, 8, &retlen,
 					  (char *)&bci) < 0)
 				status = SECTOR_IGNORE;
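The nftl_read_oob()/nftl_write_oob() wrappers above always hand the device a page-aligned offset and carry the byte position inside the page in ops.ooboffs. A small standalone sketch of that address split (function names are made up; it assumes, as on NAND, that writesize is a power of two):

```c
#include <assert.h>
#include <stdint.h>

/* Address split used by the nftl oob wrappers: the device sees a
 * page-aligned offset, the remainder travels in ops.ooboffs.
 * Assumes writesize is a power of two. */
static uint32_t oob_page_base(uint32_t offs, uint32_t writesize)
{
    return offs & ~(writesize - 1);   /* page-aligned device address */
}

static uint32_t oob_in_page(uint32_t offs, uint32_t writesize)
{
    return offs & (writesize - 1);    /* byte offset inside the page */
}
```

Both halves together always reconstruct the original offset, which is why the wrappers can keep the old flat-offset calling convention for the NFTL code above them.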
drivers/mtd/nftlmount.c | +17 -12
@@

 char nftlmountrev[]="$Revision: 1.41 $";

+extern int nftl_read_oob(struct mtd_info *mtd, loff_t offs, size_t len,
+			 size_t *retlen, uint8_t *buf);
+extern int nftl_write_oob(struct mtd_info *mtd, loff_t offs, size_t len,
+			  size_t *retlen, uint8_t *buf);
+
 /* find_boot_record: Find the NFTL Media Header and its Spare copy which contains the
  * various device information of the NFTL partition and Bad Unit Table. Update
  * the ReplUnitTable[] table accroding to the Bad Unit Table. ReplUnitTable[]
@@
 	}

 	/* To be safer with BIOS, also use erase mark as discriminant */
-	if ((ret = mtd->read_oob(mtd, block * nftl->EraseSize +
+	if ((ret = nftl_read_oob(mtd, block * nftl->EraseSize +
 				 SECTORSIZE + 8, 8, &retlen,
 				 (char *)&h1) < 0)) {
 		printk(KERN_WARNING "ANAND header found at 0x%x in mtd%d, but OOB data read failed (err %d)\n",
@@
 		return -1;

 	if (check_oob) {
-		if(mtd->read_oob(mtd, address, mtd->oobsize,
+		if(nftl_read_oob(mtd, address, mtd->oobsize,
 				 &retlen, &buf[SECTORSIZE]) < 0)
 			return -1;
 		if (memcmpb(buf + SECTORSIZE, 0xff, mtd->oobsize) != 0)
@@
 	struct mtd_info *mtd = nftl->mbd.mtd;

 	/* Read the Unit Control Information #1 for Wear-Leveling */
-	if (mtd->read_oob(mtd, block * nftl->EraseSize + SECTORSIZE + 8,
+	if (nftl_read_oob(mtd, block * nftl->EraseSize + SECTORSIZE + 8,
 			  8, &retlen, (char *)&uci) < 0)
 		goto default_uci1;

@@
 		goto fail;

 	uci.WearInfo = le32_to_cpu(nb_erases);
-	if (mtd->write_oob(mtd, block * nftl->EraseSize + SECTORSIZE +
+	if (nftl_write_oob(mtd, block * nftl->EraseSize + SECTORSIZE +
 			   8, 8, &retlen, (char *)&uci) < 0)
 		goto fail;
 	return 0;
@@
 	block = first_block;
 	for (;;) {
 		for (i = 0; i < sectors_per_block; i++) {
-			if (mtd->read_oob(mtd,
+			if (nftl_read_oob(mtd,
 					  block * nftl->EraseSize + i * SECTORSIZE,
 					  8, &retlen, (char *)&bci) < 0)
 				status = SECTOR_IGNORE;
@@
 				/* sector not free actually : mark it as SECTOR_IGNORE */
 				bci.Status = SECTOR_IGNORE;
 				bci.Status1 = SECTOR_IGNORE;
-				mtd->write_oob(mtd, block *
+				nftl_write_oob(mtd, block *
 					       nftl->EraseSize +
 					       i * SECTORSIZE, 8,
 					       &retlen, (char *)&bci);
@@
 	size_t retlen;

 	/* check erase mark. */
-	if (mtd->read_oob(mtd, block * nftl->EraseSize + SECTORSIZE + 8, 8,
+	if (nftl_read_oob(mtd, block * nftl->EraseSize + SECTORSIZE + 8, 8,
 			  &retlen, (char *)&h1) < 0)
 		return -1;
@@
 		h1.EraseMark = cpu_to_le16(ERASE_MARK);
 		h1.EraseMark1 = cpu_to_le16(ERASE_MARK);
 		h1.WearInfo = cpu_to_le32(0);
-		if (mtd->write_oob(mtd,
+		if (nftl_write_oob(mtd,
 				   block * nftl->EraseSize + SECTORSIZE + 8, 8,
 				   &retlen, (char *)&h1) < 0)
 			return -1;
@@
 			    SECTORSIZE, 0) != 0)
 			return -1;

-		if (mtd->read_oob(mtd, block * nftl->EraseSize + i,
+		if (nftl_read_oob(mtd, block * nftl->EraseSize + i,
 				  16, &retlen, buf) < 0)
 			return -1;
 		if (i == SECTORSIZE) {
@@
 	struct nftl_uci2 uci;
 	size_t retlen;

-	if (mtd->read_oob(mtd, block * nftl->EraseSize + 2 * SECTORSIZE + 8,
+	if (nftl_read_oob(mtd, block * nftl->EraseSize + 2 * SECTORSIZE + 8,
 			  8, &retlen, (char *)&uci) < 0)
 		return 0;
@@

 	for (;;) {
 		/* read the block header. If error, we format the chain */
-		if (mtd->read_oob(mtd,
+		if (nftl_read_oob(mtd,
 				  block * s->EraseSize + 8, 8,
 				  &retlen, (char *)&h0) < 0 ||
-		    mtd->read_oob(mtd,
+		    nftl_read_oob(mtd,
 				  block * s->EraseSize +
 				  SECTORSIZE + 8, 8,
 				  &retlen, (char *)&h1) < 0) {
drivers/mtd/onenand/onenand_base.c | +38 -8
@@
 }

 /**
- * onenand_read_oob - [MTD Interface] OneNAND read out-of-band
+ * onenand_do_read_oob - [Internal] OneNAND read out-of-band
  * @param mtd		MTD device structure
  * @param from		offset to read from
  * @param len		number of bytes to read
@@
  *
  * OneNAND read out-of-band data from the spare area
  */
-static int onenand_read_oob(struct mtd_info *mtd, loff_t from, size_t len,
-			    size_t *retlen, u_char *buf)
+int onenand_do_read_oob(struct mtd_info *mtd, loff_t from, size_t len,
+			size_t *retlen, u_char *buf)
 {
 	struct onenand_chip *this = mtd->priv;
 	int read = 0, thislen, column;
@@

 	*retlen = read;
 	return ret;
+}
+
+/**
+ * onenand_read_oob - [MTD Interface] OneNAND read data and/or out-of-band
+ * @mtd:	MTD device structure
+ * @from:	offset to read from
+ * @ops:	oob operation description structure
+ */
+static int onenand_read_oob(struct mtd_info *mtd, loff_t from,
+			    struct mtd_oob_ops *ops)
+{
+	BUG_ON(ops->mode != MTD_OOB_PLACE);
+
+	return onenand_do_read_oob(mtd, from + ops->ooboffs, ops->len,
+				   &ops->retlen, ops->oobbuf);
 }

 #ifdef CONFIG_MTD_ONENAND_VERIFY_WRITE
@@
 }

 /**
- * onenand_write_oob - [MTD Interface] OneNAND write out-of-band
+ * onenand_do_write_oob - [Internal] OneNAND write out-of-band
  * @param mtd		MTD device structure
  * @param to		offset to write to
  * @param len		number of bytes to write
@@
  *
  * OneNAND write out-of-band
  */
-static int onenand_write_oob(struct mtd_info *mtd, loff_t to, size_t len,
-			     size_t *retlen, const u_char *buf)
+static int onenand_do_write_oob(struct mtd_info *mtd, loff_t to, size_t len,
+				size_t *retlen, const u_char *buf)
 {
 	struct onenand_chip *this = mtd->priv;
 	int column, ret = 0;
@@
 	*retlen = written;

 	return ret;
+}
+
+/**
+ * onenand_write_oob - [MTD Interface] OneNAND write data and/or out-of-band
+ * @mtd:	MTD device structure
+ * @to:		offset to write to
+ * @ops:	oob operation description structure
+ */
+static int onenand_write_oob(struct mtd_info *mtd, loff_t to,
+			     struct mtd_oob_ops *ops)
+{
+	BUG_ON(ops->mode != MTD_OOB_PLACE);
+
+	return onenand_do_write_oob(mtd, to + ops->ooboffs, ops->len,
+				    &ops->retlen, ops->oobbuf);
 }

 /**
@@

 	/* We write two bytes, so we dont have to mess with 16 bit access */
 	ofs += mtd->oobsize + (bbm->badblockpos & ~0x01);
-	return mtd->write_oob(mtd, ofs , 2, &retlen, buf);
+	return onenand_do_write_oob(mtd, ofs , 2, &retlen, buf);
 }

 /**
@@
 	this->command(mtd, ONENAND_CMD_OTP_ACCESS, 0, 0);
 	this->wait(mtd, FL_OTPING);

-	ret = mtd->write_oob(mtd, from, len, retlen, buf);
+	ret = onenand_do_write_oob(mtd, from, len, retlen, buf);

 	/* Exit OTP access mode */
 	this->command(mtd, ONENAND_CMD_RESET, 0, 0);
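The new onenand entry points are deliberately thin: validate the descriptor mode, fold ooboffs into the device offset, and forward to the old-style internal helper. A hedged sketch of that adapter pattern with toy types (nothing here is the onenand driver itself; `read_oob_adapter` and `dummy_do_read` are invented):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef enum { MTD_OOB_PLACE, MTD_OOB_AUTO, MTD_OOB_RAW } mtd_oob_mode_t;

struct mtd_oob_ops {
    mtd_oob_mode_t mode;
    size_t len;
    size_t retlen;
    uint32_t ooboffs;
    uint8_t *oobbuf;
};

/* stand-in for the old-style helper, e.g. onenand_do_read_oob() */
typedef int (*oob_fn)(uint32_t from, size_t len, size_t *retlen,
                      uint8_t *buf);

/* adapter shaped like onenand_read_oob(): check the mode, fold the
 * oob offset into the device offset, forward everything else */
static int read_oob_adapter(uint32_t from, struct mtd_oob_ops *ops,
                            oob_fn do_read)
{
    if (ops->mode != MTD_OOB_PLACE)
        return -1;    /* the real driver uses BUG_ON() here */
    return do_read(from + ops->ooboffs, ops->len, &ops->retlen,
                   ops->oobbuf);
}

static uint32_t seen_from;    /* records what the helper was handed */

static int dummy_do_read(uint32_t from, size_t len, size_t *retlen,
                         uint8_t *buf)
{
    (void)buf;
    seen_from = from;
    *retlen = len;
    return 0;
}

static int adapter_demo(void)
{
    /* mode, len, retlen, ooboffs, oobbuf */
    struct mtd_oob_ops ops = { MTD_OOB_PLACE, 4, 0, 6, NULL };

    if (read_oob_adapter(0x1000, &ops, dummy_do_read) || ops.retlen != 4)
        return -1;
    return (int)seen_from;    /* 0x1000 + ooboffs */
}
```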
drivers/mtd/onenand/onenand_bbt.c | +5 -2
@@
 #include <linux/mtd/onenand.h>
 #include <linux/mtd/compatmac.h>

+extern int onenand_do_read_oob(struct mtd_info *mtd, loff_t from, size_t len,
+			       size_t *retlen, u_char *buf);
+
 /**
  * check_short_pattern - [GENERIC] check if a pattern is in the buffer
  * @param buf		the buffer to search
@@

 		/* No need to read pages fully,
 		 * just read required OOB bytes */
-		ret = mtd->read_oob(mtd, from + j * mtd->writesize + bd->offs,
-				    readlen, &retlen, &buf[0]);
+		ret = onenand_do_read_oob(mtd, from + j * mtd->writesize + bd->offs,
+					  readlen, &retlen, &buf[0]);

 		if (ret)
 			return ret;
fs/jffs2/jffs2_fs_sb.h | +1
@@
 #ifdef CONFIG_JFFS2_FS_WRITEBUFFER
 	/* Write-behind buffer for NAND flash */
 	unsigned char *wbuf;
+	unsigned char *oobbuf;
 	uint32_t wbuf_ofs;
 	uint32_t wbuf_len;
 	struct jffs2_inodirty *wbuf_inodes;
fs/jffs2/wbuf.c | +119 -113
@@
 	return ret;
 }

+#define NR_OOB_SCAN_PAGES	4
+
 /*
  * Check, if the out of band area is empty
  */
-int jffs2_check_oob_empty( struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb, int mode)
+int jffs2_check_oob_empty(struct jffs2_sb_info *c,
+			  struct jffs2_eraseblock *jeb, int mode)
 {
-	unsigned char *buf;
-	int ret = 0;
-	int i,len,page;
-	size_t retlen;
-	int oob_size;
+	int i, page, ret;
+	int oobsize = c->mtd->oobsize;
+	struct mtd_oob_ops ops;

-	/* allocate a buffer for all oob data in this sector */
-	oob_size = c->mtd->oobsize;
-	len = 4 * oob_size;
-	buf = kmalloc(len, GFP_KERNEL);
-	if (!buf) {
-		printk(KERN_NOTICE "jffs2_check_oob_empty(): allocation of temporary data buffer for oob check failed\n");
-		return -ENOMEM;
-	}
-	/*
-	 * if mode = 0, we scan for a total empty oob area, else we have
-	 * to take care of the cleanmarker in the first page of the block
-	 */
-	ret = jffs2_flash_read_oob(c, jeb->offset, len , &retlen, buf);
+	ops.len = NR_OOB_SCAN_PAGES * oobsize;
+	ops.ooblen = oobsize;
+	ops.oobbuf = c->oobbuf;
+	ops.ooboffs = 0;
+	ops.datbuf = NULL;
+	ops.mode = MTD_OOB_PLACE;
+
+	ret = c->mtd->read_oob(c->mtd, jeb->offset, &ops);
 	if (ret) {
-		D1(printk(KERN_WARNING "jffs2_check_oob_empty(): Read OOB failed %d for block at %08x\n", ret, jeb->offset));
-		goto out;
+		D1(printk(KERN_WARNING "jffs2_check_oob_empty(): Read OOB "
+			  "failed %d for block at %08x\n", ret, jeb->offset));
+		return ret;
 	}

-	if (retlen < len) {
-		D1(printk(KERN_WARNING "jffs2_check_oob_empty(): Read OOB return short read "
-			  "(%zd bytes not %d) for block at %08x\n", retlen, len, jeb->offset));
-		ret = -EIO;
-		goto out;
+	if (ops.retlen < ops.len) {
+		D1(printk(KERN_WARNING "jffs2_check_oob_empty(): Read OOB "
+			  "returned short read (%zd bytes not %d) for block "
+			  "at %08x\n", ops.retlen, ops.len, jeb->offset));
+		return -EIO;
 	}

 	/* Special check for first page */
-	for(i = 0; i < oob_size ; i++) {
+	for(i = 0; i < oobsize ; i++) {
 		/* Yeah, we know about the cleanmarker. */
 		if (mode && i >= c->fsdata_pos &&
 		    i < c->fsdata_pos + c->fsdata_len)
 			continue;

-		if (buf[i] != 0xFF) {
-			D2(printk(KERN_DEBUG "Found %02x at %x in OOB for %08x\n",
-				  buf[i], i, jeb->offset));
-			ret = 1;
-			goto out;
+		if (ops.oobbuf[i] != 0xFF) {
+			D2(printk(KERN_DEBUG "Found %02x at %x in OOB for "
+				  "%08x\n", ops.oobbuf[i], i, jeb->offset));
+			return 1;
 		}
 	}

 	/* we know, we are aligned :) */
-	for (page = oob_size; page < len; page += sizeof(long)) {
-		unsigned long dat = *(unsigned long *)(&buf[page]);
-		if(dat != -1) {
-			ret = 1;
-			goto out;
-		}
+	for (page = oobsize; page < ops.len; page += sizeof(long)) {
+		long dat = *(long *)(&ops.oobbuf[page]);
+		if(dat != -1)
+			return 1;
 	}
-
- out:
-	kfree(buf);
-
-	return ret;
+	return 0;
 }

 /*
- * Scan for a valid cleanmarker and for bad blocks
- * For virtual blocks (concatenated physical blocks) check the cleanmarker
- * only in the first page of the first physical block, but scan for bad blocks in all
- * physical blocks
- */
-int jffs2_check_nand_cleanmarker (struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb)
+ * Scan for a valid cleanmarker and for bad blocks
+ */
+int jffs2_check_nand_cleanmarker (struct jffs2_sb_info *c,
+				  struct jffs2_eraseblock *jeb)
 {
 	struct jffs2_unknown_node n;
-	unsigned char buf[2 * NAND_MAX_OOBSIZE];
-	unsigned char *p;
-	int ret, i, cnt, retval = 0;
-	size_t retlen, offset;
-	int oob_size;
+	struct mtd_oob_ops ops;
+	int oobsize = c->mtd->oobsize;
+	unsigned char *p,*b;
+	int i, ret;
+	size_t offset = jeb->offset;

-	offset = jeb->offset;
-	oob_size = c->mtd->oobsize;
-
-	/* Loop through the physical blocks */
-	for (cnt = 0; cnt < (c->sector_size / c->mtd->erasesize); cnt++) {
-		/* Check first if the block is bad. */
-		if (c->mtd->block_isbad (c->mtd, offset)) {
-			D1 (printk (KERN_WARNING "jffs2_check_nand_cleanmarker(): Bad block at %08x\n", jeb->offset));
-			return 2;
-		}
-		/*
-		 * We read oob data from page 0 and 1 of the block.
-		 * page 0 contains cleanmarker and badblock info
-		 * page 1 contains failure count of this block
-		 */
-		ret = c->mtd->read_oob (c->mtd, offset, oob_size << 1, &retlen, buf);
-
-		if (ret) {
-			D1 (printk (KERN_WARNING "jffs2_check_nand_cleanmarker(): Read OOB failed %d for block at %08x\n", ret, jeb->offset));
-			return ret;
-		}
-		if (retlen < (oob_size << 1)) {
-			D1 (printk (KERN_WARNING "jffs2_check_nand_cleanmarker(): Read OOB return short read (%zd bytes not %d) for block at %08x\n", retlen, oob_size << 1, jeb->offset));
-			return -EIO;
-		}
-
-		/* Check cleanmarker only on the first physical block */
-		if (!cnt) {
-			n.magic = cpu_to_je16 (JFFS2_MAGIC_BITMASK);
-			n.nodetype = cpu_to_je16 (JFFS2_NODETYPE_CLEANMARKER);
-			n.totlen = cpu_to_je32 (8);
-			p = (unsigned char *) &n;
-
-			for (i = 0; i < c->fsdata_len; i++) {
-				if (buf[c->fsdata_pos + i] != p[i]) {
-					retval = 1;
-				}
-			}
-			D1(if (retval == 1) {
-				printk(KERN_WARNING "jffs2_check_nand_cleanmarker(): Cleanmarker node not detected in block at %08x\n", jeb->offset);
-				printk(KERN_WARNING "OOB at %08zx was ", offset);
-				for (i=0; i < oob_size; i++) {
-					printk("%02x ", buf[i]);
-				}
-				printk("\n");
-			})
-		}
-		offset += c->mtd->erasesize;
+	/* Check first if the block is bad. */
+	if (c->mtd->block_isbad(c->mtd, offset)) {
+		D1 (printk(KERN_WARNING "jffs2_check_nand_cleanmarker()"
+			   ": Bad block at %08x\n", jeb->offset));
+		return 2;
 	}
-	return retval;
+
+	ops.len = oobsize;
+	ops.ooblen = oobsize;
+	ops.oobbuf = c->oobbuf;
+	ops.ooboffs = 0;
+	ops.datbuf = NULL;
+	ops.mode = MTD_OOB_PLACE;
+
+	ret = c->mtd->read_oob(c->mtd, offset, &ops);
+	if (ret) {
+		D1 (printk(KERN_WARNING "jffs2_check_nand_cleanmarker(): "
+			   "Read OOB failed %d for block at %08x\n",
+			   ret, jeb->offset));
+		return ret;
+	}
+
+	if (ops.retlen < ops.len) {
+		D1 (printk (KERN_WARNING "jffs2_check_nand_cleanmarker(): "
+			    "Read OOB return short read (%zd bytes not %d) "
+			    "for block at %08x\n", ops.retlen, ops.len,
+			    jeb->offset));
+		return -EIO;
+	}
+
+	n.magic = cpu_to_je16 (JFFS2_MAGIC_BITMASK);
+	n.nodetype = cpu_to_je16 (JFFS2_NODETYPE_CLEANMARKER);
+	n.totlen = cpu_to_je32 (8);
+	p = (unsigned char *) &n;
+	b = c->oobbuf + c->fsdata_pos;
+
+	for (i = c->fsdata_len; i; i--) {
+		if (*b++ != *p++)
+			ret = 1;
+	}
+
+	D1(if (ret == 1) {
+		printk(KERN_WARNING "jffs2_check_nand_cleanmarker(): "
+		       "Cleanmarker node not detected in block at %08x\n",
+		       offset);
+		printk(KERN_WARNING "OOB at %08zx was ", offset);
+		for (i=0; i < oobsize; i++)
+			printk("%02x ", c->oobbuf[i]);
+		printk("\n");
+	});
+	return ret;
 }

-int jffs2_write_nand_cleanmarker(struct jffs2_sb_info *c, struct jffs2_eraseblock *jeb)
+int jffs2_write_nand_cleanmarker(struct jffs2_sb_info *c,
+				 struct jffs2_eraseblock *jeb)
 {
 	struct jffs2_unknown_node n;
-	int ret;
-	size_t retlen;
+	int ret;
+	struct mtd_oob_ops ops;

 	n.magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
 	n.nodetype = cpu_to_je16(JFFS2_NODETYPE_CLEANMARKER);
 	n.totlen = cpu_to_je32(8);

-	ret = jffs2_flash_write_oob(c, jeb->offset + c->fsdata_pos, c->fsdata_len, &retlen, (unsigned char *)&n);
+	ops.len = c->fsdata_len;
+	ops.ooblen = c->fsdata_len;
+	ops.oobbuf = (uint8_t *)&n;
+	ops.ooboffs = c->fsdata_pos;
+	ops.datbuf = NULL;
+	ops.mode = MTD_OOB_PLACE;
+
+	ret = c->mtd->write_oob(c->mtd, jeb->offset, &ops);

 	if (ret) {
-		D1(printk(KERN_WARNING "jffs2_write_nand_cleanmarker(): Write failed for block at %08x: error %d\n", jeb->offset, ret));
+		D1(printk(KERN_WARNING "jffs2_write_nand_cleanmarker(): "
+			  "Write failed for block at %08x: error %d\n",
+			  jeb->offset, ret));
 		return ret;
 	}
-	if (retlen != c->fsdata_len) {
-		D1(printk(KERN_WARNING "jffs2_write_nand_cleanmarker(): Short write for block at %08x: %zd not %d\n", jeb->offset, retlen, c->fsdata_len));
-		return ret;
+	if (ops.retlen != ops.len) {
+		D1(printk(KERN_WARNING "jffs2_write_nand_cleanmarker(): "
+			  "Short write for block at %08x: %zd not %d\n",
+			  jeb->offset, ops.retlen, ops.len));
+		return -EIO;
 	}
 	return 0;
 }
@@
 	if (!c->wbuf)
 		return -ENOMEM;

+	c->oobbuf = kmalloc(NR_OOB_SCAN_PAGES * c->mtd->oobsize, GFP_KERNEL);
+	if (!c->oobbuf)
+		return -ENOMEM;
+
 	res = jffs2_nand_set_oobinfo(c);

 #ifdef BREAKME
@@
 void jffs2_nand_flash_cleanup(struct jffs2_sb_info *c)
 {
 	kfree(c->wbuf);
+	kfree(c->oobbuf);
 }

 int jffs2_dataflash_setup(struct jffs2_sb_info *c) {
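With the c->oobbuf buffer of NR_OOB_SCAN_PAGES * oobsize bytes preallocated at mount, the empty check itself reduces to: every oob byte must read 0xFF, except that page 0 may carry the cleanmarker. A standalone sketch of that scan (the page count, oobsize, and cleanmarker window below are illustrative, not jffs2's actual values):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Model of the jffs2_check_oob_empty() scan: the oob area of several
 * pages must be all-0xFF, except for the cleanmarker bytes at
 * [fsdata_pos, fsdata_pos + fsdata_len) in the first page. */
static int oob_is_empty(const uint8_t *buf, size_t oobsize, int pages,
                        size_t fsdata_pos, size_t fsdata_len)
{
    size_t i, total = oobsize * (size_t)pages;

    for (i = 0; i < total; i++) {
        if (i >= fsdata_pos && i < fsdata_pos + fsdata_len)
            continue;    /* cleanmarker window (falls in page 0) */
        if (buf[i] != 0xFF)
            return 0;    /* block is not empty */
    }
    return 1;
}

/* 4 pages x 16 oob bytes, fake cleanmarker window at bytes 6..13 */
static int empty_demo(int dirty)
{
    uint8_t buf[4 * 16];

    memset(buf, 0xFF, sizeof(buf));
    memset(buf + 6, 0x00, 8);    /* fake cleanmarker content */
    if (dirty)
        buf[40] = 0x19;          /* stray data in a later page */
    return oob_is_empty(buf, 16, 4, 6, 8);
}
```

The real function additionally reads full machine words for the pages past the first one, which the preallocated buffer's alignment makes safe.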
include/linux/mtd/mtd.h | +48 -2
@@
 	unsigned long failed;
 };

+/*
+ * oob operation modes
+ *
+ * MTD_OOB_PLACE:	oob data are placed at the given offset
+ * MTD_OOB_AUTO:	oob data are automatically placed at the free areas
+ *			which are defined by the ecclayout
+ * MTD_OOB_RAW:		mode to read raw data+oob in one chunk. The oob data
+ *			is inserted into the data. That's a raw image of the
+ *			flash contents.
+ */
+typedef enum {
+	MTD_OOB_PLACE,
+	MTD_OOB_AUTO,
+	MTD_OOB_RAW,
+} mtd_oob_mode_t;
+
+/**
+ * struct mtd_oob_ops - oob operation operands
+ * @mode:	operation mode
+ *
+ * @len:	number of bytes to write/read. When a data buffer is given
+ *		(datbuf != NULL) this is the number of data bytes. When
+ *		no data buffer is available this is the number of oob bytes.
+ *
+ * @retlen:	number of bytes written/read. When a data buffer is given
+ *		(datbuf != NULL) this is the number of data bytes. When
+ *		no data buffer is available this is the number of oob bytes.
+ *
+ * @ooblen:	number of oob bytes per page
+ * @ooboffs:	offset of oob data in the oob area (only relevant when
+ *		mode = MTD_OOB_PLACE)
+ * @datbuf:	data buffer - if NULL only oob data are read/written
+ * @oobbuf:	oob data buffer
+ */
+struct mtd_oob_ops {
+	mtd_oob_mode_t	mode;
+	size_t		len;
+	size_t		retlen;
+	size_t		ooblen;
+	uint32_t	ooboffs;
+	uint8_t		*datbuf;
+	uint8_t		*oobbuf;
+};
+
 struct mtd_info {
 	u_char type;
 	u_int32_t flags;
@@
 	int (*read) (struct mtd_info *mtd, loff_t from, size_t len, size_t *retlen, u_char *buf);
 	int (*write) (struct mtd_info *mtd, loff_t to, size_t len, size_t *retlen, const u_char *buf);

-	int (*read_oob) (struct mtd_info *mtd, loff_t from, size_t len, size_t *retlen, u_char *buf);
-	int (*write_oob) (struct mtd_info *mtd, loff_t to, size_t len, size_t *retlen, const u_char *buf);
+	int (*read_oob) (struct mtd_info *mtd, loff_t from,
+			 struct mtd_oob_ops *ops);
+	int (*write_oob) (struct mtd_info *mtd, loff_t to,
+			  struct mtd_oob_ops *ops);

 	/*
 	 * Methods to access the protection register area, present in some
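Every caller in this patch uses MTD_OOB_PLACE; MTD_OOB_AUTO instead scatters the client's bytes into the free regions the ecclayout advertises, skipping the ECC bytes. A sketch of that placement step — the `oobfree_slot` tuples and all sizes below are invented for illustration, not a real ecclayout:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct oobfree_slot {
    uint32_t offset;    /* start of an ecc-free region in the oob area */
    uint32_t length;
};

/* Scatter up to len client bytes into the free regions, in order, the
 * way MTD_OOB_AUTO placement is described above.  Returns the number
 * of bytes actually placed. */
static size_t oob_auto_place(uint8_t *oobarea,
                             const struct oobfree_slot *slots, int nslots,
                             const uint8_t *src, size_t len)
{
    size_t done = 0;
    int i;

    for (i = 0; i < nslots && done < len; i++) {
        size_t n = slots[i].length;

        if (n > len - done)
            n = len - done;
        memcpy(oobarea + slots[i].offset, src + done, n);
        done += n;
    }
    return done;
}

static int auto_demo(void)
{
    /* pretend layout: free bytes at 2..3 and 8..11 of a 16-byte oob */
    static const struct oobfree_slot slots[] = { { 2, 2 }, { 8, 4 } };
    uint8_t area[16];

    memset(area, 0xFF, sizeof(area));
    if (oob_auto_place(area, slots, 2, (const uint8_t *)"ABCDEF", 6) != 6)
        return -1;
    /* 'A','B' land at 2..3, 'C'..'F' at 8..11, other bytes untouched */
    if (area[0] != 0xFF || area[2] != 'A' || area[8] != 'C' ||
        area[11] != 'F' || area[12] != 0xFF)
        return -2;
    return 0;
}
```

This is why MTD_OOB_AUTO callers never need to know the controller's ECC byte positions, while MTD_OOB_PLACE callers must avoid them manually.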
include/linux/mtd/nand.h | +2 -8
@@
 /* Free resources held by the NAND device */
 extern void nand_release (struct mtd_info *mtd);

-/* Read raw data from the device without ECC */
-extern int nand_read_raw (struct mtd_info *mtd, uint8_t *buf, loff_t from,
-			  size_t len, size_t ooblen);
-
-
-extern int nand_write_raw(struct mtd_info *mtd, loff_t to, size_t len,
-			  size_t *retlen, const uint8_t *buf, uint8_t *oob);
-
 /* The maximum number of NAND chips in an array */
 #define NAND_MAX_CHIPS		8

@@
 	struct nand_ecc_ctrl ecc;
 	struct nand_buffers buffers;
 	struct nand_hw_control hwcontrol;
+
+	struct mtd_oob_ops ops;

 	uint8_t *bbt;
 	struct nand_bbt_descr *bbt_td;