powerpc: Avoid unaligned loads and stores in boot memcpy code

The 601 processor will generate an alignment exception for unaligned
accesses which cross a page boundary. In the boot wrapper code, OF
(Open Firmware) is still handling all exceptions, and it doesn't have
an alignment exception handler that emulates the instruction and
continues.

This changes the memcpy and memmove routines in the boot wrapper to
avoid doing unaligned accesses. If the source and destination are
misaligned with respect to each other, we just copy one byte at a
time.
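The strategy can be sketched in C (a hedged illustration of the approach, not the kernel code; the name `boot_memcpy` is hypothetical): copy word-at-a-time only when source and destination can be made word-aligned together, otherwise fall back to byte-by-byte so no load or store is ever unaligned.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: word copies are used only when src and dst are
 * mutually aligned (same offset within a word); otherwise every access
 * is a single byte, so nothing can cross a page boundary unaligned. */
void *boot_memcpy(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;

    /* Word copies are possible only if both pointers share alignment. */
    if (((uintptr_t)d & 3) == ((uintptr_t)s & 3)) {
        /* Peel leading bytes until the destination is word-aligned. */
        while (n && ((uintptr_t)d & 3)) {
            *d++ = *s++;
            n--;
        }
        /* Main loop: both pointers are now word-aligned. */
        while (n >= 4) {
            *(uint32_t *)d = *(const uint32_t *)s;
            d += 4;
            s += 4;
            n -= 4;
        }
    }
    /* Trailing bytes, or the whole copy if mutually misaligned. */
    while (n--)
        *d++ = *s++;
    return dst;
}
```

The key point, mirrored in the assembly below, is that aligning the destination alone is not enough: if the source has a different offset within the word, the word loads would still be unaligned, so the code must check both.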

Signed-off-by: Paul Mackerras <paulus@samba.org>

+17 -3
arch/powerpc/boot/string.S
···
 	rlwinm.	r7,r5,32-3,3,31	/* r7 = r5 >> 3 */
 	addi	r6,r3,-4
 	addi	r4,r4,-4
-	beq	2f		/* if less than 8 bytes to do */
+	beq	3f		/* if less than 8 bytes to do */
 	andi.	r0,r6,3		/* get dest word aligned */
 	mtctr	r7
 	bne	5f
+	andi.	r0,r4,3		/* check src word aligned too */
+	bne	3f
 1:	lwz	r7,4(r4)
 	lwzu	r8,8(r4)
 	stw	r7,4(r6)
···
 	bdnz	4b
 	blr
 5:	subfic	r0,r0,4
+	cmpw	cr1,r0,r5
+	add	r7,r0,r4
+	andi.	r7,r7,3		/* will source be word-aligned too? */
+	ble	cr1,3b
+	bne	3b		/* do byte-by-byte if not */
 	mtctr	r0
 6:	lbz	r7,4(r4)
 	addi	r4,r4,1
···
 	rlwinm.	r7,r5,32-3,3,31	/* r7 = r5 >> 3 */
 	add	r6,r3,r5
 	add	r4,r4,r5
-	beq	2f
+	beq	3f
 	andi.	r0,r6,3
 	mtctr	r7
 	bne	5f
+	andi.	r0,r4,3
+	bne	3f
 1:	lwz	r7,-4(r4)
 	lwzu	r8,-8(r4)
 	stw	r7,-4(r6)
···
 	stbu	r0,-1(r6)
 	bdnz	4b
 	blr
-5:	mtctr	r0
+5:	cmpw	cr1,r0,r5
+	subf	r7,r0,r4
+	andi.	r7,r7,3
+	ble	cr1,3b
+	bne	3b
+	mtctr	r0
 6:	lbzu	r7,-1(r4)
 	stbu	r7,-1(r6)
 	bdnz	6b
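The second half of the patch applies the same guard to the backwards copy path used by memmove when the regions overlap. A minimal C sketch of that backwards walk (the name `boot_memmove_backwards` is hypothetical, assuming the same mutual-alignment rule as above):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a backwards copy for overlapping memmove:
 * walking from the end means a destination that overlaps forward of
 * the source never clobbers bytes that haven't been read yet.  As in
 * the forward copy, word accesses are used only when src and dst are
 * mutually word-aligned, so no access is ever unaligned. */
void *boot_memmove_backwards(void *dst, const void *src, size_t n)
{
    unsigned char *d = (unsigned char *)dst + n;
    const unsigned char *s = (const unsigned char *)src + n;

    if (((uintptr_t)d & 3) == ((uintptr_t)s & 3)) {
        /* Peel trailing bytes until the end pointers are word-aligned. */
        while (n && ((uintptr_t)d & 3)) {
            *--d = *--s;
            n--;
        }
        /* Word-at-a-time, moving downwards through both buffers. */
        while (n >= 4) {
            d -= 4;
            s -= 4;
            *(uint32_t *)d = *(const uint32_t *)s;
            n -= 4;
        }
    }
    /* Remaining bytes, or the whole copy if mutually misaligned. */
    while (n--)
        *--d = *--s;
    return dst;
}
```

This mirrors the `lbzu r7,-1(r4)` / `stbu r7,-1(r6)` byte loop at label 6 in the assembly, with the new checks at label 5 deciding whether the word path is safe.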