powerpc/64s: Fix stf mitigation patching w/strict RWX & hash

The stf entry barrier fallback is unsafe to execute in a semi-patched
state, which can happen when enabling/disabling the mitigation with
strict kernel RWX enabled and using the hash MMU.

See the previous commit for more details.

Fix it by changing the order in which we patch the instructions.

Note the stf barrier fallback is only used on Power6 or earlier.

Fixes: bd573a81312f ("powerpc/mm/64s: Allow STRICT_KERNEL_RWX again")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210513140800.1391706-2-mpe@ellerman.id.au

+10 -10
arch/powerpc/lib/feature-fixups.c
@@ -150,17 +150,17 @@ void do_stf_entry_barrier_fixups(enum stf_barrier_type types)
 
 		pr_devel("patching dest %lx\n", (unsigned long)dest);
 
-		patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0]));
-
-		if (types & STF_BARRIER_FALLBACK)
+		// See comment in do_entry_flush_fixups() RE order of patching
+		if (types & STF_BARRIER_FALLBACK) {
+			patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0]));
+			patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2]));
 			patch_branch((struct ppc_inst *)(dest + 1),
-				     (unsigned long)&stf_barrier_fallback,
-				     BRANCH_SET_LINK);
-		else
-			patch_instruction((struct ppc_inst *)(dest + 1),
-					  ppc_inst(instrs[1]));
-
-		patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2]));
+				     (unsigned long)&stf_barrier_fallback, BRANCH_SET_LINK);
+		} else {
+			patch_instruction((struct ppc_inst *)(dest + 1), ppc_inst(instrs[1]));
+			patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2]));
+			patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0]));
+		}
 	}
 
 	printk(KERN_DEBUG "stf-barrier: patched %d entry locations (%s barrier)\n", i,
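
For context, here is a rough sketch of the site being patched and why the ordering matters. The mnemonics and the hazard description are my reading of the instrs[] sequence set up in do_stf_entry_barrier_fixups() and of the previous commit, not text from this patch:

/*
 * Sketch of the fully patched site in STF_BARRIER_FALLBACK mode
 * (illustrative, not verbatim from the kernel source):
 *
 *   dest + 0:  mflr  r10                    // save LR
 *   dest + 1:  bl    stf_barrier_fallback   // branch & link to the fallback
 *   dest + 2:  mtlr  r10                    // restore LR
 *
 * The old order wrote dest, then dest + 1, then dest + 2, so a CPU racing
 * with the patching could take the branch-and-link while the surrounding
 * LR save/restore was missing or stale, leaving LR corrupted on return.
 *
 * The new order writes the harmless mflr/mtlr slots first and installs the
 * branch last; for the non-fallback and nop sequences the first instruction
 * is likewise written last, so a concurrently executing CPU only ever sees
 * a safe combination of instructions.
 */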