
Merge branch 'sparc64-ADI'

Khalid Aziz says:

====================
Application Data Integrity feature introduced by SPARC M7

V12 changes:
This series is the same as v10 and v11; it was simply rebased on the
4.16-rc2 kernel, and patch 11 was added to update signal delivery code to
use the new helper functions added by Eric Biederman. Can mm maintainers
please review patches 2, 7, 8 and 9, which are arch independent, and the
include/linux/mm.h and mm/ksm.c changes in patch 10, and ack these if
everything looks good?

The SPARC M7 processor adds additional metadata for the memory address
space that can be used to secure access to regions of memory. This
additional metadata is implemented as a 4-bit tag attached to each
cacheline-sized block of memory. A task can set a tag on any number of
such blocks. Access to such a block is granted only if the virtual
address used to access that block of memory has the tag encoded in the
uppermost 4 bits of the VA. Since the SPARC processor does not implement
all 64 bits of the VA, the top 4 bits are available for ADI tags. Any
mismatch between the tag encoded in the VA and the tag set on the memory
block results in a trap. Tags are verified in the VA presented to the
MMU, and tags are associated with the physical page the VA maps onto. If
a memory page is swapped out and the page frame gets reused for another
task, the tags are lost and hence must be saved when swapping or
migrating the page.

A userspace task enables ADI through mprotect(). This patch series adds
a page protection bit PROT_ADI and a corresponding VMA flag
VM_SPARC_ADI. VM_SPARC_ADI is used to trigger setting TTE.mcd bit in the
sparc pte that enables ADI checking on the corresponding page. MMU
validates the tag embedded in VA for every page that has TTE.mcd bit set
in its pte. After enabling ADI on a memory range, the userspace task can
set ADI version tags using stxa instruction with ASI_MCD_PRIMARY or
ASI_MCD_ST_BLKINIT_PRIMARY ASI.

Once a userspace task calls mprotect() with PROT_ADI, the kernel takes
the following overall steps:

1. Find the VMAs covering the address range passed in to mprotect and
set VM_SPARC_ADI flag. If address range covers a subset of a VMA, the
VMA will be split.

2. When a page is allocated for a VA and the VMA covering this VA has
the VM_SPARC_ADI flag set, set the TTE.mcd bit so the MMU will check the
version tag.

3. Userspace can now set version tags on the memory it has enabled ADI
on. Userspace accesses ADI enabled memory using a virtual address that
has the version tag embedded in the high bits. MMU validates this
version tag against the actual tag set on the memory. If tag matches,
MMU performs the VA->PA translation and access is granted. If there is a
mismatch, hypervisor sends a data access exception or precise memory
corruption detected exception depending upon whether precise exceptions
are enabled or not (controlled by MCDPERR register). Kernel sends
SIGSEGV to the task with appropriate si_code.

4. If a page is being swapped out or migrated, the kernel must save any
ADI tags set on the page. The kernel maintains a page worth of tag
storage descriptors. Each descriptor points to a tag storage space and
the address range it covers. If the page being swapped out or migrated
has ADI enabled on it, the kernel finds a tag storage descriptor that
covers the address range for the page, or allocates a new descriptor if
none of the existing descriptors cover the address range. The kernel
saves tags from the page into the tag storage space the descriptor
points to.

5. When the page is swapped back in or reinstantiated after migration,
kernel restores the version tags on the new physical page by retrieving
the original tag from tag storage pointed to by a tag storage descriptor
for the virtual address range for new page.

User task can disable ADI by calling mprotect() again on the memory
range with PROT_ADI bit unset. Kernel clears the VM_SPARC_ADI flag in
VMAs, merges adjacent VMAs if necessary, and clears TTE.mcd bit in the
corresponding ptes.

IOMMU does not support ADI checking. Any version tags embedded in the
top bits of VA meant for IOMMU, are cleared and replaced with sign
extension of the first non-version tag bit (bit 59 for SPARC M7) for
IOMMU addresses.

This patch series adds support for this feature in 11 patches:

Patch 1/11
Tag mismatch on access by a task results in a trap from the hypervisor
as a data access exception or a precise memory corruption detected
exception. As part of handling these exceptions, the kernel sends a
SIGSEGV to the user process with a special si_code to indicate which
fault occurred. This patch adds three new si_codes to differentiate
between the various mismatch errors.

Patch 2/11
When a page is swapped or migrated, metadata associated with the page
must be saved so it can be restored later. This patch adds a new
function that saves/restores this metadata when updating pte upon a
swap/migration.

Patch 3/11
SPARC M7 processor adds new fields to control registers to support ADI
feature. It also adds a new exception for precise traps on tag
mismatch. This patch adds definitions for the new control register
fields, new ASIs for ADI and an exception handler for the precise trap
on tag mismatch.

Patch 4/11
New hypervisor fault types were added by sparc M7 processor to support
ADI feature. This patch adds code to handle these fault types for data
access exception handler.

Patch 5/11
When ADI is in use for a page and a tag mismatch occurs, processor
raises "Memory corruption Detected" trap. This patch adds a handler
for this trap.

Patch 6/11
ADI usage is governed by ADI properties on a platform. These
properties are provided to the kernel by firmware. This patch adds new
auxiliary vectors that provide these values to userspace.

Patch 7/11
arch_validate_prot() is used to validate the new protection bits asked
for by the userspace app. Validating protection bits may need the
context of address space the bits are being applied to. One such
example is PROT_ADI bit on sparc processor that enables ADI protection
on an address range. ADI protection applies only to addresses covered
by physical RAM and not other PFN mapped addresses or device
addresses. This patch adds "address" to the parameters being passed to
arch_validate_prot() to provide that context.

Patch 8/11
When protection bits are changed on a page, kernel carries forward all
protection bits except for read/write/exec. Additional code was added
to allow kernel to clear PKEY bits on x86 but this requirement to
clear other bits is not unique to x86. This patch extends the existing
code to allow other architectures to clear any other protection bits
as well on protection bit change.

Patch 9/11
When a processor supports additional metadata on memory pages, that
additional metadata needs to be copied to new memory pages when those
pages are moved. This patch allows architecture specific code to
replace the default copy_highpage() routine with arch specific
version that copies the metadata as well besides the data on the page.

Patch 10/11
This patch adds support for a user space task to enable ADI and enable
tag checking for subsets of its address space. As part of enabling
this feature, this patch adds to support manipulation of precise
exception for memory corruption detection, adds code to save and
restore tags on page swap and migration, and adds code to handle ADI
tagged addresses for DMA.

Patch 11/11
Update signal delivery code in arch/sparc/kernel/traps_64.c to use
the new helper function force_sig_fault() added by commit
f8ec66014ffd ("signal: Add send_sig_fault and force_sig_fault").

Changelog v12:

- Rebased to 4.16-rc2
- Added patch 11 to update signal delivery functions

Changelog v11:

- Rebased to 4.15

Changelog v10:

- Patch 1/10: Updated si_codes definitions for SEGV to match 4.14
- Patch 2/10: No changes
- Patch 3/10: Updated copyright
- Patch 4/10: No changes
- Patch 5/10: No changes
- Patch 6/10: Updated copyright
- Patch 7/10: No changes
- Patch 8/10: No changes
- Patch 9/10: No changes
- Patch 10/10: Added code to return from kernel path to set
PSTATE.mcde if kernel continues execution in another thread
(Suggested by Anthony)

Changelog v9:

- Patch 1/10: No changes
- Patch 2/10: No changes
- Patch 3/10: No changes
- Patch 4/10: No changes
- Patch 5/10: No changes
- Patch 6/10: No changes
- Patch 7/10: No changes
- Patch 8/10: No changes
- Patch 9/10: New patch
- Patch 10/10: Patch 9 from v8. Added code to copy ADI tags when
pages are migrated. Updated code to detect overflow and underflow
of addresses when allocating tag storage.

Changelog v8:

- Patch 1/9: No changes
- Patch 2/9: Fixed an erroneous "}"
- Patch 3/9: Minor print formatting change
- Patch 4/9: No changes
- Patch 5/9: No changes
- Patch 6/9: Added AT_ADI_UEONADI back
- Patch 7/9: Added addr parameter to powerpc arch_validate_prot()
- Patch 8/9: No changes
- Patch 9/9:
- Documentation updates
- Added an IPI on mprotect(...PROT_ADI...) call and
restore of TSTATE.MCDE on context switch
- Removed restriction on enabling ADI on read-only
memory
- Changed kzalloc() for tag storage to use GFP_NOWAIT
- Added code to handle overflow and underflow when
allocating tag storage
- Replaced sun_m7_patch_1insn_range() with
sun4v_patch_1insn_range()
- Added membar after restoring ADI tags in
copy_user_highpage()

Changelog v7:

- Patch 1/9: No changes
- Patch 2/9: Updated parameters to arch specific swap in/out
handlers
- Patch 3/9: No changes
- Patch 4/9: new patch split off from patch 4/4 in v6
- Patch 5/9: new patch split off from patch 4/4 in v6
- Patch 6/9: new patch split off from patch 4/4 in v6
- Patch 7/9: new patch
- Patch 8/9: new patch
- Patch 9/9:
- Enhanced arch_validate_prot() to enable ADI only on
writable addresses backed by physical RAM
- Added support for saving/restoring ADI tags for each
ADI block size address range on a page on swap in/out
- copy ADI tags on COW
- Updated values for auxiliary vectors so they do not clash
with values on other architectures, to avoid a conflict
in glibc
- Disable same page merging on ADI enabled pages
- Enable ADI only on writable addresses backed by
physical RAM
- Split parts of patch off into separate patches

Changelog v6:
- Patch 1/4: No changes
- Patch 2/4: No changes
- Patch 3/4: Added missing nop in the delay slot in
sun4v_mcd_detect_precise
- Patch 4/4: Eliminated instructions to read and write PSTATE
as well as MCDPER and PMCDPER on every access to userspace
addresses by setting PSTATE and PMCDPER correctly upon entry
into kernel

Changelog v5:
- Patch 1/4: No changes
- Patch 2/4: Replaced set_swp_pte_at() with new architecture
functions arch_do_swap_page() and arch_unmap_one() that
support architecture specific actions to be taken on page
swap and migration
- Patch 3/4: Fixed indentation issues in assembly code
- Patch 4/4:
- Fixed indentation issues and instructions in assembly
code
- Removed CONFIG_SPARC64 from mdesc.c
- Changed to maintain state of MCDPER register in thread
info flags as opposed to in mm context. MCDPER is a
per-thread state and belongs in thread info flag as
opposed to mm context which is shared across threads.
Added comments to clarify this is a lazily maintained
state and must be updated on context switch and
copy_process()
- Updated code to use the new arch_do_swap_page() and
arch_unmap_one() functions

Testing:

- All functionality was tested with 8K normal pages as well as hugepages
using malloc, mmap and shm.
- Multiple long duration stress tests were run using hugepages over 2+
months. Normal pages were tested with shorter duration stress tests.
- Tested swapping with malloc and shm by reducing max memory and
allocating three times the available system memory by active processes
using ADI on allocated memory. Ran through multiple hours long runs of
this test.
- Tested page migration with malloc and shm by migrating data pages of
active ADI test process using migratepages, back and forth between two
nodes every few seconds over an hour long run. Verified page migration
through /proc/<pid>/numa_maps.
- Tested COW support using a test that forks children that read from
ADI enabled pages shared with the parent and other children, and also
write to them, forcing COW.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

+1446 -25
+278
Documentation/sparc/adi.txt
Application Data Integrity (ADI)
================================

SPARC M7 processor adds the Application Data Integrity (ADI) feature.
ADI allows a task to set version tags on any subset of its address
space. Once ADI is enabled and version tags are set for ranges of
address space of a task, the processor will compare the tag in pointers
to memory in these ranges to the version set by the application
previously. Access to memory is granted only if the tag in given pointer
matches the tag set by the application. In case of mismatch, processor
raises an exception.

Following steps must be taken by a task to enable ADI fully:

1. Set the user mode PSTATE.mcde bit. This acts as master switch for
   the task's entire address space to enable/disable ADI for the task.

2. Set TTE.mcd bit on any TLB entries that correspond to the range of
   addresses ADI is being enabled on. MMU checks the version tag only
   on the pages that have TTE.mcd bit set.

3. Set the version tag for virtual addresses using stxa instruction
   and one of the MCD specific ASIs. Each stxa instruction sets the
   given tag for one ADI block size number of bytes. This step must
   be repeated for the entire page to set tags for the entire page.

ADI block size for the platform is provided by the hypervisor to kernel
in machine description tables. Hypervisor also provides the number of
top bits in the virtual address that specify the version tag. Once
version tag has been set for a memory location, the tag is stored in the
physical memory and the same tag must be present in the ADI version tag
bits of the virtual address being presented to the MMU. For example on
SPARC M7 processor, MMU uses bits 63-60 for version tags and ADI block
size is same as cacheline size which is 64 bytes. A task that sets ADI
version to, say 10, on a range of memory, must access that memory using
virtual addresses that contain 0xa in bits 63-60.

ADI is enabled on a set of pages using mprotect() with PROT_ADI flag.
When ADI is enabled on a set of pages by a task for the first time,
kernel sets the PSTATE.mcde bit for the task. Version tags for memory
addresses are set with an stxa instruction on the addresses using
ASI_MCD_PRIMARY or ASI_MCD_ST_BLKINIT_PRIMARY. ADI block size is
provided by the hypervisor to the kernel. Kernel returns the value of
ADI block size to userspace using auxiliary vector along with other ADI
info. Following auxiliary vectors are provided by the kernel:

	AT_ADI_BLKSZ	ADI block size. This is the granularity and
			alignment, in bytes, of ADI versioning.
	AT_ADI_NBITS	Number of ADI version bits in the VA


IMPORTANT NOTES:

- Version tag values of 0x0 and 0xf are reserved. These values match any
  tag in virtual address and never generate a mismatch exception.

- Version tags are set on virtual addresses from userspace even though
  tags are stored in physical memory. Tags are set on a physical page
  after it has been allocated to a task and a pte has been created for
  it.

- When a task frees a memory page it had set version tags on, the page
  goes back to free page pool. When this page is re-allocated to a task,
  kernel clears the page using block initialization ASI which clears the
  version tags as well for the page. If a page allocated to a task is
  freed and allocated back to the same task, old version tags set by the
  task on that page will no longer be present.

- ADI tag mismatches are not detected for non-faulting loads.

- Kernel does not set any tags for user pages and it is entirely a
  task's responsibility to set any version tags. Kernel does ensure the
  version tags are preserved if a page is swapped out to the disk and
  swapped back in. It also preserves the version tags if a page is
  migrated.

- ADI works for any size pages. A userspace task need not be aware of
  page size when using ADI. It can simply select a virtual address
  range, enable ADI on the range using mprotect() and set version tags
  for the entire range. mprotect() ensures range is aligned to page size
  and is a multiple of page size.

- ADI tags can only be set on writable memory. For example, ADI tags can
  not be set on read-only mappings.


ADI related traps
-----------------

With ADI enabled, following new traps may occur:

Disrupting memory corruption

	When a store accesses a memory location that has TTE.mcd=1,
	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
	tag in the address used (bits 63:60) does not match the tag set on
	the corresponding cacheline, a memory corruption trap occurs. By
	default, it is a disrupting trap and is sent to the hypervisor
	first. Hypervisor creates a sun4v error report and sends a
	resumable error (TT=0x7e) trap to the kernel. The kernel sends
	a SIGSEGV to the task that resulted in this trap with the following
	info:

		siginfo.si_signo = SIGSEGV;
		siginfo.errno = 0;
		siginfo.si_code = SEGV_ADIDERR;
		siginfo.si_addr = addr; /* PC where first mismatch occurred */
		siginfo.si_trapno = 0;


Precise memory corruption

	When a store accesses a memory location that has TTE.mcd=1,
	the task is running with ADI enabled (PSTATE.mcde=1), and the ADI
	tag in the address used (bits 63:60) does not match the tag set on
	the corresponding cacheline, a memory corruption trap occurs. If
	MCD precise exception is enabled (MCDPERR=1), a precise
	exception is sent to the kernel with TT=0x1a. The kernel sends
	a SIGSEGV to the task that resulted in this trap with the following
	info:

		siginfo.si_signo = SIGSEGV;
		siginfo.errno = 0;
		siginfo.si_code = SEGV_ADIPERR;
		siginfo.si_addr = addr; /* address that caused trap */
		siginfo.si_trapno = 0;

	NOTE: ADI tag mismatch on a load always results in precise trap.


MCD disabled

	When a task has not enabled ADI and attempts to set ADI version
	on a memory address, processor sends an MCD disabled trap. This
	trap is handled by hypervisor first and the hypervisor vectors this
	trap through to the kernel as Data Access Exception trap with
	fault type set to 0xa (invalid ASI). When this occurs, the kernel
	sends the task SIGSEGV signal with following info:

		siginfo.si_signo = SIGSEGV;
		siginfo.errno = 0;
		siginfo.si_code = SEGV_ACCADI;
		siginfo.si_addr = addr; /* address that caused trap */
		siginfo.si_trapno = 0;


Sample program to use ADI
-------------------------

Following sample program is meant to illustrate how to use the ADI
functionality:

	#include <unistd.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <elf.h>
	#include <sys/ipc.h>
	#include <sys/shm.h>
	#include <sys/mman.h>
	#include <asm/asi.h>

	#ifndef AT_ADI_BLKSZ
	#define AT_ADI_BLKSZ	48
	#endif
	#ifndef AT_ADI_NBITS
	#define AT_ADI_NBITS	49
	#endif

	#ifndef PROT_ADI
	#define PROT_ADI	0x10
	#endif

	#define BUFFER_SIZE	32*1024*1024UL

	main(int argc, char* argv[], char* envp[])
	{
		unsigned long i, mcde, adi_blksz, adi_nbits;
		char *shmaddr, *tmp_addr, *end, *veraddr, *clraddr;
		int shmid, version;
		Elf64_auxv_t *auxv;

		adi_blksz = 0;

		while (*envp++ != NULL);
		for (auxv = (Elf64_auxv_t *)envp; auxv->a_type != AT_NULL; auxv++) {
			switch (auxv->a_type) {
			case AT_ADI_BLKSZ:
				adi_blksz = auxv->a_un.a_val;
				break;
			case AT_ADI_NBITS:
				adi_nbits = auxv->a_un.a_val;
				break;
			}
		}
		if (adi_blksz == 0) {
			fprintf(stderr, "Oops! ADI is not supported\n");
			exit(1);
		}

		printf("ADI capabilities:\n");
		printf("\tBlock size = %ld\n", adi_blksz);
		printf("\tNumber of bits = %ld\n", adi_nbits);

		if ((shmid = shmget(2, BUFFER_SIZE,
					IPC_CREAT | SHM_R | SHM_W)) < 0) {
			perror("shmget failed");
			exit(1);
		}

		shmaddr = shmat(shmid, NULL, 0);
		if (shmaddr == (char *)-1) {
			perror("shm attach failed");
			shmctl(shmid, IPC_RMID, NULL);
			exit(1);
		}

		if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE|PROT_ADI)) {
			perror("mprotect failed");
			goto err_out;
		}

		/* Set the ADI version tag on the shm segment
		 */
		version = 10;
		tmp_addr = shmaddr;
		end = shmaddr + BUFFER_SIZE;
		while (tmp_addr < end) {
			asm volatile(
				"stxa %1, [%0]0x90\n\t"
				:
				: "r" (tmp_addr), "r" (version));
			tmp_addr += adi_blksz;
		}
		asm volatile("membar #Sync\n\t");

		/* Create a versioned address from the normal address by placing
		 * version tag in the upper adi_nbits bits
		 */
		tmp_addr = (void *) ((unsigned long)shmaddr << adi_nbits);
		tmp_addr = (void *) ((unsigned long)tmp_addr >> adi_nbits);
		veraddr = (void *) (((unsigned long)version << (64-adi_nbits))
				| (unsigned long)tmp_addr);

		printf("Starting the writes:\n");
		for (i = 0; i < BUFFER_SIZE; i++) {
			veraddr[i] = (char)(i);
			if (!(i % (1024 * 1024)))
				printf(".");
		}
		printf("\n");

		printf("Verifying data...");
		fflush(stdout);
		for (i = 0; i < BUFFER_SIZE; i++)
			if (veraddr[i] != (char)i)
				printf("\nIndex %lu mismatched\n", i);
		printf("Done.\n");

		/* Disable ADI and clean up
		 */
		if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE)) {
			perror("mprotect failed");
			goto err_out;
		}

		if (shmdt((const void *)shmaddr) != 0)
			perror("Detach failure");
		shmctl(shmid, IPC_RMID, NULL);

		exit(0);

	err_out:
		if (shmdt((const void *)shmaddr) != 0)
			perror("Detach failure");
		shmctl(shmid, IPC_RMID, NULL);
		exit(1);
	}
+2 -2
arch/powerpc/include/asm/mman.h
···
 }
 #define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)

-static inline bool arch_validate_prot(unsigned long prot)
+static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
 {
 	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
 		return false;
···
 		return false;
 	return true;
 }
-#define arch_validate_prot(prot) arch_validate_prot(prot)
+#define arch_validate_prot arch_validate_prot

 #endif /* CONFIG_PPC64 */
 #endif /* _ASM_POWERPC_MMAN_H */
+1 -1
arch/powerpc/kernel/syscalls.c
···
 {
 	long ret = -EINVAL;

-	if (!arch_validate_prot(prot))
+	if (!arch_validate_prot(prot, addr))
 		goto out;

 	if (shift) {
+6
arch/sparc/include/asm/adi.h
+#ifndef ___ASM_SPARC_ADI_H
+#define ___ASM_SPARC_ADI_H
+#if defined(__sparc__) && defined(__arch64__)
+#include <asm/adi_64.h>
+#endif
+#endif
+47
arch/sparc/include/asm/adi_64.h
+/* adi_64.h: ADI related data structures
+ *
+ * Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.
+ * Author: Khalid Aziz (khalid.aziz@oracle.com)
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ */
+#ifndef __ASM_SPARC64_ADI_H
+#define __ASM_SPARC64_ADI_H
+
+#include <linux/types.h>
+
+#ifndef __ASSEMBLY__
+
+struct adi_caps {
+	__u64 blksz;
+	__u64 nbits;
+	__u64 ue_on_adi;
+};
+
+struct adi_config {
+	bool enabled;
+	struct adi_caps caps;
+};
+
+extern struct adi_config adi_state;
+
+extern void mdesc_adi_init(void);
+
+static inline bool adi_capable(void)
+{
+	return adi_state.enabled;
+}
+
+static inline unsigned long adi_blksize(void)
+{
+	return adi_state.caps.blksz;
+}
+
+static inline unsigned long adi_nbits(void)
+{
+	return adi_state.caps.nbits;
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* !(__ASM_SPARC64_ADI_H) */
+5
arch/sparc/include/asm/elf_64.h
···
 #include <asm/processor.h>
 #include <asm/extable_64.h>
 #include <asm/spitfire.h>
+#include <asm/adi.h>

 /*
  * Sparc section types
···
 #define ARCH_DLINFO						\
 do {								\
+	extern struct adi_config adi_state;			\
 	if (vdso_enabled)					\
 		NEW_AUX_ENT(AT_SYSINFO_EHDR,			\
 			    (unsigned long)current->mm->context.vdso); \
+	NEW_AUX_ENT(AT_ADI_BLKSZ, adi_state.caps.blksz);	\
+	NEW_AUX_ENT(AT_ADI_NBITS, adi_state.caps.nbits);	\
+	NEW_AUX_ENT(AT_ADI_UEONADI, adi_state.caps.ue_on_adi);	\
 } while (0)

 struct linux_binprm;
+2
arch/sparc/include/asm/hypervisor.h
···
 #define HV_FAULT_TYPE_RESV1		13
 #define HV_FAULT_TYPE_UNALIGNED		14
 #define HV_FAULT_TYPE_INV_PGSZ		15
+#define HV_FAULT_TYPE_MCD		17
+#define HV_FAULT_TYPE_MCD_DIS		18
 /* Values 16 --> -2 are reserved.  */
 #define HV_FAULT_TYPE_MULTIPLE		-1
+83 -1
arch/sparc/include/asm/mman.h
···
 #ifndef __ASSEMBLY__
 #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
 int sparc_mmap_check(unsigned long addr, unsigned long len);
-#endif
+
+#ifdef CONFIG_SPARC64
+#include <asm/adi_64.h>
+
+static inline void ipi_set_tstate_mcde(void *arg)
+{
+	struct mm_struct *mm = arg;
+
+	/* Set TSTATE_MCDE for the task using address map that ADI has been
+	 * enabled on if the task is running. If not, it will be set
+	 * automatically at the next context switch
+	 */
+	if (current->mm == mm) {
+		struct pt_regs *regs;
+
+		regs = task_pt_regs(current);
+		regs->tstate |= TSTATE_MCDE;
+	}
+}
+
+#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
+static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
+{
+	if (adi_capable() && (prot & PROT_ADI)) {
+		struct pt_regs *regs;
+
+		if (!current->mm->context.adi) {
+			regs = task_pt_regs(current);
+			regs->tstate |= TSTATE_MCDE;
+			current->mm->context.adi = true;
+			on_each_cpu_mask(mm_cpumask(current->mm),
+					 ipi_set_tstate_mcde, current->mm, 0);
+		}
+		return VM_SPARC_ADI;
+	} else {
+		return 0;
+	}
+}
+
+#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
+static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
+{
+	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) :
+					   __pgprot(0);
+}
+
+#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
+static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
+{
+	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
+		return 0;
+	if (prot & PROT_ADI) {
+		if (!adi_capable())
+			return 0;
+
+		if (addr) {
+			struct vm_area_struct *vma;
+
+			vma = find_vma(current->mm, addr);
+			if (vma) {
+				/* ADI can not be enabled on PFN
+				 * mapped pages
+				 */
+				if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+					return 0;
+
+				/* Mergeable pages can become unmergeable
+				 * if ADI is enabled on them even if they
+				 * have identical data on them. This can be
+				 * because ADI enabled pages with identical
+				 * data may still not have identical ADI
+				 * tags on them. Disallow ADI on mergeable
+				 * pages.
+				 */
+				if (vma->vm_flags & VM_MERGEABLE)
+					return 0;
+			}
+		}
+	}
+	return 1;
+}
+#endif /* CONFIG_SPARC64 */
+
+#endif /* __ASSEMBLY__ */
 #endif /* __SPARC_MMAN_H__ */
+17
arch/sparc/include/asm/mmu_64.h
···
 #define MM_NUM_TSBS	1
 #endif

+/* ADI tags are stored when a page is swapped out and the storage for
+ * tags is allocated dynamically. There is a tag storage descriptor
+ * associated with each set of tag storage pages. Tag storage descriptors
+ * are allocated dynamically. Since kernel will allocate a full page for
+ * each tag storage descriptor, we can store up to
+ * PAGE_SIZE/sizeof(tag storage descriptor) descriptors on that page.
+ */
+typedef struct {
+	unsigned long	start;		/* Start address for this tag storage */
+	unsigned long	end;		/* Last address for tag storage */
+	unsigned char	*tags;		/* Where the tags are */
+	unsigned long	tag_users;	/* number of references to descriptor */
+} tag_storage_desc_t;
+
 typedef struct {
 	spinlock_t		lock;
 	unsigned long		sparc64_ctx_val;
···
 	struct tsb_config	tsb_block[MM_NUM_TSBS];
 	struct hv_tsb_descr	tsb_descr[MM_NUM_TSBS];
 	void			*vdso;
+	bool			adi;
+	tag_storage_desc_t	*tag_store;
+	spinlock_t		tag_lock;
 } mm_context_t;

 #endif /* !__ASSEMBLY__ */
+51
arch/sparc/include/asm/mmu_context_64.h
···
 #include <linux/spinlock.h>
 #include <linux/mm_types.h>
 #include <linux/smp.h>
+#include <linux/sched.h>

 #include <asm/spitfire.h>
+#include <asm/adi_64.h>
 #include <asm-generic/mm_hooks.h>
 #include <asm/percpu.h>
···
 #define deactivate_mm(tsk,mm)	do { } while (0)
 #define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)
+
+#define  __HAVE_ARCH_START_CONTEXT_SWITCH
+static inline void arch_start_context_switch(struct task_struct *prev)
+{
+	/* Save the current state of MCDPER register for the process
+	 * we are switching from
+	 */
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		__asm__ __volatile__(
+			".word 0x83438000\n\t"	/* rd %mcdper, %g1 */
+			"mov %%g1, %0\n\t"
+			: "=r" (tmp_mcdper)
+			:
+			: "g1");
+		if (tmp_mcdper)
+			set_tsk_thread_flag(prev, TIF_MCDPER);
+		else
+			clear_tsk_thread_flag(prev, TIF_MCDPER);
+	}
+}
+
+#define finish_arch_post_lock_switch	finish_arch_post_lock_switch
+static inline void finish_arch_post_lock_switch(void)
+{
+	/* Restore the state of MCDPER register for the new process
+	 * just switched to.
+	 */
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		tmp_mcdper = test_thread_flag(TIF_MCDPER);
+		__asm__ __volatile__(
+			"mov %0, %%g1\n\t"
+			".word 0x9d800001\n\t"	/* wr %g0, %g1, %mcdper" */
+			".word 0xaf902001\n\t"	/* wrpr %g0, 1, %pmcdper */
+			:
+			: "ir" (tmp_mcdper)
+			: "g1");
+		if (current && current->mm && current->mm->context.adi) {
+			struct pt_regs *regs;
+
+			regs = task_pt_regs(current);
+			regs->tstate |= TSTATE_MCDE;
+		}
+	}
+}
+
 #endif /* !(__ASSEMBLY__) */

 #endif /* !(__SPARC64_MMU_CONTEXT_H) */
+6
arch/sparc/include/asm/page_64.h
···
 void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
 #define copy_page(X,Y)	memcpy((void *)(X), (void *)(Y), PAGE_SIZE)
 void copy_user_page(void *to, void *from, unsigned long vaddr, struct page *topage);
+#define __HAVE_ARCH_COPY_USER_HIGHPAGE
+struct vm_area_struct;
+void copy_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_HIGHPAGE
+void copy_highpage(struct page *to, struct page *from);

 /* Unlike sparc32, sparc64's parameter passing API is more
  * sane in that structures which as small enough are passed
+48
arch/sparc/include/asm/pgtable_64.h
··· 19 19 #include <asm/types.h> 20 20 #include <asm/spitfire.h> 21 21 #include <asm/asi.h> 22 + #include <asm/adi.h> 22 23 #include <asm/page.h> 23 24 #include <asm/processor.h> 24 25 ··· 165 164 #define _PAGE_E_4V _AC(0x0000000000000800,UL) /* side-Effect */ 166 165 #define _PAGE_CP_4V _AC(0x0000000000000400,UL) /* Cacheable in P-Cache */ 167 166 #define _PAGE_CV_4V _AC(0x0000000000000200,UL) /* Cacheable in V-Cache */ 167 + /* On M7, bit 9 is used to enable MCD corruption detection instead */ 168 + #define _PAGE_MCD_4V _AC(0x0000000000000200,UL) /* Memory Corruption */ 168 169 #define _PAGE_P_4V _AC(0x0000000000000100,UL) /* Privileged Page */ 169 170 #define _PAGE_EXEC_4V _AC(0x0000000000000080,UL) /* Executable Page */ 170 171 #define _PAGE_W_4V _AC(0x0000000000000040,UL) /* Writable */ ··· 604 601 static inline pte_t pte_mkspecial(pte_t pte) 605 602 { 606 603 pte_val(pte) |= _PAGE_SPECIAL; 604 + return pte; 605 + } 606 + 607 + static inline pte_t pte_mkmcd(pte_t pte) 608 + { 609 + pte_val(pte) |= _PAGE_MCD_4V; 610 + return pte; 611 + } 612 + 613 + static inline pte_t pte_mknotmcd(pte_t pte) 614 + { 615 + pte_val(pte) &= ~_PAGE_MCD_4V; 607 616 return pte; 608 617 } 609 618 ··· 1060 1045 1061 1046 int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long, 1062 1047 unsigned long, pgprot_t); 1048 + 1049 + void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma, 1050 + unsigned long addr, pte_t pte); 1051 + 1052 + int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma, 1053 + unsigned long addr, pte_t oldpte); 1054 + 1055 + #define __HAVE_ARCH_DO_SWAP_PAGE 1056 + static inline void arch_do_swap_page(struct mm_struct *mm, 1057 + struct vm_area_struct *vma, 1058 + unsigned long addr, 1059 + pte_t pte, pte_t oldpte) 1060 + { 1061 + /* If this is a new page being mapped in, there can be no 1062 + * ADI tags stored away for this page.
Skip looking for 1063 + * stored tags 1064 + */ 1065 + if (pte_none(oldpte)) 1066 + return; 1067 + 1068 + if (adi_state.enabled && (pte_val(pte) & _PAGE_MCD_4V)) 1069 + adi_restore_tags(mm, vma, addr, pte); 1070 + } 1071 + 1072 + #define __HAVE_ARCH_UNMAP_ONE 1073 + static inline int arch_unmap_one(struct mm_struct *mm, 1074 + struct vm_area_struct *vma, 1075 + unsigned long addr, pte_t oldpte) 1076 + { 1077 + if (adi_state.enabled && (pte_val(oldpte) & _PAGE_MCD_4V)) 1078 + return adi_save_tags(mm, vma, addr, oldpte); 1079 + return 0; 1080 + } 1063 1081 1064 1082 static inline int io_remap_pfn_range(struct vm_area_struct *vma, 1065 1083 unsigned long from, unsigned long pfn,
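The two hooks above give generic mm a save point (arch_unmap_one()) when a PTE is torn down for swap and a restore point (arch_do_swap_page()) when the page comes back. A minimal simulation of that contract, with an invented fixed-size table standing in for the real tag storage descriptors:

```c
#include <assert.h>
#include <stddef.h>

#define SLOTS 16   /* illustrative capacity, not a kernel constant */

struct tag_slot { unsigned long addr; unsigned char tag; int used; };
static struct tag_slot store[SLOTS];

/* ~arch_unmap_one(): stash the tag before the page is swapped out */
static int save_tag(unsigned long addr, unsigned char tag)
{
    for (int i = 0; i < SLOTS; i++)
        if (!store[i].used) {
            store[i] = (struct tag_slot){ addr, tag, 1 };
            return 0;
        }
    return -1;                     /* no tag storage available */
}

/* ~arch_do_swap_page(): put the tag back when the page is swapped in */
static int restore_tag(unsigned long addr, unsigned char *tag)
{
    for (int i = 0; i < SLOTS; i++)
        if (store[i].used && store[i].addr == addr) {
            *tag = store[i].tag;
            store[i].used = 0;     /* slot released once consumed */
            return 0;
        }
    return -1;                     /* freshly mapped page: nothing saved */
}
```

The pte_none(oldpte) early return in the real hook corresponds to the "nothing saved" case here: a brand-new mapping cannot have stashed tags.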
+1 -1
arch/sparc/include/asm/thread_info_64.h
··· 188 188 * in using in assembly, else we can't use the mask as 189 189 * an immediate value in instructions such as andcc. 190 190 */ 191 - /* flag bit 12 is available */ 191 + #define TIF_MCDPER 12 /* Precise MCD exception */ 192 192 #define TIF_MEMDIE 13 /* is terminating due to OOM killer */ 193 193 #define TIF_POLLING_NRFLAG 14 194 194
+2
arch/sparc/include/asm/trap_block.h
··· 76 76 __sun4v_1insn_patch_end; 77 77 extern struct sun4v_1insn_patch_entry __fast_win_ctrl_1insn_patch, 78 78 __fast_win_ctrl_1insn_patch_end; 79 + extern struct sun4v_1insn_patch_entry __sun_m7_1insn_patch, 80 + __sun_m7_1insn_patch_end; 79 81 80 82 struct sun4v_2insn_patch_entry { 81 83 unsigned int addr;
+10
arch/sparc/include/asm/ttable.h
··· 219 219 nop; \ 220 220 nop; 221 221 222 + #define SUN4V_MCD_PRECISE \ 223 + ldxa [%g0] ASI_SCRATCHPAD, %g2; \ 224 + ldx [%g2 + HV_FAULT_D_ADDR_OFFSET], %g4; \ 225 + ldx [%g2 + HV_FAULT_D_CTX_OFFSET], %g5; \ 226 + ba,pt %xcc, etrap; \ 227 + rd %pc, %g7; \ 228 + ba,pt %xcc, sun4v_mcd_detect_precise; \ 229 + nop; \ 230 + nop; 231 + 222 232 /* Before touching these macros, you owe it to yourself to go and 223 233 * see how arch/sparc64/kernel/winfixup.S works... -DaveM 224 234 *
+5
arch/sparc/include/uapi/asm/asi.h
··· 145 145 * ASIs, "(4V)" designates SUN4V specific ASIs. "(NG4)" designates SPARC-T4 146 146 * and later ASIs. 147 147 */ 148 + #define ASI_MCD_PRIV_PRIMARY 0x02 /* (NG7) Privileged MCD version VA */ 149 + #define ASI_MCD_REAL 0x05 /* (NG7) Privileged MCD version PA */ 148 150 #define ASI_PHYS_USE_EC 0x14 /* PADDR, E-cachable */ 149 151 #define ASI_PHYS_BYPASS_EC_E 0x15 /* PADDR, E-bit */ 150 152 #define ASI_BLK_AIUP_4V 0x16 /* (4V) Prim, user, block ld/st */ ··· 247 245 #define ASI_UDBL_CONTROL_R 0x7f /* External UDB control regs rd low*/ 248 246 #define ASI_INTR_R 0x7f /* IRQ vector dispatch read */ 249 247 #define ASI_INTR_DATAN_R 0x7f /* (III) In irq vector data reg N */ 248 + #define ASI_MCD_PRIMARY 0x90 /* (NG7) MCD version load/store */ 249 + #define ASI_MCD_ST_BLKINIT_PRIMARY \ 250 + 0x92 /* (NG7) MCD store BLKINIT primary */ 250 251 #define ASI_PIC 0xb0 /* (NG4) PIC registers */ 251 252 #define ASI_PST8_P 0xc0 /* Primary, 8 8-bit, partial */ 252 253 #define ASI_PST8_S 0xc1 /* Secondary, 8 8-bit, partial */
+11
arch/sparc/include/uapi/asm/auxvec.h
··· 3 3 4 4 #define AT_SYSINFO_EHDR 33 5 5 6 + #ifdef CONFIG_SPARC64 7 + /* Avoid overlap with other AT_* values since they are consolidated in 8 + * glibc and any overlaps can cause problems 9 + */ 10 + #define AT_ADI_BLKSZ 48 11 + #define AT_ADI_NBITS 49 12 + #define AT_ADI_UEONADI 50 13 + 14 + #define AT_VECTOR_SIZE_ARCH 4 15 + #else 6 16 #define AT_VECTOR_SIZE_ARCH 1 17 + #endif 7 18 8 19 #endif /* !(__ASMSPARC_AUXVEC_H) */
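Userspace discovers these capabilities through the aux vector. A hedged sketch of the consumer side (the AT_* values are defined locally since non-sparc libc headers will not carry them; on kernels without this patch getauxval() simply returns 0 for them):

```c
#include <assert.h>
#include <stdio.h>
#include <sys/auxv.h>

/* Mirror the new sparc64 auxvec entries for portability */
#ifndef AT_ADI_BLKSZ
#define AT_ADI_BLKSZ   48
#define AT_ADI_NBITS   49
#define AT_ADI_UEONADI 50
#endif

/* getauxval() returns 0 for absent entries, so a zero block size
 * doubles as "no ADI on this kernel/CPU". */
static int adi_available(unsigned long blksz)
{
    return blksz != 0;
}

static void report_adi(void)
{
    unsigned long blksz = getauxval(AT_ADI_BLKSZ);
    unsigned long nbits = getauxval(AT_ADI_NBITS);

    if (adi_available(blksz))
        printf("ADI: block size %lu bytes, %lu tag bits\n", blksz, nbits);
    else
        printf("ADI not reported in the aux vector\n");
}
```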
+2
arch/sparc/include/uapi/asm/mman.h
··· 6 6 7 7 /* SunOS'ified... */ 8 8 9 + #define PROT_ADI 0x10 /* ADI enabled */ 10 + 9 11 #define MAP_RENAME MAP_ANONYMOUS /* In SunOS terminology */ 10 12 #define MAP_NORESERVE 0x40 /* don't reserve swap pages */ 11 13 #define MAP_INHERIT 0x80 /* SunOS doesn't do this, but... */
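mprotect(addr, len, PROT_READ | PROT_WRITE | PROT_ADI) is what turns checking on for a range; after that, every pointer used to touch the range must carry the matching version tag in VA bits 63:60. A sketch of the pointer-side tagging (helper names are mine, and writing the tag into memory itself requires the sparc stxa instruction with ASI_MCD_PRIMARY, which this portable code cannot do):

```c
#include <assert.h>
#include <stdint.h>

/* ADI version tags live in the top 4 bits of the 64-bit VA, which
 * sparc64 leaves unused for addressing. */
#define ADI_TAG_SHIFT 60
#define ADI_TAG_MASK  (0xFUL << ADI_TAG_SHIFT)

static inline uintptr_t adi_set_ptr_tag(uintptr_t addr, unsigned tag)
{
    return (addr & ~ADI_TAG_MASK) | ((uintptr_t)(tag & 0xF) << ADI_TAG_SHIFT);
}

static inline unsigned adi_get_ptr_tag(uintptr_t addr)
{
    return (unsigned)(addr >> ADI_TAG_SHIFT) & 0xF;
}
```

Dereferencing through a pointer whose tag does not match the tag set on the cacheline-sized block is what raises the MCD trap.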
+10
arch/sparc/include/uapi/asm/pstate.h
··· 11 11 * ----------------------------------------------------------------------- 12 12 * 63 12 11 10 9 8 7 6 5 4 3 2 1 0 13 13 */ 14 + /* IG on V9 conflicts with MCDE on M7. PSTATE_MCDE will only be used on 15 + * processors that support ADI which do not use IG, hence there is no 16 + * functional conflict 17 + */ 14 18 #define PSTATE_IG _AC(0x0000000000000800,UL) /* Interrupt Globals. */ 19 + #define PSTATE_MCDE _AC(0x0000000000000800,UL) /* MCD Enable */ 15 20 #define PSTATE_MG _AC(0x0000000000000400,UL) /* MMU Globals. */ 16 21 #define PSTATE_CLE _AC(0x0000000000000200,UL) /* Current Little Endian.*/ 17 22 #define PSTATE_TLE _AC(0x0000000000000100,UL) /* Trap Little Endian. */ ··· 53 48 #define TSTATE_ASI _AC(0x00000000ff000000,UL) /* AddrSpace ID. */ 54 49 #define TSTATE_PIL _AC(0x0000000000f00000,UL) /* %pil (Linux traps)*/ 55 50 #define TSTATE_PSTATE _AC(0x00000000000fff00,UL) /* PSTATE. */ 51 + /* IG on V9 conflicts with MCDE on M7. TSTATE_MCDE will only be used on 52 + * processors that support ADI which do not support IG, hence there is 53 + * no functional conflict 54 + */ 56 55 #define TSTATE_IG _AC(0x0000000000080000,UL) /* Interrupt Globals.*/ 56 + #define TSTATE_MCDE _AC(0x0000000000080000,UL) /* MCD enable. */ 57 57 #define TSTATE_MG _AC(0x0000000000040000,UL) /* MMU Globals. */ 58 58 #define TSTATE_CLE _AC(0x0000000000020000,UL) /* CurrLittleEndian. */ 59 59 #define TSTATE_TLE _AC(0x0000000000010000,UL) /* TrapLittleEndian. */
+1
arch/sparc/kernel/Makefile
··· 69 69 obj-$(CONFIG_SPARC64) += hvapi.o 70 70 obj-$(CONFIG_SPARC64) += sstate.o 71 71 obj-$(CONFIG_SPARC64) += mdesc.o 72 + obj-$(CONFIG_SPARC64) += adi_64.o 72 73 obj-$(CONFIG_SPARC64) += pcr.o 73 74 obj-$(CONFIG_SPARC64) += nmi.o 74 75 obj-$(CONFIG_SPARC64_SMP) += cpumap.o
+397
arch/sparc/kernel/adi_64.c
··· 1 + /* adi_64.c: support for ADI (Application Data Integrity) feature on 2 + * sparc m7 and newer processors. This feature is also known as 3 + * SSM (Silicon Secured Memory). 4 + * 5 + * Copyright (C) 2016 Oracle and/or its affiliates. All rights reserved. 6 + * Author: Khalid Aziz (khalid.aziz@oracle.com) 7 + * 8 + * This work is licensed under the terms of the GNU GPL, version 2. 9 + */ 10 + #include <linux/init.h> 11 + #include <linux/slab.h> 12 + #include <linux/mm_types.h> 13 + #include <asm/mdesc.h> 14 + #include <asm/adi_64.h> 15 + #include <asm/mmu_64.h> 16 + #include <asm/pgtable_64.h> 17 + 18 + /* Each page of storage for ADI tags can accommodate tags for 128 19 + * pages. When ADI enabled pages are being swapped out, it would be 20 + * prudent to allocate at least enough tag storage space to accommodate 21 + * SWAPFILE_CLUSTER number of pages. Allocate enough tag storage to 22 + * store tags for four SWAPFILE_CLUSTER pages to reduce need for 23 + * further allocations for same vma. 24 + */ 25 + #define TAG_STORAGE_PAGES 8 26 + 27 + struct adi_config adi_state; 28 + EXPORT_SYMBOL(adi_state); 29 + 30 + /* mdesc_adi_init() : Parse machine description provided by the 31 + * hypervisor to detect ADI capabilities 32 + * 33 + * Hypervisor reports ADI capabilities of platform in "hwcap-list" property 34 + * for "cpu" node. If the platform supports ADI, "hwcap-list" property 35 + * contains the keyword "adp". If the platform supports ADI, "platform" 36 + * node will contain "adp-blksz", "adp-nbits" and "ue-on-adp" properties 37 + * to describe the ADI capabilities. 
38 + */ 39 + void __init mdesc_adi_init(void) 40 + { 41 + struct mdesc_handle *hp = mdesc_grab(); 42 + const char *prop; 43 + u64 pn, *val; 44 + int len; 45 + 46 + if (!hp) 47 + goto adi_not_found; 48 + 49 + pn = mdesc_node_by_name(hp, MDESC_NODE_NULL, "cpu"); 50 + if (pn == MDESC_NODE_NULL) 51 + goto adi_not_found; 52 + 53 + prop = mdesc_get_property(hp, pn, "hwcap-list", &len); 54 + if (!prop) 55 + goto adi_not_found; 56 + 57 + /* 58 + * Look for "adp" keyword in hwcap-list which would indicate 59 + * ADI support 60 + */ 61 + adi_state.enabled = false; 62 + while (len) { 63 + int plen; 64 + 65 + if (!strcmp(prop, "adp")) { 66 + adi_state.enabled = true; 67 + break; 68 + } 69 + 70 + plen = strlen(prop) + 1; 71 + prop += plen; 72 + len -= plen; 73 + } 74 + 75 + if (!adi_state.enabled) 76 + goto adi_not_found; 77 + 78 + /* Find the ADI properties in "platform" node. If any ADI 79 + * property is missing, ADI support is incomplete, so 80 + * do not enable ADI in the kernel. 81 + */ 82 + pn = mdesc_node_by_name(hp, MDESC_NODE_NULL, "platform"); 83 + if (pn == MDESC_NODE_NULL) 84 + goto adi_not_found; 85 + 86 + val = (u64 *) mdesc_get_property(hp, pn, "adp-blksz", &len); 87 + if (!val) 88 + goto adi_not_found; 89 + adi_state.caps.blksz = *val; 90 + 91 + val = (u64 *) mdesc_get_property(hp, pn, "adp-nbits", &len); 92 + if (!val) 93 + goto adi_not_found; 94 + adi_state.caps.nbits = *val; 95 + 96 + val = (u64 *) mdesc_get_property(hp, pn, "ue-on-adp", &len); 97 + if (!val) 98 + goto adi_not_found; 99 + adi_state.caps.ue_on_adi = *val; 100 + 101 + /* Some of the code to support swapping ADI tags is written 102 + * with the assumption that two ADI tags can fit inside one byte. If 103 + * this assumption is broken by a future architecture change, 104 + * that code will have to be revisited.
If that were to happen, 105 + * disable ADI support so we do not get unpredictable results 106 + * with programs trying to use ADI and their pages getting 107 + * swapped out 108 + */ 109 + if (adi_state.caps.nbits > 4) { 110 + pr_warn("WARNING: ADI tag size >4 on this platform. Disabling ADI support\n"); 111 + adi_state.enabled = false; 112 + } 113 + 114 + mdesc_release(hp); 115 + return; 116 + 117 + adi_not_found: 118 + adi_state.enabled = false; 119 + adi_state.caps.blksz = 0; 120 + adi_state.caps.nbits = 0; 121 + if (hp) 122 + mdesc_release(hp); 123 + }
Check if this vma already has tag storage descriptor 173 + * allocated for it. 174 + */ 175 + spin_lock_irqsave(&mm->context.tag_lock, flags); 176 + if (mm->context.tag_store) { 177 + tag_desc = mm->context.tag_store; 178 + 179 + /* Look for a matching entry for this address. While doing 180 + * that, look for the first open slot as well and find 181 + * the hole in already allocated range where this request 182 + * will fit in. 183 + */ 184 + for (i = 0; i < max_desc; i++) { 185 + if (tag_desc->tag_users == 0) { 186 + if (open_desc == NULL) 187 + open_desc = tag_desc; 188 + } else { 189 + if ((addr >= tag_desc->start) && 190 + (tag_desc->end >= (addr + PAGE_SIZE - 1))) { 191 + tag_desc->tag_users++; 192 + goto out; 193 + } 194 + } 195 + if ((tag_desc->start > end_addr) && 196 + (tag_desc->start < hole_end)) 197 + hole_end = tag_desc->start; 198 + if ((tag_desc->end < addr) && 199 + (tag_desc->end > hole_start)) 200 + hole_start = tag_desc->end; 201 + tag_desc++; 202 + } 203 + 204 + } else { 205 + size = sizeof(tag_storage_desc_t)*max_desc; 206 + mm->context.tag_store = kzalloc(size, GFP_NOWAIT|__GFP_NOWARN); 207 + if (mm->context.tag_store == NULL) { 208 + tag_desc = NULL; 209 + goto out; 210 + } 211 + tag_desc = mm->context.tag_store; 212 + for (i = 0; i < max_desc; i++, tag_desc++) 213 + tag_desc->tag_users = 0; 214 + open_desc = mm->context.tag_store; 215 + i = 0; 216 + } 217 + 218 + /* Check if we ran out of tag storage descriptors */ 219 + if (open_desc == NULL) { 220 + tag_desc = NULL; 221 + goto out; 222 + } 223 + 224 + /* Mark this tag descriptor slot in use and then initialize it */ 225 + tag_desc = open_desc; 226 + tag_desc->tag_users = 1; 227 + 228 + /* Tag storage has not been allocated for this vma and space 229 + * is available in tag storage descriptor. Since this page is 230 + * being swapped out, there is high probability subsequent pages 231 + * in the VMA will be swapped out as well. 
Allocate pages to 232 + * store tags for as many pages in this vma as possible but not 233 + * more than TAG_STORAGE_PAGES. Each byte in tag space holds 234 + * two ADI tags since each ADI tag is 4 bits. Each ADI tag 235 + * covers adi_blksize() worth of addresses. Check if the hole is 236 + * big enough to accommodate full address range for using 237 + * TAG_STORAGE_PAGES number of tag pages. 238 + */ 239 + size = TAG_STORAGE_PAGES * PAGE_SIZE; 240 + end_addr = addr + (size*2*adi_blksize()) - 1; 241 + /* Check for overflow. If overflow occurs, allocate only one page */ 242 + if (end_addr < addr) { 243 + size = PAGE_SIZE; 244 + end_addr = addr + (size*2*adi_blksize()) - 1; 245 + /* If overflow happens with the minimum tag storage 246 + * allocation as well, adjust ending address for this 247 + * tag storage. 248 + */ 249 + if (end_addr < addr) 250 + end_addr = ULONG_MAX; 251 + } 252 + if (hole_end < end_addr) { 253 + /* Available hole is too small on the upper end of 254 + * address. Can we expand the range towards the lower 255 + * address and maximize use of this slot? 256 + */ 257 + unsigned long tmp_addr; 258 + 259 + end_addr = hole_end - 1; 260 + tmp_addr = end_addr - (size*2*adi_blksize()) + 1; 261 + /* Check for underflow. If underflow occurs, allocate 262 + * only one page for storing ADI tags 263 + */ 264 + if (tmp_addr > addr) { 265 + size = PAGE_SIZE; 266 + tmp_addr = end_addr - (size*2*adi_blksize()) - 1; 267 + /* If underflow happens with the minimum tag storage 268 + * allocation as well, adjust starting address for 269 + * this tag storage. 
270 + */ 271 + if (tmp_addr > addr) 272 + tmp_addr = 0; 273 + } 274 + if (tmp_addr < hole_start) { 275 + /* Available hole is restricted on lower address 276 + * end as well 277 + */ 278 + tmp_addr = hole_start + 1; 279 + } 280 + addr = tmp_addr; 281 + size = (end_addr + 1 - addr)/(2*adi_blksize()); 282 + size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE; 283 + size = size * PAGE_SIZE; 284 + } 285 + tags = kzalloc(size, GFP_NOWAIT|__GFP_NOWARN); 286 + if (tags == NULL) { 287 + tag_desc->tag_users = 0; 288 + tag_desc = NULL; 289 + goto out; 290 + } 291 + tag_desc->start = addr; 292 + tag_desc->tags = tags; 293 + tag_desc->end = end_addr; 294 + 295 + out: 296 + spin_unlock_irqrestore(&mm->context.tag_lock, flags); 297 + return tag_desc; 298 + } 299 + 300 + void del_tag_store(tag_storage_desc_t *tag_desc, struct mm_struct *mm) 301 + { 302 + unsigned long flags; 303 + unsigned char *tags = NULL; 304 + 305 + spin_lock_irqsave(&mm->context.tag_lock, flags); 306 + tag_desc->tag_users--; 307 + if (tag_desc->tag_users == 0) { 308 + tag_desc->start = tag_desc->end = 0; 309 + /* Do not free up the tag storage space allocated 310 + * by the first descriptor. This is persistent 311 + * emergency tag storage space for the task. 312 + */ 313 + if (tag_desc != mm->context.tag_store) { 314 + tags = tag_desc->tags; 315 + tag_desc->tags = NULL; 316 + } 317 + } 318 + spin_unlock_irqrestore(&mm->context.tag_lock, flags); 319 + kfree(tags); 320 + } 321 + 322 + #define tag_start(addr, tag_desc) \ 323 + ((tag_desc)->tags + ((addr - (tag_desc)->start)/(2*adi_blksize()))) 324 + 325 + /* Retrieve any saved ADI tags for the page being swapped back in and 326 + * restore these tags to the newly allocated physical page. 
327 + */ 328 + void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma, 329 + unsigned long addr, pte_t pte) 330 + { 331 + unsigned char *tag; 332 + tag_storage_desc_t *tag_desc; 333 + unsigned long paddr, tmp, version1, version2; 334 + 335 + /* Check if the swapped out page has an ADI version 336 + * saved. If yes, restore version tag to the newly 337 + * allocated page. 338 + */ 339 + tag_desc = find_tag_store(mm, vma, addr); 340 + if (tag_desc == NULL) 341 + return; 342 + 343 + tag = tag_start(addr, tag_desc); 344 + paddr = pte_val(pte) & _PAGE_PADDR_4V; 345 + for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) { 346 + version1 = (*tag) >> 4; 347 + version2 = (*tag) & 0x0f; 348 + *tag++ = 0; 349 + asm volatile("stxa %0, [%1] %2\n\t" 350 + : 351 + : "r" (version1), "r" (tmp), 352 + "i" (ASI_MCD_REAL)); 353 + tmp += adi_blksize(); 354 + asm volatile("stxa %0, [%1] %2\n\t" 355 + : 356 + : "r" (version2), "r" (tmp), 357 + "i" (ASI_MCD_REAL)); 358 + } 359 + asm volatile("membar #Sync\n\t"); 360 + 361 + /* Check and mark this tag space for release later if 362 + * the swapped in page was the last user of tag space 363 + */ 364 + del_tag_store(tag_desc, mm); 365 + } 366 + 367 + /* A page is about to be swapped out. Save any ADI tags associated with 368 + * this physical page so they can be restored later when the page is swapped 369 + * back in. 
370 + */ 371 + int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma, 372 + unsigned long addr, pte_t oldpte) 373 + { 374 + unsigned char *tag; 375 + tag_storage_desc_t *tag_desc; 376 + unsigned long version1, version2, paddr, tmp; 377 + 378 + tag_desc = alloc_tag_store(mm, vma, addr); 379 + if (tag_desc == NULL) 380 + return -1; 381 + 382 + tag = tag_start(addr, tag_desc); 383 + paddr = pte_val(oldpte) & _PAGE_PADDR_4V; 384 + for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) { 385 + asm volatile("ldxa [%1] %2, %0\n\t" 386 + : "=r" (version1) 387 + : "r" (tmp), "i" (ASI_MCD_REAL)); 388 + tmp += adi_blksize(); 389 + asm volatile("ldxa [%1] %2, %0\n\t" 390 + : "=r" (version2) 391 + : "r" (tmp), "i" (ASI_MCD_REAL)); 392 + *tag = (version1 << 4) | version2; 393 + tag++; 394 + } 395 + 396 + return 0; 397 + }
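adi_save_tags() and adi_restore_tags() above pack two 4-bit version tags into each byte of tag storage, and tag_start() locates the byte covering a given address. The arithmetic can be sketched portably (a 64-byte block size is assumed here for illustration; the real value comes from the "adp-blksz" machine description property):

```c
#include <assert.h>
#include <stddef.h>

#define ADI_BLKSZ 64UL   /* assumed block size, normally adi_blksize() */

/* Two 4-bit tags per byte: the lower block's tag in the high nibble,
 * the next block's tag in the low nibble. */
static unsigned char pack_tags(unsigned v1, unsigned v2)
{
    return (unsigned char)(((v1 & 0xF) << 4) | (v2 & 0xF));
}

/* Mirror of the kernel's tag_start() macro: byte offset of the packed
 * tags covering 'addr' within a descriptor starting at 'start'. */
static size_t tag_offset(unsigned long addr, unsigned long start)
{
    return (addr - start) / (2 * ADI_BLKSZ);
}
```

This 2-tags-per-byte packing is exactly the assumption that mdesc_adi_init() guards with its nbits > 4 check.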
+3
arch/sparc/kernel/entry.h
··· 160 160 void sun4v_nonresum_error(struct pt_regs *regs, 161 161 unsigned long offset); 162 162 void sun4v_nonresum_overflow(struct pt_regs *regs); 163 + void sun4v_mem_corrupt_detect_precise(struct pt_regs *regs, 164 + unsigned long addr, 165 + unsigned long context); 163 166 164 167 extern unsigned long sun4v_err_itlb_vaddr; 165 168 extern unsigned long sun4v_err_itlb_ctx;
+26 -1
arch/sparc/kernel/etrap_64.S
··· 151 151 stx %g6, [%sp + PTREGS_OFF + PT_V9_G6] 152 152 stx %g7, [%sp + PTREGS_OFF + PT_V9_G7] 153 153 or %l7, %l0, %l7 154 - sethi %hi(TSTATE_TSO | TSTATE_PEF), %l0 154 + 661: sethi %hi(TSTATE_TSO | TSTATE_PEF), %l0 155 + /* If userspace is using ADI, it could potentially pass 156 + * a pointer with version tag embedded in it. To maintain 157 + * the ADI security, we must enable PSTATE.mcde. Userspace 158 + * would have already set TTE.mcd in an earlier call to 159 + * kernel and set the version tag for the address being 160 + * dereferenced. Setting PSTATE.mcde would ensure any 161 + * access to userspace data through a system call honors 162 + * ADI and does not allow a rogue app to bypass ADI by 163 + * using system calls. Setting PSTATE.mcde only affects 164 + * accesses to virtual addresses that have TTE.mcd set. 165 + * Set PMCDPER to ensure any exceptions caused by ADI 166 + * version tag mismatch are exposed before system call 167 + * returns to userspace. Setting PMCDPER affects only 168 + * writes to virtual addresses that have TTE.mcd set and 169 + * have a version tag set as well. 170 + */ 171 + .section .sun_m7_1insn_patch, "ax" 172 + .word 661b 173 + sethi %hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0 174 + .previous 175 + 661: nop 176 + .section .sun_m7_1insn_patch, "ax" 177 + .word 661b 178 + .word 0xaf902001 /* wrpr %g0, 1, %pmcdper */ 179 + .previous 155 180 or %l7, %l0, %l7 156 181 wrpr %l2, %tnpc 157 182 wrpr %l7, (TSTATE_PRIV | TSTATE_IE), %tstate
+1
arch/sparc/kernel/head_64.S
··· 897 897 #include "syscalls.S" 898 898 #include "helpers.S" 899 899 #include "sun4v_tlb_miss.S" 900 + #include "sun4v_mcd.S" 900 901 #include "sun4v_ivec.S" 901 902 #include "ktlb.S" 902 903 #include "tsb.S"
+2
arch/sparc/kernel/mdesc.c
··· 22 22 #include <linux/uaccess.h> 23 23 #include <asm/oplib.h> 24 24 #include <asm/smp.h> 25 + #include <asm/adi.h> 25 26 26 27 /* Unlike the OBP device tree, the machine description is a full-on 27 28 * DAG. An arbitrary number of ARCs are possible from one ··· 1346 1345 1347 1346 cur_mdesc = hp; 1348 1347 1348 + mdesc_adi_init(); 1349 1349 report_platform_properties(); 1350 1350 }
+25
arch/sparc/kernel/process_64.c
··· 670 670 return 0; 671 671 } 672 672 673 + /* TIF_MCDPER in thread info flags for current task is updated lazily upon 674 + * a context switch. Update this flag in current task's thread flags 675 + * before dup so the dup'd task will inherit the current TIF_MCDPER flag. 676 + */ 677 + int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) 678 + { 679 + if (adi_capable()) { 680 + register unsigned long tmp_mcdper; 681 + 682 + __asm__ __volatile__( 683 + ".word 0x83438000\n\t" /* rd %mcdper, %g1 */ 684 + "mov %%g1, %0\n\t" 685 + : "=r" (tmp_mcdper) 686 + : 687 + : "g1"); 688 + if (tmp_mcdper) 689 + set_thread_flag(TIF_MCDPER); 690 + else 691 + clear_thread_flag(TIF_MCDPER); 692 + } 693 + 694 + *dst = *src; 695 + return 0; 696 + } 697 + 673 698 typedef struct { 674 699 union { 675 700 unsigned int pr_regs[32];
+30 -3
arch/sparc/kernel/rtrap_64.S
··· 25 25 .align 32 26 26 __handle_preemption: 27 27 call SCHEDULE_USER 28 - wrpr %g0, RTRAP_PSTATE, %pstate 28 + 661: wrpr %g0, RTRAP_PSTATE, %pstate 29 + /* If userspace is using ADI, it could potentially pass 30 + * a pointer with version tag embedded in it. To maintain 31 + * the ADI security, we must re-enable PSTATE.mcde before 32 + * we continue execution in the kernel for another thread. 33 + */ 34 + .section .sun_m7_1insn_patch, "ax" 35 + .word 661b 36 + wrpr %g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate 37 + .previous 29 38 ba,pt %xcc, __handle_preemption_continue 30 39 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 31 40 32 41 __handle_user_windows: 33 42 call fault_in_user_windows 34 - wrpr %g0, RTRAP_PSTATE, %pstate 43 + 661: wrpr %g0, RTRAP_PSTATE, %pstate 44 + /* If userspace is using ADI, it could potentially pass 45 + * a pointer with version tag embedded in it. To maintain 46 + * the ADI security, we must re-enable PSTATE.mcde before 47 + * we continue execution in the kernel for another thread. 48 + */ 49 + .section .sun_m7_1insn_patch, "ax" 50 + .word 661b 51 + wrpr %g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate 52 + .previous 35 53 ba,pt %xcc, __handle_preemption_continue 36 54 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 37 55 ··· 66 48 add %sp, PTREGS_OFF, %o0 67 49 mov %l0, %o2 68 50 call do_notify_resume 69 - wrpr %g0, RTRAP_PSTATE, %pstate 51 + 661: wrpr %g0, RTRAP_PSTATE, %pstate 52 + /* If userspace is using ADI, it could potentially pass 53 + * a pointer with version tag embedded in it. To maintain 54 + * the ADI security, we must re-enable PSTATE.mcde before 55 + * we continue execution in the kernel for another thread. 56 + */ 57 + .section .sun_m7_1insn_patch, "ax" 58 + .word 661b 59 + wrpr %g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate 60 + .previous 70 61 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 71 62 72 63 /* Signal delivery can modify pt_regs tstate, so we must
+2
arch/sparc/kernel/setup_64.c
··· 294 294 case SUN4V_CHIP_SPARC_M7: 295 295 case SUN4V_CHIP_SPARC_M8: 296 296 case SUN4V_CHIP_SPARC_SN: 297 + sun4v_patch_1insn_range(&__sun_m7_1insn_patch, 298 + &__sun_m7_1insn_patch_end); 297 299 sun_m7_patch_2insn_range(&__sun_m7_2insn_patch, 298 300 &__sun_m7_2insn_patch_end); 299 301 break;
+18
arch/sparc/kernel/sun4v_mcd.S
··· 1 + /* sun4v_mcd.S: Sun4v memory corruption detected precise exception handler 2 + * 3 + * Copyright (c) 2015 Oracle and/or its affiliates. All rights reserved. 4 + * Authors: Bob Picco <bob.picco@oracle.com>, 5 + * Khalid Aziz <khalid.aziz@oracle.com> 6 + * 7 + * This work is licensed under the terms of the GNU GPL, version 2. 8 + */ 9 + .text 10 + .align 32 11 + 12 + sun4v_mcd_detect_precise: 13 + mov %l4, %o1 14 + mov %l5, %o2 15 + call sun4v_mem_corrupt_detect_precise 16 + add %sp, PTREGS_OFF, %o0 17 + ba,a,pt %xcc, rtrap 18 + nop
+123 -7
arch/sparc/kernel/traps_64.c
··· 362 362 { 363 363 unsigned short type = (type_ctx >> 16); 364 364 unsigned short ctx = (type_ctx & 0xffff); 365 - siginfo_t info; 366 365 367 366 if (notify_die(DIE_TRAP, "data access exception", regs, 368 367 0, 0x8, SIGTRAP) == NOTIFY_STOP) ··· 396 397 if (is_no_fault_exception(regs)) 397 398 return; 398 399 399 - info.si_signo = SIGSEGV; 400 - info.si_errno = 0; 401 - info.si_code = SEGV_MAPERR; 402 - info.si_addr = (void __user *) addr; 403 - info.si_trapno = 0; 404 - force_sig_info(SIGSEGV, &info, current); 400 + /* MCD (Memory Corruption Detection) disabled trap (TT=0x19) in HV 401 + * is vectored through the data access exception trap with fault type 402 + * set to HV_FAULT_TYPE_MCD_DIS. Check for MCD disabled trap. 403 + * Accessing an address with an ASI that is invalid for that address, 404 + * for example setting an ADI tag on an address with ASI_MCD_PRIMARY 405 + * when TTE.mcd is not set for the VA, is also vectored into the 406 + * kernel by HV as a data access exception with fault type set to 407 + * HV_FAULT_TYPE_INV_ASI.
408 + */ 409 + switch (type) { 410 + case HV_FAULT_TYPE_INV_ASI: 411 + force_sig_fault(SIGILL, ILL_ILLADR, (void __user *)addr, 0, 412 + current); 413 + break; 414 + case HV_FAULT_TYPE_MCD_DIS: 415 + force_sig_fault(SIGSEGV, SEGV_ACCADI, (void __user *)addr, 0, 416 + current); 417 + break; 418 + default: 419 + force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)addr, 0, 420 + current); 421 + break; 422 + } 405 423 } 406 424 407 425 void sun4v_data_access_exception_tl1(struct pt_regs *regs, unsigned long addr, unsigned long type_ctx) ··· 1863 1847 #define SUN4V_ERR_ATTRS_ASI 0x00000080 1864 1848 #define SUN4V_ERR_ATTRS_PRIV_REG 0x00000100 1865 1849 #define SUN4V_ERR_ATTRS_SPSTATE_MSK 0x00000600 1850 + #define SUN4V_ERR_ATTRS_MCD 0x00000800 1866 1851 #define SUN4V_ERR_ATTRS_SPSTATE_SHFT 9 1867 1852 #define SUN4V_ERR_ATTRS_MODE_MSK 0x03000000 1868 1853 #define SUN4V_ERR_ATTRS_MODE_SHFT 24 ··· 2061 2044 } 2062 2045 } 2063 2046 2047 + /* Handle a memory corruption detected error that is vectored in 2048 + * through the resumable error trap. 2049 + */ 2050 + void do_mcd_err(struct pt_regs *regs, struct sun4v_error_entry ent) 2051 + { 2052 + if (notify_die(DIE_TRAP, "MCD error", regs, 0, 0x34, 2053 + SIGSEGV) == NOTIFY_STOP) 2054 + return; 2055 + 2056 + if (regs->tstate & TSTATE_PRIV) { 2057 + /* MCD exception could happen because the task was 2058 + * running a system call with MCD enabled and passed a 2059 + * non-versioned pointer or pointer with bad version 2060 + * tag to the system call. In such cases, the hypervisor 2061 + * places the address of the offending instruction in the 2062 + * resumable error report. This is a deferred error, 2063 + * so the read/write that caused the trap may have been 2064 + * retired long ago and we may have no choice 2065 + * but to send SIGSEGV to the process.
2066 + */ 2067 + const struct exception_table_entry *entry; 2068 + 2069 + entry = search_exception_tables(regs->tpc); 2070 + if (entry) { 2071 + /* Looks like a bad syscall parameter */ 2072 + #ifdef DEBUG_EXCEPTIONS 2073 + pr_emerg("Exception: PC<%016lx> faddr<UNKNOWN>\n", 2074 + regs->tpc); 2075 + pr_emerg("EX_TABLE: insn<%016lx> fixup<%016lx>\n", 2076 + ent.err_raddr, entry->fixup); 2077 + #endif 2078 + regs->tpc = entry->fixup; 2079 + regs->tnpc = regs->tpc + 4; 2080 + return; 2081 + } 2082 + } 2083 + 2084 + /* Send SIGSEGV to the userspace process with the right signal 2085 + * code 2086 + */ 2087 + force_sig_fault(SIGSEGV, SEGV_ADIDERR, (void __user *)ent.err_raddr, 2088 + 0, current); 2089 + } 2090 + 2064 2091 /* We run with %pil set to PIL_NORMAL_MAX and PSTATE_IE enabled in %pstate. 2065 2092 * Log the event and clear the first word of the entry. 2066 2093 */ ··· 2140 2079 local_copy.err_secs); 2141 2080 orderly_poweroff(true); 2142 2081 goto out; 2082 + } 2083 + 2084 + /* If this is a memory corruption detected error vectored in 2085 + * by HV through resumable error trap, call the handler 2086 + */ 2087 + if (local_copy.err_attrs & SUN4V_ERR_ATTRS_MCD) { 2088 + do_mcd_err(regs, local_copy); 2089 + return; 2143 2090 } 2144 2091 2145 2092 sun4v_log_error(regs, &local_copy, cpu, ··· 2723 2654 info.si_addr = (void __user *) addr; 2724 2655 info.si_trapno = 0; 2725 2656 force_sig_info(SIGBUS, &info, current); 2657 + } 2658 + 2659 + /* sun4v_mem_corrupt_detect_precise() - Handle precise exception on an ADI 2660 + * tag mismatch. 2661 + * 2662 + * ADI version tag mismatch on a load from memory always results in a 2663 + * precise exception. Tag mismatch on a store to memory will result in 2664 + * precise exception if MCDPER or PMCDPER is set to 1. 
2665 + */ 2666 + void sun4v_mem_corrupt_detect_precise(struct pt_regs *regs, unsigned long addr, 2667 + unsigned long context) 2668 + { 2669 + if (notify_die(DIE_TRAP, "memory corruption precise exception", regs, 2670 + 0, 0x8, SIGSEGV) == NOTIFY_STOP) 2671 + return; 2672 + 2673 + if (regs->tstate & TSTATE_PRIV) { 2674 + /* MCD exception could happen because the task was running 2675 + * a system call with MCD enabled and passed a non-versioned 2676 + * pointer or pointer with bad version tag to the system 2677 + * call. 2678 + */ 2679 + const struct exception_table_entry *entry; 2680 + 2681 + entry = search_exception_tables(regs->tpc); 2682 + if (entry) { 2683 + /* Looks like a bad syscall parameter */ 2684 + #ifdef DEBUG_EXCEPTIONS 2685 + pr_emerg("Exception: PC<%016lx> faddr<UNKNOWN>\n", 2686 + regs->tpc); 2687 + pr_emerg("EX_TABLE: insn<%016lx> fixup<%016lx>\n", 2688 + regs->tpc, entry->fixup); 2689 + #endif 2690 + regs->tpc = entry->fixup; 2691 + regs->tnpc = regs->tpc + 4; 2692 + return; 2693 + } 2694 + pr_emerg("%s: ADDR[%016lx] CTX[%lx], going.\n", 2695 + __func__, addr, context); 2696 + die_if_kernel("MCD precise", regs); 2697 + } 2698 + 2699 + if (test_thread_flag(TIF_32BIT)) { 2700 + regs->tpc &= 0xffffffff; 2701 + regs->tnpc &= 0xffffffff; 2702 + } 2703 + force_sig_fault(SIGSEGV, SEGV_ADIPERR, (void __user *)addr, 0, current); 2726 2704 } 2727 2705 2728 2706 void do_privop(struct pt_regs *regs)
+4 -2
arch/sparc/kernel/ttable_64.S
··· 26 26 TRAP_7INSNS(do_illegal_instruction) 27 27 tl0_privop: TRAP(do_privop) 28 28 tl0_resv012: BTRAP(0x12) BTRAP(0x13) BTRAP(0x14) BTRAP(0x15) BTRAP(0x16) BTRAP(0x17) 29 - tl0_resv018: BTRAP(0x18) BTRAP(0x19) BTRAP(0x1a) BTRAP(0x1b) BTRAP(0x1c) BTRAP(0x1d) 30 - tl0_resv01e: BTRAP(0x1e) BTRAP(0x1f) 29 + tl0_resv018: BTRAP(0x18) BTRAP(0x19) 30 + tl0_mcd: SUN4V_MCD_PRECISE 31 + tl0_resv01b: BTRAP(0x1b) 32 + tl0_resv01c: BTRAP(0x1c) BTRAP(0x1d) BTRAP(0x1e) BTRAP(0x1f) 31 33 tl0_fpdis: TRAP_NOSAVE(do_fpdis) 32 34 tl0_fpieee: TRAP_SAVEFPU(do_fpieee) 33 35 tl0_fpother: TRAP_NOSAVE(do_fpother_check_fitos)
+6 -1
arch/sparc/kernel/urtt_fill.S
··· 50 50 SET_GL(0) 51 51 .previous 52 52 53 - wrpr %g0, RTRAP_PSTATE, %pstate 53 + 661: wrpr %g0, RTRAP_PSTATE, %pstate 54 + .section .sun_m7_1insn_patch, "ax" 55 + .word 661b 56 + /* Re-enable PSTATE.mcde to maintain ADI security */ 57 + wrpr %g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate 58 + .previous 54 59 55 60 mov %l1, %g6 56 61 ldx [%g6 + TI_TASK], %g4
+5
arch/sparc/kernel/vmlinux.lds.S
··· 145 145 *(.pause_3insn_patch) 146 146 __pause_3insn_patch_end = .; 147 147 } 148 + .sun_m7_1insn_patch : { 149 + __sun_m7_1insn_patch = .; 150 + *(.sun_m7_1insn_patch) 151 + __sun_m7_1insn_patch_end = .; 152 + } 148 153 .sun_m7_2insn_patch : { 149 154 __sun_m7_2insn_patch = .; 150 155 *(.sun_m7_2insn_patch)
+37
arch/sparc/mm/gup.c
··· 12 12 #include <linux/pagemap.h> 13 13 #include <linux/rwsem.h> 14 14 #include <asm/pgtable.h> 15 + #include <asm/adi.h> 15 16 16 17 /* 17 18 * The performance critical leaf functions are made noinline otherwise gcc ··· 202 201 pgd_t *pgdp; 203 202 int nr = 0; 204 203 204 + #ifdef CONFIG_SPARC64 205 + if (adi_capable()) { 206 + long addr = start; 207 + 208 + /* If userspace has passed a versioned address, kernel 209 + * will not find it in the VMAs since it does not store 210 + * the version tags in the list of VMAs. Storing version 211 + * tags in list of VMAs is impractical since they can be 212 + * changed any time from userspace without dropping into 213 + * kernel. Any address search in VMAs will be done with 214 + * non-versioned addresses. Ensure the ADI version bits 215 + * are dropped here by sign extending the last bit before 216 + * ADI bits. IOMMU does not implement version tags. 217 + */ 218 + addr = (addr << (long)adi_nbits()) >> (long)adi_nbits(); 219 + start = addr; 220 + } 221 + #endif 205 222 start &= PAGE_MASK; 206 223 addr = start; 207 224 len = (unsigned long) nr_pages << PAGE_SHIFT; ··· 250 231 pgd_t *pgdp; 251 232 int nr = 0; 252 233 234 + #ifdef CONFIG_SPARC64 235 + if (adi_capable()) { 236 + long addr = start; 237 + 238 + /* If userspace has passed a versioned address, kernel 239 + * will not find it in the VMAs since it does not store 240 + * the version tags in the list of VMAs. Storing version 241 + * tags in list of VMAs is impractical since they can be 242 + * changed any time from userspace without dropping into 243 + * kernel. Any address search in VMAs will be done with 244 + * non-versioned addresses. Ensure the ADI version bits 245 + * are dropped here by sign extending the last bit before 246 + * ADI bits. 
IOMMU does not implement version tags. 247 + */ 248 + addr = (addr << (long)adi_nbits()) >> (long)adi_nbits(); 249 + start = addr; 250 + } 251 + #endif 253 252 start &= PAGE_MASK; 254 253 addr = start; 255 254 len = (unsigned long) nr_pages << PAGE_SHIFT;
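The version-bit stripping in both gup.c copies relies on an arithmetic shift: moving the tagged VA left by adi_nbits() and back right sign-extends the last bit below the tag, reducing a versioned address to the canonical one the VMA lookup expects. A minimal sketch of the idiom — `ADI_NBITS` and `adi_untag()` are illustrative stand-ins for adi_nbits() and the open-coded shift in the patch:

```c
#include <assert.h>

/* Stand-in for adi_nbits(): SPARC M7 reserves the top 4 VA bits for
 * the ADI version tag. */
#define ADI_NBITS 4

/* Drop the ADI version tag by sign-extending the last bit below it,
 * mirroring the (addr << n) >> n idiom from the patch. The left shift
 * is done on an unsigned value to avoid signed overflow; the right
 * shift is arithmetic because the operand is signed. */
static long adi_untag(long addr)
{
	return (long)((unsigned long)addr << ADI_NBITS) >> ADI_NBITS;
}
```

For a userspace address whose untagged top bits are clear, any 4-bit tag in bits 63..60 is discarded and the canonical address comes back out.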
+13 -1
arch/sparc/mm/hugetlbpage.c
··· 182 182 struct page *page, int writeable) 183 183 { 184 184 unsigned int shift = huge_page_shift(hstate_vma(vma)); 185 + pte_t pte; 185 186 186 - return hugepage_shift_to_tte(entry, shift); 187 + pte = hugepage_shift_to_tte(entry, shift); 188 + 189 + #ifdef CONFIG_SPARC64 190 + /* If this vma has ADI enabled on it, turn on TTE.mcd 191 + */ 192 + if (vma->vm_flags & VM_SPARC_ADI) 193 + return pte_mkmcd(pte); 194 + else 195 + return pte_mknotmcd(pte); 196 + #else 197 + return pte; 198 + #endif 187 199 } 188 200 189 201 static unsigned int sun4v_huge_tte_to_shift(pte_t entry)
+69
arch/sparc/mm/init_64.c
··· 3160 3160 do_flush_tlb_kernel_range(start, end); 3161 3161 } 3162 3162 } 3163 + 3164 + void copy_user_highpage(struct page *to, struct page *from, 3165 + unsigned long vaddr, struct vm_area_struct *vma) 3166 + { 3167 + char *vfrom, *vto; 3168 + 3169 + vfrom = kmap_atomic(from); 3170 + vto = kmap_atomic(to); 3171 + copy_user_page(vto, vfrom, vaddr, to); 3172 + kunmap_atomic(vto); 3173 + kunmap_atomic(vfrom); 3174 + 3175 + /* If this page has ADI enabled, copy over any ADI tags 3176 + * as well 3177 + */ 3178 + if (vma->vm_flags & VM_SPARC_ADI) { 3179 + unsigned long pfrom, pto, i, adi_tag; 3180 + 3181 + pfrom = page_to_phys(from); 3182 + pto = page_to_phys(to); 3183 + 3184 + for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) { 3185 + asm volatile("ldxa [%1] %2, %0\n\t" 3186 + : "=r" (adi_tag) 3187 + : "r" (i), "i" (ASI_MCD_REAL)); 3188 + asm volatile("stxa %0, [%1] %2\n\t" 3189 + : 3190 + : "r" (adi_tag), "r" (pto), 3191 + "i" (ASI_MCD_REAL)); 3192 + pto += adi_blksize(); 3193 + } 3194 + asm volatile("membar #Sync\n\t"); 3195 + } 3196 + } 3197 + EXPORT_SYMBOL(copy_user_highpage); 3198 + 3199 + void copy_highpage(struct page *to, struct page *from) 3200 + { 3201 + char *vfrom, *vto; 3202 + 3203 + vfrom = kmap_atomic(from); 3204 + vto = kmap_atomic(to); 3205 + copy_page(vto, vfrom); 3206 + kunmap_atomic(vto); 3207 + kunmap_atomic(vfrom); 3208 + 3209 + /* If this platform is ADI enabled, copy any ADI tags 3210 + * as well 3211 + */ 3212 + if (adi_capable()) { 3213 + unsigned long pfrom, pto, i, adi_tag; 3214 + 3215 + pfrom = page_to_phys(from); 3216 + pto = page_to_phys(to); 3217 + 3218 + for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) { 3219 + asm volatile("ldxa [%1] %2, %0\n\t" 3220 + : "=r" (adi_tag) 3221 + : "r" (i), "i" (ASI_MCD_REAL)); 3222 + asm volatile("stxa %0, [%1] %2\n\t" 3223 + : 3224 + : "r" (adi_tag), "r" (pto), 3225 + "i" (ASI_MCD_REAL)); 3226 + pto += adi_blksize(); 3227 + } 3228 + asm volatile("membar #Sync\n\t"); 3229 + } 
3230 + } 3231 + EXPORT_SYMBOL(copy_highpage);
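copy_user_highpage() and copy_highpage() above walk the physical page in adi_blksize() steps, moving one version tag per cacheline through the MCD ASI. A host-side model of that loop — `PAGE_SZ`, `ADI_BLKSZ`, and the tag arrays are stand-ins for the real page and for the ldxa/stxa ASI_MCD_REAL accesses, which exist only on M7-class hardware:

```c
#include <stddef.h>

#define PAGE_SZ   8192	/* sparc64 base page size */
#define ADI_BLKSZ 64	/* ADI tag granularity: one tag per cacheline */
#define NTAGS     (PAGE_SZ / ADI_BLKSZ)

/* Model of the tag-copy loop: for each adi_blksize() block of the
 * source page, read its tag and store it at the same offset in the
 * destination page. */
static void copy_page_tags(unsigned char to[NTAGS],
			   const unsigned char from[NTAGS])
{
	for (size_t i = 0; i < NTAGS; i++)
		to[i] = from[i];
}
```

The real code additionally issues `membar #Sync` after the loop so the tag stores are globally visible before the copied page is mapped.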
+21
arch/sparc/mm/tsb.c
··· 546 546 547 547 mm->context.sparc64_ctx_val = 0UL; 548 548 549 + mm->context.tag_store = NULL; 550 + spin_lock_init(&mm->context.tag_lock); 551 + 549 552 #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE) 550 553 /* We reset them to zero because the fork() page copying 551 554 * will re-increment the counters as the parent PTEs are ··· 614 611 } 615 612 616 613 spin_unlock_irqrestore(&ctx_alloc_lock, flags); 614 + 615 + /* If ADI tag storage was allocated for this task, free it */ 616 + if (mm->context.tag_store) { 617 + tag_storage_desc_t *tag_desc; 618 + unsigned long max_desc; 619 + unsigned char *tags; 620 + 621 + tag_desc = mm->context.tag_store; 622 + max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t); 623 + for (i = 0; i < max_desc; i++) { 624 + tags = tag_desc->tags; 625 + tag_desc->tags = NULL; 626 + kfree(tags); 627 + tag_desc++; 628 + } 629 + kfree(mm->context.tag_store); 630 + mm->context.tag_store = NULL; 631 + } 617 632 }
+1 -1
arch/x86/kernel/signal_compat.c
··· 27 27 */ 28 28 BUILD_BUG_ON(NSIGILL != 11); 29 29 BUILD_BUG_ON(NSIGFPE != 13); 30 - BUILD_BUG_ON(NSIGSEGV != 4); 30 + BUILD_BUG_ON(NSIGSEGV != 7); 31 31 BUILD_BUG_ON(NSIGBUS != 5); 32 32 BUILD_BUG_ON(NSIGTRAP != 4); 33 33 BUILD_BUG_ON(NSIGCHLD != 6);
+36
include/asm-generic/pgtable.h
··· 400 400 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 401 401 #endif 402 402 403 + #ifndef __HAVE_ARCH_DO_SWAP_PAGE 404 + /* 405 + * Some architectures support metadata associated with a page. When a 406 + * page is being swapped out, this metadata must be saved so it can be 407 + * restored when the page is swapped back in. SPARC M7 and newer 408 + * processors support an ADI (Application Data Integrity) tag for the 409 + * page as metadata for the page. arch_do_swap_page() can restore this 410 + * metadata when a page is swapped back in. 411 + */ 412 + static inline void arch_do_swap_page(struct mm_struct *mm, 413 + struct vm_area_struct *vma, 414 + unsigned long addr, 415 + pte_t pte, pte_t oldpte) 416 + { 417 + 418 + } 419 + #endif 420 + 421 + #ifndef __HAVE_ARCH_UNMAP_ONE 422 + /* 423 + * Some architectures support metadata associated with a page. When a 424 + * page is being swapped out, this metadata must be saved so it can be 425 + * restored when the page is swapped back in. SPARC M7 and newer 426 + * processors support an ADI (Application Data Integrity) tag for the 427 + * page as metadata for the page. arch_unmap_one() can save this 428 + * metadata on a swap-out of a page. 429 + */ 430 + static inline int arch_unmap_one(struct mm_struct *mm, 431 + struct vm_area_struct *vma, 432 + unsigned long addr, 433 + pte_t orig_pte) 434 + { 435 + return 0; 436 + } 437 + #endif 438 + 403 439 #ifndef __HAVE_ARCH_PGD_OFFSET_GATE 404 440 #define pgd_offset_gate(mm, addr) pgd_offset(mm, addr) 405 441 #endif
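The two empty defaults above use the kernel's usual `__HAVE_ARCH_*` override convention: an architecture that stores per-page metadata defines the guard macro in its own asm/pgtable.h and supplies real hooks, and the generic no-ops compile away everywhere else. A compressed, self-contained illustration of the pattern — `arch_unmap_one_demo` is a made-up name; the real hooks take the mm/vma/addr/pte arguments shown in the hunk:

```c
/* "Arch" side: define the guard and a real implementation, e.g. one
 * that can fail when tag storage cannot be allocated. */
#define __HAVE_ARCH_UNMAP_ONE
static inline int arch_unmap_one_demo(void)
{
	return -1;	/* e.g. tag save failed */
}

/* "Generic" side: emit the empty default only when no arch override
 * was seen first, exactly like asm-generic/pgtable.h above. */
#ifndef __HAVE_ARCH_UNMAP_ONE
static inline int arch_unmap_one_demo(void)
{
	return 0;
}
#endif
```

Because the arch header is always included before the generic one, the guard macro deterministically selects one definition and there is never a duplicate symbol.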
+4
include/linux/highmem.h
··· 237 237 238 238 #endif 239 239 240 + #ifndef __HAVE_ARCH_COPY_HIGHPAGE 241 + 240 242 static inline void copy_highpage(struct page *to, struct page *from) 241 243 { 242 244 char *vfrom, *vto; ··· 249 247 kunmap_atomic(vto); 250 248 kunmap_atomic(vfrom); 251 249 } 250 + 251 + #endif 252 252 253 253 #endif /* _LINUX_HIGHMEM_H */
+9
include/linux/mm.h
··· 245 245 # define VM_GROWSUP VM_ARCH_1 246 246 #elif defined(CONFIG_IA64) 247 247 # define VM_GROWSUP VM_ARCH_1 248 + #elif defined(CONFIG_SPARC64) 249 + # define VM_SPARC_ADI VM_ARCH_1 /* Uses ADI tag for access control */ 250 + # define VM_ARCH_CLEAR VM_SPARC_ADI 248 251 #elif !defined(CONFIG_MMU) 249 252 # define VM_MAPPED_COPY VM_ARCH_1 /* T if mapped copy of data (nommu mmap) */ 250 253 #endif ··· 289 286 290 287 /* This mask is used to clear all the VMA flags used by mlock */ 291 288 #define VM_LOCKED_CLEAR_MASK (~(VM_LOCKED | VM_LOCKONFAULT)) 289 + 290 + /* Arch-specific flags to clear when updating VM flags on protection change */ 291 + #ifndef VM_ARCH_CLEAR 292 + # define VM_ARCH_CLEAR VM_NONE 293 + #endif 294 + #define VM_FLAGS_CLEAR (ARCH_VM_PKEY_FLAGS | VM_ARCH_CLEAR) 292 295 293 296 /* 294 297 * mapping from the currently active vm_flags protection bits (the
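The effect of `VM_ARCH_CLEAR` is that an mprotect() call recomputes the arch bit from the new prot flags instead of carrying it over, so a plain mprotect(PROT_READ) on an ADI-enabled range drops VM_SPARC_ADI. A sketch with illustrative flag values — the real constants come from linux/mm.h, and the pkey flags folded into VM_FLAGS_CLEAR are omitted:

```c
/* Illustrative values; the real flags live in linux/mm.h. */
#define VM_READ        0x1UL
#define VM_WRITE       0x2UL
#define VM_EXEC        0x4UL
#define VM_SPARC_ADI   0x8UL		/* VM_ARCH_1 on sparc64 */
#define VM_FLAGS_CLEAR VM_SPARC_ADI	/* VM_ARCH_CLEAR; pkeys omitted */

/* mprotect() recomputes vm_flags: bits in VM_FLAGS_CLEAR are masked
 * off along with R/W/X, so ADI stays set only if the new prot
 * explicitly requests it again. */
static unsigned long recalc_vm_flags(unsigned long oldflags,
				     unsigned long newprot_bits)
{
	unsigned long mask_off = VM_READ | VM_WRITE | VM_EXEC | VM_FLAGS_CLEAR;

	return (oldflags & ~mask_off) | newprot_bits;
}
```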
+1 -1
include/linux/mman.h
··· 92 92 * 93 93 * Returns true if the prot flags are valid 94 94 */ 95 - static inline bool arch_validate_prot(unsigned long prot) 95 + static inline bool arch_validate_prot(unsigned long prot, unsigned long addr) 96 96 { 97 97 return (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM)) == 0; 98 98 }
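Adding the `addr` parameter lets sparc's own arch_validate_prot() override (not shown in this hunk) validate PROT_ADI per address range rather than purely per flag set. A hedged sketch of what such an override could look like — `PROT_ADI`'s value and `adi_ok_for()` are placeholders for this example, not the real sparc definitions:

```c
#include <stdbool.h>

#define PROT_READ  0x1
#define PROT_WRITE 0x2
#define PROT_EXEC  0x4
#define PROT_SEM   0x8
#define PROT_ADI   0x10	/* hypothetical value for illustration */

/* Placeholder for an arch check that the range can hold ADI tags. */
static bool adi_ok_for(unsigned long addr)
{
	(void)addr;
	return true;
}

/* Sketch of an arch override: reject unknown prot bits, and reject
 * PROT_ADI on ranges where tags cannot be maintained. */
static bool arch_validate_prot(unsigned long prot, unsigned long addr)
{
	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
		return false;
	if ((prot & PROT_ADI) && !adi_ok_for(addr))
		return false;
	return true;
}
```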
+4 -1
include/uapi/asm-generic/siginfo.h
··· 246 246 #else 247 247 # define SEGV_PKUERR 4 /* failed protection key checks */ 248 248 #endif 249 - #define NSIGSEGV 4 249 + #define SEGV_ACCADI 5 /* ADI not enabled for mapped object */ 250 + #define SEGV_ADIDERR 6 /* Disrupting MCD error */ 251 + #define SEGV_ADIPERR 7 /* Precise MCD exception */ 252 + #define NSIGSEGV 7 250 253 251 254 /* 252 255 * SIGBUS si_codes
+4
mm/ksm.c
··· 2369 2369 if (*vm_flags & VM_SAO) 2370 2370 return 0; 2371 2371 #endif 2372 + #ifdef VM_SPARC_ADI 2373 + if (*vm_flags & VM_SPARC_ADI) 2374 + return 0; 2375 + #endif 2372 2376 2373 2377 if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) { 2374 2378 err = __ksm_enter(mm);
+1
mm/memory.c
··· 3053 3053 if (pte_swp_soft_dirty(vmf->orig_pte)) 3054 3054 pte = pte_mksoft_dirty(pte); 3055 3055 set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte); 3056 + arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte); 3056 3057 vmf->orig_pte = pte; 3057 3058 3058 3059 /* ksm created a completely new copy */
+2 -2
mm/mprotect.c
··· 417 417 end = start + len; 418 418 if (end <= start) 419 419 return -ENOMEM; 420 - if (!arch_validate_prot(prot)) 420 + if (!arch_validate_prot(prot, start)) 421 421 return -EINVAL; 422 422 423 423 reqprot = prot; ··· 475 475 * cleared from the VMA. 476 476 */ 477 477 mask_off_old_flags = VM_READ | VM_WRITE | VM_EXEC | 478 - ARCH_VM_PKEY_FLAGS; 478 + VM_FLAGS_CLEAR; 479 479 480 480 new_vma_pkey = arch_override_mprotect_pkey(vma, prot, pkey); 481 481 newflags = calc_vm_prot_bits(prot, new_vma_pkey);
+14
mm/rmap.c
··· 1497 1497 (flags & (TTU_MIGRATION|TTU_SPLIT_FREEZE))) { 1498 1498 swp_entry_t entry; 1499 1499 pte_t swp_pte; 1500 + 1501 + if (arch_unmap_one(mm, vma, address, pteval) < 0) { 1502 + set_pte_at(mm, address, pvmw.pte, pteval); 1503 + ret = false; 1504 + page_vma_mapped_walk_done(&pvmw); 1505 + break; 1506 + } 1507 + 1500 1508 /* 1501 1509 * Store the pfn of the page in a special migration 1502 1510 * pte. do_swap_page() will wait until the migration ··· 1559 1551 } 1560 1552 1561 1553 if (swap_duplicate(entry) < 0) { 1554 + set_pte_at(mm, address, pvmw.pte, pteval); 1555 + ret = false; 1556 + page_vma_mapped_walk_done(&pvmw); 1557 + break; 1558 + } 1559 + if (arch_unmap_one(mm, vma, address, pteval) < 0) { 1562 1560 set_pte_at(mm, address, pvmw.pte, pteval); 1563 1561 ret = false; 1564 1562 page_vma_mapped_walk_done(&pvmw);
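The two arch_unmap_one() failure paths added to try_to_unmap_one() follow the same unwind shape as the existing swap_duplicate() failure: put the original pte back, report failure, and stop the page walk. A toy model of that restore-on-failure contract — the pte is just a long here, and `arch_unmap_one_stub()` is a placeholder for the real hook:

```c
#include <stdbool.h>

/* Placeholder: reports whether per-page metadata (ADI tags) could be
 * saved for the swap-out. */
static int arch_unmap_one_stub(int can_save)
{
	return can_save ? 0 : -1;
}

/* Returns true if the page was unmapped; on tag-save failure the
 * original pte is restored, mirroring the rmap.c error path above. */
static bool try_unmap(long *pte_slot, int can_save_tags)
{
	long pteval = *pte_slot;

	*pte_slot = 0;				/* ptep_clear_flush() */
	if (arch_unmap_one_stub(can_save_tags) < 0) {
		*pte_slot = pteval;		/* set_pte_at(): undo */
		return false;
	}
	return true;
}
```

Restoring the pte before bailing out keeps the mapping intact, so the page simply stays resident instead of losing its tags on the way to swap.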