Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'acpi-5.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI updates from Rafael Wysocki:
"These update the ACPICA code in the kernel to upstream revision
20191018, add support for EFI specific purpose memory, update the ACPI
EC driver to make it work on systems with hardware-reduced ACPI,
improve ACPI-based device enumeration for some platforms, rework the
lid blacklist handling in the button driver and add more lid quirks to
it, unify ACPI _HID/_UID matching, fix assorted issues and clean up
the code and documentation.

Specifics:

- Update the ACPICA code in the kernel to upstream revision 20191018
including:
* Fixes for Clang warnings (Bob Moore)
* Fix for possible overflow in get_tick_count() (Bob Moore)
* Introduction of acpi_unload_table() (Bob Moore)
* Debugger and utilities updates (Erik Schmauss)
* Fix for unloading tables loaded via configfs (Nikolaus Voss)

- Add support for EFI specific purpose memory to optionally allow
either application-exclusive or core-kernel-mm managed access to
differentiated memory (Dan Williams)

- Fix and clean up processing of the HMAT table (Brice Goglin, Qian
Cai, Tao Xu)

- Update the ACPI EC driver to make it work on systems with
hardware-reduced ACPI (Daniel Drake)

- Always build in support for the Generic Event Device (GED) to allow
one kernel binary to work both on systems with full hardware ACPI
and hardware-reduced ACPI (Arjan van de Ven)

- Fix the table unload mechanism to unregister platform devices
created when the given table was loaded (Andy Shevchenko)

- Rework the lid blacklist handling in the button driver and add more
lid quirks to it (Hans de Goede)

- Improve ACPI-based device enumeration for some platforms based on
Intel BayTrail SoCs (Hans de Goede)

- Add an OpRegion driver for the Cherry Trail Crystal Cove PMIC and
prevent handlers from being registered for unhandled PMIC OpRegions
(Hans de Goede)

- Unify ACPI _HID/_UID matching (Andy Shevchenko)

- Clean up documentation and comments (Cao jin, James Pack, Kacper
Piwiński)"

* tag 'acpi-5.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (52 commits)
ACPI: OSI: Shoot duplicate word
ACPI: HMAT: use %u instead of %d to print u32 values
ACPI: NUMA: HMAT: fix a section mismatch
ACPI: HMAT: don't mix pxm and nid when setting memory target processor_pxm
ACPI: NUMA: HMAT: Register "soft reserved" memory as an "hmem" device
ACPI: NUMA: HMAT: Register HMAT at device_initcall level
device-dax: Add a driver for "hmem" devices
dax: Fix alloc_dax_region() compile warning
lib: Uplevel the pmem "region" ida to a global allocator
x86/efi: Add efi_fake_mem support for EFI_MEMORY_SP
arm/efi: EFI soft reservation to memblock
x86/efi: EFI soft reservation to E820 enumeration
efi: Common enable/disable infrastructure for EFI soft reservation
x86/efi: Push EFI_MEMMAP check into leaf routines
efi: Enumerate EFI_MEMORY_SP
ACPI: NUMA: Establish a new drivers/acpi/numa/ directory
ACPICA: Update version to 20191018
ACPICA: debugger: remove leading whitespaces when converting a string to a buffer
ACPICA: acpiexec: initialize all simple types and field units from user input
ACPICA: debugger: add field unit support for acpi_db_get_next_token
...

+1373 -409
+16 -3
Documentation/admin-guide/kernel-parameters.txt
··· 1168 1168 Format: {"off" | "on" | "skip[mbr]"} 1169 1169 1170 1170 efi= [EFI] 1171 - Format: { "old_map", "nochunk", "noruntime", "debug" } 1171 + Format: { "old_map", "nochunk", "noruntime", "debug", 1172 + "nosoftreserve" } 1172 1173 old_map [X86-64]: switch to the old ioremap-based EFI 1173 1174 runtime services mapping. 32-bit still uses this one by 1174 1175 default. ··· 1178 1177 firmware implementations. 1179 1178 noruntime : disable EFI runtime services support 1180 1179 debug: enable misc debug output 1180 + nosoftreserve: The EFI_MEMORY_SP (Specific Purpose) 1181 + attribute may cause the kernel to reserve the 1182 + memory range for a memory mapping driver to 1183 + claim. Specify efi=nosoftreserve to disable this 1184 + reservation and treat the memory by its base type 1185 + (i.e. EFI_CONVENTIONAL_MEMORY / "System RAM"). 1181 1186 1182 1187 efi_no_storage_paranoia [EFI; X86] 1183 1188 Using this parameter you can use more than 50% of ··· 1196 1189 updating original EFI memory map. 1197 1190 Region of memory which aa attribute is added to is 1198 1191 from ss to ss+nn. 1192 + 1199 1193 If efi_fake_mem=2G@4G:0x10000,2G@0x10a0000000:0x10000 1200 1194 is specified, EFI_MEMORY_MORE_RELIABLE(0x10000) 1201 1195 attribute is added to range 0x100000000-0x180000000 and 1202 1196 0x10a0000000-0x1120000000. 1203 1197 1198 + If efi_fake_mem=8G@9G:0x40000 is specified, the 1199 + EFI_MEMORY_SP(0x40000) attribute is added to 1200 + range 0x240000000-0x43fffffff. 1201 + 1204 1202 Using this parameter you can do debugging of EFI memmap 1205 - related feature. For example, you can do debugging of 1203 + related features. For example, you can do debugging of 1206 1204 Address Range Mirroring feature even if your box 1207 - doesn't support it. 1205 + doesn't support it, or mark specific memory as 1206 + "soft reserved". 1208 1207 1209 1208 efivar_ssdt= [EFI; X86] Name of an EFI variable that contains an SSDT 1210 1209 that is to be dynamically loaded by Linux. 
+1 -1
Documentation/firmware-guide/acpi/namespace.rst
··· 261 261 struct acpi_device objects represented by the given row (xSDT means DSDT 262 262 or SSDT). 263 263 264 - The forth column of the above table indicates the 'bus_id' generation 264 + The fourth column of the above table indicates the 'bus_id' generation 265 265 rule of the struct acpi_device object: 266 266 267 267 _HID:
+2
arch/arm64/mm/mmu.c
··· 1060 1060 __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start), 1061 1061 size, PAGE_KERNEL, __pgd_pgtable_alloc, flags); 1062 1062 1063 + memblock_clear_nomap(start, size); 1064 + 1063 1065 return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT, 1064 1066 restrictions); 1065 1067 }
+5 -1
arch/x86/boot/compressed/eboot.c
··· 554 554 case EFI_BOOT_SERVICES_CODE: 555 555 case EFI_BOOT_SERVICES_DATA: 556 556 case EFI_CONVENTIONAL_MEMORY: 557 - e820_type = E820_TYPE_RAM; 557 + if (efi_soft_reserve_enabled() && 558 + (d->attribute & EFI_MEMORY_SP)) 559 + e820_type = E820_TYPE_SOFT_RESERVED; 560 + else 561 + e820_type = E820_TYPE_RAM; 558 562 break; 559 563 560 564 case EFI_ACPI_MEMORY_NVS:
+39 -7
arch/x86/boot/compressed/kaslr.c
··· 132 132 #include "../../../../lib/ctype.c" 133 133 #include "../../../../lib/cmdline.c" 134 134 135 + enum parse_mode { 136 + PARSE_MEMMAP, 137 + PARSE_EFI, 138 + }; 139 + 135 140 static int 136 - parse_memmap(char *p, unsigned long long *start, unsigned long long *size) 141 + parse_memmap(char *p, unsigned long long *start, unsigned long long *size, 142 + enum parse_mode mode) 137 143 { 138 144 char *oldp; 139 145 ··· 162 156 *start = memparse(p + 1, &p); 163 157 return 0; 164 158 case '@': 165 - /* memmap=nn@ss specifies usable region, should be skipped */ 166 - *size = 0; 159 + if (mode == PARSE_MEMMAP) { 160 + /* 161 + * memmap=nn@ss specifies usable region, should 162 + * be skipped 163 + */ 164 + *size = 0; 165 + } else { 166 + unsigned long long flags; 167 + 168 + /* 169 + * efi_fake_mem=nn@ss:attr the attr specifies 170 + * flags that might imply a soft-reservation. 171 + */ 172 + *start = memparse(p + 1, &p); 173 + if (p && *p == ':') { 174 + p++; 175 + if (kstrtoull(p, 0, &flags) < 0) 176 + *size = 0; 177 + else if (flags & EFI_MEMORY_SP) 178 + return 0; 179 + } 180 + *size = 0; 181 + } 167 182 /* Fall through */ 168 183 default: 169 184 /* ··· 199 172 return -EINVAL; 200 173 } 201 174 202 - static void mem_avoid_memmap(char *str) 175 + static void mem_avoid_memmap(enum parse_mode mode, char *str) 203 176 { 204 177 static int i; 205 178 ··· 214 187 if (k) 215 188 *k++ = 0; 216 189 217 - rc = parse_memmap(str, &start, &size); 190 + rc = parse_memmap(str, &start, &size, mode); 218 191 if (rc < 0) 219 192 break; 220 193 str = k; ··· 265 238 } 266 239 } 267 240 268 - 269 241 static void handle_mem_options(void) 270 242 { 271 243 char *args = (char *)get_cmd_line_ptr(); ··· 297 271 } 298 272 299 273 if (!strcmp(param, "memmap")) { 300 - mem_avoid_memmap(val); 274 + mem_avoid_memmap(PARSE_MEMMAP, val); 301 275 } else if (strstr(param, "hugepages")) { 302 276 parse_gb_huge_pages(param, val); 303 277 } else if (!strcmp(param, "mem")) { ··· 310 284 goto out; 
311 285 312 286 mem_limit = mem_size; 287 + } else if (!strcmp(param, "efi_fake_mem")) { 288 + mem_avoid_memmap(PARSE_EFI, val); 313 289 } 314 290 } 315 291 ··· 798 770 * Only EFI_CONVENTIONAL_MEMORY is guaranteed to be free. 799 771 */ 800 772 if (md->type != EFI_CONVENTIONAL_MEMORY) 773 + continue; 774 + 775 + if (efi_soft_reserve_enabled() && 776 + (md->attribute & EFI_MEMORY_SP)) 801 777 continue; 802 778 803 779 if (efi_mirror_found &&
+8
arch/x86/include/asm/e820/types.h
··· 29 29 E820_TYPE_PRAM = 12, 30 30 31 31 /* 32 + * Special-purpose memory is indicated to the system via the 33 + * EFI_MEMORY_SP attribute. Define an e820 translation of this 34 + * memory type for the purpose of reserving this range and 35 + * marking it with the IORES_DESC_SOFT_RESERVED designation. 36 + */ 37 + E820_TYPE_SOFT_RESERVED = 0xefffffff, 38 + 39 + /* 32 40 * Reserved RAM used by the kernel itself if 33 41 * CONFIG_INTEL_TXT=y is enabled, memory of this type 34 42 * will be included in the S3 integrity calculation
+16 -1
arch/x86/include/asm/efi.h
··· 140 140 extern void efi_switch_mm(struct mm_struct *mm); 141 141 extern void efi_recover_from_page_fault(unsigned long phys_addr); 142 142 extern void efi_free_boot_services(void); 143 - extern void efi_reserve_boot_services(void); 144 143 145 144 struct efi_setup_data { 146 145 u64 fw_vendor; ··· 243 244 extern bool efi_reboot_required(void); 244 245 extern bool efi_is_table_address(unsigned long phys_addr); 245 246 247 + extern void efi_find_mirror(void); 248 + extern void efi_reserve_boot_services(void); 246 249 #else 247 250 static inline void parse_efi_setup(u64 phys_addr, u32 data_len) {} 248 251 static inline bool efi_reboot_required(void) ··· 255 254 { 256 255 return false; 257 256 } 257 + static inline void efi_find_mirror(void) 258 + { 259 + } 260 + static inline void efi_reserve_boot_services(void) 261 + { 262 + } 258 263 #endif /* CONFIG_EFI */ 264 + 265 + #ifdef CONFIG_EFI_FAKE_MEMMAP 266 + extern void __init efi_fake_memmap_early(void); 267 + #else 268 + static inline void efi_fake_memmap_early(void) 269 + { 270 + } 271 + #endif 259 272 260 273 #endif /* _ASM_X86_EFI_H */
+10 -2
arch/x86/kernel/e820.c
··· 190 190 case E820_TYPE_RAM: /* Fall through: */ 191 191 case E820_TYPE_RESERVED_KERN: pr_cont("usable"); break; 192 192 case E820_TYPE_RESERVED: pr_cont("reserved"); break; 193 + case E820_TYPE_SOFT_RESERVED: pr_cont("soft reserved"); break; 193 194 case E820_TYPE_ACPI: pr_cont("ACPI data"); break; 194 195 case E820_TYPE_NVS: pr_cont("ACPI NVS"); break; 195 196 case E820_TYPE_UNUSABLE: pr_cont("unusable"); break; ··· 1049 1048 case E820_TYPE_PRAM: return "Persistent Memory (legacy)"; 1050 1049 case E820_TYPE_PMEM: return "Persistent Memory"; 1051 1050 case E820_TYPE_RESERVED: return "Reserved"; 1051 + case E820_TYPE_SOFT_RESERVED: return "Soft Reserved"; 1052 1052 default: return "Unknown E820 type"; 1053 1053 } 1054 1054 } ··· 1065 1063 case E820_TYPE_PRAM: /* Fall-through: */ 1066 1064 case E820_TYPE_PMEM: /* Fall-through: */ 1067 1065 case E820_TYPE_RESERVED: /* Fall-through: */ 1066 + case E820_TYPE_SOFT_RESERVED: /* Fall-through: */ 1068 1067 default: return IORESOURCE_MEM; 1069 1068 } 1070 1069 } ··· 1078 1075 case E820_TYPE_PMEM: return IORES_DESC_PERSISTENT_MEMORY; 1079 1076 case E820_TYPE_PRAM: return IORES_DESC_PERSISTENT_MEMORY_LEGACY; 1080 1077 case E820_TYPE_RESERVED: return IORES_DESC_RESERVED; 1078 + case E820_TYPE_SOFT_RESERVED: return IORES_DESC_SOFT_RESERVED; 1081 1079 case E820_TYPE_RESERVED_KERN: /* Fall-through: */ 1082 1080 case E820_TYPE_RAM: /* Fall-through: */ 1083 1081 case E820_TYPE_UNUSABLE: /* Fall-through: */ ··· 1093 1089 return true; 1094 1090 1095 1091 /* 1096 - * Treat persistent memory like device memory, i.e. reserve it 1097 - * for exclusive use of a driver 1092 + * Treat persistent memory and other special memory ranges like 1093 + * device memory, i.e. 
reserve it for exclusive use of a driver 1098 1094 */ 1099 1095 switch (type) { 1100 1096 case E820_TYPE_RESERVED: 1097 + case E820_TYPE_SOFT_RESERVED: 1101 1098 case E820_TYPE_PRAM: 1102 1099 case E820_TYPE_PMEM: 1103 1100 return false; ··· 1300 1295 end = entry->addr + entry->size; 1301 1296 if (end != (resource_size_t)end) 1302 1297 continue; 1298 + 1299 + if (entry->type == E820_TYPE_SOFT_RESERVED) 1300 + memblock_reserve(entry->addr, entry->size); 1303 1301 1304 1302 if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN) 1305 1303 continue;
+8 -10
arch/x86/kernel/setup.c
··· 1138 1138 1139 1139 reserve_bios_regions(); 1140 1140 1141 - if (efi_enabled(EFI_MEMMAP)) { 1142 - efi_fake_memmap(); 1143 - efi_find_mirror(); 1144 - efi_esrt_init(); 1141 + efi_fake_memmap(); 1142 + efi_find_mirror(); 1143 + efi_esrt_init(); 1145 1144 1146 - /* 1147 - * The EFI specification says that boot service code won't be 1148 - * called after ExitBootServices(). This is, in fact, a lie. 1149 - */ 1150 - efi_reserve_boot_services(); 1151 - } 1145 + /* 1146 + * The EFI specification says that boot service code won't be 1147 + * called after ExitBootServices(). This is, in fact, a lie. 1148 + */ 1149 + efi_reserve_boot_services(); 1152 1150 1153 1151 /* preallocate 4k for mptable mpc */ 1154 1152 e820__memblock_alloc_reserved_mpc_new();
+50 -4
arch/x86/platform/efi/efi.c
··· 128 128 efi_memory_desc_t *md; 129 129 u64 mirror_size = 0, total_size = 0; 130 130 131 + if (!efi_enabled(EFI_MEMMAP)) 132 + return; 133 + 131 134 for_each_efi_memory_desc(md) { 132 135 unsigned long long start = md->phys_addr; 133 136 unsigned long long size = md->num_pages << EFI_PAGE_SHIFT; ··· 148 145 149 146 /* 150 147 * Tell the kernel about the EFI memory map. This might include 151 - * more than the max 128 entries that can fit in the e820 legacy 152 - * (zeropage) memory map. 148 + * more than the max 128 entries that can fit in the passed in e820 149 + * legacy (zeropage) memory map, but the kernel's e820 table can hold 150 + * E820_MAX_ENTRIES. 153 151 */ 154 152 155 153 static void __init do_add_efi_memmap(void) 156 154 { 157 155 efi_memory_desc_t *md; 156 + 157 + if (!efi_enabled(EFI_MEMMAP)) 158 + return; 158 159 159 160 for_each_efi_memory_desc(md) { 160 161 unsigned long long start = md->phys_addr; ··· 171 164 case EFI_BOOT_SERVICES_CODE: 172 165 case EFI_BOOT_SERVICES_DATA: 173 166 case EFI_CONVENTIONAL_MEMORY: 174 - if (md->attribute & EFI_MEMORY_WB) 167 + if (efi_soft_reserve_enabled() 168 + && (md->attribute & EFI_MEMORY_SP)) 169 + e820_type = E820_TYPE_SOFT_RESERVED; 170 + else if (md->attribute & EFI_MEMORY_WB) 175 171 e820_type = E820_TYPE_RAM; 176 172 else 177 173 e820_type = E820_TYPE_RESERVED; ··· 200 190 e820_type = E820_TYPE_RESERVED; 201 191 break; 202 192 } 193 + 203 194 e820__range_add(start, size, e820_type); 204 195 } 205 196 e820__update_table(e820_table); 197 + } 198 + 199 + /* 200 + * Given add_efi_memmap defaults to 0 and there is no alternative 201 + * e820 mechanism for soft-reserved memory, import the full EFI memory 202 + * map if soft reservations are present and enabled. Otherwise, the 203 + * mechanism to disable the kernel's consideration of EFI_MEMORY_SP is 204 + * the efi=nosoftreserve option. 
205 + */ 206 + static bool do_efi_soft_reserve(void) 207 + { 208 + efi_memory_desc_t *md; 209 + 210 + if (!efi_enabled(EFI_MEMMAP)) 211 + return false; 212 + 213 + if (!efi_soft_reserve_enabled()) 214 + return false; 215 + 216 + for_each_efi_memory_desc(md) 217 + if (md->type == EFI_CONVENTIONAL_MEMORY && 218 + (md->attribute & EFI_MEMORY_SP)) 219 + return true; 220 + return false; 206 221 } 207 222 208 223 int __init efi_memblock_x86_reserve_range(void) ··· 259 224 if (rv) 260 225 return rv; 261 226 262 - if (add_efi_memmap) 227 + if (add_efi_memmap || do_efi_soft_reserve()) 263 228 do_add_efi_memmap(); 229 + 230 + efi_fake_memmap_early(); 264 231 265 232 WARN(efi.memmap.desc_version != 1, 266 233 "Unexpected EFI_MEMORY_DESCRIPTOR version %ld", ··· 813 776 * doesn't exist for 32-bit kernels. 814 777 */ 815 778 if (IS_ENABLED(CONFIG_X86_32)) 779 + return false; 780 + 781 + /* 782 + * EFI specific purpose memory may be reserved by default 783 + * depending on kernel config and boot options. 784 + */ 785 + if (md->type == EFI_CONVENTIONAL_MEMORY && 786 + efi_soft_reserve_enabled() && 787 + (md->attribute & EFI_MEMORY_SP)) 816 788 return false; 817 789 818 790 /*
+3
arch/x86/platform/efi/quirks.c
··· 320 320 { 321 321 efi_memory_desc_t *md; 322 322 323 + if (!efi_enabled(EFI_MEMMAP)) 324 + return; 325 + 323 326 for_each_efi_memory_desc(md) { 324 327 u64 start = md->phys_addr; 325 328 u64 size = md->num_pages << EFI_PAGE_SHIFT;
+12 -11
drivers/acpi/Kconfig
··· 319 319 To compile this driver as a module, choose M here: 320 320 the module will be called thermal. 321 321 322 - config ACPI_NUMA 323 - bool "NUMA support" 324 - depends on NUMA 325 - depends on (X86 || IA64 || ARM64) 326 - default y if IA64 || ARM64 327 - 328 322 config ACPI_CUSTOM_DSDT_FILE 329 323 string "Custom DSDT Table file to include" 330 324 default "" ··· 467 473 If you are unsure what to do, do not enable this option. 468 474 469 475 source "drivers/acpi/nfit/Kconfig" 470 - source "drivers/acpi/hmat/Kconfig" 471 - 476 + source "drivers/acpi/numa/Kconfig" 472 477 source "drivers/acpi/apei/Kconfig" 473 478 source "drivers/acpi/dptf/Kconfig" 474 479 ··· 506 513 PMIC chip. 507 514 508 515 if PMIC_OPREGION 509 - config CRC_PMIC_OPREGION 510 - bool "ACPI operation region support for CrystalCove PMIC" 516 + config BYTCRC_PMIC_OPREGION 517 + bool "ACPI operation region support for Bay Trail Crystal Cove PMIC" 511 518 depends on INTEL_SOC_PMIC 512 519 help 513 - This config adds ACPI operation region support for CrystalCove PMIC. 520 + This config adds ACPI operation region support for the Bay Trail 521 + version of the Crystal Cove PMIC. 522 + 523 + config CHTCRC_PMIC_OPREGION 524 + bool "ACPI operation region support for Cherry Trail Crystal Cove PMIC" 525 + depends on INTEL_SOC_PMIC 526 + help 527 + This config adds ACPI operation region support for the Cherry Trail 528 + version of the Crystal Cove PMIC. 514 529 515 530 config XPOWER_PMIC_OPREGION 516 531 bool "ACPI operation region support for XPower AXP288 PMIC"
+4 -4
drivers/acpi/Makefile
··· 48 48 acpi-$(CONFIG_ARM_AMBA) += acpi_amba.o 49 49 acpi-y += power.o 50 50 acpi-y += event.o 51 - acpi-$(CONFIG_ACPI_REDUCED_HARDWARE_ONLY) += evged.o 51 + acpi-y += evged.o 52 52 acpi-y += sysfs.o 53 53 acpi-y += property.o 54 54 acpi-$(CONFIG_X86) += acpi_cmos_rtc.o 55 55 acpi-$(CONFIG_X86) += x86/apple.o 56 56 acpi-$(CONFIG_X86) += x86/utils.o 57 57 acpi-$(CONFIG_DEBUG_FS) += debugfs.o 58 - acpi-$(CONFIG_ACPI_NUMA) += numa.o 59 58 acpi-$(CONFIG_ACPI_PROCFS_POWER) += cm_sbs.o 60 59 acpi-y += acpi_lpat.o 61 60 acpi-$(CONFIG_ACPI_LPIT) += acpi_lpit.o ··· 79 80 obj-$(CONFIG_ACPI) += container.o 80 81 obj-$(CONFIG_ACPI_THERMAL) += thermal.o 81 82 obj-$(CONFIG_ACPI_NFIT) += nfit/ 82 - obj-$(CONFIG_ACPI_HMAT) += hmat/ 83 + obj-$(CONFIG_ACPI_NUMA) += numa/ 83 84 obj-$(CONFIG_ACPI) += acpi_memhotplug.o 84 85 obj-$(CONFIG_ACPI_HOTPLUG_IOAPIC) += ioapic.o 85 86 obj-$(CONFIG_ACPI_BATTERY) += battery.o ··· 108 109 obj-$(CONFIG_ACPI_EXTLOG) += acpi_extlog.o 109 110 110 111 obj-$(CONFIG_PMIC_OPREGION) += pmic/intel_pmic.o 111 - obj-$(CONFIG_CRC_PMIC_OPREGION) += pmic/intel_pmic_crc.o 112 + obj-$(CONFIG_BYTCRC_PMIC_OPREGION) += pmic/intel_pmic_bytcrc.o 113 + obj-$(CONFIG_CHTCRC_PMIC_OPREGION) += pmic/intel_pmic_chtcrc.o 112 114 obj-$(CONFIG_XPOWER_PMIC_OPREGION) += pmic/intel_pmic_xpower.o 113 115 obj-$(CONFIG_BXT_WC_PMIC_OPREGION) += pmic/intel_pmic_bxtwc.o 114 116 obj-$(CONFIG_CHT_WC_PMIC_OPREGION) += pmic/intel_pmic_chtwc.o
+2 -2
drivers/acpi/acpi_configfs.c
··· 53 53 if (!table->header) 54 54 return -ENOMEM; 55 55 56 - ret = acpi_load_table(table->header); 56 + ret = acpi_load_table(table->header, &table->index); 57 57 if (ret) { 58 58 kfree(table->header); 59 59 table->header = NULL; ··· 223 223 struct acpi_table *table = container_of(cfg, struct acpi_table, cfg); 224 224 225 225 ACPI_INFO(("Host-directed Dynamic ACPI Table Unload")); 226 - acpi_tb_unload_table(table->index); 226 + acpi_unload_table(table->index); 227 227 } 228 228 229 229 static struct configfs_group_operations acpi_table_group_ops = {
+28 -20
drivers/acpi/acpi_lpss.c
··· 10 10 #include <linux/acpi.h> 11 11 #include <linux/clkdev.h> 12 12 #include <linux/clk-provider.h> 13 + #include <linux/dmi.h> 13 14 #include <linux/err.h> 14 15 #include <linux/io.h> 15 16 #include <linux/mutex.h> ··· 464 463 const char *consumer_hid; 465 464 const char *consumer_uid; 466 465 u32 flags; 466 + const struct dmi_system_id *dep_missing_ids; 467 + }; 468 + 469 + /* Please keep this list sorted alphabetically by vendor and model */ 470 + static const struct dmi_system_id i2c1_dep_missing_dmi_ids[] = { 471 + { 472 + .matches = { 473 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 474 + DMI_MATCH(DMI_PRODUCT_NAME, "T200TA"), 475 + }, 476 + }, 477 + {} 467 478 }; 468 479 469 480 /* ··· 486 473 * the supplier is not enumerated until after the consumer is probed. 487 474 */ 488 475 static const struct lpss_device_links lpss_device_links[] = { 476 + /* CHT External sdcard slot controller depends on PMIC I2C ctrl */ 489 477 {"808622C1", "7", "80860F14", "3", DL_FLAG_PM_RUNTIME}, 478 + /* CHT iGPU depends on PMIC I2C controller */ 490 479 {"808622C1", "7", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME}, 480 + /* BYT iGPU depends on the Embedded Controller I2C controller (UID 1) */ 481 + {"80860F41", "1", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME, 482 + i2c1_dep_missing_dmi_ids}, 483 + /* BYT CR iGPU depends on PMIC I2C controller (UID 5 on CR) */ 491 484 {"80860F41", "5", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME}, 485 + /* BYT iGPU depends on PMIC I2C controller (UID 7 on non CR) */ 486 + {"80860F41", "7", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME}, 492 487 }; 493 - 494 - static bool hid_uid_match(struct acpi_device *adev, 495 - const char *hid2, const char *uid2) 496 - { 497 - const char *hid1 = acpi_device_hid(adev); 498 - const char *uid1 = acpi_device_uid(adev); 499 - 500 - if (strcmp(hid1, hid2)) 501 - return false; 502 - 503 - if (!uid2) 504 - return true; 505 - 506 - return uid1 && !strcmp(uid1, uid2); 507 - } 508 488 509 489 static bool acpi_lpss_is_supplier(struct 
acpi_device *adev, 510 490 const struct lpss_device_links *link) 511 491 { 512 - return hid_uid_match(adev, link->supplier_hid, link->supplier_uid); 492 + return acpi_dev_hid_uid_match(adev, link->supplier_hid, link->supplier_uid); 513 493 } 514 494 515 495 static bool acpi_lpss_is_consumer(struct acpi_device *adev, 516 496 const struct lpss_device_links *link) 517 497 { 518 - return hid_uid_match(adev, link->consumer_hid, link->consumer_uid); 498 + return acpi_dev_hid_uid_match(adev, link->consumer_hid, link->consumer_uid); 519 499 } 520 500 521 501 struct hid_uid { ··· 524 518 if (!adev) 525 519 return 0; 526 520 527 - return hid_uid_match(adev, id->hid, id->uid); 521 + return acpi_dev_hid_uid_match(adev, id->hid, id->uid); 528 522 } 529 523 530 524 static struct device *acpi_lpss_find_device(const char *hid, const char *uid) ··· 576 570 if (!dev2) 577 571 return; 578 572 579 - if (acpi_lpss_dep(ACPI_COMPANION(dev2), ACPI_HANDLE(dev1))) 573 + if ((link->dep_missing_ids && dmi_check_system(link->dep_missing_ids)) 574 + || acpi_lpss_dep(ACPI_COMPANION(dev2), ACPI_HANDLE(dev1))) 580 575 device_link_add(dev2, dev1, link->flags); 581 576 582 577 put_device(dev2); ··· 592 585 if (!dev2) 593 586 return; 594 587 595 - if (acpi_lpss_dep(ACPI_COMPANION(dev1), ACPI_HANDLE(dev2))) 588 + if ((link->dep_missing_ids && dmi_check_system(link->dep_missing_ids)) 589 + || acpi_lpss_dep(ACPI_COMPANION(dev1), ACPI_HANDLE(dev2))) 596 590 device_link_add(dev1, dev2, link->flags); 597 591 598 592 put_device(dev2);
+43
drivers/acpi/acpi_platform.c
··· 31 31 {"", 0}, 32 32 }; 33 33 34 + static struct platform_device *acpi_platform_device_find_by_companion(struct acpi_device *adev) 35 + { 36 + struct device *dev; 37 + 38 + dev = bus_find_device_by_acpi_dev(&platform_bus_type, adev); 39 + return dev ? to_platform_device(dev) : NULL; 40 + } 41 + 42 + static int acpi_platform_device_remove_notify(struct notifier_block *nb, 43 + unsigned long value, void *arg) 44 + { 45 + struct acpi_device *adev = arg; 46 + struct platform_device *pdev; 47 + 48 + switch (value) { 49 + case ACPI_RECONFIG_DEVICE_ADD: 50 + /* Nothing to do here */ 51 + break; 52 + case ACPI_RECONFIG_DEVICE_REMOVE: 53 + if (!acpi_device_enumerated(adev)) 54 + break; 55 + 56 + pdev = acpi_platform_device_find_by_companion(adev); 57 + if (!pdev) 58 + break; 59 + 60 + platform_device_unregister(pdev); 61 + put_device(&pdev->dev); 62 + break; 63 + } 64 + 65 + return NOTIFY_OK; 66 + } 67 + 68 + static struct notifier_block acpi_platform_notifier = { 69 + .notifier_call = acpi_platform_device_remove_notify, 70 + }; 71 + 34 72 static void acpi_platform_fill_resource(struct acpi_device *adev, 35 73 const struct resource *src, struct resource *dest) 36 74 { ··· 168 130 return pdev; 169 131 } 170 132 EXPORT_SYMBOL_GPL(acpi_create_platform_device); 133 + 134 + void __init acpi_platform_init(void) 135 + { 136 + acpi_reconfig_notifier_register(&acpi_platform_notifier); 137 + }
+6 -2
drivers/acpi/acpi_video.c
··· 699 699 * event notify code. 700 700 * lcd_flag : 701 701 * 0. The system BIOS should automatically control the brightness level 702 - * of the LCD when the power changes from AC to DC 702 + * of the LCD when: 703 + * - the power changes from AC to DC (ACPI appendix B) 704 + * - a brightness hotkey gets pressed (implied by Win7/8 backlight docs) 703 705 * 1. The system BIOS should NOT automatically control the brightness 704 - * level of the LCD when the power changes from AC to DC. 706 + * level of the LCD when: 707 + * - the power changes from AC to DC (ACPI appendix B) 708 + * - a brightness hotkey gets pressed (implied by Win7/8 backlight docs) 705 709 * Return Value: 706 710 * -EINVAL wrong arg. 707 711 */
+2
drivers/acpi/acpica/acdebug.h
··· 148 148 149 149 void acpi_db_get_bus_info(void); 150 150 151 + acpi_status acpi_db_display_fields(u32 address_space_id); 152 + 151 153 /* 152 154 * dbdisply - debug display commands 153 155 */
+10
drivers/acpi/acpica/acstruct.h
··· 192 192 u32 num_INI; 193 193 }; 194 194 195 + /* Info used by Acpi acpi_db_display_fields */ 196 + 197 + struct acpi_region_walk_info { 198 + u32 debug_level; 199 + u32 count; 200 + acpi_owner_id owner_id; 201 + u8 display_type; 202 + u32 address_space_id; 203 + }; 204 + 195 205 /* TBD: [Restructure] Merge with struct above */ 196 206 197 207 struct acpi_walk_info {
+5 -4
drivers/acpi/acpica/acutils.h
··· 142 142 143 143 /* acpi_ut_dump_buffer */ 144 144 145 - #define DB_BYTE_DISPLAY 1 146 - #define DB_WORD_DISPLAY 2 147 - #define DB_DWORD_DISPLAY 4 148 - #define DB_QWORD_DISPLAY 8 145 + #define DB_BYTE_DISPLAY 0x01 146 + #define DB_WORD_DISPLAY 0x02 147 + #define DB_DWORD_DISPLAY 0x04 148 + #define DB_QWORD_DISPLAY 0x08 149 + #define DB_DISPLAY_DATA_ONLY 0x10 149 150 150 151 /* 151 152 * utascii - ASCII utilities
+4
drivers/acpi/acpica/dbconvert.c
··· 106 106 u8 *buffer; 107 107 acpi_status status; 108 108 109 + /* Skip all preceding white space */ 110 + 111 + acpi_ut_remove_whitespace(&string); 112 + 109 113 /* Generate the final buffer length */ 110 114 111 115 for (i = 0, length = 0; string[i];) {
-2
drivers/acpi/acpica/dbdisply.c
··· 513 513 return; 514 514 } 515 515 516 - obj_desc = walk_state->method_desc; 517 516 node = walk_state->method_node; 518 517 519 518 if (walk_state->results) { ··· 564 565 return; 565 566 } 566 567 567 - node = walk_state->method_node; 568 568 acpi_os_printf("Current Control Method Call Tree\n"); 569 569 570 570 while (walk_state) {
+1 -1
drivers/acpi/acpica/dbfileio.c
··· 93 93 while (table_list_head) { 94 94 table = table_list_head->table; 95 95 96 - status = acpi_load_table(table); 96 + status = acpi_load_table(table, NULL); 97 97 if (ACPI_FAILURE(status)) { 98 98 if (status == AE_ALREADY_EXISTS) { 99 99 acpi_os_printf
+35 -1
drivers/acpi/acpica/dbinput.c
··· 50 50 CMD_EVALUATE, 51 51 CMD_EXECUTE, 52 52 CMD_EXIT, 53 + CMD_FIELDS, 53 54 CMD_FIND, 54 55 CMD_GO, 55 56 CMD_HANDLERS, ··· 128 127 {"EVALUATE", 1}, 129 128 {"EXECUTE", 1}, 130 129 {"EXIT", 0}, 130 + {"FIELDS", 1}, 131 131 {"FIND", 1}, 132 132 {"GO", 0}, 133 133 {"HANDLERS", 0}, ··· 202 200 "Find ACPI name(s) with wildcards\n"}, 203 201 {1, " Integrity", "Validate namespace integrity\n"}, 204 202 {1, " Methods", "Display list of loaded control methods\n"}, 203 + {1, " Fields <AddressSpaceId>", 204 + "Display list of loaded field units by space ID\n"}, 205 205 {1, " Namespace [Object] [Depth]", 206 206 "Display loaded namespace tree/subtree\n"}, 207 207 {1, " Notify <Object> <Value>", "Send a notification on Object\n"}, ··· 511 507 } 512 508 break; 513 509 510 + case '{': 511 + 512 + /* This is the start of a field unit, scan until closing brace */ 513 + 514 + string++; 515 + start = string; 516 + type = ACPI_TYPE_FIELD_UNIT; 517 + 518 + /* Find end of buffer */ 519 + 520 + while (*string && (*string != '}')) { 521 + string++; 522 + } 523 + break; 524 + 514 525 case '[': 515 526 516 527 /* This is the start of a package, scan until closing bracket */ ··· 693 674 union acpi_parse_object *op) 694 675 { 695 676 u32 temp; 677 + u64 temp64; 696 678 u32 command_index; 697 679 u32 param_count; 698 680 char *command_line; ··· 709 689 710 690 param_count = acpi_db_get_line(input_buffer); 711 691 command_index = acpi_db_match_command(acpi_gbl_db_args[0]); 712 - temp = 0; 713 692 714 693 /* 715 694 * We don't want to add the !! command to the history buffer. 
It ··· 807 788 case CMD_FIND: 808 789 809 790 status = acpi_db_find_name_in_namespace(acpi_gbl_db_args[1]); 791 + break; 792 + 793 + case CMD_FIELDS: 794 + 795 + status = acpi_ut_strtoul64(acpi_gbl_db_args[1], &temp64); 796 + 797 + if (ACPI_FAILURE(status) 798 + || temp64 >= ACPI_NUM_PREDEFINED_REGIONS) { 799 + acpi_os_printf 800 + ("Invalid address space ID: must be between 0 and %u inclusive\n", 801 + ACPI_NUM_PREDEFINED_REGIONS - 1); 802 + return (AE_OK); 803 + } 804 + 805 + status = acpi_db_display_fields((u32)temp64); 810 806 break; 811 807 812 808 case CMD_GO:
+4
drivers/acpi/acpica/dbmethod.c
··· 321 321 walk_state->parse_flags |= ACPI_PARSE_DISASSEMBLE; 322 322 323 323 status = acpi_ps_parse_aml(walk_state); 324 + if (ACPI_FAILURE(status)) { 325 + return (status); 326 + } 327 + 324 328 (void)acpi_dm_parse_deferred_ops(op); 325 329 326 330 /* Now we can disassemble the method */
+114
drivers/acpi/acpica/dbnames.c
···
 #include "acnamesp.h"
 #include "acdebug.h"
 #include "acpredef.h"
+#include "acinterp.h"

 #define _COMPONENT ACPI_CA_DEBUGGER
 ACPI_MODULE_NAME("dbnames")
···

 /*******************************************************************************
  *
+ * FUNCTION:    acpi_db_walk_for_fields
+ *
+ * PARAMETERS:  Callback from walk_namespace
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Display short info about objects in the namespace
+ *
+ ******************************************************************************/
+
+static acpi_status
+acpi_db_walk_for_fields(acpi_handle obj_handle,
+                        u32 nesting_level, void *context, void **return_value)
+{
+    union acpi_object *ret_value;
+    struct acpi_region_walk_info *info =
+        (struct acpi_region_walk_info *)context;
+    struct acpi_buffer buffer;
+    acpi_status status;
+    struct acpi_namespace_node *node = acpi_ns_validate_handle(obj_handle);
+
+    if (!node) {
+        return (AE_OK);
+    }
+    if (node->object->field.region_obj->region.space_id !=
+        info->address_space_id) {
+        return (AE_OK);
+    }
+
+    info->count++;
+
+    /* Get and display the full pathname to this object */
+
+    buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER;
+    status = acpi_ns_handle_to_pathname(obj_handle, &buffer, TRUE);
+    if (ACPI_FAILURE(status)) {
+        acpi_os_printf("Could Not get pathname for object %p\n",
+                       obj_handle);
+        return (AE_OK);
+    }
+
+    acpi_os_printf("%s ", (char *)buffer.pointer);
+    ACPI_FREE(buffer.pointer);
+
+    buffer.length = ACPI_ALLOCATE_LOCAL_BUFFER;
+    acpi_evaluate_object(obj_handle, NULL, NULL, &buffer);
+
+    /*
+     * Since this is a field unit, surround the output in braces
+     */
+    acpi_os_printf("{");
+
+    ret_value = (union acpi_object *)buffer.pointer;
+    switch (ret_value->type) {
+    case ACPI_TYPE_INTEGER:
+
+        acpi_os_printf("%8.8X%8.8X",
+                       ACPI_FORMAT_UINT64(ret_value->integer.value));
+        break;
+
+    case ACPI_TYPE_BUFFER:
+
+        acpi_ut_dump_buffer(ret_value->buffer.pointer,
+                            ret_value->buffer.length,
+                            DB_DISPLAY_DATA_ONLY | DB_BYTE_DISPLAY, 0);
+        break;
+
+    default:
+
+        break;
+    }
+    acpi_os_printf("}\n");
+
+    ACPI_FREE(buffer.pointer);
+
+    return (AE_OK);
+}
+
+/*******************************************************************************
+ *
  * FUNCTION:    acpi_db_walk_for_specific_objects
  *
  * PARAMETERS:  Callback from walk_namespace
···
           info.count, acpi_ut_get_type_name(type));

     acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
+    return (AE_OK);
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_db_display_fields
+ *
+ * PARAMETERS:  obj_type_arg        - Type of object to display
+ *              display_count_arg   - Max depth to display
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Display objects in the namespace of the requested type
+ *
+ ******************************************************************************/
+
+acpi_status acpi_db_display_fields(u32 address_space_id)
+{
+    struct acpi_region_walk_info info;
+
+    info.count = 0;
+    info.owner_id = ACPI_OWNER_ID_MAX;
+    info.debug_level = ACPI_UINT32_MAX;
+    info.display_type = ACPI_DISPLAY_SUMMARY | ACPI_DISPLAY_SHORT;
+    info.address_space_id = address_space_id;
+
+    /* Walk the namespace from the root */
+
+    (void)acpi_walk_namespace(ACPI_TYPE_LOCAL_REGION_FIELD,
+                              ACPI_ROOT_OBJECT, ACPI_UINT32_MAX,
+                              acpi_db_walk_for_fields, NULL, (void *)&info,
+                              NULL);
+
     return (AE_OK);
 }
-1
drivers/acpi/acpica/dbobject.c
···
     u8 display_args = FALSE;

     node = walk_state->method_node;
-    obj_desc = walk_state->method_desc;

     /* There are no arguments for the module-level code case */
+1 -1
drivers/acpi/acpica/dscontrol.c
···
         walk_state->parser_state.pkg_end;
     control_state->control.opcode = op->common.aml_opcode;
     control_state->control.loop_timeout = acpi_os_get_timer() +
-        (u64)(acpi_gbl_max_loop_iterations * ACPI_100NSEC_PER_SEC);
+        ((u64)acpi_gbl_max_loop_iterations * ACPI_100NSEC_PER_SEC);

     /* Push the control state on this walk's control stack */
+3 -9
drivers/acpi/acpica/dsfield.c
···

     if (walk_state->deferred_node) {
         node = walk_state->deferred_node;
-        status = AE_OK;
     } else {
         /* Execute flag should always be set when this function is entered */
···
     union acpi_parse_object *child;

 #ifdef ACPI_EXEC_APP
-    u64 value = 0;
     union acpi_operand_object *result_desc;
     union acpi_operand_object *obj_desc;
     char *name_path;
···
             name_path =
                 acpi_ns_get_external_pathname(info->field_node);
-            obj_desc =
-                acpi_ut_create_integer_object(value);
             if (ACPI_SUCCESS
                 (ae_lookup_init_file_entry
-                 (name_path, &value))) {
+                 (name_path, &obj_desc))) {
                 acpi_ex_write_data_to_field
                     (obj_desc,
                      acpi_ns_get_attached_object
                      (info->field_node),
                      &result_desc);
+                acpi_ut_remove_reference
+                    (obj_desc);
             }
-            acpi_ut_remove_reference(obj_desc);
             ACPI_FREE(name_path);
 #endif
         }
···
         }

         /* Name already exists, just ignore this error */
-
-        status = AE_OK;
     }

     arg->common.node = node;
+7 -4
drivers/acpi/acpica/evgpeblk.c
···

     status =
         acpi_hw_disable_gpe_block(gpe_block->xrupt_block, gpe_block, NULL);
+    if (ACPI_FAILURE(status)) {
+        return_ACPI_STATUS(status);
+    }

     if (!gpe_block->previous && !gpe_block->next) {
···
     walk_info.gpe_device = gpe_device;
     walk_info.execute_by_owner_id = FALSE;

-    status = acpi_ns_walk_namespace(ACPI_TYPE_METHOD, gpe_device,
-                                    ACPI_UINT32_MAX, ACPI_NS_WALK_NO_UNLOCK,
-                                    acpi_ev_match_gpe_method, NULL,
-                                    &walk_info, NULL);
+    (void)acpi_ns_walk_namespace(ACPI_TYPE_METHOD, gpe_device,
+                                 ACPI_UINT32_MAX, ACPI_NS_WALK_NO_UNLOCK,
+                                 acpi_ev_match_gpe_method, NULL, &walk_info,
+                                 NULL);

     /* Return the new block */
-3
drivers/acpi/acpica/evgpeinit.c
···
          * GPE0 and GPE1 do not have to be contiguous in the GPE number
          * space. However, GPE0 always starts at GPE number zero.
          */
-        gpe_number_max = acpi_gbl_FADT.gpe1_base +
-            ((register_count1 * ACPI_GPE_REGISTER_WIDTH) - 1);
     }
 }
···

         ACPI_DEBUG_PRINT((ACPI_DB_INIT,
                           "There are no GPE blocks defined in the FADT\n"));
-        status = AE_OK;
         goto cleanup;
     }
+10 -2
drivers/acpi/acpica/evmisc.c
···
     /* Disable all GPEs in all GPE blocks */

     status = acpi_ev_walk_gpe_list(acpi_hw_disable_gpe_block, NULL);
+    if (ACPI_FAILURE(status)) {
+        ACPI_EXCEPTION((AE_INFO, status,
+                        "Could not disable GPEs in GPE block"));
+    }

     status = acpi_ev_remove_global_lock_handler();
     if (ACPI_FAILURE(status)) {
-        ACPI_ERROR((AE_INFO,
-                    "Could not remove Global Lock handler"));
+        ACPI_EXCEPTION((AE_INFO, status,
+                        "Could not remove Global Lock handler"));
     }

     acpi_gbl_events_initialized = FALSE;
···
     /* Deallocate all handler objects installed within GPE info structs */

     status = acpi_ev_walk_gpe_list(acpi_ev_delete_gpe_handlers, NULL);
+    if (ACPI_FAILURE(status)) {
+        ACPI_EXCEPTION((AE_INFO, status,
+                        "Could not delete GPE handlers"));
+    }

     /* Return to original mode if necessary */
+2 -2
drivers/acpi/acpica/evregion.c
···
     objects[1].type = ACPI_TYPE_INTEGER;
     objects[1].integer.value = ACPI_REG_CONNECT;

-    status = acpi_evaluate_object(reg_method, NULL, &args, NULL);
+    (void)acpi_evaluate_object(reg_method, NULL, &args, NULL);

 exit:
     /* We ignore all errors from above, don't care */

-    status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+    (void)acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
     return_VOID;
 }
-1
drivers/acpi/acpica/evrgnini.c
···
              * root bridge. Still need to return a context object
              * for the new PCI_Config operation region, however.
              */
-            status = AE_OK;
         } else {
             ACPI_EXCEPTION((AE_INFO, status,
                             "Could not install PciConfig handler "
+3
drivers/acpi/acpica/hwxfsleep.c
···

     status = acpi_hw_write_port(acpi_gbl_FADT.smi_command,
                                 (u32)acpi_gbl_FADT.s4_bios_request, 8);
+    if (ACPI_FAILURE(status)) {
+        return_ACPI_STATUS(status);
+    }

     do {
         acpi_os_stall(ACPI_USEC_PER_MSEC);
+1 -1
drivers/acpi/acpica/nsconvert.c
···
 error_exit:
     ACPI_FREE(name);
     *return_object = new_object;
-    return (AE_OK);
+    return (status);
 }
+2 -4
drivers/acpi/acpica/nsdump.c
···
             for (i = 0;
                  (i < obj_desc->buffer.length
                   && i < 12); i++) {
-                acpi_os_printf(" %.2hX",
+                acpi_os_printf(" %2.2X",
                                obj_desc->buffer.
                                pointer[i]);
             }
···
     case ACPI_TYPE_LOCAL_BANK_FIELD:
     case ACPI_TYPE_LOCAL_INDEX_FIELD:

-        acpi_os_printf(" Off %.3X Len %.2X Acc %.2hd\n",
+        acpi_os_printf(" Off %.3X Len %.2X Acc %.2X\n",
                        (obj_desc->common_field.
                         base_byte_offset * 8)
                        +
···

             goto cleanup;
         }
-
-        obj_type = ACPI_TYPE_INVALID;  /* Terminate loop after next pass */
     }

 cleanup:
+2 -2
drivers/acpi/acpica/nsxfname.c
···
     }

     if (cls) {
-        next_id_string = acpi_ns_copy_device_id(&info->class_code,
-                                                cls, next_id_string);
+        (void)acpi_ns_copy_device_id(&info->class_code,
+                                     cls, next_id_string);
     }

     /* Copy the fixed-length data */
+2 -5
drivers/acpi/acpica/psobject.c
···
         walk_state->opcode = (*op)->common.aml_opcode;

         status = walk_state->ascending_callback(walk_state);
-        status =
-            acpi_ps_next_parse_state(walk_state, *op, status);
+        (void)acpi_ps_next_parse_state(walk_state, *op, status);

         status2 = acpi_ps_complete_this_op(walk_state, *op);
         if (ACPI_FAILURE(status2)) {
···
             }
         }

-        status = AE_OK;
         break;

     case AE_CTRL_BREAK:
···
         walk_state->opcode = (*op)->common.aml_opcode;

         status = walk_state->ascending_callback(walk_state);
-        status = acpi_ps_next_parse_state(walk_state, *op, status);
+        (void)acpi_ps_next_parse_state(walk_state, *op, status);

         status2 = acpi_ps_complete_this_op(walk_state, *op);
         if (ACPI_FAILURE(status2)) {
             return_ACPI_STATUS(status2);
         }

-        status = AE_OK;
         break;

     case AE_CTRL_TERMINATE:
+3
drivers/acpi/acpica/rscreate.c
···
         path_buffer.pointer = user_prt->source;

         status = acpi_ns_handle_to_pathname((acpi_handle)node, &path_buffer, FALSE);
+        if (ACPI_FAILURE(status)) {
+            return_ACPI_STATUS(status);
+        }

         /* +1 to include null terminator */
+3
drivers/acpi/acpica/tbdata.c
···
     }

     status = acpi_ns_load_table(table_index, parent_node);
+    if (ACPI_FAILURE(status)) {
+        return_ACPI_STATUS(status);
+    }

     /*
      * Update GPEs for any new _Lxx/_Exx methods. Ignore errors. The host is
+39 -1
drivers/acpi/acpica/tbxfload.c
···
  *
  * PARAMETERS:  table               - Pointer to a buffer containing the ACPI
  *                                    table to be loaded.
+ *              table_idx           - Pointer to a u32 for storing the table
+ *                                    index, might be NULL
  *
  * RETURN:      Status
  *
···
  *              to ensure that the table is not deleted or unmapped.
  *
  ******************************************************************************/
-acpi_status acpi_load_table(struct acpi_table_header *table)
+acpi_status acpi_load_table(struct acpi_table_header *table, u32 *table_idx)
 {
     acpi_status status;
     u32 table_index;
···
     status = acpi_tb_install_and_load_table(ACPI_PTR_TO_PHYSADDR(table),
                                             ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL,
                                             FALSE, &table_index);
+    if (table_idx) {
+        *table_idx = table_index;
+    }
+
     if (ACPI_SUCCESS(status)) {

         /* Complete the initialization/resolution of new objects */
···
 }

 ACPI_EXPORT_SYMBOL(acpi_unload_parent_table)
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_unload_table
+ *
+ * PARAMETERS:  table_index         - Index as returned by acpi_load_table
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Via the table_index representing an SSDT or OEMx table, unloads
+ *              the table and deletes all namespace objects associated with
+ *              that table. Unloading of the DSDT is not allowed.
+ *              Note: Mainly intended to support hotplug removal of SSDTs.
+ *
+ ******************************************************************************/
+acpi_status acpi_unload_table(u32 table_index)
+{
+    acpi_status status;
+
+    ACPI_FUNCTION_TRACE(acpi_unload_table);
+
+    if (table_index == 1) {
+
+        /* table_index==1 means DSDT is the owner. DSDT cannot be unloaded */
+
+        return_ACPI_STATUS(AE_TYPE);
+    }
+
+    status = acpi_tb_unload_table(table_index);
+    return_ACPI_STATUS(status);
+}
+
+ACPI_EXPORT_SYMBOL(acpi_unload_table)
+29 -23
drivers/acpi/acpica/utbuffer.c
···
     u32 j;
     u32 temp32;
     u8 buf_char;
+    u32 display_data_only = display & DB_DISPLAY_DATA_ONLY;

+    display &= ~DB_DISPLAY_DATA_ONLY;
     if (!buffer) {
         acpi_os_printf("Null Buffer Pointer in DumpBuffer!\n");
         return;
···

         /* Print current offset */

-        acpi_os_printf("%8.4X: ", (base_offset + i));
+        if (!display_data_only) {
+            acpi_os_printf("%8.4X: ", (base_offset + i));
+        }

         /* Print 16 hex chars */
···
         * Print the ASCII equivalent characters but watch out for the bad
         * unprintable ones (printable chars are 0x20 through 0x7E)
         */
-        acpi_os_printf(" ");
-        for (j = 0; j < 16; j++) {
-            if (i + j >= count) {
-                acpi_os_printf("\n");
-                return;
-            }
-
-            /*
-             * Add comment characters so rest of line is ignored when
-             * compiled
-             */
-            if (j == 0) {
-                acpi_os_printf("// ");
-            }
-
-            buf_char = buffer[(acpi_size)i + j];
-            if (isprint(buf_char)) {
-                acpi_os_printf("%c", buf_char);
-            } else {
-                acpi_os_printf(".");
-            }
-        }
-
-        /* Done with that line. */
-
-        acpi_os_printf("\n");
+        if (!display_data_only) {
+            acpi_os_printf(" ");
+            for (j = 0; j < 16; j++) {
+                if (i + j >= count) {
+                    acpi_os_printf("\n");
+                    return;
+                }
+
+                /*
+                 * Add comment characters so rest of line is ignored when
+                 * compiled
+                 */
+                if (j == 0) {
+                    acpi_os_printf("// ");
+                }
+
+                buf_char = buffer[(acpi_size)i + j];
+                if (isprint(buf_char)) {
+                    acpi_os_printf("%c", buf_char);
+                } else {
+                    acpi_os_printf(".");
+                }
+            }
+
+            /* Done with that line. */
+
+            acpi_os_printf("\n");
+        }
         i += 16;
     }
-2
drivers/acpi/acpica/utids.c
···
                       value);
             length = ACPI_EISAID_STRING_SIZE;
         } else {    /* ACPI_TYPE_STRING */
-
             /* Copy the String CID from the returned object */
-
             strcpy(next_id_string, cid_objects[i]->string.pointer);
             length = cid_objects[i]->string.length + 1;
         }
+1 -1
drivers/acpi/acpica/uttrack.c
···
             case ACPI_DESC_TYPE_PARSER:

                 acpi_os_printf
-                    ("AmlOpcode 0x%04hX\n",
+                    ("AmlOpcode 0x%04X\n",
                      descriptor->op.asl.
                      aml_opcode);
                 break;
+74 -63
drivers/acpi/button.c
···
 #define ACPI_BUTTON_DEVICE_NAME_LID "Lid Switch"
 #define ACPI_BUTTON_TYPE_LID        0x05

-#define ACPI_BUTTON_LID_INIT_IGNORE 0x00
-#define ACPI_BUTTON_LID_INIT_OPEN   0x01
-#define ACPI_BUTTON_LID_INIT_METHOD 0x02
+enum {
+    ACPI_BUTTON_LID_INIT_IGNORE,
+    ACPI_BUTTON_LID_INIT_OPEN,
+    ACPI_BUTTON_LID_INIT_METHOD,
+    ACPI_BUTTON_LID_INIT_DISABLED,
+};
+
+static const char * const lid_init_state_str[] = {
+    [ACPI_BUTTON_LID_INIT_IGNORE]   = "ignore",
+    [ACPI_BUTTON_LID_INIT_OPEN]     = "open",
+    [ACPI_BUTTON_LID_INIT_METHOD]   = "method",
+    [ACPI_BUTTON_LID_INIT_DISABLED] = "disabled",
+};

 #define _COMPONENT      ACPI_BUTTON_COMPONENT
 ACPI_MODULE_NAME("button");
···
 };
 MODULE_DEVICE_TABLE(acpi, button_device_ids);

-/*
- * Some devices which don't even have a lid in anyway have a broken _LID
- * method (e.g. pointing to a floating gpio pin) causing spurious LID events.
- */
-static const struct dmi_system_id lid_blacklst[] = {
+/* Please keep this list sorted alphabetically by vendor and model */
+static const struct dmi_system_id dmi_lid_quirks[] = {
     {
-        /* GP-electronic T701 */
+        /*
+         * Asus T200TA, _LID keeps reporting closed after every second
+         * openening of the lid. Causing immediate re-suspend after
+         * opening every other open. Using LID_INIT_OPEN fixes this.
+         */
+        .matches = {
+            DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+            DMI_MATCH(DMI_PRODUCT_NAME, "T200TA"),
+        },
+        .driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_OPEN,
+    },
+    {
+        /* GP-electronic T701, _LID method points to a floating GPIO */
         .matches = {
             DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
             DMI_MATCH(DMI_PRODUCT_NAME, "T701"),
             DMI_MATCH(DMI_BIOS_VERSION, "BYT70A.YNCHENG.WIN.007"),
         },
+        .driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_DISABLED,
+    },
+    {
+        /*
+         * Medion Akoya E2215T, notification of the LID device only
+         * happens on close, not on open and _LID always returns closed.
+         */
+        .matches = {
+            DMI_MATCH(DMI_SYS_VENDOR, "MEDION"),
+            DMI_MATCH(DMI_PRODUCT_NAME, "E2215T MD60198"),
+        },
+        .driver_data = (void *)(long)ACPI_BUTTON_LID_INIT_OPEN,
     },
     {}
 };
···
     bool suspended;
 };

-static BLOCKING_NOTIFIER_HEAD(acpi_lid_notifier);
 static struct acpi_device *lid_device;
-static u8 lid_init_state = ACPI_BUTTON_LID_INIT_METHOD;
+static long lid_init_state = -1;

 static unsigned long lid_report_interval __read_mostly = 500;
 module_param(lid_report_interval, ulong, 0644);
···
 static int acpi_lid_notify_state(struct acpi_device *device, int state)
 {
     struct acpi_button *button = acpi_driver_data(device);
-    int ret;
     ktime_t next_report;
     bool do_update;
···
         button->last_time = ktime_get();
     }

-    ret = blocking_notifier_call_chain(&acpi_lid_notifier, state, device);
-    if (ret == NOTIFY_DONE)
-        ret = blocking_notifier_call_chain(&acpi_lid_notifier, state,
-                                           device);
-    if (ret == NOTIFY_DONE || ret == NOTIFY_OK) {
-        /*
-         * It is also regarded as success if the notifier_chain
-         * returns NOTIFY_OK or NOTIFY_DONE.
-         */
-        ret = 0;
-    }
-    return ret;
+    return 0;
 }

 static int __maybe_unused acpi_button_state_seq_show(struct seq_file *seq,
···
 /* --------------------------------------------------------------------------
                               Driver Interface
    -------------------------------------------------------------------------- */
-int acpi_lid_notifier_register(struct notifier_block *nb)
-{
-    return blocking_notifier_chain_register(&acpi_lid_notifier, nb);
-}
-EXPORT_SYMBOL(acpi_lid_notifier_register);
-
-int acpi_lid_notifier_unregister(struct notifier_block *nb)
-{
-    return blocking_notifier_chain_unregister(&acpi_lid_notifier, nb);
-}
-EXPORT_SYMBOL(acpi_lid_notifier_unregister);
-
 int acpi_lid_open(void)
 {
     if (!lid_device)
···
     char *name, *class;
     int error;

-    if (!strcmp(hid, ACPI_BUTTON_HID_LID) && dmi_check_system(lid_blacklst))
+    if (!strcmp(hid, ACPI_BUTTON_HID_LID) &&
+        lid_init_state == ACPI_BUTTON_LID_INIT_DISABLED)
         return -ENODEV;

     button = kzalloc(sizeof(struct acpi_button), GFP_KERNEL);
···
 static int param_set_lid_init_state(const char *val,
                                     const struct kernel_param *kp)
 {
-    int result = 0;
+    int i;

-    if (!strncmp(val, "open", sizeof("open") - 1)) {
-        lid_init_state = ACPI_BUTTON_LID_INIT_OPEN;
-        pr_info("Notify initial lid state as open\n");
-    } else if (!strncmp(val, "method", sizeof("method") - 1)) {
-        lid_init_state = ACPI_BUTTON_LID_INIT_METHOD;
-        pr_info("Notify initial lid state with _LID return value\n");
-    } else if (!strncmp(val, "ignore", sizeof("ignore") - 1)) {
-        lid_init_state = ACPI_BUTTON_LID_INIT_IGNORE;
-        pr_info("Do not notify initial lid state\n");
-    } else
-        result = -EINVAL;
-    return result;
+    i = sysfs_match_string(lid_init_state_str, val);
+    if (i < 0)
+        return i;
+
+    lid_init_state = i;
+    pr_info("Initial lid state set to '%s'\n", lid_init_state_str[i]);
+    return 0;
 }

-static int param_get_lid_init_state(char *buffer,
-                                    const struct kernel_param *kp)
+static int param_get_lid_init_state(char *buf, const struct kernel_param *kp)
 {
-    switch (lid_init_state) {
-    case ACPI_BUTTON_LID_INIT_OPEN:
-        return sprintf(buffer, "open");
-    case ACPI_BUTTON_LID_INIT_METHOD:
-        return sprintf(buffer, "method");
-    case ACPI_BUTTON_LID_INIT_IGNORE:
-        return sprintf(buffer, "ignore");
-    default:
-        return sprintf(buffer, "invalid");
-    }
-    return 0;
+    int i, c = 0;
+
+    for (i = 0; i < ARRAY_SIZE(lid_init_state_str); i++)
+        if (i == lid_init_state)
+            c += sprintf(buf + c, "[%s] ", lid_init_state_str[i]);
+        else
+            c += sprintf(buf + c, "%s ", lid_init_state_str[i]);
+
+    buf[c - 1] = '\n'; /* Replace the final space with a newline */
+
+    return c;
 }

 module_param_call(lid_init_state,
···

 static int acpi_button_register_driver(struct acpi_driver *driver)
 {
+    const struct dmi_system_id *dmi_id;
+
+    if (lid_init_state == -1) {
+        dmi_id = dmi_first_match(dmi_lid_quirks);
+        if (dmi_id)
+            lid_init_state = (long)dmi_id->driver_data;
+        else
+            lid_init_state = ACPI_BUTTON_LID_INIT_METHOD;
+    }
+
     /*
      * Modules such as nouveau.ko and i915.ko have a link time dependency
      * on acpi_lid_open(), and would therefore not be loadable on ACPI
+139 -56
drivers/acpi/ec.c
···
     EC_FLAGS_QUERY_ENABLED,          /* Query is enabled */
     EC_FLAGS_QUERY_PENDING,          /* Query is pending */
     EC_FLAGS_QUERY_GUARDING,         /* Guard for SCI_EVT check */
-    EC_FLAGS_GPE_HANDLER_INSTALLED,  /* GPE handler installed */
+    EC_FLAGS_EVENT_HANDLER_INSTALLED, /* Event handler installed */
     EC_FLAGS_EC_HANDLER_INSTALLED,   /* OpReg handler installed */
-    EC_FLAGS_EVT_HANDLER_INSTALLED,  /* _Qxx handlers installed */
+    EC_FLAGS_QUERY_METHODS_INSTALLED, /* _Qxx handlers installed */
     EC_FLAGS_STARTED,                /* Driver is started */
     EC_FLAGS_STOPPED,                /* Driver is stopped */
-    EC_FLAGS_GPE_MASKED,             /* GPE masked */
+    EC_FLAGS_EVENTS_MASKED,          /* Events masked */
 };

 #define ACPI_EC_COMMAND_POLL        0x01 /* Available for command byte */
···
 static void acpi_ec_submit_request(struct acpi_ec *ec)
 {
     ec->reference_count++;
-    if (test_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags) &&
-        ec->reference_count == 1)
+    if (test_bit(EC_FLAGS_EVENT_HANDLER_INSTALLED, &ec->flags) &&
+        ec->gpe >= 0 && ec->reference_count == 1)
         acpi_ec_enable_gpe(ec, true);
 }
···
     bool flushed = false;

     ec->reference_count--;
-    if (test_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags) &&
-        ec->reference_count == 0)
+    if (test_bit(EC_FLAGS_EVENT_HANDLER_INSTALLED, &ec->flags) &&
+        ec->gpe >= 0 && ec->reference_count == 0)
         acpi_ec_disable_gpe(ec, true);
     flushed = acpi_ec_flushed(ec);
     if (flushed)
         wake_up(&ec->wait);
 }

-static void acpi_ec_mask_gpe(struct acpi_ec *ec)
+static void acpi_ec_mask_events(struct acpi_ec *ec)
 {
-    if (!test_bit(EC_FLAGS_GPE_MASKED, &ec->flags)) {
-        acpi_ec_disable_gpe(ec, false);
+    if (!test_bit(EC_FLAGS_EVENTS_MASKED, &ec->flags)) {
+        if (ec->gpe >= 0)
+            acpi_ec_disable_gpe(ec, false);
+        else
+            disable_irq_nosync(ec->irq);
+
         ec_dbg_drv("Polling enabled");
-        set_bit(EC_FLAGS_GPE_MASKED, &ec->flags);
+        set_bit(EC_FLAGS_EVENTS_MASKED, &ec->flags);
     }
 }

-static void acpi_ec_unmask_gpe(struct acpi_ec *ec)
+static void acpi_ec_unmask_events(struct acpi_ec *ec)
 {
-    if (test_bit(EC_FLAGS_GPE_MASKED, &ec->flags)) {
-        clear_bit(EC_FLAGS_GPE_MASKED, &ec->flags);
-        acpi_ec_enable_gpe(ec, false);
+    if (test_bit(EC_FLAGS_EVENTS_MASKED, &ec->flags)) {
+        clear_bit(EC_FLAGS_EVENTS_MASKED, &ec->flags);
+        if (ec->gpe >= 0)
+            acpi_ec_enable_gpe(ec, false);
+        else
+            enable_irq(ec->irq);
+
         ec_dbg_drv("Polling disabled");
     }
 }
···
 static void acpi_ec_submit_query(struct acpi_ec *ec)
 {
-    acpi_ec_mask_gpe(ec);
+    acpi_ec_mask_events(ec);
     if (!acpi_ec_event_enabled(ec))
         return;
     if (!test_and_set_bit(EC_FLAGS_QUERY_PENDING, &ec->flags)) {
···
     if (test_and_clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags))
         ec_dbg_evt("Command(%s) unblocked",
                    acpi_ec_cmd_string(ACPI_EC_COMMAND_QUERY));
-    acpi_ec_unmask_gpe(ec);
+    acpi_ec_unmask_events(ec);
 }

 static inline void __acpi_ec_enable_event(struct acpi_ec *ec)
···
     * ensure a hardware STS 0->1 change after this clearing can always
     * trigger a GPE interrupt.
     */
-    acpi_ec_clear_gpe(ec);
+    if (ec->gpe >= 0)
+        acpi_ec_clear_gpe(ec);
+
     status = acpi_ec_read_status(ec);
     t = ec->curr;
     /*
···
         ++t->irq_count;
         /* Allow triggering on 0 threshold */
         if (t->irq_count == ec_storm_threshold)
-            acpi_ec_mask_gpe(ec);
+            acpi_ec_mask_events(ec);
     }
 }
 out:
···
     spin_lock_irqsave(&ec->lock, tmp);
     if (t->irq_count == ec_storm_threshold)
-        acpi_ec_unmask_gpe(ec);
+        acpi_ec_unmask_events(ec);
     ec_dbg_req("Command(%s) stopped", acpi_ec_cmd_string(t->command));
     ec->curr = NULL;
     /* Disable GPE for command processing (IBF=0/OBF=1) */
···
     acpi_ec_check_event(ec);
 }

-static u32 acpi_ec_gpe_handler(acpi_handle gpe_device,
-                               u32 gpe_number, void *data)
+static void acpi_ec_handle_interrupt(struct acpi_ec *ec)
 {
     unsigned long flags;
-    struct acpi_ec *ec = data;

     spin_lock_irqsave(&ec->lock, flags);
     advance_transaction(ec);
     spin_unlock_irqrestore(&ec->lock, flags);
+}
+
+static u32 acpi_ec_gpe_handler(acpi_handle gpe_device,
+                               u32 gpe_number, void *data)
+{
+    acpi_ec_handle_interrupt(data);
     return ACPI_INTERRUPT_HANDLED;
+}
+
+static irqreturn_t acpi_ec_irq_handler(int irq, void *data)
+{
+    acpi_ec_handle_interrupt(data);
+    return IRQ_HANDLED;
 }

 /* --------------------------------------------------------------------------
···
     ec->timestamp = jiffies;
     ec->busy_polling = true;
     ec->polling_guard = 0;
+    ec->gpe = -1;
+    ec->irq = -1;
     return ec;
 }
···
         /* Get GPE bit assignment (EC events). */
         /* TODO: Add support for _GPE returning a package */
         status = acpi_evaluate_integer(handle, "_GPE", NULL, &tmp);
-        if (ACPI_FAILURE(status))
-            return status;
-        ec->gpe = tmp;
+        if (ACPI_SUCCESS(status))
+            ec->gpe = tmp;
+
+        /*
+         * Errors are non-fatal, allowing for ACPI Reduced Hardware
+         * platforms which use GpioInt instead of GPE.
+         */
     }
     /* Use the global lock for all EC transactions? */
     tmp = 0;
···
     return AE_CTRL_TERMINATE;
 }

+static void install_gpe_event_handler(struct acpi_ec *ec)
+{
+    acpi_status status =
+        acpi_install_gpe_raw_handler(NULL, ec->gpe,
+                                     ACPI_GPE_EDGE_TRIGGERED,
+                                     &acpi_ec_gpe_handler,
+                                     ec);
+    if (ACPI_SUCCESS(status)) {
+        /* This is not fatal as we can poll EC events */
+        set_bit(EC_FLAGS_EVENT_HANDLER_INSTALLED, &ec->flags);
+        acpi_ec_leave_noirq(ec);
+        if (test_bit(EC_FLAGS_STARTED, &ec->flags) &&
+            ec->reference_count >= 1)
+            acpi_ec_enable_gpe(ec, true);
+    }
+}
+
+/* ACPI reduced hardware platforms use a GpioInt specified in _CRS. */
+static int install_gpio_irq_event_handler(struct acpi_ec *ec,
+                                          struct acpi_device *device)
+{
+    int irq = acpi_dev_gpio_irq_get(device, 0);
+    int ret;
+
+    if (irq < 0)
+        return irq;
+
+    ret = request_irq(irq, acpi_ec_irq_handler, IRQF_SHARED,
+                      "ACPI EC", ec);
+
+    /*
+     * Unlike the GPE case, we treat errors here as fatal, we'll only
+     * implement GPIO polling if we find a case that needs it.
+     */
+    if (ret < 0)
+        return ret;
+
+    ec->irq = irq;
+    set_bit(EC_FLAGS_EVENT_HANDLER_INSTALLED, &ec->flags);
+    acpi_ec_leave_noirq(ec);
+
+    return 0;
+}
+
 /*
  * Note: This function returns an error code only when the address space
  *       handler is not installed, which means "not able to handle
  *       transactions".
  */
-static int ec_install_handlers(struct acpi_ec *ec, bool handle_events)
+static int ec_install_handlers(struct acpi_ec *ec, struct acpi_device *device,
+                               bool handle_events)
 {
     acpi_status status;
···
     if (!handle_events)
         return 0;

-    if (!test_bit(EC_FLAGS_EVT_HANDLER_INSTALLED, &ec->flags)) {
+    if (!test_bit(EC_FLAGS_QUERY_METHODS_INSTALLED, &ec->flags)) {
         /* Find and register all query methods */
         acpi_walk_namespace(ACPI_TYPE_METHOD, ec->handle, 1,
                             acpi_ec_register_query_methods,
                             NULL, ec, NULL);
-        set_bit(EC_FLAGS_EVT_HANDLER_INSTALLED, &ec->flags);
+        set_bit(EC_FLAGS_QUERY_METHODS_INSTALLED, &ec->flags);
     }
-    if (!test_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags)) {
-        status = acpi_install_gpe_raw_handler(NULL, ec->gpe,
-                                              ACPI_GPE_EDGE_TRIGGERED,
-                                              &acpi_ec_gpe_handler, ec);
-        /* This is not fatal as we can poll EC events */
-        if (ACPI_SUCCESS(status)) {
-            set_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags);
-            acpi_ec_leave_noirq(ec);
-            if (test_bit(EC_FLAGS_STARTED, &ec->flags) &&
-                ec->reference_count >= 1)
-                acpi_ec_enable_gpe(ec, true);
+    if (!test_bit(EC_FLAGS_EVENT_HANDLER_INSTALLED, &ec->flags)) {
+        if (ec->gpe >= 0) {
+            install_gpe_event_handler(ec);
+        } else if (device) {
+            int ret = install_gpio_irq_event_handler(ec, device);
+
+            if (ret)
+                return ret;
+        } else { /* No GPE and no GpioInt? */
+            return -ENODEV;
         }
     }
     /* EC is fully operational, allow queries */
···
     */
     acpi_ec_stop(ec, false);

-    if (test_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags)) {
-        if (ACPI_FAILURE(acpi_remove_gpe_handler(NULL, ec->gpe,
-                                                 &acpi_ec_gpe_handler)))
+    if (test_bit(EC_FLAGS_EVENT_HANDLER_INSTALLED, &ec->flags)) {
+        if (ec->gpe >= 0 &&
+            ACPI_FAILURE(acpi_remove_gpe_handler(NULL, ec->gpe,
+                                                 &acpi_ec_gpe_handler)))
             pr_err("failed to remove gpe handler\n");
-        clear_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags);
+
+        if (ec->irq >= 0)
+            free_irq(ec->irq, ec);
+
+        clear_bit(EC_FLAGS_EVENT_HANDLER_INSTALLED, &ec->flags);
     }
-    if (test_bit(EC_FLAGS_EVT_HANDLER_INSTALLED, &ec->flags)) {
+    if (test_bit(EC_FLAGS_QUERY_METHODS_INSTALLED, &ec->flags)) {
         acpi_ec_remove_query_handlers(ec, true, 0);
-        clear_bit(EC_FLAGS_EVT_HANDLER_INSTALLED, &ec->flags);
+        clear_bit(EC_FLAGS_QUERY_METHODS_INSTALLED, &ec->flags);
     }
 }

-static int acpi_ec_setup(struct acpi_ec *ec, bool handle_events)
+static int acpi_ec_setup(struct acpi_ec *ec, struct acpi_device *device,
+                         bool handle_events)
 {
     int ret;

-    ret = ec_install_handlers(ec, handle_events);
+    ret = ec_install_handlers(ec, device, handle_events);
     if (ret)
         return ret;
···
     }

     acpi_handle_info(ec->handle,
-                     "GPE=0x%x, EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n",
-                     ec->gpe, ec->command_addr, ec->data_addr);
+                     "GPE=0x%x, IRQ=%d, EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n",
+                     ec->gpe, ec->irq, ec->command_addr, ec->data_addr);
     return ret;
 }
···
     }

-    ret = acpi_ec_setup(ec, true);
+    ret = acpi_ec_setup(ec, device, true);
     if (ret)
         goto err_query;
···
     * At this point, the GPE is not fully initialized, so do not to
     * handle the events.
     */
-    ret = acpi_ec_setup(ec, false);
+    ret = acpi_ec_setup(ec, NULL, false);
     if (ret) {
         acpi_ec_free(ec);
         return;
···
         ec->command_addr = ecdt_ptr->control.address;
         ec->data_addr = ecdt_ptr->data.address;
     }
-    ec->gpe = ecdt_ptr->gpe;
+
+    /*
+     * Ignore the GPE value on Reduced Hardware platforms.
+     * Some products have this set to an erroneous value.
+     */
+    if (!acpi_gbl_reduced_hardware)
+        ec->gpe = ecdt_ptr->gpe;
+
     ec->handle = ACPI_ROOT_OBJECT;

     /*
      * At this point, the namespace is not initialized, so do not find
      * the namespace objects, or handle the events.
      */
-    ret = acpi_ec_setup(ec, false);
+    ret = acpi_ec_setup(ec, NULL, false);
     if (ret) {
         acpi_ec_free(ec);
         return;
···
     * masked at the low level without side effects.
     */
     if (ec_no_wakeup && test_bit(EC_FLAGS_STARTED, &ec->flags) &&
-        ec->reference_count >= 1)
+        ec->gpe >= 0 && ec->reference_count >= 1)
         acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_DISABLE);

     acpi_ec_enter_noirq(ec);
···
     acpi_ec_leave_noirq(ec);

     if (ec_no_wakeup && test_bit(EC_FLAGS_STARTED, &ec->flags) &&
-        ec->reference_count >= 1)
+        ec->gpe >= 0 && ec->reference_count >= 1)
         acpi_set_gpe(NULL, ec->gpe, ACPI_GPE_ENABLE);

     return 0;
+7
drivers/acpi/hmat/Kconfig drivers/acpi/numa/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 + config ACPI_NUMA 3 + bool "NUMA support" 4 + depends on NUMA 5 + depends on (X86 || IA64 || ARM64) 6 + default y if IA64 || ARM64 7 + 2 8 config ACPI_HMAT 3 9 bool "ACPI Heterogeneous Memory Attribute Table Support" 4 10 depends on ACPI_NUMA 5 11 select HMEM_REPORTING 12 + select MEMREGION 6 13 help 7 14 If set, this option has the kernel parse and report the 8 15 platform's ACPI HMAT (Heterogeneous Memory Attributes Table),
-2
drivers/acpi/hmat/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - obj-$(CONFIG_ACPI_HMAT) := hmat.o
+135 -23
drivers/acpi/hmat/hmat.c drivers/acpi/numa/hmat.c
··· 8 8 * the applicable attributes with the node's interfaces. 9 9 */ 10 10 11 + #define pr_fmt(fmt) "acpi/hmat: " fmt 12 + #define dev_fmt(fmt) "acpi/hmat: " fmt 13 + 11 14 #include <linux/acpi.h> 12 15 #include <linux/bitops.h> 13 16 #include <linux/device.h> 14 17 #include <linux/init.h> 15 18 #include <linux/list.h> 19 + #include <linux/mm.h> 20 + #include <linux/platform_device.h> 16 21 #include <linux/list_sort.h> 22 + #include <linux/memregion.h> 17 23 #include <linux/memory.h> 18 24 #include <linux/mutex.h> 19 25 #include <linux/node.h> ··· 55 49 struct list_head node; 56 50 unsigned int memory_pxm; 57 51 unsigned int processor_pxm; 52 + struct resource memregions; 58 53 struct node_hmem_attrs hmem_attrs; 59 54 struct list_head caches; 60 55 struct node_cache_attrs cache_attrs; ··· 111 104 list_add_tail(&initiator->node, &initiators); 112 105 } 113 106 114 - static __init void alloc_memory_target(unsigned int mem_pxm) 107 + static __init void alloc_memory_target(unsigned int mem_pxm, 108 + resource_size_t start, resource_size_t len) 115 109 { 116 110 struct memory_target *target; 117 111 118 112 target = find_mem_target(mem_pxm); 119 - if (target) 120 - return; 113 + if (!target) { 114 + target = kzalloc(sizeof(*target), GFP_KERNEL); 115 + if (!target) 116 + return; 117 + target->memory_pxm = mem_pxm; 118 + target->processor_pxm = PXM_INVAL; 119 + target->memregions = (struct resource) { 120 + .name = "ACPI mem", 121 + .start = 0, 122 + .end = -1, 123 + .flags = IORESOURCE_MEM, 124 + }; 125 + list_add_tail(&target->node, &targets); 126 + INIT_LIST_HEAD(&target->caches); 127 + } 121 128 122 - target = kzalloc(sizeof(*target), GFP_KERNEL); 123 - if (!target) 124 - return; 125 - 126 - target->memory_pxm = mem_pxm; 127 - target->processor_pxm = PXM_INVAL; 128 - list_add_tail(&target->node, &targets); 129 - INIT_LIST_HEAD(&target->caches); 129 + /* 130 + * There are potentially multiple ranges per PXM, so record each 131 + * in the per-target memregions 
resource tree. 132 + */ 133 + if (!__request_region(&target->memregions, start, len, "memory target", 134 + IORESOURCE_MEM)) 135 + pr_warn("failed to reserve %#llx - %#llx in pxm: %d\n", 136 + start, start + len, mem_pxm); 130 137 } 131 138 132 139 static __init const char *hmat_data_type(u8 type) ··· 293 272 u8 type, mem_hier; 294 273 295 274 if (hmat_loc->header.length < sizeof(*hmat_loc)) { 296 - pr_notice("HMAT: Unexpected locality header length: %d\n", 275 + pr_notice("HMAT: Unexpected locality header length: %u\n", 297 276 hmat_loc->header.length); 298 277 return -EINVAL; 299 278 } ··· 305 284 total_size = sizeof(*hmat_loc) + sizeof(*entries) * ipds * tpds + 306 285 sizeof(*inits) * ipds + sizeof(*targs) * tpds; 307 286 if (hmat_loc->header.length < total_size) { 308 - pr_notice("HMAT: Unexpected locality header length:%d, minimum required:%d\n", 287 + pr_notice("HMAT: Unexpected locality header length:%u, minimum required:%u\n", 309 288 hmat_loc->header.length, total_size); 310 289 return -EINVAL; 311 290 } 312 291 313 - pr_info("HMAT: Locality: Flags:%02x Type:%s Initiator Domains:%d Target Domains:%d Base:%lld\n", 292 + pr_info("HMAT: Locality: Flags:%02x Type:%s Initiator Domains:%u Target Domains:%u Base:%lld\n", 314 293 hmat_loc->flags, hmat_data_type(type), ipds, tpds, 315 294 hmat_loc->entry_base_unit); 316 295 ··· 323 302 value = hmat_normalize(entries[init * tpds + targ], 324 303 hmat_loc->entry_base_unit, 325 304 type); 326 - pr_info(" Initiator-Target[%d-%d]:%d%s\n", 305 + pr_info(" Initiator-Target[%u-%u]:%u%s\n", 327 306 inits[init], targs[targ], value, 328 307 hmat_data_type_suffix(type)); 329 308 ··· 350 329 u32 attrs; 351 330 352 331 if (cache->header.length < sizeof(*cache)) { 353 - pr_notice("HMAT: Unexpected cache header length: %d\n", 332 + pr_notice("HMAT: Unexpected cache header length: %u\n", 354 333 cache->header.length); 355 334 return -EINVAL; 356 335 } 357 336 358 337 attrs = cache->cache_attributes; 359 - pr_info("HMAT: Cache: 
Domain:%d Size:%llu Attrs:%08x SMBIOS Handles:%d\n", 338 + pr_info("HMAT: Cache: Domain:%u Size:%llu Attrs:%08x SMBIOS Handles:%d\n", 360 339 cache->memory_PD, cache->cache_size, attrs, 361 340 cache->number_of_SMBIOShandles); 362 341 ··· 411 390 struct memory_target *target = NULL; 412 391 413 392 if (p->header.length != sizeof(*p)) { 414 - pr_notice("HMAT: Unexpected address range header length: %d\n", 393 + pr_notice("HMAT: Unexpected address range header length: %u\n", 415 394 p->header.length); 416 395 return -EINVAL; 417 396 } 418 397 419 398 if (hmat_revision == 1) 420 - pr_info("HMAT: Memory (%#llx length %#llx) Flags:%04x Processor Domain:%d Memory Domain:%d\n", 399 + pr_info("HMAT: Memory (%#llx length %#llx) Flags:%04x Processor Domain:%u Memory Domain:%u\n", 421 400 p->reserved3, p->reserved4, p->flags, p->processor_PD, 422 401 p->memory_PD); 423 402 else 424 - pr_info("HMAT: Memory Flags:%04x Processor Domain:%d Memory Domain:%d\n", 403 + pr_info("HMAT: Memory Flags:%04x Processor Domain:%u Memory Domain:%u\n", 425 404 p->flags, p->processor_PD, p->memory_PD); 426 405 427 406 if (p->flags & ACPI_HMAT_MEMORY_PD_VALID && hmat_revision == 1) { ··· 438 417 pr_debug("HMAT: Invalid Processor Domain\n"); 439 418 return -EINVAL; 440 419 } 441 - target->processor_pxm = p_node; 420 + target->processor_pxm = p->processor_PD; 442 421 } 443 422 444 423 return 0; ··· 473 452 return -EINVAL; 474 453 if (!(ma->flags & ACPI_SRAT_MEM_ENABLED)) 475 454 return 0; 476 - alloc_memory_target(ma->proximity_domain); 455 + alloc_memory_target(ma->proximity_domain, ma->base_address, ma->length); 477 456 return 0; 478 457 } 479 458 ··· 634 613 node_set_perf_attrs(mem_nid, &target->hmem_attrs, 0); 635 614 } 636 615 616 + static void hmat_register_target_device(struct memory_target *target, 617 + struct resource *r) 618 + { 619 + /* define a clean / non-busy resource for the platform device */ 620 + struct resource res = { 621 + .start = r->start, 622 + .end = r->end, 623 + .flags 
= IORESOURCE_MEM, 624 + }; 625 + struct platform_device *pdev; 626 + struct memregion_info info; 627 + int rc, id; 628 + 629 + rc = region_intersects(res.start, resource_size(&res), IORESOURCE_MEM, 630 + IORES_DESC_SOFT_RESERVED); 631 + if (rc != REGION_INTERSECTS) 632 + return; 633 + 634 + id = memregion_alloc(GFP_KERNEL); 635 + if (id < 0) { 636 + pr_err("memregion allocation failure for %pr\n", &res); 637 + return; 638 + } 639 + 640 + pdev = platform_device_alloc("hmem", id); 641 + if (!pdev) { 642 + pr_err("hmem device allocation failure for %pr\n", &res); 643 + goto out_pdev; 644 + } 645 + 646 + pdev->dev.numa_node = acpi_map_pxm_to_online_node(target->memory_pxm); 647 + info = (struct memregion_info) { 648 + .target_node = acpi_map_pxm_to_node(target->memory_pxm), 649 + }; 650 + rc = platform_device_add_data(pdev, &info, sizeof(info)); 651 + if (rc < 0) { 652 + pr_err("hmem memregion_info allocation failure for %pr\n", &res); 653 + goto out_pdev; 654 + } 655 + 656 + rc = platform_device_add_resources(pdev, &res, 1); 657 + if (rc < 0) { 658 + pr_err("hmem resource allocation failure for %pr\n", &res); 659 + goto out_resource; 660 + } 661 + 662 + rc = platform_device_add(pdev); 663 + if (rc < 0) { 664 + dev_err(&pdev->dev, "device add failed for %pr\n", &res); 665 + goto out_resource; 666 + } 667 + 668 + return; 669 + 670 + out_resource: 671 + put_device(&pdev->dev); 672 + out_pdev: 673 + memregion_free(id); 674 + } 675 + 676 + static void hmat_register_target_devices(struct memory_target *target) 677 + { 678 + struct resource *res; 679 + 680 + /* 681 + * Do not bother creating devices if no driver is available to 682 + * consume them. 
683 + */ 684 + if (!IS_ENABLED(CONFIG_DEV_DAX_HMEM)) 685 + return; 686 + 687 + for (res = target->memregions.child; res; res = res->sibling) 688 + hmat_register_target_device(target, res); 689 + } 690 + 637 691 static void hmat_register_target(struct memory_target *target) 638 692 { 639 693 int nid = pxm_to_node(target->memory_pxm); 694 + 695 + /* 696 + * Devices may belong to either an offline or online 697 + * node, so unconditionally add them. 698 + */ 699 + hmat_register_target_devices(target); 640 700 641 701 /* 642 702 * Skip offline nodes. This can happen when memory ··· 779 677 struct target_cache *tcache, *cnext; 780 678 781 679 list_for_each_entry_safe(target, tnext, &targets, node) { 680 + struct resource *res, *res_next; 681 + 782 682 list_for_each_entry_safe(tcache, cnext, &target->caches, node) { 783 683 list_del(&tcache->node); 784 684 kfree(tcache); 785 685 } 686 + 786 687 list_del(&target->node); 688 + res = target->memregions.child; 689 + while (res) { 690 + res_next = res->sibling; 691 + __release_region(&target->memregions, res->start, 692 + resource_size(res)); 693 + res = res_next; 694 + } 787 695 kfree(target); 788 696 } 789 697 ··· 860 748 acpi_put_table(tbl); 861 749 return 0; 862 750 } 863 - subsys_initcall(hmat_init); 751 + device_initcall(hmat_init);
+2 -1
drivers/acpi/internal.h
··· 165 165 -------------------------------------------------------------------------- */ 166 166 struct acpi_ec { 167 167 acpi_handle handle; 168 - u32 gpe; 168 + int gpe; 169 + int irq; 169 170 unsigned long command_addr; 170 171 unsigned long data_addr; 171 172 bool global_lock;
drivers/acpi/numa.c drivers/acpi/numa/srat.c
+3
drivers/acpi/numa/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + obj-$(CONFIG_ACPI_NUMA) += srat.o 3 + obj-$(CONFIG_ACPI_HMAT) += hmat.o
+3 -3
drivers/acpi/osi.c
··· 473 473 */ 474 474 475 475 /* 476 - * Without this this EEEpc exports a non working WMI interface, with 477 - * this it exports a working "good old" eeepc_laptop interface, fixing 478 - * both brightness control, and rfkill not working. 476 + * Without this EEEpc exports a non working WMI interface, with 477 + * this it exports a working "good old" eeepc_laptop interface, 478 + * fixing both brightness control, and rfkill not working. 479 479 */ 480 480 { 481 481 .callback = dmi_enable_osi_linux,
+13 -7
drivers/acpi/pmic/intel_pmic.c
··· 252 252 struct regmap *regmap, 253 253 struct intel_pmic_opregion_data *d) 254 254 { 255 - acpi_status status; 255 + acpi_status status = AE_OK; 256 256 struct intel_pmic_opregion *opregion; 257 257 int ret; 258 258 ··· 270 270 opregion->regmap = regmap; 271 271 opregion->lpat_table = acpi_lpat_get_conversion_table(handle); 272 272 273 - status = acpi_install_address_space_handler(handle, 273 + if (d->power_table_count) 274 + status = acpi_install_address_space_handler(handle, 274 275 PMIC_POWER_OPREGION_ID, 275 276 intel_pmic_power_handler, 276 277 NULL, opregion); ··· 280 279 goto out_error; 281 280 } 282 281 283 - status = acpi_install_address_space_handler(handle, 282 + if (d->thermal_table_count) 283 + status = acpi_install_address_space_handler(handle, 284 284 PMIC_THERMAL_OPREGION_ID, 285 285 intel_pmic_thermal_handler, 286 286 NULL, opregion); ··· 303 301 return 0; 304 302 305 303 out_remove_thermal_handler: 306 - acpi_remove_address_space_handler(handle, PMIC_THERMAL_OPREGION_ID, 307 - intel_pmic_thermal_handler); 304 + if (d->thermal_table_count) 305 + acpi_remove_address_space_handler(handle, 306 + PMIC_THERMAL_OPREGION_ID, 307 + intel_pmic_thermal_handler); 308 308 309 309 out_remove_power_handler: 310 - acpi_remove_address_space_handler(handle, PMIC_POWER_OPREGION_ID, 311 - intel_pmic_power_handler); 310 + if (d->power_table_count) 311 + acpi_remove_address_space_handler(handle, 312 + PMIC_POWER_OPREGION_ID, 313 + intel_pmic_power_handler); 312 314 313 315 out_error: 314 316 acpi_lpat_free_conversion_table(opregion->lpat_table);
+44
drivers/acpi/pmic/intel_pmic_chtcrc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Intel Cherry Trail Crystal Cove PMIC operation region driver 4 + * 5 + * Copyright (C) 2019 Hans de Goede <hdegoede@redhat.com> 6 + */ 7 + 8 + #include <linux/acpi.h> 9 + #include <linux/init.h> 10 + #include <linux/mfd/intel_soc_pmic.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/regmap.h> 13 + #include "intel_pmic.h" 14 + 15 + /* 16 + * We have no docs for the CHT Crystal Cove PMIC. The Asus Zenfone-2 kernel 17 + * code has 2 Crystal Cove regulator drivers, one calls the PMIC a "Crystal 18 + * Cove Plus" PMIC and talks about Cherry Trail, so presumably that one 19 + * could be used to get register info for the regulators if we need to 20 + * implement regulator support in the future. 21 + * 22 + * For now the sole purpose of this driver is to make 23 + * intel_soc_pmic_exec_mipi_pmic_seq_element work on devices with a 24 + * CHT Crystal Cove PMIC. 25 + */ 26 + static struct intel_pmic_opregion_data intel_chtcrc_pmic_opregion_data = { 27 + .pmic_i2c_address = 0x6e, 28 + }; 29 + 30 + static int intel_chtcrc_pmic_opregion_probe(struct platform_device *pdev) 31 + { 32 + struct intel_soc_pmic *pmic = dev_get_drvdata(pdev->dev.parent); 33 + return intel_pmic_install_opregion_handler(&pdev->dev, 34 + ACPI_HANDLE(pdev->dev.parent), pmic->regmap, 35 + &intel_chtcrc_pmic_opregion_data); 36 + } 37 + 38 + static struct platform_driver intel_chtcrc_pmic_opregion_driver = { 39 + .probe = intel_chtcrc_pmic_opregion_probe, 40 + .driver = { 41 + .name = "cht_crystal_cove_pmic", 42 + }, 43 + }; 44 + builtin_platform_driver(intel_chtcrc_pmic_opregion_driver);

+2 -2
drivers/acpi/pmic/intel_pmic_crc.c drivers/acpi/pmic/intel_pmic_bytcrc.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Intel CrystalCove PMIC operation region driver 3 + * Intel Bay Trail Crystal Cove PMIC operation region driver 4 4 * 5 5 * Copyright (C) 2014 Intel Corporation. All rights reserved. 6 6 */ ··· 295 295 static struct platform_driver intel_crc_pmic_opregion_driver = { 296 296 .probe = intel_crc_pmic_opregion_probe, 297 297 .driver = { 298 - .name = "crystal_cove_pmic", 298 + .name = "byt_crystal_cove_pmic", 299 299 }, 300 300 }; 301 301 builtin_platform_driver(intel_crc_pmic_opregion_driver);
+1
drivers/acpi/scan.c
··· 2174 2174 acpi_pci_root_init(); 2175 2175 acpi_pci_link_init(); 2176 2176 acpi_processor_init(); 2177 + acpi_platform_init(); 2177 2178 acpi_lpss_init(); 2178 2179 acpi_apd_init(); 2179 2180 acpi_cmos_rtc_init();
+32
drivers/acpi/utils.c
··· 455 455 456 456 /** 457 457 * acpi_handle_path: Return the object path of handle 458 + * @handle: ACPI device handle 458 459 * 459 460 * Caller must free the returned buffer 460 461 */ ··· 474 473 475 474 /** 476 475 * acpi_handle_printk: Print message with ACPI prefix and object path 476 + * @level: log level 477 + * @handle: ACPI device handle 478 + * @fmt: format string 477 479 * 478 480 * This function is called through acpi_handle_<level> macros and prints 479 481 * a message with ACPI prefix and object path. This function acquires ··· 505 501 #if defined(CONFIG_DYNAMIC_DEBUG) 506 502 /** 507 503 * __acpi_handle_debug: pr_debug with ACPI prefix and object path 504 + * @descriptor: Dynamic Debug descriptor 505 + * @handle: ACPI device handle 506 + * @fmt: format string 508 507 * 509 508 * This function is called through acpi_handle_debug macro and debug 510 509 * prints a message with ACPI prefix and object path. This function ··· 700 693 return false; 701 694 } 702 695 EXPORT_SYMBOL(acpi_check_dsm); 696 + 697 + /** 698 + * acpi_dev_hid_uid_match - Match device by supplied HID and UID 699 + * @adev: ACPI device to match. 700 + * @hid2: Hardware ID of the device. 701 + * @uid2: Unique ID of the device, pass NULL to not check _UID. 702 + * 703 + * Matches HID and UID in @adev with given @hid2 and @uid2. 704 + * Returns true if matches. 705 + */ 706 + bool acpi_dev_hid_uid_match(struct acpi_device *adev, 707 + const char *hid2, const char *uid2) 708 + { 709 + const char *hid1 = acpi_device_hid(adev); 710 + const char *uid1 = acpi_device_uid(adev); 711 + 712 + if (strcmp(hid1, hid2)) 713 + return false; 714 + 715 + if (!uid2) 716 + return true; 717 + 718 + return uid1 && !strcmp(uid1, uid2); 719 + } 720 + EXPORT_SYMBOL(acpi_dev_hid_uid_match); 703 721 704 722 /** 705 723 * acpi_dev_found - Detect presence of a given ACPI device in the namespace.
+22 -5
drivers/dax/Kconfig
··· 32 32 33 33 Say M if unsure 34 34 35 + config DEV_DAX_HMEM 36 + tristate "HMEM DAX: direct access to 'specific purpose' memory" 37 + depends on EFI_SOFT_RESERVE 38 + default DEV_DAX 39 + help 40 + EFI 2.8 platforms, and others, may advertise 'specific purpose' 41 + memory. For example, a high bandwidth memory pool. The 42 + indication from platform firmware is meant to reserve the 43 + memory from typical usage by default. This driver creates 44 + device-dax instances for these memory ranges, and that also 45 + enables the possibility to assign them to the DEV_DAX_KMEM 46 + driver to override the reservation and add them to kernel 47 + "System RAM" pool. 48 + 49 + Say M if unsure. 50 + 35 51 config DEV_DAX_KMEM 36 52 tristate "KMEM DAX: volatile-use of persistent memory" 37 53 default DEV_DAX 38 54 depends on DEV_DAX 39 55 depends on MEMORY_HOTPLUG # for add_memory() and friends 40 56 help 41 - Support access to persistent memory as if it were RAM. This 42 - allows easier use of persistent memory by unmodified 43 - applications. 57 + Support access to persistent, or other performance 58 + differentiated memory as if it were System RAM. This allows 59 + easier use of persistent memory by unmodified applications, or 60 + adds core kernel memory services to heterogeneous memory types 61 + (HMEM) marked "reserved" by platform firmware. 44 62 45 63 To use this feature, a DAX device must be unbound from the 46 - device_dax driver (PMEM DAX) and bound to this kmem driver 47 - on each boot. 64 + device_dax driver and bound to this kmem driver on each boot. 48 65 49 66 Say N if unsure. 50 67
+2
drivers/dax/Makefile
··· 2 2 obj-$(CONFIG_DAX) += dax.o 3 3 obj-$(CONFIG_DEV_DAX) += device_dax.o 4 4 obj-$(CONFIG_DEV_DAX_KMEM) += kmem.o 5 + obj-$(CONFIG_DEV_DAX_HMEM) += dax_hmem.o 5 6 6 7 dax-y := super.o 7 8 dax-y += bus.o 8 9 device_dax-y := device.o 10 + dax_hmem-y := hmem.o 9 11 10 12 obj-y += pmem/
+1 -1
drivers/dax/bus.c
··· 227 227 228 228 struct dax_region *alloc_dax_region(struct device *parent, int region_id, 229 229 struct resource *res, int target_node, unsigned int align, 230 - unsigned long pfn_flags) 230 + unsigned long long pfn_flags) 231 231 { 232 232 struct dax_region *dax_region; 233 233
+1 -1
drivers/dax/bus.h
··· 11 11 void dax_region_put(struct dax_region *dax_region); 12 12 struct dax_region *alloc_dax_region(struct device *parent, int region_id, 13 13 struct resource *res, int target_node, unsigned int align, 14 - unsigned long flags); 14 + unsigned long long flags); 15 15 16 16 enum dev_dax_subsys { 17 17 DEV_DAX_BUS,
+1 -1
drivers/dax/dax-private.h
··· 32 32 struct device *dev; 33 33 unsigned int align; 34 34 struct resource res; 35 - unsigned long pfn_flags; 35 + unsigned long long pfn_flags; 36 36 }; 37 37 38 38 /**
+56
drivers/dax/hmem.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/platform_device.h> 3 + #include <linux/memregion.h> 4 + #include <linux/module.h> 5 + #include <linux/pfn_t.h> 6 + #include "bus.h" 7 + 8 + static int dax_hmem_probe(struct platform_device *pdev) 9 + { 10 + struct device *dev = &pdev->dev; 11 + struct dev_pagemap pgmap = { }; 12 + struct dax_region *dax_region; 13 + struct memregion_info *mri; 14 + struct dev_dax *dev_dax; 15 + struct resource *res; 16 + 17 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 18 + if (!res) 19 + return -ENOMEM; 20 + 21 + mri = dev->platform_data; 22 + memcpy(&pgmap.res, res, sizeof(*res)); 23 + 24 + dax_region = alloc_dax_region(dev, pdev->id, res, mri->target_node, 25 + PMD_SIZE, PFN_DEV|PFN_MAP); 26 + if (!dax_region) 27 + return -ENOMEM; 28 + 29 + dev_dax = devm_create_dev_dax(dax_region, 0, &pgmap); 30 + if (IS_ERR(dev_dax)) 31 + return PTR_ERR(dev_dax); 32 + 33 + /* child dev_dax instances now own the lifetime of the dax_region */ 34 + dax_region_put(dax_region); 35 + return 0; 36 + } 37 + 38 + static int dax_hmem_remove(struct platform_device *pdev) 39 + { 40 + /* devm handles teardown */ 41 + return 0; 42 + } 43 + 44 + static struct platform_driver dax_hmem_driver = { 45 + .probe = dax_hmem_probe, 46 + .remove = dax_hmem_remove, 47 + .driver = { 48 + .name = "hmem", 49 + }, 50 + }; 51 + 52 + module_platform_driver(dax_hmem_driver); 53 + 54 + MODULE_ALIAS("platform:hmem*"); 55 + MODULE_LICENSE("GPL v2"); 56 + MODULE_AUTHOR("Intel Corporation");
+21
drivers/firmware/efi/Kconfig
··· 75 75 Ranges can be set up to this value using comma-separated list. 76 76 The default value is 8. 77 77 78 + config EFI_SOFT_RESERVE 79 + bool "Reserve EFI Specific Purpose Memory" 80 + depends on EFI && EFI_STUB && ACPI_HMAT 81 + default ACPI_HMAT 82 + help 83 + On systems that have mixed performance classes of memory EFI 84 + may indicate specific purpose memory with an attribute (See 85 + EFI_MEMORY_SP in UEFI 2.8). A memory range tagged with this 86 + attribute may have unique performance characteristics compared 87 + to the system's general purpose "System RAM" pool. On the 88 + expectation that such memory has application specific usage, 89 + and its base EFI memory type is "conventional" answer Y to 90 + arrange for the kernel to reserve it as a "Soft Reserved" 91 + resource, and set aside for direct-access (device-dax) by 92 + default. The memory range can later be optionally assigned to 93 + the page allocator by system administrator policy via the 94 + device-dax kmem facility. Say N to have the kernel treat this 95 + memory as "System RAM" by default. 96 + 97 + If unsure, say Y. 98 + 78 99 config EFI_PARAMS_FROM_FDT 79 100 bool 80 101 help
+4 -1
drivers/firmware/efi/Makefile
··· 20 20 obj-$(CONFIG_EFI_RUNTIME_MAP) += runtime-map.o 21 21 obj-$(CONFIG_EFI_RUNTIME_WRAPPERS) += runtime-wrappers.o 22 22 obj-$(CONFIG_EFI_STUB) += libstub/ 23 - obj-$(CONFIG_EFI_FAKE_MEMMAP) += fake_mem.o 23 + obj-$(CONFIG_EFI_FAKE_MEMMAP) += fake_map.o 24 24 obj-$(CONFIG_EFI_BOOTLOADER_CONTROL) += efibc.o 25 25 obj-$(CONFIG_EFI_TEST) += test/ 26 26 obj-$(CONFIG_EFI_DEV_PATH_PARSER) += dev-path-parser.o 27 27 obj-$(CONFIG_APPLE_PROPERTIES) += apple-properties.o 28 28 obj-$(CONFIG_EFI_RCI2_TABLE) += rci2-table.o 29 + 30 + fake_map-y += fake_mem.o 31 + fake_map-$(CONFIG_X86) += x86_fake_mem.o 29 32 30 33 arm-obj-$(CONFIG_EFI) := arm-init.o arm-runtime.o 31 34 obj-$(CONFIG_ARM) += $(arm-obj-y)
+9
drivers/firmware/efi/arm-init.c
··· 164 164 case EFI_CONVENTIONAL_MEMORY: 165 165 case EFI_PERSISTENT_MEMORY: 166 166 /* 167 + * Special purpose memory is 'soft reserved', which means it 168 + * is set aside initially, but can be hotplugged back in or 169 + * be assigned to the dax driver after boot. 170 + */ 171 + if (efi_soft_reserve_enabled() && 172 + (md->attribute & EFI_MEMORY_SP)) 173 + return false; 174 + 175 + /* 167 176 * According to the spec, these regions are no longer reserved 168 177 * after calling ExitBootServices(). However, we can only use 169 178 * them as System RAM if they can be mapped writeback cacheable.
+24
drivers/firmware/efi/arm-runtime.c
··· 121 121 return 0; 122 122 } 123 123 124 + if (efi_soft_reserve_enabled()) { 125 + efi_memory_desc_t *md; 126 + 127 + for_each_efi_memory_desc(md) { 128 + int md_size = md->num_pages << EFI_PAGE_SHIFT; 129 + struct resource *res; 130 + 131 + if (!(md->attribute & EFI_MEMORY_SP)) 132 + continue; 133 + 134 + res = kzalloc(sizeof(*res), GFP_KERNEL); 135 + if (WARN_ON(!res)) 136 + break; 137 + 138 + res->start = md->phys_addr; 139 + res->end = md->phys_addr + md_size - 1; 140 + res->name = "Soft Reserved"; 141 + res->flags = IORESOURCE_MEM; 142 + res->desc = IORES_DESC_SOFT_RESERVED; 143 + 144 + insert_resource(&iomem_resource, res); 145 + } 146 + } 147 + 124 148 if (efi_runtime_disabled()) { 125 149 pr_info("EFI runtime services will be disabled.\n"); 126 150 return 0;
+12 -3
drivers/firmware/efi/efi.c
··· 81 81 return disable_runtime; 82 82 } 83 83 84 + bool __pure __efi_soft_reserve_enabled(void) 85 + { 86 + return !efi_enabled(EFI_MEM_NO_SOFT_RESERVE); 87 + } 88 + 84 89 static int __init parse_efi_cmdline(char *str) 85 90 { 86 91 if (!str) { ··· 98 93 99 94 if (parse_option_str(str, "noruntime")) 100 95 disable_runtime = true; 96 + 97 + if (parse_option_str(str, "nosoftreserve")) 98 + set_bit(EFI_MEM_NO_SOFT_RESERVE, &efi.flags); 101 99 102 100 return 0; 103 101 } ··· 304 296 goto free_data; 305 297 } 306 298 307 - ret = acpi_load_table(data); 299 + ret = acpi_load_table(data, NULL); 308 300 if (ret) { 309 301 pr_err("failed to load table: %d\n", ret); 310 302 goto free_data; ··· 850 842 if (attr & ~(EFI_MEMORY_UC | EFI_MEMORY_WC | EFI_MEMORY_WT | 851 843 EFI_MEMORY_WB | EFI_MEMORY_UCE | EFI_MEMORY_RO | 852 844 EFI_MEMORY_WP | EFI_MEMORY_RP | EFI_MEMORY_XP | 853 - EFI_MEMORY_NV | 845 + EFI_MEMORY_NV | EFI_MEMORY_SP | 854 846 EFI_MEMORY_RUNTIME | EFI_MEMORY_MORE_RELIABLE)) 855 847 snprintf(pos, size, "|attr=0x%016llx]", 856 848 (unsigned long long)attr); 857 849 else 858 850 snprintf(pos, size, 859 - "|%3s|%2s|%2s|%2s|%2s|%2s|%2s|%3s|%2s|%2s|%2s|%2s]", 851 + "|%3s|%2s|%2s|%2s|%2s|%2s|%2s|%2s|%3s|%2s|%2s|%2s|%2s]", 860 852 attr & EFI_MEMORY_RUNTIME ? "RUN" : "", 861 853 attr & EFI_MEMORY_MORE_RELIABLE ? "MR" : "", 854 + attr & EFI_MEMORY_SP ? "SP" : "", 862 855 attr & EFI_MEMORY_NV ? "NV" : "", 863 856 attr & EFI_MEMORY_XP ? "XP" : "", 864 857 attr & EFI_MEMORY_RP ? "RP" : "",
+3
drivers/firmware/efi/esrt.c
··· 246 246 int rc; 247 247 phys_addr_t end; 248 248 249 + if (!efi_enabled(EFI_MEMMAP)) 250 + return; 251 + 249 252 pr_debug("esrt-init: loading.\n"); 250 253 if (!esrt_table_exists()) 251 254 return;
+12 -14
drivers/firmware/efi/fake_mem.c
··· 17 17 #include <linux/memblock.h> 18 18 #include <linux/types.h> 19 19 #include <linux/sort.h> 20 - #include <asm/efi.h> 20 + #include "fake_mem.h" 21 21 22 - #define EFI_MAX_FAKEMEM CONFIG_EFI_MAX_FAKE_MEM 23 - 24 - static struct efi_mem_range fake_mems[EFI_MAX_FAKEMEM]; 25 - static int nr_fake_mem; 22 + struct efi_mem_range efi_fake_mems[EFI_MAX_FAKEMEM]; 23 + int nr_fake_mem; 26 24 27 25 static int __init cmp_fake_mem(const void *x1, const void *x2) 28 26 { ··· 42 44 void *new_memmap; 43 45 int i; 44 46 45 - if (!nr_fake_mem) 47 + if (!efi_enabled(EFI_MEMMAP) || !nr_fake_mem) 46 48 return; 47 49 48 50 /* count up the number of EFI memory descriptor */ 49 51 for (i = 0; i < nr_fake_mem; i++) { 50 52 for_each_efi_memory_desc(md) { 51 - struct range *r = &fake_mems[i].range; 53 + struct range *r = &efi_fake_mems[i].range; 52 54 53 55 new_nr_map += efi_memmap_split_count(md, r); 54 56 } ··· 68 70 } 69 71 70 72 for (i = 0; i < nr_fake_mem; i++) 71 - efi_memmap_insert(&efi.memmap, new_memmap, &fake_mems[i]); 73 + efi_memmap_insert(&efi.memmap, new_memmap, &efi_fake_mems[i]); 72 74 73 75 /* swap into new EFI memmap */ 74 76 early_memunmap(new_memmap, efi.memmap.desc_size * new_nr_map); ··· 102 104 if (nr_fake_mem >= EFI_MAX_FAKEMEM) 103 105 break; 104 106 105 - fake_mems[nr_fake_mem].range.start = start; 106 - fake_mems[nr_fake_mem].range.end = start + mem_size - 1; 107 - fake_mems[nr_fake_mem].attribute = attribute; 107 + efi_fake_mems[nr_fake_mem].range.start = start; 108 + efi_fake_mems[nr_fake_mem].range.end = start + mem_size - 1; 109 + efi_fake_mems[nr_fake_mem].attribute = attribute; 108 110 nr_fake_mem++; 109 111 110 112 if (*p == ',') 111 113 p++; 112 114 } 113 115 114 - sort(fake_mems, nr_fake_mem, sizeof(struct efi_mem_range), 116 + sort(efi_fake_mems, nr_fake_mem, sizeof(struct efi_mem_range), 115 117 cmp_fake_mem, NULL); 116 118 117 119 for (i = 0; i < nr_fake_mem; i++) 118 120 pr_info("efi_fake_mem: add attr=0x%016llx to [mem 0x%016llx-0x%016llx]", 
119 - fake_mems[i].attribute, fake_mems[i].range.start, 120 - fake_mems[i].range.end); 121 + efi_fake_mems[i].attribute, efi_fake_mems[i].range.start, 122 + efi_fake_mems[i].range.end); 121 123 122 124 return *p == '\0' ? 0 : -EINVAL; 123 125 }
+10
drivers/firmware/efi/fake_mem.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __EFI_FAKE_MEM_H__ 3 + #define __EFI_FAKE_MEM_H__ 4 + #include <asm/efi.h> 5 + 6 + #define EFI_MAX_FAKEMEM CONFIG_EFI_MAX_FAKE_MEM 7 + 8 + extern struct efi_mem_range efi_fake_mems[EFI_MAX_FAKEMEM]; 9 + extern int nr_fake_mem; 10 + #endif /* __EFI_FAKE_MEM_H__ */
+5
drivers/firmware/efi/libstub/arm32-stub.c
··· 146 146 continue; 147 147 148 148 case EFI_CONVENTIONAL_MEMORY: 149 + /* Skip soft reserved conventional memory */ 150 + if (efi_soft_reserve_enabled() && 151 + (desc->attribute & EFI_MEMORY_SP)) 152 + continue; 153 + 149 154 /* 150 155 * Reserve the intersection between this entry and the 151 156 * region.
+19
drivers/firmware/efi/libstub/efi-stub-helper.c
··· 32 32 static int __section(.data) __nokaslr; 33 33 static int __section(.data) __quiet; 34 34 static int __section(.data) __novamap; 35 + static bool __section(.data) efi_nosoftreserve; 35 36 36 37 int __pure nokaslr(void) 37 38 { ··· 45 44 int __pure novamap(void) 46 45 { 47 46 return __novamap; 47 + } 48 + bool __pure __efi_soft_reserve_enabled(void) 49 + { 50 + return !efi_nosoftreserve; 48 51 } 49 52 50 53 #define EFI_MMAP_NR_SLACK_SLOTS 8 ··· 216 211 if (desc->type != EFI_CONVENTIONAL_MEMORY) 217 212 continue; 218 213 214 + if (efi_soft_reserve_enabled() && 215 + (desc->attribute & EFI_MEMORY_SP)) 216 + continue; 217 + 219 218 if (desc->num_pages < nr_pages) 220 219 continue; 221 220 ··· 312 303 desc = efi_early_memdesc_ptr(m, desc_size, i); 313 304 314 305 if (desc->type != EFI_CONVENTIONAL_MEMORY) 306 + continue; 307 + 308 + if (efi_soft_reserve_enabled() && 309 + (desc->attribute & EFI_MEMORY_SP)) 315 310 continue; 316 311 317 312 if (desc->num_pages < nr_pages) ··· 495 482 if (!strncmp(str, "novamap", 7)) { 496 483 str += strlen("novamap"); 497 484 __novamap = 1; 485 + } 486 + 487 + if (IS_ENABLED(CONFIG_EFI_SOFT_RESERVE) && 488 + !strncmp(str, "nosoftreserve", 7)) { 489 + str += strlen("nosoftreserve"); 490 + efi_nosoftreserve = 1; 498 491 } 499 492 500 493 /* Group words together, delimited by "," */
+4
drivers/firmware/efi/libstub/random.c
··· 58 58 if (md->type != EFI_CONVENTIONAL_MEMORY) 59 59 return 0; 60 60 61 + if (efi_soft_reserve_enabled() && 62 + (md->attribute & EFI_MEMORY_SP)) 63 + return 0; 64 + 61 65 region_end = min((u64)ULONG_MAX, md->phys_addr + md->num_pages*EFI_PAGE_SIZE - 1); 62 66 63 67 first_slot = round_up(md->phys_addr, align);
+69
drivers/firmware/efi/x86_fake_mem.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright(c) 2019 Intel Corporation. All rights reserved. */ 3 + #include <linux/efi.h> 4 + #include <asm/e820/api.h> 5 + #include "fake_mem.h" 6 + 7 + void __init efi_fake_memmap_early(void) 8 + { 9 + int i; 10 + 11 + /* 12 + * The late efi_fake_mem() call can handle all requests if 13 + * EFI_MEMORY_SP support is disabled. 14 + */ 15 + if (!efi_soft_reserve_enabled()) 16 + return; 17 + 18 + if (!efi_enabled(EFI_MEMMAP) || !nr_fake_mem) 19 + return; 20 + 21 + /* 22 + * Given that efi_fake_memmap() needs to perform memblock 23 + * allocations it needs to run after e820__memblock_setup(). 24 + * However, if efi_fake_mem specifies EFI_MEMORY_SP for a given 25 + * address range that potentially needs to mark the memory as 26 + * reserved prior to e820__memblock_setup(). Update e820 27 + * directly if EFI_MEMORY_SP is specified for an 28 + * EFI_CONVENTIONAL_MEMORY descriptor. 29 + */ 30 + for (i = 0; i < nr_fake_mem; i++) { 31 + struct efi_mem_range *mem = &efi_fake_mems[i]; 32 + efi_memory_desc_t *md; 33 + u64 m_start, m_end; 34 + 35 + if ((mem->attribute & EFI_MEMORY_SP) == 0) 36 + continue; 37 + 38 + m_start = mem->range.start; 39 + m_end = mem->range.end; 40 + for_each_efi_memory_desc(md) { 41 + u64 start, end; 42 + 43 + if (md->type != EFI_CONVENTIONAL_MEMORY) 44 + continue; 45 + 46 + start = md->phys_addr; 47 + end = md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT) - 1; 48 + 49 + if (m_start <= end && m_end >= start) 50 + /* fake range overlaps descriptor */; 51 + else 52 + continue; 53 + 54 + /* 55 + * Trim the boundary of the e820 update to the 56 + * descriptor in case the fake range overlaps 57 + * !EFI_CONVENTIONAL_MEMORY 58 + */ 59 + start = max(start, m_start); 60 + end = min(end, m_end); 61 + 62 + if (end <= start) 63 + continue; 64 + e820__range_update(start, end - start + 1, E820_TYPE_RAM, 65 + E820_TYPE_SOFT_RESERVED); 66 + e820__update_table(e820_table); 67 + } 68 + } 69 + }
+5 -25
drivers/iommu/amd_iommu.c
··· 124 124 * 125 125 ****************************************************************************/ 126 126 127 - static inline int match_hid_uid(struct device *dev, 128 - struct acpihid_map_entry *entry) 129 - { 130 - struct acpi_device *adev = ACPI_COMPANION(dev); 131 - const char *hid, *uid; 132 - 133 - if (!adev) 134 - return -ENODEV; 135 - 136 - hid = acpi_device_hid(adev); 137 - uid = acpi_device_uid(adev); 138 - 139 - if (!hid || !(*hid)) 140 - return -ENODEV; 141 - 142 - if (!uid || !(*uid)) 143 - return strcmp(hid, entry->hid); 144 - 145 - if (!(*entry->uid)) 146 - return strcmp(hid, entry->hid); 147 - 148 - return (strcmp(hid, entry->hid) || strcmp(uid, entry->uid)); 149 - } 150 - 151 127 static inline u16 get_pci_device_id(struct device *dev) 152 128 { 153 129 struct pci_dev *pdev = to_pci_dev(dev); ··· 134 158 static inline int get_acpihid_device_id(struct device *dev, 135 159 struct acpihid_map_entry **entry) 136 160 { 161 + struct acpi_device *adev = ACPI_COMPANION(dev); 137 162 struct acpihid_map_entry *p; 138 163 164 + if (!adev) 165 + return -ENODEV; 166 + 139 167 list_for_each_entry(p, &acpihid_map, list) { 140 - if (!match_hid_uid(dev, p)) { 168 + if (acpi_dev_hid_uid_match(adev, p->hid, p->uid)) { 141 169 if (entry) 142 170 *entry = p; 143 171 return p->devid;
+1 -1
drivers/mfd/intel_soc_pmic_crc.c
··· 75 75 .resources = gpio_resources, 76 76 }, 77 77 { 78 - .name = "crystal_cove_pmic", 78 + .name = "byt_crystal_cove_pmic", 79 79 }, 80 80 { 81 81 .name = "crystal_cove_pwm",
+16 -33
drivers/mmc/host/sdhci-acpi.c
··· 61 61 mmc_pm_flag_t pm_caps; 62 62 unsigned int flags; 63 63 size_t priv_size; 64 - int (*probe_slot)(struct platform_device *, const char *, const char *); 64 + int (*probe_slot)(struct platform_device *, struct acpi_device *); 65 65 int (*remove_slot)(struct platform_device *); 66 66 int (*free_slot)(struct platform_device *pdev); 67 67 int (*setup_host)(struct platform_device *pdev); ··· 325 325 * wifi card in the expected slot with an ACPI companion node, is used to 326 326 * indicate that acpi_device_fix_up_power() should be avoided. 327 327 */ 328 - static inline bool sdhci_acpi_no_fixup_child_power(const char *hid, 329 - const char *uid) 328 + static inline bool sdhci_acpi_no_fixup_child_power(struct acpi_device *adev) 330 329 { 331 330 return sdhci_acpi_cht() && 332 - !strcmp(hid, "80860F14") && 333 - !strcmp(uid, "2") && 331 + acpi_dev_hid_uid_match(adev, "80860F14", "2") && 334 332 sdhci_acpi_cht_pci_wifi(0x14e4, 0x43ec, 0, 28); 335 333 } 336 334 ··· 343 345 return false; 344 346 } 345 347 346 - static inline bool sdhci_acpi_no_fixup_child_power(const char *hid, 347 - const char *uid) 348 + static inline bool sdhci_acpi_no_fixup_child_power(struct acpi_device *adev) 348 349 { 349 350 return false; 350 351 } ··· 372 375 return ret; 373 376 } 374 377 375 - static int intel_probe_slot(struct platform_device *pdev, const char *hid, 376 - const char *uid) 378 + static int intel_probe_slot(struct platform_device *pdev, struct acpi_device *adev) 377 379 { 378 380 struct sdhci_acpi_host *c = platform_get_drvdata(pdev); 379 381 struct intel_host *intel_host = sdhci_acpi_priv(c); 380 382 struct sdhci_host *host = c->host; 381 383 382 - if (hid && uid && !strcmp(hid, "80860F14") && !strcmp(uid, "1") && 384 + if (acpi_dev_hid_uid_match(adev, "80860F14", "1") && 383 385 sdhci_readl(host, SDHCI_CAPABILITIES) == 0x446cc8b2 && 384 386 sdhci_readl(host, SDHCI_CAPABILITIES_1) == 0x00000807) 385 387 host->timeout_clk = 1000; /* 1000 kHz i.e. 1 MHz */ 386 388 387 - if (hid && !strcmp(hid, "80865ACA")) 389 + if (acpi_dev_hid_uid_match(adev, "80865ACA", NULL)) 388 390 host->mmc_host_ops.get_cd = bxt_get_cd; 389 391 390 392 intel_dsm_init(intel_host, &pdev->dev, host->mmc); ··· 469 473 return IRQ_HANDLED; 470 474 } 471 475 472 - static int qcom_probe_slot(struct platform_device *pdev, const char *hid, 473 - const char *uid) 476 + static int qcom_probe_slot(struct platform_device *pdev, struct acpi_device *adev) 474 477 { 475 478 struct sdhci_acpi_host *c = platform_get_drvdata(pdev); 476 479 struct sdhci_host *host = c->host; ··· 477 482 478 483 *irq = -EINVAL; 479 484 480 - if (strcmp(hid, "QCOM8051")) 485 + if (!acpi_dev_hid_uid_match(adev, "QCOM8051", NULL)) 481 486 return 0; 482 487 483 488 *irq = platform_get_irq(pdev, 1); ··· 496 501 struct sdhci_host *host = c->host; 497 502 struct acpi_device *adev; 498 503 int *irq = sdhci_acpi_priv(c); 499 - const char *hid; 500 504 501 505 adev = ACPI_COMPANION(dev); 502 506 if (!adev) 503 507 return -ENODEV; 504 508 505 - hid = acpi_device_hid(adev); 506 - if (strcmp(hid, "QCOM8051")) 509 + if (!acpi_dev_hid_uid_match(adev, "QCOM8051", NULL)) 507 510 return 0; 508 511 509 512 if (*irq < 0) ··· 576 583 }; 577 584 578 585 static int sdhci_acpi_emmc_amd_probe_slot(struct platform_device *pdev, 579 - const char *hid, const char *uid) 586 + struct acpi_device *adev) 580 587 { 581 588 struct sdhci_acpi_host *c = platform_get_drvdata(pdev); 582 589 struct sdhci_host *host = c->host; ··· 647 654 }; 648 655 MODULE_DEVICE_TABLE(acpi, sdhci_acpi_ids); 649 656 650 - static const struct sdhci_acpi_slot *sdhci_acpi_get_slot(const char *hid, 651 - const char *uid) 657 + static const struct sdhci_acpi_slot *sdhci_acpi_get_slot(struct acpi_device *adev) 652 658 { 653 659 const struct sdhci_acpi_uid_slot *u; 654 660 655 661 for (u = sdhci_acpi_uids; u->hid; u++) { 656 - if (strcmp(u->hid, hid)) 657 - continue; 658 - if (!u->uid) 659 - return u->slot; 660 - if (uid && !strcmp(u->uid, uid)) 662 + if (acpi_dev_hid_uid_match(adev, u->hid, u->uid)) 661 663 return u->slot; 662 664 } 663 665 return NULL; ··· 668 680 struct resource *iomem; 669 681 resource_size_t len; 670 682 size_t priv_size; 671 - const char *hid; 672 - const char *uid; 673 683 int err; 674 684 675 685 device = ACPI_COMPANION(dev); 676 686 if (!device) 677 687 return -ENODEV; 678 688 679 - hid = acpi_device_hid(device); 680 - uid = acpi_device_uid(device); 681 - 682 - slot = sdhci_acpi_get_slot(hid, uid); 689 + slot = sdhci_acpi_get_slot(device); 683 690 684 691 /* Power on the SDHCI controller and its children */ 685 692 acpi_device_fix_up_power(device); 686 - if (!sdhci_acpi_no_fixup_child_power(hid, uid)) { 693 + if (!sdhci_acpi_no_fixup_child_power(device)) { 687 694 list_for_each_entry(child, &device->children, node) 688 695 if (child->status.present && child->status.enabled) 689 696 acpi_device_fix_up_power(child); ··· 728 745 729 746 if (c->slot) { 730 747 if (c->slot->probe_slot) { 731 - err = c->slot->probe_slot(pdev, hid, uid); 748 + err = c->slot->probe_slot(pdev, device); 732 749 if (err) 733 750 goto err_free; 734 751 }
+1
drivers/nvdimm/Kconfig
··· 4 4 depends on PHYS_ADDR_T_64BIT 5 5 depends on HAS_IOMEM 6 6 depends on BLK_DEV 7 + select MEMREGION 7 8 help 8 9 Generic support for non-volatile memory devices including 9 10 ACPI-6-NFIT defined resources. On platforms that define an
-1
drivers/nvdimm/core.c
··· 455 455 nd_region_exit(); 456 456 nvdimm_exit(); 457 457 nvdimm_bus_exit(); 458 - nd_region_devs_exit(); 459 458 nvdimm_devs_exit(); 460 459 } 461 460
-1
drivers/nvdimm/nd-core.h
··· 114 114 int __init nvdimm_bus_init(void); 115 115 void nvdimm_bus_exit(void); 116 116 void nvdimm_devs_exit(void); 117 - void nd_region_devs_exit(void); 118 117 struct nd_region; 119 118 void nd_region_advance_seeds(struct nd_region *nd_region, struct device *dev); 120 119 void nd_region_create_ns_seed(struct nd_region *nd_region);
+4 -9
drivers/nvdimm/region_devs.c
··· 3 3 * Copyright(c) 2013-2015 Intel Corporation. All rights reserved. 4 4 */ 5 5 #include <linux/scatterlist.h> 6 + #include <linux/memregion.h> 6 7 #include <linux/highmem.h> 7 8 #include <linux/sched.h> 8 9 #include <linux/slab.h> ··· 20 19 */ 21 20 #include <linux/io-64-nonatomic-hi-lo.h> 22 21 23 - static DEFINE_IDA(region_ida); 24 22 static DEFINE_PER_CPU(int, flush_idx); 25 23 26 24 static int nvdimm_map_flush(struct device *dev, struct nvdimm *nvdimm, int dimm, ··· 133 133 put_device(&nvdimm->dev); 134 134 } 135 135 free_percpu(nd_region->lane); 136 - ida_simple_remove(&region_ida, nd_region->id); 136 + memregion_free(nd_region->id); 137 137 if (is_nd_blk(dev)) 138 138 kfree(to_nd_blk_region(dev)); 139 139 else ··· 985 985 986 986 if (!region_buf) 987 987 return NULL; 988 - nd_region->id = ida_simple_get(&region_ida, 0, 0, GFP_KERNEL); 988 + nd_region->id = memregion_alloc(GFP_KERNEL); 989 989 if (nd_region->id < 0) 990 990 goto err_id; 991 991 ··· 1044 1044 return nd_region; 1045 1045 1046 1046 err_percpu: 1047 - ida_simple_remove(&region_ida, nd_region->id); 1047 + memregion_free(nd_region->id); 1048 1048 err_id: 1049 1049 kfree(region_buf); 1050 1050 return NULL; ··· 1215 1215 }; 1216 1216 1217 1217 return device_for_each_child(&nvdimm_bus->dev, &ctx, region_conflict); 1218 - } 1219 - 1220 - void __exit nd_region_devs_exit(void) 1221 - { 1222 - ida_destroy(&region_ida); 1223 1218 }
+5 -3
include/acpi/acpi_bus.h
··· 78 78 bool acpi_dev_found(const char *hid); 79 79 bool acpi_dev_present(const char *hid, const char *uid, s64 hrv); 80 80 81 - struct acpi_device * 82 - acpi_dev_get_first_match_dev(const char *hid, const char *uid, s64 hrv); 83 - 84 81 #ifdef CONFIG_ACPI 85 82 86 83 #include <linux/proc_fs.h> ··· 679 682 ((acpi_gbl_FADT.header.revision < 6) && 680 683 adev->power.states[ACPI_STATE_D3_HOT].flags.explicit_set); 681 684 } 685 + 686 + bool acpi_dev_hid_uid_match(struct acpi_device *adev, const char *hid2, const char *uid2); 687 + 688 + struct acpi_device * 689 + acpi_dev_get_first_match_dev(const char *hid, const char *uid, s64 hrv); 682 690 683 691 static inline void acpi_dev_put(struct acpi_device *adev) 684 692 {
+6 -2
include/acpi/acpixf.h
··· 12 12 13 13 /* Current ACPICA subsystem version in YYYYMMDD format */ 14 14 15 - #define ACPI_CA_VERSION 0x20190816 15 + #define ACPI_CA_VERSION 0x20191018 16 16 17 17 #include <acpi/acconfig.h> 18 18 #include <acpi/actypes.h> ··· 458 458 u8 physical)) 459 459 460 460 ACPI_EXTERNAL_RETURN_STATUS(acpi_status 461 - acpi_load_table(struct acpi_table_header *table)) 461 + acpi_load_table(struct acpi_table_header *table, 462 + u32 *table_idx)) 463 + 464 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 465 + acpi_unload_table(u32 table_index)) 462 466 463 467 ACPI_EXTERNAL_RETURN_STATUS(acpi_status 464 468 acpi_unload_parent_table(acpi_handle object))
-12
include/acpi/button.h
··· 2 2 #ifndef ACPI_BUTTON_H 3 3 #define ACPI_BUTTON_H 4 4 5 - #include <linux/notifier.h> 6 - 7 5 #if IS_ENABLED(CONFIG_ACPI_BUTTON) 8 - extern int acpi_lid_notifier_register(struct notifier_block *nb); 9 - extern int acpi_lid_notifier_unregister(struct notifier_block *nb); 10 6 extern int acpi_lid_open(void); 11 7 #else 12 - static inline int acpi_lid_notifier_register(struct notifier_block *nb) 13 - { 14 - return 0; 15 - } 16 - static inline int acpi_lid_notifier_unregister(struct notifier_block *nb) 17 - { 18 - return 0; 19 - } 20 8 static inline int acpi_lid_open(void) 21 9 { 22 10 return 1;
+8
include/linux/acpi.h
··· 678 678 return false; 679 679 } 680 680 681 + struct acpi_device; 682 + 683 + static inline bool 684 + acpi_dev_hid_uid_match(struct acpi_device *adev, const char *hid2, const char *uid2) 685 + { 686 + return false; 687 + } 688 + 681 689 static inline struct acpi_device * 682 690 acpi_dev_get_first_match_dev(const char *hid, const char *uid, s64 hrv) 683 691 {
+15 -1
include/linux/efi.h
··· 112 112 #define EFI_MEMORY_MORE_RELIABLE \ 113 113 ((u64)0x0000000000010000ULL) /* higher reliability */ 114 114 #define EFI_MEMORY_RO ((u64)0x0000000000020000ULL) /* read-only */ 115 + #define EFI_MEMORY_SP ((u64)0x0000000000040000ULL) /* soft reserved */ 115 116 #define EFI_MEMORY_RUNTIME ((u64)0x8000000000000000ULL) /* range requires runtime mapping */ 116 117 #define EFI_MEMORY_DESCRIPTOR_VERSION 1 117 118 ··· 1045 1044 extern efi_status_t efi_query_variable_store(u32 attributes, 1046 1045 unsigned long size, 1047 1046 bool nonblocking); 1048 - extern void efi_find_mirror(void); 1049 1047 #else 1050 1048 1051 1049 static inline efi_status_t efi_query_variable_store(u32 attributes, ··· 1202 1202 #define EFI_DBG 8 /* Print additional debug info at runtime */ 1203 1203 #define EFI_NX_PE_DATA 9 /* Can runtime data regions be mapped non-executable? */ 1204 1204 #define EFI_MEM_ATTR 10 /* Did firmware publish an EFI_MEMORY_ATTRIBUTES table? */ 1205 + #define EFI_MEM_NO_SOFT_RESERVE 11 /* Is the kernel configured to ignore soft reservations? */ 1205 1206 1206 1207 #ifdef CONFIG_EFI 1207 1208 /* ··· 1213 1212 return test_bit(feature, &efi.flags) != 0; 1214 1213 } 1215 1214 extern void efi_reboot(enum reboot_mode reboot_mode, const char *__unused); 1215 + 1216 + bool __pure __efi_soft_reserve_enabled(void); 1217 + 1218 + static inline bool __pure efi_soft_reserve_enabled(void) 1219 + { 1220 + return IS_ENABLED(CONFIG_EFI_SOFT_RESERVE) 1221 + && __efi_soft_reserve_enabled(); 1222 + } 1216 1223 #else 1217 1224 static inline bool efi_enabled(int feature) 1218 1225 { ··· 1231 1222 1232 1223 static inline bool 1233 1224 efi_capsule_pending(int *reset_type) 1225 + { 1226 + return false; 1227 + } 1228 + 1229 + static inline bool efi_soft_reserve_enabled(void) 1234 1230 { 1235 1231 return false; 1236 1232 }
+1
include/linux/ioport.h
··· 134 134 IORES_DESC_PERSISTENT_MEMORY_LEGACY = 5, 135 135 IORES_DESC_DEVICE_PRIVATE_MEMORY = 6, 136 136 IORES_DESC_RESERVED = 7, 137 + IORES_DESC_SOFT_RESERVED = 8, 137 138 }; 138 139 139 140 /*
+23
include/linux/memregion.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _MEMREGION_H_ 3 + #define _MEMREGION_H_ 4 + #include <linux/types.h> 5 + #include <linux/errno.h> 6 + 7 + struct memregion_info { 8 + int target_node; 9 + }; 10 + 11 + #ifdef CONFIG_MEMREGION 12 + int memregion_alloc(gfp_t gfp); 13 + void memregion_free(int id); 14 + #else 15 + static inline int memregion_alloc(gfp_t gfp) 16 + { 17 + return -ENOMEM; 18 + } 19 + static inline void memregion_free(int id) 20 + { 21 + } 22 + #endif 23 + #endif /* _MEMREGION_H_ */
+3
lib/Kconfig
··· 605 605 config ARCH_HAS_PMEM_API 606 606 bool 607 607 608 + config MEMREGION 609 + bool 610 + 608 611 # use memcpy to implement user copies for nommu architectures 609 612 config UACCESS_MEMCPY 610 613 bool
+1
lib/Makefile
··· 214 214 215 215 obj-$(CONFIG_SG_SPLIT) += sg_split.o 216 216 obj-$(CONFIG_SG_POOL) += sg_pool.o 217 + obj-$(CONFIG_MEMREGION) += memregion.o 217 218 obj-$(CONFIG_STMP_DEVICE) += stmp_device.o 218 219 obj-$(CONFIG_IRQ_POLL) += irq_poll.o 219 220
+18
lib/memregion.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* identifiers for device / performance-differentiated memory regions */ 3 + #include <linux/idr.h> 4 + #include <linux/types.h> 5 + 6 + static DEFINE_IDA(memregion_ids); 7 + 8 + int memregion_alloc(gfp_t gfp) 9 + { 10 + return ida_alloc(&memregion_ids, gfp); 11 + } 12 + EXPORT_SYMBOL(memregion_alloc); 13 + 14 + void memregion_free(int id) 15 + { 16 + ida_free(&memregion_ids, id); 17 + } 18 + EXPORT_SYMBOL(memregion_free);