Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6

* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6: (179 commits)
ACPI: Fix acpi_processor_idle and idle= boot parameters interaction
acpi: fix section mismatch warning in pnpacpi
intel_menlo: fix build warning
ACPI: Cleanup: Remove unneeded, multiple local dummy variables
ACPI: video - fix permissions on some proc entries
ACPI: video - properly handle errors when registering proc elements
ACPI: video - do not store invalid entries in attached_array list
ACPI: re-name acpi_pm_ops to acpi_suspend_ops
ACER_WMI/ASUS_LAPTOP: fix build bug
thinkpad_acpi: fix possible NULL pointer dereference if kstrdup failed
ACPI: check a return value correctly in acpi_power_get_context()
#if 0 acpi/bay.c:eject_removable_drive()
eeepc-laptop: add hwmon fan control
eeepc-laptop: add backlight
eeepc-laptop: add base driver
ACPI: thinkpad-acpi: bump up version to 0.20
ACPI: thinkpad-acpi: fix selects in Kconfig
ACPI: thinkpad-acpi: use a private workqueue
ACPI: thinkpad-acpi: fluff really minor fix
ACPI: thinkpad-acpi: use uppercase for "LED" on user documentation
...

Fixed conflicts in drivers/acpi/video.c and drivers/misc/intel_menlow.c
manually.

Total: +6677 -3664
Documentation/laptops/thinkpad-acpi.txt | +122 -17
··· 1 1 ThinkPad ACPI Extras Driver 2 2 3 - Version 0.19 4 - January 06th, 2008 3 + Version 0.20 4 + April 09th, 2008 5 5 6 6 Borislav Deianov <borislav@users.sf.net> 7 7 Henrique de Moraes Holschuh <hmh@hmh.eng.br> ··· 18 18 moved to the drivers/misc tree and renamed to thinkpad-acpi for kernel 19 19 2.6.22, and release 0.14. 20 20 21 + The driver is named "thinkpad-acpi". In some places, like module 22 + names, "thinkpad_acpi" is used because of userspace issues. 23 + 24 + "tpacpi" is used as a shorthand where "thinkpad-acpi" would be too 25 + long due to length limitations on some Linux kernel versions. 21 26 22 27 Status 23 28 ------ ··· 576 571 with hotkey_report_mode. 577 572 578 573 574 + Brightness hotkey notes: 575 + 576 + These are the current sane choices for brightness key mapping in 577 + thinkpad-acpi: 578 + 579 + For IBM and Lenovo models *without* ACPI backlight control (the ones on 580 + which thinkpad-acpi will autoload its backlight interface by default, 581 + and on which ACPI video does not export a backlight interface): 582 + 583 + 1. Don't enable or map the brightness hotkeys in thinkpad-acpi, as 584 + these older firmware versions unfortunately won't respect the hotkey 585 + mask for brightness keys anyway, and always reacts to them. This 586 + usually work fine, unless X.org drivers are doing something to block 587 + the BIOS. In that case, use (3) below. This is the default mode of 588 + operation. 589 + 590 + 2. Enable the hotkeys, but map them to something else that is NOT 591 + KEY_BRIGHTNESS_UP/DOWN or any other keycode that would cause 592 + userspace to try to change the backlight level, and use that as an 593 + on-screen-display hint. 594 + 595 + 3. IF AND ONLY IF X.org drivers find a way to block the firmware from 596 + automatically changing the brightness, enable the hotkeys and map 597 + them to KEY_BRIGHTNESS_UP and KEY_BRIGHTNESS_DOWN, and feed that to 598 + something that calls xbacklight. 
thinkpad-acpi will not be able to 599 + change brightness in that case either, so you should disable its 600 + backlight interface. 601 + 602 + For Lenovo models *with* ACPI backlight control: 603 + 604 + 1. Load up ACPI video and use that. ACPI video will report ACPI 605 + events for brightness change keys. Do not mess with thinkpad-acpi 606 + defaults in this case. thinkpad-acpi should not have anything to do 607 + with backlight events in a scenario where ACPI video is loaded: 608 + brightness hotkeys must be disabled, and the backlight interface is 609 + to be kept disabled as well. This is the default mode of operation. 610 + 611 + 2. Do *NOT* load up ACPI video, enable the hotkeys in thinkpad-acpi, 612 + and map them to KEY_BRIGHTNESS_UP and KEY_BRIGHTNESS_DOWN. Process 613 + these keys on userspace somehow (e.g. by calling xbacklight). 614 + 579 615 Bluetooth 580 616 --------- 581 617 ··· 693 647 694 648 https://bugs.freedesktop.org/show_bug.cgi?id=2000 695 649 696 - ThinkLight control -- /proc/acpi/ibm/light 697 - ------------------------------------------ 650 + ThinkLight control 651 + ------------------ 698 652 699 - The current status of the ThinkLight can be found in this file. A few 700 - models which do not make the status available will show it as 701 - "unknown". The available commands are: 653 + procfs: /proc/acpi/ibm/light 654 + sysfs attributes: as per LED class, for the "tpacpi::thinklight" LED 655 + 656 + procfs notes: 657 + 658 + The ThinkLight status can be read and set through the procfs interface. A 659 + few models which do not make the status available will show the ThinkLight 660 + status as "unknown". The available commands are: 702 661 703 662 echo on > /proc/acpi/ibm/light 704 663 echo off > /proc/acpi/ibm/light 664 + 665 + sysfs notes: 666 + 667 + The ThinkLight sysfs interface is documented by the LED class 668 + documentation, in Documentation/leds-class.txt. The ThinkLight LED name 669 + is "tpacpi::thinklight". 
670 + 671 + Due to limitations in the sysfs LED class, if the status of the thinklight 672 + cannot be read or if it is unknown, thinkpad-acpi will report it as "off". 673 + It is impossible to know if the status returned through sysfs is valid. 705 674 706 675 Docking / undocking -- /proc/acpi/ibm/dock 707 676 ------------------------------------------ ··· 876 815 in newer ThinkPads it is just a compatibility layer. Do not use it, it is 877 816 exported just as a debug tool. 878 817 879 - LED control -- /proc/acpi/ibm/led 880 - --------------------------------- 818 + LED control 819 + ----------- 881 820 882 - Some of the LED indicators can be controlled through this feature. The 883 - available commands are: 821 + procfs: /proc/acpi/ibm/led 822 + sysfs attributes: as per LED class, see below for names 884 823 885 - echo '<led number> on' >/proc/acpi/ibm/led 886 - echo '<led number> off' >/proc/acpi/ibm/led 887 - echo '<led number> blink' >/proc/acpi/ibm/led 824 + Some of the LED indicators can be controlled through this feature. On 825 + some older ThinkPad models, it is possible to query the status of the 826 + LED indicators as well. Newer ThinkPads cannot query the real status 827 + of the LED indicators. 888 828 889 - The <led number> range is 0 to 7. The set of LEDs that can be 890 - controlled varies from model to model. Here is the mapping on the X40: 829 + procfs notes: 830 + 831 + The available commands are: 832 + 833 + echo '<LED number> on' >/proc/acpi/ibm/led 834 + echo '<LED number> off' >/proc/acpi/ibm/led 835 + echo '<LED number> blink' >/proc/acpi/ibm/led 836 + 837 + The <LED number> range is 0 to 7. The set of LEDs that can be 838 + controlled varies from model to model. 
Here is the common ThinkPad 839 + mapping: 891 840 892 841 0 - power 893 842 1 - battery (orange) 894 843 2 - battery (green) 895 - 3 - UltraBase 844 + 3 - UltraBase/dock 896 845 4 - UltraBay 846 + 5 - UltraBase battery slot 847 + 6 - (unknown) 897 848 7 - standby 898 849 899 850 All of the above can be turned on and off and can be made to blink. 851 + 852 + sysfs notes: 853 + 854 + The ThinkPad LED sysfs interface is described in detail by the LED class 855 + documentation, in Documentation/leds-class.txt. 856 + 857 + The leds are named (in LED ID order, from 0 to 7): 858 + "tpacpi::power", "tpacpi:orange:batt", "tpacpi:green:batt", 859 + "tpacpi::dock_active", "tpacpi::bay_active", "tpacpi::dock_batt", 860 + "tpacpi::unknown_led", "tpacpi::standby". 861 + 862 + Due to limitations in the sysfs LED class, if the status of the LED 863 + indicators cannot be read due to an error, thinkpad-acpi will report it as 864 + a brightness of zero (same as LED off). 865 + 866 + If the thinkpad firmware doesn't support reading the current status, 867 + trying to read the current LED brightness will just return whatever 868 + brightness was last written to that attribute. 869 + 870 + These LEDs can blink using hardware acceleration. To request that a 871 + ThinkPad indicator LED should blink in hardware accelerated mode, use the 872 + "timer" trigger, and leave the delay_on and delay_off parameters set to 873 + zero (to request hardware acceleration autodetection). 900 874 901 875 ACPI sounds -- /proc/acpi/ibm/beep 902 876 ---------------------------------- ··· 1185 1089 increase the current power management level, i.e. they can 1186 1090 dim the display. 1187 1091 1092 + 1093 + WARNING: 1094 + 1095 + Whatever you do, do NOT ever call thinkpad-acpi backlight-level change 1096 + interface and the ACPI-based backlight level change interface 1097 + (available on newer BIOSes, and driven by the Linux ACPI video driver) 1098 + at the same time. 
The two will interact in bad ways, do funny things, 1099 + and maybe reduce the life of the backlight lamps by needlessly kicking 1100 + its level up and down at every change. 1188 1101 1189 1102 Volume control -- /proc/acpi/ibm/volume 1190 1103 ---------------------------------------
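The sysfs LED interface added in the thinkpad-acpi hunks above is driven by plain shell writes. The sketch below exercises the documented attribute names (brightness, trigger, delay_on, delay_off) against a mock directory tree so it is runnable without the driver loaded; on real hardware the path would be /sys/class/leds/tpacpi::thinklight rather than the temporary directory, which is the only invented detail here.

```shell
# Mock of the LED class directory that thinkpad-acpi registers as
# "tpacpi::thinklight". On a real system this lives under
# /sys/class/leds/; the temporary directory only makes the sketch
# runnable anywhere. Attribute names are the real LED class ones.
LED="$(mktemp -d)/tpacpi::thinklight"
mkdir -p "$LED"
touch "$LED/brightness" "$LED/trigger" "$LED/delay_on" "$LED/delay_off"

echo 255 > "$LED/brightness"   # ThinkLight on (maximum brightness)
echo 0   > "$LED/brightness"   # ThinkLight off again

# Hardware-accelerated blink, as described above: select the "timer"
# trigger and leave delay_on/delay_off at zero so the driver
# autodetects the hardware blink rate.
echo timer > "$LED/trigger"
echo 0 > "$LED/delay_on"
echo 0 > "$LED/delay_off"

echo "trigger=$(cat "$LED/trigger")"   # prints "trigger=timer"
```

Note that, per the hunks above, reads of brightness are best-effort: an unreadable or unknown LED state is reported as "off" (brightness zero), so userspace should not treat a zero read as authoritative.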
Documentation/thermal/sysfs-api.txt | +27 -6
··· 108 108 RO read only value 109 109 RW read/write value 110 110 111 - All thermal sysfs attributes will be represented under /sys/class/thermal 111 + Thermal sysfs attributes will be represented under /sys/class/thermal. 112 + Hwmon sysfs I/F extension is also available under /sys/class/hwmon 113 + if hwmon is compiled in or built as a module. 112 114 113 115 Thermal zone device sys I/F, created once it's registered: 114 - |thermal_zone[0-*]: 116 + /sys/class/thermal/thermal_zone[0-*]: 115 117 |-----type: Type of the thermal zone 116 118 |-----temp: Current temperature 117 119 |-----mode: Working mode of the thermal zone ··· 121 119 |-----trip_point_[0-*]_type: Trip point type 122 120 123 121 Thermal cooling device sys I/F, created once it's registered: 124 - |cooling_device[0-*]: 122 + /sys/class/thermal/cooling_device[0-*]: 125 123 |-----type : Type of the cooling device(processor/fan/...) 126 124 |-----max_state: Maximum cooling state of the cooling device 127 125 |-----cur_state: Current cooling state of the cooling device ··· 132 130 They are created/removed for each 133 131 thermal_zone_bind_cooling_device/thermal_zone_unbind_cooling_device successful execution. 134 132 135 - |thermal_zone[0-*] 133 + /sys/class/thermal/thermal_zone[0-*] 136 134 |-----cdev[0-*]: The [0-*]th cooling device in the current thermal zone 137 135 |-----cdev[0-*]_trip_point: Trip point that cdev[0-*] is associated with 138 136 137 + Besides the thermal zone device sysfs I/F and cooling device sysfs I/F, 138 + the generic thermal driver also creates a hwmon sysfs I/F for each _type_ of 139 + thermal zone device. E.g. the generic thermal driver registers one hwmon class device 140 + and build the associated hwmon sysfs I/F for all the registered ACPI thermal zones. 141 + /sys/class/hwmon/hwmon[0-*]: 142 + |-----name: The type of the thermal zone devices. 143 + |-----temp[1-*]_input: The current temperature of thermal zone [1-*]. 
144 + |-----temp[1-*]_critical: The critical trip point of thermal zone [1-*]. 145 + Please read Documentation/hwmon/sysfs-interface for additional information. 139 146 140 147 *************************** 141 148 * Thermal zone attributes * ··· 152 141 153 142 type Strings which represent the thermal zone type. 154 143 This is given by thermal zone driver as part of registration. 155 - Eg: "ACPI thermal zone" indicates it's a ACPI thermal device 144 + Eg: "acpitz" indicates it's an ACPI thermal device. 145 + In order to keep it consistent with hwmon sys attribute, 146 + this should be a short, lowercase string, 147 + not containing spaces nor dashes. 156 148 RO 157 149 Required 158 150 ··· 232 218 /sys/class/thermal: 233 219 234 220 |thermal_zone1: 235 - |-----type: ACPI thermal zone 221 + |-----type: acpitz 236 222 |-----temp: 37000 237 223 |-----mode: kernel 238 224 |-----trip_point_0_temp: 100000 ··· 257 243 |-----type: Fan 258 244 |-----max_state: 2 259 245 |-----cur_state: 0 246 + 247 + /sys/class/hwmon: 248 + 249 + |hwmon0: 250 + |-----name: acpitz 251 + |-----temp1_input: 37000 252 + |-----temp1_crit: 100000
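The hwmon mirror that the sysfs-api hunks describe can be consumed with ordinary file reads. The sketch below rebuilds the documented tree (a thermal zone plus its hwmon counterpart) as a mock under a temporary root so it runs as-is; on a real system the same reads would target /sys/class/thermal and /sys/class/hwmon. Only the temporary root is invented; the attribute names and example values come from the document.

```shell
# Mock of the thermal/hwmon sysfs layout from the document; values
# mirror the example zone (acpitz at 37000 millidegrees Celsius).
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/thermal_zone1" "$ROOT/hwmon0"
echo acpitz > "$ROOT/thermal_zone1/type"
echo 37000  > "$ROOT/thermal_zone1/temp"              # millidegrees C
echo 100000 > "$ROOT/thermal_zone1/trip_point_0_temp"

# The generic thermal driver registers one hwmon device per zone type,
# so the hwmon "name" matches the zone "type" and temp1_input mirrors
# the zone temperature.
echo acpitz > "$ROOT/hwmon0/name"
echo 37000  > "$ROOT/hwmon0/temp1_input"
echo 100000 > "$ROOT/hwmon0/temp1_crit"

# Userspace convention: divide by 1000 to get degrees Celsius.
temp=$(cat "$ROOT/thermal_zone1/temp")
echo "$(cat "$ROOT/thermal_zone1/type"): $((temp / 1000)) C"   # prints "acpitz: 37 C"
```

This also shows why the hunk above tightens the `type` string to a short lowercase token like "acpitz": hwmon's `name` attribute is expected to be a short identifier without spaces or dashes, and the two must stay consistent.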
MAINTAINERS | +7
··· 1533 1533 W: bluesmoke.sourceforge.net 1534 1534 S: Maintained 1535 1535 1536 + EEEPC LAPTOP EXTRAS DRIVER 1537 + P: Corentin Chary 1538 + M: corentincj@iksaif.net 1539 + L: acpi4asus-user@lists.sourceforge.net 1540 + W: http://sourceforge.net/projects/acpi4asus 1541 + S: Maintained 1542 + 1536 1543 EEPRO100 NETWORK DRIVER 1537 1544 P: Andrey V. Savochkin 1538 1545 M: saw@saw.sw.com.sg
arch/frv/kernel/pm.c | -8
··· 163 163 if ((mode != 1) && (mode != 5)) 164 164 return -EINVAL; 165 165 166 - retval = pm_send_all(PM_SUSPEND, (void *)3); 167 - 168 166 if (retval == 0) { 169 167 if (mode == 5) 170 168 retval = pm_do_bus_sleep(); 171 169 else 172 170 retval = pm_do_suspend(); 173 - pm_send_all(PM_RESUME, (void *)0); 174 171 } 175 172 176 173 return retval; ··· 179 182 return -EINVAL; 180 183 if (!(clock_cmodes_permitted & (1<<new_cmode))) 181 184 return -EINVAL; 182 - 183 - /* tell all the drivers we're suspending */ 184 - pm_send_all(PM_SUSPEND, (void *)3); 185 185 186 186 /* now change cmode */ 187 187 local_irq_disable(); ··· 195 201 frv_dma_resume_all(); 196 202 local_irq_enable(); 197 203 198 - /* tell all the drivers we're resuming */ 199 - pm_send_all(PM_RESUME, (void *)0); 200 204 return 0; 201 205 } 202 206
arch/mips/au1000/common/power.c | +1 -34
··· 251 251 static int pm_do_sleep(ctl_table * ctl, int write, struct file *file, 252 252 void __user *buffer, size_t * len, loff_t *ppos) 253 253 { 254 - int retval = 0; 255 254 #ifdef SLEEP_TEST_TIMEOUT 256 255 #define TMPBUFLEN2 16 257 256 char buf[TMPBUFLEN2], *p; ··· 270 271 p = buf; 271 272 sleep_ticks = simple_strtoul(p, &p, 0); 272 273 #endif 273 - retval = pm_send_all(PM_SUSPEND, (void *) 2); 274 - 275 - if (retval) 276 - return retval; 277 274 278 275 au_sleep(); 279 - retval = pm_send_all(PM_RESUME, (void *) 0); 280 276 } 281 - return retval; 277 + return 0; 282 278 } 283 - 284 - static int pm_do_suspend(ctl_table * ctl, int write, struct file *file, 285 - void __user *buffer, size_t * len, loff_t *ppos) 286 - { 287 - int retval = 0; 288 - 289 - if (!write) { 290 - *len = 0; 291 - } else { 292 - retval = pm_send_all(PM_SUSPEND, (void *) 2); 293 - if (retval) 294 - return retval; 295 - suspend_mode = 1; 296 - 297 - retval = pm_send_all(PM_RESUME, (void *) 0); 298 - } 299 - return retval; 300 - } 301 - 302 279 303 280 static int pm_do_freq(ctl_table * ctl, int write, struct file *file, 304 281 void __user *buffer, size_t * len, loff_t *ppos) ··· 388 413 389 414 390 415 static struct ctl_table pm_table[] = { 391 - { 392 - .ctl_name = CTL_UNNUMBERED, 393 - .procname = "suspend", 394 - .data = NULL, 395 - .maxlen = 0, 396 - .mode = 0600, 397 - .proc_handler = &pm_do_suspend 398 - }, 399 416 { 400 417 .ctl_name = CTL_UNNUMBERED, 401 418 .procname = "sleep",
arch/x86/kernel/apm_32.c | -16
··· 1192 1192 int err; 1193 1193 struct apm_user *as; 1194 1194 1195 - if (pm_send_all(PM_SUSPEND, (void *)3)) { 1196 - /* Vetoed */ 1197 - if (vetoable) { 1198 - if (apm_info.connection_version > 0x100) 1199 - set_system_power_state(APM_STATE_REJECT); 1200 - err = -EBUSY; 1201 - ignore_sys_suspend = 0; 1202 - printk(KERN_WARNING "apm: suspend was vetoed.\n"); 1203 - goto out; 1204 - } 1205 - printk(KERN_CRIT "apm: suspend was vetoed, but suspending anyway.\n"); 1206 - } 1207 - 1208 1195 device_suspend(PMSG_SUSPEND); 1209 1196 local_irq_disable(); 1210 1197 device_power_down(PMSG_SUSPEND); ··· 1214 1227 device_power_up(); 1215 1228 local_irq_enable(); 1216 1229 device_resume(); 1217 - pm_send_all(PM_RESUME, (void *)0); 1218 1230 queue_event(APM_NORMAL_RESUME, NULL); 1219 - out: 1220 1231 spin_lock(&user_list_lock); 1221 1232 for (as = user_list; as != NULL; as = as->next) { 1222 1233 as->suspend_wait = 0; ··· 1325 1340 if ((event != APM_NORMAL_RESUME) 1326 1341 || (ignore_normal_resume == 0)) { 1327 1342 device_resume(); 1328 - pm_send_all(PM_RESUME, (void *)0); 1329 1343 queue_event(event, NULL); 1330 1344 } 1331 1345 ignore_normal_resume = 0;
drivers/acpi/Kconfig | +3
··· 140 140 tristate "Video" 141 141 depends on X86 && BACKLIGHT_CLASS_DEVICE && VIDEO_OUTPUT_CONTROL 142 142 depends on INPUT 143 + select THERMAL 143 144 help 144 145 This driver implement the ACPI Extensions For Display Adapters 145 146 for integrated graphics devices on motherboard, as specified in ··· 152 151 153 152 config ACPI_FAN 154 153 tristate "Fan" 154 + select THERMAL 155 155 default y 156 156 help 157 157 This driver adds support for ACPI fan devices, allowing user-mode ··· 174 172 175 173 config ACPI_PROCESSOR 176 174 tristate "Processor" 175 + select THERMAL 177 176 default y 178 177 help 179 178 This driver installs ACPI as the idle handler for Linux, and uses
drivers/acpi/bay.c | +2
··· 201 201 return 0; 202 202 } 203 203 204 + #if 0 204 205 /** 205 206 * eject_removable_drive - try to eject this drive 206 207 * @dev : the device structure of the drive ··· 226 225 return 0; 227 226 } 228 227 EXPORT_SYMBOL_GPL(eject_removable_drive); 228 + #endif /* 0 */ 229 229 230 230 static int acpi_bay_add_fs(struct bay *bay) 231 231 {
drivers/acpi/dispatcher/dsfield.c | +100 -73
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 89 89 90 90 ACPI_FUNCTION_TRACE(ds_create_buffer_field); 91 91 92 - /* Get the name_string argument */ 93 - 92 + /* 93 + * Get the name_string argument (name of the new buffer_field) 94 + */ 94 95 if (op->common.aml_opcode == AML_CREATE_FIELD_OP) { 96 + 97 + /* For create_field, name is the 4th argument */ 98 + 95 99 arg = acpi_ps_get_arg(op, 3); 96 100 } else { 97 - /* Create Bit/Byte/Word/Dword field */ 101 + /* For all other create_xXXField operators, name is the 3rd argument */ 98 102 99 103 arg = acpi_ps_get_arg(op, 2); 100 104 } ··· 111 107 node = walk_state->deferred_node; 112 108 status = AE_OK; 113 109 } else { 114 - /* 115 - * During the load phase, we want to enter the name of the field into 116 - * the namespace. 
During the execute phase (when we evaluate the size 117 - * operand), we want to lookup the name 118 - */ 119 - if (walk_state->parse_flags & ACPI_PARSE_EXECUTE) { 120 - flags = ACPI_NS_NO_UPSEARCH | ACPI_NS_DONT_OPEN_SCOPE; 121 - } else { 122 - flags = ACPI_NS_NO_UPSEARCH | ACPI_NS_DONT_OPEN_SCOPE | 123 - ACPI_NS_ERROR_IF_FOUND; 110 + /* Execute flag should always be set when this function is entered */ 111 + 112 + if (!(walk_state->parse_flags & ACPI_PARSE_EXECUTE)) { 113 + return_ACPI_STATUS(AE_AML_INTERNAL); 124 114 } 125 115 126 - /* 127 - * Enter the name_string into the namespace 128 - */ 116 + /* Creating new namespace node, should not already exist */ 117 + 118 + flags = ACPI_NS_NO_UPSEARCH | ACPI_NS_DONT_OPEN_SCOPE | 119 + ACPI_NS_ERROR_IF_FOUND; 120 + 121 + /* Mark node temporary if we are executing a method */ 122 + 123 + if (walk_state->method_node) { 124 + flags |= ACPI_NS_TEMPORARY; 125 + } 126 + 127 + /* Enter the name_string into the namespace */ 128 + 129 129 status = 130 130 acpi_ns_lookup(walk_state->scope_info, 131 131 arg->common.value.string, ACPI_TYPE_ANY, 132 132 ACPI_IMODE_LOAD_PASS1, flags, walk_state, 133 - &(node)); 133 + &node); 134 134 if (ACPI_FAILURE(status)) { 135 135 ACPI_ERROR_NAMESPACE(arg->common.value.string, status); 136 136 return_ACPI_STATUS(status); ··· 144 136 /* 145 137 * We could put the returned object (Node) on the object stack for later, 146 138 * but for now, we will put it in the "op" object that the parser uses, 147 - * so we can get it again at the end of this scope 139 + * so we can get it again at the end of this scope. 148 140 */ 149 141 op->common.node = node; 150 142 151 143 /* 152 144 * If there is no object attached to the node, this node was just created 153 - * and we need to create the field object. Otherwise, this was a lookup 145 + * and we need to create the field object. Otherwise, this was a lookup 154 146 * of an existing node and we don't want to create the field object again. 
155 147 */ 156 148 obj_desc = acpi_ns_get_attached_object(node); ··· 172 164 } 173 165 174 166 /* 175 - * Remember location in AML stream of the field unit 176 - * opcode and operands -- since the buffer and index 177 - * operands must be evaluated. 167 + * Remember location in AML stream of the field unit opcode and operands -- 168 + * since the buffer and index operands must be evaluated. 178 169 */ 179 170 second_desc = obj_desc->common.next_object; 180 171 second_desc->extra.aml_start = op->named.data; ··· 268 261 269 262 case AML_INT_NAMEDFIELD_OP: 270 263 271 - /* Lookup the name */ 264 + /* Lookup the name, it should already exist */ 272 265 273 266 status = acpi_ns_lookup(walk_state->scope_info, 274 267 (char *)&arg->named.name, ··· 279 272 if (ACPI_FAILURE(status)) { 280 273 ACPI_ERROR_NAMESPACE((char *)&arg->named.name, 281 274 status); 282 - if (status != AE_ALREADY_EXISTS) { 283 - return_ACPI_STATUS(status); 284 - } 285 - 286 - /* Already exists, ignore error */ 275 + return_ACPI_STATUS(status); 287 276 } else { 288 277 arg->common.node = info->field_node; 289 278 info->field_bit_length = arg->common.value.size; 290 279 291 - /* Create and initialize an object for the new Field Node */ 292 - 293 - status = acpi_ex_prep_field_value(info); 294 - if (ACPI_FAILURE(status)) { 295 - return_ACPI_STATUS(status); 280 + /* 281 + * If there is no object attached to the node, this node was 282 + * just created and we need to create the field object. 283 + * Otherwise, this was a lookup of an existing node and we 284 + * don't want to create the field object again. 
285 + */ 286 + if (!acpi_ns_get_attached_object 287 + (info->field_node)) { 288 + status = acpi_ex_prep_field_value(info); 289 + if (ACPI_FAILURE(status)) { 290 + return_ACPI_STATUS(status); 291 + } 296 292 } 297 293 } 298 294 ··· 409 399 union acpi_parse_object *arg = NULL; 410 400 struct acpi_namespace_node *node; 411 401 u8 type = 0; 402 + u32 flags; 412 403 413 404 ACPI_FUNCTION_TRACE_PTR(ds_init_field_objects, op); 414 405 406 + /* Execute flag should always be set when this function is entered */ 407 + 408 + if (!(walk_state->parse_flags & ACPI_PARSE_EXECUTE)) { 409 + if (walk_state->parse_flags & ACPI_PARSE_DEFERRED_OP) { 410 + 411 + /* bank_field Op is deferred, just return OK */ 412 + 413 + return_ACPI_STATUS(AE_OK); 414 + } 415 + 416 + return_ACPI_STATUS(AE_AML_INTERNAL); 417 + } 418 + 419 + /* 420 + * Get the field_list argument for this opcode. This is the start of the 421 + * list of field elements. 422 + */ 415 423 switch (walk_state->opcode) { 416 424 case AML_FIELD_OP: 417 425 arg = acpi_ps_get_arg(op, 2); ··· 450 422 return_ACPI_STATUS(AE_BAD_PARAMETER); 451 423 } 452 424 425 + if (!arg) { 426 + return_ACPI_STATUS(AE_AML_NO_OPERAND); 427 + } 428 + 429 + /* Creating new namespace node(s), should not already exist */ 430 + 431 + flags = ACPI_NS_NO_UPSEARCH | ACPI_NS_DONT_OPEN_SCOPE | 432 + ACPI_NS_ERROR_IF_FOUND; 433 + 434 + /* Mark node(s) temporary if we are executing a method */ 435 + 436 + if (walk_state->method_node) { 437 + flags |= ACPI_NS_TEMPORARY; 438 + } 439 + 453 440 /* 454 441 * Walk the list of entries in the field_list 455 442 */ 456 443 while (arg) { 457 - 458 - /* Ignore OFFSET and ACCESSAS terms here */ 459 - 444 + /* 445 + * Ignore OFFSET and ACCESSAS terms here; we are only interested in the 446 + * field names in order to enter them into the namespace. 
447 + */ 460 448 if (arg->common.aml_opcode == AML_INT_NAMEDFIELD_OP) { 461 449 status = acpi_ns_lookup(walk_state->scope_info, 462 - (char *)&arg->named.name, 463 - type, ACPI_IMODE_LOAD_PASS1, 464 - ACPI_NS_NO_UPSEARCH | 465 - ACPI_NS_DONT_OPEN_SCOPE | 466 - ACPI_NS_ERROR_IF_FOUND, 450 + (char *)&arg->named.name, type, 451 + ACPI_IMODE_LOAD_PASS1, flags, 467 452 walk_state, &node); 468 453 if (ACPI_FAILURE(status)) { 469 454 ACPI_ERROR_NAMESPACE((char *)&arg->named.name, ··· 493 452 arg->common.node = node; 494 453 } 495 454 496 - /* Move to next field in the list */ 455 + /* Get the next field element in the list */ 497 456 498 457 arg = arg->common.next; 499 458 } ··· 507 466 * 508 467 * PARAMETERS: Op - Op containing the Field definition and args 509 468 * region_node - Object for the containing Operation Region 510 - * ` walk_state - Current method state 469 + * walk_state - Current method state 511 470 * 512 471 * RETURN: Status 513 472 * ··· 554 513 return_ACPI_STATUS(status); 555 514 } 556 515 557 - /* Third arg is the bank_value */ 558 - 559 - /* TBD: This arg is a term_arg, not a constant, and must be evaluated */ 560 - 516 + /* 517 + * Third arg is the bank_value 518 + * This arg is a term_arg, not a constant 519 + * It will be evaluated later, by acpi_ds_eval_bank_field_operands 520 + */ 561 521 arg = arg->common.next; 562 - 563 - /* Currently, only the following constants are supported */ 564 - 565 - switch (arg->common.aml_opcode) { 566 - case AML_ZERO_OP: 567 - info.bank_value = 0; 568 - break; 569 - 570 - case AML_ONE_OP: 571 - info.bank_value = 1; 572 - break; 573 - 574 - case AML_BYTE_OP: 575 - case AML_WORD_OP: 576 - case AML_DWORD_OP: 577 - case AML_QWORD_OP: 578 - info.bank_value = (u32) arg->common.value.integer; 579 - break; 580 - 581 - default: 582 - info.bank_value = 0; 583 - ACPI_ERROR((AE_INFO, 584 - "Non-constant BankValue for BankField is not implemented")); 585 - } 586 522 587 523 /* Fourth arg is the field flags */ 588 524 ··· 571 
553 info.field_type = ACPI_TYPE_LOCAL_BANK_FIELD; 572 554 info.region_node = region_node; 573 555 574 - status = acpi_ds_get_field_names(&info, walk_state, arg->common.next); 556 + /* 557 + * Use Info.data_register_node to store bank_field Op 558 + * It's safe because data_register_node will never be used when create bank field 559 + * We store aml_start and aml_length in the bank_field Op for late evaluation 560 + * Used in acpi_ex_prep_field_value(Info) 561 + * 562 + * TBD: Or, should we add a field in struct acpi_create_field_info, like "void *ParentOp"? 563 + */ 564 + info.data_register_node = (struct acpi_namespace_node *)op; 575 565 566 + status = acpi_ds_get_field_names(&info, walk_state, arg->common.next); 576 567 return_ACPI_STATUS(status); 577 568 } 578 569
drivers/acpi/dispatcher/dsinit.c | +1 -1
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
drivers/acpi/dispatcher/dsmethod.c | +25 -36
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 42 42 */ 43 43 44 44 #include <acpi/acpi.h> 45 - #include <acpi/acparser.h> 46 45 #include <acpi/amlcode.h> 47 46 #include <acpi/acdispat.h> 48 47 #include <acpi/acinterp.h> ··· 101 102 walk_state->opcode, 102 103 walk_state->aml_offset, 103 104 NULL); 104 - (void)acpi_ex_enter_interpreter(); 105 + acpi_ex_enter_interpreter(); 105 106 } 106 107 #ifdef ACPI_DISASSEMBLER 107 108 if (ACPI_FAILURE(status)) { ··· 231 232 * recursive call. 232 233 */ 233 234 if (!walk_state || 234 - !obj_desc->method.mutex->mutex.owner_thread || 235 - (walk_state->thread != 236 - obj_desc->method.mutex->mutex.owner_thread)) { 235 + !obj_desc->method.mutex->mutex.thread_id || 236 + (walk_state->thread->thread_id != 237 + obj_desc->method.mutex->mutex.thread_id)) { 237 238 /* 238 239 * Acquire the method mutex. 
This releases the interpreter if we 239 240 * block (and reacquires it before it returns) ··· 253 254 original_sync_level = 254 255 walk_state->thread->current_sync_level; 255 256 256 - obj_desc->method.mutex->mutex.owner_thread = 257 - walk_state->thread; 257 + obj_desc->method.mutex->mutex.thread_id = 258 + walk_state->thread->thread_id; 258 259 walk_state->thread->current_sync_level = 259 260 obj_desc->method.sync_level; 260 261 } else { ··· 534 535 acpi_ds_terminate_control_method(union acpi_operand_object *method_desc, 535 536 struct acpi_walk_state *walk_state) 536 537 { 537 - struct acpi_namespace_node *method_node; 538 - acpi_status status; 539 538 540 539 ACPI_FUNCTION_TRACE_PTR(ds_terminate_control_method, walk_state); 541 540 ··· 548 551 /* Delete all arguments and locals */ 549 552 550 553 acpi_ds_method_data_delete_all(walk_state); 551 - } 552 554 553 - /* 554 - * If method is serialized, release the mutex and restore the 555 - * current sync level for this thread 556 - */ 557 - if (method_desc->method.mutex) { 558 - 559 - /* Acquisition Depth handles recursive calls */ 560 - 561 - method_desc->method.mutex->mutex.acquisition_depth--; 562 - if (!method_desc->method.mutex->mutex.acquisition_depth) { 563 - walk_state->thread->current_sync_level = 564 - method_desc->method.mutex->mutex. 565 - original_sync_level; 566 - 567 - acpi_os_release_mutex(method_desc->method.mutex->mutex. 568 - os_mutex); 569 - method_desc->method.mutex->mutex.owner_thread = NULL; 570 - } 571 - } 572 - 573 - if (walk_state) { 574 555 /* 575 - * Delete any objects created by this method during execution. 
576 - * The method Node is stored in the walk state 556 + * If method is serialized, release the mutex and restore the 557 + * current sync level for this thread 577 558 */ 578 - method_node = walk_state->method_node; 559 + if (method_desc->method.mutex) { 560 + 561 + /* Acquisition Depth handles recursive calls */ 562 + 563 + method_desc->method.mutex->mutex.acquisition_depth--; 564 + if (!method_desc->method.mutex->mutex.acquisition_depth) { 565 + walk_state->thread->current_sync_level = 566 + method_desc->method.mutex->mutex. 567 + original_sync_level; 568 + 569 + acpi_os_release_mutex(method_desc->method. 570 + mutex->mutex.os_mutex); 571 + method_desc->method.mutex->mutex.thread_id = 0; 572 + } 573 + } 579 574 580 575 /* 581 576 * Delete any namespace objects created anywhere within ··· 609 620 */ 610 621 if ((method_desc->method.method_flags & AML_METHOD_SERIALIZED) 611 622 && (!method_desc->method.mutex)) { 612 - status = acpi_ds_create_method_mutex(method_desc); 623 + (void)acpi_ds_create_method_mutex(method_desc); 613 624 } 614 625 615 626 /* No more threads, we can free the owner_id */
drivers/acpi/dispatcher/dsmthdat.c | +1 -1
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
drivers/acpi/dispatcher/dsobject.c | +78 -23
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 157 157 * will remain as named references. This behavior is not described 158 158 * in the ACPI spec, but it appears to be an oversight. 159 159 */ 160 - obj_desc = (union acpi_operand_object *)op->common.node; 160 + obj_desc = 161 + ACPI_CAST_PTR(union acpi_operand_object, 162 + op->common.node); 161 163 162 164 status = 163 165 acpi_ex_resolve_node_to_value(ACPI_CAST_INDIRECT_PTR ··· 174 172 switch (op->common.node->type) { 175 173 /* 176 174 * For these types, we need the actual node, not the subobject. 177 - * However, the subobject got an extra reference count above. 175 + * However, the subobject did not get an extra reference count above. 176 + * 177 + * TBD: should ex_resolve_node_to_value be changed to fix this? 178 + */ 179 + case ACPI_TYPE_DEVICE: 180 + case ACPI_TYPE_THERMAL: 181 + 182 + acpi_ut_add_reference(op->common.node->object); 183 + 184 + /*lint -fallthrough */ 185 + /* 186 + * For these types, we need the actual node, not the subobject. 187 + * The subobject got an extra reference count in ex_resolve_node_to_value. 178 188 */ 179 189 case ACPI_TYPE_MUTEX: 180 190 case ACPI_TYPE_METHOD: ··· 194 180 case ACPI_TYPE_PROCESSOR: 195 181 case ACPI_TYPE_EVENT: 196 182 case ACPI_TYPE_REGION: 197 - case ACPI_TYPE_DEVICE: 198 - case ACPI_TYPE_THERMAL: 199 183 200 - obj_desc = 201 - (union acpi_operand_object *)op->common. 202 - node; 184 + /* We will create a reference object for these types below */ 203 185 break; 204 186 205 187 default: 206 - break; 207 - } 208 - 209 - /* 210 - * If above resolved to an operand object, we are done. Otherwise, 211 - * we have a NS node, we must create the package entry as a named 212 - * reference. 
213 - */ 214 - if (ACPI_GET_DESCRIPTOR_TYPE(obj_desc) != 215 - ACPI_DESC_TYPE_NAMED) { 188 + /* 189 + * All other types - the node was resolved to an actual 190 + * object, we are done. 191 + */ 216 192 goto exit; 217 193 } 218 194 } ··· 227 223 228 224 exit: 229 225 *obj_desc_ptr = obj_desc; 230 - return_ACPI_STATUS(AE_OK); 226 + return_ACPI_STATUS(status); 231 227 } 232 228 233 229 /******************************************************************************* ··· 373 369 union acpi_parse_object *parent; 374 370 union acpi_operand_object *obj_desc = NULL; 375 371 acpi_status status = AE_OK; 376 - acpi_native_uint i; 372 + unsigned i; 373 + u16 index; 374 + u16 reference_count; 377 375 378 376 ACPI_FUNCTION_TRACE(ds_build_internal_package_obj); 379 377 ··· 453 447 package. 454 448 elements[i]); 455 449 } 450 + 451 + if (*obj_desc_ptr) { 452 + 453 + /* Existing package, get existing reference count */ 454 + 455 + reference_count = 456 + (*obj_desc_ptr)->common.reference_count; 457 + if (reference_count > 1) { 458 + 459 + /* Make new element ref count match original ref count */ 460 + 461 + for (index = 0; index < (reference_count - 1); 462 + index++) { 463 + acpi_ut_add_reference((obj_desc-> 464 + package. 465 + elements[i])); 466 + } 467 + } 468 + } 469 + 456 470 arg = arg->common.next; 457 471 } 458 472 459 - if (!arg) { 473 + /* Check for match between num_elements and actual length of package_list */ 474 + 475 + if (arg) { 476 + /* 477 + * num_elements was exhausted, but there are remaining elements in the 478 + * package_list. 479 + * 480 + * Note: technically, this is an error, from ACPI spec: "It is an error 481 + * for NumElements to be less than the number of elements in the 482 + * PackageList". However, for now, we just print an error message and 483 + * no exception is returned. 
484 + */ 485 + while (arg) { 486 + 487 + /* Find out how many elements there really are */ 488 + 489 + i++; 490 + arg = arg->common.next; 491 + } 492 + 493 + ACPI_ERROR((AE_INFO, 494 + "Package List length (%X) larger than NumElements count (%X), truncated\n", 495 + i, element_count)); 496 + } else if (i < element_count) { 497 + /* 498 + * Arg list (elements) was exhausted, but we did not reach num_elements count. 499 + * Note: this is not an error, the package is padded out with NULLs. 500 + */ 460 501 ACPI_DEBUG_PRINT((ACPI_DB_INFO, 461 - "Package List length larger than NumElements count (%X), truncated\n", 462 - element_count)); 502 + "Package List length (%X) smaller than NumElements count (%X), padded with null elements\n", 503 + i, element_count)); 463 504 } 464 505 465 506 obj_desc->package.flags |= AOPOBJ_DATA_VALID; ··· 774 721 /* Node was saved in Op */ 775 722 776 723 obj_desc->reference.node = op->common.node; 724 + obj_desc->reference.object = 725 + op->common.node->object; 777 726 } 778 727 779 728 obj_desc->reference.opcode = opcode;
+256 -4
drivers/acpi/dispatcher/dsopcode.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 49 49 #include <acpi/acinterp.h> 50 50 #include <acpi/acnamesp.h> 51 51 #include <acpi/acevents.h> 52 + #include <acpi/actables.h> 52 53 53 54 #define _COMPONENT ACPI_DISPATCHER 54 55 ACPI_MODULE_NAME("dsopcode") ··· 208 207 ACPI_DEBUG_EXEC(acpi_ut_display_init_pathname 209 208 (ACPI_TYPE_BUFFER_FIELD, node, NULL)); 210 209 ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "[%4.4s] BufferField Arg Init\n", 210 + acpi_ut_get_node_name(node))); 211 + 212 + /* Execute the AML code for the term_arg arguments */ 213 + 214 + status = acpi_ds_execute_arguments(node, acpi_ns_get_parent_node(node), 215 + extra_desc->extra.aml_length, 216 + extra_desc->extra.aml_start); 217 + return_ACPI_STATUS(status); 218 + } 219 + 220 + /******************************************************************************* 221 + * 222 + * FUNCTION: acpi_ds_get_bank_field_arguments 223 + * 224 + * PARAMETERS: obj_desc - A valid bank_field object 225 + * 226 + * RETURN: Status. 227 + * 228 + * DESCRIPTION: Get bank_field bank_value. This implements the late 229 + * evaluation of these field attributes. 
230 + * 231 + ******************************************************************************/ 232 + 233 + acpi_status 234 + acpi_ds_get_bank_field_arguments(union acpi_operand_object *obj_desc) 235 + { 236 + union acpi_operand_object *extra_desc; 237 + struct acpi_namespace_node *node; 238 + acpi_status status; 239 + 240 + ACPI_FUNCTION_TRACE_PTR(ds_get_bank_field_arguments, obj_desc); 241 + 242 + if (obj_desc->common.flags & AOPOBJ_DATA_VALID) { 243 + return_ACPI_STATUS(AE_OK); 244 + } 245 + 246 + /* Get the AML pointer (method object) and bank_field node */ 247 + 248 + extra_desc = acpi_ns_get_secondary_object(obj_desc); 249 + node = obj_desc->bank_field.node; 250 + 251 + ACPI_DEBUG_EXEC(acpi_ut_display_init_pathname 252 + (ACPI_TYPE_LOCAL_BANK_FIELD, node, NULL)); 253 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "[%4.4s] BankField Arg Init\n", 211 254 acpi_ut_get_node_name(node))); 212 255 213 256 /* Execute the AML code for the term_arg arguments */ ··· 815 770 816 771 ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "RgnObj %p Addr %8.8X%8.8X Len %X\n", 817 772 obj_desc, 818 - ACPI_FORMAT_UINT64(obj_desc->region.address), 773 + ACPI_FORMAT_NATIVE_UINT(obj_desc->region.address), 774 + obj_desc->region.length)); 775 + 776 + /* Now the address and length are valid for this opregion */ 777 + 778 + obj_desc->region.flags |= AOPOBJ_DATA_VALID; 779 + 780 + return_ACPI_STATUS(status); 781 + } 782 + 783 + /******************************************************************************* 784 + * 785 + * FUNCTION: acpi_ds_eval_table_region_operands 786 + * 787 + * PARAMETERS: walk_state - Current walk 788 + * Op - A valid region Op object 789 + * 790 + * RETURN: Status 791 + * 792 + * DESCRIPTION: Get region address and length 793 + * Called from acpi_ds_exec_end_op during data_table_region parse tree walk 794 + * 795 + ******************************************************************************/ 796 + 797 + acpi_status 798 + acpi_ds_eval_table_region_operands(struct acpi_walk_state *walk_state, 
799 + union acpi_parse_object *op) 800 + { 801 + acpi_status status; 802 + union acpi_operand_object *obj_desc; 803 + union acpi_operand_object **operand; 804 + struct acpi_namespace_node *node; 805 + union acpi_parse_object *next_op; 806 + acpi_native_uint table_index; 807 + struct acpi_table_header *table; 808 + 809 + ACPI_FUNCTION_TRACE_PTR(ds_eval_table_region_operands, op); 810 + 811 + /* 812 + * This is where we evaluate the signature_string and oem_iDString 813 + * and oem_table_iDString of the data_table_region declaration 814 + */ 815 + node = op->common.node; 816 + 817 + /* next_op points to signature_string op */ 818 + 819 + next_op = op->common.value.arg; 820 + 821 + /* 822 + * Evaluate/create the signature_string and oem_iDString 823 + * and oem_table_iDString operands 824 + */ 825 + status = acpi_ds_create_operands(walk_state, next_op); 826 + if (ACPI_FAILURE(status)) { 827 + return_ACPI_STATUS(status); 828 + } 829 + 830 + /* 831 + * Resolve the signature_string and oem_iDString 832 + * and oem_table_iDString operands 833 + */ 834 + status = acpi_ex_resolve_operands(op->common.aml_opcode, 835 + ACPI_WALK_OPERANDS, walk_state); 836 + if (ACPI_FAILURE(status)) { 837 + return_ACPI_STATUS(status); 838 + } 839 + 840 + ACPI_DUMP_OPERANDS(ACPI_WALK_OPERANDS, ACPI_IMODE_EXECUTE, 841 + acpi_ps_get_opcode_name(op->common.aml_opcode), 842 + 1, "after AcpiExResolveOperands"); 843 + 844 + operand = &walk_state->operands[0]; 845 + 846 + /* Find the ACPI table */ 847 + 848 + status = acpi_tb_find_table(operand[0]->string.pointer, 849 + operand[1]->string.pointer, 850 + operand[2]->string.pointer, &table_index); 851 + if (ACPI_FAILURE(status)) { 852 + return_ACPI_STATUS(status); 853 + } 854 + 855 + acpi_ut_remove_reference(operand[0]); 856 + acpi_ut_remove_reference(operand[1]); 857 + acpi_ut_remove_reference(operand[2]); 858 + 859 + status = acpi_get_table_by_index(table_index, &table); 860 + if (ACPI_FAILURE(status)) { 861 + return_ACPI_STATUS(status); 862 + } 863 
+ 864 + obj_desc = acpi_ns_get_attached_object(node); 865 + if (!obj_desc) { 866 + return_ACPI_STATUS(AE_NOT_EXIST); 867 + } 868 + 869 + obj_desc->region.address = 870 + (acpi_physical_address) ACPI_TO_INTEGER(table); 871 + obj_desc->region.length = table->length; 872 + 873 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "RgnObj %p Addr %8.8X%8.8X Len %X\n", 874 + obj_desc, 875 + ACPI_FORMAT_NATIVE_UINT(obj_desc->region.address), 819 876 obj_desc->region.length)); 820 877 821 878 /* Now the address and length are valid for this opregion */ ··· 954 807 ACPI_FUNCTION_TRACE(ds_eval_data_object_operands); 955 808 956 809 /* The first operand (for all of these data objects) is the length */ 810 + 811 + /* 812 + * Set proper index into operand stack for acpi_ds_obj_stack_push 813 + * invoked inside acpi_ds_create_operand. 814 + */ 815 + walk_state->operand_index = walk_state->num_operands; 957 816 958 817 status = acpi_ds_create_operand(walk_state, op->common.value.arg, 1); 959 818 if (ACPI_FAILURE(status)) { ··· 1026 873 } 1027 874 } 1028 875 876 + return_ACPI_STATUS(status); 877 + } 878 + 879 + /******************************************************************************* 880 + * 881 + * FUNCTION: acpi_ds_eval_bank_field_operands 882 + * 883 + * PARAMETERS: walk_state - Current walk 884 + * Op - A valid bank_field Op object 885 + * 886 + * RETURN: Status 887 + * 888 + * DESCRIPTION: Get bank_field bank_value 889 + * Called from acpi_ds_exec_end_op during bank_field parse tree walk 890 + * 891 + ******************************************************************************/ 892 + 893 + acpi_status 894 + acpi_ds_eval_bank_field_operands(struct acpi_walk_state *walk_state, 895 + union acpi_parse_object *op) 896 + { 897 + acpi_status status; 898 + union acpi_operand_object *obj_desc; 899 + union acpi_operand_object *operand_desc; 900 + struct acpi_namespace_node *node; 901 + union acpi_parse_object *next_op; 902 + union acpi_parse_object *arg; 903 + 904 + 
ACPI_FUNCTION_TRACE_PTR(ds_eval_bank_field_operands, op); 905 + 906 + /* 907 + * This is where we evaluate the bank_value field of the 908 + * bank_field declaration 909 + */ 910 + 911 + /* next_op points to the op that holds the Region */ 912 + 913 + next_op = op->common.value.arg; 914 + 915 + /* next_op points to the op that holds the Bank Register */ 916 + 917 + next_op = next_op->common.next; 918 + 919 + /* next_op points to the op that holds the Bank Value */ 920 + 921 + next_op = next_op->common.next; 922 + 923 + /* 924 + * Set proper index into operand stack for acpi_ds_obj_stack_push 925 + * invoked inside acpi_ds_create_operand. 926 + * 927 + * We use walk_state->Operands[0] to store the evaluated bank_value 928 + */ 929 + walk_state->operand_index = 0; 930 + 931 + status = acpi_ds_create_operand(walk_state, next_op, 0); 932 + if (ACPI_FAILURE(status)) { 933 + return_ACPI_STATUS(status); 934 + } 935 + 936 + status = acpi_ex_resolve_to_value(&walk_state->operands[0], walk_state); 937 + if (ACPI_FAILURE(status)) { 938 + return_ACPI_STATUS(status); 939 + } 940 + 941 + ACPI_DUMP_OPERANDS(ACPI_WALK_OPERANDS, ACPI_IMODE_EXECUTE, 942 + acpi_ps_get_opcode_name(op->common.aml_opcode), 943 + 1, "after AcpiExResolveOperands"); 944 + 945 + /* 946 + * Get the bank_value operand and save it 947 + * (at Top of stack) 948 + */ 949 + operand_desc = walk_state->operands[0]; 950 + 951 + /* Arg points to the start Bank Field */ 952 + 953 + arg = acpi_ps_get_arg(op, 4); 954 + while (arg) { 955 + 956 + /* Ignore OFFSET and ACCESSAS terms here */ 957 + 958 + if (arg->common.aml_opcode == AML_INT_NAMEDFIELD_OP) { 959 + node = arg->common.node; 960 + 961 + obj_desc = acpi_ns_get_attached_object(node); 962 + if (!obj_desc) { 963 + return_ACPI_STATUS(AE_NOT_EXIST); 964 + } 965 + 966 + obj_desc->bank_field.value = 967 + (u32) operand_desc->integer.value; 968 + } 969 + 970 + /* Move to next field in the list */ 971 + 972 + arg = arg->common.next; 973 + } 974 + 975 + 
acpi_ut_remove_reference(operand_desc); 1029 976 return_ACPI_STATUS(status); 1030 977 } 1031 978 ··· 1323 1070 * is set to anything other than zero! 1324 1071 */ 1325 1072 walk_state->return_desc = walk_state->operands[0]; 1326 - } else if ((walk_state->results) && 1327 - (walk_state->results->results.num_results > 0)) { 1073 + } else if (walk_state->result_count) { 1328 1074 1329 1075 /* Since we have a real Return(), delete any implicit return */ 1330 1076
+149 -18
drivers/acpi/dispatcher/dsutils.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 278 278 AML_VAR_PACKAGE_OP) 279 279 || (op->common.parent->common.aml_opcode == AML_BUFFER_OP) 280 280 || (op->common.parent->common.aml_opcode == 281 - AML_INT_EVAL_SUBTREE_OP)) { 281 + AML_INT_EVAL_SUBTREE_OP) 282 + || (op->common.parent->common.aml_opcode == 283 + AML_BANK_FIELD_OP)) { 282 284 /* 283 285 * These opcodes allow term_arg(s) as operands and therefore 284 286 * the operands can be method calls. The result is used. ··· 474 472 /* A valid name must be looked up in the namespace */ 475 473 476 474 if ((arg->common.aml_opcode == AML_INT_NAMEPATH_OP) && 477 - (arg->common.value.string)) { 475 + (arg->common.value.string) && 476 + !(arg->common.flags & ACPI_PARSEOP_IN_STACK)) { 478 477 ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, "Getting a name: Arg=%p\n", 479 478 arg)); 480 479 ··· 598 595 } else { 599 596 /* Check for null name case */ 600 597 601 - if (arg->common.aml_opcode == AML_INT_NAMEPATH_OP) { 598 + if ((arg->common.aml_opcode == AML_INT_NAMEPATH_OP) && 599 + !(arg->common.flags & ACPI_PARSEOP_IN_STACK)) { 602 600 /* 603 601 * If the name is null, this means that this is an 604 602 * optional result parameter that was not specified ··· 621 617 return_ACPI_STATUS(AE_NOT_IMPLEMENTED); 622 618 } 623 619 624 - if (op_info->flags & AML_HAS_RETVAL) { 620 + if ((op_info->flags & AML_HAS_RETVAL) 621 + || (arg->common.flags & ACPI_PARSEOP_IN_STACK)) { 625 622 ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, 626 623 "Argument previously created, already stacked\n")); 627 624 ··· 635 630 * Use value that was already previously returned 636 631 * by the evaluation of this argument 637 632 */ 638 - status = 639 - acpi_ds_result_pop_from_bottom(&obj_desc, 640 - walk_state); 633 + 
status = acpi_ds_result_pop(&obj_desc, walk_state); 641 634 if (ACPI_FAILURE(status)) { 642 635 /* 643 636 * Only error is underflow, and this indicates ··· 701 698 { 702 699 acpi_status status = AE_OK; 703 700 union acpi_parse_object *arg; 701 + union acpi_parse_object *arguments[ACPI_OBJ_NUM_OPERANDS]; 704 702 u32 arg_count = 0; 703 + u32 index = walk_state->num_operands; 704 + u32 i; 705 705 706 706 ACPI_FUNCTION_TRACE_PTR(ds_create_operands, first_arg); 707 707 708 - /* For all arguments in the list... */ 708 + /* Get all arguments in the list */ 709 709 710 710 arg = first_arg; 711 711 while (arg) { 712 - status = acpi_ds_create_operand(walk_state, arg, arg_count); 713 - if (ACPI_FAILURE(status)) { 714 - goto cleanup; 712 + if (index >= ACPI_OBJ_NUM_OPERANDS) { 713 + return_ACPI_STATUS(AE_BAD_DATA); 715 714 } 716 715 717 - ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, 718 - "Arg #%d (%p) done, Arg1=%p\n", arg_count, 719 - arg, first_arg)); 716 + arguments[index] = arg; 717 + walk_state->operands[index] = NULL; 720 718 721 719 /* Move on to next argument, if any */ 722 720 723 721 arg = arg->common.next; 724 722 arg_count++; 723 + index++; 724 + } 725 + 726 + index--; 727 + 728 + /* It is the appropriate order to get objects from the Result stack */ 729 + 730 + for (i = 0; i < arg_count; i++) { 731 + arg = arguments[index]; 732 + 733 + /* Force the filling of the operand stack in inverse order */ 734 + 735 + walk_state->operand_index = (u8) index; 736 + 737 + status = acpi_ds_create_operand(walk_state, arg, index); 738 + if (ACPI_FAILURE(status)) { 739 + goto cleanup; 740 + } 741 + 742 + index--; 743 + 744 + ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, 745 + "Arg #%d (%p) done, Arg1=%p\n", index, arg, 746 + first_arg)); 725 747 } 726 748 727 749 return_ACPI_STATUS(status); ··· 757 729 * pop everything off of the operand stack and delete those 758 730 * objects 759 731 */ 760 - (void)acpi_ds_obj_stack_pop_and_delete(arg_count, walk_state); 732 + 
acpi_ds_obj_stack_pop_and_delete(arg_count, walk_state); 761 733 762 - ACPI_EXCEPTION((AE_INFO, status, "While creating Arg %d", 763 - (arg_count + 1))); 734 + ACPI_EXCEPTION((AE_INFO, status, "While creating Arg %d", index)); 735 + return_ACPI_STATUS(status); 736 + } 737 + 738 + /***************************************************************************** 739 + * 740 + * FUNCTION: acpi_ds_evaluate_name_path 741 + * 742 + * PARAMETERS: walk_state - Current state of the parse tree walk, 743 + * the opcode of current operation should be 744 + * AML_INT_NAMEPATH_OP 745 + * 746 + * RETURN: Status 747 + * 748 + * DESCRIPTION: Translate the -name_path- parse tree object to the equivalent 749 + * interpreter object, convert it to value, if needed, duplicate 750 + * it, if needed, and push it onto the current result stack. 751 + * 752 + ****************************************************************************/ 753 + 754 + acpi_status acpi_ds_evaluate_name_path(struct acpi_walk_state *walk_state) 755 + { 756 + acpi_status status = AE_OK; 757 + union acpi_parse_object *op = walk_state->op; 758 + union acpi_operand_object **operand = &walk_state->operands[0]; 759 + union acpi_operand_object *new_obj_desc; 760 + u8 type; 761 + 762 + ACPI_FUNCTION_TRACE_PTR(ds_evaluate_name_path, walk_state); 763 + 764 + if (!op->common.parent) { 765 + 766 + /* This happens after certain exception processing */ 767 + 768 + goto exit; 769 + } 770 + 771 + if ((op->common.parent->common.aml_opcode == AML_PACKAGE_OP) || 772 + (op->common.parent->common.aml_opcode == AML_VAR_PACKAGE_OP) || 773 + (op->common.parent->common.aml_opcode == AML_REF_OF_OP)) { 774 + 775 + /* TBD: Should we specify this feature as a bit of op_info->Flags of these opcodes? 
*/ 776 + 777 + goto exit; 778 + } 779 + 780 + status = acpi_ds_create_operand(walk_state, op, 0); 781 + if (ACPI_FAILURE(status)) { 782 + goto exit; 783 + } 784 + 785 + if (op->common.flags & ACPI_PARSEOP_TARGET) { 786 + new_obj_desc = *operand; 787 + goto push_result; 788 + } 789 + 790 + type = ACPI_GET_OBJECT_TYPE(*operand); 791 + 792 + status = acpi_ex_resolve_to_value(operand, walk_state); 793 + if (ACPI_FAILURE(status)) { 794 + goto exit; 795 + } 796 + 797 + if (type == ACPI_TYPE_INTEGER) { 798 + 799 + /* It was incremented by acpi_ex_resolve_to_value */ 800 + 801 + acpi_ut_remove_reference(*operand); 802 + 803 + status = 804 + acpi_ut_copy_iobject_to_iobject(*operand, &new_obj_desc, 805 + walk_state); 806 + if (ACPI_FAILURE(status)) { 807 + goto exit; 808 + } 809 + } else { 810 + /* 811 + * The object either was anew created or is 812 + * a Namespace node - don't decrement it. 813 + */ 814 + new_obj_desc = *operand; 815 + } 816 + 817 + /* Cleanup for name-path operand */ 818 + 819 + status = acpi_ds_obj_stack_pop(1, walk_state); 820 + if (ACPI_FAILURE(status)) { 821 + walk_state->result_obj = new_obj_desc; 822 + goto exit; 823 + } 824 + 825 + push_result: 826 + 827 + walk_state->result_obj = new_obj_desc; 828 + 829 + status = acpi_ds_result_push(walk_state->result_obj, walk_state); 830 + if (ACPI_SUCCESS(status)) { 831 + 832 + /* Force to take it from stack */ 833 + 834 + op->common.flags |= ACPI_PARSEOP_IN_STACK; 835 + } 836 + 837 + exit: 838 + 764 839 return_ACPI_STATUS(status); 765 840 }
+34 -44
drivers/acpi/dispatcher/dswexec.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 285 285 switch (opcode_class) { 286 286 case AML_CLASS_CONTROL: 287 287 288 - status = acpi_ds_result_stack_push(walk_state); 289 - if (ACPI_FAILURE(status)) { 290 - goto error_exit; 291 - } 292 - 293 288 status = acpi_ds_exec_begin_control_op(walk_state, op); 294 289 break; 295 290 ··· 300 305 status = acpi_ds_load2_begin_op(walk_state, NULL); 301 306 } 302 307 303 - if (op->common.aml_opcode == AML_REGION_OP) { 304 - status = acpi_ds_result_stack_push(walk_state); 305 - } 306 308 break; 307 309 308 310 case AML_CLASS_EXECUTE: 309 311 case AML_CLASS_CREATE: 310 - /* 311 - * Most operators with arguments (except create_xxx_field operators) 312 - * Start a new result/operand state 313 - */ 314 - if (walk_state->op_info->object_type != ACPI_TYPE_BUFFER_FIELD) { 315 - status = acpi_ds_result_stack_push(walk_state); 316 - } 312 + 317 313 break; 318 314 319 315 default: ··· 360 374 /* Init the walk state */ 361 375 362 376 walk_state->num_operands = 0; 377 + walk_state->operand_index = 0; 363 378 walk_state->return_desc = NULL; 364 379 walk_state->result_obj = NULL; 365 380 ··· 375 388 /* Decode the Opcode Class */ 376 389 377 390 switch (op_class) { 378 - case AML_CLASS_ARGUMENT: /* constants, literals, etc. - do nothing */ 391 + case AML_CLASS_ARGUMENT: /* Constants, literals, etc. 
*/ 392 + 393 + if (walk_state->opcode == AML_INT_NAMEPATH_OP) { 394 + status = acpi_ds_evaluate_name_path(walk_state); 395 + if (ACPI_FAILURE(status)) { 396 + goto cleanup; 397 + } 398 + } 379 399 break; 380 400 381 - case AML_CLASS_EXECUTE: /* most operators with arguments */ 401 + case AML_CLASS_EXECUTE: /* Most operators with arguments */ 382 402 383 403 /* Build resolved operand stack */ 384 404 385 405 status = acpi_ds_create_operands(walk_state, first_arg); 386 - if (ACPI_FAILURE(status)) { 387 - goto cleanup; 388 - } 389 - 390 - /* Done with this result state (Now that operand stack is built) */ 391 - 392 - status = acpi_ds_result_stack_pop(walk_state); 393 406 if (ACPI_FAILURE(status)) { 394 407 goto cleanup; 395 408 } ··· 474 487 475 488 status = acpi_ds_exec_end_control_op(walk_state, op); 476 489 477 - /* Make sure to properly pop the result stack */ 478 - 479 - if (ACPI_SUCCESS(status)) { 480 - status = acpi_ds_result_stack_pop(walk_state); 481 - } else if (status == AE_CTRL_PENDING) { 482 - status = acpi_ds_result_stack_pop(walk_state); 483 - if (ACPI_SUCCESS(status)) { 484 - status = AE_CTRL_PENDING; 485 - } 486 - } 487 490 break; 488 491 489 492 case AML_TYPE_METHOD_CALL: ··· 493 516 494 517 op->common.node = 495 518 (struct acpi_namespace_node *)op->asl.value. 496 - arg->asl.node->object; 519 + arg->asl.node; 497 520 acpi_ut_add_reference(op->asl.value.arg->asl. 
498 521 node->object); 499 522 return_ACPI_STATUS(AE_OK); ··· 609 632 break; 610 633 } 611 634 612 - /* Done with result state (Now that operand stack is built) */ 613 - 614 - status = acpi_ds_result_stack_pop(walk_state); 615 - if (ACPI_FAILURE(status)) { 616 - goto cleanup; 617 - } 618 - 619 635 /* 620 636 * If a result object was returned from above, push it on the 621 637 * current result stack ··· 641 671 if (ACPI_FAILURE(status)) { 642 672 break; 643 673 } 674 + } else if (op->common.aml_opcode == AML_DATA_REGION_OP) { 675 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 676 + "Executing DataTableRegion Strings Op=%p\n", 677 + op)); 644 678 645 - status = acpi_ds_result_stack_pop(walk_state); 679 + status = 680 + acpi_ds_eval_table_region_operands 681 + (walk_state, op); 682 + if (ACPI_FAILURE(status)) { 683 + break; 684 + } 685 + } else if (op->common.aml_opcode == AML_BANK_FIELD_OP) { 686 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 687 + "Executing BankField Op=%p\n", 688 + op)); 689 + 690 + status = 691 + acpi_ds_eval_bank_field_operands(walk_state, 692 + op); 693 + if (ACPI_FAILURE(status)) { 694 + break; 695 + } 646 696 } 647 697 break; 648 698
+27 -10
drivers/acpi/dispatcher/dswload.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 443 443 if (ACPI_FAILURE(status)) { 444 444 return_ACPI_STATUS(status); 445 445 } 446 + } else if (op->common.aml_opcode == AML_DATA_REGION_OP) { 447 + status = 448 + acpi_ex_create_region(op->named.data, 449 + op->named.length, 450 + REGION_DATA_TABLE, 451 + walk_state); 452 + if (ACPI_FAILURE(status)) { 453 + return_ACPI_STATUS(status); 454 + } 446 455 } 447 456 } 448 457 #endif ··· 776 767 acpi_ns_lookup(walk_state->scope_info, buffer_ptr, 777 768 object_type, ACPI_IMODE_LOAD_PASS2, flags, 778 769 walk_state, &node); 770 + 771 + if (ACPI_SUCCESS(status) && (flags & ACPI_NS_TEMPORARY)) { 772 + ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, 773 + "***New Node [%4.4s] %p is temporary\n", 774 + acpi_ut_get_node_name(node), node)); 775 + } 779 776 break; 780 777 } 781 778 ··· 838 823 struct acpi_namespace_node *new_node; 839 824 #ifndef ACPI_NO_METHOD_EXECUTION 840 825 u32 i; 826 + u8 region_space; 841 827 #endif 842 828 843 829 ACPI_FUNCTION_TRACE(ds_load2_end_op); ··· 1019 1003 status = acpi_ex_create_event(walk_state); 1020 1004 break; 1021 1005 1022 - case AML_DATA_REGION_OP: 1023 - 1024 - status = acpi_ex_create_table_region(walk_state); 1025 - break; 1026 - 1027 1006 case AML_ALIAS_OP: 1028 1007 1029 1008 status = acpi_ex_create_alias(walk_state); ··· 1046 1035 switch (op->common.aml_opcode) { 1047 1036 #ifndef ACPI_NO_METHOD_EXECUTION 1048 1037 case AML_REGION_OP: 1038 + case AML_DATA_REGION_OP: 1039 + 1040 + if (op->common.aml_opcode == AML_REGION_OP) { 1041 + region_space = (acpi_adr_space_type) 1042 + ((op->common.value.arg)->common.value. 
1043 + integer); 1044 + } else { 1045 + region_space = REGION_DATA_TABLE; 1046 + } 1049 1047 1050 1048 /* 1051 1049 * If we are executing a method, initialize the region ··· 1063 1043 status = 1064 1044 acpi_ex_create_region(op->named.data, 1065 1045 op->named.length, 1066 - (acpi_adr_space_type) 1067 - ((op->common.value. 1068 - arg)->common.value. 1069 - integer), 1046 + region_space, 1070 1047 walk_state); 1071 1048 if (ACPI_FAILURE(status)) { 1072 1049 return (status);
+1 -1
drivers/acpi/dispatcher/dswscope.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+115 -406
drivers/acpi/dispatcher/dswstate.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 49 49 #define _COMPONENT ACPI_DISPATCHER 50 50 ACPI_MODULE_NAME("dswstate") 51 51 52 - /* Local prototypes */ 53 - #ifdef ACPI_OBSOLETE_FUNCTIONS 54 - acpi_status 55 - acpi_ds_result_insert(void *object, 56 - u32 index, struct acpi_walk_state *walk_state); 57 - 58 - acpi_status acpi_ds_obj_stack_delete_all(struct acpi_walk_state *walk_state); 59 - 60 - acpi_status 61 - acpi_ds_obj_stack_pop_object(union acpi_operand_object **object, 62 - struct acpi_walk_state *walk_state); 63 - 64 - void *acpi_ds_obj_stack_get_value(u32 index, 65 - struct acpi_walk_state *walk_state); 66 - #endif 67 - 68 - #ifdef ACPI_FUTURE_USAGE 69 - /******************************************************************************* 70 - * 71 - * FUNCTION: acpi_ds_result_remove 72 - * 73 - * PARAMETERS: Object - Where to return the popped object 74 - * Index - Where to extract the object 75 - * walk_state - Current Walk state 76 - * 77 - * RETURN: Status 78 - * 79 - * DESCRIPTION: Pop an object off the bottom of this walk's result stack. In 80 - * other words, this is a FIFO. 81 - * 82 - ******************************************************************************/ 83 - 84 - acpi_status 85 - acpi_ds_result_remove(union acpi_operand_object **object, 86 - u32 index, struct acpi_walk_state *walk_state) 87 - { 88 - union acpi_generic_state *state; 89 - 90 - ACPI_FUNCTION_NAME(ds_result_remove); 91 - 92 - state = walk_state->results; 93 - if (!state) { 94 - ACPI_ERROR((AE_INFO, "No result object pushed! 
State=%p", 95 - walk_state)); 96 - return (AE_NOT_EXIST); 97 - } 98 - 99 - if (index >= ACPI_OBJ_MAX_OPERAND) { 100 - ACPI_ERROR((AE_INFO, 101 - "Index out of range: %X State=%p Num=%X", 102 - index, walk_state, state->results.num_results)); 103 - } 104 - 105 - /* Check for a valid result object */ 106 - 107 - if (!state->results.obj_desc[index]) { 108 - ACPI_ERROR((AE_INFO, 109 - "Null operand! State=%p #Ops=%X, Index=%X", 110 - walk_state, state->results.num_results, index)); 111 - return (AE_AML_NO_RETURN_VALUE); 112 - } 113 - 114 - /* Remove the object */ 115 - 116 - state->results.num_results--; 117 - 118 - *object = state->results.obj_desc[index]; 119 - state->results.obj_desc[index] = NULL; 120 - 121 - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 122 - "Obj=%p [%s] Index=%X State=%p Num=%X\n", 123 - *object, 124 - (*object) ? acpi_ut_get_object_type_name(*object) : 125 - "NULL", index, walk_state, 126 - state->results.num_results)); 127 - 128 - return (AE_OK); 129 - } 130 - #endif /* ACPI_FUTURE_USAGE */ 52 + /* Local prototypes */ 53 + static acpi_status acpi_ds_result_stack_push(struct acpi_walk_state *ws); 54 + static acpi_status acpi_ds_result_stack_pop(struct acpi_walk_state *ws); 131 55 132 56 /******************************************************************************* 133 57 * ··· 62 138 * 63 139 * RETURN: Status 64 140 * 65 - * DESCRIPTION: Pop an object off the bottom of this walk's result stack. In 66 - * other words, this is a FIFO. 
141 + * DESCRIPTION: Pop an object off the top of this walk's result stack 67 142 * 68 143 ******************************************************************************/ 69 144 70 145 acpi_status 71 - acpi_ds_result_pop(union acpi_operand_object ** object, 72 - struct acpi_walk_state * walk_state) 146 + acpi_ds_result_pop(union acpi_operand_object **object, 147 + struct acpi_walk_state *walk_state) 73 148 { 74 149 acpi_native_uint index; 75 150 union acpi_generic_state *state; 151 + acpi_status status; 76 152 77 153 ACPI_FUNCTION_NAME(ds_result_pop); 78 154 79 155 state = walk_state->results; 80 - if (!state) { 81 - return (AE_OK); 156 + 157 + /* Incorrect state of result stack */ 158 + 159 + if (state && !walk_state->result_count) { 160 + ACPI_ERROR((AE_INFO, "No results on result stack")); 161 + return (AE_AML_INTERNAL); 82 162 } 83 163 84 - if (!state->results.num_results) { 164 + if (!state && walk_state->result_count) { 165 + ACPI_ERROR((AE_INFO, "No result state for result stack")); 166 + return (AE_AML_INTERNAL); 167 + } 168 + 169 + /* Empty result stack */ 170 + 171 + if (!state) { 85 172 ACPI_ERROR((AE_INFO, "Result stack is empty! State=%p", 86 173 walk_state)); 87 174 return (AE_AML_NO_RETURN_VALUE); 88 175 } 89 176 90 - /* Remove top element */ 177 + /* Return object of the top element and clean that top element result stack */ 91 178 92 - state->results.num_results--; 179 + walk_state->result_count--; 180 + index = walk_state->result_count % ACPI_RESULTS_FRAME_OBJ_NUM; 93 181 94 - for (index = ACPI_OBJ_NUM_OPERANDS; index; index--) { 95 - 96 - /* Check for a valid result object */ 97 - 98 - if (state->results.obj_desc[index - 1]) { 99 - *object = state->results.obj_desc[index - 1]; 100 - state->results.obj_desc[index - 1] = NULL; 101 - 102 - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 103 - "Obj=%p [%s] Index=%X State=%p Num=%X\n", 104 - *object, 105 - (*object) ? 
106 - acpi_ut_get_object_type_name(*object) 107 - : "NULL", (u32) index - 1, walk_state, 108 - state->results.num_results)); 109 - 110 - return (AE_OK); 111 - } 112 - } 113 - 114 - ACPI_ERROR((AE_INFO, "No result objects! State=%p", walk_state)); 115 - return (AE_AML_NO_RETURN_VALUE); 116 - } 117 - 118 - /******************************************************************************* 119 - * 120 - * FUNCTION: acpi_ds_result_pop_from_bottom 121 - * 122 - * PARAMETERS: Object - Where to return the popped object 123 - * walk_state - Current Walk state 124 - * 125 - * RETURN: Status 126 - * 127 - * DESCRIPTION: Pop an object off the bottom of this walk's result stack. In 128 - * other words, this is a FIFO. 129 - * 130 - ******************************************************************************/ 131 - 132 - acpi_status 133 - acpi_ds_result_pop_from_bottom(union acpi_operand_object ** object, 134 - struct acpi_walk_state * walk_state) 135 - { 136 - acpi_native_uint index; 137 - union acpi_generic_state *state; 138 - 139 - ACPI_FUNCTION_NAME(ds_result_pop_from_bottom); 140 - 141 - state = walk_state->results; 142 - if (!state) { 182 + *object = state->results.obj_desc[index]; 183 + if (!*object) { 143 184 ACPI_ERROR((AE_INFO, 144 - "No result object pushed! State=%p", walk_state)); 145 - return (AE_NOT_EXIST); 146 - } 147 - 148 - if (!state->results.num_results) { 149 - ACPI_ERROR((AE_INFO, "No result objects! 
State=%p", 185 + "No result objects on result stack, State=%p", 150 186 walk_state)); 151 187 return (AE_AML_NO_RETURN_VALUE); 152 188 } 153 189 154 - /* Remove Bottom element */ 155 - 156 - *object = state->results.obj_desc[0]; 157 - 158 - /* Push entire stack down one element */ 159 - 160 - for (index = 0; index < state->results.num_results; index++) { 161 - state->results.obj_desc[index] = 162 - state->results.obj_desc[index + 1]; 190 + state->results.obj_desc[index] = NULL; 191 + if (index == 0) { 192 + status = acpi_ds_result_stack_pop(walk_state); 193 + if (ACPI_FAILURE(status)) { 194 + return (status); 195 + } 163 196 } 164 197 165 - state->results.num_results--; 166 - 167 - /* Check for a valid result object */ 168 - 169 - if (!*object) { 170 - ACPI_ERROR((AE_INFO, 171 - "Null operand! State=%p #Ops=%X Index=%X", 172 - walk_state, state->results.num_results, 173 - (u32) index)); 174 - return (AE_AML_NO_RETURN_VALUE); 175 - } 176 - 177 - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Obj=%p [%s] Results=%p State=%p\n", 178 - *object, 179 - (*object) ? 
acpi_ut_get_object_type_name(*object) : 180 - "NULL", state, walk_state)); 198 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 199 + "Obj=%p [%s] Index=%X State=%p Num=%X\n", *object, 200 + acpi_ut_get_object_type_name(*object), 201 + (u32) index, walk_state, walk_state->result_count)); 181 202 182 203 return (AE_OK); 183 204 } ··· 145 276 struct acpi_walk_state * walk_state) 146 277 { 147 278 union acpi_generic_state *state; 279 + acpi_status status; 280 + acpi_native_uint index; 148 281 149 282 ACPI_FUNCTION_NAME(ds_result_push); 283 + 284 + if (walk_state->result_count > walk_state->result_size) { 285 + ACPI_ERROR((AE_INFO, "Result stack is full")); 286 + return (AE_AML_INTERNAL); 287 + } else if (walk_state->result_count == walk_state->result_size) { 288 + 289 + /* Extend the result stack */ 290 + 291 + status = acpi_ds_result_stack_push(walk_state); 292 + if (ACPI_FAILURE(status)) { 293 + ACPI_ERROR((AE_INFO, 294 + "Failed to extend the result stack")); 295 + return (status); 296 + } 297 + } 298 + 299 + if (!(walk_state->result_count < walk_state->result_size)) { 300 + ACPI_ERROR((AE_INFO, "No free elements in result stack")); 301 + return (AE_AML_INTERNAL); 302 + } 150 303 151 304 state = walk_state->results; 152 305 if (!state) { ··· 176 285 return (AE_AML_INTERNAL); 177 286 } 178 287 179 - if (state->results.num_results == ACPI_OBJ_NUM_OPERANDS) { 180 - ACPI_ERROR((AE_INFO, 181 - "Result stack overflow: Obj=%p State=%p Num=%X", 182 - object, walk_state, state->results.num_results)); 183 - return (AE_STACK_OVERFLOW); 184 - } 185 - 186 288 if (!object) { 187 289 ACPI_ERROR((AE_INFO, 188 290 "Null Object! 
Obj=%p State=%p Num=%X", 189 - object, walk_state, state->results.num_results)); 291 + object, walk_state, walk_state->result_count)); 190 292 return (AE_BAD_PARAMETER); 191 293 } 192 294 193 - state->results.obj_desc[state->results.num_results] = object; 194 - state->results.num_results++; 295 + /* Assign the address of object to the top free element of result stack */ 296 + 297 + index = walk_state->result_count % ACPI_RESULTS_FRAME_OBJ_NUM; 298 + state->results.obj_desc[index] = object; 299 + walk_state->result_count++; 195 300 196 301 ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Obj=%p [%s] State=%p Num=%X Cur=%X\n", 197 302 object, 198 - object ? 199 303 acpi_ut_get_object_type_name((union 200 304 acpi_operand_object *) 201 - object) : "NULL", 202 - walk_state, state->results.num_results, 305 + object), walk_state, 306 + walk_state->result_count, 203 307 walk_state->current_result)); 204 308 205 309 return (AE_OK); ··· 208 322 * 209 323 * RETURN: Status 210 324 * 211 - * DESCRIPTION: Push an object onto the walk_state result stack. 
325 + * DESCRIPTION: Push an object onto the walk_state result stack 212 326 * 213 327 ******************************************************************************/ 214 328 215 - acpi_status acpi_ds_result_stack_push(struct acpi_walk_state * walk_state) 329 + static acpi_status acpi_ds_result_stack_push(struct acpi_walk_state *walk_state) 216 330 { 217 331 union acpi_generic_state *state; 218 332 219 333 ACPI_FUNCTION_NAME(ds_result_stack_push); 334 + 335 + /* Check for stack overflow */ 336 + 337 + if (((u32) walk_state->result_size + ACPI_RESULTS_FRAME_OBJ_NUM) > 338 + ACPI_RESULTS_OBJ_NUM_MAX) { 339 + ACPI_ERROR((AE_INFO, "Result stack overflow: State=%p Num=%X", 340 + walk_state, walk_state->result_size)); 341 + return (AE_STACK_OVERFLOW); 342 + } 220 343 221 344 state = acpi_ut_create_generic_state(); 222 345 if (!state) { ··· 234 339 235 340 state->common.descriptor_type = ACPI_DESC_TYPE_STATE_RESULT; 236 341 acpi_ut_push_generic_state(&walk_state->results, state); 342 + 343 + /* Increase the length of the result stack by the length of frame */ 344 + 345 + walk_state->result_size += ACPI_RESULTS_FRAME_OBJ_NUM; 237 346 238 347 ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Results=%p State=%p\n", 239 348 state, walk_state)); ··· 253 354 * 254 355 * RETURN: Status 255 356 * 256 - * DESCRIPTION: Pop an object off of the walk_state result stack. 
357 + * DESCRIPTION: Pop an object off of the walk_state result stack 257 358 * 258 359 ******************************************************************************/ 259 360 260 - acpi_status acpi_ds_result_stack_pop(struct acpi_walk_state * walk_state) 361 + static acpi_status acpi_ds_result_stack_pop(struct acpi_walk_state *walk_state) 261 362 { 262 363 union acpi_generic_state *state; 263 364 ··· 266 367 /* Check for stack underflow */ 267 368 268 369 if (walk_state->results == NULL) { 269 - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Underflow - State=%p\n", 370 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 371 + "Result stack underflow - State=%p\n", 270 372 walk_state)); 271 373 return (AE_AML_NO_OPERAND); 272 374 } 273 375 376 + if (walk_state->result_size < ACPI_RESULTS_FRAME_OBJ_NUM) { 377 + ACPI_ERROR((AE_INFO, "Insufficient result stack size")); 378 + return (AE_AML_INTERNAL); 379 + } 380 + 274 381 state = acpi_ut_pop_generic_state(&walk_state->results); 382 + acpi_ut_delete_generic_state(state); 383 + 384 + /* Decrease the length of result stack by the length of frame */ 385 + 386 + walk_state->result_size -= ACPI_RESULTS_FRAME_OBJ_NUM; 275 387 276 388 ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 277 389 "Result=%p RemainingResults=%X State=%p\n", 278 - state, state->results.num_results, walk_state)); 279 - 280 - acpi_ut_delete_generic_state(state); 390 + state, walk_state->result_count, walk_state)); 281 391 282 392 return (AE_OK); 283 393 } ··· 320 412 321 413 /* Put the object onto the stack */ 322 414 323 - walk_state->operands[walk_state->num_operands] = object; 415 + walk_state->operands[walk_state->operand_index] = object; 324 416 walk_state->num_operands++; 417 + 418 + /* For the usual order of filling the operand stack */ 419 + 420 + walk_state->operand_index++; 325 421 326 422 ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Obj=%p [%s] State=%p #Ops=%X\n", 327 423 object, ··· 396 484 * 397 485 ******************************************************************************/ 398 486 399 - 
acpi_status 487 + void 400 488 acpi_ds_obj_stack_pop_and_delete(u32 pop_count, 401 - struct acpi_walk_state * walk_state) 489 + struct acpi_walk_state *walk_state) 402 490 { 403 - u32 i; 491 + acpi_native_int i; 404 492 union acpi_operand_object *obj_desc; 405 493 406 494 ACPI_FUNCTION_NAME(ds_obj_stack_pop_and_delete); 407 495 408 - for (i = 0; i < pop_count; i++) { 496 + if (pop_count == 0) { 497 + return; 498 + } 409 499 410 - /* Check for stack underflow */ 411 - 500 + for (i = (acpi_native_int) (pop_count - 1); i >= 0; i--) { 412 501 if (walk_state->num_operands == 0) { 413 - ACPI_ERROR((AE_INFO, 414 - "Object stack underflow! Count=%X State=%p #Ops=%X", 415 - pop_count, walk_state, 416 - walk_state->num_operands)); 417 - return (AE_STACK_UNDERFLOW); 502 + return; 418 503 } 419 504 420 505 /* Pop the stack and delete an object if present in this stack entry */ 421 506 422 507 walk_state->num_operands--; 423 - obj_desc = walk_state->operands[walk_state->num_operands]; 508 + obj_desc = walk_state->operands[i]; 424 509 if (obj_desc) { 425 - acpi_ut_remove_reference(walk_state-> 426 - operands[walk_state-> 427 - num_operands]); 428 - walk_state->operands[walk_state->num_operands] = NULL; 510 + acpi_ut_remove_reference(walk_state->operands[i]); 511 + walk_state->operands[i] = NULL; 429 512 } 430 513 } 431 514 432 515 ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Count=%X State=%p #Ops=%X\n", 433 516 pop_count, walk_state, walk_state->num_operands)); 434 - 435 - return (AE_OK); 436 517 } 437 518 438 519 /******************************************************************************* ··· 465 560 * 466 561 * RETURN: None 467 562 * 468 - * DESCRIPTION: Place the Thread state at the head of the state list. 
563 + * DESCRIPTION: Place the Thread state at the head of the state list 469 564 * 470 565 ******************************************************************************/ 471 566 ··· 541 636 *thread) 542 637 { 543 638 struct acpi_walk_state *walk_state; 544 - acpi_status status; 545 639 546 640 ACPI_FUNCTION_TRACE(ds_create_walk_state); 547 641 ··· 562 658 #if (!defined (ACPI_NO_METHOD_EXECUTION) && !defined (ACPI_CONSTANT_EVAL_ONLY)) 563 659 acpi_ds_method_data_init(walk_state); 564 660 #endif 565 - 566 - /* Create an initial result stack entry */ 567 - 568 - status = acpi_ds_result_stack_push(walk_state); 569 - if (ACPI_FAILURE(status)) { 570 - ACPI_FREE(walk_state); 571 - return_PTR(NULL); 572 - } 573 661 574 662 /* Put the new state at the head of the walk list */ 575 663 ··· 756 860 ACPI_FREE(walk_state); 757 861 return_VOID; 758 862 } 759 - 760 - #ifdef ACPI_OBSOLETE_FUNCTIONS 761 - /******************************************************************************* 762 - * 763 - * FUNCTION: acpi_ds_result_insert 764 - * 765 - * PARAMETERS: Object - Object to push 766 - * Index - Where to insert the object 767 - * walk_state - Current Walk state 768 - * 769 - * RETURN: Status 770 - * 771 - * DESCRIPTION: Insert an object onto this walk's result stack 772 - * 773 - ******************************************************************************/ 774 - 775 - acpi_status 776 - acpi_ds_result_insert(void *object, 777 - u32 index, struct acpi_walk_state *walk_state) 778 - { 779 - union acpi_generic_state *state; 780 - 781 - ACPI_FUNCTION_NAME(ds_result_insert); 782 - 783 - state = walk_state->results; 784 - if (!state) { 785 - ACPI_ERROR((AE_INFO, "No result object pushed! 
State=%p", 786 - walk_state)); 787 - return (AE_NOT_EXIST); 788 - } 789 - 790 - if (index >= ACPI_OBJ_NUM_OPERANDS) { 791 - ACPI_ERROR((AE_INFO, 792 - "Index out of range: %X Obj=%p State=%p Num=%X", 793 - index, object, walk_state, 794 - state->results.num_results)); 795 - return (AE_BAD_PARAMETER); 796 - } 797 - 798 - if (!object) { 799 - ACPI_ERROR((AE_INFO, 800 - "Null Object! Index=%X Obj=%p State=%p Num=%X", 801 - index, object, walk_state, 802 - state->results.num_results)); 803 - return (AE_BAD_PARAMETER); 804 - } 805 - 806 - state->results.obj_desc[index] = object; 807 - state->results.num_results++; 808 - 809 - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 810 - "Obj=%p [%s] State=%p Num=%X Cur=%X\n", 811 - object, 812 - object ? 813 - acpi_ut_get_object_type_name((union 814 - acpi_operand_object *) 815 - object) : "NULL", 816 - walk_state, state->results.num_results, 817 - walk_state->current_result)); 818 - 819 - return (AE_OK); 820 - } 821 - 822 - /******************************************************************************* 823 - * 824 - * FUNCTION: acpi_ds_obj_stack_delete_all 825 - * 826 - * PARAMETERS: walk_state - Current Walk state 827 - * 828 - * RETURN: Status 829 - * 830 - * DESCRIPTION: Clear the object stack by deleting all objects that are on it. 831 - * Should be used with great care, if at all! 
832 - * 833 - ******************************************************************************/ 834 - 835 - acpi_status acpi_ds_obj_stack_delete_all(struct acpi_walk_state * walk_state) 836 - { 837 - u32 i; 838 - 839 - ACPI_FUNCTION_TRACE_PTR(ds_obj_stack_delete_all, walk_state); 840 - 841 - /* The stack size is configurable, but fixed */ 842 - 843 - for (i = 0; i < ACPI_OBJ_NUM_OPERANDS; i++) { 844 - if (walk_state->operands[i]) { 845 - acpi_ut_remove_reference(walk_state->operands[i]); 846 - walk_state->operands[i] = NULL; 847 - } 848 - } 849 - 850 - return_ACPI_STATUS(AE_OK); 851 - } 852 - 853 - /******************************************************************************* 854 - * 855 - * FUNCTION: acpi_ds_obj_stack_pop_object 856 - * 857 - * PARAMETERS: Object - Where to return the popped object 858 - * walk_state - Current Walk state 859 - * 860 - * RETURN: Status 861 - * 862 - * DESCRIPTION: Pop this walk's object stack. Objects on the stack are NOT 863 - * deleted by this routine. 864 - * 865 - ******************************************************************************/ 866 - 867 - acpi_status 868 - acpi_ds_obj_stack_pop_object(union acpi_operand_object **object, 869 - struct acpi_walk_state *walk_state) 870 - { 871 - ACPI_FUNCTION_NAME(ds_obj_stack_pop_object); 872 - 873 - /* Check for stack underflow */ 874 - 875 - if (walk_state->num_operands == 0) { 876 - ACPI_ERROR((AE_INFO, 877 - "Missing operand/stack empty! State=%p #Ops=%X", 878 - walk_state, walk_state->num_operands)); 879 - *object = NULL; 880 - return (AE_AML_NO_OPERAND); 881 - } 882 - 883 - /* Pop the stack */ 884 - 885 - walk_state->num_operands--; 886 - 887 - /* Check for a valid operand */ 888 - 889 - if (!walk_state->operands[walk_state->num_operands]) { 890 - ACPI_ERROR((AE_INFO, 891 - "Null operand! 
State=%p #Ops=%X", 892 - walk_state, walk_state->num_operands)); 893 - *object = NULL; 894 - return (AE_AML_NO_OPERAND); 895 - } 896 - 897 - /* Get operand and set stack entry to null */ 898 - 899 - *object = walk_state->operands[walk_state->num_operands]; 900 - walk_state->operands[walk_state->num_operands] = NULL; 901 - 902 - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Obj=%p [%s] State=%p #Ops=%X\n", 903 - *object, acpi_ut_get_object_type_name(*object), 904 - walk_state, walk_state->num_operands)); 905 - 906 - return (AE_OK); 907 - } 908 - 909 - /******************************************************************************* 910 - * 911 - * FUNCTION: acpi_ds_obj_stack_get_value 912 - * 913 - * PARAMETERS: Index - Stack index whose value is desired. Based 914 - * on the top of the stack (index=0 == top) 915 - * walk_state - Current Walk state 916 - * 917 - * RETURN: Pointer to the requested operand 918 - * 919 - * DESCRIPTION: Retrieve an object from this walk's operand stack. Index must 920 - * be within the range of the current stack pointer. 921 - * 922 - ******************************************************************************/ 923 - 924 - void *acpi_ds_obj_stack_get_value(u32 index, struct acpi_walk_state *walk_state) 925 - { 926 - 927 - ACPI_FUNCTION_TRACE_PTR(ds_obj_stack_get_value, walk_state); 928 - 929 - /* Can't do it if the stack is empty */ 930 - 931 - if (walk_state->num_operands == 0) { 932 - return_PTR(NULL); 933 - } 934 - 935 - /* or if the index is past the top of the stack */ 936 - 937 - if (index > (walk_state->num_operands - (u32) 1)) { 938 - return_PTR(NULL); 939 - } 940 - 941 - return_PTR(walk_state-> 942 - operands[(acpi_native_uint) (walk_state->num_operands - 1) - 943 - index]); 944 - } 945 - #endif
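The dswstate.c hunks above replace the old fixed-size result array (and its separate FIFO/LIFO pop variants) with a growable stack of frames: each frame holds ACPI_RESULTS_FRAME_OBJ_NUM slots, `walk_state->result_count` indexes into the newest frame via a modulo, and `acpi_ds_result_stack_push()` adds a frame only when the count reaches `result_size`. A minimal standalone model of that indexing scheme (frame size and limit are illustrative stand-ins for the kernel's constants, not the kernel code itself):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative sizes; the kernel uses ACPI_RESULTS_FRAME_OBJ_NUM and
 * ACPI_RESULTS_OBJ_NUM_MAX. */
#define FRAME_OBJ_NUM 8
#define OBJ_NUM_MAX   (FRAME_OBJ_NUM * 4)

struct frame {
    void *obj[FRAME_OBJ_NUM];
    struct frame *next;          /* frames form a linked stack, newest first */
};

struct walk {
    struct frame *frames;
    unsigned count;              /* total objects on the result stack */
    unsigned size;               /* total slots currently allocated   */
};

/* Mirrors acpi_ds_result_stack_push(): add one frame, grow 'size'. */
static int frame_push(struct walk *w)
{
    if (w->size + FRAME_OBJ_NUM > OBJ_NUM_MAX)
        return -1;                               /* stack overflow */
    struct frame *f = calloc(1, sizeof(*f));
    if (!f)
        return -1;
    f->next = w->frames;
    w->frames = f;
    w->size += FRAME_OBJ_NUM;
    return 0;
}

/* Mirrors acpi_ds_result_push(): extend on demand, then store at
 * count % FRAME_OBJ_NUM within the newest frame. */
static int result_push(struct walk *w, void *obj)
{
    if (w->count == w->size && frame_push(w))
        return -1;
    w->frames->obj[w->count % FRAME_OBJ_NUM] = obj;
    w->count++;
    return 0;
}

/* Mirrors acpi_ds_result_pop(): when slot 0 of a frame empties,
 * the whole frame is popped and freed (the index == 0 case). */
static void *result_pop(struct walk *w)
{
    if (!w->count)
        return NULL;
    w->count--;
    unsigned idx = w->count % FRAME_OBJ_NUM;
    void *obj = w->frames->obj[idx];
    w->frames->obj[idx] = NULL;
    if (idx == 0) {
        struct frame *f = w->frames;
        w->frames = f->next;
        free(f);
        w->size -= FRAME_OBJ_NUM;
    }
    return obj;
}
```

Pushing a ninth object allocates a second frame; popping back below eight frees it again, which is exactly why `acpi_ds_create_walk_state()` no longer needs to pre-push an initial result-stack entry in the diff above.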
+124 -116
drivers/acpi/ec.c
··· 73 73 74 74 #define ACPI_EC_DELAY 500 /* Wait 500ms max. during EC ops */ 75 75 #define ACPI_EC_UDELAY_GLK 1000 /* Wait 1ms max. to get global lock */ 76 + #define ACPI_EC_UDELAY 100 /* Wait 100us before polling EC again */ 76 77 77 78 enum { 78 79 EC_FLAGS_WAIT_GPE = 0, /* Don't check status until GPE arrives */ 79 80 EC_FLAGS_QUERY_PENDING, /* Query is pending */ 80 81 EC_FLAGS_GPE_MODE, /* Expect GPE to be sent for status change */ 81 - EC_FLAGS_NO_ADDRESS_GPE, /* Expect GPE only for non-address event */ 82 - EC_FLAGS_ADDRESS, /* Address is being written */ 83 - EC_FLAGS_NO_WDATA_GPE, /* Don't expect WDATA GPE event */ 84 - EC_FLAGS_WDATA, /* Data is being written */ 85 - EC_FLAGS_NO_OBF1_GPE, /* Don't expect GPE before read */ 86 - }; 87 - 88 - static int acpi_ec_remove(struct acpi_device *device, int type); 89 - static int acpi_ec_start(struct acpi_device *device); 90 - static int acpi_ec_stop(struct acpi_device *device, int type); 91 - static int acpi_ec_add(struct acpi_device *device); 92 - 93 - static const struct acpi_device_id ec_device_ids[] = { 94 - {"PNP0C09", 0}, 95 - {"", 0}, 96 - }; 97 - 98 - static struct acpi_driver acpi_ec_driver = { 99 - .name = "ec", 100 - .class = ACPI_EC_CLASS, 101 - .ids = ec_device_ids, 102 - .ops = { 103 - .add = acpi_ec_add, 104 - .remove = acpi_ec_remove, 105 - .start = acpi_ec_start, 106 - .stop = acpi_ec_stop, 107 - }, 82 + EC_FLAGS_NO_GPE, /* Don't use GPE mode */ 83 + EC_FLAGS_RESCHEDULE_POLL /* Re-schedule poll */ 108 84 }; 109 85 110 86 /* If we find an EC via the ECDT, we need to keep a ptr to its context */ ··· 105 129 struct mutex lock; 106 130 wait_queue_head_t wait; 107 131 struct list_head list; 132 + struct delayed_work work; 133 + atomic_t irq_count; 108 134 u8 handlers_installed; 109 135 } *boot_ec, *first_ec; 110 136 ··· 155 177 return 0; 156 178 } 157 179 180 + static void ec_schedule_ec_poll(struct acpi_ec *ec) 181 + { 182 + if (test_bit(EC_FLAGS_RESCHEDULE_POLL, &ec->flags)) 183 + 
schedule_delayed_work(&ec->work, 184 + msecs_to_jiffies(ACPI_EC_DELAY)); 185 + } 186 + 187 + static void ec_switch_to_poll_mode(struct acpi_ec *ec) 188 + { 189 + set_bit(EC_FLAGS_NO_GPE, &ec->flags); 190 + clear_bit(EC_FLAGS_GPE_MODE, &ec->flags); 191 + acpi_disable_gpe(NULL, ec->gpe, ACPI_NOT_ISR); 192 + set_bit(EC_FLAGS_RESCHEDULE_POLL, &ec->flags); 193 + } 194 + 158 195 static int acpi_ec_wait(struct acpi_ec *ec, enum ec_event event, int force_poll) 159 196 { 160 - int ret = 0; 161 - 162 - if (unlikely(event == ACPI_EC_EVENT_OBF_1 && 163 - test_bit(EC_FLAGS_NO_OBF1_GPE, &ec->flags))) 164 - force_poll = 1; 165 - if (unlikely(test_bit(EC_FLAGS_ADDRESS, &ec->flags) && 166 - test_bit(EC_FLAGS_NO_ADDRESS_GPE, &ec->flags))) 167 - force_poll = 1; 168 - if (unlikely(test_bit(EC_FLAGS_WDATA, &ec->flags) && 169 - test_bit(EC_FLAGS_NO_WDATA_GPE, &ec->flags))) 170 - force_poll = 1; 197 + atomic_set(&ec->irq_count, 0); 171 198 if (likely(test_bit(EC_FLAGS_GPE_MODE, &ec->flags)) && 172 199 likely(!force_poll)) { 173 200 if (wait_event_timeout(ec->wait, acpi_ec_check_status(ec, event), 174 201 msecs_to_jiffies(ACPI_EC_DELAY))) 175 - goto end; 202 + return 0; 176 203 clear_bit(EC_FLAGS_WAIT_GPE, &ec->flags); 177 204 if (acpi_ec_check_status(ec, event)) { 178 - if (event == ACPI_EC_EVENT_OBF_1) { 179 - /* miss OBF_1 GPE, don't expect it */ 180 - pr_info(PREFIX "missing OBF confirmation, " 181 - "don't expect it any longer.\n"); 182 - set_bit(EC_FLAGS_NO_OBF1_GPE, &ec->flags); 183 - } else if (test_bit(EC_FLAGS_ADDRESS, &ec->flags)) { 184 - /* miss address GPE, don't expect it anymore */ 185 - pr_info(PREFIX "missing address confirmation, " 186 - "don't expect it any longer.\n"); 187 - set_bit(EC_FLAGS_NO_ADDRESS_GPE, &ec->flags); 188 - } else if (test_bit(EC_FLAGS_WDATA, &ec->flags)) { 189 - /* miss write data GPE, don't expect it */ 190 - pr_info(PREFIX "missing write data confirmation, " 191 - "don't expect it any longer.\n"); 192 - set_bit(EC_FLAGS_NO_WDATA_GPE, &ec->flags); 
193 - } else { 194 - /* missing GPEs, switch back to poll mode */ 195 - if (printk_ratelimit()) 196 - pr_info(PREFIX "missing confirmations, " 205 + /* missing GPEs, switch back to poll mode */ 206 + if (printk_ratelimit()) 207 + pr_info(PREFIX "missing confirmations, " 197 208 "switch off interrupt mode.\n"); 198 - clear_bit(EC_FLAGS_GPE_MODE, &ec->flags); 199 - } 200 - goto end; 209 + ec_switch_to_poll_mode(ec); 210 + ec_schedule_ec_poll(ec); 211 + return 0; 201 212 } 202 213 } else { 203 214 unsigned long delay = jiffies + msecs_to_jiffies(ACPI_EC_DELAY); 204 215 clear_bit(EC_FLAGS_WAIT_GPE, &ec->flags); 205 216 while (time_before(jiffies, delay)) { 206 217 if (acpi_ec_check_status(ec, event)) 207 - goto end; 218 + return 0; 219 + udelay(ACPI_EC_UDELAY); 208 220 } 209 221 } 210 - pr_err(PREFIX "acpi_ec_wait timeout," 211 - " status = %d, expect_event = %d\n", 212 - acpi_ec_read_status(ec), event); 213 - ret = -ETIME; 214 - end: 215 - clear_bit(EC_FLAGS_ADDRESS, &ec->flags); 216 - return ret; 222 + pr_err(PREFIX "acpi_ec_wait timeout, status = 0x%2.2x, event = %s\n", 223 + acpi_ec_read_status(ec), 224 + (event == ACPI_EC_EVENT_OBF_1) ? 
"\"b0=1\"" : "\"b1=0\""); 225 + return -ETIME; 217 226 } 218 227 219 228 static int acpi_ec_transaction_unlocked(struct acpi_ec *ec, u8 command, ··· 210 245 { 211 246 int result = 0; 212 247 set_bit(EC_FLAGS_WAIT_GPE, &ec->flags); 213 - acpi_ec_write_cmd(ec, command); 214 248 pr_debug(PREFIX "transaction start\n"); 249 + acpi_ec_write_cmd(ec, command); 215 250 for (; wdata_len > 0; --wdata_len) { 216 251 result = acpi_ec_wait(ec, ACPI_EC_EVENT_IBF_0, force_poll); 217 252 if (result) { ··· 219 254 "write_cmd timeout, command = %d\n", command); 220 255 goto end; 221 256 } 222 - /* mark the address byte written to EC */ 223 - if (rdata_len + wdata_len > 1) 224 - set_bit(EC_FLAGS_ADDRESS, &ec->flags); 225 257 set_bit(EC_FLAGS_WAIT_GPE, &ec->flags); 226 258 acpi_ec_write_data(ec, *(wdata++)); 227 259 } 228 260 229 261 if (!rdata_len) { 230 - set_bit(EC_FLAGS_WDATA, &ec->flags); 231 262 result = acpi_ec_wait(ec, ACPI_EC_EVENT_IBF_0, force_poll); 232 263 if (result) { 233 264 pr_err(PREFIX ··· 488 527 { 489 528 acpi_status status = AE_OK; 490 529 struct acpi_ec *ec = data; 530 + u8 state = acpi_ec_read_status(ec); 491 531 492 532 pr_debug(PREFIX "~~~> interrupt\n"); 533 + atomic_inc(&ec->irq_count); 534 + if (atomic_read(&ec->irq_count) > 5) { 535 + pr_err(PREFIX "GPE storm detected, disabling EC GPE\n"); 536 + ec_switch_to_poll_mode(ec); 537 + goto end; 538 + } 493 539 clear_bit(EC_FLAGS_WAIT_GPE, &ec->flags); 494 540 if (test_bit(EC_FLAGS_GPE_MODE, &ec->flags)) 495 541 wake_up(&ec->wait); 496 542 497 - if (acpi_ec_read_status(ec) & ACPI_EC_FLAG_SCI) { 543 + if (state & ACPI_EC_FLAG_SCI) { 498 544 if (!test_and_set_bit(EC_FLAGS_QUERY_PENDING, &ec->flags)) 499 545 status = acpi_os_execute(OSL_EC_BURST_HANDLER, 500 546 acpi_ec_gpe_query, ec); 501 - } else if (unlikely(!test_bit(EC_FLAGS_GPE_MODE, &ec->flags))) { 547 + } else if (!test_bit(EC_FLAGS_GPE_MODE, &ec->flags) && 548 + !test_bit(EC_FLAGS_NO_GPE, &ec->flags) && 549 + in_interrupt()) { 502 550 /* this is non-query, 
must be confirmation */ 503 551 if (printk_ratelimit()) 504 552 pr_info(PREFIX "non-query interrupt received," 505 553 " switching to interrupt mode\n"); 506 554 set_bit(EC_FLAGS_GPE_MODE, &ec->flags); 555 + clear_bit(EC_FLAGS_RESCHEDULE_POLL, &ec->flags); 507 556 } 508 - 557 + end: 558 + ec_schedule_ec_poll(ec); 509 559 return ACPI_SUCCESS(status) ? 510 560 ACPI_INTERRUPT_HANDLED : ACPI_INTERRUPT_NOT_HANDLED; 561 + } 562 + 563 + static void do_ec_poll(struct work_struct *work) 564 + { 565 + struct acpi_ec *ec = container_of(work, struct acpi_ec, work.work); 566 + atomic_set(&ec->irq_count, 0); 567 + (void)acpi_ec_gpe_handler(ec); 511 568 } 512 569 513 570 /* -------------------------------------------------------------------------- 514 571 Address Space Management 515 572 -------------------------------------------------------------------------- */ 516 - 517 - static acpi_status 518 - acpi_ec_space_setup(acpi_handle region_handle, 519 - u32 function, void *handler_context, void **return_context) 520 - { 521 - /* 522 - * The EC object is in the handler context and is needed 523 - * when calling the acpi_ec_space_handler. 524 - */ 525 - *return_context = (function != ACPI_REGION_DEACTIVATE) ? 526 - handler_context : NULL; 527 - 528 - return AE_OK; 529 - } 530 573 531 574 static acpi_status 532 575 acpi_ec_space_handler(u32 function, acpi_physical_address address, ··· 669 704 mutex_init(&ec->lock); 670 705 init_waitqueue_head(&ec->wait); 671 706 INIT_LIST_HEAD(&ec->list); 707 + INIT_DELAYED_WORK_DEFERRABLE(&ec->work, do_ec_poll); 708 + atomic_set(&ec->irq_count, 0); 672 709 return ec; 673 710 } 674 711 ··· 703 736 status = acpi_evaluate_integer(handle, "_GPE", NULL, &ec->gpe); 704 737 if (ACPI_FAILURE(status)) 705 738 return status; 706 - /* Find and register all query methods */ 707 - acpi_walk_namespace(ACPI_TYPE_METHOD, handle, 1, 708 - acpi_ec_register_query_methods, ec, NULL); 709 739 /* Use the global lock for all EC transactions? 
*/ 710 740 acpi_evaluate_integer(handle, "_GLK", NULL, &ec->global_lock); 711 741 ec->handle = handle; 712 742 return AE_CTRL_TERMINATE; 713 743 } 714 744 745 + static void ec_poll_stop(struct acpi_ec *ec) 746 + { 747 + clear_bit(EC_FLAGS_RESCHEDULE_POLL, &ec->flags); 748 + cancel_delayed_work(&ec->work); 749 + } 750 + 715 751 static void ec_remove_handlers(struct acpi_ec *ec) 716 752 { 753 + ec_poll_stop(ec); 717 754 if (ACPI_FAILURE(acpi_remove_address_space_handler(ec->handle, 718 755 ACPI_ADR_SPACE_EC, &acpi_ec_space_handler))) 719 756 pr_err(PREFIX "failed to remove space handler\n"); ··· 737 766 strcpy(acpi_device_class(device), ACPI_EC_CLASS); 738 767 739 768 /* Check for boot EC */ 740 - if (boot_ec) { 741 - if (boot_ec->handle == device->handle) { 742 - /* Pre-loaded EC from DSDT, just move pointer */ 743 - ec = boot_ec; 744 - boot_ec = NULL; 745 - goto end; 746 - } else if (boot_ec->handle == ACPI_ROOT_OBJECT) { 747 - /* ECDT-based EC, time to shut it down */ 748 - ec_remove_handlers(boot_ec); 749 - kfree(boot_ec); 750 - first_ec = boot_ec = NULL; 769 + if (boot_ec && 770 + (boot_ec->handle == device->handle || 771 + boot_ec->handle == ACPI_ROOT_OBJECT)) { 772 + ec = boot_ec; 773 + boot_ec = NULL; 774 + } else { 775 + ec = make_acpi_ec(); 776 + if (!ec) 777 + return -ENOMEM; 778 + if (ec_parse_device(device->handle, 0, ec, NULL) != 779 + AE_CTRL_TERMINATE) { 780 + kfree(ec); 781 + return -EINVAL; 751 782 } 752 783 } 753 784 754 - ec = make_acpi_ec(); 755 - if (!ec) 756 - return -ENOMEM; 757 - 758 - if (ec_parse_device(device->handle, 0, ec, NULL) != 759 - AE_CTRL_TERMINATE) { 760 - kfree(ec); 761 - return -EINVAL; 762 - } 763 785 ec->handle = device->handle; 764 - end: 786 + 787 + /* Find and register all query methods */ 788 + acpi_walk_namespace(ACPI_TYPE_METHOD, ec->handle, 1, 789 + acpi_ec_register_query_methods, ec, NULL); 790 + 765 791 if (!first_ec) 766 792 first_ec = ec; 767 793 acpi_driver_data(device) = ec; ··· 833 865 status = 
acpi_install_address_space_handler(ec->handle, 834 866 ACPI_ADR_SPACE_EC, 835 867 &acpi_ec_space_handler, 836 - &acpi_ec_space_setup, ec); 868 + NULL, ec); 837 869 if (ACPI_FAILURE(status)) { 838 870 acpi_remove_gpe_handler(NULL, ec->gpe, &acpi_ec_gpe_handler); 839 871 return -ENODEV; ··· 860 892 861 893 /* EC is fully operational, allow queries */ 862 894 clear_bit(EC_FLAGS_QUERY_PENDING, &ec->flags); 895 + ec_schedule_ec_poll(ec); 863 896 return ret; 864 897 } 865 898 ··· 888 919 return -EFAULT; 889 920 } 890 921 922 + static const struct acpi_device_id ec_device_ids[] = { 923 + {"PNP0C09", 0}, 924 + {"", 0}, 925 + }; 926 + 891 927 int __init acpi_ec_ecdt_probe(void) 892 928 { 893 929 int ret; ··· 913 939 boot_ec->data_addr = ecdt_ptr->data.address; 914 940 boot_ec->gpe = ecdt_ptr->gpe; 915 941 boot_ec->handle = ACPI_ROOT_OBJECT; 942 + acpi_get_handle(ACPI_ROOT_OBJECT, ecdt_ptr->id, &boot_ec->handle); 916 943 } else { 917 944 /* This workaround is needed only on some broken machines, 918 945 * which require early EC, but fail to provide ECDT */ ··· 942 967 boot_ec = NULL; 943 968 return -ENODEV; 944 969 } 970 + 971 + static int acpi_ec_suspend(struct acpi_device *device, pm_message_t state) 972 + { 973 + struct acpi_ec *ec = acpi_driver_data(device); 974 + /* Stop using GPE */ 975 + set_bit(EC_FLAGS_NO_GPE, &ec->flags); 976 + clear_bit(EC_FLAGS_GPE_MODE, &ec->flags); 977 + acpi_disable_gpe(NULL, ec->gpe, ACPI_NOT_ISR); 978 + return 0; 979 + } 980 + 981 + static int acpi_ec_resume(struct acpi_device *device) 982 + { 983 + struct acpi_ec *ec = acpi_driver_data(device); 984 + /* Enable use of GPE back */ 985 + clear_bit(EC_FLAGS_NO_GPE, &ec->flags); 986 + acpi_enable_gpe(NULL, ec->gpe, ACPI_NOT_ISR); 987 + return 0; 988 + } 989 + 990 + static struct acpi_driver acpi_ec_driver = { 991 + .name = "ec", 992 + .class = ACPI_EC_CLASS, 993 + .ids = ec_device_ids, 994 + .ops = { 995 + .add = acpi_ec_add, 996 + .remove = acpi_ec_remove, 997 + .start = acpi_ec_start, 998 + 
.stop = acpi_ec_stop, 999 + .suspend = acpi_ec_suspend, 1000 + .resume = acpi_ec_resume, 1001 + }, 1002 + }; 945 1003 946 1004 static int __init acpi_ec_init(void) 947 1005 {
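The ec.c changes above drop the per-event "missing GPE" flags in favor of two coarser mechanisms: a deferrable delayed-work poll loop, and storm detection that abandons GPE mode outright when more than five interrupts arrive for one transaction (`irq_count` is reset at the start of each wait and in `do_ec_poll()`). The mode switch itself is just flag bookkeeping; a simplified model of the storm-detection decision (the threshold and flag names are taken from the diff, the rest is an illustrative sketch, not the driver):

```c
#include <assert.h>
#include <stdbool.h>

#define GPE_STORM_THRESHOLD 5     /* from the diff: irq_count > 5 */

struct ec_model {
    bool gpe_mode;                /* EC_FLAGS_GPE_MODE        */
    bool no_gpe;                  /* EC_FLAGS_NO_GPE          */
    bool reschedule_poll;         /* EC_FLAGS_RESCHEDULE_POLL */
    int  irq_count;               /* atomic_t irq_count       */
};

/* Mirrors ec_switch_to_poll_mode(): stop trusting the GPE and make
 * ec_schedule_ec_poll() keep re-arming the delayed work. */
static void switch_to_poll_mode(struct ec_model *ec)
{
    ec->no_gpe = true;
    ec->gpe_mode = false;
    ec->reschedule_poll = true;
}

/* Models the storm check at the top of acpi_ec_gpe_handler(); in the
 * driver, irq_count is zeroed per transaction by acpi_ec_wait(). */
static void gpe_handler(struct ec_model *ec)
{
    if (++ec->irq_count > GPE_STORM_THRESHOLD) {
        switch_to_poll_mode(ec);
        return;
    }
    /* normal GPE-mode status/query processing would happen here */
}
```

Suspend/resume in the diff uses the same `EC_FLAGS_NO_GPE` flag to park the GPE across sleep states, so one mechanism covers both the storm and the power-transition cases.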
+1 -1
drivers/acpi/events/evevent.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -5
drivers/acpi/events/evgpe.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 247 247 acpi_status status; 248 248 249 249 ACPI_FUNCTION_TRACE(ev_disable_gpe); 250 - 251 - if (!(gpe_event_info->flags & ACPI_GPE_ENABLE_MASK)) { 252 - return_ACPI_STATUS(AE_OK); 253 - } 254 250 255 251 /* Make sure HW enable masks are updated */ 256 252
+1 -1
drivers/acpi/events/evgpeblk.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+47 -45
drivers/acpi/events/evmisc.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 49 49 #define _COMPONENT ACPI_EVENTS 50 50 ACPI_MODULE_NAME("evmisc") 51 51 52 - /* Names for Notify() values, used for debug output */ 53 - #ifdef ACPI_DEBUG_OUTPUT 54 - static const char *acpi_notify_value_names[] = { 55 - "Bus Check", 56 - "Device Check", 57 - "Device Wake", 58 - "Eject Request", 59 - "Device Check Light", 60 - "Frequency Mismatch", 61 - "Bus Mode Mismatch", 62 - "Power Fault" 63 - }; 64 - #endif 65 - 66 52 /* Pointer to FACS needed for the Global Lock */ 67 - 68 53 static struct acpi_table_facs *facs = NULL; 69 54 70 55 /* Local prototypes */ ··· 79 94 switch (node->type) { 80 95 case ACPI_TYPE_DEVICE: 81 96 case ACPI_TYPE_PROCESSOR: 82 - case ACPI_TYPE_POWER: 83 97 case ACPI_TYPE_THERMAL: 84 98 /* 85 99 * These are the ONLY objects that can receive ACPI notifications ··· 123 139 * initiate soft-off or sleep operation? 
124 140 */ 125 141 ACPI_DEBUG_PRINT((ACPI_DB_INFO, 126 - "Dispatching Notify(%X) on node %p\n", notify_value, 127 - node)); 128 - 129 - if (notify_value <= 7) { 130 - ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Notify value: %s\n", 131 - acpi_notify_value_names[notify_value])); 132 - } else { 133 - ACPI_DEBUG_PRINT((ACPI_DB_INFO, 134 - "Notify value: 0x%2.2X **Device Specific**\n", 135 - notify_value)); 136 - } 142 + "Dispatching Notify on [%4.4s] Node %p Value 0x%2.2X (%s)\n", 143 + acpi_ut_get_node_name(node), node, notify_value, 144 + acpi_ut_get_notify_name(notify_value))); 137 145 138 146 /* Get the notify object attached to the NS Node */ 139 147 ··· 135 159 /* We have the notify object, Get the right handler */ 136 160 137 161 switch (node->type) { 162 + 163 + /* Notify allowed only on these types */ 164 + 138 165 case ACPI_TYPE_DEVICE: 139 166 case ACPI_TYPE_THERMAL: 140 167 case ACPI_TYPE_PROCESSOR: 141 - case ACPI_TYPE_POWER: 142 168 143 169 if (notify_value <= ACPI_MAX_SYS_NOTIFY) { 144 170 handler_obj = ··· 157 179 } 158 180 } 159 181 160 - /* If there is any handler to run, schedule the dispatcher */ 161 - 182 + /* 183 + * If there is any handler to run, schedule the dispatcher. 
184 + * Check for: 185 + * 1) Global system notify handler 186 + * 2) Global device notify handler 187 + * 3) Per-device notify handler 188 + */ 162 189 if ((acpi_gbl_system_notify.handler 163 190 && (notify_value <= ACPI_MAX_SYS_NOTIFY)) 164 191 || (acpi_gbl_device_notify.handler ··· 171 188 notify_info = acpi_ut_create_generic_state(); 172 189 if (!notify_info) { 173 190 return (AE_NO_MEMORY); 191 + } 192 + 193 + if (!handler_obj) { 194 + ACPI_DEBUG_PRINT((ACPI_DB_INFO, 195 + "Executing system notify handler for Notify (%4.4s, %X) node %p\n", 196 + acpi_ut_get_node_name(node), 197 + notify_value, node)); 174 198 } 175 199 176 200 notify_info->common.descriptor_type = ··· 192 202 if (ACPI_FAILURE(status)) { 193 203 acpi_ut_delete_generic_state(notify_info); 194 204 } 195 - } 196 - 197 - if (!handler_obj) { 205 + } else { 198 206 /* 199 - * There is no per-device notify handler for this device. 200 - * This may or may not be a problem. 207 + * There is no notify handler (per-device or system) for this device. 201 208 */ 202 209 ACPI_DEBUG_PRINT((ACPI_DB_INFO, 203 - "No notify handler for Notify(%4.4s, %X) node %p\n", 210 + "No notify handler for Notify (%4.4s, %X) node %p\n", 204 211 acpi_ut_get_node_name(node), notify_value, 205 212 node)); 206 213 } ··· 336 349 337 350 ACPI_FUNCTION_TRACE(ev_init_global_lock_handler); 338 351 339 - status = 340 - acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS, 341 - (struct acpi_table_header **)&facs); 352 + status = acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS, 353 + ACPI_CAST_INDIRECT_PTR(struct 354 + acpi_table_header, 355 + &facs)); 342 356 if (ACPI_FAILURE(status)) { 343 357 return_ACPI_STATUS(status); 344 358 } ··· 427 439 * Only one thread can acquire the GL at a time, the global_lock_mutex 428 440 * enforces this. This interface releases the interpreter if we must wait. 
429 441 */ 430 - status = acpi_ex_system_wait_mutex(acpi_gbl_global_lock_mutex, 0); 442 + status = acpi_ex_system_wait_mutex( 443 + acpi_gbl_global_lock_mutex->mutex.os_mutex, 0); 431 444 if (status == AE_TIME) { 432 445 if (acpi_ev_global_lock_thread_id == acpi_os_get_thread_id()) { 433 446 acpi_ev_global_lock_acquired++; ··· 437 448 } 438 449 439 450 if (ACPI_FAILURE(status)) { 440 - status = 441 - acpi_ex_system_wait_mutex(acpi_gbl_global_lock_mutex, 442 - timeout); 451 + status = acpi_ex_system_wait_mutex( 452 + acpi_gbl_global_lock_mutex->mutex.os_mutex, 453 + timeout); 443 454 } 444 455 if (ACPI_FAILURE(status)) { 445 456 return_ACPI_STATUS(status); ··· 447 458 448 459 acpi_ev_global_lock_thread_id = acpi_os_get_thread_id(); 449 460 acpi_ev_global_lock_acquired++; 461 + 462 + /* 463 + * Update the global lock handle and check for wraparound. The handle is 464 + * only used for the external global lock interfaces, but it is updated 465 + * here to properly handle the case where a single thread may acquire the 466 + * lock via both the AML and the acpi_acquire_global_lock interfaces. The 467 + * handle is therefore updated on the first acquire from a given thread 468 + * regardless of where the acquisition request originated. 469 + */ 470 + acpi_gbl_global_lock_handle++; 471 + if (acpi_gbl_global_lock_handle == 0) { 472 + acpi_gbl_global_lock_handle = 1; 473 + } 450 474 451 475 /* 452 476 * Make sure that a global lock actually exists. If not, just treat ··· 557 555 /* Release the local GL mutex */ 558 556 acpi_ev_global_lock_thread_id = NULL; 559 557 acpi_ev_global_lock_acquired = 0; 560 - acpi_os_release_mutex(acpi_gbl_global_lock_mutex); 558 + acpi_os_release_mutex(acpi_gbl_global_lock_mutex->mutex.os_mutex); 561 559 return_ACPI_STATUS(status); 562 560 } 563 561
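The evmisc.c hunk above adds a handle counter to acpi_ev_acquire_global_lock: it is bumped on the first acquire by a given thread and skips 0 on wraparound, because 0 would read as "no handle" to the release path. A minimal sketch of that counter logic, with an illustrative stand-in name for acpi_gbl_global_lock_handle (this is not the kernel code itself):

```c
#include <limits.h>

/* Illustrative sketch of the global-lock handle update added above:
 * the counter is incremented on first acquire and skips 0 when it
 * wraps past UINT_MAX, since 0 is never a valid handle. The name
 * lock_handle is a hypothetical stand-in, not an ACPICA symbol. */
static unsigned int lock_handle;

static unsigned int next_lock_handle(void)
{
	lock_handle++;
	if (lock_handle == 0) {		/* wrapped past UINT_MAX */
		lock_handle = 1;	/* 0 means "no handle" */
	}
	return lock_handle;
}
```

The same handle is then returned to every nested acquire by the owning thread, which is what lets acpi_release_global_lock reject a stale or zero handle.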
+2 -2
drivers/acpi/events/evregion.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 394 394 ACPI_DEBUG_PRINT((ACPI_DB_OPREGION, 395 395 "Handler %p (@%p) Address %8.8X%8.8X [%s]\n", 396 396 &region_obj->region.handler->address_space, handler, 397 - ACPI_FORMAT_UINT64(address), 397 + ACPI_FORMAT_NATIVE_UINT(address), 398 398 acpi_ut_get_region_name(region_obj->region. 399 399 space_id))); 400 400
+1 -1
drivers/acpi/events/evrgnini.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/events/evsci.c
··· 6 6 ******************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+17 -6
drivers/acpi/events/evxface.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 758 758 * 759 759 * DESCRIPTION: Acquire the ACPI Global Lock 760 760 * 761 + * Note: Allows callers with the same thread ID to acquire the global lock 762 + * multiple times. In other words, externally, the behavior of the global lock 763 + * is identical to an AML mutex. On the first acquire, a new handle is 764 + * returned. On any subsequent calls to acquire by the same thread, the same 765 + * handle is returned. 766 + * 761 767 ******************************************************************************/ 762 768 acpi_status acpi_acquire_global_lock(u16 timeout, u32 * handle) 763 769 { ··· 776 770 /* Must lock interpreter to prevent race conditions */ 777 771 778 772 acpi_ex_enter_interpreter(); 779 - status = acpi_ev_acquire_global_lock(timeout); 780 - acpi_ex_exit_interpreter(); 773 + 774 + status = acpi_ex_acquire_mutex_object(timeout, 775 + acpi_gbl_global_lock_mutex, 776 + acpi_os_get_thread_id()); 781 777 782 778 if (ACPI_SUCCESS(status)) { 783 - acpi_gbl_global_lock_handle++; 779 + 780 + /* Return the global lock handle (updated in acpi_ev_acquire_global_lock) */ 781 + 784 782 *handle = acpi_gbl_global_lock_handle; 785 783 } 786 784 785 + acpi_ex_exit_interpreter(); 787 786 return (status); 788 787 } 789 788 ··· 809 798 { 810 799 acpi_status status; 811 800 812 - if (handle != acpi_gbl_global_lock_handle) { 801 + if (!handle || (handle != acpi_gbl_global_lock_handle)) { 813 802 return (AE_NOT_ACQUIRED); 814 803 } 815 804 816 - status = acpi_ev_release_global_lock(); 805 + status = acpi_ex_release_mutex_object(acpi_gbl_global_lock_mutex); 817 806 return (status); 818 807 } 819 808
+1 -1
drivers/acpi/events/evxfevnt.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/events/evxfregn.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+85 -20
drivers/acpi/executer/exconfig.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 45 45 #include <acpi/acinterp.h> 46 46 #include <acpi/amlcode.h> 47 47 #include <acpi/acnamesp.h> 48 - #include <acpi/acevents.h> 49 48 #include <acpi/actables.h> 50 49 #include <acpi/acdispat.h> 51 50 ··· 137 138 138 139 ACPI_FUNCTION_TRACE(ex_load_table_op); 139 140 141 + /* Validate lengths for the signature_string, OEMIDString, OEMtable_iD */ 142 + 143 + if ((operand[0]->string.length > ACPI_NAME_SIZE) || 144 + (operand[1]->string.length > ACPI_OEM_ID_SIZE) || 145 + (operand[2]->string.length > ACPI_OEM_TABLE_ID_SIZE)) { 146 + return_ACPI_STATUS(AE_BAD_PARAMETER); 147 + } 148 + 140 149 /* Find the ACPI table in the RSDT/XSDT */ 141 150 142 151 status = acpi_tb_find_table(operand[0]->string.pointer, ··· 236 229 status = acpi_get_table_by_index(table_index, &table); 237 230 if (ACPI_SUCCESS(status)) { 238 231 ACPI_INFO((AE_INFO, 239 - "Dynamic OEM Table Load - [%4.4s] OemId [%6.6s] OemTableId [%8.8s]", 232 + "Dynamic OEM Table Load - [%.4s] OemId [%.6s] OemTableId [%.8s]", 240 233 table->signature, table->oem_id, 241 234 table->oem_table_id)); 235 + } 236 + 237 + /* Invoke table handler if present */ 238 + 239 + if (acpi_gbl_table_handler) { 240 + (void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_LOAD, table, 241 + acpi_gbl_table_handler_context); 242 242 } 243 243 244 244 *return_desc = ddb_handle; ··· 282 268 struct acpi_table_desc table_desc; 283 269 acpi_native_uint table_index; 284 270 acpi_status status; 271 + u32 length; 285 272 286 273 ACPI_FUNCTION_TRACE(ex_load_op); 287 274 ··· 293 278 switch (ACPI_GET_OBJECT_TYPE(obj_desc)) { 294 279 case ACPI_TYPE_REGION: 295 280 281 + ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Load from Region %p %s\n", 282 + obj_desc, 283 + 
acpi_ut_get_object_type_name(obj_desc))); 284 + 296 285 /* Region must be system_memory (from ACPI spec) */ 297 286 298 287 if (obj_desc->region.space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY) { 299 288 return_ACPI_STATUS(AE_AML_OPERAND_TYPE); 300 289 } 301 - 302 - ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Load from Region %p %s\n", 303 - obj_desc, 304 - acpi_ut_get_object_type_name(obj_desc))); 305 290 306 291 /* 307 292 * If the Region Address and Length have not been previously evaluated, ··· 314 299 } 315 300 } 316 301 302 + /* 303 + * We will simply map the memory region for the table. However, the 304 + * memory region is technically not guaranteed to remain stable and 305 + * we may eventually have to copy the table to a local buffer. 306 + */ 317 307 table_desc.address = obj_desc->region.address; 318 308 table_desc.length = obj_desc->region.length; 319 309 table_desc.flags = ACPI_TABLE_ORIGIN_MAPPED; ··· 326 306 327 307 case ACPI_TYPE_BUFFER: /* Buffer or resolved region_field */ 328 308 329 - /* Simply extract the buffer from the buffer object */ 330 - 331 309 ACPI_DEBUG_PRINT((ACPI_DB_EXEC, 332 310 "Load from Buffer or Field %p %s\n", obj_desc, 333 311 acpi_ut_get_object_type_name(obj_desc))); 334 312 335 - table_desc.pointer = ACPI_CAST_PTR(struct acpi_table_header, 336 - obj_desc->buffer.pointer); 337 - table_desc.length = table_desc.pointer->length; 338 - table_desc.flags = ACPI_TABLE_ORIGIN_ALLOCATED; 313 + length = obj_desc->buffer.length; 339 314 340 - obj_desc->buffer.pointer = NULL; 315 + /* Must have at least an ACPI table header */ 316 + 317 + if (length < sizeof(struct acpi_table_header)) { 318 + return_ACPI_STATUS(AE_INVALID_TABLE_LENGTH); 319 + } 320 + 321 + /* Validate checksum here. 
It won't get validated in tb_add_table */ 322 + 323 + status = 324 + acpi_tb_verify_checksum(ACPI_CAST_PTR 325 + (struct acpi_table_header, 326 + obj_desc->buffer.pointer), length); 327 + if (ACPI_FAILURE(status)) { 328 + return_ACPI_STATUS(status); 329 + } 330 + 331 + /* 332 + * We need to copy the buffer since the original buffer could be 333 + * changed or deleted in the future 334 + */ 335 + table_desc.pointer = ACPI_ALLOCATE(length); 336 + if (!table_desc.pointer) { 337 + return_ACPI_STATUS(AE_NO_MEMORY); 338 + } 339 + 340 + ACPI_MEMCPY(table_desc.pointer, obj_desc->buffer.pointer, 341 + length); 342 + table_desc.length = length; 343 + table_desc.flags = ACPI_TABLE_ORIGIN_ALLOCATED; 341 344 break; 342 345 343 346 default: ··· 376 333 } 377 334 378 335 status = 379 - acpi_ex_add_table(table_index, acpi_gbl_root_node, &ddb_handle); 336 + acpi_ex_add_table(table_index, walk_state->scope_info->scope.node, 337 + &ddb_handle); 380 338 if (ACPI_FAILURE(status)) { 381 339 382 340 /* On error, table_ptr was deallocated above */ ··· 393 349 394 350 /* table_ptr was deallocated above */ 395 351 352 + acpi_ut_remove_reference(ddb_handle); 396 353 return_ACPI_STATUS(status); 354 + } 355 + 356 + /* Invoke table handler if present */ 357 + 358 + if (acpi_gbl_table_handler) { 359 + (void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_LOAD, 360 + table_desc.pointer, 361 + acpi_gbl_table_handler_context); 397 362 } 398 363 399 364 cleanup: 400 365 if (ACPI_FAILURE(status)) { 366 + 367 + /* Delete allocated buffer or mapping */ 368 + 401 369 acpi_tb_delete_table(&table_desc); 402 370 } 403 371 return_ACPI_STATUS(status); ··· 432 376 acpi_status status = AE_OK; 433 377 union acpi_operand_object *table_desc = ddb_handle; 434 378 acpi_native_uint table_index; 379 + struct acpi_table_header *table; 435 380 436 381 ACPI_FUNCTION_TRACE(ex_unload_table); 437 382 ··· 452 395 453 396 table_index = (acpi_native_uint) table_desc->reference.object; 454 397 398 + /* Invoke table handler if present 
*/ 399 + 400 + if (acpi_gbl_table_handler) { 401 + status = acpi_get_table_by_index(table_index, &table); 402 + if (ACPI_SUCCESS(status)) { 403 + (void)acpi_gbl_table_handler(ACPI_TABLE_EVENT_UNLOAD, 404 + table, 405 + acpi_gbl_table_handler_context); 406 + } 407 + } 408 + 455 409 /* 456 410 * Delete the entire namespace under this table Node 457 411 * (Offset contains the table_id) 458 412 */ 459 413 acpi_tb_delete_namespace_by_owner(table_index); 460 - acpi_tb_release_owner_id(table_index); 414 + (void)acpi_tb_release_owner_id(table_index); 461 415 462 416 acpi_tb_set_table_loaded_flag(table_index, FALSE); 463 417 464 - /* Delete the table descriptor (ddb_handle) */ 465 - 466 - acpi_ut_remove_reference(table_desc); 467 - return_ACPI_STATUS(status); 418 + return_ACPI_STATUS(AE_OK); 468 419 }
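The new Load()-from-buffer path in exconfig.c above validates in a fixed order: reject buffers shorter than an ACPI table header, verify the checksum up front (the comment notes tb_add_table won't validate it later), then copy the buffer since the original AML buffer "could be changed or deleted in the future". A hedged sketch of the checksum rule itself, which requires every byte of the table, header included, to sum to zero modulo 256; the function name is illustrative, not an ACPICA symbol:

```c
#include <stddef.h>

/* Sketch of the ACPI table checksum rule relied on by the new Load()
 * path: the byte sum of the whole table must be 0 mod 256. The name
 * table_checksum_ok is illustrative; ACPICA's actual entry point in
 * this diff is acpi_tb_verify_checksum. */
static int table_checksum_ok(const unsigned char *buf, size_t len)
{
	unsigned char sum = 0;
	size_t i;

	for (i = 0; i < len; i++)
		sum += buf[i];	/* unsigned char arithmetic wraps mod 256 */
	return sum == 0;
}
```

Verifying before copying means a corrupt buffer is rejected without ever allocating the ACPI_TABLE_ORIGIN_ALLOCATED copy.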
+1 -1
drivers/acpi/executer/exconvrt.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+16 -101
drivers/acpi/executer/excreate.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 96 96 * to the original Node. 97 97 */ 98 98 switch (target_node->type) { 99 + 100 + /* For these types, the sub-object can change dynamically via a Store */ 101 + 99 102 case ACPI_TYPE_INTEGER: 100 103 case ACPI_TYPE_STRING: 101 104 case ACPI_TYPE_BUFFER: ··· 106 103 case ACPI_TYPE_BUFFER_FIELD: 107 104 108 105 /* 106 + * These types open a new scope, so we need the NS node in order to access 107 + * any children. 108 + */ 109 + case ACPI_TYPE_DEVICE: 110 + case ACPI_TYPE_POWER: 111 + case ACPI_TYPE_PROCESSOR: 112 + case ACPI_TYPE_THERMAL: 113 + case ACPI_TYPE_LOCAL_SCOPE: 114 + 115 + /* 109 116 * The new alias has the type ALIAS and points to the original 110 - * NS node, not the object itself. This is because for these 111 - * types, the object can change dynamically via a Store. 117 + * NS node, not the object itself. 112 118 */ 113 119 alias_node->type = ACPI_TYPE_LOCAL_ALIAS; 114 120 alias_node->object = ··· 127 115 case ACPI_TYPE_METHOD: 128 116 129 117 /* 130 - * The new alias has the type ALIAS and points to the original 131 - * NS node, not the object itself. This is because for these 132 - * types, the object can change dynamically via a Store. 
118 + * Control method aliases need to be differentiated 133 119 */ 134 120 alias_node->type = ACPI_TYPE_LOCAL_METHOD_ALIAS; 135 121 alias_node->object = ··· 341 331 /* Install the new region object in the parent Node */ 342 332 343 333 status = acpi_ns_attach_object(node, obj_desc, ACPI_TYPE_REGION); 344 - 345 - cleanup: 346 - 347 - /* Remove local reference to the object */ 348 - 349 - acpi_ut_remove_reference(obj_desc); 350 - return_ACPI_STATUS(status); 351 - } 352 - 353 - /******************************************************************************* 354 - * 355 - * FUNCTION: acpi_ex_create_table_region 356 - * 357 - * PARAMETERS: walk_state - Current state 358 - * 359 - * RETURN: Status 360 - * 361 - * DESCRIPTION: Create a new data_table_region object 362 - * 363 - ******************************************************************************/ 364 - 365 - acpi_status acpi_ex_create_table_region(struct acpi_walk_state *walk_state) 366 - { 367 - acpi_status status; 368 - union acpi_operand_object **operand = &walk_state->operands[0]; 369 - union acpi_operand_object *obj_desc; 370 - struct acpi_namespace_node *node; 371 - union acpi_operand_object *region_obj2; 372 - acpi_native_uint table_index; 373 - struct acpi_table_header *table; 374 - 375 - ACPI_FUNCTION_TRACE(ex_create_table_region); 376 - 377 - /* Get the Node from the object stack */ 378 - 379 - node = walk_state->op->common.node; 380 - 381 - /* 382 - * If the region object is already attached to this node, 383 - * just return 384 - */ 385 - if (acpi_ns_get_attached_object(node)) { 386 - return_ACPI_STATUS(AE_OK); 387 - } 388 - 389 - /* Find the ACPI table */ 390 - 391 - status = acpi_tb_find_table(operand[1]->string.pointer, 392 - operand[2]->string.pointer, 393 - operand[3]->string.pointer, &table_index); 394 - if (ACPI_FAILURE(status)) { 395 - return_ACPI_STATUS(status); 396 - } 397 - 398 - /* Create the region descriptor */ 399 - 400 - obj_desc = acpi_ut_create_internal_object(ACPI_TYPE_REGION); 
401 - if (!obj_desc) { 402 - return_ACPI_STATUS(AE_NO_MEMORY); 403 - } 404 - 405 - region_obj2 = obj_desc->common.next_object; 406 - region_obj2->extra.region_context = NULL; 407 - 408 - status = acpi_get_table_by_index(table_index, &table); 409 - if (ACPI_FAILURE(status)) { 410 - return_ACPI_STATUS(status); 411 - } 412 - 413 - /* Init the region from the operands */ 414 - 415 - obj_desc->region.space_id = REGION_DATA_TABLE; 416 - obj_desc->region.address = 417 - (acpi_physical_address) ACPI_TO_INTEGER(table); 418 - obj_desc->region.length = table->length; 419 - obj_desc->region.node = node; 420 - obj_desc->region.flags = AOPOBJ_DATA_VALID; 421 - 422 - /* Install the new region object in the parent Node */ 423 - 424 - status = acpi_ns_attach_object(node, obj_desc, ACPI_TYPE_REGION); 425 - if (ACPI_FAILURE(status)) { 426 - goto cleanup; 427 - } 428 - 429 - status = acpi_ev_initialize_region(obj_desc, FALSE); 430 - if (ACPI_FAILURE(status)) { 431 - if (status == AE_NOT_EXIST) { 432 - status = AE_OK; 433 - } else { 434 - goto cleanup; 435 - } 436 - } 437 - 438 - obj_desc->region.flags |= AOPOBJ_SETUP_COMPLETE; 439 334 440 335 cleanup: 441 336
+50 -21
drivers/acpi/executer/exdump.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 500 500 acpi_os_printf("Reference: Debug\n"); 501 501 break; 502 502 503 - case AML_NAME_OP: 504 - 505 - ACPI_DUMP_PATHNAME(obj_desc->reference.object, 506 - "Reference: Name: ", ACPI_LV_INFO, 507 - _COMPONENT); 508 - ACPI_DUMP_ENTRY(obj_desc->reference.object, 509 - ACPI_LV_INFO); 510 - break; 511 - 512 503 case AML_INDEX_OP: 513 504 514 505 acpi_os_printf("Reference: Index %p\n", 515 506 obj_desc->reference.object); 516 507 break; 517 508 509 + case AML_LOAD_OP: 510 + 511 + acpi_os_printf("Reference: [DdbHandle] TableIndex %p\n", 512 + obj_desc->reference.object); 513 + break; 514 + 518 515 case AML_REF_OF_OP: 519 516 520 - acpi_os_printf("Reference: (RefOf) %p\n", 521 - obj_desc->reference.object); 517 + acpi_os_printf("Reference: (RefOf) %p [%s]\n", 518 + obj_desc->reference.object, 519 + acpi_ut_get_type_name(((union 520 + acpi_operand_object 521 + *)obj_desc-> 522 + reference. 523 + object)->common. 524 + type)); 522 525 break; 523 526 524 527 case AML_ARG_OP: ··· 562 559 563 560 case AML_INT_NAMEPATH_OP: 564 561 565 - acpi_os_printf("Reference.Node->Name %X\n", 566 - obj_desc->reference.node->name.integer); 562 + acpi_os_printf("Reference: Namepath %X [%4.4s]\n", 563 + obj_desc->reference.node->name.integer, 564 + obj_desc->reference.node->name.ascii); 567 565 break; 568 566 569 567 default: ··· 644 640 acpi_os_printf("\n"); 645 641 } else { 646 642 acpi_os_printf(" base %8.8X%8.8X Length %X\n", 647 - ACPI_FORMAT_UINT64(obj_desc->region. 648 - address), 643 + ACPI_FORMAT_NATIVE_UINT(obj_desc->region. 
644 + address), 649 645 obj_desc->region.length); 650 646 } 651 647 break; ··· 881 877 ret_buf.length = ACPI_ALLOCATE_LOCAL_BUFFER; 882 878 883 879 if (obj_desc->reference.opcode == AML_INT_NAMEPATH_OP) { 884 - acpi_os_printf("Named Object %p ", obj_desc->reference.node); 880 + acpi_os_printf(" Named Object %p ", obj_desc->reference.node); 885 881 886 882 status = 887 883 acpi_ns_handle_to_pathname(obj_desc->reference.node, 888 884 &ret_buf); 889 885 if (ACPI_FAILURE(status)) { 890 - acpi_os_printf("Could not convert name to pathname\n"); 886 + acpi_os_printf(" Could not convert name to pathname\n"); 891 887 } else { 892 888 acpi_os_printf("%s\n", (char *)ret_buf.pointer); 893 889 ACPI_FREE(ret_buf.pointer); 894 890 } 895 891 } else if (obj_desc->reference.object) { 896 - acpi_os_printf("\nReferenced Object: %p\n", 897 - obj_desc->reference.object); 892 + if (ACPI_GET_DESCRIPTOR_TYPE(obj_desc) == 893 + ACPI_DESC_TYPE_OPERAND) { 894 + acpi_os_printf(" Target: %p", 895 + obj_desc->reference.object); 896 + if (obj_desc->reference.opcode == AML_LOAD_OP) { 897 + /* 898 + * For DDBHandle reference, 899 + * obj_desc->Reference.Object is the table index 900 + */ 901 + acpi_os_printf(" [DDBHandle]\n"); 902 + } else { 903 + acpi_os_printf(" [%s]\n", 904 + acpi_ut_get_type_name(((union 905 + acpi_operand_object 906 + *) 907 + obj_desc-> 908 + reference. 909 + object)-> 910 + common. 911 + type)); 912 + } 913 + } else { 914 + acpi_os_printf(" Target: %p\n", 915 + obj_desc->reference.object); 916 + } 898 917 } 899 918 } 900 919 ··· 1003 976 1004 977 case ACPI_TYPE_LOCAL_REFERENCE: 1005 978 1006 - acpi_os_printf("[Object Reference] "); 979 + acpi_os_printf("[Object Reference] %s", 980 + (acpi_ps_get_opcode_info 981 + (obj_desc->reference.opcode))->name); 1007 982 acpi_ex_dump_reference_obj(obj_desc); 1008 983 break; 1009 984
+9 -54
drivers/acpi/executer/exfield.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 71 71 union acpi_operand_object *buffer_desc; 72 72 acpi_size length; 73 73 void *buffer; 74 - u8 locked; 75 74 76 75 ACPI_FUNCTION_TRACE_PTR(ex_read_data_from_field, obj_desc); 77 76 ··· 110 111 111 112 /* Lock entire transaction if requested */ 112 113 113 - locked = 114 - acpi_ex_acquire_global_lock(obj_desc->common_field. 115 - field_flags); 114 + acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); 116 115 117 116 /* 118 117 * Perform the read. ··· 122 125 buffer.pointer), 123 126 ACPI_READ | (obj_desc->field. 124 127 attribute << 16)); 125 - acpi_ex_release_global_lock(locked); 128 + acpi_ex_release_global_lock(obj_desc->common_field.field_flags); 126 129 goto exit; 127 130 } 128 131 ··· 172 175 173 176 /* Lock entire transaction if requested */ 174 177 175 - locked = 176 - acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); 178 + acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); 177 179 178 180 /* Read from the field */ 179 181 180 182 status = acpi_ex_extract_from_field(obj_desc, buffer, (u32) length); 181 - acpi_ex_release_global_lock(locked); 183 + acpi_ex_release_global_lock(obj_desc->common_field.field_flags); 182 184 183 185 exit: 184 186 if (ACPI_FAILURE(status)) { ··· 210 214 { 211 215 acpi_status status; 212 216 u32 length; 213 - u32 required_length; 214 217 void *buffer; 215 - void *new_buffer; 216 - u8 locked; 217 218 union acpi_operand_object *buffer_desc; 218 219 219 220 ACPI_FUNCTION_TRACE_PTR(ex_write_data_to_field, obj_desc); ··· 271 278 272 279 /* Lock entire transaction if requested */ 273 280 274 - locked = 275 - acpi_ex_acquire_global_lock(obj_desc->common_field. 
276 - field_flags); 281 + acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); 277 282 278 283 /* 279 284 * Perform the write (returns status and perhaps data in the ··· 282 291 (acpi_integer *) buffer, 283 292 ACPI_WRITE | (obj_desc->field. 284 293 attribute << 16)); 285 - acpi_ex_release_global_lock(locked); 294 + acpi_ex_release_global_lock(obj_desc->common_field.field_flags); 286 295 287 296 *result_desc = buffer_desc; 288 297 return_ACPI_STATUS(status); ··· 310 319 return_ACPI_STATUS(AE_AML_OPERAND_TYPE); 311 320 } 312 321 313 - /* 314 - * We must have a buffer that is at least as long as the field 315 - * we are writing to. This is because individual fields are 316 - * indivisible and partial writes are not supported -- as per 317 - * the ACPI specification. 318 - */ 319 - new_buffer = NULL; 320 - required_length = 321 - ACPI_ROUND_BITS_UP_TO_BYTES(obj_desc->common_field.bit_length); 322 - 323 - if (length < required_length) { 324 - 325 - /* We need to create a new buffer */ 326 - 327 - new_buffer = ACPI_ALLOCATE_ZEROED(required_length); 328 - if (!new_buffer) { 329 - return_ACPI_STATUS(AE_NO_MEMORY); 330 - } 331 - 332 - /* 333 - * Copy the original data to the new buffer, starting 334 - * at Byte zero. All unused (upper) bytes of the 335 - * buffer will be 0. 
336 - */ 337 - ACPI_MEMCPY((char *)new_buffer, (char *)buffer, length); 338 - buffer = new_buffer; 339 - length = required_length; 340 - } 341 - 342 322 ACPI_DEBUG_PRINT((ACPI_DB_BFIELD, 343 323 "FieldWrite [FROM]: Obj %p (%s:%X), Buf %p, ByteLen %X\n", 344 324 source_desc, ··· 328 366 329 367 /* Lock entire transaction if requested */ 330 368 331 - locked = 332 - acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); 369 + acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); 333 370 334 371 /* Write to the field */ 335 372 336 373 status = acpi_ex_insert_into_field(obj_desc, buffer, length); 337 - acpi_ex_release_global_lock(locked); 338 - 339 - /* Free temporary buffer if we used one */ 340 - 341 - if (new_buffer) { 342 - ACPI_FREE(new_buffer); 343 - } 374 + acpi_ex_release_global_lock(obj_desc->common_field.field_flags); 344 375 345 376 return_ACPI_STATUS(status); 346 377 }
+37 -9
drivers/acpi/executer/exfldio.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 263 263 rgn_desc->region.space_id, 264 264 obj_desc->common_field.access_byte_width, 265 265 obj_desc->common_field.base_byte_offset, 266 - field_datum_byte_offset, (void *)address)); 266 + field_datum_byte_offset, ACPI_CAST_PTR(void, 267 + address))); 267 268 268 269 /* Invoke the appropriate address_space/op_region handler */ 269 270 ··· 806 805 u32 datum_count; 807 806 u32 field_datum_count; 808 807 u32 i; 808 + u32 required_length; 809 + void *new_buffer; 809 810 810 811 ACPI_FUNCTION_TRACE(ex_insert_into_field); 811 812 812 813 /* Validate input buffer */ 813 814 814 - if (buffer_length < 815 - ACPI_ROUND_BITS_UP_TO_BYTES(obj_desc->common_field.bit_length)) { 816 - ACPI_ERROR((AE_INFO, 817 - "Field size %X (bits) is too large for buffer (%X)", 818 - obj_desc->common_field.bit_length, buffer_length)); 815 + new_buffer = NULL; 816 + required_length = 817 + ACPI_ROUND_BITS_UP_TO_BYTES(obj_desc->common_field.bit_length); 818 + /* 819 + * We must have a buffer that is at least as long as the field 820 + * we are writing to. This is because individual fields are 821 + * indivisible and partial writes are not supported -- as per 822 + * the ACPI specification. 823 + */ 824 + if (buffer_length < required_length) { 819 825 820 - return_ACPI_STATUS(AE_BUFFER_OVERFLOW); 826 + /* We need to create a new buffer */ 827 + 828 + new_buffer = ACPI_ALLOCATE_ZEROED(required_length); 829 + if (!new_buffer) { 830 + return_ACPI_STATUS(AE_NO_MEMORY); 831 + } 832 + 833 + /* 834 + * Copy the original data to the new buffer, starting 835 + * at Byte zero. All unused (upper) bytes of the 836 + * buffer will be 0. 
837 + */ 838 + ACPI_MEMCPY((char *)new_buffer, (char *)buffer, buffer_length); 839 + buffer = new_buffer; 840 + buffer_length = required_length; 821 841 } 822 842 823 843 /* ··· 888 866 merged_datum, 889 867 field_offset); 890 868 if (ACPI_FAILURE(status)) { 891 - return_ACPI_STATUS(status); 869 + goto exit; 892 870 } 893 871 894 872 field_offset += obj_desc->common_field.access_byte_width; ··· 946 924 mask, merged_datum, 947 925 field_offset); 948 926 927 + exit: 928 + /* Free temporary buffer if we used one */ 929 + 930 + if (new_buffer) { 931 + ACPI_FREE(new_buffer); 932 + } 949 933 return_ACPI_STATUS(status); 950 934 }
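The exfldio.c hunks above relocate the buffer-padding logic from ex_write_data_to_field into acpi_ex_insert_into_field: a source buffer shorter than the field is copied into a zeroed buffer of the field's rounded-up byte length, because the ACPI specification treats fields as indivisible and forbids partial writes. A sketch of that pad step in plain C, with a hypothetical helper name standing in for the ACPI_ALLOCATE_ZEROED + ACPI_MEMCPY sequence:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch of the pad step moved into ex_insert_into_field: copy `len`
 * source bytes into a zeroed buffer of `required` bytes (the field's
 * bit length rounded up to bytes), so all unwritten upper bytes read
 * as 0. Caller frees the result. pad_field_buffer is an illustrative
 * name, not an ACPICA symbol. */
static unsigned char *pad_field_buffer(const unsigned char *src, size_t len,
				       size_t required)
{
	unsigned char *buf;

	if (len >= required)
		return NULL;		/* buffer already long enough */

	buf = calloc(1, required);	/* zero-filled allocation */
	if (!buf)
		return NULL;
	memcpy(buf, src, len);		/* source data at byte zero */
	return buf;
}
```

Moving the padding down into the insert routine is why the hunk also grows an `exit:` label: the temporary buffer must be freed on every path, including the mid-loop failure that previously returned directly.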
+1 -1
drivers/acpi/executer/exmisc.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+171 -76
drivers/acpi/executer/exmutex.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 126 126 127 127 /******************************************************************************* 128 128 * 129 + * FUNCTION: acpi_ex_acquire_mutex_object 130 + * 131 + * PARAMETERS: time_desc - Timeout in milliseconds 132 + * obj_desc - Mutex object 133 + * Thread - Current thread state 134 + * 135 + * RETURN: Status 136 + * 137 + * DESCRIPTION: Acquire an AML mutex, low-level interface. Provides a common 138 + * path that supports multiple acquires by the same thread. 139 + * 140 + * MUTEX: Interpreter must be locked 141 + * 142 + * NOTE: This interface is called from three places: 143 + * 1) From acpi_ex_acquire_mutex, via an AML Acquire() operator 144 + * 2) From acpi_ex_acquire_global_lock when an AML Field access requires the 145 + * global lock 146 + * 3) From the external interface, acpi_acquire_global_lock 147 + * 148 + ******************************************************************************/ 149 + 150 + acpi_status 151 + acpi_ex_acquire_mutex_object(u16 timeout, 152 + union acpi_operand_object *obj_desc, 153 + acpi_thread_id thread_id) 154 + { 155 + acpi_status status; 156 + 157 + ACPI_FUNCTION_TRACE_PTR(ex_acquire_mutex_object, obj_desc); 158 + 159 + if (!obj_desc) { 160 + return_ACPI_STATUS(AE_BAD_PARAMETER); 161 + } 162 + 163 + /* Support for multiple acquires by the owning thread */ 164 + 165 + if (obj_desc->mutex.thread_id == thread_id) { 166 + /* 167 + * The mutex is already owned by this thread, just increment the 168 + * acquisition depth 169 + */ 170 + obj_desc->mutex.acquisition_depth++; 171 + return_ACPI_STATUS(AE_OK); 172 + } 173 + 174 + /* Acquire the mutex, wait if necessary. 
Special case for Global Lock */ 175 + 176 + if (obj_desc == acpi_gbl_global_lock_mutex) { 177 + status = acpi_ev_acquire_global_lock(timeout); 178 + } else { 179 + status = acpi_ex_system_wait_mutex(obj_desc->mutex.os_mutex, 180 + timeout); 181 + } 182 + 183 + if (ACPI_FAILURE(status)) { 184 + 185 + /* Includes failure from a timeout on time_desc */ 186 + 187 + return_ACPI_STATUS(status); 188 + } 189 + 190 + /* Acquired the mutex: update mutex object */ 191 + 192 + obj_desc->mutex.thread_id = thread_id; 193 + obj_desc->mutex.acquisition_depth = 1; 194 + obj_desc->mutex.original_sync_level = 0; 195 + obj_desc->mutex.owner_thread = NULL; /* Used only for AML Acquire() */ 196 + 197 + return_ACPI_STATUS(AE_OK); 198 + } 199 + 200 + /******************************************************************************* 201 + * 129 202 * FUNCTION: acpi_ex_acquire_mutex 130 203 * 131 204 * PARAMETERS: time_desc - Timeout integer ··· 224 151 return_ACPI_STATUS(AE_BAD_PARAMETER); 225 152 } 226 153 227 - /* Sanity check: we must have a valid thread ID */ 154 + /* Must have a valid thread ID */ 228 155 229 156 if (!walk_state->thread) { 230 157 ACPI_ERROR((AE_INFO, ··· 234 161 } 235 162 236 163 /* 237 - * Current Sync must be less than or equal to the sync level of the 164 + * Current sync level must be less than or equal to the sync level of the 238 165 * mutex. 
This mechanism provides some deadlock prevention 239 166 */ 240 167 if (walk_state->thread->current_sync_level > obj_desc->mutex.sync_level) { ··· 245 172 return_ACPI_STATUS(AE_AML_MUTEX_ORDER); 246 173 } 247 174 248 - /* Support for multiple acquires by the owning thread */ 175 + status = acpi_ex_acquire_mutex_object((u16) time_desc->integer.value, 176 + obj_desc, 177 + walk_state->thread->thread_id); 178 + if (ACPI_SUCCESS(status) && obj_desc->mutex.acquisition_depth == 1) { 179 + 180 + /* Save Thread object, original/current sync levels */ 181 + 182 + obj_desc->mutex.owner_thread = walk_state->thread; 183 + obj_desc->mutex.original_sync_level = 184 + walk_state->thread->current_sync_level; 185 + walk_state->thread->current_sync_level = 186 + obj_desc->mutex.sync_level; 187 + 188 + /* Link the mutex to the current thread for force-unlock at method exit */ 189 + 190 + acpi_ex_link_mutex(obj_desc, walk_state->thread); 191 + } 192 + 193 + return_ACPI_STATUS(status); 194 + } 195 + 196 + /******************************************************************************* 197 + * 198 + * FUNCTION: acpi_ex_release_mutex_object 199 + * 200 + * PARAMETERS: obj_desc - The object descriptor for this op 201 + * 202 + * RETURN: Status 203 + * 204 + * DESCRIPTION: Release a previously acquired Mutex, low level interface. 205 + * Provides a common path that supports multiple releases (after 206 + * previous multiple acquires) by the same thread. 
207 + *
208 + * MUTEX: Interpreter must be locked
209 + *
210 + * NOTE: This interface is called from three places:
211 + * 1) From acpi_ex_release_mutex, via an AML Release() operator
212 + * 2) From acpi_ex_release_global_lock when an AML Field access requires the
213 + * global lock
214 + * 3) From the external interface, acpi_release_global_lock
215 + *
216 + ******************************************************************************/
217 +
218 + acpi_status acpi_ex_release_mutex_object(union acpi_operand_object *obj_desc)
219 + {
220 + acpi_status status = AE_OK;
221 +
222 + ACPI_FUNCTION_TRACE(ex_release_mutex_object);
223 +
224 + if (obj_desc->mutex.acquisition_depth == 0) {
225 + return (AE_NOT_ACQUIRED);
226 + }
227 +
228 + /* Match multiple Acquires with multiple Releases */
229 +
230 + obj_desc->mutex.acquisition_depth--;
231 + if (obj_desc->mutex.acquisition_depth != 0) {
232 +
233 + /* Just decrement the depth and return */
234 +
235 + return_ACPI_STATUS(AE_OK);
236 + }
249 237
250 238 if (obj_desc->mutex.owner_thread) {
251 - if (obj_desc->mutex.owner_thread->thread_id ==
252 - walk_state->thread->thread_id) {
253 - /*
254 - * The mutex is already owned by this thread, just increment the
255 - * acquisition depth
256 - */
257 - obj_desc->mutex.acquisition_depth++;
258 - return_ACPI_STATUS(AE_OK);
259 - }
239 +
240 + /* Unlink the mutex from the owner's list */
241 +
242 + acpi_ex_unlink_mutex(obj_desc);
243 + obj_desc->mutex.owner_thread = NULL;
260 244 }
261 245
262 - /* Acquire the mutex, wait if necessary.
Special case for Global Lock */ 246 + /* Release the mutex, special case for Global Lock */ 263 247 264 - if (obj_desc->mutex.os_mutex == acpi_gbl_global_lock_mutex) { 265 - status = 266 - acpi_ev_acquire_global_lock((u16) time_desc->integer.value); 248 + if (obj_desc == acpi_gbl_global_lock_mutex) { 249 + status = acpi_ev_release_global_lock(); 267 250 } else { 268 - status = acpi_ex_system_wait_mutex(obj_desc->mutex.os_mutex, 269 - (u16) time_desc->integer. 270 - value); 251 + acpi_os_release_mutex(obj_desc->mutex.os_mutex); 271 252 } 272 253 273 - if (ACPI_FAILURE(status)) { 254 + /* Clear mutex info */ 274 255 275 - /* Includes failure from a timeout on time_desc */ 276 - 277 - return_ACPI_STATUS(status); 278 - } 279 - 280 - /* Have the mutex: update mutex and walk info and save the sync_level */ 281 - 282 - obj_desc->mutex.owner_thread = walk_state->thread; 283 - obj_desc->mutex.acquisition_depth = 1; 284 - obj_desc->mutex.original_sync_level = 285 - walk_state->thread->current_sync_level; 286 - 287 - walk_state->thread->current_sync_level = obj_desc->mutex.sync_level; 288 - 289 - /* Link the mutex to the current thread for force-unlock at method exit */ 290 - 291 - acpi_ex_link_mutex(obj_desc, walk_state->thread); 292 - return_ACPI_STATUS(AE_OK); 256 + obj_desc->mutex.thread_id = 0; 257 + return_ACPI_STATUS(status); 293 258 } 294 259 295 260 /******************************************************************************* ··· 364 253 return_ACPI_STATUS(AE_AML_MUTEX_NOT_ACQUIRED); 365 254 } 366 255 367 - /* Sanity check: we must have a valid thread ID */ 368 - 369 - if (!walk_state->thread) { 370 - ACPI_ERROR((AE_INFO, 371 - "Cannot release Mutex [%4.4s], null thread info", 372 - acpi_ut_get_node_name(obj_desc->mutex.node))); 373 - return_ACPI_STATUS(AE_AML_INTERNAL); 374 - } 375 - 376 256 /* 377 257 * The Mutex is owned, but this thread must be the owner. 
378 258 * Special case for Global Lock, any thread can release 379 259 */ 380 260 if ((obj_desc->mutex.owner_thread->thread_id != 381 261 walk_state->thread->thread_id) 382 - && (obj_desc->mutex.os_mutex != acpi_gbl_global_lock_mutex)) { 262 + && (obj_desc != acpi_gbl_global_lock_mutex)) { 383 263 ACPI_ERROR((AE_INFO, 384 264 "Thread %lX cannot release Mutex [%4.4s] acquired by thread %lX", 385 265 (unsigned long)walk_state->thread->thread_id, ··· 380 278 return_ACPI_STATUS(AE_AML_NOT_OWNER); 381 279 } 382 280 281 + /* Must have a valid thread ID */ 282 + 283 + if (!walk_state->thread) { 284 + ACPI_ERROR((AE_INFO, 285 + "Cannot release Mutex [%4.4s], null thread info", 286 + acpi_ut_get_node_name(obj_desc->mutex.node))); 287 + return_ACPI_STATUS(AE_AML_INTERNAL); 288 + } 289 + 383 290 /* 384 291 * The sync level of the mutex must be less than or equal to the current 385 292 * sync level 386 293 */ 387 294 if (obj_desc->mutex.sync_level > walk_state->thread->current_sync_level) { 388 295 ACPI_ERROR((AE_INFO, 389 - "Cannot release Mutex [%4.4s], incorrect SyncLevel", 390 - acpi_ut_get_node_name(obj_desc->mutex.node))); 296 + "Cannot release Mutex [%4.4s], SyncLevel mismatch: mutex %d current %d", 297 + acpi_ut_get_node_name(obj_desc->mutex.node), 298 + obj_desc->mutex.sync_level, 299 + walk_state->thread->current_sync_level)); 391 300 return_ACPI_STATUS(AE_AML_MUTEX_ORDER); 392 301 } 393 302 394 - /* Match multiple Acquires with multiple Releases */ 303 + status = acpi_ex_release_mutex_object(obj_desc); 395 304 396 - obj_desc->mutex.acquisition_depth--; 397 - if (obj_desc->mutex.acquisition_depth != 0) { 305 + if (obj_desc->mutex.acquisition_depth == 0) { 398 306 399 - /* Just decrement the depth and return */ 307 + /* Restore the original sync_level */ 400 308 401 - return_ACPI_STATUS(AE_OK); 309 + walk_state->thread->current_sync_level = 310 + obj_desc->mutex.original_sync_level; 402 311 } 403 - 404 - /* Unlink the mutex from the owner's list */ 405 - 406 - 
acpi_ex_unlink_mutex(obj_desc); 407 - 408 - /* Release the mutex, special case for Global Lock */ 409 - 410 - if (obj_desc->mutex.os_mutex == acpi_gbl_global_lock_mutex) { 411 - status = acpi_ev_release_global_lock(); 412 - } else { 413 - acpi_os_release_mutex(obj_desc->mutex.os_mutex); 414 - } 415 - 416 - /* Update the mutex and restore sync_level */ 417 - 418 - obj_desc->mutex.owner_thread = NULL; 419 - walk_state->thread->current_sync_level = 420 - obj_desc->mutex.original_sync_level; 421 - 422 312 return_ACPI_STATUS(status); 423 313 } 424 314 ··· 451 357 452 358 /* Release the mutex, special case for Global Lock */ 453 359 454 - if (obj_desc->mutex.os_mutex == acpi_gbl_global_lock_mutex) { 360 + if (obj_desc == acpi_gbl_global_lock_mutex) { 455 361 456 362 /* Ignore errors */ 457 363 ··· 463 369 /* Mark mutex unowned */ 464 370 465 371 obj_desc->mutex.owner_thread = NULL; 372 + obj_desc->mutex.thread_id = 0; 466 373 467 374 /* Update Thread sync_level (Last mutex is the important one) */ 468 375
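The exmutex.c rework above hinges on the acquisition_depth counter: the same thread may Acquire() a mutex repeatedly, and only the release that brings the depth back to zero really unlocks. A minimal user-space sketch of that bookkeeping (toy names throughout — none of these are ACPICA APIs, and a real acquire would block with a timeout instead of failing):

```c
typedef unsigned long thread_id_t;

struct toy_mutex {
    thread_id_t owner;   /* 0 means unowned */
    unsigned int depth;  /* analogue of mutex.acquisition_depth */
};

/* Returns 0 on success, -1 if held by another thread (no blocking here). */
int toy_acquire(struct toy_mutex *m, thread_id_t tid)
{
    if (m->owner == tid) {
        m->depth++;      /* owning thread: just bump the depth */
        return 0;
    }
    if (m->owner != 0)
        return -1;       /* the real code would wait with a timeout */
    m->owner = tid;
    m->depth = 1;
    return 0;
}

/* Returns 0 on success, -1 if not acquired by this thread. */
int toy_release(struct toy_mutex *m, thread_id_t tid)
{
    if (m->owner != tid || m->depth == 0)
        return -1;       /* AE_AML_NOT_OWNER / AE_NOT_ACQUIRED analogue */
    if (--m->depth == 0)
        m->owner = 0;    /* only the last matching release really unlocks */
    return 0;
}
```

The kernel code additionally saves and restores sync levels and links the mutex into a per-thread list for force-unlock at method exit; this sketch omits both.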
+1 -1
drivers/acpi/executer/exnames.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+19 -6
drivers/acpi/executer/exoparg1.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 121 121 122 122 if ((ACPI_FAILURE(status)) || walk_state->result_obj) { 123 123 acpi_ut_remove_reference(return_desc); 124 + walk_state->result_obj = NULL; 124 125 } else { 125 126 /* Save the return value */ 126 127 ··· 740 739 value = acpi_gbl_integer_byte_width; 741 740 break; 742 741 743 - case ACPI_TYPE_BUFFER: 744 - value = temp_desc->buffer.length; 745 - break; 746 - 747 742 case ACPI_TYPE_STRING: 748 743 value = temp_desc->string.length; 749 744 break; 750 745 746 + case ACPI_TYPE_BUFFER: 747 + 748 + /* Buffer arguments may not be evaluated at this point */ 749 + 750 + status = acpi_ds_get_buffer_arguments(temp_desc); 751 + value = temp_desc->buffer.length; 752 + break; 753 + 751 754 case ACPI_TYPE_PACKAGE: 755 + 756 + /* Package arguments may not be evaluated at this point */ 757 + 758 + status = acpi_ds_get_package_arguments(temp_desc); 752 759 value = temp_desc->package.count; 753 760 break; 754 761 755 762 default: 756 763 ACPI_ERROR((AE_INFO, 757 - "Operand is not Buf/Int/Str/Pkg - found type %s", 764 + "Operand must be Buffer/Integer/String/Package - found type %s", 758 765 acpi_ut_get_type_name(type))); 759 766 status = AE_AML_OPERAND_TYPE; 767 + goto cleanup; 768 + } 769 + 770 + if (ACPI_FAILURE(status)) { 760 771 goto cleanup; 761 772 } 762 773
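The exoparg1.c hunk teaches the SizeOf path to force evaluation of Buffer and Package operands (via acpi_ds_get_buffer_arguments / acpi_ds_get_package_arguments) before reading their length, since those arguments may not have been evaluated yet. A rough model of that evaluate-before-measure pattern, with hypothetical names:

```c
#include <stddef.h>

enum obj_type { OBJ_INTEGER, OBJ_STRING, OBJ_BUFFER };

struct obj {
    enum obj_type type;
    size_t length;      /* valid only once evaluated */
    int evaluated;      /* buffers may start out unevaluated */
    size_t pending_len; /* stands in for the unparsed AML arguments */
};

/* Analogue of acpi_ds_get_buffer_arguments(): evaluate on first use. */
int force_eval(struct obj *o)
{
    if (!o->evaluated) {
        o->length = o->pending_len; /* the real code parses the AML here */
        o->evaluated = 1;
    }
    return 0;
}

/* SizeOf-style dispatch; returns -1 for unsupported operand types. */
long obj_sizeof(struct obj *o)
{
    switch (o->type) {
    case OBJ_INTEGER:
        return 8;                 /* fixed integer byte width */
    case OBJ_STRING:
        return (long)o->length;   /* strings are always evaluated */
    case OBJ_BUFFER:
        if (force_eval(o))        /* evaluate first, as the patch does */
            return -1;
        return (long)o->length;
    default:
        return -1;                /* AE_AML_OPERAND_TYPE analogue */
    }
}
```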
+14 -7
drivers/acpi/executer/exoparg2.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 241 241 goto cleanup; 242 242 } 243 243 244 - /* Return the remainder */ 245 - 246 - walk_state->result_obj = return_desc1; 247 - 248 244 cleanup: 249 245 /* 250 246 * Since the remainder is not returned indirectly, remove a reference to ··· 253 257 /* Delete the return object */ 254 258 255 259 acpi_ut_remove_reference(return_desc1); 260 + } 261 + 262 + /* Save return object (the remainder) on success */ 263 + 264 + else { 265 + walk_state->result_obj = return_desc1; 256 266 } 257 267 258 268 return_ACPI_STATUS(status); ··· 492 490 493 491 if (ACPI_FAILURE(status)) { 494 492 acpi_ut_remove_reference(return_desc); 493 + walk_state->result_obj = NULL; 495 494 } 496 495 497 496 return_ACPI_STATUS(status); ··· 586 583 return_desc->integer.value = ACPI_INTEGER_MAX; 587 584 } 588 585 589 - walk_state->result_obj = return_desc; 590 - 591 586 cleanup: 592 587 593 588 /* Delete return object on error */ 594 589 595 590 if (ACPI_FAILURE(status)) { 596 591 acpi_ut_remove_reference(return_desc); 592 + } 593 + 594 + /* Save return object on success */ 595 + 596 + else { 597 + walk_state->result_obj = return_desc; 597 598 } 598 599 599 600 return_ACPI_STATUS(status);
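Several of the exoparg hunks move the walk_state->result_obj assignment out of the main path so it happens only after the operation succeeds, and the error path deletes the return object instead. That way a failure can never leave the walk state pointing at a freed object. The invariant, sketched with hypothetical names:

```c
#include <stdlib.h>
#include <stddef.h>

struct result { long remainder; };
struct walk { struct result *result_obj; };

/* Modulo-style op: on failure the return object is freed and
 * result_obj is left untouched (NULL here); on success it is saved. */
int do_divide(struct walk *w, long dividend, long divisor)
{
    struct result *r = malloc(sizeof(*r));
    int status = 0;

    if (!r)
        return -1;
    if (divisor == 0)
        status = -1;     /* AE_AML_DIVIDE_BY_ZERO analogue */
    else
        r->remainder = dividend % divisor;

    if (status)
        free(r);         /* delete the return object on error */
    else
        w->result_obj = r; /* save the return object only on success */

    return status;
}
```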
+2 -1
drivers/acpi/executer/exoparg3.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 260 260 261 261 if (ACPI_FAILURE(status) || walk_state->result_obj) { 262 262 acpi_ut_remove_reference(return_desc); 263 + walk_state->result_obj = NULL; 263 264 } 264 265 265 266 /* Set the return object and exit */
+7 -3
drivers/acpi/executer/exoparg6.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 322 322 goto cleanup; 323 323 } 324 324 325 - walk_state->result_obj = return_desc; 326 - 327 325 cleanup: 328 326 329 327 /* Delete return object on error */ 330 328 331 329 if (ACPI_FAILURE(status)) { 332 330 acpi_ut_remove_reference(return_desc); 331 + } 332 + 333 + /* Save return object on success */ 334 + 335 + else { 336 + walk_state->result_obj = return_desc; 333 337 } 334 338 335 339 return_ACPI_STATUS(status);
+16 -1
drivers/acpi/executer/exprep.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 412 412 acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info) 413 413 { 414 414 union acpi_operand_object *obj_desc; 415 + union acpi_operand_object *second_desc = NULL; 415 416 u32 type; 416 417 acpi_status status; 417 418 ··· 495 494 obj_desc->field.access_byte_width, 496 495 obj_desc->bank_field.region_obj, 497 496 obj_desc->bank_field.bank_obj)); 497 + 498 + /* 499 + * Remember location in AML stream of the field unit 500 + * opcode and operands -- since the bank_value 501 + * operands must be evaluated. 502 + */ 503 + second_desc = obj_desc->common.next_object; 504 + second_desc->extra.aml_start = 505 + ((union acpi_parse_object *)(info->data_register_node))-> 506 + named.data; 507 + second_desc->extra.aml_length = 508 + ((union acpi_parse_object *)(info->data_register_node))-> 509 + named.length; 510 + 498 511 break; 499 512 500 513 case ACPI_TYPE_LOCAL_INDEX_FIELD:
+6 -4
drivers/acpi/executer/exregion.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 160 160 if (!mem_info->mapped_logical_address) { 161 161 ACPI_ERROR((AE_INFO, 162 162 "Could not map memory at %8.8X%8.8X, size %X", 163 - ACPI_FORMAT_UINT64(address), 163 + ACPI_FORMAT_NATIVE_UINT(address), 164 164 (u32) window_size)); 165 165 mem_info->mapped_length = 0; 166 166 return_ACPI_STATUS(AE_NO_MEMORY); ··· 182 182 183 183 ACPI_DEBUG_PRINT((ACPI_DB_INFO, 184 184 "System-Memory (width %d) R/W %d Address=%8.8X%8.8X\n", 185 - bit_width, function, ACPI_FORMAT_UINT64(address))); 185 + bit_width, function, 186 + ACPI_FORMAT_NATIVE_UINT(address))); 186 187 187 188 /* 188 189 * Perform the memory read or write ··· 285 284 286 285 ACPI_DEBUG_PRINT((ACPI_DB_INFO, 287 286 "System-IO (width %d) R/W %d Address=%8.8X%8.8X\n", 288 - bit_width, function, ACPI_FORMAT_UINT64(address))); 287 + bit_width, function, 288 + ACPI_FORMAT_NATIVE_UINT(address))); 289 289 290 290 /* Decode the function parameter */ 291 291
+7 -7
drivers/acpi/executer/exresnte.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 116 116 * Several object types require no further processing: 117 117 * 1) Device/Thermal objects don't have a "real" subobject, return the Node 118 118 * 2) Method locals and arguments have a pseudo-Node 119 + * 3) 10/2007: Added method type to assist with Package construction. 119 120 */ 120 121 if ((entry_type == ACPI_TYPE_DEVICE) || 121 122 (entry_type == ACPI_TYPE_THERMAL) || 123 + (entry_type == ACPI_TYPE_METHOD) || 122 124 (node->flags & (ANOBJ_METHOD_ARG | ANOBJ_METHOD_LOCAL))) { 123 125 return_ACPI_STATUS(AE_OK); 124 126 } ··· 216 214 /* For these objects, just return the object attached to the Node */ 217 215 218 216 case ACPI_TYPE_MUTEX: 219 - case ACPI_TYPE_METHOD: 220 217 case ACPI_TYPE_POWER: 221 218 case ACPI_TYPE_PROCESSOR: 222 219 case ACPI_TYPE_EVENT: ··· 239 238 case ACPI_TYPE_LOCAL_REFERENCE: 240 239 241 240 switch (source_desc->reference.opcode) { 242 - case AML_LOAD_OP: 243 - 244 - /* This is a ddb_handle */ 245 - /* Return an additional reference to the object */ 246 - 241 + case AML_LOAD_OP: /* This is a ddb_handle */ 247 242 case AML_REF_OF_OP: 243 + case AML_INDEX_OP: 244 + 245 + /* Return an additional reference to the object */ 248 246 249 247 obj_desc = source_desc; 250 248 acpi_ut_add_reference(obj_desc);
+26 -29
drivers/acpi/executer/exresolv.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 140 140 { 141 141 acpi_status status = AE_OK; 142 142 union acpi_operand_object *stack_desc; 143 - void *temp_node; 144 143 union acpi_operand_object *obj_desc = NULL; 145 144 u16 opcode; 146 145 ··· 155 156 opcode = stack_desc->reference.opcode; 156 157 157 158 switch (opcode) { 158 - case AML_NAME_OP: 159 - 160 - /* 161 - * Convert name reference to a namespace node 162 - * Then, acpi_ex_resolve_node_to_value can be used to get the value 163 - */ 164 - temp_node = stack_desc->reference.object; 165 - 166 - /* Delete the Reference Object */ 167 - 168 - acpi_ut_remove_reference(stack_desc); 169 - 170 - /* Return the namespace node */ 171 - 172 - (*stack_ptr) = temp_node; 173 - break; 174 - 175 159 case AML_LOCAL_OP: 176 160 case AML_ARG_OP: 177 161 ··· 189 207 switch (stack_desc->reference.target_type) { 190 208 case ACPI_TYPE_BUFFER_FIELD: 191 209 192 - /* Just return - leave the Reference on the stack */ 210 + /* Just return - do not dereference */ 193 211 break; 194 212 195 213 case ACPI_TYPE_PACKAGE: 196 214 215 + /* If method call or copy_object - do not dereference */ 216 + 217 + if ((walk_state->opcode == 218 + AML_INT_METHODCALL_OP) 219 + || (walk_state->opcode == AML_COPY_OP)) { 220 + break; 221 + } 222 + 223 + /* Otherwise, dereference the package_index to a package element */ 224 + 197 225 obj_desc = *stack_desc->reference.where; 198 226 if (obj_desc) { 199 227 /* 200 - * Valid obj descriptor, copy pointer to return value 228 + * Valid object descriptor, copy pointer to return value 201 229 * (i.e., dereference the package index) 202 230 * Delete the ref object, increment the returned object 203 231 */ ··· 216 224 *stack_ptr = obj_desc; 217 225 } else { 
218 226 /* 219 - * A NULL object descriptor means an unitialized element of 227 + * A NULL object descriptor means an uninitialized element of 220 228 * the package, can't dereference it 221 229 */ 222 230 ACPI_ERROR((AE_INFO, 223 - "Attempt to deref an Index to NULL pkg element Idx=%p", 231 + "Attempt to dereference an Index to NULL package element Idx=%p", 224 232 stack_desc)); 225 233 status = AE_AML_UNINITIALIZED_ELEMENT; 226 234 } ··· 231 239 /* Invalid reference object */ 232 240 233 241 ACPI_ERROR((AE_INFO, 234 - "Unknown TargetType %X in Index/Reference obj %p", 242 + "Unknown TargetType %X in Index/Reference object %p", 235 243 stack_desc->reference.target_type, 236 244 stack_desc)); 237 245 status = AE_AML_INTERNAL; ··· 243 251 case AML_DEBUG_OP: 244 252 case AML_LOAD_OP: 245 253 246 - /* Just leave the object as-is */ 254 + /* Just leave the object as-is, do not dereference */ 247 255 248 256 break; 249 257 ··· 382 390 } 383 391 384 392 /* 385 - * For reference objects created via the ref_of or Index operators, 386 - * we need to get to the base object (as per the ACPI specification 387 - * of the object_type and size_of operators). This means traversing 388 - * the list of possibly many nested references. 393 + * For reference objects created via the ref_of, Index, or Load/load_table 394 + * operators, we need to get to the base object (as per the ACPI 395 + * specification of the object_type and size_of operators). This means 396 + * traversing the list of possibly many nested references. 389 397 */ 390 398 while (ACPI_GET_OBJECT_TYPE(obj_desc) == ACPI_TYPE_LOCAL_REFERENCE) { 391 399 switch (obj_desc->reference.opcode) { ··· 454 462 goto exit; 455 463 } 456 464 break; 465 + 466 + case AML_LOAD_OP: 467 + 468 + type = ACPI_TYPE_DDB_HANDLE; 469 + goto exit; 457 470 458 471 case AML_LOCAL_OP: 459 472 case AML_ARG_OP:
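The exresolv.c comment above notes that object_type and size_of must traverse "the list of possibly many nested references" (RefOf, Index, Load) to reach the base object. The traversal itself is a plain chain walk; a toy version with hypothetical names:

```c
#include <stddef.h>

enum node_type { T_INTEGER, T_REFERENCE };

struct node {
    enum node_type type;
    struct node *target; /* valid when type == T_REFERENCE */
    long value;          /* valid when type == T_INTEGER */
};

/* Follow the reference chain to the base object;
 * returns NULL if a link in the chain is broken. */
struct node *deref_chain(struct node *n)
{
    while (n && n->type == T_REFERENCE)
        n = n->target;   /* one hop per nesting level */
    return n;
}
```

The real code additionally dispatches per reference opcode (and now returns ACPI_TYPE_DDB_HANDLE for AML_LOAD_OP), which this sketch collapses into a single link field.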
+1 -12
drivers/acpi/executer/exresop.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 137 137 union acpi_operand_object *obj_desc; 138 138 acpi_status status = AE_OK; 139 139 u8 object_type; 140 - void *temp_node; 141 140 u32 arg_types; 142 141 const struct acpi_opcode_info *op_info; 143 142 u32 this_arg_type; ··· 238 239 239 240 /*lint -fallthrough */ 240 241 241 - case AML_NAME_OP: 242 242 case AML_INDEX_OP: 243 243 case AML_REF_OF_OP: 244 244 case AML_ARG_OP: ··· 329 331 object_type, obj_desc); 330 332 if (ACPI_FAILURE(status)) { 331 333 return_ACPI_STATUS(status); 332 - } 333 - 334 - if (obj_desc->reference.opcode == AML_NAME_OP) { 335 - 336 - /* Convert a named reference to the actual named object */ 337 - 338 - temp_node = obj_desc->reference.object; 339 - acpi_ut_remove_reference(obj_desc); 340 - (*stack_ptr) = temp_node; 341 334 } 342 335 goto next_operand; 343 336
+93 -26
drivers/acpi/executer/exstore.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 84 84 85 85 ACPI_FUNCTION_TRACE_PTR(ex_do_debug_object, source_desc); 86 86 87 - ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "[ACPI Debug] %*s", 88 - level, " ")); 87 + /* Print line header as long as we are not in the middle of an object display */ 88 + 89 + if (!((level > 0) && index == 0)) { 90 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "[ACPI Debug] %*s", 91 + level, " ")); 92 + } 89 93 90 94 /* Display index for package output only */ 91 95 ··· 99 95 } 100 96 101 97 if (!source_desc) { 102 - ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "<Null Object>\n")); 98 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "[Null Object]\n")); 103 99 return_VOID; 104 100 } 105 101 106 102 if (ACPI_GET_DESCRIPTOR_TYPE(source_desc) == ACPI_DESC_TYPE_OPERAND) { 107 - ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "%s: ", 103 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "%s ", 108 104 acpi_ut_get_object_type_name 109 105 (source_desc))); 110 106 ··· 126 122 } else { 127 123 return_VOID; 128 124 } 125 + 126 + /* source_desc is of type ACPI_DESC_TYPE_OPERAND */ 129 127 130 128 switch (ACPI_GET_OBJECT_TYPE(source_desc)) { 131 129 case ACPI_TYPE_INTEGER: ··· 153 147 (u32) source_desc->buffer.length)); 154 148 ACPI_DUMP_BUFFER(source_desc->buffer.pointer, 155 149 (source_desc->buffer.length < 156 - 32) ? source_desc->buffer.length : 32); 150 + 256) ? 
source_desc->buffer.length : 256); 157 151 break; 158 152 159 153 case ACPI_TYPE_STRING: ··· 166 160 case ACPI_TYPE_PACKAGE: 167 161 168 162 ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, 169 - "[0x%.2X Elements]\n", 163 + "[Contains 0x%.2X Elements]\n", 170 164 source_desc->package.count)); 171 165 172 166 /* Output the entire contents of the package */ ··· 186 180 (source_desc->reference.opcode), 187 181 source_desc->reference.offset)); 188 182 } else { 189 - ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "[%s]\n", 183 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "[%s]", 190 184 acpi_ps_get_opcode_name 191 185 (source_desc->reference.opcode))); 192 186 } 193 187 194 - if (source_desc->reference.object) { 188 + if (source_desc->reference.opcode == AML_LOAD_OP) { /* Load and load_table */ 189 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, 190 + " Table OwnerId %p\n", 191 + source_desc->reference.object)); 192 + break; 193 + } 194 + 195 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, " ")); 196 + 197 + /* Check for valid node first, then valid object */ 198 + 199 + if (source_desc->reference.node) { 200 + if (ACPI_GET_DESCRIPTOR_TYPE 201 + (source_desc->reference.node) != 202 + ACPI_DESC_TYPE_NAMED) { 203 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, 204 + " %p - Not a valid namespace node\n", 205 + source_desc->reference. 206 + node)); 207 + } else { 208 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, 209 + "Node %p [%4.4s] ", 210 + source_desc->reference. 211 + node, 212 + (source_desc->reference. 213 + node)->name.ascii)); 214 + 215 + switch ((source_desc->reference.node)->type) { 216 + 217 + /* These types have no attached object */ 218 + 219 + case ACPI_TYPE_DEVICE: 220 + acpi_os_printf("Device\n"); 221 + break; 222 + 223 + case ACPI_TYPE_THERMAL: 224 + acpi_os_printf("Thermal Zone\n"); 225 + break; 226 + 227 + default: 228 + acpi_ex_do_debug_object((source_desc-> 229 + reference. 
230 + node)->object, 231 + level + 4, 0); 232 + break; 233 + } 234 + } 235 + } else if (source_desc->reference.object) { 195 236 if (ACPI_GET_DESCRIPTOR_TYPE 196 237 (source_desc->reference.object) == 197 238 ACPI_DESC_TYPE_NAMED) { ··· 251 198 acpi_ex_do_debug_object(source_desc->reference. 252 199 object, level + 4, 0); 253 200 } 254 - } else if (source_desc->reference.node) { 255 - acpi_ex_do_debug_object((source_desc->reference.node)-> 256 - object, level + 4, 0); 257 201 } 258 202 break; 259 203 260 204 default: 261 205 262 - ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "%p %s\n", 263 - source_desc, 264 - acpi_ut_get_object_type_name 265 - (source_desc))); 206 + ACPI_DEBUG_PRINT_RAW((ACPI_DB_DEBUG_OBJECT, "%p\n", 207 + source_desc)); 266 208 break; 267 209 } 268 210 ··· 361 313 * 4) Store to the debug object 362 314 */ 363 315 switch (ref_desc->reference.opcode) { 364 - case AML_NAME_OP: 365 316 case AML_REF_OF_OP: 366 317 367 318 /* Storing an object into a Name "container" */ ··· 462 415 */ 463 416 obj_desc = *(index_desc->reference.where); 464 417 465 - status = 466 - acpi_ut_copy_iobject_to_iobject(source_desc, &new_desc, 467 - walk_state); 468 - if (ACPI_FAILURE(status)) { 469 - return_ACPI_STATUS(status); 418 + if (ACPI_GET_OBJECT_TYPE(source_desc) == 419 + ACPI_TYPE_LOCAL_REFERENCE 420 + && source_desc->reference.opcode == AML_LOAD_OP) { 421 + 422 + /* This is a DDBHandle, just add a reference to it */ 423 + 424 + acpi_ut_add_reference(source_desc); 425 + new_desc = source_desc; 426 + } else { 427 + /* Normal object, copy it */ 428 + 429 + status = 430 + acpi_ut_copy_iobject_to_iobject(source_desc, 431 + &new_desc, 432 + walk_state); 433 + if (ACPI_FAILURE(status)) { 434 + return_ACPI_STATUS(status); 435 + } 470 436 } 471 437 472 438 if (obj_desc) { ··· 631 571 632 572 /* If no implicit conversion, drop into the default case below */ 633 573 634 - if ((!implicit_conversion) || (walk_state->opcode == AML_COPY_OP)) { 635 - 636 - /* Force execution of 
default (no implicit conversion) */ 637 - 574 + if ((!implicit_conversion) || 575 + ((walk_state->opcode == AML_COPY_OP) && 576 + (target_type != ACPI_TYPE_LOCAL_REGION_FIELD) && 577 + (target_type != ACPI_TYPE_LOCAL_BANK_FIELD) && 578 + (target_type != ACPI_TYPE_LOCAL_INDEX_FIELD))) { 579 + /* 580 + * Force execution of default (no implicit conversion). Note: 581 + * copy_object does not perform an implicit conversion, as per the ACPI 582 + * spec -- except in case of region/bank/index fields -- because these 583 + * objects must retain their original type permanently. 584 + */ 638 585 target_type = ACPI_TYPE_ANY; 639 586 } 640 587
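The exstore.c condition above encodes the rule that copy_object skips implicit conversion — except when the target is a region/bank/index field, which must retain its original type permanently and therefore still converts. The decision can be modeled like this (all names hypothetical):

```c
enum target_type {
    T_INTEGER, T_STRING,
    T_REGION_FIELD, T_BANK_FIELD, T_INDEX_FIELD,
    T_ANY /* forces the default (no implicit conversion) path */
};

int is_field_target(enum target_type t)
{
    return t == T_REGION_FIELD || t == T_BANK_FIELD || t == T_INDEX_FIELD;
}

/* Returns the effective target type used for conversion dispatch:
 * T_ANY means "store as-is, no implicit conversion". */
enum target_type effective_target(enum target_type t,
                                  int implicit_conversion,
                                  int is_copy_object)
{
    if (!implicit_conversion ||
        (is_copy_object && !is_field_target(t)))
        return T_ANY;
    return t;
}
```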
+1 -1
drivers/acpi/executer/exstoren.c
··· 7 7 *****************************************************************************/ 8 8 9 9 /* 10 - * Copyright (C) 2000 - 2007, R. Byron Moore 10 + * Copyright (C) 2000 - 2008, Intel Corp. 11 11 * All rights reserved. 12 12 * 13 13 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/executer/exstorob.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+1 -2
drivers/acpi/executer/exsystem.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 44 44 45 45 #include <acpi/acpi.h> 46 46 #include <acpi/acinterp.h> 47 - #include <acpi/acevents.h> 48 47 49 48 #define _COMPONENT ACPI_EXECUTER 50 49 ACPI_MODULE_NAME("exsystem")
+36 -35
drivers/acpi/executer/exutils.c
··· 6 6 *****************************************************************************/
7 7
8 8 /*
9 - * Copyright (C) 2000 - 2007, R. Byron Moore
9 + * Copyright (C) 2000 - 2008, Intel Corp.
10 10 * All rights reserved.
11 11 *
12 12 * Redistribution and use in source and binary forms, with or without
··· 61 61 #include <acpi/acpi.h>
62 62 #include <acpi/acinterp.h>
63 63 #include <acpi/amlcode.h>
64 - #include <acpi/acevents.h>
65 64
66 65 #define _COMPONENT ACPI_EXECUTER
67 66 ACPI_MODULE_NAME("exutils")
··· 216 217
217 218 /*
218 219 * Object must be a valid number and we must be executing
219 - * a control method
220 + * a control method. NS node could be there for AML_INT_NAMEPATH_OP.
220 221 */
221 222 if ((!obj_desc) ||
223 + (ACPI_GET_DESCRIPTOR_TYPE(obj_desc) != ACPI_DESC_TYPE_OPERAND) ||
222 224 (ACPI_GET_OBJECT_TYPE(obj_desc) != ACPI_TYPE_INTEGER)) {
223 225 return;
224 226 }
··· 240 240 * PARAMETERS: field_flags - Flags with Lock rule:
241 241 * always_lock or never_lock
242 242 *
243 - * RETURN: TRUE/FALSE indicating whether the lock was actually acquired
243 + * RETURN: None
244 244 *
245 - * DESCRIPTION: Obtain the global lock and keep track of this fact via two
246 - * methods. A global variable keeps the state of the lock, and
247 - * the state is returned to the caller.
245 + * DESCRIPTION: Obtain the ACPI hardware Global Lock, only if the field
246 + * flags specify that it is to be obtained before field access.
248 247 * 249 248 ******************************************************************************/ 250 249 251 - u8 acpi_ex_acquire_global_lock(u32 field_flags) 250 + void acpi_ex_acquire_global_lock(u32 field_flags) 252 251 { 253 - u8 locked = FALSE; 254 252 acpi_status status; 255 253 256 254 ACPI_FUNCTION_TRACE(ex_acquire_global_lock); 257 255 258 - /* Only attempt lock if the always_lock bit is set */ 256 + /* Only use the lock if the always_lock bit is set */ 259 257 260 - if (field_flags & AML_FIELD_LOCK_RULE_MASK) { 261 - 262 - /* We should attempt to get the lock, wait forever */ 263 - 264 - status = acpi_ev_acquire_global_lock(ACPI_WAIT_FOREVER); 265 - if (ACPI_SUCCESS(status)) { 266 - locked = TRUE; 267 - } else { 268 - ACPI_EXCEPTION((AE_INFO, status, 269 - "Could not acquire Global Lock")); 270 - } 258 + if (!(field_flags & AML_FIELD_LOCK_RULE_MASK)) { 259 + return_VOID; 271 260 } 272 261 273 - return_UINT8(locked); 262 + /* Attempt to get the global lock, wait forever */ 263 + 264 + status = acpi_ex_acquire_mutex_object(ACPI_WAIT_FOREVER, 265 + acpi_gbl_global_lock_mutex, 266 + acpi_os_get_thread_id()); 267 + 268 + if (ACPI_FAILURE(status)) { 269 + ACPI_EXCEPTION((AE_INFO, status, 270 + "Could not acquire Global Lock")); 271 + } 272 + 273 + return_VOID; 274 274 } 275 275 276 276 /******************************************************************************* 277 277 * 278 278 * FUNCTION: acpi_ex_release_global_lock 279 279 * 280 - * PARAMETERS: locked_by_me - Return value from corresponding call to 281 - * acquire_global_lock. 280 + * PARAMETERS: field_flags - Flags with Lock rule: 281 + * always_lock or never_lock 282 282 * 283 283 * RETURN: None 284 284 * 285 - * DESCRIPTION: Release the global lock if it is locked. 
285 + * DESCRIPTION: Release the ACPI hardware Global Lock 286 286 * 287 287 ******************************************************************************/ 288 288 289 - void acpi_ex_release_global_lock(u8 locked_by_me) 289 + void acpi_ex_release_global_lock(u32 field_flags) 290 290 { 291 291 acpi_status status; 292 292 293 293 ACPI_FUNCTION_TRACE(ex_release_global_lock); 294 294 295 - /* Only attempt unlock if the caller locked it */ 295 + /* Only use the lock if the always_lock bit is set */ 296 296 297 - if (locked_by_me) { 297 + if (!(field_flags & AML_FIELD_LOCK_RULE_MASK)) { 298 + return_VOID; 299 + } 298 300 299 - /* OK, now release the lock */ 301 + /* Release the global lock */ 300 302 301 - status = acpi_ev_release_global_lock(); 302 - if (ACPI_FAILURE(status)) { 303 + status = acpi_ex_release_mutex_object(acpi_gbl_global_lock_mutex); 304 + if (ACPI_FAILURE(status)) { 303 305 304 - /* Report the error, but there isn't much else we can do */ 306 + /* Report the error, but there isn't much else we can do */ 305 307 306 - ACPI_EXCEPTION((AE_INFO, status, 307 - "Could not release ACPI Global Lock")); 308 - } 308 + ACPI_EXCEPTION((AE_INFO, status, 309 + "Could not release Global Lock")); 309 310 } 310 311 311 312 return_VOID;
+15 -16
drivers/acpi/fan.c
··· 256 256 result = PTR_ERR(cdev); 257 257 goto end; 258 258 } 259 - if (cdev) { 260 - printk(KERN_INFO PREFIX 261 - "%s is registered as cooling_device%d\n", 262 - device->dev.bus_id, cdev->id); 263 259 264 - acpi_driver_data(device) = cdev; 265 - result = sysfs_create_link(&device->dev.kobj, 266 - &cdev->device.kobj, 267 - "thermal_cooling"); 268 - if (result) 269 - return result; 260 + printk(KERN_INFO PREFIX 261 + "%s is registered as cooling_device%d\n", 262 + device->dev.bus_id, cdev->id); 270 263 271 - result = sysfs_create_link(&cdev->device.kobj, 272 - &device->dev.kobj, 273 - "device"); 274 - if (result) 275 - return result; 276 - } 264 + acpi_driver_data(device) = cdev; 265 + result = sysfs_create_link(&device->dev.kobj, 266 + &cdev->device.kobj, 267 + "thermal_cooling"); 268 + if (result) 269 + printk(KERN_ERR PREFIX "Create sysfs link\n"); 270 + 271 + result = sysfs_create_link(&cdev->device.kobj, 272 + &device->dev.kobj, 273 + "device"); 274 + if (result) 275 + printk(KERN_ERR PREFIX "Create sysfs link\n"); 277 276 278 277 result = acpi_fan_add_fs(device); 279 278 if (result)
+20
drivers/acpi/glue.c
··· 142 142 143 143 static int acpi_bind_one(struct device *dev, acpi_handle handle) 144 144 { 145 + struct acpi_device *acpi_dev; 145 146 acpi_status status; 146 147 147 148 if (dev->archdata.acpi_handle) { ··· 158 157 } 159 158 dev->archdata.acpi_handle = handle; 160 159 160 + status = acpi_bus_get_device(handle, &acpi_dev); 161 + if (!ACPI_FAILURE(status)) { 162 + int ret; 163 + 164 + ret = sysfs_create_link(&dev->kobj, &acpi_dev->dev.kobj, 165 + "firmware_node"); 166 + ret = sysfs_create_link(&acpi_dev->dev.kobj, &dev->kobj, 167 + "physical_node"); 168 + } 169 + 161 170 return 0; 162 171 } 163 172 ··· 176 165 if (!dev->archdata.acpi_handle) 177 166 return 0; 178 167 if (dev == acpi_get_physical_device(dev->archdata.acpi_handle)) { 168 + struct acpi_device *acpi_dev; 169 + 179 170 /* acpi_get_physical_device increase refcnt by one */ 180 171 put_device(dev); 172 + 173 + if (!acpi_bus_get_device(dev->archdata.acpi_handle, 174 + &acpi_dev)) { 175 + sysfs_remove_link(&dev->kobj, "firmware_node"); 176 + sysfs_remove_link(&acpi_dev->dev.kobj, "physical_node"); 177 + } 178 + 181 179 acpi_detach_data(dev->archdata.acpi_handle, 182 180 acpi_glue_data_handler); 183 181 dev->archdata.acpi_handle = NULL;
+1 -1
drivers/acpi/hardware/hwacpi.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/hardware/hwgpe.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/hardware/hwregs.c
··· 7 7 ******************************************************************************/ 8 8 9 9 /* 10 - * Copyright (C) 2000 - 2007, R. Byron Moore 10 + * Copyright (C) 2000 - 2008, Intel Corp. 11 11 * All rights reserved. 12 12 * 13 13 * Redistribution and use in source and binary forms, with or without
+9 -7
drivers/acpi/hardware/hwsleep.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 70 70 71 71 /* Get the FACS */ 72 72 73 - status = 74 - acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS, 75 - (struct acpi_table_header **)&facs); 73 + status = acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS, 74 + ACPI_CAST_INDIRECT_PTR(struct 75 + acpi_table_header, 76 + &facs)); 76 77 if (ACPI_FAILURE(status)) { 77 78 return_ACPI_STATUS(status); 78 79 } ··· 125 124 126 125 /* Get the FACS */ 127 126 128 - status = 129 - acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS, 130 - (struct acpi_table_header **)&facs); 127 + status = acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS, 128 + ACPI_CAST_INDIRECT_PTR(struct 129 + acpi_table_header, 130 + &facs)); 131 131 if (ACPI_FAILURE(status)) { 132 132 return_ACPI_STATUS(status); 133 133 }
+1 -1
drivers/acpi/hardware/hwtimer.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+61 -38
drivers/acpi/namespace/nsaccess.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 208 208 /* Special case for ACPI Global Lock */ 209 209 210 210 if (ACPI_STRCMP(init_val->name, "_GL_") == 0) { 211 - acpi_gbl_global_lock_mutex = 212 - obj_desc->mutex.os_mutex; 211 + acpi_gbl_global_lock_mutex = obj_desc; 213 212 214 213 /* Create additional counting semaphore for global lock */ 215 214 ··· 581 582 return_ACPI_STATUS(status); 582 583 } 583 584 584 - /* 585 - * Sanity typecheck of the target object: 586 - * 587 - * If 1) This is the last segment (num_segments == 0) 588 - * 2) And we are looking for a specific type 589 - * (Not checking for TYPE_ANY) 590 - * 3) Which is not an alias 591 - * 4) Which is not a local type (TYPE_SCOPE) 592 - * 5) And the type of target object is known (not TYPE_ANY) 593 - * 6) And target object does not match what we are looking for 594 - * 595 - * Then we have a type mismatch. Just warn and ignore it. 596 - */ 597 - if ((num_segments == 0) && 598 - (type_to_check_for != ACPI_TYPE_ANY) && 599 - (type_to_check_for != ACPI_TYPE_LOCAL_ALIAS) && 600 - (type_to_check_for != ACPI_TYPE_LOCAL_METHOD_ALIAS) && 601 - (type_to_check_for != ACPI_TYPE_LOCAL_SCOPE) && 602 - (this_node->type != ACPI_TYPE_ANY) && 603 - (this_node->type != type_to_check_for)) { 585 + /* More segments to follow? */ 604 586 605 - /* Complain about a type mismatch */ 606 - 607 - ACPI_WARNING((AE_INFO, 608 - "NsLookup: Type mismatch on %4.4s (%s), searching for (%s)", 609 - ACPI_CAST_PTR(char, &simple_name), 610 - acpi_ut_get_type_name(this_node->type), 611 - acpi_ut_get_type_name 612 - (type_to_check_for))); 587 + if (num_segments > 0) { 588 + /* 589 + * If we have an alias to an object that opens a scope (such as a 590 + * device or processor), we need to dereference the alias here so that 591 + * we can access any children of the original node (via the remaining 592 + * segments). 593 + */ 594 + if (this_node->type == ACPI_TYPE_LOCAL_ALIAS) { 595 + if (acpi_ns_opens_scope 596 + (((struct acpi_namespace_node *)this_node-> 597 + object)->type)) { 598 + this_node = 599 + (struct acpi_namespace_node *) 600 + this_node->object; 601 + } 602 + } 613 603 } 614 604 615 - /* 616 - * If this is the last name segment and we are not looking for a 617 - * specific type, but the type of found object is known, use that type 618 - * to see if it opens a scope. 619 - */ 620 - if ((num_segments == 0) && (type == ACPI_TYPE_ANY)) { 621 - type = this_node->type; 605 + /* Special handling for the last segment (num_segments == 0) */ 606 + 607 + else { 608 + /* 609 + * Sanity typecheck of the target object: 610 + * 611 + * If 1) This is the last segment (num_segments == 0) 612 + * 2) And we are looking for a specific type 613 + * (Not checking for TYPE_ANY) 614 + * 3) Which is not an alias 615 + * 4) Which is not a local type (TYPE_SCOPE) 616 + * 5) And the type of target object is known (not TYPE_ANY) 617 + * 6) And target object does not match what we are looking for 618 + * 619 + * Then we have a type mismatch. Just warn and ignore it.
620 + */ 621 + if ((type_to_check_for != ACPI_TYPE_ANY) && 622 + (type_to_check_for != ACPI_TYPE_LOCAL_ALIAS) && 623 + (type_to_check_for != ACPI_TYPE_LOCAL_METHOD_ALIAS) 624 + && (type_to_check_for != ACPI_TYPE_LOCAL_SCOPE) 625 + && (this_node->type != ACPI_TYPE_ANY) 626 + && (this_node->type != type_to_check_for)) { 627 + 628 + /* Complain about a type mismatch */ 629 + 630 + ACPI_WARNING((AE_INFO, 631 + "NsLookup: Type mismatch on %4.4s (%s), searching for (%s)", 632 + ACPI_CAST_PTR(char, &simple_name), 633 + acpi_ut_get_type_name(this_node-> 634 + type), 635 + acpi_ut_get_type_name 636 + (type_to_check_for))); 637 + } 638 + 639 + /* 640 + * If this is the last name segment and we are not looking for a 641 + * specific type, but the type of found object is known, use that type 642 + * to (later) see if it opens a scope. 643 + */ 644 + if (type == ACPI_TYPE_ANY) { 645 + type = this_node->type; 646 + } 622 647 } 623 648 624 649 /* Point to next name segment and make this node current */
+1 -1
drivers/acpi/namespace/nsalloc.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+6 -5
drivers/acpi/namespace/nsdump.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 249 249 acpi_os_printf("ID %X Len %.4X Addr %p\n", 250 250 obj_desc->processor.proc_id, 251 251 obj_desc->processor.length, 252 - (char *)obj_desc->processor.address); 252 + ACPI_CAST_PTR(void, 253 + obj_desc->processor. 254 + address)); 253 255 break; 254 256 255 257 case ACPI_TYPE_DEVICE: ··· 322 320 space_id)); 323 321 if (obj_desc->region.flags & AOPOBJ_DATA_VALID) { 324 322 acpi_os_printf(" Addr %8.8X%8.8X Len %.4X\n", 325 - ACPI_FORMAT_UINT64(obj_desc-> 326 - region. 327 - address), 323 + ACPI_FORMAT_NATIVE_UINT 324 + (obj_desc->region.address), 328 325 obj_desc->region.length); 329 326 } else { 330 327 acpi_os_printf
+1 -1
drivers/acpi/namespace/nsdumpdv.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/namespace/nseval.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+11 -1
drivers/acpi/namespace/nsinit.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 244 244 info->field_count++; 245 245 break; 246 246 247 + case ACPI_TYPE_LOCAL_BANK_FIELD: 248 + info->field_count++; 249 + break; 250 + 247 251 case ACPI_TYPE_BUFFER: 248 252 info->buffer_count++; 249 253 break; ··· 289 285 290 286 info->field_init++; 291 287 status = acpi_ds_get_buffer_field_arguments(obj_desc); 288 + break; 289 + 290 + case ACPI_TYPE_LOCAL_BANK_FIELD: 291 + 292 + info->field_init++; 293 + status = acpi_ds_get_bank_field_arguments(obj_desc); 292 294 break; 293 295 294 296 case ACPI_TYPE_BUFFER:
+3 -3
drivers/acpi/namespace/nsload.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 107 107 goto unlock; 108 108 } 109 109 110 - status = acpi_ns_parse_table(table_index, node->child); 110 + status = acpi_ns_parse_table(table_index, node); 111 111 if (ACPI_SUCCESS(status)) { 112 112 acpi_tb_set_table_loaded_flag(table_index, TRUE); 113 113 } else { 114 - acpi_tb_release_owner_id(table_index); 114 + (void)acpi_tb_release_owner_id(table_index); 115 115 } 116 116 117 117 unlock:
+7 -1
drivers/acpi/namespace/nsnames.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 180 180 next_node = node; 181 181 182 182 while (next_node && (next_node != acpi_gbl_root_node)) { 183 + if (ACPI_GET_DESCRIPTOR_TYPE(next_node) != ACPI_DESC_TYPE_NAMED) { 184 + ACPI_ERROR((AE_INFO, 185 + "Invalid NS Node (%p) while traversing path", 186 + next_node)); 187 + return 0; 188 + } 183 189 size += ACPI_PATH_SEGMENT_LENGTH; 184 190 next_node = acpi_ns_get_parent_node(next_node); 185 191 }
+1 -1
drivers/acpi/namespace/nsobject.c
··· 6 6 ******************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+25 -8
drivers/acpi/namespace/nsparse.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 64 64 ******************************************************************************/ 65 65 acpi_status 66 66 acpi_ns_one_complete_parse(acpi_native_uint pass_number, 67 - acpi_native_uint table_index) 67 + acpi_native_uint table_index, 68 + struct acpi_namespace_node * start_node) 68 69 { 69 70 union acpi_parse_object *parse_root; 70 71 acpi_status status; ··· 112 111 aml_start = (u8 *) table + sizeof(struct acpi_table_header); 113 112 aml_length = table->length - sizeof(struct acpi_table_header); 114 113 status = acpi_ds_init_aml_walk(walk_state, parse_root, NULL, 115 - aml_start, aml_length, NULL, 116 - (u8) pass_number); 114 + aml_start, (u32) aml_length, 115 + NULL, (u8) pass_number); 117 116 } 118 117 119 118 if (ACPI_FAILURE(status)) { 120 119 acpi_ds_delete_walk_state(walk_state); 121 - acpi_ps_delete_parse_tree(parse_root); 122 - return_ACPI_STATUS(status); 120 + goto cleanup; 121 + } 122 + 123 + /* start_node is the default location to load the table */ 124 + 125 + if (start_node && start_node != acpi_gbl_root_node) { 126 + status = 127 + acpi_ds_scope_stack_push(start_node, ACPI_TYPE_METHOD, 128 + walk_state); 129 + if (ACPI_FAILURE(status)) { 130 + acpi_ds_delete_walk_state(walk_state); 131 + goto cleanup; 132 + } 123 133 } 124 134 125 135 /* Parse the AML */ ··· 139 127 (unsigned)pass_number)); 140 128 status = acpi_ps_parse_aml(walk_state); 141 129 130 + cleanup: 142 131 acpi_ps_delete_parse_tree(parse_root); 143 132 return_ACPI_STATUS(status); 144 133 } ··· 176 163 * performs another complete parse of the AML. 
177 164 */ 178 165 ACPI_DEBUG_PRINT((ACPI_DB_PARSE, "**** Start pass 1\n")); 179 - status = acpi_ns_one_complete_parse(ACPI_IMODE_LOAD_PASS1, table_index); 166 + status = 167 + acpi_ns_one_complete_parse(ACPI_IMODE_LOAD_PASS1, table_index, 168 + start_node); 180 169 if (ACPI_FAILURE(status)) { 181 170 return_ACPI_STATUS(status); 182 171 } ··· 193 178 * parse objects are all cached. 194 179 */ 195 180 ACPI_DEBUG_PRINT((ACPI_DB_PARSE, "**** Start pass 2\n")); 196 - status = acpi_ns_one_complete_parse(ACPI_IMODE_LOAD_PASS2, table_index); 181 + status = 182 + acpi_ns_one_complete_parse(ACPI_IMODE_LOAD_PASS2, table_index, 183 + start_node); 197 184 if (ACPI_FAILURE(status)) { 198 185 return_ACPI_STATUS(status); 199 186 }
+1 -1
drivers/acpi/namespace/nssearch.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/namespace/nsutils.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+2 -4
drivers/acpi/namespace/nswalk.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 77 77 78 78 /* It's really the parent's _scope_ that we want */ 79 79 80 - if (parent_node->child) { 81 - next_node = parent_node->child; 82 - } 80 + next_node = parent_node->child; 83 81 } 84 82 85 83 else {
+9 -6
drivers/acpi/namespace/nsxfeval.c
··· 6 6 ******************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 467 467 return (AE_CTRL_DEPTH); 468 468 } 469 469 470 - if (!(flags & ACPI_STA_DEVICE_PRESENT)) { 471 - 472 - /* Don't examine children of the device if not present */ 473 - 470 + if (!(flags & ACPI_STA_DEVICE_PRESENT) && 471 + !(flags & ACPI_STA_DEVICE_FUNCTIONING)) { 472 + /* 473 + * Don't examine the children of the device only when the 474 + * device is neither present nor functional. See ACPI spec, 475 + * description of _STA for more information. 476 + */ 474 477 return (AE_CTRL_DEPTH); 475 478 } 476 479 ··· 542 539 * value is returned to the caller. 543 540 * 544 541 * This is a wrapper for walk_namespace, but the callback performs 545 - * additional filtering. Please see acpi_get_device_callback. 542 + * additional filtering. Please see acpi_ns_get_device_callback. 546 543 * 547 544 ******************************************************************************/ 548 545
+1 -1
drivers/acpi/namespace/nsxfname.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/namespace/nsxfobj.c
··· 6 6 ******************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without
+1
drivers/acpi/osl.c
··· 742 742 void acpi_os_wait_events_complete(void *context) 743 743 { 744 744 flush_workqueue(kacpid_wq); 745 + flush_workqueue(kacpi_notify_wq); 745 746 } 746 747 747 748 EXPORT_SYMBOL(acpi_os_wait_events_complete);
+43 -20
drivers/acpi/parser/psargs.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 230 230 struct acpi_parse_state *parser_state, 231 231 union acpi_parse_object *arg, u8 possible_method_call) 232 232 { 233 + acpi_status status; 233 234 char *path; 234 235 union acpi_parse_object *name_op; 235 - acpi_status status; 236 236 union acpi_operand_object *method_desc; 237 237 struct acpi_namespace_node *node; 238 - union acpi_generic_state scope_info; 238 + u8 *start = parser_state->aml; 239 239 240 240 ACPI_FUNCTION_TRACE(ps_get_next_namepath); 241 241 ··· 249 249 return_ACPI_STATUS(AE_OK); 250 250 } 251 251 252 - /* Setup search scope info */ 253 - 254 - scope_info.scope.node = NULL; 255 - node = parser_state->start_node; 256 - if (node) { 257 - scope_info.scope.node = node; 258 - } 259 - 260 252 /* 261 - * Lookup the name in the internal namespace. We don't want to add 262 - * anything new to the namespace here, however, so we use MODE_EXECUTE. 253 + * Lookup the name in the internal namespace, starting with the current 254 + * scope. We don't want to add anything new to the namespace here, 255 + * however, so we use MODE_EXECUTE. 
263 256 * Allow searching of the parent tree, but don't open a new scope - 264 257 * we just want to lookup the object (must be mode EXECUTE to perform 265 258 * the upsearch) 266 259 */ 267 - status = 268 - acpi_ns_lookup(&scope_info, path, ACPI_TYPE_ANY, ACPI_IMODE_EXECUTE, 269 - ACPI_NS_SEARCH_PARENT | ACPI_NS_DONT_OPEN_SCOPE, 270 - NULL, &node); 260 + status = acpi_ns_lookup(walk_state->scope_info, path, 261 + ACPI_TYPE_ANY, ACPI_IMODE_EXECUTE, 262 + ACPI_NS_SEARCH_PARENT | ACPI_NS_DONT_OPEN_SCOPE, 263 + NULL, &node); 271 264 272 265 /* 273 266 * If this name is a control method invocation, we must ··· 268 275 */ 269 276 if (ACPI_SUCCESS(status) && 270 277 possible_method_call && (node->type == ACPI_TYPE_METHOD)) { 278 + if (walk_state->op->common.aml_opcode == AML_UNLOAD_OP) { 279 + /* 280 + * acpi_ps_get_next_namestring has increased the AML pointer, 281 + * so we need to restore the saved AML pointer for method call. 282 + */ 283 + walk_state->parser_state.aml = start; 284 + walk_state->arg_count = 1; 285 + acpi_ps_init_op(arg, AML_INT_METHODCALL_OP); 286 + return_ACPI_STATUS(AE_OK); 287 + } 271 288 272 289 /* This name is actually a control method invocation */ 273 290 ··· 689 686 return_ACPI_STATUS(AE_NO_MEMORY); 690 687 } 691 688 692 - status = 693 - acpi_ps_get_next_namepath(walk_state, parser_state, 694 - arg, 0); 689 + /* To support super_name arg of Unload */ 690 + 691 + if (walk_state->op->common.aml_opcode == AML_UNLOAD_OP) { 692 + status = 693 + acpi_ps_get_next_namepath(walk_state, 694 + parser_state, arg, 695 + 1); 696 + 697 + /* 698 + * If the super_name arg of Unload is a method call, 699 + * we have restored the AML pointer, just free this Arg 700 + */ 701 + if (arg->common.aml_opcode == 702 + AML_INT_METHODCALL_OP) { 703 + acpi_ps_free_op(arg); 704 + arg = NULL; 705 + } 706 + } else { 707 + status = 708 + acpi_ps_get_next_namepath(walk_state, 709 + parser_state, arg, 710 + 0); 711 + } 695 712 } else { 696 713 /* Single complex argument, nothing returned */ 697 714
+45 -16
drivers/acpi/parser/psloop.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 182 182 ACPI_FUNCTION_TRACE_PTR(ps_build_named_op, walk_state); 183 183 184 184 unnamed_op->common.value.arg = NULL; 185 + unnamed_op->common.arg_list_length = 0; 185 186 unnamed_op->common.aml_opcode = walk_state->opcode; 186 187 187 188 /* ··· 242 241 acpi_ps_append_arg(*op, unnamed_op->common.value.arg); 243 242 acpi_gbl_depth++; 244 243 245 - if ((*op)->common.aml_opcode == AML_REGION_OP) { 244 + if ((*op)->common.aml_opcode == AML_REGION_OP || 245 + (*op)->common.aml_opcode == AML_DATA_REGION_OP) { 246 246 /* 247 247 * Defer final parsing of an operation_region body, because we don't 248 248 * have enough info in the first pass to parse it correctly (i.e., ··· 282 280 acpi_status status = AE_OK; 283 281 union acpi_parse_object *op; 284 282 union acpi_parse_object *named_op = NULL; 283 + union acpi_parse_object *parent_scope; 284 + u8 argument_count; 285 + const struct acpi_opcode_info *op_info; 285 286 286 287 ACPI_FUNCTION_TRACE_PTR(ps_create_op, walk_state); 287 288 ··· 325 320 op->named.length = 0; 326 321 } 327 322 328 - acpi_ps_append_arg(acpi_ps_get_parent_scope 329 - (&(walk_state->parser_state)), op); 323 + if (walk_state->opcode == AML_BANK_FIELD_OP) { 324 + /* 325 + * Backup to beginning of bank_field declaration 326 + * body_length is unknown until we parse the body 327 + */ 328 + op->named.data = aml_op_start; 329 + op->named.length = 0; 330 + } 331 + 332 + parent_scope = acpi_ps_get_parent_scope(&(walk_state->parser_state)); 333 + acpi_ps_append_arg(parent_scope, op); 334 + 335 + if (parent_scope) { 336 + op_info = 337 + acpi_ps_get_opcode_info(parent_scope->common.aml_opcode); 338 + if (op_info->flags & AML_HAS_TARGET) { 339 + argument_count = 340 + acpi_ps_get_argument_count(op_info->type); 341 + if (parent_scope->common.arg_list_length > 342 + argument_count) { 343 + op->common.flags |= ACPI_PARSEOP_TARGET; 344 + } 345 + } else if (parent_scope->common.aml_opcode == AML_INCREMENT_OP) { 346 + op->common.flags |= ACPI_PARSEOP_TARGET; 347 + } 348 + } 330 349 331 350 if (walk_state->descending_callback != NULL) { 332 351 /* ··· 632 603 acpi_ps_pop_scope(&(walk_state->parser_state), op, 633 604 &walk_state->arg_types, 634 605 &walk_state->arg_count); 635 - 636 - if ((*op)->common.aml_opcode != AML_WHILE_OP) { 637 - status2 = acpi_ds_result_stack_pop(walk_state); 638 - if (ACPI_FAILURE(status2)) { 639 - return_ACPI_STATUS(status2); 640 - } 641 - } 642 606 } 643 607 644 608 /* Close this iteration of the While loop */ ··· 659 637 if (*op) { 660 638 status2 = 661 639 acpi_ps_complete_this_op(walk_state, *op); 662 - if (ACPI_FAILURE(status2)) { 663 - return_ACPI_STATUS(status2); 664 - } 665 - status2 = acpi_ds_result_stack_pop(walk_state); 666 640 if (ACPI_FAILURE(status2)) { 667 641 return_ACPI_STATUS(status2); 668 642 } ··· 1023 1005 acpi_gbl_depth--; 1024 1006 } 1025 1007 1026 - if (op->common.aml_opcode == AML_REGION_OP) { 1008 + if (op->common.aml_opcode == AML_REGION_OP || 1009 + op->common.aml_opcode == AML_DATA_REGION_OP) { 1027 1010 /* 1028 1011 * Skip parsing of control method or opregion body, 1029 1012 * because we don't have enough info in the first pass ··· 1042 1023 /* 1043 1024 * Backup to beginning of create_xXXfield declaration (1 for 1044 1025 * Opcode) 1026 + * 1027 + * body_length is unknown until we parse the body 1028 + */ 1029 + op->named.length = 1030 + (u32) (parser_state->aml - op->named.data); 1031 + } 1032 + 1033 + if (op->common.aml_opcode == AML_BANK_FIELD_OP) { 1034 + /* 1035 + * Backup to beginning of bank_field declaration 1045 1036 * 1046 1037 * body_length is unknown until we parse the body 1047 1038 */
+33 -5
drivers/acpi/parser/psopcode.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 49 49 #define _COMPONENT ACPI_PARSER 50 50 ACPI_MODULE_NAME("psopcode") 51 51 52 + static const u8 acpi_gbl_argument_count[] = 53 + { 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 6 }; 54 + 52 55 /******************************************************************************* 53 56 * 54 57 * NAME: acpi_gbl_aml_op_info ··· 62 59 * the operand type. 63 60 * 64 61 ******************************************************************************/ 62 + 65 63 /* 66 64 * Summary of opcode types/flags 67 65 * ··· 180 176 AML_CREATE_QWORD_FIELD_OP 181 177 182 178 ******************************************************************************/ 179 + 183 180 /* 184 181 * Master Opcode information table. A summary of everything we know about each 185 182 * opcode, all in one place. 
··· 520 515 AML_TYPE_NAMED_FIELD, 521 516 AML_HAS_ARGS | AML_NSOBJECT | AML_NSOPCODE | AML_FIELD), 522 517 /* 5F */ ACPI_OP("BankField", ARGP_BANK_FIELD_OP, ARGI_BANK_FIELD_OP, 523 - ACPI_TYPE_ANY, AML_CLASS_NAMED_OBJECT, 518 + ACPI_TYPE_LOCAL_BANK_FIELD, AML_CLASS_NAMED_OBJECT, 524 519 AML_TYPE_NAMED_FIELD, 525 - AML_HAS_ARGS | AML_NSOBJECT | AML_NSOPCODE | AML_FIELD), 520 + AML_HAS_ARGS | AML_NSOBJECT | AML_NSOPCODE | AML_FIELD | 521 + AML_DEFER), 526 522 527 523 /* Internal opcodes that map to invalid AML opcodes */ 528 524 ··· 625 619 AML_TYPE_EXEC_6A_0T_1R, AML_FLAGS_EXEC_6A_0T_1R), 626 620 /* 7C */ ACPI_OP("DataTableRegion", ARGP_DATA_REGION_OP, 627 621 ARGI_DATA_REGION_OP, ACPI_TYPE_REGION, 628 - AML_CLASS_NAMED_OBJECT, AML_TYPE_NAMED_SIMPLE, 622 + AML_CLASS_NAMED_OBJECT, AML_TYPE_NAMED_COMPLEX, 629 623 AML_HAS_ARGS | AML_NSOBJECT | AML_NSOPCODE | 630 - AML_NSNODE | AML_NAMED), 624 + AML_NSNODE | AML_NAMED | AML_DEFER), 631 625 /* 7D */ ACPI_OP("[EvalSubTree]", ARGP_SCOPE_OP, ARGI_SCOPE_OP, 632 626 ACPI_TYPE_ANY, AML_CLASS_NAMED_OBJECT, 633 627 AML_TYPE_NAMED_NO_OBJ, ··· 784 778 return ("OpcodeName unavailable"); 785 779 786 780 #endif 781 + } 782 + 783 + /******************************************************************************* 784 + * 785 + * FUNCTION: acpi_ps_get_argument_count 786 + * 787 + * PARAMETERS: op_type - Type associated with the AML opcode 788 + * 789 + * RETURN: Argument count 790 + * 791 + * DESCRIPTION: Obtain the number of expected arguments for an AML opcode 792 + * 793 + ******************************************************************************/ 794 + 795 + u8 acpi_ps_get_argument_count(u32 op_type) 796 + { 797 + 798 + if (op_type <= AML_TYPE_EXEC_6A_0T_1R) { 799 + return (acpi_gbl_argument_count[op_type]); 800 + } 801 + 802 + return (0); 787 803 }
+31 -14
drivers/acpi/parser/psparse.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 205 205 || (op->common.parent->common.aml_opcode == 206 206 AML_PACKAGE_OP) 207 207 || (op->common.parent->common.aml_opcode == 208 + AML_BANK_FIELD_OP) 209 + || (op->common.parent->common.aml_opcode == 208 210 AML_VAR_PACKAGE_OP)) { 209 211 replacement_op = 210 212 acpi_ps_alloc_op(AML_INT_RETURN_VALUE_OP); ··· 351 349 352 350 parser_state->aml = walk_state->aml_last_while; 353 351 walk_state->control_state->common.value = FALSE; 354 - status = acpi_ds_result_stack_pop(walk_state); 355 - if (ACPI_SUCCESS(status)) { 356 - status = AE_CTRL_BREAK; 357 - } 352 + status = AE_CTRL_BREAK; 358 353 break; 359 354 360 355 case AE_CTRL_CONTINUE: 361 356 362 357 parser_state->aml = walk_state->aml_last_while; 363 - status = acpi_ds_result_stack_pop(walk_state); 364 - if (ACPI_SUCCESS(status)) { 365 - status = AE_CTRL_CONTINUE; 366 - } 358 + status = AE_CTRL_CONTINUE; 367 359 break; 368 360 369 361 case AE_CTRL_PENDING: ··· 379 383 * Just close out this package 380 384 */ 381 385 parser_state->aml = acpi_ps_get_next_package_end(parser_state); 382 - status = acpi_ds_result_stack_pop(walk_state); 383 - if (ACPI_SUCCESS(status)) { 384 - status = AE_CTRL_PENDING; 385 - } 386 + status = AE_CTRL_PENDING; 386 387 break; 387 388 388 389 case AE_CTRL_FALSE: ··· 534 541 if ((status == AE_ALREADY_EXISTS) && 535 542 (!walk_state->method_desc->method.mutex)) { 536 543 ACPI_INFO((AE_INFO, 537 - "Marking method %4.4s as Serialized", 544 + "Marking method %4.4s as Serialized because of AE_ALREADY_EXISTS error", 538 545 walk_state->method_node->name. 
539 546 ascii)); 540 547 ··· 594 601 * The object is deleted 595 602 */ 596 603 if (!previous_walk_state->return_desc) { 604 + /* 605 + * In slack mode execution, if there is no return value 606 + * we should implicitly return zero (0) as a default value. 607 + */ 608 + if (acpi_gbl_enable_interpreter_slack && 609 + !previous_walk_state-> 610 + implicit_return_obj) { 611 + previous_walk_state-> 612 + implicit_return_obj = 613 + acpi_ut_create_internal_object 614 + (ACPI_TYPE_INTEGER); 615 + if (!previous_walk_state-> 616 + implicit_return_obj) { 617 + return_ACPI_STATUS 618 + (AE_NO_MEMORY); 619 + } 620 + 621 + previous_walk_state-> 622 + implicit_return_obj-> 623 + integer.value = 0; 624 + } 625 + 626 + /* Restart the calling control method */ 627 + 597 628 status = 598 629 acpi_ds_restart_control_method 599 630 (walk_state,
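The slack-mode hunk above synthesizes an implicit Integer 0 when a control method finishes without a return object. A toy model of that rule (struct and names are invented for illustration; the real code allocates an ACPI_TYPE_INTEGER object on previous_walk_state when acpi_gbl_enable_interpreter_slack is set):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of a method result: either an explicit return
 * object or nothing at all. */
struct result {
    bool has_value;
    long long value;
};

/* Assumed global, playing the role of acpi_gbl_enable_interpreter_slack. */
static bool slack_mode = true;

/* Finish a method call: if it produced no return object and slack mode
 * is on, substitute the default Integer 0, as in the hunk above. */
struct result finish_method(struct result r)
{
    if (!r.has_value && slack_mode) {
        r.has_value = true;
        r.value = 0;    /* implicit return of zero */
    }
    return r;
}
```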
+1 -1
drivers/acpi/parser/psscope.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+3 -1
drivers/acpi/parser/pstree.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 171 171 while (arg) { 172 172 arg->common.parent = op; 173 173 arg = arg->common.next; 174 + 175 + op->common.arg_list_length++; 174 176 } 175 177 } 176 178
+1 -1
drivers/acpi/parser/psutils.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/parser/pswalk.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/parser/psxface.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/power.c
··· 122 122 } 123 123 124 124 *resource = acpi_driver_data(device); 125 - if (!resource) 125 + if (!*resource) 126 126 return -ENODEV; 127 127 128 128 return 0;
+14 -15
drivers/acpi/processor_core.c
··· 674 674 result = PTR_ERR(pr->cdev); 675 675 goto end; 676 676 } 677 - if (pr->cdev) { 678 - printk(KERN_INFO PREFIX 679 - "%s is registered as cooling_device%d\n", 680 - device->dev.bus_id, pr->cdev->id); 681 677 682 - result = sysfs_create_link(&device->dev.kobj, 683 - &pr->cdev->device.kobj, 684 - "thermal_cooling"); 685 - if (result) 686 - return result; 687 - result = sysfs_create_link(&pr->cdev->device.kobj, 688 - &device->dev.kobj, 689 - "device"); 690 - if (result) 691 - return result; 692 - } 678 + printk(KERN_INFO PREFIX 679 + "%s is registered as cooling_device%d\n", 680 + device->dev.bus_id, pr->cdev->id); 681 + 682 + result = sysfs_create_link(&device->dev.kobj, 683 + &pr->cdev->device.kobj, 684 + "thermal_cooling"); 685 + if (result) 686 + printk(KERN_ERR PREFIX "Create sysfs link\n"); 687 + result = sysfs_create_link(&pr->cdev->device.kobj, 688 + &device->dev.kobj, 689 + "device"); 690 + if (result) 691 + printk(KERN_ERR PREFIX "Create sysfs link\n"); 693 692 694 693 if (pr->flags.throttling) { 695 694 printk(KERN_INFO PREFIX "%s [%s] (supports",
+16 -2
drivers/acpi/processor_idle.c
··· 847 847 /* all processors need to support C1 */ 848 848 pr->power.states[ACPI_STATE_C1].type = ACPI_STATE_C1; 849 849 pr->power.states[ACPI_STATE_C1].valid = 1; 850 + pr->power.states[ACPI_STATE_C1].entry_method = ACPI_CSTATE_HALT; 850 851 } 851 852 /* the C0 state only exists as a filler in our array */ 852 853 pr->power.states[ACPI_STATE_C0].valid = 1; ··· 960 959 cx.address); 961 960 } 962 961 962 + if (cx.type == ACPI_STATE_C1) { 963 + cx.valid = 1; 964 + } 963 965 964 966 obj = &(element->package.elements[2]); 965 967 if (obj->type != ACPI_TYPE_INTEGER) ··· 1299 1295 { 1300 1296 int result = 0; 1301 1297 1298 + if (boot_option_idle_override) 1299 + return 0; 1302 1300 1303 1301 if (!pr) 1304 1302 return -EINVAL; ··· 1740 1734 { 1741 1735 int ret; 1742 1736 1737 + if (boot_option_idle_override) 1738 + return 0; 1739 + 1743 1740 if (!pr) 1744 1741 return -EINVAL; 1745 1742 ··· 1773 1764 struct proc_dir_entry *entry = NULL; 1774 1765 unsigned int i; 1775 1766 1767 + if (boot_option_idle_override) 1768 + return 0; 1776 1769 1777 1770 if (!first_run) { 1778 1771 dmi_check_system(processor_power_dmi_table); ··· 1810 1799 * Note that we use previously set idle handler will be used on 1811 1800 * platforms that only support C1. 1812 1801 */ 1813 - if ((pr->flags.power) && (!boot_option_idle_override)) { 1802 + if (pr->flags.power) { 1814 1803 #ifdef CONFIG_CPU_IDLE 1815 1804 acpi_processor_setup_cpuidle(pr); 1816 1805 pr->power.dev.cpu = pr->id; ··· 1846 1835 int acpi_processor_power_exit(struct acpi_processor *pr, 1847 1836 struct acpi_device *device) 1848 1837 { 1838 + if (boot_option_idle_override) 1839 + return 0; 1840 + 1849 1841 #ifdef CONFIG_CPU_IDLE 1850 - if ((pr->flags.power) && (!boot_option_idle_override)) 1842 + if (pr->flags.power) 1851 1843 cpuidle_unregister_device(&pr->power.dev); 1852 1844 #endif 1853 1845 pr->flags.power_setup_done = 0;
+1 -1
drivers/acpi/resources/rsaddr.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+22 -4
drivers/acpi/resources/rscalc.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 73 73 74 74 static u8 acpi_rs_count_set_bits(u16 bit_field) 75 75 { 76 - u8 bits_set; 76 + acpi_native_uint bits_set; 77 77 78 78 ACPI_FUNCTION_ENTRY(); 79 79 ··· 81 81 82 82 /* Zero the least significant bit that is set */ 83 83 84 - bit_field &= (bit_field - 1); 84 + bit_field &= (u16) (bit_field - 1); 85 85 } 86 86 87 - return (bits_set); 87 + return ((u8) bits_set); 88 88 } 89 89 90 90 /******************************************************************************* ··· 211 211 * variable-length fields 212 212 */ 213 213 switch (resource->type) { 214 + case ACPI_RESOURCE_TYPE_IRQ: 215 + 216 + /* Length can be 3 or 2 */ 217 + 218 + if (resource->data.irq.descriptor_length == 2) { 219 + total_size--; 220 + } 221 + break; 222 + 223 + case ACPI_RESOURCE_TYPE_START_DEPENDENT: 224 + 225 + /* Length can be 1 or 0 */ 226 + 227 + if (resource->data.irq.descriptor_length == 0) { 228 + total_size--; 229 + } 230 + break; 231 + 214 232 case ACPI_RESOURCE_TYPE_VENDOR: 215 233 /* 216 234 * Vendor Defined Resource:
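acpi_rs_count_set_bits(), earlier in this hunk, counts mask bits with the classic clear-lowest-set-bit loop: `x &= x - 1` erases exactly one set bit per iteration, so the loop runs once per set bit rather than once per bit position. The same technique as a standalone function:

```c
#include <assert.h>
#include <stdint.h>

/* Count the set bits in a 16-bit field (an IRQ or DMA mask), using the
 * same clear-lowest-set-bit trick as acpi_rs_count_set_bits(). */
uint8_t count_set_bits(uint16_t bit_field)
{
    uint8_t bits_set = 0;

    while (bit_field) {
        bits_set++;

        /* Zero the least significant bit that is set */
        bit_field &= (uint16_t)(bit_field - 1);
    }
    return bits_set;
}
```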
+1 -1
drivers/acpi/resources/rscreate.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+7 -3
drivers/acpi/resources/rsdump.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 87 87 * 88 88 ******************************************************************************/ 89 89 90 - struct acpi_rsdump_info acpi_rs_dump_irq[6] = { 90 + struct acpi_rsdump_info acpi_rs_dump_irq[7] = { 91 91 {ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_irq), "IRQ", NULL}, 92 + {ACPI_RSD_UINT8, ACPI_RSD_OFFSET(irq.descriptor_length), 93 + "Descriptor Length", NULL}, 92 94 {ACPI_RSD_1BITFLAG, ACPI_RSD_OFFSET(irq.triggering), "Triggering", 93 95 acpi_gbl_he_decode}, 94 96 {ACPI_RSD_1BITFLAG, ACPI_RSD_OFFSET(irq.polarity), "Polarity", ··· 117 115 NULL} 118 116 }; 119 117 120 - struct acpi_rsdump_info acpi_rs_dump_start_dpf[3] = { 118 + struct acpi_rsdump_info acpi_rs_dump_start_dpf[4] = { 121 119 {ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_start_dpf), 122 120 "Start-Dependent-Functions", NULL}, 121 + {ACPI_RSD_UINT8, ACPI_RSD_OFFSET(start_dpf.descriptor_length), 122 + "Descriptor Length", NULL}, 123 123 {ACPI_RSD_2BITFLAG, ACPI_RSD_OFFSET(start_dpf.compatibility_priority), 124 124 "Compatibility Priority", acpi_gbl_config_decode}, 125 125 {ACPI_RSD_2BITFLAG, ACPI_RSD_OFFSET(start_dpf.performance_robustness),
+1 -1
drivers/acpi/resources/rsinfo.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+38 -3
drivers/acpi/resources/rsio.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 185 185 * 186 186 ******************************************************************************/ 187 187 188 - struct acpi_rsconvert_info acpi_rs_get_start_dpf[5] = { 188 + struct acpi_rsconvert_info acpi_rs_get_start_dpf[6] = { 189 189 {ACPI_RSC_INITGET, ACPI_RESOURCE_TYPE_START_DEPENDENT, 190 190 ACPI_RS_SIZE(struct acpi_resource_start_dependent), 191 191 ACPI_RSC_TABLE_SIZE(acpi_rs_get_start_dpf)}, ··· 195 195 {ACPI_RSC_SET8, ACPI_RS_OFFSET(data.start_dpf.compatibility_priority), 196 196 ACPI_ACCEPTABLE_CONFIGURATION, 197 197 2}, 198 + 199 + /* Get the descriptor length (0 or 1 for Start Dpf descriptor) */ 200 + 201 + {ACPI_RSC_1BITFLAG, ACPI_RS_OFFSET(data.start_dpf.descriptor_length), 202 + AML_OFFSET(start_dpf.descriptor_type), 203 + 0}, 198 204 199 205 /* All done if there is no flag byte present in the descriptor */ 200 206 ··· 225 219 * 226 220 ******************************************************************************/ 227 221 228 - struct acpi_rsconvert_info acpi_rs_set_start_dpf[6] = { 222 + struct acpi_rsconvert_info acpi_rs_set_start_dpf[10] = { 223 + /* Start with a default descriptor of length 1 */ 224 + 229 225 {ACPI_RSC_INITSET, ACPI_RESOURCE_NAME_START_DEPENDENT, 230 226 sizeof(struct aml_resource_start_dependent), 231 227 ACPI_RSC_TABLE_SIZE(acpi_rs_set_start_dpf)}, ··· 243 235 ACPI_RS_OFFSET(data.start_dpf.performance_robustness), 244 236 AML_OFFSET(start_dpf.flags), 245 237 2}, 238 + /* 239 + * All done if the output descriptor length is required to be 1 240 + * (i.e., optimization to 0 bytes cannot be attempted) 241 + */ 242 + {ACPI_RSC_EXIT_EQ, ACPI_RSC_COMPARE_VALUE, 243 + ACPI_RS_OFFSET(data.start_dpf.descriptor_length), 244 + 1}, 245 
+ 246 + /* Set length to 0 bytes (no flags byte) */ 247 + 248 + {ACPI_RSC_LENGTH, 0, 0, 249 + sizeof(struct aml_resource_start_dependent_noprio)}, 250 + 251 + /* 252 + * All done if the output descriptor length is required to be 0. 253 + * 254 + * TBD: Perhaps we should check for error if input flags are not 255 + * compatible with a 0-byte descriptor. 256 + */ 257 + {ACPI_RSC_EXIT_EQ, ACPI_RSC_COMPARE_VALUE, 258 + ACPI_RS_OFFSET(data.start_dpf.descriptor_length), 259 + 0}, 260 + 261 + /* Reset length to 1 byte (descriptor with flags byte) */ 262 + 263 + {ACPI_RSC_LENGTH, 0, 0, sizeof(struct aml_resource_start_dependent)}, 264 + 246 265 /* 247 266 * All done if flags byte is necessary -- if either priority value 248 267 * is not ACPI_ACCEPTABLE_CONFIGURATION
+40 -5
drivers/acpi/resources/rsirq.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 52 52 * acpi_rs_get_irq 53 53 * 54 54 ******************************************************************************/ 55 - struct acpi_rsconvert_info acpi_rs_get_irq[7] = { 55 + struct acpi_rsconvert_info acpi_rs_get_irq[8] = { 56 56 {ACPI_RSC_INITGET, ACPI_RESOURCE_TYPE_IRQ, 57 57 ACPI_RS_SIZE(struct acpi_resource_irq), 58 58 ACPI_RSC_TABLE_SIZE(acpi_rs_get_irq)}, ··· 68 68 {ACPI_RSC_SET8, ACPI_RS_OFFSET(data.irq.triggering), 69 69 ACPI_EDGE_SENSITIVE, 70 70 1}, 71 + 72 + /* Get the descriptor length (2 or 3 for IRQ descriptor) */ 73 + 74 + {ACPI_RSC_2BITFLAG, ACPI_RS_OFFSET(data.irq.descriptor_length), 75 + AML_OFFSET(irq.descriptor_type), 76 + 0}, 71 77 72 78 /* All done if no flag byte present in descriptor */ 73 79 ··· 100 94 * 101 95 ******************************************************************************/ 102 96 103 - struct acpi_rsconvert_info acpi_rs_set_irq[9] = { 97 + struct acpi_rsconvert_info acpi_rs_set_irq[13] = { 98 + /* Start with a default descriptor of length 3 */ 99 + 104 100 {ACPI_RSC_INITSET, ACPI_RESOURCE_NAME_IRQ, 105 101 sizeof(struct aml_resource_irq), 106 102 ACPI_RSC_TABLE_SIZE(acpi_rs_set_irq)}, ··· 113 105 AML_OFFSET(irq.irq_mask), 114 106 ACPI_RS_OFFSET(data.irq.interrupt_count)}, 115 107 116 - /* Set the flags byte by default */ 108 + /* Set the flags byte */ 117 109 118 110 {ACPI_RSC_1BITFLAG, ACPI_RS_OFFSET(data.irq.triggering), 119 111 AML_OFFSET(irq.flags), ··· 126 118 {ACPI_RSC_1BITFLAG, ACPI_RS_OFFSET(data.irq.sharable), 127 119 AML_OFFSET(irq.flags), 128 120 4}, 121 + 122 + /* 123 + * All done if the output descriptor length is required to be 3 124 + * (i.e., optimization to 2 bytes cannot be attempted) 125 + */ 126 + 
{ACPI_RSC_EXIT_EQ, ACPI_RSC_COMPARE_VALUE, 127 + ACPI_RS_OFFSET(data.irq.descriptor_length), 128 + 3}, 129 + 130 + /* Set length to 2 bytes (no flags byte) */ 131 + 132 + {ACPI_RSC_LENGTH, 0, 0, sizeof(struct aml_resource_irq_noflags)}, 133 + 134 + /* 135 + * All done if the output descriptor length is required to be 2. 136 + * 137 + * TBD: Perhaps we should check for error if input flags are not 138 + * compatible with a 2-byte descriptor. 139 + */ 140 + {ACPI_RSC_EXIT_EQ, ACPI_RSC_COMPARE_VALUE, 141 + ACPI_RS_OFFSET(data.irq.descriptor_length), 142 + 2}, 143 + 144 + /* Reset length to 3 bytes (descriptor with flags byte) */ 145 + 146 + {ACPI_RSC_LENGTH, 0, 0, sizeof(struct aml_resource_irq)}, 147 + 129 148 /* 130 149 * Check if the flags byte is necessary. Not needed if the flags are: 131 150 * ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_HIGH, ACPI_EXCLUSIVE ··· 169 134 ACPI_RS_OFFSET(data.irq.sharable), 170 135 ACPI_EXCLUSIVE}, 171 136 172 - /* irq_no_flags() descriptor can be used */ 137 + /* We can optimize to a 2-byte irq_no_flags() descriptor */ 173 138 174 139 {ACPI_RSC_LENGTH, 0, 0, sizeof(struct aml_resource_irq_noflags)} 175 140 };
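The enlarged acpi_rs_set_irq table implements a small optimization: the IRQ small-resource descriptor may drop its flags byte (the 2-byte irq_no_flags() form) when every flag holds its default value, unless the caller pinned descriptor_length to 2 or 3. A sketch of that decision, with invented flag constants standing in for ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_HIGH and ACPI_EXCLUSIVE:

```c
#include <assert.h>

/* Simplified flag values; the all-default combination (edge-triggered,
 * active-high, exclusive) is the one that permits the short form. */
enum { LEVEL_SENSITIVE = 0, EDGE_SENSITIVE = 1 };
enum { ACTIVE_HIGH = 0, ACTIVE_LOW = 1 };
enum { EXCLUSIVE = 0, SHARED = 1 };

struct irq_resource {
    int triggering;
    int polarity;
    int sharable;
    int requested_length;   /* 2 or 3 to pin the length, 0 to optimize */
};

/* Return the AML descriptor body length (2 or 3 bytes): honor an
 * explicit request first, otherwise emit the 2-byte form only when all
 * flags are defaults -- mirroring the acpi_rs_set_irq walk above. */
int irq_descriptor_length(const struct irq_resource *r)
{
    if (r->requested_length == 3)
        return 3;
    if (r->requested_length == 2)
        return 2;

    if (r->triggering == EDGE_SENSITIVE &&
        r->polarity == ACTIVE_HIGH &&
        r->sharable == EXCLUSIVE)
        return 2;   /* flags byte unnecessary: irq_no_flags() form */

    return 3;       /* non-default flags require the flags byte */
}
```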
+1 -1
drivers/acpi/resources/rslist.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/resources/rsmemory.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+12 -1
drivers/acpi/resources/rsmisc.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 494 494 ACPI_ERROR((AE_INFO, 495 495 "Invalid conversion sub-opcode")); 496 496 return_ACPI_STATUS(AE_BAD_PARAMETER); 497 + } 498 + break; 499 + 500 + case ACPI_RSC_EXIT_EQ: 501 + /* 502 + * Control - Exit conversion if equal 503 + */ 504 + if (*ACPI_ADD_PTR(u8, resource, 505 + COMPARE_TARGET(info)) == 506 + COMPARE_VALUE(info)) { 507 + goto exit; 497 508 } 498 509 break; 499 510
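ACPI_RSC_EXIT_EQ adds an early-exit opcode to the table-driven resource converter: compare one byte of the resource against a target value and stop the walk on a match. A toy interpreter showing the idea (the opcodes and structs here are invented and much simpler than the real acpi_rsconvert_info tables):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum rsc_opcode { RSC_SET8, RSC_EXIT_EQ, RSC_END };

/* One conversion-table entry: an opcode plus its operands. */
struct rsc_info {
    enum rsc_opcode opcode;
    size_t offset;      /* byte offset into the resource buffer */
    uint8_t value;      /* value to store, or to compare against */
};

/* Run the conversion table against a resource buffer; return how many
 * entries actually executed (EXIT_EQ may cut the walk short). */
int run_table(const struct rsc_info *info, uint8_t *resource)
{
    int executed = 0;

    for (; info->opcode != RSC_END; info++, executed++) {
        switch (info->opcode) {
        case RSC_SET8:
            resource[info->offset] = info->value;
            break;

        case RSC_EXIT_EQ:
            /* Control - exit conversion if equal */
            if (resource[info->offset] == info->value)
                return executed + 1;
            break;

        default:
            break;
        }
    }
    return executed;
}
```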
+4 -4
drivers/acpi/resources/rsutils.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 97 97 u16 acpi_rs_encode_bitmask(u8 * list, u8 count) 98 98 { 99 99 acpi_native_uint i; 100 - u16 mask; 100 + acpi_native_uint mask; 101 101 102 102 ACPI_FUNCTION_ENTRY(); 103 103 104 104 /* Encode the list into a single bitmask */ 105 105 106 106 for (i = 0, mask = 0; i < count; i++) { 107 - mask |= (0x0001 << list[i]); 107 + mask |= (0x1 << list[i]); 108 108 } 109 109 110 - return (mask); 110 + return ((u16) mask); 111 111 } 112 112 113 113 /*******************************************************************************
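acpi_rs_encode_bitmask() folds a list of IRQ/DMA numbers into a single 16-bit mask; the patch merely widens the working variable to the native word and narrows on return. The same routine as a standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Encode a list of resource numbers into a single 16-bit mask: bit N
 * set means resource number N appears in the list. Accumulated in
 * native width and narrowed at the end, as in the hunk above. */
uint16_t encode_bitmask(const uint8_t *list, uint8_t count)
{
    unsigned int mask = 0;
    uint8_t i;

    for (i = 0; i < count; i++)
        mask |= (0x1u << list[i]);

    return (uint16_t)mask;
}
```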
+1 -1
drivers/acpi/resources/rsxface.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+50 -13
drivers/acpi/scan.c
··· 677 677 device->wakeup.resources.count = package->package.count - 2; 678 678 for (i = 0; i < device->wakeup.resources.count; i++) { 679 679 element = &(package->package.elements[i + 2]); 680 - if (element->type != ACPI_TYPE_ANY) { 680 + if (element->type != ACPI_TYPE_LOCAL_REFERENCE) 681 681 return AE_BAD_DATA; 682 - } 683 682 684 683 device->wakeup.resources.handles[i] = element->reference.handle; 685 684 } ··· 691 692 acpi_status status = 0; 692 693 struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 693 694 union acpi_object *package = NULL; 695 + union acpi_object in_arg[3]; 696 + struct acpi_object_list arg_list = { 3, in_arg }; 697 + acpi_status psw_status = AE_OK; 694 698 695 699 struct acpi_device_id button_device_ids[] = { 696 700 {"PNP0C0D", 0}, ··· 701 699 {"PNP0C0E", 0}, 702 700 {"", 0}, 703 701 }; 704 - 705 702 706 703 /* _PRW */ 707 704 status = acpi_evaluate_object(device->handle, "_PRW", NULL, &buffer); ··· 719 718 kfree(buffer.pointer); 720 719 721 720 device->wakeup.flags.valid = 1; 721 + /* Call _PSW/_DSW object to disable its ability to wake the sleeping 722 + * system for the ACPI device with the _PRW object. 723 + * The _PSW object is deprecated in ACPI 3.0 and is replaced by _DSW. 724 + * So it is necessary to call the _DSW object first. Only when it is not 725 + * present will the _PSW object be used. 726 + */ 727 + /* 728 + * Three arguments are needed for the _DSW object. 729 + * Argument 0: enable/disable the wake capabilities 730 + * When the _DSW object is called to disable the wake capabilities, only 731 + * the first argument is used. The value of the other two arguments 732 + * is meaningless. 
733 + */ 734 + in_arg[0].type = ACPI_TYPE_INTEGER; 735 + in_arg[0].integer.value = 0; 736 + in_arg[1].type = ACPI_TYPE_INTEGER; 737 + in_arg[1].integer.value = 0; 738 + in_arg[2].type = ACPI_TYPE_INTEGER; 739 + in_arg[2].integer.value = 0; 740 + psw_status = acpi_evaluate_object(device->handle, "_DSW", 741 + &arg_list, NULL); 742 + if (ACPI_FAILURE(psw_status) && (psw_status != AE_NOT_FOUND)) 743 + ACPI_DEBUG_PRINT((ACPI_DB_INFO, "error in evaluate _DSW\n")); 744 + /* 745 + * When the _DSW object is not present, OSPM will call the _PSW object. 746 + */ 747 + if (psw_status == AE_NOT_FOUND) { 748 + /* 749 + * Only one argument is required for the _PSW object. 750 + * Argument 0: enable/disable the wake capabilities 751 + */ 752 + arg_list.count = 1; 753 + in_arg[0].integer.value = 0; 754 + psw_status = acpi_evaluate_object(device->handle, "_PSW", 755 + &arg_list, NULL); 756 + if (ACPI_FAILURE(psw_status) && (psw_status != AE_NOT_FOUND)) 757 + ACPI_DEBUG_PRINT((ACPI_DB_INFO, "error in " 758 + "evaluate _PSW\n")); 759 + } 722 760 /* Power button, Lid switch always enable wakeup */ 723 761 if (!acpi_match_device_ids(device, button_device_ids)) 724 762 device->wakeup.flags.run_wake = 1; ··· 922 882 static int 923 883 acpi_video_bus_match(struct acpi_device *device) 924 884 { 925 - acpi_handle h_dummy1; 926 - acpi_handle h_dummy2; 927 - acpi_handle h_dummy3; 928 - 885 + acpi_handle h_dummy; 929 886 930 887 if (!device) 931 888 return -EINVAL; ··· 932 895 */ 933 896 934 897 /* Does this device able to support video switching ? */ 935 - if (ACPI_SUCCESS(acpi_get_handle(device->handle, "_DOD", &h_dummy1)) && 936 - ACPI_SUCCESS(acpi_get_handle(device->handle, "_DOS", &h_dummy2))) 898 + if (ACPI_SUCCESS(acpi_get_handle(device->handle, "_DOD", &h_dummy)) && 899 + ACPI_SUCCESS(acpi_get_handle(device->handle, "_DOS", &h_dummy))) 937 900 return 0; 938 901 939 902 /* Does this device able to retrieve a video ROM ? 
*/ 940 - if (ACPI_SUCCESS(acpi_get_handle(device->handle, "_ROM", &h_dummy1))) 903 + if (ACPI_SUCCESS(acpi_get_handle(device->handle, "_ROM", &h_dummy))) 941 904 return 0; 942 905 943 906 /* Does this device able to configure which video head to be POSTed ? */ 944 - if (ACPI_SUCCESS(acpi_get_handle(device->handle, "_VPO", &h_dummy1)) && 945 - ACPI_SUCCESS(acpi_get_handle(device->handle, "_GPD", &h_dummy2)) && 946 - ACPI_SUCCESS(acpi_get_handle(device->handle, "_SPD", &h_dummy3))) 907 + if (ACPI_SUCCESS(acpi_get_handle(device->handle, "_VPO", &h_dummy)) && 908 + ACPI_SUCCESS(acpi_get_handle(device->handle, "_GPD", &h_dummy)) && 909 + ACPI_SUCCESS(acpi_get_handle(device->handle, "_SPD", &h_dummy))) 947 910 return 0; 948 911 949 912 return -ENODEV;
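The scan.c hunk disables a wake device's wake capability by preferring the newer _DSW control method and falling back to _PSW only when _DSW is absent (AE_NOT_FOUND). A stub model of that fallback control flow (every name here is invented; the real code calls acpi_evaluate_object() with the argument lists shown above):

```c
#include <assert.h>
#include <stdbool.h>

enum status { OK, NOT_FOUND, FAILED };

/* Hypothetical device: which control methods its firmware provides,
 * and which one ended up disabling wake (3 = _DSW, 1 = _PSW). */
struct device {
    bool has_dsw;
    bool has_psw;
    int wake_disabled_via;
};

static enum status eval_dsw(struct device *d)
{
    if (!d->has_dsw)
        return NOT_FOUND;
    d->wake_disabled_via = 3;   /* _DSW takes three arguments */
    return OK;
}

static enum status eval_psw(struct device *d)
{
    if (!d->has_psw)
        return NOT_FOUND;
    d->wake_disabled_via = 1;   /* _PSW takes one argument */
    return OK;
}

/* Disable wake: try _DSW first, fall back to _PSW only on NOT_FOUND
 * (any other failure is reported, not retried). */
enum status disable_wake(struct device *d)
{
    enum status s = eval_dsw(d);

    if (s == NOT_FOUND)
        s = eval_psw(d);
    return s;
}
```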
+21 -21
drivers/acpi/sleep/main.c
··· 51 51 } 52 52 53 53 #ifdef CONFIG_SUSPEND 54 - static struct platform_suspend_ops acpi_pm_ops; 54 + static struct platform_suspend_ops acpi_suspend_ops; 55 55 56 56 extern void do_suspend_lowlevel(void); 57 57 ··· 65 65 static int init_8259A_after_S1; 66 66 67 67 /** 68 - * acpi_pm_begin - Set the target system sleep state to the state 68 + * acpi_suspend_begin - Set the target system sleep state to the state 69 69 * associated with given @pm_state, if supported. 70 70 */ 71 71 72 - static int acpi_pm_begin(suspend_state_t pm_state) 72 + static int acpi_suspend_begin(suspend_state_t pm_state) 73 73 { 74 74 u32 acpi_state = acpi_suspend_states[pm_state]; 75 75 int error = 0; ··· 85 85 } 86 86 87 87 /** 88 - * acpi_pm_prepare - Do preliminary suspend work. 88 + * acpi_suspend_prepare - Do preliminary suspend work. 89 89 * 90 90 * If necessary, set the firmware waking vector and do arch-specific 91 91 * nastiness to get the wakeup code to the waking vector. 92 92 */ 93 93 94 - static int acpi_pm_prepare(void) 94 + static int acpi_suspend_prepare(void) 95 95 { 96 96 int error = acpi_sleep_prepare(acpi_target_sleep_state); 97 97 ··· 104 104 } 105 105 106 106 /** 107 - * acpi_pm_enter - Actually enter a sleep state. 107 + * acpi_suspend_enter - Actually enter a sleep state. 108 108 * @pm_state: ignored 109 109 * 110 110 * Flush caches and go to sleep. For STR we have to call arch-specific ··· 112 112 * It's unfortunate, but it works. Please fix if you're feeling frisky. 113 113 */ 114 114 115 - static int acpi_pm_enter(suspend_state_t pm_state) 115 + static int acpi_suspend_enter(suspend_state_t pm_state) 116 116 { 117 117 acpi_status status = AE_OK; 118 118 unsigned long flags = 0; ··· 169 169 } 170 170 171 171 /** 172 - * acpi_pm_finish - Instruct the platform to leave a sleep state. 172 + * acpi_suspend_finish - Instruct the platform to leave a sleep state. 173 173 * 174 174 * This is called after we wake back up (or if entering the sleep state 175 175 * failed). 
176 176 */ 177 177 178 - static void acpi_pm_finish(void) 178 + static void acpi_suspend_finish(void) 179 179 { 180 180 u32 acpi_state = acpi_target_sleep_state; 181 181 ··· 196 196 } 197 197 198 198 /** 199 - * acpi_pm_end - Finish up suspend sequence. 199 + * acpi_suspend_end - Finish up suspend sequence. 200 200 */ 201 201 202 - static void acpi_pm_end(void) 202 + static void acpi_suspend_end(void) 203 203 { 204 204 /* 205 - * This is necessary in case acpi_pm_finish() is not called during a 205 + * This is necessary in case acpi_suspend_finish() is not called during a 206 206 * failing transition to a sleep state. 207 207 */ 208 208 acpi_target_sleep_state = ACPI_STATE_S0; 209 209 } 210 210 211 - static int acpi_pm_state_valid(suspend_state_t pm_state) 211 + static int acpi_suspend_state_valid(suspend_state_t pm_state) 212 212 { 213 213 u32 acpi_state; 214 214 ··· 224 224 } 225 225 } 226 226 227 - static struct platform_suspend_ops acpi_pm_ops = { 228 - .valid = acpi_pm_state_valid, 229 - .begin = acpi_pm_begin, 230 - .prepare = acpi_pm_prepare, 231 - .enter = acpi_pm_enter, 232 - .finish = acpi_pm_finish, 233 - .end = acpi_pm_end, 227 + static struct platform_suspend_ops acpi_suspend_ops = { 228 + .valid = acpi_suspend_state_valid, 229 + .begin = acpi_suspend_begin, 230 + .prepare = acpi_suspend_prepare, 231 + .enter = acpi_suspend_enter, 232 + .finish = acpi_suspend_finish, 233 + .end = acpi_suspend_end, 234 234 }; 235 235 236 236 /* ··· 492 492 } 493 493 } 494 494 495 - suspend_set_ops(&acpi_pm_ops); 495 + suspend_set_ops(&acpi_suspend_ops); 496 496 #endif 497 497 498 498 #ifdef CONFIG_HIBERNATION
+1 -1
drivers/acpi/tables/tbfadt.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+24 -10
drivers/acpi/tables/tbfind.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 70 70 { 71 71 acpi_native_uint i; 72 72 acpi_status status; 73 + struct acpi_table_header header; 73 74 74 75 ACPI_FUNCTION_TRACE(tb_find_table); 75 76 77 + /* Normalize the input strings */ 78 + 79 + ACPI_MEMSET(&header, 0, sizeof(struct acpi_table_header)); 80 + ACPI_STRNCPY(header.signature, signature, ACPI_NAME_SIZE); 81 + ACPI_STRNCPY(header.oem_id, oem_id, ACPI_OEM_ID_SIZE); 82 + ACPI_STRNCPY(header.oem_table_id, oem_table_id, ACPI_OEM_TABLE_ID_SIZE); 83 + 84 + /* Search for the table */ 85 + 76 86 for (i = 0; i < acpi_gbl_root_table_list.count; ++i) { 77 87 if (ACPI_MEMCMP(&(acpi_gbl_root_table_list.tables[i].signature), 78 - signature, ACPI_NAME_SIZE)) { 88 + header.signature, ACPI_NAME_SIZE)) { 79 89 80 90 /* Not the requested table */ 81 91 ··· 114 104 115 105 if (!ACPI_MEMCMP 116 106 (acpi_gbl_root_table_list.tables[i].pointer->signature, 117 - signature, ACPI_NAME_SIZE) && (!oem_id[0] 118 - || 119 - !ACPI_MEMCMP 120 - (acpi_gbl_root_table_list. 121 - tables[i].pointer->oem_id, 122 - oem_id, ACPI_OEM_ID_SIZE)) 107 + header.signature, ACPI_NAME_SIZE) && (!oem_id[0] 108 + || 109 + !ACPI_MEMCMP 110 + (acpi_gbl_root_table_list. 111 + tables[i].pointer-> 112 + oem_id, 113 + header.oem_id, 114 + ACPI_OEM_ID_SIZE)) 123 115 && (!oem_table_id[0] 124 116 || !ACPI_MEMCMP(acpi_gbl_root_table_list.tables[i]. 125 - pointer->oem_table_id, oem_table_id, 117 + pointer->oem_table_id, 118 + header.oem_table_id, 126 119 ACPI_OEM_TABLE_ID_SIZE))) { 127 120 *table_index = i; 128 121 129 122 ACPI_DEBUG_PRINT((ACPI_DB_TABLES, 130 - "Found table [%4.4s]\n", signature)); 123 + "Found table [%4.4s]\n", 124 + header.signature)); 131 125 return_ACPI_STATUS(AE_OK); 132 126 } 133 127 }
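tbfind.c now copies the caller's strings into a zeroed acpi_table_header before comparing, so a short input is NUL-padded to the same fixed width as the root-table fields and a plain memory compare works. A sketch of that normalization for a 4-byte signature field (function and buffer names are invented):

```c
#include <assert.h>
#include <string.h>

#define SIG_SIZE 4

/* Compare a caller-supplied signature string against a fixed-width
 * table field, after normalizing the input to the same width: zero the
 * buffer, then strncpy at most SIG_SIZE bytes, exactly as the
 * ACPI_MEMSET/ACPI_STRNCPY pair does in the hunk above. */
int signature_matches(const char *input, const char table_field[SIG_SIZE])
{
    char normalized[SIG_SIZE];

    memset(normalized, 0, SIG_SIZE);
    strncpy(normalized, input, SIG_SIZE);

    return memcmp(normalized, table_field, SIG_SIZE) == 0;
}
```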
+16 -8
drivers/acpi/tables/tbinstal.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 125 125 126 126 /* The table must be either an SSDT or a PSDT or an OEMx */ 127 127 128 - if ((!ACPI_COMPARE_NAME(table_desc->pointer->signature, ACPI_SIG_PSDT)) 129 - && 130 - (!ACPI_COMPARE_NAME(table_desc->pointer->signature, ACPI_SIG_SSDT)) 131 - && (strncmp(table_desc->pointer->signature, "OEM", 3))) { 132 - ACPI_ERROR((AE_INFO, 133 - "Table has invalid signature [%4.4s], must be SSDT, PSDT or OEMx", 134 - table_desc->pointer->signature)); 128 + if (!ACPI_COMPARE_NAME(table_desc->pointer->signature, ACPI_SIG_PSDT)&& 129 + !ACPI_COMPARE_NAME(table_desc->pointer->signature, ACPI_SIG_SSDT)&& 130 + strncmp(table_desc->pointer->signature, "OEM", 3)) { 131 + /* Check for a printable name */ 132 + if (acpi_ut_valid_acpi_name( 133 + *(u32 *) table_desc->pointer->signature)) { 134 + ACPI_ERROR((AE_INFO, "Table has invalid signature " 135 + "[%4.4s], must be SSDT or PSDT", 136 + table_desc->pointer->signature)); 137 + } else { 138 + ACPI_ERROR((AE_INFO, "Table has invalid signature " 139 + "(0x%8.8X), must be SSDT or PSDT", 140 + *(u32 *) table_desc->pointer->signature)); 141 + } 135 142 return_ACPI_STATUS(AE_BAD_SIGNATURE); 136 143 } 137 144 ··· 169 162 170 163 acpi_tb_delete_table(table_desc); 171 164 *table_index = i; 165 + status = AE_ALREADY_EXISTS; 172 166 goto release; 173 167 } 174 168
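tbinstal.c now picks an error format based on whether the bad signature is printable, via acpi_ut_valid_acpi_name(): a printable name is shown as text, anything else as a hex value. A simplified predicate (assuming the rule is 4 characters drawn from uppercase letters, digits, and underscore; the real ACPICA predicate may differ in detail):

```c
#include <assert.h>

static int valid_acpi_name_char(char c)
{
    return (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') || c == '_';
}

/* Return nonzero when all 4 signature bytes are printable ACPI name
 * characters -- the test that selects the "[%4.4s]" error format over
 * the "(0x%8.8X)" one in the hunk above. */
int valid_acpi_name(const char sig[4])
{
    int i;

    for (i = 0; i < 4; i++)
        if (!valid_acpi_name_char(sig[i]))
            return 0;
    return 1;
}
```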
+2 -2
drivers/acpi/tables/tbutils.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 212 212 213 213 if (checksum) { 214 214 ACPI_WARNING((AE_INFO, 215 - "Incorrect checksum in table [%4.4s] - %2.2X, should be %2.2X", 215 + "Incorrect checksum in table [%4.4s] - %2.2X, should be %2.2X", 216 216 table->signature, table->checksum, 217 217 (u8) (table->checksum - checksum))); 218 218
+90 -1
drivers/acpi/tables/tbxface.c
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 633 633 } 634 634 635 635 ACPI_EXPORT_SYMBOL(acpi_load_tables) 636 + 637 + 638 + /******************************************************************************* 639 + * 640 + * FUNCTION: acpi_install_table_handler 641 + * 642 + * PARAMETERS: Handler - Table event handler 643 + * Context - Value passed to the handler on each event 644 + * 645 + * RETURN: Status 646 + * 647 + * DESCRIPTION: Install table event handler 648 + * 649 + ******************************************************************************/ 650 + acpi_status 651 + acpi_install_table_handler(acpi_tbl_handler handler, void *context) 652 + { 653 + acpi_status status; 654 + 655 + ACPI_FUNCTION_TRACE(acpi_install_table_handler); 656 + 657 + if (!handler) { 658 + return_ACPI_STATUS(AE_BAD_PARAMETER); 659 + } 660 + 661 + status = acpi_ut_acquire_mutex(ACPI_MTX_EVENTS); 662 + if (ACPI_FAILURE(status)) { 663 + return_ACPI_STATUS(status); 664 + } 665 + 666 + /* Don't allow more than one handler */ 667 + 668 + if (acpi_gbl_table_handler) { 669 + status = AE_ALREADY_EXISTS; 670 + goto cleanup; 671 + } 672 + 673 + /* Install the handler */ 674 + 675 + acpi_gbl_table_handler = handler; 676 + acpi_gbl_table_handler_context = context; 677 + 678 + cleanup: 679 + (void)acpi_ut_release_mutex(ACPI_MTX_EVENTS); 680 + return_ACPI_STATUS(status); 681 + } 682 + 683 + ACPI_EXPORT_SYMBOL(acpi_install_table_handler) 684 + 685 + /******************************************************************************* 686 + * 687 + * FUNCTION: acpi_remove_table_handler 688 + * 689 + * PARAMETERS: Handler - Table event handler that was installed 690 + * previously. 
691 + * 692 + * RETURN: Status 693 + * 694 + * DESCRIPTION: Remove table event handler 695 + * 696 + ******************************************************************************/ 697 + acpi_status acpi_remove_table_handler(acpi_tbl_handler handler) 698 + { 699 + acpi_status status; 700 + 701 + ACPI_FUNCTION_TRACE(acpi_remove_table_handler); 702 + 703 + status = acpi_ut_acquire_mutex(ACPI_MTX_EVENTS); 704 + if (ACPI_FAILURE(status)) { 705 + return_ACPI_STATUS(status); 706 + } 707 + 708 + /* Make sure that the installed handler is the same */ 709 + 710 + if (!handler || handler != acpi_gbl_table_handler) { 711 + status = AE_BAD_PARAMETER; 712 + goto cleanup; 713 + } 714 + 715 + /* Remove the handler */ 716 + 717 + acpi_gbl_table_handler = NULL; 718 + 719 + cleanup: 720 + (void)acpi_ut_release_mutex(ACPI_MTX_EVENTS); 721 + return_ACPI_STATUS(status); 722 + } 723 + 724 + ACPI_EXPORT_SYMBOL(acpi_remove_table_handler) 636 725 637 726 638 727 static int __init acpi_no_auto_ssdt_setup(char *s) {
+1 -1
drivers/acpi/tables/tbxfroot.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+19 -1
drivers/acpi/thermal.c
··· 889 889 static int thermal_get_temp(struct thermal_zone_device *thermal, char *buf) 890 890 { 891 891 struct acpi_thermal *tz = thermal->devdata; 892 + int result; 892 893 893 894 if (!tz) 894 895 return -EINVAL; 896 + 897 + result = acpi_thermal_get_temperature(tz); 898 + if (result) 899 + return result; 895 900 896 901 return sprintf(buf, "%ld\n", KELVIN_TO_MILLICELSIUS(tz->temperature)); 897 902 } ··· 1022 1017 return -EINVAL; 1023 1018 } 1024 1019 1020 + static int thermal_get_crit_temp(struct thermal_zone_device *thermal, 1021 + unsigned long *temperature) { 1022 + struct acpi_thermal *tz = thermal->devdata; 1023 + 1024 + if (tz->trips.critical.flags.valid) { 1025 + *temperature = KELVIN_TO_MILLICELSIUS( 1026 + tz->trips.critical.temperature); 1027 + return 0; 1028 + } else 1029 + return -EINVAL; 1030 + } 1031 + 1025 1032 typedef int (*cb)(struct thermal_zone_device *, int, 1026 1033 struct thermal_cooling_device *); 1027 1034 static int acpi_thermal_cooling_device_cb(struct thermal_zone_device *thermal, ··· 1125 1108 .set_mode = thermal_set_mode, 1126 1109 .get_trip_type = thermal_get_trip_type, 1127 1110 .get_trip_temp = thermal_get_trip_temp, 1111 + .get_crit_temp = thermal_get_crit_temp, 1128 1112 }; 1129 1113 1130 1114 static int acpi_thermal_register_thermal_zone(struct acpi_thermal *tz) ··· 1146 1128 1147 1129 for (i = 0; i < ACPI_THERMAL_MAX_ACTIVE && 1148 1130 tz->trips.active[i].flags.valid; i++, trips++); 1149 - tz->thermal_zone = thermal_zone_device_register("ACPI thermal zone", 1131 + tz->thermal_zone = thermal_zone_device_register("acpitz", 1150 1132 trips, tz, &acpi_thermal_zone_ops); 1151 1133 if (IS_ERR(tz->thermal_zone)) 1152 1134 return -ENODEV;
+2 -2
drivers/acpi/utilities/utalloc.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 147 147 148 148 if (acpi_gbl_display_final_mem_stats) { 149 149 ACPI_STRCPY(buffer, "MEMORY"); 150 - acpi_db_display_statistics(buffer); 150 + (void)acpi_db_display_statistics(buffer); 151 151 } 152 152 #endif 153 153
+1 -1
drivers/acpi/utilities/utcache.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+51 -10
drivers/acpi/utilities/utcopy.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 43 43 44 44 #include <acpi/acpi.h> 45 45 #include <acpi/amlcode.h> 46 + #include <acpi/acnamesp.h> 47 + 46 48 47 49 #define _COMPONENT ACPI_UTILITIES 48 50 ACPI_MODULE_NAME("utcopy") ··· 174 172 175 173 case ACPI_TYPE_LOCAL_REFERENCE: 176 174 177 - /* 178 - * This is an object reference. Attempt to dereference it. 179 - */ 175 + /* This is an object reference. */ 176 + 180 177 switch (internal_object->reference.opcode) { 181 178 case AML_INT_NAMEPATH_OP: 182 179 183 180 /* For namepath, return the object handle ("reference") */ 184 181 185 182 default: 186 - /* 187 - * Use the object type of "Any" to indicate a reference 188 - * to object containing a handle to an ACPI named object. 189 - */ 190 - external_object->type = ACPI_TYPE_ANY; 183 + 184 + /* We are referring to the namespace node */ 185 + 191 186 external_object->reference.handle = 192 187 internal_object->reference.node; 188 + external_object->reference.actual_type = 189 + acpi_ns_get_type(internal_object->reference.node); 193 190 break; 194 191 } 195 192 break; ··· 216 215 /* 217 216 * There is no corresponding external object type 218 217 */ 218 + ACPI_ERROR((AE_INFO, 219 + "Unsupported object type, cannot convert to external object: %s", 220 + acpi_ut_get_type_name(ACPI_GET_OBJECT_TYPE 221 + (internal_object)))); 222 + 219 223 return_ACPI_STATUS(AE_SUPPORT); 220 224 } 221 225 ··· 461 455 case ACPI_TYPE_STRING: 462 456 case ACPI_TYPE_BUFFER: 463 457 case ACPI_TYPE_INTEGER: 458 + case ACPI_TYPE_LOCAL_REFERENCE: 464 459 465 460 internal_object = acpi_ut_create_internal_object((u8) 466 461 external_object-> ··· 471 464 } 472 465 break; 473 466 467 + case ACPI_TYPE_ANY: /* This is the case for a NULL object 
*/ 468 + 469 + *ret_internal_object = NULL; 470 + return_ACPI_STATUS(AE_OK); 471 + 474 472 default: 475 473 /* All other types are not supported */ 474 + 475 + ACPI_ERROR((AE_INFO, 476 + "Unsupported object type, cannot convert to internal object: %s", 477 + acpi_ut_get_type_name(external_object->type))); 476 478 477 479 return_ACPI_STATUS(AE_SUPPORT); 478 480 } ··· 518 502 external_object->buffer.length); 519 503 520 504 internal_object->buffer.length = external_object->buffer.length; 505 + 506 + /* Mark buffer data valid */ 507 + 508 + internal_object->buffer.flags |= AOPOBJ_DATA_VALID; 521 509 break; 522 510 523 511 case ACPI_TYPE_INTEGER: 524 512 525 513 internal_object->integer.value = external_object->integer.value; 514 + break; 515 + 516 + case ACPI_TYPE_LOCAL_REFERENCE: 517 + 518 + /* TBD: should validate incoming handle */ 519 + 520 + internal_object->reference.opcode = AML_INT_NAMEPATH_OP; 521 + internal_object->reference.node = 522 + external_object->reference.handle; 526 523 break; 527 524 528 525 default: ··· 599 570 600 571 /* Truncate package and delete it */ 601 572 602 - package_object->package.count = i; 573 + package_object->package.count = (u32) i; 603 574 package_elements[i] = NULL; 604 575 acpi_ut_remove_reference(package_object); 605 576 return_ACPI_STATUS(status); 606 577 } 607 578 } 579 + 580 + /* Mark package data valid */ 581 + 582 + package_object->package.flags |= AOPOBJ_DATA_VALID; 608 583 609 584 *internal_object = package_object; 610 585 return_ACPI_STATUS(status); ··· 742 709 /* 743 710 * We copied the reference object, so we now must add a reference 744 711 * to the object pointed to by the reference 712 + * 713 + * DDBHandle reference (from Load/load_table) is a special reference; 714 + * its Reference.Object is the table index, so it does not need to 715 + * increase the reference count 745 716 */ 717 + if (source_desc->reference.opcode == AML_LOAD_OP) { 718 + break; 719 + } 720 + 
acpi_ut_add_reference(source_desc->reference.object); 747 722 break; 748 723
+12 -7
drivers/acpi/utilities/utdebug.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 68 68 69 69 void acpi_ut_init_stack_ptr_trace(void) 70 70 { 71 - u32 current_sp; 71 + acpi_size current_sp; 72 72 73 - acpi_gbl_entry_stack_pointer = ACPI_PTR_DIFF(&current_sp, NULL); 73 + acpi_gbl_entry_stack_pointer = &current_sp; 74 74 } 75 75 76 76 /******************************************************************************* ··· 89 89 { 90 90 acpi_size current_sp; 91 91 92 - current_sp = ACPI_PTR_DIFF(&current_sp, NULL); 93 - 94 - if (current_sp < acpi_gbl_lowest_stack_pointer) { 95 - acpi_gbl_lowest_stack_pointer = current_sp; 92 + if (&current_sp < acpi_gbl_lowest_stack_pointer) { 93 + acpi_gbl_lowest_stack_pointer = &current_sp; 96 94 } 97 95 98 96 if (acpi_gbl_nesting_level > acpi_gbl_deepest_nesting) { ··· 201 203 202 204 va_start(args, format); 203 205 acpi_os_vprintf(format, args); 206 + va_end(args); 204 207 } 205 208 206 209 ACPI_EXPORT_SYMBOL(acpi_ut_debug_print) ··· 239 240 240 241 va_start(args, format); 241 242 acpi_os_vprintf(format, args); 243 + va_end(args); 242 244 } 243 245 244 246 ACPI_EXPORT_SYMBOL(acpi_ut_debug_print_raw) ··· 523 523 acpi_native_uint j; 524 524 u32 temp32; 525 525 u8 buf_char; 526 + 527 + if (!buffer) { 528 + acpi_os_printf("Null Buffer Pointer in DumpBuffer!\n"); 529 + return; 530 + } 526 531 527 532 if ((count < 4) || (count & 0x01)) { 528 533 display = DB_BYTE_DISPLAY;
+18 -5
drivers/acpi/utilities/utdelete.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 158 158 "***** Mutex %p, OS Mutex %p\n", 159 159 object, object->mutex.os_mutex)); 160 160 161 - if (object->mutex.os_mutex == acpi_gbl_global_lock_mutex) { 161 + if (object == acpi_gbl_global_lock_mutex) { 162 162 163 163 /* Global Lock has extra semaphore */ 164 164 ··· 245 245 246 246 ACPI_DEBUG_PRINT((ACPI_DB_ALLOCATIONS, 247 247 "***** Buffer Field %p\n", object)); 248 + 249 + second_desc = acpi_ns_get_secondary_object(object); 250 + if (second_desc) { 251 + acpi_ut_delete_object_desc(second_desc); 252 + } 253 + break; 254 + 255 + case ACPI_TYPE_LOCAL_BANK_FIELD: 256 + 257 + ACPI_DEBUG_PRINT((ACPI_DB_ALLOCATIONS, 258 + "***** Bank Field %p\n", object)); 248 259 249 260 second_desc = acpi_ns_get_secondary_object(object); 250 261 if (second_desc) { ··· 535 524 536 525 case ACPI_TYPE_LOCAL_REFERENCE: 537 526 /* 538 - * The target of an Index (a package, string, or buffer) must track 539 - * changes to the ref count of the index. 527 + * The target of an Index (a package, string, or buffer) or a named 528 + * reference must track changes to the ref count of the index or 529 + * target object. 540 530 */ 541 - if (object->reference.opcode == AML_INDEX_OP) { 531 + if ((object->reference.opcode == AML_INDEX_OP) || 532 + (object->reference.opcode == AML_INT_NAMEPATH_OP)) { 542 533 next_object = object->reference.object; 543 534 } 544 535 break;
+1 -1
drivers/acpi/utilities/uteval.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+46 -3
drivers/acpi/utilities/utglobal.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 602 602 603 603 return (acpi_gbl_mutex_names[mutex_id]); 604 604 } 605 + 606 + /******************************************************************************* 607 + * 608 + * FUNCTION: acpi_ut_get_notify_name 609 + * 610 + * PARAMETERS: notify_value - Value from the Notify() request 611 + * 612 + * RETURN: String corresponding to the Notify Value. 613 + * 614 + * DESCRIPTION: Translate a Notify Value to a notify namestring. 615 + * 616 + ******************************************************************************/ 617 + 618 + /* Names for Notify() values, used for debug output */ 619 + 620 + static const char *acpi_gbl_notify_value_names[] = { 621 + "Bus Check", 622 + "Device Check", 623 + "Device Wake", 624 + "Eject Request", 625 + "Device Check Light", 626 + "Frequency Mismatch", 627 + "Bus Mode Mismatch", 628 + "Power Fault", 629 + "Capabilities Check", 630 + "Device PLD Check", 631 + "Reserved", 632 + "System Locality Update" 633 + }; 634 + 635 + const char *acpi_ut_get_notify_name(u32 notify_value) 636 + { 637 + 638 + if (notify_value <= ACPI_NOTIFY_MAX) { 639 + return (acpi_gbl_notify_value_names[notify_value]); 640 + } else if (notify_value <= ACPI_MAX_SYS_NOTIFY) { 641 + return ("Reserved"); 642 + } else { /* Greater or equal to 0x80 */ 643 + 644 + return ("**Device Specific**"); 645 + } 646 + } 605 647 #endif 606 648 607 649 /******************************************************************************* ··· 717 675 acpi_gbl_gpe_fadt_blocks[0] = NULL; 718 676 acpi_gbl_gpe_fadt_blocks[1] = NULL; 719 677 720 - /* Global notify handlers */ 678 + /* Global handlers */ 721 679 722 680 acpi_gbl_system_notify.handler = NULL; 723 681 
acpi_gbl_device_notify.handler = NULL; 724 682 acpi_gbl_exception_handler = NULL; 725 683 acpi_gbl_init_handler = NULL; 684 + acpi_gbl_table_handler = NULL; 726 685 727 686 /* Global Lock support */ 728 687 ··· 765 722 acpi_gbl_root_node_struct.flags = ANOBJ_END_OF_PEER_LIST; 766 723 767 724 #ifdef ACPI_DEBUG_OUTPUT 768 - acpi_gbl_lowest_stack_pointer = ACPI_SIZE_MAX; 725 + acpi_gbl_lowest_stack_pointer = ACPI_CAST_PTR(acpi_size, ACPI_SIZE_MAX); 769 726 #endif 770 727 771 728 #ifdef ACPI_DBG_TRACK_ALLOCATIONS
+4 -1
drivers/acpi/utilities/utinit.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 125 125 acpi_gbl_startup_flags = 0; 126 126 ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Shutting down ACPI Subsystem\n")); 127 127 128 + #ifndef ACPI_ASL_COMPILER 129 + 128 130 /* Close the acpi_event Handling */ 129 131 130 132 acpi_ev_terminate(); 133 + #endif 131 134 132 135 /* Close the Namespace */ 133 136
+2 -2
drivers/acpi/utilities/utmath.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 276 276 *out_quotient = in_dividend / divisor; 277 277 } 278 278 if (out_remainder) { 279 - *out_remainder = (u32) in_dividend % divisor; 279 + *out_remainder = (u32) (in_dividend % divisor); 280 280 } 281 281 282 282 return_ACPI_STATUS(AE_OK);
+5 -1
drivers/acpi/utilities/utmisc.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 1033 1033 va_start(args, format); 1034 1034 acpi_os_vprintf(format, args); 1035 1035 acpi_os_printf(" [%X]\n", ACPI_CA_VERSION); 1036 + va_end(args); 1036 1037 } 1037 1038 1038 1039 void ACPI_INTERNAL_VAR_XFACE ··· 1062 1061 va_start(args, format); 1063 1062 acpi_os_vprintf(format, args); 1064 1063 acpi_os_printf(" [%X]\n", ACPI_CA_VERSION); 1064 + va_end(args); 1065 + va_end(args); 1065 1066 } 1066 1067 1067 1068 void ACPI_INTERNAL_VAR_XFACE ··· 1080 1077 va_start(args, format); 1081 1078 acpi_os_vprintf(format, args); 1082 1079 acpi_os_printf("\n"); 1080 + va_end(args); 1083 1081 }
+1 -1
drivers/acpi/utilities/utmutex.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+4 -4
drivers/acpi/utilities/utobject.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 107 107 switch (type) { 108 108 case ACPI_TYPE_REGION: 109 109 case ACPI_TYPE_BUFFER_FIELD: 110 + case ACPI_TYPE_LOCAL_BANK_FIELD: 110 111 111 112 /* These types require a secondary object */ 112 113 ··· 470 469 case ACPI_TYPE_PROCESSOR: 471 470 case ACPI_TYPE_POWER: 472 471 473 - /* 474 - * No extra data for these types 475 - */ 472 + /* No extra data for these types */ 473 + 476 474 break; 477 475 478 476 case ACPI_TYPE_LOCAL_REFERENCE:
+1 -1
drivers/acpi/utilities/utresrc.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
drivers/acpi/utilities/utstate.c
··· 5 5 ******************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+22 -19
drivers/acpi/utilities/utxface.c
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 49 49 #define _COMPONENT ACPI_UTILITIES 50 50 ACPI_MODULE_NAME("utxface") 51 51 52 + #ifndef ACPI_ASL_COMPILER 52 53 /******************************************************************************* 53 54 * 54 55 * FUNCTION: acpi_initialize_subsystem ··· 193 192 } 194 193 } 195 194 196 - /* 197 - * Complete the GPE initialization for the GPE blocks defined in the FADT 198 - * (GPE block 0 and 1). 199 - * 200 - * Note1: This is where the _PRW methods are executed for the GPEs. These 201 - * methods can only be executed after the SCI and Global Lock handlers are 202 - * installed and initialized. 203 - * 204 - * Note2: Currently, there seems to be no need to run the _REG methods 205 - * before execution of the _PRW methods and enabling of the GPEs. 206 - */ 207 - if (!(flags & ACPI_NO_EVENT_INIT)) { 208 - status = acpi_ev_install_fadt_gpes(); 209 - if (ACPI_FAILURE(status)) { 210 - return (status); 211 - } 212 - } 213 - 214 195 return_ACPI_STATUS(status); 215 196 } 216 197 ··· 263 280 } 264 281 265 282 /* 283 + * Complete the GPE initialization for the GPE blocks defined in the FADT 284 + * (GPE block 0 and 1). 285 + * 286 + * Note1: This is where the _PRW methods are executed for the GPEs. These 287 + * methods can only be executed after the SCI and Global Lock handlers are 288 + * installed and initialized. 289 + * 290 + * Note2: Currently, there seems to be no need to run the _REG methods 291 + * before execution of the _PRW methods and enabling of the GPEs. 
292 + */ 293 + if (!(flags & ACPI_NO_EVENT_INIT)) { 294 + status = acpi_ev_install_fadt_gpes(); 295 + if (ACPI_FAILURE(status)) 296 + return (status); 297 + } 298 + 299 + /* 266 300 * Empty the caches (delete the cached objects) on the assumption that 267 301 * the table load filled them up more than they will be at runtime -- 268 302 * thus wasting non-paged memory. ··· 292 292 293 293 ACPI_EXPORT_SYMBOL(acpi_initialize_objects) 294 294 295 + #endif 295 296 /******************************************************************************* 296 297 * 297 298 * FUNCTION: acpi_terminate ··· 336 335 } 337 336 338 337 ACPI_EXPORT_SYMBOL(acpi_terminate) 338 + #ifndef ACPI_ASL_COMPILER 339 339 #ifdef ACPI_FUTURE_USAGE 340 340 /******************************************************************************* 341 341 * ··· 492 490 } 493 491 494 492 ACPI_EXPORT_SYMBOL(acpi_purge_cached_objects) 493 + #endif
+1 -1
drivers/acpi/utils.c
··· 398 398 399 399 element = &(package->package.elements[i]); 400 400 401 - if (element->type != ACPI_TYPE_ANY) { 401 + if (element->type != ACPI_TYPE_LOCAL_REFERENCE) { 402 402 status = AE_BAD_DATA; 403 403 printk(KERN_ERR PREFIX 404 404 "Expecting a [Reference] package element, found type %X\n",
+103 -96
drivers/acpi/video.c
··· 57 57 #define ACPI_VIDEO_NOTIFY_ZERO_BRIGHTNESS 0x88 58 58 #define ACPI_VIDEO_NOTIFY_DISPLAY_OFF 0x89 59 59 60 - #define ACPI_VIDEO_HEAD_INVALID (~0u - 1) 61 - #define ACPI_VIDEO_HEAD_END (~0u) 62 60 #define MAX_NAME_LEN 20 63 61 64 62 #define ACPI_VIDEO_DISPLAY_CRT 1 ··· 741 743 if (IS_ERR(device->cdev)) 742 744 return; 743 745 744 - if (device->cdev) { 745 - printk(KERN_INFO PREFIX 746 - "%s is registered as cooling_device%d\n", 747 - device->dev->dev.bus_id, device->cdev->id); 748 - result = sysfs_create_link(&device->dev->dev.kobj, 749 - &device->cdev->device.kobj, 750 - "thermal_cooling"); 751 - if (result) 752 - printk(KERN_ERR PREFIX "Create sysfs link\n"); 753 - result = sysfs_create_link(&device->cdev->device.kobj, 754 - &device->dev->dev.kobj, 755 - "device"); 756 - if (result) 757 - printk(KERN_ERR PREFIX "Create sysfs link\n"); 758 - } 746 + printk(KERN_INFO PREFIX 747 + "%s is registered as cooling_device%d\n", 748 + device->dev->dev.bus_id, device->cdev->id); 749 + result = sysfs_create_link(&device->dev->dev.kobj, 750 + &device->cdev->device.kobj, 751 + "thermal_cooling"); 752 + if (result) 753 + printk(KERN_ERR PREFIX "Create sysfs link\n"); 754 + result = sysfs_create_link(&device->cdev->device.kobj, 755 + &device->dev->dev.kobj, "device"); 756 + if (result) 757 + printk(KERN_ERR PREFIX "Create sysfs link\n"); 758 + 759 759 } 760 760 if (device->cap._DCS && device->cap._DSS){ 761 761 static int count = 0; ··· 1055 1059 1056 1060 static int acpi_video_device_add_fs(struct acpi_device *device) 1057 1061 { 1058 - struct proc_dir_entry *entry = NULL; 1062 + struct proc_dir_entry *entry, *device_dir; 1059 1063 struct acpi_video_device *vid_dev; 1060 - 1061 - 1062 - if (!device) 1063 - return -ENODEV; 1064 1064 1065 1065 vid_dev = acpi_driver_data(device); 1066 1066 if (!vid_dev) 1067 1067 return -ENODEV; 1068 1068 1069 - if (!acpi_device_dir(device)) { 1070 - acpi_device_dir(device) = proc_mkdir(acpi_device_bid(device), 1071 - vid_dev->video->dir); 
1072 - if (!acpi_device_dir(device)) 1073 - return -ENODEV; 1074 - acpi_device_dir(device)->owner = THIS_MODULE; 1075 - } 1069 + device_dir = proc_mkdir(acpi_device_bid(device), 1070 + vid_dev->video->dir); 1071 + if (!device_dir) 1072 + return -ENOMEM; 1073 + 1074 + device_dir->owner = THIS_MODULE; 1076 1075 1077 1076 /* 'info' [R] */ 1078 1077 entry = proc_create_data("info", S_IRUGO, acpi_device_dir(device), 1079 1078 &acpi_video_device_info_fops, acpi_driver_data(device)); 1080 1079 if (!entry) 1081 - return -ENODEV; 1080 + goto err_remove_dir; 1082 1081 1083 1082 /* 'state' [R/W] */ 1084 1083 acpi_video_device_state_fops.write = acpi_video_device_write_state; ··· 1082 1091 &acpi_video_device_state_fops, 1083 1092 acpi_driver_data(device)); 1084 1093 if (!entry) 1085 - return -ENODEV; 1094 + goto err_remove_info; 1086 1095 1087 1096 /* 'brightness' [R/W] */ 1088 1097 acpi_video_device_brightness_fops.write = ··· 1092 1101 &acpi_video_device_brightness_fops, 1093 1102 acpi_driver_data(device)); 1094 1103 if (!entry) 1095 - return -ENODEV; 1104 + goto err_remove_state; 1096 1105 1097 1106 /* 'EDID' [R] */ 1098 1107 entry = proc_create_data("EDID", S_IRUGO, acpi_device_dir(device), 1099 1108 &acpi_video_device_EDID_fops, 1100 1109 acpi_driver_data(device)); 1101 1110 if (!entry) 1102 - return -ENODEV; 1111 + goto err_remove_brightness; 1112 + 1103 1113 return 0; 1114 + 1115 + err_remove_brightness: 1116 + remove_proc_entry("brightness", device_dir); 1117 + err_remove_state: 1118 + remove_proc_entry("state", device_dir); 1119 + err_remove_info: 1120 + remove_proc_entry("info", device_dir); 1121 + err_remove_dir: 1122 + remove_proc_entry(acpi_device_bid(device), vid_dev->video->dir); 1123 + return -ENOMEM; 1104 1124 } 1105 1125 1106 1126 static int acpi_video_device_remove_fs(struct acpi_device *device) 1107 1127 { 1108 1128 struct acpi_video_device *vid_dev; 1129 + struct proc_dir_entry *device_dir; 1109 1130 1110 1131 vid_dev = acpi_driver_data(device); 1111 1132 
if (!vid_dev || !vid_dev->video || !vid_dev->video->dir) 1112 1133 return -ENODEV; 1113 1134 1114 - if (acpi_device_dir(device)) { 1115 - remove_proc_entry("info", acpi_device_dir(device)); 1116 - remove_proc_entry("state", acpi_device_dir(device)); 1117 - remove_proc_entry("brightness", acpi_device_dir(device)); 1118 - remove_proc_entry("EDID", acpi_device_dir(device)); 1135 + device_dir = acpi_device_dir(device); 1136 + if (device_dir) { 1137 + remove_proc_entry("info", device_dir); 1138 + remove_proc_entry("state", device_dir); 1139 + remove_proc_entry("brightness", device_dir); 1140 + remove_proc_entry("EDID", device_dir); 1119 1141 remove_proc_entry(acpi_device_bid(device), vid_dev->video->dir); 1120 1142 acpi_device_dir(device) = NULL; 1121 1143 } ··· 1335 1331 1336 1332 static int acpi_video_bus_add_fs(struct acpi_device *device) 1337 1333 { 1338 - struct proc_dir_entry *entry = NULL; 1339 - struct acpi_video_bus *video; 1334 + struct acpi_video_bus *video = acpi_driver_data(device); 1335 + struct proc_dir_entry *device_dir; 1336 + struct proc_dir_entry *entry; 1340 1337 1338 + device_dir = proc_mkdir(acpi_device_bid(device), acpi_video_dir); 1339 + if (!device_dir) 1340 + return -ENOMEM; 1341 1341 1342 - video = acpi_driver_data(device); 1343 - 1344 - if (!acpi_device_dir(device)) { 1345 - acpi_device_dir(device) = proc_mkdir(acpi_device_bid(device), 1346 - acpi_video_dir); 1347 - if (!acpi_device_dir(device)) 1348 - return -ENODEV; 1349 - video->dir = acpi_device_dir(device); 1350 - acpi_device_dir(device)->owner = THIS_MODULE; 1351 - } 1342 + device_dir->owner = THIS_MODULE; 1352 1343 1353 1344 /* 'info' [R] */ 1354 1345 entry = proc_create_data("info", S_IRUGO, acpi_device_dir(device), 1355 1346 &acpi_video_bus_info_fops, 1356 1347 acpi_driver_data(device)); 1357 1348 if (!entry) 1358 - return -ENODEV; 1349 + goto err_remove_dir; 1359 1350 1360 1351 /* 'ROM' [R] */ 1361 1352 entry = proc_create_data("ROM", S_IRUGO, acpi_device_dir(device), 1362 1353 
&acpi_video_bus_ROM_fops, 1363 1354 acpi_driver_data(device)); 1364 1355 if (!entry) 1365 - return -ENODEV; 1356 + goto err_remove_info; 1366 1357 1367 1358 /* 'POST_info' [R] */ 1368 1359 entry = proc_create_data("POST_info", S_IRUGO, acpi_device_dir(device), 1369 1360 &acpi_video_bus_POST_info_fops, 1370 1361 acpi_driver_data(device)); 1371 1362 if (!entry) 1372 - return -ENODEV; 1363 + goto err_remove_rom; 1373 1364 1374 1365 /* 'POST' [R/W] */ 1375 1366 acpi_video_bus_POST_fops.write = acpi_video_bus_write_POST; 1376 - entry = proc_create_data("POST", S_IFREG | S_IRUGO | S_IRUSR, 1367 + entry = proc_create_data("POST", S_IFREG | S_IRUGO | S_IWUSR, 1377 1368 acpi_device_dir(device), 1378 1369 &acpi_video_bus_POST_fops, 1379 1370 acpi_driver_data(device)); 1380 1371 if (!entry) 1381 - return -ENODEV; 1372 + goto err_remove_post_info; 1382 1373 1383 1374 /* 'DOS' [R/W] */ 1384 1375 acpi_video_bus_DOS_fops.write = acpi_video_bus_write_DOS; 1385 - entry = proc_create_data("DOS", S_IFREG | S_IRUGO | S_IRUSR, 1376 + entry = proc_create_data("DOS", S_IFREG | S_IRUGO | S_IWUSR, 1386 1377 acpi_device_dir(device), 1387 1378 &acpi_video_bus_DOS_fops, 1388 1379 acpi_driver_data(device)); 1389 1380 if (!entry) 1390 - return -ENODEV; 1381 + goto err_remove_post; 1391 1382 1383 + video->dir = acpi_device_dir(device) = device_dir; 1392 1384 return 0; 1385 + 1386 + err_remove_post: 1387 + remove_proc_entry("POST", device_dir); 1388 + err_remove_post_info: 1389 + remove_proc_entry("POST_info", device_dir); 1390 + err_remove_rom: 1391 + remove_proc_entry("ROM", device_dir); 1392 + err_remove_info: 1393 + remove_proc_entry("info", device_dir); 1394 + err_remove_dir: 1395 + remove_proc_entry(acpi_device_bid(device), acpi_video_dir); 1396 + return -ENOMEM; 1393 1397 } 1394 1398 1395 1399 static int acpi_video_bus_remove_fs(struct acpi_device *device) 1396 1400 { 1397 - struct acpi_video_bus *video; 1401 + struct proc_dir_entry *device_dir = acpi_device_dir(device); 1398 1402 1399 - 
1400      -	video = acpi_driver_data(device);
1401      -
1402      -	if (acpi_device_dir(device)) {
1403      -		remove_proc_entry("info", acpi_device_dir(device));
1404      -		remove_proc_entry("ROM", acpi_device_dir(device));
1405      -		remove_proc_entry("POST_info", acpi_device_dir(device));
1406      -		remove_proc_entry("POST", acpi_device_dir(device));
1407      -		remove_proc_entry("DOS", acpi_device_dir(device));
     1403 +	if (device_dir) {
     1404 +		remove_proc_entry("info", device_dir);
     1405 +		remove_proc_entry("ROM", device_dir);
     1406 +		remove_proc_entry("POST_info", device_dir);
     1407 +		remove_proc_entry("POST", device_dir);
     1408 +		remove_proc_entry("DOS", device_dir);
1408 1409		remove_proc_entry(acpi_device_bid(device), acpi_video_dir);
1409 1410		acpi_device_dir(device) = NULL;
1410 1411	}
···
1425 1416	static struct acpi_video_device_attrib*
1426 1417	acpi_video_get_device_attr(struct acpi_video_bus *video, unsigned long device_id)
1427 1418	{
1428      -	int count;
     1419 +	struct acpi_video_enumerated_device *ids;
     1420 +	int i;
1429 1421
1430      -	for(count = 0; count < video->attached_count; count++)
1431      -		if((video->attached_array[count].value.int_val & 0xffff) == device_id)
1432      -			return &(video->attached_array[count].value.attrib);
     1422 +	for (i = 0; i < video->attached_count; i++) {
     1423 +		ids = &video->attached_array[i];
     1424 +		if ((ids->value.int_val & 0xffff) == device_id)
     1425 +			return &ids->value.attrib;
     1426 +	}
     1427 +
1433 1428		return NULL;
1434 1429	}
···
1560 1547	acpi_video_device_bind(struct acpi_video_bus *video,
1561 1548			       struct acpi_video_device *device)
1562 1549	{
     1550 +	struct acpi_video_enumerated_device *ids;
1563 1551		int i;
1564 1552
1565      -	#define IDS_VAL(i) video->attached_array[i].value.int_val
1566      -	#define IDS_BIND(i) video->attached_array[i].bind_info
1567      -
1568      -	for (i = 0; IDS_VAL(i) != ACPI_VIDEO_HEAD_INVALID &&
1569      -	     i < video->attached_count; i++) {
1570      -		if (device->device_id == (IDS_VAL(i) & 0xffff)) {
1571      -			IDS_BIND(i) = device;
     1553 +	for (i = 0; i < video->attached_count; i++) {
     1554 +		ids = &video->attached_array[i];
     1555 +		if (device->device_id == (ids->value.int_val & 0xffff)) {
     1556 +			ids->bind_info = device;
1572 1557			ACPI_DEBUG_PRINT((ACPI_DB_INFO, "device_bind %d\n", i));
1573 1558		}
1574 1559	}
1575      -	#undef IDS_VAL
1576      -	#undef IDS_BIND
1577 1560	}
1578 1561
1579 1562	/*
···
1588 1579		int status;
1589 1580		int count;
1590 1581		int i;
1591      -	struct acpi_video_enumerated_device *active_device_list;
     1582 +	struct acpi_video_enumerated_device *active_list;
1592 1583		struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
1593 1584		union acpi_object *dod = NULL;
1594 1585		union acpi_object *obj;
···
1609 1600		ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found %d video heads in _DOD\n",
1610 1601			          dod->package.count));
1611 1602
1612      -	active_device_list = kmalloc((1 +
1613      -				      dod->package.count) *
1614      -				     sizeof(struct
1615      -					    acpi_video_enumerated_device),
1616      -				     GFP_KERNEL);
1617      -
1618      -	if (!active_device_list) {
     1603 +	active_list = kcalloc(1 + dod->package.count,
     1604 +			      sizeof(struct acpi_video_enumerated_device),
     1605 +			      GFP_KERNEL);
     1606 +	if (!active_list) {
1619 1607		status = -ENOMEM;
1620 1608		goto out;
1621 1609	}
···
1622 1616		obj = &dod->package.elements[i];
1623 1617
1624 1618		if (obj->type != ACPI_TYPE_INTEGER) {
1625      -		printk(KERN_ERR PREFIX "Invalid _DOD data\n");
1626      -		active_device_list[i].value.int_val =
1627      -			ACPI_VIDEO_HEAD_INVALID;
     1619 +		printk(KERN_ERR PREFIX
     1620 +			"Invalid _DOD data in element %d\n", i);
     1621 +		continue;
1628 1622		}
1629      -	active_device_list[i].value.int_val = obj->integer.value;
1630      -	active_device_list[i].bind_info = NULL;
     1623 +
     1624 +	active_list[count].value.int_val = obj->integer.value;
     1625 +	active_list[count].bind_info = NULL;
1631 1626		ACPI_DEBUG_PRINT((ACPI_DB_INFO, "dod element[%d] = %d\n", i,
1632 1627			          (int)obj->integer.value));
1633 1628		count++;
1634 1629	}
1635      -	active_device_list[count].value.int_val = ACPI_VIDEO_HEAD_END;
1636 1630
1637 1631	kfree(video->attached_array);
1638 1632
1639      -	video->attached_array = active_device_list;
     1633 +	video->attached_array = active_list;
1640 1634	video->attached_count = count;
1641      -      out:
     1635 +
     1636 + out:
1642 1637	kfree(buffer.pointer);
1643 1638	return status;
1644 1639	}
+20 -1
drivers/misc/Kconfig
···
140 140		depends on EXPERIMENTAL
141 141		depends on ACPI
142 142		depends on LEDS_CLASS
    143 +	depends on NEW_LEDS
143 144		depends on BACKLIGHT_CLASS_DEVICE
144 145		depends on SERIO_I8042
145 146		select ACPI_WMI
···
161 160		depends on ACPI
162 161		depends on EXPERIMENTAL && !ACPI_ASUS
163 162		depends on LEDS_CLASS
    163 +	depends on NEW_LEDS
164 164		depends on BACKLIGHT_CLASS_DEVICE
165 165		---help---
166 166		This is the new Linux driver for Asus laptops. It may also support some
···
243 241	config THINKPAD_ACPI
244 242		tristate "ThinkPad ACPI Laptop Extras"
245 243		depends on X86 && ACPI
    244 +	select BACKLIGHT_LCD_SUPPORT
246 245		select BACKLIGHT_CLASS_DEVICE
247 246		select HWMON
248 247		select NVRAM
249     -	depends on INPUT
    248 +	select INPUT
    249 +	select NEW_LEDS
    250 +	select LEDS_CLASS
250 251		---help---
251 252		This is a driver for the IBM and Lenovo ThinkPad laptops. It adds
252 253		support for Fn-Fx key combinations, Bluetooth control, video
···
349 344	config INTEL_MENLOW
350 345		tristate "Thermal Management driver for Intel menlow platform"
351 346		depends on ACPI_THERMAL
    347 +	select THERMAL
352 348		depends on X86
353 349		---help---
354 350		ACPI thermal management enhancement driver on
355 351		Intel Menlow platform.
356 352
357 353		If unsure, say N.
    354 +
    355 +	config EEEPC_LAPTOP
    356 +		tristate "Eee PC Hotkey Driver (EXPERIMENTAL)"
    357 +		depends on X86
    358 +		depends on ACPI
    359 +		depends on BACKLIGHT_CLASS_DEVICE
    360 +		depends on HWMON
    361 +		depends on EXPERIMENTAL
    362 +		---help---
    363 +		This driver supports the Fn-Fx keys on Eee PC laptops.
    364 +		It also adds the ability to switch camera/wlan on/off.
    365 +
    366 +		If you have an Eee PC laptop, say Y or M here.
358 367
359 368	config ENCLOSURE_SERVICES
360 369		tristate "Enclosure Services"
+2 -1
drivers/misc/Makefile
···
7 7	obj-$(CONFIG_HDPU_FEATURES)	+= hdpuftrs/
8 8	obj-$(CONFIG_MSI_LAPTOP)	+= msi-laptop.o
9 9	obj-$(CONFIG_ACER_WMI)		+= acer-wmi.o
10   -	obj-$(CONFIG_ASUS_LAPTOP)	+= asus-laptop.o
   10 +	obj-$(CONFIG_ASUS_LAPTOP)	+= asus-laptop.o
   11 +	obj-$(CONFIG_EEEPC_LAPTOP)	+= eeepc-laptop.o
11 12	obj-$(CONFIG_ATMEL_PWM)		+= atmel_pwm.o
12 13	obj-$(CONFIG_ATMEL_SSC)		+= atmel-ssc.o
13 14	obj-$(CONFIG_ATMEL_TCLIB)	+= atmel_tclib.o
+666
drivers/misc/eeepc-laptop.c
··· 1 + /* 2 + * eepc-laptop.c - Asus Eee PC extras 3 + * 4 + * Based on asus_acpi.c as patched for the Eee PC by Asus: 5 + * ftp://ftp.asus.com/pub/ASUS/EeePC/701/ASUS_ACPI_071126.rar 6 + * Based on eee.c from eeepc-linux 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License as published by 10 + * the Free Software Foundation; either version 2 of the License, or 11 + * (at your option) any later version. 12 + * 13 + * This program is distributed in the hope that it will be useful, 14 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 + * GNU General Public License for more details. 17 + */ 18 + 19 + #include <linux/kernel.h> 20 + #include <linux/module.h> 21 + #include <linux/init.h> 22 + #include <linux/types.h> 23 + #include <linux/platform_device.h> 24 + #include <linux/backlight.h> 25 + #include <linux/fb.h> 26 + #include <linux/hwmon.h> 27 + #include <linux/hwmon-sysfs.h> 28 + #include <acpi/acpi_drivers.h> 29 + #include <acpi/acpi_bus.h> 30 + #include <linux/uaccess.h> 31 + 32 + #define EEEPC_LAPTOP_VERSION "0.1" 33 + 34 + #define EEEPC_HOTK_NAME "Eee PC Hotkey Driver" 35 + #define EEEPC_HOTK_FILE "eeepc" 36 + #define EEEPC_HOTK_CLASS "hotkey" 37 + #define EEEPC_HOTK_DEVICE_NAME "Hotkey" 38 + #define EEEPC_HOTK_HID "ASUS010" 39 + 40 + #define EEEPC_LOG EEEPC_HOTK_FILE ": " 41 + #define EEEPC_ERR KERN_ERR EEEPC_LOG 42 + #define EEEPC_WARNING KERN_WARNING EEEPC_LOG 43 + #define EEEPC_NOTICE KERN_NOTICE EEEPC_LOG 44 + #define EEEPC_INFO KERN_INFO EEEPC_LOG 45 + 46 + /* 47 + * Definitions for Asus EeePC 48 + */ 49 + #define NOTIFY_WLAN_ON 0x10 50 + #define NOTIFY_BRN_MIN 0x20 51 + #define NOTIFY_BRN_MAX 0x2f 52 + 53 + enum { 54 + DISABLE_ASL_WLAN = 0x0001, 55 + DISABLE_ASL_BLUETOOTH = 0x0002, 56 + DISABLE_ASL_IRDA = 0x0004, 57 + DISABLE_ASL_CAMERA = 0x0008, 58 + DISABLE_ASL_TV = 0x0010, 59 + 
DISABLE_ASL_GPS = 0x0020, 60 + DISABLE_ASL_DISPLAYSWITCH = 0x0040, 61 + DISABLE_ASL_MODEM = 0x0080, 62 + DISABLE_ASL_CARDREADER = 0x0100 63 + }; 64 + 65 + enum { 66 + CM_ASL_WLAN = 0, 67 + CM_ASL_BLUETOOTH, 68 + CM_ASL_IRDA, 69 + CM_ASL_1394, 70 + CM_ASL_CAMERA, 71 + CM_ASL_TV, 72 + CM_ASL_GPS, 73 + CM_ASL_DVDROM, 74 + CM_ASL_DISPLAYSWITCH, 75 + CM_ASL_PANELBRIGHT, 76 + CM_ASL_BIOSFLASH, 77 + CM_ASL_ACPIFLASH, 78 + CM_ASL_CPUFV, 79 + CM_ASL_CPUTEMPERATURE, 80 + CM_ASL_FANCPU, 81 + CM_ASL_FANCHASSIS, 82 + CM_ASL_USBPORT1, 83 + CM_ASL_USBPORT2, 84 + CM_ASL_USBPORT3, 85 + CM_ASL_MODEM, 86 + CM_ASL_CARDREADER, 87 + CM_ASL_LID 88 + }; 89 + 90 + const char *cm_getv[] = { 91 + "WLDG", NULL, NULL, NULL, 92 + "CAMG", NULL, NULL, NULL, 93 + NULL, "PBLG", NULL, NULL, 94 + "CFVG", NULL, NULL, NULL, 95 + "USBG", NULL, NULL, "MODG", 96 + "CRDG", "LIDG" 97 + }; 98 + 99 + const char *cm_setv[] = { 100 + "WLDS", NULL, NULL, NULL, 101 + "CAMS", NULL, NULL, NULL, 102 + "SDSP", "PBLS", "HDPS", NULL, 103 + "CFVS", NULL, NULL, NULL, 104 + "USBG", NULL, NULL, "MODS", 105 + "CRDS", NULL 106 + }; 107 + 108 + #define EEEPC_EC "\\_SB.PCI0.SBRG.EC0." 
109 + 110 + #define EEEPC_EC_FAN_PWM EEEPC_EC "SC02" /* Fan PWM duty cycle (%) */ 111 + #define EEEPC_EC_SC02 0x63 112 + #define EEEPC_EC_FAN_HRPM EEEPC_EC "SC05" /* High byte, fan speed (RPM) */ 113 + #define EEEPC_EC_FAN_LRPM EEEPC_EC "SC06" /* Low byte, fan speed (RPM) */ 114 + #define EEEPC_EC_FAN_CTRL EEEPC_EC "SFB3" /* Byte containing SF25 */ 115 + #define EEEPC_EC_SFB3 0xD3 116 + 117 + /* 118 + * This is the main structure, we can use it to store useful information 119 + * about the hotk device 120 + */ 121 + struct eeepc_hotk { 122 + struct acpi_device *device; /* the device we are in */ 123 + acpi_handle handle; /* the handle of the hotk device */ 124 + u32 cm_supported; /* the control methods supported 125 + by this BIOS */ 126 + uint init_flag; /* Init flags */ 127 + u16 event_count[128]; /* count for each event */ 128 + }; 129 + 130 + /* The actual device the driver binds to */ 131 + static struct eeepc_hotk *ehotk; 132 + 133 + /* Platform device/driver */ 134 + static struct platform_driver platform_driver = { 135 + .driver = { 136 + .name = EEEPC_HOTK_FILE, 137 + .owner = THIS_MODULE, 138 + } 139 + }; 140 + 141 + static struct platform_device *platform_device; 142 + 143 + /* 144 + * The hotkey driver declaration 145 + */ 146 + static int eeepc_hotk_add(struct acpi_device *device); 147 + static int eeepc_hotk_remove(struct acpi_device *device, int type); 148 + 149 + static const struct acpi_device_id eeepc_device_ids[] = { 150 + {EEEPC_HOTK_HID, 0}, 151 + {"", 0}, 152 + }; 153 + MODULE_DEVICE_TABLE(acpi, eeepc_device_ids); 154 + 155 + static struct acpi_driver eeepc_hotk_driver = { 156 + .name = EEEPC_HOTK_NAME, 157 + .class = EEEPC_HOTK_CLASS, 158 + .ids = eeepc_device_ids, 159 + .ops = { 160 + .add = eeepc_hotk_add, 161 + .remove = eeepc_hotk_remove, 162 + }, 163 + }; 164 + 165 + /* The backlight device /sys/class/backlight */ 166 + static struct backlight_device *eeepc_backlight_device; 167 + 168 + /* The hwmon device */ 169 + static struct device 
*eeepc_hwmon_device; 170 + 171 + /* 172 + * The backlight class declaration 173 + */ 174 + static int read_brightness(struct backlight_device *bd); 175 + static int update_bl_status(struct backlight_device *bd); 176 + static struct backlight_ops eeepcbl_ops = { 177 + .get_brightness = read_brightness, 178 + .update_status = update_bl_status, 179 + }; 180 + 181 + MODULE_AUTHOR("Corentin Chary, Eric Cooper"); 182 + MODULE_DESCRIPTION(EEEPC_HOTK_NAME); 183 + MODULE_LICENSE("GPL"); 184 + 185 + /* 186 + * ACPI Helpers 187 + */ 188 + static int write_acpi_int(acpi_handle handle, const char *method, int val, 189 + struct acpi_buffer *output) 190 + { 191 + struct acpi_object_list params; 192 + union acpi_object in_obj; 193 + acpi_status status; 194 + 195 + params.count = 1; 196 + params.pointer = &in_obj; 197 + in_obj.type = ACPI_TYPE_INTEGER; 198 + in_obj.integer.value = val; 199 + 200 + status = acpi_evaluate_object(handle, (char *)method, &params, output); 201 + return (status == AE_OK ? 0 : -1); 202 + } 203 + 204 + static int read_acpi_int(acpi_handle handle, const char *method, int *val) 205 + { 206 + acpi_status status; 207 + ulong result; 208 + 209 + status = acpi_evaluate_integer(handle, (char *)method, NULL, &result); 210 + if (ACPI_FAILURE(status)) { 211 + *val = -1; 212 + return -1; 213 + } else { 214 + *val = result; 215 + return 0; 216 + } 217 + } 218 + 219 + static int set_acpi(int cm, int value) 220 + { 221 + if (ehotk->cm_supported & (0x1 << cm)) { 222 + const char *method = cm_setv[cm]; 223 + if (method == NULL) 224 + return -ENODEV; 225 + if (write_acpi_int(ehotk->handle, method, value, NULL)) 226 + printk(EEEPC_WARNING "Error writing %s\n", method); 227 + } 228 + return 0; 229 + } 230 + 231 + static int get_acpi(int cm) 232 + { 233 + int value = -1; 234 + if ((ehotk->cm_supported & (0x1 << cm))) { 235 + const char *method = cm_getv[cm]; 236 + if (method == NULL) 237 + return -ENODEV; 238 + if (read_acpi_int(ehotk->handle, method, &value)) 239 + 
printk(EEEPC_WARNING "Error reading %s\n", method); 240 + } 241 + return value; 242 + } 243 + 244 + /* 245 + * Backlight 246 + */ 247 + static int read_brightness(struct backlight_device *bd) 248 + { 249 + return get_acpi(CM_ASL_PANELBRIGHT); 250 + } 251 + 252 + static int set_brightness(struct backlight_device *bd, int value) 253 + { 254 + value = max(0, min(15, value)); 255 + return set_acpi(CM_ASL_PANELBRIGHT, value); 256 + } 257 + 258 + static int update_bl_status(struct backlight_device *bd) 259 + { 260 + return set_brightness(bd, bd->props.brightness); 261 + } 262 + 263 + /* 264 + * Sys helpers 265 + */ 266 + static int parse_arg(const char *buf, unsigned long count, int *val) 267 + { 268 + if (!count) 269 + return 0; 270 + if (sscanf(buf, "%i", val) != 1) 271 + return -EINVAL; 272 + return count; 273 + } 274 + 275 + static ssize_t store_sys_acpi(int cm, const char *buf, size_t count) 276 + { 277 + int rv, value; 278 + 279 + rv = parse_arg(buf, count, &value); 280 + if (rv > 0) 281 + set_acpi(cm, value); 282 + return rv; 283 + } 284 + 285 + static ssize_t show_sys_acpi(int cm, char *buf) 286 + { 287 + return sprintf(buf, "%d\n", get_acpi(cm)); 288 + } 289 + 290 + #define EEEPC_CREATE_DEVICE_ATTR(_name, _cm) \ 291 + static ssize_t show_##_name(struct device *dev, \ 292 + struct device_attribute *attr, \ 293 + char *buf) \ 294 + { \ 295 + return show_sys_acpi(_cm, buf); \ 296 + } \ 297 + static ssize_t store_##_name(struct device *dev, \ 298 + struct device_attribute *attr, \ 299 + const char *buf, size_t count) \ 300 + { \ 301 + return store_sys_acpi(_cm, buf, count); \ 302 + } \ 303 + static struct device_attribute dev_attr_##_name = { \ 304 + .attr = { \ 305 + .name = __stringify(_name), \ 306 + .mode = 0644 }, \ 307 + .show = show_##_name, \ 308 + .store = store_##_name, \ 309 + } 310 + 311 + EEEPC_CREATE_DEVICE_ATTR(camera, CM_ASL_CAMERA); 312 + EEEPC_CREATE_DEVICE_ATTR(cardr, CM_ASL_CARDREADER); 313 + EEEPC_CREATE_DEVICE_ATTR(disp, CM_ASL_DISPLAYSWITCH); 
314 + EEEPC_CREATE_DEVICE_ATTR(wlan, CM_ASL_WLAN); 315 + 316 + static struct attribute *platform_attributes[] = { 317 + &dev_attr_camera.attr, 318 + &dev_attr_cardr.attr, 319 + &dev_attr_disp.attr, 320 + &dev_attr_wlan.attr, 321 + NULL 322 + }; 323 + 324 + static struct attribute_group platform_attribute_group = { 325 + .attrs = platform_attributes 326 + }; 327 + 328 + /* 329 + * Hotkey functions 330 + */ 331 + static int eeepc_hotk_check(void) 332 + { 333 + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 334 + int result; 335 + 336 + result = acpi_bus_get_status(ehotk->device); 337 + if (result) 338 + return result; 339 + if (ehotk->device->status.present) { 340 + if (write_acpi_int(ehotk->handle, "INIT", ehotk->init_flag, 341 + &buffer)) { 342 + printk(EEEPC_ERR "Hotkey initialization failed\n"); 343 + return -ENODEV; 344 + } else { 345 + printk(EEEPC_NOTICE "Hotkey init flags 0x%x\n", 346 + ehotk->init_flag); 347 + } 348 + /* get control methods supported */ 349 + if (read_acpi_int(ehotk->handle, "CMSG" 350 + , &ehotk->cm_supported)) { 351 + printk(EEEPC_ERR 352 + "Get control methods supported failed\n"); 353 + return -ENODEV; 354 + } else { 355 + printk(EEEPC_INFO 356 + "Get control methods supported: 0x%x\n", 357 + ehotk->cm_supported); 358 + } 359 + } else { 360 + printk(EEEPC_ERR "Hotkey device not present, aborting\n"); 361 + return -EINVAL; 362 + } 363 + return 0; 364 + } 365 + 366 + static void notify_wlan(u32 *event) 367 + { 368 + /* if DISABLE_ASL_WLAN is set, the notify code for fn+f2 369 + will always be 0x10 */ 370 + if (ehotk->cm_supported & (0x1 << CM_ASL_WLAN)) { 371 + const char *method = cm_getv[CM_ASL_WLAN]; 372 + int value; 373 + if (read_acpi_int(ehotk->handle, method, &value)) 374 + printk(EEEPC_WARNING "Error reading %s\n", 375 + method); 376 + else if (value == 1) 377 + *event = 0x11; 378 + } 379 + } 380 + 381 + static void notify_brn(void) 382 + { 383 + struct backlight_device *bd = eeepc_backlight_device; 384 + 
bd->props.brightness = read_brightness(bd); 385 + } 386 + 387 + static void eeepc_hotk_notify(acpi_handle handle, u32 event, void *data) 388 + { 389 + if (!ehotk) 390 + return; 391 + if (event == NOTIFY_WLAN_ON && (DISABLE_ASL_WLAN & ehotk->init_flag)) 392 + notify_wlan(&event); 393 + if (event >= NOTIFY_BRN_MIN && event <= NOTIFY_BRN_MAX) 394 + notify_brn(); 395 + acpi_bus_generate_proc_event(ehotk->device, event, 396 + ehotk->event_count[event % 128]++); 397 + } 398 + 399 + static int eeepc_hotk_add(struct acpi_device *device) 400 + { 401 + acpi_status status = AE_OK; 402 + int result; 403 + 404 + if (!device) 405 + return -EINVAL; 406 + printk(EEEPC_NOTICE EEEPC_HOTK_NAME "\n"); 407 + ehotk = kzalloc(sizeof(struct eeepc_hotk), GFP_KERNEL); 408 + if (!ehotk) 409 + return -ENOMEM; 410 + ehotk->init_flag = DISABLE_ASL_WLAN | DISABLE_ASL_DISPLAYSWITCH; 411 + ehotk->handle = device->handle; 412 + strcpy(acpi_device_name(device), EEEPC_HOTK_DEVICE_NAME); 413 + strcpy(acpi_device_class(device), EEEPC_HOTK_CLASS); 414 + acpi_driver_data(device) = ehotk; 415 + ehotk->device = device; 416 + result = eeepc_hotk_check(); 417 + if (result) 418 + goto end; 419 + status = acpi_install_notify_handler(ehotk->handle, ACPI_SYSTEM_NOTIFY, 420 + eeepc_hotk_notify, ehotk); 421 + if (ACPI_FAILURE(status)) 422 + printk(EEEPC_ERR "Error installing notify handler\n"); 423 + end: 424 + if (result) { 425 + kfree(ehotk); 426 + ehotk = NULL; 427 + } 428 + return result; 429 + } 430 + 431 + static int eeepc_hotk_remove(struct acpi_device *device, int type) 432 + { 433 + acpi_status status = 0; 434 + 435 + if (!device || !acpi_driver_data(device)) 436 + return -EINVAL; 437 + status = acpi_remove_notify_handler(ehotk->handle, ACPI_SYSTEM_NOTIFY, 438 + eeepc_hotk_notify); 439 + if (ACPI_FAILURE(status)) 440 + printk(EEEPC_ERR "Error removing notify handler\n"); 441 + kfree(ehotk); 442 + return 0; 443 + } 444 + 445 + /* 446 + * Hwmon 447 + */ 448 + static int eeepc_get_fan_pwm(void) 449 + { 450 + 
int value = 0; 451 + 452 + read_acpi_int(NULL, EEEPC_EC_FAN_PWM, &value); 453 + return (value); 454 + } 455 + 456 + static void eeepc_set_fan_pwm(int value) 457 + { 458 + value = SENSORS_LIMIT(value, 0, 100); 459 + ec_write(EEEPC_EC_SC02, value); 460 + } 461 + 462 + static int eeepc_get_fan_rpm(void) 463 + { 464 + int high = 0; 465 + int low = 0; 466 + 467 + read_acpi_int(NULL, EEEPC_EC_FAN_HRPM, &high); 468 + read_acpi_int(NULL, EEEPC_EC_FAN_LRPM, &low); 469 + return (high << 8 | low); 470 + } 471 + 472 + static int eeepc_get_fan_ctrl(void) 473 + { 474 + int value = 0; 475 + 476 + read_acpi_int(NULL, EEEPC_EC_FAN_CTRL, &value); 477 + return ((value & 0x02 ? 1 : 0)); 478 + } 479 + 480 + static void eeepc_set_fan_ctrl(int manual) 481 + { 482 + int value = 0; 483 + 484 + read_acpi_int(NULL, EEEPC_EC_FAN_CTRL, &value); 485 + if (manual) 486 + value |= 0x02; 487 + else 488 + value &= ~0x02; 489 + ec_write(EEEPC_EC_SFB3, value); 490 + } 491 + 492 + static ssize_t store_sys_hwmon(void (*set)(int), const char *buf, size_t count) 493 + { 494 + int rv, value; 495 + 496 + rv = parse_arg(buf, count, &value); 497 + if (rv > 0) 498 + set(value); 499 + return rv; 500 + } 501 + 502 + static ssize_t show_sys_hwmon(int (*get)(void), char *buf) 503 + { 504 + return sprintf(buf, "%d\n", get()); 505 + } 506 + 507 + #define EEEPC_CREATE_SENSOR_ATTR(_name, _mode, _set, _get) \ 508 + static ssize_t show_##_name(struct device *dev, \ 509 + struct device_attribute *attr, \ 510 + char *buf) \ 511 + { \ 512 + return show_sys_hwmon(_set, buf); \ 513 + } \ 514 + static ssize_t store_##_name(struct device *dev, \ 515 + struct device_attribute *attr, \ 516 + const char *buf, size_t count) \ 517 + { \ 518 + return store_sys_hwmon(_get, buf, count); \ 519 + } \ 520 + static SENSOR_DEVICE_ATTR(_name, _mode, show_##_name, store_##_name, 0); 521 + 522 + EEEPC_CREATE_SENSOR_ATTR(fan1_input, S_IRUGO, eeepc_get_fan_rpm, NULL); 523 + EEEPC_CREATE_SENSOR_ATTR(fan1_pwm, S_IRUGO | S_IWUSR, 524 + 
eeepc_get_fan_pwm, eeepc_set_fan_pwm); 525 + EEEPC_CREATE_SENSOR_ATTR(pwm1_enable, S_IRUGO | S_IWUSR, 526 + eeepc_get_fan_ctrl, eeepc_set_fan_ctrl); 527 + 528 + static struct attribute *hwmon_attributes[] = { 529 + &sensor_dev_attr_fan1_pwm.dev_attr.attr, 530 + &sensor_dev_attr_fan1_input.dev_attr.attr, 531 + &sensor_dev_attr_pwm1_enable.dev_attr.attr, 532 + NULL 533 + }; 534 + 535 + static struct attribute_group hwmon_attribute_group = { 536 + .attrs = hwmon_attributes 537 + }; 538 + 539 + /* 540 + * exit/init 541 + */ 542 + static void eeepc_backlight_exit(void) 543 + { 544 + if (eeepc_backlight_device) 545 + backlight_device_unregister(eeepc_backlight_device); 546 + eeepc_backlight_device = NULL; 547 + } 548 + 549 + static void eeepc_hwmon_exit(void) 550 + { 551 + struct device *hwmon; 552 + 553 + hwmon = eeepc_hwmon_device; 554 + if (!hwmon) 555 + return ; 556 + hwmon_device_unregister(hwmon); 557 + sysfs_remove_group(&hwmon->kobj, 558 + &hwmon_attribute_group); 559 + eeepc_hwmon_device = NULL; 560 + } 561 + 562 + static void __exit eeepc_laptop_exit(void) 563 + { 564 + eeepc_backlight_exit(); 565 + eeepc_hwmon_exit(); 566 + acpi_bus_unregister_driver(&eeepc_hotk_driver); 567 + sysfs_remove_group(&platform_device->dev.kobj, 568 + &platform_attribute_group); 569 + platform_device_unregister(platform_device); 570 + platform_driver_unregister(&platform_driver); 571 + } 572 + 573 + static int eeepc_backlight_init(struct device *dev) 574 + { 575 + struct backlight_device *bd; 576 + 577 + bd = backlight_device_register(EEEPC_HOTK_FILE, dev, 578 + NULL, &eeepcbl_ops); 579 + if (IS_ERR(bd)) { 580 + printk(EEEPC_ERR 581 + "Could not register eeepc backlight device\n"); 582 + eeepc_backlight_device = NULL; 583 + return PTR_ERR(bd); 584 + } 585 + eeepc_backlight_device = bd; 586 + bd->props.max_brightness = 15; 587 + bd->props.brightness = read_brightness(NULL); 588 + bd->props.power = FB_BLANK_UNBLANK; 589 + backlight_update_status(bd); 590 + return 0; 591 + } 592 + 593 
+ static int eeepc_hwmon_init(struct device *dev) 594 + { 595 + struct device *hwmon; 596 + int result; 597 + 598 + hwmon = hwmon_device_register(dev); 599 + if (IS_ERR(hwmon)) { 600 + printk(EEEPC_ERR 601 + "Could not register eeepc hwmon device\n"); 602 + eeepc_hwmon_device = NULL; 603 + return PTR_ERR(hwmon); 604 + } 605 + eeepc_hwmon_device = hwmon; 606 + result = sysfs_create_group(&hwmon->kobj, 607 + &hwmon_attribute_group); 608 + if (result) 609 + eeepc_hwmon_exit(); 610 + return result; 611 + } 612 + 613 + static int __init eeepc_laptop_init(void) 614 + { 615 + struct device *dev; 616 + int result; 617 + 618 + if (acpi_disabled) 619 + return -ENODEV; 620 + result = acpi_bus_register_driver(&eeepc_hotk_driver); 621 + if (result < 0) 622 + return result; 623 + if (!ehotk) { 624 + acpi_bus_unregister_driver(&eeepc_hotk_driver); 625 + return -ENODEV; 626 + } 627 + dev = acpi_get_physical_device(ehotk->device->handle); 628 + result = eeepc_backlight_init(dev); 629 + if (result) 630 + goto fail_backlight; 631 + result = eeepc_hwmon_init(dev); 632 + if (result) 633 + goto fail_hwmon; 634 + /* Register platform stuff */ 635 + result = platform_driver_register(&platform_driver); 636 + if (result) 637 + goto fail_platform_driver; 638 + platform_device = platform_device_alloc(EEEPC_HOTK_FILE, -1); 639 + if (!platform_device) { 640 + result = -ENOMEM; 641 + goto fail_platform_device1; 642 + } 643 + result = platform_device_add(platform_device); 644 + if (result) 645 + goto fail_platform_device2; 646 + result = sysfs_create_group(&platform_device->dev.kobj, 647 + &platform_attribute_group); 648 + if (result) 649 + goto fail_sysfs; 650 + return 0; 651 + fail_sysfs: 652 + platform_device_del(platform_device); 653 + fail_platform_device2: 654 + platform_device_put(platform_device); 655 + fail_platform_device1: 656 + platform_driver_unregister(&platform_driver); 657 + fail_platform_driver: 658 + eeepc_hwmon_exit(); 659 + fail_hwmon: 660 + eeepc_backlight_exit(); 661 + 
fail_backlight: 662 + return result; 663 + } 664 + 665 + module_init(eeepc_laptop_init); 666 + module_exit(eeepc_laptop_exit);
+572 -193
drivers/misc/thinkpad_acpi.c
··· 21 21 * 02110-1301, USA. 22 22 */ 23 23 24 - #define TPACPI_VERSION "0.19" 24 + #define TPACPI_VERSION "0.20" 25 25 #define TPACPI_SYSFS_VERSION 0x020200 26 26 27 27 /* ··· 67 67 #include <linux/hwmon.h> 68 68 #include <linux/hwmon-sysfs.h> 69 69 #include <linux/input.h> 70 + #include <linux/leds.h> 70 71 #include <asm/uaccess.h> 71 72 72 73 #include <linux/dmi.h> ··· 86 85 #define TP_CMOS_VOLUME_MUTE 2 87 86 #define TP_CMOS_BRIGHTNESS_UP 4 88 87 #define TP_CMOS_BRIGHTNESS_DOWN 5 88 + #define TP_CMOS_THINKLIGHT_ON 12 89 + #define TP_CMOS_THINKLIGHT_OFF 13 89 90 90 91 /* NVRAM Addresses */ 91 92 enum tp_nvram_addr { ··· 136 133 #define TPACPI_PROC_DIR "ibm" 137 134 #define TPACPI_ACPI_EVENT_PREFIX "ibm" 138 135 #define TPACPI_DRVR_NAME TPACPI_FILE 136 + #define TPACPI_DRVR_SHORTNAME "tpacpi" 139 137 #define TPACPI_HWMON_DRVR_NAME TPACPI_NAME "_hwmon" 138 + 139 + #define TPACPI_NVRAM_KTHREAD_NAME "ktpacpi_nvramd" 140 + #define TPACPI_WORKQUEUE_NAME "ktpacpid" 140 141 141 142 #define TPACPI_MAX_ACPI_ARGS 3 142 143 ··· 232 225 u32 light:1; 233 226 u32 light_status:1; 234 227 u32 bright_16levels:1; 228 + u32 bright_acpimode:1; 235 229 u32 wan:1; 236 230 u32 fan_ctrl_status_undef:1; 237 231 u32 input_device_registered:1; ··· 244 236 u32 hotkey_poll_active:1; 245 237 } tp_features; 246 238 239 + static struct { 240 + u16 hotkey_mask_ff:1; 241 + u16 bright_cmos_ec_unsync:1; 242 + } tp_warned; 243 + 247 244 struct thinkpad_id_data { 248 245 unsigned int vendor; /* ThinkPad vendor: 249 246 * PCI_VENDOR_ID_IBM/PCI_VENDOR_ID_LENOVO */ ··· 259 246 u16 bios_model; /* Big Endian, TP-1Y = 0x5931, 0 = unknown */ 260 247 u16 ec_model; 261 248 262 - char *model_str; 249 + char *model_str; /* ThinkPad T43 */ 250 + char *nummodel_str; /* 9384A9C for a 9384-A9C model */ 263 251 }; 264 252 static struct thinkpad_id_data thinkpad_id; 265 253 ··· 272 258 273 259 static int experimental; 274 260 static u32 dbg_level; 261 + 262 + static struct workqueue_struct *tpacpi_wq; 263 + 264 + /* 
Special LED class that can defer work */ 265 + struct tpacpi_led_classdev { 266 + struct led_classdev led_classdev; 267 + struct work_struct work; 268 + enum led_brightness new_brightness; 269 + unsigned int led; 270 + }; 275 271 276 272 /**************************************************************************** 277 273 **************************************************************************** ··· 831 807 return 0; 832 808 } 833 809 810 + static int __init tpacpi_query_bcl_levels(acpi_handle handle) 811 + { 812 + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 813 + union acpi_object *obj; 814 + int rc; 815 + 816 + if (ACPI_SUCCESS(acpi_evaluate_object(handle, NULL, NULL, &buffer))) { 817 + obj = (union acpi_object *)buffer.pointer; 818 + if (!obj || (obj->type != ACPI_TYPE_PACKAGE)) { 819 + printk(TPACPI_ERR "Unknown _BCL data, " 820 + "please report this to %s\n", TPACPI_MAIL); 821 + rc = 0; 822 + } else { 823 + rc = obj->package.count; 824 + } 825 + } else { 826 + return 0; 827 + } 828 + 829 + kfree(buffer.pointer); 830 + return rc; 831 + } 832 + 833 + static acpi_status __init tpacpi_acpi_walk_find_bcl(acpi_handle handle, 834 + u32 lvl, void *context, void **rv) 835 + { 836 + char name[ACPI_PATH_SEGMENT_LENGTH]; 837 + struct acpi_buffer buffer = { sizeof(name), &name }; 838 + 839 + if (ACPI_SUCCESS(acpi_get_name(handle, ACPI_SINGLE_NAME, &buffer)) && 840 + !strncmp("_BCL", name, sizeof(name) - 1)) { 841 + BUG_ON(!rv || !*rv); 842 + **(int **)rv = tpacpi_query_bcl_levels(handle); 843 + return AE_CTRL_TERMINATE; 844 + } else { 845 + return AE_OK; 846 + } 847 + } 848 + 849 + /* 850 + * Returns 0 (no ACPI _BCL or _BCL invalid), or size of brightness map 851 + */ 852 + static int __init tpacpi_check_std_acpi_brightness_support(void) 853 + { 854 + int status; 855 + int bcl_levels = 0; 856 + void *bcl_ptr = &bcl_levels; 857 + 858 + if (!vid_handle) { 859 + TPACPI_ACPIHANDLE_INIT(vid); 860 + } 861 + if (!vid_handle) 862 + return 0; 863 + 864 + /* 865 + 
* Search for a _BCL method, and execute it. This is safe on all 866 + * ThinkPads, and as a side-effect, _BCL will place a Lenovo Vista 867 + * BIOS in ACPI backlight control mode. We do NOT have to care 868 + * about calling the _BCL method in an enabled video device, any 869 + * will do for our purposes. 870 + */ 871 + 872 + status = acpi_walk_namespace(ACPI_TYPE_METHOD, vid_handle, 3, 873 + tpacpi_acpi_walk_find_bcl, NULL, 874 + &bcl_ptr); 875 + 876 + if (ACPI_SUCCESS(status) && bcl_levels > 2) { 877 + tp_features.bright_acpimode = 1; 878 + return (bcl_levels - 2); 879 + } 880 + 881 + return 0; 882 + } 883 + 834 884 /************************************************************************* 835 885 * thinkpad-acpi driver attributes 836 886 */ ··· 1007 909 thinkpad_id.ec_version_str : "unknown"); 1008 910 1009 911 if (thinkpad_id.vendor && thinkpad_id.model_str) 1010 - printk(TPACPI_INFO "%s %s\n", 912 + printk(TPACPI_INFO "%s %s, model %s\n", 1011 913 (thinkpad_id.vendor == PCI_VENDOR_ID_IBM) ? 1012 914 "IBM" : ((thinkpad_id.vendor == 1013 915 PCI_VENDOR_ID_LENOVO) ? 1014 916 "Lenovo" : "Unknown vendor"), 1015 - thinkpad_id.model_str); 917 + thinkpad_id.model_str, 918 + (thinkpad_id.nummodel_str) ? 
919 + thinkpad_id.nummodel_str : "unknown"); 1016 920 1017 921 return 0; 1018 922 } ··· 1207 1107 int rc = 0; 1208 1108 1209 1109 if (tp_features.hotkey_mask) { 1110 + if (!tp_warned.hotkey_mask_ff && 1111 + (mask == 0xffff || mask == 0xffffff || 1112 + mask == 0xffffffff)) { 1113 + tp_warned.hotkey_mask_ff = 1; 1114 + printk(TPACPI_NOTICE 1115 + "setting the hotkey mask to 0x%08x is likely " 1116 + "not the best way to go about it\n", mask); 1117 + printk(TPACPI_NOTICE 1118 + "please consider using the driver defaults, " 1119 + "and refer to up-to-date thinkpad-acpi " 1120 + "documentation\n"); 1121 + } 1122 + 1210 1123 HOTKEY_CONFIG_CRITICAL_START 1211 1124 for (i = 0; i < 32; i++) { 1212 1125 u32 m = 1 << i; ··· 1540 1427 (tpacpi_inputdev->users > 0 || hotkey_report_mode < 2)) { 1541 1428 if (!tpacpi_hotkey_task) { 1542 1429 tpacpi_hotkey_task = kthread_run(hotkey_kthread, 1543 - NULL, 1544 - TPACPI_FILE "d"); 1430 + NULL, TPACPI_NVRAM_KTHREAD_NAME); 1545 1431 if (IS_ERR(tpacpi_hotkey_task)) { 1546 1432 tpacpi_hotkey_task = NULL; 1547 1433 printk(TPACPI_ERR ··· 1999 1887 KEY_UNKNOWN, /* 0x0D: FN+INSERT */ 2000 1888 KEY_UNKNOWN, /* 0x0E: FN+DELETE */ 2001 1889 1890 + /* These either have to go through ACPI video, or 1891 + * act like in the IBM ThinkPads, so don't ever 1892 + * enable them by default */ 2002 1893 KEY_RESERVED, /* 0x0F: FN+HOME (brightness up) */ 2003 1894 KEY_RESERVED, /* 0x10: FN+END (brightness down) */ 2004 1895 ··· 2204 2089 if (tp_features.hotkey_tablet) { 2205 2090 set_bit(EV_SW, tpacpi_inputdev->evbit); 2206 2091 set_bit(SW_TABLET_MODE, tpacpi_inputdev->swbit); 2092 + } 2093 + 2094 + /* Do not issue duplicate brightness change events to 2095 + * userspace */ 2096 + if (!tp_features.bright_acpimode) 2097 + /* update bright_acpimode... 
*/ 2098 + tpacpi_check_std_acpi_brightness_support(); 2099 + 2100 + if (tp_features.bright_acpimode) { 2101 + printk(TPACPI_INFO 2102 + "This ThinkPad has standard ACPI backlight " 2103 + "brightness control, supported by the ACPI " 2104 + "video driver\n"); 2105 + printk(TPACPI_NOTICE 2106 + "Disabling thinkpad-acpi brightness events " 2107 + "by default...\n"); 2108 + 2109 + /* The hotkey_reserved_mask change below is not 2110 + * necessary while the keys are at KEY_RESERVED in the 2111 + * default map, but better safe than sorry, leave it 2112 + * here as a marker of what we have to do, especially 2113 + * when we finally become able to set this at runtime 2114 + * on response to X.org requests */ 2115 + hotkey_reserved_mask |= 2116 + (1 << TP_ACPI_HOTKEYSCAN_FNHOME) 2117 + | (1 << TP_ACPI_HOTKEYSCAN_FNEND); 2207 2118 } 2208 2119 2209 2120 dbg_printk(TPACPI_DBG_INIT, ··· 3251 3110 TPACPI_HANDLE(lght, root, "\\LGHT"); /* A21e, A2xm/p, T20-22, X20-21 */ 3252 3111 TPACPI_HANDLE(ledb, ec, "LEDB"); /* G4x */ 3253 3112 3113 + static int light_get_status(void) 3114 + { 3115 + int status = 0; 3116 + 3117 + if (tp_features.light_status) { 3118 + if (!acpi_evalf(ec_handle, &status, "KBLT", "d")) 3119 + return -EIO; 3120 + return (!!status); 3121 + } 3122 + 3123 + return -ENXIO; 3124 + } 3125 + 3126 + static int light_set_status(int status) 3127 + { 3128 + int rc; 3129 + 3130 + if (tp_features.light) { 3131 + if (cmos_handle) { 3132 + rc = acpi_evalf(cmos_handle, NULL, NULL, "vd", 3133 + (status)? 3134 + TP_CMOS_THINKLIGHT_ON : 3135 + TP_CMOS_THINKLIGHT_OFF); 3136 + } else { 3137 + rc = acpi_evalf(lght_handle, NULL, NULL, "vd", 3138 + (status)? 1 : 0); 3139 + } 3140 + return (rc)? 
0 : -EIO; 3141 + } 3142 + 3143 + return -ENXIO; 3144 + } 3145 + 3146 + static void light_set_status_worker(struct work_struct *work) 3147 + { 3148 + struct tpacpi_led_classdev *data = 3149 + container_of(work, struct tpacpi_led_classdev, work); 3150 + 3151 + if (likely(tpacpi_lifecycle == TPACPI_LIFE_RUNNING)) 3152 + light_set_status((data->new_brightness != LED_OFF)); 3153 + } 3154 + 3155 + static void light_sysfs_set(struct led_classdev *led_cdev, 3156 + enum led_brightness brightness) 3157 + { 3158 + struct tpacpi_led_classdev *data = 3159 + container_of(led_cdev, 3160 + struct tpacpi_led_classdev, 3161 + led_classdev); 3162 + data->new_brightness = brightness; 3163 + queue_work(tpacpi_wq, &data->work); 3164 + } 3165 + 3166 + static enum led_brightness light_sysfs_get(struct led_classdev *led_cdev) 3167 + { 3168 + return (light_get_status() == 1)? LED_FULL : LED_OFF; 3169 + } 3170 + 3171 + static struct tpacpi_led_classdev tpacpi_led_thinklight = { 3172 + .led_classdev = { 3173 + .name = "tpacpi::thinklight", 3174 + .brightness_set = &light_sysfs_set, 3175 + .brightness_get = &light_sysfs_get, 3176 + } 3177 + }; 3178 + 3254 3179 static int __init light_init(struct ibm_init_struct *iibm) 3255 3180 { 3181 + int rc = 0; 3182 + 3256 3183 vdbg_printk(TPACPI_DBG_INIT, "initializing light subdriver\n"); 3257 3184 3258 3185 TPACPI_ACPIHANDLE_INIT(ledb); 3259 3186 TPACPI_ACPIHANDLE_INIT(lght); 3260 3187 TPACPI_ACPIHANDLE_INIT(cmos); 3188 + INIT_WORK(&tpacpi_led_thinklight.work, light_set_status_worker); 3261 3189 3262 3190 /* light not supported on 570, 600e/x, 770e, 770x, G4x, R30, R31 */ 3263 3191 tp_features.light = (cmos_handle || lght_handle) && !ledb_handle; ··· 3340 3130 vdbg_printk(TPACPI_DBG_INIT, "light is %s\n", 3341 3131 str_supported(tp_features.light)); 3342 3132 3343 - return (tp_features.light)? 
0 : 1; 3133 + if (tp_features.light) { 3134 + rc = led_classdev_register(&tpacpi_pdev->dev, 3135 + &tpacpi_led_thinklight.led_classdev); 3136 + } 3137 + 3138 + if (rc < 0) { 3139 + tp_features.light = 0; 3140 + tp_features.light_status = 0; 3141 + } else { 3142 + rc = (tp_features.light)? 0 : 1; 3143 + } 3144 + return rc; 3145 + } 3146 + 3147 + static void light_exit(void) 3148 + { 3149 + led_classdev_unregister(&tpacpi_led_thinklight.led_classdev); 3150 + if (work_pending(&tpacpi_led_thinklight.work)) 3151 + flush_workqueue(tpacpi_wq); 3344 3152 } 3345 3153 3346 3154 static int light_read(char *p) 3347 3155 { 3348 3156 int len = 0; 3349 - int status = 0; 3157 + int status; 3350 3158 3351 3159 if (!tp_features.light) { 3352 3160 len += sprintf(p + len, "status:\t\tnot supported\n"); ··· 3372 3144 len += sprintf(p + len, "status:\t\tunknown\n"); 3373 3145 len += sprintf(p + len, "commands:\ton, off\n"); 3374 3146 } else { 3375 - if (!acpi_evalf(ec_handle, &status, "KBLT", "d")) 3376 - return -EIO; 3147 + status = light_get_status(); 3148 + if (status < 0) 3149 + return status; 3377 3150 len += sprintf(p + len, "status:\t\t%s\n", onoff(status, 0)); 3378 3151 len += sprintf(p + len, "commands:\ton, off\n"); 3379 3152 } ··· 3384 3155 3385 3156 static int light_write(char *buf) 3386 3157 { 3387 - int cmos_cmd, lght_cmd; 3388 3158 char *cmd; 3389 - int success; 3159 + int newstatus = 0; 3390 3160 3391 3161 if (!tp_features.light) 3392 3162 return -ENODEV; 3393 3163 3394 3164 while ((cmd = next_cmd(&buf))) { 3395 3165 if (strlencmp(cmd, "on") == 0) { 3396 - cmos_cmd = 0x0c; 3397 - lght_cmd = 1; 3166 + newstatus = 1; 3398 3167 } else if (strlencmp(cmd, "off") == 0) { 3399 - cmos_cmd = 0x0d; 3400 - lght_cmd = 0; 3168 + newstatus = 0; 3401 3169 } else 3402 3170 return -EINVAL; 3403 - 3404 - success = cmos_handle ? 
3405 - acpi_evalf(cmos_handle, NULL, NULL, "vd", cmos_cmd) : 3406 - acpi_evalf(lght_handle, NULL, NULL, "vd", lght_cmd); 3407 - if (!success) 3408 - return -EIO; 3409 3171 } 3410 3172 3411 - return 0; 3173 + return light_set_status(newstatus); 3412 3174 } 3413 3175 3414 3176 static struct ibm_struct light_driver_data = { 3415 3177 .name = "light", 3416 3178 .read = light_read, 3417 3179 .write = light_write, 3180 + .exit = light_exit, 3418 3181 }; 3419 3182 3420 3183 /************************************************************************* ··· 3804 3583 TPACPI_LED_EC_HLMS = 0x0e, /* EC reg to select led to command */ 3805 3584 }; 3806 3585 3586 + enum led_status_t { 3587 + TPACPI_LED_OFF = 0, 3588 + TPACPI_LED_ON, 3589 + TPACPI_LED_BLINK, 3590 + }; 3591 + 3807 3592 static enum led_access_mode led_supported; 3808 3593 3809 3594 TPACPI_HANDLE(led, ec, "SLED", /* 570 */ ··· 3818 3591 "LED", /* all others */ 3819 3592 ); /* R30, R31 */ 3820 3593 3594 + #define TPACPI_LED_NUMLEDS 8 3595 + static struct tpacpi_led_classdev *tpacpi_leds; 3596 + static enum led_status_t tpacpi_led_state_cache[TPACPI_LED_NUMLEDS]; 3597 + static const char const *tpacpi_led_names[TPACPI_LED_NUMLEDS] = { 3598 + /* there's a limit of 19 chars + NULL before 2.6.26 */ 3599 + "tpacpi::power", 3600 + "tpacpi:orange:batt", 3601 + "tpacpi:green:batt", 3602 + "tpacpi::dock_active", 3603 + "tpacpi::bay_active", 3604 + "tpacpi::dock_batt", 3605 + "tpacpi::unknown_led", 3606 + "tpacpi::standby", 3607 + }; 3608 + 3609 + static int led_get_status(unsigned int led) 3610 + { 3611 + int status; 3612 + enum led_status_t led_s; 3613 + 3614 + switch (led_supported) { 3615 + case TPACPI_LED_570: 3616 + if (!acpi_evalf(ec_handle, 3617 + &status, "GLED", "dd", 1 << led)) 3618 + return -EIO; 3619 + led_s = (status == 0)? 3620 + TPACPI_LED_OFF : 3621 + ((status == 1)? 
3622 + TPACPI_LED_ON : 3623 + TPACPI_LED_BLINK); 3624 + tpacpi_led_state_cache[led] = led_s; 3625 + return led_s; 3626 + default: 3627 + return -ENXIO; 3628 + } 3629 + 3630 + /* not reached */ 3631 + } 3632 + 3633 + static int led_set_status(unsigned int led, enum led_status_t ledstatus) 3634 + { 3635 + /* off, on, blink. Index is led_status_t */ 3636 + static const int const led_sled_arg1[] = { 0, 1, 3 }; 3637 + static const int const led_exp_hlbl[] = { 0, 0, 1 }; /* led# * */ 3638 + static const int const led_exp_hlcl[] = { 0, 1, 1 }; /* led# * */ 3639 + static const int const led_led_arg1[] = { 0, 0x80, 0xc0 }; 3640 + 3641 + int rc = 0; 3642 + 3643 + switch (led_supported) { 3644 + case TPACPI_LED_570: 3645 + /* 570 */ 3646 + led = 1 << led; 3647 + if (!acpi_evalf(led_handle, NULL, NULL, "vdd", 3648 + led, led_sled_arg1[ledstatus])) 3649 + rc = -EIO; 3650 + break; 3651 + case TPACPI_LED_OLD: 3652 + /* 600e/x, 770e, 770x, A21e, A2xm/p, T20-22, X20 */ 3653 + led = 1 << led; 3654 + rc = ec_write(TPACPI_LED_EC_HLMS, led); 3655 + if (rc >= 0) 3656 + rc = ec_write(TPACPI_LED_EC_HLBL, 3657 + led * led_exp_hlbl[ledstatus]); 3658 + if (rc >= 0) 3659 + rc = ec_write(TPACPI_LED_EC_HLCL, 3660 + led * led_exp_hlcl[ledstatus]); 3661 + break; 3662 + case TPACPI_LED_NEW: 3663 + /* all others */ 3664 + if (!acpi_evalf(led_handle, NULL, NULL, "vdd", 3665 + led, led_led_arg1[ledstatus])) 3666 + rc = -EIO; 3667 + break; 3668 + default: 3669 + rc = -ENXIO; 3670 + } 3671 + 3672 + if (!rc) 3673 + tpacpi_led_state_cache[led] = ledstatus; 3674 + 3675 + return rc; 3676 + } 3677 + 3678 + static void led_sysfs_set_status(unsigned int led, 3679 + enum led_brightness brightness) 3680 + { 3681 + led_set_status(led, 3682 + (brightness == LED_OFF) ? 3683 + TPACPI_LED_OFF : 3684 + (tpacpi_led_state_cache[led] == TPACPI_LED_BLINK) ? 
3685 + TPACPI_LED_BLINK : TPACPI_LED_ON); 3686 + } 3687 + 3688 + static void led_set_status_worker(struct work_struct *work) 3689 + { 3690 + struct tpacpi_led_classdev *data = 3691 + container_of(work, struct tpacpi_led_classdev, work); 3692 + 3693 + if (likely(tpacpi_lifecycle == TPACPI_LIFE_RUNNING)) 3694 + led_sysfs_set_status(data->led, data->new_brightness); 3695 + } 3696 + 3697 + static void led_sysfs_set(struct led_classdev *led_cdev, 3698 + enum led_brightness brightness) 3699 + { 3700 + struct tpacpi_led_classdev *data = container_of(led_cdev, 3701 + struct tpacpi_led_classdev, led_classdev); 3702 + 3703 + data->new_brightness = brightness; 3704 + queue_work(tpacpi_wq, &data->work); 3705 + } 3706 + 3707 + static int led_sysfs_blink_set(struct led_classdev *led_cdev, 3708 + unsigned long *delay_on, unsigned long *delay_off) 3709 + { 3710 + struct tpacpi_led_classdev *data = container_of(led_cdev, 3711 + struct tpacpi_led_classdev, led_classdev); 3712 + 3713 + /* Can we choose the flash rate? */ 3714 + if (*delay_on == 0 && *delay_off == 0) { 3715 + /* yes. 
set them to the hardware blink rate (1 Hz) */ 3716 + *delay_on = 500; /* ms */ 3717 + *delay_off = 500; /* ms */ 3718 + } else if ((*delay_on != 500) || (*delay_off != 500)) 3719 + return -EINVAL; 3720 + 3721 + data->new_brightness = TPACPI_LED_BLINK; 3722 + queue_work(tpacpi_wq, &data->work); 3723 + 3724 + return 0; 3725 + } 3726 + 3727 + static enum led_brightness led_sysfs_get(struct led_classdev *led_cdev) 3728 + { 3729 + int rc; 3730 + 3731 + struct tpacpi_led_classdev *data = container_of(led_cdev, 3732 + struct tpacpi_led_classdev, led_classdev); 3733 + 3734 + rc = led_get_status(data->led); 3735 + 3736 + if (rc == TPACPI_LED_OFF || rc < 0) 3737 + rc = LED_OFF; /* no error handling in led class :( */ 3738 + else 3739 + rc = LED_FULL; 3740 + 3741 + return rc; 3742 + } 3743 + 3744 + static void led_exit(void) 3745 + { 3746 + unsigned int i; 3747 + 3748 + for (i = 0; i < TPACPI_LED_NUMLEDS; i++) { 3749 + if (tpacpi_leds[i].led_classdev.name) 3750 + led_classdev_unregister(&tpacpi_leds[i].led_classdev); 3751 + } 3752 + 3753 + kfree(tpacpi_leds); 3754 + tpacpi_leds = NULL; 3755 + } 3756 + 3821 3757 static int __init led_init(struct ibm_init_struct *iibm) 3822 3758 { 3759 + unsigned int i; 3760 + int rc; 3761 + 3823 3762 vdbg_printk(TPACPI_DBG_INIT, "initializing LED subdriver\n"); 3824 3763 3825 3764 TPACPI_ACPIHANDLE_INIT(led); ··· 4006 3613 vdbg_printk(TPACPI_DBG_INIT, "LED commands are %s, mode %d\n", 4007 3614 str_supported(led_supported), led_supported); 4008 3615 3616 + tpacpi_leds = kzalloc(sizeof(*tpacpi_leds) * TPACPI_LED_NUMLEDS, 3617 + GFP_KERNEL); 3618 + if (!tpacpi_leds) { 3619 + printk(TPACPI_ERR "Out of memory for LED data\n"); 3620 + return -ENOMEM; 3621 + } 3622 + 3623 + for (i = 0; i < TPACPI_LED_NUMLEDS; i++) { 3624 + tpacpi_leds[i].led = i; 3625 + 3626 + tpacpi_leds[i].led_classdev.brightness_set = &led_sysfs_set; 3627 + tpacpi_leds[i].led_classdev.blink_set = &led_sysfs_blink_set; 3628 + if (led_supported == TPACPI_LED_570) 3629 + 
tpacpi_leds[i].led_classdev.brightness_get = 3630 + &led_sysfs_get; 3631 + 3632 + tpacpi_leds[i].led_classdev.name = tpacpi_led_names[i]; 3633 + 3634 + INIT_WORK(&tpacpi_leds[i].work, led_set_status_worker); 3635 + 3636 + rc = led_classdev_register(&tpacpi_pdev->dev, 3637 + &tpacpi_leds[i].led_classdev); 3638 + if (rc < 0) { 3639 + tpacpi_leds[i].led_classdev.name = NULL; 3640 + led_exit(); 3641 + return rc; 3642 + } 3643 + } 3644 + 4009 3645 return (led_supported != TPACPI_LED_NONE)? 0 : 1; 4010 3646 } 4011 3647 4012 - #define led_status(s) ((s) == 0 ? "off" : ((s) == 1 ? "on" : "blinking")) 3648 + #define str_led_status(s) \ 3649 + ((s) == TPACPI_LED_OFF ? "off" : \ 3650 + ((s) == TPACPI_LED_ON ? "on" : "blinking")) 4013 3651 4014 3652 static int led_read(char *p) 4015 3653 { ··· 4056 3632 /* 570 */ 4057 3633 int i, status; 4058 3634 for (i = 0; i < 8; i++) { 4059 - if (!acpi_evalf(ec_handle, 4060 - &status, "GLED", "dd", 1 << i)) 3635 + status = led_get_status(i); 3636 + if (status < 0) 4061 3637 return -EIO; 4062 3638 len += sprintf(p + len, "%d:\t\t%s\n", 4063 - i, led_status(status)); 3639 + i, str_led_status(status)); 4064 3640 } 4065 3641 } 4066 3642 ··· 4070 3646 return len; 4071 3647 } 4072 3648 4073 - /* off, on, blink */ 4074 - static const int led_sled_arg1[] = { 0, 1, 3 }; 4075 - static const int led_exp_hlbl[] = { 0, 0, 1 }; /* led# * */ 4076 - static const int led_exp_hlcl[] = { 0, 1, 1 }; /* led# * */ 4077 - static const int led_led_arg1[] = { 0, 0x80, 0xc0 }; 4078 - 4079 3649 static int led_write(char *buf) 4080 3650 { 4081 3651 char *cmd; 4082 - int led, ind, ret; 3652 + int led, rc; 3653 + enum led_status_t s; 4083 3654 4084 3655 if (!led_supported) 4085 3656 return -ENODEV; ··· 4084 3665 return -EINVAL; 4085 3666 4086 3667 if (strstr(cmd, "off")) { 4087 - ind = 0; 3668 + s = TPACPI_LED_OFF; 4088 3669 } else if (strstr(cmd, "on")) { 4089 - ind = 1; 3670 + s = TPACPI_LED_ON; 4090 3671 } else if (strstr(cmd, "blink")) { 4091 - ind = 2; 4092 - } 
else 4093 - return -EINVAL; 4094 - 4095 - if (led_supported == TPACPI_LED_570) { 4096 - /* 570 */ 4097 - led = 1 << led; 4098 - if (!acpi_evalf(led_handle, NULL, NULL, "vdd", 4099 - led, led_sled_arg1[ind])) 4100 - return -EIO; 4101 - } else if (led_supported == TPACPI_LED_OLD) { 4102 - /* 600e/x, 770e, 770x, A21e, A2xm/p, T20-22, X20 */ 4103 - led = 1 << led; 4104 - ret = ec_write(TPACPI_LED_EC_HLMS, led); 4105 - if (ret >= 0) 4106 - ret = ec_write(TPACPI_LED_EC_HLBL, 4107 - led * led_exp_hlbl[ind]); 4108 - if (ret >= 0) 4109 - ret = ec_write(TPACPI_LED_EC_HLCL, 4110 - led * led_exp_hlcl[ind]); 4111 - if (ret < 0) 4112 - return ret; 3672 + s = TPACPI_LED_BLINK; 4113 3673 } else { 4114 - /* all others */ 4115 - if (!acpi_evalf(led_handle, NULL, NULL, "vdd", 4116 - led, led_led_arg1[ind])) 4117 - return -EIO; 3674 + return -EINVAL; 4118 3675 } 3676 + 3677 + rc = led_set_status(led, s); 3678 + if (rc < 0) 3679 + return rc; 4119 3680 } 4120 3681 4121 3682 return 0; ··· 4105 3706 .name = "led", 4106 3707 .read = led_read, 4107 3708 .write = led_write, 3709 + .exit = led_exit, 4108 3710 }; 4109 3711 4110 3712 /************************************************************************* ··· 4570 4170 4571 4171 #define TPACPI_BACKLIGHT_DEV_NAME "thinkpad_screen" 4572 4172 4173 + enum { 4174 + TP_EC_BACKLIGHT = 0x31, 4175 + 4176 + /* TP_EC_BACKLIGHT bitmasks */ 4177 + TP_EC_BACKLIGHT_LVLMSK = 0x1F, 4178 + TP_EC_BACKLIGHT_CMDMSK = 0xE0, 4179 + TP_EC_BACKLIGHT_MAPSW = 0x20, 4180 + }; 4181 + 4573 4182 static struct backlight_device *ibm_backlight_device; 4574 - static int brightness_offset = 0x31; 4575 4183 static int brightness_mode; 4576 4184 static unsigned int brightness_enable = 2; /* 2 = auto, 0 = no, 1 = yes */ 4577 4185 ··· 4588 4180 /* 4589 4181 * ThinkPads can read brightness from two places: EC 0x31, or 4590 4182 * CMOS NVRAM byte 0x5E, bits 0-3. 
4183 + * 4184 + * EC 0x31 has the following layout 4185 + * Bit 7: unknown function 4186 + * Bit 6: unknown function 4187 + * Bit 5: Z: honour scale changes, NZ: ignore scale changes 4188 + * Bit 4: must be set to zero to avoid problems 4189 + * Bit 3-0: backlight brightness level 4190 + * 4191 + * brightness_get_raw returns status data in the EC 0x31 layout 4591 4192 */ 4592 - static int brightness_get(struct backlight_device *bd) 4193 + static int brightness_get_raw(int *status) 4593 4194 { 4594 4195 u8 lec = 0, lcmos = 0, level = 0; 4595 4196 4596 4197 if (brightness_mode & 1) { 4597 - if (!acpi_ec_read(brightness_offset, &lec)) 4198 + if (!acpi_ec_read(TP_EC_BACKLIGHT, &lec)) 4598 4199 return -EIO; 4599 - lec &= (tp_features.bright_16levels)? 0x0f : 0x07; 4600 - level = lec; 4200 + level = lec & TP_EC_BACKLIGHT_LVLMSK; 4601 4201 }; 4602 4202 if (brightness_mode & 2) { 4603 4203 lcmos = (nvram_read_byte(TP_NVRAM_ADDR_BRIGHTNESS) ··· 4615 4199 level = lcmos; 4616 4200 } 4617 4201 4618 - if (brightness_mode == 3 && lec != lcmos) { 4619 - printk(TPACPI_ERR 4620 - "CMOS NVRAM (%u) and EC (%u) do not agree " 4621 - "on display brightness level\n", 4622 - (unsigned int) lcmos, 4623 - (unsigned int) lec); 4624 - return -EIO; 4202 + if (brightness_mode == 3) { 4203 + *status = lec; /* Prefer EC, CMOS is just a backing store */ 4204 + lec &= TP_EC_BACKLIGHT_LVLMSK; 4205 + if (lec == lcmos) 4206 + tp_warned.bright_cmos_ec_unsync = 0; 4207 + else { 4208 + if (!tp_warned.bright_cmos_ec_unsync) { 4209 + printk(TPACPI_ERR 4210 + "CMOS NVRAM (%u) and EC (%u) do not " 4211 + "agree on display brightness level\n", 4212 + (unsigned int) lcmos, 4213 + (unsigned int) lec); 4214 + tp_warned.bright_cmos_ec_unsync = 1; 4215 + } 4216 + return -EIO; 4217 + } 4218 + } else { 4219 + *status = level; 4625 4220 } 4626 4221 4627 - return level; 4222 + return 0; 4628 4223 } 4629 4224 4630 4225 /* May return EINTR which can always be mapped to ERESTARTSYS */ ··· 4643 4216 { 4644 4217 int 
cmos_cmd, inc, i, res; 4645 4218 int current_value; 4219 + int command_bits; 4646 4220 4647 - if (value > ((tp_features.bright_16levels)? 15 : 7)) 4221 + if (value > ((tp_features.bright_16levels)? 15 : 7) || 4222 + value < 0) 4648 4223 return -EINVAL; 4649 4224 4650 4225 res = mutex_lock_interruptible(&brightness_mutex); 4651 4226 if (res < 0) 4652 4227 return res; 4653 4228 4654 - current_value = brightness_get(NULL); 4655 - if (current_value < 0) { 4656 - res = current_value; 4229 + res = brightness_get_raw(&current_value); 4230 + if (res < 0) 4657 4231 goto errout; 4658 - } 4232 + 4233 + command_bits = current_value & TP_EC_BACKLIGHT_CMDMSK; 4234 + current_value &= TP_EC_BACKLIGHT_LVLMSK; 4659 4235 4660 4236 cmos_cmd = value > current_value ? 4661 4237 TP_CMOS_BRIGHTNESS_UP : ··· 4673 4243 goto errout; 4674 4244 } 4675 4245 if ((brightness_mode & 1) && 4676 - !acpi_ec_write(brightness_offset, i + inc)) { 4246 + !acpi_ec_write(TP_EC_BACKLIGHT, 4247 + (i + inc) | command_bits)) { 4677 4248 res = -EIO; 4678 4249 goto errout;; 4679 4250 } ··· 4697 4266 bd->props.brightness : 0); 4698 4267 } 4699 4268 4269 + static int brightness_get(struct backlight_device *bd) 4270 + { 4271 + int status, res; 4272 + 4273 + res = brightness_get_raw(&status); 4274 + if (res < 0) 4275 + return 0; /* FIXME: teach backlight about error handling */ 4276 + 4277 + return status & TP_EC_BACKLIGHT_LVLMSK; 4278 + } 4279 + 4700 4280 static struct backlight_ops ibm_backlight_data = { 4701 4281 .get_brightness = brightness_get, 4702 4282 .update_status = brightness_update_status, 4703 4283 }; 4704 4284 4705 4285 /* --------------------------------------------------------------------- */ 4706 - 4707 - static int __init tpacpi_query_bcll_levels(acpi_handle handle) 4708 - { 4709 - struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 4710 - union acpi_object *obj; 4711 - int rc; 4712 - 4713 - if (ACPI_SUCCESS(acpi_evaluate_object(handle, NULL, NULL, &buffer))) { 4714 - obj = (union 
acpi_object *)buffer.pointer; 4715 - if (!obj || (obj->type != ACPI_TYPE_PACKAGE)) { 4716 - printk(TPACPI_ERR "Unknown BCLL data, " 4717 - "please report this to %s\n", TPACPI_MAIL); 4718 - rc = 0; 4719 - } else { 4720 - rc = obj->package.count; 4721 - } 4722 - } else { 4723 - return 0; 4724 - } 4725 - 4726 - kfree(buffer.pointer); 4727 - return rc; 4728 - } 4729 - 4730 - static acpi_status __init brightness_find_bcll(acpi_handle handle, u32 lvl, 4731 - void *context, void **rv) 4732 - { 4733 - char name[ACPI_PATH_SEGMENT_LENGTH]; 4734 - struct acpi_buffer buffer = { sizeof(name), &name }; 4735 - 4736 - if (ACPI_SUCCESS(acpi_get_name(handle, ACPI_SINGLE_NAME, &buffer)) && 4737 - !strncmp("BCLL", name, sizeof(name) - 1)) { 4738 - if (tpacpi_query_bcll_levels(handle) == 16) { 4739 - *rv = handle; 4740 - return AE_CTRL_TERMINATE; 4741 - } else { 4742 - return AE_OK; 4743 - } 4744 - } else { 4745 - return AE_OK; 4746 - } 4747 - } 4748 - 4749 - static int __init brightness_check_levels(void) 4750 - { 4751 - int status; 4752 - void *found_node = NULL; 4753 - 4754 - if (!vid_handle) { 4755 - TPACPI_ACPIHANDLE_INIT(vid); 4756 - } 4757 - if (!vid_handle) 4758 - return 0; 4759 - 4760 - /* Search for a BCLL package with 16 levels */ 4761 - status = acpi_walk_namespace(ACPI_TYPE_PACKAGE, vid_handle, 3, 4762 - brightness_find_bcll, NULL, 4763 - &found_node); 4764 - 4765 - return (ACPI_SUCCESS(status) && found_node != NULL); 4766 - } 4767 - 4768 - static acpi_status __init brightness_find_bcl(acpi_handle handle, u32 lvl, 4769 - void *context, void **rv) 4770 - { 4771 - char name[ACPI_PATH_SEGMENT_LENGTH]; 4772 - struct acpi_buffer buffer = { sizeof(name), &name }; 4773 - 4774 - if (ACPI_SUCCESS(acpi_get_name(handle, ACPI_SINGLE_NAME, &buffer)) && 4775 - !strncmp("_BCL", name, sizeof(name) - 1)) { 4776 - *rv = handle; 4777 - return AE_CTRL_TERMINATE; 4778 - } else { 4779 - return AE_OK; 4780 - } 4781 - } 4782 - 4783 - static int __init brightness_check_std_acpi_support(void) 4784 
- { 4785 - int status; 4786 - void *found_node = NULL; 4787 - 4788 - if (!vid_handle) { 4789 - TPACPI_ACPIHANDLE_INIT(vid); 4790 - } 4791 - if (!vid_handle) 4792 - return 0; 4793 - 4794 - /* Search for a _BCL method, but don't execute it */ 4795 - status = acpi_walk_namespace(ACPI_TYPE_METHOD, vid_handle, 3, 4796 - brightness_find_bcl, NULL, &found_node); 4797 - 4798 - return (ACPI_SUCCESS(status) && found_node != NULL); 4799 - } 4800 4286 4801 4287 static int __init brightness_init(struct ibm_init_struct *iibm) 4802 4288 { ··· 4723 4375 4724 4376 mutex_init(&brightness_mutex); 4725 4377 4726 - if (!brightness_enable) { 4727 - dbg_printk(TPACPI_DBG_INIT, 4728 - "brightness support disabled by " 4729 - "module parameter\n"); 4730 - return 1; 4731 - } else if (brightness_enable > 1) { 4732 - if (brightness_check_std_acpi_support()) { 4378 + /* 4379 + * We always attempt to detect acpi support, so as to switch 4380 + * Lenovo Vista BIOS to ACPI brightness mode even if we are not 4381 + * going to publish a backlight interface 4382 + */ 4383 + b = tpacpi_check_std_acpi_brightness_support(); 4384 + if (b > 0) { 4385 + if (thinkpad_id.vendor == PCI_VENDOR_ID_LENOVO) { 4386 + printk(TPACPI_NOTICE 4387 + "Lenovo BIOS switched to ACPI backlight " 4388 + "control mode\n"); 4389 + } 4390 + if (brightness_enable > 1) { 4733 4391 printk(TPACPI_NOTICE 4734 4392 "standard ACPI backlight interface " 4735 4393 "available, not loading native one...\n"); 4736 4394 return 1; 4737 4395 } 4738 4396 } 4397 + 4398 + if (!brightness_enable) { 4399 + dbg_printk(TPACPI_DBG_INIT, 4400 + "brightness support disabled by " 4401 + "module parameter\n"); 4402 + return 1; 4403 + } 4404 + 4405 + if (b > 16) { 4406 + printk(TPACPI_ERR 4407 + "Unsupported brightness interface, " 4408 + "please contact %s\n", TPACPI_MAIL); 4409 + return 1; 4410 + } 4411 + if (b == 16) 4412 + tp_features.bright_16levels = 1; 4739 4413 4740 4414 if (!brightness_mode) { 4741 4415 if (thinkpad_id.vendor == 
PCI_VENDOR_ID_LENOVO) ··· 4772 4402 if (brightness_mode > 3) 4773 4403 return -EINVAL; 4774 4404 4775 - tp_features.bright_16levels = 4776 - thinkpad_id.vendor == PCI_VENDOR_ID_LENOVO && 4777 - brightness_check_levels(); 4778 - 4779 - b = brightness_get(NULL); 4780 - if (b < 0) 4405 + if (brightness_get_raw(&b) < 0) 4781 4406 return 1; 4782 4407 4783 4408 if (tp_features.bright_16levels) ··· 4790 4425 4791 4426 ibm_backlight_device->props.max_brightness = 4792 4427 (tp_features.bright_16levels)? 15 : 7; 4793 - ibm_backlight_device->props.brightness = b; 4428 + ibm_backlight_device->props.brightness = b & TP_EC_BACKLIGHT_LVLMSK; 4794 4429 backlight_update_status(ibm_backlight_device); 4795 4430 4796 4431 return 0; ··· 5411 5046 if (fan_watchdog_maxinterval > 0 && 5412 5047 tpacpi_lifecycle != TPACPI_LIFE_EXITING) { 5413 5048 fan_watchdog_active = 1; 5414 - if (!schedule_delayed_work(&fan_watchdog_task, 5049 + if (!queue_delayed_work(tpacpi_wq, &fan_watchdog_task, 5415 5050 msecs_to_jiffies(fan_watchdog_maxinterval 5416 5051 * 1000))) { 5417 5052 printk(TPACPI_ERR 5418 - "failed to schedule the fan watchdog, " 5053 + "failed to queue the fan watchdog, " 5419 5054 "watchdog will not trigger\n"); 5420 5055 } 5421 5056 } else ··· 5785 5420 &driver_attr_fan_watchdog); 5786 5421 5787 5422 cancel_delayed_work(&fan_watchdog_task); 5788 - flush_scheduled_work(); 5423 + flush_workqueue(tpacpi_wq); 5789 5424 } 5790 5425 5791 5426 static int fan_read(char *p) ··· 6191 5826 6192 5827 tp->model_str = kstrdup(dmi_get_system_info(DMI_PRODUCT_VERSION), 6193 5828 GFP_KERNEL); 6194 - if (strnicmp(tp->model_str, "ThinkPad", 8) != 0) { 5829 + if (tp->model_str && strnicmp(tp->model_str, "ThinkPad", 8) != 0) { 6195 5830 kfree(tp->model_str); 6196 5831 tp->model_str = NULL; 6197 5832 } 5833 + 5834 + tp->nummodel_str = kstrdup(dmi_get_system_info(DMI_PRODUCT_NAME), 5835 + GFP_KERNEL); 6198 5836 } 6199 5837 6200 5838 static int __init probe_for_thinkpad(void) ··· 6439 6071 if (proc_dir) 
6440 6072 remove_proc_entry(TPACPI_PROC_DIR, acpi_root_dir); 6441 6073 6074 + if (tpacpi_wq) 6075 + destroy_workqueue(tpacpi_wq); 6076 + 6442 6077 kfree(thinkpad_id.bios_version_str); 6443 6078 kfree(thinkpad_id.ec_version_str); 6444 6079 kfree(thinkpad_id.model_str); ··· 6471 6100 6472 6101 TPACPI_ACPIHANDLE_INIT(ecrd); 6473 6102 TPACPI_ACPIHANDLE_INIT(ecwr); 6103 + 6104 + tpacpi_wq = create_singlethread_workqueue(TPACPI_WORKQUEUE_NAME); 6105 + if (!tpacpi_wq) { 6106 + thinkpad_acpi_module_exit(); 6107 + return -ENOMEM; 6108 + } 6474 6109 6475 6110 proc_dir = proc_mkdir(TPACPI_PROC_DIR, acpi_root_dir); 6476 6111 if (!proc_dir) { ··· 6599 6222 6600 6223 /* Please remove this in year 2009 */ 6601 6224 MODULE_ALIAS("ibm_acpi"); 6225 + 6226 + MODULE_ALIAS(TPACPI_DRVR_SHORTNAME); 6602 6227 6603 6228 /* 6604 6229 * DMI matching for module autoloading
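The backlight rework in the thinkpad-acpi diff above splits EC register 0x31 into a level field (`TP_EC_BACKLIGHT_LVLMSK`) and command bits (`TP_EC_BACKLIGHT_CMDMSK`) that must be preserved across writes, which is why `brightness_set` writes `(i + inc) | command_bits`. A userspace sketch of that masking, reusing the patch's mask values; the two helper names are illustrative, not kernel API:

```c
#include <assert.h>
#include <stdint.h>

/* Bit layout of the EC backlight register (0x31) per the patch:
 * bits 7..5 are command/flag bits that must survive a level write,
 * bits 4..0 carry the brightness level. */
#define TP_EC_BACKLIGHT_LVLMSK 0x1F
#define TP_EC_BACKLIGHT_CMDMSK 0xE0

/* Extract just the brightness level from a raw register value. */
static uint8_t ec_backlight_level(uint8_t raw)
{
	return raw & TP_EC_BACKLIGHT_LVLMSK;
}

/* Build a new register value that changes only the level, keeping the
 * command bits intact, mirroring "(i + inc) | command_bits" above. */
static uint8_t ec_backlight_set_level(uint8_t raw, uint8_t level)
{
	return (uint8_t)((raw & TP_EC_BACKLIGHT_CMDMSK) |
			 (level & TP_EC_BACKLIGHT_LVLMSK));
}
```

Masking the new level with `TP_EC_BACKLIGHT_LVLMSK` before OR-ing also guarantees a stray level value can never clobber the command bits.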
+70 -4
drivers/pnp/base.h
··· 1 1 extern spinlock_t pnp_lock; 2 2 void *pnp_alloc(long size); 3 + 4 + int pnp_register_protocol(struct pnp_protocol *protocol); 5 + void pnp_unregister_protocol(struct pnp_protocol *protocol); 6 + 7 + #define PNP_EISA_ID_MASK 0x7fffffff 8 + void pnp_eisa_id_to_string(u32 id, char *str); 9 + struct pnp_dev *pnp_alloc_dev(struct pnp_protocol *, int id, char *pnpid); 10 + struct pnp_card *pnp_alloc_card(struct pnp_protocol *, int id, char *pnpid); 11 + 12 + int pnp_add_device(struct pnp_dev *dev); 13 + struct pnp_id *pnp_add_id(struct pnp_dev *dev, char *id); 3 14 int pnp_interface_attach_device(struct pnp_dev *dev); 15 + 16 + int pnp_add_card(struct pnp_card *card); 17 + struct pnp_id *pnp_add_card_id(struct pnp_card *card, char *id); 18 + void pnp_remove_card(struct pnp_card *card); 19 + int pnp_add_card_device(struct pnp_card *card, struct pnp_dev *dev); 20 + void pnp_remove_card_device(struct pnp_dev *dev); 21 + 22 + struct pnp_option *pnp_register_independent_option(struct pnp_dev *dev); 23 + struct pnp_option *pnp_register_dependent_option(struct pnp_dev *dev, 24 + int priority); 25 + int pnp_register_irq_resource(struct pnp_dev *dev, struct pnp_option *option, 26 + struct pnp_irq *data); 27 + int pnp_register_dma_resource(struct pnp_dev *dev, struct pnp_option *option, 28 + struct pnp_dma *data); 29 + int pnp_register_port_resource(struct pnp_dev *dev, struct pnp_option *option, 30 + struct pnp_port *data); 31 + int pnp_register_mem_resource(struct pnp_dev *dev, struct pnp_option *option, 32 + struct pnp_mem *data); 33 + void pnp_init_resources(struct pnp_dev *dev); 34 + 4 35 void pnp_fixup_device(struct pnp_dev *dev); 5 36 void pnp_free_option(struct pnp_option *option); 6 37 int __pnp_add_device(struct pnp_dev *dev); 7 38 void __pnp_remove_device(struct pnp_dev *dev); 8 39 9 - int pnp_check_port(struct pnp_dev * dev, int idx); 10 - int pnp_check_mem(struct pnp_dev * dev, int idx); 11 - int pnp_check_irq(struct pnp_dev * dev, int idx); 12 - int 
pnp_check_dma(struct pnp_dev * dev, int idx); 40 + int pnp_check_port(struct pnp_dev *dev, struct resource *res); 41 + int pnp_check_mem(struct pnp_dev *dev, struct resource *res); 42 + int pnp_check_irq(struct pnp_dev *dev, struct resource *res); 43 + int pnp_check_dma(struct pnp_dev *dev, struct resource *res); 44 + 45 + void dbg_pnp_show_resources(struct pnp_dev *dev, char *desc); 46 + 47 + void pnp_init_resource(struct resource *res); 48 + 49 + struct pnp_resource *pnp_get_pnp_resource(struct pnp_dev *dev, 50 + unsigned int type, unsigned int num); 51 + 52 + #define PNP_MAX_PORT 40 53 + #define PNP_MAX_MEM 24 54 + #define PNP_MAX_IRQ 2 55 + #define PNP_MAX_DMA 2 56 + 57 + struct pnp_resource { 58 + struct resource res; 59 + unsigned int index; /* ISAPNP config register index */ 60 + }; 61 + 62 + struct pnp_resource_table { 63 + struct pnp_resource port[PNP_MAX_PORT]; 64 + struct pnp_resource mem[PNP_MAX_MEM]; 65 + struct pnp_resource dma[PNP_MAX_DMA]; 66 + struct pnp_resource irq[PNP_MAX_IRQ]; 67 + }; 68 + 69 + struct pnp_resource *pnp_add_irq_resource(struct pnp_dev *dev, int irq, 70 + int flags); 71 + struct pnp_resource *pnp_add_dma_resource(struct pnp_dev *dev, int dma, 72 + int flags); 73 + struct pnp_resource *pnp_add_io_resource(struct pnp_dev *dev, 74 + resource_size_t start, 75 + resource_size_t end, int flags); 76 + struct pnp_resource *pnp_add_mem_resource(struct pnp_dev *dev, 77 + resource_size_t start, 78 + resource_size_t end, int flags);
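The reworked `base.h` above declares `pnp_eisa_id_to_string()` for the 32-bit compressed EISA ID format: three 5-bit letters (`'A'` encoded as 1) followed by four hex nibbles. A hedged userspace sketch of that decoding — byte-order handling and the reserved top bit (`PNP_EISA_ID_MASK`) are glossed over, the input is assumed already in decoded bit order, and the function name here is illustrative rather than the kernel symbol:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Decode a 32-bit compressed EISA/PNP ID into its 7-character form.
 * Bits 30..26, 25..21, 20..16 hold the three vendor letters; the low
 * 16 bits hold four hex digits, emitted lower-case to match the
 * tolower() normalization used elsewhere in this patch series. */
static void eisa_id_to_string(uint32_t id, char str[8])
{
	static const char hex[] = "0123456789abcdef";

	str[0] = (char)('A' + ((id >> 26) & 0x1f) - 1);
	str[1] = (char)('A' + ((id >> 21) & 0x1f) - 1);
	str[2] = (char)('A' + ((id >> 16) & 0x1f) - 1);
	str[3] = hex[(id >> 12) & 0xf];
	str[4] = hex[(id >> 8) & 0xf];
	str[5] = hex[(id >> 4) & 0xf];
	str[6] = hex[id & 0xf];
	str[7] = '\0';
}
```

For example, the vendor letters P, N, P encode as 16, 14, 16, so `0x41D00C03` decodes to `"PNP0c03"`.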
+46 -9
drivers/pnp/card.c
··· 5 5 */ 6 6 7 7 #include <linux/module.h> 8 + #include <linux/ctype.h> 8 9 #include <linux/slab.h> 9 10 #include <linux/pnp.h> 10 11 #include "base.h" ··· 101 100 * @id: pointer to a pnp_id structure 102 101 * @card: pointer to the desired card 103 102 */ 104 - int pnp_add_card_id(struct pnp_id *id, struct pnp_card *card) 103 + struct pnp_id *pnp_add_card_id(struct pnp_card *card, char *id) 105 104 { 106 - struct pnp_id *ptr; 105 + struct pnp_id *dev_id, *ptr; 107 106 108 - id->next = NULL; 107 + dev_id = kzalloc(sizeof(struct pnp_id), GFP_KERNEL); 108 + if (!dev_id) 109 + return NULL; 110 + 111 + dev_id->id[0] = id[0]; 112 + dev_id->id[1] = id[1]; 113 + dev_id->id[2] = id[2]; 114 + dev_id->id[3] = tolower(id[3]); 115 + dev_id->id[4] = tolower(id[4]); 116 + dev_id->id[5] = tolower(id[5]); 117 + dev_id->id[6] = tolower(id[6]); 118 + dev_id->id[7] = '\0'; 119 + 120 + dev_id->next = NULL; 109 121 ptr = card->id; 110 122 while (ptr && ptr->next) 111 123 ptr = ptr->next; 112 124 if (ptr) 113 - ptr->next = id; 125 + ptr->next = dev_id; 114 126 else 115 - card->id = id; 116 - return 0; 127 + card->id = dev_id; 128 + 129 + return dev_id; 117 130 } 118 131 119 132 static void pnp_free_card_ids(struct pnp_card *card) ··· 149 134 150 135 pnp_free_card_ids(card); 151 136 kfree(card); 137 + } 138 + 139 + struct pnp_card *pnp_alloc_card(struct pnp_protocol *protocol, int id, char *pnpid) 140 + { 141 + struct pnp_card *card; 142 + struct pnp_id *dev_id; 143 + 144 + card = kzalloc(sizeof(struct pnp_card), GFP_KERNEL); 145 + if (!card) 146 + return NULL; 147 + 148 + card->protocol = protocol; 149 + card->number = id; 150 + 151 + card->dev.parent = &card->protocol->dev; 152 + sprintf(card->dev.bus_id, "%02x:%02x", card->protocol->number, 153 + card->number); 154 + 155 + dev_id = pnp_add_card_id(card, pnpid); 156 + if (!dev_id) { 157 + kfree(card); 158 + return NULL; 159 + } 160 + 161 + return card; 152 162 } 153 163 154 164 static ssize_t pnp_show_card_name(struct device *dmdev, 
··· 231 191 int error; 232 192 struct list_head *pos, *temp; 233 193 234 - sprintf(card->dev.bus_id, "%02x:%02x", card->protocol->number, 235 - card->number); 236 - card->dev.parent = &card->protocol->dev; 237 194 card->dev.bus = NULL; 238 195 card->dev.release = &pnp_release_card; 239 196 error = device_register(&card->dev);
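`pnp_add_card_id()` in the card.c diff above appends the newly allocated ID at the tail of a singly linked list, so IDs stay in the order they were registered. The same walk-to-tail pattern in a standalone userspace model; the struct and function names are made up for illustration:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Minimal model of the ID-list handling in pnp_add_card_id(): allocate
 * a node, then append it at the tail of a singly linked list. */
struct id_node {
	char id[8];
	struct id_node *next;
};

static struct id_node *id_list_append(struct id_node **head, const char *id)
{
	struct id_node *node, *ptr;

	node = calloc(1, sizeof(*node));
	if (!node)
		return NULL;
	strncpy(node->id, id, sizeof(node->id) - 1);

	/* Walk to the last node, as the driver does with card->id. */
	ptr = *head;
	while (ptr && ptr->next)
		ptr = ptr->next;
	if (ptr)
		ptr->next = node;
	else
		*head = node;
	return node;
}
```

Returning the node (rather than the old `int` status) is what lets the callers in this patch drop their separate `kzalloc` of the `pnp_id`.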
+39 -7
drivers/pnp/core.c
··· 106 106 pnp_free_option(dev->independent); 107 107 pnp_free_option(dev->dependent); 108 108 pnp_free_ids(dev); 109 + kfree(dev->res); 109 110 kfree(dev); 111 + } 112 + 113 + struct pnp_dev *pnp_alloc_dev(struct pnp_protocol *protocol, int id, char *pnpid) 114 + { 115 + struct pnp_dev *dev; 116 + struct pnp_id *dev_id; 117 + 118 + dev = kzalloc(sizeof(struct pnp_dev), GFP_KERNEL); 119 + if (!dev) 120 + return NULL; 121 + 122 + dev->res = kzalloc(sizeof(struct pnp_resource_table), GFP_KERNEL); 123 + if (!dev->res) { 124 + kfree(dev); 125 + return NULL; 126 + } 127 + 128 + dev->protocol = protocol; 129 + dev->number = id; 130 + dev->dma_mask = DMA_24BIT_MASK; 131 + 132 + dev->dev.parent = &dev->protocol->dev; 133 + dev->dev.bus = &pnp_bus_type; 134 + dev->dev.dma_mask = &dev->dma_mask; 135 + dev->dev.coherent_dma_mask = dev->dma_mask; 136 + dev->dev.release = &pnp_release_device; 137 + 138 + sprintf(dev->dev.bus_id, "%02x:%02x", dev->protocol->number, 139 + dev->number); 140 + 141 + dev_id = pnp_add_id(dev, pnpid); 142 + if (!dev_id) { 143 + kfree(dev->res); 144 + kfree(dev); 145 + return NULL; 146 + } 147 + 148 + return dev; 110 149 } 111 150 112 151 int __pnp_add_device(struct pnp_dev *dev) ··· 153 114 int ret; 154 115 155 116 pnp_fixup_device(dev); 156 - dev->dev.bus = &pnp_bus_type; 157 - dev->dev.dma_mask = &dev->dma_mask; 158 - dev->dma_mask = dev->dev.coherent_dma_mask = DMA_24BIT_MASK; 159 - dev->dev.release = &pnp_release_device; 160 117 dev->status = PNP_READY; 161 118 spin_lock(&pnp_lock); 162 119 list_add_tail(&dev->global_list, &pnp_global); ··· 180 145 if (dev->card) 181 146 return -EINVAL; 182 147 183 - dev->dev.parent = &dev->protocol->dev; 184 - sprintf(dev->dev.bus_id, "%02x:%02x", dev->protocol->number, 185 - dev->number); 186 148 ret = __pnp_add_device(dev); 187 149 if (ret) 188 150 return ret;
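`pnp_alloc_dev()` in the core.c diff above centralizes device construction: it allocates the secondary `res` table (and the ID) after the device itself, and on any failure frees whatever was already built before returning `NULL`. A minimal userspace sketch of this constructor-with-rollback pattern; all names here are hypothetical:

```c
#include <assert.h>
#include <stdlib.h>

/* A toy object that, like struct pnp_dev after this patch, owns a
 * separately allocated resource table. */
struct res_table { int port[4]; };

struct demo_dev {
	int number;
	struct res_table *res;
};

static struct demo_dev *demo_alloc_dev(int number)
{
	struct demo_dev *dev;

	dev = calloc(1, sizeof(*dev));
	if (!dev)
		return NULL;

	dev->res = calloc(1, sizeof(*dev->res));
	if (!dev->res) {
		free(dev);	/* roll back the partially built object */
		return NULL;
	}

	dev->number = number;
	return dev;
}

/* Mirror of pnp_release_device(): free owned members, then the object. */
static void demo_free_dev(struct demo_dev *dev)
{
	free(dev->res);
	free(dev);
}
```

Keeping allocation and rollback in one constructor is what allows `__pnp_add_device()` above to shed the field initialization it previously duplicated.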
+21 -7
drivers/pnp/driver.c
··· 226 226 227 227 /** 228 228 * pnp_add_id - adds an EISA id to the specified device 229 - * @id: pointer to a pnp_id structure 230 229 * @dev: pointer to the desired device 230 + * @id: pointer to an EISA id string 231 231 */ 232 - int pnp_add_id(struct pnp_id *id, struct pnp_dev *dev) 232 + struct pnp_id *pnp_add_id(struct pnp_dev *dev, char *id) 233 233 { 234 - struct pnp_id *ptr; 234 + struct pnp_id *dev_id, *ptr; 235 235 236 - id->next = NULL; 236 + dev_id = kzalloc(sizeof(struct pnp_id), GFP_KERNEL); 237 + if (!dev_id) 238 + return NULL; 239 + 240 + dev_id->id[0] = id[0]; 241 + dev_id->id[1] = id[1]; 242 + dev_id->id[2] = id[2]; 243 + dev_id->id[3] = tolower(id[3]); 244 + dev_id->id[4] = tolower(id[4]); 245 + dev_id->id[5] = tolower(id[5]); 246 + dev_id->id[6] = tolower(id[6]); 247 + dev_id->id[7] = '\0'; 248 + 249 + dev_id->next = NULL; 237 250 ptr = dev->id; 238 251 while (ptr && ptr->next) 239 252 ptr = ptr->next; 240 253 if (ptr) 241 - ptr->next = id; 254 + ptr->next = dev_id; 242 255 else 243 - dev->id = id; 244 - return 0; 256 + dev->id = dev_id; 257 + 258 + return dev_id; 245 259 } 246 260 247 261 EXPORT_SYMBOL(pnp_register_driver);
+47 -64
drivers/pnp/interface.c
··· 248 248 char *buf) 249 249 { 250 250 struct pnp_dev *dev = to_pnp_dev(dmdev); 251 + struct resource *res; 251 252 int i, ret; 252 253 pnp_info_buffer_t *buffer; 253 254 ··· 268 267 else 269 268 pnp_printf(buffer, "disabled\n"); 270 269 271 - for (i = 0; i < PNP_MAX_PORT; i++) { 272 - if (pnp_port_valid(dev, i)) { 270 + for (i = 0; (res = pnp_get_resource(dev, IORESOURCE_IO, i)); i++) { 271 + if (pnp_resource_valid(res)) { 273 272 pnp_printf(buffer, "io"); 274 - if (pnp_port_flags(dev, i) & IORESOURCE_DISABLED) 273 + if (res->flags & IORESOURCE_DISABLED) 275 274 pnp_printf(buffer, " disabled\n"); 276 275 else 277 276 pnp_printf(buffer, " 0x%llx-0x%llx\n", 278 - (unsigned long long) 279 - pnp_port_start(dev, i), 280 - (unsigned long long)pnp_port_end(dev, 281 - i)); 277 + (unsigned long long) res->start, 278 + (unsigned long long) res->end); 282 279 } 283 280 } 284 - for (i = 0; i < PNP_MAX_MEM; i++) { 285 - if (pnp_mem_valid(dev, i)) { 281 + for (i = 0; (res = pnp_get_resource(dev, IORESOURCE_MEM, i)); i++) { 282 + if (pnp_resource_valid(res)) { 286 283 pnp_printf(buffer, "mem"); 287 - if (pnp_mem_flags(dev, i) & IORESOURCE_DISABLED) 284 + if (res->flags & IORESOURCE_DISABLED) 288 285 pnp_printf(buffer, " disabled\n"); 289 286 else 290 287 pnp_printf(buffer, " 0x%llx-0x%llx\n", 291 - (unsigned long long) 292 - pnp_mem_start(dev, i), 293 - (unsigned long long)pnp_mem_end(dev, 294 - i)); 288 + (unsigned long long) res->start, 289 + (unsigned long long) res->end); 295 290 } 296 291 } 297 - for (i = 0; i < PNP_MAX_IRQ; i++) { 298 - if (pnp_irq_valid(dev, i)) { 292 + for (i = 0; (res = pnp_get_resource(dev, IORESOURCE_IRQ, i)); i++) { 293 + if (pnp_resource_valid(res)) { 299 294 pnp_printf(buffer, "irq"); 300 - if (pnp_irq_flags(dev, i) & IORESOURCE_DISABLED) 295 + if (res->flags & IORESOURCE_DISABLED) 301 296 pnp_printf(buffer, " disabled\n"); 302 297 else 303 298 pnp_printf(buffer, " %lld\n", 304 - (unsigned long long)pnp_irq(dev, i)); 299 + (unsigned long long) 
res->start); 305 300 } 306 301 } 307 - for (i = 0; i < PNP_MAX_DMA; i++) { 308 - if (pnp_dma_valid(dev, i)) { 302 + for (i = 0; (res = pnp_get_resource(dev, IORESOURCE_DMA, i)); i++) { 303 + if (pnp_resource_valid(res)) { 309 304 pnp_printf(buffer, "dma"); 310 - if (pnp_dma_flags(dev, i) & IORESOURCE_DISABLED) 305 + if (res->flags & IORESOURCE_DISABLED) 311 306 pnp_printf(buffer, " disabled\n"); 312 307 else 313 308 pnp_printf(buffer, " %lld\n", 314 - (unsigned long long)pnp_dma(dev, i)); 309 + (unsigned long long) res->start); 315 310 } 316 311 } 317 312 ret = (buffer->curr - buf); ··· 320 323 const char *ubuf, size_t count) 321 324 { 322 325 struct pnp_dev *dev = to_pnp_dev(dmdev); 326 + struct pnp_resource *pnp_res; 323 327 char *buf = (void *)ubuf; 324 328 int retval = 0; 329 + resource_size_t start, end; 325 330 326 331 if (dev->status & PNP_ATTACHED) { 327 332 retval = -EBUSY; ··· 350 351 if (!strnicmp(buf, "auto", 4)) { 351 352 if (dev->active) 352 353 goto done; 353 - pnp_init_resource_table(&dev->res); 354 + pnp_init_resources(dev); 354 355 retval = pnp_auto_config_dev(dev); 355 356 goto done; 356 357 } 357 358 if (!strnicmp(buf, "clear", 5)) { 358 359 if (dev->active) 359 360 goto done; 360 - pnp_init_resource_table(&dev->res); 361 + pnp_init_resources(dev); 361 362 goto done; 362 363 } 363 364 if (!strnicmp(buf, "get", 3)) { 364 365 mutex_lock(&pnp_res_mutex); 365 366 if (pnp_can_read(dev)) 366 - dev->protocol->get(dev, &dev->res); 367 + dev->protocol->get(dev); 367 368 mutex_unlock(&pnp_res_mutex); 368 369 goto done; 369 370 } ··· 372 373 if (dev->active) 373 374 goto done; 374 375 buf += 3; 375 - pnp_init_resource_table(&dev->res); 376 + pnp_init_resources(dev); 376 377 mutex_lock(&pnp_res_mutex); 377 378 while (1) { 378 379 while (isspace(*buf)) ··· 381 382 buf += 2; 382 383 while (isspace(*buf)) 383 384 ++buf; 384 - dev->res.port_resource[nport].start = 385 - simple_strtoul(buf, &buf, 0); 385 + start = simple_strtoul(buf, &buf, 0); 386 386 while 
(isspace(*buf)) 387 387 ++buf; 388 388 if (*buf == '-') { 389 389 buf += 1; 390 390 while (isspace(*buf)) 391 391 ++buf; 392 - dev->res.port_resource[nport].end = 393 - simple_strtoul(buf, &buf, 0); 392 + end = simple_strtoul(buf, &buf, 0); 394 393 } else 395 - dev->res.port_resource[nport].end = 396 - dev->res.port_resource[nport].start; 397 - dev->res.port_resource[nport].flags = 398 - IORESOURCE_IO; 399 - nport++; 400 - if (nport >= PNP_MAX_PORT) 401 - break; 394 + end = start; 395 + pnp_res = pnp_add_io_resource(dev, start, end, 396 + 0); 397 + if (pnp_res) 398 + pnp_res->index = nport++; 402 399 continue; 403 400 } 404 401 if (!strnicmp(buf, "mem", 3)) { 405 402 buf += 3; 406 403 while (isspace(*buf)) 407 404 ++buf; 408 - dev->res.mem_resource[nmem].start = 409 - simple_strtoul(buf, &buf, 0); 405 + start = simple_strtoul(buf, &buf, 0); 410 406 while (isspace(*buf)) 411 407 ++buf; 412 408 if (*buf == '-') { 413 409 buf += 1; 414 410 while (isspace(*buf)) 415 411 ++buf; 416 - dev->res.mem_resource[nmem].end = 417 - simple_strtoul(buf, &buf, 0); 412 + end = simple_strtoul(buf, &buf, 0); 418 413 } else 419 - dev->res.mem_resource[nmem].end = 420 - dev->res.mem_resource[nmem].start; 421 - dev->res.mem_resource[nmem].flags = 422 - IORESOURCE_MEM; 423 - nmem++; 424 - if (nmem >= PNP_MAX_MEM) 425 - break; 414 + end = start; 415 + pnp_res = pnp_add_mem_resource(dev, start, end, 416 + 0); 417 + if (pnp_res) 418 + pnp_res->index = nmem++; 426 419 continue; 427 420 } 428 421 if (!strnicmp(buf, "irq", 3)) { 429 422 buf += 3; 430 423 while (isspace(*buf)) 431 424 ++buf; 432 - dev->res.irq_resource[nirq].start = 433 - dev->res.irq_resource[nirq].end = 434 - simple_strtoul(buf, &buf, 0); 435 - dev->res.irq_resource[nirq].flags = 436 - IORESOURCE_IRQ; 437 - nirq++; 438 - if (nirq >= PNP_MAX_IRQ) 439 - break; 425 + start = simple_strtoul(buf, &buf, 0); 426 + pnp_res = pnp_add_irq_resource(dev, start, 0); 427 + if (pnp_res) 428 + nirq++; 440 429 continue; 441 430 } 442 431 if 
(!strnicmp(buf, "dma", 3)) { 443 432 buf += 3; 444 433 while (isspace(*buf)) 445 434 ++buf; 446 - dev->res.dma_resource[ndma].start = 447 - dev->res.dma_resource[ndma].end = 448 - simple_strtoul(buf, &buf, 0); 449 - dev->res.dma_resource[ndma].flags = 450 - IORESOURCE_DMA; 451 - ndma++; 452 - if (ndma >= PNP_MAX_DMA) 453 - break; 435 + start = simple_strtoul(buf, &buf, 0); 436 + pnp_res = pnp_add_dma_resource(dev, start, 0); 437 + if (pnp_res) 438 + pnp_res->index = ndma++; 454 439 continue; 455 440 } 456 441 break;
+4
drivers/pnp/isapnp/Makefile
··· 5 5 isapnp-proc-$(CONFIG_PROC_FS) = proc.o 6 6 7 7 obj-y := core.o compat.o $(isapnp-proc-y) 8 + 9 + ifeq ($(CONFIG_PNP_DEBUG),y) 10 + EXTRA_CFLAGS += -DDEBUG 11 + endif
+172 -176
drivers/pnp/isapnp/core.c
··· 44 44 #include <linux/mutex.h> 45 45 #include <asm/io.h> 46 46 47 + #include "../base.h" 48 + 47 49 #if 0 48 50 #define ISAPNP_REGION_OK 49 51 #endif ··· 89 87 #define _LTAG_VENDOR 0x84 90 88 #define _LTAG_MEM32RANGE 0x85 91 89 #define _LTAG_FIXEDMEM32RANGE 0x86 90 + 91 + /* Logical device control and configuration registers */ 92 + 93 + #define ISAPNP_CFG_ACTIVATE 0x30 /* byte */ 94 + #define ISAPNP_CFG_MEM 0x40 /* 4 * dword */ 95 + #define ISAPNP_CFG_PORT 0x60 /* 8 * word */ 96 + #define ISAPNP_CFG_IRQ 0x70 /* 2 * word */ 97 + #define ISAPNP_CFG_DMA 0x74 /* 2 * byte */ 92 98 93 99 /* 94 100 * Sizes of ISAPNP logical device configuration register sets. ··· 398 388 } 399 389 400 390 /* 401 - * Parse EISA id. 402 - */ 403 - static void isapnp_parse_id(struct pnp_dev *dev, unsigned short vendor, 404 - unsigned short device) 405 - { 406 - struct pnp_id *id; 407 - 408 - if (!dev) 409 - return; 410 - id = kzalloc(sizeof(struct pnp_id), GFP_KERNEL); 411 - if (!id) 412 - return; 413 - sprintf(id->id, "%c%c%c%x%x%x%x", 414 - 'A' + ((vendor >> 2) & 0x3f) - 1, 415 - 'A' + (((vendor & 3) << 3) | ((vendor >> 13) & 7)) - 1, 416 - 'A' + ((vendor >> 8) & 0x1f) - 1, 417 - (device >> 4) & 0x0f, 418 - device & 0x0f, (device >> 12) & 0x0f, (device >> 8) & 0x0f); 419 - pnp_add_id(id, dev); 420 - } 421 - 422 - /* 423 391 * Parse logical device tag. 
424 392 */ 425 393 static struct pnp_dev *__init isapnp_parse_device(struct pnp_card *card, ··· 405 417 { 406 418 unsigned char tmp[6]; 407 419 struct pnp_dev *dev; 420 + u32 eisa_id; 421 + char id[8]; 408 422 409 423 isapnp_peek(tmp, size); 410 - dev = kzalloc(sizeof(struct pnp_dev), GFP_KERNEL); 424 + eisa_id = tmp[0] | tmp[1] << 8 | tmp[2] << 16 | tmp[3] << 24; 425 + pnp_eisa_id_to_string(eisa_id, id); 426 + 427 + dev = pnp_alloc_dev(&isapnp_protocol, number, id); 411 428 if (!dev) 412 429 return NULL; 413 - dev->number = number; 414 - isapnp_parse_id(dev, (tmp[1] << 8) | tmp[0], (tmp[3] << 8) | tmp[2]); 415 - dev->regs = tmp[4]; 430 + 416 431 dev->card = card; 417 - if (size > 5) 418 - dev->regs |= tmp[5] << 8; 419 - dev->protocol = &isapnp_protocol; 420 432 dev->capabilities |= PNP_CONFIGURABLE; 421 433 dev->capabilities |= PNP_READ; 422 434 dev->capabilities |= PNP_WRITE; 423 435 dev->capabilities |= PNP_DISABLE; 424 - pnp_init_resource_table(&dev->res); 436 + pnp_init_resources(dev); 425 437 return dev; 426 438 } 427 439 428 440 /* 429 441 * Add IRQ resource to resources list. 430 442 */ 431 - static void __init isapnp_parse_irq_resource(struct pnp_option *option, 443 + static void __init isapnp_parse_irq_resource(struct pnp_dev *dev, 444 + struct pnp_option *option, 432 445 int size) 433 446 { 434 447 unsigned char tmp[3]; ··· 446 457 irq->flags = tmp[2]; 447 458 else 448 459 irq->flags = IORESOURCE_IRQ_HIGHEDGE; 449 - pnp_register_irq_resource(option, irq); 460 + pnp_register_irq_resource(dev, option, irq); 450 461 } 451 462 452 463 /* 453 464 * Add DMA resource to resources list. 
454 465 */ 455 - static void __init isapnp_parse_dma_resource(struct pnp_option *option, 466 + static void __init isapnp_parse_dma_resource(struct pnp_dev *dev, 467 + struct pnp_option *option, 456 468 int size) 457 469 { 458 470 unsigned char tmp[2]; ··· 465 475 return; 466 476 dma->map = tmp[0]; 467 477 dma->flags = tmp[1]; 468 - pnp_register_dma_resource(option, dma); 478 + pnp_register_dma_resource(dev, option, dma); 469 479 } 470 480 471 481 /* 472 482 * Add port resource to resources list. 473 483 */ 474 - static void __init isapnp_parse_port_resource(struct pnp_option *option, 484 + static void __init isapnp_parse_port_resource(struct pnp_dev *dev, 485 + struct pnp_option *option, 475 486 int size) 476 487 { 477 488 unsigned char tmp[7]; ··· 487 496 port->align = tmp[5]; 488 497 port->size = tmp[6]; 489 498 port->flags = tmp[0] ? PNP_PORT_FLAG_16BITADDR : 0; 490 - pnp_register_port_resource(option, port); 499 + pnp_register_port_resource(dev, option, port); 491 500 } 492 501 493 502 /* 494 503 * Add fixed port resource to resources list. 495 504 */ 496 - static void __init isapnp_parse_fixed_port_resource(struct pnp_option *option, 505 + static void __init isapnp_parse_fixed_port_resource(struct pnp_dev *dev, 506 + struct pnp_option *option, 497 507 int size) 498 508 { 499 509 unsigned char tmp[3]; ··· 508 516 port->size = tmp[2]; 509 517 port->align = 0; 510 518 port->flags = PNP_PORT_FLAG_FIXED; 511 - pnp_register_port_resource(option, port); 519 + pnp_register_port_resource(dev, option, port); 512 520 } 513 521 514 522 /* 515 523 * Add memory resource to resources list. 
516 524 */ 517 - static void __init isapnp_parse_mem_resource(struct pnp_option *option, 525 + static void __init isapnp_parse_mem_resource(struct pnp_dev *dev, 526 + struct pnp_option *option, 518 527 int size) 519 528 { 520 529 unsigned char tmp[9]; ··· 530 537 mem->align = (tmp[6] << 8) | tmp[5]; 531 538 mem->size = ((tmp[8] << 8) | tmp[7]) << 8; 532 539 mem->flags = tmp[0]; 533 - pnp_register_mem_resource(option, mem); 540 + pnp_register_mem_resource(dev, option, mem); 534 541 } 535 542 536 543 /* 537 544 * Add 32-bit memory resource to resources list. 538 545 */ 539 - static void __init isapnp_parse_mem32_resource(struct pnp_option *option, 546 + static void __init isapnp_parse_mem32_resource(struct pnp_dev *dev, 547 + struct pnp_option *option, 540 548 int size) 541 549 { 542 550 unsigned char tmp[17]; ··· 554 560 mem->size = 555 561 (tmp[16] << 24) | (tmp[15] << 16) | (tmp[14] << 8) | tmp[13]; 556 562 mem->flags = tmp[0]; 557 - pnp_register_mem_resource(option, mem); 563 + pnp_register_mem_resource(dev, option, mem); 558 564 } 559 565 560 566 /* 561 567 * Add 32-bit fixed memory resource to resources list. 
562 568 */ 563 - static void __init isapnp_parse_fixed_mem32_resource(struct pnp_option *option, 569 + static void __init isapnp_parse_fixed_mem32_resource(struct pnp_dev *dev, 570 + struct pnp_option *option, 564 571 int size) 565 572 { 566 573 unsigned char tmp[9]; ··· 576 581 mem->size = (tmp[8] << 24) | (tmp[7] << 16) | (tmp[6] << 8) | tmp[5]; 577 582 mem->align = 0; 578 583 mem->flags = tmp[0]; 579 - pnp_register_mem_resource(option, mem); 584 + pnp_register_mem_resource(dev, option, mem); 580 585 } 581 586 582 587 /* ··· 608 613 unsigned char type, tmp[17]; 609 614 struct pnp_option *option; 610 615 struct pnp_dev *dev; 616 + u32 eisa_id; 617 + char id[8]; 611 618 612 619 if ((dev = isapnp_parse_device(card, size, number++)) == NULL) 613 620 return 1; ··· 649 652 case _STAG_COMPATDEVID: 650 653 if (size == 4 && compat < DEVICE_COUNT_COMPATIBLE) { 651 654 isapnp_peek(tmp, 4); 652 - isapnp_parse_id(dev, (tmp[1] << 8) | tmp[0], 653 - (tmp[3] << 8) | tmp[2]); 655 + eisa_id = tmp[0] | tmp[1] << 8 | 656 + tmp[2] << 16 | tmp[3] << 24; 657 + pnp_eisa_id_to_string(eisa_id, id); 658 + pnp_add_id(dev, id); 654 659 compat++; 655 660 size = 0; 656 661 } ··· 660 661 case _STAG_IRQ: 661 662 if (size < 2 || size > 3) 662 663 goto __skip; 663 - isapnp_parse_irq_resource(option, size); 664 + isapnp_parse_irq_resource(dev, option, size); 664 665 size = 0; 665 666 break; 666 667 case _STAG_DMA: 667 668 if (size != 2) 668 669 goto __skip; 669 - isapnp_parse_dma_resource(option, size); 670 + isapnp_parse_dma_resource(dev, option, size); 670 671 size = 0; 671 672 break; 672 673 case _STAG_STARTDEP: ··· 686 687 if (size != 0) 687 688 goto __skip; 688 689 priority = 0; 690 + dev_dbg(&dev->dev, "end dependent options\n"); 689 691 break; 690 692 case _STAG_IOPORT: 691 693 if (size != 7) 692 694 goto __skip; 693 - isapnp_parse_port_resource(option, size); 695 + isapnp_parse_port_resource(dev, option, size); 694 696 size = 0; 695 697 break; 696 698 case _STAG_FIXEDIO: 697 699 if (size != 
3) 698 700 goto __skip; 699 - isapnp_parse_fixed_port_resource(option, size); 701 + isapnp_parse_fixed_port_resource(dev, option, size); 700 702 size = 0; 701 703 break; 702 704 case _STAG_VENDOR: ··· 705 705 case _LTAG_MEMRANGE: 706 706 if (size != 9) 707 707 goto __skip; 708 - isapnp_parse_mem_resource(option, size); 708 + isapnp_parse_mem_resource(dev, option, size); 709 709 size = 0; 710 710 break; 711 711 case _LTAG_ANSISTR: ··· 720 720 case _LTAG_MEM32RANGE: 721 721 if (size != 17) 722 722 goto __skip; 723 - isapnp_parse_mem32_resource(option, size); 723 + isapnp_parse_mem32_resource(dev, option, size); 724 724 size = 0; 725 725 break; 726 726 case _LTAG_FIXEDMEM32RANGE: 727 727 if (size != 9) 728 728 goto __skip; 729 - isapnp_parse_fixed_mem32_resource(option, size); 729 + isapnp_parse_fixed_mem32_resource(dev, option, size); 730 730 size = 0; 731 731 break; 732 732 case _STAG_END: ··· 734 734 isapnp_skip_bytes(size); 735 735 return 1; 736 736 default: 737 - printk(KERN_ERR 738 - "isapnp: unexpected or unknown tag type 0x%x for logical device %i (device %i), ignored\n", 739 - type, dev->number, card->number); 737 + dev_err(&dev->dev, "unknown tag %#x (card %i), " 738 + "ignored\n", type, card->number); 740 739 } 741 740 __skip: 742 741 if (size > 0) ··· 788 789 isapnp_skip_bytes(size); 789 790 return; 790 791 default: 791 - printk(KERN_ERR 792 - "isapnp: unexpected or unknown tag type 0x%x for device %i, ignored\n", 793 - type, card->number); 792 + dev_err(&card->dev, "unknown tag %#x, ignored\n", 793 + type); 794 794 } 795 795 __skip: 796 796 if (size > 0) ··· 820 822 } 821 823 822 824 /* 823 - * Parse EISA id for ISA PnP card. 
824 - */ 825 - static void isapnp_parse_card_id(struct pnp_card *card, unsigned short vendor, 826 - unsigned short device) 827 - { 828 - struct pnp_id *id = kzalloc(sizeof(struct pnp_id), GFP_KERNEL); 829 - 830 - if (!id) 831 - return; 832 - sprintf(id->id, "%c%c%c%x%x%x%x", 833 - 'A' + ((vendor >> 2) & 0x3f) - 1, 834 - 'A' + (((vendor & 3) << 3) | ((vendor >> 13) & 7)) - 1, 835 - 'A' + ((vendor >> 8) & 0x1f) - 1, 836 - (device >> 4) & 0x0f, 837 - device & 0x0f, (device >> 12) & 0x0f, (device >> 8) & 0x0f); 838 - pnp_add_card_id(id, card); 839 - } 840 - 841 - /* 842 825 * Build device list for all present ISA PnP devices. 843 826 */ 844 827 static int __init isapnp_build_device_list(void) ··· 827 848 int csn; 828 849 unsigned char header[9], checksum; 829 850 struct pnp_card *card; 851 + u32 eisa_id; 852 + char id[8]; 830 853 831 854 isapnp_wait(); 832 855 isapnp_key(); ··· 836 855 isapnp_wake(csn); 837 856 isapnp_peek(header, 9); 838 857 checksum = isapnp_checksum(header); 858 + eisa_id = header[0] | header[1] << 8 | 859 + header[2] << 16 | header[3] << 24; 860 + pnp_eisa_id_to_string(eisa_id, id); 861 + card = pnp_alloc_card(&isapnp_protocol, csn, id); 862 + if (!card) 863 + continue; 864 + 839 865 #if 0 840 - printk(KERN_DEBUG 866 + dev_info(&card->dev, 841 867 "vendor: %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x\n", 842 868 header[0], header[1], header[2], header[3], header[4], 843 869 header[5], header[6], header[7], header[8]); 844 - printk(KERN_DEBUG "checksum = 0x%x\n", checksum); 870 + dev_info(&card->dev, "checksum = %#x\n", checksum); 845 871 #endif 846 - if ((card = 847 - kzalloc(sizeof(struct pnp_card), GFP_KERNEL)) == NULL) 848 - continue; 849 - 850 - card->number = csn; 851 872 INIT_LIST_HEAD(&card->devices); 852 - isapnp_parse_card_id(card, (header[1] << 8) | header[0], 853 - (header[3] << 8) | header[2]); 854 873 card->serial = 855 874 (header[7] << 24) | (header[6] << 16) | (header[5] << 8) | 856 875 header[4]; 857 876 isapnp_checksum_value = 
0x00; 858 877 isapnp_parse_resource_map(card); 859 878 if (isapnp_checksum_value != 0x00) 860 - printk(KERN_ERR 861 - "isapnp: checksum for device %i is not valid (0x%x)\n", 862 - csn, isapnp_checksum_value); 879 + dev_err(&card->dev, "invalid checksum %#x\n", 880 + isapnp_checksum_value); 863 881 card->checksum = isapnp_checksum_value; 864 - card->protocol = &isapnp_protocol; 865 882 866 883 pnp_add_card(card); 867 884 } ··· 926 947 EXPORT_SYMBOL(isapnp_cfg_end); 927 948 EXPORT_SYMBOL(isapnp_write_byte); 928 949 929 - static int isapnp_read_resources(struct pnp_dev *dev, 930 - struct pnp_resource_table *res) 950 + static int isapnp_get_resources(struct pnp_dev *dev) 931 951 { 932 - int tmp, ret; 952 + struct pnp_resource *pnp_res; 953 + int i, ret; 933 954 955 + dev_dbg(&dev->dev, "get resources\n"); 956 + pnp_init_resources(dev); 957 + isapnp_cfg_begin(dev->card->number, dev->number); 934 958 dev->active = isapnp_read_byte(ISAPNP_CFG_ACTIVATE); 935 - if (dev->active) { 936 - for (tmp = 0; tmp < ISAPNP_MAX_PORT; tmp++) { 937 - ret = isapnp_read_word(ISAPNP_CFG_PORT + (tmp << 1)); 938 - if (!ret) 939 - continue; 940 - res->port_resource[tmp].start = ret; 941 - res->port_resource[tmp].flags = IORESOURCE_IO; 942 - } 943 - for (tmp = 0; tmp < ISAPNP_MAX_MEM; tmp++) { 944 - ret = 945 - isapnp_read_word(ISAPNP_CFG_MEM + (tmp << 3)) << 8; 946 - if (!ret) 947 - continue; 948 - res->mem_resource[tmp].start = ret; 949 - res->mem_resource[tmp].flags = IORESOURCE_MEM; 950 - } 951 - for (tmp = 0; tmp < ISAPNP_MAX_IRQ; tmp++) { 952 - ret = 953 - (isapnp_read_word(ISAPNP_CFG_IRQ + (tmp << 1)) >> 954 - 8); 955 - if (!ret) 956 - continue; 957 - res->irq_resource[tmp].start = 958 - res->irq_resource[tmp].end = ret; 959 - res->irq_resource[tmp].flags = IORESOURCE_IRQ; 960 - } 961 - for (tmp = 0; tmp < ISAPNP_MAX_DMA; tmp++) { 962 - ret = isapnp_read_byte(ISAPNP_CFG_DMA + tmp); 963 - if (ret == 4) 964 - continue; 965 - res->dma_resource[tmp].start = 966 - res->dma_resource[tmp].end = 
ret; 967 - res->dma_resource[tmp].flags = IORESOURCE_DMA; 959 + if (!dev->active) 960 + goto __end; 961 + 962 + for (i = 0; i < ISAPNP_MAX_PORT; i++) { 963 + ret = isapnp_read_word(ISAPNP_CFG_PORT + (i << 1)); 964 + if (ret) { 965 + pnp_res = pnp_add_io_resource(dev, ret, ret, 0); 966 + if (pnp_res) 967 + pnp_res->index = i; 968 968 } 969 969 } 970 + for (i = 0; i < ISAPNP_MAX_MEM; i++) { 971 + ret = isapnp_read_word(ISAPNP_CFG_MEM + (i << 3)) << 8; 972 + if (ret) { 973 + pnp_res = pnp_add_mem_resource(dev, ret, ret, 0); 974 + if (pnp_res) 975 + pnp_res->index = i; 976 + } 977 + } 978 + for (i = 0; i < ISAPNP_MAX_IRQ; i++) { 979 + ret = isapnp_read_word(ISAPNP_CFG_IRQ + (i << 1)) >> 8; 980 + if (ret) { 981 + pnp_res = pnp_add_irq_resource(dev, ret, 0); 982 + if (pnp_res) 983 + pnp_res->index = i; 984 + } 985 + } 986 + for (i = 0; i < ISAPNP_MAX_DMA; i++) { 987 + ret = isapnp_read_byte(ISAPNP_CFG_DMA + i); 988 + if (ret != 4) { 989 + pnp_res = pnp_add_dma_resource(dev, ret, 0); 990 + if (pnp_res) 991 + pnp_res->index = i; 992 + } 993 + } 994 + 995 + __end: 996 + isapnp_cfg_end(); 970 997 return 0; 971 998 } 972 999 973 - static int isapnp_get_resources(struct pnp_dev *dev, 974 - struct pnp_resource_table *res) 1000 + static int isapnp_set_resources(struct pnp_dev *dev) 975 1001 { 976 - int ret; 1002 + struct pnp_resource *pnp_res; 1003 + struct resource *res; 1004 + int tmp, index; 977 1005 978 - pnp_init_resource_table(res); 979 - isapnp_cfg_begin(dev->card->number, dev->number); 980 - ret = isapnp_read_resources(dev, res); 981 - isapnp_cfg_end(); 982 - return ret; 983 - } 984 - 985 - static int isapnp_set_resources(struct pnp_dev *dev, 986 - struct pnp_resource_table *res) 987 - { 988 - int tmp; 989 - 1006 + dev_dbg(&dev->dev, "set resources\n"); 990 1007 isapnp_cfg_begin(dev->card->number, dev->number); 991 1008 dev->active = 1; 992 - for (tmp = 0; 993 - tmp < ISAPNP_MAX_PORT 994 - && (res->port_resource[tmp]. 
995 - flags & (IORESOURCE_IO | IORESOURCE_UNSET)) == IORESOURCE_IO; 996 - tmp++) 997 - isapnp_write_word(ISAPNP_CFG_PORT + (tmp << 1), 998 - res->port_resource[tmp].start); 999 - for (tmp = 0; 1000 - tmp < ISAPNP_MAX_IRQ 1001 - && (res->irq_resource[tmp]. 1002 - flags & (IORESOURCE_IRQ | IORESOURCE_UNSET)) == IORESOURCE_IRQ; 1003 - tmp++) { 1004 - int irq = res->irq_resource[tmp].start; 1005 - if (irq == 2) 1006 - irq = 9; 1007 - isapnp_write_byte(ISAPNP_CFG_IRQ + (tmp << 1), irq); 1009 + for (tmp = 0; tmp < ISAPNP_MAX_PORT; tmp++) { 1010 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_IO, tmp); 1011 + if (!pnp_res) 1012 + continue; 1013 + res = &pnp_res->res; 1014 + if (pnp_resource_valid(res)) { 1015 + index = pnp_res->index; 1016 + dev_dbg(&dev->dev, " set io %d to %#llx\n", 1017 + index, (unsigned long long) res->start); 1018 + isapnp_write_word(ISAPNP_CFG_PORT + (index << 1), 1019 + res->start); 1020 + } 1008 1021 } 1009 - for (tmp = 0; 1010 - tmp < ISAPNP_MAX_DMA 1011 - && (res->dma_resource[tmp]. 1012 - flags & (IORESOURCE_DMA | IORESOURCE_UNSET)) == IORESOURCE_DMA; 1013 - tmp++) 1014 - isapnp_write_byte(ISAPNP_CFG_DMA + tmp, 1015 - res->dma_resource[tmp].start); 1016 - for (tmp = 0; 1017 - tmp < ISAPNP_MAX_MEM 1018 - && (res->mem_resource[tmp]. 
1019 - flags & (IORESOURCE_MEM | IORESOURCE_UNSET)) == IORESOURCE_MEM; 1020 - tmp++) 1021 - isapnp_write_word(ISAPNP_CFG_MEM + (tmp << 3), 1022 - (res->mem_resource[tmp].start >> 8) & 0xffff); 1022 + for (tmp = 0; tmp < ISAPNP_MAX_IRQ; tmp++) { 1023 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_IRQ, tmp); 1024 + if (!pnp_res) 1025 + continue; 1026 + res = &pnp_res->res; 1027 + if (pnp_resource_valid(res)) { 1028 + int irq = res->start; 1029 + if (irq == 2) 1030 + irq = 9; 1031 + index = pnp_res->index; 1032 + dev_dbg(&dev->dev, " set irq %d to %d\n", index, irq); 1033 + isapnp_write_byte(ISAPNP_CFG_IRQ + (index << 1), irq); 1034 + } 1035 + } 1036 + for (tmp = 0; tmp < ISAPNP_MAX_DMA; tmp++) { 1037 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_DMA, tmp); 1038 + if (!pnp_res) 1039 + continue; 1040 + res = &pnp_res->res; 1041 + if (pnp_resource_valid(res)) { 1042 + index = pnp_res->index; 1043 + dev_dbg(&dev->dev, " set dma %d to %lld\n", 1044 + index, (unsigned long long) res->start); 1045 + isapnp_write_byte(ISAPNP_CFG_DMA + index, res->start); 1046 + } 1047 + } 1048 + for (tmp = 0; tmp < ISAPNP_MAX_MEM; tmp++) { 1049 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_MEM, tmp); 1050 + if (!pnp_res) 1051 + continue; 1052 + res = &pnp_res->res; 1053 + if (pnp_resource_valid(res)) { 1054 + index = pnp_res->index; 1055 + dev_dbg(&dev->dev, " set mem %d to %#llx\n", 1056 + index, (unsigned long long) res->start); 1057 + isapnp_write_word(ISAPNP_CFG_MEM + (index << 3), 1058 + (res->start >> 8) & 0xffff); 1059 + } 1060 + } 1023 1061 /* FIXME: We aren't handling 32bit mems properly here */ 1024 1062 isapnp_activate(dev->number); 1025 1063 isapnp_cfg_end(); ··· 1134 1138 protocol_for_each_card(&isapnp_protocol, card) { 1135 1139 cards++; 1136 1140 if (isapnp_verbose) { 1137 - printk(KERN_INFO "isapnp: Card '%s'\n", 1138 - card->name[0] ? card->name : "Unknown"); 1141 + dev_info(&card->dev, "card '%s'\n", 1142 + card->name[0] ? 
card->name : "unknown"); 1139 1143 if (isapnp_verbose < 2) 1140 1144 continue; 1141 1145 card_for_each_dev(card, dev) { 1142 - printk(KERN_INFO "isapnp: Device '%s'\n", 1143 - dev->name[0] ? dev->name : "Unknown"); 1146 + dev_info(&card->dev, "device '%s'\n", 1147 + dev->name[0] ? dev->name : "unknown"); 1144 1148 } 1145 1149 } 1146 1150 }
+172 -192
drivers/pnp/manager.c
··· 19 19 20 20 static int pnp_assign_port(struct pnp_dev *dev, struct pnp_port *rule, int idx) 21 21 { 22 - resource_size_t *start, *end; 23 - unsigned long *flags; 22 + struct pnp_resource *pnp_res; 23 + struct resource *res; 24 24 25 - if (idx >= PNP_MAX_PORT) { 25 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_IO, idx); 26 + if (!pnp_res) { 26 27 dev_err(&dev->dev, "too many I/O port resources\n"); 27 28 /* pretend we were successful so at least the manager won't try again */ 28 29 return 1; 29 30 } 30 31 31 - /* check if this resource has been manually set, if so skip */ 32 - if (!(dev->res.port_resource[idx].flags & IORESOURCE_AUTO)) 33 - return 1; 32 + res = &pnp_res->res; 34 33 35 - start = &dev->res.port_resource[idx].start; 36 - end = &dev->res.port_resource[idx].end; 37 - flags = &dev->res.port_resource[idx].flags; 34 + /* check if this resource has been manually set, if so skip */ 35 + if (!(res->flags & IORESOURCE_AUTO)) { 36 + dev_dbg(&dev->dev, " io %d already set to %#llx-%#llx " 37 + "flags %#lx\n", idx, (unsigned long long) res->start, 38 + (unsigned long long) res->end, res->flags); 39 + return 1; 40 + } 38 41 39 42 /* set the initial values */ 40 - *flags |= rule->flags | IORESOURCE_IO; 41 - *flags &= ~IORESOURCE_UNSET; 43 + pnp_res->index = idx; 44 + res->flags |= rule->flags | IORESOURCE_IO; 45 + res->flags &= ~IORESOURCE_UNSET; 42 46 43 47 if (!rule->size) { 44 - *flags |= IORESOURCE_DISABLED; 48 + res->flags |= IORESOURCE_DISABLED; 49 + dev_dbg(&dev->dev, " io %d disabled\n", idx); 45 50 return 1; /* skip disabled resource requests */ 46 51 } 47 52 48 - *start = rule->min; 49 - *end = *start + rule->size - 1; 53 + res->start = rule->min; 54 + res->end = res->start + rule->size - 1; 50 55 51 56 /* run through until pnp_check_port is happy */ 52 - while (!pnp_check_port(dev, idx)) { 53 - *start += rule->align; 54 - *end = *start + rule->size - 1; 55 - if (*start > rule->max || !rule->align) 57 + while (!pnp_check_port(dev, res)) { 58 + 
res->start += rule->align; 59 + res->end = res->start + rule->size - 1; 60 + if (res->start > rule->max || !rule->align) { 61 + dev_dbg(&dev->dev, " couldn't assign io %d\n", idx); 56 62 return 0; 63 + } 57 64 } 65 + dev_dbg(&dev->dev, " assign io %d %#llx-%#llx\n", idx, 66 + (unsigned long long) res->start, (unsigned long long) res->end); 58 67 return 1; 59 68 } 60 69 61 70 static int pnp_assign_mem(struct pnp_dev *dev, struct pnp_mem *rule, int idx) 62 71 { 63 - resource_size_t *start, *end; 64 - unsigned long *flags; 72 + struct pnp_resource *pnp_res; 73 + struct resource *res; 65 74 66 - if (idx >= PNP_MAX_MEM) { 75 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_MEM, idx); 76 + if (!pnp_res) { 67 77 dev_err(&dev->dev, "too many memory resources\n"); 68 78 /* pretend we were successful so at least the manager won't try again */ 69 79 return 1; 70 80 } 71 81 72 - /* check if this resource has been manually set, if so skip */ 73 - if (!(dev->res.mem_resource[idx].flags & IORESOURCE_AUTO)) 74 - return 1; 82 + res = &pnp_res->res; 75 83 76 - start = &dev->res.mem_resource[idx].start; 77 - end = &dev->res.mem_resource[idx].end; 78 - flags = &dev->res.mem_resource[idx].flags; 84 + /* check if this resource has been manually set, if so skip */ 85 + if (!(res->flags & IORESOURCE_AUTO)) { 86 + dev_dbg(&dev->dev, " mem %d already set to %#llx-%#llx " 87 + "flags %#lx\n", idx, (unsigned long long) res->start, 88 + (unsigned long long) res->end, res->flags); 89 + return 1; 90 + } 79 91 80 92 /* set the initial values */ 81 - *flags |= rule->flags | IORESOURCE_MEM; 82 - *flags &= ~IORESOURCE_UNSET; 93 + pnp_res->index = idx; 94 + res->flags |= rule->flags | IORESOURCE_MEM; 95 + res->flags &= ~IORESOURCE_UNSET; 83 96 84 97 /* convert pnp flags to standard Linux flags */ 85 98 if (!(rule->flags & IORESOURCE_MEM_WRITEABLE)) 86 - *flags |= IORESOURCE_READONLY; 99 + res->flags |= IORESOURCE_READONLY; 87 100 if (rule->flags & IORESOURCE_MEM_CACHEABLE) 88 - *flags |= 
IORESOURCE_CACHEABLE; 101 + res->flags |= IORESOURCE_CACHEABLE; 89 102 if (rule->flags & IORESOURCE_MEM_RANGELENGTH) 90 - *flags |= IORESOURCE_RANGELENGTH; 103 + res->flags |= IORESOURCE_RANGELENGTH; 91 104 if (rule->flags & IORESOURCE_MEM_SHADOWABLE) 92 - *flags |= IORESOURCE_SHADOWABLE; 105 + res->flags |= IORESOURCE_SHADOWABLE; 93 106 94 107 if (!rule->size) { 95 - *flags |= IORESOURCE_DISABLED; 108 + res->flags |= IORESOURCE_DISABLED; 109 + dev_dbg(&dev->dev, " mem %d disabled\n", idx); 96 110 return 1; /* skip disabled resource requests */ 97 111 } 98 112 99 - *start = rule->min; 100 - *end = *start + rule->size - 1; 113 + res->start = rule->min; 114 + res->end = res->start + rule->size - 1; 101 115 102 116 /* run through until pnp_check_mem is happy */ 103 - while (!pnp_check_mem(dev, idx)) { 104 - *start += rule->align; 105 - *end = *start + rule->size - 1; 106 - if (*start > rule->max || !rule->align) 117 + while (!pnp_check_mem(dev, res)) { 118 + res->start += rule->align; 119 + res->end = res->start + rule->size - 1; 120 + if (res->start > rule->max || !rule->align) { 121 + dev_dbg(&dev->dev, " couldn't assign mem %d\n", idx); 107 122 return 0; 123 + } 108 124 } 125 + dev_dbg(&dev->dev, " assign mem %d %#llx-%#llx\n", idx, 126 + (unsigned long long) res->start, (unsigned long long) res->end); 109 127 return 1; 110 128 } 111 129 112 130 static int pnp_assign_irq(struct pnp_dev *dev, struct pnp_irq *rule, int idx) 113 131 { 114 - resource_size_t *start, *end; 115 - unsigned long *flags; 132 + struct pnp_resource *pnp_res; 133 + struct resource *res; 116 134 int i; 117 135 118 136 /* IRQ priority: this table is good for i386 */ ··· 138 120 5, 10, 11, 12, 9, 14, 15, 7, 3, 4, 13, 0, 1, 6, 8, 2 139 121 }; 140 122 141 - if (idx >= PNP_MAX_IRQ) { 123 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_IRQ, idx); 124 + if (!pnp_res) { 142 125 dev_err(&dev->dev, "too many IRQ resources\n"); 143 126 /* pretend we were successful so at least the manager won't try again 
*/ 144 127 return 1; 145 128 } 146 129 147 - /* check if this resource has been manually set, if so skip */ 148 - if (!(dev->res.irq_resource[idx].flags & IORESOURCE_AUTO)) 149 - return 1; 130 + res = &pnp_res->res; 150 131 151 - start = &dev->res.irq_resource[idx].start; 152 - end = &dev->res.irq_resource[idx].end; 153 - flags = &dev->res.irq_resource[idx].flags; 132 + /* check if this resource has been manually set, if so skip */ 133 + if (!(res->flags & IORESOURCE_AUTO)) { 134 + dev_dbg(&dev->dev, " irq %d already set to %d flags %#lx\n", 135 + idx, (int) res->start, res->flags); 136 + return 1; 137 + } 154 138 155 139 /* set the initial values */ 156 - *flags |= rule->flags | IORESOURCE_IRQ; 157 - *flags &= ~IORESOURCE_UNSET; 140 + pnp_res->index = idx; 141 + res->flags |= rule->flags | IORESOURCE_IRQ; 142 + res->flags &= ~IORESOURCE_UNSET; 158 143 159 144 if (bitmap_empty(rule->map, PNP_IRQ_NR)) { 160 - *flags |= IORESOURCE_DISABLED; 145 + res->flags |= IORESOURCE_DISABLED; 146 + dev_dbg(&dev->dev, " irq %d disabled\n", idx); 161 147 return 1; /* skip disabled resource requests */ 162 148 } 163 149 164 150 /* TBD: need check for >16 IRQ */ 165 - *start = find_next_bit(rule->map, PNP_IRQ_NR, 16); 166 - if (*start < PNP_IRQ_NR) { 167 - *end = *start; 151 + res->start = find_next_bit(rule->map, PNP_IRQ_NR, 16); 152 + if (res->start < PNP_IRQ_NR) { 153 + res->end = res->start; 154 + dev_dbg(&dev->dev, " assign irq %d %d\n", idx, 155 + (int) res->start); 168 156 return 1; 169 157 } 170 158 for (i = 0; i < 16; i++) { 171 159 if (test_bit(xtab[i], rule->map)) { 172 - *start = *end = xtab[i]; 173 - if (pnp_check_irq(dev, idx)) 160 + res->start = res->end = xtab[i]; 161 + if (pnp_check_irq(dev, res)) { 162 + dev_dbg(&dev->dev, " assign irq %d %d\n", idx, 163 + (int) res->start); 174 164 return 1; 165 + } 175 166 } 176 167 } 168 + dev_dbg(&dev->dev, " couldn't assign irq %d\n", idx); 177 169 return 0; 178 170 } 179 171 180 172 static void pnp_assign_dma(struct pnp_dev 
*dev, struct pnp_dma *rule, int idx) 181 173 { 182 - resource_size_t *start, *end; 183 - unsigned long *flags; 174 + struct pnp_resource *pnp_res; 175 + struct resource *res; 184 176 int i; 185 177 186 178 /* DMA priority: this table is good for i386 */ ··· 198 170 1, 3, 5, 6, 7, 0, 2, 4 199 171 }; 200 172 201 - if (idx >= PNP_MAX_DMA) { 173 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_DMA, idx); 174 + if (!pnp_res) { 202 175 dev_err(&dev->dev, "too many DMA resources\n"); 203 176 return; 204 177 } 205 178 206 - /* check if this resource has been manually set, if so skip */ 207 - if (!(dev->res.dma_resource[idx].flags & IORESOURCE_AUTO)) 208 - return; 179 + res = &pnp_res->res; 209 180 210 - start = &dev->res.dma_resource[idx].start; 211 - end = &dev->res.dma_resource[idx].end; 212 - flags = &dev->res.dma_resource[idx].flags; 181 + /* check if this resource has been manually set, if so skip */ 182 + if (!(res->flags & IORESOURCE_AUTO)) { 183 + dev_dbg(&dev->dev, " dma %d already set to %d flags %#lx\n", 184 + idx, (int) res->start, res->flags); 185 + return; 186 + } 213 187 214 188 /* set the initial values */ 215 - *flags |= rule->flags | IORESOURCE_DMA; 216 - *flags &= ~IORESOURCE_UNSET; 189 + pnp_res->index = idx; 190 + res->flags |= rule->flags | IORESOURCE_DMA; 191 + res->flags &= ~IORESOURCE_UNSET; 217 192 218 193 for (i = 0; i < 8; i++) { 219 194 if (rule->map & (1 << xtab[i])) { 220 - *start = *end = xtab[i]; 221 - if (pnp_check_dma(dev, idx)) 195 + res->start = res->end = xtab[i]; 196 + if (pnp_check_dma(dev, res)) { 197 + dev_dbg(&dev->dev, " assign dma %d %d\n", idx, 198 + (int) res->start); 222 199 return; 200 + } 223 201 } 224 202 } 225 203 #ifdef MAX_DMA_CHANNELS 226 - *start = *end = MAX_DMA_CHANNELS; 204 + res->start = res->end = MAX_DMA_CHANNELS; 227 205 #endif 228 - *flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED; 206 + res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED; 207 + dev_dbg(&dev->dev, " disable dma %d\n", idx); 208 + } 209 + 210 
+ void pnp_init_resource(struct resource *res) 211 + { 212 + unsigned long type; 213 + 214 + type = res->flags & (IORESOURCE_IO | IORESOURCE_MEM | 215 + IORESOURCE_IRQ | IORESOURCE_DMA); 216 + 217 + res->name = NULL; 218 + res->flags = type | IORESOURCE_AUTO | IORESOURCE_UNSET; 219 + if (type == IORESOURCE_IRQ || type == IORESOURCE_DMA) { 220 + res->start = -1; 221 + res->end = -1; 222 + } else { 223 + res->start = 0; 224 + res->end = 0; 225 + } 229 226 } 230 227 231 228 /** 232 229 * pnp_init_resources - Resets a resource table to default values. 233 230 * @table: pointer to the desired resource table 234 231 */ 235 - void pnp_init_resource_table(struct pnp_resource_table *table) 232 + void pnp_init_resources(struct pnp_dev *dev) 236 233 { 234 + struct resource *res; 237 235 int idx; 238 236 239 237 for (idx = 0; idx < PNP_MAX_IRQ; idx++) { 240 - table->irq_resource[idx].name = NULL; 241 - table->irq_resource[idx].start = -1; 242 - table->irq_resource[idx].end = -1; 243 - table->irq_resource[idx].flags = 244 - IORESOURCE_IRQ | IORESOURCE_AUTO | IORESOURCE_UNSET; 238 + res = &dev->res->irq[idx].res; 239 + res->flags = IORESOURCE_IRQ; 240 + pnp_init_resource(res); 245 241 } 246 242 for (idx = 0; idx < PNP_MAX_DMA; idx++) { 247 - table->dma_resource[idx].name = NULL; 248 - table->dma_resource[idx].start = -1; 249 - table->dma_resource[idx].end = -1; 250 - table->dma_resource[idx].flags = 251 - IORESOURCE_DMA | IORESOURCE_AUTO | IORESOURCE_UNSET; 243 + res = &dev->res->dma[idx].res; 244 + res->flags = IORESOURCE_DMA; 245 + pnp_init_resource(res); 252 246 } 253 247 for (idx = 0; idx < PNP_MAX_PORT; idx++) { 254 - table->port_resource[idx].name = NULL; 255 - table->port_resource[idx].start = 0; 256 - table->port_resource[idx].end = 0; 257 - table->port_resource[idx].flags = 258 - IORESOURCE_IO | IORESOURCE_AUTO | IORESOURCE_UNSET; 248 + res = &dev->res->port[idx].res; 249 + res->flags = IORESOURCE_IO; 250 + pnp_init_resource(res); 259 251 } 260 252 for (idx = 0; idx < 
PNP_MAX_MEM; idx++) { 261 - table->mem_resource[idx].name = NULL; 262 - table->mem_resource[idx].start = 0; 263 - table->mem_resource[idx].end = 0; 264 - table->mem_resource[idx].flags = 265 - IORESOURCE_MEM | IORESOURCE_AUTO | IORESOURCE_UNSET; 253 + res = &dev->res->mem[idx].res; 254 + res->flags = IORESOURCE_MEM; 255 + pnp_init_resource(res); 266 256 } 267 257 } 268 258 ··· 288 242 * pnp_clean_resources - clears resources that were not manually set 289 243 * @res: the resources to clean 290 244 */ 291 - static void pnp_clean_resource_table(struct pnp_resource_table *res) 245 + static void pnp_clean_resource_table(struct pnp_dev *dev) 292 246 { 247 + struct resource *res; 293 248 int idx; 294 249 295 250 for (idx = 0; idx < PNP_MAX_IRQ; idx++) { 296 - if (!(res->irq_resource[idx].flags & IORESOURCE_AUTO)) 297 - continue; 298 - res->irq_resource[idx].start = -1; 299 - res->irq_resource[idx].end = -1; 300 - res->irq_resource[idx].flags = 301 - IORESOURCE_IRQ | IORESOURCE_AUTO | IORESOURCE_UNSET; 251 + res = &dev->res->irq[idx].res; 252 + if (res->flags & IORESOURCE_AUTO) { 253 + res->flags = IORESOURCE_IRQ; 254 + pnp_init_resource(res); 255 + } 302 256 } 303 257 for (idx = 0; idx < PNP_MAX_DMA; idx++) { 304 - if (!(res->dma_resource[idx].flags & IORESOURCE_AUTO)) 305 - continue; 306 - res->dma_resource[idx].start = -1; 307 - res->dma_resource[idx].end = -1; 308 - res->dma_resource[idx].flags = 309 - IORESOURCE_DMA | IORESOURCE_AUTO | IORESOURCE_UNSET; 258 + res = &dev->res->dma[idx].res; 259 + if (res->flags & IORESOURCE_AUTO) { 260 + res->flags = IORESOURCE_DMA; 261 + pnp_init_resource(res); 262 + } 310 263 } 311 264 for (idx = 0; idx < PNP_MAX_PORT; idx++) { 312 - if (!(res->port_resource[idx].flags & IORESOURCE_AUTO)) 313 - continue; 314 - res->port_resource[idx].start = 0; 315 - res->port_resource[idx].end = 0; 316 - res->port_resource[idx].flags = 317 - IORESOURCE_IO | IORESOURCE_AUTO | IORESOURCE_UNSET; 265 + res = &dev->res->port[idx].res; 266 + if 
(res->flags & IORESOURCE_AUTO) { 267 + res->flags = IORESOURCE_IO; 268 + pnp_init_resource(res); 269 + } 318 270 } 319 271 for (idx = 0; idx < PNP_MAX_MEM; idx++) { 320 - if (!(res->mem_resource[idx].flags & IORESOURCE_AUTO)) 321 - continue; 322 - res->mem_resource[idx].start = 0; 323 - res->mem_resource[idx].end = 0; 324 - res->mem_resource[idx].flags = 325 - IORESOURCE_MEM | IORESOURCE_AUTO | IORESOURCE_UNSET; 272 + res = &dev->res->mem[idx].res; 273 + if (res->flags & IORESOURCE_AUTO) { 274 + res->flags = IORESOURCE_MEM; 275 + pnp_init_resource(res); 276 + } 326 277 } 327 278 } 328 279 ··· 341 298 if (!pnp_can_configure(dev)) 342 299 return -ENODEV; 343 300 301 + dbg_pnp_show_resources(dev, "before pnp_assign_resources"); 344 302 mutex_lock(&pnp_res_mutex); 345 - pnp_clean_resource_table(&dev->res); /* start with a fresh slate */ 303 + pnp_clean_resource_table(dev); 346 304 if (dev->independent) { 305 + dev_dbg(&dev->dev, "assigning independent options\n"); 347 306 port = dev->independent->port; 348 307 mem = dev->independent->mem; 349 308 irq = dev->independent->irq; ··· 378 333 if (depnum) { 379 334 struct pnp_option *dep; 380 335 int i; 336 + 337 + dev_dbg(&dev->dev, "assigning dependent option %d\n", depnum); 381 338 for (i = 1, dep = dev->dependent; i < depnum; 382 339 i++, dep = dep->next) 383 340 if (!dep) ··· 415 368 goto fail; 416 369 417 370 mutex_unlock(&pnp_res_mutex); 371 + dbg_pnp_show_resources(dev, "after pnp_assign_resources"); 418 372 return 1; 419 373 420 374 fail: 421 - pnp_clean_resource_table(&dev->res); 375 + pnp_clean_resource_table(dev); 422 376 mutex_unlock(&pnp_res_mutex); 377 + dbg_pnp_show_resources(dev, "after pnp_assign_resources (failed)"); 423 378 return 0; 424 - } 425 - 426 - /** 427 - * pnp_manual_config_dev - Disables Auto Config and Manually sets the resource table 428 - * @dev: pointer to the desired device 429 - * @res: pointer to the new resource config 430 - * @mode: 0 or PNP_CONFIG_FORCE 431 - * 432 - * This function can 
be used by drivers that want to manually set thier resources. 433 - */ 434 - int pnp_manual_config_dev(struct pnp_dev *dev, struct pnp_resource_table *res, 435 - int mode) 436 - { 437 - int i; 438 - struct pnp_resource_table *bak; 439 - 440 - if (!pnp_can_configure(dev)) 441 - return -ENODEV; 442 - bak = pnp_alloc(sizeof(struct pnp_resource_table)); 443 - if (!bak) 444 - return -ENOMEM; 445 - *bak = dev->res; 446 - 447 - mutex_lock(&pnp_res_mutex); 448 - dev->res = *res; 449 - if (!(mode & PNP_CONFIG_FORCE)) { 450 - for (i = 0; i < PNP_MAX_PORT; i++) { 451 - if (!pnp_check_port(dev, i)) 452 - goto fail; 453 - } 454 - for (i = 0; i < PNP_MAX_MEM; i++) { 455 - if (!pnp_check_mem(dev, i)) 456 - goto fail; 457 - } 458 - for (i = 0; i < PNP_MAX_IRQ; i++) { 459 - if (!pnp_check_irq(dev, i)) 460 - goto fail; 461 - } 462 - for (i = 0; i < PNP_MAX_DMA; i++) { 463 - if (!pnp_check_dma(dev, i)) 464 - goto fail; 465 - } 466 - } 467 - mutex_unlock(&pnp_res_mutex); 468 - 469 - kfree(bak); 470 - return 0; 471 - 472 - fail: 473 - dev->res = *bak; 474 - mutex_unlock(&pnp_res_mutex); 475 - kfree(bak); 476 - return -EINVAL; 477 379 } 478 380 479 381 /** ··· 469 473 return -EINVAL; 470 474 } 471 475 472 - if (dev->protocol->set(dev, &dev->res) < 0) { 476 + dbg_pnp_show_resources(dev, "pnp_start_dev"); 477 + if (dev->protocol->set(dev) < 0) { 473 478 dev_err(&dev->dev, "activation failed\n"); 474 479 return -EIO; 475 480 } ··· 546 549 547 550 /* release the resources so that other devices can use them */ 548 551 mutex_lock(&pnp_res_mutex); 549 - pnp_clean_resource_table(&dev->res); 552 + pnp_clean_resource_table(dev); 550 553 mutex_unlock(&pnp_res_mutex); 551 554 552 555 return 0; 553 556 } 554 557 555 - /** 556 - * pnp_resource_change - change one resource 557 - * @resource: pointer to resource to be changed 558 - * @start: start of region 559 - * @size: size of region 560 - */ 561 - void pnp_resource_change(struct resource *resource, resource_size_t start, 562 - resource_size_t size) 
563 - { 564 - resource->flags &= ~(IORESOURCE_AUTO | IORESOURCE_UNSET); 565 - resource->start = start; 566 - resource->end = start + size - 1; 567 - } 568 - 569 - EXPORT_SYMBOL(pnp_manual_config_dev); 570 558 EXPORT_SYMBOL(pnp_start_dev); 571 559 EXPORT_SYMBOL(pnp_stop_dev); 572 560 EXPORT_SYMBOL(pnp_activate_dev); 573 561 EXPORT_SYMBOL(pnp_disable_dev); 574 - EXPORT_SYMBOL(pnp_resource_change); 575 - EXPORT_SYMBOL(pnp_init_resource_table);
+4
drivers/pnp/pnpacpi/Makefile
···
3 3 #
4 4
5 5 obj-y := core.o rsparser.o
6 +
7 + ifeq ($(CONFIG_PNP_DEBUG),y)
8 + EXTRA_CFLAGS += -DDEBUG
9 + endif
+19 -73
drivers/pnp/pnpacpi/core.c
··· 25 25 #include <acpi/acpi_bus.h> 26 26 #include <acpi/actypes.h> 27 27 28 + #include "../base.h" 28 29 #include "pnpacpi.h" 29 30 30 31 static int num = 0; ··· 45 44 {"", 0}, 46 45 }; 47 46 48 - static inline int is_exclusive_device(struct acpi_device *dev) 47 + static inline int __init is_exclusive_device(struct acpi_device *dev) 49 48 { 50 49 return (!acpi_match_device_ids(dev, excluded_id_list)); 51 50 } ··· 73 72 return 1; 74 73 } 75 74 76 - static void __init pnpidacpi_to_pnpid(char *id, char *str) 75 + static int pnpacpi_get_resources(struct pnp_dev *dev) 77 76 { 78 - str[0] = id[0]; 79 - str[1] = id[1]; 80 - str[2] = id[2]; 81 - str[3] = tolower(id[3]); 82 - str[4] = tolower(id[4]); 83 - str[5] = tolower(id[5]); 84 - str[6] = tolower(id[6]); 85 - str[7] = '\0'; 77 + dev_dbg(&dev->dev, "get resources\n"); 78 + return pnpacpi_parse_allocated_resource(dev); 86 79 } 87 80 88 - static int pnpacpi_get_resources(struct pnp_dev *dev, 89 - struct pnp_resource_table *res) 90 - { 91 - acpi_status status; 92 - 93 - status = pnpacpi_parse_allocated_resource((acpi_handle) dev->data, 94 - &dev->res); 95 - return ACPI_FAILURE(status) ? 
-ENODEV : 0; 96 - } 97 - 98 - static int pnpacpi_set_resources(struct pnp_dev *dev, 99 - struct pnp_resource_table *res) 81 + static int pnpacpi_set_resources(struct pnp_dev *dev) 100 82 { 101 83 acpi_handle handle = dev->data; 102 84 struct acpi_buffer buffer; 103 - int ret = 0; 85 + int ret; 104 86 acpi_status status; 105 87 106 - ret = pnpacpi_build_resource_template(handle, &buffer); 88 + dev_dbg(&dev->dev, "set resources\n"); 89 + ret = pnpacpi_build_resource_template(dev, &buffer); 107 90 if (ret) 108 91 return ret; 109 - ret = pnpacpi_encode_resources(res, &buffer); 92 + ret = pnpacpi_encode_resources(dev, &buffer); 110 93 if (ret) { 111 94 kfree(buffer.pointer); 112 95 return ret; ··· 148 163 { 149 164 acpi_handle temp = NULL; 150 165 acpi_status status; 151 - struct pnp_id *dev_id; 152 166 struct pnp_dev *dev; 153 167 154 168 status = acpi_get_handle(device->handle, "_CRS", &temp); ··· 155 171 is_exclusive_device(device)) 156 172 return 0; 157 173 158 - dev = kzalloc(sizeof(struct pnp_dev), GFP_KERNEL); 159 - if (!dev) { 160 - pnp_err("Out of memory"); 174 + dev = pnp_alloc_dev(&pnpacpi_protocol, num, acpi_device_hid(device)); 175 + if (!dev) 161 176 return -ENOMEM; 162 - } 177 + 163 178 dev->data = device->handle; 164 179 /* .enabled means the device can decode the resources */ 165 180 dev->active = device->status.enabled; ··· 174 191 if (ACPI_SUCCESS(status)) 175 192 dev->capabilities |= PNP_DISABLE; 176 193 177 - dev->protocol = &pnpacpi_protocol; 178 - 179 194 if (strlen(acpi_device_name(device))) 180 195 strncpy(dev->name, acpi_device_name(device), sizeof(dev->name)); 181 196 else 182 197 strncpy(dev->name, acpi_device_bid(device), sizeof(dev->name)); 183 198 184 - dev->number = num; 199 + if (dev->active) 200 + pnpacpi_parse_allocated_resource(dev); 185 201 186 - /* set the initial values for the PnP device */ 187 - dev_id = kzalloc(sizeof(struct pnp_id), GFP_KERNEL); 188 - if (!dev_id) 189 - goto err; 190 - 
pnpidacpi_to_pnpid(acpi_device_hid(device), dev_id->id); 191 - pnp_add_id(dev_id, dev); 202 + if (dev->capabilities & PNP_CONFIGURABLE) 203 + pnpacpi_parse_resource_option_data(dev); 192 204 193 - if (dev->active) { 194 - /* parse allocated resource */ 195 - status = pnpacpi_parse_allocated_resource(device->handle, 196 - &dev->res); 197 - if (ACPI_FAILURE(status) && (status != AE_NOT_FOUND)) { 198 - pnp_err("PnPACPI: METHOD_NAME__CRS failure for %s", 199 - dev_id->id); 200 - goto err1; 201 - } 202 - } 203 - 204 - if (dev->capabilities & PNP_CONFIGURABLE) { 205 - status = pnpacpi_parse_resource_option_data(device->handle, 206 - dev); 207 - if (ACPI_FAILURE(status) && (status != AE_NOT_FOUND)) { 208 - pnp_err("PnPACPI: METHOD_NAME__PRS failure for %s", 209 - dev_id->id); 210 - goto err1; 211 - } 212 - } 213 - 214 - /* parse compatible ids */ 215 205 if (device->flags.compatible_ids) { 216 206 struct acpi_compatible_id_list *cid_list = device->pnp.cid_list; 217 207 int i; ··· 192 236 for (i = 0; i < cid_list->count; i++) { 193 237 if (!ispnpidacpi(cid_list->id[i].value)) 194 238 continue; 195 - dev_id = kzalloc(sizeof(struct pnp_id), GFP_KERNEL); 196 - if (!dev_id) 197 - continue; 198 - 199 - pnpidacpi_to_pnpid(cid_list->id[i].value, dev_id->id); 200 - pnp_add_id(dev_id, dev); 239 + pnp_add_id(dev, cid_list->id[i].value); 201 240 } 202 241 } 203 242 204 243 /* clear out the damaged flags */ 205 244 if (!dev->active) 206 - pnp_init_resource_table(&dev->res); 245 + pnp_init_resources(dev); 207 246 pnp_add_device(dev); 208 247 num++; 209 248 210 249 return AE_OK; 211 - err1: 212 - kfree(dev_id); 213 - err: 214 - kfree(dev); 215 - return -EINVAL; 216 250 } 217 251 218 252 static acpi_status __init pnpacpi_add_device_handler(acpi_handle handle,
+4 -4
drivers/pnp/pnpacpi/pnpacpi.h
···
5 5 #include <linux/acpi.h>
6 6 #include <linux/pnp.h>
7 7
8 - acpi_status pnpacpi_parse_allocated_resource(acpi_handle, struct pnp_resource_table*);
9 - acpi_status pnpacpi_parse_resource_option_data(acpi_handle, struct pnp_dev*);
10 - int pnpacpi_encode_resources(struct pnp_resource_table *, struct acpi_buffer *);
11 - int pnpacpi_build_resource_template(acpi_handle, struct acpi_buffer*);
8 + int pnpacpi_parse_allocated_resource(struct pnp_dev *);
9 + int pnpacpi_parse_resource_option_data(struct pnp_dev *);
10 + int pnpacpi_encode_resources(struct pnp_dev *, struct acpi_buffer *);
11 + int pnpacpi_build_resource_template(struct pnp_dev *, struct acpi_buffer *);
12 12 #endif
+312 -277
drivers/pnp/pnpacpi/rsparser.c
··· 21 21 #include <linux/kernel.h> 22 22 #include <linux/acpi.h> 23 23 #include <linux/pci.h> 24 + #include <linux/pnp.h> 25 + #include "../base.h" 24 26 #include "pnpacpi.h" 25 27 26 28 #ifdef CONFIG_IA64 ··· 34 32 /* 35 33 * Allocated Resources 36 34 */ 37 - static int irq_flags(int triggering, int polarity) 35 + static int irq_flags(int triggering, int polarity, int shareable) 38 36 { 37 + int flags; 38 + 39 39 if (triggering == ACPI_LEVEL_SENSITIVE) { 40 40 if (polarity == ACPI_ACTIVE_LOW) 41 - return IORESOURCE_IRQ_LOWLEVEL; 41 + flags = IORESOURCE_IRQ_LOWLEVEL; 42 42 else 43 - return IORESOURCE_IRQ_HIGHLEVEL; 43 + flags = IORESOURCE_IRQ_HIGHLEVEL; 44 44 } else { 45 45 if (polarity == ACPI_ACTIVE_LOW) 46 - return IORESOURCE_IRQ_LOWEDGE; 46 + flags = IORESOURCE_IRQ_LOWEDGE; 47 47 else 48 - return IORESOURCE_IRQ_HIGHEDGE; 48 + flags = IORESOURCE_IRQ_HIGHEDGE; 49 49 } 50 + 51 + if (shareable) 52 + flags |= IORESOURCE_IRQ_SHAREABLE; 53 + 54 + return flags; 50 55 } 51 56 52 57 static void decode_irq_flags(int flag, int *triggering, int *polarity) ··· 78 69 } 79 70 } 80 71 81 - static void pnpacpi_parse_allocated_irqresource(struct pnp_resource_table *res, 72 + static void pnpacpi_parse_allocated_irqresource(struct pnp_dev *dev, 82 73 u32 gsi, int triggering, 83 74 int polarity, int shareable) 84 75 { 85 - int i = 0; 86 - int irq; 76 + int irq, flags; 87 77 int p, t; 88 - static unsigned char warned; 89 78 90 79 if (!valid_IRQ(gsi)) 91 80 return; 92 81 93 - while (!(res->irq_resource[i].flags & IORESOURCE_UNSET) && 94 - i < PNP_MAX_IRQ) 95 - i++; 96 - if (i >= PNP_MAX_IRQ) { 97 - if (!warned) { 98 - printk(KERN_WARNING "pnpacpi: exceeded the max number" 99 - " of IRQ resources: %d\n", PNP_MAX_IRQ); 100 - warned = 1; 101 - } 102 - return; 103 - } 104 82 /* 105 83 * in IO-APIC mode, use overrided attribute. Two reasons: 106 84 * 1. BIOS bug in DSDT ··· 98 102 p = p ? 
ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH; 99 103 100 104 if (triggering != t || polarity != p) { 101 - pnp_warn("IRQ %d override to %s, %s", 105 + dev_warn(&dev->dev, "IRQ %d override to %s, %s\n", 102 106 gsi, t ? "edge":"level", p ? "low":"high"); 103 107 triggering = t; 104 108 polarity = p; 105 109 } 106 110 } 107 111 108 - res->irq_resource[i].flags = IORESOURCE_IRQ; // Also clears _UNSET flag 109 - res->irq_resource[i].flags |= irq_flags(triggering, polarity); 112 + flags = irq_flags(triggering, polarity, shareable); 110 113 irq = acpi_register_gsi(gsi, triggering, polarity); 111 - if (irq < 0) { 112 - res->irq_resource[i].flags |= IORESOURCE_DISABLED; 113 - return; 114 - } 114 + if (irq >= 0) 115 + pcibios_penalize_isa_irq(irq, 1); 116 + else 117 + flags |= IORESOURCE_DISABLED; 115 118 116 - if (shareable) 117 - res->irq_resource[i].flags |= IORESOURCE_IRQ_SHAREABLE; 118 - 119 - res->irq_resource[i].start = irq; 120 - res->irq_resource[i].end = irq; 121 - pcibios_penalize_isa_irq(irq, 1); 119 + pnp_add_irq_resource(dev, irq, flags); 122 120 } 123 121 124 122 static int dma_flags(int type, int bus_master, int transfer) ··· 158 168 return flags; 159 169 } 160 170 161 - static void pnpacpi_parse_allocated_dmaresource(struct pnp_resource_table *res, 162 - u32 dma, int type, 163 - int bus_master, int transfer) 171 + static void pnpacpi_parse_allocated_ioresource(struct pnp_dev *dev, u64 start, 172 + u64 len, int io_decode) 164 173 { 165 - int i = 0; 166 - static unsigned char warned; 174 + int flags = 0; 175 + u64 end = start + len - 1; 167 176 168 - while (i < PNP_MAX_DMA && 169 - !(res->dma_resource[i].flags & IORESOURCE_UNSET)) 170 - i++; 171 - if (i < PNP_MAX_DMA) { 172 - res->dma_resource[i].flags = IORESOURCE_DMA; // Also clears _UNSET flag 173 - res->dma_resource[i].flags |= 174 - dma_flags(type, bus_master, transfer); 175 - if (dma == -1) { 176 - res->dma_resource[i].flags |= IORESOURCE_DISABLED; 177 - return; 178 - } 179 - res->dma_resource[i].start = dma; 180 
- res->dma_resource[i].end = dma; 181 - } else if (!warned) { 182 - printk(KERN_WARNING "pnpacpi: exceeded the max number of DMA " 183 - "resources: %d \n", PNP_MAX_DMA); 184 - warned = 1; 185 - } 177 + if (io_decode == ACPI_DECODE_16) 178 + flags |= PNP_PORT_FLAG_16BITADDR; 179 + if (len == 0 || end >= 0x10003) 180 + flags |= IORESOURCE_DISABLED; 181 + 182 + pnp_add_io_resource(dev, start, end, flags); 186 183 } 187 184 188 - static void pnpacpi_parse_allocated_ioresource(struct pnp_resource_table *res, 189 - u64 io, u64 len, int io_decode) 190 - { 191 - int i = 0; 192 - static unsigned char warned; 193 - 194 - while (!(res->port_resource[i].flags & IORESOURCE_UNSET) && 195 - i < PNP_MAX_PORT) 196 - i++; 197 - if (i < PNP_MAX_PORT) { 198 - res->port_resource[i].flags = IORESOURCE_IO; // Also clears _UNSET flag 199 - if (io_decode == ACPI_DECODE_16) 200 - res->port_resource[i].flags |= PNP_PORT_FLAG_16BITADDR; 201 - if (len <= 0 || (io + len - 1) >= 0x10003) { 202 - res->port_resource[i].flags |= IORESOURCE_DISABLED; 203 - return; 204 - } 205 - res->port_resource[i].start = io; 206 - res->port_resource[i].end = io + len - 1; 207 - } else if (!warned) { 208 - printk(KERN_WARNING "pnpacpi: exceeded the max number of IO " 209 - "resources: %d \n", PNP_MAX_PORT); 210 - warned = 1; 211 - } 212 - } 213 - 214 - static void pnpacpi_parse_allocated_memresource(struct pnp_resource_table *res, 215 - u64 mem, u64 len, 185 + static void pnpacpi_parse_allocated_memresource(struct pnp_dev *dev, 186 + u64 start, u64 len, 216 187 int write_protect) 217 188 { 218 - int i = 0; 219 - static unsigned char warned; 189 + int flags = 0; 190 + u64 end = start + len - 1; 220 191 221 - while (!(res->mem_resource[i].flags & IORESOURCE_UNSET) && 222 - (i < PNP_MAX_MEM)) 223 - i++; 224 - if (i < PNP_MAX_MEM) { 225 - res->mem_resource[i].flags = IORESOURCE_MEM; // Also clears _UNSET flag 226 - if (len <= 0) { 227 - res->mem_resource[i].flags |= IORESOURCE_DISABLED; 228 - return; 229 - } 230 - if 
(write_protect == ACPI_READ_WRITE_MEMORY) 231 - res->mem_resource[i].flags |= IORESOURCE_MEM_WRITEABLE; 192 + if (len == 0) 193 + flags |= IORESOURCE_DISABLED; 194 + if (write_protect == ACPI_READ_WRITE_MEMORY) 195 + flags |= IORESOURCE_MEM_WRITEABLE; 232 196 233 - res->mem_resource[i].start = mem; 234 - res->mem_resource[i].end = mem + len - 1; 235 - } else if (!warned) { 236 - printk(KERN_WARNING "pnpacpi: exceeded the max number of mem " 237 - "resources: %d\n", PNP_MAX_MEM); 238 - warned = 1; 239 - } 197 + pnp_add_mem_resource(dev, start, end, flags); 240 198 } 241 199 242 - static void pnpacpi_parse_allocated_address_space(struct pnp_resource_table *res_table, 200 + static void pnpacpi_parse_allocated_address_space(struct pnp_dev *dev, 243 201 struct acpi_resource *res) 244 202 { 245 203 struct acpi_resource_address64 addr, *p = &addr; ··· 195 257 196 258 status = acpi_resource_to_address64(res, p); 197 259 if (!ACPI_SUCCESS(status)) { 198 - pnp_warn("PnPACPI: failed to convert resource type %d", 260 + dev_warn(&dev->dev, "failed to convert resource type %d\n", 199 261 res->type); 200 262 return; 201 263 } ··· 204 266 return; 205 267 206 268 if (p->resource_type == ACPI_MEMORY_RANGE) 207 - pnpacpi_parse_allocated_memresource(res_table, 269 + pnpacpi_parse_allocated_memresource(dev, 208 270 p->minimum, p->address_length, 209 271 p->info.mem.write_protect); 210 272 else if (p->resource_type == ACPI_IO_RANGE) 211 - pnpacpi_parse_allocated_ioresource(res_table, 273 + pnpacpi_parse_allocated_ioresource(dev, 212 274 p->minimum, p->address_length, 213 275 p->granularity == 0xfff ? 
ACPI_DECODE_10 : 214 276 ACPI_DECODE_16); ··· 217 279 static acpi_status pnpacpi_allocated_resource(struct acpi_resource *res, 218 280 void *data) 219 281 { 220 - struct pnp_resource_table *res_table = data; 221 - int i; 282 + struct pnp_dev *dev = data; 283 + struct acpi_resource_irq *irq; 284 + struct acpi_resource_dma *dma; 285 + struct acpi_resource_io *io; 286 + struct acpi_resource_fixed_io *fixed_io; 287 + struct acpi_resource_memory24 *memory24; 288 + struct acpi_resource_memory32 *memory32; 289 + struct acpi_resource_fixed_memory32 *fixed_memory32; 290 + struct acpi_resource_extended_irq *extended_irq; 291 + int i, flags; 222 292 223 293 switch (res->type) { 224 294 case ACPI_RESOURCE_TYPE_IRQ: ··· 234 288 * Per spec, only one interrupt per descriptor is allowed in 235 289 * _CRS, but some firmware violates this, so parse them all. 236 290 */ 237 - for (i = 0; i < res->data.irq.interrupt_count; i++) { 238 - pnpacpi_parse_allocated_irqresource(res_table, 239 - res->data.irq.interrupts[i], 240 - res->data.irq.triggering, 241 - res->data.irq.polarity, 242 - res->data.irq.sharable); 291 + irq = &res->data.irq; 292 + for (i = 0; i < irq->interrupt_count; i++) { 293 + pnpacpi_parse_allocated_irqresource(dev, 294 + irq->interrupts[i], 295 + irq->triggering, 296 + irq->polarity, 297 + irq->sharable); 243 298 } 244 299 break; 245 300 246 301 case ACPI_RESOURCE_TYPE_DMA: 247 - if (res->data.dma.channel_count > 0) 248 - pnpacpi_parse_allocated_dmaresource(res_table, 249 - res->data.dma.channels[0], 250 - res->data.dma.type, 251 - res->data.dma.bus_master, 252 - res->data.dma.transfer); 302 + dma = &res->data.dma; 303 + if (dma->channel_count > 0) { 304 + flags = dma_flags(dma->type, dma->bus_master, 305 + dma->transfer); 306 + if (dma->channels[0] == (u8) -1) 307 + flags |= IORESOURCE_DISABLED; 308 + pnp_add_dma_resource(dev, dma->channels[0], flags); 309 + } 253 310 break; 254 311 255 312 case ACPI_RESOURCE_TYPE_IO: 256 - 
pnpacpi_parse_allocated_ioresource(res_table, 257 - res->data.io.minimum, 258 - res->data.io.address_length, 259 - res->data.io.io_decode); 313 + io = &res->data.io; 314 + pnpacpi_parse_allocated_ioresource(dev, 315 + io->minimum, 316 + io->address_length, 317 + io->io_decode); 260 318 break; 261 319 262 320 case ACPI_RESOURCE_TYPE_START_DEPENDENT: ··· 268 318 break; 269 319 270 320 case ACPI_RESOURCE_TYPE_FIXED_IO: 271 - pnpacpi_parse_allocated_ioresource(res_table, 272 - res->data.fixed_io.address, 273 - res->data.fixed_io.address_length, 321 + fixed_io = &res->data.fixed_io; 322 + pnpacpi_parse_allocated_ioresource(dev, 323 + fixed_io->address, 324 + fixed_io->address_length, 274 325 ACPI_DECODE_10); 275 326 break; 276 327 ··· 282 331 break; 283 332 284 333 case ACPI_RESOURCE_TYPE_MEMORY24: 285 - pnpacpi_parse_allocated_memresource(res_table, 286 - res->data.memory24.minimum, 287 - res->data.memory24.address_length, 288 - res->data.memory24.write_protect); 334 + memory24 = &res->data.memory24; 335 + pnpacpi_parse_allocated_memresource(dev, 336 + memory24->minimum, 337 + memory24->address_length, 338 + memory24->write_protect); 289 339 break; 290 340 case ACPI_RESOURCE_TYPE_MEMORY32: 291 - pnpacpi_parse_allocated_memresource(res_table, 292 - res->data.memory32.minimum, 293 - res->data.memory32.address_length, 294 - res->data.memory32.write_protect); 341 + memory32 = &res->data.memory32; 342 + pnpacpi_parse_allocated_memresource(dev, 343 + memory32->minimum, 344 + memory32->address_length, 345 + memory32->write_protect); 295 346 break; 296 347 case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: 297 - pnpacpi_parse_allocated_memresource(res_table, 298 - res->data.fixed_memory32.address, 299 - res->data.fixed_memory32.address_length, 300 - res->data.fixed_memory32.write_protect); 348 + fixed_memory32 = &res->data.fixed_memory32; 349 + pnpacpi_parse_allocated_memresource(dev, 350 + fixed_memory32->address, 351 + fixed_memory32->address_length, 352 + 
fixed_memory32->write_protect); 301 353 break; 302 354 case ACPI_RESOURCE_TYPE_ADDRESS16: 303 355 case ACPI_RESOURCE_TYPE_ADDRESS32: 304 356 case ACPI_RESOURCE_TYPE_ADDRESS64: 305 - pnpacpi_parse_allocated_address_space(res_table, res); 357 + pnpacpi_parse_allocated_address_space(dev, res); 306 358 break; 307 359 308 360 case ACPI_RESOURCE_TYPE_EXTENDED_ADDRESS64: ··· 314 360 break; 315 361 316 362 case ACPI_RESOURCE_TYPE_EXTENDED_IRQ: 317 - if (res->data.extended_irq.producer_consumer == ACPI_PRODUCER) 363 + extended_irq = &res->data.extended_irq; 364 + if (extended_irq->producer_consumer == ACPI_PRODUCER) 318 365 return AE_OK; 319 366 320 - for (i = 0; i < res->data.extended_irq.interrupt_count; i++) { 321 - pnpacpi_parse_allocated_irqresource(res_table, 322 - res->data.extended_irq.interrupts[i], 323 - res->data.extended_irq.triggering, 324 - res->data.extended_irq.polarity, 325 - res->data.extended_irq.sharable); 367 + for (i = 0; i < extended_irq->interrupt_count; i++) { 368 + pnpacpi_parse_allocated_irqresource(dev, 369 + extended_irq->interrupts[i], 370 + extended_irq->triggering, 371 + extended_irq->polarity, 372 + extended_irq->sharable); 326 373 } 327 374 break; 328 375 ··· 331 376 break; 332 377 333 378 default: 334 - pnp_warn("PnPACPI: unknown resource type %d", res->type); 379 + dev_warn(&dev->dev, "unknown resource type %d in _CRS\n", 380 + res->type); 335 381 return AE_ERROR; 336 382 } 337 383 338 384 return AE_OK; 339 385 } 340 386 341 - acpi_status pnpacpi_parse_allocated_resource(acpi_handle handle, 342 - struct pnp_resource_table * res) 387 + int pnpacpi_parse_allocated_resource(struct pnp_dev *dev) 343 388 { 344 - /* Blank the resource table values */ 345 - pnp_init_resource_table(res); 389 + acpi_handle handle = dev->data; 390 + acpi_status status; 346 391 347 - return acpi_walk_resources(handle, METHOD_NAME__CRS, 348 - pnpacpi_allocated_resource, res); 392 + dev_dbg(&dev->dev, "parse allocated resources\n"); 393 + 394 + 
pnp_init_resources(dev); 395 + 396 + status = acpi_walk_resources(handle, METHOD_NAME__CRS, 397 + pnpacpi_allocated_resource, dev); 398 + 399 + if (ACPI_FAILURE(status)) { 400 + if (status != AE_NOT_FOUND) 401 + dev_err(&dev->dev, "can't evaluate _CRS: %d", status); 402 + return -EPERM; 403 + } 404 + return 0; 349 405 } 350 406 351 - static __init void pnpacpi_parse_dma_option(struct pnp_option *option, 407 + static __init void pnpacpi_parse_dma_option(struct pnp_dev *dev, 408 + struct pnp_option *option, 352 409 struct acpi_resource_dma *p) 353 410 { 354 411 int i; ··· 377 410 378 411 dma->flags = dma_flags(p->type, p->bus_master, p->transfer); 379 412 380 - pnp_register_dma_resource(option, dma); 413 + pnp_register_dma_resource(dev, option, dma); 381 414 } 382 415 383 - static __init void pnpacpi_parse_irq_option(struct pnp_option *option, 416 + static __init void pnpacpi_parse_irq_option(struct pnp_dev *dev, 417 + struct pnp_option *option, 384 418 struct acpi_resource_irq *p) 385 419 { 386 420 int i; ··· 396 428 for (i = 0; i < p->interrupt_count; i++) 397 429 if (p->interrupts[i]) 398 430 __set_bit(p->interrupts[i], irq->map); 399 - irq->flags = irq_flags(p->triggering, p->polarity); 431 + irq->flags = irq_flags(p->triggering, p->polarity, p->sharable); 400 432 401 - pnp_register_irq_resource(option, irq); 433 + pnp_register_irq_resource(dev, option, irq); 402 434 } 403 435 404 - static __init void pnpacpi_parse_ext_irq_option(struct pnp_option *option, 436 + static __init void pnpacpi_parse_ext_irq_option(struct pnp_dev *dev, 437 + struct pnp_option *option, 405 438 struct acpi_resource_extended_irq *p) 406 439 { 407 440 int i; ··· 417 448 for (i = 0; i < p->interrupt_count; i++) 418 449 if (p->interrupts[i]) 419 450 __set_bit(p->interrupts[i], irq->map); 420 - irq->flags = irq_flags(p->triggering, p->polarity); 451 + irq->flags = irq_flags(p->triggering, p->polarity, p->sharable); 421 452 422 - pnp_register_irq_resource(option, irq); 453 + 
pnp_register_irq_resource(dev, option, irq); 423 454 } 424 455 425 - static __init void pnpacpi_parse_port_option(struct pnp_option *option, 456 + static __init void pnpacpi_parse_port_option(struct pnp_dev *dev, 457 + struct pnp_option *option, 426 458 struct acpi_resource_io *io) 427 459 { 428 460 struct pnp_port *port; ··· 439 469 port->size = io->address_length; 440 470 port->flags = ACPI_DECODE_16 == io->io_decode ? 441 471 PNP_PORT_FLAG_16BITADDR : 0; 442 - pnp_register_port_resource(option, port); 472 + pnp_register_port_resource(dev, option, port); 443 473 } 444 474 445 - static __init void pnpacpi_parse_fixed_port_option(struct pnp_option *option, 475 + static __init void pnpacpi_parse_fixed_port_option(struct pnp_dev *dev, 476 + struct pnp_option *option, 446 477 struct acpi_resource_fixed_io *io) 447 478 { 448 479 struct pnp_port *port; ··· 457 486 port->size = io->address_length; 458 487 port->align = 0; 459 488 port->flags = PNP_PORT_FLAG_FIXED; 460 - pnp_register_port_resource(option, port); 489 + pnp_register_port_resource(dev, option, port); 461 490 } 462 491 463 - static __init void pnpacpi_parse_mem24_option(struct pnp_option *option, 492 + static __init void pnpacpi_parse_mem24_option(struct pnp_dev *dev, 493 + struct pnp_option *option, 464 494 struct acpi_resource_memory24 *p) 465 495 { 466 496 struct pnp_mem *mem; ··· 479 507 mem->flags = (ACPI_READ_WRITE_MEMORY == p->write_protect) ? 480 508 IORESOURCE_MEM_WRITEABLE : 0; 481 509 482 - pnp_register_mem_resource(option, mem); 510 + pnp_register_mem_resource(dev, option, mem); 483 511 } 484 512 485 - static __init void pnpacpi_parse_mem32_option(struct pnp_option *option, 513 + static __init void pnpacpi_parse_mem32_option(struct pnp_dev *dev, 514 + struct pnp_option *option, 486 515 struct acpi_resource_memory32 *p) 487 516 { 488 517 struct pnp_mem *mem; ··· 501 528 mem->flags = (ACPI_READ_WRITE_MEMORY == p->write_protect) ? 
502 529 IORESOURCE_MEM_WRITEABLE : 0; 503 530 504 - pnp_register_mem_resource(option, mem); 531 + pnp_register_mem_resource(dev, option, mem); 505 532 } 506 533 507 - static __init void pnpacpi_parse_fixed_mem32_option(struct pnp_option *option, 534 + static __init void pnpacpi_parse_fixed_mem32_option(struct pnp_dev *dev, 535 + struct pnp_option *option, 508 536 struct acpi_resource_fixed_memory32 *p) 509 537 { 510 538 struct pnp_mem *mem; ··· 522 548 mem->flags = (ACPI_READ_WRITE_MEMORY == p->write_protect) ? 523 549 IORESOURCE_MEM_WRITEABLE : 0; 524 550 525 - pnp_register_mem_resource(option, mem); 551 + pnp_register_mem_resource(dev, option, mem); 526 552 } 527 553 528 - static __init void pnpacpi_parse_address_option(struct pnp_option *option, 554 + static __init void pnpacpi_parse_address_option(struct pnp_dev *dev, 555 + struct pnp_option *option, 529 556 struct acpi_resource *r) 530 557 { 531 558 struct acpi_resource_address64 addr, *p = &addr; ··· 554 579 mem->flags = (p->info.mem.write_protect == 555 580 ACPI_READ_WRITE_MEMORY) ? 
IORESOURCE_MEM_WRITEABLE 556 581 : 0; 557 - pnp_register_mem_resource(option, mem); 582 + pnp_register_mem_resource(dev, option, mem); 558 583 } else if (p->resource_type == ACPI_IO_RANGE) { 559 584 port = kzalloc(sizeof(struct pnp_port), GFP_KERNEL); 560 585 if (!port) ··· 563 588 port->size = p->address_length; 564 589 port->align = 0; 565 590 port->flags = PNP_PORT_FLAG_FIXED; 566 - pnp_register_port_resource(option, port); 591 + pnp_register_port_resource(dev, option, port); 567 592 } 568 593 } 569 594 ··· 583 608 584 609 switch (res->type) { 585 610 case ACPI_RESOURCE_TYPE_IRQ: 586 - pnpacpi_parse_irq_option(option, &res->data.irq); 611 + pnpacpi_parse_irq_option(dev, option, &res->data.irq); 587 612 break; 588 613 589 614 case ACPI_RESOURCE_TYPE_DMA: 590 - pnpacpi_parse_dma_option(option, &res->data.dma); 615 + pnpacpi_parse_dma_option(dev, option, &res->data.dma); 591 616 break; 592 617 593 618 case ACPI_RESOURCE_TYPE_START_DEPENDENT: ··· 617 642 case ACPI_RESOURCE_TYPE_END_DEPENDENT: 618 643 /*only one EndDependentFn is allowed */ 619 644 if (!parse_data->option_independent) { 620 - pnp_warn("PnPACPI: more than one EndDependentFn"); 645 + dev_warn(&dev->dev, "more than one EndDependentFn " 646 + "in _PRS\n"); 621 647 return AE_ERROR; 622 648 } 623 649 parse_data->option = parse_data->option_independent; 624 650 parse_data->option_independent = NULL; 651 + dev_dbg(&dev->dev, "end dependent options\n"); 625 652 break; 626 653 627 654 case ACPI_RESOURCE_TYPE_IO: 628 - pnpacpi_parse_port_option(option, &res->data.io); 655 + pnpacpi_parse_port_option(dev, option, &res->data.io); 629 656 break; 630 657 631 658 case ACPI_RESOURCE_TYPE_FIXED_IO: 632 - pnpacpi_parse_fixed_port_option(option, &res->data.fixed_io); 659 + pnpacpi_parse_fixed_port_option(dev, option, 660 + &res->data.fixed_io); 633 661 break; 634 662 635 663 case ACPI_RESOURCE_TYPE_VENDOR: ··· 640 662 break; 641 663 642 664 case ACPI_RESOURCE_TYPE_MEMORY24: 643 - pnpacpi_parse_mem24_option(option, 
&res->data.memory24); 665 + pnpacpi_parse_mem24_option(dev, option, &res->data.memory24); 644 666 break; 645 667 646 668 case ACPI_RESOURCE_TYPE_MEMORY32: 647 - pnpacpi_parse_mem32_option(option, &res->data.memory32); 669 + pnpacpi_parse_mem32_option(dev, option, &res->data.memory32); 648 670 break; 649 671 650 672 case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: 651 - pnpacpi_parse_fixed_mem32_option(option, 673 + pnpacpi_parse_fixed_mem32_option(dev, option, 652 674 &res->data.fixed_memory32); 653 675 break; 654 676 655 677 case ACPI_RESOURCE_TYPE_ADDRESS16: 656 678 case ACPI_RESOURCE_TYPE_ADDRESS32: 657 679 case ACPI_RESOURCE_TYPE_ADDRESS64: 658 - pnpacpi_parse_address_option(option, res); 680 + pnpacpi_parse_address_option(dev, option, res); 659 681 break; 660 682 661 683 case ACPI_RESOURCE_TYPE_EXTENDED_ADDRESS64: 662 684 break; 663 685 664 686 case ACPI_RESOURCE_TYPE_EXTENDED_IRQ: 665 - pnpacpi_parse_ext_irq_option(option, &res->data.extended_irq); 687 + pnpacpi_parse_ext_irq_option(dev, option, 688 + &res->data.extended_irq); 666 689 break; 667 690 668 691 case ACPI_RESOURCE_TYPE_GENERIC_REGISTER: 669 692 break; 670 693 671 694 default: 672 - pnp_warn("PnPACPI: unknown resource type %d", res->type); 695 + dev_warn(&dev->dev, "unknown resource type %d in _PRS\n", 696 + res->type); 673 697 return AE_ERROR; 674 698 } 675 699 676 700 return AE_OK; 677 701 } 678 702 679 - acpi_status __init pnpacpi_parse_resource_option_data(acpi_handle handle, 680 - struct pnp_dev *dev) 703 + int __init pnpacpi_parse_resource_option_data(struct pnp_dev *dev) 681 704 { 705 + acpi_handle handle = dev->data; 682 706 acpi_status status; 683 707 struct acpipnp_parse_option_s parse_data; 684 708 709 + dev_dbg(&dev->dev, "parse resource options\n"); 710 + 685 711 parse_data.option = pnp_register_independent_option(dev); 686 712 if (!parse_data.option) 687 - return AE_ERROR; 713 + return -ENOMEM; 714 + 688 715 parse_data.option_independent = parse_data.option; 689 716 parse_data.dev = dev; 690 
717 status = acpi_walk_resources(handle, METHOD_NAME__PRS, 691 718 pnpacpi_option_resource, &parse_data); 692 719 693 - return status; 720 + if (ACPI_FAILURE(status)) { 721 + if (status != AE_NOT_FOUND) 722 + dev_err(&dev->dev, "can't evaluate _PRS: %d", status); 723 + return -EPERM; 724 + } 725 + return 0; 694 726 } 695 727 696 728 static int pnpacpi_supported_resource(struct acpi_resource *res) ··· 748 760 return AE_OK; 749 761 } 750 762 751 - int pnpacpi_build_resource_template(acpi_handle handle, 763 + int pnpacpi_build_resource_template(struct pnp_dev *dev, 752 764 struct acpi_buffer *buffer) 753 765 { 766 + acpi_handle handle = dev->data; 754 767 struct acpi_resource *resource; 755 768 int res_cnt = 0; 756 769 acpi_status status; ··· 759 770 status = acpi_walk_resources(handle, METHOD_NAME__CRS, 760 771 pnpacpi_count_resources, &res_cnt); 761 772 if (ACPI_FAILURE(status)) { 762 - pnp_err("Evaluate _CRS failed"); 773 + dev_err(&dev->dev, "can't evaluate _CRS: %d\n", status); 763 774 return -EINVAL; 764 775 } 765 776 if (!res_cnt) ··· 768 779 buffer->pointer = kzalloc(buffer->length - 1, GFP_KERNEL); 769 780 if (!buffer->pointer) 770 781 return -ENOMEM; 771 - pnp_dbg("Res cnt %d", res_cnt); 782 + 772 783 resource = (struct acpi_resource *)buffer->pointer; 773 784 status = acpi_walk_resources(handle, METHOD_NAME__CRS, 774 785 pnpacpi_type_resources, &resource); 775 786 if (ACPI_FAILURE(status)) { 776 787 kfree(buffer->pointer); 777 - pnp_err("Evaluate _CRS failed"); 788 + dev_err(&dev->dev, "can't evaluate _CRS: %d\n", status); 778 789 return -EINVAL; 779 790 } 780 791 /* resource will pointer the end resource now */ ··· 783 794 return 0; 784 795 } 785 796 786 - static void pnpacpi_encode_irq(struct acpi_resource *resource, 797 + static void pnpacpi_encode_irq(struct pnp_dev *dev, 798 + struct acpi_resource *resource, 787 799 struct resource *p) 788 800 { 801 + struct acpi_resource_irq *irq = &resource->data.irq; 789 802 int triggering, polarity; 790 803 791 804 
decode_irq_flags(p->flags & IORESOURCE_BITS, &triggering, &polarity); 792 - resource->data.irq.triggering = triggering; 793 - resource->data.irq.polarity = polarity; 805 + irq->triggering = triggering; 806 + irq->polarity = polarity; 794 807 if (triggering == ACPI_EDGE_SENSITIVE) 795 - resource->data.irq.sharable = ACPI_EXCLUSIVE; 808 + irq->sharable = ACPI_EXCLUSIVE; 796 809 else 797 - resource->data.irq.sharable = ACPI_SHARED; 798 - resource->data.irq.interrupt_count = 1; 799 - resource->data.irq.interrupts[0] = p->start; 810 + irq->sharable = ACPI_SHARED; 811 + irq->interrupt_count = 1; 812 + irq->interrupts[0] = p->start; 813 + 814 + dev_dbg(&dev->dev, " encode irq %d %s %s %s\n", (int) p->start, 815 + triggering == ACPI_LEVEL_SENSITIVE ? "level" : "edge", 816 + polarity == ACPI_ACTIVE_LOW ? "low" : "high", 817 + irq->sharable == ACPI_SHARED ? "shared" : "exclusive"); 800 818 } 801 819 802 - static void pnpacpi_encode_ext_irq(struct acpi_resource *resource, 820 + static void pnpacpi_encode_ext_irq(struct pnp_dev *dev, 821 + struct acpi_resource *resource, 803 822 struct resource *p) 804 823 { 824 + struct acpi_resource_extended_irq *extended_irq = &resource->data.extended_irq; 805 825 int triggering, polarity; 806 826 807 827 decode_irq_flags(p->flags & IORESOURCE_BITS, &triggering, &polarity); 808 - resource->data.extended_irq.producer_consumer = ACPI_CONSUMER; 809 - resource->data.extended_irq.triggering = triggering; 810 - resource->data.extended_irq.polarity = polarity; 828 + extended_irq->producer_consumer = ACPI_CONSUMER; 829 + extended_irq->triggering = triggering; 830 + extended_irq->polarity = polarity; 811 831 if (triggering == ACPI_EDGE_SENSITIVE) 812 - resource->data.irq.sharable = ACPI_EXCLUSIVE; 832 + extended_irq->sharable = ACPI_EXCLUSIVE; 813 833 else 814 - resource->data.irq.sharable = ACPI_SHARED; 815 - resource->data.extended_irq.interrupt_count = 1; 816 - resource->data.extended_irq.interrupts[0] = p->start; 834 + extended_irq->sharable = 
ACPI_SHARED; 835 + extended_irq->interrupt_count = 1; 836 + extended_irq->interrupts[0] = p->start; 837 + 838 + dev_dbg(&dev->dev, " encode irq %d %s %s %s\n", (int) p->start, 839 + triggering == ACPI_LEVEL_SENSITIVE ? "level" : "edge", 840 + polarity == ACPI_ACTIVE_LOW ? "low" : "high", 841 + extended_irq->sharable == ACPI_SHARED ? "shared" : "exclusive"); 817 842 } 818 843 819 - static void pnpacpi_encode_dma(struct acpi_resource *resource, 844 + static void pnpacpi_encode_dma(struct pnp_dev *dev, 845 + struct acpi_resource *resource, 820 846 struct resource *p) 821 847 { 848 + struct acpi_resource_dma *dma = &resource->data.dma; 849 + 822 850 /* Note: pnp_assign_dma will copy pnp_dma->flags into p->flags */ 823 851 switch (p->flags & IORESOURCE_DMA_SPEED_MASK) { 824 852 case IORESOURCE_DMA_TYPEA: 825 - resource->data.dma.type = ACPI_TYPE_A; 853 + dma->type = ACPI_TYPE_A; 826 854 break; 827 855 case IORESOURCE_DMA_TYPEB: 828 - resource->data.dma.type = ACPI_TYPE_B; 856 + dma->type = ACPI_TYPE_B; 829 857 break; 830 858 case IORESOURCE_DMA_TYPEF: 831 - resource->data.dma.type = ACPI_TYPE_F; 859 + dma->type = ACPI_TYPE_F; 832 860 break; 833 861 default: 834 - resource->data.dma.type = ACPI_COMPATIBILITY; 862 + dma->type = ACPI_COMPATIBILITY; 835 863 } 836 864 837 865 switch (p->flags & IORESOURCE_DMA_TYPE_MASK) { 838 866 case IORESOURCE_DMA_8BIT: 839 - resource->data.dma.transfer = ACPI_TRANSFER_8; 867 + dma->transfer = ACPI_TRANSFER_8; 840 868 break; 841 869 case IORESOURCE_DMA_8AND16BIT: 842 - resource->data.dma.transfer = ACPI_TRANSFER_8_16; 870 + dma->transfer = ACPI_TRANSFER_8_16; 843 871 break; 844 872 default: 845 - resource->data.dma.transfer = ACPI_TRANSFER_16; 873 + dma->transfer = ACPI_TRANSFER_16; 846 874 } 847 875 848 - resource->data.dma.bus_master = !!(p->flags & IORESOURCE_DMA_MASTER); 849 - resource->data.dma.channel_count = 1; 850 - resource->data.dma.channels[0] = p->start; 876 + dma->bus_master = !!(p->flags & IORESOURCE_DMA_MASTER); 877 + 
dma->channel_count = 1; 878 + dma->channels[0] = p->start; 879 + 880 + dev_dbg(&dev->dev, " encode dma %d " 881 + "type %#x transfer %#x master %d\n", 882 + (int) p->start, dma->type, dma->transfer, dma->bus_master); 851 883 } 852 884 853 - static void pnpacpi_encode_io(struct acpi_resource *resource, 885 + static void pnpacpi_encode_io(struct pnp_dev *dev, 886 + struct acpi_resource *resource, 854 887 struct resource *p) 855 888 { 889 + struct acpi_resource_io *io = &resource->data.io; 890 + 856 891 /* Note: pnp_assign_port will copy pnp_port->flags into p->flags */ 857 - resource->data.io.io_decode = (p->flags & PNP_PORT_FLAG_16BITADDR) ? 892 + io->io_decode = (p->flags & PNP_PORT_FLAG_16BITADDR) ? 858 893 ACPI_DECODE_16 : ACPI_DECODE_10; 859 - resource->data.io.minimum = p->start; 860 - resource->data.io.maximum = p->end; 861 - resource->data.io.alignment = 0; /* Correct? */ 862 - resource->data.io.address_length = p->end - p->start + 1; 894 + io->minimum = p->start; 895 + io->maximum = p->end; 896 + io->alignment = 0; /* Correct? 
*/ 897 + io->address_length = p->end - p->start + 1; 898 + 899 + dev_dbg(&dev->dev, " encode io %#llx-%#llx decode %#x\n", 900 + (unsigned long long) p->start, (unsigned long long) p->end, 901 + io->io_decode); 863 902 } 864 903 865 - static void pnpacpi_encode_fixed_io(struct acpi_resource *resource, 904 + static void pnpacpi_encode_fixed_io(struct pnp_dev *dev, 905 + struct acpi_resource *resource, 866 906 struct resource *p) 867 907 { 868 - resource->data.fixed_io.address = p->start; 869 - resource->data.fixed_io.address_length = p->end - p->start + 1; 908 + struct acpi_resource_fixed_io *fixed_io = &resource->data.fixed_io; 909 + 910 + fixed_io->address = p->start; 911 + fixed_io->address_length = p->end - p->start + 1; 912 + 913 + dev_dbg(&dev->dev, " encode fixed_io %#llx-%#llx\n", 914 + (unsigned long long) p->start, (unsigned long long) p->end); 870 915 } 871 916 872 - static void pnpacpi_encode_mem24(struct acpi_resource *resource, 917 + static void pnpacpi_encode_mem24(struct pnp_dev *dev, 918 + struct acpi_resource *resource, 873 919 struct resource *p) 874 920 { 921 + struct acpi_resource_memory24 *memory24 = &resource->data.memory24; 922 + 875 923 /* Note: pnp_assign_mem will copy pnp_mem->flags into p->flags */ 876 - resource->data.memory24.write_protect = 924 + memory24->write_protect = 877 925 (p->flags & IORESOURCE_MEM_WRITEABLE) ? 
878 926 ACPI_READ_WRITE_MEMORY : ACPI_READ_ONLY_MEMORY; 879 - resource->data.memory24.minimum = p->start; 880 - resource->data.memory24.maximum = p->end; 881 - resource->data.memory24.alignment = 0; 882 - resource->data.memory24.address_length = p->end - p->start + 1; 927 + memory24->minimum = p->start; 928 + memory24->maximum = p->end; 929 + memory24->alignment = 0; 930 + memory24->address_length = p->end - p->start + 1; 931 + 932 + dev_dbg(&dev->dev, " encode mem24 %#llx-%#llx write_protect %#x\n", 933 + (unsigned long long) p->start, (unsigned long long) p->end, 934 + memory24->write_protect); 883 935 } 884 936 885 - static void pnpacpi_encode_mem32(struct acpi_resource *resource, 937 + static void pnpacpi_encode_mem32(struct pnp_dev *dev, 938 + struct acpi_resource *resource, 886 939 struct resource *p) 887 940 { 888 - resource->data.memory32.write_protect = 941 + struct acpi_resource_memory32 *memory32 = &resource->data.memory32; 942 + 943 + memory32->write_protect = 889 944 (p->flags & IORESOURCE_MEM_WRITEABLE) ? 
890 945 ACPI_READ_WRITE_MEMORY : ACPI_READ_ONLY_MEMORY; 891 - resource->data.memory32.minimum = p->start; 892 - resource->data.memory32.maximum = p->end; 893 - resource->data.memory32.alignment = 0; 894 - resource->data.memory32.address_length = p->end - p->start + 1; 946 + memory32->minimum = p->start; 947 + memory32->maximum = p->end; 948 + memory32->alignment = 0; 949 + memory32->address_length = p->end - p->start + 1; 950 + 951 + dev_dbg(&dev->dev, " encode mem32 %#llx-%#llx write_protect %#x\n", 952 + (unsigned long long) p->start, (unsigned long long) p->end, 953 + memory32->write_protect); 895 954 } 896 955 897 - static void pnpacpi_encode_fixed_mem32(struct acpi_resource *resource, 956 + static void pnpacpi_encode_fixed_mem32(struct pnp_dev *dev, 957 + struct acpi_resource *resource, 898 958 struct resource *p) 899 959 { 900 - resource->data.fixed_memory32.write_protect = 960 + struct acpi_resource_fixed_memory32 *fixed_memory32 = &resource->data.fixed_memory32; 961 + 962 + fixed_memory32->write_protect = 901 963 (p->flags & IORESOURCE_MEM_WRITEABLE) ? 
902 964 ACPI_READ_WRITE_MEMORY : ACPI_READ_ONLY_MEMORY; 903 - resource->data.fixed_memory32.address = p->start; 904 - resource->data.fixed_memory32.address_length = p->end - p->start + 1; 965 + fixed_memory32->address = p->start; 966 + fixed_memory32->address_length = p->end - p->start + 1; 967 + 968 + dev_dbg(&dev->dev, " encode fixed_mem32 %#llx-%#llx " 969 + "write_protect %#x\n", 970 + (unsigned long long) p->start, (unsigned long long) p->end, 971 + fixed_memory32->write_protect); 905 972 } 906 973 907 - int pnpacpi_encode_resources(struct pnp_resource_table *res_table, 908 - struct acpi_buffer *buffer) 974 + int pnpacpi_encode_resources(struct pnp_dev *dev, struct acpi_buffer *buffer) 909 975 { 910 976 int i = 0; 911 977 /* pnpacpi_build_resource_template allocates extra mem */ ··· 968 924 struct acpi_resource *resource = buffer->pointer; 969 925 int port = 0, irq = 0, dma = 0, mem = 0; 970 926 971 - pnp_dbg("res cnt %d", res_cnt); 927 + dev_dbg(&dev->dev, "encode %d resources\n", res_cnt); 972 928 while (i < res_cnt) { 973 929 switch (resource->type) { 974 930 case ACPI_RESOURCE_TYPE_IRQ: 975 - pnp_dbg("Encode irq"); 976 - pnpacpi_encode_irq(resource, 977 - &res_table->irq_resource[irq]); 931 + pnpacpi_encode_irq(dev, resource, 932 + pnp_get_resource(dev, IORESOURCE_IRQ, irq)); 978 933 irq++; 979 934 break; 980 935 981 936 case ACPI_RESOURCE_TYPE_DMA: 982 - pnp_dbg("Encode dma"); 983 - pnpacpi_encode_dma(resource, 984 - &res_table->dma_resource[dma]); 937 + pnpacpi_encode_dma(dev, resource, 938 + pnp_get_resource(dev, IORESOURCE_DMA, dma)); 985 939 dma++; 986 940 break; 987 941 case ACPI_RESOURCE_TYPE_IO: 988 - pnp_dbg("Encode io"); 989 - pnpacpi_encode_io(resource, 990 - &res_table->port_resource[port]); 942 + pnpacpi_encode_io(dev, resource, 943 + pnp_get_resource(dev, IORESOURCE_IO, port)); 991 944 port++; 992 945 break; 993 946 case ACPI_RESOURCE_TYPE_FIXED_IO: 994 - pnp_dbg("Encode fixed io"); 995 - pnpacpi_encode_fixed_io(resource, 996 - &res_table-> 
997 - port_resource[port]); 947 + pnpacpi_encode_fixed_io(dev, resource, 948 + pnp_get_resource(dev, IORESOURCE_IO, port)); 998 949 port++; 999 950 break; 1000 951 case ACPI_RESOURCE_TYPE_MEMORY24: 1001 - pnp_dbg("Encode mem24"); 1002 - pnpacpi_encode_mem24(resource, 1003 - &res_table->mem_resource[mem]); 952 + pnpacpi_encode_mem24(dev, resource, 953 + pnp_get_resource(dev, IORESOURCE_MEM, mem)); 1004 954 mem++; 1005 955 break; 1006 956 case ACPI_RESOURCE_TYPE_MEMORY32: 1007 - pnp_dbg("Encode mem32"); 1008 - pnpacpi_encode_mem32(resource, 1009 - &res_table->mem_resource[mem]); 957 + pnpacpi_encode_mem32(dev, resource, 958 + pnp_get_resource(dev, IORESOURCE_MEM, mem)); 1010 959 mem++; 1011 960 break; 1012 961 case ACPI_RESOURCE_TYPE_FIXED_MEMORY32: 1013 - pnp_dbg("Encode fixed mem32"); 1014 - pnpacpi_encode_fixed_mem32(resource, 1015 - &res_table-> 1016 - mem_resource[mem]); 962 + pnpacpi_encode_fixed_mem32(dev, resource, 963 + pnp_get_resource(dev, IORESOURCE_MEM, mem)); 1017 964 mem++; 1018 965 break; 1019 966 case ACPI_RESOURCE_TYPE_EXTENDED_IRQ: 1020 - pnp_dbg("Encode ext irq"); 1021 - pnpacpi_encode_ext_irq(resource, 1022 - &res_table->irq_resource[irq]); 967 + pnpacpi_encode_ext_irq(dev, resource, 968 + pnp_get_resource(dev, IORESOURCE_IRQ, irq)); 1023 969 irq++; 1024 970 break; 1025 971 case ACPI_RESOURCE_TYPE_START_DEPENDENT: ··· 1022 988 case ACPI_RESOURCE_TYPE_EXTENDED_ADDRESS64: 1023 989 case ACPI_RESOURCE_TYPE_GENERIC_REGISTER: 1024 990 default: /* other type */ 1025 - pnp_warn("unknown resource type %d", resource->type); 991 + dev_warn(&dev->dev, "can't encode unknown resource " 992 + "type %d\n", resource->type); 1026 993 return -EINVAL; 1027 994 } 1028 995 resource++;
drivers/pnp/pnpbios/Makefile | +4

@@ -5,3 +5,7 @@
 pnpbios-proc-$(CONFIG_PNPBIOS_PROC_FS) = proc.o
 
 obj-y := core.o bioscalls.o rsparser.o $(pnpbios-proc-y)
+
+ifeq ($(CONFIG_PNP_DEBUG),y)
+EXTRA_CFLAGS += -DDEBUG
+endif
drivers/pnp/pnpbios/bioscalls.c | -1

@@ -7,7 +7,6 @@
 #include <linux/init.h>
 #include <linux/linkage.h>
 #include <linux/kernel.h>
-#include <linux/pnpbios.h>
 #include <linux/device.h>
 #include <linux/pnp.h>
 #include <linux/mm.h>
drivers/pnp/pnpbios/core.c | +10 -21
··· 50 50 #include <linux/init.h> 51 51 #include <linux/linkage.h> 52 52 #include <linux/kernel.h> 53 - #include <linux/pnpbios.h> 54 53 #include <linux/device.h> 55 54 #include <linux/pnp.h> 56 55 #include <linux/mm.h> ··· 68 69 #include <asm/system.h> 69 70 #include <asm/byteorder.h> 70 71 72 + #include "../base.h" 71 73 #include "pnpbios.h" 72 74 73 75 /* ··· 203 203 204 204 #endif /* CONFIG_HOTPLUG */ 205 205 206 - static int pnpbios_get_resources(struct pnp_dev *dev, 207 - struct pnp_resource_table *res) 206 + static int pnpbios_get_resources(struct pnp_dev *dev) 208 207 { 209 208 u8 nodenum = dev->number; 210 209 struct pnp_bios_node *node; ··· 211 212 if (!pnpbios_is_dynamic(dev)) 212 213 return -EPERM; 213 214 215 + dev_dbg(&dev->dev, "get resources\n"); 214 216 node = kzalloc(node_info.max_node_size, GFP_KERNEL); 215 217 if (!node) 216 218 return -1; ··· 219 219 kfree(node); 220 220 return -ENODEV; 221 221 } 222 - pnpbios_read_resources_from_node(res, node); 222 + pnpbios_read_resources_from_node(dev, node); 223 223 dev->active = pnp_is_active(dev); 224 224 kfree(node); 225 225 return 0; 226 226 } 227 227 228 - static int pnpbios_set_resources(struct pnp_dev *dev, 229 - struct pnp_resource_table *res) 228 + static int pnpbios_set_resources(struct pnp_dev *dev) 230 229 { 231 230 u8 nodenum = dev->number; 232 231 struct pnp_bios_node *node; ··· 234 235 if (!pnpbios_is_dynamic(dev)) 235 236 return -EPERM; 236 237 238 + dev_dbg(&dev->dev, "set resources\n"); 237 239 node = kzalloc(node_info.max_node_size, GFP_KERNEL); 238 240 if (!node) 239 241 return -1; ··· 242 242 kfree(node); 243 243 return -ENODEV; 244 244 } 245 - if (pnpbios_write_resources_to_node(res, node) < 0) { 245 + if (pnpbios_write_resources_to_node(dev, node) < 0) { 246 246 kfree(node); 247 247 return -1; 248 248 } ··· 317 317 { 318 318 struct list_head *pos; 319 319 struct pnp_dev *dev; 320 - struct pnp_id *dev_id; 321 320 char id[8]; 322 321 323 322 /* check if the device is already added */ 
··· 326 327 return -1; 327 328 } 328 329 329 - dev = kzalloc(sizeof(struct pnp_dev), GFP_KERNEL); 330 + pnp_eisa_id_to_string(node->eisa_id & PNP_EISA_ID_MASK, id); 331 + dev = pnp_alloc_dev(&pnpbios_protocol, node->handle, id); 330 332 if (!dev) 331 333 return -1; 332 334 333 - dev_id = kzalloc(sizeof(struct pnp_id), GFP_KERNEL); 334 - if (!dev_id) { 335 - kfree(dev); 336 - return -1; 337 - } 338 - 339 - dev->number = node->handle; 340 - pnpid32_to_pnpid(node->eisa_id, id); 341 - memcpy(dev_id->id, id, 7); 342 - pnp_add_id(dev_id, dev); 343 335 pnpbios_parse_data_stream(dev, node); 344 336 dev->active = pnp_is_active(dev); 345 337 dev->flags = node->flags; ··· 343 353 dev->capabilities |= PNP_WRITE; 344 354 if (dev->flags & PNPBIOS_REMOVABLE) 345 355 dev->capabilities |= PNP_REMOVABLE; 346 - dev->protocol = &pnpbios_protocol; 347 356 348 357 /* clear out the damaged flags */ 349 358 if (!dev->active) 350 - pnp_init_resource_table(&dev->res); 359 + pnp_init_resources(dev); 351 360 352 361 pnp_add_device(dev); 353 362 pnpbios_interface_attach_device(node);
drivers/pnp/pnpbios/pnpbios.h | +138 -2
··· 2 2 * pnpbios.h - contains local definitions 3 3 */ 4 4 5 + /* 6 + * Include file for the interface to a PnP BIOS 7 + * 8 + * Original BIOS code (C) 1998 Christian Schmidt (chr.schmidt@tu-bs.de) 9 + * PnP handler parts (c) 1998 Tom Lees <tom@lpsg.demon.co.uk> 10 + * Minor reorganizations by David Hinds <dahinds@users.sourceforge.net> 11 + * 12 + * This program is free software; you can redistribute it and/or modify it 13 + * under the terms of the GNU General Public License as published by the 14 + * Free Software Foundation; either version 2, or (at your option) any 15 + * later version. 16 + * 17 + * This program is distributed in the hope that it will be useful, but 18 + * WITHOUT ANY WARRANTY; without even the implied warranty of 19 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 20 + * General Public License for more details. 21 + * 22 + * You should have received a copy of the GNU General Public License 23 + * along with this program; if not, write to the Free Software 24 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 25 + */ 26 + 27 + /* 28 + * Return codes 29 + */ 30 + #define PNP_SUCCESS 0x00 31 + #define PNP_NOT_SET_STATICALLY 0x7f 32 + #define PNP_UNKNOWN_FUNCTION 0x81 33 + #define PNP_FUNCTION_NOT_SUPPORTED 0x82 34 + #define PNP_INVALID_HANDLE 0x83 35 + #define PNP_BAD_PARAMETER 0x84 36 + #define PNP_SET_FAILED 0x85 37 + #define PNP_EVENTS_NOT_PENDING 0x86 38 + #define PNP_SYSTEM_NOT_DOCKED 0x87 39 + #define PNP_NO_ISA_PNP_CARDS 0x88 40 + #define PNP_UNABLE_TO_DETERMINE_DOCK_CAPABILITIES 0x89 41 + #define PNP_CONFIG_CHANGE_FAILED_NO_BATTERY 0x8a 42 + #define PNP_CONFIG_CHANGE_FAILED_RESOURCE_CONFLICT 0x8b 43 + #define PNP_BUFFER_TOO_SMALL 0x8c 44 + #define PNP_USE_ESCD_SUPPORT 0x8d 45 + #define PNP_MESSAGE_NOT_SUPPORTED 0x8e 46 + #define PNP_HARDWARE_ERROR 0x8f 47 + 48 + #define ESCD_SUCCESS 0x00 49 + #define ESCD_IO_ERROR_READING 0x55 50 + #define ESCD_INVALID 0x56 51 + #define ESCD_BUFFER_TOO_SMALL 
0x59 52 + #define ESCD_NVRAM_TOO_SMALL 0x5a 53 + #define ESCD_FUNCTION_NOT_SUPPORTED 0x81 54 + 55 + /* 56 + * Events that can be received by "get event" 57 + */ 58 + #define PNPEV_ABOUT_TO_CHANGE_CONFIG 0x0001 59 + #define PNPEV_DOCK_CHANGED 0x0002 60 + #define PNPEV_SYSTEM_DEVICE_CHANGED 0x0003 61 + #define PNPEV_CONFIG_CHANGED_FAILED 0x0004 62 + #define PNPEV_UNKNOWN_SYSTEM_EVENT 0xffff 63 + /* 0x8000 through 0xfffe are OEM defined */ 64 + 65 + /* 66 + * Messages that should be sent through "send message" 67 + */ 68 + #define PNPMSG_OK 0x00 69 + #define PNPMSG_ABORT 0x01 70 + #define PNPMSG_UNDOCK_DEFAULT_ACTION 0x40 71 + #define PNPMSG_POWER_OFF 0x41 72 + #define PNPMSG_PNP_OS_ACTIVE 0x42 73 + #define PNPMSG_PNP_OS_INACTIVE 0x43 74 + 75 + /* 76 + * Plug and Play BIOS flags 77 + */ 78 + #define PNPBIOS_NO_DISABLE 0x0001 79 + #define PNPBIOS_NO_CONFIG 0x0002 80 + #define PNPBIOS_OUTPUT 0x0004 81 + #define PNPBIOS_INPUT 0x0008 82 + #define PNPBIOS_BOOTABLE 0x0010 83 + #define PNPBIOS_DOCK 0x0020 84 + #define PNPBIOS_REMOVABLE 0x0040 85 + #define pnpbios_is_static(x) (((x)->flags & 0x0100) == 0x0000) 86 + #define pnpbios_is_dynamic(x) ((x)->flags & 0x0080) 87 + 88 + /* 89 + * Function Parameters 90 + */ 91 + #define PNPMODE_STATIC 1 92 + #define PNPMODE_DYNAMIC 0 93 + 94 + /* 0x8000 through 0xffff are OEM defined */ 95 + 96 + #pragma pack(1) 97 + struct pnp_dev_node_info { 98 + __u16 no_nodes; 99 + __u16 max_node_size; 100 + }; 101 + struct pnp_docking_station_info { 102 + __u32 location_id; 103 + __u32 serial; 104 + __u16 capabilities; 105 + }; 106 + struct pnp_isa_config_struc { 107 + __u8 revision; 108 + __u8 no_csns; 109 + __u16 isa_rd_data_port; 110 + __u16 reserved; 111 + }; 112 + struct escd_info_struc { 113 + __u16 min_escd_write_size; 114 + __u16 escd_size; 115 + __u32 nv_storage_base; 116 + }; 117 + struct pnp_bios_node { 118 + __u16 size; 119 + __u8 handle; 120 + __u32 eisa_id; 121 + __u8 type_code[3]; 122 + __u16 flags; 123 + __u8 data[0]; 124 + }; 125 + 
#pragma pack() 126 + 127 + /* non-exported */ 128 + extern struct pnp_dev_node_info node_info; 129 + 130 + extern int pnp_bios_dev_node_info(struct pnp_dev_node_info *data); 131 + extern int pnp_bios_get_dev_node(u8 *nodenum, char config, 132 + struct pnp_bios_node *data); 133 + extern int pnp_bios_set_dev_node(u8 nodenum, char config, 134 + struct pnp_bios_node *data); 135 + extern int pnp_bios_get_stat_res(char *info); 136 + extern int pnp_bios_isapnp_config(struct pnp_isa_config_struc *data); 137 + extern int pnp_bios_escd_info(struct escd_info_struc *data); 138 + extern int pnp_bios_read_escd(char *data, u32 nvram_base); 139 + extern int pnp_bios_dock_station_info(struct pnp_docking_station_info *data); 140 + 5 141 #pragma pack(1) 6 142 union pnp_bios_install_struct { 7 143 struct { ··· 164 28 extern int pnpbios_dont_use_current_config; 165 29 166 30 extern int pnpbios_parse_data_stream(struct pnp_dev *dev, struct pnp_bios_node * node); 167 - extern int pnpbios_read_resources_from_node(struct pnp_resource_table *res, struct pnp_bios_node * node); 168 - extern int pnpbios_write_resources_to_node(struct pnp_resource_table *res, struct pnp_bios_node * node); 31 + extern int pnpbios_read_resources_from_node(struct pnp_dev *dev, struct pnp_bios_node *node); 32 + extern int pnpbios_write_resources_to_node(struct pnp_dev *dev, struct pnp_bios_node *node); 169 33 extern void pnpid32_to_pnpid(u32 id, char *str); 170 34 171 35 extern void pnpbios_print_status(const char * module, u16 status);
drivers/pnp/pnpbios/proc.c | +1 -1

@@ -23,7 +23,7 @@
 #include <linux/slab.h>
 #include <linux/types.h>
 #include <linux/proc_fs.h>
-#include <linux/pnpbios.h>
+#include <linux/pnp.h>
 #include <linux/init.h>
 
 #include <asm/uaccess.h>
drivers/pnp/pnpbios/rsparser.c | +146 -180
··· 4 4 5 5 #include <linux/ctype.h> 6 6 #include <linux/pnp.h> 7 - #include <linux/pnpbios.h> 8 7 #include <linux/string.h> 9 8 #include <linux/slab.h> 10 9 ··· 15 16 } 16 17 #endif /* CONFIG_PCI */ 17 18 19 + #include "../base.h" 18 20 #include "pnpbios.h" 19 21 20 22 /* standard resource tags */ ··· 53 53 * Allocated Resources 54 54 */ 55 55 56 - static void pnpbios_parse_allocated_irqresource(struct pnp_resource_table *res, 57 - int irq) 56 + static void pnpbios_parse_allocated_ioresource(struct pnp_dev *dev, 57 + int start, int len) 58 58 { 59 - int i = 0; 59 + int flags = 0; 60 + int end = start + len - 1; 60 61 61 - while (!(res->irq_resource[i].flags & IORESOURCE_UNSET) 62 - && i < PNP_MAX_IRQ) 63 - i++; 64 - if (i < PNP_MAX_IRQ) { 65 - res->irq_resource[i].flags = IORESOURCE_IRQ; // Also clears _UNSET flag 66 - if (irq == -1) { 67 - res->irq_resource[i].flags |= IORESOURCE_DISABLED; 68 - return; 69 - } 70 - res->irq_resource[i].start = 71 - res->irq_resource[i].end = (unsigned long)irq; 72 - pcibios_penalize_isa_irq(irq, 1); 73 - } 62 + if (len <= 0 || end >= 0x10003) 63 + flags |= IORESOURCE_DISABLED; 64 + 65 + pnp_add_io_resource(dev, start, end, flags); 74 66 } 75 67 76 - static void pnpbios_parse_allocated_dmaresource(struct pnp_resource_table *res, 77 - int dma) 68 + static void pnpbios_parse_allocated_memresource(struct pnp_dev *dev, 69 + int start, int len) 78 70 { 79 - int i = 0; 71 + int flags = 0; 72 + int end = start + len - 1; 80 73 81 - while (i < PNP_MAX_DMA && 82 - !(res->dma_resource[i].flags & IORESOURCE_UNSET)) 83 - i++; 84 - if (i < PNP_MAX_DMA) { 85 - res->dma_resource[i].flags = IORESOURCE_DMA; // Also clears _UNSET flag 86 - if (dma == -1) { 87 - res->dma_resource[i].flags |= IORESOURCE_DISABLED; 88 - return; 89 - } 90 - res->dma_resource[i].start = 91 - res->dma_resource[i].end = (unsigned long)dma; 92 - } 74 + if (len <= 0) 75 + flags |= IORESOURCE_DISABLED; 76 + 77 + pnp_add_mem_resource(dev, start, end, flags); 93 78 } 94 79 95 - 
static void pnpbios_parse_allocated_ioresource(struct pnp_resource_table *res, 96 - int io, int len) 97 - { 98 - int i = 0; 99 - 100 - while (!(res->port_resource[i].flags & IORESOURCE_UNSET) 101 - && i < PNP_MAX_PORT) 102 - i++; 103 - if (i < PNP_MAX_PORT) { 104 - res->port_resource[i].flags = IORESOURCE_IO; // Also clears _UNSET flag 105 - if (len <= 0 || (io + len - 1) >= 0x10003) { 106 - res->port_resource[i].flags |= IORESOURCE_DISABLED; 107 - return; 108 - } 109 - res->port_resource[i].start = (unsigned long)io; 110 - res->port_resource[i].end = (unsigned long)(io + len - 1); 111 - } 112 - } 113 - 114 - static void pnpbios_parse_allocated_memresource(struct pnp_resource_table *res, 115 - int mem, int len) 116 - { 117 - int i = 0; 118 - 119 - while (!(res->mem_resource[i].flags & IORESOURCE_UNSET) 120 - && i < PNP_MAX_MEM) 121 - i++; 122 - if (i < PNP_MAX_MEM) { 123 - res->mem_resource[i].flags = IORESOURCE_MEM; // Also clears _UNSET flag 124 - if (len <= 0) { 125 - res->mem_resource[i].flags |= IORESOURCE_DISABLED; 126 - return; 127 - } 128 - res->mem_resource[i].start = (unsigned long)mem; 129 - res->mem_resource[i].end = (unsigned long)(mem + len - 1); 130 - } 131 - } 132 - 133 - static unsigned char *pnpbios_parse_allocated_resource_data(unsigned char *p, 134 - unsigned char *end, 135 - struct 136 - pnp_resource_table 137 - *res) 80 + static unsigned char *pnpbios_parse_allocated_resource_data(struct pnp_dev *dev, 81 + unsigned char *p, 82 + unsigned char *end) 138 83 { 139 84 unsigned int len, tag; 140 - int io, size, mask, i; 85 + int io, size, mask, i, flags; 141 86 142 87 if (!p) 143 88 return NULL; 144 89 145 - /* Blank the resource table values */ 146 - pnp_init_resource_table(res); 90 + dev_dbg(&dev->dev, "parse allocated resources\n"); 91 + 92 + pnp_init_resources(dev); 147 93 148 94 while ((char *)p < (char *)end) { 149 95 ··· 109 163 goto len_err; 110 164 io = *(short *)&p[4]; 111 165 size = *(short *)&p[10]; 112 - 
pnpbios_parse_allocated_memresource(res, io, size); 166 + pnpbios_parse_allocated_memresource(dev, io, size); 113 167 break; 114 168 115 169 case LARGE_TAG_ANSISTR: ··· 125 179 goto len_err; 126 180 io = *(int *)&p[4]; 127 181 size = *(int *)&p[16]; 128 - pnpbios_parse_allocated_memresource(res, io, size); 182 + pnpbios_parse_allocated_memresource(dev, io, size); 129 183 break; 130 184 131 185 case LARGE_TAG_FIXEDMEM32: ··· 133 187 goto len_err; 134 188 io = *(int *)&p[4]; 135 189 size = *(int *)&p[8]; 136 - pnpbios_parse_allocated_memresource(res, io, size); 190 + pnpbios_parse_allocated_memresource(dev, io, size); 137 191 break; 138 192 139 193 case SMALL_TAG_IRQ: 140 194 if (len < 2 || len > 3) 141 195 goto len_err; 196 + flags = 0; 142 197 io = -1; 143 198 mask = p[1] + p[2] * 256; 144 199 for (i = 0; i < 16; i++, mask = mask >> 1) 145 200 if (mask & 0x01) 146 201 io = i; 147 - pnpbios_parse_allocated_irqresource(res, io); 202 + if (io != -1) 203 + pcibios_penalize_isa_irq(io, 1); 204 + else 205 + flags = IORESOURCE_DISABLED; 206 + pnp_add_irq_resource(dev, io, flags); 148 207 break; 149 208 150 209 case SMALL_TAG_DMA: 151 210 if (len != 2) 152 211 goto len_err; 212 + flags = 0; 153 213 io = -1; 154 214 mask = p[1]; 155 215 for (i = 0; i < 8; i++, mask = mask >> 1) 156 216 if (mask & 0x01) 157 217 io = i; 158 - pnpbios_parse_allocated_dmaresource(res, io); 218 + if (io == -1) 219 + flags = IORESOURCE_DISABLED; 220 + pnp_add_dma_resource(dev, io, flags); 159 221 break; 160 222 161 223 case SMALL_TAG_PORT: ··· 171 217 goto len_err; 172 218 io = p[2] + p[3] * 256; 173 219 size = p[7]; 174 - pnpbios_parse_allocated_ioresource(res, io, size); 220 + pnpbios_parse_allocated_ioresource(dev, io, size); 175 221 break; 176 222 177 223 case SMALL_TAG_VENDOR: ··· 183 229 goto len_err; 184 230 io = p[1] + p[2] * 256; 185 231 size = p[3]; 186 - pnpbios_parse_allocated_ioresource(res, io, size); 232 + pnpbios_parse_allocated_ioresource(dev, io, size); 187 233 break; 188 234 
189 235 case SMALL_TAG_END: ··· 193 239 194 240 default: /* an unkown tag */ 195 241 len_err: 196 - printk(KERN_ERR 197 - "PnPBIOS: Unknown tag '0x%x', length '%d'.\n", 198 - tag, len); 242 + dev_err(&dev->dev, "unknown tag %#x length %d\n", 243 + tag, len); 199 244 break; 200 245 } 201 246 ··· 205 252 p += len + 1; 206 253 } 207 254 208 - printk(KERN_ERR 209 - "PnPBIOS: Resource structure does not contain an end tag.\n"); 255 + dev_err(&dev->dev, "no end tag in resource structure\n"); 210 256 211 257 return NULL; 212 258 } ··· 214 262 * Resource Configuration Options 215 263 */ 216 264 217 - static __init void pnpbios_parse_mem_option(unsigned char *p, int size, 265 + static __init void pnpbios_parse_mem_option(struct pnp_dev *dev, 266 + unsigned char *p, int size, 218 267 struct pnp_option *option) 219 268 { 220 269 struct pnp_mem *mem; ··· 228 275 mem->align = (p[9] << 8) | p[8]; 229 276 mem->size = ((p[11] << 8) | p[10]) << 8; 230 277 mem->flags = p[3]; 231 - pnp_register_mem_resource(option, mem); 278 + pnp_register_mem_resource(dev, option, mem); 232 279 } 233 280 234 - static __init void pnpbios_parse_mem32_option(unsigned char *p, int size, 281 + static __init void pnpbios_parse_mem32_option(struct pnp_dev *dev, 282 + unsigned char *p, int size, 235 283 struct pnp_option *option) 236 284 { 237 285 struct pnp_mem *mem; ··· 245 291 mem->align = (p[15] << 24) | (p[14] << 16) | (p[13] << 8) | p[12]; 246 292 mem->size = (p[19] << 24) | (p[18] << 16) | (p[17] << 8) | p[16]; 247 293 mem->flags = p[3]; 248 - pnp_register_mem_resource(option, mem); 294 + pnp_register_mem_resource(dev, option, mem); 249 295 } 250 296 251 - static __init void pnpbios_parse_fixed_mem32_option(unsigned char *p, int size, 297 + static __init void pnpbios_parse_fixed_mem32_option(struct pnp_dev *dev, 298 + unsigned char *p, int size, 252 299 struct pnp_option *option) 253 300 { 254 301 struct pnp_mem *mem; ··· 261 306 mem->size = (p[11] << 24) | (p[10] << 16) | (p[9] << 8) | p[8]; 262 307 
mem->align = 0; 263 308 mem->flags = p[3]; 264 - pnp_register_mem_resource(option, mem); 309 + pnp_register_mem_resource(dev, option, mem); 265 310 } 266 311 267 - static __init void pnpbios_parse_irq_option(unsigned char *p, int size, 268 - struct pnp_option *option) 312 + static __init void pnpbios_parse_irq_option(struct pnp_dev *dev, 313 + unsigned char *p, int size, 314 + struct pnp_option *option) 269 315 { 270 316 struct pnp_irq *irq; 271 317 unsigned long bits; ··· 280 324 irq->flags = p[3]; 281 325 else 282 326 irq->flags = IORESOURCE_IRQ_HIGHEDGE; 283 - pnp_register_irq_resource(option, irq); 327 + pnp_register_irq_resource(dev, option, irq); 284 328 } 285 329 286 - static __init void pnpbios_parse_dma_option(unsigned char *p, int size, 287 - struct pnp_option *option) 330 + static __init void pnpbios_parse_dma_option(struct pnp_dev *dev, 331 + unsigned char *p, int size, 332 + struct pnp_option *option) 288 333 { 289 334 struct pnp_dma *dma; 290 335 ··· 294 337 return; 295 338 dma->map = p[1]; 296 339 dma->flags = p[2]; 297 - pnp_register_dma_resource(option, dma); 340 + pnp_register_dma_resource(dev, option, dma); 298 341 } 299 342 300 - static __init void pnpbios_parse_port_option(unsigned char *p, int size, 343 + static __init void pnpbios_parse_port_option(struct pnp_dev *dev, 344 + unsigned char *p, int size, 301 345 struct pnp_option *option) 302 346 { 303 347 struct pnp_port *port; ··· 311 353 port->align = p[6]; 312 354 port->size = p[7]; 313 355 port->flags = p[1] ? 
PNP_PORT_FLAG_16BITADDR : 0; 314 - pnp_register_port_resource(option, port); 356 + pnp_register_port_resource(dev, option, port); 315 357 } 316 358 317 - static __init void pnpbios_parse_fixed_port_option(unsigned char *p, int size, 359 + static __init void pnpbios_parse_fixed_port_option(struct pnp_dev *dev, 360 + unsigned char *p, int size, 318 361 struct pnp_option *option) 319 362 { 320 363 struct pnp_port *port; ··· 327 368 port->size = p[3]; 328 369 port->align = 0; 329 370 port->flags = PNP_PORT_FLAG_FIXED; 330 - pnp_register_port_resource(option, port); 371 + pnp_register_port_resource(dev, option, port); 331 372 } 332 373 333 374 static __init unsigned char * ··· 340 381 341 382 if (!p) 342 383 return NULL; 384 + 385 + dev_dbg(&dev->dev, "parse resource options\n"); 343 386 344 387 option_independent = option = pnp_register_independent_option(dev); 345 388 if (!option) ··· 363 402 case LARGE_TAG_MEM: 364 403 if (len != 9) 365 404 goto len_err; 366 - pnpbios_parse_mem_option(p, len, option); 405 + pnpbios_parse_mem_option(dev, p, len, option); 367 406 break; 368 407 369 408 case LARGE_TAG_MEM32: 370 409 if (len != 17) 371 410 goto len_err; 372 - pnpbios_parse_mem32_option(p, len, option); 411 + pnpbios_parse_mem32_option(dev, p, len, option); 373 412 break; 374 413 375 414 case LARGE_TAG_FIXEDMEM32: 376 415 if (len != 9) 377 416 goto len_err; 378 - pnpbios_parse_fixed_mem32_option(p, len, option); 417 + pnpbios_parse_fixed_mem32_option(dev, p, len, option); 379 418 break; 380 419 381 420 case SMALL_TAG_IRQ: 382 421 if (len < 2 || len > 3) 383 422 goto len_err; 384 - pnpbios_parse_irq_option(p, len, option); 423 + pnpbios_parse_irq_option(dev, p, len, option); 385 424 break; 386 425 387 426 case SMALL_TAG_DMA: 388 427 if (len != 2) 389 428 goto len_err; 390 - pnpbios_parse_dma_option(p, len, option); 429 + pnpbios_parse_dma_option(dev, p, len, option); 391 430 break; 392 431 393 432 case SMALL_TAG_PORT: 394 433 if (len != 7) 395 434 goto len_err; 396 - 
pnpbios_parse_port_option(p, len, option); 435 + pnpbios_parse_port_option(dev, p, len, option); 397 436 break; 398 437 399 438 case SMALL_TAG_VENDOR: ··· 403 442 case SMALL_TAG_FIXEDPORT: 404 443 if (len != 3) 405 444 goto len_err; 406 - pnpbios_parse_fixed_port_option(p, len, option); 445 + pnpbios_parse_fixed_port_option(dev, p, len, option); 407 446 break; 408 447 409 448 case SMALL_TAG_STARTDEP: ··· 421 460 if (len != 0) 422 461 goto len_err; 423 462 if (option_independent == option) 424 - printk(KERN_WARNING 425 - "PnPBIOS: Missing SMALL_TAG_STARTDEP tag\n"); 463 + dev_warn(&dev->dev, "missing " 464 + "SMALL_TAG_STARTDEP tag\n"); 426 465 option = option_independent; 466 + dev_dbg(&dev->dev, "end dependent options\n"); 427 467 break; 428 468 429 469 case SMALL_TAG_END: ··· 432 470 433 471 default: /* an unkown tag */ 434 472 len_err: 435 - printk(KERN_ERR 436 - "PnPBIOS: Unknown tag '0x%x', length '%d'.\n", 437 - tag, len); 473 + dev_err(&dev->dev, "unknown tag %#x length %d\n", 474 + tag, len); 438 475 break; 439 476 } 440 477 ··· 444 483 p += len + 1; 445 484 } 446 485 447 - printk(KERN_ERR 448 - "PnPBIOS: Resource structure does not contain an end tag.\n"); 486 + dev_err(&dev->dev, "no end tag in resource structure\n"); 449 487 450 488 return NULL; 451 489 } ··· 453 493 * Compatible Device IDs 454 494 */ 455 495 456 - #define HEX(id,a) hex[((id)>>a) & 15] 457 - #define CHAR(id,a) (0x40 + (((id)>>a) & 31)) 458 - 459 - void pnpid32_to_pnpid(u32 id, char *str) 460 - { 461 - const char *hex = "0123456789abcdef"; 462 - 463 - id = be32_to_cpu(id); 464 - str[0] = CHAR(id, 26); 465 - str[1] = CHAR(id, 21); 466 - str[2] = CHAR(id, 16); 467 - str[3] = HEX(id, 12); 468 - str[4] = HEX(id, 8); 469 - str[5] = HEX(id, 4); 470 - str[6] = HEX(id, 0); 471 - str[7] = '\0'; 472 - } 473 - 474 - #undef CHAR 475 - #undef HEX 476 - 477 496 static unsigned char *pnpbios_parse_compatible_ids(unsigned char *p, 478 497 unsigned char *end, 479 498 struct pnp_dev *dev) 480 499 { 481 500 
int len, tag; 501 + u32 eisa_id; 482 502 char id[8]; 483 503 struct pnp_id *dev_id; 484 504 ··· 488 548 case SMALL_TAG_COMPATDEVID: /* compatible ID */ 489 549 if (len != 4) 490 550 goto len_err; 491 - dev_id = kzalloc(sizeof(struct pnp_id), GFP_KERNEL); 551 + eisa_id = p[1] | p[2] << 8 | p[3] << 16 | p[4] << 24; 552 + pnp_eisa_id_to_string(eisa_id & PNP_EISA_ID_MASK, id); 553 + dev_id = pnp_add_id(dev, id); 492 554 if (!dev_id) 493 555 return NULL; 494 - pnpid32_to_pnpid(p[1] | p[2] << 8 | p[3] << 16 | p[4] << 495 - 24, id); 496 - memcpy(&dev_id->id, id, 7); 497 - pnp_add_id(dev_id, dev); 498 556 break; 499 557 500 558 case SMALL_TAG_END: ··· 502 564 503 565 default: /* an unkown tag */ 504 566 len_err: 505 - printk(KERN_ERR 506 - "PnPBIOS: Unknown tag '0x%x', length '%d'.\n", 507 - tag, len); 567 + dev_err(&dev->dev, "unknown tag %#x length %d\n", 568 + tag, len); 508 569 break; 509 570 } 510 571 ··· 514 577 p += len + 1; 515 578 } 516 579 517 - printk(KERN_ERR 518 - "PnPBIOS: Resource structure does not contain an end tag.\n"); 580 + dev_err(&dev->dev, "no end tag in resource structure\n"); 519 581 520 582 return NULL; 521 583 } ··· 523 587 * Allocated Resource Encoding 524 588 */ 525 589 526 - static void pnpbios_encode_mem(unsigned char *p, struct resource *res) 590 + static void pnpbios_encode_mem(struct pnp_dev *dev, unsigned char *p, 591 + struct resource *res) 527 592 { 528 593 unsigned long base = res->start; 529 594 unsigned long len = res->end - res->start + 1; ··· 535 598 p[7] = ((base >> 8) >> 8) & 0xff; 536 599 p[10] = (len >> 8) & 0xff; 537 600 p[11] = ((len >> 8) >> 8) & 0xff; 601 + 602 + dev_dbg(&dev->dev, " encode mem %#llx-%#llx\n", 603 + (unsigned long long) res->start, (unsigned long long) res->end); 538 604 } 539 605 540 - static void pnpbios_encode_mem32(unsigned char *p, struct resource *res) 606 + static void pnpbios_encode_mem32(struct pnp_dev *dev, unsigned char *p, 607 + struct resource *res) 541 608 { 542 609 unsigned long base = 
res->start; 543 610 unsigned long len = res->end - res->start + 1; ··· 558 617 p[17] = (len >> 8) & 0xff; 559 618 p[18] = (len >> 16) & 0xff; 560 619 p[19] = (len >> 24) & 0xff; 620 + 621 + dev_dbg(&dev->dev, " encode mem32 %#llx-%#llx\n", 622 + (unsigned long long) res->start, (unsigned long long) res->end); 561 623 } 562 624 563 - static void pnpbios_encode_fixed_mem32(unsigned char *p, struct resource *res) 625 + static void pnpbios_encode_fixed_mem32(struct pnp_dev *dev, unsigned char *p, 626 + struct resource *res) 564 627 { 565 628 unsigned long base = res->start; 566 629 unsigned long len = res->end - res->start + 1; ··· 577 632 p[9] = (len >> 8) & 0xff; 578 633 p[10] = (len >> 16) & 0xff; 579 634 p[11] = (len >> 24) & 0xff; 635 + 636 + dev_dbg(&dev->dev, " encode fixed_mem32 %#llx-%#llx\n", 637 + (unsigned long long) res->start, (unsigned long long) res->end); 580 638 } 581 639 582 - static void pnpbios_encode_irq(unsigned char *p, struct resource *res) 640 + static void pnpbios_encode_irq(struct pnp_dev *dev, unsigned char *p, 641 + struct resource *res) 583 642 { 584 643 unsigned long map = 0; 585 644 586 645 map = 1 << res->start; 587 646 p[1] = map & 0xff; 588 647 p[2] = (map >> 8) & 0xff; 648 + 649 + dev_dbg(&dev->dev, " encode irq %d\n", res->start); 589 650 } 590 651 591 - static void pnpbios_encode_dma(unsigned char *p, struct resource *res) 652 + static void pnpbios_encode_dma(struct pnp_dev *dev, unsigned char *p, 653 + struct resource *res) 592 654 { 593 655 unsigned long map = 0; 594 656 595 657 map = 1 << res->start; 596 658 p[1] = map & 0xff; 659 + 660 + dev_dbg(&dev->dev, " encode dma %d\n", res->start); 597 661 } 598 662 599 - static void pnpbios_encode_port(unsigned char *p, struct resource *res) 663 + static void pnpbios_encode_port(struct pnp_dev *dev, unsigned char *p, 664 + struct resource *res) 600 665 { 601 666 unsigned long base = res->start; 602 667 unsigned long len = res->end - res->start + 1; ··· 616 661 p[4] = base & 0xff; 617 
662 p[5] = (base >> 8) & 0xff; 618 663 p[7] = len & 0xff; 664 + 665 + dev_dbg(&dev->dev, " encode io %#llx-%#llx\n", 666 + (unsigned long long) res->start, (unsigned long long) res->end); 619 667 } 620 668 621 - static void pnpbios_encode_fixed_port(unsigned char *p, struct resource *res) 669 + static void pnpbios_encode_fixed_port(struct pnp_dev *dev, unsigned char *p, 670 + struct resource *res) 622 671 { 623 672 unsigned long base = res->start; 624 673 unsigned long len = res->end - res->start + 1; ··· 630 671 p[1] = base & 0xff; 631 672 p[2] = (base >> 8) & 0xff; 632 673 p[3] = len & 0xff; 674 + 675 + dev_dbg(&dev->dev, " encode fixed_io %#llx-%#llx\n", 676 + (unsigned long long) res->start, (unsigned long long) res->end); 633 677 } 634 678 635 - static unsigned char *pnpbios_encode_allocated_resource_data(unsigned char *p, 636 - unsigned char *end, 637 - struct 638 - pnp_resource_table 639 - *res) 679 + static unsigned char *pnpbios_encode_allocated_resource_data(struct pnp_dev 680 + *dev, 681 + unsigned char *p, 682 + unsigned char *end) 640 683 { 641 684 unsigned int len, tag; 642 685 int port = 0, irq = 0, dma = 0, mem = 0; ··· 662 701 case LARGE_TAG_MEM: 663 702 if (len != 9) 664 703 goto len_err; 665 - pnpbios_encode_mem(p, &res->mem_resource[mem]); 704 + pnpbios_encode_mem(dev, p, 705 + pnp_get_resource(dev, IORESOURCE_MEM, mem)); 666 706 mem++; 667 707 break; 668 708 669 709 case LARGE_TAG_MEM32: 670 710 if (len != 17) 671 711 goto len_err; 672 - pnpbios_encode_mem32(p, &res->mem_resource[mem]); 712 + pnpbios_encode_mem32(dev, p, 713 + pnp_get_resource(dev, IORESOURCE_MEM, mem)); 673 714 mem++; 674 715 break; 675 716 676 717 case LARGE_TAG_FIXEDMEM32: 677 718 if (len != 9) 678 719 goto len_err; 679 - pnpbios_encode_fixed_mem32(p, &res->mem_resource[mem]); 720 + pnpbios_encode_fixed_mem32(dev, p, 721 + pnp_get_resource(dev, IORESOURCE_MEM, mem)); 680 722 mem++; 681 723 break; 682 724 683 725 case SMALL_TAG_IRQ: 684 726 if (len < 2 || len > 3) 685 727 
goto len_err; 686 - pnpbios_encode_irq(p, &res->irq_resource[irq]); 728 + pnpbios_encode_irq(dev, p, 729 + pnp_get_resource(dev, IORESOURCE_IRQ, irq)); 687 730 irq++; 688 731 break; 689 732 690 733 case SMALL_TAG_DMA: 691 734 if (len != 2) 692 735 goto len_err; 693 - pnpbios_encode_dma(p, &res->dma_resource[dma]); 736 + pnpbios_encode_dma(dev, p, 737 + pnp_get_resource(dev, IORESOURCE_DMA, dma)); 694 738 dma++; 695 739 break; 696 740 697 741 case SMALL_TAG_PORT: 698 742 if (len != 7) 699 743 goto len_err; 700 - pnpbios_encode_port(p, &res->port_resource[port]); 744 + pnpbios_encode_port(dev, p, 745 + pnp_get_resource(dev, IORESOURCE_IO, port)); 701 746 port++; 702 747 break; 703 748 ··· 714 747 case SMALL_TAG_FIXEDPORT: 715 748 if (len != 3) 716 749 goto len_err; 717 - pnpbios_encode_fixed_port(p, &res->port_resource[port]); 750 + pnpbios_encode_fixed_port(dev, p, 751 + pnp_get_resource(dev, IORESOURCE_IO, port)); 718 752 port++; 719 753 break; 720 754 ··· 726 758 727 759 default: /* an unkown tag */ 728 760 len_err: 729 - printk(KERN_ERR 730 - "PnPBIOS: Unknown tag '0x%x', length '%d'.\n", 731 - tag, len); 761 + dev_err(&dev->dev, "unknown tag %#x length %d\n", 762 + tag, len); 732 763 break; 733 764 } 734 765 ··· 738 771 p += len + 1; 739 772 } 740 773 741 - printk(KERN_ERR 742 - "PnPBIOS: Resource structure does not contain an end tag.\n"); 774 + dev_err(&dev->dev, "no end tag in resource structure\n"); 743 775 744 776 return NULL; 745 777 } ··· 753 787 unsigned char *p = (char *)node->data; 754 788 unsigned char *end = (char *)(node->data + node->size); 755 789 756 - p = pnpbios_parse_allocated_resource_data(p, end, &dev->res); 790 + p = pnpbios_parse_allocated_resource_data(dev, p, end); 757 791 if (!p) 758 792 return -EIO; 759 793 p = pnpbios_parse_resource_option_data(p, end, dev); ··· 765 799 return 0; 766 800 } 767 801 768 - int pnpbios_read_resources_from_node(struct pnp_resource_table *res, 802 + int pnpbios_read_resources_from_node(struct pnp_dev *dev, 
769 803 struct pnp_bios_node *node) 770 804 { 771 805 unsigned char *p = (char *)node->data; 772 806 unsigned char *end = (char *)(node->data + node->size); 773 807 774 - p = pnpbios_parse_allocated_resource_data(p, end, res); 808 + p = pnpbios_parse_allocated_resource_data(dev, p, end); 775 809 if (!p) 776 810 return -EIO; 777 811 return 0; 778 812 } 779 813 780 - int pnpbios_write_resources_to_node(struct pnp_resource_table *res, 814 + int pnpbios_write_resources_to_node(struct pnp_dev *dev, 781 815 struct pnp_bios_node *node) 782 816 { 783 817 unsigned char *p = (char *)node->data; 784 818 unsigned char *end = (char *)(node->data + node->size); 785 819 786 - p = pnpbios_encode_allocated_resource_data(p, end, res); 820 + p = pnpbios_encode_allocated_resource_data(dev, p, end); 787 821 if (!p) 788 822 return -EIO; 789 823 return 0;
+9 -6
drivers/pnp/quirks.c
··· 117 117 static void quirk_system_pci_resources(struct pnp_dev *dev) 118 118 { 119 119 struct pci_dev *pdev = NULL; 120 + struct resource *res; 120 121 resource_size_t pnp_start, pnp_end, pci_start, pci_end; 121 122 int i, j; 122 123 ··· 138 137 139 138 pci_start = pci_resource_start(pdev, i); 140 139 pci_end = pci_resource_end(pdev, i); 141 - for (j = 0; j < PNP_MAX_MEM; j++) { 142 - if (!pnp_mem_valid(dev, j) || 143 - pnp_mem_len(dev, j) == 0) 140 + for (j = 0; 141 + (res = pnp_get_resource(dev, IORESOURCE_MEM, j)); 142 + j++) { 143 + if (res->flags & IORESOURCE_UNSET || 144 + (res->start == 0 && res->end == 0)) 144 145 continue; 145 146 146 - pnp_start = pnp_mem_start(dev, j); 147 - pnp_end = pnp_mem_end(dev, j); 147 + pnp_start = res->start; 148 + pnp_end = res->end; 148 149 149 150 /* 150 151 * If the PNP region doesn't overlap the PCI ··· 179 176 pci_name(pdev), i, 180 177 (unsigned long long) pci_start, 181 178 (unsigned long long) pci_end); 182 - pnp_mem_flags(dev, j) = 0; 179 + res->flags = 0; 183 180 } 184 181 } 185 182 }
+293 -68
drivers/pnp/resource.c
··· 53 53 if (dev->independent) 54 54 dev_err(&dev->dev, "independent resource already registered\n"); 55 55 dev->independent = option; 56 + 57 + dev_dbg(&dev->dev, "new independent option\n"); 56 58 return option; 57 59 } 58 60 ··· 72 70 parent->next = option; 73 71 } else 74 72 dev->dependent = option; 73 + 74 + dev_dbg(&dev->dev, "new dependent option (priority %#x)\n", priority); 75 75 return option; 76 76 } 77 77 78 - int pnp_register_irq_resource(struct pnp_option *option, struct pnp_irq *data) 78 + int pnp_register_irq_resource(struct pnp_dev *dev, struct pnp_option *option, 79 + struct pnp_irq *data) 79 80 { 80 81 struct pnp_irq *ptr; 82 + #ifdef DEBUG 83 + char buf[PNP_IRQ_NR]; /* hex-encoded, so this is overkill but safe */ 84 + #endif 81 85 82 86 ptr = option->irq; 83 87 while (ptr && ptr->next) ··· 102 94 pcibios_penalize_isa_irq(i, 0); 103 95 } 104 96 #endif 97 + 98 + #ifdef DEBUG 99 + bitmap_scnprintf(buf, sizeof(buf), data->map, PNP_IRQ_NR); 100 + dev_dbg(&dev->dev, " irq bitmask %s flags %#x\n", buf, 101 + data->flags); 102 + #endif 105 103 return 0; 106 104 } 107 105 108 - int pnp_register_dma_resource(struct pnp_option *option, struct pnp_dma *data) 106 + int pnp_register_dma_resource(struct pnp_dev *dev, struct pnp_option *option, 107 + struct pnp_dma *data) 109 108 { 110 109 struct pnp_dma *ptr; 111 110 ··· 124 109 else 125 110 option->dma = data; 126 111 112 + dev_dbg(&dev->dev, " dma bitmask %#x flags %#x\n", data->map, 113 + data->flags); 127 114 return 0; 128 115 } 129 116 130 - int pnp_register_port_resource(struct pnp_option *option, struct pnp_port *data) 117 + int pnp_register_port_resource(struct pnp_dev *dev, struct pnp_option *option, 118 + struct pnp_port *data) 131 119 { 132 120 struct pnp_port *ptr; 133 121 ··· 142 124 else 143 125 option->port = data; 144 126 127 + dev_dbg(&dev->dev, " io " 128 + "min %#x max %#x align %d size %d flags %#x\n", 129 + data->min, data->max, data->align, data->size, data->flags); 145 130 return 0; 146 
131 } 147 132 148 - int pnp_register_mem_resource(struct pnp_option *option, struct pnp_mem *data) 133 + int pnp_register_mem_resource(struct pnp_dev *dev, struct pnp_option *option, 134 + struct pnp_mem *data) 149 135 { 150 136 struct pnp_mem *ptr; 151 137 ··· 160 138 ptr->next = data; 161 139 else 162 140 option->mem = data; 141 + 142 + dev_dbg(&dev->dev, " mem " 143 + "min %#x max %#x align %d size %d flags %#x\n", 144 + data->min, data->max, data->align, data->size, data->flags); 163 145 return 0; 164 146 } 165 147 ··· 239 213 #define cannot_compare(flags) \ 240 214 ((flags) & (IORESOURCE_UNSET | IORESOURCE_DISABLED)) 241 215 242 - int pnp_check_port(struct pnp_dev *dev, int idx) 216 + int pnp_check_port(struct pnp_dev *dev, struct resource *res) 243 217 { 244 - int tmp; 218 + int i; 245 219 struct pnp_dev *tdev; 220 + struct resource *tres; 246 221 resource_size_t *port, *end, *tport, *tend; 247 222 248 - port = &dev->res.port_resource[idx].start; 249 - end = &dev->res.port_resource[idx].end; 223 + port = &res->start; 224 + end = &res->end; 250 225 251 226 /* if the resource doesn't exist, don't complain about it */ 252 - if (cannot_compare(dev->res.port_resource[idx].flags)) 227 + if (cannot_compare(res->flags)) 253 228 return 1; 254 229 255 230 /* check if the resource is already in use, skip if the ··· 261 234 } 262 235 263 236 /* check if the resource is reserved */ 264 - for (tmp = 0; tmp < 8; tmp++) { 265 - int rport = pnp_reserve_io[tmp << 1]; 266 - int rend = pnp_reserve_io[(tmp << 1) + 1] + rport - 1; 237 + for (i = 0; i < 8; i++) { 238 + int rport = pnp_reserve_io[i << 1]; 239 + int rend = pnp_reserve_io[(i << 1) + 1] + rport - 1; 267 240 if (ranged_conflict(port, end, &rport, &rend)) 268 241 return 0; 269 242 } 270 243 271 244 /* check for internal conflicts */ 272 - for (tmp = 0; tmp < PNP_MAX_PORT && tmp != idx; tmp++) { 273 - if (dev->res.port_resource[tmp].flags & IORESOURCE_IO) { 274 - tport = &dev->res.port_resource[tmp].start; 275 - tend = 
&dev->res.port_resource[tmp].end; 245 + for (i = 0; (tres = pnp_get_resource(dev, IORESOURCE_IO, i)); i++) { 246 + if (tres != res && tres->flags & IORESOURCE_IO) { 247 + tport = &tres->start; 248 + tend = &tres->end; 276 249 if (ranged_conflict(port, end, tport, tend)) 277 250 return 0; 278 251 } ··· 282 255 pnp_for_each_dev(tdev) { 283 256 if (tdev == dev) 284 257 continue; 285 - for (tmp = 0; tmp < PNP_MAX_PORT; tmp++) { 286 - if (tdev->res.port_resource[tmp].flags & IORESOURCE_IO) { 287 - if (cannot_compare 288 - (tdev->res.port_resource[tmp].flags)) 258 + for (i = 0; 259 + (tres = pnp_get_resource(tdev, IORESOURCE_IO, i)); 260 + i++) { 261 + if (tres->flags & IORESOURCE_IO) { 262 + if (cannot_compare(tres->flags)) 289 263 continue; 290 - tport = &tdev->res.port_resource[tmp].start; 291 - tend = &tdev->res.port_resource[tmp].end; 264 + tport = &tres->start; 265 + tend = &tres->end; 292 266 if (ranged_conflict(port, end, tport, tend)) 293 267 return 0; 294 268 } ··· 299 271 return 1; 300 272 } 301 273 302 - int pnp_check_mem(struct pnp_dev *dev, int idx) 274 + int pnp_check_mem(struct pnp_dev *dev, struct resource *res) 303 275 { 304 - int tmp; 276 + int i; 305 277 struct pnp_dev *tdev; 278 + struct resource *tres; 306 279 resource_size_t *addr, *end, *taddr, *tend; 307 280 308 - addr = &dev->res.mem_resource[idx].start; 309 - end = &dev->res.mem_resource[idx].end; 281 + addr = &res->start; 282 + end = &res->end; 310 283 311 284 /* if the resource doesn't exist, don't complain about it */ 312 - if (cannot_compare(dev->res.mem_resource[idx].flags)) 285 + if (cannot_compare(res->flags)) 313 286 return 1; 314 287 315 288 /* check if the resource is already in use, skip if the ··· 321 292 } 322 293 323 294 /* check if the resource is reserved */ 324 - for (tmp = 0; tmp < 8; tmp++) { 325 - int raddr = pnp_reserve_mem[tmp << 1]; 326 - int rend = pnp_reserve_mem[(tmp << 1) + 1] + raddr - 1; 295 + for (i = 0; i < 8; i++) { 296 + int raddr = pnp_reserve_mem[i << 1]; 297 
+ int rend = pnp_reserve_mem[(i << 1) + 1] + raddr - 1; 327 298 if (ranged_conflict(addr, end, &raddr, &rend)) 328 299 return 0; 329 300 } 330 301 331 302 /* check for internal conflicts */ 332 - for (tmp = 0; tmp < PNP_MAX_MEM && tmp != idx; tmp++) { 333 - if (dev->res.mem_resource[tmp].flags & IORESOURCE_MEM) { 334 - taddr = &dev->res.mem_resource[tmp].start; 335 - tend = &dev->res.mem_resource[tmp].end; 303 + for (i = 0; (tres = pnp_get_resource(dev, IORESOURCE_MEM, i)); i++) { 304 + if (tres != res && tres->flags & IORESOURCE_MEM) { 305 + taddr = &tres->start; 306 + tend = &tres->end; 336 307 if (ranged_conflict(addr, end, taddr, tend)) 337 308 return 0; 338 309 } ··· 342 313 pnp_for_each_dev(tdev) { 343 314 if (tdev == dev) 344 315 continue; 345 - for (tmp = 0; tmp < PNP_MAX_MEM; tmp++) { 346 - if (tdev->res.mem_resource[tmp].flags & IORESOURCE_MEM) { 347 - if (cannot_compare 348 - (tdev->res.mem_resource[tmp].flags)) 316 + for (i = 0; 317 + (tres = pnp_get_resource(tdev, IORESOURCE_MEM, i)); 318 + i++) { 319 + if (tres->flags & IORESOURCE_MEM) { 320 + if (cannot_compare(tres->flags)) 349 321 continue; 350 - taddr = &tdev->res.mem_resource[tmp].start; 351 - tend = &tdev->res.mem_resource[tmp].end; 322 + taddr = &tres->start; 323 + tend = &tres->end; 352 324 if (ranged_conflict(addr, end, taddr, tend)) 353 325 return 0; 354 326 } ··· 364 334 return IRQ_HANDLED; 365 335 } 366 336 367 - int pnp_check_irq(struct pnp_dev *dev, int idx) 337 + int pnp_check_irq(struct pnp_dev *dev, struct resource *res) 368 338 { 369 - int tmp; 339 + int i; 370 340 struct pnp_dev *tdev; 371 - resource_size_t *irq = &dev->res.irq_resource[idx].start; 341 + struct resource *tres; 342 + resource_size_t *irq; 343 + 344 + irq = &res->start; 372 345 373 346 /* if the resource doesn't exist, don't complain about it */ 374 - if (cannot_compare(dev->res.irq_resource[idx].flags)) 347 + if (cannot_compare(res->flags)) 375 348 return 1; 376 349 377 350 /* check if the resource is valid */ ··· 
382 349 return 0; 383 350 384 351 /* check if the resource is reserved */ 385 - for (tmp = 0; tmp < 16; tmp++) { 386 - if (pnp_reserve_irq[tmp] == *irq) 352 + for (i = 0; i < 16; i++) { 353 + if (pnp_reserve_irq[i] == *irq) 387 354 return 0; 388 355 } 389 356 390 357 /* check for internal conflicts */ 391 - for (tmp = 0; tmp < PNP_MAX_IRQ && tmp != idx; tmp++) { 392 - if (dev->res.irq_resource[tmp].flags & IORESOURCE_IRQ) { 393 - if (dev->res.irq_resource[tmp].start == *irq) 358 + for (i = 0; (tres = pnp_get_resource(dev, IORESOURCE_IRQ, i)); i++) { 359 + if (tres != res && tres->flags & IORESOURCE_IRQ) { 360 + if (tres->start == *irq) 394 361 return 0; 395 362 } 396 363 } ··· 421 388 pnp_for_each_dev(tdev) { 422 389 if (tdev == dev) 423 390 continue; 424 - for (tmp = 0; tmp < PNP_MAX_IRQ; tmp++) { 425 - if (tdev->res.irq_resource[tmp].flags & IORESOURCE_IRQ) { 426 - if (cannot_compare 427 - (tdev->res.irq_resource[tmp].flags)) 391 + for (i = 0; 392 + (tres = pnp_get_resource(tdev, IORESOURCE_IRQ, i)); 393 + i++) { 394 + if (tres->flags & IORESOURCE_IRQ) { 395 + if (cannot_compare(tres->flags)) 428 396 continue; 429 - if ((tdev->res.irq_resource[tmp].start == *irq)) 397 + if (tres->start == *irq) 430 398 return 0; 431 399 } 432 400 } ··· 436 402 return 1; 437 403 } 438 404 439 - int pnp_check_dma(struct pnp_dev *dev, int idx) 405 + int pnp_check_dma(struct pnp_dev *dev, struct resource *res) 440 406 { 441 407 #ifndef CONFIG_IA64 442 - int tmp; 408 + int i; 443 409 struct pnp_dev *tdev; 444 - resource_size_t *dma = &dev->res.dma_resource[idx].start; 410 + struct resource *tres; 411 + resource_size_t *dma; 412 + 413 + dma = &res->start; 445 414 446 415 /* if the resource doesn't exist, don't complain about it */ 447 - if (cannot_compare(dev->res.dma_resource[idx].flags)) 416 + if (cannot_compare(res->flags)) 448 417 return 1; 449 418 450 419 /* check if the resource is valid */ ··· 455 418 return 0; 456 419 457 420 /* check if the resource is reserved */ 458 - for 
(tmp = 0; tmp < 8; tmp++) { 459 - if (pnp_reserve_dma[tmp] == *dma) 421 + for (i = 0; i < 8; i++) { 422 + if (pnp_reserve_dma[i] == *dma) 460 423 return 0; 461 424 } 462 425 463 426 /* check for internal conflicts */ 464 - for (tmp = 0; tmp < PNP_MAX_DMA && tmp != idx; tmp++) { 465 - if (dev->res.dma_resource[tmp].flags & IORESOURCE_DMA) { 466 - if (dev->res.dma_resource[tmp].start == *dma) 427 + for (i = 0; (tres = pnp_get_resource(dev, IORESOURCE_DMA, i)); i++) { 428 + if (tres != res && tres->flags & IORESOURCE_DMA) { 429 + if (tres->start == *dma) 467 430 return 0; 468 431 } 469 432 } ··· 480 443 pnp_for_each_dev(tdev) { 481 444 if (tdev == dev) 482 445 continue; 483 - for (tmp = 0; tmp < PNP_MAX_DMA; tmp++) { 484 - if (tdev->res.dma_resource[tmp].flags & IORESOURCE_DMA) { 485 - if (cannot_compare 486 - (tdev->res.dma_resource[tmp].flags)) 446 + for (i = 0; 447 + (tres = pnp_get_resource(tdev, IORESOURCE_DMA, i)); 448 + i++) { 449 + if (tres->flags & IORESOURCE_DMA) { 450 + if (cannot_compare(tres->flags)) 487 451 continue; 488 - if ((tdev->res.dma_resource[tmp].start == *dma)) 452 + if (tres->start == *dma) 489 453 return 0; 490 454 } 491 455 } ··· 497 459 /* IA64 does not have legacy DMA */ 498 460 return 0; 499 461 #endif 462 + } 463 + 464 + struct pnp_resource *pnp_get_pnp_resource(struct pnp_dev *dev, 465 + unsigned int type, unsigned int num) 466 + { 467 + struct pnp_resource_table *res = dev->res; 468 + 469 + switch (type) { 470 + case IORESOURCE_IO: 471 + if (num >= PNP_MAX_PORT) 472 + return NULL; 473 + return &res->port[num]; 474 + case IORESOURCE_MEM: 475 + if (num >= PNP_MAX_MEM) 476 + return NULL; 477 + return &res->mem[num]; 478 + case IORESOURCE_IRQ: 479 + if (num >= PNP_MAX_IRQ) 480 + return NULL; 481 + return &res->irq[num]; 482 + case IORESOURCE_DMA: 483 + if (num >= PNP_MAX_DMA) 484 + return NULL; 485 + return &res->dma[num]; 486 + } 487 + return NULL; 488 + } 489 + 490 + struct resource *pnp_get_resource(struct pnp_dev *dev, 491 + unsigned 
int type, unsigned int num) 492 + { 493 + struct pnp_resource *pnp_res; 494 + 495 + pnp_res = pnp_get_pnp_resource(dev, type, num); 496 + if (pnp_res) 497 + return &pnp_res->res; 498 + 499 + return NULL; 500 + } 501 + EXPORT_SYMBOL(pnp_get_resource); 502 + 503 + static struct pnp_resource *pnp_new_resource(struct pnp_dev *dev, int type) 504 + { 505 + struct pnp_resource *pnp_res; 506 + int i; 507 + 508 + switch (type) { 509 + case IORESOURCE_IO: 510 + for (i = 0; i < PNP_MAX_PORT; i++) { 511 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_IO, i); 512 + if (pnp_res && !pnp_resource_valid(&pnp_res->res)) 513 + return pnp_res; 514 + } 515 + break; 516 + case IORESOURCE_MEM: 517 + for (i = 0; i < PNP_MAX_MEM; i++) { 518 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_MEM, i); 519 + if (pnp_res && !pnp_resource_valid(&pnp_res->res)) 520 + return pnp_res; 521 + } 522 + break; 523 + case IORESOURCE_IRQ: 524 + for (i = 0; i < PNP_MAX_IRQ; i++) { 525 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_IRQ, i); 526 + if (pnp_res && !pnp_resource_valid(&pnp_res->res)) 527 + return pnp_res; 528 + } 529 + break; 530 + case IORESOURCE_DMA: 531 + for (i = 0; i < PNP_MAX_DMA; i++) { 532 + pnp_res = pnp_get_pnp_resource(dev, IORESOURCE_DMA, i); 533 + if (pnp_res && !pnp_resource_valid(&pnp_res->res)) 534 + return pnp_res; 535 + } 536 + break; 537 + } 538 + return NULL; 539 + } 540 + 541 + struct pnp_resource *pnp_add_irq_resource(struct pnp_dev *dev, int irq, 542 + int flags) 543 + { 544 + struct pnp_resource *pnp_res; 545 + struct resource *res; 546 + static unsigned char warned; 547 + 548 + pnp_res = pnp_new_resource(dev, IORESOURCE_IRQ); 549 + if (!pnp_res) { 550 + if (!warned) { 551 + dev_err(&dev->dev, "can't add resource for IRQ %d\n", 552 + irq); 553 + warned = 1; 554 + } 555 + return NULL; 556 + } 557 + 558 + res = &pnp_res->res; 559 + res->flags = IORESOURCE_IRQ | flags; 560 + res->start = irq; 561 + res->end = irq; 562 + 563 + dev_dbg(&dev->dev, " add irq %d flags %#x\n", 
irq, flags); 564 + return pnp_res; 565 + } 566 + 567 + struct pnp_resource *pnp_add_dma_resource(struct pnp_dev *dev, int dma, 568 + int flags) 569 + { 570 + struct pnp_resource *pnp_res; 571 + struct resource *res; 572 + static unsigned char warned; 573 + 574 + pnp_res = pnp_new_resource(dev, IORESOURCE_DMA); 575 + if (!pnp_res) { 576 + if (!warned) { 577 + dev_err(&dev->dev, "can't add resource for DMA %d\n", 578 + dma); 579 + warned = 1; 580 + } 581 + return NULL; 582 + } 583 + 584 + res = &pnp_res->res; 585 + res->flags = IORESOURCE_DMA | flags; 586 + res->start = dma; 587 + res->end = dma; 588 + 589 + dev_dbg(&dev->dev, " add dma %d flags %#x\n", dma, flags); 590 + return pnp_res; 591 + } 592 + 593 + struct pnp_resource *pnp_add_io_resource(struct pnp_dev *dev, 594 + resource_size_t start, 595 + resource_size_t end, int flags) 596 + { 597 + struct pnp_resource *pnp_res; 598 + struct resource *res; 599 + static unsigned char warned; 600 + 601 + pnp_res = pnp_new_resource(dev, IORESOURCE_IO); 602 + if (!pnp_res) { 603 + if (!warned) { 604 + dev_err(&dev->dev, "can't add resource for IO " 605 + "%#llx-%#llx\n",(unsigned long long) start, 606 + (unsigned long long) end); 607 + warned = 1; 608 + } 609 + return NULL; 610 + } 611 + 612 + res = &pnp_res->res; 613 + res->flags = IORESOURCE_IO | flags; 614 + res->start = start; 615 + res->end = end; 616 + 617 + dev_dbg(&dev->dev, " add io %#llx-%#llx flags %#x\n", 618 + (unsigned long long) start, (unsigned long long) end, flags); 619 + return pnp_res; 620 + } 621 + 622 + struct pnp_resource *pnp_add_mem_resource(struct pnp_dev *dev, 623 + resource_size_t start, 624 + resource_size_t end, int flags) 625 + { 626 + struct pnp_resource *pnp_res; 627 + struct resource *res; 628 + static unsigned char warned; 629 + 630 + pnp_res = pnp_new_resource(dev, IORESOURCE_MEM); 631 + if (!pnp_res) { 632 + if (!warned) { 633 + dev_err(&dev->dev, "can't add resource for MEM " 634 + "%#llx-%#llx\n",(unsigned long long) start, 635 + 
(unsigned long long) end); 636 + warned = 1; 637 + } 638 + return NULL; 639 + } 640 + 641 + res = &pnp_res->res; 642 + res->flags = IORESOURCE_MEM | flags; 643 + res->start = start; 644 + res->end = end; 645 + 646 + dev_dbg(&dev->dev, " add mem %#llx-%#llx flags %#x\n", 647 + (unsigned long long) start, (unsigned long long) end, flags); 648 + return pnp_res; 500 649 } 501 650 502 651 /* format is: pnp_reserve_irq=irq1[,irq2] .... */
+63
drivers/pnp/support.c
··· 25 25 } 26 26 27 27 EXPORT_SYMBOL(pnp_is_active); 28 + 29 + /* 30 + * Functionally similar to acpi_ex_eisa_id_to_string(), but that's 31 + * buried in the ACPI CA, and we can't depend on it being present. 32 + */ 33 + void pnp_eisa_id_to_string(u32 id, char *str) 34 + { 35 + id = be32_to_cpu(id); 36 + 37 + /* 38 + * According to the specs, the first three characters are five-bit 39 + * compressed ASCII, and the left-over high order bit should be zero. 40 + * However, the Linux ISAPNP code historically used six bits for the 41 + * first character, and there seem to be IDs that depend on that, 42 + * e.g., "nEC8241" in the Linux 8250_pnp serial driver and the 43 + * FreeBSD sys/pc98/cbus/sio_cbus.c driver. 44 + */ 45 + str[0] = 'A' + ((id >> 26) & 0x3f) - 1; 46 + str[1] = 'A' + ((id >> 21) & 0x1f) - 1; 47 + str[2] = 'A' + ((id >> 16) & 0x1f) - 1; 48 + str[3] = hex_asc((id >> 12) & 0xf); 49 + str[4] = hex_asc((id >> 8) & 0xf); 50 + str[5] = hex_asc((id >> 4) & 0xf); 51 + str[6] = hex_asc((id >> 0) & 0xf); 52 + str[7] = '\0'; 53 + } 54 + 55 + void dbg_pnp_show_resources(struct pnp_dev *dev, char *desc) 56 + { 57 + #ifdef DEBUG 58 + struct resource *res; 59 + int i; 60 + 61 + dev_dbg(&dev->dev, "current resources: %s\n", desc); 62 + 63 + for (i = 0; i < PNP_MAX_IRQ; i++) { 64 + res = pnp_get_resource(dev, IORESOURCE_IRQ, i); 65 + if (res && !(res->flags & IORESOURCE_UNSET)) 66 + dev_dbg(&dev->dev, " irq %lld flags %#lx\n", 67 + (unsigned long long) res->start, res->flags); 68 + } 69 + for (i = 0; i < PNP_MAX_DMA; i++) { 70 + res = pnp_get_resource(dev, IORESOURCE_DMA, i); 71 + if (res && !(res->flags & IORESOURCE_UNSET)) 72 + dev_dbg(&dev->dev, " dma %lld flags %#lx\n", 73 + (unsigned long long) res->start, res->flags); 74 + } 75 + for (i = 0; i < PNP_MAX_PORT; i++) { 76 + res = pnp_get_resource(dev, IORESOURCE_IO, i); 77 + if (res && !(res->flags & IORESOURCE_UNSET)) 78 + dev_dbg(&dev->dev, " io %#llx-%#llx flags %#lx\n", 79 + (unsigned long long) res->start, 80 + 
(unsigned long long) res->end, res->flags); 81 + } 82 + for (i = 0; i < PNP_MAX_MEM; i++) { 83 + res = pnp_get_resource(dev, IORESOURCE_MEM, i); 84 + if (res && !(res->flags & IORESOURCE_UNSET)) 85 + dev_dbg(&dev->dev, " mem %#llx-%#llx flags %#lx\n", 86 + (unsigned long long) res->start, 87 + (unsigned long long) res->end, res->flags); 88 + } 89 + #endif 90 + }
+10 -11
drivers/pnp/system.c
··· 56 56 57 57 static void reserve_resources_of_dev(struct pnp_dev *dev) 58 58 { 59 + struct resource *res; 59 60 int i; 60 61 61 - for (i = 0; i < PNP_MAX_PORT; i++) { 62 - if (!pnp_port_valid(dev, i)) 62 + for (i = 0; (res = pnp_get_resource(dev, IORESOURCE_IO, i)); i++) { 63 + if (res->flags & IORESOURCE_UNSET) 63 64 continue; 64 - if (pnp_port_start(dev, i) == 0) 65 + if (res->start == 0) 65 66 continue; /* disabled */ 66 - if (pnp_port_start(dev, i) < 0x100) 67 + if (res->start < 0x100) 67 68 /* 68 69 * Below 0x100 is only standard PC hardware 69 70 * (pics, kbd, timer, dma, ...) ··· 74 73 * So, do nothing 75 74 */ 76 75 continue; 77 - if (pnp_port_end(dev, i) < pnp_port_start(dev, i)) 76 + if (res->end < res->start) 78 77 continue; /* invalid */ 79 78 80 - reserve_range(dev, pnp_port_start(dev, i), 81 - pnp_port_end(dev, i), 1); 79 + reserve_range(dev, res->start, res->end, 1); 82 80 } 83 81 84 - for (i = 0; i < PNP_MAX_MEM; i++) { 85 - if (!pnp_mem_valid(dev, i)) 82 + for (i = 0; (res = pnp_get_resource(dev, IORESOURCE_MEM, i)); i++) { 83 + if (res->flags & IORESOURCE_UNSET) 86 84 continue; 87 85 88 - reserve_range(dev, pnp_mem_start(dev, i), 89 - pnp_mem_end(dev, i), 0); 86 + reserve_range(dev, res->start, res->end, 0); 90 87 } 91 88 } 92 89
+4 -3
drivers/rtc/rtc-cmos.c
··· 854 854 * don't define the IRQ. It should always be safe to 855 855 * hardcode it in these cases 856 856 */ 857 - return cmos_do_probe(&pnp->dev, &pnp->res.port_resource[0], 8); 857 + return cmos_do_probe(&pnp->dev, 858 + pnp_get_resource(pnp, IORESOURCE_IO, 0), 8); 858 859 else 859 860 return cmos_do_probe(&pnp->dev, 860 - &pnp->res.port_resource[0], 861 - pnp->res.irq_resource[0].start); 861 + pnp_get_resource(pnp, IORESOURCE_IO, 0), 862 + pnp_irq(pnp, 0)); 862 863 } 863 864 864 865 static void __exit cmos_pnp_remove(struct pnp_dev *pnp)
+2 -2
drivers/thermal/Kconfig
··· 3 3 # 4 4 5 5 menuconfig THERMAL 6 - bool "Generic Thermal sysfs driver" 6 + tristate "Generic Thermal sysfs driver" 7 7 help 8 8 Generic Thermal Sysfs driver offers a generic mechanism for 9 9 thermal management. Usually it's made up of one or more thermal ··· 11 11 Each thermal zone contains its own temperature, trip points, 12 12 cooling devices. 13 13 All platforms with ACPI thermal support can use this driver. 14 - If you want this support, you should say Y here. 14 + If you want this support, you should say Y or M here.
+1 -1
drivers/thermal/Makefile
··· 2 2 # Makefile for sensor chip drivers. 3 3 # 4 4 5 - obj-$(CONFIG_THERMAL) += thermal.o 5 + obj-$(CONFIG_THERMAL) += thermal_sys.o
+164 -1
drivers/thermal/thermal.c drivers/thermal/thermal_sys.c
··· 31 31 #include <linux/thermal.h> 32 32 #include <linux/spinlock.h> 33 33 34 - MODULE_AUTHOR("Zhang Rui") 34 + MODULE_AUTHOR("Zhang Rui"); 35 35 MODULE_DESCRIPTION("Generic thermal management sysfs support"); 36 36 MODULE_LICENSE("GPL"); 37 37 ··· 294 294 } 295 295 296 296 /* Device management */ 297 + 298 + #if defined(CONFIG_HWMON) || \ 299 + (defined(CONFIG_HWMON_MODULE) && defined(CONFIG_THERMAL_MODULE)) 300 + /* hwmon sys I/F */ 301 + #include <linux/hwmon.h> 302 + static LIST_HEAD(thermal_hwmon_list); 303 + 304 + static ssize_t 305 + name_show(struct device *dev, struct device_attribute *attr, char *buf) 306 + { 307 + struct thermal_hwmon_device *hwmon = dev->driver_data; 308 + return sprintf(buf, "%s\n", hwmon->type); 309 + } 310 + static DEVICE_ATTR(name, 0444, name_show, NULL); 311 + 312 + static ssize_t 313 + temp_input_show(struct device *dev, struct device_attribute *attr, char *buf) 314 + { 315 + struct thermal_hwmon_attr *hwmon_attr 316 + = container_of(attr, struct thermal_hwmon_attr, attr); 317 + struct thermal_zone_device *tz 318 + = container_of(hwmon_attr, struct thermal_zone_device, 319 + temp_input); 320 + 321 + return tz->ops->get_temp(tz, buf); 322 + } 323 + 324 + static ssize_t 325 + temp_crit_show(struct device *dev, struct device_attribute *attr, 326 + char *buf) 327 + { 328 + struct thermal_hwmon_attr *hwmon_attr 329 + = container_of(attr, struct thermal_hwmon_attr, attr); 330 + struct thermal_zone_device *tz 331 + = container_of(hwmon_attr, struct thermal_zone_device, 332 + temp_crit); 333 + 334 + return tz->ops->get_trip_temp(tz, 0, buf); 335 + } 336 + 337 + 338 + static int 339 + thermal_add_hwmon_sysfs(struct thermal_zone_device *tz) 340 + { 341 + struct thermal_hwmon_device *hwmon; 342 + int new_hwmon_device = 1; 343 + int result; 344 + 345 + mutex_lock(&thermal_list_lock); 346 + list_for_each_entry(hwmon, &thermal_hwmon_list, node) 347 + if (!strcmp(hwmon->type, tz->type)) { 348 + new_hwmon_device = 0; 349 + 
mutex_unlock(&thermal_list_lock); 350 + goto register_sys_interface; 351 + } 352 + mutex_unlock(&thermal_list_lock); 353 + 354 + hwmon = kzalloc(sizeof(struct thermal_hwmon_device), GFP_KERNEL); 355 + if (!hwmon) 356 + return -ENOMEM; 357 + 358 + INIT_LIST_HEAD(&hwmon->tz_list); 359 + strlcpy(hwmon->type, tz->type, THERMAL_NAME_LENGTH); 360 + hwmon->device = hwmon_device_register(NULL); 361 + if (IS_ERR(hwmon->device)) { 362 + result = PTR_ERR(hwmon->device); 363 + goto free_mem; 364 + } 365 + hwmon->device->driver_data = hwmon; 366 + result = device_create_file(hwmon->device, &dev_attr_name); 367 + if (result) 368 + goto unregister_hwmon_device; 369 + 370 + register_sys_interface: 371 + tz->hwmon = hwmon; 372 + hwmon->count++; 373 + 374 + snprintf(tz->temp_input.name, THERMAL_NAME_LENGTH, 375 + "temp%d_input", hwmon->count); 376 + tz->temp_input.attr.attr.name = tz->temp_input.name; 377 + tz->temp_input.attr.attr.mode = 0444; 378 + tz->temp_input.attr.show = temp_input_show; 379 + result = device_create_file(hwmon->device, &tz->temp_input.attr); 380 + if (result) 381 + goto unregister_hwmon_device; 382 + 383 + if (tz->ops->get_crit_temp) { 384 + unsigned long temperature; 385 + if (!tz->ops->get_crit_temp(tz, &temperature)) { 386 + snprintf(tz->temp_crit.name, THERMAL_NAME_LENGTH, 387 + "temp%d_crit", hwmon->count); 388 + tz->temp_crit.attr.attr.name = tz->temp_crit.name; 389 + tz->temp_crit.attr.attr.mode = 0444; 390 + tz->temp_crit.attr.show = temp_crit_show; 391 + result = device_create_file(hwmon->device, 392 + &tz->temp_crit.attr); 393 + if (result) 394 + goto unregister_hwmon_device; 395 + } 396 + } 397 + 398 + mutex_lock(&thermal_list_lock); 399 + if (new_hwmon_device) 400 + list_add_tail(&hwmon->node, &thermal_hwmon_list); 401 + list_add_tail(&tz->hwmon_node, &hwmon->tz_list); 402 + mutex_unlock(&thermal_list_lock); 403 + 404 + return 0; 405 + 406 + unregister_hwmon_device: 407 + device_remove_file(hwmon->device, &tz->temp_crit.attr); 408 + 
device_remove_file(hwmon->device, &tz->temp_input.attr); 409 + if (new_hwmon_device) { 410 + device_remove_file(hwmon->device, &dev_attr_name); 411 + hwmon_device_unregister(hwmon->device); 412 + } 413 + free_mem: 414 + if (new_hwmon_device) 415 + kfree(hwmon); 416 + 417 + return result; 418 + } 419 + 420 + static void 421 + thermal_remove_hwmon_sysfs(struct thermal_zone_device *tz) 422 + { 423 + struct thermal_hwmon_device *hwmon = tz->hwmon; 424 + 425 + tz->hwmon = NULL; 426 + device_remove_file(hwmon->device, &tz->temp_input.attr); 427 + device_remove_file(hwmon->device, &tz->temp_crit.attr); 428 + 429 + mutex_lock(&thermal_list_lock); 430 + list_del(&tz->hwmon_node); 431 + if (!list_empty(&hwmon->tz_list)) { 432 + mutex_unlock(&thermal_list_lock); 433 + return; 434 + } 435 + list_del(&hwmon->node); 436 + mutex_unlock(&thermal_list_lock); 437 + 438 + device_remove_file(hwmon->device, &dev_attr_name); 439 + hwmon_device_unregister(hwmon->device); 440 + kfree(hwmon); 441 + } 442 + #else 443 + static int 444 + thermal_add_hwmon_sysfs(struct thermal_zone_device *tz) 445 + { 446 + return 0; 447 + } 448 + 449 + static void 450 + thermal_remove_hwmon_sysfs(struct thermal_zone_device *tz) 451 + { 452 + } 453 + #endif 454 + 297 455 298 456 /** 299 457 * thermal_zone_bind_cooling_device - bind a cooling device to a thermal zone ··· 800 642 goto unregister; 801 643 } 802 644 645 + result = thermal_add_hwmon_sysfs(tz); 646 + if (result) 647 + goto unregister; 648 + 803 649 mutex_lock(&thermal_list_lock); 804 650 list_add_tail(&tz->node, &thermal_tz_list); 805 651 if (ops->bind) ··· 862 700 for (count = 0; count < tz->trips; count++) 863 701 TRIP_POINT_ATTR_REMOVE(&tz->device, count); 864 702 703 + thermal_remove_hwmon_sysfs(tz); 865 704 release_idr(&thermal_tz_idr, &thermal_idr_lock, tz->id); 866 705 idr_destroy(&tz->idr); 867 706 mutex_destroy(&tz->lock);
+13 -2
include/acpi/acconfig.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 63 63 64 64 /* Current ACPICA subsystem version in YYYYMMDD format */ 65 65 66 - #define ACPI_CA_VERSION 0x20070126 66 + #define ACPI_CA_VERSION 0x20080321 67 67 68 68 /* 69 69 * OS name, used for the _OS object. The _OS object is essentially obsolete, ··· 149 149 */ 150 150 #define ACPI_OBJ_NUM_OPERANDS 8 151 151 #define ACPI_OBJ_MAX_OPERAND 7 152 + 153 + /* Number of elements in the Result Stack frame, can be an arbitrary value */ 154 + 155 + #define ACPI_RESULTS_FRAME_OBJ_NUM 8 156 + 157 + /* 158 + * Maximal number of elements the Result Stack can contain, 159 + * it may be an arbitray value not exceeding the types of 160 + * result_size and result_count (now u8). 161 + */ 162 + #define ACPI_RESULTS_OBJ_NUM_MAX 255 152 163 153 164 /* Names within the namespace are 4 bytes long */ 154 165
+1 -1
include/acpi/acdebug.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+28 -10
include/acpi/acdisasm.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 97 97 #define ACPI_DMT_CHKSUM 20 98 98 #define ACPI_DMT_SPACEID 21 99 99 #define ACPI_DMT_GAS 22 100 - #define ACPI_DMT_DMAR 23 101 - #define ACPI_DMT_MADT 24 102 - #define ACPI_DMT_SRAT 25 103 - #define ACPI_DMT_EXIT 26 104 - #define ACPI_DMT_SIG 27 100 + #define ACPI_DMT_ASF 23 101 + #define ACPI_DMT_DMAR 24 102 + #define ACPI_DMT_HEST 25 103 + #define ACPI_DMT_HESTNTFY 26 104 + #define ACPI_DMT_HESTNTYP 27 105 + #define ACPI_DMT_MADT 28 106 + #define ACPI_DMT_SRAT 29 107 + #define ACPI_DMT_EXIT 30 108 + #define ACPI_DMT_SIG 31 105 109 106 110 typedef 107 - void (*ACPI_TABLE_HANDLER) (struct acpi_table_header * table); 111 + void (*acpi_dmtable_handler) (struct acpi_table_header * table); 108 112 109 113 struct acpi_dmtable_data { 110 114 char *signature; 111 115 struct acpi_dmtable_info *table_info; 112 - ACPI_TABLE_HANDLER table_handler; 116 + acpi_dmtable_handler table_handler; 113 117 char *name; 114 118 }; 115 119 ··· 153 149 extern struct acpi_dmtable_info acpi_dm_table_info_asf4[]; 154 150 extern struct acpi_dmtable_info acpi_dm_table_info_asf_hdr[]; 155 151 extern struct acpi_dmtable_info acpi_dm_table_info_boot[]; 152 + extern struct acpi_dmtable_info acpi_dm_table_info_bert[]; 156 153 extern struct acpi_dmtable_info acpi_dm_table_info_cpep[]; 157 154 extern struct acpi_dmtable_info acpi_dm_table_info_cpep0[]; 158 155 extern struct acpi_dmtable_info acpi_dm_table_info_dbgp[]; ··· 163 158 extern struct acpi_dmtable_info acpi_dm_table_info_dmar0[]; 164 159 extern struct acpi_dmtable_info acpi_dm_table_info_dmar1[]; 165 160 extern struct acpi_dmtable_info acpi_dm_table_info_ecdt[]; 161 + extern struct acpi_dmtable_info acpi_dm_table_info_einj[]; 162 + extern 
struct acpi_dmtable_info acpi_dm_table_info_einj0[]; 163 + extern struct acpi_dmtable_info acpi_dm_table_info_erst[]; 166 164 extern struct acpi_dmtable_info acpi_dm_table_info_facs[]; 167 165 extern struct acpi_dmtable_info acpi_dm_table_info_fadt1[]; 168 166 extern struct acpi_dmtable_info acpi_dm_table_info_fadt2[]; 169 167 extern struct acpi_dmtable_info acpi_dm_table_info_gas[]; 170 168 extern struct acpi_dmtable_info acpi_dm_table_info_header[]; 169 + extern struct acpi_dmtable_info acpi_dm_table_info_hest[]; 170 + extern struct acpi_dmtable_info acpi_dm_table_info_hest9[]; 171 + extern struct acpi_dmtable_info acpi_dm_table_info_hest_notify[]; 171 172 extern struct acpi_dmtable_info acpi_dm_table_info_hpet[]; 172 173 extern struct acpi_dmtable_info acpi_dm_table_info_madt[]; 173 174 extern struct acpi_dmtable_info acpi_dm_table_info_madt0[]; ··· 191 180 extern struct acpi_dmtable_info acpi_dm_table_info_rsdp1[]; 192 181 extern struct acpi_dmtable_info acpi_dm_table_info_rsdp2[]; 193 182 extern struct acpi_dmtable_info acpi_dm_table_info_sbst[]; 183 + extern struct acpi_dmtable_info acpi_dm_table_info_slic[]; 194 184 extern struct acpi_dmtable_info acpi_dm_table_info_slit[]; 195 185 extern struct acpi_dmtable_info acpi_dm_table_info_spcr[]; 196 186 extern struct acpi_dmtable_info acpi_dm_table_info_spmi[]; ··· 206 194 */ 207 195 void acpi_dm_dump_data_table(struct acpi_table_header *table); 208 196 209 - void 197 + acpi_status 210 198 acpi_dm_dump_table(u32 table_length, 211 199 u32 table_offset, 212 200 void *table, ··· 225 213 226 214 void acpi_dm_dump_dmar(struct acpi_table_header *table); 227 215 216 + void acpi_dm_dump_einj(struct acpi_table_header *table); 217 + 218 + void acpi_dm_dump_erst(struct acpi_table_header *table); 219 + 228 220 void acpi_dm_dump_fadt(struct acpi_table_header *table); 229 221 230 - void acpi_dm_dump_srat(struct acpi_table_header *table); 222 + void acpi_dm_dump_hest(struct acpi_table_header *table); 231 223 232 224 void 
acpi_dm_dump_mcfg(struct acpi_table_header *table); 233 225 ··· 242 226 void acpi_dm_dump_rsdt(struct acpi_table_header *table); 243 227 244 228 void acpi_dm_dump_slit(struct acpi_table_header *table); 229 + 230 + void acpi_dm_dump_srat(struct acpi_table_header *table); 245 231 246 232 void acpi_dm_dump_xsdt(struct acpi_table_header *table); 247 233
+15 -16
include/acpi/acdispat.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 53 53 acpi_status 54 54 acpi_ds_get_buffer_field_arguments(union acpi_operand_object *obj_desc); 55 55 56 + acpi_status 57 + acpi_ds_get_bank_field_arguments(union acpi_operand_object *obj_desc); 58 + 56 59 acpi_status acpi_ds_get_region_arguments(union acpi_operand_object *rgn_desc); 57 60 58 61 acpi_status acpi_ds_get_buffer_arguments(union acpi_operand_object *obj_desc); ··· 71 68 union acpi_parse_object *op); 72 69 73 70 acpi_status 71 + acpi_ds_eval_table_region_operands(struct acpi_walk_state *walk_state, 72 + union acpi_parse_object *op); 73 + 74 + acpi_status 74 75 acpi_ds_eval_data_object_operands(struct acpi_walk_state *walk_state, 75 76 union acpi_parse_object *op, 76 77 union acpi_operand_object *obj_desc); 78 + 79 + acpi_status 80 + acpi_ds_eval_bank_field_operands(struct acpi_walk_state *walk_state, 81 + union acpi_parse_object *op); 77 82 78 83 acpi_status acpi_ds_initialize_region(acpi_handle obj_handle); 79 84 ··· 280 269 281 270 void acpi_ds_clear_operands(struct acpi_walk_state *walk_state); 282 271 272 + acpi_status acpi_ds_evaluate_name_path(struct acpi_walk_state *walk_state); 273 + 283 274 /* 284 275 * dswscope - Scope Stack manipulation 285 276 */ ··· 316 303 u32 aml_length, 317 304 struct acpi_evaluate_info *info, u8 pass_number); 318 305 319 - acpi_status 306 + void 320 307 acpi_ds_obj_stack_pop_and_delete(u32 pop_count, 321 308 struct acpi_walk_state *walk_state); 322 309 ··· 329 316 acpi_ds_push_walk_state(struct acpi_walk_state *walk_state, 330 317 struct acpi_thread_state *thread); 331 318 332 - acpi_status acpi_ds_result_stack_pop(struct acpi_walk_state *walk_state); 333 - 334 - acpi_status acpi_ds_result_stack_push(struct 
acpi_walk_state *walk_state); 335 - 336 319 acpi_status acpi_ds_result_stack_clear(struct acpi_walk_state *walk_state); 337 320 338 321 struct acpi_walk_state *acpi_ds_get_current_walk_state(struct acpi_thread_state 339 322 *thread); 340 - 341 - #ifdef ACPI_FUTURE_USAGE 342 - acpi_status 343 - acpi_ds_result_remove(union acpi_operand_object **object, 344 - u32 index, struct acpi_walk_state *walk_state); 345 - #endif 346 323 347 324 acpi_status 348 325 acpi_ds_result_pop(union acpi_operand_object **object, ··· 341 338 acpi_status 342 339 acpi_ds_result_push(union acpi_operand_object *object, 343 340 struct acpi_walk_state *walk_state); 344 - 345 - acpi_status 346 - acpi_ds_result_pop_from_bottom(union acpi_operand_object **object, 347 - struct acpi_walk_state *walk_state); 348 341 349 342 #endif /* _ACDISPAT_H_ */
+1 -1
include/acpi/acevents.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/acexcep.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+24 -14
include/acpi/acglobal.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 170 170 ACPI_EXTERN struct acpi_mutex_info acpi_gbl_mutex_info[ACPI_NUM_MUTEX]; 171 171 172 172 /* 173 - * Global lock semaphore works in conjunction with the actual HW global lock 173 + * Global lock mutex is an actual AML mutex object 174 + * Global lock semaphore works in conjunction with the HW global lock 174 175 */ 175 - ACPI_EXTERN acpi_mutex acpi_gbl_global_lock_mutex; 176 + ACPI_EXTERN union acpi_operand_object *acpi_gbl_global_lock_mutex; 176 177 ACPI_EXTERN acpi_semaphore acpi_gbl_global_lock_semaphore; 178 + ACPI_EXTERN u16 acpi_gbl_global_lock_handle; 179 + ACPI_EXTERN u8 acpi_gbl_global_lock_acquired; 180 + ACPI_EXTERN u8 acpi_gbl_global_lock_present; 177 181 178 182 /* 179 183 * Spinlocks are used for interfaces that can be possibly called at ··· 217 213 ACPI_EXTERN struct acpi_object_notify_handler acpi_gbl_system_notify; 218 214 ACPI_EXTERN acpi_exception_handler acpi_gbl_exception_handler; 219 215 ACPI_EXTERN acpi_init_handler acpi_gbl_init_handler; 216 + ACPI_EXTERN acpi_tbl_handler acpi_gbl_table_handler; 217 + ACPI_EXTERN void *acpi_gbl_table_handler_context; 220 218 ACPI_EXTERN struct acpi_walk_state *acpi_gbl_breakpoint_walk; 219 + 220 + /* Owner ID support */ 221 + 222 + ACPI_EXTERN u32 acpi_gbl_owner_id_mask[ACPI_NUM_OWNERID_MASKS]; 223 + ACPI_EXTERN u8 acpi_gbl_last_owner_id_index; 224 + ACPI_EXTERN u8 acpi_gbl_next_owner_id_offset; 221 225 222 226 /* Misc */ 223 227 ··· 233 221 ACPI_EXTERN u32 acpi_gbl_rsdp_original_location; 234 222 ACPI_EXTERN u32 acpi_gbl_ns_lookup_count; 235 223 ACPI_EXTERN u32 acpi_gbl_ps_find_count; 236 - ACPI_EXTERN u32 acpi_gbl_owner_id_mask[ACPI_NUM_OWNERID_MASKS]; 237 224 ACPI_EXTERN u16 
acpi_gbl_pm1_enable_register_save; 238 - ACPI_EXTERN u16 acpi_gbl_global_lock_handle; 239 - ACPI_EXTERN u8 acpi_gbl_last_owner_id_index; 240 - ACPI_EXTERN u8 acpi_gbl_next_owner_id_offset; 241 225 ACPI_EXTERN u8 acpi_gbl_debugger_configuration; 242 - ACPI_EXTERN u8 acpi_gbl_global_lock_acquired; 243 226 ACPI_EXTERN u8 acpi_gbl_step_to_next_call; 244 227 ACPI_EXTERN u8 acpi_gbl_acpi_hardware_present; 245 - ACPI_EXTERN u8 acpi_gbl_global_lock_present; 246 228 ACPI_EXTERN u8 acpi_gbl_events_initialized; 247 229 ACPI_EXTERN u8 acpi_gbl_system_awake_and_running; 230 + 231 + #ifndef DEFINE_ACPI_GLOBALS 232 + 233 + /* Other miscellaneous */ 248 234 249 235 extern u8 acpi_gbl_shutdown; 250 236 extern u32 acpi_gbl_startup_flags; ··· 250 240 extern const char *acpi_gbl_highest_dstate_names[4]; 251 241 extern const struct acpi_opcode_info acpi_gbl_aml_op_info[AML_NUM_OPCODES]; 252 242 extern const char *acpi_gbl_region_types[ACPI_NUM_PREDEFINED_REGIONS]; 243 + 244 + #endif 253 245 254 246 /* Exception codes */ 255 247 ··· 267 255 * 268 256 ****************************************************************************/ 269 257 270 - #define NUM_NS_TYPES ACPI_TYPE_INVALID+1 271 - 272 258 #if !defined (ACPI_NO_METHOD_EXECUTION) || defined (ACPI_CONSTANT_EVAL_ONLY) 273 259 #define NUM_PREDEFINED_NAMES 10 274 260 #else ··· 277 267 ACPI_EXTERN struct acpi_namespace_node *acpi_gbl_root_node; 278 268 ACPI_EXTERN struct acpi_namespace_node *acpi_gbl_fadt_gpe_device; 279 269 280 - extern const u8 acpi_gbl_ns_properties[NUM_NS_TYPES]; 270 + extern const u8 acpi_gbl_ns_properties[ACPI_NUM_NS_TYPES]; 281 271 extern const struct acpi_predefined_names 282 272 acpi_gbl_pre_defined_names[NUM_PREDEFINED_NAMES]; 283 273 ··· 285 275 ACPI_EXTERN u32 acpi_gbl_current_node_count; 286 276 ACPI_EXTERN u32 acpi_gbl_current_node_size; 287 277 ACPI_EXTERN u32 acpi_gbl_max_concurrent_node_count; 288 - ACPI_EXTERN acpi_size acpi_gbl_entry_stack_pointer; 289 - ACPI_EXTERN acpi_size 
acpi_gbl_lowest_stack_pointer; 278 + ACPI_EXTERN acpi_size *acpi_gbl_entry_stack_pointer; 279 + ACPI_EXTERN acpi_size *acpi_gbl_lowest_stack_pointer; 290 280 ACPI_EXTERN u32 acpi_gbl_deepest_nesting; 291 281 #endif 292 282
+1 -1
include/acpi/achware.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+10 -5
include/acpi/acinterp.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 215 215 u32 aml_length, 216 216 u8 region_space, struct acpi_walk_state *walk_state); 217 217 218 - acpi_status acpi_ex_create_table_region(struct acpi_walk_state *walk_state); 219 - 220 218 acpi_status acpi_ex_create_event(struct acpi_walk_state *walk_state); 221 219 222 220 acpi_status acpi_ex_create_alias(struct acpi_walk_state *walk_state); ··· 246 248 struct acpi_walk_state *walk_state); 247 249 248 250 acpi_status 251 + acpi_ex_acquire_mutex_object(u16 timeout, 252 + union acpi_operand_object *obj_desc, 253 + acpi_thread_id thread_id); 254 + 255 + acpi_status 249 256 acpi_ex_release_mutex(union acpi_operand_object *obj_desc, 250 257 struct acpi_walk_state *walk_state); 258 + 259 + acpi_status acpi_ex_release_mutex_object(union acpi_operand_object *obj_desc); 251 260 252 261 void acpi_ex_release_all_mutexes(struct acpi_thread_state *thread); 253 262 ··· 460 455 461 456 void acpi_ex_truncate_for32bit_table(union acpi_operand_object *obj_desc); 462 457 463 - u8 acpi_ex_acquire_global_lock(u32 rule); 458 + void acpi_ex_acquire_global_lock(u32 rule); 464 459 465 - void acpi_ex_release_global_lock(u8 locked); 460 + void acpi_ex_release_global_lock(u32 rule); 466 461 467 462 void acpi_ex_eisa_id_to_string(u32 numeric_id, char *out_string); 468 463
+6 -4
include/acpi/aclocal.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 522 522 * AML arguments 523 523 */ 524 524 struct acpi_result_values { 525 - ACPI_STATE_COMMON u8 num_results; 526 - u8 last_insert; 527 - union acpi_operand_object *obj_desc[ACPI_OBJ_NUM_OPERANDS]; 525 + ACPI_STATE_COMMON 526 + union acpi_operand_object *obj_desc[ACPI_RESULTS_FRAME_OBJ_NUM]; 528 527 }; 529 528 530 529 typedef ··· 603 604 union acpi_parse_object *next; /* Next op */\ 604 605 struct acpi_namespace_node *node; /* For use by interpreter */\ 605 606 union acpi_parse_value value; /* Value or args associated with the opcode */\ 607 + u8 arg_list_length; /* Number of elements in the arg list */\ 606 608 ACPI_DISASM_ONLY_MEMBERS (\ 607 609 u8 disasm_flags; /* Used during AML disassembly */\ 608 610 u8 disasm_opcode; /* Subtype used for disassembly */\ ··· 696 696 #define ACPI_PARSEOP_NAMED 0x02 697 697 #define ACPI_PARSEOP_DEFERRED 0x04 698 698 #define ACPI_PARSEOP_BYTELIST 0x08 699 + #define ACPI_PARSEOP_IN_STACK 0x10 700 + #define ACPI_PARSEOP_TARGET 0x20 699 701 #define ACPI_PARSEOP_IN_CACHE 0x80 700 702 701 703 /* Parse object disasm_flags */
+36 -35
include/acpi/acmacros.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 61 61 62 62 #define ACPI_ARRAY_LENGTH(x) (sizeof(x) / sizeof((x)[0])) 63 63 64 - #ifdef ACPI_NO_INTEGER64_SUPPORT 65 - /* 66 - * acpi_integer is 32-bits, no 64-bit support on this platform 67 - */ 68 - #define ACPI_LODWORD(l) ((u32)(l)) 69 - #define ACPI_HIDWORD(l) ((u32)(0)) 70 - 71 - #else 72 - 73 - /* 74 - * Full 64-bit address/integer on both 32-bit and 64-bit platforms 75 - */ 76 - #define ACPI_LODWORD(l) ((u32)(u64)(l)) 77 - #define ACPI_HIDWORD(l) ((u32)(((*(struct uint64_struct *)(void *)(&l))).hi)) 78 - #endif 79 - 80 - /* 81 - * printf() format helpers 82 - */ 83 - 84 - /* Split 64-bit integer into two 32-bit values. Use with %8.8_x%8.8_x */ 85 - 86 - #define ACPI_FORMAT_UINT64(i) ACPI_HIDWORD(i),ACPI_LODWORD(i) 87 - 88 64 /* 89 65 * Extract data using a pointer. Any more than a byte and we 90 66 * get into potential aligment issues -- see the STORE macros below. ··· 98 122 #endif 99 123 100 124 /* 125 + * Full 64-bit integer must be available on both 32-bit and 64-bit platforms 126 + */ 127 + struct acpi_integer_overlay { 128 + u32 lo_dword; 129 + u32 hi_dword; 130 + }; 131 + 132 + #define ACPI_LODWORD(integer) (ACPI_CAST_PTR (struct acpi_integer_overlay, &integer)->lo_dword) 133 + #define ACPI_HIDWORD(integer) (ACPI_CAST_PTR (struct acpi_integer_overlay, &integer)->hi_dword) 134 + 135 + /* 136 + * printf() format helpers 137 + */ 138 + 139 + /* Split 64-bit integer into two 32-bit values. 
Use with %8.8_x%8.8_x */ 140 + 141 + #define ACPI_FORMAT_UINT64(i) ACPI_HIDWORD(i),ACPI_LODWORD(i) 142 + 143 + #if ACPI_MACHINE_WIDTH == 64 144 + #define ACPI_FORMAT_NATIVE_UINT(i) ACPI_FORMAT_UINT64(i) 145 + #else 146 + #define ACPI_FORMAT_NATIVE_UINT(i) 0, (i) 147 + #endif 148 + 149 + /* 101 150 * Macros for moving data around to/from buffers that are possibly unaligned. 102 151 * If the hardware supports the transfer of unaligned data, just do the store. 103 152 * Otherwise, we have to move one byte at a time. ··· 138 137 139 138 /* These macros reverse the bytes during the move, converting little-endian to big endian */ 140 139 141 - /* Big Endian <== Little Endian */ 142 - /* Hi...Lo Lo...Hi */ 140 + /* Big Endian <== Little Endian */ 141 + /* Hi...Lo Lo...Hi */ 143 142 /* 16-bit source, 16/32/64 destination */ 144 143 145 144 #define ACPI_MOVE_16_TO_16(d,s) {(( u8 *)(void *)(d))[0] = ((u8 *)(void *)(s))[1];\ 146 - (( u8 *)(void *)(d))[1] = ((u8 *)(void *)(s))[0];} 145 + (( u8 *)(void *)(d))[1] = ((u8 *)(void *)(s))[0];} 147 146 148 147 #define ACPI_MOVE_16_TO_32(d,s) {(*(u32 *)(void *)(d))=0;\ 149 - ((u8 *)(void *)(d))[2] = ((u8 *)(void *)(s))[1];\ 150 - ((u8 *)(void *)(d))[3] = ((u8 *)(void *)(s))[0];} 148 + ((u8 *)(void *)(d))[2] = ((u8 *)(void *)(s))[1];\ 149 + ((u8 *)(void *)(d))[3] = ((u8 *)(void *)(s))[0];} 151 150 152 151 #define ACPI_MOVE_16_TO_64(d,s) {(*(u64 *)(void *)(d))=0;\ 153 - ((u8 *)(void *)(d))[6] = ((u8 *)(void *)(s))[1];\ 154 - ((u8 *)(void *)(d))[7] = ((u8 *)(void *)(s))[0];} 152 + ((u8 *)(void *)(d))[6] = ((u8 *)(void *)(s))[1];\ 153 + ((u8 *)(void *)(d))[7] = ((u8 *)(void *)(s))[0];} 155 154 156 155 /* 32-bit source, 16/32/64 destination */ 157 156 158 157 #define ACPI_MOVE_32_TO_16(d,s) ACPI_MOVE_16_TO_16(d,s) /* Truncate to 16 */ 159 158 160 159 #define ACPI_MOVE_32_TO_32(d,s) {(( u8 *)(void *)(d))[0] = ((u8 *)(void *)(s))[3];\ 161 - (( u8 *)(void *)(d))[1] = ((u8 *)(void *)(s))[2];\ 162 - (( u8 *)(void *)(d))[2] = ((u8 *)(void 
*)(s))[1];\ 163 - (( u8 *)(void *)(d))[3] = ((u8 *)(void *)(s))[0];} 160 + (( u8 *)(void *)(d))[1] = ((u8 *)(void *)(s))[2];\ 161 + (( u8 *)(void *)(d))[2] = ((u8 *)(void *)(s))[1];\ 162 + (( u8 *)(void *)(d))[3] = ((u8 *)(void *)(s))[0];} 164 163 165 164 #define ACPI_MOVE_32_TO_64(d,s) {(*(u64 *)(void *)(d))=0;\ 166 165 ((u8 *)(void *)(d))[4] = ((u8 *)(void *)(s))[3];\
+1 -1
include/acpi/acnames.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+3 -2
include/acpi/acnamesp.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 113 113 114 114 acpi_status 115 115 acpi_ns_one_complete_parse(acpi_native_uint pass_number, 116 - acpi_native_uint table_index); 116 + acpi_native_uint table_index, 117 + struct acpi_namespace_node *start_node); 117 118 118 119 /* 119 120 * nsaccess - Top-level namespace access
+3 -2
include/acpi/acobject.h
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 155 155 struct acpi_object_mutex { 156 156 ACPI_OBJECT_COMMON_HEADER u8 sync_level; /* 0-15, specified in Mutex() call */ 157 157 u16 acquisition_depth; /* Allow multiple Acquires, same thread */ 158 - struct acpi_thread_state *owner_thread; /* Current owner of the mutex */ 159 158 acpi_mutex os_mutex; /* Actual OS synchronization object */ 159 + acpi_thread_id thread_id; /* Current owner of the mutex */ 160 + struct acpi_thread_state *owner_thread; /* Current owner of the mutex */ 160 161 union acpi_operand_object *prev; /* Link for list of acquired mutexes */ 161 162 union acpi_operand_object *next; /* Link for list of acquired mutexes */ 162 163 struct acpi_namespace_node *node; /* Containing namespace node */
+3 -3
include/acpi/acopcode.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 233 233 #define ARGI_CREATE_WORD_FIELD_OP ARGI_LIST3 (ARGI_BUFFER, ARGI_INTEGER, ARGI_REFERENCE) 234 234 #define ARGI_DATA_REGION_OP ARGI_LIST3 (ARGI_STRING, ARGI_STRING, ARGI_STRING) 235 235 #define ARGI_DEBUG_OP ARG_NONE 236 - #define ARGI_DECREMENT_OP ARGI_LIST1 (ARGI_INTEGER_REF) 236 + #define ARGI_DECREMENT_OP ARGI_LIST1 (ARGI_TARGETREF) 237 237 #define ARGI_DEREF_OF_OP ARGI_LIST1 (ARGI_REF_OR_STRING) 238 238 #define ARGI_DEVICE_OP ARGI_INVALID_OPCODE 239 239 #define ARGI_DIVIDE_OP ARGI_LIST4 (ARGI_INTEGER, ARGI_INTEGER, ARGI_TARGETREF, ARGI_TARGETREF) ··· 246 246 #define ARGI_FIND_SET_RIGHT_BIT_OP ARGI_LIST2 (ARGI_INTEGER, ARGI_TARGETREF) 247 247 #define ARGI_FROM_BCD_OP ARGI_LIST2 (ARGI_INTEGER, ARGI_FIXED_TARGET) 248 248 #define ARGI_IF_OP ARGI_INVALID_OPCODE 249 - #define ARGI_INCREMENT_OP ARGI_LIST1 (ARGI_INTEGER_REF) 249 + #define ARGI_INCREMENT_OP ARGI_LIST1 (ARGI_TARGETREF) 250 250 #define ARGI_INDEX_FIELD_OP ARGI_INVALID_OPCODE 251 251 #define ARGI_INDEX_OP ARGI_LIST3 (ARGI_COMPLEXOBJ, ARGI_INTEGER, ARGI_TARGETREF) 252 252 #define ARGI_LAND_OP ARGI_LIST2 (ARGI_INTEGER, ARGI_INTEGER)
+1 -1
include/acpi/acoutput.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+3 -1
include/acpi/acparser.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 108 108 const struct acpi_opcode_info *acpi_ps_get_opcode_info(u16 opcode); 109 109 110 110 char *acpi_ps_get_opcode_name(u16 opcode); 111 + 112 + u8 acpi_ps_get_argument_count(u32 op_type); 111 113 112 114 /* 113 115 * psparse - top level parsing routines
+1 -1
include/acpi/acpi.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/acpiosxf.h
··· 8 8 *****************************************************************************/ 9 9 10 10 /* 11 - * Copyright (C) 2000 - 2007, R. Byron Moore 11 + * Copyright (C) 2000 - 2008, Intel Corp. 12 12 * All rights reserved. 13 13 * 14 14 * Redistribution and use in source and binary forms, with or without
+6 -1
include/acpi/acpixf.h
··· 6 6 *****************************************************************************/ 7 7 8 8 /* 9 - * Copyright (C) 2000 - 2007, R. Byron Moore 9 + * Copyright (C) 2000 - 2008, Intel Corp. 10 10 * All rights reserved. 11 11 * 12 12 * Redistribution and use in source and binary forms, with or without ··· 118 118 acpi_status 119 119 acpi_get_table_by_index(acpi_native_uint table_index, 120 120 struct acpi_table_header **out_table); 121 + 122 + acpi_status 123 + acpi_install_table_handler(acpi_tbl_handler handler, void *context); 124 + 125 + acpi_status acpi_remove_table_handler(acpi_tbl_handler handler); 121 126 122 127 /* 123 128 * Namespace and name interfaces
+2 -1
include/acpi/acresrc.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 94 94 #define ACPI_RSC_BITMASK16 18 95 95 #define ACPI_RSC_EXIT_NE 19 96 96 #define ACPI_RSC_EXIT_LE 20 97 + #define ACPI_RSC_EXIT_EQ 21 97 98 98 99 /* Resource Conversion sub-opcodes */ 99 100
+4 -1
include/acpi/acstruct.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 80 80 u16 opcode; /* Current AML opcode */ 81 81 u8 next_op_info; /* Info about next_op */ 82 82 u8 num_operands; /* Stack pointer for Operands[] array */ 83 + u8 operand_index; /* Index into operand stack, to be used by acpi_ds_obj_stack_push */ 83 84 acpi_owner_id owner_id; /* Owner of objects created during the walk */ 84 85 u8 last_predicate; /* Result of last predicate */ 85 86 u8 current_result; 86 87 u8 return_used; 87 88 u8 scope_depth; 88 89 u8 pass_number; /* Parse pass during table load */ 90 + u8 result_size; /* Total elements for the result stack */ 91 + u8 result_count; /* Current number of occupied elements of result stack */ 89 92 u32 aml_offset; 90 93 u32 arg_types; 91 94 u32 method_breakpoint; /* For single stepping */
+1 -1
include/acpi/actables.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/actbl.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without
+559 -2
include/acpi/actbl1.h
··· 5 5 *****************************************************************************/ 6 6 7 7 /* 8 - * Copyright (C) 2000 - 2007, R. Byron Moore 8 + * Copyright (C) 2000 - 2008, Intel Corp. 9 9 * All rights reserved. 10 10 * 11 11 * Redistribution and use in source and binary forms, with or without ··· 58 58 * it more difficult to inadvertently type in the wrong signature. 59 59 */ 60 60 #define ACPI_SIG_ASF "ASF!" /* Alert Standard Format table */ 61 + #define ACPI_SIG_BERT "BERT" /* Boot Error Record Table */ 61 62 #define ACPI_SIG_BOOT "BOOT" /* Simple Boot Flag Table */ 62 63 #define ACPI_SIG_CPEP "CPEP" /* Corrected Platform Error Polling table */ 63 64 #define ACPI_SIG_DBGP "DBGP" /* Debug Port table */ 64 65 #define ACPI_SIG_DMAR "DMAR" /* DMA Remapping table */ 65 66 #define ACPI_SIG_ECDT "ECDT" /* Embedded Controller Boot Resources Table */ 67 + #define ACPI_SIG_EINJ "EINJ" /* Error Injection table */ 68 + #define ACPI_SIG_ERST "ERST" /* Error Record Serialization Table */ 69 + #define ACPI_SIG_HEST "HEST" /* Hardware Error Source Table */ 66 70 #define ACPI_SIG_HPET "HPET" /* High Precision Event Timer table */ 71 + #define ACPI_SIG_IBFT "IBFT" /* i_sCSI Boot Firmware Table */ 67 72 #define ACPI_SIG_MADT "APIC" /* Multiple APIC Description Table */ 68 73 #define ACPI_SIG_MCFG "MCFG" /* PCI Memory Mapped Configuration table */ 69 74 #define ACPI_SIG_SBST "SBST" /* Smart Battery Specification Table */ 75 + #define ACPI_SIG_SLIC "SLIC" /* Software Licensing Description Table */ 70 76 #define ACPI_SIG_SLIT "SLIT" /* System Locality Distance Information Table */ 71 77 #define ACPI_SIG_SPCR "SPCR" /* Serial Port Console Redirection table */ 72 78 #define ACPI_SIG_SPMI "SPMI" /* Server Platform Management Interface table */ 73 79 #define ACPI_SIG_SRAT "SRAT" /* System Resource Affinity Table */ 74 80 #define ACPI_SIG_TCPA "TCPA" /* Trusted Computing Platform Alliance table */ 81 + #define ACPI_SIG_UEFI "UEFI" /* Uefi Boot Optimization Table */ 82 + #define 
ACPI_SIG_WDAT "WDAT" /* Watchdog Action Table */ 75 83 #define ACPI_SIG_WDRT "WDRT" /* Watchdog Resource Table */ 76 84 77 85 /* ··· 94 86 * portable, so do not use any other bitfield types. 95 87 */ 96 88 97 - /* Common Sub-table header (used in MADT, SRAT, etc.) */ 89 + /* Common Subtable header (used in MADT, SRAT, etc.) */ 98 90 99 91 struct acpi_subtable_header { 100 92 u8 type; 101 93 u8 length; 94 + }; 95 + 96 + /* Common Subtable header for WHEA tables (EINJ, ERST, WDAT) */ 97 + 98 + struct acpi_whea_header { 99 + u8 action; 100 + u8 instruction; 101 + u8 flags; 102 + u8 reserved; 103 + struct acpi_generic_address register_region; 104 + u64 value; /* Value used with Read/Write register */ 105 + u64 mask; /* Bitmask required for this register instruction */ 102 106 }; 103 107 104 108 /******************************************************************************* ··· 221 201 u8 eprom_address; 222 202 u8 devices; 223 203 }; 204 + 205 + /******************************************************************************* 206 + * 207 + * BERT - Boot Error Record Table 208 + * 209 + ******************************************************************************/ 210 + 211 + struct acpi_table_bert { 212 + struct acpi_table_header header; /* Common ACPI table header */ 213 + u32 region_length; /* Length of the boot error region */ 214 + u64 address; /* Physical addresss of the error region */ 215 + }; 216 + 217 + /* Boot Error Region */ 218 + 219 + struct acpi_bert_region { 220 + u32 block_status; 221 + u32 raw_data_offset; 222 + u32 raw_data_length; 223 + u32 data_length; 224 + u32 error_severity; 225 + }; 226 + 227 + /* block_status Flags */ 228 + 229 + #define ACPI_BERT_UNCORRECTABLE (1) 230 + #define ACPI_BERT_CORRECTABLE (2) 231 + #define ACPI_BERT_MULTIPLE_UNCORRECTABLE (4) 232 + #define ACPI_BERT_MULTIPLE_CORRECTABLE (8) 224 233 225 234 /******************************************************************************* 226 235 * ··· 399 350 400 351 
/******************************************************************************* 401 352 * 353 + * EINJ - Error Injection Table 354 + * 355 + ******************************************************************************/ 356 + 357 + struct acpi_table_einj { 358 + struct acpi_table_header header; /* Common ACPI table header */ 359 + u32 header_length; 360 + u32 reserved; 361 + u32 entries; 362 + }; 363 + 364 + /* EINJ Injection Instruction Entries (actions) */ 365 + 366 + struct acpi_einj_entry { 367 + struct acpi_whea_header whea_header; /* Common header for WHEA tables */ 368 + }; 369 + 370 + /* Values for Action field above */ 371 + 372 + enum acpi_einj_actions { 373 + ACPI_EINJ_BEGIN_OPERATION = 0, 374 + ACPI_EINJ_GET_TRIGGER_TABLE = 1, 375 + ACPI_EINJ_SET_ERROR_TYPE = 2, 376 + ACPI_EINJ_GET_ERROR_TYPE = 3, 377 + ACPI_EINJ_END_OPERATION = 4, 378 + ACPI_EINJ_EXECUTE_OPERATION = 5, 379 + ACPI_EINJ_CHECK_BUSY_STATUS = 6, 380 + ACPI_EINJ_GET_COMMAND_STATUS = 7, 381 + ACPI_EINJ_ACTION_RESERVED = 8, /* 8 and greater are reserved */ 382 + ACPI_EINJ_TRIGGER_ERROR = 0xFF /* Except for this value */ 383 + }; 384 + 385 + /* Values for Instruction field above */ 386 + 387 + enum acpi_einj_instructions { 388 + ACPI_EINJ_READ_REGISTER = 0, 389 + ACPI_EINJ_READ_REGISTER_VALUE = 1, 390 + ACPI_EINJ_WRITE_REGISTER = 2, 391 + ACPI_EINJ_WRITE_REGISTER_VALUE = 3, 392 + ACPI_EINJ_NOOP = 4, 393 + ACPI_EINJ_INSTRUCTION_RESERVED = 5 /* 5 and greater are reserved */ 394 + }; 395 + 396 + /* EINJ Trigger Error Action Table */ 397 + 398 + struct acpi_einj_trigger { 399 + u32 header_size; 400 + u32 revision; 401 + u32 table_size; 402 + u32 entry_count; 403 + }; 404 + 405 + /******************************************************************************* 406 + * 407 + * ERST - Error Record Serialization Table 408 + * 409 + ******************************************************************************/ 410 + 411 + struct acpi_table_erst { 412 + struct acpi_table_header header; /* Common ACPI 
table header */ 413 + u32 header_length; 414 + u32 reserved; 415 + u32 entries; 416 + }; 417 + 418 + /* ERST Serialization Entries (actions) */ 419 + 420 + struct acpi_erst_entry { 421 + struct acpi_whea_header whea_header; /* Common header for WHEA tables */ 422 + }; 423 + 424 + /* Values for Action field above */ 425 + 426 + enum acpi_erst_actions { 427 + ACPI_ERST_BEGIN_WRITE_OPERATION = 0, 428 + ACPI_ERST_BEGIN_READ_OPERATION = 1, 429 + ACPI_ERST_BETGIN_CLEAR_OPERATION = 2, 430 + ACPI_ERST_END_OPERATION = 3, 431 + ACPI_ERST_SET_RECORD_OFFSET = 4, 432 + ACPI_ERST_EXECUTE_OPERATION = 5, 433 + ACPI_ERST_CHECK_BUSY_STATUS = 6, 434 + ACPI_ERST_GET_COMMAND_STATUS = 7, 435 + ACPI_ERST_GET_RECORD_IDENTIFIER = 8, 436 + ACPI_ERST_SET_RECORD_IDENTIFIER = 9, 437 + ACPI_ERST_GET_RECORD_COUNT = 10, 438 + ACPI_ERST_BEGIN_DUMMY_WRIITE = 11, 439 + ACPI_ERST_NOT_USED = 12, 440 + ACPI_ERST_GET_ERROR_RANGE = 13, 441 + ACPI_ERST_GET_ERROR_LENGTH = 14, 442 + ACPI_ERST_GET_ERROR_ATTRIBUTES = 15, 443 + ACPI_ERST_ACTION_RESERVED = 16 /* 16 and greater are reserved */ 444 + }; 445 + 446 + /* Values for Instruction field above */ 447 + 448 + enum acpi_erst_instructions { 449 + ACPI_ERST_READ_REGISTER = 0, 450 + ACPI_ERST_READ_REGISTER_VALUE = 1, 451 + ACPI_ERST_WRITE_REGISTER = 2, 452 + ACPI_ERST_WRITE_REGISTER_VALUE = 3, 453 + ACPI_ERST_NOOP = 4, 454 + ACPI_ERST_LOAD_VAR1 = 5, 455 + ACPI_ERST_LOAD_VAR2 = 6, 456 + ACPI_ERST_STORE_VAR1 = 7, 457 + ACPI_ERST_ADD = 8, 458 + ACPI_ERST_SUBTRACT = 9, 459 + ACPI_ERST_ADD_VALUE = 10, 460 + ACPI_ERST_SUBTRACT_VALUE = 11, 461 + ACPI_ERST_STALL = 12, 462 + ACPI_ERST_STALL_WHILE_TRUE = 13, 463 + ACPI_ERST_SKIP_NEXT_IF_TRUE = 14, 464 + ACPI_ERST_GOTO = 15, 465 + ACPI_ERST_SET_SRC_ADDRESS_BASE = 16, 466 + ACPI_ERST_SET_DST_ADDRESS_BASE = 17, 467 + ACPI_ERST_MOVE_DATA = 18, 468 + ACPI_ERST_INSTRUCTION_RESERVED = 19 /* 19 and greater are reserved */ 469 + }; 470 + 471 + /******************************************************************************* 472 
+ * 473 + * HEST - Hardware Error Source Table 474 + * 475 + ******************************************************************************/ 476 + 477 + struct acpi_table_hest { 478 + struct acpi_table_header header; /* Common ACPI table header */ 479 + u32 error_source_count; 480 + }; 481 + 482 + /* HEST subtable header */ 483 + 484 + struct acpi_hest_header { 485 + u16 type; 486 + }; 487 + 488 + /* Values for Type field above for subtables */ 489 + 490 + enum acpi_hest_types { 491 + ACPI_HEST_TYPE_XPF_MACHINE_CHECK = 0, 492 + ACPI_HEST_TYPE_XPF_CORRECTED_MACHINE_CHECK = 1, 493 + ACPI_HEST_TYPE_XPF_UNUSED = 2, 494 + ACPI_HEST_TYPE_XPF_NON_MASKABLE_INTERRUPT = 3, 495 + ACPI_HEST_TYPE_IPF_CORRECTED_MACHINE_CHECK = 4, 496 + ACPI_HEST_TYPE_IPF_CORRECTED_PLATFORM_ERROR = 5, 497 + ACPI_HEST_TYPE_AER_ROOT_PORT = 6, 498 + ACPI_HEST_TYPE_AER_ENDPOINT = 7, 499 + ACPI_HEST_TYPE_AER_BRIDGE = 8, 500 + ACPI_HEST_TYPE_GENERIC_HARDWARE_ERROR_SOURCE = 9, 501 + ACPI_HEST_TYPE_RESERVED = 10 /* 10 and greater are reserved */ 502 + }; 503 + 504 + /* 505 + * HEST Sub-subtables 506 + */ 507 + 508 + /* XPF Machine Check Error Bank */ 509 + 510 + struct acpi_hest_xpf_error_bank { 511 + u8 bank_number; 512 + u8 clear_status_on_init; 513 + u8 status_format; 514 + u8 config_write_enable; 515 + u32 control_register; 516 + u64 control_init_data; 517 + u32 status_register; 518 + u32 address_register; 519 + u32 misc_register; 520 + }; 521 + 522 + /* Generic Error Status */ 523 + 524 + struct acpi_hest_generic_status { 525 + u32 block_status; 526 + u32 raw_data_offset; 527 + u32 raw_data_length; 528 + u32 data_length; 529 + u32 error_severity; 530 + }; 531 + 532 + /* Generic Error Data */ 533 + 534 + struct acpi_hest_generic_data { 535 + u8 section_type[16]; 536 + u32 error_severity; 537 + u16 revision; 538 + u8 validation_bits; 539 + u8 flags; 540 + u32 error_data_length; 541 + u8 fru_id[16]; 542 + u8 fru_text[20]; 543 + }; 544 + 545 + /* Common HEST structure for PCI/AER types below (6,7,8) */ 
546 + 547 + struct acpi_hest_aer_common { 548 + u16 source_id; 549 + u16 config_write_enable; 550 + u8 flags; 551 + u8 enabled; 552 + u32 records_to_pre_allocate; 553 + u32 max_sections_per_record; 554 + u32 bus; 555 + u16 device; 556 + u16 function; 557 + u16 device_control; 558 + u16 reserved; 559 + u32 uncorrectable_error_mask; 560 + u32 uncorrectable_error_severity; 561 + u32 correctable_error_mask; 562 + u32 advanced_error_cababilities; 563 + }; 564 + 565 + /* Hardware Error Notification */ 566 + 567 + struct acpi_hest_notify { 568 + u8 type; 569 + u8 length; 570 + u16 config_write_enable; 571 + u32 poll_interval; 572 + u32 vector; 573 + u32 polling_threshold_value; 574 + u32 polling_threshold_window; 575 + u32 error_threshold_value; 576 + u32 error_threshold_window; 577 + }; 578 + 579 + /* Values for Notify Type field above */ 580 + 581 + enum acpi_hest_notify_types { 582 + ACPI_HEST_NOTIFY_POLLED = 0, 583 + ACPI_HEST_NOTIFY_EXTERNAL = 1, 584 + ACPI_HEST_NOTIFY_LOCAL = 2, 585 + ACPI_HEST_NOTIFY_SCI = 3, 586 + ACPI_HEST_NOTIFY_NMI = 4, 587 + ACPI_HEST_NOTIFY_RESERVED = 5 /* 5 and greater are reserved */ 588 + }; 589 + 590 + /* 591 + * HEST subtables 592 + * 593 + * From WHEA Design Document, 16 May 2007. 594 + * Note: There is no subtable type 2 in this version of the document, 595 + * and there are two different subtable type 3s. 
596 + */ 597 + 598 + /* 0: XPF Machine Check Exception */ 599 + 600 + struct acpi_hest_xpf_machine_check { 601 + struct acpi_hest_header header; 602 + u16 source_id; 603 + u16 config_write_enable; 604 + u8 flags; 605 + u8 reserved1; 606 + u32 records_to_pre_allocate; 607 + u32 max_sections_per_record; 608 + u64 global_capability_data; 609 + u64 global_control_data; 610 + u8 num_hardware_banks; 611 + u8 reserved2[7]; 612 + }; 613 + 614 + /* 1: XPF Corrected Machine Check */ 615 + 616 + struct acpi_table_hest_xpf_corrected { 617 + struct acpi_hest_header header; 618 + u16 source_id; 619 + u16 config_write_enable; 620 + u8 flags; 621 + u8 enabled; 622 + u32 records_to_pre_allocate; 623 + u32 max_sections_per_record; 624 + struct acpi_hest_notify notify; 625 + u8 num_hardware_banks; 626 + u8 reserved[3]; 627 + }; 628 + 629 + /* 3: XPF Non-Maskable Interrupt */ 630 + 631 + struct acpi_hest_xpf_nmi { 632 + struct acpi_hest_header header; 633 + u16 source_id; 634 + u32 reserved; 635 + u32 records_to_pre_allocate; 636 + u32 max_sections_per_record; 637 + u32 max_raw_data_length; 638 + }; 639 + 640 + /* 4: IPF Corrected Machine Check */ 641 + 642 + struct acpi_hest_ipf_corrected { 643 + struct acpi_hest_header header; 644 + u8 enabled; 645 + u8 reserved; 646 + }; 647 + 648 + /* 5: IPF Corrected Platform Error */ 649 + 650 + struct acpi_hest_ipf_corrected_platform { 651 + struct acpi_hest_header header; 652 + u8 enabled; 653 + u8 reserved; 654 + }; 655 + 656 + /* 6: PCI Express Root Port AER */ 657 + 658 + struct acpi_hest_aer_root { 659 + struct acpi_hest_header header; 660 + struct acpi_hest_aer_common aer; 661 + u32 root_error_command; 662 + }; 663 + 664 + /* 7: PCI Express AER (AER Endpoint) */ 665 + 666 + struct acpi_hest_aer { 667 + struct acpi_hest_header header; 668 + struct acpi_hest_aer_common aer; 669 + }; 670 + 671 + /* 8: PCI Express/PCI-X Bridge AER */ 672 + 673 + struct acpi_hest_aer_bridge { 674 + struct acpi_hest_header header; 675 + struct 
acpi_hest_aer_common aer; 676 + u32 secondary_uncorrectable_error_mask; 677 + u32 secondary_uncorrectable_error_severity; 678 + u32 secondary_advanced_capabilities; 679 + }; 680 + 681 + /* 9: Generic Hardware Error Source */ 682 + 683 + struct acpi_hest_generic { 684 + struct acpi_hest_header header; 685 + u16 source_id; 686 + u16 related_source_id; 687 + u8 config_write_enable; 688 + u8 enabled; 689 + u32 records_to_pre_allocate; 690 + u32 max_sections_per_record; 691 + u32 max_raw_data_length; 692 + struct acpi_generic_address error_status_address; 693 + struct acpi_hest_notify notify; 694 + u32 error_status_block_length; 695 + }; 696 + 697 + /******************************************************************************* 698 + * 402 699 * HPET - High Precision Event Timer table 403 700 * 404 701 ******************************************************************************/ ··· 765 370 #define ACPI_HPET_PAGE_PROTECT_64 (1<<2) /* 02: 64KB page protected */ 766 371 767 372 /*! [End] no source code translation !*/ 373 + 374 + /******************************************************************************* 375 + * 376 + * IBFT - Boot Firmware Table 377 + * 378 + ******************************************************************************/ 379 + 380 + struct acpi_table_ibft { 381 + struct acpi_table_header header; /* Common ACPI table header */ 382 + u8 reserved[12]; 383 + }; 384 + 385 + /* IBFT common subtable header */ 386 + 387 + struct acpi_ibft_header { 388 + u8 type; 389 + u8 version; 390 + u16 length; 391 + u8 index; 392 + u8 flags; 393 + }; 394 + 395 + /* Values for Type field above */ 396 + 397 + enum acpi_ibft_type { 398 + ACPI_IBFT_TYPE_NOT_USED = 0, 399 + ACPI_IBFT_TYPE_CONTROL = 1, 400 + ACPI_IBFT_TYPE_INITIATOR = 2, 401 + ACPI_IBFT_TYPE_NIC = 3, 402 + ACPI_IBFT_TYPE_TARGET = 4, 403 + ACPI_IBFT_TYPE_EXTENSIONS = 5, 404 + ACPI_IBFT_TYPE_RESERVED = 6 /* 6 and greater are reserved */ 405 + }; 406 + 407 + /* IBFT subtables */ 408 + 409 + struct 
acpi_ibft_control { 410 + struct acpi_ibft_header header; 411 + u16 extensions; 412 + u16 initiator_offset; 413 + u16 nic0_offset; 414 + u16 target0_offset; 415 + u16 nic1_offset; 416 + u16 target1_offset; 417 + }; 418 + 419 + struct acpi_ibft_initiator { 420 + struct acpi_ibft_header header; 421 + u8 sns_server[16]; 422 + u8 slp_server[16]; 423 + u8 primary_server[16]; 424 + u8 secondary_server[16]; 425 + u16 name_length; 426 + u16 name_offset; 427 + }; 428 + 429 + struct acpi_ibft_nic { 430 + struct acpi_ibft_header header; 431 + u8 ip_address[16]; 432 + u8 subnet_mask_prefix; 433 + u8 origin; 434 + u8 gateway[16]; 435 + u8 primary_dns[16]; 436 + u8 secondary_dns[16]; 437 + u8 dhcp[16]; 438 + u16 vlan; 439 + u8 mac_address[6]; 440 + u16 pci_address; 441 + u16 name_length; 442 + u16 name_offset; 443 + }; 444 + 445 + struct acpi_ibft_target { 446 + struct acpi_ibft_header header; 447 + u8 target_ip_address[16]; 448 + u16 target_ip_socket; 449 + u8 target_boot_lun[8]; 450 + u8 chap_type; 451 + u8 nic_association; 452 + u16 target_name_length; 453 + u16 target_name_offset; 454 + u16 chap_name_length; 455 + u16 chap_name_offset; 456 + u16 chap_secret_length; 457 + u16 chap_secret_offset; 458 + u16 reverse_chap_name_length; 459 + u16 reverse_chap_name_offset; 460 + u16 reverse_chap_secret_length; 461 + u16 reverse_chap_secret_offset; 462 + }; 768 463 769 464 /******************************************************************************* 770 465 * ··· 1178 693 u16 reserved; 1179 694 u32 max_log_length; /* Maximum length for the event log area */ 1180 695 u64 log_address; /* Address of the event log area */ 696 + }; 697 + 698 + /******************************************************************************* 699 + * 700 + * UEFI - UEFI Boot optimization Table 701 + * 702 + ******************************************************************************/ 703 + 704 + struct acpi_table_uefi { 705 + struct acpi_table_header header; /* Common ACPI table header */ 706 + u8 
identifier[16]; /* UUID identifier */ 707 + u16 data_offset; /* Offset of remaining data in table */ 708 + u8 data; 709 + }; 710 + 711 + /******************************************************************************* 712 + * 713 + * WDAT - Watchdog Action Table 714 + * 715 + ******************************************************************************/ 716 + 717 + struct acpi_table_wdat { 718 + struct acpi_table_header header; /* Common ACPI table header */ 719 + u32 header_length; /* Watchdog Header Length */ 720 + u16 pci_segment; /* PCI Segment number */ 721 + u8 pci_bus; /* PCI Bus number */ 722 + u8 pci_device; /* PCI Device number */ 723 + u8 pci_function; /* PCI Function number */ 724 + u8 reserved[3]; 725 + u32 timer_period; /* Period of one timer count (msec) */ 726 + u32 max_count; /* Maximum counter value supported */ 727 + u32 min_count; /* Minimum counter value */ 728 + u8 flags; 729 + u8 reserved2[3]; 730 + u32 entries; /* Number of watchdog entries that follow */ 731 + }; 732 + 733 + /* WDAT Instruction Entries (actions) */ 734 + 735 + struct acpi_wdat_entry { 736 + struct acpi_whea_header whea_header; /* Common header for WHEA tables */ 737 + }; 738 + 739 + /* Values for Action field above */ 740 + 741 + enum acpi_wdat_actions { 742 + ACPI_WDAT_RESET = 1, 743 + ACPI_WDAT_GET_CURRENT_COUNTDOWN = 4, 744 + ACPI_WDAT_GET_COUNTDOWN = 5, 745 + ACPI_WDAT_SET_COUNTDOWN = 6, 746 + ACPI_WDAT_GET_RUNNING_STATE = 8, 747 + ACPI_WDAT_SET_RUNNING_STATE = 9, 748 + ACPI_WDAT_GET_STOPPED_STATE = 10, 749 + ACPI_WDAT_SET_STOPPED_STATE = 11, 750 + ACPI_WDAT_GET_REBOOT = 16, 751 + ACPI_WDAT_SET_REBOOT = 17, 752 + ACPI_WDAT_GET_SHUTDOWN = 18, 753 + ACPI_WDAT_SET_SHUTDOWN = 19, 754 + ACPI_WDAT_GET_STATUS = 32, 755 + ACPI_WDAT_SET_STATUS = 33, 756 + ACPI_WDAT_ACTION_RESERVED = 34 /* 34 and greater are reserved */ 757 + }; 758 + 759 + /* Values for Instruction field above */ 760 + 761 + enum acpi_wdat_instructions { 762 + ACPI_WDAT_READ_VALUE = 0, 763 + 
ACPI_WDAT_READ_COUNTDOWN = 1, 764 + ACPI_WDAT_WRITE_VALUE = 2, 765 + ACPI_WDAT_WRITE_COUNTDOWN = 3, 766 + ACPI_WDAT_INSTRUCTION_RESERVED = 4, /* 4 and greater are reserved */ 767 + ACPI_WDAT_PRESERVE_REGISTER = 0x80 /* Except for this value */ 1181 768 }; 1182 769 1183 770 /*******************************************************************************
+51 -42
include/acpi/actypes.h
··· 5 5 *****************************************************************************/
6 6
7 7 /*
8 - * Copyright (C) 2000 - 2007, R. Byron Moore
8 + * Copyright (C) 2000 - 2008, Intel Corp.
9 9 * All rights reserved.
10 10 *
11 11 * Redistribution and use in source and binary forms, with or without
··· 323 323 #define acpi_semaphore void *
324 324
325 325 /*
326 - * Acpi integer width. In ACPI version 1, integers are
327 - * 32 bits. In ACPI version 2, integers are 64 bits.
328 - * Note that this pertains to the ACPI integer type only, not
329 - * other integers used in the implementation of the ACPI CA
326 + * Acpi integer width. In ACPI version 1, integers are 32 bits. In ACPI
327 + * version 2, integers are 64 bits. Note that this pertains to the ACPI integer
328 + * type only, not other integers used in the implementation of the ACPI CA
330 329 * subsystem.
331 330 */
332 - #ifdef ACPI_NO_INTEGER64_SUPPORT
333 -
334 - /* 32-bit integers only, no 64-bit support */
335 -
336 - typedef u32 acpi_integer;
337 - #define ACPI_INTEGER_MAX ACPI_UINT32_MAX
338 - #define ACPI_INTEGER_BIT_SIZE 32
339 - #define ACPI_MAX_DECIMAL_DIGITS 10 /* 2^32 = 4,294,967,296 */
340 -
341 - #define ACPI_USE_NATIVE_DIVIDE /* Use compiler native 32-bit divide */
342 -
343 - #else
344 -
345 - /* 64-bit integers */
346 -
347 331 typedef unsigned long long acpi_integer;
348 332 #define ACPI_INTEGER_MAX ACPI_UINT64_MAX
349 333 #define ACPI_INTEGER_BIT_SIZE 64
··· 335 351
336 352 #if ACPI_MACHINE_WIDTH == 64
337 353 #define ACPI_USE_NATIVE_DIVIDE /* Use compiler native 64-bit divide */
338 - #endif
339 354 #endif
340 355
341 356 #define ACPI_MAX64_DECIMAL_DIGITS 20
··· 402 419 /*
403 420 * Standard notify values
404 421 */
405 - #define ACPI_NOTIFY_BUS_CHECK (u8) 0
406 - #define ACPI_NOTIFY_DEVICE_CHECK (u8) 1
407 - #define ACPI_NOTIFY_DEVICE_WAKE (u8) 2
408 - #define ACPI_NOTIFY_EJECT_REQUEST (u8) 3
409 - #define ACPI_NOTIFY_DEVICE_CHECK_LIGHT (u8) 4
410 - #define ACPI_NOTIFY_FREQUENCY_MISMATCH (u8) 5
411 - #define ACPI_NOTIFY_BUS_MODE_MISMATCH (u8) 6
412 - #define ACPI_NOTIFY_POWER_FAULT (u8) 7
422 + #define ACPI_NOTIFY_BUS_CHECK (u8) 0x00
423 + #define ACPI_NOTIFY_DEVICE_CHECK (u8) 0x01
424 + #define ACPI_NOTIFY_DEVICE_WAKE (u8) 0x02
425 + #define ACPI_NOTIFY_EJECT_REQUEST (u8) 0x03
426 + #define ACPI_NOTIFY_DEVICE_CHECK_LIGHT (u8) 0x04
427 + #define ACPI_NOTIFY_FREQUENCY_MISMATCH (u8) 0x05
428 + #define ACPI_NOTIFY_BUS_MODE_MISMATCH (u8) 0x06
429 + #define ACPI_NOTIFY_POWER_FAULT (u8) 0x07
430 + #define ACPI_NOTIFY_CAPABILITIES_CHECK (u8) 0x08
431 + #define ACPI_NOTIFY_DEVICE_PLD_CHECK (u8) 0x09
432 + #define ACPI_NOTIFY_RESERVED (u8) 0x0A
433 + #define ACPI_NOTIFY_LOCALITY_UPDATE (u8) 0x0B
434 +
435 + #define ACPI_NOTIFY_MAX 0x0B
413 436
414 437 /*
415 438 * Types associated with ACPI names and objects. The first group of
··· 482 493
483 494 #define ACPI_TYPE_INVALID 0x1E
484 495 #define ACPI_TYPE_NOT_FOUND 0xFF
496 +
497 + #define ACPI_NUM_NS_TYPES (ACPI_TYPE_INVALID + 1)
485 498
486 499 /*
487 500 * All I/O
··· 590 599
591 600 #define ACPI_SYSTEM_NOTIFY 0x1
592 601 #define ACPI_DEVICE_NOTIFY 0x2
593 - #define ACPI_ALL_NOTIFY 0x3
602 + #define ACPI_ALL_NOTIFY (ACPI_SYSTEM_NOTIFY | ACPI_DEVICE_NOTIFY)
594 603 #define ACPI_MAX_NOTIFY_HANDLER_TYPE 0x3
595 604
596 605 #define ACPI_MAX_SYS_NOTIFY 0x7f
··· 645 654 /*
646 655 * External ACPI object definition
647 656 */
657 +
658 + /*
659 + * Note: Type == ACPI_TYPE_ANY (0) is used to indicate a NULL package element
660 + * or an unresolved named reference.
661 + */
648 662 union acpi_object {
649 663 acpi_object_type type; /* See definition of acpi_ns_type for values */
650 664 struct {
651 - acpi_object_type type;
665 + acpi_object_type type; /* ACPI_TYPE_INTEGER */
652 666 acpi_integer value; /* The actual number */
653 667 } integer;
654 668
655 669 struct {
656 - acpi_object_type type;
670 + acpi_object_type type; /* ACPI_TYPE_STRING */
657 671 u32 length; /* # of bytes in string, excluding trailing null */
658 672 char *pointer; /* points to the string value */
659 673 } string;
660 674
661 675 struct {
662 - acpi_object_type type;
676 + acpi_object_type type; /* ACPI_TYPE_BUFFER */
663 677 u32 length; /* # of bytes in buffer */
664 678 u8 *pointer; /* points to the buffer */
665 679 } buffer;
666 680
667 681 struct {
668 - acpi_object_type type;
669 - u32 fill1;
670 - acpi_handle handle; /* object reference */
671 - } reference;
672 -
673 - struct {
674 - acpi_object_type type;
682 + acpi_object_type type; /* ACPI_TYPE_PACKAGE */
675 683 u32 count; /* # of elements in package */
676 684 union acpi_object *elements; /* Pointer to an array of ACPI_OBJECTs */
677 685 } package;
678 686
679 687 struct {
680 - acpi_object_type type;
688 + acpi_object_type type; /* ACPI_TYPE_LOCAL_REFERENCE */
689 + acpi_object_type actual_type; /* Type associated with the Handle */
690 + acpi_handle handle; /* object reference */
691 + } reference;
692 +
693 + struct {
694 + acpi_object_type type; /* ACPI_TYPE_PROCESSOR */
681 695 u32 proc_id;
682 696 acpi_io_address pblk_address;
683 697 u32 pblk_length;
684 698 } processor;
685 699
686 700 struct {
687 - acpi_object_type type;
701 + acpi_object_type type; /* ACPI_TYPE_POWER */
688 702 u32 system_level;
689 703 u32 resource_order;
690 704 } power_resource;
··· 743 747 u32 debug_layer;
744 748 };
745 749
750 + /* Table Event Types */
751 +
752 + #define ACPI_TABLE_EVENT_LOAD 0x0
753 + #define ACPI_TABLE_EVENT_UNLOAD 0x1
754 + #define ACPI_NUM_TABLE_EVENTS 2
755 +
746 756 /*
747 757 * Types specific to the OS service interfaces
748 758 */
··· 777 775 acpi_name name,
778 776 u16 opcode,
779 777 u32 aml_offset, void *context);
778 +
779 + /* Table Event handler (Load, load_table etc) and types */
780 +
781 + typedef
782 + acpi_status(*acpi_tbl_handler) (u32 event, void *table, void *context);
780 783
781 784 /* Address Spaces (For Operation Regions) */
782 785
··· 997 990 * Structures used to describe device resources
998 991 */
999 992 struct acpi_resource_irq {
993 + u8 descriptor_length;
1000 994 u8 triggering;
1001 995 u8 polarity;
1002 996 u8 sharable;
··· 1014 1006 };
1015 1007
1016 1008 struct acpi_resource_start_dependent {
1009 + u8 descriptor_length;
1017 1010 u8 compatibility_priority;
1018 1011 u8 performance_robustness;
1019 1012 };
+3 -1
include/acpi/acutils.h
··· 5 5 *****************************************************************************/
6 6
7 7 /*
8 - * Copyright (C) 2000 - 2007, R. Byron Moore
8 + * Copyright (C) 2000 - 2008, Intel Corp.
9 9 * All rights reserved.
10 10 *
11 11 * Redistribution and use in source and binary forms, with or without
··· 115 115 #if defined(ACPI_DEBUG_OUTPUT) || defined(ACPI_DEBUGGER)
116 116
117 117 char *acpi_ut_get_mutex_name(u32 mutex_id);
118 +
119 + const char *acpi_ut_get_notify_name(u32 notify_value);
118 120
119 121 #endif
120 122
+1 -1
include/acpi/amlcode.h
··· 7 7 *****************************************************************************/
8 8
9 9 /*
10 - * Copyright (C) 2000 - 2007, R. Byron Moore
10 + * Copyright (C) 2000 - 2008, Intel Corp.
11 11 * All rights reserved.
12 12 *
13 13 * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/amlresrc.h
··· 6 6 *****************************************************************************/
7 7
8 8 /*
9 - * Copyright (C) 2000 - 2007, R. Byron Moore
9 + * Copyright (C) 2000 - 2008, Intel Corp.
10 10 * All rights reserved.
11 11 *
12 12 * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/platform/acenv.h
··· 5 5 *****************************************************************************/
6 6
7 7 /*
8 - * Copyright (C) 2000 - 2007, R. Byron Moore
8 + * Copyright (C) 2000 - 2008, Intel Corp.
9 9 * All rights reserved.
10 10 *
11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/platform/acgcc.h
··· 5 5 *****************************************************************************/
6 6
7 7 /*
8 - * Copyright (C) 2000 - 2007, R. Byron Moore
8 + * Copyright (C) 2000 - 2008, Intel Corp.
9 9 * All rights reserved.
10 10 *
11 11 * Redistribution and use in source and binary forms, with or without
+1 -1
include/acpi/platform/aclinux.h
··· 5 5 *****************************************************************************/
6 6
7 7 /*
8 - * Copyright (C) 2000 - 2007, R. Byron Moore
8 + * Copyright (C) 2000 - 2008, Intel Corp.
9 9 * All rights reserved.
10 10 *
11 11 * Redistribution and use in source and binary forms, with or without
-10
include/linux/isapnp.h
··· 26 26 #include <linux/pnp.h>
27 27
28 28 /*
29 - * Configuration registers (TODO: change by specification)
30 - */
31 -
32 - #define ISAPNP_CFG_ACTIVATE 0x30 /* byte */
33 - #define ISAPNP_CFG_MEM 0x40 /* 4 * dword */
34 - #define ISAPNP_CFG_PORT 0x60 /* 8 * word */
35 - #define ISAPNP_CFG_IRQ 0x70 /* 2 * word */
36 - #define ISAPNP_CFG_DMA 0x74 /* 2 * byte */
37 -
38 - /*
39 29 *
40 30 */
41 31
+107 -95
include/linux/pnp.h
··· 11 11 #include <linux/errno.h>
12 12 #include <linux/mod_devicetable.h>
13 13
14 - #define PNP_MAX_PORT 40
15 - #define PNP_MAX_MEM 24
16 - #define PNP_MAX_IRQ 2
17 - #define PNP_MAX_DMA 2
18 14 #define PNP_NAME_LEN 50
19 15
20 16 struct pnp_protocol;
21 17 struct pnp_dev;
18 + struct pnp_resource_table;
22 19
23 20 /*
24 21 * Resource Management
25 22 */
23 + struct resource *pnp_get_resource(struct pnp_dev *, unsigned int, unsigned int);
26 24
27 - /* Use these instead of directly reading pnp_dev to get resource information */
28 - #define pnp_port_start(dev,bar) ((dev)->res.port_resource[(bar)].start)
29 - #define pnp_port_end(dev,bar) ((dev)->res.port_resource[(bar)].end)
30 - #define pnp_port_flags(dev,bar) ((dev)->res.port_resource[(bar)].flags)
31 - #define pnp_port_valid(dev,bar) \
32 - ((pnp_port_flags((dev),(bar)) & (IORESOURCE_IO | IORESOURCE_UNSET)) \
33 - == IORESOURCE_IO)
34 - #define pnp_port_len(dev,bar) \
35 - ((pnp_port_start((dev),(bar)) == 0 && \
36 - pnp_port_end((dev),(bar)) == \
37 - pnp_port_start((dev),(bar))) ? 0 : \
38 - \
39 - (pnp_port_end((dev),(bar)) - \
40 - pnp_port_start((dev),(bar)) + 1))
25 + static inline int pnp_resource_valid(struct resource *res)
26 + {
27 + if (res && !(res->flags & IORESOURCE_UNSET))
28 + return 1;
29 + return 0;
30 + }
41 31
42 - #define pnp_mem_start(dev,bar) ((dev)->res.mem_resource[(bar)].start)
43 - #define pnp_mem_end(dev,bar) ((dev)->res.mem_resource[(bar)].end)
44 - #define pnp_mem_flags(dev,bar) ((dev)->res.mem_resource[(bar)].flags)
45 - #define pnp_mem_valid(dev,bar) \
46 - ((pnp_mem_flags((dev),(bar)) & (IORESOURCE_MEM | IORESOURCE_UNSET)) \
47 - == IORESOURCE_MEM)
48 - #define pnp_mem_len(dev,bar) \
49 - ((pnp_mem_start((dev),(bar)) == 0 && \
50 - pnp_mem_end((dev),(bar)) == \
51 - pnp_mem_start((dev),(bar))) ? 0 : \
52 - \
53 - (pnp_mem_end((dev),(bar)) - \
54 - pnp_mem_start((dev),(bar)) + 1))
32 + static inline resource_size_t pnp_resource_len(struct resource *res)
33 + {
34 + if (res->start == 0 && res->end == 0)
35 + return 0;
36 + return res->end - res->start + 1;
37 + }
55 38
56 - #define pnp_irq(dev,bar) ((dev)->res.irq_resource[(bar)].start)
57 - #define pnp_irq_flags(dev,bar) ((dev)->res.irq_resource[(bar)].flags)
58 - #define pnp_irq_valid(dev,bar) \
59 - ((pnp_irq_flags((dev),(bar)) & (IORESOURCE_IRQ | IORESOURCE_UNSET)) \
60 - == IORESOURCE_IRQ)
61 39
62 - #define pnp_dma(dev,bar) ((dev)->res.dma_resource[(bar)].start)
63 - #define pnp_dma_flags(dev,bar) ((dev)->res.dma_resource[(bar)].flags)
64 - #define pnp_dma_valid(dev,bar) \
65 - ((pnp_dma_flags((dev),(bar)) & (IORESOURCE_DMA | IORESOURCE_UNSET)) \
66 - == IORESOURCE_DMA)
40 + static inline resource_size_t pnp_port_start(struct pnp_dev *dev,
41 + unsigned int bar)
42 + {
43 + return pnp_get_resource(dev, IORESOURCE_IO, bar)->start;
44 + }
45 +
46 + static inline resource_size_t pnp_port_end(struct pnp_dev *dev,
47 + unsigned int bar)
48 + {
49 + return pnp_get_resource(dev, IORESOURCE_IO, bar)->end;
50 + }
51 +
52 + static inline unsigned long pnp_port_flags(struct pnp_dev *dev,
53 + unsigned int bar)
54 + {
55 + return pnp_get_resource(dev, IORESOURCE_IO, bar)->flags;
56 + }
57 +
58 + static inline int pnp_port_valid(struct pnp_dev *dev, unsigned int bar)
59 + {
60 + return pnp_resource_valid(pnp_get_resource(dev, IORESOURCE_IO, bar));
61 + }
62 +
63 + static inline resource_size_t pnp_port_len(struct pnp_dev *dev,
64 + unsigned int bar)
65 + {
66 + return pnp_resource_len(pnp_get_resource(dev, IORESOURCE_IO, bar));
67 + }
68 +
69 +
70 + static inline resource_size_t pnp_mem_start(struct pnp_dev *dev,
71 + unsigned int bar)
72 + {
73 + return pnp_get_resource(dev, IORESOURCE_MEM, bar)->start;
74 + }
75 +
76 + static inline resource_size_t pnp_mem_end(struct pnp_dev *dev,
77 + unsigned int bar)
78 + {
79 + return pnp_get_resource(dev, IORESOURCE_MEM, bar)->end;
80 + }
81 +
82 + static inline unsigned long pnp_mem_flags(struct pnp_dev *dev, unsigned int bar)
83 + {
84 + return pnp_get_resource(dev, IORESOURCE_MEM, bar)->flags;
85 + }
86 +
87 + static inline int pnp_mem_valid(struct pnp_dev *dev, unsigned int bar)
88 + {
89 + return pnp_resource_valid(pnp_get_resource(dev, IORESOURCE_MEM, bar));
90 + }
91 +
92 + static inline resource_size_t pnp_mem_len(struct pnp_dev *dev,
93 + unsigned int bar)
94 + {
95 + return pnp_resource_len(pnp_get_resource(dev, IORESOURCE_MEM, bar));
96 + }
97 +
98 +
99 + static inline resource_size_t pnp_irq(struct pnp_dev *dev, unsigned int bar)
100 + {
101 + return pnp_get_resource(dev, IORESOURCE_IRQ, bar)->start;
102 + }
103 +
104 + static inline unsigned long pnp_irq_flags(struct pnp_dev *dev, unsigned int bar)
105 + {
106 + return pnp_get_resource(dev, IORESOURCE_IRQ, bar)->flags;
107 + }
108 +
109 + static inline int pnp_irq_valid(struct pnp_dev *dev, unsigned int bar)
110 + {
111 + return pnp_resource_valid(pnp_get_resource(dev, IORESOURCE_IRQ, bar));
112 + }
113 +
114 +
115 + static inline resource_size_t pnp_dma(struct pnp_dev *dev, unsigned int bar)
116 + {
117 + return pnp_get_resource(dev, IORESOURCE_DMA, bar)->start;
118 + }
119 +
120 + static inline unsigned long pnp_dma_flags(struct pnp_dev *dev, unsigned int bar)
121 + {
122 + return pnp_get_resource(dev, IORESOURCE_DMA, bar)->flags;
123 + }
124 +
125 + static inline int pnp_dma_valid(struct pnp_dev *dev, unsigned int bar)
126 + {
127 + return pnp_resource_valid(pnp_get_resource(dev, IORESOURCE_DMA, bar));
128 + }
129 +
67 130
68 131 #define PNP_PORT_FLAG_16BITADDR (1<<0)
69 132 #define PNP_PORT_FLAG_FIXED (1<<1)
··· 177 114 struct pnp_dma *dma; /* first DMA */
178 115 struct pnp_mem *mem; /* first memory resource */
179 116 struct pnp_option *next; /* used to chain dependent resources */
180 - };
181 -
182 - struct pnp_resource_table {
183 - struct resource port_resource[PNP_MAX_PORT];
184 - struct resource mem_resource[PNP_MAX_MEM];
185 - struct resource dma_resource[PNP_MAX_DMA];
186 - struct resource irq_resource[PNP_MAX_IRQ];
187 117 };
188 118
189 119 /*
··· 248 192 int capabilities;
249 193 struct pnp_option *independent;
250 194 struct pnp_option *dependent;
251 - struct pnp_resource_table res;
195 + struct pnp_resource_table *res;
252 196
253 197 char name[PNP_NAME_LEN]; /* contains a human-readable name */
254 - unsigned short regs; /* ISAPnP: supported registers */
255 198 int flags; /* used by protocols */
256 199 struct proc_dir_entry *procent; /* device entry in /proc/bus/isapnp */
257 200 void *data;
··· 381 326 char *name;
382 327
383 328 /* resource control functions */
384 - int (*get) (struct pnp_dev *dev, struct pnp_resource_table *res);
385 - int (*set) (struct pnp_dev *dev, struct pnp_resource_table *res);
329 + int (*get) (struct pnp_dev *dev);
330 + int (*set) (struct pnp_dev *dev);
386 331 int (*disable) (struct pnp_dev *dev);
387 332
388 333 /* protocol specific suspend/resume */
··· 411 356 #if defined(CONFIG_PNP)
412 357
413 358 /* device management */
414 - int pnp_register_protocol(struct pnp_protocol *protocol);
415 - void pnp_unregister_protocol(struct pnp_protocol *protocol);
416 - int pnp_add_device(struct pnp_dev *dev);
417 359 int pnp_device_attach(struct pnp_dev *pnp_dev);
418 360 void pnp_device_detach(struct pnp_dev *pnp_dev);
419 361 extern struct list_head pnp_global;
420 362 extern int pnp_platform_devices;
421 363
422 364 /* multidevice card support */
423 - int pnp_add_card(struct pnp_card *card);
424 - void pnp_remove_card(struct pnp_card *card);
425 - int pnp_add_card_device(struct pnp_card *card, struct pnp_dev *dev);
426 - void pnp_remove_card_device(struct pnp_dev *dev);
427 - int pnp_add_card_id(struct pnp_id *id, struct pnp_card *card);
428 365 struct pnp_dev *pnp_request_card_device(struct pnp_card_link *clink,
429 366 const char *id, struct pnp_dev *from);
430 367 void pnp_release_card_device(struct pnp_dev *dev);
··· 425 378 extern struct list_head pnp_cards;
426 379
427 380 /* resource management */
428 - struct pnp_option *pnp_register_independent_option(struct pnp_dev *dev);
429 - struct pnp_option *pnp_register_dependent_option(struct pnp_dev *dev,
430 - int priority);
431 - int pnp_register_irq_resource(struct pnp_option *option, struct pnp_irq *data);
432 - int pnp_register_dma_resource(struct pnp_option *option, struct pnp_dma *data);
433 - int pnp_register_port_resource(struct pnp_option *option,
434 - struct pnp_port *data);
435 - int pnp_register_mem_resource(struct pnp_option *option, struct pnp_mem *data);
436 - void pnp_init_resource_table(struct pnp_resource_table *table);
437 - int pnp_manual_config_dev(struct pnp_dev *dev, struct pnp_resource_table *res,
438 - int mode);
439 381 int pnp_auto_config_dev(struct pnp_dev *dev);
440 - int pnp_validate_config(struct pnp_dev *dev);
441 382 int pnp_start_dev(struct pnp_dev *dev);
442 383 int pnp_stop_dev(struct pnp_dev *dev);
443 384 int pnp_activate_dev(struct pnp_dev *dev);
444 385 int pnp_disable_dev(struct pnp_dev *dev);
445 - void pnp_resource_change(struct resource *resource, resource_size_t start,
446 - resource_size_t size);
447 386
448 387 /* protocol helpers */
449 388 int pnp_is_active(struct pnp_dev *dev);
450 389 int compare_pnp_id(struct pnp_id *pos, const char *id);
451 - int pnp_add_id(struct pnp_id *id, struct pnp_dev *dev);
452 390 int pnp_register_driver(struct pnp_driver *drv);
453 391 void pnp_unregister_driver(struct pnp_driver *drv);
454 392
455 393 #else
456 394
457 395 /* device management */
458 - static inline int pnp_register_protocol(struct pnp_protocol *protocol) { return -ENODEV; }
459 - static inline void pnp_unregister_protocol(struct pnp_protocol *protocol) { }
460 - static inline int pnp_init_device(struct pnp_dev *dev) { return -ENODEV; }
461 - static inline int pnp_add_device(struct pnp_dev *dev) { return -ENODEV; }
462 396 static inline int pnp_device_attach(struct pnp_dev *pnp_dev) { return -ENODEV; }
463 397 static inline void pnp_device_detach(struct pnp_dev *pnp_dev) { }
464 398
465 399 #define pnp_platform_devices 0
466 400
467 401 /* multidevice card support */
468 - static inline int pnp_add_card(struct pnp_card *card) { return -ENODEV; }
469 - static inline void pnp_remove_card(struct pnp_card *card) { }
470 - static inline int pnp_add_card_device(struct pnp_card *card, struct pnp_dev *dev) { return -ENODEV; }
471 - static inline void pnp_remove_card_device(struct pnp_dev *dev) { }
472 - static inline int pnp_add_card_id(struct pnp_id *id, struct pnp_card *card) { return -ENODEV; }
473 402 static inline struct pnp_dev *pnp_request_card_device(struct pnp_card_link *clink, const char *id, struct pnp_dev *from) { return NULL; }
474 403 static inline void pnp_release_card_device(struct pnp_dev *dev) { }
475 404 static inline int pnp_register_card_driver(struct pnp_card_driver *drv) { return -ENODEV; }
476 405 static inline void pnp_unregister_card_driver(struct pnp_card_driver *drv) { }
477 406
478 407 /* resource management */
479 - static inline struct pnp_option *pnp_register_independent_option(struct pnp_dev *dev) { return NULL; }
480 - static inline struct pnp_option *pnp_register_dependent_option(struct pnp_dev *dev, int priority) { return NULL; }
481 - static inline int pnp_register_irq_resource(struct pnp_option *option, struct pnp_irq *data) { return -ENODEV; }
482 - static inline int pnp_register_dma_resource(struct pnp_option *option, struct pnp_dma *data) { return -ENODEV; }
483 - static inline int pnp_register_port_resource(struct pnp_option *option, struct pnp_port *data) { return -ENODEV; }
484 - static inline int pnp_register_mem_resource(struct pnp_option *option, struct pnp_mem *data) { return -ENODEV; }
485 - static inline void pnp_init_resource_table(struct pnp_resource_table *table) { }
486 - static inline int pnp_manual_config_dev(struct pnp_dev *dev, struct pnp_resource_table *res, int mode) { return -ENODEV; }
487 408 static inline int pnp_auto_config_dev(struct pnp_dev *dev) { return -ENODEV; }
488 - static inline int pnp_validate_config(struct pnp_dev *dev) { return -ENODEV; }
489 409 static inline int pnp_start_dev(struct pnp_dev *dev) { return -ENODEV; }
490 410 static inline int pnp_stop_dev(struct pnp_dev *dev) { return -ENODEV; }
491 411 static inline int pnp_activate_dev(struct pnp_dev *dev) { return -ENODEV; }
492 412 static inline int pnp_disable_dev(struct pnp_dev *dev) { return -ENODEV; }
493 - static inline void pnp_resource_change(struct resource *resource, resource_size_t start, resource_size_t size) { }
494 413
495 414 /* protocol helpers */
496 415 static inline int pnp_is_active(struct pnp_dev *dev) { return 0; }
497 416 static inline int compare_pnp_id(struct pnp_id *pos, const char *id) { return -ENODEV; }
498 - static inline int pnp_add_id(struct pnp_id *id, struct pnp_dev *dev) { return -ENODEV; }
499 417 static inline int pnp_register_driver(struct pnp_driver *drv) { return -ENODEV; }
500 418 static inline void pnp_unregister_driver(struct pnp_driver *drv) { }
501 419
-151
include/linux/pnpbios.h
··· 1 - /*
2 - * Include file for the interface to a PnP BIOS
3 - *
4 - * Original BIOS code (C) 1998 Christian Schmidt (chr.schmidt@tu-bs.de)
5 - * PnP handler parts (c) 1998 Tom Lees <tom@lpsg.demon.co.uk>
6 - * Minor reorganizations by David Hinds <dahinds@users.sourceforge.net>
7 - *
8 - * This program is free software; you can redistribute it and/or modify it
9 - * under the terms of the GNU General Public License as published by the
10 - * Free Software Foundation; either version 2, or (at your option) any
11 - * later version.
12 - *
13 - * This program is distributed in the hope that it will be useful, but
14 - * WITHOUT ANY WARRANTY; without even the implied warranty of
15 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
16 - * General Public License for more details.
17 - *
18 - * You should have received a copy of the GNU General Public License
19 - * along with this program; if not, write to the Free Software
20 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
21 - */
22 -
23 - #ifndef _LINUX_PNPBIOS_H
24 - #define _LINUX_PNPBIOS_H
25 -
26 - #ifdef __KERNEL__
27 -
28 - #include <linux/types.h>
29 - #include <linux/pnp.h>
30 -
31 - /*
32 - * Return codes
33 - */
34 - #define PNP_SUCCESS 0x00
35 - #define PNP_NOT_SET_STATICALLY 0x7f
36 - #define PNP_UNKNOWN_FUNCTION 0x81
37 - #define PNP_FUNCTION_NOT_SUPPORTED 0x82
38 - #define PNP_INVALID_HANDLE 0x83
39 - #define PNP_BAD_PARAMETER 0x84
40 - #define PNP_SET_FAILED 0x85
41 - #define PNP_EVENTS_NOT_PENDING 0x86
42 - #define PNP_SYSTEM_NOT_DOCKED 0x87
43 - #define PNP_NO_ISA_PNP_CARDS 0x88
44 - #define PNP_UNABLE_TO_DETERMINE_DOCK_CAPABILITIES 0x89
45 - #define PNP_CONFIG_CHANGE_FAILED_NO_BATTERY 0x8a
46 - #define PNP_CONFIG_CHANGE_FAILED_RESOURCE_CONFLICT 0x8b
47 - #define PNP_BUFFER_TOO_SMALL 0x8c
48 - #define PNP_USE_ESCD_SUPPORT 0x8d
49 - #define PNP_MESSAGE_NOT_SUPPORTED 0x8e
50 - #define PNP_HARDWARE_ERROR 0x8f
51 -
52 - #define ESCD_SUCCESS 0x00
53 - #define ESCD_IO_ERROR_READING 0x55
54 - #define ESCD_INVALID 0x56
55 - #define ESCD_BUFFER_TOO_SMALL 0x59
56 - #define ESCD_NVRAM_TOO_SMALL 0x5a
57 - #define ESCD_FUNCTION_NOT_SUPPORTED 0x81
58 -
59 - /*
60 - * Events that can be received by "get event"
61 - */
62 - #define PNPEV_ABOUT_TO_CHANGE_CONFIG 0x0001
63 - #define PNPEV_DOCK_CHANGED 0x0002
64 - #define PNPEV_SYSTEM_DEVICE_CHANGED 0x0003
65 - #define PNPEV_CONFIG_CHANGED_FAILED 0x0004
66 - #define PNPEV_UNKNOWN_SYSTEM_EVENT 0xffff
67 - /* 0x8000 through 0xfffe are OEM defined */
68 -
69 - /*
70 - * Messages that should be sent through "send message"
71 - */
72 - #define PNPMSG_OK 0x00
73 - #define PNPMSG_ABORT 0x01
74 - #define PNPMSG_UNDOCK_DEFAULT_ACTION 0x40
75 - #define PNPMSG_POWER_OFF 0x41
76 - #define PNPMSG_PNP_OS_ACTIVE 0x42
77 - #define PNPMSG_PNP_OS_INACTIVE 0x43
78 -
79 - /*
80 - * Plug and Play BIOS flags
81 - */
82 - #define PNPBIOS_NO_DISABLE 0x0001
83 - #define PNPBIOS_NO_CONFIG 0x0002
84 - #define PNPBIOS_OUTPUT 0x0004
85 - #define PNPBIOS_INPUT 0x0008
86 - #define PNPBIOS_BOOTABLE 0x0010
87 - #define PNPBIOS_DOCK 0x0020
88 - #define PNPBIOS_REMOVABLE 0x0040
89 - #define pnpbios_is_static(x) (((x)->flags & 0x0100) == 0x0000)
90 - #define pnpbios_is_dynamic(x) ((x)->flags & 0x0080)
91 -
92 - /*
93 - * Function Parameters
94 - */
95 - #define PNPMODE_STATIC 1
96 - #define PNPMODE_DYNAMIC 0
97 -
98 - /* 0x8000 through 0xffff are OEM defined */
99 -
100 - #pragma pack(1)
101 - struct pnp_dev_node_info {
102 - __u16 no_nodes;
103 - __u16 max_node_size;
104 - };
105 - struct pnp_docking_station_info {
106 - __u32 location_id;
107 - __u32 serial;
108 - __u16 capabilities;
109 - };
110 - struct pnp_isa_config_struc {
111 - __u8 revision;
112 - __u8 no_csns;
113 - __u16 isa_rd_data_port;
114 - __u16 reserved;
115 - };
116 - struct escd_info_struc {
117 - __u16 min_escd_write_size;
118 - __u16 escd_size;
119 - __u32 nv_storage_base;
120 - };
121 - struct pnp_bios_node {
122 - __u16 size;
123 - __u8 handle;
124 - __u32 eisa_id;
125 - __u8 type_code[3];
126 - __u16 flags;
127 - __u8 data[0];
128 - };
129 - #pragma pack()
130 -
131 - #ifdef CONFIG_PNPBIOS
132 -
133 - /* non-exported */
134 - extern struct pnp_dev_node_info node_info;
135 -
136 - extern int pnp_bios_dev_node_info(struct pnp_dev_node_info *data);
137 - extern int pnp_bios_get_dev_node(u8 *nodenum, char config,
138 - struct pnp_bios_node *data);
139 - extern int pnp_bios_set_dev_node(u8 nodenum, char config,
140 - struct pnp_bios_node *data);
141 - extern int pnp_bios_get_stat_res(char *info);
142 - extern int pnp_bios_isapnp_config(struct pnp_isa_config_struc *data);
143 - extern int pnp_bios_escd_info(struct escd_info_struc *data);
144 - extern int pnp_bios_read_escd(char *data, u32 nvram_base);
145 - extern int pnp_bios_dock_station_info(struct pnp_docking_station_info *data);
146 -
147 - #endif /* CONFIG_PNPBIOS */
148 -
149 - #endif /* __KERNEL__ */
150 -
151 - #endif /* _LINUX_PNPBIOS_H */
+25 -14
include/linux/thermal.h
··· 41 41 int (*set_mode) (struct thermal_zone_device *, const char *);
42 42 int (*get_trip_type) (struct thermal_zone_device *, int, char *);
43 43 int (*get_trip_temp) (struct thermal_zone_device *, int, char *);
44 + int (*get_crit_temp) (struct thermal_zone_device *, unsigned long *);
44 45 };
45 46
46 47 struct thermal_cooling_device_ops {
··· 66 65 ((long)t-2732+5)/10 : ((long)t-2732-5)/10)
67 66 #define CELSIUS_TO_KELVIN(t) ((t)*10+2732)
68 67
68 + #if defined(CONFIG_HWMON) || \
69 + (defined(CONFIG_HWMON_MODULE) && defined(CONFIG_THERMAL_MODULE))
70 + /* thermal zone devices with the same type share one hwmon device */
71 + struct thermal_hwmon_device {
72 + char type[THERMAL_NAME_LENGTH];
73 + struct device *device;
74 + int count;
75 + struct list_head tz_list;
76 + struct list_head node;
77 + };
78 +
79 + struct thermal_hwmon_attr {
80 + struct device_attribute attr;
81 + char name[16];
82 + };
83 + #endif
84 +
69 85 struct thermal_zone_device {
70 86 int id;
71 87 char type[THERMAL_NAME_LENGTH];
··· 94 76 struct idr idr;
95 77 struct mutex lock; /* protect cooling devices list */
96 78 struct list_head node;
79 + #if defined(CONFIG_HWMON) || \
80 + (defined(CONFIG_HWMON_MODULE) && defined(CONFIG_THERMAL_MODULE))
81 + struct list_head hwmon_node;
82 + struct thermal_hwmon_device *hwmon;
83 + struct thermal_hwmon_attr temp_input; /* hwmon sys attr */
84 + struct thermal_hwmon_attr temp_crit; /* hwmon sys attr */
85 + #endif
97 86 };
98 87
99 88 struct thermal_zone_device *thermal_zone_device_register(char *, int, void *,
··· 113 88 struct thermal_cooling_device *);
114 89 int thermal_zone_unbind_cooling_device(struct thermal_zone_device *, int,
115 90 struct thermal_cooling_device *);
116 -
117 - #ifdef CONFIG_THERMAL
118 91 struct thermal_cooling_device *thermal_cooling_device_register(char *, void *,
119 92 struct
120 93 thermal_cooling_device_ops
121 94 *);
122 95 void thermal_cooling_device_unregister(struct thermal_cooling_device *);
123 - #else
124 - static inline struct thermal_cooling_device
125 - *thermal_cooling_device_register(char *c, void *v,
126 - struct thermal_cooling_device_ops *t)
127 - {
128 - return NULL;
129 - }
130 - static inline
131 - void thermal_cooling_device_unregister(struct thermal_cooling_device *t)
132 - {
133 - };
134 - #endif
135 96
136 97 #endif /* __THERMAL_H__ */
-10
kernel/power/Kconfig
··· 19 19 will issue the hlt instruction if nothing is to be done, thereby
20 20 sending the processor to sleep and saving power.
21 21
22 - config PM_LEGACY
23 - bool "Legacy Power Management API (DEPRECATED)"
24 - depends on PM
25 - default n
26 - ---help---
27 - Support for pm_register() and friends. This old API is obsoleted
28 - by the driver model.
29 -
30 - If unsure, say N.
31 -
32 22 config PM_DEBUG
33 23 bool "Power Management Debug Support"
34 24 depends on PM
-1
kernel/power/Makefile
··· 4 4 endif
5 5
6 6 obj-y := main.o
7 - obj-$(CONFIG_PM_LEGACY) += pm.o
8 7 obj-$(CONFIG_PM_SLEEP) += process.o console.o
9 8 obj-$(CONFIG_HIBERNATION) += swsusp.o disk.o snapshot.o swap.o user.o
10 9
-205
kernel/power/pm.c
··· 1 - /*
2 - * pm.c - Power management interface
3 - *
4 - * Copyright (C) 2000 Andrew Henroid
5 - *
6 - * This program is free software; you can redistribute it and/or modify
7 - * it under the terms of the GNU General Public License as published by
8 - * the Free Software Foundation; either version 2 of the License, or
9 - * (at your option) any later version.
10 - *
11 - * This program is distributed in the hope that it will be useful,
12 - * but WITHOUT ANY WARRANTY; without even the implied warranty of
13 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 - * GNU General Public License for more details.
15 - *
16 - * You should have received a copy of the GNU General Public License
17 - * along with this program; if not, write to the Free Software
18 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
19 - */
20 - #include <linux/init.h>
21 - #include <linux/module.h>
22 - #include <linux/spinlock.h>
23 - #include <linux/mm.h>
24 - #include <linux/slab.h>
25 - #include <linux/pm.h>
26 - #include <linux/pm_legacy.h>
27 - #include <linux/interrupt.h>
28 - #include <linux/mutex.h>
29 -
30 - /*
31 - * Locking notes:
32 - * pm_devs_lock can be a semaphore providing pm ops are not called
33 - * from an interrupt handler (already a bad idea so no change here). Each
34 - * change must be protected so that an unlink of an entry doesn't clash
35 - * with a pm send - which is permitted to sleep in the current architecture
36 - *
37 - * Module unloads clashing with pm events now work out safely, the module
38 - * unload path will block until the event has been sent. It may well block
39 - * until a resume but that will be fine.
40 - */
41 -
42 - static DEFINE_MUTEX(pm_devs_lock);
43 - static LIST_HEAD(pm_devs);
44 -
45 - /**
46 - * pm_register - register a device with power management
47 - * @type: device type
48 - * @id: device ID
49 - * @callback: callback function
50 - *
51 - * Add a device to the list of devices that wish to be notified about
52 - * power management events. A &pm_dev structure is returned on success,
53 - * on failure the return is %NULL.
54 - *
55 - * The callback function will be called in process context and
56 - * it may sleep.
57 - */
58 -
59 - struct pm_dev *pm_register(pm_dev_t type,
60 - unsigned long id,
61 - pm_callback callback)
62 - {
63 - struct pm_dev *dev = kzalloc(sizeof(struct pm_dev), GFP_KERNEL);
64 - if (dev) {
65 - dev->type = type;
66 - dev->id = id;
67 - dev->callback = callback;
68 -
69 - mutex_lock(&pm_devs_lock);
70 - list_add(&dev->entry, &pm_devs);
71 - mutex_unlock(&pm_devs_lock);
72 - }
73 - return dev;
74 - }
75 -
76 - /**
77 - * pm_send - send request to a single device
78 - * @dev: device to send to
79 - * @rqst: power management request
80 - * @data: data for the callback
81 - *
82 - * Issue a power management request to a given device. The
83 - * %PM_SUSPEND and %PM_RESUME events are handled specially. The
84 - * data field must hold the intended next state. No call is made
85 - * if the state matches.
86 - *
87 - * BUGS: what stops two power management requests occurring in parallel
88 - * and conflicting.
89 - *
90 - * WARNING: Calling pm_send directly is not generally recommended, in
91 - * particular there is no locking against the pm_dev going away. The
92 - * caller must maintain all needed locking or have 'inside knowledge'
93 - * on the safety. Also remember that this function is not locked against
94 - * pm_unregister. This means that you must handle SMP races on callback
95 - * execution and unload yourself.
96 - */
97 -
98 - static int pm_send(struct pm_dev *dev, pm_request_t rqst, void *data)
99 - {
100 - int status = 0;
101 - unsigned long prev_state, next_state;
102 -
103 - if (in_interrupt())
104 - BUG();
105 -
106 - switch (rqst) {
107 - case PM_SUSPEND:
108 - case PM_RESUME:
109 - prev_state = dev->state;
110 - next_state = (unsigned long) data;
111 - if (prev_state != next_state) {
112 - if (dev->callback)
113 - status = (*dev->callback)(dev, rqst, data);
114 - if (!status) {
115 - dev->state = next_state;
116 - dev->prev_state = prev_state;
117 - }
118 - }
119 - else {
120 - dev->prev_state = prev_state;
121 - }
122 - break;
123 - default:
124 - if (dev->callback)
125 - status = (*dev->callback)(dev, rqst, data);
126 - break;
127 - }
128 - return status;
129 - }
130 -
131 - /*
132 - * Undo incomplete request
133 - */
134 - static void pm_undo_all(struct pm_dev *last)
135 - {
136 - struct list_head *entry = last->entry.prev;
137 - while (entry != &pm_devs) {
138 - struct pm_dev *dev = list_entry(entry, struct pm_dev, entry);
139 - if (dev->state != dev->prev_state) {
140 - /* previous state was zero (running) resume or
141 - * previous state was non-zero (suspended) suspend
142 - */
143 - pm_request_t undo = (dev->prev_state
144 - ? PM_SUSPEND:PM_RESUME);
145 - pm_send(dev, undo, (void*) dev->prev_state);
146 - }
147 - entry = entry->prev;
148 - }
149 - }
150 -
151 - /**
152 - * pm_send_all - send request to all managed devices
153 - * @rqst: power management request
154 - * @data: data for the callback
155 - *
156 - * Issue a power management request to a all devices. The
157 - * %PM_SUSPEND events are handled specially. Any device is
158 - * permitted to fail a suspend by returning a non zero (error)
159 - * value from its callback function. If any device vetoes a
160 - * suspend request then all other devices that have suspended
161 - * during the processing of this request are restored to their
162 - * previous state.
163 - *
164 - * WARNING: This function takes the pm_devs_lock. The lock is not dropped until
165 - * the callbacks have completed. This prevents races against pm locking
166 - * functions, races against module unload pm_unregister code. It does
167 - * mean however that you must not issue pm_ functions within the callback
168 - * or you will deadlock and users will hate you.
169 - *
170 - * Zero is returned on success. If a suspend fails then the status
171 - * from the device that vetoes the suspend is returned.
172 - *
173 - * BUGS: what stops two power management requests occurring in parallel
174 - * and conflicting.
175 - */
176 -
177 - int pm_send_all(pm_request_t rqst, void *data)
178 - {
179 - struct list_head *entry;
180 -
181 - mutex_lock(&pm_devs_lock);
182 - entry = pm_devs.next;
183 - while (entry != &pm_devs) {
184 - struct pm_dev *dev = list_entry(entry, struct pm_dev, entry);
185 - if (dev->callback) {
186 - int status = pm_send(dev, rqst, data);
187 - if (status) {
188 - /* return devices to previous state on
189 - * failed suspend request
190 - */
191 - if (rqst == PM_SUSPEND)
192 - pm_undo_all(dev);
193 - mutex_unlock(&pm_devs_lock);
194 - return status;
195 - }
196 - }
197 - entry = entry->next;
198 - }
199 - mutex_unlock(&pm_devs_lock);
200 - return 0;
201 - }
202 -
203 - EXPORT_SYMBOL(pm_register);
204 - EXPORT_SYMBOL(pm_send_all);
205 -