Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pm+acpi-3.15-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull more ACPI and power management updates from Rafael Wysocki:
"These are commits that were not quite ready when I sent the original
pull request for 3.15-rc1 several days ago, but they have spent some
time in linux-next since then and appear to be good to go. All of
them are fixes and cleanups.

Specifics:

- Remaining changes from upstream ACPICA release 20140214 that
introduce code to automatically serialize the execution of methods
that create any named objects, which really cannot be executed in
parallel with each other anyway (previously ACPICA attempted to
address that by aborting methods upon conflict detection, but that
wasn't reliable enough and led to other issues). From Bob Moore
and Lv Zheng.

- intel_pstate fix to use del_timer_sync() instead of del_timer() in
the exit path before freeing the timer structure from Dirk
Brandewie (original patch from Thomas Gleixner).

- cpufreq fix related to system resume from Viresh Kumar.

- Serialization of frequency transitions in cpufreq that involve
PRECHANGE and POSTCHANGE notifications to avoid ordering issues
resulting from race conditions. From Srivatsa S Bhat and Viresh
Kumar.

- Revert of an ACPI processor driver change that was based on a
specific interpretation of the ACPI spec which may not be correct
(the relevant part of the spec appears to be incomplete). From
Hanjun Guo.

- Runtime PM core cleanups and documentation updates from Geert
Uytterhoeven.

- PNP core cleanup from Michael Opdenacker"
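
The intel_pstate fix above addresses a classic teardown race: del_timer() only removes a pending timer, while del_timer_sync() additionally waits for a handler that is already running on another CPU to finish, so freeing the timer's backing structure is safe only after the sync variant returns. A hedged userspace analogue of that pattern, using a pthread in place of the timer softirq (all names here are illustrative stand-ins, not kernel API):

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>
#include <unistd.h>

/* Illustrative stand-in for a timer whose handler runs on another CPU. */
struct demo_timer {
	atomic_int stop;	/* teardown requested */
	atomic_int ticks;	/* work done by the "handler" */
};

static void *handler_thread(void *arg)
{
	struct demo_timer *t = arg;

	do {
		atomic_fetch_add(&t->ticks, 1);	/* touches *t, like a callback */
		usleep(1000);
	} while (!atomic_load(&t->stop));
	return NULL;
}

/*
 * The del_timer_sync() analogue: request stop AND wait for the handler
 * to finish before the caller may free the structure. Returns the final
 * tick count so callers can see the handler really stopped.
 */
static int stop_timer_sync(struct demo_timer *t, pthread_t thr)
{
	atomic_store(&t->stop, 1);
	pthread_join(thr, NULL);	/* without this join, free() below races */
	return atomic_load(&t->ticks);
}

int demo_run(void)
{
	struct demo_timer *t = calloc(1, sizeof(*t));
	pthread_t thr;

	pthread_create(&thr, NULL, handler_thread, t);
	usleep(10000);
	int ticks = stop_timer_sync(t, thr);
	free(t);	/* safe only because of the join above */
	return ticks;
}
```

The join plays the role of del_timer_sync()'s wait: after it returns, no other thread can still be inside the handler, so the free cannot be a use-after-free.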

* tag 'pm+acpi-3.15-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
cpufreq: Make cpufreq_notify_transition & cpufreq_notify_post_transition static
cpufreq: Convert existing drivers to use cpufreq_freq_transition_{begin|end}
cpufreq: Make sure frequency transitions are serialized
intel_pstate: Use del_timer_sync in intel_pstate_cpu_stop
cpufreq: resume drivers before enabling governors
PM / Runtime: Spelling s/competing/completing/
PM / Runtime: s/foo_process_requests/foo_process_next_request/
PM / Runtime: GENERIC_SUBSYS_PM_OPS is gone
PM / Runtime: Correct documented return values for generic PM callbacks
PM / Runtime: Split line longer than 80 characters
PM / Runtime: dev_pm_info.runtime_error is signed
Revert "ACPI / processor: Make it possible to get APIC ID via GIC"
ACPICA: Enable auto-serialization as a default kernel behavior.
ACPICA: Ignore sync_level for methods that have been auto-serialized.
ACPICA: Add additional named objects for the auto-serialize method scan.
ACPICA: Add auto-serialization support for ill-behaved control methods.
ACPICA: Remove global option to serialize all control methods.
PNP: remove deprecated IRQF_DISABLED

+377 -211
+8 -2
Documentation/kernel-parameters.txt
···
     use by PCI
     Format: <irq>,<irq>...
 
+    acpi_no_auto_serialize [HW,ACPI]
+        Disable auto-serialization of AML methods
+        AML control methods that contain the opcodes to create
+        named objects will be marked as "Serialized" by the
+        auto-serialization feature.
+        This feature is enabled by default.
+        This option allows to turn off the feature.
+
     acpi_no_auto_ssdt [HW,ACPI] Disable automatic loading of SSDT
 
     acpica_no_return_repair [HW, ACPI]
···
     acpi_sci= [HW,ACPI] ACPI System Control Interrupt trigger mode
     Format: { level | edge | high | low }
-
-    acpi_serialize [HW,ACPI] force serialization of AML methods
 
     acpi_skip_timer_override [HW,ACPI]
     Recognize and ignore IRQ0/pin2 Interrupt Override.
+13 -16
Documentation/power/runtime_pm.txt
···
       equal to zero); the initial value of it is 1 (i.e. runtime PM is
       initially disabled for all devices)
 
-  unsigned int runtime_error;
+  int runtime_error;
     - if set, there was a fatal error (one of the callbacks returned error code
       as described in Section 2), so the helper funtions will not work until
       this flag is cleared; this is the error code returned by the failing
···
   int pm_runtime_disable(struct device *dev);
     - increment the device's 'power.disable_depth' field (if the value of that
       field was previously zero, this prevents subsystem-level runtime PM
-      callbacks from being run for the device), make sure that all of the pending
-      runtime PM operations on the device are either completed or canceled;
-      returns 1 if there was a resume request pending and it was necessary to
-      execute the subsystem-level resume callback for the device to satisfy that
-      request, otherwise 0 is returned
+      callbacks from being run for the device), make sure that all of the
+      pending runtime PM operations on the device are either completed or
+      canceled; returns 1 if there was a resume request pending and it was
+      necessary to execute the subsystem-level resume callback for the device
+      to satisfy that request, otherwise 0 is returned
 
   int pm_runtime_barrier(struct device *dev);
     - check if there's a resume request pending for the device and resume it
···
   int pm_generic_runtime_suspend(struct device *dev);
     - invoke the ->runtime_suspend() callback provided by the driver of this
-      device and return its result, or return -EINVAL if not defined
+      device and return its result, or return 0 if not defined
 
   int pm_generic_runtime_resume(struct device *dev);
     - invoke the ->runtime_resume() callback provided by the driver of this
-      device and return its result, or return -EINVAL if not defined
+      device and return its result, or return 0 if not defined
 
   int pm_generic_suspend(struct device *dev);
     - if the device has not been suspended at run time, invoke the ->suspend()
···
   int pm_generic_restore_noirq(struct device *dev);
     - invoke the ->restore_noirq() callback provided by the device's driver
 
-These functions can be assigned to the ->runtime_idle(), ->runtime_suspend(),
+These functions are the defaults used by the PM core, if a subsystem doesn't
+provide its own callbacks for ->runtime_idle(), ->runtime_suspend(),
 ->runtime_resume(), ->suspend(), ->suspend_noirq(), ->resume(),
 ->resume_noirq(), ->freeze(), ->freeze_noirq(), ->thaw(), ->thaw_noirq(),
-->poweroff(), ->poweroff_noirq(), ->restore(), ->restore_noirq() callback
-pointers in the subsystem-level dev_pm_ops structures.
-
-If a subsystem wishes to use all of them at the same time, it can simply assign
-the GENERIC_SUBSYS_PM_OPS macro, defined in include/linux/pm.h, to its
-dev_pm_ops structure pointer.
+->poweroff(), ->poweroff_noirq(), ->restore(), ->restore_noirq() in the
+subsystem-level dev_pm_ops structure.
 
 Device drivers that wish to use the same function as a system suspend, freeze,
 poweroff and runtime suspend callback, and similarly for system resume, thaw,
···
 			foo->is_suspended = 0;
 			pm_runtime_mark_last_busy(&foo->dev);
 			if (foo->num_pending_requests > 0)
-				foo_process_requests(foo);
+				foo_process_next_request(foo);
 			unlock(&foo->private_lock);
 			return 0;
 		}
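
The runtime_error documentation fix above is about signedness: the field stores a negative errno value returned by a failing callback, so it must be `int`; with an `unsigned int` a `< 0` check can never fire. A small illustration (the struct and helper names are stand-ins, not the kernel's `struct dev_pm_info`):

```c
#include <assert.h>

#define DEMO_EINVAL 22	/* illustrative errno value */

/* Stand-in fields, not the real struct dev_pm_info */
struct demo_pm_info {
	unsigned int runtime_error_wrong;	/* documented type before the fix */
	int runtime_error;			/* actual (correct) type */
};

/* Record a failing callback's return value, as the PM core does */
static void record_error(struct demo_pm_info *p, int ret)
{
	p->runtime_error_wrong = ret;
	p->runtime_error = ret;
}

static int has_error_wrong(const struct demo_pm_info *p)
{
	return (int)(p->runtime_error_wrong < 0);	/* unsigned: never true */
}

static int has_error(const struct demo_pm_info *p)
{
	return p->runtime_error < 0;	/* signed: detects -DEMO_EINVAL */
}

/* Returns 1 when only the signed field detects the stored error code */
static int demo(void)
{
	struct demo_pm_info p;

	record_error(&p, -DEMO_EINVAL);
	return has_error(&p) && !has_error_wrong(&p);
}
```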
+8 -5
drivers/acpi/acpica/acdispat.h
···
			   struct acpi_walk_state *walk_state);
 
 /*
- * dsload - Parser/Interpreter interface, pass 1 namespace load callbacks
+ * dsload - Parser/Interpreter interface
  */
 acpi_status
 acpi_ds_init_callbacks(struct acpi_walk_state *walk_state, u32 pass_number);
+
+/* dsload - pass 1 namespace load callbacks */
 
 acpi_status
 acpi_ds_load1_begin_op(struct acpi_walk_state *walk_state,
···
 
 acpi_status acpi_ds_load1_end_op(struct acpi_walk_state *walk_state);
 
-/*
- * dsload - Parser/Interpreter interface, pass 2 namespace load callbacks
- */
+/* dsload - pass 2 namespace load callbacks */
+
 acpi_status
 acpi_ds_load2_begin_op(struct acpi_walk_state *walk_state,
		        union acpi_parse_object **out_op);
···
 /*
  * dsmethod - Parser/Interpreter interface - control method parsing
  */
-acpi_status acpi_ds_parse_method(struct acpi_namespace_node *node);
+acpi_status
+acpi_ds_auto_serialize_method(struct acpi_namespace_node *node,
+			      union acpi_operand_object *obj_desc);
 
 acpi_status
 acpi_ds_call_control_method(struct acpi_thread_state *thread,
+6 -5
drivers/acpi/acpica/acglobal.h
···
 ACPI_INIT_GLOBAL(u8, acpi_gbl_enable_interpreter_slack, FALSE);
 
 /*
- * Automatically serialize ALL control methods? Default is FALSE, meaning
- * to use the Serialized/not_serialized method flags on a per method basis.
- * Only change this if the ASL code is poorly written and cannot handle
- * reentrancy even though methods are marked "NotSerialized".
+ * Automatically serialize all methods that create named objects? Default
+ * is TRUE, meaning that all non_serialized methods are scanned once at
+ * table load time to determine those that create named objects. Methods
+ * that create named objects are marked Serialized in order to prevent
+ * possible run-time problems if they are entered by more than one thread.
  */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_all_methods_serialized, FALSE);
+ACPI_INIT_GLOBAL(u8, acpi_gbl_auto_serialize_methods, TRUE);
 
 /*
  * Create the predefined _OSI method in the namespace? Default is TRUE
-4
drivers/acpi/acpica/acinterp.h
···
 
 void acpi_ex_exit_interpreter(void);
 
-void acpi_ex_reacquire_interpreter(void);
-
-void acpi_ex_relinquish_interpreter(void);
-
 u8 acpi_ex_truncate_for32bit_table(union acpi_operand_object *obj_desc);
 
 void acpi_ex_acquire_global_lock(u32 rule);
+2 -1
drivers/acpi/acpica/acobject.h
···
 #define ACPI_METHOD_INTERNAL_ONLY       0x02	/* Method is implemented internally (_OSI) */
 #define ACPI_METHOD_SERIALIZED          0x04	/* Method is serialized */
 #define ACPI_METHOD_SERIALIZED_PENDING  0x08	/* Method is to be marked serialized */
-#define ACPI_METHOD_MODIFIED_NAMESPACE  0x10	/* Method modified the namespace */
+#define ACPI_METHOD_IGNORE_SYNC_LEVEL   0x10	/* Method was auto-serialized at table load time */
+#define ACPI_METHOD_MODIFIED_NAMESPACE  0x20	/* Method modified the namespace */
 
 /******************************************************************************
  *
+3
drivers/acpi/acpica/acstruct.h
···
	u32 table_index;
	u32 object_count;
	u32 method_count;
+	u32 serial_method_count;
+	u32 non_serial_method_count;
+	u32 serialized_method_count;
	u32 device_count;
	u32 op_region_count;
	u32 field_count;
+47 -12
drivers/acpi/acpica/dsinit.c
···
	    (struct acpi_init_walk_info *)context;
	struct acpi_namespace_node *node =
	    (struct acpi_namespace_node *)obj_handle;
-	acpi_object_type type;
	acpi_status status;
+	union acpi_operand_object *obj_desc;
 
	ACPI_FUNCTION_ENTRY();
 
···
 
	/* And even then, we are only interested in a few object types */
 
-	type = acpi_ns_get_type(obj_handle);
-
-	switch (type) {
+	switch (acpi_ns_get_type(obj_handle)) {
	case ACPI_TYPE_REGION:
 
		status = acpi_ds_initialize_region(obj_handle);
···
		break;
 
	case ACPI_TYPE_METHOD:
-
+		/*
+		 * Auto-serialization support. We will examine each method that is
+		 * not_serialized to determine if it creates any Named objects. If
+		 * it does, it will be marked serialized to prevent problems if
+		 * the method is entered by two or more threads and an attempt is
+		 * made to create the same named object twice -- which results in
+		 * an AE_ALREADY_EXISTS exception and method abort.
+		 */
		info->method_count++;
+		obj_desc = acpi_ns_get_attached_object(node);
+		if (!obj_desc) {
+			break;
+		}
+
+		/* Ignore if already serialized */
+
+		if (obj_desc->method.info_flags & ACPI_METHOD_SERIALIZED) {
+			info->serial_method_count++;
+			break;
+		}
+
+		if (acpi_gbl_auto_serialize_methods) {
+
+			/* Parse/scan method and serialize it if necessary */
+
+			acpi_ds_auto_serialize_method(node, obj_desc);
+			if (obj_desc->method.
+			    info_flags & ACPI_METHOD_SERIALIZED) {
+
+				/* Method was just converted to Serialized */
+
+				info->serial_method_count++;
+				info->serialized_method_count++;
+				break;
+			}
+		}
+
+		info->non_serial_method_count++;
		break;
 
	case ACPI_TYPE_DEVICE:
···
 
	ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
			  "**** Starting initialization of namespace objects ****\n"));
-	ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT, "Parsing all Control Methods:"));
 
	/* Set all init info to zero */
 
···
	}
 
	ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT,
-			      "\nTable [%4.4s](id %4.4X) - %u Objects with %u Devices %u Methods %u Regions\n",
+			      "Table [%4.4s] (id %4.4X) - %4u Objects with %3u Devices, "
+			      "%3u Regions, %3u Methods (%u/%u/%u Serial/Non/Cvt)\n",
			      table->signature, owner_id, info.object_count,
-			      info.device_count, info.method_count,
-			      info.op_region_count));
+			      info.device_count, info.op_region_count,
+			      info.method_count, info.serial_method_count,
+			      info.non_serial_method_count,
+			      info.serialized_method_count));
 
-	ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
-			  "%u Methods, %u Regions\n", info.method_count,
-			  info.op_region_count));
+	ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH, "%u Methods, %u Regions\n",
+			  info.method_count, info.op_region_count));
 
	return_ACPI_STATUS(AE_OK);
 }
+151 -5
drivers/acpi/acpica/dsmethod.c
···
 #ifdef ACPI_DISASSEMBLER
 #include "acdisasm.h"
 #endif
+#include "acparser.h"
+#include "amlcode.h"
 
 #define _COMPONENT          ACPI_DISPATCHER
 ACPI_MODULE_NAME("dsmethod")

 /* Local prototypes */
 static acpi_status
+acpi_ds_detect_named_opcodes(struct acpi_walk_state *walk_state,
+			     union acpi_parse_object **out_op);
+
+static acpi_status
 acpi_ds_create_method_mutex(union acpi_operand_object *method_desc);
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ds_auto_serialize_method
+ *
+ * PARAMETERS:  node                - Namespace Node of the method
+ *              obj_desc            - Method object attached to node
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Parse a control method AML to scan for control methods that
+ *              need serialization due to the creation of named objects.
+ *
+ * NOTE: It is a bit of overkill to mark all such methods serialized, since
+ * there is only a problem if the method actually blocks during execution.
+ * A blocking operation is, for example, a Sleep() operation, or any access
+ * to an operation region. However, it is probably not possible to easily
+ * detect whether a method will block or not, so we simply mark all suspicious
+ * methods as serialized.
+ *
+ * NOTE2: This code is essentially a generic routine for parsing a single
+ * control method.
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_ds_auto_serialize_method(struct acpi_namespace_node *node,
+			      union acpi_operand_object *obj_desc)
+{
+	acpi_status status;
+	union acpi_parse_object *op = NULL;
+	struct acpi_walk_state *walk_state;
+
+	ACPI_FUNCTION_TRACE_PTR(ds_auto_serialize_method, node);
+
+	ACPI_DEBUG_PRINT((ACPI_DB_PARSE,
+			  "Method auto-serialization parse [%4.4s] %p\n",
+			  acpi_ut_get_node_name(node), node));
+
+	/* Create/Init a root op for the method parse tree */
+
+	op = acpi_ps_alloc_op(AML_METHOD_OP);
+	if (!op) {
+		return_ACPI_STATUS(AE_NO_MEMORY);
+	}
+
+	acpi_ps_set_name(op, node->name.integer);
+	op->common.node = node;
+
+	/* Create and initialize a new walk state */
+
+	walk_state =
+	    acpi_ds_create_walk_state(node->owner_id, NULL, NULL, NULL);
+	if (!walk_state) {
+		return_ACPI_STATUS(AE_NO_MEMORY);
+	}
+
+	status =
+	    acpi_ds_init_aml_walk(walk_state, op, node,
+				  obj_desc->method.aml_start,
+				  obj_desc->method.aml_length, NULL, 0);
+	if (ACPI_FAILURE(status)) {
+		acpi_ds_delete_walk_state(walk_state);
+		return_ACPI_STATUS(status);
+	}
+
+	walk_state->descending_callback = acpi_ds_detect_named_opcodes;
+
+	/* Parse the method, scan for creation of named objects */
+
+	status = acpi_ps_parse_aml(walk_state);
+	if (ACPI_FAILURE(status)) {
+		return_ACPI_STATUS(status);
+	}
+
+	acpi_ps_delete_parse_tree(op);
+	return_ACPI_STATUS(status);
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ds_detect_named_opcodes
+ *
+ * PARAMETERS:  walk_state          - Current state of the parse tree walk
+ *              out_op              - Unused, required for parser interface
+ *
+ * RETURN:      Status
+ *
+ * DESCRIPTION: Descending callback used during the loading of ACPI tables.
+ *              Currently used to detect methods that must be marked serialized
+ *              in order to avoid problems with the creation of named objects.
+ *
+ ******************************************************************************/
+
+static acpi_status
+acpi_ds_detect_named_opcodes(struct acpi_walk_state *walk_state,
+			     union acpi_parse_object **out_op)
+{
+
+	ACPI_FUNCTION_NAME(acpi_ds_detect_named_opcodes);
+
+	/* We are only interested in opcodes that create a new name */
+
+	if (!
+	    (walk_state->op_info->
+	     flags & (AML_NAMED | AML_CREATE | AML_FIELD))) {
+		return (AE_OK);
+	}
+
+	/*
+	 * At this point, we know we have a Named object opcode.
+	 * Mark the method as serialized. Later code will create a mutex for
+	 * this method to enforce serialization.
+	 *
+	 * Note, ACPI_METHOD_IGNORE_SYNC_LEVEL flag means that we will ignore the
+	 * Sync Level mechanism for this method, even though it is now serialized.
+	 * Otherwise, there can be conflicts with existing ASL code that actually
+	 * uses sync levels.
+	 */
+	walk_state->method_desc->method.sync_level = 0;
+	walk_state->method_desc->method.info_flags |=
+	    (ACPI_METHOD_SERIALIZED | ACPI_METHOD_IGNORE_SYNC_LEVEL);
+
+	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+			  "Method serialized [%4.4s] %p - [%s] (%4.4X)\n",
+			  walk_state->method_node->name.ascii,
+			  walk_state->method_node, walk_state->op_info->name,
+			  walk_state->opcode));
+
+	/* Abort the parse, no need to examine this method any further */
+
+	return (AE_CTRL_TERMINATE);
+}
 
 /*******************************************************************************
  *
···
  ******************************************************************************/
 
 acpi_status
-acpi_ds_method_error(acpi_status status, struct acpi_walk_state *walk_state)
+acpi_ds_method_error(acpi_status status, struct acpi_walk_state * walk_state)
 {
	ACPI_FUNCTION_ENTRY();
 
···
	/*
	 * The current_sync_level (per-thread) must be less than or equal to
	 * the sync level of the method. This mechanism provides some
-	 * deadlock prevention
+	 * deadlock prevention.
+	 *
+	 * If the method was auto-serialized, we just ignore the sync level
+	 * mechanism, because auto-serialization of methods can interfere
+	 * with ASL code that actually uses sync levels.
	 *
	 * Top-level method invocation has no walk state at this point
	 */
	if (walk_state &&
-	    (walk_state->thread->current_sync_level >
-	     obj_desc->method.mutex->mutex.sync_level)) {
+	    (!(obj_desc->method.
+	       info_flags & ACPI_METHOD_IGNORE_SYNC_LEVEL))
+	    && (walk_state->thread->current_sync_level >
+		obj_desc->method.mutex->mutex.sync_level)) {
		ACPI_ERROR((AE_INFO,
			    "Cannot acquire Mutex for method [%4.4s], current SyncLevel is too large (%u)",
			    acpi_ut_get_node_name(method_node),
···
			method_desc->method.info_flags &=
			    ~ACPI_METHOD_SERIALIZED_PENDING;
			method_desc->method.info_flags |=
-			    ACPI_METHOD_SERIALIZED;
+			    (ACPI_METHOD_SERIALIZED |
+			     ACPI_METHOD_IGNORE_SYNC_LEVEL);
			method_desc->method.sync_level = 0;
		}
 
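
The scan in acpi_ds_detect_named_opcodes() above aborts with AE_CTRL_TERMINATE as soon as one opcode carrying a NAMED, CREATE, or FIELD flag is seen, and marks the whole method Serialized plus sync-level-exempt. A simplified userspace model of that decision (the flag values and table-driven walk are invented for illustration, not ACPICA's actual AML_* encodings):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative flag bits, not ACPICA's actual AML_* values */
#define DEMO_AML_NAMED  0x01
#define DEMO_AML_CREATE 0x02
#define DEMO_AML_FIELD  0x04

#define DEMO_METHOD_SERIALIZED        0x04
#define DEMO_METHOD_IGNORE_SYNC_LEVEL 0x10

/*
 * Walk a method's opcode flags; on the first opcode that creates a
 * named object, mark the method Serialized (and flag it to ignore
 * sync levels) and stop early, mirroring the AE_CTRL_TERMINATE abort.
 * Returns the resulting info_flags.
 */
static unsigned scan_method(const unsigned *op_flags, size_t n)
{
	unsigned info_flags = 0;

	for (size_t i = 0; i < n; i++) {
		if (!(op_flags[i] &
		      (DEMO_AML_NAMED | DEMO_AML_CREATE | DEMO_AML_FIELD)))
			continue;	/* not a name-creating opcode */

		info_flags |= DEMO_METHOD_SERIALIZED |
			      DEMO_METHOD_IGNORE_SYNC_LEVEL;
		break;			/* abort the walk early */
	}
	return info_flags;
}
```

Early termination is the point: the scan only needs a yes/no answer, so there is no reason to finish parsing the method once one name-creating opcode is found.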
+16
drivers/acpi/acpica/dswload.c
···
 {
 
	switch (pass_number) {
+	case 0:
+
+		/* Parse only - caller will setup callbacks */
+
+		walk_state->parse_flags = ACPI_PARSE_LOAD_PASS1 |
+		    ACPI_PARSE_DELETE_TREE | ACPI_PARSE_DISASSEMBLE;
+		walk_state->descending_callback = NULL;
+		walk_state->ascending_callback = NULL;
+		break;
+
	case 1:
+
+		/* Load pass 1 */
 
		walk_state->parse_flags = ACPI_PARSE_LOAD_PASS1 |
		    ACPI_PARSE_DELETE_TREE;
···
 
	case 2:
 
+		/* Load pass 2 */
+
		walk_state->parse_flags = ACPI_PARSE_LOAD_PASS1 |
		    ACPI_PARSE_DELETE_TREE;
		walk_state->descending_callback = acpi_ds_load2_begin_op;
···
		break;
 
	case 3:
+
+		/* Execution pass */
 
 #ifndef ACPI_NO_METHOD_EXECUTION
		walk_state->parse_flags |= ACPI_PARSE_EXECUTE |
+6 -6
drivers/acpi/acpica/exsystem.c
···
 
		/* We must wait, so unlock the interpreter */
 
-		acpi_ex_relinquish_interpreter();
+		acpi_ex_exit_interpreter();
 
		status = acpi_os_wait_semaphore(semaphore, 1, timeout);
 
···
 
		/* Reacquire the interpreter */
 
-		acpi_ex_reacquire_interpreter();
+		acpi_ex_enter_interpreter();
	}
 
	return_ACPI_STATUS(status);
···
 
		/* We must wait, so unlock the interpreter */
 
-		acpi_ex_relinquish_interpreter();
+		acpi_ex_exit_interpreter();
 
		status = acpi_os_acquire_mutex(mutex, timeout);
 
···
 
		/* Reacquire the interpreter */
 
-		acpi_ex_reacquire_interpreter();
+		acpi_ex_enter_interpreter();
	}
 
	return_ACPI_STATUS(status);
···
 
	/* Since this thread will sleep, we must release the interpreter */
 
-	acpi_ex_relinquish_interpreter();
+	acpi_ex_exit_interpreter();
 
	/*
	 * For compatibility with other ACPI implementations and to prevent
···
 
	/* And now we must get the interpreter again */
 
-	acpi_ex_reacquire_interpreter();
+	acpi_ex_enter_interpreter();
	return (AE_OK);
 }
 
+10 -70
drivers/acpi/acpica/exutils.c
···
 
 /*******************************************************************************
  *
- * FUNCTION:    acpi_ex_reacquire_interpreter
- *
- * PARAMETERS:  None
- *
- * RETURN:      None
- *
- * DESCRIPTION: Reacquire the interpreter execution region from within the
- *              interpreter code. Failure to enter the interpreter region is a
- *              fatal system error. Used in conjunction with
- *              relinquish_interpreter
- *
- ******************************************************************************/
-
-void acpi_ex_reacquire_interpreter(void)
-{
-	ACPI_FUNCTION_TRACE(ex_reacquire_interpreter);
-
-	/*
-	 * If the global serialized flag is set, do not release the interpreter,
-	 * since it was not actually released by acpi_ex_relinquish_interpreter.
-	 * This forces the interpreter to be single threaded.
-	 */
-	if (!acpi_gbl_all_methods_serialized) {
-		acpi_ex_enter_interpreter();
-	}
-
-	return_VOID;
-}
-
-/*******************************************************************************
- *
  * FUNCTION:    acpi_ex_exit_interpreter
  *
  * PARAMETERS:  None
···
  *
  * DESCRIPTION: Exit the interpreter execution region. This is the top level
  *              routine used to exit the interpreter when all processing has
- *              been completed.
+ *              been completed, or when the method blocks.
+ *
+ *              Cases where the interpreter is unlocked internally:
+ *              1) Method will be blocked on a Sleep() AML opcode
+ *              2) Method will be blocked on an Acquire() AML opcode
+ *              3) Method will be blocked on a Wait() AML opcode
+ *              4) Method will be blocked to acquire the global lock
+ *              5) Method will be blocked waiting to execute a serialized control
+ *                 method that is currently executing
+ *              6) About to invoke a user-installed opregion handler
  *
  ******************************************************************************/
 
···
	if (ACPI_FAILURE(status)) {
		ACPI_ERROR((AE_INFO,
			    "Could not release AML Interpreter mutex"));
-	}
-
-	return_VOID;
-}
-
-/*******************************************************************************
- *
- * FUNCTION:    acpi_ex_relinquish_interpreter
- *
- * PARAMETERS:  None
- *
- * RETURN:      None
- *
- * DESCRIPTION: Exit the interpreter execution region, from within the
- *              interpreter - before attempting an operation that will possibly
- *              block the running thread.
- *
- *              Cases where the interpreter is unlocked internally
- *              1) Method to be blocked on a Sleep() AML opcode
- *              2) Method to be blocked on an Acquire() AML opcode
- *              3) Method to be blocked on a Wait() AML opcode
- *              4) Method to be blocked to acquire the global lock
- *              5) Method to be blocked waiting to execute a serialized control
- *                 method that is currently executing
- *              6) About to invoke a user-installed opregion handler
- *
- ******************************************************************************/
-
-void acpi_ex_relinquish_interpreter(void)
-{
-	ACPI_FUNCTION_TRACE(ex_relinquish_interpreter);
-
-	/*
-	 * If the global serialized flag is set, do not release the interpreter.
-	 * This forces the interpreter to be single threaded.
-	 */
-	if (!acpi_gbl_all_methods_serialized) {
-		acpi_ex_exit_interpreter();
	}
 
	return_VOID;
+2 -3
drivers/acpi/acpica/nsinit.c
···
			  info.object_count));
 
	ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
-			  "%u Control Methods found\n", info.method_count));
-	ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
-			  "%u Op Regions found\n", info.op_region_count));
+			  "%u Control Methods found\n%u Op Regions found\n",
+			  info.method_count, info.op_region_count));
 
	return_ACPI_STATUS(AE_OK);
 }
+2 -2
drivers/acpi/acpica/nsload.c
···
	 * parse trees.
	 */
	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
-			  "**** Begin Table Method Parsing and Object Initialization\n"));
+			  "**** Begin Table Object Initialization\n"));
 
	status = acpi_ds_initialize_objects(table_index, node);
 
	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
-			  "**** Completed Table Method Parsing and Object Initialization\n"));
+			  "**** Completed Table Object Initialization\n"));
 
	return_ACPI_STATUS(status);
 }
+4
drivers/acpi/acpica/psloop.c
···
				status = AE_OK;
			}
 
+			if (status == AE_CTRL_TERMINATE) {
+				return_ACPI_STATUS(status);
+			}
+
			status =
			    acpi_ps_complete_op(walk_state, &op,
						status);
+5 -2
drivers/acpi/acpica/psobject.c
···
 
	status = walk_state->descending_callback(walk_state, op);
	if (ACPI_FAILURE(status)) {
-		ACPI_EXCEPTION((AE_INFO, status, "During name lookup/catalog"));
+		if (status != AE_CTRL_TERMINATE) {
+			ACPI_EXCEPTION((AE_INFO, status,
+					"During name lookup/catalog"));
+		}
		return_ACPI_STATUS(status);
	}
 
···
	status = acpi_ps_next_parse_state(walk_state, *op, status);
	if (ACPI_FAILURE(status)) {
		if (status == AE_CTRL_PENDING) {
-			return_ACPI_STATUS(AE_CTRL_PARSE_PENDING);
+			status = AE_CTRL_PARSE_PENDING;
		}
		return_ACPI_STATUS(status);
	}
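
The psobject.c change above distinguishes a control status from a genuine failure: AE_CTRL_TERMINATE still unwinds the parse, but it is an intentional early abort and must not be logged as an exception. A hedged sketch of that pattern (status names and the counter are illustrative, not ACPICA's encoded values):

```c
#include <assert.h>

/* Illustrative status codes, not ACPICA's encoded values */
enum demo_status { DEMO_OK, DEMO_ERROR, DEMO_CTRL_TERMINATE };

static int errors_logged;

/* Returns 1 when the status should propagate up and unwind the parse */
static int report_failure(enum demo_status status)
{
	if (status == DEMO_OK)
		return 0;
	if (status != DEMO_CTRL_TERMINATE)
		errors_logged++;	/* only genuine failures are logged */
	return 1;			/* both cases still unwind the parse */
}
```

Keeping the propagation path identical while suppressing the log line is what lets AE_CTRL_TERMINATE double as a silent "stop scanning" signal.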
+10 -6
drivers/acpi/osl.c
···
 
 __setup("acpi_osi=", osi_setup);
 
-/* enable serialization to combat AE_ALREADY_EXISTS errors */
-static int __init acpi_serialize_setup(char *str)
+/*
+ * Disable the auto-serialization of named objects creation methods.
+ *
+ * This feature is enabled by default. It marks the AML control methods
+ * that contain the opcodes to create named objects as "Serialized".
+ */
+static int __init acpi_no_auto_serialize_setup(char *str)
 {
-	printk(KERN_INFO PREFIX "serialize enabled\n");
-
-	acpi_gbl_all_methods_serialized = TRUE;
+	acpi_gbl_auto_serialize_methods = FALSE;
+	pr_info("ACPI: auto-serialization disabled\n");
 
	return 1;
 }
 
-__setup("acpi_serialize", acpi_serialize_setup);
+__setup("acpi_no_auto_serialize", acpi_no_auto_serialize_setup);
 
 /* Check of resource interference between native drivers and ACPI
  * OperationRegions (SystemIO and System Memory only).
-27
drivers/acpi/processor_core.c
···
	return 0;
 }
 
-static int map_gic_id(struct acpi_subtable_header *entry,
-		      int device_declaration, u32 acpi_id, int *apic_id)
-{
-	struct acpi_madt_generic_interrupt *gic =
-	    (struct acpi_madt_generic_interrupt *)entry;
-
-	if (!(gic->flags & ACPI_MADT_ENABLED))
-		return -ENODEV;
-
-	/*
-	 * In the GIC interrupt model, logical processors are
-	 * required to have a Processor Device object in the DSDT,
-	 * so we should check device_declaration here
-	 */
-	if (device_declaration && (gic->uid == acpi_id)) {
-		*apic_id = gic->gic_id;
-		return 0;
-	}
-
-	return -EINVAL;
-}
-
 static int map_madt_entry(int type, u32 acpi_id)
 {
	unsigned long madt_end, entry;
···
		} else if (header->type == ACPI_MADT_TYPE_LOCAL_SAPIC) {
			if (!map_lsapic_id(header, type, acpi_id, &apic_id))
				break;
-		} else if (header->type == ACPI_MADT_TYPE_GENERIC_INTERRUPT) {
-			if (!map_gic_id(header, type, acpi_id, &apic_id))
-				break;
		}
		entry += header->length;
	}
···
		map_lapic_id(header, acpi_id, &apic_id);
	} else if (header->type == ACPI_MADT_TYPE_LOCAL_SAPIC) {
		map_lsapic_id(header, type, acpi_id, &apic_id);
-	} else if (header->type == ACPI_MADT_TYPE_GENERIC_INTERRUPT) {
-		map_gic_id(header, type, acpi_id, &apic_id);
	}
 
 exit:
+1 -1
drivers/base/power/generic_ops.c
···
 EXPORT_SYMBOL_GPL(pm_generic_restore);
 
 /**
- * pm_generic_complete - Generic routine competing a device power transition.
+ * pm_generic_complete - Generic routine completing a device power transition.
  * @dev: Device to handle.
  *
  * Complete a device power transition during a system-wide power transition.
+2 -2
drivers/cpufreq/cpufreq-nforce2.c
···
	pr_debug("Old CPU frequency %d kHz, new %d kHz\n",
		 freqs.old, freqs.new);
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
 
	/* Disable IRQs */
	/* local_irq_save(flags); */
···
 
	/* Enable IRQs */
	/* local_irq_restore(flags); */
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+	cpufreq_freq_transition_end(policy, &freqs, 0);
 
	return 0;
 }
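
Drivers now bracket a frequency change with cpufreq_freq_transition_begin()/cpufreq_freq_transition_end(), as the nforce2 conversion above shows. Begin sleeps until no transition is in flight, then rechecks ownership under a lock before claiming it; the kernel does this with wait_event() plus a spinlock and a goto-recheck. A hedged userspace sketch of the same wait-then-recheck handshake, using the idiomatic pthread equivalent (mutex plus condition variable; all names are stand-ins, not the kernel API):

```c
#include <assert.h>
#include <pthread.h>

/* Userspace stand-in for the policy's transition bookkeeping */
struct demo_policy {
	pthread_mutex_t lock;
	pthread_cond_t done;	/* plays the role of transition_wait */
	int transition_ongoing;
};

/* Begin: wait until no transition is ongoing, then claim ownership. */
static void transition_begin(struct demo_policy *p)
{
	pthread_mutex_lock(&p->lock);
	while (p->transition_ongoing)	/* recheck under the lock */
		pthread_cond_wait(&p->done, &p->lock);
	p->transition_ongoing = 1;
	pthread_mutex_unlock(&p->lock);
	/* the PRECHANGE notification would be sent here */
}

/* End: drop ownership and wake one waiter. */
static void transition_end(struct demo_policy *p)
{
	/* the POSTCHANGE notification would be sent here */
	pthread_mutex_lock(&p->lock);
	p->transition_ongoing = 0;
	pthread_cond_signal(&p->done);
	pthread_mutex_unlock(&p->lock);
}

/* One begin/end cycle; encodes the flag seen mid-transition and after. */
static int demo_cycle(void)
{
	struct demo_policy p = {
		PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
	};

	transition_begin(&p);
	int mid = p.transition_ongoing;	/* 1 while the transition runs */
	transition_end(&p);
	return mid * 10 + p.transition_ongoing;
}
```

The ordering guarantee this buys is the one the pull message describes: a second thread's PRECHANGE can never overtake the first thread's POSTCHANGE, because begin() refuses to claim ownership while a transition is ongoing.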
+47 -14
drivers/cpufreq/cpufreq.c
···
  * function. It is called twice on all CPU frequency changes that have
  * external effects.
  */
-void cpufreq_notify_transition(struct cpufreq_policy *policy,
+static void cpufreq_notify_transition(struct cpufreq_policy *policy,
 		struct cpufreq_freqs *freqs, unsigned int state)
 {
 	for_each_cpu(freqs->cpu, policy->cpus)
 		__cpufreq_notify_transition(policy, freqs, state);
 }
-EXPORT_SYMBOL_GPL(cpufreq_notify_transition);
 
 /* Do post notifications when there are chances that transition has failed */
-void cpufreq_notify_post_transition(struct cpufreq_policy *policy,
+static void cpufreq_notify_post_transition(struct cpufreq_policy *policy,
 		struct cpufreq_freqs *freqs, int transition_failed)
 {
 	cpufreq_notify_transition(policy, freqs, CPUFREQ_POSTCHANGE);
···
 	cpufreq_notify_transition(policy, freqs, CPUFREQ_PRECHANGE);
 	cpufreq_notify_transition(policy, freqs, CPUFREQ_POSTCHANGE);
 }
-EXPORT_SYMBOL_GPL(cpufreq_notify_post_transition);
+
+void cpufreq_freq_transition_begin(struct cpufreq_policy *policy,
+		struct cpufreq_freqs *freqs)
+{
+wait:
+	wait_event(policy->transition_wait, !policy->transition_ongoing);
+
+	spin_lock(&policy->transition_lock);
+
+	if (unlikely(policy->transition_ongoing)) {
+		spin_unlock(&policy->transition_lock);
+		goto wait;
+	}
+
+	policy->transition_ongoing = true;
+
+	spin_unlock(&policy->transition_lock);
+
+	cpufreq_notify_transition(policy, freqs, CPUFREQ_PRECHANGE);
+}
+EXPORT_SYMBOL_GPL(cpufreq_freq_transition_begin);
+
+void cpufreq_freq_transition_end(struct cpufreq_policy *policy,
+		struct cpufreq_freqs *freqs, int transition_failed)
+{
+	if (unlikely(WARN_ON(!policy->transition_ongoing)))
+		return;
+
+	cpufreq_notify_post_transition(policy, freqs, transition_failed);
+
+	policy->transition_ongoing = false;
+
+	wake_up(&policy->transition_wait);
+}
+EXPORT_SYMBOL_GPL(cpufreq_freq_transition_end);
 
 
 /*********************************************************************
···
 
 	INIT_LIST_HEAD(&policy->policy_list);
 	init_rwsem(&policy->rwsem);
+	spin_lock_init(&policy->transition_lock);
+	init_waitqueue_head(&policy->transition_wait);
 
 	return policy;
 
···
 	policy = per_cpu(cpufreq_cpu_data, cpu);
 	read_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
+	cpufreq_freq_transition_end(policy, &freqs, 0);
 }
 
 /**
···
 	cpufreq_suspended = false;
 
 	list_for_each_entry(policy, &cpufreq_policy_list, policy_list) {
-		if (__cpufreq_governor(policy, CPUFREQ_GOV_START)
+		if (cpufreq_driver->resume && cpufreq_driver->resume(policy))
+			pr_err("%s: Failed to resume driver: %p\n", __func__,
+				policy);
+		else if (__cpufreq_governor(policy, CPUFREQ_GOV_START)
 		    || __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS))
 			pr_err("%s: Failed to start governor for policy: %p\n",
 				__func__, policy);
-		else if (cpufreq_driver->resume
-		    && cpufreq_driver->resume(policy))
-			pr_err("%s: Failed to resume driver: %p\n", __func__,
-				policy);
 
 		/*
 		 * schedule call cpufreq_update_policy() for boot CPU, i.e. last
···
 		pr_debug("%s: cpu: %d, oldfreq: %u, new freq: %u\n",
 			 __func__, policy->cpu, freqs.old, freqs.new);
 
-		cpufreq_notify_transition(policy, &freqs,
-				CPUFREQ_PRECHANGE);
+		cpufreq_freq_transition_begin(policy, &freqs);
 	}
 
 	retval = cpufreq_driver->target_index(policy, index);
···
 			__func__, retval);
 
 	if (notify)
-		cpufreq_notify_post_transition(policy, &freqs, retval);
+		cpufreq_freq_transition_end(policy, &freqs, retval);
 	}
 
 out:
+2 -2
drivers/cpufreq/exynos5440-cpufreq.c
···
 	freqs.old = policy->cur;
 	freqs.new = freq_table[index].frequency;
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
 
 	/* Set the target frequency in all C0_3_PSTATE register */
 	for_each_cpu(i, policy->cpus) {
···
 		dev_crit(dvfs_info->dev, "New frequency out of range\n");
 		freqs.new = freqs.old;
 	}
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+	cpufreq_freq_transition_end(policy, &freqs, 0);
 
 	cpufreq_cpu_put(policy);
 	mutex_unlock(&cpufreq_lock);
+2 -2
drivers/cpufreq/gx-suspmod.c
···
 
 	freqs.new = new_khz;
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
 	local_irq_save(flags);
 
 	if (new_khz != stock_freq) {
···
 
 	gx_params->pci_suscfg = suscfg;
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+	cpufreq_freq_transition_end(policy, &freqs, 0);
 
 	pr_debug("suspend modulation w/ duration of ON:%d us, OFF:%d us\n",
 		gx_params->on_duration * 32, gx_params->off_duration * 32);
+2 -2
drivers/cpufreq/integrator-cpufreq.c
···
 		return 0;
 	}
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
 
 	cm_osc = __raw_readl(cm_base + INTEGRATOR_HDR_OSC_OFFSET);
 
···
 	 */
 	set_cpus_allowed(current, cpus_allowed);
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+	cpufreq_freq_transition_end(policy, &freqs, 0);
 
 	return 0;
 }
+1 -1
drivers/cpufreq/intel_pstate.c
···
 
 	pr_info("intel_pstate CPU %d exiting\n", cpu_num);
 
-	del_timer(&all_cpu_data[cpu_num]->timer);
+	del_timer_sync(&all_cpu_data[cpu_num]->timer);
 	intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate);
 	kfree(all_cpu_data[cpu_num]);
 	all_cpu_data[cpu_num] = NULL;
+2 -2
drivers/cpufreq/longhaul.c
···
 	freqs.old = calc_speed(longhaul_get_cpu_mult());
 	freqs.new = speed;
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
 
 	pr_debug("Setting to FSB:%dMHz Mult:%d.%dx (%s)\n",
 		 fsb, mult/10, mult%10, print_speed(speed/1000));
···
 		}
 	}
 	/* Report true CPU frequency */
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+	cpufreq_freq_transition_end(policy, &freqs, 0);
 
 	if (!bm_timeout)
 		printk(KERN_INFO PFX "Warning: Timeout while waiting for "
+2 -2
drivers/cpufreq/pcc-cpufreq.c
···
 
 	freqs.old = policy->cur;
 	freqs.new = target_freq;
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
 
 	input_buffer = 0x1 | (((target_freq * 100)
 		/ (ioread32(&pcch_hdr->nominal) * 1000)) << 8);
···
 	status = ioread16(&pcch_hdr->status);
 	iowrite16(0, &pcch_hdr->status);
 
-	cpufreq_notify_post_transition(policy, &freqs, status != CMD_COMPLETE);
+	cpufreq_freq_transition_end(policy, &freqs, status != CMD_COMPLETE);
 	spin_unlock(&pcc_lock);
 
 	if (status != CMD_COMPLETE) {
+2 -2
drivers/cpufreq/powernow-k6.c
···
 	freqs.old = busfreq * powernow_k6_get_cpu_multiplier();
 	freqs.new = busfreq * clock_ratio[best_i].driver_data;
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
 
 	powernow_k6_set_cpu_multiplier(best_i);
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+	cpufreq_freq_transition_end(policy, &freqs, 0);
 
 	return 0;
 }
+2 -2
drivers/cpufreq/powernow-k7.c
···
 
 	freqs.new = powernow_table[index].frequency;
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
 
 	/* Now do the magic poking into the MSRs. */
 
···
 	if (have_a0 == 1)
 		local_irq_enable();
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+	cpufreq_freq_transition_end(policy, &freqs, 0);
 
 	return 0;
 }
+2 -2
drivers/cpufreq/powernow-k8.c
···
 	policy = cpufreq_cpu_get(smp_processor_id());
 	cpufreq_cpu_put(policy);
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
 	res = transition_fid_vid(data, fid, vid);
-	cpufreq_notify_post_transition(policy, &freqs, res);
+	cpufreq_freq_transition_end(policy, &freqs, res);
 
 	return res;
 }
+2 -2
drivers/cpufreq/s3c24xx-cpufreq.c
···
 	s3c_cpufreq_updateclk(clk_pclk, cpu_new.freq.pclk);
 
 	/* start the frequency change */
-	cpufreq_notify_transition(policy, &freqs.freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs.freqs);
 
 	/* If hclk is staying the same, then we do not need to
 	 * re-write the IO or the refresh timings whilst we are changing
···
 	local_irq_restore(flags);
 
 	/* notify everyone we've done this */
-	cpufreq_notify_transition(policy, &freqs.freqs, CPUFREQ_POSTCHANGE);
+	cpufreq_freq_transition_end(policy, &freqs.freqs, 0);
 
 	s3c_freq_dbg("%s: finished\n", __func__);
 	return 0;
+2 -2
drivers/cpufreq/sh-cpufreq.c
···
 	freqs.new = (freq + 500) / 1000;
 	freqs.flags = 0;
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
 	set_cpus_allowed_ptr(current, &cpus_allowed);
 	clk_set_rate(cpuclk, freq);
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
+	cpufreq_freq_transition_end(policy, &freqs, 0);
 
 	dev_dbg(dev, "set frequency %lu Hz\n", freq);
 
+2 -2
drivers/cpufreq/unicore2-cpufreq.c
···
 	freqs.old = policy->cur;
 	freqs.new = target_freq;
 
-	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
+	cpufreq_freq_transition_begin(policy, &freqs);
 	ret = clk_set_rate(policy->mclk, target_freq * 1000);
-	cpufreq_notify_post_transition(policy, &freqs, ret);
+	cpufreq_freq_transition_end(policy, &freqs, ret);
 
 	return ret;
 }
+1 -1
drivers/pnp/resource.c
···
 	 * device is active because it itself may be in use */
 	if (!dev->active) {
 		if (request_irq(*irq, pnp_test_handler,
-				IRQF_DISABLED | IRQF_PROBE_SHARED, "pnp", NULL))
+				IRQF_PROBE_SHARED, "pnp", NULL))
 			return 0;
 		free_irq(*irq, NULL);
 	}
+1 -1
include/acpi/acpixf.h
···
 
 /* ACPICA runtime options */
 
-extern u8 acpi_gbl_all_methods_serialized;
+extern u8 acpi_gbl_auto_serialize_methods;
 extern u8 acpi_gbl_copy_dsdt_locally;
 extern u8 acpi_gbl_create_osi_method;
 extern u8 acpi_gbl_disable_auto_repair;
+9 -3
include/linux/cpufreq.h
···
 #include <linux/completion.h>
 #include <linux/kobject.h>
 #include <linux/notifier.h>
+#include <linux/spinlock.h>
 #include <linux/sysfs.h>
 
 /*********************************************************************
···
 	 *     __cpufreq_governor(data, CPUFREQ_GOV_POLICY_EXIT);
 	 */
 	struct rw_semaphore rwsem;
+
+	/* Synchronization for frequency transitions */
+	bool			transition_ongoing; /* Tracks transition status */
+	spinlock_t		transition_lock;
+	wait_queue_head_t	transition_wait;
 };
 
 /* Only for ACPI */
···
 int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list);
 int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list);
 
-void cpufreq_notify_transition(struct cpufreq_policy *policy,
-		struct cpufreq_freqs *freqs, unsigned int state);
-void cpufreq_notify_post_transition(struct cpufreq_policy *policy,
+void cpufreq_freq_transition_begin(struct cpufreq_policy *policy,
+		struct cpufreq_freqs *freqs);
+void cpufreq_freq_transition_end(struct cpufreq_policy *policy,
 		struct cpufreq_freqs *freqs, int transition_failed);
 
 #else /* CONFIG_CPU_FREQ */