ACPI: acpi_idle: touch TS_POLLING only in the non-MWAIT case

commit d306ebc28649b89877a22158fe0076f06cc46f60
(ACPI: Be in TS_POLLING state during mwait based C-state entry)
fixed an important power & performance issue where ACPI c2 and c3 C-states
were clearing TS_POLLING even when using MWAIT (ACPI_STATE_FFH).
That bug had been causing us to receive redundant scheduling interrupts
when we had already been woken up by MONITOR/MWAIT.

Following up on that...

In the MWAIT case, we don't have to subsequently
check need_resched(), as that check was there
for the TS_POLLING-clearing case.

Note that not only does the cpuidle calling function
already check need_resched() before calling us, the
low-level entry into monitor/mwait calls it twice --
guaranteeing that a write to the trigger address
cannot go unnoticed.

Also, in this case, we don't have to set TS_POLLING
when we wake, because we never cleared it.

Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Venkatesh Pallipadi <venki@google.com>

Len Brown 02cf4f98 a7d27c37

+16 -12
drivers/acpi/processor_idle.c
···
 		return(acpi_idle_enter_c1(dev, state));
 
 	local_irq_disable();
+
 	if (cx->entry_method != ACPI_CSTATE_FFH) {
 		current_thread_info()->status &= ~TS_POLLING;
 		/*
···
 		 * NEED_RESCHED:
 		 */
 		smp_mb();
-	}
 
-	if (unlikely(need_resched())) {
-		current_thread_info()->status |= TS_POLLING;
-		local_irq_enable();
-		return 0;
+		if (unlikely(need_resched())) {
+			current_thread_info()->status |= TS_POLLING;
+			local_irq_enable();
+			return 0;
+		}
 	}
 
 	/*
···
 	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
 
 	local_irq_enable();
-	current_thread_info()->status |= TS_POLLING;
+	if (cx->entry_method != ACPI_CSTATE_FFH)
+		current_thread_info()->status |= TS_POLLING;
 
 	cx->usage++;
···
 	}
 
 	local_irq_disable();
+
 	if (cx->entry_method != ACPI_CSTATE_FFH) {
 		current_thread_info()->status &= ~TS_POLLING;
 		/*
···
 		 * NEED_RESCHED:
 		 */
 		smp_mb();
-	}
 
-	if (unlikely(need_resched())) {
-		current_thread_info()->status |= TS_POLLING;
-		local_irq_enable();
-		return 0;
+		if (unlikely(need_resched())) {
+			current_thread_info()->status |= TS_POLLING;
+			local_irq_enable();
+			return 0;
+		}
 	}
 
 	acpi_unlazy_tlb(smp_processor_id());
···
 	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
 
 	local_irq_enable();
-	current_thread_info()->status |= TS_POLLING;
+	if (cx->entry_method != ACPI_CSTATE_FFH)
+		current_thread_info()->status |= TS_POLLING;
 
 	cx->usage++;
···