ACPI: acpi_idle: touch TS_POLLING only in the non-MWAIT case

commit d306ebc28649b89877a22158fe0076f06cc46f60
(ACPI: Be in TS_POLLING state during mwait based C-state entry)
fixed an important power & performance issue where the ACPI C2 and C3
C-states were clearing TS_POLLING even when using MWAIT (ACPI_CSTATE_FFH).
That bug had been causing us to receive redundant scheduling interrupts
when we had already been woken up by MONITOR/MWAIT.
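
(Background on why those IPIs are redundant: the scheduler skips the
cross-CPU reschedule IPI when the target task advertises TS_POLLING,
since a MONITOR/MWAIT-based idle loop notices the flag write by itself.
A simplified sketch of that era's resched_task() in kernel/sched.c --
illustrative, not the verbatim source:

	/* sketch, not verbatim kernel source */
	static void resched_task(struct task_struct *p)
	{
		int cpu;

		set_tsk_need_resched(p);	/* sets TIF_NEED_RESCHED */

		cpu = task_cpu(p);
		if (cpu == smp_processor_id())
			return;

		/* NEED_RESCHED must be visible before we test polling */
		smp_mb();
		if (!tsk_is_polling(p))		/* tests TS_POLLING */
			smp_send_reschedule(cpu);
	}

Clearing TS_POLLING while in MWAIT made tsk_is_polling() fail, so the
waker sent an IPI that MONITOR/MWAIT had already made unnecessary.)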

Following up on that...

In the MWAIT case, we don't have to subsequently
check need_resched(), as that check was there
for the TS_POLLING-clearing case.

Note that not only does the cpuidle calling function
already check need_resched() before calling us, but the
low-level entry into monitor/mwait also checks it twice --
guaranteeing that a write to the trigger address
cannot go unnoticed.
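
For the curious, that low-level entry looked roughly like this on x86 at
the time -- shaped after mwait_idle_with_hints(); treat the details as
illustrative rather than verbatim:

	/* sketch of the x86 MWAIT entry path of that era */
	static inline void mwait_idle_with_hints(unsigned long ax,
						 unsigned long cx)
	{
		if (!need_resched()) {		/* check #1 */
			/* monitor the flags word holding TIF_NEED_RESCHED */
			__monitor((void *)&current_thread_info()->flags, 0, 0);
			smp_mb();
			if (!need_resched())	/* check #2 */
				__mwait(ax, cx);
		}
	}

Any write to the monitored cache line -- including the waker setting
TIF_NEED_RESCHED -- breaks out of MWAIT, which is why an extra
need_resched() test in acpi_idle buys nothing here.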

Also, in this case, we don't have to set TS_POLLING
when we wake, because we never cleared it.

Signed-off-by: Len Brown <len.brown@intel.com>
Acked-by: Venkatesh Pallipadi <venki@google.com>

commit 02cf4f98 (parent a7d27c37)
Author: Len Brown <len.brown@intel.com>

drivers/acpi/processor_idle.c | +16 -12
···
 		return(acpi_idle_enter_c1(dev, state));
 
 	local_irq_disable();
+
 	if (cx->entry_method != ACPI_CSTATE_FFH) {
 		current_thread_info()->status &= ~TS_POLLING;
 		/*
···
 		 * NEED_RESCHED:
 		 */
 		smp_mb();
-	}
 
-	if (unlikely(need_resched())) {
-		current_thread_info()->status |= TS_POLLING;
-		local_irq_enable();
-		return 0;
+		if (unlikely(need_resched())) {
+			current_thread_info()->status |= TS_POLLING;
+			local_irq_enable();
+			return 0;
+		}
 	}
 
 	/*
···
 	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
 
 	local_irq_enable();
-	current_thread_info()->status |= TS_POLLING;
+	if (cx->entry_method != ACPI_CSTATE_FFH)
+		current_thread_info()->status |= TS_POLLING;
 
 	cx->usage++;
···
 	}
 
 	local_irq_disable();
+
 	if (cx->entry_method != ACPI_CSTATE_FFH) {
 		current_thread_info()->status &= ~TS_POLLING;
 		/*
···
 		 * NEED_RESCHED:
 		 */
 		smp_mb();
-	}
 
-	if (unlikely(need_resched())) {
-		current_thread_info()->status |= TS_POLLING;
-		local_irq_enable();
-		return 0;
+		if (unlikely(need_resched())) {
+			current_thread_info()->status |= TS_POLLING;
+			local_irq_enable();
+			return 0;
+		}
 	}
 
 	acpi_unlazy_tlb(smp_processor_id());
···
 	sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);
 
 	local_irq_enable();
-	current_thread_info()->status |= TS_POLLING;
+	if (cx->entry_method != ACPI_CSTATE_FFH)
+		current_thread_info()->status |= TS_POLLING;
 
 	cx->usage++;