Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'pm-cpufreq'

* pm-cpufreq: (28 commits)
cpufreq: handle calls to ->target_index() in separate routine
cpufreq: s5pv210: drop check for CONFIG_PM_VERBOSE
cpufreq: intel_pstate: Remove unused member name of cpudata
cpufreq: Break out early when frequency equals target_freq
cpufreq: Tegra: drop wrapper around tegra_update_cpu_speed()
cpufreq: imx6q: Remove unused include
cpufreq: imx6q: Drop devm_clk/regulator_get usage
cpufreq: powernow-k8: Suppress checkpatch warnings
cpufreq: powernv: make local function static
cpufreq: Enable big.LITTLE cpufreq driver on arm64
cpufreq: nforce2: remove DEFINE_PCI_DEVICE_TABLE macro
intel_pstate: Add CPU IDs for Broadwell processors
cpufreq: Fix build error on some platforms that use cpufreq_for_each_*
PM / OPP: Move cpufreq specific OPP functions out of generic OPP library
PM / OPP: Remove cpufreq wrapper dependency on internal data organization
cpufreq: Catch double invocations of cpufreq_freq_transition_begin/end
intel_pstate: Remove sample parameter in intel_pstate_calc_busy
cpufreq: Kconfig: Fix spelling errors
cpufreq: Make linux-pm@vger.kernel.org official mailing list
cpufreq: exynos: Use dev_err/info function instead of pr_err/info
...

+576 -556
+2 -2
Documentation/ABI/testing/sysfs-devices-system-cpu
··· 128 128 129 129 What: /sys/devices/system/cpu/cpu#/cpufreq/* 130 130 Date: pre-git history 131 - Contact: cpufreq@vger.kernel.org 131 + Contact: linux-pm@vger.kernel.org 132 132 Description: Discover and change clock speed of CPUs 133 133 134 134 Clock scaling allows you to change the clock speed of the ··· 146 146 147 147 What: /sys/devices/system/cpu/cpu#/cpufreq/freqdomain_cpus 148 148 Date: June 2013 149 - Contact: cpufreq@vger.kernel.org 149 + Contact: linux-pm@vger.kernel.org 150 150 Description: Discover CPUs in the same CPU frequency coordination domain 151 151 152 152 freqdomain_cpus is the list of CPUs (online+offline) that share
+29
Documentation/cpu-freq/core.txt
··· 20 20 --------- 21 21 1. CPUFreq core and interfaces 22 22 2. CPUFreq notifiers 23 + 3. CPUFreq Table Generation with Operating Performance Point (OPP) 23 24 24 25 1. General Information 25 26 ======================= ··· 93 92 cpu - number of the affected CPU 94 93 old - old frequency 95 94 new - new frequency 95 + 96 + 3. CPUFreq Table Generation with Operating Performance Point (OPP) 97 + ================================================================== 98 + For details about OPP, see Documentation/power/opp.txt 99 + 100 + dev_pm_opp_init_cpufreq_table - cpufreq framework typically is initialized with 101 + cpufreq_frequency_table_cpuinfo which is provided with the list of 102 + frequencies that are available for operation. This function provides 103 + a ready to use conversion routine to translate the OPP layer's internal 104 + information about the available frequencies into a format readily 105 + providable to cpufreq. 106 + 107 + WARNING: Do not use this function in interrupt context. 108 + 109 + Example: 110 + soc_pm_init() 111 + { 112 + /* Do things */ 113 + r = dev_pm_opp_init_cpufreq_table(dev, &freq_table); 114 + if (!r) 115 + cpufreq_frequency_table_cpuinfo(policy, freq_table); 116 + /* Do other things */ 117 + } 118 + 119 + NOTE: This function is available only if CONFIG_CPU_FREQ is enabled in 120 + addition to CONFIG_PM_OPP. 121 + 122 + dev_pm_opp_free_cpufreq_table - Free up the table allocated by dev_pm_opp_init_cpufreq_table
+19
Documentation/cpu-freq/cpu-drivers.txt
··· 228 228 stage. Just pass the values to this function, and the unsigned int 229 229 index returns the number of the frequency table entry which contains 230 230 the frequency the CPU shall be set to. 231 + 232 + The following macros can be used as iterators over cpufreq_frequency_table: 233 + 234 + cpufreq_for_each_entry(pos, table) - iterates over all entries of frequency 235 + table. 236 + 237 + cpufreq_for_each_valid_entry(pos, table) - iterates over all entries, 238 + excluding CPUFREQ_ENTRY_INVALID frequencies. 239 + Use arguments "pos" - a cpufreq_frequency_table * as a loop cursor and 240 + "table" - the cpufreq_frequency_table * you want to iterate over. 241 + 242 + For example: 243 + 244 + struct cpufreq_frequency_table *pos, *driver_freq_table; 245 + 246 + cpufreq_for_each_entry(pos, driver_freq_table) { 247 + /* Do something with pos */ 248 + pos->frequency = ... 249 + }
+2 -2
Documentation/cpu-freq/index.txt
··· 35 35 ------------ 36 36 There is a CPU frequency changing CVS commit and general list where 37 37 you can report bugs, problems or submit patches. To post a message, 38 - send an email to cpufreq@vger.kernel.org, to subscribe go to 39 - http://vger.kernel.org/vger-lists.html#cpufreq and follow the 38 + send an email to linux-pm@vger.kernel.org, to subscribe go to 39 + http://vger.kernel.org/vger-lists.html#linux-pm and follow the 40 40 instructions there. 41 41 42 42 Links
+5 -35
Documentation/power/opp.txt
··· 10 10 3. OPP Search Functions 11 11 4. OPP Availability Control Functions 12 12 5. OPP Data Retrieval Functions 13 - 6. Cpufreq Table Generation 14 - 7. Data Structures 13 + 6. Data Structures 15 14 16 15 1. Introduction 17 16 =============== ··· 71 72 OPP library facilitates this concept in it's implementation. The following 72 73 operational functions operate only on available opps: 73 74 opp_find_freq_{ceil, floor}, dev_pm_opp_get_voltage, dev_pm_opp_get_freq, dev_pm_opp_get_opp_count 74 - and dev_pm_opp_init_cpufreq_table 75 75 76 76 dev_pm_opp_find_freq_exact is meant to be used to find the opp pointer which can then 77 77 be used for dev_pm_opp_enable/disable functions to make an opp available as required. ··· 94 96 opp_get_{voltage, freq, opp_count} fall into this category. 95 97 96 98 opp_{add,enable,disable} are updaters which use mutex and implement it's own 97 - RCU locking mechanisms. dev_pm_opp_init_cpufreq_table acts as an updater and uses 98 - mutex to implment RCU updater strategy. These functions should *NOT* be called 99 - under RCU locks and other contexts that prevent blocking functions in RCU or 100 - mutex operations from working. 99 + RCU locking mechanisms. These functions should *NOT* be called under RCU locks 100 + and other contexts that prevent blocking functions in RCU or mutex operations 101 + from working. 101 102 102 103 2. Initial OPP List Registration 103 104 ================================ ··· 308 311 /* Do other things */ 309 312 } 310 313 311 - 6. Cpufreq Table Generation 312 - =========================== 313 - dev_pm_opp_init_cpufreq_table - cpufreq framework typically is initialized with 314 - cpufreq_frequency_table_cpuinfo which is provided with the list of 315 - frequencies that are available for operation. This function provides 316 - a ready to use conversion routine to translate the OPP layer's internal 317 - information about the available frequencies into a format readily 318 - providable to cpufreq. 
319 - 320 - WARNING: Do not use this function in interrupt context. 321 - 322 - Example: 323 - soc_pm_init() 324 - { 325 - /* Do things */ 326 - r = dev_pm_opp_init_cpufreq_table(dev, &freq_table); 327 - if (!r) 328 - cpufreq_frequency_table_cpuinfo(policy, freq_table); 329 - /* Do other things */ 330 - } 331 - 332 - NOTE: This function is available only if CONFIG_CPU_FREQ is enabled in 333 - addition to CONFIG_PM as power management feature is required to 334 - dynamically scale voltage and frequency in a system. 335 - 336 - dev_pm_opp_free_cpufreq_table - Free up the table allocated by dev_pm_opp_init_cpufreq_table 337 - 338 - 7. Data Structures 314 + 6. Data Structures 339 315 ================== 340 316 Typically an SoC contains multiple voltage domains which are variable. Each 341 317 domain is represented by a device pointer. The relationship to OPP can be
-2
MAINTAINERS
··· 2410 2410 CPU FREQUENCY DRIVERS 2411 2411 M: Rafael J. Wysocki <rjw@rjwysocki.net> 2412 2412 M: Viresh Kumar <viresh.kumar@linaro.org> 2413 - L: cpufreq@vger.kernel.org 2414 2413 L: linux-pm@vger.kernel.org 2415 2414 S: Maintained 2416 2415 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git ··· 2420 2421 CPU FREQUENCY DRIVERS - ARM BIG LITTLE 2421 2422 M: Viresh Kumar <viresh.kumar@linaro.org> 2422 2423 M: Sudeep Holla <sudeep.holla@arm.com> 2423 - L: cpufreq@vger.kernel.org 2424 2424 L: linux-pm@vger.kernel.org 2425 2425 W: http://www.arm.com/products/processors/technologies/biglittleprocessing.php 2426 2426 S: Maintained
+5 -4
arch/arm/mach-davinci/da850.c
··· 1092 1092 1093 1093 static int da850_round_armrate(struct clk *clk, unsigned long rate) 1094 1094 { 1095 - int i, ret = 0, diff; 1095 + int ret = 0, diff; 1096 1096 unsigned int best = (unsigned int) -1; 1097 1097 struct cpufreq_frequency_table *table = cpufreq_info.freq_table; 1098 + struct cpufreq_frequency_table *pos; 1098 1099 1099 1100 rate /= 1000; /* convert to kHz */ 1100 1101 1101 - for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 1102 - diff = table[i].frequency - rate; 1102 + cpufreq_for_each_entry(pos, table) { 1103 + diff = pos->frequency - rate; 1103 1104 if (diff < 0) 1104 1105 diff = -diff; 1105 1106 1106 1107 if (diff < best) { 1107 1108 best = diff; 1108 - ret = table[i].frequency; 1109 + ret = pos->frequency; 1109 1110 } 1110 1111 } 1111 1112
+5 -12
arch/mips/loongson/lemote-2f/clock.c
··· 91 91 92 92 int clk_set_rate(struct clk *clk, unsigned long rate) 93 93 { 94 - unsigned int rate_khz = rate / 1000; 94 + struct cpufreq_frequency_table *pos; 95 95 int ret = 0; 96 96 int regval; 97 - int i; 98 97 99 98 if (likely(clk->ops && clk->ops->set_rate)) { 100 99 unsigned long flags; ··· 106 107 if (unlikely(clk->flags & CLK_RATE_PROPAGATES)) 107 108 propagate_rate(clk); 108 109 109 - for (i = 0; loongson2_clockmod_table[i].frequency != CPUFREQ_TABLE_END; 110 - i++) { 111 - if (loongson2_clockmod_table[i].frequency == 112 - CPUFREQ_ENTRY_INVALID) 113 - continue; 114 - if (rate_khz == loongson2_clockmod_table[i].frequency) 110 + cpufreq_for_each_valid_entry(pos, loongson2_clockmod_table) 111 + if (rate == pos->frequency) 115 112 break; 116 - } 117 - if (rate_khz != loongson2_clockmod_table[i].frequency) 113 + if (rate != pos->frequency) 118 114 return -ENOTSUPP; 119 115 120 116 clk->rate = rate; 121 117 122 118 regval = LOONGSON_CHIPCFG0; 123 - regval = (regval & ~0x7) | 124 - (loongson2_clockmod_table[i].driver_data - 1); 119 + regval = (regval & ~0x7) | (pos->driver_data - 1); 125 120 LOONGSON_CHIPCFG0 = regval; 126 121 127 122 return ret;
-91
drivers/base/power/opp.c
··· 15 15 #include <linux/errno.h> 16 16 #include <linux/err.h> 17 17 #include <linux/slab.h> 18 - #include <linux/cpufreq.h> 19 18 #include <linux/device.h> 20 19 #include <linux/list.h> 21 20 #include <linux/rculist.h> ··· 617 618 return opp_set_availability(dev, freq, false); 618 619 } 619 620 EXPORT_SYMBOL_GPL(dev_pm_opp_disable); 620 - 621 - #ifdef CONFIG_CPU_FREQ 622 - /** 623 - * dev_pm_opp_init_cpufreq_table() - create a cpufreq table for a device 624 - * @dev: device for which we do this operation 625 - * @table: Cpufreq table returned back to caller 626 - * 627 - * Generate a cpufreq table for a provided device- this assumes that the 628 - * opp list is already initialized and ready for usage. 629 - * 630 - * This function allocates required memory for the cpufreq table. It is 631 - * expected that the caller does the required maintenance such as freeing 632 - * the table as required. 633 - * 634 - * Returns -EINVAL for bad pointers, -ENODEV if the device is not found, -ENOMEM 635 - * if no memory available for the operation (table is not populated), returns 0 636 - * if successful and table is populated. 637 - * 638 - * WARNING: It is important for the callers to ensure refreshing their copy of 639 - * the table if any of the mentioned functions have been invoked in the interim. 640 - * 641 - * Locking: The internal device_opp and opp structures are RCU protected. 642 - * To simplify the logic, we pretend we are updater and hold relevant mutex here 643 - * Callers should ensure that this function is *NOT* called under RCU protection 644 - * or in contexts where mutex locking cannot be used. 
645 - */ 646 - int dev_pm_opp_init_cpufreq_table(struct device *dev, 647 - struct cpufreq_frequency_table **table) 648 - { 649 - struct device_opp *dev_opp; 650 - struct dev_pm_opp *opp; 651 - struct cpufreq_frequency_table *freq_table; 652 - int i = 0; 653 - 654 - /* Pretend as if I am an updater */ 655 - mutex_lock(&dev_opp_list_lock); 656 - 657 - dev_opp = find_device_opp(dev); 658 - if (IS_ERR(dev_opp)) { 659 - int r = PTR_ERR(dev_opp); 660 - mutex_unlock(&dev_opp_list_lock); 661 - dev_err(dev, "%s: Device OPP not found (%d)\n", __func__, r); 662 - return r; 663 - } 664 - 665 - freq_table = kzalloc(sizeof(struct cpufreq_frequency_table) * 666 - (dev_pm_opp_get_opp_count(dev) + 1), GFP_KERNEL); 667 - if (!freq_table) { 668 - mutex_unlock(&dev_opp_list_lock); 669 - dev_warn(dev, "%s: Unable to allocate frequency table\n", 670 - __func__); 671 - return -ENOMEM; 672 - } 673 - 674 - list_for_each_entry(opp, &dev_opp->opp_list, node) { 675 - if (opp->available) { 676 - freq_table[i].driver_data = i; 677 - freq_table[i].frequency = opp->rate / 1000; 678 - i++; 679 - } 680 - } 681 - mutex_unlock(&dev_opp_list_lock); 682 - 683 - freq_table[i].driver_data = i; 684 - freq_table[i].frequency = CPUFREQ_TABLE_END; 685 - 686 - *table = &freq_table[0]; 687 - 688 - return 0; 689 - } 690 - EXPORT_SYMBOL_GPL(dev_pm_opp_init_cpufreq_table); 691 - 692 - /** 693 - * dev_pm_opp_free_cpufreq_table() - free the cpufreq table 694 - * @dev: device for which we do this operation 695 - * @table: table to free 696 - * 697 - * Free up the table allocated by dev_pm_opp_init_cpufreq_table 698 - */ 699 - void dev_pm_opp_free_cpufreq_table(struct device *dev, 700 - struct cpufreq_frequency_table **table) 701 - { 702 - if (!table) 703 - return; 704 - 705 - kfree(*table); 706 - *table = NULL; 707 - } 708 - EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table); 709 - #endif /* CONFIG_CPU_FREQ */ 710 621 711 622 /** 712 623 * dev_pm_opp_get_notifier() - find notifier_head of the device with opp
+4 -3
drivers/cpufreq/Kconfig.arm
··· 5 5 # big LITTLE core layer and glue drivers 6 6 config ARM_BIG_LITTLE_CPUFREQ 7 7 tristate "Generic ARM big LITTLE CPUfreq driver" 8 - depends on ARM && BIG_LITTLE && ARM_CPU_TOPOLOGY && HAVE_CLK 8 + depends on (BIG_LITTLE && ARM_CPU_TOPOLOGY) || (ARM64 && SMP) 9 + depends on HAVE_CLK 9 10 select PM_OPP 10 11 help 11 12 This enables the Generic CPUfreq driver for ARM big.LITTLE platforms. ··· 86 85 It allows usage of special frequencies for Samsung Exynos 87 86 processors if thermal conditions are appropriate. 88 87 89 - It reguires, for safe operation, thermal framework with properly 88 + It requires, for safe operation, thermal framework with properly 90 89 defined trip points. 91 90 92 91 If in doubt, say N. ··· 187 186 S3C2450 SoC. The S3C2416 supports changing the rate of the 188 187 armdiv clock source and also entering a so called dynamic 189 188 voltage scaling mode in which it is possible to reduce the 190 - core voltage of the cpu. 189 + core voltage of the CPU. 191 190 192 191 If in doubt, say N. 193 192
+2 -2
drivers/cpufreq/Kconfig.x86
··· 10 10 The driver implements an internal governor and will become 11 11 the scaling driver and governor for Sandy bridge processors. 12 12 13 - When this driver is enabled it will become the perferred 13 + When this driver is enabled it will become the preferred 14 14 scaling driver for Sandy bridge processors. 15 15 16 16 If in doubt, say N. ··· 52 52 help 53 53 The powernow-k8 driver used to provide a sysfs knob called "cpb" 54 54 to disable the Core Performance Boosting feature of AMD CPUs. This 55 - file has now been superseeded by the more generic "boost" entry. 55 + file has now been superseded by the more generic "boost" entry. 56 56 57 57 By enabling this option the acpi_cpufreq driver provides the old 58 58 entry in addition to the new boost ones, for compatibility reasons.
+2
drivers/cpufreq/Makefile
··· 1 1 # CPUfreq core 2 2 obj-$(CONFIG_CPU_FREQ) += cpufreq.o freq_table.o 3 + obj-$(CONFIG_PM_OPP) += cpufreq_opp.o 4 + 3 5 # CPUfreq stats 4 6 obj-$(CONFIG_CPU_FREQ_STAT) += cpufreq_stats.o 5 7
+4 -5
drivers/cpufreq/acpi-cpufreq.c
··· 213 213 214 214 static unsigned extract_msr(u32 msr, struct acpi_cpufreq_data *data) 215 215 { 216 - int i; 216 + struct cpufreq_frequency_table *pos; 217 217 struct acpi_processor_performance *perf; 218 218 219 219 if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) ··· 223 223 224 224 perf = data->acpi_data; 225 225 226 - for (i = 0; data->freq_table[i].frequency != CPUFREQ_TABLE_END; i++) { 227 - if (msr == perf->states[data->freq_table[i].driver_data].status) 228 - return data->freq_table[i].frequency; 229 - } 226 + cpufreq_for_each_entry(pos, data->freq_table) 227 + if (msr == perf->states[pos->driver_data].status) 228 + return pos->frequency; 230 229 return data->freq_table[0].frequency; 231 230 } 232 231
+8 -8
drivers/cpufreq/arm_big_little.c
··· 226 226 /* get the minimum frequency in the cpufreq_frequency_table */ 227 227 static inline u32 get_table_min(struct cpufreq_frequency_table *table) 228 228 { 229 - int i; 229 + struct cpufreq_frequency_table *pos; 230 230 uint32_t min_freq = ~0; 231 - for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) 232 - if (table[i].frequency < min_freq) 233 - min_freq = table[i].frequency; 231 + cpufreq_for_each_entry(pos, table) 232 + if (pos->frequency < min_freq) 233 + min_freq = pos->frequency; 234 234 return min_freq; 235 235 } 236 236 237 237 /* get the maximum frequency in the cpufreq_frequency_table */ 238 238 static inline u32 get_table_max(struct cpufreq_frequency_table *table) 239 239 { 240 - int i; 240 + struct cpufreq_frequency_table *pos; 241 241 uint32_t max_freq = 0; 242 - for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) 243 - if (table[i].frequency > max_freq) 244 - max_freq = table[i].frequency; 242 + cpufreq_for_each_entry(pos, table) 243 + if (pos->frequency > max_freq) 244 + max_freq = pos->frequency; 245 245 return max_freq; 246 246 } 247 247
+1 -1
drivers/cpufreq/cpufreq-nforce2.c
··· 379 379 }; 380 380 381 381 #ifdef MODULE 382 - static DEFINE_PCI_DEVICE_TABLE(nforce2_ids) = { 382 + static const struct pci_device_id nforce2_ids[] = { 383 383 { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE2 }, 384 384 {} 385 385 };
+47 -23
drivers/cpufreq/cpufreq.c
··· 354 354 void cpufreq_freq_transition_begin(struct cpufreq_policy *policy, 355 355 struct cpufreq_freqs *freqs) 356 356 { 357 + 358 + /* 359 + * Catch double invocations of _begin() which lead to self-deadlock. 360 + * ASYNC_NOTIFICATION drivers are left out because the cpufreq core 361 + * doesn't invoke _begin() on their behalf, and hence the chances of 362 + * double invocations are very low. Moreover, there are scenarios 363 + * where these checks can emit false-positive warnings in these 364 + * drivers; so we avoid that by skipping them altogether. 365 + */ 366 + WARN_ON(!(cpufreq_driver->flags & CPUFREQ_ASYNC_NOTIFICATION) 367 + && current == policy->transition_task); 368 + 357 369 wait: 358 370 wait_event(policy->transition_wait, !policy->transition_ongoing); 359 371 ··· 377 365 } 378 366 379 367 policy->transition_ongoing = true; 368 + policy->transition_task = current; 380 369 381 370 spin_unlock(&policy->transition_lock); 382 371 ··· 394 381 cpufreq_notify_post_transition(policy, freqs, transition_failed); 395 382 396 383 policy->transition_ongoing = false; 384 + policy->transition_task = NULL; 397 385 398 386 wake_up(&policy->transition_wait); 399 387 } ··· 1816 1802 * GOVERNORS * 1817 1803 *********************************************************************/ 1818 1804 1805 + static int __target_index(struct cpufreq_policy *policy, 1806 + struct cpufreq_frequency_table *freq_table, int index) 1807 + { 1808 + struct cpufreq_freqs freqs; 1809 + int retval = -EINVAL; 1810 + bool notify; 1811 + 1812 + notify = !(cpufreq_driver->flags & CPUFREQ_ASYNC_NOTIFICATION); 1813 + 1814 + if (notify) { 1815 + freqs.old = policy->cur; 1816 + freqs.new = freq_table[index].frequency; 1817 + freqs.flags = 0; 1818 + 1819 + pr_debug("%s: cpu: %d, oldfreq: %u, new freq: %u\n", 1820 + __func__, policy->cpu, freqs.old, freqs.new); 1821 + 1822 + cpufreq_freq_transition_begin(policy, &freqs); 1823 + } 1824 + 1825 + retval = cpufreq_driver->target_index(policy, index); 1826 + 
if (retval) 1827 + pr_err("%s: Failed to change cpu frequency: %d\n", __func__, 1828 + retval); 1829 + 1830 + if (notify) 1831 + cpufreq_freq_transition_end(policy, &freqs, retval); 1832 + 1833 + return retval; 1834 + } 1835 + 1819 1836 int __cpufreq_driver_target(struct cpufreq_policy *policy, 1820 1837 unsigned int target_freq, 1821 1838 unsigned int relation) 1822 1839 { 1823 - int retval = -EINVAL; 1824 1840 unsigned int old_target_freq = target_freq; 1841 + int retval = -EINVAL; 1825 1842 1826 1843 if (cpufreq_disabled()) 1827 1844 return -ENODEV; ··· 1879 1834 retval = cpufreq_driver->target(policy, target_freq, relation); 1880 1835 else if (cpufreq_driver->target_index) { 1881 1836 struct cpufreq_frequency_table *freq_table; 1882 - struct cpufreq_freqs freqs; 1883 - bool notify; 1884 1837 int index; 1885 1838 1886 1839 freq_table = cpufreq_frequency_get_table(policy->cpu); ··· 1899 1856 goto out; 1900 1857 } 1901 1858 1902 - notify = !(cpufreq_driver->flags & CPUFREQ_ASYNC_NOTIFICATION); 1903 - 1904 - if (notify) { 1905 - freqs.old = policy->cur; 1906 - freqs.new = freq_table[index].frequency; 1907 - freqs.flags = 0; 1908 - 1909 - pr_debug("%s: cpu: %d, oldfreq: %u, new freq: %u\n", 1910 - __func__, policy->cpu, freqs.old, freqs.new); 1911 - 1912 - cpufreq_freq_transition_begin(policy, &freqs); 1913 - } 1914 - 1915 - retval = cpufreq_driver->target_index(policy, index); 1916 - if (retval) 1917 - pr_err("%s: Failed to change cpu frequency: %d\n", 1918 - __func__, retval); 1919 - 1920 - if (notify) 1921 - cpufreq_freq_transition_end(policy, &freqs, retval); 1859 + retval = __target_index(policy, freq_table, index); 1922 1860 } 1923 1861 1924 1862 out:
+110
drivers/cpufreq/cpufreq_opp.c
··· 1 + /* 2 + * Generic OPP helper interface for CPUFreq drivers 3 + * 4 + * Copyright (C) 2009-2014 Texas Instruments Incorporated. 5 + * Nishanth Menon 6 + * Romit Dasgupta 7 + * Kevin Hilman 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2 as 11 + * published by the Free Software Foundation. 12 + */ 13 + #include <linux/cpufreq.h> 14 + #include <linux/device.h> 15 + #include <linux/err.h> 16 + #include <linux/errno.h> 17 + #include <linux/export.h> 18 + #include <linux/kernel.h> 19 + #include <linux/pm_opp.h> 20 + #include <linux/rcupdate.h> 21 + #include <linux/slab.h> 22 + 23 + /** 24 + * dev_pm_opp_init_cpufreq_table() - create a cpufreq table for a device 25 + * @dev: device for which we do this operation 26 + * @table: Cpufreq table returned back to caller 27 + * 28 + * Generate a cpufreq table for a provided device- this assumes that the 29 + * opp list is already initialized and ready for usage. 30 + * 31 + * This function allocates required memory for the cpufreq table. It is 32 + * expected that the caller does the required maintenance such as freeing 33 + * the table as required. 34 + * 35 + * Returns -EINVAL for bad pointers, -ENODEV if the device is not found, -ENOMEM 36 + * if no memory available for the operation (table is not populated), returns 0 37 + * if successful and table is populated. 38 + * 39 + * WARNING: It is important for the callers to ensure refreshing their copy of 40 + * the table if any of the mentioned functions have been invoked in the interim. 41 + * 42 + * Locking: The internal device_opp and opp structures are RCU protected. 43 + * Since we just use the regular accessor functions to access the internal data 44 + * structures, we use RCU read lock inside this function. As a result, users of 45 + * this function DONOT need to use explicit locks for invoking. 
46 + */ 47 + int dev_pm_opp_init_cpufreq_table(struct device *dev, 48 + struct cpufreq_frequency_table **table) 49 + { 50 + struct dev_pm_opp *opp; 51 + struct cpufreq_frequency_table *freq_table = NULL; 52 + int i, max_opps, ret = 0; 53 + unsigned long rate; 54 + 55 + rcu_read_lock(); 56 + 57 + max_opps = dev_pm_opp_get_opp_count(dev); 58 + if (max_opps <= 0) { 59 + ret = max_opps ? max_opps : -ENODATA; 60 + goto out; 61 + } 62 + 63 + freq_table = kzalloc(sizeof(*freq_table) * (max_opps + 1), GFP_KERNEL); 64 + if (!freq_table) { 65 + ret = -ENOMEM; 66 + goto out; 67 + } 68 + 69 + for (i = 0, rate = 0; i < max_opps; i++, rate++) { 70 + /* find next rate */ 71 + opp = dev_pm_opp_find_freq_ceil(dev, &rate); 72 + if (IS_ERR(opp)) { 73 + ret = PTR_ERR(opp); 74 + goto out; 75 + } 76 + freq_table[i].driver_data = i; 77 + freq_table[i].frequency = rate / 1000; 78 + } 79 + 80 + freq_table[i].driver_data = i; 81 + freq_table[i].frequency = CPUFREQ_TABLE_END; 82 + 83 + *table = &freq_table[0]; 84 + 85 + out: 86 + rcu_read_unlock(); 87 + if (ret) 88 + kfree(freq_table); 89 + 90 + return ret; 91 + } 92 + EXPORT_SYMBOL_GPL(dev_pm_opp_init_cpufreq_table); 93 + 94 + /** 95 + * dev_pm_opp_free_cpufreq_table() - free the cpufreq table 96 + * @dev: device for which we do this operation 97 + * @table: table to free 98 + * 99 + * Free up the table allocated by dev_pm_opp_init_cpufreq_table 100 + */ 101 + void dev_pm_opp_free_cpufreq_table(struct device *dev, 102 + struct cpufreq_frequency_table **table) 103 + { 104 + if (!table) 105 + return; 106 + 107 + kfree(*table); 108 + *table = NULL; 109 + } 110 + EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table);
+8 -16
drivers/cpufreq/cpufreq_stats.c
··· 182 182 183 183 static int __cpufreq_stats_create_table(struct cpufreq_policy *policy) 184 184 { 185 - unsigned int i, j, count = 0, ret = 0; 185 + unsigned int i, count = 0, ret = 0; 186 186 struct cpufreq_stats *stat; 187 187 unsigned int alloc_size; 188 188 unsigned int cpu = policy->cpu; 189 - struct cpufreq_frequency_table *table; 189 + struct cpufreq_frequency_table *pos, *table; 190 190 191 191 table = cpufreq_frequency_get_table(cpu); 192 192 if (unlikely(!table)) ··· 205 205 stat->cpu = cpu; 206 206 per_cpu(cpufreq_stats_table, cpu) = stat; 207 207 208 - for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 209 - unsigned int freq = table[i].frequency; 210 - if (freq == CPUFREQ_ENTRY_INVALID) 211 - continue; 208 + cpufreq_for_each_valid_entry(pos, table) 212 209 count++; 213 - } 214 210 215 211 alloc_size = count * sizeof(int) + count * sizeof(u64); 216 212 ··· 224 228 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS 225 229 stat->trans_table = stat->freq_table + count; 226 230 #endif 227 - j = 0; 228 - for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 229 - unsigned int freq = table[i].frequency; 230 - if (freq == CPUFREQ_ENTRY_INVALID) 231 - continue; 232 - if (freq_table_get_index(stat, freq) == -1) 233 - stat->freq_table[j++] = freq; 234 - } 235 - stat->state_num = j; 231 + i = 0; 232 + cpufreq_for_each_valid_entry(pos, table) 233 + if (freq_table_get_index(stat, pos->frequency) == -1) 234 + stat->freq_table[i++] = pos->frequency; 235 + stat->state_num = i; 236 236 spin_lock(&cpufreq_stats_lock); 237 237 stat->last_time = get_jiffies_64(); 238 238 stat->last_index = freq_table_get_index(stat, policy->cur);
+3 -5
drivers/cpufreq/dbx500-cpufreq.c
··· 45 45 46 46 static int dbx500_cpufreq_probe(struct platform_device *pdev) 47 47 { 48 - int i = 0; 48 + struct cpufreq_frequency_table *pos; 49 49 50 50 freq_table = dev_get_platdata(&pdev->dev); 51 51 if (!freq_table) { ··· 60 60 } 61 61 62 62 pr_info("dbx500-cpufreq: Available frequencies:\n"); 63 - while (freq_table[i].frequency != CPUFREQ_TABLE_END) { 64 - pr_info(" %d Mhz\n", freq_table[i].frequency/1000); 65 - i++; 66 - } 63 + cpufreq_for_each_entry(pos, freq_table) 64 + pr_info(" %d Mhz\n", pos->frequency / 1000); 67 65 68 66 return cpufreq_register_driver(&dbx500_cpufreq_driver); 69 67 }
+4 -5
drivers/cpufreq/elanfreq.c
··· 147 147 static int elanfreq_cpu_init(struct cpufreq_policy *policy) 148 148 { 149 149 struct cpuinfo_x86 *c = &cpu_data(0); 150 - unsigned int i; 150 + struct cpufreq_frequency_table *pos; 151 151 152 152 /* capability check */ 153 153 if ((c->x86_vendor != X86_VENDOR_AMD) || ··· 159 159 max_freq = elanfreq_get_cpu_frequency(0); 160 160 161 161 /* table init */ 162 - for (i = 0; (elanfreq_table[i].frequency != CPUFREQ_TABLE_END); i++) { 163 - if (elanfreq_table[i].frequency > max_freq) 164 - elanfreq_table[i].frequency = CPUFREQ_ENTRY_INVALID; 165 - } 162 + cpufreq_for_each_entry(pos, elanfreq_table) 163 + if (pos->frequency > max_freq) 164 + pos->frequency = CPUFREQ_ENTRY_INVALID; 166 165 167 166 /* cpuinfo and default policy values */ 168 167 policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+17 -15
drivers/cpufreq/exynos-cpufreq.c
··· 29 29 static int exynos_cpufreq_get_index(unsigned int freq) 30 30 { 31 31 struct cpufreq_frequency_table *freq_table = exynos_info->freq_table; 32 - int index; 32 + struct cpufreq_frequency_table *pos; 33 33 34 - for (index = 0; 35 - freq_table[index].frequency != CPUFREQ_TABLE_END; index++) 36 - if (freq_table[index].frequency == freq) 34 + cpufreq_for_each_entry(pos, freq_table) 35 + if (pos->frequency == freq) 37 36 break; 38 37 39 - if (freq_table[index].frequency == CPUFREQ_TABLE_END) 38 + if (pos->frequency == CPUFREQ_TABLE_END) 40 39 return -EINVAL; 41 40 42 - return index; 41 + return pos - freq_table; 43 42 } 44 43 45 44 static int exynos_cpufreq_scale(unsigned int target_freq) ··· 48 49 struct cpufreq_policy *policy = cpufreq_cpu_get(0); 49 50 unsigned int arm_volt, safe_arm_volt = 0; 50 51 unsigned int mpll_freq_khz = exynos_info->mpll_freq_khz; 52 + struct device *dev = exynos_info->dev; 51 53 unsigned int old_freq; 52 54 int index, old_index; 53 55 int ret = 0; ··· 90 90 /* Firstly, voltage up to increase frequency */ 91 91 ret = regulator_set_voltage(arm_regulator, arm_volt, arm_volt); 92 92 if (ret) { 93 - pr_err("%s: failed to set cpu voltage to %d\n", 94 - __func__, arm_volt); 93 + dev_err(dev, "failed to set cpu voltage to %d\n", 94 + arm_volt); 95 95 return ret; 96 96 } 97 97 } ··· 100 100 ret = regulator_set_voltage(arm_regulator, safe_arm_volt, 101 101 safe_arm_volt); 102 102 if (ret) { 103 - pr_err("%s: failed to set cpu voltage to %d\n", 104 - __func__, safe_arm_volt); 103 + dev_err(dev, "failed to set cpu voltage to %d\n", 104 + safe_arm_volt); 105 105 return ret; 106 106 } 107 107 } ··· 115 115 ret = regulator_set_voltage(arm_regulator, arm_volt, 116 116 arm_volt); 117 117 if (ret) { 118 - pr_err("%s: failed to set cpu voltage to %d\n", 119 - __func__, arm_volt); 118 + dev_err(dev, "failed to set cpu voltage to %d\n", 119 + arm_volt); 120 120 goto out; 121 121 } 122 122 } ··· 163 163 if (!exynos_info) 164 164 return -ENOMEM; 165 165 
166 + exynos_info->dev = &pdev->dev; 167 + 166 168 if (soc_is_exynos4210()) 167 169 ret = exynos4210_cpufreq_init(exynos_info); 168 170 else if (soc_is_exynos4212() || soc_is_exynos4412()) ··· 178 176 goto err_vdd_arm; 179 177 180 178 if (exynos_info->set_freq == NULL) { 181 - pr_err("%s: No set_freq function (ERR)\n", __func__); 179 + dev_err(&pdev->dev, "No set_freq function (ERR)\n"); 182 180 goto err_vdd_arm; 183 181 } 184 182 185 183 arm_regulator = regulator_get(NULL, "vdd_arm"); 186 184 if (IS_ERR(arm_regulator)) { 187 - pr_err("%s: failed to get resource vdd_arm\n", __func__); 185 + dev_err(&pdev->dev, "failed to get resource vdd_arm\n"); 188 186 goto err_vdd_arm; 189 187 } 190 188 ··· 194 192 if (!cpufreq_register_driver(&exynos_driver)) 195 193 return 0; 196 194 197 - pr_err("%s: failed to register cpufreq driver\n", __func__); 195 + dev_err(&pdev->dev, "failed to register cpufreq driver\n"); 198 196 regulator_put(arm_regulator); 199 197 err_vdd_arm: 200 198 kfree(exynos_info);
+1
drivers/cpufreq/exynos-cpufreq.h
··· 34 34 }; 35 35 36 36 struct exynos_dvfs_info { 37 + struct device *dev; 37 38 unsigned long mpll_freq_khz; 38 39 unsigned int pll_safe_idx; 39 40 struct clk *cpu_clk;
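The pr_err() → dev_err() conversion in the exynos hunks above trades a manually spliced `"%s: ", __func__` prefix for the device core's own prefixing: dev_err() tags each message with the device name, which is why the header gains a `struct device *dev` member. A rough user-space approximation of the idea (the struct device and the exact prefix format here are deliberate simplifications, not the kernel's real definitions):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for struct device: just enough to show why the
 * hunks above store a dev pointer in exynos_dvfs_info. */
struct device {
	const char *init_name;
};

static char log_buf[256];

/* Approximation of dev_err(): prefix every message with the device
 * name, so call sites no longer need to splice in "%s: ", __func__. */
#define dev_err(dev, fmt, ...) \
	snprintf(log_buf, sizeof(log_buf), "%s: " fmt, \
		 (dev)->init_name, ##__VA_ARGS__)
```

The payoff is visible in the diff: every converted call site drops the `__func__` argument while the log output gains a stable, device-identifying prefix.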
+15 -15
drivers/cpufreq/exynos5440-cpufreq.c
··· 114 114 115 115 static int init_div_table(void) 116 116 { 117 - struct cpufreq_frequency_table *freq_tbl = dvfs_info->freq_table; 117 + struct cpufreq_frequency_table *pos, *freq_tbl = dvfs_info->freq_table; 118 118 unsigned int tmp, clk_div, ema_div, freq, volt_id; 119 - int i = 0; 120 119 struct dev_pm_opp *opp; 121 120 122 121 rcu_read_lock(); 123 - for (i = 0; freq_tbl[i].frequency != CPUFREQ_TABLE_END; i++) { 124 - 122 + cpufreq_for_each_entry(pos, freq_tbl) { 125 123 opp = dev_pm_opp_find_freq_exact(dvfs_info->dev, 126 - freq_tbl[i].frequency * 1000, true); 124 + pos->frequency * 1000, true); 127 125 if (IS_ERR(opp)) { 128 126 rcu_read_unlock(); 129 127 dev_err(dvfs_info->dev, 130 128 "failed to find valid OPP for %u KHZ\n", 131 - freq_tbl[i].frequency); 129 + pos->frequency); 132 130 return PTR_ERR(opp); 133 131 } 134 132 135 - freq = freq_tbl[i].frequency / 1000; /* In MHZ */ 133 + freq = pos->frequency / 1000; /* In MHZ */ 136 134 clk_div = ((freq / CPU_DIV_FREQ_MAX) & P0_7_CPUCLKDEV_MASK) 137 135 << P0_7_CPUCLKDEV_SHIFT; 138 136 clk_div |= ((freq / CPU_ATB_FREQ_MAX) & P0_7_ATBCLKDEV_MASK) ··· 155 157 tmp = (clk_div | ema_div | (volt_id << P0_7_VDD_SHIFT) 156 158 | ((freq / FREQ_UNIT) << P0_7_FREQ_SHIFT)); 157 159 158 - __raw_writel(tmp, dvfs_info->base + XMU_PMU_P0_7 + 4 * i); 160 + __raw_writel(tmp, dvfs_info->base + XMU_PMU_P0_7 + 4 * 161 + (pos - freq_tbl)); 159 162 } 160 163 161 164 rcu_read_unlock(); ··· 165 166 166 167 static void exynos_enable_dvfs(unsigned int cur_frequency) 167 168 { 168 - unsigned int tmp, i, cpu; 169 + unsigned int tmp, cpu; 169 170 struct cpufreq_frequency_table *freq_table = dvfs_info->freq_table; 171 + struct cpufreq_frequency_table *pos; 170 172 /* Disable DVFS */ 171 173 __raw_writel(0, dvfs_info->base + XMU_DVFS_CTRL); 172 174 ··· 182 182 __raw_writel(tmp, dvfs_info->base + XMU_PMUIRQEN); 183 183 184 184 /* Set initial performance index */ 185 - for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++) 186 - if 
(freq_table[i].frequency == cur_frequency) 185 + cpufreq_for_each_entry(pos, freq_table) 186 + if (pos->frequency == cur_frequency) 187 187 break; 188 188 189 - if (freq_table[i].frequency == CPUFREQ_TABLE_END) { 189 + if (pos->frequency == CPUFREQ_TABLE_END) { 190 190 dev_crit(dvfs_info->dev, "Boot up frequency not supported\n"); 191 191 /* Assign the highest frequency */ 192 - i = 0; 193 - cur_frequency = freq_table[i].frequency; 192 + pos = freq_table; 193 + cur_frequency = pos->frequency; 194 194 } 195 195 196 196 dev_info(dvfs_info->dev, "Setting dvfs initial frequency = %uKHZ", ··· 199 199 for (cpu = 0; cpu < CONFIG_NR_CPUS; cpu++) { 200 200 tmp = __raw_readl(dvfs_info->base + XMU_C0_3_PSTATE + cpu * 4); 201 201 tmp &= ~(P_VALUE_MASK << C0_3_PSTATE_NEW_SHIFT); 202 - tmp |= (i << C0_3_PSTATE_NEW_SHIFT); 202 + tmp |= ((pos - freq_table) << C0_3_PSTATE_NEW_SHIFT); 203 203 __raw_writel(tmp, dvfs_info->base + XMU_C0_3_PSTATE + cpu * 4); 204 204 } 205 205
+31 -33
drivers/cpufreq/freq_table.c
··· 21 21 int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy, 22 22 struct cpufreq_frequency_table *table) 23 23 { 24 + struct cpufreq_frequency_table *pos; 24 25 unsigned int min_freq = ~0; 25 26 unsigned int max_freq = 0; 26 - unsigned int i; 27 + unsigned int freq; 27 28 28 - for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) { 29 - unsigned int freq = table[i].frequency; 30 - if (freq == CPUFREQ_ENTRY_INVALID) { 31 - pr_debug("table entry %u is invalid, skipping\n", i); 29 + cpufreq_for_each_valid_entry(pos, table) { 30 + freq = pos->frequency; 32 31 33 - continue; 34 - } 35 32 if (!cpufreq_boost_enabled() 36 - && (table[i].flags & CPUFREQ_BOOST_FREQ)) 33 + && (pos->flags & CPUFREQ_BOOST_FREQ)) 37 34 continue; 38 35 39 - pr_debug("table entry %u: %u kHz\n", i, freq); 36 + pr_debug("table entry %u: %u kHz\n", (int)(pos - table), freq); 40 37 if (freq < min_freq) 41 38 min_freq = freq; 42 39 if (freq > max_freq) ··· 54 57 int cpufreq_frequency_table_verify(struct cpufreq_policy *policy, 55 58 struct cpufreq_frequency_table *table) 56 59 { 57 - unsigned int next_larger = ~0, freq, i = 0; 60 + struct cpufreq_frequency_table *pos; 61 + unsigned int freq, next_larger = ~0; 58 62 bool found = false; 59 63 60 64 pr_debug("request for verification of policy (%u - %u kHz) for cpu %u\n", ··· 63 65 64 66 cpufreq_verify_within_cpu_limits(policy); 65 67 66 - for (; freq = table[i].frequency, freq != CPUFREQ_TABLE_END; i++) { 67 - if (freq == CPUFREQ_ENTRY_INVALID) 68 - continue; 68 + cpufreq_for_each_valid_entry(pos, table) { 69 + freq = pos->frequency; 70 + 69 71 if ((freq >= policy->min) && (freq <= policy->max)) { 70 72 found = true; 71 73 break; ··· 116 118 .driver_data = ~0, 117 119 .frequency = 0, 118 120 }; 119 - unsigned int i; 121 + struct cpufreq_frequency_table *pos; 122 + unsigned int freq, i = 0; 120 123 121 124 pr_debug("request for target %u kHz (relation: %u) for cpu %u\n", 122 125 target_freq, relation, policy->cpu); ··· 131 132 
break; 132 133 } 133 134 134 - for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) { 135 - unsigned int freq = table[i].frequency; 136 - if (freq == CPUFREQ_ENTRY_INVALID) 137 - continue; 135 + cpufreq_for_each_valid_entry(pos, table) { 136 + freq = pos->frequency; 137 + 138 + i = pos - table; 138 139 if ((freq < policy->min) || (freq > policy->max)) 139 140 continue; 141 + if (freq == target_freq) { 142 + optimal.driver_data = i; 143 + break; 144 + } 140 145 switch (relation) { 141 146 case CPUFREQ_RELATION_H: 142 - if (freq <= target_freq) { 147 + if (freq < target_freq) { 143 148 if (freq >= optimal.frequency) { 144 149 optimal.frequency = freq; 145 150 optimal.driver_data = i; ··· 156 153 } 157 154 break; 158 155 case CPUFREQ_RELATION_L: 159 - if (freq >= target_freq) { 156 + if (freq > target_freq) { 160 157 if (freq <= optimal.frequency) { 161 158 optimal.frequency = freq; 162 159 optimal.driver_data = i; ··· 187 184 int cpufreq_frequency_table_get_index(struct cpufreq_policy *policy, 188 185 unsigned int freq) 189 186 { 190 - struct cpufreq_frequency_table *table; 191 - int i; 187 + struct cpufreq_frequency_table *pos, *table; 192 188 193 189 table = cpufreq_frequency_get_table(policy->cpu); 194 190 if (unlikely(!table)) { ··· 195 193 return -ENOENT; 196 194 } 197 195 198 - for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 199 - if (table[i].frequency == freq) 200 - return i; 201 - } 196 + cpufreq_for_each_valid_entry(pos, table) 197 + if (pos->frequency == freq) 198 + return pos - table; 202 199 203 200 return -EINVAL; 204 201 } ··· 209 208 static ssize_t show_available_freqs(struct cpufreq_policy *policy, char *buf, 210 209 bool show_boost) 211 210 { 212 - unsigned int i = 0; 213 211 ssize_t count = 0; 214 - struct cpufreq_frequency_table *table = policy->freq_table; 212 + struct cpufreq_frequency_table *pos, *table = policy->freq_table; 215 213 216 214 if (!table) 217 215 return -ENODEV; 218 216 219 - for (i = 0; (table[i].frequency != 
CPUFREQ_TABLE_END); i++) { 220 - if (table[i].frequency == CPUFREQ_ENTRY_INVALID) 221 - continue; 217 + cpufreq_for_each_valid_entry(pos, table) { 222 218 /* 223 219 * show_boost = true and driver_data = BOOST freq 224 220 * display BOOST freqs ··· 227 229 * show_boost = false and driver_data != BOOST freq 228 230 * display NON BOOST freqs 229 231 */ 230 - if (show_boost ^ (table[i].flags & CPUFREQ_BOOST_FREQ)) 232 + if (show_boost ^ (pos->flags & CPUFREQ_BOOST_FREQ)) 231 233 continue; 232 234 233 - count += sprintf(&buf[count], "%d ", table[i].frequency); 235 + count += sprintf(&buf[count], "%d ", pos->frequency); 234 236 } 235 237 count += sprintf(&buf[count], "\n"); 236 238
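The freq_table.c conversions above, like the other driver hunks in this series, all target two new iteration helpers, cpufreq_for_each_entry() and cpufreq_for_each_valid_entry(). A stand-alone sketch of how they behave (the macro bodies and sentinel values below are a plausible reconstruction of `include/linux/cpufreq.h` from this era, not a verbatim copy):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the kernel's sentinel values (assumed, not copied). */
#define CPUFREQ_ENTRY_INVALID	~0u
#define CPUFREQ_TABLE_END	~1u

struct cpufreq_frequency_table {
	unsigned int driver_data;
	unsigned int frequency;		/* kHz */
};

/* Walk every entry up to the CPUFREQ_TABLE_END sentinel. */
#define cpufreq_for_each_entry(pos, table) \
	for (pos = table; pos->frequency != CPUFREQ_TABLE_END; pos++)

/* Same walk, but transparently skip CPUFREQ_ENTRY_INVALID slots. */
#define cpufreq_for_each_valid_entry(pos, table) \
	for (pos = table; pos->frequency != CPUFREQ_TABLE_END; pos++) \
		if (pos->frequency == CPUFREQ_ENTRY_INVALID) \
			continue; \
		else

/* Counting helpers to demonstrate the difference between the two. */
static int count_all(struct cpufreq_frequency_table *table)
{
	struct cpufreq_frequency_table *pos;
	int n = 0;

	cpufreq_for_each_entry(pos, table)
		n++;
	return n;
}

static int count_valid(struct cpufreq_frequency_table *table)
{
	struct cpufreq_frequency_table *pos;
	int n = 0;

	cpufreq_for_each_valid_entry(pos, table)
		n++;
	return n;
}
```

Because `pos` is a pointer into the table rather than an integer counter, the old index is recovered as `pos - table` when still needed, which is exactly the idiom the hunks above use for `driver_data` values and register strides.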
+39 -15
drivers/cpufreq/imx6q-cpufreq.c
··· 9 9 #include <linux/clk.h> 10 10 #include <linux/cpu.h> 11 11 #include <linux/cpufreq.h> 12 - #include <linux/delay.h> 13 12 #include <linux/err.h> 14 13 #include <linux/module.h> 15 14 #include <linux/of.h> ··· 169 170 return -ENOENT; 170 171 } 171 172 172 - arm_clk = devm_clk_get(cpu_dev, "arm"); 173 - pll1_sys_clk = devm_clk_get(cpu_dev, "pll1_sys"); 174 - pll1_sw_clk = devm_clk_get(cpu_dev, "pll1_sw"); 175 - step_clk = devm_clk_get(cpu_dev, "step"); 176 - pll2_pfd2_396m_clk = devm_clk_get(cpu_dev, "pll2_pfd2_396m"); 173 + arm_clk = clk_get(cpu_dev, "arm"); 174 + pll1_sys_clk = clk_get(cpu_dev, "pll1_sys"); 175 + pll1_sw_clk = clk_get(cpu_dev, "pll1_sw"); 176 + step_clk = clk_get(cpu_dev, "step"); 177 + pll2_pfd2_396m_clk = clk_get(cpu_dev, "pll2_pfd2_396m"); 177 178 if (IS_ERR(arm_clk) || IS_ERR(pll1_sys_clk) || IS_ERR(pll1_sw_clk) || 178 179 IS_ERR(step_clk) || IS_ERR(pll2_pfd2_396m_clk)) { 179 180 dev_err(cpu_dev, "failed to get clocks\n"); 180 181 ret = -ENOENT; 181 - goto put_node; 182 + goto put_clk; 182 183 } 183 184 184 - arm_reg = devm_regulator_get(cpu_dev, "arm"); 185 - pu_reg = devm_regulator_get(cpu_dev, "pu"); 186 - soc_reg = devm_regulator_get(cpu_dev, "soc"); 185 + arm_reg = regulator_get(cpu_dev, "arm"); 186 + pu_reg = regulator_get(cpu_dev, "pu"); 187 + soc_reg = regulator_get(cpu_dev, "soc"); 187 188 if (IS_ERR(arm_reg) || IS_ERR(pu_reg) || IS_ERR(soc_reg)) { 188 189 dev_err(cpu_dev, "failed to get regulators\n"); 189 190 ret = -ENOENT; 190 - goto put_node; 191 + goto put_reg; 191 192 } 192 193 193 194 /* ··· 200 201 ret = of_init_opp_table(cpu_dev); 201 202 if (ret < 0) { 202 203 dev_err(cpu_dev, "failed to init OPP table: %d\n", ret); 203 - goto put_node; 204 + goto put_reg; 204 205 } 205 206 206 207 num = dev_pm_opp_get_opp_count(cpu_dev); 207 208 if (num < 0) { 208 209 ret = num; 209 210 dev_err(cpu_dev, "no OPP table is found: %d\n", ret); 210 - goto put_node; 211 + goto put_reg; 211 212 } 212 213 } 213 214 214 215 ret = 
dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table); 215 216 if (ret) { 216 217 dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret); 217 - goto put_node; 218 + goto put_reg; 218 219 } 219 220 220 221 /* Make imx6_soc_volt array's size same as arm opp number */ ··· 300 301 301 302 free_freq_table: 302 303 dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table); 303 - put_node: 304 + put_reg: 305 + if (!IS_ERR(arm_reg)) 306 + regulator_put(arm_reg); 307 + if (!IS_ERR(pu_reg)) 308 + regulator_put(pu_reg); 309 + if (!IS_ERR(soc_reg)) 310 + regulator_put(soc_reg); 311 + put_clk: 312 + if (!IS_ERR(arm_clk)) 313 + clk_put(arm_clk); 314 + if (!IS_ERR(pll1_sys_clk)) 315 + clk_put(pll1_sys_clk); 316 + if (!IS_ERR(pll1_sw_clk)) 317 + clk_put(pll1_sw_clk); 318 + if (!IS_ERR(step_clk)) 319 + clk_put(step_clk); 320 + if (!IS_ERR(pll2_pfd2_396m_clk)) 321 + clk_put(pll2_pfd2_396m_clk); 304 322 of_node_put(np); 305 323 return ret; 306 324 } ··· 326 310 { 327 311 cpufreq_unregister_driver(&imx6q_cpufreq_driver); 328 312 dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table); 313 + regulator_put(arm_reg); 314 + regulator_put(pu_reg); 315 + regulator_put(soc_reg); 316 + clk_put(arm_clk); 317 + clk_put(pll1_sys_clk); 318 + clk_put(pll1_sw_clk); 319 + clk_put(step_clk); 320 + clk_put(pll2_pfd2_396m_clk); 329 321 330 322 return 0; 331 323 }
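The reworked imx6q error paths above hinge on the kernel's error-pointer convention: clk_get() and regulator_get() return either a valid handle or an errno encoded into the pointer itself, never NULL on failure, so each put in the new `put_clk`/`put_reg` labels must be guarded with IS_ERR(). A minimal user-space sketch of that convention (the helpers below are reimplementations for illustration, following the scheme in the kernel's `<linux/err.h>`):

```c
#include <assert.h>

/* Reimplementation of the kernel's error-pointer helpers. The topmost
 * page of the address space (values in [-MAX_ERRNO, -1] when cast to a
 * pointer) is reserved for encoded errno values. */
#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

After a partial failure in probe, some of the clock and regulator variables hold encoded errors rather than real handles, which is why the cleanup labels above check every one individually before releasing it.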
+6 -9
drivers/cpufreq/intel_pstate.c
··· 32 32 #include <asm/msr.h> 33 33 #include <asm/cpu_device_id.h> 34 34 35 - #define SAMPLE_COUNT 3 36 - 37 35 #define BYT_RATIOS 0x66a 38 36 #define BYT_VIDS 0x66b 39 37 #define BYT_TURBO_RATIOS 0x66c ··· 87 89 88 90 struct cpudata { 89 91 int cpu; 90 - 91 - char name[64]; 92 92 93 93 struct timer_list timer; 94 94 ··· 545 549 546 550 static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) 547 551 { 548 - sprintf(cpu->name, "Intel 2nd generation core"); 549 - 550 552 cpu->pstate.min_pstate = pstate_funcs.get_min(); 551 553 cpu->pstate.max_pstate = pstate_funcs.get_max(); 552 554 cpu->pstate.turbo_pstate = pstate_funcs.get_turbo(); ··· 554 560 intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate); 555 561 } 556 562 557 - static inline void intel_pstate_calc_busy(struct cpudata *cpu, 558 - struct sample *sample) 563 + static inline void intel_pstate_calc_busy(struct cpudata *cpu) 559 564 { 565 + struct sample *sample = &cpu->sample; 560 566 int64_t core_pct; 561 567 int32_t rem; 562 568 ··· 589 595 cpu->sample.aperf -= cpu->prev_aperf; 590 596 cpu->sample.mperf -= cpu->prev_mperf; 591 597 592 - intel_pstate_calc_busy(cpu, &cpu->sample); 598 + intel_pstate_calc_busy(cpu); 593 599 594 600 cpu->prev_aperf = aperf; 595 601 cpu->prev_mperf = mperf; ··· 678 684 ICPU(0x37, byt_params), 679 685 ICPU(0x3a, core_params), 680 686 ICPU(0x3c, core_params), 687 + ICPU(0x3d, core_params), 681 688 ICPU(0x3e, core_params), 682 689 ICPU(0x3f, core_params), 683 690 ICPU(0x45, core_params), 684 691 ICPU(0x46, core_params), 692 + ICPU(0x4f, core_params), 693 + ICPU(0x56, core_params), 685 694 {} 686 695 }; 687 696 MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
+5 -6
drivers/cpufreq/longhaul.c
··· 530 530 531 531 static void longhaul_setup_voltagescaling(void) 532 532 { 533 + struct cpufreq_frequency_table *freq_pos; 533 534 union msr_longhaul longhaul; 534 535 struct mV_pos minvid, maxvid, vid; 535 536 unsigned int j, speed, pos, kHz_step, numvscales; ··· 609 608 /* Calculate kHz for one voltage step */ 610 609 kHz_step = (highest_speed - min_vid_speed) / numvscales; 611 610 612 - j = 0; 613 - while (longhaul_table[j].frequency != CPUFREQ_TABLE_END) { 614 - speed = longhaul_table[j].frequency; 611 + cpufreq_for_each_entry(freq_pos, longhaul_table) { 612 + speed = freq_pos->frequency; 615 613 if (speed > min_vid_speed) 616 614 pos = (speed - min_vid_speed) / kHz_step + minvid.pos; 617 615 else 618 616 pos = minvid.pos; 619 - longhaul_table[j].driver_data |= mV_vrm_table[pos] << 8; 617 + freq_pos->driver_data |= mV_vrm_table[pos] << 8; 620 618 vid = vrm_mV_table[mV_vrm_table[pos]]; 621 619 printk(KERN_INFO PFX "f: %d kHz, index: %d, vid: %d mV\n", 622 - speed, j, vid.mV); 623 - j++; 620 + speed, (int)(freq_pos - longhaul_table), vid.mV); 624 621 } 625 622 626 623 can_scale_voltage = 1;
+5 -5
drivers/cpufreq/pasemi-cpufreq.c
··· 136 136 137 137 static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy) 138 138 { 139 + struct cpufreq_frequency_table *pos; 139 140 const u32 *max_freqp; 140 141 u32 max_freq; 141 - int i, cur_astate; 142 + int cur_astate; 142 143 struct resource res; 143 144 struct device_node *cpu, *dn; 144 145 int err = -ENODEV; ··· 198 197 pr_debug("initializing frequency table\n"); 199 198 200 199 /* initialize frequency table */ 201 - for (i=0; pas_freqs[i].frequency!=CPUFREQ_TABLE_END; i++) { 202 - pas_freqs[i].frequency = 203 - get_astate_freq(pas_freqs[i].driver_data) * 100000; 204 - pr_debug("%d: %d\n", i, pas_freqs[i].frequency); 200 + cpufreq_for_each_entry(pos, pas_freqs) { 201 + pos->frequency = get_astate_freq(pos->driver_data) * 100000; 202 + pr_debug("%d: %d\n", (int)(pos - pas_freqs), pos->frequency); 205 203 } 206 204 207 205 cur_astate = get_cur_astate(policy->cpu);
+7 -7
drivers/cpufreq/powernow-k6.c
··· 151 151 152 152 static int powernow_k6_cpu_init(struct cpufreq_policy *policy) 153 153 { 154 + struct cpufreq_frequency_table *pos; 154 155 unsigned int i, f; 155 156 unsigned khz; 156 157 ··· 169 168 } 170 169 } 171 170 if (param_max_multiplier) { 172 - for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) { 173 - if (clock_ratio[i].driver_data == param_max_multiplier) { 171 + cpufreq_for_each_entry(pos, clock_ratio) 172 + if (pos->driver_data == param_max_multiplier) { 174 173 max_multiplier = param_max_multiplier; 175 174 goto have_max_multiplier; 176 175 } 177 - } 178 176 printk(KERN_ERR "powernow-k6: invalid max_multiplier parameter, valid parameters 20, 30, 35, 40, 45, 50, 55, 60\n"); 179 177 return -EINVAL; 180 178 } ··· 201 201 param_busfreq = busfreq * 10; 202 202 203 203 /* table init */ 204 - for (i = 0; (clock_ratio[i].frequency != CPUFREQ_TABLE_END); i++) { 205 - f = clock_ratio[i].driver_data; 204 + cpufreq_for_each_entry(pos, clock_ratio) { 205 + f = pos->driver_data; 206 206 if (f > max_multiplier) 207 - clock_ratio[i].frequency = CPUFREQ_ENTRY_INVALID; 207 + pos->frequency = CPUFREQ_ENTRY_INVALID; 208 208 else 209 - clock_ratio[i].frequency = busfreq * f; 209 + pos->frequency = busfreq * f; 210 210 } 211 211 212 212 /* cpuinfo and default policy values */
+73 -107
drivers/cpufreq/powernow-k8.c
··· 27 27 * power and thermal data sheets, (e.g. 30417.pdf, 30430.pdf, 43375.pdf) 28 28 */ 29 29 30 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 31 + 30 32 #include <linux/kernel.h> 31 33 #include <linux/smp.h> 32 34 #include <linux/module.h> ··· 47 45 #include <linux/mutex.h> 48 46 #include <acpi/processor.h> 49 47 50 - #define PFX "powernow-k8: " 51 48 #define VERSION "version 2.20.00" 52 49 #include "powernow-k8.h" 53 50 ··· 162 161 u32 i = 0; 163 162 164 163 if ((fid & INVALID_FID_MASK) || (data->currvid & INVALID_VID_MASK)) { 165 - printk(KERN_ERR PFX "internal error - overflow on fid write\n"); 164 + pr_err("internal error - overflow on fid write\n"); 166 165 return 1; 167 166 } 168 167 ··· 176 175 do { 177 176 wrmsr(MSR_FIDVID_CTL, lo, data->plllock * PLL_LOCK_CONVERSION); 178 177 if (i++ > 100) { 179 - printk(KERN_ERR PFX 180 - "Hardware error - pending bit very stuck - " 181 - "no further pstate changes possible\n"); 178 + pr_err("Hardware error - pending bit very stuck - no further pstate changes possible\n"); 182 179 return 1; 183 180 } 184 181 } while (query_current_values_with_pending_wait(data)); ··· 184 185 count_off_irt(data); 185 186 186 187 if (savevid != data->currvid) { 187 - printk(KERN_ERR PFX 188 - "vid change on fid trans, old 0x%x, new 0x%x\n", 189 - savevid, data->currvid); 188 + pr_err("vid change on fid trans, old 0x%x, new 0x%x\n", 189 + savevid, data->currvid); 190 190 return 1; 191 191 } 192 192 193 193 if (fid != data->currfid) { 194 - printk(KERN_ERR PFX 195 - "fid trans failed, fid 0x%x, curr 0x%x\n", fid, 194 + pr_err("fid trans failed, fid 0x%x, curr 0x%x\n", fid, 196 195 data->currfid); 197 196 return 1; 198 197 } ··· 206 209 int i = 0; 207 210 208 211 if ((data->currfid & INVALID_FID_MASK) || (vid & INVALID_VID_MASK)) { 209 - printk(KERN_ERR PFX "internal error - overflow on vid write\n"); 212 + pr_err("internal error - overflow on vid write\n"); 210 213 return 1; 211 214 } 212 215 ··· 220 223 do { 221 224 
wrmsr(MSR_FIDVID_CTL, lo, STOP_GRANT_5NS); 222 225 if (i++ > 100) { 223 - printk(KERN_ERR PFX "internal error - pending bit " 224 - "very stuck - no further pstate " 225 - "changes possible\n"); 226 + pr_err("internal error - pending bit very stuck - no further pstate changes possible\n"); 226 227 return 1; 227 228 } 228 229 } while (query_current_values_with_pending_wait(data)); 229 230 230 231 if (savefid != data->currfid) { 231 - printk(KERN_ERR PFX "fid changed on vid trans, old " 232 - "0x%x new 0x%x\n", 233 - savefid, data->currfid); 232 + pr_err("fid changed on vid trans, old 0x%x new 0x%x\n", 233 + savefid, data->currfid); 234 234 return 1; 235 235 } 236 236 237 237 if (vid != data->currvid) { 238 - printk(KERN_ERR PFX "vid trans failed, vid 0x%x, " 239 - "curr 0x%x\n", 238 + pr_err("vid trans failed, vid 0x%x, curr 0x%x\n", 240 239 vid, data->currvid); 241 240 return 1; 242 241 } ··· 276 283 return 1; 277 284 278 285 if ((reqfid != data->currfid) || (reqvid != data->currvid)) { 279 - printk(KERN_ERR PFX "failed (cpu%d): req 0x%x 0x%x, " 280 - "curr 0x%x 0x%x\n", 286 + pr_err("failed (cpu%d): req 0x%x 0x%x, curr 0x%x 0x%x\n", 281 287 smp_processor_id(), 282 288 reqfid, reqvid, data->currfid, data->currvid); 283 289 return 1; ··· 296 304 u32 savefid = data->currfid; 297 305 u32 maxvid, lo, rvomult = 1; 298 306 299 - pr_debug("ph1 (cpu%d): start, currfid 0x%x, currvid 0x%x, " 300 - "reqvid 0x%x, rvo 0x%x\n", 307 + pr_debug("ph1 (cpu%d): start, currfid 0x%x, currvid 0x%x, reqvid 0x%x, rvo 0x%x\n", 301 308 smp_processor_id(), 302 309 data->currfid, data->currvid, reqvid, data->rvo); 303 310 ··· 333 342 return 1; 334 343 335 344 if (savefid != data->currfid) { 336 - printk(KERN_ERR PFX "ph1 err, currfid changed 0x%x\n", 337 - data->currfid); 345 + pr_err("ph1 err, currfid changed 0x%x\n", data->currfid); 338 346 return 1; 339 347 } 340 348 ··· 350 360 u32 fid_interval, savevid = data->currvid; 351 361 352 362 if (data->currfid == reqfid) { 353 - printk(KERN_ERR 
PFX "ph2 null fid transition 0x%x\n", 354 - data->currfid); 363 + pr_err("ph2 null fid transition 0x%x\n", data->currfid); 355 364 return 0; 356 365 } 357 366 358 - pr_debug("ph2 (cpu%d): starting, currfid 0x%x, currvid 0x%x, " 359 - "reqfid 0x%x\n", 367 + pr_debug("ph2 (cpu%d): starting, currfid 0x%x, currvid 0x%x, reqfid 0x%x\n", 360 368 smp_processor_id(), 361 369 data->currfid, data->currvid, reqfid); 362 370 ··· 397 409 return 1; 398 410 399 411 if (data->currfid != reqfid) { 400 - printk(KERN_ERR PFX 401 - "ph2: mismatch, failed fid transition, " 402 - "curr 0x%x, req 0x%x\n", 412 + pr_err("ph2: mismatch, failed fid transition, curr 0x%x, req 0x%x\n", 403 413 data->currfid, reqfid); 404 414 return 1; 405 415 } 406 416 407 417 if (savevid != data->currvid) { 408 - printk(KERN_ERR PFX "ph2: vid changed, save 0x%x, curr 0x%x\n", 418 + pr_err("ph2: vid changed, save 0x%x, curr 0x%x\n", 409 419 savevid, data->currvid); 410 420 return 1; 411 421 } ··· 430 444 return 1; 431 445 432 446 if (savefid != data->currfid) { 433 - printk(KERN_ERR PFX 434 - "ph3: bad fid change, save 0x%x, curr 0x%x\n", 435 - savefid, data->currfid); 447 + pr_err("ph3: bad fid change, save 0x%x, curr 0x%x\n", 448 + savefid, data->currfid); 436 449 return 1; 437 450 } 438 451 439 452 if (data->currvid != reqvid) { 440 - printk(KERN_ERR PFX 441 - "ph3: failed vid transition\n, " 442 - "req 0x%x, curr 0x%x", 443 - reqvid, data->currvid); 453 + pr_err("ph3: failed vid transition\n, req 0x%x, curr 0x%x", 454 + reqvid, data->currvid); 444 455 return 1; 445 456 } 446 457 } ··· 481 498 if ((eax & CPUID_XFAM) == CPUID_XFAM_K8) { 482 499 if (((eax & CPUID_USE_XFAM_XMOD) != CPUID_USE_XFAM_XMOD) || 483 500 ((eax & CPUID_XMOD) > CPUID_XMOD_REV_MASK)) { 484 - printk(KERN_INFO PFX 485 - "Processor cpuid %x not supported\n", eax); 501 + pr_info("Processor cpuid %x not supported\n", eax); 486 502 return; 487 503 } 488 504 489 505 eax = cpuid_eax(CPUID_GET_MAX_CAPABILITIES); 490 506 if (eax < 
CPUID_FREQ_VOLT_CAPABILITIES) { 491 - printk(KERN_INFO PFX 492 - "No frequency change capabilities detected\n"); 507 + pr_info("No frequency change capabilities detected\n"); 493 508 return; 494 509 } 495 510 496 511 cpuid(CPUID_FREQ_VOLT_CAPABILITIES, &eax, &ebx, &ecx, &edx); 497 512 if ((edx & P_STATE_TRANSITION_CAPABLE) 498 513 != P_STATE_TRANSITION_CAPABLE) { 499 - printk(KERN_INFO PFX 500 - "Power state transitions not supported\n"); 514 + pr_info("Power state transitions not supported\n"); 501 515 return; 502 516 } 503 517 *rc = 0; ··· 509 529 510 530 for (j = 0; j < data->numps; j++) { 511 531 if (pst[j].vid > LEAST_VID) { 512 - printk(KERN_ERR FW_BUG PFX "vid %d invalid : 0x%x\n", 513 - j, pst[j].vid); 532 + pr_err(FW_BUG "vid %d invalid : 0x%x\n", j, 533 + pst[j].vid); 514 534 return -EINVAL; 515 535 } 516 536 if (pst[j].vid < data->rvo) { 517 537 /* vid + rvo >= 0 */ 518 - printk(KERN_ERR FW_BUG PFX "0 vid exceeded with pstate" 519 - " %d\n", j); 538 + pr_err(FW_BUG "0 vid exceeded with pstate %d\n", j); 520 539 return -ENODEV; 521 540 } 522 541 if (pst[j].vid < maxvid + data->rvo) { 523 542 /* vid + rvo >= maxvid */ 524 - printk(KERN_ERR FW_BUG PFX "maxvid exceeded with pstate" 525 - " %d\n", j); 543 + pr_err(FW_BUG "maxvid exceeded with pstate %d\n", j); 526 544 return -ENODEV; 527 545 } 528 546 if (pst[j].fid > MAX_FID) { 529 - printk(KERN_ERR FW_BUG PFX "maxfid exceeded with pstate" 530 - " %d\n", j); 547 + pr_err(FW_BUG "maxfid exceeded with pstate %d\n", j); 531 548 return -ENODEV; 532 549 } 533 550 if (j && (pst[j].fid < HI_FID_TABLE_BOTTOM)) { 534 551 /* Only first fid is allowed to be in "low" range */ 535 - printk(KERN_ERR FW_BUG PFX "two low fids - %d : " 536 - "0x%x\n", j, pst[j].fid); 552 + pr_err(FW_BUG "two low fids - %d : 0x%x\n", j, 553 + pst[j].fid); 537 554 return -EINVAL; 538 555 } 539 556 if (pst[j].fid < lastfid) 540 557 lastfid = pst[j].fid; 541 558 } 542 559 if (lastfid & 1) { 543 - printk(KERN_ERR FW_BUG PFX "lastfid invalid\n"); 
560 + pr_err(FW_BUG "lastfid invalid\n"); 544 561 return -EINVAL; 545 562 } 546 563 if (lastfid > LO_FID_TABLE_TOP) 547 - printk(KERN_INFO FW_BUG PFX 548 - "first fid not from lo freq table\n"); 564 + pr_info(FW_BUG "first fid not from lo freq table\n"); 549 565 550 566 return 0; 551 567 } ··· 558 582 for (j = 0; j < data->numps; j++) { 559 583 if (data->powernow_table[j].frequency != 560 584 CPUFREQ_ENTRY_INVALID) { 561 - printk(KERN_INFO PFX 562 - "fid 0x%x (%d MHz), vid 0x%x\n", 563 - data->powernow_table[j].driver_data & 0xff, 564 - data->powernow_table[j].frequency/1000, 565 - data->powernow_table[j].driver_data >> 8); 585 + pr_info("fid 0x%x (%d MHz), vid 0x%x\n", 586 + data->powernow_table[j].driver_data & 0xff, 587 + data->powernow_table[j].frequency/1000, 588 + data->powernow_table[j].driver_data >> 8); 566 589 } 567 590 } 568 591 if (data->batps) 569 - printk(KERN_INFO PFX "Only %d pstates on battery\n", 570 - data->batps); 592 + pr_info("Only %d pstates on battery\n", data->batps); 571 593 } 572 594 573 595 static int fill_powernow_table(struct powernow_k8_data *data, ··· 576 602 577 603 if (data->batps) { 578 604 /* use ACPI support to get full speed on mains power */ 579 - printk(KERN_WARNING PFX 580 - "Only %d pstates usable (use ACPI driver for full " 581 - "range\n", data->batps); 605 + pr_warn("Only %d pstates usable (use ACPI driver for full range\n", 606 + data->batps); 582 607 data->numps = data->batps; 583 608 } 584 609 585 610 for (j = 1; j < data->numps; j++) { 586 611 if (pst[j-1].fid >= pst[j].fid) { 587 - printk(KERN_ERR PFX "PST out of sequence\n"); 612 + pr_err("PST out of sequence\n"); 588 613 return -EINVAL; 589 614 } 590 615 } 591 616 592 617 if (data->numps < 2) { 593 - printk(KERN_ERR PFX "no p states to transition\n"); 618 + pr_err("no p states to transition\n"); 594 619 return -ENODEV; 595 620 } 596 621 ··· 599 626 powernow_table = kzalloc((sizeof(*powernow_table) 600 627 * (data->numps + 1)), GFP_KERNEL); 601 628 if 
(!powernow_table) { 602 - printk(KERN_ERR PFX "powernow_table memory alloc failure\n"); 629 + pr_err("powernow_table memory alloc failure\n"); 603 630 return -ENOMEM; 604 631 } 605 632 ··· 654 681 655 682 pr_debug("table vers: 0x%x\n", psb->tableversion); 656 683 if (psb->tableversion != PSB_VERSION_1_4) { 657 - printk(KERN_ERR FW_BUG PFX "PSB table is not v1.4\n"); 684 + pr_err(FW_BUG "PSB table is not v1.4\n"); 658 685 return -ENODEV; 659 686 } 660 687 661 688 pr_debug("flags: 0x%x\n", psb->flags1); 662 689 if (psb->flags1) { 663 - printk(KERN_ERR FW_BUG PFX "unknown flags\n"); 690 + pr_err(FW_BUG "unknown flags\n"); 664 691 return -ENODEV; 665 692 } 666 693 ··· 689 716 cpst = 1; 690 717 } 691 718 if (cpst != 1) { 692 - printk(KERN_ERR FW_BUG PFX "numpst must be 1\n"); 719 + pr_err(FW_BUG "numpst must be 1\n"); 693 720 return -ENODEV; 694 721 } 695 722 ··· 715 742 * BIOS and Kernel Developer's Guide, which is available on 716 743 * www.amd.com 717 744 */ 718 - printk(KERN_ERR FW_BUG PFX "No PSB or ACPI _PSS objects\n"); 719 - printk(KERN_ERR PFX "Make sure that your BIOS is up to date" 720 - " and Cool'N'Quiet support is enabled in BIOS setup\n"); 745 + pr_err(FW_BUG "No PSB or ACPI _PSS objects\n"); 746 + pr_err("Make sure that your BIOS is up to date and Cool'N'Quiet support is enabled in BIOS setup\n"); 721 747 return -ENODEV; 722 748 } 723 749 ··· 791 819 acpi_processor_notify_smm(THIS_MODULE); 792 820 793 821 if (!zalloc_cpumask_var(&data->acpi_data.shared_cpu_map, GFP_KERNEL)) { 794 - printk(KERN_ERR PFX 795 - "unable to alloc powernow_k8_data cpumask\n"); 822 + pr_err("unable to alloc powernow_k8_data cpumask\n"); 796 823 ret_val = -ENOMEM; 797 824 goto err_out_mem; 798 825 } ··· 856 885 } 857 886 858 887 if (freq != (data->acpi_data.states[i].core_frequency * 1000)) { 859 - printk(KERN_INFO PFX "invalid freq entries " 860 - "%u kHz vs. %u kHz\n", freq, 861 - (unsigned int) 888 + pr_info("invalid freq entries %u kHz vs. 
%u kHz\n", 889 + freq, (unsigned int) 862 890 (data->acpi_data.states[i].core_frequency 863 891 * 1000)); 864 892 invalidate_entry(powernow_table, i); ··· 886 916 max_latency = cur_latency; 887 917 } 888 918 if (max_latency == 0) { 889 - pr_err(FW_WARN PFX "Invalid zero transition latency\n"); 919 + pr_err(FW_WARN "Invalid zero transition latency\n"); 890 920 max_latency = 1; 891 921 } 892 922 /* value in usecs, needs to be in nanoseconds */ ··· 961 991 checkvid = data->currvid; 962 992 963 993 if (pending_bit_stuck()) { 964 - printk(KERN_ERR PFX "failing targ, change pending bit set\n"); 994 + pr_err("failing targ, change pending bit set\n"); 965 995 return -EIO; 966 996 } 967 997 ··· 973 1003 return -EIO; 974 1004 975 1005 pr_debug("targ: curr fid 0x%x, vid 0x%x\n", 976 - data->currfid, data->currvid); 1006 + data->currfid, data->currvid); 977 1007 978 1008 if ((checkvid != data->currvid) || 979 1009 (checkfid != data->currfid)) { 980 - pr_info(PFX 981 - "error - out of sync, fix 0x%x 0x%x, vid 0x%x 0x%x\n", 1010 + pr_info("error - out of sync, fix 0x%x 0x%x, vid 0x%x 0x%x\n", 982 1011 checkfid, data->currfid, 983 1012 checkvid, data->currvid); 984 1013 } ··· 989 1020 ret = transition_frequency_fidvid(data, newstate); 990 1021 991 1022 if (ret) { 992 - printk(KERN_ERR PFX "transition frequency failed\n"); 1023 + pr_err("transition frequency failed\n"); 993 1024 mutex_unlock(&fidvid_mutex); 994 1025 return 1; 995 1026 } ··· 1018 1049 struct init_on_cpu *init_on_cpu = _init_on_cpu; 1019 1050 1020 1051 if (pending_bit_stuck()) { 1021 - printk(KERN_ERR PFX "failing init, change pending bit set\n"); 1052 + pr_err("failing init, change pending bit set\n"); 1022 1053 init_on_cpu->rc = -ENODEV; 1023 1054 return; 1024 1055 } ··· 1033 1064 init_on_cpu->rc = 0; 1034 1065 } 1035 1066 1036 - static const char missing_pss_msg[] = 1037 - KERN_ERR 1038 - FW_BUG PFX "No compatible ACPI _PSS objects found.\n" 1039 - FW_BUG PFX "First, make sure Cool'N'Quiet is enabled in the 
BIOS.\n" 1040 - FW_BUG PFX "If that doesn't help, try upgrading your BIOS.\n"; 1067 + #define MISSING_PSS_MSG \ 1068 + FW_BUG "No compatible ACPI _PSS objects found.\n" \ 1069 + FW_BUG "First, make sure Cool'N'Quiet is enabled in the BIOS.\n" \ 1070 + FW_BUG "If that doesn't help, try upgrading your BIOS.\n" 1041 1071 1042 1072 /* per CPU init entry point to the driver */ 1043 1073 static int powernowk8_cpu_init(struct cpufreq_policy *pol) ··· 1051 1083 1052 1084 data = kzalloc(sizeof(*data), GFP_KERNEL); 1053 1085 if (!data) { 1054 - printk(KERN_ERR PFX "unable to alloc powernow_k8_data"); 1086 + pr_err("unable to alloc powernow_k8_data"); 1055 1087 return -ENOMEM; 1056 1088 } 1057 1089 ··· 1063 1095 * an UP version, and is deprecated by AMD. 1064 1096 */ 1065 1097 if (num_online_cpus() != 1) { 1066 - printk_once(missing_pss_msg); 1098 + pr_err_once(MISSING_PSS_MSG); 1067 1099 goto err_out; 1068 1100 } 1069 1101 if (pol->cpu != 0) { 1070 - printk(KERN_ERR FW_BUG PFX "No ACPI _PSS objects for " 1071 - "CPU other than CPU0. Complain to your BIOS " 1072 - "vendor.\n"); 1102 + pr_err(FW_BUG "No ACPI _PSS objects for CPU other than CPU0. 
Complain to your BIOS vendor.\n"); 1073 1103 goto err_out; 1074 1104 } 1075 1105 rc = find_psb_table(data); ··· 1095 1129 1096 1130 /* min/max the cpu is capable of */ 1097 1131 if (cpufreq_table_validate_and_show(pol, data->powernow_table)) { 1098 - printk(KERN_ERR FW_BUG PFX "invalid powernow_table\n"); 1132 + pr_err(FW_BUG "invalid powernow_table\n"); 1099 1133 powernow_k8_cpu_exit_acpi(data); 1100 1134 kfree(data->powernow_table); 1101 1135 kfree(data); ··· 1103 1137 } 1104 1138 1105 1139 pr_debug("cpu_init done, current fid 0x%x, vid 0x%x\n", 1106 - data->currfid, data->currvid); 1140 + data->currfid, data->currvid); 1107 1141 1108 1142 /* Point all the CPUs in this policy to the same data */ 1109 1143 for_each_cpu(cpu, pol->cpus) ··· 1186 1220 goto request; 1187 1221 1188 1222 if (strncmp(cur_drv, drv, min_t(size_t, strlen(cur_drv), strlen(drv)))) 1189 - pr_warn(PFX "WTF driver: %s\n", cur_drv); 1223 + pr_warn("WTF driver: %s\n", cur_drv); 1190 1224 1191 1225 return; 1192 1226 1193 1227 request: 1194 - pr_warn(PFX "This CPU is not supported anymore, using acpi-cpufreq instead.\n"); 1228 + pr_warn("This CPU is not supported anymore, using acpi-cpufreq instead.\n"); 1195 1229 request_module(drv); 1196 1230 } 1197 1231 ··· 1226 1260 if (ret) 1227 1261 return ret; 1228 1262 1229 - pr_info(PFX "Found %d %s (%d cpu cores) (" VERSION ")\n", 1263 + pr_info("Found %d %s (%d cpu cores) (" VERSION ")\n", 1230 1264 num_online_nodes(), boot_cpu_data.x86_model_id, supported_cpus); 1231 1265 1232 1266 return ret; ··· 1240 1274 cpufreq_unregister_driver(&cpufreq_amd64_driver); 1241 1275 } 1242 1276 1243 - MODULE_AUTHOR("Paul Devriendt <paul.devriendt@amd.com> and " 1244 - "Mark Langsdorf <mark.langsdorf@amd.com>"); 1277 + MODULE_AUTHOR("Paul Devriendt <paul.devriendt@amd.com>"); 1278 + MODULE_AUTHOR("Mark Langsdorf <mark.langsdorf@amd.com>"); 1245 1279 MODULE_DESCRIPTION("AMD Athlon 64 and Opteron processor frequency driver."); 1246 1280 MODULE_LICENSE("GPL"); 1247 1281
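Much of the powernow-k8 hunk above replaces the hand-rolled `PFX` prefix with the generic pr_fmt() mechanism: the pr_err()/pr_info()/pr_warn() wrappers expand their format string through pr_fmt(), so defining it once before the includes prefixes every message in the file. A user-space sketch of the mechanism (KBUILD_MODNAME is normally injected per module by the kernel build system, and the snprintf-based pr_err() here is a test double, not the real printk-backed macro):

```c
#include <stdio.h>
#include <string.h>

/* Normally supplied by kbuild for each module. */
#define KBUILD_MODNAME "powernow-k8"

/* Per-file prefix, defined before any pr_*() wrapper is used --
 * the one-line addition at the top of the hunk above. */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

/* Test double standing in for the kernel's printk-backed pr_err(). */
static char log_buf[256];
#define pr_err(fmt, ...) \
	snprintf(log_buf, sizeof(log_buf), pr_fmt(fmt), ##__VA_ARGS__)
```

With this in place, `pr_err("PST out of sequence\n")` emits a message already tagged with the driver name, which is what lets the conversion delete `PFX` from dozens of call sites without changing the logged output.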
+1 -1
drivers/cpufreq/powernow-k8.h
··· 19 19 u32 vidmvs; /* usable value calculated from mvs */ 20 20 u32 vstable; /* voltage stabilization time, units 20 us */ 21 21 u32 plllock; /* pll lock time, units 1 us */ 22 - u32 exttype; /* extended interface = 1 */ 22 + u32 exttype; /* extended interface = 1 */ 23 23 24 24 /* keep track of the current fid / vid or pstate */ 25 25 u32 currvid;
+1 -1
drivers/cpufreq/powernv-cpufreq.c
··· 235 235 * firmware for CPU 'cpu'. This value is reported through the sysfs 236 236 * file cpuinfo_cur_freq. 237 237 */ 238 - unsigned int powernv_cpufreq_get(unsigned int cpu) 238 + static unsigned int powernv_cpufreq_get(unsigned int cpu) 239 239 { 240 240 struct powernv_smp_call_data freq_data; 241 241
+5 -4
drivers/cpufreq/ppc_cbe_cpufreq.c
··· 67 67 68 68 static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy) 69 69 { 70 + struct cpufreq_frequency_table *pos; 70 71 const u32 *max_freqp; 71 72 u32 max_freq; 72 - int i, cur_pmode; 73 + int cur_pmode; 73 74 struct device_node *cpu; 74 75 75 76 cpu = of_get_cpu_node(policy->cpu, NULL); ··· 103 102 pr_debug("initializing frequency table\n"); 104 103 105 104 /* initialize frequency table */ 106 - for (i=0; cbe_freqs[i].frequency!=CPUFREQ_TABLE_END; i++) { 107 - cbe_freqs[i].frequency = max_freq / cbe_freqs[i].driver_data; 108 - pr_debug("%d: %d\n", i, cbe_freqs[i].frequency); 105 + cpufreq_for_each_entry(pos, cbe_freqs) { 106 + pos->frequency = max_freq / pos->driver_data; 107 + pr_debug("%d: %d\n", (int)(pos - cbe_freqs), pos->frequency); 109 108 } 110 109 111 110 /* if DEBUG is enabled set_pmode() measures the latency
+17 -23
drivers/cpufreq/s3c2416-cpufreq.c
··· 266 266 static void __init s3c2416_cpufreq_cfg_regulator(struct s3c2416_data *s3c_freq) 267 267 { 268 268 int count, v, i, found; 269 - struct cpufreq_frequency_table *freq; 269 + struct cpufreq_frequency_table *pos; 270 270 struct s3c2416_dvfs *dvfs; 271 271 272 272 count = regulator_count_voltages(s3c_freq->vddarm); ··· 275 275 return; 276 276 } 277 277 278 - freq = s3c_freq->freq_table; 279 - while (count > 0 && freq->frequency != CPUFREQ_TABLE_END) { 280 - if (freq->frequency == CPUFREQ_ENTRY_INVALID) 281 - continue; 278 + if (!count) 279 + goto out; 282 280 283 - dvfs = &s3c2416_dvfs_table[freq->driver_data]; 281 + cpufreq_for_each_valid_entry(pos, s3c_freq->freq_table) { 282 + dvfs = &s3c2416_dvfs_table[pos->driver_data]; 284 283 found = 0; 285 284 286 285 /* Check only the min-voltage, more is always ok on S3C2416 */ ··· 291 292 292 293 if (!found) { 293 294 pr_debug("cpufreq: %dkHz unsupported by regulator\n", 294 - freq->frequency); 295 - freq->frequency = CPUFREQ_ENTRY_INVALID; 295 + pos->frequency); 296 + pos->frequency = CPUFREQ_ENTRY_INVALID; 296 297 } 297 - 298 - freq++; 299 298 } 300 299 300 + out: 301 301 /* Guessed */ 302 302 s3c_freq->regulator_latency = 1 * 1000 * 1000; 303 303 } ··· 336 338 static int __init s3c2416_cpufreq_driver_init(struct cpufreq_policy *policy) 337 339 { 338 340 struct s3c2416_data *s3c_freq = &s3c2416_cpufreq; 339 - struct cpufreq_frequency_table *freq; 341 + struct cpufreq_frequency_table *pos; 340 342 struct clk *msysclk; 341 343 unsigned long rate; 342 344 int ret; ··· 425 427 s3c_freq->regulator_latency = 0; 426 428 #endif 427 429 428 - freq = s3c_freq->freq_table; 429 - while (freq->frequency != CPUFREQ_TABLE_END) { 430 + cpufreq_for_each_entry(pos, s3c_freq->freq_table) { 430 431 /* special handling for dvs mode */ 431 - if (freq->driver_data == 0) { 432 + if (pos->driver_data == 0) { 432 433 if (!s3c_freq->hclk) { 433 434 pr_debug("cpufreq: %dkHz unsupported as it would need unavailable dvs mode\n", 434 - freq->frequency); 435 - pos->frequency); 436 - freq->frequency = CPUFREQ_ENTRY_INVALID; 435 + pos->frequency = CPUFREQ_ENTRY_INVALID; 436 437 } else { 437 - freq++; 438 438 continue; 439 439 } 440 440 } 441 441 442 442 /* Check for frequencies we can generate */ 443 443 rate = clk_round_rate(s3c_freq->armdiv, 444 - freq->frequency * 1000); 444 + pos->frequency * 1000); 445 445 rate /= 1000; 446 - if (rate != freq->frequency) { 446 + if (rate != pos->frequency) { 447 447 pr_debug("cpufreq: %dkHz unsupported by clock (clk_round_rate return %lu)\n", 448 - freq->frequency, rate); 449 - freq->frequency = CPUFREQ_ENTRY_INVALID; 448 + pos->frequency, rate); 449 + pos->frequency = CPUFREQ_ENTRY_INVALID; 450 450 } 451 - 452 - freq++; 453 451 } 454 452 455 453 /* Datasheet says PLL stabalisation time must be at least 300us,
+5 -10
drivers/cpufreq/s3c64xx-cpufreq.c
··· 118 118 pr_err("Unable to check supported voltages\n"); 119 119 } 120 120 121 - freq = s3c64xx_freq_table; 122 - while (count > 0 && freq->frequency != CPUFREQ_TABLE_END) { 123 - if (freq->frequency == CPUFREQ_ENTRY_INVALID) 124 - continue; 121 + if (!count) 122 + goto out; 125 123 124 + cpufreq_for_each_valid_entry(freq, s3c64xx_freq_table) { 126 125 dvfs = &s3c64xx_dvfs_table[freq->driver_data]; 127 126 found = 0; 128 127 ··· 136 137 freq->frequency); 137 138 freq->frequency = CPUFREQ_ENTRY_INVALID; 138 139 } 139 - 140 - freq++; 141 140 } 142 141 142 + out: 143 143 /* Guess based on having to do an I2C/SPI write; in future we 144 144 * will be able to query the regulator performance here. */ 145 145 regulator_latency = 1 * 1000 * 1000; ··· 177 179 } 178 180 #endif 179 181 180 - freq = s3c64xx_freq_table; 181 - while (freq->frequency != CPUFREQ_TABLE_END) { 182 + cpufreq_for_each_entry(freq, s3c64xx_freq_table) { 182 183 unsigned long r; 183 184 184 185 /* Check for frequencies we can generate */ ··· 193 196 * frequency is the maximum we can support. */ 194 197 if (!vddarm && freq->frequency > clk_get_rate(policy->clk) / 1000) 195 198 freq->frequency = CPUFREQ_ENTRY_INVALID; 196 - 197 - freq++; 198 199 } 199 200 200 201 /* Datasheet says PLL stabalisation time (if we were to use
+2 -4
drivers/cpufreq/s5pv210-cpufreq.c
··· 175 175 mutex_lock(&set_freq_lock); 176 176 177 177 if (no_cpufreq_access) { 178 - #ifdef CONFIG_PM_VERBOSE 179 - pr_err("%s:%d denied access to %s as it is disabled" 180 - "temporarily\n", __FILE__, __LINE__, __func__); 181 - #endif 178 + pr_err("Denied access to %s as it is disabled temporarily\n", 179 + __func__); 182 180 ret = -EINVAL; 183 181 goto exit; 184 182 }
+1 -1
drivers/cpufreq/speedstep-centrino.c
··· 28 28 #include <asm/cpu_device_id.h> 29 29 30 30 #define PFX "speedstep-centrino: " 31 - #define MAINTAINER "cpufreq@vger.kernel.org" 31 + #define MAINTAINER "linux-pm@vger.kernel.org" 32 32 33 33 #define INTEL_MSR_RANGE (0xffff) 34 34
+2 -7
drivers/cpufreq/tegra-cpufreq.c
··· 82 82 return ret; 83 83 } 84 84 85 - static int tegra_update_cpu_speed(struct cpufreq_policy *policy, 86 - unsigned long rate) 85 + static int tegra_target(struct cpufreq_policy *policy, unsigned int index) 87 86 { 87 + unsigned long rate = freq_table[index].frequency; 88 88 int ret = 0; 89 89 90 90 /* ··· 104 104 rate); 105 105 106 106 return ret; 107 - } 108 - 109 - static int tegra_target(struct cpufreq_policy *policy, unsigned int index) 110 - { 111 - return tegra_update_cpu_speed(policy, freq_table[index].frequency); 112 107 } 113 108 114 109 static int tegra_cpu_init(struct cpufreq_policy *policy)
+8 -11
drivers/mfd/db8500-prcmu.c
··· 1734 1734 1735 1735 static long round_armss_rate(unsigned long rate) 1736 1736 { 1737 + struct cpufreq_frequency_table *pos; 1737 1738 long freq = 0; 1738 - int i = 0; 1739 1739 1740 1740 /* cpufreq table frequencies is in KHz. */ 1741 1741 rate = rate / 1000; 1742 1742 1743 1743 /* Find the corresponding arm opp from the cpufreq table. */ 1744 - while (db8500_cpufreq_table[i].frequency != CPUFREQ_TABLE_END) { 1745 - freq = db8500_cpufreq_table[i].frequency; 1744 + cpufreq_for_each_entry(pos, db8500_cpufreq_table) { 1745 + freq = pos->frequency; 1746 1746 if (freq == rate) 1747 1747 break; 1748 - i++; 1749 1748 } 1750 1749 1751 1750 /* Return the last valid value, even if a match was not found. */ ··· 1885 1886 1886 1887 static int set_armss_rate(unsigned long rate) 1887 1888 { 1888 - int i = 0; 1889 + struct cpufreq_frequency_table *pos; 1889 1890 1890 1891 /* cpufreq table frequencies is in KHz. */ 1891 1892 rate = rate / 1000; 1892 1893 1893 1894 /* Find the corresponding arm opp from the cpufreq table. */ 1894 - while (db8500_cpufreq_table[i].frequency != CPUFREQ_TABLE_END) { 1895 - if (db8500_cpufreq_table[i].frequency == rate) 1895 + cpufreq_for_each_entry(pos, db8500_cpufreq_table) 1896 + if (pos->frequency == rate) 1896 1897 break; 1897 - i++; 1898 - } 1899 1898 1900 - if (db8500_cpufreq_table[i].frequency != rate) 1899 + if (pos->frequency != rate) 1901 1900 return -EINVAL; 1902 1901 1903 1902 /* Set the new arm opp. */ 1904 - return db8500_prcmu_set_arm_opp(db8500_cpufreq_table[i].driver_data); 1903 + return db8500_prcmu_set_arm_opp(pos->driver_data); 1905 1904 } 1906 1905 1907 1906 static int set_plldsi_rate(unsigned long rate)
+5 -9
drivers/net/irda/sh_sir.c
··· 217 217 static u32 sh_sir_find_sclk(struct clk *irda_clk) 218 218 { 219 219 struct cpufreq_frequency_table *freq_table = irda_clk->freq_table; 220 + struct cpufreq_frequency_table *pos; 220 221 struct clk *pclk = clk_get(NULL, "peripheral_clk"); 221 222 u32 limit, min = 0xffffffff, tmp; 222 - int i, index = 0; 223 + int index = 0; 223 224 224 225 limit = clk_get_rate(pclk); 225 226 clk_put(pclk); 226 227 227 228 /* IrDA can not set over peripheral_clk */ 228 - for (i = 0; 229 - freq_table[i].frequency != CPUFREQ_TABLE_END; 230 - i++) { 231 - u32 freq = freq_table[i].frequency; 232 - 233 - if (freq == CPUFREQ_ENTRY_INVALID) 234 - continue; 229 + cpufreq_for_each_valid_entry(pos, freq_table) { 230 + u32 freq = pos->frequency; 235 231 236 232 /* IrDA should not over peripheral_clk */ 237 233 if (freq > limit) ··· 236 240 tmp = freq % SCLK_BASE; 237 241 if (tmp < min) { 238 242 min = tmp; 239 - index = i; 243 + index = pos - freq_table; 240 244 } 241 245 } 242 246
+5 -15
drivers/sh/clk/core.c
··· 196 196 struct cpufreq_frequency_table *freq_table, 197 197 unsigned long rate) 198 198 { 199 - int i; 199 + struct cpufreq_frequency_table *pos; 200 200 201 - for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++) { 202 - unsigned long freq = freq_table[i].frequency; 203 - 204 - if (freq == CPUFREQ_ENTRY_INVALID) 205 - continue; 206 - 207 - if (freq == rate) 208 - return i; 209 - } 201 + cpufreq_for_each_valid_entry(pos, freq_table) 202 + if (pos->frequency == rate) 203 + return pos - freq_table; 210 204 211 205 return -ENOENT; 212 206 } ··· 569 575 return abs(target - *best_freq); 570 576 } 571 577 572 - for (freq = parent->freq_table; freq->frequency != CPUFREQ_TABLE_END; 573 - freq++) { 574 - if (freq->frequency == CPUFREQ_ENTRY_INVALID) 575 - continue; 576 - 578 + cpufreq_for_each_valid_entry(freq, parent->freq_table) { 577 579 if (unlikely(freq->frequency / target <= div_min - 1)) { 578 580 unsigned long freq_max; 579 581
+13 -20
drivers/thermal/cpu_cooling.c
··· 144 144 unsigned int *output, 145 145 enum cpufreq_cooling_property property) 146 146 { 147 - int i, j; 147 + int i; 148 148 unsigned long max_level = 0, level = 0; 149 149 unsigned int freq = CPUFREQ_ENTRY_INVALID; 150 150 int descend = -1; 151 - struct cpufreq_frequency_table *table = 151 + struct cpufreq_frequency_table *pos, *table = 152 152 cpufreq_frequency_get_table(cpu); 153 153 154 154 if (!output) ··· 157 157 if (!table) 158 158 return -EINVAL; 159 159 160 - for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 161 - /* ignore invalid entries */ 162 - if (table[i].frequency == CPUFREQ_ENTRY_INVALID) 163 - continue; 164 - 160 + cpufreq_for_each_valid_entry(pos, table) { 165 161 /* ignore duplicate entry */ 166 - if (freq == table[i].frequency) 162 + if (freq == pos->frequency) 167 163 continue; 168 164 169 165 /* get the frequency order */ 170 166 if (freq != CPUFREQ_ENTRY_INVALID && descend == -1) 171 - descend = !!(freq > table[i].frequency); 167 + descend = freq > pos->frequency; 172 168 173 - freq = table[i].frequency; 169 + freq = pos->frequency; 174 170 max_level++; 175 171 } 176 172 ··· 186 190 if (property == GET_FREQ) 187 191 level = descend ? input : (max_level - input); 188 192 189 - for (i = 0, j = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) { 190 - /* ignore invalid entry */ 191 - if (table[i].frequency == CPUFREQ_ENTRY_INVALID) 192 - continue; 193 - 193 + i = 0; 194 + cpufreq_for_each_valid_entry(pos, table) { 194 195 /* ignore duplicate entry */ 195 - if (freq == table[i].frequency) 196 + if (freq == pos->frequency) 196 197 continue; 197 198 198 199 /* now we have a valid frequency entry */ 199 - freq = table[i].frequency; 200 + freq = pos->frequency; 200 201 201 202 if (property == GET_LEVEL && (unsigned int)input == freq) { 202 203 /* get level by frequency */ 203 - *output = descend ? j : (max_level - j); 204 + *output = descend ? i : (max_level - i); 204 205 return 0; 205 206 } 206 - if (property == GET_FREQ && level == j) { 207 + if (property == GET_FREQ && level == i) { 207 208 /* get frequency by level */ 208 209 *output = freq; 209 210 return 0; 210 211 } 211 - j++; 212 + i++; 212 213 } 213 214 214 215 return -EINVAL;
+50
include/linux/cpufreq.h
··· 110 110 bool transition_ongoing; /* Tracks transition status */ 111 111 spinlock_t transition_lock; 112 112 wait_queue_head_t transition_wait; 113 + struct task_struct *transition_task; /* Task which is doing the transition */ 113 114 }; 114 115 115 116 /* Only for ACPI */ ··· 468 467 unsigned int frequency; /* kHz - doesn't need to be in ascending 469 468 * order */ 470 469 }; 470 + 471 + #if defined(CONFIG_CPU_FREQ) && defined(CONFIG_PM_OPP) 472 + int dev_pm_opp_init_cpufreq_table(struct device *dev, 473 + struct cpufreq_frequency_table **table); 474 + void dev_pm_opp_free_cpufreq_table(struct device *dev, 475 + struct cpufreq_frequency_table **table); 476 + #else 477 + static inline int dev_pm_opp_init_cpufreq_table(struct device *dev, 478 + struct cpufreq_frequency_table 479 + **table) 480 + { 481 + return -EINVAL; 482 + } 483 + 484 + static inline void dev_pm_opp_free_cpufreq_table(struct device *dev, 485 + struct cpufreq_frequency_table 486 + **table) 487 + { 488 + } 489 + #endif 490 + 491 + static inline bool cpufreq_next_valid(struct cpufreq_frequency_table **pos) 492 + { 493 + while ((*pos)->frequency != CPUFREQ_TABLE_END) 494 + if ((*pos)->frequency != CPUFREQ_ENTRY_INVALID) 495 + return true; 496 + else 497 + (*pos)++; 498 + return false; 499 + } 500 + 501 + /* 502 + * cpufreq_for_each_entry - iterate over a cpufreq_frequency_table 503 + * @pos: the cpufreq_frequency_table * to use as a loop cursor. 504 + * @table: the cpufreq_frequency_table * to iterate over. 505 + */ 506 + 507 + #define cpufreq_for_each_entry(pos, table) \ 508 + for (pos = table; pos->frequency != CPUFREQ_TABLE_END; pos++) 509 + 510 + /* 511 + * cpufreq_for_each_valid_entry - iterate over a cpufreq_frequency_table 512 + * excluding CPUFREQ_ENTRY_INVALID frequencies. 513 + * @pos: the cpufreq_frequency_table * to use as a loop cursor. 514 + * @table: the cpufreq_frequency_table * to iterate over. 
515 + */ 516 + 517 + #define cpufreq_for_each_valid_entry(pos, table) \ 518 + for (pos = table; cpufreq_next_valid(&pos); pos++) 471 519 472 520 int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy, 473 521 struct cpufreq_frequency_table *table);
-20
include/linux/pm_opp.h
··· 15 15 #define __LINUX_OPP_H__ 16 16 17 17 #include <linux/err.h> 18 - #include <linux/cpufreq.h> 19 18 #include <linux/notifier.h> 20 19 21 20 struct dev_pm_opp; ··· 115 116 return -EINVAL; 116 117 } 117 118 #endif 118 - 119 - #if defined(CONFIG_CPU_FREQ) && defined(CONFIG_PM_OPP) 120 - int dev_pm_opp_init_cpufreq_table(struct device *dev, 121 - struct cpufreq_frequency_table **table); 122 - void dev_pm_opp_free_cpufreq_table(struct device *dev, 123 - struct cpufreq_frequency_table **table); 124 - #else 125 - static inline int dev_pm_opp_init_cpufreq_table(struct device *dev, 126 - struct cpufreq_frequency_table **table) 127 - { 128 - return -EINVAL; 129 - } 130 - 131 - static inline 132 - void dev_pm_opp_free_cpufreq_table(struct device *dev, 133 - struct cpufreq_frequency_table **table) 134 - { 135 - } 136 - #endif /* CONFIG_CPU_FREQ */ 137 119 138 120 #endif /* __LINUX_OPP_H__ */
+1 -1
tools/power/cpupower/Makefile
··· 62 62 LIB_MIN= 0 63 63 64 64 PACKAGE = cpupower 65 - PACKAGE_BUGREPORT = cpufreq@vger.kernel.org 65 + PACKAGE_BUGREPORT = linux-pm@vger.kernel.org 66 66 LANGUAGES = de fr it cs pt 67 67 68 68
+1 -1
tools/power/cpupower/debug/kernel/cpufreq-test_tsc.c
··· 18 18 * 5.) if the third value, "diff_pmtmr", changes between 2. and 4., the 19 19 * TSC-based delay routine on the Linux kernel does not correctly 20 20 * handle the cpufreq transition. Please report this to 21 - * cpufreq@vger.kernel.org 21 + * linux-pm@vger.kernel.org 22 22 */ 23 23 24 24 #include <linux/kernel.h>