Merge git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core-2.6:
Documentation: ABI: /sys/devices/system/cpu/cpu#/node
Documentation: ABI: /sys/devices/system/cpu/cpuidle/
Documentation: ABI: /sys/devices/system/cpu/sched_[mc|smt]_power_savings
Documentation: ABI: /sys/devices/system/cpu/cpu#/ topology files
Documentation: ABI: /sys/devices/system/cpu/ topology files
Documentation: ABI: document /sys/devices/system/cpu/
Documentation: ABI: rename sysfs-devices-cache_disable properly
Driver core: allow certain drivers prohibit bind/unbind via sysfs
Driver core: fix driver_register() return value

+206 -44
-18
Documentation/ABI/testing/sysfs-devices-cache_disable
···
  1 - What:		/sys/devices/system/cpu/cpu*/cache/index*/cache_disable_X
  2 - Date:		August 2008
  3 - KernelVersion:	2.6.27
  4 - Contact:	mark.langsdorf@amd.com
  5 - Description:	These files exist in every cpu's cache index directories.
  6 -		There are currently 2 cache_disable_# files in each
  7 -		directory. Reading from these files on a supported
  8 -		processor will return that cache disable index value
  9 -		for that processor and node. Writing to one of these
 10 -		files will cause the specificed cache index to be disabled.
 11 -
 12 -		Currently, only AMD Family 10h Processors support cache index
 13 -		disable, and only for their L3 caches. See the BIOS and
 14 -		Kernel Developer's Guide at
 15 -		http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/31116-Public-GH-BKDG_3.20_2-4-09.pdf
 16 -		for formatting information and other details on the
 17 -		cache index disable.
 18 - Users:		joachim.deguara@amd.com
+156
Documentation/ABI/testing/sysfs-devices-system-cpu
···
  1 + What:		/sys/devices/system/cpu/
  2 + Date:		pre-git history
  3 + Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
  4 + Description:
  5 +		A collection of both global and individual CPU attributes
  6 +
  7 +		Individual CPU attributes are contained in subdirectories
  8 +		named by the kernel's logical CPU number, e.g.:
  9 +
 10 +		/sys/devices/system/cpu/cpu#/
 11 +
 12 + What:		/sys/devices/system/cpu/sched_mc_power_savings
 13 +		/sys/devices/system/cpu/sched_smt_power_savings
 14 + Date:		June 2006
 15 + Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 16 + Description:	Discover and adjust the kernel's multi-core scheduler support.
 17 +
 18 +		Possible values are:
 19 +
 20 +		0 - No power saving load balance (default value)
 21 +		1 - Fill one thread/core/package first for long running threads
 22 +		2 - Also bias task wakeups to semi-idle cpu package for power
 23 +		    savings
 24 +
 25 +		sched_mc_power_savings is dependent upon SCHED_MC, which is
 26 +		itself architecture dependent.
 27 +
 28 +		sched_smt_power_savings is dependent upon SCHED_SMT, which
 29 +		is itself architecture dependent.
 30 +
 31 +		The two files are independent of each other. It is possible
 32 +		that one file may be present without the other.
 33 +
 34 +		Introduced by git commit 5c45bf27.
 35 +
 36 +
 37 + What:		/sys/devices/system/cpu/kernel_max
 38 +		/sys/devices/system/cpu/offline
 39 +		/sys/devices/system/cpu/online
 40 +		/sys/devices/system/cpu/possible
 41 +		/sys/devices/system/cpu/present
 42 + Date:		December 2008
 43 + Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 44 + Description:	CPU topology files that describe kernel limits related to
 45 +		hotplug. Briefly:
 46 +
 47 +		kernel_max: the maximum cpu index allowed by the kernel
 48 +		configuration.
 49 +
 50 +		offline: cpus that are not online because they have been
 51 +		HOTPLUGGED off or exceed the limit of cpus allowed by the
 52 +		kernel configuration (kernel_max above).
 53 +
 54 +		online: cpus that are online and being scheduled.
 55 +
 56 +		possible: cpus that have been allocated resources and can be
 57 +		brought online if they are present.
 58 +
 59 +		present: cpus that have been identified as being present in
 60 +		the system.
 61 +
 62 +		See Documentation/cputopology.txt for more information.
 63 +
 64 +
 65 +
 66 + What:		/sys/devices/system/cpu/cpu#/node
 67 + Date:		October 2009
 68 + Contact:	Linux memory management mailing list <linux-mm@kvack.org>
 69 + Description:	Discover NUMA node a CPU belongs to
 70 +
 71 +		When CONFIG_NUMA is enabled, a symbolic link that points
 72 +		to the corresponding NUMA node directory.
 73 +
 74 +		For example, the following symlink is created for cpu42
 75 +		in NUMA node 2:
 76 +
 77 +		/sys/devices/system/cpu/cpu42/node2 -> ../../node/node2
 78 +
 79 +
 80 + What:		/sys/devices/system/cpu/cpu#/topology/core_id
 81 +		/sys/devices/system/cpu/cpu#/topology/core_siblings
 82 +		/sys/devices/system/cpu/cpu#/topology/core_siblings_list
 83 +		/sys/devices/system/cpu/cpu#/topology/physical_package_id
 84 +		/sys/devices/system/cpu/cpu#/topology/thread_siblings
 85 +		/sys/devices/system/cpu/cpu#/topology/thread_siblings_list
 86 + Date:		December 2008
 87 + Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 88 + Description:	CPU topology files that describe a logical CPU's relationship
 89 +		to other cores and threads in the same physical package.
 90 +
 91 +		One cpu# directory is created per logical CPU in the system,
 92 +		e.g. /sys/devices/system/cpu/cpu42/.
 93 +
 94 +		Briefly, the files above are:
 95 +
 96 +		core_id: the CPU core ID of cpu#. Typically it is the
 97 +		hardware platform's identifier (rather than the kernel's).
 98 +		The actual value is architecture and platform dependent.
 99 +
100 +		core_siblings: internal kernel map of cpu#'s hardware threads
101 +		within the same physical_package_id.
102 +
103 +		core_siblings_list: human-readable list of the logical CPU
104 +		numbers within the same physical_package_id as cpu#.
105 +
106 +		physical_package_id: physical package id of cpu#. Typically
107 +		corresponds to a physical socket number, but the actual value
108 +		is architecture and platform dependent.
109 +
110 +		thread_siblings: internal kernel map of cpu#'s hardware
111 +		threads within the same core as cpu#
112 +
113 +		thread_siblings_list: human-readable list of cpu#'s hardware
114 +		threads within the same core as cpu#
115 +
116 +		See Documentation/cputopology.txt for more information.
117 +
118 +
119 + What:		/sys/devices/system/cpu/cpuidle/current_driver
120 +		/sys/devices/system/cpu/cpuidle/current_governor_ro
121 + Date:		September 2007
122 + Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
123 + Description:	Discover cpuidle policy and mechanism
124 +
125 +		Various CPUs today support multiple idle levels that are
126 +		differentiated by varying exit latencies and power
127 +		consumption during idle.
128 +
129 +		Idle policy (governor) is differentiated from idle mechanism
130 +		(driver)
131 +
132 +		current_driver: displays current idle mechanism
133 +
134 +		current_governor_ro: displays current idle policy
135 +
136 +		See files in Documentation/cpuidle/ for more information.
137 +
138 +
139 + What:		/sys/devices/system/cpu/cpu*/cache/index*/cache_disable_X
140 + Date:		August 2008
141 + KernelVersion:	2.6.27
142 + Contact:	mark.langsdorf@amd.com
143 + Description:	These files exist in every cpu's cache index directories.
144 +		There are currently 2 cache_disable_# files in each
145 +		directory. Reading from these files on a supported
146 +		processor will return that cache disable index value
147 +		for that processor and node. Writing to one of these
148 +		files will cause the specified cache index to be disabled.
149 +
150 +		Currently, only AMD Family 10h Processors support cache index
151 +		disable, and only for their L3 caches. See the BIOS and
152 +		Kernel Developer's Guide at
153 +		http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/31116-Public-GH-BKDG_3.20_2-4-09.pdf
154 +		for formatting information and other details on the
155 +		cache index disable.
156 + Users:		joachim.deguara@amd.com
+30 -17
Documentation/cputopology.txt
···
  1   1 
  2     - Export cpu topology info via sysfs. Items (attributes) are similar
      2 + Export CPU topology info via sysfs. Items (attributes) are similar
  3   3 to /proc/cpuinfo.
  4   4 
  5   5 1) /sys/devices/system/cpu/cpuX/topology/physical_package_id:
  6     - represent the physical package id of cpu X;
      6 +
      7 + physical package id of cpuX. Typically corresponds to a physical
      8 + socket number, but the actual value is architecture and platform
      9 + dependent.
     10 +
  7  11 2) /sys/devices/system/cpu/cpuX/topology/core_id:
  8     - represent the cpu core id to cpu X;
     12 +
     13 + the CPU core ID of cpuX. Typically it is the hardware platform's
     14 + identifier (rather than the kernel's). The actual value is
     15 + architecture and platform dependent.
     16 +
  9  17 3) /sys/devices/system/cpu/cpuX/topology/thread_siblings:
 10     - represent the thread siblings to cpu X in the same core;
     18 +
     19 + internal kernel map of cpuX's hardware threads within the same
     20 + core as cpuX
     21 +
 11  22 4) /sys/devices/system/cpu/cpuX/topology/core_siblings:
 12     - represent the thread siblings to cpu X in the same physical package;
     23 +
     24 + internal kernel map of cpuX's hardware threads within the same
     25 + physical_package_id.
 13  26 
 14  27 To implement it in an architecture-neutral way, a new source file,
 15  28 drivers/base/topology.c, is to export the 4 attributes.
···
 45  32 3) thread_siblings: just the given CPU
 46  33 4) core_siblings: just the given CPU
 47  34 
 48     - Additionally, cpu topology information is provided under
     35 + Additionally, CPU topology information is provided under
 49  36 /sys/devices/system/cpu and includes these files. The internal
 50  37 source for the output is in brackets ("[]").
 51  38 
 52     - kernel_max: the maximum cpu index allowed by the kernel configuration.
     39 + kernel_max: the maximum CPU index allowed by the kernel configuration.
 53  40     [NR_CPUS-1]
 54  41 
 55     - offline: cpus that are not online because they have been
     42 + offline: CPUs that are not online because they have been
 56  43 HOTPLUGGED off (see cpu-hotplug.txt) or exceed the limit
 57     - of cpus allowed by the kernel configuration (kernel_max
     44 + of CPUs allowed by the kernel configuration (kernel_max
 58  45 above). [~cpu_online_mask + cpus >= NR_CPUS]
 59  46 
 60     - online: cpus that are online and being scheduled [cpu_online_mask]
     47 + online: CPUs that are online and being scheduled [cpu_online_mask]
 61  48 
 62     - possible: cpus that have been allocated resources and can be
     49 + possible: CPUs that have been allocated resources and can be
 63  50 brought online if they are present. [cpu_possible_mask]
 64  51 
 65     - present: cpus that have been identified as being present in the
     52 + present: CPUs that have been identified as being present in the
 66  53 system. [cpu_present_mask]
 67  54 
 68  55 The format for the above output is compatible with cpulist_parse()
 69  56 [see <linux/cpumask.h>]. Some examples follow.
 70  57 
 71     - In this example, there are 64 cpus in the system but cpus 32-63 exceed
     58 + In this example, there are 64 CPUs in the system but cpus 32-63 exceed
 72  59 the kernel max which is limited to 0..31 by the NR_CPUS config option
 73     - being 32. Note also that cpus 2 and 4-31 are not online but could be
     60 + being 32. Note also that CPUs 2 and 4-31 are not online but could be
 74  61 brought online as they are both present and possible.
 75  62 
 76  63 kernel_max: 31
···
 80  67 present: 0-31
 81  68 
 82  69 In this example, the NR_CPUS config option is 128, but the kernel was
 83     - started with possible_cpus=144. There are 4 cpus in the system and cpu2
     70 + started with possible_cpus=144. There are 4 CPUs in the system and cpu2
 84     - was manually taken offline (and is the only cpu that can be brought
     71 + was manually taken offline (and is the only CPU that can be brought
 85  72 online.)
 86  73 
 87  74 kernel_max: 127
···
 91  78 present: 0-3
 92  79 
 93  80 See cpu-hotplug.txt for the possible_cpus=NUM kernel start parameter
 94     - as well as more information on the various cpumask's.
     81 + as well as more information on the various cpumasks.
+11 -6
drivers/base/bus.c
···
 689 689 		printk(KERN_ERR "%s: driver_add_attrs(%s) failed\n",
 690 690 			__func__, drv->name);
 691 691 	}
 692     -	error = add_bind_files(drv);
 693     -	if (error) {
 694     -		/* Ditto */
 695     -		printk(KERN_ERR "%s: add_bind_files(%s) failed\n",
 696     -			__func__, drv->name);
     692 +
     693 +	if (!drv->suppress_bind_attrs) {
     694 +		error = add_bind_files(drv);
     695 +		if (error) {
     696 +			/* Ditto */
     697 +			printk(KERN_ERR "%s: add_bind_files(%s) failed\n",
     698 +				__func__, drv->name);
     699 +		}
 697 700 	}
 698 701 
 699 702 	kobject_uevent(&priv->kobj, KOBJ_ADD);
 700 703 	return 0;
     704 +
 701 705 out_unregister:
 702 706 	kfree(drv->p);
 703 707 	drv->p = NULL;
···
 724 720 	if (!drv->bus)
 725 721 		return;
 726 722 
 727     -	remove_bind_files(drv);
     723 +	if (!drv->suppress_bind_attrs)
     724 +		remove_bind_files(drv);
 728 725 	driver_remove_attrs(drv->bus, drv);
 729 726 	driver_remove_file(drv, &driver_attr_uevent);
 730 727 	klist_remove(&drv->p->knode_bus);
+1 -1
drivers/base/driver.c
···
 236 236 		put_driver(other);
 237 237 		printk(KERN_ERR "Error: Driver '%s' is already registered, "
 238 238 			"aborting...\n", drv->name);
 239     -		return -EEXIST;
     239 +		return -EBUSY;
 240 240 	}
 241 241 
 242 242 	ret = bus_add_driver(drv);
+5 -1
drivers/base/platform.c
···
 521 521 {
 522 522 	int retval, code;
 523 523 
     524 +	/* make sure driver won't have bind/unbind attributes */
     525 +	drv->driver.suppress_bind_attrs = true;
     526 +
 524 527 	/* temporary section violation during probe() */
 525 528 	drv->probe = probe;
 526 529 	retval = code = platform_driver_register(drv);
 527 530 
 528     -	/* Fixup that section violation, being paranoid about code scanning
     531 +	/*
     532 +	 * Fixup that section violation, being paranoid about code scanning
 529 533 	 * the list of drivers in order to probe new devices. Check to see
 530 534 	 * if the probe was successful, and make sure any forced probes of
 531 535 	 * new devices fail.
+3 -1
include/linux/device.h
···
 124 124 	struct bus_type		*bus;
 125 125 
 126 126 	struct module		*owner;
 127     -	const char *mod_name;	/* used for built-in modules */
     127 +	const char		*mod_name;	/* used for built-in modules */
     128 +
     129 +	bool suppress_bind_attrs;	/* disables bind/unbind via sysfs */
 128 130 
 129 131 	int (*probe) (struct device *dev);
 130 132 	int (*remove) (struct device *dev);