Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'driver-core-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core

Pull driver core updates from Greg KH:
"Here is the set of changes for the driver core for 5.17-rc1.

Lots of little things here, including:

- kobj_type cleanups

- auxiliary_bus documentation updates

- auxiliary_device conversions for some drivers (relevant subsystems
all have provided acks for these)

- kernfs lock contention reduction for some workloads

- other tiny cleanups and changes.

All of these have been in linux-next for a while with no reported
issues"

* tag 'driver-core-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (43 commits)
kobject documentation: remove default_attrs information
drivers/firmware: Add missing platform_device_put() in sysfb_create_simplefb
debugfs: lockdown: Allow reading debugfs files that are not world readable
driver core: Make bus notifiers in right order in really_probe()
driver core: Move driver_sysfs_remove() after driver_sysfs_add()
firmware: edd: remove empty default_attrs array
firmware: dmi-sysfs: use default_groups in kobj_type
qemu_fw_cfg: use default_groups in kobj_type
firmware: memmap: use default_groups in kobj_type
sh: sq: use default_groups in kobj_type
headers/uninline: Uninline single-use function: kobject_has_children()
devtmpfs: mount with noexec and nosuid
driver core: Simplify async probe test code by using ktime_ms_delta()
nilfs2: use default_groups in kobj_type
kobject: remove kset from struct kset_uevent_ops callbacks
driver core: make kobj_type constant.
driver core: platform: document registration-failure requirement
vdpa/mlx5: Use auxiliary_device driver data helpers
net/mlx5e: Use auxiliary_device driver data helpers
soundwire: intel: Use auxiliary_device driver data helpers
...

+1211 -749
+15
Documentation/ABI/testing/sysfs-devices-system-cpu
···
 ================ ==============================================

 See also: Documentation/arm64/memory-tagging-extension.rst
+
+What:		/sys/devices/system/cpu/nohz_full
+Date:		Apr 2015
+Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+Description:
+		(RO) the list of CPUs that are in nohz_full mode.
+		These CPUs are set by boot parameter "nohz_full=".
+
+What:		/sys/devices/system/cpu/isolated
+Date:		Apr 2015
+Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+Description:
+		(RO) the list of CPUs that are isolated and don't
+		participate in load balancing. These CPUs are set by
+		boot parameter "isolcpus=".
+11 -14
Documentation/admin-guide/cputopology.rst
···
 Documentation/ABI/stable/sysfs-devices-system-cpu.

 Architecture-neutral, drivers/base/topology.c, exports these attributes.
-However, the book and drawer related sysfs files will only be created if
-CONFIG_SCHED_BOOK and CONFIG_SCHED_DRAWER are selected, respectively.
-
-CONFIG_SCHED_BOOK and CONFIG_SCHED_DRAWER are currently only used on s390,
-where they reflect the cpu and cache hierarchy.
+However the die, cluster, book, and drawer hierarchy related sysfs files will
+only be created if an architecture provides the related macros as described
+below.

 For an architecture to support this feature, it must define some of
 these macros in include/asm-XXX/topology.h::
···
 2) topology_die_id: -1
 3) topology_cluster_id: -1
 4) topology_core_id: 0
-5) topology_sibling_cpumask: just the given CPU
-6) topology_core_cpumask: just the given CPU
-7) topology_cluster_cpumask: just the given CPU
-8) topology_die_cpumask: just the given CPU
-
-For architectures that don't support books (CONFIG_SCHED_BOOK) there are no
-default definitions for topology_book_id() and topology_book_cpumask().
-For architectures that don't support drawers (CONFIG_SCHED_DRAWER) there are
-no default definitions for topology_drawer_id() and topology_drawer_cpumask().
+5) topology_book_id: -1
+6) topology_drawer_id: -1
+7) topology_sibling_cpumask: just the given CPU
+8) topology_core_cpumask: just the given CPU
+9) topology_cluster_cpumask: just the given CPU
+10) topology_die_cpumask: just the given CPU
+11) topology_book_cpumask: just the given CPU
+12) topology_drawer_cpumask: just the given CPU

 Additionally, CPU topology information is provided under
 /sys/devices/system/cpu and includes these files. The internal
+7 -9
Documentation/core-api/kobject.rst
···
 Code which creates a kobject must, of course, initialize that object. Some
 of the internal fields are setup with a (mandatory) call to kobject_init()::

-    void kobject_init(struct kobject *kobj, struct kobj_type *ktype);
+    void kobject_init(struct kobject *kobj, const struct kobj_type *ktype);

 The ktype is required for a kobject to be created properly, as every kobject
 must have an associated kobj_type. After calling kobject_init(), to
···
 There is a helper function to both initialize and add the kobject to the
 kernel at the same time, called surprisingly enough kobject_init_and_add()::

-    int kobject_init_and_add(struct kobject *kobj, struct kobj_type *ktype,
+    int kobject_init_and_add(struct kobject *kobj, const struct kobj_type *ktype,
                              struct kobject *parent, const char *fmt, ...);

 The arguments are the same as the individual kobject_init() and
···
     struct kobj_type {
             void (*release)(struct kobject *kobj);
             const struct sysfs_ops *sysfs_ops;
-            struct attribute **default_attrs;
             const struct attribute_group **default_groups;
             const struct kobj_ns_type_operations *(*child_ns_type)(struct kobject *kobj);
             const void *(*namespace)(struct kobject *kobj);
···
 The release field in struct kobj_type is, of course, a pointer to the
 release() method for this type of kobject. The other two fields (sysfs_ops
-and default_attrs) control how objects of this type are represented in
+and default_groups) control how objects of this type are represented in
 sysfs; they are beyond the scope of this document.

-The default_attrs pointer is a list of default attributes that will be
+The default_groups pointer is a list of default attributes that will be
 automatically created for any kobject that is registered with this ktype.

···
 associated with it, it can use the struct kset_uevent_ops to handle it::

     struct kset_uevent_ops {
-            int (* const filter)(struct kset *kset, struct kobject *kobj);
-            const char *(* const name)(struct kset *kset, struct kobject *kobj);
-            int (* const uevent)(struct kset *kset, struct kobject *kobj,
-                                 struct kobj_uevent_env *env);
+            int (* const filter)(struct kobject *kobj);
+            const char *(* const name)(struct kobject *kobj);
+            int (* const uevent)(struct kobject *kobj, struct kobj_uevent_env *env);
     };
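The kset_uevent_ops change above drops the `struct kset *` argument from all three callbacks; a callback that still needs its kset can reach it through the kobject's own `kset` back-pointer. A minimal userspace sketch of that migration, with mock `kobject`/`kset` shapes standing in for the kernel's definitions (the `demo_filter` name and reduced structs are illustrative assumptions, not kernel code):

```c
#include <stddef.h>
#include <string.h>

/* Mock types -- reduced stand-ins, NOT the kernel definitions. */
struct kset;

struct kobject {
	const char *name;
	struct kset *kset;	/* set when the kobject is added to a kset */
};

struct kset {
	const char *name;
};

/* Post-change callback shape: no kset parameter.  A filter that still
 * needs the kset recovers it from the kobject's back-pointer. */
static int demo_filter(struct kobject *kobj)
{
	return kobj->kset && strcmp(kobj->kset->name, "devices") == 0;
}
```

The same substitution applies to the name() and uevent() callbacks: every former use of the `kset` parameter becomes `kobj->kset`.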
+19 -205
Documentation/driver-api/auxiliary_bus.rst
···
 Auxiliary Bus
 =============

-In some subsystems, the functionality of the core device (PCI/ACPI/other) is
-too complex for a single device to be managed by a monolithic driver
-(e.g. Sound Open Firmware), multiple devices might implement a common
-intersection of functionality (e.g. NICs + RDMA), or a driver may want to
-export an interface for another subsystem to drive (e.g. SIOV Physical Function
-export Virtual Function management). A split of the functionality into child-
-devices representing sub-domains of functionality makes it possible to
-compartmentalize, layer, and distribute domain-specific concerns via a Linux
-device-driver model.
-
-An example for this kind of requirement is the audio subsystem where a single
-IP is handling multiple entities such as HDMI, Soundwire, local devices such as
-mics/speakers etc. The split for the core's functionality can be arbitrary or
-be defined by the DSP firmware topology and include hooks for test/debug. This
-allows for the audio core device to be minimal and focused on hardware-specific
-control and communication.
-
-Each auxiliary_device represents a part of its parent functionality. The
-generic behavior can be extended and specialized as needed by encapsulating an
-auxiliary_device within other domain-specific structures and the use of .ops
-callbacks. Devices on the auxiliary bus do not share any structures and the use
-of a communication channel with the parent is domain-specific.
-
-Note that ops are intended as a way to augment instance behavior within a class
-of auxiliary devices, it is not the mechanism for exporting common
-infrastructure from the parent. Consider EXPORT_SYMBOL_NS() to convey
-infrastructure from the parent module to the auxiliary module(s).
-
+.. kernel-doc:: drivers/base/auxiliary.c
+   :doc: PURPOSE

 When Should the Auxiliary Bus Be Used
 =====================================

-The auxiliary bus is to be used when a driver and one or more kernel modules,
-who share a common header file with the driver, need a mechanism to connect and
-provide access to a shared object allocated by the auxiliary_device's
-registering driver. The registering driver for the auxiliary_device(s) and the
-kernel module(s) registering auxiliary_drivers can be from the same subsystem,
-or from multiple subsystems.
-
-The emphasis here is on a common generic interface that keeps subsystem
-customization out of the bus infrastructure.
-
-One example is a PCI network device that is RDMA-capable and exports a child
-device to be driven by an auxiliary_driver in the RDMA subsystem. The PCI
-driver allocates and registers an auxiliary_device for each physical
-function on the NIC. The RDMA driver registers an auxiliary_driver that claims
-each of these auxiliary_devices. This conveys data/ops published by the parent
-PCI device/driver to the RDMA auxiliary_driver.
-
-Another use case is for the PCI device to be split out into multiple sub
-functions. For each sub function an auxiliary_device is created. A PCI sub
-function driver binds to such devices that creates its own one or more class
-devices. A PCI sub function auxiliary device is likely to be contained in a
-struct with additional attributes such as user defined sub function number and
-optional attributes such as resources and a link to the parent device. These
-attributes could be used by systemd/udev; and hence should be initialized
-before a driver binds to an auxiliary_device.
-
-A key requirement for utilizing the auxiliary bus is that there is no
-dependency on a physical bus, device, register accesses or regmap support.
-These individual devices split from the core cannot live on the platform bus as
-they are not physical devices that are controlled by DT/ACPI. The same
-argument applies for not using MFD in this scenario as MFD relies on individual
-function devices being physical devices.
-
-Auxiliary Device
-================
-
-An auxiliary_device represents a part of its parent device's functionality. It
-is given a name that, combined with the registering drivers KBUILD_MODNAME,
-creates a match_name that is used for driver binding, and an id that combined
-with the match_name provide a unique name to register with the bus subsystem.
-
-Registering an auxiliary_device is a two-step process. First call
-auxiliary_device_init(), which checks several aspects of the auxiliary_device
-struct and performs a device_initialize(). After this step completes, any
-error state must have a call to auxiliary_device_uninit() in its resolution path.
-The second step in registering an auxiliary_device is to perform a call to
-auxiliary_device_add(), which sets the name of the device and add the device to
-the bus.
-
-Unregistering an auxiliary_device is also a two-step process to mirror the
-register process. First call auxiliary_device_delete(), then call
-auxiliary_device_uninit().
-
-.. code-block:: c
-
-        struct auxiliary_device {
-                struct device dev;
-                const char *name;
-                u32 id;
-        };
-
-If two auxiliary_devices both with a match_name "mod.foo" are registered onto
-the bus, they must have unique id values (e.g. "x" and "y") so that the
-registered devices names are "mod.foo.x" and "mod.foo.y". If match_name + id
-are not unique, then the device_add fails and generates an error message.
-
-The auxiliary_device.dev.type.release or auxiliary_device.dev.release must be
-populated with a non-NULL pointer to successfully register the auxiliary_device.
-
-The auxiliary_device.dev.parent must also be populated.
+.. kernel-doc:: drivers/base/auxiliary.c
+   :doc: USAGE
+
+
+Auxiliary Device Creation
+=========================
+
+.. kernel-doc:: include/linux/auxiliary_bus.h
+   :identifiers: auxiliary_device
+
+.. kernel-doc:: drivers/base/auxiliary.c
+   :identifiers: auxiliary_device_init __auxiliary_device_add
+                 auxiliary_find_device

 Auxiliary Device Memory Model and Lifespan
 ------------------------------------------

-The registering driver is the entity that allocates memory for the
-auxiliary_device and register it on the auxiliary bus. It is important to note
-that, as opposed to the platform bus, the registering driver is wholly
-responsible for the management for the memory used for the driver object.
-
-A parent object, defined in the shared header file, contains the
-auxiliary_device. It also contains a pointer to the shared object(s), which
-also is defined in the shared header. Both the parent object and the shared
-object(s) are allocated by the registering driver. This layout allows the
-auxiliary_driver's registering module to perform a container_of() call to go
-from the pointer to the auxiliary_device, that is passed during the call to the
-auxiliary_driver's probe function, up to the parent object, and then have
-access to the shared object(s).
-
-The memory for the auxiliary_device is freed only in its release() callback
-flow as defined by its registering driver.
-
-The memory for the shared object(s) must have a lifespan equal to, or greater
-than, the lifespan of the memory for the auxiliary_device. The auxiliary_driver
-should only consider that this shared object is valid as long as the
-auxiliary_device is still registered on the auxiliary bus. It is up to the
-registering driver to manage (e.g. free or keep available) the memory for the
-shared object beyond the life of the auxiliary_device.
-
-The registering driver must unregister all auxiliary devices before its own
-driver.remove() is completed.
+.. kernel-doc:: include/linux/auxiliary_bus.h
+   :doc: DEVICE_LIFESPAN

 Auxiliary Drivers
 =================

-Auxiliary drivers follow the standard driver model convention, where
-discovery/enumeration is handled by the core, and drivers
-provide probe() and remove() methods. They support power management
-and shutdown notifications using the standard conventions.
-
-.. code-block:: c
-
-        struct auxiliary_driver {
-                int (*probe)(struct auxiliary_device *,
-                             const struct auxiliary_device_id *id);
-                void (*remove)(struct auxiliary_device *);
-                void (*shutdown)(struct auxiliary_device *);
-                int (*suspend)(struct auxiliary_device *, pm_message_t);
-                int (*resume)(struct auxiliary_device *);
-                struct device_driver driver;
-                const struct auxiliary_device_id *id_table;
-        };
-
-Auxiliary drivers register themselves with the bus by calling
-auxiliary_driver_register(). The id_table contains the match_names of auxiliary
-devices that a driver can bind with.
+.. kernel-doc:: include/linux/auxiliary_bus.h
+   :identifiers: auxiliary_driver module_auxiliary_driver
+
+.. kernel-doc:: drivers/base/auxiliary.c
+   :identifiers: __auxiliary_driver_register auxiliary_driver_unregister

 Example Usage
 =============

-Auxiliary devices are created and registered by a subsystem-level core device
-that needs to break up its functionality into smaller fragments. One way to
-extend the scope of an auxiliary_device is to encapsulate it within a domain-
-specific structure defined by the parent device. This structure contains the
-auxiliary_device and any associated shared data/callbacks needed to establish
-the connection with the parent.
-
-An example is:
-
-.. code-block:: c
-
-        struct foo {
-                struct auxiliary_device auxdev;
-                void (*connect)(struct auxiliary_device *auxdev);
-                void (*disconnect)(struct auxiliary_device *auxdev);
-                void *data;
-        };
-
-The parent device then registers the auxiliary_device by calling
-auxiliary_device_init(), and then auxiliary_device_add(), with the pointer to
-the auxdev member of the above structure. The parent provides a name for the
-auxiliary_device that, combined with the parent's KBUILD_MODNAME, creates a
-match_name that is used for matching and binding with a driver.
-
-Whenever an auxiliary_driver is registered, based on the match_name, the
-auxiliary_driver's probe() is invoked for the matching devices. The
-auxiliary_driver can also be encapsulated inside custom drivers that make the
-core device's functionality extensible by adding additional domain-specific ops
-as follows:
-
-.. code-block:: c
-
-        struct my_ops {
-                void (*send)(struct auxiliary_device *auxdev);
-                void (*receive)(struct auxiliary_device *auxdev);
-        };
-
-
-        struct my_driver {
-                struct auxiliary_driver auxiliary_drv;
-                const struct my_ops ops;
-        };
-
-An example of this type of usage is:
-
-.. code-block:: c
-
-        const struct auxiliary_device_id my_auxiliary_id_table[] = {
-                { .name = "foo_mod.foo_dev" },
-                { },
-        };
-
-        const struct my_ops my_custom_ops = {
-                .send = my_tx,
-                .receive = my_rx,
-        };
-
-        const struct my_driver my_drv = {
-                .auxiliary_drv = {
-                        .name = "myauxiliarydrv",
-                        .id_table = my_auxiliary_id_table,
-                        .probe = my_probe,
-                        .remove = my_remove,
-                        .shutdown = my_shutdown,
-                },
-                .ops = my_custom_ops,
-        };
+.. kernel-doc:: drivers/base/auxiliary.c
+   :doc: EXAMPLE
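The memory-model text moved out of this file hinges on one pattern: the auxiliary driver's probe() walks from the auxiliary_device it is handed back up to the parent object that embeds it, via container_of(). A minimal userspace sketch of that walk, with stand-in struct shapes (the `foo`/`shared_data` names are illustrative, and container_of is spelled out here so the sketch builds outside the kernel):

```c
#include <stddef.h>

/* container_of in the spirit of the kernel's <linux/container_of.h>,
 * spelled out so this builds in userspace. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Userspace stand-in -- the real struct lives in <linux/auxiliary_bus.h>. */
struct auxiliary_device {
	const char *name;
	unsigned int id;
};

/* Parent object from the shared header: embeds the auxiliary_device and
 * carries the shared data the auxiliary driver needs. */
struct foo {
	struct auxiliary_device auxdev;
	int shared_data;
};

/* What an auxiliary_driver probe() does: recover the parent object from
 * the embedded auxiliary_device pointer it was passed. */
static int probe_reads_shared_data(struct auxiliary_device *auxdev)
{
	struct foo *parent = container_of(auxdev, struct foo, auxdev);

	return parent->shared_data;
}
```

This is why the shared object's lifespan must cover the auxiliary_device's: the pointer arithmetic is only a cast, so nothing stops probe() from reaching freed parent memory.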
+5 -7
Documentation/translations/zh_CN/core-api/kobject.rst
···
     struct kobj_type {
             void (*release)(struct kobject *kobj);
             const struct sysfs_ops *sysfs_ops;
-            struct attribute **default_attrs;
             const struct attribute_group **default_groups;
             const struct kobj_ns_type_operations *(*child_ns_type)(struct kobject *kobj);
             const void *(*namespace)(struct kobject *kobj);
···
 指针。

 当然,kobj_type结构中的release字段是指向这种类型的kobject的release()
-方法的一个指针。另外两个字段(sysfs_ops 和 default_attrs)控制这种
+方法的一个指针。另外两个字段(sysfs_ops 和 default_groups)控制这种
 类型的对象如何在 sysfs 中被表示;它们超出了本文的范围。

-default_attrs 指针是一个默认属性的列表,它将为任何用这个 ktype 注册
+default_groups 指针是一个默认属性的列表,它将为任何用这个 ktype 注册
 的 kobject 自动创建。

···
 结构体kset_uevent_ops来处理它::

     struct kset_uevent_ops {
-            int (* const filter)(struct kset *kset, struct kobject *kobj);
-            const char *(* const name)(struct kset *kset, struct kobject *kobj);
-            int (* const uevent)(struct kset *kset, struct kobject *kobj,
-                                 struct kobj_uevent_env *env);
+            int (* const filter)(struct kobject *kobj);
+            const char *(* const name)(struct kobject *kobj);
+            int (* const uevent)(struct kobject *kobj, struct kobj_uevent_env *env);
     };
+8 -4
MAINTAINERS
···
 F:	drivers/mfd/intel_soc_pmic*
 F:	include/linux/mfd/intel_soc_pmic*

-INTEL PMT DRIVER
-M:	"David E. Box" <david.e.box@linux.intel.com>
-S:	Maintained
-F:	drivers/mfd/intel_pmt.c
+INTEL PMT DRIVERS
+M:	David E. Box <david.e.box@linux.intel.com>
+S:	Supported
 F:	drivers/platform/x86/intel/pmt/

 INTEL PRO/WIRELESS 2100, 2200BG, 2915ABG NETWORK CONNECTION SUPPORT
···
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 F:	drivers/platform/x86/intel/uncore-frequency.c
+
+INTEL VENDOR SPECIFIC EXTENDED CAPABILITIES DRIVER
+M:	David E. Box <david.e.box@linux.intel.com>
+S:	Supported
+F:	drivers/platform/x86/intel/vsec.*

 INTEL VIRTUAL BUTTON DRIVER
 M:	AceLan Kao <acelan.kao@canonical.com>
+2 -1
arch/sh/kernel/cpu/sh4/sq.c
···
 	&mapping_attr.attr,
 	NULL,
 };
+ATTRIBUTE_GROUPS(sq_sysfs);

 static const struct sysfs_ops sq_sysfs_ops = {
 	.show	= sq_sysfs_show,
···
 static struct kobj_type ktype_percpu_entry = {
 	.sysfs_ops	= &sq_sysfs_ops,
-	.default_attrs	= sq_sysfs_attrs,
+	.default_groups	= sq_sysfs_groups,
 };

 static int sq_dev_add(struct device *dev, struct subsys_interface *sif)
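The sq.c hunk is the same mechanical pattern used by the other "use default_groups in kobj_type" commits in this pull: wrap the existing attribute array with ATTRIBUTE_GROUPS() and point default_groups at the generated `*_groups` array. A rough userspace sketch of what the macro provides (struct shapes reduced to mocks; the real macro lives in include/linux/sysfs.h and differs in detail):

```c
#include <stddef.h>

/* Mock sysfs types -- reduced to what the macro sketch needs. */
struct attribute { const char *name; };
struct attribute_group { const struct attribute **attrs; };

/* Roughly what ATTRIBUTE_GROUPS(name) expands to: one group wrapping
 * name##_attrs, plus a NULL-terminated name##_groups array suitable
 * for kobj_type.default_groups. */
#define ATTRIBUTE_GROUPS(_name)						\
	static const struct attribute_group _name##_group = {		\
		.attrs = _name##_attrs,					\
	};								\
	static const struct attribute_group *_name##_groups[] = {	\
		&_name##_group,						\
		NULL,							\
	}

static const struct attribute sq_attr = { .name = "mapping" };
static const struct attribute *sq_sysfs_attrs[] = { &sq_attr, NULL };
ATTRIBUTE_GROUPS(sq_sysfs);
```

The conversion is source-compatible for the attribute array itself; only the kobj_type field changes from the raw array to the generated groups array.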
+11
drivers/base/Kconfig
···
 	  rescue mode with init=/bin/sh, even when the /dev directory
 	  on the rootfs is completely empty.

+config DEVTMPFS_SAFE
+	bool "Use nosuid,noexec mount options on devtmpfs"
+	depends on DEVTMPFS
+	help
+	  This instructs the kernel to include the MS_NOEXEC and MS_NOSUID mount
+	  flags when mounting devtmpfs.
+
+	  Notice: If enabled, things like /dev/mem cannot be mmapped
+	  with the PROT_EXEC flag. This can break, for example, non-KMS
+	  video drivers.
+
 config STANDALONE
 	bool "Select only drivers that don't need compile-time external firmware"
 	default y
+150 -2
drivers/base/auxiliary.c
···
 #include <linux/auxiliary_bus.h>
 #include "base.h"

+/**
+ * DOC: PURPOSE
+ *
+ * In some subsystems, the functionality of the core device (PCI/ACPI/other)
+ * is too complex for a single device to be managed by a monolithic driver
+ * (e.g. Sound Open Firmware), multiple devices might implement a common
+ * intersection of functionality (e.g. NICs + RDMA), or a driver may want to
+ * export an interface for another subsystem to drive (e.g. SIOV Physical
+ * Function export Virtual Function management). A split of the functionality
+ * into child-devices representing sub-domains of functionality makes it
+ * possible to compartmentalize, layer, and distribute domain-specific
+ * concerns via a Linux device-driver model.
+ *
+ * An example for this kind of requirement is the audio subsystem where a
+ * single IP is handling multiple entities such as HDMI, Soundwire, local
+ * devices such as mics/speakers etc. The split for the core's functionality
+ * can be arbitrary or be defined by the DSP firmware topology and include
+ * hooks for test/debug. This allows for the audio core device to be minimal
+ * and focused on hardware-specific control and communication.
+ *
+ * Each auxiliary_device represents a part of its parent functionality. The
+ * generic behavior can be extended and specialized as needed by encapsulating
+ * an auxiliary_device within other domain-specific structures and the use of
+ * .ops callbacks. Devices on the auxiliary bus do not share any structures
+ * and the use of a communication channel with the parent is domain-specific.
+ *
+ * Note that ops are intended as a way to augment instance behavior within a
+ * class of auxiliary devices, it is not the mechanism for exporting common
+ * infrastructure from the parent. Consider EXPORT_SYMBOL_NS() to convey
+ * infrastructure from the parent module to the auxiliary module(s).
+ */
+
+/**
+ * DOC: USAGE
+ *
+ * The auxiliary bus is to be used when a driver and one or more kernel
+ * modules, who share a common header file with the driver, need a mechanism
+ * to connect and provide access to a shared object allocated by the
+ * auxiliary_device's registering driver. The registering driver for the
+ * auxiliary_device(s) and the kernel module(s) registering auxiliary_drivers
+ * can be from the same subsystem, or from multiple subsystems.
+ *
+ * The emphasis here is on a common generic interface that keeps subsystem
+ * customization out of the bus infrastructure.
+ *
+ * One example is a PCI network device that is RDMA-capable and exports a
+ * child device to be driven by an auxiliary_driver in the RDMA subsystem.
+ * The PCI driver allocates and registers an auxiliary_device for each
+ * physical function on the NIC. The RDMA driver registers an
+ * auxiliary_driver that claims each of these auxiliary_devices. This conveys
+ * data/ops published by the parent PCI device/driver to the RDMA
+ * auxiliary_driver.
+ *
+ * Another use case is for the PCI device to be split out into multiple sub
+ * functions. For each sub function an auxiliary_device is created. A PCI sub
+ * function driver binds to such devices that creates its own one or more
+ * class devices. A PCI sub function auxiliary device is likely to be
+ * contained in a struct with additional attributes such as user defined sub
+ * function number and optional attributes such as resources and a link to
+ * the parent device. These attributes could be used by systemd/udev; and
+ * hence should be initialized before a driver binds to an auxiliary_device.
+ *
+ * A key requirement for utilizing the auxiliary bus is that there is no
+ * dependency on a physical bus, device, register accesses or regmap support.
+ * These individual devices split from the core cannot live on the platform
+ * bus as they are not physical devices that are controlled by DT/ACPI. The
+ * same argument applies for not using MFD in this scenario as MFD relies on
+ * individual function devices being physical devices.
+ */
+
+/**
+ * DOC: EXAMPLE
+ *
+ * Auxiliary devices are created and registered by a subsystem-level core
+ * device that needs to break up its functionality into smaller fragments.
+ * One way to extend the scope of an auxiliary_device is to encapsulate it
+ * within a domain-specific structure defined by the parent device. This
+ * structure contains the auxiliary_device and any associated shared
+ * data/callbacks needed to establish the connection with the parent.
+ *
+ * An example is:
+ *
+ * .. code-block:: c
+ *
+ *         struct foo {
+ *                 struct auxiliary_device auxdev;
+ *                 void (*connect)(struct auxiliary_device *auxdev);
+ *                 void (*disconnect)(struct auxiliary_device *auxdev);
+ *                 void *data;
+ *         };
+ *
+ * The parent device then registers the auxiliary_device by calling
+ * auxiliary_device_init(), and then auxiliary_device_add(), with the pointer
+ * to the auxdev member of the above structure. The parent provides a name
+ * for the auxiliary_device that, combined with the parent's KBUILD_MODNAME,
+ * creates a match_name that is used for matching and binding with a driver.
+ *
+ * Whenever an auxiliary_driver is registered, based on the match_name, the
+ * auxiliary_driver's probe() is invoked for the matching devices. The
+ * auxiliary_driver can also be encapsulated inside custom drivers that make
+ * the core device's functionality extensible by adding additional
+ * domain-specific ops as follows:
+ *
+ * .. code-block:: c
+ *
+ *         struct my_ops {
+ *                 void (*send)(struct auxiliary_device *auxdev);
+ *                 void (*receive)(struct auxiliary_device *auxdev);
+ *         };
+ *
+ *
+ *         struct my_driver {
+ *                 struct auxiliary_driver auxiliary_drv;
+ *                 const struct my_ops ops;
+ *         };
+ *
+ * An example of this type of usage is:
+ *
+ * .. code-block:: c
+ *
+ *         const struct auxiliary_device_id my_auxiliary_id_table[] = {
+ *                 { .name = "foo_mod.foo_dev" },
+ *                 { },
+ *         };
+ *
+ *         const struct my_ops my_custom_ops = {
+ *                 .send = my_tx,
+ *                 .receive = my_rx,
+ *         };
+ *
+ *         const struct my_driver my_drv = {
+ *                 .auxiliary_drv = {
+ *                         .name = "myauxiliarydrv",
+ *                         .id_table = my_auxiliary_id_table,
+ *                         .probe = my_probe,
+ *                         .remove = my_remove,
+ *                         .shutdown = my_shutdown,
+ *                 },
+ *                 .ops = my_custom_ops,
+ *         };
+ */
+
 static const struct auxiliary_device_id *auxiliary_match_id(const struct auxiliary_device_id *id,
 							    const struct auxiliary_device *auxdev)
 {
···
  * auxiliary_device_init - check auxiliary_device and initialize
  * @auxdev: auxiliary device struct
  *
- * This is the first step in the two-step process to register an
+ * This is the second step in the three-step process to register an
  * auxiliary_device.
  *
  * When this function returns an error code, then the device_initialize will
···
  * @auxdev: auxiliary bus device to add to the bus
  * @modname: name of the parent device's driver module
  *
- * This is the second step in the two-step process to register an
+ * This is the third step in the three-step process to register an
  * auxiliary_device.
  *
  * This function must be called after a successful call to
···
  * This function returns a reference to a device that is 'found'
  * for later use, as determined by the @match callback.
  *
+ * The reference returned should be released with put_device().
+ *
  * The callback should return 0 if the device doesn't match and non-zero
  * if it does. If the callback returns non-zero, this function will
  * return to the caller and not iterate over any more devices.
···
  * @auxdrv: auxiliary_driver structure
  * @owner: owning module/driver
  * @modname: KBUILD_MODNAME for parent driver
+ *
+ * The expectation is that users will call the "auxiliary_driver_register"
+ * macro so that the caller's KBUILD_MODNAME is automatically inserted for the
+ * modname parameter. Only if a user requires a custom name would this version
+ * be called directly.
  */
 int __auxiliary_driver_register(struct auxiliary_driver *auxdrv,
 				struct module *owner, const char *modname)
+2 -2
drivers/base/bus.c
···
 	.release	= bus_release,
 };

-static int bus_uevent_filter(struct kset *kset, struct kobject *kobj)
+static int bus_uevent_filter(struct kobject *kobj)
 {
-	struct kobj_type *ktype = get_ktype(kobj);
+	const struct kobj_type *ktype = get_ktype(kobj);

 	if (ktype == &bus_ktype)
 		return 1;
+23 -7
drivers/base/core.c
··· 2260 2260 }; 2261 2261 2262 2262 2263 - static int dev_uevent_filter(struct kset *kset, struct kobject *kobj) 2263 + static int dev_uevent_filter(struct kobject *kobj) 2264 2264 { 2265 - struct kobj_type *ktype = get_ktype(kobj); 2265 + const struct kobj_type *ktype = get_ktype(kobj); 2266 2266 2267 2267 if (ktype == &device_ktype) { 2268 2268 struct device *dev = kobj_to_dev(kobj); ··· 2274 2274 return 0; 2275 2275 } 2276 2276 2277 - static const char *dev_uevent_name(struct kset *kset, struct kobject *kobj) 2277 + static const char *dev_uevent_name(struct kobject *kobj) 2278 2278 { 2279 2279 struct device *dev = kobj_to_dev(kobj); 2280 2280 ··· 2285 2285 return NULL; 2286 2286 } 2287 2287 2288 - static int dev_uevent(struct kset *kset, struct kobject *kobj, 2289 - struct kobj_uevent_env *env) 2288 + static int dev_uevent(struct kobject *kobj, struct kobj_uevent_env *env) 2290 2289 { 2291 2290 struct device *dev = kobj_to_dev(kobj); 2292 2291 int retval = 0; ··· 2380 2381 2381 2382 /* respect filter */ 2382 2383 if (kset->uevent_ops && kset->uevent_ops->filter) 2383 - if (!kset->uevent_ops->filter(kset, &dev->kobj)) 2384 + if (!kset->uevent_ops->filter(&dev->kobj)) 2384 2385 goto out; 2385 2386 2386 2387 env = kzalloc(sizeof(struct kobj_uevent_env), GFP_KERNEL); ··· 2388 2389 return -ENOMEM; 2389 2390 2390 2391 /* let the kset specific function add its keys */ 2391 - retval = kset->uevent_ops->uevent(kset, &dev->kobj, env); 2392 + retval = kset->uevent_ops->uevent(&dev->kobj, env); 2392 2393 if (retval) 2393 2394 goto out; 2394 2395 ··· 3025 3026 static inline struct kobject *get_glue_dir(struct device *dev) 3026 3027 { 3027 3028 return dev->kobj.parent; 3029 + } 3030 + 3031 + /** 3032 + * kobject_has_children - Returns whether a kobject has children. 3033 + * @kobj: the object to test 3034 + * 3035 + * This will return whether a kobject has other kobjects as children. 
3036 + * 3037 + * It does NOT account for the presence of attribute files, only sub 3038 + * directories. It also assumes there is no concurrent addition or 3039 + * removal of such children, and thus relies on external locking. 3040 + */ 3041 + static inline bool kobject_has_children(struct kobject *kobj) 3042 + { 3043 + WARN_ON_ONCE(kref_read(&kobj->kref) == 0); 3044 + 3045 + return kobj->sd && kobj->sd->dir.subdirs; 3028 3046 } 3029 3047 3030 3048 /*
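The newly uninlined `kobject_has_children()` only inspects the sysfs directory node: a kobject counts as having children iff its `kernfs_node` exists and records subdirectories. A userspace sketch of that check, with stub structs modeling just the fields touched (the real types are `kernfs_node` and `kobject` from the kernel headers):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stubs for the two fields the helper reads. */
struct kernfs_node { struct { unsigned int subdirs; } dir; };
struct kobject { struct kernfs_node *sd; };

/* True iff the sysfs node exists and has subdirectories; attribute
 * files alone do not count, and the caller must hold a reference. */
static bool kobject_has_children(const struct kobject *kobj)
{
	return kobj->sd && kobj->sd->dir.subdirs;
}

static bool check_subdirs(unsigned int subdirs)
{
	struct kernfs_node sd = { .dir = { .subdirs = subdirs } };
	struct kobject k = { .sd = &sd };

	return kobject_has_children(&k);
}

static bool check_no_sysfs_node(void)
{
	struct kobject k = { .sd = NULL };

	return kobject_has_children(&k);
}
```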
+4 -3
drivers/base/dd.c
··· 577 577 if (dev->bus->dma_configure) { 578 578 ret = dev->bus->dma_configure(dev); 579 579 if (ret) 580 - goto probe_failed; 580 + goto pinctrl_bind_failed; 581 581 } 582 582 583 583 ret = driver_sysfs_add(dev); 584 584 if (ret) { 585 585 pr_err("%s: driver_sysfs_add(%s) failed\n", 586 586 __func__, dev_name(dev)); 587 - goto probe_failed; 587 + goto sysfs_failed; 588 588 } 589 589 590 590 if (dev->pm_domain && dev->pm_domain->activate) { ··· 657 657 else if (drv->remove) 658 658 drv->remove(dev); 659 659 probe_failed: 660 + driver_sysfs_remove(dev); 661 + sysfs_failed: 660 662 if (dev->bus) 661 663 blocking_notifier_call_chain(&dev->bus->p->bus_notifier, 662 664 BUS_NOTIFY_DRIVER_NOT_BOUND, dev); ··· 668 666 arch_teardown_dma_ops(dev); 669 667 kfree(dev->dma_range_map); 670 668 dev->dma_range_map = NULL; 671 - driver_sysfs_remove(dev); 672 669 dev->driver = NULL; 673 670 dev_set_drvdata(dev, NULL); 674 671 if (dev->pm_domain && dev->pm_domain->dismiss)
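The dd.c reordering is the classic goto-unwind discipline: each failure label must undo exactly the steps that already succeeded, which is why `driver_sysfs_remove()` now sits behind its own label reached only after `driver_sysfs_add()` has run. A self-contained sketch of the pattern with two fake setup steps (names are ours, not kernel APIs):

```c
#include <assert.h>

static int step_a_done, step_b_done;

static int setup_a(int fail) { if (fail) return -1; step_a_done = 1; return 0; }
static int setup_b(int fail) { if (fail) return -1; step_b_done = 1; return 0; }
static void undo_a(void) { step_a_done = 0; }

/* Mirrors the really_probe() fix: a failing step jumps past the
 * teardown of work that never happened. */
static int probe(int fail_a, int fail_b)
{
	int ret;

	ret = setup_a(fail_a);
	if (ret)
		goto out;		/* nothing to undo yet */

	ret = setup_b(fail_b);
	if (ret)
		goto err_undo_a;	/* only step A succeeded */

	return 0;

err_undo_a:
	undo_a();
out:
	return ret;
}
```

Keeping the labels ordered as the mirror image of the setup sequence is what makes such error paths auditable.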
+8 -2
drivers/base/devtmpfs.c
··· 29 29 #include <uapi/linux/mount.h> 30 30 #include "base.h" 31 31 32 + #ifdef CONFIG_DEVTMPFS_SAFE 33 + #define DEVTMPFS_MFLAGS (MS_SILENT | MS_NOEXEC | MS_NOSUID) 34 + #else 35 + #define DEVTMPFS_MFLAGS (MS_SILENT) 36 + #endif 37 + 32 38 static struct task_struct *thread; 33 39 34 40 static int __initdata mount_dev = IS_ENABLED(CONFIG_DEVTMPFS_MOUNT); ··· 369 363 if (!thread) 370 364 return 0; 371 365 372 - err = init_mount("devtmpfs", "dev", "devtmpfs", MS_SILENT, NULL); 366 + err = init_mount("devtmpfs", "dev", "devtmpfs", DEVTMPFS_MFLAGS, NULL); 373 367 if (err) 374 368 printk(KERN_INFO "devtmpfs: error mounting %i\n", err); 375 369 else ··· 418 412 err = ksys_unshare(CLONE_NEWNS); 419 413 if (err) 420 414 goto out; 421 - err = init_mount("devtmpfs", "/", "devtmpfs", MS_SILENT, NULL); 415 + err = init_mount("devtmpfs", "/", "devtmpfs", DEVTMPFS_MFLAGS, NULL); 422 416 if (err) 423 417 goto out; 424 418 init_chdir("/.."); /* will traverse into overmounted root */
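The devtmpfs change just ORs two more mount flags in when `CONFIG_DEVTMPFS_SAFE` is set. A sketch of the flag selection as a plain function so it can be checked in userspace; the `MS_*` values mirror `uapi/linux/mount.h`, and the function form (a runtime parameter instead of the kernel's compile-time `#ifdef`) is our adaptation:

```c
#include <assert.h>

#define MS_NOSUID	0x0002
#define MS_NOEXEC	0x0008
#define MS_SILENT	0x8000

static unsigned long devtmpfs_mflags(int devtmpfs_safe)
{
	/* With DEVTMPFS_SAFE, nothing on /dev may be executed or
	 * honoured as set-uid. */
	return devtmpfs_safe ? (MS_SILENT | MS_NOEXEC | MS_NOSUID)
			     : MS_SILENT;
}
```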
+7 -2
drivers/base/platform.c
··· 258 258 int ret; 259 259 260 260 ret = platform_get_irq_optional(dev, num); 261 - if (ret < 0 && ret != -EPROBE_DEFER) 262 - dev_err(&dev->dev, "IRQ index %u not found\n", num); 261 + if (ret < 0) 262 + return dev_err_probe(&dev->dev, ret, 263 + "IRQ index %u not found\n", num); 263 264 264 265 return ret; 265 266 } ··· 763 762 /** 764 763 * platform_device_register - add a platform-level device 765 764 * @pdev: platform device we're adding 765 + * 766 + * NOTE: _Never_ directly free @pdev after calling this function, even if it 767 + * returned an error! Always use platform_device_put() to give up the 768 + * reference initialised in this function instead. 766 769 */ 767 770 int platform_device_register(struct platform_device *pdev) 768 771 {
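The `platform_get_irq()` hunk leans on `dev_err_probe()`, which both logs and returns the error, so call sites collapse to `return dev_err_probe(...)`. A userspace sketch of its semantics (the function name here is ours; `EPROBE_DEFER` matches the kernel's value in `include/linux/errno.h`): real errors are logged loudly, probe deferral is kept quiet because the kernel records the deferral reason elsewhere.

```c
#include <assert.h>
#include <stdio.h>

#define EPROBE_DEFER	517	/* as in include/linux/errno.h */

static int dev_err_probe_sketch(int err, unsigned int irq_index)
{
	if (err != -EPROBE_DEFER)
		fprintf(stderr, "IRQ index %u not found (err %d)\n",
			irq_index, err);

	/* Always pass the error through unchanged. */
	return err;
}
```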
+11 -2
drivers/base/property.c
··· 478 478 unsigned int nargs, unsigned int index, 479 479 struct fwnode_reference_args *args) 480 480 { 481 - return fwnode_call_int_op(fwnode, get_reference_args, prop, nargs_prop, 482 - nargs, index, args); 481 + int ret; 482 + 483 + ret = fwnode_call_int_op(fwnode, get_reference_args, prop, nargs_prop, 484 + nargs, index, args); 485 + 486 + if (ret < 0 && !IS_ERR_OR_NULL(fwnode) && 487 + !IS_ERR_OR_NULL(fwnode->secondary)) 488 + ret = fwnode_call_int_op(fwnode->secondary, get_reference_args, 489 + prop, nargs_prop, nargs, index, args); 490 + 491 + return ret; 483 492 } 484 493 EXPORT_SYMBOL_GPL(fwnode_property_get_reference_args); 485 494
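The property.c hunk adds a primary/secondary fallback: if the reference lookup fails on the fwnode itself, retry on its `->secondary` before giving up. A stripped-down sketch of that control flow; the stub struct and the `get_args` function pointer stand in for the real fwnode op machinery and are assumptions, not kernel definitions:

```c
#include <assert.h>
#include <stddef.h>

struct fwnode_handle {
	struct fwnode_handle *secondary;
	int (*get_args)(void);	/* stand-in for the real fwnode op */
};

static int get_reference_args(struct fwnode_handle *fwnode)
{
	int ret = fwnode->get_args();

	/* Fall back to the secondary fwnode only on failure. */
	if (ret < 0 && fwnode->secondary)
		ret = fwnode->secondary->get_args();

	return ret;
}

static int op_fails(void) { return -19; /* -ENODEV */ }
static int op_succeeds(void) { return 0; }

static int primary_fails_secondary_ok(void)
{
	struct fwnode_handle sec = { NULL, op_succeeds };
	struct fwnode_handle prim = { &sec, op_fails };

	return get_reference_args(&prim);
}

static int both_fail(void)
{
	struct fwnode_handle sec = { NULL, op_fails };
	struct fwnode_handle prim = { &sec, op_fails };

	return get_reference_args(&prim);
}
```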
+5 -9
drivers/base/test/test_async_driver_probe.c
··· 104 104 struct platform_device **pdev = NULL; 105 105 int async_id = 0, sync_id = 0; 106 106 unsigned long long duration; 107 - ktime_t calltime, delta; 107 + ktime_t calltime; 108 108 int err, nid, cpu; 109 109 110 110 pr_info("registering first set of asynchronous devices...\n"); ··· 133 133 goto err_unregister_async_devs; 134 134 } 135 135 136 - delta = ktime_sub(ktime_get(), calltime); 137 - duration = (unsigned long long) ktime_to_ms(delta); 136 + duration = (unsigned long long)ktime_ms_delta(ktime_get(), calltime); 138 137 pr_info("registration took %lld msecs\n", duration); 139 138 if (duration > TEST_PROBE_THRESHOLD) { 140 139 pr_err("test failed: probe took too long\n"); ··· 160 161 async_id++; 161 162 } 162 163 163 - delta = ktime_sub(ktime_get(), calltime); 164 - duration = (unsigned long long) ktime_to_ms(delta); 164 + duration = (unsigned long long)ktime_ms_delta(ktime_get(), calltime); 165 165 dev_info(&(*pdev)->dev, 166 166 "registration took %lld msecs\n", duration); 167 167 if (duration > TEST_PROBE_THRESHOLD) { ··· 195 197 goto err_unregister_sync_devs; 196 198 } 197 199 198 - delta = ktime_sub(ktime_get(), calltime); 199 - duration = (unsigned long long) ktime_to_ms(delta); 200 + duration = (unsigned long long)ktime_ms_delta(ktime_get(), calltime); 200 201 pr_info("registration took %lld msecs\n", duration); 201 202 if (duration < TEST_PROBE_THRESHOLD) { 202 203 dev_err(&(*pdev)->dev, ··· 220 223 221 224 sync_id++; 222 225 223 - delta = ktime_sub(ktime_get(), calltime); 224 - duration = (unsigned long long) ktime_to_ms(delta); 226 + duration = (unsigned long long)ktime_ms_delta(ktime_get(), calltime); 225 227 dev_info(&(*pdev)->dev, 226 228 "registration took %lld msecs\n", duration); 227 229 if (duration < TEST_PROBE_THRESHOLD) {
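`ktime_ms_delta()` folds the old two-step `ktime_sub()`/`ktime_to_ms()` sequence into one call, which is all this test-code cleanup relies on. A userspace sketch of the arithmetic, with `ktime_t` modeled as nanoseconds the way the kernel stores it:

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t ktime_t;		/* nanoseconds, as in the kernel */
#define NSEC_PER_MSEC	1000000LL

/* Milliseconds elapsed between two timestamps, truncated toward zero. */
static int64_t ktime_ms_delta(ktime_t later, ktime_t earlier)
{
	return (later - earlier) / NSEC_PER_MSEC;
}
```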
+22 -6
drivers/base/topology.c
··· 45 45 define_id_show_func(physical_package_id); 46 46 static DEVICE_ATTR_RO(physical_package_id); 47 47 48 + #ifdef TOPOLOGY_DIE_SYSFS 48 49 define_id_show_func(die_id); 49 50 static DEVICE_ATTR_RO(die_id); 51 + #endif 50 52 53 + #ifdef TOPOLOGY_CLUSTER_SYSFS 51 54 define_id_show_func(cluster_id); 52 55 static DEVICE_ATTR_RO(cluster_id); 56 + #endif 53 57 54 58 define_id_show_func(core_id); 55 59 static DEVICE_ATTR_RO(core_id); ··· 70 66 static BIN_ATTR_RO(core_siblings, 0); 71 67 static BIN_ATTR_RO(core_siblings_list, 0); 72 68 69 + #ifdef TOPOLOGY_CLUSTER_SYSFS 73 70 define_siblings_read_func(cluster_cpus, cluster_cpumask); 74 71 static BIN_ATTR_RO(cluster_cpus, 0); 75 72 static BIN_ATTR_RO(cluster_cpus_list, 0); 73 + #endif 76 74 75 + #ifdef TOPOLOGY_DIE_SYSFS 77 76 define_siblings_read_func(die_cpus, die_cpumask); 78 77 static BIN_ATTR_RO(die_cpus, 0); 79 78 static BIN_ATTR_RO(die_cpus_list, 0); 79 + #endif 80 80 81 81 define_siblings_read_func(package_cpus, core_cpumask); 82 82 static BIN_ATTR_RO(package_cpus, 0); 83 83 static BIN_ATTR_RO(package_cpus_list, 0); 84 84 85 - #ifdef CONFIG_SCHED_BOOK 85 + #ifdef TOPOLOGY_BOOK_SYSFS 86 86 define_id_show_func(book_id); 87 87 static DEVICE_ATTR_RO(book_id); 88 88 define_siblings_read_func(book_siblings, book_cpumask); ··· 94 86 static BIN_ATTR_RO(book_siblings_list, 0); 95 87 #endif 96 88 97 - #ifdef CONFIG_SCHED_DRAWER 89 + #ifdef TOPOLOGY_DRAWER_SYSFS 98 90 define_id_show_func(drawer_id); 99 91 static DEVICE_ATTR_RO(drawer_id); 100 92 define_siblings_read_func(drawer_siblings, drawer_cpumask);
··· 109 101 &bin_attr_thread_siblings_list, 110 102 &bin_attr_core_siblings, 111 103 &bin_attr_core_siblings_list, 104 + #ifdef TOPOLOGY_CLUSTER_SYSFS 112 105 &bin_attr_cluster_cpus, 113 106 &bin_attr_cluster_cpus_list, 107 + #endif 108 + #ifdef TOPOLOGY_DIE_SYSFS 114 109 &bin_attr_die_cpus, 115 110 &bin_attr_die_cpus_list, 111 + #endif 116 112 &bin_attr_package_cpus, 117 113 &bin_attr_package_cpus_list, 118 - #ifdef CONFIG_SCHED_BOOK 114 + #ifdef TOPOLOGY_BOOK_SYSFS 119 115 &bin_attr_book_siblings, 120 116 &bin_attr_book_siblings_list, 121 117 #endif 122 - #ifdef CONFIG_SCHED_DRAWER 118 + #ifdef TOPOLOGY_DRAWER_SYSFS 123 119 &bin_attr_drawer_siblings, 124 120 &bin_attr_drawer_siblings_list, 125 121 #endif ··· 132 120 133 121 static struct attribute *default_attrs[] = { 134 122 &dev_attr_physical_package_id.attr, 123 + #ifdef TOPOLOGY_DIE_SYSFS 135 124 &dev_attr_die_id.attr, 125 + #endif 126 + #ifdef TOPOLOGY_CLUSTER_SYSFS 136 127 &dev_attr_cluster_id.attr, 128 + #endif 137 129 &dev_attr_core_id.attr, 138 - #ifdef CONFIG_SCHED_BOOK 130 + #ifdef TOPOLOGY_BOOK_SYSFS 139 131 &dev_attr_book_id.attr, 140 132 #endif 141 - #ifdef CONFIG_SCHED_DRAWER 133 + #ifdef TOPOLOGY_DRAWER_SYSFS 142 134 &dev_attr_drawer_id.attr, 143 135 #endif 144 136 NULL
+1 -1
drivers/dma-buf/dma-buf-sysfs-stats.c
··· 132 132 133 133 134 134 /* Statistics files do not need to send uevents. */ 135 - static int dmabuf_sysfs_uevent_filter(struct kset *kset, struct kobject *kobj) 135 + static int dmabuf_sysfs_uevent_filter(struct kobject *kobj) 136 136 { 137 137 return 0; 138 138 }
+4 -3
drivers/firmware/dmi-sysfs.c
··· 302 302 &dmi_sysfs_attr_sel_per_log_type_descriptor_length.attr, 303 303 NULL, 304 304 }; 305 - 305 + ATTRIBUTE_GROUPS(dmi_sysfs_sel); 306 306 307 307 static struct kobj_type dmi_system_event_log_ktype = { 308 308 .release = dmi_entry_free, 309 309 .sysfs_ops = &dmi_sysfs_specialize_attr_ops, 310 - .default_attrs = dmi_sysfs_sel_attrs, 310 + .default_groups = dmi_sysfs_sel_groups, 311 311 }; 312 312 313 313 typedef u8 (*sel_io_reader)(const struct dmi_system_event_log *sel, ··· 518 518 &dmi_sysfs_attr_entry_position.attr, 519 519 NULL, 520 520 }; 521 + ATTRIBUTE_GROUPS(dmi_sysfs_entry); 521 522 522 523 static ssize_t dmi_entry_raw_read_helper(struct dmi_sysfs_entry *entry, 523 524 const struct dmi_header *dh, ··· 566 565 static struct kobj_type dmi_sysfs_entry_ktype = { 567 566 .release = dmi_sysfs_entry_release, 568 567 .sysfs_ops = &dmi_sysfs_attr_ops, 569 - .default_attrs = dmi_sysfs_entry_attrs, 568 + .default_groups = dmi_sysfs_entry_groups, 570 569 }; 571 570 572 571 static struct kset *dmi_kset;
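Several commits in this pull do the same mechanical `default_attrs` → `default_groups` conversion via `ATTRIBUTE_GROUPS()`. A sketch of roughly what `ATTRIBUTE_GROUPS(def)` generates from a NULL-terminated attrs array; the struct definitions below are minimal stand-ins for `<linux/sysfs.h>`:

```c
#include <assert.h>
#include <stddef.h>

struct attribute { const char *name; };
struct attribute_group { struct attribute **attrs; };

static struct attribute type_attr = { .name = "type" };
static struct attribute *def_attrs[] = { &type_attr, NULL };

/* Roughly what ATTRIBUTE_GROUPS(def) expands to: one anonymous group
 * wrapping the attrs array, plus a NULL-terminated def_groups[] array,
 * which is what kobj_type.default_groups then points at. */
static const struct attribute_group def_group = { .attrs = def_attrs };
static const struct attribute_group *def_groups[] = { &def_group, NULL };
```

Because the macro derives all names from its argument, each driver's conversion is just `ATTRIBUTE_GROUPS(x)` plus switching the `kobj_type` member from `.default_attrs = x_attrs` to `.default_groups = x_groups`.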
-9
drivers/firmware/edd.c
··· 574 574 static EDD_DEVICE_ATTR(host_bus, 0444, edd_show_host_bus, edd_has_edd30); 575 575 static EDD_DEVICE_ATTR(mbr_signature, 0444, edd_show_mbr_signature, edd_has_mbr_signature); 576 576 577 - 578 - /* These are default attributes that are added for every edd 579 - * device discovered. There are none. 580 - */ 581 - static struct attribute * def_attrs[] = { 582 - NULL, 583 - }; 584 - 585 577 /* These attributes are conditional and only added for some devices. */ 586 578 static struct edd_attribute * edd_attrs[] = { 587 579 &edd_attr_raw_data, ··· 611 619 static struct kobj_type edd_ktype = { 612 620 .release = edd_release, 613 621 .sysfs_ops = &edd_attr_ops, 614 - .default_attrs = def_attrs, 615 622 }; 616 623 617 624 static struct kset *edd_kset;
+2 -1
drivers/firmware/memmap.c
··· 69 69 &memmap_type_attr.attr, 70 70 NULL 71 71 }; 72 + ATTRIBUTE_GROUPS(def); 72 73 73 74 static const struct sysfs_ops memmap_attr_ops = { 74 75 .show = memmap_attr_show, ··· 119 118 static struct kobj_type __refdata memmap_ktype = { 120 119 .release = release_firmware_map_entry, 121 120 .sysfs_ops = &memmap_attr_ops, 122 - .default_attrs = def_attrs, 121 + .default_groups = def_groups, 123 122 }; 124 123 125 124 /*
+3 -2
drivers/firmware/qemu_fw_cfg.c
··· 395 395 } 396 396 } 397 397 398 - /* default_attrs: per-entry attributes and show methods */ 398 + /* per-entry attributes and show methods */ 399 399 400 400 #define FW_CFG_SYSFS_ATTR(_attr) \ 401 401 struct fw_cfg_sysfs_attribute fw_cfg_sysfs_attr_##_attr = { \ ··· 428 428 &fw_cfg_sysfs_attr_name.attr, 429 429 NULL, 430 430 }; 431 + ATTRIBUTE_GROUPS(fw_cfg_sysfs_entry); 431 432 432 433 /* sysfs_ops: find fw_cfg_[entry, attribute] and call appropriate show method */ 433 434 static ssize_t fw_cfg_sysfs_attr_show(struct kobject *kobj, struct attribute *a, ··· 455 454 456 455 /* kobj_type: ties together all properties required to register an entry */ 457 456 static struct kobj_type fw_cfg_sysfs_entry_ktype = { 458 - .default_attrs = fw_cfg_sysfs_entry_attrs, 457 + .default_groups = fw_cfg_sysfs_entry_groups, 459 458 .sysfs_ops = &fw_cfg_sysfs_attr_ops, 460 459 .release = fw_cfg_sysfs_release_entry, 461 460 };
+6 -2
drivers/firmware/sysfb_simplefb.c
··· 113 113 sysfb_apply_efi_quirks(pd); 114 114 115 115 ret = platform_device_add_resources(pd, &res, 1); 116 - if (ret) 116 + if (ret) { 117 + platform_device_put(pd); 117 118 return ret; 119 + } 118 120 119 121 ret = platform_device_add_data(pd, mode, sizeof(*mode)); 120 - if (ret) 122 + if (ret) { 123 + platform_device_put(pd); 121 124 return ret; 125 + } 122 126 123 127 return platform_device_add(pd); 124 128 }
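The sysfb fix plugs a reference leak: once a platform device is allocated the caller owns a reference, so every error exit before `platform_device_add()` must drop it with `platform_device_put()` rather than simply returning. A userspace refcount sketch of the pattern; all names below are stand-ins, not kernel APIs:

```c
#include <assert.h>
#include <stdlib.h>

struct fake_pdev { int refcount; };

static struct fake_pdev *pdev_alloc(void)
{
	struct fake_pdev *pd = calloc(1, sizeof(*pd));

	if (pd)
		pd->refcount = 1;	/* caller owns one reference */
	return pd;
}

static void pdev_put(struct fake_pdev *pd)
{
	if (pd && --pd->refcount == 0)
		free(pd);
}

/* Mirrors the fixed flow: a failing setup step drops the reference
 * instead of leaking it on early return. */
static int create_device(int fail_setup, struct fake_pdev **out)
{
	struct fake_pdev *pd = pdev_alloc();

	if (!pd)
		return -12;		/* -ENOMEM */

	if (fail_setup) {
		pdev_put(pd);		/* the fix: no leak on this path */
		return -1;
	}

	*out = pd;
	return 0;
}

static int try_create(int fail)
{
	struct fake_pdev *pd = NULL;
	int ret = create_device(fail, &pd);

	if (!ret)
		pdev_put(pd);
	return ret;
}
```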
+2 -2
drivers/infiniband/hw/irdma/main.c
··· 207 207 struct iidc_auxiliary_dev, 208 208 adev); 209 209 struct ice_pf *pf = iidc_adev->pf; 210 - struct irdma_device *iwdev = dev_get_drvdata(&aux_dev->dev); 210 + struct irdma_device *iwdev = auxiliary_get_drvdata(aux_dev); 211 211 212 212 irdma_ib_unregister_device(iwdev); 213 213 ice_rdma_update_vsi_filter(pf, iwdev->vsi_num, false); ··· 295 295 ice_rdma_update_vsi_filter(pf, iwdev->vsi_num, true); 296 296 297 297 ibdev_dbg(&iwdev->ibdev, "INIT: Gen2 PF[%d] device probe success\n", PCI_FUNC(rf->pcidev->devfn)); 298 - dev_set_drvdata(&aux_dev->dev, iwdev); 298 + auxiliary_set_drvdata(aux_dev, iwdev); 299 299 300 300 return 0; 301 301
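This and the following mlx5/soundwire/vdpa conversions replace open-coded `dev_{set,get}_drvdata(&adev->dev, ...)` with the `auxiliary_device` helpers, which are thin wrappers over the same device field. Sketched here with stub types (the real definitions live in `<linux/auxiliary_bus.h>` and `<linux/device.h>`):

```c
#include <assert.h>
#include <stddef.h>

struct device { void *driver_data; };
struct auxiliary_device { struct device dev; };

static void auxiliary_set_drvdata(struct auxiliary_device *adev, void *data)
{
	adev->dev.driver_data = data;	/* real helper calls dev_set_drvdata() */
}

static void *auxiliary_get_drvdata(struct auxiliary_device *adev)
{
	return adev->dev.driver_data;	/* real helper calls dev_get_drvdata() */
}

static int drvdata_roundtrip(void)
{
	struct auxiliary_device adev = { { NULL } };
	int cookie = 42;

	auxiliary_set_drvdata(&adev, &cookie);
	return *(int *)auxiliary_get_drvdata(&adev);
}
```

The gain is purely ergonomic: drivers talk in terms of the auxiliary device they were probed with instead of reaching into its embedded `struct device`.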
+4 -4
drivers/infiniband/hw/mlx5/main.c
··· 4422 4422 } 4423 4423 mutex_unlock(&mlx5_ib_multiport_mutex); 4424 4424 4425 - dev_set_drvdata(&adev->dev, mpi); 4425 + auxiliary_set_drvdata(adev, mpi); 4426 4426 return 0; 4427 4427 } 4428 4428 ··· 4430 4430 { 4431 4431 struct mlx5_ib_multiport_info *mpi; 4432 4432 4433 - mpi = dev_get_drvdata(&adev->dev); 4433 + mpi = auxiliary_get_drvdata(adev); 4434 4434 mutex_lock(&mlx5_ib_multiport_mutex); 4435 4435 if (mpi->ibdev) 4436 4436 mlx5_ib_unbind_slave_port(mpi->ibdev, mpi); ··· 4480 4480 return ret; 4481 4481 } 4482 4482 4483 - dev_set_drvdata(&adev->dev, dev); 4483 + auxiliary_set_drvdata(adev, dev); 4484 4484 return 0; 4485 4485 } 4486 4486 ··· 4488 4488 { 4489 4489 struct mlx5_ib_dev *dev; 4490 4490 4491 - dev = dev_get_drvdata(&adev->dev); 4491 + dev = auxiliary_get_drvdata(adev); 4492 4492 __mlx5_ib_remove(dev, dev->profile, MLX5_IB_STAGE_MAX); 4493 4493 } 4494 4494
-10
drivers/mfd/Kconfig
··· 696 696 Register and P-unit access. In addition this creates devices 697 697 for iTCO watchdog and telemetry that are part of the PMC. 698 698 699 - config MFD_INTEL_PMT 700 - tristate "Intel Platform Monitoring Technology (PMT) support" 701 - depends on X86 && PCI 702 - select MFD_CORE 703 - help 704 - The Intel Platform Monitoring Technology (PMT) is an interface that 705 - provides access to hardware monitor registers. This driver supports 706 - Telemetry, Watcher, and Crashlog PMT capabilities/devices for 707 - platforms starting from Tiger Lake. 708 - 709 699 config MFD_IPAQ_MICRO 710 700 bool "Atmel Micro ASIC (iPAQ h3100/h3600/h3700) Support" 711 701 depends on SA1100_H3100 || SA1100_H3600
-1
drivers/mfd/Makefile
··· 211 211 obj-$(CONFIG_MFD_INTEL_LPSS_PCI) += intel-lpss-pci.o 212 212 obj-$(CONFIG_MFD_INTEL_LPSS_ACPI) += intel-lpss-acpi.o 213 213 obj-$(CONFIG_MFD_INTEL_PMC_BXT) += intel_pmc_bxt.o 214 - obj-$(CONFIG_MFD_INTEL_PMT) += intel_pmt.o 215 214 obj-$(CONFIG_MFD_PALMAS) += palmas.o 216 215 obj-$(CONFIG_MFD_VIPERBOARD) += viperboard.o 217 216 obj-$(CONFIG_MFD_NTXEC) += ntxec.o
-261
drivers/mfd/intel_pmt.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * Intel Platform Monitoring Technology PMT driver 4 - * 5 - * Copyright (c) 2020, Intel Corporation. 6 - * All Rights Reserved. 7 - * 8 - * Author: David E. Box <david.e.box@linux.intel.com> 9 - */ 10 - 11 - #include <linux/bits.h> 12 - #include <linux/kernel.h> 13 - #include <linux/mfd/core.h> 14 - #include <linux/module.h> 15 - #include <linux/pci.h> 16 - #include <linux/platform_device.h> 17 - #include <linux/pm.h> 18 - #include <linux/pm_runtime.h> 19 - #include <linux/types.h> 20 - 21 - /* Intel DVSEC capability vendor space offsets */ 22 - #define INTEL_DVSEC_ENTRIES 0xA 23 - #define INTEL_DVSEC_SIZE 0xB 24 - #define INTEL_DVSEC_TABLE 0xC 25 - #define INTEL_DVSEC_TABLE_BAR(x) ((x) & GENMASK(2, 0)) 26 - #define INTEL_DVSEC_TABLE_OFFSET(x) ((x) & GENMASK(31, 3)) 27 - #define INTEL_DVSEC_ENTRY_SIZE 4 28 - 29 - /* PMT capabilities */ 30 - #define DVSEC_INTEL_ID_TELEMETRY 2 31 - #define DVSEC_INTEL_ID_WATCHER 3 32 - #define DVSEC_INTEL_ID_CRASHLOG 4 33 - 34 - struct intel_dvsec_header { 35 - u16 length; 36 - u16 id; 37 - u8 num_entries; 38 - u8 entry_size; 39 - u8 tbir; 40 - u32 offset; 41 - }; 42 - 43 - enum pmt_quirks { 44 - /* Watcher capability not supported */ 45 - PMT_QUIRK_NO_WATCHER = BIT(0), 46 - 47 - /* Crashlog capability not supported */ 48 - PMT_QUIRK_NO_CRASHLOG = BIT(1), 49 - 50 - /* Use shift instead of mask to read discovery table offset */ 51 - PMT_QUIRK_TABLE_SHIFT = BIT(2), 52 - 53 - /* DVSEC not present (provided in driver data) */ 54 - PMT_QUIRK_NO_DVSEC = BIT(3), 55 - }; 56 - 57 - struct pmt_platform_info { 58 - unsigned long quirks; 59 - struct intel_dvsec_header **capabilities; 60 - }; 61 - 62 - static const struct pmt_platform_info tgl_info = { 63 - .quirks = PMT_QUIRK_NO_WATCHER | PMT_QUIRK_NO_CRASHLOG | 64 - PMT_QUIRK_TABLE_SHIFT, 65 - }; 66 - 67 - /* DG1 Platform with DVSEC quirk*/ 68 - static struct intel_dvsec_header dg1_telemetry = { 69 - .length = 0x10, 70 - .id = 2, 71 - 
.num_entries = 1, 72 - .entry_size = 3, 73 - .tbir = 0, 74 - .offset = 0x466000, 75 - }; 76 - 77 - static struct intel_dvsec_header *dg1_capabilities[] = { 78 - &dg1_telemetry, 79 - NULL 80 - }; 81 - 82 - static const struct pmt_platform_info dg1_info = { 83 - .quirks = PMT_QUIRK_NO_DVSEC, 84 - .capabilities = dg1_capabilities, 85 - }; 86 - 87 - static int pmt_add_dev(struct pci_dev *pdev, struct intel_dvsec_header *header, 88 - unsigned long quirks) 89 - { 90 - struct device *dev = &pdev->dev; 91 - struct resource *res, *tmp; 92 - struct mfd_cell *cell; 93 - const char *name; 94 - int count = header->num_entries; 95 - int size = header->entry_size; 96 - int id = header->id; 97 - int i; 98 - 99 - switch (id) { 100 - case DVSEC_INTEL_ID_TELEMETRY: 101 - name = "pmt_telemetry"; 102 - break; 103 - case DVSEC_INTEL_ID_WATCHER: 104 - if (quirks & PMT_QUIRK_NO_WATCHER) { 105 - dev_info(dev, "Watcher not supported\n"); 106 - return -EINVAL; 107 - } 108 - name = "pmt_watcher"; 109 - break; 110 - case DVSEC_INTEL_ID_CRASHLOG: 111 - if (quirks & PMT_QUIRK_NO_CRASHLOG) { 112 - dev_info(dev, "Crashlog not supported\n"); 113 - return -EINVAL; 114 - } 115 - name = "pmt_crashlog"; 116 - break; 117 - default: 118 - return -EINVAL; 119 - } 120 - 121 - if (!header->num_entries || !header->entry_size) { 122 - dev_err(dev, "Invalid count or size for %s header\n", name); 123 - return -EINVAL; 124 - } 125 - 126 - cell = devm_kzalloc(dev, sizeof(*cell), GFP_KERNEL); 127 - if (!cell) 128 - return -ENOMEM; 129 - 130 - res = devm_kcalloc(dev, count, sizeof(*res), GFP_KERNEL); 131 - if (!res) 132 - return -ENOMEM; 133 - 134 - if (quirks & PMT_QUIRK_TABLE_SHIFT) 135 - header->offset >>= 3; 136 - 137 - /* 138 - * The PMT DVSEC contains the starting offset and count for a block of 139 - * discovery tables, each providing access to monitoring facilities for 140 - * a section of the device. Create a resource list of these tables to 141 - * provide to the driver. 
142 - */ 143 - for (i = 0, tmp = res; i < count; i++, tmp++) { 144 - tmp->start = pdev->resource[header->tbir].start + 145 - header->offset + i * (size << 2); 146 - tmp->end = tmp->start + (size << 2) - 1; 147 - tmp->flags = IORESOURCE_MEM; 148 - } 149 - 150 - cell->resources = res; 151 - cell->num_resources = count; 152 - cell->name = name; 153 - 154 - return devm_mfd_add_devices(dev, PLATFORM_DEVID_AUTO, cell, 1, NULL, 0, 155 - NULL); 156 - } 157 - 158 - static int pmt_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) 159 - { 160 - struct pmt_platform_info *info; 161 - unsigned long quirks = 0; 162 - bool found_devices = false; 163 - int ret, pos = 0; 164 - 165 - ret = pcim_enable_device(pdev); 166 - if (ret) 167 - return ret; 168 - 169 - info = (struct pmt_platform_info *)id->driver_data; 170 - 171 - if (info) 172 - quirks = info->quirks; 173 - 174 - if (info && (info->quirks & PMT_QUIRK_NO_DVSEC)) { 175 - struct intel_dvsec_header **header; 176 - 177 - header = info->capabilities; 178 - while (*header) { 179 - ret = pmt_add_dev(pdev, *header, quirks); 180 - if (ret) 181 - dev_warn(&pdev->dev, 182 - "Failed to add device for DVSEC id %d\n", 183 - (*header)->id); 184 - else 185 - found_devices = true; 186 - 187 - ++header; 188 - } 189 - } else { 190 - do { 191 - struct intel_dvsec_header header; 192 - u32 table; 193 - u16 vid; 194 - 195 - pos = pci_find_next_ext_capability(pdev, pos, PCI_EXT_CAP_ID_DVSEC); 196 - if (!pos) 197 - break; 198 - 199 - pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER1, &vid); 200 - if (vid != PCI_VENDOR_ID_INTEL) 201 - continue; 202 - 203 - pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER2, 204 - &header.id); 205 - pci_read_config_byte(pdev, pos + INTEL_DVSEC_ENTRIES, 206 - &header.num_entries); 207 - pci_read_config_byte(pdev, pos + INTEL_DVSEC_SIZE, 208 - &header.entry_size); 209 - pci_read_config_dword(pdev, pos + INTEL_DVSEC_TABLE, 210 - &table); 211 - 212 - header.tbir = INTEL_DVSEC_TABLE_BAR(table); 213 - 
header.offset = INTEL_DVSEC_TABLE_OFFSET(table); 214 - 215 - ret = pmt_add_dev(pdev, &header, quirks); 216 - if (ret) 217 - continue; 218 - 219 - found_devices = true; 220 - } while (true); 221 - } 222 - 223 - if (!found_devices) 224 - return -ENODEV; 225 - 226 - pm_runtime_put(&pdev->dev); 227 - pm_runtime_allow(&pdev->dev); 228 - 229 - return 0; 230 - } 231 - 232 - static void pmt_pci_remove(struct pci_dev *pdev) 233 - { 234 - pm_runtime_forbid(&pdev->dev); 235 - pm_runtime_get_sync(&pdev->dev); 236 - } 237 - 238 - #define PCI_DEVICE_ID_INTEL_PMT_ADL 0x467d 239 - #define PCI_DEVICE_ID_INTEL_PMT_DG1 0x490e 240 - #define PCI_DEVICE_ID_INTEL_PMT_OOBMSM 0x09a7 241 - #define PCI_DEVICE_ID_INTEL_PMT_TGL 0x9a0d 242 - static const struct pci_device_id pmt_pci_ids[] = { 243 - { PCI_DEVICE_DATA(INTEL, PMT_ADL, &tgl_info) }, 244 - { PCI_DEVICE_DATA(INTEL, PMT_DG1, &dg1_info) }, 245 - { PCI_DEVICE_DATA(INTEL, PMT_OOBMSM, NULL) }, 246 - { PCI_DEVICE_DATA(INTEL, PMT_TGL, &tgl_info) }, 247 - { } 248 - }; 249 - MODULE_DEVICE_TABLE(pci, pmt_pci_ids); 250 - 251 - static struct pci_driver pmt_pci_driver = { 252 - .name = "intel-pmt", 253 - .id_table = pmt_pci_ids, 254 - .probe = pmt_pci_probe, 255 - .remove = pmt_pci_remove, 256 - }; 257 - module_pci_driver(pmt_pci_driver); 258 - 259 - MODULE_AUTHOR("David E. Box <david.e.box@linux.intel.com>"); 260 - MODULE_DESCRIPTION("Intel Platform Monitoring Technology PMT driver"); 261 - MODULE_LICENSE("GPL v2");
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 5534 5534 static int mlx5e_resume(struct auxiliary_device *adev) 5535 5535 { 5536 5536 struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev); 5537 - struct mlx5e_priv *priv = dev_get_drvdata(&adev->dev); 5537 + struct mlx5e_priv *priv = auxiliary_get_drvdata(adev); 5538 5538 struct net_device *netdev = priv->netdev; 5539 5539 struct mlx5_core_dev *mdev = edev->mdev; 5540 5540 int err; ··· 5557 5557 5558 5558 static int mlx5e_suspend(struct auxiliary_device *adev, pm_message_t state) 5559 5559 { 5560 - struct mlx5e_priv *priv = dev_get_drvdata(&adev->dev); 5560 + struct mlx5e_priv *priv = auxiliary_get_drvdata(adev); 5561 5561 struct net_device *netdev = priv->netdev; 5562 5562 struct mlx5_core_dev *mdev = priv->mdev; 5563 5563 ··· 5589 5589 mlx5e_build_nic_netdev(netdev); 5590 5590 5591 5591 priv = netdev_priv(netdev); 5592 - dev_set_drvdata(&adev->dev, priv); 5592 + auxiliary_set_drvdata(adev, priv); 5593 5593 5594 5594 priv->profile = profile; 5595 5595 priv->ppriv = NULL; ··· 5637 5637 5638 5638 static void mlx5e_remove(struct auxiliary_device *adev) 5639 5639 { 5640 - struct mlx5e_priv *priv = dev_get_drvdata(&adev->dev); 5640 + struct mlx5e_priv *priv = auxiliary_get_drvdata(adev); 5641 5641 pm_message_t state = {}; 5642 5642 5643 5643 mlx5e_dcbnl_delete_app(priv);
+11
drivers/platform/x86/intel/Kconfig
··· 170 170 171 171 To compile this driver as a module, choose M here: the module 172 172 will be called intel-uncore-frequency. 173 + 174 + config INTEL_VSEC 175 + tristate "Intel Vendor Specific Extended Capabilities Driver" 176 + depends on PCI 177 + select AUXILIARY_BUS 178 + help 179 + Adds support for feature drivers exposed using Intel PCIe VSEC and 180 + DVSEC. 181 + 182 + To compile this driver as a module, choose M here: the module will 183 + be called intel_vsec.
+2
drivers/platform/x86/intel/Makefile
··· 26 26 obj-$(CONFIG_INTEL_INT0002_VGPIO) += intel_int0002_vgpio.o 27 27 intel_oaktrail-y := oaktrail.o 28 28 obj-$(CONFIG_INTEL_OAKTRAIL) += intel_oaktrail.o 29 + intel_vsec-y := vsec.o 30 + obj-$(CONFIG_INTEL_VSEC) += intel_vsec.o 29 31 30 32 # Intel PMIC / PMC / P-Unit drivers 31 33 intel_bxtwc_tmu-y := bxtwc_tmu.o
+2 -2
drivers/platform/x86/intel/pmt/Kconfig
··· 17 17 18 18 config INTEL_PMT_TELEMETRY 19 19 tristate "Intel Platform Monitoring Technology (PMT) Telemetry driver" 20 - depends on MFD_INTEL_PMT 20 + depends on INTEL_VSEC 21 21 select INTEL_PMT_CLASS 22 22 help 23 23 The Intel Platform Monitoring Technology (PMT) Telemetry driver provides ··· 29 29 30 30 config INTEL_PMT_CRASHLOG 31 31 tristate "Intel Platform Monitoring Technology (PMT) Crashlog driver" 32 - depends on MFD_INTEL_PMT 32 + depends on INTEL_VSEC 33 33 select INTEL_PMT_CLASS 34 34 help 35 35 The Intel Platform Monitoring Technology (PMT) crashlog driver provides
+10 -11
drivers/platform/x86/intel/pmt/class.c
··· 13 13 #include <linux/mm.h> 14 14 #include <linux/pci.h> 15 15 16 + #include "../vsec.h" 16 17 #include "class.h" 17 18 18 19 #define PMT_XA_START 0 ··· 282 281 return ret; 283 282 } 284 283 285 - int intel_pmt_dev_create(struct intel_pmt_entry *entry, 286 - struct intel_pmt_namespace *ns, 287 - struct platform_device *pdev, int idx) 284 + int intel_pmt_dev_create(struct intel_pmt_entry *entry, struct intel_pmt_namespace *ns, 285 + struct intel_vsec_device *intel_vsec_dev, int idx) 288 286 { 287 + struct device *dev = &intel_vsec_dev->auxdev.dev; 289 288 struct intel_pmt_header header; 290 289 struct resource *disc_res; 291 - int ret = -ENODEV; 290 + int ret; 292 291 293 - disc_res = platform_get_resource(pdev, IORESOURCE_MEM, idx); 294 - if (!disc_res) 295 - return ret; 292 + disc_res = &intel_vsec_dev->resource[idx]; 296 293 297 - entry->disc_table = devm_platform_ioremap_resource(pdev, idx); 294 + entry->disc_table = devm_ioremap_resource(dev, disc_res); 298 295 if (IS_ERR(entry->disc_table)) 299 296 return PTR_ERR(entry->disc_table); 300 297 301 - ret = ns->pmt_header_decode(entry, &header, &pdev->dev); 298 + ret = ns->pmt_header_decode(entry, &header, dev); 302 299 if (ret) 303 300 return ret; 304 301 305 - ret = intel_pmt_populate_entry(entry, &header, &pdev->dev, disc_res); 302 + ret = intel_pmt_populate_entry(entry, &header, dev, disc_res); 306 303 if (ret) 307 304 return ret; 308 305 309 - return intel_pmt_dev_register(entry, ns, &pdev->dev); 306 + return intel_pmt_dev_register(entry, ns, dev); 310 307 311 308 } 312 309 EXPORT_SYMBOL_GPL(intel_pmt_dev_create);
+3 -2
drivers/platform/x86/intel/pmt/class.h
··· 2 2 #ifndef _INTEL_PMT_CLASS_H 3 3 #define _INTEL_PMT_CLASS_H 4 4 5 - #include <linux/platform_device.h> 6 5 #include <linux/xarray.h> 7 6 #include <linux/types.h> 8 7 #include <linux/bits.h> 9 8 #include <linux/err.h> 10 9 #include <linux/io.h> 10 + 11 + #include "../vsec.h" 11 12 12 13 /* PMT access types */ 13 14 #define ACCESS_BARID 2 ··· 48 47 bool intel_pmt_is_early_client_hw(struct device *dev); 49 48 int intel_pmt_dev_create(struct intel_pmt_entry *entry, 50 49 struct intel_pmt_namespace *ns, 51 - struct platform_device *pdev, int idx); 50 + struct intel_vsec_device *dev, int idx); 52 51 void intel_pmt_dev_destroy(struct intel_pmt_entry *entry, 53 52 struct intel_pmt_namespace *ns); 54 53 #endif
+25 -22
drivers/platform/x86/intel/pmt/crashlog.c
··· 8 8 * Author: "Alexander Duyck" <alexander.h.duyck@linux.intel.com> 9 9 */ 10 10 11 + #include <linux/auxiliary_bus.h> 11 12 #include <linux/kernel.h> 12 13 #include <linux/module.h> 13 14 #include <linux/pci.h> ··· 16 15 #include <linux/uaccess.h> 17 16 #include <linux/overflow.h> 18 17 18 + #include "../vsec.h" 19 19 #include "class.h" 20 - 21 - #define DRV_NAME "pmt_crashlog" 22 20 23 21 /* Crashlog discovery header types */ 24 22 #define CRASH_TYPE_OOBMSM 1
··· 257 257 /* 258 258 * initialization 259 259 */ 260 - static int pmt_crashlog_remove(struct platform_device *pdev) 260 + static void pmt_crashlog_remove(struct auxiliary_device *auxdev) 261 261 { 262 - struct pmt_crashlog_priv *priv = platform_get_drvdata(pdev); 262 + struct pmt_crashlog_priv *priv = auxiliary_get_drvdata(auxdev); 263 263 int i; 264 264 265 265 for (i = 0; i < priv->num_entries; i++) 266 266 intel_pmt_dev_destroy(&priv->entry[i].entry, &pmt_crashlog_ns); 267 - 268 - return 0; 269 267 } 270 268 271 - static int pmt_crashlog_probe(struct platform_device *pdev) 269 + static int pmt_crashlog_probe(struct auxiliary_device *auxdev, 270 + const struct auxiliary_device_id *id) 272 271 { 272 + struct intel_vsec_device *intel_vsec_dev = auxdev_to_ivdev(auxdev); 273 273 struct pmt_crashlog_priv *priv; 274 274 size_t size; 275 275 int i, ret; 276 276 277 - size = struct_size(priv, entry, pdev->num_resources); 278 - priv = devm_kzalloc(&pdev->dev, size, GFP_KERNEL); 277 + size = struct_size(priv, entry, intel_vsec_dev->num_resources); 278 + priv = devm_kzalloc(&auxdev->dev, size, GFP_KERNEL); 279 279 if (!priv) 280 280 return -ENOMEM; 281 281 282 - platform_set_drvdata(pdev, priv); 282 + auxiliary_set_drvdata(auxdev, priv); 283 283 284 - for (i = 0; i < pdev->num_resources; i++) { 284 + for (i = 0; i < intel_vsec_dev->num_resources; i++) { 285 285 struct intel_pmt_entry *entry = &priv->entry[i].entry; 286 286 287 - ret = intel_pmt_dev_create(entry, &pmt_crashlog_ns, pdev, i); 287 + ret = intel_pmt_dev_create(entry, &pmt_crashlog_ns, intel_vsec_dev, i); 288 288 if (ret < 0) 289 289 goto abort_probe; 290 290 if (ret) ··· 295 295 296 296 return 0; 297 297 abort_probe: 298 - pmt_crashlog_remove(pdev); 298 + pmt_crashlog_remove(auxdev); 299 299 return ret; 300 300 } 301 301 302 - static struct platform_driver pmt_crashlog_driver = { 303 - .driver = { 304 - .name = DRV_NAME, 305 - }, 306 - .remove = pmt_crashlog_remove, 307 - .probe = pmt_crashlog_probe, 302 + static const struct auxiliary_device_id pmt_crashlog_id_table[] = { 303 + { .name = "intel_vsec.crashlog" }, 304 + {} 305 + }; 306 + MODULE_DEVICE_TABLE(auxiliary, pmt_crashlog_id_table); 307 + 308 + static struct auxiliary_driver pmt_crashlog_aux_driver = { 309 + .id_table = pmt_crashlog_id_table, 310 + .remove = pmt_crashlog_remove, 311 + .probe = pmt_crashlog_probe, 308 312 }; 309 313 310 314 static int __init pmt_crashlog_init(void) 311 315 { 312 - return platform_driver_register(&pmt_crashlog_driver); 316 + return auxiliary_driver_register(&pmt_crashlog_aux_driver); 313 317 } 314 318 315 319 static void __exit pmt_crashlog_exit(void) 316 320 { 317 - platform_driver_unregister(&pmt_crashlog_driver); 321 + auxiliary_driver_unregister(&pmt_crashlog_aux_driver); 318 322 xa_destroy(&crashlog_array); 319 323 } 320 324 ··· 327 323 328 324 MODULE_AUTHOR("Alexander Duyck <alexander.h.duyck@linux.intel.com>"); 329 325 MODULE_DESCRIPTION("Intel PMT Crashlog driver"); 330 - MODULE_ALIAS("platform:" DRV_NAME); 331 326 MODULE_LICENSE("GPL v2");
+24 -22
drivers/platform/x86/intel/pmt/telemetry.c
··· 8 8 * Author: "David E. Box" <david.e.box@linux.intel.com> 9 9 */ 10 10 11 + #include <linux/auxiliary_bus.h> 11 12 #include <linux/kernel.h> 12 13 #include <linux/module.h> 13 14 #include <linux/pci.h> ··· 16 15 #include <linux/uaccess.h> 17 16 #include <linux/overflow.h> 18 17 18 + #include "../vsec.h" 19 19 #include "class.h" 20 - 21 - #define TELEM_DEV_NAME "pmt_telemetry" 22 20 23 21 #define TELEM_SIZE_OFFSET 0x0 24 22 #define TELEM_GUID_OFFSET 0x4
··· 79 79 .pmt_header_decode = pmt_telem_header_decode, 80 80 }; 81 81 82 - static int pmt_telem_remove(struct platform_device *pdev) 82 + static void pmt_telem_remove(struct auxiliary_device *auxdev) 83 83 { 84 - struct pmt_telem_priv *priv = platform_get_drvdata(pdev); 84 + struct pmt_telem_priv *priv = auxiliary_get_drvdata(auxdev); 85 85 int i; 86 86 87 87 for (i = 0; i < priv->num_entries; i++) 88 88 intel_pmt_dev_destroy(&priv->entry[i], &pmt_telem_ns); 89 - 90 - return 0; 91 89 } 92 90 93 - static int pmt_telem_probe(struct platform_device *pdev) 91 + static int pmt_telem_probe(struct auxiliary_device *auxdev, const struct auxiliary_device_id *id) 94 92 { 93 + struct intel_vsec_device *intel_vsec_dev = auxdev_to_ivdev(auxdev); 95 94 struct pmt_telem_priv *priv; 96 95 size_t size; 97 96 int i, ret; 98 97 99 - size = struct_size(priv, entry, pdev->num_resources); 100 - priv = devm_kzalloc(&pdev->dev, size, GFP_KERNEL); 98 + size = struct_size(priv, entry, intel_vsec_dev->num_resources); 99 + priv = devm_kzalloc(&auxdev->dev, size, GFP_KERNEL); 101 100 if (!priv) 102 101 return -ENOMEM; 103 102 104 - platform_set_drvdata(pdev, priv); 103 + auxiliary_set_drvdata(auxdev, priv); 105 104 106 - for (i = 0; i < pdev->num_resources; i++) { 105 + for (i = 0; i < intel_vsec_dev->num_resources; i++) { 107 106 struct intel_pmt_entry *entry = &priv->entry[i]; 108 107 109 - ret = intel_pmt_dev_create(entry, &pmt_telem_ns, pdev, i); 108 + ret = intel_pmt_dev_create(entry, &pmt_telem_ns, intel_vsec_dev, i); 110 109 if (ret < 0) 111 110 goto abort_probe; 112 111 if (ret) ··· 116 117 117 118 118 119 return 0; 118 119 abort_probe: 119 - pmt_telem_remove(pdev); 120 + pmt_telem_remove(auxdev); 120 121 return ret; 121 122 } 122 123 123 - static struct platform_driver pmt_telem_driver = { 124 - .driver = { 125 - .name = TELEM_DEV_NAME, 126 - }, 127 - .remove = pmt_telem_remove, 128 - .probe = pmt_telem_probe, 124 + static const struct auxiliary_device_id pmt_telem_id_table[] = { 125 + { .name = "intel_vsec.telemetry" }, 126 + {} 127 + }; 128 + MODULE_DEVICE_TABLE(auxiliary, pmt_telem_id_table); 129 + 130 + static struct auxiliary_driver pmt_telem_aux_driver = { 131 + .id_table = pmt_telem_id_table, 132 + .remove = pmt_telem_remove, 133 + .probe = pmt_telem_probe, 129 134 }; 130 135 131 136 static int __init pmt_telem_init(void) 132 137 { 133 - return platform_driver_register(&pmt_telem_driver); 138 + return auxiliary_driver_register(&pmt_telem_aux_driver); 134 139 } 135 140 module_init(pmt_telem_init); 136 141 137 142 static void __exit pmt_telem_exit(void) 138 143 { 139 - platform_driver_unregister(&pmt_telem_driver); 144 + auxiliary_driver_unregister(&pmt_telem_aux_driver); 140 145 xa_destroy(&telem_array); 141 146 } 142 147 module_exit(pmt_telem_exit); 143 148 144 149 MODULE_AUTHOR("David E. Box <david.e.box@linux.intel.com>"); 145 150 MODULE_DESCRIPTION("Intel PMT Telemetry driver"); 146 - MODULE_ALIAS("platform:" TELEM_DEV_NAME); 147 151 MODULE_LICENSE("GPL v2");
+408
drivers/platform/x86/intel/vsec.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Intel Vendor Specific Extended Capabilities auxiliary bus driver 4 + * 5 + * Copyright (c) 2021, Intel Corporation. 6 + * All Rights Reserved. 7 + * 8 + * Author: David E. Box <david.e.box@linux.intel.com> 9 + * 10 + * This driver discovers and creates auxiliary devices for Intel defined PCIe 11 + * "Vendor Specific" and "Designated Vendor Specific" Extended Capabilities, 12 + * VSEC and DVSEC respectively. The driver supports features on specific PCIe 13 + * endpoints that exist primarily to expose them. 14 + */ 15 + 16 + #include <linux/auxiliary_bus.h> 17 + #include <linux/bits.h> 18 + #include <linux/kernel.h> 19 + #include <linux/idr.h> 20 + #include <linux/module.h> 21 + #include <linux/pci.h> 22 + #include <linux/types.h> 23 + 24 + #include "vsec.h" 25 + 26 + /* Intel DVSEC offsets */ 27 + #define INTEL_DVSEC_ENTRIES 0xA 28 + #define INTEL_DVSEC_SIZE 0xB 29 + #define INTEL_DVSEC_TABLE 0xC 30 + #define INTEL_DVSEC_TABLE_BAR(x) ((x) & GENMASK(2, 0)) 31 + #define INTEL_DVSEC_TABLE_OFFSET(x) ((x) & GENMASK(31, 3)) 32 + #define TABLE_OFFSET_SHIFT 3 33 + 34 + static DEFINE_IDA(intel_vsec_ida); 35 + 36 + /** 37 + * struct intel_vsec_header - Common fields of Intel VSEC and DVSEC registers. 
38 + * @rev: Revision ID of the VSEC/DVSEC register space 39 + * @length: Length of the VSEC/DVSEC register space 40 + * @id: ID of the feature 41 + * @num_entries: Number of instances of the feature 42 + * @entry_size: Size of the discovery table for each feature 43 + * @tbir: BAR containing the discovery tables 44 + * @offset: BAR offset of start of the first discovery table 45 + */ 46 + struct intel_vsec_header { 47 + u8 rev; 48 + u16 length; 49 + u16 id; 50 + u8 num_entries; 51 + u8 entry_size; 52 + u8 tbir; 53 + u32 offset; 54 + }; 55 + 56 + /* Platform specific data */ 57 + struct intel_vsec_platform_info { 58 + struct intel_vsec_header **capabilities; 59 + unsigned long quirks; 60 + }; 61 + 62 + enum intel_vsec_id { 63 + VSEC_ID_TELEMETRY = 2, 64 + VSEC_ID_WATCHER = 3, 65 + VSEC_ID_CRASHLOG = 4, 66 + }; 67 + 68 + static enum intel_vsec_id intel_vsec_allow_list[] = { 69 + VSEC_ID_TELEMETRY, 70 + VSEC_ID_WATCHER, 71 + VSEC_ID_CRASHLOG, 72 + }; 73 + 74 + static const char *intel_vsec_name(enum intel_vsec_id id) 75 + { 76 + switch (id) { 77 + case VSEC_ID_TELEMETRY: 78 + return "telemetry"; 79 + 80 + case VSEC_ID_WATCHER: 81 + return "watcher"; 82 + 83 + case VSEC_ID_CRASHLOG: 84 + return "crashlog"; 85 + 86 + default: 87 + return NULL; 88 + } 89 + } 90 + 91 + static bool intel_vsec_allowed(u16 id) 92 + { 93 + int i; 94 + 95 + for (i = 0; i < ARRAY_SIZE(intel_vsec_allow_list); i++) 96 + if (intel_vsec_allow_list[i] == id) 97 + return true; 98 + 99 + return false; 100 + } 101 + 102 + static bool intel_vsec_disabled(u16 id, unsigned long quirks) 103 + { 104 + switch (id) { 105 + case VSEC_ID_WATCHER: 106 + return !!(quirks & VSEC_QUIRK_NO_WATCHER); 107 + 108 + case VSEC_ID_CRASHLOG: 109 + return !!(quirks & VSEC_QUIRK_NO_CRASHLOG); 110 + 111 + default: 112 + return false; 113 + } 114 + } 115 + 116 + static void intel_vsec_remove_aux(void *data) 117 + { 118 + auxiliary_device_delete(data); 119 + auxiliary_device_uninit(data); 120 + } 121 + 122 + static void 
intel_vsec_dev_release(struct device *dev) 123 + { 124 + struct intel_vsec_device *intel_vsec_dev = dev_to_ivdev(dev); 125 + 126 + ida_free(intel_vsec_dev->ida, intel_vsec_dev->auxdev.id); 127 + kfree(intel_vsec_dev->resource); 128 + kfree(intel_vsec_dev); 129 + } 130 + 131 + static int intel_vsec_add_aux(struct pci_dev *pdev, struct intel_vsec_device *intel_vsec_dev, 132 + const char *name) 133 + { 134 + struct auxiliary_device *auxdev = &intel_vsec_dev->auxdev; 135 + int ret; 136 + 137 + ret = ida_alloc(intel_vsec_dev->ida, GFP_KERNEL); 138 + if (ret < 0) { 139 + kfree(intel_vsec_dev); 140 + return ret; 141 + } 142 + 143 + auxdev->id = ret; 144 + auxdev->name = name; 145 + auxdev->dev.parent = &pdev->dev; 146 + auxdev->dev.release = intel_vsec_dev_release; 147 + 148 + ret = auxiliary_device_init(auxdev); 149 + if (ret < 0) { 150 + ida_free(intel_vsec_dev->ida, auxdev->id); 151 + kfree(intel_vsec_dev->resource); 152 + kfree(intel_vsec_dev); 153 + return ret; 154 + } 155 + 156 + ret = auxiliary_device_add(auxdev); 157 + if (ret < 0) { 158 + auxiliary_device_uninit(auxdev); 159 + return ret; 160 + } 161 + 162 + return devm_add_action_or_reset(&pdev->dev, intel_vsec_remove_aux, auxdev); 163 + } 164 + 165 + static int intel_vsec_add_dev(struct pci_dev *pdev, struct intel_vsec_header *header, 166 + unsigned long quirks) 167 + { 168 + struct intel_vsec_device *intel_vsec_dev; 169 + struct resource *res, *tmp; 170 + int i; 171 + 172 + if (!intel_vsec_allowed(header->id) || intel_vsec_disabled(header->id, quirks)) 173 + return -EINVAL; 174 + 175 + if (!header->num_entries) { 176 + dev_dbg(&pdev->dev, "Invalid 0 entry count for header id %d\n", header->id); 177 + return -EINVAL; 178 + } 179 + 180 + if (!header->entry_size) { 181 + dev_dbg(&pdev->dev, "Invalid 0 entry size for header id %d\n", header->id); 182 + return -EINVAL; 183 + } 184 + 185 + intel_vsec_dev = kzalloc(sizeof(*intel_vsec_dev), GFP_KERNEL); 186 + if (!intel_vsec_dev) 187 + return -ENOMEM; 188 + 189 + res 
= kcalloc(header->num_entries, sizeof(*res), GFP_KERNEL); 190 + if (!res) { 191 + kfree(intel_vsec_dev); 192 + return -ENOMEM; 193 + } 194 + 195 + if (quirks & VSEC_QUIRK_TABLE_SHIFT) 196 + header->offset >>= TABLE_OFFSET_SHIFT; 197 + 198 + /* 199 + * The DVSEC/VSEC contains the starting offset and count for a block of 200 + * discovery tables. Create a resource array of these tables to the 201 + * auxiliary device driver. 202 + */ 203 + for (i = 0, tmp = res; i < header->num_entries; i++, tmp++) { 204 + tmp->start = pdev->resource[header->tbir].start + 205 + header->offset + i * (header->entry_size * sizeof(u32)); 206 + tmp->end = tmp->start + (header->entry_size * sizeof(u32)) - 1; 207 + tmp->flags = IORESOURCE_MEM; 208 + } 209 + 210 + intel_vsec_dev->pcidev = pdev; 211 + intel_vsec_dev->resource = res; 212 + intel_vsec_dev->num_resources = header->num_entries; 213 + intel_vsec_dev->quirks = quirks; 214 + intel_vsec_dev->ida = &intel_vsec_ida; 215 + 216 + return intel_vsec_add_aux(pdev, intel_vsec_dev, intel_vsec_name(header->id)); 217 + } 218 + 219 + static bool intel_vsec_walk_header(struct pci_dev *pdev, unsigned long quirks, 220 + struct intel_vsec_header **header) 221 + { 222 + bool have_devices = false; 223 + int ret; 224 + 225 + for ( ; *header; header++) { 226 + ret = intel_vsec_add_dev(pdev, *header, quirks); 227 + if (ret) 228 + dev_info(&pdev->dev, "Could not add device for DVSEC id %d\n", 229 + (*header)->id); 230 + else 231 + have_devices = true; 232 + } 233 + 234 + return have_devices; 235 + } 236 + 237 + static bool intel_vsec_walk_dvsec(struct pci_dev *pdev, unsigned long quirks) 238 + { 239 + bool have_devices = false; 240 + int pos = 0; 241 + 242 + do { 243 + struct intel_vsec_header header; 244 + u32 table, hdr; 245 + u16 vid; 246 + int ret; 247 + 248 + pos = pci_find_next_ext_capability(pdev, pos, PCI_EXT_CAP_ID_DVSEC); 249 + if (!pos) 250 + break; 251 + 252 + pci_read_config_dword(pdev, pos + PCI_DVSEC_HEADER1, &hdr); 253 + vid = 
PCI_DVSEC_HEADER1_VID(hdr); 254 + if (vid != PCI_VENDOR_ID_INTEL) 255 + continue; 256 + 257 + /* Support only revision 1 */ 258 + header.rev = PCI_DVSEC_HEADER1_REV(hdr); 259 + if (header.rev != 1) { 260 + dev_info(&pdev->dev, "Unsupported DVSEC revision %d\n", header.rev); 261 + continue; 262 + } 263 + 264 + header.length = PCI_DVSEC_HEADER1_LEN(hdr); 265 + 266 + pci_read_config_byte(pdev, pos + INTEL_DVSEC_ENTRIES, &header.num_entries); 267 + pci_read_config_byte(pdev, pos + INTEL_DVSEC_SIZE, &header.entry_size); 268 + pci_read_config_dword(pdev, pos + INTEL_DVSEC_TABLE, &table); 269 + 270 + header.tbir = INTEL_DVSEC_TABLE_BAR(table); 271 + header.offset = INTEL_DVSEC_TABLE_OFFSET(table); 272 + 273 + pci_read_config_dword(pdev, pos + PCI_DVSEC_HEADER2, &hdr); 274 + header.id = PCI_DVSEC_HEADER2_ID(hdr); 275 + 276 + ret = intel_vsec_add_dev(pdev, &header, quirks); 277 + if (ret) 278 + continue; 279 + 280 + have_devices = true; 281 + } while (true); 282 + 283 + return have_devices; 284 + } 285 + 286 + static bool intel_vsec_walk_vsec(struct pci_dev *pdev, unsigned long quirks) 287 + { 288 + bool have_devices = false; 289 + int pos = 0; 290 + 291 + do { 292 + struct intel_vsec_header header; 293 + u32 table, hdr; 294 + int ret; 295 + 296 + pos = pci_find_next_ext_capability(pdev, pos, PCI_EXT_CAP_ID_VNDR); 297 + if (!pos) 298 + break; 299 + 300 + pci_read_config_dword(pdev, pos + PCI_VNDR_HEADER, &hdr); 301 + 302 + /* Support only revision 1 */ 303 + header.rev = PCI_VNDR_HEADER_REV(hdr); 304 + if (header.rev != 1) { 305 + dev_info(&pdev->dev, "Unsupported VSEC revision %d\n", header.rev); 306 + continue; 307 + } 308 + 309 + header.id = PCI_VNDR_HEADER_ID(hdr); 310 + header.length = PCI_VNDR_HEADER_LEN(hdr); 311 + 312 + /* entry, size, and table offset are the same as DVSEC */ 313 + pci_read_config_byte(pdev, pos + INTEL_DVSEC_ENTRIES, &header.num_entries); 314 + pci_read_config_byte(pdev, pos + INTEL_DVSEC_SIZE, &header.entry_size); 315 + 
pci_read_config_dword(pdev, pos + INTEL_DVSEC_TABLE, &table); 316 + 317 + header.tbir = INTEL_DVSEC_TABLE_BAR(table); 318 + header.offset = INTEL_DVSEC_TABLE_OFFSET(table); 319 + 320 + ret = intel_vsec_add_dev(pdev, &header, quirks); 321 + if (ret) 322 + continue; 323 + 324 + have_devices = true; 325 + } while (true); 326 + 327 + return have_devices; 328 + } 329 + 330 + static int intel_vsec_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) 331 + { 332 + struct intel_vsec_platform_info *info; 333 + bool have_devices = false; 334 + unsigned long quirks = 0; 335 + int ret; 336 + 337 + ret = pcim_enable_device(pdev); 338 + if (ret) 339 + return ret; 340 + 341 + info = (struct intel_vsec_platform_info *)id->driver_data; 342 + if (info) 343 + quirks = info->quirks; 344 + 345 + if (intel_vsec_walk_dvsec(pdev, quirks)) 346 + have_devices = true; 347 + 348 + if (intel_vsec_walk_vsec(pdev, quirks)) 349 + have_devices = true; 350 + 351 + if (info && (info->quirks & VSEC_QUIRK_NO_DVSEC) && 352 + intel_vsec_walk_header(pdev, quirks, info->capabilities)) 353 + have_devices = true; 354 + 355 + if (!have_devices) 356 + return -ENODEV; 357 + 358 + return 0; 359 + } 360 + 361 + /* TGL info */ 362 + static const struct intel_vsec_platform_info tgl_info = { 363 + .quirks = VSEC_QUIRK_NO_WATCHER | VSEC_QUIRK_NO_CRASHLOG | VSEC_QUIRK_TABLE_SHIFT, 364 + }; 365 + 366 + /* DG1 info */ 367 + static struct intel_vsec_header dg1_telemetry = { 368 + .length = 0x10, 369 + .id = 2, 370 + .num_entries = 1, 371 + .entry_size = 3, 372 + .tbir = 0, 373 + .offset = 0x466000, 374 + }; 375 + 376 + static struct intel_vsec_header *dg1_capabilities[] = { 377 + &dg1_telemetry, 378 + NULL 379 + }; 380 + 381 + static const struct intel_vsec_platform_info dg1_info = { 382 + .capabilities = dg1_capabilities, 383 + .quirks = VSEC_QUIRK_NO_DVSEC, 384 + }; 385 + 386 + #define PCI_DEVICE_ID_INTEL_VSEC_ADL 0x467d 387 + #define PCI_DEVICE_ID_INTEL_VSEC_DG1 0x490e 388 + #define 
PCI_DEVICE_ID_INTEL_VSEC_OOBMSM 0x09a7 389 + #define PCI_DEVICE_ID_INTEL_VSEC_TGL 0x9a0d 390 + static const struct pci_device_id intel_vsec_pci_ids[] = { 391 + { PCI_DEVICE_DATA(INTEL, VSEC_ADL, &tgl_info) }, 392 + { PCI_DEVICE_DATA(INTEL, VSEC_DG1, &dg1_info) }, 393 + { PCI_DEVICE_DATA(INTEL, VSEC_OOBMSM, NULL) }, 394 + { PCI_DEVICE_DATA(INTEL, VSEC_TGL, &tgl_info) }, 395 + { } 396 + }; 397 + MODULE_DEVICE_TABLE(pci, intel_vsec_pci_ids); 398 + 399 + static struct pci_driver intel_vsec_pci_driver = { 400 + .name = "intel_vsec", 401 + .id_table = intel_vsec_pci_ids, 402 + .probe = intel_vsec_pci_probe, 403 + }; 404 + module_pci_driver(intel_vsec_pci_driver); 405 + 406 + MODULE_AUTHOR("David E. Box <david.e.box@linux.intel.com>"); 407 + MODULE_DESCRIPTION("Intel Extended Capabilities auxiliary bus driver"); 408 + MODULE_LICENSE("GPL v2");
+43
drivers/platform/x86/intel/vsec.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _VSEC_H 3 + #define _VSEC_H 4 + 5 + #include <linux/auxiliary_bus.h> 6 + #include <linux/bits.h> 7 + 8 + struct pci_dev; 9 + struct resource; 10 + 11 + enum intel_vsec_quirks { 12 + /* Watcher feature not supported */ 13 + VSEC_QUIRK_NO_WATCHER = BIT(0), 14 + 15 + /* Crashlog feature not supported */ 16 + VSEC_QUIRK_NO_CRASHLOG = BIT(1), 17 + 18 + /* Use shift instead of mask to read discovery table offset */ 19 + VSEC_QUIRK_TABLE_SHIFT = BIT(2), 20 + 21 + /* DVSEC not present (provided in driver data) */ 22 + VSEC_QUIRK_NO_DVSEC = BIT(3), 23 + }; 24 + 25 + struct intel_vsec_device { 26 + struct auxiliary_device auxdev; 27 + struct pci_dev *pcidev; 28 + struct resource *resource; 29 + struct ida *ida; 30 + unsigned long quirks; 31 + int num_resources; 32 + }; 33 + 34 + static inline struct intel_vsec_device *dev_to_ivdev(struct device *dev) 35 + { 36 + return container_of(dev, struct intel_vsec_device, auxdev.dev); 37 + } 38 + 39 + static inline struct intel_vsec_device *auxdev_to_ivdev(struct auxiliary_device *auxdev) 40 + { 41 + return container_of(auxdev, struct intel_vsec_device, auxdev); 42 + } 43 + #endif
+4 -4
drivers/soundwire/intel.c
··· 1293 1293 bus->ops = &sdw_intel_ops; 1294 1294 1295 1295 /* set driver data, accessed by snd_soc_dai_get_drvdata() */ 1296 - dev_set_drvdata(dev, cdns); 1296 + auxiliary_set_drvdata(auxdev, cdns); 1297 1297 1298 1298 /* use generic bandwidth allocation algorithm */ 1299 1299 sdw->cdns.bus.compute_params = sdw_compute_params; ··· 1321 1321 { 1322 1322 struct sdw_cdns_stream_config config; 1323 1323 struct device *dev = &auxdev->dev; 1324 - struct sdw_cdns *cdns = dev_get_drvdata(dev); 1324 + struct sdw_cdns *cdns = auxiliary_get_drvdata(auxdev); 1325 1325 struct sdw_intel *sdw = cdns_to_intel(cdns); 1326 1326 struct sdw_bus *bus = &cdns->bus; 1327 1327 int link_flags; ··· 1463 1463 static void intel_link_remove(struct auxiliary_device *auxdev) 1464 1464 { 1465 1465 struct device *dev = &auxdev->dev; 1466 - struct sdw_cdns *cdns = dev_get_drvdata(dev); 1466 + struct sdw_cdns *cdns = auxiliary_get_drvdata(auxdev); 1467 1467 struct sdw_intel *sdw = cdns_to_intel(cdns); 1468 1468 struct sdw_bus *bus = &cdns->bus; 1469 1469 ··· 1488 1488 void __iomem *shim; 1489 1489 u16 wake_sts; 1490 1490 1491 - sdw = dev_get_drvdata(dev); 1491 + sdw = auxiliary_get_drvdata(auxdev); 1492 1492 bus = &sdw->cdns.bus; 1493 1493 1494 1494 if (bus->prop.hw_disabled || !sdw->startup_done) {
+1 -1
drivers/soundwire/intel_init.c
··· 244 244 goto err; 245 245 246 246 link = &ldev->link_res; 247 - link->cdns = dev_get_drvdata(&ldev->auxdev.dev); 247 + link->cdns = auxiliary_get_drvdata(&ldev->auxdev); 248 248 249 249 if (!link->cdns) { 250 250 dev_err(&adev->dev, "failed to get link->cdns\n");
+2 -2
drivers/vdpa/mlx5/net/mlx5_vnet.c
··· 2683 2683 if (err) 2684 2684 goto reg_err; 2685 2685 2686 - dev_set_drvdata(&adev->dev, mgtdev); 2686 + auxiliary_set_drvdata(adev, mgtdev); 2687 2687 2688 2688 return 0; 2689 2689 ··· 2696 2696 { 2697 2697 struct mlx5_vdpa_mgmtdev *mgtdev; 2698 2698 2699 - mgtdev = dev_get_drvdata(&adev->dev); 2699 + mgtdev = auxiliary_get_drvdata(adev); 2700 2700 vdpa_mgmtdev_unregister(&mgtdev->mgtdev); 2701 2701 kfree(mgtdev); 2702 2702 }
+1 -1
fs/debugfs/file.c
··· 147 147 struct file *filp, 148 148 const struct file_operations *real_fops) 149 149 { 150 - if ((inode->i_mode & 07777) == 0444 && 150 + if ((inode->i_mode & 07777 & ~0444) == 0 && 151 151 !(filp->f_mode & FMODE_WRITE) && 152 152 !real_fops->unlocked_ioctl && 153 153 !real_fops->compat_ioctl &&
+1 -2
fs/dlm/lockspace.c
··· 216 216 return ls->ls_uevent_result; 217 217 } 218 218 219 - static int dlm_uevent(struct kset *kset, struct kobject *kobj, 220 - struct kobj_uevent_env *env) 219 + static int dlm_uevent(struct kobject *kobj, struct kobj_uevent_env *env) 221 220 { 222 221 struct dlm_ls *ls = container_of(kobj, struct dlm_ls, ls_kobj); 223 222
+1 -2
fs/gfs2/sys.c
··· 767 767 wait_for_completion(&sdp->sd_kobj_unregister); 768 768 } 769 769 770 - static int gfs2_uevent(struct kset *kset, struct kobject *kobj, 771 - struct kobj_uevent_env *env) 770 + static int gfs2_uevent(struct kobject *kobj, struct kobj_uevent_env *env) 772 771 { 773 772 struct gfs2_sbd *sdp = container_of(kobj, struct gfs2_sbd, sd_kobj); 774 773 struct super_block *s = sdp->sd_vfs;
+72 -46
fs/kernfs/dir.c
··· 17 17 18 18 #include "kernfs-internal.h" 19 19 20 - DECLARE_RWSEM(kernfs_rwsem); 21 20 static DEFINE_SPINLOCK(kernfs_rename_lock); /* kn->parent and ->name */ 22 21 static char kernfs_pr_cont_buf[PATH_MAX]; /* protected by rename_lock */ 23 22 static DEFINE_SPINLOCK(kernfs_idr_lock); /* root->ino_idr */ ··· 25 26 26 27 static bool kernfs_active(struct kernfs_node *kn) 27 28 { 28 - lockdep_assert_held(&kernfs_rwsem); 29 + lockdep_assert_held(&kernfs_root(kn)->kernfs_rwsem); 29 30 return atomic_read(&kn->active) >= 0; 30 31 } 31 32 ··· 456 457 * return after draining is complete. 457 458 */ 458 459 static void kernfs_drain(struct kernfs_node *kn) 459 - __releases(&kernfs_rwsem) __acquires(&kernfs_rwsem) 460 + __releases(&kernfs_root(kn)->kernfs_rwsem) 461 + __acquires(&kernfs_root(kn)->kernfs_rwsem) 460 462 { 461 463 struct kernfs_root *root = kernfs_root(kn); 462 464 463 - lockdep_assert_held_write(&kernfs_rwsem); 465 + lockdep_assert_held_write(&root->kernfs_rwsem); 464 466 WARN_ON_ONCE(kernfs_active(kn)); 465 467 466 - up_write(&kernfs_rwsem); 468 + up_write(&root->kernfs_rwsem); 467 469 468 470 if (kernfs_lockdep(kn)) { 469 471 rwsem_acquire(&kn->dep_map, 0, 0, _RET_IP_); ··· 483 483 484 484 kernfs_drain_open_files(kn); 485 485 486 - down_write(&kernfs_rwsem); 486 + down_write(&root->kernfs_rwsem); 487 487 } 488 488 489 489 /** ··· 718 718 int kernfs_add_one(struct kernfs_node *kn) 719 719 { 720 720 struct kernfs_node *parent = kn->parent; 721 + struct kernfs_root *root = kernfs_root(parent); 721 722 struct kernfs_iattrs *ps_iattr; 722 723 bool has_ns; 723 724 int ret; 724 725 725 - down_write(&kernfs_rwsem); 726 + down_write(&root->kernfs_rwsem); 726 727 727 728 ret = -EINVAL; 728 729 has_ns = kernfs_ns_enabled(parent); ··· 754 753 ps_iattr->ia_mtime = ps_iattr->ia_ctime; 755 754 } 756 755 757 - up_write(&kernfs_rwsem); 756 + up_write(&root->kernfs_rwsem); 758 757 759 758 /* 760 759 * Activate the new node unless CREATE_DEACTIVATED is requested. 
··· 768 767 return 0; 769 768 770 769 out_unlock: 771 - up_write(&kernfs_rwsem); 770 + up_write(&root->kernfs_rwsem); 772 771 return ret; 773 772 } 774 773 ··· 789 788 bool has_ns = kernfs_ns_enabled(parent); 790 789 unsigned int hash; 791 790 792 - lockdep_assert_held(&kernfs_rwsem); 791 + lockdep_assert_held(&kernfs_root(parent)->kernfs_rwsem); 793 792 794 793 if (has_ns != (bool)ns) { 795 794 WARN(1, KERN_WARNING "kernfs: ns %s in '%s' for '%s'\n", ··· 821 820 size_t len; 822 821 char *p, *name; 823 822 824 - lockdep_assert_held_read(&kernfs_rwsem); 823 + lockdep_assert_held_read(&kernfs_root(parent)->kernfs_rwsem); 825 824 826 825 /* grab kernfs_rename_lock to piggy back on kernfs_pr_cont_buf */ 827 826 spin_lock_irq(&kernfs_rename_lock); ··· 860 859 const char *name, const void *ns) 861 860 { 862 861 struct kernfs_node *kn; 862 + struct kernfs_root *root = kernfs_root(parent); 863 863 864 - down_read(&kernfs_rwsem); 864 + down_read(&root->kernfs_rwsem); 865 865 kn = kernfs_find_ns(parent, name, ns); 866 866 kernfs_get(kn); 867 - up_read(&kernfs_rwsem); 867 + up_read(&root->kernfs_rwsem); 868 868 869 869 return kn; 870 870 } ··· 885 883 const char *path, const void *ns) 886 884 { 887 885 struct kernfs_node *kn; 886 + struct kernfs_root *root = kernfs_root(parent); 888 887 889 - down_read(&kernfs_rwsem); 888 + down_read(&root->kernfs_rwsem); 890 889 kn = kernfs_walk_ns(parent, path, ns); 891 890 kernfs_get(kn); 892 - up_read(&kernfs_rwsem); 891 + up_read(&root->kernfs_rwsem); 893 892 894 893 return kn; 895 894 } ··· 915 912 return ERR_PTR(-ENOMEM); 916 913 917 914 idr_init(&root->ino_idr); 915 + init_rwsem(&root->kernfs_rwsem); 918 916 INIT_LIST_HEAD(&root->supers); 919 917 920 918 /* ··· 961 957 */ 962 958 void kernfs_destroy_root(struct kernfs_root *root) 963 959 { 964 - kernfs_remove(root->kn); /* will also free @root */ 960 + /* 961 + * kernfs_remove holds kernfs_rwsem from the root so the root 962 + * shouldn't be freed during the operation. 
963 + */ 964 + kernfs_get(root->kn); 965 + kernfs_remove(root->kn); 966 + kernfs_put(root->kn); /* will also free @root */ 965 967 } 966 968 967 969 /** ··· 1045 1035 static int kernfs_dop_revalidate(struct dentry *dentry, unsigned int flags) 1046 1036 { 1047 1037 struct kernfs_node *kn; 1038 + struct kernfs_root *root; 1048 1039 1049 1040 if (flags & LOOKUP_RCU) 1050 1041 return -ECHILD; ··· 1057 1046 /* If the kernfs parent node has changed discard and 1058 1047 * proceed to ->lookup. 1059 1048 */ 1060 - down_read(&kernfs_rwsem); 1061 1049 spin_lock(&dentry->d_lock); 1062 1050 parent = kernfs_dentry_node(dentry->d_parent); 1063 1051 if (parent) { 1052 + spin_unlock(&dentry->d_lock); 1053 + root = kernfs_root(parent); 1054 + down_read(&root->kernfs_rwsem); 1064 1055 if (kernfs_dir_changed(parent, dentry)) { 1065 - spin_unlock(&dentry->d_lock); 1066 - up_read(&kernfs_rwsem); 1056 + up_read(&root->kernfs_rwsem); 1067 1057 return 0; 1068 1058 } 1069 - } 1070 - spin_unlock(&dentry->d_lock); 1071 - up_read(&kernfs_rwsem); 1059 + up_read(&root->kernfs_rwsem); 1060 + } else 1061 + spin_unlock(&dentry->d_lock); 1072 1062 1073 1063 /* The kernfs parent node hasn't changed, leave the 1074 1064 * dentry negative and return success. 
··· 1078 1066 } 1079 1067 1080 1068 kn = kernfs_dentry_node(dentry); 1081 - down_read(&kernfs_rwsem); 1069 + root = kernfs_root(kn); 1070 + down_read(&root->kernfs_rwsem); 1082 1071 1083 1072 /* The kernfs node has been deactivated */ 1084 1073 if (!kernfs_active(kn)) ··· 1098 1085 kernfs_info(dentry->d_sb)->ns != kn->ns) 1099 1086 goto out_bad; 1100 1087 1101 - up_read(&kernfs_rwsem); 1088 + up_read(&root->kernfs_rwsem); 1102 1089 return 1; 1103 1090 out_bad: 1104 - up_read(&kernfs_rwsem); 1091 + up_read(&root->kernfs_rwsem); 1105 1092 return 0; 1106 1093 } 1107 1094 ··· 1115 1102 { 1116 1103 struct kernfs_node *parent = dir->i_private; 1117 1104 struct kernfs_node *kn; 1105 + struct kernfs_root *root; 1118 1106 struct inode *inode = NULL; 1119 1107 const void *ns = NULL; 1120 1108 1121 - down_read(&kernfs_rwsem); 1109 + root = kernfs_root(parent); 1110 + down_read(&root->kernfs_rwsem); 1122 1111 if (kernfs_ns_enabled(parent)) 1123 1112 ns = kernfs_info(dir->i_sb)->ns; 1124 1113 ··· 1131 1116 * create a negative. 
1132 1117 */ 1133 1118 if (!kernfs_active(kn)) { 1134 - up_read(&kernfs_rwsem); 1119 + up_read(&root->kernfs_rwsem); 1135 1120 return NULL; 1136 1121 } 1137 1122 inode = kernfs_get_inode(dir->i_sb, kn); ··· 1146 1131 */ 1147 1132 if (!IS_ERR(inode)) 1148 1133 kernfs_set_rev(parent, dentry); 1149 - up_read(&kernfs_rwsem); 1134 + up_read(&root->kernfs_rwsem); 1150 1135 1151 1136 /* instantiate and hash (possibly negative) dentry */ 1152 1137 return d_splice_alias(inode, dentry); ··· 1269 1254 { 1270 1255 struct rb_node *rbn; 1271 1256 1272 - lockdep_assert_held_write(&kernfs_rwsem); 1257 + lockdep_assert_held_write(&kernfs_root(root)->kernfs_rwsem); 1273 1258 1274 1259 /* if first iteration, visit leftmost descendant which may be root */ 1275 1260 if (!pos) ··· 1304 1289 void kernfs_activate(struct kernfs_node *kn) 1305 1290 { 1306 1291 struct kernfs_node *pos; 1292 + struct kernfs_root *root = kernfs_root(kn); 1307 1293 1308 - down_write(&kernfs_rwsem); 1294 + down_write(&root->kernfs_rwsem); 1309 1295 1310 1296 pos = NULL; 1311 1297 while ((pos = kernfs_next_descendant_post(pos, kn))) { ··· 1320 1304 pos->flags |= KERNFS_ACTIVATED; 1321 1305 } 1322 1306 1323 - up_write(&kernfs_rwsem); 1307 + up_write(&root->kernfs_rwsem); 1324 1308 } 1325 1309 1326 1310 static void __kernfs_remove(struct kernfs_node *kn) 1327 1311 { 1328 1312 struct kernfs_node *pos; 1329 1313 1330 - lockdep_assert_held_write(&kernfs_rwsem); 1314 + lockdep_assert_held_write(&kernfs_root(kn)->kernfs_rwsem); 1331 1315 1332 1316 /* 1333 1317 * Short-circuit if non-root @kn has already finished removal. 
··· 1397 1381 */ 1398 1382 void kernfs_remove(struct kernfs_node *kn) 1399 1383 { 1400 - down_write(&kernfs_rwsem); 1384 + struct kernfs_root *root = kernfs_root(kn); 1385 + 1386 + down_write(&root->kernfs_rwsem); 1401 1387 __kernfs_remove(kn); 1402 - up_write(&kernfs_rwsem); 1388 + up_write(&root->kernfs_rwsem); 1403 1389 } 1404 1390 1405 1391 /** ··· 1487 1469 bool kernfs_remove_self(struct kernfs_node *kn) 1488 1470 { 1489 1471 bool ret; 1472 + struct kernfs_root *root = kernfs_root(kn); 1490 1473 1491 - down_write(&kernfs_rwsem); 1474 + down_write(&root->kernfs_rwsem); 1492 1475 kernfs_break_active_protection(kn); 1493 1476 1494 1477 /* ··· 1517 1498 atomic_read(&kn->active) == KN_DEACTIVATED_BIAS) 1518 1499 break; 1519 1500 1520 - up_write(&kernfs_rwsem); 1501 + up_write(&root->kernfs_rwsem); 1521 1502 schedule(); 1522 - down_write(&kernfs_rwsem); 1503 + down_write(&root->kernfs_rwsem); 1523 1504 } 1524 1505 finish_wait(waitq, &wait); 1525 1506 WARN_ON_ONCE(!RB_EMPTY_NODE(&kn->rb)); ··· 1532 1513 */ 1533 1514 kernfs_unbreak_active_protection(kn); 1534 1515 1535 - up_write(&kernfs_rwsem); 1516 + up_write(&root->kernfs_rwsem); 1536 1517 return ret; 1537 1518 } 1538 1519 ··· 1549 1530 const void *ns) 1550 1531 { 1551 1532 struct kernfs_node *kn; 1533 + struct kernfs_root *root; 1552 1534 1553 1535 if (!parent) { 1554 1536 WARN(1, KERN_WARNING "kernfs: can not remove '%s', no directory\n", ··· 1557 1537 return -ENOENT; 1558 1538 } 1559 1539 1560 - down_write(&kernfs_rwsem); 1540 + root = kernfs_root(parent); 1541 + down_write(&root->kernfs_rwsem); 1561 1542 1562 1543 kn = kernfs_find_ns(parent, name, ns); 1563 1544 if (kn) 1564 1545 __kernfs_remove(kn); 1565 1546 1566 - up_write(&kernfs_rwsem); 1547 + up_write(&root->kernfs_rwsem); 1567 1548 1568 1549 if (kn) 1569 1550 return 0; ··· 1583 1562 const char *new_name, const void *new_ns) 1584 1563 { 1585 1564 struct kernfs_node *old_parent; 1565 + struct kernfs_root *root; 1586 1566 const char *old_name = NULL; 1587 
1567 int error; 1588 1568 ··· 1591 1569 if (!kn->parent) 1592 1570 return -EINVAL; 1593 1571 1594 - down_write(&kernfs_rwsem); 1572 + root = kernfs_root(kn); 1573 + down_write(&root->kernfs_rwsem); 1595 1574 1596 1575 error = -ENOENT; 1597 1576 if (!kernfs_active(kn) || !kernfs_active(new_parent) || ··· 1646 1623 1647 1624 error = 0; 1648 1625 out: 1649 - up_write(&kernfs_rwsem); 1626 + up_write(&root->kernfs_rwsem); 1650 1627 return error; 1651 1628 } 1652 1629 ··· 1717 1694 struct dentry *dentry = file->f_path.dentry; 1718 1695 struct kernfs_node *parent = kernfs_dentry_node(dentry); 1719 1696 struct kernfs_node *pos = file->private_data; 1697 + struct kernfs_root *root; 1720 1698 const void *ns = NULL; 1721 1699 1722 1700 if (!dir_emit_dots(file, ctx)) 1723 1701 return 0; 1724 - down_read(&kernfs_rwsem); 1702 + 1703 + root = kernfs_root(parent); 1704 + down_read(&root->kernfs_rwsem); 1725 1705 1726 1706 if (kernfs_ns_enabled(parent)) 1727 1707 ns = kernfs_info(dentry->d_sb)->ns; ··· 1741 1715 file->private_data = pos; 1742 1716 kernfs_get(pos); 1743 1717 1744 - up_read(&kernfs_rwsem); 1718 + up_read(&root->kernfs_rwsem); 1745 1719 if (!dir_emit(ctx, name, len, ino, type)) 1746 1720 return 0; 1747 - down_read(&kernfs_rwsem); 1721 + down_read(&root->kernfs_rwsem); 1748 1722 } 1749 - up_read(&kernfs_rwsem); 1723 + up_read(&root->kernfs_rwsem); 1750 1724 file->private_data = NULL; 1751 1725 ctx->pos = INT_MAX; 1752 1726 return 0;
+4 -2
fs/kernfs/file.c
··· 847 847 { 848 848 struct kernfs_node *kn; 849 849 struct kernfs_super_info *info; 850 + struct kernfs_root *root; 850 851 repeat: 851 852 /* pop one off the notify_list */ 852 853 spin_lock_irq(&kernfs_notify_lock); ··· 860 859 kn->attr.notify_next = NULL; 861 860 spin_unlock_irq(&kernfs_notify_lock); 862 861 862 + root = kernfs_root(kn); 863 863 /* kick fsnotify */ 864 - down_write(&kernfs_rwsem); 864 + down_write(&root->kernfs_rwsem); 865 865 866 866 list_for_each_entry(info, &kernfs_root(kn)->supers, node) { 867 867 struct kernfs_node *parent; ··· 900 898 iput(inode); 901 899 } 902 900 903 - up_write(&kernfs_rwsem); 901 + up_write(&root->kernfs_rwsem); 904 902 kernfs_put(kn); 905 903 goto repeat; 906 904 }
+14 -8
fs/kernfs/inode.c
··· 99 99 int kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr) 100 100 { 101 101 int ret; 102 + struct kernfs_root *root = kernfs_root(kn); 102 103 103 - down_write(&kernfs_rwsem); 104 + down_write(&root->kernfs_rwsem); 104 105 ret = __kernfs_setattr(kn, iattr); 105 - up_write(&kernfs_rwsem); 106 + up_write(&root->kernfs_rwsem); 106 107 return ret; 107 108 } 108 109 ··· 112 111 { 113 112 struct inode *inode = d_inode(dentry); 114 113 struct kernfs_node *kn = inode->i_private; 114 + struct kernfs_root *root; 115 115 int error; 116 116 117 117 if (!kn) 118 118 return -EINVAL; 119 119 120 - down_write(&kernfs_rwsem); 120 + root = kernfs_root(kn); 121 + down_write(&root->kernfs_rwsem); 121 122 error = setattr_prepare(&init_user_ns, dentry, iattr); 122 123 if (error) 123 124 goto out; ··· 132 129 setattr_copy(&init_user_ns, inode, iattr); 133 130 134 131 out: 135 - up_write(&kernfs_rwsem); 132 + up_write(&root->kernfs_rwsem); 136 133 return error; 137 134 } 138 135 ··· 187 184 { 188 185 struct inode *inode = d_inode(path->dentry); 189 186 struct kernfs_node *kn = inode->i_private; 187 + struct kernfs_root *root = kernfs_root(kn); 190 188 191 - down_read(&kernfs_rwsem); 189 + down_read(&root->kernfs_rwsem); 192 190 spin_lock(&inode->i_lock); 193 191 kernfs_refresh_inode(kn, inode); 194 192 generic_fillattr(&init_user_ns, inode, stat); 195 193 spin_unlock(&inode->i_lock); 196 - up_read(&kernfs_rwsem); 194 + up_read(&root->kernfs_rwsem); 197 195 198 196 return 0; 199 197 } ··· 278 274 struct inode *inode, int mask) 279 275 { 280 276 struct kernfs_node *kn; 277 + struct kernfs_root *root; 281 278 int ret; 282 279 283 280 if (mask & MAY_NOT_BLOCK) 284 281 return -ECHILD; 285 282 286 283 kn = inode->i_private; 284 + root = kernfs_root(kn); 287 285 288 - down_read(&kernfs_rwsem); 286 + down_read(&root->kernfs_rwsem); 289 287 spin_lock(&inode->i_lock); 290 288 kernfs_refresh_inode(kn, inode); 291 289 ret = generic_permission(&init_user_ns, inode, mask); 292 290 
spin_unlock(&inode->i_lock); 293 - up_read(&kernfs_rwsem); 291 + up_read(&root->kernfs_rwsem); 294 292 295 293 return ret; 296 294 }
+9 -6
fs/kernfs/mount.c
··· 236 236 static int kernfs_fill_super(struct super_block *sb, struct kernfs_fs_context *kfc) 237 237 { 238 238 struct kernfs_super_info *info = kernfs_info(sb); 239 + struct kernfs_root *kf_root = kfc->root; 239 240 struct inode *inode; 240 241 struct dentry *root; 241 242 ··· 256 255 sb->s_shrink.seeks = 0; 257 256 258 257 /* get root inode, initialize and unlock it */ 259 - down_read(&kernfs_rwsem); 258 + down_read(&kf_root->kernfs_rwsem); 260 259 inode = kernfs_get_inode(sb, info->root->kn); 261 - up_read(&kernfs_rwsem); 260 + up_read(&kf_root->kernfs_rwsem); 262 261 if (!inode) { 263 262 pr_debug("kernfs: could not get root inode\n"); 264 263 return -ENOMEM; ··· 335 334 336 335 if (!sb->s_root) { 337 336 struct kernfs_super_info *info = kernfs_info(sb); 337 + struct kernfs_root *root = kfc->root; 338 338 339 339 kfc->new_sb_created = true; 340 340 ··· 346 344 } 347 345 sb->s_flags |= SB_ACTIVE; 348 346 349 - down_write(&kernfs_rwsem); 347 + down_write(&root->kernfs_rwsem); 350 348 list_add(&info->node, &info->root->supers); 351 - up_write(&kernfs_rwsem); 349 + up_write(&root->kernfs_rwsem); 352 350 } 353 351 354 352 fc->root = dget(sb->s_root); ··· 373 371 void kernfs_kill_sb(struct super_block *sb) 374 372 { 375 373 struct kernfs_super_info *info = kernfs_info(sb); 374 + struct kernfs_root *root = info->root; 376 375 377 - down_write(&kernfs_rwsem); 376 + down_write(&root->kernfs_rwsem); 378 377 list_del(&info->node); 379 - up_write(&kernfs_rwsem); 378 + up_write(&root->kernfs_rwsem); 380 379 381 380 /* 382 381 * Remove the superblock from fs_supers/s_instances
+3 -2
fs/kernfs/symlink.c
··· 113 113 struct kernfs_node *kn = inode->i_private; 114 114 struct kernfs_node *parent = kn->parent; 115 115 struct kernfs_node *target = kn->symlink.target_kn; 116 + struct kernfs_root *root = kernfs_root(parent); 116 117 int error; 117 118 118 - down_read(&kernfs_rwsem); 119 + down_read(&root->kernfs_rwsem); 119 120 error = kernfs_get_target_path(parent, target, path); 120 - up_read(&kernfs_rwsem); 121 + up_read(&root->kernfs_rwsem); 121 122 122 123 return error; 123 124 }
+10 -3
fs/nilfs2/sysfs.c
··· 57 57 complete(&subgroups->sg_##name##_kobj_unregister); \ 58 58 } \ 59 59 static struct kobj_type nilfs_##name##_ktype = { \ 60 - .default_attrs = nilfs_##name##_attrs, \ 60 + .default_groups = nilfs_##name##_groups, \ 61 61 .sysfs_ops = &nilfs_##name##_attr_ops, \ 62 62 .release = nilfs_##name##_attr_release, \ 63 63 } ··· 129 129 NILFS_SNAPSHOT_ATTR_LIST(README), 130 130 NULL, 131 131 }; 132 + ATTRIBUTE_GROUPS(nilfs_snapshot); 132 133 133 134 static ssize_t nilfs_snapshot_attr_show(struct kobject *kobj, 134 135 struct attribute *attr, char *buf) ··· 167 166 }; 168 167 169 168 static struct kobj_type nilfs_snapshot_ktype = { 170 - .default_attrs = nilfs_snapshot_attrs, 169 + .default_groups = nilfs_snapshot_groups, 171 170 .sysfs_ops = &nilfs_snapshot_attr_ops, 172 171 .release = nilfs_snapshot_attr_release, 173 172 }; ··· 227 226 NILFS_MOUNTED_SNAPSHOTS_ATTR_LIST(README), 228 227 NULL, 229 228 }; 229 + ATTRIBUTE_GROUPS(nilfs_mounted_snapshots); 230 230 231 231 NILFS_DEV_INT_GROUP_OPS(mounted_snapshots, dev); 232 232 NILFS_DEV_INT_GROUP_TYPE(mounted_snapshots, dev); ··· 341 339 NILFS_CHECKPOINTS_ATTR_LIST(README), 342 340 NULL, 343 341 }; 342 + ATTRIBUTE_GROUPS(nilfs_checkpoints); 344 343 345 344 NILFS_DEV_INT_GROUP_OPS(checkpoints, dev); 346 345 NILFS_DEV_INT_GROUP_TYPE(checkpoints, dev); ··· 431 428 NILFS_SEGMENTS_ATTR_LIST(README), 432 429 NULL, 433 430 }; 431 + ATTRIBUTE_GROUPS(nilfs_segments); 434 432 435 433 NILFS_DEV_INT_GROUP_OPS(segments, dev); 436 434 NILFS_DEV_INT_GROUP_TYPE(segments, dev); ··· 693 689 NILFS_SEGCTOR_ATTR_LIST(README), 694 690 NULL, 695 691 }; 692 + ATTRIBUTE_GROUPS(nilfs_segctor); 696 693 697 694 NILFS_DEV_INT_GROUP_OPS(segctor, dev); 698 695 NILFS_DEV_INT_GROUP_TYPE(segctor, dev); ··· 821 816 NILFS_SUPERBLOCK_ATTR_LIST(README), 822 817 NULL, 823 818 }; 819 + ATTRIBUTE_GROUPS(nilfs_superblock); 824 820 825 821 NILFS_DEV_INT_GROUP_OPS(superblock, dev); 826 822 NILFS_DEV_INT_GROUP_TYPE(superblock, dev); ··· 930 924 
NILFS_DEV_ATTR_LIST(README), 931 925 NULL, 932 926 }; 927 + ATTRIBUTE_GROUPS(nilfs_dev); 933 928 934 929 static ssize_t nilfs_dev_attr_show(struct kobject *kobj, 935 930 struct attribute *attr, char *buf) ··· 968 961 }; 969 962 970 963 static struct kobj_type nilfs_dev_ktype = { 971 - .default_attrs = nilfs_dev_attrs, 964 + .default_groups = nilfs_dev_groups, 972 965 .sysfs_ops = &nilfs_dev_attr_ops, 973 966 .release = nilfs_dev_attr_release, 974 967 };
+174
include/linux/auxiliary_bus.h
··· 11 11 #include <linux/device.h> 12 12 #include <linux/mod_devicetable.h> 13 13 14 + /** 15 + * DOC: DEVICE_LIFESPAN 16 + * 17 + * The registering driver is the entity that allocates memory for the 18 + * auxiliary_device and registers it on the auxiliary bus. It is important to 19 + * note that, as opposed to the platform bus, the registering driver is wholly 20 + * responsible for the management of the memory used for the device object. 21 + * 22 + * To be clear the memory for the auxiliary_device is freed in the release() 23 + * callback defined by the registering driver. The registering driver should 24 + * only call auxiliary_device_delete() and then auxiliary_device_uninit() when 25 + * it is done with the device. The release() function is then automatically 26 + * called if and when other code releases their reference to the devices. 27 + * 28 + * A parent object, defined in the shared header file, contains the 29 + * auxiliary_device. It also contains a pointer to the shared object(s), which 30 + * also is defined in the shared header. Both the parent object and the shared 31 + * object(s) are allocated by the registering driver. This layout allows the 32 + * auxiliary_driver's registering module to perform a container_of() call to go 33 + * from the pointer to the auxiliary_device, that is passed during the call to 34 + * the auxiliary_driver's probe function, up to the parent object, and then 35 + * have access to the shared object(s). 36 + * 37 + * The memory for the shared object(s) must have a lifespan equal to, or 38 + * greater than, the lifespan of the memory for the auxiliary_device. The 39 + * auxiliary_driver should only consider that the shared object is valid as 40 + * long as the auxiliary_device is still registered on the auxiliary bus. It 41 + * is up to the registering driver to manage (e.g. free or keep available) the 42 + * memory for the shared object beyond the life of the auxiliary_device. 
43 + * 44 + * The registering driver must unregister all auxiliary devices before its own 45 + * driver.remove() is completed. An easy way to ensure this is to use the 46 + * devm_add_action_or_reset() call to register a function against the parent 47 + * device which unregisters the auxiliary device object(s). 48 + * 49 + * Finally, any operations which operate on the auxiliary devices must continue 50 + * to function (if only to return an error) after the registering driver 51 + * unregisters the auxiliary device. 52 + */ 53 + 54 + /** 55 + * struct auxiliary_device - auxiliary device object. 56 + * @dev: Device. 57 + * The release and parent fields of the device structure must be filled 58 + * in. 59 + * @name: Match name found by the auxiliary device driver. 60 + * @id: unique identifier if multiple devices of the same name are exported. 61 + * 62 + * An auxiliary_device represents a part of its parent device's functionality. 63 + * It is given a name that, combined with the registering driver's 64 + * KBUILD_MODNAME, creates a match_name that is used for driver binding, and an 65 + * id that, combined with the match_name, provides a unique name to register with 66 + * the bus subsystem. For example, a driver registering an auxiliary device is 67 + * named 'foo_mod.ko' and the subdevice is named 'foo_dev'. The match name is 68 + * therefore 'foo_mod.foo_dev'. 69 + * 70 + * Registering an auxiliary_device is a three-step process. 71 + * 72 + * First, a 'struct auxiliary_device' needs to be defined or allocated for each 73 + * sub-device desired. The name, id, dev.release, and dev.parent fields of 74 + * this structure must be filled in as follows. 75 + * 76 + * The 'name' field is to be given a name that is recognized by the auxiliary 77 + * driver. If two auxiliary_devices with the same match_name, e.g. 78 + * "foo_mod.foo_dev", are registered onto the bus, they must have unique id 79 + * values (e.g. 
"x" and "y") so that the registered devices' names are 80 + * "foo_mod.foo_dev.x" and "foo_mod.foo_dev.y". If match_name + id are not 81 + * unique, then the device_add fails and generates an error message. 82 + * 83 + * The auxiliary_device.dev.type.release or auxiliary_device.dev.release must 84 + * be populated with a non-NULL pointer to successfully register the 85 + * auxiliary_device. This release call is where resources associated with the 86 + * auxiliary device must be freed, because once the device is placed on the 87 + * bus, the parent driver cannot tell what other code may have a reference to 88 + * this data. 89 + * 90 + * The auxiliary_device.dev.parent should be set, typically to the registering 91 + * driver's device. 92 + * 93 + * Second, call auxiliary_device_init(), which checks several aspects of the 94 + * auxiliary_device struct and performs a device_initialize(). After this step 95 + * completes, any error state must have a call to auxiliary_device_uninit() in 96 + * its resolution path. 97 + * 98 + * The third and final step in registering an auxiliary_device is to perform a 99 + * call to auxiliary_device_add(), which sets the name of the device and adds 100 + * the device to the bus. 101 + * 102 + * .. code-block:: c 103 + * 104 + * #define MY_DEVICE_NAME "foo_dev" 105 + * 106 + * ... 107 + * 108 + * struct auxiliary_device *my_aux_dev = my_aux_dev_alloc(xxx); 109 + * 110 + * // Step 1: 111 + * my_aux_dev->name = MY_DEVICE_NAME; 112 + * my_aux_dev->id = my_unique_id_alloc(xxx); 113 + * my_aux_dev->dev.release = my_aux_dev_release; 114 + * my_aux_dev->dev.parent = my_dev; 115 + * 116 + * // Step 2: 117 + * if (auxiliary_device_init(my_aux_dev)) 118 + * goto fail; 119 + * 120 + * // Step 3: 121 + * if (auxiliary_device_add(my_aux_dev)) { 122 + * auxiliary_device_uninit(my_aux_dev); 123 + * goto fail; 124 + * } 125 + * 126 + * ... 
127 + * 128 + * 129 + * Unregistering an auxiliary_device is a two-step process to mirror the 130 + * register process. First call auxiliary_device_delete(), then call 131 + * auxiliary_device_uninit(). 132 + * 133 + * .. code-block:: c 134 + * 135 + * auxiliary_device_delete(my_dev->my_aux_dev); 136 + * auxiliary_device_uninit(my_dev->my_aux_dev); 137 + */ 14 138 struct auxiliary_device { 15 139 struct device dev; 16 140 const char *name; 17 141 u32 id; 18 142 }; 19 143 144 + /** 145 + * struct auxiliary_driver - Definition of an auxiliary bus driver 146 + * @probe: Called when a matching device is added to the bus. 147 + * @remove: Called when device is removed from the bus. 148 + * @shutdown: Called at shut-down time to quiesce the device. 149 + * @suspend: Called to put the device to sleep mode. Usually to a power state. 150 + * @resume: Called to bring a device from sleep mode. 151 + * @name: Driver name. 152 + * @driver: Core driver structure. 153 + * @id_table: Table of devices this driver should match on the bus. 154 + * 155 + * Auxiliary drivers follow the standard driver model convention, where 156 + * discovery/enumeration is handled by the core, and drivers provide probe() 157 + * and remove() methods. They support power management and shutdown 158 + * notifications using the standard conventions. 159 + * 160 + * Auxiliary drivers register themselves with the bus by calling 161 + * auxiliary_driver_register(). The id_table contains the match_names of 162 + * auxiliary devices that a driver can bind with. 163 + * 164 + * .. 
code-block:: c 165 + * 166 + * static const struct auxiliary_device_id my_auxiliary_id_table[] = { 167 + * { .name = "foo_mod.foo_dev" }, 168 + * {}, 169 + * }; 170 + * 171 + * MODULE_DEVICE_TABLE(auxiliary, my_auxiliary_id_table); 172 + * 173 + * struct auxiliary_driver my_drv = { 174 + * .name = "myauxiliarydrv", 175 + * .id_table = my_auxiliary_id_table, 176 + * .probe = my_drv_probe, 177 + * .remove = my_drv_remove 178 + * }; 179 + */ 20 180 struct auxiliary_driver { 21 181 int (*probe)(struct auxiliary_device *auxdev, const struct auxiliary_device_id *id); 22 182 void (*remove)(struct auxiliary_device *auxdev); ··· 187 27 struct device_driver driver; 188 28 const struct auxiliary_device_id *id_table; 189 29 }; 30 + 31 + static inline void *auxiliary_get_drvdata(struct auxiliary_device *auxdev) 32 + { 33 + return dev_get_drvdata(&auxdev->dev); 34 + } 35 + 36 + static inline void auxiliary_set_drvdata(struct auxiliary_device *auxdev, void *data) 37 + { 38 + dev_set_drvdata(&auxdev->dev, data); 39 + } 190 40 191 41 static inline struct auxiliary_device *to_auxiliary_dev(struct device *dev) 192 42 { ··· 236 66 * Helper macro for auxiliary drivers which do not do anything special in 237 67 * module init/exit. This eliminates a lot of boilerplate. Each module may only 238 68 * use this macro once, and calling it replaces module_init() and module_exit() 69 + * 70 + * .. code-block:: c 71 + * 72 + * module_auxiliary_driver(my_drv); 239 73 */ 240 74 #define module_auxiliary_driver(__auxiliary_driver) \ 241 75 module_driver(__auxiliary_driver, auxiliary_driver_register, auxiliary_driver_unregister)
+5 -1
include/linux/kernfs.h
··· 6 6 #ifndef __LINUX_KERNFS_H 7 7 #define __LINUX_KERNFS_H 8 8 9 - #include <linux/kernel.h> 10 9 #include <linux/err.h> 11 10 #include <linux/list.h> 12 11 #include <linux/mutex.h> ··· 13 14 #include <linux/lockdep.h> 14 15 #include <linux/rbtree.h> 15 16 #include <linux/atomic.h> 17 + #include <linux/bug.h> 18 + #include <linux/types.h> 16 19 #include <linux/uidgid.h> 17 20 #include <linux/wait.h> 21 + #include <linux/rwsem.h> 18 22 19 23 struct file; 20 24 struct dentry; 21 25 struct iattr; 22 26 struct seq_file; 23 27 struct vm_area_struct; 28 + struct vm_operations_struct; 24 29 struct super_block; 25 30 struct file_system_type; 26 31 struct poll_table_struct; ··· 200 197 struct list_head supers; 201 198 202 199 wait_queue_head_t deactivate_waitq; 200 + struct rw_semaphore kernfs_rwsem; 203 201 }; 204 202 205 203 struct kernfs_open_file {
+8 -26
include/linux/kobject.h
··· 19 19 #include <linux/list.h> 20 20 #include <linux/sysfs.h> 21 21 #include <linux/compiler.h> 22 + #include <linux/container_of.h> 22 23 #include <linux/spinlock.h> 23 24 #include <linux/kref.h> 24 25 #include <linux/kobject_ns.h> 25 - #include <linux/kernel.h> 26 26 #include <linux/wait.h> 27 27 #include <linux/atomic.h> 28 28 #include <linux/workqueue.h> ··· 66 66 struct list_head entry; 67 67 struct kobject *parent; 68 68 struct kset *kset; 69 - struct kobj_type *ktype; 69 + const struct kobj_type *ktype; 70 70 struct kernfs_node *sd; /* sysfs directory entry */ 71 71 struct kref kref; 72 72 #ifdef CONFIG_DEBUG_KOBJECT_RELEASE ··· 90 90 return kobj->name; 91 91 } 92 92 93 - extern void kobject_init(struct kobject *kobj, struct kobj_type *ktype); 93 + extern void kobject_init(struct kobject *kobj, const struct kobj_type *ktype); 94 94 extern __printf(3, 4) __must_check 95 95 int kobject_add(struct kobject *kobj, struct kobject *parent, 96 96 const char *fmt, ...); 97 97 extern __printf(4, 5) __must_check 98 98 int kobject_init_and_add(struct kobject *kobj, 99 - struct kobj_type *ktype, struct kobject *parent, 99 + const struct kobj_type *ktype, struct kobject *parent, 100 100 const char *fmt, ...); 101 101 102 102 extern void kobject_del(struct kobject *kobj); ··· 117 117 kuid_t *uid, kgid_t *gid); 118 118 extern char *kobject_get_path(struct kobject *kobj, gfp_t flag); 119 119 120 - /** 121 - * kobject_has_children - Returns whether a kobject has children. 122 - * @kobj: the object to test 123 - * 124 - * This will return whether a kobject has other kobjects as children. 125 - * 126 - * It does NOT account for the presence of attribute files, only sub 127 - * directories. It also assumes there is no concurrent addition or 128 - * removal of such children, and thus relies on external locking. 
129 - */ 130 - static inline bool kobject_has_children(struct kobject *kobj) 131 - { 132 - WARN_ON_ONCE(kref_read(&kobj->kref) == 0); 133 - 134 - return kobj->sd && kobj->sd->dir.subdirs; 135 - } 136 - 137 120 struct kobj_type { 138 121 void (*release)(struct kobject *kobj); 139 122 const struct sysfs_ops *sysfs_ops; ··· 136 153 }; 137 154 138 155 struct kset_uevent_ops { 139 - int (* const filter)(struct kset *kset, struct kobject *kobj); 140 - const char *(* const name)(struct kset *kset, struct kobject *kobj); 141 - int (* const uevent)(struct kset *kset, struct kobject *kobj, 142 - struct kobj_uevent_env *env); 156 + int (* const filter)(struct kobject *kobj); 157 + const char *(* const name)(struct kobject *kobj); 158 + int (* const uevent)(struct kobject *kobj, struct kobj_uevent_env *env); 143 159 }; 144 160 145 161 struct kobj_attribute { ··· 199 217 kobject_put(&k->kobj); 200 218 } 201 219 202 - static inline struct kobj_type *get_ktype(struct kobject *kobj) 220 + static inline const struct kobj_type *get_ktype(struct kobject *kobj) 203 221 { 204 222 return kobj->ktype; 205 223 }
+25
include/linux/topology.h
··· 180 180 181 181 #endif /* [!]CONFIG_HAVE_MEMORYLESS_NODES */ 182 182 183 + #if defined(topology_die_id) && defined(topology_die_cpumask) 184 + #define TOPOLOGY_DIE_SYSFS 185 + #endif 186 + #if defined(topology_cluster_id) && defined(topology_cluster_cpumask) 187 + #define TOPOLOGY_CLUSTER_SYSFS 188 + #endif 189 + #if defined(topology_book_id) && defined(topology_book_cpumask) 190 + #define TOPOLOGY_BOOK_SYSFS 191 + #endif 192 + #if defined(topology_drawer_id) && defined(topology_drawer_cpumask) 193 + #define TOPOLOGY_DRAWER_SYSFS 194 + #endif 195 + 183 196 #ifndef topology_physical_package_id 184 197 #define topology_physical_package_id(cpu) ((void)(cpu), -1) 185 198 #endif ··· 205 192 #ifndef topology_core_id 206 193 #define topology_core_id(cpu) ((void)(cpu), 0) 207 194 #endif 195 + #ifndef topology_book_id 196 + #define topology_book_id(cpu) ((void)(cpu), -1) 197 + #endif 198 + #ifndef topology_drawer_id 199 + #define topology_drawer_id(cpu) ((void)(cpu), -1) 200 + #endif 208 201 #ifndef topology_sibling_cpumask 209 202 #define topology_sibling_cpumask(cpu) cpumask_of(cpu) 210 203 #endif ··· 222 203 #endif 223 204 #ifndef topology_die_cpumask 224 205 #define topology_die_cpumask(cpu) cpumask_of(cpu) 206 + #endif 207 + #ifndef topology_book_cpumask 208 + #define topology_book_cpumask(cpu) cpumask_of(cpu) 209 + #endif 210 + #ifndef topology_drawer_cpumask 211 + #define topology_drawer_cpumask(cpu) cpumask_of(cpu) 225 212 #endif 226 213 227 214 #if defined(CONFIG_SCHED_SMT) && !defined(cpu_smt_mask)
+4
include/uapi/linux/pci_regs.h
··· 1086 1086 1087 1087 /* Designated Vendor-Specific (DVSEC, PCI_EXT_CAP_ID_DVSEC) */ 1088 1088 #define PCI_DVSEC_HEADER1 0x4 /* Designated Vendor-Specific Header1 */ 1089 + #define PCI_DVSEC_HEADER1_VID(x) ((x) & 0xffff) 1090 + #define PCI_DVSEC_HEADER1_REV(x) (((x) >> 16) & 0xf) 1091 + #define PCI_DVSEC_HEADER1_LEN(x) (((x) >> 20) & 0xfff) 1089 1092 #define PCI_DVSEC_HEADER2 0x8 /* Designated Vendor-Specific Header2 */ 1093 + #define PCI_DVSEC_HEADER2_ID(x) ((x) & 0xffff) 1090 1094 1091 1095 /* Data Link Feature */ 1092 1096 #define PCI_DLF_CAP 0x04 /* Capabilities Register */
+2 -2
kernel/params.c
··· 926 926 .store = module_attr_store, 927 927 }; 928 928 929 - static int uevent_filter(struct kset *kset, struct kobject *kobj) 929 + static int uevent_filter(struct kobject *kobj) 930 930 { 931 - struct kobj_type *ktype = get_ktype(kobj); 931 + const struct kobj_type *ktype = get_ktype(kobj); 932 932 933 933 if (ktype == &module_ktype) 934 934 return 1;
+4 -4
lib/kobject.c
··· 65 65 */ 66 66 static int populate_dir(struct kobject *kobj) 67 67 { 68 - struct kobj_type *t = get_ktype(kobj); 68 + const struct kobj_type *t = get_ktype(kobj); 69 69 struct attribute *attr; 70 70 int error = 0; 71 71 int i; ··· 346 346 * to kobject_put(), not by a call to kfree directly to ensure that all of 347 347 * the memory is cleaned up properly. 348 348 */ 349 - void kobject_init(struct kobject *kobj, struct kobj_type *ktype) 349 + void kobject_init(struct kobject *kobj, const struct kobj_type *ktype) 350 350 { 351 351 char *err_str; 352 352 ··· 461 461 * same type of error handling after a call to kobject_add() and kobject 462 462 * lifetime rules are the same here. 463 463 */ 464 - int kobject_init_and_add(struct kobject *kobj, struct kobj_type *ktype, 464 + int kobject_init_and_add(struct kobject *kobj, const struct kobj_type *ktype, 465 465 struct kobject *parent, const char *fmt, ...) 466 466 { 467 467 va_list args; ··· 679 679 static void kobject_cleanup(struct kobject *kobj) 680 680 { 681 681 struct kobject *parent = kobj->parent; 682 - struct kobj_type *t = get_ktype(kobj); 682 + const struct kobj_type *t = get_ktype(kobj); 683 683 const char *name = kobj->name; 684 684 685 685 pr_debug("kobject: '%s' (%p): %s, parent %p\n",
+3 -3
lib/kobject_uevent.c
··· 501 501 } 502 502 /* skip the event, if the filter returns zero. */ 503 503 if (uevent_ops && uevent_ops->filter) 504 - if (!uevent_ops->filter(kset, kobj)) { 504 + if (!uevent_ops->filter(kobj)) { 505 505 pr_debug("kobject: '%s' (%p): %s: filter function " 506 506 "caused the event to drop!\n", 507 507 kobject_name(kobj), kobj, __func__); ··· 510 510 511 511 /* originating subsystem */ 512 512 if (uevent_ops && uevent_ops->name) 513 - subsystem = uevent_ops->name(kset, kobj); 513 + subsystem = uevent_ops->name(kobj); 514 514 else 515 515 subsystem = kobject_name(&kset->kobj); 516 516 if (!subsystem) { ··· 554 554 555 555 /* let the kset specific function add its stuff */ 556 556 if (uevent_ops && uevent_ops->uevent) { 557 - retval = uevent_ops->uevent(kset, kobj, env); 557 + retval = uevent_ops->uevent(kobj, env); 558 558 if (retval) { 559 559 pr_debug("kobject: '%s' (%p): %s: uevent() returned " 560 560 "%d\n", kobject_name(kobj), kobj,