Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-4.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc updates from Greg KH:
"Here is the big set of char/misc driver patches for 4.17-rc1.

There are a lot of little things in here, nothing huge, but all
important to the different hardware types involved:

- thunderbolt driver updates

- parport updates (people still care...)

- nvmem driver updates

- mei updates (as always)

- hwtracing driver updates

- hyperv driver updates

- extcon driver updates

- ... and a handful of even smaller driver subsystem and individual
driver updates

All of these have been in linux-next with no reported issues"

* tag 'char-misc-4.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (149 commits)
hwtracing: Add HW tracing support menu
intel_th: Add ACPI glue layer
intel_th: Allow forcing host mode through drvdata
intel_th: Pick up irq number from resources
intel_th: Don't touch switch routing in host mode
intel_th: Use correct method of finding hub
intel_th: Add SPDX GPL-2.0 header to replace GPLv2 boilerplate
stm class: Make dummy's master/channel ranges configurable
stm class: Add SPDX GPL-2.0 header to replace GPLv2 boilerplate
MAINTAINERS: Bestow upon myself the care for drivers/hwtracing
hv: add SPDX license id to Kconfig
hv: add SPDX license to trace
Drivers: hv: vmbus: do not mark HV_PCIE as perf_device
Drivers: hv: vmbus: respect what we get from hv_get_synint_state()
/dev/mem: Avoid overwriting "err" in read_mem()
eeprom: at24: use SPDX identifier instead of GPL boiler-plate
eeprom: at24: simplify the i2c functionality checking
eeprom: at24: fix a line break
eeprom: at24: tweak newlines
eeprom: at24: refactor at24_probe()
...

+3384 -1214
+7
Documentation/ABI/stable/sysfs-bus-vmbus
···
132 132   Contact:        Stephen Hemminger <sthemmin@microsoft.com>
133 133   Description:    Monitor bit associated with channel
134 134   Users:          Debugging tools and userspace drivers
135     +
136     + What:           /sys/bus/vmbus/devices/vmbus_*/channels/NN/ring
137     + Date:           January 2018
138     + KernelVersion:  4.16
139     + Contact:        Stephen Hemminger <sthemmin@microsoft.com>
140     + Description:    Binary file created by uio_hv_generic for ring buffer
141     + Users:          Userspace drivers
+33
Documentation/ABI/testing/sysfs-bus-thunderbolt
···
  1 + What:           /sys/bus/thunderbolt/devices/.../domainX/boot_acl
  2 + Date:           Jun 2018
  3 + KernelVersion:  4.17
  4 + Contact:        thunderbolt-software@lists.01.org
  5 + Description:    Holds a comma-separated list of device unique_ids that
  6 +                 are allowed to be connected automatically during system
  7 +                 startup (e.g. boot devices). The list always contains the
  8 +                 maximum supported number of unique_ids, where unused
  9 +                 entries are empty. This allows the userspace software
 10 +                 to determine how many entries the controller supports.
 11 +                 If there are multiple controllers, each controller has
 12 +                 its own ACL list, and its size may differ between
 13 +                 controllers.
 14 +
 15 +                 The system BIOS may have an option "Preboot ACL" or similar
 16 +                 that needs to be selected before this list is taken into
 17 +                 consideration.
 18 +
 19 +                 Software always updates the full list in each write.
 20 +
 21 +                 If a device is authorized automatically during boot, its
 22 +                 boot attribute is set to 1.
 23 +
1 24  What:           /sys/bus/thunderbolt/devices/.../domainX/security
2 25  Date:           Sep 2017
3 26  KernelVersion:  4.13
···
35 12                 minimum. User needs to authorize each device.
36 13                 dponly: Automatically tunnel Display port (and USB). No
37 14                         PCIe tunnels are created.
   15 +                usbonly: Automatically tunnel the USB controller of the
   16 +                        connected Thunderbolt dock (and Display Port). All
   17 +                        PCIe links downstream of the dock are removed.
38 18
39 19  What:           /sys/bus/thunderbolt/devices/.../authorized
40 20  Date:           Sep 2017
···
63 37                 authorized. In case of failure errno will be ENOKEY if
64 38                 the device did not contain a key at all, and
65 39                 EKEYREJECTED if the challenge response did not match.
   40 +
   41 + What:           /sys/bus/thunderbolt/devices/.../boot
   42 + Date:           Jun 2018
   43 + KernelVersion:  4.17
   44 + Contact:        thunderbolt-software@lists.01.org
   45 + Description:    This attribute contains 1 if the Thunderbolt device was
   46 +                 already authorized on boot and 0 otherwise.
66 47
67 48  What:           /sys/bus/thunderbolt/devices/.../key
68 49  Date:           Sep 2017
+9
Documentation/ABI/testing/sysfs-class-mei
··· 45 45 Description: Display the driver HBM protocol version. 46 46 47 47 The HBM protocol version supported by the driver. 48 + 49 + What: /sys/class/mei/meiN/tx_queue_limit 50 + Date: Jan 2018 51 + KernelVersion: 4.16 52 + Contact: Tomas Winkler <tomas.winkler@intel.com> 53 + Description: Configure tx queue limit 54 + 55 + Set maximal number of pending writes 56 + per opened session.
+10
Documentation/ABI/testing/sysfs-driver-fsi-master-gpio
··· 1 + What: /sys/bus/platform/devices/[..]/fsi-master-gpio/external_mode 2 + Date: Feb 2018 3 + KernelVersion: 4.17 4 + Contact: jk@ozlabs.org 5 + Description: 6 + Controls access arbitration for GPIO-based FSI master. A 7 + value of 0 (the default) sets normal mode, where the 8 + driver performs FSI bus transactions, 1 sets external mode, 9 + where the FSI bus is driven externally (for example, by 10 + a debug device).
+10 -5
Documentation/admin-guide/thunderbolt.rst
··· 21 21 Security levels and how to use them 22 22 ----------------------------------- 23 23 Starting with Intel Falcon Ridge Thunderbolt controller there are 4 24 - security levels available. The reason for these is the fact that the 25 - connected devices can be DMA masters and thus read contents of the host 26 - memory without CPU and OS knowing about it. There are ways to prevent 27 - this by setting up an IOMMU but it is not always available for various 28 - reasons. 24 + security levels available. Intel Titan Ridge added one more security level 25 + (usbonly). The reason for these is the fact that the connected devices can 26 + be DMA masters and thus read contents of the host memory without CPU and OS 27 + knowing about it. There are ways to prevent this by setting up an IOMMU but 28 + it is not always available for various reasons. 29 29 30 30 The security levels are as follows: 31 31 ··· 51 51 The firmware automatically creates tunnels for Display Port and 52 52 USB. No PCIe tunneling is done. In BIOS settings this is 53 53 typically called *Display Port Only*. 54 + 55 + usbonly 56 + The firmware automatically creates tunnels for the USB controller and 57 + Display Port in a dock. All PCIe links downstream of the dock are 58 + removed. 54 59 55 60 The current security level can be read from 56 61 ``/sys/bus/thunderbolt/devices/domainX/security`` where ``domainX`` is
+49
Documentation/devicetree/bindings/connector/samsung,usb-connector-11pin.txt
···
 1 + Samsung micro-USB 11-pin connector
 2 + ==================================
 3 +
 4 + The Samsung micro-USB 11-pin connector is an extension of the micro-USB connector.
 5 + It is present in multiple Samsung mobile devices.
 6 + It has additional pins to route MHL traffic simultaneously with USB.
 7 +
 8 + The bindings are a superset of the usb-connector bindings for the micro-USB connector [1].
 9 +
10 + Required properties:
11 + - compatible: must be: "samsung,usb-connector-11pin", "usb-b-connector",
12 + - type: must be "micro".
13 +
14 + Required nodes:
15 + - any data bus to the connector should be modeled using the OF graph bindings
16 +   specified in bindings/graph.txt, unless the bus is between the parent node and
17 +   the connector. Since a single connector can have multiple data buses, every bus
18 +   is assigned an OF graph port number as follows:
19 +   0: High Speed (HS),
20 +   3: Mobile High-Definition Link (MHL), specific to 11-pin Samsung micro-USB.
21 +
22 + [1]: bindings/connector/usb-connector.txt
23 +
24 + Example
25 + -------
26 +
27 + Micro-USB connector with HS lines routed via a controller (MUIC) and MHL lines
28 + connected to an HDMI-MHL bridge (sii8620):
29 +
30 + muic-max77843@66 {
31 +         ...
32 +         usb_con: connector {
33 +                 compatible = "samsung,usb-connector-11pin", "usb-b-connector";
34 +                 label = "micro-USB";
35 +                 type = "micro";
36 +
37 +                 ports {
38 +                         #address-cells = <1>;
39 +                         #size-cells = <0>;
40 +
41 +                         port@3 {
42 +                                 reg = <3>;
43 +                                 usb_con_mhl: endpoint {
44 +                                         remote-endpoint = <&sii8620_mhl>;
45 +                                 };
46 +                         };
47 +                 };
48 +         };
49 + };
+75
Documentation/devicetree/bindings/connector/usb-connector.txt
···
 1 + USB Connector
 2 + =============
 3 +
 4 + A USB connector node represents a physical USB connector. It should be
 5 + a child of the USB interface controller.
 6 +
 7 + Required properties:
 8 + - compatible: describes the type of the connector, must be one of:
 9 +   "usb-a-connector",
10 +   "usb-b-connector",
11 +   "usb-c-connector".
12 +
13 + Optional properties:
14 + - label: symbolic name for the connector,
15 + - type: size of the connector, should be specified for non-fullsize USB-A
16 +   and USB-B connectors: "mini", "micro".
17 +
18 + Required nodes:
19 + - any data bus to the connector should be modeled using the OF graph bindings
20 +   specified in bindings/graph.txt, unless the bus is between the parent node and
21 +   the connector. Since a single connector can have multiple data buses, every bus
22 +   is assigned an OF graph port number as follows:
23 +   0: High Speed (HS), present in all connectors,
24 +   1: Super Speed (SS), present in SS capable connectors,
25 +   2: Sideband use (SBU), present in USB-C.
26 +
27 + Examples
28 + --------
29 +
30 + 1. Micro-USB connector with HS lines routed via a controller (MUIC):
31 +
32 + muic-max77843@66 {
33 +         ...
34 +         usb_con: connector {
35 +                 compatible = "usb-b-connector";
36 +                 label = "micro-USB";
37 +                 type = "micro";
38 +         };
39 + };
40 +
41 + 2. USB-C connector attached to a CC controller (s2mm005), HS lines routed
42 +    to a companion PMIC (max77865), SS lines to a USB3 PHY and SBU to DisplayPort.
43 +    DisplayPort video lines are routed to the connector via the SS mux in the USB3 PHY.
44 +
45 + ccic: s2mm005@33 {
46 +         ...
47 +         usb_con: connector {
48 +                 compatible = "usb-c-connector";
49 +                 label = "USB-C";
50 +
51 +                 ports {
52 +                         #address-cells = <1>;
53 +                         #size-cells = <0>;
54 +
55 +                         port@0 {
56 +                                 reg = <0>;
57 +                                 usb_con_hs: endpoint {
58 +                                         remote-endpoint = <&max77865_usbc_hs>;
59 +                                 };
60 +                         };
61 +                         port@1 {
62 +                                 reg = <1>;
63 +                                 usb_con_ss: endpoint {
64 +                                         remote-endpoint = <&usbdrd_phy_ss>;
65 +                                 };
66 +                         };
67 +                         port@2 {
68 +                                 reg = <2>;
69 +                                 usb_con_sbu: endpoint {
70 +                                         remote-endpoint = <&dp_aux>;
71 +                                 };
72 +                         };
73 +                 };
74 +         };
75 + };
+151
Documentation/devicetree/bindings/fsi/fsi.txt
··· 1 + FSI bus & engine generic device tree bindings 2 + ============================================= 3 + 4 + The FSI bus is probe-able, so the OS is able to enumerate FSI slaves, and 5 + engines within those slaves. However, we have a facility to match devicetree 6 + nodes to probed engines. This allows for fsi engines to expose non-probeable 7 + busses, which are then exposed by the device tree. For example, an FSI engine 8 + that is an I2C master - the I2C bus can be described by the device tree under 9 + the engine's device tree node. 10 + 11 + FSI masters may require their own DT nodes (to describe the master HW itself); 12 + that requirement is defined by the master's implementation, and is described by 13 + the fsi-master-* binding specifications. 14 + 15 + Under the masters' nodes, we can describe the bus topology using nodes to 16 + represent the FSI slaves and their slave engines. As a basic outline: 17 + 18 + fsi-master { 19 + /* top-level of FSI bus topology, bound to an FSI master driver and 20 + * exposes an FSI bus */ 21 + 22 + fsi-slave@<link,id> { 23 + /* this node defines the FSI slave device, and is handled 24 + * entirely with FSI core code */ 25 + 26 + fsi-slave-engine@<addr> { 27 + /* this node defines the engine endpoint & address range, which 28 + * is bound to the relevant fsi device driver */ 29 + ... 30 + }; 31 + 32 + fsi-slave-engine@<addr> { 33 + ... 34 + }; 35 + 36 + }; 37 + }; 38 + 39 + Note that since the bus is probe-able, some (or all) of the topology may 40 + not be described; this binding only provides an optional facility for 41 + adding subordinate device tree nodes as children of FSI engines. 42 + 43 + FSI masters 44 + ----------- 45 + 46 + FSI master nodes declare themselves as such with the "fsi-master" compatible 47 + value. 
It's likely that an implementation-specific compatible value will 48 + be needed as well, for example: 49 + 50 + compatible = "fsi-master-gpio", "fsi-master"; 51 + 52 + Since the master nodes describe the top-level of the FSI topology, they also 53 + need to declare the FSI-standard addressing scheme. This requires two cells for 54 + addresses (link index and slave ID), and no size: 55 + 56 + #address-cells = <2>; 57 + #size-cells = <0>; 58 + 59 + An optional boolean property can be added to indicate that a particular master 60 + should not scan for connected devices at initialization time. This is 61 + necessary in cases where a scan could cause arbitration issues with other 62 + masters that may be present on the bus. 63 + 64 + no-scan-on-init; 65 + 66 + FSI slaves 67 + ---------- 68 + 69 + Slaves are identified by a (link-index, slave-id) pair, so require two cells 70 + for an address identifier. Since these are not a range, no size cells are 71 + required. For an example, a slave on link 1, with ID 2, could be represented 72 + as: 73 + 74 + cfam@1,2 { 75 + reg = <1 2>; 76 + [...]; 77 + } 78 + 79 + Each slave provides an address-space, under which the engines are accessible. 80 + That address space has a maximum of 23 bits, so we use one cell to represent 81 + addresses and sizes in the slave address space: 82 + 83 + #address-cells = <1>; 84 + #size-cells = <1>; 85 + 86 + 87 + FSI engines (devices) 88 + --------------------- 89 + 90 + Engines are identified by their address under the slaves' address spaces. We 91 + use a single cell for address and size. Engine nodes represent the endpoint 92 + FSI device, and are passed to those FSI device drivers' ->probe() functions. 
93 + 94 + For example, for a slave using a single 0x400-byte page starting at address 95 + 0xc00: 96 + 97 + engine@c00 { 98 + reg = <0xc00 0x400>; 99 + }; 100 + 101 + 102 + Full example 103 + ------------ 104 + 105 + Here's an example that illustrates: 106 + - an FSI master 107 + - connected to an FSI slave 108 + - that contains an engine that is an I2C master 109 + - connected to an I2C EEPROM 110 + 111 + The FSI master may be connected to additional slaves, and slaves may have 112 + additional engines, but they don't necessarily need to be described in the 113 + device tree if no extra platform information is required. 114 + 115 + /* The GPIO-based FSI master node, describing the top level of the 116 + * FSI bus 117 + */ 118 + gpio-fsi { 119 + compatible = "fsi-master-gpio", "fsi-master"; 120 + #address-cells = <2>; 121 + #size-cells = <0>; 122 + 123 + /* An FSI slave (aka. CFAM) at link 0, ID 0. */ 124 + cfam@0,0 { 125 + reg = <0 0>; 126 + #address-cells = <1>; 127 + #size-cells = <1>; 128 + 129 + /* FSI engine at 0xc00, using a single page. In this example, 130 + * it's an I2C master controller, so subnodes describe the 131 + * I2C bus. 132 + */ 133 + i2c-controller@c00 { 134 + reg = <0xc00 0x400>; 135 + 136 + /* Engine-specific data. In this case, we're describing an 137 + * I2C bus, so we're conforming to the generic I2C binding 138 + */ 139 + compatible = "some-vendor,fsi-i2c-controller"; 140 + #address-cells = <1>; 141 + #size-cells = <1>; 142 + 143 + /* I2C endpoint device: an Atmel EEPROM */ 144 + eeprom@50 { 145 + compatible = "atmel,24c256"; 146 + reg = <0x50>; 147 + pagesize = <64>; 148 + }; 149 + }; 150 + }; 151 + };
+41
Documentation/devicetree/bindings/mfd/aspeed-lpc.txt
··· 109 109 }; 110 110 }; 111 111 112 + BMC Node Children 113 + ================== 114 + 115 + 112 116 Host Node Children 113 117 ================== 118 + 119 + LPC Host Interface Controller 120 + ------------------- 121 + 122 + The LPC Host Interface Controller manages functions exposed to the host such as 123 + LPC firmware hub cycles, configuration of the LPC-to-AHB mapping, UART 124 + management and bus snoop configuration. 125 + 126 + Required properties: 127 + 128 + - compatible: One of: 129 + "aspeed,ast2400-lpc-ctrl"; 130 + "aspeed,ast2500-lpc-ctrl"; 131 + 132 + - reg: contains offset/length values of the host interface controller 133 + memory regions 134 + 135 + - clocks: contains a phandle to the syscon node describing the clocks. 136 + There should then be one cell representing the clock to use 137 + 138 + - memory-region: A phandle to a reserved_memory region to be used for the LPC 139 + to AHB mapping 140 + 141 + - flash: A phandle to the SPI flash controller containing the flash to 142 + be exposed over the LPC to AHB mapping 143 + 144 + Example: 145 + 146 + lpc-host@80 { 147 + lpc_ctrl: lpc-ctrl@0 { 148 + compatible = "aspeed,ast2500-lpc-ctrl"; 149 + reg = <0x0 0x80>; 150 + clocks = <&syscon ASPEED_CLK_GATE_LCLK>; 151 + memory-region = <&flash_memory>; 152 + flash = <&spi>; 153 + }; 154 + }; 114 155 115 156 LPC Host Controller 116 157 -------------------
+19 -4
Documentation/devicetree/bindings/nvmem/imx-ocotp.txt
··· 11 11 "fsl,imx6ul-ocotp" (i.MX6UL), 12 12 "fsl,imx7d-ocotp" (i.MX7D/S), 13 13 followed by "syscon". 14 + - #address-cells : Should be 1 15 + - #size-cells : Should be 1 14 16 - reg: Should contain the register base and length. 15 17 - clocks: Should contain a phandle pointing to the gated peripheral clock. 16 18 17 19 Optional properties: 18 20 - read-only: disable write access 19 21 20 - Example: 22 + Optional Child nodes: 21 23 24 + - Data cells of ocotp: 25 + Detailed bindings are described in bindings/nvmem/nvmem.txt 26 + 27 + Example: 22 28 ocotp: ocotp@21bc000 { 23 - compatible = "fsl,imx6q-ocotp", "syscon"; 29 + #address-cells = <1>; 30 + #size-cells = <1>; 31 + compatible = "fsl,imx6sx-ocotp", "syscon"; 24 32 reg = <0x021bc000 0x4000>; 25 - clocks = <&clks IMX6QDL_CLK_IIM>; 26 - read-only; 33 + clocks = <&clks IMX6SX_CLK_OCOTP>; 34 + 35 + tempmon_calib: calib@38 { 36 + reg = <0x38 4>; 37 + }; 38 + 39 + tempmon_temp_grade: temp-grade@20 { 40 + reg = <0x20 4>; 41 + }; 27 42 };
+2 -1
Documentation/devicetree/bindings/nvmem/snvs-lpgpr.txt
···
 1  1  Device tree bindings for Low Power General Purpose Register found in i.MX6Q/D
 2     - Secure Non-Volatile Storage.
    2  + and i.MX7 Secure Non-Volatile Storage.
 3  3
 4  4  This DT node should be represented as a sub-node of a "syscon",
 5  5  "simple-mfd" node.
···
 8  8  - compatible: should be one of the following variants:
 9  9  	"fsl,imx6q-snvs-lpgpr" for Freescale i.MX6Q/D/DL/S
10 10  	"fsl,imx6ul-snvs-lpgpr" for Freescale i.MX6UL
   11  + 	"fsl,imx7d-snvs-lpgpr" for Freescale i.MX7D/S
11 12
12 13  Example:
13 14  snvs: snvs@020cc000 {
+5
Documentation/driver-api/uio-howto.rst
··· 709 709 3) Network receive buffer region 710 710 4) Network send buffer region 711 711 712 + If a subchannel is created by a request to host, then the uio_hv_generic 713 + device driver will create a sysfs binary file for the per-channel ring buffer. 714 + For example: 715 + /sys/bus/vmbus/devices/3811fe4d-0fa0-4b62-981a-74fc1084c757/channels/21/ring 716 + 712 717 Further information 713 718 =================== 714 719
+6 -1
MAINTAINERS
··· 6212 6212 F: drivers/char/hw_random/ 6213 6213 F: include/linux/hw_random.h 6214 6214 6215 + HARDWARE TRACING FACILITIES 6216 + M: Alexander Shishkin <alexander.shishkin@linux.intel.com> 6217 + S: Maintained 6218 + F: drivers/hwtracing/ 6219 + 6215 6220 HARDWARE SPINLOCK CORE 6216 6221 M: Ohad Ben-Cohen <ohad@wizery.com> 6217 6222 M: Bjorn Andersson <bjorn.andersson@linaro.org> ··· 8145 8140 LINUX KERNEL DUMP TEST MODULE (LKDTM) 8146 8141 M: Kees Cook <keescook@chromium.org> 8147 8142 S: Maintained 8148 - F: drivers/misc/lkdtm* 8143 + F: drivers/misc/lkdtm/* 8149 8144 8150 8145 LINUX KERNEL MEMORY CONSISTENCY MODEL (LKMM) 8151 8146 M: Alan Stern <stern@rowland.harvard.edu>
+3
arch/x86/entry/entry_32.S
··· 902 902 BUILD_INTERRUPT3(hyperv_reenlightenment_vector, HYPERV_REENLIGHTENMENT_VECTOR, 903 903 hyperv_reenlightenment_intr) 904 904 905 + BUILD_INTERRUPT3(hv_stimer0_callback_vector, HYPERV_STIMER0_VECTOR, 906 + hv_stimer0_vector_handler) 907 + 905 908 #endif /* CONFIG_HYPERV */ 906 909 907 910 ENTRY(page_fault)
+3
arch/x86/entry/entry_64.S
··· 1140 1140 1141 1141 apicinterrupt3 HYPERV_REENLIGHTENMENT_VECTOR \ 1142 1142 hyperv_reenlightenment_vector hyperv_reenlightenment_intr 1143 + 1144 + apicinterrupt3 HYPERV_STIMER0_VECTOR \ 1145 + hv_stimer0_callback_vector hv_stimer0_vector_handler 1143 1146 #endif /* CONFIG_HYPERV */ 1144 1147 1145 1148 idtentry debug do_debug has_error_code=0 paranoid=1 shift_ist=DEBUG_STACK
+1
arch/x86/include/asm/hardirq.h
··· 40 40 #endif 41 41 #if IS_ENABLED(CONFIG_HYPERV) 42 42 unsigned int irq_hv_reenlightenment_count; 43 + unsigned int hyperv_stimer0_count; 43 44 #endif 44 45 } ____cacheline_aligned irq_cpustat_t; 45 46
+2 -1
arch/x86/include/asm/irq_vectors.h
··· 106 106 107 107 #if IS_ENABLED(CONFIG_HYPERV) 108 108 #define HYPERV_REENLIGHTENMENT_VECTOR 0xee 109 + #define HYPERV_STIMER0_VECTOR 0xed 109 110 #endif 110 111 111 - #define LOCAL_TIMER_VECTOR 0xed 112 + #define LOCAL_TIMER_VECTOR 0xec 112 113 113 114 #define NR_VECTORS 256 114 115
+13
arch/x86/include/asm/mshyperv.h
··· 173 173 void hv_setup_crash_handler(void (*handler)(struct pt_regs *regs)); 174 174 void hv_remove_crash_handler(void); 175 175 176 + /* 177 + * Routines for stimer0 Direct Mode handling. 178 + * On x86/x64, there are no percpu actions to take. 179 + */ 180 + void hv_stimer0_vector_handler(struct pt_regs *regs); 181 + void hv_stimer0_callback_vector(void); 182 + int hv_setup_stimer0_irq(int *irq, int *vector, void (*handler)(void)); 183 + void hv_remove_stimer0_irq(int irq); 184 + 185 + static inline void hv_enable_stimer0_percpu_irq(int irq) {} 186 + static inline void hv_disable_stimer0_percpu_irq(int irq) {} 187 + 188 + 176 189 #if IS_ENABLED(CONFIG_HYPERV) 177 190 extern struct clocksource *hyperv_cs; 178 191 extern void *hv_hypercall_pg;
+3
arch/x86/include/uapi/asm/hyperv.h
··· 77 77 /* Crash MSR available */ 78 78 #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE (1 << 10) 79 79 80 + /* stimer Direct Mode is available */ 81 + #define HV_X64_STIMER_DIRECT_MODE_AVAILABLE (1 << 19) 82 + 80 83 /* 81 84 * Feature identification: EBX indicates which flags were specified at 82 85 * partition creation. The format is the same as the partition creation
+40
arch/x86/kernel/cpu/mshyperv.c
··· 37 37 38 38 #if IS_ENABLED(CONFIG_HYPERV) 39 39 static void (*vmbus_handler)(void); 40 + static void (*hv_stimer0_handler)(void); 40 41 static void (*hv_kexec_handler)(void); 41 42 static void (*hv_crash_handler)(struct pt_regs *regs); 42 43 ··· 69 68 } 70 69 EXPORT_SYMBOL_GPL(hv_setup_vmbus_irq); 71 70 EXPORT_SYMBOL_GPL(hv_remove_vmbus_irq); 71 + 72 + /* 73 + * Routines to do per-architecture handling of stimer0 74 + * interrupts when in Direct Mode 75 + */ 76 + 77 + __visible void __irq_entry hv_stimer0_vector_handler(struct pt_regs *regs) 78 + { 79 + struct pt_regs *old_regs = set_irq_regs(regs); 80 + 81 + entering_irq(); 82 + inc_irq_stat(hyperv_stimer0_count); 83 + if (hv_stimer0_handler) 84 + hv_stimer0_handler(); 85 + ack_APIC_irq(); 86 + 87 + exiting_irq(); 88 + set_irq_regs(old_regs); 89 + } 90 + 91 + int hv_setup_stimer0_irq(int *irq, int *vector, void (*handler)(void)) 92 + { 93 + *vector = HYPERV_STIMER0_VECTOR; 94 + *irq = 0; /* Unused on x86/x64 */ 95 + hv_stimer0_handler = handler; 96 + return 0; 97 + } 98 + EXPORT_SYMBOL_GPL(hv_setup_stimer0_irq); 99 + 100 + void hv_remove_stimer0_irq(int irq) 101 + { 102 + /* We have no way to deallocate the interrupt gate */ 103 + hv_stimer0_handler = NULL; 104 + } 105 + EXPORT_SYMBOL_GPL(hv_remove_stimer0_irq); 72 106 73 107 void hv_setup_kexec_handler(void (*handler)(void)) 74 108 { ··· 293 257 alloc_intr_gate(HYPERV_REENLIGHTENMENT_VECTOR, 294 258 hyperv_reenlightenment_vector); 295 259 260 + /* Setup the IDT for stimer0 */ 261 + if (ms_hyperv.misc_features & HV_X64_STIMER_DIRECT_MODE_AVAILABLE) 262 + alloc_intr_gate(HYPERV_STIMER0_VECTOR, 263 + hv_stimer0_callback_vector); 296 264 #endif 297 265 } 298 266
+7
arch/x86/kernel/irq.c
··· 150 150 irq_stats(j)->irq_hv_reenlightenment_count); 151 151 seq_puts(p, " Hyper-V reenlightenment interrupts\n"); 152 152 } 153 + if (test_bit(HYPERV_STIMER0_VECTOR, system_vectors)) { 154 + seq_printf(p, "%*s: ", prec, "HVS"); 155 + for_each_online_cpu(j) 156 + seq_printf(p, "%10u ", 157 + irq_stats(j)->hyperv_stimer0_count); 158 + seq_puts(p, " Hyper-V stimer0 interrupts\n"); 159 + } 153 160 #endif 154 161 seq_printf(p, "%*s: %10u\n", prec, "ERR", atomic_read(&irq_err_count)); 155 162 #if defined(CONFIG_X86_IO_APIC)
+11 -8
block/genhd.c
··· 308 308 /** 309 309 * register_blkdev - register a new block device 310 310 * 311 - * @major: the requested major device number [1..255]. If @major = 0, try to 312 - * allocate any unused major number. 311 + * @major: the requested major device number [1..BLKDEV_MAJOR_MAX-1]. If 312 + * @major = 0, try to allocate any unused major number. 313 313 * @name: the name of the new block device as a zero terminated string 314 314 * 315 315 * The @name must be unique within the system. 316 316 * 317 317 * The return value depends on the @major input parameter: 318 318 * 319 - * - if a major device number was requested in range [1..255] then the 320 - * function returns zero on success, or a negative error code 319 + * - if a major device number was requested in range [1..BLKDEV_MAJOR_MAX-1] 320 + * then the function returns zero on success, or a negative error code 321 321 * - if any unused major number was requested with @major = 0 parameter 322 322 * then the return value is the allocated major number in range 323 - * [1..255] or a negative error code otherwise 323 + * [1..BLKDEV_MAJOR_MAX-1] or a negative error code otherwise 324 + * 325 + * See Documentation/admin-guide/devices.txt for the list of allocated 326 + * major numbers. 324 327 */ 325 328 int register_blkdev(unsigned int major, const char *name) 326 329 { ··· 350 347 } 351 348 352 349 if (major >= BLKDEV_MAJOR_MAX) { 353 - pr_err("register_blkdev: major requested (%d) is greater than the maximum (%d) for %s\n", 354 - major, BLKDEV_MAJOR_MAX, name); 350 + pr_err("register_blkdev: major requested (%u) is greater than the maximum (%u) for %s\n", 351 + major, BLKDEV_MAJOR_MAX-1, name); 355 352 356 353 ret = -EINVAL; 357 354 goto out; ··· 378 375 ret = -EBUSY; 379 376 380 377 if (ret < 0) { 381 - printk("register_blkdev: cannot get major %d for %s\n", 378 + printk("register_blkdev: cannot get major %u for %s\n", 382 379 major, name); 383 380 kfree(p); 384 381 }
+1 -3
drivers/Kconfig
··· 199 199 200 200 source "drivers/nvmem/Kconfig" 201 201 202 - source "drivers/hwtracing/stm/Kconfig" 203 - 204 - source "drivers/hwtracing/intel_th/Kconfig" 202 + source "drivers/hwtracing/Kconfig" 205 203 206 204 source "drivers/fpga/Kconfig" 207 205
+1 -1
drivers/char/Kconfig
··· 236 236 237 237 config NVRAM 238 238 tristate "/dev/nvram support" 239 - depends on ATARI || X86 || (ARM && RTC_DRV_CMOS) || GENERIC_NVRAM 239 + depends on ATARI || X86 || GENERIC_NVRAM 240 240 ---help--- 241 241 If you say Y here and create a character special file /dev/nvram 242 242 with major number 10 and minor number 144 using mknod ("man mknod"),
+3 -3
drivers/char/mem.c
··· 137 137 138 138 while (count > 0) { 139 139 unsigned long remaining; 140 - int allowed; 140 + int allowed, probe; 141 141 142 142 sz = size_inside_page(p, count); 143 143 ··· 160 160 if (!ptr) 161 161 goto failed; 162 162 163 - err = probe_kernel_read(bounce, ptr, sz); 163 + probe = probe_kernel_read(bounce, ptr, sz); 164 164 unxlate_dev_mem_ptr(p, ptr); 165 - if (err) 165 + if (probe) 166 166 goto failed; 167 167 168 168 remaining = copy_to_user(buf, bounce, sz);
-1
drivers/char/xillybus/xillybus_pcie.c
··· 24 24 25 25 #define PCI_DEVICE_ID_XILLYBUS 0xebeb 26 26 27 - #define PCI_VENDOR_ID_ALTERA 0x1172 28 27 #define PCI_VENDOR_ID_ACTEL 0x11aa 29 28 #define PCI_VENDOR_ID_LATTICE 0x1204 30 29
+45 -58
drivers/extcon/extcon-gpio.c
··· 18 18 */ 19 19 20 20 #include <linux/extcon-provider.h> 21 - #include <linux/extcon/extcon-gpio.h> 22 - #include <linux/gpio.h> 23 21 #include <linux/gpio/consumer.h> 24 22 #include <linux/init.h> 25 23 #include <linux/interrupt.h> ··· 27 29 #include <linux/slab.h> 28 30 #include <linux/workqueue.h> 29 31 32 + /** 33 + * struct gpio_extcon_data - A simple GPIO-controlled extcon device state container. 34 + * @edev: Extcon device. 35 + * @irq: Interrupt line for the external connector. 36 + * @work: Work fired by the interrupt. 37 + * @debounce_jiffies: Number of jiffies to wait for the GPIO to stabilize, from the debounce 38 + * value. 39 + * @gpiod: GPIO descriptor for this external connector. 40 + * @extcon_id: The unique id of specific external connector. 41 + * @debounce: Debounce time for GPIO IRQ in ms. 42 + * @irq_flags: IRQ Flags (e.g., IRQF_TRIGGER_LOW). 43 + * @check_on_resume: Boolean describing whether to check the state of gpio 44 + * while resuming from sleep. 45 + */ 30 46 struct gpio_extcon_data { 31 47 struct extcon_dev *edev; 32 48 int irq; 33 49 struct delayed_work work; 34 50 unsigned long debounce_jiffies; 35 - 36 - struct gpio_desc *id_gpiod; 37 - struct gpio_extcon_pdata *pdata; 51 + struct gpio_desc *gpiod; 52 + unsigned int extcon_id; 53 + unsigned long debounce; 54 + unsigned long irq_flags; 55 + bool check_on_resume; 38 56 }; 39 57 40 58 static void gpio_extcon_work(struct work_struct *work) ··· 60 46 container_of(to_delayed_work(work), struct gpio_extcon_data, 61 47 work); 62 48 63 - state = gpiod_get_value_cansleep(data->id_gpiod); 64 - if (data->pdata->gpio_active_low) 65 - state = !state; 66 - 67 - extcon_set_state_sync(data->edev, data->pdata->extcon_id, state); 49 + state = gpiod_get_value_cansleep(data->gpiod); 50 + extcon_set_state_sync(data->edev, data->extcon_id, state); 68 51 } 69 52 70 53 static irqreturn_t gpio_irq_handler(int irq, void *dev_id) ··· 73 62 return IRQ_HANDLED; 74 63 } 75 64 76 - static int 
gpio_extcon_init(struct device *dev, struct gpio_extcon_data *data) 77 - { 78 - struct gpio_extcon_pdata *pdata = data->pdata; 79 - int ret; 80 - 81 - ret = devm_gpio_request_one(dev, pdata->gpio, GPIOF_DIR_IN, 82 - dev_name(dev)); 83 - if (ret < 0) 84 - return ret; 85 - 86 - data->id_gpiod = gpio_to_desc(pdata->gpio); 87 - if (!data->id_gpiod) 88 - return -EINVAL; 89 - 90 - if (pdata->debounce) { 91 - ret = gpiod_set_debounce(data->id_gpiod, 92 - pdata->debounce * 1000); 93 - if (ret < 0) 94 - data->debounce_jiffies = 95 - msecs_to_jiffies(pdata->debounce); 96 - } 97 - 98 - data->irq = gpiod_to_irq(data->id_gpiod); 99 - if (data->irq < 0) 100 - return data->irq; 101 - 102 - return 0; 103 - } 104 - 105 65 static int gpio_extcon_probe(struct platform_device *pdev) 106 66 { 107 - struct gpio_extcon_pdata *pdata = dev_get_platdata(&pdev->dev); 108 67 struct gpio_extcon_data *data; 68 + struct device *dev = &pdev->dev; 109 69 int ret; 110 70 111 - if (!pdata) 112 - return -EBUSY; 113 - if (!pdata->irq_flags || pdata->extcon_id > EXTCON_NONE) 114 - return -EINVAL; 115 - 116 - data = devm_kzalloc(&pdev->dev, sizeof(struct gpio_extcon_data), 117 - GFP_KERNEL); 71 + data = devm_kzalloc(dev, sizeof(struct gpio_extcon_data), GFP_KERNEL); 118 72 if (!data) 119 73 return -ENOMEM; 120 - data->pdata = pdata; 121 74 122 - /* Initialize the gpio */ 123 - ret = gpio_extcon_init(&pdev->dev, data); 124 - if (ret < 0) 125 - return ret; 75 + /* 76 + * FIXME: extcon_id represents the unique identifier of external 77 + * connectors such as EXTCON_USB, EXTCON_DISP_HDMI and so on. extcon_id 78 + * is necessary to register the extcon device. But, it's not yet 79 + * developed to get the extcon id from device-tree or others. 80 + * On later, it have to be solved. 
81 + */ 82 + if (!data->irq_flags || data->extcon_id > EXTCON_NONE) 83 + return -EINVAL; 84 + 85 + data->gpiod = devm_gpiod_get(dev, "extcon", GPIOD_IN); 86 + if (IS_ERR(data->gpiod)) 87 + return PTR_ERR(data->gpiod); 88 + data->irq = gpiod_to_irq(data->gpiod); 89 + if (data->irq <= 0) 90 + return data->irq; 126 91 127 92 /* Allocate the memory of extcon devie and register extcon device */ 128 - data->edev = devm_extcon_dev_allocate(&pdev->dev, &pdata->extcon_id); 93 + data->edev = devm_extcon_dev_allocate(dev, &data->extcon_id); 129 94 if (IS_ERR(data->edev)) { 130 - dev_err(&pdev->dev, "failed to allocate extcon device\n"); 95 + dev_err(dev, "failed to allocate extcon device\n"); 131 96 return -ENOMEM; 132 97 } 133 98 134 - ret = devm_extcon_dev_register(&pdev->dev, data->edev); 99 + ret = devm_extcon_dev_register(dev, data->edev); 135 100 if (ret < 0) 136 101 return ret; 137 102 ··· 117 130 * Request the interrupt of gpio to detect whether external connector 118 131 * is attached or detached. 119 132 */ 120 - ret = devm_request_any_context_irq(&pdev->dev, data->irq, 121 - gpio_irq_handler, pdata->irq_flags, 133 + ret = devm_request_any_context_irq(dev, data->irq, 134 + gpio_irq_handler, data->irq_flags, 122 135 pdev->name, data); 123 136 if (ret < 0) 124 137 return ret; ··· 145 158 struct gpio_extcon_data *data; 146 159 147 160 data = dev_get_drvdata(dev); 148 - if (data->pdata->check_on_resume) 161 + if (data->check_on_resume) 149 162 queue_delayed_work(system_power_efficient_wq, 150 163 &data->work, data->debounce_jiffies); 151 164
+7 -4
drivers/extcon/extcon-intel-cht-wc.c
··· 66 66 67 67 #define CHT_WC_VBUS_GPIO_CTLO 0x6e2d 68 68 #define CHT_WC_VBUS_GPIO_CTLO_OUTPUT BIT(0) 69 + #define CHT_WC_VBUS_GPIO_CTLO_DRV_OD BIT(4) 70 + #define CHT_WC_VBUS_GPIO_CTLO_DIR_OUT BIT(5) 69 71 70 72 enum cht_wc_usb_id { 71 73 USB_ID_OTG, ··· 185 183 { 186 184 int ret, val; 187 185 188 - val = enable ? CHT_WC_VBUS_GPIO_CTLO_OUTPUT : 0; 189 - 190 186 /* 191 187 * The 5V boost converter is enabled through a gpio on the PMIC, since 192 188 * there currently is no gpio driver we access the gpio reg directly. 193 189 */ 194 - ret = regmap_update_bits(ext->regmap, CHT_WC_VBUS_GPIO_CTLO, 195 - CHT_WC_VBUS_GPIO_CTLO_OUTPUT, val); 190 + val = CHT_WC_VBUS_GPIO_CTLO_DRV_OD | CHT_WC_VBUS_GPIO_CTLO_DIR_OUT; 191 + if (enable) 192 + val |= CHT_WC_VBUS_GPIO_CTLO_OUTPUT; 193 + 194 + ret = regmap_write(ext->regmap, CHT_WC_VBUS_GPIO_CTLO, val); 196 195 if (ret) 197 196 dev_err(ext->dev, "Error writing Vbus GPIO CTLO: %d\n", ret); 198 197 }
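The change above replaces a read-modify-write of just the OUTPUT bit (regmap_update_bits) with a full register write that also forces the pin into open-drain output mode. A minimal user-space sketch of the value being written (register bit names are copied from the diff; the helper function itself is illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Bit definitions as added by the patch above */
#define CHT_WC_VBUS_GPIO_CTLO_OUTPUT  BIT(0)
#define CHT_WC_VBUS_GPIO_CTLO_DRV_OD  BIT(4)
#define CHT_WC_VBUS_GPIO_CTLO_DIR_OUT BIT(5)

/* Compute the full CTLO value the new code writes: the pin is always
 * configured as an open-drain output; only the level depends on 'enable'. */
static uint32_t cht_wc_vbus_gpio_ctlo_val(int enable)
{
	uint32_t val = CHT_WC_VBUS_GPIO_CTLO_DRV_OD |
		       CHT_WC_VBUS_GPIO_CTLO_DIR_OUT;

	if (enable)
		val |= CHT_WC_VBUS_GPIO_CTLO_OUTPUT;
	return val;
}
```

With a plain regmap_write() the open-drain and direction bits are set on every call, so the register no longer depends on whatever the firmware left in it.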
+5 -4
drivers/extcon/extcon-intel-int3496.c
··· 50 50 static const struct acpi_gpio_params mux_gpios = { INT3496_GPIO_USB_MUX, 0, false }; 51 51 52 52 static const struct acpi_gpio_mapping acpi_int3496_default_gpios[] = { 53 - { "id-gpios", &id_gpios, 1 }, 53 + /* 54 + * Some platforms have a bug in ACPI GPIO description making IRQ 55 + * GPIO to be output only. Ask the GPIO core to ignore this limit. 56 + */ 57 + { "id-gpios", &id_gpios, 1, ACPI_GPIO_QUIRK_NO_IO_RESTRICTION }, 54 58 { "vbus-gpios", &vbus_gpios, 1 }, 55 59 { "mux-gpios", &mux_gpios, 1 }, 56 60 { }, ··· 116 112 ret = PTR_ERR(data->gpio_usb_id); 117 113 dev_err(dev, "can't request USB ID GPIO: %d\n", ret); 118 114 return ret; 119 - } else if (gpiod_get_direction(data->gpio_usb_id) != GPIOF_DIR_IN) { 120 - dev_warn(dev, FW_BUG "USB ID GPIO not in input mode, fixing\n"); 121 - gpiod_direction_input(data->gpio_usb_id); 122 115 } 123 116 124 117 data->usb_id_irq = gpiod_to_irq(data->gpio_usb_id);
+34 -10
drivers/extcon/extcon.c
··· 1336 1336 EXPORT_SYMBOL_GPL(extcon_dev_unregister); 1337 1337 1338 1338 #ifdef CONFIG_OF 1339 + 1340 + /* 1341 + * extcon_find_edev_by_node - Find the extcon device from devicetree. 1342 + * @node : OF node identifying edev 1343 + * 1344 + * Return the pointer of extcon device if success or ERR_PTR(err) if fail. 1345 + */ 1346 + struct extcon_dev *extcon_find_edev_by_node(struct device_node *node) 1347 + { 1348 + struct extcon_dev *edev; 1349 + 1350 + mutex_lock(&extcon_dev_list_lock); 1351 + list_for_each_entry(edev, &extcon_dev_list, entry) 1352 + if (edev->dev.parent && edev->dev.parent->of_node == node) 1353 + goto out; 1354 + edev = ERR_PTR(-EPROBE_DEFER); 1355 + out: 1356 + mutex_unlock(&extcon_dev_list_lock); 1357 + 1358 + return edev; 1359 + } 1360 + 1339 1361 /* 1340 1362 * extcon_get_edev_by_phandle - Get the extcon device from devicetree. 1341 1363 * @dev : the instance to the given device ··· 1385 1363 return ERR_PTR(-ENODEV); 1386 1364 } 1387 1365 1388 - mutex_lock(&extcon_dev_list_lock); 1389 - list_for_each_entry(edev, &extcon_dev_list, entry) { 1390 - if (edev->dev.parent && edev->dev.parent->of_node == node) { 1391 - mutex_unlock(&extcon_dev_list_lock); 1392 - of_node_put(node); 1393 - return edev; 1394 - } 1395 - } 1396 - mutex_unlock(&extcon_dev_list_lock); 1366 + edev = extcon_find_edev_by_node(node); 1397 1367 of_node_put(node); 1398 1368 1399 - return ERR_PTR(-EPROBE_DEFER); 1369 + return edev; 1400 1370 } 1371 + 1401 1372 #else 1373 + 1374 + struct extcon_dev *extcon_find_edev_by_node(struct device_node *node) 1375 + { 1376 + return ERR_PTR(-ENOSYS); 1377 + } 1378 + 1402 1379 struct extcon_dev *extcon_get_edev_by_phandle(struct device *dev, int index) 1403 1380 { 1404 1381 return ERR_PTR(-ENOSYS); 1405 1382 } 1383 + 1406 1384 #endif /* CONFIG_OF */ 1385 + 1386 + EXPORT_SYMBOL_GPL(extcon_find_edev_by_node); 1407 1387 EXPORT_SYMBOL_GPL(extcon_get_edev_by_phandle); 1408 1388 1409 1389 /**
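The new extcon_find_edev_by_node() walks the registered-device list and reports a miss as -EPROBE_DEFER, since the provider may simply not have probed yet. A stand-alone sketch of that lookup-or-defer pattern, using minimal stand-ins for the kernel's ERR_PTR helpers (the struct and list layout here are illustrative, not the real extcon types):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the kernel's ERR_PTR machinery */
#define EPROBE_DEFER 517
#define ERR_PTR(err) ((void *)(long)(err))
#define PTR_ERR(ptr) ((long)(ptr))
#define IS_ERR(ptr)  ((unsigned long)(ptr) >= (unsigned long)-4095)

struct edev {
	const void *of_node;	/* parent's device_node in the real code */
	struct edev *next;
};

/* Walk the registered-device list; a miss is reported as -EPROBE_DEFER
 * because the matching provider may register later. */
static struct edev *find_edev_by_node(struct edev *list, const void *node)
{
	struct edev *e;

	for (e = list; e; e = e->next)
		if (e->of_node == node)
			return e;
	return ERR_PTR(-EPROBE_DEFER);
}

/* A tiny two-entry "registered device" list for illustration */
static int node_a, node_b;
static struct edev edev_b = { &node_b, NULL };
static struct edev edev_a = { &node_a, &edev_b };
```

Returning -EPROBE_DEFER instead of -ENODEV is what lets consumers such as the sii8620 bridge (further down in this diff) retry probing once the extcon provider appears.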
-2
drivers/fpga/altera-cvp.c
··· 384 384 const struct pci_device_id *dev_id); 385 385 static void altera_cvp_remove(struct pci_dev *pdev); 386 386 387 - #define PCI_VENDOR_ID_ALTERA 0x1172 388 - 389 387 static struct pci_device_id altera_cvp_id_tbl[] = { 390 388 { PCI_VDEVICE(ALTERA, PCI_ANY_ID) }, 391 389 { }
+1
drivers/fsi/Kconfig
··· 4 4 5 5 menuconfig FSI 6 6 tristate "FSI support" 7 + depends on OF 7 8 select CRC4 8 9 ---help--- 9 10 FSI - the FRU Support Interface - is a simple bus for low-level
+120 -9
drivers/fsi/fsi-core.c
··· 18 18 #include <linux/fsi.h> 19 19 #include <linux/idr.h> 20 20 #include <linux/module.h> 21 + #include <linux/of.h> 21 22 #include <linux/slab.h> 22 23 #include <linux/bitops.h> 23 24 ··· 143 142 { 144 143 struct fsi_device *device = to_fsi_dev(_device); 145 144 145 + of_node_put(device->dev.of_node); 146 146 kfree(device); 147 147 } 148 148 ··· 207 205 if (rc) 208 206 return rc; 209 207 210 - dev_info(&slave->dev, "status: 0x%08x, sisc: 0x%08x\n", 208 + dev_dbg(&slave->dev, "status: 0x%08x, sisc: 0x%08x\n", 211 209 be32_to_cpu(stat), be32_to_cpu(irq)); 212 210 213 211 /* clear interrupts */ ··· 336 334 } 337 335 EXPORT_SYMBOL_GPL(fsi_slave_release_range); 338 336 337 + static bool fsi_device_node_matches(struct device *dev, struct device_node *np, 338 + uint32_t addr, uint32_t size) 339 + { 340 + unsigned int len, na, ns; 341 + const __be32 *prop; 342 + uint32_t psize; 343 + 344 + na = of_n_addr_cells(np); 345 + ns = of_n_size_cells(np); 346 + 347 + if (na != 1 || ns != 1) 348 + return false; 349 + 350 + prop = of_get_property(np, "reg", &len); 351 + if (!prop || len != 8) 352 + return false; 353 + 354 + if (of_read_number(prop, 1) != addr) 355 + return false; 356 + 357 + psize = of_read_number(prop + 1, 1); 358 + if (psize != size) { 359 + dev_warn(dev, 360 + "node %s matches probed address, but not size (got 0x%x, expected 0x%x)", 361 + of_node_full_name(np), psize, size); 362 + } 363 + 364 + return true; 365 + } 366 + 367 + /* Find a matching node for the slave engine at @address, using @size bytes 368 + * of space. Returns NULL if not found, or a matching node with refcount 369 + * already incremented. 
370 + */ 371 + static struct device_node *fsi_device_find_of_node(struct fsi_device *dev) 372 + { 373 + struct device_node *parent, *np; 374 + 375 + parent = dev_of_node(&dev->slave->dev); 376 + if (!parent) 377 + return NULL; 378 + 379 + for_each_child_of_node(parent, np) { 380 + if (fsi_device_node_matches(&dev->dev, np, 381 + dev->addr, dev->size)) 382 + return np; 383 + } 384 + 385 + return NULL; 386 + } 387 + 339 388 static int fsi_slave_scan(struct fsi_slave *slave) 340 389 { 341 390 uint32_t engine_addr; ··· 455 402 dev_set_name(&dev->dev, "%02x:%02x:%02x:%02x", 456 403 slave->master->idx, slave->link, 457 404 slave->id, i - 2); 405 + dev->dev.of_node = fsi_device_find_of_node(dev); 458 406 459 407 rc = device_register(&dev->dev); 460 408 if (rc) { ··· 612 558 { 613 559 struct fsi_slave *slave = to_fsi_slave(dev); 614 560 561 + of_node_put(dev->of_node); 615 562 kfree(slave); 563 + } 564 + 565 + static bool fsi_slave_node_matches(struct device_node *np, 566 + int link, uint8_t id) 567 + { 568 + unsigned int len, na, ns; 569 + const __be32 *prop; 570 + 571 + na = of_n_addr_cells(np); 572 + ns = of_n_size_cells(np); 573 + 574 + /* Ensure we have the correct format for addresses and sizes in 575 + * reg properties 576 + */ 577 + if (na != 2 || ns != 0) 578 + return false; 579 + 580 + prop = of_get_property(np, "reg", &len); 581 + if (!prop || len != 8) 582 + return false; 583 + 584 + return (of_read_number(prop, 1) == link) && 585 + (of_read_number(prop + 1, 1) == id); 586 + } 587 + 588 + /* Find a matching node for the slave at (link, id). Returns NULL if none 589 + * found, or a matching node with refcount already incremented. 
590 + */ 591 + static struct device_node *fsi_slave_find_of_node(struct fsi_master *master, 592 + int link, uint8_t id) 593 + { 594 + struct device_node *parent, *np; 595 + 596 + parent = dev_of_node(&master->dev); 597 + if (!parent) 598 + return NULL; 599 + 600 + for_each_child_of_node(parent, np) { 601 + if (fsi_slave_node_matches(np, link, id)) 602 + return np; 603 + } 604 + 605 + return NULL; 616 606 } 617 607 618 608 static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id) ··· 687 589 return -EIO; 688 590 } 689 591 690 - dev_info(&master->dev, "fsi: found chip %08x at %02x:%02x:%02x\n", 592 + dev_dbg(&master->dev, "fsi: found chip %08x at %02x:%02x:%02x\n", 691 593 chip_id, master->idx, link, id); 692 594 693 595 rc = fsi_slave_set_smode(master, link, id); ··· 721 623 722 624 slave->master = master; 723 625 slave->dev.parent = &master->dev; 626 + slave->dev.of_node = fsi_slave_find_of_node(master, link, id); 724 627 slave->dev.release = fsi_slave_release; 725 628 slave->link = link; 726 629 slave->id = id; ··· 755 656 /* FSI master support */ 756 657 static int fsi_check_access(uint32_t addr, size_t size) 757 658 { 758 - if (size != 1 && size != 2 && size != 4) 759 - return -EINVAL; 760 - 761 - if ((addr & 0x3) != (size & 0x3)) 659 + if (size == 4) { 660 + if (addr & 0x3) 661 + return -EINVAL; 662 + } else if (size == 2) { 663 + if (addr & 0x1) 664 + return -EINVAL; 665 + } else if (size != 1) 762 666 return -EINVAL; 763 667 764 668 return 0; ··· 864 762 device_for_each_child(&master->dev, NULL, fsi_master_remove_slave); 865 763 } 866 764 765 + int fsi_master_rescan(struct fsi_master *master) 766 + { 767 + fsi_master_unscan(master); 768 + return fsi_master_scan(master); 769 + } 770 + EXPORT_SYMBOL_GPL(fsi_master_rescan); 771 + 867 772 static ssize_t master_rescan_store(struct device *dev, 868 773 struct device_attribute *attr, const char *buf, size_t count) 869 774 { 870 775 struct fsi_master *master = to_fsi_master(dev); 871 776 int rc; 872 
777 873 - fsi_master_unscan(master); 874 - rc = fsi_master_scan(master); 778 + rc = fsi_master_rescan(master); 875 779 if (rc < 0) 876 780 return rc; 877 781 ··· 901 793 int fsi_master_register(struct fsi_master *master) 902 794 { 903 795 int rc; 796 + struct device_node *np; 904 797 905 798 if (!master) 906 799 return -EINVAL; ··· 929 820 return rc; 930 821 } 931 822 932 - fsi_master_scan(master); 823 + np = dev_of_node(&master->dev); 824 + if (!of_property_read_bool(np, "no-scan-on-init")) 825 + fsi_master_scan(master); 933 826 934 827 return 0; 935 828 }
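The reworked fsi_check_access() ties the alignment requirement to the access size, where the old `(addr & 0x3) != (size & 0x3)` test conflated the two (it even rejected byte accesses at word-aligned addresses). The new logic can be exercised stand-alone; this is a user-space copy of the function from the diff, for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define EINVAL 22

/* Mirror of the reworked check above: an access must be 1, 2 or 4 bytes,
 * and 2- and 4-byte accesses must be naturally aligned. */
static int fsi_check_access(uint32_t addr, size_t size)
{
	if (size == 4) {
		if (addr & 0x3)
			return -EINVAL;
	} else if (size == 2) {
		if (addr & 0x1)
			return -EINVAL;
	} else if (size != 1)
		return -EINVAL;

	return 0;
}
```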
+87 -2
drivers/fsi/fsi-master-gpio.c
··· 9 9 #include <linux/gpio/consumer.h> 10 10 #include <linux/io.h> 11 11 #include <linux/module.h> 12 + #include <linux/of.h> 12 13 #include <linux/platform_device.h> 13 14 #include <linux/slab.h> 14 15 #include <linux/spinlock.h> ··· 60 59 struct gpio_desc *gpio_trans; /* Voltage translator */ 61 60 struct gpio_desc *gpio_enable; /* FSI enable */ 62 61 struct gpio_desc *gpio_mux; /* Mux control */ 62 + bool external_mode; 63 63 }; 64 64 65 65 #define CREATE_TRACE_POINTS ··· 413 411 int rc; 414 412 415 413 spin_lock_irqsave(&master->cmd_lock, flags); 414 + 415 + if (master->external_mode) { 416 + spin_unlock_irqrestore(&master->cmd_lock, flags); 417 + return -EBUSY; 418 + } 419 + 416 420 serial_out(master, cmd); 417 421 echo_delay(master); 418 422 rc = poll_for_response(master, slave, resp_len, resp); ··· 469 461 static int fsi_master_gpio_break(struct fsi_master *_master, int link) 470 462 { 471 463 struct fsi_master_gpio *master = to_fsi_master_gpio(_master); 464 + unsigned long flags; 472 465 473 466 if (link != 0) 474 467 return -ENODEV; 475 468 476 469 trace_fsi_master_gpio_break(master); 477 470 471 + spin_lock_irqsave(&master->cmd_lock, flags); 472 + if (master->external_mode) { 473 + spin_unlock_irqrestore(&master->cmd_lock, flags); 474 + return -EBUSY; 475 + } 478 476 set_sda_output(master, 1); 479 477 sda_out(master, 1); 480 478 clock_toggle(master, FSI_PRE_BREAK_CLOCKS); ··· 489 475 echo_delay(master); 490 476 sda_out(master, 1); 491 477 clock_toggle(master, FSI_POST_BREAK_CLOCKS); 478 + spin_unlock_irqrestore(&master->cmd_lock, flags); 492 479 493 480 /* Wait for logic reset to take effect */ 494 481 udelay(200); ··· 509 494 clock_zeros(master, FSI_INIT_CLOCKS); 510 495 } 511 496 497 + static void fsi_master_gpio_init_external(struct fsi_master_gpio *master) 498 + { 499 + gpiod_direction_output(master->gpio_mux, 0); 500 + gpiod_direction_output(master->gpio_trans, 0); 501 + gpiod_direction_output(master->gpio_enable, 1); 502 + 
gpiod_direction_input(master->gpio_clk); 503 + gpiod_direction_input(master->gpio_data); 504 + } 505 + 512 506 static int fsi_master_gpio_link_enable(struct fsi_master *_master, int link) 513 507 { 514 508 struct fsi_master_gpio *master = to_fsi_master_gpio(_master); 509 + unsigned long flags; 510 + int rc = -EBUSY; 515 511 516 512 if (link != 0) 517 513 return -ENODEV; 518 - gpiod_set_value(master->gpio_enable, 1); 519 514 520 - return 0; 515 + spin_lock_irqsave(&master->cmd_lock, flags); 516 + if (!master->external_mode) { 517 + gpiod_set_value(master->gpio_enable, 1); 518 + rc = 0; 519 + } 520 + spin_unlock_irqrestore(&master->cmd_lock, flags); 521 + 522 + return rc; 521 523 } 524 + 525 + static ssize_t external_mode_show(struct device *dev, 526 + struct device_attribute *attr, char *buf) 527 + { 528 + struct fsi_master_gpio *master = dev_get_drvdata(dev); 529 + 530 + return snprintf(buf, PAGE_SIZE - 1, "%u\n", 531 + master->external_mode ? 1 : 0); 532 + } 533 + 534 + static ssize_t external_mode_store(struct device *dev, 535 + struct device_attribute *attr, const char *buf, size_t count) 536 + { 537 + struct fsi_master_gpio *master = dev_get_drvdata(dev); 538 + unsigned long flags, val; 539 + bool external_mode; 540 + int err; 541 + 542 + err = kstrtoul(buf, 0, &val); 543 + if (err) 544 + return err; 545 + 546 + external_mode = !!val; 547 + 548 + spin_lock_irqsave(&master->cmd_lock, flags); 549 + 550 + if (external_mode == master->external_mode) { 551 + spin_unlock_irqrestore(&master->cmd_lock, flags); 552 + return count; 553 + } 554 + 555 + master->external_mode = external_mode; 556 + if (master->external_mode) 557 + fsi_master_gpio_init_external(master); 558 + else 559 + fsi_master_gpio_init(master); 560 + spin_unlock_irqrestore(&master->cmd_lock, flags); 561 + 562 + fsi_master_rescan(&master->master); 563 + 564 + return count; 565 + } 566 + 567 + static DEVICE_ATTR(external_mode, 0664, 568 + external_mode_show, external_mode_store); 522 569 523 570 static 
int fsi_master_gpio_probe(struct platform_device *pdev) 524 571 { 525 572 struct fsi_master_gpio *master; 526 573 struct gpio_desc *gpio; 574 + int rc; 527 575 528 576 master = devm_kzalloc(&pdev->dev, sizeof(*master), GFP_KERNEL); 529 577 if (!master) ··· 594 516 595 517 master->dev = &pdev->dev; 596 518 master->master.dev.parent = master->dev; 519 + master->master.dev.of_node = of_node_get(dev_of_node(master->dev)); 597 520 598 521 gpio = devm_gpiod_get(&pdev->dev, "clock", 0); 599 522 if (IS_ERR(gpio)) { ··· 644 565 645 566 fsi_master_gpio_init(master); 646 567 568 + rc = device_create_file(&pdev->dev, &dev_attr_external_mode); 569 + if (rc) 570 + return rc; 571 + 647 572 return fsi_master_register(&master->master); 648 573 } 649 574 ··· 665 582 if (master->gpio_mux) 666 583 devm_gpiod_put(&pdev->dev, master->gpio_mux); 667 584 fsi_master_unregister(&master->master); 585 + 586 + of_node_put(master->master.dev.of_node); 668 587 669 588 return 0; 670 589 }
+23 -4
drivers/fsi/fsi-master-hub.c
··· 16 16 #include <linux/delay.h> 17 17 #include <linux/fsi.h> 18 18 #include <linux/module.h> 19 + #include <linux/of.h> 19 20 #include <linux/slab.h> 20 21 21 22 #include "fsi-master.h" ··· 254 253 255 254 reg = be32_to_cpu(__reg); 256 255 links = (reg >> 8) & 0xff; 257 - dev_info(dev, "hub version %08x (%d links)\n", reg, links); 256 + dev_dbg(dev, "hub version %08x (%d links)\n", reg, links); 258 257 259 258 rc = fsi_slave_claim_range(fsi_dev->slave, FSI_HUB_LINK_OFFSET, 260 259 FSI_HUB_LINK_SIZE * links); ··· 275 274 276 275 hub->master.dev.parent = dev; 277 276 hub->master.dev.release = hub_master_release; 277 + hub->master.dev.of_node = of_node_get(dev_of_node(dev)); 278 278 279 279 hub->master.n_links = links; 280 280 hub->master.read = hub_master_read; ··· 288 286 hub_master_init(hub); 289 287 290 288 rc = fsi_master_register(&hub->master); 291 - if (!rc) 292 - return 0; 289 + if (rc) 290 + goto err_release; 293 291 294 - kfree(hub); 292 + /* At this point, fsi_master_register performs the device_initialize(), 293 + * and holds the sole reference on master.dev. This means the device 294 + * will be freed (via ->release) during any subsequent call to 295 + * fsi_master_unregister. We add our own reference to it here, so we 296 + * can perform cleanup (in _remove()) without it being freed before 297 + * we're ready. 298 + */ 299 + get_device(&hub->master.dev); 300 + return 0; 301 + 295 302 err_release: 296 303 fsi_slave_release_range(fsi_dev->slave, FSI_HUB_LINK_OFFSET, 297 304 FSI_HUB_LINK_SIZE * links); ··· 313 302 314 303 fsi_master_unregister(&hub->master); 315 304 fsi_slave_release_range(hub->upstream->slave, hub->addr, hub->size); 305 + of_node_put(hub->master.dev.of_node); 306 + 307 + /* 308 + * master.dev will likely be ->release()ed after this, which free()s 309 + * the hub 310 + */ 311 + put_device(&hub->master.dev); 312 + 316 313 return 0; 317 314 } 318 315
+17
drivers/fsi/fsi-master.h
··· 37 37 38 38 #define dev_to_fsi_master(d) container_of(d, struct fsi_master, dev) 39 39 40 + /** 41 + * fsi_master registration & lifetime: the fsi_master_register() and 42 + * fsi_master_unregister() functions will take ownership of the master, and 43 + * ->dev in particular. The registration path performs a get_device(), which 44 + * takes the first reference on the device. Similarly, the unregistration path 45 + * performs a put_device(), which may well drop the last reference. 46 + * 47 + * This means that master implementations *may* need to hold their own 48 + * reference (via get_device()) on master->dev. In particular, if the device's 49 + * ->release callback frees the fsi_master, then fsi_master_unregister will 50 + * invoke this free if no other reference is held. 51 + * 52 + * The same applies for the error path of fsi_master_register; if the call 53 + * fails, dev->release will have been invoked. 54 + */ 40 55 extern int fsi_master_register(struct fsi_master *master); 41 56 extern void fsi_master_unregister(struct fsi_master *master); 57 + 58 + extern int fsi_master_rescan(struct fsi_master *master); 42 59 43 60 #endif /* DRIVERS_FSI_MASTER_H */
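The lifetime rule documented above can be modeled with a toy refcount: registration takes the first reference, unregistration drops it, and an implementation that still needs the object after unregister (as the hub driver does in _remove()) must hold its own reference. A hedged sketch; none of these names are kernel API, they just mimic get_device()/put_device() and the ->release callback:

```c
#include <assert.h>

struct toy_dev {
	int refcount;
	void (*release)(struct toy_dev *);
};

static int released;

static void toy_release(struct toy_dev *dev)
{
	released = 1;	/* in the kernel this would free the master */
}

static void toy_get(struct toy_dev *dev) { dev->refcount++; }

static void toy_put(struct toy_dev *dev)
{
	if (--dev->refcount == 0)
		dev->release(dev);
}

/* Registration takes the first reference; unregistration drops it. */
static void toy_register(struct toy_dev *dev)   { toy_get(dev); }
static void toy_unregister(struct toy_dev *dev) { toy_put(dev); }

static struct toy_dev hub = { 0, toy_release };

/* Replay the hub driver's sequence: returns 1 iff the object survived
 * unregister and was only released by the driver's final put. */
static int lifetime_demo(void)
{
	toy_register(&hub);   /* fsi_master_register(): first reference */
	toy_get(&hub);        /* driver's own extra reference (hub probe) */
	toy_unregister(&hub); /* drops the registration reference */
	if (released)         /* must not have been freed yet */
		return 0;
	toy_put(&hub);        /* driver's put_device() in _remove() */
	return released;      /* now ->release has run */
}
```

Without the extra toy_get(), toy_unregister() would drop the last reference and the _remove() path would touch freed memory, which is exactly the hazard the fsi-master-hub change guards against.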
+94 -3
drivers/gpu/drm/bridge/sil-sii8620.c
··· 17 17 18 18 #include <linux/clk.h> 19 19 #include <linux/delay.h> 20 + #include <linux/extcon.h> 20 21 #include <linux/gpio/consumer.h> 21 22 #include <linux/i2c.h> 22 23 #include <linux/interrupt.h> ··· 26 25 #include <linux/list.h> 27 26 #include <linux/module.h> 28 27 #include <linux/mutex.h> 28 + #include <linux/of_graph.h> 29 29 #include <linux/regulator/consumer.h> 30 30 #include <linux/slab.h> 31 31 ··· 83 81 struct edid *edid; 84 82 unsigned int gen2_write_burst:1; 85 83 enum sii8620_mt_state mt_state; 84 + struct extcon_dev *extcon; 85 + struct notifier_block extcon_nb; 86 + struct work_struct extcon_wq; 87 + int cable_state; 86 88 struct list_head mt_queue; 87 89 struct { 88 90 int r_size; ··· 2176 2170 ctx->rc_dev = rc_dev; 2177 2171 } 2178 2172 2173 + static void sii8620_cable_out(struct sii8620 *ctx) 2174 + { 2175 + disable_irq(to_i2c_client(ctx->dev)->irq); 2176 + sii8620_hw_off(ctx); 2177 + } 2178 + 2179 + static void sii8620_extcon_work(struct work_struct *work) 2180 + { 2181 + struct sii8620 *ctx = 2182 + container_of(work, struct sii8620, extcon_wq); 2183 + int state = extcon_get_state(ctx->extcon, EXTCON_DISP_MHL); 2184 + 2185 + if (state == ctx->cable_state) 2186 + return; 2187 + 2188 + ctx->cable_state = state; 2189 + 2190 + if (state > 0) 2191 + sii8620_cable_in(ctx); 2192 + else 2193 + sii8620_cable_out(ctx); 2194 + } 2195 + 2196 + static int sii8620_extcon_notifier(struct notifier_block *self, 2197 + unsigned long event, void *ptr) 2198 + { 2199 + struct sii8620 *ctx = 2200 + container_of(self, struct sii8620, extcon_nb); 2201 + 2202 + schedule_work(&ctx->extcon_wq); 2203 + 2204 + return NOTIFY_DONE; 2205 + } 2206 + 2207 + static int sii8620_extcon_init(struct sii8620 *ctx) 2208 + { 2209 + struct extcon_dev *edev; 2210 + struct device_node *musb, *muic; 2211 + int ret; 2212 + 2213 + /* get micro-USB connector node */ 2214 + musb = of_graph_get_remote_node(ctx->dev->of_node, 1, -1); 2215 + /* next get micro-USB Interface Controller node 
*/ 2216 + muic = of_get_next_parent(musb); 2217 + 2218 + if (!muic) { 2219 + dev_info(ctx->dev, "no extcon found, switching to 'always on' mode\n"); 2220 + return 0; 2221 + } 2222 + 2223 + edev = extcon_find_edev_by_node(muic); 2224 + of_node_put(muic); 2225 + if (IS_ERR(edev)) { 2226 + if (PTR_ERR(edev) == -EPROBE_DEFER) 2227 + return -EPROBE_DEFER; 2228 + dev_err(ctx->dev, "Invalid or missing extcon\n"); 2229 + return PTR_ERR(edev); 2230 + } 2231 + 2232 + ctx->extcon = edev; 2233 + ctx->extcon_nb.notifier_call = sii8620_extcon_notifier; 2234 + INIT_WORK(&ctx->extcon_wq, sii8620_extcon_work); 2235 + ret = extcon_register_notifier(edev, EXTCON_DISP_MHL, &ctx->extcon_nb); 2236 + if (ret) { 2237 + dev_err(ctx->dev, "failed to register notifier for MHL\n"); 2238 + return ret; 2239 + } 2240 + 2241 + return 0; 2242 + } 2243 + 2179 2244 static inline struct sii8620 *bridge_to_sii8620(struct drm_bridge *bridge) 2180 2245 { 2181 2246 return container_of(bridge, struct sii8620, bridge); ··· 2379 2302 if (ret) 2380 2303 return ret; 2381 2304 2305 + ret = sii8620_extcon_init(ctx); 2306 + if (ret < 0) { 2307 + dev_err(ctx->dev, "failed to initialize EXTCON\n"); 2308 + return ret; 2309 + } 2310 + 2382 2311 i2c_set_clientdata(client, ctx); 2383 2312 2384 2313 ctx->bridge.funcs = &sii8620_bridge_funcs; 2385 2314 ctx->bridge.of_node = dev->of_node; 2386 2315 drm_bridge_add(&ctx->bridge); 2387 2316 2388 - sii8620_cable_in(ctx); 2317 + if (!ctx->extcon) 2318 + sii8620_cable_in(ctx); 2389 2319 2390 2320 return 0; 2391 2321 } ··· 2401 2317 { 2402 2318 struct sii8620 *ctx = i2c_get_clientdata(client); 2403 2319 2404 - disable_irq(to_i2c_client(ctx->dev)->irq); 2405 - sii8620_hw_off(ctx); 2320 + if (ctx->extcon) { 2321 + extcon_unregister_notifier(ctx->extcon, EXTCON_DISP_MHL, 2322 + &ctx->extcon_nb); 2323 + flush_work(&ctx->extcon_wq); 2324 + if (ctx->cable_state > 0) 2325 + sii8620_cable_out(ctx); 2326 + } else { 2327 + sii8620_cable_out(ctx); 2328 + } 2406 2329 
drm_bridge_remove(&ctx->bridge); 2407 2330 2408 2331 return 0;
+2
drivers/hv/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 1 3 menu "Microsoft Hyper-V guest support" 2 4 3 5 config HYPERV
+1
drivers/hv/Makefile
··· 4 4 obj-$(CONFIG_HYPERV_BALLOON) += hv_balloon.o 5 5 6 6 CFLAGS_hv_trace.o = -I$(src) 7 + CFLAGS_hv_balloon.o = -I$(src) 7 8 8 9 hv_vmbus-y := vmbus_drv.o \ 9 10 hv.o connection.o channel.o \
+3 -5
drivers/hv/channel_mgmt.c
··· 71 71 /* PCIE */ 72 72 { .dev_type = HV_PCIE, 73 73 HV_PCIE_GUID, 74 - .perf_device = true, 74 + .perf_device = false, 75 75 }, 76 76 77 77 /* Synthetic Frame Buffer */ ··· 596 596 /* 597 597 * Starting with Win8, we can statically distribute the incoming 598 598 * channel interrupt load by binding a channel to VCPU. 599 - * We do this in a hierarchical fashion: 600 - * First distribute the primary channels across available NUMA nodes 601 - * and then distribute the subchannels amongst the CPUs in the NUMA 602 - * node assigned to the primary channel. 599 + * We distribute the interrupt loads to one or more NUMA nodes based on 600 + * the channel's affinity_policy. 603 601 * 604 602 * For pre-win8 hosts or non-performance critical channels we assign the 605 603 * first CPU in the first NUMA node.
+58 -6
drivers/hv/hv.c
··· 27 27 #include <linux/vmalloc.h> 28 28 #include <linux/hyperv.h> 29 29 #include <linux/version.h> 30 - #include <linux/interrupt.h> 30 + #include <linux/random.h> 31 31 #include <linux/clockchips.h> 32 32 #include <asm/hyperv.h> 33 33 #include <asm/mshyperv.h> ··· 37 37 struct hv_context hv_context = { 38 38 .synic_initialized = false, 39 39 }; 40 + 41 + /* 42 + * If false, we're using the old mechanism for stimer0 interrupts 43 + * where it sends a VMbus message when it expires. The old 44 + * mechanism is used when running on older versions of Hyper-V 45 + * that don't support Direct Mode. While Hyper-V provides 46 + * four stimers per CPU, Linux uses only stimer0. 47 + */ 48 + static bool direct_mode_enabled; 49 + static int stimer0_irq; 50 + static int stimer0_vector; 40 51 41 52 #define HV_TIMER_FREQUENCY (10 * 1000 * 1000) /* 100ns period */ 42 53 #define HV_MAX_MAX_DELTA_TICKS 0xffffffff ··· 64 53 if (!hv_context.cpu_context) 65 54 return -ENOMEM; 66 55 56 + direct_mode_enabled = ms_hyperv.misc_features & 57 + HV_X64_STIMER_DIRECT_MODE_AVAILABLE; 67 58 return 0; 68 59 }
98 + */ 99 + 100 + static void hv_stimer0_isr(void) 101 + { 102 + struct hv_per_cpu_context *hv_cpu; 103 + 104 + hv_cpu = this_cpu_ptr(hv_context.cpu_context); 105 + hv_cpu->clk_evt->event_handler(hv_cpu->clk_evt); 106 + add_interrupt_randomness(stimer0_vector, 0); 107 + } 108 + 107 109 static int hv_ce_set_next_event(unsigned long delta, 108 110 struct clock_event_device *evt) 109 111 { ··· 136 108 { 137 109 hv_init_timer(HV_X64_MSR_STIMER0_COUNT, 0); 138 110 hv_init_timer_config(HV_X64_MSR_STIMER0_CONFIG, 0); 111 + if (direct_mode_enabled) 112 + hv_disable_stimer0_percpu_irq(stimer0_irq); 139 113 140 114 return 0; 141 115 } ··· 146 116 { 147 117 union hv_timer_config timer_cfg; 148 118 119 + timer_cfg.as_uint64 = 0; 149 120 timer_cfg.enable = 1; 150 121 timer_cfg.auto_enable = 1; 151 - timer_cfg.sintx = VMBUS_MESSAGE_SINT; 122 + if (direct_mode_enabled) { 123 + /* 124 + * When it expires, the timer will directly interrupt 125 + * on the specified hardware vector/IRQ. 126 + */ 127 + timer_cfg.direct_mode = 1; 128 + timer_cfg.apic_vector = stimer0_vector; 129 + hv_enable_stimer0_percpu_irq(stimer0_irq); 130 + } else { 131 + /* 132 + * When it expires, the timer will generate a VMbus message, 133 + * to be handled by the normal VMbus interrupt handler. 
134 + */ 135 + timer_cfg.direct_mode = 0; 136 + timer_cfg.sintx = VMBUS_MESSAGE_SINT; 137 + } 152 138 hv_init_timer_config(HV_X64_MSR_STIMER0_CONFIG, timer_cfg.as_uint64); 153 - 154 139 return 0; 155 140 } 156 141 ··· 192 147 int cpu; 193 148 194 149 hv_context.hv_numa_map = kzalloc(sizeof(struct cpumask) * nr_node_ids, 195 - GFP_ATOMIC); 150 + GFP_KERNEL); 196 151 if (hv_context.hv_numa_map == NULL) { 197 152 pr_err("Unable to allocate NUMA map\n"); 198 153 goto err; ··· 236 191 INIT_LIST_HEAD(&hv_cpu->chan_list); 237 192 } 238 193 194 + if (direct_mode_enabled && 195 + hv_setup_stimer0_irq(&stimer0_irq, &stimer0_vector, 196 + hv_stimer0_isr)) 197 + goto err; 198 + 239 199 return 0; 240 200 err: 241 201 return -ENOMEM; ··· 267 217 } 268 218 269 219 /* 270 - * hv_synic_init - Initialize the Synthethic Interrupt Controller. 220 + * hv_synic_init - Initialize the Synthetic Interrupt Controller. 271 221 * 272 222 * If it is already initialized by another entity (ie x2v shim), we need to 273 223 * retrieve the initialized message and event pages. Otherwise, we create and ··· 302 252 hv_get_synint_state(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, 303 253 shared_sint.as_uint64); 304 254 305 - shared_sint.as_uint64 = 0; 306 255 shared_sint.vector = HYPERVISOR_CALLBACK_VECTOR; 307 256 shared_sint.masked = false; 308 257 if (ms_hyperv.hints & HV_X64_DEPRECATING_AEOI_RECOMMENDED) ··· 340 291 341 292 if (!(ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE)) 342 293 return; 294 + 295 + if (direct_mode_enabled) 296 + hv_remove_stimer0_irq(stimer0_irq); 343 297 344 298 for_each_present_cpu(cpu) { 345 299 struct hv_per_cpu_context *hv_cpu
+83 -38
drivers/hv/hv_balloon.c
··· 34 34 35 35 #include <linux/hyperv.h> 36 36 37 + #define CREATE_TRACE_POINTS 38 + #include "hv_trace_balloon.h" 39 + 37 40 /* 38 41 * We begin with definitions supporting the Dynamic Memory protocol 39 42 * with the host. ··· 579 576 static void post_status(struct hv_dynmem_device *dm); 580 577 581 578 #ifdef CONFIG_MEMORY_HOTPLUG 579 + static inline bool has_pfn_is_backed(struct hv_hotadd_state *has, 580 + unsigned long pfn) 581 + { 582 + struct hv_hotadd_gap *gap; 583 + 584 + /* The page is not backed. */ 585 + if ((pfn < has->covered_start_pfn) || (pfn >= has->covered_end_pfn)) 586 + return false; 587 + 588 + /* Check for gaps. */ 589 + list_for_each_entry(gap, &has->gap_list, list) { 590 + if ((pfn >= gap->start_pfn) && (pfn < gap->end_pfn)) 591 + return false; 592 + } 593 + 594 + return true; 595 + } 596 + 597 + static unsigned long hv_page_offline_check(unsigned long start_pfn, 598 + unsigned long nr_pages) 599 + { 600 + unsigned long pfn = start_pfn, count = 0; 601 + struct hv_hotadd_state *has; 602 + bool found; 603 + 604 + while (pfn < start_pfn + nr_pages) { 605 + /* 606 + * Search for HAS which covers the pfn and when we find one 607 + * count how many consecutive PFNs are covered. 608 + */ 609 + found = false; 610 + list_for_each_entry(has, &dm_device.ha_region_list, list) { 611 + while ((pfn >= has->start_pfn) && 612 + (pfn < has->end_pfn) && 613 + (pfn < start_pfn + nr_pages)) { 614 + found = true; 615 + if (has_pfn_is_backed(has, pfn)) 616 + count++; 617 + pfn++; 618 + } 619 + } 620 + 621 + /* 622 + * This PFN is not in any HAS (e.g. we're offlining a region 623 + * which was present at boot), no need to account for it. Go 624 + * to the next one. 
625 + */ 626 + if (!found) 627 + pfn++; 628 + } 629 + 630 + return count; 631 + } 632 + 582 633 static int hv_memory_notifier(struct notifier_block *nb, unsigned long val, 583 634 void *v) 584 635 { 585 636 struct memory_notify *mem = (struct memory_notify *)v; 586 - unsigned long flags; 637 + unsigned long flags, pfn_count; 587 638 588 639 switch (val) { 589 640 case MEM_ONLINE: ··· 650 593 651 594 case MEM_OFFLINE: 652 595 spin_lock_irqsave(&dm_device.ha_lock, flags); 653 - dm_device.num_pages_onlined -= mem->nr_pages; 596 + pfn_count = hv_page_offline_check(mem->start_pfn, 597 + mem->nr_pages); 598 + if (pfn_count <= dm_device.num_pages_onlined) { 599 + dm_device.num_pages_onlined -= pfn_count; 600 + } else { 601 + /* 602 + * We're offlining more pages than we managed to online. 603 + * This is unexpected. In any case don't let 604 + * num_pages_onlined wrap around zero. 605 + */ 606 + WARN_ON_ONCE(1); 607 + dm_device.num_pages_onlined = 0; 608 + } 654 609 spin_unlock_irqrestore(&dm_device.ha_lock, flags); 655 610 break; 656 611 case MEM_GOING_ONLINE: ··· 681 612 /* Check if the particular page is backed and can be onlined and online it. */ 682 613 static void hv_page_online_one(struct hv_hotadd_state *has, struct page *pg) 683 614 { 684 - unsigned long cur_start_pgp; 685 - unsigned long cur_end_pgp; 686 - struct hv_hotadd_gap *gap; 687 - 688 - cur_start_pgp = (unsigned long)pfn_to_page(has->covered_start_pfn); 689 - cur_end_pgp = (unsigned long)pfn_to_page(has->covered_end_pfn); 690 - 691 - /* The page is not backed. */ 692 - if (((unsigned long)pg < cur_start_pgp) || 693 - ((unsigned long)pg >= cur_end_pgp)) 615 + if (!has_pfn_is_backed(has, page_to_pfn(pg))) 694 616 return; 695 - 696 - /* Check for gaps. 
*/ 697 - list_for_each_entry(gap, &has->gap_list, list) { 698 - cur_start_pgp = (unsigned long) 699 - pfn_to_page(gap->start_pfn); 700 - cur_end_pgp = (unsigned long) 701 - pfn_to_page(gap->end_pfn); 702 - if (((unsigned long)pg >= cur_start_pgp) && 703 - ((unsigned long)pg < cur_end_pgp)) { 704 - return; 705 - } 706 - } 707 617 708 618 /* This frame is currently backed; online the page. */ 709 619 __online_page_set_limits(pg); ··· 739 691 (HA_CHUNK << PAGE_SHIFT)); 740 692 741 693 if (ret) { 742 - pr_warn("hot_add memory failed error is %d\n", ret); 694 + pr_err("hot_add memory failed error is %d\n", ret); 743 695 if (ret == -EEXIST) { 744 696 /* 745 697 * This error indicates that the error ··· 774 726 static void hv_online_page(struct page *pg) 775 727 { 776 728 struct hv_hotadd_state *has; 777 - unsigned long cur_start_pgp; 778 - unsigned long cur_end_pgp; 779 729 unsigned long flags; 730 + unsigned long pfn = page_to_pfn(pg); 780 731 781 732 spin_lock_irqsave(&dm_device.ha_lock, flags); 782 733 list_for_each_entry(has, &dm_device.ha_region_list, list) { 783 - cur_start_pgp = (unsigned long) 784 - pfn_to_page(has->start_pfn); 785 - cur_end_pgp = (unsigned long)pfn_to_page(has->end_pfn); 786 - 787 734 /* The page belongs to a different HAS. 
*/ 788 - if (((unsigned long)pg < cur_start_pgp) || 789 - ((unsigned long)pg >= cur_end_pgp)) 735 + if ((pfn < has->start_pfn) || (pfn >= has->end_pfn)) 790 736 continue; 791 737 792 738 hv_page_online_one(has, pg); ··· 1056 1014 resp.result = 0; 1057 1015 1058 1016 if (!do_hot_add || (resp.page_count == 0)) 1059 - pr_info("Memory hot add failed\n"); 1017 + pr_err("Memory hot add failed\n"); 1060 1018 1061 1019 dm->state = DM_INITIALIZED; 1062 1020 resp.hdr.trans_id = atomic_inc_return(&trans_id); ··· 1083 1041 1084 1042 break; 1085 1043 default: 1086 - pr_info("Received Unknown type: %d\n", info_hdr->type); 1044 + pr_warn("Received Unknown type: %d\n", info_hdr->type); 1087 1045 } 1088 1046 } 1089 1047 ··· 1162 1120 dm->num_pages_added - dm->num_pages_onlined : 0) + 1163 1121 compute_balloon_floor(); 1164 1122 1123 + trace_balloon_status(status.num_avail, status.num_committed, 1124 + vm_memory_committed(), dm->num_pages_ballooned, 1125 + dm->num_pages_added, dm->num_pages_onlined); 1165 1126 /* 1166 1127 * If our transaction ID is no longer current, just don't 1167 1128 * send the status. This can happen if we were interrupted ··· 1335 1290 /* 1336 1291 * Free up the memory we allocatted. 1337 1292 */ 1338 - pr_info("Balloon response failed\n"); 1293 + pr_err("Balloon response failed\n"); 1339 1294 1340 1295 for (i = 0; i < bl_resp->range_count; i++) 1341 1296 free_balloon_pages(&dm_device, ··· 1466 1421 struct dm_capabilities_resp_msg *cap_resp) 1467 1422 { 1468 1423 if (!cap_resp->is_accepted) { 1469 - pr_info("Capabilities not accepted by host\n"); 1424 + pr_err("Capabilities not accepted by host\n"); 1470 1425 dm->state = DM_INIT_ERROR; 1471 1426 } 1472 1427 complete(&dm->host_event); ··· 1553 1508 break; 1554 1509 1555 1510 default: 1556 - pr_err("Unhandled message: type: %d\n", dm_hdr->type); 1511 + pr_warn("Unhandled message: type: %d\n", dm_hdr->type); 1557 1512 1558 1513 } 1559 1514 }
+2
drivers/hv/hv_trace.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 1 3 #include "hyperv_vmbus.h" 2 4 3 5 #define CREATE_TRACE_POINTS
+2
drivers/hv/hv_trace.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 1 3 #undef TRACE_SYSTEM 2 4 #define TRACE_SYSTEM hyperv 3 5
+48
drivers/hv/hv_trace_balloon.h
··· 1 + #undef TRACE_SYSTEM 2 + #define TRACE_SYSTEM hyperv 3 + 4 + #if !defined(_HV_TRACE_BALLOON_H) || defined(TRACE_HEADER_MULTI_READ) 5 + #define _HV_TRACE_BALLOON_H 6 + 7 + #include <linux/tracepoint.h> 8 + 9 + TRACE_EVENT(balloon_status, 10 + TP_PROTO(u64 available, u64 committed, 11 + unsigned long vm_memory_committed, 12 + unsigned long pages_ballooned, 13 + unsigned long pages_added, 14 + unsigned long pages_onlined), 15 + TP_ARGS(available, committed, vm_memory_committed, 16 + pages_ballooned, pages_added, pages_onlined), 17 + TP_STRUCT__entry( 18 + __field(u64, available) 19 + __field(u64, committed) 20 + __field(unsigned long, vm_memory_committed) 21 + __field(unsigned long, pages_ballooned) 22 + __field(unsigned long, pages_added) 23 + __field(unsigned long, pages_onlined) 24 + ), 25 + TP_fast_assign( 26 + __entry->available = available; 27 + __entry->committed = committed; 28 + __entry->vm_memory_committed = vm_memory_committed; 29 + __entry->pages_ballooned = pages_ballooned; 30 + __entry->pages_added = pages_added; 31 + __entry->pages_onlined = pages_onlined; 32 + ), 33 + TP_printk("available %lld, committed %lld; vm_memory_committed %ld;" 34 + " pages_ballooned %ld, pages_added %ld, pages_onlined %ld", 35 + __entry->available, __entry->committed, 36 + __entry->vm_memory_committed, __entry->pages_ballooned, 37 + __entry->pages_added, __entry->pages_onlined 38 + ) 39 + ); 40 + 41 + #undef TRACE_INCLUDE_PATH 42 + #define TRACE_INCLUDE_PATH . 43 + #undef TRACE_INCLUDE_FILE 44 + #define TRACE_INCLUDE_FILE hv_trace_balloon 45 + #endif /* _HV_TRACE_BALLOON_H */ 46 + 47 + /* This part must be outside protection */ 48 + #include <trace/define_trace.h>
+3 -1
drivers/hv/hyperv_vmbus.h
··· 57 57 u64 periodic:1; 58 58 u64 lazy:1; 59 59 u64 auto_enable:1; 60 - u64 reserved_z0:12; 60 + u64 apic_vector:8; 61 + u64 direct_mode:1; 62 + u64 reserved_z0:3; 61 63 u64 sintx:4; 62 64 u64 reserved_z1:44; 63 65 };
+7
drivers/hwtracing/Kconfig
··· 1 + menu "HW tracing support" 2 + 3 + source "drivers/hwtracing/stm/Kconfig" 4 + 5 + source "drivers/hwtracing/intel_th/Kconfig" 6 + 7 + endmenu
+1 -1
drivers/hwtracing/coresight/coresight-cpu-debug.c
··· 315 315 } 316 316 317 317 pc = debug_adjust_pc(drvdata); 318 - dev_emerg(dev, " EDPCSR: [<%p>] %pS\n", (void *)pc, (void *)pc); 318 + dev_emerg(dev, " EDPCSR: [<%px>] %pS\n", (void *)pc, (void *)pc); 319 319 320 320 if (drvdata->edcidsr_present) 321 321 dev_emerg(dev, " EDCIDSR: %08x\n", drvdata->edcidsr);
+2 -2
drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
··· 1780 1780 */ 1781 1781 for (j = 0; j < 8; j++) { 1782 1782 if (maskbyte & 1) 1783 - config->ctxid_pid[i] &= ~(0xFF << (j * 8)); 1783 + config->ctxid_pid[i] &= ~(0xFFUL << (j * 8)); 1784 1784 maskbyte >>= 1; 1785 1785 } 1786 1786 /* Select the next ctxid comparator mask value */ ··· 1963 1963 */ 1964 1964 for (j = 0; j < 8; j++) { 1965 1965 if (maskbyte & 1) 1966 - config->vmid_val[i] &= ~(0xFF << (j * 8)); 1966 + config->vmid_val[i] &= ~(0xFFUL << (j * 8)); 1967 1967 maskbyte >>= 1; 1968 1968 } 1969 1969 /* Select the next vmid comparator mask value */
+12
drivers/hwtracing/intel_th/Kconfig
··· 25 25 26 26 Say Y here to enable PCI Intel TH support. 27 27 28 + config INTEL_TH_ACPI 29 + tristate "Intel(R) Trace Hub ACPI controller" 30 + depends on ACPI 31 + help 32 + Intel(R) Trace Hub may exist as an ACPI device. This option enables 33 + support glue layer for ACPI-based Intel TH. This typically implies 34 + 'host debugger' mode, that is, the trace configuration and capture 35 + is handled by an external debug host and corresponding controls will 36 + not be available on the target. 37 + 38 + Say Y here to enable ACPI Intel TH support. 39 + 28 40 config INTEL_TH_GTH 29 41 tristate "Intel(R) Trace Hub Global Trace Hub" 30 42 help
+3
drivers/hwtracing/intel_th/Makefile
··· 6 6 obj-$(CONFIG_INTEL_TH_PCI) += intel_th_pci.o 7 7 intel_th_pci-y := pci.o 8 8 9 + obj-$(CONFIG_INTEL_TH_ACPI) += intel_th_acpi.o 10 + intel_th_acpi-y := acpi.o 11 + 9 12 obj-$(CONFIG_INTEL_TH_GTH) += intel_th_gth.o 10 13 intel_th_gth-y := gth.o 11 14
+79
drivers/hwtracing/intel_th/acpi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Intel(R) Trace Hub ACPI driver 4 + * 5 + * Copyright (C) 2017 Intel Corporation. 6 + */ 7 + 8 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 9 + 10 + #include <linux/types.h> 11 + #include <linux/module.h> 12 + #include <linux/device.h> 13 + #include <linux/sysfs.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/acpi.h> 16 + 17 + #include "intel_th.h" 18 + 19 + #define DRIVER_NAME "intel_th_acpi" 20 + 21 + static const struct intel_th_drvdata intel_th_acpi_pch = { 22 + .host_mode_only = 1, 23 + }; 24 + 25 + static const struct intel_th_drvdata intel_th_acpi_uncore = { 26 + .host_mode_only = 1, 27 + }; 28 + 29 + static const struct acpi_device_id intel_th_acpi_ids[] = { 30 + { "INTC1000", (kernel_ulong_t)&intel_th_acpi_uncore }, 31 + { "INTC1001", (kernel_ulong_t)&intel_th_acpi_pch }, 32 + { "", 0 }, 33 + }; 34 + 35 + MODULE_DEVICE_TABLE(acpi, intel_th_acpi_ids); 36 + 37 + static int intel_th_acpi_probe(struct platform_device *pdev) 38 + { 39 + struct acpi_device *adev = ACPI_COMPANION(&pdev->dev); 40 + const struct acpi_device_id *id; 41 + struct intel_th *th; 42 + 43 + id = acpi_match_device(intel_th_acpi_ids, &pdev->dev); 44 + if (!id) 45 + return -ENODEV; 46 + 47 + th = intel_th_alloc(&pdev->dev, (void *)id->driver_data, 48 + pdev->resource, pdev->num_resources, -1); 49 + if (IS_ERR(th)) 50 + return PTR_ERR(th); 51 + 52 + adev->driver_data = th; 53 + 54 + return 0; 55 + } 56 + 57 + static int intel_th_acpi_remove(struct platform_device *pdev) 58 + { 59 + struct intel_th *th = platform_get_drvdata(pdev); 60 + 61 + intel_th_free(th); 62 + 63 + return 0; 64 + } 65 + 66 + static struct platform_driver intel_th_acpi_driver = { 67 + .probe = intel_th_acpi_probe, 68 + .remove = intel_th_acpi_remove, 69 + .driver = { 70 + .name = DRIVER_NAME, 71 + .acpi_match_table = intel_th_acpi_ids, 72 + }, 73 + }; 74 + 75 + module_platform_driver(intel_th_acpi_driver); 76 + 77 + MODULE_LICENSE("GPL v2"); 78 + 
MODULE_DESCRIPTION("Intel(R) Trace Hub ACPI controller driver"); 79 + MODULE_AUTHOR("Alexander Shishkin <alexander.shishkin@intel.com>");
+18 -13
drivers/hwtracing/intel_th/core.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Intel(R) Trace Hub driver core 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt ··· 630 638 thdev->output.port = -1; 631 639 thdev->output.scratchpad = subdev->scrpd; 632 640 } else if (subdev->type == INTEL_TH_SWITCH) { 633 - thdev->host_mode = host_mode; 641 + thdev->host_mode = 642 + INTEL_TH_CAP(th, host_mode_only) ? true : host_mode; 634 643 th->hub = thdev; 635 644 } 636 645 ··· 730 737 struct intel_th_device *thdev; 731 738 732 739 /* only allow SOURCE and SWITCH devices in host mode */ 733 - if (host_mode && subdev->type == INTEL_TH_OUTPUT) 740 + if ((INTEL_TH_CAP(th, host_mode_only) || host_mode) && 741 + subdev->type == INTEL_TH_OUTPUT) 734 742 continue; 735 743 736 744 /* ··· 807 813 struct resource *devres, unsigned int ndevres, int irq) 808 814 { 809 815 struct intel_th *th; 810 - int err; 816 + int err, r; 817 + 818 + if (irq == -1) 819 + for (r = 0; r < ndevres; r++) 820 + if (devres[r].flags & IORESOURCE_IRQ) { 821 + irq = devres[r].start; 822 + break; 823 + } 811 824 812 825 th = kzalloc(sizeof(*th), GFP_KERNEL); 813 826 if (!th) ··· 936 935 int intel_th_set_output(struct intel_th_device *thdev, 937 936 unsigned int master) 938 937 { 939 - struct intel_th_device *hub = to_intel_th_device(thdev->dev.parent); 938 + struct intel_th_device *hub = to_intel_th_hub(thdev); 940 939 struct intel_th_driver *hubdrv = to_intel_th_driver(hub->dev.driver); 940 + 941 + 
/* In host mode, this is up to the external debugger, do nothing. */ 942 + if (hub->host_mode) 943 + return 0; 941 944 942 945 if (!hubdrv->set_output) 943 946 return -ENOTSUPP;
+1 -9
drivers/hwtracing/intel_th/debug.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Intel(R) Trace Hub driver debugging 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #include <linux/types.h>
+1 -9
drivers/hwtracing/intel_th/debug.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Intel(R) Trace Hub driver debugging 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #ifndef __INTEL_TH_DEBUG_H__
+1 -9
drivers/hwtracing/intel_th/gth.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Intel(R) Trace Hub Global Trace Hub 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+1 -9
drivers/hwtracing/intel_th/gth.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Intel(R) Trace Hub Global Trace Hub (GTH) data structures 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #ifndef __INTEL_TH_GTH_H__
+4 -10
drivers/hwtracing/intel_th/intel_th.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Intel(R) Trace Hub data structures 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #ifndef __INTEL_TH_H__ ··· 42 50 /** 43 51 * struct intel_th_drvdata - describes hardware capabilities and quirks 44 52 * @tscu_enable: device needs SW to enable time stamping unit 53 + * @host_mode_only: device can only operate in 'host debugger' mode 45 54 */ 46 55 struct intel_th_drvdata { 47 - unsigned int tscu_enable : 1; 56 + unsigned int tscu_enable : 1, 57 + host_mode_only : 1; 48 58 }; 49 59 50 60 #define INTEL_TH_CAP(_th, _cap) ((_th)->drvdata ? (_th)->drvdata->_cap : 0)
+1 -9
drivers/hwtracing/intel_th/msu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Intel(R) Trace Hub Memory Storage Unit 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+1 -9
drivers/hwtracing/intel_th/msu.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Intel(R) Trace Hub Memory Storage Unit (MSU) data structures 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #ifndef __INTEL_TH_MSU_H__
+1 -9
drivers/hwtracing/intel_th/pci.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Intel(R) Trace Hub pci driver 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+1 -9
drivers/hwtracing/intel_th/pti.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Intel(R) Trace Hub PTI output driver 3 4 * 4 5 * Copyright (C) 2014-2016 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+1 -9
drivers/hwtracing/intel_th/pti.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Intel(R) Trace Hub PTI output data structures 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #ifndef __INTEL_TH_STH_H__
+1 -9
drivers/hwtracing/intel_th/sth.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Intel(R) Trace Hub Software Trace Hub support 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+1 -9
drivers/hwtracing/intel_th/sth.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Intel(R) Trace Hub Software Trace Hub (STH) data structures 3 4 * 4 5 * Copyright (C) 2014-2015 Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 6 */ 15 7 16 8 #ifndef __INTEL_TH_STH_H__
+1 -9
drivers/hwtracing/stm/console.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Simple kernel console driver for STM devices 3 4 * Copyright (c) 2014, Intel Corporation. 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms and conditions of the GNU General Public License, 7 - * version 2, as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 5 * 14 6 * STM console will send kernel messages over STM devices to a trace host. 15 7 */
+1 -9
drivers/hwtracing/stm/core.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * System Trace Module (STM) infrastructure 3 4 * Copyright (c) 2014, Intel Corporation. 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms and conditions of the GNU General Public License, 7 - * version 2, as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 5 * 14 6 * STM class implements generic infrastructure for System Trace Module devices 15 7 * as defined in MIPI STPv2 specification.
+22 -12
drivers/hwtracing/stm/dummy_stm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * A dummy STM device for stm/stm_source class testing. 3 4 * Copyright (c) 2014, Intel Corporation. 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms and conditions of the GNU General Public License, 7 - * version 2, as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 5 * 14 6 * STM class implements generic infrastructure for System Trace Module devices 15 7 * as defined in MIPI STPv2 specification. ··· 12 20 #include <linux/module.h> 13 21 #include <linux/slab.h> 14 22 #include <linux/stm.h> 23 + #include <uapi/linux/stm.h> 15 24 16 25 static ssize_t notrace 17 26 dummy_stm_packet(struct stm_data *stm_data, unsigned int master, ··· 45 52 46 53 module_param(fail_mode, int, 0600); 47 54 55 + static unsigned int master_min; 56 + 57 + module_param(master_min, int, 0400); 58 + 59 + static unsigned int master_max = STP_MASTER_MAX; 60 + 61 + module_param(master_max, int, 0400); 62 + 63 + static unsigned int nr_channels = STP_CHANNEL_MAX; 64 + 65 + module_param(nr_channels, int, 0400); 66 + 48 67 static int dummy_stm_link(struct stm_data *data, unsigned int master, 49 68 unsigned int channel) 50 69 { ··· 73 68 if (nr_dummies < 0 || nr_dummies > DUMMY_STM_MAX) 74 69 return -EINVAL; 75 70 71 + if (master_min > master_max || 72 + master_max > STP_MASTER_MAX || 73 + nr_channels > STP_CHANNEL_MAX) 74 + return -EINVAL; 75 + 76 76 for (i = 0; i < nr_dummies; i++) { 77 77 dummy_stm[i].name = kasprintf(GFP_KERNEL, "dummy_stm.%d", i); 78 78 if (!dummy_stm[i].name) 79 79 goto fail_unregister; 80 80 81 - dummy_stm[i].sw_start = 0x0000; 82 - dummy_stm[i].sw_end = 0xffff; 83 - dummy_stm[i].sw_nchannels = 0xffff; 81 + 
dummy_stm[i].sw_start = master_min; 82 + dummy_stm[i].sw_end = master_max; 83 + dummy_stm[i].sw_nchannels = nr_channels; 84 84 dummy_stm[i].packet = dummy_stm_packet; 85 85 dummy_stm[i].link = dummy_stm_link; 86 86
+1 -9
drivers/hwtracing/stm/heartbeat.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Simple heartbeat STM source driver 3 4 * Copyright (c) 2016, Intel Corporation. 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms and conditions of the GNU General Public License, 7 - * version 2, as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 5 * 14 6 * Heartbeat STM source will send repetitive messages over STM devices to a 15 7 * trace host.
+1 -9
drivers/hwtracing/stm/policy.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * System Trace Module (STM) master/channel allocation policy management 3 4 * Copyright (c) 2014, Intel Corporation. 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms and conditions of the GNU General Public License, 7 - * version 2, as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 5 * 14 6 * A master/channel allocation policy allows mapping string identifiers to 15 7 * master and channel ranges, where allocation can be done.
+1 -9
drivers/hwtracing/stm/stm.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * System Trace Module (STM) infrastructure 3 4 * Copyright (c) 2014, Intel Corporation. 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms and conditions of the GNU General Public License, 7 - * version 2, as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 5 * 14 6 * STM class implements generic infrastructure for System Trace Module devices 15 7 * as defined in MIPI STPv2 specification.
+1
drivers/mcb/mcb-pci.c
··· 117 117 118 118 static const struct pci_device_id mcb_pci_tbl[] = { 119 119 { PCI_DEVICE(PCI_VENDOR_ID_MEN, PCI_DEVICE_ID_MEN_CHAMELEON) }, 120 + { PCI_DEVICE(PCI_VENDOR_ID_ALTERA, PCI_DEVICE_ID_MEN_CHAMELEON) }, 120 121 { 0 }, 121 122 }; 122 123 MODULE_DEVICE_TABLE(pci, mcb_pci_tbl);
-1
drivers/misc/Kconfig
··· 75 75 config ATMEL_TCB_CLKSRC_BLOCK 76 76 int 77 77 depends on ATMEL_TCB_CLKSRC 78 - prompt "TC Block" if CPU_AT32AP700X 79 78 default 0 80 79 range 0 1 81 80 help
+1 -19
drivers/misc/Makefile
··· 12 12 obj-$(CONFIG_ATMEL_TCLIB) += atmel_tclib.o 13 13 obj-$(CONFIG_DUMMY_IRQ) += dummy-irq.o 14 14 obj-$(CONFIG_ICS932S401) += ics932s401.o 15 - obj-$(CONFIG_LKDTM) += lkdtm.o 15 + obj-$(CONFIG_LKDTM) += lkdtm/ 16 16 obj-$(CONFIG_TIFM_CORE) += tifm_core.o 17 17 obj-$(CONFIG_TIFM_7XX1) += tifm_7xx1.o 18 18 obj-$(CONFIG_PHANTOM) += phantom.o ··· 57 57 obj-$(CONFIG_PCI_ENDPOINT_TEST) += pci_endpoint_test.o 58 58 obj-$(CONFIG_OCXL) += ocxl/ 59 59 obj-$(CONFIG_MISC_RTSX) += cardreader/ 60 - 61 - lkdtm-$(CONFIG_LKDTM) += lkdtm_core.o 62 - lkdtm-$(CONFIG_LKDTM) += lkdtm_bugs.o 63 - lkdtm-$(CONFIG_LKDTM) += lkdtm_heap.o 64 - lkdtm-$(CONFIG_LKDTM) += lkdtm_perms.o 65 - lkdtm-$(CONFIG_LKDTM) += lkdtm_refcount.o 66 - lkdtm-$(CONFIG_LKDTM) += lkdtm_rodata_objcopy.o 67 - lkdtm-$(CONFIG_LKDTM) += lkdtm_usercopy.o 68 - 69 - KCOV_INSTRUMENT_lkdtm_rodata.o := n 70 - 71 - OBJCOPYFLAGS := 72 - OBJCOPYFLAGS_lkdtm_rodata_objcopy.o := \ 73 - --set-section-flags .text=alloc,readonly \ 74 - --rename-section .text=.rodata 75 - targets += lkdtm_rodata.o lkdtm_rodata_objcopy.o 76 - $(obj)/lkdtm_rodata_objcopy.o: $(obj)/lkdtm_rodata.o FORCE 77 - $(call if_changed,objcopy)
+39 -5
drivers/misc/aspeed-lpc-ctrl.c
··· 7 7 * 2 of the License, or (at your option) any later version. 8 8 */ 9 9 10 + #include <linux/clk.h> 10 11 #include <linux/mfd/syscon.h> 11 12 #include <linux/miscdevice.h> 12 13 #include <linux/mm.h> ··· 21 20 22 21 #define DEVICE_NAME "aspeed-lpc-ctrl" 23 22 23 + #define HICR5 0x0 24 + #define HICR5_ENL2H BIT(8) 25 + #define HICR5_ENFWH BIT(10) 26 + 24 27 #define HICR7 0x8 25 28 #define HICR8 0xc 26 29 27 30 struct aspeed_lpc_ctrl { 28 31 struct miscdevice miscdev; 29 32 struct regmap *regmap; 33 + struct clk *clk; 30 34 phys_addr_t mem_base; 31 35 resource_size_t mem_size; 32 36 u32 pnor_size; ··· 159 153 if (rc) 160 154 return rc; 161 155 162 - return regmap_write(lpc_ctrl->regmap, HICR8, 163 - (~(map.size - 1)) | ((map.size >> 16) - 1)); 156 + rc = regmap_write(lpc_ctrl->regmap, HICR8, 157 + (~(map.size - 1)) | ((map.size >> 16) - 1)); 158 + if (rc) 159 + return rc; 160 + 161 + /* 162 + * Enable LPC FHW cycles. This is required for the host to 163 + * access the regions specified. 
164 + */ 165 + return regmap_update_bits(lpc_ctrl->regmap, HICR5, 166 + HICR5_ENFWH | HICR5_ENL2H, 167 + HICR5_ENFWH | HICR5_ENL2H); 164 168 } 165 169 166 170 return -EINVAL; ··· 237 221 return -ENODEV; 238 222 } 239 223 224 + lpc_ctrl->clk = devm_clk_get(dev, NULL); 225 + if (IS_ERR(lpc_ctrl->clk)) { 226 + dev_err(dev, "couldn't get clock\n"); 227 + return PTR_ERR(lpc_ctrl->clk); 228 + } 229 + rc = clk_prepare_enable(lpc_ctrl->clk); 230 + if (rc) { 231 + dev_err(dev, "couldn't enable clock\n"); 232 + return rc; 233 + } 234 + 240 235 lpc_ctrl->miscdev.minor = MISC_DYNAMIC_MINOR; 241 236 lpc_ctrl->miscdev.name = DEVICE_NAME; 242 237 lpc_ctrl->miscdev.fops = &aspeed_lpc_ctrl_fops; 243 238 lpc_ctrl->miscdev.parent = dev; 244 239 rc = misc_register(&lpc_ctrl->miscdev); 245 - if (rc) 240 + if (rc) { 246 241 dev_err(dev, "Unable to register device\n"); 247 - else 248 - dev_info(dev, "Loaded at %pr\n", &resm); 242 + goto err; 243 + } 249 244 245 + dev_info(dev, "Loaded at %pr\n", &resm); 246 + 247 + return 0; 248 + 249 + err: 250 + clk_disable_unprepare(lpc_ctrl->clk); 250 251 return rc; 251 252 } 252 253 ··· 272 239 struct aspeed_lpc_ctrl *lpc_ctrl = dev_get_drvdata(&pdev->dev); 273 240 274 241 misc_deregister(&lpc_ctrl->miscdev); 242 + clk_disable_unprepare(lpc_ctrl->clk); 275 243 276 244 return 0; 277 245 }
+6 -6
drivers/misc/cardreader/rts5260.c
··· 388 388 OC_POWER_DOWN); 389 389 } 390 390 391 - int rts5260_get_ocpstat(struct rtsx_pcr *pcr, u8 *val) 391 + static int rts5260_get_ocpstat(struct rtsx_pcr *pcr, u8 *val) 392 392 { 393 393 return rtsx_pci_read_register(pcr, REG_OCPSTAT, val); 394 394 } 395 395 396 - int rts5260_get_ocpstat2(struct rtsx_pcr *pcr, u8 *val) 396 + static int rts5260_get_ocpstat2(struct rtsx_pcr *pcr, u8 *val) 397 397 { 398 398 return rtsx_pci_read_register(pcr, REG_DV3318_OCPSTAT, val); 399 399 } 400 400 401 - void rts5260_clear_ocpstat(struct rtsx_pcr *pcr) 401 + static void rts5260_clear_ocpstat(struct rtsx_pcr *pcr) 402 402 { 403 403 u8 mask = 0; 404 404 u8 val = 0; ··· 418 418 DV3318_OCP_INT_CLR | DV3318_OCP_CLR, 0); 419 419 } 420 420 421 - void rts5260_process_ocp(struct rtsx_pcr *pcr) 421 + static void rts5260_process_ocp(struct rtsx_pcr *pcr) 422 422 { 423 423 if (!pcr->option.ocp_en) 424 424 return; ··· 449 449 } 450 450 } 451 451 452 - int rts5260_init_hw(struct rtsx_pcr *pcr) 452 + static int rts5260_init_hw(struct rtsx_pcr *pcr) 453 453 { 454 454 int err; 455 455 ··· 620 620 return 0; 621 621 } 622 622 623 - void rts5260_set_aspm(struct rtsx_pcr *pcr, bool enable) 623 + static void rts5260_set_aspm(struct rtsx_pcr *pcr, bool enable) 624 624 { 625 625 struct rtsx_cr_option *option = &pcr->option; 626 626 u8 val = 0;
+161 -142
drivers/misc/eeprom/at24.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 1 2 /* 2 3 * at24.c - handle most I2C EEPROMs 3 4 * 4 5 * Copyright (C) 2005-2007 David Brownell 5 6 * Copyright (C) 2008 Wolfram Sang, Pengutronix 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License as published by 9 - * the Free Software Foundation; either version 2 of the License, or 10 - * (at your option) any later version. 11 7 */ 8 + 12 9 #include <linux/kernel.h> 13 10 #include <linux/init.h> 14 11 #include <linux/module.h> ··· 60 63 }; 61 64 62 65 struct at24_data { 63 - struct at24_platform_data chip; 64 - 65 66 /* 66 67 * Lock protects against activities from other Linux tasks, 67 68 * but not from changes by other I2C masters. ··· 70 75 unsigned int num_addresses; 71 76 unsigned int offset_adj; 72 77 73 - struct nvmem_config nvmem_config; 78 + u32 byte_len; 79 + u16 page_size; 80 + u8 flags; 81 + 74 82 struct nvmem_device *nvmem; 75 83 76 84 struct gpio_desc *wp_gpio; ··· 237 239 }; 238 240 MODULE_DEVICE_TABLE(acpi, at24_acpi_ids); 239 241 240 - /*-------------------------------------------------------------------------*/ 241 - 242 242 /* 243 243 * This routine supports chips which consume multiple I2C addresses. It 244 244 * computes the addressing information to be used for a given r/w request. ··· 251 255 { 252 256 unsigned int i; 253 257 254 - if (at24->chip.flags & AT24_FLAG_ADDR16) { 258 + if (at24->flags & AT24_FLAG_ADDR16) { 255 259 i = *offset >> 16; 256 260 *offset &= 0xffff; 257 261 } else { ··· 260 264 } 261 265 262 266 return &at24->client[i]; 267 + } 268 + 269 + static struct device *at24_base_client_dev(struct at24_data *at24) 270 + { 271 + return &at24->client[0].client->dev; 263 272 } 264 273 265 274 static size_t at24_adjust_read_count(struct at24_data *at24, ··· 278 277 * the next slave address: truncate the count to the slave boundary, 279 278 * so that the read never straddles slaves. 
280 279 */ 281 - if (at24->chip.flags & AT24_FLAG_NO_RDROL) { 282 - bits = (at24->chip.flags & AT24_FLAG_ADDR16) ? 16 : 8; 280 + if (at24->flags & AT24_FLAG_NO_RDROL) { 281 + bits = (at24->flags & AT24_FLAG_ADDR16) ? 16 : 8; 283 282 remainder = BIT(bits) - offset; 284 283 if (count > remainder) 285 284 count = remainder; ··· 338 337 count = at24->write_max; 339 338 340 339 /* Never roll over backwards, to the start of this page */ 341 - next_page = roundup(offset + 1, at24->chip.page_size); 340 + next_page = roundup(offset + 1, at24->page_size); 342 341 if (offset + count > next_page) 343 342 count = next_page - offset; 344 343 ··· 372 371 373 372 static int at24_read(void *priv, unsigned int off, void *val, size_t count) 374 373 { 375 - struct at24_data *at24 = priv; 376 - struct device *dev = &at24->client[0].client->dev; 374 + struct at24_data *at24; 375 + struct device *dev; 377 376 char *buf = val; 378 377 int ret; 378 + 379 + at24 = priv; 380 + dev = at24_base_client_dev(at24); 379 381 380 382 if (unlikely(!count)) 381 383 return count; 382 384 383 - if (off + count > at24->chip.byte_len) 385 + if (off + count > at24->byte_len) 384 386 return -EINVAL; 385 387 386 388 ret = pm_runtime_get_sync(dev); ··· 399 395 mutex_lock(&at24->lock); 400 396 401 397 while (count) { 402 - int status; 403 - 404 - status = at24_regmap_read(at24, buf, off, count); 405 - if (status < 0) { 398 + ret = at24_regmap_read(at24, buf, off, count); 399 + if (ret < 0) { 406 400 mutex_unlock(&at24->lock); 407 401 pm_runtime_put(dev); 408 - return status; 402 + return ret; 409 403 } 410 - buf += status; 411 - off += status; 412 - count -= status; 404 + buf += ret; 405 + off += ret; 406 + count -= ret; 413 407 } 414 408 415 409 mutex_unlock(&at24->lock); ··· 419 417 420 418 static int at24_write(void *priv, unsigned int off, void *val, size_t count) 421 419 { 422 - struct at24_data *at24 = priv; 423 - struct device *dev = &at24->client[0].client->dev; 420 + struct at24_data *at24; 421 + 
struct device *dev; 424 422 char *buf = val; 425 423 int ret; 424 + 425 + at24 = priv; 426 + dev = at24_base_client_dev(at24); 426 427 427 428 if (unlikely(!count)) 428 429 return -EINVAL; 429 430 430 - if (off + count > at24->chip.byte_len) 431 + if (off + count > at24->byte_len) 431 432 return -EINVAL; 432 433 433 434 ret = pm_runtime_get_sync(dev); ··· 447 442 gpiod_set_value_cansleep(at24->wp_gpio, 0); 448 443 449 444 while (count) { 450 - int status; 451 - 452 - status = at24_regmap_write(at24, buf, off, count); 453 - if (status < 0) { 445 + ret = at24_regmap_write(at24, buf, off, count); 446 + if (ret < 0) { 454 447 gpiod_set_value_cansleep(at24->wp_gpio, 1); 455 448 mutex_unlock(&at24->lock); 456 449 pm_runtime_put(dev); 457 - return status; 450 + return ret; 458 451 } 459 - buf += status; 460 - off += status; 461 - count -= status; 452 + buf += ret; 453 + off += ret; 454 + count -= ret; 462 455 } 463 456 464 457 gpiod_set_value_cansleep(at24->wp_gpio, 1); ··· 467 464 return 0; 468 465 } 469 466 470 - static void at24_get_pdata(struct device *dev, struct at24_platform_data *chip) 467 + static void at24_properties_to_pdata(struct device *dev, 468 + struct at24_platform_data *chip) 471 469 { 472 470 int err; 473 471 u32 val; ··· 495 491 } 496 492 } 497 493 494 + static int at24_get_pdata(struct device *dev, struct at24_platform_data *pdata) 495 + { 496 + struct device_node *of_node = dev->of_node; 497 + const struct at24_chip_data *cdata; 498 + const struct i2c_device_id *id; 499 + struct at24_platform_data *pd; 500 + 501 + pd = dev_get_platdata(dev); 502 + if (pd) { 503 + memcpy(pdata, pd, sizeof(*pdata)); 504 + return 0; 505 + } 506 + 507 + id = i2c_match_id(at24_ids, to_i2c_client(dev)); 508 + 509 + /* 510 + * The I2C core allows OF nodes compatibles to match against the 511 + * I2C device ID table as a fallback, so check not only if an OF 512 + * node is present but also if it matches an OF device ID entry. 
513 + */ 514 + if (of_node && of_match_device(at24_of_match, dev)) 515 + cdata = of_device_get_match_data(dev); 516 + else if (id) 517 + cdata = (void *)&id->driver_data; 518 + else 519 + cdata = acpi_device_get_match_data(dev); 520 + 521 + if (!cdata) 522 + return -ENODEV; 523 + 524 + pdata->byte_len = cdata->byte_len; 525 + pdata->flags = cdata->flags; 526 + at24_properties_to_pdata(dev, pdata); 527 + 528 + return 0; 529 + } 530 + 498 531 static unsigned int at24_get_offset_adj(u8 flags, unsigned int byte_len) 499 532 { 500 533 if (flags & AT24_FLAG_MAC) { ··· 555 514 } 556 515 } 557 516 558 - static int at24_probe(struct i2c_client *client, const struct i2c_device_id *id) 517 + static int at24_probe(struct i2c_client *client) 559 518 { 560 - struct at24_platform_data chip = { 0 }; 561 - const struct at24_chip_data *cd = NULL; 562 - bool writable; 563 - struct at24_data *at24; 564 - int err; 565 - unsigned int i, num_addresses; 566 519 struct regmap_config regmap_config = { }; 520 + struct nvmem_config nvmem_config = { }; 521 + struct at24_platform_data pdata = { }; 522 + struct device *dev = &client->dev; 523 + bool i2c_fn_i2c, i2c_fn_block; 524 + unsigned int i, num_addresses; 525 + struct at24_data *at24; 526 + struct regmap *regmap; 527 + size_t at24_size; 528 + bool writable; 567 529 u8 test_byte; 530 + int err; 568 531 569 - if (client->dev.platform_data) { 570 - chip = *(struct at24_platform_data *)client->dev.platform_data; 571 - } else { 572 - /* 573 - * The I2C core allows OF nodes compatibles to match against the 574 - * I2C device ID table as a fallback, so check not only if an OF 575 - * node is present but also if it matches an OF device ID entry. 
576 - */ 577 - if (client->dev.of_node && 578 - of_match_device(at24_of_match, &client->dev)) { 579 - cd = of_device_get_match_data(&client->dev); 580 - } else if (id) { 581 - cd = (void *)id->driver_data; 582 - } else { 583 - const struct acpi_device_id *aid; 532 + i2c_fn_i2c = i2c_check_functionality(client->adapter, I2C_FUNC_I2C); 533 + i2c_fn_block = i2c_check_functionality(client->adapter, 534 + I2C_FUNC_SMBUS_WRITE_I2C_BLOCK); 584 535 585 - aid = acpi_match_device(at24_acpi_ids, &client->dev); 586 - if (aid) 587 - cd = (void *)aid->driver_data; 588 - } 589 - if (!cd) 590 - return -ENODEV; 536 + err = at24_get_pdata(dev, &pdata); 537 + if (err) 538 + return err; 591 539 592 - chip.byte_len = cd->byte_len; 593 - chip.flags = cd->flags; 594 - at24_get_pdata(&client->dev, &chip); 595 - } 540 + if (!i2c_fn_i2c && !i2c_fn_block) 541 + pdata.page_size = 1; 596 542 597 - if (!is_power_of_2(chip.byte_len)) 598 - dev_warn(&client->dev, 599 - "byte_len looks suspicious (no power of 2)!\n"); 600 - if (!chip.page_size) { 601 - dev_err(&client->dev, "page_size must not be 0!\n"); 543 + if (!pdata.page_size) { 544 + dev_err(dev, "page_size must not be 0!\n"); 602 545 return -EINVAL; 603 546 } 604 - if (!is_power_of_2(chip.page_size)) 605 - dev_warn(&client->dev, 606 - "page_size looks suspicious (no power of 2)!\n"); 607 547 608 - if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C) && 609 - !i2c_check_functionality(client->adapter, 610 - I2C_FUNC_SMBUS_WRITE_I2C_BLOCK)) 611 - chip.page_size = 1; 548 + if (!is_power_of_2(pdata.page_size)) 549 + dev_warn(dev, "page_size looks suspicious (no power of 2)!\n"); 612 550 613 - if (chip.flags & AT24_FLAG_TAKE8ADDR) 551 + if (pdata.flags & AT24_FLAG_TAKE8ADDR) 614 552 num_addresses = 8; 615 553 else 616 - num_addresses = DIV_ROUND_UP(chip.byte_len, 617 - (chip.flags & AT24_FLAG_ADDR16) ? 65536 : 256); 554 + num_addresses = DIV_ROUND_UP(pdata.byte_len, 555 + (pdata.flags & AT24_FLAG_ADDR16) ? 
65536 : 256); 618 556 619 - regmap_config.val_bits = 8; 620 - regmap_config.reg_bits = (chip.flags & AT24_FLAG_ADDR16) ? 16 : 8; 621 - 622 - at24 = devm_kzalloc(&client->dev, sizeof(struct at24_data) + 623 - num_addresses * sizeof(struct at24_client), GFP_KERNEL); 624 - if (!at24) 625 - return -ENOMEM; 626 - 627 - mutex_init(&at24->lock); 628 - at24->chip = chip; 629 - at24->num_addresses = num_addresses; 630 - at24->offset_adj = at24_get_offset_adj(chip.flags, chip.byte_len); 631 - 632 - at24->wp_gpio = devm_gpiod_get_optional(&client->dev, 633 - "wp", GPIOD_OUT_HIGH); 634 - if (IS_ERR(at24->wp_gpio)) 635 - return PTR_ERR(at24->wp_gpio); 636 - 637 - at24->client[0].client = client; 638 - at24->client[0].regmap = devm_regmap_init_i2c(client, &regmap_config); 639 - if (IS_ERR(at24->client[0].regmap)) 640 - return PTR_ERR(at24->client[0].regmap); 641 - 642 - if ((chip.flags & AT24_FLAG_SERIAL) && (chip.flags & AT24_FLAG_MAC)) { 643 - dev_err(&client->dev, 557 + if ((pdata.flags & AT24_FLAG_SERIAL) && (pdata.flags & AT24_FLAG_MAC)) { 558 + dev_err(dev, 644 559 "invalid device data - cannot have both AT24_FLAG_SERIAL & AT24_FLAG_MAC."); 645 560 return -EINVAL; 646 561 } 647 562 648 - writable = !(chip.flags & AT24_FLAG_READONLY); 563 + regmap_config.val_bits = 8; 564 + regmap_config.reg_bits = (pdata.flags & AT24_FLAG_ADDR16) ? 
16 : 8; 565 + regmap_config.disable_locking = true; 566 + 567 + regmap = devm_regmap_init_i2c(client, &regmap_config); 568 + if (IS_ERR(regmap)) 569 + return PTR_ERR(regmap); 570 + 571 + at24_size = sizeof(*at24) + num_addresses * sizeof(struct at24_client); 572 + at24 = devm_kzalloc(dev, at24_size, GFP_KERNEL); 573 + if (!at24) 574 + return -ENOMEM; 575 + 576 + mutex_init(&at24->lock); 577 + at24->byte_len = pdata.byte_len; 578 + at24->page_size = pdata.page_size; 579 + at24->flags = pdata.flags; 580 + at24->num_addresses = num_addresses; 581 + at24->offset_adj = at24_get_offset_adj(pdata.flags, pdata.byte_len); 582 + at24->client[0].client = client; 583 + at24->client[0].regmap = regmap; 584 + 585 + at24->wp_gpio = devm_gpiod_get_optional(dev, "wp", GPIOD_OUT_HIGH); 586 + if (IS_ERR(at24->wp_gpio)) 587 + return PTR_ERR(at24->wp_gpio); 588 + 589 + writable = !(pdata.flags & AT24_FLAG_READONLY); 649 590 if (writable) { 650 591 at24->write_max = min_t(unsigned int, 651 - chip.page_size, at24_io_limit); 652 - if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C) && 653 - at24->write_max > I2C_SMBUS_BLOCK_MAX) 592 + pdata.page_size, at24_io_limit); 593 + if (!i2c_fn_i2c && at24->write_max > I2C_SMBUS_BLOCK_MAX) 654 594 at24->write_max = I2C_SMBUS_BLOCK_MAX; 655 595 } 656 596 ··· 640 618 at24->client[i].client = i2c_new_dummy(client->adapter, 641 619 client->addr + i); 642 620 if (!at24->client[i].client) { 643 - dev_err(&client->dev, "address 0x%02x unavailable\n", 644 - client->addr + i); 621 + dev_err(dev, "address 0x%02x unavailable\n", 622 + client->addr + i); 645 623 err = -EADDRINUSE; 646 624 goto err_clients; 647 625 } ··· 657 635 i2c_set_clientdata(client, at24); 658 636 659 637 /* enable runtime pm */ 660 - pm_runtime_set_active(&client->dev); 661 - pm_runtime_enable(&client->dev); 638 + pm_runtime_set_active(dev); 639 + pm_runtime_enable(dev); 662 640 663 641 /* 664 642 * Perform a one-byte test read to verify that the 665 643 * chip is functional. 
666 644 */ 667 645 err = at24_read(at24, 0, &test_byte, 1); 668 - pm_runtime_idle(&client->dev); 646 + pm_runtime_idle(dev); 669 647 if (err) { 670 648 err = -ENODEV; 671 649 goto err_clients; 672 650 } 673 651 674 - at24->nvmem_config.name = dev_name(&client->dev); 675 - at24->nvmem_config.dev = &client->dev; 676 - at24->nvmem_config.read_only = !writable; 677 - at24->nvmem_config.root_only = true; 678 - at24->nvmem_config.owner = THIS_MODULE; 679 - at24->nvmem_config.compat = true; 680 - at24->nvmem_config.base_dev = &client->dev; 681 - at24->nvmem_config.reg_read = at24_read; 682 - at24->nvmem_config.reg_write = at24_write; 683 - at24->nvmem_config.priv = at24; 684 - at24->nvmem_config.stride = 1; 685 - at24->nvmem_config.word_size = 1; 686 - at24->nvmem_config.size = chip.byte_len; 652 + nvmem_config.name = dev_name(dev); 653 + nvmem_config.dev = dev; 654 + nvmem_config.read_only = !writable; 655 + nvmem_config.root_only = true; 656 + nvmem_config.owner = THIS_MODULE; 657 + nvmem_config.compat = true; 658 + nvmem_config.base_dev = dev; 659 + nvmem_config.reg_read = at24_read; 660 + nvmem_config.reg_write = at24_write; 661 + nvmem_config.priv = at24; 662 + nvmem_config.stride = 1; 663 + nvmem_config.word_size = 1; 664 + nvmem_config.size = pdata.byte_len; 687 665 688 - at24->nvmem = nvmem_register(&at24->nvmem_config); 689 - 666 + at24->nvmem = nvmem_register(&nvmem_config); 690 667 if (IS_ERR(at24->nvmem)) { 691 668 err = PTR_ERR(at24->nvmem); 692 669 goto err_clients; 693 670 } 694 671 695 - dev_info(&client->dev, "%u byte %s EEPROM, %s, %u bytes/write\n", 696 - chip.byte_len, client->name, 697 - writable ? "writable" : "read-only", at24->write_max); 672 + dev_info(dev, "%u byte %s EEPROM, %s, %u bytes/write\n", 673 + pdata.byte_len, client->name, 674 + writable ? 
"writable" : "read-only", at24->write_max); 698 675 699 676 /* export data to kernel code */ 700 - if (chip.setup) 701 - chip.setup(at24->nvmem, chip.context); 677 + if (pdata.setup) 678 + pdata.setup(at24->nvmem, pdata.context); 702 679 703 680 return 0; 704 681 ··· 706 685 if (at24->client[i].client) 707 686 i2c_unregister_device(at24->client[i].client); 708 687 709 - pm_runtime_disable(&client->dev); 688 + pm_runtime_disable(dev); 710 689 711 690 return err; 712 691 } ··· 729 708 return 0; 730 709 } 731 710 732 - /*-------------------------------------------------------------------------*/ 733 - 734 711 static struct i2c_driver at24_driver = { 735 712 .driver = { 736 713 .name = "at24", 737 714 .of_match_table = at24_of_match, 738 715 .acpi_match_table = ACPI_PTR(at24_acpi_ids), 739 716 }, 740 - .probe = at24_probe, 717 + .probe_new = at24_probe, 741 718 .remove = at24_remove, 742 719 .id_table = at24_ids, 743 720 };
+1 -1
drivers/misc/eeprom/at25.c
··· 102 102 } 103 103 104 104 spi_message_init(&m); 105 - memset(t, 0, sizeof t); 105 + memset(t, 0, sizeof(t)); 106 106 107 107 t[0].tx_buf = command; 108 108 t[0].len = at25->addrlen + 1;
drivers/misc/lkdtm.h drivers/misc/lkdtm/lkdtm.h
+20
drivers/misc/lkdtm/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + obj-$(CONFIG_LKDTM) += lkdtm.o 3 + 4 + lkdtm-$(CONFIG_LKDTM) += core.o 5 + lkdtm-$(CONFIG_LKDTM) += bugs.o 6 + lkdtm-$(CONFIG_LKDTM) += heap.o 7 + lkdtm-$(CONFIG_LKDTM) += perms.o 8 + lkdtm-$(CONFIG_LKDTM) += refcount.o 9 + lkdtm-$(CONFIG_LKDTM) += rodata_objcopy.o 10 + lkdtm-$(CONFIG_LKDTM) += usercopy.o 11 + 12 + KCOV_INSTRUMENT_rodata.o := n 13 + 14 + OBJCOPYFLAGS := 15 + OBJCOPYFLAGS_rodata_objcopy.o := \ 16 + --set-section-flags .text=alloc,readonly \ 17 + --rename-section .text=.rodata 18 + targets += rodata.o rodata_objcopy.o 19 + $(obj)/rodata_objcopy.o: $(obj)/rodata.o FORCE 20 + $(call if_changed,objcopy)
drivers/misc/lkdtm_bugs.c drivers/misc/lkdtm/bugs.c
drivers/misc/lkdtm_core.c drivers/misc/lkdtm/core.c
drivers/misc/lkdtm_heap.c drivers/misc/lkdtm/heap.c
drivers/misc/lkdtm_perms.c drivers/misc/lkdtm/perms.c
+1
drivers/misc/lkdtm_refcount.c drivers/misc/lkdtm/refcount.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * This is for all the tests related to refcount bugs (e.g. overflow, 3 4 * underflow, reaching zero untested, etc).
drivers/misc/lkdtm_rodata.c drivers/misc/lkdtm/rodata.c
drivers/misc/lkdtm_usercopy.c drivers/misc/lkdtm/usercopy.c
+51 -32
drivers/misc/mei/bus.c
··· 74 74 goto out; 75 75 } 76 76 77 + while (cl->tx_cb_queued >= bus->tx_queue_limit) { 78 + mutex_unlock(&bus->device_lock); 79 + rets = wait_event_interruptible(cl->tx_wait, 80 + cl->writing_state == MEI_WRITE_COMPLETE || 81 + (!mei_cl_is_connected(cl))); 82 + mutex_lock(&bus->device_lock); 83 + if (rets) { 84 + if (signal_pending(current)) 85 + rets = -EINTR; 86 + goto out; 87 + } 88 + if (!mei_cl_is_connected(cl)) { 89 + rets = -ENODEV; 90 + goto out; 91 + } 92 + } 93 + 77 94 cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, NULL); 78 95 if (!cb) { 79 96 rets = -ENOMEM; ··· 467 450 EXPORT_SYMBOL_GPL(mei_cldev_enabled); 468 451 469 452 /** 453 + * mei_cl_bus_module_get - acquire module of the underlying 454 + * hw driver. 455 + * 456 + * @cldev: mei client device 457 + * 458 + * Return: true on success; false if the module was removed. 459 + */ 460 + static bool mei_cl_bus_module_get(struct mei_cl_device *cldev) 461 + { 462 + return try_module_get(cldev->bus->dev->driver->owner); 463 + } 464 + 465 + /** 466 + * mei_cl_bus_module_put - release the underlying hw module. 
467 + * 468 + * @cldev: mei client device 469 + */ 470 + static void mei_cl_bus_module_put(struct mei_cl_device *cldev) 471 + { 472 + module_put(cldev->bus->dev->driver->owner); 473 + } 474 + 475 + /** 470 476 * mei_cldev_enable - enable me client device 471 477 * create connection with me client 472 478 * ··· 527 487 goto out; 528 488 } 529 489 490 + if (!mei_cl_bus_module_get(cldev)) { 491 + dev_err(&cldev->dev, "get hw module failed"); 492 + ret = -ENODEV; 493 + goto out; 494 + } 495 + 530 496 ret = mei_cl_connect(cl, cldev->me_cl, NULL); 531 - if (ret < 0) 497 + if (ret < 0) { 532 498 dev_err(&cldev->dev, "cannot connect\n"); 499 + mei_cl_bus_module_put(cldev); 500 + } 533 501 534 502 out: 535 503 mutex_unlock(&bus->device_lock); ··· 601 553 dev_err(bus->dev, "Could not disconnect from the ME client\n"); 602 554 603 555 out: 556 + mei_cl_bus_module_put(cldev); 557 + 604 558 /* Flush queues and remove any pending read */ 605 559 mei_cl_flush_queues(cl, NULL); 606 560 mei_cl_unlink(cl); ··· 611 561 return err; 612 562 } 613 563 EXPORT_SYMBOL_GPL(mei_cldev_disable); 614 - 615 - /** 616 - * mei_cl_bus_module_get - acquire module of the underlying 617 - * hw module. 618 - * 619 - * @cl: host client 620 - * 621 - * Return: true on success; false if the module was removed. 622 - */ 623 - bool mei_cl_bus_module_get(struct mei_cl *cl) 624 - { 625 - struct mei_cl_device *cldev = cl->cldev; 626 - 627 - if (!cldev) 628 - return true; 629 - 630 - return try_module_get(cldev->bus->dev->driver->owner); 631 - } 632 - 633 - /** 634 - * mei_cl_bus_module_put - release the underlying hw module. 635 - * 636 - * @cl: host client 637 - */ 638 - void mei_cl_bus_module_put(struct mei_cl *cl) 639 - { 640 - struct mei_cl_device *cldev = cl->cldev; 641 - 642 - if (cldev) 643 - module_put(cldev->bus->dev->driver->owner); 644 - } 645 564 646 565 /** 647 566 * mei_cl_device_find - find matching entry in the driver id table
+50 -37
drivers/misc/mei/client.c
··· 350 350 } 351 351 352 352 /** 353 + * mei_tx_cb_queue - queue tx callback 354 + * 355 + * Locking: called under "dev->device_lock" lock 356 + * 357 + * @cb: mei callback struct 358 + * @head: an instance of list to queue on 359 + */ 360 + static inline void mei_tx_cb_enqueue(struct mei_cl_cb *cb, 361 + struct list_head *head) 362 + { 363 + list_add_tail(&cb->list, head); 364 + cb->cl->tx_cb_queued++; 365 + } 366 + 367 + /** 368 + * mei_tx_cb_dequeue - dequeue tx callback 369 + * 370 + * Locking: called under "dev->device_lock" lock 371 + * 372 + * @cb: mei callback struct to dequeue and free 373 + */ 374 + static inline void mei_tx_cb_dequeue(struct mei_cl_cb *cb) 375 + { 376 + if (!WARN_ON(cb->cl->tx_cb_queued == 0)) 377 + cb->cl->tx_cb_queued--; 378 + 379 + mei_io_cb_free(cb); 380 + } 381 + 382 + /** 353 383 * mei_io_cb_init - allocate and initialize io callback 354 384 * 355 385 * @cl: mei client ··· 407 377 } 408 378 409 379 /** 410 - * __mei_io_list_flush_cl - removes and frees cbs belonging to cl. 380 + * mei_io_list_flush_cl - removes cbs belonging to the cl. 411 381 * 412 382 * @head: an instance of our list structure 413 - * @cl: host client, can be NULL for flushing the whole list 414 - * @free: whether to free the cbs 383 + * @cl: host client 415 384 */ 416 - static void __mei_io_list_flush_cl(struct list_head *head, 417 - const struct mei_cl *cl, bool free) 385 + static void mei_io_list_flush_cl(struct list_head *head, 386 + const struct mei_cl *cl) 418 387 { 419 388 struct mei_cl_cb *cb, *next; 420 389 421 - /* enable removing everything if no cl is specified */ 422 390 list_for_each_entry_safe(cb, next, head, list) { 423 - if (!cl || mei_cl_cmp_id(cl, cb->cl)) { 391 + if (mei_cl_cmp_id(cl, cb->cl)) 424 392 list_del_init(&cb->list); 425 - if (free) 426 - mei_io_cb_free(cb); 427 - } 428 393 } 429 394 } 430 395 431 396 /** 432 - * mei_io_list_flush_cl - removes list entry belonging to cl. 
397 + * mei_io_tx_list_free_cl - removes cb belonging to the cl and free them 433 398 * 434 399 * @head: An instance of our list structure 435 400 * @cl: host client 436 401 */ 437 - static inline void mei_io_list_flush_cl(struct list_head *head, 438 - const struct mei_cl *cl) 402 + static void mei_io_tx_list_free_cl(struct list_head *head, 403 + const struct mei_cl *cl) 439 404 { 440 - __mei_io_list_flush_cl(head, cl, false); 441 - } 405 + struct mei_cl_cb *cb, *next; 442 406 443 - /** 444 - * mei_io_list_free_cl - removes cb belonging to cl and free them 445 - * 446 - * @head: An instance of our list structure 447 - * @cl: host client 448 - */ 449 - static inline void mei_io_list_free_cl(struct list_head *head, 450 - const struct mei_cl *cl) 451 - { 452 - __mei_io_list_flush_cl(head, cl, true); 407 + list_for_each_entry_safe(cb, next, head, list) { 408 + if (mei_cl_cmp_id(cl, cb->cl)) 409 + mei_tx_cb_dequeue(cb); 410 + } 453 411 } 454 412 455 413 /** ··· 556 538 dev = cl->dev; 557 539 558 540 cl_dbg(dev, cl, "remove list entry belonging to cl\n"); 559 - mei_io_list_free_cl(&cl->dev->write_list, cl); 560 - mei_io_list_free_cl(&cl->dev->write_waiting_list, cl); 541 + mei_io_tx_list_free_cl(&cl->dev->write_list, cl); 542 + mei_io_tx_list_free_cl(&cl->dev->write_waiting_list, cl); 561 543 mei_io_list_flush_cl(&cl->dev->ctrl_wr_list, cl); 562 544 mei_io_list_flush_cl(&cl->dev->ctrl_rd_list, cl); 563 545 mei_io_list_free_fp(&cl->rd_pending, fp); ··· 774 756 return; 775 757 776 758 cl->state = MEI_FILE_DISCONNECTED; 777 - mei_io_list_free_cl(&dev->write_list, cl); 778 - mei_io_list_free_cl(&dev->write_waiting_list, cl); 759 + mei_io_tx_list_free_cl(&dev->write_list, cl); 760 + mei_io_tx_list_free_cl(&dev->write_waiting_list, cl); 779 761 mei_io_list_flush_cl(&dev->ctrl_rd_list, cl); 780 762 mei_io_list_flush_cl(&dev->ctrl_wr_list, cl); 781 763 mei_cl_wake_all(cl); 782 764 cl->rx_flow_ctrl_creds = 0; 783 765 cl->tx_flow_ctrl_creds = 0; 784 766 cl->timer_count = 0; 785 - 
786 - mei_cl_bus_module_put(cl); 787 767 788 768 if (!cl->me_cl) 789 769 return; ··· 1091 1075 return -ENODEV; 1092 1076 1093 1077 dev = cl->dev; 1094 - 1095 - if (!mei_cl_bus_module_get(cl)) 1096 - return -ENODEV; 1097 1078 1098 1079 rets = mei_cl_set_connecting(cl, me_cl); 1099 1080 if (rets) ··· 1711 1698 1712 1699 out: 1713 1700 if (mei_hdr.msg_complete) 1714 - list_add_tail(&cb->list, &dev->write_waiting_list); 1701 + mei_tx_cb_enqueue(cb, &dev->write_waiting_list); 1715 1702 else 1716 - list_add_tail(&cb->list, &dev->write_list); 1703 + mei_tx_cb_enqueue(cb, &dev->write_list); 1717 1704 1718 1705 cb = NULL; 1719 1706 if (blocking && cl->writing_state != MEI_WRITE_COMPLETE) { ··· 1759 1746 1760 1747 switch (cb->fop_type) { 1761 1748 case MEI_FOP_WRITE: 1762 - mei_io_cb_free(cb); 1749 + mei_tx_cb_dequeue(cb); 1763 1750 cl->writing_state = MEI_WRITE_COMPLETE; 1764 1751 if (waitqueue_active(&cl->tx_wait)) { 1765 1752 wake_up_interruptible(&cl->tx_wait);
+4 -3
drivers/misc/mei/debugfs.c
··· 97 97 int pos = 0; 98 98 int ret; 99 99 100 - #define HDR " |me|host|state|rd|wr|\n" 100 + #define HDR " |me|host|state|rd|wr|wrq\n" 101 101 102 102 if (!dev) 103 103 return -ENODEV; ··· 130 130 list_for_each_entry(cl, &dev->file_list, link) { 131 131 132 132 pos += scnprintf(buf + pos, bufsz - pos, 133 - "%3d|%2d|%4d|%5d|%2d|%2d|\n", 133 + "%3d|%2d|%4d|%5d|%2d|%2d|%3u\n", 134 134 i, mei_cl_me_id(cl), cl->host_client_id, cl->state, 135 - !list_empty(&cl->rd_completed), cl->writing_state); 135 + !list_empty(&cl->rd_completed), cl->writing_state, 136 + cl->tx_cb_queued); 136 137 i++; 137 138 } 138 139 out:
+1
drivers/misc/mei/init.c
··· 383 383 INIT_LIST_HEAD(&dev->write_waiting_list); 384 384 INIT_LIST_HEAD(&dev->ctrl_wr_list); 385 385 INIT_LIST_HEAD(&dev->ctrl_rd_list); 386 + dev->tx_queue_limit = MEI_TX_QUEUE_LIMIT_DEFAULT; 386 387 387 388 INIT_DELAYED_WORK(&dev->timer_work, mei_timer); 388 389 INIT_WORK(&dev->reset_work, mei_reset_work);
+65 -1
drivers/misc/mei/main.c
··· 291 291 goto out; 292 292 } 293 293 294 + while (cl->tx_cb_queued >= dev->tx_queue_limit) { 295 + if (file->f_flags & O_NONBLOCK) { 296 + rets = -EAGAIN; 297 + goto out; 298 + } 299 + mutex_unlock(&dev->device_lock); 300 + rets = wait_event_interruptible(cl->tx_wait, 301 + cl->writing_state == MEI_WRITE_COMPLETE || 302 + (!mei_cl_is_connected(cl))); 303 + mutex_lock(&dev->device_lock); 304 + if (rets) { 305 + if (signal_pending(current)) 306 + rets = -EINTR; 307 + goto out; 308 + } 309 + if (!mei_cl_is_connected(cl)) { 310 + rets = -ENODEV; 311 + goto out; 312 + } 313 + } 314 + 294 315 *offset = 0; 295 316 cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, file); 296 317 if (!cb) { ··· 528 507 break; 529 508 530 509 default: 531 - dev_err(dev->dev, ": unsupported ioctl %d.\n", cmd); 532 510 rets = -ENOIOCTLCMD; 533 511 } 534 512 ··· 598 578 mask |= EPOLLIN | EPOLLRDNORM; 599 579 else 600 580 mei_cl_read_start(cl, mei_cl_mtu(cl), file); 581 + } 582 + 583 + if (req_events & (POLLOUT | POLLWRNORM)) { 584 + poll_wait(file, &cl->tx_wait, wait); 585 + if (cl->tx_cb_queued < dev->tx_queue_limit) 586 + mask |= POLLOUT | POLLWRNORM; 601 587 } 602 588 603 589 out: ··· 775 749 } 776 750 static DEVICE_ATTR_RO(hbm_ver_drv); 777 751 752 + static ssize_t tx_queue_limit_show(struct device *device, 753 + struct device_attribute *attr, char *buf) 754 + { 755 + struct mei_device *dev = dev_get_drvdata(device); 756 + u8 size = 0; 757 + 758 + mutex_lock(&dev->device_lock); 759 + size = dev->tx_queue_limit; 760 + mutex_unlock(&dev->device_lock); 761 + 762 + return snprintf(buf, PAGE_SIZE, "%u\n", size); 763 + } 764 + 765 + static ssize_t tx_queue_limit_store(struct device *device, 766 + struct device_attribute *attr, 767 + const char *buf, size_t count) 768 + { 769 + struct mei_device *dev = dev_get_drvdata(device); 770 + u8 limit; 771 + unsigned int inp; 772 + int err; 773 + 774 + err = kstrtouint(buf, 10, &inp); 775 + if (err) 776 + return err; 777 + if (inp > MEI_TX_QUEUE_LIMIT_MAX 
|| inp < MEI_TX_QUEUE_LIMIT_MIN) 778 + return -EINVAL; 779 + limit = inp; 780 + 781 + mutex_lock(&dev->device_lock); 782 + dev->tx_queue_limit = limit; 783 + mutex_unlock(&dev->device_lock); 784 + 785 + return count; 786 + } 787 + static DEVICE_ATTR_RW(tx_queue_limit); 788 + 778 789 static struct attribute *mei_attrs[] = { 779 790 &dev_attr_fw_status.attr, 780 791 &dev_attr_hbm_ver.attr, 781 792 &dev_attr_hbm_ver_drv.attr, 793 + &dev_attr_tx_queue_limit.attr, 782 794 NULL 783 795 }; 784 796 ATTRIBUTE_GROUPS(mei);
+8 -2
drivers/misc/mei/mei_dev.h
··· 210 210 * @timer_count: watchdog timer for operation completion 211 211 * @notify_en: notification - enabled/disabled 212 212 * @notify_ev: pending notification event 213 + * @tx_cb_queued: number of tx callbacks in queue 213 214 * @writing_state: state of the tx 214 215 * @rd_pending: pending read credits 215 216 * @rd_completed: completed read ··· 235 234 u8 timer_count; 236 235 u8 notify_en; 237 236 u8 notify_ev; 237 + u8 tx_cb_queued; 238 238 enum mei_file_transaction_states writing_state; 239 239 struct list_head rd_pending; 240 240 struct list_head rd_completed; 241 241 242 242 struct mei_cl_device *cldev; 243 243 }; 244 + 245 + #define MEI_TX_QUEUE_LIMIT_DEFAULT 50 246 + #define MEI_TX_QUEUE_LIMIT_MAX 255 247 + #define MEI_TX_QUEUE_LIMIT_MIN 30 244 248 245 249 /** 246 250 * struct mei_hw_ops - hw specific ops ··· 321 315 bool mei_cl_bus_rx_event(struct mei_cl *cl); 322 316 bool mei_cl_bus_notify_event(struct mei_cl *cl); 323 317 void mei_cl_bus_remove_devices(struct mei_device *bus); 324 - bool mei_cl_bus_module_get(struct mei_cl *cl); 325 - void mei_cl_bus_module_put(struct mei_cl *cl); 326 318 int mei_cl_bus_init(void); 327 319 void mei_cl_bus_exit(void); ··· 365 361 * @write_waiting_list : write completion list 366 362 * @ctrl_wr_list : pending control write list 367 363 * @ctrl_rd_list : pending control read list 364 + * @tx_queue_limit: tx queues per client limit 368 365 * 369 366 * @file_list : list of opened handles 370 367 * @open_handle_count: number of opened handles ··· 430 425 struct list_head write_waiting_list; 431 426 struct list_head ctrl_wr_list; 432 427 struct list_head ctrl_rd_list; 428 + u8 tx_queue_limit; 433 429 434 430 struct list_head file_list; 435 431 long open_handle_count;
+4 -2
drivers/misc/mic/bus/vop_bus.c
··· 135 135 136 136 static void vop_release_dev(struct device *d) 137 137 { 138 - put_device(d); 138 + struct vop_device *dev = dev_to_vop(d); 139 + 140 + kfree(dev); 139 141 } 140 142 141 143 struct vop_device * ··· 176 174 goto free_vdev; 177 175 return vdev; 178 176 free_vdev: 179 - kfree(vdev); 177 + put_device(&vdev->dev); 180 178 return ERR_PTR(ret); 181 179 } 182 180 EXPORT_SYMBOL_GPL(vop_register_device);
+1 -1
drivers/misc/ocxl/pci.c
··· 519 519 rc = device_register(&fn->dev); 520 520 if (rc) { 521 521 deconfigure_function(fn); 522 - device_unregister(&fn->dev); 522 + put_device(&fn->dev); 523 523 return ERR_PTR(rc); 524 524 } 525 525 return fn;
+2 -2
drivers/nvmem/Kconfig
··· 167 167 168 168 config NVMEM_SNVS_LPGPR 169 169 tristate "Support for Low Power General Purpose Register" 170 - depends on SOC_IMX6 || COMPILE_TEST 170 + depends on SOC_IMX6 || SOC_IMX7D || COMPILE_TEST 171 171 help 172 172 This is a driver for Low Power General Purpose Register (LPGPR) available on 173 - i.MX6 SoCs in Secure Non-Volatile Storage (SNVS) of this chip. 173 + i.MX6 and i.MX7 SoCs in Secure Non-Volatile Storage (SNVS) of this chip. 174 174 175 175 This driver can also be built as a module. If so, the module 176 176 will be called nvmem-snvs-lpgpr.
+2 -13
drivers/nvmem/bcm-ocotp.c
··· 262 262 else if (of_device_is_compatible(dev->of_node, "brcm,ocotp-v2")) 263 263 priv->map = &otp_map_v2; 264 264 else { 265 - dev_err(&pdev->dev, 266 - "%s otpc config map not defined\n", __func__); 265 + dev_err(dev, "%s otpc config map not defined\n", __func__); 267 266 return -EINVAL; 268 267 } 269 268 ··· 301 302 302 303 priv->config = &bcm_otpc_nvmem_config; 303 304 304 - nvmem = nvmem_register(&bcm_otpc_nvmem_config); 305 + nvmem = devm_nvmem_register(dev, &bcm_otpc_nvmem_config); 305 306 if (IS_ERR(nvmem)) { 306 307 dev_err(dev, "error registering nvmem config\n"); 307 308 return PTR_ERR(nvmem); 308 309 } 309 310 310 - platform_set_drvdata(pdev, nvmem); 311 - 312 311 return 0; 313 - } 314 - 315 - static int bcm_otpc_remove(struct platform_device *pdev) 316 - { 317 - struct nvmem_device *nvmem = platform_get_drvdata(pdev); 318 - 319 - return nvmem_unregister(nvmem); 320 312 } 321 313 322 314 static struct platform_driver bcm_otpc_driver = { 323 315 .probe = bcm_otpc_probe, 324 - .remove = bcm_otpc_remove, 325 316 .driver = { 326 317 .name = "brcm-otpc", 327 318 .of_match_table = bcm_otpc_dt_ids,
+67 -3
drivers/nvmem/core.c
··· 473 473 nvmem->reg_read = config->reg_read; 474 474 nvmem->reg_write = config->reg_write; 475 475 nvmem->dev.of_node = config->dev->of_node; 476 - dev_set_name(&nvmem->dev, "%s%d", 477 - config->name ? : "nvmem", 478 - config->name ? config->id : nvmem->id); 476 + 477 + if (config->id == -1 && config->name) { 478 + dev_set_name(&nvmem->dev, "%s", config->name); 479 + } else { 480 + dev_set_name(&nvmem->dev, "%s%d", 481 + config->name ? : "nvmem", 482 + config->name ? config->id : nvmem->id); 483 + } 479 484 480 485 nvmem->read_only = device_property_present(config->dev, "read-only") | 481 486 config->read_only; ··· 548 543 return 0; 549 544 } 550 545 EXPORT_SYMBOL_GPL(nvmem_unregister); 546 + 547 + static void devm_nvmem_release(struct device *dev, void *res) 548 + { 549 + WARN_ON(nvmem_unregister(*(struct nvmem_device **)res)); 550 + } 551 + 552 + /** 553 + * devm_nvmem_register() - Register a managed nvmem device for given 554 + * nvmem_config. 555 + * Also creates a binary entry in /sys/bus/nvmem/devices/dev-name/nvmem 556 + * 557 + * @config: nvmem device configuration with which nvmem device is created. 558 + * 559 + * Return: Will be an ERR_PTR() on error or a valid pointer to nvmem_device 560 + * on success. 
561 + */ 562 + struct nvmem_device *devm_nvmem_register(struct device *dev, 563 + const struct nvmem_config *config) 564 + { 565 + struct nvmem_device **ptr, *nvmem; 566 + 567 + ptr = devres_alloc(devm_nvmem_release, sizeof(*ptr), GFP_KERNEL); 568 + if (!ptr) 569 + return ERR_PTR(-ENOMEM); 570 + 571 + nvmem = nvmem_register(config); 572 + 573 + if (!IS_ERR(nvmem)) { 574 + *ptr = nvmem; 575 + devres_add(dev, ptr); 576 + } else { 577 + devres_free(ptr); 578 + } 579 + 580 + return nvmem; 581 + } 582 + EXPORT_SYMBOL_GPL(devm_nvmem_register); 583 + 584 + static int devm_nvmem_match(struct device *dev, void *res, void *data) 585 + { 586 + struct nvmem_device **r = res; 587 + 588 + return *r == data; 589 + } 590 + 591 + /** 592 + * devm_nvmem_unregister() - Unregister previously registered managed nvmem 593 + * device. 594 + * 595 + * @nvmem: Pointer to previously registered nvmem device. 596 + * 597 + * Return: Will be a negative on error or zero on success. 598 + */ 599 + int devm_nvmem_unregister(struct device *dev, struct nvmem_device *nvmem) 600 + { 601 + return devres_release(dev, devm_nvmem_release, devm_nvmem_match, nvmem); 602 + } 603 + EXPORT_SYMBOL(devm_nvmem_unregister); 604 + 551 605 552 606 static struct nvmem_device *__nvmem_device_get(struct device_node *np, 553 607 struct nvmem_cell **cellp,
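The devm_nvmem_register() helper added above follows the standard devres pattern: a small devres node records the registered nvmem device, and its release callback unregisters the device automatically when the owning device is torn down. The ownership idea can be sketched as a minimal userspace analogue (all names here are illustrative stand-ins, not kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-ins for a registered resource and its owner. */
struct nvmem_dev { int registered; };

struct devres {
	void (*release)(void *res);  /* runs when the owner is torn down */
	void *res;
	struct devres *next;
};

struct owner { struct devres *devres_list; };

static void devres_add_sketch(struct owner *owner, struct devres *dr)
{
	dr->next = owner->devres_list;
	owner->devres_list = dr;
}

static void nvmem_release_sketch(void *res)
{
	((struct nvmem_dev *)res)->registered = 0;  /* "unregister" */
}

/* Managed registration: the owner, not the caller, drives teardown. */
static struct nvmem_dev *devm_nvmem_register_sketch(struct owner *owner,
						    struct nvmem_dev *nv)
{
	struct devres *dr = malloc(sizeof(*dr));

	if (!dr)
		return NULL;
	nv->registered = 1;
	dr->release = nvmem_release_sketch;
	dr->res = nv;
	devres_add_sketch(owner, dr);
	return nv;
}

/* Owner teardown runs every recorded release callback, newest first. */
static void owner_teardown(struct owner *owner)
{
	struct devres *dr = owner->devres_list;

	while (dr) {
		struct devres *next = dr->next;

		dr->release(dr->res);
		free(dr);
		dr = next;
	}
	owner->devres_list = NULL;
}
```

The real helper hangs its node on struct device via devres_alloc()/devres_add(), exactly as the hunk above shows; drivers converted to it below can then drop their remove() callbacks entirely.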
+3 -15
drivers/nvmem/imx-iim.c
··· 125 125 126 126 drvdata = of_id->data; 127 127 128 - iim->clk = devm_clk_get(&pdev->dev, NULL); 128 + iim->clk = devm_clk_get(dev, NULL); 129 129 if (IS_ERR(iim->clk)) 130 130 return PTR_ERR(iim->clk); 131 131 ··· 138 138 cfg.size = drvdata->nregs; 139 139 cfg.priv = iim; 140 140 141 - nvmem = nvmem_register(&cfg); 142 - if (IS_ERR(nvmem)) 143 - return PTR_ERR(nvmem); 141 + nvmem = devm_nvmem_register(dev, &cfg); 144 142 145 - platform_set_drvdata(pdev, nvmem); 146 - 147 - return 0; 148 - } 149 - 150 - static int imx_iim_remove(struct platform_device *pdev) 151 - { 152 - struct nvmem_device *nvmem = platform_get_drvdata(pdev); 153 - 154 - return nvmem_unregister(nvmem); 143 + return PTR_ERR_OR_ZERO(nvmem); 155 144 } 156 145 157 146 static struct platform_driver imx_iim_driver = { 158 147 .probe = imx_iim_probe, 159 - .remove = imx_iim_remove, 160 148 .driver = { 161 149 .name = "imx-iim", 162 150 .of_match_table = imx_iim_dt_ids,
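Most of the driver conversions in this series end probe with `return PTR_ERR_OR_ZERO(nvmem);`. That works because the kernel encodes small negative errno values in the pointer value itself; a userspace sketch mirroring the definitions in include/linux/err.h:

```c
#include <stddef.h>

#define MAX_ERRNO 4095

/* Pointers in the top 4095 bytes of the address space encode errors. */
static inline void *ERR_PTR_sketch(long error)
{
	return (void *)error;
}

static inline long PTR_ERR_sketch(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR_sketch(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* 0 for a valid pointer, the encoded negative errno otherwise. */
static inline long PTR_ERR_OR_ZERO_sketch(const void *ptr)
{
	return IS_ERR_sketch(ptr) ? PTR_ERR_sketch(ptr) : 0;
}
```

This is why the converted probes can collapse the old `if (IS_ERR(nvmem)) return PTR_ERR(nvmem); ... return 0;` tail into a single statement.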
+2 -16
drivers/nvmem/imx-ocotp.c
··· 439 439 440 440 static int imx_ocotp_probe(struct platform_device *pdev) 441 441 { 442 - const struct of_device_id *of_id; 443 442 struct device *dev = &pdev->dev; 444 443 struct resource *res; 445 444 struct ocotp_priv *priv; ··· 459 460 if (IS_ERR(priv->clk)) 460 461 return PTR_ERR(priv->clk); 461 462 462 - of_id = of_match_device(imx_ocotp_dt_ids, dev); 463 463 priv->params = of_device_get_match_data(&pdev->dev); 464 464 imx_ocotp_nvmem_config.size = 4 * priv->params->nregs; 465 465 imx_ocotp_nvmem_config.dev = dev; 466 466 imx_ocotp_nvmem_config.priv = priv; 467 467 priv->config = &imx_ocotp_nvmem_config; 468 - nvmem = nvmem_register(&imx_ocotp_nvmem_config); 468 + nvmem = devm_nvmem_register(dev, &imx_ocotp_nvmem_config); 469 469 470 - if (IS_ERR(nvmem)) 471 - return PTR_ERR(nvmem); 472 470 473 - platform_set_drvdata(pdev, nvmem); 474 - 475 - return 0; 476 - } 477 - 478 - static int imx_ocotp_remove(struct platform_device *pdev) 479 - { 480 - struct nvmem_device *nvmem = platform_get_drvdata(pdev); 481 - 482 - return nvmem_unregister(nvmem); 471 + return PTR_ERR_OR_ZERO(nvmem); 483 472 } 484 473 485 474 static struct platform_driver imx_ocotp_driver = { 486 475 .probe = imx_ocotp_probe, 487 - .remove = imx_ocotp_remove, 488 476 .driver = { 489 477 .name = "imx_ocotp", 490 478 .of_match_table = imx_ocotp_dt_ids,
+2 -14
drivers/nvmem/lpc18xx_otp.c
··· 86 86 lpc18xx_otp_nvmem_config.dev = &pdev->dev; 87 87 lpc18xx_otp_nvmem_config.priv = otp; 88 88 89 - nvmem = nvmem_register(&lpc18xx_otp_nvmem_config); 90 - if (IS_ERR(nvmem)) 91 - return PTR_ERR(nvmem); 89 + nvmem = devm_nvmem_register(&pdev->dev, &lpc18xx_otp_nvmem_config); 92 90 93 - platform_set_drvdata(pdev, nvmem); 94 - 95 - return 0; 96 - } 97 - 98 - static int lpc18xx_otp_remove(struct platform_device *pdev) 99 - { 100 - struct nvmem_device *nvmem = platform_get_drvdata(pdev); 101 - 102 - return nvmem_unregister(nvmem); 91 + return PTR_ERR_OR_ZERO(nvmem); 103 92 } 104 93 105 94 static const struct of_device_id lpc18xx_otp_dt_ids[] = { ··· 99 110 100 111 static struct platform_driver lpc18xx_otp_driver = { 101 112 .probe = lpc18xx_otp_probe, 102 - .remove = lpc18xx_otp_remove, 103 113 .driver = { 104 114 .name = "lpc18xx_otp", 105 115 .of_match_table = lpc18xx_otp_dt_ids,
+2 -14
drivers/nvmem/meson-efuse.c
··· 60 60 econfig.reg_read = meson_efuse_read; 61 61 econfig.size = size; 62 62 63 - nvmem = nvmem_register(&econfig); 64 - if (IS_ERR(nvmem)) 65 - return PTR_ERR(nvmem); 63 + nvmem = devm_nvmem_register(&pdev->dev, &econfig); 66 64 67 - platform_set_drvdata(pdev, nvmem); 68 - 69 - return 0; 70 - } 71 - 72 - static int meson_efuse_remove(struct platform_device *pdev) 73 - { 74 - struct nvmem_device *nvmem = platform_get_drvdata(pdev); 75 - 76 - return nvmem_unregister(nvmem); 65 + return PTR_ERR_OR_ZERO(nvmem); 77 66 } 78 67 79 68 static struct platform_driver meson_efuse_driver = { 80 69 .probe = meson_efuse_probe, 81 - .remove = meson_efuse_remove, 82 70 .driver = { 83 71 .name = "meson-efuse", 84 72 .of_match_table = meson_efuse_match,
+2 -14
drivers/nvmem/meson-mx-efuse.c
··· 233 233 return PTR_ERR(efuse->core_clk); 234 234 } 235 235 236 - efuse->nvmem = nvmem_register(&efuse->config); 237 - if (IS_ERR(efuse->nvmem)) 238 - return PTR_ERR(efuse->nvmem); 236 + efuse->nvmem = devm_nvmem_register(&pdev->dev, &efuse->config); 239 237 240 - platform_set_drvdata(pdev, efuse); 241 - 242 - return 0; 243 - } 244 - 245 - static int meson_mx_efuse_remove(struct platform_device *pdev) 246 - { 247 - struct meson_mx_efuse *efuse = platform_get_drvdata(pdev); 248 - 249 - return nvmem_unregister(efuse->nvmem); 238 + return PTR_ERR_OR_ZERO(efuse->nvmem); 250 239 } 251 240 252 241 static struct platform_driver meson_mx_efuse_driver = { 253 242 .probe = meson_mx_efuse_probe, 254 - .remove = meson_mx_efuse_remove, 255 243 .driver = { 256 244 .name = "meson-mx-efuse", 257 245 .of_match_table = meson_mx_efuse_match,
+2 -14
drivers/nvmem/mtk-efuse.c
··· 72 72 econfig.size = resource_size(res); 73 73 econfig.priv = priv; 74 74 econfig.dev = dev; 75 - nvmem = nvmem_register(&econfig); 76 - if (IS_ERR(nvmem)) 77 - return PTR_ERR(nvmem); 75 + nvmem = devm_nvmem_register(dev, &econfig); 78 76 79 - platform_set_drvdata(pdev, nvmem); 80 - 81 - return 0; 82 - } 83 - 84 - static int mtk_efuse_remove(struct platform_device *pdev) 85 - { 86 - struct nvmem_device *nvmem = platform_get_drvdata(pdev); 87 - 88 - return nvmem_unregister(nvmem); 77 + return PTR_ERR_OR_ZERO(nvmem); 89 78 } 90 79 91 80 static const struct of_device_id mtk_efuse_of_match[] = { ··· 86 97 87 98 static struct platform_driver mtk_efuse_driver = { 88 99 .probe = mtk_efuse_probe, 89 - .remove = mtk_efuse_remove, 90 100 .driver = { 91 101 .name = "mediatek,efuse", 92 102 .of_match_table = mtk_efuse_of_match,
+2 -14
drivers/nvmem/qfprom.c
··· 47 47 return 0; 48 48 } 49 49 50 - static int qfprom_remove(struct platform_device *pdev) 51 - { 52 - struct nvmem_device *nvmem = platform_get_drvdata(pdev); 53 - 54 - return nvmem_unregister(nvmem); 55 - } 56 - 57 50 static struct nvmem_config econfig = { 58 51 .name = "qfprom", 59 52 .stride = 1, ··· 75 82 econfig.dev = dev; 76 83 econfig.priv = priv; 77 84 78 - nvmem = nvmem_register(&econfig); 79 - if (IS_ERR(nvmem)) 80 - return PTR_ERR(nvmem); 85 + nvmem = devm_nvmem_register(dev, &econfig); 81 86 82 - platform_set_drvdata(pdev, nvmem); 83 - 84 - return 0; 87 + return PTR_ERR_OR_ZERO(nvmem); 85 88 } 86 89 87 90 static const struct of_device_id qfprom_of_match[] = { ··· 88 99 89 100 static struct platform_driver qfprom_driver = { 90 101 .probe = qfprom_probe, 91 - .remove = qfprom_remove, 92 102 .driver = { 93 103 .name = "qcom,qfprom", 94 104 .of_match_table = qfprom_of_match,
+10 -22
drivers/nvmem/rockchip-efuse.c
··· 259 259 struct resource *res; 260 260 struct nvmem_device *nvmem; 261 261 struct rockchip_efuse_chip *efuse; 262 - const struct of_device_id *match; 262 + const void *data; 263 263 struct device *dev = &pdev->dev; 264 264 265 - match = of_match_device(dev->driver->of_match_table, dev); 266 - if (!match || !match->data) { 265 + data = of_device_get_match_data(dev); 266 + if (!data) { 267 267 dev_err(dev, "failed to get match data\n"); 268 268 return -EINVAL; 269 269 } 270 270 271 - efuse = devm_kzalloc(&pdev->dev, sizeof(struct rockchip_efuse_chip), 271 + efuse = devm_kzalloc(dev, sizeof(struct rockchip_efuse_chip), 272 272 GFP_KERNEL); 273 273 if (!efuse) 274 274 return -ENOMEM; 275 275 276 276 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 277 - efuse->base = devm_ioremap_resource(&pdev->dev, res); 277 + efuse->base = devm_ioremap_resource(dev, res); 278 278 if (IS_ERR(efuse->base)) 279 279 return PTR_ERR(efuse->base); 280 280 281 - efuse->clk = devm_clk_get(&pdev->dev, "pclk_efuse"); 281 + efuse->clk = devm_clk_get(dev, "pclk_efuse"); 282 282 if (IS_ERR(efuse->clk)) 283 283 return PTR_ERR(efuse->clk); 284 284 285 - efuse->dev = &pdev->dev; 285 + efuse->dev = dev; 286 286 if (of_property_read_u32(dev->of_node, "rockchip,efuse-size", 287 287 &econfig.size)) 288 288 econfig.size = resource_size(res); 289 - econfig.reg_read = match->data; 289 + econfig.reg_read = data; 290 290 econfig.priv = efuse; 291 291 econfig.dev = efuse->dev; 292 - nvmem = nvmem_register(&econfig); 293 - if (IS_ERR(nvmem)) 294 - return PTR_ERR(nvmem); 292 + nvmem = devm_nvmem_register(dev, &econfig); 295 293 296 - platform_set_drvdata(pdev, nvmem); 297 - 298 - return 0; 299 - } 300 - 301 - static int rockchip_efuse_remove(struct platform_device *pdev) 302 - { 303 - struct nvmem_device *nvmem = platform_get_drvdata(pdev); 304 - 305 - return nvmem_unregister(nvmem); 294 + return PTR_ERR_OR_ZERO(nvmem); 306 295 } 307 296 308 297 static struct platform_driver rockchip_efuse_driver = { 
309 298 .probe = rockchip_efuse_probe, 310 - .remove = rockchip_efuse_remove, 311 299 .driver = { 312 300 .name = "rockchip-efuse", 313 301 .of_match_table = rockchip_efuse_match,
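The rockchip-efuse conversion above replaces of_match_device() with of_device_get_match_data(), which walks the driver's match table and hands back the matched entry's .data in one step. A simplified lookup of that shape (struct and table contents are illustrative, not the real OF code):

```c
#include <string.h>
#include <stddef.h>

/* Illustrative match table: compatible string -> driver data. */
struct of_device_id_sketch {
	const char *compatible;
	const void *data;
};

static const char rk3288_read_fn[] = "rk3288-read-fn";

static const struct of_device_id_sketch efuse_match_sketch[] = {
	{ "rockchip,rk3288-efuse", rk3288_read_fn },
	{ NULL, NULL },  /* sentinel terminates the table */
};

/* Return the .data of the first matching entry, or NULL if none match. */
static const void *
get_match_data_sketch(const struct of_device_id_sketch *tbl,
		      const char *compatible)
{
	for (; tbl->compatible; tbl++)
		if (strcmp(tbl->compatible, compatible) == 0)
			return tbl->data;
	return NULL;
}
```

Returning the data directly is what lets the probe drop the intermediate `match` variable and its NULL checks.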
+28 -25
drivers/nvmem/snvs_lpgpr.c
··· 14 14 #include <linux/regmap.h> 15 15 16 16 #define IMX6Q_SNVS_HPLR 0x00 17 - #define IMX6Q_GPR_SL BIT(5) 18 17 #define IMX6Q_SNVS_LPLR 0x34 19 - #define IMX6Q_GPR_HL BIT(5) 20 18 #define IMX6Q_SNVS_LPGPR 0x68 19 + 20 + #define IMX7D_SNVS_HPLR 0x00 21 + #define IMX7D_SNVS_LPLR 0x34 22 + #define IMX7D_SNVS_LPGPR 0x90 23 + 24 + #define IMX_GPR_SL BIT(5) 25 + #define IMX_GPR_HL BIT(5) 21 26 22 27 struct snvs_lpgpr_cfg { 23 28 int offset; 24 29 int offset_hplr; 25 30 int offset_lplr; 31 + int size; 26 32 }; 27 33 28 34 struct snvs_lpgpr_priv { ··· 42 36 .offset = IMX6Q_SNVS_LPGPR, 43 37 .offset_hplr = IMX6Q_SNVS_HPLR, 44 38 .offset_lplr = IMX6Q_SNVS_LPLR, 39 + .size = 4, 40 + }; 41 + 42 + static const struct snvs_lpgpr_cfg snvs_lpgpr_cfg_imx7d = { 43 + .offset = IMX7D_SNVS_LPGPR, 44 + .offset_hplr = IMX7D_SNVS_HPLR, 45 + .offset_lplr = IMX7D_SNVS_LPLR, 46 + .size = 16, 45 47 }; 46 48 47 49 static int snvs_lpgpr_write(void *context, unsigned int offset, void *val, ··· 64 50 if (ret < 0) 65 51 return ret; 66 52 67 - if (lock_reg & IMX6Q_GPR_SL) 53 + if (lock_reg & IMX_GPR_SL) 68 54 return -EPERM; 69 55 70 56 ret = regmap_read(priv->regmap, dcfg->offset_lplr, &lock_reg); 71 57 if (ret < 0) 72 58 return ret; 73 59 74 - if (lock_reg & IMX6Q_GPR_HL) 60 + if (lock_reg & IMX_GPR_HL) 75 61 return -EPERM; 76 62 77 63 return regmap_bulk_write(priv->regmap, dcfg->offset + offset, val, ··· 124 110 cfg->priv = priv; 125 111 cfg->name = dev_name(dev); 126 112 cfg->dev = dev; 127 - cfg->stride = 4, 128 - cfg->word_size = 4, 129 - cfg->size = 4, 130 - cfg->owner = THIS_MODULE, 131 - cfg->reg_read = snvs_lpgpr_read, 132 - cfg->reg_write = snvs_lpgpr_write, 113 + cfg->stride = 4; 114 + cfg->word_size = 4; 115 + cfg->size = dcfg->size; 116 + cfg->owner = THIS_MODULE; 117 + cfg->reg_read = snvs_lpgpr_read; 118 + cfg->reg_write = snvs_lpgpr_write; 133 119 134 - nvmem = nvmem_register(cfg); 135 - if (IS_ERR(nvmem)) 136 - return PTR_ERR(nvmem); 120 + nvmem = devm_nvmem_register(dev, cfg); 
137 121 138 - platform_set_drvdata(pdev, nvmem); 139 - 140 - return 0; 141 - } 142 - 143 - static int snvs_lpgpr_remove(struct platform_device *pdev) 144 - { 145 - struct nvmem_device *nvmem = platform_get_drvdata(pdev); 146 - 147 - return nvmem_unregister(nvmem); 122 + return PTR_ERR_OR_ZERO(nvmem); 148 123 } 149 124 150 125 static const struct of_device_id snvs_lpgpr_dt_ids[] = { 151 126 { .compatible = "fsl,imx6q-snvs-lpgpr", .data = &snvs_lpgpr_cfg_imx6q }, 152 127 { .compatible = "fsl,imx6ul-snvs-lpgpr", 153 128 .data = &snvs_lpgpr_cfg_imx6q }, 129 + { .compatible = "fsl,imx7d-snvs-lpgpr", .data = &snvs_lpgpr_cfg_imx7d }, 154 130 { }, 155 131 }; 156 132 MODULE_DEVICE_TABLE(of, snvs_lpgpr_dt_ids); 157 133 158 134 static struct platform_driver snvs_lpgpr_driver = { 159 135 .probe = snvs_lpgpr_probe, 160 - .remove = snvs_lpgpr_remove, 161 136 .driver = { 162 137 .name = "snvs_lpgpr", 163 138 .of_match_table = snvs_lpgpr_dt_ids, ··· 155 152 module_platform_driver(snvs_lpgpr_driver); 156 153 157 154 MODULE_AUTHOR("Oleksij Rempel <o.rempel@pengutronix.de>"); 158 - MODULE_DESCRIPTION("Low Power General Purpose Register in i.MX6 Secure Non-Volatile Storage"); 155 + MODULE_DESCRIPTION("Low Power General Purpose Register in i.MX6 and i.MX7 Secure Non-Volatile Storage"); 159 156 MODULE_LICENSE("GPL v2");
+50 -21
drivers/nvmem/sunxi_sid.c
··· 85 85 } 86 86 87 87 static int sun8i_sid_register_readout(const struct sunxi_sid *sid, 88 - const unsigned int word) 88 + const unsigned int offset, 89 + u32 *out) 89 90 { 90 91 u32 reg_val; 91 92 int ret; 92 93 93 94 /* Set word, lock access, and set read command */ 94 - reg_val = (word & SUN8I_SID_OFFSET_MASK) 95 + reg_val = (offset & SUN8I_SID_OFFSET_MASK) 95 96 << SUN8I_SID_OFFSET_SHIFT; 96 97 reg_val |= SUN8I_SID_OP_LOCK | SUN8I_SID_READ; 97 98 writel(reg_val, sid->base + SUN8I_SID_PRCTL); ··· 102 101 if (ret) 103 102 return ret; 104 103 104 + if (out) 105 + *out = readl(sid->base + SUN8I_SID_RDKEY); 106 + 105 107 writel(0, sid->base + SUN8I_SID_PRCTL); 108 + 109 + return 0; 110 + } 111 + 112 + /* 113 + * On Allwinner H3, the value at the 0x200 offset of the SID controller is 114 + * unreliable. 115 + * Read it through the registers instead. 116 + */ 117 + static int sun8i_sid_read_byte_by_reg(const struct sunxi_sid *sid, 118 + const unsigned int offset, 119 + u8 *out) 120 + { 121 + u32 word; 122 + int ret; 123 + 124 + ret = sun8i_sid_register_readout(sid, offset & ~0x03, &word); 125 + 126 + if (ret) 127 + return ret; 128 + 129 + *out = (word >> ((offset & 0x3) * 8)) & 0xff; 130 + 131 + return 0; 132 + } 133 + 134 + static int sun8i_sid_read_by_reg(void *context, unsigned int offset, 135 + void *val, size_t bytes) 136 + { 137 + struct sunxi_sid *sid = context; 138 + u8 *buf = val; 139 + int ret; 140 + 141 + while (bytes--) { 142 + ret = sun8i_sid_read_byte_by_reg(sid, offset++, buf++); 143 + if (ret) 144 + return ret; 145 + } 146 + 106 147 return 0; 107 148 } 108 149 ··· 174 131 175 132 size = cfg->size; 176 133 177 - if (cfg->need_register_readout) { 178 - /* 179 - * H3's SID controller have a bug that the value at 0x200 180 - offset is not the correct value when the hardware is reseted. 181 - However, after doing a register-based read operation, the 182 - value become right. 
183 - * Do a full read operation here, but ignore its value 184 - * (as it's more fast to read by direct MMIO value than 185 - * with registers) 186 - */ 187 - for (i = 0; i < (size >> 2); i++) { 188 - ret = sun8i_sid_register_readout(sid, i); 189 - if (ret) 190 - return ret; 191 - } 192 - } 193 - 194 134 econfig.size = size; 195 135 econfig.dev = dev; 196 - econfig.reg_read = sunxi_sid_read; 136 + if (cfg->need_register_readout) 137 + econfig.reg_read = sun8i_sid_read_by_reg; 138 + else 139 + econfig.reg_read = sunxi_sid_read; 197 140 econfig.priv = sid; 198 141 nvmem = nvmem_register(&econfig); 199 142 if (IS_ERR(nvmem)) ··· 192 163 } 193 164 194 165 for (i = 0; i < size; i++) 195 - randomness[i] = sunxi_sid_read_byte(sid, i); 166 + econfig.reg_read(sid, i, &randomness[i], 1); 196 167 197 168 add_device_randomness(randomness, size); 198 169 kfree(randomness);
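The new sun8i_sid_read_byte_by_reg() above reads the aligned 32-bit word and then shifts out the byte selected by the low two offset bits. That index arithmetic is easy to check in isolation (function and variable names are illustrative):

```c
#include <stdint.h>

/*
 * Pick byte `offset` out of a little-endian array of 32-bit words, the
 * same way sun8i_sid_read_byte_by_reg() indexes its register readout:
 * word index = offset / 4, byte-in-word = offset % 4.
 */
static uint8_t byte_from_words(const uint32_t *words, unsigned int offset)
{
	uint32_t word = words[offset >> 2];	/* offset & ~0x03, as a word index */

	return (word >> ((offset & 0x3) * 8)) & 0xff;
}
```

sun8i_sid_read_by_reg() then just loops this per byte, which is why the old "do one throwaway full readout, then trust MMIO" workaround could be deleted.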
+2 -14
drivers/nvmem/uniphier-efuse.c
··· 60 60 econfig.size = resource_size(res); 61 61 econfig.priv = priv; 62 62 econfig.dev = dev; 63 - nvmem = nvmem_register(&econfig); 64 - if (IS_ERR(nvmem)) 65 - return PTR_ERR(nvmem); 63 + nvmem = devm_nvmem_register(dev, &econfig); 66 64 67 - platform_set_drvdata(pdev, nvmem); 68 - 69 - return 0; 70 - } 71 - 72 - static int uniphier_efuse_remove(struct platform_device *pdev) 73 - { 74 - struct nvmem_device *nvmem = platform_get_drvdata(pdev); 75 - 76 - return nvmem_unregister(nvmem); 65 + return PTR_ERR_OR_ZERO(nvmem); 77 66 } 78 67 79 68 static const struct of_device_id uniphier_efuse_of_match[] = { ··· 73 84 74 85 static struct platform_driver uniphier_efuse_driver = { 75 86 .probe = uniphier_efuse_probe, 76 - .remove = uniphier_efuse_remove, 77 87 .driver = { 78 88 .name = "uniphier-efuse", 79 89 .of_match_table = uniphier_efuse_of_match,
+5 -19
drivers/nvmem/vf610-ocotp.c
··· 217 217 }; 218 218 MODULE_DEVICE_TABLE(of, ocotp_of_match); 219 219 220 - static int vf610_ocotp_remove(struct platform_device *pdev) 221 - { 222 - struct vf610_ocotp *ocotp_dev = platform_get_drvdata(pdev); 223 - 224 - return nvmem_unregister(ocotp_dev->nvmem); 225 - } 226 - 227 220 static int vf610_ocotp_probe(struct platform_device *pdev) 228 221 { 229 222 struct device *dev = &pdev->dev; 230 223 struct resource *res; 231 224 struct vf610_ocotp *ocotp_dev; 232 225 233 - ocotp_dev = devm_kzalloc(&pdev->dev, 234 - sizeof(struct vf610_ocotp), GFP_KERNEL); 226 + ocotp_dev = devm_kzalloc(dev, sizeof(struct vf610_ocotp), GFP_KERNEL); 235 227 if (!ocotp_dev) 236 228 return -ENOMEM; 237 229 ··· 238 246 PTR_ERR(ocotp_dev->clk)); 239 247 return PTR_ERR(ocotp_dev->clk); 240 248 } 249 + ocotp_dev->dev = dev; 250 + ocotp_dev->timing = vf610_ocotp_calculate_timing(ocotp_dev); 241 251 242 252 ocotp_config.size = resource_size(res); 243 253 ocotp_config.priv = ocotp_dev; 244 254 ocotp_config.dev = dev; 245 255 246 - ocotp_dev->nvmem = nvmem_register(&ocotp_config); 247 - if (IS_ERR(ocotp_dev->nvmem)) 248 - return PTR_ERR(ocotp_dev->nvmem); 256 + ocotp_dev->nvmem = devm_nvmem_register(dev, &ocotp_config); 249 257 250 - ocotp_dev->dev = dev; 251 - platform_set_drvdata(pdev, ocotp_dev); 252 - 253 - ocotp_dev->timing = vf610_ocotp_calculate_timing(ocotp_dev); 254 - 255 - return 0; 258 + return PTR_ERR_OR_ZERO(ocotp_dev->nvmem); 256 259 } 257 260 258 261 static struct platform_driver vf610_ocotp_driver = { 259 262 .probe = vf610_ocotp_probe, 260 - .remove = vf610_ocotp_remove, 261 263 .driver = { 262 264 .name = "vf610-ocotp", 263 265 .of_match_table = ocotp_of_match,
+3 -5
drivers/parport/parport_ax88796.c
··· 273 273 { 274 274 struct device *_dev = &pdev->dev; 275 275 struct ax_drvdata *dd; 276 - struct parport *pp = NULL; 276 + struct parport *pp; 277 277 struct resource *res; 278 278 unsigned long size; 279 279 int spacing; 280 280 int irq; 281 281 int ret; 282 282 283 - dd = kzalloc(sizeof(struct ax_drvdata), GFP_KERNEL); 284 - if (dd == NULL) { 285 - dev_err(_dev, "no memory for private data\n"); 283 + dd = kzalloc(sizeof(*dd), GFP_KERNEL); 284 + if (!dd) 286 285 return -ENOMEM; 287 - } 288 286 289 287 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 290 288 if (res == NULL) {
+4
drivers/parport/parport_pc.c
··· 2646 2646 netmos_9901, 2647 2647 netmos_9865, 2648 2648 quatech_sppxp100, 2649 + wch_ch382l, 2649 2650 }; 2650 2651 2651 2652 ··· 2709 2708 /* netmos_9901 */ { 1, { { 0, -1 }, } }, 2710 2709 /* netmos_9865 */ { 1, { { 0, -1 }, } }, 2711 2710 /* quatech_sppxp100 */ { 1, { { 0, 1 }, } }, 2711 + /* wch_ch382l */ { 1, { { 2, -1 }, } }, 2712 2712 }; 2713 2713 2714 2714 static const struct pci_device_id parport_pc_pci_tbl[] = { ··· 2799 2797 /* Quatech SPPXP-100 Parallel port PCI ExpressCard */ 2800 2798 { PCI_VENDOR_ID_QUATECH, PCI_DEVICE_ID_QUATECH_SPPXP_100, 2801 2799 PCI_ANY_ID, PCI_ANY_ID, 0, 0, quatech_sppxp100 }, 2800 + /* WCH CH382L PCI-E single parallel port card */ 2801 + { 0x1c00, 0x3050, 0x1c00, 0x3050, 0, 0, wch_ch382l }, 2802 2802 { 0, } /* terminate list */ 2803 2803 }; 2804 2804 MODULE_DEVICE_TABLE(pci, parport_pc_pci_tbl);
+46 -76
drivers/parport/parport_serial.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 1 2 /* 2 3 * Support for common PCI multi-I/O cards (which is most of them) 3 4 * 4 5 * Copyright (C) 2001 Tim Waugh <twaugh@redhat.com> 5 - * 6 - * This program is free software; you can redistribute it and/or 7 - * modify it under the terms of the GNU General Public License 8 - * as published by the Free Software Foundation; either version 9 - * 2 of the License, or (at your option) any later version. 10 - * 11 6 * 12 7 * Multi-function PCI cards are supposed to present separate logical 13 8 * devices on the bus. A common thing to do seems to be to just use 14 9 * one logical device with lots of base address registers for both 15 10 * parallel ports and serial ports. This driver is for dealing with 16 11 * that. 17 - * 18 12 */ 19 13 20 - #include <linux/types.h> 21 - #include <linux/module.h> 22 - #include <linux/init.h> 23 - #include <linux/slab.h> 24 - #include <linux/pci.h> 25 14 #include <linux/interrupt.h> 15 + #include <linux/module.h> 26 16 #include <linux/parport.h> 27 17 #include <linux/parport_pc.h> 18 + #include <linux/pci.h> 19 + #include <linux/slab.h> 20 + #include <linux/types.h> 21 + 28 22 #include <linux/8250_pci.h> 29 23 30 24 enum parport_pc_pci_cards { ··· 59 65 wch_ch353_1s1p, 60 66 wch_ch353_2s1p, 61 67 wch_ch382_2s1p, 68 + brainboxes_5s1p, 62 69 sunix_2s1p, 63 70 }; 64 71 ··· 148 153 /* wch_ch353_1s1p*/ { 1, { { 1, -1}, } }, 149 154 /* wch_ch353_2s1p*/ { 1, { { 2, -1}, } }, 150 155 /* wch_ch382_2s1p*/ { 1, { { 2, -1}, } }, 156 + /* brainboxes_5s1p */ { 1, { { 3, -1 }, } }, 151 157 /* sunix_2s1p */ { 1, { { 3, -1 }, } }, 152 158 }; 153 159 ··· 253 257 { 0x4348, 0x5053, PCI_ANY_ID, PCI_ANY_ID, 0, 0, wch_ch353_1s1p}, 254 258 { 0x4348, 0x7053, 0x4348, 0x3253, 0, 0, wch_ch353_2s1p}, 255 259 { 0x1c00, 0x3250, 0x1c00, 0x3250, 0, 0, wch_ch382_2s1p}, 260 + 261 + /* BrainBoxes PX272/PX306 MIO card */ 262 + { PCI_VENDOR_ID_INTASHIELD, 0x4100, 263 + PCI_ANY_ID, PCI_ANY_ID, 0, 0, brainboxes_5s1p }, 256 264 
257 265 /* 258 266 * More SUNIX variations. At least one of these has part number ··· 501 501 .uart_offset = 8, 502 502 .first_offset = 0xC0, 503 503 }, 504 + [brainboxes_5s1p] = { 505 + .flags = FL_BASE2, 506 + .num_ports = 5, 507 + .base_baud = 921600, 508 + .uart_offset = 8, 509 + }, 504 510 [sunix_2s1p] = { 505 511 .flags = FL_BASE0|FL_BASE_BARS, 506 512 .num_ports = 2, ··· 530 524 struct serial_private *serial; 531 525 532 526 board = &pci_parport_serial_boards[id->driver_data]; 533 - 534 527 if (board->num_ports == 0) 535 528 return 0; 536 529 537 530 serial = pciserial_init_ports(dev, board); 538 - 539 531 if (IS_ERR(serial)) 540 532 return PTR_ERR(serial); 541 533 ··· 562 558 int irq; 563 559 564 560 if (priv->num_par == ARRAY_SIZE (priv->port)) { 565 - printk (KERN_WARNING 566 - "parport_serial: %s: only %zu parallel ports " 567 - "supported (%d reported)\n", pci_name (dev), 568 - ARRAY_SIZE(priv->port), card->numports); 561 + dev_warn(&dev->dev, 562 + "only %zu parallel ports supported (%d reported)\n", 563 + ARRAY_SIZE(priv->port), card->numports); 569 564 break; 570 565 } 571 566 ··· 580 577 irq = dev->irq; 581 578 if (irq == IRQ_NONE) { 582 579 dev_dbg(&dev->dev, 583 - "PCI parallel port detected: I/O at %#lx(%#lx)\n", 580 + "PCI parallel port detected: I/O at %#lx(%#lx)\n", 584 581 io_lo, io_hi); 585 582 irq = PARPORT_IRQ_NONE; 586 583 } else { 587 584 dev_dbg(&dev->dev, 588 - "PCI parallel port detected: I/O at %#lx(%#lx), IRQ %d\n", 585 + "PCI parallel port detected: I/O at %#lx(%#lx), IRQ %d\n", 589 586 io_lo, io_hi, irq); 590 587 } 591 588 port = parport_pc_probe_port (io_lo, io_hi, irq, ··· 608 605 struct parport_serial_private *priv; 609 606 int err; 610 607 611 - priv = kzalloc (sizeof *priv, GFP_KERNEL); 608 + priv = devm_kzalloc(&dev->dev, sizeof(*priv), GFP_KERNEL); 612 609 if (!priv) 613 610 return -ENOMEM; 611 + 614 612 pci_set_drvdata (dev, priv); 615 613 616 - err = pci_enable_device (dev); 617 - if (err) { 618 - kfree (priv); 614 + err 
= pcim_enable_device(dev); 615 + if (err) 619 616 return err; 620 - } 621 617 622 - if (parport_register (dev, id)) { 623 - kfree (priv); 624 - return -ENODEV; 625 - } 618 + err = parport_register(dev, id); 619 + if (err) 620 + return err; 626 621 627 - if (serial_register (dev, id)) { 622 + err = serial_register(dev, id); 623 + if (err) { 628 624 int i; 629 625 for (i = 0; i < priv->num_par; i++) 630 626 parport_pc_unregister_port (priv->port[i]); 631 - kfree (priv); 632 - return -ENODEV; 627 + return err; 633 628 } 634 629 635 630 return 0; ··· 646 645 for (i = 0; i < priv->num_par; i++) 647 646 parport_pc_unregister_port (priv->port[i]); 648 647 649 - kfree (priv); 650 648 return; 651 649 } 652 650 653 - #ifdef CONFIG_PM 654 - static int parport_serial_pci_suspend(struct pci_dev *dev, pm_message_t state) 651 + static int __maybe_unused parport_serial_pci_suspend(struct device *dev) 655 652 { 656 - struct parport_serial_private *priv = pci_get_drvdata(dev); 653 + struct pci_dev *pdev = to_pci_dev(dev); 654 + struct parport_serial_private *priv = pci_get_drvdata(pdev); 657 655 658 656 if (priv->serial) 659 657 pciserial_suspend_ports(priv->serial); 660 658 661 659 /* FIXME: What about parport? */ 662 - 663 - pci_save_state(dev); 664 - pci_set_power_state(dev, pci_choose_state(dev, state)); 665 660 return 0; 666 661 } 667 662 668 - static int parport_serial_pci_resume(struct pci_dev *dev) 663 + static int __maybe_unused parport_serial_pci_resume(struct device *dev) 669 664 { 670 - struct parport_serial_private *priv = pci_get_drvdata(dev); 671 - int err; 672 - 673 - pci_set_power_state(dev, PCI_D0); 674 - pci_restore_state(dev); 675 - 676 - /* 677 - * The device may have been disabled. Re-enable it. 
678 - */ 679 - err = pci_enable_device(dev); 680 - if (err) { 681 - printk(KERN_ERR "parport_serial: %s: error enabling " 682 - "device for resume (%d)\n", pci_name(dev), err); 683 - return err; 684 - } 665 + struct pci_dev *pdev = to_pci_dev(dev); 666 + struct parport_serial_private *priv = pci_get_drvdata(pdev); 685 667 686 668 if (priv->serial) 687 669 pciserial_resume_ports(priv->serial); 688 670 689 671 /* FIXME: What about parport? */ 690 - 691 672 return 0; 692 673 } 693 - #endif 674 + 675 + static SIMPLE_DEV_PM_OPS(parport_serial_pm_ops, 676 + parport_serial_pci_suspend, parport_serial_pci_resume); 694 677 695 678 static struct pci_driver parport_serial_pci_driver = { 696 679 .name = "parport_serial", 697 680 .id_table = parport_serial_pci_tbl, 698 681 .probe = parport_serial_pci_probe, 699 682 .remove = parport_serial_pci_remove, 700 - #ifdef CONFIG_PM 701 - .suspend = parport_serial_pci_suspend, 702 - .resume = parport_serial_pci_resume, 703 - #endif 683 + .driver = { 684 + .pm = &parport_serial_pm_ops, 685 + }, 704 686 }; 705 - 706 - 707 - static int __init parport_serial_init (void) 708 - { 709 - return pci_register_driver (&parport_serial_pci_driver); 710 - } 711 - 712 - static void __exit parport_serial_exit (void) 713 - { 714 - pci_unregister_driver (&parport_serial_pci_driver); 715 - return; 716 - } 687 + module_pci_driver(parport_serial_pci_driver); 717 688 718 689 MODULE_AUTHOR("Tim Waugh <twaugh@redhat.com>"); 719 690 MODULE_DESCRIPTION("Driver for common parallel+serial multi-I/O PCI cards"); 720 691 MODULE_LICENSE("GPL"); 721 - 722 - module_init(parport_serial_init); 723 - module_exit(parport_serial_exit);
+18 -3
drivers/pps/clients/pps_parport.c
··· 49 49 " zero turns clear edge capture off entirely"); 50 50 module_param(clear_wait, uint, 0); 51 51 52 + static DEFINE_IDA(pps_client_index); 52 53 53 54 /* internal per port structure */ 54 55 struct pps_client_pp { ··· 57 56 struct pps_device *pps; /* PPS device */ 58 57 unsigned int cw; /* port clear timeout */ 59 58 unsigned int cw_err; /* number of timeouts */ 59 + int index; /* device number */ 60 60 }; 61 61 62 62 static inline int signal_is_set(struct parport *port) ··· 138 136 139 137 static void parport_attach(struct parport *port) 140 138 { 139 + struct pardev_cb pps_client_cb; 140 + int index; 141 141 struct pps_client_pp *device; 142 142 struct pps_source_info info = { 143 143 .name = KBUILD_MODNAME, ··· 158 154 return; 159 155 } 160 156 161 - device->pardev = parport_register_device(port, KBUILD_MODNAME, 162 - NULL, NULL, parport_irq, PARPORT_FLAG_EXCL, device); 157 + index = ida_simple_get(&pps_client_index, 0, 0, GFP_KERNEL); 158 + memset(&pps_client_cb, 0, sizeof(pps_client_cb)); 159 + pps_client_cb.private = device; 160 + pps_client_cb.irq_func = parport_irq; 161 + pps_client_cb.flags = PARPORT_FLAG_EXCL; 162 + device->pardev = parport_register_dev_model(port, 163 + KBUILD_MODNAME, 164 + &pps_client_cb, 165 + index); 163 166 if (!device->pardev) { 164 167 pr_err("couldn't register with %s\n", port->name); 165 168 goto err_free; ··· 187 176 device->cw = clear_wait; 188 177 189 178 port->ops->enable_irq(port); 179 + device->index = index; 190 180 191 181 pr_info("attached to %s\n", port->name); 192 182 ··· 198 186 err_unregister_dev: 199 187 parport_unregister_device(device->pardev); 200 188 err_free: 189 + ida_simple_remove(&pps_client_index, index); 201 190 kfree(device); 202 191 } ··· 218 205 pps_unregister_source(device->pps); 219 206 parport_release(pardev); 220 207 parport_unregister_device(pardev); 208 + ida_simple_remove(&pps_client_index, device->index); 221 209 kfree(device); 222 210 } 223 211
224 212 static struct parport_driver pps_parport_driver = { 225 213 .name = KBUILD_MODNAME, 226 - .attach = parport_attach, 214 + .match_port = parport_attach, 227 215 .detach = parport_detach, 216 + .devmodel = true, 228 217 }; 229 218 230 219 /* module staff */
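The pps_parport hunks above start allocating a per-port device number with ida_simple_get() and returning it with ida_simple_remove(). The contract, hand out the smallest free non-negative id, can be sketched with a small fixed-size bitmap (illustrative only; the kernel IDA is unbounded and far more efficient):

```c
/*
 * Tiny fixed-size sketch of ida_simple_get()/ida_simple_remove()
 * semantics: always hand out the smallest free non-negative index.
 */
#define IDA_SKETCH_MAX 64

struct ida_sketch { unsigned long long used; };

static int ida_sketch_get(struct ida_sketch *ida)
{
	for (int i = 0; i < IDA_SKETCH_MAX; i++) {
		if (!(ida->used & (1ULL << i))) {
			ida->used |= 1ULL << i;
			return i;
		}
	}
	return -1;	/* exhausted; the kernel would return -ENOSPC */
}

static void ida_sketch_remove(struct ida_sketch *ida, int id)
{
	ida->used &= ~(1ULL << id);
}
```

Reusing freed ids this way is what keeps the /dev numbering dense when parallel ports come and go.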
+9 -3
drivers/pps/generators/pps_gen_parport.c
··· 192 192 193 193 static void parport_attach(struct parport *port) 194 194 { 195 + struct pardev_cb pps_cb; 196 + 195 197 if (attached) { 196 198 /* we already have a port */ 197 199 return; 198 200 } 199 201 200 - device.pardev = parport_register_device(port, KBUILD_MODNAME, 201 - NULL, NULL, NULL, PARPORT_FLAG_EXCL, &device); 202 + memset(&pps_cb, 0, sizeof(pps_cb)); 203 + pps_cb.private = &device; 204 + pps_cb.flags = PARPORT_FLAG_EXCL; 205 + device.pardev = parport_register_dev_model(port, KBUILD_MODNAME, 206 + &pps_cb, 0); 202 207 if (!device.pardev) { 203 208 pr_err("couldn't register with %s\n", port->name); 204 209 return; ··· 241 236 242 237 static struct parport_driver pps_gen_parport_driver = { 243 238 .name = KBUILD_MODNAME, 244 - .attach = parport_attach, 239 + .match_port = parport_attach, 245 240 .detach = parport_detach, 241 + .devmodel = true, 246 242 }; 247 243 248 244 /* module staff */
+1 -1
drivers/siox/siox-core.c
··· 594 594 size_t inbytes = 0, outbytes = 0; 595 595 u8 statustype = 0; 596 596 597 - ret = sscanf(buf, "%20s %zu %zu %hhu", type, &inbytes, 597 + ret = sscanf(buf, "%19s %zu %zu %hhu", type, &inbytes, 598 598 &outbytes, &statustype); 599 599 if (ret != 3 && ret != 4) 600 600 return -EINVAL;
+1 -1
drivers/slimbus/core.c
··· 141 141 sbdev->e_addr = *eaddr; 142 142 ret = slim_add_device(ctrl, sbdev, node); 143 143 if (ret) { 144 - kfree(sbdev); 144 + put_device(&sbdev->dev); 145 145 return NULL; 146 146 } 147 147
+12 -14
drivers/thunderbolt/dma_port.c
··· 170 170 171 171 static int dma_find_port(struct tb_switch *sw) 172 172 { 173 - int port, ret; 174 - u32 type; 173 + static const int ports[] = { 3, 5, 7 }; 174 + int i; 175 175 176 176 /* 177 - * The DMA (NHI) port is either 3 or 5 depending on the 178 - * controller. Try both starting from 5 which is more common. 177 + * The DMA (NHI) port is either 3, 5 or 7 depending on the 178 + * controller. Try all of them. 179 179 */ 180 - port = 5; 181 - ret = dma_port_read(sw->tb->ctl, &type, tb_route(sw), port, 2, 1, 182 - DMA_PORT_TIMEOUT); 183 - if (!ret && (type & 0xffffff) == TB_TYPE_NHI) 184 - return port; 180 + for (i = 0; i < ARRAY_SIZE(ports); i++) { 181 + u32 type; 182 + int ret; 185 183 186 - port = 3; 187 - ret = dma_port_read(sw->tb->ctl, &type, tb_route(sw), port, 2, 1, 188 - DMA_PORT_TIMEOUT); 189 - if (!ret && (type & 0xffffff) == TB_TYPE_NHI) 190 - return port; 184 + ret = dma_port_read(sw->tb->ctl, &type, tb_route(sw), ports[i], 185 + 2, 1, DMA_PORT_TIMEOUT); 186 + if (!ret && (type & 0xffffff) == TB_TYPE_NHI) 187 + return ports[i]; 188 + } 191 189 192 190 return -ENODEV; 193 191 }
+129 -1
drivers/thunderbolt/domain.c
··· 117 117 [TB_SECURITY_USER] = "user", 118 118 [TB_SECURITY_SECURE] = "secure", 119 119 [TB_SECURITY_DPONLY] = "dponly", 120 + [TB_SECURITY_USBONLY] = "usbonly", 120 121 }; 122 + 123 + static ssize_t boot_acl_show(struct device *dev, struct device_attribute *attr, 124 + char *buf) 125 + { 126 + struct tb *tb = container_of(dev, struct tb, dev); 127 + uuid_t *uuids; 128 + ssize_t ret; 129 + int i; 130 + 131 + uuids = kcalloc(tb->nboot_acl, sizeof(uuid_t), GFP_KERNEL); 132 + if (!uuids) 133 + return -ENOMEM; 134 + 135 + if (mutex_lock_interruptible(&tb->lock)) { 136 + ret = -ERESTARTSYS; 137 + goto out; 138 + } 139 + ret = tb->cm_ops->get_boot_acl(tb, uuids, tb->nboot_acl); 140 + if (ret) { 141 + mutex_unlock(&tb->lock); 142 + goto out; 143 + } 144 + mutex_unlock(&tb->lock); 145 + 146 + for (ret = 0, i = 0; i < tb->nboot_acl; i++) { 147 + if (!uuid_is_null(&uuids[i])) 148 + ret += snprintf(buf + ret, PAGE_SIZE - ret, "%pUb", 149 + &uuids[i]); 150 + 151 + ret += snprintf(buf + ret, PAGE_SIZE - ret, "%s", 152 + i < tb->nboot_acl - 1 ? "," : "\n"); 153 + } 154 + 155 + out: 156 + kfree(uuids); 157 + return ret; 158 + } 159 + 160 + static ssize_t boot_acl_store(struct device *dev, struct device_attribute *attr, 161 + const char *buf, size_t count) 162 + { 163 + struct tb *tb = container_of(dev, struct tb, dev); 164 + char *str, *s, *uuid_str; 165 + ssize_t ret = 0; 166 + uuid_t *acl; 167 + int i = 0; 168 + 169 + /* 170 + * Make sure the value is not bigger than tb->nboot_acl * UUID 171 + * length + commas and optional "\n". Also the smallest allowable 172 + * string is tb->nboot_acl * ",". 
173 + */ 174 + if (count > (UUID_STRING_LEN + 1) * tb->nboot_acl + 1) 175 + return -EINVAL; 176 + if (count < tb->nboot_acl - 1) 177 + return -EINVAL; 178 + 179 + str = kstrdup(buf, GFP_KERNEL); 180 + if (!str) 181 + return -ENOMEM; 182 + 183 + acl = kcalloc(tb->nboot_acl, sizeof(uuid_t), GFP_KERNEL); 184 + if (!acl) { 185 + ret = -ENOMEM; 186 + goto err_free_str; 187 + } 188 + 189 + uuid_str = strim(str); 190 + while ((s = strsep(&uuid_str, ",")) != NULL && i < tb->nboot_acl) { 191 + size_t len = strlen(s); 192 + 193 + if (len) { 194 + if (len != UUID_STRING_LEN) { 195 + ret = -EINVAL; 196 + goto err_free_acl; 197 + } 198 + ret = uuid_parse(s, &acl[i]); 199 + if (ret) 200 + goto err_free_acl; 201 + } 202 + 203 + i++; 204 + } 205 + 206 + if (s || i < tb->nboot_acl) { 207 + ret = -EINVAL; 208 + goto err_free_acl; 209 + } 210 + 211 + if (mutex_lock_interruptible(&tb->lock)) { 212 + ret = -ERESTARTSYS; 213 + goto err_free_acl; 214 + } 215 + ret = tb->cm_ops->set_boot_acl(tb, acl, tb->nboot_acl); 216 + mutex_unlock(&tb->lock); 217 + 218 + err_free_acl: 219 + kfree(acl); 220 + err_free_str: 221 + kfree(str); 222 + 223 + return ret ?: count; 224 + } 225 + static DEVICE_ATTR_RW(boot_acl); 121 226 122 227 static ssize_t security_show(struct device *dev, struct device_attribute *attr, 123 228 char *buf) 124 229 { 125 230 struct tb *tb = container_of(dev, struct tb, dev); 231 + const char *name = "unknown"; 126 232 127 - return sprintf(buf, "%s\n", tb_security_names[tb->security_level]); 233 + if (tb->security_level < ARRAY_SIZE(tb_security_names)) 234 + name = tb_security_names[tb->security_level]; 235 + 236 + return sprintf(buf, "%s\n", name); 128 237 } 129 238 static DEVICE_ATTR_RO(security); 130 239 131 240 static struct attribute *domain_attrs[] = { 241 + &dev_attr_boot_acl.attr, 132 242 &dev_attr_security.attr, 133 243 NULL, 134 244 }; 135 245 246 + static umode_t domain_attr_is_visible(struct kobject *kobj, 247 + struct attribute *attr, int n) 248 + { 249 + struct 
device *dev = container_of(kobj, struct device, kobj); 250 + struct tb *tb = container_of(dev, struct tb, dev); 251 + 252 + if (attr == &dev_attr_boot_acl.attr) { 253 + if (tb->nboot_acl && 254 + tb->cm_ops->get_boot_acl && 255 + tb->cm_ops->set_boot_acl) 256 + return attr->mode; 257 + return 0; 258 + } 259 + 260 + return attr->mode; 261 + } 262 + 136 263 static struct attribute_group domain_attr_group = { 264 + .is_visible = domain_attr_is_visible, 137 265 .attrs = domain_attrs, 138 266 }; 139 267
+685 -79
drivers/thunderbolt/icm.c
··· 41 41 #define PHY_PORT_CS1_LINK_STATE_MASK GENMASK(29, 26) 42 42 #define PHY_PORT_CS1_LINK_STATE_SHIFT 26 43 43 44 - #define ICM_TIMEOUT 5000 /* ms */ 44 + #define ICM_TIMEOUT 5000 /* ms */ 45 + #define ICM_APPROVE_TIMEOUT 10000 /* ms */ 45 46 #define ICM_MAX_LINK 4 46 47 #define ICM_MAX_DEPTH 6 47 48 ··· 56 55 * @vnd_cap: Vendor defined capability where PCIe2CIO mailbox resides 57 56 * (only set when @upstream_port is not %NULL) 58 57 * @safe_mode: ICM is in safe mode 58 + * @max_boot_acl: Maximum number of preboot ACL entries (%0 if not supported) 59 59 * @is_supported: Checks if we can support ICM on this controller 60 60 * @get_mode: Read and return the ICM firmware mode (optional) 61 61 * @get_route: Find a route string for given switch 62 + * @driver_ready: Send driver ready message to ICM 62 63 * @device_connected: Handle device connected ICM message 63 64 * @device_disconnected: Handle device disconnected ICM message 64 65 * @xdomain_connected - Handle XDomain connected ICM message ··· 70 67 struct mutex request_lock; 71 68 struct delayed_work rescan_work; 72 69 struct pci_dev *upstream_port; 70 + size_t max_boot_acl; 73 71 int vnd_cap; 74 72 bool safe_mode; 75 73 bool (*is_supported)(struct tb *tb); 76 74 int (*get_mode)(struct tb *tb); 77 75 int (*get_route)(struct tb *tb, u8 link, u8 depth, u64 *route); 76 + int (*driver_ready)(struct tb *tb, 77 + enum tb_security_level *security_level, 78 + size_t *nboot_acl); 78 79 void (*device_connected)(struct tb *tb, 79 80 const struct icm_pkg_header *hdr); 80 81 void (*device_disconnected)(struct tb *tb, ··· 116 109 static inline u64 get_route(u32 route_hi, u32 route_lo) 117 110 { 118 111 return (u64)route_hi << 32 | route_lo; 112 + } 113 + 114 + static inline u64 get_parent_route(u64 route) 115 + { 116 + int depth = tb_route_length(route); 117 + return depth ? 
route & ~(0xffULL << (depth - 1) * TB_ROUTE_SHIFT) : 0; 119 118 } 120 119 121 120 static bool icm_match(const struct tb_cfg_request *req, ··· 258 245 return ret; 259 246 } 260 247 248 + static int 249 + icm_fr_driver_ready(struct tb *tb, enum tb_security_level *security_level, 250 + size_t *nboot_acl) 251 + { 252 + struct icm_fr_pkg_driver_ready_response reply; 253 + struct icm_pkg_driver_ready request = { 254 + .hdr.code = ICM_DRIVER_READY, 255 + }; 256 + int ret; 257 + 258 + memset(&reply, 0, sizeof(reply)); 259 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 260 + 1, ICM_TIMEOUT); 261 + if (ret) 262 + return ret; 263 + 264 + if (security_level) 265 + *security_level = reply.security_level & ICM_FR_SLEVEL_MASK; 266 + 267 + return 0; 268 + } 269 + 261 270 static int icm_fr_approve_switch(struct tb *tb, struct tb_switch *sw) 262 271 { 263 272 struct icm_fr_pkg_approve_device request; ··· 295 260 memset(&reply, 0, sizeof(reply)); 296 261 /* Use larger timeout as establishing tunnels can take some time */ 297 262 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 298 - 1, 10000); 263 + 1, ICM_APPROVE_TIMEOUT); 299 264 if (ret) 300 265 return ret; 301 266 ··· 409 374 return 0; 410 375 } 411 376 377 + static void add_switch(struct tb_switch *parent_sw, u64 route, 378 + const uuid_t *uuid, u8 connection_id, u8 connection_key, 379 + u8 link, u8 depth, enum tb_security_level security_level, 380 + bool authorized, bool boot) 381 + { 382 + struct tb_switch *sw; 383 + 384 + sw = tb_switch_alloc(parent_sw->tb, &parent_sw->dev, route); 385 + if (!sw) 386 + return; 387 + 388 + sw->uuid = kmemdup(uuid, sizeof(*uuid), GFP_KERNEL); 389 + sw->connection_id = connection_id; 390 + sw->connection_key = connection_key; 391 + sw->link = link; 392 + sw->depth = depth; 393 + sw->authorized = authorized; 394 + sw->security_level = security_level; 395 + sw->boot = boot; 396 + 397 + /* Link the two switches now */ 398 + tb_port_at(route, 
parent_sw)->remote = tb_upstream_port(sw); 399 + tb_upstream_port(sw)->remote = tb_port_at(route, parent_sw); 400 + 401 + if (tb_switch_add(sw)) { 402 + tb_port_at(tb_route(sw), parent_sw)->remote = NULL; 403 + tb_switch_put(sw); 404 + return; 405 + } 406 + } 407 + 408 + static void update_switch(struct tb_switch *parent_sw, struct tb_switch *sw, 409 + u64 route, u8 connection_id, u8 connection_key, 410 + u8 link, u8 depth, bool boot) 411 + { 412 + /* Disconnect from parent */ 413 + tb_port_at(tb_route(sw), parent_sw)->remote = NULL; 414 + /* Re-connect via updated port*/ 415 + tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw); 416 + 417 + /* Update with the new addressing information */ 418 + sw->config.route_hi = upper_32_bits(route); 419 + sw->config.route_lo = lower_32_bits(route); 420 + sw->connection_id = connection_id; 421 + sw->connection_key = connection_key; 422 + sw->link = link; 423 + sw->depth = depth; 424 + sw->boot = boot; 425 + 426 + /* This switch still exists */ 427 + sw->is_unplugged = false; 428 + } 429 + 412 430 static void remove_switch(struct tb_switch *sw) 413 431 { 414 432 struct tb_switch *parent_sw; ··· 471 383 tb_switch_remove(sw); 472 384 } 473 385 386 + static void add_xdomain(struct tb_switch *sw, u64 route, 387 + const uuid_t *local_uuid, const uuid_t *remote_uuid, 388 + u8 link, u8 depth) 389 + { 390 + struct tb_xdomain *xd; 391 + 392 + xd = tb_xdomain_alloc(sw->tb, &sw->dev, route, local_uuid, remote_uuid); 393 + if (!xd) 394 + return; 395 + 396 + xd->link = link; 397 + xd->depth = depth; 398 + 399 + tb_port_at(route, sw)->xdomain = xd; 400 + 401 + tb_xdomain_add(xd); 402 + } 403 + 404 + static void update_xdomain(struct tb_xdomain *xd, u64 route, u8 link) 405 + { 406 + xd->link = link; 407 + xd->route = route; 408 + xd->is_unplugged = false; 409 + } 410 + 411 + static void remove_xdomain(struct tb_xdomain *xd) 412 + { 413 + struct tb_switch *sw; 414 + 415 + sw = tb_to_switch(xd->dev.parent); 416 + tb_port_at(xd->route, 
sw)->xdomain = NULL; 417 + tb_xdomain_remove(xd); 418 + } 419 + 474 420 static void 475 421 icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr) 476 422 { 477 423 const struct icm_fr_event_device_connected *pkg = 478 424 (const struct icm_fr_event_device_connected *)hdr; 425 + enum tb_security_level security_level; 479 426 struct tb_switch *sw, *parent_sw; 480 427 struct icm *icm = tb_priv(tb); 481 428 bool authorized = false; 429 + struct tb_xdomain *xd; 482 430 u8 link, depth; 431 + bool boot; 483 432 u64 route; 484 433 int ret; 485 434 ··· 524 399 depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >> 525 400 ICM_LINK_INFO_DEPTH_SHIFT; 526 401 authorized = pkg->link_info & ICM_LINK_INFO_APPROVED; 402 + security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >> 403 + ICM_FLAGS_SLEVEL_SHIFT; 404 + boot = pkg->link_info & ICM_LINK_INFO_BOOT; 405 + 406 + if (pkg->link_info & ICM_LINK_INFO_REJECTED) { 407 + tb_info(tb, "switch at %u.%u was rejected by ICM firmware because topology limit exceeded\n", 408 + link, depth); 409 + return; 410 + } 527 411 528 412 ret = icm->get_route(tb, link, depth, &route); 529 413 if (ret) { ··· 559 425 */ 560 426 if (sw->depth == depth && sw_phy_port == phy_port && 561 427 !!sw->authorized == authorized) { 562 - tb_port_at(tb_route(sw), parent_sw)->remote = NULL; 563 - tb_port_at(route, parent_sw)->remote = 564 - tb_upstream_port(sw); 565 - sw->config.route_hi = upper_32_bits(route); 566 - sw->config.route_lo = lower_32_bits(route); 567 - sw->connection_id = pkg->connection_id; 568 - sw->connection_key = pkg->connection_key; 569 - sw->link = link; 570 - sw->depth = depth; 571 - sw->is_unplugged = false; 428 + update_switch(parent_sw, sw, route, pkg->connection_id, 429 + pkg->connection_key, link, depth, boot); 572 430 tb_switch_put(sw); 573 431 return; 574 432 } ··· 593 467 tb_switch_put(sw); 594 468 } 595 469 470 + /* Remove existing XDomain connection if found */ 471 + xd = tb_xdomain_find_by_link_depth(tb, link, 
depth); 472 + if (xd) { 473 + remove_xdomain(xd); 474 + tb_xdomain_put(xd); 475 + } 476 + 596 477 parent_sw = tb_switch_find_by_link_depth(tb, link, depth - 1); 597 478 if (!parent_sw) { 598 479 tb_err(tb, "failed to find parent switch for %u.%u\n", ··· 607 474 return; 608 475 } 609 476 610 - sw = tb_switch_alloc(tb, &parent_sw->dev, route); 611 - if (!sw) { 612 - tb_switch_put(parent_sw); 613 - return; 614 - } 477 + add_switch(parent_sw, route, &pkg->ep_uuid, pkg->connection_id, 478 + pkg->connection_key, link, depth, security_level, 479 + authorized, boot); 615 480 616 - sw->uuid = kmemdup(&pkg->ep_uuid, sizeof(pkg->ep_uuid), GFP_KERNEL); 617 - sw->connection_id = pkg->connection_id; 618 - sw->connection_key = pkg->connection_key; 619 - sw->link = link; 620 - sw->depth = depth; 621 - sw->authorized = authorized; 622 - sw->security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >> 623 - ICM_FLAGS_SLEVEL_SHIFT; 624 - 625 - /* Link the two switches now */ 626 - tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw); 627 - tb_upstream_port(sw)->remote = tb_port_at(route, parent_sw); 628 - 629 - ret = tb_switch_add(sw); 630 - if (ret) { 631 - tb_port_at(tb_route(sw), parent_sw)->remote = NULL; 632 - tb_switch_put(sw); 633 - } 634 481 tb_switch_put(parent_sw); 635 482 } 636 483 ··· 640 527 641 528 remove_switch(sw); 642 529 tb_switch_put(sw); 643 - } 644 - 645 - static void remove_xdomain(struct tb_xdomain *xd) 646 - { 647 - struct tb_switch *sw; 648 - 649 - sw = tb_to_switch(xd->dev.parent); 650 - tb_port_at(xd->route, sw)->xdomain = NULL; 651 - tb_xdomain_remove(xd); 652 530 } 653 531 654 532 static void ··· 681 577 phy_port = phy_port_from_route(route, depth); 682 578 683 579 if (xd->depth == depth && xd_phy_port == phy_port) { 684 - xd->link = link; 685 - xd->route = route; 686 - xd->is_unplugged = false; 580 + update_xdomain(xd, route, link); 687 581 tb_xdomain_put(xd); 688 582 return; 689 583 } ··· 731 629 return; 732 630 } 733 631 734 - xd = 
tb_xdomain_alloc(sw->tb, &sw->dev, route, 735 - &pkg->local_uuid, &pkg->remote_uuid); 736 - if (!xd) { 737 - tb_switch_put(sw); 738 - return; 739 - } 740 - 741 - xd->link = link; 742 - xd->depth = depth; 743 - 744 - tb_port_at(route, sw)->xdomain = xd; 745 - 746 - tb_xdomain_add(xd); 632 + add_xdomain(sw, route, &pkg->local_uuid, &pkg->remote_uuid, link, 633 + depth); 747 634 tb_switch_put(sw); 748 635 } 749 636 ··· 749 658 * cannot find it here. 750 659 */ 751 660 xd = tb_xdomain_find_by_uuid(tb, &pkg->remote_uuid); 661 + if (xd) { 662 + remove_xdomain(xd); 663 + tb_xdomain_put(xd); 664 + } 665 + } 666 + 667 + static int 668 + icm_tr_driver_ready(struct tb *tb, enum tb_security_level *security_level, 669 + size_t *nboot_acl) 670 + { 671 + struct icm_tr_pkg_driver_ready_response reply; 672 + struct icm_pkg_driver_ready request = { 673 + .hdr.code = ICM_DRIVER_READY, 674 + }; 675 + int ret; 676 + 677 + memset(&reply, 0, sizeof(reply)); 678 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 679 + 1, 20000); 680 + if (ret) 681 + return ret; 682 + 683 + if (security_level) 684 + *security_level = reply.info & ICM_TR_INFO_SLEVEL_MASK; 685 + if (nboot_acl) 686 + *nboot_acl = (reply.info & ICM_TR_INFO_BOOT_ACL_MASK) >> 687 + ICM_TR_INFO_BOOT_ACL_SHIFT; 688 + return 0; 689 + } 690 + 691 + static int icm_tr_approve_switch(struct tb *tb, struct tb_switch *sw) 692 + { 693 + struct icm_tr_pkg_approve_device request; 694 + struct icm_tr_pkg_approve_device reply; 695 + int ret; 696 + 697 + memset(&request, 0, sizeof(request)); 698 + memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid)); 699 + request.hdr.code = ICM_APPROVE_DEVICE; 700 + request.route_lo = sw->config.route_lo; 701 + request.route_hi = sw->config.route_hi; 702 + request.connection_id = sw->connection_id; 703 + 704 + memset(&reply, 0, sizeof(reply)); 705 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 706 + 1, ICM_APPROVE_TIMEOUT); 707 + if (ret) 708 + return 
ret; 709 + 710 + if (reply.hdr.flags & ICM_FLAGS_ERROR) { 711 + tb_warn(tb, "PCIe tunnel creation failed\n"); 712 + return -EIO; 713 + } 714 + 715 + return 0; 716 + } 717 + 718 + static int icm_tr_add_switch_key(struct tb *tb, struct tb_switch *sw) 719 + { 720 + struct icm_tr_pkg_add_device_key_response reply; 721 + struct icm_tr_pkg_add_device_key request; 722 + int ret; 723 + 724 + memset(&request, 0, sizeof(request)); 725 + memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid)); 726 + request.hdr.code = ICM_ADD_DEVICE_KEY; 727 + request.route_lo = sw->config.route_lo; 728 + request.route_hi = sw->config.route_hi; 729 + request.connection_id = sw->connection_id; 730 + memcpy(request.key, sw->key, TB_SWITCH_KEY_SIZE); 731 + 732 + memset(&reply, 0, sizeof(reply)); 733 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 734 + 1, ICM_TIMEOUT); 735 + if (ret) 736 + return ret; 737 + 738 + if (reply.hdr.flags & ICM_FLAGS_ERROR) { 739 + tb_warn(tb, "Adding key to switch failed\n"); 740 + return -EIO; 741 + } 742 + 743 + return 0; 744 + } 745 + 746 + static int icm_tr_challenge_switch_key(struct tb *tb, struct tb_switch *sw, 747 + const u8 *challenge, u8 *response) 748 + { 749 + struct icm_tr_pkg_challenge_device_response reply; 750 + struct icm_tr_pkg_challenge_device request; 751 + int ret; 752 + 753 + memset(&request, 0, sizeof(request)); 754 + memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid)); 755 + request.hdr.code = ICM_CHALLENGE_DEVICE; 756 + request.route_lo = sw->config.route_lo; 757 + request.route_hi = sw->config.route_hi; 758 + request.connection_id = sw->connection_id; 759 + memcpy(request.challenge, challenge, TB_SWITCH_KEY_SIZE); 760 + 761 + memset(&reply, 0, sizeof(reply)); 762 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 763 + 1, ICM_TIMEOUT); 764 + if (ret) 765 + return ret; 766 + 767 + if (reply.hdr.flags & ICM_FLAGS_ERROR) 768 + return -EKEYREJECTED; 769 + if (reply.hdr.flags & 
ICM_FLAGS_NO_KEY) 770 + return -ENOKEY; 771 + 772 + memcpy(response, reply.response, TB_SWITCH_KEY_SIZE); 773 + 774 + return 0; 775 + } 776 + 777 + static int icm_tr_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd) 778 + { 779 + struct icm_tr_pkg_approve_xdomain_response reply; 780 + struct icm_tr_pkg_approve_xdomain request; 781 + int ret; 782 + 783 + memset(&request, 0, sizeof(request)); 784 + request.hdr.code = ICM_APPROVE_XDOMAIN; 785 + request.route_hi = upper_32_bits(xd->route); 786 + request.route_lo = lower_32_bits(xd->route); 787 + request.transmit_path = xd->transmit_path; 788 + request.transmit_ring = xd->transmit_ring; 789 + request.receive_path = xd->receive_path; 790 + request.receive_ring = xd->receive_ring; 791 + memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid)); 792 + 793 + memset(&reply, 0, sizeof(reply)); 794 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 795 + 1, ICM_TIMEOUT); 796 + if (ret) 797 + return ret; 798 + 799 + if (reply.hdr.flags & ICM_FLAGS_ERROR) 800 + return -EIO; 801 + 802 + return 0; 803 + } 804 + 805 + static int icm_tr_xdomain_tear_down(struct tb *tb, struct tb_xdomain *xd, 806 + int stage) 807 + { 808 + struct icm_tr_pkg_disconnect_xdomain_response reply; 809 + struct icm_tr_pkg_disconnect_xdomain request; 810 + int ret; 811 + 812 + memset(&request, 0, sizeof(request)); 813 + request.hdr.code = ICM_DISCONNECT_XDOMAIN; 814 + request.stage = stage; 815 + request.route_hi = upper_32_bits(xd->route); 816 + request.route_lo = lower_32_bits(xd->route); 817 + memcpy(&request.remote_uuid, xd->remote_uuid, sizeof(*xd->remote_uuid)); 818 + 819 + memset(&reply, 0, sizeof(reply)); 820 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 821 + 1, ICM_TIMEOUT); 822 + if (ret) 823 + return ret; 824 + 825 + if (reply.hdr.flags & ICM_FLAGS_ERROR) 826 + return -EIO; 827 + 828 + return 0; 829 + } 830 + 831 + static int icm_tr_disconnect_xdomain_paths(struct tb *tb, 
struct tb_xdomain *xd) 832 + { 833 + int ret; 834 + 835 + ret = icm_tr_xdomain_tear_down(tb, xd, 1); 836 + if (ret) 837 + return ret; 838 + 839 + usleep_range(10, 50); 840 + return icm_tr_xdomain_tear_down(tb, xd, 2); 841 + } 842 + 843 + static void 844 + icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr) 845 + { 846 + const struct icm_tr_event_device_connected *pkg = 847 + (const struct icm_tr_event_device_connected *)hdr; 848 + enum tb_security_level security_level; 849 + struct tb_switch *sw, *parent_sw; 850 + struct tb_xdomain *xd; 851 + bool authorized, boot; 852 + u64 route; 853 + 854 + /* 855 + * Currently we don't use the QoS information coming with the 856 + * device connected message so simply just ignore that extra 857 + * packet for now. 858 + */ 859 + if (pkg->hdr.packet_id) 860 + return; 861 + 862 + /* 863 + * After NVM upgrade adding root switch device fails because we 864 + * initiated reset. During that time ICM might still send device 865 + * connected message which we ignore here. 
866 + */ 867 + if (!tb->root_switch) 868 + return; 869 + 870 + route = get_route(pkg->route_hi, pkg->route_lo); 871 + authorized = pkg->link_info & ICM_LINK_INFO_APPROVED; 872 + security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >> 873 + ICM_FLAGS_SLEVEL_SHIFT; 874 + boot = pkg->link_info & ICM_LINK_INFO_BOOT; 875 + 876 + if (pkg->link_info & ICM_LINK_INFO_REJECTED) { 877 + tb_info(tb, "switch at %llx was rejected by ICM firmware because topology limit exceeded\n", 878 + route); 879 + return; 880 + } 881 + 882 + sw = tb_switch_find_by_uuid(tb, &pkg->ep_uuid); 883 + if (sw) { 884 + /* Update the switch if it is still in the same place */ 885 + if (tb_route(sw) == route && !!sw->authorized == authorized) { 886 + parent_sw = tb_to_switch(sw->dev.parent); 887 + update_switch(parent_sw, sw, route, pkg->connection_id, 888 + 0, 0, 0, boot); 889 + tb_switch_put(sw); 890 + return; 891 + } 892 + 893 + remove_switch(sw); 894 + tb_switch_put(sw); 895 + } 896 + 897 + /* Another switch with the same address */ 898 + sw = tb_switch_find_by_route(tb, route); 899 + if (sw) { 900 + remove_switch(sw); 901 + tb_switch_put(sw); 902 + } 903 + 904 + /* XDomain connection with the same address */ 905 + xd = tb_xdomain_find_by_route(tb, route); 906 + if (xd) { 907 + remove_xdomain(xd); 908 + tb_xdomain_put(xd); 909 + } 910 + 911 + parent_sw = tb_switch_find_by_route(tb, get_parent_route(route)); 912 + if (!parent_sw) { 913 + tb_err(tb, "failed to find parent switch for %llx\n", route); 914 + return; 915 + } 916 + 917 + add_switch(parent_sw, route, &pkg->ep_uuid, pkg->connection_id, 918 + 0, 0, 0, security_level, authorized, boot); 919 + 920 + tb_switch_put(parent_sw); 921 + } 922 + 923 + static void 924 + icm_tr_device_disconnected(struct tb *tb, const struct icm_pkg_header *hdr) 925 + { 926 + const struct icm_tr_event_device_disconnected *pkg = 927 + (const struct icm_tr_event_device_disconnected *)hdr; 928 + struct tb_switch *sw; 929 + u64 route; 930 + 931 + route = 
get_route(pkg->route_hi, pkg->route_lo); 932 + 933 + sw = tb_switch_find_by_route(tb, route); 934 + if (!sw) { 935 + tb_warn(tb, "no switch exists at %llx, ignoring\n", route); 936 + return; 937 + } 938 + 939 + remove_switch(sw); 940 + tb_switch_put(sw); 941 + } 942 + 943 + static void 944 + icm_tr_xdomain_connected(struct tb *tb, const struct icm_pkg_header *hdr) 945 + { 946 + const struct icm_tr_event_xdomain_connected *pkg = 947 + (const struct icm_tr_event_xdomain_connected *)hdr; 948 + struct tb_xdomain *xd; 949 + struct tb_switch *sw; 950 + u64 route; 951 + 952 + if (!tb->root_switch) 953 + return; 954 + 955 + route = get_route(pkg->local_route_hi, pkg->local_route_lo); 956 + 957 + xd = tb_xdomain_find_by_uuid(tb, &pkg->remote_uuid); 958 + if (xd) { 959 + if (xd->route == route) { 960 + update_xdomain(xd, route, 0); 961 + tb_xdomain_put(xd); 962 + return; 963 + } 964 + 965 + remove_xdomain(xd); 966 + tb_xdomain_put(xd); 967 + } 968 + 969 + /* An existing xdomain with the same address */ 970 + xd = tb_xdomain_find_by_route(tb, route); 971 + if (xd) { 972 + remove_xdomain(xd); 973 + tb_xdomain_put(xd); 974 + } 975 + 976 + /* 977 + * If the user disconnected a switch during suspend and 978 + * connected another host to the same port, remove the switch 979 + * first. 
980 + */ 981 + sw = get_switch_at_route(tb->root_switch, route); 982 + if (sw) 983 + remove_switch(sw); 984 + 985 + sw = tb_switch_find_by_route(tb, get_parent_route(route)); 986 + if (!sw) { 987 + tb_warn(tb, "no switch exists at %llx, ignoring\n", route); 988 + return; 989 + } 990 + 991 + add_xdomain(sw, route, &pkg->local_uuid, &pkg->remote_uuid, 0, 0); 992 + tb_switch_put(sw); 993 + } 994 + 995 + static void 996 + icm_tr_xdomain_disconnected(struct tb *tb, const struct icm_pkg_header *hdr) 997 + { 998 + const struct icm_tr_event_xdomain_disconnected *pkg = 999 + (const struct icm_tr_event_xdomain_disconnected *)hdr; 1000 + struct tb_xdomain *xd; 1001 + u64 route; 1002 + 1003 + route = get_route(pkg->route_hi, pkg->route_lo); 1004 + 1005 + xd = tb_xdomain_find_by_route(tb, route); 752 1006 if (xd) { 753 1007 remove_xdomain(xd); 754 1008 tb_xdomain_put(xd); ··· 1164 728 static int icm_ar_get_mode(struct tb *tb) 1165 729 { 1166 730 struct tb_nhi *nhi = tb->nhi; 1167 - int retries = 5; 731 + int retries = 60; 1168 732 u32 val; 1169 733 1170 734 do { 1171 735 val = ioread32(nhi->iobase + REG_FW_STS); 1172 736 if (val & REG_FW_STS_NVM_AUTH_DONE) 1173 737 break; 1174 - msleep(30); 738 + msleep(50); 1175 739 } while (--retries); 1176 740 1177 741 if (!retries) { ··· 1180 744 } 1181 745 1182 746 return nhi_mailbox_mode(nhi); 747 + } 748 + 749 + static int 750 + icm_ar_driver_ready(struct tb *tb, enum tb_security_level *security_level, 751 + size_t *nboot_acl) 752 + { 753 + struct icm_ar_pkg_driver_ready_response reply; 754 + struct icm_pkg_driver_ready request = { 755 + .hdr.code = ICM_DRIVER_READY, 756 + }; 757 + int ret; 758 + 759 + memset(&reply, 0, sizeof(reply)); 760 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 761 + 1, ICM_TIMEOUT); 762 + if (ret) 763 + return ret; 764 + 765 + if (security_level) 766 + *security_level = reply.info & ICM_AR_INFO_SLEVEL_MASK; 767 + if (nboot_acl && (reply.info & ICM_AR_INFO_BOOT_ACL_SUPPORTED)) 768 + 
*nboot_acl = (reply.info & ICM_AR_INFO_BOOT_ACL_MASK) >> 769 + ICM_AR_INFO_BOOT_ACL_SHIFT; 770 + return 0; 1183 771 } 1184 772 1185 773 static int icm_ar_get_route(struct tb *tb, u8 link, u8 depth, u64 *route) ··· 1225 765 return -EIO; 1226 766 1227 767 *route = get_route(reply.route_hi, reply.route_lo); 768 + return 0; 769 + } 770 + 771 + static int icm_ar_get_boot_acl(struct tb *tb, uuid_t *uuids, size_t nuuids) 772 + { 773 + struct icm_ar_pkg_preboot_acl_response reply; 774 + struct icm_ar_pkg_preboot_acl request = { 775 + .hdr = { .code = ICM_PREBOOT_ACL }, 776 + }; 777 + int ret, i; 778 + 779 + memset(&reply, 0, sizeof(reply)); 780 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 781 + 1, ICM_TIMEOUT); 782 + if (ret) 783 + return ret; 784 + 785 + if (reply.hdr.flags & ICM_FLAGS_ERROR) 786 + return -EIO; 787 + 788 + for (i = 0; i < nuuids; i++) { 789 + u32 *uuid = (u32 *)&uuids[i]; 790 + 791 + uuid[0] = reply.acl[i].uuid_lo; 792 + uuid[1] = reply.acl[i].uuid_hi; 793 + 794 + if (uuid[0] == 0xffffffff && uuid[1] == 0xffffffff) { 795 + /* Map empty entries to null UUID */ 796 + uuid[0] = 0; 797 + uuid[1] = 0; 798 + } else { 799 + /* Upper two DWs are always one's */ 800 + uuid[2] = 0xffffffff; 801 + uuid[3] = 0xffffffff; 802 + } 803 + } 804 + 805 + return ret; 806 + } 807 + 808 + static int icm_ar_set_boot_acl(struct tb *tb, const uuid_t *uuids, 809 + size_t nuuids) 810 + { 811 + struct icm_ar_pkg_preboot_acl_response reply; 812 + struct icm_ar_pkg_preboot_acl request = { 813 + .hdr = { 814 + .code = ICM_PREBOOT_ACL, 815 + .flags = ICM_FLAGS_WRITE, 816 + }, 817 + }; 818 + int ret, i; 819 + 820 + for (i = 0; i < nuuids; i++) { 821 + const u32 *uuid = (const u32 *)&uuids[i]; 822 + 823 + if (uuid_is_null(&uuids[i])) { 824 + /* 825 + * Map null UUID to the empty (all one) entries 826 + * for ICM. 
827 + */ 828 + request.acl[i].uuid_lo = 0xffffffff; 829 + request.acl[i].uuid_hi = 0xffffffff; 830 + } else { 831 + /* Two high DWs need to be set to all one */ 832 + if (uuid[2] != 0xffffffff || uuid[3] != 0xffffffff) 833 + return -EINVAL; 834 + 835 + request.acl[i].uuid_lo = uuid[0]; 836 + request.acl[i].uuid_hi = uuid[1]; 837 + } 838 + } 839 + 840 + memset(&reply, 0, sizeof(reply)); 841 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 842 + 1, ICM_TIMEOUT); 843 + if (ret) 844 + return ret; 845 + 846 + if (reply.hdr.flags & ICM_FLAGS_ERROR) 847 + return -EIO; 848 + 1228 849 return 0; 1229 850 } 1230 851 ··· 1355 814 } 1356 815 1357 816 static int 1358 - __icm_driver_ready(struct tb *tb, enum tb_security_level *security_level) 817 + __icm_driver_ready(struct tb *tb, enum tb_security_level *security_level, 818 + size_t *nboot_acl) 1359 819 { 1360 - struct icm_pkg_driver_ready_response reply; 1361 - struct icm_pkg_driver_ready request = { 1362 - .hdr.code = ICM_DRIVER_READY, 1363 - }; 1364 - unsigned int retries = 10; 820 + struct icm *icm = tb_priv(tb); 821 + unsigned int retries = 50; 1365 822 int ret; 1366 823 1367 - memset(&reply, 0, sizeof(reply)); 1368 - ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1369 - 1, ICM_TIMEOUT); 1370 - if (ret) 824 + ret = icm->driver_ready(tb, security_level, nboot_acl); 825 + if (ret) { 826 + tb_err(tb, "failed to send driver ready to ICM\n"); 1371 827 return ret; 1372 - 1373 - if (security_level) 1374 - *security_level = reply.security_level & 0xf; 828 + } 1375 829 1376 830 /* 1377 831 * Hold on here until the switch config space is accessible so ··· 1384 848 msleep(50); 1385 849 } while (--retries); 1386 850 851 + tb_err(tb, "failed to read root switch config space, giving up\n"); 1387 852 return -ETIMEDOUT; 1388 853 } 1389 854 ··· 1451 914 { 1452 915 struct icm *icm = tb_priv(tb); 1453 916 u32 val; 917 + 918 + if (!icm->upstream_port) 919 + return -ENODEV; 1454 920 1455 921 /* 
Put ARC to wait for CIO reset event to happen */ 1456 922 val = ioread32(nhi->iobase + REG_FW_STS); ··· 1594 1054 break; 1595 1055 1596 1056 default: 1057 + if (ret < 0) 1058 + return ret; 1059 + 1597 1060 tb_err(tb, "ICM firmware is in wrong mode: %u\n", ret); 1598 1061 return -ENODEV; 1599 1062 } ··· 1632 1089 return 0; 1633 1090 } 1634 1091 1635 - return __icm_driver_ready(tb, &tb->security_level); 1092 + ret = __icm_driver_ready(tb, &tb->security_level, &tb->nboot_acl); 1093 + if (ret) 1094 + return ret; 1095 + 1096 + /* 1097 + * Make sure the number of supported preboot ACL matches what we 1098 + * expect or disable the whole feature. 1099 + */ 1100 + if (tb->nboot_acl > icm->max_boot_acl) 1101 + tb->nboot_acl = 0; 1102 + 1103 + return 0; 1636 1104 } 1637 1105 1638 1106 static int icm_suspend(struct tb *tb) ··· 1739 1185 * Now all existing children should be resumed, start events 1740 1186 * from ICM to get updated status. 1741 1187 */ 1742 - __icm_driver_ready(tb, NULL); 1188 + __icm_driver_ready(tb, NULL, NULL); 1743 1189 1744 1190 /* 1745 1191 * We do not get notifications of devices that have been ··· 1792 1238 return nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DISCONNECT_PCIE_PATHS, 0); 1793 1239 } 1794 1240 1795 - /* Falcon Ridge and Alpine Ridge */ 1241 + /* Falcon Ridge */ 1796 1242 static const struct tb_cm_ops icm_fr_ops = { 1797 1243 .driver_ready = icm_driver_ready, 1798 1244 .start = icm_start, ··· 1806 1252 .disconnect_pcie_paths = icm_disconnect_pcie_paths, 1807 1253 .approve_xdomain_paths = icm_fr_approve_xdomain_paths, 1808 1254 .disconnect_xdomain_paths = icm_fr_disconnect_xdomain_paths, 1255 + }; 1256 + 1257 + /* Alpine Ridge */ 1258 + static const struct tb_cm_ops icm_ar_ops = { 1259 + .driver_ready = icm_driver_ready, 1260 + .start = icm_start, 1261 + .stop = icm_stop, 1262 + .suspend = icm_suspend, 1263 + .complete = icm_complete, 1264 + .handle_event = icm_handle_event, 1265 + .get_boot_acl = icm_ar_get_boot_acl, 1266 + .set_boot_acl = 
icm_ar_set_boot_acl, 1267 + .approve_switch = icm_fr_approve_switch, 1268 + .add_switch_key = icm_fr_add_switch_key, 1269 + .challenge_switch_key = icm_fr_challenge_switch_key, 1270 + .disconnect_pcie_paths = icm_disconnect_pcie_paths, 1271 + .approve_xdomain_paths = icm_fr_approve_xdomain_paths, 1272 + .disconnect_xdomain_paths = icm_fr_disconnect_xdomain_paths, 1273 + }; 1274 + 1275 + /* Titan Ridge */ 1276 + static const struct tb_cm_ops icm_tr_ops = { 1277 + .driver_ready = icm_driver_ready, 1278 + .start = icm_start, 1279 + .stop = icm_stop, 1280 + .suspend = icm_suspend, 1281 + .complete = icm_complete, 1282 + .handle_event = icm_handle_event, 1283 + .get_boot_acl = icm_ar_get_boot_acl, 1284 + .set_boot_acl = icm_ar_set_boot_acl, 1285 + .approve_switch = icm_tr_approve_switch, 1286 + .add_switch_key = icm_tr_add_switch_key, 1287 + .challenge_switch_key = icm_tr_challenge_switch_key, 1288 + .disconnect_pcie_paths = icm_disconnect_pcie_paths, 1289 + .approve_xdomain_paths = icm_tr_approve_xdomain_paths, 1290 + .disconnect_xdomain_paths = icm_tr_disconnect_xdomain_paths, 1809 1291 }; 1810 1292 1811 1293 struct tb *icm_probe(struct tb_nhi *nhi) ··· 1862 1272 case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: 1863 1273 icm->is_supported = icm_fr_is_supported; 1864 1274 icm->get_route = icm_fr_get_route; 1275 + icm->driver_ready = icm_fr_driver_ready; 1865 1276 icm->device_connected = icm_fr_device_connected; 1866 1277 icm->device_disconnected = icm_fr_device_disconnected; 1867 1278 icm->xdomain_connected = icm_fr_xdomain_connected; ··· 1875 1284 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI: 1876 1285 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI: 1877 1286 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI: 1287 + icm->max_boot_acl = ICM_AR_PREBOOT_ACL_ENTRIES; 1878 1288 icm->is_supported = icm_ar_is_supported; 1879 1289 icm->get_mode = icm_ar_get_mode; 1880 1290 icm->get_route = icm_ar_get_route; 1291 + icm->driver_ready = icm_ar_driver_ready; 1881 1292 
icm->device_connected = icm_fr_device_connected; 1882 1293 icm->device_disconnected = icm_fr_device_disconnected; 1883 1294 icm->xdomain_connected = icm_fr_xdomain_connected; 1884 1295 icm->xdomain_disconnected = icm_fr_xdomain_disconnected; 1885 - tb->cm_ops = &icm_fr_ops; 1296 + tb->cm_ops = &icm_ar_ops; 1297 + break; 1298 + 1299 + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI: 1300 + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI: 1301 + icm->max_boot_acl = ICM_AR_PREBOOT_ACL_ENTRIES; 1302 + icm->is_supported = icm_ar_is_supported; 1303 + icm->get_mode = icm_ar_get_mode; 1304 + icm->driver_ready = icm_tr_driver_ready; 1305 + icm->device_connected = icm_tr_device_connected; 1306 + icm->device_disconnected = icm_tr_device_disconnected; 1307 + icm->xdomain_connected = icm_tr_xdomain_connected; 1308 + icm->xdomain_disconnected = icm_tr_xdomain_disconnected; 1309 + tb->cm_ops = &icm_tr_ops; 1886 1310 break; 1887 1311 } 1888 1312
+4 -1
drivers/thunderbolt/nhi.c
··· 1036 1036 */ 1037 1037 tb_domain_put(tb); 1038 1038 nhi_shutdown(nhi); 1039 - return -EIO; 1039 + return res; 1040 1040 } 1041 1041 pci_set_drvdata(pdev, tb); 1042 1042 ··· 1064 1064 * we just disable hotplug, the 1065 1065 * pci-tunnels stay alive. 1066 1066 */ 1067 + .thaw_noirq = nhi_resume_noirq, 1067 1068 .restore_noirq = nhi_resume_noirq, 1068 1069 .suspend = nhi_suspend, 1069 1070 .freeze = nhi_suspend, ··· 1111 1110 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI) }, 1112 1111 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI) }, 1113 1112 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI) }, 1113 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI) }, 1114 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI) }, 1114 1115 1115 1116 { 0,} 1116 1117 };
+5
drivers/thunderbolt/nhi.h
··· 45 45 #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_USBONLY_NHI 0x15dc 46 46 #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_USBONLY_NHI 0x15dd 47 47 #define PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI 0x15de 48 + #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_BRIDGE 0x15e7 49 + #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI 0x15e8 50 + #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE 0x15ea 51 + #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI 0x15eb 52 + #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE 0x15ef 48 53 49 54 #endif
+60 -1
drivers/thunderbolt/switch.c
··· 716 716 if (sw->authorized) 717 717 goto unlock; 718 718 719 + /* 720 + * Make sure there is no PCIe rescan ongoing when a new PCIe 721 + * tunnel is created. Otherwise the PCIe rescan code might find 722 + * the new tunnel too early. 723 + */ 724 + pci_lock_rescan_remove(); 725 + 719 726 switch (val) { 720 727 /* Approve switch */ 721 728 case 1: ··· 741 734 default: 742 735 break; 743 736 } 737 + 738 + pci_unlock_rescan_remove(); 744 739 745 740 if (!ret) { 746 741 sw->authorized = val; ··· 774 765 return ret ? ret : count; 775 766 } 776 767 static DEVICE_ATTR_RW(authorized); 768 + 769 + static ssize_t boot_show(struct device *dev, struct device_attribute *attr, 770 + char *buf) 771 + { 772 + struct tb_switch *sw = tb_to_switch(dev); 773 + 774 + return sprintf(buf, "%u\n", sw->boot); 775 + } 776 + static DEVICE_ATTR_RO(boot); 777 777 778 778 static ssize_t device_show(struct device *dev, struct device_attribute *attr, 779 779 char *buf) ··· 960 942 961 943 static struct attribute *switch_attrs[] = { 962 944 &dev_attr_authorized.attr, 945 + &dev_attr_boot.attr, 963 946 &dev_attr_device.attr, 964 947 &dev_attr_device_name.attr, 965 948 &dev_attr_key.attr, ··· 987 968 } else if (attr == &dev_attr_nvm_authenticate.attr || 988 969 attr == &dev_attr_nvm_version.attr) { 989 970 if (sw->dma_port) 971 + return attr->mode; 972 + return 0; 973 + } else if (attr == &dev_attr_boot.attr) { 974 + if (tb_route(sw)) 990 975 return attr->mode; 991 976 return 0; 992 977 } ··· 1051 1028 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_BRIDGE: 1052 1029 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE: 1053 1030 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE: 1031 + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_BRIDGE: 1032 + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE: 1033 + case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE: 1054 1034 return 3; 1055 1035 1056 1036 default: ··· 1496 1470 u8 link; 1497 1471 u8 depth; 1498 1472 const uuid_t *uuid; 1473 + u64 route; 1499 1474 }; 1500 1475 
1501 1476 static int tb_switch_match(struct device *dev, void *data) ··· 1511 1484 1512 1485 if (lookup->uuid) 1513 1486 return !memcmp(sw->uuid, lookup->uuid, sizeof(*lookup->uuid)); 1487 + 1488 + if (lookup->route) { 1489 + return sw->config.route_lo == lower_32_bits(lookup->route) && 1490 + sw->config.route_hi == upper_32_bits(lookup->route); 1491 + } 1514 1492 1515 1493 /* Root switch is matched only by depth */ 1516 1494 if (!lookup->depth) ··· 1551 1519 } 1552 1520 1553 1521 /** 1554 - * tb_switch_find_by_link_depth() - Find switch by UUID 1522 + * tb_switch_find_by_uuid() - Find switch by UUID 1555 1523 * @tb: Domain the switch belongs 1556 1524 * @uuid: UUID to look for 1557 1525 * ··· 1566 1534 memset(&lookup, 0, sizeof(lookup)); 1567 1535 lookup.tb = tb; 1568 1536 lookup.uuid = uuid; 1537 + 1538 + dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match); 1539 + if (dev) 1540 + return tb_to_switch(dev); 1541 + 1542 + return NULL; 1543 + } 1544 + 1545 + /** 1546 + * tb_switch_find_by_route() - Find switch by route string 1547 + * @tb: Domain the switch belongs 1548 + * @route: Route string to look for 1549 + * 1550 + * Returned switch has reference count increased so the caller needs to 1551 + * call tb_switch_put() when done with the switch. 1552 + */ 1553 + struct tb_switch *tb_switch_find_by_route(struct tb *tb, u64 route) 1554 + { 1555 + struct tb_sw_lookup lookup; 1556 + struct device *dev; 1557 + 1558 + if (!route) 1559 + return tb_switch_get(tb->root_switch); 1560 + 1561 + memset(&lookup, 0, sizeof(lookup)); 1562 + lookup.tb = tb; 1563 + lookup.route = route; 1569 1564 1570 1565 dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match); 1571 1566 if (dev)
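Editor's note: the `tb_switch_match()` hunk above compares a 64-bit route string against two 32-bit config-space registers. A minimal userspace sketch of that half-by-half comparison (the `lower_32`/`upper_32` helpers stand in for the kernel's `lower_32_bits()`/`upper_32_bits()`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel's lower_32_bits()/upper_32_bits(). */
static uint32_t lower_32(uint64_t v) { return (uint32_t)v; }
static uint32_t upper_32(uint64_t v) { return (uint32_t)(v >> 32); }

/* Config space stores the route string as two 32-bit words, so a 64-bit
 * lookup key must be compared half by half, as tb_switch_match() does. */
struct sw_config {
    uint32_t route_lo;
    uint32_t route_hi;
};

static bool route_matches(const struct sw_config *cfg, uint64_t route)
{
    return cfg->route_lo == lower_32(route) &&
           cfg->route_hi == upper_32(route);
}
```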
+14
drivers/thunderbolt/tb.h
··· 66 66 * @nvm: Pointer to the NVM if the switch has one (%NULL otherwise) 67 67 * @no_nvm_upgrade: Prevent NVM upgrade of this switch 68 68 * @safe_mode: The switch is in safe-mode 69 + * @boot: Whether the switch was already authorized on boot or not 69 70 * @authorized: Whether the switch is authorized by user or policy 70 71 * @work: Work used to automatically authorize a switch 71 72 * @security_level: Switch supported security level ··· 100 99 struct tb_switch_nvm *nvm; 101 100 bool no_nvm_upgrade; 102 101 bool safe_mode; 102 + bool boot; 103 103 unsigned int authorized; 104 104 struct work_struct work; 105 105 enum tb_security_level security_level; ··· 200 198 * @suspend: Connection manager specific suspend 201 199 * @complete: Connection manager specific complete 202 200 * @handle_event: Handle thunderbolt event 201 + * @get_boot_acl: Get boot ACL list 202 + * @set_boot_acl: Set boot ACL list 203 203 * @approve_switch: Approve switch 204 204 * @add_switch_key: Add key to switch 205 205 * @challenge_switch_key: Challenge switch using key ··· 219 215 void (*complete)(struct tb *tb); 220 216 void (*handle_event)(struct tb *tb, enum tb_cfg_pkg_type, 221 217 const void *buf, size_t size); 218 + int (*get_boot_acl)(struct tb *tb, uuid_t *uuids, size_t nuuids); 219 + int (*set_boot_acl)(struct tb *tb, const uuid_t *uuids, size_t nuuids); 222 220 int (*approve_switch)(struct tb *tb, struct tb_switch *sw); 223 221 int (*add_switch_key)(struct tb *tb, struct tb_switch *sw); 224 222 int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw, ··· 392 386 struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link, 393 387 u8 depth); 394 388 struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_t *uuid); 389 + struct tb_switch *tb_switch_find_by_route(struct tb *tb, u64 route); 390 + 391 + static inline struct tb_switch *tb_switch_get(struct tb_switch *sw) 392 + { 393 + if (sw) 394 + get_device(&sw->dev); 395 + return sw; 396 + } 395 397 
396 398 static inline void tb_switch_put(struct tb_switch *sw) 397 399 {
+179 -1
drivers/thunderbolt/tb_msgs.h
··· 102 102 ICM_ADD_DEVICE_KEY = 0x6, 103 103 ICM_GET_ROUTE = 0xa, 104 104 ICM_APPROVE_XDOMAIN = 0x10, 105 + ICM_DISCONNECT_XDOMAIN = 0x11, 106 + ICM_PREBOOT_ACL = 0x18, 105 107 }; 106 108 107 109 enum icm_event_code { ··· 124 122 #define ICM_FLAGS_NO_KEY BIT(1) 125 123 #define ICM_FLAGS_SLEVEL_SHIFT 3 126 124 #define ICM_FLAGS_SLEVEL_MASK GENMASK(4, 3) 125 + #define ICM_FLAGS_WRITE BIT(7) 127 126 128 127 struct icm_pkg_driver_ready { 129 128 struct icm_pkg_header hdr; 130 129 }; 131 130 132 - struct icm_pkg_driver_ready_response { 131 + /* Falcon Ridge only messages */ 132 + 133 + struct icm_fr_pkg_driver_ready_response { 133 134 struct icm_pkg_header hdr; 134 135 u8 romver; 135 136 u8 ramver; 136 137 u16 security_level; 137 138 }; 139 + 140 + #define ICM_FR_SLEVEL_MASK 0xf 138 141 139 142 /* Falcon Ridge & Alpine Ridge common messages */ 140 143 ··· 183 176 #define ICM_LINK_INFO_DEPTH_SHIFT 4 184 177 #define ICM_LINK_INFO_DEPTH_MASK GENMASK(7, 4) 185 178 #define ICM_LINK_INFO_APPROVED BIT(8) 179 + #define ICM_LINK_INFO_REJECTED BIT(9) 180 + #define ICM_LINK_INFO_BOOT BIT(10) 186 181 187 182 struct icm_fr_pkg_approve_device { 188 183 struct icm_pkg_header hdr; ··· 279 270 280 271 /* Alpine Ridge only messages */ 281 272 273 + struct icm_ar_pkg_driver_ready_response { 274 + struct icm_pkg_header hdr; 275 + u8 romver; 276 + u8 ramver; 277 + u16 info; 278 + }; 279 + 280 + #define ICM_AR_INFO_SLEVEL_MASK GENMASK(3, 0) 281 + #define ICM_AR_INFO_BOOT_ACL_SHIFT 7 282 + #define ICM_AR_INFO_BOOT_ACL_MASK GENMASK(11, 7) 283 + #define ICM_AR_INFO_BOOT_ACL_SUPPORTED BIT(13) 284 + 282 285 struct icm_ar_pkg_get_route { 283 286 struct icm_pkg_header hdr; 284 287 u16 reserved; ··· 303 282 u16 link_info; 304 283 u32 route_hi; 305 284 u32 route_lo; 285 + }; 286 + 287 + struct icm_ar_boot_acl_entry { 288 + u32 uuid_lo; 289 + u32 uuid_hi; 290 + }; 291 + 292 + #define ICM_AR_PREBOOT_ACL_ENTRIES 16 293 + 294 + struct icm_ar_pkg_preboot_acl { 295 + struct icm_pkg_header hdr; 296 + 
struct icm_ar_boot_acl_entry acl[ICM_AR_PREBOOT_ACL_ENTRIES]; 297 + }; 298 + 299 + struct icm_ar_pkg_preboot_acl_response { 300 + struct icm_pkg_header hdr; 301 + struct icm_ar_boot_acl_entry acl[ICM_AR_PREBOOT_ACL_ENTRIES]; 302 + }; 303 + 304 + /* Titan Ridge messages */ 305 + 306 + struct icm_tr_pkg_driver_ready_response { 307 + struct icm_pkg_header hdr; 308 + u16 reserved1; 309 + u16 info; 310 + u32 nvm_version; 311 + u16 device_id; 312 + u16 reserved2; 313 + }; 314 + 315 + #define ICM_TR_INFO_SLEVEL_MASK GENMASK(2, 0) 316 + #define ICM_TR_INFO_BOOT_ACL_SHIFT 7 317 + #define ICM_TR_INFO_BOOT_ACL_MASK GENMASK(12, 7) 318 + 319 + struct icm_tr_event_device_connected { 320 + struct icm_pkg_header hdr; 321 + uuid_t ep_uuid; 322 + u32 route_hi; 323 + u32 route_lo; 324 + u8 connection_id; 325 + u8 reserved; 326 + u16 link_info; 327 + u32 ep_name[55]; 328 + }; 329 + 330 + struct icm_tr_event_device_disconnected { 331 + struct icm_pkg_header hdr; 332 + u32 route_hi; 333 + u32 route_lo; 334 + }; 335 + 336 + struct icm_tr_event_xdomain_connected { 337 + struct icm_pkg_header hdr; 338 + u16 reserved; 339 + u16 link_info; 340 + uuid_t remote_uuid; 341 + uuid_t local_uuid; 342 + u32 local_route_hi; 343 + u32 local_route_lo; 344 + u32 remote_route_hi; 345 + u32 remote_route_lo; 346 + }; 347 + 348 + struct icm_tr_event_xdomain_disconnected { 349 + struct icm_pkg_header hdr; 350 + u32 route_hi; 351 + u32 route_lo; 352 + uuid_t remote_uuid; 353 + }; 354 + 355 + struct icm_tr_pkg_approve_device { 356 + struct icm_pkg_header hdr; 357 + uuid_t ep_uuid; 358 + u32 route_hi; 359 + u32 route_lo; 360 + u8 connection_id; 361 + u8 reserved1[3]; 362 + }; 363 + 364 + struct icm_tr_pkg_add_device_key { 365 + struct icm_pkg_header hdr; 366 + uuid_t ep_uuid; 367 + u32 route_hi; 368 + u32 route_lo; 369 + u8 connection_id; 370 + u8 reserved[3]; 371 + u32 key[8]; 372 + }; 373 + 374 + struct icm_tr_pkg_challenge_device { 375 + struct icm_pkg_header hdr; 376 + uuid_t ep_uuid; 377 + u32 route_hi; 
378 + u32 route_lo; 379 + u8 connection_id; 380 + u8 reserved[3]; 381 + u32 challenge[8]; 382 + }; 383 + 384 + struct icm_tr_pkg_approve_xdomain { 385 + struct icm_pkg_header hdr; 386 + u32 route_hi; 387 + u32 route_lo; 388 + uuid_t remote_uuid; 389 + u16 transmit_path; 390 + u16 transmit_ring; 391 + u16 receive_path; 392 + u16 receive_ring; 393 + }; 394 + 395 + struct icm_tr_pkg_disconnect_xdomain { 396 + struct icm_pkg_header hdr; 397 + u8 stage; 398 + u8 reserved[3]; 399 + u32 route_hi; 400 + u32 route_lo; 401 + uuid_t remote_uuid; 402 + }; 403 + 404 + struct icm_tr_pkg_challenge_device_response { 405 + struct icm_pkg_header hdr; 406 + uuid_t ep_uuid; 407 + u32 route_hi; 408 + u32 route_lo; 409 + u8 connection_id; 410 + u8 reserved[3]; 411 + u32 challenge[8]; 412 + u32 response[8]; 413 + }; 414 + 415 + struct icm_tr_pkg_add_device_key_response { 416 + struct icm_pkg_header hdr; 417 + uuid_t ep_uuid; 418 + u32 route_hi; 419 + u32 route_lo; 420 + u8 connection_id; 421 + u8 reserved[3]; 422 + }; 423 + 424 + struct icm_tr_pkg_approve_xdomain_response { 425 + struct icm_pkg_header hdr; 426 + u32 route_hi; 427 + u32 route_lo; 428 + uuid_t remote_uuid; 429 + u16 transmit_path; 430 + u16 transmit_ring; 431 + u16 receive_path; 432 + u16 receive_ring; 433 + }; 434 + 435 + struct icm_tr_pkg_disconnect_xdomain_response { 436 + struct icm_pkg_header hdr; 437 + u8 stage; 438 + u8 reserved[3]; 439 + u32 route_hi; 440 + u32 route_lo; 441 + uuid_t remote_uuid; 306 442 }; 307 443 308 444 /* XDomain messages */
+36 -13
drivers/thunderbolt/xdomain.c
··· 1255 1255 const uuid_t *uuid; 1256 1256 u8 link; 1257 1257 u8 depth; 1258 + u64 route; 1258 1259 }; 1259 1260 1260 1261 static struct tb_xdomain *switch_find_xdomain(struct tb_switch *sw, ··· 1276 1275 if (lookup->uuid) { 1277 1276 if (uuid_equal(xd->remote_uuid, lookup->uuid)) 1278 1277 return xd; 1279 - } else if (lookup->link == xd->link && 1278 + } else if (lookup->link && 1279 + lookup->link == xd->link && 1280 1280 lookup->depth == xd->depth) { 1281 + return xd; 1282 + } else if (lookup->route && 1283 + lookup->route == xd->route) { 1281 1284 return xd; 1282 1285 } 1283 1286 } else if (port->remote) { ··· 1318 1313 lookup.uuid = uuid; 1319 1314 1320 1315 xd = switch_find_xdomain(tb->root_switch, &lookup); 1321 - if (xd) { 1322 - get_device(&xd->dev); 1323 - return xd; 1324 - } 1325 - 1326 - return NULL; 1316 + return tb_xdomain_get(xd); 1327 1317 } 1328 1318 EXPORT_SYMBOL_GPL(tb_xdomain_find_by_uuid); 1329 1319 ··· 1349 1349 lookup.depth = depth; 1350 1350 1351 1351 xd = switch_find_xdomain(tb->root_switch, &lookup); 1352 - if (xd) { 1353 - get_device(&xd->dev); 1354 - return xd; 1355 - } 1356 - 1357 - return NULL; 1352 + return tb_xdomain_get(xd); 1358 1353 } 1354 + 1355 + /** 1356 + * tb_xdomain_find_by_route() - Find an XDomain by route string 1357 + * @tb: Domain where the XDomain belongs to 1358 + * @route: XDomain route string 1359 + * 1360 + * Finds XDomain by walking through the Thunderbolt topology below @tb. 1361 + * The returned XDomain will have its reference count increased so the 1362 + * caller needs to call tb_xdomain_put() when it is done with the 1363 + * object. 1364 + * 1365 + * This will find all XDomains including the ones that are not yet added 1366 + * to the bus (handshake is still in progress). 1367 + * 1368 + * The caller needs to hold @tb->lock. 
1369 + */ 1370 + struct tb_xdomain *tb_xdomain_find_by_route(struct tb *tb, u64 route) 1371 + { 1372 + struct tb_xdomain_lookup lookup; 1373 + struct tb_xdomain *xd; 1374 + 1375 + memset(&lookup, 0, sizeof(lookup)); 1376 + lookup.route = route; 1377 + 1378 + xd = switch_find_xdomain(tb->root_switch, &lookup); 1379 + return tb_xdomain_get(xd); 1380 + } 1381 + EXPORT_SYMBOL_GPL(tb_xdomain_find_by_route); 1359 1382 1360 1383 bool tb_xdomain_handle_request(struct tb *tb, enum tb_cfg_pkg_type type, 1361 1384 const void *buf, size_t size)
+89
drivers/uio/uio_hv_generic.c
··· 121 121 uio_event_notify(&pdata->info); 122 122 } 123 123 124 + /* 125 + * Handle fault when looking for sub channel ring buffer 126 + * Subchannel ring buffer is same as resource 0 which is main ring buffer 127 + * This is derived from uio_vma_fault 128 + */ 129 + static int hv_uio_vma_fault(struct vm_fault *vmf) 130 + { 131 + struct vm_area_struct *vma = vmf->vma; 132 + void *ring_buffer = vma->vm_private_data; 133 + struct page *page; 134 + void *addr; 135 + 136 + addr = ring_buffer + (vmf->pgoff << PAGE_SHIFT); 137 + page = virt_to_page(addr); 138 + get_page(page); 139 + vmf->page = page; 140 + return 0; 141 + } 142 + 143 + static const struct vm_operations_struct hv_uio_vm_ops = { 144 + .fault = hv_uio_vma_fault, 145 + }; 146 + 147 + /* Sysfs API to allow mmap of the ring buffers */ 148 + static int hv_uio_ring_mmap(struct file *filp, struct kobject *kobj, 149 + struct bin_attribute *attr, 150 + struct vm_area_struct *vma) 151 + { 152 + struct vmbus_channel *channel 153 + = container_of(kobj, struct vmbus_channel, kobj); 154 + unsigned long requested_pages, actual_pages; 155 + 156 + if (vma->vm_end < vma->vm_start) 157 + return -EINVAL; 158 + 159 + /* only allow 0 for now */ 160 + if (vma->vm_pgoff > 0) 161 + return -EINVAL; 162 + 163 + requested_pages = vma_pages(vma); 164 + actual_pages = 2 * HV_RING_SIZE; 165 + if (requested_pages > actual_pages) 166 + return -EINVAL; 167 + 168 + vma->vm_private_data = channel->ringbuffer_pages; 169 + vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP; 170 + vma->vm_ops = &hv_uio_vm_ops; 171 + return 0; 172 + } 173 + 174 + static struct bin_attribute ring_buffer_bin_attr __ro_after_init = { 175 + .attr = { 176 + .name = "ring", 177 + .mode = 0600, 178 + /* size is set at init time */ 179 + }, 180 + .mmap = hv_uio_ring_mmap, 181 + }; 182 + 183 + /* Callback from VMBUS subystem when new channel created. 
*/ 184 + static void 185 + hv_uio_new_channel(struct vmbus_channel *new_sc) 186 + { 187 + struct hv_device *hv_dev = new_sc->primary_channel->device_obj; 188 + struct device *device = &hv_dev->device; 189 + struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev); 190 + const size_t ring_bytes = HV_RING_SIZE * PAGE_SIZE; 191 + int ret; 192 + 193 + /* Create host communication ring */ 194 + ret = vmbus_open(new_sc, ring_bytes, ring_bytes, NULL, 0, 195 + hv_uio_channel_cb, pdata); 196 + if (ret) { 197 + dev_err(device, "vmbus_open subchannel failed: %d\n", ret); 198 + return; 199 + } 200 + 201 + /* Disable interrupts on sub channel */ 202 + new_sc->inbound.ring_buffer->interrupt_mask = 1; 203 + set_channel_read_mode(new_sc, HV_CALL_ISR); 204 + 205 + ret = sysfs_create_bin_file(&new_sc->kobj, &ring_buffer_bin_attr); 206 + if (ret) { 207 + dev_err(device, "sysfs create ring bin file failed; %d\n", ret); 208 + vmbus_close(new_sc); 209 + } 210 + } 211 + 124 212 static void 125 213 hv_uio_cleanup(struct hv_device *dev, struct hv_uio_private_data *pdata) 126 214 { ··· 324 236 } 325 237 326 238 vmbus_set_chn_rescind_callback(dev->channel, hv_uio_rescind); 239 + vmbus_set_sc_create_callback(dev->channel, hv_uio_new_channel); 327 240 328 241 hv_set_drvdata(dev, pdata); 329 242
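Editor's note: `hv_uio_vma_fault()` above resolves a faulting page by turning the VMA page offset into a byte offset within the ring buffer. A trivial sketch of that arithmetic (4 KiB pages assumed):

```c
#include <stdint.h>

#define PAGE_SHIFT 12  /* 4 KiB pages, as on x86 */

/* Page index within the mapping -> address inside the ring buffer,
 * mirroring addr = ring_buffer + (vmf->pgoff << PAGE_SHIFT). */
static uintptr_t fault_addr(uintptr_t ring_buffer, unsigned long pgoff)
{
    return ring_buffer + ((uintptr_t)pgoff << PAGE_SHIFT);
}
```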
+1
drivers/w1/w1.c
··· 706 706 dev_err(&sl->dev, 707 707 "Device registration [%s] failed. err=%d\n", 708 708 dev_name(&sl->dev), err); 709 + put_device(&sl->dev); 709 710 return err; 710 711 } 711 712 w1_family_notify(BUS_NOTIFY_ADD_DEVICE, sl);
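Editor's note: the w1 fix above follows the driver-model rule that a failed `device_register()` still leaves the caller's initial reference outstanding, so it must be dropped with `put_device()`, not `kfree()`. A hedged userspace sketch of that refcount discipline (the `obj_*` names are hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical refcounted object mirroring struct device lifetime rules. */
struct obj {
    int refcount;
};

static struct obj *obj_create(void)
{
    struct obj *o = malloc(sizeof(*o));
    o->refcount = 1;            /* creator holds the initial reference */
    return o;
}

static void obj_put(struct obj *o)
{
    if (--o->refcount == 0)
        free(o);                /* release() in the driver model */
}

/* Registration that fails: like device_register(), the caller's
 * initial reference is still held and must be dropped by the caller. */
static int obj_register(struct obj *o)
{
    (void)o;
    return -1;                  /* simulate registration failure */
}

static int register_or_cleanup(struct obj *o)
{
    int err = obj_register(o);
    if (err) {
        obj_put(o);             /* the fix: drop the reference on failure */
        return err;
    }
    return 0;
}
```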
+5 -5
fs/char_dev.c
··· 67 67 int i; 68 68 struct char_device_struct *cd; 69 69 70 - for (i = ARRAY_SIZE(chrdevs)-1; i > CHRDEV_MAJOR_DYN_END; i--) { 70 + for (i = ARRAY_SIZE(chrdevs)-1; i >= CHRDEV_MAJOR_DYN_END; i--) { 71 71 if (chrdevs[i] == NULL) 72 72 return i; 73 73 } 74 74 75 75 for (i = CHRDEV_MAJOR_DYN_EXT_START; 76 - i > CHRDEV_MAJOR_DYN_EXT_END; i--) { 76 + i >= CHRDEV_MAJOR_DYN_EXT_END; i--) { 77 77 for (cd = chrdevs[major_to_index(i)]; cd; cd = cd->next) 78 78 if (cd->major == i) 79 79 break; 80 80 81 - if (cd == NULL || cd->major != i) 81 + if (cd == NULL) 82 82 return i; 83 83 } 84 84 ··· 121 121 } 122 122 123 123 if (major >= CHRDEV_MAJOR_MAX) { 124 - pr_err("CHRDEV \"%s\" major requested (%d) is greater than the maximum (%d)\n", 125 - name, major, CHRDEV_MAJOR_MAX); 124 + pr_err("CHRDEV \"%s\" major requested (%u) is greater than the maximum (%u)\n", 125 + name, major, CHRDEV_MAJOR_MAX-1); 126 126 ret = -EINVAL; 127 127 goto out; 128 128 }
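Editor's note: the `fs/char_dev.c` hunk above fixes an off-by-one where the downward scans stopped *before* `CHRDEV_MAJOR_DYN_END`, making the lowest dynamic major unallocatable. A self-contained sketch of the corrected loop bound (constants are hypothetical stand-ins):

```c
#include <assert.h>
#include <stdbool.h>

#define MAJOR_MAX 16  /* stands in for ARRAY_SIZE(chrdevs) */
#define DYN_END    8  /* lowest major in the dynamic range, inclusive */

static bool taken[MAJOR_MAX];

/* Scan downward for a free major. The old `i > DYN_END` bound silently
 * skipped DYN_END itself; `>=` keeps that last major allocatable. */
static int find_dynamic_major(void)
{
    int i;

    for (i = MAJOR_MAX - 1; i >= DYN_END; i--)
        if (!taken[i])
            return i;
    return -1;  /* -EBUSY in the kernel */
}
```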
+6
include/linux/extcon.h
··· 230 230 * Following APIs get the extcon_dev from devicetree or by through extcon name. 231 231 */ 232 232 extern struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name); 233 + extern struct extcon_dev *extcon_find_edev_by_node(struct device_node *node); 233 234 extern struct extcon_dev *extcon_get_edev_by_phandle(struct device *dev, 234 235 int index); 235 236 ··· 280 279 struct notifier_block *nb) { } 281 280 282 281 static inline struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name) 282 + { 283 + return ERR_PTR(-ENODEV); 284 + } 285 + 286 + static inline struct extcon_dev *extcon_find_edev_by_node(struct device_node *node) 283 287 { 284 288 return ERR_PTR(-ENODEV); 285 289 }
-47
include/linux/extcon/extcon-gpio.h
··· 1 - /* 2 - * Single-state GPIO extcon driver based on extcon class 3 - * 4 - * Copyright (C) 2012 Samsung Electronics 5 - * Author: MyungJoo Ham <myungjoo.ham@samsung.com> 6 - * 7 - * based on switch class driver 8 - * Copyright (C) 2008 Google, Inc. 9 - * Author: Mike Lockwood <lockwood@android.com> 10 - * 11 - * This software is licensed under the terms of the GNU General Public 12 - * License version 2, as published by the Free Software Foundation, and 13 - * may be copied, distributed, and modified under those terms. 14 - * 15 - * This program is distributed in the hope that it will be useful, 16 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 17 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 18 - * GNU General Public License for more details. 19 - */ 20 - #ifndef __EXTCON_GPIO_H__ 21 - #define __EXTCON_GPIO_H__ __FILE__ 22 - 23 - #include <linux/extcon.h> 24 - 25 - /** 26 - * struct gpio_extcon_pdata - A simple GPIO-controlled extcon device. 27 - * @extcon_id: The unique id of specific external connector. 28 - * @gpio: Corresponding GPIO. 29 - * @gpio_active_low: Boolean describing whether gpio active state is 1 or 0 30 - * If true, low state of gpio means active. 31 - * If false, high state of gpio means active. 32 - * @debounce: Debounce time for GPIO IRQ in ms. 33 - * @irq_flags: IRQ Flags (e.g., IRQF_TRIGGER_LOW). 34 - * @check_on_resume: Boolean describing whether to check the state of gpio 35 - * while resuming from sleep. 36 - */ 37 - struct gpio_extcon_pdata { 38 - unsigned int extcon_id; 39 - unsigned gpio; 40 - bool gpio_active_low; 41 - unsigned long debounce; 42 - unsigned long irq_flags; 43 - 44 - bool check_on_resume; 45 - }; 46 - 47 - #endif /* __EXTCON_GPIO_H__ */
+1 -1
include/linux/hyperv.h
··· 844 844 845 845 /* 846 846 * NUMA distribution policy: 847 - * We support teo policies: 847 + * We support two policies: 848 848 * 1) Balanced: Here all performance critical channels are 849 849 * distributed evenly amongst all the NUMA nodes. 850 850 * This policy will be the default policy.
+42
include/linux/nvmem-provider.h
··· 22 22 typedef int (*nvmem_reg_write_t)(void *priv, unsigned int offset, 23 23 void *val, size_t bytes); 24 24 25 + /** 26 + * struct nvmem_config - NVMEM device configuration 27 + * 28 + * @dev: Parent device. 29 + * @name: Optional name. 30 + * @id: Optional device ID used in full name. Ignored if name is NULL. 31 + * @owner: Pointer to exporter module. Used for refcounting. 32 + * @cells: Optional array of pre-defined NVMEM cells. 33 + * @ncells: Number of elements in cells. 34 + * @read_only: Device is read-only. 35 + * @root_only: Device is accessibly to root only. 36 + * @reg_read: Callback to read data. 37 + * @reg_write: Callback to write data. 38 + * @size: Device size. 39 + * @word_size: Minimum read/write access granularity. 40 + * @stride: Minimum read/write access stride. 41 + * @priv: User context passed to read/write callbacks. 42 + * 43 + * Note: A default "nvmem<id>" name will be assigned to the device if 44 + * no name is specified in its configuration. In such case "<id>" is 45 + * generated with ida_simple_get() and provided id field is ignored. 46 + * 47 + * Note: Specifying name and setting id to -1 implies a unique device 48 + * whose name is provided as-is (kept unaltered). 
49 + */ 25 50 struct nvmem_config { 26 51 struct device *dev; 27 52 const char *name; ··· 72 47 struct nvmem_device *nvmem_register(const struct nvmem_config *cfg); 73 48 int nvmem_unregister(struct nvmem_device *nvmem); 74 49 50 + struct nvmem_device *devm_nvmem_register(struct device *dev, 51 + const struct nvmem_config *cfg); 52 + 53 + int devm_nvmem_unregister(struct device *dev, struct nvmem_device *nvmem); 54 + 75 55 #else 76 56 77 57 static inline struct nvmem_device *nvmem_register(const struct nvmem_config *c) ··· 87 57 static inline int nvmem_unregister(struct nvmem_device *nvmem) 88 58 { 89 59 return -ENOSYS; 60 + } 61 + 62 + static inline struct nvmem_device * 63 + devm_nvmem_register(struct device *dev, const struct nvmem_config *c) 64 + { 65 + return nvmem_register(c); 66 + } 67 + 68 + static inline int 69 + devm_nvmem_unregister(struct device *dev, struct nvmem_device *nvmem) 70 + { 71 + return nvmem_unregister(nvmem); 90 72 } 91 73 92 74 #endif /* CONFIG_NVMEM */
+2
include/linux/pci_ids.h
··· 1562 1562 #define PCI_DEVICE_ID_SERVERWORKS_CSB6LPC 0x0227 1563 1563 #define PCI_DEVICE_ID_SERVERWORKS_HT1100LD 0x0408 1564 1564 1565 + #define PCI_VENDOR_ID_ALTERA 0x1172 1566 + 1565 1567 #define PCI_VENDOR_ID_SBE 0x1176 1566 1568 #define PCI_DEVICE_ID_SBE_WANXL100 0x0301 1567 1569 #define PCI_DEVICE_ID_SBE_WANXL200 0x0302
+1 -9
include/linux/stm.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * System Trace Module (STM) infrastructure apis 3 4 * Copyright (C) 2014 Intel Corporation. 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms and conditions of the GNU General Public License, 7 - * version 2, as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 5 */ 14 6 15 7 #ifndef _STM_H_
+19
include/linux/thunderbolt.h
··· 45 45 * @TB_SECURITY_USER: User approval required at minimum 46 46 * @TB_SECURITY_SECURE: One time saved key required at minimum 47 47 * @TB_SECURITY_DPONLY: Only tunnel Display port (and USB) 48 + * @TB_SECURITY_USBONLY: Only tunnel USB controller of the connected 49 + * Thunderbolt dock (and Display Port). All PCIe 50 + * links downstream of the dock are removed. 48 51 */ 49 52 enum tb_security_level { 50 53 TB_SECURITY_NONE, 51 54 TB_SECURITY_USER, 52 55 TB_SECURITY_SECURE, 53 56 TB_SECURITY_DPONLY, 57 + TB_SECURITY_USBONLY, 54 58 }; 55 59 56 60 /** ··· 69 65 * @cm_ops: Connection manager specific operations vector 70 66 * @index: Linux assigned domain number 71 67 * @security_level: Current security level 68 + * @nboot_acl: Number of boot ACLs the domain supports 72 69 * @privdata: Private connection manager specific data 73 70 */ 74 71 struct tb { ··· 82 77 const struct tb_cm_ops *cm_ops; 83 78 int index; 84 79 enum tb_security_level security_level; 80 + size_t nboot_acl; 85 81 unsigned long privdata[0]; 86 82 }; 87 83 ··· 243 237 u16 receive_ring); 244 238 int tb_xdomain_disable_paths(struct tb_xdomain *xd); 245 239 struct tb_xdomain *tb_xdomain_find_by_uuid(struct tb *tb, const uuid_t *uuid); 240 + struct tb_xdomain *tb_xdomain_find_by_route(struct tb *tb, u64 route); 246 241 247 242 static inline struct tb_xdomain * 248 243 tb_xdomain_find_by_uuid_locked(struct tb *tb, const uuid_t *uuid) ··· 252 245 253 246 mutex_lock(&tb->lock); 254 247 xd = tb_xdomain_find_by_uuid(tb, uuid); 248 + mutex_unlock(&tb->lock); 249 + 250 + return xd; 251 + } 252 + 253 + static inline struct tb_xdomain * 254 + tb_xdomain_find_by_route_locked(struct tb *tb, u64 route) 255 + { 256 + struct tb_xdomain *xd; 257 + 258 + mutex_lock(&tb->lock); 259 + xd = tb_xdomain_find_by_route(tb, route); 255 260 mutex_unlock(&tb->lock); 256 261 257 262 return xd;
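Editor's note: `tb_xdomain_find_by_route_locked()` above is the usual locked-wrapper pattern: the raw lookup requires the caller to hold `tb->lock`, and the `_locked` variant takes and drops the lock around it. A small pthread sketch of the same shape (the `domain` struct and its toy lookup are hypothetical):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical domain whose lookup structures are guarded by a lock. */
struct domain {
    pthread_mutex_t lock;
    unsigned long long have_route;  /* toy stand-in for the topology walk */
};

/* Raw primitive: caller must hold d->lock. */
static void *find_by_route(struct domain *d, unsigned long long route)
{
    return d->have_route == route ? d : NULL;
}

/* Convenience wrapper: takes the lock, looks up, drops the lock,
 * mirroring tb_xdomain_find_by_route_locked(). */
static void *find_by_route_locked(struct domain *d, unsigned long long route)
{
    void *xd;

    pthread_mutex_lock(&d->lock);
    xd = find_by_route(d, route);
    pthread_mutex_unlock(&d->lock);
    return xd;
}
```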
+4 -9
include/uapi/linux/stm.h
··· 3 3 * System Trace Module (STM) userspace interfaces 4 4 * Copyright (c) 2014, Intel Corporation. 5 5 * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 - * 15 6 * STM class implements generic infrastructure for System Trace Module devices 16 7 * as defined in MIPI STPv2 specification. 17 8 */ ··· 11 20 #define _UAPI_LINUX_STM_H 12 21 13 22 #include <linux/types.h> 23 + 24 + /* Maximum allowed master and channel values */ 25 + #define STP_MASTER_MAX 0xffff 26 + #define STP_CHANNEL_MAX 0xffff 14 27 15 28 /** 16 29 * struct stp_policy_id - identification for the STP policy
+40 -42
lib/devres.c
··· 5 5 #include <linux/gfp.h> 6 6 #include <linux/export.h> 7 7 8 + enum devm_ioremap_type { 9 + DEVM_IOREMAP = 0, 10 + DEVM_IOREMAP_NC, 11 + DEVM_IOREMAP_WC, 12 + }; 13 + 8 14 void devm_ioremap_release(struct device *dev, void *res) 9 15 { 10 16 iounmap(*(void __iomem **)res); ··· 19 13 static int devm_ioremap_match(struct device *dev, void *res, void *match_data) 20 14 { 21 15 return *(void **)res == match_data; 16 + } 17 + 18 + static void __iomem *__devm_ioremap(struct device *dev, resource_size_t offset, 19 + resource_size_t size, 20 + enum devm_ioremap_type type) 21 + { 22 + void __iomem **ptr, *addr = NULL; 23 + 24 + ptr = devres_alloc(devm_ioremap_release, sizeof(*ptr), GFP_KERNEL); 25 + if (!ptr) 26 + return NULL; 27 + 28 + switch (type) { 29 + case DEVM_IOREMAP: 30 + addr = ioremap(offset, size); 31 + break; 32 + case DEVM_IOREMAP_NC: 33 + addr = ioremap_nocache(offset, size); 34 + break; 35 + case DEVM_IOREMAP_WC: 36 + addr = ioremap_wc(offset, size); 37 + break; 38 + } 39 + 40 + if (addr) { 41 + *ptr = addr; 42 + devres_add(dev, ptr); 43 + } else 44 + devres_free(ptr); 45 + 46 + return addr; 22 47 } 23 48 24 49 /** ··· 63 26 void __iomem *devm_ioremap(struct device *dev, resource_size_t offset, 64 27 resource_size_t size) 65 28 { 66 - void __iomem **ptr, *addr; 67 - 68 - ptr = devres_alloc(devm_ioremap_release, sizeof(*ptr), GFP_KERNEL); 69 - if (!ptr) 70 - return NULL; 71 - 72 - addr = ioremap(offset, size); 73 - if (addr) { 74 - *ptr = addr; 75 - devres_add(dev, ptr); 76 - } else 77 - devres_free(ptr); 78 - 79 - return addr; 29 + return __devm_ioremap(dev, offset, size, DEVM_IOREMAP); 80 30 } 81 31 EXPORT_SYMBOL(devm_ioremap); 82 32 ··· 79 55 void __iomem *devm_ioremap_nocache(struct device *dev, resource_size_t offset, 80 56 resource_size_t size) 81 57 { 82 - void __iomem **ptr, *addr; 83 - 84 - ptr = devres_alloc(devm_ioremap_release, sizeof(*ptr), GFP_KERNEL); 85 - if (!ptr) 86 - return NULL; 87 - 88 - addr = ioremap_nocache(offset, size); 89 - if 
(addr) { 90 - *ptr = addr; 91 - devres_add(dev, ptr); 92 - } else 93 - devres_free(ptr); 94 - 95 - return addr; 58 + return __devm_ioremap(dev, offset, size, DEVM_IOREMAP_NC); 96 59 } 97 60 EXPORT_SYMBOL(devm_ioremap_nocache); 98 61 ··· 94 83 void __iomem *devm_ioremap_wc(struct device *dev, resource_size_t offset, 95 84 resource_size_t size) 96 85 { 97 - void __iomem **ptr, *addr; 98 - 99 - ptr = devres_alloc(devm_ioremap_release, sizeof(*ptr), GFP_KERNEL); 100 - if (!ptr) 101 - return NULL; 102 - 103 - addr = ioremap_wc(offset, size); 104 - if (addr) { 105 - *ptr = addr; 106 - devres_add(dev, ptr); 107 - } else 108 - devres_free(ptr); 109 - 110 - return addr; 86 + return __devm_ioremap(dev, offset, size, DEVM_IOREMAP_WC); 111 87 } 112 88 EXPORT_SYMBOL(devm_ioremap_wc); 113 89
+3 -1
tools/hv/hv_fcopy_daemon.c
··· 21 21 #include <stdio.h> 22 22 #include <stdlib.h> 23 23 #include <unistd.h> 24 + #include <string.h> 24 25 #include <errno.h> 25 26 #include <linux/hyperv.h> 27 + #include <linux/limits.h> 26 28 #include <syslog.h> 27 29 #include <sys/stat.h> 28 30 #include <fcntl.h> 29 31 #include <getopt.h> 30 32 31 33 static int target_fd; 32 - static char target_fname[W_MAX_PATH]; 34 + static char target_fname[PATH_MAX]; 33 35 static unsigned long long filesize; 34 36 35 37 static int hv_start_fcopy(struct hv_start_fcopy *smsg)
+65 -73
tools/hv/hv_kvp_daemon.c
··· 634 634 return mac_addr; 635 635 } 636 636 637 - 638 - /* 639 - * Retrieve the interface name given tha MAC address. 640 - */ 641 - 642 - static char *kvp_mac_to_if_name(char *mac) 643 - { 644 - DIR *dir; 645 - struct dirent *entry; 646 - FILE *file; 647 - char *p, *x; 648 - char *if_name = NULL; 649 - char buf[256]; 650 - char dev_id[PATH_MAX]; 651 - unsigned int i; 652 - 653 - dir = opendir(KVP_NET_DIR); 654 - if (dir == NULL) 655 - return NULL; 656 - 657 - while ((entry = readdir(dir)) != NULL) { 658 - /* 659 - * Set the state for the next pass. 660 - */ 661 - snprintf(dev_id, sizeof(dev_id), "%s%s/address", KVP_NET_DIR, 662 - entry->d_name); 663 - 664 - file = fopen(dev_id, "r"); 665 - if (file == NULL) 666 - continue; 667 - 668 - p = fgets(buf, sizeof(buf), file); 669 - if (p) { 670 - x = strchr(p, '\n'); 671 - if (x) 672 - *x = '\0'; 673 - 674 - for (i = 0; i < strlen(p); i++) 675 - p[i] = toupper(p[i]); 676 - 677 - if (!strcmp(p, mac)) { 678 - /* 679 - * Found the MAC match; return the interface 680 - * name. The caller will free the memory. 681 - */ 682 - if_name = strdup(entry->d_name); 683 - fclose(file); 684 - break; 685 - } 686 - } 687 - fclose(file); 688 - } 689 - 690 - closedir(dir); 691 - return if_name; 692 - } 693 - 694 - 695 637 static void kvp_process_ipconfig_file(char *cmd, 696 638 char *config_buf, unsigned int len, 697 639 int element_size, int offset) ··· 939 997 return error; 940 998 } 941 999 1000 + /* 1001 + * Retrieve the IP given the MAC address. 
1002 + */ 1003 + static int kvp_mac_to_ip(struct hv_kvp_ipaddr_value *kvp_ip_val) 1004 + { 1005 + char *mac = (char *)kvp_ip_val->adapter_id; 1006 + DIR *dir; 1007 + struct dirent *entry; 1008 + FILE *file; 1009 + char *p, *x; 1010 + char *if_name = NULL; 1011 + char buf[256]; 1012 + char dev_id[PATH_MAX]; 1013 + unsigned int i; 1014 + int error = HV_E_FAIL; 1015 + 1016 + dir = opendir(KVP_NET_DIR); 1017 + if (dir == NULL) 1018 + return HV_E_FAIL; 1019 + 1020 + while ((entry = readdir(dir)) != NULL) { 1021 + /* 1022 + * Set the state for the next pass. 1023 + */ 1024 + snprintf(dev_id, sizeof(dev_id), "%s%s/address", KVP_NET_DIR, 1025 + entry->d_name); 1026 + 1027 + file = fopen(dev_id, "r"); 1028 + if (file == NULL) 1029 + continue; 1030 + 1031 + p = fgets(buf, sizeof(buf), file); 1032 + fclose(file); 1033 + if (!p) 1034 + continue; 1035 + 1036 + x = strchr(p, '\n'); 1037 + if (x) 1038 + *x = '\0'; 1039 + 1040 + for (i = 0; i < strlen(p); i++) 1041 + p[i] = toupper(p[i]); 1042 + 1043 + if (strcmp(p, mac)) 1044 + continue; 1045 + 1046 + /* 1047 + * Found the MAC match. 1048 + * A NIC (e.g. VF) matching the MAC, but without IP, is skipped. 1049 + */ 1050 + if_name = entry->d_name; 1051 + if (!if_name) 1052 + continue; 1053 + 1054 + error = kvp_get_ip_info(0, if_name, KVP_OP_GET_IP_INFO, 1055 + kvp_ip_val, MAX_IP_ADDR_SIZE * 2); 1056 + 1057 + if (!error && strlen((char *)kvp_ip_val->ip_addr)) 1058 + break; 1059 + } 1060 + 1061 + closedir(dir); 1062 + return error; 1063 + } 942 1064 943 1065 static int expand_ipv6(char *addr, int type) 944 1066 { ··· 1478 1472 switch (op) { 1479 1473 case KVP_OP_GET_IP_INFO: 1480 1474 kvp_ip_val = &hv_msg->body.kvp_ip_val; 1481 - if_name = 1482 - kvp_mac_to_if_name((char *)kvp_ip_val->adapter_id); 1483 1475 1484 - if (if_name == NULL) { 1485 - /* 1486 - * We could not map the mac address to an 1487 - * interface name; return error. 
1488 - */ 1489 - hv_msg->error = HV_E_FAIL; 1490 - break; 1491 - } 1492 - error = kvp_get_ip_info( 1493 - 0, if_name, KVP_OP_GET_IP_INFO, 1494 - kvp_ip_val, 1495 - (MAX_IP_ADDR_SIZE * 2)); 1476 + error = kvp_mac_to_ip(kvp_ip_val); 1496 1477 1497 1478 if (error) 1498 1479 hv_msg->error = error; 1499 1480 1500 - free(if_name); 1501 1481 break; 1502 1482 1503 1483 case KVP_OP_SET_IP_INFO:
+1
tools/hv/hv_vss_daemon.c
··· 22 22 #include <sys/poll.h> 23 23 #include <sys/ioctl.h> 24 24 #include <sys/stat.h> 25 + #include <sys/sysmacros.h> 25 26 #include <fcntl.h> 26 27 #include <stdio.h> 27 28 #include <mntent.h>