Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-4.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
"Here is the big pull request for char/misc drivers for 4.16-rc1.

There's a lot of stuff in here. Three new driver subsystems were added
for various types of hardware busses:

- siox
- slimbus
- soundwire

as well as a new vboxguest subsystem for the VirtualBox hypervisor
drivers.

There's also big updates from the FPGA subsystem, lots of Android
binder fixes, the usual handful of hyper-v updates, and lots of other
smaller driver updates.

All of these have been in linux-next for a long time, with no reported
issues"

* tag 'char-misc-4.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (155 commits)
char: lp: use true or false for boolean values
android: binder: use VM_ALLOC to get vm area
android: binder: Use true and false for boolean values
lkdtm: fix handle_irq_event symbol for INT_HW_IRQ_EN
EISA: Delete error message for a failed memory allocation in eisa_probe()
EISA: Whitespace cleanup
misc: remove AVR32 dependencies
virt: vbox: Add error mapping for VERR_INVALID_NAME and VERR_NO_MORE_FILES
soundwire: Fix a signedness bug
uio_hv_generic: fix new type mismatch warnings
uio_hv_generic: fix type mismatch warnings
auxdisplay: img-ascii-lcd: add missing MODULE_DESCRIPTION/AUTHOR/LICENSE
uio_hv_generic: add rescind support
uio_hv_generic: check that host supports monitor page
uio_hv_generic: create send and receive buffers
uio: document uio_hv_generic regions
doc: fix documentation about uio_hv_generic
vmbus: add monitor_id and subchannel_id to sysfs per channel
vmbus: fix ABI documentation
uio_hv_generic: use ISR callback method
...

+14300 -1154
+37 -16
Documentation/ABI/stable/sysfs-bus-vmbus
···
 Description:	The 16 bit vendor ID of the device
 Users:		tools/hv/lsvmbus and user level RDMA libraries

-What:		/sys/bus/vmbus/devices/vmbus_*/channels/relid/cpu
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN
+Date:		September. 2017
+KernelVersion:	4.14
+Contact:	Stephen Hemminger <sthemmin@microsoft.com>
+Description:	Directory for per-channel information
+		NN is the VMBUS relid associtated with the channel.
+
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/cpu
 Date:		September. 2017
 KernelVersion:	4.14
 Contact:	Stephen Hemminger <sthemmin@microsoft.com>
 Description:	VCPU (sub)channel is affinitized to
-Users:		tools/hv/lsvmbus and other debuggig tools
+Users:		tools/hv/lsvmbus and other debugging tools

-What:		/sys/bus/vmbus/devices/vmbus_*/channels/relid/cpu
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/cpu
 Date:		September. 2017
 KernelVersion:	4.14
 Contact:	Stephen Hemminger <sthemmin@microsoft.com>
 Description:	VCPU (sub)channel is affinitized to
-Users:		tools/hv/lsvmbus and other debuggig tools
+Users:		tools/hv/lsvmbus and other debugging tools

-What:		/sys/bus/vmbus/devices/vmbus_*/channels/relid/in_mask
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/in_mask
 Date:		September. 2017
 KernelVersion:	4.14
 Contact:	Stephen Hemminger <sthemmin@microsoft.com>
-Description:	Inbound channel signaling state
+Description:	Host to guest channel interrupt mask
 Users:		Debugging tools

-What:		/sys/bus/vmbus/devices/vmbus_*/channels/relid/latency
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/latency
 Date:		September. 2017
 KernelVersion:	4.14
 Contact:	Stephen Hemminger <sthemmin@microsoft.com>
 Description:	Channel signaling latency
 Users:		Debugging tools

-What:		/sys/bus/vmbus/devices/vmbus_*/channels/relid/out_mask
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/out_mask
 Date:		September. 2017
 KernelVersion:	4.14
 Contact:	Stephen Hemminger <sthemmin@microsoft.com>
-Description:	Outbound channel signaling state
+Description:	Guest to host channel interrupt mask
 Users:		Debugging tools

-What:		/sys/bus/vmbus/devices/vmbus_*/channels/relid/pending
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/pending
 Date:		September. 2017
 KernelVersion:	4.14
 Contact:	Stephen Hemminger <sthemmin@microsoft.com>
 Description:	Channel interrupt pending state
 Users:		Debugging tools

-What:		/sys/bus/vmbus/devices/vmbus_*/channels/relid/read_avail
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/read_avail
 Date:		September. 2017
 KernelVersion:	4.14
 Contact:	Stephen Hemminger <sthemmin@microsoft.com>
-Description:	Bytes availabble to read
+Description:	Bytes available to read
 Users:		Debugging tools

-What:		/sys/bus/vmbus/devices/vmbus_*/channels/relid/write_avail
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/write_avail
 Date:		September. 2017
 KernelVersion:	4.14
 Contact:	Stephen Hemminger <sthemmin@microsoft.com>
-Description:	Bytes availabble to write
+Description:	Bytes available to write
 Users:		Debugging tools

-What:		/sys/bus/vmbus/devices/vmbus_*/channels/relid/events
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/events
 Date:		September. 2017
 KernelVersion:	4.14
 Contact:	Stephen Hemminger <sthemmin@microsoft.com>
 Description:	Number of times we have signaled the host
 Users:		Debugging tools

-What:		/sys/bus/vmbus/devices/vmbus_*/channels/relid/interrupts
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/interrupts
 Date:		September. 2017
 KernelVersion:	4.14
 Contact:	Stephen Hemminger <sthemmin@microsoft.com>
 Description:	Number of times we have taken an interrupt (incoming)
 Users:		Debugging tools
+
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/subchannel_id
+Date:		January. 2018
+KernelVersion:	4.16
+Contact:	Stephen Hemminger <sthemmin@microsoft.com>
+Description:	Subchannel ID associated with VMBUS channel
+Users:		Debugging tools and userspace drivers
+
+What:		/sys/bus/vmbus/devices/vmbus_*/channels/NN/monitor_id
+Date:		January. 2018
+KernelVersion:	4.16
+Contact:	Stephen Hemminger <sthemmin@microsoft.com>
+Description:	Monitor bit associated with channel
+Users:		Debugging tools and userspace drivers
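With the per-channel directories documented above, the counters can be collected with a small script. A minimal sketch — the helper name is ours, and the base directory is a parameter so it can be exercised against any tree with the same layout, not just a live /sys:

```shell
#!/bin/sh
# Print the documented per-channel attributes for every VMBUS channel
# found under a sysfs-style tree. BASE defaults to the real sysfs path,
# but any directory with the same layout works (useful for testing).
dump_vmbus_channels() {
    base="${1:-/sys/bus/vmbus/devices}"
    for chan in "$base"/vmbus_*/channels/*/; do
        [ -d "$chan" ] || continue
        relid=$(basename "$chan")
        printf 'channel %s:' "$relid"
        # Attribute names taken from the ABI entries above; attributes
        # absent on a given kernel are simply skipped.
        for attr in cpu in_mask out_mask pending read_avail write_avail events interrupts; do
            [ -f "$chan$attr" ] && printf ' %s=%s' "$attr" "$(cat "$chan$attr")"
        done
        printf '\n'
    done
}
```

Tools like tools/hv/lsvmbus present the same information; this is only a sketch of how the documented layout composes.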
+87
Documentation/ABI/testing/sysfs-bus-siox
···
+What:		/sys/bus/siox/devices/siox-X/active
+KernelVersion:	4.16
+Contact:	Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+		On reading represents the current state of the bus. If it
+		contains a "0" the bus is stopped and connected devices are
+		expected to not do anything because their watchdog triggered.
+		When the file contains a "1" the bus is operated and periodically
+		does a push-pull cycle to write and read data from the
+		connected devices.
+		When writing a "0" or "1" the bus moves to the described state.
+
+What:		/sys/bus/siox/devices/siox-X/device_add
+KernelVersion:	4.16
+Contact:	Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+		Write-only file. Write
+
+			<type> <inbytes> <outbytes> <statustype>
+
+		to add a new device dynamically. <type> is the name that is used to match
+		to a driver (similar to the platform bus). <inbytes> and <outbytes> define
+		the length of the input and output shift register in bytes respectively.
+		<statustype> defines the 4 bit device type that is check to identify connection
+		problems.
+		The new device is added to the end of the existing chain.
+
+What:		/sys/bus/siox/devices/siox-X/device_remove
+KernelVersion:	4.16
+Contact:	Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+		Write-only file. A single write removes the last device in the siox chain.
+
+What:		/sys/bus/siox/devices/siox-X/poll_interval_ns
+KernelVersion:	4.16
+Contact:	Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+		Defines the interval between two poll cycles in nano seconds.
+		Note this is rounded to jiffies on writing. On reading the current value
+		is returned.
+
+What:		/sys/bus/siox/devices/siox-X-Y/connected
+KernelVersion:	4.16
+Contact:	Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+		Read-only value. "0" means the Yth device on siox bus X isn't "connected" i.e.
+		communication with it is not ensured. "1" signals a working connection.
+
+What:		/sys/bus/siox/devices/siox-X-Y/inbytes
+KernelVersion:	4.16
+Contact:	Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+		Read-only value reporting the inbytes value provided to siox-X/device_add
+
+What:		/sys/bus/siox/devices/siox-X-Y/status_errors
+KernelVersion:	4.16
+Contact:	Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+		Counts the number of time intervals when the read status byte doesn't yield the
+		expected value.
+
+What:		/sys/bus/siox/devices/siox-X-Y/type
+KernelVersion:	4.16
+Contact:	Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+		Read-only value reporting the type value provided to siox-X/device_add.
+
+What:		/sys/bus/siox/devices/siox-X-Y/watchdog
+KernelVersion:	4.16
+Contact:	Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+		Read-only value reporting if the watchdog of the siox device is
+		active. "0" means the watchdog is not active and the device is expected to
+		be operational. "1" means the watchdog keeps the device in reset.
+
+What:		/sys/bus/siox/devices/siox-X-Y/watchdog_errors
+KernelVersion:	4.16
+Contact:	Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+		Read-only value reporting the number of time intervals when the
+		watchdog was active.
+
+What:		/sys/bus/siox/devices/siox-X-Y/outbytes
+KernelVersion:	4.16
+Contact:	Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Description:
+		Read-only value reporting the outbytes value provided to siox-X/device_add.
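The device_add file above takes a single space-separated record. A sketch of building that record, with a basic range check on the 4 bit status type — the device name and sizes below are made-up illustration values, not a real SIOX device type:

```shell
#!/bin/sh
# Build the record expected by siox-X/device_add:
#   <type> <inbytes> <outbytes> <statustype>
# Values used in the example below are hypothetical.
siox_device_add_record() {
    type="$1" inbytes="$2" outbytes="$3" statustype="$4"
    # <statustype> is a 4 bit value, so reject anything outside 0..15.
    if [ "$statustype" -lt 0 ] || [ "$statustype" -gt 15 ]; then
        echo "statustype must fit in 4 bits" >&2
        return 1
    fi
    printf '%s %u %u %u\n' "$type" "$inbytes" "$outbytes" "$statustype"
}

# Writing the record appends the device to the end of the chain, e.g.:
# siox_device_add_record eckelmann-dummy 4 4 3 > /sys/bus/siox/devices/siox-0/device_add
```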
+3 -1
Documentation/devicetree/bindings/eeprom/at25.txt
···
 - spi-max-frequency : max spi frequency to use
 - pagesize : size of the eeprom page
 - size : total eeprom size in bytes
-- address-width : number of address bits (one of 8, 16, or 24)
+- address-width : number of address bits (one of 8, 9, 16, or 24).
+  For 9 bits, the MSB of the address is sent as bit 3 of the instruction
+  byte, before the address byte.

 Optional properties:
 - spi-cpha : SPI shifted clock phase, as per spi-bus bindings.
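The new 9-bit case folds the ninth address bit into the instruction byte, as described in the added text. A sketch of the resulting wire bytes, assuming the conventional SPI EEPROM READ opcode 0x03 (the opcode is our assumption, not part of the binding):

```shell
#!/bin/sh
# For a 9 bit address-width, bit 8 of the address travels in bit 3 of
# the instruction byte; the low 8 bits follow as the address byte.
# 0x03 (READ) is the conventional SPI EEPROM opcode, assumed here.
at25_read_cmd() {
    addr="$1"
    opcode=$(( 0x03 | ((addr >> 8) & 1) << 3 ))
    printf '%02x %02x\n' "$opcode" $(( addr & 0xff ))
}
```

For example, address 0x100 yields instruction byte 0x0b followed by address byte 0x00.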
+5
Documentation/devicetree/bindings/nvmem/rockchip-efuse.txt
···
 - "rockchip,rk3188-efuse" - for RK3188 SoCs.
 - "rockchip,rk3228-efuse" - for RK3228 SoCs.
 - "rockchip,rk3288-efuse" - for RK3288 SoCs.
+- "rockchip,rk3328-efuse" - for RK3328 SoCs.
 - "rockchip,rk3368-efuse" - for RK3368 SoCs.
 - "rockchip,rk3399-efuse" - for RK3399 SoCs.
 - reg: Should contain the registers location and exact eFuse size
 - clocks: Should be the clock id of eFuse
 - clock-names: Should be "pclk_efuse"
+
+Optional properties:
+- rockchip,efuse-size: Should be exact eFuse size in byte, the eFuse
+  size in property <reg> will be invalid if define this property.

 Deprecated properties:
 - compatible: "rockchip,rockchip-efuse"
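A node using both new pieces might look like the sketch below, in the style of the binding's other examples. The node name, base address, register size and clock phandle are illustrative, not taken from a shipped rk3328 devicetree:

```
efuse: efuse@ff260000 {
	compatible = "rockchip,rk3328-efuse";
	reg = <0x0 0xff260000 0x0 0x50>;
	clocks = <&cru PCLK_EFUSE>;
	clock-names = "pclk_efuse";
	/* takes precedence over the size implied by <reg> */
	rockchip,efuse-size = <0x20>;
};
```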
+19
Documentation/devicetree/bindings/siox/eckelmann,siox-gpio.txt
···
+Eckelmann SIOX GPIO bus
+
+Required properties:
+- compatible : "eckelmann,siox-gpio"
+- din-gpios, dout-gpios, dclk-gpios, dld-gpios: references gpios for the
+  corresponding bus signals.
+
+Examples:
+
+	siox {
+		compatible = "eckelmann,siox-gpio";
+		pinctrl-names = "default";
+		pinctrl-0 = <&pinctrl_siox>;
+
+		din-gpios = <&gpio6 11 0>;
+		dout-gpios = <&gpio6 8 0>;
+		dclk-gpios = <&gpio6 9 0>;
+		dld-gpios = <&gpio6 10 0>;
+	};
+50
Documentation/devicetree/bindings/slimbus/bus.txt
···
+SLIM(Serial Low Power Interchip Media Bus) bus
+
+SLIMbus is a 2-wire bus, and is used to communicate with peripheral
+components like audio-codec.
+
+Required property for SLIMbus controller node:
+- compatible	- name of SLIMbus controller
+
+Child nodes:
+Every SLIMbus controller node can contain zero or more child nodes
+representing slave devices on the bus. Every SLIMbus slave device is
+uniquely determined by the enumeration address containing 4 fields:
+Manufacturer ID, Product code, Device index, and Instance value for
+the device.
+If child node is not present and it is instantiated after device
+discovery (slave device reporting itself present).
+
+In some cases it may be necessary to describe non-probeable device
+details such as non-standard ways of powering up a device. In
+such cases, child nodes for those devices will be present as
+slaves of the SLIMbus controller, as detailed below.
+
+Required property for SLIMbus child node if it is present:
+- reg	- Should be ('Device index', 'Instance ID') from SLIMbus
+	  Enumeration Address.
+	  Device Index Uniquely identifies multiple Devices within
+	  a single Component.
+	  Instance ID Is for the cases where multiple Devices of the
+	  same type or Class are attached to the bus.
+
+- compatible	- "slimMID,PID". The textual representation of Manufacturer ID,
+	  Product Code, shall be in lower case hexadecimal with leading
+	  zeroes suppressed
+
+SLIMbus example for Qualcomm's slimbus manager component:
+
+	slim@28080000 {
+		compatible = "qcom,apq8064-slim", "qcom,slim";
+		reg = <0x28080000 0x2000>,
+		interrupts = <0 33 0>;
+		clocks = <&lcc SLIMBUS_SRC>, <&lcc AUDIO_SLIMBUS_CLK>;
+		clock-names = "iface", "core";
+		#address-cells = <2>;
+		#size-cell = <0>;
+
+		codec: wcd9310@1,0{
+			compatible = "slim217,60";
+			reg = <1 0>;
+		};
+	};
+39
Documentation/devicetree/bindings/slimbus/slim-qcom-ctrl.txt
···
+Qualcomm SLIMbus controller
+This controller is used if applications processor driver controls SLIMbus
+master component.
+
+Required properties:
+
+ - #address-cells - refer to Documentation/devicetree/bindings/slimbus/bus.txt
+ - #size-cells - refer to Documentation/devicetree/bindings/slimbus/bus.txt
+
+ - reg : Offset and length of the register region(s) for the device
+ - reg-names : Register region name(s) referenced in reg above
+	 Required register resource entries are:
+	 "ctrl": Physical address of controller register blocks
+	 "slew": required for "qcom,apq8064-slim" SOC.
+ - compatible : should be "qcom,<SOC-NAME>-slim" for SOC specific compatible
+	 followed by "qcom,slim" for fallback.
+ - interrupts : Interrupt number used by this controller
+ - clocks : Interface and core clocks used by this SLIMbus controller
+ - clock-names : Required clock-name entries are:
+	"iface" : Interface clock for this controller
+	"core" : Interrupt for controller core's BAM
+
+Example:
+
+	slim@28080000 {
+		compatible = "qcom,apq8064-slim", "qcom,slim";
+		reg = <0x28080000 0x2000>, <0x80207C 4>;
+		reg-names = "ctrl", "slew";
+		interrupts = <0 33 0>;
+		clocks = <&lcc SLIMBUS_SRC>, <&lcc AUDIO_SLIMBUS_CLK>;
+		clock-names = "iface", "core";
+		#address-cells = <2>;
+		#size-cell = <0>;
+
+		wcd9310: audio-codec@1,0{
+			compatible = "slim217,60";
+			reg = <1 0>;
+		};
+	};
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
···
 dragino	Dragino Technology Co., Limited
 ea	Embedded Artists AB
 ebv	EBV Elektronik
+eckelmann	Eckelmann AG
 edt	Emerging Display Technologies
 eeti	eGalax_eMPIA Technology Inc
 elan	Elan Microelectronic Corp.
+2
Documentation/driver-api/index.rst
···
    gpio
    misc_devices
    dmaengine/index
+   slimbus
+   soundwire/index

 .. only:: subproject and html
+127
Documentation/driver-api/slimbus.rst
···
+============================
+Linux kernel SLIMbus support
+============================
+
+Overview
+========
+
+What is SLIMbus?
+----------------
+SLIMbus (Serial Low Power Interchip Media Bus) is a specification developed by
+MIPI (Mobile Industry Processor Interface) alliance. The bus uses master/slave
+configuration, and is a 2-wire multi-drop implementation (clock, and data).
+
+Currently, SLIMbus is used to interface between application processors of SoCs
+(System-on-Chip) and peripheral components (typically codec). SLIMbus uses
+Time-Division-Multiplexing to accommodate multiple data channels, and
+a control channel.
+
+The control channel is used for various control functions such as bus
+management, configuration and status updates. These messages can be unicast (e.g.
+reading/writing device specific values), or multicast (e.g. data channel
+reconfiguration sequence is a broadcast message announced to all devices)
+
+A data channel is used for data-transfer between 2 SLIMbus devices. Data
+channel uses dedicated ports on the device.
+
+Hardware description:
+---------------------
+SLIMbus specification has different types of device classifications based on
+their capabilities.
+A manager device is responsible for enumeration, configuration, and dynamic
+channel allocation. Every bus has 1 active manager.
+
+A generic device is a device providing application functionality (e.g. codec).
+
+Framer device is responsible for clocking the bus, and transmitting frame-sync
+and framing information on the bus.
+
+Each SLIMbus component has an interface device for monitoring physical layer.
+
+Typically each SoC contains SLIMbus component having 1 manager, 1 framer device,
+1 generic device (for data channel support), and 1 interface device.
+External peripheral SLIMbus component usually has 1 generic device (for
+functionality/data channel support), and an associated interface device.
+The generic device's registers are mapped as 'value elements' so that they can
+be written/read using SLIMbus control channel exchanging control/status type of
+information.
+In case there are multiple framer devices on the same bus, manager device is
+responsible to select the active-framer for clocking the bus.
+
+Per specification, SLIMbus uses "clock gears" to do power management based on
+current frequency and bandwidth requirements. There are 10 clock gears and each
+gear changes the SLIMbus frequency to be twice its previous gear.
+
+Each device has a 6-byte enumeration-address and the manager assigns every
+device with a 1-byte logical address after the devices report presence on the
+bus.
+
+Software description:
+---------------------
+There are 2 types of SLIMbus drivers:
+
+slim_controller represents a 'controller' for SLIMbus. This driver should
+implement duties needed by the SoC (manager device, associated
+interface device for monitoring the layers and reporting errors, default
+framer device).
+
+slim_device represents the 'generic device/component' for SLIMbus, and a
+slim_driver should implement driver for that slim_device.
+
+Device notifications to the driver:
+-----------------------------------
+Since SLIMbus devices have mechanisms for reporting their presence, the
+framework allows drivers to bind when corresponding devices report their
+presence on the bus.
+However, it is possible that the driver needs to be probed
+first so that it can enable corresponding SLIMbus device (e.g. power it up and/or
+take it out of reset). To support that behavior, the framework allows drivers
+to probe first as well (e.g. using standard DeviceTree compatibility field).
+This creates the necessity for the driver to know when the device is functional
+(i.e. reported present). device_up callback is used for that reason when the
+device reports present and is assigned a logical address by the controller.
+
+Similarly, SLIMbus devices 'report absent' when they go down. A 'device_down'
+callback notifies the driver when the device reports absent and its logical
+address assignment is invalidated by the controller.
+
+Another notification "boot_device" is used to notify the slim_driver when
+controller resets the bus. This notification allows the driver to take necessary
+steps to boot the device so that it's functional after the bus has been reset.
+
+Driver and Controller APIs:
+---------------------------
+.. kernel-doc:: include/linux/slimbus.h
+   :internal:
+
+.. kernel-doc:: drivers/slimbus/slimbus.h
+   :internal:
+
+.. kernel-doc:: drivers/slimbus/core.c
+   :export:
+
+Clock-pause:
+------------
+SLIMbus mandates that a reconfiguration sequence (known as clock-pause) be
+broadcast to all active devices on the bus before the bus can enter low-power
+mode. Controller uses this sequence when it decides to enter low-power mode so
+that corresponding clocks and/or power-rails can be turned off to save power.
+Clock-pause is exited by waking up framer device (if controller driver initiates
+exiting low power mode), or by toggling the data line (if a slave device wants
+to initiate it).
+
+Clock-pause APIs:
+~~~~~~~~~~~~~~~~~
+.. kernel-doc:: drivers/slimbus/sched.c
+   :export:
+
+Messaging:
+----------
+The framework supports regmap and read/write apis to exchange control-information
+with a SLIMbus device. APIs can be synchronous or asynchronous.
+The header file <linux/slimbus.h> has more documentation about messaging APIs.
+
+Messaging APIs:
+~~~~~~~~~~~~~~~
+.. kernel-doc:: drivers/slimbus/messaging.c
+   :export:
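The clock-gear rule in the document above (10 gears, each running at twice the frequency of the previous one) can be sketched numerically. The gear-1 root frequency is a parameter here, not a value taken from the MIPI specification:

```shell
#!/bin/sh
# Print the SLIMbus frequency of each of the 10 clock gears, given the
# gear-1 frequency in Hz. Per the text above, each gear doubles the
# previous one; the base frequency passed in is an illustrative input.
slim_gear_freqs() {
    freq="$1"
    gear=1
    while [ "$gear" -le 10 ]; do
        printf 'gear %2d: %d Hz\n' "$gear" "$freq"
        freq=$(( freq * 2 ))
        gear=$(( gear + 1 ))
    done
}
```

With any base frequency f, gear 10 comes out at f * 2^9, which is why gear selection is an effective bandwidth/power trade-off.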
+15
Documentation/driver-api/soundwire/index.rst
···
+=======================
+SoundWire Documentation
+=======================
+
+.. toctree::
+   :maxdepth: 1
+
+   summary
+
+.. only:: subproject
+
+   Indices
+   =======
+
+   * :ref:`genindex`
+207
Documentation/driver-api/soundwire/summary.rst
···
+===========================
+SoundWire Subsystem Summary
+===========================
+
+SoundWire is a new interface ratified in 2015 by the MIPI Alliance.
+SoundWire is used for transporting data typically related to audio
+functions. SoundWire interface is optimized to integrate audio devices in
+mobile or mobile inspired systems.
+
+SoundWire is a 2-pin multi-drop interface with data and clock line. It
+facilitates development of low cost, efficient, high performance systems.
+Broad level key features of SoundWire interface include:
+
+(1) Transporting all of payload data channels, control information, and setup
+    commands over a single two-pin interface.
+
+(2) Lower clock frequency, and hence lower power consumption, by use of DDR
+    (Dual Data Rate) data transmission.
+
+(3) Clock scaling and optional multiple data lanes to give wide flexibility
+    in data rate to match system requirements.
+
+(4) Device status monitoring, including interrupt-style alerts to the Master.
+
+The SoundWire protocol supports up to eleven Slave interfaces. All the
+interfaces share the common Bus containing data and clock line. Each of the
+Slaves can support up to 14 Data Ports. 13 Data Ports are dedicated to audio
+transport. Data Port0 is dedicated to transport of Bulk control information,
+each of the audio Data Ports (1..14) can support up to 8 Channels in
+transmit or receiving mode (typically fixed direction but configurable
+direction is enabled by the specification). Bandwidth restrictions to
+~19.2..24.576Mbits/s don't however allow for 11*13*8 channels to be
+transmitted simultaneously.
+
+Below figure shows an example of connectivity between a SoundWire Master and
+two Slave devices. ::
+
+	+---------------+                                       +---------------+
+	|               |                       Clock Signal    |               |
+	|    Master     |-------+-------------------------------|    Slave      |
+	|   Interface   |       |       Data Signal             |  Interface 1  |
+	|               |-------|-------+-----------------------|               |
+	+---------------+       |       |                       +---------------+
+	                        |       |
+	                        |       |
+	                        |       |
+	                     +--+-------+--+
+	                     |             |
+	                     |   Slave     |
+	                     | Interface 2 |
+	                     |             |
+	                     +-------------+
+
+
+Terminology
+===========
+
+The MIPI SoundWire specification uses the term 'device' to refer to a Master
+or Slave interface, which of course can be confusing. In this summary and
+code we use the term interface only to refer to the hardware. We follow the
+Linux device model by mapping each Slave interface connected on the bus as a
+device managed by a specific driver. The Linux SoundWire subsystem provides
+a framework to implement a SoundWire Slave driver with an API allowing
+3rd-party vendors to enable implementation-defined functionality while
+common setup/configuration tasks are handled by the bus.
+
+Bus:
+Implements SoundWire Linux Bus which handles the SoundWire protocol.
+Programs all the MIPI-defined Slave registers. Represents a SoundWire
+Master. Multiple instances of Bus may be present in a system.
+
+Slave:
+Registers as SoundWire Slave device (Linux Device). Multiple Slave devices
+can register to a Bus instance.
+
+Slave driver:
+Driver controlling the Slave device. MIPI-specified registers are controlled
+directly by the Bus (and transmitted through the Master driver/interface).
+Any implementation-defined Slave register is controlled by Slave driver. In
+practice, it is expected that the Slave driver relies on regmap and does not
+request direct register access.
+
+Programming interfaces (SoundWire Master interface Driver)
+==========================================================
+
+SoundWire Bus supports programming interfaces for the SoundWire Master
+implementation and SoundWire Slave devices. All the code uses the "sdw"
+prefix commonly used by SoC designers and 3rd party vendors.
+
+Each of the SoundWire Master interfaces needs to be registered to the Bus.
+Bus implements API to read standard Master MIPI properties and also provides
+callback in Master ops for Master driver to implement its own functions that
+provides capabilities information. DT support is not implemented at this
+time but should be trivial to add since capabilities are enabled with the
+``device_property_`` API.
+
+The Master interface along with the Master interface capabilities are
+registered based on board file, DT or ACPI.
+
+Following is the Bus API to register the SoundWire Bus:
+
+.. code-block:: c
+
+	int sdw_add_bus_master(struct sdw_bus *bus)
+	{
+		if (!bus->dev)
+			return -ENODEV;
+
+		mutex_init(&bus->lock);
+		INIT_LIST_HEAD(&bus->slaves);
+
+		/* Check ACPI for Slave devices */
+		sdw_acpi_find_slaves(bus);
+
+		/* Check DT for Slave devices */
+		sdw_of_find_slaves(bus);
+
+		return 0;
+	}
+
+This will initialize sdw_bus object for Master device. "sdw_master_ops" and
+"sdw_master_port_ops" callback functions are provided to the Bus.
+
+"sdw_master_ops" is used by Bus to control the Bus in the hardware specific
+way. It includes Bus control functions such as sending the SoundWire
+read/write messages on Bus, setting up clock frequency & Stream
+Synchronization Point (SSP). The "sdw_master_ops" structure abstracts the
+hardware details of the Master from the Bus.
+
+"sdw_master_port_ops" is used by Bus to setup the Port parameters of the
+Master interface Port. Master interface Port register map is not defined by
+MIPI specification, so Bus calls the "sdw_master_port_ops" callback
+function to do Port operations like "Port Prepare", "Port Transport params
+set", "Port enable and disable". The implementation of the Master driver can
+then perform hardware-specific configurations.
+
+Programming interfaces (SoundWire Slave Driver)
+===============================================
+
+The MIPI specification requires each Slave interface to expose a unique
+48-bit identifier, stored in 6 read-only dev_id registers. This dev_id
+identifier contains vendor and part information, as well as a field enabling
+to differentiate between identical components. An additional class field is
+currently unused. Slave driver is written for a specific vendor and part
+identifier, Bus enumerates the Slave device based on these two ids.
+Slave device and driver match is done based on these two ids. Probe
+of the Slave driver is called by Bus on successful match between device and
+driver id. A parent/child relationship is enforced between Master and Slave
+devices (the logical representation is aligned with the physical
+connectivity).
+
+The information on Master/Slave dependencies is stored in platform data,
+board-file, ACPI or DT. The MIPI Software specification defines additional
+link_id parameters for controllers that have multiple Master interfaces. The
+dev_id registers are only unique in the scope of a link, and the link_id
+unique in the scope of a controller. Both dev_id and link_id are not
+necessarily unique at the system level but the parent/child information is
+used to avoid ambiguity.
+
+.. code-block:: c
+
+	static const struct sdw_device_id slave_id[] = {
+		SDW_SLAVE_ENTRY(0x025d, 0x700, 0),
+		{},
+	};
+	MODULE_DEVICE_TABLE(sdw, slave_id);
+
+	static struct sdw_driver slave_sdw_driver = {
+		.driver = {
+			.name = "slave_xxx",
+			.pm = &slave_runtime_pm,
+		},
+		.probe = slave_sdw_probe,
+		.remove = slave_sdw_remove,
+		.ops = &slave_slave_ops,
+		.id_table = slave_id,
+	};
+
+
+For capabilities, Bus implements API to read standard Slave MIPI properties
+and also provides callback in Slave ops for Slave driver to implement own
+function that provides capabilities information. Bus needs to know a set of
+Slave capabilities to program Slave registers and to control the Bus
+reconfigurations.
+
+Future enhancements to be done
+==============================
+
+(1) Bulk Register Access (BRA) transfers.
+
+
+(2) Multiple data lane support.
+
+Links
+=====
+
+SoundWire MIPI specification 1.1 is available at:
+https://members.mipi.org/wg/All-Members/document/70290
+
+SoundWire MIPI DisCo (Discovery and Configuration) specification is
+available at:
+https://www.mipi.org/specifications/mipi-disco-soundwire
+
+(publicly accessible with registration or directly accessible to MIPI
+members)
+
+MIPI Alliance Manufacturer ID Page: mid.mipi.org
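The 48-bit dev_id described in the Slave-driver section splits into the vendor, part, unique-instance and class fields used for matching. A sketch of decoding those fields — the bit layout below is the one assumed by the Linux SoundWire bus code, and should be checked against the MIPI specification rather than taken as authoritative:

```shell
#!/bin/sh
# Split a 48-bit SoundWire dev_id into its fields. Assumed layout:
#   [47:44] SDW version, [43:40] unique ID, [39:24] manufacturer ID,
#   [23:8] part ID, [7:0] class ID (currently unused per the text above).
sdw_decode_dev_id() {
    id="$1"
    printf 'version=%x unique=%x mfg=%04x part=%04x class=%02x\n' \
        $(( (id >> 44) & 0xf )) \
        $(( (id >> 40) & 0xf )) \
        $(( (id >> 24) & 0xffff )) \
        $(( (id >> 8) & 0xffff )) \
        $(( id & 0xff ))
}
```

The mfg/part pair is what SDW_SLAVE_ENTRY(0x025d, 0x700, 0) in the example above matches on.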
+19 -7
Documentation/driver-api/uio-howto.rst
··· 667 667 Since the driver does not declare any device GUIDs, it will not get 668 668 loaded automatically and will not automatically bind to any devices; you 669 669 must load it and allocate an id to the driver yourself. For example, to use 670 - the network device GUID:: 670 + the network device class GUID:: 671 671 672 672 modprobe uio_hv_generic 673 673 echo "f8615163-df3e-46c5-913f-f2d2f965ed0e" > /sys/bus/vmbus/drivers/uio_hv_generic/new_id 674 674 675 675 If there already is a hardware specific kernel driver for the device, 676 676 the generic driver still won't bind to it. In this case, if you want to 677 - use the generic driver (why would you?) you'll have to manually unbind 678 - the hardware specific driver and bind the generic driver, like this:: 677 + use the generic driver for a userspace library, you'll have to manually unbind 678 + the hardware specific driver and bind the generic driver, using the device specific GUID 679 + like this:: 679 680 680 - echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/hv_netvsc/unbind 681 - echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/uio_hv_generic/bind 681 + echo -n ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/hv_netvsc/unbind 682 + echo -n ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/uio_hv_generic/bind 682 683 683 684 You can verify that the device has been bound to the driver by looking 684 685 for it in sysfs, for example like the following:: 685 686 686 - ls -l /sys/bus/vmbus/devices/vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver 687 + ls -l /sys/bus/vmbus/devices/ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver 687 688 688 689 Which if successful should print:: 689 690 690 - .../vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver -> ../../../bus/vmbus/drivers/uio_hv_generic 691 + .../ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver -> ../../../bus/vmbus/drivers/uio_hv_generic 691 692 692 693 Things to know about uio_hv_generic
693 694 ----------------------------------- ··· 697 696 prevents the device from generating further interrupts until the bit is 698 697 cleared. The userspace driver should clear this bit before blocking and 699 698 waiting for more interrupts. 699 + 700 + When the host rescinds a device, the interrupt file descriptor is marked down 701 + and any read of the interrupt file descriptor will return -EIO, similar 702 + to a closed socket or a disconnected serial device. 703 + 704 + The vmbus device regions are mapped into uio device resources: 705 + 0) Channel ring buffers: guest to host and host to guest 706 + 1) Guest to host interrupt signalling pages 707 + 2) Guest to host monitor page 708 + 3) Network receive buffer region 709 + 4) Network send buffer region 700 710 701 711 Further information 702 712 ===================
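A userspace consumer of these semantics might classify read() results on the interrupt file descriptor as follows; classify_read() and the enum are hypothetical helpers for illustration, not part of any uio API:

```c
#include <errno.h>
#include <stdint.h>
#include <sys/types.h>

enum uio_event { UIO_IRQ, UIO_RESCINDED, UIO_ERROR };

/* Interpret the result of read() on /dev/uioX: a successful read returns
 * a 32-bit interrupt count; once the host has rescinded the device, reads
 * fail with EIO, much like a closed socket. */
static enum uio_event classify_read(ssize_t n, int err)
{
	if (n == (ssize_t)sizeof(uint32_t))
		return UIO_IRQ;		/* got an interrupt count */
	if (n < 0 && err == EIO)
		return UIO_RESCINDED;	/* host tore the device down */
	return UIO_ERROR;		/* anything else is unexpected */
}
```

In an event loop this would be driven by something like `n = read(fd, &count, sizeof(count));` followed by a switch on `classify_read(n, errno)`, exiting cleanly on UIO_RESCINDED.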
+76 -76
Documentation/fpga/fpga-mgr.txt
··· 11 11 The FPGA image data itself is very manufacturer specific, but for our purposes 12 12 it's just binary data. The FPGA manager core won't parse it. 13 13 14 + The FPGA image to be programmed can be in a scatter gather list, a single 15 + contiguous buffer, or a firmware file. Because allocating contiguous kernel 16 + memory for the buffer should be avoided, users are encouraged to use a scatter 17 + gather list instead if possible. 18 + 19 + The particulars for programming the image are presented in a structure (struct 20 + fpga_image_info). This struct contains parameters such as pointers to the 21 + FPGA image as well as image-specific particulars such as whether the image was 22 + built for full or partial reconfiguration. 14 23 15 24 API Functions: 16 25 ============== 17 26 18 - To program the FPGA from a file or from a buffer: 19 - ------------------------------------------------- 27 + To program the FPGA: 28 + -------------------- 20 29 21 - int fpga_mgr_buf_load(struct fpga_manager *mgr, 22 - struct fpga_image_info *info, 23 - const char *buf, size_t count); 30 + int fpga_mgr_load(struct fpga_manager *mgr, 31 + struct fpga_image_info *info); 24 32 25 - Load the FPGA from an image which exists as a contiguous buffer in 26 - memory. Allocating contiguous kernel memory for the buffer should be avoided, 27 - users are encouraged to use the _sg interface instead of this. 28 - 29 - int fpga_mgr_buf_load_sg(struct fpga_manager *mgr, 30 - struct fpga_image_info *info, 31 - struct sg_table *sgt); 32 - 33 - Load the FPGA from an image from non-contiguous in memory. Callers can 34 - construct a sg_table using alloc_page backed memory. 35 - 36 - int fpga_mgr_firmware_load(struct fpga_manager *mgr, 37 - struct fpga_image_info *info, 38 - const char *image_name); 39 - 40 - Load the FPGA from an image which exists as a file. The image file must be on 41 - the firmware search path (see the firmware class documentation). 
If successful, 33 + Load the FPGA from an image which is indicated in the info. If successful, 42 34 the FPGA ends up in operating mode. Return 0 on success or a negative error 43 35 code. 44 36 45 - A FPGA design contained in a FPGA image file will likely have particulars that 46 - affect how the image is programmed to the FPGA. These are contained in struct 47 - fpga_image_info. Currently the only such particular is a single flag bit 48 - indicating whether the image is for full or partial reconfiguration. 37 + To allocate or free a struct fpga_image_info: 38 + --------------------------------------------- 39 + 40 + struct fpga_image_info *fpga_image_info_alloc(struct device *dev); 41 + 42 + void fpga_image_info_free(struct fpga_image_info *info); 49 43 50 44 To get/put a reference to a FPGA manager: 51 45 ----------------------------------------- 52 46 53 47 struct fpga_manager *of_fpga_mgr_get(struct device_node *node); 54 48 struct fpga_manager *fpga_mgr_get(struct device *dev); 55 - 56 - Given a DT node or device, get an exclusive reference to a FPGA manager. 57 - 58 49 void fpga_mgr_put(struct fpga_manager *mgr); 59 50 60 - Release the reference. 51 + Given a DT node or device, get a reference to a FPGA manager. This pointer 52 + can be saved until you are ready to program the FPGA. fpga_mgr_put releases 53 + the reference. 54 + 55 + 56 + To get exclusive control of a FPGA manager: 57 + ------------------------------------------- 58 + 59 + int fpga_mgr_lock(struct fpga_manager *mgr); 60 + void fpga_mgr_unlock(struct fpga_manager *mgr); 61 + 62 + The user should call fpga_mgr_lock and verify that it returns 0 before 63 + attempting to program the FPGA. Likewise, the user should call 64 + fpga_mgr_unlock when done programming the FPGA. 
61 65 62 66 63 67 To register or unregister the low level FPGA-specific driver: 64 68 ------------------------------------------------------------- 65 69 66 70 int fpga_mgr_register(struct device *dev, const char *name, 67 - const struct fpga_manager_ops *mops, 68 - void *priv); 71 + const struct fpga_manager_ops *mops, 72 + void *priv); 69 73 70 74 void fpga_mgr_unregister(struct device *dev); 71 75 ··· 79 75 80 76 How to write an image buffer to a supported FPGA 81 77 ================================================ 82 - /* Include to get the API */ 83 78 #include <linux/fpga/fpga-mgr.h> 84 79 85 - /* device node that specifies the FPGA manager to use */ 86 - struct device_node *mgr_node = ... 87 - 88 - /* FPGA image is in this buffer. count is size of the buffer. */ 89 - char *buf = ... 90 - int count = ... 91 - 92 - /* struct with information about the FPGA image to program. */ 93 - struct fpga_image_info info; 94 - 95 - /* flags indicates whether to do full or partial reconfiguration */ 96 - info.flags = 0; 97 - 80 + struct fpga_manager *mgr; 81 + struct fpga_image_info *info; 98 82 int ret; 99 83 84 + /* 85 + * Get a reference to FPGA manager. The manager is not locked, so you can 86 + * hold onto this reference without it preventing programming. 87 + * 88 + * This example uses the device node of the manager. Alternatively, use 89 + * fpga_mgr_get(dev) instead if you have the device. 90 + */ 91 + mgr = of_fpga_mgr_get(mgr_node); 92 + 93 + /* struct with information about the FPGA image to program. */ 94 + info = fpga_image_info_alloc(dev); 95 + 96 + /* flags indicates whether to do full or partial reconfiguration */ 97 + info->flags = FPGA_MGR_PARTIAL_RECONFIG; 98 + 99 + /* 100 + * At this point, indicate where the image is. This is pseudo-code; you're 101 + * going to use one of these three. 
102 + */ 103 + if (image is in a scatter gather table) { 104 + 105 + info->sgt = [your scatter gather table] 106 + 107 + } else if (image is in a buffer) { 108 + 109 + info->buf = [your image buffer] 110 + info->count = [image buffer size] 111 + 112 + } else if (image is in a firmware file) { 113 + 114 + info->firmware_name = devm_kstrdup(dev, firmware_name, GFP_KERNEL); 115 + 116 + } 117 + 100 118 /* Get exclusive control of FPGA manager */ 101 - struct fpga_manager *mgr = of_fpga_mgr_get(mgr_node); 119 + ret = fpga_mgr_lock(mgr); 102 120 103 121 /* Load the buffer to the FPGA */ 104 122 ret = fpga_mgr_buf_load(mgr, &info, buf, count); 105 123 106 124 /* Release the FPGA manager */ 125 + fpga_mgr_unlock(mgr); 107 126 fpga_mgr_put(mgr); 108 127 109 - 110 - How to write an image file to a supported FPGA 111 - ============================================== 112 - /* Include to get the API */ 113 - #include <linux/fpga/fpga-mgr.h> 114 - 115 - /* device node that specifies the FPGA manager to use */ 116 - struct device_node *mgr_node = ... 117 - 118 - /* FPGA image is in this file which is in the firmware search path */ 119 - const char *path = "fpga-image-9.rbf" 120 - 121 - /* struct with information about the FPGA image to program. */ 122 - struct fpga_image_info info; 123 - 124 - /* flags indicates whether to do full or partial reconfiguration */ 125 - info.flags = 0; 126 - 127 - int ret; 128 - 129 - /* Get exclusive control of FPGA manager */ 130 - struct fpga_manager *mgr = of_fpga_mgr_get(mgr_node); 131 - 132 - /* Get the firmware image (path) and load it to the FPGA */ 133 - ret = fpga_mgr_firmware_load(mgr, &info, path); 134 - 135 - /* Release the FPGA manager */ 136 - fpga_mgr_put(mgr); 137 - 128 + /* Deallocate the image info if you're done with it */ 129 + fpga_image_info_free(info); 138 130 139 131 How to support a new FPGA device 140 132 ================================
+95
Documentation/fpga/fpga-region.txt
··· 1 + FPGA Regions 2 + 3 + Alan Tull 2017 4 + 5 + CONTENTS 6 + - Introduction 7 + - The FPGA region API 8 + - Usage example 9 + 10 + Introduction 11 + ============ 12 + 13 + This document is meant to be a brief overview of the FPGA region API usage. A 14 + more conceptual look at regions can be found in [1]. 15 + 16 + For the purposes of this API document, let's just say that a region associates 17 + an FPGA Manager and a bridge (or bridges) with a reprogrammable region of an 18 + FPGA or the whole FPGA. The API provides a way to register a region and to 19 + program a region. 20 + 21 + Currently the only layer above fpga-region.c in the kernel is the Device Tree 22 + support (of-fpga-region.c) described in [1]. The DT support layer uses regions 23 + to program the FPGA and then DT to handle enumeration. The common region code 24 + is intended to be used by other schemes that have other ways of accomplishing 25 + enumeration after programming. 26 + 27 + An fpga-region can be set up to know the following things: 28 + * which FPGA manager to use to do the programming 29 + * which bridges to disable before programming and enable afterwards. 30 + 31 + Additional info needed to program the FPGA image is passed in the struct 32 + fpga_image_info [2] including: 33 + * pointers to the image as either a scatter-gather buffer, a contiguous 34 + buffer, or the name of a firmware file 35 + * flags indicating specifics such as whether the image is for partial 36 + reconfiguration.
37 + 38 + =================== 39 + The FPGA region API 40 + =================== 41 + 42 + To register or unregister a region: 43 + ----------------------------------- 44 + 45 + int fpga_region_register(struct device *dev, 46 + struct fpga_region *region); 47 + int fpga_region_unregister(struct fpga_region *region); 48 + 49 + An example of usage can be seen in the probe function of [3]. 50 + 51 + To program an FPGA: 52 + ------------------- 53 + int fpga_region_program_fpga(struct fpga_region *region); 54 + 55 + This function operates on the info passed in the fpga_image_info 56 + (region->info). 57 + 58 + This function will attempt to: 59 + * lock the region's mutex 60 + * lock the region's FPGA manager 61 + * build a list of FPGA bridges if a method has been specified to do so 62 + * disable the bridges 63 + * program the FPGA 64 + * re-enable the bridges 65 + * release the locks 66 + 67 + ============= 68 + Usage example 69 + ============= 70 + 71 + First, allocate the info struct: 72 + 73 + info = fpga_image_info_alloc(dev); 74 + if (!info) 75 + return -ENOMEM; 76 + 77 + Set flags as needed, e.g. 78 + 79 + info->flags |= FPGA_MGR_PARTIAL_RECONFIG; 80 + 81 + Point to your FPGA image, such as: 82 + 83 + info->sgt = &sgt; 84 + 85 + Add info to region and do the programming: 86 + 87 + region->info = info; 88 + ret = fpga_region_program_fpga(region); 89 + 90 + Then enumerate whatever hardware has appeared in the FPGA. 91 + 92 + -- 93 + [1] ../devicetree/bindings/fpga/fpga-region.txt 94 + [2] ./fpga-mgr.txt 95 + [3] ../../drivers/fpga/of-fpga-region.c
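The programming sequence that fpga_region_program_fpga walks through can be made concrete with a small userspace mock that only records ordering; every function and step name below is an illustrative stand-in, not the kernel API:

```c
#include <string.h>

/* Record each phase of the documented programming sequence (lock, disable
 * bridges, program, re-enable, unlock) so the ordering can be inspected. */
static const char *steps[16];
static int nsteps;

static void step(const char *name)
{
	steps[nsteps++] = name;
}

/* Mock of the sequence described in "To program an FPGA" above. */
static int mock_program_region(void)
{
	step("lock region mutex");
	step("lock FPGA manager");
	step("build bridge list");
	step("disable bridges");
	step("program FPGA");
	step("enable bridges");
	step("release locks");
	return 0;
}
```

The point of the ordering is that bridges isolate the region while its logic is in flux: they must be down before programming starts and come back up only after it succeeds.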
+23
Documentation/fpga/overview.txt
··· 1 + Linux kernel FPGA support 2 + 3 + Alan Tull 2017 4 + 5 + The main point of this project has been to separate out the upper layers 6 + that know when to reprogram a FPGA from the lower layers that know how to 7 + reprogram a specific FPGA device. The intention is to make this manufacturer 8 + agnostic, understanding that of course the FPGA images are very device specific 9 + themselves. 10 + 11 + The framework in the kernel includes: 12 + * low level FPGA manager drivers that know how to program a specific device 13 + * the fpga-mgr framework they are registered with 14 + * low level FPGA bridge drivers for hard/soft bridges which are intended to 15 + be disabled during FPGA programming 16 + * the fpga-bridge framework they are registered with 17 + * the fpga-region framework which associates and controls managers and bridges 18 + as reconfigurable regions 19 + * the of-fpga-region support for reprogramming FPGAs when device tree overlays 20 + are applied. 21 + 22 + I would encourage you, the user, to add code that creates FPGA regions rather 23 + than trying to control managers and bridges separately.
+36 -2
MAINTAINERS
··· 3421 3421 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 3422 3422 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git 3423 3423 S: Supported 3424 - F: drivers/char/* 3425 - F: drivers/misc/* 3424 + F: drivers/char/ 3425 + F: drivers/misc/ 3426 3426 F: include/linux/miscdevice.h 3427 3427 3428 3428 CHECKPATCH ··· 12526 12526 F: lib/test_siphash.c 12527 12527 F: include/linux/siphash.h 12528 12528 12529 + SIOX 12530 + M: Gavin Schenk <g.schenk@eckelmann.de> 12531 + M: Uwe Kleine-König <kernel@pengutronix.de> 12532 + S: Supported 12533 + F: drivers/siox/* 12534 + F: include/trace/events/siox.h 12535 + 12529 12536 SIS 190 ETHERNET DRIVER 12530 12537 M: Francois Romieu <romieu@fr.zoreil.com> 12531 12538 L: netdev@vger.kernel.org ··· 12583 12576 T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git 12584 12577 F: include/linux/srcu.h 12585 12578 F: kernel/rcu/srcu.c 12579 + 12580 + SERIAL LOW-POWER INTER-CHIP MEDIA BUS (SLIMbus) 12581 + M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> 12582 + L: alsa-devel@alsa-project.org (moderated for non-subscribers) 12583 + S: Maintained 12584 + F: drivers/slimbus/ 12585 + F: Documentation/devicetree/bindings/slimbus/ 12586 + F: include/linux/slimbus.h 12586 12587 12587 12588 SMACK SECURITY MODULE 12588 12589 M: Casey Schaufler <casey@schaufler-ca.com> ··· 12816 12801 F: Documentation/sound/alsa/soc/ 12817 12802 F: sound/soc/ 12818 12803 F: include/sound/soc* 12804 + 12805 + SOUNDWIRE SUBSYSTEM 12806 + M: Vinod Koul <vinod.koul@intel.com> 12807 + M: Sanyog Kale <sanyog.r.kale@intel.com> 12808 + R: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> 12809 + L: alsa-devel@alsa-project.org (moderated for non-subscribers) 12810 + S: Supported 12811 + F: Documentation/driver-api/soundwire/ 12812 + F: drivers/soundwire/ 12813 + F: include/linux/soundwire/ 12819 12814 12820 12815 SP2 MEDIA DRIVER 12821 12816 M: Olli Salonen <olli.salonen@iki.fi> ··· 14696 14671 S: 
Maintained 14697 14672 F: drivers/virtio/virtio_input.c 14698 14673 F: include/uapi/linux/virtio_input.h 14674 + 14675 + VIRTUAL BOX GUEST DEVICE DRIVER 14676 + M: Hans de Goede <hdegoede@redhat.com> 14677 + M: Arnd Bergmann <arnd@arndb.de> 14678 + M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 14679 + S: Maintained 14680 + F: include/linux/vbox_utils.h 14681 + F: include/uapi/linux/vbox*.h 14682 + F: drivers/virt/vboxguest/ 14699 14683 14700 14684 VIRTUAL SERIO DEVICE DRIVER 14701 14685 M: Stephen Chandler Paul <thatslyude@gmail.com>
+14 -7
arch/x86/hyperv/hv_init.c
··· 239 239 } 240 240 EXPORT_SYMBOL_GPL(hyperv_report_panic); 241 241 242 - bool hv_is_hypercall_page_setup(void) 242 + bool hv_is_hyperv_initialized(void) 243 243 { 244 244 union hv_x64_msr_hypercall_contents hypercall_msr; 245 245 246 - /* Check if the hypercall page is setup */ 246 + /* 247 + * Ensure that we're really on Hyper-V, and not a KVM or Xen 248 + * emulation of Hyper-V 249 + */ 250 + if (x86_hyper_type != X86_HYPER_MS_HYPERV) 251 + return false; 252 + 253 + /* 254 + * Verify that earlier initialization succeeded by checking 255 + * that the hypercall page is setup 256 + */ 247 257 hypercall_msr.as_uint64 = 0; 248 258 rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64); 249 259 250 - if (!hypercall_msr.enable) 251 - return false; 252 - 253 - return true; 260 + return hypercall_msr.enable; 254 261 } 255 - EXPORT_SYMBOL_GPL(hv_is_hypercall_page_setup); 262 + EXPORT_SYMBOL_GPL(hv_is_hyperv_initialized);
+2 -2
arch/x86/include/asm/mshyperv.h
··· 314 314 void hyperv_setup_mmu_ops(void); 315 315 void hyper_alloc_mmu(void); 316 316 void hyperv_report_panic(struct pt_regs *regs, long err); 317 - bool hv_is_hypercall_page_setup(void); 317 + bool hv_is_hyperv_initialized(void); 318 318 void hyperv_cleanup(void); 319 319 #else /* CONFIG_HYPERV */ 320 320 static inline void hyperv_init(void) {} 321 - static inline bool hv_is_hypercall_page_setup(void) { return false; } 321 + static inline bool hv_is_hyperv_initialized(void) { return false; } 322 322 static inline void hyperv_cleanup(void) {} 323 323 static inline void hyperv_setup_mmu_ops(void) {} 324 324 #endif /* CONFIG_HYPERV */
+6
drivers/Kconfig
··· 153 153 154 154 source "drivers/rpmsg/Kconfig" 155 155 156 + source "drivers/soundwire/Kconfig" 157 + 156 158 source "drivers/soc/Kconfig" 157 159 158 160 source "drivers/devfreq/Kconfig" ··· 214 212 source "drivers/opp/Kconfig" 215 213 216 214 source "drivers/visorbus/Kconfig" 215 + 216 + source "drivers/siox/Kconfig" 217 + 218 + source "drivers/slimbus/Kconfig" 217 219 218 220 endmenu
+3
drivers/Makefile
··· 87 87 obj-$(CONFIG_SPI) += spi/ 88 88 obj-$(CONFIG_SPMI) += spmi/ 89 89 obj-$(CONFIG_HSI) += hsi/ 90 + obj-$(CONFIG_SLIMBUS) += slimbus/ 90 91 obj-y += net/ 91 92 obj-$(CONFIG_ATM) += atm/ 92 93 obj-$(CONFIG_FUSION) += message/ ··· 158 157 obj-$(CONFIG_HWSPINLOCK) += hwspinlock/ 159 158 obj-$(CONFIG_REMOTEPROC) += remoteproc/ 160 159 obj-$(CONFIG_RPMSG) += rpmsg/ 160 + obj-$(CONFIG_SOUNDWIRE) += soundwire/ 161 161 162 162 # Virtualization drivers 163 163 obj-$(CONFIG_VIRT_DRIVERS) += virt/ ··· 187 185 obj-$(CONFIG_TEE) += tee/ 188 186 obj-$(CONFIG_MULTIPLEXER) += mux/ 189 187 obj-$(CONFIG_UNISYS_VISORBUS) += visorbus/ 188 + obj-$(CONFIG_SIOX) += siox/
+135 -61
drivers/android/binder.c
··· 141 141 }; 142 142 static uint32_t binder_debug_mask = BINDER_DEBUG_USER_ERROR | 143 143 BINDER_DEBUG_FAILED_TRANSACTION | BINDER_DEBUG_DEAD_TRANSACTION; 144 - module_param_named(debug_mask, binder_debug_mask, uint, S_IWUSR | S_IRUGO); 144 + module_param_named(debug_mask, binder_debug_mask, uint, 0644); 145 145 146 146 static char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES; 147 147 module_param_named(devices, binder_devices_param, charp, 0444); ··· 160 160 return ret; 161 161 } 162 162 module_param_call(stop_on_user_error, binder_set_stop_on_user_error, 163 - param_get_int, &binder_stop_on_user_error, S_IWUSR | S_IRUGO); 163 + param_get_int, &binder_stop_on_user_error, 0644); 164 164 165 165 #define binder_debug(mask, x...) \ 166 166 do { \ ··· 249 249 unsigned int cur = atomic_inc_return(&log->cur); 250 250 251 251 if (cur >= ARRAY_SIZE(log->entry)) 252 - log->full = 1; 252 + log->full = true; 253 253 e = &log->entry[cur % ARRAY_SIZE(log->entry)]; 254 254 WRITE_ONCE(e->debug_id_done, 0); 255 255 /* ··· 493 493 * (protected by @inner_lock) 494 494 * @todo: list of work for this process 495 495 * (protected by @inner_lock) 496 - * @wait: wait queue head to wait for proc work 497 - * (invariant after initialized) 498 496 * @stats: per-process binder statistics 499 497 * (atomics, no lock needed) 500 498 * @delivered_death: list of delivered death notification ··· 535 537 bool is_dead; 536 538 537 539 struct list_head todo; 538 - wait_queue_head_t wait; 539 540 struct binder_stats stats; 540 541 struct list_head delivered_death; 541 542 int max_threads; ··· 576 579 * (protected by @proc->inner_lock) 577 580 * @todo: list of work to do for this thread 578 581 * (protected by @proc->inner_lock) 582 + * @process_todo: whether work in @todo should be processed 583 + * (protected by @proc->inner_lock) 579 584 * @return_error: transaction errors reported by this thread 580 585 * (only accessed by this thread) 581 586 * @reply_error: transaction errors 
reported by target thread ··· 603 604 bool looper_need_return; /* can be written by other thread */ 604 605 struct binder_transaction *transaction_stack; 605 606 struct list_head todo; 607 + bool process_todo; 606 608 struct binder_error return_error; 607 609 struct binder_error reply_error; 608 610 wait_queue_head_t wait; ··· 789 789 return ret; 790 790 } 791 791 792 + /** 793 + * binder_enqueue_work_ilocked() - Add an item to the work list 794 + * @work: struct binder_work to add to list 795 + * @target_list: list to add work to 796 + * 797 + * Adds the work to the specified list. Asserts that work 798 + * is not already on a list. 799 + * 800 + * Requires the proc->inner_lock to be held. 801 + */ 792 802 static void 793 803 binder_enqueue_work_ilocked(struct binder_work *work, 794 804 struct list_head *target_list) ··· 809 799 } 810 800 811 801 /** 812 - * binder_enqueue_work() - Add an item to the work list 813 - * @proc: binder_proc associated with list 802 + * binder_enqueue_deferred_thread_work_ilocked() - Add deferred thread work 803 + * @thread: thread to queue work to 814 804 * @work: struct binder_work to add to list 815 - * @target_list: list to add work to 816 805 * 817 - * Adds the work to the specified list. Asserts that work 818 - * is not already on a list. 806 + * Adds the work to the todo list of the thread. Doesn't set the process_todo 807 + * flag, which means that (if it wasn't already set) the thread will go to 808 + * sleep without handling this work when it calls read. 809 + * 810 + * Requires the proc->inner_lock to be held. 
819 811 */ 820 812 static void 821 - binder_enqueue_work(struct binder_proc *proc, 822 - struct binder_work *work, 823 - struct list_head *target_list) 813 + binder_enqueue_deferred_thread_work_ilocked(struct binder_thread *thread, 814 + struct binder_work *work) 824 815 { 825 - binder_inner_proc_lock(proc); 826 - binder_enqueue_work_ilocked(work, target_list); 827 - binder_inner_proc_unlock(proc); 816 + binder_enqueue_work_ilocked(work, &thread->todo); 817 + } 818 + 819 + /** 820 + * binder_enqueue_thread_work_ilocked() - Add an item to the thread work list 821 + * @thread: thread to queue work to 822 + * @work: struct binder_work to add to list 823 + * 824 + * Adds the work to the todo list of the thread, and enables processing 825 + * of the todo queue. 826 + * 827 + * Requires the proc->inner_lock to be held. 828 + */ 829 + static void 830 + binder_enqueue_thread_work_ilocked(struct binder_thread *thread, 831 + struct binder_work *work) 832 + { 833 + binder_enqueue_work_ilocked(work, &thread->todo); 834 + thread->process_todo = true; 835 + } 836 + 837 + /** 838 + * binder_enqueue_thread_work() - Add an item to the thread work list 839 + * @thread: thread to queue work to 840 + * @work: struct binder_work to add to list 841 + * 842 + * Adds the work to the todo list of the thread, and enables processing 843 + * of the todo queue. 
844 + */ 845 + static void 846 + binder_enqueue_thread_work(struct binder_thread *thread, 847 + struct binder_work *work) 848 + { 849 + binder_inner_proc_lock(thread->proc); 850 + binder_enqueue_thread_work_ilocked(thread, work); 851 + binder_inner_proc_unlock(thread->proc); 828 852 } 829 853 830 854 static void ··· 984 940 static bool binder_has_work_ilocked(struct binder_thread *thread, 985 941 bool do_proc_work) 986 942 { 987 - return !binder_worklist_empty_ilocked(&thread->todo) || 943 + return thread->process_todo || 988 944 thread->looper_need_return || 989 945 (do_proc_work && 990 946 !binder_worklist_empty_ilocked(&thread->proc->todo)); ··· 1272 1228 node->local_strong_refs++; 1273 1229 if (!node->has_strong_ref && target_list) { 1274 1230 binder_dequeue_work_ilocked(&node->work); 1231 + /* 1232 + * Note: this function is the only place where we queue 1233 + * directly to a thread->todo without using the 1234 + * corresponding binder_enqueue_thread_work() helper 1235 + * functions; in this case it's ok to not set the 1236 + * process_todo flag, since we know this node work will 1237 + * always be followed by other work that starts queue 1238 + * processing: in case of synchronous transactions, a 1239 + * BR_REPLY or BR_ERROR; in case of oneway 1240 + * transactions, a BR_TRANSACTION_COMPLETE. 
1241 + */ 1275 1242 binder_enqueue_work_ilocked(&node->work, target_list); 1276 1243 } 1277 1244 } else { ··· 1294 1239 node->debug_id); 1295 1240 return -EINVAL; 1296 1241 } 1242 + /* 1243 + * See comment above 1244 + */ 1297 1245 binder_enqueue_work_ilocked(&node->work, target_list); 1298 1246 } 1299 1247 } ··· 1986 1928 binder_pop_transaction_ilocked(target_thread, t); 1987 1929 if (target_thread->reply_error.cmd == BR_OK) { 1988 1930 target_thread->reply_error.cmd = error_code; 1989 - binder_enqueue_work_ilocked( 1990 - &target_thread->reply_error.work, 1991 - &target_thread->todo); 1931 + binder_enqueue_thread_work_ilocked( 1932 + target_thread, 1933 + &target_thread->reply_error.work); 1992 1934 wake_up_interruptible(&target_thread->wait); 1993 1935 } else { 1994 1936 WARN(1, "Unexpected reply error: %u\n", ··· 2627 2569 struct binder_proc *proc, 2628 2570 struct binder_thread *thread) 2629 2571 { 2630 - struct list_head *target_list = NULL; 2631 2572 struct binder_node *node = t->buffer->target_node; 2632 2573 bool oneway = !!(t->flags & TF_ONE_WAY); 2633 - bool wakeup = true; 2574 + bool pending_async = false; 2634 2575 2635 2576 BUG_ON(!node); 2636 2577 binder_node_lock(node); 2637 2578 if (oneway) { 2638 2579 BUG_ON(thread); 2639 2580 if (node->has_async_transaction) { 2640 - target_list = &node->async_todo; 2641 - wakeup = false; 2581 + pending_async = true; 2642 2582 } else { 2643 - node->has_async_transaction = 1; 2583 + node->has_async_transaction = true; 2644 2584 } 2645 2585 } 2646 2586 ··· 2650 2594 return false; 2651 2595 } 2652 2596 2653 - if (!thread && !target_list) 2597 + if (!thread && !pending_async) 2654 2598 thread = binder_select_thread_ilocked(proc); 2655 2599 2656 2600 if (thread) 2657 - target_list = &thread->todo; 2658 - else if (!target_list) 2659 - target_list = &proc->todo; 2601 + binder_enqueue_thread_work_ilocked(thread, &t->work); 2602 + else if (!pending_async) 2603 + binder_enqueue_work_ilocked(&t->work, &proc->todo); 2660 
2604 else 2661 - BUG_ON(target_list != &node->async_todo); 2605 + binder_enqueue_work_ilocked(&t->work, &node->async_todo); 2662 2606 2663 - binder_enqueue_work_ilocked(&t->work, target_list); 2664 - 2665 - if (wakeup) 2607 + if (!pending_async) 2666 2608 binder_wakeup_thread_ilocked(proc, thread, !oneway /* sync */); 2667 2609 2668 2610 binder_inner_proc_unlock(proc); ··· 3155 3101 } 3156 3102 } 3157 3103 tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE; 3158 - binder_enqueue_work(proc, tcomplete, &thread->todo); 3159 3104 t->work.type = BINDER_WORK_TRANSACTION; 3160 3105 3161 3106 if (reply) { 3107 + binder_enqueue_thread_work(thread, tcomplete); 3162 3108 binder_inner_proc_lock(target_proc); 3163 3109 if (target_thread->is_dead) { 3164 3110 binder_inner_proc_unlock(target_proc); ··· 3166 3112 } 3167 3113 BUG_ON(t->buffer->async_transaction != 0); 3168 3114 binder_pop_transaction_ilocked(target_thread, in_reply_to); 3169 - binder_enqueue_work_ilocked(&t->work, &target_thread->todo); 3115 + binder_enqueue_thread_work_ilocked(target_thread, &t->work); 3170 3116 binder_inner_proc_unlock(target_proc); 3171 3117 wake_up_interruptible_sync(&target_thread->wait); 3172 3118 binder_free_transaction(in_reply_to); 3173 3119 } else if (!(t->flags & TF_ONE_WAY)) { 3174 3120 BUG_ON(t->buffer->async_transaction != 0); 3175 3121 binder_inner_proc_lock(proc); 3122 + /* 3123 + * Defer the TRANSACTION_COMPLETE, so we don't return to 3124 + * userspace immediately; this allows the target process to 3125 + * immediately start processing this transaction, reducing 3126 + * latency. We will then return the TRANSACTION_COMPLETE when 3127 + * the target replies (or there is an error). 
3128 + */ 3129 + binder_enqueue_deferred_thread_work_ilocked(thread, tcomplete); 3176 3130 t->need_reply = 1; 3177 3131 t->from_parent = thread->transaction_stack; 3178 3132 thread->transaction_stack = t; ··· 3194 3132 } else { 3195 3133 BUG_ON(target_node == NULL); 3196 3134 BUG_ON(t->buffer->async_transaction != 1); 3135 + binder_enqueue_thread_work(thread, tcomplete); 3197 3136 if (!binder_proc_transaction(t, target_proc, NULL)) 3198 3137 goto err_dead_proc_or_thread; 3199 3138 } ··· 3273 3210 BUG_ON(thread->return_error.cmd != BR_OK); 3274 3211 if (in_reply_to) { 3275 3212 thread->return_error.cmd = BR_TRANSACTION_COMPLETE; 3276 - binder_enqueue_work(thread->proc, 3277 - &thread->return_error.work, 3278 - &thread->todo); 3213 + binder_enqueue_thread_work(thread, &thread->return_error.work); 3279 3214 binder_send_failed_reply(in_reply_to, return_error); 3280 3215 } else { 3281 3216 thread->return_error.cmd = return_error; 3282 - binder_enqueue_work(thread->proc, 3283 - &thread->return_error.work, 3284 - &thread->todo); 3217 + binder_enqueue_thread_work(thread, &thread->return_error.work); 3285 3218 } 3286 3219 } 3287 3220 ··· 3483 3424 w = binder_dequeue_work_head_ilocked( 3484 3425 &buf_node->async_todo); 3485 3426 if (!w) { 3486 - buf_node->has_async_transaction = 0; 3427 + buf_node->has_async_transaction = false; 3487 3428 } else { 3488 3429 binder_enqueue_work_ilocked( 3489 3430 w, &proc->todo); ··· 3581 3522 WARN_ON(thread->return_error.cmd != 3582 3523 BR_OK); 3583 3524 thread->return_error.cmd = BR_ERROR; 3584 - binder_enqueue_work( 3585 - thread->proc, 3586 - &thread->return_error.work, 3587 - &thread->todo); 3525 + binder_enqueue_thread_work( 3526 + thread, 3527 + &thread->return_error.work); 3588 3528 binder_debug( 3589 3529 BINDER_DEBUG_FAILED_TRANSACTION, 3590 3530 "%d:%d BC_REQUEST_DEATH_NOTIFICATION failed\n", ··· 3663 3605 if (thread->looper & 3664 3606 (BINDER_LOOPER_STATE_REGISTERED | 3665 3607 BINDER_LOOPER_STATE_ENTERED)) 3666 - 
binder_enqueue_work_ilocked( 3667 - &death->work, 3668 - &thread->todo); 3608 + binder_enqueue_thread_work_ilocked( 3609 + thread, 3610 + &death->work); 3669 3611 else { 3670 3612 binder_enqueue_work_ilocked( 3671 3613 &death->work, ··· 3720 3662 if (thread->looper & 3721 3663 (BINDER_LOOPER_STATE_REGISTERED | 3722 3664 BINDER_LOOPER_STATE_ENTERED)) 3723 - binder_enqueue_work_ilocked( 3724 - &death->work, &thread->todo); 3665 + binder_enqueue_thread_work_ilocked( 3666 + thread, &death->work); 3725 3667 else { 3726 3668 binder_enqueue_work_ilocked( 3727 3669 &death->work, ··· 3895 3837 break; 3896 3838 } 3897 3839 w = binder_dequeue_work_head_ilocked(list); 3840 + if (binder_worklist_empty_ilocked(&thread->todo)) 3841 + thread->process_todo = false; 3898 3842 3899 3843 switch (w->type) { 3900 3844 case BINDER_WORK_TRANSACTION: { ··· 4362 4302 if (t) 4363 4303 spin_lock(&t->lock); 4364 4304 } 4305 + 4306 + /* 4307 + * If this thread used poll, make sure we remove the waitqueue 4308 + * from any epoll data structures holding it with POLLFREE. 4309 + * waitqueue_active() is safe to use here because we're holding 4310 + * the inner lock. 
4311 + */ 4312 + if ((thread->looper & BINDER_LOOPER_STATE_POLL) && 4313 + waitqueue_active(&thread->wait)) { 4314 + wake_up_poll(&thread->wait, POLLHUP | POLLFREE); 4315 + } 4316 + 4365 4317 binder_inner_proc_unlock(thread->proc); 4366 4318 4367 4319 if (send_reply) ··· 4718 4646 return 0; 4719 4647 4720 4648 err_bad_arg: 4721 - pr_err("binder_mmap: %d %lx-%lx %s failed %d\n", 4649 + pr_err("%s: %d %lx-%lx %s failed %d\n", __func__, 4722 4650 proc->pid, vma->vm_start, vma->vm_end, failure_string, ret); 4723 4651 return ret; 4724 4652 } ··· 4728 4656 struct binder_proc *proc; 4729 4657 struct binder_device *binder_dev; 4730 4658 4731 - binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_open: %d:%d\n", 4659 + binder_debug(BINDER_DEBUG_OPEN_CLOSE, "%s: %d:%d\n", __func__, 4732 4660 current->group_leader->pid, current->pid); 4733 4661 4734 4662 proc = kzalloc(sizeof(*proc), GFP_KERNEL); ··· 4767 4695 * anyway print all contexts that a given PID has, so this 4768 4696 * is not a problem. 4769 4697 */ 4770 - proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO, 4698 + proc->debugfs_entry = debugfs_create_file(strbuf, 0444, 4771 4699 binder_debugfs_dir_entry_proc, 4772 4700 (void *)(unsigned long)proc->pid, 4773 4701 &binder_proc_fops); ··· 5596 5524 struct binder_device *device; 5597 5525 struct hlist_node *tmp; 5598 5526 5599 - binder_alloc_shrinker_init(); 5527 + ret = binder_alloc_shrinker_init(); 5528 + if (ret) 5529 + return ret; 5600 5530 5601 5531 atomic_set(&binder_transaction_log.cur, ~0U); 5602 5532 atomic_set(&binder_transaction_log_failed.cur, ~0U); ··· 5610 5536 5611 5537 if (binder_debugfs_dir_entry_root) { 5612 5538 debugfs_create_file("state", 5613 - S_IRUGO, 5539 + 0444, 5614 5540 binder_debugfs_dir_entry_root, 5615 5541 NULL, 5616 5542 &binder_state_fops); 5617 5543 debugfs_create_file("stats", 5618 - S_IRUGO, 5544 + 0444, 5619 5545 binder_debugfs_dir_entry_root, 5620 5546 NULL, 5621 5547 &binder_stats_fops); 5622 5548 
debugfs_create_file("transactions", 5623 - S_IRUGO, 5549 + 0444, 5624 5550 binder_debugfs_dir_entry_root, 5625 5551 NULL, 5626 5552 &binder_transactions_fops); 5627 5553 debugfs_create_file("transaction_log", 5628 - S_IRUGO, 5554 + 0444, 5629 5555 binder_debugfs_dir_entry_root, 5630 5556 &binder_transaction_log, 5631 5557 &binder_transaction_log_fops); 5632 5558 debugfs_create_file("failed_transaction_log", 5633 - S_IRUGO, 5559 + 0444, 5634 5560 binder_debugfs_dir_entry_root, 5635 5561 &binder_transaction_log_failed, 5636 5562 &binder_transaction_log_fops);
+20 -9
drivers/android/binder_alloc.c
··· 281 281 goto err_vm_insert_page_failed; 282 282 } 283 283 284 + if (index + 1 > alloc->pages_high) 285 + alloc->pages_high = index + 1; 286 + 284 287 trace_binder_alloc_page_end(alloc, index); 285 288 /* vm_insert_page does not seem to increment the refcount */ 286 289 } ··· 327 324 return vma ? -ENOMEM : -ESRCH; 328 325 } 329 326 330 - struct binder_buffer *binder_alloc_new_buf_locked(struct binder_alloc *alloc, 331 - size_t data_size, 332 - size_t offsets_size, 333 - size_t extra_buffers_size, 334 - int is_async) 327 + static struct binder_buffer *binder_alloc_new_buf_locked( 328 + struct binder_alloc *alloc, 329 + size_t data_size, 330 + size_t offsets_size, 331 + size_t extra_buffers_size, 332 + int is_async) 335 333 { 336 334 struct rb_node *n = alloc->free_buffers.rb_node; 337 335 struct binder_buffer *buffer; ··· 670 666 goto err_already_mapped; 671 667 } 672 668 673 - area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP); 669 + area = get_vm_area(vma->vm_end - vma->vm_start, VM_ALLOC); 674 670 if (area == NULL) { 675 671 ret = -ENOMEM; 676 672 failure_string = "get_vm_area"; ··· 857 853 } 858 854 mutex_unlock(&alloc->mutex); 859 855 seq_printf(m, " pages: %d:%d:%d\n", active, lru, free); 856 + seq_printf(m, " pages high watermark: %zu\n", alloc->pages_high); 860 857 } 861 858 862 859 /** ··· 1007 1002 INIT_LIST_HEAD(&alloc->buffers); 1008 1003 } 1009 1004 1010 - void binder_alloc_shrinker_init(void) 1005 + int binder_alloc_shrinker_init(void) 1011 1006 { 1012 - list_lru_init(&binder_alloc_lru); 1013 - register_shrinker(&binder_shrinker); 1007 + int ret = list_lru_init(&binder_alloc_lru); 1008 + 1009 + if (ret == 0) { 1010 + ret = register_shrinker(&binder_shrinker); 1011 + if (ret) 1012 + list_lru_destroy(&binder_alloc_lru); 1013 + } 1014 + return ret; 1014 1015 }
+3 -1
drivers/android/binder_alloc.h
··· 92 92 * @pages: array of binder_lru_page 93 93 * @buffer_size: size of address space specified via mmap 94 94 * @pid: pid for associated binder_proc (invariant after init) 95 + * @pages_high: high watermark of offset in @pages 95 96 * 96 97 * Bookkeeping structure for per-proc address space management for binder 97 98 * buffers. It is normally initialized during binder_init() and binder_mmap() ··· 113 112 size_t buffer_size; 114 113 uint32_t buffer_free; 115 114 int pid; 115 + size_t pages_high; 116 116 }; 117 117 118 118 #ifdef CONFIG_ANDROID_BINDER_IPC_SELFTEST ··· 130 128 size_t extra_buffers_size, 131 129 int is_async); 132 130 extern void binder_alloc_init(struct binder_alloc *alloc); 133 - void binder_alloc_shrinker_init(void); 131 + extern int binder_alloc_shrinker_init(void); 134 132 extern void binder_alloc_vma_close(struct binder_alloc *alloc); 135 133 extern struct binder_buffer * 136 134 binder_alloc_prepare_to_free(struct binder_alloc *alloc,
+4
drivers/auxdisplay/img-ascii-lcd.c
··· 441 441 .remove = img_ascii_lcd_remove, 442 442 }; 443 443 module_platform_driver(img_ascii_lcd_driver); 444 + 445 + MODULE_DESCRIPTION("Imagination Technologies ASCII LCD Display"); 446 + MODULE_AUTHOR("Paul Burton <paul.burton@mips.com>"); 447 + MODULE_LICENSE("GPL");
+4
drivers/base/regmap/Kconfig
··· 20 20 tristate 21 21 depends on I2C 22 22 23 + config REGMAP_SLIMBUS 24 + tristate 25 + depends on SLIMBUS 26 + 23 27 config REGMAP_SPI 24 28 tristate 25 29 depends on SPI
+1
drivers/base/regmap/Makefile
··· 8 8 obj-$(CONFIG_DEBUG_FS) += regmap-debugfs.o 9 9 obj-$(CONFIG_REGMAP_AC97) += regmap-ac97.o 10 10 obj-$(CONFIG_REGMAP_I2C) += regmap-i2c.o 11 + obj-$(CONFIG_REGMAP_SLIMBUS) += regmap-slimbus.o 11 12 obj-$(CONFIG_REGMAP_SPI) += regmap-spi.o 12 13 obj-$(CONFIG_REGMAP_SPMI) += regmap-spmi.o 13 14 obj-$(CONFIG_REGMAP_MMIO) += regmap-mmio.o
+80
drivers/base/regmap/regmap-slimbus.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) 2017, Linaro Ltd. 3 + 4 + #include <linux/regmap.h> 5 + #include <linux/slimbus.h> 6 + #include <linux/module.h> 7 + 8 + #include "internal.h" 9 + 10 + static int regmap_slimbus_byte_reg_read(void *context, unsigned int reg, 11 + unsigned int *val) 12 + { 13 + struct slim_device *sdev = context; 14 + int v; 15 + 16 + v = slim_readb(sdev, reg); 17 + 18 + if (v < 0) 19 + return v; 20 + 21 + *val = v; 22 + 23 + return 0; 24 + } 25 + 26 + static int regmap_slimbus_byte_reg_write(void *context, unsigned int reg, 27 + unsigned int val) 28 + { 29 + struct slim_device *sdev = context; 30 + 31 + return slim_writeb(sdev, reg, val); 32 + } 33 + 34 + static struct regmap_bus regmap_slimbus_bus = { 35 + .reg_write = regmap_slimbus_byte_reg_write, 36 + .reg_read = regmap_slimbus_byte_reg_read, 37 + .reg_format_endian_default = REGMAP_ENDIAN_LITTLE, 38 + .val_format_endian_default = REGMAP_ENDIAN_LITTLE, 39 + }; 40 + 41 + static const struct regmap_bus *regmap_get_slimbus(struct slim_device *slim, 42 + const struct regmap_config *config) 43 + { 44 + if (config->val_bits == 8 && config->reg_bits == 8) 45 + return &regmap_slimbus_bus; 46 + 47 + return ERR_PTR(-ENOTSUPP); 48 + } 49 + 50 + struct regmap *__regmap_init_slimbus(struct slim_device *slimbus, 51 + const struct regmap_config *config, 52 + struct lock_class_key *lock_key, 53 + const char *lock_name) 54 + { 55 + const struct regmap_bus *bus = regmap_get_slimbus(slimbus, config); 56 + 57 + if (IS_ERR(bus)) 58 + return ERR_CAST(bus); 59 + 60 + return __regmap_init(&slimbus->dev, bus, &slimbus->dev, config, 61 + lock_key, lock_name); 62 + } 63 + EXPORT_SYMBOL_GPL(__regmap_init_slimbus); 64 + 65 + struct regmap *__devm_regmap_init_slimbus(struct slim_device *slimbus, 66 + const struct regmap_config *config, 67 + struct lock_class_key *lock_key, 68 + const char *lock_name) 69 + { 70 + const struct regmap_bus *bus = regmap_get_slimbus(slimbus, config); 71 + 72 + if 
(IS_ERR(bus)) 73 + return ERR_CAST(bus); 74 + 75 + return __devm_regmap_init(&slimbus->dev, bus, &slimbus, config, 76 + lock_key, lock_name); 77 + } 78 + EXPORT_SYMBOL_GPL(__devm_regmap_init_slimbus); 79 + 80 + MODULE_LICENSE("GPL v2");
+52 -17
drivers/char/lp.c
··· 659 659 return retval; 660 660 } 661 661 662 - static int lp_set_timeout(unsigned int minor, struct timeval *par_timeout) 662 + static int lp_set_timeout(unsigned int minor, s64 tv_sec, long tv_usec) 663 663 { 664 664 long to_jiffies; 665 665 666 666 /* Convert to jiffies, place in lp_table */ 667 - if ((par_timeout->tv_sec < 0) || 668 - (par_timeout->tv_usec < 0)) { 667 + if (tv_sec < 0 || tv_usec < 0) 669 668 return -EINVAL; 669 + 670 + /* 671 + * we used to not check, so let's not make this fatal, 672 + * but deal with user space passing a 32-bit tv_nsec in 673 + * a 64-bit field, capping the timeout to 1 second 674 + * worth of microseconds, and capping the total at 675 + * MAX_JIFFY_OFFSET. 676 + */ 677 + if (tv_usec > 999999) 678 + tv_usec = 999999; 679 + 680 + if (tv_sec >= MAX_SEC_IN_JIFFIES - 1) { 681 + to_jiffies = MAX_JIFFY_OFFSET; 682 + } else { 683 + to_jiffies = DIV_ROUND_UP(tv_usec, 1000000/HZ); 684 + to_jiffies += tv_sec * (long) HZ; 670 685 } 671 - to_jiffies = DIV_ROUND_UP(par_timeout->tv_usec, 1000000/HZ); 672 - to_jiffies += par_timeout->tv_sec * (long) HZ; 686 + 673 687 if (to_jiffies <= 0) { 674 688 return -EINVAL; 675 689 } ··· 691 677 return 0; 692 678 } 693 679 680 + static int lp_set_timeout32(unsigned int minor, void __user *arg) 681 + { 682 + s32 karg[2]; 683 + 684 + if (copy_from_user(karg, arg, sizeof(karg))) 685 + return -EFAULT; 686 + 687 + return lp_set_timeout(minor, karg[0], karg[1]); 688 + } 689 + 690 + static int lp_set_timeout64(unsigned int minor, void __user *arg) 691 + { 692 + s64 karg[2]; 693 + 694 + if (copy_from_user(karg, arg, sizeof(karg))) 695 + return -EFAULT; 696 + 697 + return lp_set_timeout(minor, karg[0], karg[1]); 698 + } 699 + 694 700 static long lp_ioctl(struct file *file, unsigned int cmd, 695 701 unsigned long arg) 696 702 { 697 703 unsigned int minor; 698 - struct timeval par_timeout; 699 704 int ret; 700 705 701 706 minor = iminor(file_inode(file)); 702 707 mutex_lock(&lp_mutex); 703 708 switch (cmd) { 
704 - case LPSETTIMEOUT: 705 - if (copy_from_user(&par_timeout, (void __user *)arg, 706 - sizeof (struct timeval))) { 707 - ret = -EFAULT; 709 + case LPSETTIMEOUT_OLD: 710 + if (BITS_PER_LONG == 32) { 711 + ret = lp_set_timeout32(minor, (void __user *)arg); 708 712 break; 709 713 } 710 - ret = lp_set_timeout(minor, &par_timeout); 714 + /* fallthrough for 64-bit */ 715 + case LPSETTIMEOUT_NEW: 716 + ret = lp_set_timeout64(minor, (void __user *)arg); 711 717 break; 712 718 default: 713 719 ret = lp_do_ioctl(minor, cmd, arg, (void __user *)arg); ··· 743 709 unsigned long arg) 744 710 { 745 711 unsigned int minor; 746 - struct timeval par_timeout; 747 712 int ret; 748 713 749 714 minor = iminor(file_inode(file)); 750 715 mutex_lock(&lp_mutex); 751 716 switch (cmd) { 752 - case LPSETTIMEOUT: 753 - if (compat_get_timeval(&par_timeout, compat_ptr(arg))) { 754 - ret = -EFAULT; 717 + case LPSETTIMEOUT_OLD: 718 + if (!COMPAT_USE_64BIT_TIME) { 719 + ret = lp_set_timeout32(minor, (void __user *)arg); 755 720 break; 756 721 } 757 - ret = lp_set_timeout(minor, &par_timeout); 722 + /* fallthrough for x32 mode */ 723 + case LPSETTIMEOUT_NEW: 724 + ret = lp_set_timeout64(minor, (void __user *)arg); 758 725 break; 759 726 #ifdef LP_STATS 760 727 case LPGETSTATS: ··· 900 865 printk(KERN_INFO "lp: too many ports, %s ignored.\n", 901 866 str); 902 867 } else if (!strcmp(str, "reset")) { 903 - reset = 1; 868 + reset = true; 904 869 } 905 870 return 1; 906 871 }
+22 -5
drivers/char/mem.c
··· 107 107 phys_addr_t p = *ppos; 108 108 ssize_t read, sz; 109 109 void *ptr; 110 + char *bounce; 111 + int err; 110 112 111 113 if (p != *ppos) 112 114 return 0; ··· 131 129 } 132 130 #endif 133 131 132 + bounce = kmalloc(PAGE_SIZE, GFP_KERNEL); 133 + if (!bounce) 134 + return -ENOMEM; 135 + 134 136 while (count > 0) { 135 137 unsigned long remaining; 136 138 int allowed; 137 139 138 140 sz = size_inside_page(p, count); 139 141 142 + err = -EPERM; 140 143 allowed = page_is_allowed(p >> PAGE_SHIFT); 141 144 if (!allowed) 142 - return -EPERM; 145 + goto failed; 146 + 147 + err = -EFAULT; 143 148 if (allowed == 2) { 144 149 /* Show zeros for restricted memory. */ 145 150 remaining = clear_user(buf, sz); ··· 158 149 */ 159 150 ptr = xlate_dev_mem_ptr(p); 160 151 if (!ptr) 161 - return -EFAULT; 152 + goto failed; 162 153 163 - remaining = copy_to_user(buf, ptr, sz); 164 - 154 + err = probe_kernel_read(bounce, ptr, sz); 165 155 unxlate_dev_mem_ptr(p, ptr); 156 + if (err) 157 + goto failed; 158 + 159 + remaining = copy_to_user(buf, bounce, sz); 166 160 } 167 161 168 162 if (remaining) 169 - return -EFAULT; 163 + goto failed; 170 164 171 165 buf += sz; 172 166 p += sz; 173 167 count -= sz; 174 168 read += sz; 175 169 } 170 + kfree(bounce); 176 171 177 172 *ppos += read; 178 173 return read; 174 + 175 + failed: 176 + kfree(bounce); 177 + return err; 179 178 } 180 179 181 180 static ssize_t write_mem(struct file *file, const char __user *buf,
+2 -2
drivers/char/xillybus/Kconfig
··· 4 4 5 5 config XILLYBUS 6 6 tristate "Xillybus generic FPGA interface" 7 - depends on PCI || (OF_ADDRESS && OF_IRQ) 7 + depends on PCI || OF 8 8 select CRC32 9 9 help 10 10 Xillybus is a generic interface for peripherals designed on ··· 24 24 25 25 config XILLYBUS_OF 26 26 tristate "Xillybus over Device Tree" 27 - depends on OF_ADDRESS && OF_IRQ && HAS_DMA 27 + depends on OF && HAS_DMA 28 28 help 29 29 Set to M if you want Xillybus to find its resources from the 30 30 Open Firmware Flattened Device Tree. If the target is an embedded
+4 -8
drivers/char/xillybus/xillybus_of.c
··· 15 15 #include <linux/slab.h> 16 16 #include <linux/platform_device.h> 17 17 #include <linux/of.h> 18 - #include <linux/of_irq.h> 19 - #include <linux/of_address.h> 20 - #include <linux/of_device.h> 21 - #include <linux/of_platform.h> 22 18 #include <linux/err.h> 23 19 #include "xillybus.h" 24 20 ··· 119 123 struct xilly_endpoint *endpoint; 120 124 int rc; 121 125 int irq; 122 - struct resource res; 126 + struct resource *res; 123 127 struct xilly_endpoint_hardware *ephw = &of_hw; 124 128 125 129 if (of_property_read_bool(dev->of_node, "dma-coherent")) ··· 132 136 133 137 dev_set_drvdata(dev, endpoint); 134 138 135 - rc = of_address_to_resource(dev->of_node, 0, &res); 136 - endpoint->registers = devm_ioremap_resource(dev, &res); 139 + res = platform_get_resource(op, IORESOURCE_MEM, 0); 140 + endpoint->registers = devm_ioremap_resource(dev, res); 137 141 138 142 if (IS_ERR(endpoint->registers)) 139 143 return PTR_ERR(endpoint->registers); 140 144 141 - irq = irq_of_parse_and_map(dev->of_node, 0); 145 + irq = platform_get_irq(op, 0); 142 146 143 147 rc = devm_request_irq(dev, irq, xillybus_isr, 0, xillyname, endpoint); 144 148
+29 -31
drivers/eisa/eisa-bus.c
··· 75 75 76 76 static char __init *decode_eisa_sig(unsigned long addr) 77 77 { 78 - static char sig_str[EISA_SIG_LEN]; 78 + static char sig_str[EISA_SIG_LEN]; 79 79 u8 sig[4]; 80 - u16 rev; 80 + u16 rev; 81 81 int i; 82 82 83 83 for (i = 0; i < 4; i++) { ··· 96 96 if (!i && (sig[0] & 0x80)) 97 97 return NULL; 98 98 } 99 - 100 - sig_str[0] = ((sig[0] >> 2) & 0x1f) + ('A' - 1); 101 - sig_str[1] = (((sig[0] & 3) << 3) | (sig[1] >> 5)) + ('A' - 1); 102 - sig_str[2] = (sig[1] & 0x1f) + ('A' - 1); 103 - rev = (sig[2] << 8) | sig[3]; 104 - sprintf(sig_str + 3, "%04X", rev); 105 99 106 - return sig_str; 100 + sig_str[0] = ((sig[0] >> 2) & 0x1f) + ('A' - 1); 101 + sig_str[1] = (((sig[0] & 3) << 3) | (sig[1] >> 5)) + ('A' - 1); 102 + sig_str[2] = (sig[1] & 0x1f) + ('A' - 1); 103 + rev = (sig[2] << 8) | sig[3]; 104 + sprintf(sig_str + 3, "%04X", rev); 105 + 106 + return sig_str; 107 107 } 108 108 109 109 static int eisa_bus_match(struct device *dev, struct device_driver *drv) ··· 198 198 sig = decode_eisa_sig(sig_addr); 199 199 if (!sig) 200 200 return -1; /* No EISA device here */ 201 - 201 + 202 202 memcpy(edev->id.sig, sig, EISA_SIG_LEN); 203 203 edev->slot = slot; 204 204 edev->state = inb(SLOT_ADDRESS(root, slot) + EISA_CONFIG_OFFSET) ··· 222 222 223 223 if (is_forced_dev(enable_dev, enable_dev_count, root, edev)) 224 224 edev->state = EISA_CONFIG_ENABLED | EISA_CONFIG_FORCED; 225 - 225 + 226 226 if (is_forced_dev(disable_dev, disable_dev_count, root, edev)) 227 227 edev->state = EISA_CONFIG_FORCED; 228 228 ··· 275 275 edev->res[i].start = edev->res[i].end = 0; 276 276 continue; 277 277 } 278 - 278 + 279 279 if (slot) { 280 280 edev->res[i].name = NULL; 281 281 edev->res[i].start = SLOT_ADDRESS(root, slot) ··· 295 295 } 296 296 297 297 return 0; 298 - 298 + 299 299 failed: 300 300 while (--i >= 0) 301 301 release_resource(&edev->res[i]); ··· 314 314 315 315 static int __init eisa_probe(struct eisa_root_device *root) 316 316 { 317 - int i, c; 317 + int i, c; 318 318 
struct eisa_device *edev; 319 319 char *enabled_str; 320 320 ··· 322 322 323 323 /* First try to get hold of slot 0. If there is no device 324 324 * here, simply fail, unless root->force_probe is set. */ 325 - 325 + 326 326 edev = kzalloc(sizeof(*edev), GFP_KERNEL); 327 - if (!edev) { 328 - dev_err(root->dev, "EISA: Couldn't allocate mainboard slot\n"); 327 + if (!edev) 329 328 return -ENOMEM; 330 - } 331 - 329 + 332 330 if (eisa_request_resources(root, edev, 0)) { 333 331 dev_warn(root->dev, 334 - "EISA: Cannot allocate resource for mainboard\n"); 332 + "EISA: Cannot allocate resource for mainboard\n"); 335 333 kfree(edev); 336 334 if (!root->force_probe) 337 335 return -EBUSY; ··· 348 350 349 351 if (eisa_register_device(edev)) { 350 352 dev_err(&edev->dev, "EISA: Failed to register %s\n", 351 - edev->id.sig); 353 + edev->id.sig); 352 354 eisa_release_resources(edev); 353 355 kfree(edev); 354 356 } 355 - 357 + 356 358 force_probe: 357 - 358 - for (c = 0, i = 1; i <= root->slots; i++) { 359 + 360 + for (c = 0, i = 1; i <= root->slots; i++) { 359 361 edev = kzalloc(sizeof(*edev), GFP_KERNEL); 360 362 if (!edev) { 361 363 dev_err(root->dev, "EISA: Out of memory for slot %d\n", ··· 365 367 366 368 if (eisa_request_resources(root, edev, i)) { 367 369 dev_warn(root->dev, 368 - "Cannot allocate resource for EISA slot %d\n", 369 - i); 370 + "Cannot allocate resource for EISA slot %d\n", 371 + i); 370 372 kfree(edev); 371 373 continue; 372 374 } ··· 393 395 394 396 if (eisa_register_device(edev)) { 395 397 dev_err(&edev->dev, "EISA: Failed to register %s\n", 396 - edev->id.sig); 398 + edev->id.sig); 397 399 eisa_release_resources(edev); 398 400 kfree(edev); 399 401 } 400 - } 402 + } 401 403 402 404 dev_info(root->dev, "EISA: Detected %d card%s\n", c, c == 1 ? "" : "s"); 403 405 return 0; ··· 420 422 * been already registered. This prevents the virtual root 421 423 * device from registering after the real one has, for 422 424 * example... 
*/ 423 - 425 + 424 426 root->eisa_root_res.name = eisa_root_res.name; 425 427 root->eisa_root_res.start = root->res->start; 426 428 root->eisa_root_res.end = root->res->end; ··· 429 431 err = request_resource(&eisa_root_res, &root->eisa_root_res); 430 432 if (err) 431 433 return err; 432 - 434 + 433 435 root->bus_nr = eisa_bus_count++; 434 436 435 437 err = eisa_probe(root); ··· 442 444 static int __init eisa_init(void) 443 445 { 444 446 int r; 445 - 447 + 446 448 r = bus_register(&eisa_bus_type); 447 449 if (r) 448 450 return r;
+5 -5
drivers/eisa/pci_eisa.c
··· 50 50 return -1; 51 51 } 52 52 53 - pci_eisa_root.dev = &pdev->dev; 54 - pci_eisa_root.res = bus_res; 55 - pci_eisa_root.bus_base_addr = bus_res->start; 56 - pci_eisa_root.slots = EISA_MAX_SLOTS; 57 - pci_eisa_root.dma_mask = pdev->dma_mask; 53 + pci_eisa_root.dev = &pdev->dev; 54 + pci_eisa_root.res = bus_res; 55 + pci_eisa_root.bus_base_addr = bus_res->start; 56 + pci_eisa_root.slots = EISA_MAX_SLOTS; 57 + pci_eisa_root.dma_mask = pdev->dma_mask; 58 58 dev_set_drvdata(pci_eisa_root.dev, &pci_eisa_root); 59 59 60 60 if (eisa_root_register (&pci_eisa_root)) {
+9 -10
drivers/eisa/virtual_root.c
··· 35 35 }; 36 36 37 37 static struct eisa_root_device eisa_bus_root = { 38 - .dev = &eisa_root_dev.dev, 39 - .bus_base_addr = 0, 40 - .res = &ioport_resource, 41 - .slots = EISA_MAX_SLOTS, 42 - .dma_mask = 0xffffffff, 38 + .dev = &eisa_root_dev.dev, 39 + .bus_base_addr = 0, 40 + .res = &ioport_resource, 41 + .slots = EISA_MAX_SLOTS, 42 + .dma_mask = 0xffffffff, 43 43 }; 44 44 45 45 static void virtual_eisa_release (struct device *dev) ··· 50 50 static int __init virtual_eisa_root_init (void) 51 51 { 52 52 int r; 53 - 54 - if ((r = platform_device_register (&eisa_root_dev))) { 55 - return r; 56 - } 53 + 54 + if ((r = platform_device_register (&eisa_root_dev))) 55 + return r; 57 56 58 57 eisa_bus_root.force_probe = force_probe; 59 - 58 + 60 59 dev_set_drvdata(&eisa_root_dev.dev, &eisa_bus_root); 61 60 62 61 if (eisa_root_register (&eisa_bus_root)) {
+1 -1
drivers/extcon/extcon-adc-jack.c
··· 144 144 return err; 145 145 146 146 data->irq = platform_get_irq(pdev, 0); 147 - if (!data->irq) { 147 + if (data->irq < 0) { 148 148 dev_err(&pdev->dev, "platform_get_irq failed\n"); 149 149 return -ENODEV; 150 150 }
+32 -4
drivers/extcon/extcon-axp288.c
··· 1 1 /* 2 2 * extcon-axp288.c - X-Power AXP288 PMIC extcon cable detection driver 3 3 * 4 + * Copyright (C) 2016-2017 Hans de Goede <hdegoede@redhat.com> 4 5 * Copyright (C) 2015 Intel Corporation 5 6 * Author: Ramakrishna Pallala <ramakrishna.pallala@intel.com> 6 7 * ··· 98 97 struct device *dev; 99 98 struct regmap *regmap; 100 99 struct regmap_irq_chip_data *regmap_irqc; 100 + struct delayed_work det_work; 101 101 int irq[EXTCON_IRQ_END]; 102 102 struct extcon_dev *edev; 103 103 unsigned int previous_cable; 104 + bool first_detect_done; 104 105 }; 105 106 106 107 /* Power up/down reason string array */ ··· 138 135 139 136 /* Clear the register value for next reboot (write 1 to clear bit) */ 140 137 regmap_write(info->regmap, AXP288_PS_BOOT_REASON_REG, clear_mask); 138 + } 139 + 140 + static void axp288_chrg_detect_complete(struct axp288_extcon_info *info) 141 + { 142 + /* 143 + * We depend on other drivers to do things like mux the data lines, 144 + * enable/disable vbus based on the id-pin, etc. Sometimes the BIOS has 145 + * not set these things up correctly resulting in the initial charger 146 + * cable type detection giving a wrong result and we end up not charging 147 + * or charging at only 0.5A. 148 + * 149 + * So we schedule a second cable type detection after 2 seconds to 150 + * give the other drivers time to load and do their thing. 
151 + */ 152 + if (!info->first_detect_done) { 153 + queue_delayed_work(system_wq, &info->det_work, 154 + msecs_to_jiffies(2000)); 155 + info->first_detect_done = true; 156 + } 141 157 } 142 158 143 159 static int axp288_handle_chrg_det_event(struct axp288_extcon_info *info) ··· 205 183 cable = EXTCON_CHG_USB_DCP; 206 184 break; 207 185 default: 208 - dev_warn(info->dev, 209 - "disconnect or unknown or ID event\n"); 186 + dev_warn(info->dev, "unknown (reserved) bc detect result\n"); 187 + cable = EXTCON_CHG_USB_SDP; 210 188 } 211 189 212 190 no_vbus: ··· 222 200 223 201 info->previous_cable = cable; 224 202 } 203 + 204 + axp288_chrg_detect_complete(info); 225 205 226 206 return 0; 227 207 ··· 246 222 return IRQ_HANDLED; 247 223 } 248 224 249 - static void axp288_extcon_enable(struct axp288_extcon_info *info) 225 + static void axp288_extcon_det_work(struct work_struct *work) 250 226 { 227 + struct axp288_extcon_info *info = 228 + container_of(work, struct axp288_extcon_info, det_work.work); 229 + 251 230 regmap_update_bits(info->regmap, AXP288_BC_GLOBAL_REG, 252 231 BC_GLOBAL_RUN, 0); 253 232 /* Enable the charger detection logic */ ··· 272 245 info->regmap = axp20x->regmap; 273 246 info->regmap_irqc = axp20x->regmap_irqc; 274 247 info->previous_cable = EXTCON_NONE; 248 + INIT_DELAYED_WORK(&info->det_work, axp288_extcon_det_work); 275 249 276 250 platform_set_drvdata(pdev, info); 277 251 ··· 318 290 } 319 291 320 292 /* Start charger cable type detection */ 321 - axp288_extcon_enable(info); 293 + queue_delayed_work(system_wq, &info->det_work, 0); 322 294 323 295 return 0; 324 296 }
+1 -1
drivers/extcon/extcon-max77693.c
··· 266 266 static int max77693_muic_set_path(struct max77693_muic_info *info, 267 267 u8 val, bool attached) 268 268 { 269 - int ret = 0; 269 + int ret; 270 270 unsigned int ctrl1, ctrl2 = 0; 271 271 272 272 if (attached)
+1 -1
drivers/extcon/extcon-max8997.c
··· 204 204 static int max8997_muic_set_path(struct max8997_muic_info *info, 205 205 u8 val, bool attached) 206 206 { 207 - int ret = 0; 207 + int ret; 208 208 u8 ctrl1, ctrl2 = 0; 209 209 210 210 if (attached)
+57 -50
drivers/fpga/Kconfig
··· 11 11 12 12 if FPGA 13 13 14 - config FPGA_REGION 15 - tristate "FPGA Region" 16 - depends on OF && FPGA_BRIDGE 17 - help 18 - FPGA Regions allow loading FPGA images under control of 19 - the Device Tree. 20 - 21 - config FPGA_MGR_ICE40_SPI 22 - tristate "Lattice iCE40 SPI" 23 - depends on OF && SPI 24 - help 25 - FPGA manager driver support for Lattice iCE40 FPGAs over SPI. 26 - 27 - config FPGA_MGR_ALTERA_CVP 28 - tristate "Altera Arria-V/Cyclone-V/Stratix-V CvP FPGA Manager" 29 - depends on PCI 30 - help 31 - FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V 32 - and Arria 10 Altera FPGAs using the CvP interface over PCIe. 33 - 34 - config FPGA_MGR_ALTERA_PS_SPI 35 - tristate "Altera FPGA Passive Serial over SPI" 36 - depends on SPI 37 - help 38 - FPGA manager driver support for Altera Arria/Cyclone/Stratix 39 - using the passive serial interface over SPI. 40 - 41 14 config FPGA_MGR_SOCFPGA 42 15 tristate "Altera SOCFPGA FPGA Manager" 43 16 depends on ARCH_SOCFPGA || COMPILE_TEST ··· 24 51 help 25 52 FPGA manager driver support for Altera Arria10 SoCFPGA. 26 53 27 - config FPGA_MGR_TS73XX 28 - tristate "Technologic Systems TS-73xx SBC FPGA Manager" 29 - depends on ARCH_EP93XX && MACH_TS72XX 30 - help 31 - FPGA manager driver support for the Altera Cyclone II FPGA 32 - present on the TS-73xx SBC boards. 
54 + config ALTERA_PR_IP_CORE 55 + tristate "Altera Partial Reconfiguration IP Core" 56 + help 57 + Core driver support for Altera Partial Reconfiguration IP component 33 58 34 - config FPGA_MGR_XILINX_SPI 35 - tristate "Xilinx Configuration over Slave Serial (SPI)" 59 + config ALTERA_PR_IP_CORE_PLAT 60 + tristate "Platform support of Altera Partial Reconfiguration IP Core" 61 + depends on ALTERA_PR_IP_CORE && OF && HAS_IOMEM 62 + help 63 + Platform driver support for Altera Partial Reconfiguration IP 64 + component 65 + 66 + config FPGA_MGR_ALTERA_PS_SPI 67 + tristate "Altera FPGA Passive Serial over SPI" 36 68 depends on SPI 37 69 help 38 - FPGA manager driver support for Xilinx FPGA configuration 39 - over slave serial interface. 70 + FPGA manager driver support for Altera Arria/Cyclone/Stratix 71 + using the passive serial interface over SPI. 72 + 73 + config FPGA_MGR_ALTERA_CVP 74 + tristate "Altera Arria-V/Cyclone-V/Stratix-V CvP FPGA Manager" 75 + depends on PCI 76 + help 77 + FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V 78 + and Arria 10 Altera FPGAs using the CvP interface over PCIe. 40 79 41 80 config FPGA_MGR_ZYNQ_FPGA 42 81 tristate "Xilinx Zynq FPGA" ··· 57 72 help 58 73 FPGA manager driver support for Xilinx Zynq FPGAs. 59 74 75 + config FPGA_MGR_XILINX_SPI 76 + tristate "Xilinx Configuration over Slave Serial (SPI)" 77 + depends on SPI 78 + help 79 + FPGA manager driver support for Xilinx FPGA configuration 80 + over slave serial interface. 81 + 82 + config FPGA_MGR_ICE40_SPI 83 + tristate "Lattice iCE40 SPI" 84 + depends on OF && SPI 85 + help 86 + FPGA manager driver support for Lattice iCE40 FPGAs over SPI. 87 + 88 + config FPGA_MGR_TS73XX 89 + tristate "Technologic Systems TS-73xx SBC FPGA Manager" 90 + depends on ARCH_EP93XX && MACH_TS72XX 91 + help 92 + FPGA manager driver support for the Altera Cyclone II FPGA 93 + present on the TS-73xx SBC boards. 
94 + 60 95 config FPGA_BRIDGE 61 96 tristate "FPGA Bridge Framework" 62 - depends on OF 63 97 help 64 98 Say Y here if you want to support bridges connected between host 65 99 processors and FPGAs or between FPGAs. ··· 99 95 isolate one region of the FPGA from the busses while that 100 96 region is being reprogrammed. 101 97 102 - config ALTERA_PR_IP_CORE 103 - tristate "Altera Partial Reconfiguration IP Core" 104 - help 105 - Core driver support for Altera Partial Reconfiguration IP component 106 - 107 - config ALTERA_PR_IP_CORE_PLAT 108 - tristate "Platform support of Altera Partial Reconfiguration IP Core" 109 - depends on ALTERA_PR_IP_CORE && OF && HAS_IOMEM 110 - help 111 - Platform driver support for Altera Partial Reconfiguration IP 112 - component 113 - 114 98 config XILINX_PR_DECOUPLER 115 99 tristate "Xilinx LogiCORE PR Decoupler" 116 100 depends on FPGA_BRIDGE ··· 108 116 The PR Decoupler exists in the FPGA fabric to isolate one 109 117 region of the FPGA from the busses while that region is 110 118 being reprogrammed during partial reconfig. 119 + 120 + config FPGA_REGION 121 + tristate "FPGA Region" 122 + depends on FPGA_BRIDGE 123 + help 124 + FPGA Region common code. A FPGA Region controls a FPGA Manager 125 + and the FPGA Bridges associated with either a reconfigurable 126 + region of an FPGA or a whole FPGA. 127 + 128 + config OF_FPGA_REGION 129 + tristate "FPGA Region Device Tree Overlay Support" 130 + depends on OF && FPGA_REGION 131 + help 132 + Support for loading FPGA images by applying a Device Tree 133 + overlay. 111 134 112 135 endif # FPGA
+1
drivers/fpga/Makefile
··· 26 26 27 27 # High Level Interfaces 28 28 obj-$(CONFIG_FPGA_REGION) += fpga-region.o 29 + obj-$(CONFIG_OF_FPGA_REGION) += of-fpga-region.o
+88 -25
drivers/fpga/fpga-bridge.c
··· 2 2 * FPGA Bridge Framework Driver 3 3 * 4 4 * Copyright (C) 2013-2016 Altera Corporation, All Rights Reserved. 5 + * Copyright (C) 2017 Intel Corporation 5 6 * 6 7 * This program is free software; you can redistribute it and/or modify it 7 8 * under the terms and conditions of the GNU General Public License, ··· 71 70 } 72 71 EXPORT_SYMBOL_GPL(fpga_bridge_disable); 73 72 74 - /** 75 - * of_fpga_bridge_get - get an exclusive reference to a fpga bridge 76 - * 77 - * @np: node pointer of a FPGA bridge 78 - * @info: fpga image specific information 79 - * 80 - * Return fpga_bridge struct if successful. 81 - * Return -EBUSY if someone already has a reference to the bridge. 82 - * Return -ENODEV if @np is not a FPGA Bridge. 83 - */ 84 - struct fpga_bridge *of_fpga_bridge_get(struct device_node *np, 85 - struct fpga_image_info *info) 86 - 73 + static struct fpga_bridge *__fpga_bridge_get(struct device *dev, 74 + struct fpga_image_info *info) 87 75 { 88 - struct device *dev; 89 76 struct fpga_bridge *bridge; 90 77 int ret = -ENODEV; 91 78 92 - dev = class_find_device(fpga_bridge_class, NULL, np, 93 - fpga_bridge_of_node_match); 94 - if (!dev) 95 - goto err_dev; 96 - 97 79 bridge = to_fpga_bridge(dev); 98 - if (!bridge) 99 - goto err_dev; 100 80 101 81 bridge->info = info; 102 82 ··· 99 117 put_device(dev); 100 118 return ERR_PTR(ret); 101 119 } 120 + 121 + /** 122 + * of_fpga_bridge_get - get an exclusive reference to a fpga bridge 123 + * 124 + * @np: node pointer of a FPGA bridge 125 + * @info: fpga image specific information 126 + * 127 + * Return fpga_bridge struct if successful. 128 + * Return -EBUSY if someone already has a reference to the bridge. 129 + * Return -ENODEV if @np is not a FPGA Bridge. 
130 + */ 131 + struct fpga_bridge *of_fpga_bridge_get(struct device_node *np, 132 + struct fpga_image_info *info) 133 + { 134 + struct device *dev; 135 + 136 + dev = class_find_device(fpga_bridge_class, NULL, np, 137 + fpga_bridge_of_node_match); 138 + if (!dev) 139 + return ERR_PTR(-ENODEV); 140 + 141 + return __fpga_bridge_get(dev, info); 142 + } 102 143 EXPORT_SYMBOL_GPL(of_fpga_bridge_get); 144 + 145 + static int fpga_bridge_dev_match(struct device *dev, const void *data) 146 + { 147 + return dev->parent == data; 148 + } 149 + 150 + /** 151 + * fpga_bridge_get - get an exclusive reference to a fpga bridge 152 + * @dev: parent device that fpga bridge was registered with 153 + * 154 + * Given a device, get an exclusive reference to a fpga bridge. 155 + * 156 + * Return: fpga manager struct or IS_ERR() condition containing error code. 157 + */ 158 + struct fpga_bridge *fpga_bridge_get(struct device *dev, 159 + struct fpga_image_info *info) 160 + { 161 + struct device *bridge_dev; 162 + 163 + bridge_dev = class_find_device(fpga_bridge_class, NULL, dev, 164 + fpga_bridge_dev_match); 165 + if (!bridge_dev) 166 + return ERR_PTR(-ENODEV); 167 + 168 + return __fpga_bridge_get(bridge_dev, info); 169 + } 170 + EXPORT_SYMBOL_GPL(fpga_bridge_get); 103 171 104 172 /** 105 173 * fpga_bridge_put - release a reference to a bridge ··· 238 206 EXPORT_SYMBOL_GPL(fpga_bridges_put); 239 207 240 208 /** 241 - * fpga_bridges_get_to_list - get a bridge, add it to a list 209 + * of_fpga_bridge_get_to_list - get a bridge, add it to a list 242 210 * 243 211 * @np: node pointer of a FPGA bridge 244 212 * @info: fpga image specific information ··· 248 216 * 249 217 * Return 0 for success, error code from of_fpga_bridge_get() othewise. 
250 218 */ 251 - int fpga_bridge_get_to_list(struct device_node *np, 219 + int of_fpga_bridge_get_to_list(struct device_node *np, 220 + struct fpga_image_info *info, 221 + struct list_head *bridge_list) 222 + { 223 + struct fpga_bridge *bridge; 224 + unsigned long flags; 225 + 226 + bridge = of_fpga_bridge_get(np, info); 227 + if (IS_ERR(bridge)) 228 + return PTR_ERR(bridge); 229 + 230 + spin_lock_irqsave(&bridge_list_lock, flags); 231 + list_add(&bridge->node, bridge_list); 232 + spin_unlock_irqrestore(&bridge_list_lock, flags); 233 + 234 + return 0; 235 + } 236 + EXPORT_SYMBOL_GPL(of_fpga_bridge_get_to_list); 237 + 238 + /** 239 + * fpga_bridge_get_to_list - given device, get a bridge, add it to a list 240 + * 241 + * @dev: FPGA bridge device 242 + * @info: fpga image specific information 243 + * @bridge_list: list of FPGA bridges 244 + * 245 + * Get an exclusive reference to the bridge and add it to the list. 246 + * 247 + * Return 0 for success, error code from fpga_bridge_get() otherwise. 248 + */ 249 + int fpga_bridge_get_to_list(struct device *dev, 252 250 struct fpga_image_info *info, 253 251 struct list_head *bridge_list) 254 252 { 255 253 struct fpga_bridge *bridge; 256 254 unsigned long flags; 257 255 258 - bridge = of_fpga_bridge_get(np, info); 256 + bridge = fpga_bridge_get(dev, info); 259 257 if (IS_ERR(bridge)) 260 258 return PTR_ERR(bridge); 261 259 ··· 365 303 bridge->priv = priv; 366 304 367 305 device_initialize(&bridge->dev); 306 + bridge->dev.groups = br_ops->groups; 368 307 bridge->dev.class = fpga_bridge_class; 369 308 bridge->dev.parent = dev; 370 309 bridge->dev.of_node = dev->of_node; ··· 444 381 } 445 382 446 383 MODULE_DESCRIPTION("FPGA Bridge Driver"); 447 - MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>"); 384 + MODULE_AUTHOR("Alan Tull <atull@kernel.org>"); 448 385 MODULE_LICENSE("GPL v2"); 449 386 450 387 subsys_initcall(fpga_bridge_dev_init);
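The bridge diff above factors the lookup into a shared `__fpga_bridge_get()` and adds device-based variants, but the contract is unchanged throughout: a caller takes an *exclusive* reference (or gets -EBUSY if someone already holds it), and the `*_get_to_list()` helpers stash each acquired bridge on a caller-owned list so it can later be bulk enabled, disabled, and released. As a rough standalone illustration of that contract (all `toy_*` names are invented for this sketch and are not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a bridge with single-owner semantics. */
struct toy_bridge {
	int in_use;              /* nonzero while someone holds the reference */
	struct toy_bridge *next; /* list linkage, like region->bridge_list */
};

#define TOY_EBUSY (-16)

/* Mirrors the of_fpga_bridge_get() contract: exclusive use or -EBUSY. */
static int toy_bridge_get(struct toy_bridge *br)
{
	if (br->in_use)
		return TOY_EBUSY;
	br->in_use = 1;
	return 0;
}

/* Mirrors of_fpga_bridge_get_to_list(): take the reference, then push
 * the bridge onto a caller-owned list for later bulk operations. */
static int toy_bridge_get_to_list(struct toy_bridge *br,
				  struct toy_bridge **list)
{
	int ret = toy_bridge_get(br);

	if (ret)
		return ret;
	br->next = *list;
	*list = br;
	return 0;
}

/* Mirrors fpga_bridges_put(): release every bridge on the list. */
static void toy_bridges_put(struct toy_bridge **list)
{
	while (*list) {
		struct toy_bridge *br = *list;

		*list = br->next;
		br->in_use = 0;
	}
}
```

A second get on a bridge already on someone's list fails with -EBUSY, which is exactly why `of_fpga_region_get_bridges()` below gives up and puts the whole list when any single bridge is busy.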
+94 -29
drivers/fpga/fpga-mgr.c
··· 2 2 * FPGA Manager Core 3 3 * 4 4 * Copyright (C) 2013-2015 Altera Corporation 5 + * Copyright (C) 2017 Intel Corporation 5 6 * 6 7 * With code from the mailing list: 7 8 * Copyright (C) 2013 Xilinx, Inc. ··· 31 30 32 31 static DEFINE_IDA(fpga_mgr_ida); 33 32 static struct class *fpga_mgr_class; 33 + 34 + struct fpga_image_info *fpga_image_info_alloc(struct device *dev) 35 + { 36 + struct fpga_image_info *info; 37 + 38 + get_device(dev); 39 + 40 + info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); 41 + if (!info) { 42 + put_device(dev); 43 + return NULL; 44 + } 45 + 46 + info->dev = dev; 47 + 48 + return info; 49 + } 50 + EXPORT_SYMBOL_GPL(fpga_image_info_alloc); 51 + 52 + void fpga_image_info_free(struct fpga_image_info *info) 53 + { 54 + struct device *dev; 55 + 56 + if (!info) 57 + return; 58 + 59 + dev = info->dev; 60 + if (info->firmware_name) 61 + devm_kfree(dev, info->firmware_name); 62 + 63 + devm_kfree(dev, info); 64 + put_device(dev); 65 + } 66 + EXPORT_SYMBOL_GPL(fpga_image_info_free); 34 67 35 68 /* 36 69 * Call the low level driver's write_init function. This will do the ··· 172 137 * 173 138 * Return: 0 on success, negative error code otherwise. 174 139 */ 175 - int fpga_mgr_buf_load_sg(struct fpga_manager *mgr, struct fpga_image_info *info, 176 - struct sg_table *sgt) 140 + static int fpga_mgr_buf_load_sg(struct fpga_manager *mgr, 141 + struct fpga_image_info *info, 142 + struct sg_table *sgt) 177 143 { 178 144 int ret; 179 145 ··· 206 170 207 171 return fpga_mgr_write_complete(mgr, info); 208 172 } 209 - EXPORT_SYMBOL_GPL(fpga_mgr_buf_load_sg); 210 173 211 174 static int fpga_mgr_buf_load_mapped(struct fpga_manager *mgr, 212 175 struct fpga_image_info *info, ··· 245 210 * 246 211 * Return: 0 on success, negative error code otherwise. 
247 212 */ 248 - int fpga_mgr_buf_load(struct fpga_manager *mgr, struct fpga_image_info *info, 249 - const char *buf, size_t count) 213 + static int fpga_mgr_buf_load(struct fpga_manager *mgr, 214 + struct fpga_image_info *info, 215 + const char *buf, size_t count) 250 216 { 251 217 struct page **pages; 252 218 struct sg_table sgt; ··· 302 266 303 267 return rc; 304 268 } 305 - EXPORT_SYMBOL_GPL(fpga_mgr_buf_load); 306 269 307 270 /** 308 271 * fpga_mgr_firmware_load - request firmware and load to fpga ··· 317 282 * 318 283 * Return: 0 on success, negative error code otherwise. 319 284 */ 320 - int fpga_mgr_firmware_load(struct fpga_manager *mgr, 321 - struct fpga_image_info *info, 322 - const char *image_name) 285 + static int fpga_mgr_firmware_load(struct fpga_manager *mgr, 286 + struct fpga_image_info *info, 287 + const char *image_name) 323 288 { 324 289 struct device *dev = &mgr->dev; 325 290 const struct firmware *fw; ··· 342 307 343 308 return ret; 344 309 } 345 - EXPORT_SYMBOL_GPL(fpga_mgr_firmware_load); 310 + 311 + int fpga_mgr_load(struct fpga_manager *mgr, struct fpga_image_info *info) 312 + { 313 + if (info->sgt) 314 + return fpga_mgr_buf_load_sg(mgr, info, info->sgt); 315 + if (info->buf && info->count) 316 + return fpga_mgr_buf_load(mgr, info, info->buf, info->count); 317 + if (info->firmware_name) 318 + return fpga_mgr_firmware_load(mgr, info, info->firmware_name); 319 + return -EINVAL; 320 + } 321 + EXPORT_SYMBOL_GPL(fpga_mgr_load); 346 322 347 323 static const char * const state_str[] = { 348 324 [FPGA_MGR_STATE_UNKNOWN] = "unknown", ··· 410 364 static struct fpga_manager *__fpga_mgr_get(struct device *dev) 411 365 { 412 366 struct fpga_manager *mgr; 413 - int ret = -ENODEV; 414 367 415 368 mgr = to_fpga_manager(dev); 416 - if (!mgr) 417 - goto err_dev; 418 - 419 - /* Get exclusive use of fpga manager */ 420 - if (!mutex_trylock(&mgr->ref_mutex)) { 421 - ret = -EBUSY; 422 - goto err_dev; 423 - } 424 369 425 370 if 
(!try_module_get(dev->parent->driver->owner)) 426 - goto err_ll_mod; 371 + goto err_dev; 427 372 428 373 return mgr; 429 374 430 - err_ll_mod: 431 - mutex_unlock(&mgr->ref_mutex); 432 375 err_dev: 433 376 put_device(dev); 434 - return ERR_PTR(ret); 377 + return ERR_PTR(-ENODEV); 435 378 } 436 379 437 380 static int fpga_mgr_dev_match(struct device *dev, const void *data) ··· 429 394 } 430 395 431 396 /** 432 - * fpga_mgr_get - get an exclusive reference to a fpga mgr 397 + * fpga_mgr_get - get a reference to a fpga mgr 433 398 * @dev: parent device that fpga mgr was registered with 434 399 * 435 - * Given a device, get an exclusive reference to a fpga mgr. 400 + * Given a device, get a reference to a fpga mgr. 436 401 * 437 402 * Return: fpga manager struct or IS_ERR() condition containing error code. 438 403 */ ··· 453 418 } 454 419 455 420 /** 456 - * of_fpga_mgr_get - get an exclusive reference to a fpga mgr 421 + * of_fpga_mgr_get - get a reference to a fpga mgr 457 422 * @node: device node 458 423 * 459 - * Given a device node, get an exclusive reference to a fpga mgr. 424 + * Given a device node, get a reference to a fpga mgr. 460 425 * 461 426 * Return: fpga manager struct or IS_ERR() condition containing error code. 462 427 */ ··· 480 445 void fpga_mgr_put(struct fpga_manager *mgr) 481 446 { 482 447 module_put(mgr->dev.parent->driver->owner); 483 - mutex_unlock(&mgr->ref_mutex); 484 448 put_device(&mgr->dev); 485 449 } 486 450 EXPORT_SYMBOL_GPL(fpga_mgr_put); 451 + 452 + /** 453 + * fpga_mgr_lock - Lock FPGA manager for exclusive use 454 + * @mgr: fpga manager 455 + * 456 + * Given a pointer to FPGA Manager (from fpga_mgr_get() or 457 + * of_fpga_mgr_get()) attempt to get the mutex.
458 + * 459 + * Return: 0 for success or -EBUSY 460 + */ 461 + int fpga_mgr_lock(struct fpga_manager *mgr) 462 + { 463 + if (!mutex_trylock(&mgr->ref_mutex)) { 464 + dev_err(&mgr->dev, "FPGA manager is in use.\n"); 465 + return -EBUSY; 466 + } 467 + 468 + return 0; 469 + } 470 + EXPORT_SYMBOL_GPL(fpga_mgr_lock); 471 + 472 + /** 473 + * fpga_mgr_unlock - Unlock FPGA manager 474 + * @mgr: fpga manager 475 + */ 476 + void fpga_mgr_unlock(struct fpga_manager *mgr) 477 + { 478 + mutex_unlock(&mgr->ref_mutex); 479 + } 480 + EXPORT_SYMBOL_GPL(fpga_mgr_unlock); 487 481 488 482 /** 489 483 * fpga_mgr_register - register a low level fpga manager driver ··· 567 503 568 504 device_initialize(&mgr->dev); 569 505 mgr->dev.class = fpga_mgr_class; 506 + mgr->dev.groups = mops->groups; 570 507 mgr->dev.parent = dev; 571 508 mgr->dev.of_node = dev->of_node; 572 509 mgr->dev.id = id; ··· 643 578 ida_destroy(&fpga_mgr_ida); 644 579 } 645 580 646 - MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>"); 581 + MODULE_AUTHOR("Alan Tull <atull@kernel.org>"); 647 582 MODULE_DESCRIPTION("FPGA manager framework"); 648 583 MODULE_LICENSE("GPL v2"); 649 584
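The fpga-mgr.c diff above makes `fpga_mgr_buf_load_sg()`, `fpga_mgr_buf_load()`, and `fpga_mgr_firmware_load()` static, exposing a single `fpga_mgr_load()` that dispatches on which fields of `struct fpga_image_info` are populated. A standalone sketch of that priority order — sg-table first, then a contiguous buffer, then a firmware name, else -EINVAL — where the `toy_*` names and the "image.rbf" filename are illustrative, not kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for struct fpga_image_info: only the fields that drive
 * the fpga_mgr_load() dispatch shown in the diff. */
struct toy_image_info {
	void *sgt;                 /* scatter-gather table, if any */
	const char *buf;           /* contiguous image buffer, if any */
	size_t count;              /* buffer length */
	const char *firmware_name; /* firmware file name, if any */
};

enum toy_method { TOY_SG, TOY_BUF, TOY_FW, TOY_NONE };

#define TOY_EINVAL (-22)

/* Same check order as fpga_mgr_load(): the first populated source wins;
 * an info struct with no image source at all is rejected. */
static int toy_mgr_load(const struct toy_image_info *info,
			enum toy_method *method)
{
	if (info->sgt) {
		*method = TOY_SG;
		return 0;
	}
	if (info->buf && info->count) {
		*method = TOY_BUF;
		return 0;
	}
	if (info->firmware_name) {
		*method = TOY_FW;
		return 0;
	}
	*method = TOY_NONE;
	return TOY_EINVAL;
}
```

This is why callers such as `fpga_region_program_fpga()` no longer pick a load entry point themselves: they fill in the info struct and let the manager choose.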
+46 -418
drivers/fpga/fpga-region.c
··· 2 2 * FPGA Region - Device Tree support for FPGA programming under Linux 3 3 * 4 4 * Copyright (C) 2013-2016 Altera Corporation 5 + * Copyright (C) 2017 Intel Corporation 5 6 * 6 7 * This program is free software; you can redistribute it and/or modify it 7 8 * under the terms and conditions of the GNU General Public License, ··· 19 18 20 19 #include <linux/fpga/fpga-bridge.h> 21 20 #include <linux/fpga/fpga-mgr.h> 21 + #include <linux/fpga/fpga-region.h> 22 22 #include <linux/idr.h> 23 23 #include <linux/kernel.h> 24 24 #include <linux/list.h> 25 25 #include <linux/module.h> 26 - #include <linux/of_platform.h> 27 26 #include <linux/slab.h> 28 27 #include <linux/spinlock.h> 29 - 30 - /** 31 - * struct fpga_region - FPGA Region structure 32 - * @dev: FPGA Region device 33 - * @mutex: enforces exclusive reference to region 34 - * @bridge_list: list of FPGA bridges specified in region 35 - * @info: fpga image specific information 36 - */ 37 - struct fpga_region { 38 - struct device dev; 39 - struct mutex mutex; /* for exclusive reference to region */ 40 - struct list_head bridge_list; 41 - struct fpga_image_info *info; 42 - }; 43 - 44 - #define to_fpga_region(d) container_of(d, struct fpga_region, dev) 45 28 46 29 static DEFINE_IDA(fpga_region_ida); 47 30 static struct class *fpga_region_class; 48 31 49 - static const struct of_device_id fpga_region_of_match[] = { 50 - { .compatible = "fpga-region", }, 51 - {}, 52 - }; 53 - MODULE_DEVICE_TABLE(of, fpga_region_of_match); 54 - 55 - static int fpga_region_of_node_match(struct device *dev, const void *data) 56 - { 57 - return dev->of_node == data; 58 - } 59 - 60 - /** 61 - * fpga_region_find - find FPGA region 62 - * @np: device node of FPGA Region 63 - * Caller will need to put_device(&region->dev) when done. 
64 - * Returns FPGA Region struct or NULL 65 - */ 66 - static struct fpga_region *fpga_region_find(struct device_node *np) 32 + struct fpga_region *fpga_region_class_find( 33 + struct device *start, const void *data, 34 + int (*match)(struct device *, const void *)) 67 35 { 68 36 struct device *dev; 69 37 70 - dev = class_find_device(fpga_region_class, NULL, np, 71 - fpga_region_of_node_match); 38 + dev = class_find_device(fpga_region_class, start, data, match); 72 39 if (!dev) 73 40 return NULL; 74 41 75 42 return to_fpga_region(dev); 76 43 } 44 + EXPORT_SYMBOL_GPL(fpga_region_class_find); 77 45 78 46 /** 79 47 * fpga_region_get - get an exclusive reference to a fpga region ··· 64 94 } 65 95 66 96 get_device(dev); 67 - of_node_get(dev->of_node); 68 97 if (!try_module_get(dev->parent->driver->owner)) { 69 - of_node_put(dev->of_node); 70 98 put_device(dev); 71 99 mutex_unlock(&region->mutex); 72 100 return ERR_PTR(-ENODEV); 73 101 } 74 102 75 - dev_dbg(&region->dev, "get\n"); 103 + dev_dbg(dev, "get\n"); 76 104 77 105 return region; 78 106 } ··· 84 116 { 85 117 struct device *dev = &region->dev; 86 118 87 - dev_dbg(&region->dev, "put\n"); 119 + dev_dbg(dev, "put\n"); 88 120 89 121 module_put(dev->parent->driver->owner); 90 - of_node_put(dev->of_node); 91 122 put_device(dev); 92 123 mutex_unlock(&region->mutex); 93 124 } 94 125 95 126 /** 96 - * fpga_region_get_manager - get exclusive reference for FPGA manager 97 - * @region: FPGA region 98 - * 99 - * Get FPGA Manager from "fpga-mgr" property or from ancestor region. 100 - * 101 - * Caller should call fpga_mgr_put() when done with manager. 102 - * 103 - * Return: fpga manager struct or IS_ERR() condition containing error code. 
104 - */ 105 - static struct fpga_manager *fpga_region_get_manager(struct fpga_region *region) 106 - { 107 - struct device *dev = &region->dev; 108 - struct device_node *np = dev->of_node; 109 - struct device_node *mgr_node; 110 - struct fpga_manager *mgr; 111 - 112 - of_node_get(np); 113 - while (np) { 114 - if (of_device_is_compatible(np, "fpga-region")) { 115 - mgr_node = of_parse_phandle(np, "fpga-mgr", 0); 116 - if (mgr_node) { 117 - mgr = of_fpga_mgr_get(mgr_node); 118 - of_node_put(np); 119 - return mgr; 120 - } 121 - } 122 - np = of_get_next_parent(np); 123 - } 124 - of_node_put(np); 125 - 126 - return ERR_PTR(-EINVAL); 127 - } 128 - 129 - /** 130 - * fpga_region_get_bridges - create a list of bridges 131 - * @region: FPGA region 132 - * @overlay: device node of the overlay 133 - * 134 - * Create a list of bridges including the parent bridge and the bridges 135 - * specified by "fpga-bridges" property. Note that the 136 - * fpga_bridges_enable/disable/put functions are all fine with an empty list 137 - * if that happens. 138 - * 139 - * Caller should call fpga_bridges_put(&region->bridge_list) when 140 - * done with the bridges. 141 - * 142 - * Return 0 for success (even if there are no bridges specified) 143 - * or -EBUSY if any of the bridges are in use. 144 - */ 145 - static int fpga_region_get_bridges(struct fpga_region *region, 146 - struct device_node *overlay) 147 - { 148 - struct device *dev = &region->dev; 149 - struct device_node *region_np = dev->of_node; 150 - struct device_node *br, *np, *parent_br = NULL; 151 - int i, ret; 152 - 153 - /* If parent is a bridge, add to list */ 154 - ret = fpga_bridge_get_to_list(region_np->parent, region->info, 155 - &region->bridge_list); 156 - if (ret == -EBUSY) 157 - return ret; 158 - 159 - if (!ret) 160 - parent_br = region_np->parent; 161 - 162 - /* If overlay has a list of bridges, use it. 
*/ 163 - if (of_parse_phandle(overlay, "fpga-bridges", 0)) 164 - np = overlay; 165 - else 166 - np = region_np; 167 - 168 - for (i = 0; ; i++) { 169 - br = of_parse_phandle(np, "fpga-bridges", i); 170 - if (!br) 171 - break; 172 - 173 - /* If parent bridge is in list, skip it. */ 174 - if (br == parent_br) 175 - continue; 176 - 177 - /* If node is a bridge, get it and add to list */ 178 - ret = fpga_bridge_get_to_list(br, region->info, 179 - &region->bridge_list); 180 - 181 - /* If any of the bridges are in use, give up */ 182 - if (ret == -EBUSY) { 183 - fpga_bridges_put(&region->bridge_list); 184 - return -EBUSY; 185 - } 186 - } 187 - 188 - return 0; 189 - } 190 - 191 - /** 192 127 * fpga_region_program_fpga - program FPGA 193 128 * @region: FPGA region 194 - * @firmware_name: name of FPGA image firmware file 195 - * @overlay: device node of the overlay 196 - * Program an FPGA using information in the device tree. 197 - * Function assumes that there is a firmware-name property. 129 + * Program an FPGA using fpga image info (region->info). 198 130 * Return 0 for success or negative error code. 
199 131 */ 200 - static int fpga_region_program_fpga(struct fpga_region *region, 201 - const char *firmware_name, 202 - struct device_node *overlay) 132 + int fpga_region_program_fpga(struct fpga_region *region) 203 133 { 204 - struct fpga_manager *mgr; 134 + struct device *dev = &region->dev; 135 + struct fpga_image_info *info = region->info; 205 136 int ret; 206 137 207 138 region = fpga_region_get(region); 208 139 if (IS_ERR(region)) { 209 - pr_err("failed to get fpga region\n"); 140 + dev_err(dev, "failed to get FPGA region\n"); 210 141 return PTR_ERR(region); 211 142 } 212 143 213 - mgr = fpga_region_get_manager(region); 214 - if (IS_ERR(mgr)) { 215 - pr_err("failed to get fpga region manager\n"); 216 - ret = PTR_ERR(mgr); 144 + ret = fpga_mgr_lock(region->mgr); 145 + if (ret) { 146 + dev_err(dev, "FPGA manager is busy\n"); 217 147 goto err_put_region; 218 148 } 219 149 220 - ret = fpga_region_get_bridges(region, overlay); 221 - if (ret) { 222 - pr_err("failed to get fpga region bridges\n"); 223 - goto err_put_mgr; 150 + /* 151 + * In some cases, we already have a list of bridges in the 152 + * fpga region struct. Or we don't have any bridges. 
153 + */ 154 + if (region->get_bridges) { 155 + ret = region->get_bridges(region); 156 + if (ret) { 157 + dev_err(dev, "failed to get fpga region bridges\n"); 158 + goto err_unlock_mgr; 159 + } 224 160 } 225 161 226 162 ret = fpga_bridges_disable(&region->bridge_list); 227 163 if (ret) { 228 - pr_err("failed to disable region bridges\n"); 164 + dev_err(dev, "failed to disable bridges\n"); 229 165 goto err_put_br; 230 166 } 231 167 232 - ret = fpga_mgr_firmware_load(mgr, region->info, firmware_name); 168 + ret = fpga_mgr_load(region->mgr, info); 233 169 if (ret) { 234 - pr_err("failed to load fpga image\n"); 170 + dev_err(dev, "failed to load FPGA image\n"); 235 171 goto err_put_br; 236 172 } 237 173 238 174 ret = fpga_bridges_enable(&region->bridge_list); 239 175 if (ret) { 240 - pr_err("failed to enable region bridges\n"); 176 + dev_err(dev, "failed to enable region bridges\n"); 241 177 goto err_put_br; 242 178 } 243 179 244 - fpga_mgr_put(mgr); 180 + fpga_mgr_unlock(region->mgr); 245 181 fpga_region_put(region); 246 182 247 183 return 0; 248 184 249 185 err_put_br: 250 - fpga_bridges_put(&region->bridge_list); 251 - err_put_mgr: 252 - fpga_mgr_put(mgr); 186 + if (region->get_bridges) 187 + fpga_bridges_put(&region->bridge_list); 188 + err_unlock_mgr: 189 + fpga_mgr_unlock(region->mgr); 253 190 err_put_region: 254 191 fpga_region_put(region); 255 192 256 193 return ret; 257 194 } 195 + EXPORT_SYMBOL_GPL(fpga_region_program_fpga); 258 196 259 - /** 260 - * child_regions_with_firmware 261 - * @overlay: device node of the overlay 262 - * 263 - * If the overlay adds child FPGA regions, they are not allowed to have 264 - * firmware-name property. 265 - * 266 - * Return 0 for OK or -EINVAL if child FPGA region adds firmware-name. 
267 - */ 268 - static int child_regions_with_firmware(struct device_node *overlay) 197 + int fpga_region_register(struct device *dev, struct fpga_region *region) 269 198 { 270 - struct device_node *child_region; 271 - const char *child_firmware_name; 272 - int ret = 0; 273 - 274 - of_node_get(overlay); 275 - 276 - child_region = of_find_matching_node(overlay, fpga_region_of_match); 277 - while (child_region) { 278 - if (!of_property_read_string(child_region, "firmware-name", 279 - &child_firmware_name)) { 280 - ret = -EINVAL; 281 - break; 282 - } 283 - child_region = of_find_matching_node(child_region, 284 - fpga_region_of_match); 285 - } 286 - 287 - of_node_put(child_region); 288 - 289 - if (ret) 290 - pr_err("firmware-name not allowed in child FPGA region: %pOF", 291 - child_region); 292 - 293 - return ret; 294 - } 295 - 296 - /** 297 - * fpga_region_notify_pre_apply - pre-apply overlay notification 298 - * 299 - * @region: FPGA region that the overlay was applied to 300 - * @nd: overlay notification data 301 - * 302 - * Called after when an overlay targeted to a FPGA Region is about to be 303 - * applied. Function will check the properties that will be added to the FPGA 304 - * region. If the checks pass, it will program the FPGA. 305 - * 306 - * The checks are: 307 - * The overlay must add either firmware-name or external-fpga-config property 308 - * to the FPGA Region. 309 - * 310 - * firmware-name : program the FPGA 311 - * external-fpga-config : FPGA is already programmed 312 - * encrypted-fpga-config : FPGA bitstream is encrypted 313 - * 314 - * The overlay can add other FPGA regions, but child FPGA regions cannot have a 315 - * firmware-name property since those regions don't exist yet. 316 - * 317 - * If the overlay that breaks the rules, notifier returns an error and the 318 - * overlay is rejected before it goes into the main tree. 319 - * 320 - * Returns 0 for success or negative error code for failure. 
321 - */ 322 - static int fpga_region_notify_pre_apply(struct fpga_region *region, 323 - struct of_overlay_notify_data *nd) 324 - { 325 - const char *firmware_name = NULL; 326 - struct fpga_image_info *info; 327 - int ret; 328 - 329 - info = devm_kzalloc(&region->dev, sizeof(*info), GFP_KERNEL); 330 - if (!info) 331 - return -ENOMEM; 332 - 333 - region->info = info; 334 - 335 - /* Reject overlay if child FPGA Regions have firmware-name property */ 336 - ret = child_regions_with_firmware(nd->overlay); 337 - if (ret) 338 - return ret; 339 - 340 - /* Read FPGA region properties from the overlay */ 341 - if (of_property_read_bool(nd->overlay, "partial-fpga-config")) 342 - info->flags |= FPGA_MGR_PARTIAL_RECONFIG; 343 - 344 - if (of_property_read_bool(nd->overlay, "external-fpga-config")) 345 - info->flags |= FPGA_MGR_EXTERNAL_CONFIG; 346 - 347 - if (of_property_read_bool(nd->overlay, "encrypted-fpga-config")) 348 - info->flags |= FPGA_MGR_ENCRYPTED_BITSTREAM; 349 - 350 - of_property_read_string(nd->overlay, "firmware-name", &firmware_name); 351 - 352 - of_property_read_u32(nd->overlay, "region-unfreeze-timeout-us", 353 - &info->enable_timeout_us); 354 - 355 - of_property_read_u32(nd->overlay, "region-freeze-timeout-us", 356 - &info->disable_timeout_us); 357 - 358 - of_property_read_u32(nd->overlay, "config-complete-timeout-us", 359 - &info->config_complete_timeout_us); 360 - 361 - /* If FPGA was externally programmed, don't specify firmware */ 362 - if ((info->flags & FPGA_MGR_EXTERNAL_CONFIG) && firmware_name) { 363 - pr_err("error: specified firmware and external-fpga-config"); 364 - return -EINVAL; 365 - } 366 - 367 - /* FPGA is already configured externally. We're done. 
*/ 368 - if (info->flags & FPGA_MGR_EXTERNAL_CONFIG) 369 - return 0; 370 - 371 - /* If we got this far, we should be programming the FPGA */ 372 - if (!firmware_name) { 373 - pr_err("should specify firmware-name or external-fpga-config\n"); 374 - return -EINVAL; 375 - } 376 - 377 - return fpga_region_program_fpga(region, firmware_name, nd->overlay); 378 - } 379 - 380 - /** 381 - * fpga_region_notify_post_remove - post-remove overlay notification 382 - * 383 - * @region: FPGA region that was targeted by the overlay that was removed 384 - * @nd: overlay notification data 385 - * 386 - * Called after an overlay has been removed if the overlay's target was a 387 - * FPGA region. 388 - */ 389 - static void fpga_region_notify_post_remove(struct fpga_region *region, 390 - struct of_overlay_notify_data *nd) 391 - { 392 - fpga_bridges_disable(&region->bridge_list); 393 - fpga_bridges_put(&region->bridge_list); 394 - devm_kfree(&region->dev, region->info); 395 - region->info = NULL; 396 - } 397 - 398 - /** 399 - * of_fpga_region_notify - reconfig notifier for dynamic DT changes 400 - * @nb: notifier block 401 - * @action: notifier action 402 - * @arg: reconfig data 403 - * 404 - * This notifier handles programming a FPGA when a "firmware-name" property is 405 - * added to a fpga-region. 406 - * 407 - * Returns NOTIFY_OK or error if FPGA programming fails. 
408 - */ 409 - static int of_fpga_region_notify(struct notifier_block *nb, 410 - unsigned long action, void *arg) 411 - { 412 - struct of_overlay_notify_data *nd = arg; 413 - struct fpga_region *region; 414 - int ret; 415 - 416 - switch (action) { 417 - case OF_OVERLAY_PRE_APPLY: 418 - pr_debug("%s OF_OVERLAY_PRE_APPLY\n", __func__); 419 - break; 420 - case OF_OVERLAY_POST_APPLY: 421 - pr_debug("%s OF_OVERLAY_POST_APPLY\n", __func__); 422 - return NOTIFY_OK; /* not for us */ 423 - case OF_OVERLAY_PRE_REMOVE: 424 - pr_debug("%s OF_OVERLAY_PRE_REMOVE\n", __func__); 425 - return NOTIFY_OK; /* not for us */ 426 - case OF_OVERLAY_POST_REMOVE: 427 - pr_debug("%s OF_OVERLAY_POST_REMOVE\n", __func__); 428 - break; 429 - default: /* should not happen */ 430 - return NOTIFY_OK; 431 - } 432 - 433 - region = fpga_region_find(nd->target); 434 - if (!region) 435 - return NOTIFY_OK; 436 - 437 - ret = 0; 438 - switch (action) { 439 - case OF_OVERLAY_PRE_APPLY: 440 - ret = fpga_region_notify_pre_apply(region, nd); 441 - break; 442 - 443 - case OF_OVERLAY_POST_REMOVE: 444 - fpga_region_notify_post_remove(region, nd); 445 - break; 446 - } 447 - 448 - put_device(&region->dev); 449 - 450 - if (ret) 451 - return notifier_from_errno(ret); 452 - 453 - return NOTIFY_OK; 454 - } 455 - 456 - static struct notifier_block fpga_region_of_nb = { 457 - .notifier_call = of_fpga_region_notify, 458 - }; 459 - 460 - static int fpga_region_probe(struct platform_device *pdev) 461 - { 462 - struct device *dev = &pdev->dev; 463 - struct device_node *np = dev->of_node; 464 - struct fpga_region *region; 465 199 int id, ret = 0; 466 200 467 - region = kzalloc(sizeof(*region), GFP_KERNEL); 468 - if (!region) 469 - return -ENOMEM; 470 - 471 201 id = ida_simple_get(&fpga_region_ida, 0, 0, GFP_KERNEL); 472 - if (id < 0) { 473 - ret = id; 474 - goto err_kfree; 475 - } 202 + if (id < 0) 203 + return id; 476 204 477 205 mutex_init(&region->mutex); 478 206 INIT_LIST_HEAD(&region->bridge_list); 479 - 480 207 
device_initialize(&region->dev); 208 + region->dev.groups = region->groups; 481 209 region->dev.class = fpga_region_class; 482 210 region->dev.parent = dev; 483 - region->dev.of_node = np; 211 + region->dev.of_node = dev->of_node; 484 212 region->dev.id = id; 485 213 dev_set_drvdata(dev, region); 486 214 ··· 188 524 if (ret) 189 525 goto err_remove; 190 526 191 - of_platform_populate(np, fpga_region_of_match, NULL, &region->dev); 192 - 193 - dev_info(dev, "FPGA Region probed\n"); 194 - 195 527 return 0; 196 528 197 529 err_remove: 198 530 ida_simple_remove(&fpga_region_ida, id); 199 - err_kfree: 200 - kfree(region); 201 - 202 531 return ret; 203 532 } 533 + EXPORT_SYMBOL_GPL(fpga_region_register); 204 534 205 - static int fpga_region_remove(struct platform_device *pdev) 535 + int fpga_region_unregister(struct fpga_region *region) 206 536 { 207 - struct fpga_region *region = platform_get_drvdata(pdev); 208 - 209 537 device_unregister(&region->dev); 210 538 211 539 return 0; 212 540 } 213 - 214 - static struct platform_driver fpga_region_driver = { 215 - .probe = fpga_region_probe, 216 - .remove = fpga_region_remove, 217 - .driver = { 218 - .name = "fpga-region", 219 - .of_match_table = of_match_ptr(fpga_region_of_match), 220 - }, 221 - }; 541 + EXPORT_SYMBOL_GPL(fpga_region_unregister); 222 542 223 543 static void fpga_region_dev_release(struct device *dev) 224 544 { 225 545 struct fpga_region *region = to_fpga_region(dev); 226 546 227 547 ida_simple_remove(&fpga_region_ida, region->dev.id); 228 - kfree(region); 229 548 } 230 549 231 550 /** ··· 217 570 */ 218 571 static int __init fpga_region_init(void) 219 572 { 220 - int ret; 221 - 222 573 fpga_region_class = class_create(THIS_MODULE, "fpga_region"); 223 574 if (IS_ERR(fpga_region_class)) 224 575 return PTR_ERR(fpga_region_class); 225 576 226 577 fpga_region_class->dev_release = fpga_region_dev_release; 227 578 228 - ret = of_overlay_notifier_register(&fpga_region_of_nb); 229 - if (ret) 230 - goto err_class; 231 
- 232 - ret = platform_driver_register(&fpga_region_driver); 233 - if (ret) 234 - goto err_plat; 235 - 236 579 return 0; 237 - 238 - err_plat: 239 - of_overlay_notifier_unregister(&fpga_region_of_nb); 240 - err_class: 241 - class_destroy(fpga_region_class); 242 - ida_destroy(&fpga_region_ida); 243 - return ret; 244 580 } 245 581 246 582 static void __exit fpga_region_exit(void) 247 583 { 248 - platform_driver_unregister(&fpga_region_driver); 249 - of_overlay_notifier_unregister(&fpga_region_of_nb); 250 584 class_destroy(fpga_region_class); 251 585 ida_destroy(&fpga_region_ida); 252 586 } ··· 236 608 module_exit(fpga_region_exit); 237 609 238 610 MODULE_DESCRIPTION("FPGA Region"); 239 - MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>"); 611 + MODULE_AUTHOR("Alan Tull <atull@kernel.org>"); 240 612 MODULE_LICENSE("GPL v2");
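The reworked `fpga_region_program_fpga()` above sequences lock-manager → get-bridges → disable-bridges → load → enable-bridges → unlock, unwinding with gotos so a failure at any step releases exactly what was acquired. A self-contained sketch of that ordering and unwind, where each `toy_*` helper just appends one letter to a trace so the paths can be checked (none of this is kernel code; the load result is injectable via a global for illustration):

```c
#include <assert.h>
#include <string.h>

/* Records the order of operations so the unwind path can be verified. */
static char toy_log[128];
static void toy_trace(const char *s) { strcat(toy_log, s); }

static int toy_load_ret; /* injected result of the load step */

static int toy_lock_mgr(void)        { toy_trace("L"); return 0; }
static void toy_unlock_mgr(void)     { toy_trace("U"); }
static int toy_get_bridges(void)     { toy_trace("g"); return 0; }
static void toy_put_bridges(void)    { toy_trace("p"); }
static int toy_disable_bridges(void) { toy_trace("d"); return 0; }
static int toy_enable_bridges(void)  { toy_trace("e"); return 0; }
static int toy_load(void)            { toy_trace("w"); return toy_load_ret; }

/* Mirrors the fpga_region_program_fpga() sequencing and its goto-based
 * unwind: any failure after the bridges are acquired puts them back and
 * unlocks the manager before returning the error. */
static int toy_program_region(void)
{
	int ret;

	ret = toy_lock_mgr();
	if (ret)
		return ret;

	ret = toy_get_bridges();
	if (ret)
		goto err_unlock;

	ret = toy_disable_bridges();
	if (ret)
		goto err_put;

	ret = toy_load();
	if (ret)
		goto err_put;

	ret = toy_enable_bridges();
	if (ret)
		goto err_put;

	toy_unlock_mgr();
	return 0;

err_put:
	toy_put_bridges();
err_unlock:
	toy_unlock_mgr();
	return ret;
}
```

On success the trace reads lock/get/disable/load/enable/unlock; on a load failure the bridges are put and the manager unlocked before the error propagates, matching the `err_put_br`/`err_unlock_mgr` labels in the diff.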
+504
drivers/fpga/of-fpga-region.c
··· 1 + /* 2 + * FPGA Region - Device Tree support for FPGA programming under Linux 3 + * 4 + * Copyright (C) 2013-2016 Altera Corporation 5 + * Copyright (C) 2017 Intel Corporation 6 + * 7 + * This program is free software; you can redistribute it and/or modify it 8 + * under the terms and conditions of the GNU General Public License, 9 + * version 2, as published by the Free Software Foundation. 10 + * 11 + * This program is distributed in the hope it will be useful, but WITHOUT 12 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 + * more details. 15 + * 16 + * You should have received a copy of the GNU General Public License along with 17 + * this program. If not, see <http://www.gnu.org/licenses/>. 18 + */ 19 + 20 + #include <linux/fpga/fpga-bridge.h> 21 + #include <linux/fpga/fpga-mgr.h> 22 + #include <linux/fpga/fpga-region.h> 23 + #include <linux/idr.h> 24 + #include <linux/kernel.h> 25 + #include <linux/list.h> 26 + #include <linux/module.h> 27 + #include <linux/of_platform.h> 28 + #include <linux/slab.h> 29 + #include <linux/spinlock.h> 30 + 31 + static const struct of_device_id fpga_region_of_match[] = { 32 + { .compatible = "fpga-region", }, 33 + {}, 34 + }; 35 + MODULE_DEVICE_TABLE(of, fpga_region_of_match); 36 + 37 + static int fpga_region_of_node_match(struct device *dev, const void *data) 38 + { 39 + return dev->of_node == data; 40 + } 41 + 42 + /** 43 + * of_fpga_region_find - find FPGA region 44 + * @np: device node of FPGA Region 45 + * 46 + * Caller will need to put_device(&region->dev) when done. 
47 + * 48 + * Returns FPGA Region struct or NULL 49 + */ 50 + static struct fpga_region *of_fpga_region_find(struct device_node *np) 51 + { 52 + return fpga_region_class_find(NULL, np, fpga_region_of_node_match); 53 + } 54 + 55 + /** 56 + * of_fpga_region_get_mgr - get reference for FPGA manager 57 + * @np: device node of FPGA region 58 + * 59 + * Get FPGA Manager from "fpga-mgr" property or from ancestor region. 60 + * 61 + * Caller should call fpga_mgr_put() when done with manager. 62 + * 63 + * Return: fpga manager struct or IS_ERR() condition containing error code. 64 + */ 65 + static struct fpga_manager *of_fpga_region_get_mgr(struct device_node *np) 66 + { 67 + struct device_node *mgr_node; 68 + struct fpga_manager *mgr; 69 + 70 + of_node_get(np); 71 + while (np) { 72 + if (of_device_is_compatible(np, "fpga-region")) { 73 + mgr_node = of_parse_phandle(np, "fpga-mgr", 0); 74 + if (mgr_node) { 75 + mgr = of_fpga_mgr_get(mgr_node); 76 + of_node_put(mgr_node); 77 + of_node_put(np); 78 + return mgr; 79 + } 80 + } 81 + np = of_get_next_parent(np); 82 + } 83 + of_node_put(np); 84 + 85 + return ERR_PTR(-EINVAL); 86 + } 87 + 88 + /** 89 + * of_fpga_region_get_bridges - create a list of bridges 90 + * @region: FPGA region 91 + * 92 + * Create a list of bridges including the parent bridge and the bridges 93 + * specified by "fpga-bridges" property. Note that the 94 + * fpga_bridges_enable/disable/put functions are all fine with an empty list 95 + * if that happens. 96 + * 97 + * Caller should call fpga_bridges_put(&region->bridge_list) when 98 + * done with the bridges. 99 + * 100 + * Return 0 for success (even if there are no bridges specified) 101 + * or -EBUSY if any of the bridges are in use. 
102 + */ 103 + static int of_fpga_region_get_bridges(struct fpga_region *region) 104 + { 105 + struct device *dev = &region->dev; 106 + struct device_node *region_np = dev->of_node; 107 + struct fpga_image_info *info = region->info; 108 + struct device_node *br, *np, *parent_br = NULL; 109 + int i, ret; 110 + 111 + /* If parent is a bridge, add to list */ 112 + ret = of_fpga_bridge_get_to_list(region_np->parent, info, 113 + &region->bridge_list); 114 + 115 + /* -EBUSY means parent is a bridge that is under use. Give up. */ 116 + if (ret == -EBUSY) 117 + return ret; 118 + 119 + /* Zero return code means parent was a bridge and was added to list. */ 120 + if (!ret) 121 + parent_br = region_np->parent; 122 + 123 + /* If overlay has a list of bridges, use it. */ 124 + br = of_parse_phandle(info->overlay, "fpga-bridges", 0); 125 + if (br) { 126 + of_node_put(br); 127 + np = info->overlay; 128 + } else { 129 + np = region_np; 130 + } 131 + 132 + for (i = 0; ; i++) { 133 + br = of_parse_phandle(np, "fpga-bridges", i); 134 + if (!br) 135 + break; 136 + 137 + /* If parent bridge is in list, skip it. */ 138 + if (br == parent_br) { 139 + of_node_put(br); 140 + continue; 141 + } 142 + 143 + /* If node is a bridge, get it and add to list */ 144 + ret = of_fpga_bridge_get_to_list(br, info, 145 + &region->bridge_list); 146 + of_node_put(br); 147 + 148 + /* If any of the bridges are in use, give up */ 149 + if (ret == -EBUSY) { 150 + fpga_bridges_put(&region->bridge_list); 151 + return -EBUSY; 152 + } 153 + } 154 + 155 + return 0; 156 + } 157 + 158 + /** 159 + * child_regions_with_firmware 160 + * @overlay: device node of the overlay 161 + * 162 + * If the overlay adds child FPGA regions, they are not allowed to have 163 + * firmware-name property. 164 + * 165 + * Return 0 for OK or -EINVAL if child FPGA region adds firmware-name. 
166 + */ 167 + static int child_regions_with_firmware(struct device_node *overlay) 168 + { 169 + struct device_node *child_region; 170 + const char *child_firmware_name; 171 + int ret = 0; 172 + 173 + of_node_get(overlay); 174 + 175 + child_region = of_find_matching_node(overlay, fpga_region_of_match); 176 + while (child_region) { 177 + if (!of_property_read_string(child_region, "firmware-name", 178 + &child_firmware_name)) { 179 + ret = -EINVAL; 180 + break; 181 + } 182 + child_region = of_find_matching_node(child_region, 183 + fpga_region_of_match); 184 + } 185 + 186 + of_node_put(child_region); 187 + 188 + if (ret) 189 + pr_err("firmware-name not allowed in child FPGA region: %pOF\n", 190 + child_region); 191 + 192 + return ret; 193 + } 194 + 195 + /** 196 + * of_fpga_region_parse_ov - parse and check overlay applied to region 197 + * 198 + * @region: FPGA region 199 + * @overlay: overlay applied to the FPGA region 200 + * 201 + * Given an overlay applied to a FPGA region, parse the FPGA image specific 202 + * info in the overlay and do some checking. 203 + * 204 + * Returns: 205 + * NULL if overlay doesn't direct us to program the FPGA. 206 + * fpga_image_info struct if there is an image to program. 207 + * error code for invalid overlay. 208 + */ 209 + static struct fpga_image_info *of_fpga_region_parse_ov( 210 + struct fpga_region *region, 211 + struct device_node *overlay) 212 + { 213 + struct device *dev = &region->dev; 214 + struct fpga_image_info *info; 215 + const char *firmware_name; 216 + int ret; 217 + 218 + if (region->info) { 219 + dev_err(dev, "Region already has overlay applied.\n"); 220 + return ERR_PTR(-EINVAL); 221 + } 222 + 223 + /* 224 + * Reject overlay if child FPGA Regions added in the overlay have 225 + * firmware-name property (would mean that an FPGA region that has 226 + * not been added to the live tree yet is doing FPGA programming.
227 + */ 228 + ret = child_regions_with_firmware(overlay); 229 + if (ret) 230 + return ERR_PTR(ret); 231 + 232 + info = fpga_image_info_alloc(dev); 233 + if (!info) 234 + return ERR_PTR(-ENOMEM); 235 + 236 + info->overlay = overlay; 237 + 238 + /* Read FPGA region properties from the overlay */ 239 + if (of_property_read_bool(overlay, "partial-fpga-config")) 240 + info->flags |= FPGA_MGR_PARTIAL_RECONFIG; 241 + 242 + if (of_property_read_bool(overlay, "external-fpga-config")) 243 + info->flags |= FPGA_MGR_EXTERNAL_CONFIG; 244 + 245 + if (of_property_read_bool(overlay, "encrypted-fpga-config")) 246 + info->flags |= FPGA_MGR_ENCRYPTED_BITSTREAM; 247 + 248 + if (!of_property_read_string(overlay, "firmware-name", 249 + &firmware_name)) { 250 + info->firmware_name = devm_kstrdup(dev, firmware_name, 251 + GFP_KERNEL); 252 + if (!info->firmware_name) 253 + return ERR_PTR(-ENOMEM); 254 + } 255 + 256 + of_property_read_u32(overlay, "region-unfreeze-timeout-us", 257 + &info->enable_timeout_us); 258 + 259 + of_property_read_u32(overlay, "region-freeze-timeout-us", 260 + &info->disable_timeout_us); 261 + 262 + of_property_read_u32(overlay, "config-complete-timeout-us", 263 + &info->config_complete_timeout_us); 264 + 265 + /* If overlay is not programming the FPGA, don't need FPGA image info */ 266 + if (!info->firmware_name) { 267 + ret = 0; 268 + goto ret_no_info; 269 + } 270 + 271 + /* 272 + * If overlay informs us FPGA was externally programmed, specifying 273 + * firmware here would be ambiguous. 
274 + */ 275 + if (info->flags & FPGA_MGR_EXTERNAL_CONFIG) { 276 + dev_err(dev, "error: specified firmware and external-fpga-config\n"); 277 + ret = -EINVAL; 278 + goto ret_no_info; 279 + } 280 + 281 + return info; 282 + ret_no_info: 283 + fpga_image_info_free(info); 284 + return ERR_PTR(ret); 285 + } 286 + 287 + /** 288 + * of_fpga_region_notify_pre_apply - pre-apply overlay notification 289 + * 290 + * @region: FPGA region that the overlay was applied to 291 + * @nd: overlay notification data 292 + * 293 + * Called when an overlay targeted at an FPGA region is about to be applied. 294 + * Parses the overlay for properties that influence how the FPGA will be 295 + * programmed and does some checking. If the checks pass, programs the FPGA. 296 + * If the checks fail, the overlay is rejected and does not get added to the 297 + * live tree. 298 + * 299 + * Returns 0 for success or negative error code for failure. 300 + */ 301 + static int of_fpga_region_notify_pre_apply(struct fpga_region *region, 302 + struct of_overlay_notify_data *nd) 303 + { 304 + struct device *dev = &region->dev; 305 + struct fpga_image_info *info; 306 + int ret; 307 + 308 + info = of_fpga_region_parse_ov(region, nd->overlay); 309 + if (IS_ERR(info)) 310 + return PTR_ERR(info); 311 + 312 + /* If overlay doesn't program the FPGA, accept it anyway.
*/ 313 + if (!info) 314 + return 0; 315 + 316 + if (region->info) { 317 + dev_err(dev, "Region already has overlay applied.\n"); 318 + return -EINVAL; 319 + } 320 + 321 + region->info = info; 322 + ret = fpga_region_program_fpga(region); 323 + if (ret) { 324 + /* error; reject overlay */ 325 + fpga_image_info_free(info); 326 + region->info = NULL; 327 + } 328 + 329 + return ret; 330 + } 331 + 332 + /** 333 + * of_fpga_region_notify_post_remove - post-remove overlay notification 334 + * 335 + * @region: FPGA region that was targeted by the overlay that was removed 336 + * @nd: overlay notification data 337 + * 338 + * Called after an overlay has been removed if the overlay's target was a 339 + * FPGA region. 340 + */ 341 + static void of_fpga_region_notify_post_remove(struct fpga_region *region, 342 + struct of_overlay_notify_data *nd) 343 + { 344 + fpga_bridges_disable(&region->bridge_list); 345 + fpga_bridges_put(&region->bridge_list); 346 + fpga_image_info_free(region->info); 347 + region->info = NULL; 348 + } 349 + 350 + /** 351 + * of_fpga_region_notify - reconfig notifier for dynamic DT changes 352 + * @nb: notifier block 353 + * @action: notifier action 354 + * @arg: reconfig data 355 + * 356 + * This notifier handles programming a FPGA when a "firmware-name" property is 357 + * added to a fpga-region. 358 + * 359 + * Returns NOTIFY_OK or error if FPGA programming fails. 
360 + */ 361 + static int of_fpga_region_notify(struct notifier_block *nb, 362 + unsigned long action, void *arg) 363 + { 364 + struct of_overlay_notify_data *nd = arg; 365 + struct fpga_region *region; 366 + int ret; 367 + 368 + switch (action) { 369 + case OF_OVERLAY_PRE_APPLY: 370 + pr_debug("%s OF_OVERLAY_PRE_APPLY\n", __func__); 371 + break; 372 + case OF_OVERLAY_POST_APPLY: 373 + pr_debug("%s OF_OVERLAY_POST_APPLY\n", __func__); 374 + return NOTIFY_OK; /* not for us */ 375 + case OF_OVERLAY_PRE_REMOVE: 376 + pr_debug("%s OF_OVERLAY_PRE_REMOVE\n", __func__); 377 + return NOTIFY_OK; /* not for us */ 378 + case OF_OVERLAY_POST_REMOVE: 379 + pr_debug("%s OF_OVERLAY_POST_REMOVE\n", __func__); 380 + break; 381 + default: /* should not happen */ 382 + return NOTIFY_OK; 383 + } 384 + 385 + region = of_fpga_region_find(nd->target); 386 + if (!region) 387 + return NOTIFY_OK; 388 + 389 + ret = 0; 390 + switch (action) { 391 + case OF_OVERLAY_PRE_APPLY: 392 + ret = of_fpga_region_notify_pre_apply(region, nd); 393 + break; 394 + 395 + case OF_OVERLAY_POST_REMOVE: 396 + of_fpga_region_notify_post_remove(region, nd); 397 + break; 398 + } 399 + 400 + put_device(&region->dev); 401 + 402 + if (ret) 403 + return notifier_from_errno(ret); 404 + 405 + return NOTIFY_OK; 406 + } 407 + 408 + static struct notifier_block fpga_region_of_nb = { 409 + .notifier_call = of_fpga_region_notify, 410 + }; 411 + 412 + static int of_fpga_region_probe(struct platform_device *pdev) 413 + { 414 + struct device *dev = &pdev->dev; 415 + struct device_node *np = dev->of_node; 416 + struct fpga_region *region; 417 + struct fpga_manager *mgr; 418 + int ret; 419 + 420 + /* Find the FPGA mgr specified by region or parent region. 
*/ 421 + mgr = of_fpga_region_get_mgr(np); 422 + if (IS_ERR(mgr)) 423 + return -EPROBE_DEFER; 424 + 425 + region = devm_kzalloc(dev, sizeof(*region), GFP_KERNEL); 426 + if (!region) { 427 + ret = -ENOMEM; 428 + goto eprobe_mgr_put; 429 + } 430 + 431 + region->mgr = mgr; 432 + 433 + /* Specify how to get bridges for this type of region. */ 434 + region->get_bridges = of_fpga_region_get_bridges; 435 + 436 + ret = fpga_region_register(dev, region); 437 + if (ret) 438 + goto eprobe_mgr_put; 439 + 440 + of_platform_populate(np, fpga_region_of_match, NULL, &region->dev); 441 + 442 + dev_info(dev, "FPGA Region probed\n"); 443 + 444 + return 0; 445 + 446 + eprobe_mgr_put: 447 + fpga_mgr_put(mgr); 448 + return ret; 449 + } 450 + 451 + static int of_fpga_region_remove(struct platform_device *pdev) 452 + { 453 + struct fpga_region *region = platform_get_drvdata(pdev); 454 + 455 + fpga_region_unregister(region); 456 + fpga_mgr_put(region->mgr); 457 + 458 + return 0; 459 + } 460 + 461 + static struct platform_driver of_fpga_region_driver = { 462 + .probe = of_fpga_region_probe, 463 + .remove = of_fpga_region_remove, 464 + .driver = { 465 + .name = "of-fpga-region", 466 + .of_match_table = of_match_ptr(fpga_region_of_match), 467 + }, 468 + }; 469 + 470 + /** 471 + * of_fpga_region_init - init function for of_fpga_region 472 + * Registers the overlay notifier and the of-fpga-region platform driver.
473 + */ 474 + static int __init of_fpga_region_init(void) 475 + { 476 + int ret; 477 + 478 + ret = of_overlay_notifier_register(&fpga_region_of_nb); 479 + if (ret) 480 + return ret; 481 + 482 + ret = platform_driver_register(&of_fpga_region_driver); 483 + if (ret) 484 + goto err_plat; 485 + 486 + return 0; 487 + 488 + err_plat: 489 + of_overlay_notifier_unregister(&fpga_region_of_nb); 490 + return ret; 491 + } 492 + 493 + static void __exit of_fpga_region_exit(void) 494 + { 495 + platform_driver_unregister(&of_fpga_region_driver); 496 + of_overlay_notifier_unregister(&fpga_region_of_nb); 497 + } 498 + 499 + subsys_initcall(of_fpga_region_init); 500 + module_exit(of_fpga_region_exit); 501 + 502 + MODULE_DESCRIPTION("FPGA Region"); 503 + MODULE_AUTHOR("Alan Tull <atull@kernel.org>"); 504 + MODULE_LICENSE("GPL v2");
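The properties that of_fpga_region_parse_ov() reads map directly onto a device tree overlay targeting an fpga-region node. A minimal sketch follows; the region label, firmware file name, bridge phandle, timeout value, and the child peripheral node are all hypothetical, shown only to illustrate which properties the notifier acts on:

```dts
/* Hypothetical overlay applied to an existing fpga-region node.
 * firmware-name triggers programming; the boolean flags set the
 * corresponding FPGA_MGR_* flags in fpga_image_info. */
&fpga_region0 {
	firmware-name = "soc_image.rbf";
	partial-fpga-config;			/* FPGA_MGR_PARTIAL_RECONFIG */
	fpga-bridges = <&fpga_bridge1>;		/* overrides region's own list */
	config-complete-timeout-us = <1000>;

	/* Child devices populated after programming succeeds;
	 * they must NOT carry firmware-name themselves
	 * (rejected by child_regions_with_firmware()). */
	gpio@10040 {
		compatible = "altr,pio-1.0";
		reg = <0x10040 0x20>;
	};
};
```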
+7 -1
drivers/fpga/socfpga-a10.c
··· 519 519 return -EBUSY; 520 520 } 521 521 522 - return fpga_mgr_register(dev, "SoCFPGA Arria10 FPGA Manager", 522 + ret = fpga_mgr_register(dev, "SoCFPGA Arria10 FPGA Manager", 523 523 &socfpga_a10_fpga_mgr_ops, priv); 524 + if (ret) { 525 + clk_disable_unprepare(priv->clk); 526 + return ret; 527 + } 528 + 529 + return 0; 524 530 } 525 531 526 532 static int socfpga_a10_fpga_remove(struct platform_device *pdev)
+1 -5
drivers/fsi/Kconfig
··· 2 2 # FSI subsystem 3 3 # 4 4 5 - menu "FSI support" 6 - 7 - config FSI 5 + menuconfig FSI 8 6 tristate "FSI support" 9 7 select CRC4 10 8 ---help--- ··· 32 34 This option enables an FSI based SCOM device driver. 33 35 34 36 endif 35 - 36 - endmenu
-3
drivers/hv/hv.c
··· 49 49 */ 50 50 int hv_init(void) 51 51 { 52 - if (!hv_is_hypercall_page_setup()) 53 - return -ENOTSUPP; 54 - 55 52 hv_context.cpu_context = alloc_percpu(struct hv_per_cpu_context); 56 53 if (!hv_context.cpu_context) 57 54 return -ENOMEM;
+28 -12
drivers/hv/vmbus_drv.c
··· 37 37 #include <linux/sched/task_stack.h> 38 38 39 39 #include <asm/hyperv.h> 40 - #include <asm/hypervisor.h> 41 40 #include <asm/mshyperv.h> 42 41 #include <linux/notifier.h> 43 42 #include <linux/ptrace.h> ··· 1052 1053 * Initialize the per-cpu interrupt state and 1053 1054 * connect to the host. 1054 1055 */ 1055 - ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/hyperv:online", 1056 + ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "hyperv/vmbus:online", 1056 1057 hv_synic_init, hv_synic_cleanup); 1057 1058 if (ret < 0) 1058 1059 goto err_alloc; ··· 1192 1193 1193 1194 return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask); 1194 1195 } 1195 - VMBUS_CHAN_ATTR_RO(out_mask); 1196 + static VMBUS_CHAN_ATTR_RO(out_mask); 1196 1197 1197 1198 static ssize_t in_mask_show(const struct vmbus_channel *channel, char *buf) 1198 1199 { ··· 1200 1201 1201 1202 return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask); 1202 1203 } 1203 - VMBUS_CHAN_ATTR_RO(in_mask); 1204 + static VMBUS_CHAN_ATTR_RO(in_mask); 1204 1205 1205 1206 static ssize_t read_avail_show(const struct vmbus_channel *channel, char *buf) 1206 1207 { ··· 1208 1209 1209 1210 return sprintf(buf, "%u\n", hv_get_bytes_to_read(rbi)); 1210 1211 } 1211 - VMBUS_CHAN_ATTR_RO(read_avail); 1212 + static VMBUS_CHAN_ATTR_RO(read_avail); 1212 1213 1213 1214 static ssize_t write_avail_show(const struct vmbus_channel *channel, char *buf) 1214 1215 { ··· 1216 1217 1217 1218 return sprintf(buf, "%u\n", hv_get_bytes_to_write(rbi)); 1218 1219 } 1219 - VMBUS_CHAN_ATTR_RO(write_avail); 1220 + static VMBUS_CHAN_ATTR_RO(write_avail); 1220 1221 1221 1222 static ssize_t show_target_cpu(const struct vmbus_channel *channel, char *buf) 1222 1223 { 1223 1224 return sprintf(buf, "%u\n", channel->target_cpu); 1224 1225 } 1225 - VMBUS_CHAN_ATTR(cpu, S_IRUGO, show_target_cpu, NULL); 1226 + static VMBUS_CHAN_ATTR(cpu, S_IRUGO, show_target_cpu, NULL); 1226 1227 1227 1228 static ssize_t channel_pending_show(const struct vmbus_channel 
*channel, 1228 1229 char *buf) ··· 1231 1232 channel_pending(channel, 1232 1233 vmbus_connection.monitor_pages[1])); 1233 1234 } 1234 - VMBUS_CHAN_ATTR(pending, S_IRUGO, channel_pending_show, NULL); 1235 + static VMBUS_CHAN_ATTR(pending, S_IRUGO, channel_pending_show, NULL); 1235 1236 1236 1237 static ssize_t channel_latency_show(const struct vmbus_channel *channel, 1237 1238 char *buf) ··· 1240 1241 channel_latency(channel, 1241 1242 vmbus_connection.monitor_pages[1])); 1242 1243 } 1243 - VMBUS_CHAN_ATTR(latency, S_IRUGO, channel_latency_show, NULL); 1244 + static VMBUS_CHAN_ATTR(latency, S_IRUGO, channel_latency_show, NULL); 1244 1245 1245 1246 static ssize_t channel_interrupts_show(const struct vmbus_channel *channel, char *buf) 1246 1247 { 1247 1248 return sprintf(buf, "%llu\n", channel->interrupts); 1248 1249 } 1249 - VMBUS_CHAN_ATTR(interrupts, S_IRUGO, channel_interrupts_show, NULL); 1250 + static VMBUS_CHAN_ATTR(interrupts, S_IRUGO, channel_interrupts_show, NULL); 1250 1251 1251 1252 static ssize_t channel_events_show(const struct vmbus_channel *channel, char *buf) 1252 1253 { 1253 1254 return sprintf(buf, "%llu\n", channel->sig_events); 1254 1255 } 1255 - VMBUS_CHAN_ATTR(events, S_IRUGO, channel_events_show, NULL); 1256 + static VMBUS_CHAN_ATTR(events, S_IRUGO, channel_events_show, NULL); 1257 + 1258 + static ssize_t subchannel_monitor_id_show(const struct vmbus_channel *channel, 1259 + char *buf) 1260 + { 1261 + return sprintf(buf, "%u\n", channel->offermsg.monitorid); 1262 + } 1263 + static VMBUS_CHAN_ATTR(monitor_id, S_IRUGO, subchannel_monitor_id_show, NULL); 1264 + 1265 + static ssize_t subchannel_id_show(const struct vmbus_channel *channel, 1266 + char *buf) 1267 + { 1268 + return sprintf(buf, "%u\n", 1269 + channel->offermsg.offer.sub_channel_index); 1270 + } 1271 + static VMBUS_CHAN_ATTR_RO(subchannel_id); 1256 1272 1257 1273 static struct attribute *vmbus_chan_attrs[] = { 1258 1274 &chan_attr_out_mask.attr, ··· 1279 1265 &chan_attr_latency.attr, 
1280 1266 &chan_attr_interrupts.attr, 1281 1267 &chan_attr_events.attr, 1268 + &chan_attr_monitor_id.attr, 1269 + &chan_attr_subchannel_id.attr, 1282 1270 NULL 1283 1271 }; 1284 1272 ··· 1733 1717 { 1734 1718 int ret, t; 1735 1719 1736 - if (x86_hyper_type != X86_HYPER_MS_HYPERV) 1720 + if (!hv_is_hyperv_initialized()) 1737 1721 return -ENODEV; 1738 1722 1739 1723 init_completion(&probe_event);
+1 -3
drivers/hwtracing/coresight/coresight-dynamic-replicator.c
··· 163 163 desc.dev = &adev->dev; 164 164 desc.groups = replicator_groups; 165 165 drvdata->csdev = coresight_register(&desc); 166 - if (IS_ERR(drvdata->csdev)) 167 - return PTR_ERR(drvdata->csdev); 168 166 169 - return 0; 167 + return PTR_ERR_OR_ZERO(drvdata->csdev); 170 168 } 171 169 172 170 #ifdef CONFIG_PM
-1
drivers/hwtracing/coresight/coresight-etb10.c
··· 33 33 #include <linux/mm.h> 34 34 #include <linux/perf_event.h> 35 35 36 - #include <asm/local.h> 37 36 38 37 #include "coresight-priv.h" 39 38
+1 -3
drivers/hwtracing/coresight/coresight-funnel.c
··· 214 214 desc.dev = dev; 215 215 desc.groups = coresight_funnel_groups; 216 216 drvdata->csdev = coresight_register(&desc); 217 - if (IS_ERR(drvdata->csdev)) 218 - return PTR_ERR(drvdata->csdev); 219 217 220 - return 0; 218 + return PTR_ERR_OR_ZERO(drvdata->csdev); 221 219 } 222 220 223 221 #ifdef CONFIG_PM
+11 -6
drivers/hwtracing/coresight/coresight-tpiu.c
··· 46 46 #define TPIU_ITATBCTR0 0xef8 47 47 48 48 /** register definition **/ 49 + /* FFSR - 0x300 */ 50 + #define FFSR_FT_STOPPED BIT(1) 49 51 /* FFCR - 0x304 */ 50 52 #define FFCR_FON_MAN BIT(6) 53 + #define FFCR_STOP_FI BIT(12) 51 54 52 55 /** 53 56 * @base: memory mapped base address for this component. ··· 88 85 { 89 86 CS_UNLOCK(drvdata->base); 90 87 91 - /* Clear formatter controle reg. */ 92 - writel_relaxed(0x0, drvdata->base + TPIU_FFCR); 88 + /* Clear formatter and stop on flush */ 89 + writel_relaxed(FFCR_STOP_FI, drvdata->base + TPIU_FFCR); 93 90 /* Generate manual flush */ 94 - writel_relaxed(FFCR_FON_MAN, drvdata->base + TPIU_FFCR); 91 + writel_relaxed(FFCR_STOP_FI | FFCR_FON_MAN, drvdata->base + TPIU_FFCR); 92 + /* Wait for flush to complete */ 93 + coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN, 0); 94 + /* Wait for formatter to stop */ 95 + coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED, 1); 95 96 96 97 CS_LOCK(drvdata->base); 97 98 } ··· 167 160 desc.pdata = pdata; 168 161 desc.dev = dev; 169 162 drvdata->csdev = coresight_register(&desc); 170 - if (IS_ERR(drvdata->csdev)) 171 - return PTR_ERR(drvdata->csdev); 172 163 173 - return 0; 164 + return PTR_ERR_OR_ZERO(drvdata->csdev); 174 165 } 175 166 176 167 #ifdef CONFIG_PM
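The tpiu_disable_hw() change above replaces fire-and-forget flush writes with bounded polling via coresight_timeout(). The Python model below is only an illustrative sketch of that poll-a-bit-until-deadline idea (the function name and parameters are made up, not kernel API):

```python
import time

def poll_bit(read_reg, bit, expected, timeout_s=0.1, interval_s=0.001):
    """Poll a register until a bit reaches the expected value or a
    deadline passes; models waiting for FFCR_FON_MAN to clear and
    FFSR_FT_STOPPED to set. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if (read_reg() >> bit) & 1 == expected:
            return True
        time.sleep(interval_s)
    return False
```

Without the bounded wait, disabling the TPIU could race ahead of an in-flight formatter flush; the timeout keeps a stuck device from hanging the caller forever.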
+5 -20
drivers/hwtracing/coresight/coresight.c
··· 843 843 } 844 844 845 845 846 - static int coresight_name_match(struct device *dev, void *data) 847 - { 848 - char *to_match; 849 - struct coresight_device *i_csdev; 850 - 851 - to_match = data; 852 - i_csdev = to_coresight_device(dev); 853 - 854 - if (to_match && !strcmp(to_match, dev_name(&i_csdev->dev))) 855 - return 1; 856 - 857 - return 0; 858 - } 859 - 860 846 static void coresight_fixup_device_conns(struct coresight_device *csdev) 861 847 { 862 848 int i; 863 - struct device *dev = NULL; 864 - struct coresight_connection *conn; 865 849 866 850 for (i = 0; i < csdev->nr_outport; i++) { 867 - conn = &csdev->conns[i]; 868 - dev = bus_find_device(&coresight_bustype, NULL, 869 - (void *)conn->child_name, 870 - coresight_name_match); 851 + struct coresight_connection *conn = &csdev->conns[i]; 852 + struct device *dev = NULL; 871 853 854 + if (conn->child_name) 855 + dev = bus_find_device_by_name(&coresight_bustype, NULL, 856 + conn->child_name); 872 857 if (dev) { 873 858 conn->child_dev = to_coresight_device(dev); 874 859 /* and put reference from 'bus_find_device()' */
+2 -2
drivers/misc/Kconfig
··· 53 53 54 54 config ATMEL_TCLIB 55 55 bool "Atmel AT32/AT91 Timer/Counter Library" 56 - depends on (AVR32 || ARCH_AT91) 56 + depends on ARCH_AT91 57 57 help 58 58 Select this if you want a library to allocate the Timer/Counter 59 59 blocks found on many Atmel processors. This facilitates using ··· 192 192 193 193 config ATMEL_SSC 194 194 tristate "Device driver for Atmel SSC peripheral" 195 - depends on HAS_IOMEM && (AVR32 || ARCH_AT91 || COMPILE_TEST) 195 + depends on HAS_IOMEM && (ARCH_AT91 || COMPILE_TEST) 196 196 ---help--- 197 197 This option enables device driver support for Atmel Synchronized 198 198 Serial Communication peripheral (SSC).
+15 -15
drivers/misc/ad525x_dpot.c
··· 3 3 * Copyright (c) 2009-2010 Analog Devices, Inc. 4 4 * Author: Michael Hennerich <hennerich@blackfin.uclinux.org> 5 5 * 6 - * DEVID #Wipers #Positions Resistor Options (kOhm) 6 + * DEVID #Wipers #Positions Resistor Options (kOhm) 7 7 * AD5258 1 64 1, 10, 50, 100 8 8 * AD5259 1 256 5, 10, 50, 100 9 9 * AD5251 2 64 1, 10, 50, 100 ··· 84 84 struct dpot_data { 85 85 struct ad_dpot_bus_data bdata; 86 86 struct mutex update_lock; 87 - unsigned rdac_mask; 88 - unsigned max_pos; 87 + unsigned int rdac_mask; 88 + unsigned int max_pos; 89 89 unsigned long devid; 90 - unsigned uid; 91 - unsigned feat; 92 - unsigned wipers; 90 + unsigned int uid; 91 + unsigned int feat; 92 + unsigned int wipers; 93 93 u16 rdac_cache[MAX_RDACS]; 94 94 DECLARE_BITMAP(otp_en_mask, MAX_RDACS); 95 95 }; ··· 126 126 127 127 static s32 dpot_read_spi(struct dpot_data *dpot, u8 reg) 128 128 { 129 - unsigned ctrl = 0; 129 + unsigned int ctrl = 0; 130 130 int value; 131 131 132 132 if (!(reg & (DPOT_ADDR_EEPROM | DPOT_ADDR_CMD))) { ··· 175 175 static s32 dpot_read_i2c(struct dpot_data *dpot, u8 reg) 176 176 { 177 177 int value; 178 - unsigned ctrl = 0; 178 + unsigned int ctrl = 0; 179 179 180 180 switch (dpot->uid) { 181 181 case DPOT_UID(AD5246_ID): ··· 238 238 239 239 static s32 dpot_write_spi(struct dpot_data *dpot, u8 reg, u16 value) 240 240 { 241 - unsigned val = 0; 241 + unsigned int val = 0; 242 242 243 243 if (!(reg & (DPOT_ADDR_EEPROM | DPOT_ADDR_CMD | DPOT_ADDR_OTP))) { 244 244 if (dpot->feat & F_RDACS_WONLY) ··· 328 328 static s32 dpot_write_i2c(struct dpot_data *dpot, u8 reg, u16 value) 329 329 { 330 330 /* Only write the instruction byte for certain commands */ 331 - unsigned tmp = 0, ctrl = 0; 331 + unsigned int tmp = 0, ctrl = 0; 332 332 333 333 switch (dpot->uid) { 334 334 case DPOT_UID(AD5246_ID): ··· 515 515 #define DPOT_DEVICE_SHOW_SET(name, reg) \ 516 516 DPOT_DEVICE_SHOW(name, reg) \ 517 517 DPOT_DEVICE_SET(name, reg) \ 518 - static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, 
show_##name, set_##name); 518 + static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, set_##name) 519 519 520 520 #define DPOT_DEVICE_SHOW_ONLY(name, reg) \ 521 521 DPOT_DEVICE_SHOW(name, reg) \ 522 - static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, NULL); 522 + static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, NULL) 523 523 524 524 DPOT_DEVICE_SHOW_SET(rdac0, DPOT_ADDR_RDAC | DPOT_RDAC0); 525 525 DPOT_DEVICE_SHOW_SET(eeprom0, DPOT_ADDR_EEPROM | DPOT_RDAC0); ··· 616 616 { \ 617 617 return sysfs_do_cmd(dev, attr, buf, count, _cmd); \ 618 618 } \ 619 - static DEVICE_ATTR(_name, S_IWUSR | S_IRUGO, NULL, set_##_name); 619 + static DEVICE_ATTR(_name, S_IWUSR | S_IRUGO, NULL, set_##_name) 620 620 621 621 DPOT_DEVICE_DO_CMD(inc_all, DPOT_INC_ALL); 622 622 DPOT_DEVICE_DO_CMD(dec_all, DPOT_DEC_ALL); ··· 636 636 }; 637 637 638 638 static int ad_dpot_add_files(struct device *dev, 639 - unsigned features, unsigned rdac) 639 + unsigned int features, unsigned int rdac) 640 640 { 641 641 int err = sysfs_create_file(&dev->kobj, 642 642 dpot_attrib_wipers[rdac]); ··· 661 661 } 662 662 663 663 static inline void ad_dpot_remove_files(struct device *dev, 664 - unsigned features, unsigned rdac) 664 + unsigned int features, unsigned int rdac) 665 665 { 666 666 sysfs_remove_file(&dev->kobj, 667 667 dpot_attrib_wipers[rdac]);
+6 -6
drivers/misc/ad525x_dpot.h
··· 195 195 struct dpot_data; 196 196 197 197 struct ad_dpot_bus_ops { 198 - int (*read_d8) (void *client); 199 - int (*read_r8d8) (void *client, u8 reg); 200 - int (*read_r8d16) (void *client, u8 reg); 201 - int (*write_d8) (void *client, u8 val); 202 - int (*write_r8d8) (void *client, u8 reg, u8 val); 203 - int (*write_r8d16) (void *client, u8 reg, u16 val); 198 + int (*read_d8)(void *client); 199 + int (*read_r8d8)(void *client, u8 reg); 200 + int (*read_r8d16)(void *client, u8 reg); 201 + int (*write_d8)(void *client, u8 val); 202 + int (*write_r8d8)(void *client, u8 reg, u8 val); 203 + int (*write_r8d16)(void *client, u8 reg, u16 val); 204 204 }; 205 205 206 206 struct ad_dpot_bus_data {
+11
drivers/misc/apds990x.c
··· 715 715 { 716 716 int i; 717 717 int pos = 0; 718 + 718 719 for (i = 0; i < ARRAY_SIZE(arates_hz); i++) 719 720 pos += sprintf(buf + pos, "%d ", arates_hz[i]); 720 721 sprintf(buf + pos - 1, "\n"); ··· 726 725 struct device_attribute *attr, char *buf) 727 726 { 728 727 struct apds990x_chip *chip = dev_get_drvdata(dev); 728 + 729 729 return sprintf(buf, "%d\n", chip->arate); 730 730 } 731 731 ··· 786 784 { 787 785 ssize_t ret; 788 786 struct apds990x_chip *chip = dev_get_drvdata(dev); 787 + 789 788 if (pm_runtime_suspended(dev) || !chip->prox_en) 790 789 return -EIO; 791 790 ··· 810 807 struct device_attribute *attr, char *buf) 811 808 { 812 809 struct apds990x_chip *chip = dev_get_drvdata(dev); 810 + 813 811 return sprintf(buf, "%d\n", chip->prox_en); 814 812 } 815 813 ··· 851 847 struct device_attribute *attr, char *buf) 852 848 { 853 849 struct apds990x_chip *chip = dev_get_drvdata(dev); 850 + 854 851 return sprintf(buf, "%s\n", 855 852 reporting_modes[!!chip->prox_continuous_mode]); 856 853 } ··· 889 884 struct device_attribute *attr, char *buf) 890 885 { 891 886 struct apds990x_chip *chip = dev_get_drvdata(dev); 887 + 892 888 return sprintf(buf, "%d\n", chip->lux_thres_hi); 893 889 } 894 890 ··· 897 891 struct device_attribute *attr, char *buf) 898 892 { 899 893 struct apds990x_chip *chip = dev_get_drvdata(dev); 894 + 900 895 return sprintf(buf, "%d\n", chip->lux_thres_lo); 901 896 } 902 897 ··· 933 926 { 934 927 struct apds990x_chip *chip = dev_get_drvdata(dev); 935 928 int ret = apds990x_set_lux_thresh(chip, &chip->lux_thres_hi, buf); 929 + 936 930 if (ret < 0) 937 931 return ret; 938 932 return len; ··· 945 937 { 946 938 struct apds990x_chip *chip = dev_get_drvdata(dev); 947 939 int ret = apds990x_set_lux_thresh(chip, &chip->lux_thres_lo, buf); 940 + 948 941 if (ret < 0) 949 942 return ret; 950 943 return len; ··· 963 954 struct device_attribute *attr, char *buf) 964 955 { 965 956 struct apds990x_chip *chip = dev_get_drvdata(dev); 957 + 966 958 return 
sprintf(buf, "%d\n", chip->prox_thres); 967 959 } 968 960 ··· 1036 1026 struct device_attribute *attr, char *buf) 1037 1027 { 1038 1028 struct apds990x_chip *chip = dev_get_drvdata(dev); 1029 + 1039 1030 return sprintf(buf, "%s %d\n", chip->chipname, chip->revision); 1040 1031 } 1041 1032
+26 -9
drivers/misc/ds1682.c
··· 59 59 { 60 60 struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr); 61 61 struct i2c_client *client = to_i2c_client(dev); 62 - __le32 val = 0; 62 + unsigned long long val, check; 63 + __le32 val_le = 0; 63 64 int rc; 64 65 65 66 dev_dbg(dev, "ds1682_show() called on %s\n", attr->attr.name); 66 67 67 68 /* Read the register */ 68 69 rc = i2c_smbus_read_i2c_block_data(client, sattr->index, sattr->nr, 69 - (u8 *) & val); 70 + (u8 *)&val_le); 70 71 if (rc < 0) 71 72 return -EIO; 72 73 73 - /* Special case: the 32 bit regs are time values with 1/4s 74 - * resolution, scale them up to milliseconds */ 75 - if (sattr->nr == 4) 76 - return sprintf(buf, "%llu\n", 77 - ((unsigned long long)le32_to_cpu(val)) * 250); 74 + val = le32_to_cpu(val_le); 78 75 79 - /* Format the output string and return # of bytes */ 80 - return sprintf(buf, "%li\n", (long)le32_to_cpu(val)); 76 + if (sattr->index == DS1682_REG_ELAPSED) { 77 + int retries = 5; 78 + 79 + /* Detect and retry when a tick occurs mid-read */ 80 + do { 81 + rc = i2c_smbus_read_i2c_block_data(client, sattr->index, 82 + sattr->nr, 83 + (u8 *)&val_le); 84 + if (rc < 0 || retries <= 0) 85 + return -EIO; 86 + 87 + check = val; 88 + val = le32_to_cpu(val_le); 89 + retries--; 90 + } while (val != check && val != (check + 1)); 91 + } 92 + 93 + /* Format the output string and return # of bytes 94 + * Special case: the 32 bit regs are time values with 1/4s 95 + * resolution, scale them up to milliseconds 96 + */ 97 + return sprintf(buf, "%llu\n", (sattr->nr == 4) ? (val * 250) : val); 81 98 } 82 99 83 100 static ssize_t ds1682_store(struct device *dev, struct device_attribute *attr,
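The retry loop added to ds1682_show() guards against the elapsed-time counter ticking between the individual byte transfers of a multi-byte I2C read. The double-read-until-stable technique can be modeled as a short sketch (function names here are illustrative, not driver API):

```python
def read_stable(read_counter, max_retries=5):
    """Read a multi-byte counter that may tick mid-read.

    Mirrors the ds1682_show() loop: re-read until two consecutive
    values are identical or differ by exactly one tick, otherwise
    give up (the driver returns -EIO)."""
    val = read_counter()
    for _ in range(max_retries):
        check = val
        val = read_counter()
        if val == check or val == check + 1:
            return val
    raise IOError("counter never stabilized")
```

A torn read (e.g. low bytes wrapping while high bytes are fetched) produces a wildly different value on the next read, so the `val == check or val == check + 1` test rejects it and retries.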
+3
drivers/misc/eeprom/at25.c
··· 276 276 return -ENODEV; 277 277 } 278 278 switch (val) { 279 + case 9: 280 + chip->flags |= EE_INSTR_BIT3_IS_ADDR; 281 + /* fall through */ 279 282 case 8: 280 283 chip->flags |= EE_ADDR1; 281 284 break;
+3 -9
drivers/misc/enclosure.c
··· 468 468 .dev_groups = enclosure_class_groups, 469 469 }; 470 470 471 - static const char *const enclosure_status [] = { 471 + static const char *const enclosure_status[] = { 472 472 [ENCLOSURE_STATUS_UNSUPPORTED] = "unsupported", 473 473 [ENCLOSURE_STATUS_OK] = "OK", 474 474 [ENCLOSURE_STATUS_CRITICAL] = "critical", ··· 480 480 [ENCLOSURE_STATUS_MAX] = NULL, 481 481 }; 482 482 483 - static const char *const enclosure_type [] = { 483 + static const char *const enclosure_type[] = { 484 484 [ENCLOSURE_COMPONENT_DEVICE] = "device", 485 485 [ENCLOSURE_COMPONENT_ARRAY_DEVICE] = "array device", 486 486 }; ··· 680 680 681 681 static int __init enclosure_init(void) 682 682 { 683 - int err; 684 - 685 - err = class_register(&enclosure_class); 686 - if (err) 687 - return err; 688 - 689 - return 0; 683 + return class_register(&enclosure_class); 690 684 } 691 685 692 686 static void __exit enclosure_exit(void)
+1
drivers/misc/fsa9480.c
··· 465 465 static int fsa9480_remove(struct i2c_client *client) 466 466 { 467 467 struct fsa9480_usbsw *usbsw = i2c_get_clientdata(client); 468 + 468 469 if (client->irq) 469 470 free_irq(client->irq, usbsw); 470 471
+8 -8
drivers/misc/genwqe/card_base.c
··· 153 153 cd->card_state = GENWQE_CARD_UNUSED; 154 154 spin_lock_init(&cd->print_lock); 155 155 156 - cd->ddcb_software_timeout = genwqe_ddcb_software_timeout; 157 - cd->kill_timeout = genwqe_kill_timeout; 156 + cd->ddcb_software_timeout = GENWQE_DDCB_SOFTWARE_TIMEOUT; 157 + cd->kill_timeout = GENWQE_KILL_TIMEOUT; 158 158 159 159 for (j = 0; j < GENWQE_MAX_VFS; j++) 160 - cd->vf_jobtimeout_msec[j] = genwqe_vf_jobtimeout_msec; 160 + cd->vf_jobtimeout_msec[j] = GENWQE_VF_JOBTIMEOUT_MSEC; 161 161 162 162 genwqe_devices[i] = cd; 163 163 return cd; ··· 324 324 u32 T = genwqe_T_psec(cd); 325 325 u64 x; 326 326 327 - if (genwqe_pf_jobtimeout_msec == 0) 327 + if (GENWQE_PF_JOBTIMEOUT_MSEC == 0) 328 328 return false; 329 329 330 330 /* PF: large value needed, flash update 2sec per block */ 331 - x = ilog2(genwqe_pf_jobtimeout_msec * 331 + x = ilog2(GENWQE_PF_JOBTIMEOUT_MSEC * 332 332 16000000000uL/(T * 15)) - 10; 333 333 334 334 genwqe_write_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT, ··· 904 904 * b) a critical GFIR occured 905 905 * 906 906 * Informational GFIRs are checked and potentially printed in 907 - * health_check_interval seconds. 907 + * GENWQE_HEALTH_CHECK_INTERVAL seconds. 908 908 */ 909 909 static int genwqe_health_thread(void *data) 910 910 { ··· 918 918 rc = wait_event_interruptible_timeout(cd->health_waitq, 919 919 (genwqe_health_check_cond(cd, &gfir) || 920 920 (should_stop = kthread_should_stop())), 921 - genwqe_health_check_interval * HZ); 921 + GENWQE_HEALTH_CHECK_INTERVAL * HZ); 922 922 923 923 if (should_stop) 924 924 break; ··· 1028 1028 { 1029 1029 int rc; 1030 1030 1031 - if (genwqe_health_check_interval <= 0) 1031 + if (GENWQE_HEALTH_CHECK_INTERVAL <= 0) 1032 1032 return 0; /* valid for disabling the service */ 1033 1033 1034 1034 /* moved before request_irq() */
+9 -11
drivers/misc/genwqe/card_base.h
··· 47 47 #define GENWQE_CARD_NO_MAX (16 * GENWQE_MAX_FUNCS) 48 48 49 49 /* Compile parameters, some of them appear in debugfs for later adjustment */ 50 - #define genwqe_ddcb_max 32 /* DDCBs on the work-queue */ 51 - #define genwqe_polling_enabled 0 /* in case of irqs not working */ 52 - #define genwqe_ddcb_software_timeout 10 /* timeout per DDCB in seconds */ 53 - #define genwqe_kill_timeout 8 /* time until process gets killed */ 54 - #define genwqe_vf_jobtimeout_msec 250 /* 250 msec */ 55 - #define genwqe_pf_jobtimeout_msec 8000 /* 8 sec should be ok */ 56 - #define genwqe_health_check_interval 4 /* <= 0: disabled */ 50 + #define GENWQE_DDCB_MAX 32 /* DDCBs on the work-queue */ 51 + #define GENWQE_POLLING_ENABLED 0 /* in case of irqs not working */ 52 + #define GENWQE_DDCB_SOFTWARE_TIMEOUT 10 /* timeout per DDCB in seconds */ 53 + #define GENWQE_KILL_TIMEOUT 8 /* time until process gets killed */ 54 + #define GENWQE_VF_JOBTIMEOUT_MSEC 250 /* 250 msec */ 55 + #define GENWQE_PF_JOBTIMEOUT_MSEC 8000 /* 8 sec should be ok */ 56 + #define GENWQE_HEALTH_CHECK_INTERVAL 4 /* <= 0: disabled */ 57 57 58 58 /* Sysfs attribute groups used when we create the genwqe device */ 59 59 extern const struct attribute_group *genwqe_attribute_groups[]; ··· 490 490 491 491 /* Memory allocation/deallocation; dma address handling */ 492 492 int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, 493 - void *uaddr, unsigned long size, 494 - struct ddcb_requ *req); 493 + void *uaddr, unsigned long size); 495 494 496 - int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m, 497 - struct ddcb_requ *req); 495 + int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m); 498 496 499 497 static inline bool dma_mapping_used(struct dma_mapping *m) 500 498 {
+11 -11
drivers/misc/genwqe/card_ddcb.c
··· 500 500 501 501 rc = wait_event_interruptible_timeout(queue->ddcb_waitqs[ddcb_no], 502 502 ddcb_requ_finished(cd, req), 503 - genwqe_ddcb_software_timeout * HZ); 503 + GENWQE_DDCB_SOFTWARE_TIMEOUT * HZ); 504 504 505 505 /* 506 506 * We need to distinguish 3 cases here: ··· 633 633 __be32 old, new; 634 634 635 635 /* unsigned long flags; */ 636 - if (genwqe_ddcb_software_timeout <= 0) { 636 + if (GENWQE_DDCB_SOFTWARE_TIMEOUT <= 0) { 637 637 dev_err(&pci_dev->dev, 638 638 "[%s] err: software timeout is not set!\n", __func__); 639 639 return -EFAULT; ··· 641 641 642 642 pddcb = &queue->ddcb_vaddr[req->num]; 643 643 644 - for (t = 0; t < genwqe_ddcb_software_timeout * 10; t++) { 644 + for (t = 0; t < GENWQE_DDCB_SOFTWARE_TIMEOUT * 10; t++) { 645 645 646 646 spin_lock_irqsave(&queue->ddcb_lock, flags); 647 647 ··· 718 718 719 719 dev_err(&pci_dev->dev, 720 720 "[%s] err: DDCB#%d not purged and not completed after %d seconds QSTAT=%016llx!!\n", 721 - __func__, req->num, genwqe_ddcb_software_timeout, 721 + __func__, req->num, GENWQE_DDCB_SOFTWARE_TIMEOUT, 722 722 queue_status); 723 723 724 724 print_ddcb_info(cd, req->queue); ··· 778 778 /* FIXME circumvention to improve performance when no irq is 779 779 * there. 
780 780 */ 781 - if (genwqe_polling_enabled) 781 + if (GENWQE_POLLING_ENABLED) 782 782 genwqe_check_ddcb_queue(cd, queue); 783 783 784 784 /* ··· 878 878 pddcb->icrc_hsi_shi_32 = cpu_to_be32((u32)icrc << 16); 879 879 880 880 /* enable DDCB completion irq */ 881 - if (!genwqe_polling_enabled) 881 + if (!GENWQE_POLLING_ENABLED) 882 882 pddcb->icrc_hsi_shi_32 |= DDCB_INTR_BE32; 883 883 884 884 dev_dbg(&pci_dev->dev, "INPUT DDCB#%d\n", req->num); ··· 1028 1028 unsigned int queue_size; 1029 1029 struct pci_dev *pci_dev = cd->pci_dev; 1030 1030 1031 - if (genwqe_ddcb_max < 2) 1031 + if (GENWQE_DDCB_MAX < 2) 1032 1032 return -EINVAL; 1033 1033 1034 - queue_size = roundup(genwqe_ddcb_max * sizeof(struct ddcb), PAGE_SIZE); 1034 + queue_size = roundup(GENWQE_DDCB_MAX * sizeof(struct ddcb), PAGE_SIZE); 1035 1035 1036 1036 queue->ddcbs_in_flight = 0; /* statistics */ 1037 1037 queue->ddcbs_max_in_flight = 0; ··· 1040 1040 queue->wait_on_busy = 0; 1041 1041 1042 1042 queue->ddcb_seq = 0x100; /* start sequence number */ 1043 - queue->ddcb_max = genwqe_ddcb_max; /* module parameter */ 1043 + queue->ddcb_max = GENWQE_DDCB_MAX; 1044 1044 queue->ddcb_vaddr = __genwqe_alloc_consistent(cd, queue_size, 1045 1045 &queue->ddcb_daddr); 1046 1046 if (queue->ddcb_vaddr == NULL) { ··· 1194 1194 1195 1195 genwqe_check_ddcb_queue(cd, &cd->queue); 1196 1196 1197 - if (genwqe_polling_enabled) { 1197 + if (GENWQE_POLLING_ENABLED) { 1198 1198 rc = wait_event_interruptible_timeout( 1199 1199 cd->queue_waitq, 1200 1200 genwqe_ddcbs_in_flight(cd) || ··· 1340 1340 int genwqe_finish_queue(struct genwqe_dev *cd) 1341 1341 { 1342 1342 int i, rc = 0, in_flight; 1343 - int waitmax = genwqe_ddcb_software_timeout; 1343 + int waitmax = GENWQE_DDCB_SOFTWARE_TIMEOUT; 1344 1344 struct pci_dev *pci_dev = cd->pci_dev; 1345 1345 struct ddcb_queue *queue = &cd->queue; 1346 1346
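The queue setup in the hunk above sizes the DMA area as `roundup(GENWQE_DDCB_MAX * sizeof(struct ddcb), PAGE_SIZE)`. A minimal userspace sketch of that sizing (the DDCB size is an assumed parameter here, since `struct ddcb` is not shown):

```c
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Userspace sketch of the kernel's roundup(): round x up to the next
 * multiple of y (y must be non-zero). */
#define roundup(x, y) ((((x) + (y) - 1) / (y)) * (y))

/* Mirror of the queue sizing above: ddcb_max descriptors of ddcb_size
 * bytes each, rounded up to whole pages for the consistent allocation. */
static size_t ddcb_queue_size(size_t ddcb_max, size_t ddcb_size)
{
	return roundup(ddcb_max * ddcb_size, PAGE_SIZE);
}
```

Rounding to page granularity matters because `__genwqe_alloc_consistent()` hands the region to the device as whole pages.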
+1 -1
drivers/misc/genwqe/card_debugfs.c
··· 198 198 199 199 jtimer = genwqe_read_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT, 0); 200 200 seq_printf(s, " PF 0x%016llx %d msec\n", jtimer, 201 - genwqe_pf_jobtimeout_msec); 201 + GENWQE_PF_JOBTIMEOUT_MSEC); 202 202 203 203 for (vf_num = 0; vf_num < cd->num_vfs; vf_num++) { 204 204 jtimer = genwqe_read_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT,
+8 -11
drivers/misc/genwqe/card_dev.c
··· 226 226 kfree(dma_map); 227 227 } else if (dma_map->type == GENWQE_MAPPING_SGL_TEMP) { 228 228 /* we use dma_map statically from the request */ 229 - genwqe_user_vunmap(cd, dma_map, NULL); 229 + genwqe_user_vunmap(cd, dma_map); 230 230 } 231 231 } 232 232 } ··· 249 249 * deleted. 250 250 */ 251 251 list_del_init(&dma_map->pin_list); 252 - genwqe_user_vunmap(cd, dma_map, NULL); 252 + genwqe_user_vunmap(cd, dma_map); 253 253 kfree(dma_map); 254 254 } 255 255 } ··· 790 790 return -ENOMEM; 791 791 792 792 genwqe_mapping_init(dma_map, GENWQE_MAPPING_SGL_PINNED); 793 - rc = genwqe_user_vmap(cd, dma_map, (void *)map_addr, map_size, NULL); 793 + rc = genwqe_user_vmap(cd, dma_map, (void *)map_addr, map_size); 794 794 if (rc != 0) { 795 795 dev_err(&pci_dev->dev, 796 796 "[%s] genwqe_user_vmap rc=%d\n", __func__, rc); ··· 820 820 return -ENOENT; 821 821 822 822 genwqe_del_pin(cfile, dma_map); 823 - genwqe_user_vunmap(cd, dma_map, NULL); 823 + genwqe_user_vunmap(cd, dma_map); 824 824 kfree(dma_map); 825 825 return 0; 826 826 } ··· 841 841 842 842 if (dma_mapping_used(dma_map)) { 843 843 __genwqe_del_mapping(cfile, dma_map); 844 - genwqe_user_vunmap(cd, dma_map, req); 844 + genwqe_user_vunmap(cd, dma_map); 845 845 } 846 846 if (req->sgls[i].sgl != NULL) 847 847 genwqe_free_sync_sgl(cd, &req->sgls[i]); ··· 947 947 m->write = 0; 948 948 949 949 rc = genwqe_user_vmap(cd, m, (void *)u_addr, 950 - u_size, req); 950 + u_size); 951 951 if (rc != 0) 952 952 goto err_out; 953 953 ··· 1011 1011 { 1012 1012 int rc; 1013 1013 struct genwqe_ddcb_cmd *cmd; 1014 - struct ddcb_requ *req; 1015 1014 struct genwqe_dev *cd = cfile->cd; 1016 1015 struct file *filp = cfile->filp; 1017 1016 1018 1017 cmd = ddcb_requ_alloc(); 1019 1018 if (cmd == NULL) 1020 1019 return -ENOMEM; 1021 - 1022 - req = container_of(cmd, struct ddcb_requ, cmd); 1023 1020 1024 1021 if (copy_from_user(cmd, (void __user *)arg, sizeof(*cmd))) { 1025 1022 ddcb_requ_free(cmd); ··· 1342 1345 rc = genwqe_kill_fasync(cd, 
SIGIO); 1343 1346 if (rc > 0) { 1344 1347 /* give kill_timeout seconds to close file descriptors ... */ 1345 - for (i = 0; (i < genwqe_kill_timeout) && 1348 + for (i = 0; (i < GENWQE_KILL_TIMEOUT) && 1346 1349 genwqe_open_files(cd); i++) { 1347 1350 dev_info(&pci_dev->dev, " %d sec ...", i); 1348 1351 ··· 1360 1363 rc = genwqe_force_sig(cd, SIGKILL); /* force terminate */ 1361 1364 if (rc) { 1362 1365 /* Give kill_timout more seconds to end processes */ 1363 - for (i = 0; (i < genwqe_kill_timeout) && 1366 + for (i = 0; (i < GENWQE_KILL_TIMEOUT) && 1364 1367 genwqe_open_files(cd); i++) { 1365 1368 dev_warn(&pci_dev->dev, " %d sec ...", i); 1366 1369
+9 -16
drivers/misc/genwqe/card_utils.c
··· 524 524 } 525 525 526 526 /** 527 - * free_user_pages() - Give pinned pages back 527 + * genwqe_free_user_pages() - Give pinned pages back 528 528 * 529 - * Documentation of get_user_pages is in mm/memory.c: 529 + * Documentation of get_user_pages is in mm/gup.c: 530 530 * 531 531 * If the page is written to, set_page_dirty (or set_page_dirty_lock, 532 532 * as appropriate) must be called after the page is finished with, and 533 533 * before put_page is called. 534 - * 535 - * FIXME Could be of use to others and might belong in the generic 536 - * code, if others agree. E.g. 537 - * ll_free_user_pages in drivers/staging/lustre/lustre/llite/rw26.c 538 - * ceph_put_page_vector in net/ceph/pagevec.c 539 - * maybe more? 540 534 */ 541 - static int free_user_pages(struct page **page_list, unsigned int nr_pages, 542 - int dirty) 535 + static int genwqe_free_user_pages(struct page **page_list, 536 + unsigned int nr_pages, int dirty) 543 537 { 544 538 unsigned int i; 545 539 ··· 571 577 * Return: 0 if success 572 578 */ 573 579 int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr, 574 - unsigned long size, struct ddcb_requ *req) 580 + unsigned long size) 575 581 { 576 582 int rc = -EINVAL; 577 583 unsigned long data, offs; ··· 611 617 612 618 /* assumption: get_user_pages can be killed by signals. 
*/ 613 619 if (rc < m->nr_pages) { 614 - free_user_pages(m->page_list, rc, m->write); 620 + genwqe_free_user_pages(m->page_list, rc, m->write); 615 621 rc = -EFAULT; 616 622 goto fail_get_user_pages; 617 623 } ··· 623 629 return 0; 624 630 625 631 fail_free_user_pages: 626 - free_user_pages(m->page_list, m->nr_pages, m->write); 632 + genwqe_free_user_pages(m->page_list, m->nr_pages, m->write); 627 633 628 634 fail_get_user_pages: 629 635 kfree(m->page_list); ··· 641 647 * @cd: pointer to genwqe device 642 648 * @m: mapping params 643 649 */ 644 - int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m, 645 - struct ddcb_requ *req) 650 + int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m) 646 651 { 647 652 struct pci_dev *pci_dev = cd->pci_dev; 648 653 ··· 655 662 genwqe_unmap_pages(cd, m->dma_list, m->nr_pages); 656 663 657 664 if (m->page_list) { 658 - free_user_pages(m->page_list, m->nr_pages, m->write); 665 + genwqe_free_user_pages(m->page_list, m->nr_pages, m->write); 659 666 660 667 kfree(m->page_list); 661 668 m->page_list = NULL;
+1 -4
drivers/misc/hpilo.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Driver for the HP iLO management processor. 3 4 * 4 5 * Copyright (C) 2008 Hewlett-Packard Development Company, L.P. 5 6 * David Altobelli <david.altobelli@hpe.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 7 */ 11 8 #include <linux/kernel.h> 12 9 #include <linux/types.h>
+1 -4
drivers/misc/hpilo.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/drivers/char/hpilo.h 3 4 * 4 5 * Copyright (C) 2008 Hewlett-Packard Development Company, L.P. 5 6 * David Altobelli <david.altobelli@hp.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 7 */ 11 8 #ifndef __HPILO_H 12 9 #define __HPILO_H
+4 -4
drivers/misc/ics932s401.c
··· 33 33 34 34 /* ICS932S401 registers */ 35 35 #define ICS932S401_REG_CFG2 0x01 36 - #define ICS932S401_CFG1_SPREAD 0x01 36 + #define ICS932S401_CFG1_SPREAD 0x01 37 37 #define ICS932S401_REG_CFG7 0x06 38 38 #define ICS932S401_FS_MASK 0x07 39 39 #define ICS932S401_REG_VENDOR_REV 0x07 ··· 58 58 #define ICS932S401_REG_SRC_SPREAD1 0x11 59 59 #define ICS932S401_REG_SRC_SPREAD2 0x12 60 60 #define ICS932S401_REG_CPU_DIVISOR 0x13 61 - #define ICS932S401_CPU_DIVISOR_SHIFT 4 61 + #define ICS932S401_CPU_DIVISOR_SHIFT 4 62 62 #define ICS932S401_REG_PCISRC_DIVISOR 0x14 63 63 #define ICS932S401_SRC_DIVISOR_MASK 0x0F 64 64 #define ICS932S401_PCI_DIVISOR_SHIFT 4 ··· 225 225 else { 226 226 /* Freq is neatly wrapped up for us */ 227 227 int fid = data->regs[ICS932S401_REG_CFG7] & ICS932S401_FS_MASK; 228 + 228 229 freq = fs_speeds[fid]; 229 230 if (data->regs[ICS932S401_REG_CTRL] & ICS932S401_CPU_ALT) { 230 231 switch (freq) { ··· 353 352 static DEVICE_ATTR(cpu_spread, S_IRUGO, show_spread, NULL); 354 353 static DEVICE_ATTR(src_spread, S_IRUGO, show_spread, NULL); 355 354 356 - static struct attribute *ics932s401_attr[] = 357 - { 355 + static struct attribute *ics932s401_attr[] = { 358 356 &dev_attr_spread_enabled.attr, 359 357 &dev_attr_cpu_clock_selection.attr, 360 358 &dev_attr_cpu_clock.attr,
+7
drivers/misc/isl29003.c
··· 78 78 u32 reg, u8 mask, u8 shift) 79 79 { 80 80 struct isl29003_data *data = i2c_get_clientdata(client); 81 + 81 82 return (data->reg_cache[reg] & mask) >> shift; 82 83 } 83 84 ··· 161 160 { 162 161 struct isl29003_data *data = i2c_get_clientdata(client); 163 162 u8 cmdreg = data->reg_cache[ISL29003_REG_COMMAND]; 163 + 164 164 return ~cmdreg & ISL29003_ADC_PD; 165 165 } 166 166 ··· 198 196 struct device_attribute *attr, char *buf) 199 197 { 200 198 struct i2c_client *client = to_i2c_client(dev); 199 + 201 200 return sprintf(buf, "%i\n", isl29003_get_range(client)); 202 201 } 203 202 ··· 234 231 char *buf) 235 232 { 236 233 struct i2c_client *client = to_i2c_client(dev); 234 + 237 235 return sprintf(buf, "%d\n", isl29003_get_resolution(client)); 238 236 } 239 237 ··· 268 264 struct device_attribute *attr, char *buf) 269 265 { 270 266 struct i2c_client *client = to_i2c_client(dev); 267 + 271 268 return sprintf(buf, "%d\n", isl29003_get_mode(client)); 272 269 } 273 270 ··· 303 298 char *buf) 304 299 { 305 300 struct i2c_client *client = to_i2c_client(dev); 301 + 306 302 return sprintf(buf, "%d\n", isl29003_get_power_state(client)); 307 303 } 308 304 ··· 367 361 * if one of the reads fails, we consider the init failed */ 368 362 for (i = 0; i < ARRAY_SIZE(data->reg_cache); i++) { 369 363 int v = i2c_smbus_read_byte_data(client, i); 364 + 370 365 if (v < 0) 371 366 return -ENODEV; 372 367
+1 -1
drivers/misc/lkdtm_core.c
··· 96 96 CRASHPOINT("DIRECT", NULL), 97 97 #ifdef CONFIG_KPROBES 98 98 CRASHPOINT("INT_HARDWARE_ENTRY", "do_IRQ"), 99 - CRASHPOINT("INT_HW_IRQ_EN", "handle_IRQ_event"), 99 + CRASHPOINT("INT_HW_IRQ_EN", "handle_irq_event"), 100 100 CRASHPOINT("INT_TASKLET_ENTRY", "tasklet_action"), 101 101 CRASHPOINT("FS_DEVRW", "ll_rw_block"), 102 102 CRASHPOINT("MEM_SWAPOUT", "shrink_inactive_list"),
+4
drivers/misc/lkdtm_heap.c
··· 16 16 { 17 17 size_t len = 1020; 18 18 u32 *data = kmalloc(len, GFP_KERNEL); 19 + if (!data) 20 + return; 19 21 20 22 data[1024 / sizeof(u32)] = 0x12345678; 21 23 kfree(data); ··· 35 33 size_t offset = (len / sizeof(*base)) / 2; 36 34 37 35 base = kmalloc(len, GFP_KERNEL); 36 + if (!base) 37 + return; 38 38 pr_info("Allocated memory %p-%p\n", base, &base[offset * 2]); 39 39 pr_info("Attempting bad write to freed memory at %p\n", 40 40 &base[offset]);
+8 -2
drivers/misc/mei/bus.c
··· 543 543 mutex_lock(&bus->device_lock); 544 544 545 545 if (!mei_cl_is_connected(cl)) { 546 - dev_dbg(bus->dev, "Already disconnected"); 546 + dev_dbg(bus->dev, "Already disconnected\n"); 547 + err = 0; 548 + goto out; 549 + } 550 + 551 + if (bus->dev_state == MEI_DEV_POWER_DOWN) { 552 + dev_dbg(bus->dev, "Device is powering down, don't bother with disconnection\n"); 547 553 err = 0; 548 554 goto out; 549 555 } 550 556 551 557 err = mei_cl_disconnect(cl); 552 558 if (err < 0) 553 - dev_err(bus->dev, "Could not disconnect from the ME client"); 559 + dev_err(bus->dev, "Could not disconnect from the ME client\n"); 554 560 555 561 out: 556 562 /* Flush queues and remove any pending read */
+3 -1
drivers/misc/mei/hw-me.c
··· 1260 1260 if (rets == -ENODATA) 1261 1261 break; 1262 1262 1263 - if (rets && dev->dev_state != MEI_DEV_RESETTING) { 1263 + if (rets && 1264 + (dev->dev_state != MEI_DEV_RESETTING && 1265 + dev->dev_state != MEI_DEV_POWER_DOWN)) { 1264 1266 dev_err(dev->dev, "mei_irq_read_handler ret = %d.\n", 1265 1267 rets); 1266 1268 schedule_work(&dev->reset_work);
+3 -1
drivers/misc/mei/hw-txe.c
··· 1127 1127 if (test_and_clear_bit(TXE_INTR_OUT_DB_BIT, &hw->intr_cause)) { 1128 1128 /* Read from TXE */ 1129 1129 rets = mei_irq_read_handler(dev, &cmpl_list, &slots); 1130 - if (rets && dev->dev_state != MEI_DEV_RESETTING) { 1130 + if (rets && 1131 + (dev->dev_state != MEI_DEV_RESETTING && 1132 + dev->dev_state != MEI_DEV_POWER_DOWN)) { 1131 1133 dev_err(dev->dev, 1132 1134 "mei_irq_read_handler ret = %d.\n", rets); 1133 1135
+3 -1
drivers/misc/mei/init.c
··· 310 310 { 311 311 dev_dbg(dev->dev, "stopping the device.\n"); 312 312 313 + mutex_lock(&dev->device_lock); 314 + dev->dev_state = MEI_DEV_POWER_DOWN; 315 + mutex_unlock(&dev->device_lock); 313 316 mei_cl_bus_remove_devices(dev); 314 317 315 318 mei_cancel_work(dev); ··· 322 319 323 320 mutex_lock(&dev->device_lock); 324 321 325 - dev->dev_state = MEI_DEV_POWER_DOWN; 326 322 mei_reset(dev); 327 323 /* move device to disabled state unconditionally */ 328 324 dev->dev_state = MEI_DEV_DISABLED;
+4 -1
drivers/misc/mei/pci-me.c
··· 238 238 */ 239 239 mei_me_set_pm_domain(dev); 240 240 241 - if (mei_pg_is_enabled(dev)) 241 + if (mei_pg_is_enabled(dev)) { 242 242 pm_runtime_put_noidle(&pdev->dev); 243 + if (hw->d0i3_supported) 244 + pm_runtime_allow(&pdev->dev); 245 + } 243 246 244 247 dev_dbg(&pdev->dev, "initialization successful.\n"); 245 248
+7 -15
drivers/misc/mic/vop/vop_vringh.c
··· 937 937 dd.num_vq > MIC_MAX_VRINGS) 938 938 return -EINVAL; 939 939 940 - dd_config = kzalloc(mic_desc_size(&dd), GFP_KERNEL); 941 - if (!dd_config) 942 - return -ENOMEM; 943 - if (copy_from_user(dd_config, argp, mic_desc_size(&dd))) { 944 - ret = -EFAULT; 945 - goto free_ret; 946 - } 940 + dd_config = memdup_user(argp, mic_desc_size(&dd)); 941 + if (IS_ERR(dd_config)) 942 + return PTR_ERR(dd_config); 943 + 947 944 /* Ensure desc has not changed between the two reads */ 948 945 if (memcmp(&dd, dd_config, sizeof(dd))) { 949 946 ret = -EINVAL; ··· 992 995 ret = vop_vdev_inited(vdev); 993 996 if (ret) 994 997 goto __unlock_ret; 995 - buf = kzalloc(vdev->dd->config_len, GFP_KERNEL); 996 - if (!buf) { 997 - ret = -ENOMEM; 998 + buf = memdup_user(argp, vdev->dd->config_len); 999 + if (IS_ERR(buf)) { 1000 + ret = PTR_ERR(buf); 998 1001 goto __unlock_ret; 999 1002 } 1000 - if (copy_from_user(buf, argp, vdev->dd->config_len)) { 1001 - ret = -EFAULT; 1002 - goto done; 1003 - } 1004 1003 ret = vop_virtio_config_change(vdev, buf); 1005 - done: 1006 1004 kfree(buf); 1007 1005 __unlock_ret: 1008 1006 mutex_unlock(&vdev->vdev_mutex);
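The hunk above replaces the open-coded `kzalloc()` + `copy_from_user()` pair with `memdup_user()`, which returns an error pointer on failure. A userspace sketch of the error-pointer convention and the allocate-and-copy pattern (a plain `memcpy` stands in for `copy_from_user`):

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

#define MAX_ERRNO 4095

/* Userspace sketches of the kernel's error-pointer helpers: a negative
 * errno is encoded in the topmost addresses of the pointer space. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Sketch of the memdup_user() shape the hunk switches to: allocate and
 * copy in one call, returning ERR_PTR(-ENOMEM) instead of NULL. */
static void *memdup_sketch(const void *src, size_t len)
{
	void *p = malloc(len);

	if (!p)
		return ERR_PTR(-ENOMEM);
	memcpy(p, src, len);
	return p;
}
```

The payoff in the diff is two fewer error labels per call site: the caller checks `IS_ERR()` once instead of handling `-ENOMEM` and `-EFAULT` separately.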
+1 -3
drivers/misc/vexpress-syscfg.c
··· 270 270 /* Must use dev.parent (MFD), as that's where DT phandle points at... */ 271 271 bridge = vexpress_config_bridge_register(pdev->dev.parent, 272 272 &vexpress_syscfg_bridge_ops, syscfg); 273 - if (IS_ERR(bridge)) 274 - return PTR_ERR(bridge); 275 273 276 - return 0; 274 + return PTR_ERR_OR_ZERO(bridge); 277 275 } 278 276 279 277 static const struct platform_device_id vexpress_syscfg_id_table[] = {
+1
drivers/mux/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Multiplexer devices 3 4 #
+1
drivers/mux/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for multiplexer devices. 3 4 #
+1 -4
drivers/mux/adg792a.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Multiplexer driver for Analog Devices ADG792A/G Triple 4:1 mux 3 4 * 4 5 * Copyright (C) 2017 Axentia Technologies AB 5 6 * 6 7 * Author: Peter Rosin <peda@axentia.se> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 8 */ 12 9 13 10 #include <linux/err.h>
+1 -4
drivers/mux/core.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Multiplexer subsystem 3 4 * 4 5 * Copyright (C) 2017 Axentia Technologies AB 5 6 * 6 7 * Author: Peter Rosin <peda@axentia.se> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 8 */ 12 9 13 10 #define pr_fmt(fmt) "mux-core: " fmt
+1 -4
drivers/mux/gpio.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * GPIO-controlled multiplexer driver 3 4 * 4 5 * Copyright (C) 2017 Axentia Technologies AB 5 6 * 6 7 * Author: Peter Rosin <peda@axentia.se> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 8 */ 12 9 13 10 #include <linux/err.h>
+1 -4
drivers/mux/mmio.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * MMIO register bitfield-controlled multiplexer driver 3 4 * 4 5 * Copyright (C) 2017 Pengutronix, Philipp Zabel <kernel@pengutronix.de> 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License version 2 as 8 - * published by the Free Software Foundation. 9 6 */ 10 7 11 8 #include <linux/bitops.h>
+5 -12
drivers/nvmem/core.c
··· 444 444 struct nvmem_device *nvmem_register(const struct nvmem_config *config) 445 445 { 446 446 struct nvmem_device *nvmem; 447 - struct device_node *np; 448 447 int rval; 449 448 450 449 if (!config->dev) ··· 463 464 nvmem->owner = config->owner; 464 465 if (!nvmem->owner && config->dev->driver) 465 466 nvmem->owner = config->dev->driver->owner; 466 - nvmem->stride = config->stride; 467 - nvmem->word_size = config->word_size; 467 + nvmem->stride = config->stride ?: 1; 468 + nvmem->word_size = config->word_size ?: 1; 468 469 nvmem->size = config->size; 469 470 nvmem->dev.type = &nvmem_provider_type; 470 471 nvmem->dev.bus = &nvmem_bus_type; ··· 472 473 nvmem->priv = config->priv; 473 474 nvmem->reg_read = config->reg_read; 474 475 nvmem->reg_write = config->reg_write; 475 - np = config->dev->of_node; 476 - nvmem->dev.of_node = np; 476 + nvmem->dev.of_node = config->dev->of_node; 477 477 dev_set_name(&nvmem->dev, "%s%d", 478 478 config->name ? : "nvmem", 479 479 config->name ? config->id : nvmem->id); 480 480 481 - nvmem->read_only = of_property_read_bool(np, "read-only") | 481 + nvmem->read_only = device_property_present(config->dev, "read-only") | 482 482 config->read_only; 483 483 484 484 if (config->root_only) ··· 598 600 mutex_unlock(&nvmem_mutex); 599 601 } 600 602 601 - static int nvmem_match(struct device *dev, void *data) 602 - { 603 - return !strcmp(dev_name(dev), data); 604 - } 605 - 606 603 static struct nvmem_device *nvmem_find(const char *name) 607 604 { 608 605 struct device *d; 609 606 610 - d = bus_find_device(&nvmem_bus_type, NULL, (void *)name, nvmem_match); 607 + d = bus_find_device_by_name(&nvmem_bus_type, NULL, name); 611 608 612 609 if (!d) 613 610 return NULL;
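The `config->stride ?: 1` and `config->word_size ?: 1` lines above use GCC's conditional with an omitted middle operand: `a ?: b` evaluates `a` once and yields `b` only when `a` is zero, giving both fields a default of 1. A tiny sketch of the idiom:

```c
/* Sketch of the GNU `a ?: b` idiom used in the nvmem hunk above: the
 * first operand is evaluated once and used unless it is zero, in which
 * case the fallback is returned. (This is a GCC/Clang extension, not
 * standard C.) */
static int with_default(int value, int def)
{
	return value ?: def;
}
```

This keeps zero-initialized `nvmem_config` structures usable without every provider spelling out `stride = 1`.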
+69 -1
drivers/nvmem/rockchip-efuse.c
··· 32 32 #define RK3288_STROBE BIT(1) 33 33 #define RK3288_CSB BIT(0) 34 34 35 + #define RK3328_SECURE_SIZES 96 36 + #define RK3328_INT_STATUS 0x0018 37 + #define RK3328_DOUT 0x0020 38 + #define RK3328_AUTO_CTRL 0x0024 39 + #define RK3328_INT_FINISH BIT(0) 40 + #define RK3328_AUTO_ENB BIT(0) 41 + #define RK3328_AUTO_RD BIT(1) 42 + 35 43 #define RK3399_A_SHIFT 16 36 44 #define RK3399_A_MASK 0x3ff 37 45 #define RK3399_NBYTES 4 ··· 98 90 clk_disable_unprepare(efuse->clk); 99 91 100 92 return 0; 93 + } 94 + 95 + static int rockchip_rk3328_efuse_read(void *context, unsigned int offset, 96 + void *val, size_t bytes) 97 + { 98 + struct rockchip_efuse_chip *efuse = context; 99 + unsigned int addr_start, addr_end, addr_offset, addr_len; 100 + u32 out_value, status; 101 + u8 *buf; 102 + int ret, i = 0; 103 + 104 + ret = clk_prepare_enable(efuse->clk); 105 + if (ret < 0) { 106 + dev_err(efuse->dev, "failed to prepare/enable efuse clk\n"); 107 + return ret; 108 + } 109 + 110 + /* 128 Byte efuse, 96 Byte for secure, 32 Byte for non-secure */ 111 + offset += RK3328_SECURE_SIZES; 112 + addr_start = rounddown(offset, RK3399_NBYTES) / RK3399_NBYTES; 113 + addr_end = roundup(offset + bytes, RK3399_NBYTES) / RK3399_NBYTES; 114 + addr_offset = offset % RK3399_NBYTES; 115 + addr_len = addr_end - addr_start; 116 + 117 + buf = kzalloc(sizeof(*buf) * addr_len * RK3399_NBYTES, GFP_KERNEL); 118 + if (!buf) { 119 + ret = -ENOMEM; 120 + goto nomem; 121 + } 122 + 123 + while (addr_len--) { 124 + writel(RK3328_AUTO_RD | RK3328_AUTO_ENB | 125 + ((addr_start++ & RK3399_A_MASK) << RK3399_A_SHIFT), 126 + efuse->base + RK3328_AUTO_CTRL); 127 + udelay(4); 128 + status = readl(efuse->base + RK3328_INT_STATUS); 129 + if (!(status & RK3328_INT_FINISH)) { 130 + ret = -EIO; 131 + goto err; 132 + } 133 + out_value = readl(efuse->base + RK3328_DOUT); 134 + writel(RK3328_INT_FINISH, efuse->base + RK3328_INT_STATUS); 135 + 136 + memcpy(&buf[i], &out_value, RK3399_NBYTES); 137 + i += RK3399_NBYTES; 138 + } 
139 + 140 + memcpy(val, buf + addr_offset, bytes); 141 + err: 142 + kfree(buf); 143 + nomem: 144 + clk_disable_unprepare(efuse->clk); 145 + 146 + return ret; 101 147 } 102 148 103 149 static int rockchip_rk3399_efuse_read(void *context, unsigned int offset, ··· 243 181 .data = (void *)&rockchip_rk3288_efuse_read, 244 182 }, 245 183 { 184 + .compatible = "rockchip,rk3328-efuse", 185 + .data = (void *)&rockchip_rk3328_efuse_read, 186 + }, 187 + { 246 188 .compatible = "rockchip,rk3399-efuse", 247 189 .data = (void *)&rockchip_rk3399_efuse_read, 248 190 }, ··· 283 217 return PTR_ERR(efuse->clk); 284 218 285 219 efuse->dev = &pdev->dev; 286 - econfig.size = resource_size(res); 220 + if (of_property_read_u32(dev->of_node, "rockchip,efuse-size", 221 + &econfig.size)) 222 + econfig.size = resource_size(res); 287 223 econfig.reg_read = match->data; 288 224 econfig.priv = efuse; 289 225 econfig.dev = efuse->dev;
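The new `rockchip_rk3328_efuse_read()` above widens an arbitrary byte range to whole 4-byte controller words and then copies the requested slice out of a bounce buffer. The address arithmetic can be sketched in isolation:

```c
#include <stddef.h>

#define NBYTES 4 /* RK3399_NBYTES: the controller reads 32-bit words */

/* Sketch of the windowing in the hunk above: a byte range
 * [offset, offset + bytes) is widened to whole words; addr_offset
 * locates the first requested byte inside the bounce buffer. */
struct efuse_window {
	unsigned int addr_start;  /* first word index (rounddown) */
	unsigned int addr_len;    /* number of words to read */
	unsigned int addr_offset; /* byte offset into the first word */
};

static struct efuse_window efuse_window(unsigned int offset, size_t bytes)
{
	struct efuse_window w;
	unsigned int addr_end;

	w.addr_start = offset / NBYTES;                    /* rounddown */
	addr_end = (offset + bytes + NBYTES - 1) / NBYTES; /* roundup */
	w.addr_offset = offset % NBYTES;
	w.addr_len = addr_end - w.addr_start;
	return w;
}
```

For example, a 3-byte read at offset 5 touches only word 1 (bytes 4..7), while a 2-byte read at offset 3 straddles words 0 and 1.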
+5 -5
drivers/nvmem/uniphier-efuse.c
··· 27 27 unsigned int reg, void *_val, size_t bytes) 28 28 { 29 29 struct uniphier_efuse_priv *priv = context; 30 - u32 *val = _val; 30 + u8 *val = _val; 31 31 int offs; 32 32 33 - for (offs = 0; offs < bytes; offs += sizeof(u32)) 34 - *val++ = readl(priv->base + reg + offs); 33 + for (offs = 0; offs < bytes; offs += sizeof(u8)) 34 + *val++ = readb(priv->base + reg + offs); 35 35 36 36 return 0; 37 37 } ··· 53 53 if (IS_ERR(priv->base)) 54 54 return PTR_ERR(priv->base); 55 55 56 - econfig.stride = 4; 57 - econfig.word_size = 4; 56 + econfig.stride = 1; 57 + econfig.word_size = 1; 58 58 econfig.read_only = true; 59 59 econfig.reg_read = uniphier_reg_read; 60 60 econfig.size = resource_size(res);
+18
drivers/siox/Kconfig
··· 1 + menuconfig SIOX 2 + tristate "Eckelmann SIOX Support" 3 + help 4 + SIOX stands for Serial Input Output eXtension and is a synchronous 5 + bus system invented by Eckelmann AG. It is used in their control and 6 + remote monitoring systems for commercial and industrial refrigeration 7 + to drive additional I/O units. 8 + 9 + Unless you know better, it is probably safe to say "no" here. 10 + 11 + if SIOX 12 + 13 + config SIOX_BUS_GPIO 14 + tristate "SIOX GPIO bus driver" 15 + help 16 + SIOX bus driver that controls the four bus lines using GPIOs. 17 + 18 + endif
+2
drivers/siox/Makefile
··· 1 + obj-$(CONFIG_SIOX) += siox-core.o 2 + obj-$(CONFIG_SIOX_BUS_GPIO) += siox-bus-gpio.o
+172
drivers/siox/siox-bus-gpio.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2015-2017 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de> 4 + */ 5 + 6 + #include <linux/gpio/consumer.h> 7 + #include <linux/module.h> 8 + #include <linux/platform_device.h> 9 + 10 + #include <linux/delay.h> 11 + 12 + #include "siox.h" 13 + 14 + #define DRIVER_NAME "siox-gpio" 15 + 16 + struct siox_gpio_ddata { 17 + struct gpio_desc *din; 18 + struct gpio_desc *dout; 19 + struct gpio_desc *dclk; 20 + struct gpio_desc *dld; 21 + }; 22 + 23 + static unsigned int siox_clkhigh_ns = 1000; 24 + static unsigned int siox_loadhigh_ns; 25 + static unsigned int siox_bytegap_ns; 26 + 27 + static int siox_gpio_pushpull(struct siox_master *smaster, 28 + size_t setbuf_len, const u8 setbuf[], 29 + size_t getbuf_len, u8 getbuf[]) 30 + { 31 + struct siox_gpio_ddata *ddata = siox_master_get_devdata(smaster); 32 + size_t i; 33 + size_t cycles = max(setbuf_len, getbuf_len); 34 + 35 + /* reset data and clock */ 36 + gpiod_set_value_cansleep(ddata->dout, 0); 37 + gpiod_set_value_cansleep(ddata->dclk, 0); 38 + 39 + gpiod_set_value_cansleep(ddata->dld, 1); 40 + ndelay(siox_loadhigh_ns); 41 + gpiod_set_value_cansleep(ddata->dld, 0); 42 + 43 + for (i = 0; i < cycles; ++i) { 44 + u8 set = 0, get = 0; 45 + size_t j; 46 + 47 + if (i >= cycles - setbuf_len) 48 + set = setbuf[i - (cycles - setbuf_len)]; 49 + 50 + for (j = 0; j < 8; ++j) { 51 + get <<= 1; 52 + if (gpiod_get_value_cansleep(ddata->din)) 53 + get |= 1; 54 + 55 + /* DOUT is logically inverted */ 56 + gpiod_set_value_cansleep(ddata->dout, !(set & 0x80)); 57 + set <<= 1; 58 + 59 + gpiod_set_value_cansleep(ddata->dclk, 1); 60 + ndelay(siox_clkhigh_ns); 61 + gpiod_set_value_cansleep(ddata->dclk, 0); 62 + } 63 + 64 + if (i < getbuf_len) 65 + getbuf[i] = get; 66 + 67 + ndelay(siox_bytegap_ns); 68 + } 69 + 70 + gpiod_set_value_cansleep(ddata->dld, 1); 71 + ndelay(siox_loadhigh_ns); 72 + gpiod_set_value_cansleep(ddata->dld, 0); 73 + 74 + /* 75 + * Resetting dout 
isn't necessary protocol wise, but it makes the 76 + * signals more pretty because the dout level is deterministic between 77 + * cycles. Note that this only affects dout between the master and the 78 + * first siox device. dout for the later devices depend on the output of 79 + * the previous siox device. 80 + */ 81 + gpiod_set_value_cansleep(ddata->dout, 0); 82 + 83 + return 0; 84 + } 85 + 86 + static int siox_gpio_probe(struct platform_device *pdev) 87 + { 88 + struct device *dev = &pdev->dev; 89 + struct siox_gpio_ddata *ddata; 90 + int ret; 91 + struct siox_master *smaster; 92 + 93 + smaster = siox_master_alloc(&pdev->dev, sizeof(*ddata)); 94 + if (!smaster) { 95 + dev_err(dev, "failed to allocate siox master\n"); 96 + return -ENOMEM; 97 + } 98 + 99 + platform_set_drvdata(pdev, smaster); 100 + ddata = siox_master_get_devdata(smaster); 101 + 102 + ddata->din = devm_gpiod_get(dev, "din", GPIOD_IN); 103 + if (IS_ERR(ddata->din)) { 104 + ret = PTR_ERR(ddata->din); 105 + dev_err(dev, "Failed to get %s GPIO: %d\n", "din", ret); 106 + goto err; 107 + } 108 + 109 + ddata->dout = devm_gpiod_get(dev, "dout", GPIOD_OUT_LOW); 110 + if (IS_ERR(ddata->dout)) { 111 + ret = PTR_ERR(ddata->dout); 112 + dev_err(dev, "Failed to get %s GPIO: %d\n", "dout", ret); 113 + goto err; 114 + } 115 + 116 + ddata->dclk = devm_gpiod_get(dev, "dclk", GPIOD_OUT_LOW); 117 + if (IS_ERR(ddata->dclk)) { 118 + ret = PTR_ERR(ddata->dclk); 119 + dev_err(dev, "Failed to get %s GPIO: %d\n", "dclk", ret); 120 + goto err; 121 + } 122 + 123 + ddata->dld = devm_gpiod_get(dev, "dld", GPIOD_OUT_LOW); 124 + if (IS_ERR(ddata->dld)) { 125 + ret = PTR_ERR(ddata->dld); 126 + dev_err(dev, "Failed to get %s GPIO: %d\n", "dld", ret); 127 + goto err; 128 + } 129 + 130 + smaster->pushpull = siox_gpio_pushpull; 131 + /* XXX: determine automatically like spi does */ 132 + smaster->busno = 0; 133 + 134 + ret = siox_master_register(smaster); 135 + if (ret) { 136 + dev_err(dev, "Failed to register siox master: %d\n", 
ret); 137 + err: 138 + siox_master_put(smaster); 139 + } 140 + 141 + return ret; 142 + } 143 + 144 + static int siox_gpio_remove(struct platform_device *pdev) 145 + { 146 + struct siox_master *master = platform_get_drvdata(pdev); 147 + 148 + siox_master_unregister(master); 149 + 150 + return 0; 151 + } 152 + 153 + static const struct of_device_id siox_gpio_dt_ids[] = { 154 + { .compatible = "eckelmann,siox-gpio", }, 155 + { /* sentinel */ } 156 + }; 157 + MODULE_DEVICE_TABLE(of, siox_gpio_dt_ids); 158 + 159 + static struct platform_driver siox_gpio_driver = { 160 + .probe = siox_gpio_probe, 161 + .remove = siox_gpio_remove, 162 + 163 + .driver = { 164 + .name = DRIVER_NAME, 165 + .of_match_table = siox_gpio_dt_ids, 166 + }, 167 + }; 168 + module_platform_driver(siox_gpio_driver); 169 + 170 + MODULE_AUTHOR("Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>"); 171 + MODULE_LICENSE("GPL v2"); 172 + MODULE_ALIAS("platform:" DRIVER_NAME);
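The inner loop of `siox_gpio_pushpull()` above shifts one byte out MSB-first on `dout` (inverted on the wire) while sampling one byte MSB-first from `din`. A userspace model of that per-byte exchange, with arrays of bit levels standing in for the GPIO calls:

```c
#include <stdint.h>

/* Userspace model of the per-byte loop in siox_gpio_pushpull(): shift
 * `set` out MSB-first (logically inverted, as in the driver) while
 * assembling `get` MSB-first from the sampled input levels. */
static uint8_t siox_shift_byte(uint8_t set, const int din_levels[8],
			       int dout_levels[8])
{
	uint8_t get = 0;
	int j;

	for (j = 0; j < 8; j++) {
		get <<= 1;
		if (din_levels[j])
			get |= 1;

		/* DOUT is logically inverted */
		dout_levels[j] = !(set & 0x80);
		set <<= 1;
	}
	return get;
}
```

In the driver, each `dout_levels[j]` transition is followed by a clock pulse (`dclk` high for `siox_clkhigh_ns`, then low) that the devices use to latch the bit.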
+934
drivers/siox/siox-core.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2015-2017 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de> 4 + */ 5 + #include <linux/kernel.h> 6 + #include <linux/device.h> 7 + #include <linux/module.h> 8 + #include <linux/slab.h> 9 + #include <linux/sysfs.h> 10 + 11 + #include "siox.h" 12 + 13 + /* 14 + * The lowest bit in the SIOX status word signals if the in-device watchdog is 15 + * ok. If the bit is set, the device is functional. 16 + * 17 + * On writing, the watchdog timer is reset when this bit toggles. 18 + */ 19 + #define SIOX_STATUS_WDG 0x01 20 + 21 + /* 22 + * Bits 1 to 3 of the status word read as the bitwise negation of what was 23 + * clocked in before. The value clocked in is changed in each cycle, which 24 + * makes it possible to detect transmit/receive problems. 25 + */ 26 + #define SIOX_STATUS_COUNTER 0x0e 27 + 28 + /* 29 + * Each SIOX device has a 4 bit type number that is neither 0 nor 15. This is 30 + * available in the upper nibble of the read status. 31 + * 32 + * On write these bits are don't-care. 33 + */ 34 + #define SIOX_STATUS_TYPE 0xf0 35 + 36 + #define CREATE_TRACE_POINTS 37 + #include <trace/events/siox.h> 38 + 39 + static bool siox_is_registered; 40 + 41 + static void siox_master_lock(struct siox_master *smaster) 42 + { 43 + mutex_lock(&smaster->lock); 44 + } 45 + 46 + static void siox_master_unlock(struct siox_master *smaster) 47 + { 48 + mutex_unlock(&smaster->lock); 49 + } 50 + 51 + static inline u8 siox_status_clean(u8 status_read, u8 status_written) 52 + { 53 + /* 54 + * Bits 3:1 of the status sample the respective bit in the status 55 + * byte written in the previous cycle, but inverted. So if you wrote the 56 + * status word as 0xa before (counter = 0b101), the counter bits 57 + * are expected to read back as 0b010. 
58 + * 59 + * So given the last status written, this function toggles those counter 60 + * bits in the read value that were unset in the written status, so that 61 + * the counter bits in the return value are all zero iff the bits were 62 + * read back as expected, to simplify error detection. 63 + */ 64 + 65 + return status_read ^ (~status_written & 0xe); 66 + } 67 + 68 + static bool siox_device_counter_error(struct siox_device *sdevice, 69 + u8 status_clean) 70 + { 71 + return (status_clean & SIOX_STATUS_COUNTER) != 0; 72 + } 73 + 74 + static bool siox_device_type_error(struct siox_device *sdevice, u8 status_clean) 75 + { 76 + u8 statustype = (status_clean & SIOX_STATUS_TYPE) >> 4; 77 + 78 + /* 79 + * If the device knows which value the type bits should have, check 80 + * against this value; otherwise just rule out the invalid values 0b0000 81 + * and 0b1111. 82 + */ 83 + if (sdevice->statustype) { 84 + if (statustype != sdevice->statustype) 85 + return true; 86 + } else { 87 + switch (statustype) { 88 + case 0: 89 + case 0xf: 90 + return true; 91 + } 92 + } 93 + 94 + return false; 95 + } 96 + 97 + static bool siox_device_wdg_error(struct siox_device *sdevice, u8 status_clean) 98 + { 99 + return (status_clean & SIOX_STATUS_WDG) == 0; 100 + } 101 + 102 + /* 103 + * If there is a type or counter error, the device is called "unsynced". 104 + */ 105 + bool siox_device_synced(struct siox_device *sdevice) 106 + { 107 + if (siox_device_type_error(sdevice, sdevice->status_read_clean)) 108 + return false; 109 + 110 + return !siox_device_counter_error(sdevice, sdevice->status_read_clean); 111 + 112 + } 113 + EXPORT_SYMBOL_GPL(siox_device_synced); 114 + 115 + /* 116 + * A device is called "connected" if it is synced and the watchdog is not 117 + * asserted. 
118 + */ 119 + bool siox_device_connected(struct siox_device *sdevice) 120 + { 121 + if (!siox_device_synced(sdevice)) 122 + return false; 123 + 124 + return !siox_device_wdg_error(sdevice, sdevice->status_read_clean); 125 + } 126 + EXPORT_SYMBOL_GPL(siox_device_connected); 127 + 128 + static void siox_poll(struct siox_master *smaster) 129 + { 130 + struct siox_device *sdevice; 131 + size_t i = smaster->setbuf_len; 132 + unsigned int devno = 0; 133 + int unsync_error = 0; 134 + 135 + smaster->last_poll = jiffies; 136 + 137 + /* 138 + * The counter bits change in each second cycle, the watchdog bit 139 + * toggles each time. 140 + * The counter bits hold values from [0, 6]. 7 would be possible 141 + * theoretically but the protocol designer considered that a bad idea 142 + * for reasons unknown today. (Maybe that's because then the status read 143 + * back has only zeros in the counter bits then which might be confused 144 + * with a stuck-at-0 error. But for the same reason (with s/0/1/) 0 145 + * could be skipped.) 146 + */ 147 + if (++smaster->status > 0x0d) 148 + smaster->status = 0; 149 + 150 + memset(smaster->buf, 0, smaster->setbuf_len); 151 + 152 + /* prepare data pushed out to devices in buf[0..setbuf_len) */ 153 + list_for_each_entry(sdevice, &smaster->devices, node) { 154 + struct siox_driver *sdriver = 155 + to_siox_driver(sdevice->dev.driver); 156 + sdevice->status_written = smaster->status; 157 + 158 + i -= sdevice->inbytes; 159 + 160 + /* 161 + * If the device or a previous one is unsynced, don't pet the 162 + * watchdog. This is done to ensure that the device is kept in 163 + * reset when something is wrong. 
164 + */ 165 + if (!siox_device_synced(sdevice)) 166 + unsync_error = 1; 167 + 168 + if (sdriver && !unsync_error) 169 + sdriver->set_data(sdevice, sdevice->status_written, 170 + &smaster->buf[i + 1]); 171 + else 172 + /* 173 + * Don't trigger watchdog if there is no driver or a 174 + * sync problem 175 + */ 176 + sdevice->status_written &= ~SIOX_STATUS_WDG; 177 + 178 + smaster->buf[i] = sdevice->status_written; 179 + 180 + trace_siox_set_data(smaster, sdevice, devno, i); 181 + 182 + devno++; 183 + } 184 + 185 + smaster->pushpull(smaster, smaster->setbuf_len, smaster->buf, 186 + smaster->getbuf_len, 187 + smaster->buf + smaster->setbuf_len); 188 + 189 + unsync_error = 0; 190 + 191 + /* interpret data pulled in from devices in buf[setbuf_len..] */ 192 + devno = 0; 193 + i = smaster->setbuf_len; 194 + list_for_each_entry(sdevice, &smaster->devices, node) { 195 + struct siox_driver *sdriver = 196 + to_siox_driver(sdevice->dev.driver); 197 + u8 status = smaster->buf[i + sdevice->outbytes - 1]; 198 + u8 status_clean; 199 + u8 prev_status_clean = sdevice->status_read_clean; 200 + bool synced = true; 201 + bool connected = true; 202 + 203 + if (!siox_device_synced(sdevice)) 204 + unsync_error = 1; 205 + 206 + /* 207 + * If the watchdog bit wasn't toggled in this cycle, report the 208 + * watchdog as active to give a consistent view for drivers and 209 + * sysfs consumers. 
210 + */ 211 + if (!sdriver || unsync_error) 212 + status &= ~SIOX_STATUS_WDG; 213 + 214 + status_clean = 215 + siox_status_clean(status, 216 + sdevice->status_written_lastcycle); 217 + 218 + /* Check counter bits */ 219 + if (siox_device_counter_error(sdevice, status_clean)) { 220 + bool prev_counter_error; 221 + 222 + synced = false; 223 + 224 + /* only report a new error if the last cycle was ok */ 225 + prev_counter_error = 226 + siox_device_counter_error(sdevice, 227 + prev_status_clean); 228 + if (!prev_counter_error) { 229 + sdevice->status_errors++; 230 + sysfs_notify_dirent(sdevice->status_errors_kn); 231 + } 232 + } 233 + 234 + /* Check type bits */ 235 + if (siox_device_type_error(sdevice, status_clean)) 236 + synced = false; 237 + 238 + /* If the device is unsynced report the watchdog as active */ 239 + if (!synced) { 240 + status &= ~SIOX_STATUS_WDG; 241 + status_clean &= ~SIOX_STATUS_WDG; 242 + } 243 + 244 + if (siox_device_wdg_error(sdevice, status_clean)) 245 + connected = false; 246 + 247 + /* The watchdog state changed just now */ 248 + if ((status_clean ^ prev_status_clean) & SIOX_STATUS_WDG) { 249 + sysfs_notify_dirent(sdevice->watchdog_kn); 250 + 251 + if (siox_device_wdg_error(sdevice, status_clean)) { 252 + struct kernfs_node *wd_errs = 253 + sdevice->watchdog_errors_kn; 254 + 255 + sdevice->watchdog_errors++; 256 + sysfs_notify_dirent(wd_errs); 257 + } 258 + } 259 + 260 + if (connected != sdevice->connected) 261 + sysfs_notify_dirent(sdevice->connected_kn); 262 + 263 + sdevice->status_read_clean = status_clean; 264 + sdevice->status_written_lastcycle = sdevice->status_written; 265 + sdevice->connected = connected; 266 + 267 + trace_siox_get_data(smaster, sdevice, devno, status_clean, i); 268 + 269 + /* only give data read to driver if the device is connected */ 270 + if (sdriver && connected) 271 + sdriver->get_data(sdevice, &smaster->buf[i]); 272 + 273 + devno++; 274 + i += sdevice->outbytes; 275 + } 276 + } 277 + 278 + static int 
siox_poll_thread(void *data) 279 + { 280 + struct siox_master *smaster = data; 281 + signed long timeout = 0; 282 + 283 + get_device(&smaster->dev); 284 + 285 + for (;;) { 286 + if (kthread_should_stop()) { 287 + put_device(&smaster->dev); 288 + return 0; 289 + } 290 + 291 + siox_master_lock(smaster); 292 + 293 + if (smaster->active) { 294 + unsigned long next_poll = 295 + smaster->last_poll + smaster->poll_interval; 296 + if (time_is_before_eq_jiffies(next_poll)) 297 + siox_poll(smaster); 298 + 299 + timeout = smaster->poll_interval - 300 + (jiffies - smaster->last_poll); 301 + } else { 302 + timeout = MAX_SCHEDULE_TIMEOUT; 303 + } 304 + 305 + /* 306 + * Set the task to idle while holding the lock. This makes sure 307 + * that we don't sleep too long when the bus is reenabled before 308 + * schedule_timeout is reached. 309 + */ 310 + if (timeout > 0) 311 + set_current_state(TASK_IDLE); 312 + 313 + siox_master_unlock(smaster); 314 + 315 + if (timeout > 0) 316 + schedule_timeout(timeout); 317 + 318 + /* 319 + * I'm not clear if/why it is important to set the state to 320 + * RUNNING again, but it fixes a "do not call blocking ops when 321 + * !TASK_RUNNING;"-warning. 
322 + */ 323 + set_current_state(TASK_RUNNING); 324 + } 325 + } 326 + 327 + static int __siox_start(struct siox_master *smaster) 328 + { 329 + if (!(smaster->setbuf_len + smaster->getbuf_len)) 330 + return -ENODEV; 331 + 332 + if (!smaster->buf) 333 + return -ENOMEM; 334 + 335 + if (smaster->active) 336 + return 0; 337 + 338 + smaster->active = 1; 339 + wake_up_process(smaster->poll_thread); 340 + 341 + return 1; 342 + } 343 + 344 + static int siox_start(struct siox_master *smaster) 345 + { 346 + int ret; 347 + 348 + siox_master_lock(smaster); 349 + ret = __siox_start(smaster); 350 + siox_master_unlock(smaster); 351 + 352 + return ret; 353 + } 354 + 355 + static int __siox_stop(struct siox_master *smaster) 356 + { 357 + if (smaster->active) { 358 + struct siox_device *sdevice; 359 + 360 + smaster->active = 0; 361 + 362 + list_for_each_entry(sdevice, &smaster->devices, node) { 363 + if (sdevice->connected) 364 + sysfs_notify_dirent(sdevice->connected_kn); 365 + sdevice->connected = false; 366 + } 367 + 368 + return 1; 369 + } 370 + return 0; 371 + } 372 + 373 + static int siox_stop(struct siox_master *smaster) 374 + { 375 + int ret; 376 + 377 + siox_master_lock(smaster); 378 + ret = __siox_stop(smaster); 379 + siox_master_unlock(smaster); 380 + 381 + return ret; 382 + } 383 + 384 + static ssize_t type_show(struct device *dev, 385 + struct device_attribute *attr, char *buf) 386 + { 387 + struct siox_device *sdev = to_siox_device(dev); 388 + 389 + return sprintf(buf, "%s\n", sdev->type); 390 + } 391 + 392 + static DEVICE_ATTR_RO(type); 393 + 394 + static ssize_t inbytes_show(struct device *dev, 395 + struct device_attribute *attr, char *buf) 396 + { 397 + struct siox_device *sdev = to_siox_device(dev); 398 + 399 + return sprintf(buf, "%zu\n", sdev->inbytes); 400 + } 401 + 402 + static DEVICE_ATTR_RO(inbytes); 403 + 404 + static ssize_t outbytes_show(struct device *dev, 405 + struct device_attribute *attr, char *buf) 406 + { 407 + struct siox_device *sdev = 
to_siox_device(dev); 408 + 409 + return sprintf(buf, "%zu\n", sdev->outbytes); 410 + } 411 + 412 + static DEVICE_ATTR_RO(outbytes); 413 + 414 + static ssize_t status_errors_show(struct device *dev, 415 + struct device_attribute *attr, char *buf) 416 + { 417 + struct siox_device *sdev = to_siox_device(dev); 418 + unsigned int status_errors; 419 + 420 + siox_master_lock(sdev->smaster); 421 + 422 + status_errors = sdev->status_errors; 423 + 424 + siox_master_unlock(sdev->smaster); 425 + 426 + return sprintf(buf, "%u\n", status_errors); 427 + } 428 + 429 + static DEVICE_ATTR_RO(status_errors); 430 + 431 + static ssize_t connected_show(struct device *dev, 432 + struct device_attribute *attr, char *buf) 433 + { 434 + struct siox_device *sdev = to_siox_device(dev); 435 + bool connected; 436 + 437 + siox_master_lock(sdev->smaster); 438 + 439 + connected = sdev->connected; 440 + 441 + siox_master_unlock(sdev->smaster); 442 + 443 + return sprintf(buf, "%u\n", connected); 444 + } 445 + 446 + static DEVICE_ATTR_RO(connected); 447 + 448 + static ssize_t watchdog_show(struct device *dev, 449 + struct device_attribute *attr, char *buf) 450 + { 451 + struct siox_device *sdev = to_siox_device(dev); 452 + u8 status; 453 + 454 + siox_master_lock(sdev->smaster); 455 + 456 + status = sdev->status_read_clean; 457 + 458 + siox_master_unlock(sdev->smaster); 459 + 460 + return sprintf(buf, "%d\n", status & SIOX_STATUS_WDG); 461 + } 462 + 463 + static DEVICE_ATTR_RO(watchdog); 464 + 465 + static ssize_t watchdog_errors_show(struct device *dev, 466 + struct device_attribute *attr, char *buf) 467 + { 468 + struct siox_device *sdev = to_siox_device(dev); 469 + unsigned int watchdog_errors; 470 + 471 + siox_master_lock(sdev->smaster); 472 + 473 + watchdog_errors = sdev->watchdog_errors; 474 + 475 + siox_master_unlock(sdev->smaster); 476 + 477 + return sprintf(buf, "%u\n", watchdog_errors); 478 + } 479 + 480 + static DEVICE_ATTR_RO(watchdog_errors); 481 + 482 + static struct attribute 
*siox_device_attrs[] = { 483 + &dev_attr_type.attr, 484 + &dev_attr_inbytes.attr, 485 + &dev_attr_outbytes.attr, 486 + &dev_attr_status_errors.attr, 487 + &dev_attr_connected.attr, 488 + &dev_attr_watchdog.attr, 489 + &dev_attr_watchdog_errors.attr, 490 + NULL 491 + }; 492 + ATTRIBUTE_GROUPS(siox_device); 493 + 494 + static void siox_device_release(struct device *dev) 495 + { 496 + struct siox_device *sdevice = to_siox_device(dev); 497 + 498 + kfree(sdevice); 499 + } 500 + 501 + static struct device_type siox_device_type = { 502 + .groups = siox_device_groups, 503 + .release = siox_device_release, 504 + }; 505 + 506 + static int siox_match(struct device *dev, struct device_driver *drv) 507 + { 508 + if (dev->type != &siox_device_type) 509 + return 0; 510 + 511 + /* up to now there is only a single driver so keeping this simple */ 512 + return 1; 513 + } 514 + 515 + static struct bus_type siox_bus_type = { 516 + .name = "siox", 517 + .match = siox_match, 518 + }; 519 + 520 + static int siox_driver_probe(struct device *dev) 521 + { 522 + struct siox_driver *sdriver = to_siox_driver(dev->driver); 523 + struct siox_device *sdevice = to_siox_device(dev); 524 + int ret; 525 + 526 + ret = sdriver->probe(sdevice); 527 + return ret; 528 + } 529 + 530 + static int siox_driver_remove(struct device *dev) 531 + { 532 + struct siox_driver *sdriver = 533 + container_of(dev->driver, struct siox_driver, driver); 534 + struct siox_device *sdevice = to_siox_device(dev); 535 + int ret; 536 + 537 + ret = sdriver->remove(sdevice); 538 + return ret; 539 + } 540 + 541 + static void siox_driver_shutdown(struct device *dev) 542 + { 543 + struct siox_driver *sdriver = 544 + container_of(dev->driver, struct siox_driver, driver); 545 + struct siox_device *sdevice = to_siox_device(dev); 546 + 547 + sdriver->shutdown(sdevice); 548 + } 549 + 550 + static ssize_t active_show(struct device *dev, 551 + struct device_attribute *attr, char *buf) 552 + { 553 + struct siox_master *smaster = 
to_siox_master(dev); 554 + 555 + return sprintf(buf, "%d\n", smaster->active); 556 + } 557 + 558 + static ssize_t active_store(struct device *dev, 559 + struct device_attribute *attr, 560 + const char *buf, size_t count) 561 + { 562 + struct siox_master *smaster = to_siox_master(dev); 563 + int ret; 564 + int active; 565 + 566 + ret = kstrtoint(buf, 0, &active); 567 + if (ret < 0) 568 + return ret; 569 + 570 + if (active) 571 + ret = siox_start(smaster); 572 + else 573 + ret = siox_stop(smaster); 574 + 575 + if (ret < 0) 576 + return ret; 577 + 578 + return count; 579 + } 580 + 581 + static DEVICE_ATTR_RW(active); 582 + 583 + static struct siox_device *siox_device_add(struct siox_master *smaster, 584 + const char *type, size_t inbytes, 585 + size_t outbytes, u8 statustype); 586 + 587 + static ssize_t device_add_store(struct device *dev, 588 + struct device_attribute *attr, 589 + const char *buf, size_t count) 590 + { 591 + struct siox_master *smaster = to_siox_master(dev); 592 + int ret; 593 + char type[20] = ""; 594 + size_t inbytes = 0, outbytes = 0; 595 + u8 statustype = 0; 596 + 597 + ret = sscanf(buf, "%20s %zu %zu %hhu", type, &inbytes, 598 + &outbytes, &statustype); 599 + if (ret != 3 && ret != 4) 600 + return -EINVAL; 601 + 602 + if (strcmp(type, "siox-12x8") || inbytes != 2 || outbytes != 4) 603 + return -EINVAL; 604 + 605 + siox_device_add(smaster, "siox-12x8", inbytes, outbytes, statustype); 606 + 607 + return count; 608 + } 609 + 610 + static DEVICE_ATTR_WO(device_add); 611 + 612 + static void siox_device_remove(struct siox_master *smaster); 613 + 614 + static ssize_t device_remove_store(struct device *dev, 615 + struct device_attribute *attr, 616 + const char *buf, size_t count) 617 + { 618 + struct siox_master *smaster = to_siox_master(dev); 619 + 620 + /* XXX? 
require to write <type> <inbytes> <outbytes> */ 621 + siox_device_remove(smaster); 622 + 623 + return count; 624 + } 625 + 626 + static DEVICE_ATTR_WO(device_remove); 627 + 628 + static ssize_t poll_interval_ns_show(struct device *dev, 629 + struct device_attribute *attr, char *buf) 630 + { 631 + struct siox_master *smaster = to_siox_master(dev); 632 + 633 + return sprintf(buf, "%lld\n", jiffies_to_nsecs(smaster->poll_interval)); 634 + } 635 + 636 + static ssize_t poll_interval_ns_store(struct device *dev, 637 + struct device_attribute *attr, 638 + const char *buf, size_t count) 639 + { 640 + struct siox_master *smaster = to_siox_master(dev); 641 + int ret; 642 + u64 val; 643 + 644 + ret = kstrtou64(buf, 0, &val); 645 + if (ret < 0) 646 + return ret; 647 + 648 + siox_master_lock(smaster); 649 + 650 + smaster->poll_interval = nsecs_to_jiffies(val); 651 + 652 + siox_master_unlock(smaster); 653 + 654 + return count; 655 + } 656 + 657 + static DEVICE_ATTR_RW(poll_interval_ns); 658 + 659 + static struct attribute *siox_master_attrs[] = { 660 + &dev_attr_active.attr, 661 + &dev_attr_device_add.attr, 662 + &dev_attr_device_remove.attr, 663 + &dev_attr_poll_interval_ns.attr, 664 + NULL 665 + }; 666 + ATTRIBUTE_GROUPS(siox_master); 667 + 668 + static void siox_master_release(struct device *dev) 669 + { 670 + struct siox_master *smaster = to_siox_master(dev); 671 + 672 + kfree(smaster); 673 + } 674 + 675 + static struct device_type siox_master_type = { 676 + .groups = siox_master_groups, 677 + .release = siox_master_release, 678 + }; 679 + 680 + struct siox_master *siox_master_alloc(struct device *dev, 681 + size_t size) 682 + { 683 + struct siox_master *smaster; 684 + 685 + if (!dev) 686 + return NULL; 687 + 688 + smaster = kzalloc(sizeof(*smaster) + size, GFP_KERNEL); 689 + if (!smaster) 690 + return NULL; 691 + 692 + device_initialize(&smaster->dev); 693 + 694 + smaster->busno = -1; 695 + smaster->dev.bus = &siox_bus_type; 696 + smaster->dev.type = &siox_master_type; 697 
+ smaster->dev.parent = dev; 698 + smaster->poll_interval = DIV_ROUND_UP(HZ, 40); 699 + 700 + dev_set_drvdata(&smaster->dev, &smaster[1]); 701 + 702 + return smaster; 703 + } 704 + EXPORT_SYMBOL_GPL(siox_master_alloc); 705 + 706 + int siox_master_register(struct siox_master *smaster) 707 + { 708 + int ret; 709 + 710 + if (!siox_is_registered) 711 + return -EPROBE_DEFER; 712 + 713 + if (!smaster->pushpull) 714 + return -EINVAL; 715 + 716 + dev_set_name(&smaster->dev, "siox-%d", smaster->busno); 717 + 718 + smaster->last_poll = jiffies; 719 + smaster->poll_thread = kthread_create(siox_poll_thread, smaster, 720 + "siox-%d", smaster->busno); 721 + if (IS_ERR(smaster->poll_thread)) { 722 + smaster->active = 0; 723 + return PTR_ERR(smaster->poll_thread); 724 + } 725 + 726 + mutex_init(&smaster->lock); 727 + INIT_LIST_HEAD(&smaster->devices); 728 + 729 + ret = device_add(&smaster->dev); 730 + if (ret) 731 + kthread_stop(smaster->poll_thread); 732 + 733 + return ret; 734 + } 735 + EXPORT_SYMBOL_GPL(siox_master_register); 736 + 737 + void siox_master_unregister(struct siox_master *smaster) 738 + { 739 + /* remove device */ 740 + device_del(&smaster->dev); 741 + 742 + siox_master_lock(smaster); 743 + 744 + __siox_stop(smaster); 745 + 746 + while (smaster->num_devices) { 747 + struct siox_device *sdevice; 748 + 749 + sdevice = container_of(smaster->devices.prev, 750 + struct siox_device, node); 751 + list_del(&sdevice->node); 752 + smaster->num_devices--; 753 + 754 + siox_master_unlock(smaster); 755 + 756 + device_unregister(&sdevice->dev); 757 + 758 + siox_master_lock(smaster); 759 + } 760 + 761 + siox_master_unlock(smaster); 762 + 763 + put_device(&smaster->dev); 764 + } 765 + EXPORT_SYMBOL_GPL(siox_master_unregister); 766 + 767 + static struct siox_device *siox_device_add(struct siox_master *smaster, 768 + const char *type, size_t inbytes, 769 + size_t outbytes, u8 statustype) 770 + { 771 + struct siox_device *sdevice; 772 + int ret; 773 + size_t buf_len; 774 + 775 + 
sdevice = kzalloc(sizeof(*sdevice), GFP_KERNEL); 776 + if (!sdevice) 777 + return ERR_PTR(-ENOMEM); 778 + 779 + sdevice->type = type; 780 + sdevice->inbytes = inbytes; 781 + sdevice->outbytes = outbytes; 782 + sdevice->statustype = statustype; 783 + 784 + sdevice->smaster = smaster; 785 + sdevice->dev.parent = &smaster->dev; 786 + sdevice->dev.bus = &siox_bus_type; 787 + sdevice->dev.type = &siox_device_type; 788 + 789 + siox_master_lock(smaster); 790 + 791 + dev_set_name(&sdevice->dev, "siox-%d-%d", 792 + smaster->busno, smaster->num_devices); 793 + 794 + buf_len = smaster->setbuf_len + inbytes + 795 + smaster->getbuf_len + outbytes; 796 + if (smaster->buf_len < buf_len) { 797 + u8 *buf = krealloc(smaster->buf, buf_len, GFP_KERNEL); 798 + 799 + if (!buf) { 800 + dev_err(&smaster->dev, 801 + "failed to realloc buffer to %zu\n", buf_len); 802 + ret = -ENOMEM; 803 + goto err_buf_alloc; 804 + } 805 + 806 + smaster->buf_len = buf_len; 807 + smaster->buf = buf; 808 + } 809 + 810 + ret = device_register(&sdevice->dev); 811 + if (ret) { 812 + dev_err(&smaster->dev, "failed to register device: %d\n", ret); 813 + 814 + goto err_device_register; 815 + } 816 + 817 + smaster->num_devices++; 818 + list_add_tail(&sdevice->node, &smaster->devices); 819 + 820 + smaster->setbuf_len += sdevice->inbytes; 821 + smaster->getbuf_len += sdevice->outbytes; 822 + 823 + sdevice->status_errors_kn = sysfs_get_dirent(sdevice->dev.kobj.sd, 824 + "status_errors"); 825 + sdevice->watchdog_kn = sysfs_get_dirent(sdevice->dev.kobj.sd, 826 + "watchdog"); 827 + sdevice->watchdog_errors_kn = sysfs_get_dirent(sdevice->dev.kobj.sd, 828 + "watchdog_errors"); 829 + sdevice->connected_kn = sysfs_get_dirent(sdevice->dev.kobj.sd, 830 + "connected"); 831 + 832 + siox_master_unlock(smaster); 833 + 834 + return sdevice; 835 + 836 + err_device_register: 837 + /* don't care to make the buffer smaller again */ 838 + 839 + err_buf_alloc: 840 + siox_master_unlock(smaster); 841 + 842 + kfree(sdevice); 843 + 844 + 
return ERR_PTR(ret); 845 + } 846 + 847 + static void siox_device_remove(struct siox_master *smaster) 848 + { 849 + struct siox_device *sdevice; 850 + 851 + siox_master_lock(smaster); 852 + 853 + if (!smaster->num_devices) { 854 + siox_master_unlock(smaster); 855 + return; 856 + } 857 + 858 + sdevice = container_of(smaster->devices.prev, struct siox_device, node); 859 + list_del(&sdevice->node); 860 + smaster->num_devices--; 861 + 862 + smaster->setbuf_len -= sdevice->inbytes; 863 + smaster->getbuf_len -= sdevice->outbytes; 864 + 865 + if (!smaster->num_devices) 866 + __siox_stop(smaster); 867 + 868 + siox_master_unlock(smaster); 869 + 870 + /* 871 + * This must be done without holding the master lock because we're 872 + * called from device_remove_store which also holds a sysfs mutex. 873 + * device_unregister tries to acquire the same lock. 874 + */ 875 + device_unregister(&sdevice->dev); 876 + } 877 + 878 + int __siox_driver_register(struct siox_driver *sdriver, struct module *owner) 879 + { 880 + int ret; 881 + 882 + if (unlikely(!siox_is_registered)) 883 + return -EPROBE_DEFER; 884 + 885 + if (!sdriver->set_data && !sdriver->get_data) { 886 + pr_err("Driver %s doesn't provide needed callbacks\n", 887 + sdriver->driver.name); 888 + return -EINVAL; 889 + } 890 + 891 + sdriver->driver.owner = owner; 892 + sdriver->driver.bus = &siox_bus_type; 893 + 894 + if (sdriver->probe) 895 + sdriver->driver.probe = siox_driver_probe; 896 + if (sdriver->remove) 897 + sdriver->driver.remove = siox_driver_remove; 898 + if (sdriver->shutdown) 899 + sdriver->driver.shutdown = siox_driver_shutdown; 900 + 901 + ret = driver_register(&sdriver->driver); 902 + if (ret) 903 + pr_err("Failed to register siox driver %s (%d)\n", 904 + sdriver->driver.name, ret); 905 + 906 + return ret; 907 + } 908 + EXPORT_SYMBOL_GPL(__siox_driver_register); 909 + 910 + static int __init siox_init(void) 911 + { 912 + int ret; 913 + 914 + ret = bus_register(&siox_bus_type); 915 + if (ret) { 916 + 
pr_err("Registration of SIOX bus type failed: %d\n", ret); 917 + return ret; 918 + } 919 + 920 + siox_is_registered = true; 921 + 922 + return 0; 923 + } 924 + subsys_initcall(siox_init); 925 + 926 + static void __exit siox_exit(void) 927 + { 928 + bus_unregister(&siox_bus_type); 929 + } 930 + module_exit(siox_exit); 931 + 932 + MODULE_AUTHOR("Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>"); 933 + MODULE_DESCRIPTION("Eckelmann SIOX driver core"); 934 + MODULE_LICENSE("GPL v2");
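The status-word handling in siox-core.c is self-contained bit math and can be exercised outside the kernel. A user-space sketch of `siox_status_clean()` and the counter check, using the same bit layout (0x01 watchdog, 0x0e counter, 0xf0 type); the function names here are stand-ins for the static kernel helpers:

```c
#include <stdint.h>

#define SIOX_STATUS_WDG     0x01
#define SIOX_STATUS_COUNTER 0x0e
#define SIOX_STATUS_TYPE    0xf0

/*
 * Mirror of siox_status_clean(): devices echo the counter bits of the
 * previously written status inverted, so XORing with the inverted
 * written counter bits yields 0 in bits 3:1 iff the echo was correct.
 */
static uint8_t status_clean(uint8_t status_read, uint8_t status_written)
{
	return status_read ^ (~status_written & 0xe);
}

/* Mirror of siox_device_counter_error(): any nonzero counter bit is an error. */
static int counter_error(uint8_t clean)
{
	return (clean & SIOX_STATUS_COUNTER) != 0;
}
```

As in the comment in `siox_status_clean()`: after writing 0x0a (counter 0b101), a healthy device reads back counter 0b010, so `status_clean(0x04, 0x0a)` is 0 and no counter error is flagged.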
+49
drivers/siox/siox.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2015-2017 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de> 4 + */ 5 + #include <linux/kernel.h> 6 + #include <linux/kthread.h> 7 + #include <linux/siox.h> 8 + 9 + #define to_siox_master(_dev) container_of((_dev), struct siox_master, dev) 10 + struct siox_master { 11 + /* these fields should be initialized by the driver */ 12 + int busno; 13 + int (*pushpull)(struct siox_master *smaster, 14 + size_t setbuf_len, const u8 setbuf[], 15 + size_t getbuf_len, u8 getbuf[]); 16 + 17 + /* might be initialized by the driver, if 0 it is set to HZ / 40 */ 18 + unsigned long poll_interval; /* in jiffies */ 19 + 20 + /* framework private stuff */ 21 + struct mutex lock; 22 + bool active; 23 + struct module *owner; 24 + struct device dev; 25 + unsigned int num_devices; 26 + struct list_head devices; 27 + 28 + size_t setbuf_len, getbuf_len; 29 + size_t buf_len; 30 + u8 *buf; 31 + u8 status; 32 + 33 + unsigned long last_poll; 34 + struct task_struct *poll_thread; 35 + }; 36 + 37 + static inline void *siox_master_get_devdata(struct siox_master *smaster) 38 + { 39 + return dev_get_drvdata(&smaster->dev); 40 + } 41 + 42 + struct siox_master *siox_master_alloc(struct device *dev, size_t size); 43 + static inline void siox_master_put(struct siox_master *smaster) 44 + { 45 + put_device(&smaster->dev); 46 + } 47 + 48 + int siox_master_register(struct siox_master *smaster); 49 + void siox_master_unregister(struct siox_master *smaster);
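The `to_siox_master()` macro in siox.h recovers the enclosing `struct siox_master` from an embedded `struct device` via `container_of()`. A plain-C illustration of that pointer arithmetic (the `demo_*` types are simplified stand-ins, not the kernel structures):

```c
#include <stddef.h>

/* Userspace equivalent of the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct demo_device { int id; };

/* Mimics struct siox_master embedding struct device dev. */
struct demo_master {
	int busno;
	struct demo_device dev;
};

#define to_demo_master(d) container_of((d), struct demo_master, dev)

/* Round-trip: take a pointer to the embedded member, get the parent back. */
static int demo_roundtrip(void)
{
	struct demo_master m = { .busno = 3 };
	struct demo_device *d = &m.dev;

	return to_demo_master(d)->busno;
}
```

This is why `siox_master_alloc()` can hand out `&smaster->dev` to the driver core and later recover `smaster` in the sysfs show/store callbacks.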
+24
drivers/slimbus/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # 3 + # SLIMbus driver configuration 4 + # 5 + menuconfig SLIMBUS 6 + tristate "SLIMbus support" 7 + help 8 + SLIMbus is a standard interface between a System-on-Chip and audio 9 + codecs and other peripheral components in typical embedded systems. 10 + 11 + If unsure, choose N. 12 + 13 + if SLIMBUS 14 + 15 + # SLIMbus controllers 16 + config SLIM_QCOM_CTRL 17 + tristate "Qualcomm SLIMbus Manager Component" 18 + depends on SLIMBUS 19 + depends on HAS_IOMEM 20 + help 21 + Select this driver if Qualcomm's SLIMbus Manager Component is 22 + programmed using the Linux kernel. 23 + 24 + endif
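Given the two symbols above, a kernel configuration fragment enabling the new subsystem and the Qualcomm controller as modules might look like this (a sketch; both options are tristate, so `=y` would also be valid where the dependencies allow it):

```
CONFIG_SLIMBUS=m
CONFIG_SLIM_QCOM_CTRL=m
```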
+10
drivers/slimbus/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # 3 + # Makefile for kernel SLIMbus framework. 4 + # 5 + obj-$(CONFIG_SLIMBUS) += slimbus.o 6 + slimbus-y := core.o messaging.o sched.o 7 + 8 + #Controllers 9 + obj-$(CONFIG_SLIM_QCOM_CTRL) += slim-qcom-ctrl.o 10 + slim-qcom-ctrl-y := qcom-ctrl.o
+480
drivers/slimbus/core.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2011-2017, The Linux Foundation 4 + */ 5 + 6 + #include <linux/kernel.h> 7 + #include <linux/errno.h> 8 + #include <linux/slab.h> 9 + #include <linux/init.h> 10 + #include <linux/idr.h> 11 + #include <linux/of.h> 12 + #include <linux/pm_runtime.h> 13 + #include <linux/slimbus.h> 14 + #include "slimbus.h" 15 + 16 + static DEFINE_IDA(ctrl_ida); 17 + 18 + static const struct slim_device_id *slim_match(const struct slim_device_id *id, 19 + const struct slim_device *sbdev) 20 + { 21 + while (id->manf_id != 0 || id->prod_code != 0) { 22 + if (id->manf_id == sbdev->e_addr.manf_id && 23 + id->prod_code == sbdev->e_addr.prod_code) 24 + return id; 25 + id++; 26 + } 27 + return NULL; 28 + } 29 + 30 + static int slim_device_match(struct device *dev, struct device_driver *drv) 31 + { 32 + struct slim_device *sbdev = to_slim_device(dev); 33 + struct slim_driver *sbdrv = to_slim_driver(drv); 34 + 35 + return !!slim_match(sbdrv->id_table, sbdev); 36 + } 37 + 38 + static int slim_device_probe(struct device *dev) 39 + { 40 + struct slim_device *sbdev = to_slim_device(dev); 41 + struct slim_driver *sbdrv = to_slim_driver(dev->driver); 42 + 43 + return sbdrv->probe(sbdev); 44 + } 45 + 46 + static int slim_device_remove(struct device *dev) 47 + { 48 + struct slim_device *sbdev = to_slim_device(dev); 49 + struct slim_driver *sbdrv; 50 + 51 + if (dev->driver) { 52 + sbdrv = to_slim_driver(dev->driver); 53 + if (sbdrv->remove) 54 + sbdrv->remove(sbdev); 55 + } 56 + 57 + return 0; 58 + } 59 + 60 + struct bus_type slimbus_bus = { 61 + .name = "slimbus", 62 + .match = slim_device_match, 63 + .probe = slim_device_probe, 64 + .remove = slim_device_remove, 65 + }; 66 + EXPORT_SYMBOL_GPL(slimbus_bus); 67 + 68 + /* 69 + * __slim_driver_register() - Client driver registration with SLIMbus 70 + * 71 + * @drv: Client driver to be associated with client-device. 
72 + * @owner: owning module/driver 73 + * 74 + * This API will register the client driver with the SLIMbus. 75 + * It is called from the driver's module-init function. 76 + */ 77 + int __slim_driver_register(struct slim_driver *drv, struct module *owner) 78 + { 79 + /* ID table and probe are mandatory */ 80 + if (!drv->id_table || !drv->probe) 81 + return -EINVAL; 82 + 83 + drv->driver.bus = &slimbus_bus; 84 + drv->driver.owner = owner; 85 + 86 + return driver_register(&drv->driver); 87 + } 88 + EXPORT_SYMBOL_GPL(__slim_driver_register); 89 + 90 + /* 91 + * slim_driver_unregister() - Undo the effect of slim_driver_register() 92 + * 93 + * @drv: Client driver to be unregistered 94 + */ 95 + void slim_driver_unregister(struct slim_driver *drv) 96 + { 97 + driver_unregister(&drv->driver); 98 + } 99 + EXPORT_SYMBOL_GPL(slim_driver_unregister); 100 + 101 + static void slim_dev_release(struct device *dev) 102 + { 103 + struct slim_device *sbdev = to_slim_device(dev); 104 + 105 + kfree(sbdev); 106 + } 107 + 108 + static int slim_add_device(struct slim_controller *ctrl, 109 + struct slim_device *sbdev, 110 + struct device_node *node) 111 + { 112 + sbdev->dev.bus = &slimbus_bus; 113 + sbdev->dev.parent = ctrl->dev; 114 + sbdev->dev.release = slim_dev_release; 115 + sbdev->dev.driver = NULL; 116 + sbdev->ctrl = ctrl; 117 + 118 + if (node) 119 + sbdev->dev.of_node = of_node_get(node); 120 + 121 + dev_set_name(&sbdev->dev, "%x:%x:%x:%x", 122 + sbdev->e_addr.manf_id, 123 + sbdev->e_addr.prod_code, 124 + sbdev->e_addr.dev_index, 125 + sbdev->e_addr.instance); 126 + 127 + return device_register(&sbdev->dev); 128 + } 129 + 130 + static struct slim_device *slim_alloc_device(struct slim_controller *ctrl, 131 + struct slim_eaddr *eaddr, 132 + struct device_node *node) 133 + { 134 + struct slim_device *sbdev; 135 + int ret; 136 + 137 + sbdev = kzalloc(sizeof(*sbdev), GFP_KERNEL); 138 + if (!sbdev) 139 + return NULL; 140 + 141 + sbdev->e_addr = *eaddr; 142 + ret = slim_add_device(ctrl, 
sbdev, node); 143 + if (ret) { 144 + kfree(sbdev); 145 + return NULL; 146 + } 147 + 148 + return sbdev; 149 + } 150 + 151 + static void of_register_slim_devices(struct slim_controller *ctrl) 152 + { 153 + struct device *dev = ctrl->dev; 154 + struct device_node *node; 155 + 156 + if (!ctrl->dev->of_node) 157 + return; 158 + 159 + for_each_child_of_node(ctrl->dev->of_node, node) { 160 + struct slim_device *sbdev; 161 + struct slim_eaddr e_addr; 162 + const char *compat = NULL; 163 + int reg[2], ret; 164 + int manf_id, prod_code; 165 + 166 + compat = of_get_property(node, "compatible", NULL); 167 + if (!compat) 168 + continue; 169 + 170 + ret = sscanf(compat, "slim%x,%x", &manf_id, &prod_code); 171 + if (ret != 2) { 172 + dev_err(dev, "Manf ID & Product code not found %s\n", 173 + compat); 174 + continue; 175 + } 176 + 177 + ret = of_property_read_u32_array(node, "reg", reg, 2); 178 + if (ret) { 179 + dev_err(dev, "Device and Instance id not found:%d\n", 180 + ret); 181 + continue; 182 + } 183 + 184 + e_addr.dev_index = reg[0]; 185 + e_addr.instance = reg[1]; 186 + e_addr.manf_id = manf_id; 187 + e_addr.prod_code = prod_code; 188 + 189 + sbdev = slim_alloc_device(ctrl, &e_addr, node); 190 + if (!sbdev) 191 + continue; 192 + } 193 + } 194 + 195 + /* 196 + * slim_register_controller() - Controller bring-up and registration. 197 + * 198 + * @ctrl: Controller to be registered. 199 + * 200 + * A controller is registered with the framework using this API. 
201 + * If devices on a controller were registered before controller, 202 + * this will make sure that they get probed when controller is up 203 + */ 204 + int slim_register_controller(struct slim_controller *ctrl) 205 + { 206 + int id; 207 + 208 + id = ida_simple_get(&ctrl_ida, 0, 0, GFP_KERNEL); 209 + if (id < 0) 210 + return id; 211 + 212 + ctrl->id = id; 213 + 214 + if (!ctrl->min_cg) 215 + ctrl->min_cg = SLIM_MIN_CLK_GEAR; 216 + if (!ctrl->max_cg) 217 + ctrl->max_cg = SLIM_MAX_CLK_GEAR; 218 + 219 + ida_init(&ctrl->laddr_ida); 220 + idr_init(&ctrl->tid_idr); 221 + mutex_init(&ctrl->lock); 222 + mutex_init(&ctrl->sched.m_reconf); 223 + init_completion(&ctrl->sched.pause_comp); 224 + 225 + dev_dbg(ctrl->dev, "Bus [%s] registered:dev:%p\n", 226 + ctrl->name, ctrl->dev); 227 + 228 + of_register_slim_devices(ctrl); 229 + 230 + return 0; 231 + } 232 + EXPORT_SYMBOL_GPL(slim_register_controller); 233 + 234 + /* slim_remove_device: Remove the effect of slim_add_device() */ 235 + static void slim_remove_device(struct slim_device *sbdev) 236 + { 237 + device_unregister(&sbdev->dev); 238 + } 239 + 240 + static int slim_ctrl_remove_device(struct device *dev, void *null) 241 + { 242 + slim_remove_device(to_slim_device(dev)); 243 + return 0; 244 + } 245 + 246 + /** 247 + * slim_unregister_controller() - Controller tear-down. 248 + * 249 + * @ctrl: Controller to tear-down. 
250 + */ 251 + int slim_unregister_controller(struct slim_controller *ctrl) 252 + { 253 + /* Remove all clients */ 254 + device_for_each_child(ctrl->dev, NULL, slim_ctrl_remove_device); 255 + /* Enter Clock Pause */ 256 + slim_ctrl_clk_pause(ctrl, false, 0); 257 + ida_simple_remove(&ctrl_ida, ctrl->id); 258 + 259 + return 0; 260 + } 261 + EXPORT_SYMBOL_GPL(slim_unregister_controller); 262 + 263 + static void slim_device_update_status(struct slim_device *sbdev, 264 + enum slim_device_status status) 265 + { 266 + struct slim_driver *sbdrv; 267 + 268 + if (sbdev->status == status) 269 + return; 270 + 271 + sbdev->status = status; 272 + if (!sbdev->dev.driver) 273 + return; 274 + 275 + sbdrv = to_slim_driver(sbdev->dev.driver); 276 + if (sbdrv->device_status) 277 + sbdrv->device_status(sbdev, sbdev->status); 278 + } 279 + 280 + /** 281 + * slim_report_absent() - Controller calls this function when a device 282 + * reports absent, OR when the device cannot be communicated with 283 + * 284 + * @sbdev: Device that cannot be reached, or sent report absent 285 + */ 286 + void slim_report_absent(struct slim_device *sbdev) 287 + { 288 + struct slim_controller *ctrl = sbdev->ctrl; 289 + 290 + if (!ctrl) 291 + return; 292 + 293 + /* invalidate logical addresses */ 294 + mutex_lock(&ctrl->lock); 295 + sbdev->is_laddr_valid = false; 296 + mutex_unlock(&ctrl->lock); 297 + 298 + ida_simple_remove(&ctrl->laddr_ida, sbdev->laddr); 299 + slim_device_update_status(sbdev, SLIM_DEVICE_STATUS_DOWN); 300 + } 301 + EXPORT_SYMBOL_GPL(slim_report_absent); 302 + 303 + static bool slim_eaddr_equal(struct slim_eaddr *a, struct slim_eaddr *b) 304 + { 305 + return (a->manf_id == b->manf_id && 306 + a->prod_code == b->prod_code && 307 + a->dev_index == b->dev_index && 308 + a->instance == b->instance); 309 + } 310 + 311 + static int slim_match_dev(struct device *dev, void *data) 312 + { 313 + struct slim_eaddr *e_addr = data; 314 + struct slim_device *sbdev = to_slim_device(dev); 315 + 316 + return 
slim_eaddr_equal(&sbdev->e_addr, e_addr); 317 + } 318 + 319 + static struct slim_device *find_slim_device(struct slim_controller *ctrl, 320 + struct slim_eaddr *eaddr) 321 + { 322 + struct slim_device *sbdev; 323 + struct device *dev; 324 + 325 + dev = device_find_child(ctrl->dev, eaddr, slim_match_dev); 326 + if (dev) { 327 + sbdev = to_slim_device(dev); 328 + return sbdev; 329 + } 330 + 331 + return NULL; 332 + } 333 + 334 + /** 335 + * slim_get_device() - get handle to a device. 336 + * 337 + * @ctrl: Controller on which this device will be added/queried 338 + * @e_addr: Enumeration address of the device to be queried 339 + * 340 + * Return: pointer to a device if it has already reported. Creates a new 341 + * device and returns pointer to it if the device has not yet enumerated. 342 + */ 343 + struct slim_device *slim_get_device(struct slim_controller *ctrl, 344 + struct slim_eaddr *e_addr) 345 + { 346 + struct slim_device *sbdev; 347 + 348 + sbdev = find_slim_device(ctrl, e_addr); 349 + if (!sbdev) { 350 + sbdev = slim_alloc_device(ctrl, e_addr, NULL); 351 + if (!sbdev) 352 + return ERR_PTR(-ENOMEM); 353 + } 354 + 355 + return sbdev; 356 + } 357 + EXPORT_SYMBOL_GPL(slim_get_device); 358 + 359 + static int slim_device_alloc_laddr(struct slim_device *sbdev, 360 + bool report_present) 361 + { 362 + struct slim_controller *ctrl = sbdev->ctrl; 363 + u8 laddr; 364 + int ret; 365 + 366 + mutex_lock(&ctrl->lock); 367 + if (ctrl->get_laddr) { 368 + ret = ctrl->get_laddr(ctrl, &sbdev->e_addr, &laddr); 369 + if (ret < 0) 370 + goto err; 371 + } else if (report_present) { 372 + ret = ida_simple_get(&ctrl->laddr_ida, 373 + 0, SLIM_LA_MANAGER - 1, GFP_KERNEL); 374 + if (ret < 0) 375 + goto err; 376 + 377 + laddr = ret; 378 + } else { 379 + ret = -EINVAL; 380 + goto err; 381 + } 382 + 383 + if (ctrl->set_laddr) { 384 + ret = ctrl->set_laddr(ctrl, &sbdev->e_addr, laddr); 385 + if (ret) { 386 + ret = -EINVAL; 387 + goto err; 388 + } 389 + } 390 + 391 + sbdev->laddr = laddr; 
392 + sbdev->is_laddr_valid = true; 393 + 394 + slim_device_update_status(sbdev, SLIM_DEVICE_STATUS_UP); 395 + 396 + dev_dbg(ctrl->dev, "setting slimbus l-addr:%x, ea:%x,%x,%x,%x\n", 397 + laddr, sbdev->e_addr.manf_id, sbdev->e_addr.prod_code, 398 + sbdev->e_addr.dev_index, sbdev->e_addr.instance); 399 + 400 + err: 401 + mutex_unlock(&ctrl->lock); 402 + return ret; 403 + 404 + } 405 + 406 + /** 407 + * slim_device_report_present() - Report enumerated device. 408 + * 409 + * @ctrl: Controller with which device is enumerated. 410 + * @e_addr: Enumeration address of the device. 411 + * @laddr: Return logical address (if valid flag is false) 412 + * 413 + * Called by controller in response to REPORT_PRESENT. Framework will assign 414 + * a logical address to this enumeration address. 415 + * Function returns -EXFULL to indicate that all logical addresses are already 416 + * taken. 417 + */ 418 + int slim_device_report_present(struct slim_controller *ctrl, 419 + struct slim_eaddr *e_addr, u8 *laddr) 420 + { 421 + struct slim_device *sbdev; 422 + int ret; 423 + 424 + ret = pm_runtime_get_sync(ctrl->dev); 425 + 426 + if (ctrl->sched.clk_state != SLIM_CLK_ACTIVE) { 427 + dev_err(ctrl->dev, "slim ctrl not active,state:%d, ret:%d\n", 428 + ctrl->sched.clk_state, ret); 429 + goto slimbus_not_active; 430 + } 431 + 432 + sbdev = slim_get_device(ctrl, e_addr); 433 + if (IS_ERR(sbdev)) 434 + return -ENODEV; 435 + 436 + if (sbdev->is_laddr_valid) { 437 + *laddr = sbdev->laddr; 438 + return 0; 439 + } 440 + 441 + ret = slim_device_alloc_laddr(sbdev, true); 442 + 443 + slimbus_not_active: 444 + pm_runtime_mark_last_busy(ctrl->dev); 445 + pm_runtime_put_autosuspend(ctrl->dev); 446 + return ret; 447 + } 448 + EXPORT_SYMBOL_GPL(slim_device_report_present); 449 + 450 + /** 451 + * slim_get_logical_addr() - get/allocate logical address of a SLIMbus device. 452 + * 453 + * @sbdev: client handle requesting the address. 
454 + * 455 + * Return: zero if a logical address is valid or a new logical address 456 + * has been assigned. error code in case of error. 457 + */ 458 + int slim_get_logical_addr(struct slim_device *sbdev) 459 + { 460 + if (!sbdev->is_laddr_valid) 461 + return slim_device_alloc_laddr(sbdev, false); 462 + 463 + return 0; 464 + } 465 + EXPORT_SYMBOL_GPL(slim_get_logical_addr); 466 + 467 + static void __exit slimbus_exit(void) 468 + { 469 + bus_unregister(&slimbus_bus); 470 + } 471 + module_exit(slimbus_exit); 472 + 473 + static int __init slimbus_init(void) 474 + { 475 + return bus_register(&slimbus_bus); 476 + } 477 + postcore_initcall(slimbus_init); 478 + 479 + MODULE_LICENSE("GPL v2"); 480 + MODULE_DESCRIPTION("SLIMbus core");
+332
drivers/slimbus/messaging.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2011-2017, The Linux Foundation 4 + */ 5 + 6 + #include <linux/slab.h> 7 + #include <linux/pm_runtime.h> 8 + #include "slimbus.h" 9 + 10 + /** 11 + * slim_msg_response() - Deliver Message response received from a device to the 12 + * framework. 13 + * 14 + * @ctrl: Controller handle 15 + * @reply: Reply received from the device 16 + * @len: Length of the reply 17 + * @tid: Transaction ID received with which framework can associate reply. 18 + * 19 + * Called by controller to inform framework about the response received. 20 + * This helps in making the API asynchronous, and controller-driver doesn't need 21 + * to manage 1 more table other than the one managed by framework mapping TID 22 + * with buffers 23 + */ 24 + void slim_msg_response(struct slim_controller *ctrl, u8 *reply, u8 tid, u8 len) 25 + { 26 + struct slim_msg_txn *txn; 27 + struct slim_val_inf *msg; 28 + unsigned long flags; 29 + 30 + spin_lock_irqsave(&ctrl->txn_lock, flags); 31 + txn = idr_find(&ctrl->tid_idr, tid); 32 + if (txn == NULL) { 33 + spin_unlock_irqrestore(&ctrl->txn_lock, flags); 34 + return; 35 + } 36 + 37 + msg = txn->msg; 38 + if (msg == NULL || msg->rbuf == NULL) { 39 + dev_err(ctrl->dev, "Got response to invalid TID:%d, len:%d\n", 40 + tid, len); 41 + spin_unlock_irqrestore(&ctrl->txn_lock, flags); 42 + return; 43 + } 44 + 45 + idr_remove(&ctrl->tid_idr, tid); 46 + spin_unlock_irqrestore(&ctrl->txn_lock, flags); 47 + 48 + memcpy(msg->rbuf, reply, len); 49 + if (txn->comp) 50 + complete(txn->comp); 51 + 52 + /* Remove runtime-pm vote now that response was received for TID txn */ 53 + pm_runtime_mark_last_busy(ctrl->dev); 54 + pm_runtime_put_autosuspend(ctrl->dev); 55 + } 56 + EXPORT_SYMBOL_GPL(slim_msg_response); 57 + 58 + /** 59 + * slim_do_transfer() - Process a SLIMbus-messaging transaction 60 + * 61 + * @ctrl: Controller handle 62 + * @txn: Transaction to be sent over SLIMbus 63 + * 64 + * Called by controller 
to transmit messaging transactions not dealing with 65 + Interface/Value elements. (e.g. transmitting a message to assign logical 66 + address to a slave device). 67 + * 68 + * Return: -ETIMEDOUT: If transmission of this message timed out 69 + * (e.g. due to bus lines not being clocked or driven by controller) 70 + */ 71 + int slim_do_transfer(struct slim_controller *ctrl, struct slim_msg_txn *txn) 72 + { 73 + DECLARE_COMPLETION_ONSTACK(done); 74 + bool need_tid = false, clk_pause_msg = false; 75 + unsigned long flags; 76 + int ret, tid, timeout; 77 + 78 + /* 79 + * do not vote for runtime-PM if the transactions are part of clock 80 + * pause sequence 81 + */ 82 + if (ctrl->sched.clk_state == SLIM_CLK_ENTERING_PAUSE && 83 + (txn->mt == SLIM_MSG_MT_CORE && 84 + txn->mc >= SLIM_MSG_MC_BEGIN_RECONFIGURATION && 85 + txn->mc <= SLIM_MSG_MC_RECONFIGURE_NOW)) 86 + clk_pause_msg = true; 87 + 88 + if (!clk_pause_msg) { 89 + ret = pm_runtime_get_sync(ctrl->dev); 90 + if (ctrl->sched.clk_state != SLIM_CLK_ACTIVE) { 91 + dev_err(ctrl->dev, "ctrl wrong state:%d, ret:%d\n", 92 + ctrl->sched.clk_state, ret); 93 + goto slim_xfer_err; 94 + } 95 + } 96 + 97 + need_tid = slim_tid_txn(txn->mt, txn->mc); 98 + 99 + if (need_tid) { 100 + spin_lock_irqsave(&ctrl->txn_lock, flags); 101 + tid = idr_alloc(&ctrl->tid_idr, txn, 0, 102 + SLIM_MAX_TIDS, GFP_ATOMIC); 103 + txn->tid = tid; 104 + 105 + if (!txn->msg->comp) 106 + txn->comp = &done; 107 + else 108 + txn->comp = txn->msg->comp; 109 + 110 + spin_unlock_irqrestore(&ctrl->txn_lock, flags); 111 + 112 + if (tid < 0) 113 + return tid; 114 + } 115 + 116 + ret = ctrl->xfer_msg(ctrl, txn); 117 + 118 + if (!ret && need_tid && !txn->msg->comp) { 119 + unsigned long ms = txn->rl + HZ; 120 + 121 + timeout = wait_for_completion_timeout(txn->comp, 122 + msecs_to_jiffies(ms)); 123 + if (!timeout) { 124 + ret = -ETIMEDOUT; 125 + spin_lock_irqsave(&ctrl->txn_lock, flags); 126 + idr_remove(&ctrl->tid_idr, tid); 127 + spin_unlock_irqrestore(&ctrl->txn_lock,
flags); 128 + } 129 + } 130 + 131 + if (ret) 132 + dev_err(ctrl->dev, "Tx:MT:0x%x, MC:0x%x, LA:0x%x failed:%d\n", 133 + txn->mt, txn->mc, txn->la, ret); 134 + 135 + slim_xfer_err: 136 + if (!clk_pause_msg && (!need_tid || ret == -ETIMEDOUT)) { 137 + /* 138 + * remove runtime-pm vote if this was TX only, or 139 + * if there was error during this transaction 140 + */ 141 + pm_runtime_mark_last_busy(ctrl->dev); 142 + pm_runtime_put_autosuspend(ctrl->dev); 143 + } 144 + return ret; 145 + } 146 + EXPORT_SYMBOL_GPL(slim_do_transfer); 147 + 148 + static int slim_val_inf_sanity(struct slim_controller *ctrl, 149 + struct slim_val_inf *msg, u8 mc) 150 + { 151 + if (!msg || msg->num_bytes > 16 || 152 + (msg->start_offset + msg->num_bytes) > 0xC00) 153 + goto reterr; 154 + switch (mc) { 155 + case SLIM_MSG_MC_REQUEST_VALUE: 156 + case SLIM_MSG_MC_REQUEST_INFORMATION: 157 + if (msg->rbuf != NULL) 158 + return 0; 159 + break; 160 + 161 + case SLIM_MSG_MC_CHANGE_VALUE: 162 + case SLIM_MSG_MC_CLEAR_INFORMATION: 163 + if (msg->wbuf != NULL) 164 + return 0; 165 + break; 166 + 167 + case SLIM_MSG_MC_REQUEST_CHANGE_VALUE: 168 + case SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION: 169 + if (msg->rbuf != NULL && msg->wbuf != NULL) 170 + return 0; 171 + break; 172 + } 173 + reterr: 174 + if (msg) 175 + dev_err(ctrl->dev, "Sanity check failed:msg:offset:0x%x, mc:%d\n", 176 + msg->start_offset, mc); 177 + return -EINVAL; 178 + } 179 + 180 + static u16 slim_slicesize(int code) 181 + { 182 + static const u8 sizetocode[16] = { 183 + 0, 1, 2, 3, 3, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7 184 + }; 185 + 186 + code = clamp(code, 1, (int)ARRAY_SIZE(sizetocode)); 187 + 188 + return sizetocode[code - 1]; 189 + } 190 + 191 + /** 192 + * slim_xfer_msg() - Transfer a value info message on slim device 193 + * 194 + * @sbdev: slim device to which this msg has to be transferred 195 + * @msg: value info message pointer 196 + * @mc: message code of the message 197 + * 198 + * Called by drivers which want to transfer a value or info
elements. 199 + * 200 + * Return: -ETIMEDOUT: If transmission of this message timed out 201 + */ 202 + int slim_xfer_msg(struct slim_device *sbdev, struct slim_val_inf *msg, 203 + u8 mc) 204 + { 205 + DEFINE_SLIM_LDEST_TXN(txn_stack, mc, 6, sbdev->laddr, msg); 206 + struct slim_msg_txn *txn = &txn_stack; 207 + struct slim_controller *ctrl = sbdev->ctrl; 208 + int ret; 209 + u16 sl; 210 + 211 + if (!ctrl) 212 + return -EINVAL; 213 + 214 + ret = slim_val_inf_sanity(ctrl, msg, mc); 215 + if (ret) 216 + return ret; 217 + 218 + sl = slim_slicesize(msg->num_bytes); 219 + 220 + dev_dbg(ctrl->dev, "SB xfer msg:os:%x, len:%d, MC:%x, sl:%x\n", 221 + msg->start_offset, msg->num_bytes, mc, sl); 222 + 223 + txn->ec = ((sl | (1 << 3)) | ((msg->start_offset & 0xFFF) << 4)); 224 + 225 + switch (mc) { 226 + case SLIM_MSG_MC_REQUEST_CHANGE_VALUE: 227 + case SLIM_MSG_MC_CHANGE_VALUE: 228 + case SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION: 229 + case SLIM_MSG_MC_CLEAR_INFORMATION: 230 + txn->rl += msg->num_bytes; 231 + default: 232 + break; 233 + } 234 + 235 + if (slim_tid_txn(txn->mt, txn->mc)) 236 + txn->rl++; 237 + 238 + return slim_do_transfer(ctrl, txn); 239 + } 240 + EXPORT_SYMBOL_GPL(slim_xfer_msg); 241 + 242 + static void slim_fill_msg(struct slim_val_inf *msg, u32 addr, 243 + size_t count, u8 *rbuf, u8 *wbuf) 244 + { 245 + msg->start_offset = addr; 246 + msg->num_bytes = count; 247 + msg->rbuf = rbuf; 248 + msg->wbuf = wbuf; 249 + } 250 + 251 + /** 252 + * slim_read() - Read SLIMbus value element 253 + * 254 + * @sdev: client handle. 255 + * @addr: address of value element to read. 256 + * @count: number of bytes to read. Maximum bytes allowed are 16. 257 + * @val: will return what the value element value was 258 + * 259 + * Return: -EINVAL for Invalid parameters, -ETIMEDOUT If transmission of 260 + * this message timed out (e.g. 
due to bus lines not being clocked 261 + * or driven by controller) 262 + */ 263 + int slim_read(struct slim_device *sdev, u32 addr, size_t count, u8 *val) 264 + { 265 + struct slim_val_inf msg; 266 + 267 + slim_fill_msg(&msg, addr, count, val, NULL); 268 + 269 + return slim_xfer_msg(sdev, &msg, SLIM_MSG_MC_REQUEST_VALUE); 270 + } 271 + EXPORT_SYMBOL_GPL(slim_read); 272 + 273 + /** 274 + * slim_readb() - Read byte from SLIMbus value element 275 + * 276 + * @sdev: client handle. 277 + * @addr: address in the value element to read. 278 + * 279 + * Return: byte value of value element. 280 + */ 281 + int slim_readb(struct slim_device *sdev, u32 addr) 282 + { 283 + int ret; 284 + u8 buf; 285 + 286 + ret = slim_read(sdev, addr, 1, &buf); 287 + if (ret < 0) 288 + return ret; 289 + else 290 + return buf; 291 + } 292 + EXPORT_SYMBOL_GPL(slim_readb); 293 + 294 + /** 295 + * slim_write() - Write SLIMbus value element 296 + * 297 + * @sdev: client handle. 298 + * @addr: address in the value element to write. 299 + * @count: number of bytes to write. Maximum bytes allowed are 16. 300 + * @val: value to write to value element 301 + * 302 + * Return: -EINVAL for Invalid parameters, -ETIMEDOUT If transmission of 303 + * this message timed out (e.g. due to bus lines not being clocked 304 + * or driven by controller) 305 + */ 306 + int slim_write(struct slim_device *sdev, u32 addr, size_t count, u8 *val) 307 + { 308 + struct slim_val_inf msg; 309 + 310 + slim_fill_msg(&msg, addr, count, val, NULL); 311 + 312 + return slim_xfer_msg(sdev, &msg, SLIM_MSG_MC_CHANGE_VALUE); 313 + } 314 + EXPORT_SYMBOL_GPL(slim_write); 315 + 316 + /** 317 + * slim_writeb() - Write byte to SLIMbus value element 318 + * 319 + * @sdev: client handle. 320 + * @addr: address of value element to write. 321 + * @value: value to write to value element 322 + * 323 + * Return: -EINVAL for Invalid parameters, -ETIMEDOUT If transmission of 324 + * this message timed out (e.g. 
due to bus lines not being clocked 325 + * or driven by controller) 326 + * 327 + */ 328 + int slim_writeb(struct slim_device *sdev, u32 addr, u8 value) 329 + { 330 + return slim_write(sdev, addr, 1, &value); 331 + } 332 + EXPORT_SYMBOL_GPL(slim_writeb);
+747
drivers/slimbus/qcom-ctrl.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2011-2017, The Linux Foundation 4 + */ 5 + 6 + #include <linux/irq.h> 7 + #include <linux/kernel.h> 8 + #include <linux/init.h> 9 + #include <linux/slab.h> 10 + #include <linux/io.h> 11 + #include <linux/interrupt.h> 12 + #include <linux/platform_device.h> 13 + #include <linux/delay.h> 14 + #include <linux/clk.h> 15 + #include <linux/of.h> 16 + #include <linux/pm_runtime.h> 17 + #include "slimbus.h" 18 + 19 + /* Manager registers */ 20 + #define MGR_CFG 0x200 21 + #define MGR_STATUS 0x204 22 + #define MGR_INT_EN 0x210 23 + #define MGR_INT_STAT 0x214 24 + #define MGR_INT_CLR 0x218 25 + #define MGR_TX_MSG 0x230 26 + #define MGR_RX_MSG 0x270 27 + #define MGR_IE_STAT 0x2F0 28 + #define MGR_VE_STAT 0x300 29 + #define MGR_CFG_ENABLE 1 30 + 31 + /* Framer registers */ 32 + #define FRM_CFG 0x400 33 + #define FRM_STAT 0x404 34 + #define FRM_INT_EN 0x410 35 + #define FRM_INT_STAT 0x414 36 + #define FRM_INT_CLR 0x418 37 + #define FRM_WAKEUP 0x41C 38 + #define FRM_CLKCTL_DONE 0x420 39 + #define FRM_IE_STAT 0x430 40 + #define FRM_VE_STAT 0x440 41 + 42 + /* Interface registers */ 43 + #define INTF_CFG 0x600 44 + #define INTF_STAT 0x604 45 + #define INTF_INT_EN 0x610 46 + #define INTF_INT_STAT 0x614 47 + #define INTF_INT_CLR 0x618 48 + #define INTF_IE_STAT 0x630 49 + #define INTF_VE_STAT 0x640 50 + 51 + /* Interrupt status bits */ 52 + #define MGR_INT_TX_NACKED_2 BIT(25) 53 + #define MGR_INT_MSG_BUF_CONTE BIT(26) 54 + #define MGR_INT_RX_MSG_RCVD BIT(30) 55 + #define MGR_INT_TX_MSG_SENT BIT(31) 56 + 57 + /* Framer config register settings */ 58 + #define FRM_ACTIVE 1 59 + #define CLK_GEAR 7 60 + #define ROOT_FREQ 11 61 + #define REF_CLK_GEAR 15 62 + #define INTR_WAKE 19 63 + 64 + #define SLIM_MSG_ASM_FIRST_WORD(l, mt, mc, dt, ad) \ 65 + ((l) | ((mt) << 5) | ((mc) << 8) | ((dt) << 15) | ((ad) << 16)) 66 + 67 + #define SLIM_ROOT_FREQ 24576000 68 + #define QCOM_SLIM_AUTOSUSPEND 1000 69 + 70 + /* MAX message size 
over control channel */ 71 + #define SLIM_MSGQ_BUF_LEN 40 72 + #define QCOM_TX_MSGS 2 73 + #define QCOM_RX_MSGS 8 74 + #define QCOM_BUF_ALLOC_RETRIES 10 75 + 76 + #define CFG_PORT(r, v) ((v) ? CFG_PORT_V2(r) : CFG_PORT_V1(r)) 77 + 78 + /* V2 Component registers */ 79 + #define CFG_PORT_V2(r) ((r ## _V2)) 80 + #define COMP_CFG_V2 4 81 + #define COMP_TRUST_CFG_V2 0x3000 82 + 83 + /* V1 Component registers */ 84 + #define CFG_PORT_V1(r) ((r ## _V1)) 85 + #define COMP_CFG_V1 0 86 + #define COMP_TRUST_CFG_V1 0x14 87 + 88 + /* Resource group info for manager, and non-ported generic device-components */ 89 + #define EE_MGR_RSC_GRP (1 << 10) 90 + #define EE_NGD_2 (2 << 6) 91 + #define EE_NGD_1 0 92 + 93 + struct slim_ctrl_buf { 94 + void *base; 95 + spinlock_t lock; 96 + int head; 97 + int tail; 98 + int sl_sz; 99 + int n; 100 + }; 101 + 102 + struct qcom_slim_ctrl { 103 + struct slim_controller ctrl; 104 + struct slim_framer framer; 105 + struct device *dev; 106 + void __iomem *base; 107 + void __iomem *slew_reg; 108 + 109 + struct slim_ctrl_buf rx; 110 + struct slim_ctrl_buf tx; 111 + 112 + struct completion **wr_comp; 113 + int irq; 114 + struct workqueue_struct *rxwq; 115 + struct work_struct wd; 116 + struct clk *rclk; 117 + struct clk *hclk; 118 + }; 119 + 120 + static void qcom_slim_queue_tx(struct qcom_slim_ctrl *ctrl, void *buf, 121 + u8 len, u32 tx_reg) 122 + { 123 + int count = (len + 3) >> 2; 124 + 125 + __iowrite32_copy(ctrl->base + tx_reg, buf, count); 126 + 127 + /* Ensure order of subsequent writes */ 128 + mb(); 129 + } 130 + 131 + static void *slim_alloc_rxbuf(struct qcom_slim_ctrl *ctrl) 132 + { 133 + unsigned long flags; 134 + int idx; 135 + 136 + spin_lock_irqsave(&ctrl->rx.lock, flags); 137 + if ((ctrl->rx.tail + 1) % ctrl->rx.n == ctrl->rx.head) { 138 + spin_unlock_irqrestore(&ctrl->rx.lock, flags); 139 + dev_err(ctrl->dev, "RX QUEUE full!"); 140 + return NULL; 141 + } 142 + idx = ctrl->rx.tail; 143 + ctrl->rx.tail = (ctrl->rx.tail + 1) % ctrl->rx.n;
144 + spin_unlock_irqrestore(&ctrl->rx.lock, flags); 145 + 146 + return ctrl->rx.base + (idx * ctrl->rx.sl_sz); 147 + } 148 + 149 + static void slim_ack_txn(struct qcom_slim_ctrl *ctrl, int err) 150 + { 151 + struct completion *comp; 152 + unsigned long flags; 153 + int idx; 154 + 155 + spin_lock_irqsave(&ctrl->tx.lock, flags); 156 + idx = ctrl->tx.head; 157 + ctrl->tx.head = (ctrl->tx.head + 1) % ctrl->tx.n; 158 + spin_unlock_irqrestore(&ctrl->tx.lock, flags); 159 + 160 + comp = ctrl->wr_comp[idx]; 161 + ctrl->wr_comp[idx] = NULL; 162 + 163 + complete(comp); 164 + } 165 + 166 + static irqreturn_t qcom_slim_handle_tx_irq(struct qcom_slim_ctrl *ctrl, 167 + u32 stat) 168 + { 169 + int err = 0; 170 + 171 + if (stat & MGR_INT_TX_MSG_SENT) 172 + writel_relaxed(MGR_INT_TX_MSG_SENT, 173 + ctrl->base + MGR_INT_CLR); 174 + 175 + if (stat & MGR_INT_TX_NACKED_2) { 176 + u32 mgr_stat = readl_relaxed(ctrl->base + MGR_STATUS); 177 + u32 mgr_ie_stat = readl_relaxed(ctrl->base + MGR_IE_STAT); 178 + u32 frm_stat = readl_relaxed(ctrl->base + FRM_STAT); 179 + u32 frm_cfg = readl_relaxed(ctrl->base + FRM_CFG); 180 + u32 frm_intr_stat = readl_relaxed(ctrl->base + FRM_INT_STAT); 181 + u32 frm_ie_stat = readl_relaxed(ctrl->base + FRM_IE_STAT); 182 + u32 intf_stat = readl_relaxed(ctrl->base + INTF_STAT); 183 + u32 intf_intr_stat = readl_relaxed(ctrl->base + INTF_INT_STAT); 184 + u32 intf_ie_stat = readl_relaxed(ctrl->base + INTF_IE_STAT); 185 + 186 + writel_relaxed(MGR_INT_TX_NACKED_2, ctrl->base + MGR_INT_CLR); 187 + 188 + dev_err(ctrl->dev, "TX Nack MGR:int:0x%x, stat:0x%x\n", 189 + stat, mgr_stat); 190 + dev_err(ctrl->dev, "TX Nack MGR:ie:0x%x\n", mgr_ie_stat); 191 + dev_err(ctrl->dev, "TX Nack FRM:int:0x%x, stat:0x%x\n", 192 + frm_intr_stat, frm_stat); 193 + dev_err(ctrl->dev, "TX Nack FRM:cfg:0x%x, ie:0x%x\n", 194 + frm_cfg, frm_ie_stat); 195 + dev_err(ctrl->dev, "TX Nack INTF:intr:0x%x, stat:0x%x\n", 196 + intf_intr_stat, intf_stat); 197 + dev_err(ctrl->dev, "TX Nack 
INTF:ie:0x%x\n", 198 + intf_ie_stat); 199 + err = -ENOTCONN; 200 + } 201 + 202 + slim_ack_txn(ctrl, err); 203 + 204 + return IRQ_HANDLED; 205 + } 206 + 207 + static irqreturn_t qcom_slim_handle_rx_irq(struct qcom_slim_ctrl *ctrl, 208 + u32 stat) 209 + { 210 + u32 *rx_buf, pkt[10]; 211 + bool q_rx = false; 212 + u8 mc, mt, len; 213 + 214 + pkt[0] = readl_relaxed(ctrl->base + MGR_RX_MSG); 215 + mt = SLIM_HEADER_GET_MT(pkt[0]); 216 + len = SLIM_HEADER_GET_RL(pkt[0]); 217 + mc = SLIM_HEADER_GET_MC(pkt[0]>>8); 218 + 219 + /* 220 + * this message cannot be handled by ISR, so 221 + * let work-queue handle it 222 + */ 223 + if (mt == SLIM_MSG_MT_CORE && mc == SLIM_MSG_MC_REPORT_PRESENT) { 224 + rx_buf = (u32 *)slim_alloc_rxbuf(ctrl); 225 + if (!rx_buf) { 226 + dev_err(ctrl->dev, "dropping RX:0x%x due to RX full\n", 227 + pkt[0]); 228 + goto rx_ret_irq; 229 + } 230 + rx_buf[0] = pkt[0]; 231 + 232 + } else { 233 + rx_buf = pkt; 234 + } 235 + 236 + __ioread32_copy(rx_buf + 1, ctrl->base + MGR_RX_MSG + 4, 237 + DIV_ROUND_UP(len, 4)); 238 + 239 + switch (mc) { 240 + 241 + case SLIM_MSG_MC_REPORT_PRESENT: 242 + q_rx = true; 243 + break; 244 + case SLIM_MSG_MC_REPLY_INFORMATION: 245 + case SLIM_MSG_MC_REPLY_VALUE: 246 + slim_msg_response(&ctrl->ctrl, (u8 *)(rx_buf + 1), 247 + (u8)(*rx_buf >> 24), (len - 4)); 248 + break; 249 + default: 250 + dev_err(ctrl->dev, "unsupported MC,%x MT:%x\n", 251 + mc, mt); 252 + break; 253 + } 254 + rx_ret_irq: 255 + writel(MGR_INT_RX_MSG_RCVD, ctrl->base + 256 + MGR_INT_CLR); 257 + if (q_rx) 258 + queue_work(ctrl->rxwq, &ctrl->wd); 259 + 260 + return IRQ_HANDLED; 261 + } 262 + 263 + static irqreturn_t qcom_slim_interrupt(int irq, void *d) 264 + { 265 + struct qcom_slim_ctrl *ctrl = d; 266 + u32 stat = readl_relaxed(ctrl->base + MGR_INT_STAT); 267 + int ret = IRQ_NONE; 268 + 269 + if (stat & MGR_INT_TX_MSG_SENT || stat & MGR_INT_TX_NACKED_2) 270 + ret = qcom_slim_handle_tx_irq(ctrl, stat); 271 + 272 + if (stat & MGR_INT_RX_MSG_RCVD) 273 + ret = 
qcom_slim_handle_rx_irq(ctrl, stat); 274 + 275 + return ret; 276 + } 277 + 278 + static int qcom_clk_pause_wakeup(struct slim_controller *sctrl) 279 + { 280 + struct qcom_slim_ctrl *ctrl = dev_get_drvdata(sctrl->dev); 281 + 282 + clk_prepare_enable(ctrl->hclk); 283 + clk_prepare_enable(ctrl->rclk); 284 + enable_irq(ctrl->irq); 285 + 286 + writel_relaxed(1, ctrl->base + FRM_WAKEUP); 287 + /* Make sure framer wakeup write goes through before ISR fires */ 288 + mb(); 289 + /* 290 + * HW Workaround: Currently, the slave reports lost-sync messages 291 + * after SLIMbus comes out of clock pause. 292 + * Transactions with the slave fail before the slave reports that message, 293 + * so give some time for that report to come in. 294 + * SLIMbus wakes up in clock gear 10 at 24.576MHz. With each superframe 295 + * being 250 usecs, we wait for 5-10 superframes here to ensure 296 + * we get the message 297 + */ 298 + usleep_range(1250, 2500); 299 + return 0; 300 + } 301 + 302 + static void *slim_alloc_txbuf(struct qcom_slim_ctrl *ctrl, 303 + struct slim_msg_txn *txn, 304 + struct completion *done) 305 + { 306 + unsigned long flags; 307 + int idx; 308 + 309 + spin_lock_irqsave(&ctrl->tx.lock, flags); 310 + if (((ctrl->tx.head + 1) % ctrl->tx.n) == ctrl->tx.tail) { 311 + spin_unlock_irqrestore(&ctrl->tx.lock, flags); 312 + dev_err(ctrl->dev, "controller TX buf unavailable"); 313 + return NULL; 314 + } 315 + idx = ctrl->tx.tail; 316 + ctrl->wr_comp[idx] = done; 317 + ctrl->tx.tail = (ctrl->tx.tail + 1) % ctrl->tx.n; 318 + 319 + spin_unlock_irqrestore(&ctrl->tx.lock, flags); 320 + 321 + return ctrl->tx.base + (idx * ctrl->tx.sl_sz); 322 + } 323 + 324 + 325 + static int qcom_xfer_msg(struct slim_controller *sctrl, 326 + struct slim_msg_txn *txn) 327 + { 328 + struct qcom_slim_ctrl *ctrl = dev_get_drvdata(sctrl->dev); 329 + DECLARE_COMPLETION_ONSTACK(done); 330 + void *pbuf = slim_alloc_txbuf(ctrl, txn, &done); 331 + unsigned long ms = txn->rl + HZ; 332 + u8 *puc; 333 + int ret = 0, timeout,
retries = QCOM_BUF_ALLOC_RETRIES; 334 + u8 la = txn->la; 335 + u32 *head; 336 + /* HW expects length field to be excluded */ 337 + txn->rl--; 338 + 339 + /* spin till buffer is made available */ 340 + if (!pbuf) { 341 + while (retries--) { 342 + usleep_range(10000, 15000); 343 + pbuf = slim_alloc_txbuf(ctrl, txn, &done); 344 + if (pbuf) 345 + break; 346 + } 347 + } 348 + 349 + if (retries < 0 && !pbuf) 350 + return -ENOMEM; 351 + 352 + puc = (u8 *)pbuf; 353 + head = (u32 *)pbuf; 354 + 355 + if (txn->dt == SLIM_MSG_DEST_LOGICALADDR) { 356 + *head = SLIM_MSG_ASM_FIRST_WORD(txn->rl, txn->mt, 357 + txn->mc, 0, la); 358 + puc += 3; 359 + } else { 360 + *head = SLIM_MSG_ASM_FIRST_WORD(txn->rl, txn->mt, 361 + txn->mc, 1, la); 362 + puc += 2; 363 + } 364 + 365 + if (slim_tid_txn(txn->mt, txn->mc)) 366 + *(puc++) = txn->tid; 367 + 368 + if (slim_ec_txn(txn->mt, txn->mc)) { 369 + *(puc++) = (txn->ec & 0xFF); 370 + *(puc++) = (txn->ec >> 8) & 0xFF; 371 + } 372 + 373 + if (txn->msg && txn->msg->wbuf) 374 + memcpy(puc, txn->msg->wbuf, txn->msg->num_bytes); 375 + 376 + qcom_slim_queue_tx(ctrl, head, txn->rl, MGR_TX_MSG); 377 + timeout = wait_for_completion_timeout(&done, msecs_to_jiffies(ms)); 378 + 379 + if (!timeout) { 380 + dev_err(ctrl->dev, "TX timed out:MC:0x%x,mt:0x%x", txn->mc, 381 + txn->mt); 382 + ret = -ETIMEDOUT; 383 + } 384 + 385 + return ret; 386 + 387 + } 388 + 389 + static int qcom_set_laddr(struct slim_controller *sctrl, 390 + struct slim_eaddr *ead, u8 laddr) 391 + { 392 + struct qcom_slim_ctrl *ctrl = dev_get_drvdata(sctrl->dev); 393 + struct { 394 + __be16 manf_id; 395 + __be16 prod_code; 396 + u8 dev_index; 397 + u8 instance; 398 + u8 laddr; 399 + } __packed p; 400 + struct slim_val_inf msg = {0}; 401 + DEFINE_SLIM_EDEST_TXN(txn, SLIM_MSG_MC_ASSIGN_LOGICAL_ADDRESS, 402 + 10, laddr, &msg); 403 + int ret; 404 + 405 + p.manf_id = cpu_to_be16(ead->manf_id); 406 + p.prod_code = cpu_to_be16(ead->prod_code); 407 + p.dev_index = ead->dev_index; 408 + p.instance = 
ead->instance; 409 + p.laddr = laddr; 410 + 411 + msg.wbuf = (void *)&p; 412 + msg.num_bytes = 7; 413 + ret = slim_do_transfer(&ctrl->ctrl, &txn); 414 + 415 + if (ret) 416 + dev_err(ctrl->dev, "set LA:0x%x failed:ret:%d\n", 417 + laddr, ret); 418 + return ret; 419 + } 420 + 421 + static int slim_get_current_rxbuf(struct qcom_slim_ctrl *ctrl, void *buf) 422 + { 423 + unsigned long flags; 424 + 425 + spin_lock_irqsave(&ctrl->rx.lock, flags); 426 + if (ctrl->rx.tail == ctrl->rx.head) { 427 + spin_unlock_irqrestore(&ctrl->rx.lock, flags); 428 + return -ENODATA; 429 + } 430 + memcpy(buf, ctrl->rx.base + (ctrl->rx.head * ctrl->rx.sl_sz), 431 + ctrl->rx.sl_sz); 432 + 433 + ctrl->rx.head = (ctrl->rx.head + 1) % ctrl->rx.n; 434 + spin_unlock_irqrestore(&ctrl->rx.lock, flags); 435 + 436 + return 0; 437 + } 438 + 439 + static void qcom_slim_rxwq(struct work_struct *work) 440 + { 441 + u8 buf[SLIM_MSGQ_BUF_LEN]; 442 + u8 mc, mt, len; 443 + int ret; 444 + struct qcom_slim_ctrl *ctrl = container_of(work, struct qcom_slim_ctrl, 445 + wd); 446 + 447 + while ((slim_get_current_rxbuf(ctrl, buf)) != -ENODATA) { 448 + len = SLIM_HEADER_GET_RL(buf[0]); 449 + mt = SLIM_HEADER_GET_MT(buf[0]); 450 + mc = SLIM_HEADER_GET_MC(buf[1]); 451 + if (mt == SLIM_MSG_MT_CORE && 452 + mc == SLIM_MSG_MC_REPORT_PRESENT) { 453 + struct slim_eaddr ea; 454 + u8 laddr; 455 + 456 + ea.manf_id = be16_to_cpup((__be16 *)&buf[2]); 457 + ea.prod_code = be16_to_cpup((__be16 *)&buf[4]); 458 + ea.dev_index = buf[6]; 459 + ea.instance = buf[7]; 460 + 461 + ret = slim_device_report_present(&ctrl->ctrl, &ea, 462 + &laddr); 463 + if (ret < 0) 464 + dev_err(ctrl->dev, "assign laddr failed:%d\n", 465 + ret); 466 + } else { 467 + dev_err(ctrl->dev, "unexpected message:mc:%x, mt:%x\n", 468 + mc, mt); 469 + } 470 + } 471 + } 472 + 473 + static void qcom_slim_prg_slew(struct platform_device *pdev, 474 + struct qcom_slim_ctrl *ctrl) 475 + { 476 + struct resource *slew_mem; 477 + 478 + if (!ctrl->slew_reg) { 479 + /* SLEW RATE 
register for this SLIMbus */ 480 + slew_mem = platform_get_resource_byname(pdev, IORESOURCE_MEM, 481 + "slew"); 482 + ctrl->slew_reg = devm_ioremap(&pdev->dev, slew_mem->start, 483 + resource_size(slew_mem)); 484 + if (!ctrl->slew_reg) 485 + return; 486 + } 487 + 488 + writel_relaxed(1, ctrl->slew_reg); 489 + /* Make sure SLIMbus-slew rate enabling goes through */ 490 + wmb(); 491 + } 492 + 493 + static int qcom_slim_probe(struct platform_device *pdev) 494 + { 495 + struct qcom_slim_ctrl *ctrl; 496 + struct slim_controller *sctrl; 497 + struct resource *slim_mem; 498 + int ret, ver; 499 + 500 + ctrl = devm_kzalloc(&pdev->dev, sizeof(*ctrl), GFP_KERNEL); 501 + if (!ctrl) 502 + return -ENOMEM; 503 + 504 + ctrl->hclk = devm_clk_get(&pdev->dev, "iface"); 505 + if (IS_ERR(ctrl->hclk)) 506 + return PTR_ERR(ctrl->hclk); 507 + 508 + ctrl->rclk = devm_clk_get(&pdev->dev, "core"); 509 + if (IS_ERR(ctrl->rclk)) 510 + return PTR_ERR(ctrl->rclk); 511 + 512 + ret = clk_set_rate(ctrl->rclk, SLIM_ROOT_FREQ); 513 + if (ret) { 514 + dev_err(&pdev->dev, "ref-clock set-rate failed:%d\n", ret); 515 + return ret; 516 + } 517 + 518 + ctrl->irq = platform_get_irq(pdev, 0); 519 + if (!ctrl->irq) { 520 + dev_err(&pdev->dev, "no slimbus IRQ\n"); 521 + return -ENODEV; 522 + } 523 + 524 + sctrl = &ctrl->ctrl; 525 + sctrl->dev = &pdev->dev; 526 + ctrl->dev = &pdev->dev; 527 + platform_set_drvdata(pdev, ctrl); 528 + dev_set_drvdata(ctrl->dev, ctrl); 529 + 530 + slim_mem = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl"); 531 + ctrl->base = devm_ioremap_resource(ctrl->dev, slim_mem); 532 + if (IS_ERR(ctrl->base)) { 533 + dev_err(&pdev->dev, "IOremap failed\n"); 534 + return PTR_ERR(ctrl->base); 535 + } 536 + 537 + sctrl->set_laddr = qcom_set_laddr; 538 + sctrl->xfer_msg = qcom_xfer_msg; 539 + sctrl->wakeup = qcom_clk_pause_wakeup; 540 + ctrl->tx.n = QCOM_TX_MSGS; 541 + ctrl->tx.sl_sz = SLIM_MSGQ_BUF_LEN; 542 + ctrl->rx.n = QCOM_RX_MSGS; 543 + ctrl->rx.sl_sz = SLIM_MSGQ_BUF_LEN; 544 + 
ctrl->wr_comp = kzalloc(sizeof(struct completion *) * QCOM_TX_MSGS, 545 + GFP_KERNEL); 546 + if (!ctrl->wr_comp) 547 + return -ENOMEM; 548 + 549 + spin_lock_init(&ctrl->rx.lock); 550 + spin_lock_init(&ctrl->tx.lock); 551 + INIT_WORK(&ctrl->wd, qcom_slim_rxwq); 552 + ctrl->rxwq = create_singlethread_workqueue("qcom_slim_rx"); 553 + if (!ctrl->rxwq) { 554 + dev_err(ctrl->dev, "Failed to start Rx WQ\n"); 555 + return -ENOMEM; 556 + } 557 + 558 + ctrl->framer.rootfreq = SLIM_ROOT_FREQ / 8; 559 + ctrl->framer.superfreq = 560 + ctrl->framer.rootfreq / SLIM_CL_PER_SUPERFRAME_DIV8; 561 + sctrl->a_framer = &ctrl->framer; 562 + sctrl->clkgear = SLIM_MAX_CLK_GEAR; 563 + 564 + qcom_slim_prg_slew(pdev, ctrl); 565 + 566 + ret = devm_request_irq(&pdev->dev, ctrl->irq, qcom_slim_interrupt, 567 + IRQF_TRIGGER_HIGH, "qcom_slim_irq", ctrl); 568 + if (ret) { 569 + dev_err(&pdev->dev, "request IRQ failed\n"); 570 + goto err_request_irq_failed; 571 + } 572 + 573 + ret = clk_prepare_enable(ctrl->hclk); 574 + if (ret) 575 + goto err_hclk_enable_failed; 576 + 577 + ret = clk_prepare_enable(ctrl->rclk); 578 + if (ret) 579 + goto err_rclk_enable_failed; 580 + 581 + ctrl->tx.base = devm_kcalloc(&pdev->dev, ctrl->tx.n, ctrl->tx.sl_sz, 582 + GFP_KERNEL); 583 + if (!ctrl->tx.base) { 584 + ret = -ENOMEM; 585 + goto err; 586 + } 587 + 588 + ctrl->rx.base = devm_kcalloc(&pdev->dev,ctrl->rx.n, ctrl->rx.sl_sz, 589 + GFP_KERNEL); 590 + if (!ctrl->rx.base) { 591 + ret = -ENOMEM; 592 + goto err; 593 + } 594 + 595 + /* Register with framework before enabling frame, clock */ 596 + ret = slim_register_controller(&ctrl->ctrl); 597 + if (ret) { 598 + dev_err(ctrl->dev, "error adding controller\n"); 599 + goto err; 600 + } 601 + 602 + ver = readl_relaxed(ctrl->base); 603 + /* Version info in 16 MSbits */ 604 + ver >>= 16; 605 + /* Component register initialization */ 606 + writel(1, ctrl->base + CFG_PORT(COMP_CFG, ver)); 607 + writel((EE_MGR_RSC_GRP | EE_NGD_2 | EE_NGD_1), 608 + ctrl->base + 
CFG_PORT(COMP_TRUST_CFG, ver)); 609 + 610 + writel((MGR_INT_TX_NACKED_2 | 611 + MGR_INT_MSG_BUF_CONTE | MGR_INT_RX_MSG_RCVD | 612 + MGR_INT_TX_MSG_SENT), ctrl->base + MGR_INT_EN); 613 + writel(1, ctrl->base + MGR_CFG); 614 + /* Framer register initialization */ 615 + writel((1 << INTR_WAKE) | (0xA << REF_CLK_GEAR) | 616 + (0xA << CLK_GEAR) | (1 << ROOT_FREQ) | (1 << FRM_ACTIVE) | 1, 617 + ctrl->base + FRM_CFG); 618 + writel(MGR_CFG_ENABLE, ctrl->base + MGR_CFG); 619 + writel(1, ctrl->base + INTF_CFG); 620 + writel(1, ctrl->base + CFG_PORT(COMP_CFG, ver)); 621 + 622 + pm_runtime_use_autosuspend(&pdev->dev); 623 + pm_runtime_set_autosuspend_delay(&pdev->dev, QCOM_SLIM_AUTOSUSPEND); 624 + pm_runtime_set_active(&pdev->dev); 625 + pm_runtime_mark_last_busy(&pdev->dev); 626 + pm_runtime_enable(&pdev->dev); 627 + 628 + dev_dbg(ctrl->dev, "QCOM SB controller is up:ver:0x%x!\n", ver); 629 + return 0; 630 + 631 + err: 632 + clk_disable_unprepare(ctrl->rclk); 633 + err_rclk_enable_failed: 634 + clk_disable_unprepare(ctrl->hclk); 635 + err_hclk_enable_failed: 636 + err_request_irq_failed: 637 + destroy_workqueue(ctrl->rxwq); 638 + return ret; 639 + } 640 + 641 + static int qcom_slim_remove(struct platform_device *pdev) 642 + { 643 + struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev); 644 + 645 + pm_runtime_disable(&pdev->dev); 646 + slim_unregister_controller(&ctrl->ctrl); 647 + destroy_workqueue(ctrl->rxwq); 648 + return 0; 649 + } 650 + 651 + /* 652 + * If PM_RUNTIME is not defined, these 2 functions become helper 653 + * functions to be called from system suspend/resume. 
654 + */ 655 + #ifdef CONFIG_PM 656 + static int qcom_slim_runtime_suspend(struct device *device) 657 + { 658 + struct platform_device *pdev = to_platform_device(device); 659 + struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev); 660 + int ret; 661 + 662 + dev_dbg(device, "pm_runtime: suspending...\n"); 663 + ret = slim_ctrl_clk_pause(&ctrl->ctrl, false, SLIM_CLK_UNSPECIFIED); 664 + if (ret) { 665 + dev_err(device, "clk pause not entered:%d", ret); 666 + } else { 667 + disable_irq(ctrl->irq); 668 + clk_disable_unprepare(ctrl->hclk); 669 + clk_disable_unprepare(ctrl->rclk); 670 + } 671 + return ret; 672 + } 673 + 674 + static int qcom_slim_runtime_resume(struct device *device) 675 + { 676 + struct platform_device *pdev = to_platform_device(device); 677 + struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev); 678 + int ret = 0; 679 + 680 + dev_dbg(device, "pm_runtime: resuming...\n"); 681 + ret = slim_ctrl_clk_pause(&ctrl->ctrl, true, 0); 682 + if (ret) 683 + dev_err(device, "clk pause not exited:%d", ret); 684 + return ret; 685 + } 686 + #endif 687 + 688 + #ifdef CONFIG_PM_SLEEP 689 + static int qcom_slim_suspend(struct device *dev) 690 + { 691 + int ret = 0; 692 + 693 + if (!pm_runtime_enabled(dev) || 694 + (!pm_runtime_suspended(dev))) { 695 + dev_dbg(dev, "system suspend"); 696 + ret = qcom_slim_runtime_suspend(dev); 697 + } 698 + 699 + return ret; 700 + } 701 + 702 + static int qcom_slim_resume(struct device *dev) 703 + { 704 + if (!pm_runtime_enabled(dev) || !pm_runtime_suspended(dev)) { 705 + int ret; 706 + 707 + dev_dbg(dev, "system resume"); 708 + ret = qcom_slim_runtime_resume(dev); 709 + if (!ret) { 710 + pm_runtime_mark_last_busy(dev); 711 + pm_request_autosuspend(dev); 712 + } 713 + return ret; 714 + 715 + } 716 + return 0; 717 + } 718 + #endif /* CONFIG_PM_SLEEP */ 719 + 720 + static const struct dev_pm_ops qcom_slim_dev_pm_ops = { 721 + SET_SYSTEM_SLEEP_PM_OPS(qcom_slim_suspend, qcom_slim_resume) 722 + SET_RUNTIME_PM_OPS( 723 + 
qcom_slim_runtime_suspend, 724 + qcom_slim_runtime_resume, 725 + NULL 726 + ) 727 + }; 728 + 729 + static const struct of_device_id qcom_slim_dt_match[] = { 730 + { .compatible = "qcom,slim", }, 731 + { .compatible = "qcom,apq8064-slim", }, 732 + {} 733 + }; 734 + 735 + static struct platform_driver qcom_slim_driver = { 736 + .probe = qcom_slim_probe, 737 + .remove = qcom_slim_remove, 738 + .driver = { 739 + .name = "qcom_slim_ctrl", 740 + .of_match_table = qcom_slim_dt_match, 741 + .pm = &qcom_slim_dev_pm_ops, 742 + }, 743 + }; 744 + module_platform_driver(qcom_slim_driver); 745 + 746 + MODULE_LICENSE("GPL v2"); 747 + MODULE_DESCRIPTION("Qualcomm SLIMbus Controller");
+121
drivers/slimbus/sched.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2011-2017, The Linux Foundation 4 + */ 5 + 6 + #include <linux/errno.h> 7 + #include "slimbus.h" 8 + 9 + /** 10 + * slim_ctrl_clk_pause() - Called by slimbus controller to enter/exit 11 + * 'clock pause' 12 + * @ctrl: controller requesting bus to be paused or woken up 13 + * @wakeup: Wakeup this controller from clock pause. 14 + * @restart: Restart time value per spec used for clock pause. This value 15 + * isn't used when controller is to be woken up. 16 + * 17 + * SLIMbus specification needs this sequence to turn off clocks for the bus. 18 + * The sequence involves sending 3 broadcast messages (reconfiguration 19 + * sequence) to inform all devices on the bus. 20 + * To exit clock-pause, controller typically wakes up active framer device. 21 + * This API executes clock pause reconfiguration sequence if wakeup is false. 22 + * If wakeup is true, controller's wakeup is called. 23 + * For entering clock-pause, -EBUSY is returned if a message txn is pending. 
24 + */ 25 + int slim_ctrl_clk_pause(struct slim_controller *ctrl, bool wakeup, u8 restart) 26 + { 27 + int i, ret = 0; 28 + unsigned long flags; 29 + struct slim_sched *sched = &ctrl->sched; 30 + struct slim_val_inf msg = {0, 0, NULL, NULL}; 31 + 32 + DEFINE_SLIM_BCAST_TXN(txn, SLIM_MSG_MC_BEGIN_RECONFIGURATION, 33 + 3, SLIM_LA_MANAGER, &msg); 34 + 35 + if (wakeup == false && restart > SLIM_CLK_UNSPECIFIED) 36 + return -EINVAL; 37 + 38 + mutex_lock(&sched->m_reconf); 39 + if (wakeup) { 40 + if (sched->clk_state == SLIM_CLK_ACTIVE) { 41 + mutex_unlock(&sched->m_reconf); 42 + return 0; 43 + } 44 + 45 + /* 46 + * Fine-tune calculation based on clock gear, 47 + * message-bandwidth after bandwidth management 48 + */ 49 + ret = wait_for_completion_timeout(&sched->pause_comp, 50 + msecs_to_jiffies(100)); 51 + if (!ret) { 52 + mutex_unlock(&sched->m_reconf); 53 + pr_err("Previous clock pause did not finish"); 54 + return -ETIMEDOUT; 55 + } 56 + ret = 0; 57 + 58 + /* 59 + * Slimbus framework will call controller wakeup 60 + * Controller should make sure that it sets active framer 61 + * out of clock pause 62 + */ 63 + if (sched->clk_state == SLIM_CLK_PAUSED && ctrl->wakeup) 64 + ret = ctrl->wakeup(ctrl); 65 + if (!ret) 66 + sched->clk_state = SLIM_CLK_ACTIVE; 67 + mutex_unlock(&sched->m_reconf); 68 + 69 + return ret; 70 + } 71 + 72 + /* already paused */ 73 + if (ctrl->sched.clk_state == SLIM_CLK_PAUSED) { 74 + mutex_unlock(&sched->m_reconf); 75 + return 0; 76 + } 77 + 78 + spin_lock_irqsave(&ctrl->txn_lock, flags); 79 + for (i = 0; i < SLIM_MAX_TIDS; i++) { 80 + /* Pending response for a message */ 81 + if (idr_find(&ctrl->tid_idr, i)) { 82 + spin_unlock_irqrestore(&ctrl->txn_lock, flags); 83 + mutex_unlock(&sched->m_reconf); 84 + return -EBUSY; 85 + } 86 + } 87 + spin_unlock_irqrestore(&ctrl->txn_lock, flags); 88 + 89 + sched->clk_state = SLIM_CLK_ENTERING_PAUSE; 90 + 91 + /* clock pause sequence */ 92 + ret = slim_do_transfer(ctrl, &txn); 93 + if (ret) 94 + goto 
clk_pause_ret; 95 + 96 + txn.mc = SLIM_MSG_MC_NEXT_PAUSE_CLOCK; 97 + txn.rl = 4; 98 + msg.num_bytes = 1; 99 + msg.wbuf = &restart; 100 + ret = slim_do_transfer(ctrl, &txn); 101 + if (ret) 102 + goto clk_pause_ret; 103 + 104 + txn.mc = SLIM_MSG_MC_RECONFIGURE_NOW; 105 + txn.rl = 3; 106 + msg.num_bytes = 1; 107 + msg.wbuf = NULL; 108 + ret = slim_do_transfer(ctrl, &txn); 109 + 110 + clk_pause_ret: 111 + if (ret) { 112 + sched->clk_state = SLIM_CLK_ACTIVE; 113 + } else { 114 + sched->clk_state = SLIM_CLK_PAUSED; 115 + complete(&sched->pause_comp); 116 + } 117 + mutex_unlock(&sched->m_reconf); 118 + 119 + return ret; 120 + } 121 + EXPORT_SYMBOL_GPL(slim_ctrl_clk_pause);
+261
drivers/slimbus/slimbus.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2011-2017, The Linux Foundation 4 + */ 5 + 6 + #ifndef _DRIVERS_SLIMBUS_H 7 + #define _DRIVERS_SLIMBUS_H 8 + #include <linux/module.h> 9 + #include <linux/device.h> 10 + #include <linux/mutex.h> 11 + #include <linux/completion.h> 12 + #include <linux/slimbus.h> 13 + 14 + /* Standard values per SLIMbus spec needed by controllers and devices */ 15 + #define SLIM_CL_PER_SUPERFRAME 6144 16 + #define SLIM_CL_PER_SUPERFRAME_DIV8 (SLIM_CL_PER_SUPERFRAME >> 3) 17 + 18 + /* SLIMbus message types. Related to interpretation of message code. */ 19 + #define SLIM_MSG_MT_CORE 0x0 20 + 21 + /* 22 + * SLIM Broadcast header format 23 + * BYTE 0: MT[7:5] RL[4:0] 24 + * BYTE 1: RSVD[7] MC[6:0] 25 + * BYTE 2: RSVD[7:6] DT[5:4] PI[3:0] 26 + */ 27 + #define SLIM_MSG_MT_MASK GENMASK(2, 0) 28 + #define SLIM_MSG_MT_SHIFT 5 29 + #define SLIM_MSG_RL_MASK GENMASK(4, 0) 30 + #define SLIM_MSG_RL_SHIFT 0 31 + #define SLIM_MSG_MC_MASK GENMASK(6, 0) 32 + #define SLIM_MSG_MC_SHIFT 0 33 + #define SLIM_MSG_DT_MASK GENMASK(1, 0) 34 + #define SLIM_MSG_DT_SHIFT 4 35 + 36 + #define SLIM_HEADER_GET_MT(b) ((b >> SLIM_MSG_MT_SHIFT) & SLIM_MSG_MT_MASK) 37 + #define SLIM_HEADER_GET_RL(b) ((b >> SLIM_MSG_RL_SHIFT) & SLIM_MSG_RL_MASK) 38 + #define SLIM_HEADER_GET_MC(b) ((b >> SLIM_MSG_MC_SHIFT) & SLIM_MSG_MC_MASK) 39 + #define SLIM_HEADER_GET_DT(b) ((b >> SLIM_MSG_DT_SHIFT) & SLIM_MSG_DT_MASK) 40 + 41 + /* Device management messages used by this framework */ 42 + #define SLIM_MSG_MC_REPORT_PRESENT 0x1 43 + #define SLIM_MSG_MC_ASSIGN_LOGICAL_ADDRESS 0x2 44 + #define SLIM_MSG_MC_REPORT_ABSENT 0xF 45 + 46 + /* Clock pause Reconfiguration messages */ 47 + #define SLIM_MSG_MC_BEGIN_RECONFIGURATION 0x40 48 + #define SLIM_MSG_MC_NEXT_PAUSE_CLOCK 0x4A 49 + #define SLIM_MSG_MC_RECONFIGURE_NOW 0x5F 50 + 51 + /* Clock pause values per SLIMbus spec */ 52 + #define SLIM_CLK_FAST 0 53 + #define SLIM_CLK_CONST_PHASE 1 54 + #define SLIM_CLK_UNSPECIFIED 2 55 
+ 56 + /* Destination type Values */ 57 + #define SLIM_MSG_DEST_LOGICALADDR 0 58 + #define SLIM_MSG_DEST_ENUMADDR 1 59 + #define SLIM_MSG_DEST_BROADCAST 3 60 + 61 + /* Standard values per SLIMbus spec needed by controllers and devices */ 62 + #define SLIM_MAX_CLK_GEAR 10 63 + #define SLIM_MIN_CLK_GEAR 1 64 + 65 + /* Manager's logical address is set to 0xFF per spec */ 66 + #define SLIM_LA_MANAGER 0xFF 67 + 68 + #define SLIM_MAX_TIDS 256 69 + /** 70 + * struct slim_framer - Represents SLIMbus framer. 71 + * Every controller may have multiple framers. There is 1 active framer device 72 + * responsible for clocking the bus. 73 + * Manager is responsible for framer hand-over. 74 + * @dev: Driver model representation of the device. 75 + * @e_addr: Enumeration address of the framer. 76 + * @rootfreq: Root Frequency at which the framer can run. This is maximum 77 + * frequency ('clock gear 10') at which the bus can operate. 78 + * @superfreq: Superframes per root frequency. Every frame is 6144 bits. 79 + */ 80 + struct slim_framer { 81 + struct device dev; 82 + struct slim_eaddr e_addr; 83 + int rootfreq; 84 + int superfreq; 85 + }; 86 + 87 + #define to_slim_framer(d) container_of(d, struct slim_framer, dev) 88 + 89 + /** 90 + * struct slim_msg_txn - Message to be sent by the controller. 91 + * This structure has packet header, 92 + * payload and buffer to be filled (if any) 93 + * @rl: Header field. remaining length. 94 + * @mt: Header field. Message type. 95 + * @mc: Header field. LSB is message code for type mt. 96 + * @dt: Header field. Destination type. 97 + * @ec: Element code. Used for elemental access APIs. 98 + * @tid: Transaction ID. Used for messages expecting response. 99 + * (relevant for message-codes involving read operation) 100 + * @la: Logical address of the device this message is going to. 101 + * (Not used when destination type is broadcast.) 
102 + * @msg: Elemental access message to be read/written 103 + * @comp: completion if read/write is synchronous, used internally 104 + * for tid based transactions. 105 + */ 106 + struct slim_msg_txn { 107 + u8 rl; 108 + u8 mt; 109 + u8 mc; 110 + u8 dt; 111 + u16 ec; 112 + u8 tid; 113 + u8 la; 114 + struct slim_val_inf *msg; 115 + struct completion *comp; 116 + }; 117 + 118 + /* Frequently used message transaction structures */ 119 + #define DEFINE_SLIM_LDEST_TXN(name, mc, rl, la, msg) \ 120 + struct slim_msg_txn name = { rl, 0, mc, SLIM_MSG_DEST_LOGICALADDR, 0,\ 121 + 0, la, msg, } 122 + 123 + #define DEFINE_SLIM_BCAST_TXN(name, mc, rl, la, msg) \ 124 + struct slim_msg_txn name = { rl, 0, mc, SLIM_MSG_DEST_BROADCAST, 0,\ 125 + 0, la, msg, } 126 + 127 + #define DEFINE_SLIM_EDEST_TXN(name, mc, rl, la, msg) \ 128 + struct slim_msg_txn name = { rl, 0, mc, SLIM_MSG_DEST_ENUMADDR, 0,\ 129 + 0, la, msg, } 130 + /** 131 + * enum slim_clk_state: SLIMbus controller's clock state used internally for 132 + * maintaining current clock state. 133 + * @SLIM_CLK_ACTIVE: SLIMbus clock is active 134 + * @SLIM_CLK_ENTERING_PAUSE: SLIMbus clock pause sequence is being sent on the 135 + * bus. If this succeeds, state changes to SLIM_CLK_PAUSED. If the 136 + * transition fails, state changes back to SLIM_CLK_ACTIVE 137 + * @SLIM_CLK_PAUSED: SLIMbus controller clock has paused. 138 + */ 139 + enum slim_clk_state { 140 + SLIM_CLK_ACTIVE, 141 + SLIM_CLK_ENTERING_PAUSE, 142 + SLIM_CLK_PAUSED, 143 + }; 144 + 145 + /** 146 + * struct slim_sched: Framework uses this structure internally for scheduling. 147 + * @clk_state: Controller's clock state from enum slim_clk_state 148 + * @pause_comp: Signals completion of clock pause sequence. This is useful when 149 + * client tries to call SLIMbus transaction when controller is entering 150 + * clock pause. 151 + * @m_reconf: This mutex is held until current reconfiguration (data channel 152 + * scheduling, message bandwidth reservation) is done. 
Message APIs can 153 + * use the bus concurrently when this mutex is held since elemental access 154 + * messages can be sent on the bus when reconfiguration is in progress. 155 + */ 156 + struct slim_sched { 157 + enum slim_clk_state clk_state; 158 + struct completion pause_comp; 159 + struct mutex m_reconf; 160 + }; 161 + 162 + /** 163 + * struct slim_controller - Controls every instance of SLIMbus 164 + * (similar to 'master' on SPI) 165 + * @dev: Device interface to this driver 166 + * @id: Board-specific number identifier for this controller/bus 167 + * @name: Name for this controller 168 + * @min_cg: Minimum clock gear supported by this controller (default value: 1) 169 + * @max_cg: Maximum clock gear supported by this controller (default value: 10) 170 + * @clkgear: Current clock gear in which this bus is running 171 + * @laddr_ida: logical address id allocator 172 + * @a_framer: Active framer which is clocking the bus managed by this controller 173 + * @lock: Mutex protecting controller data structures 174 + * @devices: Slim device list 175 + * @tid_idr: tid id allocator 176 + * @txn_lock: Lock to protect table of transactions 177 + * @sched: scheduler structure used by the controller 178 + * @xfer_msg: Transfer a message on this controller (this can be a broadcast 179 + * control/status message like data channel setup, or a unicast message 180 + * like value element read/write). 181 + * @set_laddr: Setup logical address at laddr for the slave with elemental 182 + * address e_addr. Drivers implementing controller will be expected to 183 + * send unicast message to this device with its logical address. 184 + * @get_laddr: It is possible that controller needs to set fixed logical 185 + * address table and get_laddr can be used in that case so that controller 186 + * can do this assignment. Use case is when the master is on the remote 187 + * processor side, which is responsible for allocating laddr. 
188 + * @wakeup: This function pointer implements controller-specific procedure 189 + * to wake it up from clock-pause. Framework will call this to bring 190 + * the controller out of clock pause. 191 + * 192 + * 'Manager device' is responsible for device management, bandwidth 193 + * allocation, channel setup, and port associations per channel. 194 + * Device management means Logical address assignment/removal based on 195 + * enumeration (report-present, report-absent) of a device. 196 + * Bandwidth allocation is done dynamically by the manager based on active 197 + * channels on the bus, message-bandwidth requests made by SLIMbus devices. 198 + * Based on current bandwidth usage, manager chooses a frequency to run 199 + * the bus at (in steps of 'clock-gear', 1 through 10, each clock gear 200 + * representing twice the frequency of the previous gear). 201 + * Manager is also responsible for entering (and exiting) low-power-mode 202 + * (known as 'clock pause'). 203 + * Manager can do handover of framer if there are multiple framers on the 204 + * bus and a certain use case warrants using a certain framer to avoid 205 + * keeping the previous framer powered on. 206 + * 207 + * Controller here performs duties of the manager device, and 'interface 208 + * device'. Interface device is responsible for monitoring the bus and 209 + * reporting information such as loss-of-synchronization, data 210 + * slot-collision. 
211 + */ 212 + struct slim_controller { 213 + struct device *dev; 214 + unsigned int id; 215 + char name[SLIMBUS_NAME_SIZE]; 216 + int min_cg; 217 + int max_cg; 218 + int clkgear; 219 + struct ida laddr_ida; 220 + struct slim_framer *a_framer; 221 + struct mutex lock; 222 + struct list_head devices; 223 + struct idr tid_idr; 224 + spinlock_t txn_lock; 225 + struct slim_sched sched; 226 + int (*xfer_msg)(struct slim_controller *ctrl, 227 + struct slim_msg_txn *tx); 228 + int (*set_laddr)(struct slim_controller *ctrl, 229 + struct slim_eaddr *ea, u8 laddr); 230 + int (*get_laddr)(struct slim_controller *ctrl, 231 + struct slim_eaddr *ea, u8 *laddr); 232 + int (*wakeup)(struct slim_controller *ctrl); 233 + }; 234 + 235 + int slim_device_report_present(struct slim_controller *ctrl, 236 + struct slim_eaddr *e_addr, u8 *laddr); 237 + void slim_report_absent(struct slim_device *sbdev); 238 + int slim_register_controller(struct slim_controller *ctrl); 239 + int slim_unregister_controller(struct slim_controller *ctrl); 240 + void slim_msg_response(struct slim_controller *ctrl, u8 *reply, u8 tid, u8 l); 241 + int slim_do_transfer(struct slim_controller *ctrl, struct slim_msg_txn *txn); 242 + int slim_ctrl_clk_pause(struct slim_controller *ctrl, bool wakeup, u8 restart); 243 + 244 + static inline bool slim_tid_txn(u8 mt, u8 mc) 245 + { 246 + return (mt == SLIM_MSG_MT_CORE && 247 + (mc == SLIM_MSG_MC_REQUEST_INFORMATION || 248 + mc == SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION || 249 + mc == SLIM_MSG_MC_REQUEST_VALUE || 250 + mc == SLIM_MSG_MC_REQUEST_CHANGE_VALUE)); 251 + } 252 + 253 + static inline bool slim_ec_txn(u8 mt, u8 mc) 254 + { 255 + return (mt == SLIM_MSG_MT_CORE && 256 + ((mc >= SLIM_MSG_MC_REQUEST_INFORMATION && 257 + mc <= SLIM_MSG_MC_REPORT_INFORMATION) || 258 + (mc >= SLIM_MSG_MC_REQUEST_VALUE && 259 + mc <= SLIM_MSG_MC_CHANGE_VALUE))); 260 + } 261 + #endif /* _DRIVERS_SLIMBUS_H */
+37
drivers/soundwire/Kconfig
··· 1 + # 2 + # SoundWire subsystem configuration 3 + # 4 + 5 + menuconfig SOUNDWIRE 6 + bool "SoundWire support" 7 + ---help--- 8 + SoundWire is a 2-Pin interface with data and clock line ratified 9 + by the MIPI Alliance. SoundWire is used for transporting data 10 + typically related to audio functions. SoundWire interface is 11 + optimized to integrate audio devices in mobile or mobile-inspired 12 + systems. Say Y to enable this subsystem, N if you do not have such 13 + a device. 14 + 15 + if SOUNDWIRE 16 + 17 + comment "SoundWire Devices" 18 + 19 + config SOUNDWIRE_BUS 20 + tristate 21 + select REGMAP_SOUNDWIRE 22 + 23 + config SOUNDWIRE_CADENCE 24 + tristate 25 + 26 + config SOUNDWIRE_INTEL 27 + tristate "Intel SoundWire Master driver" 28 + select SOUNDWIRE_CADENCE 29 + select SOUNDWIRE_BUS 30 + depends on X86 && ACPI 31 + ---help--- 32 + SoundWire Intel Master driver. 33 + If you have an Intel platform which has a SoundWire Master then 34 + enable this config option to get the SoundWire support for that 35 + device. 36 + 37 + endif
+18
drivers/soundwire/Makefile
··· 1 + # 2 + # Makefile for soundwire core 3 + # 4 + 5 + #Bus Objs 6 + soundwire-bus-objs := bus_type.o bus.o slave.o mipi_disco.o 7 + obj-$(CONFIG_SOUNDWIRE_BUS) += soundwire-bus.o 8 + 9 + #Cadence Objs 10 + soundwire-cadence-objs := cadence_master.o 11 + obj-$(CONFIG_SOUNDWIRE_CADENCE) += soundwire-cadence.o 12 + 13 + #Intel driver 14 + soundwire-intel-objs := intel.o 15 + obj-$(CONFIG_SOUNDWIRE_INTEL) += soundwire-intel.o 16 + 17 + soundwire-intel-init-objs := intel_init.o 18 + obj-$(CONFIG_SOUNDWIRE_INTEL) += soundwire-intel-init.o
+997
drivers/soundwire/bus.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + #include <linux/acpi.h> 5 + #include <linux/mod_devicetable.h> 6 + #include <linux/pm_runtime.h> 7 + #include <linux/soundwire/sdw_registers.h> 8 + #include <linux/soundwire/sdw.h> 9 + #include "bus.h" 10 + 11 + /** 12 + * sdw_add_bus_master() - add a bus Master instance 13 + * @bus: bus instance 14 + * 15 + * Initializes the bus instance, reads properties and creates child 16 + * devices. 17 + */ 18 + int sdw_add_bus_master(struct sdw_bus *bus) 19 + { 20 + int ret; 21 + 22 + if (!bus->dev) { 23 + pr_err("SoundWire bus has no device"); 24 + return -ENODEV; 25 + } 26 + 27 + if (!bus->ops) { 28 + dev_err(bus->dev, "SoundWire Bus ops are not set"); 29 + return -EINVAL; 30 + } 31 + 32 + mutex_init(&bus->msg_lock); 33 + mutex_init(&bus->bus_lock); 34 + INIT_LIST_HEAD(&bus->slaves); 35 + 36 + if (bus->ops->read_prop) { 37 + ret = bus->ops->read_prop(bus); 38 + if (ret < 0) { 39 + dev_err(bus->dev, "Bus read properties failed:%d", ret); 40 + return ret; 41 + } 42 + } 43 + 44 + /* 45 + * Device numbers in SoundWire are 0 thru 15. Enumeration device 46 + * number (0), Broadcast device number (15), Group numbers (12 and 47 + * 13) and Master device number (14) are not used for assignment so 48 + * mask these and other higher bits. 49 + */ 50 + 51 + /* Set higher order bits */ 52 + *bus->assigned = ~GENMASK(SDW_BROADCAST_DEV_NUM, SDW_ENUM_DEV_NUM); 53 + 54 + /* Set enumeration device number and broadcast device number */ 55 + set_bit(SDW_ENUM_DEV_NUM, bus->assigned); 56 + set_bit(SDW_BROADCAST_DEV_NUM, bus->assigned); 57 + 58 + /* Set group device numbers and master device number */ 59 + set_bit(SDW_GROUP12_DEV_NUM, bus->assigned); 60 + set_bit(SDW_GROUP13_DEV_NUM, bus->assigned); 61 + set_bit(SDW_MASTER_DEV_NUM, bus->assigned); 62 + 63 + /* 64 + * SDW is an enumerable bus, but devices can be powered off. So, 65 + * they won't be able to report as present. 
66 + * 67 + * Create Slave devices based on Slaves described in 68 + * the respective firmware (ACPI/DT) 69 + */ 70 + if (IS_ENABLED(CONFIG_ACPI) && ACPI_HANDLE(bus->dev)) 71 + ret = sdw_acpi_find_slaves(bus); 72 + else 73 + ret = -ENOTSUPP; /* No ACPI/DT so error out */ 74 + 75 + if (ret) { 76 + dev_err(bus->dev, "Finding slaves failed:%d\n", ret); 77 + return ret; 78 + } 79 + 80 + return 0; 81 + } 82 + EXPORT_SYMBOL(sdw_add_bus_master); 83 + 84 + static int sdw_delete_slave(struct device *dev, void *data) 85 + { 86 + struct sdw_slave *slave = dev_to_sdw_dev(dev); 87 + struct sdw_bus *bus = slave->bus; 88 + 89 + mutex_lock(&bus->bus_lock); 90 + 91 + if (slave->dev_num) /* clear dev_num if assigned */ 92 + clear_bit(slave->dev_num, bus->assigned); 93 + 94 + list_del_init(&slave->node); 95 + mutex_unlock(&bus->bus_lock); 96 + 97 + device_unregister(dev); 98 + return 0; 99 + } 100 + 101 + /** 102 + * sdw_delete_bus_master() - delete the bus master instance 103 + * @bus: bus to be deleted 104 + * 105 + * Remove the instance, delete the child devices. 
106 + */ 107 + void sdw_delete_bus_master(struct sdw_bus *bus) 108 + { 109 + device_for_each_child(bus->dev, NULL, sdw_delete_slave); 110 + } 111 + EXPORT_SYMBOL(sdw_delete_bus_master); 112 + 113 + /* 114 + * SDW IO Calls 115 + */ 116 + 117 + static inline int find_response_code(enum sdw_command_response resp) 118 + { 119 + switch (resp) { 120 + case SDW_CMD_OK: 121 + return 0; 122 + 123 + case SDW_CMD_IGNORED: 124 + return -ENODATA; 125 + 126 + case SDW_CMD_TIMEOUT: 127 + return -ETIMEDOUT; 128 + 129 + default: 130 + return -EIO; 131 + } 132 + } 133 + 134 + static inline int do_transfer(struct sdw_bus *bus, struct sdw_msg *msg) 135 + { 136 + int retry = bus->prop.err_threshold; 137 + enum sdw_command_response resp; 138 + int ret = 0, i; 139 + 140 + for (i = 0; i <= retry; i++) { 141 + resp = bus->ops->xfer_msg(bus, msg); 142 + ret = find_response_code(resp); 143 + 144 + /* if cmd is ok or ignored return */ 145 + if (ret == 0 || ret == -ENODATA) 146 + return ret; 147 + } 148 + 149 + return ret; 150 + } 151 + 152 + static inline int do_transfer_defer(struct sdw_bus *bus, 153 + struct sdw_msg *msg, struct sdw_defer *defer) 154 + { 155 + int retry = bus->prop.err_threshold; 156 + enum sdw_command_response resp; 157 + int ret = 0, i; 158 + 159 + defer->msg = msg; 160 + defer->length = msg->len; 161 + 162 + for (i = 0; i <= retry; i++) { 163 + resp = bus->ops->xfer_msg_defer(bus, msg, defer); 164 + ret = find_response_code(resp); 165 + /* if cmd is ok or ignored return */ 166 + if (ret == 0 || ret == -ENODATA) 167 + return ret; 168 + } 169 + 170 + return ret; 171 + } 172 + 173 + static int sdw_reset_page(struct sdw_bus *bus, u16 dev_num) 174 + { 175 + int retry = bus->prop.err_threshold; 176 + enum sdw_command_response resp; 177 + int ret = 0, i; 178 + 179 + for (i = 0; i <= retry; i++) { 180 + resp = bus->ops->reset_page_addr(bus, dev_num); 181 + ret = find_response_code(resp); 182 + /* if cmd is ok or ignored return */ 183 + if (ret == 0 || ret == -ENODATA) 184 + 
return ret; 185 + } 186 + 187 + return ret; 188 + } 189 + 190 + /** 191 + * sdw_transfer() - Synchronous transfer message to a SDW Slave device 192 + * @bus: SDW bus 193 + * @msg: SDW message to be xfered 194 + */ 195 + int sdw_transfer(struct sdw_bus *bus, struct sdw_msg *msg) 196 + { 197 + int ret; 198 + 199 + mutex_lock(&bus->msg_lock); 200 + 201 + ret = do_transfer(bus, msg); 202 + if (ret != 0 && ret != -ENODATA) 203 + dev_err(bus->dev, "trf on Slave %d failed:%d\n", 204 + msg->dev_num, ret); 205 + 206 + if (msg->page) 207 + sdw_reset_page(bus, msg->dev_num); 208 + 209 + mutex_unlock(&bus->msg_lock); 210 + 211 + return ret; 212 + } 213 + 214 + /** 215 + * sdw_transfer_defer() - Asynchronously transfer message to a SDW Slave device 216 + * @bus: SDW bus 217 + * @msg: SDW message to be xfered 218 + * @defer: Defer block for signal completion 219 + * 220 + * Caller needs to hold the msg_lock lock while calling this 221 + */ 222 + int sdw_transfer_defer(struct sdw_bus *bus, struct sdw_msg *msg, 223 + struct sdw_defer *defer) 224 + { 225 + int ret; 226 + 227 + if (!bus->ops->xfer_msg_defer) 228 + return -ENOTSUPP; 229 + 230 + ret = do_transfer_defer(bus, msg, defer); 231 + if (ret != 0 && ret != -ENODATA) 232 + dev_err(bus->dev, "Defer trf on Slave %d failed:%d\n", 233 + msg->dev_num, ret); 234 + 235 + if (msg->page) 236 + sdw_reset_page(bus, msg->dev_num); 237 + 238 + return ret; 239 + } 240 + 241 + 242 + int sdw_fill_msg(struct sdw_msg *msg, struct sdw_slave *slave, 243 + u32 addr, size_t count, u16 dev_num, u8 flags, u8 *buf) 244 + { 245 + memset(msg, 0, sizeof(*msg)); 246 + msg->addr = addr; /* addr is 16 bit and truncated here */ 247 + msg->len = count; 248 + msg->dev_num = dev_num; 249 + msg->flags = flags; 250 + msg->buf = buf; 251 + msg->ssp_sync = false; 252 + msg->page = false; 253 + 254 + if (addr < SDW_REG_NO_PAGE) { /* no paging area */ 255 + return 0; 256 + } else if (addr >= SDW_REG_MAX) { /* illegal addr */ 257 + pr_err("SDW: Invalid address %x 
passed\n", addr); 258 + return -EINVAL; 259 + } 260 + 261 + if (addr < SDW_REG_OPTIONAL_PAGE) { /* 32k but no page */ 262 + if (slave && !slave->prop.paging_support) 263 + return 0; 264 + /* no need for else as that will fall thru to paging */ 265 + } 266 + 267 + /* paging mandatory */ 268 + if (dev_num == SDW_ENUM_DEV_NUM || dev_num == SDW_BROADCAST_DEV_NUM) { 269 + pr_err("SDW: Invalid device for paging :%d\n", dev_num); 270 + return -EINVAL; 271 + } 272 + 273 + if (!slave) { 274 + pr_err("SDW: No slave for paging addr\n"); 275 + return -EINVAL; 276 + } else if (!slave->prop.paging_support) { 277 + dev_err(&slave->dev, 278 + "address %x needs paging but no support", addr); 279 + return -EINVAL; 280 + } 281 + 282 + msg->addr_page1 = (addr >> SDW_REG_SHIFT(SDW_SCP_ADDRPAGE1_MASK)); 283 + msg->addr_page2 = (addr >> SDW_REG_SHIFT(SDW_SCP_ADDRPAGE2_MASK)); 284 + msg->addr |= BIT(15); 285 + msg->page = true; 286 + 287 + return 0; 288 + } 289 + 290 + /** 291 + * sdw_nread() - Read "n" contiguous SDW Slave registers 292 + * @slave: SDW Slave 293 + * @addr: Register address 294 + * @count: length 295 + * @val: Buffer for values to be read 296 + */ 297 + int sdw_nread(struct sdw_slave *slave, u32 addr, size_t count, u8 *val) 298 + { 299 + struct sdw_msg msg; 300 + int ret; 301 + 302 + ret = sdw_fill_msg(&msg, slave, addr, count, 303 + slave->dev_num, SDW_MSG_FLAG_READ, val); 304 + if (ret < 0) 305 + return ret; 306 + 307 + ret = pm_runtime_get_sync(slave->bus->dev); 308 + if (ret < 0) 309 + return ret; 310 + 311 + ret = sdw_transfer(slave->bus, &msg); 312 + pm_runtime_put(slave->bus->dev); 313 + 314 + return ret; 315 + } 316 + EXPORT_SYMBOL(sdw_nread); 317 + 318 + /** 319 + * sdw_nwrite() - Write "n" contiguous SDW Slave registers 320 + * @slave: SDW Slave 321 + * @addr: Register address 322 + * @count: length 323 + * @val: Buffer for values to be written 324 + */ 325 + int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, u8 *val) 326 + { 327 + struct sdw_msg msg; 
328 + int ret; 329 + 330 + ret = sdw_fill_msg(&msg, slave, addr, count, 331 + slave->dev_num, SDW_MSG_FLAG_WRITE, val); 332 + if (ret < 0) 333 + return ret; 334 + 335 + ret = pm_runtime_get_sync(slave->bus->dev); 336 + if (ret < 0) 337 + return ret; 338 + 339 + ret = sdw_transfer(slave->bus, &msg); 340 + pm_runtime_put(slave->bus->dev); 341 + 342 + return ret; 343 + } 344 + EXPORT_SYMBOL(sdw_nwrite); 345 + 346 + /** 347 + * sdw_read() - Read a SDW Slave register 348 + * @slave: SDW Slave 349 + * @addr: Register address 350 + */ 351 + int sdw_read(struct sdw_slave *slave, u32 addr) 352 + { 353 + u8 buf; 354 + int ret; 355 + 356 + ret = sdw_nread(slave, addr, 1, &buf); 357 + if (ret < 0) 358 + return ret; 359 + else 360 + return buf; 361 + } 362 + EXPORT_SYMBOL(sdw_read); 363 + 364 + /** 365 + * sdw_write() - Write a SDW Slave register 366 + * @slave: SDW Slave 367 + * @addr: Register address 368 + * @value: Register value 369 + */ 370 + int sdw_write(struct sdw_slave *slave, u32 addr, u8 value) 371 + { 372 + return sdw_nwrite(slave, addr, 1, &value); 373 + 374 + } 375 + EXPORT_SYMBOL(sdw_write); 376 + 377 + /* 378 + * SDW alert handling 379 + */ 380 + 381 + /* called with bus_lock held */ 382 + static struct sdw_slave *sdw_get_slave(struct sdw_bus *bus, int i) 383 + { 384 + struct sdw_slave *slave = NULL; 385 + 386 + list_for_each_entry(slave, &bus->slaves, node) { 387 + if (slave->dev_num == i) 388 + return slave; 389 + } 390 + 391 + return NULL; 392 + } 393 + 394 + static int sdw_compare_devid(struct sdw_slave *slave, struct sdw_slave_id id) 395 + { 396 + 397 + if ((slave->id.unique_id != id.unique_id) || 398 + (slave->id.mfg_id != id.mfg_id) || 399 + (slave->id.part_id != id.part_id) || 400 + (slave->id.class_id != id.class_id)) 401 + return -ENODEV; 402 + 403 + return 0; 404 + } 405 + 406 + /* called with bus_lock held */ 407 + static int sdw_get_device_num(struct sdw_slave *slave) 408 + { 409 + int bit; 410 + 411 + bit = 
find_first_zero_bit(slave->bus->assigned, SDW_MAX_DEVICES); 412 + if (bit == SDW_MAX_DEVICES) { 413 + bit = -ENODEV; 414 + goto err; 415 + } 416 + 417 + /* 418 + * Do not update dev_num in Slave data structure here, 419 + * Update once program dev_num is successful 420 + */ 421 + set_bit(bit, slave->bus->assigned); 422 + 423 + err: 424 + return bit; 425 + } 426 + 427 + static int sdw_assign_device_num(struct sdw_slave *slave) 428 + { 429 + int ret, dev_num; 430 + 431 + /* check first if device number is assigned, if so reuse that */ 432 + if (!slave->dev_num) { 433 + mutex_lock(&slave->bus->bus_lock); 434 + dev_num = sdw_get_device_num(slave); 435 + mutex_unlock(&slave->bus->bus_lock); 436 + if (dev_num < 0) { 437 + dev_err(slave->bus->dev, "Get dev_num failed: %d", 438 + dev_num); 439 + return dev_num; 440 + } 441 + } else { 442 + dev_info(slave->bus->dev, 443 + "Slave already registered dev_num:%d", 444 + slave->dev_num); 445 + 446 + /* Clear the slave->dev_num to transfer message on device 0 */ 447 + dev_num = slave->dev_num; 448 + slave->dev_num = 0; 449 + 450 + } 451 + 452 + ret = sdw_write(slave, SDW_SCP_DEVNUMBER, dev_num); 453 + if (ret < 0) { 454 + dev_err(&slave->dev, "Program device_num failed: %d", ret); 455 + return ret; 456 + } 457 + 458 + /* After xfer of msg, restore dev_num */ 459 + slave->dev_num = dev_num; 460 + 461 + return 0; 462 + } 463 + 464 + void sdw_extract_slave_id(struct sdw_bus *bus, 465 + u64 addr, struct sdw_slave_id *id) 466 + { 467 + dev_dbg(bus->dev, "SDW Slave Addr: %llx", addr); 468 + 469 + /* 470 + * Spec definition 471 + * Register Bit Contents 472 + * DevId_0 [7:4] 47:44 sdw_version 473 + * DevId_0 [3:0] 43:40 unique_id 474 + * DevId_1 39:32 mfg_id [15:8] 475 + * DevId_2 31:24 mfg_id [7:0] 476 + * DevId_3 23:16 part_id [15:8] 477 + * DevId_4 15:08 part_id [7:0] 478 + * DevId_5 07:00 class_id 479 + */ 480 + id->sdw_version = (addr >> 44) & GENMASK(3, 0); 481 + id->unique_id = (addr >> 40) & GENMASK(3, 0); 482 + id->mfg_id = 
(addr >> 24) & GENMASK(15, 0); 483 + id->part_id = (addr >> 8) & GENMASK(15, 0); 484 + id->class_id = addr & GENMASK(7, 0); 485 + 486 + dev_dbg(bus->dev, 487 + "SDW Slave class_id %x, part_id %x, mfg_id %x, unique_id %x, version %x", 488 + id->class_id, id->part_id, id->mfg_id, 489 + id->unique_id, id->sdw_version); 490 + 491 + } 492 + 493 + static int sdw_program_device_num(struct sdw_bus *bus) 494 + { 495 + u8 buf[SDW_NUM_DEV_ID_REGISTERS] = {0}; 496 + struct sdw_slave *slave, *_s; 497 + struct sdw_slave_id id; 498 + struct sdw_msg msg; 499 + bool found = false; 500 + int count = 0, ret; 501 + u64 addr; 502 + 503 + /* No Slave, so use raw xfer api */ 504 + ret = sdw_fill_msg(&msg, NULL, SDW_SCP_DEVID_0, 505 + SDW_NUM_DEV_ID_REGISTERS, 0, SDW_MSG_FLAG_READ, buf); 506 + if (ret < 0) 507 + return ret; 508 + 509 + do { 510 + ret = sdw_transfer(bus, &msg); 511 + if (ret == -ENODATA) { /* end of device id reads */ 512 + ret = 0; 513 + break; 514 + } 515 + if (ret < 0) { 516 + dev_err(bus->dev, "DEVID read fail:%d\n", ret); 517 + break; 518 + } 519 + 520 + /* 521 + * Construct the addr and extract. Cast the higher shift 522 + * bits to avoid truncation due to size limit. 523 + */ 524 + addr = buf[5] | (buf[4] << 8) | (buf[3] << 16) | 525 + ((u64)buf[2] << 24) | ((u64)buf[1] << 32) | 526 + ((u64)buf[0] << 40); 527 + 528 + sdw_extract_slave_id(bus, addr, &id); 529 + 530 + /* Now compare with entries */ 531 + list_for_each_entry_safe(slave, _s, &bus->slaves, node) { 532 + if (sdw_compare_devid(slave, id) == 0) { 533 + found = true; 534 + 535 + /* 536 + * Assign a new dev_num to this Slave and 537 + * not mark it present. 
It will be marked 538 + * present after it reports ATTACHED on new 539 + * dev_num 540 + */ 541 + ret = sdw_assign_device_num(slave); 542 + if (ret) { 543 + dev_err(slave->bus->dev, 544 + "Assign dev_num failed:%d", 545 + ret); 546 + return ret; 547 + } 548 + 549 + break; 550 + } 551 + } 552 + 553 + if (found == false) { 554 + /* TODO: Park this device in Group 13 */ 555 + dev_err(bus->dev, "Slave Entry not found"); 556 + } 557 + 558 + count++; 559 + 560 + /* 561 + * Check till error out or retry (count) exhausts. 562 + * Device can drop off and rejoin during enumeration 563 + * so count till twice the bound. 564 + */ 565 + 566 + } while (ret == 0 && count < (SDW_MAX_DEVICES * 2)); 567 + 568 + return ret; 569 + } 570 + 571 + static void sdw_modify_slave_status(struct sdw_slave *slave, 572 + enum sdw_slave_status status) 573 + { 574 + mutex_lock(&slave->bus->bus_lock); 575 + slave->status = status; 576 + mutex_unlock(&slave->bus->bus_lock); 577 + } 578 + 579 + static int sdw_initialize_slave(struct sdw_slave *slave) 580 + { 581 + struct sdw_slave_prop *prop = &slave->prop; 582 + int ret; 583 + u8 val; 584 + 585 + /* 586 + * Set bus clash, parity and SCP implementation 587 + * defined interrupt mask 588 + * TODO: Read implementation defined interrupt mask 589 + * from Slave property 590 + */ 591 + val = SDW_SCP_INT1_IMPL_DEF | SDW_SCP_INT1_BUS_CLASH | 592 + SDW_SCP_INT1_PARITY; 593 + 594 + /* Enable SCP interrupts */ 595 + ret = sdw_update(slave, SDW_SCP_INTMASK1, val, val); 596 + if (ret < 0) { 597 + dev_err(slave->bus->dev, 598 + "SDW_SCP_INTMASK1 write failed:%d", ret); 599 + return ret; 600 + } 601 + 602 + /* No need to continue if DP0 is not present */ 603 + if (!slave->prop.dp0_prop) 604 + return 0; 605 + 606 + /* Enable DP0 interrupts */ 607 + val = prop->dp0_prop->device_interrupts; 608 + val |= SDW_DP0_INT_PORT_READY | SDW_DP0_INT_BRA_FAILURE; 609 + 610 + ret = sdw_update(slave, SDW_DP0_INTMASK, val, val); 611 + if (ret < 0) { 612 + dev_err(slave->bus->dev, 
613 + "SDW_DP0_INTMASK write failed:%d", ret); 614 + return ret; 615 + } 616 + 617 + return 0; 618 + } 619 + 620 + static int sdw_handle_dp0_interrupt(struct sdw_slave *slave, u8 *slave_status) 621 + { 622 + u8 clear = 0, impl_int_mask; 623 + int status, status2, ret, count = 0; 624 + 625 + status = sdw_read(slave, SDW_DP0_INT); 626 + if (status < 0) { 627 + dev_err(slave->bus->dev, 628 + "SDW_DP0_INT read failed:%d", status); 629 + return status; 630 + } 631 + 632 + do { 633 + 634 + if (status & SDW_DP0_INT_TEST_FAIL) { 635 + dev_err(&slave->dev, "Test fail for port 0"); 636 + clear |= SDW_DP0_INT_TEST_FAIL; 637 + } 638 + 639 + /* 640 + * Assumption: PORT_READY interrupt will be received only for 641 + * ports implementing Channel Prepare state machine (CP_SM) 642 + */ 643 + 644 + if (status & SDW_DP0_INT_PORT_READY) { 645 + complete(&slave->port_ready[0]); 646 + clear |= SDW_DP0_INT_PORT_READY; 647 + } 648 + 649 + if (status & SDW_DP0_INT_BRA_FAILURE) { 650 + dev_err(&slave->dev, "BRA failed"); 651 + clear |= SDW_DP0_INT_BRA_FAILURE; 652 + } 653 + 654 + impl_int_mask = SDW_DP0_INT_IMPDEF1 | 655 + SDW_DP0_INT_IMPDEF2 | SDW_DP0_INT_IMPDEF3; 656 + 657 + if (status & impl_int_mask) { 658 + clear |= impl_int_mask; 659 + *slave_status = clear; 660 + } 661 + 662 + /* clear the interrupt */ 663 + ret = sdw_write(slave, SDW_DP0_INT, clear); 664 + if (ret < 0) { 665 + dev_err(slave->bus->dev, 666 + "SDW_DP0_INT write failed:%d", ret); 667 + return ret; 668 + } 669 + 670 + /* Read DP0 interrupt again */ 671 + status2 = sdw_read(slave, SDW_DP0_INT); 672 + if (status2 < 0) { 673 + dev_err(slave->bus->dev, 674 + "SDW_DP0_INT read failed:%d", status2); 675 + return status2; 676 + } 677 + status &= status2; 678 + 679 + count++; 680 + 681 + /* we can get alerts while processing so keep retrying */ 682 + } while (status != 0 && count < SDW_READ_INTR_CLEAR_RETRY); 683 + 684 + if (count == SDW_READ_INTR_CLEAR_RETRY) 685 + dev_warn(slave->bus->dev, "Reached MAX_RETRY on DP0 read");
686 + 687 + return ret; 688 + } 689 + 690 + static int sdw_handle_port_interrupt(struct sdw_slave *slave, 691 + int port, u8 *slave_status) 692 + { 693 + u8 clear = 0, impl_int_mask; 694 + int status, status2, ret, count = 0; 695 + u32 addr; 696 + 697 + if (port == 0) 698 + return sdw_handle_dp0_interrupt(slave, slave_status); 699 + 700 + addr = SDW_DPN_INT(port); 701 + status = sdw_read(slave, addr); 702 + if (status < 0) { 703 + dev_err(slave->bus->dev, 704 + "SDW_DPN_INT read failed:%d", status); 705 + 706 + return status; 707 + } 708 + 709 + do { 710 + 711 + if (status & SDW_DPN_INT_TEST_FAIL) { 712 + dev_err(&slave->dev, "Test fail for port:%d", port); 713 + clear |= SDW_DPN_INT_TEST_FAIL; 714 + } 715 + 716 + /* 717 + * Assumption: PORT_READY interrupt will be received only 718 + * for ports implementing CP_SM. 719 + */ 720 + if (status & SDW_DPN_INT_PORT_READY) { 721 + complete(&slave->port_ready[port]); 722 + clear |= SDW_DPN_INT_PORT_READY; 723 + } 724 + 725 + impl_int_mask = SDW_DPN_INT_IMPDEF1 | 726 + SDW_DPN_INT_IMPDEF2 | SDW_DPN_INT_IMPDEF3; 727 + 728 + 729 + if (status & impl_int_mask) { 730 + clear |= impl_int_mask; 731 + *slave_status = clear; 732 + } 733 + 734 + /* clear the interrupt */ 735 + ret = sdw_write(slave, addr, clear); 736 + if (ret < 0) { 737 + dev_err(slave->bus->dev, 738 + "SDW_DPN_INT write failed:%d", ret); 739 + return ret; 740 + } 741 + 742 + /* Read DPN interrupt again */ 743 + status2 = sdw_read(slave, addr); 744 + if (status2 < 0) { 745 + dev_err(slave->bus->dev, 746 + "SDW_DPN_INT read failed:%d", status2); 747 + return status2; 748 + } 749 + status &= status2; 750 + 751 + count++; 752 + 753 + /* we can get alerts while processing so keep retrying */ 754 + } while (status != 0 && count < SDW_READ_INTR_CLEAR_RETRY); 755 + 756 + if (count == SDW_READ_INTR_CLEAR_RETRY) 757 + dev_warn(slave->bus->dev, "Reached MAX_RETRY on port read"); 758 + 759 + return ret; 760 + } 761 + 762 + static int sdw_handle_slave_alerts(struct sdw_slave 
*slave) 763 + { 764 + struct sdw_slave_intr_status slave_intr; 765 + u8 clear = 0, bit, port_status[15]; 766 + int port_num, stat, ret, count = 0; 767 + unsigned long port; 768 + bool slave_notify = false; 769 + u8 buf, buf2[2], _buf, _buf2[2]; 770 + 771 + sdw_modify_slave_status(slave, SDW_SLAVE_ALERT); 772 + 773 + /* Read Instat 1, Instat 2 and Instat 3 registers */ 774 + buf = ret = sdw_read(slave, SDW_SCP_INT1); 775 + if (ret < 0) { 776 + dev_err(slave->bus->dev, 777 + "SDW_SCP_INT1 read failed:%d", ret); 778 + return ret; 779 + } 780 + 781 + ret = sdw_nread(slave, SDW_SCP_INTSTAT2, 2, buf2); 782 + if (ret < 0) { 783 + dev_err(slave->bus->dev, 784 + "SDW_SCP_INT2/3 read failed:%d", ret); 785 + return ret; 786 + } 787 + 788 + do { 789 + /* 790 + * Check parity, bus clash and Slave (impl defined) 791 + * interrupt 792 + */ 793 + if (buf & SDW_SCP_INT1_PARITY) { 794 + dev_err(&slave->dev, "Parity error detected"); 795 + clear |= SDW_SCP_INT1_PARITY; 796 + } 797 + 798 + if (buf & SDW_SCP_INT1_BUS_CLASH) { 799 + dev_err(&slave->dev, "Bus clash error detected"); 800 + clear |= SDW_SCP_INT1_BUS_CLASH; 801 + } 802 + 803 + /* 804 + * When bus clash or parity errors are detected, such errors 805 + * are unlikely to be recoverable errors. 806 + * TODO: In such scenario, reset bus. Make this configurable 807 + * via sysfs property with bus reset being the default. 
808 + */ 809 + 810 + if (buf & SDW_SCP_INT1_IMPL_DEF) { 811 + dev_dbg(&slave->dev, "Slave impl defined interrupt\n"); 812 + clear |= SDW_SCP_INT1_IMPL_DEF; 813 + slave_notify = true; 814 + } 815 + 816 + /* Check port 0 - 3 interrupts */ 817 + port = buf & SDW_SCP_INT1_PORT0_3; 818 + 819 + /* To get port number corresponding to bits, shift it */ 820 + port = port >> SDW_REG_SHIFT(SDW_SCP_INT1_PORT0_3); 821 + for_each_set_bit(bit, &port, 8) { 822 + sdw_handle_port_interrupt(slave, bit, 823 + &port_status[bit]); 824 + 825 + } 826 + 827 + /* Check if cascade 2 interrupt is present */ 828 + if (buf & SDW_SCP_INT1_SCP2_CASCADE) { 829 + port = buf2[0] & SDW_SCP_INTSTAT2_PORT4_10; 830 + for_each_set_bit(bit, &port, 8) { 831 + /* scp2 ports start from 4 */ 832 + port_num = bit + 3; 833 + sdw_handle_port_interrupt(slave, 834 + port_num, 835 + &port_status[port_num]); 836 + } 837 + } 838 + 839 + /* now check last cascade */ 840 + if (buf2[0] & SDW_SCP_INTSTAT2_SCP3_CASCADE) { 841 + port = buf2[1] & SDW_SCP_INTSTAT3_PORT11_14; 842 + for_each_set_bit(bit, &port, 8) { 843 + /* scp3 ports start from 11 */ 844 + port_num = bit + 10; 845 + sdw_handle_port_interrupt(slave, 846 + port_num, 847 + &port_status[port_num]); 848 + } 849 + } 850 + 851 + /* Update the Slave driver */ 852 + if (slave_notify && (slave->ops) && 853 + (slave->ops->interrupt_callback)) { 854 + slave_intr.control_port = clear; 855 + memcpy(slave_intr.port, &port_status, 856 + sizeof(slave_intr.port)); 857 + 858 + slave->ops->interrupt_callback(slave, &slave_intr); 859 + } 860 + 861 + /* Ack interrupt */ 862 + ret = sdw_write(slave, SDW_SCP_INT1, clear); 863 + if (ret < 0) { 864 + dev_err(slave->bus->dev, 865 + "SDW_SCP_INT1 write failed:%d", ret); 866 + return ret; 867 + } 868 + 869 + /* 870 + * Read status again to ensure no new interrupts arrived 871 + * while servicing interrupts. 
872 + */ 873 + _buf = ret = sdw_read(slave, SDW_SCP_INT1); 874 + if (ret < 0) { 875 + dev_err(slave->bus->dev, 876 + "SDW_SCP_INT1 read failed:%d", ret); 877 + return ret; 878 + } 879 + 880 + ret = sdw_nread(slave, SDW_SCP_INTSTAT2, 2, _buf2); 881 + if (ret < 0) { 882 + dev_err(slave->bus->dev, 883 + "SDW_SCP_INT2/3 read failed:%d", ret); 884 + return ret; 885 + } 886 + 887 + /* Make sure no interrupts are pending */ 888 + buf &= _buf; 889 + buf2[0] &= _buf2[0]; 890 + buf2[1] &= _buf2[1]; 891 + stat = buf || buf2[0] || buf2[1]; 892 + 893 + /* 894 + * Exit loop if Slave is continuously in ALERT state even 895 + * after servicing the interrupt multiple times. 896 + */ 897 + count++; 898 + 899 + /* we can get alerts while processing so keep retrying */ 900 + } while (stat != 0 && count < SDW_READ_INTR_CLEAR_RETRY); 901 + 902 + if (count == SDW_READ_INTR_CLEAR_RETRY) 903 + dev_warn(slave->bus->dev, "Reached MAX_RETRY on alert read"); 904 + 905 + return ret; 906 + } 907 + 908 + static int sdw_update_slave_status(struct sdw_slave *slave, 909 + enum sdw_slave_status status) 910 + { 911 + if ((slave->ops) && (slave->ops->update_status)) 912 + return slave->ops->update_status(slave, status); 913 + 914 + return 0; 915 + } 916 + 917 + /** 918 + * sdw_handle_slave_status() - Handle Slave status 919 + * @bus: SDW bus instance 920 + * @status: Status for all Slave(s) 921 + */ 922 + int sdw_handle_slave_status(struct sdw_bus *bus, 923 + enum sdw_slave_status status[]) 924 + { 925 + enum sdw_slave_status prev_status; 926 + struct sdw_slave *slave; 927 + int i, ret = 0; 928 + 929 + if (status[0] == SDW_SLAVE_ATTACHED) { 930 + ret = sdw_program_device_num(bus); 931 + if (ret) 932 + dev_err(bus->dev, "Slave attach failed: %d", ret); 933 + } 934 + 935 + /* Continue to check other slave statuses */ 936 + for (i = 1; i <= SDW_MAX_DEVICES; i++) { 937 + mutex_lock(&bus->bus_lock); 938 + if (test_bit(i, bus->assigned) == false) { 939 + mutex_unlock(&bus->bus_lock); 940 + continue; 941 + } 
942 + mutex_unlock(&bus->bus_lock); 943 + 944 + slave = sdw_get_slave(bus, i); 945 + if (!slave) 946 + continue; 947 + 948 + switch (status[i]) { 949 + case SDW_SLAVE_UNATTACHED: 950 + if (slave->status == SDW_SLAVE_UNATTACHED) 951 + break; 952 + 953 + sdw_modify_slave_status(slave, SDW_SLAVE_UNATTACHED); 954 + break; 955 + 956 + case SDW_SLAVE_ALERT: 957 + ret = sdw_handle_slave_alerts(slave); 958 + if (ret) 959 + dev_err(bus->dev, 960 + "Slave %d alert handling failed: %d", 961 + i, ret); 962 + break; 963 + 964 + case SDW_SLAVE_ATTACHED: 965 + if (slave->status == SDW_SLAVE_ATTACHED) 966 + break; 967 + 968 + prev_status = slave->status; 969 + sdw_modify_slave_status(slave, SDW_SLAVE_ATTACHED); 970 + 971 + if (prev_status == SDW_SLAVE_ALERT) 972 + break; 973 + 974 + ret = sdw_initialize_slave(slave); 975 + if (ret) 976 + dev_err(bus->dev, 977 + "Slave %d initialization failed: %d", 978 + i, ret); 979 + 980 + break; 981 + 982 + default: 983 + dev_err(bus->dev, "Invalid slave %d status:%d", 984 + i, status[i]); 985 + break; 986 + } 987 + 988 + ret = sdw_update_slave_status(slave, status[i]); 989 + if (ret) 990 + dev_err(slave->bus->dev, 991 + "Update Slave status failed:%d", ret); 992 + 993 + } 994 + 995 + return ret; 996 + } 997 + EXPORT_SYMBOL(sdw_handle_slave_status);
+71
drivers/soundwire/bus.h
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + #ifndef __SDW_BUS_H 5 + #define __SDW_BUS_H 6 + 7 + #if IS_ENABLED(CONFIG_ACPI) 8 + int sdw_acpi_find_slaves(struct sdw_bus *bus); 9 + #else 10 + static inline int sdw_acpi_find_slaves(struct sdw_bus *bus) 11 + { 12 + return -ENOTSUPP; 13 + } 14 + #endif 15 + 16 + void sdw_extract_slave_id(struct sdw_bus *bus, 17 + u64 addr, struct sdw_slave_id *id); 18 + 19 + enum { 20 + SDW_MSG_FLAG_READ = 0, 21 + SDW_MSG_FLAG_WRITE, 22 + }; 23 + 24 + /** 25 + * struct sdw_msg - Message structure 26 + * @addr: Register address accessed in the Slave 27 + * @len: number of bytes to transfer 28 + * @dev_num: Slave device number 29 + * @addr_page1: SCP address page 1 Slave register 30 + * @addr_page2: SCP address page 2 Slave register 31 + * @flags: transfer flags, indicate if xfer is read or write 32 + * @buf: message data buffer 33 + * @ssp_sync: Send message at SSP (Stream Synchronization Point) 34 + * @page: address requires paging 35 + */ 36 + struct sdw_msg { 37 + u16 addr; 38 + u16 len; 39 + u8 dev_num; 40 + u8 addr_page1; 41 + u8 addr_page2; 42 + u8 flags; 43 + u8 *buf; 44 + bool ssp_sync; 45 + bool page; 46 + }; 47 + 48 + int sdw_transfer(struct sdw_bus *bus, struct sdw_msg *msg); 49 + int sdw_transfer_defer(struct sdw_bus *bus, struct sdw_msg *msg, 50 + struct sdw_defer *defer); 51 + 52 + #define SDW_READ_INTR_CLEAR_RETRY 10 53 + 54 + int sdw_fill_msg(struct sdw_msg *msg, struct sdw_slave *slave, 55 + u32 addr, size_t count, u16 dev_num, u8 flags, u8 *buf); 56 + 57 + /* Read-Modify-Write Slave register */ 58 + static inline int 59 + sdw_update(struct sdw_slave *slave, u32 addr, u8 mask, u8 val) 60 + { 61 + int tmp; 62 + 63 + tmp = sdw_read(slave, addr); 64 + if (tmp < 0) 65 + return tmp; 66 + 67 + tmp = (tmp & ~mask) | val; 68 + return sdw_write(slave, addr, tmp); 69 + } 70 + 71 + #endif /* __SDW_BUS_H */
+193
drivers/soundwire/bus_type.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + #include <linux/module.h> 5 + #include <linux/mod_devicetable.h> 6 + #include <linux/pm_domain.h> 7 + #include <linux/soundwire/sdw.h> 8 + #include <linux/soundwire/sdw_type.h> 9 + 10 + /** 11 + * sdw_get_device_id - find the matching SoundWire device id 12 + * @slave: SoundWire Slave Device 13 + * @drv: SoundWire Slave Driver 14 + * 15 + * The match is done by comparing the mfg_id and part_id from the 16 + * struct sdw_device_id. 17 + */ 18 + static const struct sdw_device_id * 19 + sdw_get_device_id(struct sdw_slave *slave, struct sdw_driver *drv) 20 + { 21 + const struct sdw_device_id *id = drv->id_table; 22 + 23 + while (id && id->mfg_id) { 24 + if (slave->id.mfg_id == id->mfg_id && 25 + slave->id.part_id == id->part_id) 26 + return id; 27 + id++; 28 + } 29 + 30 + return NULL; 31 + } 32 + 33 + static int sdw_bus_match(struct device *dev, struct device_driver *ddrv) 34 + { 35 + struct sdw_slave *slave = dev_to_sdw_dev(dev); 36 + struct sdw_driver *drv = drv_to_sdw_driver(ddrv); 37 + 38 + return !!sdw_get_device_id(slave, drv); 39 + } 40 + 41 + int sdw_slave_modalias(const struct sdw_slave *slave, char *buf, size_t size) 42 + { 43 + /* modalias is sdw:m<mfg_id>p<part_id> */ 44 + 45 + return snprintf(buf, size, "sdw:m%04Xp%04X\n", 46 + slave->id.mfg_id, slave->id.part_id); 47 + } 48 + 49 + static int sdw_uevent(struct device *dev, struct kobj_uevent_env *env) 50 + { 51 + struct sdw_slave *slave = dev_to_sdw_dev(dev); 52 + char modalias[32]; 53 + 54 + sdw_slave_modalias(slave, modalias, sizeof(modalias)); 55 + 56 + if (add_uevent_var(env, "MODALIAS=%s", modalias)) 57 + return -ENOMEM; 58 + 59 + return 0; 60 + } 61 + 62 + struct bus_type sdw_bus_type = { 63 + .name = "soundwire", 64 + .match = sdw_bus_match, 65 + .uevent = sdw_uevent, 66 + }; 67 + EXPORT_SYMBOL_GPL(sdw_bus_type); 68 + 69 + static int sdw_drv_probe(struct device *dev) 70 + { 71 + struct sdw_slave *slave 
= dev_to_sdw_dev(dev); 72 + struct sdw_driver *drv = drv_to_sdw_driver(dev->driver); 73 + const struct sdw_device_id *id; 74 + int ret; 75 + 76 + id = sdw_get_device_id(slave, drv); 77 + if (!id) 78 + return -ENODEV; 79 + 80 + slave->ops = drv->ops; 81 + 82 + /* 83 + * attach to power domain but don't turn on (last arg) 84 + */ 85 + ret = dev_pm_domain_attach(dev, false); 86 + if (ret != -EPROBE_DEFER) { 87 + ret = drv->probe(slave, id); 88 + if (ret) { 89 + dev_err(dev, "Probe of %s failed: %d\n", drv->name, ret); 90 + dev_pm_domain_detach(dev, false); 91 + } 92 + } 93 + 94 + if (ret) 95 + return ret; 96 + 97 + /* device is probed so let's read the properties now */ 98 + if (slave->ops && slave->ops->read_prop) 99 + slave->ops->read_prop(slave); 100 + 101 + /* 102 + * Check for valid clk_stop_timeout, use DisCo worst case value of 103 + * 300ms 104 + * 105 + * TODO: check the timeouts and driver removal case 106 + */ 107 + if (slave->prop.clk_stop_timeout == 0) 108 + slave->prop.clk_stop_timeout = 300; 109 + 110 + slave->bus->clk_stop_timeout = max_t(u32, slave->bus->clk_stop_timeout, 111 + slave->prop.clk_stop_timeout); 112 + 113 + return 0; 114 + } 115 + 116 + static int sdw_drv_remove(struct device *dev) 117 + { 118 + struct sdw_slave *slave = dev_to_sdw_dev(dev); 119 + struct sdw_driver *drv = drv_to_sdw_driver(dev->driver); 120 + int ret = 0; 121 + 122 + if (drv->remove) 123 + ret = drv->remove(slave); 124 + 125 + dev_pm_domain_detach(dev, false); 126 + 127 + return ret; 128 + } 129 + 130 + static void sdw_drv_shutdown(struct device *dev) 131 + { 132 + struct sdw_slave *slave = dev_to_sdw_dev(dev); 133 + struct sdw_driver *drv = drv_to_sdw_driver(dev->driver); 134 + 135 + if (drv->shutdown) 136 + drv->shutdown(slave); 137 + } 138 + 139 + /** 140 + * __sdw_register_driver() - register a SoundWire Slave driver 141 + * @drv: driver to register 142 + * @owner: owning module/driver 143 + * 144 + * Return: zero on success, else a negative error code. 
145 + */ 146 + int __sdw_register_driver(struct sdw_driver *drv, struct module *owner) 147 + { 148 + drv->driver.bus = &sdw_bus_type; 149 + 150 + if (!drv->probe) { 151 + pr_err("driver %s didn't provide SDW probe routine\n", 152 + drv->name); 153 + return -EINVAL; 154 + } 155 + 156 + drv->driver.owner = owner; 157 + drv->driver.probe = sdw_drv_probe; 158 + 159 + if (drv->remove) 160 + drv->driver.remove = sdw_drv_remove; 161 + 162 + if (drv->shutdown) 163 + drv->driver.shutdown = sdw_drv_shutdown; 164 + 165 + return driver_register(&drv->driver); 166 + } 167 + EXPORT_SYMBOL_GPL(__sdw_register_driver); 168 + 169 + /** 170 + * sdw_unregister_driver() - unregisters the SoundWire Slave driver 171 + * @drv: driver to unregister 172 + */ 173 + void sdw_unregister_driver(struct sdw_driver *drv) 174 + { 175 + driver_unregister(&drv->driver); 176 + } 177 + EXPORT_SYMBOL_GPL(sdw_unregister_driver); 178 + 179 + static int __init sdw_bus_init(void) 180 + { 181 + return bus_register(&sdw_bus_type); 182 + } 183 + 184 + static void __exit sdw_bus_exit(void) 185 + { 186 + bus_unregister(&sdw_bus_type); 187 + } 188 + 189 + postcore_initcall(sdw_bus_init); 190 + module_exit(sdw_bus_exit); 191 + 192 + MODULE_DESCRIPTION("SoundWire bus"); 193 + MODULE_LICENSE("GPL v2");
+751
drivers/soundwire/cadence_master.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + /* 5 + * Cadence SoundWire Master module 6 + * Used by Master driver 7 + */ 8 + 9 + #include <linux/delay.h> 10 + #include <linux/device.h> 11 + #include <linux/interrupt.h> 12 + #include <linux/module.h> 13 + #include <linux/mod_devicetable.h> 14 + #include <linux/soundwire/sdw_registers.h> 15 + #include <linux/soundwire/sdw.h> 16 + #include "bus.h" 17 + #include "cadence_master.h" 18 + 19 + #define CDNS_MCP_CONFIG 0x0 20 + 21 + #define CDNS_MCP_CONFIG_MCMD_RETRY GENMASK(27, 24) 22 + #define CDNS_MCP_CONFIG_MPREQ_DELAY GENMASK(20, 16) 23 + #define CDNS_MCP_CONFIG_MMASTER BIT(7) 24 + #define CDNS_MCP_CONFIG_BUS_REL BIT(6) 25 + #define CDNS_MCP_CONFIG_SNIFFER BIT(5) 26 + #define CDNS_MCP_CONFIG_SSPMOD BIT(4) 27 + #define CDNS_MCP_CONFIG_CMD BIT(3) 28 + #define CDNS_MCP_CONFIG_OP GENMASK(2, 0) 29 + #define CDNS_MCP_CONFIG_OP_NORMAL 0 30 + 31 + #define CDNS_MCP_CONTROL 0x4 32 + 33 + #define CDNS_MCP_CONTROL_RST_DELAY GENMASK(10, 8) 34 + #define CDNS_MCP_CONTROL_CMD_RST BIT(7) 35 + #define CDNS_MCP_CONTROL_SOFT_RST BIT(6) 36 + #define CDNS_MCP_CONTROL_SW_RST BIT(5) 37 + #define CDNS_MCP_CONTROL_HW_RST BIT(4) 38 + #define CDNS_MCP_CONTROL_CLK_PAUSE BIT(3) 39 + #define CDNS_MCP_CONTROL_CLK_STOP_CLR BIT(2) 40 + #define CDNS_MCP_CONTROL_CMD_ACCEPT BIT(1) 41 + #define CDNS_MCP_CONTROL_BLOCK_WAKEUP BIT(0) 42 + 43 + 44 + #define CDNS_MCP_CMDCTRL 0x8 45 + #define CDNS_MCP_SSPSTAT 0xC 46 + #define CDNS_MCP_FRAME_SHAPE 0x10 47 + #define CDNS_MCP_FRAME_SHAPE_INIT 0x14 48 + 49 + #define CDNS_MCP_CONFIG_UPDATE 0x18 50 + #define CDNS_MCP_CONFIG_UPDATE_BIT BIT(0) 51 + 52 + #define CDNS_MCP_PHYCTRL 0x1C 53 + #define CDNS_MCP_SSP_CTRL0 0x20 54 + #define CDNS_MCP_SSP_CTRL1 0x28 55 + #define CDNS_MCP_CLK_CTRL0 0x30 56 + #define CDNS_MCP_CLK_CTRL1 0x38 57 + 58 + #define CDNS_MCP_STAT 0x40 59 + 60 + #define CDNS_MCP_STAT_ACTIVE_BANK BIT(20) 61 + #define 
CDNS_MCP_STAT_CLK_STOP BIT(16) 62 + 63 + #define CDNS_MCP_INTSTAT 0x44 64 + #define CDNS_MCP_INTMASK 0x48 65 + 66 + #define CDNS_MCP_INT_IRQ BIT(31) 67 + #define CDNS_MCP_INT_WAKEUP BIT(16) 68 + #define CDNS_MCP_INT_SLAVE_RSVD BIT(15) 69 + #define CDNS_MCP_INT_SLAVE_ALERT BIT(14) 70 + #define CDNS_MCP_INT_SLAVE_ATTACH BIT(13) 71 + #define CDNS_MCP_INT_SLAVE_NATTACH BIT(12) 72 + #define CDNS_MCP_INT_SLAVE_MASK GENMASK(15, 12) 73 + #define CDNS_MCP_INT_DPINT BIT(11) 74 + #define CDNS_MCP_INT_CTRL_CLASH BIT(10) 75 + #define CDNS_MCP_INT_DATA_CLASH BIT(9) 76 + #define CDNS_MCP_INT_CMD_ERR BIT(7) 77 + #define CDNS_MCP_INT_RX_WL BIT(2) 78 + #define CDNS_MCP_INT_TXE BIT(1) 79 + 80 + #define CDNS_MCP_INTSET 0x4C 81 + 82 + #define CDNS_SDW_SLAVE_STAT 0x50 83 + #define CDNS_MCP_SLAVE_STAT_MASK GENMASK(1, 0) 84 + 85 + #define CDNS_MCP_SLAVE_INTSTAT0 0x54 86 + #define CDNS_MCP_SLAVE_INTSTAT1 0x58 87 + #define CDNS_MCP_SLAVE_INTSTAT_NPRESENT BIT(0) 88 + #define CDNS_MCP_SLAVE_INTSTAT_ATTACHED BIT(1) 89 + #define CDNS_MCP_SLAVE_INTSTAT_ALERT BIT(2) 90 + #define CDNS_MCP_SLAVE_INTSTAT_RESERVED BIT(3) 91 + #define CDNS_MCP_SLAVE_STATUS_BITS GENMASK(3, 0) 92 + #define CDNS_MCP_SLAVE_STATUS_NUM 4 93 + 94 + #define CDNS_MCP_SLAVE_INTMASK0 0x5C 95 + #define CDNS_MCP_SLAVE_INTMASK1 0x60 96 + 97 + #define CDNS_MCP_SLAVE_INTMASK0_MASK GENMASK(30, 0) 98 + #define CDNS_MCP_SLAVE_INTMASK1_MASK GENMASK(16, 0) 99 + 100 + #define CDNS_MCP_PORT_INTSTAT 0x64 101 + #define CDNS_MCP_PDI_STAT 0x6C 102 + 103 + #define CDNS_MCP_FIFOLEVEL 0x78 104 + #define CDNS_MCP_FIFOSTAT 0x7C 105 + #define CDNS_MCP_RX_FIFO_AVAIL GENMASK(5, 0) 106 + 107 + #define CDNS_MCP_CMD_BASE 0x80 108 + #define CDNS_MCP_RESP_BASE 0x80 109 + #define CDNS_MCP_CMD_LEN 0x20 110 + #define CDNS_MCP_CMD_WORD_LEN 0x4 111 + 112 + #define CDNS_MCP_CMD_SSP_TAG BIT(31) 113 + #define CDNS_MCP_CMD_COMMAND GENMASK(30, 28) 114 + #define CDNS_MCP_CMD_DEV_ADDR GENMASK(27, 24) 115 + #define CDNS_MCP_CMD_REG_ADDR_H GENMASK(23, 16) 116 + #define
CDNS_MCP_CMD_REG_ADDR_L GENMASK(15, 8) 117 + #define CDNS_MCP_CMD_REG_DATA GENMASK(7, 0) 118 + 119 + #define CDNS_MCP_CMD_READ 2 120 + #define CDNS_MCP_CMD_WRITE 3 121 + 122 + #define CDNS_MCP_RESP_RDATA GENMASK(15, 8) 123 + #define CDNS_MCP_RESP_ACK BIT(0) 124 + #define CDNS_MCP_RESP_NACK BIT(1) 125 + 126 + #define CDNS_DP_SIZE 128 127 + 128 + #define CDNS_DPN_B0_CONFIG(n) (0x100 + CDNS_DP_SIZE * (n)) 129 + #define CDNS_DPN_B0_CH_EN(n) (0x104 + CDNS_DP_SIZE * (n)) 130 + #define CDNS_DPN_B0_SAMPLE_CTRL(n) (0x108 + CDNS_DP_SIZE * (n)) 131 + #define CDNS_DPN_B0_OFFSET_CTRL(n) (0x10C + CDNS_DP_SIZE * (n)) 132 + #define CDNS_DPN_B0_HCTRL(n) (0x110 + CDNS_DP_SIZE * (n)) 133 + #define CDNS_DPN_B0_ASYNC_CTRL(n) (0x114 + CDNS_DP_SIZE * (n)) 134 + 135 + #define CDNS_DPN_B1_CONFIG(n) (0x118 + CDNS_DP_SIZE * (n)) 136 + #define CDNS_DPN_B1_CH_EN(n) (0x11C + CDNS_DP_SIZE * (n)) 137 + #define CDNS_DPN_B1_SAMPLE_CTRL(n) (0x120 + CDNS_DP_SIZE * (n)) 138 + #define CDNS_DPN_B1_OFFSET_CTRL(n) (0x124 + CDNS_DP_SIZE * (n)) 139 + #define CDNS_DPN_B1_HCTRL(n) (0x128 + CDNS_DP_SIZE * (n)) 140 + #define CDNS_DPN_B1_ASYNC_CTRL(n) (0x12C + CDNS_DP_SIZE * (n)) 141 + 142 + #define CDNS_DPN_CONFIG_BPM BIT(18) 143 + #define CDNS_DPN_CONFIG_BGC GENMASK(17, 16) 144 + #define CDNS_DPN_CONFIG_WL GENMASK(12, 8) 145 + #define CDNS_DPN_CONFIG_PORT_DAT GENMASK(3, 2) 146 + #define CDNS_DPN_CONFIG_PORT_FLOW GENMASK(1, 0) 147 + 148 + #define CDNS_DPN_SAMPLE_CTRL_SI GENMASK(15, 0) 149 + 150 + #define CDNS_DPN_OFFSET_CTRL_1 GENMASK(7, 0) 151 + #define CDNS_DPN_OFFSET_CTRL_2 GENMASK(15, 8) 152 + 153 + #define CDNS_DPN_HCTRL_HSTOP GENMASK(3, 0) 154 + #define CDNS_DPN_HCTRL_HSTART GENMASK(7, 4) 155 + #define CDNS_DPN_HCTRL_LCTRL GENMASK(10, 8) 156 + 157 + #define CDNS_PORTCTRL 0x130 158 + #define CDNS_PORTCTRL_DIRN BIT(7) 159 + #define CDNS_PORTCTRL_BANK_INVERT BIT(8) 160 + 161 + #define CDNS_PORT_OFFSET 0x80 162 + 163 + #define CDNS_PDI_CONFIG(n) (0x1100 + (n) * 16) 164 + 165 + #define 
CDNS_PDI_CONFIG_SOFT_RESET BIT(24) 166 + #define CDNS_PDI_CONFIG_CHANNEL GENMASK(15, 8) 167 + #define CDNS_PDI_CONFIG_PORT GENMASK(4, 0) 168 + 169 + /* Driver defaults */ 170 + 171 + #define CDNS_DEFAULT_CLK_DIVIDER 0 172 + #define CDNS_DEFAULT_FRAME_SHAPE 0x30 173 + #define CDNS_DEFAULT_SSP_INTERVAL 0x18 174 + #define CDNS_TX_TIMEOUT 2000 175 + 176 + #define CDNS_PCM_PDI_OFFSET 0x2 177 + #define CDNS_PDM_PDI_OFFSET 0x6 178 + 179 + #define CDNS_SCP_RX_FIFOLEVEL 0x2 180 + 181 + /* 182 + * register accessor helpers 183 + */ 184 + static inline u32 cdns_readl(struct sdw_cdns *cdns, int offset) 185 + { 186 + return readl(cdns->registers + offset); 187 + } 188 + 189 + static inline void cdns_writel(struct sdw_cdns *cdns, int offset, u32 value) 190 + { 191 + writel(value, cdns->registers + offset); 192 + } 193 + 194 + static inline void cdns_updatel(struct sdw_cdns *cdns, 195 + int offset, u32 mask, u32 val) 196 + { 197 + u32 tmp; 198 + 199 + tmp = cdns_readl(cdns, offset); 200 + tmp = (tmp & ~mask) | val; 201 + cdns_writel(cdns, offset, tmp); 202 + } 203 + 204 + static int cdns_clear_bit(struct sdw_cdns *cdns, int offset, u32 value) 205 + { 206 + int timeout = 10; 207 + u32 reg_read; 208 + 209 + writel(value, cdns->registers + offset); 210 + 211 + /* Wait for bit to be self cleared */ 212 + do { 213 + reg_read = readl(cdns->registers + offset); 214 + if ((reg_read & value) == 0) 215 + return 0; 216 + 217 + timeout--; 218 + udelay(50); 219 + } while (timeout != 0); 220 + 221 + return -EAGAIN; 222 + } 223 + 224 + /* 225 + * IO Calls 226 + */ 227 + static enum sdw_command_response cdns_fill_msg_resp( 228 + struct sdw_cdns *cdns, 229 + struct sdw_msg *msg, int count, int offset) 230 + { 231 + int nack = 0, no_ack = 0; 232 + int i; 233 + 234 + /* check message response */ 235 + for (i = 0; i < count; i++) { 236 + if (!(cdns->response_buf[i] & CDNS_MCP_RESP_ACK)) { 237 + no_ack = 1; 238 + dev_dbg(cdns->dev, "Msg Ack not received\n"); 239 + if (cdns->response_buf[i] & 
CDNS_MCP_RESP_NACK) { 240 + nack = 1; 241 + dev_err(cdns->dev, "Msg NACK received\n"); 242 + } 243 + } 244 + } 245 + 246 + if (nack) { 247 + dev_err(cdns->dev, "Msg NACKed for Slave %d\n", msg->dev_num); 248 + return SDW_CMD_FAIL; 249 + } else if (no_ack) { 250 + dev_dbg(cdns->dev, "Msg ignored for Slave %d\n", msg->dev_num); 251 + return SDW_CMD_IGNORED; 252 + } 253 + 254 + /* fill response */ 255 + for (i = 0; i < count; i++) 256 + msg->buf[i + offset] = cdns->response_buf[i] >> 257 + SDW_REG_SHIFT(CDNS_MCP_RESP_RDATA); 258 + 259 + return SDW_CMD_OK; 260 + } 261 + 262 + static enum sdw_command_response 263 + _cdns_xfer_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int cmd, 264 + int offset, int count, bool defer) 265 + { 266 + unsigned long time; 267 + u32 base, i, data; 268 + u16 addr; 269 + 270 + /* Program the watermark level for RX FIFO */ 271 + if (cdns->msg_count != count) { 272 + cdns_writel(cdns, CDNS_MCP_FIFOLEVEL, count); 273 + cdns->msg_count = count; 274 + } 275 + 276 + base = CDNS_MCP_CMD_BASE; 277 + addr = msg->addr; 278 + 279 + for (i = 0; i < count; i++) { 280 + data = msg->dev_num << SDW_REG_SHIFT(CDNS_MCP_CMD_DEV_ADDR); 281 + data |= cmd << SDW_REG_SHIFT(CDNS_MCP_CMD_COMMAND); 282 + data |= addr++ << SDW_REG_SHIFT(CDNS_MCP_CMD_REG_ADDR_L); 283 + 284 + if (msg->flags == SDW_MSG_FLAG_WRITE) 285 + data |= msg->buf[i + offset]; 286 + 287 + data |= msg->ssp_sync << SDW_REG_SHIFT(CDNS_MCP_CMD_SSP_TAG); 288 + cdns_writel(cdns, base, data); 289 + base += CDNS_MCP_CMD_WORD_LEN; 290 + } 291 + 292 + if (defer) 293 + return SDW_CMD_OK; 294 + 295 + /* wait for timeout or response */ 296 + time = wait_for_completion_timeout(&cdns->tx_complete, 297 + msecs_to_jiffies(CDNS_TX_TIMEOUT)); 298 + if (!time) { 299 + dev_err(cdns->dev, "IO transfer timed out\n"); 300 + msg->len = 0; 301 + return SDW_CMD_TIMEOUT; 302 + } 303 + 304 + return cdns_fill_msg_resp(cdns, msg, count, offset); 305 + } 306 + 307 + static enum sdw_command_response cdns_program_scp_addr( 308 + 
struct sdw_cdns *cdns, struct sdw_msg *msg) 309 + { 310 + int nack = 0, no_ack = 0; 311 + unsigned long time; 312 + u32 data[2], base; 313 + int i; 314 + 315 + /* Program the watermark level for RX FIFO */ 316 + if (cdns->msg_count != CDNS_SCP_RX_FIFOLEVEL) { 317 + cdns_writel(cdns, CDNS_MCP_FIFOLEVEL, CDNS_SCP_RX_FIFOLEVEL); 318 + cdns->msg_count = CDNS_SCP_RX_FIFOLEVEL; 319 + } 320 + 321 + data[0] = msg->dev_num << SDW_REG_SHIFT(CDNS_MCP_CMD_DEV_ADDR); 322 + data[0] |= 0x3 << SDW_REG_SHIFT(CDNS_MCP_CMD_COMMAND); 323 + data[1] = data[0]; 324 + 325 + data[0] |= SDW_SCP_ADDRPAGE1 << SDW_REG_SHIFT(CDNS_MCP_CMD_REG_ADDR_L); 326 + data[1] |= SDW_SCP_ADDRPAGE2 << SDW_REG_SHIFT(CDNS_MCP_CMD_REG_ADDR_L); 327 + 328 + data[0] |= msg->addr_page1; 329 + data[1] |= msg->addr_page2; 330 + 331 + base = CDNS_MCP_CMD_BASE; 332 + cdns_writel(cdns, base, data[0]); 333 + base += CDNS_MCP_CMD_WORD_LEN; 334 + cdns_writel(cdns, base, data[1]); 335 + 336 + time = wait_for_completion_timeout(&cdns->tx_complete, 337 + msecs_to_jiffies(CDNS_TX_TIMEOUT)); 338 + if (!time) { 339 + dev_err(cdns->dev, "SCP Msg transfer timed out\n"); 340 + msg->len = 0; 341 + return SDW_CMD_TIMEOUT; 342 + } 343 + 344 + /* check the response to the writes */ 345 + for (i = 0; i < 2; i++) { 346 + if (!(cdns->response_buf[i] & CDNS_MCP_RESP_ACK)) { 347 + no_ack = 1; 348 + dev_err(cdns->dev, "Program SCP Ack not received"); 349 + if (cdns->response_buf[i] & CDNS_MCP_RESP_NACK) { 350 + nack = 1; 351 + dev_err(cdns->dev, "Program SCP NACK received"); 352 + } 353 + } 354 + } 355 + 356 + /* For NACK, NO ack, don't return err if we are in Broadcast mode */ 357 + if (nack) { 358 + dev_err(cdns->dev, 359 + "SCP_addrpage NACKed for Slave %d", msg->dev_num); 360 + return SDW_CMD_FAIL; 361 + } else if (no_ack) { 362 + dev_dbg(cdns->dev, 363 + "SCP_addrpage ignored for Slave %d", msg->dev_num); 364 + return SDW_CMD_IGNORED; 365 + } 366 + 367 + return SDW_CMD_OK; 368 + } 369 + 370 + static int cdns_prep_msg(struct sdw_cdns *cdns, struct
sdw_msg *msg, int *cmd) 371 + { 372 + int ret; 373 + 374 + if (msg->page) { 375 + ret = cdns_program_scp_addr(cdns, msg); 376 + if (ret) { 377 + msg->len = 0; 378 + return ret; 379 + } 380 + } 381 + 382 + switch (msg->flags) { 383 + case SDW_MSG_FLAG_READ: 384 + *cmd = CDNS_MCP_CMD_READ; 385 + break; 386 + 387 + case SDW_MSG_FLAG_WRITE: 388 + *cmd = CDNS_MCP_CMD_WRITE; 389 + break; 390 + 391 + default: 392 + dev_err(cdns->dev, "Invalid msg cmd: %d\n", msg->flags); 393 + return -EINVAL; 394 + } 395 + 396 + return 0; 397 + } 398 + 399 + static enum sdw_command_response 400 + cdns_xfer_msg(struct sdw_bus *bus, struct sdw_msg *msg) 401 + { 402 + struct sdw_cdns *cdns = bus_to_cdns(bus); 403 + int cmd = 0, ret, i; 404 + 405 + ret = cdns_prep_msg(cdns, msg, &cmd); 406 + if (ret) 407 + return SDW_CMD_FAIL_OTHER; 408 + 409 + for (i = 0; i < msg->len / CDNS_MCP_CMD_LEN; i++) { 410 + ret = _cdns_xfer_msg(cdns, msg, cmd, i * CDNS_MCP_CMD_LEN, 411 + CDNS_MCP_CMD_LEN, false); 412 + if (ret < 0) 413 + goto exit; 414 + } 415 + 416 + if (!(msg->len % CDNS_MCP_CMD_LEN)) 417 + goto exit; 418 + 419 + ret = _cdns_xfer_msg(cdns, msg, cmd, i * CDNS_MCP_CMD_LEN, 420 + msg->len % CDNS_MCP_CMD_LEN, false); 421 + 422 + exit: 423 + return ret; 424 + } 425 + 426 + static enum sdw_command_response 427 + cdns_xfer_msg_defer(struct sdw_bus *bus, 428 + struct sdw_msg *msg, struct sdw_defer *defer) 429 + { 430 + struct sdw_cdns *cdns = bus_to_cdns(bus); 431 + int cmd = 0, ret; 432 + 433 + /* for defer only 1 message is supported */ 434 + if (msg->len > 1) 435 + return -ENOTSUPP; 436 + 437 + ret = cdns_prep_msg(cdns, msg, &cmd); 438 + if (ret) 439 + return SDW_CMD_FAIL_OTHER; 440 + 441 + cdns->defer = defer; 442 + cdns->defer->length = msg->len; 443 + 444 + return _cdns_xfer_msg(cdns, msg, cmd, 0, msg->len, true); 445 + } 446 + 447 + static enum sdw_command_response 448 + cdns_reset_page_addr(struct sdw_bus *bus, unsigned int dev_num) 449 + { 450 + struct sdw_cdns *cdns = bus_to_cdns(bus); 451 + 
struct sdw_msg msg; 452 + 453 + /* Create dummy message with valid device number */ 454 + memset(&msg, 0, sizeof(msg)); 455 + msg.dev_num = dev_num; 456 + 457 + return cdns_program_scp_addr(cdns, &msg); 458 + } 459 + 460 + /* 461 + * IRQ handling 462 + */ 463 + 464 + static void cdns_read_response(struct sdw_cdns *cdns) 465 + { 466 + u32 num_resp, cmd_base; 467 + int i; 468 + 469 + num_resp = cdns_readl(cdns, CDNS_MCP_FIFOSTAT); 470 + num_resp &= CDNS_MCP_RX_FIFO_AVAIL; 471 + 472 + cmd_base = CDNS_MCP_CMD_BASE; 473 + 474 + for (i = 0; i < num_resp; i++) { 475 + cdns->response_buf[i] = cdns_readl(cdns, cmd_base); 476 + cmd_base += CDNS_MCP_CMD_WORD_LEN; 477 + } 478 + } 479 + 480 + static int cdns_update_slave_status(struct sdw_cdns *cdns, 481 + u32 slave0, u32 slave1) 482 + { 483 + enum sdw_slave_status status[SDW_MAX_DEVICES + 1]; 484 + bool is_slave = false; 485 + u64 slave, mask; 486 + int i, set_status; 487 + 488 + /* combine the two status */ 489 + slave = ((u64)slave1 << 32) | slave0; 490 + memset(status, 0, sizeof(status)); 491 + 492 + for (i = 0; i <= SDW_MAX_DEVICES; i++) { 493 + mask = (slave >> (i * CDNS_MCP_SLAVE_STATUS_NUM)) & 494 + CDNS_MCP_SLAVE_STATUS_BITS; 495 + if (!mask) 496 + continue; 497 + 498 + is_slave = true; 499 + set_status = 0; 500 + 501 + if (mask & CDNS_MCP_SLAVE_INTSTAT_RESERVED) { 502 + status[i] = SDW_SLAVE_RESERVED; 503 + set_status++; 504 + } 505 + 506 + if (mask & CDNS_MCP_SLAVE_INTSTAT_ATTACHED) { 507 + status[i] = SDW_SLAVE_ATTACHED; 508 + set_status++; 509 + } 510 + 511 + if (mask & CDNS_MCP_SLAVE_INTSTAT_ALERT) { 512 + status[i] = SDW_SLAVE_ALERT; 513 + set_status++; 514 + } 515 + 516 + if (mask & CDNS_MCP_SLAVE_INTSTAT_NPRESENT) { 517 + status[i] = SDW_SLAVE_UNATTACHED; 518 + set_status++; 519 + } 520 + 521 + /* first check if Slave reported multiple status */ 522 + if (set_status > 1) { 523 + dev_warn(cdns->dev, 524 + "Slave reported multiple Status: %d\n", 525 + status[i]); 526 + /* 527 + * TODO: we need to reread the 
status here by 528 + * issuing a PING cmd 529 + */ 530 + } 531 + } 532 + 533 + if (is_slave) 534 + return sdw_handle_slave_status(&cdns->bus, status); 535 + 536 + return 0; 537 + } 538 + 539 + /** 540 + * sdw_cdns_irq() - Cadence interrupt handler 541 + * @irq: irq number 542 + * @dev_id: irq context 543 + */ 544 + irqreturn_t sdw_cdns_irq(int irq, void *dev_id) 545 + { 546 + struct sdw_cdns *cdns = dev_id; 547 + u32 int_status; 548 + int ret = IRQ_HANDLED; 549 + 550 + /* Check if the link is up */ 551 + if (!cdns->link_up) 552 + return IRQ_NONE; 553 + 554 + int_status = cdns_readl(cdns, CDNS_MCP_INTSTAT); 555 + 556 + if (!(int_status & CDNS_MCP_INT_IRQ)) 557 + return IRQ_NONE; 558 + 559 + if (int_status & CDNS_MCP_INT_RX_WL) { 560 + cdns_read_response(cdns); 561 + 562 + if (cdns->defer) { 563 + cdns_fill_msg_resp(cdns, cdns->defer->msg, 564 + cdns->defer->length, 0); 565 + complete(&cdns->defer->complete); 566 + cdns->defer = NULL; 567 + } else 568 + complete(&cdns->tx_complete); 569 + } 570 + 571 + if (int_status & CDNS_MCP_INT_CTRL_CLASH) { 572 + 573 + /* Slave is driving bit slot during control word */ 574 + dev_err_ratelimited(cdns->dev, "Bus clash for control word\n"); 575 + int_status |= CDNS_MCP_INT_CTRL_CLASH; 576 + } 577 + 578 + if (int_status & CDNS_MCP_INT_DATA_CLASH) { 579 + /* 580 + * Multiple slaves trying to drive bit slot, or issue with 581 + * ownership of data bits or Slave gone bonkers 582 + */ 583 + dev_err_ratelimited(cdns->dev, "Bus clash for data word\n"); 584 + int_status |= CDNS_MCP_INT_DATA_CLASH; 585 + } 586 + 587 + if (int_status & CDNS_MCP_INT_SLAVE_MASK) { 588 + /* Mask the Slave interrupt and wake thread */ 589 + cdns_updatel(cdns, CDNS_MCP_INTMASK, 590 + CDNS_MCP_INT_SLAVE_MASK, 0); 591 + 592 + int_status &= ~CDNS_MCP_INT_SLAVE_MASK; 593 + ret = IRQ_WAKE_THREAD; 594 + } 595 + 596 + cdns_writel(cdns, CDNS_MCP_INTSTAT, int_status); 597 + return ret; 598 + } 599 + EXPORT_SYMBOL(sdw_cdns_irq); 600 + 601 + /** 602 + * sdw_cdns_thread() - 
Cadence irq thread handler 603 + * @irq: irq number 604 + * @dev_id: irq context 605 + */ 606 + irqreturn_t sdw_cdns_thread(int irq, void *dev_id) 607 + { 608 + struct sdw_cdns *cdns = dev_id; 609 + u32 slave0, slave1; 610 + 611 + dev_dbg(cdns->dev, "Slave status change\n"); 612 + 613 + slave0 = cdns_readl(cdns, CDNS_MCP_SLAVE_INTSTAT0); 614 + slave1 = cdns_readl(cdns, CDNS_MCP_SLAVE_INTSTAT1); 615 + 616 + cdns_update_slave_status(cdns, slave0, slave1); 617 + cdns_writel(cdns, CDNS_MCP_SLAVE_INTSTAT0, slave0); 618 + cdns_writel(cdns, CDNS_MCP_SLAVE_INTSTAT1, slave1); 619 + 620 + /* clear and unmask Slave interrupt now */ 621 + cdns_writel(cdns, CDNS_MCP_INTSTAT, CDNS_MCP_INT_SLAVE_MASK); 622 + cdns_updatel(cdns, CDNS_MCP_INTMASK, 623 + CDNS_MCP_INT_SLAVE_MASK, CDNS_MCP_INT_SLAVE_MASK); 624 + 625 + return IRQ_HANDLED; 626 + } 627 + EXPORT_SYMBOL(sdw_cdns_thread); 628 + 629 + /* 630 + * init routines 631 + */ 632 + static int _cdns_enable_interrupt(struct sdw_cdns *cdns) 633 + { 634 + u32 mask; 635 + 636 + cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK0, 637 + CDNS_MCP_SLAVE_INTMASK0_MASK); 638 + cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK1, 639 + CDNS_MCP_SLAVE_INTMASK1_MASK); 640 + 641 + mask = CDNS_MCP_INT_SLAVE_RSVD | CDNS_MCP_INT_SLAVE_ALERT | 642 + CDNS_MCP_INT_SLAVE_ATTACH | CDNS_MCP_INT_SLAVE_NATTACH | 643 + CDNS_MCP_INT_CTRL_CLASH | CDNS_MCP_INT_DATA_CLASH | 644 + CDNS_MCP_INT_RX_WL | CDNS_MCP_INT_IRQ | CDNS_MCP_INT_DPINT; 645 + 646 + cdns_writel(cdns, CDNS_MCP_INTMASK, mask); 647 + 648 + return 0; 649 + } 650 + 651 + /** 652 + * sdw_cdns_enable_interrupt() - Enable SDW interrupts and update config 653 + * @cdns: Cadence instance 654 + */ 655 + int sdw_cdns_enable_interrupt(struct sdw_cdns *cdns) 656 + { 657 + int ret; 658 + 659 + _cdns_enable_interrupt(cdns); 660 + ret = cdns_clear_bit(cdns, CDNS_MCP_CONFIG_UPDATE, 661 + CDNS_MCP_CONFIG_UPDATE_BIT); 662 + if (ret < 0) 663 + dev_err(cdns->dev, "Config update timed out"); 664 + 665 + return ret; 666 + } 667 +
EXPORT_SYMBOL(sdw_cdns_enable_interrupt); 668 + 669 + /** 670 + * sdw_cdns_init() - Cadence initialization 671 + * @cdns: Cadence instance 672 + */ 673 + int sdw_cdns_init(struct sdw_cdns *cdns) 674 + { 675 + u32 val; 676 + int ret; 677 + 678 + /* Exit clock stop */ 679 + ret = cdns_clear_bit(cdns, CDNS_MCP_CONTROL, 680 + CDNS_MCP_CONTROL_CLK_STOP_CLR); 681 + if (ret < 0) { 682 + dev_err(cdns->dev, "Couldn't exit from clock stop\n"); 683 + return ret; 684 + } 685 + 686 + /* Set clock divider */ 687 + val = cdns_readl(cdns, CDNS_MCP_CLK_CTRL0); 688 + val |= CDNS_DEFAULT_CLK_DIVIDER; 689 + cdns_writel(cdns, CDNS_MCP_CLK_CTRL0, val); 690 + 691 + /* Set the default frame shape */ 692 + cdns_writel(cdns, CDNS_MCP_FRAME_SHAPE_INIT, CDNS_DEFAULT_FRAME_SHAPE); 693 + 694 + /* Set SSP interval to default value */ 695 + cdns_writel(cdns, CDNS_MCP_SSP_CTRL0, CDNS_DEFAULT_SSP_INTERVAL); 696 + cdns_writel(cdns, CDNS_MCP_SSP_CTRL1, CDNS_DEFAULT_SSP_INTERVAL); 697 + 698 + /* Set cmd accept mode */ 699 + cdns_updatel(cdns, CDNS_MCP_CONTROL, CDNS_MCP_CONTROL_CMD_ACCEPT, 700 + CDNS_MCP_CONTROL_CMD_ACCEPT); 701 + 702 + /* Configure mcp config */ 703 + val = cdns_readl(cdns, CDNS_MCP_CONFIG); 704 + 705 + /* Set Max cmd retry to 15 */ 706 + val |= CDNS_MCP_CONFIG_MCMD_RETRY; 707 + 708 + /* Set frame delay between PREQ and ping frame to 15 frames */ 709 + val |= 0xF << SDW_REG_SHIFT(CDNS_MCP_CONFIG_MPREQ_DELAY); 710 + 711 + /* Disable auto bus release */ 712 + val &= ~CDNS_MCP_CONFIG_BUS_REL; 713 + 714 + /* Disable sniffer mode */ 715 + val &= ~CDNS_MCP_CONFIG_SNIFFER; 716 + 717 + /* Set cmd mode for Tx and Rx cmds */ 718 + val &= ~CDNS_MCP_CONFIG_CMD; 719 + 720 + /* Set operation to normal */ 721 + val &= ~CDNS_MCP_CONFIG_OP; 722 + val |= CDNS_MCP_CONFIG_OP_NORMAL; 723 + 724 + cdns_writel(cdns, CDNS_MCP_CONFIG, val); 725 + 726 + return 0; 727 + } 728 + EXPORT_SYMBOL(sdw_cdns_init); 729 + 730 + struct sdw_master_ops sdw_cdns_master_ops = { 731 + .read_prop = sdw_master_read_prop, 732 + 
.xfer_msg = cdns_xfer_msg, 733 + .xfer_msg_defer = cdns_xfer_msg_defer, 734 + .reset_page_addr = cdns_reset_page_addr, 735 + }; 736 + EXPORT_SYMBOL(sdw_cdns_master_ops); 737 + 738 + /** 739 + * sdw_cdns_probe() - Cadence probe routine 740 + * @cdns: Cadence instance 741 + */ 742 + int sdw_cdns_probe(struct sdw_cdns *cdns) 743 + { 744 + init_completion(&cdns->tx_complete); 745 + 746 + return 0; 747 + } 748 + EXPORT_SYMBOL(sdw_cdns_probe); 749 + 750 + MODULE_LICENSE("Dual BSD/GPL"); 751 + MODULE_DESCRIPTION("Cadence Soundwire Library");
+48
drivers/soundwire/cadence_master.h
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + #ifndef __SDW_CADENCE_H 5 + #define __SDW_CADENCE_H 6 + 7 + /** 8 + * struct sdw_cdns - Cadence driver context 9 + * @dev: Linux device 10 + * @bus: Bus handle 11 + * @instance: instance number 12 + * @response_buf: SoundWire response buffer 13 + * @tx_complete: Tx completion 14 + * @defer: Defer pointer 15 + * @registers: Cadence registers 16 + * @link_up: Link status 17 + * @msg_count: Messages sent on bus 18 + */ 19 + struct sdw_cdns { 20 + struct device *dev; 21 + struct sdw_bus bus; 22 + unsigned int instance; 23 + 24 + u32 response_buf[0x80]; 25 + struct completion tx_complete; 26 + struct sdw_defer *defer; 27 + 28 + void __iomem *registers; 29 + 30 + bool link_up; 31 + unsigned int msg_count; 32 + }; 33 + 34 + #define bus_to_cdns(_bus) container_of(_bus, struct sdw_cdns, bus) 35 + 36 + /* Exported symbols */ 37 + 38 + int sdw_cdns_probe(struct sdw_cdns *cdns); 39 + extern struct sdw_master_ops sdw_cdns_master_ops; 40 + 41 + irqreturn_t sdw_cdns_irq(int irq, void *dev_id); 42 + irqreturn_t sdw_cdns_thread(int irq, void *dev_id); 43 + 44 + int sdw_cdns_init(struct sdw_cdns *cdns); 45 + int sdw_cdns_enable_interrupt(struct sdw_cdns *cdns); 46 + 47 + 48 + #endif /* __SDW_CADENCE_H */
+345
drivers/soundwire/intel.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + /* 5 + * Soundwire Intel Master Driver 6 + */ 7 + 8 + #include <linux/acpi.h> 9 + #include <linux/delay.h> 10 + #include <linux/interrupt.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/soundwire/sdw_registers.h> 13 + #include <linux/soundwire/sdw.h> 14 + #include <linux/soundwire/sdw_intel.h> 15 + #include "cadence_master.h" 16 + #include "intel.h" 17 + 18 + /* Intel SHIM Registers Definition */ 19 + #define SDW_SHIM_LCAP 0x0 20 + #define SDW_SHIM_LCTL 0x4 21 + #define SDW_SHIM_IPPTR 0x8 22 + #define SDW_SHIM_SYNC 0xC 23 + 24 + #define SDW_SHIM_CTLSCAP(x) (0x010 + 0x60 * x) 25 + #define SDW_SHIM_CTLS0CM(x) (0x012 + 0x60 * x) 26 + #define SDW_SHIM_CTLS1CM(x) (0x014 + 0x60 * x) 27 + #define SDW_SHIM_CTLS2CM(x) (0x016 + 0x60 * x) 28 + #define SDW_SHIM_CTLS3CM(x) (0x018 + 0x60 * x) 29 + #define SDW_SHIM_PCMSCAP(x) (0x020 + 0x60 * x) 30 + 31 + #define SDW_SHIM_PCMSYCHM(x, y) (0x022 + (0x60 * x) + (0x2 * y)) 32 + #define SDW_SHIM_PCMSYCHC(x, y) (0x042 + (0x60 * x) + (0x2 * y)) 33 + #define SDW_SHIM_PDMSCAP(x) (0x062 + 0x60 * x) 34 + #define SDW_SHIM_IOCTL(x) (0x06C + 0x60 * x) 35 + #define SDW_SHIM_CTMCTL(x) (0x06E + 0x60 * x) 36 + 37 + #define SDW_SHIM_WAKEEN 0x190 38 + #define SDW_SHIM_WAKESTS 0x192 39 + 40 + #define SDW_SHIM_LCTL_SPA BIT(0) 41 + #define SDW_SHIM_LCTL_CPA BIT(8) 42 + 43 + #define SDW_SHIM_SYNC_SYNCPRD_VAL 0x176F 44 + #define SDW_SHIM_SYNC_SYNCPRD GENMASK(14, 0) 45 + #define SDW_SHIM_SYNC_SYNCCPU BIT(15) 46 + #define SDW_SHIM_SYNC_CMDSYNC_MASK GENMASK(19, 16) 47 + #define SDW_SHIM_SYNC_CMDSYNC BIT(16) 48 + #define SDW_SHIM_SYNC_SYNCGO BIT(24) 49 + 50 + #define SDW_SHIM_PCMSCAP_ISS GENMASK(3, 0) 51 + #define SDW_SHIM_PCMSCAP_OSS GENMASK(7, 4) 52 + #define SDW_SHIM_PCMSCAP_BSS GENMASK(12, 8) 53 + 54 + #define SDW_SHIM_PCMSYCM_LCHN GENMASK(3, 0) 55 + #define SDW_SHIM_PCMSYCM_HCHN GENMASK(7, 4) 56 + #define 
SDW_SHIM_PCMSYCM_STREAM GENMASK(13, 8) 57 + #define SDW_SHIM_PCMSYCM_DIR BIT(15) 58 + 59 + #define SDW_SHIM_PDMSCAP_ISS GENMASK(3, 0) 60 + #define SDW_SHIM_PDMSCAP_OSS GENMASK(7, 4) 61 + #define SDW_SHIM_PDMSCAP_BSS GENMASK(12, 8) 62 + #define SDW_SHIM_PDMSCAP_CPSS GENMASK(15, 13) 63 + 64 + #define SDW_SHIM_IOCTL_MIF BIT(0) 65 + #define SDW_SHIM_IOCTL_CO BIT(1) 66 + #define SDW_SHIM_IOCTL_COE BIT(2) 67 + #define SDW_SHIM_IOCTL_DO BIT(3) 68 + #define SDW_SHIM_IOCTL_DOE BIT(4) 69 + #define SDW_SHIM_IOCTL_BKE BIT(5) 70 + #define SDW_SHIM_IOCTL_WPDD BIT(6) 71 + #define SDW_SHIM_IOCTL_CIBD BIT(8) 72 + #define SDW_SHIM_IOCTL_DIBD BIT(9) 73 + 74 + #define SDW_SHIM_CTMCTL_DACTQE BIT(0) 75 + #define SDW_SHIM_CTMCTL_DODS BIT(1) 76 + #define SDW_SHIM_CTMCTL_DOAIS GENMASK(4, 3) 77 + 78 + #define SDW_SHIM_WAKEEN_ENABLE BIT(0) 79 + #define SDW_SHIM_WAKESTS_STATUS BIT(0) 80 + 81 + /* Intel ALH Register definitions */ 82 + #define SDW_ALH_STRMZCFG(x) (0x000 + (0x4 * x)) 83 + 84 + #define SDW_ALH_STRMZCFG_DMAT_VAL 0x3 85 + #define SDW_ALH_STRMZCFG_DMAT GENMASK(7, 0) 86 + #define SDW_ALH_STRMZCFG_CHN GENMASK(19, 16) 87 + 88 + struct sdw_intel { 89 + struct sdw_cdns cdns; 90 + int instance; 91 + struct sdw_intel_link_res *res; 92 + }; 93 + 94 + #define cdns_to_intel(_cdns) container_of(_cdns, struct sdw_intel, cdns) 95 + 96 + /* 97 + * Read, write helpers for HW registers 98 + */ 99 + static inline int intel_readl(void __iomem *base, int offset) 100 + { 101 + return readl(base + offset); 102 + } 103 + 104 + static inline void intel_writel(void __iomem *base, int offset, int value) 105 + { 106 + writel(value, base + offset); 107 + } 108 + 109 + static inline u16 intel_readw(void __iomem *base, int offset) 110 + { 111 + return readw(base + offset); 112 + } 113 + 114 + static inline void intel_writew(void __iomem *base, int offset, u16 value) 115 + { 116 + writew(value, base + offset); 117 + } 118 + 119 + static int intel_clear_bit(void __iomem *base, int offset, u32 value, u32 mask) 
120 + { 121 + int timeout = 10; 122 + u32 reg_read; 123 + 124 + writel(value, base + offset); 125 + do { 126 + reg_read = readl(base + offset); 127 + if (!(reg_read & mask)) 128 + return 0; 129 + 130 + timeout--; 131 + udelay(50); 132 + } while (timeout != 0); 133 + 134 + return -EAGAIN; 135 + } 136 + 137 + static int intel_set_bit(void __iomem *base, int offset, u32 value, u32 mask) 138 + { 139 + int timeout = 10; 140 + u32 reg_read; 141 + 142 + writel(value, base + offset); 143 + do { 144 + reg_read = readl(base + offset); 145 + if (reg_read & mask) 146 + return 0; 147 + 148 + timeout--; 149 + udelay(50); 150 + } while (timeout != 0); 151 + 152 + return -EAGAIN; 153 + } 154 + 155 + /* 156 + * shim ops 157 + */ 158 + 159 + static int intel_link_power_up(struct sdw_intel *sdw) 160 + { 161 + unsigned int link_id = sdw->instance; 162 + void __iomem *shim = sdw->res->shim; 163 + int spa_mask, cpa_mask; 164 + int link_control, ret; 165 + 166 + /* Link power up sequence */ 167 + link_control = intel_readl(shim, SDW_SHIM_LCTL); 168 + spa_mask = (SDW_SHIM_LCTL_SPA << link_id); 169 + cpa_mask = (SDW_SHIM_LCTL_CPA << link_id); 170 + link_control |= spa_mask; 171 + 172 + ret = intel_set_bit(shim, SDW_SHIM_LCTL, link_control, cpa_mask); 173 + if (ret < 0) 174 + return ret; 175 + 176 + sdw->cdns.link_up = true; 177 + return 0; 178 + } 179 + 180 + static int intel_shim_init(struct sdw_intel *sdw) 181 + { 182 + void __iomem *shim = sdw->res->shim; 183 + unsigned int link_id = sdw->instance; 184 + int sync_reg, ret; 185 + u16 ioctl = 0, act = 0; 186 + 187 + /* Initialize Shim */ 188 + ioctl |= SDW_SHIM_IOCTL_BKE; 189 + intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl); 190 + 191 + ioctl |= SDW_SHIM_IOCTL_WPDD; 192 + intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl); 193 + 194 + ioctl |= SDW_SHIM_IOCTL_DO; 195 + intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl); 196 + 197 + ioctl |= SDW_SHIM_IOCTL_DOE; 198 + intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl); 199 + 200 + /* 
Switch to MIP from Glue logic */ 201 + ioctl = intel_readw(shim, SDW_SHIM_IOCTL(link_id)); 202 + 203 + ioctl &= ~(SDW_SHIM_IOCTL_DOE); 204 + intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl); 205 + 206 + ioctl &= ~(SDW_SHIM_IOCTL_DO); 207 + intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl); 208 + 209 + ioctl |= (SDW_SHIM_IOCTL_MIF); 210 + intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl); 211 + 212 + ioctl &= ~(SDW_SHIM_IOCTL_BKE); 213 + ioctl &= ~(SDW_SHIM_IOCTL_COE); 214 + 215 + intel_writew(shim, SDW_SHIM_IOCTL(link_id), ioctl); 216 + 217 + act |= 0x1 << SDW_REG_SHIFT(SDW_SHIM_CTMCTL_DOAIS); 218 + act |= SDW_SHIM_CTMCTL_DACTQE; 219 + act |= SDW_SHIM_CTMCTL_DODS; 220 + intel_writew(shim, SDW_SHIM_CTMCTL(link_id), act); 221 + 222 + /* Now set SyncPRD period */ 223 + sync_reg = intel_readl(shim, SDW_SHIM_SYNC); 224 + sync_reg |= (SDW_SHIM_SYNC_SYNCPRD_VAL << 225 + SDW_REG_SHIFT(SDW_SHIM_SYNC_SYNCPRD)); 226 + 227 + /* Set SyncCPU bit */ 228 + sync_reg |= SDW_SHIM_SYNC_SYNCCPU; 229 + ret = intel_clear_bit(shim, SDW_SHIM_SYNC, sync_reg, 230 + SDW_SHIM_SYNC_SYNCCPU); 231 + if (ret < 0) 232 + dev_err(sdw->cdns.dev, "Failed to set sync period: %d", ret); 233 + 234 + return ret; 235 + } 236 + 237 + static int intel_prop_read(struct sdw_bus *bus) 238 + { 239 + /* Initialize with default handler to read all DisCo properties */ 240 + sdw_master_read_prop(bus); 241 + 242 + /* BIOS is not giving some values correctly. 
So, let's override them */ 243 + bus->prop.num_freq = 1; 244 + bus->prop.freq = devm_kcalloc(bus->dev, sizeof(*bus->prop.freq), 245 + bus->prop.num_freq, GFP_KERNEL); 246 + if (!bus->prop.freq) 247 + return -ENOMEM; 248 + 249 + bus->prop.freq[0] = bus->prop.max_freq; 250 + bus->prop.err_threshold = 5; 251 + 252 + return 0; 253 + } 254 + 255 + /* 256 + * probe and init 257 + */ 258 + static int intel_probe(struct platform_device *pdev) 259 + { 260 + struct sdw_intel *sdw; 261 + int ret; 262 + 263 + sdw = devm_kzalloc(&pdev->dev, sizeof(*sdw), GFP_KERNEL); 264 + if (!sdw) 265 + return -ENOMEM; 266 + 267 + sdw->instance = pdev->id; 268 + sdw->res = dev_get_platdata(&pdev->dev); 269 + sdw->cdns.dev = &pdev->dev; 270 + sdw->cdns.registers = sdw->res->registers; 271 + sdw->cdns.instance = sdw->instance; 272 + sdw->cdns.msg_count = 0; 273 + sdw->cdns.bus.dev = &pdev->dev; 274 + sdw->cdns.bus.link_id = pdev->id; 275 + 276 + sdw_cdns_probe(&sdw->cdns); 277 + 278 + /* Set property read ops */ 279 + sdw_cdns_master_ops.read_prop = intel_prop_read; 280 + sdw->cdns.bus.ops = &sdw_cdns_master_ops; 281 + 282 + platform_set_drvdata(pdev, sdw); 283 + 284 + ret = sdw_add_bus_master(&sdw->cdns.bus); 285 + if (ret) { 286 + dev_err(&pdev->dev, "sdw_add_bus_master failed: %d\n", ret); 287 + goto err_master_reg; 288 + } 289 + 290 + /* Initialize shim and controller */ 291 + intel_link_power_up(sdw); 292 + intel_shim_init(sdw); 293 + 294 + ret = sdw_cdns_init(&sdw->cdns); 295 + if (ret) 296 + goto err_init; 297 + 298 + ret = sdw_cdns_enable_interrupt(&sdw->cdns); 299 + if (ret) 300 + goto err_init; 301 + 302 + /* Acquire IRQ */ 303 + ret = request_threaded_irq(sdw->res->irq, sdw_cdns_irq, 304 + sdw_cdns_thread, IRQF_SHARED, KBUILD_MODNAME, 305 + &sdw->cdns); 306 + if (ret < 0) { 307 + dev_err(sdw->cdns.dev, "unable to grab IRQ %d, disabling device\n", 308 + sdw->res->irq); 309 + goto err_init; 310 + } 311 + 312 + return 0; 313 + 314 + err_init: 315 + sdw_delete_bus_master(&sdw->cdns.bus);
316 + err_master_reg: 317 + return ret; 318 + } 319 + 320 + static int intel_remove(struct platform_device *pdev) 321 + { 322 + struct sdw_intel *sdw; 323 + 324 + sdw = platform_get_drvdata(pdev); 325 + 326 + free_irq(sdw->res->irq, sdw); 327 + sdw_delete_bus_master(&sdw->cdns.bus); 328 + 329 + return 0; 330 + } 331 + 332 + static struct platform_driver sdw_intel_drv = { 333 + .probe = intel_probe, 334 + .remove = intel_remove, 335 + .driver = { 336 + .name = "int-sdw", 337 + 338 + }, 339 + }; 340 + 341 + module_platform_driver(sdw_intel_drv); 342 + 343 + MODULE_LICENSE("Dual BSD/GPL"); 344 + MODULE_ALIAS("platform:int-sdw"); 345 + MODULE_DESCRIPTION("Intel Soundwire Master Driver");
+23
drivers/soundwire/intel.h
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + #ifndef __SDW_INTEL_LOCAL_H 5 + #define __SDW_INTEL_LOCAL_H 6 + 7 + /** 8 + * struct sdw_intel_res - Soundwire link resources 9 + * @registers: Link IO registers base 10 + * @shim: Audio shim pointer 11 + * @alh: ALH (Audio Link Hub) pointer 12 + * @irq: Interrupt line 13 + * 14 + * This is set as pdata for each link instance. 15 + */ 16 + struct sdw_intel_link_res { 17 + void __iomem *registers; 18 + void __iomem *shim; 19 + void __iomem *alh; 20 + int irq; 21 + }; 22 + 23 + #endif /* __SDW_INTEL_LOCAL_H */
+198
drivers/soundwire/intel_init.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + /* 5 + * SDW Intel Init Routines 6 + * 7 + * Initializes and creates SDW devices based on ACPI and Hardware values 8 + */ 9 + 10 + #include <linux/acpi.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/soundwire/sdw_intel.h> 13 + #include "intel.h" 14 + 15 + #define SDW_MAX_LINKS 4 16 + #define SDW_SHIM_LCAP 0x0 17 + #define SDW_SHIM_BASE 0x2C000 18 + #define SDW_ALH_BASE 0x2C800 19 + #define SDW_LINK_BASE 0x30000 20 + #define SDW_LINK_SIZE 0x10000 21 + 22 + struct sdw_link_data { 23 + struct sdw_intel_link_res res; 24 + struct platform_device *pdev; 25 + }; 26 + 27 + struct sdw_intel_ctx { 28 + int count; 29 + struct sdw_link_data *links; 30 + }; 31 + 32 + static int sdw_intel_cleanup_pdev(struct sdw_intel_ctx *ctx) 33 + { 34 + struct sdw_link_data *link = ctx->links; 35 + int i; 36 + 37 + if (!link) 38 + return 0; 39 + 40 + for (i = 0; i < ctx->count; i++) { 41 + if (link->pdev) 42 + platform_device_unregister(link->pdev); 43 + link++; 44 + } 45 + 46 + kfree(ctx->links); 47 + ctx->links = NULL; 48 + 49 + return 0; 50 + } 51 + 52 + static struct sdw_intel_ctx 53 + *sdw_intel_add_controller(struct sdw_intel_res *res) 54 + { 55 + struct platform_device_info pdevinfo; 56 + struct platform_device *pdev; 57 + struct sdw_link_data *link; 58 + struct sdw_intel_ctx *ctx; 59 + struct acpi_device *adev; 60 + int ret, i; 61 + u8 count; 62 + u32 caps; 63 + 64 + if (acpi_bus_get_device(res->handle, &adev)) 65 + return NULL; 66 + 67 + /* Found controller, find links supported */ 68 + count = 0; 69 + ret = fwnode_property_read_u8_array(acpi_fwnode_handle(adev), 70 + "mipi-sdw-master-count", &count, 1); 71 + 72 + /* Don't fail on error, continue and use hw value */ 73 + if (ret) { 74 + dev_err(&adev->dev, 75 + "Failed to read mipi-sdw-master-count: %d\n", ret); 76 + count = SDW_MAX_LINKS; 77 + } 78 + 79 + /* Check SNDWLCAP.LCOUNT */ 80 + caps = 
ioread32(res->mmio_base + SDW_SHIM_BASE + SDW_SHIM_LCAP); 81 + 82 + /* Check HW supported vs property value and use min of two */ 83 + count = min_t(u8, caps, count); 84 + 85 + /* Check count is within bounds */ 86 + if (count > SDW_MAX_LINKS) { 87 + dev_err(&adev->dev, "Link count %d exceeds max %d\n", 88 + count, SDW_MAX_LINKS); 89 + return NULL; 90 + } 91 + 92 + dev_dbg(&adev->dev, "Creating %d SDW Link devices\n", count); 93 + 94 + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); 95 + if (!ctx) 96 + return NULL; 97 + 98 + ctx->count = count; 99 + ctx->links = kcalloc(ctx->count, sizeof(*ctx->links), GFP_KERNEL); 100 + if (!ctx->links) 101 + goto link_err; 102 + 103 + link = ctx->links; 104 + 105 + /* Create SDW Master devices */ 106 + for (i = 0; i < count; i++) { 107 + 108 + link->res.irq = res->irq; 109 + link->res.registers = res->mmio_base + SDW_LINK_BASE 110 + + (SDW_LINK_SIZE * i); 111 + link->res.shim = res->mmio_base + SDW_SHIM_BASE; 112 + link->res.alh = res->mmio_base + SDW_ALH_BASE; 113 + 114 + memset(&pdevinfo, 0, sizeof(pdevinfo)); 115 + 116 + pdevinfo.parent = res->parent; 117 + pdevinfo.name = "int-sdw"; 118 + pdevinfo.id = i; 119 + pdevinfo.fwnode = acpi_fwnode_handle(adev); 120 + pdevinfo.data = &link->res; 121 + pdevinfo.size_data = sizeof(link->res); 122 + 123 + pdev = platform_device_register_full(&pdevinfo); 124 + if (IS_ERR(pdev)) { 125 + dev_err(&adev->dev, 126 + "platform device creation failed: %ld\n", 127 + PTR_ERR(pdev)); 128 + goto pdev_err; 129 + } 130 + 131 + link->pdev = pdev; 132 + link++; 133 + } 134 + 135 + return ctx; 136 + 137 + pdev_err: 138 + sdw_intel_cleanup_pdev(ctx); 139 + link_err: 140 + kfree(ctx); 141 + return NULL; 142 + } 143 + 144 + static acpi_status sdw_intel_acpi_cb(acpi_handle handle, u32 level, 145 + void *cdata, void **return_value) 146 + { 147 + struct sdw_intel_res *res = cdata; 148 + struct acpi_device *adev; 149 + 150 + if (acpi_bus_get_device(handle, &adev)) { 151 + dev_err(&adev->dev, "Couldn't find ACPI 
handle\n"); 152 + return AE_NOT_FOUND; 153 + } 154 + 155 + res->handle = handle; 156 + return AE_OK; 157 + } 158 + 159 + /** 160 + * sdw_intel_init() - SoundWire Intel init routine 161 + * @parent_handle: ACPI parent handle 162 + * @res: resource data 163 + * 164 + * This scans the namespace and creates SoundWire link controller devices 165 + * based on the info queried. 166 + */ 167 + void *sdw_intel_init(acpi_handle *parent_handle, struct sdw_intel_res *res) 168 + { 169 + acpi_status status; 170 + 171 + status = acpi_walk_namespace(ACPI_TYPE_DEVICE, 172 + parent_handle, 1, 173 + sdw_intel_acpi_cb, 174 + NULL, res, NULL); 175 + if (ACPI_FAILURE(status)) 176 + return NULL; 177 + 178 + return sdw_intel_add_controller(res); 179 + } 180 + EXPORT_SYMBOL(sdw_intel_init); 181 + 182 + /** 183 + * sdw_intel_exit() - SoundWire Intel exit 184 + * @arg: callback context 185 + * 186 + * Delete the controller instances created and cleanup 187 + */ 188 + void sdw_intel_exit(void *arg) 189 + { 190 + struct sdw_intel_ctx *ctx = arg; 191 + 192 + sdw_intel_cleanup_pdev(ctx); 193 + kfree(ctx); 194 + } 195 + EXPORT_SYMBOL(sdw_intel_exit); 196 + 197 + MODULE_LICENSE("Dual BSD/GPL"); 198 + MODULE_DESCRIPTION("Intel Soundwire Init Library");
+401
drivers/soundwire/mipi_disco.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + /* 5 + * MIPI Discovery And Configuration (DisCo) Specification for SoundWire 6 + * specifies properties to be implemented for SoundWire Masters and Slaves. 7 + * The DisCo spec doesn't mandate these properties. However, the SDW bus cannot 8 + * work without knowing these values. 9 + * 10 + * The helper functions read the Master and Slave properties. Implementers 11 + * of Master or Slave drivers can use any of the below three mechanisms: 12 + * a) Use these APIs here as .read_prop() callback for Master and Slave 13 + * b) Implement their own methods and set those as .read_prop(), but invoke 14 + * APIs in this file for generic read and override the values with 15 + * platform-specific data 16 + * c) Implement their own methods which do not use anything provided 17 + * here 18 + */ 19 + 20 + #include <linux/device.h> 21 + #include <linux/property.h> 22 + #include <linux/mod_devicetable.h> 23 + #include <linux/soundwire/sdw.h> 24 + #include "bus.h" 25 + 26 + /** 27 + * sdw_master_read_prop() - Read Master properties 28 + * @bus: SDW bus instance 29 + */ 30 + int sdw_master_read_prop(struct sdw_bus *bus) 31 + { 32 + struct sdw_master_prop *prop = &bus->prop; 33 + struct fwnode_handle *link; 34 + char name[32]; 35 + int nval, i; 36 + 37 + device_property_read_u32(bus->dev, 38 + "mipi-sdw-sw-interface-revision", &prop->revision); 39 + 40 + /* Find master handle */ 41 + snprintf(name, sizeof(name), 42 + "mipi-sdw-master-%d-subproperties", bus->link_id); 43 + 44 + link = device_get_named_child_node(bus->dev, name); 45 + if (!link) { 46 + dev_err(bus->dev, "Master node %s not found\n", name); 47 + return -EIO; 48 + } 49 + 50 + if (fwnode_property_read_bool(link, 51 + "mipi-sdw-clock-stop-mode0-supported") == true) 52 + prop->clk_stop_mode = SDW_CLK_STOP_MODE0; 53 + 54 + if (fwnode_property_read_bool(link, 55 + "mipi-sdw-clock-stop-mode1-supported") == true) 56 +
prop->clk_stop_mode |= SDW_CLK_STOP_MODE1; 57 + 58 + fwnode_property_read_u32(link, 59 + "mipi-sdw-max-clock-frequency", &prop->max_freq); 60 + 61 + nval = fwnode_property_read_u32_array(link, 62 + "mipi-sdw-clock-frequencies-supported", NULL, 0); 63 + if (nval > 0) { 64 + 65 + prop->num_freq = nval; 66 + prop->freq = devm_kcalloc(bus->dev, prop->num_freq, 67 + sizeof(*prop->freq), GFP_KERNEL); 68 + if (!prop->freq) 69 + return -ENOMEM; 70 + 71 + fwnode_property_read_u32_array(link, 72 + "mipi-sdw-clock-frequencies-supported", 73 + prop->freq, prop->num_freq); 74 + } 75 + 76 + /* 77 + * Check the frequencies supported. If FW doesn't provide max 78 + * freq, then populate here by checking values. 79 + */ 80 + if (!prop->max_freq && prop->freq) { 81 + prop->max_freq = prop->freq[0]; 82 + for (i = 1; i < prop->num_freq; i++) { 83 + if (prop->freq[i] > prop->max_freq) 84 + prop->max_freq = prop->freq[i]; 85 + } 86 + } 87 + 88 + nval = fwnode_property_read_u32_array(link, 89 + "mipi-sdw-supported-clock-gears", NULL, 0); 90 + if (nval > 0) { 91 + 92 + prop->num_clk_gears = nval; 93 + prop->clk_gears = devm_kcalloc(bus->dev, prop->num_clk_gears, 94 + sizeof(*prop->clk_gears), GFP_KERNEL); 95 + if (!prop->clk_gears) 96 + return -ENOMEM; 97 + 98 + fwnode_property_read_u32_array(link, 99 + "mipi-sdw-supported-clock-gears", 100 + prop->clk_gears, prop->num_clk_gears); 101 + } 102 + 103 + fwnode_property_read_u32(link, "mipi-sdw-default-frame-rate", 104 + &prop->default_frame_rate); 105 + 106 + fwnode_property_read_u32(link, "mipi-sdw-default-frame-row-size", 107 + &prop->default_row); 108 + 109 + fwnode_property_read_u32(link, "mipi-sdw-default-frame-col-size", 110 + &prop->default_col); 111 + 112 + prop->dynamic_frame = fwnode_property_read_bool(link, 113 + "mipi-sdw-dynamic-frame-shape"); 114 + 115 + fwnode_property_read_u32(link, "mipi-sdw-command-error-threshold", 116 + &prop->err_threshold); 117 + 118 + return 0; 119 + } 120 + EXPORT_SYMBOL(sdw_master_read_prop); 121 + 
122 + static int sdw_slave_read_dp0(struct sdw_slave *slave, 123 + struct fwnode_handle *port, struct sdw_dp0_prop *dp0) 124 + { 125 + int nval; 126 + 127 + fwnode_property_read_u32(port, "mipi-sdw-port-max-wordlength", 128 + &dp0->max_word); 129 + 130 + fwnode_property_read_u32(port, "mipi-sdw-port-min-wordlength", 131 + &dp0->min_word); 132 + 133 + nval = fwnode_property_read_u32_array(port, 134 + "mipi-sdw-port-wordlength-configs", NULL, 0); 135 + if (nval > 0) { 136 + 137 + dp0->num_words = nval; 138 + dp0->words = devm_kcalloc(&slave->dev, 139 + dp0->num_words, sizeof(*dp0->words), 140 + GFP_KERNEL); 141 + if (!dp0->words) 142 + return -ENOMEM; 143 + 144 + fwnode_property_read_u32_array(port, 145 + "mipi-sdw-port-wordlength-configs", 146 + dp0->words, dp0->num_words); 147 + } 148 + 149 + dp0->flow_controlled = fwnode_property_read_bool( 150 + port, "mipi-sdw-bra-flow-controlled"); 151 + 152 + dp0->simple_ch_prep_sm = fwnode_property_read_bool( 153 + port, "mipi-sdw-simplified-channel-prepare-sm"); 154 + 155 + dp0->device_interrupts = fwnode_property_read_bool( 156 + port, "mipi-sdw-imp-def-dp0-interrupts-supported"); 157 + 158 + return 0; 159 + } 160 + 161 + static int sdw_slave_read_dpn(struct sdw_slave *slave, 162 + struct sdw_dpn_prop *dpn, int count, int ports, char *type) 163 + { 164 + struct fwnode_handle *node; 165 + u32 bit, i = 0; 166 + int nval; 167 + unsigned long addr; 168 + char name[40]; 169 + 170 + addr = ports; 171 + /* valid ports are 1 to 14 so apply mask */ 172 + addr &= GENMASK(14, 1); 173 + 174 + for_each_set_bit(bit, &addr, 32) { 175 + snprintf(name, sizeof(name), 176 + "mipi-sdw-dp-%d-%s-subproperties", bit, type); 177 + 178 + dpn[i].num = bit; 179 + 180 + node = device_get_named_child_node(&slave->dev, name); 181 + if (!node) { 182 + dev_err(&slave->dev, "%s dpN not found\n", name); 183 + return -EIO; 184 + } 185 + 186 + fwnode_property_read_u32(node, "mipi-sdw-port-max-wordlength", 187 + &dpn[i].max_word); 188 + 
fwnode_property_read_u32(node, "mipi-sdw-port-min-wordlength", 189 + &dpn[i].min_word); 190 + 191 + nval = fwnode_property_read_u32_array(node, 192 + "mipi-sdw-port-wordlength-configs", NULL, 0); 193 + if (nval > 0) { 194 + 195 + dpn[i].num_words = nval; 196 + dpn[i].words = devm_kcalloc(&slave->dev, 197 + dpn[i].num_words, 198 + sizeof(*dpn[i].words), GFP_KERNEL); 199 + if (!dpn[i].words) 200 + return -ENOMEM; 201 + 202 + fwnode_property_read_u32_array(node, 203 + "mipi-sdw-port-wordlength-configs", 204 + dpn[i].words, dpn[i].num_words); 205 + } 206 + 207 + fwnode_property_read_u32(node, "mipi-sdw-data-port-type", 208 + &dpn[i].type); 209 + 210 + fwnode_property_read_u32(node, 211 + "mipi-sdw-max-grouping-supported", 212 + &dpn[i].max_grouping); 213 + 214 + dpn[i].simple_ch_prep_sm = fwnode_property_read_bool(node, 215 + "mipi-sdw-simplified-channelprepare-sm"); 216 + 217 + fwnode_property_read_u32(node, 218 + "mipi-sdw-port-channelprepare-timeout", 219 + &dpn[i].ch_prep_timeout); 220 + 221 + fwnode_property_read_u32(node, 222 + "mipi-sdw-imp-def-dpn-interrupts-supported", 223 + &dpn[i].device_interrupts); 224 + 225 + fwnode_property_read_u32(node, "mipi-sdw-min-channel-number", 226 + &dpn[i].min_ch); 227 + 228 + fwnode_property_read_u32(node, "mipi-sdw-max-channel-number", 229 + &dpn[i].max_ch); 230 + 231 + nval = fwnode_property_read_u32_array(node, 232 + "mipi-sdw-channel-number-list", NULL, 0); 233 + if (nval > 0) { 234 + 235 + dpn[i].num_ch = nval; 236 + dpn[i].ch = devm_kcalloc(&slave->dev, dpn[i].num_ch, 237 + sizeof(*dpn[i].ch), GFP_KERNEL); 238 + if (!dpn[i].ch) 239 + return -ENOMEM; 240 + 241 + fwnode_property_read_u32_array(node, 242 + "mipi-sdw-channel-number-list", 243 + dpn[i].ch, dpn[i].num_ch); 244 + } 245 + 246 + nval = fwnode_property_read_u32_array(node, 247 + "mipi-sdw-channel-combination-list", NULL, 0); 248 + if (nval > 0) { 249 + 250 + dpn[i].num_ch_combinations = nval; 251 + dpn[i].ch_combinations = devm_kcalloc(&slave->dev, 252 + 
dpn[i].num_ch_combinations, 253 + sizeof(*dpn[i].ch_combinations), 254 + GFP_KERNEL); 255 + if (!dpn[i].ch_combinations) 256 + return -ENOMEM; 257 + 258 + fwnode_property_read_u32_array(node, 259 + "mipi-sdw-channel-combination-list", 260 + dpn[i].ch_combinations, 261 + dpn[i].num_ch_combinations); 262 + } 263 + 264 + fwnode_property_read_u32(node, 265 + "mipi-sdw-modes-supported", &dpn[i].modes); 266 + 267 + fwnode_property_read_u32(node, "mipi-sdw-max-async-buffer", 268 + &dpn[i].max_async_buffer); 269 + 270 + dpn[i].block_pack_mode = fwnode_property_read_bool(node, 271 + "mipi-sdw-block-packing-mode"); 272 + 273 + fwnode_property_read_u32(node, "mipi-sdw-port-encoding-type", 274 + &dpn[i].port_encoding); 275 + 276 + /* TODO: Read audio mode */ 277 + 278 + i++; 279 + } 280 + 281 + return 0; 282 + } 283 + 284 + /** 285 + * sdw_slave_read_prop() - Read Slave properties 286 + * @slave: SDW Slave 287 + */ 288 + int sdw_slave_read_prop(struct sdw_slave *slave) 289 + { 290 + struct sdw_slave_prop *prop = &slave->prop; 291 + struct device *dev = &slave->dev; 292 + struct fwnode_handle *port; 293 + int num_of_ports, nval, i, dp0 = 0; 294 + 295 + device_property_read_u32(dev, "mipi-sdw-sw-interface-revision", 296 + &prop->mipi_revision); 297 + 298 + prop->wake_capable = device_property_read_bool(dev, 299 + "mipi-sdw-wake-up-unavailable"); 300 + prop->wake_capable = !prop->wake_capable; 301 + 302 + prop->test_mode_capable = device_property_read_bool(dev, 303 + "mipi-sdw-test-mode-supported"); 304 + 305 + prop->clk_stop_mode1 = false; 306 + if (device_property_read_bool(dev, 307 + "mipi-sdw-clock-stop-mode1-supported")) 308 + prop->clk_stop_mode1 = true; 309 + 310 + prop->simple_clk_stop_capable = device_property_read_bool(dev, 311 + "mipi-sdw-simplified-clockstopprepare-sm-supported"); 312 + 313 + device_property_read_u32(dev, "mipi-sdw-clockstopprepare-timeout", 314 + &prop->clk_stop_timeout); 315 + 316 + device_property_read_u32(dev, 
"mipi-sdw-slave-channelprepare-timeout", 317 + &prop->ch_prep_timeout); 318 + 319 + device_property_read_u32(dev, 320 + "mipi-sdw-clockstopprepare-hard-reset-behavior", 321 + &prop->reset_behave); 322 + 323 + prop->high_PHY_capable = device_property_read_bool(dev, 324 + "mipi-sdw-highPHY-capable"); 325 + 326 + prop->paging_support = device_property_read_bool(dev, 327 + "mipi-sdw-paging-support"); 328 + 329 + prop->bank_delay_support = device_property_read_bool(dev, 330 + "mipi-sdw-bank-delay-support"); 331 + 332 + device_property_read_u32(dev, 333 + "mipi-sdw-port15-read-behavior", &prop->p15_behave); 334 + 335 + device_property_read_u32(dev, "mipi-sdw-master-count", 336 + &prop->master_count); 337 + 338 + device_property_read_u32(dev, "mipi-sdw-source-port-list", 339 + &prop->source_ports); 340 + 341 + device_property_read_u32(dev, "mipi-sdw-sink-port-list", 342 + &prop->sink_ports); 343 + 344 + /* Read dp0 properties */ 345 + port = device_get_named_child_node(dev, "mipi-sdw-dp-0-subproperties"); 346 + if (!port) { 347 + dev_dbg(dev, "DP0 node not found!!\n"); 348 + } else { 349 + 350 + prop->dp0_prop = devm_kzalloc(&slave->dev, 351 + sizeof(*prop->dp0_prop), GFP_KERNEL); 352 + if (!prop->dp0_prop) 353 + return -ENOMEM; 354 + 355 + sdw_slave_read_dp0(slave, port, prop->dp0_prop); 356 + dp0 = 1; 357 + } 358 + 359 + /* 360 + * Based on each DPn port, get source and sink dpn properties. 361 + * Also, some ports can operate as both source or sink. 
362 + */ 363 + 364 + /* Allocate memory for set bits in port lists */ 365 + nval = hweight32(prop->source_ports); 366 + prop->src_dpn_prop = devm_kcalloc(&slave->dev, nval, 367 + sizeof(*prop->src_dpn_prop), GFP_KERNEL); 368 + if (!prop->src_dpn_prop) 369 + return -ENOMEM; 370 + 371 + /* Read dpn properties for source port(s) */ 372 + sdw_slave_read_dpn(slave, prop->src_dpn_prop, nval, 373 + prop->source_ports, "source"); 374 + 375 + nval = hweight32(prop->sink_ports); 376 + prop->sink_dpn_prop = devm_kcalloc(&slave->dev, nval, 377 + sizeof(*prop->sink_dpn_prop), GFP_KERNEL); 378 + if (!prop->sink_dpn_prop) 379 + return -ENOMEM; 380 + 381 + /* Read dpn properties for sink port(s) */ 382 + sdw_slave_read_dpn(slave, prop->sink_dpn_prop, nval, 383 + prop->sink_ports, "sink"); 384 + 385 + /* some ports are bidirectional so check total ports by ORing */ 386 + nval = prop->source_ports | prop->sink_ports; 387 + num_of_ports = hweight32(nval) + dp0; /* add DP0 */ 388 + 389 + /* Allocate port_ready based on num_of_ports */ 390 + slave->port_ready = devm_kcalloc(&slave->dev, num_of_ports, 391 + sizeof(*slave->port_ready), GFP_KERNEL); 392 + if (!slave->port_ready) 393 + return -ENOMEM; 394 + 395 + /* Initialize completion */ 396 + for (i = 0; i < num_of_ports; i++) 397 + init_completion(&slave->port_ready[i]); 398 + 399 + return 0; 400 + } 401 + EXPORT_SYMBOL(sdw_slave_read_prop);
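The property helpers above use a two-pass pattern (query the array length with a NULL buffer, allocate, then read) and fall back to scanning the supported-frequency list when firmware omits `mipi-sdw-max-clock-frequency`. A minimal userspace sketch of that fallback, with the hypothetical name `sdw_derive_max_freq` (not a kernel API), illustrates the arithmetic:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical userspace sketch of the fallback in sdw_master_read_prop():
 * if firmware did not report a max frequency, derive it by scanning the
 * mipi-sdw-clock-frequencies-supported list that was read earlier. */
uint32_t sdw_derive_max_freq(uint32_t max_freq,
			     const uint32_t *freq, size_t num_freq)
{
	size_t i;

	/* Firmware already provided a value, or there is no list to scan */
	if (max_freq || !freq || num_freq == 0)
		return max_freq;

	max_freq = freq[0];
	for (i = 1; i < num_freq; i++)
		if (freq[i] > max_freq)
			max_freq = freq[i];
	return max_freq;
}
```

The kernel code does the same scan starting at `prop->freq[0]`, only when `prop->max_freq` is still zero after the property reads.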
+114
drivers/soundwire/slave.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + #include <linux/acpi.h> 5 + #include <linux/soundwire/sdw.h> 6 + #include <linux/soundwire/sdw_type.h> 7 + #include "bus.h" 8 + 9 + static void sdw_slave_release(struct device *dev) 10 + { 11 + struct sdw_slave *slave = dev_to_sdw_dev(dev); 12 + 13 + kfree(slave); 14 + } 15 + 16 + static int sdw_slave_add(struct sdw_bus *bus, 17 + struct sdw_slave_id *id, struct fwnode_handle *fwnode) 18 + { 19 + struct sdw_slave *slave; 20 + int ret; 21 + 22 + slave = kzalloc(sizeof(*slave), GFP_KERNEL); 23 + if (!slave) 24 + return -ENOMEM; 25 + 26 + /* Initialize data structure */ 27 + memcpy(&slave->id, id, sizeof(*id)); 28 + slave->dev.parent = bus->dev; 29 + slave->dev.fwnode = fwnode; 30 + 31 + /* name shall be sdw:link:mfg:part:class:unique */ 32 + dev_set_name(&slave->dev, "sdw:%x:%x:%x:%x:%x", 33 + bus->link_id, id->mfg_id, id->part_id, 34 + id->class_id, id->unique_id); 35 + 36 + slave->dev.release = sdw_slave_release; 37 + slave->dev.bus = &sdw_bus_type; 38 + slave->bus = bus; 39 + slave->status = SDW_SLAVE_UNATTACHED; 40 + slave->dev_num = 0; 41 + 42 + mutex_lock(&bus->bus_lock); 43 + list_add_tail(&slave->node, &bus->slaves); 44 + mutex_unlock(&bus->bus_lock); 45 + 46 + ret = device_register(&slave->dev); 47 + if (ret) { 48 + dev_err(bus->dev, "Failed to add slave: ret %d\n", ret); 49 + 50 + /* 51 + * On err, don't free but drop ref as this will be freed 52 + * when release method is invoked. 53 + */ 54 + mutex_lock(&bus->bus_lock); 55 + list_del(&slave->node); 56 + mutex_unlock(&bus->bus_lock); 57 + put_device(&slave->dev); 58 + } 59 + 60 + return ret; 61 + } 62 + 63 + #if IS_ENABLED(CONFIG_ACPI) 64 + /* 65 + * sdw_acpi_find_slaves() - Find Slave devices in Master ACPI node 66 + * @bus: SDW bus instance 67 + * 68 + * Scans Master ACPI node for SDW child Slave devices and registers them.
69 + */ 70 + int sdw_acpi_find_slaves(struct sdw_bus *bus) 71 + { 72 + struct acpi_device *adev, *parent; 73 + 74 + parent = ACPI_COMPANION(bus->dev); 75 + if (!parent) { 76 + dev_err(bus->dev, "Can't find parent for acpi bind\n"); 77 + return -ENODEV; 78 + } 79 + 80 + list_for_each_entry(adev, &parent->children, node) { 81 + unsigned long long addr; 82 + struct sdw_slave_id id; 83 + unsigned int link_id; 84 + acpi_status status; 85 + 86 + status = acpi_evaluate_integer(adev->handle, 87 + METHOD_NAME__ADR, NULL, &addr); 88 + 89 + if (ACPI_FAILURE(status)) { 90 + dev_err(bus->dev, "_ADR resolution failed: %x\n", 91 + status); 92 + return status; 93 + } 94 + 95 + /* Extract link id from ADR, Bit 51 to 48 (included) */ 96 + link_id = (addr >> 48) & GENMASK(3, 0); 97 + 98 + /* Check for link_id match */ 99 + if (link_id != bus->link_id) 100 + continue; 101 + 102 + sdw_extract_slave_id(bus, addr, &id); 103 + 104 + /* 105 + * don't error check for sdw_slave_add as we want to continue 106 + * adding Slaves 107 + */ 108 + sdw_slave_add(bus, &id, acpi_fwnode_handle(adev)); 109 + } 110 + 111 + return 0; 112 + } 113 + 114 + #endif
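`sdw_acpi_find_slaves()` above extracts the Master link number from bits 51:48 of each child's ACPI `_ADR` before handing the rest of the address to `sdw_extract_slave_id()`. A small sketch of just that bit extraction, using the hypothetical helper name `sdw_adr_link_id` (the decoding of the remaining id fields is not reproduced here):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the link-id extraction done in sdw_acpi_find_slaves():
 * the SoundWire _ADR encodes the Master link number in bits 51:48.
 * (addr >> 48) & 0xF matches the kernel's (addr >> 48) & GENMASK(3, 0). */
unsigned int sdw_adr_link_id(unsigned long long addr)
{
	return (unsigned int)((addr >> 48) & 0xF);
}
```

The driver skips any child whose extracted link id does not match `bus->link_id`, so one ACPI parent can describe Slaves on several links.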
+108 -32
drivers/uio/uio_hv_generic.c
··· 10 10 * Since the driver does not declare any device ids, you must allocate 11 11 * id and bind the device to the driver yourself. For example: 12 12 * 13 + * Associate Network GUID with UIO device 13 14 * # echo "f8615163-df3e-46c5-913f-f2d2f965ed0e" \ 14 - * > /sys/bus/vmbus/drivers/uio_hv_generic 15 - * # echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 \ 15 + * > /sys/bus/vmbus/drivers/uio_hv_generic/new_id 16 + * Then rebind 17 + * # echo -n "ed963694-e847-4b2a-85af-bc9cfc11d6f3" \ 16 18 * > /sys/bus/vmbus/drivers/hv_netvsc/unbind 17 - * # echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 \ 19 + * # echo -n "ed963694-e847-4b2a-85af-bc9cfc11d6f3" \ 18 20 * > /sys/bus/vmbus/drivers/uio_hv_generic/bind 19 21 */ 20 22 ··· 39 37 #define DRIVER_AUTHOR "Stephen Hemminger <sthemmin at microsoft.com>" 40 38 #define DRIVER_DESC "Generic UIO driver for VMBus devices" 41 39 40 + #define HV_RING_SIZE 512 /* pages */ 41 + #define SEND_BUFFER_SIZE (15 * 1024 * 1024) 42 + #define RECV_BUFFER_SIZE (15 * 1024 * 1024) 43 + 42 44 /* 43 45 * List of resources to be mapped to user space 44 46 * can be extended up to MAX_UIO_MAPS(5) items ··· 51 45 TXRX_RING_MAP = 0, 52 46 INT_PAGE_MAP, 53 47 MON_PAGE_MAP, 48 + RECV_BUF_MAP, 49 + SEND_BUF_MAP 54 50 }; 55 - 56 - #define HV_RING_SIZE 512 57 51 58 52 struct hv_uio_private_data { 59 53 struct uio_info info; 60 54 struct hv_device *device; 55 + 56 + void *recv_buf; 57 + u32 recv_gpadl; 58 + char recv_name[32]; /* "recv_4294967295" */ 59 + 60 + void *send_buf; 61 + u32 send_gpadl; 62 + char send_name[32]; 61 63 }; 62 - 63 - static int 64 - hv_uio_mmap(struct uio_info *info, struct vm_area_struct *vma) 65 - { 66 - int mi; 67 - 68 - if (vma->vm_pgoff >= MAX_UIO_MAPS) 69 - return -EINVAL; 70 - 71 - if (info->mem[vma->vm_pgoff].size == 0) 72 - return -EINVAL; 73 - 74 - mi = (int)vma->vm_pgoff; 75 - 76 - return remap_pfn_range(vma, vma->vm_start, 77 - info->mem[mi].addr >> PAGE_SHIFT, 78 - vma->vm_end - vma->vm_start, 
vma->vm_page_prot); 79 - } 80 64 81 65 /* 82 66 * This is the irqcontrol callback to be registered to uio_info. ··· 103 107 uio_event_notify(&pdata->info); 104 108 } 105 109 110 + /* 111 + * Callback from vmbus_event when channel is rescinded. 112 + */ 113 + static void hv_uio_rescind(struct vmbus_channel *channel) 114 + { 115 + struct hv_device *hv_dev = channel->primary_channel->device_obj; 116 + struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev); 117 + 118 + /* 119 + * Turn off the interrupt file handle 120 + * Next read for event will return -EIO 121 + */ 122 + pdata->info.irq = 0; 123 + 124 + /* Wake up reader */ 125 + uio_event_notify(&pdata->info); 126 + } 127 + 128 + static void 129 + hv_uio_cleanup(struct hv_device *dev, struct hv_uio_private_data *pdata) 130 + { 131 + if (pdata->send_gpadl) 132 + vmbus_teardown_gpadl(dev->channel, pdata->send_gpadl); 133 + vfree(pdata->send_buf); 134 + 135 + if (pdata->recv_gpadl) 136 + vmbus_teardown_gpadl(dev->channel, pdata->recv_gpadl); 137 + vfree(pdata->recv_buf); 138 + } 139 + 106 140 static int 107 141 hv_uio_probe(struct hv_device *dev, 108 142 const struct hv_vmbus_device_id *dev_id) ··· 150 124 if (ret) 151 125 goto fail; 152 126 127 + /* Communicating with host has to be via shared memory not hypercall */ 128 + if (!dev->channel->offermsg.monitor_allocated) { 129 + dev_err(&dev->device, "vmbus channel requires hypercall\n"); 130 + ret = -ENOTSUPP; 131 + goto fail_close; 132 + } 133 + 153 134 dev->channel->inbound.ring_buffer->interrupt_mask = 1; 154 - set_channel_read_mode(dev->channel, HV_CALL_DIRECT); 135 + set_channel_read_mode(dev->channel, HV_CALL_ISR); 155 136 156 137 /* Fill general uio info */ 157 138 pdata->info.name = "uio_hv_generic"; 158 139 pdata->info.version = DRIVER_VERSION; 159 140 pdata->info.irqcontrol = hv_uio_irqcontrol; 160 - pdata->info.mmap = hv_uio_mmap; 161 141 pdata->info.irq = UIO_IRQ_CUSTOM; 162 142 163 143 /* mem resources */ 164 144 pdata->info.mem[TXRX_RING_MAP].name = 
"txrx_rings"; 165 145 pdata->info.mem[TXRX_RING_MAP].addr 166 - = virt_to_phys(dev->channel->ringbuffer_pages); 146 + = (uintptr_t)dev->channel->ringbuffer_pages; 167 147 pdata->info.mem[TXRX_RING_MAP].size 168 - = dev->channel->ringbuffer_pagecount * PAGE_SIZE; 148 + = dev->channel->ringbuffer_pagecount << PAGE_SHIFT; 169 149 pdata->info.mem[TXRX_RING_MAP].memtype = UIO_MEM_LOGICAL; 170 150 171 151 pdata->info.mem[INT_PAGE_MAP].name = "int_page"; 172 - pdata->info.mem[INT_PAGE_MAP].addr = 173 - virt_to_phys(vmbus_connection.int_page); 152 + pdata->info.mem[INT_PAGE_MAP].addr 153 + = (uintptr_t)vmbus_connection.int_page; 174 154 pdata->info.mem[INT_PAGE_MAP].size = PAGE_SIZE; 175 155 pdata->info.mem[INT_PAGE_MAP].memtype = UIO_MEM_LOGICAL; 176 156 177 - pdata->info.mem[MON_PAGE_MAP].name = "monitor_pages"; 178 - pdata->info.mem[MON_PAGE_MAP].addr = 179 - virt_to_phys(vmbus_connection.monitor_pages[1]); 157 + pdata->info.mem[MON_PAGE_MAP].name = "monitor_page"; 158 + pdata->info.mem[MON_PAGE_MAP].addr 159 + = (uintptr_t)vmbus_connection.monitor_pages[1]; 180 160 pdata->info.mem[MON_PAGE_MAP].size = PAGE_SIZE; 181 161 pdata->info.mem[MON_PAGE_MAP].memtype = UIO_MEM_LOGICAL; 162 + 163 + pdata->recv_buf = vzalloc(RECV_BUFFER_SIZE); 164 + if (pdata->recv_buf == NULL) { 165 + ret = -ENOMEM; 166 + goto fail_close; 167 + } 168 + 169 + ret = vmbus_establish_gpadl(dev->channel, pdata->recv_buf, 170 + RECV_BUFFER_SIZE, &pdata->recv_gpadl); 171 + if (ret) 172 + goto fail_close; 173 + 174 + /* put Global Physical Address Label in name */ 175 + snprintf(pdata->recv_name, sizeof(pdata->recv_name), 176 + "recv:%u", pdata->recv_gpadl); 177 + pdata->info.mem[RECV_BUF_MAP].name = pdata->recv_name; 178 + pdata->info.mem[RECV_BUF_MAP].addr 179 + = (uintptr_t)pdata->recv_buf; 180 + pdata->info.mem[RECV_BUF_MAP].size = RECV_BUFFER_SIZE; 181 + pdata->info.mem[RECV_BUF_MAP].memtype = UIO_MEM_VIRTUAL; 182 + 183 + 184 + pdata->send_buf = vzalloc(SEND_BUFFER_SIZE); 185 + if (pdata->send_buf 
== NULL) { 186 + ret = -ENOMEM; 187 + goto fail_close; 188 + } 189 + 190 + ret = vmbus_establish_gpadl(dev->channel, pdata->send_buf, 191 + SEND_BUFFER_SIZE, &pdata->send_gpadl); 192 + if (ret) 193 + goto fail_close; 194 + 195 + snprintf(pdata->send_name, sizeof(pdata->send_name), 196 + "send:%u", pdata->send_gpadl); 197 + pdata->info.mem[SEND_BUF_MAP].name = pdata->send_name; 198 + pdata->info.mem[SEND_BUF_MAP].addr 199 + = (uintptr_t)pdata->send_buf; 200 + pdata->info.mem[SEND_BUF_MAP].size = SEND_BUFFER_SIZE; 201 + pdata->info.mem[SEND_BUF_MAP].memtype = UIO_MEM_VIRTUAL; 182 202 183 203 pdata->info.priv = pdata; 184 204 pdata->device = dev; ··· 235 163 goto fail_close; 236 164 } 237 165 166 + vmbus_set_chn_rescind_callback(dev->channel, hv_uio_rescind); 167 + 238 168 hv_set_drvdata(dev, pdata); 239 169 240 170 return 0; 241 171 242 172 fail_close: 173 + hv_uio_cleanup(dev, pdata); 243 174 vmbus_close(dev->channel); 244 175 fail: 245 176 kfree(pdata); ··· 259 184 return 0; 260 185 261 186 uio_unregister_device(&pdata->info); 187 + hv_uio_cleanup(dev, pdata); 262 188 hv_set_drvdata(dev, NULL); 263 189 vmbus_close(dev->channel); 264 190 kfree(pdata);
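With the driver-private `hv_uio_mmap()` removed above, the five regions are exposed through the generic UIO core, where (per the kernel's UIO how-to) mapping N of `/dev/uioX` is selected by passing `N * page_size` as the mmap offset. A hedged userspace sketch of that offset convention, mirroring the driver's map enum (`uio_mmap_offset` is a hypothetical helper, not part of any library):

```c
#include <assert.h>

/* Region indices as defined in uio_hv_generic.c */
enum hv_uio_map {
	TXRX_RING_MAP = 0,
	INT_PAGE_MAP,
	MON_PAGE_MAP,
	RECV_BUF_MAP,
	SEND_BUF_MAP
};

/* UIO convention: mapping N is selected by mmap offset N * page_size.
 * The actual mapping length comes from the region's sysfs "size" file. */
long uio_mmap_offset(enum hv_uio_map map, long page_size)
{
	return (long)map * page_size;
}
```

A userspace driver would pass this value as the `offset` argument of `mmap(2)` on the opened `/dev/uioX` file descriptor.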
+1
drivers/virt/Kconfig
··· 30 30 4) A kernel interface for receiving callbacks when a managed 31 31 partition shuts down. 32 32 33 + source "drivers/virt/vboxguest/Kconfig" 33 34 endif
+1
drivers/virt/Makefile
··· 3 3 # 4 4 5 5 obj-$(CONFIG_FSL_HV_MANAGER) += fsl_hypervisor.o 6 + obj-y += vboxguest/
+18
drivers/virt/vboxguest/Kconfig
··· 1 + config VBOXGUEST 2 + tristate "Virtual Box Guest integration support" 3 + depends on X86 && PCI && INPUT 4 + help 5 + This is a driver for the Virtual Box Guest PCI device used in 6 + Virtual Box virtual machines. Enabling this driver will add 7 + support for Virtual Box Guest integration features such as 8 + copy-and-paste, seamless mode and OpenGL pass-through. 9 + 10 + This driver also offers vboxguest IPC functionality which is needed 11 + for the vboxfs driver which offers folder sharing support. 12 + 13 + If you enable this driver you should also enable the VBOXVIDEO option. 14 + 15 + Although it is possible to build this module in, it is advised 16 + to build this driver as a module, so that it can be updated 17 + independently of the kernel. Select M to build this driver as a 18 + module.
+3
drivers/virt/vboxguest/Makefile
··· 1 + vboxguest-y := vboxguest_linux.o vboxguest_core.o vboxguest_utils.o 2 + 3 + obj-$(CONFIG_VBOXGUEST) += vboxguest.o
+1571
drivers/virt/vboxguest/vboxguest_core.c
··· 1 + /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ 2 + /* 3 + * vboxguest core guest-device handling code, VBoxGuest.cpp in upstream svn. 4 + * 5 + * Copyright (C) 2007-2016 Oracle Corporation 6 + */ 7 + 8 + #include <linux/device.h> 9 + #include <linux/mm.h> 10 + #include <linux/sched.h> 11 + #include <linux/sizes.h> 12 + #include <linux/slab.h> 13 + #include <linux/vbox_err.h> 14 + #include <linux/vbox_utils.h> 15 + #include <linux/vmalloc.h> 16 + #include "vboxguest_core.h" 17 + #include "vboxguest_version.h" 18 + 19 + /* Get the pointer to the first HGCM parameter. */ 20 + #define VBG_IOCTL_HGCM_CALL_PARMS(a) \ 21 + ((struct vmmdev_hgcm_function_parameter *)( \ 22 + (u8 *)(a) + sizeof(struct vbg_ioctl_hgcm_call))) 23 + /* Get the pointer to the first HGCM parameter in a 32-bit request. */ 24 + #define VBG_IOCTL_HGCM_CALL_PARMS32(a) \ 25 + ((struct vmmdev_hgcm_function_parameter32 *)( \ 26 + (u8 *)(a) + sizeof(struct vbg_ioctl_hgcm_call))) 27 + 28 + #define GUEST_MAPPINGS_TRIES 5 29 + 30 + /** 31 + * Reserves memory in which the VMM can relocate any guest mappings 32 + * that are floating around. 33 + * 34 + * This operation is a little bit tricky since the VMM might not accept 35 + * just any address because of address clashes between the three contexts 36 + * it operates in, so we try several times. 37 + * 38 + * Failure to reserve the guest mappings is ignored. 39 + * 40 + * @gdev: The Guest extension device. 41 + */ 42 + static void vbg_guest_mappings_init(struct vbg_dev *gdev) 43 + { 44 + struct vmmdev_hypervisorinfo *req; 45 + void *guest_mappings[GUEST_MAPPINGS_TRIES]; 46 + struct page **pages = NULL; 47 + u32 size, hypervisor_size; 48 + int i, rc; 49 + 50 + /* Query the required space. 
*/ 51 + req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_GET_HYPERVISOR_INFO); 52 + if (!req) 53 + return; 54 + 55 + req->hypervisor_start = 0; 56 + req->hypervisor_size = 0; 57 + rc = vbg_req_perform(gdev, req); 58 + if (rc < 0) 59 + goto out; 60 + 61 + /* 62 + * The VMM will report back if there is nothing it wants to map, like 63 + * for instance in VT-x and AMD-V mode. 64 + */ 65 + if (req->hypervisor_size == 0) 66 + goto out; 67 + 68 + hypervisor_size = req->hypervisor_size; 69 + /* Add 4M so that we can align the vmap to 4MiB as the host requires. */ 70 + size = PAGE_ALIGN(req->hypervisor_size) + SZ_4M; 71 + 72 + pages = kmalloc(sizeof(*pages) * (size >> PAGE_SHIFT), GFP_KERNEL); 73 + if (!pages) 74 + goto out; 75 + 76 + gdev->guest_mappings_dummy_page = alloc_page(GFP_HIGHUSER); 77 + if (!gdev->guest_mappings_dummy_page) 78 + goto out; 79 + 80 + for (i = 0; i < (size >> PAGE_SHIFT); i++) 81 + pages[i] = gdev->guest_mappings_dummy_page; 82 + 83 + /* 84 + * Try several times, the VMM might not accept some addresses because 85 + * of address clashes between the three contexts. 86 + */ 87 + for (i = 0; i < GUEST_MAPPINGS_TRIES; i++) { 88 + guest_mappings[i] = vmap(pages, (size >> PAGE_SHIFT), 89 + VM_MAP, PAGE_KERNEL_RO); 90 + if (!guest_mappings[i]) 91 + break; 92 + 93 + req->header.request_type = VMMDEVREQ_SET_HYPERVISOR_INFO; 94 + req->header.rc = VERR_INTERNAL_ERROR; 95 + req->hypervisor_size = hypervisor_size; 96 + req->hypervisor_start = 97 + (unsigned long)PTR_ALIGN(guest_mappings[i], SZ_4M); 98 + 99 + rc = vbg_req_perform(gdev, req); 100 + if (rc >= 0) { 101 + gdev->guest_mappings = guest_mappings[i]; 102 + break; 103 + } 104 + } 105 + 106 + /* Free vmap's from failed attempts. 
*/ 107 + while (--i >= 0) 108 + vunmap(guest_mappings[i]); 109 + 110 + /* On failure free the dummy-page backing the vmap */ 111 + if (!gdev->guest_mappings) { 112 + __free_page(gdev->guest_mappings_dummy_page); 113 + gdev->guest_mappings_dummy_page = NULL; 114 + } 115 + 116 + out: 117 + kfree(req); 118 + kfree(pages); 119 + } 120 + 121 + /** 122 + * Undo what vbg_guest_mappings_init did. 123 + * 124 + * @gdev: The Guest extension device. 125 + */ 126 + static void vbg_guest_mappings_exit(struct vbg_dev *gdev) 127 + { 128 + struct vmmdev_hypervisorinfo *req; 129 + int rc; 130 + 131 + if (!gdev->guest_mappings) 132 + return; 133 + 134 + /* 135 + * Tell the host that we're going to free the memory we reserved for 136 + * it, then free it up. (Leak the memory if anything goes wrong here.) 137 + */ 138 + req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_SET_HYPERVISOR_INFO); 139 + if (!req) 140 + return; 141 + 142 + req->hypervisor_start = 0; 143 + req->hypervisor_size = 0; 144 + 145 + rc = vbg_req_perform(gdev, req); 146 + 147 + kfree(req); 148 + 149 + if (rc < 0) { 150 + vbg_err("%s error: %d\n", __func__, rc); 151 + return; 152 + } 153 + 154 + vunmap(gdev->guest_mappings); 155 + gdev->guest_mappings = NULL; 156 + 157 + __free_page(gdev->guest_mappings_dummy_page); 158 + gdev->guest_mappings_dummy_page = NULL; 159 + } 160 + 161 + /** 162 + * Report the guest information to the host. 163 + * Return: 0 or negative errno value. 164 + * @gdev: The Guest extension device. 165 + */ 166 + static int vbg_report_guest_info(struct vbg_dev *gdev) 167 + { 168 + /* 169 + * Allocate and fill in the two guest info reports.
170 + */ 171 + struct vmmdev_guest_info *req1 = NULL; 172 + struct vmmdev_guest_info2 *req2 = NULL; 173 + int rc, ret = -ENOMEM; 174 + 175 + req1 = vbg_req_alloc(sizeof(*req1), VMMDEVREQ_REPORT_GUEST_INFO); 176 + req2 = vbg_req_alloc(sizeof(*req2), VMMDEVREQ_REPORT_GUEST_INFO2); 177 + if (!req1 || !req2) 178 + goto out_free; 179 + 180 + req1->interface_version = VMMDEV_VERSION; 181 + req1->os_type = VMMDEV_OSTYPE_LINUX26; 182 + #if __BITS_PER_LONG == 64 183 + req1->os_type |= VMMDEV_OSTYPE_X64; 184 + #endif 185 + 186 + req2->additions_major = VBG_VERSION_MAJOR; 187 + req2->additions_minor = VBG_VERSION_MINOR; 188 + req2->additions_build = VBG_VERSION_BUILD; 189 + req2->additions_revision = VBG_SVN_REV; 190 + /* (no features defined yet) */ 191 + req2->additions_features = 0; 192 + strlcpy(req2->name, VBG_VERSION_STRING, 193 + sizeof(req2->name)); 194 + 195 + /* 196 + * There are two protocols here: 197 + * 1. INFO2 + INFO1. Supported by >=3.2.51. 198 + * 2. INFO1 and optionally INFO2. The old protocol. 199 + * 200 + * We try protocol 2 first. It will fail with VERR_NOT_SUPPORTED 201 + * if not supported by the VMMDev (message ordering requirement). 202 + */ 203 + rc = vbg_req_perform(gdev, req2); 204 + if (rc >= 0) { 205 + rc = vbg_req_perform(gdev, req1); 206 + } else if (rc == VERR_NOT_SUPPORTED || rc == VERR_NOT_IMPLEMENTED) { 207 + rc = vbg_req_perform(gdev, req1); 208 + if (rc >= 0) { 209 + rc = vbg_req_perform(gdev, req2); 210 + if (rc == VERR_NOT_IMPLEMENTED) 211 + rc = VINF_SUCCESS; 212 + } 213 + } 214 + ret = vbg_status_code_to_errno(rc); 215 + 216 + out_free: 217 + kfree(req2); 218 + kfree(req1); 219 + return ret; 220 + } 221 + 222 + /** 223 + * Report the guest driver status to the host. 224 + * Return: 0 or negative errno value. 225 + * @gdev: The Guest extension device. 226 + * @active: Flag whether the driver is now active or not. 
227 + */ 228 + static int vbg_report_driver_status(struct vbg_dev *gdev, bool active) 229 + { 230 + struct vmmdev_guest_status *req; 231 + int rc; 232 + 233 + req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_REPORT_GUEST_STATUS); 234 + if (!req) 235 + return -ENOMEM; 236 + 237 + req->facility = VBOXGUEST_FACILITY_TYPE_VBOXGUEST_DRIVER; 238 + if (active) 239 + req->status = VBOXGUEST_FACILITY_STATUS_ACTIVE; 240 + else 241 + req->status = VBOXGUEST_FACILITY_STATUS_INACTIVE; 242 + req->flags = 0; 243 + 244 + rc = vbg_req_perform(gdev, req); 245 + if (rc == VERR_NOT_IMPLEMENTED) /* Compatibility with older hosts. */ 246 + rc = VINF_SUCCESS; 247 + 248 + kfree(req); 249 + 250 + return vbg_status_code_to_errno(rc); 251 + } 252 + 253 + /** 254 + * Inflate the balloon by one chunk. The caller owns the balloon mutex. 255 + * Return: 0 or negative errno value. 256 + * @gdev: The Guest extension device. 257 + * @chunk_idx: Index of the chunk. 258 + */ 259 + static int vbg_balloon_inflate(struct vbg_dev *gdev, u32 chunk_idx) 260 + { 261 + struct vmmdev_memballoon_change *req = gdev->mem_balloon.change_req; 262 + struct page **pages; 263 + int i, rc, ret; 264 + 265 + pages = kmalloc(sizeof(*pages) * VMMDEV_MEMORY_BALLOON_CHUNK_PAGES, 266 + GFP_KERNEL | __GFP_NOWARN); 267 + if (!pages) 268 + return -ENOMEM; 269 + 270 + req->header.size = sizeof(*req); 271 + req->inflate = true; 272 + req->pages = VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; 273 + 274 + for (i = 0; i < VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; i++) { 275 + pages[i] = alloc_page(GFP_KERNEL | __GFP_NOWARN); 276 + if (!pages[i]) { 277 + ret = -ENOMEM; 278 + goto out_error; 279 + } 280 + 281 + req->phys_page[i] = page_to_phys(pages[i]); 282 + } 283 + 284 + rc = vbg_req_perform(gdev, req); 285 + if (rc < 0) { 286 + vbg_err("%s error, rc: %d\n", __func__, rc); 287 + ret = vbg_status_code_to_errno(rc); 288 + goto out_error; 289 + } 290 + 291 + gdev->mem_balloon.pages[chunk_idx] = pages; 292 + 293 + return 0; 294 + 295 + out_error: 296 + while 
(--i >= 0) 297 + __free_page(pages[i]); 298 + kfree(pages); 299 + 300 + return ret; 301 + } 302 + 303 + /** 304 + * Deflate the balloon by one chunk. The caller owns the balloon mutex. 305 + * Return: 0 or negative errno value. 306 + * @gdev: The Guest extension device. 307 + * @chunk_idx: Index of the chunk. 308 + */ 309 + static int vbg_balloon_deflate(struct vbg_dev *gdev, u32 chunk_idx) 310 + { 311 + struct vmmdev_memballoon_change *req = gdev->mem_balloon.change_req; 312 + struct page **pages = gdev->mem_balloon.pages[chunk_idx]; 313 + int i, rc; 314 + 315 + req->header.size = sizeof(*req); 316 + req->inflate = false; 317 + req->pages = VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; 318 + 319 + for (i = 0; i < VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; i++) 320 + req->phys_page[i] = page_to_phys(pages[i]); 321 + 322 + rc = vbg_req_perform(gdev, req); 323 + if (rc < 0) { 324 + vbg_err("%s error, rc: %d\n", __func__, rc); 325 + return vbg_status_code_to_errno(rc); 326 + } 327 + 328 + for (i = 0; i < VMMDEV_MEMORY_BALLOON_CHUNK_PAGES; i++) 329 + __free_page(pages[i]); 330 + kfree(pages); 331 + gdev->mem_balloon.pages[chunk_idx] = NULL; 332 + 333 + return 0; 334 + } 335 + 336 + /** 337 + * Respond to VMMDEV_EVENT_BALLOON_CHANGE_REQUEST events, query the size 338 + * the host wants the balloon to be and adjust accordingly. 339 + */ 340 + static void vbg_balloon_work(struct work_struct *work) 341 + { 342 + struct vbg_dev *gdev = 343 + container_of(work, struct vbg_dev, mem_balloon.work); 344 + struct vmmdev_memballoon_info *req = gdev->mem_balloon.get_req; 345 + u32 i, chunks; 346 + int rc, ret; 347 + 348 + /* 349 + * Setting this bit means that we request the value from the host and 350 + * change the guest memory balloon according to the returned value. 
351 + */ 352 + req->event_ack = VMMDEV_EVENT_BALLOON_CHANGE_REQUEST; 353 + rc = vbg_req_perform(gdev, req); 354 + if (rc < 0) { 355 + vbg_err("%s error, rc: %d\n", __func__, rc); 356 + return; 357 + } 358 + 359 + /* 360 + * The host always returns the same maximum amount of chunks, so 361 + * we do this once. 362 + */ 363 + if (!gdev->mem_balloon.max_chunks) { 364 + gdev->mem_balloon.pages = 365 + devm_kcalloc(gdev->dev, req->phys_mem_chunks, 366 + sizeof(struct page **), GFP_KERNEL); 367 + if (!gdev->mem_balloon.pages) 368 + return; 369 + 370 + gdev->mem_balloon.max_chunks = req->phys_mem_chunks; 371 + } 372 + 373 + chunks = req->balloon_chunks; 374 + if (chunks > gdev->mem_balloon.max_chunks) { 375 + vbg_err("%s: illegal balloon size %u (max=%u)\n", 376 + __func__, chunks, gdev->mem_balloon.max_chunks); 377 + return; 378 + } 379 + 380 + if (chunks > gdev->mem_balloon.chunks) { 381 + /* inflate */ 382 + for (i = gdev->mem_balloon.chunks; i < chunks; i++) { 383 + ret = vbg_balloon_inflate(gdev, i); 384 + if (ret < 0) 385 + return; 386 + 387 + gdev->mem_balloon.chunks++; 388 + } 389 + } else { 390 + /* deflate */ 391 + for (i = gdev->mem_balloon.chunks; i-- > chunks;) { 392 + ret = vbg_balloon_deflate(gdev, i); 393 + if (ret < 0) 394 + return; 395 + 396 + gdev->mem_balloon.chunks--; 397 + } 398 + } 399 + } 400 + 401 + /** 402 + * Callback for heartbeat timer. 403 + */ 404 + static void vbg_heartbeat_timer(struct timer_list *t) 405 + { 406 + struct vbg_dev *gdev = from_timer(gdev, t, heartbeat_timer); 407 + 408 + vbg_req_perform(gdev, gdev->guest_heartbeat_req); 409 + mod_timer(&gdev->heartbeat_timer, 410 + msecs_to_jiffies(gdev->heartbeat_interval_ms)); 411 + } 412 + 413 + /** 414 + * Configure the host to check guest's heartbeat 415 + * and get heartbeat interval from the host. 416 + * Return: 0 or negative errno value. 417 + * @gdev: The Guest extension device. 418 + * @enabled: Set true to enable guest heartbeat checks on host.
419 + */ 420 + static int vbg_heartbeat_host_config(struct vbg_dev *gdev, bool enabled) 421 + { 422 + struct vmmdev_heartbeat *req; 423 + int rc; 424 + 425 + req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_HEARTBEAT_CONFIGURE); 426 + if (!req) 427 + return -ENOMEM; 428 + 429 + req->enabled = enabled; 430 + req->interval_ns = 0; 431 + rc = vbg_req_perform(gdev, req); 432 + do_div(req->interval_ns, 1000000); /* ns -> ms */ 433 + gdev->heartbeat_interval_ms = req->interval_ns; 434 + kfree(req); 435 + 436 + return vbg_status_code_to_errno(rc); 437 + } 438 + 439 + /** 440 + * Initializes the heartbeat timer. This feature may be disabled by the host. 441 + * Return: 0 or negative errno value. 442 + * @gdev: The Guest extension device. 443 + */ 444 + static int vbg_heartbeat_init(struct vbg_dev *gdev) 445 + { 446 + int ret; 447 + 448 + /* Make sure that heartbeat checking is disabled if we fail. */ 449 + ret = vbg_heartbeat_host_config(gdev, false); 450 + if (ret < 0) 451 + return ret; 452 + 453 + ret = vbg_heartbeat_host_config(gdev, true); 454 + if (ret < 0) 455 + return ret; 456 + 457 + /* 458 + * Preallocate the request to use it from the timer callback because: 459 + * 1) on Windows vbg_req_alloc must be called at IRQL <= APC_LEVEL 460 + * and the timer callback runs at DISPATCH_LEVEL; 461 + * 2) avoid repeated allocations. 462 + */ 463 + gdev->guest_heartbeat_req = vbg_req_alloc( 464 + sizeof(*gdev->guest_heartbeat_req), 465 + VMMDEVREQ_GUEST_HEARTBEAT); 466 + if (!gdev->guest_heartbeat_req) 467 + return -ENOMEM; 468 + 469 + vbg_info("%s: Setting up heartbeat to trigger every %d milliseconds\n", 470 + __func__, gdev->heartbeat_interval_ms); 471 + mod_timer(&gdev->heartbeat_timer, 0); 472 + 473 + return 0; 474 + } 475 + 476 + /** 477 + * Cleanup heartbeat code, stop HB timer and disable host heartbeat checking. 478 + * @gdev: The Guest extension device.
479 + */ 480 + static void vbg_heartbeat_exit(struct vbg_dev *gdev) 481 + { 482 + del_timer_sync(&gdev->heartbeat_timer); 483 + vbg_heartbeat_host_config(gdev, false); 484 + kfree(gdev->guest_heartbeat_req); 485 + 486 + } 487 + 488 + /** 489 + * Applies a change to the bit usage tracker. 490 + * Return: true if the mask changed, false if not. 491 + * @tracker: The bit usage tracker. 492 + * @changed: The bits to change. 493 + * @previous: The previous value of the bits. 494 + */ 495 + static bool vbg_track_bit_usage(struct vbg_bit_usage_tracker *tracker, 496 + u32 changed, u32 previous) 497 + { 498 + bool global_change = false; 499 + 500 + while (changed) { 501 + u32 bit = ffs(changed) - 1; 502 + u32 bitmask = BIT(bit); 503 + 504 + if (bitmask & previous) { 505 + tracker->per_bit_usage[bit] -= 1; 506 + if (tracker->per_bit_usage[bit] == 0) { 507 + global_change = true; 508 + tracker->mask &= ~bitmask; 509 + } 510 + } else { 511 + tracker->per_bit_usage[bit] += 1; 512 + if (tracker->per_bit_usage[bit] == 1) { 513 + global_change = true; 514 + tracker->mask |= bitmask; 515 + } 516 + } 517 + 518 + changed &= ~bitmask; 519 + } 520 + 521 + return global_change; 522 + } 523 + 524 + /** 525 + * Init and termination worker for resetting the (host) event filter on the host 526 + * Return: 0 or negative errno value. 527 + * @gdev: The Guest extension device. 528 + * @fixed_events: Fixed events (init time). 
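vbg_track_bit_usage above is a per-bit reference counter: a bit stays set in the combined mask while at least one session holds it, and the return value tells the caller whether the host-visible mask needs updating. A standalone userspace sketch of the same logic (types simplified, names hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <strings.h> /* ffs() */

/* Per-bit reference counts plus the combined mask, as in
 * struct vbg_bit_usage_tracker. */
struct bit_tracker {
    uint32_t mask;
    uint32_t per_bit_usage[32];
};

/* Apply a change; `previous` says which of the changed bits the caller
 * held before. Returns true only when the combined mask itself changed. */
static bool track_bit_usage(struct bit_tracker *t, uint32_t changed,
                            uint32_t previous)
{
    bool global_change = false;

    while (changed) {
        uint32_t bit = ffs(changed) - 1;
        uint32_t bitmask = UINT32_C(1) << bit;

        if (bitmask & previous) {
            /* Caller is dropping the bit: clear mask on last reference. */
            if (--t->per_bit_usage[bit] == 0) {
                global_change = true;
                t->mask &= ~bitmask;
            }
        } else {
            /* Caller is taking the bit: set mask on first reference. */
            if (++t->per_bit_usage[bit] == 1) {
                global_change = true;
                t->mask |= bitmask;
            }
        }
        changed &= ~bitmask;
    }
    return global_change;
}
```

This is why the driver only sends a VMM request when the tracker reports a global change: intermediate reference-count moves leave the host-visible mask untouched.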
529 + */ 530 + static int vbg_reset_host_event_filter(struct vbg_dev *gdev, 531 + u32 fixed_events) 532 + { 533 + struct vmmdev_mask *req; 534 + int rc; 535 + 536 + req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_CTL_GUEST_FILTER_MASK); 537 + if (!req) 538 + return -ENOMEM; 539 + 540 + req->not_mask = U32_MAX & ~fixed_events; 541 + req->or_mask = fixed_events; 542 + rc = vbg_req_perform(gdev, req); 543 + if (rc < 0) 544 + vbg_err("%s error, rc: %d\n", __func__, rc); 545 + 546 + kfree(req); 547 + return vbg_status_code_to_errno(rc); 548 + } 549 + 550 + /** 551 + * Changes the event filter mask for the given session. 552 + * 553 + * This is called in response to VBG_IOCTL_CHANGE_FILTER_MASK as well as to 554 + * do session cleanup. Takes the session spinlock. 555 + * 556 + * Return: 0 or negative errno value. 557 + * @gdev: The Guest extension device. 558 + * @session: The session. 559 + * @or_mask: The events to add. 560 + * @not_mask: The events to remove. 561 + * @session_termination: Set if we're called by the session cleanup code. 562 + * This tweaks the error handling so we perform 563 + * proper session cleanup even if the host 564 + * misbehaves. 565 + */ 566 + static int vbg_set_session_event_filter(struct vbg_dev *gdev, 567 + struct vbg_session *session, 568 + u32 or_mask, u32 not_mask, 569 + bool session_termination) 570 + { 571 + struct vmmdev_mask *req; 572 + u32 changed, previous; 573 + int rc, ret = 0; 574 + 575 + /* Allocate a request buffer before taking the spinlock */ 576 + req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_CTL_GUEST_FILTER_MASK); 577 + if (!req) { 578 + if (!session_termination) 579 + return -ENOMEM; 580 + /* Ignore allocation failure, we must do session cleanup. */ 581 + } 582 + 583 + mutex_lock(&gdev->session_mutex); 584 + 585 + /* Apply the changes to the session mask. 
*/ 586 + previous = session->event_filter; 587 + session->event_filter |= or_mask; 588 + session->event_filter &= ~not_mask; 589 + 590 + /* If anything actually changed, update the global usage counters. */ 591 + changed = previous ^ session->event_filter; 592 + if (!changed) 593 + goto out; 594 + 595 + vbg_track_bit_usage(&gdev->event_filter_tracker, changed, previous); 596 + or_mask = gdev->fixed_events | gdev->event_filter_tracker.mask; 597 + 598 + if (gdev->event_filter_host == or_mask || !req) 599 + goto out; 600 + 601 + gdev->event_filter_host = or_mask; 602 + req->or_mask = or_mask; 603 + req->not_mask = ~or_mask; 604 + rc = vbg_req_perform(gdev, req); 605 + if (rc < 0) { 606 + ret = vbg_status_code_to_errno(rc); 607 + 608 + /* Failed, roll back (unless it's session termination time). */ 609 + gdev->event_filter_host = U32_MAX; 610 + if (session_termination) 611 + goto out; 612 + 613 + vbg_track_bit_usage(&gdev->event_filter_tracker, changed, 614 + session->event_filter); 615 + session->event_filter = previous; 616 + } 617 + 618 + out: 619 + mutex_unlock(&gdev->session_mutex); 620 + kfree(req); 621 + 622 + return ret; 623 + } 624 + 625 + /** 626 + * Init and termination worker for setting guest capabilities to zero on the host. 627 + * Return: 0 or negative errno value. 628 + * @gdev: The Guest extension device. 629 + */ 630 + static int vbg_reset_host_capabilities(struct vbg_dev *gdev) 631 + { 632 + struct vmmdev_mask *req; 633 + int rc; 634 + 635 + req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_SET_GUEST_CAPABILITIES); 636 + if (!req) 637 + return -ENOMEM; 638 + 639 + req->not_mask = U32_MAX; 640 + req->or_mask = 0; 641 + rc = vbg_req_perform(gdev, req); 642 + if (rc < 0) 643 + vbg_err("%s error, rc: %d\n", __func__, rc); 644 + 645 + kfree(req); 646 + return vbg_status_code_to_errno(rc); 647 + } 648 + 649 + /** 650 + * Sets the guest capabilities for a session. Takes the session spinlock. 651 + * Return: 0 or negative errno value.
652 + * @gdev: The Guest extension device. 653 + * @session: The session. 654 + * @or_mask: The capabilities to add. 655 + * @not_mask: The capabilities to remove. 656 + * @session_termination: Set if we're called by the session cleanup code. 657 + * This tweaks the error handling so we perform 658 + * proper session cleanup even if the host 659 + * misbehaves. 660 + */ 661 + static int vbg_set_session_capabilities(struct vbg_dev *gdev, 662 + struct vbg_session *session, 663 + u32 or_mask, u32 not_mask, 664 + bool session_termination) 665 + { 666 + struct vmmdev_mask *req; 667 + u32 changed, previous; 668 + int rc, ret = 0; 669 + 670 + /* Allocate a request buffer before taking the spinlock */ 671 + req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_SET_GUEST_CAPABILITIES); 672 + if (!req) { 673 + if (!session_termination) 674 + return -ENOMEM; 675 + /* Ignore allocation failure, we must do session cleanup. */ 676 + } 677 + 678 + mutex_lock(&gdev->session_mutex); 679 + 680 + /* Apply the changes to the session mask. */ 681 + previous = session->guest_caps; 682 + session->guest_caps |= or_mask; 683 + session->guest_caps &= ~not_mask; 684 + 685 + /* If anything actually changed, update the global usage counters. */ 686 + changed = previous ^ session->guest_caps; 687 + if (!changed) 688 + goto out; 689 + 690 + vbg_track_bit_usage(&gdev->guest_caps_tracker, changed, previous); 691 + or_mask = gdev->guest_caps_tracker.mask; 692 + 693 + if (gdev->guest_caps_host == or_mask || !req) 694 + goto out; 695 + 696 + gdev->guest_caps_host = or_mask; 697 + req->or_mask = or_mask; 698 + req->not_mask = ~or_mask; 699 + rc = vbg_req_perform(gdev, req); 700 + if (rc < 0) { 701 + ret = vbg_status_code_to_errno(rc); 702 + 703 + /* Failed, roll back (unless it's session termination time). 
*/ 704 + gdev->guest_caps_host = U32_MAX; 705 + if (session_termination) 706 + goto out; 707 + 708 + vbg_track_bit_usage(&gdev->guest_caps_tracker, changed, 709 + session->guest_caps); 710 + session->guest_caps = previous; 711 + } 712 + 713 + out: 714 + mutex_unlock(&gdev->session_mutex); 715 + kfree(req); 716 + 717 + return ret; 718 + } 719 + 720 + /** 721 + * vbg_query_host_version gets the host feature mask and version information. 722 + * Return: 0 or negative errno value. 723 + * @gdev: The Guest extension device. 724 + */ 725 + static int vbg_query_host_version(struct vbg_dev *gdev) 726 + { 727 + struct vmmdev_host_version *req; 728 + int rc, ret; 729 + 730 + req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_GET_HOST_VERSION); 731 + if (!req) 732 + return -ENOMEM; 733 + 734 + rc = vbg_req_perform(gdev, req); 735 + ret = vbg_status_code_to_errno(rc); 736 + if (ret) 737 + goto out; 738 + 739 + snprintf(gdev->host_version, sizeof(gdev->host_version), "%u.%u.%ur%u", 740 + req->major, req->minor, req->build, req->revision); 741 + gdev->host_features = req->features; 742 + 743 + vbg_info("vboxguest: host-version: %s %#x\n", gdev->host_version, 744 + gdev->host_features); 745 + 746 + if (!(req->features & VMMDEV_HVF_HGCM_PHYS_PAGE_LIST)) { 747 + vbg_err("vboxguest: Error host too old (does not support page-lists)\n"); 748 + ret = -ENODEV; 749 + } 750 + 751 + out: 752 + kfree(req); 753 + return ret; 754 + } 755 + 756 + /** 757 + * Initializes the VBoxGuest device extension when the 758 + * device driver is loaded. 759 + * 760 + * The native code locates the VMMDev on the PCI bus and retrieves 761 + * the MMIO and I/O port ranges; this function will take care of 762 + * mapping the MMIO memory (if present). Upon successful return 763 + * the native code should set up the interrupt handler. 764 + * 765 + * Return: 0 or negative errno value. 766 + * 767 + * @gdev: The Guest extension device.
768 + * @fixed_events: Events that will be enabled upon init and no client 769 + * will ever be allowed to mask. 770 + */ 771 + int vbg_core_init(struct vbg_dev *gdev, u32 fixed_events) 772 + { 773 + int ret = -ENOMEM; 774 + 775 + gdev->fixed_events = fixed_events | VMMDEV_EVENT_HGCM; 776 + gdev->event_filter_host = U32_MAX; /* forces a report */ 777 + gdev->guest_caps_host = U32_MAX; /* forces a report */ 778 + 779 + init_waitqueue_head(&gdev->event_wq); 780 + init_waitqueue_head(&gdev->hgcm_wq); 781 + spin_lock_init(&gdev->event_spinlock); 782 + mutex_init(&gdev->session_mutex); 783 + mutex_init(&gdev->cancel_req_mutex); 784 + timer_setup(&gdev->heartbeat_timer, vbg_heartbeat_timer, 0); 785 + INIT_WORK(&gdev->mem_balloon.work, vbg_balloon_work); 786 + 787 + gdev->mem_balloon.get_req = 788 + vbg_req_alloc(sizeof(*gdev->mem_balloon.get_req), 789 + VMMDEVREQ_GET_MEMBALLOON_CHANGE_REQ); 790 + gdev->mem_balloon.change_req = 791 + vbg_req_alloc(sizeof(*gdev->mem_balloon.change_req), 792 + VMMDEVREQ_CHANGE_MEMBALLOON); 793 + gdev->cancel_req = 794 + vbg_req_alloc(sizeof(*(gdev->cancel_req)), 795 + VMMDEVREQ_HGCM_CANCEL2); 796 + gdev->ack_events_req = 797 + vbg_req_alloc(sizeof(*gdev->ack_events_req), 798 + VMMDEVREQ_ACKNOWLEDGE_EVENTS); 799 + gdev->mouse_status_req = 800 + vbg_req_alloc(sizeof(*gdev->mouse_status_req), 801 + VMMDEVREQ_GET_MOUSE_STATUS); 802 + 803 + if (!gdev->mem_balloon.get_req || !gdev->mem_balloon.change_req || 804 + !gdev->cancel_req || !gdev->ack_events_req || 805 + !gdev->mouse_status_req) 806 + goto err_free_reqs; 807 + 808 + ret = vbg_query_host_version(gdev); 809 + if (ret) 810 + goto err_free_reqs; 811 + 812 + ret = vbg_report_guest_info(gdev); 813 + if (ret) { 814 + vbg_err("vboxguest: vbg_report_guest_info error: %d\n", ret); 815 + goto err_free_reqs; 816 + } 817 + 818 + ret = vbg_reset_host_event_filter(gdev, gdev->fixed_events); 819 + if (ret) { 820 + vbg_err("vboxguest: Error setting fixed event filter: %d\n", 821 + ret); 822 + goto 
err_free_reqs; 823 + } 824 + 825 + ret = vbg_reset_host_capabilities(gdev); 826 + if (ret) { 827 + vbg_err("vboxguest: Error clearing guest capabilities: %d\n", 828 + ret); 829 + goto err_free_reqs; 830 + } 831 + 832 + ret = vbg_core_set_mouse_status(gdev, 0); 833 + if (ret) { 834 + vbg_err("vboxguest: Error clearing mouse status: %d\n", ret); 835 + goto err_free_reqs; 836 + } 837 + 838 + /* These may fail without requiring the driver init to fail. */ 839 + vbg_guest_mappings_init(gdev); 840 + vbg_heartbeat_init(gdev); 841 + 842 + /* All Done! */ 843 + ret = vbg_report_driver_status(gdev, true); 844 + if (ret < 0) 845 + vbg_err("vboxguest: Error reporting driver status: %d\n", ret); 846 + 847 + return 0; 848 + 849 + err_free_reqs: 850 + kfree(gdev->mouse_status_req); 851 + kfree(gdev->ack_events_req); 852 + kfree(gdev->cancel_req); 853 + kfree(gdev->mem_balloon.change_req); 854 + kfree(gdev->mem_balloon.get_req); 855 + return ret; 856 + } 857 + 858 + /** 859 + * Call this on exit to clean-up vboxguest-core managed resources. 860 + * 861 + * The native code should call this before the driver is unloaded, 862 + * but don't call this on shutdown. 863 + * @gdev: The Guest extension device. 864 + */ 865 + void vbg_core_exit(struct vbg_dev *gdev) 866 + { 867 + vbg_heartbeat_exit(gdev); 868 + vbg_guest_mappings_exit(gdev); 869 + 870 + /* Clear the host flags (mouse status etc). */ 871 + vbg_reset_host_event_filter(gdev, 0); 872 + vbg_reset_host_capabilities(gdev); 873 + vbg_core_set_mouse_status(gdev, 0); 874 + 875 + kfree(gdev->mouse_status_req); 876 + kfree(gdev->ack_events_req); 877 + kfree(gdev->cancel_req); 878 + kfree(gdev->mem_balloon.change_req); 879 + kfree(gdev->mem_balloon.get_req); 880 + } 881 + 882 + /** 883 + * Creates a VBoxGuest user session. 884 + * 885 + * vboxguest_linux.c calls this when userspace opens the char-device. 886 + * Return: A pointer to the new session or an ERR_PTR on error. 887 + * @gdev: The Guest extension device.
888 + * @user: Set if this is a session for the vboxuser device. 889 + */ 890 + struct vbg_session *vbg_core_open_session(struct vbg_dev *gdev, bool user) 891 + { 892 + struct vbg_session *session; 893 + 894 + session = kzalloc(sizeof(*session), GFP_KERNEL); 895 + if (!session) 896 + return ERR_PTR(-ENOMEM); 897 + 898 + session->gdev = gdev; 899 + session->user_session = user; 900 + 901 + return session; 902 + } 903 + 904 + /** 905 + * Closes a VBoxGuest session. 906 + * @session: The session to close (and free). 907 + */ 908 + void vbg_core_close_session(struct vbg_session *session) 909 + { 910 + struct vbg_dev *gdev = session->gdev; 911 + int i, rc; 912 + 913 + vbg_set_session_capabilities(gdev, session, 0, U32_MAX, true); 914 + vbg_set_session_event_filter(gdev, session, 0, U32_MAX, true); 915 + 916 + for (i = 0; i < ARRAY_SIZE(session->hgcm_client_ids); i++) { 917 + if (!session->hgcm_client_ids[i]) 918 + continue; 919 + 920 + vbg_hgcm_disconnect(gdev, session->hgcm_client_ids[i], &rc); 921 + } 922 + 923 + kfree(session); 924 + } 925 + 926 + static int vbg_ioctl_chk(struct vbg_ioctl_hdr *hdr, size_t in_size, 927 + size_t out_size) 928 + { 929 + if (hdr->size_in != (sizeof(*hdr) + in_size) || 930 + hdr->size_out != (sizeof(*hdr) + out_size)) 931 + return -EINVAL; 932 + 933 + return 0; 934 + } 935 + 936 + static int vbg_ioctl_driver_version_info( 937 + struct vbg_ioctl_driver_version_info *info) 938 + { 939 + const u16 vbg_maj_version = VBG_IOC_VERSION >> 16; 940 + u16 min_maj_version, req_maj_version; 941 + 942 + if (vbg_ioctl_chk(&info->hdr, sizeof(info->u.in), sizeof(info->u.out))) 943 + return -EINVAL; 944 + 945 + req_maj_version = info->u.in.req_version >> 16; 946 + min_maj_version = info->u.in.min_version >> 16; 947 + 948 + if (info->u.in.min_version > info->u.in.req_version || 949 + min_maj_version != req_maj_version) 950 + return -EINVAL; 951 + 952 + if (info->u.in.min_version <= VBG_IOC_VERSION && 953 + min_maj_version == vbg_maj_version) { 954 + 
info->u.out.session_version = VBG_IOC_VERSION; 955 + } else { 956 + info->u.out.session_version = U32_MAX; 957 + info->hdr.rc = VERR_VERSION_MISMATCH; 958 + } 959 + 960 + info->u.out.driver_version = VBG_IOC_VERSION; 961 + info->u.out.driver_revision = 0; 962 + info->u.out.reserved1 = 0; 963 + info->u.out.reserved2 = 0; 964 + 965 + return 0; 966 + } 967 + 968 + static bool vbg_wait_event_cond(struct vbg_dev *gdev, 969 + struct vbg_session *session, 970 + u32 event_mask) 971 + { 972 + unsigned long flags; 973 + bool wakeup; 974 + u32 events; 975 + 976 + spin_lock_irqsave(&gdev->event_spinlock, flags); 977 + 978 + events = gdev->pending_events & event_mask; 979 + wakeup = events || session->cancel_waiters; 980 + 981 + spin_unlock_irqrestore(&gdev->event_spinlock, flags); 982 + 983 + return wakeup; 984 + } 985 + 986 + /* Must be called with the event_lock held */ 987 + static u32 vbg_consume_events_locked(struct vbg_dev *gdev, 988 + struct vbg_session *session, 989 + u32 event_mask) 990 + { 991 + u32 events = gdev->pending_events & event_mask; 992 + 993 + gdev->pending_events &= ~events; 994 + return events; 995 + } 996 + 997 + static int vbg_ioctl_wait_for_events(struct vbg_dev *gdev, 998 + struct vbg_session *session, 999 + struct vbg_ioctl_wait_for_events *wait) 1000 + { 1001 + u32 timeout_ms = wait->u.in.timeout_ms; 1002 + u32 event_mask = wait->u.in.events; 1003 + unsigned long flags; 1004 + long timeout; 1005 + int ret = 0; 1006 + 1007 + if (vbg_ioctl_chk(&wait->hdr, sizeof(wait->u.in), sizeof(wait->u.out))) 1008 + return -EINVAL; 1009 + 1010 + if (timeout_ms == U32_MAX) 1011 + timeout = MAX_SCHEDULE_TIMEOUT; 1012 + else 1013 + timeout = msecs_to_jiffies(timeout_ms); 1014 + 1015 + wait->u.out.events = 0; 1016 + do { 1017 + timeout = wait_event_interruptible_timeout( 1018 + gdev->event_wq, 1019 + vbg_wait_event_cond(gdev, session, event_mask), 1020 + timeout); 1021 + 1022 + spin_lock_irqsave(&gdev->event_spinlock, flags); 1023 + 1024 + if (timeout < 0 || 
session->cancel_waiters) { 1025 + ret = -EINTR; 1026 + } else if (timeout == 0) { 1027 + ret = -ETIMEDOUT; 1028 + } else { 1029 + wait->u.out.events = 1030 + vbg_consume_events_locked(gdev, session, event_mask); 1031 + } 1032 + 1033 + spin_unlock_irqrestore(&gdev->event_spinlock, flags); 1034 + 1035 + /* 1036 + * Someone else may have consumed the event(s) first, in 1037 + * which case we go back to waiting. 1038 + */ 1039 + } while (ret == 0 && wait->u.out.events == 0); 1040 + 1041 + return ret; 1042 + } 1043 + 1044 + static int vbg_ioctl_interrupt_all_wait_events(struct vbg_dev *gdev, 1045 + struct vbg_session *session, 1046 + struct vbg_ioctl_hdr *hdr) 1047 + { 1048 + unsigned long flags; 1049 + 1050 + if (hdr->size_in != sizeof(*hdr) || hdr->size_out != sizeof(*hdr)) 1051 + return -EINVAL; 1052 + 1053 + spin_lock_irqsave(&gdev->event_spinlock, flags); 1054 + session->cancel_waiters = true; 1055 + spin_unlock_irqrestore(&gdev->event_spinlock, flags); 1056 + 1057 + wake_up(&gdev->event_wq); 1058 + 1059 + return 0; 1060 + } 1061 + 1062 + /** 1063 + * Checks if the VMM request is allowed in the context of the given session. 1064 + * Return: 0 or negative errno value. 1065 + * @gdev: The Guest extension device. 1066 + * @session: The calling session. 1067 + * @req: The request. 1068 + */ 1069 + static int vbg_req_allowed(struct vbg_dev *gdev, struct vbg_session *session, 1070 + const struct vmmdev_request_header *req) 1071 + { 1072 + const struct vmmdev_guest_status *guest_status; 1073 + bool trusted_apps_only; 1074 + 1075 + switch (req->request_type) { 1076 + /* Trusted users apps only. 
*/ 1077 + case VMMDEVREQ_QUERY_CREDENTIALS: 1078 + case VMMDEVREQ_REPORT_CREDENTIALS_JUDGEMENT: 1079 + case VMMDEVREQ_REGISTER_SHARED_MODULE: 1080 + case VMMDEVREQ_UNREGISTER_SHARED_MODULE: 1081 + case VMMDEVREQ_WRITE_COREDUMP: 1082 + case VMMDEVREQ_GET_CPU_HOTPLUG_REQ: 1083 + case VMMDEVREQ_SET_CPU_HOTPLUG_STATUS: 1084 + case VMMDEVREQ_CHECK_SHARED_MODULES: 1085 + case VMMDEVREQ_GET_PAGE_SHARING_STATUS: 1086 + case VMMDEVREQ_DEBUG_IS_PAGE_SHARED: 1087 + case VMMDEVREQ_REPORT_GUEST_STATS: 1088 + case VMMDEVREQ_REPORT_GUEST_USER_STATE: 1089 + case VMMDEVREQ_GET_STATISTICS_CHANGE_REQ: 1090 + trusted_apps_only = true; 1091 + break; 1092 + 1093 + /* Anyone. */ 1094 + case VMMDEVREQ_GET_MOUSE_STATUS: 1095 + case VMMDEVREQ_SET_MOUSE_STATUS: 1096 + case VMMDEVREQ_SET_POINTER_SHAPE: 1097 + case VMMDEVREQ_GET_HOST_VERSION: 1098 + case VMMDEVREQ_IDLE: 1099 + case VMMDEVREQ_GET_HOST_TIME: 1100 + case VMMDEVREQ_SET_POWER_STATUS: 1101 + case VMMDEVREQ_ACKNOWLEDGE_EVENTS: 1102 + case VMMDEVREQ_CTL_GUEST_FILTER_MASK: 1103 + case VMMDEVREQ_REPORT_GUEST_STATUS: 1104 + case VMMDEVREQ_GET_DISPLAY_CHANGE_REQ: 1105 + case VMMDEVREQ_VIDEMODE_SUPPORTED: 1106 + case VMMDEVREQ_GET_HEIGHT_REDUCTION: 1107 + case VMMDEVREQ_GET_DISPLAY_CHANGE_REQ2: 1108 + case VMMDEVREQ_VIDEMODE_SUPPORTED2: 1109 + case VMMDEVREQ_VIDEO_ACCEL_ENABLE: 1110 + case VMMDEVREQ_VIDEO_ACCEL_FLUSH: 1111 + case VMMDEVREQ_VIDEO_SET_VISIBLE_REGION: 1112 + case VMMDEVREQ_GET_DISPLAY_CHANGE_REQEX: 1113 + case VMMDEVREQ_GET_SEAMLESS_CHANGE_REQ: 1114 + case VMMDEVREQ_GET_VRDPCHANGE_REQ: 1115 + case VMMDEVREQ_LOG_STRING: 1116 + case VMMDEVREQ_GET_SESSION_ID: 1117 + trusted_apps_only = false; 1118 + break; 1119 + 1120 + /* Depends on the request parameters... 
*/ 1121 + case VMMDEVREQ_REPORT_GUEST_CAPABILITIES: 1122 + guest_status = (const struct vmmdev_guest_status *)req; 1123 + switch (guest_status->facility) { 1124 + case VBOXGUEST_FACILITY_TYPE_ALL: 1125 + case VBOXGUEST_FACILITY_TYPE_VBOXGUEST_DRIVER: 1126 + vbg_err("Denying userspace vmm report guest cap. call facility %#08x\n", 1127 + guest_status->facility); 1128 + return -EPERM; 1129 + case VBOXGUEST_FACILITY_TYPE_VBOX_SERVICE: 1130 + trusted_apps_only = true; 1131 + break; 1132 + case VBOXGUEST_FACILITY_TYPE_VBOX_TRAY_CLIENT: 1133 + case VBOXGUEST_FACILITY_TYPE_SEAMLESS: 1134 + case VBOXGUEST_FACILITY_TYPE_GRAPHICS: 1135 + default: 1136 + trusted_apps_only = false; 1137 + break; 1138 + } 1139 + break; 1140 + 1141 + /* Anything else is not allowed. */ 1142 + default: 1143 + vbg_err("Denying userspace vmm call type %#08x\n", 1144 + req->request_type); 1145 + return -EPERM; 1146 + } 1147 + 1148 + if (trusted_apps_only && session->user_session) { 1149 + vbg_err("Denying userspace vmm call type %#08x through vboxuser device node\n", 1150 + req->request_type); 1151 + return -EPERM; 1152 + } 1153 + 1154 + return 0; 1155 + } 1156 + 1157 + static int vbg_ioctl_vmmrequest(struct vbg_dev *gdev, 1158 + struct vbg_session *session, void *data) 1159 + { 1160 + struct vbg_ioctl_hdr *hdr = data; 1161 + int ret; 1162 + 1163 + if (hdr->size_in != hdr->size_out) 1164 + return -EINVAL; 1165 + 1166 + if (hdr->size_in > VMMDEV_MAX_VMMDEVREQ_SIZE) 1167 + return -E2BIG; 1168 + 1169 + if (hdr->type == VBG_IOCTL_HDR_TYPE_DEFAULT) 1170 + return -EINVAL; 1171 + 1172 + ret = vbg_req_allowed(gdev, session, data); 1173 + if (ret < 0) 1174 + return ret; 1175 + 1176 + vbg_req_perform(gdev, data); 1177 + WARN_ON(hdr->rc == VINF_HGCM_ASYNC_EXECUTE); 1178 + 1179 + return 0; 1180 + } 1181 + 1182 + static int vbg_ioctl_hgcm_connect(struct vbg_dev *gdev, 1183 + struct vbg_session *session, 1184 + struct vbg_ioctl_hgcm_connect *conn) 1185 + { 1186 + u32 client_id; 1187 + int i, ret; 1188 + 1189 + if 
(vbg_ioctl_chk(&conn->hdr, sizeof(conn->u.in), sizeof(conn->u.out))) 1190 + return -EINVAL; 1191 + 1192 + /* Find a free place in the sessions clients array and claim it */ 1193 + mutex_lock(&gdev->session_mutex); 1194 + for (i = 0; i < ARRAY_SIZE(session->hgcm_client_ids); i++) { 1195 + if (!session->hgcm_client_ids[i]) { 1196 + session->hgcm_client_ids[i] = U32_MAX; 1197 + break; 1198 + } 1199 + } 1200 + mutex_unlock(&gdev->session_mutex); 1201 + 1202 + if (i >= ARRAY_SIZE(session->hgcm_client_ids)) 1203 + return -EMFILE; 1204 + 1205 + ret = vbg_hgcm_connect(gdev, &conn->u.in.loc, &client_id, 1206 + &conn->hdr.rc); 1207 + 1208 + mutex_lock(&gdev->session_mutex); 1209 + if (ret == 0 && conn->hdr.rc >= 0) { 1210 + conn->u.out.client_id = client_id; 1211 + session->hgcm_client_ids[i] = client_id; 1212 + } else { 1213 + conn->u.out.client_id = 0; 1214 + session->hgcm_client_ids[i] = 0; 1215 + } 1216 + mutex_unlock(&gdev->session_mutex); 1217 + 1218 + return ret; 1219 + } 1220 + 1221 + static int vbg_ioctl_hgcm_disconnect(struct vbg_dev *gdev, 1222 + struct vbg_session *session, 1223 + struct vbg_ioctl_hgcm_disconnect *disconn) 1224 + { 1225 + u32 client_id; 1226 + int i, ret; 1227 + 1228 + if (vbg_ioctl_chk(&disconn->hdr, sizeof(disconn->u.in), 0)) 1229 + return -EINVAL; 1230 + 1231 + client_id = disconn->u.in.client_id; 1232 + if (client_id == 0 || client_id == U32_MAX) 1233 + return -EINVAL; 1234 + 1235 + mutex_lock(&gdev->session_mutex); 1236 + for (i = 0; i < ARRAY_SIZE(session->hgcm_client_ids); i++) { 1237 + if (session->hgcm_client_ids[i] == client_id) { 1238 + session->hgcm_client_ids[i] = U32_MAX; 1239 + break; 1240 + } 1241 + } 1242 + mutex_unlock(&gdev->session_mutex); 1243 + 1244 + if (i >= ARRAY_SIZE(session->hgcm_client_ids)) 1245 + return -EINVAL; 1246 + 1247 + ret = vbg_hgcm_disconnect(gdev, client_id, &disconn->hdr.rc); 1248 + 1249 + mutex_lock(&gdev->session_mutex); 1250 + if (ret == 0 && disconn->hdr.rc >= 0) 1251 + session->hgcm_client_ids[i] = 0; 
1252 + else 1253 + session->hgcm_client_ids[i] = client_id; 1254 + mutex_unlock(&gdev->session_mutex); 1255 + 1256 + return ret; 1257 + } 1258 + 1259 + static int vbg_ioctl_hgcm_call(struct vbg_dev *gdev, 1260 + struct vbg_session *session, bool f32bit, 1261 + struct vbg_ioctl_hgcm_call *call) 1262 + { 1263 + size_t actual_size; 1264 + u32 client_id; 1265 + int i, ret; 1266 + 1267 + if (call->hdr.size_in < sizeof(*call)) 1268 + return -EINVAL; 1269 + 1270 + if (call->hdr.size_in != call->hdr.size_out) 1271 + return -EINVAL; 1272 + 1273 + if (call->parm_count > VMMDEV_HGCM_MAX_PARMS) 1274 + return -E2BIG; 1275 + 1276 + client_id = call->client_id; 1277 + if (client_id == 0 || client_id == U32_MAX) 1278 + return -EINVAL; 1279 + 1280 + actual_size = sizeof(*call); 1281 + if (f32bit) 1282 + actual_size += call->parm_count * 1283 + sizeof(struct vmmdev_hgcm_function_parameter32); 1284 + else 1285 + actual_size += call->parm_count * 1286 + sizeof(struct vmmdev_hgcm_function_parameter); 1287 + if (call->hdr.size_in < actual_size) { 1288 + vbg_debug("VBG_IOCTL_HGCM_CALL: hdr.size_in %d required size is %zd\n", 1289 + call->hdr.size_in, actual_size); 1290 + return -EINVAL; 1291 + } 1292 + call->hdr.size_out = actual_size; 1293 + 1294 + /* 1295 + * Validate the client id. 1296 + */ 1297 + mutex_lock(&gdev->session_mutex); 1298 + for (i = 0; i < ARRAY_SIZE(session->hgcm_client_ids); i++) 1299 + if (session->hgcm_client_ids[i] == client_id) 1300 + break; 1301 + mutex_unlock(&gdev->session_mutex); 1302 + if (i >= ARRAY_SIZE(session->hgcm_client_ids)) { 1303 + vbg_debug("VBG_IOCTL_HGCM_CALL: INVALID handle. 
u32Client=%#08x\n", 1304 + client_id); 1305 + return -EINVAL; 1306 + } 1307 + 1308 + if (f32bit) 1309 + ret = vbg_hgcm_call32(gdev, client_id, 1310 + call->function, call->timeout_ms, 1311 + VBG_IOCTL_HGCM_CALL_PARMS32(call), 1312 + call->parm_count, &call->hdr.rc); 1313 + else 1314 + ret = vbg_hgcm_call(gdev, client_id, 1315 + call->function, call->timeout_ms, 1316 + VBG_IOCTL_HGCM_CALL_PARMS(call), 1317 + call->parm_count, &call->hdr.rc); 1318 + 1319 + if (ret == -E2BIG) { 1320 + /* E2BIG needs to be reported through the hdr.rc field. */ 1321 + call->hdr.rc = VERR_OUT_OF_RANGE; 1322 + ret = 0; 1323 + } 1324 + 1325 + if (ret && ret != -EINTR && ret != -ETIMEDOUT) 1326 + vbg_err("VBG_IOCTL_HGCM_CALL error: %d\n", ret); 1327 + 1328 + return ret; 1329 + } 1330 + 1331 + static int vbg_ioctl_log(struct vbg_ioctl_log *log) 1332 + { 1333 + if (log->hdr.size_out != sizeof(log->hdr)) 1334 + return -EINVAL; 1335 + 1336 + vbg_info("%.*s", (int)(log->hdr.size_in - sizeof(log->hdr)), 1337 + log->u.in.msg); 1338 + 1339 + return 0; 1340 + } 1341 + 1342 + static int vbg_ioctl_change_filter_mask(struct vbg_dev *gdev, 1343 + struct vbg_session *session, 1344 + struct vbg_ioctl_change_filter *filter) 1345 + { 1346 + u32 or_mask, not_mask; 1347 + 1348 + if (vbg_ioctl_chk(&filter->hdr, sizeof(filter->u.in), 0)) 1349 + return -EINVAL; 1350 + 1351 + or_mask = filter->u.in.or_mask; 1352 + not_mask = filter->u.in.not_mask; 1353 + 1354 + if ((or_mask | not_mask) & ~VMMDEV_EVENT_VALID_EVENT_MASK) 1355 + return -EINVAL; 1356 + 1357 + return vbg_set_session_event_filter(gdev, session, or_mask, not_mask, 1358 + false); 1359 + } 1360 + 1361 + static int vbg_ioctl_change_guest_capabilities(struct vbg_dev *gdev, 1362 + struct vbg_session *session, struct vbg_ioctl_set_guest_caps *caps) 1363 + { 1364 + u32 or_mask, not_mask; 1365 + int ret; 1366 + 1367 + if (vbg_ioctl_chk(&caps->hdr, sizeof(caps->u.in), sizeof(caps->u.out))) 1368 + return -EINVAL; 1369 + 1370 + or_mask = caps->u.in.or_mask; 1371 + 
not_mask = caps->u.in.not_mask; 1372 + 1373 + if ((or_mask | not_mask) & ~VMMDEV_EVENT_VALID_EVENT_MASK) 1374 + return -EINVAL; 1375 + 1376 + ret = vbg_set_session_capabilities(gdev, session, or_mask, not_mask, 1377 + false); 1378 + if (ret) 1379 + return ret; 1380 + 1381 + caps->u.out.session_caps = session->guest_caps; 1382 + caps->u.out.global_caps = gdev->guest_caps_host; 1383 + 1384 + return 0; 1385 + } 1386 + 1387 + static int vbg_ioctl_check_balloon(struct vbg_dev *gdev, 1388 + struct vbg_ioctl_check_balloon *balloon_info) 1389 + { 1390 + if (vbg_ioctl_chk(&balloon_info->hdr, 0, sizeof(balloon_info->u.out))) 1391 + return -EINVAL; 1392 + 1393 + balloon_info->u.out.balloon_chunks = gdev->mem_balloon.chunks; 1394 + /* 1395 + * Under Linux we handle VMMDEV_EVENT_BALLOON_CHANGE_REQUEST 1396 + * events entirely in the kernel, see vbg_core_isr(). 1397 + */ 1398 + balloon_info->u.out.handle_in_r3 = false; 1399 + 1400 + return 0; 1401 + } 1402 + 1403 + static int vbg_ioctl_write_core_dump(struct vbg_dev *gdev, 1404 + struct vbg_ioctl_write_coredump *dump) 1405 + { 1406 + struct vmmdev_write_core_dump *req; 1407 + 1408 + if (vbg_ioctl_chk(&dump->hdr, sizeof(dump->u.in), 0)) 1409 + return -EINVAL; 1410 + 1411 + req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_WRITE_COREDUMP); 1412 + if (!req) 1413 + return -ENOMEM; 1414 + 1415 + req->flags = dump->u.in.flags; 1416 + dump->hdr.rc = vbg_req_perform(gdev, req); 1417 + 1418 + kfree(req); 1419 + return 0; 1420 + } 1421 + 1422 + /** 1423 + * Common IOCtl for user to kernel communication. 1424 + * Return: 0 or negative errno value. 1425 + * @session: The client session. 1426 + * @req: The requested function. 1427 + * @data: The i/o data buffer, minimum size sizeof(struct vbg_ioctl_hdr). 
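The size validation mentioned here is done by vbg_ioctl_chk earlier in this hunk: the caller's declared in/out sizes must exactly equal header plus expected payload. A standalone sketch of that check, with a simplified two-field header (names and layout invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct vbg_ioctl_hdr: sizes include the header. */
struct ioctl_hdr {
    uint32_t size_in;
    uint32_t size_out;
};

/* Reject a request whose declared sizes don't match the expected payload,
 * mirroring what vbg_ioctl_chk() does. */
static int ioctl_chk(const struct ioctl_hdr *hdr, size_t in_size,
                     size_t out_size)
{
    if (hdr->size_in != sizeof(*hdr) + in_size ||
        hdr->size_out != sizeof(*hdr) + out_size)
        return -1; /* the driver returns -EINVAL here */
    return 0;
}
```

Requiring exact (not merely minimum) sizes means a mismatched userspace/kernel structure layout fails loudly with -EINVAL instead of silently truncating.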
1428 + */ 1429 + int vbg_core_ioctl(struct vbg_session *session, unsigned int req, void *data) 1430 + { 1431 + unsigned int req_no_size = req & ~IOCSIZE_MASK; 1432 + struct vbg_dev *gdev = session->gdev; 1433 + struct vbg_ioctl_hdr *hdr = data; 1434 + bool f32bit = false; 1435 + 1436 + hdr->rc = VINF_SUCCESS; 1437 + if (!hdr->size_out) 1438 + hdr->size_out = hdr->size_in; 1439 + 1440 + /* 1441 + * hdr->version and hdr->size_in / hdr->size_out minimum size are 1442 + * already checked by vbg_misc_device_ioctl(). 1443 + */ 1444 + 1445 + /* For VMMDEV_REQUEST hdr->type != VBG_IOCTL_HDR_TYPE_DEFAULT */ 1446 + if (req_no_size == VBG_IOCTL_VMMDEV_REQUEST(0) || 1447 + req == VBG_IOCTL_VMMDEV_REQUEST_BIG) 1448 + return vbg_ioctl_vmmrequest(gdev, session, data); 1449 + 1450 + if (hdr->type != VBG_IOCTL_HDR_TYPE_DEFAULT) 1451 + return -EINVAL; 1452 + 1453 + /* Fixed size requests. */ 1454 + switch (req) { 1455 + case VBG_IOCTL_DRIVER_VERSION_INFO: 1456 + return vbg_ioctl_driver_version_info(data); 1457 + case VBG_IOCTL_HGCM_CONNECT: 1458 + return vbg_ioctl_hgcm_connect(gdev, session, data); 1459 + case VBG_IOCTL_HGCM_DISCONNECT: 1460 + return vbg_ioctl_hgcm_disconnect(gdev, session, data); 1461 + case VBG_IOCTL_WAIT_FOR_EVENTS: 1462 + return vbg_ioctl_wait_for_events(gdev, session, data); 1463 + case VBG_IOCTL_INTERRUPT_ALL_WAIT_FOR_EVENTS: 1464 + return vbg_ioctl_interrupt_all_wait_events(gdev, session, data); 1465 + case VBG_IOCTL_CHANGE_FILTER_MASK: 1466 + return vbg_ioctl_change_filter_mask(gdev, session, data); 1467 + case VBG_IOCTL_CHANGE_GUEST_CAPABILITIES: 1468 + return vbg_ioctl_change_guest_capabilities(gdev, session, data); 1469 + case VBG_IOCTL_CHECK_BALLOON: 1470 + return vbg_ioctl_check_balloon(gdev, data); 1471 + case VBG_IOCTL_WRITE_CORE_DUMP: 1472 + return vbg_ioctl_write_core_dump(gdev, data); 1473 + } 1474 + 1475 + /* Variable sized requests. 
*/ 1476 + switch (req_no_size) { 1477 + #ifdef CONFIG_COMPAT 1478 + case VBG_IOCTL_HGCM_CALL_32(0): 1479 + f32bit = true; 1480 + /* Fall through */ 1481 + #endif 1482 + case VBG_IOCTL_HGCM_CALL(0): 1483 + return vbg_ioctl_hgcm_call(gdev, session, f32bit, data); 1484 + case VBG_IOCTL_LOG(0): 1485 + return vbg_ioctl_log(data); 1486 + } 1487 + 1488 + vbg_debug("VGDrvCommonIoCtl: Unknown req %#08x\n", req); 1489 + return -ENOTTY; 1490 + } 1491 + 1492 + /** 1493 + * Report guest supported mouse-features to the host. 1494 + * 1495 + * Return: 0 or negative errno value. 1496 + * @gdev: The Guest extension device. 1497 + * @features: The set of features to report to the host. 1498 + */ 1499 + int vbg_core_set_mouse_status(struct vbg_dev *gdev, u32 features) 1500 + { 1501 + struct vmmdev_mouse_status *req; 1502 + int rc; 1503 + 1504 + req = vbg_req_alloc(sizeof(*req), VMMDEVREQ_SET_MOUSE_STATUS); 1505 + if (!req) 1506 + return -ENOMEM; 1507 + 1508 + req->mouse_features = features; 1509 + req->pointer_pos_x = 0; 1510 + req->pointer_pos_y = 0; 1511 + 1512 + rc = vbg_req_perform(gdev, req); 1513 + if (rc < 0) 1514 + vbg_err("%s error, rc: %d\n", __func__, rc); 1515 + 1516 + kfree(req); 1517 + return vbg_status_code_to_errno(rc); 1518 + } 1519 + 1520 + /** Core interrupt service routine. */ 1521 + irqreturn_t vbg_core_isr(int irq, void *dev_id) 1522 + { 1523 + struct vbg_dev *gdev = dev_id; 1524 + struct vmmdev_events *req = gdev->ack_events_req; 1525 + bool mouse_position_changed = false; 1526 + unsigned long flags; 1527 + u32 events = 0; 1528 + int rc; 1529 + 1530 + if (!gdev->mmio->V.V1_04.have_events) 1531 + return IRQ_NONE; 1532 + 1533 + /* Get and acknowledge events.
*/ 1534 + req->header.rc = VERR_INTERNAL_ERROR; 1535 + req->events = 0; 1536 + rc = vbg_req_perform(gdev, req); 1537 + if (rc < 0) { 1538 + vbg_err("Error performing events req, rc: %d\n", rc); 1539 + return IRQ_NONE; 1540 + } 1541 + 1542 + events = req->events; 1543 + 1544 + if (events & VMMDEV_EVENT_MOUSE_POSITION_CHANGED) { 1545 + mouse_position_changed = true; 1546 + events &= ~VMMDEV_EVENT_MOUSE_POSITION_CHANGED; 1547 + } 1548 + 1549 + if (events & VMMDEV_EVENT_HGCM) { 1550 + wake_up(&gdev->hgcm_wq); 1551 + events &= ~VMMDEV_EVENT_HGCM; 1552 + } 1553 + 1554 + if (events & VMMDEV_EVENT_BALLOON_CHANGE_REQUEST) { 1555 + schedule_work(&gdev->mem_balloon.work); 1556 + events &= ~VMMDEV_EVENT_BALLOON_CHANGE_REQUEST; 1557 + } 1558 + 1559 + if (events) { 1560 + spin_lock_irqsave(&gdev->event_spinlock, flags); 1561 + gdev->pending_events |= events; 1562 + spin_unlock_irqrestore(&gdev->event_spinlock, flags); 1563 + 1564 + wake_up(&gdev->event_wq); 1565 + } 1566 + 1567 + if (mouse_position_changed) 1568 + vbg_linux_mouse_event(gdev); 1569 + 1570 + return IRQ_HANDLED; 1571 + }
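The interrupt handler above consumes each event bit it knows how to handle and only publishes the leftovers to sleeping waiters. A minimal userspace sketch of that mask-and-clear pattern (the `EV_*` values here are illustrative, not the real `VMMDEV_EVENT_*` constants):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative event bits; not the real VMMDEV_EVENT_* values. */
#define EV_MOUSE_POSITION_CHANGED 0x01u
#define EV_HGCM                   0x02u
#define EV_BALLOON_CHANGE_REQUEST 0x04u

/*
 * Mask-and-clear dispatch as done in vbg_core_isr(): every event the
 * ISR handles itself is removed from the mask, and whatever remains is
 * what would be OR-ed into gdev->pending_events for the
 * wait-for-events ioctl path.
 */
static uint32_t dispatch_events(uint32_t events, bool *mouse, bool *hgcm,
				bool *balloon)
{
	*mouse = events & EV_MOUSE_POSITION_CHANGED;
	events &= ~EV_MOUSE_POSITION_CHANGED;

	*hgcm = events & EV_HGCM;
	events &= ~EV_HGCM;

	*balloon = events & EV_BALLOON_CHANGE_REQUEST;
	events &= ~EV_BALLOON_CHANGE_REQUEST;

	return events; /* leftover bits go to pending_events */
}
```

Clearing the handled bits before publishing the rest is what lets the ISR wake only the threads that actually have something to consume.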
+174
drivers/virt/vboxguest/vboxguest_core.h
··· 1 + /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ 2 + /* Copyright (C) 2010-2016 Oracle Corporation */ 3 + 4 + #ifndef __VBOXGUEST_CORE_H__ 5 + #define __VBOXGUEST_CORE_H__ 6 + 7 + #include <linux/input.h> 8 + #include <linux/interrupt.h> 9 + #include <linux/kernel.h> 10 + #include <linux/list.h> 11 + #include <linux/miscdevice.h> 12 + #include <linux/spinlock.h> 13 + #include <linux/wait.h> 14 + #include <linux/workqueue.h> 15 + #include <linux/vboxguest.h> 16 + #include "vmmdev.h" 17 + 18 + struct vbg_session; 19 + 20 + /** VBox guest memory balloon. */ 21 + struct vbg_mem_balloon { 22 + /** Work handling VMMDEV_EVENT_BALLOON_CHANGE_REQUEST events */ 23 + struct work_struct work; 24 + /** Pre-allocated vmmdev_memballoon_info req for query */ 25 + struct vmmdev_memballoon_info *get_req; 26 + /** Pre-allocated vmmdev_memballoon_change req for inflate / deflate */ 27 + struct vmmdev_memballoon_change *change_req; 28 + /** The current number of chunks in the balloon. */ 29 + u32 chunks; 30 + /** The maximum number of chunks in the balloon. */ 31 + u32 max_chunks; 32 + /** 33 + * Array of pointers to page arrays. A page * array is allocated for 34 + * each chunk when inflating, and freed when deflating. 35 + */ 36 + struct page ***pages; 37 + }; 38 + 39 + /** 40 + * Per bit usage tracker for a u32 mask. 41 + * 42 + * Used for optimal handling of guest properties and event filter. 43 + */ 44 + struct vbg_bit_usage_tracker { 45 + /** Per bit usage counters. */ 46 + u32 per_bit_usage[32]; 47 + /** The current mask according to per_bit_usage. */ 48 + u32 mask; 49 + }; 50 + 51 + /** VBox guest device (data) extension. */ 52 + struct vbg_dev { 53 + struct device *dev; 54 + /** The base of the adapter I/O ports. */ 55 + u16 io_port; 56 + /** Pointer to the mapping of the VMMDev adapter memory.
*/ 57 + struct vmmdev_memory *mmio; 58 + /** Host version */ 59 + char host_version[64]; 60 + /** Host features */ 61 + unsigned int host_features; 62 + /** 63 + * Dummy page and vmap address for reserved kernel virtual-address 64 + * space for the guest mappings, only used on hosts lacking vtx. 65 + */ 66 + struct page *guest_mappings_dummy_page; 67 + void *guest_mappings; 68 + /** Spinlock protecting pending_events. */ 69 + spinlock_t event_spinlock; 70 + /** Preallocated struct vmmdev_events for the IRQ handler. */ 71 + struct vmmdev_events *ack_events_req; 72 + /** Wait-for-event list for threads waiting for multiple events. */ 73 + wait_queue_head_t event_wq; 74 + /** Mask of pending events. */ 75 + u32 pending_events; 76 + /** Wait-for-event list for threads waiting on HGCM async completion. */ 77 + wait_queue_head_t hgcm_wq; 78 + /** Pre-allocated hgcm cancel2 req. for cancellation on timeout */ 79 + struct vmmdev_hgcm_cancel2 *cancel_req; 80 + /** Mutex protecting cancel_req accesses */ 81 + struct mutex cancel_req_mutex; 82 + /** Pre-allocated mouse-status request for the input-device handling. */ 83 + struct vmmdev_mouse_status *mouse_status_req; 84 + /** Input device for reporting abs mouse coordinates to the guest. */ 85 + struct input_dev *input; 86 + 87 + /** Memory balloon information. */ 88 + struct vbg_mem_balloon mem_balloon; 89 + 90 + /** Lock for session related items in vbg_dev and vbg_session */ 91 + struct mutex session_mutex; 92 + /** Events we won't permit anyone to filter out. */ 93 + u32 fixed_events; 94 + /** 95 + * Usage counters for the host events (excludes fixed events), 96 + * Protected by session_mutex. 97 + */ 98 + struct vbg_bit_usage_tracker event_filter_tracker; 99 + /** 100 + * The event filter last reported to the host (or UINT32_MAX). 101 + * Protected by session_mutex. 102 + */ 103 + u32 event_filter_host; 104 + 105 + /** 106 + * Usage counters for guest capabilities. 
Indexed by capability bit 107 + * number, one count per session using a capability. 108 + * Protected by session_mutex. 109 + */ 110 + struct vbg_bit_usage_tracker guest_caps_tracker; 111 + /** 112 + * The guest capabilities last reported to the host (or UINT32_MAX). 113 + * Protected by session_mutex. 114 + */ 115 + u32 guest_caps_host; 116 + 117 + /** 118 + * Heartbeat timer which fires with interval 119 + * cNsHearbeatInterval and its handler sends 120 + * VMMDEVREQ_GUEST_HEARTBEAT to VMMDev. 121 + */ 122 + struct timer_list heartbeat_timer; 123 + /** Heartbeat timer interval in ms. */ 124 + int heartbeat_interval_ms; 125 + /** Preallocated VMMDEVREQ_GUEST_HEARTBEAT request. */ 126 + struct vmmdev_request_header *guest_heartbeat_req; 127 + 128 + /** "vboxguest" char-device */ 129 + struct miscdevice misc_device; 130 + /** "vboxuser" char-device */ 131 + struct miscdevice misc_device_user; 132 + }; 133 + 134 + /** The VBoxGuest per session data. */ 135 + struct vbg_session { 136 + /** Pointer to the device extension. */ 137 + struct vbg_dev *gdev; 138 + 139 + /** 140 + * Array containing HGCM client IDs associated with this session. 141 + * These will be automatically disconnected when the session is closed. 142 + * Protected by vbg_gdev.session_mutex. 143 + */ 144 + u32 hgcm_client_ids[64]; 145 + /** 146 + * Host events requested by the session. 147 + * An event type requested in any guest session will be added to the 148 + * host filter. Protected by vbg_gdev.session_mutex. 149 + */ 150 + u32 event_filter; 151 + /** 152 + * Guest capabilities for this session. 153 + * A capability claimed by any guest session will be reported to the 154 + * host. Protected by vbg_gdev.session_mutex. 155 + */ 156 + u32 guest_caps; 157 + /** Does this session belong to a root process or a user one? */ 158 + bool user_session; 159 + /** Set on CANCEL_ALL_WAITEVENTS, protected by vbg_devevent_spinlock. 
*/ 160 + bool cancel_waiters; 161 + }; 162 + 163 + int vbg_core_init(struct vbg_dev *gdev, u32 fixed_events); 164 + void vbg_core_exit(struct vbg_dev *gdev); 165 + struct vbg_session *vbg_core_open_session(struct vbg_dev *gdev, bool user); 166 + void vbg_core_close_session(struct vbg_session *session); 167 + int vbg_core_ioctl(struct vbg_session *session, unsigned int req, void *data); 168 + int vbg_core_set_mouse_status(struct vbg_dev *gdev, u32 features); 169 + 170 + irqreturn_t vbg_core_isr(int irq, void *dev_id); 171 + 172 + void vbg_linux_mouse_event(struct vbg_dev *gdev); 173 + 174 + #endif
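`struct vbg_bit_usage_tracker` in the header above keeps one reference count per bit of a u32 mask, so a host-side filter bit stays set as long as at least one session still wants it. A rough userspace sketch of those semantics (the helper names are mine, not the driver's):

```c
#include <stdint.h>

/*
 * Per-bit reference counting for a u32 mask, mirroring the idea behind
 * struct vbg_bit_usage_tracker: 'mask' has a bit set iff at least one
 * holder has acquired that bit.
 */
struct bit_usage_tracker {
	uint32_t per_bit_usage[32];
	uint32_t mask;
};

static void tracker_acquire(struct bit_usage_tracker *t, uint32_t bits)
{
	for (int i = 0; i < 32; i++)
		if ((bits & (1u << i)) && t->per_bit_usage[i]++ == 0)
			t->mask |= 1u << i;	/* first user: bit goes live */
}

static void tracker_release(struct bit_usage_tracker *t, uint32_t bits)
{
	for (int i = 0; i < 32; i++)
		if ((bits & (1u << i)) && --t->per_bit_usage[i] == 0)
			t->mask &= ~(1u << i);	/* last user gone: bit drops */
}
```

In the driver the equivalent update is done under `session_mutex`, and only a change of the derived mask triggers a new report to the host.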
+466
drivers/virt/vboxguest/vboxguest_linux.c
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * vboxguest linux pci driver, char-dev and input-device code, 4 + * 5 + * Copyright (C) 2006-2016 Oracle Corporation 6 + */ 7 + 8 + #include <linux/input.h> 9 + #include <linux/kernel.h> 10 + #include <linux/miscdevice.h> 11 + #include <linux/module.h> 12 + #include <linux/pci.h> 13 + #include <linux/poll.h> 14 + #include <linux/vbox_utils.h> 15 + #include "vboxguest_core.h" 16 + 17 + /** The device name. */ 18 + #define DEVICE_NAME "vboxguest" 19 + /** The device name for the device node open to everyone. */ 20 + #define DEVICE_NAME_USER "vboxuser" 21 + /** VirtualBox PCI vendor ID. */ 22 + #define VBOX_VENDORID 0x80ee 23 + /** VMMDev PCI card product ID. */ 24 + #define VMMDEV_DEVICEID 0xcafe 25 + 26 + /** Mutex protecting the global vbg_gdev pointer used by vbg_get/put_gdev. */ 27 + static DEFINE_MUTEX(vbg_gdev_mutex); 28 + /** Global vbg_gdev pointer used by vbg_get/put_gdev. */ 29 + static struct vbg_dev *vbg_gdev; 30 + 31 + static int vbg_misc_device_open(struct inode *inode, struct file *filp) 32 + { 33 + struct vbg_session *session; 34 + struct vbg_dev *gdev; 35 + 36 + /* misc_open sets filp->private_data to our misc device */ 37 + gdev = container_of(filp->private_data, struct vbg_dev, misc_device); 38 + 39 + session = vbg_core_open_session(gdev, false); 40 + if (IS_ERR(session)) 41 + return PTR_ERR(session); 42 + 43 + filp->private_data = session; 44 + return 0; 45 + } 46 + 47 + static int vbg_misc_device_user_open(struct inode *inode, struct file *filp) 48 + { 49 + struct vbg_session *session; 50 + struct vbg_dev *gdev; 51 + 52 + /* misc_open sets filp->private_data to our misc device */ 53 + gdev = container_of(filp->private_data, struct vbg_dev, 54 + misc_device_user); 55 + 56 + session = vbg_core_open_session(gdev, false); 57 + if (IS_ERR(session)) 58 + return PTR_ERR(session); 59 + 60 + filp->private_data = session; 61 + return 0; 62 + } 63 + 64 + /** 65 + * Close device. 
66 + * Return: 0 on success, negated errno on failure. 67 + * @inode: Pointer to inode info structure. 68 + * @filp: Associated file pointer. 69 + */ 70 + static int vbg_misc_device_close(struct inode *inode, struct file *filp) 71 + { 72 + vbg_core_close_session(filp->private_data); 73 + filp->private_data = NULL; 74 + return 0; 75 + } 76 + 77 + /** 78 + * Device I/O Control entry point. 79 + * Return: 0 on success, negated errno on failure. 80 + * @filp: Associated file pointer. 81 + * @req: The request specified to ioctl(). 82 + * @arg: The argument specified to ioctl(). 83 + */ 84 + static long vbg_misc_device_ioctl(struct file *filp, unsigned int req, 85 + unsigned long arg) 86 + { 87 + struct vbg_session *session = filp->private_data; 88 + size_t returned_size, size; 89 + struct vbg_ioctl_hdr hdr; 90 + int ret = 0; 91 + void *buf; 92 + 93 + if (copy_from_user(&hdr, (void *)arg, sizeof(hdr))) 94 + return -EFAULT; 95 + 96 + if (hdr.version != VBG_IOCTL_HDR_VERSION) 97 + return -EINVAL; 98 + 99 + if (hdr.size_in < sizeof(hdr) || 100 + (hdr.size_out && hdr.size_out < sizeof(hdr))) 101 + return -EINVAL; 102 + 103 + size = max(hdr.size_in, hdr.size_out); 104 + if (_IOC_SIZE(req) && _IOC_SIZE(req) != size) 105 + return -EINVAL; 106 + if (size > SZ_16M) 107 + return -E2BIG; 108 + 109 + /* __GFP_DMA32 because IOCTL_VMMDEV_REQUEST passes this to the host */ 110 + buf = kmalloc(size, GFP_KERNEL | __GFP_DMA32); 111 + if (!buf) 112 + return -ENOMEM; 113 + 114 + if (copy_from_user(buf, (void *)arg, hdr.size_in)) { 115 + ret = -EFAULT; 116 + goto out; 117 + } 118 + if (hdr.size_in < size) 119 + memset(buf + hdr.size_in, 0, size - hdr.size_in); 120 + 121 + ret = vbg_core_ioctl(session, req, buf); 122 + if (ret) 123 + goto out; 124 + 125 + returned_size = ((struct vbg_ioctl_hdr *)buf)->size_out; 126 + if (returned_size > size) { 127 + vbg_debug("%s: too much output data %zu > %zu\n", 128 + __func__, returned_size, size); 129 + returned_size = size; 130 + } 131 + if 
(copy_to_user((void *)arg, buf, returned_size) != 0) 132 + ret = -EFAULT; 133 + 134 + out: 135 + kfree(buf); 136 + 137 + return ret; 138 + } 139 + 140 + /** The file_operations structures. */ 141 + static const struct file_operations vbg_misc_device_fops = { 142 + .owner = THIS_MODULE, 143 + .open = vbg_misc_device_open, 144 + .release = vbg_misc_device_close, 145 + .unlocked_ioctl = vbg_misc_device_ioctl, 146 + #ifdef CONFIG_COMPAT 147 + .compat_ioctl = vbg_misc_device_ioctl, 148 + #endif 149 + }; 150 + static const struct file_operations vbg_misc_device_user_fops = { 151 + .owner = THIS_MODULE, 152 + .open = vbg_misc_device_user_open, 153 + .release = vbg_misc_device_close, 154 + .unlocked_ioctl = vbg_misc_device_ioctl, 155 + #ifdef CONFIG_COMPAT 156 + .compat_ioctl = vbg_misc_device_ioctl, 157 + #endif 158 + }; 159 + 160 + /** 161 + * Called when the input device is first opened. 162 + * 163 + * Sets up absolute mouse reporting. 164 + */ 165 + static int vbg_input_open(struct input_dev *input) 166 + { 167 + struct vbg_dev *gdev = input_get_drvdata(input); 168 + u32 feat = VMMDEV_MOUSE_GUEST_CAN_ABSOLUTE | VMMDEV_MOUSE_NEW_PROTOCOL; 169 + int ret; 170 + 171 + ret = vbg_core_set_mouse_status(gdev, feat); 172 + if (ret) 173 + return ret; 174 + 175 + return 0; 176 + } 177 + 178 + /** 179 + * Called if all open handles to the input device are closed. 180 + * 181 + * Disables absolute reporting. 182 + */ 183 + static void vbg_input_close(struct input_dev *input) 184 + { 185 + struct vbg_dev *gdev = input_get_drvdata(input); 186 + 187 + vbg_core_set_mouse_status(gdev, 0); 188 + } 189 + 190 + /** 191 + * Creates the kernel input device. 192 + * 193 + * Return: 0 on success, negated errno on failure. 
194 + */ 195 + static int vbg_create_input_device(struct vbg_dev *gdev) 196 + { 197 + struct input_dev *input; 198 + 199 + input = devm_input_allocate_device(gdev->dev); 200 + if (!input) 201 + return -ENOMEM; 202 + 203 + input->id.bustype = BUS_PCI; 204 + input->id.vendor = VBOX_VENDORID; 205 + input->id.product = VMMDEV_DEVICEID; 206 + input->open = vbg_input_open; 207 + input->close = vbg_input_close; 208 + input->dev.parent = gdev->dev; 209 + input->name = "VirtualBox mouse integration"; 210 + 211 + input_set_abs_params(input, ABS_X, VMMDEV_MOUSE_RANGE_MIN, 212 + VMMDEV_MOUSE_RANGE_MAX, 0, 0); 213 + input_set_abs_params(input, ABS_Y, VMMDEV_MOUSE_RANGE_MIN, 214 + VMMDEV_MOUSE_RANGE_MAX, 0, 0); 215 + input_set_capability(input, EV_KEY, BTN_MOUSE); 216 + input_set_drvdata(input, gdev); 217 + 218 + gdev->input = input; 219 + 220 + return input_register_device(gdev->input); 221 + } 222 + 223 + static ssize_t host_version_show(struct device *dev, 224 + struct device_attribute *attr, char *buf) 225 + { 226 + struct vbg_dev *gdev = dev_get_drvdata(dev); 227 + 228 + return sprintf(buf, "%s\n", gdev->host_version); 229 + } 230 + 231 + static ssize_t host_features_show(struct device *dev, 232 + struct device_attribute *attr, char *buf) 233 + { 234 + struct vbg_dev *gdev = dev_get_drvdata(dev); 235 + 236 + return sprintf(buf, "%#x\n", gdev->host_features); 237 + } 238 + 239 + static DEVICE_ATTR_RO(host_version); 240 + static DEVICE_ATTR_RO(host_features); 241 + 242 + /** 243 + * Does the PCI detection and init of the device. 244 + * 245 + * Return: 0 on success, negated errno on failure. 
246 + */ 247 + static int vbg_pci_probe(struct pci_dev *pci, const struct pci_device_id *id) 248 + { 249 + struct device *dev = &pci->dev; 250 + resource_size_t io, io_len, mmio, mmio_len; 251 + struct vmmdev_memory *vmmdev; 252 + struct vbg_dev *gdev; 253 + int ret; 254 + 255 + gdev = devm_kzalloc(dev, sizeof(*gdev), GFP_KERNEL); 256 + if (!gdev) 257 + return -ENOMEM; 258 + 259 + ret = pci_enable_device(pci); 260 + if (ret != 0) { 261 + vbg_err("vboxguest: Error enabling device: %d\n", ret); 262 + return ret; 263 + } 264 + 265 + ret = -ENODEV; 266 + 267 + io = pci_resource_start(pci, 0); 268 + io_len = pci_resource_len(pci, 0); 269 + if (!io || !io_len) { 270 + vbg_err("vboxguest: Error IO-port resource (0) is missing\n"); 271 + goto err_disable_pcidev; 272 + } 273 + if (devm_request_region(dev, io, io_len, DEVICE_NAME) == NULL) { 274 + vbg_err("vboxguest: Error could not claim IO resource\n"); 275 + ret = -EBUSY; 276 + goto err_disable_pcidev; 277 + } 278 + 279 + mmio = pci_resource_start(pci, 1); 280 + mmio_len = pci_resource_len(pci, 1); 281 + if (!mmio || !mmio_len) { 282 + vbg_err("vboxguest: Error MMIO resource (1) is missing\n"); 283 + goto err_disable_pcidev; 284 + } 285 + 286 + if (devm_request_mem_region(dev, mmio, mmio_len, DEVICE_NAME) == NULL) { 287 + vbg_err("vboxguest: Error could not claim MMIO resource\n"); 288 + ret = -EBUSY; 289 + goto err_disable_pcidev; 290 + } 291 + 292 + vmmdev = devm_ioremap(dev, mmio, mmio_len); 293 + if (!vmmdev) { 294 + vbg_err("vboxguest: Error ioremap failed; MMIO addr=%pap size=%pap\n", 295 + &mmio, &mmio_len); 296 + goto err_disable_pcidev; 297 + } 298 + 299 + /* Validate MMIO region version and size. 
*/ 300 + if (vmmdev->version != VMMDEV_MEMORY_VERSION || 301 + vmmdev->size < 32 || vmmdev->size > mmio_len) { 302 + vbg_err("vboxguest: Bogus VMMDev memory; version=%08x (expected %08x) size=%d (expected <= %d)\n", 303 + vmmdev->version, VMMDEV_MEMORY_VERSION, 304 + vmmdev->size, (int)mmio_len); 305 + goto err_disable_pcidev; 306 + } 307 + 308 + gdev->io_port = io; 309 + gdev->mmio = vmmdev; 310 + gdev->dev = dev; 311 + gdev->misc_device.minor = MISC_DYNAMIC_MINOR; 312 + gdev->misc_device.name = DEVICE_NAME; 313 + gdev->misc_device.fops = &vbg_misc_device_fops; 314 + gdev->misc_device_user.minor = MISC_DYNAMIC_MINOR; 315 + gdev->misc_device_user.name = DEVICE_NAME_USER; 316 + gdev->misc_device_user.fops = &vbg_misc_device_user_fops; 317 + 318 + ret = vbg_core_init(gdev, VMMDEV_EVENT_MOUSE_POSITION_CHANGED); 319 + if (ret) 320 + goto err_disable_pcidev; 321 + 322 + ret = vbg_create_input_device(gdev); 323 + if (ret) { 324 + vbg_err("vboxguest: Error creating input device: %d\n", ret); 325 + goto err_vbg_core_exit; 326 + } 327 + 328 + ret = devm_request_irq(dev, pci->irq, vbg_core_isr, IRQF_SHARED, 329 + DEVICE_NAME, gdev); 330 + if (ret) { 331 + vbg_err("vboxguest: Error requesting irq: %d\n", ret); 332 + goto err_vbg_core_exit; 333 + } 334 + 335 + ret = misc_register(&gdev->misc_device); 336 + if (ret) { 337 + vbg_err("vboxguest: Error misc_register %s failed: %d\n", 338 + DEVICE_NAME, ret); 339 + goto err_vbg_core_exit; 340 + } 341 + 342 + ret = misc_register(&gdev->misc_device_user); 343 + if (ret) { 344 + vbg_err("vboxguest: Error misc_register %s failed: %d\n", 345 + DEVICE_NAME_USER, ret); 346 + goto err_unregister_misc_device; 347 + } 348 + 349 + mutex_lock(&vbg_gdev_mutex); 350 + if (!vbg_gdev) 351 + vbg_gdev = gdev; 352 + else 353 + ret = -EBUSY; 354 + mutex_unlock(&vbg_gdev_mutex); 355 + 356 + if (ret) { 357 + vbg_err("vboxguest: Error more than 1 vbox guest pci device\n"); 358 + goto err_unregister_misc_device_user; 359 + } 360 + 361 +
pci_set_drvdata(pci, gdev); 362 + device_create_file(dev, &dev_attr_host_version); 363 + device_create_file(dev, &dev_attr_host_features); 364 + 365 + vbg_info("vboxguest: misc device minor %d, IRQ %d, I/O port %x, MMIO at %pap (size %pap)\n", 366 + gdev->misc_device.minor, pci->irq, gdev->io_port, 367 + &mmio, &mmio_len); 368 + 369 + return 0; 370 + 371 + err_unregister_misc_device_user: 372 + misc_deregister(&gdev->misc_device_user); 373 + err_unregister_misc_device: 374 + misc_deregister(&gdev->misc_device); 375 + err_vbg_core_exit: 376 + vbg_core_exit(gdev); 377 + err_disable_pcidev: 378 + pci_disable_device(pci); 379 + 380 + return ret; 381 + } 382 + 383 + static void vbg_pci_remove(struct pci_dev *pci) 384 + { 385 + struct vbg_dev *gdev = pci_get_drvdata(pci); 386 + 387 + mutex_lock(&vbg_gdev_mutex); 388 + vbg_gdev = NULL; 389 + mutex_unlock(&vbg_gdev_mutex); 390 + 391 + device_remove_file(gdev->dev, &dev_attr_host_features); 392 + device_remove_file(gdev->dev, &dev_attr_host_version); 393 + misc_deregister(&gdev->misc_device_user); 394 + misc_deregister(&gdev->misc_device); 395 + vbg_core_exit(gdev); 396 + pci_disable_device(pci); 397 + } 398 + 399 + struct vbg_dev *vbg_get_gdev(void) 400 + { 401 + mutex_lock(&vbg_gdev_mutex); 402 + 403 + /* 404 + * Note on success we keep the mutex locked until vbg_put_gdev(), 405 + * this stops vbg_pci_remove from removing the device from underneath 406 + * vboxsf. vboxsf will only hold a reference for a short while. 407 + */ 408 + if (vbg_gdev) 409 + return vbg_gdev; 410 + 411 + mutex_unlock(&vbg_gdev_mutex); 412 + return ERR_PTR(-ENODEV); 413 + } 414 + EXPORT_SYMBOL(vbg_get_gdev); 415 + 416 + void vbg_put_gdev(struct vbg_dev *gdev) 417 + { 418 + WARN_ON(gdev != vbg_gdev); 419 + mutex_unlock(&vbg_gdev_mutex); 420 + } 421 + EXPORT_SYMBOL(vbg_put_gdev); 422 + 423 + /** 424 + * Callback for mouse events. 
425 + * 426 + * This is called at the end of the ISR, after leaving the event spinlock, if 427 + * VMMDEV_EVENT_MOUSE_POSITION_CHANGED was raised by the host. 428 + * 429 + * @gdev: The device extension. 430 + */ 431 + void vbg_linux_mouse_event(struct vbg_dev *gdev) 432 + { 433 + int rc; 434 + 435 + /* Report events to the kernel input device */ 436 + gdev->mouse_status_req->mouse_features = 0; 437 + gdev->mouse_status_req->pointer_pos_x = 0; 438 + gdev->mouse_status_req->pointer_pos_y = 0; 439 + rc = vbg_req_perform(gdev, gdev->mouse_status_req); 440 + if (rc >= 0) { 441 + input_report_abs(gdev->input, ABS_X, 442 + gdev->mouse_status_req->pointer_pos_x); 443 + input_report_abs(gdev->input, ABS_Y, 444 + gdev->mouse_status_req->pointer_pos_y); 445 + input_sync(gdev->input); 446 + } 447 + } 448 + 449 + static const struct pci_device_id vbg_pci_ids[] = { 450 + { .vendor = VBOX_VENDORID, .device = VMMDEV_DEVICEID }, 451 + {} 452 + }; 453 + MODULE_DEVICE_TABLE(pci, vbg_pci_ids); 454 + 455 + static struct pci_driver vbg_pci_driver = { 456 + .name = DEVICE_NAME, 457 + .id_table = vbg_pci_ids, 458 + .probe = vbg_pci_probe, 459 + .remove = vbg_pci_remove, 460 + }; 461 + 462 + module_pci_driver(vbg_pci_driver); 463 + 464 + MODULE_AUTHOR("Oracle Corporation"); 465 + MODULE_DESCRIPTION("Oracle VM VirtualBox Guest Additions for Linux Module"); 466 + MODULE_LICENSE("GPL");
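`vbg_misc_device_ioctl()` above sanity-checks the user-supplied `vbg_ioctl_hdr` before allocating the bounce buffer: both sizes must cover at least the header, the buffer is sized to the larger of the two, a non-zero `_IOC_SIZE` must match that size exactly, and anything over 16 MiB is rejected. A standalone sketch of just those checks (the struct layout and `HDR_VERSION` value here are illustrative, not the real ABI):

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

#define HDR_VERSION	0x10001u	/* illustrative, not the real value */
#define MAX_IOCTL_SIZE	(16u << 20)	/* 16 MiB cap, as in the driver */

struct ioctl_hdr {
	uint32_t size_in;	/* total input size, including this header */
	uint32_t version;
	uint32_t type;
	int32_t  rc;
	uint32_t size_out;	/* 0 means "same as size_in" */
};

/* Returns the buffer size to allocate, or a negative errno. */
static long check_ioctl_hdr(const struct ioctl_hdr *hdr, size_t ioc_size)
{
	size_t size;

	if (hdr->version != HDR_VERSION)
		return -EINVAL;
	/* Both sizes, when set, must at least cover the header itself. */
	if (hdr->size_in < sizeof(*hdr) ||
	    (hdr->size_out && hdr->size_out < sizeof(*hdr)))
		return -EINVAL;

	/* One buffer serves both directions: size it to the larger side. */
	size = hdr->size_in > hdr->size_out ? hdr->size_in : hdr->size_out;
	if (ioc_size && ioc_size != size)
		return -EINVAL;
	if (size > MAX_IOCTL_SIZE)
		return -E2BIG;

	return (long)size;
}
```

The single max-sized buffer is what lets the driver do one `copy_from_user()`, zero the tail up to `size`, and reuse the same allocation for the `copy_to_user()` on the way out.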
+803
drivers/virt/vboxguest/vboxguest_utils.c
··· 1 + /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ 2 + /* 3 + * vboxguest vmm-req and hgcm-call code, VBoxGuestR0LibHGCMInternal.cpp, 4 + * VBoxGuestR0LibGenericRequest.cpp and RTErrConvertToErrno.cpp in vbox svn. 5 + * 6 + * Copyright (C) 2006-2016 Oracle Corporation 7 + */ 8 + 9 + #include <linux/errno.h> 10 + #include <linux/kernel.h> 11 + #include <linux/mm.h> 12 + #include <linux/module.h> 13 + #include <linux/sizes.h> 14 + #include <linux/slab.h> 15 + #include <linux/uaccess.h> 16 + #include <linux/vmalloc.h> 17 + #include <linux/vbox_err.h> 18 + #include <linux/vbox_utils.h> 19 + #include "vboxguest_core.h" 20 + 21 + /* Get the pointer to the first parameter of a HGCM call request. */ 22 + #define VMMDEV_HGCM_CALL_PARMS(a) \ 23 + ((struct vmmdev_hgcm_function_parameter *)( \ 24 + (u8 *)(a) + sizeof(struct vmmdev_hgcm_call))) 25 + 26 + /* The max parameter buffer size for a user request. */ 27 + #define VBG_MAX_HGCM_USER_PARM (24 * SZ_1M) 28 + /* The max parameter buffer size for a kernel request. */ 29 + #define VBG_MAX_HGCM_KERNEL_PARM (16 * SZ_1M) 30 + 31 + #define VBG_DEBUG_PORT 0x504 32 + 33 + /* This protects vbg_log_buf and serializes VBG_DEBUG_PORT accesses */ 34 + static DEFINE_SPINLOCK(vbg_log_lock); 35 + static char vbg_log_buf[128]; 36 + 37 + #define VBG_LOG(name, pr_func) \ 38 + void name(const char *fmt, ...) 
\ 39 + { \ 40 + unsigned long flags; \ 41 + va_list args; \ 42 + int i, count; \ 43 + \ 44 + va_start(args, fmt); \ 45 + spin_lock_irqsave(&vbg_log_lock, flags); \ 46 + \ 47 + count = vscnprintf(vbg_log_buf, sizeof(vbg_log_buf), fmt, args);\ 48 + for (i = 0; i < count; i++) \ 49 + outb(vbg_log_buf[i], VBG_DEBUG_PORT); \ 50 + \ 51 + pr_func("%s", vbg_log_buf); \ 52 + \ 53 + spin_unlock_irqrestore(&vbg_log_lock, flags); \ 54 + va_end(args); \ 55 + } \ 56 + EXPORT_SYMBOL(name) 57 + 58 + VBG_LOG(vbg_info, pr_info); 59 + VBG_LOG(vbg_warn, pr_warn); 60 + VBG_LOG(vbg_err, pr_err); 61 + #if defined(DEBUG) && !defined(CONFIG_DYNAMIC_DEBUG) 62 + VBG_LOG(vbg_debug, pr_debug); 63 + #endif 64 + 65 + void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type) 66 + { 67 + struct vmmdev_request_header *req; 68 + 69 + req = kmalloc(len, GFP_KERNEL | __GFP_DMA32); 70 + if (!req) 71 + return NULL; 72 + 73 + memset(req, 0xaa, len); 74 + 75 + req->size = len; 76 + req->version = VMMDEV_REQUEST_HEADER_VERSION; 77 + req->request_type = req_type; 78 + req->rc = VERR_GENERAL_FAILURE; 79 + req->reserved1 = 0; 80 + req->reserved2 = 0; 81 + 82 + return req; 83 + } 84 + 85 + /* Note this function returns a VBox status code, not a negative errno!! */ 86 + int vbg_req_perform(struct vbg_dev *gdev, void *req) 87 + { 88 + unsigned long phys_req = virt_to_phys(req); 89 + 90 + outl(phys_req, gdev->io_port + VMMDEV_PORT_OFF_REQUEST); 91 + /* 92 + * The host changes the request as a result of the outl, make sure 93 + * the outl and any reads of the req happen in the correct order. 
94 + */ 95 + mb(); 96 + 97 + return ((struct vmmdev_request_header *)req)->rc; 98 + } 99 + 100 + static bool hgcm_req_done(struct vbg_dev *gdev, 101 + struct vmmdev_hgcmreq_header *header) 102 + { 103 + unsigned long flags; 104 + bool done; 105 + 106 + spin_lock_irqsave(&gdev->event_spinlock, flags); 107 + done = header->flags & VMMDEV_HGCM_REQ_DONE; 108 + spin_unlock_irqrestore(&gdev->event_spinlock, flags); 109 + 110 + return done; 111 + } 112 + 113 + int vbg_hgcm_connect(struct vbg_dev *gdev, 114 + struct vmmdev_hgcm_service_location *loc, 115 + u32 *client_id, int *vbox_status) 116 + { 117 + struct vmmdev_hgcm_connect *hgcm_connect = NULL; 118 + int rc; 119 + 120 + hgcm_connect = vbg_req_alloc(sizeof(*hgcm_connect), 121 + VMMDEVREQ_HGCM_CONNECT); 122 + if (!hgcm_connect) 123 + return -ENOMEM; 124 + 125 + hgcm_connect->header.flags = 0; 126 + memcpy(&hgcm_connect->loc, loc, sizeof(*loc)); 127 + hgcm_connect->client_id = 0; 128 + 129 + rc = vbg_req_perform(gdev, hgcm_connect); 130 + 131 + if (rc == VINF_HGCM_ASYNC_EXECUTE) 132 + wait_event(gdev->hgcm_wq, 133 + hgcm_req_done(gdev, &hgcm_connect->header)); 134 + 135 + if (rc >= 0) { 136 + *client_id = hgcm_connect->client_id; 137 + rc = hgcm_connect->header.result; 138 + } 139 + 140 + kfree(hgcm_connect); 141 + 142 + *vbox_status = rc; 143 + return 0; 144 + } 145 + EXPORT_SYMBOL(vbg_hgcm_connect); 146 + 147 + int vbg_hgcm_disconnect(struct vbg_dev *gdev, u32 client_id, int *vbox_status) 148 + { 149 + struct vmmdev_hgcm_disconnect *hgcm_disconnect = NULL; 150 + int rc; 151 + 152 + hgcm_disconnect = vbg_req_alloc(sizeof(*hgcm_disconnect), 153 + VMMDEVREQ_HGCM_DISCONNECT); 154 + if (!hgcm_disconnect) 155 + return -ENOMEM; 156 + 157 + hgcm_disconnect->header.flags = 0; 158 + hgcm_disconnect->client_id = client_id; 159 + 160 + rc = vbg_req_perform(gdev, hgcm_disconnect); 161 + 162 + if (rc == VINF_HGCM_ASYNC_EXECUTE) 163 + wait_event(gdev->hgcm_wq, 164 + hgcm_req_done(gdev, &hgcm_disconnect->header)); 165 + 166 + if (rc 
>= 0) 167 + rc = hgcm_disconnect->header.result; 168 + 169 + kfree(hgcm_disconnect); 170 + 171 + *vbox_status = rc; 172 + return 0; 173 + } 174 + EXPORT_SYMBOL(vbg_hgcm_disconnect); 175 + 176 + static u32 hgcm_call_buf_size_in_pages(void *buf, u32 len) 177 + { 178 + u32 size = PAGE_ALIGN(len + ((unsigned long)buf & ~PAGE_MASK)); 179 + 180 + return size >> PAGE_SHIFT; 181 + } 182 + 183 + static void hgcm_call_add_pagelist_size(void *buf, u32 len, size_t *extra) 184 + { 185 + u32 page_count; 186 + 187 + page_count = hgcm_call_buf_size_in_pages(buf, len); 188 + *extra += offsetof(struct vmmdev_hgcm_pagelist, pages[page_count]); 189 + } 190 + 191 + static int hgcm_call_preprocess_linaddr( 192 + const struct vmmdev_hgcm_function_parameter *src_parm, 193 + void **bounce_buf_ret, size_t *extra) 194 + { 195 + void *buf, *bounce_buf; 196 + bool copy_in; 197 + u32 len; 198 + int ret; 199 + 200 + buf = (void *)src_parm->u.pointer.u.linear_addr; 201 + len = src_parm->u.pointer.size; 202 + copy_in = src_parm->type != VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT; 203 + 204 + if (len > VBG_MAX_HGCM_USER_PARM) 205 + return -E2BIG; 206 + 207 + bounce_buf = kvmalloc(len, GFP_KERNEL); 208 + if (!bounce_buf) 209 + return -ENOMEM; 210 + 211 + if (copy_in) { 212 + ret = copy_from_user(bounce_buf, (void __user *)buf, len); 213 + if (ret) 214 + return -EFAULT; 215 + } else { 216 + memset(bounce_buf, 0, len); 217 + } 218 + 219 + *bounce_buf_ret = bounce_buf; 220 + hgcm_call_add_pagelist_size(bounce_buf, len, extra); 221 + return 0; 222 + } 223 + 224 + /** 225 + * Preprocesses the HGCM call, validate parameters, alloc bounce buffers and 226 + * figure out how much extra storage we need for page lists. 227 + * Return: 0 or negative errno value. 228 + * @src_parm: Pointer to source function call parameters 229 + * @parm_count: Number of function call parameters. 
230 + * @bounce_bufs_ret: Where to return the allocated bouncebuffer array 231 + * @extra: Where to return the extra request space needed for 232 + * physical page lists. 233 + */ 234 + static int hgcm_call_preprocess( 235 + const struct vmmdev_hgcm_function_parameter *src_parm, 236 + u32 parm_count, void ***bounce_bufs_ret, size_t *extra) 237 + { 238 + void *buf, **bounce_bufs = NULL; 239 + u32 i, len; 240 + int ret; 241 + 242 + for (i = 0; i < parm_count; i++, src_parm++) { 243 + switch (src_parm->type) { 244 + case VMMDEV_HGCM_PARM_TYPE_32BIT: 245 + case VMMDEV_HGCM_PARM_TYPE_64BIT: 246 + break; 247 + 248 + case VMMDEV_HGCM_PARM_TYPE_LINADDR: 249 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: 250 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: 251 + if (!bounce_bufs) { 252 + bounce_bufs = kcalloc(parm_count, 253 + sizeof(void *), 254 + GFP_KERNEL); 255 + if (!bounce_bufs) 256 + return -ENOMEM; 257 + 258 + *bounce_bufs_ret = bounce_bufs; 259 + } 260 + 261 + ret = hgcm_call_preprocess_linaddr(src_parm, 262 + &bounce_bufs[i], 263 + extra); 264 + if (ret) 265 + return ret; 266 + 267 + break; 268 + 269 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL: 270 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_IN: 271 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_OUT: 272 + buf = (void *)src_parm->u.pointer.u.linear_addr; 273 + len = src_parm->u.pointer.size; 274 + if (WARN_ON(len > VBG_MAX_HGCM_KERNEL_PARM)) 275 + return -E2BIG; 276 + 277 + hgcm_call_add_pagelist_size(buf, len, extra); 278 + break; 279 + 280 + default: 281 + return -EINVAL; 282 + } 283 + } 284 + 285 + return 0; 286 + } 287 + 288 + /** 289 + * Translates linear address types to page list direction flags. 290 + * 291 + * Return: page list flags. 292 + * @type: The type. 
293 + */ 294 + static u32 hgcm_call_linear_addr_type_to_pagelist_flags( 295 + enum vmmdev_hgcm_function_parameter_type type) 296 + { 297 + switch (type) { 298 + default: 299 + WARN_ON(1); 300 + /* Fall through */ 301 + case VMMDEV_HGCM_PARM_TYPE_LINADDR: 302 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL: 303 + return VMMDEV_HGCM_F_PARM_DIRECTION_BOTH; 304 + 305 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: 306 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_IN: 307 + return VMMDEV_HGCM_F_PARM_DIRECTION_TO_HOST; 308 + 309 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: 310 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_OUT: 311 + return VMMDEV_HGCM_F_PARM_DIRECTION_FROM_HOST; 312 + } 313 + } 314 + 315 + static void hgcm_call_init_linaddr(struct vmmdev_hgcm_call *call, 316 + struct vmmdev_hgcm_function_parameter *dst_parm, void *buf, u32 len, 317 + enum vmmdev_hgcm_function_parameter_type type, u32 *off_extra) 318 + { 319 + struct vmmdev_hgcm_pagelist *dst_pg_lst; 320 + struct page *page; 321 + bool is_vmalloc; 322 + u32 i, page_count; 323 + 324 + dst_parm->type = type; 325 + 326 + if (len == 0) { 327 + dst_parm->u.pointer.size = 0; 328 + dst_parm->u.pointer.u.linear_addr = 0; 329 + return; 330 + } 331 + 332 + dst_pg_lst = (void *)call + *off_extra; 333 + page_count = hgcm_call_buf_size_in_pages(buf, len); 334 + is_vmalloc = is_vmalloc_addr(buf); 335 + 336 + dst_parm->type = VMMDEV_HGCM_PARM_TYPE_PAGELIST; 337 + dst_parm->u.page_list.size = len; 338 + dst_parm->u.page_list.offset = *off_extra; 339 + dst_pg_lst->flags = hgcm_call_linear_addr_type_to_pagelist_flags(type); 340 + dst_pg_lst->offset_first_page = (unsigned long)buf & ~PAGE_MASK; 341 + dst_pg_lst->page_count = page_count; 342 + 343 + for (i = 0; i < page_count; i++) { 344 + if (is_vmalloc) 345 + page = vmalloc_to_page(buf); 346 + else 347 + page = virt_to_page(buf); 348 + 349 + dst_pg_lst->pages[i] = page_to_phys(page); 350 + buf += PAGE_SIZE; 351 + } 352 + 353 + *off_extra += offsetof(struct vmmdev_hgcm_pagelist, 
pages[page_count]); 354 + } 355 + 356 + /** 357 + * Initializes the call request that we're sending to the host. 358 + * @call: The call to initialize. 359 + * @client_id: The client ID of the caller. 360 + * @function: The function number of the function to call. 361 + * @src_parm: Pointer to source function call parameters. 362 + * @parm_count: Number of function call parameters. 363 + * @bounce_bufs: The bouncebuffer array. 364 + */ 365 + static void hgcm_call_init_call( 366 + struct vmmdev_hgcm_call *call, u32 client_id, u32 function, 367 + const struct vmmdev_hgcm_function_parameter *src_parm, 368 + u32 parm_count, void **bounce_bufs) 369 + { 370 + struct vmmdev_hgcm_function_parameter *dst_parm = 371 + VMMDEV_HGCM_CALL_PARMS(call); 372 + u32 i, off_extra = (uintptr_t)(dst_parm + parm_count) - (uintptr_t)call; 373 + void *buf; 374 + 375 + call->header.flags = 0; 376 + call->header.result = VINF_SUCCESS; 377 + call->client_id = client_id; 378 + call->function = function; 379 + call->parm_count = parm_count; 380 + 381 + for (i = 0; i < parm_count; i++, src_parm++, dst_parm++) { 382 + switch (src_parm->type) { 383 + case VMMDEV_HGCM_PARM_TYPE_32BIT: 384 + case VMMDEV_HGCM_PARM_TYPE_64BIT: 385 + *dst_parm = *src_parm; 386 + break; 387 + 388 + case VMMDEV_HGCM_PARM_TYPE_LINADDR: 389 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: 390 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: 391 + hgcm_call_init_linaddr(call, dst_parm, bounce_bufs[i], 392 + src_parm->u.pointer.size, 393 + src_parm->type, &off_extra); 394 + break; 395 + 396 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL: 397 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_IN: 398 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_OUT: 399 + buf = (void *)src_parm->u.pointer.u.linear_addr; 400 + hgcm_call_init_linaddr(call, dst_parm, buf, 401 + src_parm->u.pointer.size, 402 + src_parm->type, &off_extra); 403 + break; 404 + 405 + default: 406 + WARN_ON(1); 407 + dst_parm->type = VMMDEV_HGCM_PARM_TYPE_INVALID; 408 + } 409 + } 410 + } 411 + 
412 + /** 413 + * Tries to cancel a pending HGCM call. 414 + * 415 + * Return: VBox status code 416 + */ 417 + static int hgcm_cancel_call(struct vbg_dev *gdev, struct vmmdev_hgcm_call *call) 418 + { 419 + int rc; 420 + 421 + /* 422 + * We use a pre-allocated request for cancellations, which is 423 + * protected by cancel_req_mutex. This means that all cancellations 424 + * get serialized; this should be fine since they should be rare. 425 + */ 426 + mutex_lock(&gdev->cancel_req_mutex); 427 + gdev->cancel_req->phys_req_to_cancel = virt_to_phys(call); 428 + rc = vbg_req_perform(gdev, gdev->cancel_req); 429 + mutex_unlock(&gdev->cancel_req_mutex); 430 + 431 + if (rc == VERR_NOT_IMPLEMENTED) { 432 + call->header.flags |= VMMDEV_HGCM_REQ_CANCELLED; 433 + call->header.header.request_type = VMMDEVREQ_HGCM_CANCEL; 434 + 435 + rc = vbg_req_perform(gdev, call); 436 + if (rc == VERR_INVALID_PARAMETER) 437 + rc = VERR_NOT_FOUND; 438 + } 439 + 440 + if (rc >= 0) 441 + call->header.flags |= VMMDEV_HGCM_REQ_CANCELLED; 442 + 443 + return rc; 444 + } 445 + 446 + /** 447 + * Performs the call and completion wait. 448 + * Return: 0 or negative errno value. 449 + * @gdev: The VBoxGuest device extension. 450 + * @call: The call to execute. 451 + * @timeout_ms: Timeout in ms. 452 + * @leak_it: Where to return the "leak it" indicator; set when cancellation 453 + * failed and the request must not be freed. 454 + */ 455 + static int vbg_hgcm_do_call(struct vbg_dev *gdev, struct vmmdev_hgcm_call *call, 456 + u32 timeout_ms, bool *leak_it) 457 + { 458 + int rc, cancel_rc, ret; 459 + long timeout; 460 + 461 + *leak_it = false; 462 + 463 + rc = vbg_req_perform(gdev, call); 464 + 465 + /* 466 + * If the call failed, then pretend success. Upper layers will 467 + * interpret the result code in the packet.
468 + */ 469 + if (rc < 0) { 470 + call->header.result = rc; 471 + return 0; 472 + } 473 + 474 + if (rc != VINF_HGCM_ASYNC_EXECUTE) 475 + return 0; 476 + 477 + /* Host decided to process the request asynchronously, wait for it */ 478 + if (timeout_ms == U32_MAX) 479 + timeout = MAX_SCHEDULE_TIMEOUT; 480 + else 481 + timeout = msecs_to_jiffies(timeout_ms); 482 + 483 + timeout = wait_event_interruptible_timeout( 484 + gdev->hgcm_wq, 485 + hgcm_req_done(gdev, &call->header), 486 + timeout); 487 + 488 + /* timeout > 0 means hgcm_req_done has returned true, so success */ 489 + if (timeout > 0) 490 + return 0; 491 + 492 + if (timeout == 0) 493 + ret = -ETIMEDOUT; 494 + else 495 + ret = -EINTR; 496 + 497 + /* Cancel the request */ 498 + cancel_rc = hgcm_cancel_call(gdev, call); 499 + if (cancel_rc >= 0) 500 + return ret; 501 + 502 + /* 503 + * Failed to cancel; this should mean that the cancel has lost the 504 + * race with normal completion, so wait while the host completes it. 505 + */ 506 + if (cancel_rc == VERR_NOT_FOUND || cancel_rc == VERR_SEM_DESTROYED) 507 + timeout = msecs_to_jiffies(500); 508 + else 509 + timeout = msecs_to_jiffies(2000); 510 + 511 + timeout = wait_event_timeout(gdev->hgcm_wq, 512 + hgcm_req_done(gdev, &call->header), 513 + timeout); 514 + 515 + if (WARN_ON(timeout == 0)) { 516 + /* We really should never get here */ 517 + vbg_err("%s: Call timed out and cancellation failed, leaking the request\n", 518 + __func__); 519 + *leak_it = true; 520 + return ret; 521 + } 522 + 523 + /* The call has completed normally after all */ 524 + return 0; 525 + } 526 + 527 + /** 528 + * Copies the result of the call back to the caller info structure and user 529 + * buffers. 530 + * Return: 0 or negative errno value. 531 + * @call: HGCM call request. 532 + * @dst_parm: Pointer to function call parameters destination. 533 + * @parm_count: Number of function call parameters. 534 + * @bounce_bufs: The bouncebuffer array.
535 + */ 536 + static int hgcm_call_copy_back_result( 537 + const struct vmmdev_hgcm_call *call, 538 + struct vmmdev_hgcm_function_parameter *dst_parm, 539 + u32 parm_count, void **bounce_bufs) 540 + { 541 + const struct vmmdev_hgcm_function_parameter *src_parm = 542 + VMMDEV_HGCM_CALL_PARMS(call); 543 + void __user *p; 544 + int ret; 545 + u32 i; 546 + 547 + /* Copy back parameters. */ 548 + for (i = 0; i < parm_count; i++, src_parm++, dst_parm++) { 549 + switch (dst_parm->type) { 550 + case VMMDEV_HGCM_PARM_TYPE_32BIT: 551 + case VMMDEV_HGCM_PARM_TYPE_64BIT: 552 + *dst_parm = *src_parm; 553 + break; 554 + 555 + case VMMDEV_HGCM_PARM_TYPE_PAGELIST: 556 + dst_parm->u.page_list.size = src_parm->u.page_list.size; 557 + break; 558 + 559 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: 560 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL: 561 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_IN: 562 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_OUT: 563 + dst_parm->u.pointer.size = src_parm->u.pointer.size; 564 + break; 565 + 566 + case VMMDEV_HGCM_PARM_TYPE_LINADDR: 567 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: 568 + dst_parm->u.pointer.size = src_parm->u.pointer.size; 569 + 570 + p = (void __user *)dst_parm->u.pointer.u.linear_addr; 571 + ret = copy_to_user(p, bounce_bufs[i], 572 + min(src_parm->u.pointer.size, 573 + dst_parm->u.pointer.size)); 574 + if (ret) 575 + return -EFAULT; 576 + break; 577 + 578 + default: 579 + WARN_ON(1); 580 + return -EINVAL; 581 + } 582 + } 583 + 584 + return 0; 585 + } 586 + 587 + int vbg_hgcm_call(struct vbg_dev *gdev, u32 client_id, u32 function, 588 + u32 timeout_ms, struct vmmdev_hgcm_function_parameter *parms, 589 + u32 parm_count, int *vbox_status) 590 + { 591 + struct vmmdev_hgcm_call *call; 592 + void **bounce_bufs = NULL; 593 + bool leak_it; 594 + size_t size; 595 + int i, ret; 596 + 597 + size = sizeof(struct vmmdev_hgcm_call) + 598 + parm_count * sizeof(struct vmmdev_hgcm_function_parameter); 599 + /* 600 + * Validate and buffer the parameters for 
the call. This also increases 601 + * size with the amount of extra space needed for page lists. 602 + */ 603 + ret = hgcm_call_preprocess(parms, parm_count, &bounce_bufs, &size); 604 + if (ret) { 605 + /* Even on error bounce bufs may still have been allocated */ 606 + goto free_bounce_bufs; 607 + } 608 + 609 + call = vbg_req_alloc(size, VMMDEVREQ_HGCM_CALL); 610 + if (!call) { 611 + ret = -ENOMEM; 612 + goto free_bounce_bufs; 613 + } 614 + 615 + hgcm_call_init_call(call, client_id, function, parms, parm_count, 616 + bounce_bufs); 617 + 618 + ret = vbg_hgcm_do_call(gdev, call, timeout_ms, &leak_it); 619 + if (ret == 0) { 620 + *vbox_status = call->header.result; 621 + ret = hgcm_call_copy_back_result(call, parms, parm_count, 622 + bounce_bufs); 623 + } 624 + 625 + if (!leak_it) 626 + kfree(call); 627 + 628 + free_bounce_bufs: 629 + if (bounce_bufs) { 630 + for (i = 0; i < parm_count; i++) 631 + kvfree(bounce_bufs[i]); 632 + kfree(bounce_bufs); 633 + } 634 + 635 + return ret; 636 + } 637 + EXPORT_SYMBOL(vbg_hgcm_call); 638 + 639 + #ifdef CONFIG_COMPAT 640 + int vbg_hgcm_call32( 641 + struct vbg_dev *gdev, u32 client_id, u32 function, u32 timeout_ms, 642 + struct vmmdev_hgcm_function_parameter32 *parm32, u32 parm_count, 643 + int *vbox_status) 644 + { 645 + struct vmmdev_hgcm_function_parameter *parm64 = NULL; 646 + u32 i, size; 647 + int ret = 0; 648 + 649 + /* KISS: allocate a temporary request and convert the parameters.
*/ 650 + size = parm_count * sizeof(struct vmmdev_hgcm_function_parameter); 651 + parm64 = kzalloc(size, GFP_KERNEL); 652 + if (!parm64) 653 + return -ENOMEM; 654 + 655 + for (i = 0; i < parm_count; i++) { 656 + switch (parm32[i].type) { 657 + case VMMDEV_HGCM_PARM_TYPE_32BIT: 658 + parm64[i].type = VMMDEV_HGCM_PARM_TYPE_32BIT; 659 + parm64[i].u.value32 = parm32[i].u.value32; 660 + break; 661 + 662 + case VMMDEV_HGCM_PARM_TYPE_64BIT: 663 + parm64[i].type = VMMDEV_HGCM_PARM_TYPE_64BIT; 664 + parm64[i].u.value64 = parm32[i].u.value64; 665 + break; 666 + 667 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: 668 + case VMMDEV_HGCM_PARM_TYPE_LINADDR: 669 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: 670 + parm64[i].type = parm32[i].type; 671 + parm64[i].u.pointer.size = parm32[i].u.pointer.size; 672 + parm64[i].u.pointer.u.linear_addr = 673 + parm32[i].u.pointer.u.linear_addr; 674 + break; 675 + 676 + default: 677 + ret = -EINVAL; 678 + } 679 + if (ret < 0) 680 + goto out_free; 681 + } 682 + 683 + ret = vbg_hgcm_call(gdev, client_id, function, timeout_ms, 684 + parm64, parm_count, vbox_status); 685 + if (ret < 0) 686 + goto out_free; 687 + 688 + /* Copy back. 
*/ 689 + for (i = 0; i < parm_count; i++) { 690 + switch (parm64[i].type) { 691 + case VMMDEV_HGCM_PARM_TYPE_32BIT: 692 + parm32[i].u.value32 = parm64[i].u.value32; 693 + break; 694 + 695 + case VMMDEV_HGCM_PARM_TYPE_64BIT: 696 + parm32[i].u.value64 = parm64[i].u.value64; 697 + break; 698 + 699 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT: 700 + case VMMDEV_HGCM_PARM_TYPE_LINADDR: 701 + case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN: 702 + parm32[i].u.pointer.size = parm64[i].u.pointer.size; 703 + break; 704 + 705 + default: 706 + WARN_ON(1); 707 + ret = -EINVAL; 708 + } 709 + } 710 + 711 + out_free: 712 + kfree(parm64); 713 + return ret; 714 + } 715 + #endif 716 + 717 + static const int vbg_status_code_to_errno_table[] = { 718 + [-VERR_ACCESS_DENIED] = -EPERM, 719 + [-VERR_FILE_NOT_FOUND] = -ENOENT, 720 + [-VERR_PROCESS_NOT_FOUND] = -ESRCH, 721 + [-VERR_INTERRUPTED] = -EINTR, 722 + [-VERR_DEV_IO_ERROR] = -EIO, 723 + [-VERR_TOO_MUCH_DATA] = -E2BIG, 724 + [-VERR_BAD_EXE_FORMAT] = -ENOEXEC, 725 + [-VERR_INVALID_HANDLE] = -EBADF, 726 + [-VERR_TRY_AGAIN] = -EAGAIN, 727 + [-VERR_NO_MEMORY] = -ENOMEM, 728 + [-VERR_INVALID_POINTER] = -EFAULT, 729 + [-VERR_RESOURCE_BUSY] = -EBUSY, 730 + [-VERR_ALREADY_EXISTS] = -EEXIST, 731 + [-VERR_NOT_SAME_DEVICE] = -EXDEV, 732 + [-VERR_NOT_A_DIRECTORY] = -ENOTDIR, 733 + [-VERR_PATH_NOT_FOUND] = -ENOTDIR, 734 + [-VERR_INVALID_NAME] = -ENOENT, 735 + [-VERR_IS_A_DIRECTORY] = -EISDIR, 736 + [-VERR_INVALID_PARAMETER] = -EINVAL, 737 + [-VERR_TOO_MANY_OPEN_FILES] = -ENFILE, 738 + [-VERR_INVALID_FUNCTION] = -ENOTTY, 739 + [-VERR_SHARING_VIOLATION] = -ETXTBSY, 740 + [-VERR_FILE_TOO_BIG] = -EFBIG, 741 + [-VERR_DISK_FULL] = -ENOSPC, 742 + [-VERR_SEEK_ON_DEVICE] = -ESPIPE, 743 + [-VERR_WRITE_PROTECT] = -EROFS, 744 + [-VERR_BROKEN_PIPE] = -EPIPE, 745 + [-VERR_DEADLOCK] = -EDEADLK, 746 + [-VERR_FILENAME_TOO_LONG] = -ENAMETOOLONG, 747 + [-VERR_FILE_LOCK_FAILED] = -ENOLCK, 748 + [-VERR_NOT_IMPLEMENTED] = -ENOSYS, 749 + [-VERR_NOT_SUPPORTED] =
-ENOSYS, 750 + [-VERR_DIR_NOT_EMPTY] = -ENOTEMPTY, 751 + [-VERR_TOO_MANY_SYMLINKS] = -ELOOP, 752 + [-VERR_NO_MORE_FILES] = -ENODATA, 753 + [-VERR_NO_DATA] = -ENODATA, 754 + [-VERR_NET_NO_NETWORK] = -ENONET, 755 + [-VERR_NET_NOT_UNIQUE_NAME] = -ENOTUNIQ, 756 + [-VERR_NO_TRANSLATION] = -EILSEQ, 757 + [-VERR_NET_NOT_SOCKET] = -ENOTSOCK, 758 + [-VERR_NET_DEST_ADDRESS_REQUIRED] = -EDESTADDRREQ, 759 + [-VERR_NET_MSG_SIZE] = -EMSGSIZE, 760 + [-VERR_NET_PROTOCOL_TYPE] = -EPROTOTYPE, 761 + [-VERR_NET_PROTOCOL_NOT_AVAILABLE] = -ENOPROTOOPT, 762 + [-VERR_NET_PROTOCOL_NOT_SUPPORTED] = -EPROTONOSUPPORT, 763 + [-VERR_NET_SOCKET_TYPE_NOT_SUPPORTED] = -ESOCKTNOSUPPORT, 764 + [-VERR_NET_OPERATION_NOT_SUPPORTED] = -EOPNOTSUPP, 765 + [-VERR_NET_PROTOCOL_FAMILY_NOT_SUPPORTED] = -EPFNOSUPPORT, 766 + [-VERR_NET_ADDRESS_FAMILY_NOT_SUPPORTED] = -EAFNOSUPPORT, 767 + [-VERR_NET_ADDRESS_IN_USE] = -EADDRINUSE, 768 + [-VERR_NET_ADDRESS_NOT_AVAILABLE] = -EADDRNOTAVAIL, 769 + [-VERR_NET_DOWN] = -ENETDOWN, 770 + [-VERR_NET_UNREACHABLE] = -ENETUNREACH, 771 + [-VERR_NET_CONNECTION_RESET] = -ENETRESET, 772 + [-VERR_NET_CONNECTION_ABORTED] = -ECONNABORTED, 773 + [-VERR_NET_CONNECTION_RESET_BY_PEER] = -ECONNRESET, 774 + [-VERR_NET_NO_BUFFER_SPACE] = -ENOBUFS, 775 + [-VERR_NET_ALREADY_CONNECTED] = -EISCONN, 776 + [-VERR_NET_NOT_CONNECTED] = -ENOTCONN, 777 + [-VERR_NET_SHUTDOWN] = -ESHUTDOWN, 778 + [-VERR_NET_TOO_MANY_REFERENCES] = -ETOOMANYREFS, 779 + [-VERR_TIMEOUT] = -ETIMEDOUT, 780 + [-VERR_NET_CONNECTION_REFUSED] = -ECONNREFUSED, 781 + [-VERR_NET_HOST_DOWN] = -EHOSTDOWN, 782 + [-VERR_NET_HOST_UNREACHABLE] = -EHOSTUNREACH, 783 + [-VERR_NET_ALREADY_IN_PROGRESS] = -EALREADY, 784 + [-VERR_NET_IN_PROGRESS] = -EINPROGRESS, 785 + [-VERR_MEDIA_NOT_PRESENT] = -ENOMEDIUM, 786 + [-VERR_MEDIA_NOT_RECOGNIZED] = -EMEDIUMTYPE, 787 + }; 788 + 789 + int vbg_status_code_to_errno(int rc) 790 + { 791 + if (rc >= 0) 792 + return 0; 793 + 794 + rc = -rc; 795 + if (rc >= ARRAY_SIZE(vbg_status_code_to_errno_table) || 796 
+ vbg_status_code_to_errno_table[rc] == 0) { 797 + vbg_warn("%s: Unhandled err %d\n", __func__, -rc); 798 + return -EPROTO; 799 + } 800 + 801 + return vbg_status_code_to_errno_table[rc]; 802 + } 803 + EXPORT_SYMBOL(vbg_status_code_to_errno);
+19
drivers/virt/vboxguest/vboxguest_version.h
··· 1 + /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ 2 + /* 3 + * VBox Guest additions version info, this is used by the host to determine 4 + * supported guest-addition features in some cases. So this will need to be 5 + * synced with vbox upstreams versioning scheme when we implement / port 6 + * new features from the upstream out-of-tree vboxguest driver. 7 + */ 8 + 9 + #ifndef __VBOX_VERSION_H__ 10 + #define __VBOX_VERSION_H__ 11 + 12 + /* Last synced October 4th 2017 */ 13 + #define VBG_VERSION_MAJOR 5 14 + #define VBG_VERSION_MINOR 2 15 + #define VBG_VERSION_BUILD 0 16 + #define VBG_SVN_REV 68940 17 + #define VBG_VERSION_STRING "5.2.0" 18 + 19 + #endif
+449
drivers/virt/vboxguest/vmmdev.h
··· 1 + /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ 2 + /* 3 + * Virtual Device for Guest <-> VMM/Host communication interface 4 + * 5 + * Copyright (C) 2006-2016 Oracle Corporation 6 + */ 7 + 8 + #ifndef __VBOX_VMMDEV_H__ 9 + #define __VBOX_VMMDEV_H__ 10 + 11 + #include <asm/bitsperlong.h> 12 + #include <linux/sizes.h> 13 + #include <linux/types.h> 14 + #include <linux/vbox_vmmdev_types.h> 15 + 16 + /* Port for generic request interface (relative offset). */ 17 + #define VMMDEV_PORT_OFF_REQUEST 0 18 + 19 + /** Layout of VMMDEV RAM region that contains information for guest. */ 20 + struct vmmdev_memory { 21 + /** The size of this structure. */ 22 + u32 size; 23 + /** The structure version. (VMMDEV_MEMORY_VERSION) */ 24 + u32 version; 25 + 26 + union { 27 + struct { 28 + /** Flag telling that VMMDev has events pending. */ 29 + u8 have_events; 30 + /** Explicit padding, MBZ. */ 31 + u8 padding[3]; 32 + } V1_04; 33 + 34 + struct { 35 + /** Pending events flags, set by host. */ 36 + u32 host_events; 37 + /** Mask of events the guest wants, set by guest. */ 38 + u32 guest_event_mask; 39 + } V1_03; 40 + } V; 41 + 42 + /* struct vbva_memory, not used */ 43 + }; 44 + VMMDEV_ASSERT_SIZE(vmmdev_memory, 8 + 8); 45 + 46 + /** Version of vmmdev_memory structure (vmmdev_memory::version). */ 47 + #define VMMDEV_MEMORY_VERSION (1) 48 + 49 + /* Host mouse capabilities has been changed. */ 50 + #define VMMDEV_EVENT_MOUSE_CAPABILITIES_CHANGED BIT(0) 51 + /* HGCM event. */ 52 + #define VMMDEV_EVENT_HGCM BIT(1) 53 + /* A display change request has been issued. */ 54 + #define VMMDEV_EVENT_DISPLAY_CHANGE_REQUEST BIT(2) 55 + /* Credentials are available for judgement. */ 56 + #define VMMDEV_EVENT_JUDGE_CREDENTIALS BIT(3) 57 + /* The guest has been restored. */ 58 + #define VMMDEV_EVENT_RESTORED BIT(4) 59 + /* Seamless mode state changed. */ 60 + #define VMMDEV_EVENT_SEAMLESS_MODE_CHANGE_REQUEST BIT(5) 61 + /* Memory balloon size changed. 
*/ 62 + #define VMMDEV_EVENT_BALLOON_CHANGE_REQUEST BIT(6) 63 + /* Statistics interval changed. */ 64 + #define VMMDEV_EVENT_STATISTICS_INTERVAL_CHANGE_REQUEST BIT(7) 65 + /* VRDP status changed. */ 66 + #define VMMDEV_EVENT_VRDP BIT(8) 67 + /* New mouse position data available. */ 68 + #define VMMDEV_EVENT_MOUSE_POSITION_CHANGED BIT(9) 69 + /* CPU hotplug event occurred. */ 70 + #define VMMDEV_EVENT_CPU_HOTPLUG BIT(10) 71 + /* The mask of valid events, for sanity checking. */ 72 + #define VMMDEV_EVENT_VALID_EVENT_MASK 0x000007ffU 73 + 74 + /* 75 + * Additions are allowed to work only if additions_major == vmmdev_current && 76 + * additions_minor <= vmmdev_current. Additions version is reported to host 77 + * (VMMDev) by VMMDEVREQ_REPORT_GUEST_INFO. 78 + */ 79 + #define VMMDEV_VERSION 0x00010004 80 + #define VMMDEV_VERSION_MAJOR (VMMDEV_VERSION >> 16) 81 + #define VMMDEV_VERSION_MINOR (VMMDEV_VERSION & 0xffff) 82 + 83 + /* Maximum request packet size. */ 84 + #define VMMDEV_MAX_VMMDEVREQ_SIZE 1048576 85 + 86 + /* Version of vmmdev_request_header structure. */ 87 + #define VMMDEV_REQUEST_HEADER_VERSION 0x10001 88 + 89 + /** struct vmmdev_request_header - Generic VMMDev request header. */ 90 + struct vmmdev_request_header { 91 + /** IN: Size of the structure in bytes (including body). */ 92 + u32 size; 93 + /** IN: Version of the structure. */ 94 + u32 version; 95 + /** IN: Type of the request. */ 96 + enum vmmdev_request_type request_type; 97 + /** OUT: Return code. */ 98 + s32 rc; 99 + /** Reserved field no.1. MBZ. */ 100 + u32 reserved1; 101 + /** Reserved field no.2. MBZ. */ 102 + u32 reserved2; 103 + }; 104 + VMMDEV_ASSERT_SIZE(vmmdev_request_header, 24); 105 + 106 + /** 107 + * struct vmmdev_mouse_status - Mouse status request structure. 108 + * 109 + * Used by VMMDEVREQ_GET_MOUSE_STATUS and VMMDEVREQ_SET_MOUSE_STATUS. 110 + */ 111 + struct vmmdev_mouse_status { 112 + /** header */ 113 + struct vmmdev_request_header header; 114 + /** Mouse feature mask. 
See VMMDEV_MOUSE_*. */ 115 + u32 mouse_features; 116 + /** Mouse x position. */ 117 + s32 pointer_pos_x; 118 + /** Mouse y position. */ 119 + s32 pointer_pos_y; 120 + }; 121 + VMMDEV_ASSERT_SIZE(vmmdev_mouse_status, 24 + 12); 122 + 123 + /* The guest can (== wants to) handle absolute coordinates. */ 124 + #define VMMDEV_MOUSE_GUEST_CAN_ABSOLUTE BIT(0) 125 + /* 126 + * The host can (== wants to) send absolute coordinates. 127 + * (Input not captured.) 128 + */ 129 + #define VMMDEV_MOUSE_HOST_WANTS_ABSOLUTE BIT(1) 130 + /* 131 + * The guest can *NOT* switch to software cursor and therefore depends on the 132 + * host cursor. 133 + * 134 + * When guest additions are installed and the host has promised to display the 135 + * cursor itself, the guest installs a hardware mouse driver. Don't ask the 136 + * guest to switch to a software cursor then. 137 + */ 138 + #define VMMDEV_MOUSE_GUEST_NEEDS_HOST_CURSOR BIT(2) 139 + /* The host does NOT provide support for drawing the cursor itself. */ 140 + #define VMMDEV_MOUSE_HOST_CANNOT_HWPOINTER BIT(3) 141 + /* The guest can read VMMDev events to find out about pointer movement */ 142 + #define VMMDEV_MOUSE_NEW_PROTOCOL BIT(4) 143 + /* 144 + * If the guest changes the status of the VMMDEV_MOUSE_GUEST_NEEDS_HOST_CURSOR 145 + * bit, the host will honour this. 146 + */ 147 + #define VMMDEV_MOUSE_HOST_RECHECKS_NEEDS_HOST_CURSOR BIT(5) 148 + /* 149 + * The host supplies an absolute pointing device. The Guest Additions may 150 + * wish to use this to decide whether to install their own driver. 151 + */ 152 + #define VMMDEV_MOUSE_HOST_HAS_ABS_DEV BIT(6) 153 + 154 + /* The minimum value our pointing device can return. */ 155 + #define VMMDEV_MOUSE_RANGE_MIN 0 156 + /* The maximum value our pointing device can return. */ 157 + #define VMMDEV_MOUSE_RANGE_MAX 0xFFFF 158 + 159 + /** 160 + * struct vmmdev_host_version - VirtualBox host version request structure. 
161 + * 162 + * VBG uses this to detect the presence of new features in the interface. 163 + */ 164 + struct vmmdev_host_version { 165 + /** Header. */ 166 + struct vmmdev_request_header header; 167 + /** Major version. */ 168 + u16 major; 169 + /** Minor version. */ 170 + u16 minor; 171 + /** Build number. */ 172 + u32 build; 173 + /** SVN revision. */ 174 + u32 revision; 175 + /** Feature mask. */ 176 + u32 features; 177 + }; 178 + VMMDEV_ASSERT_SIZE(vmmdev_host_version, 24 + 16); 179 + 180 + /* Physical page lists are supported by HGCM. */ 181 + #define VMMDEV_HVF_HGCM_PHYS_PAGE_LIST BIT(0) 182 + 183 + /** 184 + * struct vmmdev_mask - Structure to set / clear bits in a mask used for 185 + * VMMDEVREQ_SET_GUEST_CAPABILITIES and VMMDEVREQ_CTL_GUEST_FILTER_MASK. 186 + */ 187 + struct vmmdev_mask { 188 + /** Header. */ 189 + struct vmmdev_request_header header; 190 + /** Mask of bits to be set. */ 191 + u32 or_mask; 192 + /** Mask of bits to be cleared. */ 193 + u32 not_mask; 194 + }; 195 + VMMDEV_ASSERT_SIZE(vmmdev_mask, 24 + 8); 196 + 197 + /* The guest supports seamless display rendering. */ 198 + #define VMMDEV_GUEST_SUPPORTS_SEAMLESS BIT(0) 199 + /* The guest supports mapping guest to host windows. */ 200 + #define VMMDEV_GUEST_SUPPORTS_GUEST_HOST_WINDOW_MAPPING BIT(1) 201 + /* 202 + * The guest graphical additions are active. 203 + * Used for fast activation and deactivation of certain graphical operations 204 + * (e.g. resizing & seamless). The legacy VMMDEVREQ_REPORT_GUEST_CAPABILITIES 205 + * request sets this automatically, but VMMDEVREQ_SET_GUEST_CAPABILITIES does 206 + * not. 207 + */ 208 + #define VMMDEV_GUEST_SUPPORTS_GRAPHICS BIT(2) 209 + 210 + /** struct vmmdev_hypervisorinfo - Hypervisor info structure. */ 211 + struct vmmdev_hypervisorinfo { 212 + /** Header. */ 213 + struct vmmdev_request_header header; 214 + /** 215 + * Guest virtual address of proposed hypervisor start. 216 + * Not used by VMMDEVREQ_GET_HYPERVISOR_INFO.
217 + */ 218 + u32 hypervisor_start; 219 + /** Hypervisor size in bytes. */ 220 + u32 hypervisor_size; 221 + }; 222 + VMMDEV_ASSERT_SIZE(vmmdev_hypervisorinfo, 24 + 8); 223 + 224 + /** struct vmmdev_events - Pending events structure. */ 225 + struct vmmdev_events { 226 + /** Header. */ 227 + struct vmmdev_request_header header; 228 + /** OUT: Pending event mask. */ 229 + u32 events; 230 + }; 231 + VMMDEV_ASSERT_SIZE(vmmdev_events, 24 + 4); 232 + 233 + #define VMMDEV_OSTYPE_LINUX26 0x53000 234 + #define VMMDEV_OSTYPE_X64 BIT(8) 235 + 236 + /** struct vmmdev_guestinfo - Guest information report. */ 237 + struct vmmdev_guest_info { 238 + /** Header. */ 239 + struct vmmdev_request_header header; 240 + /** 241 + * The VMMDev interface version expected by additions. 242 + * *Deprecated*, do not use anymore! Will be removed. 243 + */ 244 + u32 interface_version; 245 + /** Guest OS type. */ 246 + u32 os_type; 247 + }; 248 + VMMDEV_ASSERT_SIZE(vmmdev_guest_info, 24 + 8); 249 + 250 + /** struct vmmdev_guestinfo2 - Guest information report, version 2. */ 251 + struct vmmdev_guest_info2 { 252 + /** Header. */ 253 + struct vmmdev_request_header header; 254 + /** Major version. */ 255 + u16 additions_major; 256 + /** Minor version. */ 257 + u16 additions_minor; 258 + /** Build number. */ 259 + u32 additions_build; 260 + /** SVN revision. */ 261 + u32 additions_revision; 262 + /** Feature mask, currently unused. */ 263 + u32 additions_features; 264 + /** 265 + * The intended meaning of this field was: 266 + * some additional information, for example 'Beta 1' or something like 267 + * that. 268 + * 269 + * The way it was actually implemented: VBG_VERSION_STRING. 270 + * 271 + * This means the first three members are duplicated in this field (if 272 + * the guest build config is sane). So the user must check this and 273 + * chop it off before usage. Because the Main code blindly trusts the 274 + * field's content, there is no way back.
275 + */ 276 + char name[128]; 277 + }; 278 + VMMDEV_ASSERT_SIZE(vmmdev_guest_info2, 24 + 144); 279 + 280 + enum vmmdev_guest_facility_type { 281 + VBOXGUEST_FACILITY_TYPE_UNKNOWN = 0, 282 + VBOXGUEST_FACILITY_TYPE_VBOXGUEST_DRIVER = 20, 283 + /* VBoxGINA / VBoxCredProv / pam_vbox. */ 284 + VBOXGUEST_FACILITY_TYPE_AUTO_LOGON = 90, 285 + VBOXGUEST_FACILITY_TYPE_VBOX_SERVICE = 100, 286 + /* VBoxTray (Windows), VBoxClient (Linux, Unix). */ 287 + VBOXGUEST_FACILITY_TYPE_VBOX_TRAY_CLIENT = 101, 288 + VBOXGUEST_FACILITY_TYPE_SEAMLESS = 1000, 289 + VBOXGUEST_FACILITY_TYPE_GRAPHICS = 1100, 290 + VBOXGUEST_FACILITY_TYPE_ALL = 0x7ffffffe, 291 + /* Ensure the enum is a 32 bit data-type */ 292 + VBOXGUEST_FACILITY_TYPE_SIZEHACK = 0x7fffffff 293 + }; 294 + 295 + enum vmmdev_guest_facility_status { 296 + VBOXGUEST_FACILITY_STATUS_INACTIVE = 0, 297 + VBOXGUEST_FACILITY_STATUS_PAUSED = 1, 298 + VBOXGUEST_FACILITY_STATUS_PRE_INIT = 20, 299 + VBOXGUEST_FACILITY_STATUS_INIT = 30, 300 + VBOXGUEST_FACILITY_STATUS_ACTIVE = 50, 301 + VBOXGUEST_FACILITY_STATUS_TERMINATING = 100, 302 + VBOXGUEST_FACILITY_STATUS_TERMINATED = 101, 303 + VBOXGUEST_FACILITY_STATUS_FAILED = 800, 304 + VBOXGUEST_FACILITY_STATUS_UNKNOWN = 999, 305 + /* Ensure the enum is a 32 bit data-type */ 306 + VBOXGUEST_FACILITY_STATUS_SIZEHACK = 0x7fffffff 307 + }; 308 + 309 + /** struct vmmdev_guest_status - Guest Additions status structure. */ 310 + struct vmmdev_guest_status { 311 + /** Header. */ 312 + struct vmmdev_request_header header; 313 + /** Facility the status is indicated for. */ 314 + enum vmmdev_guest_facility_type facility; 315 + /** Current guest status. */ 316 + enum vmmdev_guest_facility_status status; 317 + /** Flags, not used at the moment. 
*/ 318 + u32 flags; 319 + }; 320 + VMMDEV_ASSERT_SIZE(vmmdev_guest_status, 24 + 12); 321 + 322 + #define VMMDEV_MEMORY_BALLOON_CHUNK_SIZE (1048576) 323 + #define VMMDEV_MEMORY_BALLOON_CHUNK_PAGES (1048576 / 4096) 324 + 325 + /** struct vmmdev_memballoon_info - Memory-balloon info structure. */ 326 + struct vmmdev_memballoon_info { 327 + /** Header. */ 328 + struct vmmdev_request_header header; 329 + /** Balloon size in megabytes. */ 330 + u32 balloon_chunks; 331 + /** Guest ram size in megabytes. */ 332 + u32 phys_mem_chunks; 333 + /** 334 + * Setting this to VMMDEV_EVENT_BALLOON_CHANGE_REQUEST indicates that 335 + * the request is a response to that event. 336 + * (Don't confuse this with VMMDEVREQ_ACKNOWLEDGE_EVENTS.) 337 + */ 338 + u32 event_ack; 339 + }; 340 + VMMDEV_ASSERT_SIZE(vmmdev_memballoon_info, 24 + 12); 341 + 342 + /** struct vmmdev_memballoon_change - Change the size of the balloon. */ 343 + struct vmmdev_memballoon_change { 344 + /** Header. */ 345 + struct vmmdev_request_header header; 346 + /** The number of pages in the array. */ 347 + u32 pages; 348 + /** true = inflate, false = deflate. */ 349 + u32 inflate; 350 + /** Physical address (u64) of each page. */ 351 + u64 phys_page[VMMDEV_MEMORY_BALLOON_CHUNK_PAGES]; 352 + }; 353 + 354 + /** struct vmmdev_write_core_dump - Write Core Dump request data. */ 355 + struct vmmdev_write_core_dump { 356 + /** Header. */ 357 + struct vmmdev_request_header header; 358 + /** Flags (reserved, MBZ). */ 359 + u32 flags; 360 + }; 361 + VMMDEV_ASSERT_SIZE(vmmdev_write_core_dump, 24 + 4); 362 + 363 + /** struct vmmdev_heartbeat - Heart beat check state structure. */ 364 + struct vmmdev_heartbeat { 365 + /** Header. */ 366 + struct vmmdev_request_header header; 367 + /** OUT: Guest heartbeat interval in nanosec. */ 368 + u64 interval_ns; 369 + /** Heartbeat check flag. */ 370 + u8 enabled; 371 + /** Explicit padding, MBZ. 
*/ 372 + u8 padding[3]; 373 + } __packed; 374 + VMMDEV_ASSERT_SIZE(vmmdev_heartbeat, 24 + 12); 375 + 376 + #define VMMDEV_HGCM_REQ_DONE BIT(0) 377 + #define VMMDEV_HGCM_REQ_CANCELLED BIT(1) 378 + 379 + /** struct vmmdev_hgcmreq_header - vmmdev HGCM requests header. */ 380 + struct vmmdev_hgcmreq_header { 381 + /** Request header. */ 382 + struct vmmdev_request_header header; 383 + 384 + /** HGCM flags. */ 385 + u32 flags; 386 + 387 + /** Result code. */ 388 + s32 result; 389 + }; 390 + VMMDEV_ASSERT_SIZE(vmmdev_hgcmreq_header, 24 + 8); 391 + 392 + /** struct vmmdev_hgcm_connect - HGCM connect request structure. */ 393 + struct vmmdev_hgcm_connect { 394 + /** HGCM request header. */ 395 + struct vmmdev_hgcmreq_header header; 396 + 397 + /** IN: Description of service to connect to. */ 398 + struct vmmdev_hgcm_service_location loc; 399 + 400 + /** OUT: Client identifier assigned by local instance of HGCM. */ 401 + u32 client_id; 402 + }; 403 + VMMDEV_ASSERT_SIZE(vmmdev_hgcm_connect, 32 + 132 + 4); 404 + 405 + /** struct vmmdev_hgcm_disconnect - HGCM disconnect request structure. */ 406 + struct vmmdev_hgcm_disconnect { 407 + /** HGCM request header. */ 408 + struct vmmdev_hgcmreq_header header; 409 + 410 + /** IN: Client identifier. */ 411 + u32 client_id; 412 + }; 413 + VMMDEV_ASSERT_SIZE(vmmdev_hgcm_disconnect, 32 + 4); 414 + 415 + #define VMMDEV_HGCM_MAX_PARMS 32 416 + 417 + /** struct vmmdev_hgcm_call - HGCM call request structure. */ 418 + struct vmmdev_hgcm_call { 419 + /* request header */ 420 + struct vmmdev_hgcmreq_header header; 421 + 422 + /** IN: Client identifier. */ 423 + u32 client_id; 424 + /** IN: Service function number. */ 425 + u32 function; 426 + /** IN: Number of parameters. */ 427 + u32 parm_count; 428 + /** Parameters follow in form: HGCMFunctionParameter32|64 parms[X]; */ 429 + }; 430 + VMMDEV_ASSERT_SIZE(vmmdev_hgcm_call, 32 + 12); 431 + 432 + /** 433 + * struct vmmdev_hgcm_cancel2 - HGCM cancel request structure, version 2. 
434 + * 435 + * After the request, header.rc will be: 436 + * 437 + * VINF_SUCCESS when cancelled. 438 + * VERR_NOT_FOUND if the specified request cannot be found. 439 + * VERR_INVALID_PARAMETER if the address is invalid. 440 + */ 441 + struct vmmdev_hgcm_cancel2 { 442 + /** Header. */ 443 + struct vmmdev_request_header header; 444 + /** The physical address of the request to cancel. */ 445 + u32 phys_req_to_cancel; 446 + }; 447 + VMMDEV_ASSERT_SIZE(vmmdev_hgcm_cancel2, 24 + 4); 448 + 449 + #endif
+1 -1
drivers/vme/vme.c
··· 1290 1290 { 1291 1291 struct vme_error_handler *handler; 1292 1292 1293 - handler = kmalloc(sizeof(*handler), GFP_KERNEL); 1293 + handler = kmalloc(sizeof(*handler), GFP_ATOMIC); 1294 1294 if (!handler) 1295 1295 return NULL; 1296 1296
+11 -3
include/linux/fpga/fpga-bridge.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 - #include <linux/device.h> 3 - #include <linux/fpga/fpga-mgr.h> 4 2 5 3 #ifndef _LINUX_FPGA_BRIDGE_H 6 4 #define _LINUX_FPGA_BRIDGE_H 5 + 6 + #include <linux/device.h> 7 + #include <linux/fpga/fpga-mgr.h> 7 8 8 9 struct fpga_bridge; 9 10 ··· 13 12 * @enable_show: returns the FPGA bridge's status 14 13 * @enable_set: set a FPGA bridge as enabled or disabled 15 14 * @fpga_bridge_remove: set FPGA into a specific state during driver remove 15 + * @groups: optional attribute groups. 16 16 */ 17 17 struct fpga_bridge_ops { 18 18 int (*enable_show)(struct fpga_bridge *bridge); 19 19 int (*enable_set)(struct fpga_bridge *bridge, bool enable); 20 20 void (*fpga_bridge_remove)(struct fpga_bridge *bridge); 21 + const struct attribute_group **groups; 21 22 }; 22 23 23 24 /** ··· 46 43 47 44 struct fpga_bridge *of_fpga_bridge_get(struct device_node *node, 48 45 struct fpga_image_info *info); 46 + struct fpga_bridge *fpga_bridge_get(struct device *dev, 47 + struct fpga_image_info *info); 49 48 void fpga_bridge_put(struct fpga_bridge *bridge); 50 49 int fpga_bridge_enable(struct fpga_bridge *bridge); 51 50 int fpga_bridge_disable(struct fpga_bridge *bridge); ··· 55 50 int fpga_bridges_enable(struct list_head *bridge_list); 56 51 int fpga_bridges_disable(struct list_head *bridge_list); 57 52 void fpga_bridges_put(struct list_head *bridge_list); 58 - int fpga_bridge_get_to_list(struct device_node *np, 53 + int fpga_bridge_get_to_list(struct device *dev, 59 54 struct fpga_image_info *info, 60 55 struct list_head *bridge_list); 56 + int of_fpga_bridge_get_to_list(struct device_node *np, 57 + struct fpga_image_info *info, 58 + struct list_head *bridge_list); 61 59 62 60 int fpga_bridge_register(struct device *dev, const char *name, 63 61 const struct fpga_bridge_ops *br_ops, void *priv);
+28 -11
include/linux/fpga/fpga-mgr.h
··· 1 1 /* 2 2 * FPGA Framework 3 3 * 4 - * Copyright (C) 2013-2015 Altera Corporation 4 + * Copyright (C) 2013-2016 Altera Corporation 5 + * Copyright (C) 2017 Intel Corporation 5 6 * 6 7 * This program is free software; you can redistribute it and/or modify it 7 8 * under the terms and conditions of the GNU General Public License, ··· 16 15 * You should have received a copy of the GNU General Public License along with 17 16 * this program. If not, see <http://www.gnu.org/licenses/>. 18 17 */ 19 - #include <linux/mutex.h> 20 - #include <linux/platform_device.h> 21 - 22 18 #ifndef _LINUX_FPGA_MGR_H 23 19 #define _LINUX_FPGA_MGR_H 20 + 21 + #include <linux/mutex.h> 22 + #include <linux/platform_device.h> 24 23 25 24 struct fpga_manager; 26 25 struct sg_table; ··· 84 83 * @disable_timeout_us: maximum time to disable traffic through bridge (uSec) 85 84 * @config_complete_timeout_us: maximum time for FPGA to switch to operating 86 85 * status in the write_complete op. 86 + * @firmware_name: name of FPGA image firmware file 87 + * @sgt: scatter/gather table containing FPGA image 88 + * @buf: contiguous buffer containing FPGA image 89 + * @count: size of buf 90 + * @dev: device that owns this 91 + * @overlay: Device Tree overlay 87 92 */ 88 93 struct fpga_image_info { 89 94 u32 flags; 90 95 u32 enable_timeout_us; 91 96 u32 disable_timeout_us; 92 97 u32 config_complete_timeout_us; 98 + char *firmware_name; 99 + struct sg_table *sgt; 100 + const char *buf; 101 + size_t count; 102 + struct device *dev; 103 + #ifdef CONFIG_OF 104 + struct device_node *overlay; 105 + #endif 93 106 }; 94 107 95 108 /** ··· 115 100 * @write_sg: write the scatter list of configuration data to the FPGA 116 101 * @write_complete: set FPGA to operating state after writing is done 117 102 * @fpga_remove: optional: Set FPGA into a specific state during driver remove 103 + * @groups: optional attribute groups. 
118 104 * 119 105 * fpga_manager_ops are the low level functions implemented by a specific 120 106 * fpga manager driver. The optional ones are tested for NULL before being ··· 132 116 int (*write_complete)(struct fpga_manager *mgr, 133 117 struct fpga_image_info *info); 134 118 void (*fpga_remove)(struct fpga_manager *mgr); 119 + const struct attribute_group **groups; 135 120 }; 136 121 137 122 /** ··· 155 138 156 139 #define to_fpga_manager(d) container_of(d, struct fpga_manager, dev) 157 140 158 - int fpga_mgr_buf_load(struct fpga_manager *mgr, struct fpga_image_info *info, 159 - const char *buf, size_t count); 160 - int fpga_mgr_buf_load_sg(struct fpga_manager *mgr, struct fpga_image_info *info, 161 - struct sg_table *sgt); 141 + struct fpga_image_info *fpga_image_info_alloc(struct device *dev); 162 142 163 - int fpga_mgr_firmware_load(struct fpga_manager *mgr, 164 - struct fpga_image_info *info, 165 - const char *image_name); 143 + void fpga_image_info_free(struct fpga_image_info *info); 144 + 145 + int fpga_mgr_load(struct fpga_manager *mgr, struct fpga_image_info *info); 146 + 147 + int fpga_mgr_lock(struct fpga_manager *mgr); 148 + void fpga_mgr_unlock(struct fpga_manager *mgr); 166 149 167 150 struct fpga_manager *of_fpga_mgr_get(struct device_node *node); 168 151
+40
include/linux/fpga/fpga-region.h
··· 1 + #ifndef _FPGA_REGION_H 2 + #define _FPGA_REGION_H 3 + 4 + #include <linux/device.h> 5 + #include <linux/fpga/fpga-mgr.h> 6 + #include <linux/fpga/fpga-bridge.h> 7 + 8 + /** 9 + * struct fpga_region - FPGA Region structure 10 + * @dev: FPGA Region device 11 + * @mutex: enforces exclusive reference to region 12 + * @bridge_list: list of FPGA bridges specified in region 13 + * @mgr: FPGA manager 14 + * @info: FPGA image info 15 + * @priv: private data 16 + * @get_bridges: optional function to get bridges to a list 17 + * @groups: optional attribute groups. 18 + */ 19 + struct fpga_region { 20 + struct device dev; 21 + struct mutex mutex; /* for exclusive reference to region */ 22 + struct list_head bridge_list; 23 + struct fpga_manager *mgr; 24 + struct fpga_image_info *info; 25 + void *priv; 26 + int (*get_bridges)(struct fpga_region *region); 27 + const struct attribute_group **groups; 28 + }; 29 + 30 + #define to_fpga_region(d) container_of(d, struct fpga_region, dev) 31 + 32 + struct fpga_region *fpga_region_class_find( 33 + struct device *start, const void *data, 34 + int (*match)(struct device *, const void *)); 35 + 36 + int fpga_region_program_fpga(struct fpga_region *region); 37 + int fpga_region_register(struct device *dev, struct fpga_region *region); 38 + int fpga_region_unregister(struct fpga_region *region); 39 + 40 + #endif /* _FPGA_REGION_H */
-84
include/linux/i7300_idle.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - 3 - #ifndef I7300_IDLE_H 4 - #define I7300_IDLE_H 5 - 6 - #include <linux/pci.h> 7 - 8 - /* 9 - * I/O AT controls (PCI bus 0 device 8 function 0) 10 - * DIMM controls (PCI bus 0 device 16 function 1) 11 - */ 12 - #define IOAT_BUS 0 13 - #define IOAT_DEVFN PCI_DEVFN(8, 0) 14 - #define MEMCTL_BUS 0 15 - #define MEMCTL_DEVFN PCI_DEVFN(16, 1) 16 - 17 - struct fbd_ioat { 18 - unsigned int vendor; 19 - unsigned int ioat_dev; 20 - unsigned int enabled; 21 - }; 22 - 23 - /* 24 - * The i5000 chip-set has the same hooks as the i7300 25 - * but it is not enabled by default and must be manually 26 - * manually enabled with "forceload=1" because it is 27 - * only lightly validated. 28 - */ 29 - 30 - static const struct fbd_ioat fbd_ioat_list[] = { 31 - {PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB, 1}, 32 - {PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT, 0}, 33 - {0, 0} 34 - }; 35 - 36 - /* table of devices that work with this driver */ 37 - static const struct pci_device_id pci_tbl[] = { 38 - { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_FBD_CNB) }, 39 - { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_5000_ERR) }, 40 - { } /* Terminating entry */ 41 - }; 42 - 43 - /* Check for known platforms with I/O-AT */ 44 - static inline int i7300_idle_platform_probe(struct pci_dev **fbd_dev, 45 - struct pci_dev **ioat_dev, 46 - int enable_all) 47 - { 48 - int i; 49 - struct pci_dev *memdev, *dmadev; 50 - 51 - memdev = pci_get_bus_and_slot(MEMCTL_BUS, MEMCTL_DEVFN); 52 - if (!memdev) 53 - return -ENODEV; 54 - 55 - for (i = 0; pci_tbl[i].vendor != 0; i++) { 56 - if (memdev->vendor == pci_tbl[i].vendor && 57 - memdev->device == pci_tbl[i].device) { 58 - break; 59 - } 60 - } 61 - if (pci_tbl[i].vendor == 0) 62 - return -ENODEV; 63 - 64 - dmadev = pci_get_bus_and_slot(IOAT_BUS, IOAT_DEVFN); 65 - if (!dmadev) 66 - return -ENODEV; 67 - 68 - for (i = 0; fbd_ioat_list[i].vendor != 0; i++) { 69 - if (dmadev->vendor == 
fbd_ioat_list[i].vendor && 70 - dmadev->device == fbd_ioat_list[i].ioat_dev) { 71 - if (!(fbd_ioat_list[i].enabled || enable_all)) 72 - continue; 73 - if (fbd_dev) 74 - *fbd_dev = memdev; 75 - if (ioat_dev) 76 - *ioat_dev = dmadev; 77 - 78 - return 0; 79 - } 80 - } 81 - return -ENODEV; 82 - } 83 - 84 - #endif
+19
include/linux/mod_devicetable.h
··· 229 229 unsigned long driver_data; 230 230 }; 231 231 232 + struct sdw_device_id { 233 + __u16 mfg_id; 234 + __u16 part_id; 235 + kernel_ulong_t driver_data; 236 + }; 237 + 232 238 /* 233 239 * Struct used for matching a device 234 240 */ ··· 456 450 struct spi_device_id { 457 451 char name[SPI_NAME_SIZE]; 458 452 kernel_ulong_t driver_data; /* Data private to the driver */ 453 + }; 454 + 455 + /* SLIMbus */ 456 + 457 + #define SLIMBUS_NAME_SIZE 32 458 + #define SLIMBUS_MODULE_PREFIX "slim:" 459 + 460 + struct slim_device_id { 461 + __u16 manf_id, prod_code; 462 + __u16 dev_index, instance; 463 + 464 + /* Data private to the driver */ 465 + kernel_ulong_t driver_data; 459 466 }; 460 467 461 468 #define SPMI_NAME_SIZE 32
+1 -4
include/linux/mux/consumer.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * mux/consumer.h - definitions for the multiplexer consumer interface 3 4 * 4 5 * Copyright (C) 2017 Axentia Technologies AB 5 6 * 6 7 * Author: Peter Rosin <peda@axentia.se> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 8 */ 12 9 13 10 #ifndef _LINUX_MUX_CONSUMER_H
+1 -4
include/linux/mux/driver.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * mux/driver.h - definitions for the multiplexer driver interface 3 4 * 4 5 * Copyright (C) 2017 Axentia Technologies AB 5 6 * 6 7 * Author: Peter Rosin <peda@axentia.se> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 8 */ 12 9 13 10 #ifndef _LINUX_MUX_DRIVER_H
+18
include/linux/regmap.h
··· 24 24 struct device; 25 25 struct i2c_client; 26 26 struct irq_domain; 27 + struct slim_device; 27 28 struct spi_device; 28 29 struct spmi_device; 29 30 struct regmap; ··· 512 511 const struct regmap_config *config, 513 512 struct lock_class_key *lock_key, 514 513 const char *lock_name); 514 + struct regmap *__regmap_init_slimbus(struct slim_device *slimbus, 515 + const struct regmap_config *config, 516 + struct lock_class_key *lock_key, 517 + const char *lock_name); 515 518 struct regmap *__regmap_init_spi(struct spi_device *dev, 516 519 const struct regmap_config *config, 517 520 struct lock_class_key *lock_key, ··· 639 634 #define regmap_init_i2c(i2c, config) \ 640 635 __regmap_lockdep_wrapper(__regmap_init_i2c, #config, \ 641 636 i2c, config) 637 + 638 + /** 639 + * regmap_init_slimbus() - Initialise register map 640 + * 641 + * @slimbus: Device that will be interacted with 642 + * @config: Configuration for register map 643 + * 644 + * The return value will be an ERR_PTR() on error or a valid pointer to 645 + * a struct regmap. 646 + */ 647 + #define regmap_init_slimbus(slimbus, config) \ 648 + __regmap_lockdep_wrapper(__regmap_init_slimbus, #config, \ 649 + slimbus, config) 642 650 643 651 /** 644 652 * regmap_init_spi() - Initialise register map
+77
include/linux/siox.h
··· 1 + /* 2 + * Copyright (C) 2015 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de> 3 + * 4 + * This program is free software; you can redistribute it and/or modify it under 5 + * the terms of the GNU General Public License version 2 as published by the 6 + * Free Software Foundation. 7 + */ 8 + 9 + #include <linux/device.h> 10 + 11 + #define to_siox_device(_dev) container_of((_dev), struct siox_device, dev) 12 + struct siox_device { 13 + struct list_head node; /* node in smaster->devices */ 14 + struct siox_master *smaster; 15 + struct device dev; 16 + 17 + const char *type; 18 + size_t inbytes; 19 + size_t outbytes; 20 + u8 statustype; 21 + 22 + u8 status_read_clean; 23 + u8 status_written; 24 + u8 status_written_lastcycle; 25 + bool connected; 26 + 27 + /* statistics */ 28 + unsigned int watchdog_errors; 29 + unsigned int status_errors; 30 + 31 + struct kernfs_node *status_errors_kn; 32 + struct kernfs_node *watchdog_kn; 33 + struct kernfs_node *watchdog_errors_kn; 34 + struct kernfs_node *connected_kn; 35 + }; 36 + 37 + bool siox_device_synced(struct siox_device *sdevice); 38 + bool siox_device_connected(struct siox_device *sdevice); 39 + 40 + struct siox_driver { 41 + int (*probe)(struct siox_device *sdevice); 42 + int (*remove)(struct siox_device *sdevice); 43 + void (*shutdown)(struct siox_device *sdevice); 44 + 45 + /* 46 + * buf is big enough to hold sdev->inbytes - 1 bytes, the status byte 47 + * is in the scope of the framework. 
48 + */ 49 + int (*set_data)(struct siox_device *sdevice, u8 status, u8 buf[]); 50 + /* 51 + * buf is big enough to hold sdev->outbytes - 1 bytes, the status byte 52 + * is in the scope of the framework 53 + */ 54 + int (*get_data)(struct siox_device *sdevice, const u8 buf[]); 55 + 56 + struct device_driver driver; 57 + }; 58 + 59 + static inline struct siox_driver *to_siox_driver(struct device_driver *driver) 60 + { 61 + if (driver) 62 + return container_of(driver, struct siox_driver, driver); 63 + else 64 + return NULL; 65 + } 66 + 67 + int __siox_driver_register(struct siox_driver *sdriver, struct module *owner); 68 + 69 + static inline int siox_driver_register(struct siox_driver *sdriver) 70 + { 71 + return __siox_driver_register(sdriver, THIS_MODULE); 72 + } 73 + 74 + static inline void siox_driver_unregister(struct siox_driver *sdriver) 75 + { 76 + return driver_unregister(&sdriver->driver); 77 + }
+164
include/linux/slimbus.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2011-2017, The Linux Foundation 4 + */ 5 + 6 + #ifndef _LINUX_SLIMBUS_H 7 + #define _LINUX_SLIMBUS_H 8 + #include <linux/device.h> 9 + #include <linux/module.h> 10 + #include <linux/completion.h> 11 + #include <linux/mod_devicetable.h> 12 + 13 + extern struct bus_type slimbus_bus; 14 + 15 + /** 16 + * struct slim_eaddr - Enumeration address for a SLIMbus device 17 + * @manf_id: Manufacturer Id for the device 18 + * @prod_code: Product code 19 + * @dev_index: Device index 20 + * @instance: Instance value 21 + */ 22 + struct slim_eaddr { 23 + u16 manf_id; 24 + u16 prod_code; 25 + u8 dev_index; 26 + u8 instance; 27 + } __packed; 28 + 29 + /** 30 + * enum slim_device_status - slim device status 31 + * @SLIM_DEVICE_STATUS_DOWN: Slim device is absent or not reported yet. 32 + * @SLIM_DEVICE_STATUS_UP: Slim device is announced on the bus. 33 + * @SLIM_DEVICE_STATUS_RESERVED: Reserved for future use. 34 + */ 35 + enum slim_device_status { 36 + SLIM_DEVICE_STATUS_DOWN = 0, 37 + SLIM_DEVICE_STATUS_UP, 38 + SLIM_DEVICE_STATUS_RESERVED, 39 + }; 40 + 41 + struct slim_controller; 42 + 43 + /** 44 + * struct slim_device - Slim device handle. 45 + * @dev: Driver model representation of the device. 46 + * @e_addr: Enumeration address of this device. 47 + * @status: slim device status 48 + * @ctrl: slim controller instance. 49 + * @laddr: 1-byte Logical address of this device. 50 + * @is_laddr_valid: indicates if the laddr is valid or not 51 + * 52 + * This is the client/device handle returned when a SLIMbus 53 + * device is registered with a controller. 54 + * Pointer to this structure is used by client-driver as a handle. 
55 + */ 56 + struct slim_device { 57 + struct device dev; 58 + struct slim_eaddr e_addr; 59 + struct slim_controller *ctrl; 60 + enum slim_device_status status; 61 + u8 laddr; 62 + bool is_laddr_valid; 63 + }; 64 + 65 + #define to_slim_device(d) container_of(d, struct slim_device, dev) 66 + 67 + /** 68 + * struct slim_driver - SLIMbus 'generic device' (slave) device driver 69 + * (similar to 'spi_device' on SPI) 70 + * @probe: Binds this driver to a SLIMbus device. 71 + * @remove: Unbinds this driver from the SLIMbus device. 72 + * @shutdown: Standard shutdown callback used during powerdown/halt. 73 + * @device_status: This callback is called when 74 + * - The device reports present and gets a laddr assigned 75 + * - The device reports absent, or the bus goes down. 76 + * @driver: SLIMbus device drivers should initialize name and owner field of 77 + * this structure 78 + * @id_table: List of SLIMbus devices supported by this driver 79 + */ 80 + 81 + struct slim_driver { 82 + int (*probe)(struct slim_device *sl); 83 + void (*remove)(struct slim_device *sl); 84 + void (*shutdown)(struct slim_device *sl); 85 + int (*device_status)(struct slim_device *sl, 86 + enum slim_device_status s); 87 + struct device_driver driver; 88 + const struct slim_device_id *id_table; 89 + }; 90 + #define to_slim_driver(d) container_of(d, struct slim_driver, driver) 91 + 92 + /** 93 + * struct slim_val_inf - Slimbus value or information element 94 + * @start_offset: Specifies starting offset in information/value element map 95 + * @rbuf: buffer to read the values 96 + * @wbuf: buffer to write 97 + * @num_bytes: upto 16. This ensures that the message will fit the slicesize 98 + * per SLIMbus spec 99 + * @comp: completion for asynchronous operations, valid only if TID is 100 + * required for transaction, like REQUEST operations. 101 + * Rest of the transactions are synchronous anyway. 
102 + */ 103 + struct slim_val_inf { 104 + u16 start_offset; 105 + u8 num_bytes; 106 + u8 *rbuf; 107 + const u8 *wbuf; 108 + struct completion *comp; 109 + }; 110 + 111 + /* 112 + * use a macro to avoid include chaining to get THIS_MODULE 113 + */ 114 + #define slim_driver_register(drv) \ 115 + __slim_driver_register(drv, THIS_MODULE) 116 + int __slim_driver_register(struct slim_driver *drv, struct module *owner); 117 + void slim_driver_unregister(struct slim_driver *drv); 118 + 119 + /** 120 + * module_slim_driver() - Helper macro for registering a SLIMbus driver 121 + * @__slim_driver: slimbus_driver struct 122 + * 123 + * Helper macro for SLIMbus drivers which do not do anything special in module 124 + * init/exit. This eliminates a lot of boilerplate. Each module may only 125 + * use this macro once, and calling it replaces module_init() and module_exit() 126 + */ 127 + #define module_slim_driver(__slim_driver) \ 128 + module_driver(__slim_driver, slim_driver_register, \ 129 + slim_driver_unregister) 130 + 131 + static inline void *slim_get_devicedata(const struct slim_device *dev) 132 + { 133 + return dev_get_drvdata(&dev->dev); 134 + } 135 + 136 + static inline void slim_set_devicedata(struct slim_device *dev, void *data) 137 + { 138 + dev_set_drvdata(&dev->dev, data); 139 + } 140 + 141 + struct slim_device *slim_get_device(struct slim_controller *ctrl, 142 + struct slim_eaddr *e_addr); 143 + int slim_get_logical_addr(struct slim_device *sbdev); 144 + 145 + /* Information Element management messages */ 146 + #define SLIM_MSG_MC_REQUEST_INFORMATION 0x20 147 + #define SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION 0x21 148 + #define SLIM_MSG_MC_REPLY_INFORMATION 0x24 149 + #define SLIM_MSG_MC_CLEAR_INFORMATION 0x28 150 + #define SLIM_MSG_MC_REPORT_INFORMATION 0x29 151 + 152 + /* Value Element management messages */ 153 + #define SLIM_MSG_MC_REQUEST_VALUE 0x60 154 + #define SLIM_MSG_MC_REQUEST_CHANGE_VALUE 0x61 155 + #define SLIM_MSG_MC_REPLY_VALUE 0x64 156 + #define 
SLIM_MSG_MC_CHANGE_VALUE 0x68 157 + 158 + int slim_xfer_msg(struct slim_device *sbdev, struct slim_val_inf *msg, 159 + u8 mc); 160 + int slim_readb(struct slim_device *sdev, u32 addr); 161 + int slim_writeb(struct slim_device *sdev, u32 addr, u8 value); 162 + int slim_read(struct slim_device *sdev, u32 addr, size_t count, u8 *val); 163 + int slim_write(struct slim_device *sdev, u32 addr, size_t count, u8 *val); 164 + #endif /* _LINUX_SLIMBUS_H */
+479
include/linux/soundwire/sdw.h
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + #ifndef __SOUNDWIRE_H 5 + #define __SOUNDWIRE_H 6 + 7 + struct sdw_bus; 8 + struct sdw_slave; 9 + 10 + /* SDW spec defines and enums, as defined by MIPI 1.1. Spec */ 11 + 12 + /* SDW Broadcast Device Number */ 13 + #define SDW_BROADCAST_DEV_NUM 15 14 + 15 + /* SDW Enumeration Device Number */ 16 + #define SDW_ENUM_DEV_NUM 0 17 + 18 + /* SDW Group Device Numbers */ 19 + #define SDW_GROUP12_DEV_NUM 12 20 + #define SDW_GROUP13_DEV_NUM 13 21 + 22 + /* SDW Master Device Number, not supported yet */ 23 + #define SDW_MASTER_DEV_NUM 14 24 + 25 + #define SDW_NUM_DEV_ID_REGISTERS 6 26 + 27 + #define SDW_MAX_DEVICES 11 28 + 29 + /** 30 + * enum sdw_slave_status - Slave status 31 + * @SDW_SLAVE_UNATTACHED: Slave is not attached with the bus. 32 + * @SDW_SLAVE_ATTACHED: Slave is attached with bus. 33 + * @SDW_SLAVE_ALERT: Some alert condition on the Slave 34 + * @SDW_SLAVE_RESERVED: Reserved for future use 35 + */ 36 + enum sdw_slave_status { 37 + SDW_SLAVE_UNATTACHED = 0, 38 + SDW_SLAVE_ATTACHED = 1, 39 + SDW_SLAVE_ALERT = 2, 40 + SDW_SLAVE_RESERVED = 3, 41 + }; 42 + 43 + /** 44 + * enum sdw_command_response - Command response as defined by SDW spec 45 + * @SDW_CMD_OK: cmd was successful 46 + * @SDW_CMD_IGNORED: cmd was ignored 47 + * @SDW_CMD_FAIL: cmd was NACKed 48 + * @SDW_CMD_TIMEOUT: cmd timedout 49 + * @SDW_CMD_FAIL_OTHER: cmd failed due to other reason than above 50 + * 51 + * NOTE: The enum is different than actual Spec as response in the Spec is 52 + * combination of ACK/NAK bits 53 + * 54 + * SDW_CMD_TIMEOUT/FAIL_OTHER is defined for SW use, not in spec 55 + */ 56 + enum sdw_command_response { 57 + SDW_CMD_OK = 0, 58 + SDW_CMD_IGNORED = 1, 59 + SDW_CMD_FAIL = 2, 60 + SDW_CMD_TIMEOUT = 3, 61 + SDW_CMD_FAIL_OTHER = 4, 62 + }; 63 + 64 + /* 65 + * SDW properties, defined in MIPI DisCo spec v1.0 66 + */ 67 + enum sdw_clk_stop_reset_behave { 68 + 
SDW_CLK_STOP_KEEP_STATUS = 1, 69 + }; 70 + 71 + /** 72 + * enum sdw_p15_behave - Slave Port 15 behaviour when the Master attempts a 73 + * read 74 + * @SDW_P15_READ_IGNORED: Read is ignored 75 + * @SDW_P15_CMD_OK: Command is ok 76 + */ 77 + enum sdw_p15_behave { 78 + SDW_P15_READ_IGNORED = 0, 79 + SDW_P15_CMD_OK = 1, 80 + }; 81 + 82 + /** 83 + * enum sdw_dpn_type - Data port types 84 + * @SDW_DPN_FULL: Full Data Port is supported 85 + * @SDW_DPN_SIMPLE: Simplified Data Port as defined in spec. 86 + * DPN_SampleCtrl2, DPN_OffsetCtrl2, DPN_HCtrl and DPN_BlockCtrl3 87 + * are not implemented. 88 + * @SDW_DPN_REDUCED: Reduced Data Port as defined in spec. 89 + * DPN_SampleCtrl2, DPN_HCtrl are not implemented. 90 + */ 91 + enum sdw_dpn_type { 92 + SDW_DPN_FULL = 0, 93 + SDW_DPN_SIMPLE = 1, 94 + SDW_DPN_REDUCED = 2, 95 + }; 96 + 97 + /** 98 + * enum sdw_clk_stop_mode - Clock Stop modes 99 + * @SDW_CLK_STOP_MODE0: Slave can continue operation seamlessly on clock 100 + * restart 101 + * @SDW_CLK_STOP_MODE1: Slave may have entered a deeper power-saving mode, 102 + * not capable of continuing operation seamlessly when the clock restarts 103 + */ 104 + enum sdw_clk_stop_mode { 105 + SDW_CLK_STOP_MODE0 = 0, 106 + SDW_CLK_STOP_MODE1 = 1, 107 + }; 108 + 109 + /** 110 + * struct sdw_dp0_prop - DP0 properties 111 + * @max_word: Maximum number of bits in a Payload Channel Sample, 1 to 64 112 + * (inclusive) 113 + * @min_word: Minimum number of bits in a Payload Channel Sample, 1 to 64 114 + * (inclusive) 115 + * @num_words: number of wordlengths supported 116 + * @words: wordlengths supported 117 + * @flow_controlled: Slave implementation results in an OK_NotReady 118 + * response 119 + * @simple_ch_prep_sm: If channel prepare sequence is required 120 + * @device_interrupts: If implementation-defined interrupts are supported 121 + * 122 + * The wordlengths are specified by Spec as max, min AND number of 123 + * discrete values, implementation can define based on the wordlengths 
they 124 + * support 125 + */ 126 + struct sdw_dp0_prop { 127 + u32 max_word; 128 + u32 min_word; 129 + u32 num_words; 130 + u32 *words; 131 + bool flow_controlled; 132 + bool simple_ch_prep_sm; 133 + bool device_interrupts; 134 + }; 135 + 136 + /** 137 + * struct sdw_dpn_audio_mode - Audio mode properties for DPn 138 + * @bus_min_freq: Minimum bus frequency, in Hz 139 + * @bus_max_freq: Maximum bus frequency, in Hz 140 + * @bus_num_freq: Number of discrete frequencies supported 141 + * @bus_freq: Discrete bus frequencies, in Hz 142 + * @min_freq: Minimum sampling frequency, in Hz 143 + * @max_freq: Maximum sampling bus frequency, in Hz 144 + * @num_freq: Number of discrete sampling frequency supported 145 + * @freq: Discrete sampling frequencies, in Hz 146 + * @prep_ch_behave: Specifies the dependencies between Channel Prepare 147 + * sequence and bus clock configuration 148 + * If 0, Channel Prepare can happen at any Bus clock rate 149 + * If 1, Channel Prepare sequence shall happen only after Bus clock is 150 + * changed to a frequency supported by this mode or compatible modes 151 + * described by the next field 152 + * @glitchless: Bitmap describing possible glitchless transitions from this 153 + * Audio Mode to other Audio Modes 154 + */ 155 + struct sdw_dpn_audio_mode { 156 + u32 bus_min_freq; 157 + u32 bus_max_freq; 158 + u32 bus_num_freq; 159 + u32 *bus_freq; 160 + u32 max_freq; 161 + u32 min_freq; 162 + u32 num_freq; 163 + u32 *freq; 164 + u32 prep_ch_behave; 165 + u32 glitchless; 166 + }; 167 + 168 + /** 169 + * struct sdw_dpn_prop - Data Port DPn properties 170 + * @num: port number 171 + * @max_word: Maximum number of bits in a Payload Channel Sample, 1 to 64 172 + * (inclusive) 173 + * @min_word: Minimum number of bits in a Payload Channel Sample, 1 to 64 174 + * (inclusive) 175 + * @num_words: Number of discrete supported wordlengths 176 + * @words: Discrete supported wordlength 177 + * @type: Data port type. 
Full, Simplified or Reduced 178 + * @max_grouping: Maximum number of samples that can be grouped together for 179 + * a full data port 180 + * @simple_ch_prep_sm: If the port supports simplified channel prepare state 181 + * machine 182 + * @ch_prep_timeout: Port-specific timeout value, in milliseconds 183 + * @device_interrupts: If set, each bit corresponds to support for 184 + * implementation-defined interrupts 185 + * @max_ch: Maximum channels supported 186 + * @min_ch: Minimum channels supported 187 + * @num_ch: Number of discrete channels supported 188 + * @ch: Discrete channels supported 189 + * @num_ch_combinations: Number of channel combinations supported 190 + * @ch_combinations: Channel combinations supported 191 + * @modes: SDW mode supported 192 + * @max_async_buffer: Number of samples that this port can buffer in 193 + * asynchronous modes 194 + * @block_pack_mode: Type of block port mode supported 195 + * @port_encoding: Payload Channel Sample encoding schemes supported 196 + * @audio_modes: Audio modes supported 197 + */ 198 + struct sdw_dpn_prop { 199 + u32 num; 200 + u32 max_word; 201 + u32 min_word; 202 + u32 num_words; 203 + u32 *words; 204 + enum sdw_dpn_type type; 205 + u32 max_grouping; 206 + bool simple_ch_prep_sm; 207 + u32 ch_prep_timeout; 208 + u32 device_interrupts; 209 + u32 max_ch; 210 + u32 min_ch; 211 + u32 num_ch; 212 + u32 *ch; 213 + u32 num_ch_combinations; 214 + u32 *ch_combinations; 215 + u32 modes; 216 + u32 max_async_buffer; 217 + bool block_pack_mode; 218 + u32 port_encoding; 219 + struct sdw_dpn_audio_mode *audio_modes; 220 + }; 221 + 222 + /** 223 + * struct sdw_slave_prop - SoundWire Slave properties 224 + * @mipi_revision: Spec version of the implementation 225 + * @wake_capable: Wake-up events are supported 226 + * @test_mode_capable: If test mode is supported 227 + * @clk_stop_mode1: Clock-Stop Mode 1 is supported 228 + * @simple_clk_stop_capable: Simple clock mode is supported 229 + * @clk_stop_timeout: Worst-case 
latency of the Clock Stop Prepare State 230 + * Machine transitions, in milliseconds 231 + * @ch_prep_timeout: Worst-case latency of the Channel Prepare State Machine 232 + * transitions, in milliseconds 233 + * @reset_behave: Slave keeps the status of the SlaveStopClockPrepare 234 + * state machine (P=1 SCSP_SM) after exit from clock-stop mode1 235 + * @high_PHY_capable: Slave is HighPHY capable 236 + * @paging_support: Slave implements paging registers SCP_AddrPage1 and 237 + * SCP_AddrPage2 238 + * @bank_delay_support: Slave implements bank delay/bridge support registers 239 + * SCP_BankDelay and SCP_NextFrame 240 + * @p15_behave: Slave behavior when the Master attempts a read to the Port15 241 + * alias 242 + * @lane_control_support: Slave supports lane control 243 + * @master_count: Number of Masters present on this Slave 244 + * @source_ports: Bitmap identifying source ports 245 + * @sink_ports: Bitmap identifying sink ports 246 + * @dp0_prop: Data Port 0 properties 247 + * @src_dpn_prop: Source Data Port N properties 248 + * @sink_dpn_prop: Sink Data Port N properties 249 + */ 250 + struct sdw_slave_prop { 251 + u32 mipi_revision; 252 + bool wake_capable; 253 + bool test_mode_capable; 254 + bool clk_stop_mode1; 255 + bool simple_clk_stop_capable; 256 + u32 clk_stop_timeout; 257 + u32 ch_prep_timeout; 258 + enum sdw_clk_stop_reset_behave reset_behave; 259 + bool high_PHY_capable; 260 + bool paging_support; 261 + bool bank_delay_support; 262 + enum sdw_p15_behave p15_behave; 263 + bool lane_control_support; 264 + u32 master_count; 265 + u32 source_ports; 266 + u32 sink_ports; 267 + struct sdw_dp0_prop *dp0_prop; 268 + struct sdw_dpn_prop *src_dpn_prop; 269 + struct sdw_dpn_prop *sink_dpn_prop; 270 + }; 271 + 272 + /** 273 + * struct sdw_master_prop - Master properties 274 + * @revision: MIPI spec version of the implementation 275 + * @master_count: Number of masters 276 + * @clk_stop_mode: Bitmap for Clock Stop modes supported 277 + * @max_freq: Maximum Bus 
clock frequency, in Hz 278 + * @num_clk_gears: Number of clock gears supported 279 + * @clk_gears: Clock gears supported 280 + * @num_freq: Number of clock frequencies supported, in Hz 281 + * @freq: Clock frequencies supported, in Hz 282 + * @default_frame_rate: Controller default Frame rate, in Hz 283 + * @default_row: Number of rows 284 + * @default_col: Number of columns 285 + * @dynamic_frame: Dynamic frame supported 286 + * @err_threshold: Number of times that software may retry sending a single 287 + * command 288 + * @dpn_prop: Data Port N properties 289 + */ 290 + struct sdw_master_prop { 291 + u32 revision; 292 + u32 master_count; 293 + enum sdw_clk_stop_mode clk_stop_mode; 294 + u32 max_freq; 295 + u32 num_clk_gears; 296 + u32 *clk_gears; 297 + u32 num_freq; 298 + u32 *freq; 299 + u32 default_frame_rate; 300 + u32 default_row; 301 + u32 default_col; 302 + bool dynamic_frame; 303 + u32 err_threshold; 304 + struct sdw_dpn_prop *dpn_prop; 305 + }; 306 + 307 + int sdw_master_read_prop(struct sdw_bus *bus); 308 + int sdw_slave_read_prop(struct sdw_slave *slave); 309 + 310 + /* 311 + * SDW Slave Structures and APIs 312 + */ 313 + 314 + /** 315 + * struct sdw_slave_id - Slave ID 316 + * @mfg_id: MIPI Manufacturer ID 317 + * @part_id: Device Part ID 318 + * @class_id: MIPI Class ID, unused now. 
319 + * Currently a placeholder in MIPI SoundWire Spec 320 + * @unique_id: Device unique ID 321 + * @sdw_version: SDW version implemented 322 + * 323 + * The order of the IDs here does not follow the DisCo spec definitions 324 + */ 325 + struct sdw_slave_id { 326 + __u16 mfg_id; 327 + __u16 part_id; 328 + __u8 class_id; 329 + __u8 unique_id:4; 330 + __u8 sdw_version:4; 331 + }; 332 + 333 + /** 334 + * struct sdw_slave_intr_status - Slave interrupt status 335 + * @control_port: control port status 336 + * @port: data port status 337 + */ 338 + struct sdw_slave_intr_status { 339 + u8 control_port; 340 + u8 port[15]; 341 + }; 342 + 343 + /** 344 + * struct sdw_slave_ops - Slave driver callback ops 345 + * @read_prop: Read Slave properties 346 + * @interrupt_callback: Device interrupt notification (invoked in thread 347 + * context) 348 + * @update_status: Update Slave status 349 + */ 350 + struct sdw_slave_ops { 351 + int (*read_prop)(struct sdw_slave *sdw); 352 + int (*interrupt_callback)(struct sdw_slave *slave, 353 + struct sdw_slave_intr_status *status); 354 + int (*update_status)(struct sdw_slave *slave, 355 + enum sdw_slave_status status); 356 + }; 357 + 358 + /** 359 + * struct sdw_slave - SoundWire Slave 360 + * @id: MIPI device ID 361 + * @dev: Linux device 362 + * @status: Status reported by the Slave 363 + * @bus: Bus handle 364 + * @ops: Slave callback ops 365 + * @prop: Slave properties 366 + * @node: node for bus list 367 + * @port_ready: Port ready completion flag for each Slave port 368 + * @dev_num: Device Number assigned by Bus 369 + */ 370 + struct sdw_slave { 371 + struct sdw_slave_id id; 372 + struct device dev; 373 + enum sdw_slave_status status; 374 + struct sdw_bus *bus; 375 + const struct sdw_slave_ops *ops; 376 + struct sdw_slave_prop prop; 377 + struct list_head node; 378 + struct completion *port_ready; 379 + u16 dev_num; 380 + }; 381 + 382 + #define dev_to_sdw_dev(_dev) container_of(_dev, struct sdw_slave, dev) 383 + 384 + struct 
sdw_driver { 385 + const char *name; 386 + 387 + int (*probe)(struct sdw_slave *sdw, 388 + const struct sdw_device_id *id); 389 + int (*remove)(struct sdw_slave *sdw); 390 + void (*shutdown)(struct sdw_slave *sdw); 391 + 392 + const struct sdw_device_id *id_table; 393 + const struct sdw_slave_ops *ops; 394 + 395 + struct device_driver driver; 396 + }; 397 + 398 + #define SDW_SLAVE_ENTRY(_mfg_id, _part_id, _drv_data) \ 399 + { .mfg_id = (_mfg_id), .part_id = (_part_id), \ 400 + .driver_data = (unsigned long)(_drv_data) } 401 + 402 + int sdw_handle_slave_status(struct sdw_bus *bus, 403 + enum sdw_slave_status status[]); 404 + 405 + /* 406 + * SDW master structures and APIs 407 + */ 408 + 409 + struct sdw_msg; 410 + 411 + /** 412 + * struct sdw_defer - SDW deferred message 413 + * @length: message length 414 + * @complete: message completion 415 + * @msg: SDW message 416 + */ 417 + struct sdw_defer { 418 + int length; 419 + struct completion complete; 420 + struct sdw_msg *msg; 421 + }; 422 + 423 + /** 424 + * struct sdw_master_ops - Master driver ops 425 + * @read_prop: Read Master properties 426 + * @xfer_msg: Transfer message callback 427 + * @xfer_msg_defer: Deferred version of the transfer message callback 428 + * @reset_page_addr: Reset the SCP page address registers 429 + */ 430 + struct sdw_master_ops { 431 + int (*read_prop)(struct sdw_bus *bus); 432 + 433 + enum sdw_command_response (*xfer_msg) 434 + (struct sdw_bus *bus, struct sdw_msg *msg); 435 + enum sdw_command_response (*xfer_msg_defer) 436 + (struct sdw_bus *bus, struct sdw_msg *msg, 437 + struct sdw_defer *defer); 438 + enum sdw_command_response (*reset_page_addr) 439 + (struct sdw_bus *bus, unsigned int dev_num); 440 + }; 441 + 442 + /** 443 + * struct sdw_bus - SoundWire bus 444 + * @dev: Master linux device 445 + * @link_id: Link id number, can be 0 to N, unique for each Master 446 + * @slaves: list of Slaves on this bus 447 + * @assigned: Bitmap for Slave device numbers.
448 + * Bit set implies used number, bit clear implies unused number. 449 + * @bus_lock: bus lock 450 + * @msg_lock: message lock 451 + * @ops: Master callback ops 452 + * @prop: Master properties 453 + * @defer_msg: Defer message 454 + * @clk_stop_timeout: Clock stop timeout computed 455 + */ 456 + struct sdw_bus { 457 + struct device *dev; 458 + unsigned int link_id; 459 + struct list_head slaves; 460 + DECLARE_BITMAP(assigned, SDW_MAX_DEVICES); 461 + struct mutex bus_lock; 462 + struct mutex msg_lock; 463 + const struct sdw_master_ops *ops; 464 + struct sdw_master_prop prop; 465 + struct sdw_defer defer_msg; 466 + unsigned int clk_stop_timeout; 467 + }; 468 + 469 + int sdw_add_bus_master(struct sdw_bus *bus); 470 + void sdw_delete_bus_master(struct sdw_bus *bus); 471 + 472 + /* messaging and data APIs */ 473 + 474 + int sdw_read(struct sdw_slave *slave, u32 addr); 475 + int sdw_write(struct sdw_slave *slave, u32 addr, u8 value); 476 + int sdw_nread(struct sdw_slave *slave, u32 addr, size_t count, u8 *val); 477 + int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, u8 *val); 478 + 479 + #endif /* __SOUNDWIRE_H */
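The packed `struct sdw_slave_id` above (two 16-bit IDs, an 8-bit class, and two 4-bit fields sharing a byte) can be exercised in plain userspace C. A minimal sketch; the byte order chosen for the six `SDW_SCP_DEVID_0..5` bytes here is a hypothetical layout for illustration, not taken from the MIPI spec text:

```c
#include <assert.h>	/* for the checks below */
#include <stdint.h>

/* Userspace mirror of struct sdw_slave_id from sdw.h */
struct sdw_slave_id {
	uint16_t mfg_id;
	uint16_t part_id;
	uint8_t class_id;
	uint8_t unique_id:4;
	uint8_t sdw_version:4;
};

/*
 * Decode six DevId bytes into an ID. The byte layout used here
 * (version/unique packed in byte 0, mfg_id in bytes 1-2, part_id in
 * bytes 3-4, class_id in byte 5) is an assumption for illustration.
 */
static struct sdw_slave_id sdw_decode_devid(const uint8_t devid[6])
{
	struct sdw_slave_id id;

	id.sdw_version = (devid[0] >> 4) & 0xf;
	id.unique_id = devid[0] & 0xf;
	id.mfg_id = (uint16_t)((devid[1] << 8) | devid[2]);
	id.part_id = (uint16_t)((devid[3] << 8) | devid[4]);
	id.class_id = devid[5];
	return id;
}
```

Note the kernel-doc remark above: the field order in the struct intentionally does not follow the DisCo spec definitions, so any decoder must map registers to fields explicitly rather than memcpy'ing.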
+24
include/linux/soundwire/sdw_intel.h
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + #ifndef __SDW_INTEL_H 5 + #define __SDW_INTEL_H 6 + 7 + /** 8 + * struct sdw_intel_res - Soundwire Intel resource structure 9 + * @mmio_base: mmio base of SoundWire registers 10 + * @irq: interrupt number 11 + * @handle: ACPI parent handle 12 + * @parent: parent device 13 + */ 14 + struct sdw_intel_res { 15 + void __iomem *mmio_base; 16 + int irq; 17 + acpi_handle handle; 18 + struct device *parent; 19 + }; 20 + 21 + void *sdw_intel_init(acpi_handle *parent_handle, struct sdw_intel_res *res); 22 + void sdw_intel_exit(void *arg); 23 + 24 + #endif
+194
include/linux/soundwire/sdw_registers.h
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + #ifndef __SDW_REGISTERS_H 5 + #define __SDW_REGISTERS_H 6 + 7 + /* 8 + * typically we define registers and shifts, but if one observes carefully, 9 + * the shift can be generated from the masks using a few bit primitives 10 + * like ffs etc., so we use that and avoid defining shifts 11 + */ 12 + #define SDW_REG_SHIFT(n) (ffs(n) - 1) 13 + 14 + /* 15 + * SDW registers as defined by MIPI 1.1 Spec 16 + */ 17 + #define SDW_REGADDR GENMASK(14, 0) 18 + #define SDW_SCP_ADDRPAGE2_MASK GENMASK(22, 15) 19 + #define SDW_SCP_ADDRPAGE1_MASK GENMASK(30, 23) 20 + 21 + #define SDW_REG_NO_PAGE 0x00008000 22 + #define SDW_REG_OPTIONAL_PAGE 0x00010000 23 + #define SDW_REG_MAX 0x80000000 24 + 25 + #define SDW_DPN_SIZE 0x100 26 + #define SDW_BANK1_OFFSET 0x10 27 + 28 + /* 29 + * DP0 Interrupt register & bits 30 + * 31 + * Spec treats Status (RO) and Clear (WC) as separate but they are at the same 32 + * address, so treat them as the same register with WC.
33 + */ 34 + 35 + /* both INT and STATUS register are same */ 36 + #define SDW_DP0_INT 0x0 37 + #define SDW_DP0_INTMASK 0x1 38 + #define SDW_DP0_PORTCTRL 0x2 39 + #define SDW_DP0_BLOCKCTRL1 0x3 40 + #define SDW_DP0_PREPARESTATUS 0x4 41 + #define SDW_DP0_PREPARECTRL 0x5 42 + 43 + #define SDW_DP0_INT_TEST_FAIL BIT(0) 44 + #define SDW_DP0_INT_PORT_READY BIT(1) 45 + #define SDW_DP0_INT_BRA_FAILURE BIT(2) 46 + #define SDW_DP0_INT_IMPDEF1 BIT(5) 47 + #define SDW_DP0_INT_IMPDEF2 BIT(6) 48 + #define SDW_DP0_INT_IMPDEF3 BIT(7) 49 + 50 + #define SDW_DP0_PORTCTRL_DATAMODE GENMASK(3, 2) 51 + #define SDW_DP0_PORTCTRL_NXTINVBANK BIT(4) 52 + #define SDW_DP0_PORTCTRL_BPT_PAYLD GENMASK(7, 6) 53 + 54 + #define SDW_DP0_CHANNELEN 0x20 55 + #define SDW_DP0_SAMPLECTRL1 0x22 56 + #define SDW_DP0_SAMPLECTRL2 0x23 57 + #define SDW_DP0_OFFSETCTRL1 0x24 58 + #define SDW_DP0_OFFSETCTRL2 0x25 59 + #define SDW_DP0_HCTRL 0x26 60 + #define SDW_DP0_LANECTRL 0x28 61 + 62 + /* Both INT and STATUS register are same */ 63 + #define SDW_SCP_INT1 0x40 64 + #define SDW_SCP_INTMASK1 0x41 65 + 66 + #define SDW_SCP_INT1_PARITY BIT(0) 67 + #define SDW_SCP_INT1_BUS_CLASH BIT(1) 68 + #define SDW_SCP_INT1_IMPL_DEF BIT(2) 69 + #define SDW_SCP_INT1_SCP2_CASCADE BIT(7) 70 + #define SDW_SCP_INT1_PORT0_3 GENMASK(6, 3) 71 + 72 + #define SDW_SCP_INTSTAT2 0x42 73 + #define SDW_SCP_INTSTAT2_SCP3_CASCADE BIT(7) 74 + #define SDW_SCP_INTSTAT2_PORT4_10 GENMASK(6, 0) 75 + 76 + 77 + #define SDW_SCP_INTSTAT3 0x43 78 + #define SDW_SCP_INTSTAT3_PORT11_14 GENMASK(3, 0) 79 + 80 + /* Number of interrupt status registers */ 81 + #define SDW_NUM_INT_STAT_REGISTERS 3 82 + 83 + /* Number of interrupt clear registers */ 84 + #define SDW_NUM_INT_CLEAR_REGISTERS 1 85 + 86 + #define SDW_SCP_CTRL 0x44 87 + #define SDW_SCP_CTRL_CLK_STP_NOW BIT(1) 88 + #define SDW_SCP_CTRL_FORCE_RESET BIT(7) 89 + 90 + #define SDW_SCP_STAT 0x44 91 + #define SDW_SCP_STAT_CLK_STP_NF BIT(0) 92 + #define SDW_SCP_STAT_HPHY_NOK BIT(5) 93 + #define 
SDW_SCP_STAT_CURR_BANK BIT(6) 94 + 95 + #define SDW_SCP_SYSTEMCTRL 0x45 96 + #define SDW_SCP_SYSTEMCTRL_CLK_STP_PREP BIT(0) 97 + #define SDW_SCP_SYSTEMCTRL_CLK_STP_MODE BIT(2) 98 + #define SDW_SCP_SYSTEMCTRL_WAKE_UP_EN BIT(3) 99 + #define SDW_SCP_SYSTEMCTRL_HIGH_PHY BIT(4) 100 + 101 + #define SDW_SCP_SYSTEMCTRL_CLK_STP_MODE0 0 102 + #define SDW_SCP_SYSTEMCTRL_CLK_STP_MODE1 BIT(2) 103 + 104 + #define SDW_SCP_DEVNUMBER 0x46 105 + #define SDW_SCP_HIGH_PHY_CHECK 0x47 106 + #define SDW_SCP_ADDRPAGE1 0x48 107 + #define SDW_SCP_ADDRPAGE2 0x49 108 + #define SDW_SCP_KEEPEREN 0x4A 109 + #define SDW_SCP_BANKDELAY 0x4B 110 + #define SDW_SCP_TESTMODE 0x4F 111 + #define SDW_SCP_DEVID_0 0x50 112 + #define SDW_SCP_DEVID_1 0x51 113 + #define SDW_SCP_DEVID_2 0x52 114 + #define SDW_SCP_DEVID_3 0x53 115 + #define SDW_SCP_DEVID_4 0x54 116 + #define SDW_SCP_DEVID_5 0x55 117 + 118 + /* Banked Registers */ 119 + #define SDW_SCP_FRAMECTRL_B0 0x60 120 + #define SDW_SCP_FRAMECTRL_B1 (0x60 + SDW_BANK1_OFFSET) 121 + #define SDW_SCP_NEXTFRAME_B0 0x61 122 + #define SDW_SCP_NEXTFRAME_B1 (0x61 + SDW_BANK1_OFFSET) 123 + 124 + /* Both INT and STATUS register is same */ 125 + #define SDW_DPN_INT(n) (0x0 + SDW_DPN_SIZE * (n)) 126 + #define SDW_DPN_INTMASK(n) (0x1 + SDW_DPN_SIZE * (n)) 127 + #define SDW_DPN_PORTCTRL(n) (0x2 + SDW_DPN_SIZE * (n)) 128 + #define SDW_DPN_BLOCKCTRL1(n) (0x3 + SDW_DPN_SIZE * (n)) 129 + #define SDW_DPN_PREPARESTATUS(n) (0x4 + SDW_DPN_SIZE * (n)) 130 + #define SDW_DPN_PREPARECTRL(n) (0x5 + SDW_DPN_SIZE * (n)) 131 + 132 + #define SDW_DPN_INT_TEST_FAIL BIT(0) 133 + #define SDW_DPN_INT_PORT_READY BIT(1) 134 + #define SDW_DPN_INT_IMPDEF1 BIT(5) 135 + #define SDW_DPN_INT_IMPDEF2 BIT(6) 136 + #define SDW_DPN_INT_IMPDEF3 BIT(7) 137 + 138 + #define SDW_DPN_PORTCTRL_FLOWMODE GENMASK(1, 0) 139 + #define SDW_DPN_PORTCTRL_DATAMODE GENMASK(3, 2) 140 + #define SDW_DPN_PORTCTRL_NXTINVBANK BIT(4) 141 + 142 + #define SDW_DPN_BLOCKCTRL1_WDLEN GENMASK(5, 0) 143 + 144 + #define 
SDW_DPN_PREPARECTRL_CH_PREP GENMASK(7, 0) 145 + 146 + #define SDW_DPN_CHANNELEN_B0(n) (0x20 + SDW_DPN_SIZE * (n)) 147 + #define SDW_DPN_CHANNELEN_B1(n) (0x30 + SDW_DPN_SIZE * (n)) 148 + 149 + #define SDW_DPN_BLOCKCTRL2_B0(n) (0x21 + SDW_DPN_SIZE * (n)) 150 + #define SDW_DPN_BLOCKCTRL2_B1(n) (0x31 + SDW_DPN_SIZE * (n)) 151 + 152 + #define SDW_DPN_SAMPLECTRL1_B0(n) (0x22 + SDW_DPN_SIZE * (n)) 153 + #define SDW_DPN_SAMPLECTRL1_B1(n) (0x32 + SDW_DPN_SIZE * (n)) 154 + 155 + #define SDW_DPN_SAMPLECTRL2_B0(n) (0x23 + SDW_DPN_SIZE * (n)) 156 + #define SDW_DPN_SAMPLECTRL2_B1(n) (0x33 + SDW_DPN_SIZE * (n)) 157 + 158 + #define SDW_DPN_OFFSETCTRL1_B0(n) (0x24 + SDW_DPN_SIZE * (n)) 159 + #define SDW_DPN_OFFSETCTRL1_B1(n) (0x34 + SDW_DPN_SIZE * (n)) 160 + 161 + #define SDW_DPN_OFFSETCTRL2_B0(n) (0x25 + SDW_DPN_SIZE * (n)) 162 + #define SDW_DPN_OFFSETCTRL2_B1(n) (0x35 + SDW_DPN_SIZE * (n)) 163 + 164 + #define SDW_DPN_HCTRL_B0(n) (0x26 + SDW_DPN_SIZE * (n)) 165 + #define SDW_DPN_HCTRL_B1(n) (0x36 + SDW_DPN_SIZE * (n)) 166 + 167 + #define SDW_DPN_BLOCKCTRL3_B0(n) (0x27 + SDW_DPN_SIZE * (n)) 168 + #define SDW_DPN_BLOCKCTRL3_B1(n) (0x37 + SDW_DPN_SIZE * (n)) 169 + 170 + #define SDW_DPN_LANECTRL_B0(n) (0x28 + SDW_DPN_SIZE * (n)) 171 + #define SDW_DPN_LANECTRL_B1(n) (0x38 + SDW_DPN_SIZE * (n)) 172 + 173 + #define SDW_DPN_SAMPLECTRL_LOW GENMASK(7, 0) 174 + #define SDW_DPN_SAMPLECTRL_HIGH GENMASK(15, 8) 175 + 176 + #define SDW_DPN_HCTRL_HSTART GENMASK(7, 4) 177 + #define SDW_DPN_HCTRL_HSTOP GENMASK(3, 0) 178 + 179 + #define SDW_NUM_CASC_PORT_INTSTAT1 4 180 + #define SDW_CASC_PORT_START_INTSTAT1 0 181 + #define SDW_CASC_PORT_MASK_INTSTAT1 0x8 182 + #define SDW_CASC_PORT_REG_OFFSET_INTSTAT1 0x0 183 + 184 + #define SDW_NUM_CASC_PORT_INTSTAT2 7 185 + #define SDW_CASC_PORT_START_INTSTAT2 4 186 + #define SDW_CASC_PORT_MASK_INTSTAT2 1 187 + #define SDW_CASC_PORT_REG_OFFSET_INTSTAT2 1 188 + 189 + #define SDW_NUM_CASC_PORT_INTSTAT3 4 190 + #define SDW_CASC_PORT_START_INTSTAT3 11 191 + #define 
SDW_CASC_PORT_MASK_INTSTAT3 1 192 + #define SDW_CASC_PORT_REG_OFFSET_INTSTAT3 2 193 + 194 + #endif /* __SDW_REGISTERS_H */
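The comment at the top of sdw_registers.h — derive each shift from its mask with ffs() instead of defining a parallel set of `*_SHIFT` constants — can be demonstrated outside the kernel. A small sketch with userspace stand-ins for the kernel's `GENMASK()` and the libc `ffs()`:

```c
#include <assert.h>	/* for the checks below */
#include <stdint.h>
#include <strings.h>	/* ffs() */

/* Userspace stand-in for the kernel's 32-bit GENMASK() */
#define GENMASK(h, l) ((~0u << (l)) & (~0u >> (31 - (h))))

/* From sdw_registers.h: the shift is recoverable from the mask itself */
#define SDW_REG_SHIFT(n) (ffs(n) - 1)

/* Two field masks from the DPN register map above */
#define SDW_DPN_HCTRL_HSTART GENMASK(7, 4)
#define SDW_DPN_HCTRL_HSTOP GENMASK(3, 0)

/* Extract a register field without a hand-maintained shift constant */
static unsigned int sdw_field_get(uint32_t reg, uint32_t mask)
{
	return (reg & mask) >> SDW_REG_SHIFT(mask);
}
```

ffs() returns the 1-based position of the lowest set bit, so `ffs(mask) - 1` is exactly the shift needed to right-align a contiguous field.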
+19
include/linux/soundwire/sdw_type.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright(c) 2015-17 Intel Corporation. 3 + 4 + #ifndef __SOUNDWIRE_TYPES_H 5 + #define __SOUNDWIRE_TYPES_H 6 + 7 + extern struct bus_type sdw_bus_type; 8 + 9 + #define drv_to_sdw_driver(_drv) container_of(_drv, struct sdw_driver, driver) 10 + 11 + #define sdw_register_driver(drv) \ 12 + __sdw_register_driver(drv, THIS_MODULE) 13 + 14 + int __sdw_register_driver(struct sdw_driver *drv, struct module *); 15 + void sdw_unregister_driver(struct sdw_driver *drv); 16 + 17 + int sdw_slave_modalias(const struct sdw_slave *slave, char *buf, size_t size); 18 + 19 + #endif /* __SOUNDWIRE_TYPES_H */
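`drv_to_sdw_driver()` above is the usual `container_of()` pattern: recover the enclosing `sdw_driver` from the embedded `device_driver` that the driver core hands back. A userspace sketch with a simplified local `container_of()` (the kernel version, from its own headers, additionally type-checks the member) and cut-down stand-in structs:

```c
#include <assert.h>	/* for the checks below */
#include <stddef.h>	/* offsetof() */

/* Simplified container_of(); the kernel's also type-checks ptr */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Cut-down stand-ins for the kernel structs */
struct device_driver {
	const char *name;
};

struct sdw_driver {
	const char *name;
	struct device_driver driver;
};

#define drv_to_sdw_driver(_drv) container_of(_drv, struct sdw_driver, driver)
```

Subtracting the member offset from the member's address yields the address of the containing object, which is why the bus code can register only the embedded `driver` with the core yet get its `sdw_driver` back in every callback.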
+79
include/linux/vbox_utils.h
··· 1 + /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ 2 + /* Copyright (C) 2006-2016 Oracle Corporation */ 3 + 4 + #ifndef __VBOX_UTILS_H__ 5 + #define __VBOX_UTILS_H__ 6 + 7 + #include <linux/printk.h> 8 + #include <linux/vbox_vmmdev_types.h> 9 + 10 + struct vbg_dev; 11 + 12 + /** 13 + * vboxguest logging functions, these log both to the backdoor and call 14 + * the equivalent kernel pr_foo function. 15 + */ 16 + __printf(1, 2) void vbg_info(const char *fmt, ...); 17 + __printf(1, 2) void vbg_warn(const char *fmt, ...); 18 + __printf(1, 2) void vbg_err(const char *fmt, ...); 19 + 20 + /* Only use backdoor logging for non-dynamic debug builds */ 21 + #if defined(DEBUG) && !defined(CONFIG_DYNAMIC_DEBUG) 22 + __printf(1, 2) void vbg_debug(const char *fmt, ...); 23 + #else 24 + #define vbg_debug pr_debug 25 + #endif 26 + 27 + /** 28 + * Allocate memory for generic request and initialize the request header. 29 + * 30 + * Return: the allocated memory 31 + * @len: Size of memory block required for the request. 32 + * @req_type: The generic request type. 33 + */ 34 + void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type); 35 + 36 + /** 37 + * Perform a generic request. 38 + * 39 + * Return: VBox status code 40 + * @gdev: The Guest extension device. 41 + * @req: Pointer to the request structure. 
42 + */ 43 + int vbg_req_perform(struct vbg_dev *gdev, void *req); 44 + 45 + int vbg_hgcm_connect(struct vbg_dev *gdev, 46 + struct vmmdev_hgcm_service_location *loc, 47 + u32 *client_id, int *vbox_status); 48 + 49 + int vbg_hgcm_disconnect(struct vbg_dev *gdev, u32 client_id, int *vbox_status); 50 + 51 + int vbg_hgcm_call(struct vbg_dev *gdev, u32 client_id, u32 function, 52 + u32 timeout_ms, struct vmmdev_hgcm_function_parameter *parms, 53 + u32 parm_count, int *vbox_status); 54 + 55 + int vbg_hgcm_call32( 56 + struct vbg_dev *gdev, u32 client_id, u32 function, u32 timeout_ms, 57 + struct vmmdev_hgcm_function_parameter32 *parm32, u32 parm_count, 58 + int *vbox_status); 59 + 60 + /** 61 + * Convert a VirtualBox status code to a standard Linux kernel return value. 62 + * Return: 0 or negative errno value. 63 + * @rc: VirtualBox status code to convert. 64 + */ 65 + int vbg_status_code_to_errno(int rc); 66 + 67 + /** 68 + * Helper for the vboxsf driver to get a reference to the guest device. 69 + * Return: a pointer to the gdev; or a ERR_PTR value on error. 70 + */ 71 + struct vbg_dev *vbg_get_gdev(void); 72 + 73 + /** 74 + * Helper for the vboxsf driver to put a guest device reference. 75 + * @gdev: Reference returned by vbg_get_gdev to put. 76 + */ 77 + void vbg_put_gdev(struct vbg_dev *gdev); 78 + 79 + #endif
+66
include/trace/events/siox.h
··· 1 + #undef TRACE_SYSTEM 2 + #define TRACE_SYSTEM siox 3 + 4 + #if !defined(_TRACE_SIOX_H) || defined(TRACE_HEADER_MULTI_READ) 5 + #define _TRACE_SIOX_H 6 + 7 + #include <linux/tracepoint.h> 8 + 9 + TRACE_EVENT(siox_set_data, 10 + TP_PROTO(const struct siox_master *smaster, 11 + const struct siox_device *sdevice, 12 + unsigned int devno, size_t bufoffset), 13 + TP_ARGS(smaster, sdevice, devno, bufoffset), 14 + TP_STRUCT__entry( 15 + __field(int, busno) 16 + __field(unsigned int, devno) 17 + __field(size_t, inbytes) 18 + __dynamic_array(u8, buf, sdevice->inbytes) 19 + ), 20 + TP_fast_assign( 21 + __entry->busno = smaster->busno; 22 + __entry->devno = devno; 23 + __entry->inbytes = sdevice->inbytes; 24 + memcpy(__get_dynamic_array(buf), 25 + smaster->buf + bufoffset, sdevice->inbytes); 26 + ), 27 + TP_printk("siox-%d-%u [%*phD]", 28 + __entry->busno, 29 + __entry->devno, 30 + (int)__entry->inbytes, __get_dynamic_array(buf) 31 + ) 32 + ); 33 + 34 + TRACE_EVENT(siox_get_data, 35 + TP_PROTO(const struct siox_master *smaster, 36 + const struct siox_device *sdevice, 37 + unsigned int devno, u8 status_clean, 38 + size_t bufoffset), 39 + TP_ARGS(smaster, sdevice, devno, status_clean, bufoffset), 40 + TP_STRUCT__entry( 41 + __field(int, busno) 42 + __field(unsigned int, devno) 43 + __field(u8, status_clean) 44 + __field(size_t, outbytes) 45 + __dynamic_array(u8, buf, sdevice->outbytes) 46 + ), 47 + TP_fast_assign( 48 + __entry->busno = smaster->busno; 49 + __entry->devno = devno; 50 + __entry->status_clean = status_clean; 51 + __entry->outbytes = sdevice->outbytes; 52 + memcpy(__get_dynamic_array(buf), 53 + smaster->buf + bufoffset, sdevice->outbytes); 54 + ), 55 + TP_printk("siox-%d-%u (%02hhx) [%*phD]", 56 + __entry->busno, 57 + __entry->devno, 58 + __entry->status_clean, 59 + (int)__entry->outbytes, __get_dynamic_array(buf) 60 + ) 61 + ); 62 + 63 + #endif /* if !defined(_TRACE_SIOX_H) || defined(TRACE_HEADER_MULTI_READ) */ 64 + 65 + /* This part must be outside 
protection */ 66 + #include <trace/define_trace.h>
+11 -1
include/uapi/linux/lp.h
··· 8 8 #ifndef _UAPI_LINUX_LP_H 9 9 #define _UAPI_LINUX_LP_H 10 10 11 + #include <linux/types.h> 12 + #include <linux/ioctl.h> 11 13 12 14 /* 13 15 * Per POSIX guidelines, this module reserves the LP and lp prefixes ··· 90 88 #define LPGETSTATS 0x060d /* get statistics (struct lp_stats) */ 91 89 #endif 92 90 #define LPGETFLAGS 0x060e /* get status flags */ 93 - #define LPSETTIMEOUT 0x060f /* set parport timeout */ 91 + #define LPSETTIMEOUT_OLD 0x060f /* set parport timeout */ 92 + #define LPSETTIMEOUT_NEW \ 93 + _IOW(0x6, 0xf, __s64[2]) /* set parport timeout */ 94 + #if __BITS_PER_LONG == 64 95 + #define LPSETTIMEOUT LPSETTIMEOUT_OLD 96 + #else 97 + #define LPSETTIMEOUT (sizeof(time_t) > sizeof(__kernel_long_t) ? \ 98 + LPSETTIMEOUT_NEW : LPSETTIMEOUT_OLD) 99 + #endif 94 100 95 101 /* timeout for printk'ing a timeout, in jiffies (100ths of a second). 96 102 This is also used for re-checking error conditions if LP_ABORT is
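The `LPSETTIMEOUT` split above exists because the old request number 0x060f encodes no argument size or direction, so the kernel cannot tell which timeout layout userspace passes; the new `_IOW()` number encodes both, letting 32-bit userspace with 64-bit time_t be distinguished. A userspace sketch with a local copy of the asm-generic ioctl encoding (dir:2 | size:14 | type:8 | nr:8, as used on x86 and most architectures; a few, such as MIPS and SPARC, encode differently):

```c
#include <assert.h>	/* for the checks below */
#include <stdint.h>

/* Local copy of the asm-generic ioctl number encoding */
#define IOC_WRITE 1u
#define IOC(dir, type, nr, size) \
	(((dir) << 30) | ((size) << 16) | ((type) << 8) | (nr))
#define IOW(type, nr, argtype) IOC(IOC_WRITE, (type), (nr), sizeof(argtype))

/* From include/uapi/linux/lp.h */
#define LPSETTIMEOUT_OLD 0x060f

/* __s64[2] in the header: a 64-bit pair mirroring the old struct timeval */
typedef int64_t lp_timeout[2];
#define LPSETTIMEOUT_NEW IOW(0x6, 0xf, lp_timeout)
```

On 64-bit builds `LPSETTIMEOUT` stays the old number (time_t is already 64-bit there); on 32-bit builds the header picks old or new at compile time by comparing `sizeof(time_t)` against `sizeof(__kernel_long_t)`.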
+151
include/uapi/linux/vbox_err.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + /* Copyright (C) 2017 Oracle Corporation */ 3 + 4 + #ifndef __UAPI_VBOX_ERR_H__ 5 + #define __UAPI_VBOX_ERR_H__ 6 + 7 + #define VINF_SUCCESS 0 8 + #define VERR_GENERAL_FAILURE (-1) 9 + #define VERR_INVALID_PARAMETER (-2) 10 + #define VERR_INVALID_MAGIC (-3) 11 + #define VERR_INVALID_HANDLE (-4) 12 + #define VERR_LOCK_FAILED (-5) 13 + #define VERR_INVALID_POINTER (-6) 14 + #define VERR_IDT_FAILED (-7) 15 + #define VERR_NO_MEMORY (-8) 16 + #define VERR_ALREADY_LOADED (-9) 17 + #define VERR_PERMISSION_DENIED (-10) 18 + #define VERR_VERSION_MISMATCH (-11) 19 + #define VERR_NOT_IMPLEMENTED (-12) 20 + #define VERR_INVALID_FLAGS (-13) 21 + 22 + #define VERR_NOT_EQUAL (-18) 23 + #define VERR_NOT_SYMLINK (-19) 24 + #define VERR_NO_TMP_MEMORY (-20) 25 + #define VERR_INVALID_FMODE (-21) 26 + #define VERR_WRONG_ORDER (-22) 27 + #define VERR_NO_TLS_FOR_SELF (-23) 28 + #define VERR_FAILED_TO_SET_SELF_TLS (-24) 29 + #define VERR_NO_CONT_MEMORY (-26) 30 + #define VERR_NO_PAGE_MEMORY (-27) 31 + #define VERR_THREAD_IS_DEAD (-29) 32 + #define VERR_THREAD_NOT_WAITABLE (-30) 33 + #define VERR_PAGE_TABLE_NOT_PRESENT (-31) 34 + #define VERR_INVALID_CONTEXT (-32) 35 + #define VERR_TIMER_BUSY (-33) 36 + #define VERR_ADDRESS_CONFLICT (-34) 37 + #define VERR_UNRESOLVED_ERROR (-35) 38 + #define VERR_INVALID_FUNCTION (-36) 39 + #define VERR_NOT_SUPPORTED (-37) 40 + #define VERR_ACCESS_DENIED (-38) 41 + #define VERR_INTERRUPTED (-39) 42 + #define VERR_TIMEOUT (-40) 43 + #define VERR_BUFFER_OVERFLOW (-41) 44 + #define VERR_TOO_MUCH_DATA (-42) 45 + #define VERR_MAX_THRDS_REACHED (-43) 46 + #define VERR_MAX_PROCS_REACHED (-44) 47 + #define VERR_SIGNAL_REFUSED (-45) 48 + #define VERR_SIGNAL_PENDING (-46) 49 + #define VERR_SIGNAL_INVALID (-47) 50 + #define VERR_STATE_CHANGED (-48) 51 + #define VERR_INVALID_UUID_FORMAT (-49) 52 + #define VERR_PROCESS_NOT_FOUND (-50) 53 + #define VERR_PROCESS_RUNNING (-51) 54 + #define VERR_TRY_AGAIN (-52) 55 + #define 
VERR_PARSE_ERROR (-53) 56 + #define VERR_OUT_OF_RANGE (-54) 57 + #define VERR_NUMBER_TOO_BIG (-55) 58 + #define VERR_NO_DIGITS (-56) 59 + #define VERR_NEGATIVE_UNSIGNED (-57) 60 + #define VERR_NO_TRANSLATION (-58) 61 + 62 + #define VERR_NOT_FOUND (-78) 63 + #define VERR_INVALID_STATE (-79) 64 + #define VERR_OUT_OF_RESOURCES (-80) 65 + 66 + #define VERR_FILE_NOT_FOUND (-102) 67 + #define VERR_PATH_NOT_FOUND (-103) 68 + #define VERR_INVALID_NAME (-104) 69 + #define VERR_ALREADY_EXISTS (-105) 70 + #define VERR_TOO_MANY_OPEN_FILES (-106) 71 + #define VERR_SEEK (-107) 72 + #define VERR_NEGATIVE_SEEK (-108) 73 + #define VERR_SEEK_ON_DEVICE (-109) 74 + #define VERR_EOF (-110) 75 + #define VERR_READ_ERROR (-111) 76 + #define VERR_WRITE_ERROR (-112) 77 + #define VERR_WRITE_PROTECT (-113) 78 + #define VERR_SHARING_VIOLATION (-114) 79 + #define VERR_FILE_LOCK_FAILED (-115) 80 + #define VERR_FILE_LOCK_VIOLATION (-116) 81 + #define VERR_CANT_CREATE (-117) 82 + #define VERR_CANT_DELETE_DIRECTORY (-118) 83 + #define VERR_NOT_SAME_DEVICE (-119) 84 + #define VERR_FILENAME_TOO_LONG (-120) 85 + #define VERR_MEDIA_NOT_PRESENT (-121) 86 + #define VERR_MEDIA_NOT_RECOGNIZED (-122) 87 + #define VERR_FILE_NOT_LOCKED (-123) 88 + #define VERR_FILE_LOCK_LOST (-124) 89 + #define VERR_DIR_NOT_EMPTY (-125) 90 + #define VERR_NOT_A_DIRECTORY (-126) 91 + #define VERR_IS_A_DIRECTORY (-127) 92 + #define VERR_FILE_TOO_BIG (-128) 93 + 94 + #define VERR_NET_IO_ERROR (-400) 95 + #define VERR_NET_OUT_OF_RESOURCES (-401) 96 + #define VERR_NET_HOST_NOT_FOUND (-402) 97 + #define VERR_NET_PATH_NOT_FOUND (-403) 98 + #define VERR_NET_PRINT_ERROR (-404) 99 + #define VERR_NET_NO_NETWORK (-405) 100 + #define VERR_NET_NOT_UNIQUE_NAME (-406) 101 + 102 + #define VERR_NET_IN_PROGRESS (-436) 103 + #define VERR_NET_ALREADY_IN_PROGRESS (-437) 104 + #define VERR_NET_NOT_SOCKET (-438) 105 + #define VERR_NET_DEST_ADDRESS_REQUIRED (-439) 106 + #define VERR_NET_MSG_SIZE (-440) 107 + #define VERR_NET_PROTOCOL_TYPE (-441) 108 + 
#define VERR_NET_PROTOCOL_NOT_AVAILABLE (-442) 109 + #define VERR_NET_PROTOCOL_NOT_SUPPORTED (-443) 110 + #define VERR_NET_SOCKET_TYPE_NOT_SUPPORTED (-444) 111 + #define VERR_NET_OPERATION_NOT_SUPPORTED (-445) 112 + #define VERR_NET_PROTOCOL_FAMILY_NOT_SUPPORTED (-446) 113 + #define VERR_NET_ADDRESS_FAMILY_NOT_SUPPORTED (-447) 114 + #define VERR_NET_ADDRESS_IN_USE (-448) 115 + #define VERR_NET_ADDRESS_NOT_AVAILABLE (-449) 116 + #define VERR_NET_DOWN (-450) 117 + #define VERR_NET_UNREACHABLE (-451) 118 + #define VERR_NET_CONNECTION_RESET (-452) 119 + #define VERR_NET_CONNECTION_ABORTED (-453) 120 + #define VERR_NET_CONNECTION_RESET_BY_PEER (-454) 121 + #define VERR_NET_NO_BUFFER_SPACE (-455) 122 + #define VERR_NET_ALREADY_CONNECTED (-456) 123 + #define VERR_NET_NOT_CONNECTED (-457) 124 + #define VERR_NET_SHUTDOWN (-458) 125 + #define VERR_NET_TOO_MANY_REFERENCES (-459) 126 + #define VERR_NET_CONNECTION_TIMED_OUT (-460) 127 + #define VERR_NET_CONNECTION_REFUSED (-461) 128 + #define VERR_NET_HOST_DOWN (-464) 129 + #define VERR_NET_HOST_UNREACHABLE (-465) 130 + #define VERR_NET_PROTOCOL_ERROR (-466) 131 + #define VERR_NET_INCOMPLETE_TX_PACKET (-467) 132 + 133 + /* misc. unsorted codes */ 134 + #define VERR_RESOURCE_BUSY (-138) 135 + #define VERR_DISK_FULL (-152) 136 + #define VERR_TOO_MANY_SYMLINKS (-156) 137 + #define VERR_NO_MORE_FILES (-201) 138 + #define VERR_INTERNAL_ERROR (-225) 139 + #define VERR_INTERNAL_ERROR_2 (-226) 140 + #define VERR_INTERNAL_ERROR_3 (-227) 141 + #define VERR_INTERNAL_ERROR_4 (-228) 142 + #define VERR_DEV_IO_ERROR (-250) 143 + #define VERR_IO_BAD_LENGTH (-255) 144 + #define VERR_BROKEN_PIPE (-301) 145 + #define VERR_NO_DATA (-304) 146 + #define VERR_SEM_DESTROYED (-363) 147 + #define VERR_DEADLOCK (-365) 148 + #define VERR_BAD_EXE_FORMAT (-608) 149 + #define VINF_HGCM_ASYNC_EXECUTE (2903) 150 + 151 + #endif
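These VERR_* codes are what `vbg_status_code_to_errno()` (declared in vbox_utils.h above) translates into negative errno values for callers. A hedged userspace sketch in that spirit; the handful of mappings below are plausible illustrations, not the driver's exact table, which is larger:

```c
#include <assert.h>	/* for the checks below */
#include <errno.h>

/* A few codes from include/uapi/linux/vbox_err.h */
#define VINF_SUCCESS 0
#define VERR_INVALID_PARAMETER (-2)
#define VERR_NO_MEMORY (-8)
#define VERR_PERMISSION_DENIED (-10)
#define VERR_NOT_IMPLEMENTED (-12)
#define VERR_TIMEOUT (-40)

/*
 * Illustrative VBox-status -> errno translation; the real
 * vbg_status_code_to_errno() covers many more codes and may map
 * some of them differently.
 */
static int vbox_status_to_errno(int rc)
{
	if (rc >= 0)	/* VINF_* informational codes count as success */
		return 0;

	switch (rc) {
	case VERR_INVALID_PARAMETER:	return -EINVAL;
	case VERR_NO_MEMORY:		return -ENOMEM;
	case VERR_PERMISSION_DENIED:	return -EPERM;
	case VERR_NOT_IMPLEMENTED:	return -ENOSYS;
	case VERR_TIMEOUT:		return -ETIMEDOUT;
	default:			return -EPROTO;	/* unknown code */
	}
}
```

The commit list above ("Add error mapping for VERR_INVALID_NAME and VERR_NO_MORE_FILES") shows this table growing as vboxsf and other callers need new codes.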
+226
include/uapi/linux/vbox_vmmdev_types.h
··· 1 + /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */ 2 + /* 3 + * Virtual Device for Guest <-> VMM/Host communication, type definitions 4 + * which are also used for the vboxguest ioctl interface / by vboxsf 5 + * 6 + * Copyright (C) 2006-2016 Oracle Corporation 7 + */ 8 + 9 + #ifndef __UAPI_VBOX_VMMDEV_TYPES_H__ 10 + #define __UAPI_VBOX_VMMDEV_TYPES_H__ 11 + 12 + #include <asm/bitsperlong.h> 13 + #include <linux/types.h> 14 + 15 + /* 16 + * We cannot use linux' compiletime_assert here because it expects to be used 17 + * inside a function only. Use a typedef to a char array with a negative size. 18 + */ 19 + #define VMMDEV_ASSERT_SIZE(type, size) \ 20 + typedef char type ## _asrt_size[1 - 2*!!(sizeof(struct type) != (size))] 21 + 22 + /** enum vmmdev_request_type - VMMDev request types. */ 23 + enum vmmdev_request_type { 24 + VMMDEVREQ_INVALID_REQUEST = 0, 25 + VMMDEVREQ_GET_MOUSE_STATUS = 1, 26 + VMMDEVREQ_SET_MOUSE_STATUS = 2, 27 + VMMDEVREQ_SET_POINTER_SHAPE = 3, 28 + VMMDEVREQ_GET_HOST_VERSION = 4, 29 + VMMDEVREQ_IDLE = 5, 30 + VMMDEVREQ_GET_HOST_TIME = 10, 31 + VMMDEVREQ_GET_HYPERVISOR_INFO = 20, 32 + VMMDEVREQ_SET_HYPERVISOR_INFO = 21, 33 + VMMDEVREQ_REGISTER_PATCH_MEMORY = 22, /* since version 3.0.6 */ 34 + VMMDEVREQ_DEREGISTER_PATCH_MEMORY = 23, /* since version 3.0.6 */ 35 + VMMDEVREQ_SET_POWER_STATUS = 30, 36 + VMMDEVREQ_ACKNOWLEDGE_EVENTS = 41, 37 + VMMDEVREQ_CTL_GUEST_FILTER_MASK = 42, 38 + VMMDEVREQ_REPORT_GUEST_INFO = 50, 39 + VMMDEVREQ_REPORT_GUEST_INFO2 = 58, /* since version 3.2.0 */ 40 + VMMDEVREQ_REPORT_GUEST_STATUS = 59, /* since version 3.2.8 */ 41 + VMMDEVREQ_REPORT_GUEST_USER_STATE = 74, /* since version 4.3 */ 42 + /* Retrieve a display resize request sent by the host, deprecated. */ 43 + VMMDEVREQ_GET_DISPLAY_CHANGE_REQ = 51, 44 + VMMDEVREQ_VIDEMODE_SUPPORTED = 52, 45 + VMMDEVREQ_GET_HEIGHT_REDUCTION = 53, 46 + /** 47 + * @VMMDEVREQ_GET_DISPLAY_CHANGE_REQ2: 48 + * Retrieve a display resize request sent by the host. 
49 + * 50 + * Queries a display resize request sent from the host. If the 51 + * event_ack member is set to true and there is an unqueried request 52 + * available for one of the virtual displays, then that request will 53 + * be returned. If several displays have unqueried requests, the lowest 54 + * numbered display will be chosen first. Only the most recent unseen 55 + * request for each display is remembered. 56 + * If event_ack is set to false, the last host request queried with 57 + * event_ack set is resent, or failing that, the most recent one received 58 + * from the host. If no host request was ever received then all zeros 59 + * are returned. 60 + */ 61 + VMMDEVREQ_GET_DISPLAY_CHANGE_REQ2 = 54, 62 + VMMDEVREQ_REPORT_GUEST_CAPABILITIES = 55, 63 + VMMDEVREQ_SET_GUEST_CAPABILITIES = 56, 64 + VMMDEVREQ_VIDEMODE_SUPPORTED2 = 57, /* since version 3.2.0 */ 65 + VMMDEVREQ_GET_DISPLAY_CHANGE_REQEX = 80, /* since version 4.2.4 */ 66 + VMMDEVREQ_HGCM_CONNECT = 60, 67 + VMMDEVREQ_HGCM_DISCONNECT = 61, 68 + VMMDEVREQ_HGCM_CALL32 = 62, 69 + VMMDEVREQ_HGCM_CALL64 = 63, 70 + VMMDEVREQ_HGCM_CANCEL = 64, 71 + VMMDEVREQ_HGCM_CANCEL2 = 65, 72 + VMMDEVREQ_VIDEO_ACCEL_ENABLE = 70, 73 + VMMDEVREQ_VIDEO_ACCEL_FLUSH = 71, 74 + VMMDEVREQ_VIDEO_SET_VISIBLE_REGION = 72, 75 + VMMDEVREQ_GET_SEAMLESS_CHANGE_REQ = 73, 76 + VMMDEVREQ_QUERY_CREDENTIALS = 100, 77 + VMMDEVREQ_REPORT_CREDENTIALS_JUDGEMENT = 101, 78 + VMMDEVREQ_REPORT_GUEST_STATS = 110, 79 + VMMDEVREQ_GET_MEMBALLOON_CHANGE_REQ = 111, 80 + VMMDEVREQ_GET_STATISTICS_CHANGE_REQ = 112, 81 + VMMDEVREQ_CHANGE_MEMBALLOON = 113, 82 + VMMDEVREQ_GET_VRDPCHANGE_REQ = 150, 83 + VMMDEVREQ_LOG_STRING = 200, 84 + VMMDEVREQ_GET_CPU_HOTPLUG_REQ = 210, 85 + VMMDEVREQ_SET_CPU_HOTPLUG_STATUS = 211, 86 + VMMDEVREQ_REGISTER_SHARED_MODULE = 212, 87 + VMMDEVREQ_UNREGISTER_SHARED_MODULE = 213, 88 + VMMDEVREQ_CHECK_SHARED_MODULES = 214, 89 + VMMDEVREQ_GET_PAGE_SHARING_STATUS = 215, 90 + VMMDEVREQ_DEBUG_IS_PAGE_SHARED = 216, 91 + VMMDEVREQ_GET_SESSION_ID =
217, /* since version 3.2.8 */ 92 + VMMDEVREQ_WRITE_COREDUMP = 218, 93 + VMMDEVREQ_GUEST_HEARTBEAT = 219, 94 + VMMDEVREQ_HEARTBEAT_CONFIGURE = 220, 95 + /* Ensure the enum is a 32 bit data-type */ 96 + VMMDEVREQ_SIZEHACK = 0x7fffffff 97 + }; 98 + 99 + #if __BITS_PER_LONG == 64 100 + #define VMMDEVREQ_HGCM_CALL VMMDEVREQ_HGCM_CALL64 101 + #else 102 + #define VMMDEVREQ_HGCM_CALL VMMDEVREQ_HGCM_CALL32 103 + #endif 104 + 105 + /** HGCM service location types. */ 106 + enum vmmdev_hgcm_service_location_type { 107 + VMMDEV_HGCM_LOC_INVALID = 0, 108 + VMMDEV_HGCM_LOC_LOCALHOST = 1, 109 + VMMDEV_HGCM_LOC_LOCALHOST_EXISTING = 2, 110 + /* Ensure the enum is a 32 bit data-type */ 111 + VMMDEV_HGCM_LOC_SIZEHACK = 0x7fffffff 112 + }; 113 + 114 + /** HGCM host service location. */ 115 + struct vmmdev_hgcm_service_location_localhost { 116 + /** Service name */ 117 + char service_name[128]; 118 + }; 119 + VMMDEV_ASSERT_SIZE(vmmdev_hgcm_service_location_localhost, 128); 120 + 121 + /** HGCM service location. */ 122 + struct vmmdev_hgcm_service_location { 123 + /** Type of the location. */ 124 + enum vmmdev_hgcm_service_location_type type; 125 + 126 + union { 127 + struct vmmdev_hgcm_service_location_localhost localhost; 128 + } u; 129 + }; 130 + VMMDEV_ASSERT_SIZE(vmmdev_hgcm_service_location, 128 + 4); 131 + 132 + /** HGCM function parameter type. */ 133 + enum vmmdev_hgcm_function_parameter_type { 134 + VMMDEV_HGCM_PARM_TYPE_INVALID = 0, 135 + VMMDEV_HGCM_PARM_TYPE_32BIT = 1, 136 + VMMDEV_HGCM_PARM_TYPE_64BIT = 2, 137 + /** Deprecated Doesn't work, use PAGELIST. 
*/ 138 + VMMDEV_HGCM_PARM_TYPE_PHYSADDR = 3, 139 + /** In and Out, user-memory */ 140 + VMMDEV_HGCM_PARM_TYPE_LINADDR = 4, 141 + /** In, user-memory (read; host<-guest) */ 142 + VMMDEV_HGCM_PARM_TYPE_LINADDR_IN = 5, 143 + /** Out, user-memory (write; host->guest) */ 144 + VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT = 6, 145 + /** In and Out, kernel-memory */ 146 + VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL = 7, 147 + /** In, kernel-memory (read; host<-guest) */ 148 + VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_IN = 8, 149 + /** Out, kernel-memory (write; host->guest) */ 150 + VMMDEV_HGCM_PARM_TYPE_LINADDR_KERNEL_OUT = 9, 151 + /** Physical addresses of locked pages for a buffer. */ 152 + VMMDEV_HGCM_PARM_TYPE_PAGELIST = 10, 153 + /* Ensure the enum is a 32 bit data-type */ 154 + VMMDEV_HGCM_PARM_TYPE_SIZEHACK = 0x7fffffff 155 + }; 156 + 157 + /** HGCM function parameter, 32-bit client. */ 158 + struct vmmdev_hgcm_function_parameter32 { 159 + enum vmmdev_hgcm_function_parameter_type type; 160 + union { 161 + __u32 value32; 162 + __u64 value64; 163 + struct { 164 + __u32 size; 165 + union { 166 + __u32 phys_addr; 167 + __u32 linear_addr; 168 + } u; 169 + } pointer; 170 + struct { 171 + /** Size of the buffer described by the page list. */ 172 + __u32 size; 173 + /** Relative to the request header. */ 174 + __u32 offset; 175 + } page_list; 176 + } u; 177 + } __packed; 178 + VMMDEV_ASSERT_SIZE(vmmdev_hgcm_function_parameter32, 4 + 8); 179 + 180 + /** HGCM function parameter, 64-bit client. */ 181 + struct vmmdev_hgcm_function_parameter64 { 182 + enum vmmdev_hgcm_function_parameter_type type; 183 + union { 184 + __u32 value32; 185 + __u64 value64; 186 + struct { 187 + __u32 size; 188 + union { 189 + __u64 phys_addr; 190 + __u64 linear_addr; 191 + } u; 192 + } __packed pointer; 193 + struct { 194 + /** Size of the buffer described by the page list. */ 195 + __u32 size; 196 + /** Relative to the request header. 
*/ 197 + __u32 offset; 198 + } page_list; 199 + } __packed u; 200 + } __packed; 201 + VMMDEV_ASSERT_SIZE(vmmdev_hgcm_function_parameter64, 4 + 12); 202 + 203 + #if __BITS_PER_LONG == 64 204 + #define vmmdev_hgcm_function_parameter vmmdev_hgcm_function_parameter64 205 + #else 206 + #define vmmdev_hgcm_function_parameter vmmdev_hgcm_function_parameter32 207 + #endif 208 + 209 + #define VMMDEV_HGCM_F_PARM_DIRECTION_NONE 0x00000000U 210 + #define VMMDEV_HGCM_F_PARM_DIRECTION_TO_HOST 0x00000001U 211 + #define VMMDEV_HGCM_F_PARM_DIRECTION_FROM_HOST 0x00000002U 212 + #define VMMDEV_HGCM_F_PARM_DIRECTION_BOTH 0x00000003U 213 + 214 + /** 215 + * struct vmmdev_hgcm_pagelist - VMMDEV_HGCM_PARM_TYPE_PAGELIST parameters 216 + * point to this structure to actually describe the buffer. 217 + */ 218 + struct vmmdev_hgcm_pagelist { 219 + __u32 flags; /** VMMDEV_HGCM_F_PARM_*. */ 220 + __u16 offset_first_page; /** Data offset in the first page. */ 221 + __u16 page_count; /** Number of pages. */ 222 + __u64 pages[1]; /** Page addresses. */ 223 + }; 224 + VMMDEV_ASSERT_SIZE(vmmdev_hgcm_pagelist, 4 + 2 + 2 + 8); 225 + 226 + #endif
+330
include/uapi/linux/vboxguest.h
···
1 + /* SPDX-License-Identifier: (GPL-2.0 OR CDDL-1.0) */
2 + /*
3 + * VBoxGuest - VirtualBox Guest Additions Driver Interface.
4 + *
5 + * Copyright (C) 2006-2016 Oracle Corporation
6 + */
7 +
8 + #ifndef __UAPI_VBOXGUEST_H__
9 + #define __UAPI_VBOXGUEST_H__
10 +
11 + #include <asm/bitsperlong.h>
12 + #include <linux/ioctl.h>
13 + #include <linux/vbox_err.h>
14 + #include <linux/vbox_vmmdev_types.h>
15 +
16 + /* Version of vbg_ioctl_hdr structure. */
17 + #define VBG_IOCTL_HDR_VERSION 0x10001
18 + /* Default request type. Use this for non-VMMDev requests. */
19 + #define VBG_IOCTL_HDR_TYPE_DEFAULT 0
20 +
21 + /**
22 + * Common ioctl header.
23 + *
24 + * This is a mirror of vmmdev_request_header to prevent duplicating data and
25 + * needing to verify things multiple times.
26 + */
27 + struct vbg_ioctl_hdr {
28 + /** IN: The request input size, and output size if size_out is zero. */
29 + __u32 size_in;
30 + /** IN: Structure version (VBG_IOCTL_HDR_VERSION) */
31 + __u32 version;
32 + /** IN: The VMMDev request type or VBG_IOCTL_HDR_TYPE_DEFAULT. */
33 + __u32 type;
34 + /**
35 + * OUT: The VBox status code of the operation, out direction only.
36 + * This is a VINF_ or VERR_ value as defined in vbox_err.h.
37 + */
38 + __s32 rc;
39 + /** IN: Output size. Set to zero to use size_in as output size. */
40 + __u32 size_out;
41 + /** Reserved, MBZ. */
42 + __u32 reserved;
43 + };
44 + VMMDEV_ASSERT_SIZE(vbg_ioctl_hdr, 24);
45 +
46 +
47 + /*
48 + * The VBoxGuest I/O control version.
49 + *
50 + * As usual, the high word contains the major version and changes to it
51 + * signify incompatible changes.
52 + *
53 + * The lower word is the minor version number; it is increased when new
54 + * functions are added or existing ones changed in a backwards compatible manner.
55 + */
56 + #define VBG_IOC_VERSION 0x00010000u
57 +
58 + /**
59 + * VBG_IOCTL_DRIVER_VERSION_INFO data structure
60 + *
61 + * Note VBG_IOCTL_DRIVER_VERSION_INFO may switch the session to a backwards
62 + * compatible interface version if uClientVersion indicates older client code.
63 + */
64 + struct vbg_ioctl_driver_version_info {
65 + /** The header. */
66 + struct vbg_ioctl_hdr hdr;
67 + union {
68 + struct {
69 + /** Requested interface version (VBG_IOC_VERSION). */
70 + __u32 req_version;
71 + /**
72 + * Minimum interface version number (typically the
73 + * major version part of VBG_IOC_VERSION).
74 + */
75 + __u32 min_version;
76 + /** Reserved, MBZ. */
77 + __u32 reserved1;
78 + /** Reserved, MBZ. */
79 + __u32 reserved2;
80 + } in;
81 + struct {
82 + /** Version for this session (typ. VBG_IOC_VERSION). */
83 + __u32 session_version;
84 + /** Version of the IDC interface (VBG_IOC_VERSION). */
85 + __u32 driver_version;
86 + /** The SVN revision of the driver, or 0. */
87 + __u32 driver_revision;
88 + /** Reserved \#1 (zero until defined). */
89 + __u32 reserved1;
90 + /** Reserved \#2 (zero until defined). */
91 + __u32 reserved2;
92 + } out;
93 + } u;
94 + };
95 + VMMDEV_ASSERT_SIZE(vbg_ioctl_driver_version_info, 24 + 20);
96 +
97 + #define VBG_IOCTL_DRIVER_VERSION_INFO \
98 + _IOWR('V', 0, struct vbg_ioctl_driver_version_info)
99 +
100 +
101 + /* IOCTL to perform a VMM Device request less than 1KB in size. */
102 + #define VBG_IOCTL_VMMDEV_REQUEST(s) _IOC(_IOC_READ | _IOC_WRITE, 'V', 2, s)
103 +
104 +
105 + /* IOCTL to perform a VMM Device request larger than 1KB. */
106 + #define VBG_IOCTL_VMMDEV_REQUEST_BIG _IOC(_IOC_READ | _IOC_WRITE, 'V', 3, 0)
107 +
108 +
109 + /** VBG_IOCTL_HGCM_CONNECT data structure. */
110 + struct vbg_ioctl_hgcm_connect {
111 + struct vbg_ioctl_hdr hdr;
112 + union {
113 + struct {
114 + struct vmmdev_hgcm_service_location loc;
115 + } in;
116 + struct {
117 + __u32 client_id;
118 + } out;
119 + } u;
120 + };
121 + VMMDEV_ASSERT_SIZE(vbg_ioctl_hgcm_connect, 24 + 132);
122 +
123 + #define VBG_IOCTL_HGCM_CONNECT \
124 + _IOWR('V', 4, struct vbg_ioctl_hgcm_connect)
125 +
126 +
127 + /** VBG_IOCTL_HGCM_DISCONNECT data structure. */
128 + struct vbg_ioctl_hgcm_disconnect {
129 + struct vbg_ioctl_hdr hdr;
130 + union {
131 + struct {
132 + __u32 client_id;
133 + } in;
134 + } u;
135 + };
136 + VMMDEV_ASSERT_SIZE(vbg_ioctl_hgcm_disconnect, 24 + 4);
137 +
138 + #define VBG_IOCTL_HGCM_DISCONNECT \
139 + _IOWR('V', 5, struct vbg_ioctl_hgcm_disconnect)
140 +
141 +
142 + /** VBG_IOCTL_HGCM_CALL data structure. */
143 + struct vbg_ioctl_hgcm_call {
144 + /** The header. */
145 + struct vbg_ioctl_hdr hdr;
146 + /** Input: The id of the caller. */
147 + __u32 client_id;
148 + /** Input: Function number. */
149 + __u32 function;
150 + /**
151 + * Input: How long to wait (milliseconds) for completion before
152 + * cancelling the call. Set to -1 to wait indefinitely.
153 + */
154 + __u32 timeout_ms;
155 + /** Interruptible flag, ignored for userspace calls. */
156 + __u8 interruptible;
157 + /** Explicit padding, MBZ. */
158 + __u8 reserved;
159 + /**
160 + * Input: How many parameters follow this structure.
161 + *
162 + * The parameters are either HGCMFunctionParameter64 or 32,
163 + * depending on whether we're receiving a 64-bit or 32-bit request.
164 + *
165 + * The current maximum is 61 parameters (given a 1KB max request size,
166 + * and a 64-bit parameter size of 16 bytes).
167 + */
168 + __u16 parm_count;
169 + /*
170 + * Parameters follow in form:
171 + * struct hgcm_function_parameter<32|64> parms[parm_count]
172 + */
173 + };
174 + VMMDEV_ASSERT_SIZE(vbg_ioctl_hgcm_call, 24 + 16);
175 +
176 + #define VBG_IOCTL_HGCM_CALL_32(s) _IOC(_IOC_READ | _IOC_WRITE, 'V', 6, s)
177 + #define VBG_IOCTL_HGCM_CALL_64(s) _IOC(_IOC_READ | _IOC_WRITE, 'V', 7, s)
178 + #if __BITS_PER_LONG == 64
179 + #define VBG_IOCTL_HGCM_CALL(s) VBG_IOCTL_HGCM_CALL_64(s)
180 + #else
181 + #define VBG_IOCTL_HGCM_CALL(s) VBG_IOCTL_HGCM_CALL_32(s)
182 + #endif
183 +
184 +
185 + /** VBG_IOCTL_LOG data structure. */
186 + struct vbg_ioctl_log {
187 + /** The header. */
188 + struct vbg_ioctl_hdr hdr;
189 + union {
190 + struct {
191 + /**
192 + * The log message, this may be zero terminated. If it
193 + * is not zero terminated then the length is determined
194 + * from the input size.
195 + */
196 + char msg[1];
197 + } in;
198 + } u;
199 + };
200 +
201 + #define VBG_IOCTL_LOG(s) _IOC(_IOC_READ | _IOC_WRITE, 'V', 9, s)
202 +
203 +
204 + /** VBG_IOCTL_WAIT_FOR_EVENTS data structure. */
205 + struct vbg_ioctl_wait_for_events {
206 + /** The header. */
207 + struct vbg_ioctl_hdr hdr;
208 + union {
209 + struct {
210 + /** Timeout in milliseconds. */
211 + __u32 timeout_ms;
212 + /** Events to wait for. */
213 + __u32 events;
214 + } in;
215 + struct {
216 + /** Events that occurred. */
217 + __u32 events;
218 + } out;
219 + } u;
220 + };
221 + VMMDEV_ASSERT_SIZE(vbg_ioctl_wait_for_events, 24 + 8);
222 +
223 + #define VBG_IOCTL_WAIT_FOR_EVENTS \
224 + _IOWR('V', 10, struct vbg_ioctl_wait_for_events)
225 +
226 +
227 + /*
228 + * IOCTL to VBoxGuest to interrupt (cancel) any pending
229 + * VBG_IOCTL_WAIT_FOR_EVENTS and return.
230 + *
231 + * Handled inside the vboxguest driver and not seen by the host at all.
232 + * After calling this, VBG_IOCTL_WAIT_FOR_EVENTS should no longer be called in
233 + * the same session. Any VBOXGUEST_IOCTL_WAITEVENT calls in the same session
234 + * done after calling this will directly exit with -EINTR.
235 + */
236 + #define VBG_IOCTL_INTERRUPT_ALL_WAIT_FOR_EVENTS \
237 + _IOWR('V', 11, struct vbg_ioctl_hdr)
238 +
239 +
240 + /** VBG_IOCTL_CHANGE_FILTER_MASK data structure. */
241 + struct vbg_ioctl_change_filter {
242 + /** The header. */
243 + struct vbg_ioctl_hdr hdr;
244 + union {
245 + struct {
246 + /** Flags to set. */
247 + __u32 or_mask;
248 + /** Flags to remove. */
249 + __u32 not_mask;
250 + } in;
251 + } u;
252 + };
253 + VMMDEV_ASSERT_SIZE(vbg_ioctl_change_filter, 24 + 8);
254 +
255 + /* IOCTL to VBoxGuest to control the event filter mask. */
256 + #define VBG_IOCTL_CHANGE_FILTER_MASK \
257 + _IOWR('V', 12, struct vbg_ioctl_change_filter)
258 +
259 +
260 + /** VBG_IOCTL_CHANGE_GUEST_CAPABILITIES data structure. */
261 + struct vbg_ioctl_set_guest_caps {
262 + /** The header. */
263 + struct vbg_ioctl_hdr hdr;
264 + union {
265 + struct {
266 + /** Capabilities to set (VMMDEV_GUEST_SUPPORTS_XXX). */
267 + __u32 or_mask;
268 + /** Capabilities to drop (VMMDEV_GUEST_SUPPORTS_XXX). */
269 + __u32 not_mask;
270 + } in;
271 + struct {
272 + /** Capabilities held by the session after the call. */
273 + __u32 session_caps;
274 + /** Capabilities for all the sessions after the call. */
275 + __u32 global_caps;
276 + } out;
277 + } u;
278 + };
279 + VMMDEV_ASSERT_SIZE(vbg_ioctl_set_guest_caps, 24 + 8);
280 +
281 + #define VBG_IOCTL_CHANGE_GUEST_CAPABILITIES \
282 + _IOWR('V', 14, struct vbg_ioctl_set_guest_caps)
283 +
284 +
285 + /** VBG_IOCTL_CHECK_BALLOON data structure. */
286 + struct vbg_ioctl_check_balloon {
287 + /** The header. */
288 + struct vbg_ioctl_hdr hdr;
289 + union {
290 + struct {
291 + /** The size of the balloon in chunks of 1MB. */
292 + __u32 balloon_chunks;
293 + /**
294 + * false = handled in R0, no further action required.
295 + * true = allocate balloon memory in R3.
296 + */
297 + __u8 handle_in_r3;
298 + /** Explicit padding, MBZ. */
299 + __u8 padding[3];
300 + } out;
301 + } u;
302 + };
303 + VMMDEV_ASSERT_SIZE(vbg_ioctl_check_balloon, 24 + 8);
304 +
305 + /*
306 + * IOCTL to check memory ballooning.
307 + *
308 + * The guest kernel module will ask the host for the current size of the
309 + * balloon and adjust the size. Or it will set handle_in_r3 = true and R3 is
310 + * responsible for allocating memory and calling VBG_IOCTL_CHANGE_BALLOON.
311 + */
312 + #define VBG_IOCTL_CHECK_BALLOON \
313 + _IOWR('V', 17, struct vbg_ioctl_check_balloon)
314 +
315 +
316 + /** VBG_IOCTL_WRITE_CORE_DUMP data structure. */
317 + struct vbg_ioctl_write_coredump {
318 + struct vbg_ioctl_hdr hdr;
319 + union {
320 + struct {
321 + __u32 flags; /** Flags (reserved, MBZ). */
322 + } in;
323 + } u;
324 + };
325 + VMMDEV_ASSERT_SIZE(vbg_ioctl_write_coredump, 24 + 4);
326 +
327 + #define VBG_IOCTL_WRITE_CORE_DUMP \
328 + _IOWR('V', 19, struct vbg_ioctl_write_coredump)
329 +
330 + #endif
+4
scripts/mod/devicetable-offsets.c
···
203 203 DEVID_FIELD(hda_device_id, rev_id);
204 204 DEVID_FIELD(hda_device_id, api_version);
205 205
206 + DEVID(sdw_device_id);
207 + DEVID_FIELD(sdw_device_id, mfg_id);
208 + DEVID_FIELD(sdw_device_id, part_id);
209 +
206 210 DEVID(fsl_mc_device_id);
207 211 DEVID_FIELD(fsl_mc_device_id, vendor);
208 212 DEVID_FIELD(fsl_mc_device_id, obj_type);
+15
scripts/mod/file2alias.c
···
1289 1289 }
1290 1290 ADD_TO_DEVTABLE("hdaudio", hda_device_id, do_hda_entry);
1291 1291
1292 + /* Looks like: sdw:mNpN */
1293 + static int do_sdw_entry(const char *filename, void *symval, char *alias)
1294 + {
1295 + DEF_FIELD(symval, sdw_device_id, mfg_id);
1296 + DEF_FIELD(symval, sdw_device_id, part_id);
1297 +
1298 + strcpy(alias, "sdw:");
1299 + ADD(alias, "m", mfg_id != 0, mfg_id);
1300 + ADD(alias, "p", part_id != 0, part_id);
1301 +
1302 + add_wildcard(alias);
1303 + return 1;
1304 + }
1305 + ADD_TO_DEVTABLE("sdw", sdw_device_id, do_sdw_entry);
1306 +
1292 1307 /* Looks like: fsl-mc:vNdN */
1293 1308 static int do_fsl_mc_entry(const char *filename, void *symval,
1294 1309 char *alias)
+1
security/Kconfig
···
154 154 bool "Harden memory copies between kernel and userspace"
155 155 depends on HAVE_HARDENED_USERCOPY_ALLOCATOR
156 156 select BUG
157 + imply STRICT_DEVMEM
157 158 help
158 159 This option checks for obviously wrong memory regions when
159 160 copying memory to/from the kernel (via copy_to_user() and
+22 -1
tools/hv/Makefile
···
7 7
8 8 CFLAGS += -D__EXPORTED_HEADERS__ -I../../include/uapi -I../../include
9 9
10 - all: hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon
10 + sbindir ?= /usr/sbin
11 + libexecdir ?= /usr/libexec
12 + sharedstatedir ?= /var/lib
13 +
14 + ALL_PROGRAMS := hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon
15 +
16 + ALL_SCRIPTS := hv_get_dhcp_info.sh hv_get_dns_info.sh hv_set_ifconfig.sh
17 +
18 + all: $(ALL_PROGRAMS)
19 +
11 20 %: %.c
12 21 $(CC) $(CFLAGS) -o $@ $^
13 22
14 23 clean:
15 24 $(RM) hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon
25 +
26 + install: all
27 + install -d -m 755 $(DESTDIR)$(sbindir); \
28 + install -d -m 755 $(DESTDIR)$(libexecdir)/hypervkvpd; \
29 + install -d -m 755 $(DESTDIR)$(sharedstatedir); \
30 + for program in $(ALL_PROGRAMS); do \
31 + install $$program -m 755 $(DESTDIR)$(sbindir); \
32 + done; \
33 + install -m 755 lsvmbus $(DESTDIR)$(sbindir); \
34 + for script in $(ALL_SCRIPTS); do \
35 + install $$script -m 755 $(DESTDIR)$(libexecdir)/hypervkvpd/$${script%.sh}; \
36 + done