Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-4.21-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
"Here is the big set of char and misc driver patches for 4.21-rc1.

Lots of different types of driver things in here, as this tree seems
to be the "collection of various driver subsystems not big enough to
have their own git tree" lately.

Anyway, some highlights of the changes in here:

- binderfs: is it a rule that all driver subsystems will eventually
grow to have their own filesystem? Binder now has one to handle the
use of it in containerized systems.

This was discussed at the Plumbers conference a few months ago and
knocked into mergeable shape very fast by Christian Brauner, who has
also signed up to be another binder maintainer, showing a distinct
lack of good judgement :)

- binder updates and fixes

- mei driver updates

- fpga driver updates and additions

- thunderbolt driver updates

- soundwire driver updates

- extcon driver updates

- nvmem driver updates

- hyper-v driver updates

- coresight driver updates

- pvpanic driver additions and reworking for more device support

- lp driver updates. Yes really, it's _finally_ moved to the proper
parallel port driver model, something I never thought I would see
happen. Good stuff.

- other tiny driver updates and fixes.

All of these have been in linux-next for a while with no reported
issues"

* tag 'char-misc-4.21-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (116 commits)
MAINTAINERS: add another Android binder maintainer
intel_th: msu: Fix an off-by-one in attribute store
stm class: Add a reference to the SyS-T document
stm class: Fix a module refcount leak in policy creation error path
char: lp: use new parport device model
char: lp: properly count the lp devices
char: lp: use first unused lp number while registering
char: lp: detach the device when parallel port is removed
char: lp: introduce list to save port number
bus: qcom: remove duplicated include from qcom-ebi2.c
VMCI: Use memdup_user() rather than duplicating its implementation
char/rtc: Use of_node_name_eq for node name comparisons
misc: mic: fix a DMA pool free failure
ptp: fix an IS_ERR() vs NULL check
genwqe: Fix size check
binder: implement binderfs
binder: fix use-after-free due to ksys_close() during fdget()
bus: fsl-mc: remove duplicated include files
bus: fsl-mc: explicitly define the fsl_mc_command endianness
misc: ti-st: make array read_ver_cmd static, shrinks object size
...

+4973 -917
+9
Documentation/ABI/testing/sysfs-bus-thunderbolt
··· 21 21 If a device is authorized automatically during boot its 22 22 boot attribute is set to 1. 23 23 24 + What: /sys/bus/thunderbolt/devices/.../domainX/iommu_dma_protection 25 + Date: Mar 2019 26 + KernelVersion: 4.21 27 + Contact: thunderbolt-software@lists.01.org 28 + Description: This attribute tells whether the system uses IOMMU 29 + for DMA protection. Value of 1 means IOMMU is used 0 means 30 + it is not (DMA protection is solely based on Thunderbolt 31 + security levels). 32 + 24 33 What: /sys/bus/thunderbolt/devices/.../domainX/security 25 34 Date: Sep 2017 26 35 KernelVersion: 4.13
+20
Documentation/admin-guide/thunderbolt.rst
··· 133 133 the device without a key or write a new key and write 1 to the 134 134 ``authorized`` file to get the new key stored on the device NVM. 135 135 136 + DMA protection utilizing IOMMU 137 + ------------------------------ 138 + Recent systems from 2018 and forward with Thunderbolt ports may natively 139 + support IOMMU. This means that Thunderbolt security is handled by an IOMMU 140 + so connected devices cannot access memory regions outside of what is 141 + allocated for them by drivers. When Linux is running on such system it 142 + automatically enables IOMMU if not enabled by the user already. These 143 + systems can be identified by reading ``1`` from 144 + ``/sys/bus/thunderbolt/devices/domainX/iommu_dma_protection`` attribute. 145 + 146 + The driver does not do anything special in this case but because DMA 147 + protection is handled by the IOMMU, security levels (if set) are 148 + redundant. For this reason some systems ship with security level set to 149 + ``none``. Other systems have security level set to ``user`` in order to 150 + support downgrade to older OS, so users who want to automatically 151 + authorize devices when IOMMU DMA protection is enabled can use the 152 + following ``udev`` rule:: 153 + 154 + ACTION=="add", SUBSYSTEM=="thunderbolt", ATTRS{iommu_dma_protection}=="1", ATTR{authorized}=="0", ATTR{authorized}="1" 155 + 136 156 Upgrading NVM on Thunderbolt device or host 137 157 ------------------------------------------- 138 158 Since most of the functionality is handled in firmware running on a
+57
Documentation/devicetree/bindings/firmware/intel,stratix10-svc.txt
··· 1 + Intel Service Layer Driver for Stratix10 SoC 2 + ============================================ 3 + Intel Stratix10 SoC is composed of a 64 bit quad-core ARM Cortex A53 hard 4 + processor system (HPS) and Secure Device Manager (SDM). When the FPGA is 5 + configured from HPS, there needs to be a way for HPS to notify SDM the 6 + location and size of the configuration data. Then SDM will get the 7 + configuration data from that location and perform the FPGA configuration. 8 + 9 + To meet the whole system security needs and support virtual machine requesting 10 + communication with SDM, only the secure world of software (EL3, Exception 11 + Layer 3) can interface with SDM. All software entities running on other 12 + exception layers must channel through the EL3 software whenever it needs 13 + service from SDM. 14 + 15 + Intel Stratix10 service layer driver, running at privileged exception level 16 + (EL1, Exception Layer 1), interfaces with the service providers and provides 17 + the services for FPGA configuration, QSPI, Crypto and warm reset. Service layer 18 + driver also manages secure monitor call (SMC) to communicate with secure monitor 19 + code running in EL3. 20 + 21 + Required properties: 22 + ------------------- 23 + The svc node has the following mandatory properties, must be located under 24 + the firmware node. 25 + 26 + - compatible: "intel,stratix10-svc" 27 + - method: smc or hvc 28 + smc - Secure Monitor Call 29 + hvc - Hypervisor Call 30 + - memory-region: 31 + phandle to the reserved memory node. 
See 32 + Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt 33 + for details 34 + 35 + Example: 36 + ------- 37 + 38 + reserved-memory { 39 + #address-cells = <2>; 40 + #size-cells = <2>; 41 + ranges; 42 + 43 + service_reserved: svcbuffer@0 { 44 + compatible = "shared-dma-pool"; 45 + reg = <0x0 0x0 0x0 0x1000000>; 46 + alignment = <0x1000>; 47 + no-map; 48 + }; 49 + }; 50 + 51 + firmware { 52 + svc { 53 + compatible = "intel,stratix10-svc"; 54 + method = "smc"; 55 + memory-region = <&service_reserved>; 56 + }; 57 + };
+17
Documentation/devicetree/bindings/fpga/intel-stratix10-soc-fpga-mgr.txt
··· 1 + Intel Stratix10 SoC FPGA Manager 2 + 3 + Required properties: 4 + The fpga_mgr node has the following mandatory property, must be located under 5 + firmware/svc node. 6 + 7 + - compatible : should contain "intel,stratix10-soc-fpga-mgr" 8 + 9 + Example: 10 + 11 + firmware { 12 + svc { 13 + fpga_mgr: fpga-mgr { 14 + compatible = "intel,stratix10-soc-fpga-mgr"; 15 + }; 16 + }; 17 + };
+29
Documentation/devicetree/bindings/misc/pvpanic-mmio.txt
··· 1 + * QEMU PVPANIC MMIO Configuration bindings 2 + 3 + QEMU's emulation / virtualization targets provide the following PVPANIC 4 + MMIO Configuration interface on the "virt" machine. 5 + type: 6 + 7 + - a read-write, 16-bit wide data register. 8 + 9 + QEMU exposes the data register to guests as memory mapped registers. 10 + 11 + Required properties: 12 + 13 + - compatible: "qemu,pvpanic-mmio". 14 + - reg: the MMIO region used by the device. 15 + * Bytes 0x0 Write panic event to the reg when guest OS panics. 16 + * Bytes 0x1 Reserved. 17 + 18 + Example: 19 + 20 + / { 21 + #size-cells = <0x2>; 22 + #address-cells = <0x2>; 23 + 24 + pvpanic-mmio@9060000 { 25 + compatible = "qemu,pvpanic-mmio"; 26 + reg = <0x0 0x9060000 0x0 0x2>; 27 + }; 28 + }; 29 +
+3
Documentation/devicetree/bindings/nvmem/amlogic-efuse.txt
··· 2 2 3 3 Required properties: 4 4 - compatible: should be "amlogic,meson-gxbb-efuse" 5 + - clocks: phandle to the efuse peripheral clock provided by the 6 + clock controller. 5 7 6 8 = Data cells = 7 9 Are child nodes of eFuse, bindings of which as described in ··· 13 11 14 12 efuse: efuse { 15 13 compatible = "amlogic,meson-gxbb-efuse"; 14 + clocks = <&clkc CLKID_EFUSE>; 16 15 #address-cells = <1>; 17 16 #size-cells = <1>; 18 17
+30
Documentation/driver-api/firmware/other_interfaces.rst
··· 13 13 .. kernel-doc:: drivers/firmware/edd.c 14 14 :internal: 15 15 16 + Intel Stratix10 SoC Service Layer 17 + --------------------------------- 18 + Some features of the Intel Stratix10 SoC require a level of privilege 19 + higher than the kernel is granted. Such secure features include 20 + FPGA programming. In terms of the ARMv8 architecture, the kernel runs 21 + at Exception Level 1 (EL1), access to the features requires 22 + Exception Level 3 (EL3). 23 + 24 + The Intel Stratix10 SoC service layer provides an in kernel API for 25 + drivers to request access to the secure features. The requests are queued 26 + and processed one by one. ARM’s SMCCC is used to pass the execution 27 + of the requests on to a secure monitor (EL3). 28 + 29 + .. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h 30 + :functions: stratix10_svc_command_code 31 + 32 + .. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h 33 + :functions: stratix10_svc_client_msg 34 + 35 + .. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h 36 + :functions: stratix10_svc_command_reconfig_payload 37 + 38 + .. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h 39 + :functions: stratix10_svc_cb_data 40 + 41 + .. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h 42 + :functions: stratix10_svc_client 43 + 44 + .. kernel-doc:: drivers/firmware/stratix10-svc.c 45 + :export:
+1
Documentation/trace/index.rst
··· 22 22 hwlat_detector 23 23 intel_th 24 24 stm 25 + sys-t
+3 -1
MAINTAINERS
··· 958 958 M: Todd Kjos <tkjos@android.com> 959 959 M: Martijn Coenen <maco@android.com> 960 960 M: Joel Fernandes <joel@joelfernandes.org> 961 + M: Christian Brauner <christian@brauner.io> 961 962 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git 962 963 L: devel@driverdev.osuosl.org 963 964 S: Supported ··· 1443 1442 1444 1443 ARM/CORESIGHT FRAMEWORK AND DRIVERS 1445 1444 M: Mathieu Poirier <mathieu.poirier@linaro.org> 1445 + R: Suzuki K Poulose <suzuki.poulose@arm.com> 1446 1446 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1447 1447 S: Maintained 1448 1448 F: drivers/hwtracing/coresight/* ··· 16244 16242 F: include/linux/vme* 16245 16243 16246 16244 VMWARE BALLOON DRIVER 16247 - M: Xavier Deguillard <xdeguillard@vmware.com> 16245 + M: Julien Freche <jfreche@vmware.com> 16248 16246 M: Nadav Amit <namit@vmware.com> 16249 16247 M: "VMware, Inc." <pv-drivers@vmware.com> 16250 16248 L: linux-kernel@vger.kernel.org
+33
arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
··· 24 24 #address-cells = <2>; 25 25 #size-cells = <2>; 26 26 27 + reserved-memory { 28 + #address-cells = <2>; 29 + #size-cells = <2>; 30 + ranges; 31 + 32 + service_reserved: svcbuffer@0 { 33 + compatible = "shared-dma-pool"; 34 + reg = <0x0 0x0 0x0 0x1000000>; 35 + alignment = <0x1000>; 36 + no-map; 37 + }; 38 + }; 39 + 27 40 cpus { 28 41 #address-cells = <1>; 29 42 #size-cells = <0>; ··· 105 92 device_type = "soc"; 106 93 interrupt-parent = <&intc>; 107 94 ranges = <0 0 0 0xffffffff>; 95 + 96 + base_fpga_region { 97 + #address-cells = <0x1>; 98 + #size-cells = <0x1>; 99 + 100 + compatible = "fpga-region"; 101 + fpga-mgr = <&fpga_mgr>; 102 + }; 108 103 109 104 clkmgr: clock-controller@ffd10000 { 110 105 compatible = "intel,stratix10-clkmgr"; ··· 557 536 clocks = <&qspi_clk>; 558 537 559 538 status = "disabled"; 539 + }; 540 + 541 + firmware { 542 + svc { 543 + compatible = "intel,stratix10-svc"; 544 + method = "smc"; 545 + memory-region = <&service_reserved>; 546 + 547 + fpga_mgr: fpga-mgr { 548 + compatible = "intel,stratix10-soc-fpga-mgr"; 549 + }; 550 + }; 560 551 }; 561 552 }; 562 553 };
+11
drivers/acpi/property.c
··· 24 24 acpi_object_type type, 25 25 const union acpi_object **obj); 26 26 27 + /* 28 + * The GUIDs here are made equivalent to each other in order to avoid extra 29 + * complexity in the properties handling code, with the caveat that the 30 + * kernel will accept certain combinations of GUID and properties that are 31 + * not defined without a warning. For instance if any of the properties 32 + * from different GUID appear in a property list of another, it will be 33 + * accepted by the kernel. Firmware validation tools should catch these. 34 + */ 27 35 static const guid_t prp_guids[] = { 28 36 /* ACPI _DSD device properties GUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */ 29 37 GUID_INIT(0xdaffd814, 0x6eba, 0x4d8c, ··· 39 31 /* Hotplug in D3 GUID: 6211e2c0-58a3-4af3-90e1-927a4e0c55a4 */ 40 32 GUID_INIT(0x6211e2c0, 0x58a3, 0x4af3, 41 33 0x90, 0xe1, 0x92, 0x7a, 0x4e, 0x0c, 0x55, 0xa4), 34 + /* External facing port GUID: efcc06cc-73ac-4bc3-bff0-76143807c389 */ 35 + GUID_INIT(0xefcc06cc, 0x73ac, 0x4bc3, 36 + 0xbf, 0xf0, 0x76, 0x14, 0x38, 0x07, 0xc3, 0x89), 42 37 }; 43 38 44 39 static const guid_t ads_guid =
+12
drivers/android/Kconfig
··· 20 20 Android process, using Binder to identify, invoke and pass arguments 21 21 between said processes. 22 22 23 + config ANDROID_BINDERFS 24 + bool "Android Binderfs filesystem" 25 + depends on ANDROID_BINDER_IPC 26 + default n 27 + ---help--- 28 + Binderfs is a pseudo-filesystem for the Android Binder IPC driver 29 + which can be mounted per-ipc namespace allowing to run multiple 30 + instances of Android. 31 + Each binderfs mount initially only contains a binder-control device. 32 + It can be used to dynamically allocate new binder IPC devices via 33 + ioctls. 34 + 23 35 config ANDROID_BINDER_DEVICES 24 36 string "Android Binder devices" 25 37 depends on ANDROID_BINDER_IPC
+1
drivers/android/Makefile
··· 1 1 ccflags-y += -I$(src) # needed for trace events 2 2 3 + obj-$(CONFIG_ANDROID_BINDERFS) += binderfs.o 3 4 obj-$(CONFIG_ANDROID_BINDER_IPC) += binder.o binder_alloc.o 4 5 obj-$(CONFIG_ANDROID_BINDER_IPC_SELFTEST) += binder_alloc_selftest.o
+131 -51
drivers/android/binder.c
··· 72 72 #include <linux/spinlock.h> 73 73 #include <linux/ratelimit.h> 74 74 #include <linux/syscalls.h> 75 + #include <linux/task_work.h> 75 76 76 77 #include <uapi/linux/android/binder.h> 77 78 78 79 #include <asm/cacheflush.h> 79 80 80 81 #include "binder_alloc.h" 82 + #include "binder_internal.h" 81 83 #include "binder_trace.h" 82 84 83 85 static HLIST_HEAD(binder_deferred_list); ··· 96 94 static struct dentry *binder_debugfs_dir_entry_proc; 97 95 static atomic_t binder_last_id; 98 96 99 - #define BINDER_DEBUG_ENTRY(name) \ 100 - static int binder_##name##_open(struct inode *inode, struct file *file) \ 101 - { \ 102 - return single_open(file, binder_##name##_show, inode->i_private); \ 103 - } \ 104 - \ 105 - static const struct file_operations binder_##name##_fops = { \ 106 - .owner = THIS_MODULE, \ 107 - .open = binder_##name##_open, \ 108 - .read = seq_read, \ 109 - .llseek = seq_lseek, \ 110 - .release = single_release, \ 111 - } 112 - 113 - static int binder_proc_show(struct seq_file *m, void *unused); 114 - BINDER_DEBUG_ENTRY(proc); 97 + static int proc_show(struct seq_file *m, void *unused); 98 + DEFINE_SHOW_ATTRIBUTE(proc); 115 99 116 100 /* This is only defined in include/asm-arm/sizes.h */ 117 101 #ifndef SZ_1K ··· 249 261 memset(e, 0, sizeof(*e)); 250 262 return e; 251 263 } 252 - 253 - struct binder_context { 254 - struct binder_node *binder_context_mgr_node; 255 - struct mutex context_mgr_node_lock; 256 - 257 - kuid_t binder_context_mgr_uid; 258 - const char *name; 259 - }; 260 - 261 - struct binder_device { 262 - struct hlist_node hlist; 263 - struct miscdevice miscdev; 264 - struct binder_context context; 265 - }; 266 264 267 265 /** 268 266 * struct binder_work - work enqueued on a worklist ··· 634 660 #define binder_proc_lock(proc) _binder_proc_lock(proc, __LINE__) 635 661 static void 636 662 _binder_proc_lock(struct binder_proc *proc, int line) 663 + __acquires(&proc->outer_lock) 637 664 { 638 665 binder_debug(BINDER_DEBUG_SPINLOCKS, 639 666 
"%s: line=%d\n", __func__, line); ··· 650 675 #define binder_proc_unlock(_proc) _binder_proc_unlock(_proc, __LINE__) 651 676 static void 652 677 _binder_proc_unlock(struct binder_proc *proc, int line) 678 + __releases(&proc->outer_lock) 653 679 { 654 680 binder_debug(BINDER_DEBUG_SPINLOCKS, 655 681 "%s: line=%d\n", __func__, line); ··· 666 690 #define binder_inner_proc_lock(proc) _binder_inner_proc_lock(proc, __LINE__) 667 691 static void 668 692 _binder_inner_proc_lock(struct binder_proc *proc, int line) 693 + __acquires(&proc->inner_lock) 669 694 { 670 695 binder_debug(BINDER_DEBUG_SPINLOCKS, 671 696 "%s: line=%d\n", __func__, line); ··· 682 705 #define binder_inner_proc_unlock(proc) _binder_inner_proc_unlock(proc, __LINE__) 683 706 static void 684 707 _binder_inner_proc_unlock(struct binder_proc *proc, int line) 708 + __releases(&proc->inner_lock) 685 709 { 686 710 binder_debug(BINDER_DEBUG_SPINLOCKS, 687 711 "%s: line=%d\n", __func__, line); ··· 698 720 #define binder_node_lock(node) _binder_node_lock(node, __LINE__) 699 721 static void 700 722 _binder_node_lock(struct binder_node *node, int line) 723 + __acquires(&node->lock) 701 724 { 702 725 binder_debug(BINDER_DEBUG_SPINLOCKS, 703 726 "%s: line=%d\n", __func__, line); ··· 714 735 #define binder_node_unlock(node) _binder_node_unlock(node, __LINE__) 715 736 static void 716 737 _binder_node_unlock(struct binder_node *node, int line) 738 + __releases(&node->lock) 717 739 { 718 740 binder_debug(BINDER_DEBUG_SPINLOCKS, 719 741 "%s: line=%d\n", __func__, line); ··· 731 751 #define binder_node_inner_lock(node) _binder_node_inner_lock(node, __LINE__) 732 752 static void 733 753 _binder_node_inner_lock(struct binder_node *node, int line) 754 + __acquires(&node->lock) __acquires(&node->proc->inner_lock) 734 755 { 735 756 binder_debug(BINDER_DEBUG_SPINLOCKS, 736 757 "%s: line=%d\n", __func__, line); 737 758 spin_lock(&node->lock); 738 759 if (node->proc) 739 760 binder_inner_proc_lock(node->proc); 761 + else 762 + /* 
annotation for sparse */ 763 + __acquire(&node->proc->inner_lock); 740 764 } 741 765 742 766 /** ··· 752 768 #define binder_node_inner_unlock(node) _binder_node_inner_unlock(node, __LINE__) 753 769 static void 754 770 _binder_node_inner_unlock(struct binder_node *node, int line) 771 + __releases(&node->lock) __releases(&node->proc->inner_lock) 755 772 { 756 773 struct binder_proc *proc = node->proc; 757 774 ··· 760 775 "%s: line=%d\n", __func__, line); 761 776 if (proc) 762 777 binder_inner_proc_unlock(proc); 778 + else 779 + /* annotation for sparse */ 780 + __release(&node->proc->inner_lock); 763 781 spin_unlock(&node->lock); 764 782 } 765 783 ··· 1372 1384 binder_node_inner_lock(node); 1373 1385 if (!node->proc) 1374 1386 spin_lock(&binder_dead_nodes_lock); 1387 + else 1388 + __acquire(&binder_dead_nodes_lock); 1375 1389 node->tmp_refs--; 1376 1390 BUG_ON(node->tmp_refs < 0); 1377 1391 if (!node->proc) 1378 1392 spin_unlock(&binder_dead_nodes_lock); 1393 + else 1394 + __release(&binder_dead_nodes_lock); 1379 1395 /* 1380 1396 * Call binder_dec_node() to check if all refcounts are 0 1381 1397 * and cleanup is needed. 
Calling with strong=0 and internal=1 ··· 1882 1890 */ 1883 1891 static struct binder_thread *binder_get_txn_from_and_acq_inner( 1884 1892 struct binder_transaction *t) 1893 + __acquires(&t->from->proc->inner_lock) 1885 1894 { 1886 1895 struct binder_thread *from; 1887 1896 1888 1897 from = binder_get_txn_from(t); 1889 - if (!from) 1898 + if (!from) { 1899 + __acquire(&from->proc->inner_lock); 1890 1900 return NULL; 1901 + } 1891 1902 binder_inner_proc_lock(from->proc); 1892 1903 if (t->from) { 1893 1904 BUG_ON(from != t->from); 1894 1905 return from; 1895 1906 } 1896 1907 binder_inner_proc_unlock(from->proc); 1908 + __acquire(&from->proc->inner_lock); 1897 1909 binder_thread_dec_tmpref(from); 1898 1910 return NULL; 1899 1911 } ··· 1969 1973 binder_thread_dec_tmpref(target_thread); 1970 1974 binder_free_transaction(t); 1971 1975 return; 1976 + } else { 1977 + __release(&target_thread->proc->inner_lock); 1972 1978 } 1973 1979 next = t->from_parent; 1974 1980 ··· 2158 2160 return (fixup_offset >= last_min_offset); 2159 2161 } 2160 2162 2163 + /** 2164 + * struct binder_task_work_cb - for deferred close 2165 + * 2166 + * @twork: callback_head for task work 2167 + * @fd: fd to close 2168 + * 2169 + * Structure to pass task work to be handled after 2170 + * returning from binder_ioctl() via task_work_add(). 2171 + */ 2172 + struct binder_task_work_cb { 2173 + struct callback_head twork; 2174 + struct file *file; 2175 + }; 2176 + 2177 + /** 2178 + * binder_do_fd_close() - close list of file descriptors 2179 + * @twork: callback head for task work 2180 + * 2181 + * It is not safe to call ksys_close() during the binder_ioctl() 2182 + * function if there is a chance that binder's own file descriptor 2183 + * might be closed. This is to meet the requirements for using 2184 + * fdget() (see comments for __fget_light()). Therefore use 2185 + * task_work_add() to schedule the close operation once we have 2186 + * returned from binder_ioctl(). 
This function is a callback 2187 + * for that mechanism and does the actual ksys_close() on the 2188 + * given file descriptor. 2189 + */ 2190 + static void binder_do_fd_close(struct callback_head *twork) 2191 + { 2192 + struct binder_task_work_cb *twcb = container_of(twork, 2193 + struct binder_task_work_cb, twork); 2194 + 2195 + fput(twcb->file); 2196 + kfree(twcb); 2197 + } 2198 + 2199 + /** 2200 + * binder_deferred_fd_close() - schedule a close for the given file-descriptor 2201 + * @fd: file-descriptor to close 2202 + * 2203 + * See comments in binder_do_fd_close(). This function is used to schedule 2204 + * a file-descriptor to be closed after returning from binder_ioctl(). 2205 + */ 2206 + static void binder_deferred_fd_close(int fd) 2207 + { 2208 + struct binder_task_work_cb *twcb; 2209 + 2210 + twcb = kzalloc(sizeof(*twcb), GFP_KERNEL); 2211 + if (!twcb) 2212 + return; 2213 + init_task_work(&twcb->twork, binder_do_fd_close); 2214 + __close_fd_get_file(fd, &twcb->file); 2215 + if (twcb->file) 2216 + task_work_add(current, &twcb->twork, true); 2217 + else 2218 + kfree(twcb); 2219 + } 2220 + 2161 2221 static void binder_transaction_buffer_release(struct binder_proc *proc, 2162 2222 struct binder_buffer *buffer, 2163 2223 binder_size_t *failed_at) ··· 2355 2299 } 2356 2300 fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset); 2357 2301 for (fd_index = 0; fd_index < fda->num_fds; fd_index++) 2358 - ksys_close(fd_array[fd_index]); 2302 + binder_deferred_fd_close(fd_array[fd_index]); 2359 2303 } break; 2360 2304 default: 2361 2305 pr_err("transaction release %d bad object type %x\n", ··· 2450 2394 fp->cookie = node->cookie; 2451 2395 if (node->proc) 2452 2396 binder_inner_proc_lock(node->proc); 2397 + else 2398 + __acquire(&node->proc->inner_lock); 2453 2399 binder_inc_node_nilocked(node, 2454 2400 fp->hdr.type == BINDER_TYPE_BINDER, 2455 2401 0, NULL); 2456 2402 if (node->proc) 2457 2403 binder_inner_proc_unlock(node->proc); 2404 + else 2405 + 
__release(&node->proc->inner_lock); 2458 2406 trace_binder_transaction_ref_to_node(t, node, &src_rdata); 2459 2407 binder_debug(BINDER_DEBUG_TRANSACTION, 2460 2408 " ref %d desc %d -> node %d u%016llx\n", ··· 2822 2762 binder_set_nice(in_reply_to->saved_priority); 2823 2763 target_thread = binder_get_txn_from_and_acq_inner(in_reply_to); 2824 2764 if (target_thread == NULL) { 2765 + /* annotation for sparse */ 2766 + __release(&target_thread->proc->inner_lock); 2825 2767 return_error = BR_DEAD_REPLY; 2826 2768 return_error_line = __LINE__; 2827 2769 goto err_dead_binder; ··· 3974 3912 } else if (ret) { 3975 3913 u32 *fdp = (u32 *)(t->buffer->data + fixup->offset); 3976 3914 3977 - ksys_close(*fdp); 3915 + binder_deferred_fd_close(*fdp); 3978 3916 } 3979 3917 list_del(&fixup->fixup_entry); 3980 3918 kfree(fixup); ··· 4226 4164 if (cmd == BR_DEAD_BINDER) 4227 4165 goto done; /* DEAD_BINDER notifications can cause transactions */ 4228 4166 } break; 4167 + default: 4168 + binder_inner_proc_unlock(proc); 4169 + pr_err("%d:%d: bad work type %d\n", 4170 + proc->pid, thread->pid, w->type); 4171 + break; 4229 4172 } 4230 4173 4231 4174 if (!t) ··· 4534 4467 spin_lock(&t->lock); 4535 4468 if (t->to_thread == thread) 4536 4469 send_reply = t; 4470 + } else { 4471 + __acquire(&t->lock); 4537 4472 } 4538 4473 thread->is_dead = true; 4539 4474 ··· 4564 4495 spin_unlock(&last_t->lock); 4565 4496 if (t) 4566 4497 spin_lock(&t->lock); 4498 + else 4499 + __acquire(&t->lock); 4567 4500 } 4501 + /* annotation for sparse, lock not acquired in last iteration above */ 4502 + __release(&t->lock); 4568 4503 4569 4504 /* 4570 4505 * If this thread used poll, make sure we remove the waitqueue ··· 5011 4938 proc->tsk = current->group_leader; 5012 4939 INIT_LIST_HEAD(&proc->todo); 5013 4940 proc->default_priority = task_nice(current); 5014 - binder_dev = container_of(filp->private_data, struct binder_device, 5015 - miscdev); 4941 + /* binderfs stashes devices in i_private */ 4942 + if 
(is_binderfs_device(nodp)) 4943 + binder_dev = nodp->i_private; 4944 + else 4945 + binder_dev = container_of(filp->private_data, 4946 + struct binder_device, miscdev); 5016 4947 proc->context = &binder_dev->context; 5017 4948 binder_alloc_init(&proc->alloc); 5018 4949 ··· 5044 4967 proc->debugfs_entry = debugfs_create_file(strbuf, 0444, 5045 4968 binder_debugfs_dir_entry_proc, 5046 4969 (void *)(unsigned long)proc->pid, 5047 - &binder_proc_fops); 4970 + &proc_fops); 5048 4971 } 5049 4972 5050 4973 return 0; ··· 5468 5391 for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n)) { 5469 5392 struct binder_node *node = rb_entry(n, struct binder_node, 5470 5393 rb_node); 5394 + if (!print_all && !node->has_async_transaction) 5395 + continue; 5396 + 5471 5397 /* 5472 5398 * take a temporary reference on the node so it 5473 5399 * survives and isn't removed from the tree ··· 5675 5595 } 5676 5596 5677 5597 5678 - static int binder_state_show(struct seq_file *m, void *unused) 5598 + static int state_show(struct seq_file *m, void *unused) 5679 5599 { 5680 5600 struct binder_proc *proc; 5681 5601 struct binder_node *node; ··· 5714 5634 return 0; 5715 5635 } 5716 5636 5717 - static int binder_stats_show(struct seq_file *m, void *unused) 5637 + static int stats_show(struct seq_file *m, void *unused) 5718 5638 { 5719 5639 struct binder_proc *proc; 5720 5640 ··· 5730 5650 return 0; 5731 5651 } 5732 5652 5733 - static int binder_transactions_show(struct seq_file *m, void *unused) 5653 + static int transactions_show(struct seq_file *m, void *unused) 5734 5654 { 5735 5655 struct binder_proc *proc; 5736 5656 ··· 5743 5663 return 0; 5744 5664 } 5745 5665 5746 - static int binder_proc_show(struct seq_file *m, void *unused) 5666 + static int proc_show(struct seq_file *m, void *unused) 5747 5667 { 5748 5668 struct binder_proc *itr; 5749 5669 int pid = (unsigned long)m->private; ··· 5786 5706 "\n" : " (incomplete)\n"); 5787 5707 } 5788 5708 5789 - static int 
binder_transaction_log_show(struct seq_file *m, void *unused) 5709 + static int transaction_log_show(struct seq_file *m, void *unused) 5790 5710 { 5791 5711 struct binder_transaction_log *log = m->private; 5792 5712 unsigned int log_cur = atomic_read(&log->cur); ··· 5807 5727 return 0; 5808 5728 } 5809 5729 5810 - static const struct file_operations binder_fops = { 5730 + const struct file_operations binder_fops = { 5811 5731 .owner = THIS_MODULE, 5812 5732 .poll = binder_poll, 5813 5733 .unlocked_ioctl = binder_ioctl, ··· 5818 5738 .release = binder_release, 5819 5739 }; 5820 5740 5821 - BINDER_DEBUG_ENTRY(state); 5822 - BINDER_DEBUG_ENTRY(stats); 5823 - BINDER_DEBUG_ENTRY(transactions); 5824 - BINDER_DEBUG_ENTRY(transaction_log); 5741 + DEFINE_SHOW_ATTRIBUTE(state); 5742 + DEFINE_SHOW_ATTRIBUTE(stats); 5743 + DEFINE_SHOW_ATTRIBUTE(transactions); 5744 + DEFINE_SHOW_ATTRIBUTE(transaction_log); 5825 5745 5826 5746 static int __init init_binder_device(const char *name) 5827 5747 { ··· 5875 5795 0444, 5876 5796 binder_debugfs_dir_entry_root, 5877 5797 NULL, 5878 - &binder_state_fops); 5798 + &state_fops); 5879 5799 debugfs_create_file("stats", 5880 5800 0444, 5881 5801 binder_debugfs_dir_entry_root, 5882 5802 NULL, 5883 - &binder_stats_fops); 5803 + &stats_fops); 5884 5804 debugfs_create_file("transactions", 5885 5805 0444, 5886 5806 binder_debugfs_dir_entry_root, 5887 5807 NULL, 5888 - &binder_transactions_fops); 5808 + &transactions_fops); 5889 5809 debugfs_create_file("transaction_log", 5890 5810 0444, 5891 5811 binder_debugfs_dir_entry_root, 5892 5812 &binder_transaction_log, 5893 - &binder_transaction_log_fops); 5813 + &transaction_log_fops); 5894 5814 debugfs_create_file("failed_transaction_log", 5895 5815 0444, 5896 5816 binder_debugfs_dir_entry_root, 5897 5817 &binder_transaction_log_failed, 5898 - &binder_transaction_log_fops); 5818 + &transaction_log_fops); 5899 5819 } 5900 5820 5901 5821 /*
+1
drivers/android/binder_alloc.c
··· 939 939 struct list_lru_one *lru, 940 940 spinlock_t *lock, 941 941 void *cb_arg) 942 + __must_hold(lock) 942 943 { 943 944 struct mm_struct *mm = NULL; 944 945 struct binder_lru_page *page = container_of(item,
+10 -10
drivers/android/binder_alloc.h
··· 30 30 * struct binder_buffer - buffer used for binder transactions 31 31 * @entry: entry alloc->buffers 32 32 * @rb_node: node for allocated_buffers/free_buffers rb trees 33 - * @free: true if buffer is free 34 - * @allow_user_free: describe the second member of struct blah, 35 - * @async_transaction: describe the second member of struct blah, 36 - * @debug_id: describe the second member of struct blah, 37 - * @transaction: describe the second member of struct blah, 38 - * @target_node: describe the second member of struct blah, 39 - * @data_size: describe the second member of struct blah, 40 - * @offsets_size: describe the second member of struct blah, 41 - * @extra_buffers_size: describe the second member of struct blah, 42 - * @data:i describe the second member of struct blah, 33 + * @free: %true if buffer is free 34 + * @allow_user_free: %true if user is allowed to free buffer 35 + * @async_transaction: %true if buffer is in use for an async txn 36 + * @debug_id: unique ID for debugging 37 + * @transaction: pointer to associated struct binder_transaction 38 + * @target_node: struct binder_node associated with this buffer 39 + * @data_size: size of @transaction data 40 + * @offsets_size: size of array of offsets 41 + * @extra_buffers_size: size of space for other objects (like sg lists) 42 + * @data: pointer to base of buffer space 43 43 * 44 44 * Bookkeeping structure for binder transaction buffers 45 45 */
+49
drivers/android/binder_internal.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef _LINUX_BINDER_INTERNAL_H 4 + #define _LINUX_BINDER_INTERNAL_H 5 + 6 + #include <linux/export.h> 7 + #include <linux/fs.h> 8 + #include <linux/list.h> 9 + #include <linux/miscdevice.h> 10 + #include <linux/mutex.h> 11 + #include <linux/stddef.h> 12 + #include <linux/types.h> 13 + #include <linux/uidgid.h> 14 + 15 + struct binder_context { 16 + struct binder_node *binder_context_mgr_node; 17 + struct mutex context_mgr_node_lock; 18 + kuid_t binder_context_mgr_uid; 19 + const char *name; 20 + }; 21 + 22 + /** 23 + * struct binder_device - information about a binder device node 24 + * @hlist: list of binder devices (only used for devices requested via 25 + * CONFIG_ANDROID_BINDER_DEVICES) 26 + * @miscdev: information about a binder character device node 27 + * @context: binder context information 28 + * @binderfs_inode: This is the inode of the root dentry of the super block 29 + * belonging to a binderfs mount. 30 + */ 31 + struct binder_device { 32 + struct hlist_node hlist; 33 + struct miscdevice miscdev; 34 + struct binder_context context; 35 + struct inode *binderfs_inode; 36 + }; 37 + 38 + extern const struct file_operations binder_fops; 39 + 40 + #ifdef CONFIG_ANDROID_BINDERFS 41 + extern bool is_binderfs_device(const struct inode *inode); 42 + #else 43 + static inline bool is_binderfs_device(const struct inode *inode) 44 + { 45 + return false; 46 + } 47 + #endif 48 + 49 + #endif /* _LINUX_BINDER_INTERNAL_H */
+544
drivers/android/binderfs.c
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #include <linux/compiler_types.h> 4 + #include <linux/errno.h> 5 + #include <linux/fs.h> 6 + #include <linux/fsnotify.h> 7 + #include <linux/gfp.h> 8 + #include <linux/idr.h> 9 + #include <linux/init.h> 10 + #include <linux/ipc_namespace.h> 11 + #include <linux/kdev_t.h> 12 + #include <linux/kernel.h> 13 + #include <linux/list.h> 14 + #include <linux/magic.h> 15 + #include <linux/major.h> 16 + #include <linux/miscdevice.h> 17 + #include <linux/module.h> 18 + #include <linux/mutex.h> 19 + #include <linux/mount.h> 20 + #include <linux/parser.h> 21 + #include <linux/radix-tree.h> 22 + #include <linux/sched.h> 23 + #include <linux/slab.h> 24 + #include <linux/spinlock_types.h> 25 + #include <linux/stddef.h> 26 + #include <linux/string.h> 27 + #include <linux/types.h> 28 + #include <linux/uaccess.h> 29 + #include <linux/user_namespace.h> 30 + #include <linux/xarray.h> 31 + #include <uapi/asm-generic/errno-base.h> 32 + #include <uapi/linux/android/binder.h> 33 + #include <uapi/linux/android/binder_ctl.h> 34 + 35 + #include "binder_internal.h" 36 + 37 + #define FIRST_INODE 1 38 + #define SECOND_INODE 2 39 + #define INODE_OFFSET 3 40 + #define INTSTRLEN 21 41 + #define BINDERFS_MAX_MINOR (1U << MINORBITS) 42 + 43 + static struct vfsmount *binderfs_mnt; 44 + 45 + static dev_t binderfs_dev; 46 + static DEFINE_MUTEX(binderfs_minors_mutex); 47 + static DEFINE_IDA(binderfs_minors); 48 + 49 + /** 50 + * binderfs_info - information about a binderfs mount 51 + * @ipc_ns: The ipc namespace the binderfs mount belongs to. 52 + * @control_dentry: This records the dentry of this binderfs mount 53 + * binder-control device. 54 + * @root_uid: uid that needs to be used when a new binder device is 55 + * created. 56 + * @root_gid: gid that needs to be used when a new binder device is 57 + * created. 
58 + */ 59 + struct binderfs_info { 60 + struct ipc_namespace *ipc_ns; 61 + struct dentry *control_dentry; 62 + kuid_t root_uid; 63 + kgid_t root_gid; 64 + 65 + }; 66 + 67 + static inline struct binderfs_info *BINDERFS_I(const struct inode *inode) 68 + { 69 + return inode->i_sb->s_fs_info; 70 + } 71 + 72 + bool is_binderfs_device(const struct inode *inode) 73 + { 74 + if (inode->i_sb->s_magic == BINDERFS_SUPER_MAGIC) 75 + return true; 76 + 77 + return false; 78 + } 79 + 80 + /** 81 + * binderfs_binder_device_create - allocate inode from super block of a 82 + * binderfs mount 83 + * @ref_inode: inode from which the super block will be taken 84 + * @userp: userspace buffer to copy information about the new device to 85 + * @req: struct binderfs_device as copied from userspace 86 + * 87 + * This function allocates a new binder_device and reserves a new minor 88 + * number for it. 89 + * Minor numbers are limited and tracked globally in binderfs_minors. The 90 + * function will stash a struct binder_device for the specific binder 91 + * device in i_private of the inode. 92 + * It will go on to allocate a new inode from the super block of the 93 + * filesystem mount, stash a struct binder_device in its i_private field 94 + * and attach a dentry to that inode. 95 + * 96 + * Return: 0 on success, negative errno on failure 97 + */ 98 + static int binderfs_binder_device_create(struct inode *ref_inode, 99 + struct binderfs_device __user *userp, 100 + struct binderfs_device *req) 101 + { 102 + int minor, ret; 103 + struct dentry *dentry, *dup, *root; 104 + struct binder_device *device; 105 + size_t name_len = BINDERFS_MAX_NAME + 1; 106 + char *name = NULL; 107 + struct inode *inode = NULL; 108 + struct super_block *sb = ref_inode->i_sb; 109 + struct binderfs_info *info = sb->s_fs_info; 110 + 111 + /* Reserve a new minor number for the new device.
*/ 112 + mutex_lock(&binderfs_minors_mutex); 113 + minor = ida_alloc_max(&binderfs_minors, BINDERFS_MAX_MINOR, GFP_KERNEL); 114 + mutex_unlock(&binderfs_minors_mutex); 115 + if (minor < 0) 116 + return minor; 117 + 118 + ret = -ENOMEM; 119 + device = kzalloc(sizeof(*device), GFP_KERNEL); 120 + if (!device) 121 + goto err; 122 + 123 + inode = new_inode(sb); 124 + if (!inode) 125 + goto err; 126 + 127 + inode->i_ino = minor + INODE_OFFSET; 128 + inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode); 129 + init_special_inode(inode, S_IFCHR | 0600, 130 + MKDEV(MAJOR(binderfs_dev), minor)); 131 + inode->i_fop = &binder_fops; 132 + inode->i_uid = info->root_uid; 133 + inode->i_gid = info->root_gid; 134 + 135 + name = kmalloc(name_len, GFP_KERNEL); 136 + if (!name) 137 + goto err; 138 + 139 + strscpy(name, req->name, name_len); 140 + 141 + device->binderfs_inode = inode; 142 + device->context.binder_context_mgr_uid = INVALID_UID; 143 + device->context.name = name; 144 + device->miscdev.name = name; 145 + device->miscdev.minor = minor; 146 + mutex_init(&device->context.context_mgr_node_lock); 147 + 148 + req->major = MAJOR(binderfs_dev); 149 + req->minor = minor; 150 + 151 + ret = copy_to_user(userp, req, sizeof(*req)); 152 + if (ret) { 153 + ret = -EFAULT; 154 + goto err; 155 + } 156 + 157 + root = sb->s_root; 158 + inode_lock(d_inode(root)); 159 + dentry = d_alloc_name(root, name); 160 + if (!dentry) { 161 + inode_unlock(d_inode(root)); 162 + ret = -ENOMEM; 163 + goto err; 164 + } 165 + 166 + /* Verify that the name userspace gave us is not already in use. 
*/ 167 + dup = d_lookup(root, &dentry->d_name); 168 + if (dup) { 169 + if (d_really_is_positive(dup)) { 170 + dput(dup); 171 + dput(dentry); 172 + inode_unlock(d_inode(root)); 173 + ret = -EEXIST; 174 + goto err; 175 + } 176 + dput(dup); 177 + } 178 + 179 + inode->i_private = device; 180 + d_add(dentry, inode); 181 + fsnotify_create(root->d_inode, dentry); 182 + inode_unlock(d_inode(root)); 183 + 184 + return 0; 185 + 186 + err: 187 + kfree(name); 188 + kfree(device); 189 + mutex_lock(&binderfs_minors_mutex); 190 + ida_free(&binderfs_minors, minor); 191 + mutex_unlock(&binderfs_minors_mutex); 192 + iput(inode); 193 + 194 + return ret; 195 + } 196 + 197 + /** 198 + * binder_ctl_ioctl - handle binder device node allocation requests 199 + * 200 + * The request handler for the binder-control device. All requests operate on 201 + * the binderfs mount the binder-control device resides in: 202 + * - BINDER_CTL_ADD 203 + * Allocate a new binder device. 204 + * 205 + * Return: 0 on success, negative errno on failure 206 + */ 207 + static long binder_ctl_ioctl(struct file *file, unsigned int cmd, 208 + unsigned long arg) 209 + { 210 + int ret = -EINVAL; 211 + struct inode *inode = file_inode(file); 212 + struct binderfs_device __user *device = (struct binderfs_device __user *)arg; 213 + struct binderfs_device device_req; 214 + 215 + switch (cmd) { 216 + case BINDER_CTL_ADD: 217 + ret = copy_from_user(&device_req, device, sizeof(device_req)); 218 + if (ret) { 219 + ret = -EFAULT; 220 + break; 221 + } 222 + 223 + ret = binderfs_binder_device_create(inode, device, &device_req); 224 + break; 225 + default: 226 + break; 227 + } 228 + 229 + return ret; 230 + } 231 + 232 + static void binderfs_evict_inode(struct inode *inode) 233 + { 234 + struct binder_device *device = inode->i_private; 235 + 236 + clear_inode(inode); 237 + 238 + if (!device) 239 + return; 240 + 241 + mutex_lock(&binderfs_minors_mutex); 242 + ida_free(&binderfs_minors, device->miscdev.minor); 243 +
mutex_unlock(&binderfs_minors_mutex); 244 + 245 + kfree(device->context.name); 246 + kfree(device); 247 + } 248 + 249 + static const struct super_operations binderfs_super_ops = { 250 + .statfs = simple_statfs, 251 + .evict_inode = binderfs_evict_inode, 252 + }; 253 + 254 + static int binderfs_rename(struct inode *old_dir, struct dentry *old_dentry, 255 + struct inode *new_dir, struct dentry *new_dentry, 256 + unsigned int flags) 257 + { 258 + struct inode *inode = d_inode(old_dentry); 259 + 260 + /* binderfs doesn't support directories. */ 261 + if (d_is_dir(old_dentry)) 262 + return -EPERM; 263 + 264 + if (flags & ~RENAME_NOREPLACE) 265 + return -EINVAL; 266 + 267 + if (!simple_empty(new_dentry)) 268 + return -ENOTEMPTY; 269 + 270 + if (d_really_is_positive(new_dentry)) 271 + simple_unlink(new_dir, new_dentry); 272 + 273 + old_dir->i_ctime = old_dir->i_mtime = new_dir->i_ctime = 274 + new_dir->i_mtime = inode->i_ctime = current_time(old_dir); 275 + 276 + return 0; 277 + } 278 + 279 + static int binderfs_unlink(struct inode *dir, struct dentry *dentry) 280 + { 281 + /* 282 + * The control dentry is only ever touched during mount so checking it 283 + * here should not require us to take the lock. 284 + */ 285 + if (BINDERFS_I(dir)->control_dentry == dentry) 286 + return -EPERM; 287 + 288 + return simple_unlink(dir, dentry); 289 + } 290 + 291 + static const struct file_operations binder_ctl_fops = { 292 + .owner = THIS_MODULE, 293 + .open = nonseekable_open, 294 + .unlocked_ioctl = binder_ctl_ioctl, 295 + .compat_ioctl = binder_ctl_ioctl, 296 + .llseek = noop_llseek, 297 + }; 298 + 299 + /** 300 + * binderfs_binder_ctl_create - create a new binder-control device 301 + * @sb: super block of the binderfs mount 302 + * 303 + * This function creates a new binder-control device node in the binderfs mount 304 + * referred to by @sb.
305 + * 306 + * Return: 0 on success, negative errno on failure 307 + */ 308 + static int binderfs_binder_ctl_create(struct super_block *sb) 309 + { 310 + int minor, ret; 311 + struct dentry *dentry; 312 + struct binder_device *device; 313 + struct inode *inode = NULL; 314 + struct dentry *root = sb->s_root; 315 + struct binderfs_info *info = sb->s_fs_info; 316 + 317 + device = kzalloc(sizeof(*device), GFP_KERNEL); 318 + if (!device) 319 + return -ENOMEM; 320 + 321 + inode_lock(d_inode(root)); 322 + 323 + /* If we have already created a binder-control node, return. */ 324 + if (info->control_dentry) { 325 + ret = 0; 326 + goto out; 327 + } 328 + 329 + ret = -ENOMEM; 330 + inode = new_inode(sb); 331 + if (!inode) 332 + goto out; 333 + 334 + /* Reserve a new minor number for the new device. */ 335 + mutex_lock(&binderfs_minors_mutex); 336 + minor = ida_alloc_max(&binderfs_minors, BINDERFS_MAX_MINOR, GFP_KERNEL); 337 + mutex_unlock(&binderfs_minors_mutex); 338 + if (minor < 0) { 339 + ret = minor; 340 + goto out; 341 + } 342 + 343 + inode->i_ino = SECOND_INODE; 344 + inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode); 345 + init_special_inode(inode, S_IFCHR | 0600, 346 + MKDEV(MAJOR(binderfs_dev), minor)); 347 + inode->i_fop = &binder_ctl_fops; 348 + inode->i_uid = info->root_uid; 349 + inode->i_gid = info->root_gid; 350 + 351 + device->binderfs_inode = inode; 352 + device->miscdev.minor = minor; 353 + 354 + dentry = d_alloc_name(root, "binder-control"); 355 + if (!dentry) 356 + goto out; 357 + 358 + inode->i_private = device; 359 + info->control_dentry = dentry; 360 + d_add(dentry, inode); 361 + inode_unlock(d_inode(root)); 362 + 363 + return 0; 364 + 365 + out: 366 + inode_unlock(d_inode(root)); 367 + kfree(device); 368 + iput(inode); 369 + 370 + return ret; 371 + } 372 + 373 + static const struct inode_operations binderfs_dir_inode_operations = { 374 + .lookup = simple_lookup, 375 + .rename = binderfs_rename, 376 + .unlink = binderfs_unlink, 377 
+ }; 378 + 379 + static int binderfs_fill_super(struct super_block *sb, void *data, int silent) 380 + { 381 + struct binderfs_info *info; 382 + int ret = -ENOMEM; 383 + struct inode *inode = NULL; 384 + struct ipc_namespace *ipc_ns = sb->s_fs_info; 385 + 386 + get_ipc_ns(ipc_ns); 387 + 388 + sb->s_blocksize = PAGE_SIZE; 389 + sb->s_blocksize_bits = PAGE_SHIFT; 390 + 391 + /* 392 + * The binderfs filesystem can be mounted by userns root in a 393 + * non-initial userns. By default such mounts have the SB_I_NODEV flag 394 + * set in s_iflags to prevent security issues where userns root can 395 + * just create random device nodes via mknod() since it owns the 396 + * filesystem mount. But binderfs does not allow creating any files, 397 + * including device nodes. The only way to create binder device nodes 398 + * is through the binder-control device, which userns root is explicitly 399 + * allowed to use. So removing the SB_I_NODEV flag from s_iflags is both 400 + * necessary and safe. 401 + */ 402 + sb->s_iflags &= ~SB_I_NODEV; 403 + sb->s_iflags |= SB_I_NOEXEC; 404 + sb->s_magic = BINDERFS_SUPER_MAGIC; 405 + sb->s_op = &binderfs_super_ops; 406 + sb->s_time_gran = 1; 407 + 408 + info = kzalloc(sizeof(struct binderfs_info), GFP_KERNEL); 409 + if (!info) 410 + goto err_without_dentry; 411 + 412 + info->ipc_ns = ipc_ns; 413 + info->root_gid = make_kgid(sb->s_user_ns, 0); 414 + if (!gid_valid(info->root_gid)) 415 + info->root_gid = GLOBAL_ROOT_GID; 416 + info->root_uid = make_kuid(sb->s_user_ns, 0); 417 + if (!uid_valid(info->root_uid)) 418 + info->root_uid = GLOBAL_ROOT_UID; 419 + 420 + sb->s_fs_info = info; 421 + 422 + inode = new_inode(sb); 423 + if (!inode) 424 + goto err_without_dentry; 425 + 426 + inode->i_ino = FIRST_INODE; 427 + inode->i_fop = &simple_dir_operations; 428 + inode->i_mode = S_IFDIR | 0755; 429 + inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode); 430 + inode->i_op = &binderfs_dir_inode_operations; 431 + set_nlink(inode, 2); 432 +
433 + sb->s_root = d_make_root(inode); 434 + if (!sb->s_root) 435 + goto err_without_dentry; 436 + 437 + ret = binderfs_binder_ctl_create(sb); 438 + if (ret) 439 + goto err_with_dentry; 440 + 441 + return 0; 442 + 443 + err_with_dentry: 444 + dput(sb->s_root); 445 + sb->s_root = NULL; 446 + 447 + err_without_dentry: 448 + put_ipc_ns(ipc_ns); 449 + iput(inode); 450 + kfree(info); 451 + 452 + return ret; 453 + } 454 + 455 + static int binderfs_test_super(struct super_block *sb, void *data) 456 + { 457 + struct binderfs_info *info = sb->s_fs_info; 458 + 459 + if (info) 460 + return info->ipc_ns == data; 461 + 462 + return 0; 463 + } 464 + 465 + static int binderfs_set_super(struct super_block *sb, void *data) 466 + { 467 + sb->s_fs_info = data; 468 + return set_anon_super(sb, NULL); 469 + } 470 + 471 + static struct dentry *binderfs_mount(struct file_system_type *fs_type, 472 + int flags, const char *dev_name, 473 + void *data) 474 + { 475 + struct super_block *sb; 476 + struct ipc_namespace *ipc_ns = current->nsproxy->ipc_ns; 477 + 478 + if (!ns_capable(ipc_ns->user_ns, CAP_SYS_ADMIN)) 479 + return ERR_PTR(-EPERM); 480 + 481 + sb = sget_userns(fs_type, binderfs_test_super, binderfs_set_super, 482 + flags, ipc_ns->user_ns, ipc_ns); 483 + if (IS_ERR(sb)) 484 + return ERR_CAST(sb); 485 + 486 + if (!sb->s_root) { 487 + int ret = binderfs_fill_super(sb, data, flags & SB_SILENT ? 
1 : 0); 488 + if (ret) { 489 + deactivate_locked_super(sb); 490 + return ERR_PTR(ret); 491 + } 492 + 493 + sb->s_flags |= SB_ACTIVE; 494 + } 495 + 496 + return dget(sb->s_root); 497 + } 498 + 499 + static void binderfs_kill_super(struct super_block *sb) 500 + { 501 + struct binderfs_info *info = sb->s_fs_info; 502 + 503 + if (info && info->ipc_ns) 504 + put_ipc_ns(info->ipc_ns); 505 + 506 + kfree(info); 507 + kill_litter_super(sb); 508 + } 509 + 510 + static struct file_system_type binder_fs_type = { 511 + .name = "binder", 512 + .mount = binderfs_mount, 513 + .kill_sb = binderfs_kill_super, 514 + .fs_flags = FS_USERNS_MOUNT, 515 + }; 516 + 517 + static int __init init_binderfs(void) 518 + { 519 + int ret; 520 + 521 + /* Allocate new major number for binderfs. */ 522 + ret = alloc_chrdev_region(&binderfs_dev, 0, BINDERFS_MAX_MINOR, 523 + "binder"); 524 + if (ret) 525 + return ret; 526 + 527 + ret = register_filesystem(&binder_fs_type); 528 + if (ret) { 529 + unregister_chrdev_region(binderfs_dev, BINDERFS_MAX_MINOR); 530 + return ret; 531 + } 532 + 533 + binderfs_mnt = kern_mount(&binder_fs_type); 534 + if (IS_ERR(binderfs_mnt)) { 535 + ret = PTR_ERR(binderfs_mnt); 536 + binderfs_mnt = NULL; 537 + unregister_filesystem(&binder_fs_type); 538 + unregister_chrdev_region(binderfs_dev, BINDERFS_MAX_MINOR); 539 + } 540 + 541 + return ret; 542 + } 543 + 544 + device_initcall(init_binderfs);
-1
drivers/bus/fsl-mc/dpbp.c
··· 5 5 */ 6 6 #include <linux/kernel.h> 7 7 #include <linux/fsl/mc.h> 8 - #include <linux/fsl/mc.h> 9 8 10 9 #include "fsl-mc-private.h" 11 10
-1
drivers/bus/fsl-mc/dpcon.c
··· 5 5 */ 6 6 #include <linux/kernel.h> 7 7 #include <linux/fsl/mc.h> 8 - #include <linux/fsl/mc.h> 9 8 10 9 #include "fsl-mc-private.h" 11 10
-1
drivers/bus/qcom-ebi2.c
··· 21 21 #include <linux/of.h> 22 22 #include <linux/of_platform.h> 23 23 #include <linux/init.h> 24 - #include <linux/io.h> 25 24 #include <linux/slab.h> 26 25 #include <linux/platform_device.h> 27 26 #include <linux/bitops.h>
+148 -130
drivers/char/lp.c
··· 46 46 * lp=auto (assign lp devices to all ports that 47 47 * have printers attached, as determined 48 48 * by the IEEE-1284 autoprobe) 49 - * 50 - * lp=reset (reset the printer during 49 + * 50 + * lp=reset (reset the printer during 51 51 * initialisation) 52 52 * 53 53 * lp=off (disable the printer driver entirely) ··· 141 141 142 142 static DEFINE_MUTEX(lp_mutex); 143 143 static struct lp_struct lp_table[LP_NO]; 144 + static int port_num[LP_NO]; 144 145 145 146 static unsigned int lp_count = 0; 146 147 static struct class *lp_class; ··· 167 166 static void lp_claim_parport_or_block(struct lp_struct *this_lp) 168 167 { 169 168 if (!test_and_set_bit(LP_PARPORT_CLAIMED, &this_lp->bits)) { 170 - parport_claim_or_block (this_lp->dev); 169 + parport_claim_or_block(this_lp->dev); 171 170 } 172 171 } 173 172 ··· 175 174 static void lp_release_parport(struct lp_struct *this_lp) 176 175 { 177 176 if (test_and_clear_bit(LP_PARPORT_CLAIMED, &this_lp->bits)) { 178 - parport_release (this_lp->dev); 177 + parport_release(this_lp->dev); 179 178 } 180 179 } 181 180 ··· 185 184 { 186 185 struct lp_struct *this_lp = (struct lp_struct *)handle; 187 186 set_bit(LP_PREEMPT_REQUEST, &this_lp->bits); 188 - return (1); 187 + return 1; 189 188 } 190 189 191 190 192 - /* 191 + /* 193 192 * Try to negotiate to a new mode; if unsuccessful negotiate to 194 193 * compatibility mode. Return the mode we ended up in. 
195 194 */ 196 - static int lp_negotiate(struct parport * port, int mode) 195 + static int lp_negotiate(struct parport *port, int mode) 197 196 { 198 - if (parport_negotiate (port, mode) != 0) { 197 + if (parport_negotiate(port, mode) != 0) { 199 198 mode = IEEE1284_MODE_COMPAT; 200 - parport_negotiate (port, mode); 199 + parport_negotiate(port, mode); 201 200 } 202 201 203 - return (mode); 202 + return mode; 204 203 } 205 204 206 205 static int lp_reset(int minor) 207 206 { 208 207 int retval; 209 - lp_claim_parport_or_block (&lp_table[minor]); 208 + lp_claim_parport_or_block(&lp_table[minor]); 210 209 w_ctr(minor, LP_PSELECP); 211 - udelay (LP_DELAY); 210 + udelay(LP_DELAY); 212 211 w_ctr(minor, LP_PSELECP | LP_PINITP); 213 212 retval = r_str(minor); 214 - lp_release_parport (&lp_table[minor]); 213 + lp_release_parport(&lp_table[minor]); 215 214 return retval; 216 215 } 217 216 218 - static void lp_error (int minor) 217 + static void lp_error(int minor) 219 218 { 220 219 DEFINE_WAIT(wait); 221 220 int polling; ··· 224 223 return; 225 224 226 225 polling = lp_table[minor].dev->port->irq == PARPORT_IRQ_NONE; 227 - if (polling) lp_release_parport (&lp_table[minor]); 226 + if (polling) 227 + lp_release_parport(&lp_table[minor]); 228 228 prepare_to_wait(&lp_table[minor].waitq, &wait, TASK_INTERRUPTIBLE); 229 229 schedule_timeout(LP_TIMEOUT_POLLED); 230 230 finish_wait(&lp_table[minor].waitq, &wait); 231 - if (polling) lp_claim_parport_or_block (&lp_table[minor]); 232 - else parport_yield_blocking (lp_table[minor].dev); 231 + if (polling) 232 + lp_claim_parport_or_block(&lp_table[minor]); 233 + else 234 + parport_yield_blocking(lp_table[minor].dev); 233 235 } 234 236 235 237 static int lp_check_status(int minor) ··· 263 259 error = -EIO; 264 260 } else { 265 261 last = 0; /* Come here if LP_CAREFUL is set and no 266 - errors are reported. */ 262 + errors are reported. 
*/ 267 263 } 268 264 269 265 lp_table[minor].last_error = last; ··· 280 276 281 277 /* If we're not in compatibility mode, we're ready now! */ 282 278 if (lp_table[minor].current_mode != IEEE1284_MODE_COMPAT) { 283 - return (0); 279 + return 0; 284 280 } 285 281 286 282 do { 287 - error = lp_check_status (minor); 283 + error = lp_check_status(minor); 288 284 if (error && (nonblock || (LP_F(minor) & LP_ABORT))) 289 285 break; 290 - if (signal_pending (current)) { 286 + if (signal_pending(current)) { 291 287 error = -EINTR; 292 288 break; 293 289 } ··· 295 291 return error; 296 292 } 297 293 298 - static ssize_t lp_write(struct file * file, const char __user * buf, 299 - size_t count, loff_t *ppos) 294 + static ssize_t lp_write(struct file *file, const char __user *buf, 295 + size_t count, loff_t *ppos) 300 296 { 301 297 unsigned int minor = iminor(file_inode(file)); 302 298 struct parport *port = lp_table[minor].dev->port; ··· 321 317 if (mutex_lock_interruptible(&lp_table[minor].port_mutex)) 322 318 return -EINTR; 323 319 324 - if (copy_from_user (kbuf, buf, copy_size)) { 320 + if (copy_from_user(kbuf, buf, copy_size)) { 325 321 retv = -EFAULT; 326 322 goto out_unlock; 327 323 } 328 324 329 - /* Claim Parport or sleep until it becomes available 330 - */ 331 - lp_claim_parport_or_block (&lp_table[minor]); 325 + /* Claim Parport or sleep until it becomes available 326 + */ 327 + lp_claim_parport_or_block(&lp_table[minor]); 332 328 /* Go to the proper mode. */ 333 - lp_table[minor].current_mode = lp_negotiate (port, 334 - lp_table[minor].best_mode); 329 + lp_table[minor].current_mode = lp_negotiate(port, 330 + lp_table[minor].best_mode); 335 331 336 - parport_set_timeout (lp_table[minor].dev, 337 - (nonblock ? PARPORT_INACTIVITY_O_NONBLOCK 338 - : lp_table[minor].timeout)); 332 + parport_set_timeout(lp_table[minor].dev, 333 + (nonblock ? 
PARPORT_INACTIVITY_O_NONBLOCK 334 + : lp_table[minor].timeout)); 339 335 340 - if ((retv = lp_wait_ready (minor, nonblock)) == 0) 336 + if ((retv = lp_wait_ready(minor, nonblock)) == 0) 341 337 do { 342 338 /* Write the data. */ 343 - written = parport_write (port, kbuf, copy_size); 339 + written = parport_write(port, kbuf, copy_size); 344 340 if (written > 0) { 345 341 copy_size -= written; 346 342 count -= written; ··· 348 344 retv += written; 349 345 } 350 346 351 - if (signal_pending (current)) { 347 + if (signal_pending(current)) { 352 348 if (retv == 0) 353 349 retv = -EINTR; 354 350 ··· 359 355 /* incomplete write -> check error ! */ 360 356 int error; 361 357 362 - parport_negotiate (lp_table[minor].dev->port, 363 - IEEE1284_MODE_COMPAT); 358 + parport_negotiate(lp_table[minor].dev->port, 359 + IEEE1284_MODE_COMPAT); 364 360 lp_table[minor].current_mode = IEEE1284_MODE_COMPAT; 365 361 366 - error = lp_wait_ready (minor, nonblock); 362 + error = lp_wait_ready(minor, nonblock); 367 363 368 364 if (error) { 369 365 if (retv == 0) ··· 375 371 break; 376 372 } 377 373 378 - parport_yield_blocking (lp_table[minor].dev); 379 - lp_table[minor].current_mode 380 - = lp_negotiate (port, 381 - lp_table[minor].best_mode); 374 + parport_yield_blocking(lp_table[minor].dev); 375 + lp_table[minor].current_mode 376 + = lp_negotiate(port, 377 + lp_table[minor].best_mode); 382 378 383 379 } else if (need_resched()) 384 - schedule (); 380 + schedule(); 385 381 386 382 if (count) { 387 383 copy_size = count; ··· 393 389 retv = -EFAULT; 394 390 break; 395 391 } 396 - } 392 + } 397 393 } while (count > 0); 398 394 399 - if (test_and_clear_bit(LP_PREEMPT_REQUEST, 395 + if (test_and_clear_bit(LP_PREEMPT_REQUEST, 400 396 &lp_table[minor].bits)) { 401 397 printk(KERN_INFO "lp%d releasing parport\n", minor); 402 - parport_negotiate (lp_table[minor].dev->port, 403 - IEEE1284_MODE_COMPAT); 398 + parport_negotiate(lp_table[minor].dev->port, 399 + IEEE1284_MODE_COMPAT); 404 400 
lp_table[minor].current_mode = IEEE1284_MODE_COMPAT; 405 - lp_release_parport (&lp_table[minor]); 401 + lp_release_parport(&lp_table[minor]); 406 402 } 407 403 out_unlock: 408 404 mutex_unlock(&lp_table[minor].port_mutex); 409 405 410 - return retv; 406 + return retv; 411 407 } 412 408 413 409 #ifdef CONFIG_PARPORT_1284 414 410 415 411 /* Status readback conforming to ieee1284 */ 416 - static ssize_t lp_read(struct file * file, char __user * buf, 412 + static ssize_t lp_read(struct file *file, char __user *buf, 417 413 size_t count, loff_t *ppos) 418 414 { 419 415 DEFINE_WAIT(wait); ··· 430 426 if (mutex_lock_interruptible(&lp_table[minor].port_mutex)) 431 427 return -EINTR; 432 428 433 - lp_claim_parport_or_block (&lp_table[minor]); 429 + lp_claim_parport_or_block(&lp_table[minor]); 434 430 435 - parport_set_timeout (lp_table[minor].dev, 436 - (nonblock ? PARPORT_INACTIVITY_O_NONBLOCK 437 - : lp_table[minor].timeout)); 431 + parport_set_timeout(lp_table[minor].dev, 432 + (nonblock ? PARPORT_INACTIVITY_O_NONBLOCK 433 + : lp_table[minor].timeout)); 438 434 439 - parport_negotiate (lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); 440 - if (parport_negotiate (lp_table[minor].dev->port, 441 - IEEE1284_MODE_NIBBLE)) { 435 + parport_negotiate(lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); 436 + if (parport_negotiate(lp_table[minor].dev->port, 437 + IEEE1284_MODE_NIBBLE)) { 442 438 retval = -EIO; 443 439 goto out; 444 440 } 445 441 446 442 while (retval == 0) { 447 - retval = parport_read (port, kbuf, count); 443 + retval = parport_read(port, kbuf, count); 448 444 449 445 if (retval > 0) 450 446 break; ··· 457 453 /* Wait for data. 
*/ 458 454 459 455 if (lp_table[minor].dev->port->irq == PARPORT_IRQ_NONE) { 460 - parport_negotiate (lp_table[minor].dev->port, 461 - IEEE1284_MODE_COMPAT); 462 - lp_error (minor); 463 - if (parport_negotiate (lp_table[minor].dev->port, 464 - IEEE1284_MODE_NIBBLE)) { 456 + parport_negotiate(lp_table[minor].dev->port, 457 + IEEE1284_MODE_COMPAT); 458 + lp_error(minor); 459 + if (parport_negotiate(lp_table[minor].dev->port, 460 + IEEE1284_MODE_NIBBLE)) { 465 461 retval = -EIO; 466 462 goto out; 467 463 } ··· 471 467 finish_wait(&lp_table[minor].waitq, &wait); 472 468 } 473 469 474 - if (signal_pending (current)) { 470 + if (signal_pending(current)) { 475 471 retval = -ERESTARTSYS; 476 472 break; 477 473 } 478 474 479 - cond_resched (); 475 + cond_resched(); 480 476 } 481 - parport_negotiate (lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); 477 + parport_negotiate(lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); 482 478 out: 483 - lp_release_parport (&lp_table[minor]); 479 + lp_release_parport(&lp_table[minor]); 484 480 485 - if (retval > 0 && copy_to_user (buf, kbuf, retval)) 481 + if (retval > 0 && copy_to_user(buf, kbuf, retval)) 486 482 retval = -EFAULT; 487 483 488 484 mutex_unlock(&lp_table[minor].port_mutex); ··· 492 488 493 489 #endif /* IEEE 1284 support */ 494 490 495 - static int lp_open(struct inode * inode, struct file * file) 491 + static int lp_open(struct inode *inode, struct file *file) 496 492 { 497 493 unsigned int minor = iminor(inode); 498 494 int ret = 0; ··· 517 513 should most likely only ever be used by the tunelp application. 
*/ 518 514 if ((LP_F(minor) & LP_ABORTOPEN) && !(file->f_flags & O_NONBLOCK)) { 519 515 int status; 520 - lp_claim_parport_or_block (&lp_table[minor]); 516 + lp_claim_parport_or_block(&lp_table[minor]); 521 517 status = r_str(minor); 522 - lp_release_parport (&lp_table[minor]); 518 + lp_release_parport(&lp_table[minor]); 523 519 if (status & LP_POUTPA) { 524 520 printk(KERN_INFO "lp%d out of paper\n", minor); 525 521 LP_F(minor) &= ~LP_BUSY; ··· 544 540 goto out; 545 541 } 546 542 /* Determine if the peripheral supports ECP mode */ 547 - lp_claim_parport_or_block (&lp_table[minor]); 543 + lp_claim_parport_or_block(&lp_table[minor]); 548 544 if ( (lp_table[minor].dev->port->modes & PARPORT_MODE_ECP) && 549 - !parport_negotiate (lp_table[minor].dev->port, 550 - IEEE1284_MODE_ECP)) { 551 - printk (KERN_INFO "lp%d: ECP mode\n", minor); 545 + !parport_negotiate(lp_table[minor].dev->port, 546 + IEEE1284_MODE_ECP)) { 547 + printk(KERN_INFO "lp%d: ECP mode\n", minor); 552 548 lp_table[minor].best_mode = IEEE1284_MODE_ECP; 553 549 } else { 554 550 lp_table[minor].best_mode = IEEE1284_MODE_COMPAT; 555 551 } 556 552 /* Leave peripheral in compatibility mode */ 557 - parport_negotiate (lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); 558 - lp_release_parport (&lp_table[minor]); 553 + parport_negotiate(lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); 554 + lp_release_parport(&lp_table[minor]); 559 555 lp_table[minor].current_mode = IEEE1284_MODE_COMPAT; 560 556 out: 561 557 mutex_unlock(&lp_mutex); 562 558 return ret; 563 559 } 564 560 565 - static int lp_release(struct inode * inode, struct file * file) 561 + static int lp_release(struct inode *inode, struct file *file) 566 562 { 567 563 unsigned int minor = iminor(inode); 568 564 569 - lp_claim_parport_or_block (&lp_table[minor]); 570 - parport_negotiate (lp_table[minor].dev->port, IEEE1284_MODE_COMPAT); 565 + lp_claim_parport_or_block(&lp_table[minor]); 566 + parport_negotiate(lp_table[minor].dev->port, 
IEEE1284_MODE_COMPAT); 571 567 lp_table[minor].current_mode = IEEE1284_MODE_COMPAT; 572 - lp_release_parport (&lp_table[minor]); 568 + lp_release_parport(&lp_table[minor]); 573 569 kfree(lp_table[minor].lp_buffer); 574 570 lp_table[minor].lp_buffer = NULL; 575 571 LP_F(minor) &= ~LP_BUSY; ··· 619 615 case LPWAIT: 620 616 LP_WAIT(minor) = arg; 621 617 break; 622 - case LPSETIRQ: 618 + case LPSETIRQ: 623 619 return -EINVAL; 624 620 break; 625 621 case LPGETIRQ: ··· 630 626 case LPGETSTATUS: 631 627 if (mutex_lock_interruptible(&lp_table[minor].port_mutex)) 632 628 return -EINTR; 633 - lp_claim_parport_or_block (&lp_table[minor]); 629 + lp_claim_parport_or_block(&lp_table[minor]); 634 630 status = r_str(minor); 635 - lp_release_parport (&lp_table[minor]); 631 + lp_release_parport(&lp_table[minor]); 636 632 mutex_unlock(&lp_table[minor].port_mutex); 637 633 638 634 if (copy_to_user(argp, &status, sizeof(int))) ··· 651 647 sizeof(struct lp_stats)); 652 648 break; 653 649 #endif 654 - case LPGETFLAGS: 655 - status = LP_F(minor); 650 + case LPGETFLAGS: 651 + status = LP_F(minor); 656 652 if (copy_to_user(argp, &status, sizeof(int))) 657 653 return -EFAULT; 658 654 break; ··· 805 801 806 802 /* The console must be locked when we get here. */ 807 803 808 - static void lp_console_write (struct console *co, const char *s, 809 - unsigned count) 804 + static void lp_console_write(struct console *co, const char *s, 805 + unsigned count) 810 806 { 811 807 struct pardevice *dev = lp_table[CONSOLE_LP].dev; 812 808 struct parport *port = dev->port; 813 809 ssize_t written; 814 810 815 - if (parport_claim (dev)) 811 + if (parport_claim(dev)) 816 812 /* Nothing we can do. */ 817 813 return; 818 814 819 - parport_set_timeout (dev, 0); 815 + parport_set_timeout(dev, 0); 820 816 821 817 /* Go to compatibility mode. 
*/ 822 - parport_negotiate (port, IEEE1284_MODE_COMPAT); 818 + parport_negotiate(port, IEEE1284_MODE_COMPAT); 823 819 824 820 do { 825 821 /* Write the data, converting LF->CRLF as we go. */ 826 822 ssize_t canwrite = count; 827 - char *lf = memchr (s, '\n', count); 823 + char *lf = memchr(s, '\n', count); 828 824 if (lf) 829 825 canwrite = lf - s; 830 826 831 827 if (canwrite > 0) { 832 - written = parport_write (port, s, canwrite); 828 + written = parport_write(port, s, canwrite); 833 829 834 830 if (written <= 0) 835 831 continue; ··· 847 843 s++; 848 844 count--; 849 845 do { 850 - written = parport_write (port, crlf, i); 846 + written = parport_write(port, crlf, i); 851 847 if (written > 0) 852 848 i -= written, crlf += written; 853 849 } while (i > 0 && (CONSOLE_LP_STRICT || written > 0)); 854 850 } 855 851 } while (count > 0 && (CONSOLE_LP_STRICT || written > 0)); 856 852 857 - parport_release (dev); 853 + parport_release(dev); 858 854 } 859 855 860 856 static struct console lpcons = { ··· 875 871 module_param(reset, bool, 0); 876 872 877 873 #ifndef MODULE 878 - static int __init lp_setup (char *str) 874 + static int __init lp_setup(char *str) 879 875 { 880 876 static int parport_ptr; 881 877 int x; ··· 912 908 913 909 static int lp_register(int nr, struct parport *port) 914 910 { 915 - lp_table[nr].dev = parport_register_device(port, "lp", 916 - lp_preempt, NULL, NULL, 0, 917 - (void *) &lp_table[nr]); 911 + struct pardev_cb ppdev_cb; 912 + 913 + memset(&ppdev_cb, 0, sizeof(ppdev_cb)); 914 + ppdev_cb.preempt = lp_preempt; 915 + ppdev_cb.private = &lp_table[nr]; 916 + lp_table[nr].dev = parport_register_dev_model(port, "lp", 917 + &ppdev_cb, nr); 918 918 if (lp_table[nr].dev == NULL) 919 919 return 1; 920 920 lp_table[nr].flags |= LP_EXIST; ··· 929 921 device_create(lp_class, port->dev, MKDEV(LP_MAJOR, nr), NULL, 930 922 "lp%d", nr); 931 923 932 - printk(KERN_INFO "lp%d: using %s (%s).\n", nr, port->name, 924 + printk(KERN_INFO "lp%d: using %s (%s).\n", nr, 
port->name, 933 925 (port->irq == PARPORT_IRQ_NONE)?"polling":"interrupt-driven"); 934 926 935 927 #ifdef CONFIG_LP_CONSOLE ··· 937 929 if (port->modes & PARPORT_MODE_SAFEININT) { 938 930 register_console(&lpcons); 939 931 console_registered = port; 940 - printk (KERN_INFO "lp%d: console ready\n", CONSOLE_LP); 932 + printk(KERN_INFO "lp%d: console ready\n", CONSOLE_LP); 941 933 } else 942 - printk (KERN_ERR "lp%d: cannot run console on %s\n", 943 - CONSOLE_LP, port->name); 934 + printk(KERN_ERR "lp%d: cannot run console on %s\n", 935 + CONSOLE_LP, port->name); 944 936 } 945 937 #endif 938 + port_num[nr] = port->number; 946 939 947 940 return 0; 948 941 } 949 942 950 - static void lp_attach (struct parport *port) 943 + static void lp_attach(struct parport *port) 951 944 { 952 945 unsigned int i; 953 946 ··· 962 953 printk(KERN_INFO "lp: ignoring parallel port (max. %d)\n",LP_NO); 963 954 return; 964 955 } 965 - if (!lp_register(lp_count, port)) 956 + for (i = 0; i < LP_NO; i++) 957 + if (port_num[i] == -1) 958 + break; 959 + 960 + if (!lp_register(i, port)) 966 961 lp_count++; 967 962 break; 968 963 ··· 982 969 } 983 970 } 984 971 985 - static void lp_detach (struct parport *port) 972 + static void lp_detach(struct parport *port) 986 973 { 974 + int n; 975 + 987 976 /* Write this some day. 
*/ 988 977 #ifdef CONFIG_LP_CONSOLE 989 978 if (console_registered == port) { ··· 993 978 console_registered = NULL; 994 979 } 995 980 #endif /* CONFIG_LP_CONSOLE */ 981 + 982 + for (n = 0; n < LP_NO; n++) { 983 + if (port_num[n] == port->number) { 984 + port_num[n] = -1; 985 + lp_count--; 986 + device_destroy(lp_class, MKDEV(LP_MAJOR, n)); 987 + parport_unregister_device(lp_table[n].dev); 988 + } 989 + } 996 990 } 997 991 998 992 static struct parport_driver lp_driver = { 999 993 .name = "lp", 1000 - .attach = lp_attach, 994 + .match_port = lp_attach, 1001 995 .detach = lp_detach, 996 + .devmodel = true, 1002 997 }; 1003 998 1004 - static int __init lp_init (void) 999 + static int __init lp_init(void) 1005 1000 { 1006 1001 int i, err = 0; 1007 1002 ··· 1028 1003 #ifdef LP_STATS 1029 1004 lp_table[i].lastcall = 0; 1030 1005 lp_table[i].runchars = 0; 1031 - memset (&lp_table[i].stats, 0, sizeof (struct lp_stats)); 1006 + memset(&lp_table[i].stats, 0, sizeof(struct lp_stats)); 1032 1007 #endif 1033 1008 lp_table[i].last_error = 0; 1034 - init_waitqueue_head (&lp_table[i].waitq); 1035 - init_waitqueue_head (&lp_table[i].dataq); 1009 + init_waitqueue_head(&lp_table[i].waitq); 1010 + init_waitqueue_head(&lp_table[i].dataq); 1036 1011 mutex_init(&lp_table[i].port_mutex); 1037 1012 lp_table[i].timeout = 10 * HZ; 1013 + port_num[i] = -1; 1038 1014 } 1039 1015 1040 - if (register_chrdev (LP_MAJOR, "lp", &lp_fops)) { 1041 - printk (KERN_ERR "lp: unable to get major %d\n", LP_MAJOR); 1016 + if (register_chrdev(LP_MAJOR, "lp", &lp_fops)) { 1017 + printk(KERN_ERR "lp: unable to get major %d\n", LP_MAJOR); 1042 1018 return -EIO; 1043 1019 } 1044 1020 ··· 1049 1023 goto out_reg; 1050 1024 } 1051 1025 1052 - if (parport_register_driver (&lp_driver)) { 1053 - printk (KERN_ERR "lp: unable to register with parport\n"); 1026 + if (parport_register_driver(&lp_driver)) { 1027 + printk(KERN_ERR "lp: unable to register with parport\n"); 1054 1028 err = -EIO; 1055 1029 goto out_class; 1056 
1030 } 1057 1031 1058 1032 if (!lp_count) { 1059 - printk (KERN_INFO "lp: driver loaded but no devices found\n"); 1033 + printk(KERN_INFO "lp: driver loaded but no devices found\n"); 1060 1034 #ifndef CONFIG_PARPORT_1284 1061 1035 if (parport_nr[0] == LP_PARPORT_AUTO) 1062 - printk (KERN_INFO "lp: (is IEEE 1284 support enabled?)\n"); 1036 + printk(KERN_INFO "lp: (is IEEE 1284 support enabled?)\n"); 1063 1037 #endif 1064 1038 } 1065 1039 ··· 1072 1046 return err; 1073 1047 } 1074 1048 1075 - static int __init lp_init_module (void) 1049 + static int __init lp_init_module(void) 1076 1050 { 1077 1051 if (parport[0]) { 1078 1052 /* The user gave some parameters. Let's see what they were. */ ··· 1086 1060 else { 1087 1061 char *ep; 1088 1062 unsigned long r = simple_strtoul(parport[n], &ep, 0); 1089 - if (ep != parport[n]) 1063 + if (ep != parport[n]) 1090 1064 parport_nr[n] = r; 1091 1065 else { 1092 1066 printk(KERN_ERR "lp: bad port specifier `%s'\n", parport[n]); ··· 1100 1074 return lp_init(); 1101 1075 } 1102 1076 1103 - static void lp_cleanup_module (void) 1077 + static void lp_cleanup_module(void) 1104 1078 { 1105 - unsigned int offset; 1106 - 1107 - parport_unregister_driver (&lp_driver); 1079 + parport_unregister_driver(&lp_driver); 1108 1080 1109 1081 #ifdef CONFIG_LP_CONSOLE 1110 - unregister_console (&lpcons); 1082 + unregister_console(&lpcons); 1111 1083 #endif 1112 1084 1113 1085 unregister_chrdev(LP_MAJOR, "lp"); 1114 - for (offset = 0; offset < LP_NO; offset++) { 1115 - if (lp_table[offset].dev == NULL) 1116 - continue; 1117 - parport_unregister_device(lp_table[offset].dev); 1118 - device_destroy(lp_class, MKDEV(LP_MAJOR, offset)); 1119 - } 1120 1086 class_destroy(lp_class); 1121 1087 } 1122 1088
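The lp.c changes above replace a fixed "next lp number" counter with a `port_num[]` table: on attach the driver picks the first unused slot, and on detach it frees every slot that referenced the departing parallel port, so numbers can be reused after hot-unplug. A minimal userspace sketch of that bookkeeping (names here are illustrative, not the kernel's):

```c
#include <assert.h>

#define LP_NO 8

/* -1 marks a free minor number, as in the driver's port_num[] table */
static int port_num[LP_NO];

static void slot_init(void)
{
	for (int i = 0; i < LP_NO; i++)
		port_num[i] = -1;
}

/* attach: claim the first unused lp number for this parport number */
static int slot_attach(int parport_number)
{
	for (int i = 0; i < LP_NO; i++) {
		if (port_num[i] == -1) {
			port_num[i] = parport_number;
			return i;	/* becomes the lp%d minor */
		}
	}
	return -1;			/* table full */
}

/* detach: free every slot that referenced the departing port */
static void slot_detach(int parport_number)
{
	for (int i = 0; i < LP_NO; i++)
		if (port_num[i] == parport_number)
			port_num[i] = -1;
}
```

With this scheme, detaching lp0's port and attaching a new one hands out minor 0 again instead of burning a new number.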
+2 -2
drivers/char/rtc.c
··· 866 866 #ifdef CONFIG_SPARC32 867 867 for_each_node_by_name(ebus_dp, "ebus") { 868 868 struct device_node *dp; 869 - for (dp = ebus_dp; dp; dp = dp->sibling) { 870 - if (!strcmp(dp->name, "rtc")) { 869 + for_each_child_of_node(ebus_dp, dp) { 870 + if (of_node_name_eq(dp, "rtc")) { 871 871 op = of_find_device_by_node(dp); 872 872 if (op) { 873 873 rtc_port = op->resource[0].start;
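The rtc.c hunk swaps an open-coded walk of `dp->sibling` (which started at `ebus_dp` itself) for the `for_each_child_of_node()` helper, which iterates the actual children and, in the kernel, also manages node reference counts. A toy userspace mock of the iteration shape (the real `struct device_node` and helpers differ):

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

/* minimal stand-in for struct device_node's child/sibling links */
struct node {
	const char *name;
	struct node *child;
	struct node *sibling;
};

/* same loop shape as the kernel's for_each_child_of_node() */
#define for_each_child(parent, ch) \
	for ((ch) = (parent)->child; (ch); (ch) = (ch)->sibling)

static struct node *find_child_by_name(struct node *parent, const char *name)
{
	struct node *ch;

	for_each_child(parent, ch)
		if (strcmp(ch->name, name) == 0)
			return ch;
	return NULL;
}
```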
+44 -43
drivers/char/tlclk.c
··· 506 506 507 507 val = (unsigned char)tmp; 508 508 spin_lock_irqsave(&event_lock, flags); 509 - if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) { 510 - SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x28); 511 - SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val); 512 - } else if (val >= CLK_8_592MHz) { 513 - SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x38); 514 - switch (val) { 515 - case CLK_8_592MHz: 516 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); 517 - break; 518 - case CLK_11_184MHz: 519 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 0); 520 - break; 521 - case CLK_34_368MHz: 522 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 3); 523 - break; 524 - case CLK_44_736MHz: 525 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); 526 - break; 527 - } 528 - } else 529 - SET_PORT_BITS(TLCLK_REG3, 0xc7, val << 3); 530 - 509 + if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) { 510 + SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x28); 511 + SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val); 512 + } else if (val >= CLK_8_592MHz) { 513 + SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x38); 514 + switch (val) { 515 + case CLK_8_592MHz: 516 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); 517 + break; 518 + case CLK_11_184MHz: 519 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 0); 520 + break; 521 + case CLK_34_368MHz: 522 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 3); 523 + break; 524 + case CLK_44_736MHz: 525 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); 526 + break; 527 + } 528 + } else { 529 + SET_PORT_BITS(TLCLK_REG3, 0xc7, val << 3); 530 + } 531 531 spin_unlock_irqrestore(&event_lock, flags); 532 532 533 533 return strnlen(buf, count); ··· 548 548 549 549 val = (unsigned char)tmp; 550 550 spin_lock_irqsave(&event_lock, flags); 551 - if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) { 552 - SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x5); 553 - SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val); 554 - } else if (val >= CLK_8_592MHz) { 555 - SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x7); 556 - switch (val) { 557 - case CLK_8_592MHz: 558 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); 559 - break; 560 - case CLK_11_184MHz: 561 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 0); 562 - 
break; 563 - case CLK_34_368MHz: 564 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 3); 565 - break; 566 - case CLK_44_736MHz: 567 - SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); 568 - break; 569 - } 570 - } else 571 - SET_PORT_BITS(TLCLK_REG3, 0xf8, val); 551 + if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) { 552 + SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x5); 553 + SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val); 554 + } else if (val >= CLK_8_592MHz) { 555 + SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x7); 556 + switch (val) { 557 + case CLK_8_592MHz: 558 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 2); 559 + break; 560 + case CLK_11_184MHz: 561 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 0); 562 + break; 563 + case CLK_34_368MHz: 564 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 3); 565 + break; 566 + case CLK_44_736MHz: 567 + SET_PORT_BITS(TLCLK_REG0, 0xfc, 1); 568 + break; 569 + } 570 + } else { 571 + SET_PORT_BITS(TLCLK_REG3, 0xf8, val); 572 + } 572 573 spin_unlock_irqrestore(&event_lock, flags); 573 574 574 575 return strnlen(buf, count);
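The tlclk.c hunks above are re-indentation around `SET_PORT_BITS(reg, mask, val)`, which is the classic read-modify-write idiom: read the register, keep only the bits in `mask`, OR in `val`, write it back. The real macro does `inb()`/`outb()` on an I/O port; this sketch emulates it on a plain variable to show the arithmetic:

```c
#include <assert.h>

/* emulated I/O port register; the driver really uses inb()/outb() */
static unsigned char reg;

/* read-modify-write: keep the bits selected by 'mask', then OR in 'val' */
#define SET_PORT_BITS(mask, val) (reg = (reg & (mask)) | (val))
```

For example, `SET_PORT_BITS(0xc7, 0x28)` on a register holding 0xff clears bits 3-5 (the ones outside 0xc7) and then sets bits 3 and 5, leaving 0xef.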
+3 -14
drivers/char/virtio_console.c
··· 1309 1309 .attrs = port_sysfs_entries, 1310 1310 }; 1311 1311 1312 - static int debugfs_show(struct seq_file *s, void *data) 1312 + static int port_debugfs_show(struct seq_file *s, void *data) 1313 1313 { 1314 1314 struct port *port = s->private; 1315 1315 ··· 1327 1327 return 0; 1328 1328 } 1329 1329 1330 - static int debugfs_open(struct inode *inode, struct file *file) 1331 - { 1332 - return single_open(file, debugfs_show, inode->i_private); 1333 - } 1334 - 1335 - static const struct file_operations port_debugfs_ops = { 1336 - .owner = THIS_MODULE, 1337 - .open = debugfs_open, 1338 - .read = seq_read, 1339 - .llseek = seq_lseek, 1340 - .release = single_release, 1341 - }; 1330 + DEFINE_SHOW_ATTRIBUTE(port_debugfs); 1342 1331 1343 1332 static void set_console_size(struct port *port, u16 rows, u16 cols) 1344 1333 { ··· 1479 1490 port->debugfs_file = debugfs_create_file(debugfs_name, 0444, 1480 1491 pdrvdata.debugfs_dir, 1481 1492 port, 1482 - &port_debugfs_ops); 1493 + &port_debugfs_fops); 1483 1494 } 1484 1495 return 0; 1485 1496
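The virtio_console hunk replaces hand-rolled seq_file boilerplate with `DEFINE_SHOW_ATTRIBUTE(port_debugfs)`, a token-pasting macro that expects a `NAME##_show()` function and generates `NAME##_open()` plus a `NAME##_fops` table — which is why the show function is renamed and the reference becomes `&port_debugfs_fops`. A userspace emulation of that naming convention (simplified; the kernel macro wires up the full `file_operations`):

```c
#include <assert.h>

/* stand-in for struct file_operations */
struct file_ops {
	int (*open)(void);
};

/*
 * Emulates the shape of the kernel's DEFINE_SHOW_ATTRIBUTE(): given NAME,
 * it requires NAME##_show() and generates NAME##_open() and NAME##_fops.
 */
#define DEFINE_SHOW_ATTR(name)						\
	static int name##_open(void) { return name##_show(); }		\
	static const struct file_ops name##_fops = {			\
		.open = name##_open,					\
	}

static int port_debugfs_show(void)
{
	return 0;	/* a real show() would print into a seq_file */
}

DEFINE_SHOW_ATTR(port_debugfs);
```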
+13 -2
drivers/extcon/extcon-max14577.c
··· 657 657 struct max14577 *max14577 = dev_get_drvdata(pdev->dev.parent); 658 658 struct max14577_muic_info *info; 659 659 int delay_jiffies; 660 + int cable_type; 661 + bool attached; 660 662 int ret; 661 663 int i; 662 664 u8 id; ··· 727 725 info->path_uart = CTRL1_SW_UART; 728 726 delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT); 729 727 730 - /* Set initial path for UART */ 731 - max14577_muic_set_path(info, info->path_uart, true); 728 + /* Set initial path for UART when JIG is connected to get serial logs */ 729 + ret = max14577_bulk_read(info->max14577->regmap, 730 + MAX14577_MUIC_REG_STATUS1, info->status, 2); 731 + if (ret) { 732 + dev_err(info->dev, "Cannot read STATUS registers\n"); 733 + return ret; 734 + } 735 + cable_type = max14577_muic_get_cable_type(info, MAX14577_CABLE_GROUP_ADC, 736 + &attached); 737 + if (attached && cable_type == MAX14577_MUIC_ADC_FACTORY_MODE_UART_OFF) 738 + max14577_muic_set_path(info, info->path_uart, true); 732 739 733 740 /* Check revision number of MUIC device*/ 734 741 ret = max14577_read_reg(info->max14577->regmap,
+14 -2
drivers/extcon/extcon-max77693.c
··· 1072 1072 struct max77693_reg_data *init_data; 1073 1073 int num_init_data; 1074 1074 int delay_jiffies; 1075 + int cable_type; 1076 + bool attached; 1075 1077 int ret; 1076 1078 int i; 1077 1079 unsigned int id; ··· 1214 1212 delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT); 1215 1213 } 1216 1214 1217 - /* Set initial path for UART */ 1218 - max77693_muic_set_path(info, info->path_uart, true); 1215 + /* Set initial path for UART when JIG is connected to get serial logs */ 1216 + ret = regmap_bulk_read(info->max77693->regmap_muic, 1217 + MAX77693_MUIC_REG_STATUS1, info->status, 2); 1218 + if (ret) { 1219 + dev_err(info->dev, "failed to read MUIC register\n"); 1220 + return ret; 1221 + } 1222 + cable_type = max77693_muic_get_cable_type(info, 1223 + MAX77693_CABLE_GROUP_ADC, &attached); 1224 + if (attached && (cable_type == MAX77693_MUIC_ADC_FACTORY_MODE_UART_ON || 1225 + cable_type == MAX77693_MUIC_ADC_FACTORY_MODE_UART_OFF)) 1226 + max77693_muic_set_path(info, info->path_uart, true); 1219 1227 1220 1228 /* Check revision number of MUIC device*/ 1221 1229 ret = regmap_read(info->max77693->regmap_muic,
+15 -3
drivers/extcon/extcon-max77843.c
··· 812 812 struct max77693_dev *max77843 = dev_get_drvdata(pdev->dev.parent); 813 813 struct max77843_muic_info *info; 814 814 unsigned int id; 815 + int cable_type; 816 + bool attached; 815 817 int i, ret; 816 818 817 819 info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL); ··· 858 856 /* Set ADC debounce time */ 859 857 max77843_muic_set_debounce_time(info, MAX77843_DEBOUNCE_TIME_25MS); 860 858 861 - /* Set initial path for UART */ 862 - max77843_muic_set_path(info, MAX77843_MUIC_CONTROL1_SW_UART, true, 863 - false); 859 + /* Set initial path for UART when JIG is connected to get serial logs */ 860 + ret = regmap_bulk_read(max77843->regmap_muic, 861 + MAX77843_MUIC_REG_STATUS1, info->status, 862 + MAX77843_MUIC_STATUS_NUM); 863 + if (ret) { 864 + dev_err(info->dev, "Cannot read STATUS registers\n"); 865 + goto err_muic_irq; 866 + } 867 + cable_type = max77843_muic_get_cable_type(info, MAX77843_CABLE_GROUP_ADC, 868 + &attached); 869 + if (attached && cable_type == MAX77843_MUIC_ADC_FACTORY_MODE_UART_OFF) 870 + max77843_muic_set_path(info, MAX77843_MUIC_CONTROL1_SW_UART, 871 + true, false); 864 872 865 873 /* Check revision number of MUIC device */ 866 874 ret = regmap_read(max77843->regmap_muic, MAX77843_MUIC_REG_ID, &id);
+17 -8
drivers/extcon/extcon-max8997.c
··· 311 311 { 312 312 int ret = 0; 313 313 314 - if (usb_type == MAX8997_USB_HOST) { 315 - ret = max8997_muic_set_path(info, info->path_usb, attached); 316 - if (ret < 0) { 317 - dev_err(info->dev, "failed to update muic register\n"); 318 - return ret; 319 - } 314 + ret = max8997_muic_set_path(info, info->path_usb, attached); 315 + if (ret < 0) { 316 + dev_err(info->dev, "failed to update muic register\n"); 317 + return ret; 320 318 } 321 319 322 320 switch (usb_type) { ··· 630 632 struct max8997_platform_data *pdata = dev_get_platdata(max8997->dev); 631 633 struct max8997_muic_info *info; 632 634 int delay_jiffies; 635 + int cable_type; 636 + bool attached; 633 637 int ret, i; 634 638 635 639 info = devm_kzalloc(&pdev->dev, sizeof(struct max8997_muic_info), ··· 724 724 delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT); 725 725 } 726 726 727 - /* Set initial path for UART */ 728 - max8997_muic_set_path(info, info->path_uart, true); 727 + /* Set initial path for UART when JIG is connected to get serial logs */ 728 + ret = max8997_bulk_read(info->muic, MAX8997_MUIC_REG_STATUS1, 729 + 2, info->status); 730 + if (ret) { 731 + dev_err(info->dev, "failed to read MUIC register\n"); 732 + return ret; 733 + } 734 + cable_type = max8997_muic_get_cable_type(info, 735 + MAX8997_CABLE_GROUP_ADC, &attached); 736 + if (attached && cable_type == MAX8997_MUIC_ADC_FACTORY_MODE_UART_OFF) 737 + max8997_muic_set_path(info, info->path_uart, true); 729 738 730 739 /* Set ADC debounce time */ 731 740 max8997_muic_set_debounce_time(info, ADC_DEBOUNCE_TIME_25MS);
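All four extcon hunks (max14577, max77693, max77843, max8997) apply the same fix: instead of unconditionally switching the MUIC path to UART at probe, the driver now reads the STATUS registers first and routes UART only when a JIG (factory-mode UART) cable is actually attached. A sketch of that decision, with purely illustrative names (the drivers use chip-specific ADC cable codes):

```c
#include <assert.h>
#include <stdbool.h>

/* illustrative cable types; the real drivers decode ADC values */
enum cable_type {
	CABLE_NONE,
	CABLE_JIG_UART_OFF,	/* factory-mode UART cable */
	CABLE_USB,
};

/* records whether probe selected the UART path */
static int uart_path_set;

static void set_uart_path(void)
{
	uart_path_set = 1;
}

/*
 * Old behaviour: set_uart_path() ran unconditionally at probe, clobbering
 * the path even with, say, a USB cable plugged in.  New behaviour: check
 * the attached cable first and route UART only for a JIG cable.
 */
static void probe_init_path(enum cable_type type, bool attached)
{
	uart_path_set = 0;
	if (attached && type == CABLE_JIG_UART_OFF)
		set_uart_path();
}
```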
+12
drivers/firmware/Kconfig
··· 216 216 WARNING: Using incorrect parameters (base address in particular) 217 217 may crash your system. 218 218 219 + config INTEL_STRATIX10_SERVICE 220 + tristate "Intel Stratix10 Service Layer" 221 + depends on HAVE_ARM_SMCCC 222 + default n 223 + help 224 + Intel Stratix10 service layer runs at privileged exception level, 225 + interfaces with the service providers (FPGA manager is one of them) 226 + and manages secure monitor call to communicate with secure monitor 227 + software at secure monitor exception level. 228 + 229 + Say Y here if you want Stratix10 service layer support. 230 + 219 231 config QCOM_SCM 220 232 bool 221 233 depends on ARM || ARM64
+1
drivers/firmware/Makefile
··· 12 12 obj-$(CONFIG_EDD) += edd.o 13 13 obj-$(CONFIG_EFI_PCDP) += pcdp.o 14 14 obj-$(CONFIG_DMIID) += dmi-id.o 15 + obj-$(CONFIG_INTEL_STRATIX10_SERVICE) += stratix10-svc.o 15 16 obj-$(CONFIG_ISCSI_IBFT_FIND) += iscsi_ibft_find.o 16 17 obj-$(CONFIG_ISCSI_IBFT) += iscsi_ibft.o 17 18 obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o
+1041
drivers/firmware/stratix10-svc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2017-2018, Intel Corporation 4 + */ 5 + 6 + #include <linux/completion.h> 7 + #include <linux/delay.h> 8 + #include <linux/genalloc.h> 9 + #include <linux/io.h> 10 + #include <linux/kfifo.h> 11 + #include <linux/kthread.h> 12 + #include <linux/module.h> 13 + #include <linux/mutex.h> 14 + #include <linux/of.h> 15 + #include <linux/of_platform.h> 16 + #include <linux/platform_device.h> 17 + #include <linux/slab.h> 18 + #include <linux/spinlock.h> 19 + #include <linux/firmware/intel/stratix10-smc.h> 20 + #include <linux/firmware/intel/stratix10-svc-client.h> 21 + #include <linux/types.h> 22 + 23 + /** 24 + * SVC_NUM_DATA_IN_FIFO - number of struct stratix10_svc_data in the FIFO 25 + * 26 + * SVC_NUM_CHANNEL - number of channels supported by service layer driver 27 + * 28 + * FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS - claim back the submitted buffer(s) 29 + * from the secure world for FPGA manager to reuse, or to free the buffer(s) 30 + * when all bit-stream data has been sent. 31 + * 32 + * FPGA_CONFIG_STATUS_TIMEOUT_SEC - poll the FPGA configuration status, 33 + * service layer will return error to FPGA manager when timeout occurs, 34 + * timeout is set to 30 seconds (30 * 1000) at Intel Stratix10 SoC. 
35 + */ 36 + #define SVC_NUM_DATA_IN_FIFO 32 37 + #define SVC_NUM_CHANNEL 2 38 + #define FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS 200 39 + #define FPGA_CONFIG_STATUS_TIMEOUT_SEC 30 40 + 41 + typedef void (svc_invoke_fn)(unsigned long, unsigned long, unsigned long, 42 + unsigned long, unsigned long, unsigned long, 43 + unsigned long, unsigned long, 44 + struct arm_smccc_res *); 45 + struct stratix10_svc_chan; 46 + 47 + /** 48 + * struct stratix10_svc_sh_memory - service shared memory structure 49 + * @sync_complete: state for a completion 50 + * @addr: physical address of shared memory block 51 + * @size: size of shared memory block 52 + * @invoke_fn: function to issue secure monitor or hypervisor call 53 + * 54 + * This struct is used to save physical address and size of shared memory 55 + * block. The shared memory block is allocated by secure monitor software 56 + * at secure world. 57 + * 58 + * Service layer driver uses the physical address and size to create a memory 59 + * pool, then allocates data buffer from that memory pool for service client. 60 + */ 61 + struct stratix10_svc_sh_memory { 62 + struct completion sync_complete; 63 + unsigned long addr; 64 + unsigned long size; 65 + svc_invoke_fn *invoke_fn; 66 + }; 67 + 68 + /** 69 + * struct stratix10_svc_data_mem - service memory structure 70 + * @vaddr: virtual address 71 + * @paddr: physical address 72 + * @size: size of memory 73 + * @node: linked list head node 74 + * 75 + * This struct is used in a list that keeps track of buffers which have 76 + * been allocated or freed from the memory pool. Service layer driver also 77 + * uses this struct to transfer physical address to virtual address. 
78 + */ 79 + struct stratix10_svc_data_mem { 80 + void *vaddr; 81 + phys_addr_t paddr; 82 + size_t size; 83 + struct list_head node; 84 + }; 85 + 86 + /** 87 + * struct stratix10_svc_data - service data structure 88 + * @chan: service channel 89 + * @paddr: payload physical address 90 + * @size: payload size 91 + * @command: service command requested by client 92 + * @flag: configuration type (full or partial) 93 + * @arg: args to be passed via registers and not physically mapped buffers 94 + * 95 + * This struct is used in service FIFO for inter-process communication. 96 + */ 97 + struct stratix10_svc_data { 98 + struct stratix10_svc_chan *chan; 99 + phys_addr_t paddr; 100 + size_t size; 101 + u32 command; 102 + u32 flag; 103 + u64 arg[3]; 104 + }; 105 + 106 + /** 107 + * struct stratix10_svc_controller - service controller 108 + * @dev: device 109 + * @chans: array of service channels 110 + * @num_chans: number of channels in 'chans' array 111 + * @num_active_client: number of active service clients 112 + * @node: list management 113 + * @genpool: memory pool pointing to the memory region 114 + * @task: pointer to the thread task which handles SMC or HVC call 115 + * @svc_fifo: a queue for storing service message data 116 + * @complete_status: state for completion 117 + * @svc_fifo_lock: protect access to service message data queue 118 + * @invoke_fn: function to issue secure monitor call or hypervisor call 119 + * 120 + * This struct is used to create communication channels for service clients, to 121 + * handle secure monitor or hypervisor call. 
122 + */ 123 + struct stratix10_svc_controller { 124 + struct device *dev; 125 + struct stratix10_svc_chan *chans; 126 + int num_chans; 127 + int num_active_client; 128 + struct list_head node; 129 + struct gen_pool *genpool; 130 + struct task_struct *task; 131 + struct kfifo svc_fifo; 132 + struct completion complete_status; 133 + spinlock_t svc_fifo_lock; 134 + svc_invoke_fn *invoke_fn; 135 + }; 136 + 137 + /** 138 + * struct stratix10_svc_chan - service communication channel 139 + * @ctrl: pointer to service controller which is the provider of this channel 140 + * @scl: pointer to service client which owns the channel 141 + * @name: service client name associated with the channel 142 + * @lock: protect access to the channel 143 + * 144 + * This struct is used by service client to communicate with service layer, each 145 + * service client has its own channel created by service controller. 146 + */ 147 + struct stratix10_svc_chan { 148 + struct stratix10_svc_controller *ctrl; 149 + struct stratix10_svc_client *scl; 150 + char *name; 151 + spinlock_t lock; 152 + }; 153 + 154 + static LIST_HEAD(svc_ctrl); 155 + static LIST_HEAD(svc_data_mem); 156 + 157 + /** 158 + * svc_pa_to_va() - translate physical address to virtual address 159 + * @addr: to be translated physical address 160 + * 161 + * Return: valid virtual address or NULL if the provided physical 162 + * address doesn't exist. 
163 + */ 164 + static void *svc_pa_to_va(unsigned long addr) 165 + { 166 + struct stratix10_svc_data_mem *pmem; 167 + 168 + pr_debug("claim back P-addr=0x%016x\n", (unsigned int)addr); 169 + list_for_each_entry(pmem, &svc_data_mem, node) 170 + if (pmem->paddr == addr) 171 + return pmem->vaddr; 172 + 173 + /* physical address is not found */ 174 + return NULL; 175 + } 176 + 177 + /** 178 + * svc_thread_cmd_data_claim() - claim back buffer from the secure world 179 + * @ctrl: pointer to service layer controller 180 + * @p_data: pointer to service data structure 181 + * @cb_data: pointer to callback data structure to service client 182 + * 183 + * Claim back the submitted buffers from the secure world and pass buffer 184 + * back to service client (FPGA manager, etc) for reuse. 185 + */ 186 + static void svc_thread_cmd_data_claim(struct stratix10_svc_controller *ctrl, 187 + struct stratix10_svc_data *p_data, 188 + struct stratix10_svc_cb_data *cb_data) 189 + { 190 + struct arm_smccc_res res; 191 + unsigned long timeout; 192 + 193 + reinit_completion(&ctrl->complete_status); 194 + timeout = msecs_to_jiffies(FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS); 195 + 196 + pr_debug("%s: claim back the submitted buffer\n", __func__); 197 + do { 198 + ctrl->invoke_fn(INTEL_SIP_SMC_FPGA_CONFIG_COMPLETED_WRITE, 199 + 0, 0, 0, 0, 0, 0, 0, &res); 200 + 201 + if (res.a0 == INTEL_SIP_SMC_STATUS_OK) { 202 + if (!res.a1) { 203 + complete(&ctrl->complete_status); 204 + break; 205 + } 206 + cb_data->status = BIT(SVC_STATUS_RECONFIG_BUFFER_DONE); 207 + cb_data->kaddr1 = svc_pa_to_va(res.a1); 208 + cb_data->kaddr2 = (res.a2) ? 209 + svc_pa_to_va(res.a2) : NULL; 210 + cb_data->kaddr3 = (res.a3) ? 
211 + svc_pa_to_va(res.a3) : NULL; 212 + p_data->chan->scl->receive_cb(p_data->chan->scl, 213 + cb_data); 214 + } else { 215 + pr_debug("%s: secure world busy, polling again\n", 216 + __func__); 217 + } 218 + } while (res.a0 == INTEL_SIP_SMC_STATUS_OK || 219 + res.a0 == INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY || 220 + wait_for_completion_timeout(&ctrl->complete_status, timeout)); 221 + } 222 + 223 + /** 224 + * svc_thread_cmd_config_status() - check configuration status 225 + * @ctrl: pointer to service layer controller 226 + * @p_data: pointer to service data structure 227 + * @cb_data: pointer to callback data structure to service client 228 + * 229 + * Check whether the secure firmware at secure world has finished the FPGA 230 + * configuration, and then inform FPGA manager the configuration status. 231 + */ 232 + static void svc_thread_cmd_config_status(struct stratix10_svc_controller *ctrl, 233 + struct stratix10_svc_data *p_data, 234 + struct stratix10_svc_cb_data *cb_data) 235 + { 236 + struct arm_smccc_res res; 237 + int count_in_sec; 238 + 239 + cb_data->kaddr1 = NULL; 240 + cb_data->kaddr2 = NULL; 241 + cb_data->kaddr3 = NULL; 242 + cb_data->status = BIT(SVC_STATUS_RECONFIG_ERROR); 243 + 244 + pr_debug("%s: polling config status\n", __func__); 245 + 246 + count_in_sec = FPGA_CONFIG_STATUS_TIMEOUT_SEC; 247 + while (count_in_sec) { 248 + ctrl->invoke_fn(INTEL_SIP_SMC_FPGA_CONFIG_ISDONE, 249 + 0, 0, 0, 0, 0, 0, 0, &res); 250 + if ((res.a0 == INTEL_SIP_SMC_STATUS_OK) || 251 + (res.a0 == INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR)) 252 + break; 253 + 254 + /* 255 + * configuration is still in progress, wait one second then 256 + * poll again 257 + */ 258 + msleep(1000); 259 + count_in_sec--; 260 + }; 261 + 262 + if (res.a0 == INTEL_SIP_SMC_STATUS_OK && count_in_sec) 263 + cb_data->status = BIT(SVC_STATUS_RECONFIG_COMPLETED); 264 + 265 + p_data->chan->scl->receive_cb(p_data->chan->scl, cb_data); 266 + } 267 + 268 + /** 269 + * svc_thread_recv_status_ok() - handle the 
successful status 270 + * @p_data: pointer to service data structure 271 + * @cb_data: pointer to callback data structure to service client 272 + * @res: result from SMC or HVC call 273 + * 274 + * Send back the corresponding status to the service clients. 275 + */ 276 + static void svc_thread_recv_status_ok(struct stratix10_svc_data *p_data, 277 + struct stratix10_svc_cb_data *cb_data, 278 + struct arm_smccc_res res) 279 + { 280 + cb_data->kaddr1 = NULL; 281 + cb_data->kaddr2 = NULL; 282 + cb_data->kaddr3 = NULL; 283 + 284 + switch (p_data->command) { 285 + case COMMAND_RECONFIG: 286 + cb_data->status = BIT(SVC_STATUS_RECONFIG_REQUEST_OK); 287 + break; 288 + case COMMAND_RECONFIG_DATA_SUBMIT: 289 + cb_data->status = BIT(SVC_STATUS_RECONFIG_BUFFER_SUBMITTED); 290 + break; 291 + case COMMAND_NOOP: 292 + cb_data->status = BIT(SVC_STATUS_RECONFIG_BUFFER_SUBMITTED); 293 + cb_data->kaddr1 = svc_pa_to_va(res.a1); 294 + break; 295 + case COMMAND_RECONFIG_STATUS: 296 + cb_data->status = BIT(SVC_STATUS_RECONFIG_COMPLETED); 297 + break; 298 + case COMMAND_RSU_UPDATE: 299 + cb_data->status = BIT(SVC_STATUS_RSU_OK); 300 + break; 301 + default: 302 + pr_warn("it shouldn't happen\n"); 303 + break; 304 + } 305 + 306 + pr_debug("%s: call receive_cb\n", __func__); 307 + p_data->chan->scl->receive_cb(p_data->chan->scl, cb_data); 308 + } 309 + 310 + /** 311 + * svc_normal_to_secure_thread() - the function to run in the kthread 312 + * @data: data pointer for kthread function 313 + * 314 + * Service layer driver creates stratix10_svc_smc_hvc_call kthread on CPU 315 + * node 0, its function stratix10_svc_secure_call_thread is used to handle 316 + * SMC or HVC calls between kernel driver and secure monitor software. 317 + * 318 + * Return: 0 for success or -ENOMEM on error. 
319 + */ 320 + static int svc_normal_to_secure_thread(void *data) 321 + { 322 + struct stratix10_svc_controller 323 + *ctrl = (struct stratix10_svc_controller *)data; 324 + struct stratix10_svc_data *pdata; 325 + struct stratix10_svc_cb_data *cbdata; 326 + struct arm_smccc_res res; 327 + unsigned long a0, a1, a2; 328 + int ret_fifo = 0; 329 + 330 + pdata = kmalloc(sizeof(*pdata), GFP_KERNEL); 331 + if (!pdata) 332 + return -ENOMEM; 333 + 334 + cbdata = kmalloc(sizeof(*cbdata), GFP_KERNEL); 335 + if (!cbdata) { 336 + kfree(pdata); 337 + return -ENOMEM; 338 + } 339 + 340 + /* default set, to remove build warning */ 341 + a0 = INTEL_SIP_SMC_FPGA_CONFIG_LOOPBACK; 342 + a1 = 0; 343 + a2 = 0; 344 + 345 + pr_debug("smc_hvc_shm_thread is running\n"); 346 + 347 + while (!kthread_should_stop()) { 348 + ret_fifo = kfifo_out_spinlocked(&ctrl->svc_fifo, 349 + pdata, sizeof(*pdata), 350 + &ctrl->svc_fifo_lock); 351 + 352 + if (!ret_fifo) 353 + continue; 354 + 355 + pr_debug("get from FIFO pa=0x%016x, command=%u, size=%u\n", 356 + (unsigned int)pdata->paddr, pdata->command, 357 + (unsigned int)pdata->size); 358 + 359 + switch (pdata->command) { 360 + case COMMAND_RECONFIG_DATA_CLAIM: 361 + svc_thread_cmd_data_claim(ctrl, pdata, cbdata); 362 + continue; 363 + case COMMAND_RECONFIG: 364 + a0 = INTEL_SIP_SMC_FPGA_CONFIG_START; 365 + pr_debug("conf_type=%u\n", (unsigned int)pdata->flag); 366 + a1 = pdata->flag; 367 + a2 = 0; 368 + break; 369 + case COMMAND_RECONFIG_DATA_SUBMIT: 370 + a0 = INTEL_SIP_SMC_FPGA_CONFIG_WRITE; 371 + a1 = (unsigned long)pdata->paddr; 372 + a2 = (unsigned long)pdata->size; 373 + break; 374 + case COMMAND_RECONFIG_STATUS: 375 + a0 = INTEL_SIP_SMC_FPGA_CONFIG_ISDONE; 376 + a1 = 0; 377 + a2 = 0; 378 + break; 379 + case COMMAND_RSU_STATUS: 380 + a0 = INTEL_SIP_SMC_RSU_STATUS; 381 + a1 = 0; 382 + a2 = 0; 383 + break; 384 + case COMMAND_RSU_UPDATE: 385 + a0 = INTEL_SIP_SMC_RSU_UPDATE; 386 + a1 = pdata->arg[0]; 387 + a2 = 0; 388 + break; 389 + default: 390 + 
pr_warn("it shouldn't happen\n"); 391 + break; 392 + } 393 + pr_debug("%s: before SMC call -- a0=0x%016x a1=0x%016x", 394 + __func__, (unsigned int)a0, (unsigned int)a1); 395 + pr_debug(" a2=0x%016x\n", (unsigned int)a2); 396 + 397 + ctrl->invoke_fn(a0, a1, a2, 0, 0, 0, 0, 0, &res); 398 + 399 + pr_debug("%s: after SMC call -- res.a0=0x%016x", 400 + __func__, (unsigned int)res.a0); 401 + pr_debug(" res.a1=0x%016x, res.a2=0x%016x", 402 + (unsigned int)res.a1, (unsigned int)res.a2); 403 + pr_debug(" res.a3=0x%016x\n", (unsigned int)res.a3); 404 + 405 + if (pdata->command == COMMAND_RSU_STATUS) { 406 + if (res.a0 == INTEL_SIP_SMC_RSU_ERROR) 407 + cbdata->status = BIT(SVC_STATUS_RSU_ERROR); 408 + else 409 + cbdata->status = BIT(SVC_STATUS_RSU_OK); 410 + 411 + cbdata->kaddr1 = &res; 412 + cbdata->kaddr2 = NULL; 413 + cbdata->kaddr3 = NULL; 414 + pdata->chan->scl->receive_cb(pdata->chan->scl, cbdata); 415 + continue; 416 + } 417 + 418 + switch (res.a0) { 419 + case INTEL_SIP_SMC_STATUS_OK: 420 + svc_thread_recv_status_ok(pdata, cbdata, res); 421 + break; 422 + case INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY: 423 + switch (pdata->command) { 424 + case COMMAND_RECONFIG_DATA_SUBMIT: 425 + svc_thread_cmd_data_claim(ctrl, 426 + pdata, cbdata); 427 + break; 428 + case COMMAND_RECONFIG_STATUS: 429 + svc_thread_cmd_config_status(ctrl, 430 + pdata, cbdata); 431 + break; 432 + default: 433 + pr_warn("it shouldn't happen\n"); 434 + break; 435 + } 436 + break; 437 + case INTEL_SIP_SMC_FPGA_CONFIG_STATUS_REJECTED: 438 + pr_debug("%s: STATUS_REJECTED\n", __func__); 439 + break; 440 + case INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR: 441 + pr_err("%s: STATUS_ERROR\n", __func__); 442 + cbdata->status = BIT(SVC_STATUS_RECONFIG_ERROR); 443 + cbdata->kaddr1 = NULL; 444 + cbdata->kaddr2 = NULL; 445 + cbdata->kaddr3 = NULL; 446 + pdata->chan->scl->receive_cb(pdata->chan->scl, cbdata); 447 + break; 448 + default: 449 + pr_warn("it shouldn't happen\n"); 450 + break; 451 + } 452 + }; 453 + 454 + 
kfree(cbdata); 455 + kfree(pdata); 456 + 457 + return 0; 458 + } 459 + 460 + /** 461 + * svc_normal_to_secure_shm_thread() - the function to run in the kthread 462 + * @data: data pointer for kthread function 463 + * 464 + * Service layer driver creates stratix10_svc_smc_hvc_shm kthread on CPU 465 + * node 0, its function stratix10_svc_secure_shm_thread is used to query the 466 + * physical address of memory block reserved by secure monitor software at 467 + * secure world. 468 + * 469 + * svc_normal_to_secure_shm_thread() calls do_exit() directly since it is a 470 + * standalone thread for which no one will call kthread_stop() or return when 471 + * 'kthread_should_stop()' is true. 472 + */ 473 + static int svc_normal_to_secure_shm_thread(void *data) 474 + { 475 + struct stratix10_svc_sh_memory 476 + *sh_mem = (struct stratix10_svc_sh_memory *)data; 477 + struct arm_smccc_res res; 478 + 479 + /* SMC or HVC call to get shared memory info from secure world */ 480 + sh_mem->invoke_fn(INTEL_SIP_SMC_FPGA_CONFIG_GET_MEM, 481 + 0, 0, 0, 0, 0, 0, 0, &res); 482 + if (res.a0 == INTEL_SIP_SMC_STATUS_OK) { 483 + sh_mem->addr = res.a1; 484 + sh_mem->size = res.a2; 485 + } else { 486 + pr_err("%s: after SMC call -- res.a0=0x%016x", __func__, 487 + (unsigned int)res.a0); 488 + sh_mem->addr = 0; 489 + sh_mem->size = 0; 490 + } 491 + 492 + complete(&sh_mem->sync_complete); 493 + do_exit(0); 494 + } 495 + 496 + /** 497 + * svc_get_sh_memory() - get memory block reserved by secure monitor SW 498 + * @pdev: pointer to service layer device 499 + * @sh_memory: pointer to service shared memory structure 500 + * 501 + * Return: zero for successfully getting the physical address of memory block 502 + * reserved by secure monitor software, or negative value on error. 
503 + */ 504 + static int svc_get_sh_memory(struct platform_device *pdev, 505 + struct stratix10_svc_sh_memory *sh_memory) 506 + { 507 + struct device *dev = &pdev->dev; 508 + struct task_struct *sh_memory_task; 509 + unsigned int cpu = 0; 510 + 511 + init_completion(&sh_memory->sync_complete); 512 + 513 + /* smc or hvc call happens on cpu 0 bound kthread */ 514 + sh_memory_task = kthread_create_on_node(svc_normal_to_secure_shm_thread, 515 + (void *)sh_memory, 516 + cpu_to_node(cpu), 517 + "svc_smc_hvc_shm_thread"); 518 + if (IS_ERR(sh_memory_task)) { 519 + dev_err(dev, "fail to create stratix10_svc_smc_shm_thread\n"); 520 + return -EINVAL; 521 + } 522 + 523 + wake_up_process(sh_memory_task); 524 + 525 + if (!wait_for_completion_timeout(&sh_memory->sync_complete, 10 * HZ)) { 526 + dev_err(dev, 527 + "timeout to get sh-memory paras from secure world\n"); 528 + return -ETIMEDOUT; 529 + } 530 + 531 + if (!sh_memory->addr || !sh_memory->size) { 532 + dev_err(dev, 533 + "fails to get shared memory info from secure world\n"); 534 + return -ENOMEM; 535 + } 536 + 537 + dev_dbg(dev, "SM software provides paddr: 0x%016x, size: 0x%08x\n", 538 + (unsigned int)sh_memory->addr, 539 + (unsigned int)sh_memory->size); 540 + 541 + return 0; 542 + } 543 + 544 + /** 545 + * svc_create_memory_pool() - create a memory pool from reserved memory block 546 + * @pdev: pointer to service layer device 547 + * @sh_memory: pointer to service shared memory structure 548 + * 549 + * Return: pool allocated from reserved memory block or ERR_PTR() on error. 
550 + */ 551 + static struct gen_pool * 552 + svc_create_memory_pool(struct platform_device *pdev, 553 + struct stratix10_svc_sh_memory *sh_memory) 554 + { 555 + struct device *dev = &pdev->dev; 556 + struct gen_pool *genpool; 557 + unsigned long vaddr; 558 + phys_addr_t paddr; 559 + size_t size; 560 + phys_addr_t begin; 561 + phys_addr_t end; 562 + void *va; 563 + size_t page_mask = PAGE_SIZE - 1; 564 + int min_alloc_order = 3; 565 + int ret; 566 + 567 + begin = roundup(sh_memory->addr, PAGE_SIZE); 568 + end = rounddown(sh_memory->addr + sh_memory->size, PAGE_SIZE); 569 + paddr = begin; 570 + size = end - begin; 571 + va = memremap(paddr, size, MEMREMAP_WC); 572 + if (!va) { 573 + dev_err(dev, "fail to remap shared memory\n"); 574 + return ERR_PTR(-EINVAL); 575 + } 576 + vaddr = (unsigned long)va; 577 + dev_dbg(dev, 578 + "reserved memory vaddr: %p, paddr: 0x%16x size: 0x%8x\n", 579 + va, (unsigned int)paddr, (unsigned int)size); 580 + if ((vaddr & page_mask) || (paddr & page_mask) || 581 + (size & page_mask)) { 582 + dev_err(dev, "page is not aligned\n"); 583 + return ERR_PTR(-EINVAL); 584 + } 585 + genpool = gen_pool_create(min_alloc_order, -1); 586 + if (!genpool) { 587 + dev_err(dev, "fail to create genpool\n"); 588 + return ERR_PTR(-ENOMEM); 589 + } 590 + gen_pool_set_algo(genpool, gen_pool_best_fit, NULL); 591 + ret = gen_pool_add_virt(genpool, vaddr, paddr, size, -1); 592 + if (ret) { 593 + dev_err(dev, "fail to add memory chunk to the pool\n"); 594 + gen_pool_destroy(genpool); 595 + return ERR_PTR(ret); 596 + } 597 + 598 + return genpool; 599 + } 600 + 601 + /** 602 + * svc_smccc_smc() - secure monitor call between normal and secure world 603 + * @a0: argument passed in registers 0 604 + * @a1: argument passed in registers 1 605 + * @a2: argument passed in registers 2 606 + * @a3: argument passed in registers 3 607 + * @a4: argument passed in registers 4 608 + * @a5: argument passed in registers 5 609 + * @a6: argument passed in registers 6 610 + * @a7: 
argument passed in registers 7 611 + * @res: result values from register 0 to 3 612 + */ 613 + static void svc_smccc_smc(unsigned long a0, unsigned long a1, 614 + unsigned long a2, unsigned long a3, 615 + unsigned long a4, unsigned long a5, 616 + unsigned long a6, unsigned long a7, 617 + struct arm_smccc_res *res) 618 + { 619 + arm_smccc_smc(a0, a1, a2, a3, a4, a5, a6, a7, res); 620 + } 621 + 622 + /** 623 + * svc_smccc_hvc() - hypervisor call between normal and secure world 624 + * @a0: argument passed in registers 0 625 + * @a1: argument passed in registers 1 626 + * @a2: argument passed in registers 2 627 + * @a3: argument passed in registers 3 628 + * @a4: argument passed in registers 4 629 + * @a5: argument passed in registers 5 630 + * @a6: argument passed in registers 6 631 + * @a7: argument passed in registers 7 632 + * @res: result values from register 0 to 3 633 + */ 634 + static void svc_smccc_hvc(unsigned long a0, unsigned long a1, 635 + unsigned long a2, unsigned long a3, 636 + unsigned long a4, unsigned long a5, 637 + unsigned long a6, unsigned long a7, 638 + struct arm_smccc_res *res) 639 + { 640 + arm_smccc_hvc(a0, a1, a2, a3, a4, a5, a6, a7, res); 641 + } 642 + 643 + /** 644 + * get_invoke_func() - invoke SMC or HVC call 645 + * @dev: pointer to device 646 + * 647 + * Return: function pointer to svc_smccc_smc or svc_smccc_hvc. 
648 + */ 649 + static svc_invoke_fn *get_invoke_func(struct device *dev) 650 + { 651 + const char *method; 652 + 653 + if (of_property_read_string(dev->of_node, "method", &method)) { 654 + dev_warn(dev, "missing \"method\" property\n"); 655 + return ERR_PTR(-ENXIO); 656 + } 657 + 658 + if (!strcmp(method, "smc")) 659 + return svc_smccc_smc; 660 + if (!strcmp(method, "hvc")) 661 + return svc_smccc_hvc; 662 + 663 + dev_warn(dev, "invalid \"method\" property: %s\n", method); 664 + 665 + return ERR_PTR(-EINVAL); 666 + } 667 + 668 + /** 669 + * stratix10_svc_request_channel_byname() - request a service channel 670 + * @client: pointer to service client 671 + * @name: service client name 672 + * 673 + * This function is used by service client to request a service channel. 674 + * 675 + * Return: a pointer to channel assigned to the client on success, 676 + * or ERR_PTR() on error. 677 + */ 678 + struct stratix10_svc_chan *stratix10_svc_request_channel_byname( 679 + struct stratix10_svc_client *client, const char *name) 680 + { 681 + struct device *dev = client->dev; 682 + struct stratix10_svc_controller *controller; 683 + struct stratix10_svc_chan *chan = NULL; 684 + unsigned long flag; 685 + int i; 686 + 687 + /* if probe was called after client's, or error on probe */ 688 + if (list_empty(&svc_ctrl)) 689 + return ERR_PTR(-EPROBE_DEFER); 690 + 691 + controller = list_first_entry(&svc_ctrl, 692 + struct stratix10_svc_controller, node); 693 + for (i = 0; i < SVC_NUM_CHANNEL; i++) { 694 + if (!strcmp(controller->chans[i].name, name)) { 695 + chan = &controller->chans[i]; 696 + break; 697 + } 698 + } 699 + 700 + /* if there was no channel match */ 701 + if (i == SVC_NUM_CHANNEL) { 702 + dev_err(dev, "%s: channel not allocated\n", __func__); 703 + return ERR_PTR(-EINVAL); 704 + } 705 + 706 + if (chan->scl || !try_module_get(controller->dev->driver->owner)) { 707 + dev_dbg(dev, "%s: svc not free\n", __func__); 708 + return ERR_PTR(-EBUSY); 709 + } 710 + 711 + 
spin_lock_irqsave(&chan->lock, flag); 712 + chan->scl = client; 713 + chan->ctrl->num_active_client++; 714 + spin_unlock_irqrestore(&chan->lock, flag); 715 + 716 + return chan; 717 + } 718 + EXPORT_SYMBOL_GPL(stratix10_svc_request_channel_byname); 719 + 720 + /** 721 + * stratix10_svc_free_channel() - free service channel 722 + * @chan: service channel to be freed 723 + * 724 + * This function is used by service client to free a service channel. 725 + */ 726 + void stratix10_svc_free_channel(struct stratix10_svc_chan *chan) 727 + { 728 + unsigned long flag; 729 + 730 + spin_lock_irqsave(&chan->lock, flag); 731 + chan->scl = NULL; 732 + chan->ctrl->num_active_client--; 733 + module_put(chan->ctrl->dev->driver->owner); 734 + spin_unlock_irqrestore(&chan->lock, flag); 735 + } 736 + EXPORT_SYMBOL_GPL(stratix10_svc_free_channel); 737 + 738 + /** 739 + * stratix10_svc_send() - send a message data to the remote 740 + * @chan: service channel assigned to the client 741 + * @msg: message data to be sent, in the format of 742 + * "struct stratix10_svc_client_msg" 743 + * 744 + * This function is used by service client to add a message to the service 745 + * layer driver's queue for being sent to the secure world. 746 + * 747 + * Return: 0 for success, -ENOMEM or -ENOBUFS on error. 
748 + */ 749 + int stratix10_svc_send(struct stratix10_svc_chan *chan, void *msg) 750 + { 751 + struct stratix10_svc_client_msg 752 + *p_msg = (struct stratix10_svc_client_msg *)msg; 753 + struct stratix10_svc_data_mem *p_mem; 754 + struct stratix10_svc_data *p_data; 755 + int ret = 0; 756 + unsigned int cpu = 0; 757 + 758 + p_data = kzalloc(sizeof(*p_data), GFP_KERNEL); 759 + if (!p_data) 760 + return -ENOMEM; 761 + 762 + /* first client will create kernel thread */ 763 + if (!chan->ctrl->task) { 764 + chan->ctrl->task = 765 + kthread_create_on_node(svc_normal_to_secure_thread, 766 + (void *)chan->ctrl, 767 + cpu_to_node(cpu), 768 + "svc_smc_hvc_thread"); 769 + if (IS_ERR(chan->ctrl->task)) { 770 + dev_err(chan->ctrl->dev, 771 + "fails to create svc_smc_hvc_thread\n"); 772 + kfree(p_data); 773 + return -EINVAL; 774 + } 775 + kthread_bind(chan->ctrl->task, cpu); 776 + wake_up_process(chan->ctrl->task); 777 + } 778 + 779 + pr_debug("%s: sent P-va=%p, P-com=%x, P-size=%u\n", __func__, 780 + p_msg->payload, p_msg->command, 781 + (unsigned int)p_msg->payload_length); 782 + 783 + if (list_empty(&svc_data_mem)) { 784 + if (p_msg->command == COMMAND_RECONFIG) { 785 + struct stratix10_svc_command_config_type *ct = 786 + (struct stratix10_svc_command_config_type *) 787 + p_msg->payload; 788 + p_data->flag = ct->flags; 789 + } 790 + } else { 791 + list_for_each_entry(p_mem, &svc_data_mem, node) 792 + if (p_mem->vaddr == p_msg->payload) { 793 + p_data->paddr = p_mem->paddr; 794 + break; 795 + } 796 + } 797 + 798 + p_data->command = p_msg->command; 799 + p_data->arg[0] = p_msg->arg[0]; 800 + p_data->arg[1] = p_msg->arg[1]; 801 + p_data->arg[2] = p_msg->arg[2]; 802 + p_data->size = p_msg->payload_length; 803 + p_data->chan = chan; 804 + pr_debug("%s: put to FIFO pa=0x%016x, cmd=%x, size=%u\n", __func__, 805 + (unsigned int)p_data->paddr, p_data->command, 806 + (unsigned int)p_data->size); 807 + ret = kfifo_in_spinlocked(&chan->ctrl->svc_fifo, p_data, 808 + sizeof(*p_data), 809 
+ &chan->ctrl->svc_fifo_lock); 810 + 811 + kfree(p_data); 812 + 813 + if (!ret) 814 + return -ENOBUFS; 815 + 816 + return 0; 817 + } 818 + EXPORT_SYMBOL_GPL(stratix10_svc_send); 819 + 820 + /** 821 + * stratix10_svc_done() - complete service request transactions 822 + * @chan: service channel assigned to the client 823 + * 824 + * This function should be called when client has finished its request 825 + * or there is an error in the request process. It allows the service layer 826 + * to stop the running thread to have maximize savings in kernel resources. 827 + */ 828 + void stratix10_svc_done(struct stratix10_svc_chan *chan) 829 + { 830 + /* stop thread when thread is running AND only one active client */ 831 + if (chan->ctrl->task && chan->ctrl->num_active_client <= 1) { 832 + pr_debug("svc_smc_hvc_shm_thread is stopped\n"); 833 + kthread_stop(chan->ctrl->task); 834 + chan->ctrl->task = NULL; 835 + } 836 + } 837 + EXPORT_SYMBOL_GPL(stratix10_svc_done); 838 + 839 + /** 840 + * stratix10_svc_allocate_memory() - allocate memory 841 + * @chan: service channel assigned to the client 842 + * @size: memory size requested by a specific service client 843 + * 844 + * Service layer allocates the requested number of bytes buffer from the 845 + * memory pool, service client uses this function to get allocated buffers. 846 + * 847 + * Return: address of allocated memory on success, or ERR_PTR() on error. 
848 + */ 849 + void *stratix10_svc_allocate_memory(struct stratix10_svc_chan *chan, 850 + size_t size) 851 + { 852 + struct stratix10_svc_data_mem *pmem; 853 + unsigned long va; 854 + phys_addr_t pa; 855 + struct gen_pool *genpool = chan->ctrl->genpool; 856 + size_t s = roundup(size, 1 << genpool->min_alloc_order); 857 + 858 + pmem = devm_kzalloc(chan->ctrl->dev, sizeof(*pmem), GFP_KERNEL); 859 + if (!pmem) 860 + return ERR_PTR(-ENOMEM); 861 + 862 + va = gen_pool_alloc(genpool, s); 863 + if (!va) 864 + return ERR_PTR(-ENOMEM); 865 + 866 + memset((void *)va, 0, s); 867 + pa = gen_pool_virt_to_phys(genpool, va); 868 + 869 + pmem->vaddr = (void *)va; 870 + pmem->paddr = pa; 871 + pmem->size = s; 872 + list_add_tail(&pmem->node, &svc_data_mem); 873 + pr_debug("%s: va=%p, pa=0x%016x\n", __func__, 874 + pmem->vaddr, (unsigned int)pmem->paddr); 875 + 876 + return (void *)va; 877 + } 878 + EXPORT_SYMBOL_GPL(stratix10_svc_allocate_memory); 879 + 880 + /** 881 + * stratix10_svc_free_memory() - free allocated memory 882 + * @chan: service channel assigned to the client 883 + * @kaddr: memory to be freed 884 + * 885 + * This function is used by service client to free allocated buffers. 
886 + */ 887 + void stratix10_svc_free_memory(struct stratix10_svc_chan *chan, void *kaddr) 888 + { 889 + struct stratix10_svc_data_mem *pmem; 890 + size_t size = 0; 891 + 892 + list_for_each_entry(pmem, &svc_data_mem, node) 893 + if (pmem->vaddr == kaddr) { 894 + size = pmem->size; 895 + break; 896 + } 897 + 898 + gen_pool_free(chan->ctrl->genpool, (unsigned long)kaddr, size); 899 + pmem->vaddr = NULL; 900 + list_del(&pmem->node); 901 + } 902 + EXPORT_SYMBOL_GPL(stratix10_svc_free_memory); 903 + 904 + static const struct of_device_id stratix10_svc_drv_match[] = { 905 + {.compatible = "intel,stratix10-svc"}, 906 + {}, 907 + }; 908 + 909 + static int stratix10_svc_drv_probe(struct platform_device *pdev) 910 + { 911 + struct device *dev = &pdev->dev; 912 + struct stratix10_svc_controller *controller; 913 + struct stratix10_svc_chan *chans; 914 + struct gen_pool *genpool; 915 + struct stratix10_svc_sh_memory *sh_memory; 916 + svc_invoke_fn *invoke_fn; 917 + size_t fifo_size; 918 + int ret; 919 + 920 + /* get SMC or HVC function */ 921 + invoke_fn = get_invoke_func(dev); 922 + if (IS_ERR(invoke_fn)) 923 + return -EINVAL; 924 + 925 + sh_memory = devm_kzalloc(dev, sizeof(*sh_memory), GFP_KERNEL); 926 + if (!sh_memory) 927 + return -ENOMEM; 928 + 929 + sh_memory->invoke_fn = invoke_fn; 930 + ret = svc_get_sh_memory(pdev, sh_memory); 931 + if (ret) 932 + return ret; 933 + 934 + genpool = svc_create_memory_pool(pdev, sh_memory); 935 + if (!genpool) 936 + return -ENOMEM; 937 + 938 + /* allocate service controller and supporting channel */ 939 + controller = devm_kzalloc(dev, sizeof(*controller), GFP_KERNEL); 940 + if (!controller) 941 + return -ENOMEM; 942 + 943 + chans = devm_kmalloc_array(dev, SVC_NUM_CHANNEL, 944 + sizeof(*chans), GFP_KERNEL | __GFP_ZERO); 945 + if (!chans) 946 + return -ENOMEM; 947 + 948 + controller->dev = dev; 949 + controller->num_chans = SVC_NUM_CHANNEL; 950 + controller->num_active_client = 0; 951 + controller->chans = chans; 952 + 
controller->genpool = genpool; 953 + controller->task = NULL; 954 + controller->invoke_fn = invoke_fn; 955 + init_completion(&controller->complete_status); 956 + 957 + fifo_size = sizeof(struct stratix10_svc_data) * SVC_NUM_DATA_IN_FIFO; 958 + ret = kfifo_alloc(&controller->svc_fifo, fifo_size, GFP_KERNEL); 959 + if (ret) { 960 + dev_err(dev, "fails to allocate FIFO\n"); 961 + return ret; 962 + } 963 + spin_lock_init(&controller->svc_fifo_lock); 964 + 965 + chans[0].scl = NULL; 966 + chans[0].ctrl = controller; 967 + chans[0].name = SVC_CLIENT_FPGA; 968 + spin_lock_init(&chans[0].lock); 969 + 970 + chans[1].scl = NULL; 971 + chans[1].ctrl = controller; 972 + chans[1].name = SVC_CLIENT_RSU; 973 + spin_lock_init(&chans[1].lock); 974 + 975 + list_add_tail(&controller->node, &svc_ctrl); 976 + platform_set_drvdata(pdev, controller); 977 + 978 + pr_info("Intel Service Layer Driver Initialized\n"); 979 + 980 + return ret; 981 + } 982 + 983 + static int stratix10_svc_drv_remove(struct platform_device *pdev) 984 + { 985 + struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev); 986 + 987 + kfifo_free(&ctrl->svc_fifo); 988 + if (ctrl->task) { 989 + kthread_stop(ctrl->task); 990 + ctrl->task = NULL; 991 + } 992 + if (ctrl->genpool) 993 + gen_pool_destroy(ctrl->genpool); 994 + list_del(&ctrl->node); 995 + 996 + return 0; 997 + } 998 + 999 + static struct platform_driver stratix10_svc_driver = { 1000 + .probe = stratix10_svc_drv_probe, 1001 + .remove = stratix10_svc_drv_remove, 1002 + .driver = { 1003 + .name = "stratix10-svc", 1004 + .of_match_table = stratix10_svc_drv_match, 1005 + }, 1006 + }; 1007 + 1008 + static int __init stratix10_svc_init(void) 1009 + { 1010 + struct device_node *fw_np; 1011 + struct device_node *np; 1012 + int ret; 1013 + 1014 + fw_np = of_find_node_by_name(NULL, "firmware"); 1015 + if (!fw_np) 1016 + return -ENODEV; 1017 + 1018 + np = of_find_matching_node(fw_np, stratix10_svc_drv_match); 1019 + if (!np) 1020 + return -ENODEV; 1021 + 1022 + 
of_node_put(np); 1023 + ret = of_platform_populate(fw_np, stratix10_svc_drv_match, NULL, NULL); 1024 + if (ret) 1025 + return ret; 1026 + 1027 + return platform_driver_register(&stratix10_svc_driver); 1028 + } 1029 + 1030 + static void __exit stratix10_svc_exit(void) 1031 + { 1032 + return platform_driver_unregister(&stratix10_svc_driver); 1033 + } 1034 + 1035 + subsys_initcall(stratix10_svc_init); 1036 + module_exit(stratix10_svc_exit); 1037 + 1038 + MODULE_LICENSE("GPL v2"); 1039 + MODULE_DESCRIPTION("Intel Stratix10 Service Layer Driver"); 1040 + MODULE_AUTHOR("Richard Gong <richard.gong@intel.com>"); 1041 + MODULE_ALIAS("platform:stratix10-svc");
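stratix10_svc_send() above copies each request struct by value into a fixed-size kfifo with kfifo_in_spinlocked(), and maps a return of 0 (no room) to -ENOBUFS. A minimal userspace sketch of that bounded-queue behavior, with illustrative names (svc_data, svc_fifo_in, svc_send are stand-ins, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* stand-in for the kernel's -ENOBUFS */
#define ENOBUFS_ERR (-105)
#define FIFO_SLOTS 4

struct svc_data {
	int command;
	unsigned long paddr;
	size_t size;
};

struct svc_fifo {
	struct svc_data buf[FIFO_SLOTS];
	unsigned int in;	/* number of queued elements */
};

/* returns bytes copied in: sizeof(*d) on success, 0 when the FIFO is full */
static size_t svc_fifo_in(struct svc_fifo *f, const struct svc_data *d)
{
	if (f->in == FIFO_SLOTS)
		return 0;
	f->buf[f->in++] = *d;
	return sizeof(*d);
}

/* send path: queue a request by value, translating "no room" to -ENOBUFS */
static int svc_send(struct svc_fifo *f, int command)
{
	struct svc_data d = { .command = command };

	if (!svc_fifo_in(f, &d))
		return ENOBUFS_ERR;
	return 0;
}
```

Because the element is copied into the FIFO, the caller can free its own request struct immediately after queuing, which is exactly what the driver does with kfree(p_data).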
+6
drivers/fpga/Kconfig
···
 	help
 	  FPGA manager driver support for Xilinx Zynq FPGAs.
 
+config FPGA_MGR_STRATIX10_SOC
+	tristate "Intel Stratix10 SoC FPGA Manager"
+	depends on (ARCH_STRATIX10 && INTEL_STRATIX10_SERVICE)
+	help
+	  FPGA manager driver support for the Intel Stratix10 SoC.
+
 config FPGA_MGR_XILINX_SPI
 	tristate "Xilinx Configuration over Slave Serial (SPI)"
 	depends on SPI
+1
drivers/fpga/Makefile
···
 obj-$(CONFIG_FPGA_MGR_MACHXO2_SPI)	+= machxo2-spi.o
 obj-$(CONFIG_FPGA_MGR_SOCFPGA)		+= socfpga.o
 obj-$(CONFIG_FPGA_MGR_SOCFPGA_A10)	+= socfpga-a10.o
+obj-$(CONFIG_FPGA_MGR_STRATIX10_SOC)	+= stratix10-soc.o
 obj-$(CONFIG_FPGA_MGR_TS73XX)		+= ts73xx-fpga.o
 obj-$(CONFIG_FPGA_MGR_XILINX_SPI)	+= xilinx-spi.o
 obj-$(CONFIG_FPGA_MGR_ZYNQ_FPGA)	+= zynq-fpga.o
+37 -12
drivers/fpga/altera-cvp.c
···
 	struct altera_cvp_conf *conf;
 	struct fpga_manager *mgr;
 	u16 cmd, val;
+	u32 regval;
 	int ret;
 
 	/*
···
 	pci_read_config_word(pdev, VSE_PCIE_EXT_CAP_ID, &val);
 	if (val != VSE_PCIE_EXT_CAP_ID_VAL) {
 		dev_err(&pdev->dev, "Wrong EXT_CAP_ID value 0x%x\n", val);
+		return -ENODEV;
+	}
+
+	pci_read_config_dword(pdev, VSE_CVP_STATUS, &regval);
+	if (!(regval & VSE_CVP_STATUS_CVP_EN)) {
+		dev_err(&pdev->dev,
+			"CVP is disabled for this device: CVP_STATUS Reg 0x%x\n",
+			regval);
 		return -ENODEV;
 	}
 
···
 	if (ret)
 		goto err_unmap;
 
-	ret = driver_create_file(&altera_cvp_driver.driver,
-				 &driver_attr_chkcfg);
-	if (ret) {
-		dev_err(&pdev->dev, "Can't create sysfs chkcfg file\n");
-		fpga_mgr_unregister(mgr);
-		goto err_unmap;
-	}
-
 	return 0;
 
 err_unmap:
-	pci_iounmap(pdev, conf->map);
+	if (conf->map)
+		pci_iounmap(pdev, conf->map);
 	pci_release_region(pdev, CVP_BAR);
 err_disable:
 	cmd &= ~PCI_COMMAND_MEMORY;
···
 	struct altera_cvp_conf *conf = mgr->priv;
 	u16 cmd;
 
-	driver_remove_file(&altera_cvp_driver.driver, &driver_attr_chkcfg);
 	fpga_mgr_unregister(mgr);
-	pci_iounmap(pdev, conf->map);
+	if (conf->map)
+		pci_iounmap(pdev, conf->map);
 	pci_release_region(pdev, CVP_BAR);
 	pci_read_config_word(pdev, PCI_COMMAND, &cmd);
 	cmd &= ~PCI_COMMAND_MEMORY;
 	pci_write_config_word(pdev, PCI_COMMAND, cmd);
 }
 
-module_pci_driver(altera_cvp_driver);
+static int __init altera_cvp_init(void)
+{
+	int ret;
+
+	ret = pci_register_driver(&altera_cvp_driver);
+	if (ret)
+		return ret;
+
+	ret = driver_create_file(&altera_cvp_driver.driver,
+				 &driver_attr_chkcfg);
+	if (ret)
+		pr_warn("Can't create sysfs chkcfg file\n");
+
+	return 0;
+}
+
+static void __exit altera_cvp_exit(void)
+{
+	driver_remove_file(&altera_cvp_driver.driver, &driver_attr_chkcfg);
+	pci_unregister_driver(&altera_cvp_driver);
+}
+
+module_init(altera_cvp_init);
+module_exit(altera_cvp_exit);
 
 MODULE_LICENSE("GPL v2");
 MODULE_AUTHOR("Anatolij Gustschin <agust@denx.de>");
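The altera-cvp teardown paths above now guard pci_iounmap() with `if (conf->map)`, since the mapping may legitimately be NULL. A small userspace sketch of that NULL-guarded cleanup pattern; fake_iounmap and the counter are illustrative stand-ins so the behavior is observable, not the PCI API:

```c
#include <assert.h>
#include <stdlib.h>

struct cvp_conf {
	void *map;	/* optional mapping, may stay NULL */
};

static int unmap_calls;

/* stand-in for pci_iounmap() */
static void fake_iounmap(void *map)
{
	unmap_calls++;
	free(map);
}

/* teardown mirroring: if (conf->map) pci_iounmap(pdev, conf->map); */
static void cvp_teardown(struct cvp_conf *conf)
{
	if (conf->map) {
		fake_iounmap(conf->map);
		conf->map = NULL;
	}
}
```

Clearing the pointer after unmapping also makes the teardown idempotent, which keeps shared error paths like `err_unmap` safe to reach from states where the mapping was never established.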
+35 -5
drivers/fpga/altera-ps-spi.c
···
 	.t_st2ck_us = 10,	/* min(t_ST2CK) */
 };
 
+/* Array index is enum altera_ps_devtype */
+static const struct altera_ps_data *altera_ps_data_map[] = {
+	&c5_data,
+	&a10_data,
+};
+
 static const struct of_device_id of_ef_match[] = {
 	{ .compatible = "altr,fpga-passive-serial", .data = &c5_data },
 	{ .compatible = "altr,fpga-arria10-passive-serial", .data = &a10_data },
···
 	.write_complete = altera_ps_write_complete,
 };
 
+static const struct altera_ps_data *id_to_data(const struct spi_device_id *id)
+{
+	kernel_ulong_t devtype = id->driver_data;
+	const struct altera_ps_data *data;
+
+	/* someone added an altera_ps_devtype without adding it to the map array */
+	if (devtype >= ARRAY_SIZE(altera_ps_data_map))
+		return NULL;
+
+	data = altera_ps_data_map[devtype];
+	if (!data || data->devtype != devtype)
+		return NULL;
+
+	return data;
+}
+
 static int altera_ps_probe(struct spi_device *spi)
 {
 	struct altera_ps_conf *conf;
···
 	if (!conf)
 		return -ENOMEM;
 
-	of_id = of_match_device(of_ef_match, &spi->dev);
-	if (!of_id)
-		return -ENODEV;
+	if (spi->dev.of_node) {
+		of_id = of_match_device(of_ef_match, &spi->dev);
+		if (!of_id)
+			return -ENODEV;
+		conf->data = of_id->data;
+	} else {
+		conf->data = id_to_data(spi_get_device_id(spi));
+		if (!conf->data)
+			return -ENODEV;
+	}
 
-	conf->data = of_id->data;
 	conf->spi = spi;
 	conf->config = devm_gpiod_get(&spi->dev, "nconfig", GPIOD_OUT_LOW);
 	if (IS_ERR(conf->config)) {
···
 }
 
 static const struct spi_device_id altera_ps_spi_ids[] = {
-	{"cyclone-ps-spi", 0},
+	{ "cyclone-ps-spi", CYCLONE5 },
+	{ "fpga-passive-serial", CYCLONE5 },
+	{ "fpga-arria10-passive-serial", ARRIA10 },
 	{}
 };
MODULE_DEVICE_TABLE(spi, altera_ps_spi_ids);
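The id_to_data() helper added above treats the spi_device_id driver_data value as an index into a per-device data table, with a bounds check and a devtype cross-check so a table/enum mismatch fails closed instead of dereferencing the wrong entry. A userspace sketch of the same lookup; the structs and cfg_mode field are simplified stand-ins for the driver's:

```c
#include <assert.h>
#include <stddef.h>

enum ps_devtype {
	CYCLONE5,
	ARRIA10,
	NUM_DEVTYPES,
};

struct ps_data {
	enum ps_devtype devtype;
	int cfg_mode;		/* placeholder per-device parameter */
};

static const struct ps_data c5_data = { .devtype = CYCLONE5, .cfg_mode = 1 };
static const struct ps_data a10_data = { .devtype = ARRIA10, .cfg_mode = 2 };

/* array index must match enum ps_devtype */
static const struct ps_data *ps_data_map[NUM_DEVTYPES] = {
	&c5_data,
	&a10_data,
};

static const struct ps_data *id_to_data(unsigned long devtype)
{
	const struct ps_data *data;

	/* reject devtypes that were never added to the map */
	if (devtype >= NUM_DEVTYPES)
		return NULL;

	/* cross-check that the entry really describes this devtype */
	data = ps_data_map[devtype];
	if (!data || data->devtype != devtype)
		return NULL;

	return data;
}
```

The devtype field inside each entry is redundant on purpose: if someone reorders the enum or the table without keeping them in sync, the cross-check catches it at probe time rather than silently using the wrong timing data.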
-2
drivers/fpga/dfl-fme-pr.c
···
 				  struct dfl_feature *feature)
 {
 	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
-	struct dfl_fme *priv;
 
 	mutex_lock(&pdata->lock);
-	priv = dfl_fpga_pdata_get_private(pdata);
 
 	dfl_fme_destroy_regions(pdata);
 	dfl_fme_destroy_bridges(pdata);
+1 -1
drivers/fpga/dfl-fme-region.c
···
 
 static int fme_region_remove(struct platform_device *pdev)
 {
-	struct fpga_region *region = dev_get_drvdata(&pdev->dev);
+	struct fpga_region *region = platform_get_drvdata(pdev);
 	struct fpga_manager *mgr = region->mgr;
 
 	fpga_region_unregister(region);
+1 -1
drivers/fpga/of-fpga-region.c
···
 		goto eprobe_mgr_put;
 
 	of_platform_populate(np, fpga_region_of_match, NULL, &region->dev);
-	dev_set_drvdata(dev, region);
+	platform_set_drvdata(pdev, region);
 
 	dev_info(dev, "FPGA Region probed\n");
 
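The dfl-fme-region and of-fpga-region hunks swap dev_get_drvdata(&pdev->dev) and dev_set_drvdata(dev, ...) for the platform wrappers. A minimal model showing why this is a behavior-preserving cleanup: platform_set_drvdata() is just dev_set_drvdata() on the embedded struct device. The struct definitions here are simplified stand-ins for the kernel's:

```c
#include <assert.h>
#include <stddef.h>

struct device {
	void *driver_data;
};

struct platform_device {
	struct device dev;	/* embedded generic device */
};

static void dev_set_drvdata(struct device *dev, void *data)
{
	dev->driver_data = data;
}

static void *dev_get_drvdata(const struct device *dev)
{
	return dev->driver_data;
}

/* the platform wrappers simply forward to &pdev->dev */
static void platform_set_drvdata(struct platform_device *pdev, void *data)
{
	dev_set_drvdata(&pdev->dev, data);
}

static void *platform_get_drvdata(const struct platform_device *pdev)
{
	return dev_get_drvdata(&pdev->dev);
}
```

Using the platform-level accessors on a platform_device keeps the get and set sides symmetric, so a later change to how drvdata is stored cannot leave one side pointing at the wrong struct device.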
+535
drivers/fpga/stratix10-soc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * FPGA Manager Driver for Intel Stratix10 SoC 4 + * 5 + * Copyright (C) 2018 Intel Corporation 6 + */ 7 + #include <linux/completion.h> 8 + #include <linux/fpga/fpga-mgr.h> 9 + #include <linux/firmware/intel/stratix10-svc-client.h> 10 + #include <linux/module.h> 11 + #include <linux/of.h> 12 + #include <linux/of_platform.h> 13 + 14 + /* 15 + * FPGA programming requires a higher level of privilege (EL3), per the SoC 16 + * design. 17 + */ 18 + #define NUM_SVC_BUFS 4 19 + #define SVC_BUF_SIZE SZ_512K 20 + 21 + /* Indicates buffer is in use if set */ 22 + #define SVC_BUF_LOCK 0 23 + 24 + #define S10_BUFFER_TIMEOUT (msecs_to_jiffies(SVC_RECONFIG_BUFFER_TIMEOUT_MS)) 25 + #define S10_RECONFIG_TIMEOUT (msecs_to_jiffies(SVC_RECONFIG_REQUEST_TIMEOUT_MS)) 26 + 27 + /* 28 + * struct s10_svc_buf 29 + * buf: virtual address of buf provided by service layer 30 + * lock: locked if buffer is in use 31 + */ 32 + struct s10_svc_buf { 33 + char *buf; 34 + unsigned long lock; 35 + }; 36 + 37 + struct s10_priv { 38 + struct stratix10_svc_chan *chan; 39 + struct stratix10_svc_client client; 40 + struct completion status_return_completion; 41 + struct s10_svc_buf svc_bufs[NUM_SVC_BUFS]; 42 + unsigned long status; 43 + }; 44 + 45 + static int s10_svc_send_msg(struct s10_priv *priv, 46 + enum stratix10_svc_command_code command, 47 + void *payload, u32 payload_length) 48 + { 49 + struct stratix10_svc_chan *chan = priv->chan; 50 + struct device *dev = priv->client.dev; 51 + struct stratix10_svc_client_msg msg; 52 + int ret; 53 + 54 + dev_dbg(dev, "%s cmd=%d payload=%p length=%d\n", 55 + __func__, command, payload, payload_length); 56 + 57 + msg.command = command; 58 + msg.payload = payload; 59 + msg.payload_length = payload_length; 60 + 61 + ret = stratix10_svc_send(chan, &msg); 62 + dev_dbg(dev, "stratix10_svc_send returned status %d\n", ret); 63 + 64 + return ret; 65 + } 66 + 67 + /* 68 + * Free buffers allocated from the service 
layer's pool that are not in use. 69 + * Return true when all buffers are freed. 70 + */ 71 + static bool s10_free_buffers(struct fpga_manager *mgr) 72 + { 73 + struct s10_priv *priv = mgr->priv; 74 + uint num_free = 0; 75 + uint i; 76 + 77 + for (i = 0; i < NUM_SVC_BUFS; i++) { 78 + if (!priv->svc_bufs[i].buf) { 79 + num_free++; 80 + continue; 81 + } 82 + 83 + if (!test_and_set_bit_lock(SVC_BUF_LOCK, 84 + &priv->svc_bufs[i].lock)) { 85 + stratix10_svc_free_memory(priv->chan, 86 + priv->svc_bufs[i].buf); 87 + priv->svc_bufs[i].buf = NULL; 88 + num_free++; 89 + } 90 + } 91 + 92 + return num_free == NUM_SVC_BUFS; 93 + } 94 + 95 + /* 96 + * Returns count of how many buffers are not in use. 97 + */ 98 + static uint s10_free_buffer_count(struct fpga_manager *mgr) 99 + { 100 + struct s10_priv *priv = mgr->priv; 101 + uint num_free = 0; 102 + uint i; 103 + 104 + for (i = 0; i < NUM_SVC_BUFS; i++) 105 + if (!priv->svc_bufs[i].buf) 106 + num_free++; 107 + 108 + return num_free; 109 + } 110 + 111 + /* 112 + * s10_unlock_bufs 113 + * Given the returned buffer address, match that address to our buffer struct 114 + * and unlock that buffer. This marks it as available to be refilled and sent 115 + * (or freed). 116 + * priv: private data 117 + * kaddr: kernel address of buffer that was returned from service layer 118 + */ 119 + static void s10_unlock_bufs(struct s10_priv *priv, void *kaddr) 120 + { 121 + uint i; 122 + 123 + if (!kaddr) 124 + return; 125 + 126 + for (i = 0; i < NUM_SVC_BUFS; i++) 127 + if (priv->svc_bufs[i].buf == kaddr) { 128 + clear_bit_unlock(SVC_BUF_LOCK, 129 + &priv->svc_bufs[i].lock); 130 + return; 131 + } 132 + 133 + WARN(1, "Unknown buffer returned from service layer %p\n", kaddr); 134 + } 135 + 136 + /* 137 + * s10_receive_callback - callback for service layer to use to provide client 138 + * (this driver) messages received through the mailbox. 
139 + * client: service layer client struct 140 + * data: message from service layer 141 + */ 142 + static void s10_receive_callback(struct stratix10_svc_client *client, 143 + struct stratix10_svc_cb_data *data) 144 + { 145 + struct s10_priv *priv = client->priv; 146 + u32 status; 147 + int i; 148 + 149 + WARN_ONCE(!data, "%s: stratix10_svc_rc_data = NULL", __func__); 150 + 151 + status = data->status; 152 + 153 + /* 154 + * Here we set status bits as we receive them. Elsewhere, we always use 155 + * test_and_clear_bit() to check status in priv->status 156 + */ 157 + for (i = 0; i <= SVC_STATUS_RECONFIG_ERROR; i++) 158 + if (status & (1 << i)) 159 + set_bit(i, &priv->status); 160 + 161 + if (status & BIT(SVC_STATUS_RECONFIG_BUFFER_DONE)) { 162 + s10_unlock_bufs(priv, data->kaddr1); 163 + s10_unlock_bufs(priv, data->kaddr2); 164 + s10_unlock_bufs(priv, data->kaddr3); 165 + } 166 + 167 + complete(&priv->status_return_completion); 168 + } 169 + 170 + /* 171 + * s10_ops_write_init - prepare for FPGA reconfiguration by requesting 172 + * partial reconfig and allocating buffers from the service layer. 
173 + */ 174 + static int s10_ops_write_init(struct fpga_manager *mgr, 175 + struct fpga_image_info *info, 176 + const char *buf, size_t count) 177 + { 178 + struct s10_priv *priv = mgr->priv; 179 + struct device *dev = priv->client.dev; 180 + struct stratix10_svc_command_config_type ctype; 181 + char *kbuf; 182 + uint i; 183 + int ret; 184 + 185 + ctype.flags = 0; 186 + if (info->flags & FPGA_MGR_PARTIAL_RECONFIG) { 187 + dev_dbg(dev, "Requesting partial reconfiguration.\n"); 188 + ctype.flags |= BIT(COMMAND_RECONFIG_FLAG_PARTIAL); 189 + } else { 190 + dev_dbg(dev, "Requesting full reconfiguration.\n"); 191 + } 192 + 193 + reinit_completion(&priv->status_return_completion); 194 + ret = s10_svc_send_msg(priv, COMMAND_RECONFIG, 195 + &ctype, sizeof(ctype)); 196 + if (ret < 0) 197 + goto init_done; 198 + 199 + ret = wait_for_completion_interruptible_timeout( 200 + &priv->status_return_completion, S10_RECONFIG_TIMEOUT); 201 + if (!ret) { 202 + dev_err(dev, "timeout waiting for RECONFIG_REQUEST\n"); 203 + ret = -ETIMEDOUT; 204 + goto init_done; 205 + } 206 + if (ret < 0) { 207 + dev_err(dev, "error (%d) waiting for RECONFIG_REQUEST\n", ret); 208 + goto init_done; 209 + } 210 + 211 + ret = 0; 212 + if (!test_and_clear_bit(SVC_STATUS_RECONFIG_REQUEST_OK, 213 + &priv->status)) { 214 + ret = -ETIMEDOUT; 215 + goto init_done; 216 + } 217 + 218 + /* Allocate buffers from the service layer's pool. 
*/ 219 + for (i = 0; i < NUM_SVC_BUFS; i++) { 220 + kbuf = stratix10_svc_allocate_memory(priv->chan, SVC_BUF_SIZE); 221 + if (!kbuf) { 222 + s10_free_buffers(mgr); 223 + ret = -ENOMEM; 224 + goto init_done; 225 + } 226 + 227 + priv->svc_bufs[i].buf = kbuf; 228 + priv->svc_bufs[i].lock = 0; 229 + } 230 + 231 + init_done: 232 + stratix10_svc_done(priv->chan); 233 + return ret; 234 + } 235 + 236 + /* 237 + * s10_send_buf - send a buffer to the service layer queue 238 + * mgr: fpga manager struct 239 + * buf: fpga image buffer 240 + * count: size of buf in bytes 241 + * Returns # of bytes transferred or -ENOBUFS if the all the buffers are in use 242 + * or if the service queue is full. Never returns 0. 243 + */ 244 + static int s10_send_buf(struct fpga_manager *mgr, const char *buf, size_t count) 245 + { 246 + struct s10_priv *priv = mgr->priv; 247 + struct device *dev = priv->client.dev; 248 + void *svc_buf; 249 + size_t xfer_sz; 250 + int ret; 251 + uint i; 252 + 253 + /* get/lock a buffer that that's not being used */ 254 + for (i = 0; i < NUM_SVC_BUFS; i++) 255 + if (!test_and_set_bit_lock(SVC_BUF_LOCK, 256 + &priv->svc_bufs[i].lock)) 257 + break; 258 + 259 + if (i == NUM_SVC_BUFS) 260 + return -ENOBUFS; 261 + 262 + xfer_sz = count < SVC_BUF_SIZE ? count : SVC_BUF_SIZE; 263 + 264 + svc_buf = priv->svc_bufs[i].buf; 265 + memcpy(svc_buf, buf, xfer_sz); 266 + ret = s10_svc_send_msg(priv, COMMAND_RECONFIG_DATA_SUBMIT, 267 + svc_buf, xfer_sz); 268 + if (ret < 0) { 269 + dev_err(dev, 270 + "Error while sending data to service layer (%d)", ret); 271 + clear_bit_unlock(SVC_BUF_LOCK, &priv->svc_bufs[i].lock); 272 + return ret; 273 + } 274 + 275 + return xfer_sz; 276 + } 277 + 278 + /* 279 + * Send a FPGA image to privileged layers to write to the FPGA. When done 280 + * sending, free all service layer buffers we allocated in write_init. 
281 + */ 282 + static int s10_ops_write(struct fpga_manager *mgr, const char *buf, 283 + size_t count) 284 + { 285 + struct s10_priv *priv = mgr->priv; 286 + struct device *dev = priv->client.dev; 287 + long wait_status; 288 + int sent = 0; 289 + int ret = 0; 290 + 291 + /* 292 + * Loop waiting for buffers to be returned. When a buffer is returned, 293 + * reuse it to send more data or free it if all data has been sent. 294 + */ 295 + while (count > 0 || s10_free_buffer_count(mgr) != NUM_SVC_BUFS) { 296 + reinit_completion(&priv->status_return_completion); 297 + 298 + if (count > 0) { 299 + sent = s10_send_buf(mgr, buf, count); 300 + if (sent < 0) 301 + continue; 302 + 303 + count -= sent; 304 + buf += sent; 305 + } else { 306 + if (s10_free_buffers(mgr)) 307 + return 0; 308 + 309 + ret = s10_svc_send_msg( 310 + priv, COMMAND_RECONFIG_DATA_CLAIM, 311 + NULL, 0); 312 + if (ret < 0) 313 + break; 314 + } 315 + 316 + /* 317 + * If callback hasn't already happened, wait for buffers to be 318 + * returned from service layer 319 + */ 320 + wait_status = 1; /* not timed out */ 321 + if (!priv->status) 322 + wait_status = wait_for_completion_interruptible_timeout( 323 + &priv->status_return_completion, 324 + S10_BUFFER_TIMEOUT); 325 + 326 + if (test_and_clear_bit(SVC_STATUS_RECONFIG_BUFFER_DONE, 327 + &priv->status) || 328 + test_and_clear_bit(SVC_STATUS_RECONFIG_BUFFER_SUBMITTED, 329 + &priv->status)) { 330 + ret = 0; 331 + continue; 332 + } 333 + 334 + if (test_and_clear_bit(SVC_STATUS_RECONFIG_ERROR, 335 + &priv->status)) { 336 + dev_err(dev, "ERROR - giving up - SVC_STATUS_RECONFIG_ERROR\n"); 337 + ret = -EFAULT; 338 + break; 339 + } 340 + 341 + if (!wait_status) { 342 + dev_err(dev, "timeout waiting for svc layer buffers\n"); 343 + ret = -ETIMEDOUT; 344 + break; 345 + } 346 + if (wait_status < 0) { 347 + ret = wait_status; 348 + dev_err(dev, 349 + "error (%d) waiting for svc layer buffers\n", 350 + ret); 351 + break; 352 + } 353 + } 354 + 355 + if 
(!s10_free_buffers(mgr)) 356 + dev_err(dev, "%s not all buffers were freed\n", __func__); 357 + 358 + return ret; 359 + } 360 + 361 + static int s10_ops_write_complete(struct fpga_manager *mgr, 362 + struct fpga_image_info *info) 363 + { 364 + struct s10_priv *priv = mgr->priv; 365 + struct device *dev = priv->client.dev; 366 + unsigned long timeout; 367 + int ret; 368 + 369 + timeout = usecs_to_jiffies(info->config_complete_timeout_us); 370 + 371 + do { 372 + reinit_completion(&priv->status_return_completion); 373 + 374 + ret = s10_svc_send_msg(priv, COMMAND_RECONFIG_STATUS, NULL, 0); 375 + if (ret < 0) 376 + break; 377 + 378 + ret = wait_for_completion_interruptible_timeout( 379 + &priv->status_return_completion, timeout); 380 + if (!ret) { 381 + dev_err(dev, 382 + "timeout waiting for RECONFIG_COMPLETED\n"); 383 + ret = -ETIMEDOUT; 384 + break; 385 + } 386 + if (ret < 0) { 387 + dev_err(dev, 388 + "error (%d) waiting for RECONFIG_COMPLETED\n", 389 + ret); 390 + break; 391 + } 392 + /* Not error or timeout, so ret is # of jiffies until timeout */ 393 + timeout = ret; 394 + ret = 0; 395 + 396 + if (test_and_clear_bit(SVC_STATUS_RECONFIG_COMPLETED, 397 + &priv->status)) 398 + break; 399 + 400 + if (test_and_clear_bit(SVC_STATUS_RECONFIG_ERROR, 401 + &priv->status)) { 402 + dev_err(dev, "ERROR - giving up - SVC_STATUS_RECONFIG_ERROR\n"); 403 + ret = -EFAULT; 404 + break; 405 + } 406 + } while (1); 407 + 408 + stratix10_svc_done(priv->chan); 409 + 410 + return ret; 411 + } 412 + 413 + static enum fpga_mgr_states s10_ops_state(struct fpga_manager *mgr) 414 + { 415 + return FPGA_MGR_STATE_UNKNOWN; 416 + } 417 + 418 + static const struct fpga_manager_ops s10_ops = { 419 + .state = s10_ops_state, 420 + .write_init = s10_ops_write_init, 421 + .write = s10_ops_write, 422 + .write_complete = s10_ops_write_complete, 423 + }; 424 + 425 + static int s10_probe(struct platform_device *pdev) 426 + { 427 + struct device *dev = &pdev->dev; 428 + struct s10_priv *priv; 429 + struct 
fpga_manager *mgr; 430 + int ret; 431 + 432 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 433 + if (!priv) 434 + return -ENOMEM; 435 + 436 + priv->client.dev = dev; 437 + priv->client.receive_cb = s10_receive_callback; 438 + priv->client.priv = priv; 439 + 440 + priv->chan = stratix10_svc_request_channel_byname(&priv->client, 441 + SVC_CLIENT_FPGA); 442 + if (IS_ERR(priv->chan)) { 443 + dev_err(dev, "couldn't get service channel (%s)\n", 444 + SVC_CLIENT_FPGA); 445 + return PTR_ERR(priv->chan); 446 + } 447 + 448 + init_completion(&priv->status_return_completion); 449 + 450 + mgr = fpga_mgr_create(dev, "Stratix10 SOC FPGA Manager", 451 + &s10_ops, priv); 452 + if (!mgr) { 453 + dev_err(dev, "unable to create FPGA manager\n"); 454 + ret = -ENOMEM; 455 + goto probe_err; 456 + } 457 + 458 + ret = fpga_mgr_register(mgr); 459 + if (ret) { 460 + dev_err(dev, "unable to register FPGA manager\n"); 461 + fpga_mgr_free(mgr); 462 + goto probe_err; 463 + } 464 + 465 + platform_set_drvdata(pdev, mgr); 466 + return ret; 467 + 468 + probe_err: 469 + stratix10_svc_free_channel(priv->chan); 470 + return ret; 471 + } 472 + 473 + static int s10_remove(struct platform_device *pdev) 474 + { 475 + struct fpga_manager *mgr = platform_get_drvdata(pdev); 476 + struct s10_priv *priv = mgr->priv; 477 + 478 + fpga_mgr_unregister(mgr); 479 + stratix10_svc_free_channel(priv->chan); 480 + 481 + return 0; 482 + } 483 + 484 + static const struct of_device_id s10_of_match[] = { 485 + { .compatible = "intel,stratix10-soc-fpga-mgr", }, 486 + {}, 487 + }; 488 + 489 + MODULE_DEVICE_TABLE(of, s10_of_match); 490 + 491 + static struct platform_driver s10_driver = { 492 + .probe = s10_probe, 493 + .remove = s10_remove, 494 + .driver = { 495 + .name = "Stratix10 SoC FPGA manager", 496 + .of_match_table = of_match_ptr(s10_of_match), 497 + }, 498 + }; 499 + 500 + static int __init s10_init(void) 501 + { 502 + struct device_node *fw_np; 503 + struct device_node *np; 504 + int ret; 505 + 506 + fw_np = 
of_find_node_by_name(NULL, "svc"); 507 + if (!fw_np) 508 + return -ENODEV; 509 + 510 + np = of_find_matching_node(fw_np, s10_of_match); 511 + if (!np) { 512 + of_node_put(fw_np); 513 + return -ENODEV; 514 + } 515 + 516 + of_node_put(np); 517 + ret = of_platform_populate(fw_np, s10_of_match, NULL, NULL); 518 + of_node_put(fw_np); 519 + if (ret) 520 + return ret; 521 + 522 + return platform_driver_register(&s10_driver); 523 + } 524 + 525 + static void __exit s10_exit(void) 526 + { 527 + return platform_driver_unregister(&s10_driver); 528 + } 529 + 530 + module_init(s10_init); 531 + module_exit(s10_exit); 532 + 533 + MODULE_AUTHOR("Alan Tull <atull@kernel.org>"); 534 + MODULE_DESCRIPTION("Intel Stratix 10 SOC FPGA Manager"); 535 + MODULE_LICENSE("GPL v2");
+4
drivers/fpga/zynq-fpga.c
··· 501 501 if (err) 502 502 return err; 503 503 504 + /* Release 'PR' control back to the ICAP */ 505 + zynq_fpga_write(priv, CTRL_OFFSET, 506 + zynq_fpga_read(priv, CTRL_OFFSET) & ~CTRL_PCAP_PR_MASK); 507 + 504 508 err = zynq_fpga_poll_timeout(priv, INT_STS_OFFSET, intr_status, 505 509 intr_status & IXR_PCFG_DONE_MASK, 506 510 INIT_POLL_DELAY,
-1
drivers/hv/channel.c
··· 711 711 /* Snapshot the list of subchannels */ 712 712 spin_lock_irqsave(&channel->lock, flags); 713 713 list_splice_init(&channel->sc_list, &list); 714 - channel->num_sc = 0; 715 714 spin_unlock_irqrestore(&channel->lock, flags); 716 715 717 716 list_for_each_entry_safe(cur_channel, tmp, &list, sc_list) {
-44
drivers/hv/channel_mgmt.c
··· 405 405 primary_channel = channel->primary_channel; 406 406 spin_lock_irqsave(&primary_channel->lock, flags); 407 407 list_del(&channel->sc_list); 408 - primary_channel->num_sc--; 409 408 spin_unlock_irqrestore(&primary_channel->lock, flags); 410 409 } 411 410 ··· 1300 1301 1301 1302 return ret; 1302 1303 } 1303 - 1304 - /* 1305 - * Retrieve the (sub) channel on which to send an outgoing request. 1306 - * When a primary channel has multiple sub-channels, we try to 1307 - * distribute the load equally amongst all available channels. 1308 - */ 1309 - struct vmbus_channel *vmbus_get_outgoing_channel(struct vmbus_channel *primary) 1310 - { 1311 - struct list_head *cur, *tmp; 1312 - int cur_cpu; 1313 - struct vmbus_channel *cur_channel; 1314 - struct vmbus_channel *outgoing_channel = primary; 1315 - int next_channel; 1316 - int i = 1; 1317 - 1318 - if (list_empty(&primary->sc_list)) 1319 - return outgoing_channel; 1320 - 1321 - next_channel = primary->next_oc++; 1322 - 1323 - if (next_channel > (primary->num_sc)) { 1324 - primary->next_oc = 0; 1325 - return outgoing_channel; 1326 - } 1327 - 1328 - cur_cpu = hv_cpu_number_to_vp_number(smp_processor_id()); 1329 - list_for_each_safe(cur, tmp, &primary->sc_list) { 1330 - cur_channel = list_entry(cur, struct vmbus_channel, sc_list); 1331 - if (cur_channel->state != CHANNEL_OPENED_STATE) 1332 - continue; 1333 - 1334 - if (cur_channel->target_vp == cur_cpu) 1335 - return cur_channel; 1336 - 1337 - if (i == next_channel) 1338 - return cur_channel; 1339 - 1340 - i++; 1341 - } 1342 - 1343 - return outgoing_channel; 1344 - } 1345 - EXPORT_SYMBOL_GPL(vmbus_get_outgoing_channel); 1346 1304 1347 1305 static void invoke_sc_cb(struct vmbus_channel *primary_channel) 1348 1306 {
+3 -7
drivers/hv/hv.c
··· 33 33 #include "hyperv_vmbus.h" 34 34 35 35 /* The one and only */ 36 - struct hv_context hv_context = { 37 - .synic_initialized = false, 38 - }; 36 + struct hv_context hv_context; 39 37 40 38 /* 41 39 * If false, we're using the old mechanism for stimer0 interrupts ··· 324 326 325 327 hv_set_synic_state(sctrl.as_uint64); 326 328 327 - hv_context.synic_initialized = true; 328 - 329 329 /* 330 330 * Register the per-cpu clockevent source. 331 331 */ ··· 369 373 bool channel_found = false; 370 374 unsigned long flags; 371 375 372 - if (!hv_context.synic_initialized) 376 + hv_get_synic_state(sctrl.as_uint64); 377 + if (sctrl.enable != 1) 373 378 return -EFAULT; 374 379 375 380 /* ··· 432 435 hv_set_siefp(siefp.as_uint64); 433 436 434 437 /* Disable the global synic bit */ 435 - hv_get_synic_state(sctrl.as_uint64); 436 438 sctrl.enable = 0; 437 439 hv_set_synic_state(sctrl.as_uint64); 438 440
+1 -1
drivers/hv/hv_kvp.c
··· 437 437 val32 = in_msg->body.kvp_set.data.value_u32; 438 438 message->body.kvp_set.data.value_size = 439 439 sprintf(message->body.kvp_set.data.value, 440 - "%d", val32) + 1; 440 + "%u", val32) + 1; 441 441 break; 442 442 443 443 case REG_U64:
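The hv_kvp hunk above swaps `"%d"` for `"%u"` when stringifying a REG_U32 registry value. The difference only shows for values with the high bit set; the userspace sketch below (hypothetical helpers, not the driver code) reproduces the before/after behavior, including the `+ 1` the driver uses to count the terminating NUL into `value_size`:

```c
#include <assert.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Old behavior: a u32 pushed through "%d" is reinterpreted as signed,
 * so anything >= 2^31 is rendered as a negative number. */
static int kvp_format_old(char *out, uint32_t val32)
{
	return sprintf(out, "%d", (int)val32) + 1;	/* size incl. NUL */
}

/* Fixed behavior: "%u" keeps the unsigned representation the host
 * side of the KVP exchange expects. */
static int kvp_format_new(char *out, uint32_t val32)
{
	return sprintf(out, "%u", (unsigned int)val32) + 1;
}
```

For `UINT32_MAX` the old helper yields `"-1"` (value_size 3) while the fixed one yields `"4294967295"` (value_size 11), which is why the one-character format change matters.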
+1 -1
drivers/hv/hv_util.c
··· 483 483 484 484 /* The one and only one */ 485 485 static struct hv_driver util_drv = { 486 - .name = "hv_util", 486 + .name = "hv_utils", 487 487 .id_table = id_table, 488 488 .probe = util_probe, 489 489 .remove = util_remove,
-2
drivers/hv/hyperv_vmbus.h
··· 162 162 163 163 void *tsc_page; 164 164 165 - bool synic_initialized; 166 - 167 165 struct hv_per_cpu_context __percpu *cpu_context; 168 166 169 167 /*
+17 -6
drivers/hwtracing/coresight/coresight-etb10.c
··· 136 136 137 137 static int etb_enable_hw(struct etb_drvdata *drvdata) 138 138 { 139 + int rc = coresight_claim_device(drvdata->base); 140 + 141 + if (rc) 142 + return rc; 143 + 139 144 __etb_enable_hw(drvdata); 140 145 return 0; 141 146 } ··· 228 223 return 0; 229 224 } 230 225 231 - static void etb_disable_hw(struct etb_drvdata *drvdata) 226 + static void __etb_disable_hw(struct etb_drvdata *drvdata) 232 227 { 233 228 u32 ffcr; 234 229 ··· 318 313 CS_LOCK(drvdata->base); 319 314 } 320 315 316 + static void etb_disable_hw(struct etb_drvdata *drvdata) 317 + { 318 + __etb_disable_hw(drvdata); 319 + etb_dump_hw(drvdata); 320 + coresight_disclaim_device(drvdata->base); 321 + } 322 + 321 323 static void etb_disable(struct coresight_device *csdev) 322 324 { 323 325 struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); ··· 335 323 /* Disable the ETB only if it needs to */ 336 324 if (drvdata->mode != CS_MODE_DISABLED) { 337 325 etb_disable_hw(drvdata); 338 - etb_dump_hw(drvdata); 339 326 drvdata->mode = CS_MODE_DISABLED; 340 327 } 341 328 spin_unlock_irqrestore(&drvdata->spinlock, flags); ··· 413 402 414 403 capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS; 415 404 416 - etb_disable_hw(drvdata); 405 + __etb_disable_hw(drvdata); 417 406 CS_UNLOCK(drvdata->base); 418 407 419 408 /* unit is in words, not bytes */ ··· 521 510 handle->head = (cur * PAGE_SIZE) + offset; 522 511 to_read = buf->nr_pages << PAGE_SHIFT; 523 512 } 524 - etb_enable_hw(drvdata); 513 + __etb_enable_hw(drvdata); 525 514 CS_LOCK(drvdata->base); 526 515 527 516 return to_read; ··· 545 534 546 535 spin_lock_irqsave(&drvdata->spinlock, flags); 547 536 if (drvdata->mode == CS_MODE_SYSFS) { 548 - etb_disable_hw(drvdata); 537 + __etb_disable_hw(drvdata); 549 538 etb_dump_hw(drvdata); 550 - etb_enable_hw(drvdata); 539 + __etb_enable_hw(drvdata); 551 540 } 552 541 spin_unlock_irqrestore(&drvdata->spinlock, flags); 553 542
+6 -6
drivers/hwtracing/coresight/coresight-etm3x.c
··· 363 363 364 364 CS_UNLOCK(drvdata->base); 365 365 366 + rc = coresight_claim_device_unlocked(drvdata->base); 367 + if (rc) 368 + goto done; 369 + 366 370 /* Turn engine on */ 367 371 etm_clr_pwrdwn(drvdata); 368 372 /* Apply power to trace registers */ 369 373 etm_set_pwrup(drvdata); 370 374 /* Make sure all registers are accessible */ 371 375 etm_os_unlock(drvdata); 372 - rc = coresight_claim_device_unlocked(drvdata->base); 373 - if (rc) 374 - goto done; 375 376 376 377 etm_set_prog(drvdata); 377 378 ··· 423 422 etm_clr_prog(drvdata); 424 423 425 424 done: 426 - if (rc) 427 - etm_set_pwrdwn(drvdata); 428 425 CS_LOCK(drvdata->base); 429 426 430 427 dev_dbg(drvdata->dev, "cpu: %d enable smp call done: %d\n", ··· 576 577 for (i = 0; i < drvdata->nr_cntr; i++) 577 578 config->cntr_val[i] = etm_readl(drvdata, ETMCNTVRn(i)); 578 579 580 + etm_set_pwrdwn(drvdata); 579 581 coresight_disclaim_device_unlocked(drvdata->base); 580 582 581 - etm_set_pwrdwn(drvdata); 582 583 CS_LOCK(drvdata->base); 583 584 584 585 dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu); ··· 601 602 * power down the tracer. 602 603 */ 603 604 etm_set_pwrdwn(drvdata); 605 + coresight_disclaim_device_unlocked(drvdata->base); 604 606 605 607 CS_LOCK(drvdata->base); 606 608 }
+1 -1
drivers/hwtracing/coresight/coresight-stm.c
··· 856 856 857 857 if (stm_register_device(dev, &drvdata->stm, THIS_MODULE)) { 858 858 dev_info(dev, 859 - "stm_register_device failed, probing deffered\n"); 859 + "stm_register_device failed, probing deferred\n"); 860 860 return -EPROBE_DEFER; 861 861 } 862 862
+1 -1
drivers/hwtracing/coresight/coresight-tmc-etf.c
··· 86 86 87 87 static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata) 88 88 { 89 - coresight_disclaim_device(drvdata); 90 89 __tmc_etb_disable_hw(drvdata); 90 + coresight_disclaim_device(drvdata->base); 91 91 } 92 92 93 93 static void __tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
+2 -1
drivers/hwtracing/intel_th/msu.c
··· 1423 1423 if (!end) 1424 1424 break; 1425 1425 1426 - len -= end - p; 1426 + /* consume the number and the following comma, hence +1 */ 1427 + len -= end - p + 1; 1427 1428 p = end + 1; 1428 1429 } while (len); 1429 1430
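The msu.c hunk above fixes the length bookkeeping while walking a comma-separated attribute string. A userspace sketch of the same accounting (`parse_sizes` is an illustrative helper, not the driver function): each iteration must consume the digits *and* the separating comma, otherwise `len` stays one byte too high per token and the `while (len)` loop runs past the final number.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Parse up to `max` comma-separated unsigned numbers from buf[0..len).
 * Returns the number of values parsed, or -1 on malformed input. */
static int parse_sizes(const char *buf, size_t len, unsigned long *out,
		       int max)
{
	const char *p = buf;
	int nr = 0;

	do {
		char *end;
		unsigned long val = strtoul(p, &end, 10);

		if (end == p || nr == max)
			return -1;
		out[nr++] = val;

		if ((size_t)(end - buf) >= len)
			break;	/* last token: no comma follows it */

		/* consume the number and the following comma, hence +1 */
		len -= end - p + 1;
		p = end + 1;
	} while (len);

	return nr;
}
```

With `"512,1024,2048"` (len 13) the per-token subtractions are 4, 5, then break, landing exactly on the end of the buffer; dropping the `+ 1` would leave `len` nonzero after the last token.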
+7 -5
drivers/hwtracing/stm/policy.c
··· 440 440 441 441 stm->policy = kzalloc(sizeof(*stm->policy), GFP_KERNEL); 442 442 if (!stm->policy) { 443 - mutex_unlock(&stm->policy_mutex); 444 - stm_put_protocol(pdrv); 445 - stm_put_device(stm); 446 - return ERR_PTR(-ENOMEM); 443 + ret = ERR_PTR(-ENOMEM); 444 + goto unlock_policy; 447 445 } 448 446 449 447 config_group_init_type_name(&stm->policy->group, name, ··· 456 458 mutex_unlock(&stm->policy_mutex); 457 459 458 460 if (IS_ERR(ret)) { 459 - stm_put_protocol(stm->pdrv); 461 + /* 462 + * pdrv and stm->pdrv at this point can be quite different, 463 + * and only one of them needs to be 'put' 464 + */ 465 + stm_put_protocol(pdrv); 460 466 stm_put_device(stm); 461 467 } 462 468
+25
drivers/iommu/dmar.c
··· 2042 2042 { 2043 2043 return dmar_device_hotplug(handle, false); 2044 2044 } 2045 + 2046 + /* 2047 + * dmar_platform_optin - Is %DMA_CTRL_PLATFORM_OPT_IN_FLAG set in DMAR table 2048 + * 2049 + * Returns true if the platform has %DMA_CTRL_PLATFORM_OPT_IN_FLAG set in 2050 + * the ACPI DMAR table. This means that the platform boot firmware has made 2051 + * sure no device can issue DMA outside of RMRR regions. 2052 + */ 2053 + bool dmar_platform_optin(void) 2054 + { 2055 + struct acpi_table_dmar *dmar; 2056 + acpi_status status; 2057 + bool ret; 2058 + 2059 + status = acpi_get_table(ACPI_SIG_DMAR, 0, 2060 + (struct acpi_table_header **)&dmar); 2061 + if (ACPI_FAILURE(status)) 2062 + return false; 2063 + 2064 + ret = !!(dmar->flags & DMAR_PLATFORM_OPT_IN); 2065 + acpi_put_table((struct acpi_table_header *)dmar); 2066 + 2067 + return ret; 2068 + } 2069 + EXPORT_SYMBOL_GPL(dmar_platform_optin);
+53 -3
drivers/iommu/intel-iommu.c
··· 184 184 */ 185 185 static int force_on = 0; 186 186 int intel_iommu_tboot_noforce; 187 + static int no_platform_optin; 187 188 188 189 #define ROOT_ENTRY_NR (VTD_PAGE_SIZE/sizeof(struct root_entry)) 189 190 ··· 504 503 pr_info("IOMMU enabled\n"); 505 504 } else if (!strncmp(str, "off", 3)) { 506 505 dmar_disabled = 1; 506 + no_platform_optin = 1; 507 507 pr_info("IOMMU disabled\n"); 508 508 } else if (!strncmp(str, "igfx_off", 8)) { 509 509 dmar_map_gfx = 0; ··· 1473 1471 if (info->pri_supported && !pci_reset_pri(pdev) && !pci_enable_pri(pdev, 32)) 1474 1472 info->pri_enabled = 1; 1475 1473 #endif 1476 - if (info->ats_supported && !pci_enable_ats(pdev, VTD_PAGE_SHIFT)) { 1474 + if (!pdev->untrusted && info->ats_supported && 1475 + !pci_enable_ats(pdev, VTD_PAGE_SHIFT)) { 1477 1476 info->ats_enabled = 1; 1478 1477 domain_update_iotlb(info->domain); 1479 1478 info->ats_qdep = pci_ats_queue_depth(pdev); ··· 2896 2893 struct pci_dev *pdev = to_pci_dev(dev); 2897 2894 2898 2895 if (device_is_rmrr_locked(dev)) 2896 + return 0; 2897 + 2898 + /* 2899 + * Prevent any device marked as untrusted from getting 2900 + * placed into the statically identity mapping domain. 
2901 + */ 2902 + if (pdev->untrusted) 2899 2903 return 0; 2900 2904 2901 2905 if ((iommu_identity_mapping & IDENTMAP_AZALIA) && IS_AZALIA(pdev)) ··· 4732 4722 NULL, 4733 4723 }; 4734 4724 4725 + static int __init platform_optin_force_iommu(void) 4726 + { 4727 + struct pci_dev *pdev = NULL; 4728 + bool has_untrusted_dev = false; 4729 + 4730 + if (!dmar_platform_optin() || no_platform_optin) 4731 + return 0; 4732 + 4733 + for_each_pci_dev(pdev) { 4734 + if (pdev->untrusted) { 4735 + has_untrusted_dev = true; 4736 + break; 4737 + } 4738 + } 4739 + 4740 + if (!has_untrusted_dev) 4741 + return 0; 4742 + 4743 + if (no_iommu || dmar_disabled) 4744 + pr_info("Intel-IOMMU force enabled due to platform opt in\n"); 4745 + 4746 + /* 4747 + * If Intel-IOMMU is disabled by default, we will apply identity 4748 + * map for all devices except those marked as being untrusted. 4749 + */ 4750 + if (dmar_disabled) 4751 + iommu_identity_mapping |= IDENTMAP_ALL; 4752 + 4753 + dmar_disabled = 0; 4754 + #if defined(CONFIG_X86) && defined(CONFIG_SWIOTLB) 4755 + swiotlb = 0; 4756 + #endif 4757 + no_iommu = 0; 4758 + 4759 + return 1; 4760 + } 4761 + 4735 4762 int __init intel_iommu_init(void) 4736 4763 { 4737 4764 int ret = -ENODEV; 4738 4765 struct dmar_drhd_unit *drhd; 4739 4766 struct intel_iommu *iommu; 4740 4767 4741 - /* VT-d is required for a TXT/tboot launch, so enforce that */ 4742 - force_on = tboot_force_iommu(); 4768 + /* 4769 + * Intel IOMMU is required for a TXT/tboot launch or platform 4770 + * opt in, so enforce that. 4771 + */ 4772 + force_on = tboot_force_iommu() || platform_optin_force_iommu(); 4743 4773 4744 4774 if (iommu_init_mempool()) { 4745 4775 if (force_on)
+8
drivers/misc/Kconfig
··· 513 513 tristate 514 514 default MISC_RTSX_PCI || MISC_RTSX_USB 515 515 516 + config PVPANIC 517 + tristate "pvpanic device support" 518 + depends on HAS_IOMEM && (ACPI || OF) 519 + help 520 + This driver provides support for the pvpanic device. pvpanic is 521 + a paravirtualized device provided by QEMU; it lets a virtual machine 522 + (guest) communicate panic events to the host. 523 + 516 524 source "drivers/misc/c2port/Kconfig" 517 525 source "drivers/misc/eeprom/Kconfig" 518 526 source "drivers/misc/cb710/Kconfig"
+2 -1
drivers/misc/Makefile
··· 57 57 obj-$(CONFIG_ASPEED_LPC_SNOOP) += aspeed-lpc-snoop.o 58 58 obj-$(CONFIG_PCI_ENDPOINT_TEST) += pci_endpoint_test.o 59 59 obj-$(CONFIG_OCXL) += ocxl/ 60 - obj-y += cardreader/ 60 + obj-y += cardreader/ 61 + obj-$(CONFIG_PVPANIC) += pvpanic.o
+1 -2
drivers/misc/altera-stapl/altera.c
··· 2176 2176 key_ptr = &p[note_strings + 2177 2177 get_unaligned_be32( 2178 2178 &p[note_table + (8 * i)])]; 2179 - if ((strncasecmp(key, key_ptr, strlen(key_ptr)) == 0) && 2180 - (key != NULL)) { 2179 + if (key && !strncasecmp(key, key_ptr, strlen(key_ptr))) { 2181 2180 status = 0; 2182 2181 2183 2182 value_ptr = &p[note_strings +
+36 -49
drivers/misc/genwqe/card_debugfs.c
··· 33 33 #include "card_base.h" 34 34 #include "card_ddcb.h" 35 35 36 - #define GENWQE_DEBUGFS_RO(_name, _showfn) \ 37 - static int genwqe_debugfs_##_name##_open(struct inode *inode, \ 38 - struct file *file) \ 39 - { \ 40 - return single_open(file, _showfn, inode->i_private); \ 41 - } \ 42 - static const struct file_operations genwqe_##_name##_fops = { \ 43 - .open = genwqe_debugfs_##_name##_open, \ 44 - .read = seq_read, \ 45 - .llseek = seq_lseek, \ 46 - .release = single_release, \ 47 - } 48 - 49 36 static void dbg_uidn_show(struct seq_file *s, struct genwqe_reg *regs, 50 37 int entries) 51 38 { ··· 74 87 return 0; 75 88 } 76 89 77 - static int genwqe_curr_dbg_uid0_show(struct seq_file *s, void *unused) 90 + static int curr_dbg_uid0_show(struct seq_file *s, void *unused) 78 91 { 79 92 return curr_dbg_uidn_show(s, unused, 0); 80 93 } 81 94 82 - GENWQE_DEBUGFS_RO(curr_dbg_uid0, genwqe_curr_dbg_uid0_show); 95 + DEFINE_SHOW_ATTRIBUTE(curr_dbg_uid0); 83 96 84 - static int genwqe_curr_dbg_uid1_show(struct seq_file *s, void *unused) 97 + static int curr_dbg_uid1_show(struct seq_file *s, void *unused) 85 98 { 86 99 return curr_dbg_uidn_show(s, unused, 1); 87 100 } 88 101 89 - GENWQE_DEBUGFS_RO(curr_dbg_uid1, genwqe_curr_dbg_uid1_show); 102 + DEFINE_SHOW_ATTRIBUTE(curr_dbg_uid1); 90 103 91 - static int genwqe_curr_dbg_uid2_show(struct seq_file *s, void *unused) 104 + static int curr_dbg_uid2_show(struct seq_file *s, void *unused) 92 105 { 93 106 return curr_dbg_uidn_show(s, unused, 2); 94 107 } 95 108 96 - GENWQE_DEBUGFS_RO(curr_dbg_uid2, genwqe_curr_dbg_uid2_show); 109 + DEFINE_SHOW_ATTRIBUTE(curr_dbg_uid2); 97 110 98 111 static int prev_dbg_uidn_show(struct seq_file *s, void *unused, int uid) 99 112 { ··· 103 116 return 0; 104 117 } 105 118 106 - static int genwqe_prev_dbg_uid0_show(struct seq_file *s, void *unused) 119 + static int prev_dbg_uid0_show(struct seq_file *s, void *unused) 107 120 { 108 121 return prev_dbg_uidn_show(s, unused, 0); 109 122 } 110 123 111 - 
GENWQE_DEBUGFS_RO(prev_dbg_uid0, genwqe_prev_dbg_uid0_show); 124 + DEFINE_SHOW_ATTRIBUTE(prev_dbg_uid0); 112 125 113 - static int genwqe_prev_dbg_uid1_show(struct seq_file *s, void *unused) 126 + static int prev_dbg_uid1_show(struct seq_file *s, void *unused) 114 127 { 115 128 return prev_dbg_uidn_show(s, unused, 1); 116 129 } 117 130 118 - GENWQE_DEBUGFS_RO(prev_dbg_uid1, genwqe_prev_dbg_uid1_show); 131 + DEFINE_SHOW_ATTRIBUTE(prev_dbg_uid1); 119 132 120 - static int genwqe_prev_dbg_uid2_show(struct seq_file *s, void *unused) 133 + static int prev_dbg_uid2_show(struct seq_file *s, void *unused) 121 134 { 122 135 return prev_dbg_uidn_show(s, unused, 2); 123 136 } 124 137 125 - GENWQE_DEBUGFS_RO(prev_dbg_uid2, genwqe_prev_dbg_uid2_show); 138 + DEFINE_SHOW_ATTRIBUTE(prev_dbg_uid2); 126 139 127 - static int genwqe_curr_regs_show(struct seq_file *s, void *unused) 140 + static int curr_regs_show(struct seq_file *s, void *unused) 128 141 { 129 142 struct genwqe_dev *cd = s->private; 130 143 unsigned int i; ··· 151 164 return 0; 152 165 } 153 166 154 - GENWQE_DEBUGFS_RO(curr_regs, genwqe_curr_regs_show); 167 + DEFINE_SHOW_ATTRIBUTE(curr_regs); 155 168 156 - static int genwqe_prev_regs_show(struct seq_file *s, void *unused) 169 + static int prev_regs_show(struct seq_file *s, void *unused) 157 170 { 158 171 struct genwqe_dev *cd = s->private; 159 172 unsigned int i; ··· 175 188 return 0; 176 189 } 177 190 178 - GENWQE_DEBUGFS_RO(prev_regs, genwqe_prev_regs_show); 191 + DEFINE_SHOW_ATTRIBUTE(prev_regs); 179 192 180 - static int genwqe_jtimer_show(struct seq_file *s, void *unused) 193 + static int jtimer_show(struct seq_file *s, void *unused) 181 194 { 182 195 struct genwqe_dev *cd = s->private; 183 196 unsigned int vf_num; ··· 196 209 return 0; 197 210 } 198 211 199 - GENWQE_DEBUGFS_RO(jtimer, genwqe_jtimer_show); 212 + DEFINE_SHOW_ATTRIBUTE(jtimer); 200 213 201 - static int genwqe_queue_working_time_show(struct seq_file *s, void *unused) 214 + static int 
queue_working_time_show(struct seq_file *s, void *unused) 202 215 { 203 216 struct genwqe_dev *cd = s->private; 204 217 unsigned int vf_num; ··· 214 227 return 0; 215 228 } 216 229 217 - GENWQE_DEBUGFS_RO(queue_working_time, genwqe_queue_working_time_show); 230 + DEFINE_SHOW_ATTRIBUTE(queue_working_time); 218 231 219 - static int genwqe_ddcb_info_show(struct seq_file *s, void *unused) 232 + static int ddcb_info_show(struct seq_file *s, void *unused) 220 233 { 221 234 struct genwqe_dev *cd = s->private; 222 235 unsigned int i; ··· 287 300 return 0; 288 301 } 289 302 290 - GENWQE_DEBUGFS_RO(ddcb_info, genwqe_ddcb_info_show); 303 + DEFINE_SHOW_ATTRIBUTE(ddcb_info); 291 304 292 - static int genwqe_info_show(struct seq_file *s, void *unused) 305 + static int info_show(struct seq_file *s, void *unused) 293 306 { 294 307 struct genwqe_dev *cd = s->private; 295 308 u64 app_id, slu_id, bitstream = -1; ··· 322 335 return 0; 323 336 } 324 337 325 - GENWQE_DEBUGFS_RO(info, genwqe_info_show); 338 + DEFINE_SHOW_ATTRIBUTE(info); 326 339 327 340 int genwqe_init_debugfs(struct genwqe_dev *cd) 328 341 { ··· 343 356 344 357 /* non privileged interfaces are done here */ 345 358 file = debugfs_create_file("ddcb_info", S_IRUGO, root, cd, 346 - &genwqe_ddcb_info_fops); 359 + &ddcb_info_fops); 347 360 if (!file) { 348 361 ret = -ENOMEM; 349 362 goto err1; 350 363 } 351 364 352 365 file = debugfs_create_file("info", S_IRUGO, root, cd, 353 - &genwqe_info_fops); 366 + &info_fops); 354 367 if (!file) { 355 368 ret = -ENOMEM; 356 369 goto err1; ··· 383 396 } 384 397 385 398 file = debugfs_create_file("curr_regs", S_IRUGO, root, cd, 386 - &genwqe_curr_regs_fops); 399 + &curr_regs_fops); 387 400 if (!file) { 388 401 ret = -ENOMEM; 389 402 goto err1; 390 403 } 391 404 392 405 file = debugfs_create_file("curr_dbg_uid0", S_IRUGO, root, cd, 393 - &genwqe_curr_dbg_uid0_fops); 406 + &curr_dbg_uid0_fops); 394 407 if (!file) { 395 408 ret = -ENOMEM; 396 409 goto err1; 397 410 } 398 411 399 412 file = 
debugfs_create_file("curr_dbg_uid1", S_IRUGO, root, cd, 400 - &genwqe_curr_dbg_uid1_fops); 413 + &curr_dbg_uid1_fops); 401 414 if (!file) { 402 415 ret = -ENOMEM; 403 416 goto err1; 404 417 } 405 418 406 419 file = debugfs_create_file("curr_dbg_uid2", S_IRUGO, root, cd, 407 - &genwqe_curr_dbg_uid2_fops); 420 + &curr_dbg_uid2_fops); 408 421 if (!file) { 409 422 ret = -ENOMEM; 410 423 goto err1; 411 424 } 412 425 413 426 file = debugfs_create_file("prev_regs", S_IRUGO, root, cd, 414 - &genwqe_prev_regs_fops); 427 + &prev_regs_fops); 415 428 if (!file) { 416 429 ret = -ENOMEM; 417 430 goto err1; 418 431 } 419 432 420 433 file = debugfs_create_file("prev_dbg_uid0", S_IRUGO, root, cd, 421 - &genwqe_prev_dbg_uid0_fops); 434 + &prev_dbg_uid0_fops); 422 435 if (!file) { 423 436 ret = -ENOMEM; 424 437 goto err1; 425 438 } 426 439 427 440 file = debugfs_create_file("prev_dbg_uid1", S_IRUGO, root, cd, 428 - &genwqe_prev_dbg_uid1_fops); 441 + &prev_dbg_uid1_fops); 429 442 if (!file) { 430 443 ret = -ENOMEM; 431 444 goto err1; 432 445 } 433 446 434 447 file = debugfs_create_file("prev_dbg_uid2", S_IRUGO, root, cd, 435 - &genwqe_prev_dbg_uid2_fops); 448 + &prev_dbg_uid2_fops); 436 449 if (!file) { 437 450 ret = -ENOMEM; 438 451 goto err1; ··· 450 463 } 451 464 452 465 file = debugfs_create_file("jobtimer", S_IRUGO, root, cd, 453 - &genwqe_jtimer_fops); 466 + &jtimer_fops); 454 467 if (!file) { 455 468 ret = -ENOMEM; 456 469 goto err1; 457 470 } 458 471 459 472 file = debugfs_create_file("queue_working_time", S_IRUGO, root, cd, 460 - &genwqe_queue_working_time_fops); 473 + &queue_working_time_fops); 461 474 if (!file) { 462 475 ret = -ENOMEM; 463 476 goto err1;
+1 -1
drivers/misc/genwqe/card_utils.c
··· 215 215 void *__genwqe_alloc_consistent(struct genwqe_dev *cd, size_t size, 216 216 dma_addr_t *dma_handle) 217 217 { 218 - if (get_order(size) > MAX_ORDER) 218 + if (get_order(size) >= MAX_ORDER) 219 219 return NULL; 220 220 221 221 return dma_zalloc_coherent(&cd->pci_dev->dev, size, dma_handle,
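The card_utils hunk above tightens the bound from `>` to `>=`: valid page orders run from 0 through MAX_ORDER - 1, so a request whose order *equals* MAX_ORDER must be rejected as well. A userspace model of the check (`get_order_model` and the MAX_ORDER value of 11 are illustrative assumptions, matching the historical x86 default of 4 KiB pages and a 4 MiB largest allocation):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE_MODEL	4096UL	/* assume 4 KiB pages */
#define MAX_ORDER_MODEL	11	/* orders 0..10 are allocatable */

/* Model of the kernel's get_order(): smallest n such that
 * (PAGE_SIZE << n) >= size. */
static int get_order_model(size_t size)
{
	unsigned long chunk = PAGE_SIZE_MODEL;
	int order = 0;

	while (chunk < size) {
		chunk <<= 1;
		order++;
	}
	return order;
}

/* 1 if an allocation of `size` bytes should be refused. */
static int too_big_old(size_t size) { return get_order_model(size) >  MAX_ORDER_MODEL; }
static int too_big_new(size_t size) { return get_order_model(size) >= MAX_ORDER_MODEL; }
```

An 8 MiB request is order 11, exactly MAX_ORDER: the old `>` test let it through to the DMA allocator, while the fixed `>=` test refuses it; 4 MiB (order 10) passes either way.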
+1
drivers/misc/mei/Makefile
··· 9 9 mei-objs += interrupt.o 10 10 mei-objs += client.o 11 11 mei-objs += main.o 12 + mei-objs += dma-ring.o 12 13 mei-objs += bus.o 13 14 mei-objs += bus-fixup.o 14 15 mei-$(CONFIG_DEBUG_FS) += debugfs.o
+57 -34
drivers/misc/mei/client.c
··· 318 318 } 319 319 320 320 /** 321 - * mei_cl_cmp_id - tells if the clients are the same 322 - * 323 - * @cl1: host client 1 324 - * @cl2: host client 2 325 - * 326 - * Return: true - if the clients has same host and me ids 327 - * false - otherwise 328 - */ 329 - static inline bool mei_cl_cmp_id(const struct mei_cl *cl1, 330 - const struct mei_cl *cl2) 331 - { 332 - return cl1 && cl2 && 333 - (cl1->host_client_id == cl2->host_client_id) && 334 - (mei_cl_me_id(cl1) == mei_cl_me_id(cl2)); 335 - } 336 - 337 - /** 338 321 * mei_io_cb_free - free mei_cb_private related memory 339 322 * 340 323 * @cb: mei callback struct ··· 401 418 struct mei_cl_cb *cb, *next; 402 419 403 420 list_for_each_entry_safe(cb, next, head, list) { 404 - if (mei_cl_cmp_id(cl, cb->cl)) 421 + if (cl == cb->cl) 405 422 list_del_init(&cb->list); 406 423 } 407 424 } ··· 418 435 struct mei_cl_cb *cb, *next; 419 436 420 437 list_for_each_entry_safe(cb, next, head, list) { 421 - if (mei_cl_cmp_id(cl, cb->cl)) 438 + if (cl == cb->cl) 422 439 mei_tx_cb_dequeue(cb); 423 440 } 424 441 } ··· 461 478 if (length == 0) 462 479 return cb; 463 480 464 - cb->buf.data = kmalloc(length, GFP_KERNEL); 481 + cb->buf.data = kmalloc(roundup(length, MEI_SLOT_SIZE), GFP_KERNEL); 465 482 if (!cb->buf.data) { 466 483 mei_io_cb_free(cb); 467 484 return NULL; ··· 1357 1374 1358 1375 mutex_unlock(&dev->device_lock); 1359 1376 wait_event_timeout(cl->wait, 1360 - cl->notify_en == request || !mei_cl_is_connected(cl), 1377 + cl->notify_en == request || 1378 + cl->status || 1379 + !mei_cl_is_connected(cl), 1361 1380 mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT)); 1362 1381 mutex_lock(&dev->device_lock); 1363 1382 ··· 1558 1573 struct mei_msg_hdr mei_hdr; 1559 1574 size_t hdr_len = sizeof(mei_hdr); 1560 1575 size_t len; 1561 - size_t hbuf_len; 1576 + size_t hbuf_len, dr_len; 1562 1577 int hbuf_slots; 1578 + u32 dr_slots; 1579 + u32 dma_len; 1563 1580 int rets; 1564 1581 bool first_chunk; 1582 + const void *data; 1565 1583 1566 
 	if (WARN_ON(!cl || !cl->dev))
 		return -ENODEV;
···
 	}
 
 	len = buf->size - cb->buf_idx;
+	data = buf->data + cb->buf_idx;
 	hbuf_slots = mei_hbuf_empty_slots(dev);
 	if (hbuf_slots < 0) {
 		rets = -EOVERFLOW;
···
 	}
 
 	hbuf_len = mei_slots2data(hbuf_slots);
+	dr_slots = mei_dma_ring_empty_slots(dev);
+	dr_len = mei_slots2data(dr_slots);
 
 	mei_msg_hdr_init(&mei_hdr, cb);
 
···
 	if (len + hdr_len <= hbuf_len) {
 		mei_hdr.length = len;
 		mei_hdr.msg_complete = 1;
+	} else if (dr_slots && hbuf_len >= hdr_len + sizeof(dma_len)) {
+		mei_hdr.dma_ring = 1;
+		if (len > dr_len)
+			len = dr_len;
+		else
+			mei_hdr.msg_complete = 1;
+
+		mei_hdr.length = sizeof(dma_len);
+		dma_len = len;
+		data = &dma_len;
 	} else if ((u32)hbuf_slots == mei_hbuf_depth(dev)) {
-		mei_hdr.length = hbuf_len - hdr_len;
+		len = hbuf_len - hdr_len;
+		mei_hdr.length = len;
 	} else {
 		return 0;
 	}
 
-	cl_dbg(dev, cl, "buf: size = %zu idx = %zu\n",
-	       cb->buf.size, cb->buf_idx);
+	if (mei_hdr.dma_ring)
+		mei_dma_ring_write(dev, buf->data + cb->buf_idx, len);
 
-	rets = mei_write_message(dev, &mei_hdr, hdr_len,
-				 buf->data + cb->buf_idx, mei_hdr.length);
+	rets = mei_write_message(dev, &mei_hdr, hdr_len, data, mei_hdr.length);
 	if (rets)
 		goto err;
 
 	cl->status = 0;
 	cl->writing_state = MEI_WRITING;
-	cb->buf_idx += mei_hdr.length;
+	cb->buf_idx += len;
 
 	if (first_chunk) {
 		if (mei_cl_tx_flow_ctrl_creds_reduce(cl)) {
···
 	struct mei_msg_data *buf;
 	struct mei_msg_hdr mei_hdr;
 	size_t hdr_len = sizeof(mei_hdr);
-	size_t len;
-	size_t hbuf_len;
+	size_t len, hbuf_len, dr_len;
 	int hbuf_slots;
+	u32 dr_slots;
+	u32 dma_len;
 	ssize_t rets;
 	bool blocking;
+	const void *data;
 
 	if (WARN_ON(!cl || !cl->dev))
 		return -ENODEV;
···
 
 	buf = &cb->buf;
 	len = buf->size;
-	blocking = cb->blocking;
 
 	cl_dbg(dev, cl, "len=%zd\n", len);
+
+	blocking = cb->blocking;
+	data = buf->data;
 
 	rets = pm_runtime_get(dev->dev);
 	if (rets < 0 && rets != -EINPROGRESS) {
···
 	}
 
 	hbuf_len = mei_slots2data(hbuf_slots);
+	dr_slots = mei_dma_ring_empty_slots(dev);
+	dr_len = mei_slots2data(dr_slots);
 
 	if (len + hdr_len <= hbuf_len) {
 		mei_hdr.length = len;
 		mei_hdr.msg_complete = 1;
+	} else if (dr_slots && hbuf_len >= hdr_len + sizeof(dma_len)) {
+		mei_hdr.dma_ring = 1;
+		if (len > dr_len)
+			len = dr_len;
+		else
+			mei_hdr.msg_complete = 1;
+
+		mei_hdr.length = sizeof(dma_len);
+		dma_len = len;
+		data = &dma_len;
 	} else {
-		mei_hdr.length = hbuf_len - hdr_len;
+		len = hbuf_len - hdr_len;
+		mei_hdr.length = len;
 	}
 
+	if (mei_hdr.dma_ring)
+		mei_dma_ring_write(dev, buf->data, len);
+
 	rets = mei_write_message(dev, &mei_hdr, hdr_len,
-				 buf->data, mei_hdr.length);
+				 data, mei_hdr.length);
 	if (rets)
 		goto err;
 
···
 		goto err;
 
 	cl->writing_state = MEI_WRITING;
-	cb->buf_idx = mei_hdr.length;
+	cb->buf_idx = len;
+	/* restore return value */
+	len = buf->size;
 
 out:
 	if (mei_hdr.msg_complete
+269
drivers/misc/mei/dma-ring.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved.
+ */
+#include <linux/dma-mapping.h>
+#include <linux/mei.h>
+
+#include "mei_dev.h"
+
+/**
+ * mei_dmam_dscr_alloc() - allocate a managed coherent buffer
+ *     for the dma descriptor
+ * @dev: mei_device
+ * @dscr: dma descriptor
+ *
+ * Return:
+ * * 0 - on success or zero allocation request
+ * * -EINVAL - if size is not power of 2
+ * * -ENOMEM - if allocation has failed
+ */
+static int mei_dmam_dscr_alloc(struct mei_device *dev,
+			       struct mei_dma_dscr *dscr)
+{
+	if (!dscr->size)
+		return 0;
+
+	if (WARN_ON(!is_power_of_2(dscr->size)))
+		return -EINVAL;
+
+	if (dscr->vaddr)
+		return 0;
+
+	dscr->vaddr = dmam_alloc_coherent(dev->dev, dscr->size, &dscr->daddr,
+					  GFP_KERNEL);
+	if (!dscr->vaddr)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * mei_dmam_dscr_free() - free a managed coherent buffer
+ *     from the dma descriptor
+ * @dev: mei_device
+ * @dscr: dma descriptor
+ */
+static void mei_dmam_dscr_free(struct mei_device *dev,
+			       struct mei_dma_dscr *dscr)
+{
+	if (!dscr->vaddr)
+		return;
+
+	dmam_free_coherent(dev->dev, dscr->size, dscr->vaddr, dscr->daddr);
+	dscr->vaddr = NULL;
+}
+
+/**
+ * mei_dmam_ring_free() - free dma ring buffers
+ * @dev: mei device
+ */
+void mei_dmam_ring_free(struct mei_device *dev)
+{
+	int i;
+
+	for (i = 0; i < DMA_DSCR_NUM; i++)
+		mei_dmam_dscr_free(dev, &dev->dr_dscr[i]);
+}
+
+/**
+ * mei_dmam_ring_alloc() - allocate dma ring buffers
+ * @dev: mei device
+ *
+ * Return: -ENOMEM on allocation failure, 0 otherwise
+ */
+int mei_dmam_ring_alloc(struct mei_device *dev)
+{
+	int i;
+
+	for (i = 0; i < DMA_DSCR_NUM; i++)
+		if (mei_dmam_dscr_alloc(dev, &dev->dr_dscr[i]))
+			goto err;
+
+	return 0;
+
+err:
+	mei_dmam_ring_free(dev);
+	return -ENOMEM;
+}
+
+/**
+ * mei_dma_ring_is_allocated() - check if dma ring is allocated
+ * @dev: mei device
+ *
+ * Return: true if dma ring is allocated
+ */
+bool mei_dma_ring_is_allocated(struct mei_device *dev)
+{
+	return !!dev->dr_dscr[DMA_DSCR_HOST].vaddr;
+}
+
+static inline
+struct hbm_dma_ring_ctrl *mei_dma_ring_ctrl(struct mei_device *dev)
+{
+	return (struct hbm_dma_ring_ctrl *)dev->dr_dscr[DMA_DSCR_CTRL].vaddr;
+}
+
+/**
+ * mei_dma_ring_reset() - reset the dma control block
+ * @dev: mei device
+ */
+void mei_dma_ring_reset(struct mei_device *dev)
+{
+	struct hbm_dma_ring_ctrl *ctrl = mei_dma_ring_ctrl(dev);
+
+	if (!ctrl)
+		return;
+
+	memset(ctrl, 0, sizeof(*ctrl));
+}
+
+/**
+ * mei_dma_copy_from() - copy from the dma ring into a buffer
+ * @dev: mei device
+ * @buf: data buffer
+ * @offset: offset in slots
+ * @n: number of slots to copy
+ */
+static size_t mei_dma_copy_from(struct mei_device *dev, unsigned char *buf,
+				u32 offset, u32 n)
+{
+	unsigned char *dbuf = dev->dr_dscr[DMA_DSCR_DEVICE].vaddr;
+
+	size_t b_offset = offset << 2;
+	size_t b_n = n << 2;
+
+	memcpy(buf, dbuf + b_offset, b_n);
+
+	return b_n;
+}
+
+/**
+ * mei_dma_copy_to() - copy from a buffer to the dma ring
+ * @dev: mei device
+ * @buf: data buffer
+ * @offset: offset in slots
+ * @n: number of slots to copy
+ */
+static size_t mei_dma_copy_to(struct mei_device *dev, unsigned char *buf,
+			      u32 offset, u32 n)
+{
+	unsigned char *hbuf = dev->dr_dscr[DMA_DSCR_HOST].vaddr;
+
+	size_t b_offset = offset << 2;
+	size_t b_n = n << 2;
+
+	memcpy(hbuf + b_offset, buf, b_n);
+
+	return b_n;
+}
+
+/**
+ * mei_dma_ring_read() - read data from the ring
+ * @dev: mei device
+ * @buf: buffer to read into; may be NULL to drop the data
+ * @len: length to read
+ */
+void mei_dma_ring_read(struct mei_device *dev, unsigned char *buf, u32 len)
+{
+	struct hbm_dma_ring_ctrl *ctrl = mei_dma_ring_ctrl(dev);
+	u32 dbuf_depth;
+	u32 rd_idx, rem, slots;
+
+	if (WARN_ON(!ctrl))
+		return;
+
+	dev_dbg(dev->dev, "reading from dma %u bytes\n", len);
+
+	if (!len)
+		return;
+
+	dbuf_depth = dev->dr_dscr[DMA_DSCR_DEVICE].size >> 2;
+	rd_idx = READ_ONCE(ctrl->dbuf_rd_idx) & (dbuf_depth - 1);
+	slots = mei_data2slots(len);
+
+	/* if buf is NULL we drop the packet by advancing the pointer */
+	if (!buf)
+		goto out;
+
+	if (rd_idx + slots > dbuf_depth) {
+		buf += mei_dma_copy_from(dev, buf, rd_idx, dbuf_depth - rd_idx);
+		rem = slots - (dbuf_depth - rd_idx);
+		rd_idx = 0;
+	} else {
+		rem = slots;
+	}
+
+	mei_dma_copy_from(dev, buf, rd_idx, rem);
+out:
+	WRITE_ONCE(ctrl->dbuf_rd_idx, ctrl->dbuf_rd_idx + slots);
+}
+
+static inline u32 mei_dma_ring_hbuf_depth(struct mei_device *dev)
+{
+	return dev->dr_dscr[DMA_DSCR_HOST].size >> 2;
+}
+
+/**
+ * mei_dma_ring_empty_slots() - calculate number of empty slots in dma ring
+ * @dev: mei_device
+ *
+ * Return: number of empty slots
+ */
+u32 mei_dma_ring_empty_slots(struct mei_device *dev)
+{
+	struct hbm_dma_ring_ctrl *ctrl = mei_dma_ring_ctrl(dev);
+	u32 wr_idx, rd_idx, hbuf_depth, empty;
+
+	if (!mei_dma_ring_is_allocated(dev))
+		return 0;
+
+	if (WARN_ON(!ctrl))
+		return 0;
+
+	/* easier to work in slots */
+	hbuf_depth = mei_dma_ring_hbuf_depth(dev);
+	rd_idx = READ_ONCE(ctrl->hbuf_rd_idx);
+	wr_idx = READ_ONCE(ctrl->hbuf_wr_idx);
+
+	if (rd_idx > wr_idx)
+		empty = rd_idx - wr_idx;
+	else
+		empty = hbuf_depth - (wr_idx - rd_idx);
+
+	return empty;
+}
+
+/**
+ * mei_dma_ring_write() - write data to the dma ring host buffer
+ * @dev: mei_device
+ * @buf: data to be written
+ * @len: data length
+ */
+void mei_dma_ring_write(struct mei_device *dev, unsigned char *buf, u32 len)
+{
+	struct hbm_dma_ring_ctrl *ctrl = mei_dma_ring_ctrl(dev);
+	u32 hbuf_depth;
+	u32 wr_idx, rem, slots;
+
+	if (WARN_ON(!ctrl))
+		return;
+
+	dev_dbg(dev->dev, "writing to dma %u bytes\n", len);
+	hbuf_depth = mei_dma_ring_hbuf_depth(dev);
+	wr_idx = READ_ONCE(ctrl->hbuf_wr_idx) & (hbuf_depth - 1);
+	slots = mei_data2slots(len);
+
+	if (wr_idx + slots > hbuf_depth) {
+		buf += mei_dma_copy_to(dev, buf, wr_idx, hbuf_depth - wr_idx);
+		rem = slots - (hbuf_depth - wr_idx);
+		wr_idx = 0;
+	} else {
+		rem = slots;
+	}
+
+	mei_dma_copy_to(dev, buf, wr_idx, rem);
+
+	WRITE_ONCE(ctrl->hbuf_wr_idx, ctrl->hbuf_wr_idx + slots);
+}
+87 -5
drivers/misc/mei/hbm.c
···
 MEI_HBM_STATE(IDLE);
 MEI_HBM_STATE(STARTING);
 MEI_HBM_STATE(STARTED);
+MEI_HBM_STATE(DR_SETUP);
 MEI_HBM_STATE(ENUM_CLIENTS);
 MEI_HBM_STATE(CLIENT_PROPERTIES);
 MEI_HBM_STATE(STOPPED);
···
 	}
 
 	dev->hbm_state = MEI_HBM_STARTING;
+	dev->init_clients_timer = MEI_CLIENTS_INIT_TIMEOUT;
+	mei_schedule_stall_timer(dev);
+	return 0;
+}
+
+/**
+ * mei_hbm_dma_setup_req() - setup DMA request
+ * @dev: the device structure
+ *
+ * Return: 0 on success and < 0 on failure
+ */
+static int mei_hbm_dma_setup_req(struct mei_device *dev)
+{
+	struct mei_msg_hdr mei_hdr;
+	struct hbm_dma_setup_request req;
+	const size_t len = sizeof(struct hbm_dma_setup_request);
+	unsigned int i;
+	int ret;
+
+	mei_hbm_hdr(&mei_hdr, len);
+
+	memset(&req, 0, len);
+	req.hbm_cmd = MEI_HBM_DMA_SETUP_REQ_CMD;
+	for (i = 0; i < DMA_DSCR_NUM; i++) {
+		phys_addr_t paddr;
+
+		paddr = dev->dr_dscr[i].daddr;
+		req.dma_dscr[i].addr_hi = upper_32_bits(paddr);
+		req.dma_dscr[i].addr_lo = lower_32_bits(paddr);
+		req.dma_dscr[i].size = dev->dr_dscr[i].size;
+	}
+
+	mei_dma_ring_reset(dev);
+
+	ret = mei_hbm_write_message(dev, &mei_hdr, &req);
+	if (ret) {
+		dev_err(dev->dev, "dma setup request write failed: ret = %d.\n",
+			ret);
+		return ret;
+	}
+
+	dev->hbm_state = MEI_HBM_DR_SETUP;
 	dev->init_clients_timer = MEI_CLIENTS_INIT_TIMEOUT;
 	mei_schedule_stall_timer(dev);
 	return 0;
···
 	struct hbm_host_version_response *version_res;
 	struct hbm_props_response *props_res;
 	struct hbm_host_enum_response *enum_res;
+	struct hbm_dma_setup_response *dma_setup_res;
 	struct hbm_add_client_request *add_cl_req;
 	int ret;
 
···
 			return -EPROTO;
 		}
 
-		if (mei_hbm_enum_clients_req(dev)) {
-			dev_err(dev->dev, "hbm: start: failed to send enumeration request\n");
-			return -EIO;
+		if (dev->hbm_f_dr_supported) {
+			if (mei_dmam_ring_alloc(dev))
+				dev_info(dev->dev, "running w/o dma ring\n");
+			if (mei_dma_ring_is_allocated(dev)) {
+				if (mei_hbm_dma_setup_req(dev))
+					return -EIO;
+
+				wake_up(&dev->wait_hbm_start);
+				break;
+			}
 		}
 
+		dev->hbm_f_dr_supported = 0;
+		mei_dmam_ring_free(dev);
+
+		if (mei_hbm_enum_clients_req(dev))
+			return -EIO;
+
 		wake_up(&dev->wait_hbm_start);
+		break;
+
+	case MEI_HBM_DMA_SETUP_RES_CMD:
+		dev_dbg(dev->dev, "hbm: dma setup response: message received.\n");
+
+		dev->init_clients_timer = 0;
+
+		if (dev->hbm_state != MEI_HBM_DR_SETUP) {
+			dev_err(dev->dev, "hbm: dma setup response: state mismatch, [%d, %d]\n",
+				dev->dev_state, dev->hbm_state);
+			return -EPROTO;
+		}
+
+		dma_setup_res = (struct hbm_dma_setup_response *)mei_msg;
+
+		if (dma_setup_res->status) {
+			dev_info(dev->dev, "hbm: dma setup response: failure = %d %s\n",
+				 dma_setup_res->status,
+				 mei_hbm_status_str(dma_setup_res->status));
+			dev->hbm_f_dr_supported = 0;
+			mei_dmam_ring_free(dev);
+		}
+
+		if (mei_hbm_enum_clients_req(dev))
+			return -EIO;
 		break;
 
 	case CLIENT_CONNECT_RES_CMD:
···
 		break;
 
 	default:
-		BUG();
-		break;
+		WARN(1, "hbm: wrong command %d\n", mei_msg->hbm_cmd);
+		return -EPROTO;
 
 	}
 	return 0;
+2
drivers/misc/mei/hbm.h
···
  *
  * @MEI_HBM_IDLE : protocol not started
  * @MEI_HBM_STARTING : start request message was sent
+ * @MEI_HBM_DR_SETUP : dma ring setup request message was sent
  * @MEI_HBM_ENUM_CLIENTS : enumeration request was sent
  * @MEI_HBM_CLIENT_PROPERTIES : acquiring clients properties
  * @MEI_HBM_STARTED : enumeration was completed
···
 enum mei_hbm_state {
 	MEI_HBM_IDLE = 0,
 	MEI_HBM_STARTING,
+	MEI_HBM_DR_SETUP,
 	MEI_HBM_ENUM_CLIENTS,
 	MEI_HBM_CLIENT_PROPERTIES,
 	MEI_HBM_STARTED,
+6
drivers/misc/mei/hw-me.c
···
 {
 	struct mei_device *dev;
 	struct mei_me_hw *hw;
+	int i;
 
 	dev = devm_kzalloc(&pdev->dev, sizeof(struct mei_device) +
 			   sizeof(struct mei_me_hw), GFP_KERNEL);
 	if (!dev)
 		return NULL;
+
 	hw = to_me_hw(dev);
+
+	for (i = 0; i < DMA_DSCR_NUM; i++)
+		dev->dr_dscr[i].size = cfg->dma_size[i];
 
 	mei_device_init(dev, &pdev->dev, &mei_me_hw_ops);
 	hw->cfg = cfg;
+
 	return dev;
 }
+28 -1
drivers/misc/mei/hw.h
···
 /*
  * MEI Version
  */
-#define HBM_MINOR_VERSION 0
+#define HBM_MINOR_VERSION 1
 #define HBM_MAJOR_VERSION 2
 
···
  * @dma_ring: message is on dma ring
  * @internal: message is internal
  * @msg_complete: last packet of the message
+ * @extension: extension of the header
  */
 struct mei_msg_hdr {
 	u32 me_addr:8;
···
 	u32 dma_ring:1;
 	u32 internal:1;
 	u32 msg_complete:1;
+	u32 extension[0];
 } __packed;
+
+#define MEI_MSG_HDR_MAX 2
 
 struct mei_bus_message {
 	u8 hbm_cmd;
···
 	u8 hbm_cmd;
 	u8 status;
 	u8 reserved[2];
 } __packed;
+
+/**
+ * struct hbm_dma_ring_ctrl - dma ring control block
+ *
+ * @hbuf_wr_idx: host circular buffer write index in slots
+ * @reserved1: reserved for alignment
+ * @hbuf_rd_idx: host circular buffer read index in slots
+ * @reserved2: reserved for alignment
+ * @dbuf_wr_idx: device circular buffer write index in slots
+ * @reserved3: reserved for alignment
+ * @dbuf_rd_idx: device circular buffer read index in slots
+ * @reserved4: reserved for alignment
+ */
+struct hbm_dma_ring_ctrl {
+	u32 hbuf_wr_idx;
+	u32 reserved1;
+	u32 hbuf_rd_idx;
+	u32 reserved2;
+	u32 dbuf_wr_idx;
+	u32 reserved3;
+	u32 dbuf_rd_idx;
+	u32 reserved4;
+} __packed;
 
 #endif
+1 -1
drivers/misc/mei/init.c
···
 
 	mei_hbm_reset(dev);
 
-	dev->rd_msg_hdr = 0;
+	memset(dev->rd_msg_hdr, 0, sizeof(dev->rd_msg_hdr));
 
 	if (ret) {
 		dev_err(dev->dev, "hw_reset failed ret = %d\n", ret);
+29 -12
drivers/misc/mei/interrupt.c
···
  */
 static void mei_irq_discard_msg(struct mei_device *dev, struct mei_msg_hdr *hdr)
 {
+	if (hdr->dma_ring)
+		mei_dma_ring_read(dev, NULL, hdr->extension[0]);
 	/*
 	 * no need to check for size as it is guaranteed
 	 * that length fits into rd_msg_buf
···
 	struct mei_device *dev = cl->dev;
 	struct mei_cl_cb *cb;
 	size_t buf_sz;
+	u32 length;
 
 	cb = list_first_entry_or_null(&cl->rd_pending, struct mei_cl_cb, list);
 	if (!cb) {
···
 		goto discard;
 	}
 
-	buf_sz = mei_hdr->length + cb->buf_idx;
+	length = mei_hdr->dma_ring ? mei_hdr->extension[0] : mei_hdr->length;
+
+	buf_sz = length + cb->buf_idx;
 	/* catch for integer overflow */
 	if (buf_sz < cb->buf_idx) {
 		cl_err(dev, cl, "message is too big len %d idx %zu\n",
-		       mei_hdr->length, cb->buf_idx);
+		       length, cb->buf_idx);
 		cb->status = -EMSGSIZE;
 		goto discard;
 	}
 
 	if (cb->buf.size < buf_sz) {
 		cl_dbg(dev, cl, "message overflow. size %zu len %d idx %zu\n",
-		       cb->buf.size, mei_hdr->length, cb->buf_idx);
+		       cb->buf.size, length, cb->buf_idx);
 		cb->status = -EMSGSIZE;
 		goto discard;
 	}
 
+	if (mei_hdr->dma_ring)
+		mei_dma_ring_read(dev, cb->buf.data + cb->buf_idx, length);
+
+	/* for DMA read 0 length to generate an interrupt to the device */
 	mei_read_slots(dev, cb->buf.data + cb->buf_idx, mei_hdr->length);
 
-	cb->buf_idx += mei_hdr->length;
+	cb->buf_idx += length;
 
 	if (mei_hdr->msg_complete) {
 		cl_dbg(dev, cl, "completed read length = %zu\n", cb->buf_idx);
···
 	if (!msg_hdr || mei_hdr->reserved)
 		return -EBADMSG;
 
+	if (mei_hdr->dma_ring && mei_hdr->length != MEI_SLOT_SIZE)
+		return -EBADMSG;
+
 	return 0;
 }
···
 	struct mei_cl *cl;
 	int ret;
 
-	if (!dev->rd_msg_hdr) {
-		dev->rd_msg_hdr = mei_read_hdr(dev);
+	if (!dev->rd_msg_hdr[0]) {
+		dev->rd_msg_hdr[0] = mei_read_hdr(dev);
 		(*slots)--;
 		dev_dbg(dev->dev, "slots =%08x.\n", *slots);
 
-		ret = hdr_is_valid(dev->rd_msg_hdr);
+		ret = hdr_is_valid(dev->rd_msg_hdr[0]);
 		if (ret) {
 			dev_err(dev->dev, "corrupted message header 0x%08X\n",
-				dev->rd_msg_hdr);
+				dev->rd_msg_hdr[0]);
 			goto end;
 		}
 	}
 
-	mei_hdr = (struct mei_msg_hdr *)&dev->rd_msg_hdr;
+	mei_hdr = (struct mei_msg_hdr *)dev->rd_msg_hdr;
 	dev_dbg(dev->dev, MEI_HDR_FMT, MEI_HDR_PRM(mei_hdr));
 
 	if (mei_slots2data(*slots) < mei_hdr->length) {
···
 		/* we can't read the message */
 		ret = -ENODATA;
 		goto end;
 	}
 
+	if (mei_hdr->dma_ring) {
+		dev->rd_msg_hdr[1] = mei_read_hdr(dev);
+		(*slots)--;
+		mei_hdr->length = 0;
+	}
 
 	/* HBM message */
···
 		goto reset_slots;
 	}
 	dev_err(dev->dev, "no destination client found 0x%08X\n",
-		dev->rd_msg_hdr);
+		dev->rd_msg_hdr[0]);
 	ret = -EBADMSG;
 	goto end;
 }
···
 
 reset_slots:
 	/* reset the number of slots and header */
+	memset(dev->rd_msg_hdr, 0, sizeof(dev->rd_msg_hdr));
 	*slots = mei_count_full_read_slots(dev);
-	dev->rd_msg_hdr = 0;
-
 	if (*slots == -EOVERFLOW) {
 		/* overflow - reset */
 		dev_err(dev->dev, "resetting due to slots overflow.\n");
+25 -1
drivers/misc/mei/mei_dev.h
···
 	unsigned char *data;
 };
 
+/**
+ * struct mei_dma_dscr - dma address descriptor
+ *
+ * @vaddr: dma buffer virtual address
+ * @daddr: dma buffer physical address
+ * @size: dma buffer size
+ */
+struct mei_dma_dscr {
+	void *vaddr;
+	dma_addr_t daddr;
+	size_t size;
+};
+
 /* Maximum number of processed FW status registers */
 #define MEI_FW_STATUS_MAX 6
 /* Minimal buffer for FW status string (8 bytes in dw + space or '\0') */
···
  * @rd_msg_hdr : read message header storage
  *
  * @hbuf_is_ready : query if the host/write buffer is ready
+ * @dr_dscr: DMA ring descriptors: TX, RX, and CTRL
  *
  * @version : HBM protocol version in use
  * @hbm_f_pg_supported : hbm feature pgi protocol
···
 #endif /* CONFIG_PM */
 
 	unsigned char rd_msg_buf[MEI_RD_MSG_BUF_SIZE];
-	u32 rd_msg_hdr;
+	u32 rd_msg_hdr[MEI_MSG_HDR_MAX];
 
 	/* write buffer */
 	bool hbuf_is_ready;
+
+	struct mei_dma_dscr dr_dscr[DMA_DSCR_NUM];
 
 	struct hbm_version version;
 	unsigned int hbm_f_pg_supported:1;
···
 int mei_restart(struct mei_device *dev);
 void mei_stop(struct mei_device *dev);
 void mei_cancel_work(struct mei_device *dev);
+
+int mei_dmam_ring_alloc(struct mei_device *dev);
+void mei_dmam_ring_free(struct mei_device *dev);
+bool mei_dma_ring_is_allocated(struct mei_device *dev);
+void mei_dma_ring_reset(struct mei_device *dev);
+void mei_dma_ring_read(struct mei_device *dev, unsigned char *buf, u32 len);
+void mei_dma_ring_write(struct mei_device *dev, unsigned char *buf, u32 len);
+u32 mei_dma_ring_empty_slots(struct mei_device *dev);
 
 /*
  * MEI interrupt functions prototype
+2 -2
drivers/misc/mei/pci-me.c
···
 	{MEI_PCI_DEVICE(MEI_DEV_ID_KBP, MEI_ME_PCH8_CFG)},
 	{MEI_PCI_DEVICE(MEI_DEV_ID_KBP_2, MEI_ME_PCH8_CFG)},
 
-	{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP, MEI_ME_PCH8_CFG)},
+	{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP, MEI_ME_PCH12_CFG)},
 	{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP_4, MEI_ME_PCH8_CFG)},
-	{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH8_CFG)},
+	{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH12_CFG)},
 	{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H_4, MEI_ME_PCH8_CFG)},
 
 	/* required last entry */
+4 -20
drivers/misc/mic/card/mic_debugfs.c
···
 static struct dentry *mic_dbg;
 
 /**
- * mic_intr_test - Send interrupts to host.
+ * mic_intr_show - Send interrupts to host.
  */
-static int mic_intr_test(struct seq_file *s, void *unused)
+static int mic_intr_show(struct seq_file *s, void *unused)
 {
 	struct mic_driver *mdrv = s->private;
 	struct mic_device *mdev = &mdrv->mdev;
···
 	return 0;
 }
 
-static int mic_intr_test_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, mic_intr_test, inode->i_private);
-}
-
-static int mic_intr_test_release(struct inode *inode, struct file *file)
-{
-	return single_release(inode, file);
-}
-
-static const struct file_operations intr_test_ops = {
-	.owner   = THIS_MODULE,
-	.open    = mic_intr_test_open,
-	.read    = seq_read,
-	.llseek  = seq_lseek,
-	.release = mic_intr_test_release
-};
+DEFINE_SHOW_ATTRIBUTE(mic_intr);
 
 /**
  * mic_create_card_debug_dir - Initialize MIC debugfs entries.
···
 	}
 
 	d = debugfs_create_file("intr_test", 0444, mdrv->dbg_dir,
-				mdrv, &intr_test_ops);
+				mdrv, &mic_intr_fops);
 
 	if (!d) {
 		dev_err(mdrv->dev,
+9 -30
drivers/misc/mic/cosm/cosm_debugfs.c
···
 static struct dentry *cosm_dbg;
 
 /**
- * cosm_log_buf_show - Display MIC kernel log buffer
+ * log_buf_show - Display MIC kernel log buffer
  *
  * log_buf addr/len is read from System.map by user space
  * and populated in sysfs entries.
  */
-static int cosm_log_buf_show(struct seq_file *s, void *unused)
+static int log_buf_show(struct seq_file *s, void *unused)
 {
 	void __iomem *log_buf_va;
 	int __iomem *log_buf_len_va;
···
 	return 0;
 }
 
-static int cosm_log_buf_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, cosm_log_buf_show, inode->i_private);
-}
-
-static const struct file_operations log_buf_ops = {
-	.owner   = THIS_MODULE,
-	.open    = cosm_log_buf_open,
-	.read    = seq_read,
-	.llseek  = seq_lseek,
-	.release = single_release
-};
+DEFINE_SHOW_ATTRIBUTE(log_buf);
 
 /**
- * cosm_force_reset_show - Force MIC reset
+ * force_reset_show - Force MIC reset
  *
  * Invokes the force_reset COSM bus op instead of the standard reset
  * op in case a force reset of the MIC device is required
  */
-static int cosm_force_reset_show(struct seq_file *s, void *pos)
+static int force_reset_show(struct seq_file *s, void *pos)
 {
 	struct cosm_device *cdev = s->private;
···
 	return 0;
 }
 
-static int cosm_force_reset_debug_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, cosm_force_reset_show, inode->i_private);
-}
-
-static const struct file_operations force_reset_ops = {
-	.owner   = THIS_MODULE,
-	.open    = cosm_force_reset_debug_open,
-	.read    = seq_read,
-	.llseek  = seq_lseek,
-	.release = single_release
-};
+DEFINE_SHOW_ATTRIBUTE(force_reset);
 
 void cosm_create_debug_dir(struct cosm_device *cdev)
 {
···
 	if (!cdev->dbg_dir)
 		return;
 
-	debugfs_create_file("log_buf", 0444, cdev->dbg_dir, cdev, &log_buf_ops);
+	debugfs_create_file("log_buf", 0444, cdev->dbg_dir, cdev,
+			    &log_buf_fops);
 	debugfs_create_file("force_reset", 0444, cdev->dbg_dir, cdev,
-			    &force_reset_ops);
+			    &force_reset_fops);
 }
 
 void cosm_delete_debug_dir(struct cosm_device *cdev)
+7 -55
drivers/misc/mic/host/mic_debugfs.c
···
 	return 0;
 }
 
-static int mic_smpt_debug_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, mic_smpt_show, inode->i_private);
-}
-
-static int mic_smpt_debug_release(struct inode *inode, struct file *file)
-{
-	return single_release(inode, file);
-}
-
-static const struct file_operations smpt_file_ops = {
-	.owner   = THIS_MODULE,
-	.open    = mic_smpt_debug_open,
-	.read    = seq_read,
-	.llseek  = seq_lseek,
-	.release = mic_smpt_debug_release
-};
+DEFINE_SHOW_ATTRIBUTE(mic_smpt);
 
 static int mic_post_code_show(struct seq_file *s, void *pos)
 {
···
 	return 0;
 }
 
-static int mic_post_code_debug_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, mic_post_code_show, inode->i_private);
-}
-
-static int mic_post_code_debug_release(struct inode *inode, struct file *file)
-{
-	return single_release(inode, file);
-}
-
-static const struct file_operations post_code_ops = {
-	.owner   = THIS_MODULE,
-	.open    = mic_post_code_debug_open,
-	.read    = seq_read,
-	.llseek  = seq_lseek,
-	.release = mic_post_code_debug_release
-};
+DEFINE_SHOW_ATTRIBUTE(mic_post_code);
 
 static int mic_msi_irq_info_show(struct seq_file *s, void *pos)
 {
···
 	return 0;
 }
 
-static int mic_msi_irq_info_debug_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, mic_msi_irq_info_show, inode->i_private);
-}
-
-static int
-mic_msi_irq_info_debug_release(struct inode *inode, struct file *file)
-{
-	return single_release(inode, file);
-}
-
-static const struct file_operations msi_irq_info_ops = {
-	.owner   = THIS_MODULE,
-	.open    = mic_msi_irq_info_debug_open,
-	.read    = seq_read,
-	.llseek  = seq_lseek,
-	.release = mic_msi_irq_info_debug_release
-};
+DEFINE_SHOW_ATTRIBUTE(mic_msi_irq_info);
 
 /**
  * mic_create_debug_dir - Initialize MIC debugfs entries.
···
 	if (!mdev->dbg_dir)
 		return;
 
-	debugfs_create_file("smpt", 0444, mdev->dbg_dir, mdev, &smpt_file_ops);
+	debugfs_create_file("smpt", 0444, mdev->dbg_dir, mdev,
+			    &mic_smpt_fops);
 
 	debugfs_create_file("post_code", 0444, mdev->dbg_dir, mdev,
-			    &post_code_ops);
+			    &mic_post_code_fops);
 
 	debugfs_create_file("msi_irq_info", 0444, mdev->dbg_dir, mdev,
-			    &msi_irq_info_ops);
+			    &mic_msi_irq_info_fops);
 }
 
 /**
+6 -38
drivers/misc/mic/scif/scif_debugfs.c
···
 /* Debugfs parent dir */
 static struct dentry *scif_dbg;
 
-static int scif_dev_test(struct seq_file *s, void *unused)
+static int scif_dev_show(struct seq_file *s, void *unused)
 {
 	int node;
 
···
 	return 0;
 }
 
-static int scif_dev_test_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, scif_dev_test, inode->i_private);
-}
-
-static int scif_dev_test_release(struct inode *inode, struct file *file)
-{
-	return single_release(inode, file);
-}
-
-static const struct file_operations scif_dev_ops = {
-	.owner   = THIS_MODULE,
-	.open    = scif_dev_test_open,
-	.read    = seq_read,
-	.llseek  = seq_lseek,
-	.release = scif_dev_test_release
-};
+DEFINE_SHOW_ATTRIBUTE(scif_dev);
 
 static void scif_display_window(struct scif_window *window, struct seq_file *s)
 {
···
 	}
 }
 
-static int scif_rma_test(struct seq_file *s, void *unused)
+static int scif_rma_show(struct seq_file *s, void *unused)
 {
 	struct scif_endpt *ep;
 	struct list_head *pos;
···
 	return 0;
 }
 
-static int scif_rma_test_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, scif_rma_test, inode->i_private);
-}
-
-static int scif_rma_test_release(struct inode *inode, struct file *file)
-{
-	return single_release(inode, file);
-}
-
-static const struct file_operations scif_rma_ops = {
-	.owner   = THIS_MODULE,
-	.open    = scif_rma_test_open,
-	.read    = seq_read,
-	.llseek  = seq_lseek,
-	.release = scif_rma_test_release
-};
+DEFINE_SHOW_ATTRIBUTE(scif_rma);
 
 void __init scif_init_debugfs(void)
 {
···
 		return;
 	}
 
-	debugfs_create_file("scif_dev", 0444, scif_dbg, NULL, &scif_dev_ops);
-	debugfs_create_file("scif_rma", 0444, scif_dbg, NULL, &scif_rma_ops);
+	debugfs_create_file("scif_dev", 0444, scif_dbg, NULL, &scif_dev_fops);
+	debugfs_create_file("scif_rma", 0444, scif_dbg, NULL, &scif_rma_fops);
 	debugfs_create_u8("en_msg_log", 0666, scif_dbg, &scif_info.en_msg_log);
 	debugfs_create_u8("p2p_enable", 0666, scif_dbg, &scif_info.p2p_enable);
 }
+17 -5
drivers/misc/mic/scif/scif_fence.c
···
 
 static void scif_prog_signal_cb(void *arg)
 {
-	struct scif_status *status = arg;
+	struct scif_cb_arg *cb_arg = arg;
 
-	dma_pool_free(status->ep->remote_dev->signal_pool, status,
-		      status->src_dma_addr);
+	dma_pool_free(cb_arg->ep->remote_dev->signal_pool, cb_arg->status,
+		      cb_arg->src_dma_addr);
+	kfree(cb_arg);
 }
 
 static int _scif_prog_signal(scif_epd_t epd, dma_addr_t dst, u64 val)
···
 	bool x100 = !is_dma_copy_aligned(chan->device, 1, 1, 1);
 	struct dma_async_tx_descriptor *tx;
 	struct scif_status *status = NULL;
+	struct scif_cb_arg *cb_arg = NULL;
 	dma_addr_t src;
 	dma_cookie_t cookie;
 	int err;
···
 		goto dma_fail;
 	}
 	if (!x100) {
+		cb_arg = kmalloc(sizeof(*cb_arg), GFP_KERNEL);
+		if (!cb_arg) {
+			err = -ENOMEM;
+			goto dma_fail;
+		}
+		cb_arg->src_dma_addr = src;
+		cb_arg->status = status;
+		cb_arg->ep = ep;
 		tx->callback = scif_prog_signal_cb;
-		tx->callback_param = status;
+		tx->callback_param = cb_arg;
 	}
 	cookie = tx->tx_submit(tx);
 	if (dma_submit_error(cookie)) {
···
 	dma_async_issue_pending(chan);
 	return 0;
 dma_fail:
-	if (!x100)
+	if (!x100) {
 		dma_pool_free(ep->remote_dev->signal_pool, status,
 			      src - offsetof(struct scif_status, val));
+		kfree(cb_arg);
+	}
 alloc_fail:
 	return err;
 }
+13
drivers/misc/mic/scif/scif_rma.h
···
 };
 
 /*
+ * struct scif_cb_arg - Stores the argument of the callback func
+ *
+ * @src_dma_addr: Source buffer DMA address
+ * @status: DMA status
+ * @ep: SCIF endpoint
+ */
+struct scif_cb_arg {
+	dma_addr_t src_dma_addr;
+	struct scif_status *status;
+	struct scif_endpt *ep;
+};
+
+/*
  * struct scif_window - Registration Window for Self and Remote
  *
  * @nr_pages: Number of pages which is defined as a s64 instead of an int
+4 -36
drivers/misc/mic/vop/vop_debugfs.c
··· 101 101 return 0; 102 102 } 103 103 104 - static int vop_dp_debug_open(struct inode *inode, struct file *file) 105 - { 106 - return single_open(file, vop_dp_show, inode->i_private); 107 - } 108 - 109 - static int vop_dp_debug_release(struct inode *inode, struct file *file) 110 - { 111 - return single_release(inode, file); 112 - } 113 - 114 - static const struct file_operations dp_ops = { 115 - .owner = THIS_MODULE, 116 - .open = vop_dp_debug_open, 117 - .read = seq_read, 118 - .llseek = seq_lseek, 119 - .release = vop_dp_debug_release 120 - }; 104 + DEFINE_SHOW_ATTRIBUTE(vop_dp); 121 105 122 106 static int vop_vdev_info_show(struct seq_file *s, void *unused) 123 107 { ··· 178 194 return 0; 179 195 } 180 196 181 - static int vop_vdev_info_debug_open(struct inode *inode, struct file *file) 182 - { 183 - return single_open(file, vop_vdev_info_show, inode->i_private); 184 - } 185 - 186 - static int vop_vdev_info_debug_release(struct inode *inode, struct file *file) 187 - { 188 - return single_release(inode, file); 189 - } 190 - 191 - static const struct file_operations vdev_info_ops = { 192 - .owner = THIS_MODULE, 193 - .open = vop_vdev_info_debug_open, 194 - .read = seq_read, 195 - .llseek = seq_lseek, 196 - .release = vop_vdev_info_debug_release 197 - }; 197 + DEFINE_SHOW_ATTRIBUTE(vop_vdev_info); 198 198 199 199 void vop_init_debugfs(struct vop_info *vi) 200 200 { ··· 190 222 pr_err("can't create debugfs dir vop\n"); 191 223 return; 192 224 } 193 - debugfs_create_file("dp", 0444, vi->dbg, vi, &dp_ops); 194 - debugfs_create_file("vdev_info", 0444, vi->dbg, vi, &vdev_info_ops); 225 + debugfs_create_file("dp", 0444, vi->dbg, vi, &vop_dp_fops); 226 + debugfs_create_file("vdev_info", 0444, vi->dbg, vi, &vop_vdev_info_fops); 195 227 } 196 228 197 229 void vop_exit_debugfs(struct vop_info *vi)
+192
drivers/misc/pvpanic.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Pvpanic Device Support 4 + * 5 + * Copyright (C) 2013 Fujitsu. 6 + * Copyright (C) 2018 ZTE. 7 + */ 8 + 9 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 10 + 11 + #include <linux/acpi.h> 12 + #include <linux/kernel.h> 13 + #include <linux/module.h> 14 + #include <linux/of.h> 15 + #include <linux/of_address.h> 16 + #include <linux/platform_device.h> 17 + #include <linux/types.h> 18 + 19 + static void __iomem *base; 20 + 21 + #define PVPANIC_PANICKED (1 << 0) 22 + 23 + MODULE_AUTHOR("Hu Tao <hutao@cn.fujitsu.com>"); 24 + MODULE_DESCRIPTION("pvpanic device driver"); 25 + MODULE_LICENSE("GPL"); 26 + 27 + static void 28 + pvpanic_send_event(unsigned int event) 29 + { 30 + iowrite8(event, base); 31 + } 32 + 33 + static int 34 + pvpanic_panic_notify(struct notifier_block *nb, unsigned long code, 35 + void *unused) 36 + { 37 + pvpanic_send_event(PVPANIC_PANICKED); 38 + return NOTIFY_DONE; 39 + } 40 + 41 + static struct notifier_block pvpanic_panic_nb = { 42 + .notifier_call = pvpanic_panic_notify, 43 + .priority = 1, /* let this called before broken drm_fb_helper */ 44 + }; 45 + 46 + #ifdef CONFIG_ACPI 47 + static int pvpanic_add(struct acpi_device *device); 48 + static int pvpanic_remove(struct acpi_device *device); 49 + 50 + static const struct acpi_device_id pvpanic_device_ids[] = { 51 + { "QEMU0001", 0 }, 52 + { "", 0 } 53 + }; 54 + MODULE_DEVICE_TABLE(acpi, pvpanic_device_ids); 55 + 56 + static struct acpi_driver pvpanic_driver = { 57 + .name = "pvpanic", 58 + .class = "QEMU", 59 + .ids = pvpanic_device_ids, 60 + .ops = { 61 + .add = pvpanic_add, 62 + .remove = pvpanic_remove, 63 + }, 64 + .owner = THIS_MODULE, 65 + }; 66 + 67 + static acpi_status 68 + pvpanic_walk_resources(struct acpi_resource *res, void *context) 69 + { 70 + struct resource r; 71 + 72 + if (acpi_dev_resource_io(res, &r)) { 73 + base = ioport_map(r.start, resource_size(&r)); 74 + return AE_OK; 75 + } else if (acpi_dev_resource_memory(res, &r)) { 76 + base = ioremap(r.start, resource_size(&r)); 77 + return AE_OK; 78 + } 79 + 80 + return AE_ERROR; 81 + } 82 + 83 + static int pvpanic_add(struct acpi_device *device) 84 + { 85 + int ret; 86 + 87 + ret = acpi_bus_get_status(device); 88 + if (ret < 0) 89 + return ret; 90 + 91 + if (!device->status.enabled || !device->status.functional) 92 + return -ENODEV; 93 + 94 + acpi_walk_resources(device->handle, METHOD_NAME__CRS, 95 + pvpanic_walk_resources, NULL); 96 + 97 + if (!base) 98 + return -ENODEV; 99 + 100 + atomic_notifier_chain_register(&panic_notifier_list, 101 + &pvpanic_panic_nb); 102 + 103 + return 0; 104 + } 105 + 106 + static int pvpanic_remove(struct acpi_device *device) 107 + { 108 + 109 + atomic_notifier_chain_unregister(&panic_notifier_list, 110 + &pvpanic_panic_nb); 111 + iounmap(base); 112 + 113 + return 0; 114 + } 115 + 116 + static int pvpanic_register_acpi_driver(void) 117 + { 118 + return acpi_bus_register_driver(&pvpanic_driver); 119 + } 120 + 121 + static void pvpanic_unregister_acpi_driver(void) 122 + { 123 + acpi_bus_unregister_driver(&pvpanic_driver); 124 + } 125 + #else 126 + static int pvpanic_register_acpi_driver(void) 127 + { 128 + return -ENODEV; 129 + } 130 + 131 + static void pvpanic_unregister_acpi_driver(void) {} 132 + #endif 133 + 134 + static int pvpanic_mmio_probe(struct platform_device *pdev) 135 + { 136 + struct resource *mem; 137 + 138 + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 139 + if (!mem) 140 + return -EINVAL; 141 + 142 + base = devm_ioremap_resource(&pdev->dev, mem); 143 + if (IS_ERR(base)) 144 + return PTR_ERR(base); 145 + 146 + atomic_notifier_chain_register(&panic_notifier_list, 147 + &pvpanic_panic_nb); 148 + 149 + return 0; 150 + } 151 + 152 + static int pvpanic_mmio_remove(struct platform_device *pdev) 153 + { 154 + 155 + atomic_notifier_chain_unregister(&panic_notifier_list, 156 + &pvpanic_panic_nb); 157 + 158 + return 0; 159 + } 160 + 161 + static const struct of_device_id pvpanic_mmio_match[] = { 162 + { .compatible = "qemu,pvpanic-mmio", }, 163 + {} 164 + }; 165 + 166 + static struct platform_driver pvpanic_mmio_driver = { 167 + .driver = { 168 + .name = "pvpanic-mmio", 169 + .of_match_table = pvpanic_mmio_match, 170 + }, 171 + .probe = pvpanic_mmio_probe, 172 + .remove = pvpanic_mmio_remove, 173 + }; 174 + 175 + static int __init pvpanic_mmio_init(void) 176 + { 177 + if (acpi_disabled) 178 + return platform_driver_register(&pvpanic_mmio_driver); 179 + else 180 + return pvpanic_register_acpi_driver(); 181 + } 182 + 183 + static void __exit pvpanic_mmio_exit(void) 184 + { 185 + if (acpi_disabled) 186 + platform_driver_unregister(&pvpanic_mmio_driver); 187 + else 188 + pvpanic_unregister_acpi_driver(); 189 + } 190 + 191 + module_init(pvpanic_mmio_init); 192 + module_exit(pvpanic_mmio_exit);
+7 -29
drivers/misc/ti-st/st_kim.c
··· 211 211 static long read_local_version(struct kim_data_s *kim_gdata, char *bts_scr_name) 212 212 { 213 213 unsigned short version = 0, chip = 0, min_ver = 0, maj_ver = 0; 214 - const char read_ver_cmd[] = { 0x01, 0x01, 0x10, 0x00 }; 214 + static const char read_ver_cmd[] = { 0x01, 0x01, 0x10, 0x00 }; 215 215 long timeout; 216 216 217 217 pr_debug("%s", __func__); ··· 564 564 /* functions called from subsystems */ 565 565 /* called when debugfs entry is read from */ 566 566 567 - static int show_version(struct seq_file *s, void *unused) 567 + static int version_show(struct seq_file *s, void *unused) 568 568 { 569 569 struct kim_data_s *kim_gdata = (struct kim_data_s *)s->private; 570 570 seq_printf(s, "%04X %d.%d.%d\n", kim_gdata->version.full, ··· 573 573 return 0; 574 574 } 575 575 576 - static int show_list(struct seq_file *s, void *unused) 576 + static int list_show(struct seq_file *s, void *unused) 577 577 { 578 578 struct kim_data_s *kim_gdata = (struct kim_data_s *)s->private; 579 579 kim_st_list_protocols(kim_gdata->core_data, s); ··· 688 688 *core_data = NULL; 689 689 } 690 690 691 - static int kim_version_open(struct inode *i, struct file *f) 692 - { 693 - return single_open(f, show_version, i->i_private); 694 - } 695 - 696 - static int kim_list_open(struct inode *i, struct file *f) 697 - { 698 - return single_open(f, show_list, i->i_private); 699 - } 700 - 701 - static const struct file_operations version_debugfs_fops = { 702 - /* version info */ 703 - .open = kim_version_open, 704 - .read = seq_read, 705 - .llseek = seq_lseek, 706 - .release = single_release, 707 - }; 708 - static const struct file_operations list_debugfs_fops = { 709 - /* protocols info */ 710 - .open = kim_list_open, 711 - .read = seq_read, 712 - .llseek = seq_lseek, 713 - .release = single_release, 714 - }; 691 + DEFINE_SHOW_ATTRIBUTE(version); 692 + DEFINE_SHOW_ATTRIBUTE(list); 715 693 716 694 /**********************************************************************/ 717 695 /* functions called from platform device driver subsystem ··· 767 789 } 768 790 769 791 debugfs_create_file("version", S_IRUGO, kim_debugfs_dir, 770 - kim_gdata, &version_debugfs_fops); 792 + kim_gdata, &version_fops); 771 793 debugfs_create_file("protocols", S_IRUGO, kim_debugfs_dir, 772 - kim_gdata, &list_debugfs_fops); 794 + kim_gdata, &list_fops); 773 795 return 0; 774 796 775 797 err_sysfs_group:
+1 -1
drivers/misc/vexpress-syscfg.c
··· 61 61 int tries; 62 62 long timeout; 63 63 64 - if (WARN_ON(index > func->num_templates)) 64 + if (WARN_ON(index >= func->num_templates)) 65 65 return -EINVAL; 66 66 67 67 command = readl(syscfg->base + SYS_CFGCTRL);
+1 -12
drivers/misc/vmw_balloon.c
··· 1470 1470 return 0; 1471 1471 } 1472 1472 1473 - static int vmballoon_debug_open(struct inode *inode, struct file *file) 1474 - { 1475 - return single_open(file, vmballoon_debug_show, inode->i_private); 1476 - } 1477 - 1478 - static const struct file_operations vmballoon_debug_fops = { 1479 - .owner = THIS_MODULE, 1480 - .open = vmballoon_debug_open, 1481 - .read = seq_read, 1482 - .llseek = seq_lseek, 1483 - .release = single_release, 1484 - }; 1473 + DEFINE_SHOW_ATTRIBUTE(vmballoon_debug); 1485 1474 1486 1475 static int __init vmballoon_debugfs_init(struct vmballoon *b) 1487 1476 {
+4 -14
drivers/misc/vmw_vmci/vmci_host.c
··· 750 750 if (copy_from_user(&set_info, uptr, sizeof(set_info))) 751 751 return -EFAULT; 752 752 753 - cpt_buf = kmalloc(set_info.buf_size, GFP_KERNEL); 754 - if (!cpt_buf) { 755 - vmci_ioctl_err( 756 - "cannot allocate memory to set cpt state (type=%d)\n", 757 - set_info.cpt_type); 758 - return -ENOMEM; 759 - } 760 - 761 - if (copy_from_user(cpt_buf, (void __user *)(uintptr_t)set_info.cpt_buf, 762 - set_info.buf_size)) { 763 - retval = -EFAULT; 764 - goto out; 765 - } 753 + cpt_buf = memdup_user((void __user *)(uintptr_t)set_info.cpt_buf, 754 + set_info.buf_size); 755 + if (IS_ERR(cpt_buf)) 756 + return PTR_ERR(cpt_buf); 766 757 767 758 cid = vmci_ctx_get_id(vmci_host_dev->context); 768 759 set_info.result = vmci_ctx_set_chkpt_state(cid, set_info.cpt_type, ··· 761 770 762 771 retval = copy_to_user(uptr, &set_info, sizeof(set_info)) ? -EFAULT : 0; 763 772 764 - out: 765 773 kfree(cpt_buf); 766 774 return retval; 767 775 }
+1
drivers/mtd/Kconfig
··· 1 1 menuconfig MTD 2 2 tristate "Memory Technology Device (MTD) support" 3 + imply NVMEM 3 4 help 4 5 Memory Technology Devices are flash, RAM and similar chips, often 5 6 used for solid state file systems on embedded devices. This option
+56
drivers/mtd/mtdcore.c
··· 41 41 #include <linux/reboot.h> 42 42 #include <linux/leds.h> 43 43 #include <linux/debugfs.h> 44 + #include <linux/nvmem-provider.h> 44 45 45 46 #include <linux/mtd/mtd.h> 46 47 #include <linux/mtd/partitions.h> ··· 489 488 } 490 489 EXPORT_SYMBOL_GPL(mtd_pairing_groups); 491 490 491 + static int mtd_nvmem_reg_read(void *priv, unsigned int offset, 492 + void *val, size_t bytes) 493 + { 494 + struct mtd_info *mtd = priv; 495 + size_t retlen; 496 + int err; 497 + 498 + err = mtd_read(mtd, offset, bytes, &retlen, val); 499 + if (err && err != -EUCLEAN) 500 + return err; 501 + 502 + return retlen == bytes ? 0 : -EIO; 503 + } 504 + 505 + static int mtd_nvmem_add(struct mtd_info *mtd) 506 + { 507 + struct nvmem_config config = {}; 508 + 509 + config.dev = &mtd->dev; 510 + config.name = mtd->name; 511 + config.owner = THIS_MODULE; 512 + config.reg_read = mtd_nvmem_reg_read; 513 + config.size = mtd->size; 514 + config.word_size = 1; 515 + config.stride = 1; 516 + config.read_only = true; 517 + config.root_only = true; 518 + config.no_of_node = true; 519 + config.priv = mtd; 520 + 521 + mtd->nvmem = nvmem_register(&config); 522 + if (IS_ERR(mtd->nvmem)) { 523 + /* Just ignore if there is no NVMEM support in the kernel */ 524 + if (PTR_ERR(mtd->nvmem) == -ENOSYS) { 525 + mtd->nvmem = NULL; 526 + } else { 527 + dev_err(&mtd->dev, "Failed to register NVMEM device\n"); 528 + return PTR_ERR(mtd->nvmem); 529 + } 530 + } 531 + 532 + return 0; 533 + } 534 + 492 535 static struct dentry *dfs_dir_mtd; 493 536 494 537 /** ··· 615 570 if (error) 616 571 goto fail_added; 617 572 573 + /* Add the nvmem provider */ 574 + error = mtd_nvmem_add(mtd); 575 + if (error) 576 + goto fail_nvmem_add; 577 + 618 578 if (!IS_ERR_OR_NULL(dfs_dir_mtd)) { 619 579 mtd->dbg.dfs_dir = debugfs_create_dir(dev_name(&mtd->dev), dfs_dir_mtd); 620 580 if (IS_ERR_OR_NULL(mtd->dbg.dfs_dir)) { ··· 645 595 __module_get(THIS_MODULE); 646 596 return 0; 647 597 598 + fail_nvmem_add: 599 + device_unregister(&mtd->dev); 648 600 fail_added: 649 601 of_node_put(mtd_get_of_node(mtd)); 650 602 idr_remove(&mtd_idr, i); ··· 689 637 mtd->index, mtd->name, mtd->usecount); 690 638 ret = -EBUSY; 691 639 } else { 640 + /* Try to remove the NVMEM provider */ 641 + if (mtd->nvmem) 642 + nvmem_unregister(mtd->nvmem); 643 + 692 644 device_unregister(&mtd->dev); 693 645 694 646 idr_remove(&mtd_idr, mtd->index);
+30 -1
drivers/nvmem/core.c
··· 28 28 size_t size; 29 29 bool read_only; 30 30 int flags; 31 + enum nvmem_type type; 31 32 struct bin_attribute eeprom; 32 33 struct device *base_dev; 33 34 struct list_head cells; ··· 61 60 62 61 static BLOCKING_NOTIFIER_HEAD(nvmem_notifier); 63 62 63 + static const char * const nvmem_type_str[] = { 64 + [NVMEM_TYPE_UNKNOWN] = "Unknown", 65 + [NVMEM_TYPE_EEPROM] = "EEPROM", 66 + [NVMEM_TYPE_OTP] = "OTP", 67 + [NVMEM_TYPE_BATTERY_BACKED] = "Battery backed", 68 + }; 69 + 64 70 #ifdef CONFIG_DEBUG_LOCK_ALLOC 65 71 static struct lock_class_key eeprom_lock_key; 66 72 #endif ··· 90 82 91 83 return -EINVAL; 92 84 } 85 + 86 + static ssize_t type_show(struct device *dev, 87 + struct device_attribute *attr, char *buf) 88 + { 89 + struct nvmem_device *nvmem = to_nvmem_device(dev); 90 + 91 + return sprintf(buf, "%s\n", nvmem_type_str[nvmem->type]); 92 + } 93 + 94 + static DEVICE_ATTR_RO(type); 95 + 96 + static struct attribute *nvmem_attrs[] = { 97 + &dev_attr_type.attr, 98 + NULL, 99 + }; 93 100 94 101 static ssize_t bin_attr_nvmem_read(struct file *filp, struct kobject *kobj, 95 102 struct bin_attribute *attr, ··· 191 168 192 169 static const struct attribute_group nvmem_bin_rw_group = { 193 170 .bin_attrs = nvmem_bin_rw_attributes, 171 + .attrs = nvmem_attrs, 194 172 }; 195 173 196 174 static const struct attribute_group *nvmem_rw_dev_groups[] = { ··· 215 191 216 192 static const struct attribute_group nvmem_bin_ro_group = { 217 193 .bin_attrs = nvmem_bin_ro_attributes, 194 + .attrs = nvmem_attrs, 218 195 }; 219 196 220 197 static const struct attribute_group *nvmem_ro_dev_groups[] = { ··· 240 215 241 216 static const struct attribute_group nvmem_bin_rw_root_group = { 242 217 .bin_attrs = nvmem_bin_rw_root_attributes, 218 + .attrs = nvmem_attrs, 243 219 }; 244 220 245 221 static const struct attribute_group *nvmem_rw_root_dev_groups[] = { ··· 264 238 265 239 static const struct attribute_group nvmem_bin_ro_root_group = { 266 240 .bin_attrs = nvmem_bin_ro_root_attributes, 241 + .attrs = nvmem_attrs, 267 242 }; 268 243 269 244 static const struct attribute_group *nvmem_ro_root_dev_groups[] = { ··· 632 605 nvmem->dev.bus = &nvmem_bus_type; 633 606 nvmem->dev.parent = config->dev; 634 607 nvmem->priv = config->priv; 608 + nvmem->type = config->type; 635 609 nvmem->reg_read = config->reg_read; 636 610 nvmem->reg_write = config->reg_write; 637 - nvmem->dev.of_node = config->dev->of_node; 611 + if (!config->no_of_node) 612 + nvmem->dev.of_node = config->dev->of_node; 638 613 639 614 if (config->id == -1 && config->name) { 640 615 dev_set_name(&nvmem->dev, "%s", config->name);
+28 -1
drivers/nvmem/meson-efuse.c
··· 14 14 * more details. 15 15 */ 16 16 17 + #include <linux/clk.h> 17 18 #include <linux/module.h> 18 19 #include <linux/nvmem-provider.h> 19 20 #include <linux/of.h> ··· 47 46 struct device *dev = &pdev->dev; 48 47 struct nvmem_device *nvmem; 49 48 struct nvmem_config *econfig; 49 + struct clk *clk; 50 50 unsigned int size; 51 + int ret; 51 52 52 - if (meson_sm_call(SM_EFUSE_USER_MAX, &size, 0, 0, 0, 0, 0) < 0) 53 + clk = devm_clk_get(dev, NULL); 54 + if (IS_ERR(clk)) { 55 + ret = PTR_ERR(clk); 56 + if (ret != -EPROBE_DEFER) 57 + dev_err(dev, "failed to get efuse gate"); 58 + return ret; 59 + } 60 + 61 + ret = clk_prepare_enable(clk); 62 + if (ret) { 63 + dev_err(dev, "failed to enable gate"); 64 + return ret; 65 + } 66 + 67 + ret = devm_add_action_or_reset(dev, 68 + (void(*)(void *))clk_disable_unprepare, 69 + clk); 70 + if (ret) { 71 + dev_err(dev, "failed to add disable callback"); 72 + return ret; 73 + } 74 + 75 + if (meson_sm_call(SM_EFUSE_USER_MAX, &size, 0, 0, 0, 0, 0) < 0) { 76 + dev_err(dev, "failed to get max user"); 53 77 return -EINVAL; 78 + } 54 79 55 80 econfig = devm_kzalloc(dev, sizeof(*econfig), GFP_KERNEL); 56 81 if (!econfig)
+1 -1
drivers/parport/parport_pc.c
··· 1667 1667 default: 1668 1668 printk(KERN_WARNING "0x%lx: Unknown implementation ID\n", 1669 1669 pb->base); 1670 - /* Assume 1 */ 1670 + /* Fall through - Assume 1 */ 1671 1671 case 1: 1672 1672 pword = 1; 1673 1673 }
+19
drivers/pci/pci-acpi.c
··· 789 789 ACPI_FREE(obj); 790 790 } 791 791 792 + static void pci_acpi_set_untrusted(struct pci_dev *dev) 793 + { 794 + u8 val; 795 + 796 + if (pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) 797 + return; 798 + if (device_property_read_u8(&dev->dev, "ExternalFacingPort", &val)) 799 + return; 800 + 801 + /* 802 + * These root ports expose PCIe (including DMA) outside of the 803 + * system so make sure we treat them and everything behind as 804 + * untrusted. 805 + */ 806 + if (val) 807 + dev->untrusted = 1; 808 + } 809 + 792 810 static void pci_acpi_setup(struct device *dev) 793 811 { 794 812 struct pci_dev *pci_dev = to_pci_dev(dev); ··· 816 798 return; 817 799 818 800 pci_acpi_optimize_delay(pci_dev, adev->handle); 801 + pci_acpi_set_untrusted(pci_dev); 819 802 820 803 pci_acpi_add_pm_notifier(adev, pci_dev); 821 804 if (!adev->wakeup.flags.valid)
+15
drivers/pci/probe.c
··· 1378 1378 } 1379 1379 } 1380 1380 1381 + static void set_pcie_untrusted(struct pci_dev *dev) 1382 + { 1383 + struct pci_dev *parent; 1384 + 1385 + /* 1386 + * If the upstream bridge is untrusted we treat this device 1387 + * untrusted as well. 1388 + */ 1389 + parent = pci_upstream_bridge(dev); 1390 + if (parent && parent->untrusted) 1391 + dev->untrusted = true; 1392 + } 1393 + 1381 1394 /** 1382 1395 * pci_ext_cfg_is_aliased - Is ext config space just an alias of std config? 1383 1396 * @dev: PCI device ··· 1650 1637 1651 1638 /* Need to have dev->cfg_size ready */ 1652 1639 set_pcie_thunderbolt(dev); 1640 + 1641 + set_pcie_untrusted(dev); 1653 1642 1654 1643 /* "Unknown power state" */ 1655 1644 dev->current_state = PCI_UNKNOWN;
-8
drivers/platform/x86/Kconfig
··· 1172 1172 This driver checks to determine whether the device has Intel Smart 1173 1173 Connect enabled, and if so disables it. 1174 1174 1175 - config PVPANIC 1176 - tristate "pvpanic device support" 1177 - depends on ACPI 1178 - ---help--- 1179 - This driver provides support for the pvpanic device. pvpanic is 1180 - a paravirtualized device provided by QEMU; it lets a virtual machine 1181 - (guest) communicate panic events to the host. 1182 - 1183 1175 config INTEL_PMC_IPC 1184 1176 tristate "Intel PMC IPC Driver" 1185 1177 depends on ACPI
-1
drivers/platform/x86/Makefile
··· 79 79 obj-$(CONFIG_INTEL_RST) += intel-rst.o 80 80 obj-$(CONFIG_INTEL_SMARTCONNECT) += intel-smartconnect.o 81 81 82 - obj-$(CONFIG_PVPANIC) += pvpanic.o 83 82 obj-$(CONFIG_ALIENWARE_WMI) += alienware-wmi.o 84 83 obj-$(CONFIG_INTEL_PMC_IPC) += intel_pmc_ipc.o 85 84 obj-$(CONFIG_TOUCHSCREEN_DMI) += touchscreen_dmi.o
-124
drivers/platform/x86/pvpanic.c
··· 1 - /* 2 - * pvpanic.c - pvpanic Device Support 3 - * 4 - * Copyright (C) 2013 Fujitsu. 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; either version 2 of the License, or 9 - * (at your option) any later version. 10 - * 11 - * This program is distributed in the hope that it will be useful, 12 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 - * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 19 - */ 20 - 21 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 22 - 23 - #include <linux/kernel.h> 24 - #include <linux/module.h> 25 - #include <linux/init.h> 26 - #include <linux/types.h> 27 - #include <linux/acpi.h> 28 - 29 - MODULE_AUTHOR("Hu Tao <hutao@cn.fujitsu.com>"); 30 - MODULE_DESCRIPTION("pvpanic device driver"); 31 - MODULE_LICENSE("GPL"); 32 - 33 - static int pvpanic_add(struct acpi_device *device); 34 - static int pvpanic_remove(struct acpi_device *device); 35 - 36 - static const struct acpi_device_id pvpanic_device_ids[] = { 37 - { "QEMU0001", 0 }, 38 - { "", 0 }, 39 - }; 40 - MODULE_DEVICE_TABLE(acpi, pvpanic_device_ids); 41 - 42 - #define PVPANIC_PANICKED (1 << 0) 43 - 44 - static u16 port; 45 - 46 - static struct acpi_driver pvpanic_driver = { 47 - .name = "pvpanic", 48 - .class = "QEMU", 49 - .ids = pvpanic_device_ids, 50 - .ops = { 51 - .add = pvpanic_add, 52 - .remove = pvpanic_remove, 53 - }, 54 - .owner = THIS_MODULE, 55 - }; 56 - 57 - static void 58 - pvpanic_send_event(unsigned int event) 59 - { 60 - outb(event, port); 61 - } 62 - 63 - static int 64 - pvpanic_panic_notify(struct notifier_block *nb, unsigned long code, 65 - void *unused) 66 - { 67 - pvpanic_send_event(PVPANIC_PANICKED); 68 - return NOTIFY_DONE; 69 - } 70 - 71 - static struct notifier_block pvpanic_panic_nb = { 72 - .notifier_call = pvpanic_panic_notify, 73 - .priority = 1, /* let this called before broken drm_fb_helper */ 74 - }; 75 - 76 - 77 - static acpi_status 78 - pvpanic_walk_resources(struct acpi_resource *res, void *context) 79 - { 80 - switch (res->type) { 81 - case ACPI_RESOURCE_TYPE_END_TAG: 82 - return AE_OK; 83 - 84 - case ACPI_RESOURCE_TYPE_IO: 85 - port = res->data.io.minimum; 86 - return AE_OK; 87 - 88 - default: 89 - return AE_ERROR; 90 - } 91 - } 92 - 93 - static int pvpanic_add(struct acpi_device *device) 94 - { 95 - int ret; 96 - 97 - ret = acpi_bus_get_status(device); 98 - if (ret < 0) 99 - return ret; 100 - 101 - if (!device->status.enabled || !device->status.functional) 102 - return -ENODEV; 103 - 104 - acpi_walk_resources(device->handle, METHOD_NAME__CRS, 105 - pvpanic_walk_resources, NULL); 106 - 107 - if (!port) 108 - return -ENODEV; 109 - 110 - atomic_notifier_chain_register(&panic_notifier_list, 111 - &pvpanic_panic_nb); 112 - 113 - return 0; 114 - } 115 - 116 - static int pvpanic_remove(struct acpi_device *device) 117 - { 118 - 119 - atomic_notifier_chain_unregister(&panic_notifier_list, 120 - &pvpanic_panic_nb); 121 - return 0; 122 - } 123 - 124 - module_acpi_driver(pvpanic_driver);
+2 -2
drivers/pps/clients/pps-gpio.c
··· 158 158 if (data->capture_clear) 159 159 pps_default_params |= PPS_CAPTURECLEAR | PPS_OFFSETCLEAR; 160 160 data->pps = pps_register_source(&data->info, pps_default_params); 161 - if (data->pps == NULL) { 161 + if (IS_ERR(data->pps)) { 162 162 dev_err(&pdev->dev, "failed to register IRQ %d as PPS source\n", 163 163 data->irq); 164 - return -EINVAL; 164 + return PTR_ERR(data->pps); 165 165 } 166 166 167 167 /* register IRQ interrupt handler */
+2 -2
drivers/pps/clients/pps-ktimer.c
··· 80 80 { 81 81 pps = pps_register_source(&pps_ktimer_info, 82 82 PPS_CAPTUREASSERT | PPS_OFFSETASSERT); 83 - if (pps == NULL) { 83 + if (IS_ERR(pps)) { 84 84 pr_err("cannot register PPS source\n"); 85 - return -ENOMEM; 85 + return PTR_ERR(pps); 86 86 } 87 87 88 88 timer_setup(&ktimer, pps_ktimer_event, 0);
+2 -2
drivers/pps/clients/pps-ldisc.c
··· 72 72 73 73 pps = pps_register_source(&info, PPS_CAPTUREBOTH | \ 74 74 PPS_OFFSETASSERT | PPS_OFFSETCLEAR); 75 - if (pps == NULL) { 75 + if (IS_ERR(pps)) { 76 76 pr_err("cannot register PPS source \"%s\"\n", info.path); 77 - return -ENOMEM; 77 + return PTR_ERR(pps); 78 78 } 79 79 pps->lookup_cookie = tty; 80 80
+1 -1
drivers/pps/clients/pps_parport.c
··· 179 179 180 180 device->pps = pps_register_source(&info, 181 181 PPS_CAPTUREBOTH | PPS_OFFSETASSERT | PPS_OFFSETCLEAR); 182 - if (device->pps == NULL) { 182 + if (IS_ERR(device->pps)) { 183 183 pr_err("couldn't register PPS source\n"); 184 184 goto err_release_dev; 185 185 }
+3 -2
drivers/pps/kapi.c
··· 72 72 * source is described by info's fields and it will have, as default PPS 73 73 * parameters, the ones specified into default_params. 74 74 * 75 - * The function returns, in case of success, the PPS device. Otherwise NULL. 75 + * The function returns, in case of success, the PPS device. Otherwise 76 + * ERR_PTR(errno). 76 77 */ 77 78 78 79 struct pps_device *pps_register_source(struct pps_source_info *info, ··· 136 135 pps_register_source_exit: 137 136 pr_err("%s: unable to register source\n", info->name); 138 137 139 - return NULL; 138 + return ERR_PTR(err); 140 139 } 141 140 EXPORT_SYMBOL(pps_register_source); 142 141
+2 -2
drivers/ptp/ptp_clock.c
··· 265 265 pps.mode = PTP_PPS_MODE; 266 266 pps.owner = info->owner; 267 267 ptp->pps_source = pps_register_source(&pps, PTP_PPS_DEFAULTS); 268 - if (!ptp->pps_source) { 269 - err = -EINVAL; 268 + if (IS_ERR(ptp->pps_source)) { 269 + err = PTR_ERR(ptp->pps_source); 270 270 pr_err("failed to register pps source\n"); 271 271 goto no_pps; 272 272 }
+3 -2
drivers/slimbus/Kconfig
··· 22 22 23 23 config SLIM_QCOM_NGD_CTRL 24 24 tristate "Qualcomm SLIMbus Satellite Non-Generic Device Component" 25 - depends on QCOM_QMI_HELPERS 26 - depends on HAS_IOMEM && DMA_ENGINE 25 + depends on HAS_IOMEM && DMA_ENGINE && NET 26 + depends on ARCH_QCOM || COMPILE_TEST 27 + select QCOM_QMI_HELPERS 27 28 help 28 29 Select driver if Qualcomm's SLIMbus Satellite Non-Generic Device 29 30 Component is programmed using Linux kernel.
+2 -4
drivers/slimbus/qcom-ctrl.c
··· 654 654 #ifdef CONFIG_PM 655 655 static int qcom_slim_runtime_suspend(struct device *device) 656 656 { 657 - struct platform_device *pdev = to_platform_device(device); 658 - struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev); 657 + struct qcom_slim_ctrl *ctrl = dev_get_drvdata(device); 659 658 int ret; 660 659 661 660 dev_dbg(device, "pm_runtime: suspending...\n"); ··· 671 672 672 673 static int qcom_slim_runtime_resume(struct device *device) 673 674 { 674 - struct platform_device *pdev = to_platform_device(device); 675 - struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev); 675 + struct qcom_slim_ctrl *ctrl = dev_get_drvdata(device); 676 676 int ret = 0; 677 677 678 678 dev_dbg(device, "pm_runtime: resuming...\n");
+4 -3
drivers/slimbus/qcom-ngd-ctrl.c
··· 787 787 788 788 if (txn->msg->num_bytes > SLIM_MSGQ_BUF_LEN || 789 789 txn->rl > SLIM_MSGQ_BUF_LEN) { 790 - dev_err(ctrl->dev, "msg exeeds HW limit\n"); 790 + dev_err(ctrl->dev, "msg exceeds HW limit\n"); 791 791 return -EINVAL; 792 792 } 793 793 ··· 1327 1327 { 1328 1328 const struct ngd_reg_offset_data *data; 1329 1329 struct qcom_slim_ngd *ngd; 1330 + const struct of_device_id *match; 1330 1331 struct device_node *node; 1331 1332 u32 id; 1332 1333 1333 - data = of_match_node(qcom_slim_ngd_dt_match, parent->of_node)->data; 1334 - 1334 + match = of_match_node(qcom_slim_ngd_dt_match, parent->of_node); 1335 + data = match->data; 1335 1336 for_each_available_child_of_node(parent->of_node, node) { 1336 1337 if (of_property_read_u32(node, "reg", &id)) 1337 1338 continue;
+2 -2
drivers/soundwire/intel.c
··· 654 654 return cdns_set_sdw_stream(dai, stream, false, direction); 655 655 } 656 656 657 - static struct snd_soc_dai_ops intel_pcm_dai_ops = { 657 + static const struct snd_soc_dai_ops intel_pcm_dai_ops = { 658 658 .hw_params = intel_hw_params, 659 659 .hw_free = intel_hw_free, 660 660 .shutdown = sdw_cdns_shutdown, 661 661 .set_sdw_stream = intel_pcm_set_sdw_stream, 662 662 }; 663 663 664 - static struct snd_soc_dai_ops intel_pdm_dai_ops = { 664 + static const struct snd_soc_dai_ops intel_pdm_dai_ops = { 665 665 .hw_params = intel_hw_params, 666 666 .hw_free = intel_hw_free, 667 667 .shutdown = sdw_cdns_shutdown,
+17
drivers/thunderbolt/domain.c
··· 7 7 */ 8 8 9 9 #include <linux/device.h> 10 + #include <linux/dmar.h> 10 11 #include <linux/idr.h> 12 + #include <linux/iommu.h> 11 13 #include <linux/module.h> 12 14 #include <linux/pm_runtime.h> 13 15 #include <linux/slab.h> ··· 238 236 } 239 237 static DEVICE_ATTR_RW(boot_acl); 240 238 239 + static ssize_t iommu_dma_protection_show(struct device *dev, 240 + struct device_attribute *attr, 241 + char *buf) 242 + { 243 + /* 244 + * Kernel DMA protection is a feature where Thunderbolt security is 245 + * handled natively using IOMMU. It is enabled when IOMMU is 246 + * enabled and ACPI DMAR table has DMAR_PLATFORM_OPT_IN set. 247 + */ 248 + return sprintf(buf, "%d\n", 249 + iommu_present(&pci_bus_type) && dmar_platform_optin()); 250 + } 251 + static DEVICE_ATTR_RO(iommu_dma_protection); 252 + 241 253 static ssize_t security_show(struct device *dev, struct device_attribute *attr, 242 254 char *buf) 243 255 { ··· 267 251 268 252 static struct attribute *domain_attrs[] = { 269 253 &dev_attr_boot_acl.attr, 254 + &dev_attr_iommu_dma_protection.attr, 270 255 &dev_attr_security.attr, 271 256 NULL, 272 257 };
+11 -8
drivers/uio/uio.c
··· 569 569 ssize_t retval = 0; 570 570 s32 event_count; 571 571 572 - mutex_lock(&idev->info_lock); 573 - if (!idev->info || !idev->info->irq) 574 - retval = -EIO; 575 - mutex_unlock(&idev->info_lock); 576 - 577 - if (retval) 578 - return retval; 579 - 580 572 if (count != sizeof(s32)) 581 573 return -EINVAL; 582 574 583 575 add_wait_queue(&idev->wait, &wait); 584 576 585 577 do { 578 + mutex_lock(&idev->info_lock); 579 + if (!idev->info || !idev->info->irq) { 580 + retval = -EIO; 581 + mutex_unlock(&idev->info_lock); 582 + break; 583 + } 584 + mutex_unlock(&idev->info_lock); 585 + 586 586 set_current_state(TASK_INTERRUPTIBLE); 587 587 588 588 event_count = atomic_read(&idev->event); ··· 1016 1016 1017 1017 idev->info = NULL; 1018 1018 mutex_unlock(&idev->info_lock); 1019 + 1020 + wake_up_interruptible(&idev->wait); 1021 + kill_fasync(&idev->async_queue, SIGIO, POLL_HUP); 1019 1022 1020 1023 device_unregister(&idev->dev); 1021 1024
+2 -4
drivers/uio/uio_fsl_elbc_gpcm.c
··· 74 74 static ssize_t reg_show(struct device *dev, struct device_attribute *attr, 75 75 char *buf) 76 76 { 77 - struct platform_device *pdev = to_platform_device(dev); 78 - struct uio_info *info = platform_get_drvdata(pdev); 77 + struct uio_info *info = dev_get_drvdata(dev); 79 78 struct fsl_elbc_gpcm *priv = info->priv; 80 79 struct fsl_lbc_bank *bank = &priv->lbc->bank[priv->bank]; 81 80 ··· 93 94 static ssize_t reg_store(struct device *dev, struct device_attribute *attr, 94 95 const char *buf, size_t count) 95 96 { 96 - struct platform_device *pdev = to_platform_device(dev); 97 - struct uio_info *info = platform_get_drvdata(pdev); 97 + struct uio_info *info = dev_get_drvdata(dev); 98 98 struct fsl_elbc_gpcm *priv = info->priv; 99 99 struct fsl_lbc_bank *bank = &priv->lbc->bank[priv->bank]; 100 100 unsigned long val;
+1 -1
drivers/virt/vboxguest/vboxguest_core.c
··· 1312 1312 return -EINVAL; 1313 1313 } 1314 1314 1315 - if (f32bit) 1315 + if (IS_ENABLED(CONFIG_COMPAT) && f32bit) 1316 1316 ret = vbg_hgcm_call32(gdev, client_id, 1317 1317 call->function, call->timeout_ms, 1318 1318 VBG_IOCTL_HGCM_CALL_PARMS32(call),
+29
fs/file.c
··· 640 640 } 641 641 EXPORT_SYMBOL(__close_fd); /* for ksys_close() */ 642 642 643 + /* 644 + * variant of __close_fd that gets a ref on the file for later fput 645 + */ 646 + int __close_fd_get_file(unsigned int fd, struct file **res) 647 + { 648 + struct files_struct *files = current->files; 649 + struct file *file; 650 + struct fdtable *fdt; 651 + 652 + spin_lock(&files->file_lock); 653 + fdt = files_fdtable(files); 654 + if (fd >= fdt->max_fds) 655 + goto out_unlock; 656 + file = fdt->fd[fd]; 657 + if (!file) 658 + goto out_unlock; 659 + rcu_assign_pointer(fdt->fd[fd], NULL); 660 + __put_unused_fd(files, fd); 661 + spin_unlock(&files->file_lock); 662 + get_file(file); 663 + *res = file; 664 + return filp_close(file, files); 665 + 666 + out_unlock: 667 + spin_unlock(&files->file_lock); 668 + *res = NULL; 669 + return -ENOENT; 670 + } 671 + 643 672 void do_close_on_exec(struct files_struct *files) 644 673 { 645 674 unsigned i;
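__close_fd_get_file() detaches the descriptor from the file table but keeps the struct file alive via get_file(). The same "file outlives its descriptor" behaviour is visible from userspace with dup(); this POSIX sketch is an analogy for the refcounting idea, not the kernel API itself:

```c
#include <string.h>
#include <unistd.h>

/*
 * Userspace analog of __close_fd_get_file(): grab an extra reference to
 * the open file (dup), drop the original descriptor (close), and the
 * file itself stays usable until the last reference goes away.
 * Returns 0 on success, -1 on any failure.
 */
static int demo_fd_outlives_descriptor(void)
{
    int fds[2], extra;
    char buf[8];

    if (pipe(fds) != 0)
        return -1;

    extra = dup(fds[1]);      /* extra reference, like get_file() */
    if (extra < 0)
        return -1;
    if (close(fds[1]) != 0)   /* descriptor gone, file still open */
        return -1;

    if (write(extra, "ok", 2) != 2)
        return -1;
    if (read(fds[0], buf, sizeof(buf)) != 2)
        return -1;

    close(extra);
    close(fds[0]);
    return memcmp(buf, "ok", 2) == 0 ? 0 : -1;
}
```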
+8
include/linux/dmar.h
··· 39 39 /* DMAR Flags */ 40 40 #define DMAR_INTR_REMAP 0x1 41 41 #define DMAR_X2APIC_OPT_OUT 0x2 42 + #define DMAR_PLATFORM_OPT_IN 0x4 42 43 43 44 struct intel_iommu; 44 45 ··· 171 170 { return 0; } 172 171 #endif /* CONFIG_IRQ_REMAP */ 173 172 173 + extern bool dmar_platform_optin(void); 174 + 174 175 #else /* CONFIG_DMAR_TABLE */ 175 176 176 177 static inline int dmar_device_add(void *handle) ··· 183 180 static inline int dmar_device_remove(void *handle) 184 181 { 185 182 return 0; 183 + } 184 + 185 + static inline bool dmar_platform_optin(void) 186 + { 187 + return false; 186 188 } 187 189 188 190 #endif /* CONFIG_DMAR_TABLE */
+1
include/linux/fdtable.h
··· 121 121 unsigned int fd, struct file *file); 122 122 extern int __close_fd(struct files_struct *files, 123 123 unsigned int fd); 124 + extern int __close_fd_get_file(unsigned int fd, struct file **res); 124 125 125 126 extern struct kmem_cache *files_cachep; 126 127
+312
include/linux/firmware/intel/stratix10-smc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2017-2018, Intel Corporation 4 + */ 5 + 6 + #ifndef __STRATIX10_SMC_H 7 + #define __STRATIX10_SMC_H 8 + 9 + #include <linux/arm-smccc.h> 10 + #include <linux/bitops.h> 11 + 12 + /** 13 + * This file defines the Secure Monitor Call (SMC) message protocol used for 14 + * service layer driver in normal world (EL1) to communicate with secure 15 + * monitor software in Secure Monitor Exception Level 3 (EL3). 16 + * 17 + * This file is shared with secure firmware (FW) which is out of kernel tree. 18 + * 19 + * An ARM SMC instruction takes a function identifier and up to 6 64-bit 20 + * register values as arguments, and can return up to 4 64-bit register 21 + * values. The operation of the secure monitor is determined by the parameter 22 + * values passed in through registers. 23 + * 24 + * EL1 and EL3 communicate pointers as physical addresses rather than 25 + * virtual addresses. 26 + * 27 + * Functions specified by ARM SMC Calling convention: 28 + * 29 + * FAST call executes atomic operations, returns when the requested operation 30 + * has completed. 31 + * STD call starts an operation which can be preempted by a non-secure 32 + * interrupt. The call can return before the requested operation has 33 + * completed. 34 + * 35 + * a0..a7 are used as register names in the descriptions below; on arm32 36 + * that translates to r0..r7 and on arm64 to w0..w7. 37 + */ 38 + 39 + /** 40 + * @func_num: function ID 41 + */ 42 + #define INTEL_SIP_SMC_STD_CALL_VAL(func_num) \ 43 + ARM_SMCCC_CALL_VAL(ARM_SMCCC_STD_CALL, ARM_SMCCC_SMC_64, \ 44 + ARM_SMCCC_OWNER_SIP, (func_num)) 45 + 46 + #define INTEL_SIP_SMC_FAST_CALL_VAL(func_num) \ 47 + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64, \ 48 + ARM_SMCCC_OWNER_SIP, (func_num)) 49 + 50 + /** 51 + * Return values in INTEL_SIP_SMC_* call 52 + * 53 + * INTEL_SIP_SMC_RETURN_UNKNOWN_FUNCTION: 54 + * Secure monitor software doesn't recognize the request. 
55 + * 56 + * INTEL_SIP_SMC_STATUS_OK: 57 + * FPGA configuration completed successfully. 58 + * In case of FPGA configuration write operation, it means secure monitor 59 + * software can accept the next chunk of FPGA configuration data. 60 + * 61 + * INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY: 62 + * In case of FPGA configuration write operation, it means secure monitor 63 + * software is still processing previous data & can't accept the next chunk 64 + * of data. Service driver needs to issue 65 + * INTEL_SIP_SMC_FPGA_CONFIG_COMPLETED_WRITE call to query the 66 + * completed block(s). 67 + * 68 + * INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR: 69 + * There is an error during the FPGA configuration process. 70 + * 71 + * INTEL_SIP_SMC_REG_ERROR: 72 + * There is an error during a read or write operation of the protected registers. 73 + * 74 + * INTEL_SIP_SMC_RSU_ERROR: 75 + * There is an error during a remote status update. 76 + */ 77 + #define INTEL_SIP_SMC_RETURN_UNKNOWN_FUNCTION 0xFFFFFFFF 78 + #define INTEL_SIP_SMC_STATUS_OK 0x0 79 + #define INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY 0x1 80 + #define INTEL_SIP_SMC_FPGA_CONFIG_STATUS_REJECTED 0x2 81 + #define INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR 0x4 82 + #define INTEL_SIP_SMC_REG_ERROR 0x5 83 + #define INTEL_SIP_SMC_RSU_ERROR 0x7 84 + 85 + /** 86 + * Request INTEL_SIP_SMC_FPGA_CONFIG_START 87 + * 88 + * Sync call used by service driver at EL1 to request the FPGA in EL3 to 89 + * be prepared to receive a new configuration. 90 + * 91 + * Call register usage: 92 + * a0: INTEL_SIP_SMC_FPGA_CONFIG_START. 93 + * a1: flag for full or partial configuration. 0 for full and 1 for partial 94 + * configuration. 95 + * a2-7: not used. 96 + * 97 + * Return status: 98 + * a0: INTEL_SIP_SMC_STATUS_OK, or INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR. 99 + * a1-3: not used. 
100 + */ 101 + #define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_START 1 102 + #define INTEL_SIP_SMC_FPGA_CONFIG_START \ 103 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_START) 104 + 105 + /** 106 + * Request INTEL_SIP_SMC_FPGA_CONFIG_WRITE 107 + * 108 + * Async call used by service driver at EL1 to provide FPGA configuration data 109 + * to secure world. 110 + * 111 + * Call register usage: 112 + * a0: INTEL_SIP_SMC_FPGA_CONFIG_WRITE. 113 + * a1: 64bit physical address of the configuration data memory block 114 + * a2: Size of configuration data block. 115 + * a3-7: not used. 116 + * 117 + * Return status: 118 + * a0: INTEL_SIP_SMC_STATUS_OK, INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY or 119 + * INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR. 120 + * a1: 64bit physical address of 1st completed memory block if any completed 121 + * block, otherwise zero value. 122 + * a2: 64bit physical address of 2nd completed memory block if any completed 123 + * block, otherwise zero value. 124 + * a3: 64bit physical address of 3rd completed memory block if any completed 125 + * block, otherwise zero value. 126 + */ 127 + #define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_WRITE 2 128 + #define INTEL_SIP_SMC_FPGA_CONFIG_WRITE \ 129 + INTEL_SIP_SMC_STD_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_WRITE) 130 + 131 + /** 132 + * Request INTEL_SIP_SMC_FPGA_CONFIG_COMPLETED_WRITE 133 + * 134 + * Sync call used by service driver at EL1 to track the completed write 135 + * transactions. This request is called after INTEL_SIP_SMC_FPGA_CONFIG_WRITE 136 + * call returns INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY. 137 + * 138 + * Call register usage: 139 + * a0: INTEL_SIP_SMC_FPGA_CONFIG_COMPLETED_WRITE. 140 + * a1-7: not used. 141 + * 142 + * Return status: 143 + * a0: INTEL_SIP_SMC_STATUS_OK, INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY or 144 + * INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR. 145 + * a1: 64bit physical address of 1st completed memory block. 
146 + * a2: 64bit physical address of 2nd completed memory block if 147 + * any completed block, otherwise zero value. 148 + * a3: 64bit physical address of 3rd completed memory block if 149 + * any completed block, otherwise zero value. 150 + */ 151 + #define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_COMPLETED_WRITE 3 152 + #define INTEL_SIP_SMC_FPGA_CONFIG_COMPLETED_WRITE \ 153 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_COMPLETED_WRITE) 154 + 155 + /** 156 + * Request INTEL_SIP_SMC_FPGA_CONFIG_ISDONE 157 + * 158 + * Sync call used by service driver at EL1 to inform secure world that all 159 + * data are sent, to check whether or not the secure world had completed 160 + * the FPGA configuration process. 161 + * 162 + * Call register usage: 163 + * a0: INTEL_SIP_SMC_FPGA_CONFIG_ISDONE. 164 + * a1-7: not used. 165 + * 166 + * Return status: 167 + * a0: INTEL_SIP_SMC_STATUS_OK, INTEL_SIP_SMC_FPGA_CONFIG_STATUS_BUSY or 168 + * INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR. 169 + * a1-3: not used. 170 + */ 171 + #define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_ISDONE 4 172 + #define INTEL_SIP_SMC_FPGA_CONFIG_ISDONE \ 173 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_ISDONE) 174 + 175 + /** 176 + * Request INTEL_SIP_SMC_FPGA_CONFIG_GET_MEM 177 + * 178 + * Sync call used by service driver at EL1 to query the physical address of 179 + * memory block reserved by secure monitor software. 180 + * 181 + * Call register usage: 182 + * a0:INTEL_SIP_SMC_FPGA_CONFIG_GET_MEM. 183 + * a1-7: not used. 184 + * 185 + * Return status: 186 + * a0: INTEL_SIP_SMC_STATUS_OK or INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR. 187 + * a1: start of physical address of reserved memory block. 188 + * a2: size of reserved memory block. 189 + * a3: not used. 
190 + */ 191 + #define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_GET_MEM 5 192 + #define INTEL_SIP_SMC_FPGA_CONFIG_GET_MEM \ 193 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_GET_MEM) 194 + 195 + /** 196 + * Request INTEL_SIP_SMC_FPGA_CONFIG_LOOPBACK 197 + * 198 + * For SMC loop-back mode only, used for internal integration, debugging 199 + * or troubleshooting. 200 + * 201 + * Call register usage: 202 + * a0: INTEL_SIP_SMC_FPGA_CONFIG_LOOPBACK. 203 + * a1-7: not used. 204 + * 205 + * Return status: 206 + * a0: INTEL_SIP_SMC_STATUS_OK or INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR. 207 + * a1-3: not used. 208 + */ 209 + #define INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_LOOPBACK 6 210 + #define INTEL_SIP_SMC_FPGA_CONFIG_LOOPBACK \ 211 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_LOOPBACK) 212 + 213 + /* 214 + * Request INTEL_SIP_SMC_REG_READ 215 + * 216 + * Read a protected register at EL3 217 + * 218 + * Call register usage: 219 + * a0: INTEL_SIP_SMC_REG_READ. 220 + * a1: register address. 221 + * a2-7: not used. 222 + * 223 + * Return status: 224 + * a0: INTEL_SIP_SMC_STATUS_OK or INTEL_SIP_SMC_REG_ERROR. 225 + * a1: value in the register 226 + * a2-3: not used. 227 + */ 228 + #define INTEL_SIP_SMC_FUNCID_REG_READ 7 229 + #define INTEL_SIP_SMC_REG_READ \ 230 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_REG_READ) 231 + 232 + /* 233 + * Request INTEL_SIP_SMC_REG_WRITE 234 + * 235 + * Write a protected register at EL3 236 + * 237 + * Call register usage: 238 + * a0: INTEL_SIP_SMC_REG_WRITE. 239 + * a1: register address 240 + * a2: value to program into register. 241 + * a3-7: not used. 242 + * 243 + * Return status: 244 + * a0: INTEL_SIP_SMC_STATUS_OK or INTEL_SIP_SMC_REG_ERROR. 245 + * a1-3: not used. 
246 + */ 247 + #define INTEL_SIP_SMC_FUNCID_REG_WRITE 8 248 + #define INTEL_SIP_SMC_REG_WRITE \ 249 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_REG_WRITE) 250 + 251 + /* 252 + * Request INTEL_SIP_SMC_FUNCID_REG_UPDATE 253 + * 254 + * Update one or more bits in a protected register at EL3 using a 255 + * read-modify-write operation. 256 + * 257 + * Call register usage: 258 + * a0: INTEL_SIP_SMC_REG_UPDATE. 259 + * a1: register address 260 + * a2: write Mask. 261 + * a3: value to write. 262 + * a4-7: not used. 263 + * 264 + * Return status: 265 + * a0: INTEL_SIP_SMC_STATUS_OK or INTEL_SIP_SMC_REG_ERROR. 266 + * a1-3: Not used. 267 + */ 268 + #define INTEL_SIP_SMC_FUNCID_REG_UPDATE 9 269 + #define INTEL_SIP_SMC_REG_UPDATE \ 270 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_REG_UPDATE) 271 + 272 + /* 273 + * Request INTEL_SIP_SMC_RSU_STATUS 274 + * 275 + * Request remote status update boot log, call is synchronous. 276 + * 277 + * Call register usage: 278 + * a0 INTEL_SIP_SMC_RSU_STATUS 279 + * a1-7 not used 280 + * 281 + * Return status 282 + * a0: Current Image 283 + * a1: Last Failing Image 284 + * a2: Version | State 285 + * a3: Error details | Error location 286 + * 287 + * Or 288 + * 289 + * a0: INTEL_SIP_SMC_RSU_ERROR 290 + */ 291 + #define INTEL_SIP_SMC_FUNCID_RSU_STATUS 11 292 + #define INTEL_SIP_SMC_RSU_STATUS \ 293 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_RSU_STATUS) 294 + 295 + /* 296 + * Request INTEL_SIP_SMC_RSU_UPDATE 297 + * 298 + * Request to set the offset of the bitstream to boot after reboot, call 299 + * is synchronous. 300 + * 301 + * Call register usage: 302 + * a0 INTEL_SIP_SMC_RSU_UPDATE 303 + * a1 64bit physical address of the configuration data memory in flash 304 + * a2-7 not used 305 + * 306 + * Return status 307 + * a0 INTEL_SIP_SMC_STATUS_OK 308 + */ 309 + #define INTEL_SIP_SMC_FUNCID_RSU_UPDATE 12 310 + #define INTEL_SIP_SMC_RSU_UPDATE \ 311 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_RSU_UPDATE) 312 + #endif
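The INTEL_SIP_SMC_*_CALL_VAL() macros above expand to ARM_SMCCC_CALL_VAL(). A userspace sketch of that encoding, assuming the standard ARM SMCCC function-ID layout (bit 31 = fast/yielding call, bit 30 = SMC64, bits 29:24 = owning entity, bits 15:0 = function number):

```c
#include <stdint.h>

/*
 * Userspace sketch of the function-ID encoding behind the
 * INTEL_SIP_SMC_*_CALL_VAL() macros, assuming the standard ARM SMCCC
 * layout: bit 31 = fast call, bit 30 = SMC64, bits 29:24 = owning
 * entity (SiP = 2), bits 15:0 = function number.
 */
#define SMCCC_FAST_CALL 1U
#define SMCCC_STD_CALL  0U
#define SMCCC_SMC_64    1U
#define SMCCC_OWNER_SIP 2U

static uint32_t smccc_call_val(uint32_t type, uint32_t calling_convention,
			       uint32_t owner, uint32_t func_num)
{
	return (type << 31) | (calling_convention << 30) |
	       ((owner & 0x3FU) << 24) | (func_num & 0xFFFFU);
}
```

Under that assumed layout, the fast call INTEL_SIP_SMC_FPGA_CONFIG_START (SMC64, SiP owner, function 1) encodes to 0xC2000001, and the STD call INTEL_SIP_SMC_FPGA_CONFIG_WRITE (function 2) to 0x42000002.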
+217
include/linux/firmware/intel/stratix10-svc-client.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2017-2018, Intel Corporation 4 + */ 5 + 6 + #ifndef __STRATIX10_SVC_CLIENT_H 7 + #define __STRATIX10_SVC_CLIENT_H 8 + 9 + /** 10 + * Client names supported by the service layer driver 11 + * 12 + * fpga: for FPGA configuration 13 + * rsu: for remote status update 14 + */ 15 + #define SVC_CLIENT_FPGA "fpga" 16 + #define SVC_CLIENT_RSU "rsu" 17 + 18 + /** 19 + * Status of the sent command, in bit number 20 + * 21 + * SVC_STATUS_RECONFIG_REQUEST_OK: 22 + * Secure firmware accepts the request of FPGA reconfiguration. 23 + * 24 + * SVC_STATUS_RECONFIG_BUFFER_SUBMITTED: 25 + * Service client successfully submits FPGA configuration 26 + * data buffer to secure firmware. 27 + * 28 + * SVC_STATUS_RECONFIG_BUFFER_DONE: 29 + * Secure firmware completes data process, ready to accept the 30 + * next WRITE transaction. 31 + * 32 + * SVC_STATUS_RECONFIG_COMPLETED: 33 + * Secure firmware completes FPGA configuration successfully, FPGA should 34 + * be in user mode. 35 + * 36 + * SVC_STATUS_RECONFIG_BUSY: 37 + * FPGA configuration is still in process. 38 + * 39 + * SVC_STATUS_RECONFIG_ERROR: 40 + * Error encountered during FPGA configuration. 41 + * 42 + * SVC_STATUS_RSU_OK: 43 + * Secure firmware accepts the request of remote status update (RSU). 44 + */ 45 + #define SVC_STATUS_RECONFIG_REQUEST_OK 0 46 + #define SVC_STATUS_RECONFIG_BUFFER_SUBMITTED 1 47 + #define SVC_STATUS_RECONFIG_BUFFER_DONE 2 48 + #define SVC_STATUS_RECONFIG_COMPLETED 3 49 + #define SVC_STATUS_RECONFIG_BUSY 4 50 + #define SVC_STATUS_RECONFIG_ERROR 5 51 + #define SVC_STATUS_RSU_OK 6 52 + #define SVC_STATUS_RSU_ERROR 7 53 + /** 54 + * Flag bit for COMMAND_RECONFIG 55 + * 56 + * COMMAND_RECONFIG_FLAG_PARTIAL: 57 + * Set to FPGA configuration type (full or partial), the default 58 + * is full reconfig. 
59 + */ 60 + #define COMMAND_RECONFIG_FLAG_PARTIAL 0 61 + 62 + /** 63 + * Timeout settings for service clients: 64 + * timeout value used in Stratix10 FPGA manager driver. 65 + * timeout value used in RSU driver 66 + */ 67 + #define SVC_RECONFIG_REQUEST_TIMEOUT_MS 100 68 + #define SVC_RECONFIG_BUFFER_TIMEOUT_MS 240 69 + #define SVC_RSU_REQUEST_TIMEOUT_MS 300 70 + 71 + struct stratix10_svc_chan; 72 + 73 + /** 74 + * enum stratix10_svc_command_code - supported service commands 75 + * 76 + * @COMMAND_NOOP: do 'dummy' request for integration/debug/trouble-shooting 77 + * 78 + * @COMMAND_RECONFIG: ask for FPGA configuration preparation, return status 79 + * is SVC_STATUS_RECONFIG_REQUEST_OK 80 + * 81 + * @COMMAND_RECONFIG_DATA_SUBMIT: submit buffer(s) of bit-stream data for the 82 + * FPGA configuration, return status is SVC_STATUS_RECONFIG_BUFFER_SUBMITTED, 83 + * or SVC_STATUS_RECONFIG_ERROR 84 + * 85 + * @COMMAND_RECONFIG_DATA_CLAIM: check the status of the configuration, return 86 + * status is SVC_STATUS_RECONFIG_COMPLETED, or SVC_STATUS_RECONFIG_BUSY, or 87 + * SVC_STATUS_RECONFIG_ERROR 88 + * 89 + * @COMMAND_RECONFIG_STATUS: check the status of the configuration, return 90 + * status is SVC_STATUS_RECONFIG_COMPLETED, or SVC_STATUS_RECONFIG_BUSY, or 91 + * SVC_STATUS_RECONFIG_ERROR 92 + * 93 + * @COMMAND_RSU_STATUS: request remote system update boot log, return status 94 + * is log data or SVC_STATUS_RSU_ERROR 95 + * 96 + * @COMMAND_RSU_UPDATE: set the offset of the bitstream to boot after reboot, 97 + * return status is SVC_STATUS_RSU_OK or SVC_STATUS_RSU_ERROR 98 + */ 99 + enum stratix10_svc_command_code { 100 + COMMAND_NOOP = 0, 101 + COMMAND_RECONFIG, 102 + COMMAND_RECONFIG_DATA_SUBMIT, 103 + COMMAND_RECONFIG_DATA_CLAIM, 104 + COMMAND_RECONFIG_STATUS, 105 + COMMAND_RSU_STATUS, 106 + COMMAND_RSU_UPDATE 107 + }; 108 + 109 + /** 110 + * struct stratix10_svc_client_msg - message sent by client to service 111 + * @payload: starting address of the data to be processed 
112 + * @payload_length: data size in bytes 113 + * @command: service command 114 + * @arg: args to be passed via registers and not physically mapped buffers 115 + */ 116 + struct stratix10_svc_client_msg { 117 + void *payload; 118 + size_t payload_length; 119 + enum stratix10_svc_command_code command; 120 + u64 arg[3]; 121 + }; 122 + 123 + /** 124 + * struct stratix10_svc_command_config_type - config type 125 + * @flags: flag bit for the type of FPGA configuration 126 + */ 127 + struct stratix10_svc_command_config_type { 128 + u32 flags; 129 + }; 130 + 131 + /** 132 + * struct stratix10_svc_cb_data - callback data structure from service layer 133 + * @status: the status of sent command 134 + * @kaddr1: address of 1st completed data block 135 + * @kaddr2: address of 2nd completed data block 136 + * @kaddr3: address of 3rd completed data block 137 + */ 138 + struct stratix10_svc_cb_data { 139 + u32 status; 140 + void *kaddr1; 141 + void *kaddr2; 142 + void *kaddr3; 143 + }; 144 + 145 + /** 146 + * struct stratix10_svc_client - service client structure 147 + * @dev: the client device 148 + * @receive_cb: callback to provide service client the received data 149 + * @priv: client private data 150 + */ 151 + struct stratix10_svc_client { 152 + struct device *dev; 153 + void (*receive_cb)(struct stratix10_svc_client *client, 154 + struct stratix10_svc_cb_data *cb_data); 155 + void *priv; 156 + }; 157 + 158 + /** 159 + * stratix10_svc_request_channel_byname() - request service channel 160 + * @client: identity of the client requesting the channel 161 + * @name: supporting client name defined above 162 + * 163 + * Return: a pointer to channel assigned to the client on success, 164 + * or ERR_PTR() on error. 165 + */ 166 + struct stratix10_svc_chan 167 + *stratix10_svc_request_channel_byname(struct stratix10_svc_client *client, 168 + const char *name); 169 + 170 + /** 171 + * stratix10_svc_free_channel() - free service channel. 
172 + * @chan: service channel to be freed 173 + */ 174 + void stratix10_svc_free_channel(struct stratix10_svc_chan *chan); 175 + 176 + /** 177 + * stratix10_svc_allocate_memory() - allocate the memory 178 + * @chan: service channel assigned to the client 179 + * @size: number of bytes client requests 180 + * 181 + * Service layer allocates the requested number of bytes from the memory 182 + * pool for the client. 183 + * 184 + * Return: the starting address of allocated memory on success, or 185 + * ERR_PTR() on error. 186 + */ 187 + void *stratix10_svc_allocate_memory(struct stratix10_svc_chan *chan, 188 + size_t size); 189 + 190 + /** 191 + * stratix10_svc_free_memory() - free allocated memory 192 + * @chan: service channel assigned to the client 193 + * @kaddr: starting address of memory to be freed back to the pool 194 + */ 195 + void stratix10_svc_free_memory(struct stratix10_svc_chan *chan, void *kaddr); 196 + 197 + /** 198 + * stratix10_svc_send() - send a message to the remote 199 + * @chan: service channel assigned to the client 200 + * @msg: message data to be sent, in the format of 201 + * struct stratix10_svc_client_msg 202 + * 203 + * Return: 0 for success, -ENOMEM or -ENOBUFS on error. 204 + */ 205 + int stratix10_svc_send(struct stratix10_svc_chan *chan, void *msg); 206 + 207 + /** 208 + * stratix10_svc_done() - complete service request 209 + * @chan: service channel assigned to the client 210 + * 211 + * This function is used by service client to inform service layer that 212 + * client's service requests are completed, or there is an error in the 213 + * request process. 214 + */ 215 + void stratix10_svc_done(struct stratix10_svc_chan *chan); 216 + #endif 217 +
+6 -6
include/linux/fsl/mc.h
··· 210 210 }; 211 211 212 212 struct fsl_mc_command { 213 - u64 header; 214 - u64 params[MC_CMD_NUM_OF_PARAMS]; 213 + __le64 header; 214 + __le64 params[MC_CMD_NUM_OF_PARAMS]; 215 215 }; 216 216 217 217 enum mc_cmd_status { ··· 238 238 /* Command completion flag */ 239 239 #define MC_CMD_FLAG_INTR_DIS 0x01 240 240 241 - static inline u64 mc_encode_cmd_header(u16 cmd_id, 242 - u32 cmd_flags, 243 - u16 token) 241 + static inline __le64 mc_encode_cmd_header(u16 cmd_id, 242 + u32 cmd_flags, 243 + u16 token) 244 244 { 245 - u64 header = 0; 245 + __le64 header = 0; 246 246 struct mc_cmd_header *hdr = (struct mc_cmd_header *)&header; 247 247 248 248 hdr->cmd_id = cpu_to_le16(cmd_id);
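The fsl_mc hunk retypes the command header and parameters as `__le64` so sparse can verify that every store goes through a cpu_to_le*() conversion: the MC firmware consumes little-endian words regardless of CPU endianness. A portable userspace stand-in for that conversion, serializing LSB-first by construction:

```c
#include <stdint.h>

/*
 * Userspace stand-in for the kernel's cpu_to_le64(), illustrating why
 * struct fsl_mc_command now types its fields as __le64: the firmware
 * side expects little-endian words, so the host value is serialized
 * least-significant byte first on any host.
 */
static void put_le64(uint8_t out[8], uint64_t val)
{
	int i;

	for (i = 0; i < 8; i++)
		out[i] = (uint8_t)(val >> (8 * i));
}
```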
-17
include/linux/hyperv.h
··· 831 831 */ 832 832 struct list_head sc_list; 833 833 /* 834 - * Current number of sub-channels. 835 - */ 836 - int num_sc; 837 - /* 838 - * Number of a sub-channel (position within sc_list) which is supposed 839 - * to be used as the next outgoing channel. 840 - */ 841 - int next_oc; 842 - /* 843 834 * The primary channel this sub-channel belongs to. 844 835 * This will be NULL for the primary channel. 845 836 */ ··· 962 971 963 972 void vmbus_set_chn_rescind_callback(struct vmbus_channel *channel, 964 973 void (*chn_rescind_cb)(struct vmbus_channel *)); 965 - 966 - /* 967 - * Retrieve the (sub) channel on which to send an outgoing request. 968 - * When a primary channel has multiple sub-channels, we choose a 969 - * channel whose VCPU binding is closest to the VCPU on which 970 - * this call is being made. 971 - */ 972 - struct vmbus_channel *vmbus_get_outgoing_channel(struct vmbus_channel *primary); 973 974 974 975 /* 975 976 * Check if sub-channels have already been offered. This API will be useful
+2
include/linux/mtd/mtd.h
··· 25 25 #include <linux/notifier.h> 26 26 #include <linux/device.h> 27 27 #include <linux/of.h> 28 + #include <linux/nvmem-provider.h> 28 29 29 30 #include <mtd/mtd-abi.h> 30 31 ··· 343 342 struct device dev; 344 343 int usecount; 345 344 struct mtd_debug_info dbg; 345 + struct nvmem_device *nvmem; 346 346 }; 347 347 348 348 int mtd_ooblayout_ecc(struct mtd_info *mtd, int section,
+11
include/linux/nvmem-provider.h
··· 19 19 typedef int (*nvmem_reg_write_t)(void *priv, unsigned int offset, 20 20 void *val, size_t bytes); 21 21 22 + enum nvmem_type { 23 + NVMEM_TYPE_UNKNOWN = 0, 24 + NVMEM_TYPE_EEPROM, 25 + NVMEM_TYPE_OTP, 26 + NVMEM_TYPE_BATTERY_BACKED, 27 + }; 28 + 22 29 /** 23 30 * struct nvmem_config - NVMEM device configuration 24 31 * ··· 35 28 * @owner: Pointer to exporter module. Used for refcounting. 36 29 * @cells: Optional array of pre-defined NVMEM cells. 37 30 * @ncells: Number of elements in cells. 31 + * @type: Type of the nvmem storage 38 32 * @read_only: Device is read-only. 39 33 * @root_only: Device is accessible to root only. 34 + * @no_of_node: Device should not use the parent's of_node even if it's !NULL. 40 35 * @reg_read: Callback to read data. 41 36 * @reg_write: Callback to write data. 42 37 * @size: Device size. ··· 60 51 struct module *owner; 61 52 const struct nvmem_cell_info *cells; 62 53 int ncells; 54 + enum nvmem_type type; 63 55 bool read_only; 64 56 bool root_only; 57 + bool no_of_node; 65 58 nvmem_reg_read_t reg_read; 66 59 nvmem_reg_write_t reg_write; 67 60 int size;
+8
include/linux/pci.h
··· 396 396 unsigned int is_hotplug_bridge:1; 397 397 unsigned int shpc_managed:1; /* SHPC owned by shpchp */ 398 398 unsigned int is_thunderbolt:1; /* Thunderbolt controller */ 399 + /* 400 + * Devices marked as untrusted are the ones that can potentially 401 + * execute DMA attacks and similar. They are typically connected 402 + * through external ports such as Thunderbolt but not limited to 403 + * that. When an IOMMU is enabled they should be getting full 404 + * mappings to make sure they cannot access arbitrary memory. 405 + */ 406 + unsigned int untrusted:1; 399 407 unsigned int __aer_firmware_first_valid:1; 400 408 unsigned int __aer_firmware_first:1; 401 409 unsigned int broken_intx_masking:1; /* INTx masking can't be used */
+35
include/uapi/linux/android/binder_ctl.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + /* 3 + * Copyright (C) 2018 Canonical Ltd. 4 + * 5 + */ 6 + 7 + #ifndef _UAPI_LINUX_BINDER_CTL_H 8 + #define _UAPI_LINUX_BINDER_CTL_H 9 + 10 + #include <linux/android/binder.h> 11 + #include <linux/types.h> 12 + #include <linux/ioctl.h> 13 + 14 + #define BINDERFS_MAX_NAME 255 15 + 16 + /** 17 + * struct binderfs_device - retrieve information about a new binder device 18 + * @name: the name to use for the new binderfs binder device 19 + * @major: major number allocated for binderfs binder devices 20 + * @minor: minor number allocated for the new binderfs binder device 21 + * 22 + */ 23 + struct binderfs_device { 24 + char name[BINDERFS_MAX_NAME + 1]; 25 + __u8 major; 26 + __u8 minor; 27 + }; 28 + 29 + /** 30 + * Allocate a new binder device. 31 + */ 32 + #define BINDER_CTL_ADD _IOWR('b', 1, struct binderfs_device) 33 + 34 + #endif /* _UAPI_LINUX_BINDER_CTL_H */ 35 +
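`struct binderfs_device` is the request/reply payload for BINDER_CTL_ADD: userspace fills `name`, and the kernel reports the allocated major/minor back through the `_IOWR` round trip. A hedged sketch of preparing such a request; the `/dev/binderfs/binder-control` path in the comment is an illustrative assumption, since the mount point is chosen by whoever mounts binderfs:

```c
#include <stdint.h>
#include <string.h>

#define BINDERFS_MAX_NAME 255

/* Mirror of the uapi struct above (uint8_t standing in for __u8). */
struct binderfs_device {
	char name[BINDERFS_MAX_NAME + 1];
	uint8_t major;
	uint8_t minor;
};

/*
 * Prepare a BINDER_CTL_ADD request. Returns 0, or -1 if the name is
 * too long. The actual call would look roughly like:
 *     fd = open("/dev/binderfs/binder-control", O_RDONLY | O_CLOEXEC);
 *     ioctl(fd, BINDER_CTL_ADD, &req);
 * (mount point and open flags here are assumptions, not from the header).
 */
static int binderfs_prepare_add(struct binderfs_device *req, const char *name)
{
	if (strlen(name) > BINDERFS_MAX_NAME)
		return -1;
	memset(req, 0, sizeof(*req));
	strcpy(req->name, name);
	return 0;
}
```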
+1
include/uapi/linux/magic.h
··· 73 73 #define DAXFS_MAGIC 0x64646178 74 74 #define BINFMTFS_MAGIC 0x42494e4d 75 75 #define DEVPTS_SUPER_MAGIC 0x1cd1 76 + #define BINDERFS_SUPER_MAGIC 0x6c6f6f70 76 77 #define FUTEXFS_SUPER_MAGIC 0xBAD1DEA 77 78 #define PIPEFS_MAGIC 0x50495045 78 79 #define PROC_SUPER_MAGIC 0x9fa0
+4 -3
tools/Makefile
··· 13 13 @echo ' cgroup - cgroup tools' 14 14 @echo ' cpupower - a tool for all things x86 CPU power' 15 15 @echo ' firewire - the userspace part of nosy, an IEEE-1394 traffic sniffer' 16 + @echo ' firmware - Firmware tools' 16 17 @echo ' freefall - laptop accelerometer program for disk protection' 17 18 @echo ' gpio - GPIO tools' 18 19 @echo ' hv - tools used when in Hyper-V clients' ··· 61 60 cpupower: FORCE 62 61 $(call descend,power/$@) 63 62 64 - cgroup firewire hv guest spi usb virtio vm bpf iio gpio objtool leds wmi pci: FORCE 63 + cgroup firewire hv guest spi usb virtio vm bpf iio gpio objtool leds wmi pci firmware: FORCE 65 64 $(call descend,$@) 66 65 67 66 liblockdep: FORCE ··· 138 137 cpupower_clean: 139 138 $(call descend,power/cpupower,clean) 140 139 141 - cgroup_clean hv_clean firewire_clean spi_clean usb_clean virtio_clean vm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean pci_clean: 140 + cgroup_clean hv_clean firewire_clean spi_clean usb_clean virtio_clean vm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean pci_clean firmware_clean: 142 141 $(call descend,$(@:_clean=),clean) 143 142 144 143 liblockdep_clean: ··· 176 175 perf_clean selftests_clean turbostat_clean spi_clean usb_clean virtio_clean \ 177 176 vm_clean bpf_clean iio_clean x86_energy_perf_policy_clean tmon_clean \ 178 177 freefall_clean build_clean libbpf_clean libsubcmd_clean liblockdep_clean \ 179 - gpio_clean objtool_clean leds_clean wmi_clean pci_clean 178 + gpio_clean objtool_clean leds_clean wmi_clean pci_clean firmware_clean 180 179 181 180 .PHONY: FORCE
+13
tools/firmware/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # Makefile for firmware tools 3 + 4 + CFLAGS = -Wall -Wextra -g 5 + 6 + all: ihex2fw 7 + %: %.c 8 + $(CC) $(CFLAGS) -o $@ $^ 9 + 10 + clean: 11 + $(RM) ihex2fw 12 + 13 + .PHONY: all clean
+281
tools/firmware/ihex2fw.c
··· 1 + /* 2 + * Parser/loader for IHEX formatted data. 3 + * 4 + * Copyright © 2008 David Woodhouse <dwmw2@infradead.org> 5 + * Copyright © 2005 Jan Harkes <jaharkes@cs.cmu.edu> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <stdint.h> 13 + #include <arpa/inet.h> 14 + #include <stdio.h> 15 + #include <errno.h> 16 + #include <sys/types.h> 17 + #include <sys/stat.h> 18 + #include <sys/mman.h> 19 + #include <fcntl.h> 20 + #include <string.h> 21 + #include <unistd.h> 22 + #include <stdlib.h> 23 + #define _GNU_SOURCE 24 + #include <getopt.h> 25 + 26 + 27 + struct ihex_binrec { 28 + struct ihex_binrec *next; /* not part of the real data structure */ 29 + uint32_t addr; 30 + uint16_t len; 31 + uint8_t data[]; 32 + }; 33 + 34 + /** 35 + * nybble/hex are little helpers to parse hexadecimal numbers to a byte value 36 + **/ 37 + static uint8_t nybble(const uint8_t n) 38 + { 39 + if (n >= '0' && n <= '9') return n - '0'; 40 + else if (n >= 'A' && n <= 'F') return n - ('A' - 10); 41 + else if (n >= 'a' && n <= 'f') return n - ('a' - 10); 42 + return 0; 43 + } 44 + 45 + static uint8_t hex(const uint8_t *data, uint8_t *crc) 46 + { 47 + uint8_t val = (nybble(data[0]) << 4) | nybble(data[1]); 48 + *crc += val; 49 + return val; 50 + } 51 + 52 + static int process_ihex(uint8_t *data, ssize_t size); 53 + static void file_record(struct ihex_binrec *record); 54 + static int output_records(int outfd); 55 + 56 + static int sort_records = 0; 57 + static int wide_records = 0; 58 + static int include_jump = 0; 59 + 60 + static int usage(void) 61 + { 62 + fprintf(stderr, "ihex2fw: Convert ihex files into binary " 63 + "representation for use by Linux kernel\n"); 64 + fprintf(stderr, "usage: ihex2fw [<options>] <src.HEX> <dst.fw>\n"); 65 + fprintf(stderr, " -w: wide records (16-bit length)\n"); 66 + 
+	fprintf(stderr, "	-s: sort records by address\n");
+	fprintf(stderr, "	-j: include records for CS:IP/EIP address\n");
+	return 1;
+}
+
+int main(int argc, char **argv)
+{
+	int infd, outfd;
+	struct stat st;
+	uint8_t *data;
+	int opt;
+
+	while ((opt = getopt(argc, argv, "wsj")) != -1) {
+		switch (opt) {
+		case 'w':
+			wide_records = 1;
+			break;
+		case 's':
+			sort_records = 1;
+			break;
+		case 'j':
+			include_jump = 1;
+			break;
+		default:
+			return usage();
+		}
+	}
+
+	if (optind + 2 != argc)
+		return usage();
+
+	if (!strcmp(argv[optind], "-"))
+		infd = 0;
+	else
+		infd = open(argv[optind], O_RDONLY);
+	if (infd == -1) {
+		fprintf(stderr, "Failed to open source file: %s",
+			strerror(errno));
+		return usage();
+	}
+	if (fstat(infd, &st)) {
+		perror("stat");
+		return 1;
+	}
+	data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, infd, 0);
+	if (data == MAP_FAILED) {
+		perror("mmap");
+		return 1;
+	}
+
+	if (!strcmp(argv[optind+1], "-"))
+		outfd = 1;
+	else
+		outfd = open(argv[optind+1], O_TRUNC|O_CREAT|O_WRONLY, 0644);
+	if (outfd == -1) {
+		fprintf(stderr, "Failed to open destination file: %s",
+			strerror(errno));
+		return usage();
+	}
+	if (process_ihex(data, st.st_size))
+		return 1;
+
+	return output_records(outfd);
+}
+
+static int process_ihex(uint8_t *data, ssize_t size)
+{
+	struct ihex_binrec *record;
+	uint32_t offset = 0;
+	uint32_t data32;
+	uint8_t type, crc = 0, crcbyte = 0;
+	int i, j;
+	int line = 1;
+	int len;
+
+	i = 0;
+next_record:
+	/* search for the start of record character */
+	while (i < size) {
+		if (data[i] == '\n') line++;
+		if (data[i++] == ':') break;
+	}
+
+	/* Minimum record length would be about 10 characters */
+	if (i + 10 > size) {
+		fprintf(stderr, "Can't find valid record at line %d\n", line);
+		return -EINVAL;
+	}
+
+	len = hex(data + i, &crc); i += 2;
+	if (wide_records) {
+		len <<= 8;
+		len += hex(data + i, &crc); i += 2;
+	}
+	record = malloc((sizeof (*record) + len + 3) & ~3);
+	if (!record) {
+		fprintf(stderr, "out of memory for records\n");
+		return -ENOMEM;
+	}
+	memset(record, 0, (sizeof(*record) + len + 3) & ~3);
+	record->len = len;
+
+	/* now check if we have enough data to read everything */
+	if (i + 8 + (record->len * 2) > size) {
+		fprintf(stderr, "Not enough data to read complete record at line %d\n",
+			line);
+		return -EINVAL;
+	}
+
+	record->addr  = hex(data + i, &crc) << 8; i += 2;
+	record->addr |= hex(data + i, &crc); i += 2;
+	type = hex(data + i, &crc); i += 2;
+
+	for (j = 0; j < record->len; j++, i += 2)
+		record->data[j] = hex(data + i, &crc);
+
+	/* check CRC */
+	crcbyte = hex(data + i, &crc); i += 2;
+	if (crc != 0) {
+		fprintf(stderr, "CRC failure at line %d: got 0x%X, expected 0x%X\n",
+			line, crcbyte, (unsigned char)(crcbyte-crc));
+		return -EINVAL;
+	}
+
+	/* Done reading the record */
+	switch (type) {
+	case 0:
+		/* old style EOF record? */
+		if (!record->len)
+			break;
+
+		record->addr += offset;
+		file_record(record);
+		goto next_record;
+
+	case 1: /* End-Of-File Record */
+		if (record->addr || record->len) {
+			fprintf(stderr, "Bad EOF record (type 01) format at line %d",
+				line);
+			return -EINVAL;
+		}
+		break;
+
+	case 2: /* Extended Segment Address Record (HEX86) */
+	case 4: /* Extended Linear Address Record (HEX386) */
+		if (record->addr || record->len != 2) {
+			fprintf(stderr, "Bad HEX86/HEX386 record (type %02X) at line %d\n",
+				type, line);
+			return -EINVAL;
+		}
+
+		/* We shouldn't really be using the offset for HEX86 because
+		 * the wraparound case is specified quite differently. */
+		offset = record->data[0] << 8 | record->data[1];
+		offset <<= (type == 2 ? 4 : 16);
+		goto next_record;
+
+	case 3: /* Start Segment Address Record */
+	case 5: /* Start Linear Address Record */
+		if (record->addr || record->len != 4) {
+			fprintf(stderr, "Bad Start Address record (type %02X) at line %d\n",
+				type, line);
+			return -EINVAL;
+		}
+
+		memcpy(&data32, &record->data[0], sizeof(data32));
+		data32 = htonl(data32);
+		memcpy(&record->data[0], &data32, sizeof(data32));
+
+		/* These records contain the CS/IP or EIP where execution
+		 * starts. If requested output this as a record. */
+		if (include_jump)
+			file_record(record);
+		goto next_record;
+
+	default:
+		fprintf(stderr, "Unknown record (type %02X)\n", type);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static struct ihex_binrec *records;
+
+static void file_record(struct ihex_binrec *record)
+{
+	struct ihex_binrec **p = &records;
+
+	while ((*p) && (!sort_records || (*p)->addr < record->addr))
+		p = &((*p)->next);
+
+	record->next = *p;
+	*p = record;
+}
+
+static int output_records(int outfd)
+{
+	unsigned char zeroes[6] = {0, 0, 0, 0, 0, 0};
+	struct ihex_binrec *p = records;
+
+	while (p) {
+		uint16_t writelen = (p->len + 9) & ~3;
+
+		p->addr = htonl(p->addr);
+		p->len = htons(p->len);
+		if (write(outfd, &p->addr, writelen) != writelen)
+			return 1;
+		p = p->next;
+	}
+	/* EOF record is zero length, since we don't bother to represent
+	   the type field in the binary version */
+	if (write(outfd, zeroes, 6) != 6)
+		return 1;
+	return 0;
+}
+13 -2
tools/hv/hv_kvp_daemon.c
@@ -1178,6 +1178,7 @@
 	FILE *file;
 	char cmd[PATH_MAX];
 	char *mac_addr;
+	int str_len;
 
 	/*
 	 * Set the configuration for the specified interface with
@@ -1302,8 +1301,18 @@
 	 * invoke the external script to do its magic.
 	 */
 
-	snprintf(cmd, sizeof(cmd), KVP_SCRIPTS_PATH "%s %s",
-		"hv_set_ifconfig", if_file);
+	str_len = snprintf(cmd, sizeof(cmd), KVP_SCRIPTS_PATH "%s %s",
+		"hv_set_ifconfig", if_file);
+	/*
+	 * This is a little overcautious, but it's necessary to suppress some
+	 * false warnings from gcc 8.0.1.
+	 */
+	if (str_len <= 0 || (unsigned int)str_len >= sizeof(cmd)) {
+		syslog(LOG_ERR, "Cmd '%s' (len=%d) may be too long",
+			cmd, str_len);
+		return HV_E_FAIL;
+	}
+
 	if (system(cmd)) {
 		syslog(LOG_ERR, "Failed to execute cmd '%s'; error: %d %s",
 			cmd, errno, strerror(errno));