Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-4.15-rc1' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc updates from Greg KH:
"Here is the big set of char/misc and other driver subsystem patches
for 4.15-rc1.

There are small changes all over here, hyperv driver updates, pcmcia
driver updates, w1 driver updates, vme driver updates, nvmem driver
updates, and lots of other little one-off driver updates as well. The
shortlog has the full details.

All of these have been in linux-next for quite a while with no
reported issues"

* tag 'char-misc-4.15-rc1' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (90 commits)
VME: Return -EBUSY when DMA list in use
w1: keep balance of mutex locks and refcnts
MAINTAINERS: Update VME subsystem tree.
nvmem: sunxi-sid: add support for A64/H5's SID controller
nvmem: imx-ocotp: Update module description
nvmem: imx-ocotp: Enable i.MX7D OTP write support
nvmem: imx-ocotp: Add i.MX7D timing write clock setup support
nvmem: imx-ocotp: Move i.MX6 write clock setup to dedicated function
nvmem: imx-ocotp: Add support for banked OTP addressing
nvmem: imx-ocotp: Pass parameters via a struct
nvmem: imx-ocotp: Restrict OTP write to IMX6 processors
nvmem: uniphier: add UniPhier eFuse driver
dt-bindings: nvmem: add description for UniPhier eFuse
nvmem: set nvmem->owner to nvmem->dev->driver->owner if unset
nvmem: qfprom: fix different address space warnings of sparse
nvmem: mtk-efuse: fix different address space warnings of sparse
nvmem: mtk-efuse: use stack for nvmem_config instead of malloc'ing it
nvmem: imx-iim: use stack for nvmem_config instead of malloc'ing it
thunderbolt: tb: fix use after free in tb_activate_pcie_devices
MAINTAINERS: Add git tree for Thunderbolt development
...

+2771 -518
+70
Documentation/ABI/stable/sysfs-bus-vmbus
··· 41 41 Contact: K. Y. Srinivasan <kys@microsoft.com> 42 42 Description: The 16 bit vendor ID of the device 43 43 Users: tools/hv/lsvmbus and user level RDMA libraries 44 + 45 + What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/cpu 46 + Date: September 2017 47 + KernelVersion: 4.14 48 + Contact: Stephen Hemminger <sthemmin@microsoft.com> 49 + Description: VCPU (sub)channel is affinitized to 50 + Users: tools/hv/lsvmbus and other debugging tools 51 + 52 + What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/cpu 53 + Date: September 2017 54 + KernelVersion: 4.14 55 + Contact: Stephen Hemminger <sthemmin@microsoft.com> 56 + Description: VCPU (sub)channel is affinitized to 57 + Users: tools/hv/lsvmbus and other debugging tools 58 + 59 + What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/in_mask 60 + Date: September 2017 61 + KernelVersion: 4.14 62 + Contact: Stephen Hemminger <sthemmin@microsoft.com> 63 + Description: Inbound channel signaling state 64 + Users: Debugging tools 65 + 66 + What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/latency 67 + Date: September 2017 68 + KernelVersion: 4.14 69 + Contact: Stephen Hemminger <sthemmin@microsoft.com> 70 + Description: Channel signaling latency 71 + Users: Debugging tools 72 + 73 + What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/out_mask 74 + Date: September 2017 75 + KernelVersion: 4.14 76 + Contact: Stephen Hemminger <sthemmin@microsoft.com> 77 + Description: Outbound channel signaling state 78 + Users: Debugging tools 79 + 80 + What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/pending 81 + Date: September 2017 82 + KernelVersion: 4.14 83 + Contact: Stephen Hemminger <sthemmin@microsoft.com> 84 + Description: Channel interrupt pending state 85 + Users: Debugging tools 86 + 87 + What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/read_avail 88 + Date: September 
2017 89 + KernelVersion: 4.14 90 + Contact: Stephen Hemminger <sthemmin@microsoft.com> 91 + Description: Bytes available to read 92 + Users: Debugging tools 93 + 94 + What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/write_avail 95 + Date: September 2017 96 + KernelVersion: 4.14 97 + Contact: Stephen Hemminger <sthemmin@microsoft.com> 98 + Description: Bytes available to write 99 + Users: Debugging tools 100 + 101 + What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/events 102 + Date: September 2017 103 + KernelVersion: 4.14 104 + Contact: Stephen Hemminger <sthemmin@microsoft.com> 105 + Description: Number of times we have signaled the host 106 + Users: Debugging tools 107 + 108 + What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/interrupts 109 + Date: September 2017 110 + KernelVersion: 4.14 111 + Contact: Stephen Hemminger <sthemmin@microsoft.com> 112 + Description: Number of times we have taken an interrupt (incoming) 113 + Users: Debugging tools
+21
Documentation/ABI/testing/sysfs-driver-w1_ds28e17
··· 1 + What: /sys/bus/w1/devices/19-<id>/speed 2 + Date: Sep 2017 3 + KernelVersion: 4.14 4 + Contact: Jan Kandziora <jjj@gmx.de> 5 + Description: When written, this file sets the I2C speed on the connected 6 + DS28E17 chip. When read, it reads the current setting from 7 + the DS28E17 chip. 8 + Valid values: 100, 400, 900 [kBaud]. 9 + Default 100, can be set by w1_ds28e17.speed= module parameter. 10 + Users: w1_ds28e17 driver 11 + 12 + What: /sys/bus/w1/devices/19-<id>/stretch 13 + Date: Sep 2017 14 + KernelVersion: 4.14 15 + Contact: Jan Kandziora <jjj@gmx.de> 16 + Description: When written, this file sets the multiplier used to calculate 17 + the busy timeout for I2C operations on the connected DS28E17 18 + chip. When read, returns the current setting. 19 + Valid values: 1 to 9. 20 + Default 1, can be set by w1_ds28e17.stretch= module parameter. 21 + Users: w1_ds28e17 driver
+1
Documentation/devicetree/bindings/nvmem/allwinner,sunxi-sid.txt
··· 5 5 "allwinner,sun4i-a10-sid" 6 6 "allwinner,sun7i-a20-sid" 7 7 "allwinner,sun8i-h3-sid" 8 + "allwinner,sun50i-a64-sid" 8 9 9 10 - reg: Should contain registers location and length 10 11
+1 -1
Documentation/devicetree/bindings/nvmem/amlogic-efuse.txt
··· 1 - = Amlogic eFuse device tree bindings = 1 + = Amlogic Meson GX eFuse device tree bindings = 2 2 3 3 Required properties: 4 4 - compatible: should be "amlogic,meson-gxbb-efuse"
+22
Documentation/devicetree/bindings/nvmem/amlogic-meson-mx-efuse.txt
··· 1 + Amlogic Meson6/Meson8/Meson8b efuse 2 + 3 + Required Properties: 4 + - compatible: depending on the SoC this should be one of: 5 + - "amlogic,meson6-efuse" 6 + - "amlogic,meson8-efuse" 7 + - "amlogic,meson8b-efuse" 8 + - reg: base address and size of the efuse registers 9 + - clocks: a reference to the efuse core gate clock 10 + - clock-names: must be "core" 11 + 12 + All properties and sub-nodes as well as the consumer bindings 13 + defined in nvmem.txt in this directory are also supported. 14 + 15 + 16 + Example: 17 + efuse: nvmem@0 { 18 + compatible = "amlogic,meson8-efuse"; 19 + reg = <0x0 0x2000>; 20 + clocks = <&clkc CLKID_EFUSE>; 21 + clock-names = "core"; 22 + };
+1
Documentation/devicetree/bindings/nvmem/rockchip-efuse.txt
··· 6 6 - "rockchip,rk3188-efuse" - for RK3188 SoCs. 7 7 - "rockchip,rk3228-efuse" - for RK3228 SoCs. 8 8 - "rockchip,rk3288-efuse" - for RK3288 SoCs. 9 + - "rockchip,rk3368-efuse" - for RK3368 SoCs. 9 10 - "rockchip,rk3399-efuse" - for RK3399 SoCs. 10 11 - reg: Should contain the registers location and exact eFuse size 11 12 - clocks: Should be the clock id of eFuse
+20
Documentation/devicetree/bindings/nvmem/snvs-lpgpr.txt
··· 1 + Device tree bindings for Low Power General Purpose Register found in i.MX6Q/D 2 + Secure Non-Volatile Storage. 3 + 4 + This DT node should be represented as a sub-node of a "syscon", 5 + "simple-mfd" node. 6 + 7 + Required properties: 8 + - compatible: should be one of the following variants: 9 + "fsl,imx6q-snvs-lpgpr" for Freescale i.MX6Q/D/DL/S 10 + "fsl,imx6ul-snvs-lpgpr" for Freescale i.MX6UL 11 + 12 + Example: 13 + snvs: snvs@020cc000 { 14 + compatible = "fsl,sec-v4.0-mon", "syscon", "simple-mfd"; 15 + reg = <0x020cc000 0x4000>; 16 + 17 + snvs_lpgpr: snvs-lpgpr { 18 + compatible = "fsl,imx6q-snvs-lpgpr"; 19 + }; 20 + };
+49
Documentation/devicetree/bindings/nvmem/uniphier-efuse.txt
··· 1 + = UniPhier eFuse device tree bindings = 2 + 3 + This UniPhier eFuse must be under soc-glue. 4 + 5 + Required properties: 6 + - compatible: should be "socionext,uniphier-efuse" 7 + - reg: should contain the register location and length 8 + 9 + = Data cells = 10 + Are child nodes of efuse, bindings of which are described in 11 + bindings/nvmem/nvmem.txt 12 + 13 + Example: 14 + 15 + soc-glue@5f900000 { 16 + compatible = "socionext,uniphier-ld20-soc-glue-debug", 17 + "simple-mfd"; 18 + #address-cells = <1>; 19 + #size-cells = <1>; 20 + ranges = <0x0 0x5f900000 0x2000>; 21 + 22 + efuse@100 { 23 + compatible = "socionext,uniphier-efuse"; 24 + reg = <0x100 0x28>; 25 + }; 26 + 27 + efuse@200 { 28 + compatible = "socionext,uniphier-efuse"; 29 + reg = <0x200 0x68>; 30 + #address-cells = <1>; 31 + #size-cells = <1>; 32 + 33 + /* Data cells */ 34 + usb_mon: usb-mon@54 { 35 + reg = <0x54 0xc>; 36 + }; 37 + }; 38 + }; 39 + 40 + = Data consumers = 41 + Are device nodes which consume nvmem data cells. 42 + 43 + Example: 44 + 45 + usb { 46 + ... 47 + nvmem-cells = <&usb_mon>; 48 + nvmem-cell-names = "usb_mon"; 49 + }
+16 -4
Documentation/trace/coresight-cpu-debug.txt
··· 149 149 150 150 At the runtime you can disable idle states with below methods: 151 151 152 - Set latency request to /dev/cpu_dma_latency to disable all CPUs specific idle 153 - states (if latency = 0uS then disable all idle states): 154 - # echo "what_ever_latency_you_need_in_uS" > /dev/cpu_dma_latency 152 + It is possible to disable CPU idle states by way of the PM QoS 153 + subsystem, more specifically by using the "/dev/cpu_dma_latency" 154 + interface (see Documentation/power/pm_qos_interface.txt for more 155 + details). As specified in the PM QoS documentation the requested 156 + parameter will stay in effect until the file descriptor is released. 157 + For example: 155 158 156 - Disable specific CPU's specific idle state: 159 + # exec 3<> /dev/cpu_dma_latency; echo 0 >&3 160 + ... 161 + Do some work... 162 + ... 163 + # exec 3>&- 164 + 165 + The same can also be done from an application program. 166 + 167 + Disable specific CPU's specific idle state from cpuidle sysfs (see 168 + Documentation/cpuidle/sysfs.txt): 157 169 # echo 1 > /sys/devices/system/cpu/cpu$cpu/cpuidle/state$state/disable 158 170 159 171
+2
Documentation/w1/slaves/00-INDEX
··· 10 10 - The Maxim/Dallas Semiconductor ds2438 smart battery monitor. 11 11 w1_ds28e04 12 12 - The Maxim/Dallas Semiconductor ds28e04 eeprom. 13 + w1_ds28e17 14 + - The Maxim/Dallas Semiconductor ds28e17 1-Wire-to-I2C Master Bridge.
+68
Documentation/w1/slaves/w1_ds28e17
··· 1 + Kernel driver w1_ds28e17 2 + ======================== 3 + 4 + Supported chips: 5 + * Maxim DS28E17 1-Wire-to-I2C Master Bridge 6 + 7 + supported family codes: 8 + W1_FAMILY_DS28E17 0x19 9 + 10 + Author: Jan Kandziora <jjj@gmx.de> 11 + 12 + 13 + Description 14 + ----------- 15 + The DS28E17 is a Onewire slave device which acts as an I2C bus master. 16 + 17 + This driver creates a new I2C bus for any DS28E17 device detected. I2C buses 18 + come and go as the DS28E17 devices come and go. I2C slave devices connected to 19 + a DS28E17 can be accessed by the kernel or userspace tools as if they were 20 + connected to a "native" I2C bus master. 21 + 22 + 23 + A udev rule like the following 24 + ------------------------------------------------------------------------------- 25 + SUBSYSTEM=="i2c-dev", KERNEL=="i2c-[0-9]*", ATTRS{name}=="w1-19-*", \ 26 + SYMLINK+="i2c-$attr{name}" 27 + ------------------------------------------------------------------------------- 28 + may be used to create stable /dev/i2c- entries based on the unique id of the 29 + DS28E17 chip. 30 + 31 + 32 + Driver parameters are: 33 + 34 + speed: 35 + This sets up the default I2C speed a DS28E17 gets configured for as soon 36 + as it is connected. The power-on default of the DS28E17 is 400kBaud, but 37 + chips may come and go on the Onewire bus without being de-powered and 38 + as soon as the "w1_ds28e17" driver notices a freshly connected, or 39 + reconnected DS28E17 device on the Onewire bus, it will re-apply this 40 + setting. 41 + 42 + Valid values are 100, 400, 900 [kBaud]. Any other value means to leave 43 + alone the current DS28E17 setting on detect. The default value is 100. 44 + 45 + stretch: 46 + This sets up the default stretch value used for freshly connected 47 + DS28E17 devices. It is a multiplier used in the calculation of the busy 48 + wait time for an I2C transfer. 
This is to account for I2C slave devices 49 + which make heavy use of the I2C clock stretching feature and thus, the 50 + needed timeout cannot be pre-calculated correctly. As the w1_ds28e17 51 + driver checks the DS28E17's busy flag in a loop after the precalculated 52 + wait time, it should rarely be necessary to tweak this setting. 53 + 54 + Leave it at 1 unless you get ETIMEDOUT errors and a "w1_slave_driver 55 + 19-00000002dbd8: busy timeout" in the kernel log. 56 + 57 + Valid values are 1 to 9. The default is 1. 58 + 59 + 60 + The driver creates sysfs files /sys/bus/w1/devices/19-<id>/speed and 61 + /sys/bus/w1/devices/19-<id>/stretch for each device, preloaded with the default 62 + settings from the driver parameters. They may be changed anytime. In addition a 63 + directory /sys/bus/w1/devices/19-<id>/i2c-<nnn> for the I2C bus master sysfs 64 + structure is created. 65 + 66 + 67 + See https://github.com/ianka/w1_ds28e17 for even more information. 68 +
+3 -2
MAINTAINERS
··· 5474 5474 5475 5475 FPGA MANAGER FRAMEWORK 5476 5476 M: Alan Tull <atull@kernel.org> 5477 - R: Moritz Fischer <mdf@kernel.org> 5477 + M: Moritz Fischer <mdf@kernel.org> 5478 5478 L: linux-fpga@vger.kernel.org 5479 5479 S: Maintained 5480 5480 T: git git://git.kernel.org/pub/scm/linux/kernel/git/atull/linux-fpga.git ··· 13390 13390 M: Michael Jamet <michael.jamet@intel.com> 13391 13391 M: Mika Westerberg <mika.westerberg@linux.intel.com> 13392 13392 M: Yehezkel Bernat <yehezkel.bernat@intel.com> 13393 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt.git 13393 13394 S: Maintained 13394 13395 F: drivers/thunderbolt/ 13395 13396 F: include/linux/thunderbolt.h ··· 14486 14485 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 14487 14486 L: devel@driverdev.osuosl.org 14488 14487 S: Maintained 14489 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git 14488 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git 14490 14489 F: Documentation/driver-api/vme.rst 14491 14490 F: drivers/staging/vme/ 14492 14491 F: drivers/vme/
+9 -6
arch/x86/hyperv/hv_init.c
··· 210 210 } 211 211 EXPORT_SYMBOL_GPL(hyperv_cleanup); 212 212 213 - void hyperv_report_panic(struct pt_regs *regs) 213 + void hyperv_report_panic(struct pt_regs *regs, long err) 214 214 { 215 215 static bool panic_reported; 216 + u64 guest_id; 216 217 217 218 /* 218 219 * We prefer to report panic on 'die' chain as we have proper ··· 224 223 return; 225 224 panic_reported = true; 226 225 227 - wrmsrl(HV_X64_MSR_CRASH_P0, regs->ip); 228 - wrmsrl(HV_X64_MSR_CRASH_P1, regs->ax); 229 - wrmsrl(HV_X64_MSR_CRASH_P2, regs->bx); 230 - wrmsrl(HV_X64_MSR_CRASH_P3, regs->cx); 231 - wrmsrl(HV_X64_MSR_CRASH_P4, regs->dx); 226 + rdmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id); 227 + 228 + wrmsrl(HV_X64_MSR_CRASH_P0, err); 229 + wrmsrl(HV_X64_MSR_CRASH_P1, guest_id); 230 + wrmsrl(HV_X64_MSR_CRASH_P2, regs->ip); 231 + wrmsrl(HV_X64_MSR_CRASH_P3, regs->ax); 232 + wrmsrl(HV_X64_MSR_CRASH_P4, regs->sp); 232 233 233 234 /* 234 235 * Let Hyper-V know there is crash data available
+1 -1
arch/x86/include/asm/mshyperv.h
··· 311 311 void hyperv_init(void); 312 312 void hyperv_setup_mmu_ops(void); 313 313 void hyper_alloc_mmu(void); 314 - void hyperv_report_panic(struct pt_regs *regs); 314 + void hyperv_report_panic(struct pt_regs *regs, long err); 315 315 bool hv_is_hypercall_page_setup(void); 316 316 void hyperv_cleanup(void); 317 317 #else /* CONFIG_HYPERV */
+1 -1
drivers/android/binder.c
··· 2192 2192 off_start, 2193 2193 offp - off_start); 2194 2194 if (!parent) { 2195 - pr_err("transaction release %d bad parent offset", 2195 + pr_err("transaction release %d bad parent offset\n", 2196 2196 debug_id); 2197 2197 continue; 2198 2198 }
+8 -10
drivers/android/binder_alloc.c
··· 186 186 } 187 187 188 188 static int binder_update_page_range(struct binder_alloc *alloc, int allocate, 189 - void *start, void *end, 190 - struct vm_area_struct *vma) 189 + void *start, void *end) 191 190 { 192 191 void *page_addr; 193 192 unsigned long user_page_addr; 194 193 struct binder_lru_page *page; 194 + struct vm_area_struct *vma = NULL; 195 195 struct mm_struct *mm = NULL; 196 196 bool need_mm = false; 197 197 ··· 215 215 } 216 216 } 217 217 218 - if (!vma && need_mm && mmget_not_zero(alloc->vma_vm_mm)) 218 + if (need_mm && mmget_not_zero(alloc->vma_vm_mm)) 219 219 mm = alloc->vma_vm_mm; 220 220 221 221 if (mm) { ··· 437 437 if (end_page_addr > has_page_addr) 438 438 end_page_addr = has_page_addr; 439 439 ret = binder_update_page_range(alloc, 1, 440 - (void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr, NULL); 440 + (void *)PAGE_ALIGN((uintptr_t)buffer->data), end_page_addr); 441 441 if (ret) 442 442 return ERR_PTR(ret); 443 443 ··· 478 478 err_alloc_buf_struct_failed: 479 479 binder_update_page_range(alloc, 0, 480 480 (void *)PAGE_ALIGN((uintptr_t)buffer->data), 481 - end_page_addr, NULL); 481 + end_page_addr); 482 482 return ERR_PTR(-ENOMEM); 483 483 } 484 484 ··· 562 562 alloc->pid, buffer->data, 563 563 prev->data, next ? 
next->data : NULL); 564 564 binder_update_page_range(alloc, 0, buffer_start_page(buffer), 565 - buffer_start_page(buffer) + PAGE_SIZE, 566 - NULL); 565 + buffer_start_page(buffer) + PAGE_SIZE); 567 566 } 568 567 list_del(&buffer->entry); 569 568 kfree(buffer); ··· 599 600 600 601 binder_update_page_range(alloc, 0, 601 602 (void *)PAGE_ALIGN((uintptr_t)buffer->data), 602 - (void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK), 603 - NULL); 603 + (void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK)); 604 604 605 605 rb_erase(&buffer->rb_node, &alloc->allocated_buffers); 606 606 buffer->free = 1; ··· 982 984 return ret; 983 985 } 984 986 985 - struct shrinker binder_shrinker = { 987 + static struct shrinker binder_shrinker = { 986 988 .count_objects = binder_shrink_count, 987 989 .scan_objects = binder_shrink_scan, 988 990 .seeks = DEFAULT_SEEKS,
+3 -3
drivers/char/pcmcia/cm4000_cs.c
··· 659 659 * is already doing that for you. 660 660 */ 661 661 662 - static void monitor_card(unsigned long p) 662 + static void monitor_card(struct timer_list *t) 663 663 { 664 - struct cm4000_dev *dev = (struct cm4000_dev *) p; 664 + struct cm4000_dev *dev = from_timer(dev, t, timer); 665 665 unsigned int iobase = dev->p_dev->resource[0]->start; 666 666 unsigned short s; 667 667 struct ptsreq ptsreq; ··· 1374 1374 DEBUGP(3, dev, "-> start_monitor\n"); 1375 1375 if (!dev->monitor_running) { 1376 1376 DEBUGP(5, dev, "create, init and add timer\n"); 1377 - setup_timer(&dev->timer, monitor_card, (unsigned long)dev); 1377 + timer_setup(&dev->timer, monitor_card, 0); 1378 1378 dev->monitor_running = 1; 1379 1379 mod_timer(&dev->timer, jiffies); 1380 1380 } else
+3 -4
drivers/char/pcmcia/cm4040_cs.c
··· 104 104 105 105 /* poll the device fifo status register. not to be confused with 106 106 * the poll syscall. */ 107 - static void cm4040_do_poll(unsigned long dummy) 107 + static void cm4040_do_poll(struct timer_list *t) 108 108 { 109 - struct reader_dev *dev = (struct reader_dev *) dummy; 109 + struct reader_dev *dev = from_timer(dev, t, poll_timer); 110 110 unsigned int obs = xinb(dev->p_dev->resource[0]->start 111 111 + REG_OFFSET_BUFFER_STATUS); 112 112 ··· 465 465 466 466 link->open = 1; 467 467 468 - dev->poll_timer.data = (unsigned long) dev; 469 468 mod_timer(&dev->poll_timer, jiffies + POLL_PERIOD); 470 469 471 470 DEBUGP(2, dev, "<- cm4040_open (successfully)\n"); ··· 584 585 init_waitqueue_head(&dev->poll_wait); 585 586 init_waitqueue_head(&dev->read_wait); 586 587 init_waitqueue_head(&dev->write_wait); 587 - setup_timer(&dev->poll_timer, cm4040_do_poll, 0); 588 + timer_setup(&dev->poll_timer, cm4040_do_poll, 0); 588 589 589 590 ret = reader_config(link, i); 590 591 if (ret) {
+1 -1
drivers/fpga/xilinx-pr-decoupler.c
··· 79 79 return !status; 80 80 } 81 81 82 - static struct fpga_bridge_ops xlnx_pr_decoupler_br_ops = { 82 + static const struct fpga_bridge_ops xlnx_pr_decoupler_br_ops = { 83 83 .enable_set = xlnx_pr_decoupler_enable_set, 84 84 .enable_show = xlnx_pr_decoupler_enable_show, 85 85 };
+3 -3
drivers/fsi/fsi-core.c
··· 185 185 return 0; 186 186 } 187 187 188 - int fsi_slave_report_and_clear_errors(struct fsi_slave *slave) 188 + static int fsi_slave_report_and_clear_errors(struct fsi_slave *slave) 189 189 { 190 190 struct fsi_master *master = slave->master; 191 191 uint32_t irq, stat; ··· 215 215 216 216 static int fsi_slave_set_smode(struct fsi_master *master, int link, int id); 217 217 218 - int fsi_slave_handle_error(struct fsi_slave *slave, bool write, uint32_t addr, 219 - size_t size) 218 + static int fsi_slave_handle_error(struct fsi_slave *slave, bool write, 219 + uint32_t addr, size_t size) 220 220 { 221 221 struct fsi_master *master = slave->master; 222 222 int rc, link;
+3 -1
drivers/hv/Makefile
··· 3 3 obj-$(CONFIG_HYPERV_UTILS) += hv_utils.o 4 4 obj-$(CONFIG_HYPERV_BALLOON) += hv_balloon.o 5 5 6 + CFLAGS_hv_trace.o = -I$(src) 7 + 6 8 hv_vmbus-y := vmbus_drv.o \ 7 9 hv.o connection.o channel.o \ 8 - channel_mgmt.o ring_buffer.o 10 + channel_mgmt.o ring_buffer.o hv_trace.o 9 11 hv_utils-y := hv_util.o hv_kvp.o hv_snapshot.o hv_fcopy.o hv_utils_transport.o
+22 -1
drivers/hv/channel.c
··· 43 43 { 44 44 struct hv_monitor_page *monitorpage; 45 45 46 + trace_vmbus_setevent(channel); 47 + 46 48 /* 47 49 * For channels marked as in "low latency" mode 48 50 * bypass the monitor page mechanism. ··· 187 185 ret = vmbus_post_msg(open_msg, 188 186 sizeof(struct vmbus_channel_open_channel), true); 189 187 188 + trace_vmbus_open(open_msg, ret); 189 + 190 190 if (ret != 0) { 191 191 err = ret; 192 192 goto error_clean_msglist; ··· 238 234 const uuid_le *shv_host_servie_id) 239 235 { 240 236 struct vmbus_channel_tl_connect_request conn_msg; 237 + int ret; 241 238 242 239 memset(&conn_msg, 0, sizeof(conn_msg)); 243 240 conn_msg.header.msgtype = CHANNELMSG_TL_CONNECT_REQUEST; 244 241 conn_msg.guest_endpoint_id = *shv_guest_servie_id; 245 242 conn_msg.host_service_id = *shv_host_servie_id; 246 243 247 - return vmbus_post_msg(&conn_msg, sizeof(conn_msg), true); 244 + ret = vmbus_post_msg(&conn_msg, sizeof(conn_msg), true); 245 + 246 + trace_vmbus_send_tl_connect_request(&conn_msg, ret); 247 + 248 + return ret; 248 249 } 249 250 EXPORT_SYMBOL_GPL(vmbus_send_tl_connect_request); 250 251 ··· 442 433 443 434 ret = vmbus_post_msg(gpadlmsg, msginfo->msgsize - 444 435 sizeof(*msginfo), true); 436 + 437 + trace_vmbus_establish_gpadl_header(gpadlmsg, ret); 438 + 445 439 if (ret != 0) 446 440 goto cleanup; 447 441 ··· 460 448 ret = vmbus_post_msg(gpadl_body, 461 449 submsginfo->msgsize - sizeof(*submsginfo), 462 450 true); 451 + 452 + trace_vmbus_establish_gpadl_body(gpadl_body, ret); 453 + 463 454 if (ret != 0) 464 455 goto cleanup; 465 456 ··· 525 510 526 511 ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_gpadl_teardown), 527 512 true); 513 + 514 + trace_vmbus_teardown_gpadl(msg, ret); 528 515 529 516 if (ret) 530 517 goto post_msg_err; ··· 605 588 606 589 ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_close_channel), 607 590 true); 591 + 592 + trace_vmbus_close_internal(msg, ret); 608 593 609 594 if (ret) { 610 595 pr_err("Close failed: close post msg 
return is %d\n", ret); ··· 764 745 desc.dataoffset8 = descsize >> 3; /* in 8-bytes granularity */ 765 746 desc.length8 = (u16)(packetlen_aligned >> 3); 766 747 desc.transactionid = requestid; 748 + desc.reserved = 0; 767 749 desc.rangecount = pagecount; 768 750 769 751 for (i = 0; i < pagecount; i++) { ··· 808 788 desc->dataoffset8 = desc_size >> 3; /* in 8-bytes granularity */ 809 789 desc->length8 = (u16)(packetlen_aligned >> 3); 810 790 desc->transactionid = requestid; 791 + desc->reserved = 0; 811 792 desc->rangecount = 1; 812 793 813 794 bufferlist[0].iov_base = desc;
+32 -4
drivers/hv/channel_mgmt.c
··· 350 350 { 351 351 tasklet_kill(&channel->callback_event); 352 352 353 - kfree_rcu(channel, rcu); 353 + kobject_put(&channel->kobj); 354 354 } 355 355 356 356 static void percpu_channel_enq(void *arg) ··· 373 373 static void vmbus_release_relid(u32 relid) 374 374 { 375 375 struct vmbus_channel_relid_released msg; 376 + int ret; 376 377 377 378 memset(&msg, 0, sizeof(struct vmbus_channel_relid_released)); 378 379 msg.child_relid = relid; 379 380 msg.header.msgtype = CHANNELMSG_RELID_RELEASED; 380 - vmbus_post_msg(&msg, sizeof(struct vmbus_channel_relid_released), 381 - true); 381 + ret = vmbus_post_msg(&msg, sizeof(struct vmbus_channel_relid_released), 382 + true); 383 + 384 + trace_vmbus_release_relid(&msg, ret); 382 385 } 383 386 384 387 void hv_process_channel_removal(u32 relid) ··· 523 520 newchannel->state = CHANNEL_OPEN_STATE; 524 521 525 522 if (!fnew) { 523 + struct hv_device *dev 524 + = newchannel->primary_channel->device_obj; 525 + 526 + if (vmbus_add_channel_kobj(dev, newchannel)) { 527 + atomic_dec(&vmbus_connection.offer_in_progress); 528 + goto err_free_chan; 529 + } 530 + 526 531 if (channel->sc_creation_callback != NULL) 527 532 channel->sc_creation_callback(newchannel); 528 533 newchannel->probe_done = true; ··· 816 805 817 806 offer = (struct vmbus_channel_offer_channel *)hdr; 818 807 808 + trace_vmbus_onoffer(offer); 809 + 819 810 /* Allocate the channel object and save this offer. 
*/ 820 811 newchannel = alloc_channel(); 821 812 if (!newchannel) { ··· 858 845 struct device *dev; 859 846 860 847 rescind = (struct vmbus_channel_rescind_offer *)hdr; 848 + 849 + trace_vmbus_onoffer_rescind(rescind); 861 850 862 851 /* 863 852 * The offer msg and the corresponding rescind msg ··· 989 974 990 975 result = (struct vmbus_channel_open_result *)hdr; 991 976 977 + trace_vmbus_onopen_result(result); 978 + 992 979 /* 993 980 * Find the open msg, copy the result and signal/unblock the wait event 994 981 */ ··· 1034 1017 unsigned long flags; 1035 1018 1036 1019 gpadlcreated = (struct vmbus_channel_gpadl_created *)hdr; 1020 + 1021 + trace_vmbus_ongpadl_created(gpadlcreated); 1037 1022 1038 1023 /* 1039 1024 * Find the establish msg, copy the result and signal/unblock the wait ··· 1085 1066 1086 1067 gpadl_torndown = (struct vmbus_channel_gpadl_torndown *)hdr; 1087 1068 1069 + trace_vmbus_ongpadl_torndown(gpadl_torndown); 1070 + 1088 1071 /* 1089 1072 * Find the open msg, copy the result and signal/unblock the wait event 1090 1073 */ ··· 1130 1109 unsigned long flags; 1131 1110 1132 1111 version_response = (struct vmbus_channel_version_response *)hdr; 1112 + 1113 + trace_vmbus_onversion_response(version_response); 1114 + 1133 1115 spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); 1134 1116 1135 1117 list_for_each_entry(msginfo, &vmbus_connection.chn_msg_list, ··· 1192 1168 hdr = (struct vmbus_channel_message_header *)msg->u.payload; 1193 1169 size = msg->header.payload_size; 1194 1170 1171 + trace_vmbus_on_message(hdr); 1172 + 1195 1173 if (hdr->msgtype >= CHANNELMSG_COUNT) { 1196 1174 pr_err("Received invalid channel message type %d size %d\n", 1197 1175 hdr->msgtype, size); ··· 1227 1201 1228 1202 msg->msgtype = CHANNELMSG_REQUESTOFFERS; 1229 1203 1230 - 1231 1204 ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_message_header), 1232 1205 true); 1206 + 1207 + trace_vmbus_request_offers(ret); 1208 + 1233 1209 if (ret != 0) { 1234 1210 
pr_err("Unable to request offers - %d\n", ret); 1235 1211
+7
drivers/hv/connection.c
··· 117 117 ret = vmbus_post_msg(msg, 118 118 sizeof(struct vmbus_channel_initiate_contact), 119 119 true); 120 + 121 + trace_vmbus_negotiate_version(msg, ret); 122 + 120 123 if (ret != 0) { 121 124 spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); 122 125 list_del(&msginfo->msglistentry); ··· 322 319 struct vmbus_channel *channel = (void *) data; 323 320 unsigned long time_limit = jiffies + 2; 324 321 322 + trace_vmbus_on_event(channel); 323 + 325 324 do { 326 325 void (*callback_fn)(void *); 327 326 ··· 413 408 414 409 if (!channel->is_dedicated_interrupt) 415 410 vmbus_send_interrupt(child_relid); 411 + 412 + ++channel->sig_events; 416 413 417 414 hv_do_fast_hypercall8(HVCALL_SIGNAL_EVENT, channel->sig_event); 418 415 }
+4
drivers/hv/hv_trace.c
··· 1 + #include "hyperv_vmbus.h" 2 + 3 + #define CREATE_TRACE_POINTS 4 + #include "hv_trace.h"
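Once hv_trace.o is built in, the tracepoints defined below can be consumed through tracefs like any other event group. A usage fragment (paths assume tracefs mounted at /sys/kernel/tracing on a Hyper-V guest running a kernel with this series):

```shell
# Enable the new hyperv/vmbus tracepoints
echo 1 > /sys/kernel/tracing/events/hyperv/enable
# ... exercise some vmbus activity ...
grep vmbus_on_message /sys/kernel/tracing/trace
# Disable them again
echo 0 > /sys/kernel/tracing/events/hyperv/enable
```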
+327
drivers/hv/hv_trace.h
··· 1 + #undef TRACE_SYSTEM 2 + #define TRACE_SYSTEM hyperv 3 + 4 + #if !defined(_HV_TRACE_H) || defined(TRACE_HEADER_MULTI_READ) 5 + #define _HV_TRACE_H 6 + 7 + #include <linux/tracepoint.h> 8 + 9 + DECLARE_EVENT_CLASS(vmbus_hdr_msg, 10 + TP_PROTO(const struct vmbus_channel_message_header *hdr), 11 + TP_ARGS(hdr), 12 + TP_STRUCT__entry(__field(unsigned int, msgtype)), 13 + TP_fast_assign(__entry->msgtype = hdr->msgtype;), 14 + TP_printk("msgtype=%u", __entry->msgtype) 15 + ); 16 + 17 + DEFINE_EVENT(vmbus_hdr_msg, vmbus_on_msg_dpc, 18 + TP_PROTO(const struct vmbus_channel_message_header *hdr), 19 + TP_ARGS(hdr) 20 + ); 21 + 22 + DEFINE_EVENT(vmbus_hdr_msg, vmbus_on_message, 23 + TP_PROTO(const struct vmbus_channel_message_header *hdr), 24 + TP_ARGS(hdr) 25 + ); 26 + 27 + TRACE_EVENT(vmbus_onoffer, 28 + TP_PROTO(const struct vmbus_channel_offer_channel *offer), 29 + TP_ARGS(offer), 30 + TP_STRUCT__entry( 31 + __field(u32, child_relid) 32 + __field(u8, monitorid) 33 + __field(u16, is_ddc_int) 34 + __field(u32, connection_id) 35 + __array(char, if_type, 16) 36 + __array(char, if_instance, 16) 37 + __field(u16, chn_flags) 38 + __field(u16, mmio_mb) 39 + __field(u16, sub_idx) 40 + ), 41 + TP_fast_assign(__entry->child_relid = offer->child_relid; 42 + __entry->monitorid = offer->monitorid; 43 + __entry->is_ddc_int = offer->is_dedicated_interrupt; 44 + __entry->connection_id = offer->connection_id; 45 + memcpy(__entry->if_type, 46 + &offer->offer.if_type.b, 16); 47 + memcpy(__entry->if_instance, 48 + &offer->offer.if_instance.b, 16); 49 + __entry->chn_flags = offer->offer.chn_flags; 50 + __entry->mmio_mb = offer->offer.mmio_megabytes; 51 + __entry->sub_idx = offer->offer.sub_channel_index; 52 + ), 53 + TP_printk("child_relid 0x%x, monitorid 0x%x, is_dedicated %d, " 54 + "connection_id 0x%x, if_type %pUl, if_instance %pUl, " 55 + "chn_flags 0x%x, mmio_megabytes %d, sub_channel_index %d", 56 + __entry->child_relid, __entry->monitorid, 57 + __entry->is_ddc_int, 
__entry->connection_id, 58 + __entry->if_type, __entry->if_instance, 59 + __entry->chn_flags, __entry->mmio_mb, 60 + __entry->sub_idx 61 + ) 62 + ); 63 + 64 + TRACE_EVENT(vmbus_onoffer_rescind, 65 + TP_PROTO(const struct vmbus_channel_rescind_offer *offer), 66 + TP_ARGS(offer), 67 + TP_STRUCT__entry(__field(u32, child_relid)), 68 + TP_fast_assign(__entry->child_relid = offer->child_relid), 69 + TP_printk("child_relid 0x%x", __entry->child_relid) 70 + ); 71 + 72 + TRACE_EVENT(vmbus_onopen_result, 73 + TP_PROTO(const struct vmbus_channel_open_result *result), 74 + TP_ARGS(result), 75 + TP_STRUCT__entry( 76 + __field(u32, child_relid) 77 + __field(u32, openid) 78 + __field(u32, status) 79 + ), 80 + TP_fast_assign(__entry->child_relid = result->child_relid; 81 + __entry->openid = result->openid; 82 + __entry->status = result->status; 83 + ), 84 + TP_printk("child_relid 0x%x, openid %d, status %d", 85 + __entry->child_relid, __entry->openid, __entry->status 86 + ) 87 + ); 88 + 89 + TRACE_EVENT(vmbus_ongpadl_created, 90 + TP_PROTO(const struct vmbus_channel_gpadl_created *gpadlcreated), 91 + TP_ARGS(gpadlcreated), 92 + TP_STRUCT__entry( 93 + __field(u32, child_relid) 94 + __field(u32, gpadl) 95 + __field(u32, status) 96 + ), 97 + TP_fast_assign(__entry->child_relid = gpadlcreated->child_relid; 98 + __entry->gpadl = gpadlcreated->gpadl; 99 + __entry->status = gpadlcreated->creation_status; 100 + ), 101 + TP_printk("child_relid 0x%x, gpadl 0x%x, creation_status %d", 102 + __entry->child_relid, __entry->gpadl, __entry->status 103 + ) 104 + ); 105 + 106 + TRACE_EVENT(vmbus_ongpadl_torndown, 107 + TP_PROTO(const struct vmbus_channel_gpadl_torndown *gpadltorndown), 108 + TP_ARGS(gpadltorndown), 109 + TP_STRUCT__entry(__field(u32, gpadl)), 110 + TP_fast_assign(__entry->gpadl = gpadltorndown->gpadl), 111 + TP_printk("gpadl 0x%x", __entry->gpadl) 112 + ); 113 + 114 + TRACE_EVENT(vmbus_onversion_response, 115 + TP_PROTO(const struct vmbus_channel_version_response *response), 116 + 
TP_ARGS(response), 117 + TP_STRUCT__entry( 118 + __field(u8, ver) 119 + ), 120 + TP_fast_assign(__entry->ver = response->version_supported; 121 + ), 122 + TP_printk("version_supported %d", __entry->ver) 123 + ); 124 + 125 + TRACE_EVENT(vmbus_request_offers, 126 + TP_PROTO(int ret), 127 + TP_ARGS(ret), 128 + TP_STRUCT__entry(__field(int, ret)), 129 + TP_fast_assign(__entry->ret = ret), 130 + TP_printk("sending ret %d", __entry->ret) 131 + ); 132 + 133 + TRACE_EVENT(vmbus_open, 134 + TP_PROTO(const struct vmbus_channel_open_channel *msg, int ret), 135 + TP_ARGS(msg, ret), 136 + TP_STRUCT__entry( 137 + __field(u32, child_relid) 138 + __field(u32, openid) 139 + __field(u32, gpadlhandle) 140 + __field(u32, target_vp) 141 + __field(u32, offset) 142 + __field(int, ret) 143 + ), 144 + TP_fast_assign( 145 + __entry->child_relid = msg->child_relid; 146 + __entry->openid = msg->openid; 147 + __entry->gpadlhandle = msg->ringbuffer_gpadlhandle; 148 + __entry->target_vp = msg->target_vp; 149 + __entry->offset = msg->downstream_ringbuffer_pageoffset; 150 + __entry->ret = ret; 151 + ), 152 + TP_printk("sending child_relid 0x%x, openid %d, " 153 + "gpadlhandle 0x%x, target_vp 0x%x, offset 0x%x, ret %d", 154 + __entry->child_relid, __entry->openid, 155 + __entry->gpadlhandle, __entry->target_vp, 156 + __entry->offset, __entry->ret 157 + ) 158 + ); 159 + 160 + TRACE_EVENT(vmbus_close_internal, 161 + TP_PROTO(const struct vmbus_channel_close_channel *msg, int ret), 162 + TP_ARGS(msg, ret), 163 + TP_STRUCT__entry( 164 + __field(u32, child_relid) 165 + __field(int, ret) 166 + ), 167 + TP_fast_assign( 168 + __entry->child_relid = msg->child_relid; 169 + __entry->ret = ret; 170 + ), 171 + TP_printk("sending child_relid 0x%x, ret %d", __entry->child_relid, 172 + __entry->ret) 173 + ); 174 + 175 + TRACE_EVENT(vmbus_establish_gpadl_header, 176 + TP_PROTO(const struct vmbus_channel_gpadl_header *msg, int ret), 177 + TP_ARGS(msg, ret), 178 + TP_STRUCT__entry( 179 + __field(u32, child_relid) 
180 + __field(u32, gpadl) 181 + __field(u16, range_buflen) 182 + __field(u16, rangecount) 183 + __field(int, ret) 184 + ), 185 + TP_fast_assign( 186 + __entry->child_relid = msg->child_relid; 187 + __entry->gpadl = msg->gpadl; 188 + __entry->range_buflen = msg->range_buflen; 189 + __entry->rangecount = msg->rangecount; 190 + __entry->ret = ret; 191 + ), 192 + TP_printk("sending child_relid 0x%x, gpadl 0x%x, range_buflen %d " 193 + "rangecount %d, ret %d", 194 + __entry->child_relid, __entry->gpadl, 195 + __entry->range_buflen, __entry->rangecount, __entry->ret 196 + ) 197 + ); 198 + 199 + TRACE_EVENT(vmbus_establish_gpadl_body, 200 + TP_PROTO(const struct vmbus_channel_gpadl_body *msg, int ret), 201 + TP_ARGS(msg, ret), 202 + TP_STRUCT__entry( 203 + __field(u32, msgnumber) 204 + __field(u32, gpadl) 205 + __field(int, ret) 206 + ), 207 + TP_fast_assign( 208 + __entry->msgnumber = msg->msgnumber; 209 + __entry->gpadl = msg->gpadl; 210 + __entry->ret = ret; 211 + ), 212 + TP_printk("sending msgnumber %d, gpadl 0x%x, ret %d", 213 + __entry->msgnumber, __entry->gpadl, __entry->ret 214 + ) 215 + ); 216 + 217 + TRACE_EVENT(vmbus_teardown_gpadl, 218 + TP_PROTO(const struct vmbus_channel_gpadl_teardown *msg, int ret), 219 + TP_ARGS(msg, ret), 220 + TP_STRUCT__entry( 221 + __field(u32, child_relid) 222 + __field(u32, gpadl) 223 + __field(int, ret) 224 + ), 225 + TP_fast_assign( 226 + __entry->child_relid = msg->child_relid; 227 + __entry->gpadl = msg->gpadl; 228 + __entry->ret = ret; 229 + ), 230 + TP_printk("sending child_relid 0x%x, gpadl 0x%x, ret %d", 231 + __entry->child_relid, __entry->gpadl, __entry->ret 232 + ) 233 + ); 234 + 235 + TRACE_EVENT(vmbus_negotiate_version, 236 + TP_PROTO(const struct vmbus_channel_initiate_contact *msg, int ret), 237 + TP_ARGS(msg, ret), 238 + TP_STRUCT__entry( 239 + __field(u32, ver) 240 + __field(u32, target_vcpu) 241 + __field(int, ret) 242 + __field(u64, int_page) 243 + __field(u64, mon_page1) 244 + __field(u64, mon_page2) 245 + ), 
246 + TP_fast_assign( 247 + __entry->ver = msg->vmbus_version_requested; 248 + __entry->target_vcpu = msg->target_vcpu; 249 + __entry->int_page = msg->interrupt_page; 250 + __entry->mon_page1 = msg->monitor_page1; 251 + __entry->mon_page2 = msg->monitor_page2; 252 + __entry->ret = ret; 253 + ), 254 + TP_printk("sending vmbus_version_requested %d, target_vcpu 0x%x, " 255 + "pages %llx:%llx:%llx, ret %d", 256 + __entry->ver, __entry->target_vcpu, __entry->int_page, 257 + __entry->mon_page1, __entry->mon_page2, __entry->ret 258 + ) 259 + ); 260 + 261 + TRACE_EVENT(vmbus_release_relid, 262 + TP_PROTO(const struct vmbus_channel_relid_released *msg, int ret), 263 + TP_ARGS(msg, ret), 264 + TP_STRUCT__entry( 265 + __field(u32, child_relid) 266 + __field(int, ret) 267 + ), 268 + TP_fast_assign( 269 + __entry->child_relid = msg->child_relid; 270 + __entry->ret = ret; 271 + ), 272 + TP_printk("sending child_relid 0x%x, ret %d", 273 + __entry->child_relid, __entry->ret 274 + ) 275 + ); 276 + 277 + TRACE_EVENT(vmbus_send_tl_connect_request, 278 + TP_PROTO(const struct vmbus_channel_tl_connect_request *msg, 279 + int ret), 280 + TP_ARGS(msg, ret), 281 + TP_STRUCT__entry( 282 + __array(char, guest_id, 16) 283 + __array(char, host_id, 16) 284 + __field(int, ret) 285 + ), 286 + TP_fast_assign( 287 + memcpy(__entry->guest_id, &msg->guest_endpoint_id.b, 16); 288 + memcpy(__entry->host_id, &msg->host_service_id.b, 16); 289 + __entry->ret = ret; 290 + ), 291 + TP_printk("sending guest_endpoint_id %pUl, host_service_id %pUl, " 292 + "ret %d", 293 + __entry->guest_id, __entry->host_id, __entry->ret 294 + ) 295 + ); 296 + 297 + DECLARE_EVENT_CLASS(vmbus_channel, 298 + TP_PROTO(const struct vmbus_channel *channel), 299 + TP_ARGS(channel), 300 + TP_STRUCT__entry(__field(u32, relid)), 301 + TP_fast_assign(__entry->relid = channel->offermsg.child_relid), 302 + TP_printk("relid 0x%x", __entry->relid) 303 + ); 304 + 305 + DEFINE_EVENT(vmbus_channel, vmbus_chan_sched, 306 + TP_PROTO(const 
struct vmbus_channel *channel), 307 + TP_ARGS(channel) 308 + ); 309 + 310 + DEFINE_EVENT(vmbus_channel, vmbus_setevent, 311 + TP_PROTO(const struct vmbus_channel *channel), 312 + TP_ARGS(channel) 313 + ); 314 + 315 + DEFINE_EVENT(vmbus_channel, vmbus_on_event, 316 + TP_PROTO(const struct vmbus_channel *channel), 317 + TP_ARGS(channel) 318 + ); 319 + 320 + #undef TRACE_INCLUDE_PATH 321 + #define TRACE_INCLUDE_PATH . 322 + #undef TRACE_INCLUDE_FILE 323 + #define TRACE_INCLUDE_FILE hv_trace 324 + #endif /* _HV_TRACE_H */ 325 + 326 + /* This part must be outside protection */ 327 + #include <trace/define_trace.h>
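The `TRACE_EVENT`/`DEFINE_EVENT` definitions above become ordinary tracepoints under an `events/vmbus/` group once a kernel carrying this header is booted. A minimal sketch for poking at them from userspace (not part of the patch; assumes tracefs is mounted at `/sys/kernel/tracing` and root privileges — on kernels without these events the directory simply does not exist):

```shell
# Sketch: toggle the new vmbus tracepoints via tracefs and peek at
# any buffered events. Falls through quietly when the event group
# is absent (non-Hyper-V guest or pre-4.15 kernel).
TRACEFS=/sys/kernel/tracing
if [ -d "$TRACEFS/events/vmbus" ]; then
    echo 1 > "$TRACEFS/events/vmbus/enable"    # enable the whole group
    grep vmbus "$TRACEFS/trace" | head -n 5    # show a few entries, if any
else
    echo "vmbus tracepoints not present"
fi
```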
+4
drivers/hv/hyperv_vmbus.h
··· 31 31 #include <linux/hyperv.h> 32 32 #include <linux/interrupt.h> 33 33 34 + #include "hv_trace.h" 35 + 34 36 /* 35 37 * Timeout for services such as KVP and fcopy. 36 38 */ ··· 375 373 376 374 int vmbus_device_register(struct hv_device *child_device_obj); 377 375 void vmbus_device_unregister(struct hv_device *device_obj); 376 + int vmbus_add_channel_kobj(struct hv_device *device_obj, 377 + struct vmbus_channel *channel); 378 378 379 379 struct vmbus_channel *relid2channel(u32 relid); 380 380
+195 -14
drivers/hv/vmbus_drv.c
··· 65 65 66 66 regs = current_pt_regs(); 67 67 68 - hyperv_report_panic(regs); 68 + hyperv_report_panic(regs, val); 69 69 return NOTIFY_DONE; 70 70 } 71 71 ··· 75 75 struct die_args *die = (struct die_args *)args; 76 76 struct pt_regs *regs = die->regs; 77 77 78 - hyperv_report_panic(regs); 78 + hyperv_report_panic(regs, val); 79 79 return NOTIFY_DONE; 80 80 } 81 81 ··· 107 107 sprintf(&alias_name[i], "%02x", hv_dev->dev_type.b[i/2]); 108 108 } 109 109 110 - static u8 channel_monitor_group(struct vmbus_channel *channel) 110 + static u8 channel_monitor_group(const struct vmbus_channel *channel) 111 111 { 112 112 return (u8)channel->offermsg.monitorid / 32; 113 113 } 114 114 115 - static u8 channel_monitor_offset(struct vmbus_channel *channel) 115 + static u8 channel_monitor_offset(const struct vmbus_channel *channel) 116 116 { 117 117 return (u8)channel->offermsg.monitorid % 32; 118 118 } 119 119 120 - static u32 channel_pending(struct vmbus_channel *channel, 121 - struct hv_monitor_page *monitor_page) 120 + static u32 channel_pending(const struct vmbus_channel *channel, 121 + const struct hv_monitor_page *monitor_page) 122 122 { 123 123 u8 monitor_group = channel_monitor_group(channel); 124 + 124 125 return monitor_page->trigger_group[monitor_group].pending; 125 126 } 126 127 127 - static u32 channel_latency(struct vmbus_channel *channel, 128 - struct hv_monitor_page *monitor_page) 128 + static u32 channel_latency(const struct vmbus_channel *channel, 129 + const struct hv_monitor_page *monitor_page) 129 130 { 130 131 u8 monitor_group = channel_monitor_group(channel); 131 132 u8 monitor_offset = channel_monitor_offset(channel); 133 + 132 134 return monitor_page->latency[monitor_group][monitor_offset]; 133 135 } 134 136 ··· 835 833 836 834 hdr = (struct vmbus_channel_message_header *)msg->u.payload; 837 835 836 + trace_vmbus_on_msg_dpc(hdr); 837 + 838 838 if (hdr->msgtype >= CHANNELMSG_COUNT) { 839 839 WARN_ONCE(1, "unknown msgtype=%d\n", hdr->msgtype); 840 840 goto 
msg_handled; ··· 945 941 946 942 if (channel->rescind) 947 943 continue; 944 + 945 + trace_vmbus_chan_sched(channel); 946 + 947 + ++channel->interrupts; 948 948 949 949 switch (channel->callback_mode) { 950 950 case HV_CALL_ISR: ··· 1141 1133 } 1142 1134 EXPORT_SYMBOL_GPL(vmbus_driver_unregister); 1143 1135 1136 + 1137 + /* 1138 + * Called when last reference to channel is gone. 1139 + */ 1140 + static void vmbus_chan_release(struct kobject *kobj) 1141 + { 1142 + struct vmbus_channel *channel 1143 + = container_of(kobj, struct vmbus_channel, kobj); 1144 + 1145 + kfree_rcu(channel, rcu); 1146 + } 1147 + 1148 + struct vmbus_chan_attribute { 1149 + struct attribute attr; 1150 + ssize_t (*show)(const struct vmbus_channel *chan, char *buf); 1151 + ssize_t (*store)(struct vmbus_channel *chan, 1152 + const char *buf, size_t count); 1153 + }; 1154 + #define VMBUS_CHAN_ATTR(_name, _mode, _show, _store) \ 1155 + struct vmbus_chan_attribute chan_attr_##_name \ 1156 + = __ATTR(_name, _mode, _show, _store) 1157 + #define VMBUS_CHAN_ATTR_RW(_name) \ 1158 + struct vmbus_chan_attribute chan_attr_##_name = __ATTR_RW(_name) 1159 + #define VMBUS_CHAN_ATTR_RO(_name) \ 1160 + struct vmbus_chan_attribute chan_attr_##_name = __ATTR_RO(_name) 1161 + #define VMBUS_CHAN_ATTR_WO(_name) \ 1162 + struct vmbus_chan_attribute chan_attr_##_name = __ATTR_WO(_name) 1163 + 1164 + static ssize_t vmbus_chan_attr_show(struct kobject *kobj, 1165 + struct attribute *attr, char *buf) 1166 + { 1167 + const struct vmbus_chan_attribute *attribute 1168 + = container_of(attr, struct vmbus_chan_attribute, attr); 1169 + const struct vmbus_channel *chan 1170 + = container_of(kobj, struct vmbus_channel, kobj); 1171 + 1172 + if (!attribute->show) 1173 + return -EIO; 1174 + 1175 + return attribute->show(chan, buf); 1176 + } 1177 + 1178 + static const struct sysfs_ops vmbus_chan_sysfs_ops = { 1179 + .show = vmbus_chan_attr_show, 1180 + }; 1181 + 1182 + static ssize_t out_mask_show(const struct vmbus_channel *channel, 
char *buf) 1183 + { 1184 + const struct hv_ring_buffer_info *rbi = &channel->outbound; 1185 + 1186 + return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask); 1187 + } 1188 + VMBUS_CHAN_ATTR_RO(out_mask); 1189 + 1190 + static ssize_t in_mask_show(const struct vmbus_channel *channel, char *buf) 1191 + { 1192 + const struct hv_ring_buffer_info *rbi = &channel->inbound; 1193 + 1194 + return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask); 1195 + } 1196 + VMBUS_CHAN_ATTR_RO(in_mask); 1197 + 1198 + static ssize_t read_avail_show(const struct vmbus_channel *channel, char *buf) 1199 + { 1200 + const struct hv_ring_buffer_info *rbi = &channel->inbound; 1201 + 1202 + return sprintf(buf, "%u\n", hv_get_bytes_to_read(rbi)); 1203 + } 1204 + VMBUS_CHAN_ATTR_RO(read_avail); 1205 + 1206 + static ssize_t write_avail_show(const struct vmbus_channel *channel, char *buf) 1207 + { 1208 + const struct hv_ring_buffer_info *rbi = &channel->outbound; 1209 + 1210 + return sprintf(buf, "%u\n", hv_get_bytes_to_write(rbi)); 1211 + } 1212 + VMBUS_CHAN_ATTR_RO(write_avail); 1213 + 1214 + static ssize_t show_target_cpu(const struct vmbus_channel *channel, char *buf) 1215 + { 1216 + return sprintf(buf, "%u\n", channel->target_cpu); 1217 + } 1218 + VMBUS_CHAN_ATTR(cpu, S_IRUGO, show_target_cpu, NULL); 1219 + 1220 + static ssize_t channel_pending_show(const struct vmbus_channel *channel, 1221 + char *buf) 1222 + { 1223 + return sprintf(buf, "%d\n", 1224 + channel_pending(channel, 1225 + vmbus_connection.monitor_pages[1])); 1226 + } 1227 + VMBUS_CHAN_ATTR(pending, S_IRUGO, channel_pending_show, NULL); 1228 + 1229 + static ssize_t channel_latency_show(const struct vmbus_channel *channel, 1230 + char *buf) 1231 + { 1232 + return sprintf(buf, "%d\n", 1233 + channel_latency(channel, 1234 + vmbus_connection.monitor_pages[1])); 1235 + } 1236 + VMBUS_CHAN_ATTR(latency, S_IRUGO, channel_latency_show, NULL); 1237 + 1238 + static ssize_t channel_interrupts_show(const struct vmbus_channel *channel, 
char *buf) 1239 + { 1240 + return sprintf(buf, "%llu\n", channel->interrupts); 1241 + } 1242 + VMBUS_CHAN_ATTR(interrupts, S_IRUGO, channel_interrupts_show, NULL); 1243 + 1244 + static ssize_t channel_events_show(const struct vmbus_channel *channel, char *buf) 1245 + { 1246 + return sprintf(buf, "%llu\n", channel->sig_events); 1247 + } 1248 + VMBUS_CHAN_ATTR(events, S_IRUGO, channel_events_show, NULL); 1249 + 1250 + static struct attribute *vmbus_chan_attrs[] = { 1251 + &chan_attr_out_mask.attr, 1252 + &chan_attr_in_mask.attr, 1253 + &chan_attr_read_avail.attr, 1254 + &chan_attr_write_avail.attr, 1255 + &chan_attr_cpu.attr, 1256 + &chan_attr_pending.attr, 1257 + &chan_attr_latency.attr, 1258 + &chan_attr_interrupts.attr, 1259 + &chan_attr_events.attr, 1260 + NULL 1261 + }; 1262 + 1263 + static struct kobj_type vmbus_chan_ktype = { 1264 + .sysfs_ops = &vmbus_chan_sysfs_ops, 1265 + .release = vmbus_chan_release, 1266 + .default_attrs = vmbus_chan_attrs, 1267 + }; 1268 + 1269 + /* 1270 + * vmbus_add_channel_kobj - setup a sub-directory under device/channels 1271 + */ 1272 + int vmbus_add_channel_kobj(struct hv_device *dev, struct vmbus_channel *channel) 1273 + { 1274 + struct kobject *kobj = &channel->kobj; 1275 + u32 relid = channel->offermsg.child_relid; 1276 + int ret; 1277 + 1278 + kobj->kset = dev->channels_kset; 1279 + ret = kobject_init_and_add(kobj, &vmbus_chan_ktype, NULL, 1280 + "%u", relid); 1281 + if (ret) 1282 + return ret; 1283 + 1284 + kobject_uevent(kobj, KOBJ_ADD); 1285 + 1286 + return 0; 1287 + } 1288 + 1144 1289 /* 1145 1290 * vmbus_device_create - Creates and registers a new child device 1146 1291 * on the vmbus. 
··· 1325 1164 */ 1326 1165 int vmbus_device_register(struct hv_device *child_device_obj) 1327 1166 { 1328 - int ret = 0; 1167 + struct kobject *kobj = &child_device_obj->device.kobj; 1168 + int ret; 1329 1169 1330 1170 dev_set_name(&child_device_obj->device, "%pUl", 1331 1171 child_device_obj->channel->offermsg.offer.if_instance.b); ··· 1340 1178 * binding...which will eventually call vmbus_match() and vmbus_probe() 1341 1179 */ 1342 1180 ret = device_register(&child_device_obj->device); 1343 - 1344 - if (ret) 1181 + if (ret) { 1345 1182 pr_err("Unable to register child device\n"); 1346 - else 1347 - pr_debug("child device %s registered\n", 1348 - dev_name(&child_device_obj->device)); 1183 + return ret; 1184 + } 1349 1185 1186 + child_device_obj->channels_kset = kset_create_and_add("channels", 1187 + NULL, kobj); 1188 + if (!child_device_obj->channels_kset) { 1189 + ret = -ENOMEM; 1190 + goto err_dev_unregister; 1191 + } 1192 + 1193 + ret = vmbus_add_channel_kobj(child_device_obj, 1194 + child_device_obj->channel); 1195 + if (ret) { 1196 + pr_err("Unable to register primary channel\n"); 1197 + goto err_kset_unregister; 1198 + } 1199 + 1200 + return 0; 1201 + 1202 + err_kset_unregister: 1203 + kset_unregister(child_device_obj->channels_kset); 1204 + 1205 + err_dev_unregister: 1206 + device_unregister(&child_device_obj->device); 1350 1207 return ret; 1351 1208 } 1352 1209
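The `channels` kset and `vmbus_add_channel_kobj()` added above give each relid its own sysfs directory under the device, holding the `cpu`, `in_mask`, `out_mask`, `read_avail`, `write_avail`, `pending`, `latency`, `interrupts`, and `events` attributes defined earlier. A hedged sketch for browsing them on a Hyper-V guest running this kernel (paths follow the sysfs-bus-vmbus ABI entries; nothing here is part of the patch itself):

```shell
# Sketch: walk the per-channel directories created by
# vmbus_add_channel_kobj(). Prints one line per (sub)channel;
# prints nothing on kernels/VMs without these attributes.
for chan in /sys/bus/vmbus/devices/*/channels/*; do
    [ -f "$chan/cpu" ] || continue
    printf '%s: cpu=%s in_mask=%s out_mask=%s read_avail=%s\n' \
        "$chan" "$(cat "$chan/cpu")" \
        "$(cat "$chan/in_mask")" "$(cat "$chan/out_mask")" \
        "$(cat "$chan/read_avail")"
done
```

This is the same information `tools/hv/lsvmbus` and other debugging tools consume, per the ABI document in this series.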
+2 -2
drivers/hwtracing/coresight/coresight-dynamic-replicator.c
··· 199 199 200 200 static const struct amba_id replicator_ids[] = { 201 201 { 202 - .id = 0x0003b909, 203 - .mask = 0x0003ffff, 202 + .id = 0x000bb909, 203 + .mask = 0x000fffff, 204 204 }, 205 205 { 206 206 /* Coresight SoC-600 */
+2 -2
drivers/hwtracing/coresight/coresight-etb10.c
··· 748 748 749 749 static const struct amba_id etb_ids[] = { 750 750 { 751 - .id = 0x0003b907, 752 - .mask = 0x0003ffff, 751 + .id = 0x000bb907, 752 + .mask = 0x000fffff, 753 753 }, 754 754 { 0, 0}, 755 755 };
+12 -12
drivers/hwtracing/coresight/coresight-etm3x.c
··· 901 901 902 902 static const struct amba_id etm_ids[] = { 903 903 { /* ETM 3.3 */ 904 - .id = 0x0003b921, 905 - .mask = 0x0003ffff, 904 + .id = 0x000bb921, 905 + .mask = 0x000fffff, 906 906 .data = "ETM 3.3", 907 907 }, 908 908 { /* ETM 3.5 - Cortex-A5 */ 909 - .id = 0x0003b955, 910 - .mask = 0x0003ffff, 909 + .id = 0x000bb955, 910 + .mask = 0x000fffff, 911 911 .data = "ETM 3.5", 912 912 }, 913 913 { /* ETM 3.5 */ 914 - .id = 0x0003b956, 915 - .mask = 0x0003ffff, 914 + .id = 0x000bb956, 915 + .mask = 0x000fffff, 916 916 .data = "ETM 3.5", 917 917 }, 918 918 { /* PTM 1.0 */ 919 - .id = 0x0003b950, 920 - .mask = 0x0003ffff, 919 + .id = 0x000bb950, 920 + .mask = 0x000fffff, 921 921 .data = "PTM 1.0", 922 922 }, 923 923 { /* PTM 1.1 */ 924 - .id = 0x0003b95f, 925 - .mask = 0x0003ffff, 924 + .id = 0x000bb95f, 925 + .mask = 0x000fffff, 926 926 .data = "PTM 1.1", 927 927 }, 928 928 { /* PTM 1.1 Qualcomm */ 929 - .id = 0x0003006f, 930 - .mask = 0x0003ffff, 929 + .id = 0x000b006f, 930 + .mask = 0x000fffff, 931 931 .data = "PTM 1.1", 932 932 }, 933 933 { 0, 0},
+2 -2
drivers/hwtracing/coresight/coresight-funnel.c
··· 248 248 249 249 static const struct amba_id funnel_ids[] = { 250 250 { 251 - .id = 0x0003b908, 252 - .mask = 0x0003ffff, 251 + .id = 0x000bb908, 252 + .mask = 0x000fffff, 253 253 }, 254 254 { 255 255 /* Coresight SoC-600 */
+4 -4
drivers/hwtracing/coresight/coresight-stm.c
··· 917 917 918 918 static const struct amba_id stm_ids[] = { 919 919 { 920 - .id = 0x0003b962, 921 - .mask = 0x0003ffff, 920 + .id = 0x000bb962, 921 + .mask = 0x000fffff, 922 922 .data = "STM32", 923 923 }, 924 924 { 925 - .id = 0x0003b963, 926 - .mask = 0x0003ffff, 925 + .id = 0x000bb963, 926 + .mask = 0x000fffff, 927 927 .data = "STM500", 928 928 }, 929 929 { 0, 0},
+2 -2
drivers/hwtracing/coresight/coresight-tmc.c
··· 439 439 440 440 static const struct amba_id tmc_ids[] = { 441 441 { 442 - .id = 0x0003b961, 443 - .mask = 0x0003ffff, 442 + .id = 0x000bb961, 443 + .mask = 0x000fffff, 444 444 }, 445 445 { 446 446 /* Coresight SoC 600 TMC-ETR/ETS */
+2 -2
drivers/hwtracing/coresight/coresight-tpiu.c
··· 194 194 195 195 static const struct amba_id tpiu_ids[] = { 196 196 { 197 - .id = 0x0003b912, 198 - .mask = 0x0003ffff, 197 + .id = 0x000bb912, 198 + .mask = 0x000fffff, 199 199 }, 200 200 { 201 201 .id = 0x0004b912,
+2 -1
drivers/misc/altera-stapl/Kconfig
··· 1 - comment "Altera FPGA firmware download module" 1 + comment "Altera FPGA firmware download module (requires I2C)" 2 + depends on !I2C 2 3 3 4 config ALTERA_STAPL 4 5 tristate "Altera FPGA firmware download module"
+6 -1
drivers/misc/genwqe/card_base.h
··· 182 182 183 183 struct list_head card_list; /* list of usr_maps for card */ 184 184 struct list_head pin_list; /* list of pinned memory for dev */ 185 + int write; /* writable map? useful in unmapping */ 185 186 }; 186 187 187 188 static inline void genwqe_mapping_init(struct dma_mapping *m, ··· 190 189 { 191 190 memset(m, 0, sizeof(*m)); 192 191 m->type = type; 192 + m->write = 1; /* Assume the maps we create are R/W */ 193 193 } 194 194 195 195 /** ··· 349 347 * @user_size: size of user-space memory area 350 348 * @page: buffer for partial pages if needed 351 349 * @page_dma_addr: dma address partial pages 350 + * @write: should we write it back to userspace? 352 351 */ 353 352 struct genwqe_sgl { 354 353 dma_addr_t sgl_dma_addr; ··· 358 355 359 356 void __user *user_addr; /* user-space base-address */ 360 357 size_t user_size; /* size of memory area */ 358 + 359 + int write; 361 360 362 361 unsigned long nr_pages; 363 362 unsigned long fpage_offs; ··· 374 369 }; 375 370 376 371 int genwqe_alloc_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl, 377 - void __user *user_addr, size_t user_size); 372 + void __user *user_addr, size_t user_size, int write); 378 373 379 374 int genwqe_setup_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl, 380 375 dma_addr_t *dma_list);
+5 -1
drivers/misc/genwqe/card_dev.c
··· 942 942 943 943 genwqe_mapping_init(m, 944 944 GENWQE_MAPPING_SGL_TEMP); 945 + 946 + if (ats_flags == ATS_TYPE_SGL_RD) 947 + m->write = 0; 948 + 945 949 rc = genwqe_user_vmap(cd, m, (void *)u_addr, 946 950 u_size, req); 947 951 if (rc != 0) ··· 958 954 /* create genwqe style scatter gather list */ 959 955 rc = genwqe_alloc_sync_sgl(cd, &req->sgls[i], 960 956 (void __user *)u_addr, 961 - u_size); 957 + u_size, m->write); 962 958 if (rc != 0) 963 959 goto err_out; 964 960
+27 -16
drivers/misc/genwqe/card_utils.c
··· 296 296 * from user-space into the cached pages. 297 297 */ 298 298 int genwqe_alloc_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl, 299 - void __user *user_addr, size_t user_size) 299 + void __user *user_addr, size_t user_size, int write) 300 300 { 301 301 int rc; 302 302 struct pci_dev *pci_dev = cd->pci_dev; ··· 312 312 313 313 sgl->user_addr = user_addr; 314 314 sgl->user_size = user_size; 315 + sgl->write = write; 315 316 sgl->sgl_size = genwqe_sgl_size(sgl->nr_pages); 316 317 317 318 if (get_order(sgl->sgl_size) > MAX_ORDER) { ··· 477 476 int genwqe_free_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl) 478 477 { 479 478 int rc = 0; 479 + size_t offset; 480 + unsigned long res; 480 481 struct pci_dev *pci_dev = cd->pci_dev; 481 482 482 483 if (sgl->fpage) { 483 - if (copy_to_user(sgl->user_addr, sgl->fpage + sgl->fpage_offs, 484 - sgl->fpage_size)) { 485 - dev_err(&pci_dev->dev, "[%s] err: copying fpage!\n", 486 - __func__); 487 - rc = -EFAULT; 484 + if (sgl->write) { 485 + res = copy_to_user(sgl->user_addr, 486 + sgl->fpage + sgl->fpage_offs, sgl->fpage_size); 487 + if (res) { 488 + dev_err(&pci_dev->dev, 489 + "[%s] err: copying fpage! (res=%lu)\n", 490 + __func__, res); 491 + rc = -EFAULT; 492 + } 488 493 } 489 494 __genwqe_free_consistent(cd, PAGE_SIZE, sgl->fpage, 490 495 sgl->fpage_dma_addr); ··· 498 491 sgl->fpage_dma_addr = 0; 499 492 } 500 493 if (sgl->lpage) { 501 - if (copy_to_user(sgl->user_addr + sgl->user_size - 502 - sgl->lpage_size, sgl->lpage, 503 - sgl->lpage_size)) { 504 - dev_err(&pci_dev->dev, "[%s] err: copying lpage!\n", 505 - __func__); 506 - rc = -EFAULT; 494 + if (sgl->write) { 495 + offset = sgl->user_size - sgl->lpage_size; 496 + res = copy_to_user(sgl->user_addr + offset, sgl->lpage, 497 + sgl->lpage_size); 498 + if (res) { 499 + dev_err(&pci_dev->dev, 500 + "[%s] err: copying lpage! 
(res=%lu)\n", 501 + __func__, res); 502 + rc = -EFAULT; 503 + } 507 504 } 508 505 __genwqe_free_consistent(cd, PAGE_SIZE, sgl->lpage, 509 506 sgl->lpage_dma_addr); ··· 610 599 /* pin user pages in memory */ 611 600 rc = get_user_pages_fast(data & PAGE_MASK, /* page aligned addr */ 612 601 m->nr_pages, 613 - 1, /* write by caller */ 602 + m->write, /* readable/writable */ 614 603 m->page_list); /* ptrs to pages */ 615 604 if (rc < 0) 616 605 goto fail_get_user_pages; 617 606 618 607 /* assumption: get_user_pages can be killed by signals. */ 619 608 if (rc < m->nr_pages) { 620 - free_user_pages(m->page_list, rc, 0); 609 + free_user_pages(m->page_list, rc, m->write); 621 610 rc = -EFAULT; 622 611 goto fail_get_user_pages; 623 612 } ··· 629 618 return 0; 630 619 631 620 fail_free_user_pages: 632 - free_user_pages(m->page_list, m->nr_pages, 0); 621 + free_user_pages(m->page_list, m->nr_pages, m->write); 633 622 634 623 fail_get_user_pages: 635 624 kfree(m->page_list); ··· 662 651 genwqe_unmap_pages(cd, m->dma_list, m->nr_pages); 663 652 664 653 if (m->page_list) { 665 - free_user_pages(m->page_list, m->nr_pages, 1); 654 + free_user_pages(m->page_list, m->nr_pages, m->write); 666 655 667 656 kfree(m->page_list); 668 657 m->page_list = NULL;
+9 -9
drivers/misc/lkdtm_core.c
··· 122 122 } 123 123 124 124 /* Define the possible types of crashes that can be triggered. */ 125 - struct crashtype crashtypes[] = { 125 + static const struct crashtype crashtypes[] = { 126 126 CRASHTYPE(PANIC), 127 127 CRASHTYPE(BUG), 128 128 CRASHTYPE(WARNING), ··· 188 188 189 189 /* Global kprobe entry and crashtype. */ 190 190 static struct kprobe *lkdtm_kprobe; 191 - struct crashpoint *lkdtm_crashpoint; 192 - struct crashtype *lkdtm_crashtype; 191 + static struct crashpoint *lkdtm_crashpoint; 192 + static const struct crashtype *lkdtm_crashtype; 193 193 194 194 /* Module parameters */ 195 195 static int recur_count = -1; ··· 212 212 213 213 214 214 /* Return the crashtype number or NULL if the name is invalid */ 215 - static struct crashtype *find_crashtype(const char *name) 215 + static const struct crashtype *find_crashtype(const char *name) 216 216 { 217 217 int i; 218 218 ··· 228 228 * This is forced noinline just so it distinctly shows up in the stackdump 229 229 * which makes validation of expected lkdtm crashes easier. 
230 230 */ 231 - static noinline void lkdtm_do_action(struct crashtype *crashtype) 231 + static noinline void lkdtm_do_action(const struct crashtype *crashtype) 232 232 { 233 233 if (WARN_ON(!crashtype || !crashtype->func)) 234 234 return; ··· 236 236 } 237 237 238 238 static int lkdtm_register_cpoint(struct crashpoint *crashpoint, 239 - struct crashtype *crashtype) 239 + const struct crashtype *crashtype) 240 240 { 241 241 int ret; 242 242 ··· 300 300 size_t count, loff_t *off) 301 301 { 302 302 struct crashpoint *crashpoint = file_inode(f)->i_private; 303 - struct crashtype *crashtype = NULL; 303 + const struct crashtype *crashtype = NULL; 304 304 char *buf; 305 305 int err; 306 306 ··· 368 368 static ssize_t direct_entry(struct file *f, const char __user *user_buf, 369 369 size_t count, loff_t *off) 370 370 { 371 - struct crashtype *crashtype; 371 + const struct crashtype *crashtype; 372 372 char *buf; 373 373 374 374 if (count >= PAGE_SIZE) ··· 404 404 static int __init lkdtm_module_init(void) 405 405 { 406 406 struct crashpoint *crashpoint = NULL; 407 - struct crashtype *crashtype = NULL; 407 + const struct crashtype *crashtype = NULL; 408 408 int ret = -EINVAL; 409 409 int i; 410 410
-1
drivers/misc/mei/mei-trace.c
··· 23 23 EXPORT_TRACEPOINT_SYMBOL(mei_reg_read); 24 24 EXPORT_TRACEPOINT_SYMBOL(mei_reg_write); 25 25 EXPORT_TRACEPOINT_SYMBOL(mei_pci_cfg_read); 26 - EXPORT_TRACEPOINT_SYMBOL(mei_pci_cfg_write); 27 26 #endif /* __CHECKER__ */
-19
drivers/misc/mei/mei-trace.h
··· 83 83 __get_str(dev), __entry->reg, __entry->offs, __entry->val) 84 84 ); 85 85 86 - TRACE_EVENT(mei_pci_cfg_write, 87 - TP_PROTO(const struct device *dev, const char *reg, u32 offs, u32 val), 88 - TP_ARGS(dev, reg, offs, val), 89 - TP_STRUCT__entry( 90 - __string(dev, dev_name(dev)) 91 - __field(const char *, reg) 92 - __field(u32, offs) 93 - __field(u32, val) 94 - ), 95 - TP_fast_assign( 96 - __assign_str(dev, dev_name(dev)) 97 - __entry->reg = reg; 98 - __entry->offs = offs; 99 - __entry->val = val; 100 - ), 101 - TP_printk("[%s] pci cfg write %s[%#x] = %#x", 102 - __get_str(dev), __entry->reg, __entry->offs, __entry->val) 103 - ); 104 - 105 86 #endif /* _MEI_TRACE_H_ */ 106 87 107 88 /* This part must be outside protection */
+4
drivers/misc/mic/Kconfig
··· 1 + menu "Intel MIC & related support" 2 + 1 3 comment "Intel MIC Bus Driver" 2 4 3 5 config INTEL_MIC_BUS ··· 152 150 if VOP 153 151 source "drivers/vhost/Kconfig.vringh" 154 152 endif 153 + 154 + endmenu
+33 -2
drivers/nvmem/Kconfig
··· 123 123 This driver can also be built as a module. If so, the module 124 124 will be called nvmem_sunxi_sid. 125 125 126 + config UNIPHIER_EFUSE 127 + tristate "UniPhier SoCs eFuse support" 128 + depends on ARCH_UNIPHIER || COMPILE_TEST 129 + depends on HAS_IOMEM 130 + help 131 + This is a simple driver to dump specified values of UniPhier SoC 132 + from eFuse. 133 + 134 + This driver can also be built as a module. If so, the module 135 + will be called nvmem-uniphier-efuse. 136 + 126 137 config NVMEM_VF610_OCOTP 127 138 tristate "VF610 SoC OCOTP support" 128 139 depends on SOC_VF610 || COMPILE_TEST ··· 146 135 be called nvmem-vf610-ocotp. 147 136 148 137 config MESON_EFUSE 149 - tristate "Amlogic eFuse Support" 138 + tristate "Amlogic Meson GX eFuse Support" 150 139 depends on (ARCH_MESON || COMPILE_TEST) && MESON_SM 151 140 help 152 141 This is a driver to retrieve specific values from the eFuse found on 153 - the Amlogic Meson SoCs. 142 + the Amlogic Meson GX SoCs. 154 143 155 144 This driver can also be built as a module. If so, the module 156 145 will be called nvmem_meson_efuse. 146 + 147 + config MESON_MX_EFUSE 148 + tristate "Amlogic Meson6/Meson8/Meson8b eFuse Support" 149 + depends on ARCH_MESON || COMPILE_TEST 150 + help 151 + This is a driver to retrieve specific values from the eFuse found on 152 + the Amlogic Meson6, Meson8 and Meson8b SoCs. 153 + 154 + This driver can also be built as a module. If so, the module 155 + will be called nvmem_meson_mx_efuse. 156 + 157 + config NVMEM_SNVS_LPGPR 158 + tristate "Support for Low Power General Purpose Register" 159 + depends on SOC_IMX6 || COMPILE_TEST 160 + help 161 + This is a driver for Low Power General Purpose Register (LPGPR) available on 162 + i.MX6 SoCs in Secure Non-Volatile Storage (SNVS) of this chip. 163 + 164 + This driver can also be built as a module. If so, the module 165 + will be called nvmem-snvs-lpgpr. 157 166 158 167 endif
+6
drivers/nvmem/Makefile
··· 27 27 nvmem_rockchip_efuse-y := rockchip-efuse.o 28 28 obj-$(CONFIG_NVMEM_SUNXI_SID) += nvmem_sunxi_sid.o 29 29 nvmem_sunxi_sid-y := sunxi_sid.o 30 + obj-$(CONFIG_UNIPHIER_EFUSE) += nvmem-uniphier-efuse.o 31 + nvmem-uniphier-efuse-y := uniphier-efuse.o 30 32 obj-$(CONFIG_NVMEM_VF610_OCOTP) += nvmem-vf610-ocotp.o 31 33 nvmem-vf610-ocotp-y := vf610-ocotp.o 32 34 obj-$(CONFIG_MESON_EFUSE) += nvmem_meson_efuse.o 33 35 nvmem_meson_efuse-y := meson-efuse.o 36 + obj-$(CONFIG_MESON_MX_EFUSE) += nvmem_meson_mx_efuse.o 37 + nvmem_meson_mx_efuse-y := meson-mx-efuse.o 38 + obj-$(CONFIG_NVMEM_SNVS_LPGPR) += nvmem_snvs_lpgpr.o 39 + nvmem_snvs_lpgpr-y := snvs_lpgpr.o
-1
drivers/nvmem/bcm-ocotp.c
··· 232 232 .read_only = false, 233 233 .word_size = 4, 234 234 .stride = 4, 235 - .owner = THIS_MODULE, 236 235 .reg_read = bcm_otpc_read, 237 236 .reg_write = bcm_otpc_write, 238 237 };
+7 -6
drivers/nvmem/core.c
··· 462 462 463 463 nvmem->id = rval; 464 464 nvmem->owner = config->owner; 465 + if (!nvmem->owner && config->dev->driver) 466 + nvmem->owner = config->dev->driver->owner; 465 467 nvmem->stride = config->stride; 466 468 nvmem->word_size = config->word_size; 467 469 nvmem->size = config->size; ··· 617 615 return to_nvmem_device(d); 618 616 } 619 617 620 - #if IS_ENABLED(CONFIG_NVMEM) && IS_ENABLED(CONFIG_OF) 618 + #if IS_ENABLED(CONFIG_OF) 621 619 /** 622 620 * of_nvmem_device_get() - Get nvmem device from a given id 623 621 * ··· 755 753 return cell; 756 754 } 757 755 758 - #if IS_ENABLED(CONFIG_NVMEM) && IS_ENABLED(CONFIG_OF) 756 + #if IS_ENABLED(CONFIG_OF) 759 757 /** 760 758 * of_nvmem_cell_get() - Get a nvmem cell from given device node and cell id 761 759 * ··· 948 946 } 949 947 EXPORT_SYMBOL_GPL(nvmem_cell_put); 950 948 951 - static inline void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell, 952 - void *buf) 949 + static void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell, void *buf) 953 950 { 954 951 u8 *p, *b; 955 952 int i, bit_offset = cell->bit_offset; ··· 1029 1028 } 1030 1029 EXPORT_SYMBOL_GPL(nvmem_cell_read); 1031 1030 1032 - static inline void *nvmem_cell_prepare_write_buffer(struct nvmem_cell *cell, 1033 - u8 *_buf, int len) 1031 + static void *nvmem_cell_prepare_write_buffer(struct nvmem_cell *cell, 1032 + u8 *_buf, int len) 1034 1033 { 1035 1034 struct nvmem_device *nvmem = cell->nvmem; 1036 1035 int i, rc, nbits, bit_offset = cell->bit_offset;
+10 -14
drivers/nvmem/imx-iim.c
··· 34 34 struct iim_priv { 35 35 void __iomem *base; 36 36 struct clk *clk; 37 - struct nvmem_config nvmem; 38 37 }; 39 38 40 39 static int imx_iim_read(void *context, unsigned int offset, ··· 107 108 struct resource *res; 108 109 struct iim_priv *iim; 109 110 struct nvmem_device *nvmem; 110 - struct nvmem_config *cfg; 111 + struct nvmem_config cfg = {}; 111 112 const struct imx_iim_drvdata *drvdata = NULL; 112 113 113 114 iim = devm_kzalloc(dev, sizeof(*iim), GFP_KERNEL); ··· 129 130 if (IS_ERR(iim->clk)) 130 131 return PTR_ERR(iim->clk); 131 132 132 - cfg = &iim->nvmem; 133 + cfg.name = "imx-iim", 134 + cfg.read_only = true, 135 + cfg.word_size = 1, 136 + cfg.stride = 1, 137 + cfg.reg_read = imx_iim_read, 138 + cfg.dev = dev; 139 + cfg.size = drvdata->nregs; 140 + cfg.priv = iim; 133 141 134 - cfg->name = "imx-iim", 135 - cfg->read_only = true, 136 - cfg->word_size = 1, 137 - cfg->stride = 1, 138 - cfg->owner = THIS_MODULE, 139 - cfg->reg_read = imx_iim_read, 140 - cfg->dev = dev; 141 - cfg->size = drvdata->nregs; 142 - cfg->priv = iim; 143 - 144 - nvmem = nvmem_register(cfg); 142 + nvmem = nvmem_register(&cfg); 145 143 if (IS_ERR(nvmem)) 146 144 return PTR_ERR(nvmem); 147 145
+158 -39
drivers/nvmem/imx-ocotp.c
··· 40 40 #define IMX_OCOTP_ADDR_CTRL_SET 0x0004 41 41 #define IMX_OCOTP_ADDR_CTRL_CLR 0x0008 42 42 #define IMX_OCOTP_ADDR_TIMING 0x0010 43 - #define IMX_OCOTP_ADDR_DATA 0x0020 43 + #define IMX_OCOTP_ADDR_DATA0 0x0020 44 + #define IMX_OCOTP_ADDR_DATA1 0x0030 45 + #define IMX_OCOTP_ADDR_DATA2 0x0040 46 + #define IMX_OCOTP_ADDR_DATA3 0x0050 44 47 45 48 #define IMX_OCOTP_BM_CTRL_ADDR 0x0000007F 46 49 #define IMX_OCOTP_BM_CTRL_BUSY 0x00000100 47 50 #define IMX_OCOTP_BM_CTRL_ERROR 0x00000200 48 51 #define IMX_OCOTP_BM_CTRL_REL_SHADOWS 0x00000400 49 52 50 - #define DEF_RELAX 20 /* > 16.5ns */ 53 + #define DEF_RELAX 20 /* > 16.5ns */ 54 + #define DEF_FSOURCE 1001 /* > 1000 ns */ 55 + #define DEF_STROBE_PROG 10000 /* IPG clocks */ 51 56 #define IMX_OCOTP_WR_UNLOCK 0x3E770000 52 57 #define IMX_OCOTP_READ_LOCKED_VAL 0xBADABADA 53 58 ··· 62 57 struct device *dev; 63 58 struct clk *clk; 64 59 void __iomem *base; 65 - unsigned int nregs; 60 + const struct ocotp_params *params; 66 61 struct nvmem_config *config; 62 + }; 63 + 64 + struct ocotp_params { 65 + unsigned int nregs; 66 + unsigned int bank_address_words; 67 + void (*set_timing)(struct ocotp_priv *priv); 67 68 }; 68 69 69 70 static int imx_ocotp_wait_for_busy(void __iomem *base, u32 flags) ··· 132 121 index = offset >> 2; 133 122 count = bytes >> 2; 134 123 135 - if (count > (priv->nregs - index)) 136 - count = priv->nregs - index; 124 + if (count > (priv->params->nregs - index)) 125 + count = priv->params->nregs - index; 137 126 138 127 mutex_lock(&ocotp_mutex); 139 128 ··· 171 160 return ret; 172 161 } 173 162 174 - static int imx_ocotp_write(void *context, unsigned int offset, void *val, 175 - size_t bytes) 163 + static void imx_ocotp_set_imx6_timing(struct ocotp_priv *priv) 176 164 { 177 - struct ocotp_priv *priv = context; 178 - u32 *buf = val; 179 - int ret; 180 - 181 165 unsigned long clk_rate = 0; 182 166 unsigned long strobe_read, relax, strobe_prog; 183 167 u32 timing = 0; 184 - u32 ctrl; 185 - u8 waddr; 186 - 
187 - /* allow only writing one complete OTP word at a time */ 188 - if ((bytes != priv->config->word_size) || 189 - (offset % priv->config->word_size)) 190 - return -EINVAL; 191 - 192 - mutex_lock(&ocotp_mutex); 193 - 194 - ret = clk_prepare_enable(priv->clk); 195 - if (ret < 0) { 196 - mutex_unlock(&ocotp_mutex); 197 - dev_err(priv->dev, "failed to prepare/enable ocotp clk\n"); 198 - return ret; 199 - } 200 168 201 169 /* 47.3.1.3.1 202 170 * Program HW_OCOTP_TIMING[STROBE_PROG] and HW_OCOTP_TIMING[RELAX] ··· 194 204 timing |= (strobe_read << 16) & 0x003F0000; 195 205 196 206 writel(timing, priv->base + IMX_OCOTP_ADDR_TIMING); 207 + } 208 + 209 + static void imx_ocotp_set_imx7_timing(struct ocotp_priv *priv) 210 + { 211 + unsigned long clk_rate = 0; 212 + u64 fsource, strobe_prog; 213 + u32 timing = 0; 214 + 215 + /* i.MX 7Solo Applications Processor Reference Manual, Rev. 0.1 216 + * 6.4.3.3 217 + */ 218 + clk_rate = clk_get_rate(priv->clk); 219 + fsource = DIV_ROUND_UP_ULL((u64)clk_rate * DEF_FSOURCE, 220 + NSEC_PER_SEC) + 1; 221 + strobe_prog = DIV_ROUND_CLOSEST_ULL((u64)clk_rate * DEF_STROBE_PROG, 222 + NSEC_PER_SEC) + 1; 223 + 224 + timing = strobe_prog & 0x00000FFF; 225 + timing |= (fsource << 12) & 0x000FF000; 226 + 227 + writel(timing, priv->base + IMX_OCOTP_ADDR_TIMING); 228 + } 229 + 230 + static int imx_ocotp_write(void *context, unsigned int offset, void *val, 231 + size_t bytes) 232 + { 233 + struct ocotp_priv *priv = context; 234 + u32 *buf = val; 235 + int ret; 236 + 237 + u32 ctrl; 238 + u8 waddr; 239 + u8 word = 0; 240 + 241 + /* allow only writing one complete OTP word at a time */ 242 + if ((bytes != priv->config->word_size) || 243 + (offset % priv->config->word_size)) 244 + return -EINVAL; 245 + 246 + mutex_lock(&ocotp_mutex); 247 + 248 + ret = clk_prepare_enable(priv->clk); 249 + if (ret < 0) { 250 + mutex_unlock(&ocotp_mutex); 251 + dev_err(priv->dev, "failed to prepare/enable ocotp clk\n"); 252 + return ret; 253 + } 254 + 255 + /* Setup the 
write timing values */ 256 + priv->params->set_timing(priv); 197 257 198 258 /* 47.3.1.3.2 199 259 * Check that HW_OCOTP_CTRL[BUSY] and HW_OCOTP_CTRL[ERROR] are clear. ··· 264 224 * description. Both the unlock code and address can be written in the 265 225 * same operation. 266 226 */ 267 - /* OTP write/read address specifies one of 128 word address locations */ 268 - waddr = offset / 4; 227 + if (priv->params->bank_address_words != 0) { 228 + /* 229 + * In banked/i.MX7 mode the OTP register bank goes into waddr 230 + * see i.MX 7Solo Applications Processor Reference Manual, Rev. 231 + * 0.1 section 6.4.3.1 232 + */ 233 + offset = offset / priv->config->word_size; 234 + waddr = offset / priv->params->bank_address_words; 235 + word = offset & (priv->params->bank_address_words - 1); 236 + } else { 237 + /* 238 + * Non-banked i.MX6 mode. 239 + * OTP write/read address specifies one of 128 word address 240 + * locations 241 + */ 242 + waddr = offset / 4; 243 + } 269 244 270 245 ctrl = readl(priv->base + IMX_OCOTP_ADDR_CTRL); 271 246 ctrl &= ~IMX_OCOTP_BM_CTRL_ADDR; ··· 306 251 * shift right (with zero fill). This shifting is required to program 307 252 * the OTP serially. During the write operation, HW_OCOTP_DATA cannot be 308 253 * modified. 254 + * Note: on i.MX7 there are four data fields to write for banked write 255 + * with the fuse blowing operation only taking place after data0 256 + * has been written. This is why data0 must always be the last 257 + * register written. 
309 258 */ 310 - writel(*buf, priv->base + IMX_OCOTP_ADDR_DATA); 259 + if (priv->params->bank_address_words != 0) { 260 + /* Banked/i.MX7 mode */ 261 + switch (word) { 262 + case 0: 263 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA1); 264 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA2); 265 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA3); 266 + writel(*buf, priv->base + IMX_OCOTP_ADDR_DATA0); 267 + break; 268 + case 1: 269 + writel(*buf, priv->base + IMX_OCOTP_ADDR_DATA1); 270 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA2); 271 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA3); 272 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA0); 273 + break; 274 + case 2: 275 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA1); 276 + writel(*buf, priv->base + IMX_OCOTP_ADDR_DATA2); 277 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA3); 278 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA0); 279 + break; 280 + case 3: 281 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA1); 282 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA2); 283 + writel(*buf, priv->base + IMX_OCOTP_ADDR_DATA3); 284 + writel(0, priv->base + IMX_OCOTP_ADDR_DATA0); 285 + break; 286 + } 287 + } else { 288 + /* Non-banked i.MX6 mode */ 289 + writel(*buf, priv->base + IMX_OCOTP_ADDR_DATA0); 290 + } 311 291 312 292 /* 47.4.1.4.5 313 293 * Once complete, the controller will clear BUSY. 
A write request to a ··· 393 303 .read_only = false, 394 304 .word_size = 4, 395 305 .stride = 4, 396 - .owner = THIS_MODULE, 397 306 .reg_read = imx_ocotp_read, 398 307 .reg_write = imx_ocotp_write, 399 308 }; 400 309 310 + static const struct ocotp_params imx6q_params = { 311 + .nregs = 128, 312 + .bank_address_words = 0, 313 + .set_timing = imx_ocotp_set_imx6_timing, 314 + }; 315 + 316 + static const struct ocotp_params imx6sl_params = { 317 + .nregs = 64, 318 + .bank_address_words = 0, 319 + .set_timing = imx_ocotp_set_imx6_timing, 320 + }; 321 + 322 + static const struct ocotp_params imx6sx_params = { 323 + .nregs = 128, 324 + .bank_address_words = 0, 325 + .set_timing = imx_ocotp_set_imx6_timing, 326 + }; 327 + 328 + static const struct ocotp_params imx6ul_params = { 329 + .nregs = 128, 330 + .bank_address_words = 0, 331 + .set_timing = imx_ocotp_set_imx6_timing, 332 + }; 333 + 334 + static const struct ocotp_params imx7d_params = { 335 + .nregs = 64, 336 + .bank_address_words = 4, 337 + .set_timing = imx_ocotp_set_imx7_timing, 338 + }; 339 + 401 340 static const struct of_device_id imx_ocotp_dt_ids[] = { 402 - { .compatible = "fsl,imx6q-ocotp", (void *)128 }, 403 - { .compatible = "fsl,imx6sl-ocotp", (void *)64 }, 404 - { .compatible = "fsl,imx6sx-ocotp", (void *)128 }, 405 - { .compatible = "fsl,imx6ul-ocotp", (void *)128 }, 406 - { .compatible = "fsl,imx7d-ocotp", (void *)64 }, 341 + { .compatible = "fsl,imx6q-ocotp", .data = &imx6q_params }, 342 + { .compatible = "fsl,imx6sl-ocotp", .data = &imx6sl_params }, 343 + { .compatible = "fsl,imx6sx-ocotp", .data = &imx6sx_params }, 344 + { .compatible = "fsl,imx6ul-ocotp", .data = &imx6ul_params }, 345 + { .compatible = "fsl,imx7d-ocotp", .data = &imx7d_params }, 407 346 { }, 408 347 }; 409 348 MODULE_DEVICE_TABLE(of, imx_ocotp_dt_ids); ··· 461 342 return PTR_ERR(priv->clk); 462 343 463 344 of_id = of_match_device(imx_ocotp_dt_ids, dev); 464 - priv->nregs = (unsigned long)of_id->data; 465 - 
imx_ocotp_nvmem_config.size = 4 * priv->nregs; 345 + priv->params = of_device_get_match_data(&pdev->dev); 346 + imx_ocotp_nvmem_config.size = 4 * priv->params->nregs; 466 347 imx_ocotp_nvmem_config.dev = dev; 467 348 imx_ocotp_nvmem_config.priv = priv; 468 349 priv->config = &imx_ocotp_nvmem_config; ··· 494 375 module_platform_driver(imx_ocotp_driver); 495 376 496 377 MODULE_AUTHOR("Philipp Zabel <p.zabel@pengutronix.de>"); 497 - MODULE_DESCRIPTION("i.MX6 OCOTP fuse box driver"); 378 + MODULE_DESCRIPTION("i.MX6/i.MX7 OCOTP fuse box driver"); 498 379 MODULE_LICENSE("GPL v2");
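The new `imx_ocotp_set_imx7_timing()` derives the FSOURCE and STROBE_PROG counts from the ipg clock rate (per section 6.4.3.3 of the i.MX 7Solo reference manual) and packs them into the TIMING register. A userspace restatement of that arithmetic — the constants come from the hunk above, plain integer math replaces the kernel's `DIV_ROUND_UP_ULL`/`DIV_ROUND_CLOSEST_ULL` helpers, and the 66 MHz rate used below is only an illustrative value:

```c
#include <stdint.h>

#define DEF_FSOURCE     1001ULL         /* > 1000 ns, from the driver */
#define DEF_STROBE_PROG 10000ULL        /* IPG clocks, from the driver */
#define NSEC_PER_SEC    1000000000ULL

/* Recompute the i.MX7 OCOTP TIMING value: round-up division for
 * fsource, round-to-closest for strobe_prog, then pack strobe_prog
 * into bits 11:0 and fsource into bits 19:12. */
static uint32_t imx7_otp_timing(uint64_t clk_rate)
{
    uint64_t fsource = (clk_rate * DEF_FSOURCE + NSEC_PER_SEC - 1) /
                       NSEC_PER_SEC + 1;
    uint64_t strobe_prog = (clk_rate * DEF_STROBE_PROG + NSEC_PER_SEC / 2) /
                           NSEC_PER_SEC + 1;
    uint32_t timing = strobe_prog & 0x00000FFF;

    timing |= (uint32_t)(fsource << 12) & 0x000FF000;
    return timing;
}
```

For a 66 MHz ipg clock this gives fsource = 68 and strobe_prog = 661, i.e. a TIMING value of 0x44295.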
-1
drivers/nvmem/lpc18xx_eeprom.c
··· 159 159 .word_size = 4, 160 160 .reg_read = lpc18xx_eeprom_read, 161 161 .reg_write = lpc18xx_eeprom_gather_write, 162 - .owner = THIS_MODULE, 163 162 }; 164 163 165 164 static int lpc18xx_eeprom_probe(struct platform_device *pdev)
-1
drivers/nvmem/lpc18xx_otp.c
··· 64 64 .read_only = true, 65 65 .word_size = LPC18XX_OTP_WORD_SIZE, 66 66 .stride = LPC18XX_OTP_WORD_SIZE, 67 - .owner = THIS_MODULE, 68 67 .reg_read = lpc18xx_otp_read, 69 68 }; 70 69
+2 -3
drivers/nvmem/meson-efuse.c
··· 1 1 /* 2 - * Amlogic eFuse Driver 2 + * Amlogic Meson GX eFuse Driver 3 3 * 4 4 * Copyright (c) 2016 Endless Computers, Inc. 5 5 * Author: Carlo Caione <carlo@endlessm.com> ··· 37 37 38 38 static struct nvmem_config econfig = { 39 39 .name = "meson-efuse", 40 - .owner = THIS_MODULE, 41 40 .stride = 1, 42 41 .word_size = 1, 43 42 .read_only = true, ··· 88 89 module_platform_driver(meson_efuse_driver); 89 90 90 91 MODULE_AUTHOR("Carlo Caione <carlo@endlessm.com>"); 91 - MODULE_DESCRIPTION("Amlogic Meson NVMEM driver"); 92 + MODULE_DESCRIPTION("Amlogic Meson GX NVMEM driver"); 92 93 MODULE_LICENSE("GPL v2");
+265
drivers/nvmem/meson-mx-efuse.c
··· 1 + /* 2 + * Amlogic Meson6, Meson8 and Meson8b eFuse Driver 3 + * 4 + * Copyright (c) 2017 Martin Blumenstingl <martin.blumenstingl@googlemail.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms of version 2 of the GNU General Public License as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, but WITHOUT 11 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 + * more details. 14 + */ 15 + 16 + #include <linux/bitfield.h> 17 + #include <linux/bitops.h> 18 + #include <linux/clk.h> 19 + #include <linux/delay.h> 20 + #include <linux/io.h> 21 + #include <linux/iopoll.h> 22 + #include <linux/module.h> 23 + #include <linux/nvmem-provider.h> 24 + #include <linux/of.h> 25 + #include <linux/of_device.h> 26 + #include <linux/platform_device.h> 27 + #include <linux/sizes.h> 28 + #include <linux/slab.h> 29 + 30 + #define MESON_MX_EFUSE_CNTL1 0x04 31 + #define MESON_MX_EFUSE_CNTL1_PD_ENABLE BIT(27) 32 + #define MESON_MX_EFUSE_CNTL1_AUTO_RD_BUSY BIT(26) 33 + #define MESON_MX_EFUSE_CNTL1_AUTO_RD_START BIT(25) 34 + #define MESON_MX_EFUSE_CNTL1_AUTO_RD_ENABLE BIT(24) 35 + #define MESON_MX_EFUSE_CNTL1_BYTE_WR_DATA GENMASK(23, 16) 36 + #define MESON_MX_EFUSE_CNTL1_AUTO_WR_BUSY BIT(14) 37 + #define MESON_MX_EFUSE_CNTL1_AUTO_WR_START BIT(13) 38 + #define MESON_MX_EFUSE_CNTL1_AUTO_WR_ENABLE BIT(12) 39 + #define MESON_MX_EFUSE_CNTL1_BYTE_ADDR_SET BIT(11) 40 + #define MESON_MX_EFUSE_CNTL1_BYTE_ADDR_MASK GENMASK(10, 0) 41 + 42 + #define MESON_MX_EFUSE_CNTL2 0x08 43 + 44 + #define MESON_MX_EFUSE_CNTL4 0x10 45 + #define MESON_MX_EFUSE_CNTL4_ENCRYPT_ENABLE BIT(10) 46 + 47 + struct meson_mx_efuse_platform_data { 48 + const char *name; 49 + unsigned int word_size; 50 + }; 51 + 52 + struct meson_mx_efuse { 53 + void __iomem *base; 54 + struct clk 
*core_clk; 55 + struct nvmem_device *nvmem; 56 + struct nvmem_config config; 57 + }; 58 + 59 + static void meson_mx_efuse_mask_bits(struct meson_mx_efuse *efuse, u32 reg, 60 + u32 mask, u32 set) 61 + { 62 + u32 data; 63 + 64 + data = readl(efuse->base + reg); 65 + data &= ~mask; 66 + data |= (set & mask); 67 + 68 + writel(data, efuse->base + reg); 69 + } 70 + 71 + static int meson_mx_efuse_hw_enable(struct meson_mx_efuse *efuse) 72 + { 73 + int err; 74 + 75 + err = clk_prepare_enable(efuse->core_clk); 76 + if (err) 77 + return err; 78 + 79 + /* power up the efuse */ 80 + meson_mx_efuse_mask_bits(efuse, MESON_MX_EFUSE_CNTL1, 81 + MESON_MX_EFUSE_CNTL1_PD_ENABLE, 0); 82 + 83 + meson_mx_efuse_mask_bits(efuse, MESON_MX_EFUSE_CNTL4, 84 + MESON_MX_EFUSE_CNTL4_ENCRYPT_ENABLE, 0); 85 + 86 + return 0; 87 + } 88 + 89 + static void meson_mx_efuse_hw_disable(struct meson_mx_efuse *efuse) 90 + { 91 + meson_mx_efuse_mask_bits(efuse, MESON_MX_EFUSE_CNTL1, 92 + MESON_MX_EFUSE_CNTL1_PD_ENABLE, 93 + MESON_MX_EFUSE_CNTL1_PD_ENABLE); 94 + 95 + clk_disable_unprepare(efuse->core_clk); 96 + } 97 + 98 + static int meson_mx_efuse_read_addr(struct meson_mx_efuse *efuse, 99 + unsigned int addr, u32 *value) 100 + { 101 + int err; 102 + u32 regval; 103 + 104 + /* write the address to read */ 105 + regval = FIELD_PREP(MESON_MX_EFUSE_CNTL1_BYTE_ADDR_MASK, addr); 106 + meson_mx_efuse_mask_bits(efuse, MESON_MX_EFUSE_CNTL1, 107 + MESON_MX_EFUSE_CNTL1_BYTE_ADDR_MASK, regval); 108 + 109 + /* inform the hardware that we changed the address */ 110 + meson_mx_efuse_mask_bits(efuse, MESON_MX_EFUSE_CNTL1, 111 + MESON_MX_EFUSE_CNTL1_BYTE_ADDR_SET, 112 + MESON_MX_EFUSE_CNTL1_BYTE_ADDR_SET); 113 + meson_mx_efuse_mask_bits(efuse, MESON_MX_EFUSE_CNTL1, 114 + MESON_MX_EFUSE_CNTL1_BYTE_ADDR_SET, 0); 115 + 116 + /* start the read process */ 117 + meson_mx_efuse_mask_bits(efuse, MESON_MX_EFUSE_CNTL1, 118 + MESON_MX_EFUSE_CNTL1_AUTO_RD_START, 119 + MESON_MX_EFUSE_CNTL1_AUTO_RD_START); 120 + 
meson_mx_efuse_mask_bits(efuse, MESON_MX_EFUSE_CNTL1, 121 + MESON_MX_EFUSE_CNTL1_AUTO_RD_START, 0); 122 + 123 + /* 124 + * perform a dummy read to ensure that the HW has the RD_BUSY bit set 125 + * when polling for the status below. 126 + */ 127 + readl(efuse->base + MESON_MX_EFUSE_CNTL1); 128 + 129 + err = readl_poll_timeout_atomic(efuse->base + MESON_MX_EFUSE_CNTL1, 130 + regval, 131 + (!(regval & MESON_MX_EFUSE_CNTL1_AUTO_RD_BUSY)), 132 + 1, 1000); 133 + if (err) { 134 + dev_err(efuse->config.dev, 135 + "Timeout while reading efuse address %u\n", addr); 136 + return err; 137 + } 138 + 139 + *value = readl(efuse->base + MESON_MX_EFUSE_CNTL2); 140 + 141 + return 0; 142 + } 143 + 144 + static int meson_mx_efuse_read(void *context, unsigned int offset, 145 + void *buf, size_t bytes) 146 + { 147 + struct meson_mx_efuse *efuse = context; 148 + u32 tmp; 149 + int err, i, addr; 150 + 151 + err = meson_mx_efuse_hw_enable(efuse); 152 + if (err) 153 + return err; 154 + 155 + meson_mx_efuse_mask_bits(efuse, MESON_MX_EFUSE_CNTL1, 156 + MESON_MX_EFUSE_CNTL1_AUTO_RD_ENABLE, 157 + MESON_MX_EFUSE_CNTL1_AUTO_RD_ENABLE); 158 + 159 + for (i = offset; i < offset + bytes; i += efuse->config.word_size) { 160 + addr = i / efuse->config.word_size; 161 + 162 + err = meson_mx_efuse_read_addr(efuse, addr, &tmp); 163 + if (err) 164 + break; 165 + 166 + memcpy(buf + i, &tmp, efuse->config.word_size); 167 + } 168 + 169 + meson_mx_efuse_mask_bits(efuse, MESON_MX_EFUSE_CNTL1, 170 + MESON_MX_EFUSE_CNTL1_AUTO_RD_ENABLE, 0); 171 + 172 + meson_mx_efuse_hw_disable(efuse); 173 + 174 + return err; 175 + } 176 + 177 + static const struct meson_mx_efuse_platform_data meson6_efuse_data = { 178 + .name = "meson6-efuse", 179 + .word_size = 1, 180 + }; 181 + 182 + static const struct meson_mx_efuse_platform_data meson8_efuse_data = { 183 + .name = "meson8-efuse", 184 + .word_size = 4, 185 + }; 186 + 187 + static const struct meson_mx_efuse_platform_data meson8b_efuse_data = { 188 + .name = "meson8b-efuse", 
189 + .word_size = 4, 190 + }; 191 + 192 + static const struct of_device_id meson_mx_efuse_match[] = { 193 + { .compatible = "amlogic,meson6-efuse", .data = &meson6_efuse_data }, 194 + { .compatible = "amlogic,meson8-efuse", .data = &meson8_efuse_data }, 195 + { .compatible = "amlogic,meson8b-efuse", .data = &meson8b_efuse_data }, 196 + { /* sentinel */ }, 197 + }; 198 + MODULE_DEVICE_TABLE(of, meson_mx_efuse_match); 199 + 200 + static int meson_mx_efuse_probe(struct platform_device *pdev) 201 + { 202 + const struct meson_mx_efuse_platform_data *drvdata; 203 + struct meson_mx_efuse *efuse; 204 + struct resource *res; 205 + 206 + drvdata = of_device_get_match_data(&pdev->dev); 207 + if (!drvdata) 208 + return -EINVAL; 209 + 210 + efuse = devm_kzalloc(&pdev->dev, sizeof(*efuse), GFP_KERNEL); 211 + if (!efuse) 212 + return -ENOMEM; 213 + 214 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 215 + efuse->base = devm_ioremap_resource(&pdev->dev, res); 216 + if (IS_ERR(efuse->base)) 217 + return PTR_ERR(efuse->base); 218 + 219 + efuse->config.name = devm_kstrdup(&pdev->dev, drvdata->name, 220 + GFP_KERNEL); 221 + efuse->config.owner = THIS_MODULE; 222 + efuse->config.dev = &pdev->dev; 223 + efuse->config.priv = efuse; 224 + efuse->config.stride = drvdata->word_size; 225 + efuse->config.word_size = drvdata->word_size; 226 + efuse->config.size = SZ_512; 227 + efuse->config.read_only = true; 228 + efuse->config.reg_read = meson_mx_efuse_read; 229 + 230 + efuse->core_clk = devm_clk_get(&pdev->dev, "core"); 231 + if (IS_ERR(efuse->core_clk)) { 232 + dev_err(&pdev->dev, "Failed to get core clock\n"); 233 + return PTR_ERR(efuse->core_clk); 234 + } 235 + 236 + efuse->nvmem = nvmem_register(&efuse->config); 237 + if (IS_ERR(efuse->nvmem)) 238 + return PTR_ERR(efuse->nvmem); 239 + 240 + platform_set_drvdata(pdev, efuse); 241 + 242 + return 0; 243 + } 244 + 245 + static int meson_mx_efuse_remove(struct platform_device *pdev) 246 + { 247 + struct meson_mx_efuse *efuse = 
platform_get_drvdata(pdev); 248 + 249 + return nvmem_unregister(efuse->nvmem); 250 + } 251 + 252 + static struct platform_driver meson_mx_efuse_driver = { 253 + .probe = meson_mx_efuse_probe, 254 + .remove = meson_mx_efuse_remove, 255 + .driver = { 256 + .name = "meson-mx-efuse", 257 + .of_match_table = meson_mx_efuse_match, 258 + }, 259 + }; 260 + 261 + module_platform_driver(meson_mx_efuse_driver); 262 + 263 + MODULE_AUTHOR("Martin Blumenstingl <martin.blumenstingl@googlemail.com>"); 264 + MODULE_DESCRIPTION("Amlogic Meson MX eFuse NVMEM driver"); 265 + MODULE_LICENSE("GPL v2");
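Most of the new meson-mx-efuse driver's register manipulation goes through `meson_mx_efuse_mask_bits()`, a read-modify-write helper. The same logic modeled in userspace, with a plain word standing in for the MMIO register (the function name here is ad hoc):

```c
#include <stdint.h>

/* Clear every bit in `mask`, then OR in the masked `set` value --
 * the same sequence the driver applies to its CNTL registers. */
static void mask_bits(uint32_t *reg, uint32_t mask, uint32_t set)
{
    uint32_t data = *reg;

    data &= ~mask;
    data |= (set & mask);
    *reg = data;
}
```

Powering up the eFuse, for example, clears PD_ENABLE (bit 27) this way while leaving every other CNTL1 bit untouched.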
+25 -22
drivers/nvmem/mtk-efuse.c
··· 18 18 #include <linux/nvmem-provider.h> 19 19 #include <linux/platform_device.h> 20 20 21 + struct mtk_efuse_priv { 22 + void __iomem *base; 23 + }; 24 + 21 25 static int mtk_reg_read(void *context, 22 26 unsigned int reg, void *_val, size_t bytes) 23 27 { 24 - void __iomem *base = context; 28 + struct mtk_efuse_priv *priv = context; 25 29 u32 *val = _val; 26 30 int i = 0, words = bytes / 4; 27 31 28 32 while (words--) 29 - *val++ = readl(base + reg + (i++ * 4)); 33 + *val++ = readl(priv->base + reg + (i++ * 4)); 30 34 31 35 return 0; 32 36 } ··· 38 34 static int mtk_reg_write(void *context, 39 35 unsigned int reg, void *_val, size_t bytes) 40 36 { 41 - void __iomem *base = context; 37 + struct mtk_efuse_priv *priv = context; 42 38 u32 *val = _val; 43 39 int i = 0, words = bytes / 4; 44 40 45 41 while (words--) 46 - writel(*val++, base + reg + (i++ * 4)); 42 + writel(*val++, priv->base + reg + (i++ * 4)); 47 43 48 44 return 0; 49 45 } ··· 53 49 struct device *dev = &pdev->dev; 54 50 struct resource *res; 55 51 struct nvmem_device *nvmem; 56 - struct nvmem_config *econfig; 57 - void __iomem *base; 52 + struct nvmem_config econfig = {}; 53 + struct mtk_efuse_priv *priv; 58 54 59 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 60 - base = devm_ioremap_resource(dev, res); 61 - if (IS_ERR(base)) 62 - return PTR_ERR(base); 63 - 64 - econfig = devm_kzalloc(dev, sizeof(*econfig), GFP_KERNEL); 65 - if (!econfig) 55 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 56 + if (!priv) 66 57 return -ENOMEM; 67 58 68 - econfig->stride = 4; 69 - econfig->word_size = 4; 70 - econfig->reg_read = mtk_reg_read; 71 - econfig->reg_write = mtk_reg_write; 72 - econfig->size = resource_size(res); 73 - econfig->priv = base; 74 - econfig->dev = dev; 75 - econfig->owner = THIS_MODULE; 76 - nvmem = nvmem_register(econfig); 59 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 60 + priv->base = devm_ioremap_resource(dev, res); 61 + if (IS_ERR(priv->base)) 62 + return 
PTR_ERR(priv->base); 63 + 64 + econfig.stride = 4; 65 + econfig.word_size = 4; 66 + econfig.reg_read = mtk_reg_read; 67 + econfig.reg_write = mtk_reg_write; 68 + econfig.size = resource_size(res); 69 + econfig.priv = priv; 70 + econfig.dev = dev; 71 + nvmem = nvmem_register(&econfig); 77 72 if (IS_ERR(nvmem)) 78 73 return PTR_ERR(nvmem); 79 74
-1
drivers/nvmem/mxs-ocotp.c
··· 118 118 .name = "mxs-ocotp", 119 119 .stride = 16, 120 120 .word_size = 4, 121 - .owner = THIS_MODULE, 122 121 .reg_read = mxs_ocotp_read, 123 122 }; 124 123
+17 -10
drivers/nvmem/qfprom.c
··· 17 17 #include <linux/nvmem-provider.h> 18 18 #include <linux/platform_device.h> 19 19 20 + struct qfprom_priv { 21 + void __iomem *base; 22 + }; 23 + 20 24 static int qfprom_reg_read(void *context, 21 25 unsigned int reg, void *_val, size_t bytes) 22 26 { 23 - void __iomem *base = context; 27 + struct qfprom_priv *priv = context; 24 28 u8 *val = _val; 25 29 int i = 0, words = bytes; 26 30 27 31 while (words--) 28 - *val++ = readb(base + reg + i++); 32 + *val++ = readb(priv->base + reg + i++); 29 33 30 34 return 0; 31 35 } ··· 37 33 static int qfprom_reg_write(void *context, 38 34 unsigned int reg, void *_val, size_t bytes) 39 35 { 40 - void __iomem *base = context; 36 + struct qfprom_priv *priv = context; 41 37 u8 *val = _val; 42 38 int i = 0, words = bytes; 43 39 44 40 while (words--) 45 - writeb(*val++, base + reg + i++); 41 + writeb(*val++, priv->base + reg + i++); 46 42 47 43 return 0; 48 44 } ··· 56 52 57 53 static struct nvmem_config econfig = { 58 54 .name = "qfprom", 59 - .owner = THIS_MODULE, 60 55 .stride = 1, 61 56 .word_size = 1, 62 57 .reg_read = qfprom_reg_read, ··· 67 64 struct device *dev = &pdev->dev; 68 65 struct resource *res; 69 66 struct nvmem_device *nvmem; 70 - void __iomem *base; 67 + struct qfprom_priv *priv; 68 + 69 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 70 + if (!priv) 71 + return -ENOMEM; 71 72 72 73 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 73 - base = devm_ioremap_resource(dev, res); 74 - if (IS_ERR(base)) 75 - return PTR_ERR(base); 74 + priv->base = devm_ioremap_resource(dev, res); 75 + if (IS_ERR(priv->base)) 76 + return PTR_ERR(priv->base); 76 77 77 78 econfig.size = resource_size(res); 78 79 econfig.dev = dev; 79 - econfig.priv = base; 80 + econfig.priv = priv; 80 81 81 82 nvmem = nvmem_register(&econfig); 82 83 if (IS_ERR(nvmem))
+4 -1
drivers/nvmem/rockchip-efuse.c
··· 149 149 150 150 static struct nvmem_config econfig = { 151 151 .name = "rockchip-efuse", 152 - .owner = THIS_MODULE, 153 152 .stride = 1, 154 153 .word_size = 1, 155 154 .read_only = true, ··· 174 175 }, 175 176 { 176 177 .compatible = "rockchip,rk3288-efuse", 178 + .data = (void *)&rockchip_rk3288_efuse_read, 179 + }, 180 + { 181 + .compatible = "rockchip,rk3368-efuse", 177 182 .data = (void *)&rockchip_rk3288_efuse_read, 178 183 }, 179 184 {
+156
drivers/nvmem/snvs_lpgpr.c
··· 1 + /* 2 + * Copyright (c) 2015 Pengutronix, Steffen Trumtrar <kernel@pengutronix.de> 3 + * Copyright (c) 2017 Pengutronix, Oleksij Rempel <kernel@pengutronix.de> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 7 + * as published by the Free Software Foundation. 8 + */ 9 + 10 + #include <linux/mfd/syscon.h> 11 + #include <linux/module.h> 12 + #include <linux/nvmem-provider.h> 13 + #include <linux/of_device.h> 14 + #include <linux/regmap.h> 15 + 16 + #define IMX6Q_SNVS_HPLR 0x00 17 + #define IMX6Q_GPR_SL BIT(5) 18 + #define IMX6Q_SNVS_LPLR 0x34 19 + #define IMX6Q_GPR_HL BIT(5) 20 + #define IMX6Q_SNVS_LPGPR 0x68 21 + 22 + struct snvs_lpgpr_cfg { 23 + int offset; 24 + int offset_hplr; 25 + int offset_lplr; 26 + }; 27 + 28 + struct snvs_lpgpr_priv { 29 + struct device_d *dev; 30 + struct regmap *regmap; 31 + struct nvmem_config cfg; 32 + const struct snvs_lpgpr_cfg *dcfg; 33 + }; 34 + 35 + static const struct snvs_lpgpr_cfg snvs_lpgpr_cfg_imx6q = { 36 + .offset = IMX6Q_SNVS_LPGPR, 37 + .offset_hplr = IMX6Q_SNVS_HPLR, 38 + .offset_lplr = IMX6Q_SNVS_LPLR, 39 + }; 40 + 41 + static int snvs_lpgpr_write(void *context, unsigned int offset, void *val, 42 + size_t bytes) 43 + { 44 + struct snvs_lpgpr_priv *priv = context; 45 + const struct snvs_lpgpr_cfg *dcfg = priv->dcfg; 46 + unsigned int lock_reg; 47 + int ret; 48 + 49 + ret = regmap_read(priv->regmap, dcfg->offset_hplr, &lock_reg); 50 + if (ret < 0) 51 + return ret; 52 + 53 + if (lock_reg & IMX6Q_GPR_SL) 54 + return -EPERM; 55 + 56 + ret = regmap_read(priv->regmap, dcfg->offset_lplr, &lock_reg); 57 + if (ret < 0) 58 + return ret; 59 + 60 + if (lock_reg & IMX6Q_GPR_HL) 61 + return -EPERM; 62 + 63 + return regmap_bulk_write(priv->regmap, dcfg->offset + offset, val, 64 + bytes / 4); 65 + } 66 + 67 + static int snvs_lpgpr_read(void *context, unsigned int offset, void *val, 68 + size_t bytes) 69 + { 70 + struct 
snvs_lpgpr_priv *priv = context; 71 + const struct snvs_lpgpr_cfg *dcfg = priv->dcfg; 72 + 73 + return regmap_bulk_read(priv->regmap, dcfg->offset + offset, 74 + val, bytes / 4); 75 + } 76 + 77 + static int snvs_lpgpr_probe(struct platform_device *pdev) 78 + { 79 + struct device *dev = &pdev->dev; 80 + struct device_node *node = dev->of_node; 81 + struct device_node *syscon_node; 82 + struct snvs_lpgpr_priv *priv; 83 + struct nvmem_config *cfg; 84 + struct nvmem_device *nvmem; 85 + const struct snvs_lpgpr_cfg *dcfg; 86 + 87 + if (!node) 88 + return -ENOENT; 89 + 90 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 91 + if (!priv) 92 + return -ENOMEM; 93 + 94 + dcfg = of_device_get_match_data(dev); 95 + if (!dcfg) 96 + return -EINVAL; 97 + 98 + syscon_node = of_get_parent(node); 99 + if (!syscon_node) 100 + return -ENODEV; 101 + 102 + priv->regmap = syscon_node_to_regmap(syscon_node); 103 + of_node_put(syscon_node); 104 + if (IS_ERR(priv->regmap)) 105 + return PTR_ERR(priv->regmap); 106 + 107 + priv->dcfg = dcfg; 108 + 109 + cfg = &priv->cfg; 110 + cfg->priv = priv; 111 + cfg->name = dev_name(dev); 112 + cfg->dev = dev; 113 + cfg->stride = 4, 114 + cfg->word_size = 4, 115 + cfg->size = 4, 116 + cfg->owner = THIS_MODULE, 117 + cfg->reg_read = snvs_lpgpr_read, 118 + cfg->reg_write = snvs_lpgpr_write, 119 + 120 + nvmem = nvmem_register(cfg); 121 + if (IS_ERR(nvmem)) 122 + return PTR_ERR(nvmem); 123 + 124 + platform_set_drvdata(pdev, nvmem); 125 + 126 + return 0; 127 + } 128 + 129 + static int snvs_lpgpr_remove(struct platform_device *pdev) 130 + { 131 + struct nvmem_device *nvmem = platform_get_drvdata(pdev); 132 + 133 + return nvmem_unregister(nvmem); 134 + } 135 + 136 + static const struct of_device_id snvs_lpgpr_dt_ids[] = { 137 + { .compatible = "fsl,imx6q-snvs-lpgpr", .data = &snvs_lpgpr_cfg_imx6q }, 138 + { .compatible = "fsl,imx6ul-snvs-lpgpr", 139 + .data = &snvs_lpgpr_cfg_imx6q }, 140 + { }, 141 + }; 142 + MODULE_DEVICE_TABLE(of, snvs_lpgpr_dt_ids); 143 + 
144 + static struct platform_driver snvs_lpgpr_driver = { 145 + .probe = snvs_lpgpr_probe, 146 + .remove = snvs_lpgpr_remove, 147 + .driver = { 148 + .name = "snvs_lpgpr", 149 + .of_match_table = snvs_lpgpr_dt_ids, 150 + }, 151 + }; 152 + module_platform_driver(snvs_lpgpr_driver); 153 + 154 + MODULE_AUTHOR("Oleksij Rempel <o.rempel@pengutronix.de>"); 155 + MODULE_DESCRIPTION("Low Power General Purpose Register in i.MX6 Secure Non-Volatile Storage"); 156 + MODULE_LICENSE("GPL v2");
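Before touching the LPGPR word, `snvs_lpgpr_write()` reads two lock registers and refuses the write with -EPERM if either lock bit is set: the soft-lock bit checked in HPLR, or the hard-lock bit checked in LPLR. That gate, modeled with plain values in place of the regmap reads (the helper name is ad hoc):

```c
#include <errno.h>
#include <stdint.h>

#define IMX6Q_GPR_SL (1u << 5)   /* soft-lock bit, checked in SNVS_HPLR */
#define IMX6Q_GPR_HL (1u << 5)   /* hard-lock bit, checked in SNVS_LPLR */

/* Returns 0 when the GPR may be written, -EPERM when either lock is
 * set, mirroring the checks at the top of snvs_lpgpr_write(). */
static int lpgpr_write_allowed(uint32_t hplr, uint32_t lplr)
{
    if (hplr & IMX6Q_GPR_SL)
        return -EPERM;
    if (lplr & IMX6Q_GPR_HL)
        return -EPERM;
    return 0;
}
```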
+6 -1
drivers/nvmem/sunxi_sid.c
··· 40 40 .read_only = true, 41 41 .stride = 4, 42 42 .word_size = 1, 43 - .owner = THIS_MODULE, 44 43 }; 45 44 46 45 struct sunxi_sid_cfg { ··· 198 199 .need_register_readout = true, 199 200 }; 200 201 202 + static const struct sunxi_sid_cfg sun50i_a64_cfg = { 203 + .value_offset = 0x200, 204 + .size = 0x100, 205 + }; 206 + 201 207 static const struct of_device_id sunxi_sid_of_match[] = { 202 208 { .compatible = "allwinner,sun4i-a10-sid", .data = &sun4i_a10_cfg }, 203 209 { .compatible = "allwinner,sun7i-a20-sid", .data = &sun7i_a20_cfg }, 204 210 { .compatible = "allwinner,sun8i-h3-sid", .data = &sun8i_h3_cfg }, 211 + { .compatible = "allwinner,sun50i-a64-sid", .data = &sun50i_a64_cfg }, 205 212 {/* sentinel */}, 206 213 }; 207 214 MODULE_DEVICE_TABLE(of, sunxi_sid_of_match);
+97
drivers/nvmem/uniphier-efuse.c
(new file)

/*
 * UniPhier eFuse driver
 *
 * Copyright (C) 2017 Socionext Inc.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#include <linux/device.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/nvmem-provider.h>
#include <linux/platform_device.h>

struct uniphier_efuse_priv {
	void __iomem *base;
};

static int uniphier_reg_read(void *context,
			     unsigned int reg, void *_val, size_t bytes)
{
	struct uniphier_efuse_priv *priv = context;
	u32 *val = _val;
	int offs;

	for (offs = 0; offs < bytes; offs += sizeof(u32))
		*val++ = readl(priv->base + reg + offs);

	return 0;
}

static int uniphier_efuse_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct resource *res;
	struct nvmem_device *nvmem;
	struct nvmem_config econfig = {};
	struct uniphier_efuse_priv *priv;

	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	priv->base = devm_ioremap_resource(dev, res);
	if (IS_ERR(priv->base))
		return PTR_ERR(priv->base);

	econfig.stride = 4;
	econfig.word_size = 4;
	econfig.read_only = true;
	econfig.reg_read = uniphier_reg_read;
	econfig.size = resource_size(res);
	econfig.priv = priv;
	econfig.dev = dev;
	nvmem = nvmem_register(&econfig);
	if (IS_ERR(nvmem))
		return PTR_ERR(nvmem);

	platform_set_drvdata(pdev, nvmem);

	return 0;
}

static int uniphier_efuse_remove(struct platform_device *pdev)
{
	struct nvmem_device *nvmem = platform_get_drvdata(pdev);

	return nvmem_unregister(nvmem);
}

static const struct of_device_id uniphier_efuse_of_match[] = {
	{ .compatible = "socionext,uniphier-efuse",},
	{/* sentinel */},
};
MODULE_DEVICE_TABLE(of, uniphier_efuse_of_match);

static struct platform_driver uniphier_efuse_driver = {
	.probe = uniphier_efuse_probe,
	.remove = uniphier_efuse_remove,
	.driver = {
		.name = "uniphier-efuse",
		.of_match_table = uniphier_efuse_of_match,
	},
};
module_platform_driver(uniphier_efuse_driver);

MODULE_AUTHOR("Keiji Hayashibara <hayashibara.keiji@socionext.com>");
MODULE_DESCRIPTION("UniPhier eFuse driver");
MODULE_LICENSE("GPL v2");
-1
drivers/nvmem/vf610-ocotp.c
···
 static struct nvmem_config ocotp_config = {
 	.name = "ocotp",
-	.owner = THIS_MODULE,
 	.stride = 4,
 	.word_size = 4,
 	.reg_read = vf610_ocotp_read,
+1 -1
drivers/parport/parport_ip32.c
···
 /*--- Default parport operations ---------------------------------------*/

-static __initdata struct parport_operations parport_ip32_ops = {
+static const struct parport_operations parport_ip32_ops __initconst = {
 	.write_data = parport_ip32_write_data,
 	.read_data = parport_ip32_read_data,
+1 -1
drivers/pcmcia/cistpl.c
···
 }

-struct bin_attribute pccard_cis_attr = {
+const struct bin_attribute pccard_cis_attr = {
 	.attr = { .name = "cis", .mode = S_IRUGO | S_IWUSR },
 	.size = 0x200,
 	.read = pccard_show_cis,
+1 -1
drivers/pcmcia/cs_internal.h
···
 int pcmcia_setup_irq(struct pcmcia_device *p_dev);

 /* cistpl.c */
-extern struct bin_attribute pccard_cis_attr;
+extern const struct bin_attribute pccard_cis_attr;

 int pcmcia_read_cis_mem(struct pcmcia_socket *s, int attr,
 		u_int addr, u_int len, void *ptr);
+2 -5
drivers/pcmcia/m32r_cfc.c
···
 	return IRQ_RETVAL(handled);
 } /* pcc_interrupt */

-static void pcc_interrupt_wrapper(u_long data)
+static void pcc_interrupt_wrapper(struct timer_list *unused)
 {
 	pr_debug("m32r_cfc: pcc_interrupt_wrapper:\n");
 	pcc_interrupt(0, NULL);
-	init_timer(&poll_timer);
 	poll_timer.expires = jiffies + poll_interval;
 	add_timer(&poll_timer);
 }
···
 	/* Finally, schedule a polling interrupt */
 	if (poll_interval != 0) {
-		poll_timer.function = pcc_interrupt_wrapper;
-		poll_timer.data = 0;
-		init_timer(&poll_timer);
+		timer_setup(&poll_timer, pcc_interrupt_wrapper, 0);
 		poll_timer.expires = jiffies + poll_interval;
 		add_timer(&poll_timer);
 	}
+2 -5
drivers/pcmcia/m32r_pcc.c
···
 	return IRQ_RETVAL(handled);
 } /* pcc_interrupt */

-static void pcc_interrupt_wrapper(u_long data)
+static void pcc_interrupt_wrapper(struct timer_list *unused)
 {
 	pcc_interrupt(0, NULL);
-	init_timer(&poll_timer);
 	poll_timer.expires = jiffies + poll_interval;
 	add_timer(&poll_timer);
 }
···
 	/* Finally, schedule a polling interrupt */
 	if (poll_interval != 0) {
-		poll_timer.function = pcc_interrupt_wrapper;
-		poll_timer.data = 0;
-		init_timer(&poll_timer);
+		timer_setup(&poll_timer, pcc_interrupt_wrapper, 0);
 		poll_timer.expires = jiffies + poll_interval;
 		add_timer(&poll_timer);
 	}
+1
drivers/thunderbolt/tb.c
···
 			tb_port_info(up_port,
 				     "PCIe tunnel activation failed, aborting\n");
 			tb_pci_free(tunnel);
+			continue;
 		}

 		list_add(&tunnel->list, &tcm->tunnel_list);
+26 -47
drivers/vme/bridges/vme_ca91cx42.c
···
 	ca91cx42_bridge = image->parent;

 	/* Find pci_dev container of dev */
-	if (ca91cx42_bridge->parent == NULL) {
+	if (!ca91cx42_bridge->parent) {
 		dev_err(ca91cx42_bridge->parent, "Dev entry NULL\n");
 		return -EINVAL;
 	}
···
 		image->kern_base = NULL;
 		kfree(image->bus_resource.name);
 		release_resource(&image->bus_resource);
-		memset(&image->bus_resource, 0, sizeof(struct resource));
+		memset(&image->bus_resource, 0, sizeof(image->bus_resource));
 	}

-	if (image->bus_resource.name == NULL) {
+	if (!image->bus_resource.name) {
 		image->bus_resource.name = kmalloc(VMENAMSIZ+3, GFP_ATOMIC);
-		if (image->bus_resource.name == NULL) {
-			dev_err(ca91cx42_bridge->parent, "Unable to allocate "
-				"memory for resource name\n");
+		if (!image->bus_resource.name) {
 			retval = -ENOMEM;
 			goto err_name;
 		}
···
 	image->kern_base = ioremap_nocache(
 		image->bus_resource.start, size);
-	if (image->kern_base == NULL) {
+	if (!image->kern_base) {
 		dev_err(ca91cx42_bridge->parent, "Failed to remap resource\n");
 		retval = -ENOMEM;
 		goto err_remap;
···
 	release_resource(&image->bus_resource);
 err_resource:
 	kfree(image->bus_resource.name);
-	memset(&image->bus_resource, 0, sizeof(struct resource));
+	memset(&image->bus_resource, 0, sizeof(image->bus_resource));
 err_name:
 	return retval;
 }
···
 	image->kern_base = NULL;
 	release_resource(&image->bus_resource);
 	kfree(image->bus_resource.name);
-	memset(&image->bus_resource, 0, sizeof(struct resource));
+	memset(&image->bus_resource, 0, sizeof(image->bus_resource));
 }
···
 	dev = list->parent->parent->parent;

 	/* XXX descriptor must be aligned on 64-bit boundaries */
-	entry = kmalloc(sizeof(struct ca91cx42_dma_entry), GFP_KERNEL);
-	if (entry == NULL) {
-		dev_err(dev, "Failed to allocate memory for dma resource "
-			"structure\n");
+	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry) {
 		retval = -ENOMEM;
 		goto err_mem;
 	}
···
 		goto err_align;
 	}

-	memset(&entry->descriptor, 0, sizeof(struct ca91cx42_dma_descriptor));
+	memset(&entry->descriptor, 0, sizeof(entry->descriptor));

 	if (dest->type == VME_DMA_VME) {
 		entry->descriptor.dctl |= CA91CX42_DCTL_L2V;
···
 	/* If we already have a callback attached, we can't move it! */
 	for (i = 0; i < lm->monitors; i++) {
-		if (bridge->lm_callback[i] != NULL) {
+		if (bridge->lm_callback[i]) {
 			mutex_unlock(&lm->mtx);
 			dev_err(dev, "Location monitor callback attached, "
 				"can't reset\n");
···
 	}

 	/* Check that a callback isn't already attached */
-	if (bridge->lm_callback[monitor] != NULL) {
+	if (bridge->lm_callback[monitor]) {
 		mutex_unlock(&lm->mtx);
 		dev_err(dev, "Existing callback attached\n");
 		return -EBUSY;
···
 	/* Allocate mem for CR/CSR image */
 	bridge->crcsr_kernel = pci_zalloc_consistent(pdev, VME_CRCSR_BUF_SIZE,
 						     &bridge->crcsr_bus);
-	if (bridge->crcsr_kernel == NULL) {
+	if (!bridge->crcsr_kernel) {
 		dev_err(&pdev->dev, "Failed to allocate memory for CR/CSR "
 			"image\n");
 		return -ENOMEM;
···
 	/* We want to support more than one of each bridge so we need to
 	 * dynamically allocate the bridge structure
 	 */
-	ca91cx42_bridge = kzalloc(sizeof(struct vme_bridge), GFP_KERNEL);
-
-	if (ca91cx42_bridge == NULL) {
-		dev_err(&pdev->dev, "Failed to allocate memory for device "
-			"structure\n");
+	ca91cx42_bridge = kzalloc(sizeof(*ca91cx42_bridge), GFP_KERNEL);
+	if (!ca91cx42_bridge) {
 		retval = -ENOMEM;
 		goto err_struct;
 	}
 	vme_init_bridge(ca91cx42_bridge);

-	ca91cx42_device = kzalloc(sizeof(struct ca91cx42_driver), GFP_KERNEL);
-
-	if (ca91cx42_device == NULL) {
-		dev_err(&pdev->dev, "Failed to allocate memory for device "
-			"structure\n");
+	ca91cx42_device = kzalloc(sizeof(*ca91cx42_device), GFP_KERNEL);
+	if (!ca91cx42_device) {
 		retval = -ENOMEM;
 		goto err_driver;
 	}
···
 	/* Add master windows to list */
 	for (i = 0; i < CA91C142_MAX_MASTER; i++) {
-		master_image = kmalloc(sizeof(struct vme_master_resource),
-			GFP_KERNEL);
-		if (master_image == NULL) {
-			dev_err(&pdev->dev, "Failed to allocate memory for "
-				"master resource structure\n");
+		master_image = kmalloc(sizeof(*master_image), GFP_KERNEL);
+		if (!master_image) {
 			retval = -ENOMEM;
 			goto err_master;
 		}
···
 			VME_SUPER | VME_USER | VME_PROG | VME_DATA;
 		master_image->width_attr = VME_D8 | VME_D16 | VME_D32 | VME_D64;
 		memset(&master_image->bus_resource, 0,
-			sizeof(struct resource));
+		       sizeof(master_image->bus_resource));
 		master_image->kern_base = NULL;
 		list_add_tail(&master_image->list,
 			&ca91cx42_bridge->master_resources);
···
 	/* Add slave windows to list */
 	for (i = 0; i < CA91C142_MAX_SLAVE; i++) {
-		slave_image = kmalloc(sizeof(struct vme_slave_resource),
-			GFP_KERNEL);
-		if (slave_image == NULL) {
-			dev_err(&pdev->dev, "Failed to allocate memory for "
-				"slave resource structure\n");
+		slave_image = kmalloc(sizeof(*slave_image), GFP_KERNEL);
+		if (!slave_image) {
 			retval = -ENOMEM;
 			goto err_slave;
 		}
···
 	/* Add dma engines to list */
 	for (i = 0; i < CA91C142_MAX_DMA; i++) {
-		dma_ctrlr = kmalloc(sizeof(struct vme_dma_resource),
-			GFP_KERNEL);
-		if (dma_ctrlr == NULL) {
-			dev_err(&pdev->dev, "Failed to allocate memory for "
-				"dma resource structure\n");
+		dma_ctrlr = kmalloc(sizeof(*dma_ctrlr), GFP_KERNEL);
+		if (!dma_ctrlr) {
 			retval = -ENOMEM;
 			goto err_dma;
 		}
···
 	/* Add location monitor to list */
-	lm = kmalloc(sizeof(struct vme_lm_resource), GFP_KERNEL);
-	if (lm == NULL) {
-		dev_err(&pdev->dev, "Failed to allocate memory for "
-			"location monitor resource structure\n");
+	lm = kmalloc(sizeof(*lm), GFP_KERNEL);
+	if (!lm) {
 		retval = -ENOMEM;
 		goto err_lm;
 	}
+16 -19
drivers/vme/bridges/vme_fake.c
···
 		/* Each location monitor covers 8 bytes */
 		if (((lm_base + (8 * i)) <= addr) &&
 		    ((lm_base + (8 * i) + 8) > addr)) {
-			if (bridge->lm_callback[i] != NULL)
+			if (bridge->lm_callback[i])
 				bridge->lm_callback[i](
 					bridge->lm_data[i]);
 		}
···
 	/* If we already have a callback attached, we can't move it! */
 	for (i = 0; i < lm->monitors; i++) {
-		if (bridge->lm_callback[i] != NULL) {
+		if (bridge->lm_callback[i]) {
 			mutex_unlock(&lm->mtx);
 			pr_err("Location monitor callback attached, can't reset\n");
 			return -EBUSY;
···
 	}

 	/* Check that a callback isn't already attached */
-	if (bridge->lm_callback[monitor] != NULL) {
+	if (bridge->lm_callback[monitor]) {
 		mutex_unlock(&lm->mtx);
 		pr_err("Existing callback attached\n");
 		return -EBUSY;
···
 	/* If all location monitors disabled, disable global Location Monitor */
 	tmp = 0;
 	for (i = 0; i < lm->monitors; i++) {
-		if (bridge->lm_callback[i] != NULL)
+		if (bridge->lm_callback[i])
 			tmp = 1;
 	}
···
 {
 	void *alloc = kmalloc(size, GFP_KERNEL);

-	if (alloc != NULL)
+	if (alloc)
 		*dma = fake_ptr_to_pci(alloc);

 	return alloc;
···
 	/* Allocate mem for CR/CSR image */
 	bridge->crcsr_kernel = kzalloc(VME_CRCSR_BUF_SIZE, GFP_KERNEL);
 	bridge->crcsr_bus = fake_ptr_to_pci(bridge->crcsr_kernel);
-	if (bridge->crcsr_kernel == NULL)
+	if (!bridge->crcsr_kernel)
 		return -ENOMEM;

 	vstat = fake_slot_get(fake_bridge);
···
 	/* If we want to support more than one bridge at some point, we need to
 	 * dynamically allocate this so we get one per device.
 	 */
-	fake_bridge = kzalloc(sizeof(struct vme_bridge), GFP_KERNEL);
-	if (fake_bridge == NULL) {
+	fake_bridge = kzalloc(sizeof(*fake_bridge), GFP_KERNEL);
+	if (!fake_bridge) {
 		retval = -ENOMEM;
 		goto err_struct;
 	}

-	fake_device = kzalloc(sizeof(struct fake_driver), GFP_KERNEL);
-	if (fake_device == NULL) {
+	fake_device = kzalloc(sizeof(*fake_device), GFP_KERNEL);
+	if (!fake_device) {
 		retval = -ENOMEM;
 		goto err_driver;
 	}
···
 	/* Add master windows to list */
 	INIT_LIST_HEAD(&fake_bridge->master_resources);
 	for (i = 0; i < FAKE_MAX_MASTER; i++) {
-		master_image = kmalloc(sizeof(struct vme_master_resource),
-				GFP_KERNEL);
-		if (master_image == NULL) {
+		master_image = kmalloc(sizeof(*master_image), GFP_KERNEL);
+		if (!master_image) {
 			retval = -ENOMEM;
 			goto err_master;
 		}
···
 	/* Add slave windows to list */
 	INIT_LIST_HEAD(&fake_bridge->slave_resources);
 	for (i = 0; i < FAKE_MAX_SLAVE; i++) {
-		slave_image = kmalloc(sizeof(struct vme_slave_resource),
-			GFP_KERNEL);
-		if (slave_image == NULL) {
+		slave_image = kmalloc(sizeof(*slave_image), GFP_KERNEL);
+		if (!slave_image) {
 			retval = -ENOMEM;
 			goto err_slave;
 		}
···
 	/* Add location monitor to list */
 	INIT_LIST_HEAD(&fake_bridge->lm_resources);
-	lm = kmalloc(sizeof(struct vme_lm_resource), GFP_KERNEL);
-	if (lm == NULL) {
-		pr_err("Failed to allocate memory for location monitor resource structure\n");
+	lm = kmalloc(sizeof(*lm), GFP_KERNEL);
+	if (!lm) {
 		retval = -ENOMEM;
 		goto err_lm;
 	}
+32 -51
drivers/vme/bridges/vme_tsi148.c
···
 		image->kern_base = NULL;
 		kfree(image->bus_resource.name);
 		release_resource(&image->bus_resource);
-		memset(&image->bus_resource, 0, sizeof(struct resource));
+		memset(&image->bus_resource, 0, sizeof(image->bus_resource));
 	}

 	/* Exit here if size is zero */
 	if (size == 0)
 		return 0;

-	if (image->bus_resource.name == NULL) {
+	if (!image->bus_resource.name) {
 		image->bus_resource.name = kmalloc(VMENAMSIZ+3, GFP_ATOMIC);
-		if (image->bus_resource.name == NULL) {
-			dev_err(tsi148_bridge->parent, "Unable to allocate "
-				"memory for resource name\n");
+		if (!image->bus_resource.name) {
 			retval = -ENOMEM;
 			goto err_name;
 		}
···
 	image->kern_base = ioremap_nocache(
 		image->bus_resource.start, size);
-	if (image->kern_base == NULL) {
+	if (!image->kern_base) {
 		dev_err(tsi148_bridge->parent, "Failed to remap resource\n");
 		retval = -ENOMEM;
 		goto err_remap;
···
 	release_resource(&image->bus_resource);
 err_resource:
 	kfree(image->bus_resource.name);
-	memset(&image->bus_resource, 0, sizeof(struct resource));
+	memset(&image->bus_resource, 0, sizeof(image->bus_resource));
 err_name:
 	return retval;
 }
···
 	image->kern_base = NULL;
 	release_resource(&image->bus_resource);
 	kfree(image->bus_resource.name);
-	memset(&image->bus_resource, 0, sizeof(struct resource));
+	memset(&image->bus_resource, 0, sizeof(image->bus_resource));
 }

 /*
···
 	tsi148_bridge = list->parent->parent;

 	/* Descriptor must be aligned on 64-bit boundaries */
-	entry = kmalloc(sizeof(struct tsi148_dma_entry), GFP_KERNEL);
-	if (entry == NULL) {
-		dev_err(tsi148_bridge->parent, "Failed to allocate memory for "
-			"dma resource structure\n");
+	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry) {
 		retval = -ENOMEM;
 		goto err_mem;
 	}
···
 	/* Given we are going to fill out the structure, we probably don't
 	 * need to zero it, but better safe than sorry for now.
 	 */
-	memset(&entry->descriptor, 0, sizeof(struct tsi148_dma_descriptor));
+	memset(&entry->descriptor, 0, sizeof(entry->descriptor));

 	/* Fill out source part */
 	switch (src->type) {
···
 	list_add_tail(&entry->list, &list->entries);

 	entry->dma_handle = dma_map_single(tsi148_bridge->parent,
-		&entry->descriptor,
-		sizeof(struct tsi148_dma_descriptor), DMA_TO_DEVICE);
+					   &entry->descriptor,
+					   sizeof(entry->descriptor),
+					   DMA_TO_DEVICE);
 	if (dma_mapping_error(tsi148_bridge->parent, entry->dma_handle)) {
 		dev_err(tsi148_bridge->parent, "DMA mapping error\n");
 		retval = -EINVAL;
···
 	/* If we already have a callback attached, we can't move it! */
 	for (i = 0; i < lm->monitors; i++) {
-		if (bridge->lm_callback[i] != NULL) {
+		if (bridge->lm_callback[i]) {
 			mutex_unlock(&lm->mtx);
 			dev_err(tsi148_bridge->parent, "Location monitor "
 				"callback attached, can't reset\n");
···
 	}

 	/* Check that a callback isn't already attached */
-	if (bridge->lm_callback[monitor] != NULL) {
+	if (bridge->lm_callback[monitor]) {
 		mutex_unlock(&lm->mtx);
 		dev_err(tsi148_bridge->parent, "Existing callback attached\n");
 		return -EBUSY;
···
 	/* Allocate mem for CR/CSR image */
 	bridge->crcsr_kernel = pci_zalloc_consistent(pdev, VME_CRCSR_BUF_SIZE,
 						     &bridge->crcsr_bus);
-	if (bridge->crcsr_kernel == NULL) {
+	if (!bridge->crcsr_kernel) {
 		dev_err(tsi148_bridge->parent, "Failed to allocate memory for "
 			"CR/CSR image\n");
 		return -ENOMEM;
···
 	/* If we want to support more than one of each bridge, we need to
 	 * dynamically generate this so we get one per device
 	 */
-	tsi148_bridge = kzalloc(sizeof(struct vme_bridge), GFP_KERNEL);
-	if (tsi148_bridge == NULL) {
-		dev_err(&pdev->dev, "Failed to allocate memory for device "
-			"structure\n");
+	tsi148_bridge = kzalloc(sizeof(*tsi148_bridge), GFP_KERNEL);
+	if (!tsi148_bridge) {
 		retval = -ENOMEM;
 		goto err_struct;
 	}
 	vme_init_bridge(tsi148_bridge);

-	tsi148_device = kzalloc(sizeof(struct tsi148_driver), GFP_KERNEL);
-	if (tsi148_device == NULL) {
-		dev_err(&pdev->dev, "Failed to allocate memory for device "
-			"structure\n");
+	tsi148_device = kzalloc(sizeof(*tsi148_device), GFP_KERNEL);
+	if (!tsi148_device) {
 		retval = -ENOMEM;
 		goto err_driver;
 	}
···
 		master_num--;

 		tsi148_device->flush_image =
-			kmalloc(sizeof(struct vme_master_resource), GFP_KERNEL);
-		if (tsi148_device->flush_image == NULL) {
-			dev_err(&pdev->dev, "Failed to allocate memory for "
-				"flush resource structure\n");
+			kmalloc(sizeof(*tsi148_device->flush_image),
+				GFP_KERNEL);
+		if (!tsi148_device->flush_image) {
 			retval = -ENOMEM;
 			goto err_master;
 		}
···
 		tsi148_device->flush_image->locked = 1;
 		tsi148_device->flush_image->number = master_num;
 		memset(&tsi148_device->flush_image->bus_resource, 0,
-			sizeof(struct resource));
+		       sizeof(tsi148_device->flush_image->bus_resource));
 		tsi148_device->flush_image->kern_base = NULL;
 	}

 	/* Add master windows to list */
 	for (i = 0; i < master_num; i++) {
-		master_image = kmalloc(sizeof(struct vme_master_resource),
-			GFP_KERNEL);
-		if (master_image == NULL) {
-			dev_err(&pdev->dev, "Failed to allocate memory for "
-				"master resource structure\n");
+		master_image = kmalloc(sizeof(*master_image), GFP_KERNEL);
+		if (!master_image) {
 			retval = -ENOMEM;
 			goto err_master;
 		}
···
 			VME_PROG | VME_DATA;
 		master_image->width_attr = VME_D16 | VME_D32;
 		memset(&master_image->bus_resource, 0,
-			sizeof(struct resource));
+		       sizeof(master_image->bus_resource));
 		master_image->kern_base = NULL;
 		list_add_tail(&master_image->list,
 			&tsi148_bridge->master_resources);
···
 	/* Add slave windows to list */
 	for (i = 0; i < TSI148_MAX_SLAVE; i++) {
-		slave_image = kmalloc(sizeof(struct vme_slave_resource),
-			GFP_KERNEL);
-		if (slave_image == NULL) {
-			dev_err(&pdev->dev, "Failed to allocate memory for "
-				"slave resource structure\n");
+		slave_image = kmalloc(sizeof(*slave_image), GFP_KERNEL);
+		if (!slave_image) {
 			retval = -ENOMEM;
 			goto err_slave;
 		}
···
 	/* Add dma engines to list */
 	for (i = 0; i < TSI148_MAX_DMA; i++) {
-		dma_ctrlr = kmalloc(sizeof(struct vme_dma_resource),
-			GFP_KERNEL);
-		if (dma_ctrlr == NULL) {
-			dev_err(&pdev->dev, "Failed to allocate memory for "
-				"dma resource structure\n");
+		dma_ctrlr = kmalloc(sizeof(*dma_ctrlr), GFP_KERNEL);
+		if (!dma_ctrlr) {
 			retval = -ENOMEM;
 			goto err_dma;
 		}
···
 	/* Add location monitor to list */
-	lm = kmalloc(sizeof(struct vme_lm_resource), GFP_KERNEL);
-	if (lm == NULL) {
-		dev_err(&pdev->dev, "Failed to allocate memory for "
-			"location monitor resource structure\n");
+	lm = kmalloc(sizeof(*lm), GFP_KERNEL);
+	if (!lm) {
 		retval = -ENOMEM;
 		goto err_lm;
 	}
+92 -122
drivers/vme/vme.c
···
 {
 	struct vme_bridge *bridge;

-	if (resource == NULL) {
+	if (!resource) {
 		printk(KERN_ERR "No resource\n");
 		return NULL;
 	}

 	bridge = find_bridge(resource);
-	if (bridge == NULL) {
+	if (!bridge) {
 		printk(KERN_ERR "Can't find bridge\n");
 		return NULL;
 	}

-	if (bridge->parent == NULL) {
+	if (!bridge->parent) {
 		printk(KERN_ERR "Dev entry NULL for bridge %s\n", bridge->name);
 		return NULL;
 	}

-	if (bridge->alloc_consistent == NULL) {
+	if (!bridge->alloc_consistent) {
 		printk(KERN_ERR "alloc_consistent not supported by bridge %s\n",
 		       bridge->name);
 		return NULL;
···
 {
 	struct vme_bridge *bridge;

-	if (resource == NULL) {
+	if (!resource) {
 		printk(KERN_ERR "No resource\n");
 		return;
 	}

 	bridge = find_bridge(resource);
-	if (bridge == NULL) {
+	if (!bridge) {
 		printk(KERN_ERR "Can't find bridge\n");
 		return;
 	}

-	if (bridge->parent == NULL) {
+	if (!bridge->parent) {
 		printk(KERN_ERR "Dev entry NULL for bridge %s\n", bridge->name);
 		return;
 	}

-	if (bridge->free_consistent == NULL) {
+	if (!bridge->free_consistent) {
 		printk(KERN_ERR "free_consistent not supported by bridge %s\n",
 		       bridge->name);
 		return;
···
 {
 	int retval = 0;

+	if (vme_base + size < size)
+		return -EINVAL;
+
 	switch (aspace) {
 	case VME_A16:
-		if (((vme_base + size) > VME_A16_MAX) ||
-				(vme_base > VME_A16_MAX))
+		if (vme_base + size > VME_A16_MAX)
 			retval = -EFAULT;
 		break;
 	case VME_A24:
-		if (((vme_base + size) > VME_A24_MAX) ||
-				(vme_base > VME_A24_MAX))
+		if (vme_base + size > VME_A24_MAX)
 			retval = -EFAULT;
 		break;
 	case VME_A32:
-		if (((vme_base + size) > VME_A32_MAX) ||
-				(vme_base > VME_A32_MAX))
+		if (vme_base + size > VME_A32_MAX)
 			retval = -EFAULT;
 		break;
 	case VME_A64:
-		if ((size != 0) && (vme_base > U64_MAX + 1 - size))
-			retval = -EFAULT;
+		/* The VME_A64_MAX limit is actually U64_MAX + 1 */
 		break;
 	case VME_CRCSR:
-		if (((vme_base + size) > VME_CRCSR_MAX) ||
-				(vme_base > VME_CRCSR_MAX))
+		if (vme_base + size > VME_CRCSR_MAX)
 			retval = -EFAULT;
 		break;
 	case VME_USER1:
···
 	struct vme_resource *resource = NULL;

 	bridge = vdev->bridge;
-	if (bridge == NULL) {
+	if (!bridge) {
 		printk(KERN_ERR "Can't find VME bus\n");
 		goto err_bus;
 	}
···
 		slave_image = list_entry(slave_pos,
 			struct vme_slave_resource, list);

-		if (slave_image == NULL) {
+		if (!slave_image) {
 			printk(KERN_ERR "Registered NULL Slave resource\n");
 			continue;
 		}
···
 	}

 	/* No free image */
-	if (allocated_image == NULL)
+	if (!allocated_image)
 		goto err_image;

-	resource = kmalloc(sizeof(struct vme_resource), GFP_KERNEL);
-	if (resource == NULL) {
-		printk(KERN_WARNING "Unable to allocate resource structure\n");
+	resource = kmalloc(sizeof(*resource), GFP_KERNEL);
+	if (!resource)
 		goto err_alloc;
-	}
+
 	resource->type = VME_SLAVE;
 	resource->entry = &allocated_image->list;
···
 	image = list_entry(resource->entry, struct vme_slave_resource, list);

-	if (bridge->slave_set == NULL) {
+	if (!bridge->slave_set) {
 		printk(KERN_ERR "Function not supported\n");
 		return -ENOSYS;
 	}
···
 	image = list_entry(resource->entry, struct vme_slave_resource, list);

-	if (bridge->slave_get == NULL) {
+	if (!bridge->slave_get) {
 		printk(KERN_ERR "vme_slave_get not supported\n");
 		return -EINVAL;
 	}
···
 	slave_image = list_entry(resource->entry, struct vme_slave_resource,
 		list);
-	if (slave_image == NULL) {
+	if (!slave_image) {
 		printk(KERN_ERR "Can't find slave resource\n");
 		return;
 	}
···
 	struct vme_resource *resource = NULL;

 	bridge = vdev->bridge;
-	if (bridge == NULL) {
+	if (!bridge) {
 		printk(KERN_ERR "Can't find VME bus\n");
 		goto err_bus;
 	}
···
 		master_image = list_entry(master_pos,
 			struct vme_master_resource, list);

-		if (master_image == NULL) {
+		if (!master_image) {
 			printk(KERN_WARNING "Registered NULL master resource\n");
 			continue;
 		}
···
 	}

 	/* Check to see if we found a resource */
-	if (allocated_image == NULL) {
+	if (!allocated_image) {
 		printk(KERN_ERR "Can't find a suitable resource\n");
 		goto err_image;
 	}

-	resource = kmalloc(sizeof(struct vme_resource), GFP_KERNEL);
-	if (resource == NULL) {
-		printk(KERN_ERR "Unable to allocate resource structure\n");
+	resource = kmalloc(sizeof(*resource), GFP_KERNEL);
+	if (!resource)
 		goto err_alloc;
-	}
+
 	resource->type = VME_MASTER;
 	resource->entry = &allocated_image->list;
···
 	image = list_entry(resource->entry, struct vme_master_resource, list);

-	if (bridge->master_set == NULL) {
+	if (!bridge->master_set) {
 		printk(KERN_WARNING "vme_master_set not supported\n");
 		return -EINVAL;
 	}
···
 	image = list_entry(resource->entry, struct vme_master_resource, list);

-	if (bridge->master_get == NULL) {
+	if (!bridge->master_get) {
 		printk(KERN_WARNING "%s not supported\n", __func__);
 		return -EINVAL;
 	}
···
 	struct vme_master_resource *image;
 	size_t length;

-	if (bridge->master_read == NULL) {
+	if (!bridge->master_read) {
 		printk(KERN_WARNING "Reading from resource not supported\n");
 		return -EINVAL;
 	}
···
 	struct vme_master_resource *image;
 	size_t length;

-	if (bridge->master_write == NULL) {
+	if (!bridge->master_write) {
 		printk(KERN_WARNING "Writing to resource not supported\n");
 		return -EINVAL;
 	}
···
 	struct vme_bridge *bridge = find_bridge(resource);
 	struct vme_master_resource *image;

-	if (bridge->master_rmw == NULL) {
+	if (!bridge->master_rmw) {
 		printk(KERN_WARNING "Writing to resource not supported\n");
 		return -EINVAL;
 	}
···
 	master_image = list_entry(resource->entry, struct vme_master_resource,
 		list);
-	if (master_image == NULL) {
+	if (!master_image) {
 		printk(KERN_ERR "Can't find master resource\n");
 		return;
 	}
···
 		printk(KERN_ERR "No VME resource Attribute tests done\n");

 	bridge = vdev->bridge;
-	if (bridge == NULL) {
+	if (!bridge) {
 		printk(KERN_ERR "Can't find VME bus\n");
 		goto err_bus;
 	}
···
 	list_for_each(dma_pos, &bridge->dma_resources) {
 		dma_ctrlr = list_entry(dma_pos,
 			struct vme_dma_resource, list);
-
-		if (dma_ctrlr == NULL) {
+		if (!dma_ctrlr) {
 			printk(KERN_ERR "Registered NULL DMA resource\n");
 			continue;
 		}
···
 	}

 	/* Check to see if we found a resource */
-	if (allocated_ctrlr == NULL)
+	if (!allocated_ctrlr)
 		goto err_ctrlr;

-	resource = kmalloc(sizeof(struct vme_resource), GFP_KERNEL);
-	if (resource == NULL) {
-		printk(KERN_WARNING "Unable to allocate resource structure\n");
+	resource = kmalloc(sizeof(*resource), GFP_KERNEL);
+	if (!resource)
 		goto err_alloc;
-	}
+
 	resource->type = VME_DMA;
 	resource->entry = &allocated_ctrlr->list;
···
 struct vme_dma_list *vme_new_dma_list(struct vme_resource *resource)
 {
-	struct vme_dma_resource *ctrlr;
 	struct vme_dma_list *dma_list;

 	if (resource->type != VME_DMA) {
···
 		return NULL;
 	}

-	ctrlr = list_entry(resource->entry, struct vme_dma_resource, list);
-
-	dma_list = kmalloc(sizeof(struct vme_dma_list), GFP_KERNEL);
-	if (dma_list == NULL) {
-		printk(KERN_ERR "Unable to allocate memory for new DMA list\n");
+	dma_list = kmalloc(sizeof(*dma_list), GFP_KERNEL);
+	if (!dma_list)
 		return NULL;
-	}
+
 	INIT_LIST_HEAD(&dma_list->entries);
-	dma_list->parent = ctrlr;
+	dma_list->parent = list_entry(resource->entry,
+				      struct vme_dma_resource,
+				      list);
 	mutex_init(&dma_list->mtx);

 	return dma_list;
···
 	struct vme_dma_attr *attributes;
 	struct vme_dma_pattern *pattern_attr;

-	attributes = kmalloc(sizeof(struct vme_dma_attr), GFP_KERNEL);
-	if (attributes == NULL) {
-		printk(KERN_ERR "Unable to allocate memory for attributes structure\n");
+	attributes = kmalloc(sizeof(*attributes), GFP_KERNEL);
+	if (!attributes)
 		goto err_attr;
-	}

-	pattern_attr = kmalloc(sizeof(struct vme_dma_pattern), GFP_KERNEL);
-	if (pattern_attr == NULL) {
-		printk(KERN_ERR "Unable to allocate memory for pattern attributes\n");
+	pattern_attr = kmalloc(sizeof(*pattern_attr), GFP_KERNEL);
+	if (!pattern_attr)
 		goto err_pat;
-	}

 	attributes->type = VME_DMA_PATTERN;
 	attributes->private = (void *)pattern_attr;
···
 	/* XXX Run some sanity checks here */

-	attributes = kmalloc(sizeof(struct vme_dma_attr), GFP_KERNEL);
-	if (attributes == NULL) {
-		printk(KERN_ERR "Unable to allocate memory for attributes structure\n");
+	attributes = kmalloc(sizeof(*attributes), GFP_KERNEL);
+	if (!attributes)
 		goto err_attr;
-	}

-	pci_attr = kmalloc(sizeof(struct vme_dma_pci), GFP_KERNEL);
-	if (pci_attr == NULL) {
-		printk(KERN_ERR "Unable to allocate memory for PCI attributes\n");
+	pci_attr = kmalloc(sizeof(*pci_attr), GFP_KERNEL);
+	if (!pci_attr)
 		goto err_pci;
-	}
-
-

 	attributes->type = VME_DMA_PCI;
 	attributes->private = (void *)pci_attr;
···
 	struct vme_dma_attr *attributes;
 	struct vme_dma_vme *vme_attr;

-	attributes = kmalloc(
-		sizeof(struct vme_dma_attr), GFP_KERNEL);
-	if (attributes == NULL) {
-		printk(KERN_ERR "Unable to allocate memory for attributes structure\n");
+	attributes = kmalloc(sizeof(*attributes), GFP_KERNEL);
+	if (!attributes)
 		goto err_attr;
-	}

-	vme_attr = kmalloc(sizeof(struct vme_dma_vme), GFP_KERNEL);
-	if (vme_attr == NULL) {
-		printk(KERN_ERR "Unable to allocate memory for VME attributes\n");
+	vme_attr = kmalloc(sizeof(*vme_attr), GFP_KERNEL);
+	if (!vme_attr)
 		goto err_vme;
-	}

 	attributes->type = VME_DMA_VME;
 	attributes->private = (void *)vme_attr;
···
 	struct vme_bridge *bridge = list->parent->parent;
 	int retval;

-	if (bridge->dma_list_add == NULL) {
+	if (!bridge->dma_list_add) {
 		printk(KERN_WARNING "Link List DMA generation not supported\n");
 		return -EINVAL;
 	}
···
 	struct vme_bridge *bridge = list->parent->parent;
 	int retval;

-	if (bridge->dma_list_exec == NULL) {
+	if (!bridge->dma_list_exec) {
 		printk(KERN_ERR "Link List DMA execution not supported\n");
 		return -EINVAL;
 	}
···
 	struct vme_bridge *bridge = list->parent->parent;
 	int retval;

-	if (bridge->dma_list_empty == NULL) {
+	if (!bridge->dma_list_empty) {
 		printk(KERN_WARNING "Emptying of Link Lists not supported\n");
 		return -EINVAL;
 	}

 	if (!mutex_trylock(&list->mtx)) {
 		printk(KERN_ERR "Link List in use\n");
-		return -EINVAL;
+		return -EBUSY;
 	}

 	/*
···
 	call = bridge->irq[level - 1].callback[statid].func;
 	priv_data = bridge->irq[level - 1].callback[statid].priv_data;
-
-	if (call != NULL)
+	if (call)
 		call(level, statid, priv_data);
 	else
 		printk(KERN_WARNING "Spurious VME interrupt, level:%x, vector:%x\n",
···
 	struct vme_bridge *bridge;

 	bridge = vdev->bridge;
-	if (bridge == NULL) {
+	if (!bridge) {
 		printk(KERN_ERR "Can't find VME bus\n");
 		return -EINVAL;
 	}
···
 		return -EINVAL;
 	}

-	if (bridge->irq_set == NULL) {
+	if (!bridge->irq_set) {
 		printk(KERN_ERR "Configuring interrupts not supported\n");
 		return -EINVAL;
 	}
···
 	struct vme_bridge *bridge;

 	bridge = vdev->bridge;
-	if (bridge == NULL) {
+	if (!bridge) {
 		printk(KERN_ERR "Can't find VME bus\n");
 		return;
 	}
···
 		return;
 	}

-	if (bridge->irq_set == NULL) {
+	if (!bridge->irq_set) {
 		printk(KERN_ERR "Configuring interrupts not supported\n");
 		return;
 	}
···
 	struct vme_bridge *bridge;

 	bridge = vdev->bridge;
-	if (bridge == NULL) {
+	if (!bridge) {
 		printk(KERN_ERR "Can't find VME bus\n");
 		return -EINVAL;
 	}
···
 		return -EINVAL;
 	}

-	if (bridge->irq_generate == NULL) {
+	if (!bridge->irq_generate) {
 		printk(KERN_WARNING "Interrupt generation
not supported\n"); 1461 1485 return -EINVAL; 1462 1486 } ··· 1484 1508 struct vme_resource *resource = NULL; 1485 1509 1486 1510 bridge = vdev->bridge; 1487 - if (bridge == NULL) { 1511 + if (!bridge) { 1488 1512 printk(KERN_ERR "Can't find VME bus\n"); 1489 1513 goto err_bus; 1490 1514 } ··· 1493 1517 list_for_each(lm_pos, &bridge->lm_resources) { 1494 1518 lm = list_entry(lm_pos, 1495 1519 struct vme_lm_resource, list); 1496 - 1497 - if (lm == NULL) { 1520 + if (!lm) { 1498 1521 printk(KERN_ERR "Registered NULL Location Monitor resource\n"); 1499 1522 continue; 1500 1523 } ··· 1510 1535 } 1511 1536 1512 1537 /* Check to see if we found a resource */ 1513 - if (allocated_lm == NULL) 1538 + if (!allocated_lm) 1514 1539 goto err_lm; 1515 1540 1516 - resource = kmalloc(sizeof(struct vme_resource), GFP_KERNEL); 1517 - if (resource == NULL) { 1518 - printk(KERN_ERR "Unable to allocate resource structure\n"); 1541 + resource = kmalloc(sizeof(*resource), GFP_KERNEL); 1542 + if (!resource) 1519 1543 goto err_alloc; 1520 - } 1544 + 1521 1545 resource->type = VME_LM; 1522 1546 resource->entry = &allocated_lm->list; 1523 1547 ··· 1586 1612 1587 1613 lm = list_entry(resource->entry, struct vme_lm_resource, list); 1588 1614 1589 - if (bridge->lm_set == NULL) { 1615 + if (!bridge->lm_set) { 1590 1616 printk(KERN_ERR "vme_lm_set not supported\n"); 1591 1617 return -EINVAL; 1592 1618 } ··· 1622 1648 1623 1649 lm = list_entry(resource->entry, struct vme_lm_resource, list); 1624 1650 1625 - if (bridge->lm_get == NULL) { 1651 + if (!bridge->lm_get) { 1626 1652 printk(KERN_ERR "vme_lm_get not supported\n"); 1627 1653 return -EINVAL; 1628 1654 } ··· 1659 1685 1660 1686 lm = list_entry(resource->entry, struct vme_lm_resource, list); 1661 1687 1662 - if (bridge->lm_attach == NULL) { 1688 + if (!bridge->lm_attach) { 1663 1689 printk(KERN_ERR "vme_lm_attach not supported\n"); 1664 1690 return -EINVAL; 1665 1691 } ··· 1692 1718 1693 1719 lm = list_entry(resource->entry, struct 
vme_lm_resource, list); 1694 1720 1695 - if (bridge->lm_detach == NULL) { 1721 + if (!bridge->lm_detach) { 1696 1722 printk(KERN_ERR "vme_lm_detach not supported\n"); 1697 1723 return -EINVAL; 1698 1724 } ··· 1754 1780 struct vme_bridge *bridge; 1755 1781 1756 1782 bridge = vdev->bridge; 1757 - if (bridge == NULL) { 1783 + if (!bridge) { 1758 1784 printk(KERN_ERR "Can't find VME bus\n"); 1759 1785 return -EINVAL; 1760 1786 } 1761 1787 1762 - if (bridge->slot_get == NULL) { 1788 + if (!bridge->slot_get) { 1763 1789 printk(KERN_WARNING "vme_slot_num not supported\n"); 1764 1790 return -EINVAL; 1765 1791 } ··· 1782 1808 struct vme_bridge *bridge; 1783 1809 1784 1810 bridge = vdev->bridge; 1785 - if (bridge == NULL) { 1811 + if (!bridge) { 1786 1812 pr_err("Can't find VME bus\n"); 1787 1813 return -EINVAL; 1788 1814 } ··· 1862 1888 struct vme_dev *tmp; 1863 1889 1864 1890 for (i = 0; i < ndevs; i++) { 1865 - vdev = kzalloc(sizeof(struct vme_dev), GFP_KERNEL); 1891 + vdev = kzalloc(sizeof(*vdev), GFP_KERNEL); 1866 1892 if (!vdev) { 1867 1893 err = -ENOMEM; 1868 1894 goto err_devalloc; ··· 1994 2020 1995 2021 static int vme_bus_probe(struct device *dev) 1996 2022 { 1997 - int retval = -ENODEV; 1998 2023 struct vme_driver *driver; 1999 2024 struct vme_dev *vdev = dev_to_vme_dev(dev); 2000 2025 2001 2026 driver = dev->platform_data; 2027 + if (driver->probe) 2028 + return driver->probe(vdev); 2002 2029 2003 - if (driver->probe != NULL) 2004 - retval = driver->probe(vdev); 2005 - 2006 - return retval; 2030 + return -ENODEV; 2007 2031 } 2008 2032 2009 2033 static int vme_bus_remove(struct device *dev) 2010 2034 { 2011 - int retval = -ENODEV; 2012 2035 struct vme_driver *driver; 2013 2036 struct vme_dev *vdev = dev_to_vme_dev(dev); 2014 2037 2015 2038 driver = dev->platform_data; 2039 + if (driver->remove) 2040 + return driver->remove(vdev); 2016 2041 2017 - if (driver->remove != NULL) 2018 - retval = driver->remove(vdev); 2019 - 2020 - return retval; 2042 + return -ENODEV; 
2021 2043 } 2022 2044 2023 2045 struct bus_type vme_bus_type = {
+15
drivers/w1/slaves/Kconfig
··· 148 148 149 149 If you are unsure, say N. 150 150 151 + config W1_SLAVE_DS28E17 152 + tristate "1-wire-to-I2C master bridge (DS28E17)" 153 + select CRC16 154 + depends on I2C 155 + help 156 + Say Y here if you want to use the DS28E17 1-wire-to-I2C master bridge. 157 + For each DS28E17 detected, a new I2C adapter is created within the 158 + kernel. I2C devices on that bus can be configured to be used by the 159 + kernel and userspace tools as on any other "native" I2C bus. 160 + 161 + This driver can also be built as a module. If so, the module 162 + will be called w1_ds28e17. 163 + 164 + If you are unsure, say N. 165 + 151 166 endmenu
+1
drivers/w1/slaves/Makefile
··· 18 18 obj-$(CONFIG_W1_SLAVE_DS2780) += w1_ds2780.o 19 19 obj-$(CONFIG_W1_SLAVE_DS2781) += w1_ds2781.o 20 20 obj-$(CONFIG_W1_SLAVE_DS28E04) += w1_ds28e04.o 21 + obj-$(CONFIG_W1_SLAVE_DS28E17) += w1_ds28e17.o
+771
drivers/w1/slaves/w1_ds28e17.c
··· 1 + /* 2 + * w1_ds28e17.c - w1 family 19 (DS28E17) driver 3 + * 4 + * Copyright (c) 2016 Jan Kandziora <jjj@gmx.de> 5 + * 6 + * This source code is licensed under the GNU General Public License, 7 + * Version 2. See the file COPYING for more details. 8 + */ 9 + 10 + #include <linux/crc16.h> 11 + #include <linux/delay.h> 12 + #include <linux/device.h> 13 + #include <linux/i2c.h> 14 + #include <linux/kernel.h> 15 + #include <linux/module.h> 16 + #include <linux/moduleparam.h> 17 + #include <linux/slab.h> 18 + #include <linux/types.h> 19 + #include <linux/uaccess.h> 20 + 21 + #define CRC16_INIT 0 22 + 23 + #include <linux/w1.h> 24 + 25 + #define W1_FAMILY_DS28E17 0x19 26 + 27 + /* Module setup. */ 28 + MODULE_LICENSE("GPL v2"); 29 + MODULE_AUTHOR("Jan Kandziora <jjj@gmx.de>"); 30 + MODULE_DESCRIPTION("w1 family 19 driver for DS28E17, 1-wire to I2C master bridge"); 31 + MODULE_ALIAS("w1-family-" __stringify(W1_FAMILY_DS28E17)); 32 + 33 + 34 + /* Default I2C speed to be set when a DS28E17 is detected. */ 35 + static int i2c_speed = 100; 36 + module_param_named(speed, i2c_speed, int, (S_IRUSR | S_IWUSR)); 37 + MODULE_PARM_DESC(speed, "Default I2C speed to be set when a DS28E17 is detected"); 38 + 39 + /* Default I2C stretch value to be set when a DS28E17 is detected. */ 40 + static char i2c_stretch = 1; 41 + module_param_named(stretch, i2c_stretch, byte, (S_IRUSR | S_IWUSR)); 42 + MODULE_PARM_DESC(stretch, "Default I2C stretch value to be set when a DS28E17 is detected"); 43 + 44 + /* DS28E17 device command codes. 
*/ 45 + #define W1_F19_WRITE_DATA_WITH_STOP 0x4B 46 + #define W1_F19_WRITE_DATA_NO_STOP 0x5A 47 + #define W1_F19_WRITE_DATA_ONLY 0x69 48 + #define W1_F19_WRITE_DATA_ONLY_WITH_STOP 0x78 49 + #define W1_F19_READ_DATA_WITH_STOP 0x87 50 + #define W1_F19_WRITE_READ_DATA_WITH_STOP 0x2D 51 + #define W1_F19_WRITE_CONFIGURATION 0xD2 52 + #define W1_F19_READ_CONFIGURATION 0xE1 53 + #define W1_F19_ENABLE_SLEEP_MODE 0x1E 54 + #define W1_F19_READ_DEVICE_REVISION 0xC4 55 + 56 + /* DS28E17 status bits */ 57 + #define W1_F19_STATUS_CRC 0x01 58 + #define W1_F19_STATUS_ADDRESS 0x02 59 + #define W1_F19_STATUS_START 0x08 60 + 61 + /* 62 + * Maximum number of I2C bytes to transfer within one CRC16 protected onewire 63 + * command. 64 + */ 65 + #define W1_F19_WRITE_DATA_LIMIT 255 66 + 67 + /* Maximum number of I2C bytes to read with one onewire command. */ 68 + #define W1_F19_READ_DATA_LIMIT 255 69 + 70 + /* Constants for calculating the busy sleep. */ 71 + #define W1_F19_BUSY_TIMEBASES { 90, 23, 10 } 72 + #define W1_F19_BUSY_GRATUITY 1000 73 + 74 + /* Number of checks for the busy flag before timeout. */ 75 + #define W1_F19_BUSY_CHECKS 1000 76 + 77 + 78 + /* Slave specific data. */ 79 + struct w1_f19_data { 80 + u8 speed; 81 + u8 stretch; 82 + struct i2c_adapter adapter; 83 + }; 84 + 85 + 86 + /* Wait a while until the busy flag clears. */ 87 + static int w1_f19_i2c_busy_wait(struct w1_slave *sl, size_t count) 88 + { 89 + const unsigned long timebases[3] = W1_F19_BUSY_TIMEBASES; 90 + struct w1_f19_data *data = sl->family_data; 91 + unsigned int checks; 92 + 93 + /* Check the busy flag first in any case. */ 94 + if (w1_touch_bit(sl->master, 1) == 0) 95 + return 0; 96 + 97 + /* 98 + * Do a generously long sleep in the beginning, 99 + * as we have to wait at least this time for all 100 + * the I2C bytes at the given speed to be transferred.
101 + */ 102 + usleep_range(timebases[data->speed] * (data->stretch) * count, 103 + timebases[data->speed] * (data->stretch) * count 104 + + W1_F19_BUSY_GRATUITY); 105 + 106 + /* Now continuously check the busy flag sent by the DS28E17. */ 107 + checks = W1_F19_BUSY_CHECKS; 108 + while ((checks--) > 0) { 109 + /* Return success if the busy flag is cleared. */ 110 + if (w1_touch_bit(sl->master, 1) == 0) 111 + return 0; 112 + 113 + /* Wait one non-stretched byte timeslot. */ 114 + udelay(timebases[data->speed]); 115 + } 116 + 117 + /* Timeout. */ 118 + dev_warn(&sl->dev, "busy timeout\n"); 119 + return -ETIMEDOUT; 120 + } 121 + 122 + 123 + /* Utility function: check the DS28E17 status bytes for errors. */ 124 + static int w1_f19_error(struct w1_slave *sl, u8 w1_buf[]) 125 + { 126 + /* Warnings. */ 127 + if (w1_buf[0] & W1_F19_STATUS_CRC) 128 + dev_warn(&sl->dev, "crc16 mismatch\n"); 129 + if (w1_buf[0] & W1_F19_STATUS_ADDRESS) 130 + dev_warn(&sl->dev, "i2c device not responding\n"); 131 + if ((w1_buf[0] & (W1_F19_STATUS_CRC | W1_F19_STATUS_ADDRESS)) == 0 132 + && w1_buf[1] != 0) { 133 + dev_warn(&sl->dev, "i2c short write, %d bytes not acknowledged\n", 134 + w1_buf[1]); 135 + } 136 + 137 + /* Check error conditions. */ 138 + if (w1_buf[0] & W1_F19_STATUS_ADDRESS) 139 + return -ENXIO; 140 + if (w1_buf[0] & W1_F19_STATUS_START) 141 + return -EAGAIN; 142 + if (w1_buf[0] != 0 || w1_buf[1] != 0) 143 + return -EIO; 144 + 145 + /* All ok. */ 146 + return 0; 147 + } 148 + 149 + 150 + /* Utility function: write data to I2C slave, single chunk. */ 151 + static int __w1_f19_i2c_write(struct w1_slave *sl, 152 + const u8 *command, size_t command_count, 153 + const u8 *buffer, size_t count) 154 + { 155 + u16 crc; 156 + int error; 157 + u8 w1_buf[2]; 158 + 159 + /* Send command and I2C data to DS28E17.
*/ 160 + crc = crc16(CRC16_INIT, command, command_count); 161 + w1_write_block(sl->master, command, command_count); 162 + 163 + w1_buf[0] = count; 164 + crc = crc16(crc, w1_buf, 1); 165 + w1_write_8(sl->master, w1_buf[0]); 166 + 167 + crc = crc16(crc, buffer, count); 168 + w1_write_block(sl->master, buffer, count); 169 + 170 + w1_buf[0] = ~(crc & 0xFF); 171 + w1_buf[1] = ~((crc >> 8) & 0xFF); 172 + w1_write_block(sl->master, w1_buf, 2); 173 + 174 + /* Wait until busy flag clears (or timeout). */ 175 + if (w1_f19_i2c_busy_wait(sl, count + 1) < 0) 176 + return -ETIMEDOUT; 177 + 178 + /* Read status from DS28E17. */ 179 + w1_read_block(sl->master, w1_buf, 2); 180 + 181 + /* Check error conditions. */ 182 + error = w1_f19_error(sl, w1_buf); 183 + if (error < 0) 184 + return error; 185 + 186 + /* Return number of bytes written. */ 187 + return count; 188 + } 189 + 190 + 191 + /* Write data to I2C slave. */ 192 + static int w1_f19_i2c_write(struct w1_slave *sl, u16 i2c_address, 193 + const u8 *buffer, size_t count, bool stop) 194 + { 195 + int result; 196 + int remaining = count; 197 + const u8 *p; 198 + u8 command[2]; 199 + 200 + /* Check input. */ 201 + if (count == 0) 202 + return -EOPNOTSUPP; 203 + 204 + /* Check whether we need multiple commands. */ 205 + if (count <= W1_F19_WRITE_DATA_LIMIT) { 206 + /* 207 + * Small data amount. Data can be sent with 208 + * a single onewire command. 209 + */ 210 + 211 + /* Send all data to DS28E17. */ 212 + command[0] = (stop ? W1_F19_WRITE_DATA_WITH_STOP 213 + : W1_F19_WRITE_DATA_NO_STOP); 214 + command[1] = i2c_address << 1; 215 + result = __w1_f19_i2c_write(sl, command, 2, buffer, count); 216 + } else { 217 + /* Large data amount. Data has to be sent in multiple chunks. */ 218 + 219 + /* Send first chunk to DS28E17. 
*/ 220 + p = buffer; 221 + command[0] = W1_F19_WRITE_DATA_NO_STOP; 222 + command[1] = i2c_address << 1; 223 + result = __w1_f19_i2c_write(sl, command, 2, p, 224 + W1_F19_WRITE_DATA_LIMIT); 225 + if (result < 0) 226 + return result; 227 + 228 + /* Resume to same DS28E17. */ 229 + if (w1_reset_resume_command(sl->master)) 230 + return -EIO; 231 + 232 + /* Next data chunk. */ 233 + p += W1_F19_WRITE_DATA_LIMIT; 234 + remaining -= W1_F19_WRITE_DATA_LIMIT; 235 + 236 + while (remaining > W1_F19_WRITE_DATA_LIMIT) { 237 + /* Send intermediate chunk to DS28E17. */ 238 + command[0] = W1_F19_WRITE_DATA_ONLY; 239 + result = __w1_f19_i2c_write(sl, command, 1, p, 240 + W1_F19_WRITE_DATA_LIMIT); 241 + if (result < 0) 242 + return result; 243 + 244 + /* Resume to same DS28E17. */ 245 + if (w1_reset_resume_command(sl->master)) 246 + return -EIO; 247 + 248 + /* Next data chunk. */ 249 + p += W1_F19_WRITE_DATA_LIMIT; 250 + remaining -= W1_F19_WRITE_DATA_LIMIT; 251 + } 252 + 253 + /* Send final chunk to DS28E17. */ 254 + command[0] = (stop ? W1_F19_WRITE_DATA_ONLY_WITH_STOP 255 + : W1_F19_WRITE_DATA_ONLY); 256 + result = __w1_f19_i2c_write(sl, command, 1, p, remaining); 257 + } 258 + 259 + return result; 260 + } 261 + 262 + 263 + /* Read data from I2C slave. */ 264 + static int w1_f19_i2c_read(struct w1_slave *sl, u16 i2c_address, 265 + u8 *buffer, size_t count) 266 + { 267 + u16 crc; 268 + int error; 269 + u8 w1_buf[5]; 270 + 271 + /* Check input. */ 272 + if (count == 0) 273 + return -EOPNOTSUPP; 274 + 275 + /* Send command to DS28E17. */ 276 + w1_buf[0] = W1_F19_READ_DATA_WITH_STOP; 277 + w1_buf[1] = i2c_address << 1 | 0x01; 278 + w1_buf[2] = count; 279 + crc = crc16(CRC16_INIT, w1_buf, 3); 280 + w1_buf[3] = ~(crc & 0xFF); 281 + w1_buf[4] = ~((crc >> 8) & 0xFF); 282 + w1_write_block(sl->master, w1_buf, 5); 283 + 284 + /* Wait until busy flag clears (or timeout). */ 285 + if (w1_f19_i2c_busy_wait(sl, count + 1) < 0) 286 + return -ETIMEDOUT; 287 + 288 + /* Read status from DS28E17. 
*/ 289 + w1_buf[0] = w1_read_8(sl->master); 290 + w1_buf[1] = 0; 291 + 292 + /* Check error conditions. */ 293 + error = w1_f19_error(sl, w1_buf); 294 + if (error < 0) 295 + return error; 296 + 297 + /* Read received I2C data from DS28E17. */ 298 + return w1_read_block(sl->master, buffer, count); 299 + } 300 + 301 + 302 + /* Write to, then read data from I2C slave. */ 303 + static int w1_f19_i2c_write_read(struct w1_slave *sl, u16 i2c_address, 304 + const u8 *wbuffer, size_t wcount, u8 *rbuffer, size_t rcount) 305 + { 306 + u16 crc; 307 + int error; 308 + u8 w1_buf[3]; 309 + 310 + /* Check input. */ 311 + if (wcount == 0 || rcount == 0) 312 + return -EOPNOTSUPP; 313 + 314 + /* Send command and I2C data to DS28E17. */ 315 + w1_buf[0] = W1_F19_WRITE_READ_DATA_WITH_STOP; 316 + w1_buf[1] = i2c_address << 1; 317 + w1_buf[2] = wcount; 318 + crc = crc16(CRC16_INIT, w1_buf, 3); 319 + w1_write_block(sl->master, w1_buf, 3); 320 + 321 + crc = crc16(crc, wbuffer, wcount); 322 + w1_write_block(sl->master, wbuffer, wcount); 323 + 324 + w1_buf[0] = rcount; 325 + crc = crc16(crc, w1_buf, 1); 326 + w1_buf[1] = ~(crc & 0xFF); 327 + w1_buf[2] = ~((crc >> 8) & 0xFF); 328 + w1_write_block(sl->master, w1_buf, 3); 329 + 330 + /* Wait until busy flag clears (or timeout). */ 331 + if (w1_f19_i2c_busy_wait(sl, wcount + rcount + 2) < 0) 332 + return -ETIMEDOUT; 333 + 334 + /* Read status from DS28E17. */ 335 + w1_read_block(sl->master, w1_buf, 2); 336 + 337 + /* Check error conditions. */ 338 + error = w1_f19_error(sl, w1_buf); 339 + if (error < 0) 340 + return error; 341 + 342 + /* Read received I2C data from DS28E17. */ 343 + return w1_read_block(sl->master, rbuffer, rcount); 344 + } 345 + 346 + 347 + /* Do an I2C master transfer. 
*/ 348 + static int w1_f19_i2c_master_transfer(struct i2c_adapter *adapter, 349 + struct i2c_msg *msgs, int num) 350 + { 351 + struct w1_slave *sl = (struct w1_slave *) adapter->algo_data; 352 + int i = 0; 353 + int result = 0; 354 + 355 + /* Start onewire transaction. */ 356 + mutex_lock(&sl->master->bus_mutex); 357 + 358 + /* Select DS28E17. */ 359 + if (w1_reset_select_slave(sl)) { 360 + i = -EIO; 361 + goto error; 362 + } 363 + 364 + /* Loop while there are still messages to transfer. */ 365 + while (i < num) { 366 + /* 367 + * Check for special case: Small write followed 368 + * by read to same I2C device. 369 + */ 370 + if (i < (num-1) 371 + && msgs[i].addr == msgs[i+1].addr 372 + && !(msgs[i].flags & I2C_M_RD) 373 + && (msgs[i+1].flags & I2C_M_RD) 374 + && (msgs[i].len <= W1_F19_WRITE_DATA_LIMIT)) { 375 + /* 376 + * The DS28E17 has a combined transfer 377 + * for small write+read. 378 + */ 379 + result = w1_f19_i2c_write_read(sl, msgs[i].addr, 380 + msgs[i].buf, msgs[i].len, 381 + msgs[i+1].buf, msgs[i+1].len); 382 + if (result < 0) { 383 + i = result; 384 + goto error; 385 + } 386 + 387 + /* 388 + * Check if we should interpret the read data 389 + * as a length byte. The DS28E17 unfortunately 390 + * has no read without stop, so we can just do 391 + * another simple read in that case. 392 + */ 393 + if (msgs[i+1].flags & I2C_M_RECV_LEN) { 394 + result = w1_f19_i2c_read(sl, msgs[i+1].addr, 395 + &(msgs[i+1].buf[1]), msgs[i+1].buf[0]); 396 + if (result < 0) { 397 + i = result; 398 + goto error; 399 + } 400 + } 401 + 402 + /* Eat up read message, too. */ 403 + i++; 404 + } else if (msgs[i].flags & I2C_M_RD) { 405 + /* Read transfer. */ 406 + result = w1_f19_i2c_read(sl, msgs[i].addr, 407 + msgs[i].buf, msgs[i].len); 408 + if (result < 0) { 409 + i = result; 410 + goto error; 411 + } 412 + 413 + /* 414 + * Check if we should interpret the read data 415 + * as a length byte. 
The DS28E17 unfortunately 416 + * has no read without stop, so we can just do 417 + * another simple read in that case. 418 + */ 419 + if (msgs[i].flags & I2C_M_RECV_LEN) { 420 + result = w1_f19_i2c_read(sl, 421 + msgs[i].addr, 422 + &(msgs[i].buf[1]), 423 + msgs[i].buf[0]); 424 + if (result < 0) { 425 + i = result; 426 + goto error; 427 + } 428 + } 429 + } else { 430 + /* 431 + * Write transfer. 432 + * Stop condition only for last 433 + * transfer. 434 + */ 435 + result = w1_f19_i2c_write(sl, 436 + msgs[i].addr, 437 + msgs[i].buf, 438 + msgs[i].len, 439 + i == (num-1)); 440 + if (result < 0) { 441 + i = result; 442 + goto error; 443 + } 444 + } 445 + 446 + /* Next message. */ 447 + i++; 448 + 449 + /* Are there still messages to send/receive? */ 450 + if (i < num) { 451 + /* Yes. Resume to same DS28E17. */ 452 + if (w1_reset_resume_command(sl->master)) { 453 + i = -EIO; 454 + goto error; 455 + } 456 + } 457 + } 458 + 459 + error: 460 + /* End onewire transaction. */ 461 + mutex_unlock(&sl->master->bus_mutex); 462 + 463 + /* Return number of messages processed or error. */ 464 + return i; 465 + } 466 + 467 + 468 + /* Get I2C adapter functionality. */ 469 + static u32 w1_f19_i2c_functionality(struct i2c_adapter *adapter) 470 + { 471 + /* 472 + * Plain I2C functions only. 473 + * SMBus is emulated by the kernel's I2C layer. 474 + * No "I2C_FUNC_SMBUS_QUICK" 475 + * No "I2C_FUNC_SMBUS_READ_BLOCK_DATA" 476 + * No "I2C_FUNC_SMBUS_BLOCK_PROC_CALL" 477 + */ 478 + return I2C_FUNC_I2C | 479 + I2C_FUNC_SMBUS_BYTE | 480 + I2C_FUNC_SMBUS_BYTE_DATA | 481 + I2C_FUNC_SMBUS_WORD_DATA | 482 + I2C_FUNC_SMBUS_PROC_CALL | 483 + I2C_FUNC_SMBUS_WRITE_BLOCK_DATA | 484 + I2C_FUNC_SMBUS_I2C_BLOCK | 485 + I2C_FUNC_SMBUS_PEC; 486 + } 487 + 488 + 489 + /* I2C adapter quirks. */ 490 + static const struct i2c_adapter_quirks w1_f19_i2c_adapter_quirks = { 491 + .max_read_len = W1_F19_READ_DATA_LIMIT, 492 + }; 493 + 494 + /* I2C algorithm. 
*/ 495 + static const struct i2c_algorithm w1_f19_i2c_algorithm = { 496 + .master_xfer = w1_f19_i2c_master_transfer, 497 + .functionality = w1_f19_i2c_functionality, 498 + }; 499 + 500 + 501 + /* Read I2C speed from DS28E17. */ 502 + static int w1_f19_get_i2c_speed(struct w1_slave *sl) 503 + { 504 + struct w1_f19_data *data = sl->family_data; 505 + int result = -EIO; 506 + 507 + /* Start onewire transaction. */ 508 + mutex_lock(&sl->master->bus_mutex); 509 + 510 + /* Select slave. */ 511 + if (w1_reset_select_slave(sl)) 512 + goto error; 513 + 514 + /* Read slave configuration byte. */ 515 + w1_write_8(sl->master, W1_F19_READ_CONFIGURATION); 516 + result = w1_read_8(sl->master); 517 + if (result < 0 || result > 2) { 518 + result = -EIO; 519 + goto error; 520 + } 521 + 522 + /* Update speed in slave specific data. */ 523 + data->speed = result; 524 + 525 + error: 526 + /* End onewire transaction. */ 527 + mutex_unlock(&sl->master->bus_mutex); 528 + 529 + return result; 530 + } 531 + 532 + 533 + /* Set I2C speed on DS28E17. */ 534 + static int __w1_f19_set_i2c_speed(struct w1_slave *sl, u8 speed) 535 + { 536 + struct w1_f19_data *data = sl->family_data; 537 + const int i2c_speeds[3] = { 100, 400, 900 }; 538 + u8 w1_buf[2]; 539 + 540 + /* Select slave. */ 541 + if (w1_reset_select_slave(sl)) 542 + return -EIO; 543 + 544 + w1_buf[0] = W1_F19_WRITE_CONFIGURATION; 545 + w1_buf[1] = speed; 546 + w1_write_block(sl->master, w1_buf, 2); 547 + 548 + /* Update speed in slave specific data. */ 549 + data->speed = speed; 550 + 551 + dev_info(&sl->dev, "i2c speed set to %d kBaud\n", i2c_speeds[speed]); 552 + 553 + return 0; 554 + } 555 + 556 + static int w1_f19_set_i2c_speed(struct w1_slave *sl, u8 speed) 557 + { 558 + int result; 559 + 560 + /* Start onewire transaction. */ 561 + mutex_lock(&sl->master->bus_mutex); 562 + 563 + /* Set I2C speed on DS28E17. */ 564 + result = __w1_f19_set_i2c_speed(sl, speed); 565 + 566 + /* End onewire transaction. 
*/ 567 + mutex_unlock(&sl->master->bus_mutex); 568 + 569 + return result; 570 + } 571 + 572 + 573 + /* Sysfs attributes. */ 574 + 575 + /* I2C speed attribute for a single chip. */ 576 + static ssize_t speed_show(struct device *dev, struct device_attribute *attr, 577 + char *buf) 578 + { 579 + struct w1_slave *sl = dev_to_w1_slave(dev); 580 + int result; 581 + 582 + /* Read current speed from slave. Updates data->speed. */ 583 + result = w1_f19_get_i2c_speed(sl); 584 + if (result < 0) 585 + return result; 586 + 587 + /* Return current speed value. */ 588 + return sprintf(buf, "%d\n", result); 589 + } 590 + 591 + static ssize_t speed_store(struct device *dev, struct device_attribute *attr, 592 + const char *buf, size_t count) 593 + { 594 + struct w1_slave *sl = dev_to_w1_slave(dev); 595 + int error; 596 + 597 + /* Valid values are: "100", "400", "900" */ 598 + if (count < 3 || count > 4 || !buf) 599 + return -EINVAL; 600 + if (count == 4 && buf[3] != '\n') 601 + return -EINVAL; 602 + if (buf[1] != '0' || buf[2] != '0') 603 + return -EINVAL; 604 + 605 + /* Set speed on slave. */ 606 + switch (buf[0]) { 607 + case '1': 608 + error = w1_f19_set_i2c_speed(sl, 0); 609 + break; 610 + case '4': 611 + error = w1_f19_set_i2c_speed(sl, 1); 612 + break; 613 + case '9': 614 + error = w1_f19_set_i2c_speed(sl, 2); 615 + break; 616 + default: 617 + return -EINVAL; 618 + } 619 + 620 + if (error < 0) 621 + return error; 622 + 623 + /* Return bytes written. */ 624 + return count; 625 + } 626 + 627 + static DEVICE_ATTR_RW(speed); 628 + 629 + 630 + /* Busy stretch attribute for a single chip. */ 631 + static ssize_t stretch_show(struct device *dev, struct device_attribute *attr, 632 + char *buf) 633 + { 634 + struct w1_slave *sl = dev_to_w1_slave(dev); 635 + struct w1_f19_data *data = sl->family_data; 636 + 637 + /* Return current stretch value. 
*/ 638 + return sprintf(buf, "%d\n", data->stretch); 639 + } 640 + 641 + static ssize_t stretch_store(struct device *dev, struct device_attribute *attr, 642 + const char *buf, size_t count) 643 + { 644 + struct w1_slave *sl = dev_to_w1_slave(dev); 645 + struct w1_f19_data *data = sl->family_data; 646 + 647 + /* Valid values are '1' to '9' */ 648 + if (count < 1 || count > 2 || !buf) 649 + return -EINVAL; 650 + if (count == 2 && buf[1] != '\n') 651 + return -EINVAL; 652 + if (buf[0] < '1' || buf[0] > '9') 653 + return -EINVAL; 654 + 655 + /* Set busy stretch value. */ 656 + data->stretch = buf[0] & 0x0F; 657 + 658 + /* Return bytes written. */ 659 + return count; 660 + } 661 + 662 + static DEVICE_ATTR_RW(stretch); 663 + 664 + 665 + /* All attributes. */ 666 + static struct attribute *w1_f19_attrs[] = { 667 + &dev_attr_speed.attr, 668 + &dev_attr_stretch.attr, 669 + NULL, 670 + }; 671 + 672 + static const struct attribute_group w1_f19_group = { 673 + .attrs = w1_f19_attrs, 674 + }; 675 + 676 + static const struct attribute_group *w1_f19_groups[] = { 677 + &w1_f19_group, 678 + NULL, 679 + }; 680 + 681 + 682 + /* Slave add and remove functions. */ 683 + static int w1_f19_add_slave(struct w1_slave *sl) 684 + { 685 + struct w1_f19_data *data = NULL; 686 + 687 + /* Allocate memory for slave specific data. */ 688 + data = devm_kzalloc(&sl->dev, sizeof(*data), GFP_KERNEL); 689 + if (!data) 690 + return -ENOMEM; 691 + sl->family_data = data; 692 + 693 + /* Setup default I2C speed on slave. */ 694 + switch (i2c_speed) { 695 + case 100: 696 + __w1_f19_set_i2c_speed(sl, 0); 697 + break; 698 + case 400: 699 + __w1_f19_set_i2c_speed(sl, 1); 700 + break; 701 + case 900: 702 + __w1_f19_set_i2c_speed(sl, 2); 703 + break; 704 + default: 705 + /* 706 + * An i2c_speed module parameter other than 707 + * 100, 400, 900 means not to touch the 708 + * speed of the DS28E17. 709 + * We assume 400kBaud, the power-on value.
710 + */ 711 + data->speed = 1; 712 + } 713 + 714 + /* 715 + * Setup default busy stretch 716 + * configuration for the DS28E17. 717 + */ 718 + data->stretch = i2c_stretch; 719 + 720 + /* Setup I2C adapter. */ 721 + data->adapter.owner = THIS_MODULE; 722 + data->adapter.algo = &w1_f19_i2c_algorithm; 723 + data->adapter.algo_data = sl; 724 + strcpy(data->adapter.name, "w1-"); 725 + strcat(data->adapter.name, sl->name); 726 + data->adapter.dev.parent = &sl->dev; 727 + data->adapter.quirks = &w1_f19_i2c_adapter_quirks; 728 + 729 + return i2c_add_adapter(&data->adapter); 730 + } 731 + 732 + static void w1_f19_remove_slave(struct w1_slave *sl) 733 + { 734 + struct w1_f19_data *family_data = sl->family_data; 735 + 736 + /* Delete I2C adapter. */ 737 + i2c_del_adapter(&family_data->adapter); 738 + 739 + /* Free slave specific data. */ 740 + devm_kfree(&sl->dev, family_data); 741 + sl->family_data = NULL; 742 + } 743 + 744 + 745 + /* Declarations within the w1 subsystem. */ 746 + static struct w1_family_ops w1_f19_fops = { 747 + .add_slave = w1_f19_add_slave, 748 + .remove_slave = w1_f19_remove_slave, 749 + .groups = w1_f19_groups, 750 + }; 751 + 752 + static struct w1_family w1_family_19 = { 753 + .fid = W1_FAMILY_DS28E17, 754 + .fops = &w1_f19_fops, 755 + }; 756 + 757 + 758 + /* Module init and remove functions. */ 759 + static int __init w1_f19_init(void) 760 + { 761 + return w1_register_family(&w1_family_19); 762 + } 763 + 764 + static void __exit w1_f19_fini(void) 765 + { 766 + w1_unregister_family(&w1_family_19); 767 + } 768 + 769 + module_init(w1_f19_init); 770 + module_exit(w1_f19_fini); 771 +
+31 -28
drivers/w1/slaves/w1_therm.c
···
268 268 	int ret, max_trying = 10;
269 269 	u8 *family_data = sl->family_data;
270 270 
271     -	ret = mutex_lock_interruptible(&dev->bus_mutex);
272     -	if (ret != 0)
273     -		goto post_unlock;
274     -
275 271 	if (!sl->family_data) {
276 272 		ret = -ENODEV;
277     -		goto pre_unlock;
    273 +		goto error;
278 274 	}
279 275 
280 276 	/* prevent the slave from going away in sleep */
281 277 	atomic_inc(THERM_REFCNT(family_data));
    278 +
    279 +	ret = mutex_lock_interruptible(&dev->bus_mutex);
    280 +	if (ret != 0)
    281 +		goto dec_refcnt;
    282 +
282 283 	memset(rom, 0, sizeof(rom));
283 284 
284 285 	while (max_trying--) {
···
307 306 			sleep_rem = msleep_interruptible(tm);
308 307 			if (sleep_rem != 0) {
309 308 				ret = -EINTR;
310     -				goto post_unlock;
    309 +				goto dec_refcnt;
311 310 			}
312 311 
313 312 			ret = mutex_lock_interruptible(&dev->bus_mutex);
314 313 			if (ret != 0)
315     -				goto post_unlock;
    314 +				goto dec_refcnt;
316 315 		} else if (!w1_strong_pullup) {
317 316 			sleep_rem = msleep_interruptible(tm);
318 317 			if (sleep_rem != 0) {
319 318 				ret = -EINTR;
320     -				goto pre_unlock;
    319 +				goto mt_unlock;
321 320 			}
322 321 		}
···
325 324 		}
326 325 	}
327 326 
328     -pre_unlock:
    327 +mt_unlock:
329 328 	mutex_unlock(&dev->bus_mutex);
330     -
331     -post_unlock:
    329 +dec_refcnt:
332 330 	atomic_dec(THERM_REFCNT(family_data));
    331 +error:
333 332 	return ret;
334 333 }
335 334 
···
351 350 
352 351 	if (val > 12 || val < 9) {
353 352 		pr_warn("Unsupported precision\n");
354     -		return -1;
    353 +		ret = -EINVAL;
    354 +		goto error;
355 355 	}
356     -
357     -	ret = mutex_lock_interruptible(&dev->bus_mutex);
358     -	if (ret != 0)
359     -		goto post_unlock;
360 356 
361 357 	if (!sl->family_data) {
362 358 		ret = -ENODEV;
363     -		goto pre_unlock;
    359 +		goto error;
364 360 	}
365 361 
366 362 	/* prevent the slave from going away in sleep */
367 363 	atomic_inc(THERM_REFCNT(family_data));
    364 +
    365 +	ret = mutex_lock_interruptible(&dev->bus_mutex);
    366 +	if (ret != 0)
    367 +		goto dec_refcnt;
    368 +
368 369 	memset(rom, 0, sizeof(rom));
369 370 
370 371 	/* translate precision to bitmask (see datasheet page 9) */
···
414 411 		}
415 412 	}
416 413 
417     -pre_unlock:
418 414 	mutex_unlock(&dev->bus_mutex);
419     -
420     -post_unlock:
    415 +dec_refcnt:
421 416 	atomic_dec(THERM_REFCNT(family_data));
    417 +error:
422 418 	return ret;
423 419 }
424 420 
···
492 490 	int ret, max_trying = 10;
493 491 	u8 *family_data = sl->family_data;
494 492 
495     -	ret = mutex_lock_interruptible(&dev->bus_mutex);
496     -	if (ret != 0)
497     -		goto error;
498     -
499 493 	if (!family_data) {
500 494 		ret = -ENODEV;
501     -		goto mt_unlock;
    495 +		goto error;
502 496 	}
503 497 
504 498 	/* prevent the slave from going away in sleep */
505 499 	atomic_inc(THERM_REFCNT(family_data));
    500 +
    501 +	ret = mutex_lock_interruptible(&dev->bus_mutex);
    502 +	if (ret != 0)
    503 +		goto dec_refcnt;
    504 +
506 505 	memset(info->rom, 0, sizeof(info->rom));
507 506 
508 507 	while (max_trying--) {
···
545 542 			sleep_rem = msleep_interruptible(tm);
546 543 			if (sleep_rem != 0) {
547 544 				ret = -EINTR;
548     -				goto dec_refcnt;
    545 +				goto mt_unlock;
549 546 			}
550 547 
551 548 
···
570 567 			break;
571 568 	}
572 569 
573     -dec_refcnt:
574     -	atomic_dec(THERM_REFCNT(family_data));
575 570 mt_unlock:
576 571 	mutex_unlock(&dev->bus_mutex);
    572 +dec_refcnt:
    573 +	atomic_dec(THERM_REFCNT(family_data));
577 574 error:
578 575 	return ret;
579 576 }
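The ordering the w1_therm patch enforces in all three functions above — validate `family_data` first, take the reference count, then take the bus mutex, and unwind in exactly the reverse order through the `mt_unlock`/`dec_refcnt`/`error` labels — can be sketched in plain userspace C. Names such as `do_conversion` and the numeric error codes below are illustrative stand-ins, not the real w1_therm symbols:

```c
#include <pthread.h>

/* Illustrative stand-ins for the driver state; these are not the
 * real w1_therm structures. */
static int refcnt;                        /* models THERM_REFCNT(family_data) */
static pthread_mutex_t bus_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors the corrected ordering from the patch: check for family
 * data first, take the reference, then the bus lock, and release in
 * reverse order via the goto labels. */
int do_conversion(int have_family_data, int lock_interrupted)
{
	int ret = 0;

	if (!have_family_data) {
		ret = -19;                /* stands in for -ENODEV */
		goto error;
	}

	refcnt++;                         /* atomic_inc(THERM_REFCNT(...)) */

	if (lock_interrupted) {           /* mutex_lock_interruptible() failed */
		ret = -4;                 /* stands in for -EINTR */
		goto dec_refcnt;
	}
	pthread_mutex_lock(&bus_mutex);

	/* ... the actual 1-Wire bus I/O would happen here ... */

	/* mt_unlock: */
	pthread_mutex_unlock(&bus_mutex);
dec_refcnt:
	refcnt--;                         /* atomic_dec(THERM_REFCNT(...)) */
error:
	return ret;
}
```

The point of the reordering is that every exit path that incremented the refcount also decrements it, and the mutex is never released without having been taken — which is the imbalance the "keep balance of mutex locks and refcnts" commit fixes.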
+2 -1
drivers/w1/w1_io.c
···
58 58  * @dev: the master device
59 59  * @bit: 0 - write a 0, 1 - write a 0 read the level
60 60  */
61     -static u8 w1_touch_bit(struct w1_master *dev, int bit)
    61 +u8 w1_touch_bit(struct w1_master *dev, int bit)
62 62 {
63 63 	if (dev->bus_master->touch_bit)
64 64 		return dev->bus_master->touch_bit(dev->bus_master->data, bit);
···
69 69 		return 0;
70 70 	}
71 71 }
    72 +EXPORT_SYMBOL_GPL(w1_touch_bit);
72 73 
73 74 /**
74 75  * w1_write_bit() - Generates a write-0 or write-1 cycle.
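`w1_touch_bit` is the bit-slot primitive the rest of `w1_io.c` is built on; exporting it lets slave drivers drive single read/write slots directly instead of going through whole-byte helpers. A minimal userspace model of the idea (the `bus_model` type and `read_8` helper are hypothetical; only the touch-bit semantics and the LSB-first composition mirror the kernel):

```c
#include <stdint.h>

/* Hypothetical stand-in for a w1 bus master exposing touch_bit. */
struct bus_model {
	uint8_t reg;   /* pretend slave shift register, shifted out LSB first */
	int pos;
};

/* touch_bit semantics: bit == 0 drives a write-0 slot and returns 0;
 * bit == 1 releases the line and samples it (a read slot). */
static uint8_t touch_bit(struct bus_model *m, int bit)
{
	if (!bit)
		return 0;                      /* write-0 slot */
	return (m->reg >> m->pos++) & 1;       /* read slot */
}

/* A byte read composed of eight read slots, LSB first, the same way
 * w1_read_8() is layered on top of w1_touch_bit() in the kernel. */
static uint8_t read_8(struct bus_model *m)
{
	uint8_t v = 0;
	for (int i = 0; i < 8; i++)
		v |= (uint8_t)(touch_bit(m, 1) << i);
	return v;
}
```

With the symbol exported (and the prototype added to `include/linux/w1.h` below), slave code can issue exactly the slots a device's protocol needs.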
+10
include/linux/hyperv.h
···
719  719  
720  720  	struct vmbus_close_msg close_msg;
721  721  
     722  +	/* Statistics */
     723  +	u64 interrupts;	/* Host to Guest interrupts */
     724  +	u64 sig_events;	/* Guest to Host events */
     725  +
722  726  	/* Channel callback's invoked in softirq context */
723  727  	struct tasklet_struct callback_event;
724  728  	void (*onchannel_callback)(void *context);
···
831  827  	 * gone through grace period.
832  828  	 */
833  829  	struct rcu_head rcu;
     830  +
     831  +	/*
     832  +	 * For sysfs per-channel properties.
     833  +	 */
     834  +	struct kobject kobj;
834  835  
835  836  	/*
836  837  	 * For performance critical channels (storage, networking
···
1098 1089 	struct device device;
1099 1090 
1100 1091 	struct vmbus_channel *channel;
     1092 +	struct kset *channels_kset;
1101 1093 };
1102 1094 
1103 1095 
+1
include/linux/w1.h
···
293 293 		w1_unregister_family)
294 294 
295 295 u8 w1_triplet(struct w1_master *dev, int bdir);
    296 +u8 w1_touch_bit(struct w1_master *dev, int bit);
296 297 void w1_write_8(struct w1_master *, u8);
297 298 u8 w1_read_8(struct w1_master *);
298 299 int w1_reset_bus(struct w1_master *);