Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/usb/r8152.c
net/netfilter/nfnetlink.c

Both r8152 and nfnetlink conflicts were simple overlapping changes.

Signed-off-by: David S. Miller <davem@davemloft.net>

+1539 -950
+3 -3
Documentation/cgroups/cpusets.txt
··· 345 345 The implementation is simple. 346 346 347 347 Setting the flag 'cpuset.memory_spread_page' turns on a per-process flag 348 - PF_SPREAD_PAGE for each task that is in that cpuset or subsequently 348 + PFA_SPREAD_PAGE for each task that is in that cpuset or subsequently 349 349 joins that cpuset. The page allocation calls for the page cache 350 - is modified to perform an inline check for this PF_SPREAD_PAGE task 350 + is modified to perform an inline check for this PFA_SPREAD_PAGE task 351 351 flag, and if set, a call to a new routine cpuset_mem_spread_node() 352 352 returns the node to prefer for the allocation. 353 353 354 354 Similarly, setting 'cpuset.memory_spread_slab' turns on the flag 355 - PF_SPREAD_SLAB, and appropriately marked slab caches will allocate 355 + PFA_SPREAD_SLAB, and appropriately marked slab caches will allocate 356 356 pages from the node returned by cpuset_mem_spread_node(). 357 357 358 358 The cpuset_mem_spread_node() routine is also simple. It uses the
+13 -2
Documentation/devicetree/bindings/staging/imx-drm/ldb.txt
··· 56 56 - fsl,data-width : should be <18> or <24> 57 57 - port: A port node with endpoint definitions as defined in 58 58 Documentation/devicetree/bindings/media/video-interfaces.txt. 59 + On i.MX5, the internal two-input-multiplexer is used. 60 + Due to hardware limitations, only one port (port@[0,1]) 61 + can be used for each channel (lvds-channel@[0,1], respectively) 59 62 On i.MX6, there should be four ports (port@[0-3]) that correspond 60 63 to the four LVDS multiplexer inputs. 61 64 ··· 81 78 "di0", "di1"; 82 79 83 80 lvds-channel@0 { 81 + #address-cells = <1>; 82 + #size-cells = <0>; 84 83 reg = <0>; 85 84 fsl,data-mapping = "spwg"; 86 85 fsl,data-width = <24>; ··· 91 86 /* ... */ 92 87 }; 93 88 94 - port { 89 + port@0 { 90 + reg = <0>; 91 + 95 92 lvds0_in: endpoint { 96 93 remote-endpoint = <&ipu_di0_lvds0>; 97 94 }; ··· 101 94 }; 102 95 103 96 lvds-channel@1 { 97 + #address-cells = <1>; 98 + #size-cells = <0>; 104 99 reg = <1>; 105 100 fsl,data-mapping = "spwg"; 106 101 fsl,data-width = <24>; ··· 111 102 /* ... */ 112 103 }; 113 104 114 - port { 105 + port@1 { 106 + reg = <1>; 107 + 115 108 lvds1_in: endpoint { 116 109 remote-endpoint = <&ipu_di1_lvds1>; 117 110 };
+211
Documentation/devicetree/of_selftest.txt
··· 1 + Open Firmware Device Tree Selftest 2 + ---------------------------------- 3 + 4 + Author: Gaurav Minocha <gaurav.minocha.os@gmail.com> 5 + 6 + 1. Introduction 7 + 8 + This document explains how the test data required for executing OF selftest 9 + is attached to the live tree dynamically, independent of the machine's 10 + architecture. 11 + 12 + It is recommended to read the following documents before moving ahead. 13 + 14 + [1] Documentation/devicetree/usage-model.txt 15 + [2] http://www.devicetree.org/Device_Tree_Usage 16 + 17 + OF Selftest has been designed to test the interface (include/linux/of.h) 18 + provided to device driver developers to fetch device information etc. 19 + from the unflattened device tree data structure. This interface is used by 20 + most of the device drivers in various use cases. 21 + 22 + 23 + 2. Test-data 24 + 25 + The Device Tree Source file (drivers/of/testcase-data/testcases.dts) contains 26 + the test data required for executing the unit tests automated in 27 + drivers/of/selftests.c. Currently, the following Device Tree Source Include 28 + files (.dtsi) are included in testcases.dts: 29 + 30 + drivers/of/testcase-data/tests-interrupts.dtsi 31 + drivers/of/testcase-data/tests-platform.dtsi 32 + drivers/of/testcase-data/tests-phandle.dtsi 33 + drivers/of/testcase-data/tests-match.dtsi 34 + 35 + When the kernel is built with OF_SELFTEST enabled, the following make rule 36 + 37 + $(obj)/%.dtb: $(src)/%.dts FORCE 38 + $(call if_changed_dep, dtc) 39 + 40 + is used to compile the DT source file (testcases.dts) into a binary blob 41 + (testcases.dtb), also referred to as a flattened DT. 42 + 43 + After that, using the following rule the binary blob above is wrapped as an 44 + assembly file (testcases.dtb.S). 45 + 46 + $(obj)/%.dtb.S: $(obj)/%.dtb 47 + $(call cmd, dt_S_dtb) 48 + 49 + The assembly file is compiled into an object file (testcases.dtb.o), and is 50 + linked into the kernel image. 51 + 52 + 53 + 2.1.
Adding the test data 54 + 55 + Un-flattened device tree structure: 56 + 57 + Un-flattened device tree consists of connected device_node(s) in form of a tree 58 + structure described below. 59 + 60 + // following struct members are used to construct the tree 61 + struct device_node { 62 + ... 63 + struct device_node *parent; 64 + struct device_node *child; 65 + struct device_node *sibling; 66 + struct device_node *allnext; /* next in list of all nodes */ 67 + ... 68 + }; 69 + 70 + Figure 1, describes a generic structure of machine’s un-flattened device tree 71 + considering only child and sibling pointers. There exists another pointer, 72 + *parent, that is used to traverse the tree in the reverse direction. So, at 73 + a particular level the child node and all the sibling nodes will have a parent 74 + pointer pointing to a common node (e.g. child1, sibling2, sibling3, sibling4’s 75 + parent points to root node) 76 + 77 + root (‘/’) 78 + | 79 + child1 -> sibling2 -> sibling3 -> sibling4 -> null 80 + | | | | 81 + | | | null 82 + | | | 83 + | | child31 -> sibling32 -> null 84 + | | | | 85 + | | null null 86 + | | 87 + | child21 -> sibling22 -> sibling23 -> null 88 + | | | | 89 + | null null null 90 + | 91 + child11 -> sibling12 -> sibling13 -> sibling14 -> null 92 + | | | | 93 + | | | null 94 + | | | 95 + null null child131 -> null 96 + | 97 + null 98 + 99 + Figure 1: Generic structure of un-flattened device tree 100 + 101 + 102 + *allnext: it is used to link all the nodes of DT into a list. So, for the 103 + above tree the list would be as follows: 104 + 105 + root->child1->child11->sibling12->sibling13->child131->sibling14->sibling2-> 106 + child21->sibling22->sibling23->sibling3->child31->sibling32->sibling4->null 107 + 108 + Before executing OF selftest, it is required to attach the test data to 109 + machine's device tree (if present). 
So, when selftest_data_add() is called, 110 + at first it reads the flattened device tree data linked into the kernel image 111 + via the following kernel symbols: 112 + 113 + __dtb_testcases_begin - address marking the start of test data blob 114 + __dtb_testcases_end - address marking the end of test data blob 115 + 116 + Secondly, it calls of_fdt_unflatten_device_tree() to unflatten the flattened 117 + blob. And finally, if the machine’s device tree (i.e live tree) is present, 118 + then it attaches the unflattened test data tree to the live tree, else it 119 + attaches itself as a live device tree. 120 + 121 + attach_node_and_children() uses of_attach_node() to attach the nodes into the 122 + live tree as explained below. To explain the same, the test data tree described 123 + in Figure 2 is attached to the live tree described in Figure 1. 124 + 125 + root (‘/’) 126 + | 127 + testcase-data 128 + | 129 + test-child0 -> test-sibling1 -> test-sibling2 -> test-sibling3 -> null 130 + | | | | 131 + test-child01 null null null 132 + 133 + 134 + allnext list: 135 + 136 + root->testcase-data->test-child0->test-child01->test-sibling1->test-sibling2 137 + ->test-sibling3->null 138 + 139 + Figure 2: Example test data tree to be attached to live tree. 140 + 141 + According to the scenario above, the live tree is already present so it isn’t 142 + required to attach the root(‘/’) node. All other nodes are attached by calling 143 + of_attach_node() on each node. 144 + 145 + In the function of_attach_node(), the new node is attached as the child of the 146 + given parent in live tree. But, if parent already has a child then the new node 147 + replaces the current child and turns it into its sibling. So, when the testcase 148 + data node is attached to the live tree above (Figure 1), the final structure is 149 + as shown in Figure 3. 150 + 151 + root (‘/’) 152 + | 153 + testcase-data -> child1 -> sibling2 -> sibling3 -> sibling4 -> null 154 + | | | | | 155 + (...) 
| | | null 156 + | | child31 -> sibling32 -> null 157 + | | | | 158 + | | null null 159 + | | 160 + | child21 -> sibling22 -> sibling23 -> null 161 + | | | | 162 + | null null null 163 + | 164 + child11 -> sibling12 -> sibling13 -> sibling14 -> null 165 + | | | | 166 + null null | null 167 + | 168 + child131 -> null 169 + | 170 + null 171 + ----------------------------------------------------------------------- 172 + 173 + root (‘/’) 174 + | 175 + testcase-data -> child1 -> sibling2 -> sibling3 -> sibling4 -> null 176 + | | | | | 177 + | (...) (...) (...) null 178 + | 179 + test-sibling3 -> test-sibling2 -> test-sibling1 -> test-child0 -> null 180 + | | | | 181 + null null null test-child01 182 + 183 + 184 + Figure 3: Live device tree structure after attaching the testcase-data. 185 + 186 + 187 + Astute readers would have noticed that test-child0 node becomes the last 188 + sibling compared to the earlier structure (Figure 2). After attaching first 189 + test-child0 the test-sibling1 is attached that pushes the child node 190 + (i.e. test-child0) to become a sibling and makes itself a child node, 191 + as mentioned above. 192 + 193 + If a duplicate node is found (i.e. if a node with same full_name property is 194 + already present in the live tree), then the node isn’t attached rather its 195 + properties are updated to the live tree’s node by calling the function 196 + update_node_properties(). 197 + 198 + 199 + 2.2. Removing the test data 200 + 201 + Once the test case execution is complete, selftest_data_remove is called in 202 + order to remove the device nodes attached initially (first the leaf nodes are 203 + detached and then moving up the parent nodes are removed, and eventually the 204 + whole tree). selftest_data_remove() calls detach_node_and_children() that uses 205 + of_detach_node() to detach the nodes from the live device tree. 
206 + 207 + To detach a node, of_detach_node() first updates the allnext linked list by 208 + pointing the previous node’s allnext at the current node’s allnext. It then 209 + either updates the child pointer of the given node’s parent to point at the 210 + node’s sibling, or attaches the previous sibling to the given node’s sibling, 211 + as appropriate. That is it :)
+2 -4
MAINTAINERS
··· 2098 2098 F: drivers/scsi/bfa/ 2099 2099 2100 2100 BROCADE BNA 10 GIGABIT ETHERNET DRIVER 2101 - M: Rasesh Mody <rmody@brocade.com> 2101 + M: Rasesh Mody <rasesh.mody@qlogic.com> 2102 2102 L: netdev@vger.kernel.org 2103 2103 S: Supported 2104 2104 F: drivers/net/ethernet/brocade/bna/ ··· 3012 3012 F: drivers/acpi/dock.c 3013 3013 3014 3014 DOCUMENTATION 3015 - M: Randy Dunlap <rdunlap@infradead.org> 3015 + M: Jiri Kosina <jkosina@suse.cz> 3016 3016 L: linux-doc@vger.kernel.org 3017 - T: quilt http://www.infradead.org/~rdunlap/Doc/patches/ 3018 3017 S: Maintained 3019 3018 F: Documentation/ 3020 3019 X: Documentation/ABI/ ··· 4476 4477 L: linux-i2c@vger.kernel.org 4477 4478 L: linux-acpi@vger.kernel.org 4478 4479 S: Maintained 4479 - F: drivers/i2c/i2c-acpi.c 4480 4480 4481 4481 I2C-TAOS-EVM DRIVER 4482 4482 M: Jean Delvare <jdelvare@suse.de>
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 17 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc6 4 + EXTRAVERSION = -rc7 5 5 NAME = Shuffling Zombie Juror 6 6 7 7 # *DOCUMENTATION*
+12 -15
arch/arm/boot/dts/dra7-evm.dts
··· 447 447 gpmc,device-width = <2>; 448 448 gpmc,sync-clk-ps = <0>; 449 449 gpmc,cs-on-ns = <0>; 450 - gpmc,cs-rd-off-ns = <40>; 451 - gpmc,cs-wr-off-ns = <40>; 450 + gpmc,cs-rd-off-ns = <80>; 451 + gpmc,cs-wr-off-ns = <80>; 452 452 gpmc,adv-on-ns = <0>; 453 - gpmc,adv-rd-off-ns = <30>; 454 - gpmc,adv-wr-off-ns = <30>; 455 - gpmc,we-on-ns = <5>; 456 - gpmc,we-off-ns = <25>; 457 - gpmc,oe-on-ns = <2>; 458 - gpmc,oe-off-ns = <20>; 459 - gpmc,access-ns = <20>; 460 - gpmc,wr-access-ns = <40>; 461 - gpmc,rd-cycle-ns = <40>; 462 - gpmc,wr-cycle-ns = <40>; 463 - gpmc,wait-pin = <0>; 464 - gpmc,wait-on-read; 465 - gpmc,wait-on-write; 453 + gpmc,adv-rd-off-ns = <60>; 454 + gpmc,adv-wr-off-ns = <60>; 455 + gpmc,we-on-ns = <10>; 456 + gpmc,we-off-ns = <50>; 457 + gpmc,oe-on-ns = <4>; 458 + gpmc,oe-off-ns = <40>; 459 + gpmc,access-ns = <40>; 460 + gpmc,wr-access-ns = <80>; 461 + gpmc,rd-cycle-ns = <80>; 462 + gpmc,wr-cycle-ns = <80>; 466 463 gpmc,bus-turnaround-ns = <0>; 467 464 gpmc,cycle2cycle-delay-ns = <0>; 468 465 gpmc,clk-activation-ns = <0>;
+10 -2
arch/arm/boot/dts/imx53.dtsi
··· 423 423 status = "disabled"; 424 424 425 425 lvds-channel@0 { 426 + #address-cells = <1>; 427 + #size-cells = <0>; 426 428 reg = <0>; 427 429 status = "disabled"; 428 430 429 - port { 431 + port@0 { 432 + reg = <0>; 433 + 430 434 lvds0_in: endpoint { 431 435 remote-endpoint = <&ipu_di0_lvds0>; 432 436 }; ··· 438 434 }; 439 435 440 436 lvds-channel@1 { 437 + #address-cells = <1>; 438 + #size-cells = <0>; 441 439 reg = <1>; 442 440 status = "disabled"; 443 441 444 - port { 442 + port@1 { 443 + reg = <1>; 444 + 445 445 lvds1_in: endpoint { 446 446 remote-endpoint = <&ipu_di1_lvds1>; 447 447 };
+3 -3
arch/arm/boot/dts/k2e-clocks.dtsi
··· 40 40 #clock-cells = <0>; 41 41 compatible = "ti,keystone,psc-clock"; 42 42 clocks = <&chipclk16>; 43 - clock-output-names = "usb"; 43 + clock-output-names = "usb1"; 44 44 reg = <0x02350004 0xb00>, <0x02350000 0x400>; 45 45 reg-names = "control", "domain"; 46 46 domain-id = <0>; ··· 60 60 #clock-cells = <0>; 61 61 compatible = "ti,keystone,psc-clock"; 62 62 clocks = <&chipclk12>; 63 - clock-output-names = "pcie"; 64 - reg = <0x0235006c 0xb00>, <0x02350000 0x400>; 63 + clock-output-names = "pcie1"; 64 + reg = <0x0235006c 0xb00>, <0x02350048 0x400>; 65 65 reg-names = "control", "domain"; 66 66 domain-id = <18>; 67 67 };
+2 -3
arch/arm/boot/dts/omap5-cm-t54.dts
··· 353 353 }; 354 354 355 355 ldo8_reg: ldo8 { 356 - /* VDD_3v0: Does not go anywhere */ 356 + /* VDD_3V_GP: act led/serial console */ 357 357 regulator-name = "ldo8"; 358 358 regulator-min-microvolt = <3000000>; 359 359 regulator-max-microvolt = <3000000>; 360 + regulator-always-on; 360 361 regulator-boot-on; 361 - /* Unused */ 362 - status = "disabled"; 363 362 }; 364 363 365 364 ldo9_reg: ldo9 {
+1
arch/arm/include/asm/cacheflush.h
··· 466 466 */ 467 467 #define v7_exit_coherency_flush(level) \ 468 468 asm volatile( \ 469 + ".arch armv7-a \n\t" \ 469 470 "stmfd sp!, {fp, ip} \n\t" \ 470 471 "mrc p15, 0, r0, c1, c0, 0 @ get SCTLR \n\t" \ 471 472 "bic r0, r0, #"__stringify(CR_C)" \n\t" \
+2
arch/arm/include/asm/tls.h
··· 81 81 asm("mcr p15, 0, %0, c13, c0, 3" 82 82 : : "r" (val)); 83 83 } else { 84 + #ifdef CONFIG_KUSER_HELPERS 84 85 /* 85 86 * User space must never try to access this 86 87 * directly. Expect your app to break ··· 90 89 * entry-armv.S for details) 91 90 */ 92 91 *((unsigned int *)0xffff0ff0) = val; 92 + #endif 93 93 } 94 94 95 95 }
+9 -7
arch/arm/kernel/kprobes-test.c
··· 110 110 * 111 111 * @ TESTCASE_START 112 112 * bl __kprobes_test_case_start 113 - * @ start of inline data... 113 + * .pushsection .rodata 114 + * "10: 114 115 * .ascii "mov r0, r7" @ text title for test case 115 116 * .byte 0 116 - * .align 2, 0 117 + * .popsection 118 + * @ start of inline data... 119 + * .word 10b @ pointer to title in .rodata section 117 120 * 118 121 * @ TEST_ARG_REG 119 122 * .byte ARG_TYPE_REG ··· 974 971 __asm__ __volatile__ ( 975 972 "stmdb sp!, {r4-r11} \n\t" 976 973 "sub sp, sp, #"__stringify(TEST_MEMORY_SIZE)"\n\t" 977 - "bic r0, lr, #1 @ r0 = inline title string \n\t" 974 + "bic r0, lr, #1 @ r0 = inline data \n\t" 978 975 "mov r1, sp \n\t" 979 976 "bl kprobes_test_case_start \n\t" 980 977 "bx r0 \n\t" ··· 1352 1349 return pc + 4; 1353 1350 } 1354 1351 1355 - static uintptr_t __used kprobes_test_case_start(const char *title, void *stack) 1352 + static uintptr_t __used kprobes_test_case_start(const char **title, void *stack) 1356 1353 { 1357 1354 struct test_arg *args; 1358 1355 struct test_arg_end *end_arg; 1359 1356 unsigned long test_code; 1360 1357 1361 - args = (struct test_arg *)PTR_ALIGN(title + strlen(title) + 1, 4); 1362 - 1363 - current_title = title; 1358 + current_title = *title++; 1359 + args = (struct test_arg *)title; 1364 1360 current_args = args; 1365 1361 current_stack = stack; 1366 1362
+4 -1
arch/arm/kernel/kprobes-test.h
··· 111 111 #define TESTCASE_START(title) \ 112 112 __asm__ __volatile__ ( \ 113 113 "bl __kprobes_test_case_start \n\t" \ 114 + ".pushsection .rodata \n\t" \ 115 + "10: \n\t" \ 114 116 /* don't use .asciz here as 'title' may be */ \ 115 117 /* multiple strings to be concatenated. */ \ 116 118 ".ascii "#title" \n\t" \ 117 119 ".byte 0 \n\t" \ 118 - ".align 2, 0 \n\t" 120 + ".popsection \n\t" \ 121 + ".word 10b \n\t" 119 122 120 123 #define TEST_ARG_REG(reg, val) \ 121 124 ".byte "__stringify(ARG_TYPE_REG)" \n\t" \
+1 -5
arch/arm/mach-imx/clk-gate2.c
··· 97 97 struct clk_gate2 *gate = to_clk_gate2(hw); 98 98 99 99 if (gate->share_count) 100 - return !!(*gate->share_count); 100 + return !!__clk_get_enable_count(hw->clk); 101 101 else 102 102 return clk_gate2_reg_is_enabled(gate->reg, gate->bit_idx); 103 103 } ··· 127 127 gate->bit_idx = bit_idx; 128 128 gate->flags = clk_gate2_flags; 129 129 gate->lock = lock; 130 - 131 - /* Initialize share_count per hardware state */ 132 - if (share_count) 133 - *share_count = clk_gate2_reg_is_enabled(reg, bit_idx) ? 1 : 0; 134 130 gate->share_count = share_count; 135 131 136 132 init.name = name;
-3
arch/arm/mach-omap2/Kconfig
··· 1 1 menu "TI OMAP/AM/DM/DRA Family" 2 2 depends on ARCH_MULTI_V6 || ARCH_MULTI_V7 3 3 4 - config ARCH_OMAP 5 - bool 6 - 7 4 config ARCH_OMAP2 8 5 bool "TI OMAP2" 9 6 depends on ARCH_MULTI_V6
+1 -1
arch/arm/mach-omap2/omap_hwmod.c
··· 2065 2065 2066 2066 spin_lock_irqsave(&io_chain_lock, flags); 2067 2067 2068 - if (cpu_is_omap34xx() && omap3_has_io_chain_ctrl()) 2068 + if (cpu_is_omap34xx()) 2069 2069 omap3xxx_prm_reconfigure_io_chain(); 2070 2070 else if (cpu_is_omap44xx()) 2071 2071 omap44xx_prm_reconfigure_io_chain();
+35 -4
arch/arm/mach-omap2/prm3xxx.c
··· 45 45 .ocp_barrier = &omap3xxx_prm_ocp_barrier, 46 46 .save_and_clear_irqen = &omap3xxx_prm_save_and_clear_irqen, 47 47 .restore_irqen = &omap3xxx_prm_restore_irqen, 48 - .reconfigure_io_chain = &omap3xxx_prm_reconfigure_io_chain, 48 + .reconfigure_io_chain = NULL, 49 49 }; 50 50 51 51 /* ··· 369 369 } 370 370 371 371 /** 372 - * omap3xxx_prm_reconfigure_io_chain - clear latches and reconfigure I/O chain 372 + * omap3430_pre_es3_1_reconfigure_io_chain - restart wake-up daisy chain 373 + * 374 + * The ST_IO_CHAIN bit does not exist in 3430 before es3.1. The only 375 + * thing we can do is toggle EN_IO bit for earlier omaps. 376 + */ 377 + void omap3430_pre_es3_1_reconfigure_io_chain(void) 378 + { 379 + omap2_prm_clear_mod_reg_bits(OMAP3430_EN_IO_MASK, WKUP_MOD, 380 + PM_WKEN); 381 + omap2_prm_set_mod_reg_bits(OMAP3430_EN_IO_MASK, WKUP_MOD, 382 + PM_WKEN); 383 + omap2_prm_read_mod_reg(WKUP_MOD, PM_WKEN); 384 + } 385 + 386 + /** 387 + * omap3_prm_reconfigure_io_chain - clear latches and reconfigure I/O chain 373 388 * 374 389 * Clear any previously-latched I/O wakeup events and ensure that the 375 390 * I/O wakeup gates are aligned with the current mux settings. Works 376 391 * by asserting WUCLKIN, waiting for WUCLKOUT to be asserted, and then 377 392 * deasserting WUCLKIN and clearing the ST_IO_CHAIN WKST bit. No 378 - * return value. 393 + * return value. These registers are only available in 3430 es3.1 and later. 
379 394 */ 380 - void omap3xxx_prm_reconfigure_io_chain(void) 395 + void omap3_prm_reconfigure_io_chain(void) 381 396 { 382 397 int i = 0; 383 398 ··· 412 397 PM_WKST); 413 398 414 399 omap2_prm_read_mod_reg(WKUP_MOD, PM_WKST); 400 + } 401 + 402 + /** 403 + * omap3xxx_prm_reconfigure_io_chain - reconfigure I/O chain 404 + */ 405 + void omap3xxx_prm_reconfigure_io_chain(void) 406 + { 407 + if (omap3_prcm_irq_setup.reconfigure_io_chain) 408 + omap3_prcm_irq_setup.reconfigure_io_chain(); 415 409 } 416 410 417 411 /** ··· 679 655 680 656 if (!(prm_features & PRM_HAS_IO_WAKEUP)) 681 657 return 0; 658 + 659 + if (omap3_has_io_chain_ctrl()) 660 + omap3_prcm_irq_setup.reconfigure_io_chain = 661 + omap3_prm_reconfigure_io_chain; 662 + else 663 + omap3_prcm_irq_setup.reconfigure_io_chain = 664 + omap3430_pre_es3_1_reconfigure_io_chain; 682 665 683 666 omap3xxx_prm_enable_io_wakeup(); 684 667 ret = omap_prcm_register_chain_handler(&omap3_prcm_irq_setup);
+1 -1
arch/arm/mach-pxa/generic.c
··· 61 61 /* 62 62 * For non device-tree builds, keep legacy timer init 63 63 */ 64 - void pxa_timer_init(void) 64 + void __init pxa_timer_init(void) 65 65 { 66 66 pxa_timer_nodt_init(IRQ_OST0, io_p2v(0x40a00000), 67 67 get_clock_tick_rate());
+3
arch/arm/mm/alignment.c
··· 41 41 * This code is not portable to processors with late data abort handling. 42 42 */ 43 43 #define CODING_BITS(i) (i & 0x0e000000) 44 + #define COND_BITS(i) (i & 0xf0000000) 44 45 45 46 #define LDST_I_BIT(i) (i & (1 << 26)) /* Immediate constant */ 46 47 #define LDST_P_BIT(i) (i & (1 << 24)) /* Preindex */ ··· 822 821 break; 823 822 824 823 case 0x04000000: /* ldr or str immediate */ 824 + if (COND_BITS(instr) == 0xf0000000) /* NEON VLDn, VSTn */ 825 + goto bad; 825 826 offset.un = OFFSET_BITS(instr); 826 827 handler = do_alignment_ldrstr; 827 828 break;
+2 -2
arch/arm/mm/proc-v7-3level.S
··· 157 157 * TFR EV X F IHD LR S 158 158 * .EEE ..EE PUI. .TAT 4RVI ZWRS BLDP WCAM 159 159 * rxxx rrxx xxx0 0101 xxxx xxxx x111 xxxx < forced 160 - * 11 0 110 1 0011 1100 .111 1101 < we want 160 + * 11 0 110 0 0011 1100 .111 1101 < we want 161 161 */ 162 162 .align 2 163 163 .type v7_crval, #object 164 164 v7_crval: 165 - crval clear=0x0120c302, mmuset=0x30c23c7d, ucset=0x00c01c7c 165 + crval clear=0x0122c302, mmuset=0x30c03c7d, ucset=0x00c01c7c
+3
arch/arm/plat-omap/Kconfig
··· 1 + config ARCH_OMAP 2 + bool 3 + 1 4 if ARCH_OMAP 2 5 3 6 menu "TI OMAP Common Features"
+12
arch/mips/kernel/mcount.S
··· 129 129 nop 130 130 #endif 131 131 b ftrace_stub 132 + #ifdef CONFIG_32BIT 133 + addiu sp, sp, 8 134 + #else 132 135 nop 136 + #endif 133 137 134 138 static_trace: 135 139 MCOUNT_SAVE_REGS ··· 143 139 move a1, AT /* arg2: parent's return address */ 144 140 145 141 MCOUNT_RESTORE_REGS 142 + #ifdef CONFIG_32BIT 143 + addiu sp, sp, 8 144 + #endif 146 145 .globl ftrace_stub 147 146 ftrace_stub: 148 147 RETURN_BACK ··· 190 183 jal prepare_ftrace_return 191 184 nop 192 185 MCOUNT_RESTORE_REGS 186 + #ifndef CONFIG_DYNAMIC_FTRACE 187 + #ifdef CONFIG_32BIT 188 + addiu sp, sp, 8 189 + #endif 190 + #endif 193 191 RETURN_BACK 194 192 END(ftrace_graph_caller) 195 193
+3 -3
arch/mips/math-emu/cp1emu.c
··· 650 650 #define SIFROMREG(si, x) \ 651 651 do { \ 652 652 if (cop1_64bit(xcp)) \ 653 - (si) = get_fpr32(&ctx->fpr[x], 0); \ 653 + (si) = (int)get_fpr32(&ctx->fpr[x], 0); \ 654 654 else \ 655 - (si) = get_fpr32(&ctx->fpr[(x) & ~1], (x) & 1); \ 655 + (si) = (int)get_fpr32(&ctx->fpr[(x) & ~1], (x) & 1); \ 656 656 } while (0) 657 657 658 658 #define SITOREG(si, x) \ ··· 667 667 } \ 668 668 } while (0) 669 669 670 - #define SIFROMHREG(si, x) ((si) = get_fpr32(&ctx->fpr[x], 1)) 670 + #define SIFROMHREG(si, x) ((si) = (int)get_fpr32(&ctx->fpr[x], 1)) 671 671 672 672 #define SITOHREG(si, x) \ 673 673 do { \
+1 -2
arch/x86/boot/compressed/Makefile
··· 33 33 $(obj)/eboot.o: KBUILD_CFLAGS += -fshort-wchar -mno-red-zone 34 34 35 35 ifeq ($(CONFIG_EFI_STUB), y) 36 - VMLINUX_OBJS += $(obj)/eboot.o $(obj)/efi_stub_$(BITS).o \ 37 - $(objtree)/drivers/firmware/efi/libstub/lib.a 36 + VMLINUX_OBJS += $(obj)/eboot.o $(obj)/efi_stub_$(BITS).o 38 37 endif 39 38 40 39 $(obj)/vmlinux: $(VMLINUX_OBJS) FORCE
+15
arch/x86/boot/compressed/aslr.c
··· 183 183 static bool mem_avoid_overlap(struct mem_vector *img) 184 184 { 185 185 int i; 186 + struct setup_data *ptr; 186 187 187 188 for (i = 0; i < MEM_AVOID_MAX; i++) { 188 189 if (mem_overlaps(img, &mem_avoid[i])) 189 190 return true; 191 + } 192 + 193 + /* Avoid all entries in the setup_data linked list. */ 194 + ptr = (struct setup_data *)(unsigned long)real_mode->hdr.setup_data; 195 + while (ptr) { 196 + struct mem_vector avoid; 197 + 198 + avoid.start = (u64)ptr; 199 + avoid.size = sizeof(*ptr) + ptr->len; 200 + 201 + if (mem_overlaps(img, &avoid)) 202 + return true; 203 + 204 + ptr = (struct setup_data *)(unsigned long)ptr->next; 190 205 } 191 206 192 207 return false;
+26 -18
arch/x86/boot/compressed/eboot.c
··· 19 19 20 20 static efi_system_table_t *sys_table; 21 21 22 - struct efi_config *efi_early; 22 + static struct efi_config *efi_early; 23 + 24 + #define efi_call_early(f, ...) \ 25 + efi_early->call(efi_early->f, __VA_ARGS__); 23 26 24 27 #define BOOT_SERVICES(bits) \ 25 28 static void setup_boot_services##bits(struct efi_config *c) \ ··· 268 265 269 266 offset = offsetof(typeof(*out), output_string); 270 267 output_string = efi_early->text_output + offset; 268 + out = (typeof(out))(unsigned long)efi_early->text_output; 271 269 func = (u64 *)output_string; 272 270 273 - efi_early->call(*func, efi_early->text_output, str); 271 + efi_early->call(*func, out, str); 274 272 } else { 275 273 struct efi_simple_text_output_protocol_32 *out; 276 274 u32 *func; 277 275 278 276 offset = offsetof(typeof(*out), output_string); 279 277 output_string = efi_early->text_output + offset; 278 + out = (typeof(out))(unsigned long)efi_early->text_output; 280 279 func = (u32 *)output_string; 281 280 282 - efi_early->call(*func, efi_early->text_output, str); 281 + efi_early->call(*func, out, str); 283 282 } 284 283 } 284 + 285 + #include "../../../../drivers/firmware/efi/libstub/efi-stub-helper.c" 285 286 286 287 static void find_bits(unsigned long mask, u8 *pos, u8 *size) 287 288 { ··· 367 360 return status; 368 361 } 369 362 370 - static efi_status_t 363 + static void 371 364 setup_efi_pci32(struct boot_params *params, void **pci_handle, 372 365 unsigned long size) 373 366 { ··· 410 403 data = (struct setup_data *)rom; 411 404 412 405 } 413 - 414 - return status; 415 406 } 416 407 417 408 static efi_status_t ··· 468 463 469 464 } 470 465 471 - static efi_status_t 466 + static void 472 467 setup_efi_pci64(struct boot_params *params, void **pci_handle, 473 468 unsigned long size) 474 469 { ··· 511 506 data = (struct setup_data *)rom; 512 507 513 508 } 514 - 515 - return status; 516 509 } 517 510 518 - static efi_status_t setup_efi_pci(struct boot_params *params) 511 + /* 512 + * There's 
no way to return an informative status from this function, 513 + * because any analysis (and printing of error messages) needs to be 514 + * done directly at the EFI function call-site. 515 + * 516 + * For example, EFI_INVALID_PARAMETER could indicate a bug or maybe we 517 + * just didn't find any PCI devices, but there's no way to tell outside 518 + * the context of the call. 519 + */ 520 + static void setup_efi_pci(struct boot_params *params) 519 521 { 520 522 efi_status_t status; 521 523 void **pci_handle = NULL; ··· 539 527 size, (void **)&pci_handle); 540 528 541 529 if (status != EFI_SUCCESS) 542 - return status; 530 + return; 543 531 544 532 status = efi_call_early(locate_handle, 545 533 EFI_LOCATE_BY_PROTOCOL, &pci_proto, ··· 550 538 goto free_handle; 551 539 552 540 if (efi_early->is64) 553 - status = setup_efi_pci64(params, pci_handle, size); 541 + setup_efi_pci64(params, pci_handle, size); 554 542 else 555 - status = setup_efi_pci32(params, pci_handle, size); 543 + setup_efi_pci32(params, pci_handle, size); 556 544 557 545 free_handle: 558 546 efi_call_early(free_pool, pci_handle); 559 - return status; 560 547 } 561 548 562 549 static void ··· 1391 1380 1392 1381 setup_graphics(boot_params); 1393 1382 1394 - status = setup_efi_pci(boot_params); 1395 - if (status != EFI_SUCCESS) { 1396 - efi_printk(sys_table, "setup_efi_pci() failed!\n"); 1397 - } 1383 + setup_efi_pci(boot_params); 1398 1384 1399 1385 status = efi_call_early(allocate_pool, EFI_LOADER_DATA, 1400 1386 sizeof(*gdt), (void **)&gdt);
+16
arch/x86/boot/compressed/eboot.h
··· 103 103 void *blt; 104 104 }; 105 105 106 + struct efi_config { 107 + u64 image_handle; 108 + u64 table; 109 + u64 allocate_pool; 110 + u64 allocate_pages; 111 + u64 get_memory_map; 112 + u64 free_pool; 113 + u64 free_pages; 114 + u64 locate_handle; 115 + u64 handle_protocol; 116 + u64 exit_boot_services; 117 + u64 text_output; 118 + efi_status_t (*call)(unsigned long, ...); 119 + bool is64; 120 + } __packed; 121 + 106 122 #endif /* BOOT_COMPRESSED_EBOOT_H */
-24
arch/x86/include/asm/efi.h
··· 159 159 } 160 160 #endif /* CONFIG_EFI_MIXED */ 161 161 162 - 163 - /* arch specific definitions used by the stub code */ 164 - 165 - struct efi_config { 166 - u64 image_handle; 167 - u64 table; 168 - u64 allocate_pool; 169 - u64 allocate_pages; 170 - u64 get_memory_map; 171 - u64 free_pool; 172 - u64 free_pages; 173 - u64 locate_handle; 174 - u64 handle_protocol; 175 - u64 exit_boot_services; 176 - u64 text_output; 177 - efi_status_t (*call)(unsigned long, ...); 178 - bool is64; 179 - } __packed; 180 - 181 - extern struct efi_config *efi_early; 182 - 183 - #define efi_call_early(f, ...) \ 184 - efi_early->call(efi_early->f, __VA_ARGS__); 185 - 186 162 extern bool efi_reboot_required(void); 187 163 188 164 #else
+3 -3
arch/x86/include/asm/fixmap.h
··· 106 106 __end_of_permanent_fixed_addresses, 107 107 108 108 /* 109 - * 256 temporary boot-time mappings, used by early_ioremap(), 109 + * 512 temporary boot-time mappings, used by early_ioremap(), 110 110 * before ioremap() is functional. 111 111 * 112 - * If necessary we round it up to the next 256 pages boundary so 112 + * If necessary we round it up to the next 512 pages boundary so 113 113 * that we can have a single pgd entry and a single pte table: 114 114 */ 115 115 #define NR_FIX_BTMAPS 64 116 - #define FIX_BTMAPS_SLOTS 4 116 + #define FIX_BTMAPS_SLOTS 8 117 117 #define TOTAL_FIX_BTMAPS (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS) 118 118 FIX_BTMAP_END = 119 119 (__end_of_permanent_fixed_addresses ^
+3
arch/x86/kernel/smpboot.c
··· 1284 1284 1285 1285 for_each_cpu(sibling, cpu_sibling_mask(cpu)) 1286 1286 cpumask_clear_cpu(cpu, cpu_sibling_mask(sibling)); 1287 + for_each_cpu(sibling, cpu_llc_shared_mask(cpu)) 1288 + cpumask_clear_cpu(cpu, cpu_llc_shared_mask(sibling)); 1289 + cpumask_clear(cpu_llc_shared_mask(cpu)); 1287 1290 cpumask_clear(cpu_sibling_mask(cpu)); 1288 1291 cpumask_clear(cpu_core_mask(cpu)); 1289 1292 c->phys_proc_id = 0;
-1
drivers/acpi/acpi_lpss.c
··· 419 419 adev->driver_data = pdata; 420 420 pdev = acpi_create_platform_device(adev); 421 421 if (!IS_ERR_OR_NULL(pdev)) { 422 - device_enable_async_suspend(&pdev->dev); 423 422 return 1; 424 423 } 425 424
+1
drivers/acpi/acpica/aclocal.h
··· 254 254 u32 field_bit_position; 255 255 u32 field_bit_length; 256 256 u16 resource_length; 257 + u16 pin_number_index; 257 258 u8 field_flags; 258 259 u8 attribute; 259 260 u8 field_type;
+1
drivers/acpi/acpica/acobject.h
··· 264 264 ACPI_OBJECT_COMMON_HEADER ACPI_COMMON_FIELD_INFO u16 resource_length; 265 265 union acpi_operand_object *region_obj; /* Containing op_region object */ 266 266 u8 *resource_buffer; /* resource_template for serial regions/fields */ 267 + u16 pin_number_index; /* Index relative to previous Connection/Template */ 267 268 }; 268 269 269 270 struct acpi_object_bank_field {
+2
drivers/acpi/acpica/dsfield.c
··· 360 360 */ 361 361 info->resource_buffer = NULL; 362 362 info->connection_node = NULL; 363 + info->pin_number_index = 0; 363 364 364 365 /* 365 366 * A Connection() is either an actual resource descriptor (buffer) ··· 438 437 } 439 438 440 439 info->field_bit_position += info->field_bit_length; 440 + info->pin_number_index++; /* Index relative to previous Connection() */ 441 441 break; 442 442 443 443 default:
+31 -16
drivers/acpi/acpica/evregion.c
··· 142 142 union acpi_operand_object *region_obj2; 143 143 void *region_context = NULL; 144 144 struct acpi_connection_info *context; 145 + acpi_physical_address address; 145 146 146 147 ACPI_FUNCTION_TRACE(ev_address_space_dispatch); 147 148 ··· 232 231 /* We have everything we need, we can invoke the address space handler */ 233 232 234 233 handler = handler_desc->address_space.handler; 235 - 236 - ACPI_DEBUG_PRINT((ACPI_DB_OPREGION, 237 - "Handler %p (@%p) Address %8.8X%8.8X [%s]\n", 238 - &region_obj->region.handler->address_space, handler, 239 - ACPI_FORMAT_NATIVE_UINT(region_obj->region.address + 240 - region_offset), 241 - acpi_ut_get_region_name(region_obj->region. 242 - space_id))); 234 + address = (region_obj->region.address + region_offset); 243 235 244 236 /* 245 237 * Special handling for generic_serial_bus and general_purpose_io: 246 238 * There are three extra parameters that must be passed to the 247 239 * handler via the context: 248 - * 1) Connection buffer, a resource template from Connection() op. 249 - * 2) Length of the above buffer. 250 - * 3) Actual access length from the access_as() op. 
240 + * 1) Connection buffer, a resource template from Connection() op 241 + * 2) Length of the above buffer 242 + * 3) Actual access length from the access_as() op 243 + * 244 + * In addition, for general_purpose_io, the Address and bit_width fields 245 + * are defined as follows: 246 + * 1) Address is the pin number index of the field (bit offset from 247 + * the previous Connection) 248 + * 2) bit_width is the actual bit length of the field (number of pins) 251 249 */ 252 - if (((region_obj->region.space_id == ACPI_ADR_SPACE_GSBUS) || 253 - (region_obj->region.space_id == ACPI_ADR_SPACE_GPIO)) && 250 + if ((region_obj->region.space_id == ACPI_ADR_SPACE_GSBUS) && 254 251 context && field_obj) { 255 252 256 253 /* Get the Connection (resource_template) buffer */ ··· 257 258 context->length = field_obj->field.resource_length; 258 259 context->access_length = field_obj->field.access_length; 259 260 } 261 + if ((region_obj->region.space_id == ACPI_ADR_SPACE_GPIO) && 262 + context && field_obj) { 263 + 264 + /* Get the Connection (resource_template) buffer */ 265 + 266 + context->connection = field_obj->field.resource_buffer; 267 + context->length = field_obj->field.resource_length; 268 + context->access_length = field_obj->field.access_length; 269 + address = field_obj->field.pin_number_index; 270 + bit_width = field_obj->field.bit_length; 271 + } 272 + 273 + ACPI_DEBUG_PRINT((ACPI_DB_OPREGION, 274 + "Handler %p (@%p) Address %8.8X%8.8X [%s]\n", 275 + &region_obj->region.handler->address_space, handler, 276 + ACPI_FORMAT_NATIVE_UINT(address), 277 + acpi_ut_get_region_name(region_obj->region. 
278 + space_id))); 260 279 261 280 if (!(handler_desc->address_space.handler_flags & 262 281 ACPI_ADDR_HANDLER_DEFAULT_INSTALLED)) { ··· 288 271 289 272 /* Call the handler */ 290 273 291 - status = handler(function, 292 - (region_obj->region.address + region_offset), 293 - bit_width, value, context, 274 + status = handler(function, address, bit_width, value, context, 294 275 region_obj2->extra.region_context); 295 276 296 277 if (ACPI_FAILURE(status)) {
+67
drivers/acpi/acpica/exfield.c
··· 253 253 buffer = &buffer_desc->integer.value; 254 254 } 255 255 256 + if ((obj_desc->common.type == ACPI_TYPE_LOCAL_REGION_FIELD) && 257 + (obj_desc->field.region_obj->region.space_id == 258 + ACPI_ADR_SPACE_GPIO)) { 259 + /* 260 + * For GPIO (general_purpose_io), the Address will be the bit offset 261 + * from the previous Connection() operator, making it effectively a 262 + * pin number index. The bit_length is the length of the field, which 263 + * is thus the number of pins. 264 + */ 265 + ACPI_DEBUG_PRINT((ACPI_DB_BFIELD, 266 + "GPIO FieldRead [FROM]: Pin %u Bits %u\n", 267 + obj_desc->field.pin_number_index, 268 + obj_desc->field.bit_length)); 269 + 270 + /* Lock entire transaction if requested */ 271 + 272 + acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); 273 + 274 + /* Perform the write */ 275 + 276 + status = acpi_ex_access_region(obj_desc, 0, 277 + (u64 *)buffer, ACPI_READ); 278 + acpi_ex_release_global_lock(obj_desc->common_field.field_flags); 279 + if (ACPI_FAILURE(status)) { 280 + acpi_ut_remove_reference(buffer_desc); 281 + } else { 282 + *ret_buffer_desc = buffer_desc; 283 + } 284 + return_ACPI_STATUS(status); 285 + } 286 + 256 287 ACPI_DEBUG_PRINT((ACPI_DB_BFIELD, 257 288 "FieldRead [TO]: Obj %p, Type %X, Buf %p, ByteLen %X\n", 258 289 obj_desc, obj_desc->common.type, buffer, ··· 443 412 acpi_ex_release_global_lock(obj_desc->common_field.field_flags); 444 413 445 414 *result_desc = buffer_desc; 415 + return_ACPI_STATUS(status); 416 + } else if ((obj_desc->common.type == ACPI_TYPE_LOCAL_REGION_FIELD) && 417 + (obj_desc->field.region_obj->region.space_id == 418 + ACPI_ADR_SPACE_GPIO)) { 419 + /* 420 + * For GPIO (general_purpose_io), we will bypass the entire field 421 + * mechanism and handoff the bit address and bit width directly to 422 + * the handler. The Address will be the bit offset 423 + * from the previous Connection() operator, making it effectively a 424 + * pin number index. The bit_length is the length of the field, which 425 + * is thus the number of pins. 426 + */ 427 + if (source_desc->common.type != ACPI_TYPE_INTEGER) { 428 + return_ACPI_STATUS(AE_AML_OPERAND_TYPE); 429 + } 430 + 431 + ACPI_DEBUG_PRINT((ACPI_DB_BFIELD, 432 + "GPIO FieldWrite [FROM]: (%s:%X), Val %.8X [TO]: Pin %u Bits %u\n", 433 + acpi_ut_get_type_name(source_desc->common. 434 + type), 435 + source_desc->common.type, 436 + (u32)source_desc->integer.value, 437 + obj_desc->field.pin_number_index, 438 + obj_desc->field.bit_length)); 439 + 440 + buffer = &source_desc->integer.value; 441 + 442 + /* Lock entire transaction if requested */ 443 + 444 + acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); 445 + 446 + /* Perform the write */ 447 + 448 + status = acpi_ex_access_region(obj_desc, 0, 449 + (u64 *)buffer, ACPI_WRITE); 450 + acpi_ex_release_global_lock(obj_desc->common_field.field_flags); 446 451 return_ACPI_STATUS(status); 447 452 }
+2
drivers/acpi/acpica/exprep.c
··· 484 484 obj_desc->field.resource_length = info->resource_length; 485 485 } 486 486 487 + obj_desc->field.pin_number_index = info->pin_number_index; 488 + 487 489 /* Allow full data read from EC address space */ 488 490 489 491 if ((obj_desc->field.region_obj->region.space_id ==
+8
drivers/acpi/container.c
··· 99 99 device_unregister(dev); 100 100 } 101 101 102 + static void container_device_online(struct acpi_device *adev) 103 + { 104 + struct device *dev = acpi_driver_data(adev); 105 + 106 + kobject_uevent(&dev->kobj, KOBJ_ONLINE); 107 + } 108 + 102 109 static struct acpi_scan_handler container_handler = { 103 110 .ids = container_device_ids, 104 111 .attach = container_device_attach, ··· 113 106 .hotplug = { 114 107 .enabled = true, 115 108 .demand_offline = true, 109 + .notify_online = container_device_online, 116 110 }, 117 111 }; 118 112
+4 -1
drivers/acpi/scan.c
··· 130 130 list_for_each_entry(id, &acpi_dev->pnp.ids, list) { 131 131 count = snprintf(&modalias[len], size, "%s:", id->id); 132 132 if (count < 0) 133 - return EINVAL; 133 + return -EINVAL; 134 134 if (count >= size) 135 135 return -ENOMEM; 136 136 len += count; ··· 2189 2189 ok: 2190 2190 list_for_each_entry(child, &device->children, node) 2191 2191 acpi_bus_attach(child); 2192 + 2193 + if (device->handler && device->handler->hotplug.notify_online) 2194 + device->handler->hotplug.notify_online(device); 2192 2195 } 2193 2196 2194 2197 /**
+8
drivers/acpi/video.c
··· 750 750 DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T520"), 751 751 }, 752 752 }, 753 + { 754 + .callback = video_disable_native_backlight, 755 + .ident = "ThinkPad X201s", 756 + .matches = { 757 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 758 + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X201s"), 759 + }, 760 + }, 753 761 754 762 /* The native backlight controls do not work on some older machines */ 755 763 {
+25 -25
drivers/bus/omap_l3_noc.h
··· 188 188 }; 189 189 190 190 static struct l3_masters_data omap_l3_masters[] = { 191 - { 0x0 , "MPU"}, 192 - { 0x10, "CS_ADP"}, 193 - { 0x14, "xxx"}, 194 - { 0x20, "DSP"}, 195 - { 0x30, "IVAHD"}, 196 - { 0x40, "ISS"}, 197 - { 0x44, "DucatiM3"}, 198 - { 0x48, "FaceDetect"}, 199 - { 0x50, "SDMA_Rd"}, 200 - { 0x54, "SDMA_Wr"}, 201 - { 0x58, "xxx"}, 202 - { 0x5C, "xxx"}, 203 - { 0x60, "SGX"}, 204 - { 0x70, "DSS"}, 205 - { 0x80, "C2C"}, 206 - { 0x88, "xxx"}, 207 - { 0x8C, "xxx"}, 208 - { 0x90, "HSI"}, 209 - { 0xA0, "MMC1"}, 210 - { 0xA4, "MMC2"}, 211 - { 0xA8, "MMC6"}, 212 - { 0xB0, "UNIPRO1"}, 213 - { 0xC0, "USBHOSTHS"}, 214 - { 0xC4, "USBOTGHS"}, 215 - { 0xC8, "USBHOSTFS"} 191 + { 0x00, "MPU"}, 192 + { 0x04, "CS_ADP"}, 193 + { 0x05, "xxx"}, 194 + { 0x08, "DSP"}, 195 + { 0x0C, "IVAHD"}, 196 + { 0x10, "ISS"}, 197 + { 0x11, "DucatiM3"}, 198 + { 0x12, "FaceDetect"}, 199 + { 0x14, "SDMA_Rd"}, 200 + { 0x15, "SDMA_Wr"}, 201 + { 0x16, "xxx"}, 202 + { 0x17, "xxx"}, 203 + { 0x18, "SGX"}, 204 + { 0x1C, "DSS"}, 205 + { 0x20, "C2C"}, 206 + { 0x22, "xxx"}, 207 + { 0x23, "xxx"}, 208 + { 0x24, "HSI"}, 209 + { 0x28, "MMC1"}, 210 + { 0x29, "MMC2"}, 211 + { 0x2A, "MMC6"}, 212 + { 0x2C, "UNIPRO1"}, 213 + { 0x30, "USBHOSTHS"}, 214 + { 0x31, "USBOTGHS"}, 215 + { 0x32, "USBHOSTFS"} 216 216 }; 217 217 218 218 static struct l3_flagmux_data *omap_l3_flagmux[] = {
+6 -4
drivers/cpufreq/cpufreq.c
··· 1289 1289 per_cpu(cpufreq_cpu_data, j) = NULL; 1290 1290 write_unlock_irqrestore(&cpufreq_driver_lock, flags); 1291 1291 1292 + up_write(&policy->rwsem); 1293 + 1292 1294 if (cpufreq_driver->exit) 1293 1295 cpufreq_driver->exit(policy); 1294 1296 err_set_policy_cpu: ··· 1658 1656 if (!cpufreq_driver) 1659 1657 return; 1660 1658 1659 + cpufreq_suspended = true; 1660 + 1661 1661 if (!has_target()) 1662 1662 return; 1663 1663 ··· 1674 1670 pr_err("%s: Failed to suspend driver: %p\n", __func__, 1675 1671 policy); 1676 1672 } 1677 - 1678 - cpufreq_suspended = true; 1679 1673 } 1680 1674 1681 1675 /** ··· 1689 1687 if (!cpufreq_driver) 1690 1688 return; 1691 1689 1690 + cpufreq_suspended = false; 1691 + 1692 1692 if (!has_target()) 1693 1693 return; 1694 1694 1695 1695 pr_debug("%s: Resuming Governors\n", __func__); 1696 - 1697 - cpufreq_suspended = false; 1698 1696 1699 1697 list_for_each_entry(policy, &cpufreq_policy_list, policy_list) { 1700 1698 if (cpufreq_driver->resume && cpufreq_driver->resume(policy))
+5
drivers/dma/omap-dma.c
··· 1017 1017 return -EINVAL; 1018 1018 1019 1019 if (c->paused) { 1020 + mb(); 1021 + 1022 + /* Restore channel link register */ 1023 + omap_dma_chan_write(c, CLNK_CTRL, c->desc->clnk_ctrl); 1024 + 1020 1025 omap_dma_start(c, c->desc); 1021 1026 c->paused = false; 1022 1027 }
+1 -1
drivers/firmware/efi/Makefile
··· 7 7 obj-$(CONFIG_UEFI_CPER) += cper.o 8 8 obj-$(CONFIG_EFI_RUNTIME_MAP) += runtime-map.o 9 9 obj-$(CONFIG_EFI_RUNTIME_WRAPPERS) += runtime-wrappers.o 10 - obj-$(CONFIG_EFI_STUB) += libstub/ 10 + obj-$(CONFIG_EFI_ARM_STUB) += libstub/
+4 -1
drivers/gpio/gpiolib-acpi.c
··· 377 377 struct gpio_chip *chip = achip->chip; 378 378 struct acpi_resource_gpio *agpio; 379 379 struct acpi_resource *ares; 380 + int pin_index = (int)address; 380 381 acpi_status status; 381 382 bool pull_up; 383 + int length; 382 384 int i; 383 385 384 386 status = acpi_buffer_to_resource(achip->conn_info.connection, ··· 402 400 return AE_BAD_PARAMETER; 403 401 } 404 402 405 - for (i = 0; i < agpio->pin_table_length; i++) { 403 + length = min(agpio->pin_table_length, (u16)(pin_index + bits)); 404 + for (i = pin_index; i < length; ++i) { 406 405 unsigned pin = agpio->pin_table[i]; 407 406 struct acpi_gpio_connection *conn; 408 407 struct gpio_desc *desc;
+2 -2
drivers/gpio/gpiolib.c
··· 413 413 return; 414 414 } 415 415 416 - irq_set_chained_handler(parent_irq, parent_handler); 417 416 /* 418 417 * The parent irqchip is already using the chip_data for this 419 418 * irqchip, so our callbacks simply use the handler_data. 420 419 */ 421 420 irq_set_handler_data(parent_irq, gpiochip); 421 + irq_set_chained_handler(parent_irq, parent_handler); 422 422 } 423 423 EXPORT_SYMBOL_GPL(gpiochip_set_chained_irqchip); 424 424 ··· 1674 1674 set_bit(FLAG_OPEN_SOURCE, &desc->flags); 1675 1675 1676 1676 /* No particular flag request, return here... */ 1677 - if (flags & GPIOD_FLAGS_BIT_DIR_SET) 1677 + if (!(flags & GPIOD_FLAGS_BIT_DIR_SET)) 1678 1678 return desc; 1679 1679 1680 1680 /* Process flags */
+7 -5
drivers/gpu/drm/i915/i915_cmd_parser.c
··· 709 709 BUG_ON(!validate_cmds_sorted(ring, cmd_tables, cmd_table_count)); 710 710 BUG_ON(!validate_regs_sorted(ring)); 711 711 712 - ret = init_hash_table(ring, cmd_tables, cmd_table_count); 713 - if (ret) { 714 - DRM_ERROR("CMD: cmd_parser_init failed!\n"); 715 - fini_hash_table(ring); 716 - return ret; 712 + if (hash_empty(ring->cmd_hash)) { 713 + ret = init_hash_table(ring, cmd_tables, cmd_table_count); 714 + if (ret) { 715 + DRM_ERROR("CMD: cmd_parser_init failed!\n"); 716 + fini_hash_table(ring); 717 + return ret; 718 + } 717 719 } 718 720 719 721 ring->needs_cmd_parser = true;
+1 -1
drivers/gpu/drm/i915/intel_hdmi.c
··· 732 732 if (tmp & HDMI_MODE_SELECT_HDMI) 733 733 pipe_config->has_hdmi_sink = true; 734 734 735 - if (tmp & HDMI_MODE_SELECT_HDMI) 735 + if (tmp & SDVO_AUDIO_ENABLE) 736 736 pipe_config->has_audio = true; 737 737 738 738 if (!HAS_PCH_SPLIT(dev) &&
+6 -6
drivers/gpu/drm/radeon/cik.c
··· 4803 4803 */ 4804 4804 static int cik_cp_compute_resume(struct radeon_device *rdev) 4805 4805 { 4806 - int r, i, idx; 4806 + int r, i, j, idx; 4807 4807 u32 tmp; 4808 4808 bool use_doorbell = true; 4809 4809 u64 hqd_gpu_addr; ··· 4922 4922 mqd->queue_state.cp_hqd_pq_wptr= 0; 4923 4923 if (RREG32(CP_HQD_ACTIVE) & 1) { 4924 4924 WREG32(CP_HQD_DEQUEUE_REQUEST, 1); 4925 - for (i = 0; i < rdev->usec_timeout; i++) { 4925 + for (j = 0; j < rdev->usec_timeout; j++) { 4926 4926 if (!(RREG32(CP_HQD_ACTIVE) & 1)) 4927 4927 break; 4928 4928 udelay(1); ··· 7751 7751 wptr = RREG32(IH_RB_WPTR); 7752 7752 7753 7753 if (wptr & RB_OVERFLOW) { 7754 + wptr &= ~RB_OVERFLOW; 7754 7755 /* When a ring buffer overflow happen start parsing interrupt 7755 7756 * from the last not overwritten vector (wptr + 16). Hopefully 7756 7757 * this should allow us to catchup. 7757 7758 */ 7758 - dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, %d, %d)\n", 7759 - wptr, rdev->ih.rptr, (wptr + 16) + rdev->ih.ptr_mask); 7759 + dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n", 7760 + wptr, rdev->ih.rptr, (wptr + 16) & rdev->ih.ptr_mask); 7760 7761 rdev->ih.rptr = (wptr + 16) & rdev->ih.ptr_mask; 7761 7762 tmp = RREG32(IH_RB_CNTL); 7762 7763 tmp |= IH_WPTR_OVERFLOW_CLEAR; 7763 7764 WREG32(IH_RB_CNTL, tmp); 7764 - wptr &= ~RB_OVERFLOW; 7765 7765 } 7766 7766 return (wptr & rdev->ih.ptr_mask); 7767 7767 } ··· 8251 8251 /* wptr/rptr are in bytes! */ 8252 8252 rptr += 16; 8253 8253 rptr &= rdev->ih.ptr_mask; 8254 + WREG32(IH_RB_RPTR, rptr); 8254 8255 } 8255 8256 if (queue_hotplug) 8256 8257 schedule_work(&rdev->hotplug_work); ··· 8260 8259 if (queue_thermal) 8261 8260 schedule_work(&rdev->pm.dpm.thermal.work); 8262 8261 rdev->ih.rptr = rptr; 8263 - WREG32(IH_RB_RPTR, rdev->ih.rptr); 8264 8262 atomic_set(&rdev->ih.lock, 0); 8265 8263 8266 8264 /* make sure wptr hasn't changed while processing */
+4 -4
drivers/gpu/drm/radeon/evergreen.c
··· 4749 4749 wptr = RREG32(IH_RB_WPTR); 4750 4750 4751 4751 if (wptr & RB_OVERFLOW) { 4752 + wptr &= ~RB_OVERFLOW; 4752 4753 /* When a ring buffer overflow happen start parsing interrupt 4753 4754 * from the last not overwritten vector (wptr + 16). Hopefully 4754 4755 * this should allow us to catchup. 4755 4756 */ 4756 - dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, %d, %d)\n", 4757 - wptr, rdev->ih.rptr, (wptr + 16) + rdev->ih.ptr_mask); 4757 + dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n", 4758 + wptr, rdev->ih.rptr, (wptr + 16) & rdev->ih.ptr_mask); 4758 4759 rdev->ih.rptr = (wptr + 16) & rdev->ih.ptr_mask; 4759 4760 tmp = RREG32(IH_RB_CNTL); 4760 4761 tmp |= IH_WPTR_OVERFLOW_CLEAR; 4761 4762 WREG32(IH_RB_CNTL, tmp); 4762 - wptr &= ~RB_OVERFLOW; 4763 4763 } 4764 4764 return (wptr & rdev->ih.ptr_mask); 4765 4765 } ··· 5137 5137 /* wptr/rptr are in bytes! */ 5138 5138 rptr += 16; 5139 5139 rptr &= rdev->ih.ptr_mask; 5140 + WREG32(IH_RB_RPTR, rptr); 5140 5141 } 5141 5142 if (queue_hotplug) 5142 5143 schedule_work(&rdev->hotplug_work); ··· 5146 5145 if (queue_thermal && rdev->pm.dpm_enabled) 5147 5146 schedule_work(&rdev->pm.dpm.thermal.work); 5148 5147 rdev->ih.rptr = rptr; 5149 - WREG32(IH_RB_RPTR, rdev->ih.rptr); 5150 5148 atomic_set(&rdev->ih.lock, 0); 5151 5149 5152 5150 /* make sure wptr hasn't changed while processing */
+4 -4
drivers/gpu/drm/radeon/r600.c
··· 3792 3792 wptr = RREG32(IH_RB_WPTR); 3793 3793 3794 3794 if (wptr & RB_OVERFLOW) { 3795 + wptr &= ~RB_OVERFLOW; 3795 3796 /* When a ring buffer overflow happen start parsing interrupt 3796 3797 * from the last not overwritten vector (wptr + 16). Hopefully 3797 3798 * this should allow us to catchup. 3798 3799 */ 3799 - dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, %d, %d)\n", 3800 - wptr, rdev->ih.rptr, (wptr + 16) + rdev->ih.ptr_mask); 3800 + dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n", 3801 + wptr, rdev->ih.rptr, (wptr + 16) & rdev->ih.ptr_mask); 3801 3802 rdev->ih.rptr = (wptr + 16) & rdev->ih.ptr_mask; 3802 3803 tmp = RREG32(IH_RB_CNTL); 3803 3804 tmp |= IH_WPTR_OVERFLOW_CLEAR; 3804 3805 WREG32(IH_RB_CNTL, tmp); 3805 - wptr &= ~RB_OVERFLOW; 3806 3806 } 3807 3807 return (wptr & rdev->ih.ptr_mask); 3808 3808 } ··· 4048 4048 /* wptr/rptr are in bytes! */ 4049 4049 rptr += 16; 4050 4050 rptr &= rdev->ih.ptr_mask; 4051 + WREG32(IH_RB_RPTR, rptr); 4051 4052 } 4052 4053 if (queue_hotplug) 4053 4054 schedule_work(&rdev->hotplug_work); ··· 4057 4056 if (queue_thermal && rdev->pm.dpm_enabled) 4058 4057 schedule_work(&rdev->pm.dpm.thermal.work); 4059 4058 rdev->ih.rptr = rptr; 4060 - WREG32(IH_RB_RPTR, rdev->ih.rptr); 4061 4059 atomic_set(&rdev->ih.lock, 0); 4062 4060 4063 4061 /* make sure wptr hasn't changed while processing */
+1
drivers/gpu/drm/radeon/radeon.h
··· 106 106 extern int radeon_deep_color; 107 107 extern int radeon_use_pflipirq; 108 108 extern int radeon_bapm; 109 + extern int radeon_backlight; 109 110 110 111 /* 111 112 * Copy from radeon_drv.h so we don't have to include both and have conflicting
+4
drivers/gpu/drm/radeon/radeon_device.c
··· 123 123 * https://bugzilla.kernel.org/show_bug.cgi?id=51381 124 124 */ 125 125 { PCI_VENDOR_ID_ATI, 0x6741, 0x1043, 0x108c, RADEON_PX_QUIRK_DISABLE_PX }, 126 + /* Asus K53TK laptop with AMD A6-3420M APU and Radeon 7670m GPU 127 + * https://bugzilla.kernel.org/show_bug.cgi?id=51381 128 + */ 129 + { PCI_VENDOR_ID_ATI, 0x6840, 0x1043, 0x2122, RADEON_PX_QUIRK_DISABLE_PX }, 126 130 /* macbook pro 8.2 */ 127 131 { PCI_VENDOR_ID_ATI, 0x6741, PCI_VENDOR_ID_APPLE, 0x00e2, RADEON_PX_QUIRK_LONG_WAKEUP }, 128 132 { 0, 0, 0, 0, 0 },
+4
drivers/gpu/drm/radeon/radeon_drv.c
··· 181 181 int radeon_deep_color = 0; 182 182 int radeon_use_pflipirq = 2; 183 183 int radeon_bapm = -1; 184 + int radeon_backlight = -1; 184 185 185 186 MODULE_PARM_DESC(no_wb, "Disable AGP writeback for scratch registers"); 186 187 module_param_named(no_wb, radeon_no_wb, int, 0444); ··· 263 262 264 263 MODULE_PARM_DESC(bapm, "BAPM support (1 = enable, 0 = disable, -1 = auto)"); 265 264 module_param_named(bapm, radeon_bapm, int, 0444); 265 + 266 + MODULE_PARM_DESC(backlight, "backlight support (1 = enable, 0 = disable, -1 = auto)"); 267 + module_param_named(backlight, radeon_backlight, int, 0444); 266 268 267 269 static struct pci_device_id pciidlist[] = { 268 270 radeon_PCI_IDS
+36 -8
drivers/gpu/drm/radeon/radeon_encoders.c
··· 158 158 return ret; 159 159 } 160 160 161 + static void radeon_encoder_add_backlight(struct radeon_encoder *radeon_encoder, 162 + struct drm_connector *connector) 163 + { 164 + struct drm_device *dev = radeon_encoder->base.dev; 165 + struct radeon_device *rdev = dev->dev_private; 166 + bool use_bl = false; 167 + 168 + if (!(radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))) 169 + return; 170 + 171 + if (radeon_backlight == 0) { 172 + return; 173 + } else if (radeon_backlight == 1) { 174 + use_bl = true; 175 + } else if (radeon_backlight == -1) { 176 + /* Quirks */ 177 + /* Amilo Xi 2550 only works with acpi bl */ 178 + if ((rdev->pdev->device == 0x9583) && 179 + (rdev->pdev->subsystem_vendor == 0x1734) && 180 + (rdev->pdev->subsystem_device == 0x1107)) 181 + use_bl = false; 182 + else 183 + use_bl = true; 184 + } 185 + 186 + if (use_bl) { 187 + if (rdev->is_atom_bios) 188 + radeon_atom_backlight_init(radeon_encoder, connector); 189 + else 190 + radeon_legacy_backlight_init(radeon_encoder, connector); 191 + rdev->mode_info.bl_encoder = radeon_encoder; 192 + } 193 + } 194 + 161 195 void 162 196 radeon_link_encoder_connector(struct drm_device *dev) 163 197 { 164 - struct radeon_device *rdev = dev->dev_private; 165 198 struct drm_connector *connector; 166 199 struct radeon_connector *radeon_connector; 167 200 struct drm_encoder *encoder; ··· 207 174 radeon_encoder = to_radeon_encoder(encoder); 208 175 if (radeon_encoder->devices & radeon_connector->devices) { 209 176 drm_mode_connector_attach_encoder(connector, encoder); 210 - if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { 211 - if (rdev->is_atom_bios) 212 - radeon_atom_backlight_init(radeon_encoder, connector); 213 - else 214 - radeon_legacy_backlight_init(radeon_encoder, connector); 215 - rdev->mode_info.bl_encoder = radeon_encoder; 216 - } 177 + if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) 178 + radeon_encoder_add_backlight(radeon_encoder, connector); 217 179 } 218 180 } 219 181 }
+4 -4
drivers/gpu/drm/radeon/si.c
··· 6316 6316 wptr = RREG32(IH_RB_WPTR); 6317 6317 6318 6318 if (wptr & RB_OVERFLOW) { 6319 + wptr &= ~RB_OVERFLOW; 6319 6320 /* When a ring buffer overflow happen start parsing interrupt 6320 6321 * from the last not overwritten vector (wptr + 16). Hopefully 6321 6322 * this should allow us to catchup. 6322 6323 */ 6323 - dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, %d, %d)\n", 6324 - wptr, rdev->ih.rptr, (wptr + 16) + rdev->ih.ptr_mask); 6324 + dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n", 6325 + wptr, rdev->ih.rptr, (wptr + 16) & rdev->ih.ptr_mask); 6325 6326 rdev->ih.rptr = (wptr + 16) & rdev->ih.ptr_mask; 6326 6327 tmp = RREG32(IH_RB_CNTL); 6327 6328 tmp |= IH_WPTR_OVERFLOW_CLEAR; 6328 6329 WREG32(IH_RB_CNTL, tmp); 6329 - wptr &= ~RB_OVERFLOW; 6330 6330 } 6331 6331 return (wptr & rdev->ih.ptr_mask); 6332 6332 } ··· 6664 6664 /* wptr/rptr are in bytes! */ 6665 6665 rptr += 16; 6666 6666 rptr &= rdev->ih.ptr_mask; 6667 + WREG32(IH_RB_RPTR, rptr); 6667 6668 } 6668 6669 if (queue_hotplug) 6669 6670 schedule_work(&rdev->hotplug_work); 6670 6671 if (queue_thermal && rdev->pm.dpm_enabled) 6671 6672 schedule_work(&rdev->pm.dpm.thermal.work); 6672 6673 rdev->ih.rptr = rptr; 6673 - WREG32(IH_RB_RPTR, rdev->ih.rptr); 6674 6674 atomic_set(&rdev->ih.lock, 0); 6675 6675 6676 6676 /* make sure wptr hasn't changed while processing */
+1 -4
drivers/i2c/Makefile
··· 2 2 # Makefile for the i2c core. 3 3 # 4 4 5 - i2ccore-y := i2c-core.o 6 - i2ccore-$(CONFIG_ACPI) += i2c-acpi.o 7 - 8 5 obj-$(CONFIG_I2C_BOARDINFO) += i2c-boardinfo.o 9 - obj-$(CONFIG_I2C) += i2ccore.o 6 + obj-$(CONFIG_I2C) += i2c-core.o 10 7 obj-$(CONFIG_I2C_SMBUS) += i2c-smbus.o 11 8 obj-$(CONFIG_I2C_CHARDEV) += i2c-dev.o 12 9 obj-$(CONFIG_I2C_MUX) += i2c-mux.o
+2 -2
drivers/i2c/busses/i2c-ismt.c
··· 497 497 desc->wr_len_cmd = dma_size; 498 498 desc->control |= ISMT_DESC_BLK; 499 499 priv->dma_buffer[0] = command; 500 - memcpy(&priv->dma_buffer[1], &data->block[1], dma_size); 500 + memcpy(&priv->dma_buffer[1], &data->block[1], dma_size - 1); 501 501 } else { 502 502 /* Block Read */ 503 503 dev_dbg(dev, "I2C_SMBUS_BLOCK_DATA: READ\n"); ··· 525 525 desc->wr_len_cmd = dma_size; 526 526 desc->control |= ISMT_DESC_I2C; 527 527 priv->dma_buffer[0] = command; 528 - memcpy(&priv->dma_buffer[1], &data->block[1], dma_size); 528 + memcpy(&priv->dma_buffer[1], &data->block[1], dma_size - 1); 529 529 } else { 530 530 /* i2c Block Read */ 531 531 dev_dbg(dev, "I2C_SMBUS_I2C_BLOCK_DATA: READ\n");
+1 -1
drivers/i2c/busses/i2c-mxs.c
··· 429 429 ret = mxs_i2c_pio_wait_xfer_end(i2c); 430 430 if (ret) { 431 431 dev_err(i2c->dev, 432 - "PIO: Failed to send SELECT command!\n"); 432 + "PIO: Failed to send READ command!\n"); 433 433 goto cleanup; 434 434 } 435 435
+2 -2
drivers/i2c/busses/i2c-rcar.c
··· 76 76 #define RCAR_IRQ_RECV (MNR | MAL | MST | MAT | MDR) 77 77 #define RCAR_IRQ_STOP (MST) 78 78 79 - #define RCAR_IRQ_ACK_SEND (~(MAT | MDE)) 80 - #define RCAR_IRQ_ACK_RECV (~(MAT | MDR)) 79 + #define RCAR_IRQ_ACK_SEND (~(MAT | MDE) & 0xFF) 80 + #define RCAR_IRQ_ACK_RECV (~(MAT | MDR) & 0xFF) 81 81 82 82 #define ID_LAST_MSG (1 << 0) 83 83 #define ID_IOERROR (1 << 1)
+5 -6
drivers/i2c/busses/i2c-rk3x.c
··· 433 433 unsigned long i2c_rate = clk_get_rate(i2c->clk); 434 434 unsigned int div; 435 435 436 - /* SCL rate = (clk rate) / (8 * DIV) */ 437 - div = DIV_ROUND_UP(i2c_rate, scl_rate * 8); 438 - 439 - /* The lower and upper half of the CLKDIV reg describe the length of 440 - * SCL low & high periods. */ 441 - div = DIV_ROUND_UP(div, 2); 436 + /* set DIV = DIVH = DIVL 437 + * SCL rate = (clk rate) / (8 * (DIVH + 1 + DIVL + 1)) 438 + * = (clk rate) / (16 * (DIV + 1)) 439 + */ 440 + div = DIV_ROUND_UP(i2c_rate, scl_rate * 16) - 1; 442 441 443 442 i2c_writel(i2c, (div << 16) | (div & 0xffff), REG_CLKDIV); 444 443 }
+45 -12
drivers/i2c/busses/i2c-tegra.c
··· 380 380 { 381 381 int ret; 382 382 if (!i2c_dev->hw->has_single_clk_source) { 383 - ret = clk_prepare_enable(i2c_dev->fast_clk); 383 + ret = clk_enable(i2c_dev->fast_clk); 384 384 if (ret < 0) { 385 385 dev_err(i2c_dev->dev, 386 386 "Enabling fast clk failed, err %d\n", ret); 387 387 return ret; 388 388 } 389 389 } 390 - ret = clk_prepare_enable(i2c_dev->div_clk); 390 + ret = clk_enable(i2c_dev->div_clk); 391 391 if (ret < 0) { 392 392 dev_err(i2c_dev->dev, 393 393 "Enabling div clk failed, err %d\n", ret); 394 - clk_disable_unprepare(i2c_dev->fast_clk); 394 + clk_disable(i2c_dev->fast_clk); 395 395 } 396 396 return ret; 397 397 } 398 398 399 399 static inline void tegra_i2c_clock_disable(struct tegra_i2c_dev *i2c_dev) 400 400 { 401 - clk_disable_unprepare(i2c_dev->div_clk); 401 + clk_disable(i2c_dev->div_clk); 402 402 if (!i2c_dev->hw->has_single_clk_source) 403 - clk_disable_unprepare(i2c_dev->fast_clk); 403 + clk_disable(i2c_dev->fast_clk); 404 404 } 405 405 406 406 static int tegra_i2c_init(struct tegra_i2c_dev *i2c_dev) 407 407 { 408 408 u32 val; 409 409 int err = 0; 410 - int clk_multiplier = I2C_CLK_MULTIPLIER_STD_FAST_MODE; 411 410 u32 clk_divisor; 412 411 413 412 err = tegra_i2c_clock_enable(i2c_dev); ··· 426 427 (0x2 << I2C_CNFG_DEBOUNCE_CNT_SHIFT); 427 428 i2c_writel(i2c_dev, val, I2C_CNFG); 428 429 i2c_writel(i2c_dev, 0, I2C_INT_MASK); 429 - 430 - clk_multiplier *= (i2c_dev->hw->clk_divisor_std_fast_mode + 1); 431 - clk_set_rate(i2c_dev->div_clk, i2c_dev->bus_clk_rate * clk_multiplier); 432 430 433 431 /* Make sure clock divisor programmed correctly */ 434 432 clk_divisor = i2c_dev->hw->clk_divisor_hs_mode; ··· 708 712 void __iomem *base; 709 713 int irq; 710 714 int ret = 0; 715 + int clk_multiplier = I2C_CLK_MULTIPLIER_STD_FAST_MODE; 711 716 712 717 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 713 718 base = devm_ioremap_resource(&pdev->dev, res); ··· 774 777 775 778 platform_set_drvdata(pdev, i2c_dev); 776 779 780 + if 
(!i2c_dev->hw->has_single_clk_source) { 781 + ret = clk_prepare(i2c_dev->fast_clk); 782 + if (ret < 0) { 783 + dev_err(i2c_dev->dev, "Clock prepare failed %d\n", ret); 784 + return ret; 785 + } 786 + } 787 + 788 + clk_multiplier *= (i2c_dev->hw->clk_divisor_std_fast_mode + 1); 789 + ret = clk_set_rate(i2c_dev->div_clk, 790 + i2c_dev->bus_clk_rate * clk_multiplier); 791 + if (ret) { 792 + dev_err(i2c_dev->dev, "Clock rate change failed %d\n", ret); 793 + goto unprepare_fast_clk; 794 + } 795 + 796 + ret = clk_prepare(i2c_dev->div_clk); 797 + if (ret < 0) { 798 + dev_err(i2c_dev->dev, "Clock prepare failed %d\n", ret); 799 + goto unprepare_fast_clk; 800 + } 801 + 777 802 ret = tegra_i2c_init(i2c_dev); 778 803 if (ret) { 779 804 dev_err(&pdev->dev, "Failed to initialize i2c controller"); 780 - return ret; 805 + goto unprepare_div_clk; 781 806 } 782 807 783 808 ret = devm_request_irq(&pdev->dev, i2c_dev->irq, 784 809 tegra_i2c_isr, 0, dev_name(&pdev->dev), i2c_dev); 785 810 if (ret) { 786 811 dev_err(&pdev->dev, "Failed to request irq %i\n", i2c_dev->irq); 787 - return ret; 812 + goto unprepare_div_clk; 788 813 } 789 814 790 815 i2c_set_adapdata(&i2c_dev->adapter, i2c_dev); ··· 822 803 ret = i2c_add_numbered_adapter(&i2c_dev->adapter); 823 804 if (ret) { 824 805 dev_err(&pdev->dev, "Failed to add I2C adapter\n"); 825 - return ret; 806 + goto unprepare_div_clk; 826 807 } 827 808 828 809 return 0; 810 + 811 + unprepare_div_clk: 812 + clk_unprepare(i2c_dev->div_clk); 813 + 814 + unprepare_fast_clk: 815 + if (!i2c_dev->hw->has_single_clk_source) 816 + clk_unprepare(i2c_dev->fast_clk); 817 + 818 + return ret; 829 819 } 830 820 831 821 static int tegra_i2c_remove(struct platform_device *pdev) 832 822 { 833 823 struct tegra_i2c_dev *i2c_dev = platform_get_drvdata(pdev); 834 824 i2c_del_adapter(&i2c_dev->adapter); 825 + 826 + clk_unprepare(i2c_dev->div_clk); 827 + if (!i2c_dev->hw->has_single_clk_source) 828 + clk_unprepare(i2c_dev->fast_clk); 829 + 835 830 return 0; 836 831 } 
837 832
-364
drivers/i2c/i2c-acpi.c
··· 1 - /* 2 - * I2C ACPI code 3 - * 4 - * Copyright (C) 2014 Intel Corp 5 - * 6 - * Author: Lan Tianyu <tianyu.lan@intel.com> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 - * 12 - * This program is distributed in the hope that it will be useful, but 13 - * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY 14 - * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License 15 - * for more details. 16 - */ 17 - #define pr_fmt(fmt) "I2C/ACPI : " fmt 18 - 19 - #include <linux/kernel.h> 20 - #include <linux/errno.h> 21 - #include <linux/err.h> 22 - #include <linux/i2c.h> 23 - #include <linux/acpi.h> 24 - 25 - struct acpi_i2c_handler_data { 26 - struct acpi_connection_info info; 27 - struct i2c_adapter *adapter; 28 - }; 29 - 30 - struct gsb_buffer { 31 - u8 status; 32 - u8 len; 33 - union { 34 - u16 wdata; 35 - u8 bdata; 36 - u8 data[0]; 37 - }; 38 - } __packed; 39 - 40 - static int acpi_i2c_add_resource(struct acpi_resource *ares, void *data) 41 - { 42 - struct i2c_board_info *info = data; 43 - 44 - if (ares->type == ACPI_RESOURCE_TYPE_SERIAL_BUS) { 45 - struct acpi_resource_i2c_serialbus *sb; 46 - 47 - sb = &ares->data.i2c_serial_bus; 48 - if (sb->type == ACPI_RESOURCE_SERIAL_TYPE_I2C) { 49 - info->addr = sb->slave_address; 50 - if (sb->access_mode == ACPI_I2C_10BIT_MODE) 51 - info->flags |= I2C_CLIENT_TEN; 52 - } 53 - } else if (info->irq < 0) { 54 - struct resource r; 55 - 56 - if (acpi_dev_resource_interrupt(ares, 0, &r)) 57 - info->irq = r.start; 58 - } 59 - 60 - /* Tell the ACPI core to skip this resource */ 61 - return 1; 62 - } 63 - 64 - static acpi_status acpi_i2c_add_device(acpi_handle handle, u32 level, 65 - void *data, void **return_value) 66 - { 67 - struct i2c_adapter *adapter = data; 68 - struct list_head resource_list; 69 - struct i2c_board_info info; 70 - 
struct acpi_device *adev; 71 - int ret; 72 - 73 - if (acpi_bus_get_device(handle, &adev)) 74 - return AE_OK; 75 - if (acpi_bus_get_status(adev) || !adev->status.present) 76 - return AE_OK; 77 - 78 - memset(&info, 0, sizeof(info)); 79 - info.acpi_node.companion = adev; 80 - info.irq = -1; 81 - 82 - INIT_LIST_HEAD(&resource_list); 83 - ret = acpi_dev_get_resources(adev, &resource_list, 84 - acpi_i2c_add_resource, &info); 85 - acpi_dev_free_resource_list(&resource_list); 86 - 87 - if (ret < 0 || !info.addr) 88 - return AE_OK; 89 - 90 - adev->power.flags.ignore_parent = true; 91 - strlcpy(info.type, dev_name(&adev->dev), sizeof(info.type)); 92 - if (!i2c_new_device(adapter, &info)) { 93 - adev->power.flags.ignore_parent = false; 94 - dev_err(&adapter->dev, 95 - "failed to add I2C device %s from ACPI\n", 96 - dev_name(&adev->dev)); 97 - } 98 - 99 - return AE_OK; 100 - } 101 - 102 - /** 103 - * acpi_i2c_register_devices - enumerate I2C slave devices behind adapter 104 - * @adap: pointer to adapter 105 - * 106 - * Enumerate all I2C slave devices behind this adapter by walking the ACPI 107 - * namespace. When a device is found it will be added to the Linux device 108 - * model and bound to the corresponding ACPI handle. 
109 - */ 110 - void acpi_i2c_register_devices(struct i2c_adapter *adap) 111 - { 112 - acpi_handle handle; 113 - acpi_status status; 114 - 115 - if (!adap->dev.parent) 116 - return; 117 - 118 - handle = ACPI_HANDLE(adap->dev.parent); 119 - if (!handle) 120 - return; 121 - 122 - status = acpi_walk_namespace(ACPI_TYPE_DEVICE, handle, 1, 123 - acpi_i2c_add_device, NULL, 124 - adap, NULL); 125 - if (ACPI_FAILURE(status)) 126 - dev_warn(&adap->dev, "failed to enumerate I2C slaves\n"); 127 - } 128 - 129 - #ifdef CONFIG_ACPI_I2C_OPREGION 130 - static int acpi_gsb_i2c_read_bytes(struct i2c_client *client, 131 - u8 cmd, u8 *data, u8 data_len) 132 - { 133 - 134 - struct i2c_msg msgs[2]; 135 - int ret; 136 - u8 *buffer; 137 - 138 - buffer = kzalloc(data_len, GFP_KERNEL); 139 - if (!buffer) 140 - return AE_NO_MEMORY; 141 - 142 - msgs[0].addr = client->addr; 143 - msgs[0].flags = client->flags; 144 - msgs[0].len = 1; 145 - msgs[0].buf = &cmd; 146 - 147 - msgs[1].addr = client->addr; 148 - msgs[1].flags = client->flags | I2C_M_RD; 149 - msgs[1].len = data_len; 150 - msgs[1].buf = buffer; 151 - 152 - ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs)); 153 - if (ret < 0) 154 - dev_err(&client->adapter->dev, "i2c read failed\n"); 155 - else 156 - memcpy(data, buffer, data_len); 157 - 158 - kfree(buffer); 159 - return ret; 160 - } 161 - 162 - static int acpi_gsb_i2c_write_bytes(struct i2c_client *client, 163 - u8 cmd, u8 *data, u8 data_len) 164 - { 165 - 166 - struct i2c_msg msgs[1]; 167 - u8 *buffer; 168 - int ret = AE_OK; 169 - 170 - buffer = kzalloc(data_len + 1, GFP_KERNEL); 171 - if (!buffer) 172 - return AE_NO_MEMORY; 173 - 174 - buffer[0] = cmd; 175 - memcpy(buffer + 1, data, data_len); 176 - 177 - msgs[0].addr = client->addr; 178 - msgs[0].flags = client->flags; 179 - msgs[0].len = data_len + 1; 180 - msgs[0].buf = buffer; 181 - 182 - ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs)); 183 - if (ret < 0) 184 - dev_err(&client->adapter->dev, "i2c write 
failed\n"); 185 - 186 - kfree(buffer); 187 - return ret; 188 - } 189 - 190 - static acpi_status 191 - acpi_i2c_space_handler(u32 function, acpi_physical_address command, 192 - u32 bits, u64 *value64, 193 - void *handler_context, void *region_context) 194 - { 195 - struct gsb_buffer *gsb = (struct gsb_buffer *)value64; 196 - struct acpi_i2c_handler_data *data = handler_context; 197 - struct acpi_connection_info *info = &data->info; 198 - struct acpi_resource_i2c_serialbus *sb; 199 - struct i2c_adapter *adapter = data->adapter; 200 - struct i2c_client client; 201 - struct acpi_resource *ares; 202 - u32 accessor_type = function >> 16; 203 - u8 action = function & ACPI_IO_MASK; 204 - acpi_status ret = AE_OK; 205 - int status; 206 - 207 - ret = acpi_buffer_to_resource(info->connection, info->length, &ares); 208 - if (ACPI_FAILURE(ret)) 209 - return ret; 210 - 211 - if (!value64 || ares->type != ACPI_RESOURCE_TYPE_SERIAL_BUS) { 212 - ret = AE_BAD_PARAMETER; 213 - goto err; 214 - } 215 - 216 - sb = &ares->data.i2c_serial_bus; 217 - if (sb->type != ACPI_RESOURCE_SERIAL_TYPE_I2C) { 218 - ret = AE_BAD_PARAMETER; 219 - goto err; 220 - } 221 - 222 - memset(&client, 0, sizeof(client)); 223 - client.adapter = adapter; 224 - client.addr = sb->slave_address; 225 - client.flags = 0; 226 - 227 - if (sb->access_mode == ACPI_I2C_10BIT_MODE) 228 - client.flags |= I2C_CLIENT_TEN; 229 - 230 - switch (accessor_type) { 231 - case ACPI_GSB_ACCESS_ATTRIB_SEND_RCV: 232 - if (action == ACPI_READ) { 233 - status = i2c_smbus_read_byte(&client); 234 - if (status >= 0) { 235 - gsb->bdata = status; 236 - status = 0; 237 - } 238 - } else { 239 - status = i2c_smbus_write_byte(&client, gsb->bdata); 240 - } 241 - break; 242 - 243 - case ACPI_GSB_ACCESS_ATTRIB_BYTE: 244 - if (action == ACPI_READ) { 245 - status = i2c_smbus_read_byte_data(&client, command); 246 - if (status >= 0) { 247 - gsb->bdata = status; 248 - status = 0; 249 - } 250 - } else { 251 - status = i2c_smbus_write_byte_data(&client, 
command, 252 - gsb->bdata); 253 - } 254 - break; 255 - 256 - case ACPI_GSB_ACCESS_ATTRIB_WORD: 257 - if (action == ACPI_READ) { 258 - status = i2c_smbus_read_word_data(&client, command); 259 - if (status >= 0) { 260 - gsb->wdata = status; 261 - status = 0; 262 - } 263 - } else { 264 - status = i2c_smbus_write_word_data(&client, command, 265 - gsb->wdata); 266 - } 267 - break; 268 - 269 - case ACPI_GSB_ACCESS_ATTRIB_BLOCK: 270 - if (action == ACPI_READ) { 271 - status = i2c_smbus_read_block_data(&client, command, 272 - gsb->data); 273 - if (status >= 0) { 274 - gsb->len = status; 275 - status = 0; 276 - } 277 - } else { 278 - status = i2c_smbus_write_block_data(&client, command, 279 - gsb->len, gsb->data); 280 - } 281 - break; 282 - 283 - case ACPI_GSB_ACCESS_ATTRIB_MULTIBYTE: 284 - if (action == ACPI_READ) { 285 - status = acpi_gsb_i2c_read_bytes(&client, command, 286 - gsb->data, info->access_length); 287 - if (status > 0) 288 - status = 0; 289 - } else { 290 - status = acpi_gsb_i2c_write_bytes(&client, command, 291 - gsb->data, info->access_length); 292 - } 293 - break; 294 - 295 - default: 296 - pr_info("protocol(0x%02x) is not supported.\n", accessor_type); 297 - ret = AE_BAD_PARAMETER; 298 - goto err; 299 - } 300 - 301 - gsb->status = status; 302 - 303 - err: 304 - ACPI_FREE(ares); 305 - return ret; 306 - } 307 - 308 - 309 - int acpi_i2c_install_space_handler(struct i2c_adapter *adapter) 310 - { 311 - acpi_handle handle = ACPI_HANDLE(adapter->dev.parent); 312 - struct acpi_i2c_handler_data *data; 313 - acpi_status status; 314 - 315 - if (!handle) 316 - return -ENODEV; 317 - 318 - data = kzalloc(sizeof(struct acpi_i2c_handler_data), 319 - GFP_KERNEL); 320 - if (!data) 321 - return -ENOMEM; 322 - 323 - data->adapter = adapter; 324 - status = acpi_bus_attach_private_data(handle, (void *)data); 325 - if (ACPI_FAILURE(status)) { 326 - kfree(data); 327 - return -ENOMEM; 328 - } 329 - 330 - status = acpi_install_address_space_handler(handle, 331 - 
ACPI_ADR_SPACE_GSBUS, 332 - &acpi_i2c_space_handler, 333 - NULL, 334 - data); 335 - if (ACPI_FAILURE(status)) { 336 - dev_err(&adapter->dev, "Error installing i2c space handler\n"); 337 - acpi_bus_detach_private_data(handle); 338 - kfree(data); 339 - return -ENOMEM; 340 - } 341 - 342 - return 0; 343 - } 344 - 345 - void acpi_i2c_remove_space_handler(struct i2c_adapter *adapter) 346 - { 347 - acpi_handle handle = ACPI_HANDLE(adapter->dev.parent); 348 - struct acpi_i2c_handler_data *data; 349 - acpi_status status; 350 - 351 - if (!handle) 352 - return; 353 - 354 - acpi_remove_address_space_handler(handle, 355 - ACPI_ADR_SPACE_GSBUS, 356 - &acpi_i2c_space_handler); 357 - 358 - status = acpi_bus_get_private_data(handle, (void **)&data); 359 - if (ACPI_SUCCESS(status)) 360 - kfree(data); 361 - 362 - acpi_bus_detach_private_data(handle); 363 - } 364 - #endif
+364
drivers/i2c/i2c-core.c
··· 27 27 OF support is copyright (c) 2008 Jochen Friedrich <jochen@scram.de> 28 28 (based on a previous patch from Jon Smirl <jonsmirl@gmail.com>) and 29 29 (c) 2013 Wolfram Sang <wsa@the-dreams.de> 30 + I2C ACPI code Copyright (C) 2014 Intel Corp 31 + Author: Lan Tianyu <tianyu.lan@intel.com> 30 32 */ 31 33 32 34 #include <linux/module.h> ··· 79 77 { 80 78 static_key_slow_dec(&i2c_trace_msg); 81 79 } 80 + 81 + #if defined(CONFIG_ACPI) 82 + struct acpi_i2c_handler_data { 83 + struct acpi_connection_info info; 84 + struct i2c_adapter *adapter; 85 + }; 86 + 87 + struct gsb_buffer { 88 + u8 status; 89 + u8 len; 90 + union { 91 + u16 wdata; 92 + u8 bdata; 93 + u8 data[0]; 94 + }; 95 + } __packed; 96 + 97 + static int acpi_i2c_add_resource(struct acpi_resource *ares, void *data) 98 + { 99 + struct i2c_board_info *info = data; 100 + 101 + if (ares->type == ACPI_RESOURCE_TYPE_SERIAL_BUS) { 102 + struct acpi_resource_i2c_serialbus *sb; 103 + 104 + sb = &ares->data.i2c_serial_bus; 105 + if (sb->type == ACPI_RESOURCE_SERIAL_TYPE_I2C) { 106 + info->addr = sb->slave_address; 107 + if (sb->access_mode == ACPI_I2C_10BIT_MODE) 108 + info->flags |= I2C_CLIENT_TEN; 109 + } 110 + } else if (info->irq < 0) { 111 + struct resource r; 112 + 113 + if (acpi_dev_resource_interrupt(ares, 0, &r)) 114 + info->irq = r.start; 115 + } 116 + 117 + /* Tell the ACPI core to skip this resource */ 118 + return 1; 119 + } 120 + 121 + static acpi_status acpi_i2c_add_device(acpi_handle handle, u32 level, 122 + void *data, void **return_value) 123 + { 124 + struct i2c_adapter *adapter = data; 125 + struct list_head resource_list; 126 + struct i2c_board_info info; 127 + struct acpi_device *adev; 128 + int ret; 129 + 130 + if (acpi_bus_get_device(handle, &adev)) 131 + return AE_OK; 132 + if (acpi_bus_get_status(adev) || !adev->status.present) 133 + return AE_OK; 134 + 135 + memset(&info, 0, sizeof(info)); 136 + info.acpi_node.companion = adev; 137 + info.irq = -1; 138 + 139 + 
INIT_LIST_HEAD(&resource_list); 140 + ret = acpi_dev_get_resources(adev, &resource_list, 141 + acpi_i2c_add_resource, &info); 142 + acpi_dev_free_resource_list(&resource_list); 143 + 144 + if (ret < 0 || !info.addr) 145 + return AE_OK; 146 + 147 + adev->power.flags.ignore_parent = true; 148 + strlcpy(info.type, dev_name(&adev->dev), sizeof(info.type)); 149 + if (!i2c_new_device(adapter, &info)) { 150 + adev->power.flags.ignore_parent = false; 151 + dev_err(&adapter->dev, 152 + "failed to add I2C device %s from ACPI\n", 153 + dev_name(&adev->dev)); 154 + } 155 + 156 + return AE_OK; 157 + } 158 + 159 + /** 160 + * acpi_i2c_register_devices - enumerate I2C slave devices behind adapter 161 + * @adap: pointer to adapter 162 + * 163 + * Enumerate all I2C slave devices behind this adapter by walking the ACPI 164 + * namespace. When a device is found it will be added to the Linux device 165 + * model and bound to the corresponding ACPI handle. 166 + */ 167 + static void acpi_i2c_register_devices(struct i2c_adapter *adap) 168 + { 169 + acpi_handle handle; 170 + acpi_status status; 171 + 172 + if (!adap->dev.parent) 173 + return; 174 + 175 + handle = ACPI_HANDLE(adap->dev.parent); 176 + if (!handle) 177 + return; 178 + 179 + status = acpi_walk_namespace(ACPI_TYPE_DEVICE, handle, 1, 180 + acpi_i2c_add_device, NULL, 181 + adap, NULL); 182 + if (ACPI_FAILURE(status)) 183 + dev_warn(&adap->dev, "failed to enumerate I2C slaves\n"); 184 + } 185 + 186 + #else /* CONFIG_ACPI */ 187 + static inline void acpi_i2c_register_devices(struct i2c_adapter *adap) { } 188 + #endif /* CONFIG_ACPI */ 189 + 190 + #ifdef CONFIG_ACPI_I2C_OPREGION 191 + static int acpi_gsb_i2c_read_bytes(struct i2c_client *client, 192 + u8 cmd, u8 *data, u8 data_len) 193 + { 194 + 195 + struct i2c_msg msgs[2]; 196 + int ret; 197 + u8 *buffer; 198 + 199 + buffer = kzalloc(data_len, GFP_KERNEL); 200 + if (!buffer) 201 + return AE_NO_MEMORY; 202 + 203 + msgs[0].addr = client->addr; 204 + msgs[0].flags = client->flags; 
205 + msgs[0].len = 1; 206 + msgs[0].buf = &cmd; 207 + 208 + msgs[1].addr = client->addr; 209 + msgs[1].flags = client->flags | I2C_M_RD; 210 + msgs[1].len = data_len; 211 + msgs[1].buf = buffer; 212 + 213 + ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs)); 214 + if (ret < 0) 215 + dev_err(&client->adapter->dev, "i2c read failed\n"); 216 + else 217 + memcpy(data, buffer, data_len); 218 + 219 + kfree(buffer); 220 + return ret; 221 + } 222 + 223 + static int acpi_gsb_i2c_write_bytes(struct i2c_client *client, 224 + u8 cmd, u8 *data, u8 data_len) 225 + { 226 + 227 + struct i2c_msg msgs[1]; 228 + u8 *buffer; 229 + int ret = AE_OK; 230 + 231 + buffer = kzalloc(data_len + 1, GFP_KERNEL); 232 + if (!buffer) 233 + return AE_NO_MEMORY; 234 + 235 + buffer[0] = cmd; 236 + memcpy(buffer + 1, data, data_len); 237 + 238 + msgs[0].addr = client->addr; 239 + msgs[0].flags = client->flags; 240 + msgs[0].len = data_len + 1; 241 + msgs[0].buf = buffer; 242 + 243 + ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs)); 244 + if (ret < 0) 245 + dev_err(&client->adapter->dev, "i2c write failed\n"); 246 + 247 + kfree(buffer); 248 + return ret; 249 + } 250 + 251 + static acpi_status 252 + acpi_i2c_space_handler(u32 function, acpi_physical_address command, 253 + u32 bits, u64 *value64, 254 + void *handler_context, void *region_context) 255 + { 256 + struct gsb_buffer *gsb = (struct gsb_buffer *)value64; 257 + struct acpi_i2c_handler_data *data = handler_context; 258 + struct acpi_connection_info *info = &data->info; 259 + struct acpi_resource_i2c_serialbus *sb; 260 + struct i2c_adapter *adapter = data->adapter; 261 + struct i2c_client client; 262 + struct acpi_resource *ares; 263 + u32 accessor_type = function >> 16; 264 + u8 action = function & ACPI_IO_MASK; 265 + acpi_status ret = AE_OK; 266 + int status; 267 + 268 + ret = acpi_buffer_to_resource(info->connection, info->length, &ares); 269 + if (ACPI_FAILURE(ret)) 270 + return ret; 271 + 272 + if (!value64 || ares->type 
!= ACPI_RESOURCE_TYPE_SERIAL_BUS) { 273 + ret = AE_BAD_PARAMETER; 274 + goto err; 275 + } 276 + 277 + sb = &ares->data.i2c_serial_bus; 278 + if (sb->type != ACPI_RESOURCE_SERIAL_TYPE_I2C) { 279 + ret = AE_BAD_PARAMETER; 280 + goto err; 281 + } 282 + 283 + memset(&client, 0, sizeof(client)); 284 + client.adapter = adapter; 285 + client.addr = sb->slave_address; 286 + client.flags = 0; 287 + 288 + if (sb->access_mode == ACPI_I2C_10BIT_MODE) 289 + client.flags |= I2C_CLIENT_TEN; 290 + 291 + switch (accessor_type) { 292 + case ACPI_GSB_ACCESS_ATTRIB_SEND_RCV: 293 + if (action == ACPI_READ) { 294 + status = i2c_smbus_read_byte(&client); 295 + if (status >= 0) { 296 + gsb->bdata = status; 297 + status = 0; 298 + } 299 + } else { 300 + status = i2c_smbus_write_byte(&client, gsb->bdata); 301 + } 302 + break; 303 + 304 + case ACPI_GSB_ACCESS_ATTRIB_BYTE: 305 + if (action == ACPI_READ) { 306 + status = i2c_smbus_read_byte_data(&client, command); 307 + if (status >= 0) { 308 + gsb->bdata = status; 309 + status = 0; 310 + } 311 + } else { 312 + status = i2c_smbus_write_byte_data(&client, command, 313 + gsb->bdata); 314 + } 315 + break; 316 + 317 + case ACPI_GSB_ACCESS_ATTRIB_WORD: 318 + if (action == ACPI_READ) { 319 + status = i2c_smbus_read_word_data(&client, command); 320 + if (status >= 0) { 321 + gsb->wdata = status; 322 + status = 0; 323 + } 324 + } else { 325 + status = i2c_smbus_write_word_data(&client, command, 326 + gsb->wdata); 327 + } 328 + break; 329 + 330 + case ACPI_GSB_ACCESS_ATTRIB_BLOCK: 331 + if (action == ACPI_READ) { 332 + status = i2c_smbus_read_block_data(&client, command, 333 + gsb->data); 334 + if (status >= 0) { 335 + gsb->len = status; 336 + status = 0; 337 + } 338 + } else { 339 + status = i2c_smbus_write_block_data(&client, command, 340 + gsb->len, gsb->data); 341 + } 342 + break; 343 + 344 + case ACPI_GSB_ACCESS_ATTRIB_MULTIBYTE: 345 + if (action == ACPI_READ) { 346 + status = acpi_gsb_i2c_read_bytes(&client, command, 347 + gsb->data, 
info->access_length); 348 + if (status > 0) 349 + status = 0; 350 + } else { 351 + status = acpi_gsb_i2c_write_bytes(&client, command, 352 + gsb->data, info->access_length); 353 + } 354 + break; 355 + 356 + default: 357 + pr_info("protocol(0x%02x) is not supported.\n", accessor_type); 358 + ret = AE_BAD_PARAMETER; 359 + goto err; 360 + } 361 + 362 + gsb->status = status; 363 + 364 + err: 365 + ACPI_FREE(ares); 366 + return ret; 367 + } 368 + 369 + 370 + static int acpi_i2c_install_space_handler(struct i2c_adapter *adapter) 371 + { 372 + acpi_handle handle; 373 + struct acpi_i2c_handler_data *data; 374 + acpi_status status; 375 + 376 + if (!adapter->dev.parent) 377 + return -ENODEV; 378 + 379 + handle = ACPI_HANDLE(adapter->dev.parent); 380 + 381 + if (!handle) 382 + return -ENODEV; 383 + 384 + data = kzalloc(sizeof(struct acpi_i2c_handler_data), 385 + GFP_KERNEL); 386 + if (!data) 387 + return -ENOMEM; 388 + 389 + data->adapter = adapter; 390 + status = acpi_bus_attach_private_data(handle, (void *)data); 391 + if (ACPI_FAILURE(status)) { 392 + kfree(data); 393 + return -ENOMEM; 394 + } 395 + 396 + status = acpi_install_address_space_handler(handle, 397 + ACPI_ADR_SPACE_GSBUS, 398 + &acpi_i2c_space_handler, 399 + NULL, 400 + data); 401 + if (ACPI_FAILURE(status)) { 402 + dev_err(&adapter->dev, "Error installing i2c space handler\n"); 403 + acpi_bus_detach_private_data(handle); 404 + kfree(data); 405 + return -ENOMEM; 406 + } 407 + 408 + return 0; 409 + } 410 + 411 + static void acpi_i2c_remove_space_handler(struct i2c_adapter *adapter) 412 + { 413 + acpi_handle handle; 414 + struct acpi_i2c_handler_data *data; 415 + acpi_status status; 416 + 417 + if (!adapter->dev.parent) 418 + return; 419 + 420 + handle = ACPI_HANDLE(adapter->dev.parent); 421 + 422 + if (!handle) 423 + return; 424 + 425 + acpi_remove_address_space_handler(handle, 426 + ACPI_ADR_SPACE_GSBUS, 427 + &acpi_i2c_space_handler); 428 + 429 + status = acpi_bus_get_private_data(handle, (void **)&data); 430 
+ if (ACPI_SUCCESS(status)) 431 + kfree(data); 432 + 433 + acpi_bus_detach_private_data(handle); 434 + } 435 + #else /* CONFIG_ACPI_I2C_OPREGION */ 436 + static inline void acpi_i2c_remove_space_handler(struct i2c_adapter *adapter) 437 + { } 438 + 439 + static inline int acpi_i2c_install_space_handler(struct i2c_adapter *adapter) 440 + { return 0; } 441 + #endif /* CONFIG_ACPI_I2C_OPREGION */ 82 442 83 443 /* ------------------------------------------------------------------------- */ 84 444
+7
drivers/input/serio/i8042-x86ia64io.h
··· 466 466 }, 467 467 }, 468 468 { 469 + /* Asus X450LCP */ 470 + .matches = { 471 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 472 + DMI_MATCH(DMI_PRODUCT_NAME, "X450LCP"), 473 + }, 474 + }, 475 + { 469 476 /* Avatar AVIU-145A6 */ 470 477 .matches = { 471 478 DMI_MATCH(DMI_SYS_VENDOR, "Intel"),
+3 -2
drivers/net/ethernet/broadcom/bnx2.c
··· 3236 3236 3237 3237 skb->protocol = eth_type_trans(skb, bp->dev); 3238 3238 3239 - if ((len > (bp->dev->mtu + ETH_HLEN)) && 3240 - (ntohs(skb->protocol) != 0x8100)) { 3239 + if (len > (bp->dev->mtu + ETH_HLEN) && 3240 + skb->protocol != htons(0x8100) && 3241 + skb->protocol != htons(ETH_P_8021AD)) { 3241 3242 3242 3243 dev_kfree_skb(skb); 3243 3244 goto next_rx;
+2 -1
drivers/net/ethernet/broadcom/tg3.c
··· 6918 6918 skb->protocol = eth_type_trans(skb, tp->dev); 6919 6919 6920 6920 if (len > (tp->dev->mtu + ETH_HLEN) && 6921 - skb->protocol != htons(ETH_P_8021Q)) { 6921 + skb->protocol != htons(ETH_P_8021Q) && 6922 + skb->protocol != htons(ETH_P_8021AD)) { 6922 6923 dev_kfree_skb_any(skb); 6923 6924 goto drop_it_no_recycle; 6924 6925 }
-11
drivers/net/ethernet/cadence/macb.c
··· 30 30 #include <linux/of_device.h> 31 31 #include <linux/of_mdio.h> 32 32 #include <linux/of_net.h> 33 - #include <linux/pinctrl/consumer.h> 34 33 35 34 #include "macb.h" 36 35 ··· 2070 2071 struct phy_device *phydev; 2071 2072 u32 config; 2072 2073 int err = -ENXIO; 2073 - struct pinctrl *pinctrl; 2074 2074 const char *mac; 2075 2075 2076 2076 regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2077 2077 if (!regs) { 2078 2078 dev_err(&pdev->dev, "no mmio resource defined\n"); 2079 2079 goto err_out; 2080 - } 2081 - 2082 - pinctrl = devm_pinctrl_get_select_default(&pdev->dev); 2083 - if (IS_ERR(pinctrl)) { 2084 - err = PTR_ERR(pinctrl); 2085 - if (err == -EPROBE_DEFER) 2086 - goto err_out; 2087 - 2088 - dev_warn(&pdev->dev, "No pinctrl provided\n"); 2089 2080 } 2090 2081 2091 2082 err = -ENOMEM;
+2 -2
drivers/net/ethernet/mellanox/mlx4/main.c
··· 78 78 #endif /* CONFIG_PCI_MSI */ 79 79 80 80 static uint8_t num_vfs[3] = {0, 0, 0}; 81 - static int num_vfs_argc = 3; 81 + static int num_vfs_argc; 82 82 module_param_array(num_vfs, byte , &num_vfs_argc, 0444); 83 83 MODULE_PARM_DESC(num_vfs, "enable #num_vfs functions if num_vfs > 0\n" 84 84 "num_vfs=port1,port2,port1+2"); 85 85 86 86 static uint8_t probe_vf[3] = {0, 0, 0}; 87 - static int probe_vfs_argc = 3; 87 + static int probe_vfs_argc; 88 88 module_param_array(probe_vf, byte, &probe_vfs_argc, 0444); 89 89 MODULE_PARM_DESC(probe_vf, "number of vfs to probe by pf driver (num_vfs > 0)\n" 90 90 "probe_vf=port1,port2,port1+2");
+4 -2
drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c
··· 135 135 int i, j; 136 136 struct nx_host_tx_ring *tx_ring = adapter->tx_ring; 137 137 138 + spin_lock(&adapter->tx_clean_lock); 138 139 cmd_buf = tx_ring->cmd_buf_arr; 139 140 for (i = 0; i < tx_ring->num_desc; i++) { 140 141 buffrag = cmd_buf->frag_array; ··· 159 158 } 160 159 cmd_buf++; 161 160 } 161 + spin_unlock(&adapter->tx_clean_lock); 162 162 } 163 163 164 164 void netxen_free_sw_resources(struct netxen_adapter *adapter) ··· 1794 1792 break; 1795 1793 } 1796 1794 1797 - if (count && netif_running(netdev)) { 1798 - tx_ring->sw_consumer = sw_consumer; 1795 + tx_ring->sw_consumer = sw_consumer; 1799 1796 1797 + if (count && netif_running(netdev)) { 1800 1798 smp_mb(); 1801 1799 1802 1800 if (netif_queue_stopped(netdev) && netif_carrier_ok(netdev))
-2
drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
··· 1186 1186 return; 1187 1187 1188 1188 smp_mb(); 1189 - spin_lock(&adapter->tx_clean_lock); 1190 1189 netif_carrier_off(netdev); 1191 1190 netif_tx_disable(netdev); 1192 1191 ··· 1203 1204 netxen_napi_disable(adapter); 1204 1205 1205 1206 netxen_release_tx_buffers(adapter); 1206 - spin_unlock(&adapter->tx_clean_lock); 1207 1207 } 1208 1208 1209 1209 /* Usage: During suspend and firmware recovery module */
+2 -3
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
··· 1177 1177 { 1178 1178 u32 idc_params, val; 1179 1179 1180 - if (qlcnic_83xx_lockless_flash_read32(adapter, 1181 - QLC_83XX_IDC_FLASH_PARAM_ADDR, 1182 - (u8 *)&idc_params, 1)) { 1180 + if (qlcnic_83xx_flash_read32(adapter, QLC_83XX_IDC_FLASH_PARAM_ADDR, 1181 + (u8 *)&idc_params, 1)) { 1183 1182 dev_info(&adapter->pdev->dev, 1184 1183 "%s:failed to get IDC params from flash\n", __func__); 1185 1184 adapter->dev_init_timeo = QLC_83XX_IDC_INIT_TIMEOUT_SECS;
+5 -5
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
··· 1333 1333 struct qlcnic_host_tx_ring *tx_ring; 1334 1334 struct qlcnic_esw_statistics port_stats; 1335 1335 struct qlcnic_mac_statistics mac_stats; 1336 - int index, ret, length, size, tx_size, ring; 1336 + int index, ret, length, size, ring; 1337 1337 char *p; 1338 1338 1339 - tx_size = adapter->drv_tx_rings * QLCNIC_TX_STATS_LEN; 1339 + memset(data, 0, stats->n_stats * sizeof(u64)); 1340 1340 1341 - memset(data, 0, tx_size * sizeof(u64)); 1342 1341 for (ring = 0, index = 0; ring < adapter->drv_tx_rings; ring++) { 1343 - if (test_bit(__QLCNIC_DEV_UP, &adapter->state)) { 1342 + if (adapter->is_up == QLCNIC_ADAPTER_UP_MAGIC) { 1344 1343 tx_ring = &adapter->tx_ring[ring]; 1345 1344 data = qlcnic_fill_tx_queue_stats(data, tx_ring); 1346 1345 qlcnic_update_stats(adapter); 1346 + } else { 1347 + data += QLCNIC_TX_STATS_LEN; 1347 1348 } 1348 1349 } 1349 1350 1350 - memset(data, 0, stats->n_stats * sizeof(u64)); 1351 1351 length = QLCNIC_STATS_LEN; 1352 1352 for (index = 0; index < length; index++) { 1353 1353 p = (char *)adapter + qlcnic_gstrings_stats[index].stat_offset;
+9 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 2786 2786 if (IS_ERR(priv->stmmac_clk)) { 2787 2787 dev_warn(priv->device, "%s: warning: cannot get CSR clock\n", 2788 2788 __func__); 2789 - ret = PTR_ERR(priv->stmmac_clk); 2790 - goto error_clk_get; 2789 + /* If we failed to obtain the stmmac_clk and no specific 2790 + * clk_csr value was passed from the platform, fail the probe. 2791 + */ 2792 + if (!priv->plat->clk_csr) { 2793 + ret = PTR_ERR(priv->stmmac_clk); 2794 + goto error_clk_get; 2795 + } else { 2796 + priv->stmmac_clk = NULL; 2797 + } 2791 2798 } 2792 2799 clk_prepare_enable(priv->stmmac_clk); 2793 2800
+2 -1
drivers/net/hyperv/netvsc_drv.c
··· 387 387 int hdr_offset; 388 388 u32 net_trans_info; 389 389 u32 hash; 390 + u32 skb_length = skb->len; 390 391 391 392 392 393 /* We will at most need two pages to describe the rndis ··· 563 562 564 563 drop: 565 564 if (ret == 0) { 566 - net->stats.tx_bytes += skb->len; 565 + net->stats.tx_bytes += skb_length; 567 566 net->stats.tx_packets++; 568 567 } else { 569 568 kfree(packet);

+8 -10
drivers/net/macvtap.c
··· 112 112 return err; 113 113 } 114 114 115 + /* Requires RTNL */ 115 116 static int macvtap_set_queue(struct net_device *dev, struct file *file, 116 117 struct macvtap_queue *q) 117 118 { 118 119 struct macvlan_dev *vlan = netdev_priv(dev); 119 - int err = -EBUSY; 120 120 121 - rtnl_lock(); 122 121 if (vlan->numqueues == MAX_MACVTAP_QUEUES) 123 - goto out; 122 + return -EBUSY; 124 123 125 - err = 0; 126 124 rcu_assign_pointer(q->vlan, vlan); 127 125 rcu_assign_pointer(vlan->taps[vlan->numvtaps], q); 128 126 sock_hold(&q->sk); ··· 134 136 vlan->numvtaps++; 135 137 vlan->numqueues++; 136 138 137 - out: 138 - rtnl_unlock(); 139 - return err; 139 + return 0; 140 140 } 141 141 142 142 static int macvtap_disable_queue(struct macvtap_queue *q) ··· 450 454 static int macvtap_open(struct inode *inode, struct file *file) 451 455 { 452 456 struct net *net = current->nsproxy->net_ns; 453 - struct net_device *dev = dev_get_by_macvtap_minor(iminor(inode)); 457 + struct net_device *dev; 454 458 struct macvtap_queue *q; 455 - int err; 459 + int err = -ENODEV; 456 460 457 - err = -ENODEV; 461 + rtnl_lock(); 462 + dev = dev_get_by_macvtap_minor(iminor(inode)); 458 463 if (!dev) 459 464 goto out; 460 465 ··· 495 498 if (dev) 496 499 dev_put(dev); 497 500 501 + rtnl_unlock(); 498 502 return err; 499 503 } 500 504
+42 -46
drivers/net/usb/r8152.c
··· 26 26 #include <linux/mdio.h> 27 27 28 28 /* Version Information */ 29 - #define DRIVER_VERSION "v1.06.0 (2014/03/03)" 29 + #define DRIVER_VERSION "v1.06.1 (2014/10/01)" 30 30 #define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>" 31 31 #define DRIVER_DESC "Realtek RTL8152/RTL8153 Based USB Ethernet Adapters" 32 32 #define MODULENAME "r8152" ··· 1979 1979 ocp_write_word(tp, MCU_TYPE_PLA, PLA_MISC_1, ocp_data); 1980 1980 } 1981 1981 1982 + static int rtl_start_rx(struct r8152 *tp) 1983 + { 1984 + int i, ret = 0; 1985 + 1986 + INIT_LIST_HEAD(&tp->rx_done); 1987 + for (i = 0; i < RTL8152_MAX_RX; i++) { 1988 + INIT_LIST_HEAD(&tp->rx_info[i].list); 1989 + ret = r8152_submit_rx(tp, &tp->rx_info[i], GFP_KERNEL); 1990 + if (ret) 1991 + break; 1992 + } 1993 + 1994 + return ret; 1995 + } 1996 + 1997 + static int rtl_stop_rx(struct r8152 *tp) 1998 + { 1999 + int i; 2000 + 2001 + for (i = 0; i < RTL8152_MAX_RX; i++) 2002 + usb_kill_urb(tp->rx_info[i].urb); 2003 + 2004 + return 0; 2005 + } 2006 + 1982 2007 static int rtl_enable(struct r8152 *tp) 1983 2008 { 1984 2009 u32 ocp_data; 1985 - int i, ret; 1986 2010 1987 2011 r8152b_reset_packet_filter(tp); 1988 2012 ··· 2016 1992 2017 1993 rxdy_gated_en(tp, false); 2018 1994 2019 - INIT_LIST_HEAD(&tp->rx_done); 2020 - ret = 0; 2021 - for (i = 0; i < RTL8152_MAX_RX; i++) { 2022 - INIT_LIST_HEAD(&tp->rx_info[i].list); 2023 - ret |= r8152_submit_rx(tp, &tp->rx_info[i], GFP_KERNEL); 2024 - } 2025 - 2026 - return ret; 1995 + return rtl_start_rx(tp); 2027 1996 } 2028 1997 2029 1998 static int rtl8152_enable(struct r8152 *tp) ··· 2100 2083 usleep_range(1000, 2000); 2101 2084 } 2102 2085 2103 - for (i = 0; i < RTL8152_MAX_RX; i++) 2104 - usb_kill_urb(tp->rx_info[i].urb); 2086 + rtl_stop_rx(tp); 2105 2087 2106 2088 rtl8152_nic_reset(tp); 2107 2089 } ··· 2259 2243 } 2260 2244 } 2261 2245 2262 - static void rtl_clear_bp(struct r8152 *tp) 2263 - { 2264 - ocp_write_dword(tp, MCU_TYPE_PLA, PLA_BP_0, 0); 2265 - 
ocp_write_dword(tp, MCU_TYPE_PLA, PLA_BP_2, 0); 2266 - ocp_write_dword(tp, MCU_TYPE_PLA, PLA_BP_4, 0); 2267 - ocp_write_dword(tp, MCU_TYPE_PLA, PLA_BP_6, 0); 2268 - ocp_write_dword(tp, MCU_TYPE_USB, USB_BP_0, 0); 2269 - ocp_write_dword(tp, MCU_TYPE_USB, USB_BP_2, 0); 2270 - ocp_write_dword(tp, MCU_TYPE_USB, USB_BP_4, 0); 2271 - ocp_write_dword(tp, MCU_TYPE_USB, USB_BP_6, 0); 2272 - usleep_range(3000, 6000); 2273 - ocp_write_word(tp, MCU_TYPE_PLA, PLA_BP_BA, 0); 2274 - ocp_write_word(tp, MCU_TYPE_USB, USB_BP_BA, 0); 2275 - } 2276 - 2277 - static void r8153_clear_bp(struct r8152 *tp) 2278 - { 2279 - ocp_write_byte(tp, MCU_TYPE_PLA, PLA_BP_EN, 0); 2280 - ocp_write_byte(tp, MCU_TYPE_USB, USB_BP_EN, 0); 2281 - rtl_clear_bp(tp); 2282 - } 2283 - 2284 2246 static void r8153_teredo_off(struct r8152 *tp) 2285 2247 { 2286 2248 u32 ocp_data; ··· 2300 2306 data &= ~BMCR_PDOWN; 2301 2307 r8152_mdio_write(tp, MII_BMCR, data); 2302 2308 } 2303 - 2304 - rtl_clear_bp(tp); 2305 2309 2306 2310 set_bit(PHY_RESET, &tp->flags); 2307 2311 } ··· 2446 2454 data &= ~BMCR_PDOWN; 2447 2455 r8152_mdio_write(tp, MII_BMCR, data); 2448 2456 } 2449 - 2450 - r8153_clear_bp(tp); 2451 2457 2452 2458 if (tp->version == RTL_VER_03) { 2453 2459 data = ocp_reg_read(tp, OCP_EEE_CFG); ··· 3171 3181 clear_bit(WORK_ENABLE, &tp->flags); 3172 3182 usb_kill_urb(tp->intr_urb); 3173 3183 cancel_delayed_work_sync(&tp->schedule); 3184 + tasklet_disable(&tp->tl); 3174 3185 if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) { 3186 + rtl_stop_rx(tp); 3175 3187 rtl_runtime_suspend_enable(tp, true); 3176 3188 } else { 3177 - tasklet_disable(&tp->tl); 3178 3189 tp->rtl_ops.down(tp); 3179 - tasklet_enable(&tp->tl); 3180 3190 } 3191 + tasklet_enable(&tp->tl); 3181 3192 } 3182 3193 3183 3194 return 0; ··· 3197 3206 if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) { 3198 3207 rtl_runtime_suspend_enable(tp, false); 3199 3208 clear_bit(SELECTIVE_SUSPEND, &tp->flags); 3209 + set_bit(WORK_ENABLE, &tp->flags); 3200 3210 if (tp->speed & 
LINK_STATUS) 3201 - tp->rtl_ops.disable(tp); 3211 + rtl_start_rx(tp); 3202 3212 } else { 3203 3213 tp->rtl_ops.up(tp); 3204 3214 rtl8152_set_speed(tp, AUTONEG_ENABLE, 3205 3215 tp->mii.supports_gmii ? 3206 3216 SPEED_1000 : SPEED_100, 3207 3217 DUPLEX_FULL); 3218 + tp->speed = 0; 3219 + netif_carrier_off(tp->netdev); 3220 + set_bit(WORK_ENABLE, &tp->flags); 3208 3221 } 3209 - tp->speed = 0; 3210 - netif_carrier_off(tp->netdev); 3211 - set_bit(WORK_ENABLE, &tp->flags); 3212 3222 usb_submit_urb(tp->intr_urb, GFP_KERNEL); 3213 3223 } 3214 3224 ··· 3615 3623 if (test_bit(RTL8152_UNPLUG, &tp->flags)) 3616 3624 return; 3617 3625 3618 - r8153_power_cut_en(tp, true); 3626 + r8153_power_cut_en(tp, false); 3619 3627 } 3620 3628 3621 3629 static int rtl_ops_init(struct r8152 *tp, const struct usb_device_id *id) ··· 3780 3788 3781 3789 usb_set_intfdata(intf, NULL); 3782 3790 if (tp) { 3783 - set_bit(RTL8152_UNPLUG, &tp->flags); 3791 + struct usb_device *udev = tp->udev; 3792 + 3793 + if (udev->state == USB_STATE_NOTATTACHED) 3794 + set_bit(RTL8152_UNPLUG, &tp->flags); 3795 + 3784 3796 tasklet_kill(&tp->tl); 3785 3797 unregister_netdev(tp->netdev); 3786 3798 tp->rtl_ops.unload(tp);
+14 -2
drivers/of/base.c
··· 138 138 /* Important: Don't leak passwords */ 139 139 bool secure = strncmp(pp->name, "security-", 9) == 0; 140 140 141 + if (!IS_ENABLED(CONFIG_SYSFS)) 142 + return 0; 143 + 141 144 if (!of_kset || !of_node_is_attached(np)) 142 145 return 0; 143 146 ··· 160 157 const char *name; 161 158 struct property *pp; 162 159 int rc; 160 + 161 + if (!IS_ENABLED(CONFIG_SYSFS)) 162 + return 0; 163 163 164 164 if (!of_kset) 165 165 return 0; ··· 1719 1713 1720 1714 void __of_remove_property_sysfs(struct device_node *np, struct property *prop) 1721 1715 { 1716 + if (!IS_ENABLED(CONFIG_SYSFS)) 1717 + return; 1718 + 1722 1719 /* at early boot, bail here and defer setup to of_init() */ 1723 1720 if (of_kset && of_node_is_attached(np)) 1724 1721 sysfs_remove_bin_file(&np->kobj, &prop->attr); ··· 1786 1777 void __of_update_property_sysfs(struct device_node *np, struct property *newprop, 1787 1778 struct property *oldprop) 1788 1779 { 1780 + if (!IS_ENABLED(CONFIG_SYSFS)) 1781 + return; 1782 + 1789 1783 /* At early boot, bail out and defer setup to of_init() */ 1790 1784 if (!of_kset) 1791 1785 return; ··· 1859 1847 { 1860 1848 struct property *pp; 1861 1849 1850 + of_aliases = of_find_node_by_path("/aliases"); 1862 1851 of_chosen = of_find_node_by_path("/chosen"); 1863 1852 if (of_chosen == NULL) 1864 1853 of_chosen = of_find_node_by_path("/chosen@0"); ··· 1875 1862 of_stdout = of_find_node_by_path(name); 1876 1863 } 1877 1864 1878 - of_aliases = of_find_node_by_path("/aliases"); 1879 1865 if (!of_aliases) 1880 1866 return; 1881 1867 ··· 1998 1986 { 1999 1987 if (!dn || dn != of_stdout || console_set_on_cmdline) 2000 1988 return false; 2001 - return add_preferred_console(name, index, NULL); 1989 + return !add_preferred_console(name, index, NULL); 2002 1990 } 2003 1991 EXPORT_SYMBOL_GPL(of_console_check); 2004 1992
+3
drivers/of/dynamic.c
··· 45 45 { 46 46 struct property *pp; 47 47 48 + if (!IS_ENABLED(CONFIG_SYSFS)) 49 + return; 50 + 48 51 BUG_ON(!of_node_is_initialized(np)); 49 52 if (!of_kset) 50 53 return;
+9 -5
drivers/of/fdt.c
··· 928 928 void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size) 929 929 { 930 930 const u64 phys_offset = __pa(PAGE_OFFSET); 931 - base &= PAGE_MASK; 931 + 932 + if (!PAGE_ALIGNED(base)) { 933 + size -= PAGE_SIZE - (base & ~PAGE_MASK); 934 + base = PAGE_ALIGN(base); 935 + } 932 936 size &= PAGE_MASK; 933 937 934 938 if (base > MAX_PHYS_ADDR) { ··· 941 937 return; 942 938 } 943 939 944 - if (base + size > MAX_PHYS_ADDR) { 945 - pr_warning("Ignoring memory range 0x%lx - 0x%llx\n", 946 - ULONG_MAX, base + size); 947 - size = MAX_PHYS_ADDR - base; 940 + if (base + size - 1 > MAX_PHYS_ADDR) { 941 + pr_warning("Ignoring memory range 0x%llx - 0x%llx\n", 942 + ((u64)MAX_PHYS_ADDR) + 1, base + size); 943 + size = MAX_PHYS_ADDR - base + 1; 948 944 } 949 945 950 946 if (base + size < phys_offset) {
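The early_init_dt_add_memory_arch() change above stops rounding a misaligned base down (which would claim memory below the described region); it now rounds the base up and shrinks the size by the skipped bytes. A user-space sketch of that arithmetic, with PAGE_SIZE and the helper macros as illustrative stand-ins for the kernel's:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the kernel's page helpers. */
#define PAGE_SIZE       4096ULL
#define PAGE_MASK       (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)   (((x) + PAGE_SIZE - 1) & PAGE_MASK)
#define PAGE_ALIGNED(x) (((x) & ~PAGE_MASK) == 0)

/* Trim [base, base+size) to whole pages, rounding base *up*
 * (the new behaviour) instead of masking it down (the old bug). */
static void trim_to_pages(uint64_t *base, uint64_t *size)
{
    if (!PAGE_ALIGNED(*base)) {
        *size -= PAGE_SIZE - (*base & ~PAGE_MASK); /* drop the partial head */
        *base = PAGE_ALIGN(*base);
    }
    *size &= PAGE_MASK;                            /* drop the partial tail */
}
```

An already-aligned base is left untouched; only the size is truncated to a page multiple.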
+1
drivers/rtc/rtc-efi.c
··· 232 232 233 233 module_platform_driver_probe(efi_rtc_driver, efi_rtc_probe); 234 234 235 + MODULE_ALIAS("platform:rtc-efi"); 235 236 MODULE_AUTHOR("dann frazier <dannf@hp.com>"); 236 237 MODULE_LICENSE("GPL"); 237 238 MODULE_DESCRIPTION("EFI RTC driver");
+33 -13
drivers/soc/qcom/qcom_gsbi.c
··· 22 22 #define GSBI_CTRL_REG 0x0000 23 23 #define GSBI_PROTOCOL_SHIFT 4 24 24 25 + struct gsbi_info { 26 + struct clk *hclk; 27 + u32 mode; 28 + u32 crci; 29 + }; 30 + 25 31 static int gsbi_probe(struct platform_device *pdev) 26 32 { 27 33 struct device_node *node = pdev->dev.of_node; 28 34 struct resource *res; 29 35 void __iomem *base; 30 - struct clk *hclk; 31 - u32 mode, crci = 0; 36 + struct gsbi_info *gsbi; 37 + 38 + gsbi = devm_kzalloc(&pdev->dev, sizeof(*gsbi), GFP_KERNEL); 39 + 40 + if (!gsbi) 41 + return -ENOMEM; 32 42 33 43 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 34 44 base = devm_ioremap_resource(&pdev->dev, res); 35 45 if (IS_ERR(base)) 36 46 return PTR_ERR(base); 37 47 38 - if (of_property_read_u32(node, "qcom,mode", &mode)) { 48 + if (of_property_read_u32(node, "qcom,mode", &gsbi->mode)) { 39 49 dev_err(&pdev->dev, "missing mode configuration\n"); 40 50 return -EINVAL; 41 51 } 42 52 43 53 /* not required, so default to 0 if not present */ 44 - of_property_read_u32(node, "qcom,crci", &crci); 54 + of_property_read_u32(node, "qcom,crci", &gsbi->crci); 45 55 46 - dev_info(&pdev->dev, "GSBI port protocol: %d crci: %d\n", mode, crci); 56 + dev_info(&pdev->dev, "GSBI port protocol: %d crci: %d\n", 57 + gsbi->mode, gsbi->crci); 58 + gsbi->hclk = devm_clk_get(&pdev->dev, "iface"); 59 + if (IS_ERR(gsbi->hclk)) 60 + return PTR_ERR(gsbi->hclk); 47 61 48 - hclk = devm_clk_get(&pdev->dev, "iface"); 49 - if (IS_ERR(hclk)) 50 - return PTR_ERR(hclk); 62 + clk_prepare_enable(gsbi->hclk); 51 63 52 - clk_prepare_enable(hclk); 53 - 54 - writel_relaxed((mode << GSBI_PROTOCOL_SHIFT) | crci, 64 + writel_relaxed((gsbi->mode << GSBI_PROTOCOL_SHIFT) | gsbi->crci, 55 65 base + GSBI_CTRL_REG); 56 66 57 67 /* make sure the gsbi control write is not reordered */ 58 68 wmb(); 59 69 60 - clk_disable_unprepare(hclk); 70 + platform_set_drvdata(pdev, gsbi); 61 71 62 - return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev); 72 + return of_platform_populate(node, NULL, NULL, &pdev->dev); 73 + } 74 + 75 + static int gsbi_remove(struct platform_device *pdev) 76 + { 77 + struct gsbi_info *gsbi = platform_get_drvdata(pdev); 78 + 79 + clk_disable_unprepare(gsbi->hclk); 80 + 81 + return 0; 63 82 } 64 83 65 84 static const struct of_device_id gsbi_dt_match[] = { ··· 95 76 .of_match_table = gsbi_dt_match, 96 77 }, 97 78 .probe = gsbi_probe, 79 + .remove = gsbi_remove, 98 80 }; 99 81 100 82 module_platform_driver(gsbi_driver);
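The gsbi change moves the clock handle into per-device state so a new remove() callback can disable it; probe() hands that state to the driver core via drvdata. A toy model of the handoff (the struct and helper names are stand-ins for the platform driver API, not the real one):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the driver model's device and drvdata API. */
struct device { void *drvdata; };
struct gsbi_state { int clk_enabled; };

static void set_drvdata(struct device *d, void *p) { d->drvdata = p; }
static void *get_drvdata(struct device *d) { return d->drvdata; }

static void demo_probe(struct device *d, struct gsbi_state *s)
{
    s->clk_enabled = 1;  /* models clk_prepare_enable(gsbi->hclk) */
    set_drvdata(d, s);   /* stash state where remove() can find it */
}

static void demo_remove(struct device *d)
{
    struct gsbi_state *s = get_drvdata(d);

    s->clk_enabled = 0;  /* models clk_disable_unprepare(gsbi->hclk) */
}
```

Before this patch the clock was disabled at the end of probe(); keeping it in drvdata lets it stay enabled for the device's lifetime and be released on removal.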
+4 -4
fs/cachefiles/bind.c
··· 50 50 cache->brun_percent < 100); 51 51 52 52 if (*args) { 53 - pr_err("'bind' command doesn't take an argument"); 53 + pr_err("'bind' command doesn't take an argument\n"); 54 54 return -EINVAL; 55 55 } 56 56 57 57 if (!cache->rootdirname) { 58 - pr_err("No cache directory specified"); 58 + pr_err("No cache directory specified\n"); 59 59 return -EINVAL; 60 60 } 61 61 62 62 /* don't permit already bound caches to be re-bound */ 63 63 if (test_bit(CACHEFILES_READY, &cache->flags)) { 64 - pr_err("Cache already bound"); 64 + pr_err("Cache already bound\n"); 65 65 return -EBUSY; 66 66 } 67 67 ··· 248 248 kmem_cache_free(cachefiles_object_jar, fsdef); 249 249 error_root_object: 250 250 cachefiles_end_secure(cache, saved_cred); 251 - pr_err("Failed to register: %d", ret); 251 + pr_err("Failed to register: %d\n", ret); 252 252 return ret; 253 253 } 254 254
+15 -15
fs/cachefiles/daemon.c
··· 315 315 static int cachefiles_daemon_range_error(struct cachefiles_cache *cache, 316 316 char *args) 317 317 { 318 - pr_err("Free space limits must be in range 0%%<=stop<cull<run<100%%"); 318 + pr_err("Free space limits must be in range 0%%<=stop<cull<run<100%%\n"); 319 319 320 320 return -EINVAL; 321 321 } ··· 475 475 _enter(",%s", args); 476 476 477 477 if (!*args) { 478 - pr_err("Empty directory specified"); 478 + pr_err("Empty directory specified\n"); 479 479 return -EINVAL; 480 480 } 481 481 482 482 if (cache->rootdirname) { 483 - pr_err("Second cache directory specified"); 483 + pr_err("Second cache directory specified\n"); 484 484 return -EEXIST; 485 485 } 486 486 ··· 503 503 _enter(",%s", args); 504 504 505 505 if (!*args) { 506 - pr_err("Empty security context specified"); 506 + pr_err("Empty security context specified\n"); 507 507 return -EINVAL; 508 508 } 509 509 510 510 if (cache->secctx) { 511 - pr_err("Second security context specified"); 511 + pr_err("Second security context specified\n"); 512 512 return -EINVAL; 513 513 } 514 514 ··· 531 531 _enter(",%s", args); 532 532 533 533 if (!*args) { 534 - pr_err("Empty tag specified"); 534 + pr_err("Empty tag specified\n"); 535 535 return -EINVAL; 536 536 } 537 537 ··· 562 562 goto inval; 563 563 564 564 if (!test_bit(CACHEFILES_READY, &cache->flags)) { 565 - pr_err("cull applied to unready cache"); 565 + pr_err("cull applied to unready cache\n"); 566 566 return -EIO; 567 567 } 568 568 569 569 if (test_bit(CACHEFILES_DEAD, &cache->flags)) { 570 - pr_err("cull applied to dead cache"); 570 + pr_err("cull applied to dead cache\n"); 571 571 return -EIO; 572 572 } 573 573 ··· 587 587 588 588 notdir: 589 589 path_put(&path); 590 - pr_err("cull command requires dirfd to be a directory"); 590 + pr_err("cull command requires dirfd to be a directory\n"); 591 591 return -ENOTDIR; 592 592 593 593 inval: 594 - pr_err("cull command requires dirfd and filename"); 594 + pr_err("cull command requires dirfd and filename\n"); 595 595 return -EINVAL; 596 596 } 597 597 ··· 614 614 return 0; 615 615 616 616 inval: 617 - pr_err("debug command requires mask"); 617 + pr_err("debug command requires mask\n"); 618 618 return -EINVAL; 619 619 } 620 620 ··· 634 634 goto inval; 635 635 636 636 if (!test_bit(CACHEFILES_READY, &cache->flags)) { 637 - pr_err("inuse applied to unready cache"); 637 + pr_err("inuse applied to unready cache\n"); 638 638 return -EIO; 639 639 } 640 640 641 641 if (test_bit(CACHEFILES_DEAD, &cache->flags)) { 642 - pr_err("inuse applied to dead cache"); 642 + pr_err("inuse applied to dead cache\n"); 643 643 return -EIO; 644 644 } 645 645 ··· 659 659 660 660 notdir: 661 661 path_put(&path); 662 - pr_err("inuse command requires dirfd to be a directory"); 662 + pr_err("inuse command requires dirfd to be a directory\n"); 663 663 return -ENOTDIR; 664 664 665 665 inval: 666 - pr_err("inuse command requires dirfd and filename"); 666 + pr_err("inuse command requires dirfd and filename\n"); 667 667 return -EINVAL; 668 668 }
+1 -1
fs/cachefiles/internal.h
··· 255 255 256 256 #define cachefiles_io_error(___cache, FMT, ...) \ 257 257 do { \ 258 - pr_err("I/O Error: " FMT, ##__VA_ARGS__); \ 258 + pr_err("I/O Error: " FMT"\n", ##__VA_ARGS__); \ 259 259 fscache_io_error(&(___cache)->cache); \ 260 260 set_bit(CACHEFILES_DEAD, &(___cache)->flags); \ 261 261 } while (0)
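The cachefiles hunks all append "\n" to pr_err() formats, since printk does not terminate messages itself and unterminated messages can run together. The cachefiles_io_error macro above appends the newline for its callers by relying on adjacent string-literal concatenation; a user-space sketch of that trick, with snprintf standing in for pr_err (names here are illustrative, and FMT must be a string literal for the concatenation to work):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* "prefix" FMT "\n" are three adjacent string literals that the compiler
 * joins into one format string; ##__VA_ARGS__ (a GNU extension, as in the
 * kernel) swallows the comma when no arguments are passed. buf must be an
 * array at the call site so sizeof(buf) gives its capacity. */
#define log_error(buf, FMT, ...) \
    snprintf(buf, sizeof(buf), "I/O Error: " FMT "\n", ##__VA_ARGS__)
```

Because the newline lives in the macro, every call site gets a properly terminated log line without repeating it.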
+1 -1
fs/cachefiles/main.c
··· 84 84 error_object_jar: 85 85 misc_deregister(&cachefiles_dev); 86 86 error_dev: 87 - pr_err("failed to register: %d", ret); 87 + pr_err("failed to register: %d\n", ret); 88 88 return ret; 89 89 } 90 90
+7 -7
fs/cachefiles/namei.c
··· 543 543 next, next->d_inode, next->d_inode->i_ino); 544 544 545 545 } else if (!S_ISDIR(next->d_inode->i_mode)) { 546 - pr_err("inode %lu is not a directory", 546 + pr_err("inode %lu is not a directory\n", 547 547 next->d_inode->i_ino); 548 548 ret = -ENOBUFS; 549 549 goto error; ··· 574 574 } else if (!S_ISDIR(next->d_inode->i_mode) && 575 575 !S_ISREG(next->d_inode->i_mode) 576 576 ) { 577 - pr_err("inode %lu is not a file or directory", 577 + pr_err("inode %lu is not a file or directory\n", 578 578 next->d_inode->i_ino); 579 579 ret = -ENOBUFS; 580 580 goto error; ··· 768 768 ASSERT(subdir->d_inode); 769 769 770 770 if (!S_ISDIR(subdir->d_inode->i_mode)) { 771 - pr_err("%s is not a directory", dirname); 771 + pr_err("%s is not a directory\n", dirname); 772 772 ret = -EIO; 773 773 goto check_error; 774 774 } ··· 796 796 mkdir_error: 797 797 mutex_unlock(&dir->d_inode->i_mutex); 798 798 dput(subdir); 799 - pr_err("mkdir %s failed with error %d", dirname, ret); 799 + pr_err("mkdir %s failed with error %d\n", dirname, ret); 800 800 return ERR_PTR(ret); 801 801 802 802 lookup_error: 803 803 mutex_unlock(&dir->d_inode->i_mutex); 804 804 ret = PTR_ERR(subdir); 805 - pr_err("Lookup %s failed with error %d", dirname, ret); 805 + pr_err("Lookup %s failed with error %d\n", dirname, ret); 806 806 return ERR_PTR(ret); 807 807 808 808 nomem_d_alloc: ··· 892 892 if (ret == -EIO) { 893 893 cachefiles_io_error(cache, "Lookup failed"); 894 894 } else if (ret != -ENOMEM) { 895 - pr_err("Internal error: %d", ret); 895 + pr_err("Internal error: %d\n", ret); 896 896 ret = -EIO; 897 897 } 898 898 ··· 951 951 } 952 952 953 953 if (ret != -ENOMEM) { 954 - pr_err("Internal error: %d", ret); 954 + pr_err("Internal error: %d\n", ret); 955 955 ret = -EIO; 956 956 } 957 957
+5 -5
fs/cachefiles/xattr.c
··· 51 51 } 52 52 53 53 if (ret != -EEXIST) { 54 - pr_err("Can't set xattr on %*.*s [%lu] (err %d)", 54 + pr_err("Can't set xattr on %*.*s [%lu] (err %d)\n", 55 55 dentry->d_name.len, dentry->d_name.len, 56 56 dentry->d_name.name, dentry->d_inode->i_ino, 57 57 -ret); ··· 64 64 if (ret == -ERANGE) 65 65 goto bad_type_length; 66 66 67 - pr_err("Can't read xattr on %*.*s [%lu] (err %d)", 67 + pr_err("Can't read xattr on %*.*s [%lu] (err %d)\n", 68 68 dentry->d_name.len, dentry->d_name.len, 69 69 dentry->d_name.name, dentry->d_inode->i_ino, 70 70 -ret); ··· 85 85 return ret; 86 86 87 87 bad_type_length: 88 - pr_err("Cache object %lu type xattr length incorrect", 88 + pr_err("Cache object %lu type xattr length incorrect\n", 89 89 dentry->d_inode->i_ino); 90 90 ret = -EIO; 91 91 goto error; 92 92 93 93 bad_type: 94 94 xtype[2] = 0; 95 - pr_err("Cache object %*.*s [%lu] type %s not %s", 95 + pr_err("Cache object %*.*s [%lu] type %s not %s\n", 96 96 dentry->d_name.len, dentry->d_name.len, 97 97 dentry->d_name.name, dentry->d_inode->i_ino, 98 98 xtype, type); ··· 293 293 return ret; 294 294 295 295 bad_type_length: 296 - pr_err("Cache object %lu xattr length incorrect", 296 + pr_err("Cache object %lu xattr length incorrect\n", 297 297 dentry->d_inode->i_ino); 298 298 ret = -EIO; 299 299 goto error;
+36 -76
fs/dcache.c
··· 2372 2372 } 2373 2373 EXPORT_SYMBOL(dentry_update_name_case); 2374 2374 2375 - static void switch_names(struct dentry *dentry, struct dentry *target) 2375 + static void switch_names(struct dentry *dentry, struct dentry *target, 2376 + bool exchange) 2376 2377 { 2377 2378 if (dname_external(target)) { 2378 2379 if (dname_external(dentry)) { ··· 2407 2406 */ 2408 2407 unsigned int i; 2409 2408 BUILD_BUG_ON(!IS_ALIGNED(DNAME_INLINE_LEN, sizeof(long))); 2409 + if (!exchange) { 2410 + memcpy(dentry->d_iname, target->d_name.name, 2411 + target->d_name.len + 1); 2412 + dentry->d_name.hash_len = target->d_name.hash_len; 2413 + return; 2414 + } 2410 2415 for (i = 0; i < DNAME_INLINE_LEN / sizeof(long); i++) { 2411 2416 swap(((long *) &dentry->d_iname)[i], 2412 2417 ((long *) &target->d_iname)[i]); 2413 2418 } 2414 2419 } 2415 2420 } 2416 - swap(dentry->d_name.len, target->d_name.len); 2421 + swap(dentry->d_name.hash_len, target->d_name.hash_len); 2417 2422 } 2418 2423 2419 2424 static void dentry_lock_for_move(struct dentry *dentry, struct dentry *target) ··· 2449 2442 } 2450 2443 } 2451 2444 2452 - static void dentry_unlock_parents_for_move(struct dentry *dentry, 2453 - struct dentry *target) 2445 + static void dentry_unlock_for_move(struct dentry *dentry, struct dentry *target) 2454 2446 { 2455 2447 if (target->d_parent != dentry->d_parent) 2456 2448 spin_unlock(&dentry->d_parent->d_lock); 2457 2449 if (target->d_parent != target) 2458 2450 spin_unlock(&target->d_parent->d_lock); 2451 + spin_unlock(&target->d_lock); 2452 + spin_unlock(&dentry->d_lock); 2459 2453 } 2460 2454 2461 2455 /* 2462 2456 * When switching names, the actual string doesn't strictly have to 2463 2457 * be preserved in the target - because we're dropping the target 2464 2458 * anyway. As such, we can just do a simple memcpy() to copy over 2465 - * the new name before we switch. 
2466 - * 2467 - * Note that we have to be a lot more careful about getting the hash 2468 - * switched - we have to switch the hash value properly even if it 2469 - * then no longer matches the actual (corrupted) string of the target. 2470 - * The hash value has to match the hash queue that the dentry is on.. 2459 + * the new name before we switch, unless we are going to rehash 2460 + * it. Note that if we *do* unhash the target, we are not allowed 2461 + * to rehash it without giving it a new name/hash key - whether 2462 + * we swap or overwrite the names here, resulting name won't match 2463 + * the reality in filesystem; it's only there for d_path() purposes. 2464 + * Note that all of this is happening under rename_lock, so the 2465 + * any hash lookup seeing it in the middle of manipulations will 2466 + * be discarded anyway. So we do not care what happens to the hash 2467 + * key in that case. 2471 2468 */ 2472 2469 /* 2473 2470 * __d_move - move a dentry ··· 2517 2506 d_hash(dentry->d_parent, dentry->d_name.hash)); 2518 2507 } 2519 2508 2520 - list_del(&dentry->d_u.d_child); 2521 - list_del(&target->d_u.d_child); 2522 - 2523 2509 /* Switch the names.. */ 2524 - switch_names(dentry, target); 2525 - swap(dentry->d_name.hash, target->d_name.hash); 2510 + switch_names(dentry, target, exchange); 2526 2511 2527 - /* ... and switch the parents */ 2512 + /* ... and switch them in the tree */ 2528 2513 if (IS_ROOT(dentry)) { 2514 + /* splicing a tree */ 2529 2515 dentry->d_parent = target->d_parent; 2530 2516 target->d_parent = target; 2531 - INIT_LIST_HEAD(&target->d_u.d_child); 2517 + list_del_init(&target->d_u.d_child); 2518 + list_move(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs); 2532 2519 } else { 2520 + /* swapping two dentries */ 2533 2521 swap(dentry->d_parent, target->d_parent); 2534 - 2535 - /* And add them back to the (new) parent lists */ 2536 - list_add(&target->d_u.d_child, &target->d_parent->d_subdirs); 2522 + list_move(&target->d_u.d_child, &target->d_parent->d_subdirs); 2523 + list_move(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs); 2524 + if (exchange) 2525 + fsnotify_d_move(target); 2526 + fsnotify_d_move(dentry); 2537 2527 } 2538 - 2539 - list_add(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs); 2540 2528 2541 2529 write_seqcount_end(&target->d_seq); 2542 2530 write_seqcount_end(&dentry->d_seq); 2543 2531 2544 - dentry_unlock_parents_for_move(dentry, target); 2545 - if (exchange) 2546 - fsnotify_d_move(target); 2547 - spin_unlock(&target->d_lock); 2548 - fsnotify_d_move(dentry); 2549 - spin_unlock(&dentry->d_lock); 2532 + dentry_unlock_for_move(dentry, target); 2550 2533 } 2551 2534 2552 2535 /* ··· 2638 2633 return ret; 2639 2634 } 2640 2635 2641 - /* 2642 - * Prepare an anonymous dentry for life in the superblock's dentry tree as a 2643 - * named dentry in place of the dentry to be replaced. 2644 - * returns with anon->d_lock held!
2645 - */ 2646 - static void __d_materialise_dentry(struct dentry *dentry, struct dentry *anon) 2647 - { 2648 - struct dentry *dparent; 2649 - 2650 - dentry_lock_for_move(anon, dentry); 2651 - 2652 - write_seqcount_begin(&dentry->d_seq); 2653 - write_seqcount_begin_nested(&anon->d_seq, DENTRY_D_LOCK_NESTED); 2654 - 2655 - dparent = dentry->d_parent; 2656 - 2657 - switch_names(dentry, anon); 2658 - swap(dentry->d_name.hash, anon->d_name.hash); 2659 - 2660 - dentry->d_parent = dentry; 2661 - list_del_init(&dentry->d_u.d_child); 2662 - anon->d_parent = dparent; 2663 - if (likely(!d_unhashed(anon))) { 2664 - hlist_bl_lock(&anon->d_sb->s_anon); 2665 - __hlist_bl_del(&anon->d_hash); 2666 - anon->d_hash.pprev = NULL; 2667 - hlist_bl_unlock(&anon->d_sb->s_anon); 2668 - } 2669 - list_move(&anon->d_u.d_child, &dparent->d_subdirs); 2670 - 2671 - write_seqcount_end(&dentry->d_seq); 2672 - write_seqcount_end(&anon->d_seq); 2673 - 2674 - dentry_unlock_parents_for_move(anon, dentry); 2675 - spin_unlock(&dentry->d_lock); 2676 - 2677 - /* anon->d_lock still locked, returns locked */ 2678 - } 2679 - 2680 2636 /** 2681 2637 * d_splice_alias - splice a disconnected dentry into the tree if one exists 2682 2638 * @inode: the inode which may have a disconnected dentry ··· 2683 2717 return ERR_PTR(-EIO); 2684 2718 } 2685 2719 write_seqlock(&rename_lock); 2686 - __d_materialise_dentry(dentry, new); 2720 + __d_move(new, dentry, false); 2687 2721 write_sequnlock(&rename_lock); 2688 - _d_rehash(new); 2689 - spin_unlock(&new->d_lock); 2690 2722 spin_unlock(&inode->i_lock); 2691 2723 security_d_instantiate(new, inode); 2692 2724 iput(inode); ··· 2744 2780 } else if (IS_ROOT(alias)) { 2745 2781 /* Is this an anonymous mountpoint that we 2746 2782 * could splice into our tree? */ 2747 - __d_materialise_dentry(dentry, alias); 2783 + __d_move(alias, dentry, false); 2748 2784 write_sequnlock(&rename_lock); 2749 2785 goto found; 2750 2786 } else { ··· 2771 2807 actual = __d_instantiate_unique(dentry, inode); 2772 2808 if (!actual) 2773 2809 actual = dentry; 2774 - else 2775 - BUG_ON(!d_unhashed(actual)); 2776 2810 2777 - spin_lock(&actual->d_lock); 2811 + d_rehash(actual); 2778 2812 found: 2779 - _d_rehash(actual); 2780 - spin_unlock(&actual->d_lock); 2781 2813 spin_unlock(&inode->i_lock); 2782 2814 out_nolock: 2783 2815 if (actual == dentry) {
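The rewritten switch_names() copies and swaps d_name.hash_len as a single field, so the hash and the length always travel together. A sketch of that packing idea, modeled on the kernel's hashlen helpers but written here as illustrative user-space macros:

```c
#include <assert.h>
#include <stdint.h>

/* Pack a 32-bit hash and a 32-bit length into one 64-bit word so both can
 * be assigned or swapped in a single operation, as the dcache code does. */
#define hashlen_create(hash, len) (((uint64_t)(len) << 32) | (uint32_t)(hash))
#define hashlen_hash(hl)          ((uint32_t)(hl))
#define hashlen_len(hl)           ((uint32_t)((hl) >> 32))
```

Keeping the pair in one word is what lets the non-exchange path copy the target's name and hash_len with two plain stores instead of the old careful two-step hash fixup.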
+1 -1
fs/direct-io.c
··· 158 158 { 159 159 ssize_t ret; 160 160 161 - ret = iov_iter_get_pages(sdio->iter, dio->pages, DIO_PAGES, 161 + ret = iov_iter_get_pages(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES, 162 162 &sdio->from); 163 163 164 164 if (ret < 0 && sdio->blocks_available && (dio->rw & WRITE)) {
+1
fs/fuse/file.c
··· 1305 1305 size_t start; 1306 1306 ssize_t ret = iov_iter_get_pages(ii, 1307 1307 &req->pages[req->num_pages], 1308 + *nbytesp - nbytes, 1308 1309 req->max_pages - req->num_pages, 1309 1310 &start); 1310 1311 if (ret < 0)
+2 -1
fs/nfsd/nfs4xdr.c
··· 3104 3104 3105 3105 buf->page_len = maxcount; 3106 3106 buf->len += maxcount; 3107 - xdr->page_ptr += (maxcount + PAGE_SIZE - 1) / PAGE_SIZE; 3107 + xdr->page_ptr += (buf->page_base + maxcount + PAGE_SIZE - 1) 3108 + / PAGE_SIZE; 3108 3109 3109 3110 /* Use rest of head for padding and remaining ops: */ 3110 3111 buf->tail[0].iov_base = xdr->p;
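The nfs4xdr fix accounts for buf->page_base when advancing page_ptr: data that starts partway into a page can span one more page than its length alone suggests. The rounding, with an illustrative PAGE_SIZE:

```c
#include <assert.h>

#define PAGE_SIZE 4096u /* illustrative constant */

/* Pages spanned by maxcount bytes that begin page_base bytes into a page:
 * count from the start of the first page, then round up. */
static unsigned pages_spanned(unsigned page_base, unsigned maxcount)
{
    return (page_base + maxcount + PAGE_SIZE - 1) / PAGE_SIZE;
}
```

The old formula divided maxcount alone, so a page-sized read at a nonzero offset was undercounted by one page.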
+6 -1
fs/nilfs2/inode.c
··· 24 24 #include <linux/buffer_head.h> 25 25 #include <linux/gfp.h> 26 26 #include <linux/mpage.h> 27 + #include <linux/pagemap.h> 27 28 #include <linux/writeback.h> 28 29 #include <linux/aio.h> 29 30 #include "nilfs.h" ··· 220 219 221 220 static int nilfs_set_page_dirty(struct page *page) 222 221 { 222 + struct inode *inode = page->mapping->host; 223 223 int ret = __set_page_dirty_nobuffers(page); 224 224 225 225 if (page_has_buffers(page)) { 226 - struct inode *inode = page->mapping->host; 227 226 unsigned nr_dirty = 0; 228 227 struct buffer_head *bh, *head; 229 228 ··· 246 245 247 246 if (nr_dirty) 248 247 nilfs_set_file_dirty(inode, nr_dirty); 248 + } else if (ret) { 249 + unsigned nr_dirty = 1 << (PAGE_CACHE_SHIFT - inode->i_blkbits); 250 + 251 + nilfs_set_file_dirty(inode, nr_dirty); 249 252 } 250 253 return ret; 251 254 }
+10 -8
fs/ocfs2/dlm/dlmmaster.c
··· 655 655 clear_bit(bit, res->refmap); 656 656 } 657 657 658 - 659 - void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, 658 + static void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, 660 659 struct dlm_lock_resource *res) 661 660 { 662 - assert_spin_locked(&res->spinlock); 663 - 664 661 res->inflight_locks++; 665 662 666 663 mlog(0, "%s: res %.*s, inflight++: now %u, %ps()\n", dlm->name, 667 664 res->lockname.len, res->lockname.name, res->inflight_locks, 668 665 __builtin_return_address(0)); 666 + } 667 + 668 + void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm, 669 + struct dlm_lock_resource *res) 670 + { 671 + assert_spin_locked(&res->spinlock); 672 + __dlm_lockres_grab_inflight_ref(dlm, res); 669 673 } 670 674 671 675 void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm, ··· 898 894 /* finally add the lockres to its hash bucket */ 899 895 __dlm_insert_lockres(dlm, res); 900 896 901 - /* Grab inflight ref to pin the resource */ 902 - spin_lock(&res->spinlock); 903 - dlm_lockres_grab_inflight_ref(dlm, res); 904 - spin_unlock(&res->spinlock); 897 + /* since this lockres is new it doesn't not require the spinlock */ 898 + __dlm_lockres_grab_inflight_ref(dlm, res); 905 899 906 900 /* get an extra ref on the mle in case this is a BLOCK 907 901 * if so, the creator of the BLOCK may try to put the last
+1
fs/ocfs2/super.c
··· 2532 2532 kfree(osb->journal); 2533 2533 kfree(osb->local_alloc_copy); 2534 2534 kfree(osb->uuid_str); 2535 + kfree(osb->vol_label); 2535 2536 ocfs2_put_dlm_debug(osb->osb_dlm_debug); 2536 2537 memset(osb, 0, sizeof(struct ocfs2_super)); 2537 2538 }
+18 -9
fs/proc/task_mmu.c
··· 931 931 while (addr < end) { 932 932 struct vm_area_struct *vma = find_vma(walk->mm, addr); 933 933 pagemap_entry_t pme = make_pme(PM_NOT_PRESENT(pm->v2)); 934 - unsigned long vm_end; 934 + /* End of address space hole, which we mark as non-present. */ 935 + unsigned long hole_end; 935 936 936 - if (!vma) { 937 - vm_end = end; 938 - } else { 939 - vm_end = min(end, vma->vm_end); 940 - if (vma->vm_flags & VM_SOFTDIRTY) 941 - pme.pme |= PM_STATUS2(pm->v2, __PM_SOFT_DIRTY); 937 + if (vma) 938 + hole_end = min(end, vma->vm_start); 939 + else 940 + hole_end = end; 941 + 942 + for (; addr < hole_end; addr += PAGE_SIZE) { 943 + err = add_to_pagemap(addr, &pme, pm); 944 + if (err) 945 + goto out; 942 946 } 943 947 944 - for (; addr < vm_end; addr += PAGE_SIZE) { 948 + if (!vma) 949 + break; 950 + 951 + /* Addresses in the VMA. */ 952 + if (vma->vm_flags & VM_SOFTDIRTY) 953 + pme.pme |= PM_STATUS2(pm->v2, __PM_SOFT_DIRTY); 954 + for (; addr < min(end, vma->vm_end); addr += PAGE_SIZE) { 945 955 err = add_to_pagemap(addr, &pme, pm); 946 956 if (err) 947 957 goto out; 948 958 } 949 959 } 950 - 951 960 out: 952 961 return err; 953 962 }
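The pagemap walker above is restructured into two loops per iteration: one emits non-present entries for the hole below the next VMA, the other emits entries for the VMA itself (and only there applies VM_SOFTDIRTY). A simplified user-space model of that control flow; the structures and the inline find_vma() equivalent are illustrative stand-ins:

```c
#include <assert.h>
#include <stddef.h>

struct vma { unsigned long start, end; }; /* stand-in for vm_area_struct */

#define PAGE 0x1000UL

/* Count hole pages and VMA pages in [addr, end). vmas[] must be sorted
 * and non-overlapping; n is the number of entries. */
static void walk(unsigned long addr, unsigned long end,
                 const struct vma *vmas, size_t n,
                 unsigned long *holes, unsigned long *present)
{
    size_t i = 0;

    *holes = *present = 0;
    while (addr < end) {
        /* find_vma(): first vma whose end lies above addr */
        while (i < n && vmas[i].end <= addr)
            i++;
        const struct vma *v = (i < n) ? &vmas[i] : NULL;

        /* End of the address-space hole below the vma (if any). */
        unsigned long hole_end = v ? (v->start < end ? v->start : end) : end;
        for (; addr < hole_end; addr += PAGE)
            (*holes)++;
        if (!v)
            break;

        /* Addresses inside the vma. */
        unsigned long vend = v->end < end ? v->end : end;
        for (; addr < vend; addr += PAGE)
            (*present)++;
    }
}
```

The old code ran a single loop to min(end, vma->vm_end), which also swept the hole *before* the VMA with the VMA's soft-dirty flag applied; splitting the loops keeps hole pages and VMA pages distinct.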
+5 -1
fs/ufs/ialloc.c
··· 298 298 ufsi->i_oeftflag = 0; 299 299 ufsi->i_dir_start_lookup = 0; 300 300 memset(&ufsi->i_u1, 0, sizeof(ufsi->i_u1)); 301 - insert_inode_hash(inode); 301 + if (insert_inode_locked(inode) < 0) { 302 + err = -EIO; 303 + goto failed; 304 + } 302 305 mark_inode_dirty(inode); 303 306 304 307 if (uspi->fs_magic == UFS2_MAGIC) { ··· 340 337 fail_remove_inode: 341 338 unlock_ufs(sb); 342 339 clear_nlink(inode); 340 + unlock_new_inode(inode); 343 341 iput(inode); 344 342 UFSD("EXIT (FAILED): err %d\n", err); 345 343 return ERR_PTR(err);
+4
fs/ufs/namei.c
··· 38 38 { 39 39 int err = ufs_add_link(dentry, inode); 40 40 if (!err) { 41 + unlock_new_inode(inode); 41 42 d_instantiate(dentry, inode); 42 43 return 0; 43 44 } 44 45 inode_dec_link_count(inode); 46 + unlock_new_inode(inode); 45 47 iput(inode); 46 48 return err; 47 49 } ··· 157 155 158 156 out_fail: 159 157 inode_dec_link_count(inode); 158 + unlock_new_inode(inode); 160 159 iput(inode); 161 160 goto out; 162 161 } ··· 213 210 out_fail: 214 211 inode_dec_link_count(inode); 215 212 inode_dec_link_count(inode); 213 + unlock_new_inode(inode); 216 214 iput (inode); 217 215 inode_dec_link_count(dir); 218 216 unlock_ufs(dir->i_sb);
+1
include/acpi/acpi_bus.h
··· 118 118 struct acpi_hotplug_profile { 119 119 struct kobject kobj; 120 120 int (*scan_dependent)(struct acpi_device *adev); 121 + void (*notify_online)(struct acpi_device *adev); 121 122 bool enabled:1; 122 123 bool demand_offline:1; 123 124 };
+2 -2
include/linux/cpuset.h
··· 93 93 94 94 static inline int cpuset_do_page_mem_spread(void) 95 95 { 96 - return current->flags & PF_SPREAD_PAGE; 96 + return task_spread_page(current); 97 97 } 98 98 99 99 static inline int cpuset_do_slab_mem_spread(void) 100 100 { 101 - return current->flags & PF_SPREAD_SLAB; 101 + return task_spread_slab(current); 102 102 } 103 103 104 104 extern int current_cpuset_is_being_rebound(void);
-16
include/linux/i2c.h
··· 577 577 } 578 578 #endif /* CONFIG_OF */ 579 579 580 - #ifdef CONFIG_ACPI 581 - void acpi_i2c_register_devices(struct i2c_adapter *adap); 582 - #else 583 - static inline void acpi_i2c_register_devices(struct i2c_adapter *adap) { } 584 - #endif /* CONFIG_ACPI */ 585 - 586 - #ifdef CONFIG_ACPI_I2C_OPREGION 587 - int acpi_i2c_install_space_handler(struct i2c_adapter *adapter); 588 - void acpi_i2c_remove_space_handler(struct i2c_adapter *adapter); 589 - #else 590 - static inline void acpi_i2c_remove_space_handler(struct i2c_adapter *adapter) 591 - { } 592 - static inline int acpi_i2c_install_space_handler(struct i2c_adapter *adapter) 593 - { return 0; } 594 - #endif /* CONFIG_ACPI_I2C_OPREGION */ 595 - 596 580 #endif /* _LINUX_I2C_H */
+36 -11
include/linux/sched.h
··· 1903 1903 #define PF_KTHREAD 0x00200000 /* I am a kernel thread */ 1904 1904 #define PF_RANDOMIZE 0x00400000 /* randomize virtual address space */ 1905 1905 #define PF_SWAPWRITE 0x00800000 /* Allowed to write to swap */ 1906 - #define PF_SPREAD_PAGE 0x01000000 /* Spread page cache over cpuset */ 1907 - #define PF_SPREAD_SLAB 0x02000000 /* Spread some slab caches over cpuset */ 1908 1906 #define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_allowed */ 1909 1907 #define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */ 1910 1908 #define PF_MUTEX_TESTER 0x20000000 /* Thread belongs to the rt mutex tester */ ··· 1955 1957 } 1956 1958 1957 1959 /* Per-process atomic flags. */ 1958 - #define PFA_NO_NEW_PRIVS 0x00000001 /* May not gain new privileges. */ 1960 + #define PFA_NO_NEW_PRIVS 0 /* May not gain new privileges. */ 1961 + #define PFA_SPREAD_PAGE 1 /* Spread page cache over cpuset */ 1962 + #define PFA_SPREAD_SLAB 2 /* Spread some slab caches over cpuset */ 1959 1963 1960 - static inline bool task_no_new_privs(struct task_struct *p) 1961 - { 1962 - return test_bit(PFA_NO_NEW_PRIVS, &p->atomic_flags); 1963 - } 1964 1964 1965 - static inline void task_set_no_new_privs(struct task_struct *p) 1966 - { 1967 - set_bit(PFA_NO_NEW_PRIVS, &p->atomic_flags); 1968 - } 1965 + #define TASK_PFA_TEST(name, func) \ 1966 + static inline bool task_##func(struct task_struct *p) \ 1967 + { return test_bit(PFA_##name, &p->atomic_flags); } 1968 + #define TASK_PFA_SET(name, func) \ 1969 + static inline void task_set_##func(struct task_struct *p) \ 1970 + { set_bit(PFA_##name, &p->atomic_flags); } 1971 + #define TASK_PFA_CLEAR(name, func) \ 1972 + static inline void task_clear_##func(struct task_struct *p) \ 1973 + { clear_bit(PFA_##name, &p->atomic_flags); } 1974 + 1975 + TASK_PFA_TEST(NO_NEW_PRIVS, no_new_privs) 1976 + TASK_PFA_SET(NO_NEW_PRIVS, no_new_privs) 1977 + 1978 + TASK_PFA_TEST(SPREAD_PAGE, spread_page) 1979 + TASK_PFA_SET(SPREAD_PAGE, spread_page) 1980 + TASK_PFA_CLEAR(SPREAD_PAGE, spread_page) 1981 + 1982 + TASK_PFA_TEST(SPREAD_SLAB, spread_slab) 1983 + TASK_PFA_SET(SPREAD_SLAB, spread_slab) 1984 + TASK_PFA_CLEAR(SPREAD_SLAB, spread_slab) 1969 1985 1970 1986 /* 1971 1987 * task->jobctl flags ··· 2620 2608 task_thread_info(p)->task = p; 2621 2609 } 2622 2610 2611 + /* 2612 + * Return the address of the last usable long on the stack. 2613 + * 2614 + * When the stack grows down, this is just above the thread 2615 + * info struct. Going any lower will corrupt the threadinfo. 2616 + * 2617 + * When the stack grows up, this is the highest address. 2618 + * Beyond that position, we corrupt data on the next page. 2619 + */ 2623 2620 static inline unsigned long *end_of_stack(struct task_struct *p) 2624 2621 { 2622 + #ifdef CONFIG_STACK_GROWSUP 2623 + return (unsigned long *)((unsigned long)task_thread_info(p) + THREAD_SIZE) - 1; 2624 + #else 2625 2625 return (unsigned long *)(task_thread_info(p) + 1); 2626 + #endif 2626 2627 } 2627 2628 2628 2629 #endif
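The TASK_PFA_* macros stamp out one named accessor per flag and operation, replacing open-coded PF_* bit twiddling at every call site. A user-space sketch of the same token-pasting pattern; the kernel versions use atomic set_bit()/clear_bit() on task_struct, while plain operations and a stand-in struct are used here only for illustration:

```c
#include <assert.h>

struct task { unsigned long atomic_flags; }; /* stand-in for task_struct */

#define PFA_SPREAD_PAGE 1
#define PFA_SPREAD_SLAB 2

/* ## pastes the flag and function names, so one macro invocation
 * generates one complete accessor. */
#define TASK_PFA_TEST(name, func) \
    static inline int task_##func(struct task *p) \
    { return !!(p->atomic_flags & (1UL << PFA_##name)); }
#define TASK_PFA_SET(name, func) \
    static inline void task_set_##func(struct task *p) \
    { p->atomic_flags |= 1UL << PFA_##name; }
#define TASK_PFA_CLEAR(name, func) \
    static inline void task_clear_##func(struct task *p) \
    { p->atomic_flags &= ~(1UL << PFA_##name); }

TASK_PFA_TEST(SPREAD_PAGE, spread_page)
TASK_PFA_SET(SPREAD_PAGE, spread_page)
TASK_PFA_CLEAR(SPREAD_PAGE, spread_page)
```

Each new flag then costs three one-line macro invocations instead of three hand-written functions.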
+1 -1
include/linux/uio.h
··· 84 84 void iov_iter_init(struct iov_iter *i, int direction, const struct iovec *iov, 85 85 unsigned long nr_segs, size_t count); 86 86 ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages, 87 - unsigned maxpages, size_t *start); 87 + size_t maxsize, unsigned maxpages, size_t *start); 88 88 ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, struct page ***pages, 89 89 size_t maxsize, size_t *start); 90 90 int iov_iter_npages(const struct iov_iter *i, int maxpages);
+1 -4
include/net/ip6_fib.h
··· 114 114 u32 rt6i_flags; 115 115 struct rt6key rt6i_src; 116 116 struct rt6key rt6i_prefsrc; 117 - u32 rt6i_metric; 118 117 119 118 struct inet6_dev *rt6i_idev; 120 119 unsigned long _rt6i_peer; 121 120 122 - u32 rt6i_genid; 123 - 121 + u32 rt6i_metric; 124 122 /* more non-fragment space at head required */ 125 123 unsigned short rt6i_nfheader_len; 126 - 127 124 u8 rt6i_protocol; 128 125 }; 129 126
+3 -17
include/net/net_namespace.h
··· 352 352 atomic_inc(&net->ipv4.rt_genid); 353 353 } 354 354 355 - #if IS_ENABLED(CONFIG_IPV6) 356 - static inline int rt_genid_ipv6(struct net *net) 357 - { 358 - return atomic_read(&net->ipv6.rt_genid); 359 - } 360 - 355 + extern void (*__fib6_flush_trees)(struct net *net); 361 356 static inline void rt_genid_bump_ipv6(struct net *net) 362 357 { 363 - atomic_inc(&net->ipv6.rt_genid); 358 + if (__fib6_flush_trees) 359 + __fib6_flush_trees(net); 364 360 } 365 - #else 366 - static inline int rt_genid_ipv6(struct net *net) 367 - { 368 - return 0; 369 - } 370 - 371 - static inline void rt_genid_bump_ipv6(struct net *net) 372 - { 373 - } 374 - #endif 375 361 376 362 #if IS_ENABLED(CONFIG_IEEE802154_6LOWPAN) 377 363 static inline struct netns_ieee802154_lowpan *
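rt_genid_bump_ipv6() now calls through a function pointer that the IPv6 code is expected to register, so the core header no longer needs per-config stub pairs. A minimal sketch of that optional-hook pattern; the names are illustrative stand-ins for __fib6_flush_trees:

```c
#include <assert.h>
#include <stddef.h>

/* Global hook, NULL until the optional subsystem registers itself. */
static void (*flush_trees_hook)(int netid);

static int flush_count;
static void demo_flush(int netid) { (void)netid; flush_count++; }

/* Core-side caller: a no-op while the hook is unregistered. */
static void genid_bump(int netid)
{
    if (flush_trees_hook)
        flush_trees_hook(netid);
}
```

The core code compiles and runs the same whether or not the provider is present; only the pointer assignment changes behavior.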
+5 -4
kernel/cpuset.c
··· 365 365 struct task_struct *tsk) 366 366 { 367 367 if (is_spread_page(cs)) 368 - tsk->flags |= PF_SPREAD_PAGE; 368 + task_set_spread_page(tsk); 369 369 else 370 - tsk->flags &= ~PF_SPREAD_PAGE; 370 + task_clear_spread_page(tsk); 371 + 371 372 if (is_spread_slab(cs)) 372 - tsk->flags |= PF_SPREAD_SLAB; 373 + task_set_spread_slab(tsk); 373 374 else 374 - tsk->flags &= ~PF_SPREAD_SLAB; 375 + task_clear_spread_slab(tsk); 375 376 } 376 377 377 378 /*
+14 -34
kernel/power/snapshot.c
··· 725 725 clear_bit(bit, addr); 726 726 } 727 727 728 - static void memory_bm_clear_current(struct memory_bitmap *bm) 729 - { 730 - int bit; 731 - 732 - bit = max(bm->cur.node_bit - 1, 0); 733 - clear_bit(bit, bm->cur.node->data); 734 - } 735 - 736 728 static int memory_bm_test_bit(struct memory_bitmap *bm, unsigned long pfn) 737 729 { 738 730 void *addr; ··· 1333 1341 1334 1342 void swsusp_free(void) 1335 1343 { 1336 - unsigned long fb_pfn, fr_pfn; 1344 + struct zone *zone; 1345 + unsigned long pfn, max_zone_pfn; 1337 1346 1338 - memory_bm_position_reset(forbidden_pages_map); 1339 - memory_bm_position_reset(free_pages_map); 1347 + for_each_populated_zone(zone) { 1348 + max_zone_pfn = zone_end_pfn(zone); 1349 + for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++) 1350 + if (pfn_valid(pfn)) { 1351 + struct page *page = pfn_to_page(pfn); 1340 1352 1341 - loop: 1342 - fr_pfn = memory_bm_next_pfn(free_pages_map); 1343 - fb_pfn = memory_bm_next_pfn(forbidden_pages_map); 1344 - 1345 - /* 1346 - * Find the next bit set in both bitmaps. This is guaranteed to 1347 - * terminate when fb_pfn == fr_pfn == BM_END_OF_MAP. 1348 - */ 1349 - do { 1350 - if (fb_pfn < fr_pfn) 1351 - fb_pfn = memory_bm_next_pfn(forbidden_pages_map); 1352 - if (fr_pfn < fb_pfn) 1353 - fr_pfn = memory_bm_next_pfn(free_pages_map); 1354 - } while (fb_pfn != fr_pfn); 1355 - 1356 - if (fr_pfn != BM_END_OF_MAP && pfn_valid(fr_pfn)) { 1357 - struct page *page = pfn_to_page(fr_pfn); 1358 - 1359 - memory_bm_clear_current(forbidden_pages_map); 1360 - memory_bm_clear_current(free_pages_map); 1361 - __free_page(page); 1362 - goto loop; 1353 + if (swsusp_page_is_forbidden(page) && 1354 + swsusp_page_is_free(page)) { 1355 + swsusp_unset_page_forbidden(page); 1356 + swsusp_unset_page_free(page); 1357 + __free_page(page); 1358 + } 1359 + } 1363 1360 } 1364 - 1365 1361 nr_copy_pages = 0; 1366 1362 nr_meta_pages = 0; 1367 1363 restore_pblist = NULL;
+1
lib/genalloc.c
··· 588 588 if (!np_pool) 589 589 return NULL; 590 590 pdev = of_find_device_by_node(np_pool); 591 + of_node_put(np_pool); 591 592 if (!pdev) 592 593 return NULL; 593 594 return dev_get_gen_pool(&pdev->dev);
+4 -4
lib/rhashtable.c
··· 592 592 * rhashtable_destroy - destroy hash table 593 593 * @ht: the hash table to destroy 594 594 * 595 - * Frees the bucket array. 595 + * Frees the bucket array. This function is not rcu safe, therefore the caller 596 + * has to make sure that no resizing may happen by unpublishing the hashtable 597 + * and waiting for the quiescent cycle before releasing the bucket array. 596 598 */ 597 599 void rhashtable_destroy(const struct rhashtable *ht) 598 600 { 599 - const struct bucket_table *tbl = rht_dereference(ht->tbl, ht); 600 - 601 - bucket_table_free(tbl); 601 + bucket_table_free(ht->tbl); 602 602 } 603 603 EXPORT_SYMBOL_GPL(rhashtable_destroy); 604 604
+9 -5
mm/iov_iter.c
··· 310 310 EXPORT_SYMBOL(iov_iter_init); 311 311 312 312 static ssize_t get_pages_iovec(struct iov_iter *i, 313 - struct page **pages, unsigned maxpages, 313 + struct page **pages, size_t maxsize, unsigned maxpages, 314 314 size_t *start) 315 315 { 316 316 size_t offset = i->iov_offset; ··· 323 323 len = iov->iov_len - offset; 324 324 if (len > i->count) 325 325 len = i->count; 326 + if (len > maxsize) 327 + len = maxsize; 326 328 addr = (unsigned long)iov->iov_base + offset; 327 329 len += *start = addr & (PAGE_SIZE - 1); 328 330 if (len > maxpages * PAGE_SIZE) ··· 590 588 } 591 589 592 590 static ssize_t get_pages_bvec(struct iov_iter *i, 593 - struct page **pages, unsigned maxpages, 591 + struct page **pages, size_t maxsize, unsigned maxpages, 594 592 size_t *start) 595 593 { 596 594 const struct bio_vec *bvec = i->bvec; 597 595 size_t len = bvec->bv_len - i->iov_offset; 598 596 if (len > i->count) 599 597 len = i->count; 598 + if (len > maxsize) 599 + len = maxsize; 600 600 /* can't be more than PAGE_SIZE */ 601 601 *start = bvec->bv_offset + i->iov_offset; 602 602 ··· 715 711 EXPORT_SYMBOL(iov_iter_alignment); 716 712 717 713 ssize_t iov_iter_get_pages(struct iov_iter *i, 718 - struct page **pages, unsigned maxpages, 714 + struct page **pages, size_t maxsize, unsigned maxpages, 719 715 size_t *start) 720 716 { 721 717 if (i->type & ITER_BVEC) 722 - return get_pages_bvec(i, pages, maxpages, start); 718 + return get_pages_bvec(i, pages, maxsize, maxpages, start); 723 719 else 724 - return get_pages_iovec(i, pages, maxpages, start); 720 + return get_pages_iovec(i, pages, maxsize, maxpages, start); 725 721 } 726 722 EXPORT_SYMBOL(iov_iter_get_pages); 727 723
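The iov_iter change above threads a new `maxsize` cap through both page-gathering paths, clamping each segment first by the iterator's remaining count and then by the caller's cap. The clamp order can be modeled as a hypothetical standalone userspace sketch (not the kernel code itself; `clamp_seg_len` is an invented name):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the clamping added to get_pages_iovec() and
 * get_pages_bvec(): a segment length is limited first by the remaining
 * iterator count, then by the caller-supplied maxsize. */
static size_t clamp_seg_len(size_t seg_len, size_t count, size_t maxsize)
{
	size_t len = seg_len;

	if (len > count)
		len = count;
	if (len > maxsize)
		len = maxsize;
	return len;
}
```

Whichever limit is smallest wins, which is why the two `if` tests are order-independent here.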
+1 -1
mm/memory.c
··· 1127 1127 addr) != page->index) { 1128 1128 pte_t ptfile = pgoff_to_pte(page->index); 1129 1129 if (pte_soft_dirty(ptent)) 1130 - pte_file_mksoft_dirty(ptfile); 1130 + ptfile = pte_file_mksoft_dirty(ptfile); 1131 1131 set_pte_at(mm, addr, pte, ptfile); 1132 1132 } 1133 1133 if (PageAnon(page))
+3 -1
mm/shmem.c
··· 2367 2367 2368 2368 if (new_dentry->d_inode) { 2369 2369 (void) shmem_unlink(new_dir, new_dentry); 2370 - if (they_are_dirs) 2370 + if (they_are_dirs) { 2371 + drop_nlink(new_dentry->d_inode); 2371 2372 drop_nlink(old_dir); 2373 + } 2372 2374 } else if (they_are_dirs) { 2373 2375 drop_nlink(old_dir); 2374 2376 inc_nlink(new_dir);
+4 -11
mm/slab.c
··· 2124 2124 int 2125 2125 __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags) 2126 2126 { 2127 - size_t left_over, freelist_size, ralign; 2127 + size_t left_over, freelist_size; 2128 + size_t ralign = BYTES_PER_WORD; 2128 2129 gfp_t gfp; 2129 2130 int err; 2130 2131 size_t size = cachep->size; ··· 2157 2156 size += (BYTES_PER_WORD - 1); 2158 2157 size &= ~(BYTES_PER_WORD - 1); 2159 2158 } 2160 - 2161 - /* 2162 - * Redzoning and user store require word alignment or possibly larger. 2163 - * Note this will be overridden by architecture or caller mandated 2164 - * alignment if either is greater than BYTES_PER_WORD. 2165 - */ 2166 - if (flags & SLAB_STORE_USER) 2167 - ralign = BYTES_PER_WORD; 2168 2159 2169 2160 if (flags & SLAB_RED_ZONE) { 2170 2161 ralign = REDZONE_ALIGN; ··· 2987 2994 2988 2995 #ifdef CONFIG_NUMA 2989 2996 /* 2990 - * Try allocating on another node if PF_SPREAD_SLAB is a mempolicy is set. 2997 + * Try allocating on another node if PFA_SPREAD_SLAB is a mempolicy is set. 2991 2998 * 2992 2999 * If we are in_interrupt, then process context, including cpusets and 2993 3000 * mempolicy, may not apply and should not be used for allocation policy. ··· 3219 3226 { 3220 3227 void *objp; 3221 3228 3222 - if (current->mempolicy || unlikely(current->flags & PF_SPREAD_SLAB)) { 3229 + if (current->mempolicy || cpuset_do_slab_mem_spread()) { 3223 3230 objp = alternate_node_alloc(cache, flags); 3224 3231 if (objp) 3225 3232 goto out;
+3
net/core/skbuff.c
··· 3166 3166 NAPI_GRO_CB(skb)->free = NAPI_GRO_FREE_STOLEN_HEAD; 3167 3167 goto done; 3168 3168 } 3169 + /* switch back to head shinfo */ 3170 + pinfo = skb_shinfo(p); 3171 + 3169 3172 if (pinfo->frag_list) 3170 3173 goto merge; 3171 3174 if (skb_gro_len(p) != pinfo->gso_size)
+8 -3
net/ipv4/ip_tunnel.c
··· 853 853 854 854 t = ip_tunnel_find(itn, p, itn->fb_tunnel_dev->type); 855 855 856 - if (!t && (cmd == SIOCADDTUNNEL)) { 857 - t = ip_tunnel_create(net, itn, p); 858 - err = PTR_ERR_OR_ZERO(t); 856 + if (cmd == SIOCADDTUNNEL) { 857 + if (!t) { 858 + t = ip_tunnel_create(net, itn, p); 859 + err = PTR_ERR_OR_ZERO(t); 860 + break; 861 + } 862 + 863 + err = -EEXIST; 859 864 break; 860 865 } 861 866 if (dev != itn->fb_tunnel_dev && cmd == SIOCCHGTUNNEL) {
+1 -1
net/ipv4/route.c
··· 746 746 } 747 747 748 748 n = ipv4_neigh_lookup(&rt->dst, NULL, &new_gw); 749 - if (n) { 749 + if (!IS_ERR(n)) { 750 750 if (!(n->nud_state & NUD_VALID)) { 751 751 neigh_event_send(n, NULL); 752 752 } else {
+2 -1
net/ipv6/addrconf.c
··· 4783 4783 4784 4784 if (ip6_del_rt(ifp->rt)) 4785 4785 dst_free(&ifp->rt->dst); 4786 + 4787 + rt_genid_bump_ipv6(net); 4786 4788 break; 4787 4789 } 4788 4790 atomic_inc(&net->ipv6.dev_addr_genid); 4789 - rt_genid_bump_ipv6(net); 4790 4791 } 4791 4792 4792 4793 static void ipv6_ifa_notify(int event, struct inet6_ifaddr *ifp)
+7
net/ipv6/addrconf_core.c
··· 8 8 #include <net/addrconf.h> 9 9 #include <net/ip.h> 10 10 11 + /* if the ipv6 module is registered, this function is used by xfrm to 12 + * force all sockets to relookup their nodes - this is fairly 13 + * expensive, be careful 14 + */ 15 + void (*__fib6_flush_trees)(struct net *); 16 + EXPORT_SYMBOL(__fib6_flush_trees); 17 + 11 18 #define IPV6_ADDR_SCOPE_TYPE(scope) ((scope) << 16) 12 19 13 20 static inline unsigned int ipv6_addr_scope2type(unsigned int scope)
+20
net/ipv6/ip6_fib.c
··· 1605 1605 fib6_clean_tree(net, fn, fib6_prune_clone, 1, NULL); 1606 1606 } 1607 1607 1608 + static int fib6_update_sernum(struct rt6_info *rt, void *arg) 1609 + { 1610 + __u32 sernum = *(__u32 *)arg; 1611 + 1612 + if (rt->rt6i_node && 1613 + rt->rt6i_node->fn_sernum != sernum) 1614 + rt->rt6i_node->fn_sernum = sernum; 1615 + 1616 + return 0; 1617 + } 1618 + 1619 + static void fib6_flush_trees(struct net *net) 1620 + { 1621 + __u32 new_sernum = fib6_new_sernum(); 1622 + 1623 + fib6_clean_all(net, fib6_update_sernum, &new_sernum); 1624 + } 1625 + 1608 1626 /* 1609 1627 * Garbage collection 1610 1628 */ ··· 1806 1788 NULL); 1807 1789 if (ret) 1808 1790 goto out_unregister_subsys; 1791 + 1792 + __fib6_flush_trees = fib6_flush_trees; 1809 1793 out: 1810 1794 return ret; 1811 1795
+3
net/ipv6/ip6_gre.c
··· 314 314 struct ip6gre_net *ign = net_generic(net, ip6gre_net_id); 315 315 316 316 t = ip6gre_tunnel_find(net, parms, ARPHRD_IP6GRE); 317 + if (t && create) 318 + return NULL; 317 319 if (t || !create) 318 320 return t; 319 321 ··· 1730 1728 MODULE_AUTHOR("D. Kozlov (xeb@mail.ru)"); 1731 1729 MODULE_DESCRIPTION("GRE over IPv6 tunneling device"); 1732 1730 MODULE_ALIAS_RTNL_LINK("ip6gre"); 1731 + MODULE_ALIAS_RTNL_LINK("ip6gretap"); 1733 1732 MODULE_ALIAS_NETDEV("ip6gre0");
+5 -1
net/ipv6/ip6_tunnel.c
··· 364 364 (t = rtnl_dereference(*tp)) != NULL; 365 365 tp = &t->next) { 366 366 if (ipv6_addr_equal(local, &t->parms.laddr) && 367 - ipv6_addr_equal(remote, &t->parms.raddr)) 367 + ipv6_addr_equal(remote, &t->parms.raddr)) { 368 + if (create) 369 + return NULL; 370 + 368 371 return t; 372 + } 369 373 } 370 374 if (!create) 371 375 return NULL;
+5 -1
net/ipv6/ip6_vti.c
··· 253 253 (t = rtnl_dereference(*tp)) != NULL; 254 254 tp = &t->next) { 255 255 if (ipv6_addr_equal(local, &t->parms.laddr) && 256 - ipv6_addr_equal(remote, &t->parms.raddr)) 256 + ipv6_addr_equal(remote, &t->parms.raddr)) { 257 + if (create) 258 + return NULL; 259 + 257 260 return t; 261 + } 258 262 } 259 263 if (!create) 260 264 return NULL;
-4
net/ipv6/route.c
··· 314 314 315 315 memset(dst + 1, 0, sizeof(*rt) - sizeof(*dst)); 316 316 rt6_init_peer(rt, table ? &table->tb6_peers : net->ipv6.peers); 317 - rt->rt6i_genid = rt_genid_ipv6(net); 318 317 INIT_LIST_HEAD(&rt->rt6i_siblings); 319 318 } 320 319 return rt; ··· 1095 1096 * DST_OBSOLETE_FORCE_CHK which forces validation calls down 1096 1097 * into this function always. 1097 1098 */ 1098 - if (rt->rt6i_genid != rt_genid_ipv6(dev_net(rt->dst.dev))) 1099 - return NULL; 1100 - 1101 1099 if (!rt->rt6i_node || (rt->rt6i_node->fn_sernum != cookie)) 1102 1100 return NULL; 1103 1101
+1
net/netfilter/Kconfig
··· 856 856 tristate '"TPROXY" target transparent proxying support' 857 857 depends on NETFILTER_XTABLES 858 858 depends on NETFILTER_ADVANCED 859 + depends on (IPV6 || IPV6=n) 859 860 depends on IP_NF_MANGLE 860 861 select NF_DEFRAG_IPV4 861 862 select NF_DEFRAG_IPV6 if IP6_NF_IPTABLES
+63 -1
net/netfilter/nfnetlink.c
··· 222 222 } 223 223 } 224 224 225 + struct nfnl_err { 226 + struct list_head head; 227 + struct nlmsghdr *nlh; 228 + int err; 229 + }; 230 + 231 + static int nfnl_err_add(struct list_head *list, struct nlmsghdr *nlh, int err) 232 + { 233 + struct nfnl_err *nfnl_err; 234 + 235 + nfnl_err = kmalloc(sizeof(struct nfnl_err), GFP_KERNEL); 236 + if (nfnl_err == NULL) 237 + return -ENOMEM; 238 + 239 + nfnl_err->nlh = nlh; 240 + nfnl_err->err = err; 241 + list_add_tail(&nfnl_err->head, list); 242 + 243 + return 0; 244 + } 245 + 246 + static void nfnl_err_del(struct nfnl_err *nfnl_err) 247 + { 248 + list_del(&nfnl_err->head); 249 + kfree(nfnl_err); 250 + } 251 + 252 + static void nfnl_err_reset(struct list_head *err_list) 253 + { 254 + struct nfnl_err *nfnl_err, *next; 255 + 256 + list_for_each_entry_safe(nfnl_err, next, err_list, head) 257 + nfnl_err_del(nfnl_err); 258 + } 259 + 260 + static void nfnl_err_deliver(struct list_head *err_list, struct sk_buff *skb) 261 + { 262 + struct nfnl_err *nfnl_err, *next; 263 + 264 + list_for_each_entry_safe(nfnl_err, next, err_list, head) { 265 + netlink_ack(skb, nfnl_err->nlh, nfnl_err->err); 266 + nfnl_err_del(nfnl_err); 267 + } 268 + } 269 + 225 270 static void nfnetlink_rcv_batch(struct sk_buff *skb, struct nlmsghdr *nlh, 226 271 u_int16_t subsys_id) 227 272 { ··· 275 230 const struct nfnetlink_subsystem *ss; 276 231 const struct nfnl_callback *nc; 277 232 bool success = true, done = false; 233 + static LIST_HEAD(err_list); 278 234 int err; 279 235 280 236 if (subsys_id >= NFNL_SUBSYS_COUNT) ··· 333 287 type = nlh->nlmsg_type; 334 288 if (type == NFNL_MSG_BATCH_BEGIN) { 335 289 /* Malformed: Batch begin twice */ 290 + nfnl_err_reset(&err_list); 336 291 success = false; 337 292 goto done; 338 293 } else if (type == NFNL_MSG_BATCH_END) { ··· 380 333 * original skb. 
381 334 */ 382 335 if (err == -EAGAIN) { 336 + nfnl_err_reset(&err_list); 383 337 ss->abort(oskb); 384 338 nfnl_unlock(subsys_id); 385 339 kfree_skb(nskb); ··· 389 341 } 390 342 ack: 391 343 if (nlh->nlmsg_flags & NLM_F_ACK || err) { 344 + /* Errors are delivered once the full batch has been 345 + * processed, this avoids that the same error is 346 + * reported several times when replaying the batch. 347 + */ 348 + if (nfnl_err_add(&err_list, nlh, err) < 0) { 349 + /* We failed to enqueue an error, reset the 350 + * list of errors and send OOM to userspace 351 + * pointing to the batch header. 352 + */ 353 + nfnl_err_reset(&err_list); 354 + netlink_ack(skb, nlmsg_hdr(oskb), -ENOMEM); 355 + success = false; 356 + goto done; 357 + } 392 358 /* We don't stop processing the batch on errors, thus, 393 359 * userspace gets all the errors that the batch 394 360 * triggers. 395 361 */ 396 - netlink_ack(skb, nlh, err); 397 362 if (err) 398 363 success = false; 399 364 } ··· 422 361 else 423 362 ss->abort(oskb); 424 363 364 + nfnl_err_deliver(&err_list, oskb); 425 365 nfnl_unlock(subsys_id); 426 366 kfree_skb(nskb); 427 367 }
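The nfnetlink change above queues errors on a list while a batch is processed and only acks them once the whole batch is done, so that a batch replayed after `-EAGAIN` does not report the same error twice. The bookkeeping pattern can be sketched as ordinary userspace C (hypothetical names, standard library only; a real implementation would call `netlink_ack()` at delivery time):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of the deferred-error list pattern: errors seen
 * while processing a batch are queued, then either discarded (batch is
 * replayed or malformed) or delivered once at the very end. */
struct pending_err {
	struct pending_err *next;
	int seq;		/* stands in for the queued nlmsghdr */
	int err;
};

static int err_add(struct pending_err **list, int seq, int err)
{
	struct pending_err *e = malloc(sizeof(*e));

	if (e == NULL)
		return -1;
	e->seq = seq;
	e->err = err;
	e->next = *list;
	*list = e;
	return 0;
}

/* Drop all queued errors without reporting them (the replay case). */
static void err_reset(struct pending_err **list)
{
	while (*list) {
		struct pending_err *e = *list;

		*list = e->next;
		free(e);
	}
}

/* Report and free all queued errors; returns how many were delivered. */
static int err_deliver(struct pending_err **list)
{
	int n = 0;

	while (*list) {
		struct pending_err *e = *list;

		*list = e->next;
		n++;		/* kernel code would netlink_ack() here */
		free(e);
	}
	return n;
}
```

The reset path matches the `-EAGAIN` and malformed-batch branches in the patch; the deliver path matches the single `nfnl_err_deliver()` call after commit or abort.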
+7 -5
net/netfilter/nft_hash.c
··· 180 180 static void nft_hash_destroy(const struct nft_set *set) 181 181 { 182 182 const struct rhashtable *priv = nft_set_priv(set); 183 - const struct bucket_table *tbl; 183 + const struct bucket_table *tbl = priv->tbl; 184 184 struct nft_hash_elem *he, *next; 185 185 unsigned int i; 186 186 187 - tbl = rht_dereference(priv->tbl, priv); 188 - for (i = 0; i < tbl->size; i++) 189 - rht_for_each_entry_safe(he, next, tbl->buckets[i], priv, node) 187 + for (i = 0; i < tbl->size; i++) { 188 + for (he = rht_entry(tbl->buckets[i], struct nft_hash_elem, node); 189 + he != NULL; he = next) { 190 + next = rht_entry(he->node.next, struct nft_hash_elem, node); 190 191 nft_hash_elem_destroy(set, he); 191 - 192 + } 193 + } 192 194 rhashtable_destroy(priv); 193 195 } 194 196
-2
net/netfilter/nft_rbtree.c
··· 234 234 struct nft_rbtree_elem *rbe; 235 235 struct rb_node *node; 236 236 237 - spin_lock_bh(&nft_rbtree_lock); 238 237 while ((node = priv->root.rb_node) != NULL) { 239 238 rb_erase(node, &priv->root); 240 239 rbe = rb_entry(node, struct nft_rbtree_elem, node); 241 240 nft_rbtree_elem_destroy(set, rbe); 242 241 } 243 - spin_unlock_bh(&nft_rbtree_lock); 244 242 } 245 243 246 244 static bool nft_rbtree_estimate(const struct nft_set_desc *desc, u32 features,
+4 -2
net/sched/ematch.c
··· 526 526 match_idx = stack[--stackp]; 527 527 cur_match = tcf_em_get_match(tree, match_idx); 528 528 529 - if (tcf_em_early_end(cur_match, res)) 529 + if (tcf_em_early_end(cur_match, res)) { 530 + if (tcf_em_is_inverted(cur_match)) 531 + res = !res; 530 532 goto pop_stack; 531 - else { 533 + } else { 532 534 match_idx++; 533 535 goto proceed; 534 536 }
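The ematch fix above addresses a subtlety of early termination: when a subtree's evaluation is cut short, a match flagged as inverted must have its intermediate result flipped before being popped back to the parent, otherwise the inversion is silently lost. A hypothetical standalone model of just that decision (`early_end_result` is an invented name, not a kernel function):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the tcf_em_early_end() path after the fix:
 * on early end, an inverted match contributes the negation of its
 * intermediate result to the parent. */
static bool early_end_result(bool inverted, bool res)
{
	if (inverted)
		res = !res;
	return res;
}
```

Before the patch, the `goto pop_stack` happened without this flip, so inverted matches behaved like plain ones whenever the tree ended early.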
+6
scripts/tags.sh
··· 197 197 --regex-c++='/SETPCGFLAG\(([^,)]*).*/SetPageCgroup\1/' \ 198 198 --regex-c++='/CLEARPCGFLAG\(([^,)]*).*/ClearPageCgroup\1/' \ 199 199 --regex-c++='/TESTCLEARPCGFLAG\(([^,)]*).*/TestClearPageCgroup\1/' \ 200 + --regex-c++='/TASK_PFA_TEST\([^,]*,\s*([^)]*)\)/task_\1/' \ 201 + --regex-c++='/TASK_PFA_SET\([^,]*,\s*([^)]*)\)/task_set_\1/' \ 202 + --regex-c++='/TASK_PFA_CLEAR\([^,]*,\s*([^)]*)\)/task_clear_\1/'\ 200 203 --regex-c='/PCI_OP_READ\((\w*).*[1-4]\)/pci_bus_read_config_\1/' \ 201 204 --regex-c='/PCI_OP_WRITE\((\w*).*[1-4]\)/pci_bus_write_config_\1/' \ 202 205 --regex-c='/DEFINE_(MUTEX|SEMAPHORE|SPINLOCK)\((\w*)/\2/v/' \ ··· 263 260 --regex='/SETPCGFLAG\(([^,)]*).*/SetPageCgroup\1/' \ 264 261 --regex='/CLEARPCGFLAG\(([^,)]*).*/ClearPageCgroup\1/' \ 265 262 --regex='/TESTCLEARPCGFLAG\(([^,)]*).*/TestClearPageCgroup\1/' \ 263 + --regex='/TASK_PFA_TEST\([^,]*,\s*([^)]*)\)/task_\1/' \ 264 + --regex='/TASK_PFA_SET\([^,]*,\s*([^)]*)\)/task_set_\1/' \ 265 + --regex='/TASK_PFA_CLEAR\([^,]*,\s*([^)]*)\)/task_clear_\1/' \ 266 266 --regex='/_PE(\([^,)]*\).*/PEVENT_ERRNO__\1/' \ 267 267 --regex='/PCI_OP_READ(\([a-z]*[a-z]\).*[1-4])/pci_bus_read_config_\1/' \ 268 268 --regex='/PCI_OP_WRITE(\([a-z]*[a-z]\).*[1-4])/pci_bus_write_config_\1/'\