···
The implementation is simple.

Setting the flag 'cpuset.memory_spread_page' turns on a per-process flag
-PF_SPREAD_PAGE for each task that is in that cpuset or subsequently
+PFA_SPREAD_PAGE for each task that is in that cpuset or subsequently
joins that cpuset. The page allocation calls for the page cache
-is modified to perform an inline check for this PF_SPREAD_PAGE task
+is modified to perform an inline check for this PFA_SPREAD_PAGE task
flag, and if set, a call to a new routine cpuset_mem_spread_node()
returns the node to prefer for the allocation.

Similarly, setting 'cpuset.memory_spread_slab' turns on the flag
-PF_SPREAD_SLAB, and appropriately marked slab caches will allocate
+PFA_SPREAD_SLAB, and appropriately marked slab caches will allocate
pages from the node returned by cpuset_mem_spread_node().

The cpuset_mem_spread_node() routine is also simple. It uses the
···
 - fsl,data-width : should be <18> or <24>
 - port: A port node with endpoint definitions as defined in
   Documentation/devicetree/bindings/media/video-interfaces.txt.
+  On i.MX5, the internal two-input-multiplexer is used.
+  Due to hardware limitations, only one port (port@[0,1])
+  can be used for each channel (lvds-channel@[0,1], respectively)
   On i.MX6, there should be four ports (port@[0-3]) that correspond
   to the four LVDS multiplexer inputs.
···
                "di0", "di1";

        lvds-channel@0 {
+               #address-cells = <1>;
+               #size-cells = <0>;
                reg = <0>;
                fsl,data-mapping = "spwg";
                fsl,data-width = <24>;
···
                /* ... */
        };

-       port {
+       port@0 {
+               reg = <0>;
+
                lvds0_in: endpoint {
                        remote-endpoint = <&ipu_di0_lvds0>;
                };
···
        };

        lvds-channel@1 {
+               #address-cells = <1>;
+               #size-cells = <0>;
                reg = <1>;
                fsl,data-mapping = "spwg";
                fsl,data-width = <24>;
···
                /* ... */
        };

-       port {
+       port@1 {
+               reg = <1>;
+
                lvds1_in: endpoint {
                        remote-endpoint = <&ipu_di1_lvds1>;
                };
+211
Documentation/devicetree/of_selftest.txt
···
+Open Firmware Device Tree Selftest
+----------------------------------
+
+Author: Gaurav Minocha <gaurav.minocha.os@gmail.com>
+
+1. Introduction
+
+This document explains how the test data required for executing OF selftest
+is attached to the live tree dynamically, independent of the machine's
+architecture.
+
+It is recommended to read the following documents before moving ahead.
+
+[1] Documentation/devicetree/usage-model.txt
+[2] http://www.devicetree.org/Device_Tree_Usage
+
+OF Selftest has been designed to test the interface (include/linux/of.h)
+provided to device driver developers to fetch the device information etc.
+from the unflattened device tree data structure. This interface is used by
+most of the device drivers in various use cases.
+
+
+2. Test-data
+
+The Device Tree Source file (drivers/of/testcase-data/testcases.dts) contains
+the test data required for executing the unit tests automated in
+drivers/of/selftests.c. Currently, the following Device Tree Source Include
+files (.dtsi) are included in testcases.dts:
+
+drivers/of/testcase-data/tests-interrupts.dtsi
+drivers/of/testcase-data/tests-platform.dtsi
+drivers/of/testcase-data/tests-phandle.dtsi
+drivers/of/testcase-data/tests-match.dtsi
+
+When the kernel is built with OF_SELFTEST enabled, the following make rule
+
+$(obj)/%.dtb: $(src)/%.dts FORCE
+	$(call if_changed_dep, dtc)
+
+is used to compile the DT source file (testcases.dts) into a binary blob
+(testcases.dtb), also referred to as a flattened DT.
+
+After that, using the following rule the binary blob above is wrapped as an
+assembly file (testcases.dtb.S).
+
+$(obj)/%.dtb.S: $(obj)/%.dtb
+	$(call cmd, dt_S_dtb)
+
+The assembly file is compiled into an object file (testcases.dtb.o), and is
+linked into the kernel image.
+
+
+2.1. Adding the test data
+
+Un-flattened device tree structure:
+
+Un-flattened device tree consists of connected device_node(s) in form of a
+tree structure described below.
+
+// following struct members are used to construct the tree
+struct device_node {
+	...
+	struct device_node *parent;
+	struct device_node *child;
+	struct device_node *sibling;
+	struct device_node *allnext;	/* next in list of all nodes */
+	...
+};
+
+Figure 1 describes a generic structure of a machine's un-flattened device
+tree, considering only the child and sibling pointers. There exists another
+pointer, *parent, that is used to traverse the tree in the reverse direction.
+So, at a particular level the child node and all the sibling nodes will have
+a parent pointer pointing to a common node (e.g. child1, sibling2, sibling3
+and sibling4's parent points to the root node).
+
+root ('/')
+    |
+child1 -> sibling2 -> sibling3 -> sibling4 -> null
+    |         |           |           |
+    |         |           |          null
+    |         |           |
+    |         |        child31 -> sibling32 -> null
+    |         |           |           |
+    |         |          null        null
+    |         |
+    |      child21 -> sibling22 -> sibling23 -> null
+    |         |           |            |
+    |        null        null         null
+    |
+child11 -> sibling12 -> sibling13 -> sibling14 -> null
+    |          |            |            |
+    |          |            |           null
+    |          |            |
+   null       null       child131 -> null
+                             |
+                            null
+
+Figure 1: Generic structure of un-flattened device tree
+
+
+*allnext: it is used to link all the nodes of DT into a list. So, for the
+above tree the list would be as follows:
+
+root->child1->child11->sibling12->sibling13->child131->sibling14->sibling2->
+child21->sibling22->sibling23->sibling3->child31->sibling32->sibling4->null
+
+Before executing OF selftest, it is required to attach the test data to the
+machine's device tree (if present). So, when selftest_data_add() is called,
+at first it reads the flattened device tree data linked into the kernel image
+via the following kernel symbols:
+
+__dtb_testcases_begin - address marking the start of test data blob
+__dtb_testcases_end   - address marking the end of test data blob
+
+Secondly, it calls of_fdt_unflatten_device_tree() to unflatten the flattened
+blob. And finally, if the machine's device tree (i.e. live tree) is present,
+it attaches the unflattened test data tree to the live tree, else it attaches
+itself as a live device tree.
+
+attach_node_and_children() uses of_attach_node() to attach the nodes into the
+live tree as explained below. To explain the same, the test data tree
+described in Figure 2 is attached to the live tree described in Figure 1.
+
+root ('/')
+    |
+testcase-data
+    |
+test-child0 -> test-sibling1 -> test-sibling2 -> test-sibling3 -> null
+    |                |                |                |
+test-child01        null             null             null
+
+
+allnext list:
+
+root->testcase-data->test-child0->test-child01->test-sibling1->test-sibling2
+->test-sibling3->null
+
+Figure 2: Example test data tree to be attached to live tree.
+
+According to the scenario above, the live tree is already present so it isn't
+required to attach the root ('/') node. All other nodes are attached by
+calling of_attach_node() on each node.
+
+In the function of_attach_node(), the new node is attached as the child of
+the given parent in the live tree. But, if the parent already has a child,
+then the new node replaces the current child and turns it into its sibling.
+So, when the testcase-data node is attached to the live tree above (Figure 1),
+the final structure is as shown in Figure 3.
+
+root ('/')
+    |
+testcase-data -> child1 -> sibling2 -> sibling3 -> sibling4 -> null
+    |               |          |           |           |
+  (...)             |          |           |          null
+                    |          |        child31 -> sibling32 -> null
+                    |          |           |           |
+                    |          |          null        null
+                    |          |
+                    |       child21 -> sibling22 -> sibling23 -> null
+                    |          |           |            |
+                    |         null        null         null
+                    |
+                 child11 -> sibling12 -> sibling13 -> sibling14 -> null
+                    |           |            |            |
+                   null        null          |           null
+                                             |
+                                          child131 -> null
+                                             |
+                                            null
+-----------------------------------------------------------------------
+
+root ('/')
+    |
+testcase-data -> child1 -> sibling2 -> sibling3 -> sibling4 -> null
+    |               |          |           |           |
+    |             (...)      (...)       (...)        null
+    |
+test-sibling3 -> test-sibling2 -> test-sibling1 -> test-child0 -> null
+    |                 |                |                |
+   null              null             null         test-child01
+
+
+Figure 3: Live device tree structure after attaching the testcase-data.
+
+
+Astute readers would have noticed that the test-child0 node becomes the last
+sibling compared to the earlier structure (Figure 2). After test-child0 is
+attached first, test-sibling1 is attached, which pushes the child node
+(i.e. test-child0) down to become a sibling and makes itself the child node,
+as mentioned above.
+
+If a duplicate node is found (i.e. a node with the same full_name property is
+already present in the live tree), then the node isn't attached; rather, its
+properties are updated into the live tree's node by calling the function
+update_node_properties().
+
+
+2.2. Removing the test data
+
+Once the test case execution is complete, selftest_data_remove() is called in
+order to remove the device nodes attached initially (first the leaf nodes are
+detached, then moving up the parent nodes are removed, and eventually the
+whole tree). selftest_data_remove() calls detach_node_and_children(), which
+uses of_detach_node() to detach the nodes from the live device tree.
+
+To detach a node, of_detach_node() first updates the allnext linked list, by
+pointing the previous node's allnext to the current node's allnext pointer.
+Then it either updates the child pointer of the given node's parent to its
+sibling, or attaches the previous sibling to the given node's sibling, as
+appropriate. That is it :)
···
		asm("mcr p15, 0, %0, c13, c0, 3"
		    : : "r" (val));
	} else {
+#ifdef CONFIG_KUSER_HELPERS
		/*
		 * User space must never try to access this
		 * directly. Expect your app to break
···
		 * entry-armv.S for details)
		 */
		*((unsigned int *)0xffff0ff0) = val;
+#endif
	}
}
···
menu "TI OMAP/AM/DM/DRA Family"
	depends on ARCH_MULTI_V6 || ARCH_MULTI_V7

-config ARCH_OMAP
-	bool
-
config ARCH_OMAP2
	bool "TI OMAP2"
	depends on ARCH_MULTI_V6
+1-1
arch/arm/mach-omap2/omap_hwmod.c
···

	spin_lock_irqsave(&io_chain_lock, flags);

-	if (cpu_is_omap34xx() && omap3_has_io_chain_ctrl())
+	if (cpu_is_omap34xx())
		omap3xxx_prm_reconfigure_io_chain();
	else if (cpu_is_omap44xx())
		omap44xx_prm_reconfigure_io_chain();
+35-4
arch/arm/mach-omap2/prm3xxx.c
···
	.ocp_barrier = &omap3xxx_prm_ocp_barrier,
	.save_and_clear_irqen = &omap3xxx_prm_save_and_clear_irqen,
	.restore_irqen = &omap3xxx_prm_restore_irqen,
-	.reconfigure_io_chain = &omap3xxx_prm_reconfigure_io_chain,
+	.reconfigure_io_chain = NULL,
};

/*
···
}

/**
- * omap3xxx_prm_reconfigure_io_chain - clear latches and reconfigure I/O chain
+ * omap3430_pre_es3_1_reconfigure_io_chain - restart wake-up daisy chain
+ *
+ * The ST_IO_CHAIN bit does not exist in 3430 before es3.1. The only
+ * thing we can do is toggle EN_IO bit for earlier omaps.
+ */
+void omap3430_pre_es3_1_reconfigure_io_chain(void)
+{
+	omap2_prm_clear_mod_reg_bits(OMAP3430_EN_IO_MASK, WKUP_MOD,
+				     PM_WKEN);
+	omap2_prm_set_mod_reg_bits(OMAP3430_EN_IO_MASK, WKUP_MOD,
+				   PM_WKEN);
+	omap2_prm_read_mod_reg(WKUP_MOD, PM_WKEN);
+}
+
+/**
+ * omap3_prm_reconfigure_io_chain - clear latches and reconfigure I/O chain
 *
 * Clear any previously-latched I/O wakeup events and ensure that the
 * I/O wakeup gates are aligned with the current mux settings. Works
 * by asserting WUCLKIN, waiting for WUCLKOUT to be asserted, and then
 * deasserting WUCLKIN and clearing the ST_IO_CHAIN WKST bit. No
- * return value.
+ * return value. These registers are only available in 3430 es3.1 and later.
 */
-void omap3xxx_prm_reconfigure_io_chain(void)
+void omap3_prm_reconfigure_io_chain(void)
{
	int i = 0;
···
			  PM_WKST);

	omap2_prm_read_mod_reg(WKUP_MOD, PM_WKST);
+}
+
+/**
+ * omap3xxx_prm_reconfigure_io_chain - reconfigure I/O chain
+ */
+void omap3xxx_prm_reconfigure_io_chain(void)
+{
+	if (omap3_prcm_irq_setup.reconfigure_io_chain)
+		omap3_prcm_irq_setup.reconfigure_io_chain();
}

/**
···

	if (!(prm_features & PRM_HAS_IO_WAKEUP))
		return 0;
+
+	if (omap3_has_io_chain_ctrl())
+		omap3_prcm_irq_setup.reconfigure_io_chain =
+			omap3_prm_reconfigure_io_chain;
+	else
+		omap3_prcm_irq_setup.reconfigure_io_chain =
+			omap3430_pre_es3_1_reconfigure_io_chain;

	omap3xxx_prm_enable_io_wakeup();
	ret = omap_prcm_register_chain_handler(&omap3_prcm_irq_setup);
+1-1
arch/arm/mach-pxa/generic.c
···
/*
 * For non device-tree builds, keep legacy timer init
 */
-void pxa_timer_init(void)
+void __init pxa_timer_init(void)
{
	pxa_timer_nodt_init(IRQ_OST0, io_p2v(0x40a00000),
			    get_clock_tick_rate());
+3
arch/arm/mm/alignment.c
···
 * This code is not portable to processors with late data abort handling.
 */
#define CODING_BITS(i)	(i & 0x0e000000)
+#define COND_BITS(i)	(i & 0xf0000000)

#define LDST_I_BIT(i)	(i & (1 << 26))	/* Immediate constant */
#define LDST_P_BIT(i)	(i & (1 << 24))	/* Preindex */
···
		break;

	case 0x04000000:	/* ldr or str immediate */
+		if (COND_BITS(instr) == 0xf0000000) /* NEON VLDn, VSTn */
+			goto bad;
		offset.un = OFFSET_BITS(instr);
		handler = do_alignment_ldrstr;
		break;
···
	__end_of_permanent_fixed_addresses,

	/*
-	 * 256 temporary boot-time mappings, used by early_ioremap(),
+	 * 512 temporary boot-time mappings, used by early_ioremap(),
	 * before ioremap() is functional.
	 *
-	 * If necessary we round it up to the next 256 pages boundary so
+	 * If necessary we round it up to the next 512 pages boundary so
	 * that we can have a single pgd entry and a single pte table:
	 */
#define NR_FIX_BTMAPS		64
-#define FIX_BTMAPS_SLOTS	4
+#define FIX_BTMAPS_SLOTS	8
#define TOTAL_FIX_BTMAPS	(NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
	FIX_BTMAP_END =
		(__end_of_permanent_fixed_addresses ^
···
	ACPI_OBJECT_COMMON_HEADER ACPI_COMMON_FIELD_INFO u16 resource_length;
	union acpi_operand_object *region_obj;	/* Containing op_region object */
	u8 *resource_buffer;	/* resource_template for serial regions/fields */
+	u16 pin_number_index;	/* Index relative to previous Connection/Template */
};

struct acpi_object_bank_field {
+2
drivers/acpi/acpica/dsfield.c
···
	 */
	info->resource_buffer = NULL;
	info->connection_node = NULL;
+	info->pin_number_index = 0;

	/*
	 * A Connection() is either an actual resource descriptor (buffer)
···
	}

	info->field_bit_position += info->field_bit_length;
+	info->pin_number_index++;	/* Index relative to previous Connection() */
	break;

	default:
+31-16
drivers/acpi/acpica/evregion.c
···
	union acpi_operand_object *region_obj2;
	void *region_context = NULL;
	struct acpi_connection_info *context;
+	acpi_physical_address address;

	ACPI_FUNCTION_TRACE(ev_address_space_dispatch);
···
	/* We have everything we need, we can invoke the address space handler */

	handler = handler_desc->address_space.handler;
-
-	ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
-			  "Handler %p (@%p) Address %8.8X%8.8X [%s]\n",
-			  &region_obj->region.handler->address_space, handler,
-			  ACPI_FORMAT_NATIVE_UINT(region_obj->region.address +
-						  region_offset),
-			  acpi_ut_get_region_name(region_obj->region.
-						  space_id)));
+	address = (region_obj->region.address + region_offset);

	/*
	 * Special handling for generic_serial_bus and general_purpose_io:
	 * There are three extra parameters that must be passed to the
	 * handler via the context:
-	 *   1) Connection buffer, a resource template from Connection() op.
-	 *   2) Length of the above buffer.
-	 *   3) Actual access length from the access_as() op.
+	 *   1) Connection buffer, a resource template from Connection() op
+	 *   2) Length of the above buffer
+	 *   3) Actual access length from the access_as() op
+	 *
+	 * In addition, for general_purpose_io, the Address and bit_width fields
+	 * are defined as follows:
+	 *   1) Address is the pin number index of the field (bit offset from
+	 *      the previous Connection)
+	 *   2) bit_width is the actual bit length of the field (number of pins)
	 */
-	if (((region_obj->region.space_id == ACPI_ADR_SPACE_GSBUS) ||
-	     (region_obj->region.space_id == ACPI_ADR_SPACE_GPIO)) &&
+	if ((region_obj->region.space_id == ACPI_ADR_SPACE_GSBUS) &&
	    context && field_obj) {

		/* Get the Connection (resource_template) buffer */
···
		context->length = field_obj->field.resource_length;
		context->access_length = field_obj->field.access_length;
	}
+	if ((region_obj->region.space_id == ACPI_ADR_SPACE_GPIO) &&
+	    context && field_obj) {
+
+		/* Get the Connection (resource_template) buffer */
+
+		context->connection = field_obj->field.resource_buffer;
+		context->length = field_obj->field.resource_length;
+		context->access_length = field_obj->field.access_length;
+		address = field_obj->field.pin_number_index;
+		bit_width = field_obj->field.bit_length;
+	}
+
+	ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
+			  "Handler %p (@%p) Address %8.8X%8.8X [%s]\n",
+			  &region_obj->region.handler->address_space, handler,
+			  ACPI_FORMAT_NATIVE_UINT(address),
+			  acpi_ut_get_region_name(region_obj->region.
+						  space_id)));

	if (!(handler_desc->address_space.handler_flags &
	      ACPI_ADDR_HANDLER_DEFAULT_INSTALLED)) {
···

	/* Call the handler */

-	status = handler(function,
-			 (region_obj->region.address + region_offset),
-			 bit_width, value, context,
+	status = handler(function, address, bit_width, value, context,
			 region_obj2->extra.region_context);

	if (ACPI_FAILURE(status)) {
+67
drivers/acpi/acpica/exfield.c
···
		buffer = &buffer_desc->integer.value;
	}

+	if ((obj_desc->common.type == ACPI_TYPE_LOCAL_REGION_FIELD) &&
+	    (obj_desc->field.region_obj->region.space_id ==
+	     ACPI_ADR_SPACE_GPIO)) {
+		/*
+		 * For GPIO (general_purpose_io), the Address will be the bit offset
+		 * from the previous Connection() operator, making it effectively a
+		 * pin number index. The bit_length is the length of the field, which
+		 * is thus the number of pins.
+		 */
+		ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
+				  "GPIO FieldRead [FROM]: Pin %u Bits %u\n",
+				  obj_desc->field.pin_number_index,
+				  obj_desc->field.bit_length));
+
+		/* Lock entire transaction if requested */
+
+		acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags);
+
+		/* Perform the read */
+
+		status = acpi_ex_access_region(obj_desc, 0,
+					       (u64 *)buffer, ACPI_READ);
+		acpi_ex_release_global_lock(obj_desc->common_field.field_flags);
+		if (ACPI_FAILURE(status)) {
+			acpi_ut_remove_reference(buffer_desc);
+		} else {
+			*ret_buffer_desc = buffer_desc;
+		}
+		return_ACPI_STATUS(status);
+	}
+
	ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
			  "FieldRead [TO]: Obj %p, Type %X, Buf %p, ByteLen %X\n",
			  obj_desc, obj_desc->common.type, buffer,
···
		acpi_ex_release_global_lock(obj_desc->common_field.field_flags);

		*result_desc = buffer_desc;
+		return_ACPI_STATUS(status);
+	} else if ((obj_desc->common.type == ACPI_TYPE_LOCAL_REGION_FIELD) &&
+		   (obj_desc->field.region_obj->region.space_id ==
+		    ACPI_ADR_SPACE_GPIO)) {
+		/*
+		 * For GPIO (general_purpose_io), we will bypass the entire field
+		 * mechanism and handoff the bit address and bit width directly to
+		 * the handler. The Address will be the bit offset
+		 * from the previous Connection() operator, making it effectively a
+		 * pin number index. The bit_length is the length of the field, which
+		 * is thus the number of pins.
+		 */
+		if (source_desc->common.type != ACPI_TYPE_INTEGER) {
+			return_ACPI_STATUS(AE_AML_OPERAND_TYPE);
+		}
+
+		ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
+				  "GPIO FieldWrite [FROM]: (%s:%X), Val %.8X [TO]: Pin %u Bits %u\n",
+				  acpi_ut_get_type_name(source_desc->common.
+							type),
+				  source_desc->common.type,
+				  (u32)source_desc->integer.value,
+				  obj_desc->field.pin_number_index,
+				  obj_desc->field.bit_length));
+
+		buffer = &source_desc->integer.value;
+
+		/* Lock entire transaction if requested */
+
+		acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags);
+
+		/* Perform the write */
+
+		status = acpi_ex_access_region(obj_desc, 0,
+					       (u64 *)buffer, ACPI_WRITE);
+		acpi_ex_release_global_lock(obj_desc->common_field.field_flags);
		return_ACPI_STATUS(status);
	}
+2
drivers/acpi/acpica/exprep.c
···
		obj_desc->field.resource_length = info->resource_length;
	}

+	obj_desc->field.pin_number_index = info->pin_number_index;
+
	/* Allow full data read from EC address space */

	if ((obj_desc->field.region_obj->region.space_id ==
···
	struct gpio_chip *chip = achip->chip;
	struct acpi_resource_gpio *agpio;
	struct acpi_resource *ares;
+	int pin_index = (int)address;
	acpi_status status;
	bool pull_up;
+	int length;
	int i;

	status = acpi_buffer_to_resource(achip->conn_info.connection,
···
		return AE_BAD_PARAMETER;
	}

-	for (i = 0; i < agpio->pin_table_length; i++) {
+	length = min(agpio->pin_table_length, (u16)(pin_index + bits));
+	for (i = pin_index; i < length; ++i) {
		unsigned pin = agpio->pin_table[i];
		struct acpi_gpio_connection *conn;
		struct gpio_desc *desc;
+2-2
drivers/gpio/gpiolib.c
···
		return;
	}

-	irq_set_chained_handler(parent_irq, parent_handler);
	/*
	 * The parent irqchip is already using the chip_data for this
	 * irqchip, so our callbacks simply use the handler_data.
	 */
	irq_set_handler_data(parent_irq, gpiochip);
+	irq_set_chained_handler(parent_irq, parent_handler);
}
EXPORT_SYMBOL_GPL(gpiochip_set_chained_irqchip);
···
		set_bit(FLAG_OPEN_SOURCE, &desc->flags);

	/* No particular flag request, return here... */
-	if (flags & GPIOD_FLAGS_BIT_DIR_SET)
+	if (!(flags & GPIOD_FLAGS_BIT_DIR_SET))
		return desc;

	/* Process flags */
+7-5
drivers/gpu/drm/i915/i915_cmd_parser.c
···
	BUG_ON(!validate_cmds_sorted(ring, cmd_tables, cmd_table_count));
	BUG_ON(!validate_regs_sorted(ring));

-	ret = init_hash_table(ring, cmd_tables, cmd_table_count);
-	if (ret) {
-		DRM_ERROR("CMD: cmd_parser_init failed!\n");
-		fini_hash_table(ring);
-		return ret;
+	if (hash_empty(ring->cmd_hash)) {
+		ret = init_hash_table(ring, cmd_tables, cmd_table_count);
+		if (ret) {
+			DRM_ERROR("CMD: cmd_parser_init failed!\n");
+			fini_hash_table(ring);
+			return ret;
+		}
	}

	ring->needs_cmd_parser = true;
+1-1
drivers/gpu/drm/i915/intel_hdmi.c
···
	if (tmp & HDMI_MODE_SELECT_HDMI)
		pipe_config->has_hdmi_sink = true;

-	if (tmp & HDMI_MODE_SELECT_HDMI)
+	if (tmp & SDVO_AUDIO_ENABLE)
		pipe_config->has_audio = true;

	if (!HAS_PCH_SPLIT(dev) &&
+6-6
drivers/gpu/drm/radeon/cik.c
···
 */
static int cik_cp_compute_resume(struct radeon_device *rdev)
{
-	int r, i, idx;
+	int r, i, j, idx;
	u32 tmp;
	bool use_doorbell = true;
	u64 hqd_gpu_addr;
···
		mqd->queue_state.cp_hqd_pq_wptr = 0;
		if (RREG32(CP_HQD_ACTIVE) & 1) {
			WREG32(CP_HQD_DEQUEUE_REQUEST, 1);
-			for (i = 0; i < rdev->usec_timeout; i++) {
+			for (j = 0; j < rdev->usec_timeout; j++) {
				if (!(RREG32(CP_HQD_ACTIVE) & 1))
					break;
				udelay(1);
···
	wptr = RREG32(IH_RB_WPTR);

	if (wptr & RB_OVERFLOW) {
+		wptr &= ~RB_OVERFLOW;
		/* When a ring buffer overflow happen start parsing interrupt
		 * from the last not overwritten vector (wptr + 16). Hopefully
		 * this should allow us to catchup.
		 */
-		dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, %d, %d)\n",
-			 wptr, rdev->ih.rptr, (wptr + 16) + rdev->ih.ptr_mask);
+		dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n",
+			 wptr, rdev->ih.rptr, (wptr + 16) & rdev->ih.ptr_mask);
		rdev->ih.rptr = (wptr + 16) & rdev->ih.ptr_mask;
		tmp = RREG32(IH_RB_CNTL);
		tmp |= IH_WPTR_OVERFLOW_CLEAR;
		WREG32(IH_RB_CNTL, tmp);
-		wptr &= ~RB_OVERFLOW;
	}
	return (wptr & rdev->ih.ptr_mask);
}
···
		/* wptr/rptr are in bytes! */
		rptr += 16;
		rptr &= rdev->ih.ptr_mask;
+		WREG32(IH_RB_RPTR, rptr);
	}
	if (queue_hotplug)
		schedule_work(&rdev->hotplug_work);
···
	if (queue_thermal)
		schedule_work(&rdev->pm.dpm.thermal.work);
	rdev->ih.rptr = rptr;
-	WREG32(IH_RB_RPTR, rdev->ih.rptr);
	atomic_set(&rdev->ih.lock, 0);

	/* make sure wptr hasn't changed while processing */
+4-4
drivers/gpu/drm/radeon/evergreen.c
···
	wptr = RREG32(IH_RB_WPTR);

	if (wptr & RB_OVERFLOW) {
+		wptr &= ~RB_OVERFLOW;
		/* When a ring buffer overflow happen start parsing interrupt
		 * from the last not overwritten vector (wptr + 16). Hopefully
		 * this should allow us to catchup.
		 */
-		dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, %d, %d)\n",
-			 wptr, rdev->ih.rptr, (wptr + 16) + rdev->ih.ptr_mask);
+		dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n",
+			 wptr, rdev->ih.rptr, (wptr + 16) & rdev->ih.ptr_mask);
		rdev->ih.rptr = (wptr + 16) & rdev->ih.ptr_mask;
		tmp = RREG32(IH_RB_CNTL);
		tmp |= IH_WPTR_OVERFLOW_CLEAR;
		WREG32(IH_RB_CNTL, tmp);
-		wptr &= ~RB_OVERFLOW;
	}
	return (wptr & rdev->ih.ptr_mask);
}
···
		/* wptr/rptr are in bytes! */
		rptr += 16;
		rptr &= rdev->ih.ptr_mask;
+		WREG32(IH_RB_RPTR, rptr);
	}
	if (queue_hotplug)
		schedule_work(&rdev->hotplug_work);
···
	if (queue_thermal && rdev->pm.dpm_enabled)
		schedule_work(&rdev->pm.dpm.thermal.work);
	rdev->ih.rptr = rptr;
-	WREG32(IH_RB_RPTR, rdev->ih.rptr);
	atomic_set(&rdev->ih.lock, 0);

	/* make sure wptr hasn't changed while processing */
+4-4
drivers/gpu/drm/radeon/r600.c
···
	wptr = RREG32(IH_RB_WPTR);

	if (wptr & RB_OVERFLOW) {
+		wptr &= ~RB_OVERFLOW;
		/* When a ring buffer overflow happen start parsing interrupt
		 * from the last not overwritten vector (wptr + 16). Hopefully
		 * this should allow us to catchup.
		 */
-		dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, %d, %d)\n",
-			 wptr, rdev->ih.rptr, (wptr + 16) + rdev->ih.ptr_mask);
+		dev_warn(rdev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n",
+			 wptr, rdev->ih.rptr, (wptr + 16) & rdev->ih.ptr_mask);
		rdev->ih.rptr = (wptr + 16) & rdev->ih.ptr_mask;
		tmp = RREG32(IH_RB_CNTL);
		tmp |= IH_WPTR_OVERFLOW_CLEAR;
		WREG32(IH_RB_CNTL, tmp);
-		wptr &= ~RB_OVERFLOW;
	}
	return (wptr & rdev->ih.ptr_mask);
}
···
		/* wptr/rptr are in bytes! */
		rptr += 16;
		rptr &= rdev->ih.ptr_mask;
+		WREG32(IH_RB_RPTR, rptr);
	}
	if (queue_hotplug)
		schedule_work(&rdev->hotplug_work);
···
	if (queue_thermal && rdev->pm.dpm_enabled)
		schedule_work(&rdev->pm.dpm.thermal.work);
	rdev->ih.rptr = rptr;
-	WREG32(IH_RB_RPTR, rdev->ih.rptr);
	atomic_set(&rdev->ih.lock, 0);

	/* make sure wptr hasn't changed while processing */
+1
drivers/gpu/drm/radeon/radeon.h
···
extern int radeon_deep_color;
extern int radeon_use_pflipirq;
extern int radeon_bapm;
+extern int radeon_backlight;

/*
 * Copy from radeon_drv.h so we don't have to include both and have conflicting
···
	struct qlcnic_host_tx_ring *tx_ring;
	struct qlcnic_esw_statistics port_stats;
	struct qlcnic_mac_statistics mac_stats;
-	int index, ret, length, size, tx_size, ring;
+	int index, ret, length, size, ring;
	char *p;

-	tx_size = adapter->drv_tx_rings * QLCNIC_TX_STATS_LEN;
+	memset(data, 0, stats->n_stats * sizeof(u64));

-	memset(data, 0, tx_size * sizeof(u64));
	for (ring = 0, index = 0; ring < adapter->drv_tx_rings; ring++) {
-		if (test_bit(__QLCNIC_DEV_UP, &adapter->state)) {
+		if (adapter->is_up == QLCNIC_ADAPTER_UP_MAGIC) {
			tx_ring = &adapter->tx_ring[ring];
			data = qlcnic_fill_tx_queue_stats(data, tx_ring);
			qlcnic_update_stats(adapter);
+		} else {
+			data += QLCNIC_TX_STATS_LEN;
		}
	}

-	memset(data, 0, stats->n_stats * sizeof(u64));
	length = QLCNIC_STATS_LEN;
	for (index = 0; index < length; index++) {
		p = (char *)adapter + qlcnic_gstrings_stats[index].stat_offset;
+9-2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
	if (IS_ERR(priv->stmmac_clk)) {
		dev_warn(priv->device, "%s: warning: cannot get CSR clock\n",
			 __func__);
-		ret = PTR_ERR(priv->stmmac_clk);
-		goto error_clk_get;
+		/* If failed to obtain stmmac_clk and specific clk_csr value
+		 * is NOT passed from the platform, probe fail.
+		 */
+		if (!priv->plat->clk_csr) {
+			ret = PTR_ERR(priv->stmmac_clk);
+			goto error_clk_get;
+		} else {
+			priv->stmmac_clk = NULL;
+		}
	}
	clk_prepare_enable(priv->stmmac_clk);
+2-1
drivers/net/hyperv/netvsc_drv.c
···
	int hdr_offset;
	u32 net_trans_info;
	u32 hash;
+	u32 skb_length = skb->len;


	/* We will atmost need two pages to describe the rndis
···

drop:
	if (ret == 0) {
-		net->stats.tx_bytes += skb->len;
+		net->stats.tx_bytes += skb_length;
		net->stats.tx_packets++;
	} else {
		kfree(packet);
+8-10
drivers/net/macvtap.c
···
	return err;
}

+/* Requires RTNL */
static int macvtap_set_queue(struct net_device *dev, struct file *file,
			     struct macvtap_queue *q)
{
	struct macvlan_dev *vlan = netdev_priv(dev);
-	int err = -EBUSY;

-	rtnl_lock();
	if (vlan->numqueues == MAX_MACVTAP_QUEUES)
-		goto out;
+		return -EBUSY;

-	err = 0;
	rcu_assign_pointer(q->vlan, vlan);
	rcu_assign_pointer(vlan->taps[vlan->numvtaps], q);
	sock_hold(&q->sk);
···
	vlan->numvtaps++;
	vlan->numqueues++;

-out:
-	rtnl_unlock();
-	return err;
+	return 0;
}

static int macvtap_disable_queue(struct macvtap_queue *q)
···
static int macvtap_open(struct inode *inode, struct file *file)
{
	struct net *net = current->nsproxy->net_ns;
-	struct net_device *dev = dev_get_by_macvtap_minor(iminor(inode));
+	struct net_device *dev;
	struct macvtap_queue *q;
-	int err;
+	int err = -ENODEV;

-	err = -ENODEV;
+	rtnl_lock();
+	dev = dev_get_by_macvtap_minor(iminor(inode));
	if (!dev)
		goto out;
···
	if (dev)
		dev_put(dev);

+	rtnl_unlock();
	return err;
}
+42-46
drivers/net/usb/r8152.c
···2626#include <linux/mdio.h>27272828/* Version Information */2929-#define DRIVER_VERSION "v1.06.0 (2014/03/03)"2929+#define DRIVER_VERSION "v1.06.1 (2014/10/01)"3030#define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>"3131#define DRIVER_DESC "Realtek RTL8152/RTL8153 Based USB Ethernet Adapters"3232#define MODULENAME "r8152"···19791979 ocp_write_word(tp, MCU_TYPE_PLA, PLA_MISC_1, ocp_data);19801980}1981198119821982+static int rtl_start_rx(struct r8152 *tp)19831983+{19841984+ int i, ret = 0;19851985+19861986+ INIT_LIST_HEAD(&tp->rx_done);19871987+ for (i = 0; i < RTL8152_MAX_RX; i++) {19881988+ INIT_LIST_HEAD(&tp->rx_info[i].list);19891989+ ret = r8152_submit_rx(tp, &tp->rx_info[i], GFP_KERNEL);19901990+ if (ret)19911991+ break;19921992+ }19931993+19941994+ return ret;19951995+}19961996+19971997+static int rtl_stop_rx(struct r8152 *tp)19981998+{19991999+ int i;20002000+20012001+ for (i = 0; i < RTL8152_MAX_RX; i++)20022002+ usb_kill_urb(tp->rx_info[i].urb);20032003+20042004+ return 0;20052005+}20062006+19822007static int rtl_enable(struct r8152 *tp)19832008{19842009 u32 ocp_data;19851985- int i, ret;1986201019872011 r8152b_reset_packet_filter(tp);19882012···2016199220171993 rxdy_gated_en(tp, false);2018199420192019- INIT_LIST_HEAD(&tp->rx_done);20202020- ret = 0;20212021- for (i = 0; i < RTL8152_MAX_RX; i++) {20222022- INIT_LIST_HEAD(&tp->rx_info[i].list);20232023- ret |= r8152_submit_rx(tp, &tp->rx_info[i], GFP_KERNEL);20242024- }20252025-20262026- return ret;19951995+ return rtl_start_rx(tp);20271996}2028199720291998static int rtl8152_enable(struct r8152 *tp)···21002083 usleep_range(1000, 2000);21012084 }2102208521032103- for (i = 0; i < RTL8152_MAX_RX; i++)21042104- usb_kill_urb(tp->rx_info[i].urb);20862086+ rtl_stop_rx(tp);2105208721062088 rtl8152_nic_reset(tp);21072089}···22592243 }22602244}2261224522622262-static void rtl_clear_bp(struct r8152 *tp)22632263-{22642264- ocp_write_dword(tp, MCU_TYPE_PLA, PLA_BP_0, 0);22652265- ocp_write_dword(tp, MCU_TYPE_PLA, PLA_BP_2, 0);22662266- ocp_write_dword(tp, MCU_TYPE_PLA, PLA_BP_4, 0);22672267- ocp_write_dword(tp, MCU_TYPE_PLA, PLA_BP_6, 0);22682268- ocp_write_dword(tp, MCU_TYPE_USB, USB_BP_0, 0);22692269- ocp_write_dword(tp, MCU_TYPE_USB, USB_BP_2, 0);22702270- ocp_write_dword(tp, MCU_TYPE_USB, USB_BP_4, 0);22712271- ocp_write_dword(tp, MCU_TYPE_USB, USB_BP_6, 0);22722272- usleep_range(3000, 6000);22732273- ocp_write_word(tp, MCU_TYPE_PLA, PLA_BP_BA, 0);22742274- ocp_write_word(tp, MCU_TYPE_USB, USB_BP_BA, 0);22752275-}22762276-22772277-static void r8153_clear_bp(struct r8152 *tp)22782278-{22792279- ocp_write_byte(tp, MCU_TYPE_PLA, PLA_BP_EN, 0);22802280- ocp_write_byte(tp, MCU_TYPE_USB, USB_BP_EN, 0);22812281- rtl_clear_bp(tp);22822282-}22832283-22842246static void r8153_teredo_off(struct r8152 *tp)22852247{22862248 u32 ocp_data;···23002306 data &= ~BMCR_PDOWN;23012307 r8152_mdio_write(tp, MII_BMCR, data);23022308 }23032303-23042304- rtl_clear_bp(tp);2305230923062310 set_bit(PHY_RESET, &tp->flags);23072311}···24462454 data &= ~BMCR_PDOWN;24472455 r8152_mdio_write(tp, MII_BMCR, data);24482456 }24492449-24502450- r8153_clear_bp(tp);2451245724522458 if (tp->version == RTL_VER_03) {24532459 data = ocp_reg_read(tp, OCP_EEE_CFG);···31713181 clear_bit(WORK_ENABLE, &tp->flags);31723182 usb_kill_urb(tp->intr_urb);31733183 cancel_delayed_work_sync(&tp->schedule);31843184+ tasklet_disable(&tp->tl);31743185 if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {31863186+ rtl_stop_rx(tp);31753187 rtl_runtime_suspend_enable(tp, true);31763188 } else {31773177- tasklet_disable(&tp->tl);31783189 tp->rtl_ops.down(tp);31793179- tasklet_enable(&tp->tl);31803190 }31913191+ tasklet_enable(&tp->tl);31813192 }3182319331833194 return 0;···31973206 if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {31983207 rtl_runtime_suspend_enable(tp, false);31993208 clear_bit(SELECTIVE_SUSPEND, &tp->flags);32093209+ set_bit(WORK_ENABLE, &tp->flags);32003210 if (tp->speed & LINK_STATUS)32013201- tp->rtl_ops.disable(tp);32113211+ rtl_start_rx(tp);32023212 } else {32033213 tp->rtl_ops.up(tp);32043214 rtl8152_set_speed(tp, AUTONEG_ENABLE,32053215 tp->mii.supports_gmii ?32063216 SPEED_1000 : SPEED_100,32073217 DUPLEX_FULL);32183218+ tp->speed = 0;32193219+ netif_carrier_off(tp->netdev);32203220+ set_bit(WORK_ENABLE, &tp->flags);32083221 }32093209- tp->speed = 0;32103210- netif_carrier_off(tp->netdev);32113211- set_bit(WORK_ENABLE, &tp->flags);32123222 usb_submit_urb(tp->intr_urb, GFP_KERNEL);32133223 }32143224···36153623 if (test_bit(RTL8152_UNPLUG, &tp->flags))36163624 return;3617362536183618- r8153_power_cut_en(tp, true);36263626+ r8153_power_cut_en(tp, false);36193627}3620362836213629static int rtl_ops_init(struct r8152 *tp, const struct usb_device_id *id)···3780378837813789 usb_set_intfdata(intf, NULL);37823790 if (tp) {37833783- set_bit(RTL8152_UNPLUG, &tp->flags);37913791+ struct usb_device *udev = tp->udev;37923792+37933793+ if (udev->state == USB_STATE_NOTATTACHED)37943794+ set_bit(RTL8152_UNPLUG, &tp->flags);37953795+37843796 tasklet_kill(&tp->tl);37853797 unregister_netdev(tp->netdev);37863798 tp->rtl_ops.unload(tp);
+14-2
drivers/of/base.c
···138138 /* Important: Don't leak passwords */139139 bool secure = strncmp(pp->name, "security-", 9) == 0;140140141141+ if (!IS_ENABLED(CONFIG_SYSFS))142142+ return 0;143143+141144 if (!of_kset || !of_node_is_attached(np))142145 return 0;143146···160157 const char *name;161158 struct property *pp;162159 int rc;160160+161161+ if (!IS_ENABLED(CONFIG_SYSFS))162162+ return 0;163163164164 if (!of_kset)165165 return 0;···1719171317201714void __of_remove_property_sysfs(struct device_node *np, struct property *prop)17211715{17161716+ if (!IS_ENABLED(CONFIG_SYSFS))17171717+ return;17181718+17221719 /* at early boot, bail here and defer setup to of_init() */17231720 if (of_kset && of_node_is_attached(np))17241721 sysfs_remove_bin_file(&np->kobj, &prop->attr);···17861777void __of_update_property_sysfs(struct device_node *np, struct property *newprop,17871778 struct property *oldprop)17881779{17801780+ if (!IS_ENABLED(CONFIG_SYSFS))17811781+ return;17821782+17891783 /* At early boot, bail out and defer setup to of_init() */17901784 if (!of_kset)17911785 return;···18591847{18601848 struct property *pp;1861184918501850+ of_aliases = of_find_node_by_path("/aliases");18621851 of_chosen = of_find_node_by_path("/chosen");18631852 if (of_chosen == NULL)18641853 of_chosen = of_find_node_by_path("/chosen@0");···18751862 of_stdout = of_find_node_by_path(name);18761863 }1877186418781878- of_aliases = of_find_node_by_path("/aliases");18791865 if (!of_aliases)18801866 return;18811867···19981986{19991987 if (!dn || dn != of_stdout || console_set_on_cmdline)20001988 return false;20012001- return add_preferred_console(name, index, NULL);19891989+ return !add_preferred_console(name, index, NULL);20021990}20031991EXPORT_SYMBOL_GPL(of_console_check);20041992
+3
drivers/of/dynamic.c
···4545{4646 struct property *pp;47474848+ if (!IS_ENABLED(CONFIG_SYSFS))4949+ return;5050+4851 BUG_ON(!of_node_is_initialized(np));4952 if (!of_kset)5053 return;
+9-5
drivers/of/fdt.c
···928928void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size)929929{930930 const u64 phys_offset = __pa(PAGE_OFFSET);931931- base &= PAGE_MASK;931931+932932+ if (!PAGE_ALIGNED(base)) {933933+ size -= PAGE_SIZE - (base & ~PAGE_MASK);934934+ base = PAGE_ALIGN(base);935935+ }932936 size &= PAGE_MASK;933937934938 if (base > MAX_PHYS_ADDR) {···941937 return;942938 }943939944944- if (base + size > MAX_PHYS_ADDR) {945945- pr_warning("Ignoring memory range 0x%lx - 0x%llx\n",946946- ULONG_MAX, base + size);947947- size = MAX_PHYS_ADDR - base;940940+ if (base + size - 1 > MAX_PHYS_ADDR) {941941+ pr_warning("Ignoring memory range 0x%llx - 0x%llx\n",942942+ ((u64)MAX_PHYS_ADDR) + 1, base + size);943943+ size = MAX_PHYS_ADDR - base + 1;948944 }949945950946 if (base + size < phys_offset) {
···2222#define GSBI_CTRL_REG 0x00002323#define GSBI_PROTOCOL_SHIFT 424242525+struct gsbi_info {2626+ struct clk *hclk;2727+ u32 mode;2828+ u32 crci;2929+};3030+2531static int gsbi_probe(struct platform_device *pdev)2632{2733 struct device_node *node = pdev->dev.of_node;2834 struct resource *res;2935 void __iomem *base;3030- struct clk *hclk;3131- u32 mode, crci = 0;3636+ struct gsbi_info *gsbi;3737+3838+ gsbi = devm_kzalloc(&pdev->dev, sizeof(*gsbi), GFP_KERNEL);3939+4040+ if (!gsbi)4141+ return -ENOMEM;32423343 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);3444 base = devm_ioremap_resource(&pdev->dev, res);3545 if (IS_ERR(base))3646 return PTR_ERR(base);37473838- if (of_property_read_u32(node, "qcom,mode", &mode)) {4848+ if (of_property_read_u32(node, "qcom,mode", &gsbi->mode)) {3949 dev_err(&pdev->dev, "missing mode configuration\n");4050 return -EINVAL;4151 }42524353 /* not required, so default to 0 if not present */4444- of_property_read_u32(node, "qcom,crci", &crci);5454+ of_property_read_u32(node, "qcom,crci", &gsbi->crci);45554646- dev_info(&pdev->dev, "GSBI port protocol: %d crci: %d\n", mode, crci);5656+ dev_info(&pdev->dev, "GSBI port protocol: %d crci: %d\n",5757+ gsbi->mode, gsbi->crci);5858+ gsbi->hclk = devm_clk_get(&pdev->dev, "iface");5959+ if (IS_ERR(gsbi->hclk))6060+ return PTR_ERR(gsbi->hclk);47614848- hclk = devm_clk_get(&pdev->dev, "iface");4949- if (IS_ERR(hclk))5050- return PTR_ERR(hclk);6262+ clk_prepare_enable(gsbi->hclk);51635252- clk_prepare_enable(hclk);5353-5454- writel_relaxed((mode << GSBI_PROTOCOL_SHIFT) | crci,6464+ writel_relaxed((gsbi->mode << GSBI_PROTOCOL_SHIFT) | gsbi->crci,5565 base + GSBI_CTRL_REG);56665767 /* make sure the gsbi control write is not reordered */5868 wmb();59696060- clk_disable_unprepare(hclk);7070+ platform_set_drvdata(pdev, gsbi);61716262- return of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev);7272+ return of_platform_populate(node, NULL, NULL, &pdev->dev);7373+}7474+7575+static int gsbi_remove(struct platform_device *pdev)7676+{7777+ struct gsbi_info *gsbi = platform_get_drvdata(pdev);7878+7979+ clk_disable_unprepare(gsbi->hclk);8080+8181+ return 0;6382}64836584static const struct of_device_id gsbi_dt_match[] = {···9576 .of_match_table = gsbi_dt_match,9677 },9778 .probe = gsbi_probe,7979+ .remove = gsbi_remove,9880};998110082module_platform_driver(gsbi_driver);
+4-4
fs/cachefiles/bind.c
···5050 cache->brun_percent < 100);51515252 if (*args) {5353- pr_err("'bind' command doesn't take an argument");5353+ pr_err("'bind' command doesn't take an argument\n");5454 return -EINVAL;5555 }56565757 if (!cache->rootdirname) {5858- pr_err("No cache directory specified");5858+ pr_err("No cache directory specified\n");5959 return -EINVAL;6060 }61616262 /* don't permit already bound caches to be re-bound */6363 if (test_bit(CACHEFILES_READY, &cache->flags)) {6464- pr_err("Cache already bound");6464+ pr_err("Cache already bound\n");6565 return -EBUSY;6666 }6767···248248 kmem_cache_free(cachefiles_object_jar, fsdef);249249error_root_object:250250 cachefiles_end_secure(cache, saved_cred);251251- pr_err("Failed to register: %d", ret);251251+ pr_err("Failed to register: %d\n", ret);252252 return ret;253253}254254
+15-15
fs/cachefiles/daemon.c
···315315static int cachefiles_daemon_range_error(struct cachefiles_cache *cache,316316 char *args)317317{318318- pr_err("Free space limits must be in range 0%%<=stop<cull<run<100%%");318318+ pr_err("Free space limits must be in range 0%%<=stop<cull<run<100%%\n");319319320320 return -EINVAL;321321}···475475 _enter(",%s", args);476476477477 if (!*args) {478478- pr_err("Empty directory specified");478478+ pr_err("Empty directory specified\n");479479 return -EINVAL;480480 }481481482482 if (cache->rootdirname) {483483- pr_err("Second cache directory specified");483483+ pr_err("Second cache directory specified\n");484484 return -EEXIST;485485 }486486···503503 _enter(",%s", args);504504505505 if (!*args) {506506- pr_err("Empty security context specified");506506+ pr_err("Empty security context specified\n");507507 return -EINVAL;508508 }509509510510 if (cache->secctx) {511511- pr_err("Second security context specified");511511+ pr_err("Second security context specified\n");512512 return -EINVAL;513513 }514514···531531 _enter(",%s", args);532532533533 if (!*args) {534534- pr_err("Empty tag specified");534534+ pr_err("Empty tag specified\n");535535 return -EINVAL;536536 }537537···562562 goto inval;563563564564 if (!test_bit(CACHEFILES_READY, &cache->flags)) {565565- pr_err("cull applied to unready cache");565565+ pr_err("cull applied to unready cache\n");566566 return -EIO;567567 }568568569569 if (test_bit(CACHEFILES_DEAD, &cache->flags)) {570570- pr_err("cull applied to dead cache");570570+ pr_err("cull applied to dead cache\n");571571 return -EIO;572572 }573573···587587588588notdir:589589 path_put(&path);590590- pr_err("cull command requires dirfd to be a directory");590590+ pr_err("cull command requires dirfd to be a directory\n");591591 return -ENOTDIR;592592593593inval:594594- pr_err("cull command requires dirfd and filename");594594+ pr_err("cull command requires dirfd and filename\n");595595 return -EINVAL;596596}597597···614614 return 0;615615616616inval:617617- pr_err("debug command requires mask");617617+ pr_err("debug command requires mask\n");618618 return -EINVAL;619619}620620···634634 goto inval;635635636636 if (!test_bit(CACHEFILES_READY, &cache->flags)) {637637- pr_err("inuse applied to unready cache");637637+ pr_err("inuse applied to unready cache\n");638638 return -EIO;639639 }640640641641 if (test_bit(CACHEFILES_DEAD, &cache->flags)) {642642- pr_err("inuse applied to dead cache");642642+ pr_err("inuse applied to dead cache\n");643643 return -EIO;644644 }645645···659659660660notdir:661661 path_put(&path);662662- pr_err("inuse command requires dirfd to be a directory");662662+ pr_err("inuse command requires dirfd to be a directory\n");663663 return -ENOTDIR;664664665665inval:666666- pr_err("inuse command requires dirfd and filename");666666+ pr_err("inuse command requires dirfd and filename\n");667667 return -EINVAL;668668}669669
···8484error_object_jar:8585 misc_deregister(&cachefiles_dev);8686error_dev:8787- pr_err("failed to register: %d", ret);8787+ pr_err("failed to register: %d\n", ret);8888 return ret;8989}9090
+7-7
fs/cachefiles/namei.c
···543543 next, next->d_inode, next->d_inode->i_ino);544544545545 } else if (!S_ISDIR(next->d_inode->i_mode)) {546546- pr_err("inode %lu is not a directory",546546+ pr_err("inode %lu is not a directory\n",547547 next->d_inode->i_ino);548548 ret = -ENOBUFS;549549 goto error;···574574 } else if (!S_ISDIR(next->d_inode->i_mode) &&575575 !S_ISREG(next->d_inode->i_mode)576576 ) {577577- pr_err("inode %lu is not a file or directory",577577+ pr_err("inode %lu is not a file or directory\n",578578 next->d_inode->i_ino);579579 ret = -ENOBUFS;580580 goto error;···768768 ASSERT(subdir->d_inode);769769770770 if (!S_ISDIR(subdir->d_inode->i_mode)) {771771- pr_err("%s is not a directory", dirname);771771+ pr_err("%s is not a directory\n", dirname);772772 ret = -EIO;773773 goto check_error;774774 }···796796mkdir_error:797797 mutex_unlock(&dir->d_inode->i_mutex);798798 dput(subdir);799799- pr_err("mkdir %s failed with error %d", dirname, ret);799799+ pr_err("mkdir %s failed with error %d\n", dirname, ret);800800 return ERR_PTR(ret);801801802802lookup_error:803803 mutex_unlock(&dir->d_inode->i_mutex);804804 ret = PTR_ERR(subdir);805805- pr_err("Lookup %s failed with error %d", dirname, ret);805805+ pr_err("Lookup %s failed with error %d\n", dirname, ret);806806 return ERR_PTR(ret);807807808808nomem_d_alloc:···892892 if (ret == -EIO) {893893 cachefiles_io_error(cache, "Lookup failed");894894 } else if (ret != -ENOMEM) {895895- pr_err("Internal error: %d", ret);895895+ pr_err("Internal error: %d\n", ret);896896 ret = -EIO;897897 }898898···951951 }952952953953 if (ret != -ENOMEM) {954954- pr_err("Internal error: %d", ret);954954+ pr_err("Internal error: %d\n", ret);955955 ret = -EIO;956956 }957957
+5-5
fs/cachefiles/xattr.c
···5151 }52525353 if (ret != -EEXIST) {5454- pr_err("Can't set xattr on %*.*s [%lu] (err %d)",5454+ pr_err("Can't set xattr on %*.*s [%lu] (err %d)\n",5555 dentry->d_name.len, dentry->d_name.len,5656 dentry->d_name.name, dentry->d_inode->i_ino,5757 -ret);···6464 if (ret == -ERANGE)6565 goto bad_type_length;66666767- pr_err("Can't read xattr on %*.*s [%lu] (err %d)",6767+ pr_err("Can't read xattr on %*.*s [%lu] (err %d)\n",6868 dentry->d_name.len, dentry->d_name.len,6969 dentry->d_name.name, dentry->d_inode->i_ino,7070 -ret);···8585 return ret;86868787bad_type_length:8888- pr_err("Cache object %lu type xattr length incorrect",8888+ pr_err("Cache object %lu type xattr length incorrect\n",8989 dentry->d_inode->i_ino);9090 ret = -EIO;9191 goto error;92929393bad_type:9494 xtype[2] = 0;9595- pr_err("Cache object %*.*s [%lu] type %s not %s",9595+ pr_err("Cache object %*.*s [%lu] type %s not %s\n",9696 dentry->d_name.len, dentry->d_name.len,9797 dentry->d_name.name, dentry->d_inode->i_ino,9898 xtype, type);···293293 return ret;294294295295bad_type_length:296296- pr_err("Cache object %lu xattr length incorrect",296296+ pr_err("Cache object %lu xattr length incorrect\n",297297 dentry->d_inode->i_ino);298298 ret = -EIO;299299 goto error;
+36-76
fs/dcache.c
···23722372}23732373EXPORT_SYMBOL(dentry_update_name_case);2374237423752375-static void switch_names(struct dentry *dentry, struct dentry *target)23752375+static void switch_names(struct dentry *dentry, struct dentry *target,23762376+ bool exchange)23762377{23772378 if (dname_external(target)) {23782379 if (dname_external(dentry)) {···24072406 */24082407 unsigned int i;24092408 BUILD_BUG_ON(!IS_ALIGNED(DNAME_INLINE_LEN, sizeof(long)));24092409+ if (!exchange) {24102410+ memcpy(dentry->d_iname, target->d_name.name,24112411+ target->d_name.len + 1);24122412+ dentry->d_name.hash_len = target->d_name.hash_len;24132413+ return;24142414+ }24102415 for (i = 0; i < DNAME_INLINE_LEN / sizeof(long); i++) {24112416 swap(((long *) &dentry->d_iname)[i],24122417 ((long *) &target->d_iname)[i]);24132418 }24142419 }24152420 }24162416- swap(dentry->d_name.len, target->d_name.len);24212421+ swap(dentry->d_name.hash_len, target->d_name.hash_len);24172422}2418242324192424static void dentry_lock_for_move(struct dentry *dentry, struct dentry *target)···24492442 }24502443}2451244424522452-static void dentry_unlock_parents_for_move(struct dentry *dentry,24532453- struct dentry *target)24452445+static void dentry_unlock_for_move(struct dentry *dentry, struct dentry *target)24542446{24552447 if (target->d_parent != dentry->d_parent)24562448 spin_unlock(&dentry->d_parent->d_lock);24572449 if (target->d_parent != target)24582450 spin_unlock(&target->d_parent->d_lock);24512451+ spin_unlock(&target->d_lock);24522452+ spin_unlock(&dentry->d_lock);24592453}2460245424612455/*24622456 * When switching names, the actual string doesn't strictly have to24632457 * be preserved in the target - because we're dropping the target24642458 * anyway. As such, we can just do a simple memcpy() to copy over24652465- * the new name before we switch.24662466- *24672467- * Note that we have to be a lot more careful about getting the hash24682468- * switched - we have to switch the hash value properly even if it24692469- * then no longer matches the actual (corrupted) string of the target.24702470- * The hash value has to match the hash queue that the dentry is on..24592459+ * the new name before we switch, unless we are going to rehash24602460+ * it. Note that if we *do* unhash the target, we are not allowed24612461+ * to rehash it without giving it a new name/hash key - whether24622462+ * we swap or overwrite the names here, resulting name won't match24632463+ * the reality in filesystem; it's only there for d_path() purposes.24642464+ * Note that all of this is happening under rename_lock, so the24652465+ * any hash lookup seeing it in the middle of manipulations will24662466+ * be discarded anyway. So we do not care what happens to the hash24672467+ * key in that case.24712468 */24722469/*24732470 * __d_move - move a dentry···25172506 d_hash(dentry->d_parent, dentry->d_name.hash));25182507 }2519250825202520- list_del(&dentry->d_u.d_child);25212521- list_del(&target->d_u.d_child);25222522-25232509 /* Switch the names.. */25242524- switch_names(dentry, target);25252525- swap(dentry->d_name.hash, target->d_name.hash);25102510+ switch_names(dentry, target, exchange);2526251125272527- /* ... and switch the parents */25122512+ /* ... and switch them in the tree */25282513 if (IS_ROOT(dentry)) {25142514+ /* splicing a tree */25292515 dentry->d_parent = target->d_parent;25302516 target->d_parent = target;25312531- INIT_LIST_HEAD(&target->d_u.d_child);25172517+ list_del_init(&target->d_u.d_child);25182518+ list_move(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs);25322519 } else {25202520+ /* swapping two dentries */25332521 swap(dentry->d_parent, target->d_parent);25342534-25352535- /* And add them back to the (new) parent lists */25362536- list_add(&target->d_u.d_child, &target->d_parent->d_subdirs);25222522+ list_move(&target->d_u.d_child, &target->d_parent->d_subdirs);25232523+ list_move(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs);25242524+ if (exchange)25252525+ fsnotify_d_move(target);25262526+ fsnotify_d_move(dentry);25372527 }25382538-25392539- list_add(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs);2540252825412529 write_seqcount_end(&target->d_seq);25422530 write_seqcount_end(&dentry->d_seq);2543253125442544- dentry_unlock_parents_for_move(dentry, target);25452545- if (exchange)25462546- fsnotify_d_move(target);25472547- spin_unlock(&target->d_lock);25482548- fsnotify_d_move(dentry);25492549- spin_unlock(&dentry->d_lock);25322532+ dentry_unlock_for_move(dentry, target);25502533}2551253425522535/*···26382633 return ret;26392634}2640263526412641-/*26422642- * Prepare an anonymous dentry for life in the superblock's dentry tree as a26432643- * named dentry in place of the dentry to be replaced.26442644- * returns with anon->d_lock held!26452645- */26462646-static void __d_materialise_dentry(struct dentry *dentry, struct dentry *anon)26472647-{26482648- struct dentry *dparent;26492649-26502650- dentry_lock_for_move(anon, dentry);26512651-26522652- write_seqcount_begin(&dentry->d_seq);26532653- write_seqcount_begin_nested(&anon->d_seq, DENTRY_D_LOCK_NESTED);26542654-26552655- dparent = dentry->d_parent;26562656-26572657- switch_names(dentry, anon);26582658- swap(dentry->d_name.hash, anon->d_name.hash);26592659-26602660- dentry->d_parent = dentry;26612661- list_del_init(&dentry->d_u.d_child);26622662- anon->d_parent = dparent;26632663- if (likely(!d_unhashed(anon))) {26642664- hlist_bl_lock(&anon->d_sb->s_anon);26652665- __hlist_bl_del(&anon->d_hash);26662666- anon->d_hash.pprev = NULL;26672667- hlist_bl_unlock(&anon->d_sb->s_anon);26682668- }26692669- list_move(&anon->d_u.d_child, &dparent->d_subdirs);26702670-26712671- write_seqcount_end(&dentry->d_seq);26722672- write_seqcount_end(&anon->d_seq);26732673-26742674- dentry_unlock_parents_for_move(anon, dentry);26752675- spin_unlock(&dentry->d_lock);26762676-26772677- /* anon->d_lock still locked, returns locked */26782678-}26792679-26802636/**26812637 * d_splice_alias - splice a disconnected dentry into the tree if one exists26822638 * @inode: the inode which may have a disconnected dentry···26832717 return ERR_PTR(-EIO);26842718 }26852719 write_seqlock(&rename_lock);26862686- __d_materialise_dentry(dentry, new);27202720+ __d_move(new, dentry, false);26872721 write_sequnlock(&rename_lock);26882688- _d_rehash(new);26892689- spin_unlock(&new->d_lock);26902722 spin_unlock(&inode->i_lock);26912723 security_d_instantiate(new, inode);26922724 iput(inode);···27442780 } else if (IS_ROOT(alias)) {27452781 /* Is this an anonymous mountpoint that we27462782 * could splice into our tree? */27472747- __d_materialise_dentry(dentry, alias);27832783+ __d_move(alias, dentry, false);27482784 write_sequnlock(&rename_lock);27492785 goto found;27502786 } else {···27712807 actual = __d_instantiate_unique(dentry, inode);27722808 if (!actual)27732809 actual = dentry;27742774- else27752775- BUG_ON(!d_unhashed(actual));2776281027772777- spin_lock(&actual->d_lock);28112811+ d_rehash(actual);27782812found:27792779- _d_rehash(actual);27802780- spin_unlock(&actual->d_lock);27812813 spin_unlock(&inode->i_lock);27822814out_nolock:27832815 if (actual == dentry) {
+1-1
fs/direct-io.c
···158158{159159 ssize_t ret;160160161161- ret = iov_iter_get_pages(sdio->iter, dio->pages, DIO_PAGES,161161+ ret = iov_iter_get_pages(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES,162162 &sdio->from);163163164164 if (ret < 0 && sdio->blocks_available && (dio->rw & WRITE)) {
+1
fs/fuse/file.c
···13051305 size_t start;13061306 ssize_t ret = iov_iter_get_pages(ii,13071307 &req->pages[req->num_pages],13081308+ *nbytesp - nbytes,13081309 req->max_pages - req->num_pages,13091310 &start);13101311 if (ret < 0)
+2-1
fs/nfsd/nfs4xdr.c
···3104310431053105 buf->page_len = maxcount;31063106 buf->len += maxcount;31073107- xdr->page_ptr += (maxcount + PAGE_SIZE - 1) / PAGE_SIZE;31073107+ xdr->page_ptr += (buf->page_base + maxcount + PAGE_SIZE - 1)31083108+ / PAGE_SIZE;3108310931093110 /* Use rest of head for padding and remaining ops: */31103111 buf->tail[0].iov_base = xdr->p;
+6-1
fs/nilfs2/inode.c
···2424#include <linux/buffer_head.h>2525#include <linux/gfp.h>2626#include <linux/mpage.h>2727+#include <linux/pagemap.h>2728#include <linux/writeback.h>2829#include <linux/aio.h>2930#include "nilfs.h"···220219221220static int nilfs_set_page_dirty(struct page *page)222221{222222+ struct inode *inode = page->mapping->host;223223 int ret = __set_page_dirty_nobuffers(page);224224225225 if (page_has_buffers(page)) {226226- struct inode *inode = page->mapping->host;227226 unsigned nr_dirty = 0;228227 struct buffer_head *bh, *head;229228···246245247246 if (nr_dirty)248247 nilfs_set_file_dirty(inode, nr_dirty);248248+ } else if (ret) {249249+ unsigned nr_dirty = 1 << (PAGE_CACHE_SHIFT - inode->i_blkbits);250250+251251+ nilfs_set_file_dirty(inode, nr_dirty);249252 }250253 return ret;251254}
+10-8
fs/ocfs2/dlm/dlmmaster.c
···655655 clear_bit(bit, res->refmap);656656}657657658658-659659-void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm,658658+static void __dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm,660659 struct dlm_lock_resource *res)661660{662662- assert_spin_locked(&res->spinlock);663663-664661 res->inflight_locks++;665662666663 mlog(0, "%s: res %.*s, inflight++: now %u, %ps()\n", dlm->name,667664 res->lockname.len, res->lockname.name, res->inflight_locks,668665 __builtin_return_address(0));666666+}667667+668668+void dlm_lockres_grab_inflight_ref(struct dlm_ctxt *dlm,669669+ struct dlm_lock_resource *res)670670+{671671+ assert_spin_locked(&res->spinlock);672672+ __dlm_lockres_grab_inflight_ref(dlm, res);669673}670674671675void dlm_lockres_drop_inflight_ref(struct dlm_ctxt *dlm,···898894 /* finally add the lockres to its hash bucket */899895 __dlm_insert_lockres(dlm, res);900896901901- /* Grab inflight ref to pin the resource */902902- spin_lock(&res->spinlock);903903- dlm_lockres_grab_inflight_ref(dlm, res);904904- spin_unlock(&res->spinlock);897897+ /* since this lockres is new it doesn't not require the spinlock */898898+ __dlm_lockres_grab_inflight_ref(dlm, res);905899906900 /* get an extra ref on the mle in case this is a BLOCK907901 * if so, the creator of the BLOCK may try to put the last
···19031903#define PF_KTHREAD 0x00200000 /* I am a kernel thread */19041904#define PF_RANDOMIZE 0x00400000 /* randomize virtual address space */19051905#define PF_SWAPWRITE 0x00800000 /* Allowed to write to swap */19061906-#define PF_SPREAD_PAGE 0x01000000 /* Spread page cache over cpuset */19071907-#define PF_SPREAD_SLAB 0x02000000 /* Spread some slab caches over cpuset */19081906#define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_allowed */19091907#define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */19101908#define PF_MUTEX_TESTER 0x20000000 /* Thread belongs to the rt mutex tester */···19551957}1956195819571959/* Per-process atomic flags. */19581958-#define PFA_NO_NEW_PRIVS 0x00000001 /* May not gain new privileges. */19601960+#define PFA_NO_NEW_PRIVS 0 /* May not gain new privileges. */19611961+#define PFA_SPREAD_PAGE 1 /* Spread page cache over cpuset */19621962+#define PFA_SPREAD_SLAB 2 /* Spread some slab caches over cpuset */1959196319601960-static inline bool task_no_new_privs(struct task_struct *p)19611961-{19621962- return test_bit(PFA_NO_NEW_PRIVS, &p->atomic_flags);19631963-}1964196419651965-static inline void task_set_no_new_privs(struct task_struct *p)19661966-{19671967- set_bit(PFA_NO_NEW_PRIVS, &p->atomic_flags);19681968-}19651965+#define TASK_PFA_TEST(name, func) \19661966+ static inline bool task_##func(struct task_struct *p) \19671967+ { return test_bit(PFA_##name, &p->atomic_flags); }19681968+#define TASK_PFA_SET(name, func) \19691969+ static inline void task_set_##func(struct task_struct *p) \19701970+ { set_bit(PFA_##name, &p->atomic_flags); }19711971+#define TASK_PFA_CLEAR(name, func) \19721972+ static inline void task_clear_##func(struct task_struct *p) \19731973+ { clear_bit(PFA_##name, &p->atomic_flags); }19741974+19751975+TASK_PFA_TEST(NO_NEW_PRIVS, no_new_privs)19761976+TASK_PFA_SET(NO_NEW_PRIVS, no_new_privs)19771977+19781978+TASK_PFA_TEST(SPREAD_PAGE, spread_page)19791979+TASK_PFA_SET(SPREAD_PAGE, spread_page)19801980+TASK_PFA_CLEAR(SPREAD_PAGE, spread_page)19811981+19821982+TASK_PFA_TEST(SPREAD_SLAB, spread_slab)19831983+TASK_PFA_SET(SPREAD_SLAB, spread_slab)19841984+TASK_PFA_CLEAR(SPREAD_SLAB, spread_slab)1969198519701986/*19711987 * task->jobctl flags···26202608 task_thread_info(p)->task = p;26212609}2622261026112611+/*26122612+ * Return the address of the last usable long on the stack.26132613+ *26142614+ * When the stack grows down, this is just above the thread26152615+ * info struct. Going any lower will corrupt the threadinfo.26162616+ *26172617+ * When the stack grows up, this is the highest address.26182618+ * Beyond that position, we corrupt data on the next page.26192619+ */26232620static inline unsigned long *end_of_stack(struct task_struct *p)26242621{26222622+#ifdef CONFIG_STACK_GROWSUP26232623+ return (unsigned long *)((unsigned long)task_thread_info(p) + THREAD_SIZE) - 1;26242624+#else26252625 return (unsigned long *)(task_thread_info(p) + 1);26262626+#endif26262627}2627262826282629#endif
···114114 u32 rt6i_flags;115115 struct rt6key rt6i_src;116116 struct rt6key rt6i_prefsrc;117117- u32 rt6i_metric;118117119118 struct inet6_dev *rt6i_idev;120119 unsigned long _rt6i_peer;121120122122- u32 rt6i_genid;123123-121121+ u32 rt6i_metric;124122 /* more non-fragment space at head required */125123 unsigned short rt6i_nfheader_len;126126-127124 u8 rt6i_protocol;128125};129126
+3-17
include/net/net_namespace.h
···352352 atomic_inc(&net->ipv4.rt_genid);353353}354354355355-#if IS_ENABLED(CONFIG_IPV6)356356-static inline int rt_genid_ipv6(struct net *net)357357-{358358- return atomic_read(&net->ipv6.rt_genid);359359-}360360-355355+extern void (*__fib6_flush_trees)(struct net *net);361356static inline void rt_genid_bump_ipv6(struct net *net)362357{363363- atomic_inc(&net->ipv6.rt_genid);358358+ if (__fib6_flush_trees)359359+ __fib6_flush_trees(net);364360}365365-#else366366-static inline int rt_genid_ipv6(struct net *net)367367-{368368- return 0;369369-}370370-371371-static inline void rt_genid_bump_ipv6(struct net *net)372372-{373373-}374374-#endif375361376362#if IS_ENABLED(CONFIG_IEEE802154_6LOWPAN)377363static inline struct netns_ieee802154_lowpan *
···725725 clear_bit(bit, addr);726726}727727728728-static void memory_bm_clear_current(struct memory_bitmap *bm)729729-{730730- int bit;731731-732732- bit = max(bm->cur.node_bit - 1, 0);733733- clear_bit(bit, bm->cur.node->data);734734-}735735-736728static int memory_bm_test_bit(struct memory_bitmap *bm, unsigned long pfn)737729{738730 void *addr;···1333134113341342void swsusp_free(void)13351343{13361336- unsigned long fb_pfn, fr_pfn;13441344+ struct zone *zone;13451345+ unsigned long pfn, max_zone_pfn;1337134613381338- memory_bm_position_reset(forbidden_pages_map);13391339- memory_bm_position_reset(free_pages_map);13471347+ for_each_populated_zone(zone) {13481348+ max_zone_pfn = zone_end_pfn(zone);13491349+ for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)13501350+ if (pfn_valid(pfn)) {13511351+ struct page *page = pfn_to_page(pfn);1340135213411341-loop:13421342- fr_pfn = memory_bm_next_pfn(free_pages_map);13431343- fb_pfn = memory_bm_next_pfn(forbidden_pages_map);13441344-13451345- /*13461346- * Find the next bit set in both bitmaps. This is guaranteed to13471347- * terminate when fb_pfn == fr_pfn == BM_END_OF_MAP.13481348- */13491349- do {13501350- if (fb_pfn < fr_pfn)13511351- fb_pfn = memory_bm_next_pfn(forbidden_pages_map);13521352- if (fr_pfn < fb_pfn)13531353- fr_pfn = memory_bm_next_pfn(free_pages_map);13541354- } while (fb_pfn != fr_pfn);13551355-13561356- if (fr_pfn != BM_END_OF_MAP && pfn_valid(fr_pfn)) {13571357- struct page *page = pfn_to_page(fr_pfn);13581358-13591359- memory_bm_clear_current(forbidden_pages_map);13601360- memory_bm_clear_current(free_pages_map);13611361- __free_page(page);13621362- goto loop;13531353+ if (swsusp_page_is_forbidden(page) &&13541354+ swsusp_page_is_free(page)) {13551355+ swsusp_unset_page_forbidden(page);13561356+ swsusp_unset_page_free(page);13571357+ __free_page(page);13581358+ }13591359+ }13631360 }13641364-13651361 nr_copy_pages = 0;13661362 nr_meta_pages = 0;13671363 restore_pblist = NULL;
+1
lib/genalloc.c
···588588	if (!np_pool)
589589		return NULL;
590590	pdev = of_find_device_by_node(np_pool);
591591+	of_node_put(np_pool);
591592	if (!pdev)
592593		return NULL;
593594	return dev_get_gen_pool(&pdev->dev);
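The one-line fix above drops the device-tree node reference that the earlier `of_parse_phandle()` took: `of_find_device_by_node()` only borrows the node, so the caller must balance the get with an `of_node_put()` on every path. A simplified userspace model of that get/put discipline (all names here, `node_get`, `node_put`, `lookup_pool`, are illustrative, not the kernel OF API):

```c
#include <assert.h>

struct node { int refcount; };       /* toy stand-in for struct device_node */

static struct node *node_get(struct node *n) { n->refcount++; return n; }
static void node_put(struct node *n) { n->refcount--; }

/* A lookup that borrows, and does not consume, the caller's reference */
static int find_device(struct node *n) { return n->refcount > 0; }

static int lookup_pool(struct node *np)
{
	struct node *held = node_get(np); /* like of_parse_phandle(): takes a ref */
	int found = find_device(held);    /* borrows it */
	node_put(held);                   /* the fix: drop it before returning */
	return found;
}
```

Without the put, every call would leak one reference and the node could never be released.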
+4-4
lib/rhashtable.c
···592592 * rhashtable_destroy - destroy hash table
593593 * @ht: the hash table to destroy
594594 *
595595- * Frees the bucket array.
595595+ * Frees the bucket array. This function is not rcu safe, therefore the caller
596596+ * has to make sure that no resizing may happen by unpublishing the hashtable
597597+ * and waiting for the quiescent cycle before releasing the bucket array.
596598 */
597599void rhashtable_destroy(const struct rhashtable *ht)
598600{
599599-	const struct bucket_table *tbl = rht_dereference(ht->tbl, ht);
600600-
601601-	bucket_table_free(tbl);
601601+	bucket_table_free(ht->tbl);
602602}
603603EXPORT_SYMBOL_GPL(rhashtable_destroy);
604604
···23672367
23682368	if (new_dentry->d_inode) {
23692369		(void) shmem_unlink(new_dir, new_dentry);
23702370-		if (they_are_dirs)
23702370+		if (they_are_dirs) {
23712371+			drop_nlink(new_dentry->d_inode);
23712372			drop_nlink(old_dir);
23732373+		}
23722374	} else if (they_are_dirs) {
23732375		drop_nlink(old_dir);
23742376		inc_nlink(new_dir);
+4-11
mm/slab.c
···21242124int
21252125__kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
21262126{
21272127-	size_t left_over, freelist_size, ralign;
21272127+	size_t left_over, freelist_size;
21282128+	size_t ralign = BYTES_PER_WORD;
21282129	gfp_t gfp;
21292130	int err;
21302131	size_t size = cachep->size;
···21572156		size += (BYTES_PER_WORD - 1);
21582157		size &= ~(BYTES_PER_WORD - 1);
21592158	}
21602160-
21612161-	/*
21622162-	 * Redzoning and user store require word alignment or possibly larger.
21632163-	 * Note this will be overridden by architecture or caller mandated
21642164-	 * alignment if either is greater than BYTES_PER_WORD.
21652165-	 */
21662166-	if (flags & SLAB_STORE_USER)
21672167-		ralign = BYTES_PER_WORD;
21682159
21692160	if (flags & SLAB_RED_ZONE) {
21702161		ralign = REDZONE_ALIGN;
···29872994
29882995#ifdef CONFIG_NUMA
29892996/*
29902990- * Try allocating on another node if PF_SPREAD_SLAB is a mempolicy is set.
29972997+ * Try allocating on another node if PFA_SPREAD_SLAB is a mempolicy is set.
29912998 *
29922999 * If we are in_interrupt, then process context, including cpusets and
29933000 * mempolicy, may not apply and should not be used for allocation policy.
···32193226{
32203227	void *objp;
32213228
32223222-	if (current->mempolicy || unlikely(current->flags & PF_SPREAD_SLAB)) {
32293229+	if (current->mempolicy || cpuset_do_slab_mem_spread()) {
32233230		objp = alternate_node_alloc(cache, flags);
32243231		if (objp)
32253232			goto out;
+3
net/core/skbuff.c
···31663166		NAPI_GRO_CB(skb)->free = NAPI_GRO_FREE_STOLEN_HEAD;
31673167		goto done;
31683168	}
31693169+	/* switch back to head shinfo */
31703170+	pinfo = skb_shinfo(p);
31713171+
31693172	if (pinfo->frag_list)
31703173		goto merge;
31713174	if (skb_gro_len(p) != pinfo->gso_size)
+8-3
net/ipv4/ip_tunnel.c
···853853
854854	t = ip_tunnel_find(itn, p, itn->fb_tunnel_dev->type);
855855
856856-	if (!t && (cmd == SIOCADDTUNNEL)) {
857857-		t = ip_tunnel_create(net, itn, p);
858858-		err = PTR_ERR_OR_ZERO(t);
856856+	if (cmd == SIOCADDTUNNEL) {
857857+		if (!t) {
858858+			t = ip_tunnel_create(net, itn, p);
859859+			err = PTR_ERR_OR_ZERO(t);
860860+			break;
861861+		}
862862+
863863+		err = -EEXIST;
859864		break;
860865	}
861866	if (dev != itn->fb_tunnel_dev && cmd == SIOCCHGTUNNEL) {
+1-1
net/ipv4/route.c
···746746	}
747747
748748	n = ipv4_neigh_lookup(&rt->dst, NULL, &new_gw);
749749-	if (n) {
749749+	if (!IS_ERR(n)) {
750750		if (!(n->nud_state & NUD_VALID)) {
751751			neigh_event_send(n, NULL);
752752		} else {
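The fix above matters because `ipv4_neigh_lookup()` can return an error cookie rather than NULL: in the kernel's `ERR_PTR` scheme, errors live in the top 4095 values of the pointer range, so `if (n)` is true for a failed lookup and the old code would dereference the cookie. A small userspace sketch of the encoding (a simplified re-implementation for illustration, not the actual `include/linux/err.h`; `lookup` and `table` are hypothetical):

```c
#include <assert.h>
#include <errno.h>

/* Simplified model of the kernel's ERR_PTR encoding */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int table[1] = { 42 };

/* A lookup that can fail: returns a valid pointer or an encoded -errno.
 * Testing "if (p)" on the result would mistake the error for success;
 * only !IS_ERR(p) is a safe success check. */
static int *lookup(int key)
{
	if (key != 0)
		return ERR_PTR(-ENOENT);
	return &table[0];
}
```

The same encoding is why the ip_tunnel hunk above uses `PTR_ERR_OR_ZERO(t)` after `ip_tunnel_create()`: it converts an error pointer to its errno and a valid pointer to 0.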
···88#include <net/addrconf.h>
99#include <net/ip.h>
1010
1111+/* if ipv6 module registers this function is used by xfrm to force all
1212+ * sockets to relookup their nodes - this is fairly expensive, be
1313+ * careful
1414+ */
1515+void (*__fib6_flush_trees)(struct net *);
1616+EXPORT_SYMBOL(__fib6_flush_trees);
1717+
1118#define IPV6_ADDR_SCOPE_TYPE(scope) ((scope) << 16)
1219
1320static inline unsigned int ipv6_addr_scope2type(unsigned int scope)