@@ -280,12 +280,9 @@
 mcelog
 ------
 
-In Linux 2.6.31+ the i386 kernel needs to run the mcelog utility
-as a regular cronjob similar to the x86-64 kernel to process and log
-machine check events when CONFIG_X86_NEW_MCE is enabled. Machine check
-events are errors reported by the CPU. Processing them is strongly encouraged.
-All x86-64 kernels since 2.6.4 require the mcelog utility to
-process machine checks.
+On x86 kernels the mcelog utility is needed to process and log machine check
+events when CONFIG_X86_MCE is enabled. Machine check events are errors reported
+by the CPU. Processing them is strongly encouraged.
 
 Getting updated software
 ========================
Documentation/DocBook/gadget.tmpl (+1/-1)
@@ -708,7 +708,7 @@
 
 <para>Systems need specialized hardware support to implement OTG,
 notably including a special <emphasis>Mini-AB</emphasis> jack
-and associated transciever to support <emphasis>Dual-Role</emphasis>
+and associated transceiver to support <emphasis>Dual-Role</emphasis>
 operation:
 they can act either as a host, using the standard
 Linux-USB host side driver stack,
Documentation/DocBook/genericirq.tmpl (+2/-2)
@@ -182,7 +182,7 @@
   <para>
     Each interrupt is described by an interrupt descriptor structure
     irq_desc. The interrupt is referenced by an 'unsigned int' numeric
-    value which selects the corresponding interrupt decription structure
+    value which selects the corresponding interrupt description structure
     in the descriptor structures array.
     The descriptor structure contains status information and pointers
     to the interrupt flow method and the interrupt chip structure
@@ -470,7 +470,7 @@
   <para>
     To avoid copies of identical implementations of IRQ chips the
     core provides a configurable generic interrupt chip
-    implementation. Developers should check carefuly whether the
+    implementation. Developers should check carefully whether the
     generic chip fits their needs before implementing the same
     functionality slightly differently themselves.
   </para>
Documentation/DocBook/kernel-locking.tmpl (+1/-1)
@@ -1760,7 +1760,7 @@
 </para>
 
 <para>
-There is a furthur optimization possible here: remember our original
+There is a further optimization possible here: remember our original
 cache code, where there were no reference counts and the caller simply
 held the lock whenever using the object? This is still possible: if
 you hold the lock, no one can delete the object, so you don't need to
Documentation/DocBook/libata.tmpl (+3/-3)
@@ -677,7 +677,7 @@
 
 	<listitem>
 	<para>
-	ATA_QCFLAG_ACTIVE is clared from qc->flags.
+	ATA_QCFLAG_ACTIVE is cleared from qc->flags.
 	</para>
 	</listitem>
 
@@ -708,7 +708,7 @@
 
 	<listitem>
 	<para>
-	qc->waiting is claread & completed (in that order).
+	qc->waiting is cleared & completed (in that order).
 	</para>
 	</listitem>
 
@@ -1163,7 +1163,7 @@
 
 	<para>
 	Once sense data is acquired, this type of errors can be
-	handled similary to other SCSI errors. Note that sense data
+	handled similarly to other SCSI errors. Note that sense data
 	may indicate ATA bus error (e.g. Sense Key 04h HARDWARE ERROR
 	&& ASC/ASCQ 47h/00h SCSI PARITY ERROR). In such
 	cases, the error should be considered as an ATA bus error and
Documentation/DocBook/media_api.tmpl (+1/-1)
@@ -68,7 +68,7 @@
 	several digital tv standards. While it is called as DVB API,
 	in fact it covers several different video standards including
 	DVB-T, DVB-S, DVB-C and ATSC. The API is currently being updated
-	to documment support also for DVB-S2, ISDB-T and ISDB-S.</para>
+	to document support also for DVB-S2, ISDB-T and ISDB-S.</para>
 	<para>The third part covers the Remote Controller API.</para>
 	<para>The fourth part covers the Media Controller API.</para>
 	<para>For additional information and for the latest development code,
Documentation/DocBook/mtdnand.tmpl (+15/-15)
@@ -91,7 +91,7 @@
 	<listitem><para>
 	[MTD Interface]</para><para>
 	These functions provide the interface to the MTD kernel API.
-	They are not replacable and provide functionality
+	They are not replaceable and provide functionality
 	which is complete hardware independent.
 	</para></listitem>
 	<listitem><para>
@@ -100,14 +100,14 @@
 	</para></listitem>
 	<listitem><para>
 	[GENERIC]</para><para>
-	Generic functions are not replacable and provide functionality
+	Generic functions are not replaceable and provide functionality
 	which is complete hardware independent.
 	</para></listitem>
 	<listitem><para>
 	[DEFAULT]</para><para>
 	Default functions provide hardware related functionality which is suitable
 	for most of the implementations. These functions can be replaced by the
-	board driver if neccecary. Those functions are called via pointers in the
+	board driver if necessary. Those functions are called via pointers in the
 	NAND chip description structure. The board driver can set the functions which
 	should be replaced by board dependent functions before calling nand_scan().
 	If the function pointer is NULL on entry to nand_scan() then the pointer
@@ -264,7 +264,7 @@
 	is set up nand_scan() is called. This function tries to
 	detect and identify then chip. If a chip is found all the
 	internal data fields are initialized accordingly.
-	The structure(s) have to be zeroed out first and then filled with the neccecary 
+	The structure(s) have to be zeroed out first and then filled with the necessary
 	information about the device.
 	</para>
 	<programlisting>
@@ -327,7 +327,7 @@
     <sect1 id="Exit_function">
	<title>Exit function</title>
	<para>
-	The exit function is only neccecary if the driver is
+	The exit function is only necessary if the driver is
	compiled as a module. It releases all resources which
	are held by the chip driver and unregisters the partitions
	in the MTD layer.
@@ -494,7 +494,7 @@
		in this case. See rts_from4.c and diskonchip.c for
		implementation reference. In those cases we must also
		use bad block tables on FLASH, because the ECC layout is
-		interferring with the bad block marker positions.
+		interfering with the bad block marker positions.
		See bad block table support for details.
	</para>
	</sect2>
@@ -542,7 +542,7 @@
	<para>
		nand_scan() calls the function nand_default_bbt().
		nand_default_bbt() selects appropriate default
-		bad block table desriptors depending on the chip information
+		bad block table descriptors depending on the chip information
		which was retrieved by nand_scan().
	</para>
	<para>
@@ -554,7 +554,7 @@
	<sect2 id="Flash_based_tables">
		<title>Flash based tables</title>
		<para>
-		It may be desired or neccecary to keep a bad block table in FLASH. 
+		It may be desired or necessary to keep a bad block table in FLASH.
		For AG-AND chips this is mandatory, as they have no factory marked
		bad blocks. They have factory marked good blocks. The marker pattern
		is erased when the block is erased to be reused. So in case of
@@ -565,10 +565,10 @@
		of the blocks.
	</para>
	<para>
-		The blocks in which the tables are stored are procteted against
+		The blocks in which the tables are stored are protected against
		accidental access by marking them bad in the memory bad block
		table. The bad block table management functions are allowed
-		to circumvernt this protection.
+		to circumvent this protection.
	</para>
	<para>
		The simplest way to activate the FLASH based bad block table support
@@ -592,7 +592,7 @@
		User defined tables are created by filling out a
		nand_bbt_descr structure and storing the pointer in the
		nand_chip structure member bbt_td before calling nand_scan().
-		If a mirror table is neccecary a second structure must be
+		If a mirror table is necessary a second structure must be
		created and a pointer to this structure must be stored
		in bbt_md inside the nand_chip structure. If the bbt_md
		member is set to NULL then only the main table is used
@@ -666,7 +666,7 @@
	<para>
		For automatic placement some blocks must be reserved for
		bad block table storage. The number of reserved blocks is defined
-		in the maxblocks member of the babd block table description structure.
+		in the maxblocks member of the bad block table description structure.
		Reserving 4 blocks for mirrored tables should be a reasonable number.
		This also limits the number of blocks which are scanned for the bad
		block table ident pattern.
@@ -1068,11 +1068,11 @@
   <chapter id="filesystems">
	<title>Filesystem support</title>
	<para>
-		The NAND driver provides all neccecary functions for a
+		The NAND driver provides all necessary functions for a
		filesystem via the MTD interface.
	</para>
	<para>
-		Filesystems must be aware of the NAND pecularities and
+		Filesystems must be aware of the NAND peculiarities and
		restrictions. One major restrictions of NAND Flash is, that you cannot
		write as often as you want to a page. The consecutive writes to a page,
		before erasing it again, are restricted to 1-3 writes, depending on the
@@ -1222,7 +1222,7 @@
 #define NAND_BBT_VERSION	0x00000100
 /* Create a bbt if none axists */
 #define NAND_BBT_CREATE		0x00000200
-/* Write bbt if neccecary */
+/* Write bbt if necessary */
 #define NAND_BBT_WRITE		0x00001000
 /* Read and write back block contents when writing bbt */
 #define NAND_BBT_SAVECONTENT	0x00002000
Documentation/DocBook/regulator.tmpl (+1/-1)
@@ -155,7 +155,7 @@
     release regulators.  Functions are
     provided to <link linkend='API-regulator-enable'>enable</link>
     and <link linkend='API-regulator-disable'>disable</link> the
-    reguator and to get and set the runtime parameters of the
+    regulator and to get and set the runtime parameters of the
     regulator.
   </para>
   <para>
Documentation/DocBook/uio-howto.tmpl (+2/-2)
@@ -766,10 +766,10 @@
 <para>
	The dynamic memory regions will be allocated when the UIO device file,
	<varname>/dev/uioX</varname> is opened.
-	Simiar to static memory resources, the memory region information for
+	Similar to static memory resources, the memory region information for
	dynamic regions is then visible via sysfs at
	<varname>/sys/class/uio/uioX/maps/mapY/*</varname>.
-	The dynmaic memory regions will be freed when the UIO device file is
+	The dynamic memory regions will be freed when the UIO device file is
	closed. When no processes are holding the device file open, the address
	returned to userspace is ~0.
 </para>
Documentation/DocBook/usb.tmpl (+1/-1)
@@ -153,7 +153,7 @@
 
	<listitem><para>The Linux USB API supports synchronous calls for
	control and bulk messages.
-	It also supports asynchnous calls for all kinds of data transfer,
+	It also supports asynchronous calls for all kinds of data transfer,
	using request structures called "URBs" (USB Request Blocks).
	</para></listitem>
 
Documentation/DocBook/writing-an-alsa-driver.tmpl (+1/-1)
@@ -5696,7 +5696,7 @@
       suspending the PCM operations via
       <function>snd_pcm_suspend_all()</function> or
       <function>snd_pcm_suspend()</function>.  It means that the PCM
-      streams are already stoppped when the register snapshot is
+      streams are already stopped when the register snapshot is
       taken. But, remember that you don't have to restart the PCM
       stream in the resume callback. It'll be restarted via
       trigger call with <constant>SNDRV_PCM_TRIGGER_RESUME</constant>
Documentation/acpi/enumeration.txt (-6)
@@ -60,12 +60,6 @@
 configuring GPIOs it can get its ACPI handle and extract this information
 from ACPI tables.
 
-Currently the kernel is not able to automatically determine from which ACPI
-device it should make the corresponding platform device so we need to add
-the ACPI device explicitly to acpi_platform_device_ids list defined in
-drivers/acpi/acpi_platform.c. This limitation is only for the platform
-devices, SPI and I2C devices are created automatically as described below.
-
 DMA support
 ~~~~~~~~~~~
 DMA controllers enumerated via ACPI should be registered in the system to
Documentation/cpu-freq/intel-pstate.txt (+5/-2)
@@ -15,10 +15,13 @@
 /sys/devices/system/cpu/intel_pstate/
 
       max_perf_pct: limits the maximum P state that will be requested by
-      the driver stated as a percentage of the available performance.
+      the driver stated as a percentage of the available performance. The
+      available (P states) performance may be reduced by the no_turbo
+      setting described below.
 
       min_perf_pct: limits the minimum P state that will be requested by
-      the driver stated as a percentage of the available performance.
+      the driver stated as a percentage of the max (non-turbo)
+      performance level.
 
       no_turbo: limits the driver to selecting P states below the turbo
       frequency range.
@@ -9,6 +9,18 @@
 - reg: physical base address of the controller and length of memory mapped
   region.
 
+Optional Properties:
+- clocks: List of clock handles. The parent clocks of the input clocks to the
+	devices in this power domain are set to oscclk before power gating
+	and restored back after powering on a domain. This is required for
+	all domains which are powered on and off and not required for unused
+	domains.
+- clock-names: The following clocks can be specified:
+	- oscclk: Oscillator clock.
+	- pclkN, clkN: Pairs of parent of input clock and input clock to the
+		devices in this power domain. Maximum of 4 pairs (N = 0 to 3)
+		are supported currently.
+
 Node of a device using power domains must have a samsung,power-domain property
 defined with a phandle to respective power domain.
 
@@ -17,6 +29,14 @@
	lcd0: power-domain-lcd0 {
		compatible = "samsung,exynos4210-pd";
		reg = <0x10023C00 0x10>;
+	};
+
+	mfc_pd: power-domain@10044060 {
+		compatible = "samsung,exynos4210-pd";
+		reg = <0x10044060 0x20>;
+		clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>,
+			<&clock CLK_MOUT_USER_ACLK333>;
+		clock-names = "oscclk", "pclk0", "clk0";
	};
 
 Example of the node using power domain:
@@ -8,10 +8,12 @@
 under node /cpus/cpu@0.
 
 Required properties:
-- operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt
-  for details
+- None
 
 Optional properties:
+- operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt for
+  details. OPPs *must* be supplied either via DT, i.e. this property, or
+  populated at runtime.
 - clock-latency: Specify the possible maximum transition latency for clock,
   in unit of nanoseconds.
 - voltage-tolerance: Specify the CPU voltage tolerance in percentage.
@@ -4,6 +4,13 @@
 
 - compatible: Must contain one of the following:
 
+  - "renesas,scifa-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFA compatible UART.
+  - "renesas,scifb-sh73a0" for SH73A0 (SH-Mobile AG5) SCIFB compatible UART.
+  - "renesas,scifa-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFA compatible UART.
+  - "renesas,scifb-r8a73a4" for R8A73A4 (R-Mobile APE6) SCIFB compatible UART.
+  - "renesas,scifa-r8a7740" for R8A7740 (R-Mobile A1) SCIFA compatible UART.
+  - "renesas,scifb-r8a7740" for R8A7740 (R-Mobile A1) SCIFB compatible UART.
+  - "renesas,scif-r8a7778" for R8A7778 (R-Car M1) SCIF compatible UART.
   - "renesas,scif-r8a7779" for R8A7779 (R-Car H1) SCIF compatible UART.
   - "renesas,scif-r8a7790" for R8A7790 (R-Car H2) SCIF compatible UART.
   - "renesas,scifa-r8a7790" for R8A7790 (R-Car H2) SCIFA compatible UART.
Documentation/kernel-parameters.txt (+7/-1)
@@ -2790,6 +2790,12 @@
			leaf rcu_node structure.  Useful for very large
			systems.
 
+	rcutree.jiffies_till_sched_qs= [KNL]
+			Set required age in jiffies for a
+			given grace period before RCU starts
+			soliciting quiescent-state help from
+			rcu_note_context_switch().
+
	rcutree.jiffies_till_first_fqs= [KNL]
			Set delay from grace-period initialization to
			first attempt to force quiescent states.
@@ -3532,7 +3526,7 @@
			the allocated input device; If set to 0, video driver
			will only send out the event without touching backlight
			brightness level.
-			default: 0
+			default: 1
 
	virtio_mmio.device=
			[VMMIO] Memory mapped virtio (platform) device.
Documentation/laptops/00-INDEX (+2/-2)
@@ -8,8 +8,8 @@
	- information on hard disk shock protection.
 dslm.c
	- Simple Disk Sleep Monitor program
-hpfall.c
-	- (HP) laptop accelerometer program for disk protection.
+freefall.c
+	- (HP/DELL) laptop accelerometer program for disk protection.
 laptop-mode.txt
	- how to conserve battery power using laptop-mode.
 sony-laptop.txt
@@ -1,7 +1,7 @@
 VERSION = 3
 PATCHLEVEL = 16
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc6
 NAME = Shuffling Zombie Juror
 
 # *DOCUMENTATION*
@@ -41,6 +41,29 @@
 # descending is started. They are now explicitly listed as the
 # prepare rule.
 
+# Beautify output
+# ---------------------------------------------------------------------------
+#
+# Normally, we echo the whole command before executing it. By making
+# that echo $($(quiet)$(cmd)), we now have the possibility to set
+# $(quiet) to choose other forms of output instead, e.g.
+#
+#         quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@
+#         cmd_cc_o_c = $(CC) $(c_flags) -c -o $@ $<
+#
+# If $(quiet) is empty, the whole command will be printed.
+# If it is set to "quiet_", only the short version will be printed.
+# If it is set to "silent_", nothing will be printed at all, since
+# the variable $(silent_cmd_cc_o_c) doesn't exist.
+#
+# A simple variant is to prefix commands with $(Q) - that's useful
+# for commands that shall be hidden in non-verbose mode.
+#
+#	$(Q)ln $@ :<
+#
+# If KBUILD_VERBOSE equals 0 then the above command will be hidden.
+# If KBUILD_VERBOSE equals 1 then the above command is displayed.
+#
 # To put more focus on warnings, be less verbose as default
 # Use 'make V=1' to see the full commands
 
@@ -50,6 +73,29 @@
 ifndef KBUILD_VERBOSE
   KBUILD_VERBOSE = 0
 endif
+
+ifeq ($(KBUILD_VERBOSE),1)
+  quiet =
+  Q =
+else
+  quiet=quiet_
+  Q = @
+endif
+
+# If the user is running make -s (silent mode), suppress echoing of
+# commands
+
+ifneq ($(filter 4.%,$(MAKE_VERSION)),)	# make-4
+ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),)
+  quiet=silent_
+endif
+else					# make-3.8x
+ifneq ($(filter s% -s%,$(MAKEFLAGS)),)
+  quiet=silent_
+endif
+endif
+
+export quiet Q KBUILD_VERBOSE
 
 # Call a source code checker (by default, "sparse") as part of the
 # C compilation.
@@ -128,8 +174,11 @@
 
 # Fake the "Entering directory" message once, so that IDEs/editors are
 # able to understand relative filenames.
+       echodir := @echo
+ quiet_echodir := @echo
+silent_echodir := @:
 sub-make: FORCE
-	@echo "make[1]: Entering directory \`$(KBUILD_OUTPUT)'"
+	$($(quiet)echodir) "make[1]: Entering directory \`$(KBUILD_OUTPUT)'"
	$(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \
	KBUILD_SRC=$(CURDIR) \
	KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile \
@@ -291,52 +340,6 @@
 
 export KBUILD_MODULES KBUILD_BUILTIN
 export KBUILD_CHECKSRC KBUILD_SRC KBUILD_EXTMOD
-
-# Beautify output
-# ---------------------------------------------------------------------------
-#
-# Normally, we echo the whole command before executing it. By making
-# that echo $($(quiet)$(cmd)), we now have the possibility to set
-# $(quiet) to choose other forms of output instead, e.g.
-#
-#         quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@
-#         cmd_cc_o_c = $(CC) $(c_flags) -c -o $@ $<
-#
-# If $(quiet) is empty, the whole command will be printed.
-# If it is set to "quiet_", only the short version will be printed.
-# If it is set to "silent_", nothing will be printed at all, since
-# the variable $(silent_cmd_cc_o_c) doesn't exist.
-#
-# A simple variant is to prefix commands with $(Q) - that's useful
-# for commands that shall be hidden in non-verbose mode.
-#
-#	$(Q)ln $@ :<
-#
-# If KBUILD_VERBOSE equals 0 then the above command will be hidden.
-# If KBUILD_VERBOSE equals 1 then the above command is displayed.
-
-ifeq ($(KBUILD_VERBOSE),1)
-  quiet =
-  Q =
-else
-  quiet=quiet_
-  Q = @
-endif
-
-# If the user is running make -s (silent mode), suppress echoing of
-# commands
-
-ifneq ($(filter 4.%,$(MAKE_VERSION)),)	# make-4
-ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),)
-  quiet=silent_
-endif
-else					# make-3.8x
-ifneq ($(filter s% -s%,$(MAKEFLAGS)),)
-  quiet=silent_
-endif
-endif
-
-export quiet Q KBUILD_VERBOSE
 
 ifneq ($(CC),)
 ifeq ($(shell $(CC) -v 2>&1 | grep -c "clang version"), 1)
@@ -1173,7 +1176,7 @@
 # Packaging of the kernel to various formats
 # ---------------------------------------------------------------------------
 # rpm target kept for backward compatibility
-package-dir	:= $(srctree)/scripts/package
+package-dir	:= scripts/package
 
 %src-pkg: FORCE
	$(Q)$(MAKE) $(build)=$(package-dir) $@
@@ -74,8 +74,6 @@
	TEST_RRR( op "lt" s " r11, r",11,VAL1,", r",14,N(val),", asr r",7, 6,"")\
	TEST_RR( op "gt" s " r12, r13" ", r",14,val, ", ror r",14,7,"")\
	TEST_RR( op "le" s " r14, r",0, val, ", r13" ", lsl r",14,8,"")\
-	TEST_RR( op s " r12, pc" ", r",14,val, ", ror r",14,7,"")\
-	TEST_RR( op s " r14, r",0, val, ", pc" ", lsl r",14,8,"")\
	TEST_R( op "eq" s " r0, r",11,VAL1,", #0xf5") \
	TEST_R( op "ne" s " r11, r",0, VAL1,", #0xf5000000") \
	TEST_R( op s " r7, r",8, VAL2,", #0x000af000") \
@@ -103,8 +101,6 @@
	TEST_RRR( op "ge r",11,VAL1,", r",14,N(val),", asr r",7, 6,"") \
	TEST_RR( op "le r13" ", r",14,val, ", ror r",14,7,"") \
	TEST_RR( op "gt r",0, val, ", r13" ", lsl r",14,8,"") \
-	TEST_RR( op " pc" ", r",14,val, ", ror r",14,7,"") \
-	TEST_RR( op " r",0, val, ", pc" ", lsl r",14,8,"") \
	TEST_R( op "eq r",11,VAL1,", #0xf5") \
	TEST_R( op "ne r",0, VAL1,", #0xf5000000") \
	TEST_R( op " r",8, VAL2,", #0x000af000")
@@ -125,7 +121,6 @@
	TEST_RR( op "ge" s " r11, r",11,N(val),", asr r",7, 6,"") \
	TEST_RR( op "lt" s " r12, r",11,val, ", ror r",14,7,"") \
	TEST_R( op "gt" s " r14, r13" ", lsl r",14,8,"") \
-	TEST_R( op "le" s " r14, pc" ", lsl r",14,8,"") \
	TEST( op "eq" s " r0, #0xf5") \
	TEST( op "ne" s " r11, #0xf5000000") \
	TEST( op s " r7, #0x000af000") \
@@ -159,12 +154,19 @@
	TEST_SUPPORTED("cmp pc, #0x1000");
	TEST_SUPPORTED("cmp sp, #0x1000");
 
-	/* Data-processing with PC as shift*/
+	/* Data-processing with PC and a shift count in a register */
	TEST_UNSUPPORTED(__inst_arm(0xe15c0f1e) " @ cmp r12, r14, asl pc")
	TEST_UNSUPPORTED(__inst_arm(0xe1a0cf1e) " @ mov r12, r14, asl pc")
	TEST_UNSUPPORTED(__inst_arm(0xe08caf1e) " @ add r10, r12, r14, asl pc")
+	TEST_UNSUPPORTED(__inst_arm(0xe151021f) " @ cmp r1, pc, lsl r2")
+	TEST_UNSUPPORTED(__inst_arm(0xe17f0211) " @ cmn pc, r1, lsl r2")
+	TEST_UNSUPPORTED(__inst_arm(0xe1a0121f) " @ mov r1, pc, lsl r2")
+	TEST_UNSUPPORTED(__inst_arm(0xe1a0f211) " @ mov pc, r1, lsl r2")
+	TEST_UNSUPPORTED(__inst_arm(0xe042131f) " @ sub r1, r2, pc, lsl r3")
+	TEST_UNSUPPORTED(__inst_arm(0xe1cf1312) " @ bic r1, pc, r2, lsl r3")
+	TEST_UNSUPPORTED(__inst_arm(0xe081f312) " @ add pc, r1, r2, lsl r3")
 
-	/* Data-processing with PC as shift*/
+	/* Data-processing with PC as a target and status registers updated */
	TEST_UNSUPPORTED("movs pc, r1")
	TEST_UNSUPPORTED("movs pc, r1, lsl r2")
	TEST_UNSUPPORTED("movs pc, #0x10000")
@@ -187,14 +189,14 @@
	TEST_BF_R ("add pc, pc, r",14,2f-1f-8,"")
	TEST_BF_R ("add pc, r",14,2f-1f-8,", pc")
	TEST_BF_R ("mov pc, r",0,2f,"")
-	TEST_BF_RR("mov pc, r",0,2f,", asl r",1,0,"")
+	TEST_BF_R ("add pc, pc, r",14,(2f-1f-8)*2,", asr #1")
	TEST_BB( "sub pc, pc, #1b-2b+8")
 #if __LINUX_ARM_ARCH__ == 6 && !defined(CONFIG_CPU_V7)
	TEST_BB( "sub pc, pc, #1b-2b+8-2") /* UNPREDICTABLE before and after ARMv6 */
 #endif
	TEST_BB_R( "sub pc, pc, r",14, 1f-2f+8,"")
	TEST_BB_R( "rsb pc, r",14,1f-2f+8,", pc")
-	TEST_RR( "add pc, pc, r",10,-2,", asl r",11,1,"")
+	TEST_R( "add pc, pc, r",10,-2,", asl #1")
 #ifdef CONFIG_THUMB2_KERNEL
	TEST_ARM_TO_THUMB_INTERWORK_R("add pc, pc, r",0,3f-1f-8+1,"")
	TEST_ARM_TO_THUMB_INTERWORK_R("sub pc, r",0,3f+8+1,", #8")
@@ -216,6 +218,7 @@
	TEST_BB_R("bx r",7,2f,"")
	TEST_BF_R("bxeq r",14,2f,"")
 
+#if __LINUX_ARM_ARCH__ >= 5
	TEST_R("clz r0, r",0, 0x0,"")
	TEST_R("clzeq r7, r",14,0x1,"")
	TEST_R("clz lr, r",7, 0xffffffff,"")
@@ -337,6 +340,7 @@
	TEST_UNSUPPORTED(__inst_arm(0xe16f02e1) " @ smultt pc, r1, r2")
	TEST_UNSUPPORTED(__inst_arm(0xe16002ef) " @ smultt r0, pc, r2")
	TEST_UNSUPPORTED(__inst_arm(0xe1600fe1) " @ smultt r0, r1, pc")
+#endif
 
	TEST_GROUP("Multiply and multiply-accumulate")
 
@@ -559,6 +563,7 @@
	TEST_UNSUPPORTED("ldrsht r1, [r2], #48")
 #endif
 
+#if __LINUX_ARM_ARCH__ >= 5
	TEST_RPR( "strd r",0, VAL1,", [r",1, 48,", -r",2,24,"]")
	TEST_RPR( "strccd r",8, VAL2,", [r",13,0, ", r",12,48,"]")
	TEST_RPR( "strd r",4, VAL1,", [r",2, 24,", r",3, 48,"]!")
@@ -595,6 +600,7 @@
	TEST_UNSUPPORTED(__inst_arm(0xe1efc3d0) " @ ldrd r12, [pc, #48]!")
	TEST_UNSUPPORTED(__inst_arm(0xe0c9f3d0) " @ ldrd pc, [r9], #48")
	TEST_UNSUPPORTED(__inst_arm(0xe0c9e3d0) " @ ldrd lr, [r9], #48")
+#endif
 
	TEST_GROUP("Miscellaneous")
 
@@ -1227,7 +1233,9 @@
	TEST_COPROCESSOR( "mrc"two" 0, 0, r0, cr0, cr0, 0")
 
	COPROCESSOR_INSTRUCTIONS_ST_LD("",e)
+#if __LINUX_ARM_ARCH__ >= 5
	COPROCESSOR_INSTRUCTIONS_MC_MR("",e)
+#endif
	TEST_UNSUPPORTED("svc 0")
	TEST_UNSUPPORTED("svc 0xffffff")
 
@@ -1287,7 +1295,9 @@
	TEST( "blx __dummy_thumb_subroutine_odd")
 #endif /* __LINUX_ARM_ARCH__ >= 6 */
 
+#if __LINUX_ARM_ARCH__ >= 5
	COPROCESSOR_INSTRUCTIONS_ST_LD("2",f)
+#endif
 #if __LINUX_ARM_ARCH__ >= 6
	COPROCESSOR_INSTRUCTIONS_MC_MR("2",f)
 #endif
arch/arm/kernel/kprobes-test.c (+10)
@@ -225,6 +225,7 @@
 static int post_handler_called;
 static int jprobe_func_called;
 static int kretprobe_handler_called;
+static int tests_failed;
 
 #define FUNC_ARG1 0x12345678
 #define FUNC_ARG2 0xabcdef
@@ -461,6 +462,13 @@
 
	pr_info("    jprobe\n");
	ret = test_jprobe(func);
+#if defined(CONFIG_THUMB2_KERNEL) && !defined(MODULE)
+	if (ret == -EINVAL) {
+		pr_err("FAIL: Known longtime bug with jprobe on Thumb kernels\n");
+		tests_failed = ret;
+		ret = 0;
+	}
+#endif
	if (ret < 0)
		return ret;
 
@@ -1671,6 +1679,8 @@
 #endif
 
 out:
+	if (ret == 0)
+		ret = tests_failed;
	if (ret == 0)
		pr_info("Finished kprobe tests OK\n");
	else
@@ -275,7 +275,7 @@
		cpu_topology[cpuid].socket_id, mpidr);
 }
 
-static inline const int cpu_corepower_flags(void)
+static inline int cpu_corepower_flags(void)
 {
	return SD_SHARE_PKG_RESOURCES | SD_SHARE_POWERDOMAIN;
 }
arch/arm/mach-exynos/exynos.c (+3/-5)
@@ -173,10 +173,8 @@
 
 void __init exynos_cpuidle_init(void)
 {
-	if (soc_is_exynos5440())
-		return;
-
-	platform_device_register(&exynos_cpuidle);
+	if (soc_is_exynos4210() || soc_is_exynos5250())
+		platform_device_register(&exynos_cpuidle);
 }
 
 void __init exynos_cpufreq_init(void)
@@ -297,7 +295,7 @@
	 * This is called from smp_prepare_cpus if we've built for SMP, but
	 * we still need to set it up for PM and firmware ops if not.
	 */
-	if (!IS_ENABLED(SMP))
+	if (!IS_ENABLED(CONFIG_SMP))
		exynos_sysram_init();
 
	exynos_cpuidle_init();
arch/arm/mach-exynos/firmware.c (+7/-2)
@@ -57,8 +57,13 @@
 
	boot_reg = sysram_ns_base_addr + 0x1c;
 
-	if (!soc_is_exynos4212() && !soc_is_exynos3250())
-		boot_reg += 4*cpu;
+	/*
+	 * Almost all Exynos-series of SoCs that run in secure mode don't need
+	 * additional offset for every CPU, with Exynos4412 being the only
+	 * exception.
+	 */
+	if (soc_is_exynos4412())
+		boot_reg += 4 * cpu;
 
	__raw_writel(boot_addr, boot_reg);
	return 0;
arch/arm/mach-exynos/hotplug.c (+6/-4)
@@ -40,15 +40,17 @@
 
 static inline void platform_do_lowpower(unsigned int cpu, int *spurious)
 {
+	u32 mpidr = cpu_logical_map(cpu);
+	u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+
	for (;;) {
 
-		/* make cpu1 to be turned off at next WFI command */
-		if (cpu == 1)
-			exynos_cpu_power_down(cpu);
+		/* Turn the CPU off on next WFI instruction. */
+		exynos_cpu_power_down(core_id);
 
		wfi();
 
-		if (pen_release == cpu_logical_map(cpu)) {
+		if (pen_release == core_id) {
			/*
			 * OK, proper wakeup, we're done
			 */
arch/arm/mach-exynos/platsmp.c (+19/-15)
@@ -90,7 +90,8 @@
 static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
 {
	unsigned long timeout;
-	unsigned long phys_cpu = cpu_logical_map(cpu);
+	u32 mpidr = cpu_logical_map(cpu);
+	u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
	int ret = -ENOSYS;
 
	/*
@@ -104,17 +105,18 @@
	 * the holding pen - release it, then wait for it to flag
	 * that it has been released by resetting pen_release.
	 *
-	 * Note that "pen_release" is the hardware CPU ID, whereas
+	 * Note that "pen_release" is the hardware CPU core ID, whereas
	 * "cpu" is Linux's internal ID.
	 */
-	write_pen_release(phys_cpu);
+	write_pen_release(core_id);
 
-	if (!exynos_cpu_power_state(cpu)) {
-		exynos_cpu_power_up(cpu);
+	if (!exynos_cpu_power_state(core_id)) {
+		exynos_cpu_power_up(core_id);
		timeout = 10;
 
		/* wait max 10 ms until cpu1 is on */
-		while (exynos_cpu_power_state(cpu) != S5P_CORE_LOCAL_PWR_EN) {
+		while (exynos_cpu_power_state(core_id)
+				!= S5P_CORE_LOCAL_PWR_EN) {
			if (timeout-- == 0)
				break;
 
@@ -145,20 +147,20 @@
	 * Try to set boot address using firmware first
	 * and fall back to boot register if it fails.
	 */
-	ret = call_firmware_op(set_cpu_boot_addr, phys_cpu, boot_addr);
+	ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr);
	if (ret && ret != -ENOSYS)
		goto fail;
	if (ret == -ENOSYS) {
-		void __iomem *boot_reg = cpu_boot_reg(phys_cpu);
+		void __iomem *boot_reg = cpu_boot_reg(core_id);
 
		if (IS_ERR(boot_reg)) {
			ret = PTR_ERR(boot_reg);
			goto fail;
		}
-		__raw_writel(boot_addr, cpu_boot_reg(phys_cpu));
+		__raw_writel(boot_addr, cpu_boot_reg(core_id));
	}
 
-	call_firmware_op(cpu_boot, phys_cpu);
+	call_firmware_op(cpu_boot, core_id);
 
	arch_send_wakeup_ipi_mask(cpumask_of(cpu));
 
@@ -227,22 +229,24 @@
	 * boot register if it fails.
	 */
	for (i = 1; i < max_cpus; ++i) {
-		unsigned long phys_cpu;
		unsigned long boot_addr;
+		u32 mpidr;
+		u32 core_id;
		int ret;
 
-		phys_cpu = cpu_logical_map(i);
+		mpidr = cpu_logical_map(i);
+		core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
		boot_addr = virt_to_phys(exynos4_secondary_startup);
 
-		ret = call_firmware_op(set_cpu_boot_addr, phys_cpu, boot_addr);
+		ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr);
		if (ret && ret != -ENOSYS)
			break;
		if (ret == -ENOSYS) {
-			void __iomem *boot_reg = cpu_boot_reg(phys_cpu);
+			void __iomem *boot_reg = cpu_boot_reg(core_id);
 
			if (IS_ERR(boot_reg))
				break;
-			__raw_writel(boot_addr, cpu_boot_reg(phys_cpu));
+			__raw_writel(boot_addr, cpu_boot_reg(core_id));
		}
	}
 }
+60-1
arch/arm/mach-exynos/pm_domains.c
···1717#include <linux/err.h>1818#include <linux/slab.h>1919#include <linux/pm_domain.h>2020+#include <linux/clk.h>2021#include <linux/delay.h>2122#include <linux/of_address.h>2223#include <linux/of_platform.h>2324#include <linux/sched.h>24252526#include "regs-pmu.h"2727+2828+#define MAX_CLK_PER_DOMAIN 426292730/*2831 * Exynos specific wrapper around the generic power domain···3532 char const *name;3633 bool is_off;3734 struct generic_pm_domain pd;3535+ struct clk *oscclk;3636+ struct clk *clk[MAX_CLK_PER_DOMAIN];3737+ struct clk *pclk[MAX_CLK_PER_DOMAIN];3838};39394040static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)···49435044 pd = container_of(domain, struct exynos_pm_domain, pd);5145 base = pd->base;4646+4747+ /* Set oscclk before powering off a domain*/4848+ if (!power_on) {4949+ int i;5050+5151+ for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {5252+ if (IS_ERR(pd->clk[i]))5353+ break;5454+ if (clk_set_parent(pd->clk[i], pd->oscclk))5555+ pr_err("%s: error setting oscclk as parent to clock %d\n",5656+ pd->name, i);5757+ }5858+ }52595360 pwr = power_on ? 
S5P_INT_LOCAL_PWR_EN : 0;5461 __raw_writel(pwr, base);···7960 cpu_relax();8061 usleep_range(80, 100);8162 }6363+6464+ /* Restore clocks after powering on a domain*/6565+ if (power_on) {6666+ int i;6767+6868+ for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {6969+ if (IS_ERR(pd->clk[i]))7070+ break;7171+ if (clk_set_parent(pd->clk[i], pd->pclk[i]))7272+ pr_err("%s: error setting parent to clock%d\n",7373+ pd->name, i);7474+ }7575+ }7676+8277 return 0;8378}8479···185152186153 for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") {187154 struct exynos_pm_domain *pd;188188- int on;155155+ int on, i;156156+ struct device *dev;189157190158 pdev = of_find_device_by_node(np);159159+ dev = &pdev->dev;191160192161 pd = kzalloc(sizeof(*pd), GFP_KERNEL);193162 if (!pd) {···205170 pd->pd.power_on = exynos_pd_power_on;206171 pd->pd.of_node = np;207172173173+ pd->oscclk = clk_get(dev, "oscclk");174174+ if (IS_ERR(pd->oscclk))175175+ goto no_clk;176176+177177+ for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {178178+ char clk_name[8];179179+180180+ snprintf(clk_name, sizeof(clk_name), "clk%d", i);181181+ pd->clk[i] = clk_get(dev, clk_name);182182+ if (IS_ERR(pd->clk[i]))183183+ break;184184+ snprintf(clk_name, sizeof(clk_name), "pclk%d", i);185185+ pd->pclk[i] = clk_get(dev, clk_name);186186+ if (IS_ERR(pd->pclk[i])) {187187+ clk_put(pd->clk[i]);188188+ pd->clk[i] = ERR_PTR(-EINVAL);189189+ break;190190+ }191191+ }192192+193193+ if (IS_ERR(pd->clk[0]))194194+ clk_put(pd->oscclk);195195+196196+no_clk:208197 platform_set_drvdata(pdev, pd);209198210199 on = __raw_readl(pd->base + 0x4) & S5P_INT_LOCAL_PWR_EN;
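The reparenting dance added above can be modelled in a few lines: before a domain powers off, every managed clock is parked on the always-on oscillator; after power-on, each clock gets its saved parent back. This toy model (the `toy_*` names are illustrative, not the kernel clk API) captures that flow:

```c
#include <assert.h>
#include <stddef.h>

#define TOY_MAX_CLK 4

struct toy_clk { struct toy_clk *parent; };

struct toy_domain {
	struct toy_clk *oscclk;                /* always-on parent */
	struct toy_clk *clk[TOY_MAX_CLK];      /* managed mux clocks */
	struct toy_clk *pclk[TOY_MAX_CLK];     /* their real parents */
};

static void toy_domain_power(struct toy_domain *pd, int power_on)
{
	for (int i = 0; i < TOY_MAX_CLK; i++) {
		if (!pd->clk[i])
			break;
		/* off: park on oscclk; on: restore the saved parent */
		pd->clk[i]->parent = power_on ? pd->pclk[i] : pd->oscclk;
	}
}

static int toy_domain_demo(void)
{
	struct toy_clk osc = { NULL }, mux = { NULL }, src = { NULL };
	struct toy_domain pd = {
		.oscclk = &osc, .clk = { &mux }, .pclk = { &src },
	};

	toy_domain_power(&pd, 0);       /* power off: park on oscclk */
	if (mux.parent != &osc)
		return -1;
	toy_domain_power(&pd, 1);       /* power on: restore parent */
	if (mux.parent != &src)
		return -2;
	return 0;
}
```

The point of the kernel version is the same: clocks sourced from a PLL inside the domain must not be left selected while the domain is off, or they come back in an undefined state.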
+23-8
arch/arm/mach-imx/clk-gate2.c
···67676868 spin_lock_irqsave(gate->lock, flags);69697070- if (gate->share_count && --(*gate->share_count) > 0)7171- goto out;7070+ if (gate->share_count) {7171+ if (WARN_ON(*gate->share_count == 0))7272+ goto out;7373+ else if (--(*gate->share_count) > 0)7474+ goto out;7575+ }72767377 reg = readl(gate->reg);7478 reg &= ~(3 << gate->bit_idx);···8278 spin_unlock_irqrestore(gate->lock, flags);8379}84808585-static int clk_gate2_is_enabled(struct clk_hw *hw)8181+static int clk_gate2_reg_is_enabled(void __iomem *reg, u8 bit_idx)8682{8787- u32 reg;8888- struct clk_gate2 *gate = to_clk_gate2(hw);8383+ u32 val = readl(reg);89849090- reg = readl(gate->reg);9191-9292- if (((reg >> gate->bit_idx) & 1) == 1)8585+ if (((val >> bit_idx) & 1) == 1)9386 return 1;94879588 return 0;8989+}9090+9191+static int clk_gate2_is_enabled(struct clk_hw *hw)9292+{9393+ struct clk_gate2 *gate = to_clk_gate2(hw);9494+9595+ if (gate->share_count)9696+ return !!(*gate->share_count);9797+ else9898+ return clk_gate2_reg_is_enabled(gate->reg, gate->bit_idx);9699}9710098101static struct clk_ops clk_gate2_ops = {···127116 gate->bit_idx = bit_idx;128117 gate->flags = clk_gate2_flags;129118 gate->lock = lock;119119+120120+ /* Initialize share_count per hardware state */121121+ if (share_count)122122+ *share_count = clk_gate2_reg_is_enabled(reg, bit_idx) ? 1 : 0;130123 gate->share_count = share_count;131124132125 init.name = name;
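The `share_count` changes above implement a refcounted gate: several logical clocks share one hardware enable field, only the first enable and the last disable touch the register, and a disable at count zero is rejected rather than underflowing. A self-contained sketch of that policy (register I/O replaced by a plain flag; not the kernel's locking or register layout):

```c
#include <assert.h>

struct toy_gate2 {
	int hw_enabled;    /* models the 2-bit enable field in the reg */
	int share_count;   /* logical users of the shared gate */
};

static void toy_gate2_enable(struct toy_gate2 *g)
{
	if (g->share_count++ == 0)
		g->hw_enabled = 1;     /* first user ungates the hw */
}

static int toy_gate2_disable(struct toy_gate2 *g)
{
	if (g->share_count == 0)
		return -1;             /* would underflow: WARN and bail */
	if (--g->share_count == 0)
		g->hw_enabled = 0;     /* last user gates the hw */
	return 0;
}

static int toy_gate2_demo(void)
{
	struct toy_gate2 g = { 0, 0 };

	if (toy_gate2_disable(&g) != -1)  /* underflow rejected */
		return -1;
	toy_gate2_enable(&g);
	toy_gate2_enable(&g);             /* second user */
	toy_gate2_disable(&g);
	if (!g.hw_enabled)                /* one user still holds it */
		return -2;
	toy_gate2_disable(&g);
	if (g.hw_enabled)                 /* now really gated */
		return -3;
	return 0;
}
```

The patch's other refinement, seeding `*share_count` from the hardware state at registration, keeps the software count from disagreeing with a gate the bootloader left enabled.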
+2-2
arch/arm/mach-imx/clk-imx6q.c
···7070static const char *lvds_sels[] = {7171 "dummy", "dummy", "dummy", "dummy", "dummy", "dummy",7272 "pll4_audio", "pll5_video", "pll8_mlb", "enet_ref",7373- "pcie_ref", "sata_ref",7373+ "pcie_ref_125m", "sata_ref_100m",7474};75757676enum mx6q_clks {···491491492492 /* All existing boards with PCIe use LVDS1 */493493 if (IS_ENABLED(CONFIG_PCI_IMX6))494494- clk_set_parent(clk[lvds1_sel], clk[sata_ref]);494494+ clk_set_parent(clk[lvds1_sel], clk[sata_ref_100m]);495495496496 /* Set initial power mode */497497 imx6q_set_lpm(WAIT_CLOCKED);
···201201202202 /* Test the CR_C bit and set it if it was cleared */203203 asm volatile(204204- "mrc p15, 0, %0, c1, c0, 0 \n\t"205205- "tst %0, #(1 << 2) \n\t"206206- "orreq %0, %0, #(1 << 2) \n\t"207207- "mcreq p15, 0, %0, c1, c0, 0 \n\t"204204+ "mrc p15, 0, r0, c1, c0, 0 \n\t"205205+ "tst r0, #(1 << 2) \n\t"206206+ "orreq r0, r0, #(1 << 2) \n\t"207207+ "mcreq p15, 0, r0, c1, c0, 0 \n\t"208208 "isb "209209- : : "r" (0));209209+ : : : "r0");210210211211 pr_warn("Failed to suspend the system\n");212212
+1-1
arch/arm/mach-omap2/clkt_dpll.c
···7676 * (assuming that it is counting N upwards), or -2 if the enclosing loop7777 * should skip to the next iteration (again assuming N is increasing).7878 */7979-static int _dpll_test_fint(struct clk_hw_omap *clk, u8 n)7979+static int _dpll_test_fint(struct clk_hw_omap *clk, unsigned int n)8080{8181 struct dpll_data *dd;8282 long fint, fint_min, fint_max;
···1212#include <linux/efi.h>1313#include <linux/libfdt.h>1414#include <asm/sections.h>1515-#include <generated/compile.h>1616-#include <generated/utsrelease.h>17151816/*1917 * AArch64 requires the DTB to be 8-byte aligned in the first 512MiB from
···145145 select HAVE_IRQ_EXIT_ON_IRQ_STACK146146 select ARCH_USE_CMPXCHG_LOCKREF if PPC64147147 select HAVE_ARCH_AUDITSYSCALL148148+ select ARCH_SUPPORTS_ATOMIC_RMW148149149150config GENERIC_CSUM150151 def_bool CPU_LITTLE_ENDIAN···415414config CRASH_DUMP416415 bool "Build a kdump crash kernel"417416 depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP)418418- select RELOCATABLE if PPC64 || 44x || FSL_BOOKE417417+ select RELOCATABLE if (PPC64 && !COMPILE_TEST) || 44x || FSL_BOOKE419418 help420419 Build a kernel suitable for use as a kdump capture kernel.421420 The same kernel binary can be used as production kernel and dump···10181017if PPC6410191018config RELOCATABLE10201019 bool "Build a relocatable kernel"10201020+ depends on !COMPILE_TEST10211021 select NONSTATIC_KERNEL10221022 help10231023 This builds a kernel image that is capable of running anywhere
···747747748748#ifdef CONFIG_SCHED_SMT749749/* cpumask of CPUs with asymetric SMT dependancy */750750-static const int powerpc_smt_flags(void)750750+static int powerpc_smt_flags(void)751751{752752 int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;753753
···485485 * check that the PMU supports EBB, meaning those that don't can still486486 * use bit 63 of the event code for something else if they wish.487487 */488488- return (ppmu->flags & PPMU_EBB) &&488488+ return (ppmu->flags & PPMU_ARCH_207S) &&489489 ((event->attr.config >> PERF_EVENT_CONFIG_EBB_SHIFT) & 1);490490}491491···777777 if (ppmu->flags & PPMU_HAS_SIER)778778 sier = mfspr(SPRN_SIER);779779780780- if (ppmu->flags & PPMU_EBB) {780780+ if (ppmu->flags & PPMU_ARCH_207S) {781781 pr_info("MMCR2: %016lx EBBHR: %016lx\n",782782 mfspr(SPRN_MMCR2), mfspr(SPRN_EBBHR));783783 pr_info("EBBRR: %016lx BESCR: %016lx\n",···996996 } while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev);997997998998 local64_add(delta, &event->count);999999- local64_sub(delta, &event->hw.period_left);999999+10001000+ /*10011001+ * A number of places program the PMC with (0x80000000 - period_left).10021002+ * We never want period_left to be less than 1 because we will program10031003+ * the PMC with a value >= 0x800000000 and an edge detected PMC will10041004+ * roll around to 0 before taking an exception. We have seen this10051005+ * on POWER8.10061006+ *10071007+ * To fix this, clamp the minimum value of period_left to 1.10081008+ */10091009+ do {10101010+ prev = local64_read(&event->hw.period_left);10111011+ val = prev - delta;10121012+ if (val < 1)10131013+ val = 1;10141014+ } while (local64_cmpxchg(&event->hw.period_left, prev, val) != prev);10001015}1001101610021017/*···13141299 ppmu->config_bhrb(cpuhw->bhrb_filter);1315130013161301 write_mmcr0(cpuhw, mmcr0);13021302+13031303+ if (ppmu->flags & PPMU_ARCH_207S)13041304+ mtspr(SPRN_MMCR2, 0);1317130513181306 /*13191307 * Enable instruction sampling if necessary···1714169617151697 if (has_branch_stack(event)) {17161698 /* PMU has BHRB enabled */17171717- if (!(ppmu->flags & PPMU_BHRB))16991699+ if (!(ppmu->flags & PPMU_ARCH_207S))17181700 return -EOPNOTSUPP;17191701 }17201702
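The `period_left` fix above replaces a plain subtraction with a compare-and-swap loop that clamps the result to a minimum of 1, so the PMC is never programmed with a value that wraps straight past the overflow point (the POWER8 edge-detect problem described in the comment). A userspace sketch of the same loop using C11 atomics (the kernel uses `local64_cmpxchg`; this is an analogue, not the kernel API):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Decrement *period_left by delta, but never let it drop below 1. */
static void clamp_period_left(_Atomic int64_t *period_left, int64_t delta)
{
	int64_t prev, val;

	do {
		prev = atomic_load(period_left);
		val = prev - delta;
		if (val < 1)
			val = 1;        /* clamp: avoid programming a wrapping PMC */
	} while (!atomic_compare_exchange_weak(period_left, &prev, val));
}

/* Convenience wrapper for exercising the loop. */
static int64_t clamp_demo(int64_t start, int64_t delta)
{
	_Atomic int64_t p = start;

	clamp_period_left(&p, delta);
	return atomic_load(&p);
}
```

The cmpxchg loop matters because the counter can be updated concurrently from interrupt context; a non-atomic read-modify-write could lose an update.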
···1212#include <mem_user.h>1313#include <os.h>1414#include <skas.h>1515+#include <kern_util.h>15161617struct host_vm_change {1718 struct host_vm_op {···124123{125124 struct host_vm_op *last;126125 int ret = 0;126126+127127+ if ((addr >= STUB_START) && (addr < STUB_END))128128+ return -EINVAL;127129128130 if (hvc->index != 0) {129131 last = &hvc->ops[hvc->index - 1];···287283 /* This is not an else because ret is modified above */288284 if (ret) {289285 printk(KERN_ERR "fix_range_common: failed, killing current "290290- "process\n");286286+ "process: %d\n", task_tgid_vnr(current));287287+ /* We are under mmap_sem, release it such that current can terminate */288288+ up_write(¤t->mm->mmap_sem);291289 force_sig(SIGKILL, current);290290+ do_signal();292291 }293292}294293
+1-1
arch/um/kernel/trap.c
···206206 int is_write = FAULT_WRITE(fi);207207 unsigned long address = FAULT_ADDRESS(fi);208208209209- if (regs)209209+ if (!is_user && regs)210210 current->thread.segv_regs = container_of(regs, struct pt_regs, regs);211211212212 if (!is_user && (address >= start_vm) && (address < end_vm)) {
+2-7
arch/um/os-Linux/skas/process.c
···54545555void wait_stub_done(int pid)5656{5757- int n, status, err, bad_stop = 0;5757+ int n, status, err;58585959 while (1) {6060 CATCH_EINTR(n = waitpid(pid, &status, WUNTRACED | __WALL));···74747575 if (((1 << WSTOPSIG(status)) & STUB_DONE_MASK) != 0)7676 return;7777- else7878- bad_stop = 1;79778078bad_wait:8179 err = ptrace_dump_regs(pid);···8385 printk(UM_KERN_ERR "wait_stub_done : failed to wait for SIGTRAP, "8486 "pid = %d, n = %d, errno = %d, status = 0x%x\n", pid, n, errno,8587 status);8686- if (bad_stop)8787- kill(pid, SIGKILL);8888- else8989- fatal_sigsegv();8888+ fatal_sigsegv();9089}91909291extern unsigned long current_stub_stack(void);
···13821382 intel_pmu_lbr_read();1383138313841384 /*13851385+ * CondChgd bit 63 doesn't mean any overflow status. Ignore13861386+ * and clear the bit.13871387+ */13881388+ if (__test_and_clear_bit(63, (unsigned long *)&status)) {13891389+ if (!status)13901390+ goto done;13911391+ }13921392+13931393+ /*13851394 * PEBS overflow sets bit 62 in the global status register13861395 */13871396 if (__test_and_clear_bit(62, (unsigned long *)&status)) {
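The CondChgd handling added above follows a simple rule: clear bit 63 first, and if nothing else remains set, treat the interrupt as spurious rather than as an overflow. A small sketch of that status triage (plain bit arithmetic standing in for the kernel's `__test_and_clear_bit()`):

```c
#include <assert.h>
#include <stdint.h>

static int test_and_clear_bit63(uint64_t *status)
{
	int was_set = (*status >> 63) & 1;

	*status &= ~(1ULL << 63);      /* always clear CondChgd */
	return was_set;
}

/* Returns 1 if real overflow bits remain after stripping CondChgd. */
static int handle_status(uint64_t status)
{
	if (test_and_clear_bit63(&status) && status == 0)
		return 0;              /* only CondChgd was set: ignore */
	return status != 0;
}
```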
···6262 Only used for the 64-bit and x32 vdsos. */6363static unsigned long vdso_addr(unsigned long start, unsigned len)6464{6565+#ifdef CONFIG_X86_326666+ return 0;6767+#else6568 unsigned long addr, end;6669 unsigned offset;6770 end = (start + PMD_SIZE - 1) & PMD_MASK;···8683 addr = align_vdso_addr(addr);87848885 return addr;8686+#endif8987}90889189static int map_vdso(const struct vdso_image *image, bool calculate_addr)
+129-3
drivers/acpi/ac.c
···3030#include <linux/types.h>3131#include <linux/dmi.h>3232#include <linux/delay.h>3333+#ifdef CONFIG_ACPI_PROCFS_POWER3434+#include <linux/proc_fs.h>3535+#include <linux/seq_file.h>3636+#endif3337#include <linux/platform_device.h>3438#include <linux/power_supply.h>3539#include <linux/acpi.h>···5652MODULE_DESCRIPTION("ACPI AC Adapter Driver");5753MODULE_LICENSE("GPL");58545555+5956static int acpi_ac_add(struct acpi_device *device);6057static int acpi_ac_remove(struct acpi_device *device);6158static void acpi_ac_notify(struct acpi_device *device, u32 event);···7166static int acpi_ac_resume(struct device *dev);7267#endif7368static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume);6969+7070+#ifdef CONFIG_ACPI_PROCFS_POWER7171+extern struct proc_dir_entry *acpi_lock_ac_dir(void);7272+extern void *acpi_unlock_ac_dir(struct proc_dir_entry *acpi_ac_dir);7373+static int acpi_ac_open_fs(struct inode *inode, struct file *file);7474+#endif7575+74767577static int ac_sleep_before_get_state_ms;7678···10290};1039110492#define to_acpi_ac(x) container_of(x, struct acpi_ac, charger)9393+9494+#ifdef CONFIG_ACPI_PROCFS_POWER9595+static const struct file_operations acpi_ac_fops = {9696+ .owner = THIS_MODULE,9797+ .open = acpi_ac_open_fs,9898+ .read = seq_read,9999+ .llseek = seq_lseek,100100+ .release = single_release,101101+};102102+#endif105103106104/* --------------------------------------------------------------------------107105 AC Adapter Management···164142static enum power_supply_property ac_props[] = {165143 POWER_SUPPLY_PROP_ONLINE,166144};145145+146146+#ifdef CONFIG_ACPI_PROCFS_POWER147147+/* --------------------------------------------------------------------------148148+ FS Interface (/proc)149149+ -------------------------------------------------------------------------- */150150+151151+static struct proc_dir_entry *acpi_ac_dir;152152+153153+static int acpi_ac_seq_show(struct seq_file *seq, void *offset)154154+{155155+ struct acpi_ac *ac = 
seq->private;156156+157157+158158+ if (!ac)159159+ return 0;160160+161161+ if (acpi_ac_get_state(ac)) {162162+ seq_puts(seq, "ERROR: Unable to read AC Adapter state\n");163163+ return 0;164164+ }165165+166166+ seq_puts(seq, "state: ");167167+ switch (ac->state) {168168+ case ACPI_AC_STATUS_OFFLINE:169169+ seq_puts(seq, "off-line\n");170170+ break;171171+ case ACPI_AC_STATUS_ONLINE:172172+ seq_puts(seq, "on-line\n");173173+ break;174174+ default:175175+ seq_puts(seq, "unknown\n");176176+ break;177177+ }178178+179179+ return 0;180180+}181181+182182+static int acpi_ac_open_fs(struct inode *inode, struct file *file)183183+{184184+ return single_open(file, acpi_ac_seq_show, PDE_DATA(inode));185185+}186186+187187+static int acpi_ac_add_fs(struct acpi_ac *ac)188188+{189189+ struct proc_dir_entry *entry = NULL;190190+191191+ printk(KERN_WARNING PREFIX "Deprecated procfs I/F for AC is loaded,"192192+ " please retry with CONFIG_ACPI_PROCFS_POWER cleared\n");193193+ if (!acpi_device_dir(ac->device)) {194194+ acpi_device_dir(ac->device) =195195+ proc_mkdir(acpi_device_bid(ac->device), acpi_ac_dir);196196+ if (!acpi_device_dir(ac->device))197197+ return -ENODEV;198198+ }199199+200200+ /* 'state' [R] */201201+ entry = proc_create_data(ACPI_AC_FILE_STATE,202202+ S_IRUGO, acpi_device_dir(ac->device),203203+ &acpi_ac_fops, ac);204204+ if (!entry)205205+ return -ENODEV;206206+ return 0;207207+}208208+209209+static int acpi_ac_remove_fs(struct acpi_ac *ac)210210+{211211+212212+ if (acpi_device_dir(ac->device)) {213213+ remove_proc_entry(ACPI_AC_FILE_STATE,214214+ acpi_device_dir(ac->device));215215+ remove_proc_entry(acpi_device_bid(ac->device), acpi_ac_dir);216216+ acpi_device_dir(ac->device) = NULL;217217+ }218218+219219+ return 0;220220+}221221+#endif167222168223/* --------------------------------------------------------------------------169224 Driver Model···342243 goto end;343244344245 ac->charger.name = acpi_device_bid(device);246246+#ifdef CONFIG_ACPI_PROCFS_POWER247247+ 
result = acpi_ac_add_fs(ac);248248+ if (result)249249+ goto end;250250+#endif345251 ac->charger.type = POWER_SUPPLY_TYPE_MAINS;346252 ac->charger.properties = ac_props;347253 ac->charger.num_properties = ARRAY_SIZE(ac_props);···362258 ac->battery_nb.notifier_call = acpi_ac_battery_notify;363259 register_acpi_notifier(&ac->battery_nb);364260end:365365- if (result)261261+ if (result) {262262+#ifdef CONFIG_ACPI_PROCFS_POWER263263+ acpi_ac_remove_fs(ac);264264+#endif366265 kfree(ac);266266+ }367267368268 dmi_check_system(ac_dmi_table);369269 return result;···411303 power_supply_unregister(&ac->charger);412304 unregister_acpi_notifier(&ac->battery_nb);413305306306+#ifdef CONFIG_ACPI_PROCFS_POWER307307+ acpi_ac_remove_fs(ac);308308+#endif309309+414310 kfree(ac);415311416312 return 0;···427315 if (acpi_disabled)428316 return -ENODEV;429317430430- result = acpi_bus_register_driver(&acpi_ac_driver);431431- if (result < 0)318318+#ifdef CONFIG_ACPI_PROCFS_POWER319319+ acpi_ac_dir = acpi_lock_ac_dir();320320+ if (!acpi_ac_dir)432321 return -ENODEV;322322+#endif323323+324324+325325+ result = acpi_bus_register_driver(&acpi_ac_driver);326326+ if (result < 0) {327327+#ifdef CONFIG_ACPI_PROCFS_POWER328328+ acpi_unlock_ac_dir(acpi_ac_dir);329329+#endif330330+ return -ENODEV;331331+ }433332434333 return 0;435334}···448325static void __exit acpi_ac_exit(void)449326{450327 acpi_bus_unregister_driver(&acpi_ac_driver);328328+#ifdef CONFIG_ACPI_PROCFS_POWER329329+ acpi_unlock_ac_dir(acpi_ac_dir);330330+#endif451331}452332module_init(acpi_ac_init);453333module_exit(acpi_ac_exit);
···3535#include <linux/delay.h>3636#include <linux/slab.h>3737#include <linux/suspend.h>3838+#include <linux/delay.h>3839#include <asm/unaligned.h>39404041#ifdef CONFIG_ACPI_PROCFS_POWER···533532 battery->rate_now = abs((s16)battery->rate_now);534533 printk_once(KERN_WARNING FW_BUG "battery: (dis)charge rate"535534 " invalid.\n");535535+ }536536+537537+ /*538538+ * When fully charged, some batteries wrongly report539539+ * capacity_now = design_capacity instead of = full_charge_capacity540540+ */541541+ if (battery->capacity_now > battery->full_charge_capacity542542+ && battery->full_charge_capacity != ACPI_BATTERY_VALUE_UNKNOWN) {543543+ battery->capacity_now = battery->full_charge_capacity;544544+ if (battery->capacity_now != battery->design_capacity)545545+ printk_once(KERN_WARNING FW_BUG546546+ "battery: reported current charge level (%d) "547547+ "is higher than reported maximum charge level (%d).\n",548548+ battery->capacity_now, battery->full_charge_capacity);536549 }537550538551 if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags)···11661151 {},11671152};1168115311541154+/*11551155+ * Some machines'(E,G Lenovo Z480) ECs are not stable11561156+ * during boot up and this causes battery driver fails to be11571157+ * probed due to failure of getting battery information11581158+ * from EC sometimes. After several retries, the operation11591159+ * may work. 
So add retry code here and 20ms sleep between11601160+ * every retries.11611161+ */11621162+static int acpi_battery_update_retry(struct acpi_battery *battery)11631163+{11641164+ int retry, ret;11651165+11661166+ for (retry = 5; retry; retry--) {11671167+ ret = acpi_battery_update(battery, false);11681168+ if (!ret)11691169+ break;11701170+11711171+ msleep(20);11721172+ }11731173+ return ret;11741174+}11751175+11691176static int acpi_battery_add(struct acpi_device *device)11701177{11711178 int result = 0;···12061169 mutex_init(&battery->sysfs_lock);12071170 if (acpi_has_method(battery->device->handle, "_BIX"))12081171 set_bit(ACPI_BATTERY_XINFO_PRESENT, &battery->flags);12091209- result = acpi_battery_update(battery, false);11721172+11731173+ result = acpi_battery_update_retry(battery);12101174 if (result)12111175 goto fail;11761176+12121177#ifdef CONFIG_ACPI_PROCFS_POWER12131178 result = acpi_battery_add_fs(device);12141179#endif
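The retry policy introduced above is a bounded loop: attempt the update up to five times, sleep between failures, and report the last result. A standalone sketch (the sleep is stubbed out; `retry_update()` and its helpers are illustrative names, not the kernel functions):

```c
#include <assert.h>
#include <stddef.h>

/* Try update() up to five times; stop early on success. */
static int retry_update(int (*update)(void *), void *arg)
{
	int retry, ret = -1;

	for (retry = 5; retry; retry--) {
		ret = update(arg);
		if (!ret)
			break;
		/* msleep(20) in the kernel; omitted in this sketch */
	}
	return ret;
}

/* Test helper: fails twice, then succeeds (models a slow EC). */
static int flaky_update(void *arg)
{
	int *calls = arg;

	return ++(*calls) < 3 ? -5 : 0;
}

/* Test helper: never succeeds. */
static int failing_update(void *arg)
{
	(void)arg;
	return -19;
}

static int demo_flaky(void)
{
	int calls = 0;

	return retry_update(flaky_update, &calls);
}
```

A capped retry count keeps a genuinely broken EC from stalling probe forever, while still tolerating the transient failures seen at boot.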
+85-79
drivers/acpi/ec.c
···11/*22- * ec.c - ACPI Embedded Controller Driver (v2.1)22+ * ec.c - ACPI Embedded Controller Driver (v2.2)33 *44- * Copyright (C) 2006-2008 Alexey Starikovskiy <astarikovskiy@suse.de>55- * Copyright (C) 2006 Denis Sadykov <denis.m.sadykov@intel.com>66- * Copyright (C) 2004 Luming Yu <luming.yu@intel.com>77- * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>88- * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>44+ * Copyright (C) 2001-2014 Intel Corporation55+ * Author: 2014 Lv Zheng <lv.zheng@intel.com>66+ * 2006, 2007 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com>77+ * 2006 Denis Sadykov <denis.m.sadykov@intel.com>88+ * 2004 Luming Yu <luming.yu@intel.com>99+ * 2001, 2002 Andy Grover <andrew.grover@intel.com>1010+ * 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>1111+ * Copyright (C) 2008 Alexey Starikovskiy <astarikovskiy@suse.de>912 *1013 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~1114 *···5552/* EC status register */5653#define ACPI_EC_FLAG_OBF 0x01 /* Output buffer full */5754#define ACPI_EC_FLAG_IBF 0x02 /* Input buffer full */5555+#define ACPI_EC_FLAG_CMD 0x08 /* Input buffer contains a command */5856#define ACPI_EC_FLAG_BURST 0x10 /* burst mode */5957#define ACPI_EC_FLAG_SCI 0x20 /* EC-SCI occurred */6058···8177 * OpReg are installed */8278 EC_FLAGS_BLOCKED, /* Transactions are blocked */8379};8080+8181+#define ACPI_EC_COMMAND_POLL 0x01 /* Available for command byte */8282+#define ACPI_EC_COMMAND_COMPLETE 0x02 /* Completed last byte */84838584/* ec.c is compiled in acpi namespace so this shows up as acpi.ec_delay param */8685static unsigned int ec_delay __read_mostly = ACPI_EC_DELAY;···116109 u8 ri;117110 u8 wlen;118111 u8 rlen;119119- bool done;112112+ u8 flags;120113};121114122115struct acpi_ec *boot_ec, *first_ec;···134127static inline u8 acpi_ec_read_status(struct acpi_ec *ec)135128{136129 u8 x = inb(ec->command_addr);137137- pr_debug("---> status = 0x%2.2x\n", 
x);130130+ pr_debug("EC_SC(R) = 0x%2.2x "131131+ "SCI_EVT=%d BURST=%d CMD=%d IBF=%d OBF=%d\n",132132+ x,133133+ !!(x & ACPI_EC_FLAG_SCI),134134+ !!(x & ACPI_EC_FLAG_BURST),135135+ !!(x & ACPI_EC_FLAG_CMD),136136+ !!(x & ACPI_EC_FLAG_IBF),137137+ !!(x & ACPI_EC_FLAG_OBF));138138 return x;139139}140140141141static inline u8 acpi_ec_read_data(struct acpi_ec *ec)142142{143143 u8 x = inb(ec->data_addr);144144- pr_debug("---> data = 0x%2.2x\n", x);144144+ pr_debug("EC_DATA(R) = 0x%2.2x\n", x);145145 return x;146146}147147148148static inline void acpi_ec_write_cmd(struct acpi_ec *ec, u8 command)149149{150150- pr_debug("<--- command = 0x%2.2x\n", command);150150+ pr_debug("EC_SC(W) = 0x%2.2x\n", command);151151 outb(command, ec->command_addr);152152}153153154154static inline void acpi_ec_write_data(struct acpi_ec *ec, u8 data)155155{156156- pr_debug("<--- data = 0x%2.2x\n", data);156156+ pr_debug("EC_DATA(W) = 0x%2.2x\n", data);157157 outb(data, ec->data_addr);158158}159159160160-static int ec_transaction_done(struct acpi_ec *ec)160160+static int ec_transaction_completed(struct acpi_ec *ec)161161{162162 unsigned long flags;163163 int ret = 0;164164 spin_lock_irqsave(&ec->lock, flags);165165- if (!ec->curr || ec->curr->done)165165+ if (ec->curr && (ec->curr->flags & ACPI_EC_COMMAND_COMPLETE))166166 ret = 1;167167 spin_unlock_irqrestore(&ec->lock, flags);168168 return ret;169169}170170171171-static void start_transaction(struct acpi_ec *ec)171171+static bool advance_transaction(struct acpi_ec *ec)172172{173173- ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0;174174- ec->curr->done = false;175175- acpi_ec_write_cmd(ec, ec->curr->command);176176-}177177-178178-static void advance_transaction(struct acpi_ec *ec, u8 status)179179-{180180- unsigned long flags;181173 struct transaction *t;174174+ u8 status;175175+ bool wakeup = false;182176183183- spin_lock_irqsave(&ec->lock, flags);177177+ pr_debug("===== %s =====\n", in_interrupt() ? 
"IRQ" : "TASK");178178+ status = acpi_ec_read_status(ec);184179 t = ec->curr;185180 if (!t)186186- goto unlock;187187- if (t->wlen > t->wi) {188188- if ((status & ACPI_EC_FLAG_IBF) == 0)189189- acpi_ec_write_data(ec,190190- t->wdata[t->wi++]);191191- else192192- goto err;193193- } else if (t->rlen > t->ri) {194194- if ((status & ACPI_EC_FLAG_OBF) == 1) {195195- t->rdata[t->ri++] = acpi_ec_read_data(ec);196196- if (t->rlen == t->ri)197197- t->done = true;181181+ goto err;182182+ if (t->flags & ACPI_EC_COMMAND_POLL) {183183+ if (t->wlen > t->wi) {184184+ if ((status & ACPI_EC_FLAG_IBF) == 0)185185+ acpi_ec_write_data(ec, t->wdata[t->wi++]);186186+ else187187+ goto err;188188+ } else if (t->rlen > t->ri) {189189+ if ((status & ACPI_EC_FLAG_OBF) == 1) {190190+ t->rdata[t->ri++] = acpi_ec_read_data(ec);191191+ if (t->rlen == t->ri) {192192+ t->flags |= ACPI_EC_COMMAND_COMPLETE;193193+ wakeup = true;194194+ }195195+ } else196196+ goto err;197197+ } else if (t->wlen == t->wi &&198198+ (status & ACPI_EC_FLAG_IBF) == 0) {199199+ t->flags |= ACPI_EC_COMMAND_COMPLETE;200200+ wakeup = true;201201+ }202202+ return wakeup;203203+ } else {204204+ if ((status & ACPI_EC_FLAG_IBF) == 0) {205205+ acpi_ec_write_cmd(ec, t->command);206206+ t->flags |= ACPI_EC_COMMAND_POLL;198207 } else199208 goto err;200200- } else if (t->wlen == t->wi &&201201- (status & ACPI_EC_FLAG_IBF) == 0)202202- t->done = true;203203- goto unlock;209209+ return wakeup;210210+ }204211err:205212 /*206213 * If SCI bit is set, then don't think it's a false IRQ207214 * otherwise will take a not handled IRQ as a false one.208215 */209209- if (in_interrupt() && !(status & ACPI_EC_FLAG_SCI))210210- ++t->irq_count;216216+ if (!(status & ACPI_EC_FLAG_SCI)) {217217+ if (in_interrupt() && t)218218+ ++t->irq_count;219219+ }220220+ return wakeup;221221+}211222212212-unlock:213213- spin_unlock_irqrestore(&ec->lock, flags);223223+static void start_transaction(struct acpi_ec *ec)224224+{225225+ ec->curr->irq_count = ec->curr->wi 
= ec->curr->ri = 0;226226+ ec->curr->flags = 0;227227+ (void)advance_transaction(ec);214228}215229216230static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data);···256228 /* don't sleep with disabled interrupts */257229 if (EC_FLAGS_MSI || irqs_disabled()) {258230 udelay(ACPI_EC_MSI_UDELAY);259259- if (ec_transaction_done(ec))231231+ if (ec_transaction_completed(ec))260232 return 0;261233 } else {262234 if (wait_event_timeout(ec->wait,263263- ec_transaction_done(ec),235235+ ec_transaction_completed(ec),264236 msecs_to_jiffies(1)))265237 return 0;266238 }267267- advance_transaction(ec, acpi_ec_read_status(ec));239239+ spin_lock_irqsave(&ec->lock, flags);240240+ (void)advance_transaction(ec);241241+ spin_unlock_irqrestore(&ec->lock, flags);268242 } while (time_before(jiffies, delay));269243 pr_debug("controller reset, restart transaction\n");270244 spin_lock_irqsave(&ec->lock, flags);···298268 return ret;299269}300270301301-static int ec_check_ibf0(struct acpi_ec *ec)302302-{303303- u8 status = acpi_ec_read_status(ec);304304- return (status & ACPI_EC_FLAG_IBF) == 0;305305-}306306-307307-static int ec_wait_ibf0(struct acpi_ec *ec)308308-{309309- unsigned long delay = jiffies + msecs_to_jiffies(ec_delay);310310- /* interrupt wait manually if GPE mode is not active */311311- while (time_before(jiffies, delay))312312- if (wait_event_timeout(ec->wait, ec_check_ibf0(ec),313313- msecs_to_jiffies(1)))314314- return 0;315315- return -ETIME;316316-}317317-318271static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t)319272{320273 int status;···317304 status = -ENODEV;318305 goto unlock;319306 }320320- }321321- if (ec_wait_ibf0(ec)) {322322- pr_err("input buffer is not empty, "323323- "aborting transaction\n");324324- status = -ETIME;325325- goto end;326307 }327308 pr_debug("transaction start (cmd=0x%02x, addr=0x%02x)\n",328309 t->command, t->wdata ? 
t->wdata[0] : 0);···341334 set_bit(EC_FLAGS_GPE_STORM, &ec->flags);342335 }343336 pr_debug("transaction end\n");344344-end:345337 if (ec->global_lock)346338 acpi_release_global_lock(glk);347339unlock:···640634static u32 acpi_ec_gpe_handler(acpi_handle gpe_device,641635 u32 gpe_number, void *data)642636{637637+ unsigned long flags;643638 struct acpi_ec *ec = data;644644- u8 status = acpi_ec_read_status(ec);645639646646- pr_debug("~~~> interrupt, status:0x%02x\n", status);647647-648648- advance_transaction(ec, status);649649- if (ec_transaction_done(ec) &&650650- (acpi_ec_read_status(ec) & ACPI_EC_FLAG_IBF) == 0) {640640+ spin_lock_irqsave(&ec->lock, flags);641641+ if (advance_transaction(ec))651642 wake_up(&ec->wait);652652- ec_check_sci(ec, acpi_ec_read_status(ec));653653- }643643+ spin_unlock_irqrestore(&ec->lock, flags);644644+ ec_check_sci(ec, acpi_ec_read_status(ec));654645 return ACPI_INTERRUPT_HANDLED | ACPI_REENABLE_GPE;655646}656647···10691066 /* fall through */10701067 }1071106810721072- if (EC_FLAGS_SKIP_DSDT_SCAN)10691069+ if (EC_FLAGS_SKIP_DSDT_SCAN) {10701070+ kfree(saved_ec);10731071 return -ENODEV;10721072+ }1074107310751074 /* This workaround is needed only on some broken machines,10761075 * which require early EC, but fail to provide ECDT */···11101105 }11111106error:11121107 kfree(boot_ec);11081108+ kfree(saved_ec);11131109 boot_ec = NULL;11141110 return -ENODEV;11151111}
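The reworked `advance_transaction()` above turns the EC transaction into an explicit two-phase state machine: a transaction gains `ACPI_EC_COMMAND_POLL` once the command byte is written, then moves data bytes, and gains `ACPI_EC_COMMAND_COMPLETE` when the last byte transfers. This toy model replaces the status-register handshake with simple counters, so only the flag progression is shown (all `toy_*` names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define TOY_POLL     0x01   /* command byte written, data phase active */
#define TOY_COMPLETE 0x02   /* last byte transferred */

struct toy_txn {
	uint8_t flags;
	int wlen, wi;   /* bytes to write / already written */
	int rlen, ri;   /* bytes to read / already read */
};

/* One advance step; returns 1 when a waiter should be woken. */
static int toy_advance(struct toy_txn *t)
{
	if (!(t->flags & TOY_POLL)) {
		t->flags |= TOY_POLL;          /* write the command byte */
		return 0;
	}
	if (t->wi < t->wlen) {
		t->wi++;                       /* write next data byte */
	} else if (t->ri < t->rlen) {
		t->ri++;                       /* read next data byte */
		if (t->ri == t->rlen) {
			t->flags |= TOY_COMPLETE;
			return 1;
		}
	} else if (t->wi == t->wlen && t->rlen == 0) {
		t->flags |= TOY_COMPLETE;      /* write-only txn done */
		return 1;
	}
	return 0;
}

/* Drive a transaction to completion; returns the steps taken. */
static int toy_steps(int wlen, int rlen)
{
	struct toy_txn t = { .flags = 0, .wlen = wlen, .rlen = rlen };
	int steps = 0;

	while (!(t.flags & TOY_COMPLETE) && steps < 16) {
		toy_advance(&t);
		steps++;
	}
	return steps;
}
```

In the real driver the same stepping happens under `ec->lock` from both the GPE handler and the polling path, which is what the flag-based rewrite makes safe.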
+5-5
drivers/acpi/resource.c
···7777 switch (ares->type) {7878 case ACPI_RESOURCE_TYPE_MEMORY24:7979 memory24 = &ares->data.memory24;8080- if (!memory24->address_length)8080+ if (!memory24->minimum && !memory24->address_length)8181 return false;8282 acpi_dev_get_memresource(res, memory24->minimum,8383 memory24->address_length,···8585 break;8686 case ACPI_RESOURCE_TYPE_MEMORY32:8787 memory32 = &ares->data.memory32;8888- if (!memory32->address_length)8888+ if (!memory32->minimum && !memory32->address_length)8989 return false;9090 acpi_dev_get_memresource(res, memory32->minimum,9191 memory32->address_length,···9393 break;9494 case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:9595 fixed_memory32 = &ares->data.fixed_memory32;9696- if (!fixed_memory32->address_length)9696+ if (!fixed_memory32->address && !fixed_memory32->address_length)9797 return false;9898 acpi_dev_get_memresource(res, fixed_memory32->address,9999 fixed_memory32->address_length,···150150 switch (ares->type) {151151 case ACPI_RESOURCE_TYPE_IO:152152 io = &ares->data.io;153153- if (!io->address_length)153153+ if (!io->minimum && !io->address_length)154154 return false;155155 acpi_dev_get_ioresource(res, io->minimum,156156 io->address_length,···158158 break;159159 case ACPI_RESOURCE_TYPE_FIXED_IO:160160 fixed_io = &ares->data.fixed_io;161161- if (!fixed_io->address_length)161161+ if (!fixed_io->address && !fixed_io->address_length)162162 return false;163163 acpi_dev_get_ioresource(res, fixed_io->address,164164 fixed_io->address_length,
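The pattern repeated across these hunks is a tightened "resource absent" test: an entry is skipped only when both its start address and its length are zero, so a zero-length window at a non-zero address still gets passed through. Reduced to a predicate (a sketch, not the ACPI structures):

```c
#include <assert.h>
#include <stdint.h>

/* A resource entry is considered absent only when BOTH fields are 0. */
static int resource_is_absent(uint64_t start, uint64_t len)
{
	return start == 0 && len == 0;
}
```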
···7878struct xgene_ahci_context {7979 struct ahci_host_priv *hpriv;8080 struct device *dev;8181+ u8 last_cmd[MAX_AHCI_CHN_PERCTR]; /* tracking the last command issued*/8182 void __iomem *csr_core; /* Core CSR address of IP */8283 void __iomem *csr_diag; /* Diag CSR address of IP */8384 void __iomem *csr_axi; /* AXI CSR address of IP */···9998}10099101100/**101101+ * xgene_ahci_restart_engine - Restart the dma engine.102102+ * @ap : ATA port of interest103103+ *104104+ * Restarts the dma engine inside the controller.105105+ */106106+static int xgene_ahci_restart_engine(struct ata_port *ap)107107+{108108+ struct ahci_host_priv *hpriv = ap->host->private_data;109109+110110+ ahci_stop_engine(ap);111111+ ahci_start_fis_rx(ap);112112+ hpriv->start_engine(ap);113113+114114+ return 0;115115+}116116+117117+/**118118+ * xgene_ahci_qc_issue - Issue commands to the device119119+ * @qc: Command to issue120120+ *121121+ * Due to Hardware errata for IDENTIFY DEVICE command, the controller cannot122122+ * clear the BSY bit after receiving the PIO setup FIS. This results in the dma123123+ * state machine goes into the CMFatalErrorUpdate state and locks up. 
By124124+ * restarting the dma engine, it removes the controller out of lock up state.125125+ */126126+static unsigned int xgene_ahci_qc_issue(struct ata_queued_cmd *qc)127127+{128128+ struct ata_port *ap = qc->ap;129129+ struct ahci_host_priv *hpriv = ap->host->private_data;130130+ struct xgene_ahci_context *ctx = hpriv->plat_data;131131+ int rc = 0;132132+133133+ if (unlikely(ctx->last_cmd[ap->port_no] == ATA_CMD_ID_ATA))134134+ xgene_ahci_restart_engine(ap);135135+136136+ rc = ahci_qc_issue(qc);137137+138138+ /* Save the last command issued */139139+ ctx->last_cmd[ap->port_no] = qc->tf.command;140140+141141+ return rc;142142+}143143+144144+/**102145 * xgene_ahci_read_id - Read ID data from the specified device103146 * @dev: device104147 * @tf: proposed taskfile105148 * @id: data buffer106149 *107150 * This custom read ID function is required due to the fact that the HW108108- * does not support DEVSLP and the controller state machine may get stuck109109- * after processing the ID query command.151151+ * does not support DEVSLP.110152 */111153static unsigned int xgene_ahci_read_id(struct ata_device *dev,112154 struct ata_taskfile *tf, u16 *id)113155{114156 u32 err_mask;115115- void __iomem *port_mmio = ahci_port_base(dev->link->ap);116157117158 err_mask = ata_do_dev_read_id(dev, tf, id);118159 if (err_mask)···176133 */177134 id[ATA_ID_FEATURE_SUPP] &= ~(1 << 8);178135179179- /*180180- * Due to HW errata, restart the port if no other command active.181181- * Otherwise the controller may get stuck.182182- */183183- if (!readl(port_mmio + PORT_CMD_ISSUE)) {184184- writel(PORT_CMD_FIS_RX, port_mmio + PORT_CMD);185185- readl(port_mmio + PORT_CMD); /* Force a barrier */186186- writel(PORT_CMD_FIS_RX | PORT_CMD_START, port_mmio + PORT_CMD);187187- readl(port_mmio + PORT_CMD); /* Force a barrier */188188- }189136 return 0;190137}191138···333300 .host_stop = xgene_ahci_host_stop,334301 .hardreset = xgene_ahci_hardreset,335302 .read_id = xgene_ahci_read_id,303303+ 
.qc_issue = xgene_ahci_qc_issue,336304};337305338306static const struct ata_port_info xgene_ahci_port_info = {
+4-3
drivers/ata/libahci.c
···68686969static int ahci_scr_read(struct ata_link *link, unsigned int sc_reg, u32 *val);7070static int ahci_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val);7171-static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc);7271static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc);7372static int ahci_port_start(struct ata_port *ap);7473static void ahci_port_stop(struct ata_port *ap);···619620}620621EXPORT_SYMBOL_GPL(ahci_stop_engine);621622622622-static void ahci_start_fis_rx(struct ata_port *ap)623623+void ahci_start_fis_rx(struct ata_port *ap)623624{624625 void __iomem *port_mmio = ahci_port_base(ap);625626 struct ahci_host_priv *hpriv = ap->host->private_data;···645646 /* flush */646647 readl(port_mmio + PORT_CMD);647648}649649+EXPORT_SYMBOL_GPL(ahci_start_fis_rx);648650649651static int ahci_stop_fis_rx(struct ata_port *ap)650652{···19451945}19461946EXPORT_SYMBOL_GPL(ahci_interrupt);1947194719481948-static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc)19481948+unsigned int ahci_qc_issue(struct ata_queued_cmd *qc)19491949{19501950 struct ata_port *ap = qc->ap;19511951 void __iomem *port_mmio = ahci_port_base(ap);···1974197419751975 return 0;19761976}19771977+EXPORT_SYMBOL_GPL(ahci_qc_issue);1977197819781979static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc)19791980{
+6-1
drivers/ata/libahci_platform.c
···250250 if (IS_ERR(hpriv->phy)) {251251 rc = PTR_ERR(hpriv->phy);252252 switch (rc) {253253- case -ENODEV:254253 case -ENOSYS:254254+ /* No PHY support. Check if PHY is required. */255255+ if (of_find_property(dev->of_node, "phys", NULL)) {256256+ dev_err(dev, "couldn't get sata-phy: ENOSYS\n");257257+ goto err_out;258258+ }259259+ case -ENODEV:255260 /* continue normally */256261 hpriv->phy = NULL;257262 break;
+14-4
drivers/base/platform.c
···9090 return dev->archdata.irqs[num];9191#else9292 struct resource *r;9393- if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node)9494- return of_irq_get(dev->dev.of_node, num);9393+ if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) {9494+ int ret;9595+9696+ ret = of_irq_get(dev->dev.of_node, num);9797+ if (ret >= 0 || ret == -EPROBE_DEFER)9898+ return ret;9999+ }9510096101 r = platform_get_resource(dev, IORESOURCE_IRQ, num);97102···139134{140135 struct resource *r;141136142142- if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node)143143- return of_irq_get_byname(dev->dev.of_node, name);137137+ if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) {138138+ int ret;139139+140140+ ret = of_irq_get_byname(dev->dev.of_node, name);141141+ if (ret >= 0 || ret == -EPROBE_DEFER)142142+ return ret;143143+ }144144145145 r = platform_get_resource_byname(dev, IORESOURCE_IRQ, name);146146 return r ? r->start : -ENXIO;
···406406 H5_HDR_PKT_TYPE(hdr) != HCI_3WIRE_LINK_PKT) {407407 BT_ERR("Non-link packet received in non-active state");408408 h5_reset_rx(h5);409409+ return 0;409410 }410411411412 h5->rx_func = h5_rx_payload;
+39-8
drivers/char/hw_random/core.c
···5555static int data_avail;5656static u8 *rng_buffer;57575858+static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size,5959+ int wait);6060+5861static size_t rng_buffer_size(void)5962{6063 return SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES;6164}62656666+static void add_early_randomness(struct hwrng *rng)6767+{6868+ unsigned char bytes[16];6969+ int bytes_read;7070+7171+ /*7272+ * Currently only virtio-rng cannot return data during device7373+ * probe, and that's handled in virtio-rng.c itself. If there7474+ * are more such devices, this call to rng_get_data can be7575+ * made conditional here instead of doing it per-device.7676+ */7777+ bytes_read = rng_get_data(rng, bytes, sizeof(bytes), 1);7878+ if (bytes_read > 0)7979+ add_device_randomness(bytes, bytes_read);8080+}8181+6382static inline int hwrng_init(struct hwrng *rng)6483{6565- if (!rng->init)6666- return 0;6767- return rng->init(rng);8484+ if (rng->init) {8585+ int ret;8686+8787+ ret = rng->init(rng);8888+ if (ret)8989+ return ret;9090+ }9191+ add_early_randomness(rng);9292+ return 0;6893}69947095static inline void hwrng_cleanup(struct hwrng *rng)···329304{330305 int err = -EINVAL;331306 struct hwrng *old_rng, *tmp;332332- unsigned char bytes[16];333333- int bytes_read;334307335308 if (rng->name == NULL ||336309 (rng->data_read == NULL && rng->read == NULL))···370347 INIT_LIST_HEAD(&rng->list);371348 list_add_tail(&rng->list, &rng_list);372349373373- bytes_read = rng_get_data(rng, bytes, sizeof(bytes), 1);374374- if (bytes_read > 0)375375- add_device_randomness(bytes, bytes_read);350350+ if (old_rng && !rng->init) {351351+ /*352352+ * Use a new device's input to add some randomness to353353+ * the system. 
If this rng device isn't going to be354354+ * used right away, its init function hasn't been355355+ * called yet; so only use the randomness from devices356356+ * that don't need an init callback.357357+ */358358+ add_early_randomness(rng);359359+ }360360+376361out_unlock:377362 mutex_unlock(&rng_mutex);378363out:
+10
drivers/char/hw_random/virtio-rng.c
···3838 int index;3939};40404141+static bool probe_done;4242+4143static void random_recv_done(struct virtqueue *vq)4244{4345 struct virtrng_info *vi = vq->vdev->priv;···6866{6967 int ret;7068 struct virtrng_info *vi = (struct virtrng_info *)rng->priv;6969+7070+ /*7171+ * Don't ask the host for data until we're set up. This call can7272+ * happen during hwrng_register(), after commit d9e7972619.7373+ */7474+ if (unlikely(!probe_done))7575+ return 0;71767277 if (!vi->busy) {7378 vi->busy = true;···146137 return err;147138 }148139140140+ probe_done = true;149141 return 0;150142}151143
+3-1
drivers/char/i8k.c
···138138 if (!alloc_cpumask_var(&old_mask, GFP_KERNEL))139139 return -ENOMEM;140140 cpumask_copy(old_mask, ¤t->cpus_allowed);141141- set_cpus_allowed_ptr(current, cpumask_of(0));141141+ rc = set_cpus_allowed_ptr(current, cpumask_of(0));142142+ if (rc)143143+ goto out;142144 if (smp_processor_id() != 0) {143145 rc = -EBUSY;144146 goto out;
+14-3
drivers/char/random.c
···641641 } while (unlikely(entropy_count < pool_size-2 && pnfrac));642642 }643643644644- if (entropy_count < 0) {644644+ if (unlikely(entropy_count < 0)) {645645 pr_warn("random: negative entropy/overflow: pool %s count %d\n",646646 r->name, entropy_count);647647 WARN_ON(1);···981981 int reserved)982982{983983 int entropy_count, orig;984984- size_t ibytes;984984+ size_t ibytes, nfrac;985985986986 BUG_ON(r->entropy_count > r->poolinfo->poolfracbits);987987···999999 }10001000 if (ibytes < min)10011001 ibytes = 0;10021002- if ((entropy_count -= ibytes << (ENTROPY_SHIFT + 3)) < 0)10021002+10031003+ if (unlikely(entropy_count < 0)) {10041004+ pr_warn("random: negative entropy count: pool %s count %d\n",10051005+ r->name, entropy_count);10061006+ WARN_ON(1);10071007+ entropy_count = 0;10081008+ }10091009+ nfrac = ibytes << (ENTROPY_SHIFT + 3);10101010+ if ((size_t) entropy_count > nfrac)10111011+ entropy_count -= nfrac;10121012+ else10031013 entropy_count = 0;1004101410051015 if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)···13861376 "with %d bits of entropy available\n",13871377 current->comm, nonblocking_pool.entropy_total);1388137813791379+ nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));13891380 ret = extract_entropy_user(&nonblocking_pool, buf, nbytes);1390138113911382 trace_urandom_read(8 * nbytes, ENTROPY_BITS(&nonblocking_pool),
···104104 tristate "Freescale i.MX6 cpufreq support"105105 depends on ARCH_MXC106106 depends on REGULATOR_ANATOP107107+ select PM_OPP107108 help108109 This adds cpufreq driver support for Freescale i.MX6 series SoCs.109110···119118 If in doubt, say Y.120119121120config ARM_KIRKWOOD_CPUFREQ122122- def_bool MACH_KIRKWOOD121121+ def_bool ARCH_KIRKWOOD || MACH_KIRKWOOD123122 help124123 This adds the CPUFreq driver for Marvell Kirkwood125124 SoCs.
+1-1
drivers/cpufreq/Makefile
···4949# LITTLE drivers, so that it is probed last.5050obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o51515252-obj-$(CONFIG_ARCH_DAVINCI_DA850) += davinci-cpufreq.o5252+obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o5353obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o5454obj-$(CONFIG_ARM_EXYNOS_CPUFREQ) += exynos-cpufreq.o5555obj-$(CONFIG_ARM_EXYNOS4210_CPUFREQ) += exynos4210-cpufreq.o
+2-5
drivers/cpufreq/cpufreq-cpu0.c
···152152 goto out_put_reg;153153 }154154155155- ret = of_init_opp_table(cpu_dev);156156- if (ret) {157157- pr_err("failed to init OPP table: %d\n", ret);158158- goto out_put_clk;159159- }155155+ /* OPPs might be populated at runtime, don't check for error here */156156+ of_init_opp_table(cpu_dev);160157161158 ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);162159 if (ret) {
+4-2
drivers/cpufreq/cpufreq.c
···11531153 * the creation of a brand new one. So we need to perform this update11541154 * by invoking update_policy_cpu().11551155 */11561156- if (recover_policy && cpu != policy->cpu)11561156+ if (recover_policy && cpu != policy->cpu) {11571157 update_policy_cpu(policy, cpu);11581158- else11581158+ WARN_ON(kobject_move(&policy->kobj, &dev->kobj));11591159+ } else {11591160 policy->cpu = cpu;11611161+ }1160116211611163 cpumask_copy(policy->cpus, cpumask_of(cpu));11621164
+22-13
drivers/cpufreq/intel_pstate.c
···128128129129struct perf_limits {130130 int no_turbo;131131+ int turbo_disabled;131132 int max_perf_pct;132133 int min_perf_pct;133134 int32_t max_perf;···288287 if (ret != 1)289288 return -EINVAL;290289 limits.no_turbo = clamp_t(int, input, 0 , 1);291291-290290+ if (limits.turbo_disabled) {291291+ pr_warn("Turbo disabled by BIOS or unavailable on processor\n");292292+ limits.no_turbo = limits.turbo_disabled;293293+ }292294 return count;293295}294296···361357{362358 u64 value;363359 rdmsrl(BYT_RATIOS, value);364364- return (value >> 8) & 0x3F;360360+ return (value >> 8) & 0x7F;365361}366362367363static int byt_get_max_pstate(void)368364{369365 u64 value;370366 rdmsrl(BYT_RATIOS, value);371371- return (value >> 16) & 0x3F;367367+ return (value >> 16) & 0x7F;372368}373369374370static int byt_get_turbo_pstate(void)375371{376372 u64 value;377373 rdmsrl(BYT_TURBO_RATIOS, value);378378- return value & 0x3F;374374+ return value & 0x7F;379375}380376381377static void byt_set_pstate(struct cpudata *cpudata, int pstate)···385381 u32 vid;386382387383 val = pstate << 8;388388- if (limits.no_turbo)384384+ if (limits.no_turbo && !limits.turbo_disabled)389385 val |= (u64)1 << 32;390386391387 vid_fp = cpudata->vid.min + mul_fp(···409405410406411407 rdmsrl(BYT_VIDS, value);412412- cpudata->vid.min = int_tofp((value >> 8) & 0x3f);413413- cpudata->vid.max = int_tofp((value >> 16) & 0x3f);408408+ cpudata->vid.min = int_tofp((value >> 8) & 0x7f);409409+ cpudata->vid.max = int_tofp((value >> 16) & 0x7f);414410 cpudata->vid.ratio = div_fp(415411 cpudata->vid.max - cpudata->vid.min,416412 int_tofp(cpudata->pstate.max_pstate -···452448 u64 val;453449454450 val = pstate << 8;455455- if (limits.no_turbo)451451+ if (limits.no_turbo && !limits.turbo_disabled)456452 val |= (u64)1 << 32;457453458454 wrmsrl_on_cpu(cpudata->cpu, MSR_IA32_PERF_CTL, val);···700696701697 cpu = all_cpu_data[cpunum];702698703703- intel_pstate_get_cpu_pstates(cpu);704704-705699 cpu->cpu = cpunum;700700+ 
intel_pstate_get_cpu_pstates(cpu);706701707702 init_timer_deferrable(&cpu->timer);708703 cpu->timer.function = intel_pstate_timer_func;···744741 limits.min_perf = int_tofp(1);745742 limits.max_perf_pct = 100;746743 limits.max_perf = int_tofp(1);747747- limits.no_turbo = 0;744744+ limits.no_turbo = limits.turbo_disabled;748745 return 0;749746 }750747 limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq;···787784{788785 struct cpudata *cpu;789786 int rc;787787+ u64 misc_en;790788791789 rc = intel_pstate_init_cpu(policy->cpu);792790 if (rc)···795791796792 cpu = all_cpu_data[policy->cpu];797793798798- if (!limits.no_turbo &&799799- limits.min_perf_pct == 100 && limits.max_perf_pct == 100)794794+ rdmsrl(MSR_IA32_MISC_ENABLE, misc_en);795795+ if (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE ||796796+ cpu->pstate.max_pstate == cpu->pstate.turbo_pstate) {797797+ limits.turbo_disabled = 1;798798+ limits.no_turbo = 1;799799+ }800800+ if (limits.min_perf_pct == 100 && limits.max_perf_pct == 100)800801 policy->policy = CPUFREQ_POLICY_PERFORMANCE;801802 else802803 policy->policy = CPUFREQ_POLICY_POWERSAVE;
+1-1
drivers/cpufreq/sa1110-cpufreq.c
···349349 name = "K4S641632D";350350 if (machine_is_h3100())351351 name = "KM416S4030CT";352352- if (machine_is_jornada720())352352+ if (machine_is_jornada720() || machine_is_h3600())353353 name = "K4S281632B-1H";354354 if (machine_is_nanoengine())355355 name = "MT48LC8M16A2TG-75";
+3-5
drivers/crypto/caam/jr.c
···453453 int error;454454455455 jrdev = &pdev->dev;456456- jrpriv = kmalloc(sizeof(struct caam_drv_private_jr),457457- GFP_KERNEL);456456+ jrpriv = devm_kmalloc(jrdev, sizeof(struct caam_drv_private_jr),457457+ GFP_KERNEL);458458 if (!jrpriv)459459 return -ENOMEM;460460···487487488488 /* Now do the platform independent part */489489 error = caam_jr_init(jrdev); /* now turn on hardware */490490- if (error) {491491- kfree(jrpriv);490490+ if (error)492491 return error;493493- }494492495493 jrpriv->dev = jrdev;496494 spin_lock(&driver_data.jr_alloc_lock);
···11menu "IEEE 1394 (FireWire) support"22+ depends on HAS_DMA23 depends on PCI || COMPILE_TEST34 # firewire-core does not depend on PCI but is45 # not useful without PCI controller driver
+15-7
drivers/firmware/efi/efi.c
···346346347347struct param_info {348348 int verbose;349349+ int found;349350 void *params;350351};351352···363362 (strcmp(uname, "chosen") != 0 && strcmp(uname, "chosen@0") != 0))364363 return 0;365364366366- pr_info("Getting parameters from FDT:\n");367367-368365 for (i = 0; i < ARRAY_SIZE(dt_params); i++) {369366 prop = of_get_flat_dt_prop(node, dt_params[i].propname, &len);370370- if (!prop) {371371- pr_err("Can't find %s in device tree!\n",372372- dt_params[i].name);367367+ if (!prop)373368 return 0;374374- }375369 dest = info->params + dt_params[i].offset;370370+ info->found++;376371377372 val = of_read_number(prop, len / sizeof(u32));378373···387390int __init efi_get_fdt_params(struct efi_fdt_params *params, int verbose)388391{389392 struct param_info info;393393+ int ret;394394+395395+ pr_info("Getting EFI parameters from FDT:\n");390396391397 info.verbose = verbose;398398+ info.found = 0;392399 info.params = params;393400394394- return of_scan_flat_dt(fdt_find_uefi_params, &info);401401+ ret = of_scan_flat_dt(fdt_find_uefi_params, &info);402402+ if (!info.found)403403+ pr_info("UEFI not found.\n");404404+ else if (!ret)405405+ pr_err("Can't find '%s' in device tree!\n",406406+ dt_params[info.found].name);407407+408408+ return ret;395409}396410#endif /* CONFIG_EFI_PARAMS_FROM_FDT */
-10
drivers/firmware/efi/fdt.c
···2323 u32 fdt_val32;2424 u64 fdt_val64;25252626- /*2727- * Copy definition of linux_banner here. Since this code is2828- * built as part of the decompressor for ARM v7, pulling2929- * in version.c where linux_banner is defined for the3030- * kernel brings other kernel dependencies with it.3131- */3232- const char linux_banner[] =3333- "Linux version " UTS_RELEASE " (" LINUX_COMPILE_BY "@"3434- LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION "\n";3535-3626 /* Do some checks on provided FDT, if it exists*/3727 if (orig_fdt) {3828 if (fdt_check_header(orig_fdt)) {
-6
drivers/gpio/gpio-mcp23s08.c
···900900 if (spi_present_mask & (1 << addr))901901 chips++;902902 }903903- if (!chips)904904- return -ENODEV;905903 } else {906904 type = spi_get_device_id(spi)->driver_data;907905 pdata = dev_get_platdata(&spi->dev);···938940 if (!(spi_present_mask & (1 << addr)))939941 continue;940942 chips--;941941- if (chips < 0) {942942- dev_err(&spi->dev, "FATAL: invalid negative chip id\n");943943- goto fail;944944- }945943 data->mcp[addr] = &data->chip[chips];946944 status = mcp23s08_probe_one(data->mcp[addr], &spi->dev, spi,947945 0x40 | (addr << 1), type, base,
+3-2
drivers/gpu/drm/i915/i915_dma.c
···14641464#else14651465static int i915_kick_out_vgacon(struct drm_i915_private *dev_priv)14661466{14671467- int ret;14671467+ int ret = 0;1468146814691469 DRM_INFO("Replacing VGA console driver\n");1470147014711471 console_lock();14721472- ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1);14721472+ if (con_is_bound(&vga_con))14731473+ ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1);14731474 if (ret == 0) {14741475 ret = do_unregister_con_driver(&vga_con);14751476
···111111112112 pipe_config->adjusted_mode.flags |= flags;113113114114+ /* gen2/3 store dither state in pfit control, needs to match */115115+ if (INTEL_INFO(dev)->gen < 4) {116116+ tmp = I915_READ(PFIT_CONTROL);117117+118118+ pipe_config->gmch_pfit.control |= tmp & PANEL_8TO6_DITHER_ENABLE;119119+ }120120+114121 dotclock = pipe_config->port_clock;115122116123 if (HAS_PCH_SPLIT(dev_priv->dev))
+9
drivers/gpu/drm/i915/intel_opregion.c
···403403404404 DRM_DEBUG_DRIVER("bclp = 0x%08x\n", bclp);405405406406+ /*407407+ * If the acpi_video interface is not supposed to be used, don't408408+ * bother processing backlight level change requests from firmware.409409+ */410410+ if (!acpi_video_verify_backlight_support()) {411411+ DRM_DEBUG_KMS("opregion backlight request ignored\n");412412+ return 0;413413+ }414414+406415 if (!(bclp & ASLE_BCLP_VALID))407416 return ASLC_BACKLIGHT_FAILED;408417
+10-6
drivers/gpu/drm/i915/intel_panel.c
···361361 pfit_control |= ((intel_crtc->pipe << PFIT_PIPE_SHIFT) |362362 PFIT_FILTER_FUZZY);363363364364- /* Make sure pre-965 set dither correctly for 18bpp panels. */365365- if (INTEL_INFO(dev)->gen < 4 && pipe_config->pipe_bpp == 18)366366- pfit_control |= PANEL_8TO6_DITHER_ENABLE;367367-368364out:369365 if ((pfit_control & PFIT_ENABLE) == 0) {370366 pfit_control = 0;371367 pfit_pgm_ratios = 0;372368 }369369+370370+ /* Make sure pre-965 set dither correctly for 18bpp panels. */371371+ if (INTEL_INFO(dev)->gen < 4 && pipe_config->pipe_bpp == 18)372372+ pfit_control |= PANEL_8TO6_DITHER_ENABLE;373373374374 pipe_config->gmch_pfit.control = pfit_control;375375 pipe_config->gmch_pfit.pgm_ratios = pfit_pgm_ratios;···11181118 int ret;1119111911201120 if (!dev_priv->vbt.backlight.present) {11211121- DRM_DEBUG_KMS("native backlight control not available per VBT\n");11221122- return 0;11211121+ if (dev_priv->quirks & QUIRK_BACKLIGHT_PRESENT) {11221122+ DRM_DEBUG_KMS("no backlight present per VBT, but present per quirk\n");11231123+ } else {11241124+ DRM_DEBUG_KMS("no backlight present per VBT\n");11251125+ return 0;11261126+ }11231127 }1124112811251129 /* set level and max in panel struct */
···192192 nouveau_therm_threshold_hyst_polling(therm, &sensor->thrs_shutdown,193193 NOUVEAU_THERM_THRS_SHUTDOWN);194194195195+ spin_unlock_irqrestore(&priv->sensor.alarm_program_lock, flags);196196+195197 /* schedule the next poll in one second */196198 if (therm->temp_get(therm) >= 0 && list_empty(&alarm->head))197197- ptimer->alarm(ptimer, 1000 * 1000 * 1000, alarm);198198-199199- spin_unlock_irqrestore(&priv->sensor.alarm_program_lock, flags);199199+ ptimer->alarm(ptimer, 1000000000ULL, alarm);200200}201201202202void
+9-8
drivers/gpu/drm/nouveau/nouveau_drm.c
···652652 ret = nouveau_do_resume(drm_dev);653653 if (ret)654654 return ret;655655- if (drm_dev->mode_config.num_crtc)656656- nouveau_fbcon_set_suspend(drm_dev, 0);657655658658- nouveau_fbcon_zfill_all(drm_dev);659659- if (drm_dev->mode_config.num_crtc)656656+ if (drm_dev->mode_config.num_crtc) {660657 nouveau_display_resume(drm_dev);658658+ nouveau_fbcon_set_suspend(drm_dev, 0);659659+ }660660+661661 return 0;662662}663663···683683 ret = nouveau_do_resume(drm_dev);684684 if (ret)685685 return ret;686686- if (drm_dev->mode_config.num_crtc)687687- nouveau_fbcon_set_suspend(drm_dev, 0);688688- nouveau_fbcon_zfill_all(drm_dev);689689- if (drm_dev->mode_config.num_crtc)686686+687687+ if (drm_dev->mode_config.num_crtc) {690688 nouveau_display_resume(drm_dev);689689+ nouveau_fbcon_set_suspend(drm_dev, 0);690690+ }691691+691692 return 0;692693}693694
+3-10
drivers/gpu/drm/nouveau/nouveau_fbcon.c
···531531 if (state == 1)532532 nouveau_fbcon_save_disable_accel(dev);533533 fb_set_suspend(drm->fbcon->helper.fbdev, state);534534- if (state == 0)534534+ if (state == 0) {535535 nouveau_fbcon_restore_accel(dev);536536+ nouveau_fbcon_zfill(dev, drm->fbcon);537537+ }536538 console_unlock();537537- }538538-}539539-540540-void541541-nouveau_fbcon_zfill_all(struct drm_device *dev)542542-{543543- struct nouveau_drm *drm = nouveau_drm(dev);544544- if (drm->fbcon) {545545- nouveau_fbcon_zfill(dev, drm->fbcon);546539 }547540}
···33333434 pending = xchg(&qdev->ram_header->int_pending, 0);35353636+ if (!pending)3737+ return IRQ_NONE;3838+3639 atomic_inc(&qdev->irq_received);37403841 if (pending & QXL_INTERRUPT_DISPLAY) {
+4-4
drivers/gpu/drm/radeon/atombios_crtc.c
···14141414 tmp &= ~EVERGREEN_GRPH_SURFACE_UPDATE_H_RETRACE_EN;14151415 WREG32(EVERGREEN_GRPH_FLIP_CONTROL + radeon_crtc->crtc_offset, tmp);1416141614171417- /* set pageflip to happen anywhere in vblank interval */14181418- WREG32(EVERGREEN_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 0);14171417+ /* set pageflip to happen only at start of vblank interval (front porch) */14181418+ WREG32(EVERGREEN_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 3);1419141914201420 if (!atomic && fb && fb != crtc->primary->fb) {14211421 radeon_fb = to_radeon_framebuffer(fb);···16141614 tmp &= ~AVIVO_D1GRPH_SURFACE_UPDATE_H_RETRACE_EN;16151615 WREG32(AVIVO_D1GRPH_FLIP_CONTROL + radeon_crtc->crtc_offset, tmp);1616161616171617- /* set pageflip to happen anywhere in vblank interval */16181618- WREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 0);16171617+ /* set pageflip to happen only at start of vblank interval (front porch) */16181618+ WREG32(AVIVO_D1MODE_MASTER_UPDATE_MODE + radeon_crtc->crtc_offset, 3);1619161916201620 if (!atomic && fb && fb != crtc->primary->fb) {16211621 radeon_fb = to_radeon_framebuffer(fb);
+1-1
drivers/gpu/drm/radeon/atombios_dp.c
···127127 /* flags not zero */128128 if (args.v1.ucReplyStatus == 2) {129129 DRM_DEBUG_KMS("dp_aux_ch flags not zero\n");130130- r = -EBUSY;130130+ r = -EIO;131131 goto done;132132 }133133
+7-3
drivers/gpu/drm/radeon/atombios_encoders.c
···183183 struct backlight_properties props;184184 struct radeon_backlight_privdata *pdata;185185 struct radeon_encoder_atom_dig *dig;186186- u8 backlight_level;187186 char bl_name[16];188187189188 /* Mac laptops with multiple GPUs use the gmux driver for backlight···221222222223 pdata->encoder = radeon_encoder;223224224224- backlight_level = radeon_atom_get_backlight_level_from_reg(rdev);225225-226225 dig = radeon_encoder->enc_priv;227226 dig->bl_dev = bd;228227229228 bd->props.brightness = radeon_atom_backlight_get_brightness(bd);229229+ /* Set a reasonable default here if the level is 0; otherwise230230+ * fbdev will attempt to turn the backlight on after console231231+ * unblanking and will try to restore 0, which turns the backlight232232+ * off again.233233+ */234234+ if (bd->props.brightness == 0)235235+ bd->props.brightness = RADEON_MAX_BL_LEVEL;230236 bd->props.power = FB_BLANK_UNBLANK;231237 backlight_update_status(bd);232238
···40404141config I2C_MUX_PCA954x4242 tristate "Philips PCA954x I2C Mux/switches"4343+ depends on GPIOLIB4344 help4445 If you say yes here you get support for the Philips PCA954x4546 I2C mux/switch devices.
+2-5
drivers/iio/accel/hid-sensor-accel-3d.c
···110110 struct accel_3d_state *accel_state = iio_priv(indio_dev);111111 int report_id = -1;112112 u32 address;113113- int ret;114113 int ret_type;115114 s32 poll_value;116115···150151 ret_type = IIO_VAL_INT;151152 break;152153 case IIO_CHAN_INFO_SAMP_FREQ:153153- ret = hid_sensor_read_samp_freq_value(154154+ ret_type = hid_sensor_read_samp_freq_value(154155 &accel_state->common_attributes, val, val2);155155- ret_type = IIO_VAL_INT_PLUS_MICRO;156156 break;157157 case IIO_CHAN_INFO_HYSTERESIS:158158- ret = hid_sensor_read_raw_hyst_value(158158+ ret_type = hid_sensor_read_raw_hyst_value(159159 &accel_state->common_attributes, val, val2);160160- ret_type = IIO_VAL_INT_PLUS_MICRO;161160 break;162161 default:163162 ret_type = -EINVAL;
+7-1
drivers/iio/accel/mma8452.c
···111111 {6, 250000}, {1, 560000}112112};113113114114+/*115115+ * Hardware has a full scale of -2G, -4G or -8G corresponding to raw value -2048.116116+ * The userspace interface uses m/s^2 and we declare micro units,117117+ * so the scale factor is given by:118118+ * g * N * 1000000 / 2048 for N = 2, 4, 8 and g = 9.80665119119+ */114120static const int mma8452_scales[3][2] = {115115- {0, 977}, {0, 1953}, {0, 3906}121121+ {0, 9577}, {0, 19154}, {0, 38307}116122};117123118124static ssize_t mma8452_show_samp_freq_avail(struct device *dev,
+1-1
drivers/iio/adc/ti_am335x_adc.c
···374374 return -EAGAIN;375375 }376376 }377377- map_val = chan->channel + TOTAL_CHANNELS;377377+ map_val = adc_dev->channel_step[chan->scan_index];378378379379 /*380380 * We check the complete FIFO. We programmed just one entry but in case
+2-5
drivers/iio/gyro/hid-sensor-gyro-3d.c
···110110 struct gyro_3d_state *gyro_state = iio_priv(indio_dev);111111 int report_id = -1;112112 u32 address;113113- int ret;114113 int ret_type;115114 s32 poll_value;116115···150151 ret_type = IIO_VAL_INT;151152 break;152153 case IIO_CHAN_INFO_SAMP_FREQ:153153- ret = hid_sensor_read_samp_freq_value(154154+ ret_type = hid_sensor_read_samp_freq_value(154155 &gyro_state->common_attributes, val, val2);155155- ret_type = IIO_VAL_INT_PLUS_MICRO;156156 break;157157 case IIO_CHAN_INFO_HYSTERESIS:158158- ret = hid_sensor_read_raw_hyst_value(158158+ ret_type = hid_sensor_read_raw_hyst_value(159159 &gyro_state->common_attributes, val, val2);160160- ret_type = IIO_VAL_INT_PLUS_MICRO;161160 break;162161 default:163162 ret_type = -EINVAL;
+3
drivers/iio/industrialio-event.c
···345345 &indio_dev->event_interface->dev_attr_list);346346 kfree(postfix);347347348348+ if ((ret == -EBUSY) && (shared_by != IIO_SEPARATE))349349+ continue;350350+348351 if (ret)349352 return ret;350353
+2-5
drivers/iio/light/hid-sensor-als.c
···7979 struct als_state *als_state = iio_priv(indio_dev);8080 int report_id = -1;8181 u32 address;8282- int ret;8382 int ret_type;8483 s32 poll_value;8584···128129 ret_type = IIO_VAL_INT;129130 break;130131 case IIO_CHAN_INFO_SAMP_FREQ:131131- ret = hid_sensor_read_samp_freq_value(132132+ ret_type = hid_sensor_read_samp_freq_value(132133 &als_state->common_attributes, val, val2);133133- ret_type = IIO_VAL_INT_PLUS_MICRO;134134 break;135135 case IIO_CHAN_INFO_HYSTERESIS:136136- ret = hid_sensor_read_raw_hyst_value(136136+ ret_type = hid_sensor_read_raw_hyst_value(137137 &als_state->common_attributes, val, val2);138138- ret_type = IIO_VAL_INT_PLUS_MICRO;139138 break;140139 default:141140 ret_type = -EINVAL;
+2-5
drivers/iio/light/hid-sensor-prox.c
···7474 struct prox_state *prox_state = iio_priv(indio_dev);7575 int report_id = -1;7676 u32 address;7777- int ret;7877 int ret_type;7978 s32 poll_value;8079···124125 ret_type = IIO_VAL_INT;125126 break;126127 case IIO_CHAN_INFO_SAMP_FREQ:127127- ret = hid_sensor_read_samp_freq_value(128128+ ret_type = hid_sensor_read_samp_freq_value(128129 &prox_state->common_attributes, val, val2);129129- ret_type = IIO_VAL_INT_PLUS_MICRO;130130 break;131131 case IIO_CHAN_INFO_HYSTERESIS:132132- ret = hid_sensor_read_raw_hyst_value(132132+ ret_type = hid_sensor_read_raw_hyst_value(133133 &prox_state->common_attributes, val, val2);134134- ret_type = IIO_VAL_INT_PLUS_MICRO;135134 break;136135 default:137136 ret_type = -EINVAL;
+10-1
drivers/iio/light/tcs3472.c
···52525353struct tcs3472_data {5454 struct i2c_client *client;5555+ struct mutex lock;5556 u8 enable;5657 u8 control;5758 u8 atime;···117116118117 switch (mask) {119118 case IIO_CHAN_INFO_RAW:119119+ if (iio_buffer_enabled(indio_dev))120120+ return -EBUSY;121121+122122+ mutex_lock(&data->lock);120123 ret = tcs3472_req_data(data);121121- if (ret < 0)124124+ if (ret < 0) {125125+ mutex_unlock(&data->lock);122126 return ret;127127+ }123128 ret = i2c_smbus_read_word_data(data->client, chan->address);129129+ mutex_unlock(&data->lock);124130 if (ret < 0)125131 return ret;126132 *val = ret;···263255 data = iio_priv(indio_dev);264256 i2c_set_clientdata(client, indio_dev);265257 data->client = client;258258+ mutex_init(&data->lock);266259267260 indio_dev->dev.parent = &client->dev;268261 indio_dev->info = &tcs3472_info;
+2-5
drivers/iio/magnetometer/hid-sensor-magn-3d.c
···110110 struct magn_3d_state *magn_state = iio_priv(indio_dev);111111 int report_id = -1;112112 u32 address;113113- int ret;114113 int ret_type;115114 s32 poll_value;116115···152153 ret_type = IIO_VAL_INT;153154 break;154155 case IIO_CHAN_INFO_SAMP_FREQ:155155- ret = hid_sensor_read_samp_freq_value(156156+ ret_type = hid_sensor_read_samp_freq_value(156157 &magn_state->common_attributes, val, val2);157157- ret_type = IIO_VAL_INT_PLUS_MICRO;158158 break;159159 case IIO_CHAN_INFO_HYSTERESIS:160160- ret = hid_sensor_read_raw_hyst_value(160160+ ret_type = hid_sensor_read_raw_hyst_value(161161 &magn_state->common_attributes, val, val2);162162- ret_type = IIO_VAL_INT_PLUS_MICRO;163162 break;164163 default:165164 ret_type = -EINVAL;
+2-5
drivers/iio/pressure/hid-sensor-press.c
···7878 struct press_state *press_state = iio_priv(indio_dev);7979 int report_id = -1;8080 u32 address;8181- int ret;8281 int ret_type;8382 s32 poll_value;8483···127128 ret_type = IIO_VAL_INT;128129 break;129130 case IIO_CHAN_INFO_SAMP_FREQ:130130- ret = hid_sensor_read_samp_freq_value(131131+ ret_type = hid_sensor_read_samp_freq_value(131132 &press_state->common_attributes, val, val2);132132- ret_type = IIO_VAL_INT_PLUS_MICRO;133133 break;134134 case IIO_CHAN_INFO_HYSTERESIS:135135- ret = hid_sensor_read_raw_hyst_value(135135+ ret_type = hid_sensor_read_raw_hyst_value(136136 &press_state->common_attributes, val, val2);137137- ret_type = IIO_VAL_INT_PLUS_MICRO;138137 break;139138 default:140139 ret_type = -EINVAL;
···675675 int err;676676677677 uuari = &dev->mdev.priv.uuari;678678- if (init_attr->create_flags & ~IB_QP_CREATE_SIGNATURE_EN)678678+ if (init_attr->create_flags & ~(IB_QP_CREATE_SIGNATURE_EN | IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK))679679 return -EINVAL;680680681681 if (init_attr->qp_type == MLX5_IB_QPT_REG_UMR)
+4-4
drivers/iommu/fsl_pamu.c
···170170static unsigned int map_addrspace_size_to_wse(phys_addr_t addrspace_size)171171{172172 /* Bug if not a power of 2 */173173- BUG_ON(!is_power_of_2(addrspace_size));173173+ BUG_ON((addrspace_size & (addrspace_size - 1)));174174175175 /* window size is 2^(WSE+1) bytes */176176- return __ffs(addrspace_size) - 1;176176+ return fls64(addrspace_size) - 2;177177}178178179179/* Derive the PAACE window count encoding for the subwindow count */···351351 struct paace *ppaace;352352 unsigned long fspi;353353354354- if (!is_power_of_2(win_size) || win_size < PAMU_PAGE_SIZE) {354354+ if ((win_size & (win_size - 1)) || win_size < PAMU_PAGE_SIZE) {355355 pr_debug("window size too small or not a power of two %llx\n", win_size);356356 return -EINVAL;357357 }···464464 return -ENOENT;465465 }466466467467- if (!is_power_of_2(subwin_size) || subwin_size < PAMU_PAGE_SIZE) {467467+ if ((subwin_size & (subwin_size - 1)) || subwin_size < PAMU_PAGE_SIZE) {468468 pr_debug("subwindow size out of range, or not a power of 2\n");469469 return -EINVAL;470470 }
+8-10
drivers/iommu/fsl_pamu_domain.c
···301301 * Size must be a power of two and at least be equal302302 * to PAMU page size.303303 */304304- if (!is_power_of_2(size) || size < PAMU_PAGE_SIZE) {304304+ if ((size & (size - 1)) || size < PAMU_PAGE_SIZE) {305305 pr_debug("%s: size too small or not a power of two\n", __func__);306306 return -EINVAL;307307 }···333333 spin_lock_init(&domain->domain_lock);334334335335 return domain;336336-}337337-338338-static inline struct device_domain_info *find_domain(struct device *dev)339339-{340340- return dev->archdata.iommu_domain;341336}342337343338static void remove_device_ref(struct device_domain_info *info, u32 win_cnt)···375380 * Check here if the device is already attached to domain or not.376381 * If the device is already attached to a domain detach it.377382 */378378- old_domain_info = find_domain(dev);383383+ old_domain_info = dev->archdata.iommu_domain;379384 if (old_domain_info && old_domain_info->domain != dma_domain) {380385 spin_unlock_irqrestore(&device_domain_lock, flags);381386 detach_device(dev, old_domain_info->domain);···394399 * the info for the first LIODN as all395400 * LIODNs share the same domain396401 */397397- if (!old_domain_info)402402+ if (!dev->archdata.iommu_domain)398403 dev->archdata.iommu_domain = info;399404 spin_unlock_irqrestore(&device_domain_lock, flags);400405···10371042 group = get_shared_pci_device_group(pdev);10381043 }1039104410451045+ if (!group)10461046+ group = ERR_PTR(-ENODEV);10471047+10401048 return group;10411049}1042105010431051static int fsl_pamu_add_device(struct device *dev)10441052{10451045- struct iommu_group *group = NULL;10531053+ struct iommu_group *group = ERR_PTR(-ENODEV);10461054 struct pci_dev *pdev;10471055 const u32 *prop;10481056 int ret, len;···10681070 group = get_device_iommu_group(dev);10691071 }1070107210711071- if (!group || IS_ERR(group))10731073+ if (IS_ERR(group))10721074 return PTR_ERR(group);1073107510741076 ret = iommu_group_add_device(group, dev);
···20592059 memcpy(p, ic->parm.ni1_io.data, ic->parm.ni1_io.datalen); /* copy data */20602060 l = (p - temp) + ic->parm.ni1_io.datalen; /* total length */2061206120622062- if (ic->parm.ni1_io.timeout > 0)20632063- if (!(pc = ni1_new_l3_process(st, -1)))20642064- { free_invoke_id(st, id);20622062+ if (ic->parm.ni1_io.timeout > 0) {20632063+ pc = ni1_new_l3_process(st, -1);20642064+ if (!pc) {20652065+ free_invoke_id(st, id);20652066 return (-2);20662067 }20672067- pc->prot.ni1.ll_id = ic->parm.ni1_io.ll_id; /* remember id */20682068- pc->prot.ni1.proc = ic->parm.ni1_io.proc; /* and procedure */20682068+ /* remember id */20692069+ pc->prot.ni1.ll_id = ic->parm.ni1_io.ll_id;20702070+ /* and procedure */20712071+ pc->prot.ni1.proc = ic->parm.ni1_io.proc;20722072+ }2069207320702074 if (!(skb = l3_alloc_skb(l)))20712075 { free_invoke_id(st, id);
+1-7
drivers/isdn/i4l/isdn_ppp.c
···442442{443443 struct sock_fprog uprog;444444 struct sock_filter *code = NULL;445445- int len, err;445445+ int len;446446447447 if (copy_from_user(&uprog, arg, sizeof(uprog)))448448 return -EFAULT;···457457 code = memdup_user(uprog.filter, len);458458 if (IS_ERR(code))459459 return PTR_ERR(code);460460-461461- err = sk_chk_filter(code, uprog.len);462462- if (err) {463463- kfree(code);464464- return err;465465- }466460467461 *p = code;468462 return uprog.len;
+9
drivers/md/dm-cache-metadata.c
···425425426426 disk_super = dm_block_data(sblock);427427428428+ /* Verify the data block size hasn't changed */429429+ if (le32_to_cpu(disk_super->data_block_size) != cmd->data_block_size) {430430+ DMERR("changing the data block size (from %u to %llu) is not supported",431431+ le32_to_cpu(disk_super->data_block_size),432432+ (unsigned long long)cmd->data_block_size);433433+ r = -EINVAL;434434+ goto bad;435435+ }436436+428437 r = __check_incompat_features(disk_super, cmd);429438 if (r < 0)430439 goto bad;
+2-2
drivers/md/dm-crypt.c
···11/*22- * Copyright (C) 2003 Christophe Saout <christophe@saout.de>22+ * Copyright (C) 2003 Jana Saout <jana@saout.de>33 * Copyright (C) 2004 Clemens Fruhwirth <clemens@endorphin.org>44 * Copyright (C) 2006-2009 Red Hat, Inc. All rights reserved.55 * Copyright (C) 2013 Milan Broz <gmazyland@gmail.com>···19961996module_init(dm_crypt_init);19971997module_exit(dm_crypt_exit);1998199819991999-MODULE_AUTHOR("Christophe Saout <christophe@saout.de>");19991999+MODULE_AUTHOR("Jana Saout <jana@saout.de>");20002000MODULE_DESCRIPTION(DM_NAME " target for transparent encryption / decryption");20012001MODULE_LICENSE("GPL");
···1611161116121612 spin_lock_irqsave(&m->lock, flags);1613161316141614- /* pg_init in progress, requeue until done */16151615- if (!pg_ready(m)) {16141614+ /* pg_init in progress or no paths available */16151615+ if (m->pg_init_in_progress ||16161616+ (!m->nr_valid_paths && m->queue_if_no_path)) {16161617 busy = 1;16171618 goto out;16181619 }
+9
drivers/md/dm-thin-metadata.c
···613613614614 disk_super = dm_block_data(sblock);615615616616+ /* Verify the data block size hasn't changed */617617+ if (le32_to_cpu(disk_super->data_block_size) != pmd->data_block_size) {618618+ DMERR("changing the data block size (from %u to %llu) is not supported",619619+ le32_to_cpu(disk_super->data_block_size),620620+ (unsigned long long)pmd->data_block_size);621621+ r = -EINVAL;622622+ goto bad_unlock_sblock;623623+ }624624+616625 r = __check_incompat_features(disk_super, pmd);617626 if (r < 0)618627 goto bad_unlock_sblock;
+2-2
drivers/md/dm-zero.c
···11/*22- * Copyright (C) 2003 Christophe Saout <christophe@saout.de>22+ * Copyright (C) 2003 Jana Saout <jana@saout.de>33 *44 * This file is released under the GPL.55 */···7979module_init(dm_zero_init)8080module_exit(dm_zero_exit)81818282-MODULE_AUTHOR("Christophe Saout <christophe@saout.de>");8282+MODULE_AUTHOR("Jana Saout <jana@saout.de>");8383MODULE_DESCRIPTION(DM_NAME " dummy target returning zeros");8484MODULE_LICENSE("GPL");
+13-2
drivers/md/dm.c
···54545555static DECLARE_WORK(deferred_remove_work, do_deferred_remove);56565757+static struct workqueue_struct *deferred_remove_workqueue;5858+5759/*5860 * For bio-based dm.5961 * One of these is allocated per bio.···278276 if (r)279277 goto out_free_rq_tio_cache;280278279279+ deferred_remove_workqueue = alloc_workqueue("kdmremove", WQ_UNBOUND, 1);280280+ if (!deferred_remove_workqueue) {281281+ r = -ENOMEM;282282+ goto out_uevent_exit;283283+ }284284+281285 _major = major;282286 r = register_blkdev(_major, _name);283287 if (r < 0)284284- goto out_uevent_exit;288288+ goto out_free_workqueue;285289286290 if (!_major)287291 _major = r;288292289293 return 0;290294295295+out_free_workqueue:296296+ destroy_workqueue(deferred_remove_workqueue);291297out_uevent_exit:292298 dm_uevent_exit();293299out_free_rq_tio_cache:···309299static void local_exit(void)310300{311301 flush_scheduled_work();302302+ destroy_workqueue(deferred_remove_workqueue);312303313304 kmem_cache_destroy(_rq_tio_cache);314305 kmem_cache_destroy(_io_cache);···418407419408 if (atomic_dec_and_test(&md->open_count) &&420409 (test_bit(DMF_DEFERRED_REMOVE, &md->flags)))421421- schedule_work(&deferred_remove_work);410410+ queue_work(deferred_remove_workqueue, &deferred_remove_work);422411423412 dm_put(md);424413
+43
drivers/mtd/chips/cfi_cmdset_0001.c
···5252/* Atmel chips */5353#define AT49BV640D 0x02de5454#define AT49BV640DT 0x02db5555+/* Sharp chips */5656+#define LH28F640BFHE_PTTL90 0x00b05757+#define LH28F640BFHE_PBTL90 0x00b15858+#define LH28F640BFHE_PTTL70A 0x00b25959+#define LH28F640BFHE_PBTL70A 0x00b355605661static int cfi_intelext_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *);5762static int cfi_intelext_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *);···263258 (cfi->cfiq->EraseRegionInfo[1] & 0xffff0000) | 0x3e;264259};265260261261+static int is_LH28F640BF(struct cfi_private *cfi)262262+{263263+ /* Sharp LH28F640BF Family */264264+ if (cfi->mfr == CFI_MFR_SHARP && (265265+ cfi->id == LH28F640BFHE_PTTL90 || cfi->id == LH28F640BFHE_PBTL90 ||266266+ cfi->id == LH28F640BFHE_PTTL70A || cfi->id == LH28F640BFHE_PBTL70A))267267+ return 1;268268+ return 0;269269+}270270+271271+static void fixup_LH28F640BF(struct mtd_info *mtd)272272+{273273+ struct map_info *map = mtd->priv;274274+ struct cfi_private *cfi = map->fldrv_priv;275275+ struct cfi_pri_intelext *extp = cfi->cmdset_priv;276276+277277+ /* Reset the Partition Configuration Register on LH28F640BF278278+ * to a single partition (PCR = 0x000): PCR is embedded into A0-A15. */279279+ if (is_LH28F640BF(cfi)) {280280+ printk(KERN_INFO "Reset Partition Config. 
Register: 1 Partition of 4 planes\n");281281+ map_write(map, CMD(0x60), 0);282282+ map_write(map, CMD(0x04), 0);283283+284284+ /* We have set one single partition thus285285+ * Simultaneous Operations are not allowed */286286+ printk(KERN_INFO "cfi_cmdset_0001: Simultaneous Operations disabled\n");287287+ extp->FeatureSupport &= ~512;288288+ }289289+}290290+266291static void fixup_use_point(struct mtd_info *mtd)267292{268293 struct map_info *map = mtd->priv;···344309 { CFI_MFR_ST, 0x00ba, /* M28W320CT */ fixup_st_m28w320ct },345310 { CFI_MFR_ST, 0x00bb, /* M28W320CB */ fixup_st_m28w320cb },346311 { CFI_MFR_INTEL, CFI_ID_ANY, fixup_unlock_powerup_lock },312312+ { CFI_MFR_SHARP, CFI_ID_ANY, fixup_unlock_powerup_lock },313313+ { CFI_MFR_SHARP, CFI_ID_ANY, fixup_LH28F640BF },347314 { 0, 0, NULL }348315};349316···16851648 adr += chip->start;16861649 initial_adr = adr;16871650 cmd_adr = adr & ~(wbufsize-1);16511651+16521652+ /* Sharp LH28F640BF chips need the first address for the16531653+ * Page Buffer Program command. See Table 5 of16541654+ * LH28F320BF, LH28F640BF, LH28F128BF Series (Appendix FUM00701) */16551655+ if (is_LH28F640BF(cfi))16561656+ cmd_adr = adr;1688165716891658 /* Let's determine this according to the interleave only once */16901659 write_cmd = (cfi->cfiq->P_ID != P_ID_INTEL_PERFORMANCE) ? CMD(0xe8) : CMD(0xe9);
···40474047 ecc->layout->oobavail += ecc->layout->oobfree[i].length;40484048 mtd->oobavail = ecc->layout->oobavail;4049404940504050- /* ECC sanity check: warn noisily if it's too weak */40514051- WARN_ON(!nand_ecc_strength_good(mtd));40504050+ /* ECC sanity check: warn if it's too weak */40514051+ if (!nand_ecc_strength_good(mtd))40524052+ pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n",40534053+ mtd->name);4052405440534055 /*40544056 * Set the number of read / write steps for one page depending on ECC
+2-2
drivers/mtd/ubi/fastmap.c
···125125 parent = *p;126126 av = rb_entry(parent, struct ubi_ainf_volume, rb);127127128128- if (vol_id < av->vol_id)128128+ if (vol_id > av->vol_id)129129 p = &(*p)->rb_left;130130 else131131 p = &(*p)->rb_right;···423423 pnum, err);424424 ret = err > 0 ? UBI_BAD_FASTMAP : err;425425 goto out;426426- } else if (ret == UBI_IO_BITFLIPS)426426+ } else if (err == UBI_IO_BITFLIPS)427427 scrub = 1;428428429429 /*
+1-1
drivers/net/bonding/bond_main.c
···40684068 }4069406940704070 if (ad_select) {40714071- bond_opt_initstr(&newval, lacp_rate);40714071+ bond_opt_initstr(&newval, ad_select);40724072 valptr = bond_opt_parse(bond_opt_get(BOND_OPT_AD_SELECT),40734073 &newval);40744074 if (!valptr) {
+11-32
drivers/net/ethernet/broadcom/bcmsysport.c
···654654655655 work_done = bcm_sysport_tx_reclaim(ring->priv, ring);656656657657- if (work_done < budget) {657657+ if (work_done == 0) {658658 napi_complete(napi);659659 /* re-enable TX interrupt */660660 intrl2_1_mask_clear(ring->priv, BIT(ring->index));661661 }662662663663- return work_done;663663+ return 0;664664}665665666666static void bcm_sysport_tx_reclaim_all(struct bcm_sysport_priv *priv)···12541254 usleep_range(1000, 2000);12551255}1256125612571257-static inline int umac_reset(struct bcm_sysport_priv *priv)12571257+static inline void umac_reset(struct bcm_sysport_priv *priv)12581258{12591259- unsigned int timeout = 0;12601259 u32 reg;12611261- int ret = 0;1262126012631263- umac_writel(priv, 0, UMAC_CMD);12641264- while (timeout++ < 1000) {12651265- reg = umac_readl(priv, UMAC_CMD);12661266- if (!(reg & CMD_SW_RESET))12671267- break;12681268-12691269- udelay(1);12701270- }12711271-12721272- if (timeout == 1000) {12731273- dev_err(&priv->pdev->dev,12741274- "timeout waiting for MAC to come out of reset\n");12751275- ret = -ETIMEDOUT;12761276- }12771277-12781278- return ret;12611261+ reg = umac_readl(priv, UMAC_CMD);12621262+ reg |= CMD_SW_RESET;12631263+ umac_writel(priv, reg, UMAC_CMD);12641264+ udelay(10);12651265+ reg = umac_readl(priv, UMAC_CMD);12661266+ reg &= ~CMD_SW_RESET;12671267+ umac_writel(priv, reg, UMAC_CMD);12791268}1280126912811270static void umac_set_hw_addr(struct bcm_sysport_priv *priv,···12921303 int ret;1293130412941305 /* Reset UniMAC */12951295- ret = umac_reset(priv);12961296- if (ret) {12971297- netdev_err(dev, "UniMAC reset failed\n");12981298- return ret;12991299- }13061306+ umac_reset(priv);1300130713011308 /* Flush TX and RX FIFOs at TOPCTRL level */13021309 topctrl_flush(priv);···15731588 /* Set the needed headroom once and for all */15741589 BUILD_BUG_ON(sizeof(struct bcm_tsb) != 8);15751590 dev->needed_headroom += sizeof(struct bcm_tsb);15761576-15771577- /* We are interfaced to a switch which handles the multicast15781578- * 
filtering for us, so we do not support programming any15791579- * multicast hash table in this Ethernet MAC.15801580- */15811581- dev->flags &= ~IFF_MULTICAST;1582159115831592 /* libphy will adjust the link state accordingly */15841593 netif_carrier_off(dev);
+2-1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
···797797798798 return;799799 }800800- bnx2x_frag_free(fp, new_data);800800+ if (new_data)801801+ bnx2x_frag_free(fp, new_data);801802drop:802803 /* drop the packet and keep the buffer in the bin */803804 DP(NETIF_MSG_RX_STATUS,
+1-1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···1293712937 * without the default SB.1293812938 * For VFs there is no default SB, then we return (index+1).1293912939 */1294012940- pci_read_config_word(pdev, pdev->msix_cap + PCI_MSI_FLAGS, &control);1294012940+ pci_read_config_word(pdev, pdev->msix_cap + PCI_MSIX_FLAGS, &control);12941129411294212942 index = control & PCI_MSIX_FLAGS_QSIZE;1294312943
+6-10
drivers/net/ethernet/broadcom/genet/bcmgenet.c
···14081408 if (cb->skb)14091409 continue;1410141014111411- /* set the DMA descriptor length once and for all14121412- * it will only change if we support dynamically sizing14131413- * priv->rx_buf_len, but we do not14141414- */14151415- dmadesc_set_length_status(priv, priv->rx_bd_assign_ptr,14161416- priv->rx_buf_len << DMA_BUFLENGTH_SHIFT);14171417-14181411 ret = bcmgenet_rx_refill(priv, cb);14191412 if (ret)14201413 break;···25282535 netif_set_real_num_tx_queues(priv->dev, priv->hw_params->tx_queues + 1);25292536 netif_set_real_num_rx_queues(priv->dev, priv->hw_params->rx_queues + 1);2530253725312531- err = register_netdev(dev);25322532- if (err)25332533- goto err_clk_disable;25382538+ /* libphy will determine the link state */25392539+ netif_carrier_off(dev);2534254025352541 /* Turn off the main clock, WOL clock is handled separately */25362542 if (!IS_ERR(priv->clk))25372543 clk_disable_unprepare(priv->clk);25442544+25452545+ err = register_netdev(dev);25462546+ if (err)25472547+ goto err;2538254825392549 return err;25402550
···4040#include <linux/if_ether.h>4141#include <linux/if_vlan.h>4242#include <linux/vmalloc.h>4343+#include <linux/irq.h>43444445#include "mlx4_en.h"4546···783782 PKT_HASH_TYPE_L3);784783785784 skb_record_rx_queue(gro_skb, cq->ring);785785+ skb_mark_napi_id(gro_skb, &cq->napi);786786787787 if (ring->hwtstamp_rx_filter == HWTSTAMP_FILTER_ALL) {788788 timestamp = mlx4_en_get_cqe_ts(cqe);···898896899897 /* If we used up all the quota - we're probably not done yet... */900898 if (done == budget) {899899+ int cpu_curr;900900+ const struct cpumask *aff;901901+901902 INC_PERF_COUNTER(priv->pstats.napi_quota);902902- if (unlikely(cq->mcq.irq_affinity_change)) {903903- cq->mcq.irq_affinity_change = false;903903+904904+ cpu_curr = smp_processor_id();905905+ aff = irq_desc_get_irq_data(cq->irq_desc)->affinity;906906+907907+ if (unlikely(!cpumask_test_cpu(cpu_curr, aff))) {908908+ /* Current cpu is not according to smp_irq_affinity -909909+ * probably affinity changed. need to stop this NAPI910910+ * poll, and restart it on the right CPU911911+ */904912 napi_complete(napi);905913 mlx4_en_arm_cq(priv, cq);906914 return 0;907915 }908916 } else {909917 /* Done for now */910910- cq->mcq.irq_affinity_change = false;911918 napi_complete(napi);912919 mlx4_en_arm_cq(priv, cq);913920 }
+13-21
drivers/net/ethernet/mellanox/mlx4/en_tx.c
···351351 return cnt;352352}353353354354-static int mlx4_en_process_tx_cq(struct net_device *dev,355355- struct mlx4_en_cq *cq,356356- int budget)354354+static bool mlx4_en_process_tx_cq(struct net_device *dev,355355+ struct mlx4_en_cq *cq)357356{358357 struct mlx4_en_priv *priv = netdev_priv(dev);359358 struct mlx4_cq *mcq = &cq->mcq;···371372 int factor = priv->cqe_factor;372373 u64 timestamp = 0;373374 int done = 0;375375+ int budget = priv->tx_work_limit;374376375377 if (!priv->port_up)376376- return 0;378378+ return true;377379378380 index = cons_index & size_mask;379381 cqe = &buf[(index << factor) + factor];···447447 netif_tx_wake_queue(ring->tx_queue);448448 ring->wake_queue++;449449 }450450- return done;450450+ return done < budget;451451}452452453453void mlx4_en_tx_irq(struct mlx4_cq *mcq)···467467 struct mlx4_en_cq *cq = container_of(napi, struct mlx4_en_cq, napi);468468 struct net_device *dev = cq->dev;469469 struct mlx4_en_priv *priv = netdev_priv(dev);470470- int done;470470+ int clean_complete;471471472472- done = mlx4_en_process_tx_cq(dev, cq, budget);472472+ clean_complete = mlx4_en_process_tx_cq(dev, cq);473473+ if (!clean_complete)474474+ return budget;473475474474- /* If we used up all the quota - we're probably not done yet... */475475- if (done < budget) {476476- /* Done for now */477477- cq->mcq.irq_affinity_change = false;478478- napi_complete(napi);479479- mlx4_en_arm_cq(priv, cq);480480- return done;481481- } else if (unlikely(cq->mcq.irq_affinity_change)) {482482- cq->mcq.irq_affinity_change = false;483483- napi_complete(napi);484484- mlx4_en_arm_cq(priv, cq);485485- return 0;486486- }487487- return budget;476476+ napi_complete(napi);477477+ mlx4_en_arm_cq(priv, cq);478478+479479+ return 0;488480}489481490482static struct mlx4_en_tx_desc *mlx4_en_bounce_to_desc(struct mlx4_en_priv *priv,
···187187 return d ? to_mii_bus(d) : NULL;188188}189189EXPORT_SYMBOL(of_mdio_find_bus);190190+191191+/* Walk the list of subnodes of a mdio bus and look for a node that matches the192192+ * phy's address with its 'reg' property. If found, set the of_node pointer for193193+ * the phy. This allows auto-probed phy devices to be supplied with information194194+ * passed in via DT.195195+ */196196+static void of_mdiobus_link_phydev(struct mii_bus *mdio,197197+ struct phy_device *phydev)198198+{199199+ struct device *dev = &phydev->dev;200200+ struct device_node *child;201201+202202+ if (dev->of_node || !mdio->dev.of_node)203203+ return;204204+205205+ for_each_available_child_of_node(mdio->dev.of_node, child) {206206+ int addr;207207+ int ret;208208+209209+ ret = of_property_read_u32(child, "reg", &addr);210210+ if (ret < 0) {211211+ dev_err(dev, "%s has invalid PHY address\n",212212+ child->full_name);213213+ continue;214214+ }215215+216216+ /* A PHY must have a reg property in the range [0-31] */217217+ if (addr >= PHY_MAX_ADDR) {218218+ dev_err(dev, "%s PHY address %i is too large\n",219219+ child->full_name, addr);220220+ continue;221221+ }222222+223223+ if (addr == phydev->addr) {224224+ dev->of_node = child;225225+ return;226226+ }227227+ }228228+}229229+#else /* !IS_ENABLED(CONFIG_OF_MDIO) */230230+static inline void of_mdiobus_link_phydev(struct mii_bus *mdio,231231+ struct phy_device *phydev)232232+{233233+}190234#endif191235192236/**
+1-7
drivers/net/ppp/ppp_generic.c
···539539{540540 struct sock_fprog uprog;541541 struct sock_filter *code = NULL;542542- int len, err;542542+ int len;543543544544 if (copy_from_user(&uprog, arg, sizeof(uprog)))545545 return -EFAULT;···553553 code = memdup_user(uprog.filter, len);554554 if (IS_ERR(code))555555 return PTR_ERR(code);556556-557557- err = sk_chk_filter(code, uprog.len);558558- if (err) {559559- kfree(code);560560- return err;561561- }562556563557 *p = code;564558 return uprog.len;
···312312 int msdu_len, msdu_chaining = 0;313313 struct sk_buff *msdu;314314 struct htt_rx_desc *rx_desc;315315- bool corrupted = false;316315317316 lockdep_assert_held(&htt->rx_ring.lock);318317···438439 last_msdu = __le32_to_cpu(rx_desc->msdu_end.info0) &439440 RX_MSDU_END_INFO0_LAST_MSDU;440441441441- if (msdu_chaining && !last_msdu)442442- corrupted = true;443443-444442 if (last_msdu) {445443 msdu->next = NULL;446444 break;···451455452456 if (*head_msdu == NULL)453457 msdu_chaining = -1;454454-455455- /*456456- * Apparently FW sometimes reports weird chained MSDU sequences with457457- * more than one rx descriptor. This seems like a bug but needs more458458- * analyzing. For the time being fix it by dropping such sequences to459459- * avoid blowing up the host system.460460- */461461- if (corrupted) {462462- ath10k_warn("failed to pop chained msdus, dropping\n");463463- ath10k_htt_rx_free_msdu_chain(*head_msdu);464464- *head_msdu = NULL;465465- *tail_msdu = NULL;466466- msdu_chaining = -EINVAL;467467- }468458469459 /*470460 * Don't refill the ring yet.
···10681068 /* recalculate basic rates */10691069 iwl_calc_basic_rates(priv, ctx);1070107010711071- /*10721072- * force CTS-to-self frames protection if RTS-CTS is not preferred10731073- * one aggregation protection method10741074- */10751075- if (!priv->hw_params.use_rts_for_aggregation)10761076- ctx->staging.flags |= RXON_FLG_SELF_CTS_EN;10771077-10781071 if ((ctx->vif && ctx->vif->bss_conf.use_short_slot) ||10791072 !(ctx->staging.flags & RXON_FLG_BAND_24G_MSK))10801073 ctx->staging.flags |= RXON_FLG_SHORT_SLOT_MSK;···14721479 ctx->staging.flags |= RXON_FLG_TGG_PROTECT_MSK;14731480 else14741481 ctx->staging.flags &= ~RXON_FLG_TGG_PROTECT_MSK;14751475-14761476- if (bss_conf->use_cts_prot)14771477- ctx->staging.flags |= RXON_FLG_SELF_CTS_EN;14781478- else14791479- ctx->staging.flags &= ~RXON_FLG_SELF_CTS_EN;1480148214811483 memcpy(ctx->staging.bssid_addr, bss_conf->bssid, ETH_ALEN);14821484
+1
drivers/net/wireless/iwlwifi/iwl-fw.h
···8888 * P2P client interfaces simultaneously if they are in different bindings.8989 * @IWL_UCODE_TLV_FLAGS_P2P_BSS_PS_SCM: support power save on BSS station and9090 * P2P client interfaces simultaneously if they are in same bindings.9191+ * @IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT: General support for uAPSD9192 * @IWL_UCODE_TLV_FLAGS_P2P_PS_UAPSD: P2P client supports uAPSD power save9293 * @IWL_UCODE_TLV_FLAGS_BCAST_FILTERING: uCode supports broadcast filtering.9394 * @IWL_UCODE_TLV_FLAGS_GO_UAPSD: AP/GO interfaces support uAPSD clients
+2-3
drivers/net/wireless/iwlwifi/mvm/mac-ctxt.c
···667667 if (vif->bss_conf.qos)668668 cmd->qos_flags |= cpu_to_le32(MAC_QOS_FLG_UPDATE_EDCA);669669670670- if (vif->bss_conf.use_cts_prot) {670670+ if (vif->bss_conf.use_cts_prot)671671 cmd->protection_flags |= cpu_to_le32(MAC_PROT_FLG_TGG_PROTECT);672672- cmd->protection_flags |= cpu_to_le32(MAC_PROT_FLG_SELF_CTS_EN);673673- }672672+674673 IWL_DEBUG_RATE(mvm, "use_cts_prot %d, ht_operation_mode %d\n",675674 vif->bss_conf.use_cts_prot,676675 vif->bss_conf.ht_operation_mode);
+13-6
drivers/net/wireless/iwlwifi/mvm/mac80211.c
···303303 hw->uapsd_max_sp_len = IWL_UAPSD_MAX_SP;304304 }305305306306+ if (mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT &&307307+ !iwlwifi_mod_params.uapsd_disable) {308308+ hw->flags |= IEEE80211_HW_SUPPORTS_UAPSD;309309+ hw->uapsd_queues = IWL_UAPSD_AC_INFO;310310+ hw->uapsd_max_sp_len = IWL_UAPSD_MAX_SP;311311+ }312312+306313 hw->sta_data_size = sizeof(struct iwl_mvm_sta);307314 hw->vif_data_size = sizeof(struct iwl_mvm_vif);308315 hw->chanctx_data_size = sizeof(u16);···1166115911671160 bcast_mac = &cmd->macs[mvmvif->id];1168116111691169- /* enable filtering only for associated stations */11701170- if (vif->type != NL80211_IFTYPE_STATION || !vif->bss_conf.assoc)11621162+ /*11631163+ * enable filtering only for associated stations, but not for P2P11641164+ * Clients11651165+ */11661166+ if (vif->type != NL80211_IFTYPE_STATION || vif->p2p ||11671167+ !vif->bss_conf.assoc)11711168 return;1172116911731170 bcast_mac->default_discard = 1;···12461235 struct iwl_bcast_filter_cmd cmd;1247123612481237 if (!(mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_BCAST_FILTERING))12491249- return 0;12501250-12511251- /* bcast filtering isn't supported for P2P client */12521252- if (vif->p2p)12531238 return 0;1254123912551240 if (!iwl_mvm_bcast_filter_build_cmd(mvm, &cmd))
+18-45
drivers/net/wireless/iwlwifi/mvm/scan.c
···588588 struct iwl_scan_offload_cmd *scan,589589 struct iwl_mvm_scan_params *params)590590{591591- scan->channel_count =592592- mvm->nvm_data->bands[IEEE80211_BAND_2GHZ].n_channels +593593- mvm->nvm_data->bands[IEEE80211_BAND_5GHZ].n_channels;591591+ scan->channel_count = req->n_channels;594592 scan->quiet_time = cpu_to_le16(IWL_ACTIVE_QUIET_TIME);595593 scan->quiet_plcp_th = cpu_to_le16(IWL_PLCP_QUIET_THRESH);596594 scan->good_CRC_th = IWL_GOOD_CRC_TH_DEFAULT;···667669 struct cfg80211_sched_scan_request *req,668670 struct iwl_scan_channel_cfg *channels,669671 enum ieee80211_band band,670670- int *head, int *tail,672672+ int *head,671673 u32 ssid_bitmap,672674 struct iwl_mvm_scan_params *params)673675{674674- struct ieee80211_supported_band *s_band;675675- int n_channels = req->n_channels;676676- int i, j, index = 0;677677- bool partial;676676+ int i, index = 0;678677679679- /*680680- * We have to configure all supported channels, even if we don't want to681681- * scan on them, but we have to send channels in the order that we want682682- * to scan. 
So add requested channels to head of the list and others to683683- * the end.684684- */685685- s_band = &mvm->nvm_data->bands[band];678678+ for (i = 0; i < req->n_channels; i++) {679679+ struct ieee80211_channel *chan = req->channels[i];686680687687- for (i = 0; i < s_band->n_channels && *head <= *tail; i++) {688688- partial = false;689689- for (j = 0; j < n_channels; j++)690690- if (s_band->channels[i].center_freq ==691691- req->channels[j]->center_freq) {692692- index = *head;693693- (*head)++;694694- /*695695- * Channels that came with the request will be696696- * in partial scan .697697- */698698- partial = true;699699- break;700700- }701701- if (!partial) {702702- index = *tail;703703- (*tail)--;704704- }705705- channels->channel_number[index] =706706- cpu_to_le16(ieee80211_frequency_to_channel(707707- s_band->channels[i].center_freq));681681+ if (chan->band != band)682682+ continue;683683+684684+ index = *head;685685+ (*head)++;686686+687687+ channels->channel_number[index] = cpu_to_le16(chan->hw_value);708688 channels->dwell_time[index][0] = params->dwell[band].active;709689 channels->dwell_time[index][1] = params->dwell[band].passive;710690711691 channels->iter_count[index] = cpu_to_le16(1);712692 channels->iter_interval[index] = 0;713693714714- if (!(s_band->channels[i].flags & IEEE80211_CHAN_NO_IR))694694+ if (!(chan->flags & IEEE80211_CHAN_NO_IR))715695 channels->type[index] |=716696 cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_ACTIVE);717697718698 channels->type[index] |=719719- cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_FULL);720720- if (partial)721721- channels->type[index] |=722722- cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_PARTIAL);699699+ cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_FULL |700700+ IWL_SCAN_OFFLOAD_CHANNEL_PARTIAL);723701724724- if (s_band->channels[i].flags & IEEE80211_CHAN_NO_HT40)702702+ if (chan->flags & IEEE80211_CHAN_NO_HT40)725703 channels->type[index] |=726704 cpu_to_le32(IWL_SCAN_OFFLOAD_CHANNEL_NARROW);727705···714740 int band_2ghz = 
mvm->nvm_data->bands[IEEE80211_BAND_2GHZ].n_channels;715741 int band_5ghz = mvm->nvm_data->bands[IEEE80211_BAND_5GHZ].n_channels;716742 int head = 0;717717- int tail = band_2ghz + band_5ghz - 1;718743 u32 ssid_bitmap;719744 int cmd_len;720745 int ret;···745772 &scan_cfg->scan_cmd.tx_cmd[0],746773 scan_cfg->data);747774 iwl_build_channel_cfg(mvm, req, &scan_cfg->channel_cfg,748748- IEEE80211_BAND_2GHZ, &head, &tail,775775+ IEEE80211_BAND_2GHZ, &head,749776 ssid_bitmap, &params);750777 }751778 if (band_5ghz) {···755782 scan_cfg->data +756783 SCAN_OFFLOAD_PROBE_REQ_SIZE);757784 iwl_build_channel_cfg(mvm, req, &scan_cfg->channel_cfg,758758- IEEE80211_BAND_5GHZ, &head, &tail,785785+ IEEE80211_BAND_5GHZ, &head,759786 ssid_bitmap, &params);760787 }761788
···231231 */232232static int rt2800usb_autorun_detect(struct rt2x00_dev *rt2x00dev)233233{234234- __le32 reg;234234+ __le32 *reg;235235 u32 fw_mode;236236237237+ reg = kmalloc(sizeof(*reg), GFP_KERNEL);238238+ if (reg == NULL)239239+ return -ENOMEM;237240 /* cannot use rt2x00usb_register_read here as it uses different238241 * mode (MULTI_READ vs. DEVICE_MODE) and does not pass the239242 * magic value USB_MODE_AUTORUN (0x11) to the device, thus the···244241 */245242 rt2x00usb_vendor_request(rt2x00dev, USB_DEVICE_MODE,246243 USB_VENDOR_REQUEST_IN, 0, USB_MODE_AUTORUN,247247- ®, sizeof(reg), REGISTER_TIMEOUT_FIRMWARE);248248- fw_mode = le32_to_cpu(reg);244244+ reg, sizeof(*reg), REGISTER_TIMEOUT_FIRMWARE);245245+ fw_mode = le32_to_cpu(*reg);246246+ kfree(reg);249247250248 if ((fw_mode & 0x00000003) == 2)251249 return 1;···265261 int status;266262 u32 offset;267263 u32 length;264264+ int retval;268265269266 /*270267 * Check which section of the firmware we need.···283278 /*284279 * Write firmware to device.285280 */286286- if (rt2800usb_autorun_detect(rt2x00dev)) {281281+ retval = rt2800usb_autorun_detect(rt2x00dev);282282+ if (retval < 0)283283+ return retval;284284+ if (retval) {287285 rt2x00_info(rt2x00dev,288286 "Firmware loading not required - NIC in AutoRun mode\n");289287 } else {···771763 */772764static int rt2800usb_efuse_detect(struct rt2x00_dev *rt2x00dev)773765{774774- if (rt2800usb_autorun_detect(rt2x00dev))766766+ int retval;767767+768768+ retval = rt2800usb_autorun_detect(rt2x00dev);769769+ if (retval < 0)770770+ return retval;771771+ if (retval)775772 return 1;776773 return rt2800_efuse_detect(rt2x00dev);777774}···785772{786773 int retval;787774788788- if (rt2800usb_efuse_detect(rt2x00dev))775775+ retval = rt2800usb_efuse_detect(rt2x00dev);776776+ if (retval < 0)777777+ return retval;778778+ if (retval)789779 retval = rt2800_read_eeprom_efuse(rt2x00dev);790780 else791781 retval = rt2x00usb_eeprom_read(rt2x00dev, rt2x00dev->eeprom,
+16-11
drivers/net/xen-netfront.c
···14391439 unsigned int i = 0;14401440 unsigned int num_queues = info->netdev->real_num_tx_queues;1441144114421442+ netif_carrier_off(info->netdev);14431443+14421444 for (i = 0; i < num_queues; ++i) {14431445 struct netfront_queue *queue = &info->queues[i];14441444-14451445- /* Stop old i/f to prevent errors whilst we rebuild the state. */14461446- spin_lock_bh(&queue->rx_lock);14471447- spin_lock_irq(&queue->tx_lock);14481448- netif_carrier_off(queue->info->netdev);14491449- spin_unlock_irq(&queue->tx_lock);14501450- spin_unlock_bh(&queue->rx_lock);1451144614521447 if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))14531448 unbind_from_irqhandler(queue->tx_irq, queue);···14521457 }14531458 queue->tx_evtchn = queue->rx_evtchn = 0;14541459 queue->tx_irq = queue->rx_irq = 0;14601460+14611461+ napi_synchronize(&queue->napi);1455146214561463 /* End access and free the pages */14571464 xennet_end_access(queue->tx_ring_ref, queue->tx.sring);···20432046 /* By now, the queue structures have been set up */20442047 for (j = 0; j < num_queues; ++j) {20452048 queue = &np->queues[j];20462046- spin_lock_bh(&queue->rx_lock);20472047- spin_lock_irq(&queue->tx_lock);2048204920492050 /* Step 1: Discard all pending TX packet fragments. */20512051+ spin_lock_irq(&queue->tx_lock);20502052 xennet_release_tx_bufs(queue);20532053+ spin_unlock_irq(&queue->tx_lock);2051205420522055 /* Step 2: Rebuild the RX buffer freelist and the RX ring itself. 
*/20562056+ spin_lock_bh(&queue->rx_lock);20572057+20532058 for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {20542059 skb_frag_t *frag;20552060 const struct page *page;···20752076 }2076207720772078 queue->rx.req_prod_pvt = requeue_idx;20792079+20802080+ spin_unlock_bh(&queue->rx_lock);20782081 }2079208220802083 /*···20882087 netif_carrier_on(np->netdev);20892088 for (j = 0; j < num_queues; ++j) {20902089 queue = &np->queues[j];20902090+20912091 notify_remote_via_irq(queue->tx_irq);20922092 if (queue->tx_irq != queue->rx_irq)20932093 notify_remote_via_irq(queue->rx_irq);20942094- xennet_tx_buf_gc(queue);20952095- xennet_alloc_rx_buffers(queue);2096209420952095+ spin_lock_irq(&queue->tx_lock);20962096+ xennet_tx_buf_gc(queue);20972097 spin_unlock_irq(&queue->tx_lock);20982098+20992099+ spin_lock_bh(&queue->rx_lock);21002100+ xennet_alloc_rx_buffers(queue);20982101 spin_unlock_bh(&queue->rx_lock);20992102 }21002103
-34
drivers/of/of_mdio.c
···182182}183183EXPORT_SYMBOL(of_mdiobus_register);184184185185-/**186186- * of_mdiobus_link_phydev - Find a device node for a phy187187- * @mdio: pointer to mii_bus structure188188- * @phydev: phydev for which the of_node pointer should be set189189- *190190- * Walk the list of subnodes of a mdio bus and look for a node that matches the191191- * phy's address with its 'reg' property. If found, set the of_node pointer for192192- * the phy. This allows auto-probed pyh devices to be supplied with information193193- * passed in via DT.194194- */195195-void of_mdiobus_link_phydev(struct mii_bus *mdio,196196- struct phy_device *phydev)197197-{198198- struct device *dev = &phydev->dev;199199- struct device_node *child;200200-201201- if (dev->of_node || !mdio->dev.of_node)202202- return;203203-204204- for_each_available_child_of_node(mdio->dev.of_node, child) {205205- int addr;206206-207207- addr = of_mdio_parse_addr(&mdio->dev, child);208208- if (addr < 0)209209- continue;210210-211211- if (addr == phydev->addr) {212212- dev->of_node = child;213213- return;214214- }215215- }216216-}217217-EXPORT_SYMBOL(of_mdiobus_link_phydev);218218-219185/* Helper function for of_phy_find_device */220186static int of_phy_match(struct device *dev, void *phy_np)221187{
+7-2
drivers/pci/pci.c
···31353135 if (probe)31363136 return 0;3137313731383138- /* Wait for Transaction Pending bit clean */31393139- if (pci_wait_for_pending(dev, pos + PCI_AF_STATUS, PCI_AF_STATUS_TP))31383138+ /*31393139+ * Wait for Transaction Pending bit to clear. A word-aligned test31403140+ * is used, so we use the control offset rather than status and shift31413141+ * the test bit to match.31423142+ */31433143+ if (pci_wait_for_pending(dev, pos + PCI_AF_CTRL,31443144+ PCI_AF_STATUS_TP << 8))31403145 goto clear;3141314631423147 dev_err(&dev->dev, "transaction is not cleared; proceeding with reset anyway\n");
+2
drivers/phy/Kconfig
···112112config PHY_SUN4I_USB113113 tristate "Allwinner sunxi SoC USB PHY driver"114114 depends on ARCH_SUNXI && HAS_IOMEM && OF115115+ depends on RESET_CONTROLLER115116 select GENERIC_PHY116117 help117118 Enable this to support the transceiver that is part of Allwinner···123122124123config PHY_SAMSUNG_USB2125124 tristate "Samsung USB 2.0 PHY driver"125125+ depends on HAS_IOMEM126126 select GENERIC_PHY127127 select MFD_SYSCON128128 help
+7-11
drivers/thermal/imx_thermal.c
···306306{307307 struct imx_thermal_data *data = platform_get_drvdata(pdev);308308 struct regmap *map;309309- int t1, t2, n1, n2;309309+ int t1, n1;310310 int ret;311311 u32 val;312312 u64 temp64;···333333 /*334334 * Sensor data layout:335335 * [31:20] - sensor value @ 25C336336- * [19:8] - sensor value of hot337337- * [7:0] - hot temperature value338336 * Use universal formula now and only need sensor value @ 25C339337 * slope = 0.4297157 - (0.0015976 * 25C fuse)340338 */341339 n1 = val >> 20;342342- n2 = (val & 0xfff00) >> 8;343343- t2 = val & 0xff;344340 t1 = 25; /* t1 always 25C */345341346342 /*···362366 data->c2 = n1 * data->c1 + 1000 * t1;363367364368 /*365365- * Set the default passive cooling trip point to 20 °C below the366366- * maximum die temperature. Can be changed from userspace.369369+ * Set the default passive cooling trip point; it370370+ * can be changed from userspace.367371 */368368- data->temp_passive = 1000 * (t2 - 20);372372+ data->temp_passive = IMX_TEMP_PASSIVE;369373370374 /*371371- * The maximum die temperature is t2, let's give 5 °C cushion372372- * for noise and possible temperature rise between measurements.375375+ * The maximum die temperature is set to 20 C higher than376376+ * IMX_TEMP_PASSIVE.373377 */374374- data->temp_critical = 1000 * (t2 - 5);378378+ data->temp_critical = 1000 * 20 + data->temp_passive;375379376380 return 0;377381}
+4-3
drivers/thermal/of-thermal.c
···156156157157 ret = thermal_zone_bind_cooling_device(thermal,158158 tbp->trip_id, cdev,159159- tbp->min,160160- tbp->max);159159+ tbp->max,160160+ tbp->min);161161 if (ret)162162 return ret;163163 }···712712 }713713714714 i = 0;715715- for_each_child_of_node(child, gchild)715715+ for_each_child_of_node(child, gchild) {716716 ret = thermal_of_populate_bind_params(gchild, &tz->tbps[i++],717717 tz->trips, tz->ntrips);718718 if (ret)719719 goto free_tbps;720720+ }720721721722finish:722723 of_node_put(child);
+18-15
drivers/thermal/thermal_hwmon.c
···140140 return NULL;141141}142142143143+static bool thermal_zone_crit_temp_valid(struct thermal_zone_device *tz)144144+{145145+ unsigned long temp;146146+ return tz->ops->get_crit_temp && !tz->ops->get_crit_temp(tz, &temp);147147+}148148+143149int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz)144150{145151 struct thermal_hwmon_device *hwmon;···195189 if (result)196190 goto free_temp_mem;197191198198- if (tz->ops->get_crit_temp) {199199- unsigned long temperature;200200- if (!tz->ops->get_crit_temp(tz, &temperature)) {201201- snprintf(temp->temp_crit.name,202202- sizeof(temp->temp_crit.name),192192+ if (thermal_zone_crit_temp_valid(tz)) {193193+ snprintf(temp->temp_crit.name,194194+ sizeof(temp->temp_crit.name),203195 "temp%d_crit", hwmon->count);204204- temp->temp_crit.attr.attr.name = temp->temp_crit.name;205205- temp->temp_crit.attr.attr.mode = 0444;206206- temp->temp_crit.attr.show = temp_crit_show;207207- sysfs_attr_init(&temp->temp_crit.attr.attr);208208- result = device_create_file(hwmon->device,209209- &temp->temp_crit.attr);210210- if (result)211211- goto unregister_input;212212- }196196+ temp->temp_crit.attr.attr.name = temp->temp_crit.name;197197+ temp->temp_crit.attr.attr.mode = 0444;198198+ temp->temp_crit.attr.show = temp_crit_show;199199+ sysfs_attr_init(&temp->temp_crit.attr.attr);200200+ result = device_create_file(hwmon->device,201201+ &temp->temp_crit.attr);202202+ if (result)203203+ goto unregister_input;213204 }214205215206 mutex_lock(&thermal_hwmon_list_lock);···253250 }254251255252 device_remove_file(hwmon->device, &temp->temp_input.attr);256256- if (tz->ops->get_crit_temp)253253+ if (thermal_zone_crit_temp_valid(tz))257254 device_remove_file(hwmon->device, &temp->temp_crit.attr);258255259256 mutex_lock(&thermal_hwmon_list_lock);
+1-1
drivers/thermal/ti-soc-thermal/ti-bandgap.c
···11551155 /* register shadow for context save and restore */11561156 bgp->regval = devm_kzalloc(&pdev->dev, sizeof(*bgp->regval) *11571157 bgp->conf->sensor_count, GFP_KERNEL);11581158- if (!bgp) {11581158+ if (!bgp->regval) {11591159 dev_err(&pdev->dev, "Unable to allocate mem for driver ref\n");11601160 return ERR_PTR(-ENOMEM);11611161 }
+2-2
drivers/usb/chipidea/udc.c
···1169116911701170 if (hwep->type == USB_ENDPOINT_XFER_CONTROL)11711171 cap |= QH_IOS;11721172- if (hwep->num)11731173- cap |= QH_ZLT;11721172+11731173+ cap |= QH_ZLT;11741174 cap |= (hwep->ep.maxpacket << __ffs(QH_MAX_PKT)) & QH_MAX_PKT;11751175 /*11761176 * For ISO-TX, we set mult at QH as the largest value, and use
+19
drivers/usb/core/hub.c
···889889 if (!hub_is_superspeed(hub->hdev))890890 return -EINVAL;891891892892+ ret = hub_port_status(hub, port1, &portstatus, &portchange);893893+ if (ret < 0)894894+ return ret;895895+896896+ /*897897+ * USB controller Advanced Micro Devices, Inc. [AMD] FCH USB XHCI898898+ * Controller [1022:7814] has a spurious result: setting the USB 3.0899899+ * port link state to Disabled makes a subsequently hotplugged USB 3.0900900+ * device route to the 2.0 root hub and be recognized as a high-speed901901+ * device. Since the port is already in the USB_SS_PORT_LS_RX_DETECT902902+ * state, we check the state here to avoid the bug.903903+ */904904+ if ((portstatus & USB_PORT_STAT_LINK_STATE) ==905905+ USB_SS_PORT_LS_RX_DETECT) {906906+ dev_dbg(&hub->ports[port1 - 1]->dev,907907+ "Not disabling port; link state is RxDetect\n");908908+ return ret;909909+ }910910+892911 ret = hub_set_port_link_state(hub, port1, USB_SS_PORT_LS_SS_DISABLED);893912 if (ret)894913 return ret;
+2-2
fs/ext4/extents_status.c
···966966 continue;967967 }968968969969- if (ei->i_es_lru_nr == 0 || ei == locked_ei)969969+ if (ei->i_es_lru_nr == 0 || ei == locked_ei ||970970+ !write_trylock(&ei->i_es_lock))970971 continue;971972972972- write_lock(&ei->i_es_lock);973973 shrunk = __es_try_to_reclaim_extents(ei, nr_to_scan);974974 if (ei->i_es_lru_nr == 0)975975 list_del_init(&ei->i_es_lru);
+8-8
fs/ext4/ialloc.c
···338338 fatal = err;339339 } else {340340 ext4_error(sb, "bit already cleared for inode %lu", ino);341341- if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) {341341+ if (gdp && !EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) {342342 int count;343343 count = ext4_free_inodes_count(sb, gdp);344344 percpu_counter_sub(&sbi->s_freeinodes_counter,···874874 goto out;875875 }876876877877+ BUFFER_TRACE(group_desc_bh, "get_write_access");878878+ err = ext4_journal_get_write_access(handle, group_desc_bh);879879+ if (err) {880880+ ext4_std_error(sb, err);881881+ goto out;882882+ }883883+877884 /* We may have to initialize the block bitmap if it isn't already */878885 if (ext4_has_group_desc_csum(sb) &&879886 gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {···915908 ext4_std_error(sb, err);916909 goto out;917910 }918918- }919919-920920- BUFFER_TRACE(group_desc_bh, "get_write_access");921921- err = ext4_journal_get_write_access(handle, group_desc_bh);922922- if (err) {923923- ext4_std_error(sb, err);924924- goto out;925911 }926912927913 /* Update the relevant bg descriptor fields */
+2-2
fs/ext4/mballoc.c
···752752753753 if (free != grp->bb_free) {754754 ext4_grp_locked_error(sb, group, 0, 0,755755- "%u clusters in bitmap, %u in gd; "756756- "block bitmap corrupt.",755755+ "block bitmap and bg descriptor "756756+ "inconsistent: %u vs %u free clusters",757757 free, grp->bb_free);758758 /*759759 * If we intend to continue, we consider group descriptor
+28-32
fs/ext4/super.c
···15251525 arg = JBD2_DEFAULT_MAX_COMMIT_AGE;15261526 sbi->s_commit_interval = HZ * arg;15271527 } else if (token == Opt_max_batch_time) {15281528- if (arg == 0)15291529- arg = EXT4_DEF_MAX_BATCH_TIME;15301528 sbi->s_max_batch_time = arg;15311529 } else if (token == Opt_min_batch_time) {15321530 sbi->s_min_batch_time = arg;···28072809 es = sbi->s_es;2808281028092811 if (es->s_error_count)28102810- ext4_msg(sb, KERN_NOTICE, "error count: %u",28122812+ /* fsck newer than v1.41.13 is needed to clean this condition. */28132813+ ext4_msg(sb, KERN_NOTICE, "error count since last fsck: %u",28112814 le32_to_cpu(es->s_error_count));28122815 if (es->s_first_error_time) {28132813- printk(KERN_NOTICE "EXT4-fs (%s): initial error at %u: %.*s:%d",28162816+ printk(KERN_NOTICE "EXT4-fs (%s): initial error at time %u: %.*s:%d",28142817 sb->s_id, le32_to_cpu(es->s_first_error_time),28152818 (int) sizeof(es->s_first_error_func),28162819 es->s_first_error_func,···28252826 printk("\n");28262827 }28272828 if (es->s_last_error_time) {28282828- printk(KERN_NOTICE "EXT4-fs (%s): last error at %u: %.*s:%d",28292829+ printk(KERN_NOTICE "EXT4-fs (%s): last error at time %u: %.*s:%d",28292830 sb->s_id, le32_to_cpu(es->s_last_error_time),28302831 (int) sizeof(es->s_last_error_func),28312832 es->s_last_error_func,···38793880 goto failed_mount2;38803881 }38813882 }38823882-38833883- /*38843884- * set up enough so that it can read an inode,38853885- * and create new inode for buddy allocator38863886- */38873887- sbi->s_gdb_count = db_count;38883888- if (!test_opt(sb, NOLOAD) &&38893889- EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL))38903890- sb->s_op = &ext4_sops;38913891- else38923892- sb->s_op = &ext4_nojournal_sops;38933893-38943894- ext4_ext_init(sb);38953895- err = ext4_mb_init(sb);38963896- if (err) {38973897- ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)",38983898- err);38993899- goto failed_mount2;39003900- }39013901-39023883 if (!ext4_check_descriptors(sb, 
&first_not_zeroed)) {39033884 ext4_msg(sb, KERN_ERR, "group descriptors corrupted!");39043904- goto failed_mount2a;38853885+ goto failed_mount2;39053886 }39063887 if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG))39073888 if (!ext4_fill_flex_info(sb)) {39083889 ext4_msg(sb, KERN_ERR,39093890 "unable to initialize "39103891 "flex_bg meta info!");39113911- goto failed_mount2a;38923892+ goto failed_mount2;39123893 }3913389438953895+ sbi->s_gdb_count = db_count;39143896 get_random_bytes(&sbi->s_next_generation, sizeof(u32));39153897 spin_lock_init(&sbi->s_next_gen_lock);39163898···39263946 sbi->s_stripe = ext4_get_stripe_size(sbi);39273947 sbi->s_extent_max_zeroout_kb = 32;3928394839493949+ /*39503950+ * set up enough so that it can read an inode39513951+ */39523952+ if (!test_opt(sb, NOLOAD) &&39533953+ EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL))39543954+ sb->s_op = &ext4_sops;39553955+ else39563956+ sb->s_op = &ext4_nojournal_sops;39293957 sb->s_export_op = &ext4_export_ops;39303958 sb->s_xattr = ext4_xattr_handlers;39313959#ifdef CONFIG_QUOTA···41234135 if (err) {41244136 ext4_msg(sb, KERN_ERR, "failed to reserve %llu clusters for "41254137 "reserved pool", ext4_calculate_resv_clusters(sb));41264126- goto failed_mount5;41384138+ goto failed_mount4a;41274139 }4128414041294141 err = ext4_setup_system_zone(sb);41304142 if (err) {41314143 ext4_msg(sb, KERN_ERR, "failed to initialize system "41324144 "zone (%d)", err);41454145+ goto failed_mount4a;41464146+ }41474147+41484148+ ext4_ext_init(sb);41494149+ err = ext4_mb_init(sb);41504150+ if (err) {41514151+ ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)",41524152+ err);41334153 goto failed_mount5;41344154 }41354155···42144218failed_mount7:42154219 ext4_unregister_li_request(sb);42164220failed_mount6:42174217- ext4_release_system_zone(sb);42214221+ ext4_mb_release(sb);42184222failed_mount5:42234223+ ext4_ext_release(sb);42244224+ 
ext4_release_system_zone(sb);42254225+failed_mount4a:42194226 dput(sb->s_root);42204227 sb->s_root = NULL;42214228failed_mount4:···42424243 percpu_counter_destroy(&sbi->s_extent_cache_cnt);42434244 if (sbi->s_mmp_tsk)42444245 kthread_stop(sbi->s_mmp_tsk);42454245-failed_mount2a:42464246- ext4_mb_release(sb);42474246failed_mount2:42484247 for (i = 0; i < db_count; i++)42494248 brelse(sbi->s_group_desc[i]);42504249 ext4_kvfree(sbi->s_group_desc);42514250failed_mount:42524252- ext4_ext_release(sb);42534251 if (sbi->s_chksum_driver)42544252 crypto_free_shash(sbi->s_chksum_driver);42554253 if (sbi->s_proc) {
+18-5
fs/f2fs/data.c
···608608 * b. do not use extent cache for better performance609609 * c. give the block addresses to blockdev610610 */611611-static int get_data_block(struct inode *inode, sector_t iblock,612612- struct buffer_head *bh_result, int create)611611+static int __get_data_block(struct inode *inode, sector_t iblock,612612+ struct buffer_head *bh_result, int create, bool fiemap)613613{614614 struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);615615 unsigned int blkbits = inode->i_sb->s_blocksize_bits;···637637 err = 0;638638 goto unlock_out;639639 }640640- if (dn.data_blkaddr == NEW_ADDR)640640+ if (dn.data_blkaddr == NEW_ADDR && !fiemap)641641 goto put_out;642642643643 if (dn.data_blkaddr != NULL_ADDR) {···671671 err = 0;672672 goto unlock_out;673673 }674674- if (dn.data_blkaddr == NEW_ADDR)674674+ if (dn.data_blkaddr == NEW_ADDR && !fiemap)675675 goto put_out;676676677677 end_offset = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode));···708708 return err;709709}710710711711+static int get_data_block(struct inode *inode, sector_t iblock,712712+ struct buffer_head *bh_result, int create)713713+{714714+ return __get_data_block(inode, iblock, bh_result, create, false);715715+}716716+717717+static int get_data_block_fiemap(struct inode *inode, sector_t iblock,718718+ struct buffer_head *bh_result, int create)719719+{720720+ return __get_data_block(inode, iblock, bh_result, create, true);721721+}722722+711723int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,712724 u64 start, u64 len)713725{714714- return generic_block_fiemap(inode, fieinfo, start, len, get_data_block);726726+ return generic_block_fiemap(inode, fieinfo,727727+ start, len, get_data_block_fiemap);715728}716729717730static int f2fs_read_data_page(struct file *file, struct page *page)
+1-1
fs/f2fs/dir.c
···376376377377put_error:378378 f2fs_put_page(page, 1);379379+error:379380 /* once the failed inode becomes a bad inode, i_mode is S_IFREG */380381 truncate_inode_pages(&inode->i_data, 0);381382 truncate_blocks(inode, 0);382383 remove_dirty_dir_inode(inode);383383-error:384384 remove_inode_page(inode);385385 return ERR_PTR(err);386386}
+2-4
fs/f2fs/f2fs.h
···342342 struct dirty_seglist_info *dirty_info; /* dirty segment information */343343 struct curseg_info *curseg_array; /* active segment information */344344345345- struct list_head wblist_head; /* list of under-writeback pages */346346- spinlock_t wblist_lock; /* lock for checkpoint */347347-348345 block_t seg0_blkaddr; /* block address of 0'th segment */349346 block_t main_blkaddr; /* start block address of main area */350347 block_t ssa_blkaddr; /* start block address of SSA area */···641644 */642645static inline int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid)643646{644644- WARN_ON((nid >= NM_I(sbi)->max_nid));647647+ if (unlikely(nid < F2FS_ROOT_INO(sbi)))648648+ return -EINVAL;645649 if (unlikely(nid >= NM_I(sbi)->max_nid))646650 return -EINVAL;647651 return 0;
+9-5
fs/gfs2/glock.c
···731731 cachep = gfs2_glock_aspace_cachep;732732 else733733 cachep = gfs2_glock_cachep;734734- gl = kmem_cache_alloc(cachep, GFP_KERNEL);734734+ gl = kmem_cache_alloc(cachep, GFP_NOFS);735735 if (!gl)736736 return -ENOMEM;737737738738 memset(&gl->gl_lksb, 0, sizeof(struct dlm_lksb));739739740740 if (glops->go_flags & GLOF_LVB) {741741- gl->gl_lksb.sb_lvbptr = kzalloc(GFS2_MIN_LVB_SIZE, GFP_KERNEL);741741+ gl->gl_lksb.sb_lvbptr = kzalloc(GFS2_MIN_LVB_SIZE, GFP_NOFS);742742 if (!gl->gl_lksb.sb_lvbptr) {743743 kmem_cache_free(cachep, gl);744744 return -ENOMEM;···14041404 gl = list_entry(list->next, struct gfs2_glock, gl_lru);14051405 list_del_init(&gl->gl_lru);14061406 if (!spin_trylock(&gl->gl_spin)) {14071407+add_back_to_lru:14071408 list_add(&gl->gl_lru, &lru_list);14081409 atomic_inc(&lru_count);14091410 continue;14101411 }14121412+ if (test_and_set_bit(GLF_LOCK, &gl->gl_flags)) {14131413+ spin_unlock(&gl->gl_spin);14141414+ goto add_back_to_lru;14151415+ }14111416 clear_bit(GLF_LRU, &gl->gl_flags);14121412- spin_unlock(&lru_lock);14131417 gl->gl_lockref.count++;14141418 if (demote_ok(gl))14151419 handle_callback(gl, LM_ST_UNLOCKED, 0, false);···14211417 if (queue_delayed_work(glock_workqueue, &gl->gl_work, 0) == 0)14221418 gl->gl_lockref.count--;14231419 spin_unlock(&gl->gl_spin);14241424- spin_lock(&lru_lock);14201420+ cond_resched_lock(&lru_lock);14251421 }14261422}14271423···14461442 gl = list_entry(lru_list.next, struct gfs2_glock, gl_lru);1447144314481444 /* Test for being demotable */14491449- if (!test_and_set_bit(GLF_LOCK, &gl->gl_flags)) {14451445+ if (!test_bit(GLF_LOCK, &gl->gl_flags)) {14501446 list_move(&gl->gl_lru, &dispose);14511447 atomic_dec(&lru_count);14521448 freed++;
+2-2
fs/gfs2/glops.c
···234234 * inode_go_inval - prepare a inode glock to be released235235 * @gl: the glock236236 * @flags:237237- * 238238- * Normally we invlidate everything, but if we are moving into237237+ *238238+ * Normally we invalidate everything, but if we are moving into239239 * LM_ST_DEFERRED from LM_ST_SHARED or LM_ST_EXCLUSIVE then we240240 * can keep hold of the metadata, since it won't have changed.241241 *
···337337338338/**339339 * gfs2_free_extlen - Return extent length of free blocks340340- * @rbm: Starting position340340+ * @rrbm: Starting position341341 * @len: Max length to check342342 *343343 * Starting at the block specified by the rbm, see how many free blocks···2522252225232523/**25242524 * gfs2_rlist_free - free a resource group list25252525- * @list: the list of resource groups25252525+ * @rlist: the list of resource groups25262526 *25272527 */25282528
+4-1
fs/jbd2/transaction.c
···15881588 * to perform a synchronous write. We do this to detect the15891589 * case where a single process is doing a stream of sync15901590 * writes. No point in waiting for joiners in that case.15911591+ *15921592+ * Setting max_batch_time to 0 disables this completely.15911593 */15921594 pid = current->pid;15931593- if (handle->h_sync && journal->j_last_sync_writer != pid) {15951595+ if (handle->h_sync && journal->j_last_sync_writer != pid &&15961596+ journal->j_max_batch_time) {15941597 u64 commit_time, trans_time;1595159815961599 journal->j_last_sync_writer = pid;
+30
fs/kernfs/mount.c
···211211 kernfs_put(root_kn);212212}213213214214+/**215215+ * kernfs_pin_sb: try to pin the superblock associated with a kernfs_root216216+ * @root: the kernfs_root in question217217+ * @ns: the namespace tag218218+ *219219+ * Pin the superblock so it won't be destroyed in subsequent220220+ * operations. This can be used to block ->kill_sb() which may be useful221221+ * for kernfs users which dynamically manage superblocks.222222+ *223223+ * Returns NULL if there's no superblock associated with this kernfs_root, or224224+ * ERR_PTR(-EINVAL) if the superblock is being freed.225225+ */226226+struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns)227227+{228228+ struct kernfs_super_info *info;229229+ struct super_block *sb = NULL;230230+231231+ mutex_lock(&kernfs_mutex);232232+ list_for_each_entry(info, &root->supers, node) {233233+ if (info->ns == ns) {234234+ sb = info->sb;235235+ if (!atomic_inc_not_zero(&info->sb->s_active))236236+ sb = ERR_PTR(-EINVAL);237237+ break;238238+ }239239+ }240240+ mutex_unlock(&kernfs_mutex);241241+ return sb;242242+}243243+214244void __init kernfs_init(void)215245{216246 kernfs_node_cache = kmem_cache_create("kernfs_node_cache",
+15-5
fs/nfs/pagelist.c
···2929static struct kmem_cache *nfs_page_cachep;3030static const struct rpc_call_ops nfs_pgio_common_ops;31313232-static void nfs_free_request(struct nfs_page *);3333-3432static bool nfs_pgarray_set(struct nfs_page_array *p, unsigned int pagecount)3533{3634 p->npages = pagecount;···237239 WARN_ON_ONCE(prev == req);238240239241 if (!prev) {242242+ /* a head request */240243 req->wb_head = req;241244 req->wb_this_page = req;242245 } else {246246+ /* a subrequest */243247 WARN_ON_ONCE(prev->wb_this_page != prev->wb_head);244248 WARN_ON_ONCE(!test_bit(PG_HEADLOCK, &prev->wb_head->wb_flags));245249 req->wb_head = prev->wb_head;246250 req->wb_this_page = prev->wb_this_page;247251 prev->wb_this_page = req;248252253253+ /* All subrequests take a ref on the head request until254254+ * nfs_page_group_destroy is called */255255+ kref_get(&req->wb_head->wb_kref);256256+249257 /* grab extra ref if head request has extra ref from250258 * the write/commit path to handle handoff between write251259 * and commit lists */252252- if (test_bit(PG_INODE_REF, &prev->wb_head->wb_flags))260260+ if (test_bit(PG_INODE_REF, &prev->wb_head->wb_flags)) {261261+ set_bit(PG_INODE_REF, &req->wb_flags);253262 kref_get(&req->wb_kref);263263+ }254264 }255265}256266···274268{275269 struct nfs_page *req = container_of(kref, struct nfs_page, wb_kref);276270 struct nfs_page *tmp, *next;271271+272272+ /* subrequests must release the ref on the head request */273273+ if (req->wb_head != req)274274+ nfs_release_request(req->wb_head);277275278276 if (!nfs_page_group_sync_on_bit(req, PG_TEARDOWN))279277 return;···404394 *405395 * Note: Should never be called with the spinlock held!406396 */407407-static void nfs_free_request(struct nfs_page *req)397397+void nfs_free_request(struct nfs_page *req)408398{409399 WARN_ON_ONCE(req->wb_this_page != req);410400···935925 nfs_pageio_doio(desc);936926 if (desc->pg_error < 0)937927 return 0;938938- desc->pg_moreio = 0;939928 if (desc->pg_recoalesce)940929 return 
0;941930 /* retry add_request for this subreq */···981972 desc->pg_count = 0;982973 desc->pg_base = 0;983974 desc->pg_recoalesce = 0;975975+ desc->pg_moreio = 0;984976985977 while (!list_empty(&head)) {986978 struct nfs_page *req;
+287-58
fs/nfs/write.c
···4646static const struct nfs_pgio_completion_ops nfs_async_write_completion_ops;4747static const struct nfs_commit_completion_ops nfs_commit_completion_ops;4848static const struct nfs_rw_ops nfs_rw_write_ops;4949+static void nfs_clear_request_commit(struct nfs_page *req);49505051static struct kmem_cache *nfs_wdata_cachep;5152static mempool_t *nfs_wdata_mempool;···9291 set_bit(NFS_CONTEXT_ERROR_WRITE, &ctx->flags);9392}94939494+/*9595+ * nfs_page_find_head_request_locked - find head request associated with @page9696+ *9797+ * must be called while holding the inode lock.9898+ *9999+ * returns matching head request with reference held, or NULL if not found.100100+ */95101static struct nfs_page *9696-nfs_page_find_request_locked(struct nfs_inode *nfsi, struct page *page)102102+nfs_page_find_head_request_locked(struct nfs_inode *nfsi, struct page *page)97103{98104 struct nfs_page *req = NULL;99105···112104 /* Linearly search the commit list for the correct req */113105 list_for_each_entry_safe(freq, t, &nfsi->commit_info.list, wb_list) {114106 if (freq->wb_page == page) {115115- req = freq;107107+ req = freq->wb_head;116108 break;117109 }118110 }119111 }120112121121- if (req)113113+ if (req) {114114+ WARN_ON_ONCE(req->wb_head != req);115115+122116 kref_get(&req->wb_kref);117117+ }123118124119 return req;125120}126121127127-static struct nfs_page *nfs_page_find_request(struct page *page)122122+/*123123+ * nfs_page_find_head_request - find head request associated with @page124124+ *125125+ * returns matching head request with reference held, or NULL if not found.126126+ */127127+static struct nfs_page *nfs_page_find_head_request(struct page *page)128128{129129 struct inode *inode = page_file_mapping(page)->host;130130 struct nfs_page *req = NULL;131131132132 spin_lock(&inode->i_lock);133133- req = nfs_page_find_request_locked(NFS_I(inode), page);133133+ req = nfs_page_find_head_request_locked(NFS_I(inode), page);134134 spin_unlock(&inode->i_lock);135135 return 
req;136136}···290274 clear_bdi_congested(&nfss->backing_dev_info, BLK_RW_ASYNC);291275}292276293293-static struct nfs_page *nfs_find_and_lock_request(struct page *page, bool nonblock)277277+278278+/* nfs_page_group_clear_bits279279+ * @req - an nfs request280280+ * clears all page group related bits from @req281281+ */282282+static void283283+nfs_page_group_clear_bits(struct nfs_page *req)294284{295295- struct inode *inode = page_file_mapping(page)->host;296296- struct nfs_page *req;285285+ clear_bit(PG_TEARDOWN, &req->wb_flags);286286+ clear_bit(PG_UNLOCKPAGE, &req->wb_flags);287287+ clear_bit(PG_UPTODATE, &req->wb_flags);288288+ clear_bit(PG_WB_END, &req->wb_flags);289289+ clear_bit(PG_REMOVE, &req->wb_flags);290290+}291291+292292+293293+/*294294+ * nfs_unroll_locks_and_wait - unlock all newly locked reqs and wait on @req295295+ *296296+ * this is a helper function for nfs_lock_and_join_requests297297+ *298298+ * @inode - inode associated with request page group, must be holding inode lock299299+ * @head - head request of page group, must be holding head lock300300+ * @req - request that couldn't lock and needs to wait on the req bit lock301301+ * @nonblock - if true, don't actually wait302302+ *303303+ * NOTE: this must be called holding page_group bit lock and inode spin lock304304+ * and BOTH will be released before returning.305305+ *306306+ * returns 0 on success, < 0 on error.307307+ */308308+static int309309+nfs_unroll_locks_and_wait(struct inode *inode, struct nfs_page *head,310310+ struct nfs_page *req, bool nonblock)311311+ __releases(&inode->i_lock)312312+{313313+ struct nfs_page *tmp;297314 int ret;298315299299- spin_lock(&inode->i_lock);300300- for (;;) {301301- req = nfs_page_find_request_locked(NFS_I(inode), page);302302- if (req == NULL)303303- break;304304- if (nfs_lock_request(req))305305- break;306306- /* Note: If we hold the page lock, as is the case in nfs_writepage,307307- * then the call to nfs_lock_request() will always308308- * succeed 
provided that someone hasn't already marked the309309- * request as dirty (in which case we don't care).310310- */311311- spin_unlock(&inode->i_lock);312312- if (!nonblock)313313- ret = nfs_wait_on_request(req);314314- else315315- ret = -EAGAIN;316316- nfs_release_request(req);317317- if (ret != 0)318318- return ERR_PTR(ret);319319- spin_lock(&inode->i_lock);320320- }316316+ /* relinquish all the locks successfully grabbed this run */317317+ for (tmp = head ; tmp != req; tmp = tmp->wb_this_page)318318+ nfs_unlock_request(tmp);319319+320320+ WARN_ON_ONCE(test_bit(PG_TEARDOWN, &req->wb_flags));321321+322322+ /* grab a ref on the request that will be waited on */323323+ kref_get(&req->wb_kref);324324+325325+ nfs_page_group_unlock(head);321326 spin_unlock(&inode->i_lock);322322- return req;327327+328328+ /* release ref from nfs_page_find_head_request_locked */329329+ nfs_release_request(head);330330+331331+ if (!nonblock)332332+ ret = nfs_wait_on_request(req);333333+ else334334+ ret = -EAGAIN;335335+ nfs_release_request(req);336336+337337+ return ret;338338+}339339+340340+/*341341+ * nfs_destroy_unlinked_subrequests - destroy recently unlinked subrequests342342+ *343343+ * @destroy_list - request list (using wb_this_page) terminated by @old_head344344+ * @old_head - the old head of the list345345+ *346346+ * All subrequests must be locked and removed from all lists, so at this point347347+ * they are only "active" in this function, and possibly in nfs_wait_on_request348348+ * with a reference held by some other context.349349+ */350350+static void351351+nfs_destroy_unlinked_subrequests(struct nfs_page *destroy_list,352352+ struct nfs_page *old_head)353353+{354354+ while (destroy_list) {355355+ struct nfs_page *subreq = destroy_list;356356+357357+ destroy_list = (subreq->wb_this_page == old_head) ?358358+ NULL : subreq->wb_this_page;359359+360360+ WARN_ON_ONCE(old_head != subreq->wb_head);361361+362362+ /* make sure old group is not used */363363+ subreq->wb_head = 
subreq;364364+ subreq->wb_this_page = subreq;365365+366366+ nfs_clear_request_commit(subreq);367367+368368+ /* subreq is now totally disconnected from page group or any369369+ * write / commit lists. last chance to wake any waiters */370370+ nfs_unlock_request(subreq);371371+372372+ if (!test_bit(PG_TEARDOWN, &subreq->wb_flags)) {373373+ /* release ref on old head request */374374+ nfs_release_request(old_head);375375+376376+ nfs_page_group_clear_bits(subreq);377377+378378+ /* release the PG_INODE_REF reference */379379+ if (test_and_clear_bit(PG_INODE_REF, &subreq->wb_flags))380380+ nfs_release_request(subreq);381381+ else382382+ WARN_ON_ONCE(1);383383+ } else {384384+ WARN_ON_ONCE(test_bit(PG_CLEAN, &subreq->wb_flags));385385+ /* zombie requests have already released the last386386+ * reference and were waiting on the rest of the387387+ * group to complete. Since it's no longer part of a388388+ * group, simply free the request */389389+ nfs_page_group_clear_bits(subreq);390390+ nfs_free_request(subreq);391391+ }392392+ }393393+}394394+395395+/*396396+ * nfs_lock_and_join_requests - join all subreqs to the head req and return397397+ * a locked reference, cancelling any pending398398+ * operations for this page.399399+ *400400+ * @page - the page used to lookup the "page group" of nfs_page structures401401+ * @nonblock - if true, don't block waiting for request locks402402+ *403403+ * This function joins all sub requests to the head request by first404404+ * locking all requests in the group, cancelling any pending operations405405+ * and finally updating the head request to cover the whole range covered by406406+ * the (former) group. 
All subrequests are removed from any write or commit407407+ * lists, unlinked from the group and destroyed.408408+ *409409+ * Returns a locked, referenced pointer to the head request - which after410410+ * this call is guaranteed to be the only request associated with the page.411411+ * Returns NULL if no requests are found for @page, or a ERR_PTR if an412412+ * error was encountered.413413+ */414414+static struct nfs_page *415415+nfs_lock_and_join_requests(struct page *page, bool nonblock)416416+{417417+ struct inode *inode = page_file_mapping(page)->host;418418+ struct nfs_page *head, *subreq;419419+ struct nfs_page *destroy_list = NULL;420420+ unsigned int total_bytes;421421+ int ret;422422+423423+try_again:424424+ total_bytes = 0;425425+426426+ WARN_ON_ONCE(destroy_list);427427+428428+ spin_lock(&inode->i_lock);429429+430430+ /*431431+ * A reference is taken only on the head request which acts as a432432+ * reference to the whole page group - the group will not be destroyed433433+ * until the head reference is released.434434+ */435435+ head = nfs_page_find_head_request_locked(NFS_I(inode), page);436436+437437+ if (!head) {438438+ spin_unlock(&inode->i_lock);439439+ return NULL;440440+ }441441+442442+ /* lock each request in the page group */443443+ nfs_page_group_lock(head);444444+ subreq = head;445445+ do {446446+ /*447447+ * Subrequests are always contiguous, non overlapping448448+ * and in order. 
If not, it's a programming error.449449+ */450450+ WARN_ON_ONCE(subreq->wb_offset !=451451+ (head->wb_offset + total_bytes));452452+453453+ /* keep track of how many bytes this group covers */454454+ total_bytes += subreq->wb_bytes;455455+456456+ if (!nfs_lock_request(subreq)) {457457+ /* releases page group bit lock and458458+ * inode spin lock and all references */459459+ ret = nfs_unroll_locks_and_wait(inode, head,460460+ subreq, nonblock);461461+462462+ if (ret == 0)463463+ goto try_again;464464+465465+ return ERR_PTR(ret);466466+ }467467+468468+ subreq = subreq->wb_this_page;469469+ } while (subreq != head);470470+471471+ /* Now that all requests are locked, make sure they aren't on any list.472472+ * Commit list removal accounting is done after locks are dropped */473473+ subreq = head;474474+ do {475475+ nfs_list_remove_request(subreq);476476+ subreq = subreq->wb_this_page;477477+ } while (subreq != head);478478+479479+ /* unlink subrequests from head, destroy them later */480480+ if (head->wb_this_page != head) {481481+ /* destroy list will be terminated by head */482482+ destroy_list = head->wb_this_page;483483+ head->wb_this_page = head;484484+485485+ /* change head request to cover whole range that486486+ * the former page group covered */487487+ head->wb_bytes = total_bytes;488488+ }489489+490490+ /*491491+ * prepare head request to be added to new pgio descriptor492492+ */493493+ nfs_page_group_clear_bits(head);494494+495495+ /*496496+ * some part of the group was still on the inode list - otherwise497497+ * the group wouldn't be involved in async write.498498+ * grab a reference for the head request, iff it needs one.499499+ */500500+ if (!test_and_set_bit(PG_INODE_REF, &head->wb_flags))501501+ kref_get(&head->wb_kref);502502+503503+ nfs_page_group_unlock(head);504504+505505+ /* drop lock to clear_request_commit the head req and clean up506506+ * requests on destroy list */507507+ spin_unlock(&inode->i_lock);508508+509509+ 
nfs_destroy_unlinked_subrequests(destroy_list, head);510510+511511+ /* clean up commit list state */512512+ nfs_clear_request_commit(head);513513+514514+ /* still holds ref on head from nfs_page_find_head_request_locked515515+ * and still has lock on head from lock loop */516516+ return head;323517}324518325519/*···542316 struct nfs_page *req;543317 int ret = 0;544318545545- req = nfs_find_and_lock_request(page, nonblock);319319+ req = nfs_lock_and_join_requests(page, nonblock);546320 if (!req)547321 goto out;548322 ret = PTR_ERR(req);···674448 set_page_private(req->wb_page, (unsigned long)req);675449 }676450 nfsi->npages++;677677- set_bit(PG_INODE_REF, &req->wb_flags);451451+ /* this is a head request for a page group - mark it as having an452452+ * extra reference so sub groups can follow suit */453453+ WARN_ON(test_and_set_bit(PG_INODE_REF, &req->wb_flags));678454 kref_get(&req->wb_kref);679455 spin_unlock(&inode->i_lock);680456}···702474 nfsi->npages--;703475 spin_unlock(&inode->i_lock);704476 }705705- nfs_release_request(req);477477+478478+ if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags))479479+ nfs_release_request(req);706480}707481708482static void···868638{869639 struct nfs_commit_info cinfo;870640 unsigned long bytes = 0;871871- bool do_destroy;872641873642 if (test_bit(NFS_IOHDR_REDO, &hdr->flags))874643 goto out;···897668next:898669 nfs_unlock_request(req);899670 nfs_end_page_writeback(req);900900- do_destroy = !test_bit(NFS_IOHDR_NEED_COMMIT, &hdr->flags);901671 nfs_release_request(req);902672 }903673out:···997769 spin_lock(&inode->i_lock);998770999771 for (;;) {10001000- req = nfs_page_find_request_locked(NFS_I(inode), page);772772+ req = nfs_page_find_head_request_locked(NFS_I(inode), page);1001773 if (req == NULL)1002774 goto out_unlock;1003775···1105877 * dropped page.1106878 */1107879 do {11081108- req = nfs_page_find_request(page);880880+ req = nfs_page_find_head_request(page);1109881 if (req == NULL)1110882 return 0;1111883 l_ctx = 
req->wb_lock_context;···17971569 struct nfs_page *req;17981570 int ret = 0;1799157118001800- for (;;) {18011801- wait_on_page_writeback(page);18021802- req = nfs_page_find_request(page);18031803- if (req == NULL)18041804- break;18051805- if (nfs_lock_request(req)) {18061806- nfs_clear_request_commit(req);18071807- nfs_inode_remove_request(req);18081808- /*18091809- * In case nfs_inode_remove_request has marked the18101810- * page as being dirty18111811- */18121812- cancel_dirty_page(page, PAGE_CACHE_SIZE);18131813- nfs_unlock_and_release_request(req);18141814- break;18151815- }18161816- ret = nfs_wait_on_request(req);18171817- nfs_release_request(req);18181818- if (ret < 0)18191819- break;15721572+ wait_on_page_writeback(page);15731573+15741574+ /* blocking call to cancel all requests and join to a single (head)15751575+ * request */15761576+ req = nfs_lock_and_join_requests(page, false);15771577+15781578+ if (IS_ERR(req)) {15791579+ ret = PTR_ERR(req);15801580+ } else if (req) {15811581+ /* all requests from this page have been cancelled by15821582+ * nfs_lock_and_join_requests, so just remove the head15831583+ * request from the inode / page_private pointer and15841584+ * release it */15851585+ nfs_inode_remove_request(req);15861586+ /*15871587+ * In case nfs_inode_remove_request has marked the15881588+ * page as being dirty15891589+ */15901590+ cancel_dirty_page(page, PAGE_CACHE_SIZE);15911591+ nfs_unlock_and_release_request(req);18201592 }15931593+18211594 return ret;18221595}18231596
···7777 * from written to unwritten, otherwise convert from unwritten to written.7878 */7979#define XFS_BMAPI_CONVERT 0x0408080-#define XFS_BMAPI_STACK_SWITCH 0x08081808281#define XFS_BMAPI_FLAGS \8382 { XFS_BMAPI_ENTIRE, "ENTIRE" }, \···8586 { XFS_BMAPI_PREALLOC, "PREALLOC" }, \8687 { XFS_BMAPI_IGSTATE, "IGSTATE" }, \8788 { XFS_BMAPI_CONTIG, "CONTIG" }, \8888- { XFS_BMAPI_CONVERT, "CONVERT" }, \8989- { XFS_BMAPI_STACK_SWITCH, "STACK_SWITCH" }8989+ { XFS_BMAPI_CONVERT, "CONVERT" }909091919292static inline int xfs_bmapi_aflag(int w)
-53
fs/xfs/xfs_bmap_util.c
···249249}250250251251/*252252- * Stack switching interfaces for allocation253253- */254254-static void255255-xfs_bmapi_allocate_worker(256256- struct work_struct *work)257257-{258258- struct xfs_bmalloca *args = container_of(work,259259- struct xfs_bmalloca, work);260260- unsigned long pflags;261261- unsigned long new_pflags = PF_FSTRANS;262262-263263- /*264264- * we are in a transaction context here, but may also be doing work265265- * in kswapd context, and hence we may need to inherit that state266266- * temporarily to ensure that we don't block waiting for memory reclaim267267- * in any way.268268- */269269- if (args->kswapd)270270- new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD;271271-272272- current_set_flags_nested(&pflags, new_pflags);273273-274274- args->result = __xfs_bmapi_allocate(args);275275- complete(args->done);276276-277277- current_restore_flags_nested(&pflags, new_pflags);278278-}279279-280280-/*281281- * Some allocation requests often come in with little stack to work on. Push282282- * them off to a worker thread so there is lots of stack to use. Otherwise just283283- * call directly to avoid the context switch overhead here.284284- */285285-int286286-xfs_bmapi_allocate(287287- struct xfs_bmalloca *args)288288-{289289- DECLARE_COMPLETION_ONSTACK(done);290290-291291- if (!args->stack_switch)292292- return __xfs_bmapi_allocate(args);293293-294294-295295- args->done = &done;296296- args->kswapd = current_is_kswapd();297297- INIT_WORK_ONSTACK(&args->work, xfs_bmapi_allocate_worker);298298- queue_work(xfs_alloc_wq, &args->work);299299- wait_for_completion(&done);300300- destroy_work_on_stack(&args->work);301301- return args->result;302302-}303303-304304-/*305252 * Check if the endoff is outside the last extent. If so the caller will grow306253 * the allocation to a stripe unit boundary. All offsets are considered outside307254 * the end of file for an empty fork, so 1 is returned in *eof in that case.
-4
fs/xfs/xfs_bmap_util.h
···5555 bool userdata;/* set if is user data */5656 bool aeof; /* allocated space at eof */5757 bool conv; /* overwriting unwritten extents */5858- bool stack_switch;5959- bool kswapd; /* allocation in kswapd context */6058 int flags;6159 struct completion *done;6260 struct work_struct work;···6466int xfs_bmap_finish(struct xfs_trans **tp, struct xfs_bmap_free *flist,6567 int *committed);6668int xfs_bmap_rtalloc(struct xfs_bmalloca *ap);6767-int xfs_bmapi_allocate(struct xfs_bmalloca *args);6868-int __xfs_bmapi_allocate(struct xfs_bmalloca *args);6969int xfs_bmap_eof(struct xfs_inode *ip, xfs_fileoff_t endoff,7070 int whichfork, int *eof);7171int xfs_bmap_count_blocks(struct xfs_trans *tp, struct xfs_inode *ip,
+81-1
fs/xfs/xfs_btree.c
···3333#include "xfs_error.h"3434#include "xfs_trace.h"3535#include "xfs_cksum.h"3636+#include "xfs_alloc.h"36373738/*3839 * Cursor allocation zone.···23242323 * record (to be inserted into parent).23252324 */23262325STATIC int /* error */23272327-xfs_btree_split(23262326+__xfs_btree_split(23282327 struct xfs_btree_cur *cur,23292328 int level,23302329 union xfs_btree_ptr *ptrp,···25032502 XFS_BTREE_TRACE_CURSOR(cur, XBT_ERROR);25042503 return error;25052504}25052505+25062506+struct xfs_btree_split_args {25072507+ struct xfs_btree_cur *cur;25082508+ int level;25092509+ union xfs_btree_ptr *ptrp;25102510+ union xfs_btree_key *key;25112511+ struct xfs_btree_cur **curp;25122512+ int *stat; /* success/failure */25132513+ int result;25142514+ bool kswapd; /* allocation in kswapd context */25152515+ struct completion *done;25162516+ struct work_struct work;25172517+};25182518+25192519+/*25202520+ * Stack switching interfaces for allocation25212521+ */25222522+static void25232523+xfs_btree_split_worker(25242524+ struct work_struct *work)25252525+{25262526+ struct xfs_btree_split_args *args = container_of(work,25272527+ struct xfs_btree_split_args, work);25282528+ unsigned long pflags;25292529+ unsigned long new_pflags = PF_FSTRANS;25302530+25312531+ /*25322532+ * we are in a transaction context here, but may also be doing work25332533+ * in kswapd context, and hence we may need to inherit that state25342534+ * temporarily to ensure that we don't block waiting for memory reclaim25352535+ * in any way.25362536+ */25372537+ if (args->kswapd)25382538+ new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD;25392539+25402540+ current_set_flags_nested(&pflags, new_pflags);25412541+25422542+ args->result = __xfs_btree_split(args->cur, args->level, args->ptrp,25432543+ args->key, args->curp, args->stat);25442544+ complete(args->done);25452545+25462546+ current_restore_flags_nested(&pflags, new_pflags);25472547+}25482548+25492549+/*25502550+ * BMBT split requests often come in with 
little stack to work on. Push25512551+ * them off to a worker thread so there is lots of stack to use. For the other25522552+ * btree types, just call directly to avoid the context switch overhead here.25532553+ */25542554+STATIC int /* error */25552555+xfs_btree_split(25562556+ struct xfs_btree_cur *cur,25572557+ int level,25582558+ union xfs_btree_ptr *ptrp,25592559+ union xfs_btree_key *key,25602560+ struct xfs_btree_cur **curp,25612561+ int *stat) /* success/failure */25622562+{25632563+ struct xfs_btree_split_args args;25642564+ DECLARE_COMPLETION_ONSTACK(done);25652565+25662566+ if (cur->bc_btnum != XFS_BTNUM_BMAP)25672567+ return __xfs_btree_split(cur, level, ptrp, key, curp, stat);25682568+25692569+ args.cur = cur;25702570+ args.level = level;25712571+ args.ptrp = ptrp;25722572+ args.key = key;25732573+ args.curp = curp;25742574+ args.stat = stat;25752575+ args.done = &done;25762576+ args.kswapd = current_is_kswapd();25772577+ INIT_WORK_ONSTACK(&args.work, xfs_btree_split_worker);25782578+ queue_work(xfs_alloc_wq, &args.work);25792579+ wait_for_completion(&done);25802580+ destroy_work_on_stack(&args.work);25812581+ return args.result;25822582+}25832583+2506258425072585/*25082586 * Copy the old inode root contents into a real block and make the
+1-2
fs/xfs/xfs_iomap.c
···749749 * pointer that the caller gave to us.750750 */751751 error = xfs_bmapi_write(tp, ip, map_start_fsb,752752- count_fsb,753753- XFS_BMAPI_STACK_SWITCH,752752+ count_fsb, 0,754753 &first_block, 1,755754 imap, &nimaps, &free_list);756755 if (error)
+21-4
fs/xfs/xfs_sb.c
···483483 }484484485485 /*486486- * GQUOTINO and PQUOTINO cannot be used together in versions487487- * of superblock that do not have pquotino. from->sb_flags488488- * tells us which quota is active and should be copied to489489- * disk.486486+ * GQUOTINO and PQUOTINO cannot be used together in versions of487487+ * superblock that do not have pquotino. from->sb_flags tells us which488488+ * quota is active and should be copied to disk. If neither are active,489489+ * make sure we write NULLFSINO to the sb_gquotino field as a quota490490+ * inode value of "0" is invalid when the XFS_SB_VERSION_QUOTA feature491491+ * bit is set.492492+ *493493+ * Note that we don't need to handle the sb_uquotino or sb_pquotino here494494+ * as they do not require any translation. Hence the main sb field loop495495+ * will write them appropriately from the in-core superblock.490496 */491497 if ((*fields & XFS_SB_GQUOTINO) &&492498 (from->sb_qflags & XFS_GQUOTA_ACCT))···500494 else if ((*fields & XFS_SB_PQUOTINO) &&501495 (from->sb_qflags & XFS_PQUOTA_ACCT))502496 to->sb_gquotino = cpu_to_be64(from->sb_pquotino);497497+ else {498498+ /*499499+ * We can't rely on just the fields being logged to tell us500500+ * that it is safe to write NULLFSINO - we should only do that501501+ * if quotas are not actually enabled. Hence only write502502+ * NULLFSINO if both in-core quota inodes are NULL.503503+ */504504+ if (from->sb_gquotino == NULLFSINO &&505505+ from->sb_pquotino == NULLFSINO)506506+ to->sb_gquotino = cpu_to_be64(NULLFSINO);507507+ }503508504509 *fields &= ~(XFS_SB_PQUOTINO | XFS_SB_GQUOTINO);505510}
+2
include/acpi/video.h
···2222extern void acpi_video_unregister_backlight(void);2323extern int acpi_video_get_edid(struct acpi_device *device, int type,2424 int device_id, void **edid);2525+extern bool acpi_video_verify_backlight_support(void);2526#else2627static inline int acpi_video_register(void) { return 0; }2728static inline void acpi_video_unregister(void) { return; }···3231{3332 return -ENODEV;3433}3434+static inline bool acpi_video_verify_backlight_support(void) { return false; }3535#endif36363737#endif
···482482 *********************************************************************/483483484484/* Special Values of .frequency field */485485-#define CPUFREQ_ENTRY_INVALID ~0486486-#define CPUFREQ_TABLE_END ~1485485+#define CPUFREQ_ENTRY_INVALID ~0u486486+#define CPUFREQ_TABLE_END ~1u487487/* Special Values of .flags field */488488#define CPUFREQ_BOOST_FREQ (1 << 0)489489
···11+#ifndef __LINUX_OSQ_LOCK_H22+#define __LINUX_OSQ_LOCK_H33+44+/*55+ * An MCS like lock especially tailored for optimistic spinning for sleeping66+ * lock implementations (mutex, rwsem, etc).77+ */88+99+#define OSQ_UNLOCKED_VAL (0)1010+1111+struct optimistic_spin_queue {1212+ /*1313+ * Stores an encoded value of the CPU # of the tail node in the queue.1414+ * If the queue is empty, then it's set to OSQ_UNLOCKED_VAL.1515+ */1616+ atomic_t tail;1717+};1818+1919+/* Init macro and function. */2020+#define OSQ_LOCK_UNLOCKED { ATOMIC_INIT(OSQ_UNLOCKED_VAL) }2121+2222+static inline void osq_lock_init(struct optimistic_spin_queue *lock)2323+{2424+ atomic_set(&lock->tail, OSQ_UNLOCKED_VAL);2525+}2626+2727+#endif
+2-2
include/linux/percpu-defs.h
···146146 * Declaration/definition used for per-CPU variables that must be read mostly.147147 */148148#define DECLARE_PER_CPU_READ_MOSTLY(type, name) \149149- DECLARE_PER_CPU_SECTION(type, name, "..readmostly")149149+ DECLARE_PER_CPU_SECTION(type, name, "..read_mostly")150150151151#define DEFINE_PER_CPU_READ_MOSTLY(type, name) \152152- DEFINE_PER_CPU_SECTION(type, name, "..readmostly")152152+ DEFINE_PER_CPU_SECTION(type, name, "..read_mostly")153153154154/*155155 * Intermodule exports for per-CPU variables. sparse forgets about
+10-36
include/linux/rcupdate.h
···4444#include <linux/debugobjects.h>4545#include <linux/bug.h>4646#include <linux/compiler.h>4747-#include <linux/percpu.h>4847#include <asm/barrier.h>49485049extern int rcu_expedited; /* for sysctl */···299300#endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */300301301302/*302302- * Hooks for cond_resched() and friends to avoid RCU CPU stall warnings.303303- */304304-305305-#define RCU_COND_RESCHED_LIM 256 /* ms vs. 100s of ms. */306306-DECLARE_PER_CPU(int, rcu_cond_resched_count);307307-void rcu_resched(void);308308-309309-/*310310- * Is it time to report RCU quiescent states?311311- *312312- * Note unsynchronized access to rcu_cond_resched_count. Yes, we might313313- * increment some random CPU's count, and possibly also load the result from314314- * yet another CPU's count. We might even clobber some other CPU's attempt315315- * to zero its counter. This is all OK because the goal is not precision,316316- * but rather reasonable amortization of rcu_note_context_switch() overhead317317- * and extremely high probability of avoiding RCU CPU stall warnings.318318- * Note that this function has to be preempted in just the wrong place,319319- * many thousands of times in a row, for anything bad to happen.320320- */321321-static inline bool rcu_should_resched(void)322322-{323323- return raw_cpu_inc_return(rcu_cond_resched_count) >=324324- RCU_COND_RESCHED_LIM;325325-}326326-327327-/*328328- * Report quiescent states to RCU if it is time to do so.329329- */330330-static inline void rcu_cond_resched(void)331331-{332332- if (unlikely(rcu_should_resched()))333333- rcu_resched();334334-}335335-336336-/*337303 * Infrastructure to implement the synchronize_() primitives in338304 * TREE_RCU and rcu_barrier_() primitives in TINY_RCU.339305 */···322358 * initialization.323359 */324360#ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD361361+void init_rcu_head(struct rcu_head *head);362362+void destroy_rcu_head(struct rcu_head 
*head);325363void init_rcu_head_on_stack(struct rcu_head *head);326364void destroy_rcu_head_on_stack(struct rcu_head *head);327365#else /* !CONFIG_DEBUG_OBJECTS_RCU_HEAD */366366+static inline void init_rcu_head(struct rcu_head *head)367367+{368368+}369369+370370+static inline void destroy_rcu_head(struct rcu_head *head)371371+{372372+}373373+328374static inline void init_rcu_head_on_stack(struct rcu_head *head)329375{330376}
+4-4
include/linux/rwsem-spinlock.h
···1515#ifdef __KERNEL__1616/*1717 * the rw-semaphore definition1818- * - if activity is 0 then there are no active readers or writers1919- * - if activity is +ve then that is the number of active readers2020- * - if activity is -1 then there is one active writer1818+ * - if count is 0 then there are no active readers or writers1919+ * - if count is +ve then that is the number of active readers2020+ * - if count is -1 then there is one active writer2121 * - if wait_list is not empty, then there are processes waiting for the semaphore2222 */2323struct rw_semaphore {2424- __s32 activity;2424+ __s32 count;2525 raw_spinlock_t wait_lock;2626 struct list_head wait_list;2727#ifdef CONFIG_DEBUG_LOCK_ALLOC
+16-18
include/linux/rwsem.h
···1313#include <linux/kernel.h>1414#include <linux/list.h>1515#include <linux/spinlock.h>1616-1716#include <linux/atomic.h>1717+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER1818+#include <linux/osq_lock.h>1919+#endif18201919-struct optimistic_spin_queue;2021struct rw_semaphore;21222223#ifdef CONFIG_RWSEM_GENERIC_SPINLOCK···2625/* All arch specific implementations share the same struct */2726struct rw_semaphore {2827 long count;2929- raw_spinlock_t wait_lock;3028 struct list_head wait_list;3131-#ifdef CONFIG_SMP2929+ raw_spinlock_t wait_lock;3030+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER3131+ struct optimistic_spin_queue osq; /* spinner MCS lock */3232 /*3333 * Write owner. Used as a speculative check to see3434 * if the owner is running on the cpu.3535 */3636 struct task_struct *owner;3737- struct optimistic_spin_queue *osq; /* spinner MCS lock */3837#endif3938#ifdef CONFIG_DEBUG_LOCK_ALLOC4039 struct lockdep_map dep_map;···6564# define __RWSEM_DEP_MAP_INIT(lockname)6665#endif67666868-#if defined(CONFIG_SMP) && !defined(CONFIG_RWSEM_GENERIC_SPINLOCK)6969-#define __RWSEM_INITIALIZER(name) \7070- { RWSEM_UNLOCKED_VALUE, \7171- __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \7272- LIST_HEAD_INIT((name).wait_list), \7373- NULL, /* owner */ \7474- NULL /* mcs lock */ \7575- __RWSEM_DEP_MAP_INIT(name) }6767+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER6868+#define __RWSEM_OPT_INIT(lockname) , .osq = OSQ_LOCK_UNLOCKED, .owner = NULL7669#else7777-#define __RWSEM_INITIALIZER(name) \7878- { RWSEM_UNLOCKED_VALUE, \7979- __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \8080- LIST_HEAD_INIT((name).wait_list) \8181- __RWSEM_DEP_MAP_INIT(name) }7070+#define __RWSEM_OPT_INIT(lockname)8271#endif7272+7373+#define __RWSEM_INITIALIZER(name) \7474+ { .count = RWSEM_UNLOCKED_VALUE, \7575+ .wait_list = LIST_HEAD_INIT((name).wait_list), \7676+ .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock) \7777+ __RWSEM_OPT_INIT(name) \7878+ __RWSEM_DEP_MAP_INIT(name) }83798480#define DECLARE_RWSEM(name) \8581 struct rw_semaphore name = 
__RWSEM_INITIALIZER(name)
+4-4
include/linux/sched.h
···872872#define SD_NUMA 0x4000 /* cross-node balancing */873873874874#ifdef CONFIG_SCHED_SMT875875-static inline const int cpu_smt_flags(void)875875+static inline int cpu_smt_flags(void)876876{877877 return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;878878}879879#endif880880881881#ifdef CONFIG_SCHED_MC882882-static inline const int cpu_core_flags(void)882882+static inline int cpu_core_flags(void)883883{884884 return SD_SHARE_PKG_RESOURCES;885885}886886#endif887887888888#ifdef CONFIG_NUMA889889-static inline const int cpu_numa_flags(void)889889+static inline int cpu_numa_flags(void)890890{891891 return SD_NUMA;892892}···999999bool cpus_share_cache(int this_cpu, int that_cpu);1000100010011001typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);10021002-typedef const int (*sched_domain_flags_f)(void);10021002+typedef int (*sched_domain_flags_f)(void);1003100310041004#define SDTL_OVERLAP 0x0110051005
-1
include/net/neighbour.h
···203203 void (*proxy_redo)(struct sk_buff *skb);204204 char *id;205205 struct neigh_parms parms;206206- /* HACK. gc_* should follow parms without a gap! */207206 int gc_interval;208207 int gc_thresh1;209208 int gc_thresh2;
···220220221221endif222222223223+config ARCH_SUPPORTS_ATOMIC_RMW224224+ bool225225+223226config MUTEX_SPIN_ON_OWNER224227 def_bool y225225- depends on SMP && !DEBUG_MUTEXES228228+ depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW229229+230230+config RWSEM_SPIN_ON_OWNER231231+ def_bool y232232+ depends on SMP && RWSEM_XCHGADD_ALGORITHM && ARCH_SUPPORTS_ATOMIC_RMW226233227234config ARCH_USE_QUEUE_RWLOCK228235 bool
+50-8
kernel/cgroup.c
···16481648 int flags, const char *unused_dev_name,16491649 void *data)16501650{16511651+ struct super_block *pinned_sb = NULL;16521652+ struct cgroup_subsys *ss;16511653 struct cgroup_root *root;16521654 struct cgroup_sb_opts opts;16531655 struct dentry *dentry;16541656 int ret;16571657+ int i;16551658 bool new_sb;1656165916571660 /*···16781675 cgroup_get(&root->cgrp);16791676 ret = 0;16801677 goto out_unlock;16781678+ }16791679+16801680+ /*16811681+ * Destruction of cgroup root is asynchronous, so subsystems may16821682+ * still be dying after the previous unmount. Let's drain the16831683+ * dying subsystems. We just need to ensure that the ones16841684+ * unmounted previously finish dying and don't care about new ones16851685+ * starting. Testing ref liveliness is good enough.16861686+ */16871687+ for_each_subsys(ss, i) {16881688+ if (!(opts.subsys_mask & (1 << i)) ||16891689+ ss->root == &cgrp_dfl_root)16901690+ continue;16911691+16921692+ if (!percpu_ref_tryget_live(&ss->root->cgrp.self.refcnt)) {16931693+ mutex_unlock(&cgroup_mutex);16941694+ msleep(10);16951695+ ret = restart_syscall();16961696+ goto out_free;16971697+ }16981698+ cgroup_put(&ss->root->cgrp);16811699 }1682170016831701 for_each_root(root) {···17411717 }1742171817431719 /*17441744- * A root's lifetime is governed by its root cgroup.17451745- * tryget_live failure indicate that the root is being17461746- * destroyed. Wait for destruction to complete so that the17471747- * subsystems are free. We can use wait_queue for the wait17481748- * but this path is super cold. Let's just sleep for a bit17491749- * and retry.17201720+ * We want to reuse @root whose lifetime is governed by its17211721+ * ->cgrp. Let's check whether @root is alive and keep it17221722+ * that way. 
As cgroup_kill_sb() can happen anytime, we17231723+ * want to block it by pinning the sb so that @root doesn't17241724+ * get killed before mount is complete.17251725+ *17261726+ * With the sb pinned, tryget_live can reliably indicate17271727+ * whether @root can be reused. If it's being killed,17281728+ * drain it. We can use wait_queue for the wait but this17291729+ * path is super cold. Let's just sleep a bit and retry.17501730 */17511751- if (!percpu_ref_tryget_live(&root->cgrp.self.refcnt)) {17311731+ pinned_sb = kernfs_pin_sb(root->kf_root, NULL);17321732+ if (IS_ERR(pinned_sb) ||17331733+ !percpu_ref_tryget_live(&root->cgrp.self.refcnt)) {17521734 mutex_unlock(&cgroup_mutex);17351735+ if (!IS_ERR_OR_NULL(pinned_sb))17361736+ deactivate_super(pinned_sb);17531737 msleep(10);17541738 ret = restart_syscall();17551739 goto out_free;···18021770 CGROUP_SUPER_MAGIC, &new_sb);18031771 if (IS_ERR(dentry) || !new_sb)18041772 cgroup_put(&root->cgrp);17731773+17741774+ /*17751775+ * If @pinned_sb, we're reusing an existing root and holding an17761776+ * extra ref on its sb. Mount is complete. Put the extra ref.17771777+ */17781778+ if (pinned_sb) {17791779+ WARN_ON(new_sb);17801780+ deactivate_super(pinned_sb);17811781+ }17821782+18051783 return dentry;18061784}18071785···3370332833713329 rcu_read_lock();33723330 css_for_each_child(child, css) {33733373- if (css->flags & CSS_ONLINE) {33313331+ if (child->flags & CSS_ONLINE) {33743332 ret = true;33753333 break;33763334 }
+19-1
kernel/cpuset.c
···1181118111821182int current_cpuset_is_being_rebound(void)11831183{11841184- return task_cs(current) == cpuset_being_rebound;11841184+ int ret;11851185+11861186+ rcu_read_lock();11871187+ ret = task_cs(current) == cpuset_being_rebound;11881188+ rcu_read_unlock();11891189+11901190+ return ret;11851191}1186119211871193static int update_relax_domain_level(struct cpuset *cs, s64 val)···16231617 * resources, wait for the previously scheduled operations before16241618 * proceeding, so that we don't end up keep removing tasks added16251619 * after execution capability is restored.16201620+ *16211621+ * cpuset_hotplug_work calls back into cgroup core via16221622+ * cgroup_transfer_tasks() and waiting for it from a cgroupfs16231623+ * operation like this one can lead to a deadlock through kernfs16241624+ * active_ref protection. Let's break the protection. Losing the16251625+ * protection is okay as we check whether @cs is online after16261626+ * grabbing cpuset_mutex anyway. This only happens on the legacy16271627+ * hierarchies.16261628 */16291629+ css_get(&cs->css);16301630+ kernfs_break_active_protection(of->kn);16271631 flush_work(&cpuset_hotplug_work);1628163216291633 mutex_lock(&cpuset_mutex);···16611645 free_trial_cpuset(trialcs);16621646out_unlock:16631647 mutex_unlock(&cpuset_mutex);16481648+ kernfs_unbreak_active_protection(of->kn);16491649+ css_put(&cs->css);16641650 return retval ?: nbytes;16651651}16661652
+1-1
kernel/events/core.c
···23202320 next_parent = rcu_dereference(next_ctx->parent_ctx);2321232123222322 /* If neither context have a parent context; they cannot be clones. */23232323- if (!parent && !next_parent)23232323+ if (!parent || !next_parent)23242324 goto unlock;2325232523262326 if (next_parent == ctx || next_ctx == parent || next_parent == parent) {
+48-16
kernel/locking/mcs_spinlock.c
···1414 * called from interrupt context and we have preemption disabled while1515 * spinning.1616 */1717-static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_queue, osq_node);1717+static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_node, osq_node);1818+1919+/*2020+ * We use the value 0 to represent "no CPU", thus the encoded value2121+ * will be the CPU number incremented by 1.2222+ */2323+static inline int encode_cpu(int cpu_nr)2424+{2525+ return cpu_nr + 1;2626+}2727+2828+static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val)2929+{3030+ int cpu_nr = encoded_cpu_val - 1;3131+3232+ return per_cpu_ptr(&osq_node, cpu_nr);3333+}18341935/*2036 * Get a stable @node->next pointer, either for unlock() or unqueue() purposes.2137 * Can return NULL in case we were the last queued and we updated @lock instead.2238 */2323-static inline struct optimistic_spin_queue *2424-osq_wait_next(struct optimistic_spin_queue **lock,2525- struct optimistic_spin_queue *node,2626- struct optimistic_spin_queue *prev)3939+static inline struct optimistic_spin_node *4040+osq_wait_next(struct optimistic_spin_queue *lock,4141+ struct optimistic_spin_node *node,4242+ struct optimistic_spin_node *prev)2743{2828- struct optimistic_spin_queue *next = NULL;4444+ struct optimistic_spin_node *next = NULL;4545+ int curr = encode_cpu(smp_processor_id());4646+ int old;4747+4848+ /*4949+ * If there is a prev node in queue, then the 'old' value will be5050+ * the prev node's CPU #, else it's set to OSQ_UNLOCKED_VAL since if5151+ * we're currently last in queue, then the queue will then become empty.5252+ */5353+ old = prev ? prev->cpu : OSQ_UNLOCKED_VAL;29543055 for (;;) {3131- if (*lock == node && cmpxchg(lock, node, prev) == node) {5656+ if (atomic_read(&lock->tail) == curr &&5757+ atomic_cmpxchg(&lock->tail, curr, old) == curr) {3258 /*3359 * We were the last queued, we moved @lock back. 
@prev3460 * will now observe @lock and will complete its···8559 return next;8660}87618888-bool osq_lock(struct optimistic_spin_queue **lock)6262+bool osq_lock(struct optimistic_spin_queue *lock)8963{9090- struct optimistic_spin_queue *node = this_cpu_ptr(&osq_node);9191- struct optimistic_spin_queue *prev, *next;6464+ struct optimistic_spin_node *node = this_cpu_ptr(&osq_node);6565+ struct optimistic_spin_node *prev, *next;6666+ int curr = encode_cpu(smp_processor_id());6767+ int old;92689369 node->locked = 0;9470 node->next = NULL;7171+ node->cpu = curr;95729696- node->prev = prev = xchg(lock, node);9797- if (likely(prev == NULL))7373+ old = atomic_xchg(&lock->tail, curr);7474+ if (old == OSQ_UNLOCKED_VAL)9875 return true;99767777+ prev = decode_cpu(old);7878+ node->prev = prev;10079 ACCESS_ONCE(prev->next) = node;1018010281 /*···180149 return false;181150}182151183183-void osq_unlock(struct optimistic_spin_queue **lock)152152+void osq_unlock(struct optimistic_spin_queue *lock)184153{185185- struct optimistic_spin_queue *node = this_cpu_ptr(&osq_node);186186- struct optimistic_spin_queue *next;154154+ struct optimistic_spin_node *node, *next;155155+ int curr = encode_cpu(smp_processor_id());187156188157 /*189158 * Fast path for the uncontended case.190159 */191191- if (likely(cmpxchg(lock, node, NULL) == node))160160+ if (likely(atomic_cmpxchg(&lock->tail, curr, OSQ_UNLOCKED_VAL) == curr))192161 return;193162194163 /*195164 * Second most likely case.196165 */166166+ node = this_cpu_ptr(&osq_node);197167 next = xchg(&node->next, NULL);198168 if (next) {199169 ACCESS_ONCE(next->locked) = 1;
···2626 unsigned long flags;27272828 if (raw_spin_trylock_irqsave(&sem->wait_lock, flags)) {2929- ret = (sem->activity != 0);2929+ ret = (sem->count != 0);3030 raw_spin_unlock_irqrestore(&sem->wait_lock, flags);3131 }3232 return ret;···4646 debug_check_no_locks_freed((void *)sem, sizeof(*sem));4747 lockdep_init_map(&sem->dep_map, name, key, 0);4848#endif4949- sem->activity = 0;4949+ sem->count = 0;5050 raw_spin_lock_init(&sem->wait_lock);5151 INIT_LIST_HEAD(&sem->wait_list);5252}···9595 waiter = list_entry(next, struct rwsem_waiter, list);9696 } while (waiter->type != RWSEM_WAITING_FOR_WRITE);97979898- sem->activity += woken;9898+ sem->count += woken;9999100100 out:101101 return sem;···126126127127 raw_spin_lock_irqsave(&sem->wait_lock, flags);128128129129- if (sem->activity >= 0 && list_empty(&sem->wait_list)) {129129+ if (sem->count >= 0 && list_empty(&sem->wait_list)) {130130 /* granted */131131- sem->activity++;131131+ sem->count++;132132 raw_spin_unlock_irqrestore(&sem->wait_lock, flags);133133 goto out;134134 }···170170171171 raw_spin_lock_irqsave(&sem->wait_lock, flags);172172173173- if (sem->activity >= 0 && list_empty(&sem->wait_list)) {173173+ if (sem->count >= 0 && list_empty(&sem->wait_list)) {174174 /* granted */175175- sem->activity++;175175+ sem->count++;176176 ret = 1;177177 }178178···206206 * itself into sleep and waiting for system woke it or someone207207 * else in the head of the wait list up.208208 */209209- if (sem->activity == 0)209209+ if (sem->count == 0)210210 break;211211 set_task_state(tsk, TASK_UNINTERRUPTIBLE);212212 raw_spin_unlock_irqrestore(&sem->wait_lock, flags);···214214 raw_spin_lock_irqsave(&sem->wait_lock, flags);215215 }216216 /* got the lock */217217- sem->activity = -1;217217+ sem->count = -1;218218 list_del(&waiter.list);219219220220 raw_spin_unlock_irqrestore(&sem->wait_lock, flags);···235235236236 raw_spin_lock_irqsave(&sem->wait_lock, flags);237237238238- if (sem->activity == 0) {238238+ if (sem->count == 0) {239239 /* 
got the lock */240240- sem->activity = -1;240240+ sem->count = -1;241241 ret = 1;242242 }243243···255255256256 raw_spin_lock_irqsave(&sem->wait_lock, flags);257257258258- if (--sem->activity == 0 && !list_empty(&sem->wait_list))258258+ if (--sem->count == 0 && !list_empty(&sem->wait_list))259259 sem = __rwsem_wake_one_writer(sem);260260261261 raw_spin_unlock_irqrestore(&sem->wait_lock, flags);···270270271271 raw_spin_lock_irqsave(&sem->wait_lock, flags);272272273273- sem->activity = 0;273273+ sem->count = 0;274274 if (!list_empty(&sem->wait_list))275275 sem = __rwsem_do_wake(sem, 1);276276···287287288288 raw_spin_lock_irqsave(&sem->wait_lock, flags);289289290290- sem->activity = 1;290290+ sem->count = 1;291291 if (!list_empty(&sem->wait_list))292292 sem = __rwsem_do_wake(sem, 0);293293
+8-8
kernel/locking/rwsem-xadd.c
···8282 sem->count = RWSEM_UNLOCKED_VALUE;8383 raw_spin_lock_init(&sem->wait_lock);8484 INIT_LIST_HEAD(&sem->wait_list);8585-#ifdef CONFIG_SMP8585+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER8686 sem->owner = NULL;8787- sem->osq = NULL;8787+ osq_lock_init(&sem->osq);8888#endif8989}9090···262262 return false;263263}264264265265-#ifdef CONFIG_SMP265265+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER266266/*267267 * Try to acquire write lock before the writer has been put on wait queue.268268 */···285285static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)286286{287287 struct task_struct *owner;288288- bool on_cpu = true;288288+ bool on_cpu = false;289289290290 if (need_resched())291291- return 0;291291+ return false;292292293293 rcu_read_lock();294294 owner = ACCESS_ONCE(sem->owner);···297297 rcu_read_unlock();298298299299 /*300300- * If sem->owner is not set, the rwsem owner may have301301- * just acquired it and not set the owner yet or the rwsem302302- * has been released.300300+ * If sem->owner is not set, yet we have just recently entered the301301+ * slowpath, then there is a possibility reader(s) may have the lock.302302+ * To be safe, avoid spinning in these situations.303303 */304304 return on_cpu;305305}
···306306 error = suspend_ops->begin(state);307307 if (error)308308 goto Close;309309- } else if (state == PM_SUSPEND_FREEZE && freeze_ops->begin) {309309+ } else if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->begin) {310310 error = freeze_ops->begin();311311 if (error)312312 goto Close;···335335 Close:336336 if (need_suspend_ops(state) && suspend_ops->end)337337 suspend_ops->end();338338- else if (state == PM_SUSPEND_FREEZE && freeze_ops->end)338338+ else if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->end)339339 freeze_ops->end();340340341341 return error;
+112-28
kernel/rcu/tree.c
···206206 rdp->passed_quiesce = 1;207207}208208209209+static DEFINE_PER_CPU(int, rcu_sched_qs_mask);210210+211211+static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = {212212+ .dynticks_nesting = DYNTICK_TASK_EXIT_IDLE,213213+ .dynticks = ATOMIC_INIT(1),214214+#ifdef CONFIG_NO_HZ_FULL_SYSIDLE215215+ .dynticks_idle_nesting = DYNTICK_TASK_NEST_VALUE,216216+ .dynticks_idle = ATOMIC_INIT(1),217217+#endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */218218+};219219+220220+/*221221+ * Let the RCU core know that this CPU has gone through the scheduler,222222+ * which is a quiescent state. This is called when the need for a223223+ * quiescent state is urgent, so we burn an atomic operation and full224224+ * memory barriers to let the RCU core know about it, regardless of what225225+ * this CPU might (or might not) do in the near future.226226+ *227227+ * We inform the RCU core by emulating a zero-duration dyntick-idle228228+ * period, which we in turn do by incrementing the ->dynticks counter229229+ * by two.230230+ */231231+static void rcu_momentary_dyntick_idle(void)232232+{233233+ unsigned long flags;234234+ struct rcu_data *rdp;235235+ struct rcu_dynticks *rdtp;236236+ int resched_mask;237237+ struct rcu_state *rsp;238238+239239+ local_irq_save(flags);240240+241241+ /*242242+ * Yes, we can lose flag-setting operations. This is OK, because243243+ * the flag will be set again after some delay.244244+ */245245+ resched_mask = raw_cpu_read(rcu_sched_qs_mask);246246+ raw_cpu_write(rcu_sched_qs_mask, 0);247247+248248+ /* Find the flavor that needs a quiescent state. */249249+ for_each_rcu_flavor(rsp) {250250+ rdp = raw_cpu_ptr(rsp->rda);251251+ if (!(resched_mask & rsp->flavor_mask))252252+ continue;253253+ smp_mb(); /* rcu_sched_qs_mask before cond_resched_completed. 
*/254254+ if (ACCESS_ONCE(rdp->mynode->completed) !=255255+ ACCESS_ONCE(rdp->cond_resched_completed))256256+ continue;257257+258258+ /*259259+ * Pretend to be momentarily idle for the quiescent state.260260+ * This allows the grace-period kthread to record the261261+ * quiescent state, with no need for this CPU to do anything262262+ * further.263263+ */264264+ rdtp = this_cpu_ptr(&rcu_dynticks);265265+ smp_mb__before_atomic(); /* Earlier stuff before QS. */266266+ atomic_add(2, &rdtp->dynticks); /* QS. */267267+ smp_mb__after_atomic(); /* Later stuff after QS. */268268+ break;269269+ }270270+ local_irq_restore(flags);271271+}272272+209273/*210274 * Note a context switch. This is a quiescent state for RCU-sched,211275 * and requires special handling for preemptible RCU.···280216 trace_rcu_utilization(TPS("Start context switch"));281217 rcu_sched_qs(cpu);282218 rcu_preempt_note_context_switch(cpu);219219+ if (unlikely(raw_cpu_read(rcu_sched_qs_mask)))220220+ rcu_momentary_dyntick_idle();283221 trace_rcu_utilization(TPS("End context switch"));284222}285223EXPORT_SYMBOL_GPL(rcu_note_context_switch);286286-287287-static DEFINE_PER_CPU(struct rcu_dynticks, rcu_dynticks) = {288288- .dynticks_nesting = DYNTICK_TASK_EXIT_IDLE,289289- .dynticks = ATOMIC_INIT(1),290290-#ifdef CONFIG_NO_HZ_FULL_SYSIDLE291291- .dynticks_idle_nesting = DYNTICK_TASK_NEST_VALUE,292292- .dynticks_idle = ATOMIC_INIT(1),293293-#endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */294294-};295224296225static long blimit = 10; /* Maximum callbacks per rcu_do_batch. */297226static long qhimark = 10000; /* If this many pending, ignore blimit. 
*/···299242300243module_param(jiffies_till_first_fqs, ulong, 0644);301244module_param(jiffies_till_next_fqs, ulong, 0644);245245+246246+/*247247+ * How long the grace period must be before we start recruiting248248+ * quiescent-state help from rcu_note_context_switch().249249+ */250250+static ulong jiffies_till_sched_qs = HZ / 20;251251+module_param(jiffies_till_sched_qs, ulong, 0644);302252303253static bool rcu_start_gp_advanced(struct rcu_state *rsp, struct rcu_node *rnp,304254 struct rcu_data *rdp);···917853 bool *isidle, unsigned long *maxj)918854{919855 unsigned int curr;856856+ int *rcrmp;920857 unsigned int snap;921858922859 curr = (unsigned int)atomic_add_return(0, &rdp->dynticks->dynticks);···958893 }959894960895 /*961961- * There is a possibility that a CPU in adaptive-ticks state962962- * might run in the kernel with the scheduling-clock tick disabled963963- * for an extended time period. Invoke rcu_kick_nohz_cpu() to964964- * force the CPU to restart the scheduling-clock tick in this965965- * CPU is in this state.896896+ * A CPU running for an extended time within the kernel can897897+ * delay RCU grace periods. When the CPU is in NO_HZ_FULL mode,898898+ * even context-switching back and forth between a pair of899899+ * in-kernel CPU-bound tasks cannot advance grace periods.900900+ * So if the grace period is old enough, make the CPU pay attention.901901+ * Note that the unsynchronized assignments to the per-CPU902902+ * rcu_sched_qs_mask variable are safe. Yes, setting of903903+ * bits can be lost, but they will be set again on the next904904+ * force-quiescent-state pass. So lost bit sets do not result905905+ * in incorrect behavior, merely in a grace period lasting906906+ * a few jiffies longer than it might otherwise. 
Because907907+ * there are at most four threads involved, and because the908908+ * updates are only once every few jiffies, the probability of909909+ * lossage (and thus of slight grace-period extension) is910910+ * quite low.911911+ *912912+ * Note that if the jiffies_till_sched_qs boot/sysfs parameter913913+ * is set too high, we override with half of the RCU CPU stall914914+ * warning delay.966915 */967967- rcu_kick_nohz_cpu(rdp->cpu);968968-969969- /*970970- * Alternatively, the CPU might be running in the kernel971971- * for an extended period of time without a quiescent state.972972- * Attempt to force the CPU through the scheduler to gain the973973- * needed quiescent state, but only if the grace period has gone974974- * on for an uncommonly long time. If there are many stuck CPUs,975975- * we will beat on the first one until it gets unstuck, then move976976- * to the next. Only do this for the primary flavor of RCU.977977- */978978- if (rdp->rsp == rcu_state_p &&916916+ rcrmp = &per_cpu(rcu_sched_qs_mask, rdp->cpu);917917+ if (ULONG_CMP_GE(jiffies,918918+ rdp->rsp->gp_start + jiffies_till_sched_qs) ||979919 ULONG_CMP_GE(jiffies, rdp->rsp->jiffies_resched)) {980980- rdp->rsp->jiffies_resched += 5;981981- resched_cpu(rdp->cpu);920920+ if (!(ACCESS_ONCE(*rcrmp) & rdp->rsp->flavor_mask)) {921921+ ACCESS_ONCE(rdp->cond_resched_completed) =922922+ ACCESS_ONCE(rdp->mynode->completed);923923+ smp_mb(); /* ->cond_resched_completed before *rcrmp. */924924+ ACCESS_ONCE(*rcrmp) =925925+ ACCESS_ONCE(*rcrmp) + rdp->rsp->flavor_mask;926926+ resched_cpu(rdp->cpu); /* Force CPU into scheduler. */927927+ rdp->rsp->jiffies_resched += 5; /* Enable beating. */928928+ } else if (ULONG_CMP_GE(jiffies, rdp->rsp->jiffies_resched)) {929929+ /* Time to beat on that CPU again! */930930+ resched_cpu(rdp->cpu); /* Force CPU into scheduler. */931931+ rdp->rsp->jiffies_resched += 5; /* Re-enable beating. 
*/932932+ }982933 }983934984935 return 0;···35723491 "rcu_node_fqs_1",35733492 "rcu_node_fqs_2",35743493 "rcu_node_fqs_3" }; /* Match MAX_RCU_LVLS */34943494+ static u8 fl_mask = 0x1;35753495 int cpustride = 1;35763496 int i;35773497 int j;···35913509 for (i = 1; i < rcu_num_lvls; i++)35923510 rsp->level[i] = rsp->level[i - 1] + rsp->levelcnt[i - 1];35933511 rcu_init_levelspread(rsp);35123512+ rsp->flavor_mask = fl_mask;35133513+ fl_mask <<= 1;3594351435953515 /* Initialize the elements themselves, starting from the leaves. */35963516
+5-1
kernel/rcu/tree.h
···307307 /* 4) reasons this CPU needed to be kicked by force_quiescent_state */308308 unsigned long dynticks_fqs; /* Kicked due to dynticks idle. */309309 unsigned long offline_fqs; /* Kicked due to being offline. */310310+ unsigned long cond_resched_completed;311311+ /* Grace period that needs help */312312+ /* from cond_resched(). */310313311314 /* 5) __rcu_pending() statistics. */312315 unsigned long n_rcu_pending; /* rcu_pending() calls since boot. */···395392 struct rcu_node *level[RCU_NUM_LVLS]; /* Hierarchy levels. */396393 u32 levelcnt[MAX_RCU_LVLS + 1]; /* # nodes in each level. */397394 u8 levelspread[RCU_NUM_LVLS]; /* kids/node in each level. */395395+ u8 flavor_mask; /* bit in flavor mask. */398396 struct rcu_data __percpu *rda; /* pointer of percu rcu_data. */399397 void (*call)(struct rcu_head *head, /* call_rcu() flavor. */400398 void (*func)(struct rcu_head *head));···567563static void do_nocb_deferred_wakeup(struct rcu_data *rdp);568564static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp);569565static void rcu_spawn_nocb_kthreads(struct rcu_state *rsp);570570-static void rcu_kick_nohz_cpu(int cpu);566566+static void __maybe_unused rcu_kick_nohz_cpu(int cpu);571567static bool init_nocb_callback_list(struct rcu_data *rdp);572568static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq);573569static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq);
+1-1
kernel/rcu/tree_plugin.h
···24042404 * if an adaptive-ticks CPU is failing to respond to the current grace24052405 * period and has not been idle from an RCU perspective, kick it.24062406 */24072407-static void rcu_kick_nohz_cpu(int cpu)24072407+static void __maybe_unused rcu_kick_nohz_cpu(int cpu)24082408{24092409#ifdef CONFIG_NO_HZ_FULL24102410 if (tick_nohz_full_cpu(cpu))
+2-20
kernel/rcu/update.c
···200200EXPORT_SYMBOL_GPL(wait_rcu_gp);201201202202#ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD203203-static inline void debug_init_rcu_head(struct rcu_head *head)203203+void init_rcu_head(struct rcu_head *head)204204{205205 debug_object_init(head, &rcuhead_debug_descr);206206}207207208208-static inline void debug_rcu_head_free(struct rcu_head *head)208208+void destroy_rcu_head(struct rcu_head *head)209209{210210 debug_object_free(head, &rcuhead_debug_descr);211211}···350350early_initcall(check_cpu_stall_init);351351352352#endif /* #ifdef CONFIG_RCU_STALL_COMMON */353353-354354-/*355355- * Hooks for cond_resched() and friends to avoid RCU CPU stall warnings.356356- */357357-358358-DEFINE_PER_CPU(int, rcu_cond_resched_count);359359-360360-/*361361- * Report a set of RCU quiescent states, for use by cond_resched()362362- * and friends. Out of line due to being called infrequently.363363- */364364-void rcu_resched(void)365365-{366366- preempt_disable();367367- __this_cpu_write(rcu_cond_resched_count, 0);368368- rcu_note_context_switch(smp_processor_id());369369- preempt_enable();370370-}
···585585 struct itimerspec *new_setting,586586 struct itimerspec *old_setting)587587{588588+ ktime_t exp;589589+588590 if (!rtcdev)589591 return -ENOTSUPP;592592+593593+ if (flags & ~TIMER_ABSTIME)594594+ return -EINVAL;590595591596 if (old_setting)592597 alarm_timer_get(timr, old_setting);···602597603598 /* start the timer */604599 timr->it.alarm.interval = timespec_to_ktime(new_setting->it_interval);605605- alarm_start(&timr->it.alarm.alarmtimer,606606- timespec_to_ktime(new_setting->it_value));600600+ exp = timespec_to_ktime(new_setting->it_value);601601+ /* Convert (if necessary) to absolute time */602602+ if (flags != TIMER_ABSTIME) {603603+ ktime_t now;604604+605605+ now = alarm_bases[timr->it.alarm.alarmtimer.type].gettime();606606+ exp = ktime_add(now, exp);607607+ }608608+609609+ alarm_start(&timr->it.alarm.alarmtimer, exp);607610 return 0;608611}609612···742729743730 if (!alarmtimer_get_rtcdev())744731 return -ENOTSUPP;732732+733733+ if (flags & ~TIMER_ABSTIME)734734+ return -EINVAL;745735746736 if (!capable(CAP_WAKE_ALARM))747737 return -EPERM;
+2-2
kernel/trace/ftrace.c
···265265 func = ftrace_ops_list_func;266266 }267267268268+ update_function_graph_func();269269+268270 /* If there's no change, then do nothing more here */269271 if (ftrace_trace_function == func)270272 return;271271-272272- update_function_graph_func();273273274274 /*275275 * If we are using the list function, it doesn't care
···191191192192 i %= num_online_cpus();193193194194- if (!cpumask_of_node(numa_node)) {194194+ if (numa_node == -1 || !cpumask_of_node(numa_node)) {195195 /* Use all online cpu's for non numa aware system */196196 cpumask_copy(mask, cpu_online_mask);197197 } else {
···289289{290290 struct hci_conn *conn = container_of(work, struct hci_conn,291291 disc_work.work);292292+ int refcnt = atomic_read(&conn->refcnt);292293293294 BT_DBG("hcon %p state %s", conn, state_to_string(conn->state));294295295295- if (atomic_read(&conn->refcnt))296296+ WARN_ON(refcnt < 0);297297+298298+ /* FIXME: It was observed that in pairing failed scenario, refcnt299299+ * drops below 0. Probably this is because l2cap_conn_del calls300300+ * l2cap_chan_del for each channel, and inside l2cap_chan_del conn is301301+ * dropped. After that loop hci_chan_del is called which also drops302302+ * conn. For now make sure that ACL is alive if refcnt is higher then 0,303303+ * otherwise drop it.304304+ */305305+ if (refcnt > 0)296306 return;297307298308 switch (conn->state) {
+46-14
net/bluetooth/smp.c
···385385 { CFM_PASSKEY, CFM_PASSKEY, REQ_PASSKEY, JUST_WORKS, OVERLAP },386386};387387388388+static u8 get_auth_method(struct smp_chan *smp, u8 local_io, u8 remote_io)389389+{390390+ /* If either side has unknown io_caps, use JUST WORKS */391391+ if (local_io > SMP_IO_KEYBOARD_DISPLAY ||392392+ remote_io > SMP_IO_KEYBOARD_DISPLAY)393393+ return JUST_WORKS;394394+395395+ return gen_method[remote_io][local_io];396396+}397397+388398static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth,389399 u8 local_io, u8 remote_io)390400{···411401 BT_DBG("tk_request: auth:%d lcl:%d rem:%d", auth, local_io, remote_io);412402413403 /* If neither side wants MITM, use JUST WORKS */414414- /* If either side has unknown io_caps, use JUST WORKS */415404 /* Otherwise, look up method from the table */416416- if (!(auth & SMP_AUTH_MITM) ||417417- local_io > SMP_IO_KEYBOARD_DISPLAY ||418418- remote_io > SMP_IO_KEYBOARD_DISPLAY)405405+ if (!(auth & SMP_AUTH_MITM))419406 method = JUST_WORKS;420407 else421421- method = gen_method[remote_io][local_io];408408+ method = get_auth_method(smp, local_io, remote_io);422409423410 /* If not bonding, don't ask user to confirm a Zero TK */424411 if (!(auth & SMP_AUTH_BONDING) && method == JUST_CFM)···676669{677670 struct smp_cmd_pairing rsp, *req = (void *) skb->data;678671 struct smp_chan *smp;679679- u8 key_size, auth;672672+ u8 key_size, auth, sec_level;680673 int ret;681674682675 BT_DBG("conn %p", conn);···702695 /* We didn't start the pairing, so match remote */703696 auth = req->auth_req;704697705705- conn->hcon->pending_sec_level = authreq_to_seclevel(auth);698698+ sec_level = authreq_to_seclevel(auth);699699+ if (sec_level > conn->hcon->pending_sec_level)700700+ conn->hcon->pending_sec_level = sec_level;701701+702702+ /* If we need MITM check that it can be achieved */703703+ if (conn->hcon->pending_sec_level >= BT_SECURITY_HIGH) {704704+ u8 method;705705+706706+ method = get_auth_method(smp, conn->hcon->io_capability,707707+ 
req->io_capability);708708+ if (method == JUST_WORKS || method == JUST_CFM)709709+ return SMP_AUTH_REQUIREMENTS;710710+706711707712 build_pairing_cmd(conn, req, &rsp, auth);708713···761742 key_size = min(req->max_key_size, rsp->max_key_size);762743 if (check_enc_key_size(conn, key_size))763744 return SMP_ENC_KEY_SIZE;745745+746746+ /* If we need MITM check that it can be achieved */747747+ if (conn->hcon->pending_sec_level >= BT_SECURITY_HIGH) {748748+ u8 method;749749+750750+ method = get_auth_method(smp, req->io_capability,751751+ rsp->io_capability);752752+ if (method == JUST_WORKS || method == JUST_CFM)753753+ return SMP_AUTH_REQUIREMENTS;754754+764755765756 get_random_bytes(smp->prnd, sizeof(smp->prnd));766757···867838 struct smp_cmd_pairing cp;868839 struct hci_conn *hcon = conn->hcon;869840 struct smp_chan *smp;841841+ u8 sec_level;870842871843 BT_DBG("conn %p", conn);872844···877847 if (!(conn->hcon->link_mode & HCI_LM_MASTER))878848 return SMP_CMD_NOTSUPP;879849880880- hcon->pending_sec_level = authreq_to_seclevel(rp->auth_req);850850+ sec_level = authreq_to_seclevel(rp->auth_req);851851+ if (sec_level > hcon->pending_sec_level)852852+ hcon->pending_sec_level = sec_level;881853882854 if (smp_ltk_encrypt(conn, hcon->pending_sec_level))883855 return 0;···933901 if (smp_sufficient_security(hcon, sec_level))934902 return 1;935903904904+ if (sec_level > hcon->pending_sec_level)905905+ hcon->pending_sec_level = sec_level;906906+936907 if (hcon->link_mode & HCI_LM_MASTER)937937- if (smp_ltk_encrypt(conn, sec_level))938938- goto done;908908+ if (smp_ltk_encrypt(conn, hcon->pending_sec_level))909909+ return 0;939910940911 if (test_and_set_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags))941912 return 0;···953918 * requires it.954919 */955920 if (hcon->io_capability != HCI_IO_NO_INPUT_OUTPUT ||956956- sec_level > BT_SECURITY_MEDIUM)921921+ hcon->pending_sec_level > BT_SECURITY_MEDIUM)957922 authreq |= SMP_AUTH_MITM;958923959924 if (hcon->link_mode & HCI_LM_MASTER) 
{···971936 }972937973938 set_bit(SMP_FLAG_INITIATOR, &smp->flags);974974-975975-done:976976- hcon->pending_sec_level = sec_level;977939978940 return 0;979941}
+18-12
net/core/dev.c
···148148static struct list_head offload_base __read_mostly;149149150150static int netif_rx_internal(struct sk_buff *skb);151151+static int call_netdevice_notifiers_info(unsigned long val,152152+ struct net_device *dev,153153+ struct netdev_notifier_info *info);151154152155/*153156 * The @dev_base_head list is protected by @dev_base_lock and the rtnl···12101207void netdev_state_change(struct net_device *dev)12111208{12121209 if (dev->flags & IFF_UP) {12131213- call_netdevice_notifiers(NETDEV_CHANGE, dev);12101210+ struct netdev_notifier_change_info change_info;12111211+12121212+ change_info.flags_changed = 0;12131213+ call_netdevice_notifiers_info(NETDEV_CHANGE, dev,12141214+ &change_info.info);12141215 rtmsg_ifinfo(RTM_NEWLINK, dev, 0, GFP_KERNEL);12151216 }12161217}···42344227#endif42354228 napi->weight = weight_p;42364229 local_irq_disable();42374237- while (work < quota) {42304230+ while (1) {42384231 struct sk_buff *skb;42394239- unsigned int qlen;4240423242414233 while ((skb = __skb_dequeue(&sd->process_queue))) {42424234 local_irq_enable();···42494243 }4250424442514245 rps_lock(sd);42524252- qlen = skb_queue_len(&sd->input_pkt_queue);42534253- if (qlen)42544254- skb_queue_splice_tail_init(&sd->input_pkt_queue,42554255- &sd->process_queue);42564256-42574257- if (qlen < quota - work) {42464246+ if (skb_queue_empty(&sd->input_pkt_queue)) {42584247 /*42594248 * Inline a custom version of __napi_complete().42604249 * only current cpu owns and manipulates this napi,42614261- * and NAPI_STATE_SCHED is the only possible flag set on backlog.42624262- * we can use a plain write instead of clear_bit(),42504250+ * and NAPI_STATE_SCHED is the only possible flag set42514251+ * on backlog.42524252+ * We can use a plain write instead of clear_bit(),42634253 * and we dont need an smp_mb() memory barrier.42644254 */42654255 list_del(&napi->poll_list);42664256 napi->state = 0;42574257+ rps_unlock(sd);4267425842684268- quota = work + qlen;42594259+ break;42694260 
}42614261+42624262+ skb_queue_splice_tail_init(&sd->input_pkt_queue,42634263+ &sd->process_queue);42704264 rps_unlock(sd);42714265 }42724266 local_irq_enable();
···13011301 len = ntohs(ipv6_hdr(skb)->payload_len) + sizeof(struct ipv6hdr);13021302 len -= skb_network_header_len(skb);1303130313041304- /* Drop queries with not link local source */13051305- if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL))13041304+ /* RFC3810 6.213051305+ * Upon reception of an MLD message that contains a Query, the node13061306+ * checks if the source address of the message is a valid link-local13071307+ * address, if the Hop Limit is set to 1, and if the Router Alert13081308+ * option is present in the Hop-By-Hop Options header of the IPv613091309+ * packet. If any of these checks fails, the packet is dropped.13101310+ */13111311+ if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL) ||13121312+ ipv6_hdr(skb)->hop_limit != 1 ||13131313+ !(IP6CB(skb)->flags & IP6SKB_ROUTERALERT) ||13141314+ IP6CB(skb)->ra != htons(IPV6_OPT_ROUTERALERT_MLD))13061315 return -EINVAL;1307131613081317 idev = __in6_dev_get(skb->dev);
···11/*22- * Copyright (c) 2007-2013 Nicira, Inc.22+ * Copyright (c) 2007-2014 Nicira, Inc.33 *44 * This program is free software; you can redistribute it and/or55 * modify it under the terms of version 2 of the GNU General Public···180180 unsigned char ar_tip[4]; /* target IP address */181181} __packed;182182183183-void ovs_flow_stats_update(struct sw_flow *, struct sk_buff *);183183+void ovs_flow_stats_update(struct sw_flow *, __be16 tcp_flags,184184+ struct sk_buff *);184185void ovs_flow_stats_get(const struct sw_flow *, struct ovs_flow_stats *,185186 unsigned long *used, __be16 *tcp_flags);186187void ovs_flow_stats_clear(struct sw_flow *);
···110110 return PACKET_RCVD;111111}112112113113+/* Called with rcu_read_lock and BH disabled. */114114+static int gre_err(struct sk_buff *skb, u32 info,115115+ const struct tnl_ptk_info *tpi)116116+{117117+ struct ovs_net *ovs_net;118118+ struct vport *vport;119119+120120+ ovs_net = net_generic(dev_net(skb->dev), ovs_net_id);121121+ vport = rcu_dereference(ovs_net->vport_net.gre_vport);122122+123123+ if (unlikely(!vport))124124+ return PACKET_REJECT;125125+ else126126+ return PACKET_RCVD;127127+}128128+113129static int gre_tnl_send(struct vport *vport, struct sk_buff *skb)114130{115131 struct net *net = ovs_dp_get_net(vport->dp);···202186203187static struct gre_cisco_protocol gre_protocol = {204188 .handler = gre_rcv,189189+ .err_handler = gre_err,205190 .priority = 1,206191};207192
+15-107
net/sctp/ulpevent.c
···366366 * specification [SCTP] and any extensions for a list of possible367367 * error formats.368368 */369369-struct sctp_ulpevent *sctp_ulpevent_make_remote_error(370370- const struct sctp_association *asoc, struct sctp_chunk *chunk,371371- __u16 flags, gfp_t gfp)369369+struct sctp_ulpevent *370370+sctp_ulpevent_make_remote_error(const struct sctp_association *asoc,371371+ struct sctp_chunk *chunk, __u16 flags,372372+ gfp_t gfp)372373{373374 struct sctp_ulpevent *event;374375 struct sctp_remote_error *sre;···388387 /* Copy the skb to a new skb with room for us to prepend389388 * notification with.390389 */391391- skb = skb_copy_expand(chunk->skb, sizeof(struct sctp_remote_error),392392- 0, gfp);390390+ skb = skb_copy_expand(chunk->skb, sizeof(*sre), 0, gfp);393391394392 /* Pull off the rest of the cause TLV from the chunk. */395393 skb_pull(chunk->skb, elen);···399399 event = sctp_skb2event(skb);400400 sctp_ulpevent_init(event, MSG_NOTIFICATION, skb->truesize);401401402402- sre = (struct sctp_remote_error *)403403- skb_push(skb, sizeof(struct sctp_remote_error));402402+ sre = (struct sctp_remote_error *) skb_push(skb, sizeof(*sre));404403405404 /* Trim the buffer to the right length. */406406- skb_trim(skb, sizeof(struct sctp_remote_error) + elen);405405+ skb_trim(skb, sizeof(*sre) + elen);407406408408- /* Socket Extensions for SCTP409409- * 5.3.1.3 SCTP_REMOTE_ERROR410410- *411411- * sre_type:412412- * It should be SCTP_REMOTE_ERROR.413413- */407407+ /* RFC6458, Section 6.1.3. 
SCTP_REMOTE_ERROR */408408+ memset(sre, 0, sizeof(*sre));414409 sre->sre_type = SCTP_REMOTE_ERROR;415415-416416- /*417417- * Socket Extensions for SCTP418418- * 5.3.1.3 SCTP_REMOTE_ERROR419419- *420420- * sre_flags: 16 bits (unsigned integer)421421- * Currently unused.422422- */423410 sre->sre_flags = 0;424424-425425- /* Socket Extensions for SCTP426426- * 5.3.1.3 SCTP_REMOTE_ERROR427427- *428428- * sre_length: sizeof (__u32)429429- *430430- * This field is the total length of the notification data,431431- * including the notification header.432432- */433411 sre->sre_length = skb->len;434434-435435- /* Socket Extensions for SCTP436436- * 5.3.1.3 SCTP_REMOTE_ERROR437437- *438438- * sre_error: 16 bits (unsigned integer)439439- * This value represents one of the Operational Error causes defined in440440- * the SCTP specification, in network byte order.441441- */442412 sre->sre_error = cause;443443-444444- /* Socket Extensions for SCTP445445- * 5.3.1.3 SCTP_REMOTE_ERROR446446- *447447- * sre_assoc_id: sizeof (sctp_assoc_t)448448- *449449- * The association id field, holds the identifier for the association.450450- * All notifications for a given association have the same association451451- * identifier. For TCP style socket, this field is ignored.452452- */453413 sctp_ulpevent_set_owner(event, asoc);454414 sre->sre_assoc_id = sctp_assoc2id(asoc);455415456416 return event;457457-458417fail:459418 return NULL;460419}···858899 return notification->sn_header.sn_type;859900}860901861861-/* Copy out the sndrcvinfo into a msghdr. */902902+/* RFC6458, Section 5.3.2. 
SCTP Header Information Structure903903+ * (SCTP_SNDRCV, DEPRECATED)904904+ */862905void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event,863906 struct msghdr *msghdr)864907{···869908 if (sctp_ulpevent_is_notification(event))870909 return;871910872872- /* Sockets API Extensions for SCTP873873- * Section 5.2.2 SCTP Header Information Structure (SCTP_SNDRCV)874874- *875875- * sinfo_stream: 16 bits (unsigned integer)876876- *877877- * For recvmsg() the SCTP stack places the message's stream number in878878- * this value.879879- */911911+ memset(&sinfo, 0, sizeof(sinfo));880912 sinfo.sinfo_stream = event->stream;881881- /* sinfo_ssn: 16 bits (unsigned integer)882882- *883883- * For recvmsg() this value contains the stream sequence number that884884- * the remote endpoint placed in the DATA chunk. For fragmented885885- * messages this is the same number for all deliveries of the message886886- * (if more than one recvmsg() is needed to read the message).887887- */888913 sinfo.sinfo_ssn = event->ssn;889889- /* sinfo_ppid: 32 bits (unsigned integer)890890- *891891- * In recvmsg() this value is892892- * the same information that was passed by the upper layer in the peer893893- * application. 
Please note that byte order issues are NOT accounted894894- * for and this information is passed opaquely by the SCTP stack from895895- * one end to the other.896896- */897914 sinfo.sinfo_ppid = event->ppid;898898- /* sinfo_flags: 16 bits (unsigned integer)899899- *900900- * This field may contain any of the following flags and is composed of901901- * a bitwise OR of these values.902902- *903903- * recvmsg() flags:904904- *905905- * SCTP_UNORDERED - This flag is present when the message was sent906906- * non-ordered.907907- */908915 sinfo.sinfo_flags = event->flags;909909- /* sinfo_tsn: 32 bit (unsigned integer)910910- *911911- * For the receiving side, this field holds a TSN that was912912- * assigned to one of the SCTP Data Chunks.913913- */914916 sinfo.sinfo_tsn = event->tsn;915915- /* sinfo_cumtsn: 32 bit (unsigned integer)916916- *917917- * This field will hold the current cumulative TSN as918918- * known by the underlying SCTP layer. Note this field is919919- * ignored when sending and only valid for a receive920920- * operation when sinfo_flags are set to SCTP_UNORDERED.921921- */922917 sinfo.sinfo_cumtsn = event->cumtsn;923923- /* sinfo_assoc_id: sizeof (sctp_assoc_t)924924- *925925- * The association handle field, sinfo_assoc_id, holds the identifier926926- * for the association announced in the COMMUNICATION_UP notification.927927- * All notifications for a given association have the same identifier.928928- * Ignored for one-to-one style sockets.929929- */930918 sinfo.sinfo_assoc_id = sctp_assoc2id(event->asoc);931931-932932- /* context value that is set via SCTP_CONTEXT socket option. */919919+ /* Context value that is set via SCTP_CONTEXT socket option. */933920 sinfo.sinfo_context = event->asoc->default_rcv_context;934934-935921 /* These fields are not used while receiving. 
*/936922 sinfo.sinfo_timetolive = 0;937923938924 put_cmsg(msghdr, IPPROTO_SCTP, SCTP_SNDRCV,939939- sizeof(struct sctp_sndrcvinfo), (void *)&sinfo);925925+ sizeof(sinfo), &sinfo);940926}941927942928/* Do accounting for bytes received and hold a reference to the association
···101101}102102103103/* tipc_buf_append(): Append a buffer to the fragment list of another buffer104104- * Let first buffer become head buffer105105- * Returns 1 and sets *buf to headbuf if chain is complete, otherwise 0106106- * Leaves headbuf pointer at NULL if failure104104+ * @*headbuf: in: NULL for first frag, otherwise value returned from prev call105105+ * out: set when successful non-complete reassembly, otherwise NULL106106+ * @*buf: in: the buffer to append. Always defined107107+ * out: head buf after successful complete reassembly, otherwise NULL108108+ * Returns 1 when reassembly complete, otherwise 0107109 */108110int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)109111{···124122 goto out_free;125123 head = *headbuf = frag;126124 skb_frag_list_init(head);125125+ *buf = NULL;127126 return 0;128127 }129128 if (!head)···153150out_free:154151 pr_warn_ratelimited("Unable to build fragment list\n");155152 kfree_skb(*buf);153153+ kfree_skb(*headbuf);154154+ *buf = *headbuf = NULL;156155 return 0;157156}
···14971497 }14981498 CMD(start_p2p_device, START_P2P_DEVICE);14991499 CMD(set_mcast_rate, SET_MCAST_RATE);15001500+#ifdef CONFIG_NL80211_TESTMODE15011501+ CMD(testmode_cmd, TESTMODE);15021502+#endif15001503 if (state->split) {15011504 CMD(crit_proto_start, CRIT_PROTOCOL_START);15021505 CMD(crit_proto_stop, CRIT_PROTOCOL_STOP);15031506 if (rdev->wiphy.flags & WIPHY_FLAG_HAS_CHANNEL_SWITCH)15041507 CMD(channel_switch, CHANNEL_SWITCH);15081508+ CMD(set_qos_map, SET_QOS_MAP);15051509 }15061506- CMD(set_qos_map, SET_QOS_MAP);15071507-15081508-#ifdef CONFIG_NL80211_TESTMODE15091509- CMD(testmode_cmd, TESTMODE);15101510-#endif15111511-15101510+ /* add into the if now */15121511#undef CMD1513151215141513 if (rdev->ops->connect || rdev->ops->auth) {
+7-15
net/wireless/reg.c
···935935 if (!band_rule_found)936936 band_rule_found = freq_in_rule_band(fr, center_freq);937937938938- bw_fits = reg_does_bw_fit(fr, center_freq, MHZ_TO_KHZ(5));938938+ bw_fits = reg_does_bw_fit(fr, center_freq, MHZ_TO_KHZ(20));939939940940 if (band_rule_found && bw_fits)941941 return rr;···10191019}10201020#endif1021102110221022-/* Find an ieee80211_reg_rule such that a 5MHz channel with frequency10231023- * chan->center_freq fits there.10241024- * If there is no such reg_rule, disable the channel, otherwise set the10251025- * flags corresponding to the bandwidths allowed in the particular reg_rule10221022+/*10231023+ * Note that right now we assume the desired channel bandwidth10241024+ * is always 20 MHz for each individual channel (HT40 uses 20 MHz10251025+ * per channel, the primary and the extension channel).10261026 */10271027static void handle_channel(struct wiphy *wiphy,10281028 enum nl80211_reg_initiator initiator,···10831083 if (reg_rule->flags & NL80211_RRF_AUTO_BW)10841084 max_bandwidth_khz = reg_get_max_bandwidth(regd, reg_rule);1085108510861086- if (max_bandwidth_khz < MHZ_TO_KHZ(10))10871087- bw_flags = IEEE80211_CHAN_NO_10MHZ;10881088- if (max_bandwidth_khz < MHZ_TO_KHZ(20))10891089- bw_flags |= IEEE80211_CHAN_NO_20MHZ;10901086 if (max_bandwidth_khz < MHZ_TO_KHZ(40))10911091- bw_flags |= IEEE80211_CHAN_NO_HT40;10871087+ bw_flags = IEEE80211_CHAN_NO_HT40;10921088 if (max_bandwidth_khz < MHZ_TO_KHZ(80))10931089 bw_flags |= IEEE80211_CHAN_NO_80MHZ;10941090 if (max_bandwidth_khz < MHZ_TO_KHZ(160))···15181522 if (reg_rule->flags & NL80211_RRF_AUTO_BW)15191523 max_bandwidth_khz = reg_get_max_bandwidth(regd, reg_rule);1520152415211521- if (max_bandwidth_khz < MHZ_TO_KHZ(10))15221522- bw_flags = IEEE80211_CHAN_NO_10MHZ;15231523- if (max_bandwidth_khz < MHZ_TO_KHZ(20))15241524- bw_flags |= IEEE80211_CHAN_NO_20MHZ;15251525 if (max_bandwidth_khz < MHZ_TO_KHZ(40))15261526- bw_flags |= IEEE80211_CHAN_NO_HT40;15261526+ bw_flags = IEEE80211_CHAN_NO_HT40;15271527 if (max_bandwidth_khz < MHZ_TO_KHZ(80))15281528 bw_flags |= IEEE80211_CHAN_NO_80MHZ;15291529 if (max_bandwidth_khz < MHZ_TO_KHZ(160))
+12-3
scripts/kernel-doc
···20732073sub dump_function($$) {20742074 my $prototype = shift;20752075 my $file = shift;20762076+ my $noret = 0;2076207720772078 $prototype =~ s/^static +//;20782079 $prototype =~ s/^extern +//;···20872086 $prototype =~ s/__init_or_module +//;20882087 $prototype =~ s/__must_check +//;20892088 $prototype =~ s/__weak +//;20902090- $prototype =~ s/^#\s*define\s+//; #ak added20892089+ my $define = $prototype =~ s/^#\s*define\s+//; #ak added20912090 $prototype =~ s/__attribute__\s*\(\([a-z,]*\)\)//;2092209120932092 # Yes, this truly is vile. We are looking for:···21062105 # - atomic_set (macro)21072106 # - pci_match_device, __copy_to_user (long return type)2108210721092109- if ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||21082108+ if ($define && $prototype =~ m/^()([a-zA-Z0-9_~:]+)\s+/) {21092109+ # This is an object-like macro, it has no return type and no parameter21102110+ # list.21112111+ # Function-like macros are not allowed to have spaces between21122112+ # declaration_name and opening parenthesis (notice the \s+).21132113+ $return_type = $1;21142114+ $declaration_name = $2;21152115+ $noret = 1;21162116+ } elsif ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||21102117 $prototype =~ m/^(\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||21112118 $prototype =~ m/^(\w+\s*\*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||21122119 $prototype =~ m/^(\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||···21492140 # of warnings goes sufficiently down, the check is only performed in21502141 # verbose mode.21512142 # TODO: always perform the check.21522152- if ($verbose) {21432143+ if ($verbose && !$noret) {21532144 check_return_section($file, $declaration_name, $return_type);21542145 }21552146
+2-1
sound/pci/hda/hda_controller.c
···193193 dsp_unlock(azx_dev);194194 return azx_dev;195195 }196196- if (!res)196196+ if (!res ||197197+ (chip->driver_caps & AZX_DCAPS_REVERSE_ASSIGN))197198 res = azx_dev;198199 }199200 dsp_unlock(azx_dev);
+6-6
sound/pci/hda/hda_intel.c
···227227/* quirks for Intel PCH */228228#define AZX_DCAPS_INTEL_PCH_NOPM \229229 (AZX_DCAPS_SCH_SNOOP | AZX_DCAPS_BUFSIZE | \230230- AZX_DCAPS_COUNT_LPIB_DELAY)230230+ AZX_DCAPS_COUNT_LPIB_DELAY | AZX_DCAPS_REVERSE_ASSIGN)231231232232#define AZX_DCAPS_INTEL_PCH \233233 (AZX_DCAPS_INTEL_PCH_NOPM | AZX_DCAPS_PM_RUNTIME)···596596 struct azx *chip = card->private_data;597597 struct azx_pcm *p;598598599599- if (chip->disabled)599599+ if (chip->disabled || chip->init_failed)600600 return 0;601601602602 snd_power_change_state(card, SNDRV_CTL_POWER_D3hot);···628628 struct snd_card *card = dev_get_drvdata(dev);629629 struct azx *chip = card->private_data;630630631631- if (chip->disabled)631631+ if (chip->disabled || chip->init_failed)632632 return 0;633633634634 if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL) {···665665 struct snd_card *card = dev_get_drvdata(dev);666666 struct azx *chip = card->private_data;667667668668- if (chip->disabled)668668+ if (chip->disabled || chip->init_failed)669669 return 0;670670671671 if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME))···692692 struct hda_codec *codec;693693 int status;694694695695- if (chip->disabled)695695+ if (chip->disabled || chip->init_failed)696696 return 0;697697698698 if (!(chip->driver_caps & AZX_DCAPS_PM_RUNTIME))···729729 struct snd_card *card = dev_get_drvdata(dev);730730 struct azx *chip = card->private_data;731731732732- if (chip->disabled)732732+ if (chip->disabled || chip->init_failed)733733 return 0;734734735735 if (!power_save_controller ||
+1
sound/pci/hda/hda_priv.h
···186186#define AZX_DCAPS_BUFSIZE (1 << 21) /* no buffer size alignment */187187#define AZX_DCAPS_ALIGN_BUFSIZE (1 << 22) /* buffer size alignment */188188#define AZX_DCAPS_4K_BDLE_BOUNDARY (1 << 23) /* BDLE in 4k boundary */189189+#define AZX_DCAPS_REVERSE_ASSIGN (1 << 24) /* Assign devices in reverse order */189190#define AZX_DCAPS_COUNT_LPIB_DELAY (1 << 25) /* Take LPIB as delay */190191#define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */191192#define AZX_DCAPS_I915_POWERWELL (1 << 27) /* HSW i915 powerwell support */
+25-1
tools/thermal/tmon/tmon.c
···142142static void prepare_logging(void)143143{144144 int i;145145+ struct stat logstat;145146146147 if (!logging)147148 return;···152151 syslog(LOG_ERR, "failed to open log file %s\n", TMON_LOG_FILE);153152 return;154153 }154154+155155+ if (lstat(TMON_LOG_FILE, &logstat) < 0) {156156+ syslog(LOG_ERR, "Unable to stat log file %s\n", TMON_LOG_FILE);157157+ fclose(tmon_log);158158+ tmon_log = NULL;159159+ return;160160+ }161161+162162+ /* The log file must be a regular file owned by us */163163+ if (S_ISLNK(logstat.st_mode)) {164164+ syslog(LOG_ERR, "Log file is a symlink. Will not log\n");165165+ fclose(tmon_log);166166+ tmon_log = NULL;167167+ return;168168+ }169169+170170+ if (logstat.st_uid != getuid()) {171171+ syslog(LOG_ERR, "We don't own the log file. Not logging\n");172172+ fclose(tmon_log);173173+ tmon_log = NULL;174174+ return;175175+ }176176+155177156178 fprintf(tmon_log, "#----------- THERMAL SYSTEM CONFIG -------------\n");157179 for (i = 0; i < ptdata.nr_tz_sensor; i++) {···355331 disable_tui();356332357333 /* change the file mode mask */358358- umask(0);334334+ umask(S_IWGRP | S_IWOTH);359335360336 /* new SID for the daemon process */361337 sid = setsid();