···
mcelog
------

-In Linux 2.6.31+ the i386 kernel needs to run the mcelog utility
-as a regular cronjob similar to the x86-64 kernel to process and log
-machine check events when CONFIG_X86_NEW_MCE is enabled. Machine check
-events are errors reported by the CPU. Processing them is strongly encouraged.
-All x86-64 kernels since 2.6.4 require the mcelog utility to
-process machine checks.
+On x86 kernels the mcelog utility is needed to process and log machine check
+events when CONFIG_X86_MCE is enabled. Machine check events are errors reported
+by the CPU. Processing them is strongly encouraged.

Getting updated software
========================
+1-1
Documentation/DocBook/gadget.tmpl
···
 <para>Systems need specialized hardware support to implement OTG,
 notably including a special <emphasis>Mini-AB</emphasis> jack
-and associated transciever to support <emphasis>Dual-Role</emphasis>
+and associated transceiver to support <emphasis>Dual-Role</emphasis>
 operation:
 they can act either as a host, using the standard
 Linux-USB host side driver stack,
+2-2
Documentation/DocBook/genericirq.tmpl
···
   <para>
   Each interrupt is described by an interrupt descriptor structure
   irq_desc. The interrupt is referenced by an 'unsigned int' numeric
-  value which selects the corresponding interrupt decription structure
+  value which selects the corresponding interrupt description structure
   in the descriptor structures array.
   The descriptor structure contains status information and pointers
   to the interrupt flow method and the interrupt chip structure
···
   <para>
   To avoid copies of identical implementations of IRQ chips the
   core provides a configurable generic interrupt chip
-  implementation. Developers should check carefuly whether the
+  implementation. Developers should check carefully whether the
   generic chip fits their needs before implementing the same
   functionality slightly differently themselves.
   </para>
+1-1
Documentation/DocBook/kernel-locking.tmpl
···
 </para>

 <para>
-There is a furthur optimization possible here: remember our original
+There is a further optimization possible here: remember our original
 cache code, where there were no reference counts and the caller simply
 held the lock whenever using the object? This is still possible: if
 you hold the lock, no one can delete the object, so you don't need to
+3-3
Documentation/DocBook/libata.tmpl
···
   <listitem>
   <para>
-  ATA_QCFLAG_ACTIVE is clared from qc->flags.
+  ATA_QCFLAG_ACTIVE is cleared from qc->flags.
   </para>
   </listitem>
···
   <listitem>
   <para>
-  qc->waiting is claread &amp; completed (in that order).
+  qc->waiting is cleared &amp; completed (in that order).
   </para>
   </listitem>
···
   <para>
   Once sense data is acquired, this type of errors can be
-  handled similary to other SCSI errors. Note that sense data
+  handled similarly to other SCSI errors. Note that sense data
   may indicate ATA bus error (e.g. Sense Key 04h HARDWARE ERROR
   &amp;&amp; ASC/ASCQ 47h/00h SCSI PARITY ERROR). In such
   cases, the error should be considered as an ATA bus error and
+1-1
Documentation/DocBook/media_api.tmpl
···
   several digital tv standards. While it is called as DVB API,
   in fact it covers several different video standards including
   DVB-T, DVB-S, DVB-C and ATSC. The API is currently being updated
-  to documment support also for DVB-S2, ISDB-T and ISDB-S.</para>
+  to document support also for DVB-S2, ISDB-T and ISDB-S.</para>
   <para>The third part covers the Remote Controller API.</para>
   <para>The fourth part covers the Media Controller API.</para>
   <para>For additional information and for the latest development code,
+15-15
Documentation/DocBook/mtdnand.tmpl
···
   <listitem><para>
   [MTD Interface]</para><para>
   These functions provide the interface to the MTD kernel API.
-  They are not replacable and provide functionality
+  They are not replaceable and provide functionality
   which is complete hardware independent.
   </para></listitem>
   <listitem><para>
···
   </para></listitem>
   <listitem><para>
   [GENERIC]</para><para>
-  Generic functions are not replacable and provide functionality
+  Generic functions are not replaceable and provide functionality
   which is complete hardware independent.
   </para></listitem>
   <listitem><para>
   [DEFAULT]</para><para>
   Default functions provide hardware related functionality which is suitable
   for most of the implementations. These functions can be replaced by the
-  board driver if neccecary. Those functions are called via pointers in the
+  board driver if necessary. Those functions are called via pointers in the
   NAND chip description structure. The board driver can set the functions which
   should be replaced by board dependent functions before calling nand_scan().
   If the function pointer is NULL on entry to nand_scan() then the pointer
···
   is set up nand_scan() is called. This function tries to
   detect and identify then chip. If a chip is found all the
   internal data fields are initialized accordingly.
-  The structure(s) have to be zeroed out first and then filled with the neccecary
+  The structure(s) have to be zeroed out first and then filled with the necessary
   information about the device.
   </para>
   <programlisting>
···
   <sect1 id="Exit_function">
   <title>Exit function</title>
   <para>
-  The exit function is only neccecary if the driver is
+  The exit function is only necessary if the driver is
   compiled as a module. It releases all resources which
   are held by the chip driver and unregisters the partitions
   in the MTD layer.
···
   in this case. See rts_from4.c and diskonchip.c for
   implementation reference. In those cases we must also
   use bad block tables on FLASH, because the ECC layout is
-  interferring with the bad block marker positions.
+  interfering with the bad block marker positions.
   See bad block table support for details.
   </para>
   </sect2>
···
   <para>
   nand_scan() calls the function nand_default_bbt().
   nand_default_bbt() selects appropriate default
-  bad block table desriptors depending on the chip information
+  bad block table descriptors depending on the chip information
   which was retrieved by nand_scan().
   </para>
   <para>
···
   <sect2 id="Flash_based_tables">
   <title>Flash based tables</title>
   <para>
-  It may be desired or neccecary to keep a bad block table in FLASH.
+  It may be desired or necessary to keep a bad block table in FLASH.
   For AG-AND chips this is mandatory, as they have no factory marked
   bad blocks. They have factory marked good blocks. The marker pattern
   is erased when the block is erased to be reused. So in case of
···
   of the blocks.
   </para>
   <para>
-  The blocks in which the tables are stored are procteted against
+  The blocks in which the tables are stored are protected against
   accidental access by marking them bad in the memory bad block
   table. The bad block table management functions are allowed
-  to circumvernt this protection.
+  to circumvent this protection.
   </para>
   <para>
   The simplest way to activate the FLASH based bad block table support
···
   User defined tables are created by filling out a
   nand_bbt_descr structure and storing the pointer in the
   nand_chip structure member bbt_td before calling nand_scan().
-  If a mirror table is neccecary a second structure must be
+  If a mirror table is necessary a second structure must be
   created and a pointer to this structure must be stored
   in bbt_md inside the nand_chip structure. If the bbt_md
   member is set to NULL then only the main table is used
···
   <para>
   For automatic placement some blocks must be reserved for
   bad block table storage. The number of reserved blocks is defined
-  in the maxblocks member of the babd block table description structure.
+  in the maxblocks member of the bad block table description structure.
   Reserving 4 blocks for mirrored tables should be a reasonable number.
   This also limits the number of blocks which are scanned for the bad
   block table ident pattern.
···
   <chapter id="filesystems">
   <title>Filesystem support</title>
   <para>
-  The NAND driver provides all neccecary functions for a
+  The NAND driver provides all necessary functions for a
   filesystem via the MTD interface.
   </para>
   <para>
-  Filesystems must be aware of the NAND pecularities and
+  Filesystems must be aware of the NAND peculiarities and
   restrictions. One major restrictions of NAND Flash is, that you cannot
   write as often as you want to a page. The consecutive writes to a page,
   before erasing it again, are restricted to 1-3 writes, depending on the
···
 #define NAND_BBT_VERSION 0x00000100
 /* Create a bbt if none axists */
 #define NAND_BBT_CREATE 0x00000200
-/* Write bbt if neccecary */
+/* Write bbt if necessary */
 #define NAND_BBT_WRITE 0x00001000
 /* Read and write back block contents when writing bbt */
 #define NAND_BBT_SAVECONTENT 0x00002000
+1-1
Documentation/DocBook/regulator.tmpl
···
   release regulators. Functions are
   provided to <link linkend='API-regulator-enable'>enable</link>
   and <link linkend='API-regulator-disable'>disable</link> the
-  reguator and to get and set the runtime parameters of the
+  regulator and to get and set the runtime parameters of the
   regulator.
   </para>
   <para>
+2-2
Documentation/DocBook/uio-howto.tmpl
···
   <para>
   The dynamic memory regions will be allocated when the UIO device file,
   <varname>/dev/uioX</varname> is opened.
-  Simiar to static memory resources, the memory region information for
+  Similar to static memory resources, the memory region information for
   dynamic regions is then visible via sysfs at
   <varname>/sys/class/uio/uioX/maps/mapY/*</varname>.
-  The dynmaic memory regions will be freed when the UIO device file is
+  The dynamic memory regions will be freed when the UIO device file is
   closed. When no processes are holding the device file open, the address
   returned to userspace is ~0.
   </para>
+1-1
Documentation/DocBook/usb.tmpl
···

   <listitem><para>The Linux USB API supports synchronous calls for
   control and bulk messages.
-  It also supports asynchnous calls for all kinds of data transfer,
+  It also supports asynchronous calls for all kinds of data transfer,
   using request structures called "URBs" (USB Request Blocks).
   </para></listitem>

+1-1
Documentation/DocBook/writing-an-alsa-driver.tmpl
···
   suspending the PCM operations via
   <function>snd_pcm_suspend_all()</function> or
   <function>snd_pcm_suspend()</function>. It means that the PCM
-  streams are already stoppped when the register snapshot is
+  streams are already stopped when the register snapshot is
   taken. But, remember that you don't have to restart the PCM
   stream in the resume callback. It'll be restarted via
   trigger call with <constant>SNDRV_PCM_TRIGGER_RESUME</constant>
+5-2
Documentation/cpu-freq/intel-pstate.txt
···
 /sys/devices/system/cpu/intel_pstate/

       max_perf_pct: limits the maximum P state that will be requested by
-      the driver stated as a percentage of the available performance.
+      the driver stated as a percentage of the available performance. The
+      available (P states) performance may be reduced by the no_turbo
+      setting described below.

       min_perf_pct: limits the minimum P state that will be requested by
-      the driver stated as a percentage of the available performance.
+      the driver stated as a percentage of the max (non-turbo)
+      performance level.

       no_turbo: limits the driver to selecting P states below the turbo
       frequency range.
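The two sysfs limits above apply percentages to different baselines: max_perf_pct is taken from the available performance (which no_turbo can shrink to the non-turbo range), while min_perf_pct is taken from the max non-turbo level. A toy C model of how such percentage limits could clamp a requested P state; the field names and clamping policy here are illustrative assumptions, not the driver's exact algorithm:

```c
#include <assert.h>

struct pstate_limits {
	int min_pstate;		/* lowest hardware P state */
	int max_pstate;		/* highest non-turbo P state */
	int turbo_pstate;	/* highest turbo P state */
	int no_turbo;		/* 1: never request the turbo range */
	int max_perf_pct;	/* % of available performance */
	int min_perf_pct;	/* % of max non-turbo performance */
};

static int clamp_int(int v, int lo, int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* Clamp a desired P state against the sysfs-style limits. */
static int clamp_pstate(const struct pstate_limits *l, int desired)
{
	/* no_turbo shrinks the "available" ceiling to the non-turbo max */
	int avail = l->no_turbo ? l->max_pstate : l->turbo_pstate;
	int max_p = avail * l->max_perf_pct / 100;
	int min_p = l->max_pstate * l->min_perf_pct / 100;

	min_p = clamp_int(min_p, l->min_pstate, avail);
	max_p = clamp_int(max_p, min_p, avail);
	return clamp_int(desired, min_p, max_p);
}
```

With no_turbo set, both the ceiling and any percentage of it drop out of the turbo range, which is the interaction the reworded max_perf_pct text calls out.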
···
-- reg: physical base address of the controller and length of memory mapped
   region.

+Optional Properties:
+- clocks: List of clock handles. The parent clocks of the input clocks to the
+	devices in this power domain are set to oscclk before power gating
+	and restored back after powering on a domain. This is required for
+	all domains which are powered on and off and not required for unused
+	domains.
+- clock-names: The following clocks can be specified:
+	- oscclk: Oscillator clock.
+	- pclkN, clkN: Pairs of parent of input clock and input clock to the
+		devices in this power domain. Maximum of 4 pairs (N = 0 to 3)
+		are supported currently.
+
 Node of a device using power domains must have a samsung,power-domain property
 defined with a phandle to respective power domain.
···
 	lcd0: power-domain-lcd0 {
 		compatible = "samsung,exynos4210-pd";
 		reg = <0x10023C00 0x10>;
+	};
+
+	mfc_pd: power-domain@10044060 {
+		compatible = "samsung,exynos4210-pd";
+		reg = <0x10044060 0x20>;
+		clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_SW_ACLK333>,
+			<&clock CLK_MOUT_USER_ACLK333>;
+		clock-names = "oscclk", "pclk0", "clk0";
 	};

 Example of the node using power domain:
+3
Documentation/devicetree/bindings/arm/l2cc.txt
···
 - arm,filter-ranges : <start length> Starting address and length of window to
   filter. Addresses in the filter window are directed to the M1 port. Other
   addresses will go to the M0 port.
+- arm,io-coherent : indicates that the system is operating in an hardware
+  I/O coherent mode. Valid only when the arm,pl310-cache compatible
+  string is used.
 - interrupts : 1 combined interrupt.
 - cache-id-part: cache id part number to be used if it is not present
   on hardware
···
 - spi-max-frequency: Specifies maximum SPI clock frequency,
                      Units - Hz. Definition as per
                      Documentation/devicetree/bindings/spi/spi-bus.txt
+- num-cs: total number of chipselects
+- cs-gpios: should specify GPIOs used for chipselects.
+            The gpios will be referred to as reg = <index> in the SPI child
+            nodes. If unspecified, a single SPI device without a chip
+            select can be used.
+

 SPI slave nodes must be children of the SPI master node and can contain
 properties described in Documentation/devicetree/bindings/spi/spi-bus.txt
+11
Documentation/email-clients.txt
···
 Email clients info for Linux
 ======================================================================

+Git
+----------------------------------------------------------------------
+These days most developers use `git send-email` instead of regular
+email clients. The man page for this is quite good. On the receiving
+end, maintainers use `git am` to apply the patches.
+
+If you are new to git then send your first patch to yourself. Save it
+as raw text including all the headers. Run `git am raw_email.txt` and
+then review the changelog with `git log`. When that works then send
+the patch to the appropriate mailing list(s).
+
 General Preferences
 ----------------------------------------------------------------------
 Patches for the Linux kernel are submitted via email, preferably as
+2-2
Documentation/laptops/00-INDEX
···
 	- information on hard disk shock protection.
 dslm.c
 	- Simple Disk Sleep Monitor program
-hpfall.c
-	- (HP) laptop accelerometer program for disk protection.
+freefall.c
+	- (HP/DELL) laptop accelerometer program for disk protection.
 laptop-mode.txt
 	- how to conserve battery power using laptop-mode.
 sony-laptop.txt
···
   hp-inv-led	HP with broken BIOS for inverted mute LED
   auto		BIOS setup (default)

+STAC92HD95
+==========
+  hp-led	LED support for HP laptops
+  hp-bass	Bass HPF setup for HP Spectre 13
+
 STAC9872
 ========
   vaio		VAIO laptop without SPDIF
···
 VERSION = 3
 PATCHLEVEL = 16
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc5
 NAME = Shuffling Zombie Juror

 # *DOCUMENTATION*
···
 # descending is started. They are now explicitly listed as the
 # prepare rule.

+# Beautify output
+# ---------------------------------------------------------------------------
+#
+# Normally, we echo the whole command before executing it. By making
+# that echo $($(quiet)$(cmd)), we now have the possibility to set
+# $(quiet) to choose other forms of output instead, e.g.
+#
+#         quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@
+#         cmd_cc_o_c       = $(CC) $(c_flags) -c -o $@ $<
+#
+# If $(quiet) is empty, the whole command will be printed.
+# If it is set to "quiet_", only the short version will be printed.
+# If it is set to "silent_", nothing will be printed at all, since
+# the variable $(silent_cmd_cc_o_c) doesn't exist.
+#
+# A simple variant is to prefix commands with $(Q) - that's useful
+# for commands that shall be hidden in non-verbose mode.
+#
+#	$(Q)ln $@ :<
+#
+# If KBUILD_VERBOSE equals 0 then the above command will be hidden.
+# If KBUILD_VERBOSE equals 1 then the above command is displayed.
+#
 # To put more focus on warnings, be less verbose as default
 # Use 'make V=1' to see the full commands
···
 ifndef KBUILD_VERBOSE
   KBUILD_VERBOSE = 0
 endif
+
+ifeq ($(KBUILD_VERBOSE),1)
+  quiet =
+  Q =
+else
+  quiet=quiet_
+  Q = @
+endif
+
+# If the user is running make -s (silent mode), suppress echoing of
+# commands
+
+ifneq ($(filter 4.%,$(MAKE_VERSION)),)	# make-4
+ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),)
+  quiet=silent_
+endif
+else					# make-3.8x
+ifneq ($(filter s% -s%,$(MAKEFLAGS)),)
+  quiet=silent_
+endif
+endif
+
+export quiet Q KBUILD_VERBOSE

 # Call a source code checker (by default, "sparse") as part of the
 # C compilation.
···
 $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
 	@:

+# Fake the "Entering directory" message once, so that IDEs/editors are
+# able to understand relative filenames.
+       echodir := @echo
+ quiet_echodir := @echo
+silent_echodir := @:
 sub-make: FORCE
+	$($(quiet)echodir) "make[1]: Entering directory \`$(KBUILD_OUTPUT)'"
 	$(if $(KBUILD_VERBOSE:1=),@)$(MAKE) -C $(KBUILD_OUTPUT) \
 	KBUILD_SRC=$(CURDIR) \
 	KBUILD_EXTMOD="$(KBUILD_EXTMOD)" -f $(CURDIR)/Makefile \
···

 export KBUILD_MODULES KBUILD_BUILTIN
 export KBUILD_CHECKSRC KBUILD_SRC KBUILD_EXTMOD
-
-# Beautify output
-# ---------------------------------------------------------------------------
-#
-# Normally, we echo the whole command before executing it. By making
-# that echo $($(quiet)$(cmd)), we now have the possibility to set
-# $(quiet) to choose other forms of output instead, e.g.
-#
-#         quiet_cmd_cc_o_c = Compiling $(RELDIR)/$@
-#         cmd_cc_o_c       = $(CC) $(c_flags) -c -o $@ $<
-#
-# If $(quiet) is empty, the whole command will be printed.
-# If it is set to "quiet_", only the short version will be printed.
-# If it is set to "silent_", nothing will be printed at all, since
-# the variable $(silent_cmd_cc_o_c) doesn't exist.
-#
-# A simple variant is to prefix commands with $(Q) - that's useful
-# for commands that shall be hidden in non-verbose mode.
-#
-#	$(Q)ln $@ :<
-#
-# If KBUILD_VERBOSE equals 0 then the above command will be hidden.
-# If KBUILD_VERBOSE equals 1 then the above command is displayed.
-
-ifeq ($(KBUILD_VERBOSE),1)
-  quiet =
-  Q =
-else
-  quiet=quiet_
-  Q = @
-endif
-
-# If the user is running make -s (silent mode), suppress echoing of
-# commands
-
-ifneq ($(filter 4.%,$(MAKE_VERSION)),)	# make-4
-ifneq ($(filter %s ,$(firstword x$(MAKEFLAGS))),)
-  quiet=silent_
-endif
-else					# make-3.8x
-ifneq ($(filter s% -s%,$(MAKEFLAGS)),)
-  quiet=silent_
-endif
-endif
-
-export quiet Q KBUILD_VERBOSE

 ifneq ($(CC),)
 ifeq ($(shell $(CC) -v 2>&1 | grep -c "clang version"), 1)
···
 # Packaging of the kernel to various formats
 # ---------------------------------------------------------------------------
 # rpm target kept for backward compatibility
-package-dir	:= $(srctree)/scripts/package
+package-dir	:= scripts/package

 %src-pkg: FORCE
 	$(Q)$(MAKE) $(build)=$(package-dir) $@
···
  * -This is the more "natural" hand written assembler
  */

+#include <linux/linkage.h>
 #include <asm/entry.h>       /* For the SAVE_* macros */
 #include <asm/asm-offsets.h>
-#include <asm/linkage.h>

 #define KSP_WORD_OFF ((TASK_THREAD + THREAD_KSP) / 4)

+1-1
arch/arc/kernel/devtree.c
···
 {
 	const struct machine_desc *mdesc;
 	unsigned long dt_root;
-	void *clk;
+	const void *clk;
 	int len;

 	if (!early_init_dt_scan(dt))
+4-3
arch/arc/kernel/head.S
···
 	; Clear BSS before updating any globals
 	; XXX: use ZOL here
 	mov	r5, __bss_start
-	mov	r6, __bss_stop
+	sub	r6, __bss_stop, r5
+	lsr.f	lp_count, r6, 2
+	lpnz	1f
+	st.ab	0, [r5, 4]
 1:
-	st.ab	0, [r5,4]
-	brlt	r5, r6, 1b

 	; Uboot - kernel ABI
 	;    r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2
+4
arch/arc/kernel/ptrace.c
···
 	pr_debug("REQ=%ld: ADDR =0x%lx, DATA=0x%lx)\n", request, addr, data);

 	switch (request) {
+	case PTRACE_GET_THREAD_AREA:
+		ret = put_user(task_thread_info(child)->thr_ptr,
+			       (unsigned long __user *)data);
+		break;
 	default:
 		ret = ptrace_request(child, request, addr, data);
 		break;
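The new PTRACE_GET_THREAD_AREA case above copies the tracee's thr_ptr out through the tracer-supplied data pointer with put_user(), and everything else falls through to the generic handler. A simplified userspace model of that dispatch shape; the struct, request number, and fallback here are stand-ins invented for the sketch, not the kernel's definitions:

```c
#include <assert.h>
#include <errno.h>

struct task { unsigned long thr_ptr; };

enum { REQ_GET_THREAD_AREA = 25 };	/* arbitrary value for the sketch */

/* Fallback for requests this dispatcher does not handle itself. */
static long generic_request(struct task *t, long request)
{
	(void)t; (void)request;
	return -EIO;
}

static long handle_ptrace(struct task *child, long request,
			  unsigned long *data)
{
	long ret;

	switch (request) {
	case REQ_GET_THREAD_AREA:
		*data = child->thr_ptr;	/* kernel uses put_user() here */
		ret = 0;
		break;
	default:
		ret = generic_request(child, request);
		break;
	}
	return ret;
}
```

The design point is that arch code only intercepts the requests it must, leaving all other semantics to the shared path.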
+13-2
arch/arc/kernel/smp.c
···
  * API called by platform code to hookup arch-common ISR to their IPI IRQ
  */
 static DEFINE_PER_CPU(int, ipi_dev);
+
+static struct irqaction arc_ipi_irq = {
+        .name    = "IPI Interrupt",
+        .flags   = IRQF_PERCPU,
+        .handler = do_IPI,
+};
+
 int smp_ipi_irq_setup(int cpu, int irq)
 {
-	int *dev_id = &per_cpu(ipi_dev, smp_processor_id());
-	return request_percpu_irq(irq, do_IPI, "IPI Interrupt", dev_id);
+	if (!cpu)
+		return setup_irq(irq, &arc_ipi_irq);
+	else
+		arch_unmask_irq(irq);
+
+	return 0;
 }
···
 CONFIG_BACKLIGHT_PWM=y
 # CONFIG_USB_SUPPORT is not set
 CONFIG_MMC=y
-CONFIG_MMC_UNSAFE_RESUME=y
 CONFIG_MMC_BLOCK_MINORS=32
 CONFIG_MMC_TEST=y
 CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
 CONFIG_MMC_SDHCI_BCM_KONA=y
 CONFIG_NEW_LEDS=y
 CONFIG_LEDS_CLASS=y
···
 	struct mcpm_sync_struct clusters[MAX_NR_CLUSTERS];
 };

-extern unsigned long sync_phys;	/* physical address of *mcpm_sync */
-
 void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster);
 void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster);
 void __mcpm_outbound_leave_critical(unsigned int cluster, int state);
+19-9
arch/arm/kernel/kprobes-test-arm.c
···
 	TEST_RRR( op "lt" s "	r11, r",11,VAL1,", r",14,N(val),", asr r",7, 6,"")\
 	TEST_RR(  op "gt" s "	r12, r13"       ", r",14,val, ", ror r",14,7,"")\
 	TEST_RR(  op "le" s "	r14, r",0, val, ", r13"       ", lsl r",14,8,"")\
-	TEST_RR(  op s "	r12, pc"        ", r",14,val, ", ror r",14,7,"")\
-	TEST_RR(  op s "	r14, r",0, val, ", pc"        ", lsl r",14,8,"")\
 	TEST_R(   op "eq" s "	r0,  r",11,VAL1,", #0xf5")			\
 	TEST_R(   op "ne" s "	r11, r",0, VAL1,", #0xf5000000")		\
 	TEST_R(   op s "	r7,  r",8, VAL2,", #0x000af000")		\
···
 	TEST_RRR( op "ge	r",11,VAL1,", r",14,N(val),", asr r",7, 6,"")	\
 	TEST_RR(  op "le	r13"       ", r",14,val, ", ror r",14,7,"")	\
 	TEST_RR(  op "gt	r",0, val, ", r13"       ", lsl r",14,8,"")	\
-	TEST_RR(  op "	pc"        ", r",14,val, ", ror r",14,7,"")	\
-	TEST_RR(  op "	r",0, val, ", pc"        ", lsl r",14,8,"")	\
 	TEST_R(   op "eq	r",11,VAL1,", #0xf5")				\
 	TEST_R(   op "ne	r",0, VAL1,", #0xf5000000")			\
 	TEST_R(   op "	r",8, VAL2,", #0x000af000")
···
 	TEST_RR(  op "ge" s "	r11, r",11,N(val),", asr r",7, 6,"")	\
 	TEST_RR(  op "lt" s "	r12, r",11,val, ", ror r",14,7,"")	\
 	TEST_R(   op "gt" s "	r14, r13"       ", lsl r",14,8,"")	\
-	TEST_R(   op "le" s "	r14, pc"        ", lsl r",14,8,"")	\
 	TEST(     op "eq" s "	r0,  #0xf5")				\
 	TEST(     op "ne" s "	r11, #0xf5000000")			\
 	TEST(     op s "	r7,  #0x000af000")			\
···
 	TEST_SUPPORTED("cmp	pc, #0x1000");
 	TEST_SUPPORTED("cmp	sp, #0x1000");

-	/* Data-processing with PC as shift*/
+	/* Data-processing with PC and a shift count in a register */
 	TEST_UNSUPPORTED(__inst_arm(0xe15c0f1e) "	@ cmp	r12, r14, asl pc")
 	TEST_UNSUPPORTED(__inst_arm(0xe1a0cf1e) "	@ mov	r12, r14, asl pc")
 	TEST_UNSUPPORTED(__inst_arm(0xe08caf1e) "	@ add	r10, r12, r14, asl pc")
+	TEST_UNSUPPORTED(__inst_arm(0xe151021f) "	@ cmp	r1, pc, lsl r2")
+	TEST_UNSUPPORTED(__inst_arm(0xe17f0211) "	@ cmn	pc, r1, lsl r2")
+	TEST_UNSUPPORTED(__inst_arm(0xe1a0121f) "	@ mov	r1, pc, lsl r2")
+	TEST_UNSUPPORTED(__inst_arm(0xe1a0f211) "	@ mov	pc, r1, lsl r2")
+	TEST_UNSUPPORTED(__inst_arm(0xe042131f) "	@ sub	r1, r2, pc, lsl r3")
+	TEST_UNSUPPORTED(__inst_arm(0xe1cf1312) "	@ bic	r1, pc, r2, lsl r3")
+	TEST_UNSUPPORTED(__inst_arm(0xe081f312) "	@ add	pc, r1, r2, lsl r3")

-	/* Data-processing with PC as shift*/
+	/* Data-processing with PC as a target and status registers updated */
 	TEST_UNSUPPORTED("movs	pc, r1")
 	TEST_UNSUPPORTED("movs	pc, r1, lsl r2")
 	TEST_UNSUPPORTED("movs	pc, #0x10000")
···
 	TEST_BF_R ("add	pc, pc, r",14,2f-1f-8,"")
 	TEST_BF_R ("add	pc, r",14,2f-1f-8,", pc")
 	TEST_BF_R ("mov	pc, r",0,2f,"")
-	TEST_BF_RR("mov	pc, r",0,2f,", asl r",1,0,"")
+	TEST_BF_R ("add	pc, pc, r",14,(2f-1f-8)*2,", asr #1")
 	TEST_BB(   "sub	pc, pc, #1b-2b+8")
 #if __LINUX_ARM_ARCH__ == 6 && !defined(CONFIG_CPU_V7)
 	TEST_BB(   "sub	pc, pc, #1b-2b+8-2") /* UNPREDICTABLE before and after ARMv6 */
 #endif
 	TEST_BB_R( "sub	pc, pc, r",14, 1f-2f+8,"")
 	TEST_BB_R( "rsb	pc, r",14,1f-2f+8,", pc")
-	TEST_RR(   "add	pc, pc, r",10,-2,", asl r",11,1,"")
+	TEST_R(    "add	pc, pc, r",10,-2,", asl #1")
 #ifdef CONFIG_THUMB2_KERNEL
 	TEST_ARM_TO_THUMB_INTERWORK_R("add	pc, pc, r",0,3f-1f-8+1,"")
 	TEST_ARM_TO_THUMB_INTERWORK_R("sub	pc, r",0,3f+8+1,", #8")
···
 	TEST_BB_R("bx	r",7,2f,"")
 	TEST_BF_R("bxeq	r",14,2f,"")

+#if __LINUX_ARM_ARCH__ >= 5
 	TEST_R("clz	r0, r",0, 0x0,"")
 	TEST_R("clzeq	r7, r",14,0x1,"")
 	TEST_R("clz	lr, r",7, 0xffffffff,"")
···
 	TEST_UNSUPPORTED(__inst_arm(0xe16f02e1) " @ smultt pc, r1, r2")
 	TEST_UNSUPPORTED(__inst_arm(0xe16002ef) " @ smultt r0, pc, r2")
 	TEST_UNSUPPORTED(__inst_arm(0xe1600fe1) " @ smultt r0, r1, pc")
+#endif

 	TEST_GROUP("Multiply and multiply-accumulate")
···
 	TEST_UNSUPPORTED("ldrsht	r1, [r2], #48")
 #endif

+#if __LINUX_ARM_ARCH__ >= 5
 	TEST_RPR(  "strd	r",0, VAL1,", [r",1, 48,", -r",2,24,"]")
 	TEST_RPR(  "strccd	r",8, VAL2,", [r",13,0, ", r",12,48,"]")
 	TEST_RPR(  "strd	r",4, VAL1,", [r",2, 24,", r",3, 48,"]!")
···
 	TEST_UNSUPPORTED(__inst_arm(0xe1efc3d0) "	@ ldrd r12, [pc, #48]!")
 	TEST_UNSUPPORTED(__inst_arm(0xe0c9f3d0) "	@ ldrd pc, [r9], #48")
 	TEST_UNSUPPORTED(__inst_arm(0xe0c9e3d0) "	@ ldrd lr, [r9], #48")
+#endif

 	TEST_GROUP("Miscellaneous")
···
 	TEST_COPROCESSOR( "mrc"two"	0, 0, r0, cr0, cr0, 0")

 	COPROCESSOR_INSTRUCTIONS_ST_LD("",e)
+#if __LINUX_ARM_ARCH__ >= 5
 	COPROCESSOR_INSTRUCTIONS_MC_MR("",e)
+#endif
 	TEST_UNSUPPORTED("svc	0")
 	TEST_UNSUPPORTED("svc	0xffffff")
···
 	TEST(	"blx	__dummy_thumb_subroutine_odd")
 #endif /* __LINUX_ARM_ARCH__ >= 6 */

+#if __LINUX_ARM_ARCH__ >= 5
 	COPROCESSOR_INSTRUCTIONS_ST_LD("2",f)
+#endif
 #if __LINUX_ARM_ARCH__ >= 6
 	COPROCESSOR_INSTRUCTIONS_MC_MR("2",f)
 #endif
+10
arch/arm/kernel/kprobes-test.c
···
 static int post_handler_called;
 static int jprobe_func_called;
 static int kretprobe_handler_called;
+static int tests_failed;

 #define FUNC_ARG1 0x12345678
 #define FUNC_ARG2 0xabcdef
···
 	pr_info("    jprobe\n");
 	ret = test_jprobe(func);
+#if defined(CONFIG_THUMB2_KERNEL) && !defined(MODULE)
+	if (ret == -EINVAL) {
+		pr_err("FAIL: Known longtime bug with jprobe on Thumb kernels\n");
+		tests_failed = ret;
+		ret = 0;
+	}
+#endif
 	if (ret < 0)
 		return ret;
···
 #endif

 out:
+	if (ret == 0)
+		ret = tests_failed;
 	if (ret == 0)
 		pr_info("Finished kprobe tests OK\n");
 	else
···
 	PTRACE_SYSCALL_EXIT,
 };

-static int tracehook_report_syscall(struct pt_regs *regs,
+static void tracehook_report_syscall(struct pt_regs *regs,
 				    enum ptrace_syscall_dir dir)
 {
 	unsigned long ip;
···
 		current_thread_info()->syscall = -1;

 	regs->ARM_ip = ip;
-	return current_thread_info()->syscall;
 }

 asmlinkage int syscall_trace_enter(struct pt_regs *regs, int scno)
···
 		return -1;

 	if (test_thread_flag(TIF_SYSCALL_TRACE))
-		scno = tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
+		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
+
+	scno = current_thread_info()->syscall;

 	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
 		trace_sys_enter(regs, scno);
+1-1
arch/arm/kernel/topology.c
···
 		cpu_topology[cpuid].socket_id, mpidr);
 }

-static inline const int cpu_corepower_flags(void)
+static inline int cpu_corepower_flags(void)
 {
 	return SD_SHARE_PKG_RESOURCES | SD_SHARE_POWERDOMAIN;
 }
+3-5
arch/arm/mach-exynos/exynos.c
···

 void __init exynos_cpuidle_init(void)
 {
-	if (soc_is_exynos5440())
-		return;
-
-	platform_device_register(&exynos_cpuidle);
+	if (soc_is_exynos4210() || soc_is_exynos5250())
+		platform_device_register(&exynos_cpuidle);
 }

 void __init exynos_cpufreq_init(void)
···
 	 * This is called from smp_prepare_cpus if we've built for SMP, but
 	 * we still need to set it up for PM and firmware ops if not.
 	 */
-	if (!IS_ENABLED(SMP))
+	if (!IS_ENABLED(CONFIG_SMP))
 		exynos_sysram_init();

 	exynos_cpuidle_init();
+7-2
arch/arm/mach-exynos/firmware.c
···

 	boot_reg = sysram_ns_base_addr + 0x1c;

-	if (!soc_is_exynos4212() && !soc_is_exynos3250())
-		boot_reg += 4*cpu;
+	/*
+	 * Almost all Exynos-series of SoCs that run in secure mode don't need
+	 * additional offset for every CPU, with Exynos4412 being the only
+	 * exception.
+	 */
+	if (soc_is_exynos4412())
+		boot_reg += 4 * cpu;

 	__raw_writel(boot_addr, boot_reg);
 	return 0;
+60-1
arch/arm/mach-exynos/pm_domains.c
···1717#include <linux/err.h>1818#include <linux/slab.h>1919#include <linux/pm_domain.h>2020+#include <linux/clk.h>2021#include <linux/delay.h>2122#include <linux/of_address.h>2223#include <linux/of_platform.h>2324#include <linux/sched.h>24252526#include "regs-pmu.h"2727+2828+#define MAX_CLK_PER_DOMAIN 426292730/*2831 * Exynos specific wrapper around the generic power domain···3532 char const *name;3633 bool is_off;3734 struct generic_pm_domain pd;3535+ struct clk *oscclk;3636+ struct clk *clk[MAX_CLK_PER_DOMAIN];3737+ struct clk *pclk[MAX_CLK_PER_DOMAIN];3838};39394040static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)···49435044 pd = container_of(domain, struct exynos_pm_domain, pd);5145 base = pd->base;4646+4747+ /* Set oscclk before powering off a domain*/4848+ if (!power_on) {4949+ int i;5050+5151+ for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {5252+ if (IS_ERR(pd->clk[i]))5353+ break;5454+ if (clk_set_parent(pd->clk[i], pd->oscclk))5555+ pr_err("%s: error setting oscclk as parent to clock %d\n",5656+ pd->name, i);5757+ }5858+ }52595360 pwr = power_on ? 
S5P_INT_LOCAL_PWR_EN : 0;5461 __raw_writel(pwr, base);···7960 cpu_relax();8061 usleep_range(80, 100);8162 }6363+6464+ /* Restore clocks after powering on a domain*/6565+ if (power_on) {6666+ int i;6767+6868+ for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {6969+ if (IS_ERR(pd->clk[i]))7070+ break;7171+ if (clk_set_parent(pd->clk[i], pd->pclk[i]))7272+ pr_err("%s: error setting parent to clock%d\n",7373+ pd->name, i);7474+ }7575+ }7676+8277 return 0;8378}8479···185152186153 for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") {187154 struct exynos_pm_domain *pd;188188- int on;155155+ int on, i;156156+ struct device *dev;189157190158 pdev = of_find_device_by_node(np);159159+ dev = &pdev->dev;191160192161 pd = kzalloc(sizeof(*pd), GFP_KERNEL);193162 if (!pd) {···205170 pd->pd.power_on = exynos_pd_power_on;206171 pd->pd.of_node = np;207172173173+ pd->oscclk = clk_get(dev, "oscclk");174174+ if (IS_ERR(pd->oscclk))175175+ goto no_clk;176176+177177+ for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {178178+ char clk_name[8];179179+180180+ snprintf(clk_name, sizeof(clk_name), "clk%d", i);181181+ pd->clk[i] = clk_get(dev, clk_name);182182+ if (IS_ERR(pd->clk[i]))183183+ break;184184+ snprintf(clk_name, sizeof(clk_name), "pclk%d", i);185185+ pd->pclk[i] = clk_get(dev, clk_name);186186+ if (IS_ERR(pd->pclk[i])) {187187+ clk_put(pd->clk[i]);188188+ pd->clk[i] = ERR_PTR(-EINVAL);189189+ break;190190+ }191191+ }192192+193193+ if (IS_ERR(pd->clk[0]))194194+ clk_put(pd->oscclk);195195+196196+no_clk:208197 platform_set_drvdata(pdev, pd);209198210199 on = __raw_readl(pd->base + 0x4) & S5P_INT_LOCAL_PWR_EN;
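The clock handling added to `exynos_pd_power()` above follows a simple pattern: before a domain powers down, every clock it owns is parked on the always-on oscillator, and the real parents are restored after power-up. A toy model of that logic, with hypothetical names (`toy_domain`, integer parent ids standing in for `struct clk` handles):

```c
#include <assert.h>

#define MAX_CLK_PER_DOMAIN 4
enum { OSCCLK = 0 };		/* always-on oscillator, parent id 0 */

/* Toy power domain: parent[] holds the current parent id of each
 * domain clock (-1 = slot unused), saved[] the parent to restore. */
struct toy_domain {
	int parent[MAX_CLK_PER_DOMAIN];
	int saved[MAX_CLK_PER_DOMAIN];
	int powered;
};

static void toy_domain_power(struct toy_domain *d, int on)
{
	int i;

	/* Park every clock on oscclk before powering off */
	if (!on) {
		for (i = 0; i < MAX_CLK_PER_DOMAIN && d->parent[i] >= 0; i++) {
			d->saved[i] = d->parent[i];
			d->parent[i] = OSCCLK;
		}
	}

	d->powered = on;

	/* Restore the original parents after powering on */
	if (on) {
		for (i = 0; i < MAX_CLK_PER_DOMAIN && d->parent[i] >= 0; i++)
			d->parent[i] = d->saved[i];
	}
}
```

As in the real code, a sentinel in the clock array (here `-1`, there `IS_ERR()`) terminates both loops, so a domain with fewer than `MAX_CLK_PER_DOMAIN` clocks is handled without a separate count.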
+23-8
arch/arm/mach-imx/clk-gate2.c
···67676868 spin_lock_irqsave(gate->lock, flags);69697070- if (gate->share_count && --(*gate->share_count) > 0)7171- goto out;7070+ if (gate->share_count) {7171+ if (WARN_ON(*gate->share_count == 0))7272+ goto out;7373+ else if (--(*gate->share_count) > 0)7474+ goto out;7575+ }72767377 reg = readl(gate->reg);7478 reg &= ~(3 << gate->bit_idx);···8278 spin_unlock_irqrestore(gate->lock, flags);8379}84808585-static int clk_gate2_is_enabled(struct clk_hw *hw)8181+static int clk_gate2_reg_is_enabled(void __iomem *reg, u8 bit_idx)8682{8787- u32 reg;8888- struct clk_gate2 *gate = to_clk_gate2(hw);8383+ u32 val = readl(reg);89849090- reg = readl(gate->reg);9191-9292- if (((reg >> gate->bit_idx) & 1) == 1)8585+ if (((val >> bit_idx) & 1) == 1)9386 return 1;94879588 return 0;8989+}9090+9191+static int clk_gate2_is_enabled(struct clk_hw *hw)9292+{9393+ struct clk_gate2 *gate = to_clk_gate2(hw);9494+9595+ if (gate->share_count)9696+ return !!(*gate->share_count);9797+ else9898+ return clk_gate2_reg_is_enabled(gate->reg, gate->bit_idx);9699}9710098101static struct clk_ops clk_gate2_ops = {···127116 gate->bit_idx = bit_idx;128117 gate->flags = clk_gate2_flags;129118 gate->lock = lock;119119+120120+ /* Initialize share_count per hardware state */121121+ if (share_count)122122+ *share_count = clk_gate2_reg_is_enabled(reg, bit_idx) ? 1 : 0;130123 gate->share_count = share_count;131124132125 init.name = name;
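The shared-gate bookkeeping above boils down to reference counting around one hardware bit: only the first enable and the last disable touch the register, an unbalanced disable is caught rather than underflowing, and `is_enabled` reports the count instead of the register when the gate is shared. A toy model of that logic (hypothetical `toy_gate` names):

```c
#include <assert.h>
#include <stddef.h>

/* Toy shared clock gate: hw_enabled models the register bit;
 * share_count is NULL for an unshared gate. */
struct toy_gate {
	int hw_enabled;
	unsigned int *share_count;
};

static void toy_gate_enable(struct toy_gate *g)
{
	if (g->share_count && (*g->share_count)++ > 0)
		return;			/* another user already enabled it */
	g->hw_enabled = 1;
}

static void toy_gate_disable(struct toy_gate *g)
{
	if (g->share_count) {
		if (*g->share_count == 0)
			return;		/* unbalanced disable, like the WARN_ON */
		if (--(*g->share_count) > 0)
			return;		/* other users still need the clock */
	}
	g->hw_enabled = 0;
}

static int toy_gate_is_enabled(struct toy_gate *g)
{
	/* A shared gate reports its software state, not the register */
	return g->share_count ? (*g->share_count > 0) : g->hw_enabled;
}
```

This mirrors why the patch also seeds `share_count` from the hardware state at registration time: if the count started at zero while the bit was already set, the first enable/disable pair would go out of sync with the register.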
···6666extern void ll_disable_coherency(void);6767extern void ll_enable_coherency(void);68686969+extern void armada_370_xp_cpu_resume(void);7070+6971static struct platform_device armada_xp_cpuidle_device = {7072 .name = "cpuidle-armada-370-xp",7173};···140138 reg = readl(pmsu_mp_base + L2C_NFABRIC_PM_CTL);141139 reg |= L2C_NFABRIC_PM_CTL_PWR_DOWN;142140 writel(reg, pmsu_mp_base + L2C_NFABRIC_PM_CTL);143143-}144144-145145-static void armada_370_xp_cpu_resume(void)146146-{147147- asm volatile("bl ll_add_cpu_to_smp_group\n\t"148148- "bl ll_enable_coherency\n\t"149149- "b cpu_resume\n\t");150141}151142152143/* No locking is needed because we only access per-CPU registers */
+25
arch/arm/mach-mvebu/pmsu_ll.S
···11+/*22+ * Copyright (C) 2014 Marvell33+ *44+ * Thomas Petazzoni <thomas.petazzoni@free-electrons.com>55+ * Gregory Clement <gregory.clement@free-electrons.com>66+ *77+ * This file is licensed under the terms of the GNU General Public88+ * License version 2. This program is licensed "as is" without any99+ * warranty of any kind, whether express or implied.1010+ */1111+1212+#include <linux/linkage.h>1313+#include <asm/assembler.h>1414+1515+/*1616+ * This is the entry point through which CPUs exiting cpuidle deep1717+ * idle state are going.1818+ */1919+ENTRY(armada_370_xp_cpu_resume)2020+ARM_BE8(setend be ) @ go BE8 if entered LE2121+ bl ll_add_cpu_to_smp_group2222+ bl ll_enable_coherency2323+ b cpu_resume2424+ENDPROC(armada_370_xp_cpu_resume)2525+
···7676 * (assuming that it is counting N upwards), or -2 if the enclosing loop7777 * should skip to the next iteration (again assuming N is increasing).7878 */7979-static int _dpll_test_fint(struct clk_hw_omap *clk, u8 n)7979+static int _dpll_test_fint(struct clk_hw_omap *clk, unsigned int n)8080{8181 struct dpll_data *dd;8282 long fint, fint_min, fint_max;
···649649 }650650 break;651651652652+ case 0xb9bc:653653+ switch (rev) {654654+ case 0:655655+ omap_revision = DRA722_REV_ES1_0;656656+ break;657657+ default:658658+ /* If we have no new revisions */659659+ omap_revision = DRA722_REV_ES1_0;660660+ break;661661+ }662662+ break;663663+652664 default:653665 /* Unknown default to latest silicon rev as default*/654666 pr_warn("%s: unknown idcode=0x%08x (hawkeye=0x%08x,rev=0x%d)\n",
+4-2
arch/arm/mach-omap2/mux.c
···183183 m0_entry = mux->muxnames[0];184184185185 /* First check for full name in mode0.muxmode format */186186- if (mode0_len && strncmp(muxname, m0_entry, mode0_len))187187- continue;186186+ if (mode0_len)187187+ if (strncmp(muxname, m0_entry, mode0_len) ||188188+ (strlen(m0_entry) != mode0_len))189189+ continue;188190189191 /* Then check for muxmode only */190192 for (i = 0; i < OMAP_MUX_NR_MODES; i++) {
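The extra `strlen` check matters because `strncmp` alone only compares a prefix: without it, a lookup would also accept any longer entry that merely starts with the requested name. A minimal sketch of the corrected comparison (helper name is illustrative):

```c
#include <string.h>

/* Match a token of token_len characters (not necessarily
 * NUL-terminated) against a full entry name: the prefix must match
 * AND the entry must be exactly that long. */
static int token_matches(const char *token, size_t token_len,
			 const char *entry)
{
	return strncmp(token, entry, token_len) == 0 &&
	       strlen(entry) == token_len;
}
```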
-20
arch/arm/mach-omap2/omap4-common.c
···102102{}103103#endif104104105105-void __init gic_init_irq(void)106106-{107107- void __iomem *omap_irq_base;108108-109109- /* Static mapping, never released */110110- gic_dist_base_addr = ioremap(OMAP44XX_GIC_DIST_BASE, SZ_4K);111111- BUG_ON(!gic_dist_base_addr);112112-113113- twd_base = ioremap(OMAP44XX_LOCAL_TWD_BASE, SZ_4K);114114- BUG_ON(!twd_base);115115-116116- /* Static mapping, never released */117117- omap_irq_base = ioremap(OMAP44XX_GIC_CPU_BASE, SZ_512);118118- BUG_ON(!omap_irq_base);119119-120120- omap_wakeupgen_init();121121-122122- gic_init(0, 29, gic_dist_base_addr, omap_irq_base);123123-}124124-125105void gic_dist_disable(void)126106{127107 if (gic_dist_base_addr)
···6767#define Ip_u2s3u1(op) \6868void ISAOPC(op)(u32 **buf, unsigned int a, signed int b, unsigned int c)69697070+#define Ip_s3s1s2(op) \7171+void ISAOPC(op)(u32 **buf, int a, int b, int c)7272+7073#define Ip_u2u1s3(op) \7174void ISAOPC(op)(u32 **buf, unsigned int a, unsigned int b, signed int c)7275···150147Ip_u2s3u1(_sd);151148Ip_u2u1u3(_sll);152149Ip_u3u2u1(_sllv);150150+Ip_s3s1s2(_slt);153151Ip_u2u1s3(_sltiu);154152Ip_u3u1u2(_sltu);155153Ip_u2u1u3(_sra);
···1212#include <linux/types.h>1313#include <asm/sgidefs.h>14141515-/* Bits which may be set in sc_used_math */1616-#define USEDMATH_FP (1 << 0)1717-#define USEDMATH_MSA (1 << 1)1818-1915#if _MIPS_SIM == _MIPS_SIM_ABI3220162117/*···3741 unsigned long sc_lo2;3842 unsigned long sc_hi3;3943 unsigned long sc_lo3;4040- unsigned long long sc_msaregs[32]; /* Most significant 64 bits */4141- unsigned long sc_msa_csr;4244};43454446#endif /* _MIPS_SIM == _MIPS_SIM_ABI32 */···7076 __u32 sc_used_math;7177 __u32 sc_dsp;7278 __u32 sc_reserved;7373- __u64 sc_msaregs[32];7474- __u32 sc_msa_csr;7579};76807781
···126126127127 board_bind_eic_interrupt = &msc_bind_eic_interrupt;128128129129- for (; nirq >= 0; nirq--, imp++) {129129+ for (; nirq > 0; nirq--, imp++) {130130 int n = imp->im_irq;131131132132 switch (imp->im_type) {
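The bound change above is a classic off-by-one: with `nirq` holding a count of entries, `nirq >= 0` runs the body `nirq + 1` times and walks one descriptor past the end of the table. A tiny illustration of the corrected loop shape:

```c
#include <assert.h>

/* Visit exactly nirq elements. The fixed condition (nirq > 0) gives
 * nirq iterations; the old ">= 0" form would have given nirq + 1. */
static int iterations(int nirq)
{
	int n = 0;

	for (; nirq > 0; nirq--)
		n++;
	return n;
}
```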
+2-2
arch/mips/kernel/pm-cps.c
···101101 if (!coupled_coherence)102102 return;103103104104- smp_mb__before_atomic_inc();104104+ smp_mb__before_atomic();105105 atomic_inc(a);106106107107 while (atomic_read(a) < online)···158158159159 /* Indicate that this CPU might not be coherent */160160 cpumask_clear_cpu(cpu, &cpu_coherent_mask);161161- smp_mb__after_clear_bit();161161+ smp_mb__after_atomic();162162163163 /* Create a non-coherent mapping of the core ready_count */164164 core_ready_count = per_cpu(ready_count, core);
···3131#include <linux/bitops.h>3232#include <asm/cacheflush.h>3333#include <asm/fpu.h>3434-#include <asm/msa.h>3534#include <asm/sim.h>3635#include <asm/ucontext.h>3736#include <asm/cpu-features.h>···46474748extern asmlinkage int _save_fp_context(struct sigcontext __user *sc);4849extern asmlinkage int _restore_fp_context(struct sigcontext __user *sc);4949-5050-extern asmlinkage int _save_msa_context(struct sigcontext __user *sc);5151-extern asmlinkage int _restore_msa_context(struct sigcontext __user *sc);52505351struct sigframe {5452 u32 sf_ass[4]; /* argument save space for o32 */···96100}9710198102/*9999- * These functions will save only the upper 64 bits of the vector registers,100100- * since the lower 64 bits have already been saved as the scalar FP context.101101- */102102-static int copy_msa_to_sigcontext(struct sigcontext __user *sc)103103-{104104- int i;105105- int err = 0;106106-107107- for (i = 0; i < NUM_FPU_REGS; i++) {108108- err |=109109- __put_user(get_fpr64(¤t->thread.fpu.fpr[i], 1),110110- &sc->sc_msaregs[i]);111111- }112112- err |= __put_user(current->thread.fpu.msacsr, &sc->sc_msa_csr);113113-114114- return err;115115-}116116-117117-static int copy_msa_from_sigcontext(struct sigcontext __user *sc)118118-{119119- int i;120120- int err = 0;121121- u64 val;122122-123123- for (i = 0; i < NUM_FPU_REGS; i++) {124124- err |= __get_user(val, &sc->sc_msaregs[i]);125125- set_fpr64(¤t->thread.fpu.fpr[i], 1, val);126126- }127127- err |= __get_user(current->thread.fpu.msacsr, &sc->sc_msa_csr);128128-129129- return err;130130-}131131-132132-/*133103 * Helper routines134104 */135135-static int protected_save_fp_context(struct sigcontext __user *sc,136136- unsigned used_math)105105+static int protected_save_fp_context(struct sigcontext __user *sc)137106{138107 int err;139139- bool save_msa = cpu_has_msa && (used_math & USEDMATH_MSA);140108#ifndef CONFIG_EVA141109 while (1) {142110 lock_fpu_owner();143111 if (is_fpu_owner()) {144112 err = 
save_fp_context(sc);145145- if (save_msa && !err)146146- err = _save_msa_context(sc);147113 unlock_fpu_owner();148114 } else {149115 unlock_fpu_owner();150116 err = copy_fp_to_sigcontext(sc);151151- if (save_msa && !err)152152- err = copy_msa_to_sigcontext(sc);153117 }154118 if (likely(!err))155119 break;···125169 * EVA does not have FPU EVA instructions so saving fpu context directly126170 * does not work.127171 */128128- disable_msa();129172 lose_fpu(1);130173 err = save_fp_context(sc); /* this might fail */131131- if (save_msa && !err)132132- err = copy_msa_to_sigcontext(sc);133174#endif134175 return err;135176}136177137137-static int protected_restore_fp_context(struct sigcontext __user *sc,138138- unsigned used_math)178178+static int protected_restore_fp_context(struct sigcontext __user *sc)139179{140180 int err, tmp __maybe_unused;141141- bool restore_msa = cpu_has_msa && (used_math & USEDMATH_MSA);142181#ifndef CONFIG_EVA143182 while (1) {144183 lock_fpu_owner();145184 if (is_fpu_owner()) {146185 err = restore_fp_context(sc);147147- if (restore_msa && !err) {148148- enable_msa();149149- err = _restore_msa_context(sc);150150- } else {151151- /* signal handler may have used MSA */152152- disable_msa();153153- }154186 unlock_fpu_owner();155187 } else {156188 unlock_fpu_owner();157189 err = copy_fp_from_sigcontext(sc);158158- if (!err && (used_math & USEDMATH_MSA))159159- err = copy_msa_from_sigcontext(sc);160190 }161191 if (likely(!err))162192 break;···158216 * EVA does not have FPU EVA instructions so restoring fpu context159217 * directly does not work.160218 */161161- enable_msa();162219 lose_fpu(0);163220 err = restore_fp_context(sc); /* this might fail */164164- if (restore_msa && !err)165165- err = copy_msa_from_sigcontext(sc);166221#endif167222 return err;168223}···191252 err |= __put_user(rddsp(DSP_MASK), &sc->sc_dsp);192253 }193254194194- used_math = used_math() ? USEDMATH_FP : 0;195195- used_math |= thread_msa_context_live() ? 
USEDMATH_MSA : 0;255255+ used_math = !!used_math();196256 err |= __put_user(used_math, &sc->sc_used_math);197257198258 if (used_math) {···199261 * Save FPU state to signal context. Signal handler200262 * will "inherit" current FPU state.201263 */202202- err |= protected_save_fp_context(sc, used_math);264264+ err |= protected_save_fp_context(sc);203265 }204266 return err;205267}···224286}225287226288static int227227-check_and_restore_fp_context(struct sigcontext __user *sc, unsigned used_math)289289+check_and_restore_fp_context(struct sigcontext __user *sc)228290{229291 int err, sig;230292231293 err = sig = fpcsr_pending(&sc->sc_fpc_csr);232294 if (err > 0)233295 err = 0;234234- err |= protected_restore_fp_context(sc, used_math);296296+ err |= protected_restore_fp_context(sc);235297 return err ?: sig;236298}237299···271333 if (used_math) {272334 /* restore fpu context if we have used it before */273335 if (!err)274274- err = check_and_restore_fp_context(sc, used_math);336336+ err = check_and_restore_fp_context(sc);275337 } else {276276- /* signal handler may have used FPU or MSA. Disable them. */277277- disable_msa();338338+ /* signal handler may have used FPU. Give it up. */278339 lose_fpu(0);279340 }280341
+8-66
arch/mips/kernel/signal32.c
···3030#include <asm/sim.h>3131#include <asm/ucontext.h>3232#include <asm/fpu.h>3333-#include <asm/msa.h>3433#include <asm/war.h>3534#include <asm/vdso.h>3635#include <asm/dsp.h>···41424243extern asmlinkage int _save_fp_context32(struct sigcontext32 __user *sc);4344extern asmlinkage int _restore_fp_context32(struct sigcontext32 __user *sc);4444-4545-extern asmlinkage int _save_msa_context32(struct sigcontext32 __user *sc);4646-extern asmlinkage int _restore_msa_context32(struct sigcontext32 __user *sc);47454846/*4947 * Including <asm/unistd.h> would give use the 64-bit syscall numbers ...···111115}112116113117/*114114- * These functions will save only the upper 64 bits of the vector registers,115115- * since the lower 64 bits have already been saved as the scalar FP context.116116- */117117-static int copy_msa_to_sigcontext32(struct sigcontext32 __user *sc)118118-{119119- int i;120120- int err = 0;121121-122122- for (i = 0; i < NUM_FPU_REGS; i++) {123123- err |=124124- __put_user(get_fpr64(¤t->thread.fpu.fpr[i], 1),125125- &sc->sc_msaregs[i]);126126- }127127- err |= __put_user(current->thread.fpu.msacsr, &sc->sc_msa_csr);128128-129129- return err;130130-}131131-132132-static int copy_msa_from_sigcontext32(struct sigcontext32 __user *sc)133133-{134134- int i;135135- int err = 0;136136- u64 val;137137-138138- for (i = 0; i < NUM_FPU_REGS; i++) {139139- err |= __get_user(val, &sc->sc_msaregs[i]);140140- set_fpr64(¤t->thread.fpu.fpr[i], 1, val);141141- }142142- err |= __get_user(current->thread.fpu.msacsr, &sc->sc_msa_csr);143143-144144- return err;145145-}146146-147147-/*148118 * sigcontext handlers149119 */150150-static int protected_save_fp_context32(struct sigcontext32 __user *sc,151151- unsigned used_math)120120+static int protected_save_fp_context32(struct sigcontext32 __user *sc)152121{153122 int err;154154- bool save_msa = cpu_has_msa && (used_math & USEDMATH_MSA);155123 while (1) {156124 lock_fpu_owner();157125 if (is_fpu_owner()) {158126 err = 
save_fp_context32(sc);159159- if (save_msa && !err)160160- err = _save_msa_context32(sc);161127 unlock_fpu_owner();162128 } else {163129 unlock_fpu_owner();164130 err = copy_fp_to_sigcontext32(sc);165165- if (save_msa && !err)166166- err = copy_msa_to_sigcontext32(sc);167131 }168132 if (likely(!err))169133 break;···137181 return err;138182}139183140140-static int protected_restore_fp_context32(struct sigcontext32 __user *sc,141141- unsigned used_math)184184+static int protected_restore_fp_context32(struct sigcontext32 __user *sc)142185{143186 int err, tmp __maybe_unused;144144- bool restore_msa = cpu_has_msa && (used_math & USEDMATH_MSA);145187 while (1) {146188 lock_fpu_owner();147189 if (is_fpu_owner()) {148190 err = restore_fp_context32(sc);149149- if (restore_msa && !err) {150150- enable_msa();151151- err = _restore_msa_context32(sc);152152- } else {153153- /* signal handler may have used MSA */154154- disable_msa();155155- }156191 unlock_fpu_owner();157192 } else {158193 unlock_fpu_owner();159194 err = copy_fp_from_sigcontext32(sc);160160- if (restore_msa && !err)161161- err = copy_msa_from_sigcontext32(sc);162195 }163196 if (likely(!err))164197 break;···186241 err |= __put_user(mflo3(), &sc->sc_lo3);187242 }188243189189- used_math = used_math() ? USEDMATH_FP : 0;190190- used_math |= thread_msa_context_live() ? USEDMATH_MSA : 0;244244+ used_math = !!used_math();191245 err |= __put_user(used_math, &sc->sc_used_math);192246193247 if (used_math) {···194250 * Save FPU state to signal context. 
Signal handler195251 * will "inherit" current FPU state.196252 */197197- err |= protected_save_fp_context32(sc, used_math);253253+ err |= protected_save_fp_context32(sc);198254 }199255 return err;200256}201257202258static int203203-check_and_restore_fp_context32(struct sigcontext32 __user *sc,204204- unsigned used_math)259259+check_and_restore_fp_context32(struct sigcontext32 __user *sc)205260{206261 int err, sig;207262208263 err = sig = fpcsr_pending(&sc->sc_fpc_csr);209264 if (err > 0)210265 err = 0;211211- err |= protected_restore_fp_context32(sc, used_math);266266+ err |= protected_restore_fp_context32(sc);212267 return err ?: sig;213268}214269···244301 if (used_math) {245302 /* restore fpu context if we have used it before */246303 if (!err)247247- err = check_and_restore_fp_context32(sc, used_math);304304+ err = check_and_restore_fp_context32(sc);248305 } else {249249- /* signal handler may have used FPU or MSA. Disable them. */250250- disable_msa();306306+ /* signal handler may have used FPU. Give it up. */251307 lose_fpu(0);252308 }253309
···414414config CRASH_DUMP415415 bool "Build a kdump crash kernel"416416 depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP)417417- select RELOCATABLE if PPC64 || 44x || FSL_BOOKE417417+ select RELOCATABLE if (PPC64 && !COMPILE_TEST) || 44x || FSL_BOOKE418418 help419419 Build a kernel suitable for use as a kdump capture kernel.420420 The same kernel binary can be used as production kernel and dump···10171017if PPC6410181018config RELOCATABLE10191019 bool "Build a relocatable kernel"10201020+ depends on !COMPILE_TEST10201021 select NONSTATIC_KERNEL10211022 help10221023 This builds a kernel image that is capable of running anywhere
···747747748748#ifdef CONFIG_SCHED_SMT749749/* cpumask of CPUs with asymetric SMT dependancy */750750-static const int powerpc_smt_flags(void)750750+static int powerpc_smt_flags(void)751751{752752 int flags = SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;753753
···485485 * check that the PMU supports EBB, meaning those that don't can still486486 * use bit 63 of the event code for something else if they wish.487487 */488488- return (ppmu->flags & PPMU_EBB) &&488488+ return (ppmu->flags & PPMU_ARCH_207S) &&489489 ((event->attr.config >> PERF_EVENT_CONFIG_EBB_SHIFT) & 1);490490}491491···777777 if (ppmu->flags & PPMU_HAS_SIER)778778 sier = mfspr(SPRN_SIER);779779780780- if (ppmu->flags & PPMU_EBB) {780780+ if (ppmu->flags & PPMU_ARCH_207S) {781781 pr_info("MMCR2: %016lx EBBHR: %016lx\n",782782 mfspr(SPRN_MMCR2), mfspr(SPRN_EBBHR));783783 pr_info("EBBRR: %016lx BESCR: %016lx\n",···996996 } while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev);997997998998 local64_add(delta, &event->count);999999- local64_sub(delta, &event->hw.period_left);999999+10001000+ /*10011001+ * A number of places program the PMC with (0x80000000 - period_left).10021002+ * We never want period_left to be less than 1 because we will program10031003+ * the PMC with a value >= 0x800000000 and an edge detected PMC will10041004+ * roll around to 0 before taking an exception. We have seen this10051005+ * on POWER8.10061006+ *10071007+ * To fix this, clamp the minimum value of period_left to 1.10081008+ */10091009+ do {10101010+ prev = local64_read(&event->hw.period_left);10111011+ val = prev - delta;10121012+ if (val < 1)10131013+ val = 1;10141014+ } while (local64_cmpxchg(&event->hw.period_left, prev, val) != prev);10001015}1001101610021017/*···13141299 ppmu->config_bhrb(cpuhw->bhrb_filter);1315130013161301 write_mmcr0(cpuhw, mmcr0);13021302+13031303+ if (ppmu->flags & PPMU_ARCH_207S)13041304+ mtspr(SPRN_MMCR2, 0);1317130513181306 /*13191307 * Enable instruction sampling if necessary···1714169617151697 if (has_branch_stack(event)) {17161698 /* PMU has BHRB enabled */17171717- if (!(ppmu->flags & PPMU_BHRB))16991699+ if (!(ppmu->flags & PPMU_ARCH_207S))17181700 return -EOPNOTSUPP;17191701 }17201702
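The `period_left` clamp added above can be expressed generically as a lock-free "subtract, but never drop below 1" update, retried with compare-and-swap until it lands. A C11 sketch (the kernel code uses `local64_cmpxchg`; `atomic_compare_exchange_weak` plays the same role here):

```c
#include <stdatomic.h>

/* Subtract delta from *v, clamping the result to a minimum of 1, so
 * that a PMC later programmed with (0x80000000 - *v) can never be
 * handed a value past the edge and roll around to 0. */
static long clamped_sub(_Atomic long *v, long delta)
{
	long prev, val;

	do {
		prev = atomic_load(v);
		val = prev - delta;
		if (val < 1)
			val = 1;
	} while (!atomic_compare_exchange_weak(v, &prev, val));

	return val;
}
```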
···141141142142 /* save number of bits */143143 bits[1] = cpu_to_be64(sctx->count[0] << 3);144144- bits[0] = cpu_to_be64(sctx->count[1] << 3) | sctx->count[0] >> 61;144144+ bits[0] = cpu_to_be64(sctx->count[1] << 3 | sctx->count[0] >> 61);145145146146 /* Pad out to 112 mod 128 and append length */147147 index = sctx->count[0] & 0x7f;
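The one-character move above fixes an operator-precedence bug: the three bits that carry out of the low 64-bit byte counter must be OR-ed into the high word *before* the byte swap, not OR-ed raw into the already-swapped value. The intended computation, shown on plain integers without the endian conversion:

```c
#include <stdint.h>

/* High 64 bits of the 128-bit message length in *bits*, built from a
 * two-word byte counter: shift the high word left by 3 (bytes -> bits)
 * and pull in the 3 bits that carry out of the low word. */
static uint64_t high_len_bits(uint64_t count_hi, uint64_t count_lo)
{
	return count_hi << 3 | count_lo >> 61;	/* << and >> bind before | */
}
```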
+2-2
arch/x86/include/asm/kvm_host.h
···9595#define KVM_REFILL_PAGES 259696#define KVM_MAX_CPUID_ENTRIES 809797#define KVM_NR_FIXED_MTRR_REGION 889898-#define KVM_NR_VAR_MTRR 89898+#define KVM_NR_VAR_MTRR 109999100100#define ASYNC_PF_PER_VCPU 64101101···461461 bool nmi_injected; /* Trying to inject an NMI this entry */462462463463 struct mtrr_state_type mtrr_state;464464- u32 pat;464464+ u64 pat;465465466466 unsigned switch_db_regs;467467 unsigned long db[KVM_NR_DB_REGS];
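The second hunk widens a field that was silently too narrow: IA32_PAT is a 64-bit MSR holding eight 8-bit page-attribute entries, so storing it in a `u32` drops the upper four. A quick demonstration of the truncation, using the architectural power-on default PAT value:

```c
#include <stdint.h>

/* Simulate storing the 64-bit PAT MSR in the old 32-bit field. */
static uint64_t store_pat_in_u32(uint64_t msr)
{
	uint32_t pat = (uint32_t)msr;	/* the too-narrow field */
	return pat;
}
```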
+16
arch/x86/include/asm/ptrace.h
···231231232232#define ARCH_HAS_USER_SINGLE_STEP_INFO233233234234+/*235235+ * When hitting ptrace_stop(), we cannot return using SYSRET because236236+ * that does not restore the full CPU state, only a minimal set. The237237+ * ptracer can change arbitrary register values, which is usually okay238238+ * because the usual ptrace stops run off the signal delivery path which239239+ * forces IRET; however, ptrace_event() stops happen in arbitrary places240240+ * in the kernel and don't force IRET path.241241+ *242242+ * So force IRET path after a ptrace stop.243243+ */244244+#define arch_ptrace_stop_needed(code, info) \245245+({ \246246+ set_thread_flag(TIF_NOTIFY_RESUME); \247247+ false; \248248+})249249+234250struct user_desc;235251extern int do_get_thread_area(struct task_struct *p, int idx,236252 struct user_desc __user *info);
+9
arch/x86/kernel/cpu/perf_event_intel.c
···13821382 intel_pmu_lbr_read();1383138313841384 /*13851385+ * CondChgd bit 63 doesn't mean any overflow status. Ignore13861386+ * and clear the bit.13871387+ */13881388+ if (__test_and_clear_bit(63, (unsigned long *)&status)) {13891389+ if (!status)13901390+ goto done;13911391+ }13921392+13931393+ /*13851394 * PEBS overflow sets bit 62 in the global status register13861395 */13871396 if (__test_and_clear_bit(62, (unsigned long *)&status)) {
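The early-out added above can be read as: strip the meaningless CondChgd bit first, and if nothing else is set in the global status word there is no overflow to handle. A sketch of that filtering (hypothetical helper name):

```c
#include <stdint.h>

/* Clear CondChgd (bit 63) from the global status word; report whether
 * any real overflow bits remain to be processed. */
static int overflow_pending(uint64_t *status)
{
	if (*status & (1ULL << 63)) {
		*status &= ~(1ULL << 63);
		if (*status == 0)
			return 0;	/* only CondChgd was set: done */
	}
	return *status != 0;
}
```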
···363363364364 /* Set up to return from userspace. */365365 restorer = current->mm->context.vdso +366366- selected_vdso32->sym___kernel_sigreturn;366366+ selected_vdso32->sym___kernel_rt_sigreturn;367367 if (ksig->ka.sa.sa_flags & SA_RESTORER)368368 restorer = ksig->ka.sa.sa_restorer;369369 put_user_ex(restorer, &frame->pretcode);
···1111 * Check with readelf after changing.1212 */13131414-/* Disable profiling for userspace code: */1515-#define DISABLE_BRANCH_PROFILING1616-1714#include <uapi/linux/time.h>1815#include <asm/vgtod.h>1916#include <asm/hpet.h>
+15-26
arch/x86/vdso/vdso-fakesections.c
···22 * Copyright 2014 Andy Lutomirski33 * Subject to the GNU Public License, v.244 *55- * Hack to keep broken Go programs working.66- *77- * The Go runtime had a couple of bugs: it would read the section table to try88- * to figure out how many dynamic symbols there were (it shouldn't have looked99- * at the section table at all) and, if there were no SHT_SYNDYM section table1010- * entry, it would use an uninitialized value for the number of symbols. As a1111- * workaround, we supply a minimal section table. vdso2c will adjust the1212- * in-memory image so that "vdso_fake_sections" becomes the section table.1313- *1414- * The bug was introduced by:1515- * https://code.google.com/p/go/source/detail?r=56ea40aac72b (2012-08-31)1616- * and is being addressed in the Go runtime in this issue:1717- * https://code.google.com/p/go/issues/detail?id=819755+ * String table for loadable section headers. See vdso2c.h for why66+ * this exists.187 */1982020-#ifndef __x86_64__2121-#error This hack is specific to the 64-bit vDSO2222-#endif2323-2424-#include <linux/elf.h>2525-2626-extern const __visible struct elf64_shdr vdso_fake_sections[];2727-const __visible struct elf64_shdr vdso_fake_sections[] = {2828- {2929- .sh_type = SHT_DYNSYM,3030- .sh_entsize = sizeof(Elf64_Sym),3131- }3232-};99+const char fake_shstrtab[] __attribute__((section(".fake_shstrtab"))) =1010+ ".hash\0"1111+ ".dynsym\0"1212+ ".dynstr\0"1313+ ".gnu.version\0"1414+ ".gnu.version_d\0"1515+ ".dynamic\0"1616+ ".rodata\0"1717+ ".fake_shstrtab\0" /* Yay, self-referential code. */1818+ ".note\0"1919+ ".eh_frame_hdr\0"2020+ ".eh_frame\0"2121+ ".text";
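The replacement `fake_shstrtab` above is a sequence of NUL-terminated names packed back to back, which is exactly the layout of an ELF `.shstrtab`; section headers refer into it by byte offset via `sh_name`. A small sketch of looking a name up in such a table (helper name is illustrative, mirroring what vdso2c's `find_shname` does later in this series):

```c
#include <string.h>
#include <stddef.h>

/* Find name in a packed, NUL-separated string table of total size len;
 * return its byte offset, or (size_t)-1 if it is not present. */
static size_t shstrtab_find(const char *tab, size_t len, const char *name)
{
	const char *p = tab;

	while ((size_t)(p - tab) < len) {
		if (strcmp(p, name) == 0)
			return (size_t)(p - tab);
		p += strlen(p) + 1;	/* skip to the next packed name */
	}
	return (size_t)-1;
}
```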
+46-18
arch/x86/vdso/vdso-layout.lds.S
···66 * This script controls its layout.77 */8899+#if defined(BUILD_VDSO64)1010+# define SHDR_SIZE 641111+#elif defined(BUILD_VDSO32) || defined(BUILD_VDSOX32)1212+# define SHDR_SIZE 401313+#else1414+# error unknown VDSO target1515+#endif1616+1717+#define NUM_FAKE_SHDRS 131818+919SECTIONS1020{1121 . = SIZEOF_HEADERS;···2818 .gnu.version_d : { *(.gnu.version_d) }2919 .gnu.version_r : { *(.gnu.version_r) }30202121+ .dynamic : { *(.dynamic) } :text :dynamic2222+2323+ .rodata : {2424+ *(.rodata*)2525+ *(.data*)2626+ *(.sdata*)2727+ *(.got.plt) *(.got)2828+ *(.gnu.linkonce.d.*)2929+ *(.bss*)3030+ *(.dynbss*)3131+ *(.gnu.linkonce.b.*)3232+3333+ /*3434+ * Ideally this would live in a C file, but that won't3535+ * work cleanly for x32 until we start building the x323636+ * C code using an x32 toolchain.3737+ */3838+ VDSO_FAKE_SECTION_TABLE_START = .;3939+ . = . + NUM_FAKE_SHDRS * SHDR_SIZE;4040+ VDSO_FAKE_SECTION_TABLE_END = .;4141+ } :text4242+4343+ .fake_shstrtab : { *(.fake_shstrtab) } :text4444+4545+3146 .note : { *(.note.*) } :text :note32473348 .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr3449 .eh_frame : { KEEP (*(.eh_frame)) } :text35503636- .dynamic : { *(.dynamic) } :text :dynamic3737-3838- .rodata : { *(.rodata*) } :text3939- .data : {4040- *(.data*)4141- *(.sdata*)4242- *(.got.plt) *(.got)4343- *(.gnu.linkonce.d.*)4444- *(.bss*)4545- *(.dynbss*)4646- *(.gnu.linkonce.b.*)4747- }4848-4949- .altinstructions : { *(.altinstructions) }5050- .altinstr_replacement : { *(.altinstr_replacement) }51515252 /*5353- * Align the actual code well away from the non-instruction data.5454- * This is the best thing for the I-cache.5353+ * Text is well-separated from actual data: there's plenty of5454+ * stuff that isn't used at runtime in between.5555 */5656- . = ALIGN(0x100);57565857 .text : { *(.text*) } :text =0x90909090,5858+5959+ /*6060+ * At the end so that eu-elflint stays happy when vdso2c strips6161+ * these. 
A better implementation would avoid allocating space6262+ * for these.6363+ */6464+ .altinstructions : { *(.altinstructions) } :text6565+ .altinstr_replacement : { *(.altinstr_replacement) } :text59666067 /*6168 * The remainder of the vDSO consists of special pages that are···10275 /DISCARD/ : {10376 *(.discard)10477 *(.discard.*)7878+ *(__bug_table)10579 }10680}10781
+2
arch/x86/vdso/vdso.lds.S
···66 * the DSO.77 */8899+#define BUILD_VDSO641010+911#include "vdso-layout.lds.S"10121113/*
···44 * are built for 32-bit userspace.55 */6677-static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name)77+/*88+ * We're writing a section table for a few reasons:99+ *1010+ * The Go runtime had a couple of bugs: it would read the section1111+ * table to try to figure out how many dynamic symbols there were (it1212+ * shouldn't have looked at the section table at all) and, if there1313+ * were no SHT_SYNDYM section table entry, it would use an1414+ * uninitialized value for the number of symbols. An empty DYNSYM1515+ * table would work, but I see no reason not to write a valid one (and1616+ * keep full performance for old Go programs). This hack is only1717+ * needed on x86_64.1818+ *1919+ * The bug was introduced on 2012-08-31 by:2020+ * https://code.google.com/p/go/source/detail?r=56ea40aac72b2121+ * and was fixed on 2014-06-13 by:2222+ * https://code.google.com/p/go/source/detail?r=fc1cd5e125952323+ *2424+ * Binutils has issues debugging the vDSO: it reads the section table to2525+ * find SHT_NOTE; it won't look at PT_NOTE for the in-memory vDSO, which2626+ * would break build-id if we removed the section table. Binutils2727+ * also requires that shstrndx != 0. See:2828+ * https://sourceware.org/bugzilla/show_bug.cgi?id=170642929+ *3030+ * elfutils might not look for PT_NOTE if there is a section table at3131+ * all. I don't know whether this matters for any practical purpose.3232+ *3333+ * For simplicity, rather than hacking up a partial section table, we3434+ * just write a mostly complete one. We omit non-dynamic symbols,3535+ * though, since they're rather large.3636+ *3737+ * Once binutils gets fixed, we might be able to drop this for all but3838+ * the 64-bit vdso, since build-id only works in kernel RPMs, and3939+ * systems that update to new enough kernel RPMs will likely update4040+ * binutils in sync. 
build-id has never worked for home-built kernel4141+ * RPMs without manual symlinking, and I suspect that no one ever does4242+ * that.4343+ */4444+struct BITSFUNC(fake_sections)4545+{4646+ ELF(Shdr) *table;4747+ unsigned long table_offset;4848+ int count, max_count;4949+5050+ int in_shstrndx;5151+ unsigned long shstr_offset;5252+ const char *shstrtab;5353+ size_t shstrtab_len;5454+5555+ int out_shstrndx;5656+};5757+5858+static unsigned int BITSFUNC(find_shname)(struct BITSFUNC(fake_sections) *out,5959+ const char *name)6060+{6161+ const char *outname = out->shstrtab;6262+ while (outname - out->shstrtab < out->shstrtab_len) {6363+ if (!strcmp(name, outname))6464+ return (outname - out->shstrtab) + out->shstr_offset;6565+ outname += strlen(outname) + 1;6666+ }6767+6868+ if (*name)6969+ printf("Warning: could not find output name \"%s\"\n", name);7070+ return out->shstr_offset + out->shstrtab_len - 1; /* Use a null. */7171+}7272+7373+static void BITSFUNC(init_sections)(struct BITSFUNC(fake_sections) *out)7474+{7575+ if (!out->in_shstrndx)7676+ fail("didn't find the fake shstrndx\n");7777+7878+ memset(out->table, 0, out->max_count * sizeof(ELF(Shdr)));7979+8080+ if (out->max_count < 1)8181+ fail("we need at least two fake output sections\n");8282+8383+ PUT_LE(&out->table[0].sh_type, SHT_NULL);8484+ PUT_LE(&out->table[0].sh_name, BITSFUNC(find_shname)(out, ""));8585+8686+ out->count = 1;8787+}8888+8989+static void BITSFUNC(copy_section)(struct BITSFUNC(fake_sections) *out,9090+ int in_idx, const ELF(Shdr) *in,9191+ const char *name)9292+{9393+ uint64_t flags = GET_LE(&in->sh_flags);9494+9595+ bool copy = flags & SHF_ALLOC &&9696+ (GET_LE(&in->sh_size) ||9797+ (GET_LE(&in->sh_type) != SHT_RELA &&9898+ GET_LE(&in->sh_type) != SHT_REL)) &&9999+ strcmp(name, ".altinstructions") &&100100+ strcmp(name, ".altinstr_replacement");101101+102102+ if (!copy)103103+ return;104104+105105+ if (out->count >= out->max_count)106106+ fail("too many copied sections (max = %d)\n", 
out->max_count);107107+108108+ if (in_idx == out->in_shstrndx)109109+ out->out_shstrndx = out->count;110110+111111+ out->table[out->count] = *in;112112+ PUT_LE(&out->table[out->count].sh_name,113113+ BITSFUNC(find_shname)(out, name));114114+115115+ /* elfutils requires that a strtab have the correct type. */116116+ if (!strcmp(name, ".fake_shstrtab"))117117+ PUT_LE(&out->table[out->count].sh_type, SHT_STRTAB);118118+119119+ out->count++;120120+}121121+122122+static void BITSFUNC(go)(void *addr, size_t len,123123+ FILE *outfile, const char *name)8124{9125 int found_load = 0;10126 unsigned long load_size = -1; /* Work around bogus warning */11127 unsigned long data_size;1212- Elf_Ehdr *hdr = (Elf_Ehdr *)addr;128128+ ELF(Ehdr) *hdr = (ELF(Ehdr) *)addr;13129 int i;14130 unsigned long j;1515- Elf_Shdr *symtab_hdr = NULL, *strtab_hdr, *secstrings_hdr,131131+ ELF(Shdr) *symtab_hdr = NULL, *strtab_hdr, *secstrings_hdr,16132 *alt_sec = NULL;1717- Elf_Dyn *dyn = 0, *dyn_end = 0;133133+ ELF(Dyn) *dyn = 0, *dyn_end = 0;18134 const char *secstrings;19135 uint64_t syms[NSYMS] = {};201362121- uint64_t fake_sections_value = 0, fake_sections_size = 0;137137+ struct BITSFUNC(fake_sections) fake_sections = {};221382323- Elf_Phdr *pt = (Elf_Phdr *)(addr + GET_LE(&hdr->e_phoff));139139+ ELF(Phdr) *pt = (ELF(Phdr) *)(addr + GET_LE(&hdr->e_phoff));2414025141 /* Walk the segment table. 
*/26142 for (i = 0; i < GET_LE(&hdr->e_phnum); i++) {···16751 for (i = 0; dyn + i < dyn_end &&16852 GET_LE(&dyn[i].d_tag) != DT_NULL; i++) {16953 typeof(dyn[i].d_tag) tag = GET_LE(&dyn[i].d_tag);170170- if (tag == DT_REL || tag == DT_RELSZ ||5454+ if (tag == DT_REL || tag == DT_RELSZ || tag == DT_RELA ||17155 tag == DT_RELENT || tag == DT_TEXTREL)17256 fail("vdso image contains dynamic relocations\n");17357 }···17761 GET_LE(&hdr->e_shentsize)*GET_LE(&hdr->e_shstrndx);17862 secstrings = addr + GET_LE(&secstrings_hdr->sh_offset);17963 for (i = 0; i < GET_LE(&hdr->e_shnum); i++) {180180- Elf_Shdr *sh = addr + GET_LE(&hdr->e_shoff) +6464+ ELF(Shdr) *sh = addr + GET_LE(&hdr->e_shoff) +18165 GET_LE(&hdr->e_shentsize) * i;18266 if (GET_LE(&sh->sh_type) == SHT_SYMTAB)18367 symtab_hdr = sh;···19882 i < GET_LE(&symtab_hdr->sh_size) / GET_LE(&symtab_hdr->sh_entsize);19983 i++) {20084 int k;201201- Elf_Sym *sym = addr + GET_LE(&symtab_hdr->sh_offset) +8585+ ELF(Sym) *sym = addr + GET_LE(&symtab_hdr->sh_offset) +20286 GET_LE(&symtab_hdr->sh_entsize) * i;20387 const char *name = addr + GET_LE(&strtab_hdr->sh_offset) +20488 GET_LE(&sym->st_name);2058920690 for (k = 0; k < NSYMS; k++) {207207- if (!strcmp(name, required_syms[k])) {9191+ if (!strcmp(name, required_syms[k].name)) {20892 if (syms[k]) {20993 fail("duplicate symbol %s\n",210210- required_syms[k]);9494+ required_syms[k].name);21195 }21296 syms[k] = GET_LE(&sym->st_value);21397 }21498 }21599216216- if (!strcmp(name, "vdso_fake_sections")) {217217- if (fake_sections_value)218218- fail("duplicate vdso_fake_sections\n");219219- fake_sections_value = GET_LE(&sym->st_value);220220- fake_sections_size = GET_LE(&sym->st_size);100100+ if (!strcmp(name, "fake_shstrtab")) {101101+ ELF(Shdr) *sh;102102+103103+ fake_sections.in_shstrndx = GET_LE(&sym->st_shndx);104104+ fake_sections.shstrtab = addr + GET_LE(&sym->st_value);105105+ fake_sections.shstrtab_len = GET_LE(&sym->st_size);106106+ sh = addr + GET_LE(&hdr->e_shoff) +107107+ 
GET_LE(&hdr->e_shentsize) *108108+ fake_sections.in_shstrndx;109109+ fake_sections.shstr_offset = GET_LE(&sym->st_value) -110110+ GET_LE(&sh->sh_addr);221111 }222112 }113113+114114+ /* Build the output section table. */115115+ if (!syms[sym_VDSO_FAKE_SECTION_TABLE_START] ||116116+ !syms[sym_VDSO_FAKE_SECTION_TABLE_END])117117+ fail("couldn't find fake section table\n");118118+ if ((syms[sym_VDSO_FAKE_SECTION_TABLE_END] -119119+ syms[sym_VDSO_FAKE_SECTION_TABLE_START]) % sizeof(ELF(Shdr)))120120+ fail("fake section table size isn't a multiple of sizeof(Shdr)\n");121121+ fake_sections.table = addr + syms[sym_VDSO_FAKE_SECTION_TABLE_START];122122+ fake_sections.table_offset = syms[sym_VDSO_FAKE_SECTION_TABLE_START];123123+ fake_sections.max_count = (syms[sym_VDSO_FAKE_SECTION_TABLE_END] -124124+ syms[sym_VDSO_FAKE_SECTION_TABLE_START]) /125125+ sizeof(ELF(Shdr));126126+127127+ BITSFUNC(init_sections)(&fake_sections);128128+ for (i = 0; i < GET_LE(&hdr->e_shnum); i++) {129129+ ELF(Shdr) *sh = addr + GET_LE(&hdr->e_shoff) +130130+ GET_LE(&hdr->e_shentsize) * i;131131+ BITSFUNC(copy_section)(&fake_sections, i, sh,132132+ secstrings + GET_LE(&sh->sh_name));133133+ }134134+ if (!fake_sections.out_shstrndx)135135+ fail("didn't generate shstrndx?!?\n");136136+137137+ PUT_LE(&hdr->e_shoff, fake_sections.table_offset);138138+ PUT_LE(&hdr->e_shentsize, sizeof(ELF(Shdr)));139139+ PUT_LE(&hdr->e_shnum, fake_sections.count);140140+ PUT_LE(&hdr->e_shstrndx, fake_sections.out_shstrndx);223141224142 /* Validate mapping addresses. 
*/225143 for (i = 0; i < sizeof(special_pages) / sizeof(special_pages[0]); i++) {···262112263113 if (syms[i] % 4096)264114 fail("%s must be a multiple of 4096\n",265265- required_syms[i]);115115+ required_syms[i].name);266116 if (syms[i] < data_size)267117 fail("%s must be after the text mapping\n",268268- required_syms[i]);118118+ required_syms[i].name);269119 if (syms[sym_end_mapping] < syms[i] + 4096)270270- fail("%s overruns end_mapping\n", required_syms[i]);120120+ fail("%s overruns end_mapping\n",121121+ required_syms[i].name);271122 }272123 if (syms[sym_end_mapping] % 4096)273124 fail("end_mapping must be a multiple of 4096\n");274274-275275- /* Remove sections or use fakes */276276- if (fake_sections_size % sizeof(Elf_Shdr))277277- fail("vdso_fake_sections size is not a multiple of %ld\n",278278- (long)sizeof(Elf_Shdr));279279- PUT_LE(&hdr->e_shoff, fake_sections_value);280280- PUT_LE(&hdr->e_shentsize, fake_sections_value ? sizeof(Elf_Shdr) : 0);281281- PUT_LE(&hdr->e_shnum, fake_sections_size / sizeof(Elf_Shdr));282282- PUT_LE(&hdr->e_shstrndx, SHN_UNDEF);283125284126 if (!name) {285127 fwrite(addr, load_size, 1, outfile);···310168 (unsigned long)GET_LE(&alt_sec->sh_size));311169 }312170 for (i = 0; i < NSYMS; i++) {313313- if (syms[i])171171+ if (required_syms[i].export && syms[i])314172 fprintf(outfile, "\t.sym_%s = 0x%" PRIx64 ",\n",315315- required_syms[i], syms[i]);173173+ required_syms[i].name, syms[i]);316174 }317175 fprintf(outfile, "};\n");318176}
arch/x86/vdso/vdsox32.lds.S
···66 * the DSO.77 */8899+#define BUILD_VDSOX321010+911#include "vdso-layout.lds.S"10121113/*
+4
arch/x86/vdso/vma.c
···6262 Only used for the 64-bit and x32 vdsos. */6363static unsigned long vdso_addr(unsigned long start, unsigned len)6464{6565+#ifdef CONFIG_X86_326666+ return 0;6767+#else6568 unsigned long addr, end;6669 unsigned offset;6770 end = (start + PMD_SIZE - 1) & PMD_MASK;···8683 addr = align_vdso_addr(addr);87848885 return addr;8686+#endif8987}90889189static int map_vdso(const struct vdso_image *image, bool calculate_addr)
+8
block/bio.c
···746746747747 goto done;748748 }749749+750750+ /*751751+ * If the queue doesn't support SG gaps and adding this752752+ * offset would create a gap, disallow it.753753+ */754754+ if (q->queue_flags & (1 << QUEUE_FLAG_SG_GAPS) &&755755+ bvec_gap_to_prev(prev, offset))756756+ return 0;749757 }750758751759 if (bio->bi_vcnt >= bio->bi_max_vecs)
+3-6
block/blk-cgroup.c
···8080 blkg->q = q;8181 INIT_LIST_HEAD(&blkg->q_node);8282 blkg->blkcg = blkcg;8383- blkg->refcnt = 1;8383+ atomic_set(&blkg->refcnt, 1);84848585 /* root blkg uses @q->root_rl, init rl only for !root blkgs */8686 if (blkcg != &blkcg_root) {···399399400400 /* release the blkcg and parent blkg refs this blkg has been holding */401401 css_put(&blkg->blkcg->css);402402- if (blkg->parent) {403403- spin_lock_irq(blkg->q->queue_lock);402402+ if (blkg->parent)404403 blkg_put(blkg->parent);405405- spin_unlock_irq(blkg->q->queue_lock);406406- }407404408405 blkg_free(blkg);409406}···10901093 * Register @pol with blkcg core. Might sleep and @pol may be modified on10911094 * successful registration. Returns 0 on success and -errno on failure.10921095 */10931093-int __init blkcg_policy_register(struct blkcg_policy *pol)10961096+int blkcg_policy_register(struct blkcg_policy *pol)10941097{10951098 int i, ret;10961099
+9-12
block/blk-cgroup.h
···1818#include <linux/seq_file.h>1919#include <linux/radix-tree.h>2020#include <linux/blkdev.h>2121+#include <linux/atomic.h>21222223/* Max limits for throttle policy */2324#define THROTL_IOPS_MAX UINT_MAX···105104 struct request_list rl;106105107106 /* reference count */108108- int refcnt;107107+ atomic_t refcnt;109108110109 /* is this blkg online? protected by both blkcg and q locks */111110 bool online;···146145void blkcg_exit_queue(struct request_queue *q);147146148147/* Blkio controller policy registration */149149-int __init blkcg_policy_register(struct blkcg_policy *pol);148148+int blkcg_policy_register(struct blkcg_policy *pol);150149void blkcg_policy_unregister(struct blkcg_policy *pol);151150int blkcg_activate_policy(struct request_queue *q,152151 const struct blkcg_policy *pol);···258257 * blkg_get - get a blkg reference259258 * @blkg: blkg to get260259 *261261- * The caller should be holding queue_lock and an existing reference.260260+ * The caller should be holding an existing reference.262261 */263262static inline void blkg_get(struct blkcg_gq *blkg)264263{265265- lockdep_assert_held(blkg->q->queue_lock);266266- WARN_ON_ONCE(!blkg->refcnt);267267- blkg->refcnt++;264264+ WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0);265265+ atomic_inc(&blkg->refcnt);268266}269267270268void __blkg_release_rcu(struct rcu_head *rcu);···271271/**272272 * blkg_put - put a blkg reference273273 * @blkg: blkg to put274274- *275275- * The caller should be holding queue_lock.276274 */277275static inline void blkg_put(struct blkcg_gq *blkg)278276{279279- lockdep_assert_held(blkg->q->queue_lock);280280- WARN_ON_ONCE(blkg->refcnt <= 0);281281- if (!--blkg->refcnt)277277+ WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0);278278+ if (atomic_dec_and_test(&blkg->refcnt))282279 call_rcu(&blkg->rcu_head, __blkg_release_rcu);283280}284281···577580static inline int blkcg_init_queue(struct request_queue *q) { return 0; }578581static inline void blkcg_drain_queue(struct request_queue *q) { 
}579582static inline void blkcg_exit_queue(struct request_queue *q) { }580580-static inline int __init blkcg_policy_register(struct blkcg_policy *pol) { return 0; }583583+static inline int blkcg_policy_register(struct blkcg_policy *pol) { return 0; }581584static inline void blkcg_policy_unregister(struct blkcg_policy *pol) { }582585static inline int blkcg_activate_policy(struct request_queue *q,583586 const struct blkcg_policy *pol) { return 0; }
+10
block/blk-merge.c
···568568569569bool blk_rq_merge_ok(struct request *rq, struct bio *bio)570570{571571+ struct request_queue *q = rq->q;572572+571573 if (!rq_mergeable(rq) || !bio_mergeable(bio))572574 return false;573575···592590 if (rq->cmd_flags & REQ_WRITE_SAME &&593591 !blk_write_same_mergeable(rq->bio, bio))594592 return false;593593+594594+ if (q->queue_flags & (1 << QUEUE_FLAG_SG_GAPS)) {595595+ struct bio_vec *bprev;596596+597597+ bprev = &rq->biotail->bi_io_vec[bio->bi_vcnt - 1];598598+ if (bvec_gap_to_prev(bprev, bio->bi_io_vec[0].bv_offset))599599+ return false;600600+ }595601596602 return true;597603}
drivers/acpi/battery.c
···3535#include <linux/delay.h>3636#include <linux/slab.h>3737#include <linux/suspend.h>3838+#include <linux/delay.h>3839#include <asm/unaligned.h>39404041#ifdef CONFIG_ACPI_PROCFS_POWER···533532 battery->rate_now = abs((s16)battery->rate_now);534533 printk_once(KERN_WARNING FW_BUG "battery: (dis)charge rate"535534 " invalid.\n");535535+ }536536+537537+ /*538538+ * When fully charged, some batteries wrongly report539539+ * capacity_now = design_capacity instead of = full_charge_capacity540540+ */541541+ if (battery->capacity_now > battery->full_charge_capacity542542+ && battery->full_charge_capacity != ACPI_BATTERY_VALUE_UNKNOWN) {543543+ battery->capacity_now = battery->full_charge_capacity;544544+ if (battery->capacity_now != battery->design_capacity)545545+ printk_once(KERN_WARNING FW_BUG546546+ "battery: reported current charge level (%d) "547547+ "is higher than reported maximum charge level (%d).\n",548548+ battery->capacity_now, battery->full_charge_capacity);536549 }537550538551 if (test_bit(ACPI_BATTERY_QUIRK_PERCENTAGE_CAPACITY, &battery->flags)···11661151 {},11671152};1168115311541154+/*11551155+ * Some machines' (e.g. the Lenovo Z480) ECs are not stable11561156+ * during boot up, which sometimes causes the battery driver11571157+ * to fail probing because battery information cannot be11581158+ * read from the EC. After several retries the operation
11591159+ * may succeed, so retry here with a 20ms sleep11601160+ * between attempts.11611161+ */11621162+static int acpi_battery_update_retry(struct acpi_battery *battery)11631163+{11641164+ int retry, ret;11651165+11661166+ for (retry = 5; retry; retry--) {11671167+ ret = acpi_battery_update(battery, false);11681168+ if (!ret)11691169+ break;11701170+11711171+ msleep(20);11721172+ }11731173+ return ret;11741174+}11751175+11691176static int acpi_battery_add(struct acpi_device *device)11701177{11711178 int result = 0;···12061169 mutex_init(&battery->sysfs_lock);12071170 if (acpi_has_method(battery->device->handle, "_BIX"))12081171 set_bit(ACPI_BATTERY_XINFO_PRESENT, &battery->flags);12091209- result = acpi_battery_update(battery, false);11721172+11731173+ result = acpi_battery_update_retry(battery);12101174 if (result)12111175 goto fail;11761176+12121177#ifdef CONFIG_ACPI_PROCFS_POWER12131178 result = acpi_battery_add_fs(device);12141179#endif
+85-79
drivers/acpi/ec.c
···11/*22- * ec.c - ACPI Embedded Controller Driver (v2.1)22+ * ec.c - ACPI Embedded Controller Driver (v2.2)33 *44- * Copyright (C) 2006-2008 Alexey Starikovskiy <astarikovskiy@suse.de>55- * Copyright (C) 2006 Denis Sadykov <denis.m.sadykov@intel.com>66- * Copyright (C) 2004 Luming Yu <luming.yu@intel.com>77- * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>88- * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>44+ * Copyright (C) 2001-2014 Intel Corporation55+ * Author: 2014 Lv Zheng <lv.zheng@intel.com>66+ * 2006, 2007 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com>77+ * 2006 Denis Sadykov <denis.m.sadykov@intel.com>88+ * 2004 Luming Yu <luming.yu@intel.com>99+ * 2001, 2002 Andy Grover <andrew.grover@intel.com>1010+ * 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>1111+ * Copyright (C) 2008 Alexey Starikovskiy <astarikovskiy@suse.de>912 *1013 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~1114 *···5552/* EC status register */5653#define ACPI_EC_FLAG_OBF 0x01 /* Output buffer full */5754#define ACPI_EC_FLAG_IBF 0x02 /* Input buffer full */5555+#define ACPI_EC_FLAG_CMD 0x08 /* Input buffer contains a command */5856#define ACPI_EC_FLAG_BURST 0x10 /* burst mode */5957#define ACPI_EC_FLAG_SCI 0x20 /* EC-SCI occurred */6058···8177 * OpReg are installed */8278 EC_FLAGS_BLOCKED, /* Transactions are blocked */8379};8080+8181+#define ACPI_EC_COMMAND_POLL 0x01 /* Available for command byte */8282+#define ACPI_EC_COMMAND_COMPLETE 0x02 /* Completed last byte */84838584/* ec.c is compiled in acpi namespace so this shows up as acpi.ec_delay param */8685static unsigned int ec_delay __read_mostly = ACPI_EC_DELAY;···116109 u8 ri;117110 u8 wlen;118111 u8 rlen;119119- bool done;112112+ u8 flags;120113};121114122115struct acpi_ec *boot_ec, *first_ec;···134127static inline u8 acpi_ec_read_status(struct acpi_ec *ec)135128{136129 u8 x = inb(ec->command_addr);137137- pr_debug("---> status = 0x%2.2x\n", 
x);130130+ pr_debug("EC_SC(R) = 0x%2.2x "131131+ "SCI_EVT=%d BURST=%d CMD=%d IBF=%d OBF=%d\n",132132+ x,133133+ !!(x & ACPI_EC_FLAG_SCI),134134+ !!(x & ACPI_EC_FLAG_BURST),135135+ !!(x & ACPI_EC_FLAG_CMD),136136+ !!(x & ACPI_EC_FLAG_IBF),137137+ !!(x & ACPI_EC_FLAG_OBF));138138 return x;139139}140140141141static inline u8 acpi_ec_read_data(struct acpi_ec *ec)142142{143143 u8 x = inb(ec->data_addr);144144- pr_debug("---> data = 0x%2.2x\n", x);144144+ pr_debug("EC_DATA(R) = 0x%2.2x\n", x);145145 return x;146146}147147148148static inline void acpi_ec_write_cmd(struct acpi_ec *ec, u8 command)149149{150150- pr_debug("<--- command = 0x%2.2x\n", command);150150+ pr_debug("EC_SC(W) = 0x%2.2x\n", command);151151 outb(command, ec->command_addr);152152}153153154154static inline void acpi_ec_write_data(struct acpi_ec *ec, u8 data)155155{156156- pr_debug("<--- data = 0x%2.2x\n", data);156156+ pr_debug("EC_DATA(W) = 0x%2.2x\n", data);157157 outb(data, ec->data_addr);158158}159159160160-static int ec_transaction_done(struct acpi_ec *ec)160160+static int ec_transaction_completed(struct acpi_ec *ec)161161{162162 unsigned long flags;163163 int ret = 0;164164 spin_lock_irqsave(&ec->lock, flags);165165- if (!ec->curr || ec->curr->done)165165+ if (ec->curr && (ec->curr->flags & ACPI_EC_COMMAND_COMPLETE))166166 ret = 1;167167 spin_unlock_irqrestore(&ec->lock, flags);168168 return ret;169169}170170171171-static void start_transaction(struct acpi_ec *ec)171171+static bool advance_transaction(struct acpi_ec *ec)172172{173173- ec->curr->irq_count = ec->curr->wi = ec->curr->ri = 0;174174- ec->curr->done = false;175175- acpi_ec_write_cmd(ec, ec->curr->command);176176-}177177-178178-static void advance_transaction(struct acpi_ec *ec, u8 status)179179-{180180- unsigned long flags;181173 struct transaction *t;174174+ u8 status;175175+ bool wakeup = false;182176183183- spin_lock_irqsave(&ec->lock, flags);177177+ pr_debug("===== %s =====\n", in_interrupt() ? 
"IRQ" : "TASK");178178+ status = acpi_ec_read_status(ec);184179 t = ec->curr;185180 if (!t)186186- goto unlock;187187- if (t->wlen > t->wi) {188188- if ((status & ACPI_EC_FLAG_IBF) == 0)189189- acpi_ec_write_data(ec,190190- t->wdata[t->wi++]);191191- else192192- goto err;193193- } else if (t->rlen > t->ri) {194194- if ((status & ACPI_EC_FLAG_OBF) == 1) {195195- t->rdata[t->ri++] = acpi_ec_read_data(ec);196196- if (t->rlen == t->ri)197197- t->done = true;181181+ goto err;182182+ if (t->flags & ACPI_EC_COMMAND_POLL) {183183+ if (t->wlen > t->wi) {184184+ if ((status & ACPI_EC_FLAG_IBF) == 0)185185+ acpi_ec_write_data(ec, t->wdata[t->wi++]);186186+ else187187+ goto err;188188+ } else if (t->rlen > t->ri) {189189+ if ((status & ACPI_EC_FLAG_OBF) == 1) {190190+ t->rdata[t->ri++] = acpi_ec_read_data(ec);191191+ if (t->rlen == t->ri) {192192+ t->flags |= ACPI_EC_COMMAND_COMPLETE;193193+ wakeup = true;194194+ }195195+ } else196196+ goto err;197197+ } else if (t->wlen == t->wi &&198198+ (status & ACPI_EC_FLAG_IBF) == 0) {199199+ t->flags |= ACPI_EC_COMMAND_COMPLETE;200200+ wakeup = true;201201+ }202202+ return wakeup;203203+ } else {204204+ if ((status & ACPI_EC_FLAG_IBF) == 0) {205205+ acpi_ec_write_cmd(ec, t->command);206206+ t->flags |= ACPI_EC_COMMAND_POLL;198207 } else199208 goto err;200200- } else if (t->wlen == t->wi &&201201- (status & ACPI_EC_FLAG_IBF) == 0)202202- t->done = true;203203- goto unlock;209209+ return wakeup;210210+ }204211err:205212 /*206213 * If SCI bit is set, then don't think it's a false IRQ207214 * otherwise will take a not handled IRQ as a false one.208215 */209209- if (in_interrupt() && !(status & ACPI_EC_FLAG_SCI))210210- ++t->irq_count;216216+ if (!(status & ACPI_EC_FLAG_SCI)) {217217+ if (in_interrupt() && t)218218+ ++t->irq_count;219219+ }220220+ return wakeup;221221+}211222212212-unlock:213213- spin_unlock_irqrestore(&ec->lock, flags);223223+static void start_transaction(struct acpi_ec *ec)224224+{225225+ ec->curr->irq_count = ec->curr->wi 
= ec->curr->ri = 0;226226+ ec->curr->flags = 0;227227+ (void)advance_transaction(ec);214228}215229216230static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data);···256228 /* don't sleep with disabled interrupts */257229 if (EC_FLAGS_MSI || irqs_disabled()) {258230 udelay(ACPI_EC_MSI_UDELAY);259259- if (ec_transaction_done(ec))231231+ if (ec_transaction_completed(ec))260232 return 0;261233 } else {262234 if (wait_event_timeout(ec->wait,263263- ec_transaction_done(ec),235235+ ec_transaction_completed(ec),264236 msecs_to_jiffies(1)))265237 return 0;266238 }267267- advance_transaction(ec, acpi_ec_read_status(ec));239239+ spin_lock_irqsave(&ec->lock, flags);240240+ (void)advance_transaction(ec);241241+ spin_unlock_irqrestore(&ec->lock, flags);268242 } while (time_before(jiffies, delay));269243 pr_debug("controller reset, restart transaction\n");270244 spin_lock_irqsave(&ec->lock, flags);···298268 return ret;299269}300270301301-static int ec_check_ibf0(struct acpi_ec *ec)302302-{303303- u8 status = acpi_ec_read_status(ec);304304- return (status & ACPI_EC_FLAG_IBF) == 0;305305-}306306-307307-static int ec_wait_ibf0(struct acpi_ec *ec)308308-{309309- unsigned long delay = jiffies + msecs_to_jiffies(ec_delay);310310- /* interrupt wait manually if GPE mode is not active */311311- while (time_before(jiffies, delay))312312- if (wait_event_timeout(ec->wait, ec_check_ibf0(ec),313313- msecs_to_jiffies(1)))314314- return 0;315315- return -ETIME;316316-}317317-318271static int acpi_ec_transaction(struct acpi_ec *ec, struct transaction *t)319272{320273 int status;···317304 status = -ENODEV;318305 goto unlock;319306 }320320- }321321- if (ec_wait_ibf0(ec)) {322322- pr_err("input buffer is not empty, "323323- "aborting transaction\n");324324- status = -ETIME;325325- goto end;326307 }327308 pr_debug("transaction start (cmd=0x%02x, addr=0x%02x)\n",328309 t->command, t->wdata ? 
t->wdata[0] : 0);···341334 set_bit(EC_FLAGS_GPE_STORM, &ec->flags);342335 }343336 pr_debug("transaction end\n");344344-end:345337 if (ec->global_lock)346338 acpi_release_global_lock(glk);347339unlock:···640634static u32 acpi_ec_gpe_handler(acpi_handle gpe_device,641635 u32 gpe_number, void *data)642636{637637+ unsigned long flags;643638 struct acpi_ec *ec = data;644644- u8 status = acpi_ec_read_status(ec);645639646646- pr_debug("~~~> interrupt, status:0x%02x\n", status);647647-648648- advance_transaction(ec, status);649649- if (ec_transaction_done(ec) &&650650- (acpi_ec_read_status(ec) & ACPI_EC_FLAG_IBF) == 0) {640640+ spin_lock_irqsave(&ec->lock, flags);641641+ if (advance_transaction(ec))651642 wake_up(&ec->wait);652652- ec_check_sci(ec, acpi_ec_read_status(ec));653653- }643643+ spin_unlock_irqrestore(&ec->lock, flags);644644+ ec_check_sci(ec, acpi_ec_read_status(ec));654645 return ACPI_INTERRUPT_HANDLED | ACPI_REENABLE_GPE;655646}656647···10691066 /* fall through */10701067 }1071106810721072- if (EC_FLAGS_SKIP_DSDT_SCAN)10691069+ if (EC_FLAGS_SKIP_DSDT_SCAN) {10701070+ kfree(saved_ec);10731071 return -ENODEV;10721072+ }1074107310751074 /* This workaround is needed only on some broken machines,10761075 * which require early EC, but fail to provide ECDT */···11101105 }11111106error:11121107 kfree(boot_ec);11081108+ kfree(saved_ec);11131109 boot_ec = NULL;11141110 return -ENODEV;11151111}
+5-5
drivers/acpi/resource.c
···7777 switch (ares->type) {7878 case ACPI_RESOURCE_TYPE_MEMORY24:7979 memory24 = &ares->data.memory24;8080- if (!memory24->address_length)8080+ if (!memory24->minimum && !memory24->address_length)8181 return false;8282 acpi_dev_get_memresource(res, memory24->minimum,8383 memory24->address_length,···8585 break;8686 case ACPI_RESOURCE_TYPE_MEMORY32:8787 memory32 = &ares->data.memory32;8888- if (!memory32->address_length)8888+ if (!memory32->minimum && !memory32->address_length)8989 return false;9090 acpi_dev_get_memresource(res, memory32->minimum,9191 memory32->address_length,···9393 break;9494 case ACPI_RESOURCE_TYPE_FIXED_MEMORY32:9595 fixed_memory32 = &ares->data.fixed_memory32;9696- if (!fixed_memory32->address_length)9696+ if (!fixed_memory32->address && !fixed_memory32->address_length)9797 return false;9898 acpi_dev_get_memresource(res, fixed_memory32->address,9999 fixed_memory32->address_length,···150150 switch (ares->type) {151151 case ACPI_RESOURCE_TYPE_IO:152152 io = &ares->data.io;153153- if (!io->address_length)153153+ if (!io->minimum && !io->address_length)154154 return false;155155 acpi_dev_get_ioresource(res, io->minimum,156156 io->address_length,···158158 break;159159 case ACPI_RESOURCE_TYPE_FIXED_IO:160160 fixed_io = &ares->data.fixed_io;161161- if (!fixed_io->address_length)161161+ if (!fixed_io->address && !fixed_io->address_length)162162 return false;163163 acpi_dev_get_ioresource(res, fixed_io->address,164164 fixed_io->address_length,
drivers/ata/ahci_xgene.c
···7878struct xgene_ahci_context {7979 struct ahci_host_priv *hpriv;8080 struct device *dev;8181+ u8 last_cmd[MAX_AHCI_CHN_PERCTR]; /* tracking the last command issued */8182 void __iomem *csr_core; /* Core CSR address of IP */8283 void __iomem *csr_diag; /* Diag CSR address of IP */8384 void __iomem *csr_axi; /* AXI CSR address of IP */···9998}10099101100/**101101+ * xgene_ahci_restart_engine - Restart the dma engine.102102+ * @ap : ATA port of interest103103+ *104104+ * Restarts the dma engine inside the controller.105105+ */106106+static int xgene_ahci_restart_engine(struct ata_port *ap)107107+{108108+ struct ahci_host_priv *hpriv = ap->host->private_data;109109+110110+ ahci_stop_engine(ap);111111+ ahci_start_fis_rx(ap);112112+ hpriv->start_engine(ap);113113+114114+ return 0;115115+}116116+117117+/**118118+ * xgene_ahci_qc_issue - Issue commands to the device119119+ * @qc: Command to issue120120+121121+ * Due to a hardware errata affecting the IDENTIFY DEVICE command, the122122+ * controller cannot clear the BSY bit after receiving the PIO setup FIS,123123+ * which sends the dma state machine into the CMFatalErrorUpdate state and
124124+ * locks it up. Restarting the dma engine brings the controller out of it.125125+ */126126+static unsigned int xgene_ahci_qc_issue(struct ata_queued_cmd *qc)127127+{128128+ struct ata_port *ap = qc->ap;129129+ struct ahci_host_priv *hpriv = ap->host->private_data;130130+ struct xgene_ahci_context *ctx = hpriv->plat_data;131131+ int rc = 0;132132+133133+ if (unlikely(ctx->last_cmd[ap->port_no] == ATA_CMD_ID_ATA))134134+ xgene_ahci_restart_engine(ap);135135+136136+ rc = ahci_qc_issue(qc);137137+138138+ /* Save the last command issued */139139+ ctx->last_cmd[ap->port_no] = qc->tf.command;140140+141141+ return rc;142142+}143143+144144+/**102145 * xgene_ahci_read_id - Read ID data from the specified device103146 * @dev: device104147 * @tf: proposed taskfile105148 * @id: data buffer106149 *107150 * This custom read ID function is required due to the fact that the HW108108- does not support DEVSLP and the controller state machine may get stuck109109- after processing the ID query command.151151+ does not support DEVSLP.110152 */111153static unsigned int xgene_ahci_read_id(struct ata_device *dev,112154 struct ata_taskfile *tf, u16 *id)113155{114156 u32 err_mask;115115- void __iomem *port_mmio = ahci_port_base(dev->link->ap);116157117158 err_mask = ata_do_dev_read_id(dev, tf, id);118159 if (err_mask)···176133 */177134 id[ATA_ID_FEATURE_SUPP] &= ~(1 << 8);178135179179- /*180180- * Due to HW errata, restart the port if no other command active.181181- * Otherwise the controller may get stuck.182182- */183183- if (!readl(port_mmio + PORT_CMD_ISSUE)) {184184- writel(PORT_CMD_FIS_RX, port_mmio + PORT_CMD);185185- readl(port_mmio + PORT_CMD); /* Force a barrier */186186- writel(PORT_CMD_FIS_RX | PORT_CMD_START, port_mmio + PORT_CMD);187187- readl(port_mmio + PORT_CMD); /* Force a barrier */188188- }189136 return 0;190137}191138···333300 .host_stop = xgene_ahci_host_stop,334301 .hardreset = xgene_ahci_hardreset,335302 .read_id = xgene_ahci_read_id,303303+ 
.qc_issue = xgene_ahci_qc_issue,336304};337305338306static const struct ata_port_info xgene_ahci_port_info = {
+4-3
drivers/ata/libahci.c
···68686969static int ahci_scr_read(struct ata_link *link, unsigned int sc_reg, u32 *val);7070static int ahci_scr_write(struct ata_link *link, unsigned int sc_reg, u32 val);7171-static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc);7271static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc);7372static int ahci_port_start(struct ata_port *ap);7473static void ahci_port_stop(struct ata_port *ap);···619620}620621EXPORT_SYMBOL_GPL(ahci_stop_engine);621622622622-static void ahci_start_fis_rx(struct ata_port *ap)623623+void ahci_start_fis_rx(struct ata_port *ap)623624{624625 void __iomem *port_mmio = ahci_port_base(ap);625626 struct ahci_host_priv *hpriv = ap->host->private_data;···645646 /* flush */646647 readl(port_mmio + PORT_CMD);647648}649649+EXPORT_SYMBOL_GPL(ahci_start_fis_rx);648650649651static int ahci_stop_fis_rx(struct ata_port *ap)650652{···19451945}19461946EXPORT_SYMBOL_GPL(ahci_interrupt);1947194719481948-static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc)19481948+unsigned int ahci_qc_issue(struct ata_queued_cmd *qc)19491949{19501950 struct ata_port *ap = qc->ap;19511951 void __iomem *port_mmio = ahci_port_base(ap);···1974197419751975 return 0;19761976}19771977+EXPORT_SYMBOL_GPL(ahci_qc_issue);1977197819781979static bool ahci_qc_fill_rtf(struct ata_queued_cmd *qc)19791980{
+6-1
drivers/ata/libahci_platform.c
···250250 if (IS_ERR(hpriv->phy)) {251251 rc = PTR_ERR(hpriv->phy);252252 switch (rc) {253253- case -ENODEV:254253 case -ENOSYS:254254+ /* No PHY support. Check if PHY is required. */255255+ if (of_find_property(dev->of_node, "phys", NULL)) {256256+ dev_err(dev, "couldn't get sata-phy: ENOSYS\n");257257+ goto err_out;258258+ }259259+ case -ENODEV:255260 /* continue normally */256261 hpriv->phy = NULL;257262 break;
+4-1
drivers/block/drbd/drbd_receiver.c
···13371337 return 0;13381338 }1339133913401340+ /* Discards don't have any payload.13411341+ * But the scsi layer still expects a bio_vec it can use internally,13421342+ * see sd_setup_discard_cmnd() and blk_add_request_payload(). */13401343 if (peer_req->flags & EE_IS_TRIM)13411341- nr_pages = 0; /* discards don't have any payload. */13441344+ nr_pages = 1;1342134513431346 /* In most cases, we will only need one bio. But in case the lower13441347 * level restrictions happen to be different at this offset on this
+1-1
drivers/block/floppy.c
···37773777 int drive = cbdata->drive;3778377837793779 if (err) {37803780- pr_info("floppy: error %d while reading block 0", err);37803780+ pr_info("floppy: error %d while reading block 0\n", err);37813781 set_bit(FD_OPEN_SHOULD_FAIL_BIT, &UDRS->flags);37823782 }37833783 complete(&cbdata->complete);
drivers/cpufreq/Makefile
···4949# LITTLE drivers, so that it is probed last.5050obj-$(CONFIG_ARM_DT_BL_CPUFREQ) += arm_big_little_dt.o51515252-obj-$(CONFIG_ARCH_DAVINCI_DA850) += davinci-cpufreq.o5252+obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o5353obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o5454obj-$(CONFIG_ARM_EXYNOS_CPUFREQ) += exynos-cpufreq.o5555obj-$(CONFIG_ARM_EXYNOS4210_CPUFREQ) += exynos4210-cpufreq.o
+22-13
drivers/cpufreq/intel_pstate.c
···128128129129struct perf_limits {130130 int no_turbo;131131+ int turbo_disabled;131132 int max_perf_pct;132133 int min_perf_pct;133134 int32_t max_perf;···288287 if (ret != 1)289288 return -EINVAL;290289 limits.no_turbo = clamp_t(int, input, 0 , 1);291291-290290+ if (limits.turbo_disabled) {291291+ pr_warn("Turbo disabled by BIOS or unavailable on processor\n");292292+ limits.no_turbo = limits.turbo_disabled;293293+ }292294 return count;293295}294296···361357{362358 u64 value;363359 rdmsrl(BYT_RATIOS, value);364364- return (value >> 8) & 0x3F;360360+ return (value >> 8) & 0x7F;365361}366362367363static int byt_get_max_pstate(void)368364{369365 u64 value;370366 rdmsrl(BYT_RATIOS, value);371371- return (value >> 16) & 0x3F;367367+ return (value >> 16) & 0x7F;372368}373369374370static int byt_get_turbo_pstate(void)375371{376372 u64 value;377373 rdmsrl(BYT_TURBO_RATIOS, value);378378- return value & 0x3F;374374+ return value & 0x7F;379375}380376381377static void byt_set_pstate(struct cpudata *cpudata, int pstate)···385381 u32 vid;386382387383 val = pstate << 8;388388- if (limits.no_turbo)384384+ if (limits.no_turbo && !limits.turbo_disabled)389385 val |= (u64)1 << 32;390386391387 vid_fp = cpudata->vid.min + mul_fp(···409405410406411407 rdmsrl(BYT_VIDS, value);412412- cpudata->vid.min = int_tofp((value >> 8) & 0x3f);413413- cpudata->vid.max = int_tofp((value >> 16) & 0x3f);408408+ cpudata->vid.min = int_tofp((value >> 8) & 0x7f);409409+ cpudata->vid.max = int_tofp((value >> 16) & 0x7f);414410 cpudata->vid.ratio = div_fp(415411 cpudata->vid.max - cpudata->vid.min,416412 int_tofp(cpudata->pstate.max_pstate -···452448 u64 val;453449454450 val = pstate << 8;455455- if (limits.no_turbo)451451+ if (limits.no_turbo && !limits.turbo_disabled)456452 val |= (u64)1 << 32;457453458454 wrmsrl_on_cpu(cpudata->cpu, MSR_IA32_PERF_CTL, val);···700696701697 cpu = all_cpu_data[cpunum];702698703703- intel_pstate_get_cpu_pstates(cpu);704704-705699 cpu->cpu = cpunum;700700+ 
intel_pstate_get_cpu_pstates(cpu);706701707702 init_timer_deferrable(&cpu->timer);708703 cpu->timer.function = intel_pstate_timer_func;···744741 limits.min_perf = int_tofp(1);745742 limits.max_perf_pct = 100;746743 limits.max_perf = int_tofp(1);747747- limits.no_turbo = 0;744744+ limits.no_turbo = limits.turbo_disabled;748745 return 0;749746 }750747 limits.min_perf_pct = (policy->min * 100) / policy->cpuinfo.max_freq;···787784{788785 struct cpudata *cpu;789786 int rc;787787+ u64 misc_en;790788791789 rc = intel_pstate_init_cpu(policy->cpu);792790 if (rc)···795791796792 cpu = all_cpu_data[policy->cpu];797793798798- if (!limits.no_turbo &&799799- limits.min_perf_pct == 100 && limits.max_perf_pct == 100)794794+ rdmsrl(MSR_IA32_MISC_ENABLE, misc_en);795795+ if (misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE ||796796+ cpu->pstate.max_pstate == cpu->pstate.turbo_pstate) {797797+ limits.turbo_disabled = 1;798798+ limits.no_turbo = 1;799799+ }800800+ if (limits.min_perf_pct == 100 && limits.max_perf_pct == 100)800801 policy->policy = CPUFREQ_POLICY_PERFORMANCE;801802 else802803 policy->policy = CPUFREQ_POLICY_POWERSAVE;
+3-5
drivers/crypto/caam/jr.c
···
 	int error;
 
 	jrdev = &pdev->dev;
-	jrpriv = kmalloc(sizeof(struct caam_drv_private_jr),
-			 GFP_KERNEL);
+	jrpriv = devm_kmalloc(jrdev, sizeof(struct caam_drv_private_jr),
+			      GFP_KERNEL);
 	if (!jrpriv)
 		return -ENOMEM;
 
···
 
 	/* Now do the platform independent part */
 	error = caam_jr_init(jrdev); /* now turn on hardware */
-	if (error) {
-		kfree(jrpriv);
+	if (error)
 		return error;
-	}
 
 	jrpriv->dev = jrdev;
 	spin_lock(&driver_data.jr_alloc_lock);
···
 menu "IEEE 1394 (FireWire) support"
+	depends on HAS_DMA
 	depends on PCI || COMPILE_TEST
 	# firewire-core does not depend on PCI but is
 	# not useful without PCI controller driver
+1-1
drivers/firmware/efi/efi-pstore.c
···
 static inline u64 generic_id(unsigned long timestamp,
 			     unsigned int part, int count)
 {
-	return (timestamp * 100 + part) * 1000 + count;
+	return ((u64) timestamp * 100 + part) * 1000 + count;
 }
 
 static int efi_pstore_read_func(struct efivar_entry *entry, void *data)
+3-3
drivers/firmware/efi/efi.c
···
 			 int depth, void *data)
 {
 	struct param_info *info = data;
-	void *prop, *dest;
-	unsigned long len;
+	const void *prop;
+	void *dest;
 	u64 val;
-	int i;
+	int i, len;
 
 	if (depth != 1 ||
 	    (strcmp(uname, "chosen") != 0 && strcmp(uname, "chosen@0") != 0))
···
 
 static void hdmi_dpms(struct exynos_drm_display *display, int mode)
 {
+	struct hdmi_context *hdata = display->ctx;
+	struct drm_encoder *encoder = hdata->encoder;
+	struct drm_crtc *crtc = encoder->crtc;
+	struct drm_crtc_helper_funcs *funcs = NULL;
+
 	DRM_DEBUG_KMS("mode %d\n", mode);
 
 	switch (mode) {
···
 	case DRM_MODE_DPMS_STANDBY:
 	case DRM_MODE_DPMS_SUSPEND:
 	case DRM_MODE_DPMS_OFF:
+		/*
+		 * The SFRs of VP and Mixer are updated by Vertical Sync of
+		 * Timing generator which is a part of HDMI so the sequence
+		 * to disable TV Subsystem should be as following,
+		 *	VP -> Mixer -> HDMI
+		 *
+		 * Below codes will try to disable Mixer and VP(if used)
+		 * prior to disabling HDMI.
+		 */
+		if (crtc)
+			funcs = crtc->helper_private;
+		if (funcs && funcs->dpms)
+			(*funcs->dpms)(crtc, mode);
+
 		hdmi_poweroff(display);
 		break;
 	default:
···
 tda998x_encoder_mode_valid(struct drm_encoder *encoder,
 			   struct drm_display_mode *mode)
 {
+	if (mode->clock > 150000)
+		return MODE_CLOCK_HIGH;
+	if (mode->htotal >= BIT(13))
+		return MODE_BAD_HVALUE;
+	if (mode->vtotal >= BIT(11))
+		return MODE_BAD_VVALUE;
 	return MODE_OK;
 }
···
 			return i;
 		}
 	} else {
-		for (i = 10; i > 0; i--) {
-			msleep(10);
+		for (i = 100; i > 0; i--) {
+			msleep(1);
 			ret = reg_read(priv, REG_INT_FLAGS_2);
 			if (ret < 0)
 				return ret;
···
 tda998x_encoder_destroy(struct drm_encoder *encoder)
 {
 	struct tda998x_priv *priv = to_tda998x_priv(encoder);
-	drm_i2c_encoder_destroy(encoder);
 
 	/* disable all IRQs and free the IRQ handler */
 	cec_write(priv, REG_CEC_RXSHPDINTENA, 0);
···
 
 	if (priv->cec)
 		i2c_unregister_device(priv->cec);
+	drm_i2c_encoder_destroy(encoder);
 	kfree(priv);
 }
 
+2
drivers/gpu/drm/i915/i915_debugfs.c
···
 
 	memset(&stats, 0, sizeof(stats));
 	stats.file_priv = file->driver_priv;
+	spin_lock(&file->table_lock);
 	idr_for_each(&file->object_idr, per_file_stats, &stats);
+	spin_unlock(&file->table_lock);
 	/*
 	 * Although we have a valid reference on file->pid, that does
 	 * not guarantee that the task_struct who called get_pid() is
+3-2
drivers/gpu/drm/i915/i915_dma.c
···
 #else
 static int i915_kick_out_vgacon(struct drm_i915_private *dev_priv)
 {
-	int ret;
+	int ret = 0;
 
 	DRM_INFO("Replacing VGA console driver\n");
 
 	console_lock();
-	ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1);
+	if (con_is_bound(&vga_con))
+		ret = do_take_over_console(&dummy_con, 0, MAX_NR_CONSOLES - 1, 1);
 	if (ret == 0) {
 		ret = do_unregister_con_driver(&vga_con);
 
+3
drivers/gpu/drm/i915/i915_drv.h
···
 #define QUIRK_PIPEA_FORCE (1<<0)
 #define QUIRK_LVDS_SSC_DISABLE (1<<1)
 #define QUIRK_INVERT_BRIGHTNESS (1<<2)
+#define QUIRK_BACKLIGHT_PRESENT (1<<3)
 
 struct intel_fbdev;
 struct intel_fbc_work;
···
 	bool always_on;
 	/* power well enable/disable usage count */
 	int count;
+	/* cached hw enabled state */
+	bool hw_enabled;
 	unsigned long domains;
 	unsigned long data;
 	const struct i915_power_well_ops *ops;
+5-3
drivers/gpu/drm/i915/i915_gem_context.c
···
 	struct intel_context *from = ring->last_context;
 	struct i915_hw_ppgtt *ppgtt = ctx_to_ppgtt(to);
 	u32 hw_flags = 0;
+	bool uninitialized = false;
 	int ret, i;
 
 	if (from != NULL && ring == &dev_priv->ring[RCS]) {
···
 		i915_gem_context_unreference(from);
 	}
 
+	uninitialized = !to->is_initialized && from == NULL;
+	to->is_initialized = true;
+
 done:
 	i915_gem_context_reference(to);
 	ring->last_context = to;
 	to->last_ring = ring;
 
-	if (ring->id == RCS && !to->is_initialized && from == NULL) {
+	if (uninitialized) {
 		ret = i915_gem_render_state_init(ring);
 		if (ret)
 			DRM_ERROR("init render state: %d\n", ret);
 	}
-
-	to->is_initialized = true;
 
 	return 0;
 
+44
drivers/gpu/drm/i915/i915_gem_stolen.c
···
 	if (base == 0)
 		return 0;
 
+	/* make sure we don't clobber the GTT if it's within stolen memory */
+	if (INTEL_INFO(dev)->gen <= 4 && !IS_G33(dev) && !IS_G4X(dev)) {
+		struct {
+			u32 start, end;
+		} stolen[2] = {
+			{ .start = base, .end = base + dev_priv->gtt.stolen_size, },
+			{ .start = base, .end = base + dev_priv->gtt.stolen_size, },
+		};
+		u64 gtt_start, gtt_end;
+
+		gtt_start = I915_READ(PGTBL_CTL);
+		if (IS_GEN4(dev))
+			gtt_start = (gtt_start & PGTBL_ADDRESS_LO_MASK) |
+				(gtt_start & PGTBL_ADDRESS_HI_MASK) << 28;
+		else
+			gtt_start &= PGTBL_ADDRESS_LO_MASK;
+		gtt_end = gtt_start + gtt_total_entries(dev_priv->gtt) * 4;
+
+		if (gtt_start >= stolen[0].start && gtt_start < stolen[0].end)
+			stolen[0].end = gtt_start;
+		if (gtt_end > stolen[1].start && gtt_end <= stolen[1].end)
+			stolen[1].start = gtt_end;
+
+		/* pick the larger of the two chunks */
+		if (stolen[0].end - stolen[0].start >
+		    stolen[1].end - stolen[1].start) {
+			base = stolen[0].start;
+			dev_priv->gtt.stolen_size = stolen[0].end - stolen[0].start;
+		} else {
+			base = stolen[1].start;
+			dev_priv->gtt.stolen_size = stolen[1].end - stolen[1].start;
+		}
+
+		if (stolen[0].start != stolen[1].start ||
+		    stolen[0].end != stolen[1].end) {
+			DRM_DEBUG_KMS("GTT within stolen memory at 0x%llx-0x%llx\n",
+				      (unsigned long long) gtt_start,
+				      (unsigned long long) gtt_end - 1);
+			DRM_DEBUG_KMS("Stolen memory adjusted to 0x%x-0x%x\n",
+				      base, base + (u32) dev_priv->gtt.stolen_size - 1);
+		}
+	}
+
+
 	/* Verify that nothing else uses this physical address. Stolen
 	 * memory should be reserved by the BIOS and hidden from the
 	 * kernel. So if the region is already marked as busy, something
···
 
 	DRM_DEBUG_DRIVER("bclp = 0x%08x\n", bclp);
 
+	/*
+	 * If the acpi_video interface is not supposed to be used, don't
+	 * bother processing backlight level change requests from firmware.
+	 */
+	if (!acpi_video_verify_backlight_support()) {
+		DRM_DEBUG_KMS("opregion backlight request ignored\n");
+		return 0;
+	}
+
 	if (!(bclp & ASLE_BCLP_VALID))
 		return ASLC_BACKLIGHT_FAILED;
 
+6-2
drivers/gpu/drm/i915/intel_panel.c
···
 	int ret;
 
 	if (!dev_priv->vbt.backlight.present) {
-		DRM_DEBUG_KMS("native backlight control not available per VBT\n");
-		return 0;
+		if (dev_priv->quirks & QUIRK_BACKLIGHT_PRESENT) {
+			DRM_DEBUG_KMS("no backlight present per VBT, but present per quirk\n");
+		} else {
+			DRM_DEBUG_KMS("no backlight present per VBT\n");
+			return 0;
+		}
 	}
 
 	/* set level and max in panel struct */
+44-22
drivers/gpu/drm/i915/intel_pm.c
···
 */
static void vlv_set_rps_idle(struct drm_i915_private *dev_priv)
{
+	struct drm_device *dev = dev_priv->dev;
+
+	/* Latest VLV doesn't need to force the gfx clock */
+	if (dev->pdev->revision >= 0xd) {
+		valleyview_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit);
+		return;
+	}
+
 	/*
 	 * When we are idle. Drop to min voltage state.
 	 */
···
 		(HSW_PWR_WELL_ENABLE_REQUEST | HSW_PWR_WELL_STATE_ENABLED);
 }
 
-bool intel_display_power_enabled_sw(struct drm_i915_private *dev_priv,
-				    enum intel_display_power_domain domain)
+bool intel_display_power_enabled_unlocked(struct drm_i915_private *dev_priv,
+					  enum intel_display_power_domain domain)
 {
 	struct i915_power_domains *power_domains;
 	struct i915_power_well *power_well;
···
 		return false;
 
 	power_domains = &dev_priv->power_domains;
+
 	is_enabled = true;
+
 	for_each_power_well_rev(i, power_well, BIT(domain), power_domains) {
 		if (power_well->always_on)
 			continue;
 
-		if (!power_well->count) {
+		if (!power_well->hw_enabled) {
 			is_enabled = false;
 			break;
 		}
 	}
+
 	return is_enabled;
 }
 
···
 			enum intel_display_power_domain domain)
 {
 	struct i915_power_domains *power_domains;
-	struct i915_power_well *power_well;
-	bool is_enabled;
-	int i;
-
-	if (dev_priv->pm.suspended)
-		return false;
+	bool ret;
 
 	power_domains = &dev_priv->power_domains;
 
-	is_enabled = true;
-
 	mutex_lock(&power_domains->lock);
-	for_each_power_well_rev(i, power_well, BIT(domain), power_domains) {
-		if (power_well->always_on)
-			continue;
-
-		if (!power_well->ops->is_enabled(dev_priv, power_well)) {
-			is_enabled = false;
-			break;
-		}
-	}
+	ret = intel_display_power_enabled_unlocked(dev_priv, domain);
 	mutex_unlock(&power_domains->lock);
 
-	return is_enabled;
+	return ret;
}
 
/*
···
 	if (!power_well->count++) {
 		DRM_DEBUG_KMS("enabling %s\n", power_well->name);
 		power_well->ops->enable(dev_priv, power_well);
+		power_well->hw_enabled = true;
 	}
 
 	check_power_well_state(dev_priv, power_well);
···
 
 	if (!--power_well->count && i915.disable_power_well) {
 		DRM_DEBUG_KMS("disabling %s\n", power_well->name);
+		power_well->hw_enabled = false;
 		power_well->ops->disable(dev_priv, power_well);
 	}
 
···
 	return 0;
 }
 EXPORT_SYMBOL_GPL(i915_release_power_well);
+
+/*
+ * Private interface for the audio driver to get CDCLK in kHz.
+ *
+ * Caller must request power well using i915_request_power_well() prior to
+ * making the call.
+ */
+int i915_get_cdclk_freq(void)
+{
+	struct drm_i915_private *dev_priv;
+
+	if (!hsw_pwr)
+		return -ENODEV;
+
+	dev_priv = container_of(hsw_pwr, struct drm_i915_private,
+				power_domains);
+
+	return intel_ddi_get_cdclk_freq(dev_priv);
+}
+EXPORT_SYMBOL_GPL(i915_get_cdclk_freq);
+
 
 #define POWER_DOMAIN_MASK (BIT(POWER_DOMAIN_NUM) - 1)
···
 	int i;
 
 	mutex_lock(&power_domains->lock);
-	for_each_power_well(i, power_well, POWER_DOMAIN_MASK, power_domains)
+	for_each_power_well(i, power_well, POWER_DOMAIN_MASK, power_domains) {
 		power_well->ops->sync_hw(dev_priv, power_well);
+		power_well->hw_enabled = power_well->ops->is_enabled(dev_priv,
+								     power_well);
+	}
 	mutex_unlock(&power_domains->lock);
 }
 
+8
drivers/gpu/drm/i915/intel_sprite.c
···
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
 
 	/*
+	 * BDW signals flip done immediately if the plane
+	 * is disabled, even if the plane enable is already
+	 * armed to occur at the next vblank :(
+	 */
+	if (IS_BROADWELL(dev))
+		intel_wait_for_vblank(dev, intel_crtc->pipe);
+
+	/*
 	 * FIXME IPS should be fine as long as one plane is
 	 * enabled, but in practice it seems to have problems
 	 * when going from primary only to sprite only and vice
···
 
 	/* clks that need to be on for hpd: */
 	const char **hpd_clk_names;
+	const long unsigned *hpd_freq;
 	int hpd_clk_cnt;
 
 	/* clks that need to be on for screen pwr (ie pixel clk): */
+8
drivers/gpu/drm/msm/hdmi/hdmi_connector.c
···
 	}
 
 	for (i = 0; i < config->hpd_clk_cnt; i++) {
+		if (config->hpd_freq && config->hpd_freq[i]) {
+			ret = clk_set_rate(hdmi->hpd_clks[i],
+					   config->hpd_freq[i]);
+			if (ret)
+				dev_warn(dev->dev, "failed to set clk %s (%d)\n",
+					 config->hpd_clk_names[i], ret);
+		}
+
 		ret = clk_prepare_enable(hdmi->hpd_clks[i]);
 		if (ret) {
 			dev_err(dev->dev, "failed to enable hpd clk: %s (%d)\n",
+17-5
drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
···
 #include "msm_mmu.h"
 #include "mdp5_kms.h"
 
+static const char *iommu_ports[] = {
+	"mdp_0",
+};
+
 static struct mdp5_platform_config *mdp5_get_config(struct platform_device *dev);
 
 static int mdp5_hw_init(struct msm_kms *kms)
···
 static void mdp5_destroy(struct msm_kms *kms)
 {
 	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
+	struct msm_mmu *mmu = mdp5_kms->mmu;
+
+	if (mmu) {
+		mmu->funcs->detach(mmu, iommu_ports, ARRAY_SIZE(iommu_ports));
+		mmu->funcs->destroy(mmu);
+	}
 	kfree(mdp5_kms);
 }
 
···
 	return ret;
 }
 
-static const char *iommu_ports[] = {
-	"mdp_0",
-};
-
 static int get_clk(struct platform_device *pdev, struct clk **clkp,
 		   const char *name)
 {
···
 		mmu = msm_iommu_new(dev, config->iommu);
 		if (IS_ERR(mmu)) {
 			ret = PTR_ERR(mmu);
+			dev_err(dev->dev, "failed to init iommu: %d\n", ret);
 			goto fail;
 		}
+
 		ret = mmu->funcs->attach(mmu, iommu_ports,
 				ARRAY_SIZE(iommu_ports));
-		if (ret)
+		if (ret) {
+			dev_err(dev->dev, "failed to attach iommu: %d\n", ret);
+			mmu->funcs->destroy(mmu);
 			goto fail;
+		}
 	} else {
 		dev_info(dev->dev, "no iommu, fallback to phys "
 				"contig buffers for scanout\n");
 		mmu = NULL;
 	}
+	mdp5_kms->mmu = mmu;
 
 	mdp5_kms->id = msm_register_mmu(dev, mmu);
 	if (mdp5_kms->id < 0) {
+1
drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h
···
 
 	/* mapper-id used to request GEM buffer mapped for scanout: */
 	int id;
+	struct msm_mmu *mmu;
 
 	/* for tracking smp allocation amongst pipes: */
 	mdp5_smp_state_t smp_state;
···
 	ret = nouveau_do_resume(drm_dev);
 	if (ret)
 		return ret;
-	if (drm_dev->mode_config.num_crtc)
-		nouveau_fbcon_set_suspend(drm_dev, 0);
 
-	nouveau_fbcon_zfill_all(drm_dev);
-	if (drm_dev->mode_config.num_crtc)
+	if (drm_dev->mode_config.num_crtc) {
 		nouveau_display_resume(drm_dev);
+		nouveau_fbcon_set_suspend(drm_dev, 0);
+	}
+
 	return 0;
 }
 
···
 	ret = nouveau_do_resume(drm_dev);
 	if (ret)
 		return ret;
-	if (drm_dev->mode_config.num_crtc)
-		nouveau_fbcon_set_suspend(drm_dev, 0);
-	nouveau_fbcon_zfill_all(drm_dev);
-	if (drm_dev->mode_config.num_crtc)
+
+	if (drm_dev->mode_config.num_crtc) {
 		nouveau_display_resume(drm_dev);
+		nouveau_fbcon_set_suspend(drm_dev, 0);
+	}
+
 	return 0;
 }
 
+3-10
drivers/gpu/drm/nouveau/nouveau_fbcon.c
···
 		if (state == 1)
 			nouveau_fbcon_save_disable_accel(dev);
 		fb_set_suspend(drm->fbcon->helper.fbdev, state);
-		if (state == 0)
+		if (state == 0) {
 			nouveau_fbcon_restore_accel(dev);
+			nouveau_fbcon_zfill(dev, drm->fbcon);
+		}
 		console_unlock();
-	}
-}
-
-void
-nouveau_fbcon_zfill_all(struct drm_device *dev)
-{
-	struct nouveau_drm *drm = nouveau_drm(dev);
-	if (drm->fbcon) {
-		nouveau_fbcon_zfill(dev, drm->fbcon);
 	}
 }
···
 extern int radeon_hard_reset;
 extern int radeon_vm_size;
 extern int radeon_vm_block_size;
+extern int radeon_deep_color;
 
 /*
  * Copy from radeon_drv.h so we don't have to include both and have conflicting
···
 	struct evergreen_irq_stat_regs evergreen;
 	struct cik_irq_stat_regs cik;
 };
-
-#define RADEON_MAX_HPD_PINS 7
-#define RADEON_MAX_CRTCS 6
-#define RADEON_MAX_AFMT_BLOCKS 7
 
 struct radeon_irq {
 	bool installed;
+9-1
drivers/gpu/drm/radeon/radeon_atombios.c
···
 		rdev->clock.default_dispclk =
 			le32_to_cpu(firmware_info->info_21.ulDefaultDispEngineClkFreq);
 		if (rdev->clock.default_dispclk == 0) {
-			if (ASIC_IS_DCE5(rdev))
+			if (ASIC_IS_DCE6(rdev))
+				rdev->clock.default_dispclk = 60000; /* 600 Mhz */
+			else if (ASIC_IS_DCE5(rdev))
 				rdev->clock.default_dispclk = 54000; /* 540 Mhz */
 			else
 				rdev->clock.default_dispclk = 60000; /* 600 Mhz */
+		}
+		/* set a reasonable default for DP */
+		if (ASIC_IS_DCE6(rdev) && (rdev->clock.default_dispclk < 53900)) {
+			DRM_INFO("Changing default dispclk from %dMhz to 600Mhz\n",
+				 rdev->clock.default_dispclk / 100);
+			rdev->clock.default_dispclk = 60000;
 		}
 		rdev->clock.dp_extclk =
 			le16_to_cpu(firmware_info->info_21.usUniphyDPModeExtClkFreq);
···
 			rdev->pm.dpm.ac_power = true;
 		else
 			rdev->pm.dpm.ac_power = false;
-		if (rdev->asic->dpm.enable_bapm)
-			radeon_dpm_enable_bapm(rdev, rdev->pm.dpm.ac_power);
+		if (rdev->family == CHIP_ARUBA) {
+			if (rdev->asic->dpm.enable_bapm)
+				radeon_dpm_enable_bapm(rdev, rdev->pm.dpm.ac_power);
+		}
 		mutex_unlock(&rdev->pm.mutex);
 	} else if (rdev->pm.pm_method == PM_METHOD_PROFILE) {
 		if (rdev->pm.profile == PM_PROFILE_AUTO) {
+2-2
drivers/gpu/drm/radeon/radeon_vm.c
···
 	mutex_unlock(&vm->mutex);
 
 	r = radeon_bo_create(rdev, RADEON_VM_PTE_COUNT * 8,
-			     RADEON_GPU_PAGE_SIZE, false,
+			     RADEON_GPU_PAGE_SIZE, true,
 			     RADEON_GEM_DOMAIN_VRAM, NULL, &pt);
 	if (r)
 		return r;
···
 		return -ENOMEM;
 	}
 
-	r = radeon_bo_create(rdev, pd_size, align, false,
+	r = radeon_bo_create(rdev, pd_size, align, true,
 			     RADEON_GEM_DOMAIN_VRAM, NULL,
 			     &vm->page_directory);
 	if (r)
-6
drivers/gpu/drm/radeon/rv770_dpm.c
···
 	pi->mclk_ss = radeon_atombios_get_asic_ss_info(rdev, &ss,
 						       ASIC_INTERNAL_MEMORY_SS, 0);
 
-	/* disable ss, causes hangs on some cayman boards */
-	if (rdev->family == CHIP_CAYMAN) {
-		pi->sclk_ss = false;
-		pi->mclk_ss = false;
-	}
-
 	if (pi->sclk_ss || pi->mclk_ss)
 		pi->dynamic_ss = true;
 	else
+4-2
drivers/gpu/drm/radeon/si.c
···
 		case 147:
 			addr = RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR);
 			status = RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS);
+			/* reset addr and status */
+			WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1);
+			if (addr == 0x0 && status == 0x0)
+				break;
 			dev_err(rdev->dev, "GPU fault detected: %d 0x%08x\n", src_id, src_data);
 			dev_err(rdev->dev, "  VM_CONTEXT1_PROTECTION_FAULT_ADDR   0x%08X\n",
 				addr);
 			dev_err(rdev->dev, "  VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n",
 				status);
 			si_vm_decode_fault(rdev, status, addr);
-			/* reset addr and status */
-			WREG32_P(VM_CONTEXT1_CNTL2, 1, ~1);
 			break;
 		case 176: /* RINGID0 CP_INT */
 			radeon_fence_process(rdev, RADEON_RING_TYPE_GFX_INDEX);
+9-1
drivers/gpu/drm/radeon/trinity_dpm.c
···
 	for (i = 0; i < SUMO_MAX_HARDWARE_POWERLEVELS; i++)
 		pi->at[i] = TRINITY_AT_DFLT;
 
-	pi->enable_bapm = false;
+	/* There are stability issues reported on laptops with
+	 * bapm installed when switching between AC and battery
+	 * power.  At the same time, some desktop boards hang
+	 * if it's not enabled and dpm is enabled.
+	 */
+	if (rdev->flags & RADEON_IS_MOBILITY)
+		pi->enable_bapm = false;
+	else
+		pi->enable_bapm = true;
 	pi->enable_nbps_policy = true;
 	pi->enable_sclk_ds = true;
 	pi->enable_gfx_power_gating = true;
···
 
 config I2C_MUX_PCA954x
 	tristate "Philips PCA954x I2C Mux/switches"
+	depends on GPIOLIB
 	help
 	  If you say yes here you get support for the Philips PCA954x
 	  I2C mux/switch devices.
+2-5
drivers/iio/accel/hid-sensor-accel-3d.c
···
 	struct accel_3d_state *accel_state = iio_priv(indio_dev);
 	int report_id = -1;
 	u32 address;
-	int ret;
 	int ret_type;
 	s32 poll_value;
 
···
 		ret_type = IIO_VAL_INT;
 		break;
 	case IIO_CHAN_INFO_SAMP_FREQ:
-		ret = hid_sensor_read_samp_freq_value(
+		ret_type = hid_sensor_read_samp_freq_value(
 			&accel_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	case IIO_CHAN_INFO_HYSTERESIS:
-		ret = hid_sensor_read_raw_hyst_value(
+		ret_type = hid_sensor_read_raw_hyst_value(
 			&accel_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	default:
 		ret_type = -EINVAL;
+6-2
drivers/iio/adc/ad799x.c
···
 	int ret;
 	struct ad799x_state *st = iio_priv(indio_dev);
 
+	if (val < 0 || val > RES_MASK(chan->scan_type.realbits))
+		return -EINVAL;
+
 	mutex_lock(&indio_dev->mlock);
 	ret = ad799x_i2c_write16(st, ad799x_threshold_reg(chan, dir, info),
-		val);
+		val << chan->scan_type.shift);
 	mutex_unlock(&indio_dev->mlock);
 
 	return ret;
···
 	mutex_unlock(&indio_dev->mlock);
 	if (ret < 0)
 		return ret;
-	*val = valin;
+	*val = (valin >> chan->scan_type.shift) &
+		RES_MASK(chan->scan_type.realbits);
 
 	return IIO_VAL_INT;
 }
+1-1
drivers/iio/adc/ti_am335x_adc.c
···
 			return -EAGAIN;
 		}
 	}
-	map_val = chan->channel + TOTAL_CHANNELS;
+	map_val = adc_dev->channel_step[chan->scan_index];
 
 	/*
 	 * We check the complete FIFO. We programmed just one entry but in case
+2-5
drivers/iio/gyro/hid-sensor-gyro-3d.c
···
 	struct gyro_3d_state *gyro_state = iio_priv(indio_dev);
 	int report_id = -1;
 	u32 address;
-	int ret;
 	int ret_type;
 	s32 poll_value;
 
···
 		ret_type = IIO_VAL_INT;
 		break;
 	case IIO_CHAN_INFO_SAMP_FREQ:
-		ret = hid_sensor_read_samp_freq_value(
+		ret_type = hid_sensor_read_samp_freq_value(
 			&gyro_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	case IIO_CHAN_INFO_HYSTERESIS:
-		ret = hid_sensor_read_raw_hyst_value(
+		ret_type = hid_sensor_read_raw_hyst_value(
 			&gyro_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	default:
 		ret_type = -EINVAL;
+4-2
drivers/iio/inkern.c
···
 		else if (name && index >= 0) {
 			pr_err("ERROR: could not get IIO channel %s:%s(%i)\n",
 				np->full_name, name ? name : "", index);
-			return chan;
+			return NULL;
 		}
 
 		/*
···
 		 */
 		np = np->parent;
 		if (np && !of_get_property(np, "io-channel-ranges", NULL))
-			break;
+			return NULL;
 	}
+
 	return chan;
 }
 
···
 		if (channel != NULL)
 			return channel;
 	}
+
 	return iio_channel_get_sys(name, channel_name);
 }
 EXPORT_SYMBOL_GPL(iio_channel_get);
+2-5
drivers/iio/light/hid-sensor-als.c
···
 	struct als_state *als_state = iio_priv(indio_dev);
 	int report_id = -1;
 	u32 address;
-	int ret;
 	int ret_type;
 	s32 poll_value;
 
···
 		ret_type = IIO_VAL_INT;
 		break;
 	case IIO_CHAN_INFO_SAMP_FREQ:
-		ret = hid_sensor_read_samp_freq_value(
+		ret_type = hid_sensor_read_samp_freq_value(
 			&als_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	case IIO_CHAN_INFO_HYSTERESIS:
-		ret = hid_sensor_read_raw_hyst_value(
+		ret_type = hid_sensor_read_raw_hyst_value(
 			&als_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	default:
 		ret_type = -EINVAL;
+2-5
drivers/iio/light/hid-sensor-prox.c
···
 	struct prox_state *prox_state = iio_priv(indio_dev);
 	int report_id = -1;
 	u32 address;
-	int ret;
 	int ret_type;
 	s32 poll_value;
 
···
 		ret_type = IIO_VAL_INT;
 		break;
 	case IIO_CHAN_INFO_SAMP_FREQ:
-		ret = hid_sensor_read_samp_freq_value(
+		ret_type = hid_sensor_read_samp_freq_value(
 			&prox_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	case IIO_CHAN_INFO_HYSTERESIS:
-		ret = hid_sensor_read_raw_hyst_value(
+		ret_type = hid_sensor_read_raw_hyst_value(
 			&prox_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	default:
 		ret_type = -EINVAL;
+10-1
drivers/iio/light/tcs3472.c
···
 
 struct tcs3472_data {
 	struct i2c_client *client;
+	struct mutex lock;
 	u8 enable;
 	u8 control;
 	u8 atime;
···
 
 	switch (mask) {
 	case IIO_CHAN_INFO_RAW:
+		if (iio_buffer_enabled(indio_dev))
+			return -EBUSY;
+
+		mutex_lock(&data->lock);
 		ret = tcs3472_req_data(data);
-		if (ret < 0)
+		if (ret < 0) {
+			mutex_unlock(&data->lock);
 			return ret;
+		}
 		ret = i2c_smbus_read_word_data(data->client, chan->address);
+		mutex_unlock(&data->lock);
 		if (ret < 0)
 			return ret;
 		*val = ret;
···
 	data = iio_priv(indio_dev);
 	i2c_set_clientdata(client, indio_dev);
 	data->client = client;
+	mutex_init(&data->lock);
 
 	indio_dev->dev.parent = &client->dev;
 	indio_dev->info = &tcs3472_info;
+2-5
drivers/iio/magnetometer/hid-sensor-magn-3d.c
···
 	struct magn_3d_state *magn_state = iio_priv(indio_dev);
 	int report_id = -1;
 	u32 address;
-	int ret;
 	int ret_type;
 	s32 poll_value;
 
···
 		ret_type = IIO_VAL_INT;
 		break;
 	case IIO_CHAN_INFO_SAMP_FREQ:
-		ret = hid_sensor_read_samp_freq_value(
+		ret_type = hid_sensor_read_samp_freq_value(
 			&magn_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	case IIO_CHAN_INFO_HYSTERESIS:
-		ret = hid_sensor_read_raw_hyst_value(
+		ret_type = hid_sensor_read_raw_hyst_value(
 			&magn_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	default:
 		ret_type = -EINVAL;
+2-5
drivers/iio/pressure/hid-sensor-press.c
···
 	struct press_state *press_state = iio_priv(indio_dev);
 	int report_id = -1;
 	u32 address;
-	int ret;
 	int ret_type;
 	s32 poll_value;
 
···
 		ret_type = IIO_VAL_INT;
 		break;
 	case IIO_CHAN_INFO_SAMP_FREQ:
-		ret = hid_sensor_read_samp_freq_value(
+		ret_type = hid_sensor_read_samp_freq_value(
 			&press_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	case IIO_CHAN_INFO_HYSTERESIS:
-		ret = hid_sensor_read_raw_hyst_value(
+		ret_type = hid_sensor_read_raw_hyst_value(
 			&press_state->common_attributes, val, val2);
-		ret_type = IIO_VAL_INT_PLUS_MICRO;
 		break;
 	default:
 		ret_type = -EINVAL;
+13-5
drivers/iommu/amd_iommu_v2.c
···
 struct pasid_state {
 	struct list_head list;			/* For global state-list */
 	atomic_t count;				/* Reference count */
-	atomic_t mmu_notifier_count;		/* Counting nested mmu_notifier
+	unsigned mmu_notifier_count;		/* Counting nested mmu_notifier
 						   calls */
 	struct task_struct *task;		/* Task bound to this PASID */
 	struct mm_struct *mm;			/* mm_struct for the faults */
···
 	struct pri_queue pri[PRI_QUEUE_SIZE];	/* PRI tag states */
 	struct device_state *device_state;	/* Link to our device_state */
 	int pasid;				/* PASID index */
-	spinlock_t lock;			/* Protect pri_queues */
+	spinlock_t lock;			/* Protect pri_queues and
+						   mmu_notifier_count */
 	wait_queue_head_t wq;			/* To wait for count == 0 */
 };
 
···
 {
 	struct pasid_state *pasid_state;
 	struct device_state *dev_state;
+	unsigned long flags;
 
 	pasid_state = mn_to_state(mn);
 	dev_state   = pasid_state->device_state;
 
-	if (atomic_add_return(1, &pasid_state->mmu_notifier_count) == 1) {
+	spin_lock_irqsave(&pasid_state->lock, flags);
+	if (pasid_state->mmu_notifier_count == 0) {
 		amd_iommu_domain_set_gcr3(dev_state->domain,
 					  pasid_state->pasid,
 					  __pa(empty_page_table));
 	}
+	pasid_state->mmu_notifier_count += 1;
+	spin_unlock_irqrestore(&pasid_state->lock, flags);
 }
 
 static void mn_invalidate_range_end(struct mmu_notifier *mn,
···
 {
 	struct pasid_state *pasid_state;
 	struct device_state *dev_state;
+	unsigned long flags;
 
 	pasid_state = mn_to_state(mn);
 	dev_state   = pasid_state->device_state;
 
-	if (atomic_dec_and_test(&pasid_state->mmu_notifier_count)) {
+	spin_lock_irqsave(&pasid_state->lock, flags);
+	pasid_state->mmu_notifier_count -= 1;
+	if (pasid_state->mmu_notifier_count == 0) {
 		amd_iommu_domain_set_gcr3(dev_state->domain,
 					  pasid_state->pasid,
 					  __pa(pasid_state->mm->pgd));
 	}
+	spin_unlock_irqrestore(&pasid_state->lock, flags);
 }
 
 static void mn_release(struct mmu_notifier *mn, struct mm_struct *mm)
···
 		goto out;
 
 	atomic_set(&pasid_state->count, 1);
-	atomic_set(&pasid_state->mmu_notifier_count, 0);
 	init_waitqueue_head(&pasid_state->wq);
 	spin_lock_init(&pasid_state->lock);
 
···
 
 	spin_lock_irqsave(&m->lock, flags);
 
-	/* pg_init in progress, requeue until done */
-	if (!pg_ready(m)) {
+	/* pg_init in progress or no paths available */
+	if (m->pg_init_in_progress ||
+	    (!m->nr_valid_paths && m->queue_if_no_path)) {
 		busy = 1;
 		goto out;
 	}
+2-2
drivers/md/dm-zero.c
···
/*
- * Copyright (C) 2003 Christophe Saout <christophe@saout.de>
+ * Copyright (C) 2003 Jana Saout <jana@saout.de>
 *
 * This file is released under the GPL.
 */
···
module_init(dm_zero_init)
module_exit(dm_zero_exit)
 
-MODULE_AUTHOR("Christophe Saout <christophe@saout.de>");
+MODULE_AUTHOR("Jana Saout <jana@saout.de>");
MODULE_DESCRIPTION(DM_NAME " dummy target returning zeros");
MODULE_LICENSE("GPL");
+13-2
drivers/md/dm.c
···54545555static DECLARE_WORK(deferred_remove_work, do_deferred_remove);56565757+static struct workqueue_struct *deferred_remove_workqueue;5858+5759/*5860 * For bio-based dm.5961 * One of these is allocated per bio.···278276 if (r)279277 goto out_free_rq_tio_cache;280278279279+ deferred_remove_workqueue = alloc_workqueue("kdmremove", WQ_UNBOUND, 1);280280+ if (!deferred_remove_workqueue) {281281+ r = -ENOMEM;282282+ goto out_uevent_exit;283283+ }284284+281285 _major = major;282286 r = register_blkdev(_major, _name);283287 if (r < 0)284284- goto out_uevent_exit;288288+ goto out_free_workqueue;285289286290 if (!_major)287291 _major = r;288292289293 return 0;290294295295+out_free_workqueue:296296+ destroy_workqueue(deferred_remove_workqueue);291297out_uevent_exit:292298 dm_uevent_exit();293299out_free_rq_tio_cache:···309299static void local_exit(void)310300{311301 flush_scheduled_work();302302+ destroy_workqueue(deferred_remove_workqueue);312303313304 kmem_cache_destroy(_rq_tio_cache);314305 kmem_cache_destroy(_io_cache);···418407419408 if (atomic_dec_and_test(&md->open_count) &&420409 (test_bit(DMF_DEFERRED_REMOVE, &md->flags)))421421- schedule_work(&deferred_remove_work);410410+ queue_work(deferred_remove_workqueue, &deferred_remove_work);422411423412 dm_put(md);424413
+14-1
drivers/md/md.c
···55995599 if (mddev->in_sync)56005600 info.state = (1<<MD_SB_CLEAN);56015601 if (mddev->bitmap && mddev->bitmap_info.offset)56025602- info.state = (1<<MD_SB_BITMAP_PRESENT);56025602+ info.state |= (1<<MD_SB_BITMAP_PRESENT);56035603 info.active_disks = insync;56045604 info.working_disks = working;56055605 info.failed_disks = failed;···75017501 rdev->recovery_offset < j)75027502 j = rdev->recovery_offset;75037503 rcu_read_unlock();75047504+75057505+ /* If there is a bitmap, we need to make sure all75067506+ * writes that started before we added a spare75077507+ * complete before we start doing a recovery.75087508+ * Otherwise the write might complete and (via75097509+ * bitmap_endwrite) set a bit in the bitmap after the75107510+ * recovery has checked that bit and skipped that75117511+ * region.75127512+ */75137513+ if (mddev->bitmap) {75147514+ mddev->pers->quiesce(mddev, 1);75157515+ mddev->pers->quiesce(mddev, 0);75167516+ }75047517 }7505751875067519 printk(KERN_INFO "md: %s of RAID array %s\n", desc, mdname(mddev));
+3-2
drivers/mfd/Kconfig
···760760config MFD_DAVINCI_VOICECODEC761761 tristate762762 select MFD_CORE763763+ select REGMAP_MMIO763764764765config MFD_TI_AM335X_TSCADC765766 tristate "TI ADC / Touch Screen chip support"···12261225 functionaltiy of the device other drivers must be enabled.1227122612281227config MFD_STW481X12291229- bool "Support for ST Microelectronics STw481x"12281228+ tristate "Support for ST Microelectronics STw481x"12301229 depends on I2C && ARCH_NOMADIK12311230 select REGMAP_I2C12321231 select MFD_CORE···1249124812501249# Chip drivers12511250config MCP_UCB120012521252- bool "Support for UCB1200 / UCB1300"12511251+ tristate "Support for UCB1200 / UCB1300"12531252 depends on MCP_SA11X012541253 select MCP12551254
+1-1
drivers/mfd/ab8500-core.c
···591591 num_irqs = AB8500_NR_IRQS;592592593593 /* If ->irq_base is zero this will give a linear mapping */594594- ab8500->domain = irq_domain_add_simple(NULL,594594+ ab8500->domain = irq_domain_add_simple(ab8500->dev->of_node,595595 num_irqs, 0,596596 &ab8500_irq_ops, ab8500);597597
+43
drivers/mtd/chips/cfi_cmdset_0001.c
···5252/* Atmel chips */5353#define AT49BV640D 0x02de5454#define AT49BV640DT 0x02db5555+/* Sharp chips */5656+#define LH28F640BFHE_PTTL90 0x00b05757+#define LH28F640BFHE_PBTL90 0x00b15858+#define LH28F640BFHE_PTTL70A 0x00b25959+#define LH28F640BFHE_PBTL70A 0x00b355605661static int cfi_intelext_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *);5762static int cfi_intelext_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *);···263258 (cfi->cfiq->EraseRegionInfo[1] & 0xffff0000) | 0x3e;264259};265260261261+static int is_LH28F640BF(struct cfi_private *cfi)262262+{263263+ /* Sharp LH28F640BF Family */264264+ if (cfi->mfr == CFI_MFR_SHARP && (265265+ cfi->id == LH28F640BFHE_PTTL90 || cfi->id == LH28F640BFHE_PBTL90 ||266266+ cfi->id == LH28F640BFHE_PTTL70A || cfi->id == LH28F640BFHE_PBTL70A))267267+ return 1;268268+ return 0;269269+}270270+271271+static void fixup_LH28F640BF(struct mtd_info *mtd)272272+{273273+ struct map_info *map = mtd->priv;274274+ struct cfi_private *cfi = map->fldrv_priv;275275+ struct cfi_pri_intelext *extp = cfi->cmdset_priv;276276+277277+ /* Reset the Partition Configuration Register on LH28F640BF278278+ * to a single partition (PCR = 0x000): PCR is embedded into A0-A15. */279279+ if (is_LH28F640BF(cfi)) {280280+ printk(KERN_INFO "Reset Partition Config. Register: 1 Partition of 4 planes\n");281281+ map_write(map, CMD(0x60), 0);282282+ map_write(map, CMD(0x04), 0);283283+284284+ /* We have set one single partition thus285285+ * Simultaneous Operations are not allowed */286286+ printk(KERN_INFO "cfi_cmdset_0001: Simultaneous Operations disabled\n");287287+ extp->FeatureSupport &= ~512;288288+ }289289+}290290+266291static void fixup_use_point(struct mtd_info *mtd)267292{268293 struct map_info *map = mtd->priv;···344309 { CFI_MFR_ST, 0x00ba, /* M28W320CT */ fixup_st_m28w320ct },345310 { CFI_MFR_ST, 0x00bb, /* M28W320CB */ fixup_st_m28w320cb },346311 { CFI_MFR_INTEL, CFI_ID_ANY, fixup_unlock_powerup_lock },312312+ { CFI_MFR_SHARP, CFI_ID_ANY, fixup_unlock_powerup_lock },313313+ { CFI_MFR_SHARP, CFI_ID_ANY, fixup_LH28F640BF },347314 { 0, 0, NULL }348315};349316···16851648 adr += chip->start;16861649 initial_adr = adr;16871650 cmd_adr = adr & ~(wbufsize-1);16511651+16521652+ /* Sharp LH28F640BF chips need the first address for the16531653+ * Page Buffer Program command. See Table 5 of16541654+ * LH28F320BF, LH28F640BF, LH28F128BF Series (Appendix FUM00701) */16551655+ if (is_LH28F640BF(cfi))16561656+ cmd_adr = adr;1688165716891658 /* Let's determine this according to the interleave only once */16901659 write_cmd = (cfi->cfiq->P_ID != P_ID_INTEL_PERFORMANCE) ? CMD(0xe8) : CMD(0xe9);
···40474047 ecc->layout->oobavail += ecc->layout->oobfree[i].length;40484048 mtd->oobavail = ecc->layout->oobavail;4049404940504050- /* ECC sanity check: warn noisily if it's too weak */40514051- WARN_ON(!nand_ecc_strength_good(mtd));40504050+ /* ECC sanity check: warn if it's too weak */40514051+ if (!nand_ecc_strength_good(mtd))40524052+ pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n",40534053+ mtd->name);4052405440534055 /*40544056 * Set the number of read / write steps for one page depending on ECC
+1-1
drivers/net/bonding/bond_main.c
···40374037 }4038403840394039 if (ad_select) {40404040- bond_opt_initstr(&newval, lacp_rate);40404040+ bond_opt_initstr(&newval, ad_select);40414041 valptr = bond_opt_parse(bond_opt_get(BOND_OPT_AD_SELECT),40424042 &newval);40434043 if (!valptr) {
+11-32
drivers/net/ethernet/broadcom/bcmsysport.c
···710710711711 work_done = bcm_sysport_tx_reclaim(ring->priv, ring);712712713713- if (work_done < budget) {713713+ if (work_done == 0) {714714 napi_complete(napi);715715 /* re-enable TX interrupt */716716 intrl2_1_mask_clear(ring->priv, BIT(ring->index));717717 }718718719719- return work_done;719719+ return 0;720720}721721722722static void bcm_sysport_tx_reclaim_all(struct bcm_sysport_priv *priv)···13391339 usleep_range(1000, 2000);13401340}1341134113421342-static inline int umac_reset(struct bcm_sysport_priv *priv)13421342+static inline void umac_reset(struct bcm_sysport_priv *priv)13431343{13441344- unsigned int timeout = 0;13451344 u32 reg;13461346- int ret = 0;1347134513481348- umac_writel(priv, 0, UMAC_CMD);13491349- while (timeout++ < 1000) {13501350- reg = umac_readl(priv, UMAC_CMD);13511351- if (!(reg & CMD_SW_RESET))13521352- break;13531353-13541354- udelay(1);13551355- }13561356-13571357- if (timeout == 1000) {13581358- dev_err(&priv->pdev->dev,13591359- "timeout waiting for MAC to come out of reset\n");13601360- ret = -ETIMEDOUT;13611361- }13621362-13631363- return ret;13461346+ reg = umac_readl(priv, UMAC_CMD);13471347+ reg |= CMD_SW_RESET;13481348+ umac_writel(priv, reg, UMAC_CMD);13491349+ udelay(10);13501350+ reg = umac_readl(priv, UMAC_CMD);13511351+ reg &= ~CMD_SW_RESET;13521352+ umac_writel(priv, reg, UMAC_CMD);13641353}1365135413661355static void umac_set_hw_addr(struct bcm_sysport_priv *priv,···14011412 int ret;1402141314031414 /* Reset UniMAC */14041404- ret = umac_reset(priv);14051405- if (ret) {14061406- netdev_err(dev, "UniMAC reset failed\n");14071407- return ret;14081408- }14151415+ umac_reset(priv);1409141614101417 /* Flush TX and RX FIFOs at TOPCTRL level */14111418 topctrl_flush(priv);···16831698 /* Set the needed headroom once and for all */16841699 BUILD_BUG_ON(sizeof(struct bcm_tsb) != 8);16851700 dev->needed_headroom += sizeof(struct bcm_tsb);16861686-16871687- /* We are interfaced to a switch which handles the multicast16881688- * filtering for us, so we do not support programming any16891689- * multicast hash table in this Ethernet MAC.16901690- */16911691- dev->flags &= ~IFF_MULTICAST;1692170116931702 /* libphy will adjust the link state accordingly */16941703 netif_carrier_off(dev);
+2-1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
···797797798798 return;799799 }800800- bnx2x_frag_free(fp, new_data);800800+ if (new_data)801801+ bnx2x_frag_free(fp, new_data);801802drop:802803 /* drop the packet and keep the buffer in the bin */803804 DP(NETIF_MSG_RX_STATUS,
+1-1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···1294612946 * without the default SB.1294712947 * For VFs there is no default SB, then we return (index+1).1294812948 */1294912949- pci_read_config_word(pdev, pdev->msix_cap + PCI_MSI_FLAGS, &control);1294912949+ pci_read_config_word(pdev, pdev->msix_cap + PCI_MSIX_FLAGS, &control);12950129501295112951 index = control & PCI_MSIX_FLAGS_QSIZE;1295212952
+6-10
drivers/net/ethernet/broadcom/genet/bcmgenet.c
···14081408 if (cb->skb)14091409 continue;1410141014111411- /* set the DMA descriptor length once and for all14121412- * it will only change if we support dynamically sizing14131413- * priv->rx_buf_len, but we do not14141414- */14151415- dmadesc_set_length_status(priv, priv->rx_bd_assign_ptr,14161416- priv->rx_buf_len << DMA_BUFLENGTH_SHIFT);14171417-14181411 ret = bcmgenet_rx_refill(priv, cb);14191412 if (ret)14201413 break;···25282535 netif_set_real_num_tx_queues(priv->dev, priv->hw_params->tx_queues + 1);25292536 netif_set_real_num_rx_queues(priv->dev, priv->hw_params->rx_queues + 1);2530253725312531- err = register_netdev(dev);25322532- if (err)25332533- goto err_clk_disable;25382538+ /* libphy will determine the link state */25392539+ netif_carrier_off(dev);2534254025352541 /* Turn off the main clock, WOL clock is handled separately */25362542 if (!IS_ERR(priv->clk))25372543 clk_disable_unprepare(priv->clk);25442544+25452545+ err = register_netdev(dev);25462546+ if (err)25472547+ goto err;2538254825392549 return err;25402550
···4040#include <linux/if_ether.h>4141#include <linux/if_vlan.h>4242#include <linux/vmalloc.h>4343+#include <linux/irq.h>43444445#include "mlx4_en.h"4546···783782 PKT_HASH_TYPE_L3);784783785784 skb_record_rx_queue(gro_skb, cq->ring);785785+ skb_mark_napi_id(gro_skb, &cq->napi);786786787787 if (ring->hwtstamp_rx_filter == HWTSTAMP_FILTER_ALL) {788788 timestamp = mlx4_en_get_cqe_ts(cqe);···898896899897 /* If we used up all the quota - we're probably not done yet... */900898 if (done == budget) {899899+ int cpu_curr;900900+ const struct cpumask *aff;901901+901902 INC_PERF_COUNTER(priv->pstats.napi_quota);902902- if (unlikely(cq->mcq.irq_affinity_change)) {903903- cq->mcq.irq_affinity_change = false;903903+904904+ cpu_curr = smp_processor_id();905905+ aff = irq_desc_get_irq_data(cq->irq_desc)->affinity;906906+907907+ if (unlikely(!cpumask_test_cpu(cpu_curr, aff))) {908908+ /* Current cpu is not according to smp_irq_affinity -909909+ * probably affinity changed. need to stop this NAPI910910+ * poll, and restart it on the right CPU911911+ */904912 napi_complete(napi);905913 mlx4_en_arm_cq(priv, cq);906914 return 0;907915 }908916 } else {909917 /* Done for now */910910- cq->mcq.irq_affinity_change = false;911918 napi_complete(napi);912919 mlx4_en_arm_cq(priv, cq);913920 }
+13-21
drivers/net/ethernet/mellanox/mlx4/en_tx.c
···351351 return cnt;352352}353353354354-static int mlx4_en_process_tx_cq(struct net_device *dev,355355- struct mlx4_en_cq *cq,356356- int budget)354354+static bool mlx4_en_process_tx_cq(struct net_device *dev,355355+ struct mlx4_en_cq *cq)357356{358357 struct mlx4_en_priv *priv = netdev_priv(dev);359358 struct mlx4_cq *mcq = &cq->mcq;···371372 int factor = priv->cqe_factor;372373 u64 timestamp = 0;373374 int done = 0;375375+ int budget = priv->tx_work_limit;374376375377 if (!priv->port_up)376376- return 0;378378+ return true;377379378380 index = cons_index & size_mask;379381 cqe = &buf[(index << factor) + factor];···447447 netif_tx_wake_queue(ring->tx_queue);448448 ring->wake_queue++;449449 }450450- return done;450450+ return done < budget;451451}452452453453void mlx4_en_tx_irq(struct mlx4_cq *mcq)···467467 struct mlx4_en_cq *cq = container_of(napi, struct mlx4_en_cq, napi);468468 struct net_device *dev = cq->dev;469469 struct mlx4_en_priv *priv = netdev_priv(dev);470470- int done;470470+ int clean_complete;471471472472- done = mlx4_en_process_tx_cq(dev, cq, budget);472472+ clean_complete = mlx4_en_process_tx_cq(dev, cq);473473+ if (!clean_complete)474474+ return budget;473475474474- /* If we used up all the quota - we're probably not done yet... */475475- if (done < budget) {476476- /* Done for now */477477- cq->mcq.irq_affinity_change = false;478478- napi_complete(napi);479479- mlx4_en_arm_cq(priv, cq);480480- return done;481481- } else if (unlikely(cq->mcq.irq_affinity_change)) {482482- cq->mcq.irq_affinity_change = false;483483- napi_complete(napi);484484- mlx4_en_arm_cq(priv, cq);485485- return 0;486486- }487487- return budget;476476+ napi_complete(napi);477477+ mlx4_en_arm_cq(priv, cq);478478+479479+ return 0;488480}489481490482static struct mlx4_en_tx_desc *mlx4_en_bounce_to_desc(struct mlx4_en_priv *priv,
···187187 return d ? to_mii_bus(d) : NULL;188188}189189EXPORT_SYMBOL(of_mdio_find_bus);190190+191191+/* Walk the list of subnodes of a mdio bus and look for a node that matches the192192+ * phy's address with its 'reg' property. If found, set the of_node pointer for193193+ * the phy. This allows auto-probed phy devices to be supplied with information194194+ * passed in via DT.195195+ */196196+static void of_mdiobus_link_phydev(struct mii_bus *mdio,197197+ struct phy_device *phydev)198198+{199199+ struct device *dev = &phydev->dev;200200+ struct device_node *child;201201+202202+ if (dev->of_node || !mdio->dev.of_node)203203+ return;204204+205205+ for_each_available_child_of_node(mdio->dev.of_node, child) {206206+ int addr;207207+ int ret;208208+209209+ ret = of_property_read_u32(child, "reg", &addr);210210+ if (ret < 0) {211211+ dev_err(dev, "%s has invalid PHY address\n",212212+ child->full_name);213213+ continue;214214+ }215215+216216+ /* A PHY must have a reg property in the range [0-31] */217217+ if (addr >= PHY_MAX_ADDR) {218218+ dev_err(dev, "%s PHY address %i is too large\n",219219+ child->full_name, addr);220220+ continue;221221+ }222222+223223+ if (addr == phydev->addr) {224224+ dev->of_node = child;225225+ return;226226+ }227227+ }228228+}229229+#else /* !IS_ENABLED(CONFIG_OF_MDIO) */230230+static inline void of_mdiobus_link_phydev(struct mii_bus *mdio,231231+ struct phy_device *phydev)232232+{233233+}190234#endif191235192236/**
+1-7
drivers/net/ppp/ppp_generic.c
···539539{540540 struct sock_fprog uprog;541541 struct sock_filter *code = NULL;542542- int len, err;542542+ int len;543543544544 if (copy_from_user(&uprog, arg, sizeof(uprog)))545545 return -EFAULT;···553553 code = memdup_user(uprog.filter, len);554554 if (IS_ERR(code))555555 return PTR_ERR(code);556556-557557- err = sk_chk_filter(code, uprog.len);558558- if (err) {559559- kfree(code);560560- return err;561561- }562556563557 *p = code;564558 return uprog.len;
···23632363 "FarSync TE1"23642364};2365236523662366-static void23662366+static int23672367fst_init_card(struct fst_card_info *card)23682368{23692369 int i;···23742374 * we'll have to revise it in some way then.23752375 */23762376 for (i = 0; i < card->nports; i++) {23772377- err = register_hdlc_device(card->ports[i].dev);23782378- if (err < 0) {23792379- int j;23772377+ err = register_hdlc_device(card->ports[i].dev);23782378+ if (err < 0) {23802379 pr_err("Cannot register HDLC device for port %d (errno %d)\n",23812381- i, -err);23822382- for (j = i; j < card->nports; j++) {23832383- free_netdev(card->ports[j].dev);23842384- card->ports[j].dev = NULL;23852385- }23862386- card->nports = i;23872387- break;23882388- }23802380+ i, -err);23812381+ while (i--)23822382+ unregister_hdlc_device(card->ports[i].dev);23832383+ return err;23842384+ }23892385 }2390238623912387 pr_info("%s-%s: %s IRQ%d, %d ports\n",23922388 port_to_dev(&card->ports[0])->name,23932389 port_to_dev(&card->ports[card->nports - 1])->name,23942390 type_strings[card->type], card->irq, card->nports);23912391+ return 0;23952392}2396239323972394static const struct net_device_ops fst_ops = {···24442447 /* Try to enable the device */24452448 if ((err = pci_enable_device(pdev)) != 0) {24462449 pr_err("Failed to enable card. Err %d\n", -err);24472447- kfree(card);24482448- return err;24502450+ goto enable_fail;24492451 }2450245224512453 if ((err = pci_request_regions(pdev, "FarSync")) !=0) {24522454 pr_err("Failed to allocate regions. Err %d\n", -err);24532453- pci_disable_device(pdev);24542454- kfree(card);24552455- return err;24552455+ goto regions_fail;24562456 }2457245724582458 /* Get virtual addresses of memory regions */···24582464 card->phys_ctlmem = pci_resource_start(pdev, 3);24592465 if ((card->mem = ioremap(card->phys_mem, FST_MEMSIZE)) == NULL) {24602466 pr_err("Physical memory remap failed\n");24612461- pci_release_regions(pdev);24622462- pci_disable_device(pdev);24632463- kfree(card);24642464- return -ENODEV;24672467+ err = -ENODEV;24682468+ goto ioremap_physmem_fail;24652469 }24662470 if ((card->ctlmem = ioremap(card->phys_ctlmem, 0x10)) == NULL) {24672471 pr_err("Control memory remap failed\n");24682468- pci_release_regions(pdev);24692469- pci_disable_device(pdev);24702470- iounmap(card->mem);24712471- kfree(card);24722472- return -ENODEV;24722472+ err = -ENODEV;24732473+ goto ioremap_ctlmem_fail;24732474 }24742475 dbg(DBG_PCI, "kernel mem %p, ctlmem %p\n", card->mem, card->ctlmem);2475247624762477 /* Register the interrupt handler */24772478 if (request_irq(pdev->irq, fst_intr, IRQF_SHARED, FST_DEV_NAME, card)) {24782479 pr_err("Unable to register interrupt %d\n", card->irq);24792479- pci_release_regions(pdev);24802480- pci_disable_device(pdev);24812481- iounmap(card->ctlmem);24822482- iounmap(card->mem);24832483- kfree(card);24842484- return -ENODEV;24802480+ err = -ENODEV;24812481+ goto irq_fail;24852482 }2486248324872484 /* Record info we need */···24982513 while (i--)24992514 free_netdev(card->ports[i].dev);25002515 pr_err("FarSync: out of memory\n");25012501- free_irq(card->irq, card);25022502- pci_release_regions(pdev);25032503- pci_disable_device(pdev);25042504- iounmap(card->ctlmem);25052505- iounmap(card->mem);25062506- kfree(card);25072507- return -ENODEV;25162516+ err = -ENOMEM;25172517+ goto hdlcdev_fail;25082518 }25092519 card->ports[i].dev = dev;25102520 card->ports[i].card = card;···25452565 pci_set_drvdata(pdev, card);2546256625472567 /* Remainder of card setup */25682568+ if (no_of_cards_added >= FST_MAX_CARDS) {25692569+ pr_err("FarSync: too many cards\n");25702570+ err = -ENOMEM;25712571+ goto card_array_fail;25722572+ }25482573 fst_card_array[no_of_cards_added] = card;25492574 card->card_no = no_of_cards_added++; /* Record instance and bump it */25502550- fst_init_card(card);25752575+ err = fst_init_card(card);25762576+ if (err)25772577+ goto init_card_fail;25512578 if (card->family == FST_FAMILY_TXU) {25522579 /*25532580 * Allocate a dma buffer for transmit and receives···25642577 &card->rx_dma_handle_card);25652578 if (card->rx_dma_handle_host == NULL) {25662579 pr_err("Could not allocate rx dma buffer\n");25672567- fst_disable_intr(card);25682568- pci_release_regions(pdev);25692569- pci_disable_device(pdev);25702570- iounmap(card->ctlmem);25712571- iounmap(card->mem);25722572- kfree(card);25732573- return -ENOMEM;25802580+ err = -ENOMEM;25812581+ goto rx_dma_fail;25742582 }25752583 card->tx_dma_handle_host =25762584 pci_alloc_consistent(card->device, FST_MAX_MTU,25772585 &card->tx_dma_handle_card);25782586 if (card->tx_dma_handle_host == NULL) {25792587 pr_err("Could not allocate tx dma buffer\n");25802580- fst_disable_intr(card);25812581- pci_release_regions(pdev);25822582- pci_disable_device(pdev);25832583- iounmap(card->ctlmem);25842584- iounmap(card->mem);25852585- kfree(card);25862586- return -ENOMEM;25882588+ err = -ENOMEM;25892589+ goto tx_dma_fail;25872590 }25882591 }25892592 return 0; /* Success */25932593+25942594+tx_dma_fail:25952595+ pci_free_consistent(card->device, FST_MAX_MTU,25962596+ card->rx_dma_handle_host,25972597+ card->rx_dma_handle_card);25982598+rx_dma_fail:25992599+ fst_disable_intr(card);26002600+ for (i = 0 ; i < card->nports ; i++)26012601+ unregister_hdlc_device(card->ports[i].dev);26022602+init_card_fail:26032603+ fst_card_array[card->card_no] = NULL;26042604+card_array_fail:26052605+ for (i = 0 ; i < card->nports ; i++)26062606+ free_netdev(card->ports[i].dev);26072607+hdlcdev_fail:26082608+ free_irq(card->irq, card);26092609+irq_fail:26102610+ iounmap(card->ctlmem);26112611+ioremap_ctlmem_fail:26122612+ iounmap(card->mem);26132613+ioremap_physmem_fail:26142614+ pci_release_regions(pdev);26152615+regions_fail:26162616+ pci_disable_device(pdev);26172617+enable_fail:26182618+ kfree(card);26192619+ return err;25902620}2591262125922622/*
+16-11
drivers/net/xen-netfront.c
···14391439 unsigned int i = 0;14401440 unsigned int num_queues = info->netdev->real_num_tx_queues;1441144114421442+ netif_carrier_off(info->netdev);14431443+14421444 for (i = 0; i < num_queues; ++i) {14431445 struct netfront_queue *queue = &info->queues[i];14441444-14451445- /* Stop old i/f to prevent errors whilst we rebuild the state. */14461446- spin_lock_bh(&queue->rx_lock);14471447- spin_lock_irq(&queue->tx_lock);14481448- netif_carrier_off(queue->info->netdev);14491449- spin_unlock_irq(&queue->tx_lock);14501450- spin_unlock_bh(&queue->rx_lock);1451144614521447 if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))14531448 unbind_from_irqhandler(queue->tx_irq, queue);···14521457 }14531458 queue->tx_evtchn = queue->rx_evtchn = 0;14541459 queue->tx_irq = queue->rx_irq = 0;14601460+14611461+ napi_synchronize(&queue->napi);1455146214561463 /* End access and free the pages */14571464 xennet_end_access(queue->tx_ring_ref, queue->tx.sring);···20432046 /* By now, the queue structures have been set up */20442047 for (j = 0; j < num_queues; ++j) {20452048 queue = &np->queues[j];20462046- spin_lock_bh(&queue->rx_lock);20472047- spin_lock_irq(&queue->tx_lock);2048204920492050 /* Step 1: Discard all pending TX packet fragments. */20512051+ spin_lock_irq(&queue->tx_lock);20502052 xennet_release_tx_bufs(queue);20532053+ spin_unlock_irq(&queue->tx_lock);2051205420522055 /* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */20562056+ spin_lock_bh(&queue->rx_lock);20572057+20532058 for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {20542059 skb_frag_t *frag;20552060 const struct page *page;···20752076 }2076207720772078 queue->rx.req_prod_pvt = requeue_idx;20792079+20802080+ spin_unlock_bh(&queue->rx_lock);20782081 }2079208220802083 /*···20882087 netif_carrier_on(np->netdev);20892088 for (j = 0; j < num_queues; ++j) {20902089 queue = &np->queues[j];20902090+20912091 notify_remote_via_irq(queue->tx_irq);20922092 if (queue->tx_irq != queue->rx_irq)20932093 notify_remote_via_irq(queue->rx_irq);20942094- xennet_tx_buf_gc(queue);20952095- xennet_alloc_rx_buffers(queue);2096209420952095+ spin_lock_irq(&queue->tx_lock);20962096+ xennet_tx_buf_gc(queue);20972097 spin_unlock_irq(&queue->tx_lock);20982098+20992099+ spin_lock_bh(&queue->rx_lock);21002100+ xennet_alloc_rx_buffers(queue);20982101 spin_unlock_bh(&queue->rx_lock);20992102 }21002103
+15
drivers/of/fdt.c
···880880 const u64 phys_offset = __pa(PAGE_OFFSET);881881 base &= PAGE_MASK;882882 size &= PAGE_MASK;883883+884884+ if (sizeof(phys_addr_t) < sizeof(u64)) {885885+ if (base > ULONG_MAX) {886886+ pr_warning("Ignoring memory block 0x%llx - 0x%llx\n",887887+ base, base + size);888888+ return;889889+ }890890+891891+ if (base + size > ULONG_MAX) {892892+ pr_warning("Ignoring memory range 0x%lx - 0x%llx\n",893893+ ULONG_MAX, base + size);894894+ size = ULONG_MAX - base;895895+ }896896+ }897897+883898 if (base + size < phys_offset) {884899 pr_warning("Ignoring memory block 0x%llx - 0x%llx\n",885900 base, base + size);
-34
drivers/of/of_mdio.c
···182182}183183EXPORT_SYMBOL(of_mdiobus_register);184184185185-/**186186- * of_mdiobus_link_phydev - Find a device node for a phy187187- * @mdio: pointer to mii_bus structure188188- * @phydev: phydev for which the of_node pointer should be set189189- *190190- * Walk the list of subnodes of a mdio bus and look for a node that matches the191191- * phy's address with its 'reg' property. If found, set the of_node pointer for192192- * the phy. This allows auto-probed pyh devices to be supplied with information193193- * passed in via DT.194194- */195195-void of_mdiobus_link_phydev(struct mii_bus *mdio,196196- struct phy_device *phydev)197197-{198198- struct device *dev = &phydev->dev;199199- struct device_node *child;200200-201201- if (dev->of_node || !mdio->dev.of_node)202202- return;203203-204204- for_each_available_child_of_node(mdio->dev.of_node, child) {205205- int addr;206206-207207- addr = of_mdio_parse_addr(&mdio->dev, child);208208- if (addr < 0)209209- continue;210210-211211- if (addr == phydev->addr) {212212- dev->of_node = child;213213- return;214214- }215215- }216216-}217217-EXPORT_SYMBOL(of_mdiobus_link_phydev);218218-219185/* Helper function for of_phy_find_device */220186static int of_phy_match(struct device *dev, void *phy_np)221187{
+7-2
drivers/pci/pci.c
···31353135 if (probe)31363136 return 0;3137313731383138- /* Wait for Transaction Pending bit clean */31393139- if (pci_wait_for_pending(dev, pos + PCI_AF_STATUS, PCI_AF_STATUS_TP))31383138+ /*31393139+ * Wait for Transaction Pending bit to clear. A word-aligned test31403140+ * is used, so we use the control offset rather than status and shift31413141+ * the test bit to match.31423142+ */31433143+ if (pci_wait_for_pending(dev, pos + PCI_AF_CTRL,31443144+ PCI_AF_STATUS_TP << 8))31403145 goto clear;3141314631423147 dev_err(&dev->dev, "transaction is not cleared; proceeding with reset anyway\n");
+2
drivers/phy/Kconfig
···112112config PHY_SUN4I_USB113113 tristate "Allwinner sunxi SoC USB PHY driver"114114 depends on ARCH_SUNXI && HAS_IOMEM && OF115115+ depends on RESET_CONTROLLER115116 select GENERIC_PHY116117 help117118 Enable this to support the transceiver that is part of Allwinner···123122124123config PHY_SAMSUNG_USB2125124 tristate "Samsung USB 2.0 PHY driver"125125+ depends on HAS_IOMEM126126 select GENERIC_PHY127127 select MFD_SYSCON128128 help
···516516 skb_pull(skb, sizeof(struct fcoe_hdr));517517 fr_len = skb->len - sizeof(struct fcoe_crc_eof);518518519519- stats = per_cpu_ptr(lport->stats, get_cpu());520520- stats->RxFrames++;521521- stats->RxWords += fr_len / FCOE_WORD_TO_BYTE;522522-523519 fp = (struct fc_frame *)skb;524520 fc_frame_init(fp);525521 fr_dev(fp) = lport;526522 fr_sof(fp) = hp->fcoe_sof;527523 if (skb_copy_bits(skb, fr_len, &crc_eof, sizeof(crc_eof))) {528528- put_cpu();529524 kfree_skb(skb);530525 return;531526 }532527 fr_eof(fp) = crc_eof.fcoe_eof;533528 fr_crc(fp) = crc_eof.fcoe_crc32;534529 if (pskb_trim(skb, fr_len)) {535535- put_cpu();536530 kfree_skb(skb);537531 return;538532 }···538544 port = lport_priv(vn_port);539545 if (!ether_addr_equal(port->data_src_addr, dest_mac)) {540546 BNX2FC_HBA_DBG(lport, "fpma mismatch\n");541541- put_cpu();542547 kfree_skb(skb);543548 return;544549 }···545552 if (fh->fh_r_ctl == FC_RCTL_DD_SOL_DATA &&546553 fh->fh_type == FC_TYPE_FCP) {547554 /* Drop FCP data. We dont this in L2 path */548548- put_cpu();549555 kfree_skb(skb);550556 return;551557 }···554562 case ELS_LOGO:555563 if (ntoh24(fh->fh_s_id) == FC_FID_FLOGI) {556564 /* drop non-FIP LOGO */557557- put_cpu();558565 kfree_skb(skb);559566 return;560567 }···563572564573 if (fh->fh_r_ctl == FC_RCTL_BA_ABTS) {565574 /* Drop incoming ABTS */566566- put_cpu();567575 kfree_skb(skb);568576 return;569577 }578578+579579+ stats = per_cpu_ptr(lport->stats, smp_processor_id());580580+ stats->RxFrames++;581581+ stats->RxWords += fr_len / FCOE_WORD_TO_BYTE;570582571583 if (le32_to_cpu(fr_crc(fp)) !=572584 ~crc32(~0, skb->data, fr_len)) {···577583 printk(KERN_WARNING PFX "dropping frame with "578584 "CRC error\n");579585 stats->InvalidCRCCount++;580580- put_cpu();581586 kfree_skb(skb);582587 return;583588 }584584- put_cpu();585589 fc_exch_recv(lport, fp);586590}587591
+2
drivers/scsi/bnx2fc/bnx2fc_io.c
···282282 arr_sz, GFP_KERNEL);283283 if (!cmgr->free_list_lock) {284284 printk(KERN_ERR PFX "failed to alloc free_list_lock\n");285285+ kfree(cmgr->free_list);286286+ cmgr->free_list = NULL;285287 goto mem_err;286288 }287289
+12-1
drivers/scsi/ibmvscsi/ibmvscsi.c
···185185 if (crq->valid & 0x80) {186186 if (++queue->cur == queue->size)187187 queue->cur = 0;188188+189189+ /* Ensure the read of the valid bit occurs before reading any190190+ * other bits of the CRQ entry191191+ */192192+ rmb();188193 } else189194 crq = NULL;190195 spin_unlock_irqrestore(&queue->lock, flags);···208203{209204 struct vio_dev *vdev = to_vio_dev(hostdata->dev);210205206206+ /*207207+ * Ensure the command buffer is flushed to memory before handing it208208+ * over to the VIOS to prevent it from fetching any stale data.209209+ */210210+ mb();211211 return plpar_hcall_norets(H_SEND_CRQ, vdev->unit_address, word1, word2);212212}213213···807797 evt->hostdata->dev);808798 if (evt->cmnd_done)809799 evt->cmnd_done(evt->cmnd);810810- } else if (evt->done)800800+ } else if (evt->done && evt->crq.format != VIOSRP_MAD_FORMAT &&801801+ evt->iu.srp.login_req.opcode != SRP_LOGIN_REQ)811802 evt->done(evt);812803 free_event_struct(&evt->hostdata->pool, evt);813804 spin_lock_irqsave(hostdata->host->host_lock, flags);
···237237 virtscsi_vq_done(vscsi, req_vq, virtscsi_complete_cmd);238238};239239240240+static void virtscsi_poll_requests(struct virtio_scsi *vscsi)241241+{242242+ int i, num_vqs;243243+244244+ num_vqs = vscsi->num_queues;245245+ for (i = 0; i < num_vqs; i++)246246+ virtscsi_vq_done(vscsi, &vscsi->req_vqs[i],247247+ virtscsi_complete_cmd);248248+}249249+240250static void virtscsi_complete_free(struct virtio_scsi *vscsi, void *buf)241251{242252 struct virtio_scsi_cmd *cmd = buf;···263253 virtscsi_vq_done(vscsi, &vscsi->ctrl_vq, virtscsi_complete_free);264254};265255256256+static void virtscsi_handle_event(struct work_struct *work);257257+266258static int virtscsi_kick_event(struct virtio_scsi *vscsi,267259 struct virtio_scsi_event_node *event_node)268260{···272260 struct scatterlist sg;273261 unsigned long flags;274262263263+ INIT_WORK(&event_node->work, virtscsi_handle_event);275264 sg_init_one(&sg, &event_node->event, sizeof(struct virtio_scsi_event));276265277266 spin_lock_irqsave(&vscsi->event_vq.vq_lock, flags);···390377{391378 struct virtio_scsi_event_node *event_node = buf;392379393393- INIT_WORK(&event_node->work, virtscsi_handle_event);394380 schedule_work(&event_node->work);395381}396382···600588 if (cmd->resp.tmf.response == VIRTIO_SCSI_S_OK ||601589 cmd->resp.tmf.response == VIRTIO_SCSI_S_FUNCTION_SUCCEEDED)602590 ret = SUCCESS;591591+592592+ /*593593+ * The spec guarantees that all requests related to the TMF have594594+ * been completed, but the callback might not have run yet if595595+ * we're using independent interrupts (e.g. MSI). Poll the596596+ * virtqueues once.597597+ *598598+ * In the abort case, sc->scsi_done will do nothing, because599599+ * the block layer must have detected a timeout and as a result600600+ * REQ_ATOM_COMPLETE has been set.601601+ */602602+ virtscsi_poll_requests(vscsi);603603604604out:605605 mempool_free(cmd, virtscsi_cmd_pool);
+6-2
drivers/spi/spi-pxa2xx.c
···118118 */119119 orig = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL);120120121121+ /* Test SPI_CS_CONTROL_SW_MODE bit enabling */121122 value = orig | SPI_CS_CONTROL_SW_MODE;122123 writel(value, drv_data->ioaddr + offset + SPI_CS_CONTROL);123124 value = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL);···127126 goto detection_done;128127 }129128130130- value &= ~SPI_CS_CONTROL_SW_MODE;129129+ orig = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL);130130+131131+ /* Test SPI_CS_CONTROL_SW_MODE bit disabling */132132+ value = orig & ~SPI_CS_CONTROL_SW_MODE;131133 writel(value, drv_data->ioaddr + offset + SPI_CS_CONTROL);132134 value = readl(drv_data->ioaddr + offset + SPI_CS_CONTROL);133133- if (value != orig) {135135+ if (value != (orig & ~SPI_CS_CONTROL_SW_MODE)) {134136 offset = 0x800;135137 goto detection_done;136138 }
+13-31
drivers/spi/spi-qup.c
···424424 return 0;425425}426426427427-static void spi_qup_set_cs(struct spi_device *spi, bool enable)428428-{429429- struct spi_qup *controller = spi_master_get_devdata(spi->master);430430-431431- u32 iocontol, mask;432432-433433- iocontol = readl_relaxed(controller->base + SPI_IO_CONTROL);434434-435435- /* Disable auto CS toggle and use manual */436436- iocontol &= ~SPI_IO_C_MX_CS_MODE;437437- iocontol |= SPI_IO_C_FORCE_CS;438438-439439- iocontol &= ~SPI_IO_C_CS_SELECT_MASK;440440- iocontol |= SPI_IO_C_CS_SELECT(spi->chip_select);441441-442442- mask = SPI_IO_C_CS_N_POLARITY_0 << spi->chip_select;443443-444444- if (enable)445445- iocontol |= mask;446446- else447447- iocontol &= ~mask;448448-449449- writel_relaxed(iocontol, controller->base + SPI_IO_CONTROL);450450-}451451-452427static int spi_qup_transfer_one(struct spi_master *master,453428 struct spi_device *spi,454429 struct spi_transfer *xfer)···546571 return -ENOMEM;547572 }548573574574+ /* use num-cs unless not present or out of range */575575+ if (of_property_read_u16(dev->of_node, "num-cs",576576+ &master->num_chipselect) ||577577+ (master->num_chipselect > SPI_NUM_CHIPSELECTS))578578+ master->num_chipselect = SPI_NUM_CHIPSELECTS;579579+549580 master->bus_num = pdev->id;550581 master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP;551551- master->num_chipselect = SPI_NUM_CHIPSELECTS;552582 master->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32);553583 master->max_speed_hz = max_freq;554554- master->set_cs = spi_qup_set_cs;555584 master->transfer_one = spi_qup_transfer_one;556585 master->dev.of_node = pdev->dev.of_node;557586 master->auto_runtime_pm = true;···619640 if (ret)620641 goto error;621642622622- ret = devm_spi_register_master(dev, master);623623- if (ret)624624- goto error;625625-626643 pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC);627644 pm_runtime_use_autosuspend(dev);628645 pm_runtime_set_active(dev);629646 pm_runtime_enable(dev);647647+648648+ ret = devm_spi_register_master(dev, master);649649+ if (ret)650650+ goto disable_pm;651651+630652 return 0;631653654654+disable_pm:655655+ pm_runtime_disable(&pdev->dev);632656error:633657 clk_disable_unprepare(cclk);634658 clk_disable_unprepare(iclk);
···465465 struct ad7291_platform_data *pdata = client->dev.platform_data;466466 struct ad7291_chip_info *chip;467467 struct iio_dev *indio_dev;468468- int ret = 0;468468+ int ret;469469470470 indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*chip));471471 if (!indio_dev)···475475 if (pdata && pdata->use_external_ref) {476476 chip->reg = devm_regulator_get(&client->dev, "vref");477477 if (IS_ERR(chip->reg))478478- return ret;478478+ return PTR_ERR(chip->reg);479479480480 ret = regulator_enable(chip->reg);481481 if (ret)
+4-2
drivers/staging/tidspbridge/core/tiomap3430.c
···280280 OMAP3430_IVA2_MOD, OMAP2_CM_CLKSTCTRL);281281282282 /* Wait until the state has moved to ON */283283- while (*pdata->dsp_prm_read(OMAP3430_IVA2_MOD, OMAP2_PM_PWSTST)&284284- OMAP_INTRANSITION_MASK);283283+ while ((*pdata->dsp_prm_read)(OMAP3430_IVA2_MOD,284284+ OMAP2_PM_PWSTST) &285285+ OMAP_INTRANSITION_MASK)286286+ ;285287 /* Disable Automatic transition */286288 (*pdata->dsp_cm_write)(OMAP34XX_CLKSTCTRL_DISABLE_AUTO,287289 OMAP3430_IVA2_MOD, OMAP2_CM_CLKSTCTRL);
+1-1
drivers/target/iscsi/iscsi_target.c
···13091309 if (cmd->data_direction != DMA_TO_DEVICE) {13101310 pr_err("Command ITT: 0x%08x received DataOUT for a"13111311 " NON-WRITE command.\n", cmd->init_task_tag);13121312- return iscsit_reject_cmd(cmd, ISCSI_REASON_PROTOCOL_ERROR, buf);13121312+ return iscsit_dump_data_payload(conn, payload_length, 1);13131313 }13141314 se_cmd = &cmd->se_cmd;13151315 iscsit_mod_dataout_timer(cmd);
+11-3
drivers/target/iscsi/iscsi_target_auth.c
···174174 char *nr_out_ptr,175175 unsigned int *nr_out_len)176176{177177- char *endptr;178177 unsigned long id;179178 unsigned char id_as_uchar;180179 unsigned char digest[MD5_SIGNATURE_SIZE];···319320 }320321321322 if (type == HEX)322322- id = simple_strtoul(&identifier[2], &endptr, 0);323323+ ret = kstrtoul(&identifier[2], 0, &id);323324 else324324- id = simple_strtoul(identifier, &endptr, 0);325325+ ret = kstrtoul(identifier, 0, &id);326326+327327+ if (ret < 0) {328328+ pr_err("kstrtoul() failed for CHAP identifier: %d\n", ret);329329+ goto out;330330+ }325331 if (id > 255) {326332 pr_err("chap identifier: %lu greater than 255\n", id);327333 goto out;···353349 strlen(challenge));354350 if (!challenge_len) {355351 pr_err("Unable to convert incoming challenge\n");352352+ goto out;353353+ }354354+ if (challenge_len > 1024) {355355+ pr_err("CHAP_C exceeds maximum binary size of 1024 bytes\n");356356 goto out;357357 }358358 /*
+7-6
drivers/target/iscsi/iscsi_target_login.c
···12161216static int __iscsi_target_login_thread(struct iscsi_np *np)12171217{12181218 u8 *buffer, zero_tsih = 0;12191219- int ret = 0, rc, stop;12191219+ int ret = 0, rc;12201220 struct iscsi_conn *conn = NULL;12211221 struct iscsi_login *login;12221222 struct iscsi_portal_group *tpg = NULL;···12301230 if (np->np_thread_state == ISCSI_NP_THREAD_RESET) {12311231 np->np_thread_state = ISCSI_NP_THREAD_ACTIVE;12321232 complete(&np->np_restart_comp);12331233+ } else if (np->np_thread_state == ISCSI_NP_THREAD_SHUTDOWN) {12341234+ spin_unlock_bh(&np->np_thread_lock);12351235+ goto exit;12331236 } else {12341237 np->np_thread_state = ISCSI_NP_THREAD_ACTIVE;12351238 }···14251422 }1426142314271424out:14281428- stop = kthread_should_stop();14291429- /* Wait for another socket.. */14301430- if (!stop)14311431- return 1;14251425+ return 1;14261426+14321427exit:14331428 iscsi_stop_login_thread_timer(np);14341429 spin_lock_bh(&np->np_thread_lock);···1443144214441443 allow_signal(SIGINT);1445144414461446- while (!kthread_should_stop()) {14451445+ while (1) {14471446 ret = __iscsi_target_login_thread(np);14481447 /*14491448 * We break and exit here unless another sock_accept() call
···129129130130 tc_device_get_irq(tdev);131131132132- device_register(&tdev->dev);132132+ if (device_register(&tdev->dev)) {133133+ put_device(&tdev->dev);134134+ goto out_err;135135+ }133136 list_add_tail(&tdev->node, &tbus->devices);134137135138out_err:···151148152149 INIT_LIST_HEAD(&tc_bus.devices);153150 dev_set_name(&tc_bus.dev, "tc");154154- device_register(&tc_bus.dev);151151+ if (device_register(&tc_bus.dev)) {152152+ put_device(&tc_bus.dev);153153+ return 0;154154+ }155155156156 if (tc_bus.info.slot_size) {157157 unsigned int tc_clock = tc_get_speed(&tc_bus) / 100000;
+7-11
drivers/thermal/imx_thermal.c
···306306{307307 struct imx_thermal_data *data = platform_get_drvdata(pdev);308308 struct regmap *map;309309- int t1, t2, n1, n2;309309+ int t1, n1;310310 int ret;311311 u32 val;312312 u64 temp64;···333333 /*334334 * Sensor data layout:335335 * [31:20] - sensor value @ 25C336336- * [19:8] - sensor value of hot337337- * [7:0] - hot temperature value338336 * Use universal formula now and only need sensor value @ 25C339337 * slope = 0.4297157 - (0.0015976 * 25C fuse)340338 */341339 n1 = val >> 20;342342- n2 = (val & 0xfff00) >> 8;343343- t2 = val & 0xff;344340 t1 = 25; /* t1 always 25C */345341346342 /*···362366 data->c2 = n1 * data->c1 + 1000 * t1;363367364368 /*365365- * Set the default passive cooling trip point to 20 °C below the366366- * maximum die temperature. Can be changed from userspace.369369+ * Set the default passive cooling trip point,370370+ * can be changed from userspace.367371 */368368- data->temp_passive = 1000 * (t2 - 20);372372+ data->temp_passive = IMX_TEMP_PASSIVE;369373370374 /*371371- * The maximum die temperature is t2, let's give 5 °C cushion372372- * for noise and possible temperature rise between measurements.375375+ * The maximum die temperature set to 20 C higher than376376+ * IMX_TEMP_PASSIVE.373377 */374374- data->temp_critical = 1000 * (t2 - 5);378378+ data->temp_critical = 1000 * 20 + data->temp_passive;375379376380 return 0;377381}
+4-3
drivers/thermal/of-thermal.c
···156156157157 ret = thermal_zone_bind_cooling_device(thermal,158158 tbp->trip_id, cdev,159159- tbp->min,160160- tbp->max);159159+ tbp->max,160160+ tbp->min);161161 if (ret)162162 return ret;163163 }···712712 }713713714714 i = 0;715715- for_each_child_of_node(child, gchild)715715+ for_each_child_of_node(child, gchild) {716716 ret = thermal_of_populate_bind_params(gchild, &tz->tbps[i++],717717 tz->trips, tz->ntrips);718718 if (ret)719719 goto free_tbps;720720+ }720721721722finish:722723 of_node_put(child);
+18-15
drivers/thermal/thermal_hwmon.c
···140140 return NULL;141141}142142143143+static bool thermal_zone_crit_temp_valid(struct thermal_zone_device *tz)144144+{145145+ unsigned long temp;146146+ return tz->ops->get_crit_temp && !tz->ops->get_crit_temp(tz, &temp);147147+}148148+143149int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz)144150{145151 struct thermal_hwmon_device *hwmon;···195189 if (result)196190 goto free_temp_mem;197191198198- if (tz->ops->get_crit_temp) {199199- unsigned long temperature;200200- if (!tz->ops->get_crit_temp(tz, &temperature)) {201201- snprintf(temp->temp_crit.name,202202- sizeof(temp->temp_crit.name),192192+ if (thermal_zone_crit_temp_valid(tz)) {193193+ snprintf(temp->temp_crit.name,194194+ sizeof(temp->temp_crit.name),203195 "temp%d_crit", hwmon->count);204204- temp->temp_crit.attr.attr.name = temp->temp_crit.name;205205- temp->temp_crit.attr.attr.mode = 0444;206206- temp->temp_crit.attr.show = temp_crit_show;207207- sysfs_attr_init(&temp->temp_crit.attr.attr);208208- result = device_create_file(hwmon->device,209209- &temp->temp_crit.attr);210210- if (result)211211- goto unregister_input;212212- }196196+ temp->temp_crit.attr.attr.name = temp->temp_crit.name;197197+ temp->temp_crit.attr.attr.mode = 0444;198198+ temp->temp_crit.attr.show = temp_crit_show;199199+ sysfs_attr_init(&temp->temp_crit.attr.attr);200200+ result = device_create_file(hwmon->device,201201+ &temp->temp_crit.attr);202202+ if (result)203203+ goto unregister_input;213204 }214205215206 mutex_lock(&thermal_hwmon_list_lock);···253250 }254251255252 device_remove_file(hwmon->device, &temp->temp_input.attr);256256- if (tz->ops->get_crit_temp)253253+ if (thermal_zone_crit_temp_valid(tz))257254 device_remove_file(hwmon->device, &temp->temp_crit.attr);258255259256 mutex_lock(&thermal_hwmon_list_lock);
+1-1
drivers/thermal/ti-soc-thermal/ti-bandgap.c
···11551155 /* register shadow for context save and restore */11561156 bgp->regval = devm_kzalloc(&pdev->dev, sizeof(*bgp->regval) *11571157 bgp->conf->sensor_count, GFP_KERNEL);11581158- if (!bgp) {11581158+ if (!bgp->regval) {11591159 dev_err(&pdev->dev, "Unable to allocate mem for driver ref\n");11601160 return ERR_PTR(-ENOMEM);11611161 }
···4545config USB_DWC3_OMAP4646 tristate "Texas Instruments OMAP5 and similar Platforms"4747 depends on EXTCON && (ARCH_OMAP2PLUS || COMPILE_TEST)4848+ depends on OF4849 default USB_DWC34950 help5051 Some platforms from Texas Instruments like OMAP5, DRA7xxx and
···176176177177config USB_EHCI_MSM178178 tristate "Support for Qualcomm QSD/MSM on-chip EHCI USB controller"179179- depends on ARCH_MSM179179+ depends on ARCH_MSM || ARCH_QCOM180180 select USB_EHCI_ROOT_HUB_TT181181 ---help---182182 Enables support for the USB Host controller present on the
+4-1
drivers/usb/host/xhci-hub.c
···222223232424#include <linux/slab.h>2525+#include <linux/device.h>2526#include <asm/unaligned.h>26272728#include "xhci.h"···11401139 * including the USB 3.0 roothub, but only if CONFIG_PM_RUNTIME11411140 * is enabled, so also enable remote wake here.11421141 */11431143- if (hcd->self.root_hub->do_remote_wakeup) {11421142+ if (hcd->self.root_hub->do_remote_wakeup11431143+ && device_may_wakeup(hcd->self.controller)) {11441144+11441145 if (t1 & PORT_CONNECT) {11451146 t2 |= PORT_WKOC_E | PORT_WKDISC_E;11461147 t2 &= ~PORT_WKCONN_E;
+6-3
drivers/usb/host/xhci-ring.c
···14331433 xhci_handle_cmd_reset_ep(xhci, slot_id, cmd_trb, cmd_comp_code);14341434 break;14351435 case TRB_RESET_DEV:14361436- WARN_ON(slot_id != TRB_TO_SLOT_ID(14371437- le32_to_cpu(cmd_trb->generic.field[3])));14361436+ /* SLOT_ID field in reset device cmd completion event TRB is 0.14371437+ * Use the SLOT_ID from the command TRB instead (xhci 4.6.11)14381438+ */14391439+ slot_id = TRB_TO_SLOT_ID(14401440+ le32_to_cpu(cmd_trb->generic.field[3]));14381441 xhci_handle_cmd_reset_dev(xhci, slot_id, event);14391442 break;14401443 case TRB_NEC_GET_FW:···35373534 return 0;3538353535393536 max_burst = urb->ep->ss_ep_comp.bMaxBurst;35403540- return roundup(total_packet_count, max_burst + 1) - 1;35373537+ return DIV_ROUND_UP(total_packet_count, max_burst + 1) - 1;35413538}3542353935433540/*
+7-3
drivers/usb/host/xhci.c
···936936 */937937int xhci_resume(struct xhci_hcd *xhci, bool hibernated)938938{939939- u32 command, temp = 0;939939+ u32 command, temp = 0, status;940940 struct usb_hcd *hcd = xhci_to_hcd(xhci);941941 struct usb_hcd *secondary_hcd;942942 int retval = 0;···1054105410551055 done:10561056 if (retval == 0) {10571057- usb_hcd_resume_root_hub(hcd);10581058- usb_hcd_resume_root_hub(xhci->shared_hcd);10571057+ /* Resume root hubs only when have pending events. */10581058+ status = readl(&xhci->op_regs->status);10591059+ if (status & STS_EINT) {10601060+ usb_hcd_resume_root_hub(hcd);10611061+ usb_hcd_resume_root_hub(xhci->shared_hcd);10621062+ }10591063 }1060106410611065 /*
···256256 if (us->fflags & US_FL_WRITE_CACHE)257257 sdev->wce_default_on = 1;258258259259+ /* A few buggy USB-ATA bridges don't understand FUA */260260+ if (us->fflags & US_FL_BROKEN_FUA)261261+ sdev->broken_fua = 1;262262+259263 } else {260264261265 /* Non-disk-type devices don't need to blacklist any pages
+7
drivers/usb/storage/unusual_devs.h
···19361936 USB_SC_DEVICE, USB_PR_DEVICE, NULL,19371937 US_FL_IGNORE_RESIDUE ),1938193819391939+/* Reported by Michael Büsch <m@bues.ch> */19401940+UNUSUAL_DEV( 0x152d, 0x0567, 0x0114, 0x0114,19411941+ "JMicron",19421942+ "USB to ATA/ATAPI Bridge",19431943+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,19441944+ US_FL_BROKEN_FUA ),19451945+19391946/* Reported by Alexandre Oliva <oliva@lsd.ic.unicamp.br>19401947 * JMicron responds to USN and several other SCSI ioctls with a19411948 * residue that causes subsequent I/O requests to fail. */
+2
drivers/video/fbdev/atmel_lcdfb.c
···10571057 goto put_display_node;10581058 }1059105910601060+ INIT_LIST_HEAD(&pdata->pwr_gpios);10601061 ret = -ENOMEM;10611062 for (i = 0; i < of_gpio_named_count(display_np, "atmel,power-control-gpio"); i++) {10621063 gpio = of_get_named_gpio_flags(display_np, "atmel,power-control-gpio",···10831082 dev_err(dev, "set direction output gpio %d failed\n", gpio);10841083 goto put_display_node;10851084 }10851085+ list_add(&og->list, &pdata->pwr_gpios);10861086 }1087108710881088 if (is_gpio_power)
+1-1
drivers/video/fbdev/bfin_adv7393fb.c
···408408 /* Workaround "PPI Does Not Start Properly In Specific Mode" */409409 if (ANOMALY_05000400) {410410 ret = gpio_request_one(P_IDENT(P_PPI0_FS3), GPIOF_OUT_INIT_LOW,411411- "PPI0_FS3")411411+ "PPI0_FS3");412412 if (ret) {413413 dev_err(&client->dev, "PPI0_FS3 GPIO request failed\n");414414 ret = -EBUSY;
···219219obj-y += $(patsubst %,%.gen.o, $(fw-external-y))220220obj-$(CONFIG_FIRMWARE_IN_KERNEL) += $(patsubst %,%.gen.o, $(fw-shipped-y))221221222222+ifeq ($(KBUILD_SRC),)223223+# Makefile.build only creates subdirectories for O= builds, but external224224+# firmware might live outside the kernel source tree225225+_dummy := $(foreach d,$(addprefix $(obj)/,$(dir $(fw-external-y))), $(shell [ -d $(d) ] || mkdir -p $(d)))226226+endif227227+222228# Remove .S files and binaries created from ihex223229# (during 'make clean' .config isn't included so they're all in $(fw-shipped-))224230targets := $(fw-shipped-) $(patsubst $(obj)/%,%, \
+7
fs/aio.c
···830830static void put_reqs_available(struct kioctx *ctx, unsigned nr)831831{832832 struct kioctx_cpu *kcpu;833833+ unsigned long flags;833834834835 preempt_disable();835836 kcpu = this_cpu_ptr(ctx->cpu);836837838838+ local_irq_save(flags);837839 kcpu->reqs_available += nr;840840+838841 while (kcpu->reqs_available >= ctx->req_batch * 2) {839842 kcpu->reqs_available -= ctx->req_batch;840843 atomic_add(ctx->req_batch, &ctx->reqs_available);841844 }842845846846+ local_irq_restore(flags);843847 preempt_enable();844848}845849···851847{852848 struct kioctx_cpu *kcpu;853849 bool ret = false;850850+ unsigned long flags;854851855852 preempt_disable();856853 kcpu = this_cpu_ptr(ctx->cpu);857854855855+ local_irq_save(flags);858856 if (!kcpu->reqs_available) {859857 int old, avail = atomic_read(&ctx->reqs_available);860858···875869 ret = true;876870 kcpu->reqs_available--;877871out:872872+ local_irq_restore(flags);878873 preempt_enable();879874 return ret;880875}
+1-1
fs/autofs4/inode.c
···210210 int pipefd;211211 struct autofs_sb_info *sbi;212212 struct autofs_info *ino;213213- int pgrp;213213+ int pgrp = 0;214214 bool pgrp_set = false;215215 int ret = -EINVAL;216216
···386386 bool reloc_reserved = false;387387 int ret;388388389389+ /* Send isn't supposed to start transactions. */390390+ ASSERT(current->journal_info != (void *)BTRFS_SEND_TRANS_STUB);391391+389392 if (test_bit(BTRFS_FS_STATE_ERROR, &root->fs_info->fs_state))390393 return ERR_PTR(-EROFS);391394392392- if (current->journal_info &&393393- current->journal_info != (void *)BTRFS_SEND_TRANS_STUB) {395395+ if (current->journal_info) {394396 WARN_ON(type & TRANS_EXTWRITERS);395397 h = current->journal_info;396398 h->use_count++;···493491 smp_mb();494492 if (cur_trans->state >= TRANS_STATE_BLOCKED &&495493 may_wait_transaction(root, type)) {494494+ current->journal_info = h;496495 btrfs_commit_transaction(h, root);497496 goto again;498497 }···16181615 int ret;1619161616201617 ret = btrfs_run_delayed_items(trans, root);16211621- /*16221622- * running the delayed items may have added new refs. account16231623- * them now so that they hinder processing of more delayed refs16241624- * as little as possible.16251625- */16261618 if (ret)16271619 return ret;16281620
+25-5
fs/btrfs/volumes.c
···4040#include "rcu-string.h"4141#include "math.h"4242#include "dev-replace.h"4343+#include "sysfs.h"43444445static int init_first_rw_device(struct btrfs_trans_handle *trans,4546 struct btrfs_root *root,···555554 * This is ok to do without rcu read locked because we hold the556555 * uuid mutex so nothing we touch in here is going to disappear.557556 */558558- name = rcu_string_strdup(orig_dev->name->str, GFP_NOFS);559559- if (!name) {560560- kfree(device);561561- goto error;557557+ if (orig_dev->name) {558558+ name = rcu_string_strdup(orig_dev->name->str, GFP_NOFS);559559+ if (!name) {560560+ kfree(device);561561+ goto error;562562+ }563563+ rcu_assign_pointer(device->name, name);562564 }563563- rcu_assign_pointer(device->name, name);564565565566 list_add(&device->dev_list, &fs_devices->devices);566567 device->fs_devices = fs_devices;···16831680 if (device->bdev)16841681 device->fs_devices->open_devices--;1685168216831683+ /* remove sysfs entry */16841684+ btrfs_kobj_rm_device(root->fs_info, device);16851685+16861686 call_rcu(&device->rcu, free_device);1687168716881688 num_devices = btrfs_super_num_devices(root->fs_info->super_copy) - 1;···21492143 total_bytes = btrfs_super_num_devices(root->fs_info->super_copy);21502144 btrfs_set_super_num_devices(root->fs_info->super_copy,21512145 total_bytes + 1);21462146+21472147+ /* add sysfs device entry */21482148+ btrfs_kobj_add_device(root->fs_info, device);21492149+21522150 mutex_unlock(&root->fs_info->fs_devices->device_list_mutex);2153215121542152 if (seeding_dev) {21532153+ char fsid_buf[BTRFS_UUID_UNPARSED_SIZE];21552154 ret = init_first_rw_device(trans, root, device);21562155 if (ret) {21572156 btrfs_abort_transaction(trans, root, ret);···21672156 btrfs_abort_transaction(trans, root, ret);21682157 goto error_trans;21692158 }21592159+21602160+ /* Sprouting would change fsid of the mounted root,21612161+ * so rename the fsid on the sysfs21622162+ */21632163+ snprintf(fsid_buf, BTRFS_UUID_UNPARSED_SIZE, "%pU",21642164+ root->fs_info->fsid);21652165+ if (kobject_rename(&root->fs_info->super_kobj, fsid_buf))21662166+ goto error_trans;21702167 } else {21712168 ret = btrfs_add_device(trans, root, device);21722169 if (ret) {···22242205 unlock_chunks(root);22252206 btrfs_end_transaction(trans, root);22262207 rcu_string_free(device->name);22082208+ btrfs_kobj_rm_device(root->fs_info, device);22272209 kfree(device);22282210error:22292211 blkdev_put(bdev, FMODE_EXCL);

+1-1
fs/btrfs/zlib.c
···136136 if (workspace->def_strm.total_in > 8192 &&137137 workspace->def_strm.total_in <138138 workspace->def_strm.total_out) {139139- ret = -EIO;139139+ ret = -E2BIG;140140 goto out;141141 }142142 /* we need another page for writing out. Test this
+16
fs/ext4/balloc.c
···194194 if (!ext4_group_desc_csum_verify(sb, block_group, gdp)) {195195 ext4_error(sb, "Checksum bad for group %u", block_group);196196 grp = ext4_get_group_info(sb, block_group);197197+ if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp))198198+ percpu_counter_sub(&sbi->s_freeclusters_counter,199199+ grp->bb_free);197200 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state);201201+ if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) {202202+ int count;203203+ count = ext4_free_inodes_count(sb, gdp);204204+ percpu_counter_sub(&sbi->s_freeinodes_counter,205205+ count);206206+ }198207 set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state);199208 return;200209 }···368359{369360 ext4_fsblk_t blk;370361 struct ext4_group_info *grp = ext4_get_group_info(sb, block_group);362362+ struct ext4_sb_info *sbi = EXT4_SB(sb);371363372364 if (buffer_verified(bh))373365 return;···379369 ext4_unlock_group(sb, block_group);380370 ext4_error(sb, "bg %u: block %llu: invalid block bitmap",381371 block_group, blk);372372+ if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp))373373+ percpu_counter_sub(&sbi->s_freeclusters_counter,374374+ grp->bb_free);382375 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state);383376 return;384377 }···389376 desc, bh))) {390377 ext4_unlock_group(sb, block_group);391378 ext4_error(sb, "bg %u: bad block bitmap checksum", block_group);379379+ if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp))380380+ percpu_counter_sub(&sbi->s_freeclusters_counter,381381+ grp->bb_free);392382 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state);393383 return;394384 }
+2-2
fs/ext4/extents_status.c
···966966 continue;967967 }968968969969- if (ei->i_es_lru_nr == 0 || ei == locked_ei)969969+ if (ei->i_es_lru_nr == 0 || ei == locked_ei ||970970+ !write_trylock(&ei->i_es_lock))970971 continue;971972972972- write_lock(&ei->i_es_lock);973973 shrunk = __es_try_to_reclaim_extents(ei, nr_to_scan);974974 if (ei->i_es_lru_nr == 0)975975 list_del_init(&ei->i_es_lru);
+30-7
fs/ext4/ialloc.c
···7171 struct ext4_group_desc *gdp)7272{7373 struct ext4_group_info *grp;7474+ struct ext4_sb_info *sbi = EXT4_SB(sb);7475 J_ASSERT_BH(bh, buffer_locked(bh));75767677 /* If checksum is bad mark all blocks and inodes use to prevent···7978 if (!ext4_group_desc_csum_verify(sb, block_group, gdp)) {8079 ext4_error(sb, "Checksum bad for group %u", block_group);8180 grp = ext4_get_group_info(sb, block_group);8181+ if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp))8282+ percpu_counter_sub(&sbi->s_freeclusters_counter,8383+ grp->bb_free);8284 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state);8585+ if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) {8686+ int count;8787+ count = ext4_free_inodes_count(sb, gdp);8888+ percpu_counter_sub(&sbi->s_freeinodes_counter,8989+ count);9090+ }8391 set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state);8492 return 0;8593 }···126116 struct buffer_head *bh = NULL;127117 ext4_fsblk_t bitmap_blk;128118 struct ext4_group_info *grp;119119+ struct ext4_sb_info *sbi = EXT4_SB(sb);129120130121 desc = ext4_get_group_desc(sb, block_group, NULL);131122 if (!desc)···196185 ext4_error(sb, "Corrupt inode bitmap - block_group = %u, "197186 "inode_bitmap = %llu", block_group, bitmap_blk);198187 grp = ext4_get_group_info(sb, block_group);188188+ if (!EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) {189189+ int count;190190+ count = ext4_free_inodes_count(sb, desc);191191+ percpu_counter_sub(&sbi->s_freeinodes_counter,192192+ count);193193+ }199194 set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state);200195 return NULL;201196 }···338321 fatal = err;339322 } else {340323 ext4_error(sb, "bit already cleared for inode %lu", ino);324324+ if (gdp && !EXT4_MB_GRP_IBITMAP_CORRUPT(grp)) {325325+ int count;326326+ count = ext4_free_inodes_count(sb, gdp);327327+ percpu_counter_sub(&sbi->s_freeinodes_counter,328328+ count);329329+ }341330 set_bit(EXT4_GROUP_INFO_IBITMAP_CORRUPT_BIT, &grp->bb_state);342331 }343332···874851 goto out;875852 }876853854854+ BUFFER_TRACE(group_desc_bh, "get_write_access");855855+ err = ext4_journal_get_write_access(handle, group_desc_bh);856856+ if (err) {857857+ ext4_std_error(sb, err);858858+ goto out;859859+ }860860+877861 /* We may have to initialize the block bitmap if it isn't already */878862 if (ext4_has_group_desc_csum(sb) &&879863 gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {···915885 ext4_std_error(sb, err);916886 goto out;917887 }918918- }919919-920920- BUFFER_TRACE(group_desc_bh, "get_write_access");921921- err = ext4_journal_get_write_access(handle, group_desc_bh);922922- if (err) {923923- ext4_std_error(sb, err);924924- goto out;925888 }926889927890 /* Update the relevant bg descriptor fields */
+19-5
fs/ext4/indirect.c
···389389 return 0;390390failed:391391 for (; i >= 0; i--) {392392- if (i != indirect_blks && branch[i].bh)392392+ /*393393+ * We want to ext4_forget() only freshly allocated indirect394394+ * blocks. Buffer for new_blocks[i-1] is at branch[i].bh and395395+ * buffer at branch[0].bh is indirect block / inode already396396+ * existing before ext4_alloc_branch() was called.397397+ */398398+ if (i > 0 && i != indirect_blks && branch[i].bh)393399 ext4_forget(handle, 1, inode, branch[i].bh,394400 branch[i].bh->b_blocknr);395401 ext4_free_blocks(handle, inode, NULL, new_blocks[i],···13161310 blk = *i_data;13171311 if (level > 0) {13181312 ext4_lblk_t first2;13131313+ ext4_lblk_t count2;13141314+13191315 bh = sb_bread(inode->i_sb, le32_to_cpu(blk));13201316 if (!bh) {13211317 EXT4_ERROR_INODE_BLOCK(inode, le32_to_cpu(blk),13221318 "Read failure");13231319 return -EIO;13241320 }13251325- first2 = (first > offset) ? first - offset : 0;13211321+ if (first > offset) {13221322+ first2 = first - offset;13231323+ count2 = count;13241324+ } else {13251325+ first2 = 0;13261326+ count2 = count - (offset - first);13271327+ }13261328 ret = free_hole_blocks(handle, inode, bh,13271329 (__le32 *)bh->b_data, level - 1,13281328- first2, count - offset,13301330+ first2, count2,13291331 inode->i_sb->s_blocksize >> 2);13301332 if (ret) {13311333 brelse(bh);···13431329 if (level == 0 ||13441330 (bh && all_zeroes((__le32 *)bh->b_data,13451331 (__le32 *)bh->b_data + addr_per_block))) {13461346- ext4_free_data(handle, inode, parent_bh, &blk, &blk+1);13471347- *i_data = 0;13321332+ ext4_free_data(handle, inode, parent_bh,13331333+ i_data, i_data + 1);13481334 }13491335 brelse(bh);13501336 bh = NULL;
+10-2
fs/ext4/mballoc.c
···722722 void *buddy, void *bitmap, ext4_group_t group)723723{724724 struct ext4_group_info *grp = ext4_get_group_info(sb, group);725725+ struct ext4_sb_info *sbi = EXT4_SB(sb);725726 ext4_grpblk_t max = EXT4_CLUSTERS_PER_GROUP(sb);726727 ext4_grpblk_t i = 0;727728 ext4_grpblk_t first;···752751753752 if (free != grp->bb_free) {754753 ext4_grp_locked_error(sb, group, 0, 0,755755- "%u clusters in bitmap, %u in gd; "756756- "block bitmap corrupt.",754754+ "block bitmap and bg descriptor "755755+ "inconsistent: %u vs %u free clusters",757756 free, grp->bb_free);758757 /*759758 * If we intend to continue, we consider group descriptor760759 * corrupt and update bb_free using bitmap value761760 */762761 grp->bb_free = free;762762+ if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp))763763+ percpu_counter_sub(&sbi->s_freeclusters_counter,764764+ grp->bb_free);763765 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT, &grp->bb_state);764766 }765767 mb_set_largest_free_order(sb, grp);···14351431 right_is_free = !mb_test_bit(last + 1, e4b->bd_bitmap);1436143214371433 if (unlikely(block != -1)) {14341434+ struct ext4_sb_info *sbi = EXT4_SB(sb);14381435 ext4_fsblk_t blocknr;1439143614401437 blocknr = ext4_group_first_block_no(sb, e4b->bd_group);···14461441 "freeing already freed block "14471442 "(bit %u); block bitmap corrupt.",14481443 block);14441444+ if (!EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info))14451445+ percpu_counter_sub(&sbi->s_freeclusters_counter,14461446+ e4b->bd_info->bb_free);14491447 /* Mark the block group as corrupt. */14501448 set_bit(EXT4_GROUP_INFO_BBITMAP_CORRUPT_BIT,14511449 &e4b->bd_info->bb_state);
+28-32
fs/ext4/super.c
···15251525 arg = JBD2_DEFAULT_MAX_COMMIT_AGE;15261526 sbi->s_commit_interval = HZ * arg;15271527 } else if (token == Opt_max_batch_time) {15281528- if (arg == 0)15291529- arg = EXT4_DEF_MAX_BATCH_TIME;15301528 sbi->s_max_batch_time = arg;15311529 } else if (token == Opt_min_batch_time) {15321530 sbi->s_min_batch_time = arg;···28072809 es = sbi->s_es;2808281028092811 if (es->s_error_count)28102810- ext4_msg(sb, KERN_NOTICE, "error count: %u",28122812+ /* fsck newer than v1.41.13 is needed to clean this condition. */28132813+ ext4_msg(sb, KERN_NOTICE, "error count since last fsck: %u",28112814 le32_to_cpu(es->s_error_count));28122815 if (es->s_first_error_time) {28132813- printk(KERN_NOTICE "EXT4-fs (%s): initial error at %u: %.*s:%d",28162816+ printk(KERN_NOTICE "EXT4-fs (%s): initial error at time %u: %.*s:%d",28142817 sb->s_id, le32_to_cpu(es->s_first_error_time),28152818 (int) sizeof(es->s_first_error_func),28162819 es->s_first_error_func,···28252826 printk("\n");28262827 }28272828 if (es->s_last_error_time) {28282828- printk(KERN_NOTICE "EXT4-fs (%s): last error at %u: %.*s:%d",28292829+ printk(KERN_NOTICE "EXT4-fs (%s): last error at time %u: %.*s:%d",28292830 sb->s_id, le32_to_cpu(es->s_last_error_time),28302831 (int) sizeof(es->s_last_error_func),28312832 es->s_last_error_func,···38793880 goto failed_mount2;38803881 }38813882 }38823882-38833883- /*38843884- * set up enough so that it can read an inode,38853885- * and create new inode for buddy allocator38863886- */38873887- sbi->s_gdb_count = db_count;38883888- if (!test_opt(sb, NOLOAD) &&38893889- EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL))38903890- sb->s_op = &ext4_sops;38913891- else38923892- sb->s_op = &ext4_nojournal_sops;38933893-38943894- ext4_ext_init(sb);38953895- err = ext4_mb_init(sb);38963896- if (err) {38973897- ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)",38983898- err);38993899- goto failed_mount2;39003900- }39013901-39023883 if (!ext4_check_descriptors(sb, 
&first_not_zeroed)) {39033884 ext4_msg(sb, KERN_ERR, "group descriptors corrupted!");39043904- goto failed_mount2a;38853885+ goto failed_mount2;39053886 }39063887 if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG))39073888 if (!ext4_fill_flex_info(sb)) {39083889 ext4_msg(sb, KERN_ERR,39093890 "unable to initialize "39103891 "flex_bg meta info!");39113911- goto failed_mount2a;38923892+ goto failed_mount2;39123893 }3913389438953895+ sbi->s_gdb_count = db_count;39143896 get_random_bytes(&sbi->s_next_generation, sizeof(u32));39153897 spin_lock_init(&sbi->s_next_gen_lock);39163898···39263946 sbi->s_stripe = ext4_get_stripe_size(sbi);39273947 sbi->s_extent_max_zeroout_kb = 32;3928394839493949+ /*39503950+ * set up enough so that it can read an inode39513951+ */39523952+ if (!test_opt(sb, NOLOAD) &&39533953+ EXT4_HAS_COMPAT_FEATURE(sb, EXT4_FEATURE_COMPAT_HAS_JOURNAL))39543954+ sb->s_op = &ext4_sops;39553955+ else39563956+ sb->s_op = &ext4_nojournal_sops;39293957 sb->s_export_op = &ext4_export_ops;39303958 sb->s_xattr = ext4_xattr_handlers;39313959#ifdef CONFIG_QUOTA···41234135 if (err) {41244136 ext4_msg(sb, KERN_ERR, "failed to reserve %llu clusters for "41254137 "reserved pool", ext4_calculate_resv_clusters(sb));41264126- goto failed_mount5;41384138+ goto failed_mount4a;41274139 }4128414041294141 err = ext4_setup_system_zone(sb);41304142 if (err) {41314143 ext4_msg(sb, KERN_ERR, "failed to initialize system "41324144 "zone (%d)", err);41454145+ goto failed_mount4a;41464146+ }41474147+41484148+ ext4_ext_init(sb);41494149+ err = ext4_mb_init(sb);41504150+ if (err) {41514151+ ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)",41524152+ err);41334153 goto failed_mount5;41344154 }41354155···42144218failed_mount7:42154219 ext4_unregister_li_request(sb);42164220failed_mount6:42174217- ext4_release_system_zone(sb);42214221+ ext4_mb_release(sb);42184222failed_mount5:42234223+ ext4_ext_release(sb);42244224+ 
ext4_release_system_zone(sb);42254225+failed_mount4a:42194226 dput(sb->s_root);42204227 sb->s_root = NULL;42214228failed_mount4:···42424243 percpu_counter_destroy(&sbi->s_extent_cache_cnt);42434244 if (sbi->s_mmp_tsk)42444245 kthread_stop(sbi->s_mmp_tsk);42454245-failed_mount2a:42464246- ext4_mb_release(sb);42474246failed_mount2:42484247 for (i = 0; i < db_count; i++)42494248 brelse(sbi->s_group_desc[i]);42504249 ext4_kvfree(sbi->s_group_desc);42514250failed_mount:42524252- ext4_ext_release(sb);42534251 if (sbi->s_chksum_driver)42544252 crypto_free_shash(sbi->s_chksum_driver);42554253 if (sbi->s_proc) {
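The ext4 hunks above mostly reshuffle setup steps (mballoc, extent, and system-zone init) so that the `failed_mount*` labels unwind them in exactly the reverse order they succeeded. The idiom can be sketched in plain C; all names below are hypothetical stand-ins, not ext4 functions:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical init/release pairs standing in for ext4_mb_init()/
 * ext4_mb_release() and friends; each records a letter so the
 * teardown order is observable. */
static char trace[16];

static void note(char c)
{
    size_t n = strlen(trace);
    trace[n] = c;
    trace[n + 1] = '\0';
}

static bool init_a(void)    { note('A'); return true; }
static void release_a(void) { note('a'); }
static bool init_b(void)    { note('B'); return true; }
static void release_b(void) { note('b'); }
static bool init_c(bool fail) { if (fail) return false; note('C'); return true; }

/* Mirrors the failed_mount label chain: each label undoes only the
 * steps that had already succeeded, in reverse order. */
static const char *mount_sim(bool fail_c)
{
    trace[0] = '\0';
    if (!init_a())
        goto fail;
    if (!init_b())
        goto fail_a;
    if (!init_c(fail_c))
        goto fail_b;
    return trace;            /* success path: "ABC" */
fail_b:
    release_b();
fail_a:
    release_a();
fail:
    return trace;
}
```

The point of the reordering in the patch is precisely that each `failed_mountN` label must tear down only what was initialized before the jump, which forces init order and label order to stay mirror images.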
+18-5
fs/f2fs/data.c
···608608 * b. do not use extent cache for better performance609609 * c. give the block addresses to blockdev610610 */611611-static int get_data_block(struct inode *inode, sector_t iblock,612612- struct buffer_head *bh_result, int create)611611+static int __get_data_block(struct inode *inode, sector_t iblock,612612+ struct buffer_head *bh_result, int create, bool fiemap)613613{614614 struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);615615 unsigned int blkbits = inode->i_sb->s_blocksize_bits;···637637 err = 0;638638 goto unlock_out;639639 }640640- if (dn.data_blkaddr == NEW_ADDR)640640+ if (dn.data_blkaddr == NEW_ADDR && !fiemap)641641 goto put_out;642642643643 if (dn.data_blkaddr != NULL_ADDR) {···671671 err = 0;672672 goto unlock_out;673673 }674674- if (dn.data_blkaddr == NEW_ADDR)674674+ if (dn.data_blkaddr == NEW_ADDR && !fiemap)675675 goto put_out;676676677677 end_offset = ADDRS_PER_PAGE(dn.node_page, F2FS_I(inode));···708708 return err;709709}710710711711+static int get_data_block(struct inode *inode, sector_t iblock,712712+ struct buffer_head *bh_result, int create)713713+{714714+ return __get_data_block(inode, iblock, bh_result, create, false);715715+}716716+717717+static int get_data_block_fiemap(struct inode *inode, sector_t iblock,718718+ struct buffer_head *bh_result, int create)719719+{720720+ return __get_data_block(inode, iblock, bh_result, create, true);721721+}722722+711723int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,712724 u64 start, u64 len)713725{714714- return generic_block_fiemap(inode, fieinfo, start, len, get_data_block);726726+ return generic_block_fiemap(inode, fieinfo,727727+ start, len, get_data_block_fiemap);715728}716729717730static int f2fs_read_data_page(struct file *file, struct page *page)
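The f2fs change above is a common refactor: the shared worker grows a `bool fiemap` parameter and two thin wrappers fix the flag for each caller, so `generic_block_fiemap()` can see `NEW_ADDR` (allocated-but-unwritten) blocks while normal I/O still treats them as unmapped. A minimal stand-alone sketch of the pattern, with hypothetical names:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical block states modeled on the hunk: NULL_ADDR is a hole,
 * NEW_ADDR is allocated but not yet written. */
enum { NULL_ADDR = 0, NEW_ADDR = -1 };

static int __lookup_block(int blkaddr, bool fiemap)
{
    /* Ordinary reads stop at NEW_ADDR blocks; fiemap still reports them. */
    if (blkaddr == NEW_ADDR && !fiemap)
        return 0;                      /* treat as unmapped */
    if (blkaddr == NULL_ADDR)
        return 0;                      /* hole */
    return 1;                          /* mapped */
}

/* Thin wrappers matching the get_data_block()/get_data_block_fiemap()
 * split: callers never pass the flag directly. */
static int lookup_block(int blkaddr)        { return __lookup_block(blkaddr, false); }
static int lookup_block_fiemap(int blkaddr) { return __lookup_block(blkaddr, true);  }
```

Keeping the flag out of the public callback signatures matters here because both wrappers must match the `get_block_t` prototype that `generic_block_fiemap()` and the page-reading paths expect.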
+1-1
fs/f2fs/dir.c
···376376377377put_error:378378 f2fs_put_page(page, 1);379379+error:379380 /* once the failed inode becomes a bad inode, i_mode is S_IFREG */380381 truncate_inode_pages(&inode->i_data, 0);381382 truncate_blocks(inode, 0);382383 remove_dirty_dir_inode(inode);383383-error:384384 remove_inode_page(inode);385385 return ERR_PTR(err);386386}
+2-4
fs/f2fs/f2fs.h
···342342 struct dirty_seglist_info *dirty_info; /* dirty segment information */343343 struct curseg_info *curseg_array; /* active segment information */344344345345- struct list_head wblist_head; /* list of under-writeback pages */346346- spinlock_t wblist_lock; /* lock for checkpoint */347347-348345 block_t seg0_blkaddr; /* block address of 0'th segment */349346 block_t main_blkaddr; /* start block address of main area */350347 block_t ssa_blkaddr; /* start block address of SSA area */···641644 */642645static inline int check_nid_range(struct f2fs_sb_info *sbi, nid_t nid)643646{644644- WARN_ON((nid >= NM_I(sbi)->max_nid));647647+ if (unlikely(nid < F2FS_ROOT_INO(sbi)))648648+ return -EINVAL;645649 if (unlikely(nid >= NM_I(sbi)->max_nid))646650 return -EINVAL;647651 return 0;
···478478 {OPT_ERR, NULL}479479};480480481481+static int fuse_match_uint(substring_t *s, unsigned int *res)482482+{483483+ int err = -ENOMEM;484484+ char *buf = match_strdup(s);485485+ if (buf) {486486+ err = kstrtouint(buf, 10, res);487487+ kfree(buf);488488+ }489489+ return err;490490+}491491+481492static int parse_fuse_opt(char *opt, struct fuse_mount_data *d, int is_bdev)482493{483494 char *p;···499488 while ((p = strsep(&opt, ",")) != NULL) {500489 int token;501490 int value;491491+ unsigned uv;502492 substring_t args[MAX_OPT_ARGS];503493 if (!*p)504494 continue;···523511 break;524512525513 case OPT_USER_ID:526526- if (match_int(&args[0], &value))514514+ if (fuse_match_uint(&args[0], &uv))527515 return 0;528528- d->user_id = make_kuid(current_user_ns(), value);516516+ d->user_id = make_kuid(current_user_ns(), uv);529517 if (!uid_valid(d->user_id))530518 return 0;531519 d->user_id_present = 1;532520 break;533521534522 case OPT_GROUP_ID:535535- if (match_int(&args[0], &value))523523+ if (fuse_match_uint(&args[0], &uv))536524 return 0;537537- d->group_id = make_kgid(current_user_ns(), value);525525+ d->group_id = make_kgid(current_user_ns(), uv);538526 if (!gid_valid(d->group_id))539527 return 0;540528 d->group_id_present = 1;···1018100610191007 sb->s_flags &= ~(MS_NOSEC | MS_I_VERSION);1020100810211021- if (!parse_fuse_opt((char *) data, &d, is_bdev))10091009+ if (!parse_fuse_opt(data, &d, is_bdev))10221010 goto err;1023101110241012 if (is_bdev) {
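The fuse hunk exists because `match_int()` parses through a signed `int`, so uids/gids above INT_MAX (e.g. 4294967294, a common "nobody" value with 32-bit `uid_t`) could not be passed as mount options. Below is a userspace approximation of `fuse_match_uint()` built on `strtoul()` rather than the kernel's `kstrtouint()`; the exact rejection rules (leading signs, blanks) are this sketch's assumptions:

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Userspace stand-in for fuse_match_uint(): parse a base-10 value into
 * a full-range unsigned int. Unlike a signed-int parser, this accepts
 * values between INT_MAX and UINT_MAX. */
static int match_uint(const char *s, unsigned int *res)
{
    char *end;
    unsigned long v;

    if (*s < '0' || *s > '9')          /* this sketch rejects signs/blanks */
        return -EINVAL;
    errno = 0;
    v = strtoul(s, &end, 10);
    if (errno != 0 || *end != '\0' || v > UINT_MAX)
        return -EINVAL;
    *res = (unsigned int)v;
    return 0;
}

/* Convenience predicate: does 's' parse to exactly 'want'? */
static int parses_to(const char *s, unsigned int want)
{
    unsigned int u = 0;

    return match_uint(s, &u) == 0 && u == want;
}
```

The key behavioral difference from `match_int()` is visible in the first test below: the full 32-bit unsigned range round-trips.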
+4-1
fs/jbd2/transaction.c
···15881588 * to perform a synchronous write. We do this to detect the15891589 * case where a single process is doing a stream of sync15901590 * writes. No point in waiting for joiners in that case.15911591+ *15921592+ * Setting max_batch_time to 0 disables this completely.15911593 */15921594 pid = current->pid;15931593- if (handle->h_sync && journal->j_last_sync_writer != pid) {15951595+ if (handle->h_sync && journal->j_last_sync_writer != pid &&15961596+ journal->j_max_batch_time) {15941597 u64 commit_time, trans_time;1595159815961599 journal->j_last_sync_writer = pid;
+55-14
fs/kernfs/file.c
···3939 struct list_head files; /* goes through kernfs_open_file.list */4040};41414242+/*4343+ * kernfs_notify() may be called from any context and bounces notifications4444+ * through a work item. To minimize space overhead in kernfs_node, the4545+ * pending queue is implemented as a singly linked list of kernfs_nodes.4646+ * The list is terminated with the self pointer so that whether a4747+ * kernfs_node is on the list or not can be determined by testing the next4848+ * pointer for NULL.4949+ */5050+#define KERNFS_NOTIFY_EOL ((void *)&kernfs_notify_list)5151+5252+static DEFINE_SPINLOCK(kernfs_notify_lock);5353+static struct kernfs_node *kernfs_notify_list = KERNFS_NOTIFY_EOL;5454+4255static struct kernfs_open_file *kernfs_of(struct file *file)4356{4457 return ((struct seq_file *)file->private_data)->private;···796783 return DEFAULT_POLLMASK|POLLERR|POLLPRI;797784}798785799799-/**800800- * kernfs_notify - notify a kernfs file801801- * @kn: file to notify802802- *803803- * Notify @kn such that poll(2) on @kn wakes up.804804- */805805-void kernfs_notify(struct kernfs_node *kn)786786+static void kernfs_notify_workfn(struct work_struct *work)806787{807807- struct kernfs_root *root = kernfs_root(kn);788788+ struct kernfs_node *kn;808789 struct kernfs_open_node *on;809790 struct kernfs_super_info *info;810810- unsigned long flags;811811-812812- if (WARN_ON(kernfs_type(kn) != KERNFS_FILE))791791+repeat:792792+ /* pop one off the notify_list */793793+ spin_lock_irq(&kernfs_notify_lock);794794+ kn = kernfs_notify_list;795795+ if (kn == KERNFS_NOTIFY_EOL) {796796+ spin_unlock_irq(&kernfs_notify_lock);813797 return;798798+ }799799+ kernfs_notify_list = kn->attr.notify_next;800800+ kn->attr.notify_next = NULL;801801+ spin_unlock_irq(&kernfs_notify_lock);814802815803 /* kick poll */816816- spin_lock_irqsave(&kernfs_open_node_lock, flags);804804+ spin_lock_irq(&kernfs_open_node_lock);817805818806 on = kn->attr.open;819807 if (on) {···822808 
wake_up_interruptible(&on->poll);823809 }824810825825- spin_unlock_irqrestore(&kernfs_open_node_lock, flags);811811+ spin_unlock_irq(&kernfs_open_node_lock);826812827813 /* kick fsnotify */828814 mutex_lock(&kernfs_mutex);829815830830- list_for_each_entry(info, &root->supers, node) {816816+ list_for_each_entry(info, &kernfs_root(kn)->supers, node) {831817 struct inode *inode;832818 struct dentry *dentry;833819···847833 }848834849835 mutex_unlock(&kernfs_mutex);836836+ kernfs_put(kn);837837+ goto repeat;838838+}839839+840840+/**841841+ * kernfs_notify - notify a kernfs file842842+ * @kn: file to notify843843+ *844844+ * Notify @kn such that poll(2) on @kn wakes up. May be called from any845845+ * context.846846+ */847847+void kernfs_notify(struct kernfs_node *kn)848848+{849849+ static DECLARE_WORK(kernfs_notify_work, kernfs_notify_workfn);850850+ unsigned long flags;851851+852852+ if (WARN_ON(kernfs_type(kn) != KERNFS_FILE))853853+ return;854854+855855+ spin_lock_irqsave(&kernfs_notify_lock, flags);856856+ if (!kn->attr.notify_next) {857857+ kernfs_get(kn);858858+ kn->attr.notify_next = kernfs_notify_list;859859+ kernfs_notify_list = kn;860860+ schedule_work(&kernfs_notify_work);861861+ }862862+ spin_unlock_irqrestore(&kernfs_notify_lock, flags);850863}851864EXPORT_SYMBOL_GPL(kernfs_notify);852865
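The queueing scheme described in the new kernfs comment block can be reproduced in miniature: the pending list is terminated by a sentinel rather than NULL, so `next == NULL` doubles as "not currently queued" and enqueueing becomes naturally idempotent. A minimal sketch (locking omitted; the kernel holds `kernfs_notify_lock` around both operations):

```c
#include <assert.h>
#include <stddef.h>

/* Miniature of the kernfs_notify_list idiom: the list terminator is a
 * sentinel node, not NULL, so a NULL next pointer reliably means
 * "not queued". */
struct node { int id; struct node *next; };

static struct node eol;                 /* plays the role of KERNFS_NOTIFY_EOL */
static struct node *pending = &eol;

static int enqueue(struct node *n)      /* returns 1 if newly queued */
{
    if (n->next)                        /* already on the list: idempotent */
        return 0;
    n->next = pending;
    pending = n;
    return 1;
}

static struct node *pop(void)           /* LIFO pop, NULL when drained */
{
    struct node *n = pending;

    if (n == &eol)
        return NULL;
    pending = n->next;
    n->next = NULL;                     /* eligible for re-queueing */
    return n;
}

static struct node n1 = { 1, NULL }, n2 = { 2, NULL };
```

This is why the patch can get away with a single intrusive pointer in `kernfs_node` instead of a `list_head`: membership is testable from the node itself, and `schedule_work()` is only needed when a node transitions from unqueued to queued.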
+30
fs/kernfs/mount.c
···211211 kernfs_put(root_kn);212212}213213214214+/**215215+ * kernfs_pin_sb: try to pin the superblock associated with a kernfs_root216216+ * @root: the kernfs_root in question217217+ * @ns: the namespace tag218218+ *219219+ * Pin the superblock so the superblock won't be destroyed in subsequent220220+ * operations. This can be used to block ->kill_sb() which may be useful221221+ * for kernfs users which dynamically manage superblocks.222222+ *223223+ * Returns NULL if there's no superblock associated with this kernfs_root, or224224+ * -EINVAL if the superblock is being freed.225225+ */226226+struct super_block *kernfs_pin_sb(struct kernfs_root *root, const void *ns)227227+{228228+ struct kernfs_super_info *info;229229+ struct super_block *sb = NULL;230230+231231+ mutex_lock(&kernfs_mutex);232232+ list_for_each_entry(info, &root->supers, node) {233233+ if (info->ns == ns) {234234+ sb = info->sb;235235+ if (!atomic_inc_not_zero(&info->sb->s_active))236236+ sb = ERR_PTR(-EINVAL);237237+ break;238238+ }239239+ }240240+ mutex_unlock(&kernfs_mutex);241241+ return sb;242242+}243243+214244void __init kernfs_init(void)215245{216246 kernfs_node_cache = kmem_cache_create("kernfs_node_cache",
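`kernfs_pin_sb()` hinges on `atomic_inc_not_zero()`: a superblock whose `s_active` count has already reached zero is being torn down and must not be resurrected, so the pin fails instead. The primitive itself is just a compare-and-swap loop; here is a C11 sketch (not the kernel's implementation):

```c
#include <assert.h>
#include <stdatomic.h>

/* C11 sketch of atomic_inc_not_zero(): increment the refcount only if
 * it is still nonzero, i.e. the object is not already being destroyed. */
static int inc_not_zero(atomic_int *ref)
{
    int v = atomic_load(ref);

    while (v != 0) {
        /* On failure 'v' is reloaded with the current value, so a
         * concurrent drop to zero makes the loop exit and report 0. */
        if (atomic_compare_exchange_weak(ref, &v, v + 1))
            return 1;               /* pinned */
    }
    return 0;                       /* dying: caller sees ERR_PTR(-EINVAL) */
}

static atomic_int live_ref = 3;     /* object with active references */
static atomic_int dead_ref = 0;     /* object already in teardown */
```

The asymmetry is the whole point: a plain increment could turn a refcount that legitimately reached zero back into one, letting a caller use an object whose destructor is already running.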
···617617618618 switch (create->cr_type) {619619 case NF4LNK:620620- /* ugh! we have to null-terminate the linktext, or621621- * vfs_symlink() will choke. it is always safe to622622- * null-terminate by brute force, since at worst we623623- * will overwrite the first byte of the create namelen624624- * in the XDR buffer, which has already been extracted625625- * during XDR decode.626626- */627627- create->cr_linkname[create->cr_linklen] = 0;628628-629620 status = nfsd_symlink(rqstp, &cstate->current_fh,630621 create->cr_name, create->cr_namelen,631622 create->cr_linkname, create->cr_linklen,
+14-3
fs/nfsd/nfs4xdr.c
···600600 READ_BUF(4);601601 create->cr_linklen = be32_to_cpup(p++);602602 READ_BUF(create->cr_linklen);603603- SAVEMEM(create->cr_linkname, create->cr_linklen);603603+ /*604604+ * The VFS will want a null-terminated string, and605605+ * null-terminating in place isn't safe since this might606606+ * end on a page boundary:607607+ */608608+ create->cr_linkname =609609+ kmalloc(create->cr_linklen + 1, GFP_KERNEL);610610+ if (!create->cr_linkname)611611+ return nfserr_jukebox;612612+ memcpy(create->cr_linkname, p, create->cr_linklen);613613+ create->cr_linkname[create->cr_linklen] = '\0';614614+ defer_free(argp, kfree, create->cr_linkname);604615 break;605616 case NF4BLK:606617 case NF4CHR:···26412630{26422631 __be32 *p;2643263226442644- p = xdr_reserve_space(xdr, 6);26332633+ p = xdr_reserve_space(xdr, 20);26452634 if (!p)26462635 return NULL;26472636 *p++ = htonl(2);···3278326732793268 wire_count = htonl(maxcount);32803269 write_bytes_to_xdr_buf(xdr->buf, length_offset, &wire_count, 4);32813281- xdr_truncate_encode(xdr, length_offset + 4 + maxcount);32703270+ xdr_truncate_encode(xdr, length_offset + 4 + ALIGN(maxcount, 4));32823271 if (maxcount & 3)32833272 write_bytes_to_xdr_buf(xdr->buf, length_offset + 4 + maxcount,32843273 &zero, 4 - (maxcount&3));
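The xdr change replaces in-place NUL termination (which the removed nfs4proc comment argued was safe, but which can land exactly on a page boundary) with a copy that reserves room for the terminator. The shape of the fix in portable C, with `nfserr_jukebox` becoming a plain NULL here:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Copy a length-counted, non-NUL-terminated wire string into a freshly
 * allocated C string. Writing '\0' one past the original buffer instead
 * could fault or clobber adjacent data when the string ends exactly at
 * a buffer (page) boundary. */
static char *dup_terminated(const void *data, size_t len)
{
    char *s = malloc(len + 1);

    if (!s)
        return NULL;        /* the nfsd code returns nfserr_jukebox here */
    memcpy(s, data, len);
    s[len] = '\0';
    return s;
}
```

The extra allocation costs a little, but it makes the resulting string safe to hand to any consumer that expects NUL termination, which is exactly what `vfs_symlink()` needs from `cr_linkname`.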
+2-20
fs/proc/stat.c
···184184185185static int stat_open(struct inode *inode, struct file *file)186186{187187- size_t size = 1024 + 128 * num_possible_cpus();188188- char *buf;189189- struct seq_file *m;190190- int res;187187+ size_t size = 1024 + 128 * num_online_cpus();191188192189 /* minimum size to display an interrupt count : 2 bytes */193190 size += 2 * nr_irqs;194194-195195- /* don't ask for more than the kmalloc() max size */196196- if (size > KMALLOC_MAX_SIZE)197197- size = KMALLOC_MAX_SIZE;198198- buf = kmalloc(size, GFP_KERNEL);199199- if (!buf)200200- return -ENOMEM;201201-202202- res = single_open(file, show_stat, NULL);203203- if (!res) {204204- m = file->private_data;205205- m->buf = buf;206206- m->size = ksize(buf);207207- } else208208- kfree(buf);209209- return res;191191+ return single_open_size(file, show_stat, NULL, size);210192}211193212194static const struct file_operations proc_stat_operations = {
+2
fs/quota/dquot.c
···702702 struct dquot *dquot;703703 unsigned long freed = 0;704704705705+ spin_lock(&dq_list_lock);705706 head = free_dquots.prev;706707 while (head != &free_dquots && sc->nr_to_scan) {707708 dquot = list_entry(head, struct dquot, dq_free);···714713 freed++;715714 head = free_dquots.prev;716715 }716716+ spin_unlock(&dq_list_lock);717717 return freed;718718}719719
···3232/* For use by hda_i915 driver */3333extern int i915_request_power_well(void);3434extern int i915_release_power_well(void);3535+extern int i915_get_cdclk_freq(void);35363637#endif /* _I915_POWERWELL_H_ */
···146146 * Declaration/definition used for per-CPU variables that must be read mostly.147147 */148148#define DECLARE_PER_CPU_READ_MOSTLY(type, name) \149149- DECLARE_PER_CPU_SECTION(type, name, "..readmostly")149149+ DECLARE_PER_CPU_SECTION(type, name, "..read_mostly")150150151151#define DEFINE_PER_CPU_READ_MOSTLY(type, name) \152152- DEFINE_PER_CPU_SECTION(type, name, "..readmostly")152152+ DEFINE_PER_CPU_SECTION(type, name, "..read_mostly")153153154154/*155155 * Intermodule exports for per-CPU variables. sparse forgets about
+3
include/linux/ptrace.h
···334334 * calling arch_ptrace_stop() when it would be superfluous. For example,335335 * if the thread has not been back to user mode since the last stop, the336336 * thread state might indicate that nothing needs to be done.337337+ *338338+ * This is guaranteed to be invoked once before a task stops for ptrace and339339+ * may include arch-specific operations necessary prior to a ptrace stop.337340 */338341#define arch_ptrace_stop_needed(code, info) (0)339342#endif
+4-4
include/linux/sched.h
···872872#define SD_NUMA 0x4000 /* cross-node balancing */873873874874#ifdef CONFIG_SCHED_SMT875875-static inline const int cpu_smt_flags(void)875875+static inline int cpu_smt_flags(void)876876{877877 return SD_SHARE_CPUCAPACITY | SD_SHARE_PKG_RESOURCES;878878}879879#endif880880881881#ifdef CONFIG_SCHED_MC882882-static inline const int cpu_core_flags(void)882882+static inline int cpu_core_flags(void)883883{884884 return SD_SHARE_PKG_RESOURCES;885885}886886#endif887887888888#ifdef CONFIG_NUMA889889-static inline const int cpu_numa_flags(void)889889+static inline int cpu_numa_flags(void)890890{891891 return SD_NUMA;892892}···999999bool cpus_share_cache(int this_cpu, int that_cpu);1000100010011001typedef const struct cpumask *(*sched_domain_mask_f)(int cpu);10021002-typedef const int (*sched_domain_flags_f)(void);10021002+typedef int (*sched_domain_flags_f)(void);1003100310041004#define SDTL_OVERLAP 0x0110051005
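The sched.h hunk drops `const` from the flag functions' return types. A `const` qualifier on a by-value return is meaningless (the value is copied out), draws `-Wignored-qualifiers` warnings, and here it also made the functions' type disagree with the `sched_domain_flags_f` pointer typedef they must match. A sketch of the corrected shape, with illustrative flag values rather than the real `SD_*` bits:

```c
#include <assert.h>

/* The function-pointer type the flag callbacks must match, as in the
 * corrected sched_domain_flags_f typedef: plain 'int', no qualifier. */
typedef int (*flags_fn)(void);

/* Illustrative flag bits, not the real SD_* values. */
enum { FLAG_SHARE_CAP = 0x1, FLAG_SHARE_PKG = 0x2 };

static int smt_flags(void)
{
    return FLAG_SHARE_CAP | FLAG_SHARE_PKG;
}

static int call_through(flags_fn fn)
{
    return fn();
}
```

With the qualifier gone, `smt_flags` converts to `flags_fn` cleanly on every compiler instead of depending on how a given front end treats the ignored `const`.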
-4
include/linux/socket.h
···305305/* IPX options */306306#define IPX_TYPE 1307307308308-extern int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov,309309- int offset, int len);310308extern int csum_partial_copy_fromiovecend(unsigned char *kdata, 311309 struct iovec *iov, 312310 int offset, ···313315 unsigned long nr_segs);314316315317extern int verify_iovec(struct msghdr *m, struct iovec *iov, struct sockaddr_storage *address, int mode);316316-extern int memcpy_toiovecend(const struct iovec *v, unsigned char *kdata,317317- int offset, int len);318318extern int move_addr_to_kernel(void __user *uaddr, int ulen, struct sockaddr_storage *kaddr);319319extern int put_cmsg(struct msghdr*, int level, int type, int len, void *data);320320
+17-2
include/linux/uio.h
···9494 return i->count;9595}96969797-static inline void iov_iter_truncate(struct iov_iter *i, size_t count)9797+/*9898+ * Cap the iov_iter by given limit; note that the second argument is9999+ * *not* the new size - it's an upper limit for such. Passing it a value100100+ * greater than the amount of data in iov_iter is fine - it'll just do101101+ * nothing in that case.102102+ */103103+static inline void iov_iter_truncate(struct iov_iter *i, u64 count)98104{105105+ /*106106+ * count doesn't have to fit in size_t - comparison extends both107107+ * operands to u64 here and any value that would be truncated by108108+ * conversion in assignment is by definition greater than all109109+ * values of size_t, including old i->count.110110+ */99111 if (i->count > count)100112 i->count = count;101113}···123111124112int memcpy_fromiovec(unsigned char *kdata, struct iovec *iov, int len);125113int memcpy_toiovec(struct iovec *iov, unsigned char *kdata, int len);126126-114114+int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov,115115+ int offset, int len);116116+int memcpy_toiovecend(const struct iovec *v, unsigned char *kdata,117117+ int offset, int len);127118128119#endif
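The new `iov_iter_truncate()` comment is subtle enough to deserve a concrete model: with a `size_t` parameter, a 64-bit limit would be silently truncated at the call site on 32-bit builds and could wrap to a *small* value, shrinking the iterator when it should have been left alone. Comparing in `u64` sidesteps that. A userspace model of the clamp:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Model of iov_iter_truncate(): 'count' is the iterator's current size,
 * 'limit' is an upper bound that may exceed SIZE_MAX. The comparison is
 * done in 64 bits, so an oversized limit correctly leaves 'count' alone
 * instead of being truncated by conversion first. */
static size_t clamp_count(size_t count, uint64_t limit)
{
    if ((uint64_t)count > limit)
        return (size_t)limit;   /* safe: limit < count, so it fits size_t */
    return count;
}
```

The conversion back to `size_t` only happens on the branch where `limit` is already known to be smaller than an existing `size_t` value, which is exactly the argument the in-tree comment makes.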
+3-1
include/linux/usb_usual.h
···7070 US_FLAG(NEEDS_CAP16, 0x00400000) \7171 /* cannot handle READ_CAPACITY_10 */ \7272 US_FLAG(IGNORE_UAS, 0x00800000) \7373- /* Device advertises UAS but it is broken */7373+ /* Device advertises UAS but it is broken */ \7474+ US_FLAG(BROKEN_FUA, 0x01000000) \7575+ /* Cannot handle FUA in WRITE or READ CDBs */ \74767577#define US_FLAG(name, value) US_FL_##name = value ,7678enum { US_DO_ALL_FLAGS };
-1
include/net/neighbour.h
···203203 void (*proxy_redo)(struct sk_buff *skb);204204 char *id;205205 struct neigh_parms parms;206206- /* HACK. gc_* should follow parms without a gap! */207206 int gc_interval;208207 int gc_thresh1;209208 int gc_thresh2;
···318318319319static inline unsigned scsi_transfer_length(struct scsi_cmnd *scmd)320320{321321- unsigned int xfer_len = blk_rq_bytes(scmd->request);321321+ unsigned int xfer_len = scsi_out(scmd)->length;322322 unsigned int prot_op = scsi_get_prot_op(scmd);323323 unsigned int sector_size = scmd->device->sector_size;324324
+1
include/scsi/scsi_device.h
···173173 unsigned is_visible:1; /* is the device visible in sysfs */174174 unsigned wce_default_on:1; /* Cache is ON by default */175175 unsigned no_dif:1; /* T10 PI (DIF) should be disabled */176176+ unsigned broken_fua:1; /* Don't set FUA bit */176177177178 atomic_t disk_events_disable_depth; /* disable depth for disk events */178179
···16481648 int flags, const char *unused_dev_name,16491649 void *data)16501650{16511651+ struct super_block *pinned_sb = NULL;16521652+ struct cgroup_subsys *ss;16511653 struct cgroup_root *root;16521654 struct cgroup_sb_opts opts;16531655 struct dentry *dentry;16541656 int ret;16571657+ int i;16551658 bool new_sb;1656165916571660 /*···16781675 cgroup_get(&root->cgrp);16791676 ret = 0;16801677 goto out_unlock;16781678+ }16791679+16801680+ /*16811681+ * Destruction of cgroup root is asynchronous, so subsystems may16821682+ * still be dying after the previous unmount. Let's drain the16831683+ * dying subsystems. We just need to ensure that the ones16841684+ * unmounted previously finish dying and don't care about new ones16851685+ * starting. Testing ref liveliness is good enough.16861686+ */16871687+ for_each_subsys(ss, i) {16881688+ if (!(opts.subsys_mask & (1 << i)) ||16891689+ ss->root == &cgrp_dfl_root)16901690+ continue;16911691+16921692+ if (!percpu_ref_tryget_live(&ss->root->cgrp.self.refcnt)) {16931693+ mutex_unlock(&cgroup_mutex);16941694+ msleep(10);16951695+ ret = restart_syscall();16961696+ goto out_free;16971697+ }16981698+ cgroup_put(&ss->root->cgrp);16811699 }1682170016831701 for_each_root(root) {···17411717 }1742171817431719 /*17441744- * A root's lifetime is governed by its root cgroup.17451745- * tryget_live failure indicate that the root is being17461746- * destroyed. Wait for destruction to complete so that the17471747- * subsystems are free. We can use wait_queue for the wait17481748- * but this path is super cold. Let's just sleep for a bit17491749- * and retry.17201720+ * We want to reuse @root whose lifetime is governed by its17211721+ * ->cgrp. Let's check whether @root is alive and keep it17221722+ * that way. 
As cgroup_kill_sb() can happen anytime, we17231723+ * want to block it by pinning the sb so that @root doesn't17241724+ * get killed before mount is complete.17251725+ *17261726+ * With the sb pinned, tryget_live can reliably indicate17271727+ * whether @root can be reused. If it's being killed,17281728+ * drain it. We can use wait_queue for the wait but this17291729+ * path is super cold. Let's just sleep a bit and retry.17501730 */17511751- if (!percpu_ref_tryget_live(&root->cgrp.self.refcnt)) {17311731+ pinned_sb = kernfs_pin_sb(root->kf_root, NULL);17321732+ if (IS_ERR(pinned_sb) ||17331733+ !percpu_ref_tryget_live(&root->cgrp.self.refcnt)) {17521734 mutex_unlock(&cgroup_mutex);17351735+ if (!IS_ERR_OR_NULL(pinned_sb))17361736+ deactivate_super(pinned_sb);17531737 msleep(10);17541738 ret = restart_syscall();17551739 goto out_free;···18021770 CGROUP_SUPER_MAGIC, &new_sb);18031771 if (IS_ERR(dentry) || !new_sb)18041772 cgroup_put(&root->cgrp);17731773+17741774+ /*17751775+ * If @pinned_sb, we're reusing an existing root and holding an17761776+ * extra ref on its sb. Mount is complete. Put the extra ref.17771777+ */17781778+ if (pinned_sb) {17791779+ WARN_ON(new_sb);17801780+ deactivate_super(pinned_sb);17811781+ }17821782+18051783 return dentry;18061784}18071785···3370332833713329 rcu_read_lock();33723330 css_for_each_child(child, css) {33733373- if (css->flags & CSS_ONLINE) {33313331+ if (child->flags & CSS_ONLINE) {33743332 ret = true;33753333 break;33763334 }
+19-1
kernel/cpuset.c
···1181118111821182int current_cpuset_is_being_rebound(void)11831183{11841184- return task_cs(current) == cpuset_being_rebound;11841184+ int ret;11851185+11861186+ rcu_read_lock();11871187+ ret = task_cs(current) == cpuset_being_rebound;11881188+ rcu_read_unlock();11891189+11901190+ return ret;11851191}1186119211871193static int update_relax_domain_level(struct cpuset *cs, s64 val)···16231617 * resources, wait for the previously scheduled operations before16241618 * proceeding, so that we don't end up keep removing tasks added16251619 * after execution capability is restored.16201620+ *16211621+ * cpuset_hotplug_work calls back into cgroup core via16221622+ * cgroup_transfer_tasks() and waiting for it from a cgroupfs16231623+ * operation like this one can lead to a deadlock through kernfs16241624+ * active_ref protection. Let's break the protection. Losing the16251625+ * protection is okay as we check whether @cs is online after16261626+ * grabbing cpuset_mutex anyway. This only happens on the legacy16271627+ * hierarchies.16261628 */16291629+ css_get(&cs->css);16301630+ kernfs_break_active_protection(of->kn);16271631 flush_work(&cpuset_hotplug_work);1628163216291633 mutex_lock(&cpuset_mutex);···16611645 free_trial_cpuset(trialcs);16621646out_unlock:16631647 mutex_unlock(&cpuset_mutex);16481648+ kernfs_unbreak_active_protection(of->kn);16491649+ css_put(&cs->css);16641650 return retval ?: nbytes;16651651}16661652
+1-1
kernel/events/core.c
···23202320 next_parent = rcu_dereference(next_ctx->parent_ctx);2321232123222322 /* If neither context have a parent context; they cannot be clones. */23232323- if (!parent && !next_parent)23232323+ if (!parent || !next_parent)23242324 goto unlock;2325232523262326 if (next_parent == ctx || next_ctx == parent || next_parent == parent) {
+3-3
kernel/events/uprobes.c
···846846{847847 int err;848848849849- if (!consumer_del(uprobe, uc)) /* WARN? */849849+ if (WARN_ON(!consumer_del(uprobe, uc)))850850 return;851851852852 err = register_for_each_vma(uprobe, NULL);···927927 int ret = -ENOENT;928928929929 uprobe = find_uprobe(inode, offset);930930- if (!uprobe)930930+ if (WARN_ON(!uprobe))931931 return ret;932932933933 down_write(&uprobe->register_rwsem);···952952 struct uprobe *uprobe;953953954954 uprobe = find_uprobe(inode, offset);955955- if (!uprobe)955955+ if (WARN_ON(!uprobe))956956 return;957957958958 down_write(&uprobe->register_rwsem);
+2-2
kernel/irq/irqdesc.c
···455455 */456456void irq_free_hwirqs(unsigned int from, int cnt)457457{458458- int i;458458+ int i, j;459459460460- for (i = from; cnt > 0; i++, cnt--) {460460+ for (i = from, j = cnt; j > 0; i++, j--) {461461 irq_set_status_flags(i, _IRQ_NOREQUEST | _IRQ_NOPROBE);462462 arch_teardown_hwirq(i);463463 }
+18-26
kernel/printk/printk.c
···14161416/*14171417 * Can we actually use the console at this time on this cpu?14181418 *14191419- * Console drivers may assume that per-cpu resources have been allocated. So14201420- * unless they're explicitly marked as being able to cope (CON_ANYTIME) don't14211421- * call them until this CPU is officially up.14191419+ * Console drivers may assume that per-cpu resources have14201420+ * been allocated. So unless they're explicitly marked as14211421+ * being able to cope (CON_ANYTIME) don't call them until14221422+ * this CPU is officially up.14221423 */14231424static inline int can_use_console(unsigned int cpu)14241425{···14321431 * console_lock held, and 'console_locked' set) if it14331432 * is successful, false otherwise.14341433 */14351435-static int console_trylock_for_printk(void)14341434+static int console_trylock_for_printk(unsigned int cpu)14361435{14371437- unsigned int cpu = smp_processor_id();14381438-14391436 if (!console_trylock())14401437 return 0;14411438 /*···16081609 */16091610 if (!oops_in_progress && !lockdep_recursing(current)) {16101611 recursion_bug = 1;16111611- local_irq_restore(flags);16121612- return 0;16121612+ goto out_restore_irqs;16131613 }16141614 zap_locks();16151615 }···1716171817171719 logbuf_cpu = UINT_MAX;17181720 raw_spin_unlock(&logbuf_lock);17191719- lockdep_on();17201720- local_irq_restore(flags);1721172117221722 /* If called from the scheduler, we can not call up(). */17231723- if (in_sched)17241724- return printed_len;17231723+ if (!in_sched) {17241724+ /*17251725+ * Try to acquire and then immediately release the console17261726+ * semaphore. 
The release will print out buffers and wake up17271727+ * /dev/kmsg and syslog() users.17281728+ */17291729+ if (console_trylock_for_printk(this_cpu))17301730+ console_unlock();17311731+ }1725173217261726- /*17271727- * Disable preemption to avoid being preempted while holding17281728- * console_sem which would prevent anyone from printing to console17291729- */17301730- preempt_disable();17311731- /*17321732- * Try to acquire and then immediately release the console semaphore.17331733- * The release will print out buffers and wake up /dev/kmsg and syslog()17341734- * users.17351735- */17361736- if (console_trylock_for_printk())17371737- console_unlock();17381738- preempt_enable();17391739-17331733+ lockdep_on();17341734+out_restore_irqs:17351735+ local_irq_restore(flags);17401736 return printed_len;17411737}17421738EXPORT_SYMBOL(vprintk_emit);
···191191192192 i %= num_online_cpus();193193194194- if (!cpumask_of_node(numa_node)) {194194+ if (numa_node == -1 || !cpumask_of_node(numa_node)) {195195 /* Use all online cpu's for non numa aware system */196196 cpumask_copy(mask, cpu_online_mask);197197 } else {
+55
lib/iovec.c
···5151 return 0;5252}5353EXPORT_SYMBOL(memcpy_toiovec);5454+5555+/*5656+ * Copy kernel to iovec. Returns -EFAULT on error.5757+ */5858+5959+int memcpy_toiovecend(const struct iovec *iov, unsigned char *kdata,6060+ int offset, int len)6161+{6262+ int copy;6363+ for (; len > 0; ++iov) {6464+ /* Skip over the finished iovecs */6565+ if (unlikely(offset >= iov->iov_len)) {6666+ offset -= iov->iov_len;6767+ continue;6868+ }6969+ copy = min_t(unsigned int, iov->iov_len - offset, len);7070+ if (copy_to_user(iov->iov_base + offset, kdata, copy))7171+ return -EFAULT;7272+ offset = 0;7373+ kdata += copy;7474+ len -= copy;7575+ }7676+7777+ return 0;7878+}7979+EXPORT_SYMBOL(memcpy_toiovecend);8080+8181+/*8282+ * Copy iovec to kernel. Returns -EFAULT on error.8383+ */8484+8585+int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov,8686+ int offset, int len)8787+{8888+ /* Skip over the finished iovecs */8989+ while (offset >= iov->iov_len) {9090+ offset -= iov->iov_len;9191+ iov++;9292+ }9393+9494+ while (len > 0) {9595+ u8 __user *base = iov->iov_base + offset;9696+ int copy = min_t(unsigned int, len, iov->iov_len - offset);9797+9898+ offset = 0;9999+ if (copy_from_user(kdata, base, copy))100100+ return -EFAULT;101101+ len -= copy;102102+ kdata += copy;103103+ iov++;104104+ }105105+106106+ return 0;107107+}108108+EXPORT_SYMBOL(memcpy_fromiovecend);
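`memcpy_fromiovecend()`'s contract (skip `offset` bytes into a segmented iovec, then gather `len` bytes) is easy to exercise in userspace by swapping `copy_from_user()` for `memcpy()`. A sketch with a built-in check; it assumes, as the kernel code does, that the caller guarantees `offset + len` fits within the iovec:

```c
#include <assert.h>
#include <string.h>
#include <sys/uio.h>

/* Userspace model of memcpy_fromiovecend(): same skip-then-gather walk,
 * with memcpy standing in for copy_from_user. */
static void gather(unsigned char *kdata, const struct iovec *iov,
                   size_t offset, size_t len)
{
    while (offset >= iov->iov_len) {    /* skip fully-consumed segments */
        offset -= iov->iov_len;
        iov++;
    }
    while (len > 0) {
        size_t copy = iov->iov_len - offset;

        if (copy > len)
            copy = len;
        memcpy(kdata, (unsigned char *)iov->iov_base + offset, copy);
        offset = 0;                     /* only the first segment is offset */
        kdata += copy;
        len -= copy;
        iov++;
    }
}

/* A copy that starts inside segment 0 and spills into segment 1. */
static int gather_demo(void)
{
    char a[] = "hell", b[] = "o, world";
    struct iovec iov[2] = { { a, 4 }, { b, 8 } };
    unsigned char out[8] = { 0 };

    gather(out, iov, 2, 5);             /* bytes 2..6 of "hello, world" */
    return memcmp(out, "llo, ", 5) == 0;
}
```

Resetting `offset` to zero after the first partial segment is the easy-to-miss detail; every subsequent segment is consumed from its start.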
+8-2
lib/lz4/lz4_decompress.c
···108108 if (length == ML_MASK) {109109 for (; *ip == 255; length += 255)110110 ip++;111111+ if (unlikely(length > (size_t)(length + *ip)))112112+ goto _output_error;111113 length += *ip++;112114 }113115···159157160158 /* write overflow error detected */161159_output_error:162162- return (int) (-(((char *)ip) - source));160160+ return -1;163161}164162165163static int lz4_uncompress_unknownoutputsize(const char *source, char *dest,···192190 int s = 255;193191 while ((ip < iend) && (s == 255)) {194192 s = *ip++;193193+ if (unlikely(length > (size_t)(length + s)))194194+ goto _output_error;195195 length += s;196196 }197197 }···234230 if (length == ML_MASK) {235231 while (ip < iend) {236232 int s = *ip++;233233+ if (unlikely(length > (size_t)(length + s)))234234+ goto _output_error;237235 length += s;238236 if (s == 255)239237 continue;···288282289283 /* write overflow error detected */290284_output_error:291291- return (int) (-(((char *) ip) - source));285285+ return -1;292286}293287294288int lz4_decompress(const unsigned char *src, size_t *src_len,
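The three guards added in the lz4 hunk are the standard unsigned-wraparound test: for an unsigned accumulator, `length + s` wraps exactly when the result is smaller than `length`, so `length > (size_t)(length + s)` catches a corrupt run-length code before it is committed. Isolated, with a small accumulator modeled on the decoder loop:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The idiom from the hunk: detect wraparound of an unsigned run-length
 * accumulator before adding the next byte (0..255) of the length code. */
static int add_would_wrap(size_t length, unsigned int s)
{
    return length > (size_t)(length + s);
}

/* Accumulate a run length the way the decoder does, refusing to wrap.
 * Returns 0 on success, -1 on corrupt (overflowing) input. */
static int accumulate(size_t *length, const unsigned char *bytes, size_t n)
{
    size_t i;

    for (i = 0; i < n; i++) {
        if (add_would_wrap(*length, bytes[i]))
            return -1;
        *length += bytes[i];
    }
    return 0;
}

/* Helper for testing: does a two-byte length code starting from 'start'
 * accumulate without wrapping? */
static int accumulates_ok(size_t start, unsigned char b0, unsigned char b1)
{
    unsigned char bytes[2] = { b0, b1 };
    size_t length = start;

    return accumulate(&length, bytes, 2) == 0;
}
```

Without the guard, an attacker-supplied stream of 255 bytes could wrap `length` to a small value and defeat the decoder's later bounds checks, which is what the hunk's switch from a position-encoding error return to a plain `-1` also reflects.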
+18-10
lib/swiotlb.c
···
  * We need to save away the original address corresponding to a mapped entry
  * for the sync operations.
  */
+#define INVALID_PHYS_ADDR (~(phys_addr_t)0)
 static phys_addr_t *io_tlb_orig_addr;

 /*
···
 	io_tlb_list = memblock_virt_alloc(
 				PAGE_ALIGN(io_tlb_nslabs * sizeof(int)),
 				PAGE_SIZE);
-	for (i = 0; i < io_tlb_nslabs; i++)
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-	io_tlb_index = 0;
 	io_tlb_orig_addr = memblock_virt_alloc(
 				PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)),
 				PAGE_SIZE);
+	for (i = 0; i < io_tlb_nslabs; i++) {
+		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+	}
+	io_tlb_index = 0;

 	if (verbose)
 		swiotlb_print_info();
···
 	if (!io_tlb_list)
 		goto cleanup3;

-	for (i = 0; i < io_tlb_nslabs; i++)
-		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-	io_tlb_index = 0;
-
 	io_tlb_orig_addr = (phys_addr_t *)
 		__get_free_pages(GFP_KERNEL,
 				 get_order(io_tlb_nslabs *
···
 	if (!io_tlb_orig_addr)
 		goto cleanup4;

-	memset(io_tlb_orig_addr, 0, io_tlb_nslabs * sizeof(phys_addr_t));
+	for (i = 0; i < io_tlb_nslabs; i++) {
+		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+	}
+	io_tlb_index = 0;

 	swiotlb_print_info();
···
 	/*
 	 * First, sync the memory before unmapping the entry
 	 */
-	if (orig_addr && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+	if (orig_addr != INVALID_PHYS_ADDR &&
+	    ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
 		swiotlb_bounce(orig_addr, tlb_addr, size, DMA_FROM_DEVICE);

 	/*
···
 	 * Step 1: return the slots to the free list, merging the
 	 * slots with superceeding slots
 	 */
-	for (i = index + nslots - 1; i >= index; i--)
+	for (i = index + nslots - 1; i >= index; i--) {
 		io_tlb_list[i] = ++count;
+		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
+	}
 	/*
 	 * Step 2: merge the returned slots with the preceding slots,
 	 * if available (non zero)
···
 	int index = (tlb_addr - io_tlb_start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = io_tlb_orig_addr[index];

+	if (orig_addr == INVALID_PHYS_ADDR)
+		return;
 	orig_addr += (unsigned long)tlb_addr & ((1 << IO_TLB_SHIFT) - 1);

 	switch (target) {
net/core/dev.c
···
 static struct list_head offload_base __read_mostly;

 static int netif_rx_internal(struct sk_buff *skb);
+static int call_netdevice_notifiers_info(unsigned long val,
+					 struct net_device *dev,
+					 struct netdev_notifier_info *info);

 /*
  * The @dev_base_head list is protected by @dev_base_lock and the rtnl
···
 void netdev_state_change(struct net_device *dev)
 {
 	if (dev->flags & IFF_UP) {
-		call_netdevice_notifiers(NETDEV_CHANGE, dev);
+		struct netdev_notifier_change_info change_info;
+
+		change_info.flags_changed = 0;
+		call_netdevice_notifiers_info(NETDEV_CHANGE, dev,
+					      &change_info.info);
 		rtmsg_ifinfo(RTM_NEWLINK, dev, 0, GFP_KERNEL);
 	}
 }
···
 #endif
 	napi->weight = weight_p;
 	local_irq_disable();
-	while (work < quota) {
+	while (1) {
 		struct sk_buff *skb;
-		unsigned int qlen;

 		while ((skb = __skb_dequeue(&sd->process_queue))) {
 			local_irq_enable();
···
 		}

 		rps_lock(sd);
-		qlen = skb_queue_len(&sd->input_pkt_queue);
-		if (qlen)
-			skb_queue_splice_tail_init(&sd->input_pkt_queue,
-						   &sd->process_queue);
-
-		if (qlen < quota - work) {
+		if (skb_queue_empty(&sd->input_pkt_queue)) {
 			/*
 			 * Inline a custom version of __napi_complete().
 			 * only current cpu owns and manipulates this napi,
-			 * and NAPI_STATE_SCHED is the only possible flag set on backlog.
-			 * we can use a plain write instead of clear_bit(),
+			 * and NAPI_STATE_SCHED is the only possible flag set
+			 * on backlog.
+			 * We can use a plain write instead of clear_bit(),
 			 * and we dont need an smp_mb() memory barrier.
 			 */
 			list_del(&napi->poll_list);
 			napi->state = 0;
+			rps_unlock(sd);

-			quota = work + qlen;
+			break;
 		}
+
+		skb_queue_splice_tail_init(&sd->input_pkt_queue,
+					   &sd->process_queue);
 		rps_unlock(sd);
 	}
 	local_irq_enable();
-55
net/core/iovec.c
···
 }

 /*
- * Copy kernel to iovec. Returns -EFAULT on error.
- */
-
-int memcpy_toiovecend(const struct iovec *iov, unsigned char *kdata,
-		      int offset, int len)
-{
-	int copy;
-	for (; len > 0; ++iov) {
-		/* Skip over the finished iovecs */
-		if (unlikely(offset >= iov->iov_len)) {
-			offset -= iov->iov_len;
-			continue;
-		}
-		copy = min_t(unsigned int, iov->iov_len - offset, len);
-		if (copy_to_user(iov->iov_base + offset, kdata, copy))
-			return -EFAULT;
-		offset = 0;
-		kdata += copy;
-		len -= copy;
-	}
-
-	return 0;
-}
-EXPORT_SYMBOL(memcpy_toiovecend);
-
-/*
- * Copy iovec to kernel. Returns -EFAULT on error.
- */
-
-int memcpy_fromiovecend(unsigned char *kdata, const struct iovec *iov,
-			int offset, int len)
-{
-	/* Skip over the finished iovecs */
-	while (offset >= iov->iov_len) {
-		offset -= iov->iov_len;
-		iov++;
-	}
-
-	while (len > 0) {
-		u8 __user *base = iov->iov_base + offset;
-		int copy = min_t(unsigned int, len, iov->iov_len - offset);
-
-		offset = 0;
-		if (copy_from_user(kdata, base, copy))
-			return -EFAULT;
-		len -= copy;
-		kdata += copy;
-		iov++;
-	}
-
-	return 0;
-}
-EXPORT_SYMBOL(memcpy_fromiovecend);
-
-/*
  * And now for the all-in-one: copy and checksum from a user iovec
  * directly to a datagram
  * Calls to csum_partial but the last must be in 32 bit chunks
net/ipv6/mcast.c
···
 	len = ntohs(ipv6_hdr(skb)->payload_len) + sizeof(struct ipv6hdr);
 	len -= skb_network_header_len(skb);

-	/* Drop queries with not link local source */
-	if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL))
+	/* RFC3810 6.2
+	 * Upon reception of an MLD message that contains a Query, the node
+	 * checks if the source address of the message is a valid link-local
+	 * address, if the Hop Limit is set to 1, and if the Router Alert
+	 * option is present in the Hop-By-Hop Options header of the IPv6
+	 * packet.  If any of these checks fails, the packet is dropped.
+	 */
+	if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL) ||
+	    ipv6_hdr(skb)->hop_limit != 1 ||
+	    !(IP6CB(skb)->flags & IP6SKB_ROUTERALERT) ||
+	    IP6CB(skb)->ra != htons(IPV6_OPT_ROUTERALERT_MLD))
 		return -EINVAL;

 	idev = __in6_dev_get(skb->dev);
net/openvswitch/flow.h
···
 /*
- * Copyright (c) 2007-2013 Nicira, Inc.
+ * Copyright (c) 2007-2014 Nicira, Inc.
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of version 2 of the GNU General Public
···
 	unsigned char ar_tip[4];	/* target IP address */
 } __packed;

-void ovs_flow_stats_update(struct sw_flow *, struct sk_buff *);
+void ovs_flow_stats_update(struct sw_flow *, __be16 tcp_flags,
+			   struct sk_buff *);
 void ovs_flow_stats_get(const struct sw_flow *, struct ovs_flow_stats *,
 			unsigned long *used, __be16 *tcp_flags);
 void ovs_flow_stats_clear(struct sw_flow *);
net/openvswitch/vport-gre.c
···
 	return PACKET_RCVD;
 }

+/* Called with rcu_read_lock and BH disabled. */
+static int gre_err(struct sk_buff *skb, u32 info,
+		   const struct tnl_ptk_info *tpi)
+{
+	struct ovs_net *ovs_net;
+	struct vport *vport;
+
+	ovs_net = net_generic(dev_net(skb->dev), ovs_net_id);
+	vport = rcu_dereference(ovs_net->vport_net.gre_vport);
+
+	if (unlikely(!vport))
+		return PACKET_REJECT;
+	else
+		return PACKET_RCVD;
+}
+
 static int gre_tnl_send(struct vport *vport, struct sk_buff *skb)
 {
 	struct net *net = ovs_dp_get_net(vport->dp);
···

 static struct gre_cisco_protocol gre_protocol = {
 	.handler	= gre_rcv,
+	.err_handler	= gre_err,
 	.priority	= 1,
 };
+15-107
net/sctp/ulpevent.c
···
  * specification [SCTP] and any extensions for a list of possible
  * error formats.
  */
-struct sctp_ulpevent *sctp_ulpevent_make_remote_error(
-	const struct sctp_association *asoc, struct sctp_chunk *chunk,
-	__u16 flags, gfp_t gfp)
+struct sctp_ulpevent *
+sctp_ulpevent_make_remote_error(const struct sctp_association *asoc,
+				struct sctp_chunk *chunk, __u16 flags,
+				gfp_t gfp)
 {
 	struct sctp_ulpevent *event;
 	struct sctp_remote_error *sre;
···
 	/* Copy the skb to a new skb with room for us to prepend
 	 * notification with.
 	 */
-	skb = skb_copy_expand(chunk->skb, sizeof(struct sctp_remote_error),
-			      0, gfp);
+	skb = skb_copy_expand(chunk->skb, sizeof(*sre), 0, gfp);

 	/* Pull off the rest of the cause TLV from the chunk. */
 	skb_pull(chunk->skb, elen);
···
 	event = sctp_skb2event(skb);
 	sctp_ulpevent_init(event, MSG_NOTIFICATION, skb->truesize);

-	sre = (struct sctp_remote_error *)
-		skb_push(skb, sizeof(struct sctp_remote_error));
+	sre = (struct sctp_remote_error *) skb_push(skb, sizeof(*sre));

 	/* Trim the buffer to the right length. */
-	skb_trim(skb, sizeof(struct sctp_remote_error) + elen);
+	skb_trim(skb, sizeof(*sre) + elen);

-	/* Socket Extensions for SCTP
-	 * 5.3.1.3 SCTP_REMOTE_ERROR
-	 *
-	 * sre_type:
-	 *   It should be SCTP_REMOTE_ERROR.
-	 */
+	/* RFC6458, Section 6.1.3. SCTP_REMOTE_ERROR */
+	memset(sre, 0, sizeof(*sre));
 	sre->sre_type = SCTP_REMOTE_ERROR;
-
-	/*
-	 * Socket Extensions for SCTP
-	 * 5.3.1.3 SCTP_REMOTE_ERROR
-	 *
-	 * sre_flags: 16 bits (unsigned integer)
-	 *   Currently unused.
-	 */
 	sre->sre_flags = 0;
-
-	/* Socket Extensions for SCTP
-	 * 5.3.1.3 SCTP_REMOTE_ERROR
-	 *
-	 * sre_length: sizeof (__u32)
-	 *
-	 * This field is the total length of the notification data,
-	 * including the notification header.
-	 */
 	sre->sre_length = skb->len;
-
-	/* Socket Extensions for SCTP
-	 * 5.3.1.3 SCTP_REMOTE_ERROR
-	 *
-	 * sre_error: 16 bits (unsigned integer)
-	 * This value represents one of the Operational Error causes defined in
-	 * the SCTP specification, in network byte order.
-	 */
 	sre->sre_error = cause;
-
-	/* Socket Extensions for SCTP
-	 * 5.3.1.3 SCTP_REMOTE_ERROR
-	 *
-	 * sre_assoc_id: sizeof (sctp_assoc_t)
-	 *
-	 * The association id field, holds the identifier for the association.
-	 * All notifications for a given association have the same association
-	 * identifier.  For TCP style socket, this field is ignored.
-	 */
 	sctp_ulpevent_set_owner(event, asoc);
 	sre->sre_assoc_id = sctp_assoc2id(asoc);

 	return event;
-
 fail:
 	return NULL;
 }
···
 	return notification->sn_header.sn_type;
 }

-/* Copy out the sndrcvinfo into a msghdr. */
+/* RFC6458, Section 5.3.2. SCTP Header Information Structure
+ * (SCTP_SNDRCV, DEPRECATED)
+ */
 void sctp_ulpevent_read_sndrcvinfo(const struct sctp_ulpevent *event,
 				   struct msghdr *msghdr)
 {
···
 	if (sctp_ulpevent_is_notification(event))
 		return;

-	/* Sockets API Extensions for SCTP
-	 * Section 5.2.2 SCTP Header Information Structure (SCTP_SNDRCV)
-	 *
-	 * sinfo_stream: 16 bits (unsigned integer)
-	 *
-	 * For recvmsg() the SCTP stack places the message's stream number in
-	 * this value.
-	 */
+	memset(&sinfo, 0, sizeof(sinfo));
 	sinfo.sinfo_stream = event->stream;
-	/* sinfo_ssn: 16 bits (unsigned integer)
-	 *
-	 * For recvmsg() this value contains the stream sequence number that
-	 * the remote endpoint placed in the DATA chunk.  For fragmented
-	 * messages this is the same number for all deliveries of the message
-	 * (if more than one recvmsg() is needed to read the message).
-	 */
 	sinfo.sinfo_ssn = event->ssn;
-	/* sinfo_ppid: 32 bits (unsigned integer)
-	 *
-	 * In recvmsg() this value is
-	 * the same information that was passed by the upper layer in the peer
-	 * application.  Please note that byte order issues are NOT accounted
-	 * for and this information is passed opaquely by the SCTP stack from
-	 * one end to the other.
-	 */
 	sinfo.sinfo_ppid = event->ppid;
-	/* sinfo_flags: 16 bits (unsigned integer)
-	 *
-	 * This field may contain any of the following flags and is composed of
-	 * a bitwise OR of these values.
-	 *
-	 * recvmsg() flags:
-	 *
-	 * SCTP_UNORDERED - This flag is present when the message was sent
-	 *                  non-ordered.
-	 */
 	sinfo.sinfo_flags = event->flags;
-	/* sinfo_tsn: 32 bit (unsigned integer)
-	 *
-	 * For the receiving side, this field holds a TSN that was
-	 * assigned to one of the SCTP Data Chunks.
-	 */
 	sinfo.sinfo_tsn = event->tsn;
-	/* sinfo_cumtsn: 32 bit (unsigned integer)
-	 *
-	 * This field will hold the current cumulative TSN as
-	 * known by the underlying SCTP layer.  Note this field is
-	 * ignored when sending and only valid for a receive
-	 * operation when sinfo_flags are set to SCTP_UNORDERED.
-	 */
 	sinfo.sinfo_cumtsn = event->cumtsn;
-	/* sinfo_assoc_id: sizeof (sctp_assoc_t)
-	 *
-	 * The association handle field, sinfo_assoc_id, holds the identifier
-	 * for the association announced in the COMMUNICATION_UP notification.
-	 * All notifications for a given association have the same identifier.
-	 * Ignored for one-to-one style sockets.
-	 */
 	sinfo.sinfo_assoc_id = sctp_assoc2id(event->asoc);
-
-	/* context value that is set via SCTP_CONTEXT socket option. */
+	/* Context value that is set via SCTP_CONTEXT socket option. */
 	sinfo.sinfo_context = event->asoc->default_rcv_context;
-
 	/* These fields are not used while receiving. */
 	sinfo.sinfo_timetolive = 0;

 	put_cmsg(msghdr, IPPROTO_SCTP, SCTP_SNDRCV,
-		 sizeof(struct sctp_sndrcvinfo), (void *)&sinfo);
+		 sizeof(sinfo), &sinfo);
 }

 /* Do accounting for bytes received and hold a reference to the association
net/tipc/msg.c
···
 }

 /* tipc_buf_append(): Append a buffer to the fragment list of another buffer
- * Let first buffer become head buffer
- * Returns 1 and sets *buf to headbuf if chain is complete, otherwise 0
- * Leaves headbuf pointer at NULL if failure
+ * @*headbuf: in:  NULL for first frag, otherwise value returned from prev call
+ *            out: set when successful non-complete reassembly, otherwise NULL
+ * @*buf:     in:  the buffer to append. Always defined
+ *            out: head buf after successful complete reassembly, otherwise NULL
+ * Returns 1 when reassembly complete, otherwise 0
 */
 int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf)
 {
···
 			goto out_free;
 		head = *headbuf = frag;
 		skb_frag_list_init(head);
+		*buf = NULL;
 		return 0;
 	}
 	if (!head)
···
 out_free:
 	pr_warn_ratelimited("Unable to build fragment list\n");
 	kfree_skb(*buf);
+	kfree_skb(*headbuf);
+	*buf = *headbuf = NULL;
 	return 0;
 }
+12-3
scripts/kernel-doc
···
 sub dump_function($$) {
     my $prototype = shift;
     my $file = shift;
+    my $noret = 0;

     $prototype =~ s/^static +//;
     $prototype =~ s/^extern +//;
···
     $prototype =~ s/__init_or_module +//;
     $prototype =~ s/__must_check +//;
     $prototype =~ s/__weak +//;
-    $prototype =~ s/^#\s*define\s+//; #ak added
+    my $define = $prototype =~ s/^#\s*define\s+//; #ak added
     $prototype =~ s/__attribute__\s*\(\([a-z,]*\)\)//;

     # Yes, this truly is vile.  We are looking for:
···
     # - atomic_set (macro)
     # - pci_match_device, __copy_to_user (long return type)

-    if ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||
+    if ($define && $prototype =~ m/^()([a-zA-Z0-9_~:]+)\s+/) {
+        # This is an object-like macro, it has no return type and no parameter
+        # list.
+        # Function-like macros are not allowed to have spaces between
+        # declaration_name and opening parenthesis (notice the \s+).
+        $return_type = $1;
+        $declaration_name = $2;
+        $noret = 1;
+    } elsif ($prototype =~ m/^()([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||
 	$prototype =~ m/^(\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||
 	$prototype =~ m/^(\w+\s*\*)\s*([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||
 	$prototype =~ m/^(\w+\s+\w+)\s+([a-zA-Z0-9_~:]+)\s*\(([^\(]*)\)/ ||
···
     # of warnings goes sufficiently down, the check is only performed in
     # verbose mode.
     # TODO: always perform the check.
-    if ($verbose) {
+    if ($verbose && !$noret) {
 	    check_return_section($file, $declaration_name, $return_type);
     }

sound/usb/endpoint.c
···
 }

 /**
+ * snd_usb_endpoint_release: Tear down an snd_usb_endpoint
+ *
+ * @ep: the endpoint to release
+ *
+ * This function does not care for the endpoint's use count but will tear
+ * down all the streaming URBs immediately.
+ */
+void snd_usb_endpoint_release(struct snd_usb_endpoint *ep)
+{
+	release_urbs(ep, 1);
+}
+
+/**
  * snd_usb_endpoint_free: Free the resources of an snd_usb_endpoint
  *
  * @ep: the list header of the endpoint to free
  *
- * This function does not care for the endpoint's use count but will tear
- * down all the streaming URBs immediately and free all resources.
+ * This frees all resources of the given ep.
  */
 void snd_usb_endpoint_free(struct list_head *head)
 {
 	struct snd_usb_endpoint *ep;

 	ep = list_entry(head, struct snd_usb_endpoint, list);
-	release_urbs(ep, 1);
 	kfree(ep);
 }
tools/testing/selftests/ipc/msgque.c
···
 	int msg, pid, err;
 	struct msgque_data msgque;

+	if (getuid() != 0) {
+		printf("Please run the test as root - Exiting.\n");
+		exit(1);
+	}
+
 	msgque.key = ftok(argv[0], 822155650);
 	if (msgque.key == -1) {
 		printf("Can't make key\n");
tools/thermal/tmon/tmon.c
···
 static void prepare_logging(void)
 {
 	int i;
+	struct stat logstat;

 	if (!logging)
 		return;
···
 		syslog(LOG_ERR, "failed to open log file %s\n", TMON_LOG_FILE);
 		return;
 	}
+
+	if (lstat(TMON_LOG_FILE, &logstat) < 0) {
+		syslog(LOG_ERR, "Unable to stat log file %s\n", TMON_LOG_FILE);
+		fclose(tmon_log);
+		tmon_log = NULL;
+		return;
+	}
+
+	/* The log file must be a regular file owned by us */
+	if (S_ISLNK(logstat.st_mode)) {
+		syslog(LOG_ERR, "Log file is a symlink.  Will not log\n");
+		fclose(tmon_log);
+		tmon_log = NULL;
+		return;
+	}
+
+	if (logstat.st_uid != getuid()) {
+		syslog(LOG_ERR, "We don't own the log file.  Not logging\n");
+		fclose(tmon_log);
+		tmon_log = NULL;
+		return;
+	}
+

 	fprintf(tmon_log, "#----------- THERMAL SYSTEM CONFIG -------------\n");
 	for (i = 0; i < ptdata.nr_tz_sensor; i++) {
···
 	disable_tui();

 	/* change the file mode mask */
-	umask(0);
+	umask(S_IWGRP | S_IWOTH);

 	/* new SID for the daemon process */
 	sid = setsid();