···
 min_vecs argument set to this limit, and the PCI core will return -ENOSPC
 if it can't meet the minimum number of vectors.
 
-The flags argument should normally be set to 0, but can be used to pass the
-PCI_IRQ_NOMSI and PCI_IRQ_NOMSIX flag in case a device claims to support
-MSI or MSI-X, but the support is broken, or to pass PCI_IRQ_NOLEGACY in
-case the device does not support legacy interrupt lines.
-
-By default this function will spread the interrupts around the available
-CPUs, but this feature can be disabled by passing the PCI_IRQ_NOAFFINITY
-flag.
+The flags argument is used to specify which type of interrupt can be used
+by the device and the driver (PCI_IRQ_LEGACY, PCI_IRQ_MSI, PCI_IRQ_MSIX).
+A convenient short-hand (PCI_IRQ_ALL_TYPES) is also available to ask for
+any possible kind of interrupt.  If the PCI_IRQ_AFFINITY flag is set,
+pci_alloc_irq_vectors() will spread the interrupts around the available CPUs.
 
 To get the Linux IRQ numbers passed to request_irq() and free_irq() and the
 vectors, use the following function:
···
 capped to the supported limit, so there is no need to query the number of
 vectors supported beforehand:
 
-	nvec = pci_alloc_irq_vectors(pdev, 1, nvec, 0);
+	nvec = pci_alloc_irq_vectors(pdev, 1, nvec, PCI_IRQ_ALL_TYPES);
 	if (nvec < 0)
 		goto out_err;
 
···
 number to pci_alloc_irq_vectors() function as both 'min_vecs' and
 'max_vecs' parameters:
 
-	ret = pci_alloc_irq_vectors(pdev, nvec, nvec, 0);
+	ret = pci_alloc_irq_vectors(pdev, nvec, nvec, PCI_IRQ_ALL_TYPES);
 	if (ret < 0)
 		goto out_err;
 
···
 the single MSI mode for a device.  It could be done by passing two 1s as
 'min_vecs' and 'max_vecs':
 
-	ret = pci_alloc_irq_vectors(pdev, 1, 1, 0);
+	ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
 	if (ret < 0)
 		goto out_err;
 
 Some devices might not support using legacy line interrupts, in which case
-the PCI_IRQ_NOLEGACY flag can be used to fail the request if the platform
-can't provide MSI or MSI-X interrupts:
+the driver can specify that only MSI or MSI-X is acceptable:
 
-	nvec = pci_alloc_irq_vectors(pdev, 1, nvec, PCI_IRQ_NOLEGACY);
+	nvec = pci_alloc_irq_vectors(pdev, 1, nvec, PCI_IRQ_MSI | PCI_IRQ_MSIX);
 	if (nvec < 0)
 		goto out_err;
Documentation/arm64/silicon-errata.txt (+1)
···
 | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075    |
 | ARM            | Cortex-A57      | #852523         | N/A                     |
 | ARM            | Cortex-A57      | #834220         | ARM64_ERRATUM_834220    |
+| ARM            | Cortex-A72      | #853709         | N/A                     |
 | ARM            | MMU-500         | #841119,#826419 | N/A                     |
 |                |                 |                 |                         |
 | Cavium         | ThunderX ITS    | #22375, #24313  | CAVIUM_ERRATUM_22375    |
Documentation/conf.py (+1 -1)
···
 todo_include_todos = False
 
 primary_domain = 'C'
-highlight_language = 'C'
+highlight_language = 'guess'
 
 # -- Options for HTML output ----------------------------------------------
 
···
 - interrupts: Interrupt number for McPDM
 - interrupt-parent: The parent interrupt controller
 - ti,hwmods: Name of the hwmod associated to the McPDM
-- clocks: phandle for the pdmclk provider, likely <&twl6040>
-- clock-names: Must be "pdmclk"
 
 Example:
···
 	interrupts = <0 112 0x4>;
 	interrupt-parent = <&gic>;
 	ti,hwmods = "mcpdm";
-};
-
-In board DTS file the pdmclk needs to be added:
-
-&mcpdm {
-	clocks = <&twl6040>;
-	clock-names = "pdmclk";
-	status = "okay";
 };
···
 Required properties:
 - #cooling-cells:	Used to provide cooling device specific information
   Type: unsigned	while referring to it. Must be at least 2, in order
-  Size: one cell      	to specify minimum and maximum cooling state used
+  Size: one cell	to specify minimum and maximum cooling state used
 			in the reference. The first cell is the minimum
 			cooling state requested and the second cell is
 			the maximum cooling state requested in the reference.
···
 Optional property:
 - contribution:		The cooling contribution to the thermal zone of the
   Type: unsigned	referred cooling device at the referred trip point.
-  Size: one cell      	The contribution is a ratio of the sum
+  Size: one cell	The contribution is a ratio of the sum
 			of all cooling contributions within a thermal zone.
 
 Note: Using the THERMAL_NO_LIMIT (-1UL) constant in the cooling-device phandle
···
   Size: one cell
 
 - thermal-sensors:	A list of thermal sensor phandles and sensor specifier
-  Type: list of        	used while monitoring the thermal zone.
+  Type: list of		used while monitoring the thermal zone.
   phandles + sensor
   specifier
 
···
 				<&adc>;	/* pcb north */
 
 		/* hotspot = 100 * bandgap - 120 * adc + 484 */
-		coefficients =	<100	-120	484>; 
+		coefficients =	<100	-120	484>;
 
 		trips {
 			...
···
 		thermal-sensors = <&adc>;
 
 		/* hotspot = 1 * adc + 6000 */
-		coefficients =	<1	6000>; 
+		coefficients =	<1	6000>;
 
 (d) - Board thermal
 
Documentation/hwmon/ftsteutates (+2 -2)
···
 implemented in this driver.
 
 Specification of the chip can be found here:
-ftp:///pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/BMC-Teutates_Specification_V1.21.pdf
-ftp:///pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/Fujitsu_mainboards-1-Sensors_HowTo-en-US.pdf
+ftp://ftp.ts.fujitsu.com/pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/BMC-Teutates_Specification_V1.21.pdf
+ftp://ftp.ts.fujitsu.com/pub/Mainboard-OEM-Sales/Services/Software&Tools/Linux_SystemMonitoring&Watchdog&GPIO/Fujitsu_mainboards-1-Sensors_HowTo-en-US.pdf
Documentation/kernel-documentation.rst (-6)
···
 Cross-referencing from reStructuredText
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. highlight:: none
-
 To cross-reference the functions and types defined in the kernel-doc comments
 from reStructuredText documents, please use the `Sphinx C Domain`_
 references. For example::
···
 
 Function documentation
 ----------------------
-
-.. highlight:: c
 
 The general format of a function and function-like macro kernel-doc comment is::
 
···
 
 Converting DocBook to Sphinx
 ----------------------------
-
-.. highlight:: none
 
 Over time, we expect all of the documents under ``Documentation/DocBook`` to be
 converted to Sphinx and reStructuredText. For most DocBook XML documents, a good
Documentation/kernel-parameters.txt (+4)
···
 				PAGE_SIZE is used as alignment.
 				PCI-PCI bridge can be specified, if resource
 				windows need to be expanded.
+				To specify the alignment for several
+				instances of a device, the PCI vendor,
+				device, subvendor, and subdevice may be
+				specified, e.g., 4096@pci:8086:9c22:103c:198f
 		ecrc=		Enable/disable PCIe ECRC (transaction layer
 				end-to-end CRC checking).
 			bios: Use BIOS/firmware settings. This is the
Documentation/networking/dsa/dsa.txt (-20)
···
 TODO
 ====
 
-The platform device problem
----------------------------
-DSA is currently implemented as a platform device driver which is far from ideal
-as was discussed in this thread:
-
-http://permalink.gmane.org/gmane.linux.network/329848
-
-This basically prevents the device driver model to be properly used and applied,
-and support non-MDIO, non-MMIO Ethernet connected switches.
-
-Another problem with the platform device driver approach is that it prevents the
-use of a modular switch drivers build due to a circular dependency, illustrated
-here:
-
-http://comments.gmane.org/gmane.linux.network/345803
-
-Attempts of reworking this has been done here:
-
-https://lwn.net/Articles/643149/
-
 Making SWITCHDEV and DSA converge towards an unified codebase
 -------------------------------------------------------------
 
Documentation/power/basic-pm-debugging.txt (+26 -1)
···
 Again, if you find the offending module(s), it(they) must be unloaded every time
 before hibernation, and please report the problem with it(them).
 
-c) Advanced debugging
+c) Using the "test_resume" hibernation option
+
+/sys/power/disk generally tells the kernel what to do after creating a
+hibernation image.  One of the available options is "test_resume" which
+causes the just created image to be used for immediate restoration.  Namely,
+after doing:
+
+# echo test_resume > /sys/power/disk
+# echo disk > /sys/power/state
+
+a hibernation image will be created and a resume from it will be triggered
+immediately without involving the platform firmware in any way.
+
+That test can be used to check if failures to resume from hibernation are
+related to bad interactions with the platform firmware.  That is, if the above
+works every time, but resume from actual hibernation does not work or is
+unreliable, the platform firmware may be responsible for the failures.
+
+On architectures and platforms that support using different kernels to restore
+hibernation images (that is, the kernel used to read the image from storage and
+load it into memory is different from the one included in the image) or support
+kernel address space randomization, it also can be used to check if failures
+to resume may be related to the differences between the restore and image
+kernels.
+
+d) Advanced debugging
 
 In case that hibernation does not work on your system even in the minimal
 configuration and compiling more drivers as modules is not practical or some
Documentation/power/interface.txt (+59 -58)
···
-Power Management Interface
+Power Management Interface for System Sleep
 
+Copyright (c) 2016 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
 
-The power management subsystem provides a unified sysfs interface to
-userspace, regardless of what architecture or platform one is
-running. The interface exists in /sys/power/ directory (assuming sysfs
-is mounted at /sys).
+The power management subsystem provides userspace with a unified sysfs interface
+for system sleep regardless of the underlying system architecture or platform.
+The interface is located in the /sys/power/ directory (assuming that sysfs is
+mounted at /sys).
 
-/sys/power/state controls system power state. Reading from this file
-returns what states are supported, which is hard-coded to 'freeze',
-'standby' (Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk'
-(Suspend-to-Disk).
+/sys/power/state is the system sleep state control file.
 
-Writing to this file one of those strings causes the system to
-transition into that state. Please see the file
-Documentation/power/states.txt for a description of each of those
-states.
+Reading from it returns a list of supported sleep states, encoded as:
 
+'freeze' (Suspend-to-Idle)
+'standby' (Power-On Suspend)
+'mem' (Suspend-to-RAM)
+'disk' (Suspend-to-Disk)
 
-/sys/power/disk controls the operating mode of the suspend-to-disk
-mechanism. Suspend-to-disk can be handled in several ways. We have a
-few options for putting the system to sleep - using the platform driver
-(e.g. ACPI or other suspend_ops), powering off the system or rebooting the
-system (for testing).
+Suspend-to-Idle is always supported.  Suspend-to-Disk is always supported
+too as long the kernel has been configured to support hibernation at all
+(ie. CONFIG_HIBERNATION is set in the kernel configuration file).  Support
+for Suspend-to-RAM and Power-On Suspend depends on the capabilities of the
+platform.
 
-Additionally, /sys/power/disk can be used to turn on one of the two testing
-modes of the suspend-to-disk mechanism: 'testproc' or 'test'. If the
-suspend-to-disk mechanism is in the 'testproc' mode, writing 'disk' to
-/sys/power/state will cause the kernel to disable nonboot CPUs and freeze
-tasks, wait for 5 seconds, unfreeze tasks and enable nonboot CPUs. If it is
-in the 'test' mode, writing 'disk' to /sys/power/state will cause the kernel
-to disable nonboot CPUs and freeze tasks, shrink memory, suspend devices, wait
-for 5 seconds, resume devices, unfreeze tasks and enable nonboot CPUs. Then,
-we are able to look in the log messages and work out, for example, which code
-is being slow and which device drivers are misbehaving.
+If one of the strings listed in /sys/power/state is written to it, the system
+will attempt to transition into the corresponding sleep state.  Refer to
+Documentation/power/states.txt for a description of each of those states.
 
-Reading from this file will display all supported modes and the currently
-selected one in brackets, for example
+/sys/power/disk controls the operating mode of hibernation (Suspend-to-Disk).
+Specifically, it tells the kernel what to do after creating a hibernation image.
 
-	[shutdown] reboot test testproc
+Reading from it returns a list of supported options encoded as:
 
-Writing to this file will accept one of
+'platform' (put the system into sleep using a platform-provided method)
+'shutdown' (shut the system down)
+'reboot' (reboot the system)
+'suspend' (trigger a Suspend-to-RAM transition)
+'test_resume' (resume-after-hibernation test mode)
 
-       'platform' (only if the platform supports it)
-       'shutdown'
-       'reboot'
-       'testproc'
-       'test'
+The currently selected option is printed in square brackets.
 
-/sys/power/image_size controls the size of the image created by
-the suspend-to-disk mechanism. It can be written a string
-representing a non-negative integer that will be used as an upper
-limit of the image size, in bytes. The suspend-to-disk mechanism will
-do its best to ensure the image size will not exceed that number. However,
-if this turns out to be impossible, it will try to suspend anyway using the
-smallest image possible. In particular, if "0" is written to this file, the
-suspend image will be as small as possible.
+The 'platform' option is only available if the platform provides a special
+mechanism to put the system to sleep after creating a hibernation image (ACPI
+does that, for example).  The 'suspend' option is available if Suspend-to-RAM
+is supported.  Refer to Documentation/power/basic_pm_debugging.txt for the
+description of the 'test_resume' option.
 
-Reading from this file will display the current image size limit, which
-is set to 2/5 of available RAM by default.
+To select an option, write the string representing it to /sys/power/disk.
 
-/sys/power/pm_trace controls the code which saves the last PM event point in
-the RTC across reboots, so that you can debug a machine that just hangs
-during suspend (or more commonly, during resume). Namely, the RTC is only
-used to save the last PM event point if this file contains '1'. Initially it
-contains '0' which may be changed to '1' by writing a string representing a
-nonzero integer into it.
+/sys/power/image_size controls the size of hibernation images.
 
-To use this debugging feature you should attempt to suspend the machine, then
-reboot it and run
+It can be written a string representing a non-negative integer that will be
+used as a best-effort upper limit of the image size, in bytes.  The hibernation
+core will do its best to ensure that the image size will not exceed that number.
+However, if that turns out to be impossible to achieve, a hibernation image will
+still be created and its size will be as small as possible.  In particular,
+writing '0' to this file will enforce hibernation images to be as small as
+possible.
 
-	dmesg -s 1000000 | grep 'hash matches'
+Reading from this file returns the current image size limit, which is set to
+around 2/5 of available RAM by default.
 
-CAUTION: Using it will cause your machine's real-time (CMOS) clock to be
-set to a random invalid time after a resume.
+/sys/power/pm_trace controls the PM trace mechanism saving the last suspend
+or resume event point in the RTC across reboots.
+
+It helps to debug hard lockups or reboots due to device driver failures that
+occur during system suspend or resume (which is more common) more effectively.
+
+If /sys/power/pm_trace contains '1', the fingerprint of each suspend/resume
+event point in turn will be stored in the RTC memory (overwriting the actual
+RTC information), so it will survive a system crash if one occurs right after
+storing it and it can be used later to identify the driver that caused the crash
+to happen (see Documentation/power/s2ram.txt for more information).
+
+Initially it contains '0' which may be changed to '1' by writing a string
+representing a nonzero integer into it.
Documentation/powerpc/transactional_memory.txt (+2)
···
 For signals taken in non-TM or suspended mode, we use the
 normal/non-checkpointed stack pointer.
 
+Any transaction initiated inside a sighandler and suspended on return
+from the sighandler to the kernel will get reclaimed and discarded.
+
 Failure cause codes used by kernel
 ==================================
Documentation/sphinx-static/theme_overrides.css (+2 -1)
···
     caption a.headerlink { opacity: 0; }
     caption a.headerlink:hover { opacity: 1; }
 
-    /* inline literal: drop the borderbox and red color */
+    /* inline literal: drop the borderbox, padding and red color */
 
     code, .rst-content tt, .rst-content code {
         color: inherit;
         border: none;
+        padding: unset;
         background: inherit;
         font-size: 85%;
     }
MAINTAINERS (+16 -1)
···
 F:	drivers/gpu/drm/arc/
 F:	Documentation/devicetree/bindings/display/snps,arcpgu.txt
 
+ARM ARCHITECTED TIMER DRIVER
+M:	Mark Rutland <mark.rutland@arm.com>
+M:	Marc Zyngier <marc.zyngier@arm.com>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+F:	arch/arm/include/asm/arch_timer.h
+F:	arch/arm64/include/asm/arch_timer.h
+F:	drivers/clocksource/arm_arch_timer.c
+
 ARM HDLCD DRM DRIVER
 M:	Liviu Dudau <liviu.dudau@arm.com>
 S:	Supported
···
 S:	Maintained
 F:	drivers/edac/sb_edac.c
 
+EDAC-SKYLAKE
+M:	Tony Luck <tony.luck@intel.com>
+L:	linux-edac@vger.kernel.org
+S:	Maintained
+F:	drivers/edac/skx_edac.c
+
 EDAC-XGENE
 APPLIED MICRO (APM) X-GENE SOC EDAC
 M:	Loc Ho <lho@apm.com>
···
 S:	Supported
 W:	https://github.com/SoftRoCE/rxe-dev/wiki/rxe-dev:-Home
 Q:	http://patchwork.kernel.org/project/linux-rdma/list/
-F:	drivers/infiniband/hw/rxe/
+F:	drivers/infiniband/sw/rxe/
 F:	include/uapi/rdma/rdma_user_rxe.h
 
 MEMBARRIER SUPPORT
···
 
 #ifdef CONFIG_ARC_CURR_IN_REG
 	; Retrieve orig r25 and save it with rest of callee_regs
-	ld.as	r12, [r12, PT_user_r25]
+	ld	r12, [r12, PT_user_r25]
 	PUSH	r12
 #else
 	PUSH	r25
···
 
 	; SP is back to start of pt_regs
 #ifdef CONFIG_ARC_CURR_IN_REG
-	st.as	r12, [sp, PT_user_r25]
+	st	r12, [sp, PT_user_r25]
 #endif
 .endm
 
arch/arc/include/asm/irqflags-compact.h (+1 -1)
···
 .endm
 
 .macro IRQ_ENABLE  scratch
+	TRACE_ASM_IRQ_ENABLE
 	lr	\scratch, [status32]
 	or	\scratch, \scratch, (STATUS_E1_MASK | STATUS_E2_MASK)
 	flag	\scratch
-	TRACE_ASM_IRQ_ENABLE
 .endm
 
 #endif	/* __ASSEMBLY__ */
arch/arc/include/asm/pgtable.h (+1 -1)
···
 
 #define pte_page(pte)		pfn_to_page(pte_pfn(pte))
 #define mk_pte(page, prot)	pfn_pte(page_to_pfn(page), prot)
-#define pfn_pte(pfn, prot)	(__pte(((pte_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot)))
+#define pfn_pte(pfn, prot)	__pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
 /* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/
 #define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
arch/arc/include/uapi/asm/elf.h (+9 -2)
···
 
 /* Machine specific ELF Hdr flags */
 #define EF_ARC_OSABI_MSK	0x00000f00
-#define EF_ARC_OSABI_ORIG	0x00000000   /* MUST be zero for back-compat */
-#define EF_ARC_OSABI_CURRENT	0x00000300   /* v3 (no legacy syscalls) */
+
+#define EF_ARC_OSABI_V3		0x00000300   /* v3 (no legacy syscalls) */
+#define EF_ARC_OSABI_V4		0x00000400   /* v4 (64bit data any reg align) */
+
+#if __GNUC__ < 6
+#define EF_ARC_OSABI_CURRENT	EF_ARC_OSABI_V3
+#else
+#define EF_ARC_OSABI_CURRENT	EF_ARC_OSABI_V4
+#endif
 
 typedef unsigned long elf_greg_t;
 typedef unsigned long elf_fpregset_t;
···
 	}
 
 	eflags = x->e_flags;
-	if ((eflags & EF_ARC_OSABI_MSK) < EF_ARC_OSABI_CURRENT) {
+	if ((eflags & EF_ARC_OSABI_MSK) != EF_ARC_OSABI_CURRENT) {
 		pr_err("ABI mismatch - you need newer toolchain\n");
 		force_sigsegv(SIGSEGV, current);
 		return 0;
arch/arc/kernel/setup.c (+4 -2)
···
 		       cpu->dccm.base_addr, TO_KB(cpu->dccm.sz),
 		       cpu->iccm.base_addr, TO_KB(cpu->iccm.sz));
 
-	n += scnprintf(buf + n, len - n,
-		       "OS ABI [v3]\t: no-legacy-syscalls\n");
+	n += scnprintf(buf + n, len - n, "OS ABI [v%d]\t: %s\n",
+		       EF_ARC_OSABI_CURRENT >> 8,
+		       EF_ARC_OSABI_CURRENT == EF_ARC_OSABI_V3 ?
+		       "no-legacy-syscalls" : "64-bit data any register aligned");
 
 	return buf;
 }
arch/arc/mm/cache.c (+9)
···
 
 	printk(arc_cache_mumbojumbo(0, str, sizeof(str)));
 
+	/*
+	 * Only master CPU needs to execute rest of function:
+	 *  - Assume SMP so all cores will have same cache config so
+	 *    any geometry checks will be same for all
+	 *  - IOC setup / dma callbacks only need to be setup once
+	 */
+	if (cpu)
+		return;
+
 	if (IS_ENABLED(CONFIG_ARC_HAS_ICACHE)) {
 		struct cpuinfo_arc_cache *ic = &cpuinfo_arc700[cpu].icache;
 
···
 	smp_rmb();
 
 	pfn = gfn_to_pfn_prot(kvm, gfn, write_fault, &writable);
-	if (is_error_pfn(pfn))
+	if (is_error_noslot_pfn(pfn))
 		return -EFAULT;
 
 	if (kvm_is_device_pfn(pfn)) {
arch/arm/mach-imx/gpc.c (+6)
···
 	for (i = 0; i < IMR_NUM; i++)
 		writel_relaxed(~0, gpc_base + GPC_IMR1 + i * 4);
 
+	/*
+	 * Clear the OF_POPULATED flag set in of_irq_init so that
+	 * later the GPC power domain driver will not be skipped.
+	 */
+	of_node_clear_flag(node, OF_POPULATED);
+
 	return 0;
 }
 IRQCHIP_DECLARE(imx_gpc, "fsl,imx6q-gpc", imx_gpc_init);
···
 {
 	void *ptr = (void *)__get_free_pages(PGALLOC_GFP, get_order(sz));
 
-	BUG_ON(!ptr);
+	if (!ptr || !pgtable_page_ctor(virt_to_page(ptr)))
+		BUG();
 	return ptr;
 }
 
···
 {
 	phys_addr_t memblock_limit = 0;
 	int highmem = 0;
-	phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1;
+	u64 vmalloc_limit;
 	struct memblock_region *reg;
 	bool should_use_highmem = false;
+
+	/*
+	 * Let's use our own (unoptimized) equivalent of __pa() that is
+	 * not affected by wrap-arounds when sizeof(phys_addr_t) == 4.
+	 * The result is used as the upper bound on physical memory address
+	 * and may itself be outside the valid range for which phys_addr_t
+	 * and therefore __pa() is defined.
+	 */
+	vmalloc_limit = (u64)(uintptr_t)vmalloc_min - PAGE_OFFSET + PHYS_OFFSET;
 
 	for_each_memblock(memory, reg) {
 		phys_addr_t block_start = reg->base;
···
 		if (reg->size > size_limit) {
 			phys_addr_t overlap_size = reg->size - size_limit;
 
-			pr_notice("Truncating RAM at %pa-%pa to -%pa",
-				  &block_start, &block_end, &vmalloc_limit);
-			memblock_remove(vmalloc_limit, overlap_size);
+			pr_notice("Truncating RAM at %pa-%pa",
+				  &block_start, &block_end);
 			block_end = vmalloc_limit;
+			pr_cont(" to -%pa", &block_end);
+			memblock_remove(vmalloc_limit, overlap_size);
 			should_use_highmem = true;
 		}
 	}
arch/arm/xen/enlighten.c (+1 -1)
···
 static struct vcpu_info __percpu *xen_vcpu_info;
 
 /* Linux <-> Xen vCPU id mapping */
-DEFINE_PER_CPU(int, xen_vcpu_id) = -1;
+DEFINE_PER_CPU(uint32_t, xen_vcpu_id);
 EXPORT_PER_CPU_SYMBOL(xen_vcpu_id);
 
 /* These are unused until we support booting "pre-ballooned" */
arch/arm64/kernel/head.S (+3)
···
 	isb
 	bl	__create_page_tables		// recreate kernel mapping
 
+	tlbi	vmalle1				// Remove any stale TLB entries
+	dsb	nsh
+
 	msr	sctlr_el1, x19			// re-enable the MMU
 	isb
 	ic	iallu				// flush instructions fetched
arch/arm64/kernel/sleep.S (+9 -1)
···
 	bl	el2_setup		// if in EL2 drop to EL1 cleanly
 	/* enable the MMU early - so we can access sleep_save_stash by va */
 	adr_l	lr, __enable_mmu	/* __cpu_setup will return here */
-	ldr	x27, =_cpu_resume	/* __enable_mmu will branch here */
+	adr_l	x27, _resume_switched	/* __enable_mmu will branch here */
 	adrp	x25, idmap_pg_dir
 	adrp	x26, swapper_pg_dir
 	b	__cpu_setup
 ENDPROC(cpu_resume)
+
+	.pushsection	".idmap.text", "ax"
+_resume_switched:
+	ldr	x8, =_cpu_resume
+	br	x8
+ENDPROC(_resume_switched)
+	.ltorg
+	.popsection
 
 ENTRY(_cpu_resume)
 	mrs	x1, mpidr_el1
arch/arm64/kvm/hyp/switch.c (+1 -1)
···
 
 	/*
 	 * We must restore the 32-bit state before the sysregs, thanks
-	 * to Cortex-A57 erratum #852523.
+	 * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72).
 	 */
 	__sysreg32_restore_state(vcpu);
 	__sysreg_restore_guest_state(guest_ctxt);
arch/arm64/kvm/sys_regs.c (+1 -9)
···
  * Architected system registers.
  * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
  *
- * We could trap ID_DFR0 and tell the guest we don't support performance
- * monitoring. Unfortunately the patch to make the kernel check ID_DFR0 was
- * NAKed, so it will read the PMCR anyway.
- *
- * Therefore we tell the guest we have 0 counters. Unfortunately, we
- * must always support PMCCNTR (the cycle counter): we just RAZ/WI for
- * all PM registers, which doesn't crash the guest kernel at least.
- *
  * Debug handling: We do trap most, if not all debug related system
  * registers. The implementation is good enough to ensure that a guest
  * can use these with minimal performance degradation. The drawback is
···
 	{ Op1( 0), CRn(10), CRm( 3), Op2( 1), access_vm_reg, NULL, c10_AMAIR1 },
 
 	/* ICC_SRE */
-	{ Op1( 0), CRn(12), CRm(12), Op2( 5), trap_raz_wi },
+	{ Op1( 0), CRn(12), CRm(12), Op2( 5), access_gic_sre },
 
 	{ Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID },
 
···
  */
 static inline unsigned long ___pa(unsigned long x)
 {
-	if (config_enabled(CONFIG_64BIT)) {
+	if (IS_ENABLED(CONFIG_64BIT)) {
 		/*
 		 * For MIPS64 the virtual address may either be in one of
 		 * the compatibility segements ckseg0 or ckseg1, or it may
···
 		return x < CKSEG0 ? XPHYSADDR(x) : CPHYSADDR(x);
 	}
 
-	if (!config_enabled(CONFIG_EVA)) {
+	if (!IS_ENABLED(CONFIG_EVA)) {
 		/*
 		 * We're using the standard MIPS32 legacy memory map, ie.
 		 * the address x is going to be in kseg0 or kseg1. We can
arch/mips/kvm/mmu.c (+1 -1)
···
 	srcu_idx = srcu_read_lock(&kvm->srcu);
 	pfn = gfn_to_pfn(kvm, gfn);
 
-	if (is_error_pfn(pfn)) {
+	if (is_error_noslot_pfn(pfn)) {
 		kvm_err("Couldn't get pfn for gfn %#llx!\n", gfn);
 		err = -EFAULT;
 		goto out;
arch/parisc/include/uapi/asm/errno.h (+2 -2)
···
 #define	ENOTCONN	235	/* Transport endpoint is not connected */
 #define	ESHUTDOWN	236	/* Cannot send after transport endpoint shutdown */
 #define	ETOOMANYREFS	237	/* Too many references: cannot splice */
-#define	EREFUSED	ECONNREFUSED	/* for HP's NFS apparently */
 #define	ETIMEDOUT	238	/* Connection timed out */
 #define	ECONNREFUSED	239	/* Connection refused */
+#define	EREFUSED	ECONNREFUSED	/* for HP's NFS apparently */
 #define	EREMOTERELEASE	240	/* Remote peer released connection */
 #define	EHOSTDOWN	241	/* Host is down */
 #define	EHOSTUNREACH	242	/* No route to host */
arch/parisc/kernel/processor.c (-8)
···
 
 DEFINE_PER_CPU(struct cpuinfo_parisc, cpu_data);
 
-extern int update_cr16_clocksource(void);	/* from time.c */
-
 /*
 **  	PARISC CPU driver - claim "device" and initialize CPU data structures.
 **
···
 		cpu_up(cpuid);
 	}
 #endif
-
-	/* If we've registered more than one cpu,
-	 * we'll use the jiffies clocksource since cr16
-	 * is not synchronized between CPUs.
-	 */
-	update_cr16_clocksource();
 
 	return 0;
 }
arch/parisc/kernel/time.c (-12)
···
 	.flags			= CLOCK_SOURCE_IS_CONTINUOUS,
 };
 
-int update_cr16_clocksource(void)
-{
-	/* since the cr16 cycle counters are not synchronized across CPUs,
-	   we'll check if we should switch to a safe clocksource: */
-	if (clocksource_cr16.rating != 0 && num_online_cpus() > 1) {
-		clocksource_change_rating(&clocksource_cr16, 0);
-		return 1;
-	}
-
-	return 0;
-}
-
 void __init start_cpu_itimer(void)
 {
 	unsigned int cpu = smp_processor_id();
arch/powerpc/include/asm/cputhreads.h (+1)
···
 
 #ifndef __ASSEMBLY__
 #include <linux/cpumask.h>
+#include <asm/cpu_has_feature.h>
 
 /*
  * Mapping of threads to cores
···
 tabort_syscall:
 	/* Firstly we need to enable TM in the kernel */
 	mfmsr	r10
-	li	r13, 1
-	rldimi	r10, r13, MSR_TM_LG, 63-MSR_TM_LG
+	li	r9, 1
+	rldimi	r10, r9, MSR_TM_LG, 63-MSR_TM_LG
 	mtmsrd	r10, 0
 
 	/* tabort, this dooms the transaction, nothing else */
-	li	r13, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
-	TABORT(R13)
+	li	r9, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
+	TABORT(R9)
 
 	/*
 	 * Return directly to userspace. We have corrupted user register state,
···
 	 * resume after the tbegin of the aborted transaction with the
 	 * checkpointed register state.
 	 */
-	li	r13, MSR_RI
-	andc	r10, r10, r13
+	li	r9, MSR_RI
+	andc	r10, r10, r9
 	mtmsrd	r10, 1
 	mtspr	SPRN_SRR0, r11
 	mtspr	SPRN_SRR1, r12
arch/powerpc/kernel/exceptions-64s.S (+24 -5)
···
 	EXCEPTION_PROLOG_0(PACA_EXMC)
 machine_check_pSeries_0:
 	EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST, 0x200)
-	EXCEPTION_PROLOG_PSERIES_1(machine_check_common, EXC_STD)
+	/*
+	 * The following is essentially EXCEPTION_PROLOG_PSERIES_1 with the
+	 * difference that MSR_RI is not enabled, because PACA_EXMC is being
+	 * used, so nested machine check corrupts it.  machine_check_common
+	 * enables MSR_RI.
+	 */
+	ld	r12,PACAKBASE(r13)
+	ld	r10,PACAKMSR(r13)
+	xori	r10,r10,MSR_RI
+	mfspr	r11,SPRN_SRR0
+	LOAD_HANDLER(r12, machine_check_common)
+	mtspr	SPRN_SRR0,r12
+	mfspr	r12,SPRN_SRR1
+	mtspr	SPRN_SRR1,r10
+	rfid
+	b	.	/* prevent speculative execution */
+
 	KVM_HANDLER_SKIP(PACA_EXMC, EXC_STD, 0x200)
 	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x300)
 	KVM_HANDLER_SKIP(PACA_EXSLB, EXC_STD, 0x380)
···
 machine_check_common:
 
 	mfspr	r10,SPRN_DAR
-	std	r10,PACA_EXGEN+EX_DAR(r13)
+	std	r10,PACA_EXMC+EX_DAR(r13)
 	mfspr	r10,SPRN_DSISR
-	stw	r10,PACA_EXGEN+EX_DSISR(r13)
+	stw	r10,PACA_EXMC+EX_DSISR(r13)
 	EXCEPTION_PROLOG_COMMON(0x200, PACA_EXMC)
 	FINISH_NAP
 	RECONCILE_IRQ_STATE(r10, r11)
-	ld	r3,PACA_EXGEN+EX_DAR(r13)
-	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
+	ld	r3,PACA_EXMC+EX_DAR(r13)
+	lwz	r4,PACA_EXMC+EX_DSISR(r13)
+	/* Enable MSR_RI when finished with PACA_EXMC */
+	li	r10,MSR_RI
+	mtmsrd	r10,1
 	std	r3,_DAR(r1)
 	std	r4,_DSISR(r1)
 	bl	save_nvgprs
···
 EXPORT_SYMBOL_GPL(pcibios_free_controller);

 /*
+ * This function is used to call pcibios_free_controller()
+ * in a deferred manner: a callback from the PCI subsystem.
+ *
+ * _*DO NOT*_ call pcibios_free_controller() explicitly if
+ * this is used (or it may access an invalid *phb pointer).
+ *
+ * The callback occurs when all references to the root bus
+ * are dropped (e.g., child buses/devices and their users).
+ *
+ * It's called as .release_fn() of 'struct pci_host_bridge'
+ * which is associated with the 'struct pci_controller.bus'
+ * (root bus) - it expects .release_data to hold a pointer
+ * to 'struct pci_controller'.
+ *
+ * In order to use it, register .release_fn()/release_data
+ * like this:
+ *
+ * pci_set_host_bridge_release(bridge,
+ *                             pcibios_free_controller_deferred
+ *                             (void *) phb);
+ *
+ * e.g. in the pcibios_root_bridge_prepare() callback from
+ * pci_create_root_bus().
+ */
+void pcibios_free_controller_deferred(struct pci_host_bridge *bridge)
+{
+	struct pci_controller *phb = (struct pci_controller *)
+				 bridge->release_data;
+
+	pr_debug("domain %d, dynamic %d\n", phb->global_number, phb->is_dynamic);
+
+	pcibios_free_controller(phb);
+}
+EXPORT_SYMBOL_GPL(pcibios_free_controller_deferred);
+
+/*
  * The function is used to return the minimal alignment
  * for memory or I/O windows of the associated P2P bridge.
  * By default, 4KiB alignment for I/O windows and 1MiB for
···
 		(regs->gpr[1] + __SIGNAL_FRAMESIZE + 16);
 	if (!access_ok(VERIFY_READ, rt_sf, sizeof(*rt_sf)))
 		goto bad;
+
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/*
+	 * If there is a transactional state then throw it away.
+	 * The purpose of a sigreturn is to destroy all traces of the
+	 * signal frame, this includes any transactional state created
+	 * within in. We only check for suspended as we can never be
+	 * active in the kernel, we are active, there is nothing better to
+	 * do than go ahead and Bad Thing later.
+	 * The cause is not important as there will never be a
+	 * recheckpoint so it's not user visible.
+	 */
+	if (MSR_TM_SUSPENDED(mfmsr()))
+		tm_reclaim_current(0);
+
 	if (__get_user(tmp, &rt_sf->uc.uc_link))
 		goto bad;
 	uc_transact = (struct ucontext __user *)(uintptr_t)tmp;
+14
arch/powerpc/kernel/signal_64.c
···
 	if (__copy_from_user(&set, &uc->uc_sigmask, sizeof(set)))
 		goto badframe;
 	set_current_blocked(&set);
+
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/*
+	 * If there is a transactional state then throw it away.
+	 * The purpose of a sigreturn is to destroy all traces of the
+	 * signal frame, this includes any transactional state created
+	 * within in. We only check for suspended as we can never be
+	 * active in the kernel, we are active, there is nothing better to
+	 * do than go ahead and Bad Thing later.
+	 * The cause is not important as there will never be a
+	 * recheckpoint so it's not user visible.
+	 */
+	if (MSR_TM_SUSPENDED(mfmsr()))
+		tm_reclaim_current(0);
+
 	if (__get_user(msr, &uc->uc_mcontext.gp_regs[PT_MSR]))
 		goto badframe;
 	if (MSR_TM_ACTIVE(msr)) {
+1-1
arch/powerpc/kernel/smp.c
···

 	/* Update sibling maps */
 	base = cpu_first_thread_sibling(cpu);
-	for (i = 0; i < threads_per_core; i++) {
+	for (i = 0; i < threads_per_core && base + i < nr_cpu_ids; i++) {
 		cpumask_clear_cpu(cpu, cpu_sibling_mask(base + i));
 		cpumask_clear_cpu(base + i, cpu_sibling_mask(cpu));
 		cpumask_clear_cpu(cpu, cpu_core_mask(base + i));
···
 	uint32_t dump_id, dump_size, dump_type;
 	struct dump_obj *dump;
 	char name[22];
+	struct kobject *kobj;

 	rc = dump_read_info(&dump_id, &dump_size, &dump_type);
 	if (rc != OPAL_SUCCESS)
···
 	 * that gracefully and not create two conflicting
 	 * entries.
 	 */
-	if (kset_find_obj(dump_kset, name))
+	kobj = kset_find_obj(dump_kset, name);
+	if (kobj) {
+		/* Drop reference added by kset_find_obj() */
+		kobject_put(kobj);
 		return 0;
+	}

 	dump = create_dump_obj(dump_id, dump_size, dump_type);
 	if (!dump)
+6-1
arch/powerpc/platforms/powernv/opal-elog.c
···
 	uint64_t elog_type;
 	int rc;
 	char name[2+16+1];
+	struct kobject *kobj;

 	rc = opal_get_elog_size(&id, &size, &type);
 	if (rc != OPAL_SUCCESS) {
···
 	 * that gracefully and not create two conflicting
 	 * entries.
 	 */
-	if (kset_find_obj(elog_kset, name))
+	kobj = kset_find_obj(elog_kset, name);
+	if (kobj) {
+		/* Drop reference added by kset_find_obj() */
+		kobject_put(kobj);
 		return IRQ_HANDLED;
+	}

 	create_elog_obj(log_id, elog_size, elog_type);
+1-1
arch/powerpc/platforms/powernv/pci-ioda.c
···

 static struct pnv_ioda_pe *pnv_ioda_alloc_pe(struct pnv_phb *phb)
 {
-	unsigned long pe = phb->ioda.total_pe_num - 1;
+	long pe;

 	for (pe = phb->ioda.total_pe_num - 1; pe >= 0; pe--) {
 		if (!test_and_set_bit(pe, phb->ioda.pe_alloc))
+4
arch/powerpc/platforms/pseries/pci.c
···

 	bus = bridge->bus;

+	/* Rely on the pcibios_free_controller_deferred() callback. */
+	pci_set_host_bridge_release(bridge, pcibios_free_controller_deferred,
+					(void *) pci_bus_to_host(bus));
+
 	dn = pcibios_get_phb_of_node(bus);
 	if (!dn)
 		return 0;
+5-2
arch/powerpc/platforms/pseries/pci_dlpar.c
···
 		release_resource(res);
 	}

-	/* Free pci_controller data structure */
-	pcibios_free_controller(phb);
+	/*
+	 * The pci_controller data structure is freed by
+	 * the pcibios_free_controller_deferred() callback;
+	 * see pseries_root_bridge_prepare().
+	 */

 	return 0;
 }
···
 #endif
 		}
 	} else if (MACHINE_IS_KVM) {
-		if (sclp.has_vt220 &&
-		    config_enabled(CONFIG_SCLP_VT220_CONSOLE))
+		if (sclp.has_vt220 && IS_ENABLED(CONFIG_SCLP_VT220_CONSOLE))
 			SET_CONSOLE_VT220;
-		else if (sclp.has_linemode &&
-			 config_enabled(CONFIG_SCLP_CONSOLE))
+		else if (sclp.has_linemode && IS_ENABLED(CONFIG_SCLP_CONSOLE))
 			SET_CONSOLE_SCLP;
 		else
 			SET_CONSOLE_HVC;
+1-1
arch/um/include/asm/common.lds.S
···
   .altinstr_replacement : { *(.altinstr_replacement) }
   /* .exit.text is discard at runtime, not link time, to deal with references
      from .altinstructions and .eh_frame */
-  .exit.text : { *(.exit.text) }
+  .exit.text : { EXIT_TEXT }
   .exit.data : { *(.exit.data) }

   .preinit_array : {
···
 /* Logical package management. We might want to allocate that dynamically */
 static int *physical_to_logical_pkg __read_mostly;
 static unsigned long *physical_package_map __read_mostly;;
-static unsigned long *logical_package_map  __read_mostly;
 static unsigned int max_physical_pkg_id __read_mostly;
 unsigned int __max_logical_packages __read_mostly;
 EXPORT_SYMBOL(__max_logical_packages);
+static unsigned int logical_packages __read_mostly;
+static bool logical_packages_frozen __read_mostly;

 /* Maximum number of SMT threads on any online core */
 int __max_smt_threads __read_mostly;
···
 	if (test_and_set_bit(pkg, physical_package_map))
 		goto found;

-	new = find_first_zero_bit(logical_package_map, __max_logical_packages);
-	if (new >= __max_logical_packages) {
+	if (logical_packages_frozen) {
 		physical_to_logical_pkg[pkg] = -1;
-		pr_warn("APIC(%x) Package %u exceeds logical package map\n",
+		pr_warn("APIC(%x) Package %u exceeds logical package max\n",
 			apicid, pkg);
 		return -ENOSPC;
 	}
-	set_bit(new, logical_package_map);
+
+	new = logical_packages++;
 	pr_info("APIC(%x) Converting physical %u to logical package %u\n",
 		apicid, pkg, new);
 	physical_to_logical_pkg[pkg] = new;
···
 	}

 	__max_logical_packages = DIV_ROUND_UP(total_cpus, ncpus);
+	logical_packages = 0;

 	/*
 	 * Possibly larger than what we need as the number of apic ids per
···
 	memset(physical_to_logical_pkg, 0xff, size);
 	size = BITS_TO_LONGS(max_physical_pkg_id) * sizeof(unsigned long);
 	physical_package_map = kzalloc(size, GFP_KERNEL);
-	size = BITS_TO_LONGS(__max_logical_packages) * sizeof(unsigned long);
-	logical_package_map = kzalloc(size, GFP_KERNEL);
-
-	pr_info("Max logical packages: %u\n", __max_logical_packages);

 	for_each_present_cpu(cpu) {
 		unsigned int apicid = apic->cpu_present_to_apicid(cpu);
···
 		set_cpu_possible(cpu, false);
 		set_cpu_present(cpu, false);
 	}
+
+	if (logical_packages > __max_logical_packages) {
+		pr_warn("Detected more packages (%u), then computed by BIOS data (%u).\n",
+			logical_packages, __max_logical_packages);
+		logical_packages_frozen = true;
+		__max_logical_packages = logical_packages;
+	}
+
+	pr_info("Max logical packages: %u\n", __max_logical_packages);
 }

 void __init smp_store_boot_cpu_info(void)
+69-67
arch/x86/kvm/vmx.c
···
 	struct list_head vmcs02_pool;
 	int vmcs02_num;
 	u64 vmcs01_tsc_offset;
+	bool change_vmcs01_virtual_x2apic_mode;
 	/* L2 must run next, and mustn't decide to exit to L1. */
 	bool nested_run_pending;
 	/*
···
 	struct pi_desc *pi_desc;
 	bool pi_pending;
 	u16 posted_intr_nv;
+
+	unsigned long *msr_bitmap;

 	struct hrtimer preemption_timer;
 	bool preemption_timer_expired;
···
 static unsigned long *vmx_msr_bitmap_longmode;
 static unsigned long *vmx_msr_bitmap_legacy_x2apic;
 static unsigned long *vmx_msr_bitmap_longmode_x2apic;
-static unsigned long *vmx_msr_bitmap_nested;
 static unsigned long *vmx_vmread_bitmap;
 static unsigned long *vmx_vmwrite_bitmap;
···
 			new.control) != old.control);
 }

+static void decache_tsc_multiplier(struct vcpu_vmx *vmx)
+{
+	vmx->current_tsc_ratio = vmx->vcpu.arch.tsc_scaling_ratio;
+	vmcs_write64(TSC_MULTIPLIER, vmx->current_tsc_ratio);
+}
+
 /*
  * Switches to specified vcpu, until a matching vcpu_put(), but assumes
  * vcpu mutex is already taken.
···
 		/* Setup TSC multiplier */
 		if (kvm_has_tsc_control &&
-		    vmx->current_tsc_ratio != vcpu->arch.tsc_scaling_ratio) {
-			vmx->current_tsc_ratio = vcpu->arch.tsc_scaling_ratio;
-			vmcs_write64(TSC_MULTIPLIER, vmx->current_tsc_ratio);
-		}
+		    vmx->current_tsc_ratio != vcpu->arch.tsc_scaling_ratio)
+			decache_tsc_multiplier(vmx);

 		vmx_vcpu_pi_load(vcpu, cpu);
 		vmx->host_pkru = read_pkru();
···
 	unsigned long *msr_bitmap;

 	if (is_guest_mode(vcpu))
-		msr_bitmap = vmx_msr_bitmap_nested;
+		msr_bitmap = to_vmx(vcpu)->nested.msr_bitmap;
 	else if (cpu_has_secondary_exec_ctrls() &&
 		 (vmcs_read32(SECONDARY_VM_EXEC_CONTROL) &
 		  SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE)) {
···
 	if (!vmx_msr_bitmap_longmode_x2apic)
 		goto out4;

-	if (nested) {
-		vmx_msr_bitmap_nested =
-			(unsigned long *)__get_free_page(GFP_KERNEL);
-		if (!vmx_msr_bitmap_nested)
-			goto out5;
-	}
-
 	vmx_vmread_bitmap = (unsigned long *)__get_free_page(GFP_KERNEL);
 	if (!vmx_vmread_bitmap)
 		goto out6;
···
 	memset(vmx_msr_bitmap_legacy, 0xff, PAGE_SIZE);
 	memset(vmx_msr_bitmap_longmode, 0xff, PAGE_SIZE);
-	if (nested)
-		memset(vmx_msr_bitmap_nested, 0xff, PAGE_SIZE);

 	if (setup_vmcs_config(&vmcs_config) < 0) {
 		r = -EIO;
···
 out7:
 	free_page((unsigned long)vmx_vmread_bitmap);
 out6:
-	if (nested)
-		free_page((unsigned long)vmx_msr_bitmap_nested);
-out5:
 	free_page((unsigned long)vmx_msr_bitmap_longmode_x2apic);
 out4:
 	free_page((unsigned long)vmx_msr_bitmap_longmode);
···
 	free_page((unsigned long)vmx_io_bitmap_a);
 	free_page((unsigned long)vmx_vmwrite_bitmap);
 	free_page((unsigned long)vmx_vmread_bitmap);
-	if (nested)
-		free_page((unsigned long)vmx_msr_bitmap_nested);

 	free_kvm_area();
 }
···
 		return 1;
 	}

+	if (cpu_has_vmx_msr_bitmap()) {
+		vmx->nested.msr_bitmap =
+				(unsigned long *)__get_free_page(GFP_KERNEL);
+		if (!vmx->nested.msr_bitmap)
+			goto out_msr_bitmap;
+	}
+
 	vmx->nested.cached_vmcs12 = kmalloc(VMCS12_SIZE, GFP_KERNEL);
 	if (!vmx->nested.cached_vmcs12)
-		return -ENOMEM;
+		goto out_cached_vmcs12;

 	if (enable_shadow_vmcs) {
 		shadow_vmcs = alloc_vmcs();
-		if (!shadow_vmcs) {
-			kfree(vmx->nested.cached_vmcs12);
-			return -ENOMEM;
-		}
+		if (!shadow_vmcs)
+			goto out_shadow_vmcs;
 		/* mark vmcs as shadow */
 		shadow_vmcs->revision_id |= (1u << 31);
 		/* init shadow vmcs */
···
 	skip_emulated_instruction(vcpu);
 	nested_vmx_succeed(vcpu);
 	return 1;
+
+out_shadow_vmcs:
+	kfree(vmx->nested.cached_vmcs12);
+
+out_cached_vmcs12:
+	free_page((unsigned long)vmx->nested.msr_bitmap);
+
+out_msr_bitmap:
+	return -ENOMEM;
 }

 /*
···
 	vmx->nested.vmxon = false;
 	free_vpid(vmx->nested.vpid02);
 	nested_release_vmcs12(vmx);
+	if (vmx->nested.msr_bitmap) {
+		free_page((unsigned long)vmx->nested.msr_bitmap);
+		vmx->nested.msr_bitmap = NULL;
+	}
 	if (enable_shadow_vmcs)
 		free_vmcs(vmx->nested.current_shadow_vmcs);
 	kfree(vmx->nested.cached_vmcs12);
···
 {
 	u32 sec_exec_control;

+	/* Postpone execution until vmcs01 is the current VMCS. */
+	if (is_guest_mode(vcpu)) {
+		to_vmx(vcpu)->nested.change_vmcs01_virtual_x2apic_mode = true;
+		return;
+	}
+
 	/*
 	 * There is not point to enable virtualize x2apic without enable
 	 * apicv
···
 {
 	int msr;
 	struct page *page;
-	unsigned long *msr_bitmap;
+	unsigned long *msr_bitmap_l1;
+	unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.msr_bitmap;

+	/* This shortcut is ok because we support only x2APIC MSRs so far. */
 	if (!nested_cpu_has_virt_x2apic_mode(vmcs12))
 		return false;

···
 		WARN_ON(1);
 		return false;
 	}
-	msr_bitmap = (unsigned long *)kmap(page);
-	if (!msr_bitmap) {
+	msr_bitmap_l1 = (unsigned long *)kmap(page);
+	if (!msr_bitmap_l1) {
 		nested_release_page_clean(page);
 		WARN_ON(1);
 		return false;
 	}

+	memset(msr_bitmap_l0, 0xff, PAGE_SIZE);
+
 	if (nested_cpu_has_virt_x2apic_mode(vmcs12)) {
 		if (nested_cpu_has_apic_reg_virt(vmcs12))
 			for (msr = 0x800; msr <= 0x8ff; msr++)
 				nested_vmx_disable_intercept_for_msr(
-					msr_bitmap,
-					vmx_msr_bitmap_nested,
+					msr_bitmap_l1, msr_bitmap_l0,
 					msr, MSR_TYPE_R);
-		/* TPR is allowed */
-		nested_vmx_disable_intercept_for_msr(msr_bitmap,
-				vmx_msr_bitmap_nested,
+
+		nested_vmx_disable_intercept_for_msr(
+				msr_bitmap_l1, msr_bitmap_l0,
 				APIC_BASE_MSR + (APIC_TASKPRI >> 4),
 				MSR_TYPE_R | MSR_TYPE_W);
+
 		if (nested_cpu_has_vid(vmcs12)) {
-			/* EOI and self-IPI are allowed */
 			nested_vmx_disable_intercept_for_msr(
-				msr_bitmap,
-				vmx_msr_bitmap_nested,
+				msr_bitmap_l1, msr_bitmap_l0,
 				APIC_BASE_MSR + (APIC_EOI >> 4),
 				MSR_TYPE_W);
 			nested_vmx_disable_intercept_for_msr(
-				msr_bitmap,
-				vmx_msr_bitmap_nested,
+				msr_bitmap_l1, msr_bitmap_l0,
 				APIC_BASE_MSR + (APIC_SELF_IPI >> 4),
 				MSR_TYPE_W);
 		}
-	} else {
-		/*
-		 * Enable reading intercept of all the x2apic
-		 * MSRs. We should not rely on vmcs12 to do any
-		 * optimizations here, it may have been modified
-		 * by L1.
-		 */
-		for (msr = 0x800; msr <= 0x8ff; msr++)
-			__vmx_enable_intercept_for_msr(
-					vmx_msr_bitmap_nested,
-					msr,
-					MSR_TYPE_R);
-
-		__vmx_enable_intercept_for_msr(
-				vmx_msr_bitmap_nested,
-				APIC_BASE_MSR + (APIC_TASKPRI >> 4),
-				MSR_TYPE_W);
-		__vmx_enable_intercept_for_msr(
-				vmx_msr_bitmap_nested,
-				APIC_BASE_MSR + (APIC_EOI >> 4),
-				MSR_TYPE_W);
-		__vmx_enable_intercept_for_msr(
-				vmx_msr_bitmap_nested,
-				APIC_BASE_MSR + (APIC_SELF_IPI >> 4),
-				MSR_TYPE_W);
 	}
 	kunmap(page);
 	nested_release_page_clean(page);
···
 	}

 	if (cpu_has_vmx_msr_bitmap() &&
-	    exec_control & CPU_BASED_USE_MSR_BITMAPS) {
-		nested_vmx_merge_msr_bitmap(vcpu, vmcs12);
-		/* MSR_BITMAP will be set by following vmx_set_efer. */
-	} else
+	    exec_control & CPU_BASED_USE_MSR_BITMAPS &&
+	    nested_vmx_merge_msr_bitmap(vcpu, vmcs12))
+		; /* MSR_BITMAP will be set by following vmx_set_efer. */
+	else
 		exec_control &= ~CPU_BASED_USE_MSR_BITMAPS;

 	/*
···
 			vmx->nested.vmcs01_tsc_offset + vmcs12->tsc_offset);
 	else
 		vmcs_write64(TSC_OFFSET, vmx->nested.vmcs01_tsc_offset);
+	if (kvm_has_tsc_control)
+		decache_tsc_multiplier(vmx);

 	if (enable_vpid) {
 		/*
···
 	else
 		vmcs_set_bits(PIN_BASED_VM_EXEC_CONTROL,
 			      PIN_BASED_VMX_PREEMPTION_TIMER);
+	if (kvm_has_tsc_control)
+		decache_tsc_multiplier(vmx);
+
+	if (vmx->nested.change_vmcs01_virtual_x2apic_mode) {
+		vmx->nested.change_vmcs01_virtual_x2apic_mode = false;
+		vmx_set_virtual_x2apic_mode(vcpu,
+				vcpu->arch.apic_base & X2APIC_ENABLE);
+	}

 	/* This is needed for same reason as it was needed in prepare_vmcs02 */
 	vmx->host_rsp = 0;
+1-1
arch/x86/mm/kaslr.c
···
  */
 static inline bool kaslr_memory_enabled(void)
 {
-	return kaslr_enabled() && !config_enabled(CONFIG_KASAN);
+	return kaslr_enabled() && !IS_ENABLED(CONFIG_KASAN);
}

 /* Initialize base and padding for each memory region randomized with KASLR */
+8-2
arch/x86/pci/vmd.c
···
  * @node:	list item for parent traversal.
  * @rcu:	RCU callback item for freeing.
  * @irq:	back pointer to parent.
+ * @enabled:	true if driver enabled IRQ
  * @virq:	the virtual IRQ value provided to the requesting driver.
  *
  * Every MSI/MSI-X IRQ requested for a device in a VMD domain will be mapped to
···
 	struct list_head	node;
 	struct rcu_head		rcu;
 	struct vmd_irq_list	*irq;
+	bool			enabled;
 	unsigned int		virq;
 };

···
 	unsigned long flags;

 	raw_spin_lock_irqsave(&list_lock, flags);
+	WARN_ON(vmdirq->enabled);
 	list_add_tail_rcu(&vmdirq->node, &vmdirq->irq->irq_list);
+	vmdirq->enabled = true;
 	raw_spin_unlock_irqrestore(&list_lock, flags);

 	data->chip->irq_unmask(data);
···
 	data->chip->irq_mask(data);

 	raw_spin_lock_irqsave(&list_lock, flags);
-	list_del_rcu(&vmdirq->node);
-	INIT_LIST_HEAD_RCU(&vmdirq->node);
+	if (vmdirq->enabled) {
+		list_del_rcu(&vmdirq->node);
+		vmdirq->enabled = false;
+	}
 	raw_spin_unlock_irqrestore(&list_lock, flags);
 }
···
 	bool do_split = true;
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
+	unsigned bvecs = 0;

 	bio_for_each_segment(bv, bio, iter) {
+		/*
+		 * With arbitrary bio size, the incoming bio may be very
+		 * big. We have to split the bio into small bios so that
+		 * each holds at most BIO_MAX_PAGES bvecs because
+		 * bio_clone() can fail to allocate big bvecs.
+		 *
+		 * It should have been better to apply the limit per
+		 * request queue in which bio_clone() is involved,
+		 * instead of globally. The biggest blocker is the
+		 * bio_clone() in bio bounce.
+		 *
+		 * If bio is splitted by this reason, we should have
+		 * allowed to continue bios merging, but don't do
+		 * that now for making the change simple.
+		 *
+		 * TODO: deal with bio bounce's bio_clone() gracefully
+		 * and convert the global limit into per-queue limit.
+		 */
+		if (bvecs++ >= BIO_MAX_PAGES)
+			goto split;
+
 		/*
 		 * If the queue doesn't support SG gaps and adding this
 		 * offset would create a gap, disallow it.
···
 	struct bio *split, *res;
 	unsigned nsegs;

-	if (bio_op(*bio) == REQ_OP_DISCARD)
+	switch (bio_op(*bio)) {
+	case REQ_OP_DISCARD:
+	case REQ_OP_SECURE_ERASE:
 		split = blk_bio_discard_split(q, *bio, bs, &nsegs);
-	else if (bio_op(*bio) == REQ_OP_WRITE_SAME)
+		break;
+	case REQ_OP_WRITE_SAME:
 		split = blk_bio_write_same_split(q, *bio, bs, &nsegs);
-	else
+		break;
+	default:
 		split = blk_bio_segment_split(q, *bio, q->bio_split, &nsegs);
+		break;
+	}

 	/* physical segments can be figured out during splitting */
 	res = split ? split : *bio;
···
 	 * This should probably be returning 0, but blk_add_request_payload()
 	 * (Christoph!!!!)
 	 */
-	if (bio_op(bio) == REQ_OP_DISCARD)
+	if (bio_op(bio) == REQ_OP_DISCARD || bio_op(bio) == REQ_OP_SECURE_ERASE)
 		return 1;

 	if (bio_op(bio) == REQ_OP_WRITE_SAME)
···
 	nsegs = 0;
 	cluster = blk_queue_cluster(q);

-	if (bio_op(bio) == REQ_OP_DISCARD) {
+	switch (bio_op(bio)) {
+	case REQ_OP_DISCARD:
+	case REQ_OP_SECURE_ERASE:
 		/*
 		 * This is a hack - drivers should be neither modifying the
 		 * biovec, nor relying on bi_vcnt - but because of
···
 		 * a payload we need to set up here (thank you Christoph) and
 		 * bi_vcnt is really the only way of telling if we need to.
 		 */
-
-		if (bio->bi_vcnt)
-			goto single_segment;
-
-		return 0;
-	}
-
-	if (bio_op(bio) == REQ_OP_WRITE_SAME) {
-single_segment:
+		if (!bio->bi_vcnt)
+			return 0;
+		/* Fall through */
+	case REQ_OP_WRITE_SAME:
 		*sg = sglist;
 		bvec = bio_iovec(bio);
 		sg_set_page(*sg, bvec.bv_page, bvec.bv_len, bvec.bv_offset);
 		return 1;
+	default:
+		break;
 	}

 	for_each_bio(bio)
···
 		return PTR_ERR(data->mck);
 	}

+	ret = clk_prepare_enable(data->mck);
+	if (ret) {
+		pr_err("Unable to enable mck\n");
+		return ret;
+	}
+
 	/* Get the interrupts property */
 	data->irq = irq_of_parse_and_map(node, 0);
 	if (!data->irq) {
+3
drivers/dax/pmem.c
···
 	if (rc)
 		return rc;

+	/* adjust the dax_region resource to the start of data */
+	res.start += le64_to_cpu(pfn_sb->dataoff);
+
 	nd_region = to_nd_region(dev->parent);
 	dax_region = alloc_dax_region(dev, nd_region->id, &res,
 			le32_to_cpu(pfn_sb->align), addr, PFN_DEV|PFN_MAP);
+8
drivers/edac/Kconfig
···
 	  Support for error detection and correction the Intel
 	  Sandy Bridge, Ivy Bridge and Haswell Integrated Memory Controllers.

+config EDAC_SKX
+	tristate "Intel Skylake server Integrated MC"
+	depends on EDAC_MM_EDAC && PCI && X86_64 && X86_MCE_INTEL
+	depends on PCI_MMCONFIG
+	help
+	  Support for error detection and correction the Intel
+	  Skylake server Integrated Memory Controllers.
+
 config EDAC_MPC85XX
 	tristate "Freescale MPC83xx / MPC85xx"
 	depends on EDAC_MM_EDAC && FSL_SOC
···
+/*
+ * EDAC driver for Intel(R) Xeon(R) Skylake processors
+ * Copyright (c) 2016, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/pci_ids.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/edac.h>
+#include <linux/mmzone.h>
+#include <linux/smp.h>
+#include <linux/bitmap.h>
+#include <linux/math64.h>
+#include <linux/mod_devicetable.h>
+#include <asm/cpu_device_id.h>
+#include <asm/processor.h>
+#include <asm/mce.h>
+
+#include "edac_core.h"
+
+#define SKX_REVISION    " Ver: 1.0 "
+
+/*
+ * Debug macros
+ */
+#define skx_printk(level, fmt, arg...)			\
+	edac_printk(level, "skx", fmt, ##arg)
+
+#define skx_mc_printk(mci, level, fmt, arg...)		\
+	edac_mc_chipset_printk(mci, level, "skx", fmt, ##arg)
+
+/*
+ * Get a bit field at register value <v>, from bit <lo> to bit <hi>
+ */
+#define GET_BITFIELD(v, lo, hi) \
+	(((v) & GENMASK_ULL((hi), (lo))) >> (lo))
+
+static LIST_HEAD(skx_edac_list);
+
+static u64 skx_tolm, skx_tohm;
+
+#define NUM_IMC			2	/* memory controllers per socket */
+#define NUM_CHANNELS		3	/* channels per memory controller */
+#define NUM_DIMMS		2	/* Max DIMMS per channel */
+
+#define	MASK26	0x3FFFFFF		/* Mask for 2^26 */
+#define MASK29	0x1FFFFFFF		/* Mask for 2^29 */
+
+/*
+ * Each cpu socket contains some pci devices that provide global
+ * information, and also some that are local to each of the two
+ * memory controllers on the die.
+ */
+struct skx_dev {
+	struct list_head	list;
+	u8			bus[4];
+	struct pci_dev	*sad_all;
+	struct pci_dev	*util_all;
+	u32	mcroute;
+	struct skx_imc {
+		struct mem_ctl_info *mci;
+		u8	mc;	/* system wide mc# */
+		u8	lmc;	/* socket relative mc# */
+		u8	src_id, node_id;
+		struct skx_channel {
+			struct pci_dev *cdev;
+			struct skx_dimm {
+				u8	close_pg;
+				u8	bank_xor_enable;
+				u8	fine_grain_bank;
+				u8	rowbits;
+				u8	colbits;
+			} dimms[NUM_DIMMS];
+		} chan[NUM_CHANNELS];
+	} imc[NUM_IMC];
+};
+static int skx_num_sockets;
+
+struct skx_pvt {
+	struct skx_imc	*imc;
+};
+
+struct decoded_addr {
+	struct skx_dev *dev;
+	u64	addr;
+	int	socket;
+	int	imc;
+	int	channel;
+	u64	chan_addr;
+	int	sktways;
+	int	chanways;
+	int	dimm;
+	int	rank;
+	int	channel_rank;
+	u64	rank_address;
+	int	row;
+	int	column;
+	int	bank_address;
+	int	bank_group;
+};
+
+static struct skx_dev *get_skx_dev(u8 bus, u8 idx)
+{
+	struct skx_dev *d;
+
+	list_for_each_entry(d, &skx_edac_list, list) {
+		if (d->bus[idx] == bus)
+			return d;
+	}
+
+	return NULL;
+}
+
+enum munittype {
+	CHAN0, CHAN1, CHAN2, SAD_ALL, UTIL_ALL, SAD
+};
+
+struct munit {
+	u16	did;
+	u16	devfn[NUM_IMC];
+	u8	busidx;
+	u8	per_socket;
+	enum munittype mtype;
+};
+
+/*
+ * List of PCI device ids that we need together with some device
+ * number and function numbers to tell which memory controller the
+ * device belongs to.
+ */
+static const struct munit skx_all_munits[] = {
+	{ 0x2054, { }, 1, 1, SAD_ALL },
+	{ 0x2055, { }, 1, 1, UTIL_ALL },
+	{ 0x2040, { PCI_DEVFN(10, 0), PCI_DEVFN(12, 0) }, 2, 2, CHAN0 },
+	{ 0x2044, { PCI_DEVFN(10, 4), PCI_DEVFN(12, 4) }, 2, 2, CHAN1 },
+	{ 0x2048, { PCI_DEVFN(11, 0), PCI_DEVFN(13, 0) }, 2, 2, CHAN2 },
+	{ 0x208e, { }, 1, 0, SAD },
+	{ }
+};
+
+/*
+ * We use the per-socket device 0x2016 to count how many sockets are present,
+ * and to detemine which PCI buses are associated with each socket. Allocate
+ * and build the full list of all the skx_dev structures that we need here.
+ */
+static int get_all_bus_mappings(void)
+{
+	struct pci_dev *pdev, *prev;
+	struct skx_dev *d;
+	u32 reg;
+	int ndev = 0;
+
+	prev = NULL;
+	for (;;) {
+		pdev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x2016, prev);
+		if (!pdev)
+			break;
+		ndev++;
+		d = kzalloc(sizeof(*d), GFP_KERNEL);
+		if (!d) {
+			pci_dev_put(pdev);
+			return -ENOMEM;
+		}
+		pci_read_config_dword(pdev, 0xCC, &reg);
+		d->bus[0] = GET_BITFIELD(reg, 0, 7);
+		d->bus[1] = GET_BITFIELD(reg, 8, 15);
+		d->bus[2] = GET_BITFIELD(reg, 16, 23);
+		d->bus[3] = GET_BITFIELD(reg, 24, 31);
+		edac_dbg(2, "busses: %x, %x, %x, %x\n",
+			 d->bus[0], d->bus[1], d->bus[2], d->bus[3]);
+		list_add_tail(&d->list, &skx_edac_list);
+		skx_num_sockets++;
+		prev = pdev;
+	}
+
+	return ndev;
+}
+
+static int get_all_munits(const struct munit *m)
+{
+	struct pci_dev *pdev, *prev;
+	struct skx_dev *d;
+	u32 reg;
+	int i = 0, ndev = 0;
+
+	prev = NULL;
+	for (;;) {
+		pdev = pci_get_device(PCI_VENDOR_ID_INTEL, m->did, prev);
+		if (!pdev)
+			break;
+		ndev++;
+		if (m->per_socket == NUM_IMC) {
+			for (i = 0; i < NUM_IMC; i++)
+				if (m->devfn[i] == pdev->devfn)
+					break;
+			if (i == NUM_IMC)
+				goto fail;
+		}
+		d = get_skx_dev(pdev->bus->number, m->busidx);
+		if (!d)
+			goto fail;
+
+		/* Be sure that the device is enabled */
+		if (unlikely(pci_enable_device(pdev) < 0)) {
+			skx_printk(KERN_ERR,
+				"Couldn't enable %04x:%04x\n", PCI_VENDOR_ID_INTEL, m->did);
+			goto fail;
+		}
+
+		switch (m->mtype) {
+		case CHAN0: case CHAN1: case CHAN2:
+			pci_dev_get(pdev);
+			d->imc[i].chan[m->mtype].cdev = pdev;
+			break;
+		case SAD_ALL:
+			pci_dev_get(pdev);
+			d->sad_all = pdev;
+			break;
+		case UTIL_ALL:
+			pci_dev_get(pdev);
+			d->util_all = pdev;
+			break;
+		case SAD:
+			/*
+			 * one of these devices per core, including cores
+			 * that don't exist on this SKU. Ignore any that
+			 * read a route table of zero, make sure all the
+			 * non-zero values match.
+			 */
+			pci_read_config_dword(pdev, 0xB4, &reg);
+			if (reg != 0) {
+				if (d->mcroute == 0)
+					d->mcroute = reg;
+				else if (d->mcroute != reg) {
+					skx_printk(KERN_ERR,
+						"mcroute mismatch\n");
+					goto fail;
+				}
+			}
+			ndev--;
+			break;
+		}
+
+		prev = pdev;
+	}
+
+	return ndev;
+fail:
+	pci_dev_put(pdev);
+	return -ENODEV;
+}
+
+const struct x86_cpu_id skx_cpuids[] = {
+	{ X86_VENDOR_INTEL, 6, 0x55, 0, 0 },	/* Skylake */
+	{ }
+};
+MODULE_DEVICE_TABLE(x86cpu, skx_cpuids);
+
+static u8 get_src_id(struct skx_dev *d)
+{
+	u32 reg;
+
+	pci_read_config_dword(d->util_all, 0xF0, &reg);
+
+	return GET_BITFIELD(reg, 12, 14);
+}
+
+static u8 skx_get_node_id(struct skx_dev *d)
+{
+	u32 reg;
+
+	pci_read_config_dword(d->util_all, 0xF4, &reg);
+
+	return GET_BITFIELD(reg, 0, 2);
+}
+
+static int get_dimm_attr(u32 reg, int lobit, int hibit, int add, int minval,
+			 int maxval, char *name)
+{
+	u32 val = GET_BITFIELD(reg, lobit, hibit);
+
+	if (val < minval || val > maxval) {
+		edac_dbg(2, "bad %s = %d (raw=%x)\n", name, val, reg);
+		return -EINVAL;
+	}
+	return val + add;
+}
+
+#define IS_DIMM_PRESENT(mtr)		GET_BITFIELD((mtr), 15, 15)
+
+#define numrank(reg) get_dimm_attr((reg), 12, 13, 0, 1, 2, "ranks")
+#define numrow(reg) get_dimm_attr((reg), 2, 4, 12, 1, 6, "rows")
+#define numcol(reg) get_dimm_attr((reg), 0, 1, 10, 0, 2, "cols")
+
+static int get_width(u32 mtr)
+{
+	switch (GET_BITFIELD(mtr, 8, 9)) {
+	case 0:
+		return DEV_X4;
+	case 1:
+		return DEV_X8;
+	case 2:
+		return DEV_X16;
+	}
+	return DEV_UNKNOWN;
+}
+
+static int skx_get_hi_lo(void)
+{
+	struct pci_dev *pdev;
+	u32 reg;
+
+	pdev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x2034, NULL);
+	if (!pdev) {
+		edac_dbg(0, "Can't get tolm/tohm\n");
+		return -ENODEV;
+	}
+
+	pci_read_config_dword(pdev, 0xD0, &reg);
+	skx_tolm = reg;
+	pci_read_config_dword(pdev, 0xD4, &reg);
+	skx_tohm = reg;
+	pci_read_config_dword(pdev, 0xD8, &reg);
+	skx_tohm |= (u64)reg << 32;
+
+	pci_dev_put(pdev);
+	edac_dbg(2, "tolm=%llx tohm=%llx\n", skx_tolm, skx_tohm);
+
+	return 0;
+}
+
+static int get_dimm_info(u32 mtr, u32 amap, struct dimm_info *dimm,
+			 struct skx_imc *imc, int chan, int dimmno)
+{
+	int banks = 16, ranks, rows, cols, npages;
+	u64 size;
+
+	if (!IS_DIMM_PRESENT(mtr))
+		return 0;
+	ranks = numrank(mtr);
+	rows = numrow(mtr);
+	cols = numcol(mtr);
+
+	/*
+	 * Compute size in 8-byte (2^3) words, then shift to MiB (2^20)
+	 */
+	size = ((1ull << (rows + cols + ranks)) * banks) >> (20 - 3);
+	npages = MiB_TO_PAGES(size);
+
+	edac_dbg(0, "mc#%d: channel %d, dimm %d, %lld Mb (%d pages) bank: %d, rank: %d, row: %#x, col: %#x\n",
+		 imc->mc, chan, dimmno, size, npages,
+		 banks, ranks, rows, cols);
+
+	imc->chan[chan].dimms[dimmno].close_pg = GET_BITFIELD(mtr, 0, 0);
+	imc->chan[chan].dimms[dimmno].bank_xor_enable = GET_BITFIELD(mtr, 9, 9);
+	imc->chan[chan].dimms[dimmno].fine_grain_bank = GET_BITFIELD(amap, 0, 0);
+	imc->chan[chan].dimms[dimmno].rowbits = rows;
+	imc->chan[chan].dimms[dimmno].colbits = cols;
+
+	dimm->nr_pages = npages;
+	dimm->grain = 32;
+	dimm->dtype = get_width(mtr);
+	dimm->mtype = MEM_DDR4;
+	dimm->edac_mode = EDAC_SECDED; /* likely better than this */
+	snprintf(dimm->label, sizeof(dimm->label), "CPU_SrcID#%u_MC#%u_Chan#%u_DIMM#%u",
+		 imc->src_id, imc->lmc, chan, dimmno);
+
+	return 1;
+}
+
+#define SKX_GET_MTMTR(dev, reg) \
+	pci_read_config_dword((dev), 0x87c, &reg)
+
+static bool skx_check_ecc(struct pci_dev *pdev)
+{
+	u32 mtmtr;
+
+	SKX_GET_MTMTR(pdev, mtmtr);
+
+	return !!GET_BITFIELD(mtmtr, 2, 2);
+}
+
+static int skx_get_dimm_config(struct mem_ctl_info *mci)
+{
+	struct skx_pvt *pvt = mci->pvt_info;
+	struct skx_imc *imc = pvt->imc;
+	struct dimm_info *dimm;
+	int i, j;
+	u32 mtr, amap;
+	int ndimms;
+
+	for (i = 0; i < NUM_CHANNELS; i++) {
+		ndimms = 0;
+		pci_read_config_dword(imc->chan[i].cdev, 0x8C, &amap);
+		for (j = 0; j < NUM_DIMMS; j++) {
+			dimm = EDAC_DIMM_PTR(mci->layers, mci->dimms,
+					     mci->n_layers, i, j, 0);
+			pci_read_config_dword(imc->chan[i].cdev,
+					0x80 + 4*j, &mtr);
+			ndimms += get_dimm_info(mtr, amap, dimm, imc, i, j);
+		}
+		if (ndimms && !skx_check_ecc(imc->chan[0].cdev)) {
+			skx_printk(KERN_ERR, "ECC is disabled on imc %d\n", imc->mc);
+			return -ENODEV;
+		}
+	}
+
+	return 0;
+}
+
+static void skx_unregister_mci(struct skx_imc *imc)
+{
+	struct mem_ctl_info *mci = imc->mci;
+
+	if (!mci)
+		return;
+
+	edac_dbg(0, "MC%d: mci = %p\n", imc->mc, mci);
+
+	/* Remove MC sysfs nodes */
+	edac_mc_del_mc(mci->pdev);
+
+	edac_dbg(1, "%s: free mci struct\n", mci->ctl_name);
+	kfree(mci->ctl_name);
+	edac_mc_free(mci);
+}
+
+static int skx_register_mci(struct skx_imc
*imc)441441+{442442+ struct mem_ctl_info *mci;443443+ struct edac_mc_layer layers[2];444444+ struct pci_dev *pdev = imc->chan[0].cdev;445445+ struct skx_pvt *pvt;446446+ int rc;447447+448448+ /* allocate a new MC control structure */449449+ layers[0].type = EDAC_MC_LAYER_CHANNEL;450450+ layers[0].size = NUM_CHANNELS;451451+ layers[0].is_virt_csrow = false;452452+ layers[1].type = EDAC_MC_LAYER_SLOT;453453+ layers[1].size = NUM_DIMMS;454454+ layers[1].is_virt_csrow = true;455455+ mci = edac_mc_alloc(imc->mc, ARRAY_SIZE(layers), layers,456456+ sizeof(struct skx_pvt));457457+458458+ if (unlikely(!mci))459459+ return -ENOMEM;460460+461461+ edac_dbg(0, "MC#%d: mci = %p\n", imc->mc, mci);462462+463463+ /* Associate skx_dev and mci for future usage */464464+ imc->mci = mci;465465+ pvt = mci->pvt_info;466466+ pvt->imc = imc;467467+468468+ mci->ctl_name = kasprintf(GFP_KERNEL, "Skylake Socket#%d IMC#%d",469469+ imc->node_id, imc->lmc);470470+ mci->mtype_cap = MEM_FLAG_DDR4;471471+ mci->edac_ctl_cap = EDAC_FLAG_NONE;472472+ mci->edac_cap = EDAC_FLAG_NONE;473473+ mci->mod_name = "skx_edac.c";474474+ mci->dev_name = pci_name(imc->chan[0].cdev);475475+ mci->mod_ver = SKX_REVISION;476476+ mci->ctl_page_to_phys = NULL;477477+478478+ rc = skx_get_dimm_config(mci);479479+ if (rc < 0)480480+ goto fail;481481+482482+ /* record ptr to the generic device */483483+ mci->pdev = &pdev->dev;484484+485485+ /* add this new MC control structure to EDAC's list of MCs */486486+ if (unlikely(edac_mc_add_mc(mci))) {487487+ edac_dbg(0, "MC: failed edac_mc_add_mc()\n");488488+ rc = -EINVAL;489489+ goto fail;490490+ }491491+492492+ return 0;493493+494494+fail:495495+ kfree(mci->ctl_name);496496+ edac_mc_free(mci);497497+ imc->mci = NULL;498498+ return rc;499499+}500500+501501+#define SKX_MAX_SAD 24502502+503503+#define SKX_GET_SAD(d, i, reg) \504504+ pci_read_config_dword((d)->sad_all, 0x60 + 8 * (i), ®)505505+#define SKX_GET_ILV(d, i, reg) \506506+ pci_read_config_dword((d)->sad_all, 0x64 + 8 * 
(i), ®)507507+508508+#define SKX_SAD_MOD3MODE(sad) GET_BITFIELD((sad), 30, 31)509509+#define SKX_SAD_MOD3(sad) GET_BITFIELD((sad), 27, 27)510510+#define SKX_SAD_LIMIT(sad) (((u64)GET_BITFIELD((sad), 7, 26) << 26) | MASK26)511511+#define SKX_SAD_MOD3ASMOD2(sad) GET_BITFIELD((sad), 5, 6)512512+#define SKX_SAD_ATTR(sad) GET_BITFIELD((sad), 3, 4)513513+#define SKX_SAD_INTERLEAVE(sad) GET_BITFIELD((sad), 1, 2)514514+#define SKX_SAD_ENABLE(sad) GET_BITFIELD((sad), 0, 0)515515+516516+#define SKX_ILV_REMOTE(tgt) (((tgt) & 8) == 0)517517+#define SKX_ILV_TARGET(tgt) ((tgt) & 7)518518+519519+static bool skx_sad_decode(struct decoded_addr *res)520520+{521521+ struct skx_dev *d = list_first_entry(&skx_edac_list, typeof(*d), list);522522+ u64 addr = res->addr;523523+ int i, idx, tgt, lchan, shift;524524+ u32 sad, ilv;525525+ u64 limit, prev_limit;526526+ int remote = 0;527527+528528+ /* Simple sanity check for I/O space or out of range */529529+ if (addr >= skx_tohm || (addr >= skx_tolm && addr < BIT_ULL(32))) {530530+ edac_dbg(0, "Address %llx out of range\n", addr);531531+ return false;532532+ }533533+534534+restart:535535+ prev_limit = 0;536536+ for (i = 0; i < SKX_MAX_SAD; i++) {537537+ SKX_GET_SAD(d, i, sad);538538+ limit = SKX_SAD_LIMIT(sad);539539+ if (SKX_SAD_ENABLE(sad)) {540540+ if (addr >= prev_limit && addr <= limit)541541+ goto sad_found;542542+ }543543+ prev_limit = limit + 1;544544+ }545545+ edac_dbg(0, "No SAD entry for %llx\n", addr);546546+ return false;547547+548548+sad_found:549549+ SKX_GET_ILV(d, i, ilv);550550+551551+ switch (SKX_SAD_INTERLEAVE(sad)) {552552+ case 0:553553+ idx = GET_BITFIELD(addr, 6, 8);554554+ break;555555+ case 1:556556+ idx = GET_BITFIELD(addr, 8, 10);557557+ break;558558+ case 2:559559+ idx = GET_BITFIELD(addr, 12, 14);560560+ break;561561+ case 3:562562+ idx = GET_BITFIELD(addr, 30, 32);563563+ break;564564+ }565565+566566+ tgt = GET_BITFIELD(ilv, 4 * idx, 4 * idx + 3);567567+568568+ /* If point to another node, find it and start over 
*/569569+ if (SKX_ILV_REMOTE(tgt)) {570570+ if (remote) {571571+ edac_dbg(0, "Double remote!\n");572572+ return false;573573+ }574574+ remote = 1;575575+ list_for_each_entry(d, &skx_edac_list, list) {576576+ if (d->imc[0].src_id == SKX_ILV_TARGET(tgt))577577+ goto restart;578578+ }579579+ edac_dbg(0, "Can't find node %d\n", SKX_ILV_TARGET(tgt));580580+ return false;581581+ }582582+583583+ if (SKX_SAD_MOD3(sad) == 0)584584+ lchan = SKX_ILV_TARGET(tgt);585585+ else {586586+ switch (SKX_SAD_MOD3MODE(sad)) {587587+ case 0:588588+ shift = 6;589589+ break;590590+ case 1:591591+ shift = 8;592592+ break;593593+ case 2:594594+ shift = 12;595595+ break;596596+ default:597597+ edac_dbg(0, "illegal mod3mode\n");598598+ return false;599599+ }600600+ switch (SKX_SAD_MOD3ASMOD2(sad)) {601601+ case 0:602602+ lchan = (addr >> shift) % 3;603603+ break;604604+ case 1:605605+ lchan = (addr >> shift) % 2;606606+ break;607607+ case 2:608608+ lchan = (addr >> shift) % 2;609609+ lchan = (lchan << 1) | ~lchan;610610+ break;611611+ case 3:612612+ lchan = ((addr >> shift) % 2) << 1;613613+ break;614614+ }615615+ lchan = (lchan << 1) | (SKX_ILV_TARGET(tgt) & 1);616616+ }617617+618618+ res->dev = d;619619+ res->socket = d->imc[0].src_id;620620+ res->imc = GET_BITFIELD(d->mcroute, lchan * 3, lchan * 3 + 2);621621+ res->channel = GET_BITFIELD(d->mcroute, lchan * 2 + 18, lchan * 2 + 19);622622+623623+ edac_dbg(2, "%llx: socket=%d imc=%d channel=%d\n",624624+ res->addr, res->socket, res->imc, res->channel);625625+ return true;626626+}627627+628628+#define SKX_MAX_TAD 8629629+630630+#define SKX_GET_TADBASE(d, mc, i, reg) \631631+ pci_read_config_dword((d)->imc[mc].chan[0].cdev, 0x850 + 4 * (i), ®)632632+#define SKX_GET_TADWAYNESS(d, mc, i, reg) \633633+ pci_read_config_dword((d)->imc[mc].chan[0].cdev, 0x880 + 4 * (i), ®)634634+#define SKX_GET_TADCHNILVOFFSET(d, mc, ch, i, reg) \635635+ pci_read_config_dword((d)->imc[mc].chan[ch].cdev, 0x90 + 4 * (i), ®)636636+637637+#define SKX_TAD_BASE(b) 
((u64)GET_BITFIELD((b), 12, 31) << 26)638638+#define SKX_TAD_SKT_GRAN(b) GET_BITFIELD((b), 4, 5)639639+#define SKX_TAD_CHN_GRAN(b) GET_BITFIELD((b), 6, 7)640640+#define SKX_TAD_LIMIT(b) (((u64)GET_BITFIELD((b), 12, 31) << 26) | MASK26)641641+#define SKX_TAD_OFFSET(b) ((u64)GET_BITFIELD((b), 4, 23) << 26)642642+#define SKX_TAD_SKTWAYS(b) (1 << GET_BITFIELD((b), 10, 11))643643+#define SKX_TAD_CHNWAYS(b) (GET_BITFIELD((b), 8, 9) + 1)644644+645645+/* which bit used for both socket and channel interleave */646646+static int skx_granularity[] = { 6, 8, 12, 30 };647647+648648+static u64 skx_do_interleave(u64 addr, int shift, int ways, u64 lowbits)649649+{650650+ addr >>= shift;651651+ addr /= ways;652652+ addr <<= shift;653653+654654+ return addr | (lowbits & ((1ull << shift) - 1));655655+}656656+657657+static bool skx_tad_decode(struct decoded_addr *res)658658+{659659+ int i;660660+ u32 base, wayness, chnilvoffset;661661+ int skt_interleave_bit, chn_interleave_bit;662662+ u64 channel_addr;663663+664664+ for (i = 0; i < SKX_MAX_TAD; i++) {665665+ SKX_GET_TADBASE(res->dev, res->imc, i, base);666666+ SKX_GET_TADWAYNESS(res->dev, res->imc, i, wayness);667667+ if (SKX_TAD_BASE(base) <= res->addr && res->addr <= SKX_TAD_LIMIT(wayness))668668+ goto tad_found;669669+ }670670+ edac_dbg(0, "No TAD entry for %llx\n", res->addr);671671+ return false;672672+673673+tad_found:674674+ res->sktways = SKX_TAD_SKTWAYS(wayness);675675+ res->chanways = SKX_TAD_CHNWAYS(wayness);676676+ skt_interleave_bit = skx_granularity[SKX_TAD_SKT_GRAN(base)];677677+ chn_interleave_bit = skx_granularity[SKX_TAD_CHN_GRAN(base)];678678+679679+ SKX_GET_TADCHNILVOFFSET(res->dev, res->imc, res->channel, i, chnilvoffset);680680+ channel_addr = res->addr - SKX_TAD_OFFSET(chnilvoffset);681681+682682+ if (res->chanways == 3 && skt_interleave_bit > chn_interleave_bit) {683683+ /* Must handle channel first, then socket */684684+ channel_addr = skx_do_interleave(channel_addr, chn_interleave_bit,685685+ res->chanways, 
channel_addr);686686+ channel_addr = skx_do_interleave(channel_addr, skt_interleave_bit,687687+ res->sktways, channel_addr);688688+ } else {689689+ /* Handle socket then channel. Preserve low bits from original address */690690+ channel_addr = skx_do_interleave(channel_addr, skt_interleave_bit,691691+ res->sktways, res->addr);692692+ channel_addr = skx_do_interleave(channel_addr, chn_interleave_bit,693693+ res->chanways, res->addr);694694+ }695695+696696+ res->chan_addr = channel_addr;697697+698698+ edac_dbg(2, "%llx: chan_addr=%llx sktways=%d chanways=%d\n",699699+ res->addr, res->chan_addr, res->sktways, res->chanways);700700+ return true;701701+}702702+703703+#define SKX_MAX_RIR 4704704+705705+#define SKX_GET_RIRWAYNESS(d, mc, ch, i, reg) \706706+ pci_read_config_dword((d)->imc[mc].chan[ch].cdev, \707707+ 0x108 + 4 * (i), ®)708708+#define SKX_GET_RIRILV(d, mc, ch, idx, i, reg) \709709+ pci_read_config_dword((d)->imc[mc].chan[ch].cdev, \710710+ 0x120 + 16 * idx + 4 * (i), ®)711711+712712+#define SKX_RIR_VALID(b) GET_BITFIELD((b), 31, 31)713713+#define SKX_RIR_LIMIT(b) (((u64)GET_BITFIELD((b), 1, 11) << 29) | MASK29)714714+#define SKX_RIR_WAYS(b) (1 << GET_BITFIELD((b), 28, 29))715715+#define SKX_RIR_CHAN_RANK(b) GET_BITFIELD((b), 16, 19)716716+#define SKX_RIR_OFFSET(b) ((u64)(GET_BITFIELD((b), 2, 15) << 26))717717+718718+static bool skx_rir_decode(struct decoded_addr *res)719719+{720720+ int i, idx, chan_rank;721721+ int shift;722722+ u32 rirway, rirlv;723723+ u64 rank_addr, prev_limit = 0, limit;724724+725725+ if (res->dev->imc[res->imc].chan[res->channel].dimms[0].close_pg)726726+ shift = 6;727727+ else728728+ shift = 13;729729+730730+ for (i = 0; i < SKX_MAX_RIR; i++) {731731+ SKX_GET_RIRWAYNESS(res->dev, res->imc, res->channel, i, rirway);732732+ limit = SKX_RIR_LIMIT(rirway);733733+ if (SKX_RIR_VALID(rirway)) {734734+ if (prev_limit <= res->chan_addr &&735735+ res->chan_addr <= limit)736736+ goto rir_found;737737+ }738738+ prev_limit = limit;739739+ }740740+ 
edac_dbg(0, "No RIR entry for %llx\n", res->addr);741741+ return false;742742+743743+rir_found:744744+ rank_addr = res->chan_addr >> shift;745745+ rank_addr /= SKX_RIR_WAYS(rirway);746746+ rank_addr <<= shift;747747+ rank_addr |= res->chan_addr & GENMASK_ULL(shift - 1, 0);748748+749749+ res->rank_address = rank_addr;750750+ idx = (res->chan_addr >> shift) % SKX_RIR_WAYS(rirway);751751+752752+ SKX_GET_RIRILV(res->dev, res->imc, res->channel, idx, i, rirlv);753753+ res->rank_address = rank_addr - SKX_RIR_OFFSET(rirlv);754754+ chan_rank = SKX_RIR_CHAN_RANK(rirlv);755755+ res->channel_rank = chan_rank;756756+ res->dimm = chan_rank / 4;757757+ res->rank = chan_rank % 4;758758+759759+ edac_dbg(2, "%llx: dimm=%d rank=%d chan_rank=%d rank_addr=%llx\n",760760+ res->addr, res->dimm, res->rank,761761+ res->channel_rank, res->rank_address);762762+ return true;763763+}764764+765765+static u8 skx_close_row[] = {766766+ 15, 16, 17, 18, 20, 21, 22, 28, 10, 11, 12, 13, 29, 30, 31, 32, 33767767+};768768+static u8 skx_close_column[] = {769769+ 3, 4, 5, 14, 19, 23, 24, 25, 26, 27770770+};771771+static u8 skx_open_row[] = {772772+ 14, 15, 16, 20, 28, 21, 22, 23, 24, 25, 26, 27, 29, 30, 31, 32, 33773773+};774774+static u8 skx_open_column[] = {775775+ 3, 4, 5, 6, 7, 8, 9, 10, 11, 12776776+};777777+static u8 skx_open_fine_column[] = {778778+ 3, 4, 5, 7, 8, 9, 10, 11, 12, 13779779+};780780+781781+static int skx_bits(u64 addr, int nbits, u8 *bits)782782+{783783+ int i, res = 0;784784+785785+ for (i = 0; i < nbits; i++)786786+ res |= ((addr >> bits[i]) & 1) << i;787787+ return res;788788+}789789+790790+static int skx_bank_bits(u64 addr, int b0, int b1, int do_xor, int x0, int x1)791791+{792792+ int ret = GET_BITFIELD(addr, b0, b0) | (GET_BITFIELD(addr, b1, b1) << 1);793793+794794+ if (do_xor)795795+ ret ^= GET_BITFIELD(addr, x0, x0) | (GET_BITFIELD(addr, x1, x1) << 1);796796+797797+ return ret;798798+}799799+800800+static bool skx_mad_decode(struct decoded_addr *r)801801+{802802+ struct 
skx_dimm *dimm = &r->dev->imc[r->imc].chan[r->channel].dimms[r->dimm];803803+ int bg0 = dimm->fine_grain_bank ? 6 : 13;804804+805805+ if (dimm->close_pg) {806806+ r->row = skx_bits(r->rank_address, dimm->rowbits, skx_close_row);807807+ r->column = skx_bits(r->rank_address, dimm->colbits, skx_close_column);808808+ r->column |= 0x400; /* C10 is autoprecharge, always set */809809+ r->bank_address = skx_bank_bits(r->rank_address, 8, 9, dimm->bank_xor_enable, 22, 28);810810+ r->bank_group = skx_bank_bits(r->rank_address, 6, 7, dimm->bank_xor_enable, 20, 21);811811+ } else {812812+ r->row = skx_bits(r->rank_address, dimm->rowbits, skx_open_row);813813+ if (dimm->fine_grain_bank)814814+ r->column = skx_bits(r->rank_address, dimm->colbits, skx_open_fine_column);815815+ else816816+ r->column = skx_bits(r->rank_address, dimm->colbits, skx_open_column);817817+ r->bank_address = skx_bank_bits(r->rank_address, 18, 19, dimm->bank_xor_enable, 22, 23);818818+ r->bank_group = skx_bank_bits(r->rank_address, bg0, 17, dimm->bank_xor_enable, 20, 21);819819+ }820820+ r->row &= (1u << dimm->rowbits) - 1;821821+822822+ edac_dbg(2, "%llx: row=%x col=%x bank_addr=%d bank_group=%d\n",823823+ r->addr, r->row, r->column, r->bank_address,824824+ r->bank_group);825825+ return true;826826+}827827+828828+static bool skx_decode(struct decoded_addr *res)829829+{830830+831831+ return skx_sad_decode(res) && skx_tad_decode(res) &&832832+ skx_rir_decode(res) && skx_mad_decode(res);833833+}834834+835835+#ifdef CONFIG_EDAC_DEBUG836836+/*837837+ * Debug feature. 
Make /sys/kernel/debug/skx_edac_test/addr.838838+ * Write an address to this file to exercise the address decode839839+ * logic in this driver.840840+ */841841+static struct dentry *skx_test;842842+static u64 skx_fake_addr;843843+844844+static int debugfs_u64_set(void *data, u64 val)845845+{846846+ struct decoded_addr res;847847+848848+ res.addr = val;849849+ skx_decode(&res);850850+851851+ return 0;852852+}853853+854854+DEFINE_SIMPLE_ATTRIBUTE(fops_u64_wo, NULL, debugfs_u64_set, "%llu\n");855855+856856+static struct dentry *mydebugfs_create(const char *name, umode_t mode,857857+ struct dentry *parent, u64 *value)858858+{859859+ return debugfs_create_file(name, mode, parent, value, &fops_u64_wo);860860+}861861+862862+static void setup_skx_debug(void)863863+{864864+ skx_test = debugfs_create_dir("skx_edac_test", NULL);865865+ mydebugfs_create("addr", S_IWUSR, skx_test, &skx_fake_addr);866866+}867867+868868+static void teardown_skx_debug(void)869869+{870870+ debugfs_remove_recursive(skx_test);871871+}872872+#else873873+static void setup_skx_debug(void)874874+{875875+}876876+877877+static void teardown_skx_debug(void)878878+{879879+}880880+#endif /*CONFIG_EDAC_DEBUG*/881881+882882+static void skx_mce_output_error(struct mem_ctl_info *mci,883883+ const struct mce *m,884884+ struct decoded_addr *res)885885+{886886+ enum hw_event_mc_err_type tp_event;887887+ char *type, *optype, msg[256];888888+ bool ripv = GET_BITFIELD(m->mcgstatus, 0, 0);889889+ bool overflow = GET_BITFIELD(m->status, 62, 62);890890+ bool uncorrected_error = GET_BITFIELD(m->status, 61, 61);891891+ bool recoverable;892892+ u32 core_err_cnt = GET_BITFIELD(m->status, 38, 52);893893+ u32 mscod = GET_BITFIELD(m->status, 16, 31);894894+ u32 errcode = GET_BITFIELD(m->status, 0, 15);895895+ u32 optypenum = GET_BITFIELD(m->status, 4, 6);896896+897897+ recoverable = GET_BITFIELD(m->status, 56, 56);898898+899899+ if (uncorrected_error) {900900+ if (ripv) {901901+ type = "FATAL";902902+ tp_event = 
HW_EVENT_ERR_FATAL;903903+ } else {904904+ type = "NON_FATAL";905905+ tp_event = HW_EVENT_ERR_UNCORRECTED;906906+ }907907+ } else {908908+ type = "CORRECTED";909909+ tp_event = HW_EVENT_ERR_CORRECTED;910910+ }911911+912912+ /*913913+ * According with Table 15-9 of the Intel Architecture spec vol 3A,914914+ * memory errors should fit in this mask:915915+ * 000f 0000 1mmm cccc (binary)916916+ * where:917917+ * f = Correction Report Filtering Bit. If 1, subsequent errors918918+ * won't be shown919919+ * mmm = error type920920+ * cccc = channel921921+ * If the mask doesn't match, report an error to the parsing logic922922+ */923923+ if (!((errcode & 0xef80) == 0x80)) {924924+ optype = "Can't parse: it is not a mem";925925+ } else {926926+ switch (optypenum) {927927+ case 0:928928+ optype = "generic undef request error";929929+ break;930930+ case 1:931931+ optype = "memory read error";932932+ break;933933+ case 2:934934+ optype = "memory write error";935935+ break;936936+ case 3:937937+ optype = "addr/cmd error";938938+ break;939939+ case 4:940940+ optype = "memory scrubbing error";941941+ break;942942+ default:943943+ optype = "reserved";944944+ break;945945+ }946946+ }947947+948948+ snprintf(msg, sizeof(msg),949949+ "%s%s err_code:%04x:%04x socket:%d imc:%d rank:%d bg:%d ba:%d row:%x col:%x",950950+ overflow ? " OVERFLOW" : "",951951+ (uncorrected_error && recoverable) ? 
" recoverable" : "",952952+ mscod, errcode,953953+ res->socket, res->imc, res->rank,954954+ res->bank_group, res->bank_address, res->row, res->column);955955+956956+ edac_dbg(0, "%s\n", msg);957957+958958+ /* Call the helper to output message */959959+ edac_mc_handle_error(tp_event, mci, core_err_cnt,960960+ m->addr >> PAGE_SHIFT, m->addr & ~PAGE_MASK, 0,961961+ res->channel, res->dimm, -1,962962+ optype, msg);963963+}964964+965965+static int skx_mce_check_error(struct notifier_block *nb, unsigned long val,966966+ void *data)967967+{968968+ struct mce *mce = (struct mce *)data;969969+ struct decoded_addr res;970970+ struct mem_ctl_info *mci;971971+ char *type;972972+973973+ if (get_edac_report_status() == EDAC_REPORTING_DISABLED)974974+ return NOTIFY_DONE;975975+976976+ /* ignore unless this is memory related with an address */977977+ if ((mce->status & 0xefff) >> 7 != 1 || !(mce->status & MCI_STATUS_ADDRV))978978+ return NOTIFY_DONE;979979+980980+ res.addr = mce->addr;981981+ if (!skx_decode(&res))982982+ return NOTIFY_DONE;983983+ mci = res.dev->imc[res.imc].mci;984984+985985+ if (mce->mcgstatus & MCG_STATUS_MCIP)986986+ type = "Exception";987987+ else988988+ type = "Event";989989+990990+ skx_mc_printk(mci, KERN_DEBUG, "HANDLING MCE MEMORY ERROR\n");991991+992992+ skx_mc_printk(mci, KERN_DEBUG, "CPU %d: Machine Check %s: %Lx "993993+ "Bank %d: %016Lx\n", mce->extcpu, type,994994+ mce->mcgstatus, mce->bank, mce->status);995995+ skx_mc_printk(mci, KERN_DEBUG, "TSC %llx ", mce->tsc);996996+ skx_mc_printk(mci, KERN_DEBUG, "ADDR %llx ", mce->addr);997997+ skx_mc_printk(mci, KERN_DEBUG, "MISC %llx ", mce->misc);998998+999999+ skx_mc_printk(mci, KERN_DEBUG, "PROCESSOR %u:%x TIME %llu SOCKET "10001000+ "%u APIC %x\n", mce->cpuvendor, mce->cpuid,10011001+ mce->time, mce->socketid, mce->apicid);10021002+10031003+ skx_mce_output_error(mci, mce, &res);10041004+10051005+ return NOTIFY_DONE;10061006+}10071007+10081008+static struct notifier_block skx_mce_dec = {10091009+ 
.notifier_call = skx_mce_check_error,10101010+};10111011+10121012+static void skx_remove(void)10131013+{10141014+ int i, j;10151015+ struct skx_dev *d, *tmp;10161016+10171017+ edac_dbg(0, "\n");10181018+10191019+ list_for_each_entry_safe(d, tmp, &skx_edac_list, list) {10201020+ list_del(&d->list);10211021+ for (i = 0; i < NUM_IMC; i++) {10221022+ skx_unregister_mci(&d->imc[i]);10231023+ for (j = 0; j < NUM_CHANNELS; j++)10241024+ pci_dev_put(d->imc[i].chan[j].cdev);10251025+ }10261026+ pci_dev_put(d->util_all);10271027+ pci_dev_put(d->sad_all);10281028+10291029+ kfree(d);10301030+ }10311031+}10321032+10331033+/*10341034+ * skx_init:10351035+ * make sure we are running on the correct cpu model10361036+ * search for all the devices we need10371037+ * check which DIMMs are present.10381038+ */10391039+int __init skx_init(void)10401040+{10411041+ const struct x86_cpu_id *id;10421042+ const struct munit *m;10431043+ int rc = 0, i;10441044+ u8 mc = 0, src_id, node_id;10451045+ struct skx_dev *d;10461046+10471047+ edac_dbg(2, "\n");10481048+10491049+ id = x86_match_cpu(skx_cpuids);10501050+ if (!id)10511051+ return -ENODEV;10521052+10531053+ rc = skx_get_hi_lo();10541054+ if (rc)10551055+ return rc;10561056+10571057+ rc = get_all_bus_mappings();10581058+ if (rc < 0)10591059+ goto fail;10601060+ if (rc == 0) {10611061+ edac_dbg(2, "No memory controllers found\n");10621062+ return -ENODEV;10631063+ }10641064+10651065+ for (m = skx_all_munits; m->did; m++) {10661066+ rc = get_all_munits(m);10671067+ if (rc < 0)10681068+ goto fail;10691069+ if (rc != m->per_socket * skx_num_sockets) {10701070+ edac_dbg(2, "Expected %d, got %d of %x\n",10711071+ m->per_socket * skx_num_sockets, rc, m->did);10721072+ rc = -ENODEV;10731073+ goto fail;10741074+ }10751075+ }10761076+10771077+ list_for_each_entry(d, &skx_edac_list, list) {10781078+ src_id = get_src_id(d);10791079+ node_id = skx_get_node_id(d);10801080+ edac_dbg(2, "src_id=%d node_id=%d\n", src_id, node_id);10811081+ for (i = 0; i < 
NUM_IMC; i++) {10821082+ d->imc[i].mc = mc++;10831083+ d->imc[i].lmc = i;10841084+ d->imc[i].src_id = src_id;10851085+ d->imc[i].node_id = node_id;10861086+ rc = skx_register_mci(&d->imc[i]);10871087+ if (rc < 0)10881088+ goto fail;10891089+ }10901090+ }10911091+10921092+ /* Ensure that the OPSTATE is set correctly for POLL or NMI */10931093+ opstate_init();10941094+10951095+ setup_skx_debug();10961096+10971097+ mce_register_decode_chain(&skx_mce_dec);10981098+10991099+ return 0;11001100+fail:11011101+ skx_remove();11021102+ return rc;11031103+}11041104+11051105+static void __exit skx_exit(void)11061106+{11071107+ edac_dbg(2, "\n");11081108+ mce_unregister_decode_chain(&skx_mce_dec);11091109+ skx_remove();11101110+ teardown_skx_debug();11111111+}11121112+11131113+module_init(skx_init);11141114+module_exit(skx_exit);11151115+11161116+module_param(edac_op_state, int, 0444);11171117+MODULE_PARM_DESC(edac_op_state, "EDAC Error Reporting state: 0=Poll,1=NMI");11181118+11191119+MODULE_LICENSE("GPL v2");11201120+MODULE_AUTHOR("Tony Luck");11211121+MODULE_DESCRIPTION("MC Driver for Intel Skylake server processors");
+6-5
drivers/gpio/Kconfig
···
 config OF_GPIO
 	def_bool y
 	depends on OF
+	depends on HAS_IOMEM
 
 config GPIO_ACPI
 	def_bool y
···
 config GPIO_ETRAXFS
 	bool "Axis ETRAX FS General I/O"
 	depends on CRIS || COMPILE_TEST
-	depends on OF
+	depends on OF_GPIO
 	select GPIO_GENERIC
 	select GPIOLIB_IRQCHIP
 	help
···
 
 config GPIO_GRGPIO
 	tristate "Aeroflex Gaisler GRGPIO support"
-	depends on OF
+	depends on OF_GPIO
 	select GPIO_GENERIC
 	select IRQ_DOMAIN
 	help
···
 config GPIO_MVEBU
 	def_bool y
 	depends on PLAT_ORION
-	depends on OF
+	depends on OF_GPIO
 	select GENERIC_IRQ_CHIP
 
 config GPIO_MXC
···
 	bool "NVIDIA Tegra GPIO support"
 	default ARCH_TEGRA
 	depends on ARCH_TEGRA || COMPILE_TEST
-	depends on OF
+	depends on OF_GPIO
 	help
 	  Say yes here to support GPIO pins on NVIDIA Tegra SoCs.
 
···
 
 config GPIO_74X164
 	tristate "74x164 serial-in/parallel-out 8-bits shift register"
-	depends on OF
+	depends on OF_GPIO
 	help
 	  Driver for 74x164 compatible serial-in/parallel-out 8-outputs
 	  shift registers. This driver can be used to provide access
+4-4
drivers/gpio/gpio-max730x.c
···
 	ts->chip.parent = dev;
 	ts->chip.owner = THIS_MODULE;
 
+	ret = gpiochip_add_data(&ts->chip, ts);
+	if (ret)
+		goto exit_destroy;
+
 	/*
 	 * initialize pullups according to platform data and cache the
 	 * register values for later use.
···
 			goto exit_destroy;
 		}
 	}
-
-	ret = gpiochip_add_data(&ts->chip, ts);
-	if (ret)
-		goto exit_destroy;
 
 	return ret;
 
···
 			(le16_to_cpu(path->usConnObjectId) &
 			 OBJECT_TYPE_MASK) >> OBJECT_TYPE_SHIFT;
 
+			/* Skip TV/CV support */
+			if ((le16_to_cpu(path->usDeviceTag) ==
+			     ATOM_DEVICE_TV1_SUPPORT) ||
+			    (le16_to_cpu(path->usDeviceTag) ==
+			     ATOM_DEVICE_CV_SUPPORT))
+				continue;
+
+			if (con_obj_id >= ARRAY_SIZE(object_connector_convert)) {
+				DRM_ERROR("invalid con_obj_id %d for device tag 0x%04x\n",
+					  con_obj_id, le16_to_cpu(path->usDeviceTag));
+				continue;
+			}
+
 			connector_type =
 				object_connector_convert[con_obj_id];
 			connector_object_id = con_obj_id;
-9
drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
···
 	atpx->is_hybrid = false;
 	if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) {
 		printk("ATPX Hybrid Graphics\n");
-#if 1
-		/* This is a temporary hack until the D3 cold support
-		 * makes it upstream. The ATPX power_control method seems
-		 * to still work on even if the system should be using
-		 * the new standardized hybrid D3 cold ACPI interface.
-		 */
-		atpx->functions.power_cntl = true;
-#else
 		atpx->functions.power_cntl = false;
-#endif
 		atpx->is_hybrid = true;
 	}
 
+2-2
drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
···
  * Unbinds the requested pages from the gart page table and
  * replaces them with the dummy page (all asics).
  */
-void amdgpu_gart_unbind(struct amdgpu_device *adev, unsigned offset,
+void amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
 			int pages)
 {
 	unsigned t;
···
  * (all asics).
  * Returns 0 for success, -EINVAL for failure.
  */
-int amdgpu_gart_bind(struct amdgpu_device *adev, unsigned offset,
+int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
 		     int pages, struct page **pagelist, dma_addr_t *dma_addr,
 		     uint32_t flags)
 {
···
 	struct drm_pending_vblank_event *e = NULL;
 	int ret = -EINVAL;
 
+	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+		return -EINVAL;
+
 	if (page_flip->flags & ~DRM_MODE_PAGE_FLIP_FLAGS ||
 	    page_flip->reserved != 0)
 		return -EINVAL;
+1-1
drivers/gpu/drm/drm_fb_helper.c
···
 
 	/* Sometimes user space wants everything disabled, so don't steal the
 	 * display if there's a master. */
-	if (lockless_dereference(dev->master))
+	if (READ_ONCE(dev->master))
 		return false;
 
 	drm_for_each_crtc(crtc, dev) {
···
 
 	struct i915_ctx_hang_stats hang_stats;
 
-	/* Unique identifier for this context, used by the hw for tracking */
 	unsigned long flags;
 #define CONTEXT_NO_ZEROMAP		BIT(0)
 #define CONTEXT_NO_ERROR_CAPTURE	BIT(1)
-	unsigned hw_id;
+
+	/* Unique identifier for this context, used by the hw for tracking */
+	unsigned int hw_id;
 	u32 user_handle;
 
 	u32 ggtt_alignment;
···
 	enum modeset_restore modeset_restore;
 	struct mutex modeset_restore_lock;
 	struct drm_atomic_state *modeset_restore_state;
+	struct drm_modeset_acquire_ctx reset_ctx;
 
 	struct list_head vm_list; /* Global list of all address spaces */
 	struct i915_ggtt ggtt; /* VM representing the global address space */
···
 	bool suspended_to_idle;
 	struct i915_suspend_saved_registers regfile;
 	struct vlv_s0ix_state vlv_s0ix_state;
+
+	enum {
+		I915_SKL_SAGV_UNKNOWN = 0,
+		I915_SKL_SAGV_DISABLED,
+		I915_SKL_SAGV_ENABLED,
+		I915_SKL_SAGV_NOT_CONTROLLED
+	} skl_sagv_status;
 
 	struct {
 		/*
···
 /* belongs in i915_gem_gtt.h */
 static inline void i915_gem_chipset_flush(struct drm_i915_private *dev_priv)
 {
+	wmb();
 	if (INTEL_GEN(dev_priv) < 6)
 		intel_gtt_chipset_flush();
 }
+8-2
drivers/gpu/drm/i915/i915_gem.c
···
 	ret = i915_gem_shmem_pread(dev, obj, args, file);
 
 	/* pread for non shmem backed objects */
-	if (ret == -EFAULT || ret == -ENODEV)
+	if (ret == -EFAULT || ret == -ENODEV) {
+		intel_runtime_pm_get(to_i915(dev));
 		ret = i915_gem_gtt_pread(dev, obj, args->size,
 			args->offset, args->data_ptr);
+		intel_runtime_pm_put(to_i915(dev));
+	}
 
 out:
 	drm_gem_object_unreference(&obj->base);
···
 	 * textures). Fallback to the shmem path in that case. */
 	}
 
-	if (ret == -EFAULT) {
+	if (ret == -EFAULT || ret == -ENOSPC) {
 		if (obj->phys_handle)
 			ret = i915_gem_phys_pwrite(obj, args, file);
 		else if (i915_gem_object_has_struct_page(obj))
···
 	}
 
 	intel_ring_init_seqno(engine, engine->last_submitted_seqno);
+
+	engine->i915->gt.active_engines &= ~intel_engine_flag(engine);
 }
 
 void i915_gem_reset(struct drm_device *dev)
···
 
 	for_each_engine(engine, dev_priv)
 		i915_gem_reset_engine_cleanup(engine);
+	mod_delayed_work(dev_priv->wq, &dev_priv->gt.idle_work, 0);
 
 	i915_gem_context_reset(dev);
 
+3-10
drivers/gpu/drm/i915/i915_gem_execbuffer.c
···
 {
 	const unsigned other_rings = ~intel_engine_flag(req->engine);
 	struct i915_vma *vma;
-	uint32_t flush_domains = 0;
-	bool flush_chipset = false;
 	int ret;
 
 	list_for_each_entry(vma, vmas, exec_list) {
···
 		}
 
 		if (obj->base.write_domain & I915_GEM_DOMAIN_CPU)
-			flush_chipset |= i915_gem_clflush_object(obj, false);
-
-		flush_domains |= obj->base.write_domain;
+			i915_gem_clflush_object(obj, false);
 	}
 
-	if (flush_chipset)
-		i915_gem_chipset_flush(req->engine->i915);
-
-	if (flush_domains & I915_GEM_DOMAIN_GTT)
-		wmb();
+	/* Unconditionally flush any chipset caches (for streaming writes). */
+	i915_gem_chipset_flush(req->engine->i915);
 
 	/* Unconditionally invalidate gpu caches and ensure that we do flush
 	 * any residual writes from the previous batch.
drivers/gpu/drm/i915/intel_ddi.c
···
 static const struct ddi_buf_trans skl_u_ddi_translations_dp[] = {
 	{ 0x0000201B, 0x000000A2, 0x0 },
 	{ 0x00005012, 0x00000088, 0x0 },
-	{ 0x80007011, 0x000000CD, 0x0 },
+	{ 0x80007011, 0x000000CD, 0x1 },
 	{ 0x80009010, 0x000000C0, 0x1 },
 	{ 0x0000201B, 0x0000009D, 0x0 },
 	{ 0x80005012, 0x000000C0, 0x1 },
···
 static const struct ddi_buf_trans skl_y_ddi_translations_dp[] = {
 	{ 0x00000018, 0x000000A2, 0x0 },
 	{ 0x00005012, 0x00000088, 0x0 },
-	{ 0x80007011, 0x000000CD, 0x0 },
+	{ 0x80007011, 0x000000CD, 0x3 },
 	{ 0x80009010, 0x000000C0, 0x3 },
 	{ 0x00000018, 0x0000009D, 0x0 },
 	{ 0x80005012, 0x000000C0, 0x3 },
···
 	}
 }
 
+static int intel_ddi_hdmi_level(struct drm_i915_private *dev_priv, enum port port)
+{
+	int n_hdmi_entries;
+	int hdmi_level;
+	int hdmi_default_entry;
+
+	hdmi_level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
+
+	if (IS_BROXTON(dev_priv))
+		return hdmi_level;
+
+	if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
+		skl_get_buf_trans_hdmi(dev_priv, &n_hdmi_entries);
+		hdmi_default_entry = 8;
+	} else if (IS_BROADWELL(dev_priv)) {
+		n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
+		hdmi_default_entry = 7;
+	} else if (IS_HASWELL(dev_priv)) {
+		n_hdmi_entries = ARRAY_SIZE(hsw_ddi_translations_hdmi);
+		hdmi_default_entry = 6;
+	} else {
+		WARN(1, "ddi translation table missing\n");
+		n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
+		hdmi_default_entry = 7;
+	}
+
+	/* Choose a good default if VBT is badly populated */
+	if (hdmi_level == HDMI_LEVEL_SHIFT_UNKNOWN ||
+	    hdmi_level >= n_hdmi_entries)
+		hdmi_level = hdmi_default_entry;
+
+	return hdmi_level;
+}
+
 /*
  * Starting with Haswell, DDI port buffers must be programmed with correct
  * values in advance. The buffer values are different for FDI and DP modes,
···
 {
 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
 	u32 iboost_bit = 0;
-	int i, n_hdmi_entries, n_dp_entries, n_edp_entries, hdmi_default_entry,
+	int i, n_hdmi_entries, n_dp_entries, n_edp_entries,
 	    size;
 	int hdmi_level;
 	enum port port;
···
 	const struct ddi_buf_trans *ddi_translations;
 
 	port = intel_ddi_get_encoder_port(encoder);
-	hdmi_level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
+	hdmi_level = intel_ddi_hdmi_level(dev_priv, port);
 
 	if (IS_BROXTON(dev_priv)) {
 		if (encoder->type != INTEL_OUTPUT_HDMI)
···
 			skl_get_buf_trans_edp(dev_priv, &n_edp_entries);
 		ddi_translations_hdmi =
 				skl_get_buf_trans_hdmi(dev_priv, &n_hdmi_entries);
-		hdmi_default_entry = 8;
 		/* If we're boosting the current, set bit 31 of trans1 */
 		if (dev_priv->vbt.ddi_port_info[port].hdmi_boost_level ||
 		    dev_priv->vbt.ddi_port_info[port].dp_boost_level)
···
 
 		n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
 		n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
-		hdmi_default_entry = 7;
 	} else if (IS_HASWELL(dev_priv)) {
 		ddi_translations_fdi = hsw_ddi_translations_fdi;
 		ddi_translations_dp = hsw_ddi_translations_dp;
···
 		ddi_translations_hdmi = hsw_ddi_translations_hdmi;
 		n_dp_entries = n_edp_entries = ARRAY_SIZE(hsw_ddi_translations_dp);
 		n_hdmi_entries = ARRAY_SIZE(hsw_ddi_translations_hdmi);
-		hdmi_default_entry = 6;
 	} else {
 		WARN(1, "ddi translation table missing\n");
 		ddi_translations_edp = bdw_ddi_translations_dp;
···
 		n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp);
 		n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
 		n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
-		hdmi_default_entry = 7;
 	}
 
 	switch (encoder->type) {
···
 
 	if (encoder->type != INTEL_OUTPUT_HDMI)
 		return;
-
-	/* Choose a good default if VBT is badly populated */
-	if (hdmi_level == HDMI_LEVEL_SHIFT_UNKNOWN ||
-	    hdmi_level >= n_hdmi_entries)
-		hdmi_level = hdmi_default_entry;
 
 	/* Entry 9 is for HDMI: */
 	I915_WRITE(DDI_BUF_TRANS_LO(port, i),
···
 			       TRANS_CLK_SEL_DISABLED);
 }
 
-static void skl_ddi_set_iboost(struct drm_i915_private *dev_priv,
-			       u32 level, enum port port, int type)
+static void _skl_ddi_set_iboost(struct drm_i915_private *dev_priv,
+				enum port port, uint8_t iboost)
 {
+	u32 tmp;
+
+	tmp = I915_READ(DISPIO_CR_TX_BMU_CR0);
+	tmp &= ~(BALANCE_LEG_MASK(port) | BALANCE_LEG_DISABLE(port));
+	if (iboost)
+		tmp |= iboost << BALANCE_LEG_SHIFT(port);
+	else
+		tmp |= BALANCE_LEG_DISABLE(port);
+	I915_WRITE(DISPIO_CR_TX_BMU_CR0, tmp);
+}
+
+static void skl_ddi_set_iboost(struct intel_encoder *encoder, u32 level)
+{
+	struct intel_digital_port *intel_dig_port = enc_to_dig_port(&encoder->base);
+	struct drm_i915_private *dev_priv = to_i915(intel_dig_port->base.base.dev);
+	enum port port = intel_dig_port->port;
+	int type = encoder->type;
 	const struct ddi_buf_trans *ddi_translations;
 	uint8_t iboost;
 	uint8_t dp_iboost, hdmi_iboost;
 	int n_entries;
-	u32 reg;
 
 	/* VBT may override standard boost values */
 	dp_iboost = dev_priv->vbt.ddi_port_info[port].dp_boost_level;
···
 		return;
 	}
 
-	reg = I915_READ(DISPIO_CR_TX_BMU_CR0);
-	reg &= ~BALANCE_LEG_MASK(port);
-	reg &= ~(1 << (BALANCE_LEG_DISABLE_SHIFT + port));
+	_skl_ddi_set_iboost(dev_priv, port, iboost);
 
-	if (iboost)
-		reg |= iboost << BALANCE_LEG_SHIFT(port);
-	else
-		reg |= 1 << (BALANCE_LEG_DISABLE_SHIFT + port);
-
-	I915_WRITE(DISPIO_CR_TX_BMU_CR0, reg);
+	if (port == PORT_A && intel_dig_port->max_lanes == 4)
+		_skl_ddi_set_iboost(dev_priv, PORT_E, iboost);
 }
 
 static void bxt_ddi_vswing_sequence(struct drm_i915_private *dev_priv,
···
 	level = translate_signal_level(signal_levels);
 
 	if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
-		skl_ddi_set_iboost(dev_priv, level, port, encoder->type);
+		skl_ddi_set_iboost(encoder, level);
 	else if (IS_BROXTON(dev_priv))
 		bxt_ddi_vswing_sequence(dev_priv, level, port, encoder->type);
···
 		intel_dp_stop_link_train(intel_dp);
 	} else if (type == INTEL_OUTPUT_HDMI) {
 		struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder);
+		int level = intel_ddi_hdmi_level(dev_priv, port);
+
+		if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
+			skl_ddi_set_iboost(intel_encoder, level);
 
 		intel_hdmi->set_infoframes(encoder,
 					   crtc->config->has_hdmi_sink,
+125-64
drivers/gpu/drm/i915/intel_display.c
···
 
 	for_each_crtc(dev, crtc) {
 		struct intel_plane *plane = to_intel_plane(crtc->primary);
-		struct intel_plane_state *plane_state;
-
-		drm_modeset_lock_crtc(crtc, &plane->base);
-		plane_state = to_intel_plane_state(plane->base.state);
+		struct intel_plane_state *plane_state =
+			to_intel_plane_state(plane->base.state);
 
 		if (plane_state->visible)
 			plane->update_plane(&plane->base,
 					    to_intel_crtc_state(crtc->state),
 					    plane_state);
-
-		drm_modeset_unlock_crtc(crtc);
 	}
+}
+
+static int
+__intel_display_resume(struct drm_device *dev,
+		       struct drm_atomic_state *state)
+{
+	struct drm_crtc_state *crtc_state;
+	struct drm_crtc *crtc;
+	int i, ret;
+
+	intel_modeset_setup_hw_state(dev);
+	i915_redisable_vga(dev);
+
+	if (!state)
+		return 0;
+
+	for_each_crtc_in_state(state, crtc, crtc_state, i) {
+		/*
+		 * Force recalculation even if we restore
+		 * current state. With fast modeset this may not result
+		 * in a modeset when the state is compatible.
+		 */
+		crtc_state->mode_changed = true;
+	}
+
+	/* ignore any reset values/BIOS leftovers in the WM registers */
+	to_intel_atomic_state(state)->skip_intermediate_wm = true;
+
+	ret = drm_atomic_commit(state);
+
+	WARN_ON(ret == -EDEADLK);
+	return ret;
 }
 
 void intel_prepare_reset(struct drm_i915_private *dev_priv)
 {
+	struct drm_device *dev = &dev_priv->drm;
+	struct drm_modeset_acquire_ctx *ctx = &dev_priv->reset_ctx;
+	struct drm_atomic_state *state;
+	int ret;
+
 	/* no reset support for gen2 */
 	if (IS_GEN2(dev_priv))
 		return;
 
-	/* reset doesn't touch the display */
+	/*
+	 * Need mode_config.mutex so that we don't
+	 * trample ongoing ->detect() and whatnot.
+	 */
+	mutex_lock(&dev->mode_config.mutex);
+	drm_modeset_acquire_init(ctx, 0);
+	while (1) {
+		ret = drm_modeset_lock_all_ctx(dev, ctx);
+		if (ret != -EDEADLK)
+			break;
+
+		drm_modeset_backoff(ctx);
+	}
+
+	/* reset doesn't touch the display, but flips might get nuked anyway, */
 	if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv))
 		return;
 
-	drm_modeset_lock_all(&dev_priv->drm);
 	/*
 	 * Disabling the crtcs gracefully seems nicer. Also the
 	 * g33 docs say we should at least disable all the planes.
 	 */
-	intel_display_suspend(&dev_priv->drm);
+	state = drm_atomic_helper_duplicate_state(dev, ctx);
+	if (IS_ERR(state)) {
+		ret = PTR_ERR(state);
+		state = NULL;
+		DRM_ERROR("Duplicating state failed with %i\n", ret);
+		goto err;
+	}
+
+	ret = drm_atomic_helper_disable_all(dev, ctx);
+	if (ret) {
+		DRM_ERROR("Suspending crtc's failed with %i\n", ret);
+		goto err;
+	}
+
+	dev_priv->modeset_restore_state = state;
+	state->acquire_ctx = ctx;
+	return;
+
+err:
+	drm_atomic_state_free(state);
 }
 
 void intel_finish_reset(struct drm_i915_private *dev_priv)
 {
+	struct drm_device *dev = &dev_priv->drm;
+	struct drm_modeset_acquire_ctx *ctx = &dev_priv->reset_ctx;
+	struct drm_atomic_state *state = dev_priv->modeset_restore_state;
+	int ret;
+
 	/*
 	 * Flips in the rings will be nuked by the reset,
 	 * so complete all pending flips so that user space
···
 	/* no reset support for gen2 */
 	if (IS_GEN2(dev_priv))
 		return;
+
+	dev_priv->modeset_restore_state = NULL;
 
 	/* reset doesn't touch the display */
 	if (INTEL_GEN(dev_priv) >= 5 || IS_G4X(dev_priv)) {
···
 		 * FIXME: Atomic will make this obsolete since we won't schedule
 		 * CS-based flips (which might get lost in gpu resets) any more.
 		 */
-		intel_update_primary_planes(&dev_priv->drm);
-		return;
+		intel_update_primary_planes(dev);
+	} else {
+		/*
+		 * The display has been reset as well,
+		 * so need a full re-initialization.
+		 */
+		intel_runtime_pm_disable_interrupts(dev_priv);
+		intel_runtime_pm_enable_interrupts(dev_priv);
+
+		intel_modeset_init_hw(dev);
+
+		spin_lock_irq(&dev_priv->irq_lock);
+		if (dev_priv->display.hpd_irq_setup)
+			dev_priv->display.hpd_irq_setup(dev_priv);
+		spin_unlock_irq(&dev_priv->irq_lock);
+
+		ret = __intel_display_resume(dev, state);
+		if (ret)
+			DRM_ERROR("Restoring old state failed with %i\n", ret);
+
+		intel_hpd_init(dev_priv);
 	}
 
-	/*
-	 * The display has been reset as well,
-	 * so need a full re-initialization.
-	 */
-	intel_runtime_pm_disable_interrupts(dev_priv);
-	intel_runtime_pm_enable_interrupts(dev_priv);
-
-	intel_modeset_init_hw(&dev_priv->drm);
-
-	spin_lock_irq(&dev_priv->irq_lock);
-	if (dev_priv->display.hpd_irq_setup)
-		dev_priv->display.hpd_irq_setup(dev_priv);
-	spin_unlock_irq(&dev_priv->irq_lock);
-
-	intel_display_resume(&dev_priv->drm);
-
-	intel_hpd_init(dev_priv);
-
-	drm_modeset_unlock_all(&dev_priv->drm);
+	drm_modeset_drop_locks(ctx);
+	drm_modeset_acquire_fini(ctx);
+	mutex_unlock(&dev->mode_config.mutex);
 }
 
 static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
···
 		    intel_state->cdclk_pll_vco != dev_priv->cdclk_pll.vco))
 		dev_priv->display.modeset_commit_cdclk(state);
 
+		/*
+		 * SKL workaround: bspec recommends we disable the SAGV when we
+		 * have more then one pipe enabled
+		 */
+		if (IS_SKYLAKE(dev_priv) && !skl_can_enable_sagv(state))
+			skl_disable_sagv(dev_priv);
+
 		intel_modeset_verify_disabled(dev);
 	}
···
 
 		intel_modeset_verify_crtc(crtc, old_crtc_state, crtc->state);
 	}
+
+	if (IS_SKYLAKE(dev_priv) && intel_state->modeset &&
+	    skl_can_enable_sagv(state))
+		skl_enable_sagv(dev_priv);
 
 	drm_atomic_helper_commit_hw_done(state);
···
 	struct drm_atomic_state *state = dev_priv->modeset_restore_state;
 	struct drm_modeset_acquire_ctx ctx;
 	int ret;
-	bool setup = false;
 
 	dev_priv->modeset_restore_state = NULL;
+	if (state)
+		state->acquire_ctx = &ctx;
 
 	/*
 	 * This is a cludge because with real atomic modeset mode_config.mutex
···
 	mutex_lock(&dev->mode_config.mutex);
 	drm_modeset_acquire_init(&ctx, 0);
 
-retry:
-	ret = drm_modeset_lock_all_ctx(dev, &ctx);
+	while (1) {
+		ret = drm_modeset_lock_all_ctx(dev, &ctx);
+		if (ret != -EDEADLK)
+			break;
 
-	if (ret == 0 && !setup) {
-		setup = true;
-
-		intel_modeset_setup_hw_state(dev);
-		i915_redisable_vga(dev);
-	}
-
-	if (ret == 0 && state) {
-		struct drm_crtc_state *crtc_state;
-		struct drm_crtc *crtc;
-		int i;
-
-		state->acquire_ctx = &ctx;
-
-		/* ignore any reset values/BIOS leftovers in the WM registers */
-		to_intel_atomic_state(state)->skip_intermediate_wm = true;
-
-		for_each_crtc_in_state(state, crtc, crtc_state, i) {
-			/*
-			 * Force recalculation even if we restore
-			 * current state. With fast modeset this may not result
-			 * in a modeset when the state is compatible.
-			 */
-			crtc_state->mode_changed = true;
-		}
-
-		ret = drm_atomic_commit(state);
-	}
-
-	if (ret == -EDEADLK) {
 		drm_modeset_backoff(&ctx);
-		goto retry;
 	}
+
+	if (!ret)
+		ret = __intel_display_resume(dev, state);
 
 	drm_modeset_drop_locks(&ctx);
 	drm_modeset_acquire_fini(&ctx);
drivers/gpu/drm/i915/intel_fbc.c
···
 	if (i915.enable_fbc >= 0)
 		return !!i915.enable_fbc;
 
+	if (!HAS_FBC(dev_priv))
+		return 0;
+
 	if (IS_BROADWELL(dev_priv))
 		return 1;
 
 	return 0;
+}
+
+static bool need_fbc_vtd_wa(struct drm_i915_private *dev_priv)
+{
+#ifdef CONFIG_INTEL_IOMMU
+	/* WaFbcTurnOffFbcWhenHyperVisorIsUsed:skl,bxt */
+	if (intel_iommu_gfx_mapped &&
+	    (IS_SKYLAKE(dev_priv) || IS_BROXTON(dev_priv))) {
+		DRM_INFO("Disabling framebuffer compression (FBC) to prevent screen flicker with VT-d enabled\n");
+		return true;
+	}
+#endif
+
+	return false;
 }
 
 /**
···
 	fbc->enabled = false;
 	fbc->active = false;
 	fbc->work.scheduled = false;
+
+	if (need_fbc_vtd_wa(dev_priv))
+		mkwrite_device_info(dev_priv)->has_fbc = false;
 
 	i915.enable_fbc = intel_sanitize_fbc_option(dev_priv);
 	DRM_DEBUG_KMS("Sanitized enable_fbc value: %d\n", i915.enable_fbc);
+267-9
drivers/gpu/drm/i915/intel_pm.c
···
 
 #define SKL_DDB_SIZE		896	/* in blocks */
 #define BXT_DDB_SIZE		512
+#define SKL_SAGV_BLOCK_TIME	30 /* µs */
 
 /*
  * Return the index of a plane in the SKL DDB and wm result arrays. Primary
···
 		MISSING_CASE(plane->base.type);
 		return plane->plane;
 	}
+}
+
+/*
+ * SAGV dynamically adjusts the system agent voltage and clock frequencies
+ * depending on power and performance requirements. The display engine access
+ * to system memory is blocked during the adjustment time. Because of the
+ * blocking time, having this enabled can cause full system hangs and/or pipe
+ * underruns if we don't meet all of the following requirements:
+ *
+ *  - <= 1 pipe enabled
+ *  - All planes can enable watermarks for latencies >= SAGV engine block time
+ *  - We're not using an interlaced display configuration
+ */
+int
+skl_enable_sagv(struct drm_i915_private *dev_priv)
+{
+	int ret;
+
+	if (dev_priv->skl_sagv_status == I915_SKL_SAGV_NOT_CONTROLLED ||
+	    dev_priv->skl_sagv_status == I915_SKL_SAGV_ENABLED)
+		return 0;
+
+	DRM_DEBUG_KMS("Enabling the SAGV\n");
+	mutex_lock(&dev_priv->rps.hw_lock);
+
+	ret = sandybridge_pcode_write(dev_priv, GEN9_PCODE_SAGV_CONTROL,
+				      GEN9_SAGV_ENABLE);
+
+	/* We don't need to wait for the SAGV when enabling */
+	mutex_unlock(&dev_priv->rps.hw_lock);
+
+	/*
+	 * Some skl systems, pre-release machines in particular,
+	 * don't actually have an SAGV.
+	 */
+	if (ret == -ENXIO) {
+		DRM_DEBUG_DRIVER("No SAGV found on system, ignoring\n");
+		dev_priv->skl_sagv_status = I915_SKL_SAGV_NOT_CONTROLLED;
+		return 0;
+	} else if (ret < 0) {
+		DRM_ERROR("Failed to enable the SAGV\n");
+		return ret;
+	}
+
+	dev_priv->skl_sagv_status = I915_SKL_SAGV_ENABLED;
+	return 0;
+}
+
+static int
+skl_do_sagv_disable(struct drm_i915_private *dev_priv)
+{
+	int ret;
+	uint32_t temp = GEN9_SAGV_DISABLE;
+
+	ret = sandybridge_pcode_read(dev_priv, GEN9_PCODE_SAGV_CONTROL,
+				     &temp);
+	if (ret)
+		return ret;
+	else
+		return temp & GEN9_SAGV_IS_DISABLED;
+}
+
+int
+skl_disable_sagv(struct drm_i915_private *dev_priv)
+{
+	int ret, result;
+
+	if (dev_priv->skl_sagv_status == I915_SKL_SAGV_NOT_CONTROLLED ||
+	    dev_priv->skl_sagv_status == I915_SKL_SAGV_DISABLED)
+		return 0;
+
+	DRM_DEBUG_KMS("Disabling the SAGV\n");
+	mutex_lock(&dev_priv->rps.hw_lock);
+
+	/* bspec says to keep retrying for at least 1 ms */
+	ret = wait_for(result = skl_do_sagv_disable(dev_priv), 1);
+	mutex_unlock(&dev_priv->rps.hw_lock);
+
+	if (ret == -ETIMEDOUT) {
+		DRM_ERROR("Request to disable SAGV timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	/*
+	 * Some skl systems, pre-release machines in particular,
+	 * don't actually have an SAGV.
+	 */
+	if (result == -ENXIO) {
+		DRM_DEBUG_DRIVER("No SAGV found on system, ignoring\n");
+		dev_priv->skl_sagv_status = I915_SKL_SAGV_NOT_CONTROLLED;
+		return 0;
+	} else if (result < 0) {
+		DRM_ERROR("Failed to disable the SAGV\n");
+		return result;
+	}
+
+	dev_priv->skl_sagv_status = I915_SKL_SAGV_DISABLED;
+	return 0;
+}
+
+bool skl_can_enable_sagv(struct drm_atomic_state *state)
+{
+	struct drm_device *dev = state->dev;
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct intel_atomic_state *intel_state = to_intel_atomic_state(state);
+	struct drm_crtc *crtc;
+	enum pipe pipe;
+	int level, plane;
+
+	/*
+	 * SKL workaround: bspec recommends we disable the SAGV when we have
+	 * more then one pipe enabled
+	 *
+	 * If there are no active CRTCs, no additional checks need be performed
+	 */
+	if (hweight32(intel_state->active_crtcs) == 0)
+		return true;
+	else if (hweight32(intel_state->active_crtcs) > 1)
+		return false;
+
+	/* Since we're now guaranteed to only have one active CRTC... */
+	pipe = ffs(intel_state->active_crtcs) - 1;
+	crtc = dev_priv->pipe_to_crtc_mapping[pipe];
+
+	if (crtc->state->mode.flags & DRM_MODE_FLAG_INTERLACE)
+		return false;
+
+	for_each_plane(dev_priv, pipe, plane) {
+		/* Skip this plane if it's not enabled */
+		if (intel_state->wm_results.plane[pipe][plane][0] == 0)
+			continue;
+
+		/* Find the highest enabled wm level for this plane */
+		for (level = ilk_wm_max_level(dev);
+		     intel_state->wm_results.plane[pipe][plane][level] == 0; --level)
+		     { }
+
+		/*
+		 * If any of the planes on this pipe don't enable wm levels
+		 * that incur memory latencies higher then 30µs we can't enable
+		 * the SAGV
+		 */
+		if (dev_priv->wm.skl_latency[level] < SKL_SAGV_BLOCK_TIME)
+			return false;
+	}
+
+	return true;
 }
 
 static void
···
 		total_data_rate += intel_cstate->wm.skl.plane_data_rate[id];
 		total_data_rate += intel_cstate->wm.skl.plane_y_data_rate[id];
 	}
-
-	WARN_ON(cstate->plane_mask && total_data_rate == 0);
 
 	return total_data_rate;
 }
···
 		plane_bytes_per_line *= 4;
 		plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
 		plane_blocks_per_line /= 4;
+	} else if (tiling == DRM_FORMAT_MOD_NONE) {
+		plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512) + 1;
 	} else {
 		plane_blocks_per_line = DIV_ROUND_UP(plane_bytes_per_line, 512);
 	}
···
 	 * pretend that all pipes switched active status so that we'll
 	 * ensure a full DDB recompute.
 	 */
-	if (dev_priv->wm.distrust_bios_wm)
+	if (dev_priv->wm.distrust_bios_wm) {
+		ret = drm_modeset_lock(&dev->mode_config.connection_mutex,
+				       state->acquire_ctx);
+		if (ret)
+			return ret;
+
 		intel_state->active_pipe_changes = ~0;
+
+		/*
+		 * We usually only initialize intel_state->active_crtcs if we
+		 * we're doing a modeset; make sure this field is always
+		 * initialized during the sanitization process that happens
+		 * on the first commit too.
+		 */
+		if (!intel_state->modeset)
+			intel_state->active_crtcs = dev_priv->active_crtcs;
+	}
 
 	/*
 	 * If the modeset changes which CRTC's are active, we need to
···
 		ret = skl_allocate_pipe_ddb(cstate, ddb);
 		if (ret)
 			return ret;
+
+		ret = drm_atomic_add_affected_planes(state, &intel_crtc->base);
+		if (ret)
+			return ret;
 	}
 
 	return 0;
+}
+
+static void
+skl_copy_wm_for_pipe(struct skl_wm_values *dst,
+		     struct skl_wm_values *src,
+		     enum pipe pipe)
+{
+	dst->wm_linetime[pipe] = src->wm_linetime[pipe];
+	memcpy(dst->plane[pipe], src->plane[pipe],
+	       sizeof(dst->plane[pipe]));
+	memcpy(dst->plane_trans[pipe], src->plane_trans[pipe],
+	       sizeof(dst->plane_trans[pipe]));
+
+	dst->ddb.pipe[pipe] = src->ddb.pipe[pipe];
+	memcpy(dst->ddb.y_plane[pipe], src->ddb.y_plane[pipe],
+	       sizeof(dst->ddb.y_plane[pipe]));
+	memcpy(dst->ddb.plane[pipe], src->ddb.plane[pipe],
+	       sizeof(dst->ddb.plane[pipe]));
 }
 
 static int
···
 	struct drm_device *dev = crtc->dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct skl_wm_values *results = &dev_priv->wm.skl_results;
+	struct skl_wm_values *hw_vals = &dev_priv->wm.skl_hw;
 	struct intel_crtc_state *cstate = to_intel_crtc_state(crtc->state);
 	struct skl_pipe_wm *pipe_wm = &cstate->wm.skl.optimal;
+	int pipe;
 
 	if ((results->dirty_pipes & drm_crtc_mask(crtc)) == 0)
 		return;
···
 	skl_write_wm_values(dev_priv, results);
 	skl_flush_wm_values(dev_priv, results);
 
-	/* store the new configuration */
-	dev_priv->wm.skl_hw = *results;
+	/*
+	 * Store the new configuration (but only for the pipes that have
+	 * changed; the other values weren't recomputed).
+	 */
+	for_each_pipe_masked(dev_priv, pipe, results->dirty_pipes)
+		skl_copy_wm_for_pipe(hw_vals, results, pipe);
 
 	mutex_unlock(&dev_priv->wm.wm_mutex);
 }
···
 
 void intel_cleanup_gt_powersave(struct drm_i915_private *dev_priv)
 {
-	if (IS_CHERRYVIEW(dev_priv))
-		return;
-	else if (IS_VALLEYVIEW(dev_priv))
+	if (IS_VALLEYVIEW(dev_priv))
 		valleyview_cleanup_gt_powersave(dev_priv);
 
 	if (!i915.enable_rc6)
···
 	}
 }
 
+static inline int gen6_check_mailbox_status(struct drm_i915_private *dev_priv)
+{
+	uint32_t flags =
+		I915_READ_FW(GEN6_PCODE_MAILBOX) & GEN6_PCODE_ERROR_MASK;
+
+	switch (flags) {
+	case GEN6_PCODE_SUCCESS:
+		return 0;
+	case GEN6_PCODE_UNIMPLEMENTED_CMD:
+	case GEN6_PCODE_ILLEGAL_CMD:
+		return -ENXIO;
+	case GEN6_PCODE_MIN_FREQ_TABLE_GT_RATIO_OUT_OF_RANGE:
+		return -EOVERFLOW;
+	case GEN6_PCODE_TIMEOUT:
+		return -ETIMEDOUT;
+	default:
+		MISSING_CASE(flags);
+		return 0;
+	}
+}
+
+static inline int gen7_check_mailbox_status(struct drm_i915_private *dev_priv)
+{
+	uint32_t flags =
+		I915_READ_FW(GEN6_PCODE_MAILBOX) & GEN6_PCODE_ERROR_MASK;
+
+	switch (flags) {
+	case GEN6_PCODE_SUCCESS:
+		return 0;
+	case GEN6_PCODE_ILLEGAL_CMD:
+		return -ENXIO;
+	case GEN7_PCODE_TIMEOUT:
+		return -ETIMEDOUT;
+	case GEN7_PCODE_ILLEGAL_DATA:
+		return -EINVAL;
+	case GEN7_PCODE_MIN_FREQ_TABLE_GT_RATIO_OUT_OF_RANGE:
+		return -EOVERFLOW;
+	default:
+		MISSING_CASE(flags);
+		return 0;
+	}
+}
+
 int sandybridge_pcode_read(struct drm_i915_private *dev_priv, u32 mbox, u32 *val)
 {
+	int status;
+
 	WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock));
 
 	/* GEN6_PCODE_* are outside of the forcewake domain, we can
···
 	*val = I915_READ_FW(GEN6_PCODE_DATA);
 	I915_WRITE_FW(GEN6_PCODE_DATA, 0);
 
+	if (INTEL_GEN(dev_priv) > 6)
+		status = gen7_check_mailbox_status(dev_priv);
+	else
+		status = gen6_check_mailbox_status(dev_priv);
+
+	if (status) {
+		DRM_DEBUG_DRIVER("warning: pcode (read) mailbox access failed: %d\n",
+				 status);
+		return status;
+	}
+
 	return 0;
 }
 
 int sandybridge_pcode_write(struct drm_i915_private *dev_priv,
-				u32 mbox, u32 val)
+			    u32 mbox, u32 val)
 {
+	int status;
+
 	WARN_ON(!mutex_is_locked(&dev_priv->rps.hw_lock));
 
 	/* GEN6_PCODE_* are outside of the forcewake domain, we can
···
 	}
 
 	I915_WRITE_FW(GEN6_PCODE_DATA, 0);
+
+	if (INTEL_GEN(dev_priv) > 6)
+		status = gen7_check_mailbox_status(dev_priv);
+	else
+		status = gen6_check_mailbox_status(dev_priv);
+
+	if (status) {
+		DRM_DEBUG_DRIVER("warning: pcode (write) mailbox access failed: %d\n",
+				 status);
+		return status;
+	}
 
 	return 0;
 }
drivers/gpu/drm/mediatek/Kconfig
···
 	tristate "DRM Support for Mediatek SoCs"
 	depends on DRM
 	depends on ARCH_MEDIATEK || (ARM && COMPILE_TEST)
+	depends on COMMON_CLK
+	depends on HAVE_ARM_SMCCC
+	depends on OF
 	select DRM_GEM_CMA_HELPER
 	select DRM_KMS_HELPER
 	select DRM_MIPI_DSI
drivers/gpu/drm/radeon/atombios_crtc.c
···
 	if (radeon_crtc->ss.refdiv) {
 		radeon_crtc->pll_flags |= RADEON_PLL_USE_REF_DIV;
 		radeon_crtc->pll_reference_div = radeon_crtc->ss.refdiv;
-		if (rdev->family >= CHIP_RV770)
+		if (ASIC_IS_AVIVO(rdev) &&
+		    rdev->family != CHIP_RS780 &&
+		    rdev->family != CHIP_RS880)
 			radeon_crtc->pll_flags |= RADEON_PLL_USE_FRAC_FB_DIV;
 	}
 }
-9
drivers/gpu/drm/radeon/radeon_atpx_handler.c
···
 	atpx->is_hybrid = false;
 	if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) {
 		printk("ATPX Hybrid Graphics\n");
-#if 1
-		/* This is a temporary hack until the D3 cold support
-		 * makes it upstream. The ATPX power_control method seems
-		 * to still work on even if the system should be using
-		 * the new standardized hybrid D3 cold ACPI interface.
-		 */
-		atpx->functions.power_cntl = true;
-#else
 		atpx->functions.power_cntl = false;
-#endif
 		atpx->is_hybrid = true;
 	}
drivers/gpu/drm/tegra/dsi.c
···
 	.destroy = tegra_output_encoder_destroy,
 };
 
+static void tegra_dsi_unprepare(struct tegra_dsi *dsi)
+{
+	int err;
+
+	if (dsi->slave)
+		tegra_dsi_unprepare(dsi->slave);
+
+	err = tegra_mipi_disable(dsi->mipi);
+	if (err < 0)
+		dev_err(dsi->dev, "failed to disable MIPI calibration: %d\n",
+			err);
+
+	pm_runtime_put(dsi->dev);
+}
+
 static void tegra_dsi_encoder_disable(struct drm_encoder *encoder)
 {
 	struct tegra_output *output = encoder_to_output(encoder);
···
 
 	tegra_dsi_disable(dsi);
 
-	pm_runtime_put(dsi->dev);
+	tegra_dsi_unprepare(dsi);
+}
+
+static void tegra_dsi_prepare(struct tegra_dsi *dsi)
+{
+	int err;
+
+	pm_runtime_get_sync(dsi->dev);
+
+	err = tegra_mipi_enable(dsi->mipi);
+	if (err < 0)
+		dev_err(dsi->dev, "failed to enable MIPI calibration: %d\n",
+			err);
+
+	err = tegra_dsi_pad_calibrate(dsi);
+	if (err < 0)
+		dev_err(dsi->dev, "MIPI calibration failed: %d\n", err);
+
+	if (dsi->slave)
+		tegra_dsi_prepare(dsi->slave);
 }
 
 static void tegra_dsi_encoder_enable(struct drm_encoder *encoder)
···
 	struct tegra_dsi *dsi = to_dsi(output);
 	struct tegra_dsi_state *state;
 	u32 value;
-	int err;
 
-	pm_runtime_get_sync(dsi->dev);
-
-	err = tegra_dsi_pad_calibrate(dsi);
-	if (err < 0)
-		dev_err(dsi->dev, "MIPI calibration failed: %d\n", err);
+	tegra_dsi_prepare(dsi);
 
 	state = tegra_dsi_get_state(dsi);
+4
drivers/gpu/drm/udl/udl_fb.c
···203203204204 ufbdev->fb_count++;205205206206+#ifdef CONFIG_DRM_FBDEV_EMULATION206207 if (fb_defio && (info->fbdefio == NULL)) {207208 /* enable defio at last moment if not disabled by client */208209···219218 info->fbdefio = fbdefio;220219 fb_deferred_io_init(info);221220 }221221+#endif222222223223 pr_notice("open /dev/fb%d user=%d fb_info=%p count=%d\n",224224 info->node, user, info, ufbdev->fb_count);···237235238236 ufbdev->fb_count--;239237238238+#ifdef CONFIG_DRM_FBDEV_EMULATION240239 if ((ufbdev->fb_count == 0) && (info->fbdefio)) {241240 fb_deferred_io_cleanup(info);242241 kfree(info->fbdefio);243242 info->fbdefio = NULL;244243 info->fbops->fb_mmap = udl_fb_mmap;245244 }245245+#endif246246247247 pr_warn("released /dev/fb%d user=%d count=%d\n",248248 info->node, user, ufbdev->fb_count);
+32-33
drivers/gpu/host1x/mipi.c
···242242 dev->pads = args.args[0];243243 dev->device = device;244244245245- mutex_lock(&dev->mipi->lock);246246-247247- if (dev->mipi->usage_count++ == 0) {248248- err = tegra_mipi_power_up(dev->mipi);249249- if (err < 0) {250250- dev_err(dev->mipi->dev,251251- "failed to power up MIPI bricks: %d\n",252252- err);253253- return ERR_PTR(err);254254- }255255- }256256-257257- mutex_unlock(&dev->mipi->lock);258258-259245 return dev;260246261247put:···256270257271void tegra_mipi_free(struct tegra_mipi_device *device)258272{259259- int err;260260-261261- mutex_lock(&device->mipi->lock);262262-263263- if (--device->mipi->usage_count == 0) {264264- err = tegra_mipi_power_down(device->mipi);265265- if (err < 0) {266266- /*267267- * Not much that can be done here, so an error message268268- * will have to do.269269- */270270- dev_err(device->mipi->dev,271271- "failed to power down MIPI bricks: %d\n",272272- err);273273- }274274- }275275-276276- mutex_unlock(&device->mipi->lock);277277-278273 platform_device_put(device->pdev);279274 kfree(device);280275}281276EXPORT_SYMBOL(tegra_mipi_free);277277+278278+int tegra_mipi_enable(struct tegra_mipi_device *dev)279279+{280280+ int err = 0;281281+282282+ mutex_lock(&dev->mipi->lock);283283+284284+ if (dev->mipi->usage_count++ == 0)285285+ err = tegra_mipi_power_up(dev->mipi);286286+287287+ mutex_unlock(&dev->mipi->lock);288288+289289+ return err;290290+291291+}292292+EXPORT_SYMBOL(tegra_mipi_enable);293293+294294+int tegra_mipi_disable(struct tegra_mipi_device *dev)295295+{296296+ int err = 0;297297+298298+ mutex_lock(&dev->mipi->lock);299299+300300+ if (--dev->mipi->usage_count == 0)301301+ err = tegra_mipi_power_down(dev->mipi);302302+303303+ mutex_unlock(&dev->mipi->lock);304304+305305+ return err;306306+307307+}308308+EXPORT_SYMBOL(tegra_mipi_disable);282309283310static int tegra_mipi_wait(struct tegra_mipi *mipi)284311{
···673673{674674 if (!mem)675675 return I40IW_ERR_PARAM;676676+ /*677677+ * mem->va points to the parent of mem, so both mem and mem->va678678+ * cannot be touched once mem->va is freed679679+ */676680 kfree(mem->va);677677- mem->va = NULL;678681 return 0;679682}680683
···464464 return -ENODEV;465465466466 /* Power GPIO pin */467467- data->gpio_power = gpiod_get_optional(dev, "power", GPIOD_OUT_LOW);467467+ data->gpio_power = devm_gpiod_get_optional(dev, "power", GPIOD_OUT_LOW);468468 if (IS_ERR(data->gpio_power)) {469469 if (PTR_ERR(data->gpio_power) != -EPROBE_DEFER)470470 dev_err(dev, "Shutdown GPIO request failed\n");
+5-2
drivers/iommu/arm-smmu-v3.c
···879879 * We may have concurrent producers, so we need to be careful880880 * not to touch any of the shadow cmdq state.881881 */882882- queue_read(cmd, Q_ENT(q, idx), q->ent_dwords);882882+ queue_read(cmd, Q_ENT(q, cons), q->ent_dwords);883883 dev_err(smmu->dev, "skipping command in error state:\n");884884 for (i = 0; i < ARRAY_SIZE(cmd); ++i)885885 dev_err(smmu->dev, "\t0x%016llx\n", (unsigned long long)cmd[i]);···890890 return;891891 }892892893893- queue_write(cmd, Q_ENT(q, idx), q->ent_dwords);893893+ queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);894894}895895896896static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,···10341034 case STRTAB_STE_0_CFG_S2_TRANS:10351035 ste_live = true;10361036 break;10371037+ case STRTAB_STE_0_CFG_ABORT:10381038+ if (disable_bypass)10391039+ break;10371040 default:10381041 BUG(); /* STE corruption */10391042 }
···286286 int prot = IOMMU_READ;287287 arm_v7s_iopte attr = pte >> ARM_V7S_ATTR_SHIFT(lvl);288288289289- if (attr & ARM_V7S_PTE_AP_RDONLY)289289+ if (!(attr & ARM_V7S_PTE_AP_RDONLY))290290 prot |= IOMMU_WRITE;291291 if ((attr & (ARM_V7S_TEX_MASK << ARM_V7S_TEX_SHIFT)) == 0)292292 prot |= IOMMU_MMIO;293293 else if (pte & ARM_V7S_ATTR_C)294294 prot |= IOMMU_CACHE;295295+ if (pte & ARM_V7S_ATTR_XN(lvl))296296+ prot |= IOMMU_NOEXEC;295297296298 return prot;297299}
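The io-pgtable fix above inverts the read-only test: ARM_V7S_PTE_AP_RDONLY *denies* writes, so IOMMU_WRITE may be granted only when that bit is clear, and a set XN bit now maps to IOMMU_NOEXEC. A minimal sketch of that decode logic, with made-up bit positions standing in for the real ARMv7-S layout:

```c
#include <stdint.h>

/* Illustrative constants only -- not the real ARMv7-S bit layout. */
#define SK_AP_RDONLY (1u << 0)	/* set => deny writes */
#define SK_XN        (1u << 1)	/* set => execute-never */

#define SK_PROT_READ   (1u << 0)
#define SK_PROT_WRITE  (1u << 1)
#define SK_PROT_NOEXEC (1u << 2)

/* Decode in the spirit of the fixed arm_v7s code: write permission
 * is granted only when the read-only bit is *clear*. */
static unsigned int sk_pte_to_prot(uint32_t attr)
{
	unsigned int prot = SK_PROT_READ;

	if (!(attr & SK_AP_RDONLY))
		prot |= SK_PROT_WRITE;
	if (attr & SK_XN)
		prot |= SK_PROT_NOEXEC;
	return prot;
}
```

The `sk_` names are mine; the real driver additionally derives the XN position from the table level.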
+6-1
drivers/irqchip/irq-gic-v3-its.c
···15451545 u32 val;1546154615471547 val = readl_relaxed(base + GITS_CTLR);15481548- if (val & GITS_CTLR_QUIESCENT)15481548+ /*15491549+ * GIC architecture specification requires the ITS to be both15501550+ * disabled and quiescent for writes to GITS_BASER<n> or15511551+ * GITS_CBASER to not have UNPREDICTABLE results.15521552+ */15531553+ if ((val & GITS_CTLR_QUIESCENT) && !(val & GITS_CTLR_ENABLE))15491554 return 0;1550155515511556 /* Disable the generation of all interrupts to this ITS */
+9-2
drivers/irqchip/irq-gic-v3.c
···667667#endif668668669669#ifdef CONFIG_CPU_PM670670+/* Check whether it's single security state view */671671+static bool gic_dist_security_disabled(void)672672+{673673+ return readl_relaxed(gic_data.dist_base + GICD_CTLR) & GICD_CTLR_DS;674674+}675675+670676static int gic_cpu_pm_notifier(struct notifier_block *self,671677 unsigned long cmd, void *v)672678{673679 if (cmd == CPU_PM_EXIT) {674674- gic_enable_redist(true);680680+ if (gic_dist_security_disabled())681681+ gic_enable_redist(true);675682 gic_cpu_sys_reg_init();676676- } else if (cmd == CPU_PM_ENTER) {683683+ } else if (cmd == CPU_PM_ENTER && gic_dist_security_disabled()) {677684 gic_write_grpen1(0);678685 gic_enable_redist(false);679686 }
+7
drivers/irqchip/irq-gic.c
···769769 int cpu;770770 unsigned long flags, map = 0;771771772772+ if (unlikely(nr_cpu_ids == 1)) {773773+ /* Only one CPU? let's do a self-IPI... */774774+ writel_relaxed(2 << 24 | irq,775775+ gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT);776776+ return;777777+ }778778+772779 raw_spin_lock_irqsave(&irq_controller_lock, flags);773780774781 /* Convert our logical CPU mask into a physical one. */
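The single-CPU fast path above skips the lock and the logical-to-physical mask conversion by writing GIC_DIST_SOFTINT directly, with target-list filter 2 in bits [25:24] meaning "forward the SGI to the requesting CPU only". A hedged sketch of just that register encoding (helper name is mine):

```c
#include <stdint.h>

/* Target-list filter 0b10 in bits [25:24]: self-IPI, no CPU mask. */
#define SK_SOFTINT_TARGET_SELF (2u << 24)

/* Build the GIC_DIST_SOFTINT value used by the self-IPI fast path. */
static uint32_t sk_softint_self(unsigned int irq)
{
	return SK_SOFTINT_TARGET_SELF | irq;
}
```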
+14-4
drivers/irqchip/irq-mips-gic.c
···713713 unsigned long flags;714714 int i;715715716716- irq_set_chip_and_handler(virq, &gic_level_irq_controller,717717- handle_level_irq);718718-719716 spin_lock_irqsave(&gic_lock, flags);720717 gic_map_to_pin(intr, gic_cpu_pin);721718 gic_map_to_vpe(intr, mips_cm_vp_id(vpe));···729732{730733 if (GIC_HWIRQ_TO_LOCAL(hw) < GIC_NUM_LOCAL_INTRS)731734 return gic_local_irq_domain_map(d, virq, hw);735735+736736+ irq_set_chip_and_handler(virq, &gic_level_irq_controller,737737+ handle_level_irq);738738+732739 return gic_shared_irq_domain_map(d, virq, hw, 0);733740}734741···772771 hwirq = GIC_SHARED_TO_HWIRQ(base_hwirq + i);773772774773 ret = irq_domain_set_hwirq_and_chip(d, virq + i, hwirq,775775- &gic_edge_irq_controller,774774+ &gic_level_irq_controller,776775 NULL);777776 if (ret)778777 goto error;778778+779779+ irq_set_handler(virq + i, handle_level_irq);779780780781 ret = gic_shared_irq_domain_map(d, virq + i, hwirq, cpu);781782 if (ret)···893890 return;894891}895892893893+static void gic_dev_domain_activate(struct irq_domain *domain,894894+ struct irq_data *d)895895+{896896+ gic_shared_irq_domain_map(domain, d->irq, d->hwirq, 0);897897+}898898+896899static struct irq_domain_ops gic_dev_domain_ops = {897900 .xlate = gic_dev_domain_xlate,898901 .alloc = gic_dev_domain_alloc,899902 .free = gic_dev_domain_free,903903+ .activate = gic_dev_domain_activate,900904};901905902906static int gic_ipi_domain_xlate(struct irq_domain *d, struct device_node *ctrlr,
···289289 pb->bio_submitted = true;290290291291 /*292292- * Map reads as normal only if corrupt_bio_byte set.292292+ * Error reads if neither corrupt_bio_byte nor drop_writes is set.293293+ * Otherwise, flakey_end_io() will decide if the reads should be modified.293294 */294295 if (bio_data_dir(bio) == READ) {295295- /* If flags were specified, only corrupt those that match. */296296- if (fc->corrupt_bio_byte && (fc->corrupt_bio_rw == READ) &&297297- all_corrupt_bio_flags_match(bio, fc))298298- goto map_bio;299299- else296296+ if (!fc->corrupt_bio_byte && !test_bit(DROP_WRITES, &fc->flags))300297 return -EIO;298298+ goto map_bio;301299 }302300303301 /*···332334 struct flakey_c *fc = ti->private;333335 struct per_bio_data *pb = dm_per_bio_data(bio, sizeof(struct per_bio_data));334336335335- /*336336- * Corrupt successful READs while in down state.337337- */338337 if (!error && pb->bio_submitted && (bio_data_dir(bio) == READ)) {339339- if (fc->corrupt_bio_byte)338338+ if (fc->corrupt_bio_byte && (fc->corrupt_bio_rw == READ) &&339339+ all_corrupt_bio_flags_match(bio, fc)) {340340+ /*341341+ * Corrupt successful matching READs while in down state.342342+ */340343 corrupt_bio_data(bio, fc);341341- else344344+345345+ } else if (!test_bit(DROP_WRITES, &fc->flags)) {346346+ /*347347+ * Error read during the down_interval if drop_writes348348+ * wasn't configured.349349+ */342350 return -EIO;351351+ }343352 }344353345354 return error;
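The rewritten dm-flakey read path amounts to a small decision table: during the down interval a read is failed immediately only when neither of the options that defer the decision to flakey_end_io() is configured. A sketch of that decision (names are mine, not the driver's):

```c
#include <stdbool.h>

enum sk_action { SK_ERROR_BIO, SK_MAP_BIO };

/* Down-interval read handling: fail now only if neither
 * corrupt_bio_byte nor drop_writes is set; otherwise map the bio
 * and let the end_io hook corrupt or error it after completion. */
static enum sk_action sk_flakey_read_action(bool corrupt_bio_byte,
					    bool drop_writes)
{
	if (!corrupt_bio_byte && !drop_writes)
		return SK_ERROR_BIO;
	return SK_MAP_BIO;
}
```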
+6-5
drivers/md/dm-log.c
···291291 core->nr_regions = le64_to_cpu(disk->nr_regions);292292}293293294294-static int rw_header(struct log_c *lc, int rw)294294+static int rw_header(struct log_c *lc, int op)295295{296296- lc->io_req.bi_op = rw;296296+ lc->io_req.bi_op = op;297297+ lc->io_req.bi_op_flags = 0;297298298299 return dm_io(&lc->io_req, 1, &lc->header_location, NULL);299300}···317316{318317 int r;319318320320- r = rw_header(log, READ);319319+ r = rw_header(log, REQ_OP_READ);321320 if (r)322321 return r;323322···631630 header_to_disk(&lc->header, lc->disk_header);632631633632 /* write the new header */634634- r = rw_header(lc, WRITE);633633+ r = rw_header(lc, REQ_OP_WRITE);635634 if (!r) {636635 r = flush_header(lc);637636 if (r)···699698 log_clear_bit(lc, lc->clean_bits, i);700699 }701700702702- r = rw_header(lc, WRITE);701701+ r = rw_header(lc, REQ_OP_WRITE);703702 if (r)704703 fail_log_device(lc);705704 else {
+48-36
drivers/md/dm-raid.c
···191191#define RT_FLAG_RS_BITMAP_LOADED 2192192#define RT_FLAG_UPDATE_SBS 3193193#define RT_FLAG_RESHAPE_RS 4194194-#define RT_FLAG_KEEP_RS_FROZEN 5195194196195/* Array elements of 64 bit needed for rebuild/failed disk bits */197196#define DISKS_ARRAY_ELEMS ((MAX_RAID_DEVICES + (sizeof(uint64_t) * 8 - 1)) / sizeof(uint64_t) / 8)···860861{861862 unsigned long min_region_size = rs->ti->len / (1 << 21);862863864864+ if (rs_is_raid0(rs))865865+ return 0;866866+863867 if (!region_size) {864868 /*865869 * Choose a reasonable default. All figures in sectors.···932930 rebuild_cnt++;933931934932 switch (rs->raid_type->level) {933933+ case 0:934934+ break;935935 case 1:936936 if (rebuild_cnt >= rs->md.raid_disks)937937 goto too_many;···23392335 case 0:23402336 break;23412337 default:23382338+ /*23392339+ * We have to keep any raid0 data/metadata device pairs or23402340+ * the MD raid0 personality will fail to start the array.23412341+ */23422342+ if (rs_is_raid0(rs))23432343+ continue;23442344+23422345 dev = container_of(rdev, struct raid_dev, rdev);23432346 if (dev->meta_dev)23442347 dm_put_device(ti, dev->meta_dev);···25902579 } else {25912580 /* Process raid1 without delta_disks */25922581 mddev->raid_disks = rs->raid_disks;25932593- set_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags);25942582 reshape = false;25952583 }25962584 } else {···26002590 if (reshape) {26012591 set_bit(RT_FLAG_RESHAPE_RS, &rs->runtime_flags);26022592 set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);26032603- set_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags);26042593 } else if (mddev->raid_disks < rs->raid_disks)26052594 /* Create new superblocks and bitmaps, if any new disks */26062595 set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);···29112902 goto bad;2912290329132904 set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags);29142914- set_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags);29152905 /* Takeover ain't recovery, so disable recovery */29162906 rs_setup_recovery(rs, MaxSector);29172907 
rs_set_new(rs);···33943386{33953387 struct raid_set *rs = ti->private;3396338833973397- if (test_and_clear_bit(RT_FLAG_RS_RESUMED, &rs->runtime_flags)) {33983398- if (!rs->md.suspended)33993399- mddev_suspend(&rs->md);34003400- rs->md.ro = 1;34013401- }33893389+ if (!rs->md.suspended)33903390+ mddev_suspend(&rs->md);33913391+33923392+ rs->md.ro = 1;34023393}3403339434043395static void attempt_restore_of_faulty_devices(struct raid_set *rs)34053396{34063397 int i;34073407- uint64_t failed_devices, cleared_failed_devices = 0;33983398+ uint64_t cleared_failed_devices[DISKS_ARRAY_ELEMS];34083399 unsigned long flags;34003400+ bool cleared = false;34093401 struct dm_raid_superblock *sb;34023402+ struct mddev *mddev = &rs->md;34103403 struct md_rdev *r;34043404+34053405+ /* RAID personalities have to provide hot add/remove methods or we need to bail out. */34063406+ if (!mddev->pers || !mddev->pers->hot_add_disk || !mddev->pers->hot_remove_disk)34073407+ return;34083408+34093409+ memset(cleared_failed_devices, 0, sizeof(cleared_failed_devices));3411341034123411 for (i = 0; i < rs->md.raid_disks; i++) {34133412 r = &rs->dev[i].rdev;···34353420 * ourselves.34363421 */34373422 if ((r->raid_disk >= 0) &&34383438- (r->mddev->pers->hot_remove_disk(r->mddev, r) != 0))34233423+ (mddev->pers->hot_remove_disk(mddev, r) != 0))34393424 /* Failed to revive this device, try next */34403425 continue;34413426···34453430 clear_bit(Faulty, &r->flags);34463431 clear_bit(WriteErrorSeen, &r->flags);34473432 clear_bit(In_sync, &r->flags);34483448- if (r->mddev->pers->hot_add_disk(r->mddev, r)) {34333433+ if (mddev->pers->hot_add_disk(mddev, r)) {34493434 r->raid_disk = -1;34503435 r->saved_raid_disk = -1;34513436 r->flags = flags;34523437 } else {34533438 r->recovery_offset = 0;34543454- cleared_failed_devices |= 1 << i;34393439+ set_bit(i, (void *) cleared_failed_devices);34403440+ cleared = true;34553441 }34563442 }34573443 }34583458- if (cleared_failed_devices) {34443444+34453445+ /* If any 
failed devices could be cleared, update all sbs failed_devices bits */34463446+ if (cleared) {34473447+ uint64_t failed_devices[DISKS_ARRAY_ELEMS];34483448+34593449 rdev_for_each(r, &rs->md) {34603450 sb = page_address(r->sb_page);34613461- failed_devices = le64_to_cpu(sb->failed_devices);34623462- failed_devices &= ~cleared_failed_devices;34633463- sb->failed_devices = cpu_to_le64(failed_devices);34513451+ sb_retrieve_failed_devices(sb, failed_devices);34523452+34533453+ for (i = 0; i < DISKS_ARRAY_ELEMS; i++)34543454+ failed_devices[i] &= ~cleared_failed_devices[i];34553455+34563456+ sb_update_failed_devices(sb, failed_devices);34643457 }34653458 }34663459}···36333610 * devices are reachable again.36343611 */36353612 attempt_restore_of_faulty_devices(rs);36363636- } else {36373637- mddev->ro = 0;36383638- mddev->in_sync = 0;36393639-36403640- /*36413641- * When passing in flags to the ctr, we expect userspace36423642- * to reset them because they made it to the superblocks36433643- * and reload the mapping anyway.36443644- *36453645- * -> only unfreeze recovery in case of a table reload or36463646- * we'll have a bogus recovery/reshape position36473647- * retrieved from the superblock by the ctr because36483648- * the ongoing recovery/reshape will change it after read.36493649- */36503650- if (!test_bit(RT_FLAG_KEEP_RS_FROZEN, &rs->runtime_flags))36513651- clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);36523652-36533653- if (mddev->suspended)36543654- mddev_resume(mddev);36553613 }36143614+36153615+ mddev->ro = 0;36163616+ mddev->in_sync = 0;36173617+36183618+ clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);36193619+36203620+ if (mddev->suspended)36213621+ mddev_resume(mddev);36563622}3657362336583624static struct target_type raid_target = {
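The dm-raid fix widens the failed-device bookkeeping from a single uint64_t to an array indexed with set_bit(), then masks each superblock word with the complement of the cleared bits. A self-contained sketch of that multi-word bitmap arithmetic, with an arbitrary array size:

```c
#include <stdint.h>

#define SK_DISKS_ARRAY_ELEMS 4	/* arbitrary size for the sketch */

/* Minimal stand-in for set_bit() on a uint64_t array. */
static void sk_set_bit(unsigned int nr, uint64_t *bits)
{
	bits[nr / 64] |= (uint64_t)1 << (nr % 64);
}

/* Mirror of failed_devices[i] &= ~cleared_failed_devices[i]. */
static void sk_clear_failed(uint64_t *failed, const uint64_t *cleared,
			    int elems)
{
	for (int i = 0; i < elems; i++)
		failed[i] &= ~cleared[i];
}
```

Names prefixed `sk_` are illustrative; the driver's sb_retrieve_failed_devices()/sb_update_failed_devices() handle the on-disk layout.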
+5-2
drivers/md/dm-round-robin.c
···210210 struct path_info *pi = NULL;211211 struct dm_path *current_path = NULL;212212213213+ local_irq_save(flags);213214 current_path = *this_cpu_ptr(s->current_path);214215 if (current_path) {215216 percpu_counter_dec(&s->repeat_count);216216- if (percpu_counter_read_positive(&s->repeat_count) > 0)217217+ if (percpu_counter_read_positive(&s->repeat_count) > 0) {218218+ local_irq_restore(flags);217219 return current_path;220220+ }218221 }219222220220- spin_lock_irqsave(&s->lock, flags);223223+ spin_lock(&s->lock);221224 if (!list_empty(&s->valid_paths)) {222225 pi = list_entry(s->valid_paths.next, struct path_info, list);223226 list_move_tail(&pi->list, &s->valid_paths);
+9-1
drivers/misc/cxl/vphb.c
···230230 if (phb->bus == NULL)231231 return -ENXIO;232232233233+ /* Set release hook on root bus */234234+ pci_set_host_bridge_release(to_pci_host_bridge(phb->bus->bridge),235235+ pcibios_free_controller_deferred,236236+ (void *) phb);237237+233238 /* Claim resources. This might need some rework as well depending234239 * whether we are doing probe-only or not, like assigning unassigned235240 * resources etc...···261256 afu->phb = NULL;262257263258 pci_remove_root_bus(phb->bus);264264- pcibios_free_controller(phb);259259+ /*260260+ * We don't free phb here - that's handled by261261+ * pcibios_free_controller_deferred()262262+ */265263}266264267265static bool _cxl_pci_is_vphb_device(struct pci_controller *phb)
+3-2
drivers/mmc/card/block.c
···17261726 break;1727172717281728 if (req_op(next) == REQ_OP_DISCARD ||17291729+ req_op(next) == REQ_OP_SECURE_ERASE ||17291730 req_op(next) == REQ_OP_FLUSH)17301731 break;17311732···21512150 struct mmc_card *card = md->queue.card;21522151 struct mmc_host *host = card->host;21532152 unsigned long flags;21532153+ bool req_is_special = mmc_req_is_special(req);2154215421552155 if (req && !mq->mqrq_prev->req)21562156 /* claim host only for the first request */···21922190 }2193219121942192out:21952195- if ((!req && !(mq->flags & MMC_QUEUE_NEW_REQUEST)) ||21962196- mmc_req_is_special(req))21932193+ if ((!req && !(mq->flags & MMC_QUEUE_NEW_REQUEST)) || req_is_special)21972194 /*21982195 * Release host when there are no more requests21992196 * and after special request(discard, flush) is done.
+5-2
drivers/mmc/card/queue.c
···3333 /*3434 * We only like normal block requests and discards.3535 */3636- if (req->cmd_type != REQ_TYPE_FS && req_op(req) != REQ_OP_DISCARD) {3636+ if (req->cmd_type != REQ_TYPE_FS && req_op(req) != REQ_OP_DISCARD &&3737+ req_op(req) != REQ_OP_SECURE_ERASE) {3738 blk_dump_rq_flags(req, "MMC bad request");3839 return BLKPREP_KILL;3940 }···6564 spin_unlock_irq(q->queue_lock);66656766 if (req || mq->mqrq_prev->req) {6767+ bool req_is_special = mmc_req_is_special(req);6868+6869 set_current_state(TASK_RUNNING);6970 mq->issue_fn(mq, req);7071 cond_resched();···8279 * has been finished. Do not assign it to previous8380 * request.8481 */8585- if (mmc_req_is_special(req))8282+ if (req_is_special)8683 mq->mqrq_cur->req = NULL;87848885 mq->mqrq_prev->brq.mrq.data = NULL;
···29222922{29232923 unsigned int size = lstatus & BD_LENGTH_MASK;29242924 struct page *page = rxb->page;29252925+ bool last = !!(lstatus & BD_LFLAG(RXBD_LAST));2925292629262927 /* Remove the FCS from the packet length */29272927- if (likely(lstatus & BD_LFLAG(RXBD_LAST)))29282928+ if (last)29282929 size -= ETH_FCS_LEN;2929293029302930- if (likely(first))29312931+ if (likely(first)) {29312932 skb_put(skb, size);29322932- else29332933- skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,29342934- rxb->page_offset + RXBUF_ALIGNMENT,29352935- size, GFAR_RXB_TRUESIZE);29332933+ } else {29342934+ /* the last fragments' length contains the full frame length */29352935+ if (last)29362936+ size -= skb->len;29372937+29382938+ /* in case the last fragment consisted only of the FCS */29392939+ if (size > 0)29402940+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,29412941+ rxb->page_offset + RXBUF_ALIGNMENT,29422942+ size, GFAR_RXB_TRUESIZE);29432943+ }2936294429372945 /* try reuse page */29382946 if (unlikely(page_count(page) != 1))
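The gianfar change relies on the last buffer descriptor reporting the *full* frame length: the last fragment's payload is that length minus the FCS and minus the bytes already gathered into the skb, and can legitimately come out as zero when the last fragment held only the FCS. The arithmetic, as a sketch (function name is mine):

```c
/* Ethernet frame check sequence length, as in the driver. */
#define SK_ETH_FCS_LEN 4

/* Size of the final fragment's usable payload: full frame length
 * from the last BD, minus the FCS, minus bytes already collected
 * in earlier fragments. A result <= 0 means nothing left to add. */
static int sk_last_frag_size(unsigned int full_len, unsigned int skb_len)
{
	int size = (int)full_len - SK_ETH_FCS_LEN;

	size -= (int)skb_len;
	return size;
}
```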
···209209void i40e_notify_client_of_netdev_open(struct i40e_vsi *vsi)210210{211211 struct i40e_client_instance *cdev;212212+ int ret = 0;212213213214 if (!vsi)214215 return;···222221 "Cannot locate client instance open routine\n");223222 continue;224223 }225225- cdev->client->ops->open(&cdev->lan_info, cdev->client);224224+ if (!(test_bit(__I40E_CLIENT_INSTANCE_OPENED,225225+ &cdev->state))) {226226+ ret = cdev->client->ops->open(&cdev->lan_info,227227+ cdev->client);228228+ if (!ret)229229+ set_bit(__I40E_CLIENT_INSTANCE_OPENED,230230+ &cdev->state);231231+ }226232 }227233 }228234 mutex_unlock(&i40e_client_instance_mutex);···440432 * i40e_client_add_instance - add a client instance struct to the instance list441433 * @pf: pointer to the board struct442434 * @client: pointer to a client struct in the client list.435435+ * @existing: if there was already an existing instance443436 *444444- * Returns cdev ptr on success, NULL on failure437437+ * Returns cdev ptr on success or if already exists, NULL on failure445438 **/446439static447440struct i40e_client_instance *i40e_client_add_instance(struct i40e_pf *pf,448448- struct i40e_client *client)441441+ struct i40e_client *client,442442+ bool *existing)449443{450444 struct i40e_client_instance *cdev;451445 struct netdev_hw_addr *mac = NULL;···456446 mutex_lock(&i40e_client_instance_mutex);457447 list_for_each_entry(cdev, &i40e_client_instances, list) {458448 if ((cdev->lan_info.pf == pf) && (cdev->client == client)) {459459- cdev = NULL;449449+ *existing = true;460450 goto out;461451 }462452 }···540530{541531 struct i40e_client_instance *cdev;542532 struct i40e_client *client;533533+ bool existing = false;543534 int ret = 0;544535545536 if (!(pf->flags & I40E_FLAG_SERVICE_CLIENT_REQUESTED))···564553 /* check if L2 VSI is up, if not we are not ready */565554 if (test_bit(__I40E_DOWN, &pf->vsi[pf->lan_vsi]->state))566555 continue;556556+ } else {557557+ dev_warn(&pf->pdev->dev, "This client %s is being instantiated at 
probe\n",558558+ client->name);567559 }568560569561 /* Add the client instance to the instance list */570570- cdev = i40e_client_add_instance(pf, client);562562+ cdev = i40e_client_add_instance(pf, client, &existing);571563 if (!cdev)572564 continue;573565574574- /* Also up the ref_cnt of no. of instances of this client */575575- atomic_inc(&client->ref_cnt);576576- dev_info(&pf->pdev->dev, "Added instance of Client %s to PF%d bus=0x%02x func=0x%02x\n",577577- client->name, pf->hw.pf_id,578578- pf->hw.bus.device, pf->hw.bus.func);566566+ if (!existing) {567567+ /* Also up the ref_cnt for no. of instances of this568568+ * client.569569+ */570570+ atomic_inc(&client->ref_cnt);571571+ dev_info(&pf->pdev->dev, "Added instance of Client %s to PF%d bus=0x%02x func=0x%02x\n",572572+ client->name, pf->hw.pf_id,573573+ pf->hw.bus.device, pf->hw.bus.func);574574+ }579575580576 mutex_lock(&i40e_client_instance_mutex);581577 /* Send an Open request to the client */···634616 pf->hw.pf_id, pf->hw.bus.device, pf->hw.bus.func);635617636618 /* Since in some cases register may have happened before a device gets637637- * added, we can schedule a subtask to go initiate the clients.619619+ * added, we can schedule a subtask to go initiate the clients if620620+ * they can be launched at probe time.638621 */639622 pf->flags |= I40E_FLAG_SERVICE_CLIENT_REQUESTED;640623 i40e_service_event_schedule(pf);
···29592959 }2960296029612961 /* was that the last pool using this rar? */29622962- if (mpsar_lo == 0 && mpsar_hi == 0 && rar != 0)29622962+ if (mpsar_lo == 0 && mpsar_hi == 0 &&29632963+ rar != 0 && rar != hw->mac.san_mac_rar_index)29632964 hw->mac.ops.clear_rar(hw, rar);29652965+29642966 return 0;29652967}29662968
+53-30
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
···143143 return cmd->cmd_buf + (idx << cmd->log_stride);144144}145145146146-static u8 xor8_buf(void *buf, int len)146146+static u8 xor8_buf(void *buf, size_t offset, int len)147147{148148 u8 *ptr = buf;149149 u8 sum = 0;150150 int i;151151+ int end = len + offset;151152152152- for (i = 0; i < len; i++)153153+ for (i = offset; i < end; i++)153154 sum ^= ptr[i];154155155156 return sum;···158157159158static int verify_block_sig(struct mlx5_cmd_prot_block *block)160159{161161- if (xor8_buf(block->rsvd0, sizeof(*block) - sizeof(block->data) - 1) != 0xff)160160+ size_t rsvd0_off = offsetof(struct mlx5_cmd_prot_block, rsvd0);161161+ int xor_len = sizeof(*block) - sizeof(block->data) - 1;162162+163163+ if (xor8_buf(block, rsvd0_off, xor_len) != 0xff)162164 return -EINVAL;163165164164- if (xor8_buf(block, sizeof(*block)) != 0xff)166166+ if (xor8_buf(block, 0, sizeof(*block)) != 0xff)165167 return -EINVAL;166168167169 return 0;168170}169171170170-static void calc_block_sig(struct mlx5_cmd_prot_block *block, u8 token,171171- int csum)172172+static void calc_block_sig(struct mlx5_cmd_prot_block *block)172173{173173- block->token = token;174174- if (csum) {175175- block->ctrl_sig = ~xor8_buf(block->rsvd0, sizeof(*block) -176176- sizeof(block->data) - 2);177177- block->sig = ~xor8_buf(block, sizeof(*block) - 1);178178- }174174+ int ctrl_xor_len = sizeof(*block) - sizeof(block->data) - 2;175175+ size_t rsvd0_off = offsetof(struct mlx5_cmd_prot_block, rsvd0);176176+177177+ block->ctrl_sig = ~xor8_buf(block, rsvd0_off, ctrl_xor_len);178178+ block->sig = ~xor8_buf(block, 0, sizeof(*block) - 1);179179}180180181181-static void calc_chain_sig(struct mlx5_cmd_msg *msg, u8 token, int csum)181181+static void calc_chain_sig(struct mlx5_cmd_msg *msg)182182{183183 struct mlx5_cmd_mailbox *next = msg->next;184184+ int size = msg->len;185185+ int blen = size - min_t(int, sizeof(msg->first.data), size);186186+ int n = (blen + MLX5_CMD_DATA_BLOCK_SIZE - 1)187187+ / 
MLX5_CMD_DATA_BLOCK_SIZE;188188+ int i = 0;184189185185- while (next) {186186- calc_block_sig(next->buf, token, csum);190190+ for (i = 0; i < n && next; i++) {191191+ calc_block_sig(next->buf);187192 next = next->next;188193 }189194}190195191196static void set_signature(struct mlx5_cmd_work_ent *ent, int csum)192197{193193- ent->lay->sig = ~xor8_buf(ent->lay, sizeof(*ent->lay));194194- calc_chain_sig(ent->in, ent->token, csum);195195- calc_chain_sig(ent->out, ent->token, csum);198198+ ent->lay->sig = ~xor8_buf(ent->lay, 0, sizeof(*ent->lay));199199+ if (csum) {200200+ calc_chain_sig(ent->in);201201+ calc_chain_sig(ent->out);202202+ }196203}197204198205static void poll_timeout(struct mlx5_cmd_work_ent *ent)···231222 struct mlx5_cmd_mailbox *next = ent->out->next;232223 int err;233224 u8 sig;225225+ int size = ent->out->len;226226+ int blen = size - min_t(int, sizeof(ent->out->first.data), size);227227+ int n = (blen + MLX5_CMD_DATA_BLOCK_SIZE - 1)228228+ / MLX5_CMD_DATA_BLOCK_SIZE;229229+ int i = 0;234230235235- sig = xor8_buf(ent->lay, sizeof(*ent->lay));231231+ sig = xor8_buf(ent->lay, 0, sizeof(*ent->lay));236232 if (sig != 0xff)237233 return -EINVAL;238234239239- while (next) {235235+ for (i = 0; i < n && next; i++) {240236 err = verify_block_sig(next->buf);241237 if (err)242238 return err;···797783 spin_unlock_irqrestore(&cmd->alloc_lock, flags);798784 }799785800800- ent->token = alloc_token(cmd);801786 cmd->ent_arr[ent->idx] = ent;802787 lay = get_inst(cmd, ent->idx);803788 ent->lay = lay;···896883static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in,897884 struct mlx5_cmd_msg *out, void *uout, int uout_size,898885 mlx5_cmd_cbk_t callback,899899- void *context, int page_queue, u8 *status)886886+ void *context, int page_queue, u8 *status,887887+ u8 token)900888{901889 struct mlx5_cmd *cmd = &dev->cmd;902890 struct mlx5_cmd_work_ent *ent;···913899 page_queue);914900 if (IS_ERR(ent))915901 return PTR_ERR(ent);902902+903903+ ent->token = 
token;916904917905 if (!callback)918906 init_completion(&ent->done);···987971 .write = dbg_write,988972};989973990990-static int mlx5_copy_to_msg(struct mlx5_cmd_msg *to, void *from, int size)974974+static int mlx5_copy_to_msg(struct mlx5_cmd_msg *to, void *from, int size,975975+ u8 token)991976{992977 struct mlx5_cmd_prot_block *block;993978 struct mlx5_cmd_mailbox *next;···1014997 memcpy(block->data, from, copy);1015998 from += copy;1016999 size -= copy;10001000+ block->token = token;10171001 next = next->next;10181002 }10191003···10841066}1085106710861068static struct mlx5_cmd_msg *mlx5_alloc_cmd_msg(struct mlx5_core_dev *dev,10871087- gfp_t flags, int size)10691069+ gfp_t flags, int size,10701070+ u8 token)10881071{10891072 struct mlx5_cmd_mailbox *tmp, *head = NULL;10901073 struct mlx5_cmd_prot_block *block;···11141095 tmp->next = head;11151096 block->next = cpu_to_be64(tmp->next ? tmp->next->dma : 0);11161097 block->block_num = cpu_to_be32(n - i - 1);10981098+ block->token = token;11171099 head = tmp;11181100 }11191101 msg->next = head;···14831463 }1484146414851465 if (IS_ERR(msg))14861486- msg = mlx5_alloc_cmd_msg(dev, gfp, in_size);14661466+ msg = mlx5_alloc_cmd_msg(dev, gfp, in_size, 0);1487146714881468 return msg;14891469}···15031483 int err;15041484 u8 status = 0;15051485 u32 drv_synd;14861486+ u8 token;1506148715071488 if (pci_channel_offline(dev->pdev) ||15081489 dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {···15241503 return err;15251504 }1526150515271527- err = mlx5_copy_to_msg(inb, in, in_size);15061506+ token = alloc_token(&dev->cmd);15071507+15081508+ err = mlx5_copy_to_msg(inb, in, in_size, token);15281509 if (err) {15291510 mlx5_core_warn(dev, "err %d\n", err);15301511 goto out_in;15311512 }1532151315331533- outb = mlx5_alloc_cmd_msg(dev, gfp, out_size);15141514+ outb = mlx5_alloc_cmd_msg(dev, gfp, out_size, token);15341515 if (IS_ERR(outb)) {15351516 err = PTR_ERR(outb);15361517 goto out_in;15371518 }1538151915391520 err = 
mlx5_cmd_invoke(dev, inb, outb, out, out_size, callback, context,15401540- pages_queue, &status);15211521+ pages_queue, &status, token);15411522 if (err)15421523 goto out_out;15431524···16101587 INIT_LIST_HEAD(&cmd->cache.med.head);1611158816121589 for (i = 0; i < NUM_LONG_LISTS; i++) {16131613- msg = mlx5_alloc_cmd_msg(dev, GFP_KERNEL, LONG_LIST_SIZE);15901590+ msg = mlx5_alloc_cmd_msg(dev, GFP_KERNEL, LONG_LIST_SIZE, 0);16141591 if (IS_ERR(msg)) {16151592 err = PTR_ERR(msg);16161593 goto ex_err;···16201597 }1621159816221599 for (i = 0; i < NUM_MED_LISTS; i++) {16231623- msg = mlx5_alloc_cmd_msg(dev, GFP_KERNEL, MED_LIST_SIZE);16001600+ msg = mlx5_alloc_cmd_msg(dev, GFP_KERNEL, MED_LIST_SIZE, 0);16241601 if (IS_ERR(msg)) {16251602 err = PTR_ERR(msg);16261603 goto ex_err;
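The mlx5 signature rework threads an explicit offset through xor8_buf() so checksums can start at the rsvd0 field rather than the struct base. A sketch of the offset-aware XOR-8 and its defining property, that a complemented checksum byte makes the whole range XOR to 0xff (function name is mine):

```c
#include <stddef.h>
#include <stdint.h>

/* XOR-8 over buf[offset .. offset+len), as in the fixed xor8_buf(). */
static uint8_t sk_xor8(const void *buf, size_t offset, size_t len)
{
	const uint8_t *p = buf;
	uint8_t sum = 0;

	for (size_t i = offset; i < offset + len; i++)
		sum ^= p[i];
	return sum;
}
```

Writing `sig = ~sk_xor8(block, 0, n - 1)` into the last byte makes `sk_xor8(block, 0, n)` evaluate to 0xff, which is exactly the validity check verify_block_sig() performs.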
···22752275 if (pd) {22762276 memcpy(&lp->cfg, pd, sizeof(lp->cfg));22772277 lp->io_shift = SMC91X_IO_SHIFT(lp->cfg.flags);22782278+22792279+ if (!SMC_8BIT(lp) && !SMC_16BIT(lp)) {22802280+ dev_err(&pdev->dev,22812281+ "at least one of 8-bit or 16-bit access support is required.\n");22822282+ ret = -ENXIO;22832283+ goto out_free_netdev;22842284+ }22782285 }2279228622802287#if IS_BUILTIN(CONFIG_OF)
+45-20
drivers/net/ethernet/smsc/smc91x.h
···3737#include <linux/smc91x.h>38383939/*4040+ * Any 16-bit access is performed with two 8-bit accesses if the hardware4141+ * can't do it directly. Most registers are 16-bit so those are mandatory.4242+ */4343+#define SMC_outw_b(x, a, r) \4444+ do { \4545+ unsigned int __val16 = (x); \4646+ unsigned int __reg = (r); \4747+ SMC_outb(__val16, a, __reg); \4848+ SMC_outb(__val16 >> 8, a, __reg + (1 << SMC_IO_SHIFT)); \4949+ } while (0)5050+5151+#define SMC_inw_b(a, r) \5252+ ({ \5353+ unsigned int __val16; \5454+ unsigned int __reg = r; \5555+ __val16 = SMC_inb(a, __reg); \5656+ __val16 |= SMC_inb(a, __reg + (1 << SMC_IO_SHIFT)) << 8; \5757+ __val16; \5858+ })5959+6060+/*4061 * Define your architecture specific bus configuration parameters here.4162 */4263···7655#define SMC_IO_SHIFT (lp->io_shift)77567857#define SMC_inb(a, r) readb((a) + (r))7979-#define SMC_inw(a, r) readw((a) + (r))5858+#define SMC_inw(a, r) \5959+ ({ \6060+ unsigned int __smc_r = r; \6161+ SMC_16BIT(lp) ? readw((a) + __smc_r) : \6262+ SMC_8BIT(lp) ? 
SMC_inw_b(a, __smc_r) : \6363+ ({ BUG(); 0; }); \6464+ })6565+8066#define SMC_inl(a, r) readl((a) + (r))8167#define SMC_outb(v, a, r) writeb(v, (a) + (r))6868+#define SMC_outw(v, a, r) \6969+ do { \7070+ unsigned int __v = v, __smc_r = r; \7171+ if (SMC_16BIT(lp)) \7272+ __SMC_outw(__v, a, __smc_r); \7373+ else if (SMC_8BIT(lp)) \7474+ SMC_outw_b(__v, a, __smc_r); \7575+ else \7676+ BUG(); \7777+ } while (0)7878+8279#define SMC_outl(v, a, r) writel(v, (a) + (r))8080+#define SMC_insb(a, r, p, l) readsb((a) + (r), p, l)8181+#define SMC_outsb(a, r, p, l) writesb((a) + (r), p, l)8382#define SMC_insw(a, r, p, l) readsw((a) + (r), p, l)8483#define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l)8584#define SMC_insl(a, r, p, l) readsl((a) + (r), p, l)···10766#define SMC_IRQ_FLAGS (-1) /* from resource */1086710968/* We actually can't write halfwords properly if not word aligned */110110-static inline void SMC_outw(u16 val, void __iomem *ioaddr, int reg)6969+static inline void __SMC_outw(u16 val, void __iomem *ioaddr, int reg)11170{11271 if ((machine_is_mainstone() || machine_is_stargate2() ||11372 machine_is_pxa_idp()) && reg & 2) {···457416458417#if ! SMC_CAN_USE_16BIT459418460460-/*461461- * Any 16-bit access is performed with two 8-bit accesses if the hardware462462- * can't do it directly. 
Most registers are 16-bit so those are mandatory.463463- */464464-#define SMC_outw(x, ioaddr, reg) \465465- do { \466466- unsigned int __val16 = (x); \467467- SMC_outb( __val16, ioaddr, reg ); \468468- SMC_outb( __val16 >> 8, ioaddr, reg + (1 << SMC_IO_SHIFT));\469469- } while (0)470470-#define SMC_inw(ioaddr, reg) \471471- ({ \472472- unsigned int __val16; \473473- __val16 = SMC_inb( ioaddr, reg ); \474474- __val16 |= SMC_inb( ioaddr, reg + (1 << SMC_IO_SHIFT)) << 8; \475475- __val16; \476476- })477477-419419+#define SMC_outw(x, ioaddr, reg) SMC_outw_b(x, ioaddr, reg)420420+#define SMC_inw(ioaddr, reg) SMC_inw_b(ioaddr, reg)478421#define SMC_insw(a, r, p, l) BUG()479422#define SMC_outsw(a, r, p, l) BUG()480423
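The smc91x.h change above hoists the 8-bit fallback macros (SMC_outw_b/SMC_inw_b) to the top of the header so the runtime-selected SMC_outw/SMC_inw can dispatch to them. The composition itself — two 8-bit accesses standing in for one 16-bit access, with register spacing controlled by the IO shift — can be sketched in plain user-space C (hypothetical names; a byte array stands in for the MMIO window):

```c
#include <stdint.h>

/* Sketch of SMC_outw_b()/SMC_inw_b(): one 16-bit register access built
 * from two 8-bit accesses, low byte first, with the high byte one
 * register step away.  io_shift mirrors SMC_IO_SHIFT: consecutive
 * registers are (1 << io_shift) bytes apart. */
static void outw_b(uint8_t *io, unsigned int reg, unsigned int io_shift,
		   uint16_t val)
{
	io[reg] = (uint8_t)(val & 0xff);                  /* low byte  */
	io[reg + (1u << io_shift)] = (uint8_t)(val >> 8); /* high byte */
}

static uint16_t inw_b(const uint8_t *io, unsigned int reg,
		      unsigned int io_shift)
{
	uint16_t val = io[reg];

	val |= (uint16_t)(io[reg + (1u << io_shift)] << 8);
	return val;
}
```

With io_shift = 1 the two halves land two bytes apart, matching a bus that maps each register on a 16-bit boundary.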
+10-7
drivers/net/ethernet/synopsys/dwc_eth_qos.c
···16221622 DWCEQOS_MMC_CTRL_RSTONRD);16231623 dwceqos_enable_mmc_interrupt(lp);1624162416251625- /* Enable Interrupts */16261626- dwceqos_write(lp, REG_DWCEQOS_DMA_CH0_IE,16271627- DWCEQOS_DMA_CH0_IE_NIE |16281628- DWCEQOS_DMA_CH0_IE_RIE | DWCEQOS_DMA_CH0_IE_TIE |16291629- DWCEQOS_DMA_CH0_IE_AIE |16301630- DWCEQOS_DMA_CH0_IE_FBEE);16311631-16251625+ dwceqos_write(lp, REG_DWCEQOS_DMA_CH0_IE, 0);16321626 dwceqos_write(lp, REG_DWCEQOS_MAC_IE, 0);1633162716341628 dwceqos_write(lp, REG_DWCEQOS_MAC_CFG, DWCEQOS_MAC_CFG_IPC |···1898190418991905 netif_start_queue(ndev);19001906 tasklet_enable(&lp->tx_bdreclaim_tasklet);19071907+19081908+ /* Enable Interrupts -- do this only after we enable NAPI and the19091909+ * tasklet.19101910+ */19111911+ dwceqos_write(lp, REG_DWCEQOS_DMA_CH0_IE,19121912+ DWCEQOS_DMA_CH0_IE_NIE |19131913+ DWCEQOS_DMA_CH0_IE_RIE | DWCEQOS_DMA_CH0_IE_TIE |19141914+ DWCEQOS_DMA_CH0_IE_AIE |19151915+ DWCEQOS_DMA_CH0_IE_FBEE);1901191619021917 return 0;19031918}
+1-1
drivers/net/ethernet/tehuti/tehuti.c
···19871987 if ((readl(nic->regs + FPGA_VER) & 0xFFF) >= 378) {19881988 err = pci_enable_msi(pdev);19891989 if (err)19901990- pr_err("Can't eneble msi. error is %d\n", err);19901990+ pr_err("Can't enable msi. error is %d\n", err);19911991 else19921992 nic->irq_type = IRQ_MSI;19931993 } else
+5-3
drivers/net/ethernet/xilinx/xilinx_emaclite.c
···11311131 lp->rx_ping_pong = get_bool(ofdev, "xlnx,rx-ping-pong");11321132 mac_address = of_get_mac_address(ofdev->dev.of_node);1133113311341134- if (mac_address)11341134+ if (mac_address) {11351135 /* Set the MAC address. */11361136 memcpy(ndev->dev_addr, mac_address, ETH_ALEN);11371137- else11381138- dev_warn(dev, "No MAC address found\n");11371137+ } else {11381138+ dev_warn(dev, "No MAC address found, using random\n");11391139+ eth_hw_addr_random(ndev);11401140+ }1139114111401142 /* Clear the Tx CSR's in case this is a restart */11411143 __raw_writel(0, lp->base_addr + XEL_TSR_OFFSET);
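The emaclite hunk falls back to eth_hw_addr_random() when the device tree carries no MAC address. What that helper guarantees — a random address that is still a valid station address — can be sketched as follows (hypothetical name, plain C):

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch of the eth_hw_addr_random() fallback used above: fill the
 * address with random bytes, then force it to be a valid station
 * address by clearing the multicast bit and setting the
 * locally-administered bit in the first octet. */
static void random_ether_addr_sketch(uint8_t addr[6])
{
	int i;

	for (i = 0; i < 6; i++)
		addr[i] = (uint8_t)rand();
	addr[0] &= 0xfe;	/* clear multicast bit */
	addr[0] |= 0x02;	/* set locally-administered bit */
}
```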
···888888 if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC)))889889 goto drop;890890891891- if (skb->sk && sk_fullsock(skb->sk)) {892892- sock_tx_timestamp(skb->sk, skb->sk->sk_tsflags,893893- &skb_shinfo(skb)->tx_flags);894894- sw_tx_timestamp(skb);895895- }891891+ skb_tx_timestamp(skb);896892897893 /* Orphan the skb - required as we might hang on to it898894 * for indefinite time.
···6969/*7070 * Version numbers7171 */7272-#define VMXNET3_DRIVER_VERSION_STRING "1.4.9.0-k"7272+#define VMXNET3_DRIVER_VERSION_STRING "1.4.a.0-k"73737474/* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */7575-#define VMXNET3_DRIVER_VERSION_NUM 0x010409007575+#define VMXNET3_DRIVER_VERSION_NUM 0x01040a0076767777#if defined(CONFIG_PCI_MSI)7878 /* RSS only makes sense if MSI-X is supported. */
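The vmxnet3 bump works because each byte of VMXNET3_DRIVER_VERSION_NUM encodes one version component, which is also why the string flips to the hex digit "a" for release 10 — string and number stay in sync byte for byte. A sketch of that packing (hypothetical helper; the driver defines the constant directly):

```c
#include <stdint.h>

/* Sketch of the VMXNET3 version packing: each byte of the 32-bit
 * number encodes one component, so 1.4.10.0 packs to 0x01040a00. */
static uint32_t pack_version(uint8_t maj, uint8_t min, uint8_t rel,
			     uint8_t build)
{
	return ((uint32_t)maj << 24) | ((uint32_t)min << 16) |
	       ((uint32_t)rel << 8) | build;
}
```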
···517517 pr_warning("End of tree marker overwritten: %08x\n",518518 be32_to_cpup(mem + size));519519520520- if (detached) {520520+ if (detached && mynodes) {521521 of_node_set_flag(*mynodes, OF_DETACHED);522522 pr_debug("unflattened tree is detached\n");523523 }
+3-2
drivers/of/irq.c
···544544545545 list_del(&desc->list);546546547547+ of_node_set_flag(desc->dev, OF_POPULATED);548548+547549 pr_debug("of_irq_init: init %s (%p), parent %p\n",548550 desc->dev->full_name,549551 desc->dev, desc->interrupt_parent);550552 ret = desc->irq_init_cb(desc->dev,551553 desc->interrupt_parent);552554 if (ret) {555555+ of_node_clear_flag(desc->dev, OF_POPULATED);553556 kfree(desc);554557 continue;555558 }···562559 * its children can get processed in a subsequent pass.563560 */564561 list_add_tail(&desc->list, &intc_parent_list);565565-566566- of_node_set_flag(desc->dev, OF_POPULATED);567562 }568563569564 /* Get the next pending parent that might have children */
+2
drivers/of/platform.c
···497497}498498EXPORT_SYMBOL_GPL(of_platform_default_populate);499499500500+#ifndef CONFIG_PPC500501static int __init of_platform_default_populate_init(void)501502{502503 struct device_node *node;···522521 return 0;523522}524523arch_initcall_sync(of_platform_default_populate_init);524524+#endif525525526526static int of_platform_device_destroy(struct device *dev, void *data)527527{
···10691069 nvec = maxvec;1070107010711071 for (;;) {10721072- if (!(flags & PCI_IRQ_NOAFFINITY)) {10721072+ if (flags & PCI_IRQ_AFFINITY) {10731073 dev->irq_affinity = irq_create_affinity_mask(&nvec);10741074 if (nvec < minvec)10751075 return -ENOSPC;···11051105 **/11061106int pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec)11071107{11081108- return __pci_enable_msi_range(dev, minvec, maxvec, PCI_IRQ_NOAFFINITY);11081108+ return __pci_enable_msi_range(dev, minvec, maxvec, 0);11091109}11101110EXPORT_SYMBOL(pci_enable_msi_range);11111111···11201120 return -ERANGE;1121112111221122 for (;;) {11231123- if (!(flags & PCI_IRQ_NOAFFINITY)) {11231123+ if (flags & PCI_IRQ_AFFINITY) {11241124 dev->irq_affinity = irq_create_affinity_mask(&nvec);11251125 if (nvec < minvec)11261126 return -ENOSPC;···11601160int pci_enable_msix_range(struct pci_dev *dev, struct msix_entry *entries,11611161 int minvec, int maxvec)11621162{11631163- return __pci_enable_msix_range(dev, entries, minvec, maxvec,11641164- PCI_IRQ_NOAFFINITY);11631163+ return __pci_enable_msix_range(dev, entries, minvec, maxvec, 0);11651164}11661165EXPORT_SYMBOL(pci_enable_msix_range);11671166···11861187{11871188 int vecs = -ENOSPC;1188118911891189- if (!(flags & PCI_IRQ_NOMSIX)) {11901190+ if (flags & PCI_IRQ_MSIX) {11901191 vecs = __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs,11911192 flags);11921193 if (vecs > 0)11931194 return vecs;11941195 }1195119611961196- if (!(flags & PCI_IRQ_NOMSI)) {11971197+ if (flags & PCI_IRQ_MSI) {11971198 vecs = __pci_enable_msi_range(dev, min_vecs, max_vecs, flags);11981199 if (vecs > 0)11991200 return vecs;12001201 }1201120212021203 /* use legacy irq if allowed */12031203- if (!(flags & PCI_IRQ_NOLEGACY) && min_vecs == 1)12041204+ if ((flags & PCI_IRQ_LEGACY) && min_vecs == 1) {12051205+ pci_intx(dev, 1);12041206 return 1;12071207+ }12081208+12051209 return vecs;12061210}12071211EXPORT_SYMBOL(pci_alloc_irq_vectors);
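The pci_alloc_irq_vectors() rework above inverts the flag polarity: callers now opt in to each interrupt type, and the core tries MSI-X first, then MSI, then the legacy line. A user-space sketch of that fallback order (hypothetical names; -1 stands in for -ENOSPC, and the callbacks stand in for the real MSI-X/MSI setup paths):

```c
/* Sketch of the reworked pci_alloc_irq_vectors() fallback: try each
 * opted-in interrupt type in order of preference, and use the legacy
 * line only if a single vector suffices. */
#define IRQ_LEGACY	(1 << 0)
#define IRQ_MSI		(1 << 1)
#define IRQ_MSIX	(1 << 2)
#define IRQ_ALL_TYPES	(IRQ_LEGACY | IRQ_MSI | IRQ_MSIX)

static int alloc_irq_vectors(unsigned int flags, int min_vecs,
			     int (*enable_msix)(int), int (*enable_msi)(int))
{
	int vecs = -1;	/* stand-in for -ENOSPC */

	if (flags & IRQ_MSIX) {
		vecs = enable_msix(min_vecs);
		if (vecs > 0)
			return vecs;
	}
	if (flags & IRQ_MSI) {
		vecs = enable_msi(min_vecs);
		if (vecs > 0)
			return vecs;
	}
	if ((flags & IRQ_LEGACY) && min_vecs == 1)
		return 1;	/* legacy line as last resort */
	return vecs;
}

/* stand-in enable paths for the usage below */
static int enable_fails(int nvec) { (void)nvec; return -1; }
static int enable_ok(int nvec)    { return nvec; }
```

Note the last-resort branch also matches the diff's new behavior of only succeeding on the legacy path when min_vecs == 1.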
+3-5
drivers/platform/olpc/olpc-ec.c
···11/*22 * Generic driver for the OLPC Embedded Controller.33 *44+ * Author: Andres Salomon <dilinger@queued.net>55+ *46 * Copyright (C) 2011-2012 One Laptop per Child Foundation.57 *68 * Licensed under the GPL v2 or later.···1412#include <linux/platform_device.h>1513#include <linux/slab.h>1614#include <linux/workqueue.h>1717-#include <linux/module.h>1515+#include <linux/init.h>1816#include <linux/list.h>1917#include <linux/olpc-ec.h>2018#include <asm/olpc.h>···328326{329327 return platform_driver_register(&olpc_ec_plat_driver);330328}331331-332329arch_initcall(olpc_ec_init_module);333333-334334-MODULE_AUTHOR("Andres Salomon <dilinger@queued.net>");335335-MODULE_LICENSE("GPL");
+2-6
drivers/platform/x86/intel_pmic_gpio.c
···11/* Moorestown PMIC GPIO (access through IPC) driver22 * Copyright (c) 2008 - 2009, Intel Corporation.33 *44+ * Author: Alek Du <alek.du@intel.com>55+ *46 * This program is free software; you can redistribute it and/or modify57 * it under the terms of the GNU General Public License version 2 as68 * published by the Free Software Foundation.···23212422#define pr_fmt(fmt) "%s: " fmt, __func__25232626-#include <linux/module.h>2724#include <linux/kernel.h>2825#include <linux/interrupt.h>2926#include <linux/delay.h>···323322{324323 return platform_driver_register(&platform_pmic_gpio_driver);325324}326326-327325subsys_initcall(platform_pmic_gpio_init);328328-329329-MODULE_AUTHOR("Alek Du <alek.du@intel.com>");330330-MODULE_DESCRIPTION("Intel Moorestown PMIC GPIO driver");331331-MODULE_LICENSE("GPL v2");
+11-2
drivers/scsi/aacraid/commctrl.c
···6363 struct fib *fibptr;6464 struct hw_fib * hw_fib = (struct hw_fib *)0;6565 dma_addr_t hw_fib_pa = (dma_addr_t)0LL;6666- unsigned size;6666+ unsigned int size, osize;6767 int retval;68686969 if (dev->in_reset) {···8787 * will not overrun the buffer when we copy the memory. Return8888 * an error if we would.8989 */9090- size = le16_to_cpu(kfib->header.Size) + sizeof(struct aac_fibhdr);9090+ osize = size = le16_to_cpu(kfib->header.Size) +9191+ sizeof(struct aac_fibhdr);9192 if (size < le16_to_cpu(kfib->header.SenderSize))9293 size = le16_to_cpu(kfib->header.SenderSize);9394 if (size > dev->max_fib_size) {···116115117116 if (copy_from_user(kfib, arg, size)) {118117 retval = -EFAULT;118118+ goto cleanup;119119+ }120120+121121+ /* Sanity check the second copy */122122+ if ((osize != le16_to_cpu(kfib->header.Size) +123123+ sizeof(struct aac_fibhdr))124124+ || (size < le16_to_cpu(kfib->header.SenderSize))) {125125+ retval = -EINVAL;119126 goto cleanup;120127 }121128
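The aacraid hunk closes a double-fetch window: the FIB header is copied from user memory twice, and a hostile caller can rewrite it between the two copies, so the sizes derived from the first copy must be re-checked against the second. A simplified model of that re-validation (hypothetical struct and field widths; the driver works on le16 header fields):

```c
#include <stdint.h>

/* Simplified model of the aacraid double-fetch fix: the size computed
 * from the first fetch (osize) must still match the second fetch, and
 * the claimed sender size must still fit in what was actually copied. */
struct fib_hdr { uint32_t size; uint32_t sender_size; };

static int revalidate(uint32_t osize, const struct fib_hdr *second,
		      uint32_t copied)
{
	if (second->size != osize)
		return -1;	/* header changed between fetches */
	if (copied < second->sender_size)
		return -1;	/* would read past the copied buffer */
	return 0;
}
```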
···9696 struct acm_rb read_buffers[ACM_NR];9797 struct acm_wb *putbuffer; /* for acm_tty_put_char() */9898 int rx_buflimit;9999- int rx_endpoint;10099 spinlock_t read_lock;101100 int write_used; /* number of non-empty write buffers */102101 int transmitting;
+63-3
drivers/usb/core/config.c
···171171 ep, buffer, size);172172}173173174174+static const unsigned short low_speed_maxpacket_maxes[4] = {175175+ [USB_ENDPOINT_XFER_CONTROL] = 8,176176+ [USB_ENDPOINT_XFER_ISOC] = 0,177177+ [USB_ENDPOINT_XFER_BULK] = 0,178178+ [USB_ENDPOINT_XFER_INT] = 8,179179+};180180+static const unsigned short full_speed_maxpacket_maxes[4] = {181181+ [USB_ENDPOINT_XFER_CONTROL] = 64,182182+ [USB_ENDPOINT_XFER_ISOC] = 1023,183183+ [USB_ENDPOINT_XFER_BULK] = 64,184184+ [USB_ENDPOINT_XFER_INT] = 64,185185+};186186+static const unsigned short high_speed_maxpacket_maxes[4] = {187187+ [USB_ENDPOINT_XFER_CONTROL] = 64,188188+ [USB_ENDPOINT_XFER_ISOC] = 1024,189189+ [USB_ENDPOINT_XFER_BULK] = 512,190190+ [USB_ENDPOINT_XFER_INT] = 1023,191191+};192192+static const unsigned short super_speed_maxpacket_maxes[4] = {193193+ [USB_ENDPOINT_XFER_CONTROL] = 512,194194+ [USB_ENDPOINT_XFER_ISOC] = 1024,195195+ [USB_ENDPOINT_XFER_BULK] = 1024,196196+ [USB_ENDPOINT_XFER_INT] = 1024,197197+};198198+174199static int usb_parse_endpoint(struct device *ddev, int cfgno, int inum,175200 int asnum, struct usb_host_interface *ifp, int num_ep,176201 unsigned char *buffer, int size)···204179 struct usb_endpoint_descriptor *d;205180 struct usb_host_endpoint *endpoint;206181 int n, i, j, retval;182182+ unsigned int maxp;183183+ const unsigned short *maxpacket_maxes;207184208185 d = (struct usb_endpoint_descriptor *) buffer;209186 buffer += d->bLength;···313286 endpoint->desc.wMaxPacketSize = cpu_to_le16(8);314287 }315288289289+ /* Validate the wMaxPacketSize field */290290+ maxp = usb_endpoint_maxp(&endpoint->desc);291291+292292+ /* Find the highest legal maxpacket size for this endpoint */293293+ i = 0; /* additional transactions per microframe */294294+ switch (to_usb_device(ddev)->speed) {295295+ case USB_SPEED_LOW:296296+ maxpacket_maxes = low_speed_maxpacket_maxes;297297+ break;298298+ case USB_SPEED_FULL:299299+ maxpacket_maxes = full_speed_maxpacket_maxes;300300+ break;301301+ case 
USB_SPEED_HIGH:302302+ /* Bits 12..11 are allowed only for HS periodic endpoints */303303+ if (usb_endpoint_xfer_int(d) || usb_endpoint_xfer_isoc(d)) {304304+ i = maxp & (BIT(12) | BIT(11));305305+ maxp &= ~i;306306+ }307307+ /* fallthrough */308308+ default:309309+ maxpacket_maxes = high_speed_maxpacket_maxes;310310+ break;311311+ case USB_SPEED_SUPER:312312+ case USB_SPEED_SUPER_PLUS:313313+ maxpacket_maxes = super_speed_maxpacket_maxes;314314+ break;315315+ }316316+ j = maxpacket_maxes[usb_endpoint_type(&endpoint->desc)];317317+318318+ if (maxp > j) {319319+ dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has invalid maxpacket %d, setting to %d\n",320320+ cfgno, inum, asnum, d->bEndpointAddress, maxp, j);321321+ maxp = j;322322+ endpoint->desc.wMaxPacketSize = cpu_to_le16(i | maxp);323323+ }324324+316325 /*317326 * Some buggy high speed devices have bulk endpoints using318327 * maxpacket sizes other than 512. High speed HCDs may not···356293 */357294 if (to_usb_device(ddev)->speed == USB_SPEED_HIGH358295 && usb_endpoint_xfer_bulk(d)) {359359- unsigned maxp;360360-361361- maxp = usb_endpoint_maxp(&endpoint->desc) & 0x07ff;362296 if (maxp != 512)363297 dev_warn(ddev, "config %d interface %d altsetting %d "364298 "bulk endpoint 0x%X has invalid maxpacket %d\n",
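The config.c validation clamps wMaxPacketSize to the per-speed legal maximum, taking care to strip bits 12..11 (additional transactions per microframe) for high-speed periodic endpoints and restore them after the clamp. The clamp itself can be sketched as (hypothetical helper; the driver indexes the legal maximum by transfer type and speed):

```c
#include <stdint.h>

/* Sketch of the wMaxPacketSize validation added above: clamp to the
 * per-speed maximum, preserving the high-speed periodic mult bits. */
static uint16_t clamp_maxpacket(uint16_t wmax, uint16_t legal_max,
				int hs_periodic)
{
	uint16_t mult = 0;

	if (hs_periodic) {
		mult = wmax & 0x1800;	/* bits 12..11 */
		wmax &= ~mult;
	}
	if (wmax > legal_max)
		wmax = legal_max;
	return mult | wmax;
}
```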
+5-2
drivers/usb/core/devio.c
···241241 goto error_decrease_mem;242242 }243243244244- mem = usb_alloc_coherent(ps->dev, size, GFP_USER, &dma_handle);244244+ mem = usb_alloc_coherent(ps->dev, size, GFP_USER | __GFP_NOWARN,245245+ &dma_handle);245246 if (!mem) {246247 ret = -ENOMEM;247248 goto error_free_usbm;···25832582 if (file->f_mode & FMODE_WRITE && !list_empty(&ps->async_completed))25842583 mask |= POLLOUT | POLLWRNORM;25852584 if (!connected(ps))25862586- mask |= POLLERR | POLLHUP;25852585+ mask |= POLLHUP;25862586+ if (list_empty(&ps->list))25872587+ mask |= POLLERR;25872588 return mask;25882589}25892590
+9-14
drivers/usb/core/hub.c
···1052105210531053 /* Continue a partial initialization */10541054 if (type == HUB_INIT2 || type == HUB_INIT3) {10551055- device_lock(hub->intfdev);10551055+ device_lock(&hdev->dev);1056105610571057 /* Was the hub disconnected while we were waiting? */10581058- if (hub->disconnected) {10591059- device_unlock(hub->intfdev);10601060- kref_put(&hub->kref, hub_release);10611061- return;10621062- }10581058+ if (hub->disconnected)10591059+ goto disconnected;10631060 if (type == HUB_INIT2)10641061 goto init2;10651062 goto init3;···12591262 queue_delayed_work(system_power_efficient_wq,12601263 &hub->init_work,12611264 msecs_to_jiffies(delay));12621262- device_unlock(hub->intfdev);12651265+ device_unlock(&hdev->dev);12631266 return; /* Continues at init3: below */12641267 } else {12651268 msleep(delay);···12781281 /* Scan all ports that need attention */12791282 kick_hub_wq(hub);1280128312811281- /* Allow autosuspend if it was suppressed */12821282- if (type <= HUB_INIT3)12841284+ if (type == HUB_INIT2 || type == HUB_INIT3) {12851285+ /* Allow autosuspend if it was suppressed */12861286+ disconnected:12831287 usb_autopm_put_interface_async(to_usb_interface(hub->intfdev));12841284-12851285- if (type == HUB_INIT2 || type == HUB_INIT3)12861286- device_unlock(hub->intfdev);12881288+ device_unlock(&hdev->dev);12891289+ }1287129012881291 kref_put(&hub->kref, hub_release);12891292}···13111314{13121315 struct usb_device *hdev = hub->hdev;13131316 int i;13141314-13151315- cancel_delayed_work_sync(&hub->init_work);1316131713171318 /* hub_wq and related activity won't re-trigger */13181319 hub->quiescing = 1;
+1
drivers/usb/dwc3/dwc3-of-simple.c
···6161 if (!simple->clks)6262 return -ENOMEM;63636464+ platform_set_drvdata(pdev, simple);6465 simple->dev = dev;65666667 for (i = 0; i < simple->num_clocks; i++) {
···829829 if (!req->request.no_interrupt && !chain)830830 trb->ctrl |= DWC3_TRB_CTRL_IOC | DWC3_TRB_CTRL_ISP_IMI;831831832832- if (last)832832+ if (last && !usb_endpoint_xfer_isoc(dep->endpoint.desc))833833 trb->ctrl |= DWC3_TRB_CTRL_LST;834834835835 if (chain)···1955195519561956static int __dwc3_cleanup_done_trbs(struct dwc3 *dwc, struct dwc3_ep *dep,19571957 struct dwc3_request *req, struct dwc3_trb *trb,19581958- const struct dwc3_event_depevt *event, int status)19581958+ const struct dwc3_event_depevt *event, int status,19591959+ int chain)19591960{19601961 unsigned int count;19611962 unsigned int s_pkt = 0;···19651964 dep->queued_requests--;19661965 trace_dwc3_complete_trb(dep, trb);1967196619671967+ /*19681968+ * If we're in the middle of series of chained TRBs and we19691969+ * receive a short transfer along the way, DWC3 will skip19701970+ * through all TRBs including the last TRB in the chain (the19711971+ * where CHN bit is zero. DWC3 will also avoid clearing HWO19721972+ * bit and SW has to do it manually.19731973+ *19741974+ * We're going to do that here to avoid problems of HW trying19751975+ * to use bogus TRBs for transfers.19761976+ */19771977+ if (chain && (trb->ctrl & DWC3_TRB_CTRL_HWO))19781978+ trb->ctrl &= ~DWC3_TRB_CTRL_HWO;19791979+19681980 if ((trb->ctrl & DWC3_TRB_CTRL_HWO) && status != -ESHUTDOWN)19691969- /*19701970- * We continue despite the error. There is not much we19711971- * can do. If we don't clean it up we loop forever. If19721972- * we skip the TRB then it gets overwritten after a19731973- * while since we use them in a ring buffer. A BUG()19741974- * would help. 
Lets hope that if this occurs, someone19751975- * fixes the root cause instead of looking away :)19761976- */19771977- dev_err(dwc->dev, "%s's TRB (%p) still owned by HW\n",19781978- dep->name, trb);19811981+ return 1;19821982+19791983 count = trb->size & DWC3_TRB_SIZE_MASK;1980198419811985 if (dep->direction) {···20192013 s_pkt = 1;20202014 }2021201520222022- /*20232023- * We assume here we will always receive the entire data block20242024- * which we should receive. Meaning, if we program RX to20252025- * receive 4K but we receive only 2K, we assume that's all we20262026- * should receive and we simply bounce the request back to the20272027- * gadget driver for further processing.20282028- */20292029- req->request.actual += req->request.length - count;20302030- if (s_pkt)20162016+ if (s_pkt && !chain)20312017 return 1;20322018 if ((event->status & DEPEVT_STATUS_LST) &&20332019 (trb->ctrl & (DWC3_TRB_CTRL_LST |···20382040 struct dwc3_trb *trb;20392041 unsigned int slot;20402042 unsigned int i;20432043+ int count = 0;20412044 int ret;2042204520432046 do {20472047+ int chain;20482048+20442049 req = next_request(&dep->started_list);20452050 if (WARN_ON_ONCE(!req))20462051 return 1;2047205220532053+ chain = req->request.num_mapped_sgs > 0;20482054 i = 0;20492055 do {20502056 slot = req->first_trb_index + i;···20562054 slot++;20572055 slot %= DWC3_TRB_NUM;20582056 trb = &dep->trb_pool[slot];20572057+ count += trb->size & DWC3_TRB_SIZE_MASK;2059205820602059 ret = __dwc3_cleanup_done_trbs(dwc, dep, req, trb,20612061- event, status);20602060+ event, status, chain);20622061 if (ret)20632062 break;20642063 } while (++i < req->request.num_mapped_sgs);2065206420652065+ /*20662066+ * We assume here we will always receive the entire data block20672067+ * which we should receive. 
Meaning, if we program RX to20682068+ * receive 4K but we receive only 2K, we assume that's all we20692069+ * should receive and we simply bounce the request back to the20702070+ * gadget driver for further processing.20712071+ */20722072+ req->request.actual += req->request.length - count;20662073 dwc3_gadget_giveback(dep, req, status);2067207420682075 if (ret)
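After the dwc3 change, per-TRB completion no longer updates request.actual; instead the caller sums the residual byte count of every TRB in the chain and bounces the request back to the gadget driver once, for the whole transfer. A simplified model of that accounting (hypothetical names):

```c
/* Simplified model of the chained-TRB completion accounting: on a
 * short transfer the controller skips the rest of the chain, so the
 * driver sums the residual count of every TRB in the chain and reports
 * actual = requested - residual once per request, not once per TRB. */
static unsigned int complete_chain(const unsigned int *residual, int ntrbs,
				   unsigned int requested)
{
	unsigned int count = 0;
	int i;

	for (i = 0; i < ntrbs; i++)
		count += residual[i];
	return requested - count;
}
```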
+4-2
drivers/usb/gadget/composite.c
···19131913 break;1914191419151915 case USB_RECIP_ENDPOINT:19161916+ if (!cdev->config)19171917+ break;19161918 endp = ((w_index & 0x80) >> 3) | (w_index & 0x0f);19171919 list_for_each_entry(f, &cdev->config->functions, list) {19181920 if (test_bit(endp, f->endpoints))···2126212421272125 cdev->os_desc_req = usb_ep_alloc_request(ep0, GFP_KERNEL);21282126 if (!cdev->os_desc_req) {21292129- ret = PTR_ERR(cdev->os_desc_req);21272127+ ret = -ENOMEM;21302128 goto end;21312129 }2132213021332131 /* OS feature descriptor length <= 4kB */21342132 cdev->os_desc_req->buf = kmalloc(4096, GFP_KERNEL);21352133 if (!cdev->os_desc_req->buf) {21362136- ret = PTR_ERR(cdev->os_desc_req->buf);21342134+ ret = -ENOMEM;21372135 kfree(cdev->os_desc_req);21382136 goto end;21392137 }
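The composite.c fix matters because usb_ep_alloc_request() and kmalloc() return NULL on failure, not an ERR_PTR: PTR_ERR() just casts the pointer, so PTR_ERR(NULL) evaluates to 0 and callers read the failure as success. A minimal illustration (the ENOMEM value is hard-coded for the sketch):

```c
#include <stddef.h>

/* PTR_ERR(NULL) is 0 -- a NULL-returning allocator must be mapped to
 * an explicit -ENOMEM instead, as the fix above does. */
#define SKETCH_ENOMEM 12

static long ptr_err(const void *p)
{
	return (long)p;		/* what PTR_ERR() effectively does */
}

static long check_alloc(const void *p)
{
	return p ? 0 : -SKETCH_ENOMEM;	/* the corrected pattern */
}
```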
···680680{681681 rndis_reset_cmplt_type *resp;682682 rndis_resp_t *r;683683+ u8 *xbuf;684684+ u32 length;685685+686686+ /* drain the response queue */687687+ while ((xbuf = rndis_get_next_response(params, &length)))688688+ rndis_free_response(params, xbuf);683689684690 r = rndis_add_response(params, sizeof(rndis_reset_cmplt_type));685691 if (!r)
+2-1
drivers/usb/gadget/function/u_ether.c
···556556 /* Multi frame CDC protocols may store the frame for557557 * later which is not a dropped frame.558558 */559559- if (dev->port_usb->supports_multi_frame)559559+ if (dev->port_usb &&560560+ dev->port_usb->supports_multi_frame)560561 goto multiframe;561562 goto drop;562563 }
···386386387387 ret = 0;388388 virt_dev = xhci->devs[slot_id];389389+ if (!virt_dev)390390+ return -ENODEV;391391+389392 cmd = xhci_alloc_command(xhci, false, true, GFP_NOIO);390393 if (!cmd) {391394 xhci_dbg(xhci, "Couldn't allocate command structure.\n");
+2-1
drivers/usb/host/xhci-pci.c
···314314 usb_remove_hcd(xhci->shared_hcd);315315 usb_put_hcd(xhci->shared_hcd);316316 }317317- usb_hcd_pci_remove(dev);318317319318 /* Workaround for spurious wakeups at shutdown with HSW */320319 if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)321320 pci_set_power_state(dev, PCI_D3hot);321321+322322+ usb_hcd_pci_remove(dev);322323}323324324325#ifdef CONFIG_PM
+9-7
drivers/usb/host/xhci-ring.c
···1334133413351335 cmd = list_entry(xhci->cmd_list.next, struct xhci_command, cmd_list);1336133613371337- if (cmd->command_trb != xhci->cmd_ring->dequeue) {13381338- xhci_err(xhci,13391339- "Command completion event does not match command\n");13401340- return;13411341- }13421342-13431337 del_timer(&xhci->cmd_timer);1344133813451339 trace_xhci_cmd_completion(cmd_trb, (struct xhci_generic_trb *) event);···13451351 xhci_handle_stopped_cmd_ring(xhci, cmd);13461352 return;13471353 }13541354+13551355+ if (cmd->command_trb != xhci->cmd_ring->dequeue) {13561356+ xhci_err(xhci,13571357+ "Command completion event does not match command\n");13581358+ return;13591359+ }13601360+13481361 /*13491362 * Host aborted the command ring, check if the current command was13501363 * supposed to be aborted, otherwise continue normally.···32443243 send_addr = addr;3245324432463245 /* Queue the TRBs, even if they are zero-length */32473247- for (enqd_len = 0; enqd_len < full_len; enqd_len += trb_buff_len) {32463246+ for (enqd_len = 0; first_trb || enqd_len < full_len;32473247+ enqd_len += trb_buff_len) {32483248 field = TRB_TYPE(TRB_NORMAL);3249324932503250 /* TRB buffer should not cross 64KB boundaries */
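The xhci-ring loop-condition change guarantees that even a zero-length transfer queues one (zero-length) TRB: the old `enqd_len < full_len` test never fired when full_len was 0. Counting iterations of the fixed loop shape (hypothetical helper; assumes trb_len > 0 whenever full_len > 0):

```c
/* The extra "first_trb ||" term forces at least one iteration, so a
 * zero-length transfer still queues exactly one TRB. */
static int count_queued_trbs(unsigned int full_len, unsigned int trb_len)
{
	int first_trb = 1, n = 0;
	unsigned int enqd_len;

	for (enqd_len = 0; first_trb || enqd_len < full_len;
	     enqd_len += trb_len) {
		first_trb = 0;
		n++;
	}
	return n;
}
```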
+5-5
drivers/usb/misc/ftdi-elan.c
···665665{666666 char data[30 *3 + 4];667667 char *d = data;668668- int m = (sizeof(data) - 1) / 3;668668+ int m = (sizeof(data) - 1) / 3 - 1;669669 int bytes_read = 0;670670 int retry_on_empty = 10;671671 int retry_on_timeout = 5;···16841684 int i = 0;16851685 char data[30 *3 + 4];16861686 char *d = data;16871687- int m = (sizeof(data) - 1) / 3;16871687+ int m = (sizeof(data) - 1) / 3 - 1;16881688 int l = 0;16891689 struct u132_target *target = &ftdi->target[ed];16901690 struct u132_command *command = &ftdi->command[···18761876 if (packet_bytes > 2) {18771877 char diag[30 *3 + 4];18781878 char *d = diag;18791879- int m = (sizeof(diag) - 1) / 3;18791879+ int m = (sizeof(diag) - 1) / 3 - 1;18801880 char *b = ftdi->bulk_in_buffer;18811881 int bytes_read = 0;18821882 diag[0] = 0;···20532053 if (packet_bytes > 2) {20542054 char diag[30 *3 + 4];20552055 char *d = diag;20562056- int m = (sizeof(diag) - 1) / 3;20562056+ int m = (sizeof(diag) - 1) / 3 - 1;20572057 char *b = ftdi->bulk_in_buffer;20582058 int bytes_read = 0;20592059 unsigned char c = 0;···21552155 if (packet_bytes > 2) {21562156 char diag[30 *3 + 4];21572157 char *d = diag;21582158- int m = (sizeof(diag) - 1) / 3;21582158+ int m = (sizeof(diag) - 1) / 3 - 1;21592159 char *b = ftdi->bulk_in_buffer;21602160 int bytes_read = 0;21612161 diag[0] = 0;
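The repeated ftdi-elan change shrinks the entry count by one. A plausible reading of the sizing: each dumped byte takes three characters, and the dump may end with a three-character continuation marker plus the NUL terminator, which only fits if one three-character slot is held back. A sketch under that assumption (hypothetical helper; the driver's exact marker may differ):

```c
#include <stdio.h>

/* Sketch of the buffer sizing: with data[30 * 3 + 4] and the original
 * m = (size - 1) / 3, thirty-one entries plus a marker could land past
 * the end of the buffer; m = (size - 1) / 3 - 1 makes everything fit. */
static int dump_hex(char *buf, int size, const unsigned char *b, int n)
{
	int m = (size - 1) / 3 - 1;
	char *d = buf;
	int i;

	for (i = 0; i < n && i < m; i++)
		d += sprintf(d, " %02X", b[i]);
	if (n > m)
		d += sprintf(d, " ..");	/* continuation marker */
	return (int)(d - buf);
}
```

For a 94-byte buffer this gives m = 30: thirty 3-character entries (90) plus the 3-character marker plus the NUL is exactly 94 bytes.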
···514514 if (gpio > 0)515515 dparam->enable_gpio = gpio;516516517517- if (dparam->type == USBHS_TYPE_RCAR_GEN2)517517+ if (dparam->type == USBHS_TYPE_RCAR_GEN2 ||518518+ dparam->type == USBHS_TYPE_RCAR_GEN3)518519 dparam->has_usb_dmac = 1;519520520521 return info;
+2-2
drivers/usb/renesas_usbhs/fifo.c
···871871872872 /* use PIO if packet is less than pio_dma_border or pipe is DCP */873873 if ((len < usbhs_get_dparam(priv, pio_dma_border)) ||874874- usbhs_pipe_is_dcp(pipe))874874+ usbhs_pipe_type_is(pipe, USB_ENDPOINT_XFER_ISOC))875875 goto usbhsf_pio_prepare_push;876876877877 /* check data length if this driver don't use USB-DMAC */···976976977977 /* use PIO if packet is less than pio_dma_border or pipe is DCP */978978 if ((pkt->length < usbhs_get_dparam(priv, pio_dma_border)) ||979979- usbhs_pipe_is_dcp(pipe))979979+ usbhs_pipe_type_is(pipe, USB_ENDPOINT_XFER_ISOC))980980 goto usbhsf_pio_prepare_pop;981981982982 fifo = usbhsf_get_dma_fifo(priv, pkt);
+5-2
drivers/usb/renesas_usbhs/mod_gadget.c
···617617 * use dmaengine if possible.618618 * It will use pio handler if impossible.619619 */620620- if (usb_endpoint_dir_in(desc))620620+ if (usb_endpoint_dir_in(desc)) {621621 pipe->handler = &usbhs_fifo_dma_push_handler;622622- else622622+ } else {623623 pipe->handler = &usbhs_fifo_dma_pop_handler;624624+ usbhs_xxxsts_clear(priv, BRDYSTS,625625+ usbhs_pipe_number(pipe));626626+ }624627625628 ret = 0;626629 }
···6060 CHUNK_ALLOC_FORCE = 2,6161};62626363-/*6464- * Control how reservations are dealt with.6565- *6666- * RESERVE_FREE - freeing a reservation.6767- * RESERVE_ALLOC - allocating space and we need to update bytes_may_use for6868- * ENOSPC accounting6969- * RESERVE_ALLOC_NO_ACCOUNT - allocating space and we should not update7070- * bytes_may_use as the ENOSPC accounting is done elsewhere7171- */7272-enum {7373- RESERVE_FREE = 0,7474- RESERVE_ALLOC = 1,7575- RESERVE_ALLOC_NO_ACCOUNT = 2,7676-};7777-7863static int update_block_group(struct btrfs_trans_handle *trans,7964 struct btrfs_root *root, u64 bytenr,8065 u64 num_bytes, int alloc);···89104 struct btrfs_key *key);90105static void dump_space_info(struct btrfs_space_info *info, u64 bytes,91106 int dump_block_groups);9292-static int btrfs_update_reserved_bytes(struct btrfs_block_group_cache *cache,9393- u64 num_bytes, int reserve,9494- int delalloc);107107+static int btrfs_add_reserved_bytes(struct btrfs_block_group_cache *cache,108108+ u64 ram_bytes, u64 num_bytes, int delalloc);109109+static int btrfs_free_reserved_bytes(struct btrfs_block_group_cache *cache,110110+ u64 num_bytes, int delalloc);95111static int block_rsv_use_bytes(struct btrfs_block_rsv *block_rsv,96112 u64 num_bytes);97113int btrfs_pin_extent(struct btrfs_root *root,···34873501 dcs = BTRFS_DC_SETUP;34883502 else if (ret == -ENOSPC)34893503 set_bit(BTRFS_TRANS_CACHE_ENOSPC, &trans->transaction->flags);34903490- btrfs_free_reserved_data_space(inode, 0, num_pages);3491350434923505out_put:34933506 iput(inode);···44574472 }44584473}4459447444754475+/*44764476+ * If force is CHUNK_ALLOC_FORCE:44774477+ * - return 1 if it successfully allocates a chunk,44784478+ * - return errors including -ENOSPC otherwise.44794479+ * If force is NOT CHUNK_ALLOC_FORCE:44804480+ * - return 0 if it doesn't need to allocate a new chunk,44814481+ * - return 1 if it successfully allocates a chunk,44824482+ * - return errors including -ENOSPC otherwise.44834483+ 
*/44604484static int do_chunk_alloc(struct btrfs_trans_handle *trans,44614485 struct btrfs_root *extent_root, u64 flags, int force)44624486{···48764882 btrfs_get_alloc_profile(root, 0),48774883 CHUNK_ALLOC_NO_FORCE);48784884 btrfs_end_transaction(trans, root);48794879- if (ret == -ENOSPC)48854885+ if (ret > 0 || ret == -ENOSPC)48804886 ret = 0;48814887 break;48824888 case COMMIT_TRANS:···64916497}6492649864936499/**64946494- * btrfs_update_reserved_bytes - update the block_group and space info counters65006500+ * btrfs_add_reserved_bytes - update the block_group and space info counters64956501 * @cache: The cache we are manipulating65026502+ * @ram_bytes: The number of bytes of file content, and will be same to65036503+ * @num_bytes except for the compress path.64966504 * @num_bytes: The number of bytes in question64976497- * @reserve: One of the reservation enums64986505 * @delalloc: The blocks are allocated for the delalloc write64996506 *65006500- * This is called by the allocator when it reserves space, or by somebody who is65016501- * freeing space that was never actually used on disk. For example if you65026502- * reserve some space for a new leaf in transaction A and before transaction A65036503- * commits you free that leaf, you call this with reserve set to 0 in order to65046504- * clear the reservation.65056505- *65066506- * Metadata reservations should be called with RESERVE_ALLOC so we do the proper65076507+ * This is called by the allocator when it reserves space. Metadata65086508+ * reservations should be called with RESERVE_ALLOC so we do the proper65076509 * ENOSPC accounting. For data we handle the reservation through clearing the65086510 * delalloc bits in the io_tree. 
fs/btrfs/extent-tree.c

 * … We have to do this since we could end up
 * allocating less disk space for the amount of data we have reserved in the
···
 * make the reservation and return -EAGAIN, otherwise this function always
 * succeeds.
 */
-static int btrfs_update_reserved_bytes(struct btrfs_block_group_cache *cache,
-                                       u64 num_bytes, int reserve, int delalloc)
+static int btrfs_add_reserved_bytes(struct btrfs_block_group_cache *cache,
+                                    u64 ram_bytes, u64 num_bytes, int delalloc)
 {
     struct btrfs_space_info *space_info = cache->space_info;
     int ret = 0;

     spin_lock(&space_info->lock);
     spin_lock(&cache->lock);
-    if (reserve != RESERVE_FREE) {
-        if (cache->ro) {
-            ret = -EAGAIN;
-        } else {
-            cache->reserved += num_bytes;
-            space_info->bytes_reserved += num_bytes;
-            if (reserve == RESERVE_ALLOC) {
-                trace_btrfs_space_reservation(cache->fs_info,
-                        "space_info", space_info->flags,
-                        num_bytes, 0);
-                space_info->bytes_may_use -= num_bytes;
-            }
-
-            if (delalloc)
-                cache->delalloc_bytes += num_bytes;
-        }
+    if (cache->ro) {
+        ret = -EAGAIN;
     } else {
-        if (cache->ro)
-            space_info->bytes_readonly += num_bytes;
-        cache->reserved -= num_bytes;
-        space_info->bytes_reserved -= num_bytes;
+        cache->reserved += num_bytes;
+        space_info->bytes_reserved += num_bytes;

+        trace_btrfs_space_reservation(cache->fs_info,
+                "space_info", space_info->flags,
+                ram_bytes, 0);
+        space_info->bytes_may_use -= ram_bytes;
         if (delalloc)
-            cache->delalloc_bytes -= num_bytes;
+            cache->delalloc_bytes += num_bytes;
     }
     spin_unlock(&cache->lock);
     spin_unlock(&space_info->lock);
     return ret;
 }

+/**
+ * btrfs_free_reserved_bytes - update the block_group and space info counters
+ * @cache: The cache we are manipulating
+ * @num_bytes: The number of bytes in question
+ * @delalloc: The blocks are allocated for the delalloc write
+ *
+ * This is called by somebody who is freeing space that was never actually used
+ * on disk. For example if you reserve some space for a new leaf in transaction
+ * A and before transaction A commits you free that leaf, you call this with
+ * reserve set to 0 in order to clear the reservation.
+ */
+static int btrfs_free_reserved_bytes(struct btrfs_block_group_cache *cache,
+                                     u64 num_bytes, int delalloc)
+{
+    struct btrfs_space_info *space_info = cache->space_info;
+    int ret = 0;
+
+    spin_lock(&space_info->lock);
+    spin_lock(&cache->lock);
+    if (cache->ro)
+        space_info->bytes_readonly += num_bytes;
+    cache->reserved -= num_bytes;
+    space_info->bytes_reserved -= num_bytes;
+
+    if (delalloc)
+        cache->delalloc_bytes -= num_bytes;
+    spin_unlock(&cache->lock);
+    spin_unlock(&space_info->lock);
+    return ret;
+}
 void btrfs_prepare_extent_commit(struct btrfs_trans_handle *trans,
                                  struct btrfs_root *root)
 {
···
     WARN_ON(test_bit(EXTENT_BUFFER_DIRTY, &buf->bflags));

     btrfs_add_free_space(cache, buf->start, buf->len);
-    btrfs_update_reserved_bytes(cache, buf->len, RESERVE_FREE, 0);
+    btrfs_free_reserved_bytes(cache, buf->len, 0);
     btrfs_put_block_group(cache);
     trace_btrfs_reserved_extent_free(root, buf->start, buf->len);
     pin = 0;
···
 * the free space extent currently.
 */
 static noinline int find_free_extent(struct btrfs_root *orig_root,
-                                     u64 num_bytes, u64 empty_size,
-                                     u64 hint_byte, struct btrfs_key *ins,
-                                     u64 flags, int delalloc)
+                                     u64 ram_bytes, u64 num_bytes,
+                                     u64 empty_size, u64 hint_byte,
+                                     struct btrfs_key *ins,
+                                     u64 flags, int delalloc)
 {
     int ret = 0;
     struct btrfs_root *root = orig_root->fs_info->extent_root;
···
     struct btrfs_space_info *space_info;
     int loop = 0;
     int index = __get_raid_index(flags);
-    int alloc_type = (flags & BTRFS_BLOCK_GROUP_DATA) ?
-        RESERVE_ALLOC_NO_ACCOUNT : RESERVE_ALLOC;
     bool failed_cluster_refill = false;
     bool failed_alloc = false;
     bool use_cluster = true;
···
             search_start - offset);
         BUG_ON(offset > search_start);

-        ret = btrfs_update_reserved_bytes(block_group, num_bytes,
-                alloc_type, delalloc);
+        ret = btrfs_add_reserved_bytes(block_group, ram_bytes,
+                num_bytes, delalloc);
         if (ret == -EAGAIN) {
             btrfs_add_free_space(block_group, offset, num_bytes);
             goto loop;
···
     up_read(&info->groups_sem);
 }

-int btrfs_reserve_extent(struct btrfs_root *root,
+int btrfs_reserve_extent(struct btrfs_root *root, u64 ram_bytes,
                          u64 num_bytes, u64 min_alloc_size,
                          u64 empty_size, u64 hint_byte,
                          struct btrfs_key *ins, int is_data, int delalloc)
···
     flags = btrfs_get_alloc_profile(root, is_data);
 again:
     WARN_ON(num_bytes < root->sectorsize);
-    ret = find_free_extent(root, num_bytes, empty_size, hint_byte, ins,
-                           flags, delalloc);
+    ret = find_free_extent(root, ram_bytes, num_bytes, empty_size,
+                           hint_byte, ins, flags, delalloc);
     if (!ret && !is_data) {
         btrfs_dec_block_group_reservations(root->fs_info,
                                            ins->objectid);
···
         num_bytes = min(num_bytes >> 1, ins->offset);
         num_bytes = round_down(num_bytes, root->sectorsize);
         num_bytes = max(num_bytes, min_alloc_size);
+        ram_bytes = num_bytes;
         if (num_bytes == min_alloc_size)
             final_tried = true;
         goto again;
···
     if (btrfs_test_opt(root->fs_info, DISCARD))
         ret = btrfs_discard_extent(root, start, len, NULL);
     btrfs_add_free_space(cache, start, len);
-    btrfs_update_reserved_bytes(cache, len, RESERVE_FREE, delalloc);
+    btrfs_free_reserved_bytes(cache, len, delalloc);
     trace_btrfs_reserved_extent_free(root, start, len);
 }
···
     if (!block_group)
         return -EINVAL;

-    ret = btrfs_update_reserved_bytes(block_group, ins->offset,
-                                      RESERVE_ALLOC_NO_ACCOUNT, 0);
+    ret = btrfs_add_reserved_bytes(block_group, ins->offset,
+                                   ins->offset, 0);
     BUG_ON(ret); /* logic error */
     ret = alloc_reserved_file_extent(trans, root, 0, root_objectid,
                                      0, owner, offset, ins, 1);
···
     if (IS_ERR(block_rsv))
         return ERR_CAST(block_rsv);

-    ret = btrfs_reserve_extent(root, blocksize, blocksize,
+    ret = btrfs_reserve_extent(root, blocksize, blocksize, blocksize,
                                empty_size, hint, &ins, 0, 0);
     if (ret)
         goto out_unuse;
···
     wc->reada_slot = slot;
 }

-/*
- * These may not be seen by the usual inc/dec ref code so we have to
- * add them here.
- */
-static int record_one_subtree_extent(struct btrfs_trans_handle *trans,
-                                     struct btrfs_root *root, u64 bytenr,
-                                     u64 num_bytes)
-{
-    struct btrfs_qgroup_extent_record *qrecord;
-    struct btrfs_delayed_ref_root *delayed_refs;
-
-    qrecord = kmalloc(sizeof(*qrecord), GFP_NOFS);
-    if (!qrecord)
-        return -ENOMEM;
-
-    qrecord->bytenr = bytenr;
-    qrecord->num_bytes = num_bytes;
-    qrecord->old_roots = NULL;
-
-    delayed_refs = &trans->transaction->delayed_refs;
-    spin_lock(&delayed_refs->lock);
-    if (btrfs_qgroup_insert_dirty_extent(trans->fs_info,
-                                         delayed_refs, qrecord))
-        kfree(qrecord);
-    spin_unlock(&delayed_refs->lock);
-
-    return 0;
-}
-
 static int account_leaf_items(struct btrfs_trans_handle *trans,
                               struct btrfs_root *root,
                               struct extent_buffer *eb)
···
         num_bytes = btrfs_file_extent_disk_num_bytes(eb, fi);

-        ret = record_one_subtree_extent(trans, root, bytenr, num_bytes);
+        ret = btrfs_qgroup_insert_dirty_extent(trans, root->fs_info,
+                bytenr, num_bytes, GFP_NOFS);
         if (ret)
             return ret;
     }
···
             btrfs_set_lock_blocking_rw(eb, BTRFS_READ_LOCK);
             path->locks[level] = BTRFS_READ_LOCK_BLOCKING;

-            ret = record_one_subtree_extent(trans, root, child_bytenr,
-                    root->nodesize);
+            ret = btrfs_qgroup_insert_dirty_extent(trans,
+                    root->fs_info, child_bytenr,
+                    root->nodesize, GFP_NOFS);
             if (ret)
                 goto out;
         }
···
     } else {
         ret = 0;
     }
+    free_extent_map(em);
     goto out;
     }
     path->slots[0]++;
···
     block_group->iref = 0;
     block_group->inode = NULL;
     spin_unlock(&block_group->lock);
+    ASSERT(block_group->io_ctl.inode == NULL);
     iput(inode);
     last = block_group->key.objectid + block_group->key.offset;
     btrfs_put_block_group(block_group);
···
         free_excluded_extents(info->extent_root, block_group);

         btrfs_remove_free_space_cache(block_group);
+        ASSERT(list_empty(&block_group->dirty_list));
+        ASSERT(list_empty(&block_group->io_list));
+        ASSERT(list_empty(&block_group->bg_list));
+        ASSERT(atomic_read(&block_group->count) == 1);
         btrfs_put_block_group(block_group);

         spin_lock(&info->block_group_cache_lock);
fs/btrfs/file.c

···
     }
     trans->sync = true;

-    btrfs_init_log_ctx(&ctx);
+    btrfs_init_log_ctx(&ctx, inode);

     ret = btrfs_log_dentry_safe(trans, root, dentry, start, end, &ctx);
     if (ret < 0) {
···
     alloc_start = round_down(offset, blocksize);
     alloc_end = round_up(offset + len, blocksize);
+    cur_offset = alloc_start;

     /* Make sure we aren't being give some crap mode */
     if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
···
     /* First, check if we exceed the qgroup limit */
     INIT_LIST_HEAD(&reserve_list);
-    cur_offset = alloc_start;
     while (1) {
         em = btrfs_get_extent(inode, NULL, 0, cur_offset,
                               alloc_end - cur_offset, 0);
···
                     last_byte - cur_offset);
             if (ret < 0)
                 break;
+        } else {
+            /*
+             * Do not need to reserve unwritten extent for this
+             * range, free reserved data space first, otherwise
+             * it'll result in false ENOSPC error.
+             */
+            btrfs_free_reserved_data_space(inode, cur_offset,
+                    last_byte - cur_offset);
         }
         free_extent_map(em);
         cur_offset = last_byte;
···
                     range->start,
                     range->len, 1 << inode->i_blkbits,
                     offset + len, &alloc_hint);
+        else
+            btrfs_free_reserved_data_space(inode, range->start,
+                    range->len);
         list_del(&range->list);
         kfree(range);
     }
···
     unlock_extent_cached(&BTRFS_I(inode)->io_tree, alloc_start, locked_end,
                          &cached_state, GFP_KERNEL);
 out:
-    /*
-     * As we waited the extent range, the data_rsv_map must be empty
-     * in the range, as written data range will be released from it.
-     * And for prealloacted extent, it will also be released when
-     * its metadata is written.
-     * So this is completely used as cleanup.
-     */
-    btrfs_qgroup_free_data(inode, alloc_start, alloc_end - alloc_start);
     inode_unlock(inode);
     /* Let go of our reservation. */
-    btrfs_free_reserved_data_space(inode, alloc_start,
-                                   alloc_end - alloc_start);
+    if (ret != 0)
+        btrfs_free_reserved_data_space(inode, alloc_start,
+                alloc_end - cur_offset);
     return ret;
 }
fs/btrfs/qgroup.h

···
             struct btrfs_fs_info *fs_info);
 int btrfs_qgroup_rescan(struct btrfs_fs_info *fs_info);
 void btrfs_qgroup_rescan_resume(struct btrfs_fs_info *fs_info);
-int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info);
+int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info,
+                                     bool interruptible);
 int btrfs_add_qgroup_relation(struct btrfs_trans_handle *trans,
                               struct btrfs_fs_info *fs_info, u64 src, u64 dst);
 int btrfs_del_qgroup_relation(struct btrfs_trans_handle *trans,
···
 struct btrfs_delayed_extent_op;
 int btrfs_qgroup_prepare_account_extents(struct btrfs_trans_handle *trans,
                                          struct btrfs_fs_info *fs_info);
-struct btrfs_qgroup_extent_record *
-btrfs_qgroup_insert_dirty_extent(struct btrfs_fs_info *fs_info,
-                                 struct btrfs_delayed_ref_root *delayed_refs,
-                                 struct btrfs_qgroup_extent_record *record);
+/*
+ * Insert one dirty extent record into @delayed_refs, informing qgroup to
+ * account that extent at commit trans time.
+ *
+ * No lock version, caller must acquire delayed ref lock and allocate memory.
+ *
+ * Return 0 for success insert
+ * Return >0 for existing record, caller can free @record safely.
+ * Error is not possible
+ */
+int btrfs_qgroup_insert_dirty_extent_nolock(
+        struct btrfs_fs_info *fs_info,
+        struct btrfs_delayed_ref_root *delayed_refs,
+        struct btrfs_qgroup_extent_record *record);
+
+/*
+ * Insert one dirty extent record into @delayed_refs, informing qgroup to
+ * account that extent at commit trans time.
+ *
+ * Better encapsulated version.
+ *
+ * Return 0 if the operation is done.
+ * Return <0 for error, like memory allocation failure or invalid parameter
+ * (NULL trans)
+ */
+int btrfs_qgroup_insert_dirty_extent(struct btrfs_trans_handle *trans,
+        struct btrfs_fs_info *fs_info, u64 bytenr, u64 num_bytes,
+        gfp_t gfp_flag);
+
 int
 btrfs_qgroup_account_extent(struct btrfs_trans_handle *trans,
                             struct btrfs_fs_info *fs_info,
fs/btrfs/relocation.c (+116, -10)

···
 #include "async-thread.h"
 #include "free-space-cache.h"
 #include "inode-map.h"
+#include "qgroup.h"

 /*
  * backref_node, mapping_node and tree_block start with this
···
     u64 num_bytes;
     int nr = 0;
     int ret = 0;
+    u64 prealloc_start = cluster->start - offset;
+    u64 prealloc_end = cluster->end - offset;
+    u64 cur_offset;

     BUG_ON(cluster->start != cluster->boundary[0]);
     inode_lock(inode);

-    ret = btrfs_check_data_free_space(inode, cluster->start,
-                                      cluster->end + 1 - cluster->start);
+    ret = btrfs_check_data_free_space(inode, prealloc_start,
+                                      prealloc_end + 1 - prealloc_start);
     if (ret)
         goto out;

+    cur_offset = prealloc_start;
     while (nr < cluster->nr) {
         start = cluster->boundary[nr] - offset;
         if (nr + 1 < cluster->nr)
···
         lock_extent(&BTRFS_I(inode)->io_tree, start, end);
         num_bytes = end + 1 - start;
+        if (cur_offset < start)
+            btrfs_free_reserved_data_space(inode, cur_offset,
+                    start - cur_offset);
         ret = btrfs_prealloc_file_range(inode, 0, start,
                                         num_bytes, num_bytes,
                                         end + 1, &alloc_hint);
+        cur_offset = end + 1;
         unlock_extent(&BTRFS_I(inode)->io_tree, start, end);
         if (ret)
             break;
         nr++;
     }
-    btrfs_free_reserved_data_space(inode, cluster->start,
-                                   cluster->end + 1 - cluster->start);
+    if (cur_offset < prealloc_end)
+        btrfs_free_reserved_data_space(inode, cur_offset,
+                prealloc_end + 1 - cur_offset);
 out:
     inode_unlock(inode);
     return ret;
···
     return 0;
 }

+/*
+ * Qgroup fixer for data chunk relocation.
+ * The data relocation is done in the following steps
+ * 1) Copy data extents into data reloc tree
+ * 2) Create tree reloc tree(special snapshot) for related subvolumes
+ * 3) Modify file extents in tree reloc tree
+ * 4) Merge tree reloc tree with original fs tree, by swapping tree blocks
+ *
+ * The problem is, data and tree reloc tree are not accounted to qgroup,
+ * and 4) will only info qgroup to track tree blocks change, not file extents
+ * in the tree blocks.
+ *
+ * The good news is, related data extents are all in data reloc tree, so we
+ * only need to info qgroup to track all file extents in data reloc tree
+ * before commit trans.
+ */
+static int qgroup_fix_relocated_data_extents(struct btrfs_trans_handle *trans,
+                                             struct reloc_control *rc)
+{
+    struct btrfs_fs_info *fs_info = rc->extent_root->fs_info;
+    struct inode *inode = rc->data_inode;
+    struct btrfs_root *data_reloc_root = BTRFS_I(inode)->root;
+    struct btrfs_path *path;
+    struct btrfs_key key;
+    int ret = 0;
+
+    if (!fs_info->quota_enabled)
+        return 0;
+
+    /*
+     * Only for stage where we update data pointers the qgroup fix is
+     * valid.
+     * For MOVING_DATA stage, we will miss the timing of swapping tree
+     * blocks, and won't fix it.
+     */
+    if (!(rc->stage == UPDATE_DATA_PTRS && rc->extents_found))
+        return 0;
+
+    path = btrfs_alloc_path();
+    if (!path)
+        return -ENOMEM;
+    key.objectid = btrfs_ino(inode);
+    key.type = BTRFS_EXTENT_DATA_KEY;
+    key.offset = 0;
+
+    ret = btrfs_search_slot(NULL, data_reloc_root, &key, path, 0, 0);
+    if (ret < 0)
+        goto out;
+
+    lock_extent(&BTRFS_I(inode)->io_tree, 0, (u64)-1);
+    while (1) {
+        struct btrfs_file_extent_item *fi;
+
+        btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
+        if (key.objectid > btrfs_ino(inode))
+            break;
+        if (key.type != BTRFS_EXTENT_DATA_KEY)
+            goto next;
+        fi = btrfs_item_ptr(path->nodes[0], path->slots[0],
+                            struct btrfs_file_extent_item);
+        if (btrfs_file_extent_type(path->nodes[0], fi) !=
+            BTRFS_FILE_EXTENT_REG)
+            goto next;
+        ret = btrfs_qgroup_insert_dirty_extent(trans, fs_info,
+            btrfs_file_extent_disk_bytenr(path->nodes[0], fi),
+            btrfs_file_extent_disk_num_bytes(path->nodes[0], fi),
+            GFP_NOFS);
+        if (ret < 0)
+            break;
+next:
+        ret = btrfs_next_item(data_reloc_root, path);
+        if (ret < 0)
+            break;
+        if (ret > 0) {
+            ret = 0;
+            break;
+        }
+    }
+    unlock_extent(&BTRFS_I(inode)->io_tree, 0, (u64)-1);
+out:
+    btrfs_free_path(path);
+    return ret;
+}
+
 static noinline_for_stack int relocate_block_group(struct reloc_control *rc)
 {
     struct rb_root blocks = RB_ROOT;
···
     /* get rid of pinned extents */
     trans = btrfs_join_transaction(rc->extent_root);
-    if (IS_ERR(trans))
+    if (IS_ERR(trans)) {
         err = PTR_ERR(trans);
-    else
-        btrfs_commit_transaction(trans, rc->extent_root);
+        goto out_free;
+    }
+    err = qgroup_fix_relocated_data_extents(trans, rc);
+    if (err < 0) {
+        btrfs_abort_transaction(trans, err);
+        goto out_free;
+    }
+    btrfs_commit_transaction(trans, rc->extent_root);
 out_free:
     btrfs_free_block_rsv(rc->extent_root, rc->block_rsv);
     btrfs_free_path(path);
···
     unset_reloc_control(rc);

     trans = btrfs_join_transaction(rc->extent_root);
-    if (IS_ERR(trans))
+    if (IS_ERR(trans)) {
         err = PTR_ERR(trans);
-    else
-        err = btrfs_commit_transaction(trans, rc->extent_root);
+        goto out_free;
+    }
+    err = qgroup_fix_relocated_data_extents(trans, rc);
+    if (err < 0) {
+        btrfs_abort_transaction(trans, err);
+        goto out_free;
+    }
+    err = btrfs_commit_transaction(trans, rc->extent_root);
 out_free:
     kfree(rc);
 out:
fs/btrfs/root-tree.c (+18, -9)

···
         root_key.objectid = key.offset;
         key.offset++;

+        /*
+         * The root might have been inserted already, as before we look
+         * for orphan roots, log replay might have happened, which
+         * triggers a transaction commit and qgroup accounting, which
+         * in turn reads and inserts fs roots while doing backref
+         * walking.
+         */
+        root = btrfs_lookup_fs_root(tree_root->fs_info,
+                                    root_key.objectid);
+        if (root) {
+            WARN_ON(!test_bit(BTRFS_ROOT_ORPHAN_ITEM_INSERTED,
+                              &root->state));
+            if (btrfs_root_refs(&root->root_item) == 0)
+                btrfs_add_dead_root(root);
+            continue;
+        }
+
         root = btrfs_read_fs_root(tree_root, &root_key);
         err = PTR_ERR_OR_ZERO(root);
         if (err && err != -ENOENT) {
···
         set_bit(BTRFS_ROOT_ORPHAN_ITEM_INSERTED, &root->state);

         err = btrfs_insert_fs_root(root->fs_info, root);
-        /*
-         * The root might have been inserted already, as before we look
-         * for orphan roots, log replay might have happened, which
-         * triggers a transaction commit and qgroup accounting, which
-         * in turn reads and inserts fs roots while doing backref
-         * walking.
-         */
-        if (err == -EEXIST)
-            err = 0;
         if (err) {
+            BUG_ON(err == -EEXIST);
             btrfs_free_fs_root(root);
             break;
         }
fs/btrfs/super.c (+16)

···
     struct btrfs_trans_handle *trans;
     struct btrfs_root *root = btrfs_sb(sb)->tree_root;

+    root->fs_info->fs_frozen = 1;
+    /*
+     * We don't need a barrier here, we'll wait for any transaction that
+     * could be in progress on other threads (and do delayed iputs that
+     * we want to avoid on a frozen filesystem), or do the commit
+     * ourselves.
+     */
     trans = btrfs_attach_transaction_barrier(root);
     if (IS_ERR(trans)) {
         /* no transaction, don't bother */
···
         return PTR_ERR(trans);
     }
     return btrfs_commit_transaction(trans, root);
+}
+
+static int btrfs_unfreeze(struct super_block *sb)
+{
+    struct btrfs_root *root = btrfs_sb(sb)->tree_root;
+
+    root->fs_info->fs_frozen = 0;
+    return 0;
 }

 static int btrfs_show_devname(struct seq_file *m, struct dentry *root)
···
     .statfs         = btrfs_statfs,
     .remount_fs     = btrfs_remount,
     .freeze_fs      = btrfs_freeze,
+    .unfreeze_fs    = btrfs_unfreeze,
 };

 static const struct file_operations btrfs_ctl_fops = {
fs/btrfs/transaction.c (+6, -1)

···
     kmem_cache_free(btrfs_trans_handle_cachep, trans);

+    /*
+     * If fs has been frozen, we can not handle delayed iputs, otherwise
+     * it'll result in deadlock about SB_FREEZE_FS.
+     */
     if (current != root->fs_info->transaction_kthread &&
-        current != root->fs_info->cleaner_kthread)
+        current != root->fs_info->cleaner_kthread &&
+        !root->fs_info->fs_frozen)
         btrfs_run_delayed_iputs(root);

     return ret;
fs/btrfs/tree-log.c (+19, -2)

···
 #include "backref.h"
 #include "hash.h"
 #include "compression.h"
+#include "qgroup.h"

 /* magic values for the inode_only field in btrfs_log_inode:
  *
···
         ins.offset = btrfs_file_extent_disk_num_bytes(eb, item);
         ins.type = BTRFS_EXTENT_ITEM_KEY;
         offset = key->offset - btrfs_file_extent_offset(eb, item);
+
+        /*
+         * Manually record dirty extent, as here we did a shallow
+         * file extent item copy and skip normal backref update,
+         * but modifying extent tree all by ourselves.
+         * So need to manually record dirty extent for qgroup,
+         * as the owner of the file extent changed from log tree
+         * (doesn't affect qgroup) to fs/file tree(affects qgroup)
+         */
+        ret = btrfs_qgroup_insert_dirty_extent(trans, root->fs_info,
+                btrfs_file_extent_disk_bytenr(eb, item),
+                btrfs_file_extent_disk_num_bytes(eb, item),
+                GFP_NOFS);
+        if (ret < 0)
+            goto out;

         if (ins.objectid > 0) {
             u64 csum_start;
···
      */
     mutex_unlock(&root->log_mutex);

-    btrfs_init_log_ctx(&root_log_ctx);
+    btrfs_init_log_ctx(&root_log_ctx, NULL);

     mutex_lock(&log_root_tree->log_mutex);
     atomic_inc(&log_root_tree->log_batch);
···
     if (ret < 0) {
         err = ret;
         goto out_unlock;
-    } else if (ret > 0) {
+    } else if (ret > 0 && ctx &&
+               other_ino != btrfs_ino(ctx->inode)) {
         struct btrfs_key inode_key;
         struct inode *other_inode;
fs/f2fs/f2fs.h

···
     /* NAT cache management */
     struct radix_tree_root nat_root;/* root of the nat entry cache */
     struct radix_tree_root nat_set_root;/* root of the nat set cache */
-    struct percpu_rw_semaphore nat_tree_lock;   /* protect nat_tree_lock */
+    struct rw_semaphore nat_tree_lock;  /* protect nat_tree_lock */
     struct list_head nat_entries;   /* cached nat entry list (clean) */
     unsigned int nat_cnt;   /* the # of cached nat entries */
     unsigned int dirty_nat_cnt; /* total num of nat entries in set */
···
     struct f2fs_checkpoint *ckpt;   /* raw checkpoint pointer */
     struct inode *meta_inode;   /* cache meta blocks */
     struct mutex cp_mutex;  /* checkpoint procedure lock */
-    struct percpu_rw_semaphore cp_rwsem;    /* blocking FS operations */
+    struct rw_semaphore cp_rwsem;   /* blocking FS operations */
     struct rw_semaphore node_write; /* locking node writes */
     wait_queue_head_t cp_wait;
     unsigned long last_time[MAX_TIME];  /* to store time in jiffies */
···
 static inline void f2fs_lock_op(struct f2fs_sb_info *sbi)
 {
-    percpu_down_read(&sbi->cp_rwsem);
+    down_read(&sbi->cp_rwsem);
 }

 static inline void f2fs_unlock_op(struct f2fs_sb_info *sbi)
 {
-    percpu_up_read(&sbi->cp_rwsem);
+    up_read(&sbi->cp_rwsem);
 }

 static inline void f2fs_lock_all(struct f2fs_sb_info *sbi)
 {
-    percpu_down_write(&sbi->cp_rwsem);
+    down_write(&sbi->cp_rwsem);
 }

 static inline void f2fs_unlock_all(struct f2fs_sb_info *sbi)
 {
-    percpu_up_write(&sbi->cp_rwsem);
+    up_write(&sbi->cp_rwsem);
 }

 static inline int __get_cp_reason(struct f2fs_sb_info *sbi)
fs/f2fs/file.c (+9, -4)

···
     if (unlikely(f2fs_readonly(src->i_sb)))
         return -EROFS;

-    if (S_ISDIR(src->i_mode) || S_ISDIR(dst->i_mode))
-        return -EISDIR;
+    if (!S_ISREG(src->i_mode) || !S_ISREG(dst->i_mode))
+        return -EINVAL;

     if (f2fs_encrypted_inode(src) || f2fs_encrypted_inode(dst))
         return -EOPNOTSUPP;

     inode_lock(src);
-    if (src != dst)
-        inode_lock(dst);
+    if (src != dst) {
+        if (!inode_trylock(dst)) {
+            ret = -EBUSY;
+            goto out;
+        }
+    }

     ret = -EINVAL;
     if (pos_in + len > src->i_size || pos_in + len < pos_in)
···
 out_unlock:
     if (src != dst)
         inode_unlock(dst);
+out:
     inode_unlock(src);
     return ret;
 }
fs/f2fs/node.c (+23, -24)

···
     struct nat_entry *e;
     bool need = false;

-    percpu_down_read(&nm_i->nat_tree_lock);
+    down_read(&nm_i->nat_tree_lock);
     e = __lookup_nat_cache(nm_i, nid);
     if (e) {
         if (!get_nat_flag(e, IS_CHECKPOINTED) &&
                 !get_nat_flag(e, HAS_FSYNCED_INODE))
             need = true;
     }
-    percpu_up_read(&nm_i->nat_tree_lock);
+    up_read(&nm_i->nat_tree_lock);
     return need;
 }
···
     struct nat_entry *e;
     bool is_cp = true;

-    percpu_down_read(&nm_i->nat_tree_lock);
+    down_read(&nm_i->nat_tree_lock);
     e = __lookup_nat_cache(nm_i, nid);
     if (e && !get_nat_flag(e, IS_CHECKPOINTED))
         is_cp = false;
-    percpu_up_read(&nm_i->nat_tree_lock);
+    up_read(&nm_i->nat_tree_lock);
     return is_cp;
 }
···
     struct nat_entry *e;
     bool need_update = true;

-    percpu_down_read(&nm_i->nat_tree_lock);
+    down_read(&nm_i->nat_tree_lock);
     e = __lookup_nat_cache(nm_i, ino);
     if (e && get_nat_flag(e, HAS_LAST_FSYNC) &&
             (get_nat_flag(e, IS_CHECKPOINTED) ||
              get_nat_flag(e, HAS_FSYNCED_INODE)))
         need_update = false;
-    percpu_up_read(&nm_i->nat_tree_lock);
+    up_read(&nm_i->nat_tree_lock);
     return need_update;
 }
···
     struct f2fs_nm_info *nm_i = NM_I(sbi);
     struct nat_entry *e;

-    percpu_down_write(&nm_i->nat_tree_lock);
+    down_write(&nm_i->nat_tree_lock);
     e = __lookup_nat_cache(nm_i, ni->nid);
     if (!e) {
         e = grab_nat_entry(nm_i, ni->nid);
···
         set_nat_flag(e, HAS_FSYNCED_INODE, true);
         set_nat_flag(e, HAS_LAST_FSYNC, fsync_done);
     }
-    percpu_up_write(&nm_i->nat_tree_lock);
+    up_write(&nm_i->nat_tree_lock);
 }

 int try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink)
···
     struct f2fs_nm_info *nm_i = NM_I(sbi);
     int nr = nr_shrink;

-    percpu_down_write(&nm_i->nat_tree_lock);
+    if (!down_write_trylock(&nm_i->nat_tree_lock))
+        return 0;

     while (nr_shrink && !list_empty(&nm_i->nat_entries)) {
         struct nat_entry *ne;
···
         __del_from_nat_cache(nm_i, ne);
         nr_shrink--;
     }
-    percpu_up_write(&nm_i->nat_tree_lock);
+    up_write(&nm_i->nat_tree_lock);
     return nr - nr_shrink;
 }
···
     ni->nid = nid;

     /* Check nat cache */
-    percpu_down_read(&nm_i->nat_tree_lock);
+    down_read(&nm_i->nat_tree_lock);
     e = __lookup_nat_cache(nm_i, nid);
     if (e) {
         ni->ino = nat_get_ino(e);
         ni->blk_addr = nat_get_blkaddr(e);
         ni->version = nat_get_version(e);
-        percpu_up_read(&nm_i->nat_tree_lock);
+        up_read(&nm_i->nat_tree_lock);
         return;
     }
···
     node_info_from_raw_nat(ni, &ne);
     f2fs_put_page(page, 1);
 cache:
-    percpu_up_read(&nm_i->nat_tree_lock);
+    up_read(&nm_i->nat_tree_lock);
     /* cache nat entry */
-    percpu_down_write(&nm_i->nat_tree_lock);
+    down_write(&nm_i->nat_tree_lock);
     cache_nat_entry(sbi, nid, &ne);
-    percpu_up_write(&nm_i->nat_tree_lock);
+    up_write(&nm_i->nat_tree_lock);
 }

 /*
···
     ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nid), FREE_NID_PAGES,
                     META_NAT, true);

-    percpu_down_read(&nm_i->nat_tree_lock);
+    down_read(&nm_i->nat_tree_lock);

     while (1) {
         struct page *page = get_current_nat_page(sbi, nid);
···
             remove_free_nid(nm_i, nid);
     }
     up_read(&curseg->journal_rwsem);
-    percpu_up_read(&nm_i->nat_tree_lock);
+    up_read(&nm_i->nat_tree_lock);

     ra_meta_pages(sbi, NAT_BLOCK_OFFSET(nm_i->next_scan_nid),
                     nm_i->ra_nid_pages, META_NAT, false);
···
     if (!nm_i->dirty_nat_cnt)
         return;

-    percpu_down_write(&nm_i->nat_tree_lock);
+    down_write(&nm_i->nat_tree_lock);

     /*
      * if there are no enough space in journal to store dirty nat
···
     list_for_each_entry_safe(set, tmp, &sets, set_list)
         __flush_nat_entry_set(sbi, set);

-    percpu_up_write(&nm_i->nat_tree_lock);
+    up_write(&nm_i->nat_tree_lock);

     f2fs_bug_on(sbi, nm_i->dirty_nat_cnt);
 }
···
     mutex_init(&nm_i->build_lock);
     spin_lock_init(&nm_i->free_nid_list_lock);
-    if (percpu_init_rwsem(&nm_i->nat_tree_lock))
-        return -ENOMEM;
+    init_rwsem(&nm_i->nat_tree_lock);

     nm_i->next_scan_nid = le32_to_cpu(sbi->ckpt->next_free_nid);
     nm_i->bitmap_size = __bitmap_size(sbi, NAT_BITMAP);
···
     spin_unlock(&nm_i->free_nid_list_lock);

     /* destroy nat cache */
-    percpu_down_write(&nm_i->nat_tree_lock);
+    down_write(&nm_i->nat_tree_lock);
     while ((found = __gang_lookup_nat_cache(nm_i,
             nid, NATVEC_SIZE, natvec))) {
         unsigned idx;
···
             kmem_cache_free(nat_entry_set_slab, setvec[idx]);
         }
     }
-    percpu_up_write(&nm_i->nat_tree_lock);
+    up_write(&nm_i->nat_tree_lock);

-    percpu_free_rwsem(&nm_i->nat_tree_lock);
     kfree(nm_i->nat_bitmap);
     sbi->nm_info = NULL;
     kfree(nm_i);
fs/f2fs/super.c (+1, -5)

···
         percpu_counter_destroy(&sbi->nr_pages[i]);
     percpu_counter_destroy(&sbi->alloc_valid_block_count);
     percpu_counter_destroy(&sbi->total_valid_inode_count);
-
-    percpu_free_rwsem(&sbi->cp_rwsem);
 }

 static void f2fs_put_super(struct super_block *sb)
···
 {
     int i, err;

-    if (percpu_init_rwsem(&sbi->cp_rwsem))
-        return -ENOMEM;
-
     for (i = 0; i < NR_COUNT_TYPE; i++) {
         err = percpu_counter_init(&sbi->nr_pages[i], 0, GFP_KERNEL);
         if (err)
···
         sbi->write_io[i].bio = NULL;
     }

+    init_rwsem(&sbi->cp_rwsem);
     init_waitqueue_head(&sbi->cp_wait);
     init_sb_info(sbi);
fs/iomap.c (+13, -8)

···
     * Now the data has been copied, commit the range we've copied. This
     * should not fail unless the filesystem has had a fatal error.
     */
-    ret = ops->iomap_end(inode, pos, length, written > 0 ? written : 0,
-                         flags, &iomap);
+    if (ops->iomap_end) {
+        ret = ops->iomap_end(inode, pos, length,
+                             written > 0 ? written : 0,
+                             flags, &iomap);
+    }

     return written ? written : ret;
 }
···
         if (mapping_writably_mapped(inode->i_mapping))
             flush_dcache_page(page);

-        pagefault_disable();
         copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
-        pagefault_enable();

         flush_dcache_page(page);
-        mark_page_accessed(page);

         status = iomap_write_end(inode, pos, bytes, copied, page);
         if (unlikely(status < 0))
···
     if (ret)
         return ret;

-    ret = filemap_write_and_wait(inode->i_mapping);
-    if (ret)
-        return ret;
+    if (fi->fi_flags & FIEMAP_FLAG_SYNC) {
+        ret = filemap_write_and_wait(inode->i_mapping);
+        if (ret)
+            return ret;
+    }

     while (len > 0) {
         ret = iomap_apply(inode, start, len, 0, ops, &ctx,
                           iomap_fiemap_actor);
+        /* inode with no (attribute) mapping will give ENOENT */
+        if (ret == -ENOENT)
+            break;
         if (ret < 0)
             return ret;
         if (ret == 0)
@@ -741,9 +741,20 @@
          * page is inserted into the pagecache when we have to serve a write
          * fault on a hole. It should never be dirtied and can simply be
          * dropped from the pagecache once we get real data for the page.
+         *
+         * XXX: This is racy against mmap, and there's nothing we can do about
+         * it. dax_do_io() should really do this invalidation internally as
+         * it will know if we've allocated over a hole for this specific IO and
+         * if so it needs to update the mapping tree and invalidate existing
+         * PTEs over the newly allocated range. Remove this invalidation when
+         * dax_do_io() is fixed up.
          */
         if (mapping->nrpages) {
-                ret = invalidate_inode_pages2(mapping);
+                loff_t end = iocb->ki_pos + iov_iter_count(from) - 1;
+
+                ret = invalidate_inode_pages2_range(mapping,
+                                iocb->ki_pos >> PAGE_SHIFT,
+                                end >> PAGE_SHIFT);
                 WARN_ON_ONCE(ret);
         }
 
@@ -882,7 +882,7 @@
 static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
                                                      int op)
 {
-        if (unlikely(op == REQ_OP_DISCARD))
+        if (unlikely(op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE))
                 return min(q->limits.max_discard_sectors, UINT_MAX >> 9);
 
         if (unlikely(op == REQ_OP_WRITE_SAME))
@@ -913,7 +913,9 @@
         if (unlikely(rq->cmd_type != REQ_TYPE_FS))
                 return q->limits.max_hw_sectors;
 
-        if (!q->limits.chunk_sectors || (req_op(rq) == REQ_OP_DISCARD))
+        if (!q->limits.chunk_sectors ||
+            req_op(rq) == REQ_OP_DISCARD ||
+            req_op(rq) == REQ_OP_SECURE_ERASE)
                 return blk_queue_get_max_sectors(q, req_op(rq));
 
         return min(blk_max_size_offset(q, offset),
include/linux/compiler-gcc.h | +6 -2
@@ -242,7 +242,11 @@
  */
 #define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
 
-#ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP
+/*
+ * sparse (__CHECKER__) pretends to be gcc, but can't do constant
+ * folding in __builtin_bswap*() (yet), so don't set these for it.
+ */
+#if defined(CONFIG_ARCH_USE_BUILTIN_BSWAP) && !defined(__CHECKER__)
 #if GCC_VERSION >= 40400
 #define __HAVE_BUILTIN_BSWAP32__
 #define __HAVE_BUILTIN_BSWAP64__
@@ -250,7 +254,7 @@
 #if GCC_VERSION >= 40800
 #define __HAVE_BUILTIN_BSWAP16__
 #endif
-#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
+#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP && !__CHECKER__ */
 
 #if GCC_VERSION >= 50000
 #define KASAN_ABI_VERSION 4
include/linux/compiler.h | +3 -3
@@ -527,13 +527,13 @@
  * object's lifetime is managed by something other than RCU. That
  * "something other" might be reference counting or simple immortality.
  *
- * The seemingly unused void * variable is to validate @p is indeed a pointer
- * type. All pointer types silently cast to void *.
+ * The seemingly unused size_t variable is to validate @p is indeed a pointer
+ * type by making sure it can be dereferenced.
  */
 #define lockless_dereference(p) \
 ({ \
         typeof(p) _________p1 = READ_ONCE(p); \
-        __maybe_unused const void * const _________p2 = _________p1; \
+        size_t __maybe_unused __size_of_ptr = sizeof(*(p)); \
         smp_read_barrier_depends(); /* Dependency order vs. p above. */ \
         (_________p1); \
 })
@@ -1,6 +1,16 @@
 #ifndef __SMC91X_H__
 #define __SMC91X_H__
 
+/*
+ * These bits define which access sizes a platform can support, rather
+ * than the maximal access size. So, if your platform can do 16-bit
+ * and 32-bit accesses to the SMC91x device, but not 8-bit, set both
+ * SMC91X_USE_16BIT and SMC91X_USE_32BIT.
+ *
+ * The SMC91x driver requires at least one of SMC91X_USE_8BIT or
+ * SMC91X_USE_16BIT to be supported - just setting SMC91X_USE_32BIT is
+ * an invalid configuration.
+ */
 #define SMC91X_USE_8BIT (1 << 0)
 #define SMC91X_USE_16BIT (1 << 1)
 #define SMC91X_USE_32BIT (1 << 2)
@@ -14,6 +14,7 @@
 
 #include <linux/atmapi.h>
 #include <linux/atmioc.h>
+#include <linux/time.h>
 
 #define ZATM_GETPOOL _IOW('a',ATMIOC_SARPRV+1,struct atmif_sioc)
                                                 /* get pool statistics */
include/uapi/linux/if_pppol2tp.h | +2 -1
@@ -16,7 +16,8 @@
 #define _UAPI__LINUX_IF_PPPOL2TP_H
 
 #include <linux/types.h>
-
+#include <linux/in.h>
+#include <linux/in6.h>
 
 /* Structure used to connect() the socket to a particular tunnel UDP
  * socket over IPv4.
include/uapi/linux/if_pppox.h | +3
@@ -21,8 +21,11 @@
 #include <asm/byteorder.h>
 
 #include <linux/socket.h>
+#include <linux/if.h>
 #include <linux/if_ether.h>
 #include <linux/if_pppol2tp.h>
+#include <linux/in.h>
+#include <linux/in6.h>
 
 /* For user-space programs to pick up these definitions
  * which they wouldn't get otherwise without defining __KERNEL__
@@ -820,6 +820,17 @@
         desc->name = name;
 
         if (handle != handle_bad_irq && is_chained) {
+                /*
+                 * We're about to start this interrupt immediately,
+                 * hence the need to set the trigger configuration.
+                 * But the .set_type callback may have overridden the
+                 * flow handler, ignoring that we're dealing with a
+                 * chained interrupt. Reset it immediately because we
+                 * do know better.
+                 */
+                __irq_set_trigger(desc, irqd_get_trigger_type(&desc->irq_data));
+                desc->handle_irq = handle;
+
                 irq_settings_set_noprobe(desc);
                 irq_settings_set_norequest(desc);
                 irq_settings_set_nothread(desc);
@@ -835,9 +835,9 @@
  */
 static bool rtree_next_node(struct memory_bitmap *bm)
 {
-        bm->cur.node = list_entry(bm->cur.node->list.next,
-                                  struct rtree_node, list);
-        if (&bm->cur.node->list != &bm->cur.zone->leaves) {
+        if (!list_is_last(&bm->cur.node->list, &bm->cur.zone->leaves)) {
+                bm->cur.node = list_entry(bm->cur.node->list.next,
+                                          struct rtree_node, list);
                 bm->cur.node_pfn += BM_BITS_PER_BLOCK;
                 bm->cur.node_bit = 0;
                 touch_softlockup_watchdog();
@@ -845,9 +845,9 @@
         }
 
         /* No more nodes, goto next zone */
-        bm->cur.zone = list_entry(bm->cur.zone->list.next,
+        if (!list_is_last(&bm->cur.zone->list, &bm->zones)) {
+                bm->cur.zone = list_entry(bm->cur.zone->list.next,
                                   struct mem_zone_bm_rtree, list);
-        if (&bm->cur.zone->list != &bm->zones) {
                 bm->cur.node = list_entry(bm->cur.zone->leaves.next,
                                           struct rtree_node, list);
                 bm->cur.node_pfn = 0;
kernel/printk/braille.c | +2 -2
@@ -9,10 +9,10 @@
 
 char *_braille_console_setup(char **str, char **brl_options)
 {
-        if (!memcmp(*str, "brl,", 4)) {
+        if (!strncmp(*str, "brl,", 4)) {
                 *brl_options = "";
                 *str += 4;
-        } else if (!memcmp(str, "brl=", 4)) {
+        } else if (!strncmp(*str, "brl=", 4)) {
                 *brl_options = *str + 4;
                 *str = strchr(*brl_options, ',');
                 if (!*str)
kernel/sched/cputime.c | +25 -8
@@ -263,6 +263,11 @@
         cpustat[CPUTIME_IDLE] += (__force u64) cputime;
 }
 
+/*
+ * When a guest is interrupted for a longer amount of time, missed clock
+ * ticks are not redelivered later. Due to that, this function may on
+ * occasion account more time than the calling functions think elapsed.
+ */
 static __always_inline cputime_t steal_account_process_time(cputime_t maxtime)
 {
 #ifdef CONFIG_PARAVIRT
@@ -371,7 +376,7 @@
          * idle, or potentially user or system time. Due to rounding,
          * other time can exceed ticks occasionally.
          */
-        other = account_other_time(cputime);
+        other = account_other_time(ULONG_MAX);
         if (other >= cputime)
                 return;
         cputime -= other;
@@ -486,7 +491,7 @@
         }
 
         cputime = cputime_one_jiffy;
-        steal = steal_account_process_time(cputime);
+        steal = steal_account_process_time(ULONG_MAX);
 
         if (steal >= cputime)
                 return;
@@ -516,7 +521,7 @@
         }
 
         cputime = jiffies_to_cputime(ticks);
-        steal = steal_account_process_time(cputime);
+        steal = steal_account_process_time(ULONG_MAX);
 
         if (steal >= cputime)
                 return;
@@ -614,19 +619,25 @@
         stime = curr->stime;
         utime = curr->utime;
 
-        if (utime == 0) {
-                stime = rtime;
+        /*
+         * If either stime or both stime and utime are 0, assume all runtime is
+         * userspace. Once a task gets some ticks, the monotonicy code at
+         * 'update' will ensure things converge to the observed ratio.
+         */
+        if (stime == 0) {
+                utime = rtime;
                 goto update;
         }
 
-        if (stime == 0) {
-                utime = rtime;
+        if (utime == 0) {
+                stime = rtime;
                 goto update;
         }
 
         stime = scale_stime((__force u64)stime, (__force u64)rtime,
                             (__force u64)(stime + utime));
 
+update:
         /*
          * Make sure stime doesn't go backwards; this preserves monotonicity
          * for utime because rtime is monotonic.
@@ -649,7 +660,6 @@
                 stime = rtime - utime;
         }
 
-update:
         prev->stime = stime;
         prev->utime = utime;
 out:
@@ -694,6 +704,13 @@
         unsigned long now = READ_ONCE(jiffies);
         cputime_t delta, other;
 
+        /*
+         * Unlike tick based timing, vtime based timing never has lost
+         * ticks, and no need for steal time accounting to make up for
+         * lost ticks. Vtime accounts a rounded version of actual
+         * elapsed time. Limit account_other_time to prevent rounding
+         * errors from causing elapsed vtime to go negative.
+         */
         delta = jiffies_to_cputime(now - tsk->vtime_snap);
         other = account_other_time(delta);
         WARN_ON_ONCE(tsk->vtime_snap_whence == VTIME_INACTIVE);
kernel/sysctl.c | +43 -2
@@ -2140,6 +2140,21 @@
         return 0;
 }
 
+static int do_proc_douintvec_conv(bool *negp, unsigned long *lvalp,
+                                  int *valp,
+                                  int write, void *data)
+{
+        if (write) {
+                if (*negp)
+                        return -EINVAL;
+                *valp = *lvalp;
+        } else {
+                unsigned int val = *valp;
+                *lvalp = (unsigned long)val;
+        }
+        return 0;
+}
+
 static const char proc_wspace_sep[] = { ' ', '\t', '\n' };
 
 static int __do_proc_dointvec(void *tbl_data, struct ctl_table *table,
@@ -2259,8 +2274,27 @@
 int proc_dointvec(struct ctl_table *table, int write,
                   void __user *buffer, size_t *lenp, loff_t *ppos)
 {
-        return do_proc_dointvec(table,write,buffer,lenp,ppos,
-                                NULL,NULL);
+        return do_proc_dointvec(table, write, buffer, lenp, ppos, NULL, NULL);
+}
+
+/**
+ * proc_douintvec - read a vector of unsigned integers
+ * @table: the sysctl table
+ * @write: %TRUE if this is a write to the sysctl file
+ * @buffer: the user buffer
+ * @lenp: the size of the user buffer
+ * @ppos: file position
+ *
+ * Reads/writes up to table->maxlen/sizeof(unsigned int) unsigned integer
+ * values from/to the user buffer, treated as an ASCII string.
+ *
+ * Returns 0 on success.
+ */
+int proc_douintvec(struct ctl_table *table, int write,
+                   void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+        return do_proc_dointvec(table, write, buffer, lenp, ppos,
+                                do_proc_douintvec_conv, NULL);
 }
 
 /*
@@ -2858,6 +2892,12 @@
         return -ENOSYS;
 }
 
+int proc_douintvec(struct ctl_table *table, int write,
+                   void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+        return -ENOSYS;
+}
+
 int proc_dointvec_minmax(struct ctl_table *table, int write,
                     void __user *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -2903,6 +2943,7 @@
  * exception granted :-)
  */
 EXPORT_SYMBOL(proc_dointvec);
+EXPORT_SYMBOL(proc_douintvec);
 EXPORT_SYMBOL(proc_dointvec_jiffies);
 EXPORT_SYMBOL(proc_dointvec_minmax);
 EXPORT_SYMBOL(proc_dointvec_userhz_jiffies);
kernel/time/timekeeping.c | +4 -1
@@ -401,7 +401,10 @@
         do {
                 seq = raw_read_seqcount_latch(&tkf->seq);
                 tkr = tkf->base + (seq & 0x01);
-                now = ktime_to_ns(tkr->base) + timekeeping_get_ns(tkr);
+                now = ktime_to_ns(tkr->base);
+
+                now += clocksource_delta(tkr->read(tkr->clock),
+                                         tkr->cycle_last, tkr->mask);
         } while (read_seqcount_retry(&tkf->seq, seq));
 
         return now;
kernel/time/timekeeping_debug.c | +7 -2
@@ -23,7 +23,9 @@
 
 #include "timekeeping_internal.h"
 
-static unsigned int sleep_time_bin[32] = {0};
+#define NUM_BINS 32
+
+static unsigned int sleep_time_bin[NUM_BINS] = {0};
 
 static int tk_debug_show_sleep_time(struct seq_file *s, void *data)
 {
@@ -69,6 +71,9 @@
 
 void tk_debug_account_sleep_time(struct timespec64 *t)
 {
-        sleep_time_bin[fls(t->tv_sec)]++;
+        /* Cap bin index so we don't overflow the array */
+        int bin = min(fls(t->tv_sec), NUM_BINS-1);
+
+        sleep_time_bin[bin]++;
 }
 
kernel/trace/blktrace.c | +1 -1
@@ -223,7 +223,7 @@
         what |= MASK_TC_BIT(op_flags, META);
         what |= MASK_TC_BIT(op_flags, PREFLUSH);
         what |= MASK_TC_BIT(op_flags, FUA);
-        if (op == REQ_OP_DISCARD)
+        if (op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE)
                 what |= BLK_TC_ACT(BLK_TC_DISCARD);
         if (op == REQ_OP_FLUSH)
                 what |= BLK_TC_ACT(BLK_TC_FLUSH);
lib/rhashtable.c | +4 -3
@@ -77,17 +77,18 @@
         size = min_t(unsigned int, size, tbl->size >> 1);
 
         if (sizeof(spinlock_t) != 0) {
+                tbl->locks = NULL;
 #ifdef CONFIG_NUMA
                 if (size * sizeof(spinlock_t) > PAGE_SIZE &&
                     gfp == GFP_KERNEL)
                         tbl->locks = vmalloc(size * sizeof(spinlock_t));
-                else
 #endif
                 if (gfp != GFP_KERNEL)
                         gfp |= __GFP_NOWARN | __GFP_NORETRY;
 
-                tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
-                                           gfp);
+                if (!tbl->locks)
+                        tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
+                                                   gfp);
                 if (!tbl->locks)
                         return -ENOMEM;
                 for (i = 0; i < size; i++)
mm/Kconfig | +8 -1
@@ -262,7 +262,14 @@
         select MIGRATION
         depends on MMU
         help
-          Allows the compaction of memory for the allocation of huge pages.
+          Compaction is the only memory management component to form
+          high order (larger physically contiguous) memory blocks
+          reliably. The page allocator relies on compaction heavily and
+          the lack of the feature can lead to unexpected OOM killer
+          invocations for high order memory requests. You shouldn't
+          disable this option unless there really is a strong reason for
+          it and then we would be really interested to hear about that at
+          linux-mm@kvack.org.
 
 #
 # support for page migration
mm/huge_memory.c | +6 -1
@@ -1512,7 +1512,7 @@
         struct page *page;
         pgtable_t pgtable;
         pmd_t _pmd;
-        bool young, write, dirty;
+        bool young, write, dirty, soft_dirty;
         unsigned long addr;
         int i;
 
@@ -1546,6 +1546,7 @@
         write = pmd_write(*pmd);
         young = pmd_young(*pmd);
         dirty = pmd_dirty(*pmd);
+        soft_dirty = pmd_soft_dirty(*pmd);
 
         pmdp_huge_split_prepare(vma, haddr, pmd);
         pgtable = pgtable_trans_huge_withdraw(mm, pmd);
@@ -1562,6 +1563,8 @@
                         swp_entry_t swp_entry;
                         swp_entry = make_migration_entry(page + i, write);
                         entry = swp_entry_to_pte(swp_entry);
+                        if (soft_dirty)
+                                entry = pte_swp_mksoft_dirty(entry);
                 } else {
                         entry = mk_pte(page + i, vma->vm_page_prot);
                         entry = maybe_mkwrite(entry, vma);
@@ -1569,6 +1572,8 @@
                                 entry = pte_wrprotect(entry);
                         if (!young)
                                 entry = pte_mkold(entry);
+                        if (soft_dirty)
+                                entry = pte_mksoft_dirty(entry);
                 }
                 if (dirty)
                         SetPageDirty(page + i);
mm/memcontrol.c | +18 -18
@@ -4082,24 +4082,6 @@
         atomic_add(n, &memcg->id.ref);
 }
 
-static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
-{
-        while (!atomic_inc_not_zero(&memcg->id.ref)) {
-                /*
-                 * The root cgroup cannot be destroyed, so it's refcount must
-                 * always be >= 1.
-                 */
-                if (WARN_ON_ONCE(memcg == root_mem_cgroup)) {
-                        VM_BUG_ON(1);
-                        break;
-                }
-                memcg = parent_mem_cgroup(memcg);
-                if (!memcg)
-                        memcg = root_mem_cgroup;
-        }
-        return memcg;
-}
-
 static void mem_cgroup_id_put_many(struct mem_cgroup *memcg, unsigned int n)
 {
         if (atomic_sub_and_test(n, &memcg->id.ref)) {
@@ -5821,6 +5803,24 @@
 subsys_initcall(mem_cgroup_init);
 
 #ifdef CONFIG_MEMCG_SWAP
+static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
+{
+        while (!atomic_inc_not_zero(&memcg->id.ref)) {
+                /*
+                 * The root cgroup cannot be destroyed, so it's refcount must
+                 * always be >= 1.
+                 */
+                if (WARN_ON_ONCE(memcg == root_mem_cgroup)) {
+                        VM_BUG_ON(1);
+                        break;
+                }
+                memcg = parent_mem_cgroup(memcg);
+                if (!memcg)
+                        memcg = root_mem_cgroup;
+        }
+        return memcg;
+}
+
 /**
  * mem_cgroup_swapout - transfer a memsw charge to swap
  * @page: page whose memsw charge to transfer
mm/readahead.c | +9
@@ -8,6 +8,7 @@
  */
 
 #include <linux/kernel.h>
+#include <linux/dax.h>
 #include <linux/gfp.h>
 #include <linux/export.h>
 #include <linux/blkdev.h>
@@ -543,6 +544,14 @@
 {
         if (!mapping || !mapping->a_ops)
                 return -EINVAL;
+
+        /*
+         * Readahead doesn't make sense for DAX inodes, but we don't want it
+         * to report a failure either. Instead, we just return success and
+         * don't do any work.
+         */
+        if (dax_mapping(mapping))
+                return 0;
 
         return force_page_cache_readahead(mapping, filp, index, nr);
 }
mm/usercopy.c | +2 -2
@@ -83,7 +83,7 @@
         unsigned long check_high = check_low + n;
 
         /* Does not overlap if entirely above or entirely below. */
-        if (check_low >= high || check_high < low)
+        if (check_low >= high || check_high <= low)
                 return false;
 
         return true;
@@ -124,7 +124,7 @@
 static inline const char *check_bogus_address(const void *ptr, unsigned long n)
 {
         /* Reject if object wraps past end of memory. */
-        if (ptr + n < ptr)
+        if ((unsigned long)ptr + n < (unsigned long)ptr)
                 return "<wrapped address>";
 
         /* Reject if NULL or ZERO-allocation. */
net/bluetooth/af_bluetooth.c | +1 -1
@@ -250,7 +250,7 @@
 
         skb_free_datagram(sk, skb);
 
-        if (msg->msg_flags & MSG_TRUNC)
+        if (flags & MSG_TRUNC)
                 copied = skblen;
 
         return err ? : copied;
@@ -32,6 +32,7 @@
 
 #include <linux/debugfs.h>
 #include <linux/crc16.h>
+#include <linux/filter.h>
 
 #include <net/bluetooth/bluetooth.h>
 #include <net/bluetooth/hci_core.h>
@@ -5835,6 +5836,9 @@
                 if (chan->sdu)
                         break;
 
+                if (!pskb_may_pull(skb, L2CAP_SDULEN_SIZE))
+                        break;
+
                 chan->sdu_len = get_unaligned_le16(skb->data);
                 skb_pull(skb, L2CAP_SDULEN_SIZE);
 
@@ -6609,6 +6613,10 @@
                 l2cap_send_disconn_req(chan, ECONNRESET);
                 goto drop;
         }
+
+        if ((chan->mode == L2CAP_MODE_ERTM ||
+             chan->mode == L2CAP_MODE_STREAMING) && sk_filter(chan->data, skb))
+                goto drop;
 
         if (!control->sframe) {
                 int err;
net/bluetooth/l2cap_sock.c | +12 -2
@@ -1019,7 +1019,7 @@
                 goto done;
 
         if (pi->rx_busy_skb) {
-                if (!sock_queue_rcv_skb(sk, pi->rx_busy_skb))
+                if (!__sock_queue_rcv_skb(sk, pi->rx_busy_skb))
                         pi->rx_busy_skb = NULL;
                 else
                         goto done;
@@ -1270,7 +1270,17 @@
                 goto done;
         }
 
-        err = sock_queue_rcv_skb(sk, skb);
+        if (chan->mode != L2CAP_MODE_ERTM &&
+            chan->mode != L2CAP_MODE_STREAMING) {
+                /* Even if no filter is attached, we could potentially
+                 * get errors from security modules, etc.
+                 */
+                err = sk_filter(sk, skb);
+                if (err)
+                        goto done;
+        }
+
+        err = __sock_queue_rcv_skb(sk, skb);
 
         /* For ERTM, handle one skb that doesn't fit into the recv
          * buffer. This is important to do because the data frames
net/ipv4/fib_trie.c | +2 -2
@@ -249,7 +249,7 @@
  * index into the parent's child array. That is, they will be used to find
  * 'n' among tp's children.
  *
- * The bits from (n->pos + n->bits) to (tn->pos - 1) - "S" - are skipped bits
+ * The bits from (n->pos + n->bits) to (tp->pos - 1) - "S" - are skipped bits
  * for the node n.
 *
 * All the bits we have seen so far are significant to the node n. The rest
@@ -258,7 +258,7 @@
 * The bits from (n->pos) to (n->pos + n->bits - 1) - "C" - are the index into
 * n's child array, and will of course be different for each child.
 *
- * The rest of the bits, from 0 to (n->pos + n->bits), are completely unknown
+ * The rest of the bits, from 0 to (n->pos -1) - "u" - are completely unknown
 * at this point.
 */
net/ipv4/ip_tunnel_core.c | +5 -3
@@ -73,9 +73,11 @@
         skb_dst_set(skb, &rt->dst);
         memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
 
-        if (skb_iif && proto == IPPROTO_UDP) {
-                /* Arrived from an ingress interface and got udp encapuslated.
-                 * The encapsulated network segment length may exceed dst mtu.
+        if (skb_iif && !(df & htons(IP_DF))) {
+                /* Arrived from an ingress interface, got encapsulated, with
+                 * fragmentation of encapsulating frames allowed.
+                 * If skb is gso, the resulting encapsulated network segments
+                 * may exceed dst mtu.
                  * Allow IP Fragmentation of segments.
                  */
                 IPCB(skb)->flags |= IPSKB_FRAG_SEGS;
@@ -854,7 +854,7 @@
         error = -ENOTCONN;
         if (sk == NULL)
                 goto end;
-        if (sk->sk_state != PPPOX_CONNECTED)
+        if (!(sk->sk_state & PPPOX_CONNECTED))
                 goto end;
 
         error = -EBADF;
net/netfilter/nf_conntrack_standalone.c | +4
@@ -205,6 +205,7 @@
         struct nf_conn *ct = nf_ct_tuplehash_to_ctrack(hash);
         const struct nf_conntrack_l3proto *l3proto;
         const struct nf_conntrack_l4proto *l4proto;
+        struct net *net = seq_file_net(s);
         int ret = 0;
 
         NF_CT_ASSERT(ct);
@@ -213,6 +214,9 @@
 
         /* we only want to print DIR_ORIGINAL */
         if (NF_CT_DIRECTION(hash))
+                goto release;
+
+        if (!net_eq(nf_ct_net(ct), net))
                 goto release;
 
         l3proto = __nf_ct_l3proto_find(nf_ct_l3num(ct));
net/netfilter/nfnetlink_acct.c | +9 -8
@@ -326,14 +326,14 @@
 {
         int ret = 0;
 
-        /* we want to avoid races with nfnl_acct_find_get. */
-        if (atomic_dec_and_test(&cur->refcnt)) {
+        /* We want to avoid races with nfnl_acct_put. So only when the current
+         * refcnt is 1, we decrease it to 0.
+         */
+        if (atomic_cmpxchg(&cur->refcnt, 1, 0) == 1) {
                 /* We are protected by nfnl mutex. */
                 list_del_rcu(&cur->head);
                 kfree_rcu(cur, rcu_head);
         } else {
-                /* still in use, restore reference counter. */
-                atomic_inc(&cur->refcnt);
                 ret = -EBUSY;
         }
         return ret;
@@ -443,7 +443,7 @@
 }
 EXPORT_SYMBOL_GPL(nfnl_acct_update);
 
-static void nfnl_overquota_report(struct nf_acct *nfacct)
+static void nfnl_overquota_report(struct net *net, struct nf_acct *nfacct)
 {
         int ret;
         struct sk_buff *skb;
@@ -458,11 +458,12 @@
                 kfree_skb(skb);
                 return;
         }
-        netlink_broadcast(init_net.nfnl, skb, 0, NFNLGRP_ACCT_QUOTA,
+        netlink_broadcast(net->nfnl, skb, 0, NFNLGRP_ACCT_QUOTA,
                           GFP_ATOMIC);
 }
 
-int nfnl_acct_overquota(const struct sk_buff *skb, struct nf_acct *nfacct)
+int nfnl_acct_overquota(struct net *net, const struct sk_buff *skb,
+                        struct nf_acct *nfacct)
 {
         u64 now;
         u64 *quota;
@@ -480,7 +481,7 @@
 
         if (now >= *quota &&
             !test_and_set_bit(NFACCT_OVERQUOTA_BIT, &nfacct->flags)) {
-                nfnl_overquota_report(nfacct);
+                nfnl_overquota_report(net, nfacct);
         }
 
         return ret;
net/netfilter/nfnetlink_cttimeout.c | +10 -6
@@ -330,16 +330,16 @@
 {
         int ret = 0;
 
-        /* we want to avoid races with nf_ct_timeout_find_get. */
-        if (atomic_dec_and_test(&timeout->refcnt)) {
+        /* We want to avoid races with ctnl_timeout_put. So only when the
+         * current refcnt is 1, we decrease it to 0.
+         */
+        if (atomic_cmpxchg(&timeout->refcnt, 1, 0) == 1) {
                 /* We are protected by nfnl mutex. */
                 list_del_rcu(&timeout->head);
                 nf_ct_l4proto_put(timeout->l4proto);
                 ctnl_untimeout(net, timeout);
                 kfree_rcu(timeout, rcu_head);
         } else {
-                /* still in use, restore reference counter. */
-                atomic_inc(&timeout->refcnt);
                 ret = -EBUSY;
         }
         return ret;
@@ -543,7 +543,9 @@
 
 static void ctnl_timeout_put(struct ctnl_timeout *timeout)
 {
-        atomic_dec(&timeout->refcnt);
+        if (atomic_dec_and_test(&timeout->refcnt))
+                kfree_rcu(timeout, rcu_head);
+
         module_put(THIS_MODULE);
 }
 #endif /* CONFIG_NF_CONNTRACK_TIMEOUT */
@@ -591,7 +593,9 @@
         list_for_each_entry_safe(cur, tmp, &net->nfct_timeout_list, head) {
                 list_del_rcu(&cur->head);
                 nf_ct_l4proto_put(cur->l4proto);
-                kfree_rcu(cur, rcu_head);
+
+                if (atomic_dec_and_test(&cur->refcnt))
+                        kfree_rcu(cur, rcu_head);
         }
 }
 
@@ -127,6 +127,8 @@
                                                     daddr, dport,
                                                     in->ifindex);
 
+                        if (sk && !atomic_inc_not_zero(&sk->sk_refcnt))
+                                sk = NULL;
                         /* NOTE: we return listeners even if bound to
                          * 0.0.0.0, those are filtered out in
                          * xt_socket, since xt_TPROXY needs 0 bound
@@ -195,6 +197,8 @@
                                                    daddr, ntohs(dport),
                                                    in->ifindex);
 
+                        if (sk && !atomic_inc_not_zero(&sk->sk_refcnt))
+                                sk = NULL;
                         /* NOTE: we return listeners even if bound to
                          * 0.0.0.0, those are filtered out in
                          * xt_socket, since xt_TPROXY needs 0 bound
@@ -53,7 +53,7 @@
         u32 *tlv = (u32 *)(skbdata);
         u16 totlen = nla_total_size(dlen);      /*alignment + hdr */
         char *dptr = (char *)tlv + NLA_HDRLEN;
-        u32 htlv = attrtype << 16 | totlen;
+        u32 htlv = attrtype << 16 | dlen;
 
         *tlv = htonl(htlv);
         memset(dptr, 0, totlen - NLA_HDRLEN);
@@ -135,7 +135,7 @@
 
 int ife_validate_meta_u32(void *val, int len)
 {
-        if (len == 4)
+        if (len == sizeof(u32))
                 return 0;
 
         return -EINVAL;
@@ -144,8 +144,8 @@
 
 int ife_validate_meta_u16(void *val, int len)
 {
-        /* length will include padding */
-        if (len == NLA_ALIGN(2))
+        /* length will not include padding */
+        if (len == sizeof(u16))
                 return 0;
 
         return -EINVAL;
@@ -652,12 +652,14 @@
                 u8 *tlvdata = (u8 *)tlv;
                 u16 mtype = tlv->type;
                 u16 mlen = tlv->len;
+                u16 alen;
 
                 mtype = ntohs(mtype);
                 mlen = ntohs(mlen);
+                alen = NLA_ALIGN(mlen);
 
-                if (find_decode_metaid(skb, ife, mtype, (mlen - 4),
-                                       (void *)(tlvdata + 4))) {
+                if (find_decode_metaid(skb, ife, mtype, (mlen - NLA_HDRLEN),
+                                       (void *)(tlvdata + NLA_HDRLEN))) {
                         /* abuse overlimits to count when we receive metadata
                          * but dont have an ops for it
                          */
@@ -666,8 +668,8 @@
                         ife->tcf_qstats.overlimits++;
                 }
 
-                tlvdata += mlen;
-                ifehdrln -= mlen;
+                tlvdata += alen;
+                ifehdrln -= alen;
                 tlv = (struct meta_tlvhdr *)tlvdata;
         }
 
net/sched/sch_generic.c | +5 -4
@@ -641,18 +641,19 @@
         struct Qdisc *sch;
 
         if (!try_module_get(ops->owner))
-                goto errout;
+                return NULL;
 
         sch = qdisc_alloc(dev_queue, ops);
-        if (IS_ERR(sch))
-                goto errout;
+        if (IS_ERR(sch)) {
+                module_put(ops->owner);
+                return NULL;
+        }
         sch->parent = parentid;
 
         if (!ops->init || ops->init(sch, NULL) == 0)
                 return sch;
 
         qdisc_destroy(sch);
-errout:
         return NULL;
 }
 EXPORT_SYMBOL(qdisc_create_dflt);
net/sctp/input.c | +7 -4
@@ -119,7 +119,13 @@
             skb_transport_offset(skb))
                 goto discard_it;
 
-        if (!pskb_may_pull(skb, sizeof(struct sctphdr)))
+        /* If the packet is fragmented and we need to do crc checking,
+         * it's better to just linearize it otherwise crc computing
+         * takes longer.
+         */
+        if ((!(skb_shinfo(skb)->gso_type & SKB_GSO_SCTP) &&
+             skb_linearize(skb)) ||
+            !pskb_may_pull(skb, sizeof(struct sctphdr)))
                 goto discard_it;
 
         /* Pull up the IP header. */
@@ -1175,9 +1181,6 @@
          * those cannot be on GSO-style anyway.
          */
         if ((skb_shinfo(skb)->gso_type & SKB_GSO_SCTP) == SKB_GSO_SCTP)
-                return NULL;
-
-        if (skb_linearize(skb))
                 return NULL;
 
         ch = (sctp_chunkhdr_t *) skb->data;
net/sctp/inqueue.c | -13
@@ -170,19 +170,6 @@
 
         chunk = list_entry(entry, struct sctp_chunk, list);
 
-        /* Linearize if it's not GSO */
-        if ((skb_shinfo(chunk->skb)->gso_type & SKB_GSO_SCTP) != SKB_GSO_SCTP &&
-            skb_is_nonlinear(chunk->skb)) {
-                if (skb_linearize(chunk->skb)) {
-                        __SCTP_INC_STATS(dev_net(chunk->skb->dev), SCTP_MIB_IN_PKT_DISCARDS);
-                        sctp_chunk_free(chunk);
-                        goto next_chunk;
-                }
-
-                /* Update sctp_hdr as it probably changed */
-                chunk->sctp_hdr = sctp_hdr(chunk->skb);
-        }
-
         if ((skb_shinfo(chunk->skb)->gso_type & SKB_GSO_SCTP) == SKB_GSO_SCTP) {
                 /* GSO-marked skbs but without frags, handle
                  * them normally
@@ -136,6 +136,7 @@
 config HARDENED_USERCOPY
         bool "Harden memory copies between kernel and userspace"
         depends on HAVE_ARCH_HARDENED_USERCOPY
+        depends on HAVE_HARDENED_USERCOPY_ALLOCATOR
         select BUG
         help
           This option checks for obviously wrong memory regions when
@@ -299,8 +299,9 @@
         clk_enable(ssc_p->ssc->clk);
         ssc_p->mck_rate = clk_get_rate(ssc_p->ssc->clk);
 
-        /* Reset the SSC to keep it at a clean status */
-        ssc_writel(ssc_p->ssc->regs, CR, SSC_BIT(CR_SWRST));
+        /* Reset the SSC unless initialized to keep it in a clean state */
+        if (!ssc_p->initialized)
+                ssc_writel(ssc_p->ssc->regs, CR, SSC_BIT(CR_SWRST));
 
         if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
                 dir = 0;
sound/soc/codecs/da7213.c | +2 -2
@@ -1247,8 +1247,8 @@
                 return -EINVAL;
         }
 
-        /* By default only 32 BCLK per WCLK is supported */
-        dai_clk_mode |= DA7213_DAI_BCLKS_PER_WCLK_32;
+        /* By default only 64 BCLK per WCLK is supported */
+        dai_clk_mode |= DA7213_DAI_BCLKS_PER_WCLK_64;
 
         snd_soc_write(codec, DA7213_DAI_CLK_MODE, dai_clk_mode);
         snd_soc_update_bits(codec, DA7213_DAI_CTRL, DA7213_DAI_FORMAT_MASK,
@@ -212,31 +212,6 @@
         0xfa2f, 0xfaea, 0xfba5, 0xfc60, 0xfd1a, 0xfdd4, 0xfe8e, 0xff47
 };
 
-static struct snd_soc_dai *nau8825_get_codec_dai(struct nau8825 *nau8825)
-{
-        struct snd_soc_codec *codec = snd_soc_dapm_to_codec(nau8825->dapm);
-        struct snd_soc_component *component = &codec->component;
-        struct snd_soc_dai *codec_dai, *_dai;
-
-        list_for_each_entry_safe(codec_dai, _dai, &component->dai_list, list) {
-                if (!strncmp(codec_dai->name, NUVOTON_CODEC_DAI,
-                             strlen(NUVOTON_CODEC_DAI)))
-                        return codec_dai;
-        }
-        return NULL;
-}
-
-static bool nau8825_dai_is_active(struct nau8825 *nau8825)
-{
-        struct snd_soc_dai *codec_dai = nau8825_get_codec_dai(nau8825);
-
-        if (codec_dai) {
-                if (codec_dai->playback_active || codec_dai->capture_active)
-                        return true;
-        }
-        return false;
-}
-
 /**
  * nau8825_sema_acquire - acquire the semaphore of nau88l25
  * @nau8825: component to register the codec private data with
@@ -250,19 +225,26 @@
  * Acquires the semaphore without jiffies. If no more tasks are allowed
  * to acquire the semaphore, calling this function will put the task to
  * sleep until the semaphore is released.
- * It returns if the semaphore was acquired.
+ * If the semaphore is not released within the specified number of jiffies,
+ * this function returns -ETIME.
+ * If the sleep is interrupted by a signal, this function will return -EINTR.
+ * It returns 0 if the semaphore was acquired successfully.
  */
-static void nau8825_sema_acquire(struct nau8825 *nau8825, long timeout)
+static int nau8825_sema_acquire(struct nau8825 *nau8825, long timeout)
 {
         int ret;
 
-        if (timeout)
+        if (timeout) {
                 ret = down_timeout(&nau8825->xtalk_sem, timeout);
-        else
+                if (ret < 0)
+                        dev_warn(nau8825->dev, "Acquire semaphone timeout\n");
+        } else {
                 ret = down_interruptible(&nau8825->xtalk_sem);
+                if (ret < 0)
+                        dev_warn(nau8825->dev, "Acquire semaphone fail\n");
+        }
 
-        if (ret < 0)
-                dev_warn(nau8825->dev, "Acquire semaphone fail\n");
+        return ret;
 }
 
 /**
@@ -1205,6 +1187,8 @@
         struct nau8825 *nau8825 = snd_soc_codec_get_drvdata(codec);
         unsigned int val_len = 0;
 
+        nau8825_sema_acquire(nau8825, 2 * HZ);
+
         switch (params_width(params)) {
         case 16:
                 val_len |= NAU8825_I2S_DL_16;
@@ -1225,6 +1209,9 @@
         regmap_update_bits(nau8825->regmap, NAU8825_REG_I2S_PCM_CTRL1,
                 NAU8825_I2S_DL_MASK, val_len);
 
+        /* Release the semaphone. */
+        nau8825_sema_release(nau8825);
+
         return 0;
 }
 
@@ -1233,6 +1220,8 @@
         struct snd_soc_codec *codec = codec_dai->codec;
         struct nau8825 *nau8825 = snd_soc_codec_get_drvdata(codec);
         unsigned int ctrl1_val = 0, ctrl2_val = 0;
+
+        nau8825_sema_acquire(nau8825, 2 * HZ);
 
         switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
         case SND_SOC_DAIFMT_CBM_CFM:
@@ -1281,6 +1270,9 @@
                 ctrl1_val);
         regmap_update_bits(nau8825->regmap, NAU8825_REG_I2S_PCM_CTRL2,
                 NAU8825_I2S_MS_MASK, ctrl2_val);
+
+        /* Release the semaphone. */
+        nau8825_sema_release(nau8825);
 
         return 0;
 }
@@ -1611,8 +1603,11 @@
                          * cess and restore changes if process
                          * is ongoing when ejection.
                          */
+                        int ret;
                         nau8825->xtalk_protect = true;
-                        nau8825_sema_acquire(nau8825, 0);
+                        ret = nau8825_sema_acquire(nau8825, 0);
+                        if (ret < 0)
+                                nau8825->xtalk_protect = false;
                 }
                 /* Startup cross talk detection process */
                 nau8825->xtalk_state = NAU8825_XTALK_PREPARE;
@@ -2238,23 +2233,14 @@
 static int __maybe_unused nau8825_resume(struct snd_soc_codec *codec)
 {
         struct nau8825 *nau8825 = snd_soc_codec_get_drvdata(codec);
+        int ret;
 
         regcache_cache_only(nau8825->regmap, false);
         regcache_sync(nau8825->regmap);
-        if (nau8825_is_jack_inserted(nau8825->regmap)) {
-                /* If the jack is inserted, we need to check whether the play-
-                 * back is active before suspend. If active, the driver has to
-                 * raise the protection for cross talk function to avoid the
-                 * playback recovers before cross talk process finish. Other-
-                 * wise, the playback will be interfered by cross talk func-
-                 * tion. It is better to apply hardware related parameters
-                 * before starting playback or record.
-                 */
-                if (nau8825_dai_is_active(nau8825)) {
-                        nau8825->xtalk_protect = true;
-                        nau8825_sema_acquire(nau8825, 0);
-                }
-        }
+        nau8825->xtalk_protect = true;
+        ret = nau8825_sema_acquire(nau8825, 0);
+        if (ret < 0)
+                nau8825->xtalk_protect = false;
         enable_irq(nau8825->irq);
 
         return 0;
+1-1
sound/soc/codecs/wm2000.c
···
 	if (anc_transitions[i].dest == ANC_OFF)
 		clk_disable_unprepare(wm2000->mclk);
 
-	return ret;
+	return 0;
 }
 
 static int wm2000_anc_set_mode(struct wm2000_priv *wm2000)
···
  * it under the terms of the GNU General Public License version 2 as
  * published by the Free Software Foundation.
  */
+#include <linux/module.h>
 #include <linux/of.h>
 #include <sound/simple_card_utils.h>
 
···
 	return 0;
 }
 EXPORT_SYMBOL_GPL(asoc_simple_card_parse_card_name);
+
+/* Module information */
+MODULE_AUTHOR("Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>");
+MODULE_DESCRIPTION("ALSA SoC Simple Card Utils");
+MODULE_LICENSE("GPL v2");
+5
sound/soc/intel/skylake/skl-sst-utils.c
···
 
 	uuid_mod = (uuid_le *)uuid;
 
+	if (list_empty(&ctx->uuid_list)) {
+		dev_err(ctx->dev, "Module list is empty\n");
+		return -EINVAL;
+	}
+
 	list_for_each_entry(module, &ctx->uuid_list, list) {
 		if (uuid_le_cmp(*uuid_mod, module->uuid) == 0) {
 			dfw_config->module_id = module->id;
+3-1
sound/soc/intel/skylake/skl.c
···
 
 	skl->nhlt = skl_nhlt_init(bus->dev);
 
-	if (skl->nhlt == NULL)
+	if (skl->nhlt == NULL) {
+		err = -ENODEV;
 		goto out_free;
+	}
 
 	skl_nhlt_update_topology_bin(skl);
 
+32-29
sound/soc/omap/omap-abe-twl6040.c
···
 struct abe_twl6040 {
 	int	jack_detection;	/* board can detect jack events */
 	int	mclk_freq;	/* MCLK frequency speed for twl6040 */
-
-	struct platform_device *dmic_codec_dev;
 };
+
+struct platform_device *dmic_codec_dev;
 
 static int omap_abe_hw_params(struct snd_pcm_substream *substream,
 	struct snd_pcm_hw_params *params)
···
 	if (priv == NULL)
 		return -ENOMEM;
 
-	priv->dmic_codec_dev = ERR_PTR(-EINVAL);
-
 	if (snd_soc_of_parse_card_name(card, "ti,model")) {
 		dev_err(&pdev->dev, "Card name is not provided\n");
 		return -ENODEV;
···
 		num_links = 2;
 		abe_twl6040_dai_links[1].cpu_of_node = dai_node;
 		abe_twl6040_dai_links[1].platform_of_node = dai_node;
-
-		priv->dmic_codec_dev = platform_device_register_simple(
-						"dmic-codec", -1, NULL, 0);
-		if (IS_ERR(priv->dmic_codec_dev)) {
-			dev_err(&pdev->dev, "Can't instantiate dmic-codec\n");
-			return PTR_ERR(priv->dmic_codec_dev);
-		}
 	} else {
 		num_links = 1;
 	}
···
 	of_property_read_u32(node, "ti,mclk-freq", &priv->mclk_freq);
 	if (!priv->mclk_freq) {
 		dev_err(&pdev->dev, "MCLK frequency not provided\n");
-		ret = -EINVAL;
-		goto err_unregister;
+		return -EINVAL;
 	}
 
 	card->fully_routed = 1;
 
 	if (!priv->mclk_freq) {
 		dev_err(&pdev->dev, "MCLK frequency missing\n");
-		ret = -ENODEV;
-		goto err_unregister;
+		return -ENODEV;
 	}
 
 	card->dai_link = abe_twl6040_dai_links;
···
 	snd_soc_card_set_drvdata(card, priv);
 
 	ret = snd_soc_register_card(card);
-	if (ret) {
+	if (ret)
 		dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n",
 			ret);
-		goto err_unregister;
-	}
-
-	return 0;
-
-err_unregister:
-	if (!IS_ERR(priv->dmic_codec_dev))
-		platform_device_unregister(priv->dmic_codec_dev);
 
 	return ret;
 }
···
 static int omap_abe_remove(struct platform_device *pdev)
 {
 	struct snd_soc_card *card = platform_get_drvdata(pdev);
-	struct abe_twl6040 *priv = snd_soc_card_get_drvdata(card);
 
 	snd_soc_unregister_card(card);
-
-	if (!IS_ERR(priv->dmic_codec_dev))
-		platform_device_unregister(priv->dmic_codec_dev);
 
 	return 0;
 }
···
 	.remove = omap_abe_remove,
 };
 
-module_platform_driver(omap_abe_driver);
+static int __init omap_abe_init(void)
+{
+	int ret;
+
+	dmic_codec_dev = platform_device_register_simple("dmic-codec", -1, NULL,
+							 0);
+	if (IS_ERR(dmic_codec_dev)) {
+		pr_err("%s: dmic-codec device registration failed\n", __func__);
+		return PTR_ERR(dmic_codec_dev);
+	}
+
+	ret = platform_driver_register(&omap_abe_driver);
+	if (ret) {
+		pr_err("%s: platform driver registration failed\n", __func__);
+		platform_device_unregister(dmic_codec_dev);
+	}
+
+	return ret;
+}
+module_init(omap_abe_init);
+
+static void __exit omap_abe_exit(void)
+{
+	platform_driver_unregister(&omap_abe_driver);
+	platform_device_unregister(dmic_codec_dev);
+}
+module_exit(omap_abe_exit);
 
 MODULE_AUTHOR("Misael Lopez Cruz <misael.lopez@ti.com>");
 MODULE_DESCRIPTION("ALSA SoC for OMAP boards with ABE and twl6040 codec");
+3-19
sound/soc/omap/omap-mcpdm.c
···
 #include <linux/err.h>
 #include <linux/io.h>
 #include <linux/irq.h>
-#include <linux/clk.h>
 #include <linux/slab.h>
 #include <linux/pm_runtime.h>
 #include <linux/of_device.h>
···
 	unsigned long phys_base;
 	void __iomem *io_base;
 	int irq;
-	struct clk *pdmclk;
 
 	struct mutex mutex;
 
···
 	struct omap_mcpdm *mcpdm = snd_soc_dai_get_drvdata(dai);
 	int ret;
 
-	clk_prepare_enable(mcpdm->pdmclk);
 	pm_runtime_enable(mcpdm->dev);
 
 	/* Disable lines while request is ongoing */
 	pm_runtime_get_sync(mcpdm->dev);
 	omap_mcpdm_write(mcpdm, MCPDM_REG_CTRL, 0x00);
 
-	ret = devm_request_irq(mcpdm->dev, mcpdm->irq, omap_mcpdm_irq_handler,
-				0, "McPDM", (void *)mcpdm);
+	ret = request_irq(mcpdm->irq, omap_mcpdm_irq_handler, 0, "McPDM",
+			  (void *)mcpdm);
 
 	pm_runtime_put_sync(mcpdm->dev);
 
···
 {
 	struct omap_mcpdm *mcpdm = snd_soc_dai_get_drvdata(dai);
 
+	free_irq(mcpdm->irq, (void *)mcpdm);
 	pm_runtime_disable(mcpdm->dev);
 
-	clk_disable_unprepare(mcpdm->pdmclk);
 	return 0;
 }
 
···
 		mcpdm->pm_active_count++;
 	}
 
-	clk_disable_unprepare(mcpdm->pdmclk);
-
 	return 0;
 }
 
 static int omap_mcpdm_resume(struct snd_soc_dai *dai)
 {
 	struct omap_mcpdm *mcpdm = snd_soc_dai_get_drvdata(dai);
-
-	clk_prepare_enable(mcpdm->pdmclk);
 
 	if (mcpdm->pm_active_count) {
 		while (mcpdm->pm_active_count--)
···
 		return mcpdm->irq;
 
 	mcpdm->dev = &pdev->dev;
-
-	mcpdm->pdmclk = devm_clk_get(&pdev->dev, "pdmclk");
-	if (IS_ERR(mcpdm->pdmclk)) {
-		if (PTR_ERR(mcpdm->pdmclk) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-		dev_warn(&pdev->dev, "Error getting pdmclk (%ld)!\n",
-			 PTR_ERR(mcpdm->pdmclk));
-		mcpdm->pdmclk = NULL;
-	}
 
 	ret = devm_snd_soc_register_component(&pdev->dev,
 					      &omap_mcpdm_component,
+4-3
sound/soc/samsung/s3c24xx_uda134x.c
···
 
 static int s3c24xx_uda134x_startup(struct snd_pcm_substream *substream)
 {
-	int ret = 0;
+	struct snd_soc_pcm_runtime *rtd = substream->private_data;
+	struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
 #ifdef ENFORCE_RATES
 	struct snd_pcm_runtime *runtime = substream->runtime;
 #endif
+	int ret = 0;
 
 	mutex_lock(&clk_lock);
 	pr_debug("%s %d\n", __func__, clk_users);
···
 			printk(KERN_ERR "%s cannot get xtal\n", __func__);
 			ret = PTR_ERR(xtal);
 		} else {
-			pclk = clk_get(&s3c24xx_uda134x_snd_device->dev,
-				       "pclk");
+			pclk = clk_get(cpu_dai->dev, "iis");
 			if (IS_ERR(pclk)) {
 				printk(KERN_ERR "%s cannot get pclk\n",
 				       __func__);
···
 /* Supported VGICv3 address types  */
 #define KVM_VGIC_V3_ADDR_TYPE_DIST	2
 #define KVM_VGIC_V3_ADDR_TYPE_REDIST	3
+#define KVM_VGIC_ITS_ADDR_TYPE		4
 
 #define KVM_VGIC_V3_DIST_SIZE		SZ_64K
 #define KVM_VGIC_V3_REDIST_SIZE		(2 * SZ_64K)
+#define KVM_VGIC_V3_ITS_SIZE		(2 * SZ_64K)
 
 #define KVM_ARM_VCPU_POWER_OFF		0 /* CPU is started in OFF state */
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
+41
tools/arch/s390/include/uapi/asm/kvm.h
···
 	__u64 fac_list[256];
 };
 
+#define KVM_S390_VM_CPU_PROCESSOR_FEAT	2
+#define KVM_S390_VM_CPU_MACHINE_FEAT	3
+
+#define KVM_S390_VM_CPU_FEAT_NR_BITS	1024
+#define KVM_S390_VM_CPU_FEAT_ESOP	0
+#define KVM_S390_VM_CPU_FEAT_SIEF2	1
+#define KVM_S390_VM_CPU_FEAT_64BSCAO	2
+#define KVM_S390_VM_CPU_FEAT_SIIF	3
+#define KVM_S390_VM_CPU_FEAT_GPERE	4
+#define KVM_S390_VM_CPU_FEAT_GSLS	5
+#define KVM_S390_VM_CPU_FEAT_IB		6
+#define KVM_S390_VM_CPU_FEAT_CEI	7
+#define KVM_S390_VM_CPU_FEAT_IBS	8
+#define KVM_S390_VM_CPU_FEAT_SKEY	9
+#define KVM_S390_VM_CPU_FEAT_CMMA	10
+#define KVM_S390_VM_CPU_FEAT_PFMFI	11
+#define KVM_S390_VM_CPU_FEAT_SIGPIF	12
+struct kvm_s390_vm_cpu_feat {
+	__u64 feat[16];
+};
+
+#define KVM_S390_VM_CPU_PROCESSOR_SUBFUNC	4
+#define KVM_S390_VM_CPU_MACHINE_SUBFUNC		5
+/* for "test bit" instructions MSB 0 bit ordering, for "query" raw blocks */
+struct kvm_s390_vm_cpu_subfunc {
+	__u8 plo[32];		/* always */
+	__u8 ptff[16];		/* with TOD-clock steering */
+	__u8 kmac[16];		/* with MSA */
+	__u8 kmc[16];		/* with MSA */
+	__u8 km[16];		/* with MSA */
+	__u8 kimd[16];		/* with MSA */
+	__u8 klmd[16];		/* with MSA */
+	__u8 pckmo[16];		/* with MSA3 */
+	__u8 kmctr[16];		/* with MSA4 */
+	__u8 kmf[16];		/* with MSA4 */
+	__u8 kmo[16];		/* with MSA4 */
+	__u8 pcc[16];		/* with MSA4 */
+	__u8 ppno[16];		/* with MSA5 */
+	__u8 reserved[1824];
+};
+
 /* kvm attributes for crypto */
 #define KVM_S390_VM_CRYPTO_ENABLE_AES_KW	0
 #define KVM_S390_VM_CRYPTO_ENABLE_DEA_KW	1
···
 /*
- * gpio-hammer - example swiss army knife to shake GPIO lines on a system
+ * gpio-event-mon - monitor GPIO line events from userspace
  *
  * Copyright (C) 2016 Linus Walleij
  *
+5-1
tools/include/linux/string.h
···
 
 int strtobool(const char *s, bool *res);
 
-#ifdef __GLIBC__
+/*
+ * glibc based builds needs the extern while uClibc doesn't.
+ * However uClibc headers also define __GLIBC__ hence the hack below
+ */
+#if defined(__GLIBC__) && !defined(__UCLIBC__)
 extern size_t strlcpy(char *dest, const char *src, size_t size);
 #endif
 
···
 
 	irq = kzalloc(sizeof(struct vgic_irq), GFP_KERNEL);
 	if (!irq)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	INIT_LIST_HEAD(&irq->lpi_list);
 	INIT_LIST_HEAD(&irq->ap_list);
···
  * Find the target VCPU and the LPI number for a given devid/eventid pair
  * and make this IRQ pending, possibly injecting it.
  * Must be called with the its_lock mutex held.
+ * Returns 0 on success, a positive error value for any ITS mapping
+ * related errors and negative error values for generic errors.
  */
-static void vgic_its_trigger_msi(struct kvm *kvm, struct vgic_its *its,
-				 u32 devid, u32 eventid)
+static int vgic_its_trigger_msi(struct kvm *kvm, struct vgic_its *its,
+				u32 devid, u32 eventid)
 {
+	struct kvm_vcpu *vcpu;
 	struct its_itte *itte;
 
 	if (!its->enabled)
-		return;
+		return -EBUSY;
 
 	itte = find_itte(its, devid, eventid);
-	/* Triggering an unmapped IRQ gets silently dropped. */
-	if (itte && its_is_collection_mapped(itte->collection)) {
-		struct kvm_vcpu *vcpu;
+	if (!itte || !its_is_collection_mapped(itte->collection))
+		return E_ITS_INT_UNMAPPED_INTERRUPT;
 
-		vcpu = kvm_get_vcpu(kvm, itte->collection->target_addr);
-		if (vcpu && vcpu->arch.vgic_cpu.lpis_enabled) {
-			spin_lock(&itte->irq->irq_lock);
-			itte->irq->pending = true;
-			vgic_queue_irq_unlock(kvm, itte->irq);
-		}
-	}
+	vcpu = kvm_get_vcpu(kvm, itte->collection->target_addr);
+	if (!vcpu)
+		return E_ITS_INT_UNMAPPED_INTERRUPT;
+
+	if (!vcpu->arch.vgic_cpu.lpis_enabled)
+		return -EBUSY;
+
+	spin_lock(&itte->irq->irq_lock);
+	itte->irq->pending = true;
+	vgic_queue_irq_unlock(kvm, itte->irq);
+
+	return 0;
+}
+
+static struct vgic_io_device *vgic_get_its_iodev(struct kvm_io_device *dev)
+{
+	struct vgic_io_device *iodev;
+
+	if (dev->ops != &kvm_io_gic_ops)
+		return NULL;
+
+	iodev = container_of(dev, struct vgic_io_device, dev);
+
+	if (iodev->iodev_type != IODEV_ITS)
+		return NULL;
+
+	return iodev;
 }
 
 /*
  * Queries the KVM IO bus framework to get the ITS pointer from the given
  * doorbell address.
  * We then call vgic_its_trigger_msi() with the decoded data.
+ * According to the KVM_SIGNAL_MSI API description returns 1 on success.
  */
 int vgic_its_inject_msi(struct kvm *kvm, struct kvm_msi *msi)
 {
 	u64 address;
 	struct kvm_io_device *kvm_io_dev;
 	struct vgic_io_device *iodev;
+	int ret;
 
 	if (!vgic_has_its(kvm))
 		return -ENODEV;
···
 
 	kvm_io_dev = kvm_io_bus_get_dev(kvm, KVM_MMIO_BUS, address);
 	if (!kvm_io_dev)
-		return -ENODEV;
+		return -EINVAL;
 
-	iodev = container_of(kvm_io_dev, struct vgic_io_device, dev);
+	iodev = vgic_get_its_iodev(kvm_io_dev);
+	if (!iodev)
+		return -EINVAL;
 
 	mutex_lock(&iodev->its->its_lock);
-	vgic_its_trigger_msi(kvm, iodev->its, msi->devid, msi->data);
+	ret = vgic_its_trigger_msi(kvm, iodev->its, msi->devid, msi->data);
 	mutex_unlock(&iodev->its->its_lock);
 
-	return 0;
+	if (ret < 0)
+		return ret;
+
+	/*
+	 * KVM_SIGNAL_MSI demands a return value > 0 for success and 0
+	 * if the guest has blocked the MSI. So we map any LPI mapping
+	 * related error to that.
+	 */
+	if (ret)
+		return 0;
+	else
+		return 1;
 }
 
 /* Requires the its_lock to be held. */
···
 	list_del(&itte->itte_list);
 
 	/* This put matches the get in vgic_add_lpi. */
-	vgic_put_irq(kvm, itte->irq);
+	if (itte->irq)
+		vgic_put_irq(kvm, itte->irq);
 
 	kfree(itte);
 }
···
 	struct its_device *device;
 	struct its_collection *collection, *new_coll = NULL;
 	int lpi_nr;
+	struct vgic_irq *irq;
 
 	device = find_its_device(its, device_id);
 	if (!device)
···
 	    lpi_nr >= max_lpis_propbaser(kvm->arch.vgic.propbaser))
 		return E_ITS_MAPTI_PHYSICALID_OOR;
 
+	/* If there is an existing mapping, behavior is UNPREDICTABLE. */
+	if (find_itte(its, device_id, event_id))
+		return 0;
+
 	collection = find_collection(its, coll_id);
 	if (!collection) {
 		int ret = vgic_its_alloc_collection(its, &collection, coll_id);
···
 		new_coll = collection;
 	}
 
-	itte = find_itte(its, device_id, event_id);
+	itte = kzalloc(sizeof(struct its_itte), GFP_KERNEL);
 	if (!itte) {
-		itte = kzalloc(sizeof(struct its_itte), GFP_KERNEL);
-		if (!itte) {
-			if (new_coll)
-				vgic_its_free_collection(its, coll_id);
-			return -ENOMEM;
-		}
-
-		itte->event_id	= event_id;
-		list_add_tail(&itte->itte_list, &device->itt_head);
+		if (new_coll)
+			vgic_its_free_collection(its, coll_id);
+		return -ENOMEM;
 	}
+
+	itte->event_id	= event_id;
+	list_add_tail(&itte->itte_list, &device->itt_head);
 
 	itte->collection = collection;
 	itte->lpi = lpi_nr;
-	itte->irq = vgic_add_lpi(kvm, lpi_nr);
+
+	irq = vgic_add_lpi(kvm, lpi_nr);
+	if (IS_ERR(irq)) {
+		if (new_coll)
+			vgic_its_free_collection(its, coll_id);
+		its_free_itte(kvm, itte);
+		return PTR_ERR(irq);
+	}
+	itte->irq = irq;
+
 	update_affinity_itte(kvm, itte);
 
 	/*
···
 	u32 msi_data = its_cmd_get_id(its_cmd);
 	u64 msi_devid = its_cmd_get_deviceid(its_cmd);
 
-	vgic_its_trigger_msi(kvm, its, msi_devid, msi_data);
-
-	return 0;
+	return vgic_its_trigger_msi(kvm, its, msi_devid, msi_data);
 }
 
 /*
···
 	its_sync_lpi_pending_table(vcpu);
 }
 
-static int vgic_its_init_its(struct kvm *kvm, struct vgic_its *its)
+static int vgic_register_its_iodev(struct kvm *kvm, struct vgic_its *its)
 {
 	struct vgic_io_device *iodev = &its->iodev;
 	int ret;
 
-	if (its->initialized)
-		return 0;
+	if (!its->initialized)
+		return -EBUSY;
 
 	if (IS_VGIC_ADDR_UNDEF(its->vgic_its_base))
 		return -ENXIO;
···
 	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, iodev->base_addr,
 				      KVM_VGIC_V3_ITS_SIZE, &iodev->dev);
 	mutex_unlock(&kvm->slots_lock);
-
-	if (!ret)
-		its->initialized = true;
 
 	return ret;
 }
···
 	if (type != KVM_VGIC_ITS_ADDR_TYPE)
 		return -ENODEV;
 
-	if (its->initialized)
-		return -EBUSY;
-
 	if (copy_from_user(&addr, uaddr, sizeof(addr)))
 		return -EFAULT;
 
···
 	case KVM_DEV_ARM_VGIC_GRP_CTRL:
 		switch (attr->attr) {
 		case KVM_DEV_ARM_VGIC_CTRL_INIT:
-			return vgic_its_init_its(dev->kvm, its);
+			its->initialized = true;
+
+			return 0;
 		}
 		break;
 	}
···
 {
 	return kvm_register_device_ops(&kvm_arm_vgic_its_ops,
 				       KVM_DEV_TYPE_ARM_VGIC_ITS);
+}
+
+/*
+ * Registers all ITSes with the kvm_io_bus framework.
+ * To follow the existing VGIC initialization sequence, this has to be
+ * done as late as possible, just before the first VCPU runs.
+ */
+int vgic_register_its_iodevs(struct kvm *kvm)
+{
+	struct kvm_device *dev;
+	int ret = 0;
+
+	list_for_each_entry(dev, &kvm->devices, vm_node) {
+		if (dev->ops != &kvm_arm_vgic_its_ops)
+			continue;
+
+		ret = vgic_register_its_iodev(kvm, dev->private);
+		if (ret)
+			return ret;
+		/*
+		 * We don't need to care about tearing down previously
+		 * registered ITSes, as the kvm_io_bus framework removes
+		 * them for us if the VM gets destroyed.
+		 */
+	}
+
+	return ret;
}
+16-10
virt/kvm/arm/vgic/vgic-mmio-v3.c
···
 {
 	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
-	u64 propbaser = dist->propbaser;
+	u64 old_propbaser, propbaser;
 
 	/* Storing a value with LPIs already enabled is undefined */
 	if (vgic_cpu->lpis_enabled)
 		return;
 
-	propbaser = update_64bit_reg(propbaser, addr & 4, len, val);
-	propbaser = vgic_sanitise_propbaser(propbaser);
-
-	dist->propbaser = propbaser;
+	do {
+		old_propbaser = dist->propbaser;
+		propbaser = old_propbaser;
+		propbaser = update_64bit_reg(propbaser, addr & 4, len, val);
+		propbaser = vgic_sanitise_propbaser(propbaser);
+	} while (cmpxchg64(&dist->propbaser, old_propbaser,
+			   propbaser) != old_propbaser);
 }
 
 static unsigned long vgic_mmio_read_pendbase(struct kvm_vcpu *vcpu,
···
 			     unsigned long val)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
-	u64 pendbaser = vgic_cpu->pendbaser;
+	u64 old_pendbaser, pendbaser;
 
 	/* Storing a value with LPIs already enabled is undefined */
 	if (vgic_cpu->lpis_enabled)
 		return;
 
-	pendbaser = update_64bit_reg(pendbaser, addr & 4, len, val);
-	pendbaser = vgic_sanitise_pendbaser(pendbaser);
-
-	vgic_cpu->pendbaser = pendbaser;
+	do {
+		old_pendbaser = vgic_cpu->pendbaser;
+		pendbaser = old_pendbaser;
+		pendbaser = update_64bit_reg(pendbaser, addr & 4, len, val);
+		pendbaser = vgic_sanitise_pendbaser(pendbaser);
+	} while (cmpxchg64(&vgic_cpu->pendbaser, old_pendbaser,
+			   pendbaser) != old_pendbaser);
 }
 
 /*
+8
virt/kvm/arm/vgic/vgic-v3.c
···
 		goto out;
 	}
 
+	if (vgic_has_its(kvm)) {
+		ret = vgic_register_its_iodevs(kvm);
+		if (ret) {
+			kvm_err("Unable to register VGIC ITS MMIO regions\n");
+			goto out;
+		}
+	}
+
 	dist->ready = true;
 
 out: