···
   reset signal present internally in some host controller IC designs.
   See Documentation/devicetree/bindings/reset/reset.txt for details.

+* reset-names: request name for using "resets" property. Must be "reset".
+  (It will be used together with "resets" property.)
+
 * clocks: from common clock binding: handle to biu and ciu clocks for the
   bus interface unit clock and the card interface unit clock.
···
 	interrupts = <0 75 0>;
 	#address-cells = <1>;
 	#size-cells = <0>;
+	resets = <&rst 20>;
+	reset-names = "reset";
 };

 [board specific internal DMA resources]
···
 - #size-cells	: The value of this property must be 1
 - ranges	: defines mapping between pin controller node (parent) to
   gpio-bank node (children).
-- interrupt-parent: phandle of the interrupt parent to which the external
-  GPIO interrupts are forwarded to.
-- st,syscfg: Should be phandle/offset pair. The phandle to the syscon node
-  which includes IRQ mux selection register, and the offset of the IRQ mux
-  selection register.
 - pins-are-numbered: Specify the subnodes are using numbered pinmux to
   specify pins.
···

 Optional properties:
 - reset:	: Reference to the reset controller
+- interrupt-parent: phandle of the interrupt parent to which the external
+  GPIO interrupts are forwarded to.
+- st,syscfg: Should be phandle/offset pair. The phandle to the syscon node
+  which includes IRQ mux selection register, and the offset of the IRQ mux
+  selection register.

 Example:
 #include <dt-bindings/pinctrl/stm32f429-pinfunc.h>
···

 Optional properties:
 - ti,dmic: phandle for the OMAP dmic node if the machine have it connected
-- ti,jack_detection: Need to be present if the board capable to detect jack
+- ti,jack-detection: Need to be present if the board capable to detect jack
   insertion, removal.

 Available audio endpoints for the audio-routing table:
-1
Documentation/filesystems/Locking
···
 	int (*flush) (struct file *);
 	int (*release) (struct inode *, struct file *);
 	int (*fsync) (struct file *, loff_t start, loff_t end, int datasync);
-	int (*aio_fsync) (struct kiocb *, int datasync);
 	int (*fasync) (int, struct file *, int);
 	int (*lock) (struct file *, int, struct file_lock *);
 	ssize_t (*readv) (struct file *, const struct iovec *, unsigned long,
-1
Documentation/filesystems/vfs.txt
···
 	int (*flush) (struct file *, fl_owner_t id);
 	int (*release) (struct inode *, struct file *);
 	int (*fsync) (struct file *, loff_t, loff_t, int datasync);
-	int (*aio_fsync) (struct kiocb *, int datasync);
 	int (*fasync) (int, struct file *, int);
 	int (*lock) (struct file *, int, struct file_lock *);
 	ssize_t (*sendpage) (struct file *, struct page *, int, size_t, loff_t *, int);
+2-2
Documentation/i2c/i2c-topology
···

 This is a good topology.

-  .--------.
+                  .--------.
    .----------.  .--| dev D1 |
    | parent-  |--'  '--------'
 .--| locked   |     .--------.
···

 This is a good topology.

-  .--------.
+                  .--------.
    .----------.  .--| dev D1 |
    | mux-     |--'  '--------'
 .--| locked   |     .--------.
+2-1
Documentation/networking/dsa/dsa.txt
···
 Switch tagging protocols
 ------------------------

-DSA currently supports 4 different tagging protocols, and a tag-less mode as
+DSA currently supports 5 different tagging protocols, and a tag-less mode as
 well. The different protocols are implemented in:

 net/dsa/tag_trailer.c: Marvell's 4 trailer tag mode (legacy)
 net/dsa/tag_dsa.c: Marvell's original DSA tag
 net/dsa/tag_edsa.c: Marvell's enhanced DSA tag
 net/dsa/tag_brcm.c: Broadcom's 4 bytes tag
+net/dsa/tag_qca.c: Qualcomm's 2 bytes tag

 The exact format of the tag protocol is vendor specific, but in general, they
 all contain something which:
+11
Documentation/virtual/kvm/api.txt
···
 conjunction with KVM_SET_CLOCK, it is used to ensure monotonicity on scenarios
 such as migration.

+When KVM_CAP_ADJUST_CLOCK is passed to KVM_CHECK_EXTENSION, it returns the
+set of bits that KVM can return in struct kvm_clock_data's flag member.
+
+The only flag defined now is KVM_CLOCK_TSC_STABLE. If set, the returned
+value is the exact kvmclock value seen by all VCPUs at the instant
+when KVM_GET_CLOCK was called. If clear, the returned value is simply
+CLOCK_MONOTONIC plus a constant offset; the offset can be modified
+with KVM_SET_CLOCK. KVM will try to make all VCPUs follow this clock,
+but the exact value read by each VCPU could differ, because the host
+TSC is not stable.
+
 struct kvm_clock_data {
 	__u64 clock;  /* kvmclock current value */
 	__u32 flags;
+3-1
MAINTAINERS
···
 LED SUBSYSTEM
 M:	Richard Purdie <rpurdie@rpsys.net>
 M:	Jacek Anaszewski <j.anaszewski@samsung.com>
+M:	Pavel Machek <pavel@ucw.cz>
 L:	linux-leds@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/j.anaszewski/linux-leds.git
 S:	Maintained
···
 F:	include/linux/mlx4/

 MELLANOX MLX5 core VPI driver
+M:	Saeed Mahameed <saeedm@mellanox.com>
 M:	Matan Barak <matanb@mellanox.com>
 M:	Leon Romanovsky <leonro@mellanox.com>
 L:	netdev@vger.kernel.org
···
 M:	Keith Busch <keith.busch@intel.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
-F:	arch/x86/pci/vmd.c
+F:	drivers/pci/host/vmd.c

 PCIE DRIVER FOR ST SPEAR13XX
 M:	Pratyush Anand <pratyush.anand@gmail.com>
···

 cflags-$(atleast_gcc44)			+= -fsection-anchors

+cflags-$(CONFIG_ARC_HAS_LLSC)		+= -mlock
+cflags-$(CONFIG_ARC_HAS_SWAPE)		+= -mswape
+
 ifdef CONFIG_ISA_ARCV2

 ifndef CONFIG_ARC_HAS_LL64
···
 ifndef CONFIG_CC_OPTIMIZE_FOR_SIZE
 # Generic build system uses -O2, we want -O3
 # Note: No need to add to cflags-y as that happens anyways
-ARCH_CFLAGS += -O3
+#
+# Disable the false maybe-uninitialized warings gcc spits out at -O3
+ARCH_CFLAGS += -O3 $(call cc-disable-warning,maybe-uninitialized,)
 endif

 # small data is default for elf32 tool-chain. If not usable, disable it
···
 CONFIG_INITRAMFS_SOURCE="../arc_initramfs/"
 CONFIG_KALLSYMS_ALL=y
 CONFIG_EMBEDDED=y
+CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
+1
arch/arc/configs/nsim_hs_defconfig
···
 CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/"
 CONFIG_KALLSYMS_ALL=y
 CONFIG_EMBEDDED=y
+CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
+1
arch/arc/configs/nsim_hs_smp_defconfig
···
 CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/"
 CONFIG_KALLSYMS_ALL=y
 CONFIG_EMBEDDED=y
+CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
+1
arch/arc/configs/nsimosci_defconfig
···
 CONFIG_INITRAMFS_SOURCE="../arc_initramfs/"
 CONFIG_KALLSYMS_ALL=y
 CONFIG_EMBEDDED=y
+CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
+1
arch/arc/configs/nsimosci_hs_defconfig
···
 CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/"
 CONFIG_KALLSYMS_ALL=y
 CONFIG_EMBEDDED=y
+CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
+1-2
arch/arc/configs/nsimosci_hs_smp_defconfig
···
 # CONFIG_PID_NS is not set
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/"
+CONFIG_PERF_EVENTS=y
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
 CONFIG_MODULES=y
···
 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
 # CONFIG_IPV6 is not set
 # CONFIG_WIRELESS is not set
 CONFIG_DEVTMPFS=y
···
 # CONFIG_HWMON is not set
 CONFIG_DRM=y
 CONFIG_DRM_ARCPGU=y
-CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_LOGO=y
 # CONFIG_HID is not set
 # CONFIG_USB_SUPPORT is not set
+2
arch/arc/include/asm/arcregs.h
···
 #define STATUS_AE_BIT		5	/* Exception active */
 #define STATUS_DE_BIT		6	/* PC is in delay slot */
 #define STATUS_U_BIT		7	/* User/Kernel mode */
+#define STATUS_Z_BIT		11
 #define STATUS_L_BIT		12	/* Loop inhibit */

 /* These masks correspond to the status word(STATUS_32) bits */
 #define STATUS_AE_MASK		(1<<STATUS_AE_BIT)
 #define STATUS_DE_MASK		(1<<STATUS_DE_BIT)
 #define STATUS_U_MASK		(1<<STATUS_U_BIT)
+#define STATUS_Z_MASK		(1<<STATUS_Z_BIT)
 #define STATUS_L_MASK		(1<<STATUS_L_BIT)

 /*
+2-2
arch/arc/include/asm/smp.h
···
  * API expected BY platform smp code (FROM arch smp code)
  *
  * smp_ipi_irq_setup:
- *	Takes @cpu and @irq to which the arch-common ISR is hooked up
+ *	Takes @cpu and @hwirq to which the arch-common ISR is hooked up
  */
-extern int smp_ipi_irq_setup(int cpu, int irq);
+extern int smp_ipi_irq_setup(int cpu, irq_hw_number_t hwirq);

 /*
  * struct plat_smp_ops - SMP callbacks provided by platform to ARC SMP
···
 {
 	unsigned long flags;
 	cpumask_t online;
+	unsigned int destination_bits;
+	unsigned int distribution_mode;

 	/* errout if no online cpu per @cpumask */
 	if (!cpumask_and(&online, cpumask, cpu_online_mask))
···

 	raw_spin_lock_irqsave(&mcip_lock, flags);

-	idu_set_dest(data->hwirq, cpumask_bits(&online)[0]);
-	idu_set_mode(data->hwirq, IDU_M_TRIG_LEVEL, IDU_M_DISTRI_RR);
+	destination_bits = cpumask_bits(&online)[0];
+	idu_set_dest(data->hwirq, destination_bits);
+
+	if (ffs(destination_bits) == fls(destination_bits))
+		distribution_mode = IDU_M_DISTRI_DEST;
+	else
+		distribution_mode = IDU_M_DISTRI_RR;
+
+	idu_set_mode(data->hwirq, IDU_M_TRIG_LEVEL, distribution_mode);

 	raw_spin_unlock_irqrestore(&mcip_lock, flags);
···
 };

-static int idu_first_irq;
+static irq_hw_number_t idu_first_hwirq;

 static void idu_cascade_isr(struct irq_desc *desc)
 {
-	struct irq_domain *domain = irq_desc_get_handler_data(desc);
-	unsigned int core_irq = irq_desc_get_irq(desc);
-	unsigned int idu_irq;
+	struct irq_domain *idu_domain = irq_desc_get_handler_data(desc);
+	irq_hw_number_t core_hwirq = irqd_to_hwirq(irq_desc_get_irq_data(desc));
+	irq_hw_number_t idu_hwirq = core_hwirq - idu_first_hwirq;

-	idu_irq = core_irq - idu_first_irq;
-	generic_handle_irq(irq_find_mapping(domain, idu_irq));
+	generic_handle_irq(irq_find_mapping(idu_domain, idu_hwirq));
 }

 static int idu_irq_map(struct irq_domain *d, unsigned int virq, irq_hw_number_t hwirq)
···
 	struct irq_domain *domain;
 	/* Read IDU BCR to confirm nr_irqs */
 	int nr_irqs = of_irq_count(intc);
-	int i, irq;
+	int i, virq;
 	struct mcip_bcr mp;

 	READ_BCR(ARC_REG_MCIP_BCR, mp);
···
 	 * however we need it to get the parent virq and set IDU handler
 	 * as first level isr
 	 */
-	irq = irq_of_parse_and_map(intc, i);
+	virq = irq_of_parse_and_map(intc, i);
 	if (!i)
-		idu_first_irq = irq;
+		idu_first_hwirq = irqd_to_hwirq(irq_get_irq_data(virq));

-	irq_set_chained_handler_and_data(irq, idu_cascade_isr, domain);
+	irq_set_chained_handler_and_data(virq, idu_cascade_isr, domain);
 	}

 	__mcip_cmd(CMD_IDU_ENABLE, 0);
+11-9
arch/arc/kernel/process.c
···

 SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
 {
-	int uval;
-	int ret;
+	struct pt_regs *regs = current_pt_regs();
+	int uval = -EFAULT;

 	/*
 	 * This is only for old cores lacking LLOCK/SCOND, which by defintion
···
 	 */
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_SMP));

+	/* Z indicates to userspace if operation succeded */
+	regs->status32 &= ~STATUS_Z_MASK;
+
 	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
 		return -EFAULT;

 	preempt_disable();

-	ret = __get_user(uval, uaddr);
-	if (ret)
+	if (__get_user(uval, uaddr))
 		goto done;

-	if (uval != expected)
-		ret = -EAGAIN;
-	else
-		ret = __put_user(new, uaddr);
+	if (uval == expected) {
+		if (!__put_user(new, uaddr))
+			regs->status32 |= STATUS_Z_MASK;
+	}

 done:
 	preempt_enable();

-	return ret;
+	return uval;
 }

 void arch_cpu_idle(void)
+15-8
arch/arc/kernel/smp.c
···
 #include <linux/atomic.h>
 #include <linux/cpumask.h>
 #include <linux/reboot.h>
+#include <linux/irqdomain.h>
 #include <asm/processor.h>
 #include <asm/setup.h>
 #include <asm/mach_desc.h>
···
 	int i;

 	/*
-	 * Initialise the present map, which describes the set of CPUs
-	 * actually populated at the present time.
+	 * if platform didn't set the present map already, do it now
+	 * boot cpu is set to present already by init/main.c
 	 */
-	for (i = 0; i < max_cpus; i++)
-		set_cpu_present(i, true);
+	if (num_present_cpus() <= 1) {
+		for (i = 0; i < max_cpus; i++)
+			set_cpu_present(i, true);
+	}
 }

 void __init smp_cpus_done(unsigned int max_cpus)
···
  */
 static DEFINE_PER_CPU(int, ipi_dev);

-int smp_ipi_irq_setup(int cpu, int irq)
+int smp_ipi_irq_setup(int cpu, irq_hw_number_t hwirq)
 {
 	int *dev = per_cpu_ptr(&ipi_dev, cpu);
+	unsigned int virq = irq_find_mapping(NULL, hwirq);
+
+	if (!virq)
+		panic("Cannot find virq for root domain and hwirq=%lu", hwirq);

 	/* Boot cpu calls request, all call enable */
 	if (!cpu) {
 		int rc;

-		rc = request_percpu_irq(irq, do_IPI, "IPI Interrupt", dev);
+		rc = request_percpu_irq(virq, do_IPI, "IPI Interrupt", dev);
 		if (rc)
-			panic("Percpu IRQ request failed for %d\n", irq);
+			panic("Percpu IRQ request failed for %u\n", virq);
 	}

-	enable_percpu_irq(irq, 0);
+	enable_percpu_irq(virq, 0);

 	return 0;
 }
+11-8
arch/arc/kernel/time.c
···
 		cycle_t  full;
 	} stamp;

-
-	__asm__ __volatile(
-	"1:				\n"
-	"	lr	%0, [AUX_RTC_LOW]	\n"
-	"	lr	%1, [AUX_RTC_HIGH]	\n"
-	"	lr	%2, [AUX_RTC_CTRL]	\n"
-	"	bbit0.nt %2, 31, 1b	\n"
-	: "=r" (stamp.low), "=r" (stamp.high), "=r" (status));
+	/*
+	 * hardware has an internal state machine which tracks readout of
+	 * low/high and updates the CTRL.status if
+	 *  - interrupt/exception taken between the two reads
+	 *  - high increments after low has been read
+	 */
+	do {
+		stamp.low = read_aux_reg(AUX_RTC_LOW);
+		stamp.high = read_aux_reg(AUX_RTC_HIGH);
+		status = read_aux_reg(AUX_RTC_CTRL);
+	} while (!(status & _BITUL(31)));

 	return stamp.full;
 }
+26
arch/arc/mm/dma.c
···
 	__free_pages(page, get_order(size));
 }

+static int arc_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+			void *cpu_addr, dma_addr_t dma_addr, size_t size,
+			unsigned long attrs)
+{
+	unsigned long user_count = vma_pages(vma);
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned long pfn = __phys_to_pfn(plat_dma_to_phys(dev, dma_addr));
+	unsigned long off = vma->vm_pgoff;
+	int ret = -ENXIO;
+
+	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
+		return ret;
+
+	if (off < count && user_count <= (count - off)) {
+		ret = remap_pfn_range(vma, vma->vm_start,
+				      pfn + off,
+				      user_count << PAGE_SHIFT,
+				      vma->vm_page_prot);
+	}
+
+	return ret;
+}
+
 /*
  * streaming DMA Mapping API...
  * CPU accesses page via normal paddr, thus needs to explicitly made
···
 struct dma_map_ops arc_dma_ops = {
 	.alloc			= arc_dma_alloc,
 	.free			= arc_dma_free,
+	.mmap			= arc_dma_mmap,
 	.map_page		= arc_dma_map_page,
 	.map_sg			= arc_dma_map_sg,
 	.sync_single_for_device	= arc_dma_sync_single_for_device,
···
 	/* VTTBR value associated with below pgd and vmid */
 	u64    vttbr;

+	/* The last vcpu id that ran on each physical CPU */
+	int __percpu *last_vcpu_ran;
+
 	/* Timer */
 	struct arch_timer_kvm	timer;
···
 	dump_mem("", "Exception stack", frame + 4, frame + 4 + sizeof(struct pt_regs));
 }

+void dump_backtrace_stm(u32 *stack, u32 instruction)
+{
+	char str[80], *p;
+	unsigned int x;
+	int reg;
+
+	for (reg = 10, x = 0, p = str; reg >= 0; reg--) {
+		if (instruction & BIT(reg)) {
+			p += sprintf(p, " r%d:%08x", reg, *stack--);
+			if (++x == 6) {
+				x = 0;
+				p = str;
+				printk("%s\n", str);
+			}
+		}
+	}
+	if (p != str)
+		printk("%s\n", str);
+}
+
 #ifndef CONFIG_ARM_UNWIND
 /*
  * Stack pointers should always be within the kernels view of
+5
arch/arm/kernel/vmlinux-xip.lds.S
···
  * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
  */

+/* No __ro_after_init data in the .rodata section - which will always be ro */
+#define RO_AFTER_INIT_DATA
+
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/cache.h>
 #include <asm/thread_info.h>
···
 		ARM_EXIT_KEEP(EXIT_DATA)
 		. = ALIGN(PAGE_SIZE);
 		__init_end = .;
+
+		*(.data..ro_after_init)

 		NOSAVE_DATA
 		CACHELINE_ALIGNED_DATA(L1_CACHE_BYTES)
+26-1
arch/arm/kvm/arm.c
···
  */
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
-	int ret = 0;
+	int ret, cpu;

 	if (type)
 		return -EINVAL;
+
+	kvm->arch.last_vcpu_ran = alloc_percpu(typeof(*kvm->arch.last_vcpu_ran));
+	if (!kvm->arch.last_vcpu_ran)
+		return -ENOMEM;
+
+	for_each_possible_cpu(cpu)
+		*per_cpu_ptr(kvm->arch.last_vcpu_ran, cpu) = -1;

 	ret = kvm_alloc_stage2_pgd(kvm);
 	if (ret)
···
 out_free_stage2_pgd:
 	kvm_free_stage2_pgd(kvm);
 out_fail_alloc:
+	free_percpu(kvm->arch.last_vcpu_ran);
+	kvm->arch.last_vcpu_ran = NULL;
 	return ret;
 }

···
 void kvm_arch_destroy_vm(struct kvm *kvm)
 {
 	int i;
+
+	free_percpu(kvm->arch.last_vcpu_ran);
+	kvm->arch.last_vcpu_ran = NULL;

 	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
 		if (kvm->vcpus[i]) {
···

 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
+	int *last_ran;
+
+	last_ran = this_cpu_ptr(vcpu->kvm->arch.last_vcpu_ran);
+
+	/*
+	 * We might get preempted before the vCPU actually runs, but
+	 * over-invalidation doesn't affect correctness.
+	 */
+	if (*last_ran != vcpu->vcpu_id) {
+		kvm_call_hyp(__kvm_tlb_flush_local_vmid, vcpu);
+		*last_ran = vcpu->vcpu_id;
+	}
+
 	vcpu->cpu = cpu;
 	vcpu->arch.host_cpu_context = this_cpu_ptr(kvm_host_cpu_state);
···

 #define OMAP3_SHOW_FEATURE(feat)		\
 	if (omap3_has_ ##feat())		\
-		printk(#feat" ");
+		n += scnprintf(buf + n, sizeof(buf) - n, #feat " ");

 static void __init omap3_cpuinfo(void)
 {
 	const char *cpu_name;
+	char buf[64];
+	int n = 0;
+
+	memset(buf, 0, sizeof(buf));

 	/*
 	 * OMAP3430 and OMAP3530 are assumed to be same.
···
 		cpu_name = "OMAP3503";
 	}

-	sprintf(soc_name, "%s", cpu_name);
+	scnprintf(soc_name, sizeof(soc_name), "%s", cpu_name);

 	/* Print verbose information */
-	pr_info("%s %s (", soc_name, soc_rev);
+	n += scnprintf(buf, sizeof(buf) - n, "%s %s (", soc_name, soc_rev);

 	OMAP3_SHOW_FEATURE(l2cache);
 	OMAP3_SHOW_FEATURE(iva);
···
 	OMAP3_SHOW_FEATURE(neon);
 	OMAP3_SHOW_FEATURE(isp);
 	OMAP3_SHOW_FEATURE(192mhz_clk);
-
-	printk(")\n");
+	if (*(buf + n - 1) == ' ')
+		n--;
+	n += scnprintf(buf + n, sizeof(buf) - n, ")\n");
+	pr_info("%s", buf);
 }

 #define OMAP3_CHECK_FEATURE(status,feat)	\
+3
arch/arm/mach-omap2/prm3xxx.c
···
 	if (has_uart4) {
 		en_uart4_mask = OMAP3630_EN_UART4_MASK;
 		grpsel_uart4_mask = OMAP3630_GRPSEL_UART4_MASK;
+	} else {
+		en_uart4_mask = 0;
+		grpsel_uart4_mask = 0;
 	}

 	/* Enable wakeups in PER */
+6
arch/arm/mach-omap2/voltage.c
···
 		return -ENODATA;
 	}

+	if (!voltdm->volt_data) {
+		pr_err("%s: No voltage data defined for vdd_%s\n",
+			__func__, voltdm->name);
+		return -ENODATA;
+	}
+
 	/* Adjust voltage to the exact voltage from the OPP table */
 	for (i = 0; voltdm->volt_data[i].volt_nominal != 0; i++) {
 		if (voltdm->volt_data[i].volt_nominal >= target_volt) {
···
 	/* VTTBR value associated with above pgd and vmid */
 	u64    vttbr;

+	/* The last vcpu id that ran on each physical CPU */
+	int __percpu *last_vcpu_ran;
+
 	/* The maximum number of vCPUs depends on the used GIC model */
 	int max_vcpus;
+1-1
arch/arm64/include/asm/kvm_mmu.h
···
 	return v;
 }

-#define kern_hyp_va(v) (typeof(v))(__kern_hyp_va((unsigned long)(v)))
+#define kern_hyp_va(v) 	((typeof(v))(__kern_hyp_va((unsigned long)(v))))

 /*
  * We currently only support a 40bit IPA.
···

 /*
  * ARMv8 PMUv3 Performance Events handling code.
- * Common event types.
+ * Common event types (some are defined in asm/perf_event.h).
  */
-
-/* Required events. */
-#define ARMV8_PMUV3_PERFCTR_SW_INCR				0x00
-#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL			0x03
-#define ARMV8_PMUV3_PERFCTR_L1D_CACHE				0x04
-#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED				0x10
-#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES				0x11
-#define ARMV8_PMUV3_PERFCTR_BR_PRED				0x12

 /* At least one of the following is required. */
 #define ARMV8_PMUV3_PERFCTR_INST_RETIRED			0x08
···

 EXC_REAL_BEGIN(system_reset, 0x100, 0x200)
 	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
+	GET_PACA(r13)
+	clrrdi	r13,r13,1 /* Last bit of HSPRG0 is set if waking from winkle */
+	EXCEPTION_PROLOG_PSERIES_PACA(PACA_EXGEN, system_reset_common, EXC_STD,
 				 IDLETEST, 0x100)

 EXC_REAL_END(system_reset, 0x100, 0x200)
···

 #ifdef CONFIG_PPC_P7_NAP
 EXC_COMMON_BEGIN(system_reset_idle_common)
+BEGIN_FTR_SECTION
+	GET_PACA(r13) /* Restore HSPRG0 to get the winkle bit in r13 */
+END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
 	bl	pnv_restore_hyp_resource

 	li	r0,PNV_THREAD_RUNNING
···
 	SET_SCRATCH0(r13)		/* save r13 */
 	/*
 	 * Running native on arch 2.06 or later, we may wakeup from winkle
-	 * inside machine check. If yes, then last bit of HSPGR0 would be set
+	 * inside machine check. If yes, then last bit of HSPRG0 would be set
 	 * to 1. Hence clear it unconditionally.
 	 */
 	GET_PACA(r13)
···
 	/*
 	 * Go back to winkle. Please note that this thread was woken up in
 	 * machine check from winkle and have not restored the per-subcore
-	 * state. Hence before going back to winkle, set last bit of HSPGR0
+	 * state. Hence before going back to winkle, set last bit of HSPRG0
 	 * to 1. This will make sure that if this thread gets woken up
 	 * again at reset vector 0x100 then it will get chance to restore
 	 * the subcore state.
+21-21
arch/powerpc/kernel/process.c
···
 		int instr;

 		if (!(i % 8))
-			printk("\n");
+			pr_cont("\n");

 #if !defined(CONFIG_BOOKE)
 		/* If executing with the IMMU off, adjust pc rather
···

 		if (!__kernel_text_address(pc) ||
 		    probe_kernel_address((unsigned int __user *)pc, instr)) {
-			printk(KERN_CONT "XXXXXXXX ");
+			pr_cont("XXXXXXXX ");
 		} else {
 			if (regs->nip == pc)
-				printk(KERN_CONT "<%08x> ", instr);
+				pr_cont("<%08x> ", instr);
 			else
-				printk(KERN_CONT "%08x ", instr);
+				pr_cont("%08x ", instr);
 		}

 		pc += sizeof(int);
 	}

-	printk("\n");
+	pr_cont("\n");
 }

 struct regbit {
···

 	for (; bits->bit; ++bits)
 		if (val & bits->bit) {
-			printk("%s%s", s, bits->name);
+			pr_cont("%s%s", s, bits->name);
 			s = sep;
 		}
 }
···
 	 *   T: Transactional	(bit 34)
 	 */
 	if (val & (MSR_TM | MSR_TS_S | MSR_TS_T)) {
-		printk(",TM[");
+		pr_cont(",TM[");
 		print_bits(val, msr_tm_bits, "");
-		printk("]");
+		pr_cont("]");
 	}
 }
 #else
···

 static void print_msr_bits(unsigned long val)
 {
-	printk("<");
+	pr_cont("<");
 	print_bits(val, msr_bits, ",");
 	print_tm_bits(val);
-	printk(">");
+	pr_cont(">");
 }

 #ifdef CONFIG_PPC64
···
 	printk("  CR: %08lx  XER: %08lx\n", regs->ccr, regs->xer);
 	trap = TRAP(regs);
 	if ((regs->trap != 0xc00) && cpu_has_feature(CPU_FTR_CFAR))
-		printk("CFAR: "REG" ", regs->orig_gpr3);
+		pr_cont("CFAR: "REG" ", regs->orig_gpr3);
 	if (trap == 0x200 || trap == 0x300 || trap == 0x600)
 #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
-		printk("DEAR: "REG" ESR: "REG" ", regs->dar, regs->dsisr);
+		pr_cont("DEAR: "REG" ESR: "REG" ", regs->dar, regs->dsisr);
 #else
-		printk("DAR: "REG" DSISR: %08lx ", regs->dar, regs->dsisr);
+		pr_cont("DAR: "REG" DSISR: %08lx ", regs->dar, regs->dsisr);
 #endif
 #ifdef CONFIG_PPC64
-		printk("SOFTE: %ld ", regs->softe);
+		pr_cont("SOFTE: %ld ", regs->softe);
 #endif
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 	if (MSR_TM_ACTIVE(regs->msr))
-		printk("\nPACATMSCRATCH: %016llx ", get_paca()->tm_scratch);
+		pr_cont("\nPACATMSCRATCH: %016llx ", get_paca()->tm_scratch);
 #endif

 	for (i = 0;  i < 32;  i++) {
 		if ((i % REGS_PER_LINE) == 0)
-			printk("\nGPR%02d: ", i);
-		printk(REG " ", regs->gpr[i]);
+			pr_cont("\nGPR%02d: ", i);
+		pr_cont(REG " ", regs->gpr[i]);
 		if (i == LAST_VOLATILE && !FULL_REGS(regs))
 			break;
 	}
-	printk("\n");
+	pr_cont("\n");
 #ifdef CONFIG_KALLSYMS
 	/*
 	 * Lookup NIP late so we have the best change of getting the
···
 		printk("["REG"] ["REG"] %pS", sp, ip, (void *)ip);
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 		if ((ip == rth) && curr_frame >= 0) {
-			printk(" (%pS)",
+			pr_cont(" (%pS)",
 			       (void *)current->ret_stack[curr_frame].ret);
 			curr_frame--;
 		}
 #endif
 		if (firstframe)
-			printk(" (unreliable)");
-		printk("\n");
+			pr_cont(" (unreliable)");
+		pr_cont("\n");
 	}
 	firstframe = 0;
+14-6
arch/powerpc/kernel/setup_64.c
···
 	if (firmware_has_feature(FW_FEATURE_OPAL))
 		opal_configure_cores();

-	/* Enable AIL if supported, and we are in hypervisor mode */
-	if (early_cpu_has_feature(CPU_FTR_HVMODE) &&
-	    early_cpu_has_feature(CPU_FTR_ARCH_207S)) {
-		unsigned long lpcr = mfspr(SPRN_LPCR);
-		mtspr(SPRN_LPCR, lpcr | LPCR_AIL_3);
-	}
+	/* AIL on native is done in cpu_ready_for_interrupts() */
 	}
 }

 static void cpu_ready_for_interrupts(void)
 {
+	/*
+	 * Enable AIL if supported, and we are in hypervisor mode. This
+	 * is called once for every processor.
+	 *
+	 * If we are not in hypervisor mode the job is done once for
+	 * the whole partition in configure_exceptions().
+	 */
+	if (early_cpu_has_feature(CPU_FTR_HVMODE) &&
+	    early_cpu_has_feature(CPU_FTR_ARCH_207S)) {
+		unsigned long lpcr = mfspr(SPRN_LPCR);
+		mtspr(SPRN_LPCR, lpcr | LPCR_AIL_3);
+	}
+
 	/* Set IR and DR in PACA MSR */
 	get_paca()->kernel_msr = MSR_KERNEL;
 }
+4
arch/powerpc/mm/hash_utils_64.c
···
 {
 	/* Initialize hash table for that CPU */
 	if (!firmware_has_feature(FW_FEATURE_LPAR)) {
+
+		if (cpu_has_feature(CPU_FTR_POWER9_DD1))
+			update_hid_for_hash();
+
 		if (!cpu_has_feature(CPU_FTR_ARCH_300))
 			mtspr(SPRN_SDR1, _SDR1);
 		else
+4
arch/powerpc/mm/pgtable-radix.c
···
 	 * update partition table control register and UPRT
 	 */
 	if (!firmware_has_feature(FW_FEATURE_LPAR)) {
+
+		if (cpu_has_feature(CPU_FTR_POWER9_DD1))
+			update_hid_for_radix();
+
 		lpcr = mfspr(SPRN_LPCR);
 		mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
+4
arch/powerpc/mm/tlb-radix.c
···
 	for (set = 0; set < POWER9_TLB_SETS_RADIX ; set++) {
 		__tlbiel_pid(pid, set, ric);
 	}
+	if (cpu_has_feature(CPU_FTR_POWER9_DD1))
+		asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
 	return;
 }
···
 	asm volatile(PPC_TLBIEL(%0, %4, %3, %2, %1)
 		     : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory");
 	asm volatile("ptesync": : :"memory");
+	if (cpu_has_feature(CPU_FTR_POWER9_DD1))
+		asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
 }

 static inline void _tlbie_va(unsigned long va, unsigned long pid,
···
 	dma_addr_t dma_addr_base, dma_addr;
 	int flags = ZPCI_PTE_VALID;
 	struct scatterlist *s;
-	unsigned long pa;
+	unsigned long pa = 0;
 	int ret;

 	size = PAGE_ALIGN(size);
+23
arch/sparc/Kconfig
···
 	select ARCH_HAS_SG_CHAIN
 	select CPU_NO_EFFICIENT_FFS
 	select HAVE_ARCH_HARDENED_USERCOPY
+	select PROVE_LOCKING_SMALL if PROVE_LOCKING

 config SPARC32
 	def_bool !64BIT
···
 config ARCH_PROC_KCORE_TEXT
 	def_bool y
+
+config ARCH_ATU
+	bool
+	default y if SPARC64
+
+config ARCH_DMA_ADDR_T_64BIT
+	bool
+	default y if ARCH_ATU

 config IOMMU_HELPER
 	bool
···

 config ARCH_SPARSEMEM_DEFAULT
 	def_bool y if SPARC64
+
+config FORCE_MAX_ZONEORDER
+	int "Maximum zone order"
+	default "13"
+	help
+	  The kernel memory allocator divides physically contiguous memory
+	  blocks into "zones", where each zone is a power of two number of
+	  pages. This option selects the largest power of two that the kernel
+	  keeps in the memory allocator. If you need to allocate very large
+	  blocks of physically contiguous memory, then you may need to
+	  increase this value.
+
+	  This config option is actually maximum order plus one. For example,
+	  a value of 13 means that the largest free memory block is 2^12 pages.

 source "mm/Kconfig"
+343
arch/sparc/include/asm/hypervisor.h
···
  */
 #define HV_FAST_PCI_MSG_SETVALID	0xd3
 
+/* PCI IOMMU v2 definitions and services
+ *
+ * While the PCI IO definitions above are valid, IOMMU v2 adds new PCI IO
+ * definitions and services.
+ *
+ * CTE		Clump Table Entry. First level table entry in the ATU.
+ *
+ * pci_device_list
+ *		A 32-bit aligned list of pci_devices.
+ *
+ * pci_device_listp
+ *		real address of a pci_device_list. 32-bit aligned.
+ *
+ * iotte	IOMMU translation table entry.
+ *
+ * iotte_attributes
+ *		IO Attributes for IOMMU v2 mappings. In addition to
+ *		read, write IOMMU v2 supports relax ordering
+ *
+ * io_page_list	A 64-bit aligned list of real addresses. Each real
+ *		address in an io_page_list must be properly aligned
+ *		to the pagesize of the given IOTSB.
+ *
+ * io_page_list_p	Real address of an io_page_list, 64-bit aligned.
+ *
+ * IOTSB	IO Translation Storage Buffer. An aligned table of
+ *		IOTTEs. Each IOTSB has a pagesize, table size, and
+ *		virtual address associated with it that must match
+ *		a pagesize and table size supported by the underlying
+ *		hardware implementation. The alignment requirements
+ *		for an IOTSB depend on the pagesize used for that IOTSB.
+ *		Each IOTTE in an IOTSB maps one pagesize-sized page.
+ *		The size of the IOTSB dictates how large of a virtual
+ *		address space the IOTSB is capable of mapping.
+ *
+ * iotsb_handle	An opaque identifier for an IOTSB. A devhandle plus
+ *		iotsb_handle represents a binding of an IOTSB to a
+ *		PCI root complex.
+ *
+ * iotsb_index	Zero-based IOTTE number within an IOTSB.
+ */
+
+/* The index_count argument consists of two fields:
+ * bits 63:48 #iottes and bits 47:0 iotsb_index
+ */
+#define HV_PCI_IOTSB_INDEX_COUNT(__iottes, __iotsb_index) \
+	(((u64)(__iottes) << 48UL) | ((u64)(__iotsb_index)))
+
+/* pci_iotsb_conf()
+ * TRAP:	HV_FAST_TRAP
+ * FUNCTION:	HV_FAST_PCI_IOTSB_CONF
+ * ARG0:	devhandle
+ * ARG1:	r_addr
+ * ARG2:	size
+ * ARG3:	pagesize
+ * ARG4:	iova
+ * RET0:	status
+ * RET1:	iotsb_handle
+ * ERRORS:	EINVAL		Invalid devhandle, size, iova, or pagesize
+ *		EBADALIGN	r_addr is not properly aligned
+ *		ENORADDR	r_addr is not a valid real address
+ *		ETOOMANY	No further IOTSBs may be configured
+ *		EBUSY		Duplicate devhandle, r_addr, iova combination
+ *
+ * Create an IOTSB suitable for the PCI root complex identified by devhandle,
+ * for the DMA virtual address defined by the argument iova.
+ *
+ * r_addr is the properly aligned base address of the IOTSB and size is the
+ * IOTSB (table) size in bytes. The IOTSB is required to be zeroed prior to
+ * being configured. If it contains any values other than zeros then the
+ * behavior is undefined.
+ *
+ * pagesize is the size of each page in the IOTSB. Note that the combination of
+ * size (table size) and pagesize must be valid.
+ *
+ * iova is the DMA virtual address this IOTSB will map.
+ *
+ * If successful, the opaque 64-bit handle iotsb_handle is returned in ret1.
+ * Once configured, privileged access to the IOTSB memory is prohibited and
+ * creates undefined behavior. The only permitted access is indirect via these
+ * services.
+ */
+#define HV_FAST_PCI_IOTSB_CONF		0x190
+
+/* pci_iotsb_info()
+ * TRAP:	HV_FAST_TRAP
+ * FUNCTION:	HV_FAST_PCI_IOTSB_INFO
+ * ARG0:	devhandle
+ * ARG1:	iotsb_handle
+ * RET0:	status
+ * RET1:	r_addr
+ * RET2:	size
+ * RET3:	pagesize
+ * RET4:	iova
+ * RET5:	#bound
+ * ERRORS:	EINVAL	Invalid devhandle or iotsb_handle
+ *
+ * This service returns configuration information about an IOTSB previously
+ * created with pci_iotsb_conf.
+ *
+ * iotsb_handle value 0 may be used with this service to inquire about the
+ * legacy IOTSB that may or may not exist. If the service succeeds, the return
+ * values describe the legacy IOTSB and I/O virtual addresses mapped by that
+ * table. However, the table base address r_addr may contain the value -1 which
+ * indicates a memory range that cannot be accessed or be reclaimed.
+ *
+ * The return value #bound contains the number of PCI devices that iotsb_handle
+ * is currently bound to.
+ */
+#define HV_FAST_PCI_IOTSB_INFO		0x191
+
+/* pci_iotsb_unconf()
+ * TRAP:	HV_FAST_TRAP
+ * FUNCTION:	HV_FAST_PCI_IOTSB_UNCONF
+ * ARG0:	devhandle
+ * ARG1:	iotsb_handle
+ * RET0:	status
+ * ERRORS:	EINVAL	Invalid devhandle or iotsb_handle
+ *		EBUSY	The IOTSB is bound and may not be unconfigured
+ *
+ * This service unconfigures the IOTSB identified by the devhandle and
+ * iotsb_handle arguments, previously created with pci_iotsb_conf.
+ * The IOTSB must not be currently bound to any device or the service will
+ * fail.
+ *
+ * If the call succeeds, iotsb_handle is no longer valid.
+ */
+#define HV_FAST_PCI_IOTSB_UNCONF	0x192
+
+/* pci_iotsb_bind()
+ * TRAP:	HV_FAST_TRAP
+ * FUNCTION:	HV_FAST_PCI_IOTSB_BIND
+ * ARG0:	devhandle
+ * ARG1:	iotsb_handle
+ * ARG2:	pci_device
+ * RET0:	status
+ * ERRORS:	EINVAL	Invalid devhandle, iotsb_handle, or pci_device
+ *		EBUSY	A PCI function is already bound to an IOTSB at the same
+ *			address range as specified by devhandle, iotsb_handle.
+ *
+ * This service binds the PCI function specified by the argument pci_device to
+ * the IOTSB specified by the arguments devhandle and iotsb_handle.
+ *
+ * The PCI device function is bound to the specified IOTSB with the IOVA range
+ * specified when the IOTSB was configured via pci_iotsb_conf. If the function
+ * is already bound then it is unbound first.
+ */
+#define HV_FAST_PCI_IOTSB_BIND		0x193
+
+/* pci_iotsb_unbind()
+ * TRAP:	HV_FAST_TRAP
+ * FUNCTION:	HV_FAST_PCI_IOTSB_UNBIND
+ * ARG0:	devhandle
+ * ARG1:	iotsb_handle
+ * ARG2:	pci_device
+ * RET0:	status
+ * ERRORS:	EINVAL	Invalid devhandle, iotsb_handle, or pci_device
+ *		ENOMAP	The PCI function was not bound to the specified IOTSB
+ *
+ * This service unbinds the PCI device specified by the argument pci_device
+ * from the IOTSB identified by the arguments devhandle and iotsb_handle.
+ *
+ * If the PCI device is not bound to the specified IOTSB then this service will
+ * fail with status ENOMAP.
+ */
+#define HV_FAST_PCI_IOTSB_UNBIND	0x194
+
+/* pci_iotsb_get_binding()
+ * TRAP:	HV_FAST_TRAP
+ * FUNCTION:	HV_FAST_PCI_IOTSB_GET_BINDING
+ * ARG0:	devhandle
+ * ARG1:	pci_device
+ * ARG2:	iova
+ * RET0:	status
+ * RET1:	iotsb_handle
+ * ERRORS:	EINVAL	Invalid devhandle, pci_device, or iova
+ *		ENOMAP	The PCI function is not bound to an IOTSB at iova
+ *
+ * This service returns the IOTSB binding, iotsb_handle, for a given pci_device
+ * and DMA virtual address, iova.
+ *
+ * iova must be the base address of a DMA virtual address range as defined by
+ * the iommu-address-ranges property in the root complex device node defined
+ * by the argument devhandle.
+ */
+#define HV_FAST_PCI_IOTSB_GET_BINDING	0x195
+
+/* pci_iotsb_map()
+ * TRAP:	HV_FAST_TRAP
+ * FUNCTION:	HV_FAST_PCI_IOTSB_MAP
+ * ARG0:	devhandle
+ * ARG1:	iotsb_handle
+ * ARG2:	index_count
+ * ARG3:	iotte_attributes
+ * ARG4:	io_page_list_p
+ * RET0:	status
+ * RET1:	#mapped
+ * ERRORS:	EINVAL		Invalid devhandle, iotsb_handle, #iottes,
+ *				iotsb_index or iotte_attributes
+ *		EBADALIGN	Improperly aligned io_page_list_p or I/O page
+ *				address in the I/O page list.
+ *		ENORADDR	Invalid io_page_list_p or I/O page address in
+ *				the I/O page list.
+ *
+ * This service creates and flushes mappings in the IOTSB defined by the
+ * arguments devhandle, iotsb.
+ *
+ * The index_count argument consists of two fields. Bits 63:48 contain #iottes
+ * and bits 47:0 contain iotsb_index.
+ *
+ * The first mapping is created in the IOTSB index specified by iotsb_index.
+ * Subsequent mappings are created at iotsb_index+1 and so on.
+ *
+ * The attributes of each mapping are defined by the argument iotte_attributes.
+ *
+ * The io_page_list_p specifies the real address of the 64-bit-aligned list of
+ * #iottes I/O page addresses. Each page address must be a properly aligned
+ * real address of a page to be mapped in the IOTSB. The first entry in the I/O
+ * page list contains the real address of the first page, the 2nd entry for the
+ * 2nd page, and so on.
+ *
+ * #iottes must be greater than zero.
+ *
+ * The return value #mapped is the actual number of mappings created, which may
+ * be less than or equal to the argument #iottes. If the function returns
+ * successfully with a #mapped value less than the requested #iottes then the
+ * caller should continue to invoke the service with updated iotsb_index,
+ * #iottes, and io_page_list_p arguments until all pages are mapped.
+ *
+ * This service must not be used to demap a mapping. In other words, all
+ * mappings must be valid and have one or both of the RW attribute bits set.
+ *
+ * Note:
+ * It is implementation-defined whether I/O page real address validity checking
+ * is done at the time mappings are established or deferred until they are
+ * accessed.
+ */
+#define HV_FAST_PCI_IOTSB_MAP		0x196
+
+/* pci_iotsb_map_one()
+ * TRAP:	HV_FAST_TRAP
+ * FUNCTION:	HV_FAST_PCI_IOTSB_MAP_ONE
+ * ARG0:	devhandle
+ * ARG1:	iotsb_handle
+ * ARG2:	iotsb_index
+ * ARG3:	iotte_attributes
+ * ARG4:	r_addr
+ * RET0:	status
+ * ERRORS:	EINVAL		Invalid devhandle, iotsb_handle, iotsb_index
+ *				or iotte_attributes
+ *		EBADALIGN	Improperly aligned r_addr
+ *		ENORADDR	Invalid r_addr
+ *
+ * This service creates and flushes a single mapping in the IOTSB defined by the
+ * arguments devhandle, iotsb.
+ *
+ * The mapping for the page at r_addr is created at the IOTSB index specified by
+ * iotsb_index with the attributes iotte_attributes.
+ *
+ * This service must not be used to demap a mapping. In other words, the mapping
+ * must be valid and have one or both of the RW attribute bits set.
+ *
+ * Note:
+ * It is implementation-defined whether I/O page real address validity checking
+ * is done at the time mappings are established or deferred until they are
+ * accessed.
+ */
+#define HV_FAST_PCI_IOTSB_MAP_ONE	0x197
+
+/* pci_iotsb_demap()
+ * TRAP:	HV_FAST_TRAP
+ * FUNCTION:	HV_FAST_PCI_IOTSB_DEMAP
+ * ARG0:	devhandle
+ * ARG1:	iotsb_handle
+ * ARG2:	iotsb_index
+ * ARG3:	#iottes
+ * RET0:	status
+ * RET1:	#unmapped
+ * ERRORS:	EINVAL	Invalid devhandle, iotsb_handle, iotsb_index or #iottes
+ *
+ * This service unmaps and flushes up to #iottes mappings starting at index
+ * iotsb_index from the IOTSB defined by the arguments devhandle, iotsb.
+ *
+ * #iottes must be greater than zero.
+ *
+ * The actual number of IOTTEs unmapped is returned in #unmapped and may be less
+ * than or equal to the requested number of IOTTEs, #iottes.
+ *
+ * If #unmapped is less than #iottes, the caller should continue to invoke this
+ * service with updated iotsb_index and #iottes arguments until all pages are
+ * demapped.
+ */
+#define HV_FAST_PCI_IOTSB_DEMAP		0x198
+
+/* pci_iotsb_getmap()
+ * TRAP:	HV_FAST_TRAP
+ * FUNCTION:	HV_FAST_PCI_IOTSB_GETMAP
+ * ARG0:	devhandle
+ * ARG1:	iotsb_handle
+ * ARG2:	iotsb_index
+ * RET0:	status
+ * RET1:	r_addr
+ * RET2:	iotte_attributes
+ * ERRORS:	EINVAL	Invalid devhandle, iotsb_handle, or iotsb_index
+ *		ENOMAP	No mapping was found
+ *
+ * This service returns the mapping specified by index iotsb_index from the
+ * IOTSB defined by the arguments devhandle, iotsb.
+ *
+ * Upon success, the real address of the mapping shall be returned in
+ * r_addr and the IOTTE mapping attributes shall be returned in
+ * iotte_attributes.
+ *
+ * The return value iotte_attributes may not include optional features used in
+ * the call to create the mapping.
+ */
+#define HV_FAST_PCI_IOTSB_GETMAP	0x199
+
+/* pci_iotsb_sync_mappings()
+ * TRAP:	HV_FAST_TRAP
+ * FUNCTION:	HV_FAST_PCI_IOTSB_SYNC_MAPPINGS
+ * ARG0:	devhandle
+ * ARG1:	iotsb_handle
+ * ARG2:	iotsb_index
+ * ARG3:	#iottes
+ * RET0:	status
+ * RET1:	#synced
+ * ERRORS:	EINVAL	Invalid devhandle, iotsb_handle, iotsb_index, or #iottes
+ *
+ * This service synchronizes #iottes mappings starting at index iotsb_index in
+ * the IOTSB defined by the arguments devhandle, iotsb.
+ *
+ * #iottes must be greater than zero.
+ *
+ * The actual number of IOTTEs synchronized is returned in #synced, which may
+ * be less than or equal to the requested number, #iottes.
+ *
+ * If, upon a successful return, #synced is less than #iottes, the caller
+ * should continue to invoke this service with updated iotsb_index and #iottes
+ * arguments until all pages are synchronized.
+ */
+#define HV_FAST_PCI_IOTSB_SYNC_MAPPINGS	0x19a
+
 /* Logical Domain Channel services.  */
 
 #define LDC_CHANNEL_DOWN	0
···
 #define HV_GRP_SDIO		0x0108
 #define HV_GRP_SDIO_ERR		0x0109
 #define HV_GRP_REBOOT_DATA	0x0110
+#define HV_GRP_ATU		0x0111
 #define HV_GRP_M7_PERF		0x0114
 #define HV_GRP_NIAG_PERF	0x0200
 #define HV_GRP_FIRE_PERF	0x0201
arch/sparc/kernel/pci_sun4v.c
···
 	{ .major = 1, .minor = 1 },
 };
 
+static unsigned long vatu_major = 1;
+static unsigned long vatu_minor = 1;
+
 #define PGLIST_NENTS	(PAGE_SIZE / sizeof(u64))
 
 struct iommu_batch {
···
 }
 
 /* Interrupts must be disabled.  */
-static long iommu_batch_flush(struct iommu_batch *p)
+static long iommu_batch_flush(struct iommu_batch *p, u64 mask)
 {
 	struct pci_pbm_info *pbm = p->dev->archdata.host_controller;
+	u64 *pglist = p->pglist;
+	u64 index_count;
 	unsigned long devhandle = pbm->devhandle;
 	unsigned long prot = p->prot;
 	unsigned long entry = p->entry;
-	u64 *pglist = p->pglist;
 	unsigned long npages = p->npages;
+	unsigned long iotsb_num;
+	unsigned long ret;
+	long num;
 
 	/* VPCI maj=1, min=[0,1] only supports read and write */
 	if (vpci_major < 2)
 		prot &= (HV_PCI_MAP_ATTR_READ | HV_PCI_MAP_ATTR_WRITE);
 
 	while (npages != 0) {
-		long num;
-
-		num = pci_sun4v_iommu_map(devhandle, HV_PCI_TSBID(0, entry),
-					  npages, prot, __pa(pglist));
-		if (unlikely(num < 0)) {
-			if (printk_ratelimit())
-				printk("iommu_batch_flush: IOMMU map of "
-				       "[%08lx:%08llx:%lx:%lx:%lx] failed with "
-				       "status %ld\n",
-				       devhandle, HV_PCI_TSBID(0, entry),
-				       npages, prot, __pa(pglist), num);
-			return -1;
+		if (mask <= DMA_BIT_MASK(32)) {
+			num = pci_sun4v_iommu_map(devhandle,
+						  HV_PCI_TSBID(0, entry),
+						  npages,
+						  prot,
+						  __pa(pglist));
+			if (unlikely(num < 0)) {
+				pr_err_ratelimited("%s: IOMMU map of [%08lx:%08llx:%lx:%lx:%lx] failed with status %ld\n",
+						   __func__,
+						   devhandle,
+						   HV_PCI_TSBID(0, entry),
+						   npages, prot, __pa(pglist),
+						   num);
+				return -1;
+			}
+		} else {
+			index_count = HV_PCI_IOTSB_INDEX_COUNT(npages, entry);
+			iotsb_num = pbm->iommu->atu->iotsb->iotsb_num;
+			ret = pci_sun4v_iotsb_map(devhandle,
+						  iotsb_num,
+						  index_count,
+						  prot,
+						  __pa(pglist),
+						  &num);
+			if (unlikely(ret != HV_EOK)) {
+				pr_err_ratelimited("%s: ATU map of [%08lx:%lx:%llx:%lx:%lx] failed with status %ld\n",
+						   __func__,
+						   devhandle, iotsb_num,
+						   index_count, prot,
+						   __pa(pglist), ret);
+				return -1;
+			}
 		}
-
 		entry += num;
 		npages -= num;
 		pglist += num;
···
 	return 0;
 }
 
-static inline void iommu_batch_new_entry(unsigned long entry)
+static inline void iommu_batch_new_entry(unsigned long entry, u64 mask)
 {
 	struct iommu_batch *p = this_cpu_ptr(&iommu_batch);
 
 	if (p->entry + p->npages == entry)
 		return;
 	if (p->entry != ~0UL)
-		iommu_batch_flush(p);
+		iommu_batch_flush(p, mask);
 	p->entry = entry;
 }
 
 /* Interrupts must be disabled.  */
-static inline long iommu_batch_add(u64 phys_page)
+static inline long iommu_batch_add(u64 phys_page, u64 mask)
 {
 	struct iommu_batch *p = this_cpu_ptr(&iommu_batch);
 
···
 
 	p->pglist[p->npages++] = phys_page;
 	if (p->npages == PGLIST_NENTS)
-		return iommu_batch_flush(p);
+		return iommu_batch_flush(p, mask);
 
 	return 0;
 }
 
 /* Interrupts must be disabled.  */
-static inline long iommu_batch_end(void)
+static inline long iommu_batch_end(u64 mask)
 {
 	struct iommu_batch *p = this_cpu_ptr(&iommu_batch);
 
 	BUG_ON(p->npages >= PGLIST_NENTS);
 
-	return iommu_batch_flush(p);
+	return iommu_batch_flush(p, mask);
 }
 
 static void *dma_4v_alloc_coherent(struct device *dev, size_t size,
 				   dma_addr_t *dma_addrp, gfp_t gfp,
 				   unsigned long attrs)
 {
+	u64 mask;
 	unsigned long flags, order, first_page, npages, n;
 	unsigned long prot = 0;
 	struct iommu *iommu;
+	struct atu *atu;
+	struct iommu_map_table *tbl;
 	struct page *page;
 	void *ret;
 	long entry;
···
 	memset((char *)first_page, 0, PAGE_SIZE << order);
 
 	iommu = dev->archdata.iommu;
+	atu = iommu->atu;
 
-	entry = iommu_tbl_range_alloc(dev, &iommu->tbl, npages, NULL,
+	mask = dev->coherent_dma_mask;
+	if (mask <= DMA_BIT_MASK(32))
+		tbl = &iommu->tbl;
+	else
+		tbl = &atu->tbl;
+
+	entry = iommu_tbl_range_alloc(dev, tbl, npages, NULL,
 				      (unsigned long)(-1), 0);
 
 	if (unlikely(entry == IOMMU_ERROR_CODE))
 		goto range_alloc_fail;
 
-	*dma_addrp = (iommu->tbl.table_map_base + (entry << IO_PAGE_SHIFT));
+	*dma_addrp = (tbl->table_map_base + (entry << IO_PAGE_SHIFT));
 	ret = (void *) first_page;
 	first_page = __pa(first_page);
 
···
 		       entry);
 
 	for (n = 0; n < npages; n++) {
-		long err = iommu_batch_add(first_page + (n * PAGE_SIZE));
+		long err = iommu_batch_add(first_page + (n * PAGE_SIZE), mask);
 		if (unlikely(err < 0L))
 			goto iommu_map_fail;
 	}
 
-	if (unlikely(iommu_batch_end() < 0L))
+	if (unlikely(iommu_batch_end(mask) < 0L))
 		goto iommu_map_fail;
 
 	local_irq_restore(flags);
···
 	return ret;
 
 iommu_map_fail:
-	iommu_tbl_range_free(&iommu->tbl, *dma_addrp, npages, IOMMU_ERROR_CODE);
+	iommu_tbl_range_free(tbl, *dma_addrp, npages, IOMMU_ERROR_CODE);
 
 range_alloc_fail:
 	free_pages(first_page, order);
 	return NULL;
 }
 
-static void dma_4v_iommu_demap(void *demap_arg, unsigned long entry,
-			       unsigned long npages)
+unsigned long dma_4v_iotsb_bind(unsigned long devhandle,
+				unsigned long iotsb_num,
+				struct pci_bus *bus_dev)
 {
-	u32 devhandle = *(u32 *)demap_arg;
+	struct pci_dev *pdev;
+	unsigned long err;
+	unsigned int bus;
+	unsigned int device;
+	unsigned int fun;
+
+	list_for_each_entry(pdev, &bus_dev->devices, bus_list) {
+		if (pdev->subordinate) {
+			/* No need to bind pci bridge */
+			dma_4v_iotsb_bind(devhandle, iotsb_num,
+					  pdev->subordinate);
+		} else {
+			bus = bus_dev->number;
+			device = PCI_SLOT(pdev->devfn);
+			fun = PCI_FUNC(pdev->devfn);
+			err = pci_sun4v_iotsb_bind(devhandle, iotsb_num,
+						   HV_PCI_DEVICE_BUILD(bus,
+								       device,
+								       fun));
+
+			/* If bind fails for one device it is going to fail
+			 * for the rest of the devices because we are sharing
+			 * the IOTSB. So in case of failure simply return with
+			 * the error.
+			 */
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+
+static void dma_4v_iommu_demap(struct device *dev, unsigned long devhandle,
+			       dma_addr_t dvma, unsigned long iotsb_num,
+			       unsigned long entry, unsigned long npages)
+{
 	unsigned long num, flags;
+	unsigned long ret;
 
 	local_irq_save(flags);
 	do {
-		num = pci_sun4v_iommu_demap(devhandle,
-					    HV_PCI_TSBID(0, entry),
-					    npages);
-
+		if (dvma <= DMA_BIT_MASK(32)) {
+			num = pci_sun4v_iommu_demap(devhandle,
+						    HV_PCI_TSBID(0, entry),
+						    npages);
+		} else {
+			ret = pci_sun4v_iotsb_demap(devhandle, iotsb_num,
+						    entry, npages, &num);
+			if (unlikely(ret != HV_EOK)) {
+				pr_err_ratelimited("pci_iotsb_demap() failed with error: %ld\n",
+						   ret);
+			}
+		}
 		entry += num;
 		npages -= num;
 	} while (npages != 0);
···
 {
 	struct pci_pbm_info *pbm;
 	struct iommu *iommu;
+	struct atu *atu;
+	struct iommu_map_table *tbl;
 	unsigned long order, npages, entry;
+	unsigned long iotsb_num;
 	u32 devhandle;
 
 	npages = IO_PAGE_ALIGN(size) >> IO_PAGE_SHIFT;
 	iommu = dev->archdata.iommu;
 	pbm = dev->archdata.host_controller;
+	atu = iommu->atu;
 	devhandle = pbm->devhandle;
-	entry = ((dvma - iommu->tbl.table_map_base) >> IO_PAGE_SHIFT);
-	dma_4v_iommu_demap(&devhandle, entry, npages);
-	iommu_tbl_range_free(&iommu->tbl, dvma, npages, IOMMU_ERROR_CODE);
+
+	if (dvma <= DMA_BIT_MASK(32)) {
+		tbl = &iommu->tbl;
+		iotsb_num = 0; /* we don't care for legacy iommu */
+	} else {
+		tbl = &atu->tbl;
+		iotsb_num = atu->iotsb->iotsb_num;
+	}
+	entry = ((dvma - tbl->table_map_base) >> IO_PAGE_SHIFT);
+	dma_4v_iommu_demap(dev, devhandle, dvma, iotsb_num, entry, npages);
+	iommu_tbl_range_free(tbl, dvma, npages, IOMMU_ERROR_CODE);
 	order = get_order(size);
 	if (order < 10)
 		free_pages((unsigned long)cpu, order);
···
 			      unsigned long attrs)
 {
 	struct iommu *iommu;
+	struct atu *atu;
+	struct iommu_map_table *tbl;
+	u64 mask;
 	unsigned long flags, npages, oaddr;
 	unsigned long i, base_paddr;
-	u32 bus_addr, ret;
 	unsigned long prot;
+	dma_addr_t bus_addr, ret;
 	long entry;
 
 	iommu = dev->archdata.iommu;
+	atu = iommu->atu;
 
 	if (unlikely(direction == DMA_NONE))
 		goto bad;
···
 	npages = IO_PAGE_ALIGN(oaddr + sz) - (oaddr & IO_PAGE_MASK);
 	npages >>= IO_PAGE_SHIFT;
 
-	entry = iommu_tbl_range_alloc(dev, &iommu->tbl, npages, NULL,
+	mask = *dev->dma_mask;
+	if (mask <= DMA_BIT_MASK(32))
+		tbl = &iommu->tbl;
+	else
+		tbl = &atu->tbl;
+
+	entry = iommu_tbl_range_alloc(dev, tbl, npages, NULL,
 				      (unsigned long)(-1), 0);
 
 	if (unlikely(entry == IOMMU_ERROR_CODE))
 		goto bad;
 
-	bus_addr = (iommu->tbl.table_map_base + (entry << IO_PAGE_SHIFT));
+	bus_addr = (tbl->table_map_base + (entry << IO_PAGE_SHIFT));
 	ret = bus_addr | (oaddr & ~IO_PAGE_MASK);
 	base_paddr = __pa(oaddr & IO_PAGE_MASK);
 	prot = HV_PCI_MAP_ATTR_READ;
···
 	iommu_batch_start(dev, prot, entry);
 
 	for (i = 0; i < npages; i++, base_paddr += IO_PAGE_SIZE) {
-		long err = iommu_batch_add(base_paddr);
+		long err = iommu_batch_add(base_paddr, mask);
 		if (unlikely(err < 0L))
 			goto iommu_map_fail;
 	}
-	if (unlikely(iommu_batch_end() < 0L))
+	if (unlikely(iommu_batch_end(mask) < 0L))
 		goto iommu_map_fail;
 
 	local_irq_restore(flags);
···
 	return DMA_ERROR_CODE;
 
 iommu_map_fail:
-	iommu_tbl_range_free(&iommu->tbl, bus_addr, npages, IOMMU_ERROR_CODE);
+	iommu_tbl_range_free(tbl, bus_addr, npages, IOMMU_ERROR_CODE);
 	return DMA_ERROR_CODE;
 }
 
···
 {
 	struct pci_pbm_info *pbm;
 	struct iommu *iommu;
+	struct atu *atu;
+	struct iommu_map_table *tbl;
 	unsigned long npages;
+	unsigned long iotsb_num;
 	long entry;
 	u32 devhandle;
 
···
 
 	iommu = dev->archdata.iommu;
 	pbm = dev->archdata.host_controller;
+	atu = iommu->atu;
 	devhandle = pbm->devhandle;
 
 	npages = IO_PAGE_ALIGN(bus_addr + sz) - (bus_addr & IO_PAGE_MASK);
 	npages >>= IO_PAGE_SHIFT;
 	bus_addr &= IO_PAGE_MASK;
-	entry = (bus_addr - iommu->tbl.table_map_base) >> IO_PAGE_SHIFT;
-	dma_4v_iommu_demap(&devhandle, entry, npages);
-	iommu_tbl_range_free(&iommu->tbl, bus_addr, npages, IOMMU_ERROR_CODE);
+
+	if (bus_addr <= DMA_BIT_MASK(32)) {
+		iotsb_num = 0; /* we don't care for legacy iommu */
+		tbl = &iommu->tbl;
+	} else {
+		iotsb_num = atu->iotsb->iotsb_num;
+		tbl = &atu->tbl;
+	}
+	entry = (bus_addr - tbl->table_map_base) >> IO_PAGE_SHIFT;
+	dma_4v_iommu_demap(dev, devhandle, bus_addr, iotsb_num, entry, npages);
+	iommu_tbl_range_free(tbl, bus_addr, npages, IOMMU_ERROR_CODE);
 }
 
 static int dma_4v_map_sg(struct device *dev, struct scatterlist *sglist,
···
 	unsigned long seg_boundary_size;
 	int outcount, incount, i;
 	struct iommu *iommu;
+	struct atu *atu;
+	struct iommu_map_table *tbl;
+	u64 mask;
 	unsigned long base_shift;
 	long err;
 
 	BUG_ON(direction == DMA_NONE);
 
 	iommu = dev->archdata.iommu;
+	atu = iommu->atu;
+
 	if (nelems == 0 || !iommu)
 		return 0;
 
···
 	max_seg_size = dma_get_max_seg_size(dev);
 	seg_boundary_size = ALIGN(dma_get_seg_boundary(dev) + 1,
 				  IO_PAGE_SIZE) >> IO_PAGE_SHIFT;
-	base_shift = iommu->tbl.table_map_base >> IO_PAGE_SHIFT;
+
+	mask = *dev->dma_mask;
+	if (mask <= DMA_BIT_MASK(32))
+		tbl = &iommu->tbl;
+	else
+		tbl = &atu->tbl;
+
+	base_shift = tbl->table_map_base >> IO_PAGE_SHIFT;
+
 	for_each_sg(sglist, s, nelems, i) {
 		unsigned long paddr, npages, entry, out_entry = 0, slen;
 
···
 		/* Allocate iommu entries for that segment */
 		paddr = (unsigned long) SG_ENT_PHYS_ADDRESS(s);
 		npages = iommu_num_pages(paddr, slen, IO_PAGE_SIZE);
-		entry = iommu_tbl_range_alloc(dev, &iommu->tbl, npages,
+		entry = iommu_tbl_range_alloc(dev, tbl, npages,
 					      &handle, (unsigned long)(-1), 0);
 
 		/* Handle failure */
 		if (unlikely(entry == IOMMU_ERROR_CODE)) {
-			if (printk_ratelimit())
-				printk(KERN_INFO "iommu_alloc failed, iommu %p paddr %lx"
-				       " npages %lx\n", iommu, paddr, npages);
+			pr_err_ratelimited("iommu_alloc failed, iommu %p paddr %lx npages %lx\n",
+					   tbl, paddr, npages);
 			goto iommu_map_failed;
 		}
 
-		iommu_batch_new_entry(entry);
+		iommu_batch_new_entry(entry, mask);
 
 		/* Convert entry to a dma_addr_t */
-		dma_addr = iommu->tbl.table_map_base + (entry << IO_PAGE_SHIFT);
+		dma_addr = tbl->table_map_base + (entry << IO_PAGE_SHIFT);
 		dma_addr |= (s->offset & ~IO_PAGE_MASK);
 
 		/* Insert into HW table */
 		paddr &= IO_PAGE_MASK;
 		while (npages--) {
-			err = iommu_batch_add(paddr);
+			err = iommu_batch_add(paddr, mask);
 			if (unlikely(err < 0L))
 				goto iommu_map_failed;
 			paddr += IO_PAGE_SIZE;
···
 		dma_next = dma_addr + slen;
 	}
 
-	err = iommu_batch_end();
+	err = iommu_batch_end(mask);
 
 	if (unlikely(err < 0L))
 		goto iommu_map_failed;
···
 		vaddr = s->dma_address & IO_PAGE_MASK;
 		npages = iommu_num_pages(s->dma_address, s->dma_length,
 					 IO_PAGE_SIZE);
-		iommu_tbl_range_free(&iommu->tbl, vaddr, npages,
+		iommu_tbl_range_free(tbl, vaddr, npages,
 				     IOMMU_ERROR_CODE);
 		/* XXX demap? XXX */
 		s->dma_address = DMA_ERROR_CODE;
···
 	struct pci_pbm_info *pbm;
 	struct scatterlist *sg;
 	struct iommu *iommu;
+	struct atu *atu;
 	unsigned long flags, entry;
+	unsigned long iotsb_num;
 	u32 devhandle;
 
 	BUG_ON(direction == DMA_NONE);
 
 	iommu = dev->archdata.iommu;
 	pbm = dev->archdata.host_controller;
+	atu = iommu->atu;
 	devhandle = pbm->devhandle;
 
 	local_irq_save(flags);
···
 		dma_addr_t dma_handle = sg->dma_address;
 		unsigned int len = sg->dma_length;
 		unsigned long npages;
-		struct iommu_map_table *tbl = &iommu->tbl;
+		struct iommu_map_table *tbl;
 		unsigned long shift = IO_PAGE_SHIFT;
 
 		if (!len)
 			break;
 		npages = iommu_num_pages(dma_handle, len, IO_PAGE_SIZE);
+
+		if (dma_handle <= DMA_BIT_MASK(32)) {
+			iotsb_num = 0; /* we don't care for legacy iommu */
+			tbl = &iommu->tbl;
+		} else {
+			iotsb_num = atu->iotsb->iotsb_num;
+			tbl = &atu->tbl;
+		}
 		entry = ((dma_handle - tbl->table_map_base) >> shift);
-		dma_4v_iommu_demap(&devhandle, entry, npages);
-		iommu_tbl_range_free(&iommu->tbl, dma_handle, npages,
+		dma_4v_iommu_demap(dev, devhandle, dma_handle, iotsb_num,
+				   entry, npages);
+		iommu_tbl_range_free(tbl, dma_handle, npages,
 				     IOMMU_ERROR_CODE);
 		sg = sg_next(sg);
 	}
···
 		}
 	}
 	return cnt;
+}
+
+static int pci_sun4v_atu_alloc_iotsb(struct pci_pbm_info *pbm)
+{
+	struct atu *atu = pbm->iommu->atu;
+	struct atu_iotsb *iotsb;
+	void *table;
+	u64 table_size;
+	u64 iotsb_num;
+	unsigned long order;
+	unsigned long err;
+
+	iotsb = kzalloc(sizeof(*iotsb), GFP_KERNEL);
+	if (!iotsb) {
+		err = -ENOMEM;
+		goto out_err;
+	}
+	atu->iotsb = iotsb;
+
+	/* calculate size of IOTSB */
+	table_size = (atu->size / IO_PAGE_SIZE) * 8;
+	order = get_order(table_size);
+	table = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
+	if (!table) {
+		err = -ENOMEM;
+		goto table_failed;
+	}
+	iotsb->table = table;
+	iotsb->ra = __pa(table);
+	iotsb->dvma_size = atu->size;
+	iotsb->dvma_base = atu->base;
+	iotsb->table_size = table_size;
+	iotsb->page_size = IO_PAGE_SIZE;
+
+	/* configure and register IOTSB with HV */
+	err = pci_sun4v_iotsb_conf(pbm->devhandle,
+				   iotsb->ra,
+				   iotsb->table_size,
+				   iotsb->page_size,
+				   iotsb->dvma_base,
+				   &iotsb_num);
+	if (err) {
+		pr_err(PFX "pci_iotsb_conf failed error: %ld\n", err);
+		goto iotsb_conf_failed;
+	}
+	iotsb->iotsb_num = iotsb_num;
+
+	err = dma_4v_iotsb_bind(pbm->devhandle, iotsb_num, pbm->pci_bus);
+	if (err) {
+		pr_err(PFX "pci_iotsb_bind failed error: %ld\n", err);
+		goto iotsb_conf_failed;
+	}
+
+	return 0;
+
+iotsb_conf_failed:
+	free_pages((unsigned long)table, order);
+table_failed:
+	kfree(iotsb);
+out_err:
+	return err;
+}
+
+static int pci_sun4v_atu_init(struct pci_pbm_info *pbm)
+{
+	struct atu *atu = pbm->iommu->atu;
+	unsigned long err;
+	const u64 *ranges;
+	u64 map_size, num_iotte;
+	u64 dma_mask;
+	const u32 *page_size;
+	int len;
+
+	ranges = of_get_property(pbm->op->dev.of_node, "iommu-address-ranges",
+				 &len);
+	if (!ranges) {
+		pr_err(PFX "No iommu-address-ranges\n");
+		return -EINVAL;
+	}
+
+	page_size = of_get_property(pbm->op->dev.of_node, "iommu-pagesizes",
+				    NULL);
+	if (!page_size) {
+		pr_err(PFX "No iommu-pagesizes\n");
+		return -EINVAL;
+	}
+
+	/* There are 4 iommu-address-ranges supported. Each range is a pair
+	 * of {base, size}. The ranges[0] and ranges[1] are 32-bit address
+	 * space while ranges[2] and ranges[3] are 64-bit space. We want to
+	 * use the 64-bit address ranges to support 64-bit addressing. Because
+	 * 'size' is the same for ranges[2] and ranges[3] we can select either
+	 * of them for mapping. However, because 'size' is too large for the
+	 * OS to allocate an IOTSB, we use a fixed size of 32G
+	 * (ATU_64_SPACE_SIZE), which is more than enough for all PCIe
+	 * devices to share.
+	 */
+	atu->ranges = (struct atu_ranges *)ranges;
+	atu->base = atu->ranges[3].base;
+	atu->size = ATU_64_SPACE_SIZE;
+
+	/* Create IOTSB */
+	err = pci_sun4v_atu_alloc_iotsb(pbm);
+	if (err) {
+		pr_err(PFX "Error creating ATU IOTSB\n");
+		return err;
+	}
+
+	/* Create ATU iommu map.
+	 * One bit represents one iotte in the IOTSB table.
+	 */
+	dma_mask = (roundup_pow_of_two(atu->size) - 1UL);
+	num_iotte = atu->size / IO_PAGE_SIZE;
+	map_size = num_iotte / 8;
+	atu->tbl.table_map_base = atu->base;
+	atu->dma_addr_mask = dma_mask;
+	atu->tbl.map = kzalloc(map_size, GFP_KERNEL);
+	if (!atu->tbl.map)
+		return -ENOMEM;
+
+	iommu_tbl_pool_init(&atu->tbl, num_iotte, IO_PAGE_SHIFT,
+			    NULL, false /* no large_pool */,
+			    0 /* default npools */,
+			    false /* want span boundary checking */);
+
+	return 0;
 }
 
 static int pci_sun4v_iommu_init(struct pci_pbm_info *pbm)
···
 
 	pci_sun4v_scan_bus(pbm, &op->dev);
 
+	/* if atu_init fails it's not a complete failure.
+	 * we can still continue using the legacy iommu.
+	 */
+	if (pbm->iommu->atu) {
+		err = pci_sun4v_atu_init(pbm);
+		if (err) {
+			kfree(pbm->iommu->atu);
+			pbm->iommu->atu = NULL;
+			pr_err(PFX "ATU init failed, err=%d\n", err);
+		}
+	}
+
 	pbm->next = pci_pbm_root;
 	pci_pbm_root = pbm;
···
 	struct pci_pbm_info *pbm;
 	struct device_node *dp;
 	struct iommu *iommu;
+	struct atu *atu;
 	u32 devhandle;
 	int i, err = -ENODEV;
+	static bool hv_atu = true;
 
 	dp = op->dev.of_node;
···
 		}
 		pr_info(PFX "Registered hvapi major[%lu] minor[%lu]\n",
 			vpci_major, vpci_minor);
+
+		err = sun4v_hvapi_register(HV_GRP_ATU, vatu_major, &vatu_minor);
+		if (err) {
+			/* don't return an error if we fail to register the
+			 * ATU group, but ATU hcalls won't be available.
+			 */
+			hv_atu = false;
+			pr_err(PFX "Could not register hvapi ATU err=%d\n",
+			       err);
+		} else {
+			pr_info(PFX "Registered hvapi ATU major[%lu] minor[%lu]\n",
+				vatu_major, vatu_minor);
+		}
 
 		dma_ops = &sun4v_dma_ops;
 	}
···
 	}
 
 	pbm->iommu = iommu;
+	iommu->atu = NULL;
+	if (hv_atu) {
+		atu = kzalloc(sizeof(*atu), GFP_KERNEL);
+		if (!atu)
+			pr_err(PFX "Could not allocate atu\n");
+		else
+			iommu->atu = atu;
+	}
 
 	err = pci_sun4v_pbm_init(pbm, op, devhandle);
 	if (err)
···
 	return 0;
 
 out_free_iommu:
+	kfree(iommu->atu);
 	kfree(pbm->iommu);
 
 out_free_controller:
+21
arch/sparc/kernel/pci_sun4v.h
@@
 					 unsigned long msinum,
 					 unsigned long valid);
 
+/* Sun4v HV IOMMU v2 APIs */
+unsigned long pci_sun4v_iotsb_conf(unsigned long devhandle,
+				   unsigned long ra,
+				   unsigned long table_size,
+				   unsigned long page_size,
+				   unsigned long dvma_base,
+				   u64 *iotsb_num);
+unsigned long pci_sun4v_iotsb_bind(unsigned long devhandle,
+				   unsigned long iotsb_num,
+				   unsigned int pci_device);
+unsigned long pci_sun4v_iotsb_map(unsigned long devhandle,
+				  unsigned long iotsb_num,
+				  unsigned long iotsb_index_iottes,
+				  unsigned long io_attributes,
+				  unsigned long io_page_list_pa,
+				  long *mapped);
+unsigned long pci_sun4v_iotsb_demap(unsigned long devhandle,
+				    unsigned long iotsb_num,
+				    unsigned long iotsb_index,
+				    unsigned long iottes,
+				    unsigned long *demapped);
 #endif /* !(_PCI_SUN4V_H) */
@@
 	sf = (struct signal_frame __user *) regs->u_regs[UREG_FP];
 
 	/* 1. Make sure we are not getting garbage from the user */
-	if (!invalid_frame_pointer(sf, sizeof(*sf)))
+	if (invalid_frame_pointer(sf, sizeof(*sf)))
 		goto segv_and_exit;
 
 	if (get_user(ufp, &sf->info.si_regs.u_regs[UREG_FP]))
@@
 
 	synchronize_user_stack();
 	sf = (struct rt_signal_frame __user *) regs->u_regs[UREG_FP];
-	if (!invalid_frame_pointer(sf, sizeof(*sf)))
+	if (invalid_frame_pointer(sf, sizeof(*sf)))
 		goto segv;
 
 	if (get_user(ufp, &sf->regs.u_regs[UREG_FP]))
+64-7
arch/sparc/mm/init_64.c
@@
 };
 static struct mdesc_mblock *mblocks;
 static int num_mblocks;
+static int find_numa_node_for_addr(unsigned long pa,
+				   struct node_mem_mask *pnode_mask);
 
-static unsigned long ra_to_pa(unsigned long addr)
+static unsigned long __init ra_to_pa(unsigned long addr)
 {
 	int i;
@@
 	return addr;
 }
 
-static int find_node(unsigned long addr)
+static int __init find_node(unsigned long addr)
 {
+	static bool search_mdesc = true;
+	static struct node_mem_mask last_mem_mask = { ~0UL, ~0UL };
+	static int last_index;
 	int i;
 
 	addr = ra_to_pa(addr);
@@
 		if ((addr & p->mask) == p->val)
 			return i;
 	}
-	/* The following condition has been observed on LDOM guests.*/
-	WARN_ONCE(1, "find_node: A physical address doesn't match a NUMA node"
-		" rule. Some physical memory will be owned by node 0.");
-	return 0;
+	/* The following condition has been observed on LDOM guests because
+	 * node_masks only contains the best latency mask and value.
+	 * LDOM guest's mdesc can contain a single latency group to
+	 * cover multiple address range. Print warning message only if the
+	 * address cannot be found in node_masks nor mdesc.
+	 */
+	if ((search_mdesc) &&
+	    ((addr & last_mem_mask.mask) != last_mem_mask.val)) {
+		/* find the available node in the mdesc */
+		last_index = find_numa_node_for_addr(addr, &last_mem_mask);
+		numadbg("find_node: latency group for address 0x%lx is %d\n",
+			addr, last_index);
+		if ((last_index < 0) || (last_index >= num_node_masks)) {
+			/* WARN_ONCE() and use default group 0 */
+			WARN_ONCE(1, "find_node: A physical address doesn't match a NUMA node rule. Some physical memory will be owned by node 0.");
+			search_mdesc = false;
+			last_index = 0;
+		}
+	}
+
+	return last_index;
 }
 
-static u64 memblock_nid_range(u64 start, u64 end, int *nid)
+static u64 __init memblock_nid_range(u64 start, u64 end, int *nid)
 {
 	*nid = find_node(start);
 	start += PAGE_SIZE;
@@
 		return (from == to) ? LOCAL_DISTANCE : REMOTE_DISTANCE;
 	}
 	return numa_latency[from][to];
+}
+
+static int find_numa_node_for_addr(unsigned long pa,
+				   struct node_mem_mask *pnode_mask)
+{
+	struct mdesc_handle *md = mdesc_grab();
+	u64 node, arc;
+	int i = 0;
+
+	node = mdesc_node_by_name(md, MDESC_NODE_NULL, "latency-groups");
+	if (node == MDESC_NODE_NULL)
+		goto out;
+
+	mdesc_for_each_node_by_name(md, node, "group") {
+		mdesc_for_each_arc(arc, md, node, MDESC_ARC_TYPE_FWD) {
+			u64 target = mdesc_arc_target(md, arc);
+			struct mdesc_mlgroup *m = find_mlgroup(target);
+
+			if (!m)
+				continue;
+			if ((pa & m->mask) == m->match) {
+				if (pnode_mask) {
+					pnode_mask->mask = m->mask;
+					pnode_mask->val = m->match;
+				}
+				mdesc_release(md);
+				return i;
+			}
+		}
+		i++;
+	}
+
+out:
+	mdesc_release(md);
+	return -1;
 }
 
 static int __init find_best_numa_node_for_mlgroup(struct mdesc_mlgroup *grp)
+3
arch/tile/include/asm/cache.h
@@
  */
 #define __write_once __read_mostly
 
+/* __ro_after_init is the generic name for the tile arch __write_once. */
+#define __ro_after_init __read_mostly
+
 #endif /* _ASM_TILE_CACHE_H */
@@
 #define PCI_DEVICE_ID_INTEL_HSW_IMC	0x0c00
 #define PCI_DEVICE_ID_INTEL_HSW_U_IMC	0x0a04
 #define PCI_DEVICE_ID_INTEL_BDW_IMC	0x1604
-#define PCI_DEVICE_ID_INTEL_SKL_IMC	0x191f
-#define PCI_DEVICE_ID_INTEL_SKL_U_IMC	0x190c
+#define PCI_DEVICE_ID_INTEL_SKL_U_IMC	0x1904
+#define PCI_DEVICE_ID_INTEL_SKL_Y_IMC	0x190c
+#define PCI_DEVICE_ID_INTEL_SKL_HD_IMC	0x1900
+#define PCI_DEVICE_ID_INTEL_SKL_HQ_IMC	0x1910
+#define PCI_DEVICE_ID_INTEL_SKL_SD_IMC	0x190f
+#define PCI_DEVICE_ID_INTEL_SKL_SQ_IMC	0x191f
 
 /* SNB event control */
 #define SNB_UNC_CTL_EV_SEL_MASK			0x000000ff
@@
 
 static const struct pci_device_id skl_uncore_pci_ids[] = {
 	{ /* IMC */
-		PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_IMC),
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_Y_IMC),
 		.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
 	},
 	{ /* IMC */
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_U_IMC),
+		.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+	},
+	{ /* IMC */
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_HD_IMC),
+		.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+	},
+	{ /* IMC */
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_HQ_IMC),
+		.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+	},
+	{ /* IMC */
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_SD_IMC),
+		.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
+	},
+	{ /* IMC */
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_SQ_IMC),
 		.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
 	},
@@
 	IMC_DEV(HSW_IMC, &hsw_uncore_pci_driver),    /* 4th Gen Core Processor */
 	IMC_DEV(HSW_U_IMC, &hsw_uncore_pci_driver),  /* 4th Gen Core ULT Mobile Processor */
 	IMC_DEV(BDW_IMC, &bdw_uncore_pci_driver),    /* 5th Gen Core U */
-	IMC_DEV(SKL_IMC, &skl_uncore_pci_driver),    /* 6th Gen Core */
+	IMC_DEV(SKL_Y_IMC, &skl_uncore_pci_driver),  /* 6th Gen Core Y */
 	IMC_DEV(SKL_U_IMC, &skl_uncore_pci_driver),  /* 6th Gen Core U */
+	IMC_DEV(SKL_HD_IMC, &skl_uncore_pci_driver), /* 6th Gen Core H Dual Core */
+	IMC_DEV(SKL_HQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core H Quad Core */
+	IMC_DEV(SKL_SD_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Dual Core */
+	IMC_DEV(SKL_SQ_IMC, &skl_uncore_pci_driver), /* 6th Gen Core S Quad Core */
 	{  /* end marker */ }
 };
+1
arch/x86/include/asm/intel-mid.h
@@
 
 extern int intel_mid_pci_init(void);
 extern int intel_mid_pci_set_power_state(struct pci_dev *pdev, pci_power_t state);
+extern pci_power_t intel_mid_pci_get_power_state(struct pci_dev *pdev);
 
 extern void intel_mid_pwr_power_off(void);
+4-1
arch/x86/kernel/apm_32.c
@@
 
 	if (apm_info.get_power_status_broken)
 		return APM_32_UNSUPPORTED;
-	if (apm_bios_call(&call))
+	if (apm_bios_call(&call)) {
+		if (!call.err)
+			return APM_NO_ERROR;
 		return call.err;
+	}
 	*status = call.ebx;
 	*bat = call.ecx;
 	if (apm_info.get_power_status_swabinminutes) {
+1-5
arch/x86/kernel/cpu/amd.c
@@
 #ifdef CONFIG_SMP
 	unsigned bits;
 	int cpu = smp_processor_id();
-	unsigned int socket_id, core_complex_id;
 
 	bits = c->x86_coreid_bits;
 	/* Low order bits define the core id (index of core in socket) */
@@
 	if (c->x86 != 0x17 || !cpuid_edx(0x80000006))
 		return;
 
-	socket_id	= (c->apicid >> bits) - 1;
-	core_complex_id	= (c->apicid & ((1 << bits) - 1)) >> 3;
-
-	per_cpu(cpu_llc_id, cpu) = (socket_id << 3) | core_complex_id;
+	per_cpu(cpu_llc_id, cpu) = c->apicid >> 3;
 #endif
 }
+30-2
arch/x86/kernel/cpu/common.c
@@
 }
 
 /*
+ * The physical to logical package id mapping is initialized from the
+ * acpi/mptables information. Make sure that CPUID actually agrees with
+ * that.
+ */
+static void sanitize_package_id(struct cpuinfo_x86 *c)
+{
+#ifdef CONFIG_SMP
+	unsigned int pkg, apicid, cpu = smp_processor_id();
+
+	apicid = apic->cpu_present_to_apicid(cpu);
+	pkg = apicid >> boot_cpu_data.x86_coreid_bits;
+
+	if (apicid != c->initial_apicid) {
+		pr_err(FW_BUG "CPU%u: APIC id mismatch. Firmware: %x CPUID: %x\n",
+		       cpu, apicid, c->initial_apicid);
+		c->initial_apicid = apicid;
+	}
+	if (pkg != c->phys_proc_id) {
+		pr_err(FW_BUG "CPU%u: Using firmware package id %u instead of %u\n",
+		       cpu, pkg, c->phys_proc_id);
+		c->phys_proc_id = pkg;
+	}
+	c->logical_proc_id = topology_phys_to_logical_pkg(pkg);
+#else
+	c->logical_proc_id = 0;
+#endif
+}
+
+/*
  * This does the hard work of actually picking apart the CPU stuff...
  */
 static void identify_cpu(struct cpuinfo_x86 *c)
@@
 #ifdef CONFIG_NUMA
 	numa_add_cpu(smp_processor_id());
 #endif
-	/* The boot/hotplug time assigment got cleared, restore it */
-	c->logical_proc_id = topology_phys_to_logical_pkg(c->phys_proc_id);
+	sanitize_package_id(c);
 }
+27-31
arch/x86/kvm/irq_comm.c
@@
 }
 
 
+static int kvm_hv_set_sint(struct kvm_kernel_irq_routing_entry *e,
+			   struct kvm *kvm, int irq_source_id, int level,
+			   bool line_status)
+{
+	if (!level)
+		return -1;
+
+	return kvm_hv_synic_set_irq(kvm, e->hv_sint.vcpu, e->hv_sint.sint);
+}
+
 int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
 			      struct kvm *kvm, int irq_source_id, int level,
 			      bool line_status)
@@
 	struct kvm_lapic_irq irq;
 	int r;
 
-	if (unlikely(e->type != KVM_IRQ_ROUTING_MSI))
-		return -EWOULDBLOCK;
+	switch (e->type) {
+	case KVM_IRQ_ROUTING_HV_SINT:
+		return kvm_hv_set_sint(e, kvm, irq_source_id, level,
+				       line_status);
 
-	if (kvm_msi_route_invalid(kvm, e))
-		return -EINVAL;
+	case KVM_IRQ_ROUTING_MSI:
+		if (kvm_msi_route_invalid(kvm, e))
+			return -EINVAL;
 
-	kvm_set_msi_irq(kvm, e, &irq);
+		kvm_set_msi_irq(kvm, e, &irq);
 
-	if (kvm_irq_delivery_to_apic_fast(kvm, NULL, &irq, &r, NULL))
-		return r;
-	else
-		return -EWOULDBLOCK;
+		if (kvm_irq_delivery_to_apic_fast(kvm, NULL, &irq, &r, NULL))
+			return r;
+		break;
+
+	default:
+		break;
+	}
+
+	return -EWOULDBLOCK;
 }
 
 int kvm_request_irq_source_id(struct kvm *kvm)
@@
 		if (kimn->irq == gsi)
 			kimn->func(kimn, mask);
 	srcu_read_unlock(&kvm->irq_srcu, idx);
-}
-
-static int kvm_hv_set_sint(struct kvm_kernel_irq_routing_entry *e,
-			   struct kvm *kvm, int irq_source_id, int level,
-			   bool line_status)
-{
-	if (!level)
-		return -1;
-
-	return kvm_hv_synic_set_irq(kvm, e->hv_sint.vcpu, e->hv_sint.sint);
 }
 
 int kvm_set_routing_entry(struct kvm *kvm,
@@
 		}
 	}
 	srcu_read_unlock(&kvm->irq_srcu, idx);
-}
-
-int kvm_arch_set_irq(struct kvm_kernel_irq_routing_entry *irq, struct kvm *kvm,
-		     int irq_source_id, int level, bool line_status)
-{
-	switch (irq->type) {
-	case KVM_IRQ_ROUTING_HV_SINT:
-		return kvm_hv_set_sint(irq, kvm, irq_source_id, level,
-				       line_status);
-	default:
-		return -EWOULDBLOCK;
-	}
 }
 
 void kvm_arch_irq_routing_update(struct kvm *kvm)
+34-13
arch/x86/kvm/x86.c
@@
 	struct kvm_shared_msrs *locals
 		= container_of(urn, struct kvm_shared_msrs, urn);
 	struct kvm_shared_msr_values *values;
+	unsigned long flags;
 
+	/*
+	 * Disabling irqs at this point since the following code could be
+	 * interrupted and executed through kvm_arch_hardware_disable()
+	 */
+	local_irq_save(flags);
+	if (locals->registered) {
+		locals->registered = false;
+		user_return_notifier_unregister(urn);
+	}
+	local_irq_restore(flags);
 	for (slot = 0; slot < shared_msrs_global.nr; ++slot) {
 		values = &locals->values[slot];
 		if (values->host != values->curr) {
@@
 			values->curr = values->host;
 		}
 	}
-	locals->registered = false;
-	user_return_notifier_unregister(urn);
 }
 
 static void shared_msr_update(unsigned slot, u32 msr)
@@
 
 static u64 __get_kvmclock_ns(struct kvm *kvm)
 {
-	struct kvm_vcpu *vcpu = kvm_get_vcpu(kvm, 0);
 	struct kvm_arch *ka = &kvm->arch;
-	s64 ns;
+	struct pvclock_vcpu_time_info hv_clock;
 
-	if (vcpu->arch.hv_clock.flags & PVCLOCK_TSC_STABLE_BIT) {
-		u64 tsc = kvm_read_l1_tsc(vcpu, rdtsc());
-		ns = __pvclock_read_cycles(&vcpu->arch.hv_clock, tsc);
-	} else {
-		ns = ktime_get_boot_ns() + ka->kvmclock_offset;
+	spin_lock(&ka->pvclock_gtod_sync_lock);
+	if (!ka->use_master_clock) {
+		spin_unlock(&ka->pvclock_gtod_sync_lock);
+		return ktime_get_boot_ns() + ka->kvmclock_offset;
 	}
 
-	return ns;
+	hv_clock.tsc_timestamp = ka->master_cycle_now;
+	hv_clock.system_time = ka->master_kernel_ns + ka->kvmclock_offset;
+	spin_unlock(&ka->pvclock_gtod_sync_lock);
+
+	kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
+			   &hv_clock.tsc_shift,
+			   &hv_clock.tsc_to_system_mul);
+	return __pvclock_read_cycles(&hv_clock, rdtsc());
 }
 
 u64 get_kvmclock_ns(struct kvm *kvm)
@@
 	case KVM_CAP_PIT_STATE2:
 	case KVM_CAP_SET_IDENTITY_MAP_ADDR:
 	case KVM_CAP_XEN_HVM:
-	case KVM_CAP_ADJUST_CLOCK:
 	case KVM_CAP_VCPU_EVENTS:
 	case KVM_CAP_HYPERV:
 	case KVM_CAP_HYPERV_VAPIC:
@@
 	case KVM_CAP_PCI_2_3:
 #endif
 		r = 1;
+		break;
+	case KVM_CAP_ADJUST_CLOCK:
+		r = KVM_CLOCK_TSC_STABLE;
 		break;
 	case KVM_CAP_X86_SMM:
 		/* SMBASE is usually relocated above 1M on modern chipsets,
@@
 	};
 	case KVM_SET_VAPIC_ADDR: {
 		struct kvm_vapic_addr va;
+		int idx;
 
 		r = -EINVAL;
 		if (!lapic_in_kernel(vcpu))
@@
 		r = -EFAULT;
 		if (copy_from_user(&va, argp, sizeof va))
 			goto out;
+		idx = srcu_read_lock(&vcpu->kvm->srcu);
 		r = kvm_lapic_set_vapic_addr(vcpu, va.vapic_addr);
+		srcu_read_unlock(&vcpu->kvm->srcu, idx);
 		break;
 	}
 	case KVM_X86_SETUP_MCE: {
@@
 		struct kvm_clock_data user_ns;
 		u64 now_ns;
 
-		now_ns = get_kvmclock_ns(kvm);
+		local_irq_disable();
+		now_ns = __get_kvmclock_ns(kvm);
 		user_ns.clock = now_ns;
-		user_ns.flags = 0;
+		user_ns.flags = kvm->arch.use_master_clock ? KVM_CLOCK_TSC_STABLE : 0;
+		local_irq_enable();
 		memset(&user_ns.pad, 0, sizeof(user_ns.pad));
 
 		r = -EFAULT;
@@
 /**
  * acpi_create_platform_device - Create platform device for ACPI device node
  * @adev: ACPI device node to create a platform device for.
+ * @properties: Optional collection of build-in properties.
  *
  * Check if the given @adev can be represented as a platform device and, if
  * that's the case, create and register a platform device, populate its common
@@
  *
  * Name of the platform device will be the same as @adev's.
  */
-struct platform_device *acpi_create_platform_device(struct acpi_device *adev)
+struct platform_device *acpi_create_platform_device(struct acpi_device *adev,
+					struct property_entry *properties)
 {
 	struct platform_device *pdev = NULL;
 	struct platform_device_info pdevinfo;
@@
 	pdevinfo.res = resources;
 	pdevinfo.num_res = count;
 	pdevinfo.fwnode = acpi_fwnode_handle(adev);
+	pdevinfo.properties = properties;
 
 	if (acpi_dma_supported(adev))
 		pdevinfo.dma_mask = DMA_BIT_MASK(32);
+4-6
drivers/acpi/acpica/tbfadt.c
@@
 	u32 i;
 
 	/*
-	 * For ACPI 1.0 FADTs (revision 1), ensure that reserved fields which
+	 * For ACPI 1.0 FADTs (revision 1 or 2), ensure that reserved fields which
 	 * should be zero are indeed zero. This will workaround BIOSs that
 	 * inadvertently place values in these fields.
 	 *
 	 * The ACPI 1.0 reserved fields that will be zeroed are the bytes located
 	 * at offset 45, 55, 95, and the word located at offset 109, 110.
 	 *
-	 * Note: The FADT revision value is unreliable because of BIOS errors.
-	 * The table length is instead used as the final word on the version.
-	 *
-	 * Note: FADT revision 3 is the ACPI 2.0 version of the FADT.
+	 * Note: The FADT revision value is unreliable. Only the length can be
+	 * trusted.
 	 */
-	if (acpi_gbl_FADT.header.length <= ACPI_FADT_V3_SIZE) {
+	if (acpi_gbl_FADT.header.length <= ACPI_FADT_V2_SIZE) {
 		acpi_gbl_FADT.preferred_profile = 0;
 		acpi_gbl_FADT.pstate_control = 0;
 		acpi_gbl_FADT.cst_control = 0;
+2-2
drivers/acpi/dptf/int340x_thermal.c
@@
 			const struct acpi_device_id *id)
 {
 	if (IS_ENABLED(CONFIG_INT340X_THERMAL))
-		acpi_create_platform_device(adev);
+		acpi_create_platform_device(adev, NULL);
 	/* Intel SoC DTS thermal driver needs INT3401 to set IRQ descriptor */
 	else if (IS_ENABLED(CONFIG_INTEL_SOC_DTS_THERMAL) &&
 		 id->driver_data == INT3401_DEVICE)
-		acpi_create_platform_device(adev);
+		acpi_create_platform_device(adev, NULL);
 	return 1;
 }
@@
 {
 	int ret = -EPROBE_DEFER;
 	int local_trigger_count = atomic_read(&deferred_trigger_count);
-	bool test_remove = IS_ENABLED(CONFIG_DEBUG_TEST_DRIVER_REMOVE);
+	bool test_remove = IS_ENABLED(CONFIG_DEBUG_TEST_DRIVER_REMOVE) &&
+			   !drv->suppress_bind_attrs;
 
 	if (defer_all_probes) {
 		/*
@@
 	if (test_remove) {
 		test_remove = false;
 
-		if (dev->bus && dev->bus->remove)
+		if (dev->bus->remove)
 			dev->bus->remove(dev);
 		else if (drv->remove)
 			drv->remove(dev);
@@
 	return n;
 }
 
-/* This can be removed if we are certain that no users of the block
- * layer will ever use zero-count pages in bios. Otherwise we have to
- * protect against the put_page sometimes done by the network layer.
- *
- * See http://oss.sgi.com/archives/xfs/2007-01/msg00594.html for
- * discussion.
- *
- * We cannot use get_page in the workaround, because it insists on a
- * positive page count as a precondition. So we use _refcount directly.
- */
-static void
-bio_pageinc(struct bio *bio)
-{
-	struct bio_vec bv;
-	struct page *page;
-	struct bvec_iter iter;
-
-	bio_for_each_segment(bv, bio, iter) {
-		/* Non-zero page count for non-head members of
-		 * compound pages is no longer allowed by the kernel.
-		 */
-		page = compound_head(bv.bv_page);
-		page_ref_inc(page);
-	}
-}
-
-static void
-bio_pagedec(struct bio *bio)
-{
-	struct page *page;
-	struct bio_vec bv;
-	struct bvec_iter iter;
-
-	bio_for_each_segment(bv, bio, iter) {
-		page = compound_head(bv.bv_page);
-		page_ref_dec(page);
-	}
-}
-
 static void
 bufinit(struct buf *buf, struct request *rq, struct bio *bio)
 {
@@
 	buf->rq = rq;
 	buf->bio = bio;
 	buf->iter = bio->bi_iter;
-	bio_pageinc(bio);
 }
 
 static struct buf *
@@
 	if (buf == d->ip.buf)
 		d->ip.buf = NULL;
 	rq = buf->rq;
-	bio_pagedec(buf->bio);
 	mempool_free(buf, d->bufpool);
 	n = (unsigned long) rq->special;
 	rq->special = (void *) --n;
+1-1
drivers/block/drbd/drbd_main.c
@@
 		drbd_update_congested(connection);
 	}
 	do {
-		rv = kernel_sendmsg(sock, &msg, &iov, 1, size);
+		rv = kernel_sendmsg(sock, &msg, &iov, 1, iov.iov_len);
 		if (rv == -EAGAIN) {
 			if (we_should_drop_the_connection(connection, sock))
 				break;
+1-1
drivers/block/nbd.c
@@
 		return -EINVAL;
 
 	sreq = blk_mq_alloc_request(bdev_get_queue(bdev), WRITE, 0);
-	if (!sreq)
+	if (IS_ERR(sreq))
 		return -ENOMEM;
 
 	mutex_unlock(&nbd->tx_lock);
+2-2
drivers/char/ipmi/bt-bmc.c
@@
 }
 
 static const struct of_device_id bt_bmc_match[] = {
-	{ .compatible = "aspeed,ast2400-bt-bmc" },
+	{ .compatible = "aspeed,ast2400-ibt-bmc" },
 	{ },
 };
@@
 MODULE_DEVICE_TABLE(of, bt_bmc_match);
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Alistair Popple <alistair@popple.id.au>");
-MODULE_DESCRIPTION("Linux device interface to the BT interface");
+MODULE_DESCRIPTION("Linux device interface to the IPMI BT interface");
@@
 	struct xgene_clk *pclk = to_xgene_clk(hw);
 	unsigned long flags = 0;
 	u32 data;
-	phys_addr_t reg;
 
 	if (pclk->lock)
 		spin_lock_irqsave(pclk->lock, flags);
 
 	if (pclk->param.csr_reg != NULL) {
 		pr_debug("%s clock enabled\n", clk_hw_get_name(hw));
-		reg = __pa(pclk->param.csr_reg);
 		/* First enable the clock */
 		data = xgene_clk_read(pclk->param.csr_reg +
 					pclk->param.reg_clk_offset);
 		data |= pclk->param.reg_clk_mask;
 		xgene_clk_write(data, pclk->param.csr_reg +
 					pclk->param.reg_clk_offset);
-		pr_debug("%s clock PADDR base %pa clk offset 0x%08X mask 0x%08X value 0x%08X\n",
-			clk_hw_get_name(hw), &reg,
+		pr_debug("%s clk offset 0x%08X mask 0x%08X value 0x%08X\n",
+			clk_hw_get_name(hw),
 			pclk->param.reg_clk_offset, pclk->param.reg_clk_mask,
 			data);
@@
 		data &= ~pclk->param.reg_csr_mask;
 		xgene_clk_write(data, pclk->param.csr_reg +
 					pclk->param.reg_csr_offset);
-		pr_debug("%s CSR RESET PADDR base %pa csr offset 0x%08X mask 0x%08X value 0x%08X\n",
-			clk_hw_get_name(hw), &reg,
+		pr_debug("%s csr offset 0x%08X mask 0x%08X value 0x%08X\n",
+			clk_hw_get_name(hw),
 			pclk->param.reg_csr_offset, pclk->param.reg_csr_mask,
 			data);
 	}
+6-2
drivers/clk/imx/clk-pllv3.c
@@
 	temp64 *= mfn;
 	do_div(temp64, mfd);
 
-	return (parent_rate * div) + (u32)temp64;
+	return parent_rate * div + (unsigned long)temp64;
 }
 
 static long clk_pllv3_av_round_rate(struct clk_hw *hw, unsigned long rate,
@@
 	do_div(temp64, parent_rate);
 	mfn = temp64;
 
-	return parent_rate * div + parent_rate * mfn / mfd;
+	temp64 = (u64)parent_rate;
+	temp64 *= mfn;
+	do_div(temp64, mfd);
+
+	return parent_rate * div + (unsigned long)temp64;
 }
 
 static int clk_pllv3_av_set_rate(struct clk_hw *hw, unsigned long rate,
+1-1
drivers/clk/mmp/clk-of-mmp2.c
@@
 	}
 
 	pxa_unit->apmu_base = of_iomap(np, 1);
-	if (!pxa_unit->mpmu_base) {
+	if (!pxa_unit->apmu_base) {
 		pr_err("failed to map apmu registers\n");
 		return;
 	}
+1-1
drivers/clk/mmp/clk-of-pxa168.c
@@
 	}
 
 	pxa_unit->apmu_base = of_iomap(np, 1);
-	if (!pxa_unit->mpmu_base) {
+	if (!pxa_unit->apmu_base) {
 		pr_err("failed to map apmu registers\n");
 		return;
 	}
+2-2
drivers/clk/mmp/clk-of-pxa910.c
@@
 	}
 
 	pxa_unit->apmu_base = of_iomap(np, 1);
-	if (!pxa_unit->mpmu_base) {
+	if (!pxa_unit->apmu_base) {
 		pr_err("failed to map apmu registers\n");
 		return;
 	}
@@
 	}
 
 	pxa_unit->apbcp_base = of_iomap(np, 3);
-	if (!pxa_unit->mpmu_base) {
+	if (!pxa_unit->apbcp_base) {
 		pr_err("failed to map apbcp registers\n");
 		return;
 	}
+1-4
drivers/clk/rockchip/clk-ddr.c
@@
 	ddrclk->ddr_flag = ddr_flag;
 
 	clk = clk_register(NULL, &ddrclk->hw);
-	if (IS_ERR(clk)) {
-		pr_err("%s: could not register ddrclk %s\n", __func__, name);
+	if (IS_ERR(clk))
 		kfree(ddrclk);
-		return NULL;
-	}
 
 	return clk;
 }
+14-8
drivers/clk/samsung/clk-exynos-clkout.c
@@
 		pr_err("%s: failed to register clkout clock\n", __func__);
 }
 
+/*
+ * We use CLK_OF_DECLARE_DRIVER initialization method to avoid setting
+ * the OF_POPULATED flag on the pmu device tree node, so later the
+ * Exynos PMU platform device can be properly probed with PMU driver.
+ */
+
 static void __init exynos4_clkout_init(struct device_node *node)
 {
 	exynos_clkout_init(node, EXYNOS4_CLKOUT_MUX_MASK);
 }
-CLK_OF_DECLARE(exynos4210_clkout, "samsung,exynos4210-pmu",
+CLK_OF_DECLARE_DRIVER(exynos4210_clkout, "samsung,exynos4210-pmu",
 		exynos4_clkout_init);
-CLK_OF_DECLARE(exynos4212_clkout, "samsung,exynos4212-pmu",
+CLK_OF_DECLARE_DRIVER(exynos4212_clkout, "samsung,exynos4212-pmu",
 		exynos4_clkout_init);
-CLK_OF_DECLARE(exynos4412_clkout, "samsung,exynos4412-pmu",
+CLK_OF_DECLARE_DRIVER(exynos4412_clkout, "samsung,exynos4412-pmu",
 		exynos4_clkout_init);
-CLK_OF_DECLARE(exynos3250_clkout, "samsung,exynos3250-pmu",
+CLK_OF_DECLARE_DRIVER(exynos3250_clkout, "samsung,exynos3250-pmu",
 		exynos4_clkout_init);
 
 static void __init exynos5_clkout_init(struct device_node *node)
 {
 	exynos_clkout_init(node, EXYNOS5_CLKOUT_MUX_MASK);
 }
-CLK_OF_DECLARE(exynos5250_clkout, "samsung,exynos5250-pmu",
+CLK_OF_DECLARE_DRIVER(exynos5250_clkout, "samsung,exynos5250-pmu",
 		exynos5_clkout_init);
-CLK_OF_DECLARE(exynos5410_clkout, "samsung,exynos5410-pmu",
+CLK_OF_DECLARE_DRIVER(exynos5410_clkout, "samsung,exynos5410-pmu",
 		exynos5_clkout_init);
-CLK_OF_DECLARE(exynos5420_clkout, "samsung,exynos5420-pmu",
+CLK_OF_DECLARE_DRIVER(exynos5420_clkout, "samsung,exynos5420-pmu",
 		exynos5_clkout_init);
-CLK_OF_DECLARE(exynos5433_clkout, "samsung,exynos5433-pmu",
+CLK_OF_DECLARE_DRIVER(exynos5433_clkout, "samsung,exynos5433-pmu",
 		exynos5_clkout_init);
+10-1
drivers/crypto/caam/caamalg.c
@@
 	}
 
 	buf = it_page + it->offset;
-	len = min(tlen, it->length);
+	len = min_t(size_t, tlen, it->length);
 	print_hex_dump(level, prefix_str, prefix_type, rowsize,
 		       groupsize, buf, len, ascii);
 	tlen -= len;
@@
 
 		/* Skip AES algorithms if not supported by device */
 		if (!aes_inst && (alg_sel == OP_ALG_ALGSEL_AES))
+			continue;
+
+		/*
+		 * Check support for AES modes not available
+		 * on LP devices.
+		 */
+		if ((cha_vid & CHA_ID_LS_AES_MASK) == CHA_ID_LS_AES_LP)
+			if ((alg->class1_alg_type & OP_ALG_AAI_MASK) ==
+			     OP_ALG_AAI_XTS)
 				continue;
 
 		t_alg = caam_alg_alloc(alg);
+1
drivers/dma/Kconfig
@@
 	depends on ARCH_MMP || COMPILE_TEST
 	select DMA_ENGINE
 	select MMP_SRAM if ARCH_MMP
+	select GENERIC_ALLOCATOR
 	help
 	  Support the MMP Two-Channel DMA engine.
 	  This engine used for MMP Audio DMA and pxa910 SQU.
+26-5
drivers/dma/cppi41.c
@@
 
 	while (val) {
 		u32 desc, len;
+		int error;
+
+		error = pm_runtime_get(cdd->ddev.dev);
+		if (error < 0)
+			dev_err(cdd->ddev.dev, "%s pm runtime get: %i\n",
+				__func__, error);
 
 		q_num = __fls(val);
 		val &= ~(1 << q_num);
@@
 			dma_cookie_complete(&c->txd);
 			dmaengine_desc_get_callback_invoke(&c->txd, NULL);
 
-			/* Paired with cppi41_dma_issue_pending */
 			pm_runtime_mark_last_busy(cdd->ddev.dev);
 			pm_runtime_put_autosuspend(cdd->ddev.dev);
 		}
@@
 	int error;
 
 	error = pm_runtime_get_sync(cdd->ddev.dev);
-	if (error < 0)
+	if (error < 0) {
+		dev_err(cdd->ddev.dev, "%s pm runtime get: %i\n",
+			__func__, error);
+		pm_runtime_put_noidle(cdd->ddev.dev);
+
 		return error;
+	}
 
 	dma_cookie_init(chan);
 	dma_async_tx_descriptor_init(&c->txd, chan);
@@
 	int error;
 
 	error = pm_runtime_get_sync(cdd->ddev.dev);
-	if (error < 0)
+	if (error < 0) {
+		pm_runtime_put_noidle(cdd->ddev.dev);
+
 		return;
+	}
 
 	WARN_ON(!list_empty(&cdd->pending));
@@
 	struct cppi41_dd *cdd = c->cdd;
 	int error;
 
-	/* PM runtime paired with dmaengine_desc_get_callback_invoke */
 	error = pm_runtime_get(cdd->ddev.dev);
 	if ((error != -EINPROGRESS) && error < 0) {
+		pm_runtime_put_noidle(cdd->ddev.dev);
 		dev_err(cdd->ddev.dev, "Failed to pm_runtime_get: %i\n",
 			error);
@@
 		push_desc_queue(c);
 	else
 		pending_desc(c);
+
+	pm_runtime_mark_last_busy(cdd->ddev.dev);
+	pm_runtime_put_autosuspend(cdd->ddev.dev);
 }
 
 static u32 get_host_pd0(u32 length)
@@
 	deinit_cppi41(dev, cdd);
 err_init_cppi:
 	pm_runtime_dont_use_autosuspend(dev);
-	pm_runtime_put_sync(dev);
 err_get_sync:
+	pm_runtime_put_sync(dev);
 	pm_runtime_disable(dev);
 	iounmap(cdd->usbss_mem);
 	iounmap(cdd->ctrl_mem);
@@
 static int cppi41_dma_remove(struct platform_device *pdev)
 {
 	struct cppi41_dd *cdd = platform_get_drvdata(pdev);
+	int error;
 
+	error = pm_runtime_get_sync(&pdev->dev);
+	if (error < 0)
+		dev_err(&pdev->dev, "%s could not pm_runtime_get: %i\n",
+			__func__, error);
 	of_dma_controller_free(pdev->dev.of_node);
 	dma_async_device_unregister(&cdd->ddev);
+1
drivers/dma/edma.c
···16281628 if (echan->slot[0] < 0) {16291629 dev_err(dev, "Entry slot allocation failed for channel %u\n",16301630 EDMA_CHAN_SLOT(echan->ch_num));16311631+ ret = echan->slot[0];16311632 goto err_slot;16321633 }16331634
···9797 if (ret < 0)9898 return ret;9999100100- return !!(ret & BIT(pos));100100+ return !(ret & BIT(pos));101101}102102103103static int tc3589x_gpio_set_single_ended(struct gpio_chip *chip,
+5-2
drivers/gpio/gpiolib.c
···27372737 if (IS_ERR(desc))27382738 return PTR_ERR(desc);2739273927402740- /* Flush direction if something changed behind our back */27412741- if (chip->get_direction) {27402740+ /*27412741+ * If it's fast: flush the direction setting if something changed27422742+ * behind our back27432743+ */27442744+ if (!chip->can_sleep && chip->get_direction) {27422745 int dir = chip->get_direction(chip, offset);2743274627442747 if (dir)
+1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
···459459 u64 metadata_flags;460460 void *metadata;461461 u32 metadata_size;462462+ unsigned prime_shared_count;462463 /* list of all virtual address to which this bo463464 * is associated to464465 */
+4-1
drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
···395395{396396 int i, ret;397397 struct device *dev;398398-399398 struct amdgpu_device *adev = (struct amdgpu_device *)handle;399399+400400+ /* return early if no ACP */401401+ if (!adev->acp.acp_genpd)402402+ return 0;400403401404 for (i = 0; i < ACP_DEVS ; i++) {402405 dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
···769769{770770 struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);771771772772- if (amdgpu_connector->ddc_bus->has_aux) {772772+ if (amdgpu_connector->ddc_bus && amdgpu_connector->ddc_bus->has_aux) {773773 drm_dp_aux_unregister(&amdgpu_connector->ddc_bus->aux);774774 amdgpu_connector->ddc_bus->has_aux = false;775775 }
+7-20
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···658658 return false;659659660660 if (amdgpu_passthrough(adev)) {661661- /* for FIJI: In whole GPU pass-through virtualization case662662- * old smc fw won't clear some registers (e.g. MEM_SIZE, BIOS_SCRATCH)663663- * so amdgpu_card_posted return false and driver will incorrectly skip vPost.664664- * but if we force vPost do in pass-through case, the driver reload will hang.665665- * whether doing vPost depends on amdgpu_card_posted if smc version is above666666- * 00160e00 for FIJI.661661+ /* for FIJI: In the whole-GPU pass-through virtualization case, after a VM662662+ * reboot some old smc fw still needs the driver to do vPost or the gpu663663+ * hangs; smc fw versions above 22.15 don't have this flaw, so force664664+ * vPost for smc versions below 22.15.667665 */668666 if (adev->asic_type == CHIP_FIJI) {669667 int err;···672674 return true;673675674676 fw_ver = *((uint32_t *)adev->pm.fw->data + 69);675675- if (fw_ver >= 0x00160e00)676676- return !amdgpu_card_posted(adev);677677+ if (fw_ver < 0x00160e00)678678+ return true;677679 }678678- } else {679679- /* in bare-metal case, amdgpu_card_posted return false680680- * after system reboot/boot, and return true if driver681681- * reloaded.682682- * we shouldn't do vPost after driver reload otherwise GPU683683- * could hang.684684- */685685- if (amdgpu_card_posted(adev))686686- return false;687680 }688688-689689- /* we assume vPost is neede for all other cases */690690- return true;681681+ return !amdgpu_card_posted(adev);691682}692683693684/**
+24-2
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
···735735736736static int __init amdgpu_init(void)737737{738738- amdgpu_sync_init();739739- amdgpu_fence_slab_init();738738+ int r;739739+740740+ r = amdgpu_sync_init();741741+ if (r)742742+ goto error_sync;743743+744744+ r = amdgpu_fence_slab_init();745745+ if (r)746746+ goto error_fence;747747+748748+ r = amd_sched_fence_slab_init();749749+ if (r)750750+ goto error_sched;751751+740752 if (vgacon_text_force()) {741753 DRM_ERROR("VGACON disables amdgpu kernel modesetting.\n");742754 return -EINVAL;···760748 amdgpu_register_atpx_handler();761749 /* let modprobe override vga console setting */762750 return drm_pci_init(driver, pdriver);751751+752752+error_sched:753753+ amdgpu_fence_slab_fini();754754+755755+error_fence:756756+ amdgpu_sync_fini();757757+758758+error_sync:759759+ return r;763760}764761765762static void __exit amdgpu_exit(void)···777756 drm_pci_exit(driver, pdriver);778757 amdgpu_unregister_atpx_handler();779758 amdgpu_sync_fini();759759+ amd_sched_fence_slab_fini();780760 amdgpu_fence_slab_fini();781761}782762
···7474 if (ret)7575 return ERR_PTR(ret);76767777+ bo->prime_shared_count = 1;7778 return &bo->gem_base;7879}79808081int amdgpu_gem_prime_pin(struct drm_gem_object *obj)8182{8283 struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);8383- int ret = 0;8484+ long ret = 0;84858586 ret = amdgpu_bo_reserve(bo, false);8687 if (unlikely(ret != 0))8788 return ret;88899090+ /*9191+ * Wait for all shared fences to complete before we switch to future9292+ * use of exclusive fence on this prime shared bo.9393+ */9494+ ret = reservation_object_wait_timeout_rcu(bo->tbo.resv, true, false,9595+ MAX_SCHEDULE_TIMEOUT);9696+ if (unlikely(ret < 0)) {9797+ DRM_DEBUG_PRIME("Fence wait failed: %li\n", ret);9898+ amdgpu_bo_unreserve(bo);9999+ return ret;100100+ }101101+89102 /* pin buffer into GTT */90103 ret = amdgpu_bo_pin(bo, AMDGPU_GEM_DOMAIN_GTT, NULL);104104+ if (likely(ret == 0))105105+ bo->prime_shared_count++;106106+91107 amdgpu_bo_unreserve(bo);92108 return ret;93109}···118102 return;119103120104 amdgpu_bo_unpin(bo);105105+ if (bo->prime_shared_count)106106+ bo->prime_shared_count--;121107 amdgpu_bo_unreserve(bo);122108}123109
···3434static void amd_sched_wakeup(struct amd_gpu_scheduler *sched);3535static void amd_sched_process_job(struct fence *f, struct fence_cb *cb);36363737-struct kmem_cache *sched_fence_slab;3838-atomic_t sched_fence_slab_ref = ATOMIC_INIT(0);3939-4037/* Initialize a given run queue struct */4138static void amd_sched_rq_init(struct amd_sched_rq *rq)4239{···615618 INIT_LIST_HEAD(&sched->ring_mirror_list);616619 spin_lock_init(&sched->job_list_lock);617620 atomic_set(&sched->hw_rq_count, 0);618618- if (atomic_inc_return(&sched_fence_slab_ref) == 1) {619619- sched_fence_slab = kmem_cache_create(620620- "amd_sched_fence", sizeof(struct amd_sched_fence), 0,621621- SLAB_HWCACHE_ALIGN, NULL);622622- if (!sched_fence_slab)623623- return -ENOMEM;624624- }625621626622 /* Each scheduler will run on a seperate kernel thread */627623 sched->thread = kthread_run(amd_sched_main, sched, sched->name);···635645{636646 if (sched->thread)637647 kthread_stop(sched->thread);638638- rcu_barrier();639639- if (atomic_dec_and_test(&sched_fence_slab_ref))640640- kmem_cache_destroy(sched_fence_slab);641648}
+3-3
drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
···3030struct amd_gpu_scheduler;3131struct amd_sched_rq;32323333-extern struct kmem_cache *sched_fence_slab;3434-extern atomic_t sched_fence_slab_ref;3535-3633/**3734 * A scheduler entity is a wrapper around a job queue or a group3835 * of other entities. Entities take turns emitting jobs from their···141144void amd_sched_entity_fini(struct amd_gpu_scheduler *sched,142145 struct amd_sched_entity *entity);143146void amd_sched_entity_push_job(struct amd_sched_job *sched_job);147147+148148+int amd_sched_fence_slab_init(void);149149+void amd_sched_fence_slab_fini(void);144150145151struct amd_sched_fence *amd_sched_fence_create(146152 struct amd_sched_entity *s_entity, void *owner);
···18061806 /* Use a partial view if it is bigger than available space */18071807 chunk_size = MIN_CHUNK_PAGES;18081808 if (i915_gem_object_is_tiled(obj))18091809- chunk_size = max(chunk_size, tile_row_pages(obj));18091809+ chunk_size = roundup(chunk_size, tile_row_pages(obj));1810181018111811 memset(&view, 0, sizeof(view));18121812 view.type = I915_GGTT_VIEW_PARTIAL;···35433543 if (view->type == I915_GGTT_VIEW_NORMAL)35443544 vma = i915_gem_object_ggtt_pin(obj, view, 0, alignment,35453545 PIN_MAPPABLE | PIN_NONBLOCK);35463546- if (IS_ERR(vma))35473547- vma = i915_gem_object_ggtt_pin(obj, view, 0, alignment, 0);35463546+ if (IS_ERR(vma)) {35473547+ struct drm_i915_private *i915 = to_i915(obj->base.dev);35483548+ unsigned int flags;35493549+35503550+ /* Valleyview is definitely limited to scanning out the first35513551+ * 512MiB. Lets presume this behaviour was inherited from the35523552+ * g4x display engine and that all earlier gen are similarly35533553+ * limited. Testing suggests that it is a little more35543554+ * complicated than this. For example, Cherryview appears quite35553555+ * happy to scanout from anywhere within its global aperture.35563556+ */35573557+ flags = 0;35583558+ if (HAS_GMCH_DISPLAY(i915))35593559+ flags = PIN_MAPPABLE;35603560+ vma = i915_gem_object_ggtt_pin(obj, view, 0, alignment, flags);35613561+ }35483562 if (IS_ERR(vma))35493563 goto err_unpin_display;35503564
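The partial-view fix above replaces max() with roundup(): for a tiled object the chunk size must be a *multiple* of the tile row size, not merely at least as large, or a view can start mid-row. The arithmetic difference is easy to check in isolation (helpers written out rather than using the kernel macros):

```c
#include <assert.h>

/* round x up to the next multiple of step (step > 0), as roundup() does */
static unsigned int roundup_u(unsigned int x, unsigned int step)
{
	return ((x + step - 1) / step) * step;
}

static unsigned int max_u(unsigned int a, unsigned int b)
{
	return a > b ? a : b;
}
```

With a minimum chunk of 32 pages and a hypothetical tile row of 24 pages, max() keeps 32 (not row-aligned) while roundup() yields 48, a whole number of rows.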
+8
drivers/gpu/drm/i915/i915_gem_execbuffer.c
···12811281 return ctx;12821282}1283128312841284+static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj)12851285+{12861286+ return !(obj->cache_level == I915_CACHE_NONE ||12871287+ obj->cache_level == I915_CACHE_WT);12881288+}12891289+12841290void i915_vma_move_to_active(struct i915_vma *vma,12851291 struct drm_i915_gem_request *req,12861292 unsigned int flags)···1317131113181312 /* update for the implicit flush after a batch */13191313 obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;13141314+ if (!obj->cache_dirty && gpu_write_needs_clflush(obj))13151315+ obj->cache_dirty = true;13201316 }1321131713221318 if (flags & EXEC_OBJECT_NEEDS_FENCE)
+22-8
drivers/gpu/drm/i915/intel_bios.c
···11431143 if (!child)11441144 return;1145114511461146- aux_channel = child->raw[25];11461146+ aux_channel = child->common.aux_channel;11471147 ddc_pin = child->common.ddc_pin;1148114811491149 is_dvi = child->common.device_type & DEVICE_TYPE_TMDS_DVI_SIGNALING;···16731673 return false;16741674}1675167516761676-bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *dev_priv, enum port port)16761676+static bool child_dev_is_dp_dual_mode(const union child_device_config *p_child,16771677+ enum port port)16771678{16781679 static const struct {16791680 u16 dp, hdmi;···16881687 [PORT_D] = { DVO_PORT_DPD, DVO_PORT_HDMID, },16891688 [PORT_E] = { DVO_PORT_DPE, DVO_PORT_HDMIE, },16901689 };16911691- int i;1692169016931691 if (port == PORT_A || port >= ARRAY_SIZE(port_mapping))16941692 return false;1695169316961696- if (!dev_priv->vbt.child_dev_num)16941694+ if ((p_child->common.device_type & DEVICE_TYPE_DP_DUAL_MODE_BITS) !=16951695+ (DEVICE_TYPE_DP_DUAL_MODE & DEVICE_TYPE_DP_DUAL_MODE_BITS))16971696 return false;16971697+16981698+ if (p_child->common.dvo_port == port_mapping[port].dp)16991699+ return true;17001700+17011701+ /* Only accept a HDMI dvo_port as DP++ if it has an AUX channel */17021702+ if (p_child->common.dvo_port == port_mapping[port].hdmi &&17031703+ p_child->common.aux_channel != 0)17041704+ return true;17051705+17061706+ return false;17071707+}17081708+17091709+bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *dev_priv,17101710+ enum port port)17111711+{17121712+ int i;1698171316991714 for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {17001715 const union child_device_config *p_child =17011716 &dev_priv->vbt.child_dev[i];1702171717031703- if ((p_child->common.dvo_port == port_mapping[port].dp ||17041704- p_child->common.dvo_port == port_mapping[port].hdmi) &&17051705- (p_child->common.device_type & DEVICE_TYPE_DP_DUAL_MODE_BITS) ==17061706- (DEVICE_TYPE_DP_DUAL_MODE & DEVICE_TYPE_DP_DUAL_MODE_BITS))17181718+ if 
(child_dev_is_dp_dual_mode(p_child, port))17071719 return true;17081720 }17091721
+26-3
drivers/gpu/drm/i915/intel_display.c
···1024310243 bxt_set_cdclk(to_i915(dev), req_cdclk);1024410244}10245102451024610246+static int bdw_adjust_min_pipe_pixel_rate(struct intel_crtc_state *crtc_state,1024710247+ int pixel_rate)1024810248+{1024910249+ struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);1025010250+1025110251+ /* pixel rate mustn't exceed 95% of cdclk with IPS on BDW */1025210252+ if (IS_BROADWELL(dev_priv) && crtc_state->ips_enabled)1025310253+ pixel_rate = DIV_ROUND_UP(pixel_rate * 100, 95);1025410254+1025510255+ /* BSpec says "Do not use DisplayPort with CDCLK less than1025610256+ * 432 MHz, audio enabled, port width x4, and link rate1025710257+ * HBR2 (5.4 GHz), or else there may be audio corruption or1025810258+ * screen corruption."1025910259+ */1026010260+ if (intel_crtc_has_dp_encoder(crtc_state) &&1026110261+ crtc_state->has_audio &&1026210262+ crtc_state->port_clock >= 540000 &&1026310263+ crtc_state->lane_count == 4)1026410264+ pixel_rate = max(432000, pixel_rate);1026510265+1026610266+ return pixel_rate;1026710267+}1026810268+1024610269/* compute the max rate for new configuration */1024710270static int ilk_max_pixel_rate(struct drm_atomic_state *state)1024810271{···10291102681029210269 pixel_rate = ilk_pipe_pixel_rate(crtc_state);10293102701029410294- /* pixel rate mustn't exceed 95% of cdclk with IPS on BDW */1029510295- if (IS_BROADWELL(dev_priv) && crtc_state->ips_enabled)1029610296- pixel_rate = DIV_ROUND_UP(pixel_rate * 100, 95);1027110271+ if (IS_BROADWELL(dev_priv) || IS_GEN9(dev_priv))1027210272+ pixel_rate = bdw_adjust_min_pipe_pixel_rate(crtc_state,1027310273+ pixel_rate);10297102741029810275 intel_state->min_pixclk[i] = pixel_rate;1029910276 }
-10
drivers/gpu/drm/i915/intel_dp.c
···44634463intel_dp_detect(struct drm_connector *connector, bool force)44644464{44654465 struct intel_dp *intel_dp = intel_attached_dp(connector);44664466- struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);44674467- struct intel_encoder *intel_encoder = &intel_dig_port->base;44684466 enum drm_connector_status status = connector->status;4469446744704468 DRM_DEBUG_KMS("[CONNECTOR:%d:%s]\n",44714469 connector->base.id, connector->name);44724472-44734473- if (intel_dp->is_mst) {44744474- /* MST devices are disconnected from a monitor POV */44754475- intel_dp_unset_edid(intel_dp);44764476- if (intel_encoder->type != INTEL_OUTPUT_EDP)44774477- intel_encoder->type = INTEL_OUTPUT_DP;44784478- return connector_status_disconnected;44794479- }4480447044814471 /* If full detect is not performed yet, do a full detect */44824472 if (!intel_dp->detect_done)
+48-36
drivers/gpu/drm/i915/intel_hdmi.c
···17991799 intel_hdmi->aspect_ratio = HDMI_PICTURE_ASPECT_NONE;18001800}1801180118021802+static u8 intel_hdmi_ddc_pin(struct drm_i915_private *dev_priv,18031803+ enum port port)18041804+{18051805+ const struct ddi_vbt_port_info *info =18061806+ &dev_priv->vbt.ddi_port_info[port];18071807+ u8 ddc_pin;18081808+18091809+ if (info->alternate_ddc_pin) {18101810+ DRM_DEBUG_KMS("Using DDC pin 0x%x for port %c (VBT)\n",18111811+ info->alternate_ddc_pin, port_name(port));18121812+ return info->alternate_ddc_pin;18131813+ }18141814+18151815+ switch (port) {18161816+ case PORT_B:18171817+ if (IS_BROXTON(dev_priv))18181818+ ddc_pin = GMBUS_PIN_1_BXT;18191819+ else18201820+ ddc_pin = GMBUS_PIN_DPB;18211821+ break;18221822+ case PORT_C:18231823+ if (IS_BROXTON(dev_priv))18241824+ ddc_pin = GMBUS_PIN_2_BXT;18251825+ else18261826+ ddc_pin = GMBUS_PIN_DPC;18271827+ break;18281828+ case PORT_D:18291829+ if (IS_CHERRYVIEW(dev_priv))18301830+ ddc_pin = GMBUS_PIN_DPD_CHV;18311831+ else18321832+ ddc_pin = GMBUS_PIN_DPD;18331833+ break;18341834+ default:18351835+ MISSING_CASE(port);18361836+ ddc_pin = GMBUS_PIN_DPB;18371837+ break;18381838+ }18391839+18401840+ DRM_DEBUG_KMS("Using DDC pin 0x%x for port %c (platform default)\n",18411841+ ddc_pin, port_name(port));18421842+18431843+ return ddc_pin;18441844+}18451845+18021846void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port,18031847 struct intel_connector *intel_connector)18041848{···18521808 struct drm_device *dev = intel_encoder->base.dev;18531809 struct drm_i915_private *dev_priv = to_i915(dev);18541810 enum port port = intel_dig_port->port;18551855- uint8_t alternate_ddc_pin;1856181118571812 DRM_DEBUG_KMS("Adding HDMI connector on port %c\n",18581813 port_name(port));···18691826 connector->doublescan_allowed = 0;18701827 connector->stereo_allowed = 1;1871182818291829+ intel_hdmi->ddc_bus = intel_hdmi_ddc_pin(dev_priv, port);18301830+18721831 switch (port) {18731832 case PORT_B:18741874- if 
(IS_BROXTON(dev_priv))18751875- intel_hdmi->ddc_bus = GMBUS_PIN_1_BXT;18761876- else18771877- intel_hdmi->ddc_bus = GMBUS_PIN_DPB;18781833 /*18791834 * On BXT A0/A1, sw needs to activate DDIA HPD logic and18801835 * interrupts to check the external panel connection.···18831842 intel_encoder->hpd_pin = HPD_PORT_B;18841843 break;18851844 case PORT_C:18861886- if (IS_BROXTON(dev_priv))18871887- intel_hdmi->ddc_bus = GMBUS_PIN_2_BXT;18881888- else18891889- intel_hdmi->ddc_bus = GMBUS_PIN_DPC;18901845 intel_encoder->hpd_pin = HPD_PORT_C;18911846 break;18921847 case PORT_D:18931893- if (WARN_ON(IS_BROXTON(dev_priv)))18941894- intel_hdmi->ddc_bus = GMBUS_PIN_DISABLED;18951895- else if (IS_CHERRYVIEW(dev_priv))18961896- intel_hdmi->ddc_bus = GMBUS_PIN_DPD_CHV;18971897- else18981898- intel_hdmi->ddc_bus = GMBUS_PIN_DPD;18991848 intel_encoder->hpd_pin = HPD_PORT_D;19001849 break;19011850 case PORT_E:19021902- /* On SKL PORT E doesn't have seperate GMBUS pin19031903- * We rely on VBT to set a proper alternate GMBUS pin. */19041904- alternate_ddc_pin =19051905- dev_priv->vbt.ddi_port_info[PORT_E].alternate_ddc_pin;19061906- switch (alternate_ddc_pin) {19071907- case DDC_PIN_B:19081908- intel_hdmi->ddc_bus = GMBUS_PIN_DPB;19091909- break;19101910- case DDC_PIN_C:19111911- intel_hdmi->ddc_bus = GMBUS_PIN_DPC;19121912- break;19131913- case DDC_PIN_D:19141914- intel_hdmi->ddc_bus = GMBUS_PIN_DPD;19151915- break;19161916- default:19171917- MISSING_CASE(alternate_ddc_pin);19181918- }19191851 intel_encoder->hpd_pin = HPD_PORT_E;19201852 break;19211921- case PORT_A:19221922- intel_encoder->hpd_pin = HPD_PORT_A;19231923- /* Internal port only for eDP. */19241853 default:19251925- BUG();18541854+ MISSING_CASE(port);18551855+ return;19261856 }1927185719281858 if (IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) {
+3-1
drivers/gpu/drm/i915/intel_runtime_pm.c
···1139113911401140 intel_power_sequencer_reset(dev_priv);1141114111421142- intel_hpd_poll_init(dev_priv);11421142+ /* Prevent us from re-enabling polling by accident in late suspend */11431143+ if (!dev_priv->drm.dev->power.is_suspended)11441144+ intel_hpd_poll_init(dev_priv);11431145}1144114611451147static void vlv_display_power_well_enable(struct drm_i915_private *dev_priv,
+1-1
drivers/gpu/drm/i915/intel_sprite.c
···358358 int plane = intel_plane->plane;359359 u32 sprctl;360360 u32 sprsurf_offset, linear_offset;361361- unsigned int rotation = dplane->state->rotation;361361+ unsigned int rotation = plane_state->base.rotation;362362 const struct drm_intel_sprite_colorkey *key = &plane_state->ckey;363363 int crtc_x = plane_state->base.dst.x1;364364 int crtc_y = plane_state->base.dst.y1;
···68686969 ipu_dc_disable_channel(ipu_crtc->dc);7070 ipu_di_disable(ipu_crtc->di);7171+ /*7272+ * Planes must be disabled before DC clock is removed, as otherwise the7373+ * attached IDMACs will be left in undefined state, possibly hanging7474+ * the IPU or even system.7575+ */7676+ drm_atomic_helper_disable_planes_on_crtc(old_crtc_state, false);7177 ipu_dc_disable(ipu);72787379 spin_lock_irq(&crtc->dev->event_lock);···8276 crtc->state->event = NULL;8377 }8478 spin_unlock_irq(&crtc->dev->event_lock);8585-8686- /* always disable planes on the CRTC */8787- drm_atomic_helper_disable_planes_on_crtc(old_crtc_state, true);88798980 drm_crtc_vblank_off(crtc);9081}
···223223 plane_cnt++;224224 }225225226226- /*227227- * If there is no base layer, enable border color.228228- * Although it's not possbile in current blend logic,229229- * put it here as a reminder.230230- */231231- if (!pstates[STAGE_BASE] && plane_cnt) {226226+ if (!pstates[STAGE_BASE]) {232227 ctl_blend_flags |= MDP5_CTL_BLEND_OP_FLAG_BORDER_OUT;233228 DBG("Border Color is enabled");234229 }···360365 return pa->state->zpos - pb->state->zpos;361366}362367368368+/* is there a helper for this? */369369+static bool is_fullscreen(struct drm_crtc_state *cstate,370370+ struct drm_plane_state *pstate)371371+{372372+ return (pstate->crtc_x <= 0) && (pstate->crtc_y <= 0) &&373373+ ((pstate->crtc_x + pstate->crtc_w) >= cstate->mode.hdisplay) &&374374+ ((pstate->crtc_y + pstate->crtc_h) >= cstate->mode.vdisplay);375375+}376376+363377static int mdp5_crtc_atomic_check(struct drm_crtc *crtc,364378 struct drm_crtc_state *state)365379{···379375 struct plane_state pstates[STAGE_MAX + 1];380376 const struct mdp5_cfg_hw *hw_cfg;381377 const struct drm_plane_state *pstate;382382- int cnt = 0, i;378378+ int cnt = 0, base = 0, i;383379384380 DBG("%s: check", mdp5_crtc->name);385381386386- /* verify that there are not too many planes attached to crtc387387- * and that we don't have conflicting mixer stages:388388- */389389- hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg);390382 drm_atomic_crtc_state_for_each_plane_state(plane, pstate, state) {391391- if (cnt >= (hw_cfg->lm.nb_stages)) {392392- dev_err(dev->dev, "too many planes!\n");393393- return -EINVAL;394394- }395395-396396-397383 pstates[cnt].plane = plane;398384 pstates[cnt].state = to_mdp5_plane_state(pstate);399385···393399 /* assign a stage based on sorted zpos property */394400 sort(pstates, cnt, sizeof(pstates[0]), pstate_cmp, NULL);395401402402+ /* if the bottom-most layer is not fullscreen, we need to use403403+ * it for solid-color:404404+ */405405+ if ((cnt > 0) && !is_fullscreen(state, &pstates[0].state->base))406406+ 
base++;407407+408408+ /* verify that there are not too many planes attached to crtc409409+ * and that we don't have conflicting mixer stages:410410+ */411411+ hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg);412412+413413+ if ((cnt + base) >= hw_cfg->lm.nb_stages) {414414+ dev_err(dev->dev, "too many planes!\n");415415+ return -EINVAL;416416+ }417417+396418 for (i = 0; i < cnt; i++) {397397- pstates[i].state->stage = STAGE_BASE + i;419419+ pstates[i].state->stage = STAGE_BASE + i + base;398420 DBG("%s: assign pipe %s on stage=%d", mdp5_crtc->name,399421 pipe2name(mdp5_plane_pipe(pstates[i].plane)),400422 pstates[i].state->stage);
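The new is_fullscreen() helper above decides whether the bottom-most plane covers the whole mode; if it doesn't, an extra base stage is reserved for the border colour. Its containment test can be exercised standalone (bare ints stand in for the drm plane/crtc state structs):

```c
#include <assert.h>
#include <stdbool.h>

/* same geometry test as is_fullscreen(), on bare coordinates */
static bool covers_mode(int crtc_x, int crtc_y, int crtc_w, int crtc_h,
			int hdisplay, int vdisplay)
{
	return (crtc_x <= 0) && (crtc_y <= 0) &&
	       ((crtc_x + crtc_w) >= hdisplay) &&
	       ((crtc_y + crtc_h) >= vdisplay);
}
```

A plane that starts at or before the origin and extends past both display edges counts as fullscreen; anything inset on any side does not.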
+3-6
drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c
···292292 format = to_mdp_format(msm_framebuffer_format(state->fb));293293 if (MDP_FORMAT_IS_YUV(format) &&294294 !pipe_supports_yuv(mdp5_plane->caps)) {295295- dev_err(plane->dev->dev,296296- "Pipe doesn't support YUV\n");295295+ DBG("Pipe doesn't support YUV\n");297296298297 return -EINVAL;299298 }···300301 if (!(mdp5_plane->caps & MDP_PIPE_CAP_SCALE) &&301302 (((state->src_w >> 16) != state->crtc_w) ||302303 ((state->src_h >> 16) != state->crtc_h))) {303303- dev_err(plane->dev->dev,304304- "Pipe doesn't support scaling (%dx%d -> %dx%d)\n",304304+ DBG("Pipe doesn't support scaling (%dx%d -> %dx%d)\n",305305 state->src_w >> 16, state->src_h >> 16,306306 state->crtc_w, state->crtc_h);307307···311313 vflip = !!(state->rotation & DRM_REFLECT_Y);312314 if ((vflip && !(mdp5_plane->caps & MDP_PIPE_CAP_VFLIP)) ||313315 (hflip && !(mdp5_plane->caps & MDP_PIPE_CAP_HFLIP))) {314314- dev_err(plane->dev->dev,315315- "Pipe doesn't support flip\n");316316+ DBG("Pipe doesn't support flip\n");316317317318 return -EINVAL;318319 }
+1-1
drivers/gpu/drm/msm/msm_drv.c
···228228 flush_workqueue(priv->atomic_wq);229229 destroy_workqueue(priv->atomic_wq);230230231231- if (kms)231231+ if (kms && kms->funcs)232232 kms->funcs->destroy(kms);233233234234 if (gpu) {
···931931{932932 struct radeon_connector *radeon_connector = to_radeon_connector(connector);933933934934- if (radeon_connector->ddc_bus->has_aux) {934934+ if (radeon_connector->ddc_bus && radeon_connector->ddc_bus->has_aux) {935935 drm_dp_aux_unregister(&radeon_connector->ddc_bus->aux);936936 radeon_connector->ddc_bus->has_aux = false;937937 }
+13
drivers/gpu/drm/radeon/radeon_device.c
···104104 "LAST",105105};106106107107+#if defined(CONFIG_VGA_SWITCHEROO)108108+bool radeon_has_atpx_dgpu_power_cntl(void);109109+bool radeon_is_atpx_hybrid(void);110110+#else111111+static inline bool radeon_has_atpx_dgpu_power_cntl(void) { return false; }112112+static inline bool radeon_is_atpx_hybrid(void) { return false; }113113+#endif114114+107115#define RADEON_PX_QUIRK_DISABLE_PX (1 << 0)108116#define RADEON_PX_QUIRK_LONG_WAKEUP (1 << 1)109117···167159 }168160169161 if (rdev->px_quirk_flags & RADEON_PX_QUIRK_DISABLE_PX)162162+ rdev->flags &= ~RADEON_IS_PX;163163+164164+ /* disable PX if the system doesn't support dGPU power control or hybrid gfx */165165+ if (!radeon_is_atpx_hybrid() &&166166+ !radeon_has_atpx_dgpu_power_cntl())170167 rdev->flags &= ~RADEON_IS_PX;171168}172169
+2-2
drivers/gpu/drm/sun4i/sun4i_drv.c
···142142143143 /* Create our layers */144144 drv->layers = sun4i_layers_init(drm);145145- if (!drv->layers) {145145+ if (IS_ERR(drv->layers)) {146146 dev_err(drm->dev, "Couldn't create the planes\n");147147- ret = -EINVAL;147147+ ret = PTR_ERR(drv->layers);148148 goto free_drm;149149 }150150
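The sun4i fix above works because sun4i_layers_init() reports failure through ERR_PTR(), not NULL, so the caller must test with IS_ERR() and recover the errno with PTR_ERR(). The encoding is just a small negative errno cast into the top of the address space; a simplified, self-contained sketch of the three helpers (lowercase names to mark them as stand-ins for the kernel's err.h macros):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_ERRNO 4095

static void *err_ptr(long error)      { return (void *)error; }
static long  ptr_err(const void *ptr) { return (long)ptr; }

/* true for pointers in the top MAX_ERRNO bytes of the address space */
static bool is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

Checking `if (!ptr)` against such a value is always false, which is why the original `!drv->layers` test could never catch the error.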
+8-12
drivers/gpu/drm/sun4i/sun4i_rgb.c
···152152153153 DRM_DEBUG_DRIVER("Enabling RGB output\n");154154155155- if (!IS_ERR(tcon->panel)) {155155+ if (!IS_ERR(tcon->panel))156156 drm_panel_prepare(tcon->panel);157157- drm_panel_enable(tcon->panel);158158- }159159-160160- /* encoder->bridge can be NULL; drm_bridge_enable checks for it */161161- drm_bridge_enable(encoder->bridge);162157163158 sun4i_tcon_channel_enable(tcon, 0);159159+160160+ if (!IS_ERR(tcon->panel))161161+ drm_panel_enable(tcon->panel);164162}165163166164static void sun4i_rgb_encoder_disable(struct drm_encoder *encoder)···169171170172 DRM_DEBUG_DRIVER("Disabling RGB output\n");171173174174+ if (!IS_ERR(tcon->panel))175175+ drm_panel_disable(tcon->panel);176176+172177 sun4i_tcon_channel_disable(tcon, 0);173178174174- /* encoder->bridge can be NULL; drm_bridge_disable checks for it */175175- drm_bridge_disable(encoder->bridge);176176-177177- if (!IS_ERR(tcon->panel)) {178178- drm_panel_disable(tcon->panel);179179+ if (!IS_ERR(tcon->panel))179180 drm_panel_unprepare(tcon->panel);180180- }181181}182182183183static void sun4i_rgb_encoder_mode_set(struct drm_encoder *encoder,
···961961{962962 int ret = 0;963963964964- dev_set_name(&child_device_obj->device, "vmbus-%pUl",964964+ dev_set_name(&child_device_obj->device, "%pUl",965965 child_device_obj->channel->offermsg.offer.if_instance.b);966966967967 child_device_obj->device.bus = &hv_bus;
-1
drivers/i2c/Kconfig
···59596060config I2C_MUX6161 tristate "I2C bus multiplexing support"6262- depends on HAS_IOMEM6362 help6463 Say Y here if you want the I2C core to support the ability to6564 handle multiplexed I2C bus topologies, by presenting each
+1-1
drivers/i2c/busses/i2c-digicolor.c
···347347348348 ret = i2c_add_adapter(&i2c->adap);349349 if (ret < 0) {350350- clk_unprepare(i2c->clk);350350+ clk_disable_unprepare(i2c->clk);351351 return ret;352352 }353353
+1
drivers/i2c/muxes/Kconfig
···63636464config I2C_MUX_REG6565 tristate "Register-based I2C multiplexer"6666+ depends on HAS_IOMEM6667 help6768 If you say yes to this option, support will be included for a6869 register based I2C multiplexer. This driver provides access to
+20-2
drivers/i2c/muxes/i2c-demux-pinctrl.c
···6969 goto err_with_revert;7070 }71717272- p = devm_pinctrl_get_select(adap->dev.parent, priv->bus_name);7272+ /*7373+ * Check if there are pinctrl states at all. Note: we can't use7474+ * devm_pinctrl_get_select() because we need to distinguish between7575+ * the -ENODEV from devm_pinctrl_get() and pinctrl_lookup_state().7676+ */7777+ p = devm_pinctrl_get(adap->dev.parent);7378 if (IS_ERR(p)) {7479 ret = PTR_ERR(p);7575- goto err_with_put;8080+ /* continue if just no pinctrl states (e.g. i2c-gpio), otherwise exit */8181+ if (ret != -ENODEV)8282+ goto err_with_put;8383+ } else {8484+ /* there are states. check and use them */8585+ struct pinctrl_state *s = pinctrl_lookup_state(p, priv->bus_name);8686+8787+ if (IS_ERR(s)) {8888+ ret = PTR_ERR(s);8989+ goto err_with_put;9090+ }9191+ ret = pinctrl_select_state(p, s);9292+ if (ret < 0)9393+ goto err_with_put;7694 }77957896 priv->chan[new_chan].parent_adap = adap;
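The i2c-demux change above splits devm_pinctrl_get_select() into its two halves so that "no pinctrl at all" (-ENODEV from the get, e.g. i2c-gpio) can be tolerated, while a missing named state is still fatal. The resulting decision table reduces to a small pure function (stubbed errno values, not the real pinctrl API):

```c
#include <assert.h>

#define ENODEV_ERR (-19)
#define EINVAL_ERR (-22)

/*
 * get_err:   result of the pinctrl get (0 = ok, negative = error)
 * state_err: result of looking up the named state (used only if get ok)
 * Returns 0 to continue, negative to abort: -ENODEV from the get is
 * tolerated, any other failure (including a missing state) is not.
 */
static int demux_pinctrl_policy(int get_err, int state_err)
{
	if (get_err < 0)
		return get_err == ENODEV_ERR ? 0 : get_err;
	if (state_err < 0)
		return state_err;
	return 0;
}
```

The combined devm_pinctrl_get_select() hides which half produced the -ENODEV, which is precisely why the patch has to open-code the two steps.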
···775775 }776776 mutex_unlock(&affinity->lock);777777}778778-779779-int hfi1_set_sdma_affinity(struct hfi1_devdata *dd, const char *buf,780780- size_t count)781781-{782782- struct hfi1_affinity_node *entry;783783- cpumask_var_t mask;784784- int ret, i;785785-786786- mutex_lock(&node_affinity.lock);787787- entry = node_affinity_lookup(dd->node);788788-789789- if (!entry) {790790- ret = -EINVAL;791791- goto unlock;792792- }793793-794794- ret = zalloc_cpumask_var(&mask, GFP_KERNEL);795795- if (!ret) {796796- ret = -ENOMEM;797797- goto unlock;798798- }799799-800800- ret = cpulist_parse(buf, mask);801801- if (ret)802802- goto out;803803-804804- if (!cpumask_subset(mask, cpu_online_mask) || cpumask_empty(mask)) {805805- dd_dev_warn(dd, "Invalid CPU mask\n");806806- ret = -EINVAL;807807- goto out;808808- }809809-810810- /* reset the SDMA interrupt affinity details */811811- init_cpu_mask_set(&entry->def_intr);812812- cpumask_copy(&entry->def_intr.mask, mask);813813-814814- /* Reassign the affinity for each SDMA interrupt. */815815- for (i = 0; i < dd->num_msix_entries; i++) {816816- struct hfi1_msix_entry *msix;817817-818818- msix = &dd->msix_entries[i];819819- if (msix->type != IRQ_SDMA)820820- continue;821821-822822- ret = get_irq_affinity(dd, msix);823823-824824- if (ret)825825- break;826826- }827827-out:828828- free_cpumask_var(mask);829829-unlock:830830- mutex_unlock(&node_affinity.lock);831831- return ret ? ret : strnlen(buf, PAGE_SIZE);832832-}833833-834834-int hfi1_get_sdma_affinity(struct hfi1_devdata *dd, char *buf)835835-{836836- struct hfi1_affinity_node *entry;837837-838838- mutex_lock(&node_affinity.lock);839839- entry = node_affinity_lookup(dd->node);840840-841841- if (!entry) {842842- mutex_unlock(&node_affinity.lock);843843- return -EINVAL;844844- }845845-846846- cpumap_print_to_pagebuf(true, buf, &entry->def_intr.mask);847847- mutex_unlock(&node_affinity.lock);848848- return strnlen(buf, PAGE_SIZE);849849-}
-4
drivers/infiniband/hw/hfi1/affinity.h
···102102/* Release a CPU used by a user process. */103103void hfi1_put_proc_affinity(int);104104105105-int hfi1_get_sdma_affinity(struct hfi1_devdata *dd, char *buf);106106-int hfi1_set_sdma_affinity(struct hfi1_devdata *dd, const char *buf,107107- size_t count);108108-109105struct hfi1_affinity_node {110106 int node;111107 struct cpu_mask_set def_intr;
+9-18
drivers/infiniband/hw/hfi1/chip.c
···
  /* leave shared count at zero for both global and VL15 */
  write_global_credit(dd, vau, vl15buf, 0);

- /* We may need some credits for another VL when sending packets
-  * with the snoop interface. Dividing it down the middle for VL15
-  * and VL0 should suffice.
-  */
- if (unlikely(dd->hfi1_snoop.mode_flag == HFI1_PORT_SNOOP_MODE)) {
-  write_csr(dd, SEND_CM_CREDIT_VL15, (u64)(vl15buf >> 1)
-     << SEND_CM_CREDIT_VL15_DEDICATED_LIMIT_VL_SHIFT);
-  write_csr(dd, SEND_CM_CREDIT_VL, (u64)(vl15buf >> 1)
-     << SEND_CM_CREDIT_VL_DEDICATED_LIMIT_VL_SHIFT);
- } else {
-  write_csr(dd, SEND_CM_CREDIT_VL15, (u64)vl15buf
-     << SEND_CM_CREDIT_VL15_DEDICATED_LIMIT_VL_SHIFT);
- }
+ write_csr(dd, SEND_CM_CREDIT_VL15, (u64)vl15buf
+    << SEND_CM_CREDIT_VL15_DEDICATED_LIMIT_VL_SHIFT);
 }

···
  u32 mask = ~((1U << ppd->lmc) - 1);
  u64 c1 = read_csr(ppd->dd, DCC_CFG_PORT_CONFIG1);

- if (dd->hfi1_snoop.mode_flag)
-  dd_dev_info(dd, "Set lid/lmc while snooping");
-
  c1 &= ~(DCC_CFG_PORT_CONFIG1_TARGET_DLID_SMASK
     | DCC_CFG_PORT_CONFIG1_DLID_MASK_SMASK);
  c1 |= ((ppd->lid & DCC_CFG_PORT_CONFIG1_TARGET_DLID_MASK)
···
  mod_timer(&dd->synth_stats_timer, jiffies + HZ * SYNTH_CNT_TIME);
 }

-#define C_MAX_NAME 13 /* 12 chars + one for /0 */
+#define C_MAX_NAME 16 /* 15 chars + one for /0 */
 static int init_cntrs(struct hfi1_devdata *dd)
 {
  int i, rcv_ctxts, j;
···
  * Any error printing is already done by the init code.
  * On return, we have the chip mapped.
  */
- ret = hfi1_pcie_ddinit(dd, pdev, ent);
+ ret = hfi1_pcie_ddinit(dd, pdev);
  if (ret < 0)
   goto bail_free;
···
  ret = init_rcverr(dd);
  if (ret)
   goto bail_free_cntrs;
+
+ init_completion(&dd->user_comp);
+
+ /* The user refcount starts with one to indicate an active device */
+ atomic_set(&dd->user_refcount, 1);

  goto bail;
+3
drivers/infiniband/hw/hfi1/chip.h
···
 /* DC_DC8051_CFG_MODE.GENERAL bits */
 #define DISABLE_SELF_GUID_CHECK 0x2

+/* Bad L2 frame error code */
+#define BAD_L2_ERR 0x6
+
 /*
  * Eager buffer minimum and maximum sizes supported by the hardware.
  * All power-of-two sizes in between are supported as well.
···
     struct hfi1_devdata,
     user_cdev);

+ if (!atomic_inc_not_zero(&dd->user_refcount))
+  return -ENXIO;
+
  /* Just take a ref now. Not all opens result in a context assign */
  kobject_get(&dd->kobj);
···
  fd->rec_cpu_num = -1; /* no cpu affinity by default */
  fd->mm = current->mm;
  atomic_inc(&fd->mm->mm_count);
+ fp->private_data = fd;
+ } else {
+  fp->private_data = NULL;
+
+  if (atomic_dec_and_test(&dd->user_refcount))
+   complete(&dd->user_comp);
+
+  return -ENOMEM;
  }

- fp->private_data = fd;
-
- return fd ? 0 : -ENOMEM;
+ return 0;
 }

 static long hfi1_file_ioctl(struct file *fp, unsigned int cmd,
···
 done:
  mmdrop(fdata->mm);
  kobject_put(&dd->kobj);
+
+ if (atomic_dec_and_test(&dd->user_refcount))
+  complete(&dd->user_comp);
+
  kfree(fdata);
  return 0;
 }
+34-55
drivers/infiniband/hw/hfi1/hfi.h
···
  u8 etype;
 };

-/*
- * Private data for snoop/capture support.
- */
-struct hfi1_snoop_data {
- int mode_flag;
- struct cdev cdev;
- struct device *class_dev;
- /* protect snoop data */
- spinlock_t snoop_lock;
- struct list_head queue;
- wait_queue_head_t waitq;
- void *filter_value;
- int (*filter_callback)(void *hdr, void *data, void *value);
- u64 dcc_cfg; /* saved value of DCC Cfg register */
-};
-
-/* snoop mode_flag values */
-#define HFI1_PORT_SNOOP_MODE 1U
-#define HFI1_PORT_CAPTURE_MODE 2U
-
 struct rvt_sge_state;

 /*
···
  /* host link state variables */
  struct mutex hls_lock;
  u32 host_link_state;
-
- spinlock_t sdma_alllock ____cacheline_aligned_in_smp;

  u32 lstate; /* logical link state */
···
  char *portcntrnames;
  size_t portcntrnameslen;

- struct hfi1_snoop_data hfi1_snoop;
-
  struct err_info_rcvport err_info_rcvport;
  struct err_info_constraint err_info_rcv_constraint;
  struct err_info_constraint err_info_xmit_constraint;
···
  rhf_rcv_function_ptr normal_rhf_rcv_functions[8];

  /*
-  * Handlers for outgoing data so that snoop/capture does not
-  * have to have its hooks in the send path
+  * Capability to have different send engines simply by changing a
+  * pointer value.
   */
  send_routine process_pio_send;
  send_routine process_dma_send;
···
  spinlock_t aspm_lock;
  /* Number of verbs contexts which have disabled ASPM */
  atomic_t aspm_disabled_cnt;
+ /* Keeps track of user space clients */
+ atomic_t user_refcount;
+ /* Used to wait for outstanding user space clients before dev removal */
+ struct completion user_comp;

  struct hfi1_affinity *affinity;
  struct rhashtable sdma_rht;
···
 extern u32 hfi1_cpulist_count;
 extern unsigned long *hfi1_cpulist;

-extern unsigned int snoop_drop_send;
-extern unsigned int snoop_force_capture;
 int hfi1_init(struct hfi1_devdata *, int);
 int hfi1_count_units(int *npresentp, int *nupp);
 int hfi1_count_active_units(void);
···
 void reset_link_credits(struct hfi1_devdata *dd);
 void assign_remote_cm_au_table(struct hfi1_devdata *dd, u8 vcu);

-int snoop_recv_handler(struct hfi1_packet *packet);
-int snoop_send_dma_handler(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
-      u64 pbc);
-int snoop_send_pio_handler(struct rvt_qp *qp, struct hfi1_pkt_state *ps,
-      u64 pbc);
-void snoop_inline_pio_send(struct hfi1_devdata *dd, struct pio_buf *pbuf,
-      u64 pbc, const void *from, size_t count);
 int set_buffer_control(struct hfi1_pportdata *ppd, struct buffer_control *bc);

 static inline struct hfi1_devdata *dd_from_ppd(struct hfi1_pportdata *ppd)
···
 int hfi1_pcie_init(struct pci_dev *, const struct pci_device_id *);
 void hfi1_pcie_cleanup(struct pci_dev *);
-int hfi1_pcie_ddinit(struct hfi1_devdata *, struct pci_dev *,
-       const struct pci_device_id *);
+int hfi1_pcie_ddinit(struct hfi1_devdata *, struct pci_dev *);
 void hfi1_pcie_ddcleanup(struct hfi1_devdata *);
 void hfi1_pcie_flr(struct hfi1_devdata *);
 int pcie_speeds(struct hfi1_devdata *);
···
 int kdeth_process_eager(struct hfi1_packet *packet);
 int process_receive_invalid(struct hfi1_packet *packet);

-extern rhf_rcv_function_ptr snoop_rhf_rcv_functions[8];
-
 void update_sge(struct rvt_sge_state *ss, u32 length);

 /* global module parameter variables */
···
 #define DRIVER_NAME "hfi1"
 #define HFI1_USER_MINOR_BASE 0
 #define HFI1_TRACE_MINOR 127
-#define HFI1_DIAGPKT_MINOR 128
-#define HFI1_DIAG_MINOR_BASE 129
-#define HFI1_SNOOP_CAPTURE_BASE 200
 #define HFI1_NMINORS 255

 #define PCI_VENDOR_ID_INTEL 0x8086
···
 static inline u64 hfi1_pkt_default_send_ctxt_mask(struct hfi1_devdata *dd,
        u16 ctxt_type)
 {
- u64 base_sc_integrity =
+ u64 base_sc_integrity;
+
+ /* No integrity checks if HFI1_CAP_NO_INTEGRITY is set */
+ if (HFI1_CAP_IS_KSET(NO_INTEGRITY))
+  return 0;
+
+ base_sc_integrity =
  SEND_CTXT_CHECK_ENABLE_DISALLOW_BYPASS_BAD_PKT_LEN_SMASK
  | SEND_CTXT_CHECK_ENABLE_DISALLOW_PBC_STATIC_RATE_CONTROL_SMASK
  | SEND_CTXT_CHECK_ENABLE_DISALLOW_TOO_LONG_BYPASS_PACKETS_SMASK
···
  | SEND_CTXT_CHECK_ENABLE_CHECK_VL_MAPPING_SMASK
  | SEND_CTXT_CHECK_ENABLE_CHECK_OPCODE_SMASK
  | SEND_CTXT_CHECK_ENABLE_CHECK_SLID_SMASK
- | SEND_CTXT_CHECK_ENABLE_CHECK_JOB_KEY_SMASK
  | SEND_CTXT_CHECK_ENABLE_CHECK_VL_SMASK
  | SEND_CTXT_CHECK_ENABLE_CHECK_ENABLE_SMASK;
···
  else
   base_sc_integrity |= HFI1_PKT_KERNEL_SC_INTEGRITY;

- if (is_ax(dd))
-  /* turn off send-side job key checks - A0 */
-  return base_sc_integrity &
-         ~SEND_CTXT_CHECK_ENABLE_CHECK_JOB_KEY_SMASK;
+ /* turn on send-side job key checks if !A0 */
+ if (!is_ax(dd))
+  base_sc_integrity |= SEND_CTXT_CHECK_ENABLE_CHECK_JOB_KEY_SMASK;
+
  return base_sc_integrity;
 }

 static inline u64 hfi1_pkt_base_sdma_integrity(struct hfi1_devdata *dd)
 {
- u64 base_sdma_integrity =
+ u64 base_sdma_integrity;
+
+ /* No integrity checks if HFI1_CAP_NO_INTEGRITY is set */
+ if (HFI1_CAP_IS_KSET(NO_INTEGRITY))
+  return 0;
+
+ base_sdma_integrity =
  SEND_DMA_CHECK_ENABLE_DISALLOW_BYPASS_BAD_PKT_LEN_SMASK
- | SEND_DMA_CHECK_ENABLE_DISALLOW_PBC_STATIC_RATE_CONTROL_SMASK
  | SEND_DMA_CHECK_ENABLE_DISALLOW_TOO_LONG_BYPASS_PACKETS_SMASK
  | SEND_DMA_CHECK_ENABLE_DISALLOW_TOO_LONG_IB_PACKETS_SMASK
  | SEND_DMA_CHECK_ENABLE_DISALLOW_BAD_PKT_LEN_SMASK
···
  | SEND_DMA_CHECK_ENABLE_CHECK_VL_MAPPING_SMASK
  | SEND_DMA_CHECK_ENABLE_CHECK_OPCODE_SMASK
  | SEND_DMA_CHECK_ENABLE_CHECK_SLID_SMASK
- | SEND_DMA_CHECK_ENABLE_CHECK_JOB_KEY_SMASK
  | SEND_DMA_CHECK_ENABLE_CHECK_VL_SMASK
  | SEND_DMA_CHECK_ENABLE_CHECK_ENABLE_SMASK;

- if (is_ax(dd))
-  /* turn off send-side job key checks - A0 */
-  return base_sdma_integrity &
-         ~SEND_DMA_CHECK_ENABLE_CHECK_JOB_KEY_SMASK;
+ if (!HFI1_CAP_IS_KSET(STATIC_RATE_CTRL))
+  base_sdma_integrity |=
+   SEND_DMA_CHECK_ENABLE_DISALLOW_PBC_STATIC_RATE_CONTROL_SMASK;
+
+ /* turn on send-side job key checks if !A0 */
+ if (!is_ax(dd))
+  base_sdma_integrity |=
+   SEND_DMA_CHECK_ENABLE_CHECK_JOB_KEY_SMASK;
+
  return base_sdma_integrity;
 }
+64-40
drivers/infiniband/hw/hfi1/init.c
···
  struct hfi1_ctxtdata *rcd;

  ppd = dd->pport + (i % dd->num_pports);
+
+ /* dd->rcd[i] gets assigned inside the callee */
  rcd = hfi1_create_ctxtdata(ppd, i, dd->node);
  if (!rcd) {
   dd_dev_err(dd,
···
  if (!rcd->sc) {
   dd_dev_err(dd,
    "Unable to allocate kernel send context, failing\n");
-  dd->rcd[rcd->ctxt] = NULL;
-  hfi1_free_ctxtdata(dd, rcd);
   goto nomem;
  }
···
  if (ret < 0) {
   dd_dev_err(dd,
    "Failed to setup kernel receive context, failing\n");
-  sc_free(rcd->sc);
-  dd->rcd[rcd->ctxt] = NULL;
-  hfi1_free_ctxtdata(dd, rcd);
   ret = -EFAULT;
   goto bail;
  }
···
 nomem:
  ret = -ENOMEM;
 bail:
+ if (dd->rcd) {
+  for (i = 0; i < dd->num_rcv_contexts; ++i)
+   hfi1_free_ctxtdata(dd, dd->rcd[i]);
+ }
  kfree(dd->rcd);
  dd->rcd = NULL;
  return ret;
···
      dd->num_rcv_contexts - dd->first_user_ctxt)
   kctxt_ngroups = (dd->rcv_entries.nctxt_extra -
     (dd->num_rcv_contexts - dd->first_user_ctxt));
- rcd = kzalloc(sizeof(*rcd), GFP_KERNEL);
+ rcd = kzalloc_node(sizeof(*rcd), GFP_KERNEL, numa);
  if (rcd) {
   u32 rcvtids, max_entries;
···
  }
  rcd->eager_base = base * dd->rcv_entries.group_size;

- /* Validate and initialize Rcv Hdr Q variables */
- if (rcvhdrcnt % HDRQ_INCREMENT) {
-  dd_dev_err(dd,
-   "ctxt%u: header queue count %d must be divisible by %lu\n",
-   rcd->ctxt, rcvhdrcnt, HDRQ_INCREMENT);
-  goto bail;
- }
  rcd->rcvhdrq_cnt = rcvhdrcnt;
  rcd->rcvhdrqentsize = hfi1_hdrq_entsize;
  /*
···
  INIT_WORK(&ppd->qsfp_info.qsfp_work, qsfp_event);

  mutex_init(&ppd->hls_lock);
- spin_lock_init(&ppd->sdma_alllock);
  spin_lock_init(&ppd->qsfp_info.qsfp_lock);

  ppd->qsfp_info.ppd = ppd;
···
  hfi1_free_devdata(dd);
 }

+static int init_validate_rcvhdrcnt(struct device *dev, uint thecnt)
+{
+ if (thecnt <= HFI1_MIN_HDRQ_EGRBUF_CNT) {
+  hfi1_early_err(dev, "Receive header queue count too small\n");
+  return -EINVAL;
+ }
+
+ if (thecnt > HFI1_MAX_HDRQ_EGRBUF_CNT) {
+  hfi1_early_err(dev,
+          "Receive header queue count cannot be greater than %u\n",
+          HFI1_MAX_HDRQ_EGRBUF_CNT);
+  return -EINVAL;
+ }
+
+ if (thecnt % HDRQ_INCREMENT) {
+  hfi1_early_err(dev, "Receive header queue count %d must be divisible by %lu\n",
+          thecnt, HDRQ_INCREMENT);
+  return -EINVAL;
+ }
+
+ return 0;
+}
+
 static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
  int ret = 0, j, pidx, initfail;
- struct hfi1_devdata *dd = ERR_PTR(-EINVAL);
+ struct hfi1_devdata *dd;
  struct hfi1_pportdata *ppd;

  /* First, lock the non-writable module parameters */
  HFI1_CAP_LOCK();

  /* Validate some global module parameters */
- if (rcvhdrcnt <= HFI1_MIN_HDRQ_EGRBUF_CNT) {
-  hfi1_early_err(&pdev->dev, "Header queue count too small\n");
-  ret = -EINVAL;
+ ret = init_validate_rcvhdrcnt(&pdev->dev, rcvhdrcnt);
+ if (ret)
   goto bail;
- }
- if (rcvhdrcnt > HFI1_MAX_HDRQ_EGRBUF_CNT) {
-  hfi1_early_err(&pdev->dev,
-          "Receive header queue count cannot be greater than %u\n",
-          HFI1_MAX_HDRQ_EGRBUF_CNT);
-  ret = -EINVAL;
-  goto bail;
- }
+
  /* use the encoding function as a sanitization check */
  if (!encode_rcv_header_entry_size(hfi1_hdrq_entsize)) {
   hfi1_early_err(&pdev->dev, "Invalid HdrQ Entry size %u\n",
···
  if (ret)
   goto bail;

- /*
-  * Do device-specific initialization, function table setup, dd
-  * allocation, etc.
-  */
- switch (ent->device) {
- case PCI_DEVICE_ID_INTEL0:
- case PCI_DEVICE_ID_INTEL1:
-  dd = hfi1_init_dd(pdev, ent);
-  break;
- default:
+ if (!(ent->device == PCI_DEVICE_ID_INTEL0 ||
+       ent->device == PCI_DEVICE_ID_INTEL1)) {
   hfi1_early_err(&pdev->dev,
    "Failing on unknown Intel deviceid 0x%x\n",
    ent->device);
   ret = -ENODEV;
+  goto clean_bail;
  }

- if (IS_ERR(dd))
+ /*
+  * Do device-specific initialization, function table setup, dd
+  * allocation, etc.
+  */
+ dd = hfi1_init_dd(pdev, ent);
+
+ if (IS_ERR(dd)) {
   ret = PTR_ERR(dd);
-  if (ret)
   goto clean_bail; /* error already printed */
+ }

  ret = create_workqueues(dd);
  if (ret)
···
  return ret;
 }

+static void wait_for_clients(struct hfi1_devdata *dd)
+{
+ /*
+  * Remove the device init value and complete the device if there are
+  * no clients, or wait for active clients to finish.
+  */
+ if (atomic_dec_and_test(&dd->user_refcount))
+  complete(&dd->user_comp);
+
+ wait_for_completion(&dd->user_comp);
+}
+
 static void remove_one(struct pci_dev *pdev)
 {
  struct hfi1_devdata *dd = pci_get_drvdata(pdev);

  /* close debugfs files before ib unregister */
  hfi1_dbg_ibdev_exit(&dd->verbs_dev);
+
+ /* remove the /dev hfi1 interface */
+ hfi1_device_remove(dd);
+
+ /* wait for existing user space clients to finish */
+ wait_for_clients(dd);
+
  /* unregister from IB core */
  hfi1_unregister_ib_device(dd);
···
  /* wait until all of our (qsfp) queue_work() calls complete */
  flush_workqueue(ib_wq);
-
- hfi1_device_remove(dd);

  postinit_cleanup(dd);
 }
+1-2
drivers/infiniband/hw/hfi1/pcie.c
···
  * fields required to re-initialize after a chip reset, or for
  * various other purposes
  */
-int hfi1_pcie_ddinit(struct hfi1_devdata *dd, struct pci_dev *pdev,
-       const struct pci_device_id *ent)
+int hfi1_pcie_ddinit(struct hfi1_devdata *dd, struct pci_dev *pdev)
 {
  unsigned long len;
  resource_size_t addr;
+3-10
drivers/infiniband/hw/hfi1/pio.c
···
 void set_pio_integrity(struct send_context *sc)
 {
  struct hfi1_devdata *dd = sc->dd;
- u64 reg = 0;
  u32 hw_context = sc->hw_context;
  int type = sc->type;

- /*
-  * No integrity checks if HFI1_CAP_NO_INTEGRITY is set, or if
-  * we're snooping.
-  */
- if (likely(!HFI1_CAP_IS_KSET(NO_INTEGRITY)) &&
-     dd->hfi1_snoop.mode_flag != HFI1_PORT_SNOOP_MODE)
-  reg = hfi1_pkt_default_send_ctxt_mask(dd, type);
-
- write_kctxt_csr(dd, hw_context, SC(CHECK_ENABLE), reg);
+ write_kctxt_csr(dd, hw_context,
+   SC(CHECK_ENABLE),
+   hfi1_pkt_default_send_ctxt_mask(dd, type));
 }

 static u32 get_buffers_allocated(struct send_context *sc)
···
  if (qp->sq.queue) {
   __rxe_do_task(&qp->comp.task);
   __rxe_do_task(&qp->req.task);
+  rxe_queue_reset(qp->sq.queue);
  }

  /* cleanup attributes */
···
 {
  qp->req.state = QP_STATE_ERROR;
  qp->resp.state = QP_STATE_ERROR;
+ qp->attr.qp_state = IB_QPS_ERR;

  /* drain work and packet queues */
  rxe_run_task(&qp->resp.task, 1);
+9
drivers/infiniband/sw/rxe/rxe_queue.c
···
  return -EINVAL;
 }

+inline void rxe_queue_reset(struct rxe_queue *q)
+{
+ /* queue is comprised of a header and the memory
+  * of the actual queue. See "struct rxe_queue_buf" in rxe_queue.h
+  * reset only the queue itself and not the management header
+  */
+ memset(q->buf->data, 0, q->buf_size - sizeof(struct rxe_queue_buf));
+}
+
 struct rxe_queue *rxe_queue_init(struct rxe_dev *rxe,
     int *num_elem,
     unsigned int elem_size)
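The sizing in `rxe_queue_reset()` above is the interesting part: the shared buffer begins with a management header, so the `memset()` length is the total buffer size minus the header size, leaving producer/consumer bookkeeping intact. A stand-alone sketch with simplified stand-in structures (layouts are illustrative, not the kernel's):

```c
#include <stddef.h>
#include <string.h>

/* The shared buffer starts with a management header, followed by the
 * element storage as a flexible array. */
struct queue_buf {
	unsigned int producer;
	unsigned int consumer;
	char data[];
};

struct queue {
	struct queue_buf *buf;
	size_t buf_size;	/* header + element storage */
};

/* Mirror of what rxe_queue_reset() does: zero only the element storage,
 * leaving the management header untouched. */
static void queue_reset(struct queue *q)
{
	memset(q->buf->data, 0, q->buf_size - sizeof(struct queue_buf));
}
```

Zeroing the whole buffer instead would also wipe the indices the other side of the mapping may still be reading, which is why the header is excluded.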
+2
drivers/infiniband/sw/rxe/rxe_queue.h
···
     size_t buf_size,
     struct rxe_mmap_info **ip_p);

+void rxe_queue_reset(struct rxe_queue *q);
+
 struct rxe_queue *rxe_queue_init(struct rxe_dev *rxe,
     int *num_elem,
     unsigned int elem_size);
+13-8
drivers/infiniband/sw/rxe/rxe_req.c
···
     qp->req.wqe_index);
   wqe->state = wqe_state_done;
   wqe->status = IB_WC_SUCCESS;
-  goto complete;
+  __rxe_do_task(&qp->comp.task);
+  return 0;
   }
   payload = mtu;
  }
···
  wqe->status = IB_WC_LOC_PROT_ERR;
  wqe->state = wqe_state_error;

-complete:
- if (qp_type(qp) != IB_QPT_RC) {
-  while (rxe_completer(qp) == 0)
-   ;
- }
-
- return 0;
+ /*
+  * IBA Spec. Section 10.7.3.1 SIGNALED COMPLETIONS
+  * ---------8<---------8<-------------
+  * ...Note that if a completion error occurs, a Work Completion
+  * will always be generated, even if the signaling
+  * indicator requests an Unsignaled Completion.
+  * ---------8<---------8<-------------
+  */
+ wqe->wr.send_flags |= IB_SEND_SIGNALED;
+ __rxe_do_task(&qp->comp.task);
+ return -EAGAIN;

 exit:
  return -EAGAIN;
+7-6
drivers/mailbox/pcc.c
···
 #include <linux/mailbox_controller.h>
 #include <linux/mailbox_client.h>
 #include <linux/io-64-nonatomic-lo-hi.h>
+#include <acpi/pcc.h>

 #include "mailbox.h"
···
  if (chan->txdone_method == TXDONE_BY_POLL && cl->knows_txdone)
   chan->txdone_method |= TXDONE_BY_ACK;

+ spin_unlock_irqrestore(&chan->lock, flags);
+
  if (pcc_doorbell_irq[subspace_id] > 0) {
   int rc;
···
   if (unlikely(rc)) {
    dev_err(dev, "failed to register PCC interrupt %d\n",
     pcc_doorbell_irq[subspace_id]);
+   pcc_mbox_free_channel(chan);
    chan = ERR_PTR(rc);
   }
  }
-
- spin_unlock_irqrestore(&chan->lock, flags);

  return chan;
 }
···
   return;
  }

+ if (pcc_doorbell_irq[id] > 0)
+  devm_free_irq(chan->mbox->dev, pcc_doorbell_irq[id], chan);
+
  spin_lock_irqsave(&chan->lock, flags);
  chan->cl = NULL;
  chan->active_req = NULL;
  if (chan->txdone_method == (TXDONE_BY_POLL | TXDONE_BY_ACK))
   chan->txdone_method = TXDONE_BY_POLL;

- if (pcc_doorbell_irq[id] > 0)
-  devm_free_irq(chan->mbox->dev, pcc_doorbell_irq[id], chan);
-
  spin_unlock_irqrestore(&chan->lock, flags);
 }
 EXPORT_SYMBOL_GPL(pcc_mbox_free_channel);
-

 /**
  * pcc_send_data - Called from Mailbox Controller code. Used
+5
drivers/media/dvb-frontends/Kconfig
···
  depends on DVB_CORE
  default DVB_AS102

+config DVB_GP8PSK_FE
+ tristate
+ depends on DVB_CORE
+ default DVB_USB_GP8PSK
+
 comment "DVB-C (cable) frontends"
  depends on DVB_CORE
···
+/*
+ * gp8psk_fe driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef GP8PSK_FE_H
+#define GP8PSK_FE_H
+
+#include <linux/types.h>
+
+/* gp8psk commands */
+
+#define GET_8PSK_CONFIG 0x80 /* in */
+#define SET_8PSK_CONFIG 0x81
+#define I2C_WRITE 0x83
+#define I2C_READ 0x84
+#define ARM_TRANSFER 0x85
+#define TUNE_8PSK 0x86
+#define GET_SIGNAL_STRENGTH 0x87 /* in */
+#define LOAD_BCM4500 0x88
+#define BOOT_8PSK 0x89 /* in */
+#define START_INTERSIL 0x8A /* in */
+#define SET_LNB_VOLTAGE 0x8B
+#define SET_22KHZ_TONE 0x8C
+#define SEND_DISEQC_COMMAND 0x8D
+#define SET_DVB_MODE 0x8E
+#define SET_DN_SWITCH 0x8F
+#define GET_SIGNAL_LOCK 0x90 /* in */
+#define GET_FW_VERS 0x92
+#define GET_SERIAL_NUMBER 0x93 /* in */
+#define USE_EXTRA_VOLT 0x94
+#define GET_FPGA_VERS 0x95
+#define CW3K_INIT 0x9d
+
+/* PSK_configuration bits */
+#define bm8pskStarted 0x01
+#define bm8pskFW_Loaded 0x02
+#define bmIntersilOn 0x04
+#define bmDVBmode 0x08
+#define bm22kHz 0x10
+#define bmSEL18V 0x20
+#define bmDCtuned 0x40
+#define bmArmed 0x80
+
+/* Satellite modulation modes */
+#define ADV_MOD_DVB_QPSK 0 /* DVB-S QPSK */
+#define ADV_MOD_TURBO_QPSK 1 /* Turbo QPSK */
+#define ADV_MOD_TURBO_8PSK 2 /* Turbo 8PSK (also used for Trellis 8PSK) */
+#define ADV_MOD_TURBO_16QAM 3 /* Turbo 16QAM (also used for Trellis 8PSK) */
+
+#define ADV_MOD_DCII_C_QPSK 4 /* Digicipher II Combo */
+#define ADV_MOD_DCII_I_QPSK 5 /* Digicipher II I-stream */
+#define ADV_MOD_DCII_Q_QPSK 6 /* Digicipher II Q-stream */
+#define ADV_MOD_DCII_C_OQPSK 7 /* Digicipher II offset QPSK */
+#define ADV_MOD_DSS_QPSK 8 /* DSS (DIRECTV) QPSK */
+#define ADV_MOD_DVB_BPSK 9 /* DVB-S BPSK */
+
+/* firmware revision id's */
+#define GP8PSK_FW_REV1 0x020604
+#define GP8PSK_FW_REV2 0x020704
+#define GP8PSK_FW_VERS(_fw_vers) \
+ ((_fw_vers)[2]<<0x10 | (_fw_vers)[1]<<0x08 | (_fw_vers)[0])
+
+struct gp8psk_fe_ops {
+ int (*in)(void *priv, u8 req, u16 value, u16 index, u8 *b, int blen);
+ int (*out)(void *priv, u8 req, u16 value, u16 index, u8 *b, int blen);
+ int (*reload)(void *priv);
+};
+
+struct dvb_frontend *gp8psk_fe_attach(const struct gp8psk_fe_ops *ops,
+         void *priv, bool is_rev1);
+
+#endif
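The `GP8PSK_FW_VERS` macro in the new header assembles a 24-bit firmware version from the three bytes the device returns, least-significant byte first. Reproduced stand-alone to show the byte order (the test values below are ours, chosen to hit `GP8PSK_FW_REV1`):

```c
/* As in the header: byte 0 is the low byte, byte 2 the high byte,
 * so {0x04, 0x06, 0x02} encodes version 0x020604. */
#define GP8PSK_FW_REV1 0x020604
#define GP8PSK_FW_VERS(_fw_vers) \
	((_fw_vers)[2] << 0x10 | (_fw_vers)[1] << 0x08 | (_fw_vers)[0])
```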
···
  * Powered is in/decremented for each call to modify the state.
  * @udev: pointer to the device's struct usb_device.
  *
- * @usb_mutex: semaphore of USB control messages (reading needs two messages)
- * @i2c_mutex: semaphore for i2c-transfers
+ * @data_mutex: mutex to protect the data structure used to store URB data
+ * @usb_mutex: mutex of USB control messages (reading needs two messages).
+ *  Please notice that this mutex is used internally at the generic
+ *  URB control functions. So, drivers using dvb_usb_generic_rw() and
+ *  derived functions should not lock it internally.
+ * @i2c_mutex: mutex for i2c-transfers
  *
  * @i2c_adap: device's i2c_adapter if it uses I2CoverUSB
  *
···
  int powered;

  /* locking */
+ struct mutex data_mutex;
  struct mutex usb_mutex;

  /* i2c */
···
  for (i = 0; i < LPSS_PRIV_REG_COUNT; i++)
   lpss->priv_ctx[i] = readl(lpss->priv + i * 4);

- /* Put the device into reset state */
- writel(0, lpss->priv + LPSS_PRIV_RESETS);
-
  return 0;
 }
 EXPORT_SYMBOL_GPL(intel_lpss_suspend);
···
 #include "mmc_ops.h"
 #include "sd_ops.h"

+#define DEFAULT_CMD6_TIMEOUT_MS 500
+
 static const unsigned int tran_exp[] = {
  10000, 100000, 1000000, 10000000,
  0, 0, 0, 0
···
  card->erased_byte = 0x0;

  /* eMMC v4.5 or later */
+ card->ext_csd.generic_cmd6_time = DEFAULT_CMD6_TIMEOUT_MS;
  if (card->ext_csd.rev >= 6) {
   card->ext_csd.feature_support |= MMC_DISCARD_FEATURE;
+1-1
drivers/mmc/host/dw_mmc.c
···
   return ERR_PTR(-ENOMEM);

  /* find reset controller when exist */
- pdata->rstc = devm_reset_control_get_optional(dev, NULL);
+ pdata->rstc = devm_reset_control_get_optional(dev, "reset");
  if (IS_ERR(pdata->rstc)) {
   if (PTR_ERR(pdata->rstc) == -EPROBE_DEFER)
    return ERR_PTR(-EPROBE_DEFER);
+2-2
drivers/mmc/host/mxs-mmc.c
···
  platform_set_drvdata(pdev, mmc);

+ spin_lock_init(&host->lock);
+
  ret = devm_request_irq(&pdev->dev, irq_err, mxs_mmc_irq_handler, 0,
           dev_name(&pdev->dev), host);
  if (ret)
   goto out_free_dma;
-
- spin_lock_init(&host->lock);

  ret = mmc_add_host(mmc);
  if (ret)
+26-10
drivers/mmc/host/sdhci.c
···2086208620872087 if (!host->tuning_done) {20882088 pr_info(DRIVER_NAME ": Timeout waiting for Buffer Read Ready interrupt during tuning procedure, falling back to fixed sampling clock\n");20892089+20902090+ sdhci_do_reset(host, SDHCI_RESET_CMD);20912091+ sdhci_do_reset(host, SDHCI_RESET_DATA);20922092+20892093 ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);20902094 ctrl &= ~SDHCI_CTRL_TUNED_CLK;20912095 ctrl &= ~SDHCI_CTRL_EXEC_TUNING;···2290228622912287 for (i = 0; i < SDHCI_MAX_MRQS; i++) {22922288 mrq = host->mrqs_done[i];22932293- if (mrq) {22942294- host->mrqs_done[i] = NULL;22892289+ if (mrq)22952290 break;22962296- }22972291 }2298229222992293 if (!mrq) {···23222320 * upon error conditions.23232321 */23242322 if (sdhci_needs_reset(host, mrq)) {23232323+ /*23242324+ * Do not finish until command and data lines are available for23252325+ * reset. Note there can only be one other mrq, so it cannot23262326+ * also be in mrqs_done, otherwise host->cmd and host->data_cmd23272327+ * would both be null.23282328+ */23292329+ if (host->cmd || host->data_cmd) {23302330+ spin_unlock_irqrestore(&host->lock, flags);23312331+ return true;23322332+ }23332333+23252334 /* Some controllers need this kick or reset won't work here */23262335 if (host->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET)23272336 /* This is to force an update */···2340232723412328 /* Spec says we should do both at the same time, but Ricoh23422329 controllers do not like that. 
*/23432343- if (!host->cmd)23442344- sdhci_do_reset(host, SDHCI_RESET_CMD);23452345- if (!host->data_cmd)23462346- sdhci_do_reset(host, SDHCI_RESET_DATA);23302330+ sdhci_do_reset(host, SDHCI_RESET_CMD);23312331+ sdhci_do_reset(host, SDHCI_RESET_DATA);2347233223482333 host->pending_reset = false;23492334 }2350233523512336 if (!sdhci_has_requests(host))23522337 sdhci_led_deactivate(host);23382338+23392339+ host->mrqs_done[i] = NULL;2353234023542341 mmiowb();23552342 spin_unlock_irqrestore(&host->lock, flags);···25252512 if (!host->data) {25262513 struct mmc_command *data_cmd = host->data_cmd;2527251425282528- if (data_cmd)25292529- host->data_cmd = NULL;25302530-25312515 /*25322516 * The "data complete" interrupt is also used to25332517 * indicate that a busy state has ended. See comment···25322522 */25332523 if (data_cmd && (data_cmd->flags & MMC_RSP_BUSY)) {25342524 if (intmask & SDHCI_INT_DATA_TIMEOUT) {25252525+ host->data_cmd = NULL;25352526 data_cmd->error = -ETIMEDOUT;25362527 sdhci_finish_mrq(host, data_cmd->mrq);25372528 return;25382529 }25392530 if (intmask & SDHCI_INT_DATA_END) {25312531+ host->data_cmd = NULL;25402532 /*25412533 * Some cards handle busy-end interrupt25422534 * before the command completed, so make···29232911 sdhci_enable_preset_value(host, true);29242912 spin_unlock_irqrestore(&host->lock, flags);29252913 }29142914+29152915+ if ((mmc->caps2 & MMC_CAP2_HS400_ES) &&29162916+ mmc->ops->hs400_enhanced_strobe)29172917+ mmc->ops->hs400_enhanced_strobe(mmc, &mmc->ios);2926291829272919 spin_lock_irqsave(&host->lock, flags);29282920
···
 #include <linux/firmware.h>
 #include <linux/log2.h>
 #include <linux/aer.h>
+#include <linux/crash_dump.h>

 #if IS_ENABLED(CONFIG_CNIC)
 #define BCM_CNIC 1
···
  BNX2_WR(bp, BNX2_PCI_GRC_WINDOW3_ADDR, BNX2_MSIX_PBA_ADDR);
 }

-static int
-bnx2_reset_chip(struct bnx2 *bp, u32 reset_code)
+static void
+bnx2_wait_dma_complete(struct bnx2 *bp)
 {
  u32 val;
- int i, rc = 0;
- u8 old_port;
+ int i;

- /* Wait for the current PCI transaction to complete before
-  * issuing a reset. */
+ /*
+  * Wait for the current PCI transaction to complete before
+  * issuing a reset.
+  */
  if ((BNX2_CHIP(bp) == BNX2_CHIP_5706) ||
      (BNX2_CHIP(bp) == BNX2_CHIP_5708)) {
   BNX2_WR(bp, BNX2_MISC_ENABLE_CLR_BITS,
···
    break;
   }
  }
+
+ return;
+}
+
+
+static int
+bnx2_reset_chip(struct bnx2 *bp, u32 reset_code)
+{
+ u32 val;
+ int i, rc = 0;
+ u8 old_port;
+
+ /* Wait for the current PCI transaction to complete before
+  * issuing a reset. */
+ bnx2_wait_dma_complete(bp);

  /* Wait for the firmware to tell us it is ok to issue a reset. */
  bnx2_fw_sync(bp, BNX2_DRV_MSG_DATA_WAIT0 | reset_code, 1, 1);
···
  struct bnx2 *bp = netdev_priv(dev);
  int rc;

+ rc = bnx2_request_firmware(bp);
+ if (rc < 0)
+  goto out;
+
  netif_carrier_off(dev);

  bnx2_disable_int(bp);
···
  bnx2_free_irq(bp);
  bnx2_free_mem(bp);
  bnx2_del_napi(bp);
+ bnx2_release_firmware(bp);
  goto out;
 }
···
  pci_set_drvdata(pdev, dev);

- rc = bnx2_request_firmware(bp);
- if (rc < 0)
-  goto error;
+ /*
+  * In-flight DMA from the 1st kernel could continue going in the kdump
+  * kernel. A new io-page table has been created before bnx2 does reset
+  * at open stage. We have to wait for the in-flight DMA to complete to
+  * avoid it looking up into the newly created io-page table.
+  */
+ if (is_kdump_kernel())
+  bnx2_wait_dma_complete(bp);

-
- bnx2_reset_chip(bp, BNX2_DRV_MSG_CODE_RESET);
  memcpy(dev->dev_addr, bp->mac_addr, ETH_ALEN);

  dev->hw_features = NETIF_F_IP_CSUM | NETIF_F_SG |
···
  return 0;

 error:
- bnx2_release_firmware(bp);
  pci_iounmap(pdev, bp->regview);
  pci_release_regions(pdev);
  pci_disable_device(pdev);
+10-5
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
   napi_hash_del(&bnapi->napi);
   netif_napi_del(&bnapi->napi);
  }
+ /* We called napi_hash_del() before netif_napi_del(), we need
+  * to respect an RCU grace period before freeing napi structures.
+  */
+ synchronize_net();
 }

 static void bnxt_init_napi(struct bnxt *bp)
···
      struct tc_to_netdev *ntc)
 {
  struct bnxt *bp = netdev_priv(dev);
+ bool sh = false;
  u8 tc;

  if (ntc->type != TC_SETUP_MQPRIO)
···
  if (netdev_get_num_tc(dev) == tc)
   return 0;

+ if (bp->flags & BNXT_FLAG_SHARED_RINGS)
+  sh = true;
+
  if (tc) {
   int max_rx_rings, max_tx_rings, rc;
-  bool sh = false;
-
-  if (bp->flags & BNXT_FLAG_SHARED_RINGS)
-   sh = true;

   rc = bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, sh);
   if (rc || bp->tx_nr_rings_per_tc * tc > max_tx_rings)
···
   bp->tx_nr_rings = bp->tx_nr_rings_per_tc;
   netdev_reset_tc(dev);
  }
- bp->cp_nr_rings = max_t(int, bp->tx_nr_rings, bp->rx_nr_rings);
+ bp->cp_nr_rings = sh ? max_t(int, bp->tx_nr_rings, bp->rx_nr_rings) :
+          bp->tx_nr_rings + bp->rx_nr_rings;
  bp->num_stat_ctxs = bp->cp_nr_rings;

  if (netif_running(bp->dev))
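The `cp_nr_rings` change above is a small but easy-to-miss piece of arithmetic: with shared rings one completion ring serves an rx/tx pair, so the count is `max(tx, rx)`; without sharing every ring needs its own, so it is `tx + rx`. A sketch of that accounting (function name is ours, not the driver's):

```c
#include <stdbool.h>

/* Completion-ring count as a function of tx/rx ring counts and whether
 * the rings share completion rings (BNXT_FLAG_SHARED_RINGS). */
static int cp_nr_rings(int tx_nr_rings, int rx_nr_rings, bool shared)
{
	if (shared)
		return tx_nr_rings > rx_nr_rings ? tx_nr_rings : rx_nr_rings;
	return tx_nr_rings + rx_nr_rings;
}
```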
drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c (+2 -2)
···

 	if (vf->flags & BNXT_VF_LINK_UP) {
 		/* if physical link is down, force link up on VF */
-		if (phy_qcfg_resp.link ==
-		    PORT_PHY_QCFG_RESP_LINK_NO_LINK) {
+		if (phy_qcfg_resp.link !=
+		    PORT_PHY_QCFG_RESP_LINK_LINK) {
 			phy_qcfg_resp.link =
 				PORT_PHY_QCFG_RESP_LINK_LINK;
 			phy_qcfg_resp.link_speed = cpu_to_le16(
···
 	if (esw->mode != SRIOV_OFFLOADS)
 		return ERR_PTR(-EOPNOTSUPP);

-	action = attr->action;
+	/* per flow vlan pop/push is emulated, don't set that into the firmware */
+	action = attr->action & ~(MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH | MLX5_FLOW_CONTEXT_ACTION_VLAN_POP);

 	if (action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {
 		dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c (+1 -1)
···
 {

 	steering->root_ns = create_root_ns(steering, FS_FT_NIC_RX);
-	if (IS_ERR_OR_NULL(steering->root_ns))
+	if (!steering->root_ns)
 		goto cleanup;

 	if (init_root_tree(steering, &root_fs, &steering->root_ns->ns.node))
···

 	mac |= TXEN | RXEN;		/* enable RX/TX */

-	/* We don't have ethtool support yet, so force flow-control mode
-	 * to 'full' always.
-	 */
-	mac |= TXFC | RXFC;
+	/* Configure MAC flow control to match the PHY's settings. */
+	if (phydev->pause)
+		mac |= RXFC;
+	if (phydev->pause != phydev->asym_pause)
+		mac |= TXFC;

 	/* setup link speed */
 	mac &= ~SPEED_MASK;
···
 	/* enable mac irq */
 	writel((u32)~DIS_INT, adpt->base + EMAC_INT_STATUS);
 	writel(adpt->irq.mask, adpt->base + EMAC_INT_MASK);
+
+	/* Enable pause frames.  Without this feature, the EMAC has been shown
+	 * to receive (and drop) frames with FCS errors at gigabit connections.
+	 */
+	adpt->phydev->supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
+	adpt->phydev->advertising |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;

 	adpt->phydev->irq = PHY_IGNORE_INTERRUPT;
 	phy_start(adpt->phydev);
···
 config DWMAC_STM32
 	tristate "STM32 DWMAC support"
 	default ARCH_STM32
-	depends on OF && HAS_IOMEM
+	depends on OF && HAS_IOMEM && (ARCH_STM32 || COMPILE_TEST)
 	select MFD_SYSCON
 	---help---
 	  Support for ethernet controller on STM32 SOCs.
···
 	unsigned long ip_csum_bypassed;
 	unsigned long ipv4_pkt_rcvd;
 	unsigned long ipv6_pkt_rcvd;
-	unsigned long rx_msg_type_ext_no_ptp;
-	unsigned long rx_msg_type_sync;
-	unsigned long rx_msg_type_follow_up;
-	unsigned long rx_msg_type_delay_req;
-	unsigned long rx_msg_type_delay_resp;
-	unsigned long rx_msg_type_pdelay_req;
-	unsigned long rx_msg_type_pdelay_resp;
-	unsigned long rx_msg_type_pdelay_follow_up;
+	unsigned long no_ptp_rx_msg_type_ext;
+	unsigned long ptp_rx_msg_type_sync;
+	unsigned long ptp_rx_msg_type_follow_up;
+	unsigned long ptp_rx_msg_type_delay_req;
+	unsigned long ptp_rx_msg_type_delay_resp;
+	unsigned long ptp_rx_msg_type_pdelay_req;
+	unsigned long ptp_rx_msg_type_pdelay_resp;
+	unsigned long ptp_rx_msg_type_pdelay_follow_up;
+	unsigned long ptp_rx_msg_type_announce;
+	unsigned long ptp_rx_msg_type_management;
+	unsigned long ptp_rx_msg_pkt_reserved_type;
 	unsigned long ptp_frame_type;
 	unsigned long ptp_ver;
 	unsigned long timestamp_dropped;
···
 /* PTP and HW Timer helpers */
 struct stmmac_hwtimestamp {
 	void (*config_hw_tstamping) (void __iomem *ioaddr, u32 data);
-	u32 (*config_sub_second_increment) (void __iomem *ioaddr, u32 clk_rate);
+	u32 (*config_sub_second_increment)(void __iomem *ioaddr, u32 ptp_clock,
+					   int gmac4);
 	int (*init_systime) (void __iomem *ioaddr, u32 sec, u32 nsec);
 	int (*config_addend) (void __iomem *ioaddr, u32 addend);
 	int (*adjust_systime) (void __iomem *ioaddr, u32 sec, u32 nsec,
-			       int add_sub);
+			       int add_sub, int gmac4);
 	u64(*get_systime) (void __iomem *ioaddr);
 };
···
 			x->ipv4_pkt_rcvd++;
 		if (rdes1 & RDES1_IPV6_HEADER)
 			x->ipv6_pkt_rcvd++;
-		if (message_type == RDES_EXT_SYNC)
-			x->rx_msg_type_sync++;
+
+		if (message_type == RDES_EXT_NO_PTP)
+			x->no_ptp_rx_msg_type_ext++;
+		else if (message_type == RDES_EXT_SYNC)
+			x->ptp_rx_msg_type_sync++;
 		else if (message_type == RDES_EXT_FOLLOW_UP)
-			x->rx_msg_type_follow_up++;
+			x->ptp_rx_msg_type_follow_up++;
 		else if (message_type == RDES_EXT_DELAY_REQ)
-			x->rx_msg_type_delay_req++;
+			x->ptp_rx_msg_type_delay_req++;
 		else if (message_type == RDES_EXT_DELAY_RESP)
-			x->rx_msg_type_delay_resp++;
+			x->ptp_rx_msg_type_delay_resp++;
 		else if (message_type == RDES_EXT_PDELAY_REQ)
-			x->rx_msg_type_pdelay_req++;
+			x->ptp_rx_msg_type_pdelay_req++;
 		else if (message_type == RDES_EXT_PDELAY_RESP)
-			x->rx_msg_type_pdelay_resp++;
+			x->ptp_rx_msg_type_pdelay_resp++;
 		else if (message_type == RDES_EXT_PDELAY_FOLLOW_UP)
-			x->rx_msg_type_pdelay_follow_up++;
-		else
-			x->rx_msg_type_ext_no_ptp++;
+			x->ptp_rx_msg_type_pdelay_follow_up++;
+		else if (message_type == RDES_PTP_ANNOUNCE)
+			x->ptp_rx_msg_type_announce++;
+		else if (message_type == RDES_PTP_MANAGEMENT)
+			x->ptp_rx_msg_type_management++;
+		else if (message_type == RDES_PTP_PKT_RESERVED_TYPE)
+			x->ptp_rx_msg_pkt_reserved_type++;

 		if (rdes1 & RDES1_PTP_PACKET_TYPE)
 			x->ptp_frame_type++;
···

 static int dwmac4_wrback_get_tx_timestamp_status(struct dma_desc *p)
 {
-	return (p->des3 & TDES3_TIMESTAMP_STATUS)
-		>> TDES3_TIMESTAMP_STATUS_SHIFT;
+	/* Context type from W/B descriptor must be zero */
+	if (p->des3 & TDES3_CONTEXT_TYPE)
+		return -EINVAL;
+
+	/* Tx Timestamp Status is 1 so des0 and des1'll have valid values */
+	if (p->des3 & TDES3_TIMESTAMP_STATUS)
+		return 0;
+
+	return 1;
 }

-/* NOTE: For RX CTX bit has to be checked before
- * HAVE a specific function for TX and another one for RX
- */
-static u64 dwmac4_wrback_get_timestamp(void *desc, u32 ats)
+static inline u64 dwmac4_get_timestamp(void *desc, u32 ats)
 {
 	struct dma_desc *p = (struct dma_desc *)desc;
 	u64 ns;
···
 	return ns;
 }

-static int dwmac4_context_get_rx_timestamp_status(void *desc, u32 ats)
+static int dwmac4_rx_check_timestamp(void *desc)
 {
 	struct dma_desc *p = (struct dma_desc *)desc;
+	u32 own, ctxt;
+	int ret = 1;

-	return (p->des1 & RDES1_TIMESTAMP_AVAILABLE)
-		>> RDES1_TIMESTAMP_AVAILABLE_SHIFT;
+	own = p->des3 & RDES3_OWN;
+	ctxt = ((p->des3 & RDES3_CONTEXT_DESCRIPTOR)
+		>> RDES3_CONTEXT_DESCRIPTOR_SHIFT);
+
+	if (likely(!own && ctxt)) {
+		if ((p->des0 == 0xffffffff) && (p->des1 == 0xffffffff))
+			/* Corrupted value */
+			ret = -EINVAL;
+		else
+			/* A valid Timestamp is ready to be read */
+			ret = 0;
+	}
+
+	/* Timestamp not ready */
+	return ret;
+}
+
+static int dwmac4_wrback_get_rx_timestamp_status(void *desc, u32 ats)
+{
+	struct dma_desc *p = (struct dma_desc *)desc;
+	int ret = -EINVAL;
+
+	/* Get the status from normal w/b descriptor */
+	if (likely(p->des3 & TDES3_RS1V)) {
+		if (likely(p->des1 & RDES1_TIMESTAMP_AVAILABLE)) {
+			int i = 0;
+
+			/* Check if timestamp is OK from context descriptor */
+			do {
+				ret = dwmac4_rx_check_timestamp(desc);
+				if (ret < 0)
+					goto exit;
+				i++;
+
+			} while ((ret == 1) || (i < 10));
+
+			if (i == 10)
+				ret = -EBUSY;
+		}
+	}
+exit:
+	return ret;
 }

 static void dwmac4_rd_init_rx_desc(struct dma_desc *p, int disable_rx_ic,
···
 	.get_rx_frame_len = dwmac4_wrback_get_rx_frame_len,
 	.enable_tx_timestamp = dwmac4_rd_enable_tx_timestamp,
 	.get_tx_timestamp_status = dwmac4_wrback_get_tx_timestamp_status,
-	.get_timestamp = dwmac4_wrback_get_timestamp,
-	.get_rx_timestamp_status = dwmac4_context_get_rx_timestamp_status,
+	.get_rx_timestamp_status = dwmac4_wrback_get_rx_timestamp_status,
+	.get_timestamp = dwmac4_get_timestamp,
 	.set_tx_ic = dwmac4_rd_set_tx_ic,
 	.prepare_tx_desc = dwmac4_rd_prepare_tx_desc,
 	.prepare_tso_tx_desc = dwmac4_rd_prepare_tso_tx_desc,
···
 }

 static u32 stmmac_config_sub_second_increment(void __iomem *ioaddr,
-					      u32 ptp_clock)
+					      u32 ptp_clock, int gmac4)
 {
 	u32 value = readl(ioaddr + PTP_TCR);
 	unsigned long data;

-	/* Convert the ptp_clock to nano second
-	 * formula = (2/ptp_clock) * 1000000000
-	 * where, ptp_clock = 50MHz.
+	/* For GMAC3.x, 4.x versions, convert the ptp_clock to nano second
+	 * formula = (1/ptp_clock) * 1000000000
+	 * where ptp_clock is 50MHz if fine method is used to update system
 	 */
-	data = (2000000000ULL / ptp_clock);
+	if (value & PTP_TCR_TSCFUPDT)
+		data = (1000000000ULL / 50000000);
+	else
+		data = (1000000000ULL / ptp_clock);

 	/* 0.465ns accuracy */
 	if (!(value & PTP_TCR_TSCTRLSSR))
 		data = (data * 1000) / 465;
+
+	data &= PTP_SSIR_SSINC_MASK;
+
+	if (gmac4)
+		data = data << GMAC4_PTP_SSIR_SSINC_SHIFT;

 	writel(data, ioaddr + PTP_SSIR);
···
 }

 static int stmmac_adjust_systime(void __iomem *ioaddr, u32 sec, u32 nsec,
-				 int add_sub)
+				 int add_sub, int gmac4)
 {
 	u32 value;
 	int limit;

+	if (add_sub) {
+		/* If the new sec value needs to be subtracted with
+		 * the system time, then MAC_STSUR reg should be
+		 * programmed with (2^32 – <new_sec_value>)
+		 */
+		if (gmac4)
+			sec = (100000000ULL - sec);
+
+		value = readl(ioaddr + PTP_TCR);
+		if (value & PTP_TCR_TSCTRLSSR)
+			nsec = (PTP_DIGITAL_ROLLOVER_MODE - nsec);
+		else
+			nsec = (PTP_BINARY_ROLLOVER_MODE - nsec);
+	}
+
 	writel(sec, ioaddr + PTP_STSUR);
-	writel(((add_sub << PTP_STNSUR_ADDSUB_SHIFT) | nsec),
-	       ioaddr + PTP_STNSUR);
+	value = (add_sub << PTP_STNSUR_ADDSUB_SHIFT) | nsec;
+	writel(value, ioaddr + PTP_STNSUR);
+
 	/* issue command to initialize the system time value */
 	value = readl(ioaddr + PTP_TCR);
 	value |= PTP_TCR_TSUPDT;
···
 {
 	u64 ns;

+	/* Get the TSSS value */
 	ns = readl(ioaddr + PTP_STNSR);
-	/* convert sec time value to nanosecond */
+	/* Get the TSS and convert sec time value to nanosecond */
 	ns += readl(ioaddr + PTP_STSR) * 1000000000ULL;

 	return ns;
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c (+57 -47)
···

 /* stmmac_get_tx_hwtstamp - get HW TX timestamps
  * @priv: driver private structure
- * @entry : descriptor index to be used.
+ * @p : descriptor pointer
  * @skb : the socket buffer
  * Description :
  * This function will read timestamp from the descriptor & pass it to stack.
  * and also perform some sanity checks.
  */
 static void stmmac_get_tx_hwtstamp(struct stmmac_priv *priv,
-				   unsigned int entry, struct sk_buff *skb)
+				   struct dma_desc *p, struct sk_buff *skb)
 {
 	struct skb_shared_hwtstamps shhwtstamp;
 	u64 ns;
-	void *desc = NULL;

 	if (!priv->hwts_tx_en)
 		return;
···
 	if (likely(!skb || !(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS)))
 		return;

-	if (priv->adv_ts)
-		desc = (priv->dma_etx + entry);
-	else
-		desc = (priv->dma_tx + entry);
-
 	/* check tx tstamp status */
-	if (!priv->hw->desc->get_tx_timestamp_status((struct dma_desc *)desc))
-		return;
+	if (!priv->hw->desc->get_tx_timestamp_status(p)) {
+		/* get the valid tstamp */
+		ns = priv->hw->desc->get_timestamp(p, priv->adv_ts);

-	/* get the valid tstamp */
-	ns = priv->hw->desc->get_timestamp(desc, priv->adv_ts);
+		memset(&shhwtstamp, 0, sizeof(struct skb_shared_hwtstamps));
+		shhwtstamp.hwtstamp = ns_to_ktime(ns);

-	memset(&shhwtstamp, 0, sizeof(struct skb_shared_hwtstamps));
-	shhwtstamp.hwtstamp = ns_to_ktime(ns);
-	/* pass tstamp to stack */
-	skb_tstamp_tx(skb, &shhwtstamp);
+		netdev_info(priv->dev, "get valid TX hw timestamp %llu\n", ns);
+		/* pass tstamp to stack */
+		skb_tstamp_tx(skb, &shhwtstamp);
+	}

 	return;
 }

 /* stmmac_get_rx_hwtstamp - get HW RX timestamps
  * @priv: driver private structure
- * @entry : descriptor index to be used.
+ * @p : descriptor pointer
+ * @np : next descriptor pointer
  * @skb : the socket buffer
  * Description :
  * This function will read received packet's timestamp from the descriptor
  * and pass it to stack. It also perform some sanity checks.
  */
-static void stmmac_get_rx_hwtstamp(struct stmmac_priv *priv,
-				   unsigned int entry, struct sk_buff *skb)
+static void stmmac_get_rx_hwtstamp(struct stmmac_priv *priv, struct dma_desc *p,
+				   struct dma_desc *np, struct sk_buff *skb)
 {
 	struct skb_shared_hwtstamps *shhwtstamp = NULL;
 	u64 ns;
-	void *desc = NULL;

 	if (!priv->hwts_rx_en)
 		return;

-	if (priv->adv_ts)
-		desc = (priv->dma_erx + entry);
-	else
-		desc = (priv->dma_rx + entry);
+	/* Check if timestamp is available */
+	if (!priv->hw->desc->get_rx_timestamp_status(p, priv->adv_ts)) {
+		/* For GMAC4, the valid timestamp is from CTX next desc. */
+		if (priv->plat->has_gmac4)
+			ns = priv->hw->desc->get_timestamp(np, priv->adv_ts);
+		else
+			ns = priv->hw->desc->get_timestamp(p, priv->adv_ts);

-	/* exit if rx tstamp is not valid */
-	if (!priv->hw->desc->get_rx_timestamp_status(desc, priv->adv_ts))
-		return;
-
-	/* get valid tstamp */
-	ns = priv->hw->desc->get_timestamp(desc, priv->adv_ts);
-	shhwtstamp = skb_hwtstamps(skb);
-	memset(shhwtstamp, 0, sizeof(struct skb_shared_hwtstamps));
-	shhwtstamp->hwtstamp = ns_to_ktime(ns);
+		netdev_info(priv->dev, "get valid RX hw timestamp %llu\n", ns);
+		shhwtstamp = skb_hwtstamps(skb);
+		memset(shhwtstamp, 0, sizeof(struct skb_shared_hwtstamps));
+		shhwtstamp->hwtstamp = ns_to_ktime(ns);
+	} else {
+		netdev_err(priv->dev, "cannot get RX hw timestamp\n");
+	}
 }

 /**
···
 	priv->hwts_tx_en = config.tx_type == HWTSTAMP_TX_ON;

 	if (!priv->hwts_tx_en && !priv->hwts_rx_en)
-		priv->hw->ptp->config_hw_tstamping(priv->ioaddr, 0);
+		priv->hw->ptp->config_hw_tstamping(priv->ptpaddr, 0);
 	else {
 		value = (PTP_TCR_TSENA | PTP_TCR_TSCFUPDT | PTP_TCR_TSCTRLSSR |
 			 tstamp_all | ptp_v2 | ptp_over_ethernet |
 			 ptp_over_ipv6_udp | ptp_over_ipv4_udp | ts_event_en |
 			 ts_master_en | snap_type_sel);
-		priv->hw->ptp->config_hw_tstamping(priv->ioaddr, value);
+		priv->hw->ptp->config_hw_tstamping(priv->ptpaddr, value);

 		/* program Sub Second Increment reg */
 		sec_inc = priv->hw->ptp->config_sub_second_increment(
-			priv->ioaddr, priv->clk_ptp_rate);
+			priv->ptpaddr, priv->clk_ptp_rate,
+			priv->plat->has_gmac4);
 		temp = div_u64(1000000000ULL, sec_inc);

 		/* calculate default added value:
···
 		 */
 		temp = (u64)(temp << 32);
 		priv->default_addend = div_u64(temp, priv->clk_ptp_rate);
-		priv->hw->ptp->config_addend(priv->ioaddr,
+		priv->hw->ptp->config_addend(priv->ptpaddr,
 					     priv->default_addend);

 		/* initialize system time */
 		ktime_get_real_ts64(&now);

 		/* lower 32 bits of tv_sec are safe until y2106 */
-		priv->hw->ptp->init_systime(priv->ioaddr, (u32)now.tv_sec,
+		priv->hw->ptp->init_systime(priv->ptpaddr, (u32)now.tv_sec,
 					    now.tv_nsec);
 	}
···
 		phy_disconnect(phydev);
 		return -ENODEV;
 	}
+
+	/* stmmac_adjust_link will change this to PHY_IGNORE_INTERRUPT to avoid
+	 * subsequent PHY polling, make sure we force a link transition if
+	 * we have a UP/DOWN/UP transition
+	 */
+	if (phydev->is_pseudo_fixed_link)
+		phydev->irq = PHY_POLL;

 	pr_debug("stmmac_init_phy:  %s: attached to PHY (UID 0x%x)"
 		 " Link = %d\n", dev->name, phydev->phy_id, phydev->link);
···
 			priv->dev->stats.tx_packets++;
 			priv->xstats.tx_pkt_n++;
 		}
-		stmmac_get_tx_hwtstamp(priv, entry, skb);
+		stmmac_get_tx_hwtstamp(priv, p, skb);
 	}

 	if (likely(priv->tx_skbuff_dma[entry].buf)) {
···
 	unsigned int mode = MMC_CNTRL_RESET_ON_READ | MMC_CNTRL_COUNTER_RESET |
 			    MMC_CNTRL_PRESET | MMC_CNTRL_FULL_HALF_PRESET;

-	if (priv->synopsys_id >= DWMAC_CORE_4_00)
+	if (priv->synopsys_id >= DWMAC_CORE_4_00) {
+		priv->ptpaddr = priv->ioaddr + PTP_GMAC4_OFFSET;
 		priv->mmcaddr = priv->ioaddr + MMC_GMAC4_OFFSET;
-	else
+	} else {
+		priv->ptpaddr = priv->ioaddr + PTP_GMAC3_X_OFFSET;
 		priv->mmcaddr = priv->ioaddr + MMC_GMAC3_X_OFFSET;
+	}

 	dwmac_mmc_intr_all_mask(priv->mmcaddr);
···
 	if (netif_msg_rx_status(priv)) {
 		void *rx_head;

-		pr_debug("%s: descriptor ring:\n", __func__);
+		pr_info(">>>>>> %s: descriptor ring:\n", __func__);
 		if (priv->extend_desc)
 			rx_head = (void *)priv->dma_erx;
 		else
···
 	while (count < limit) {
 		int status;
 		struct dma_desc *p;
+		struct dma_desc *np;

 		if (priv->extend_desc)
 			p = (struct dma_desc *)(priv->dma_erx + entry);
···
 		next_entry = priv->cur_rx;

 		if (priv->extend_desc)
-			prefetch(priv->dma_erx + next_entry);
+			np = (struct dma_desc *)(priv->dma_erx + next_entry);
 		else
-			prefetch(priv->dma_rx + next_entry);
+			np = priv->dma_rx + next_entry;
+
+		prefetch(np);

 		if ((priv->extend_desc) && (priv->hw->desc->rx_extended_status))
 			priv->hw->desc->rx_extended_status(&priv->dev->stats,
···
 			frame_len -= ETH_FCS_LEN;

 			if (netif_msg_rx_status(priv)) {
-				pr_debug("\tdesc: %p [entry %d] buff=0x%x\n",
+				pr_info("\tdesc: %p [entry %d] buff=0x%x\n",
 					 p, entry, des);
 				if (frame_len > ETH_FRAME_LEN)
 					pr_debug("\tframe size %d, COE: %d\n",
···
 						 DMA_FROM_DEVICE);
 			}

-			stmmac_get_rx_hwtstamp(priv, entry, skb);
-
 			if (netif_msg_pktdata(priv)) {
 				pr_debug("frame received (%dbytes)", frame_len);
 				print_pkt(skb->data, frame_len);
 			}
+
+			stmmac_get_rx_hwtstamp(priv, p, np, skb);

 			stmmac_rx_vlan(priv->dev, skb);
···
 		 * to the PHY is the Ethernet MAC DT node.
 		 */
 		ret = of_phy_register_fixed_link(slave_node);
-		if (ret)
+		if (ret) {
+			if (ret != -EPROBE_DEFER)
+				dev_err(&pdev->dev, "failed to register fixed-link phy: %d\n", ret);
 			return ret;
+		}
 		slave_data->phy_node = of_node_get(slave_node);
 	} else if (parp) {
 		u32 phyid;
···
 		}
 		snprintf(slave_data->phy_id, sizeof(slave_data->phy_id),
 			 PHY_ID_FMT, mdio->name, phyid);
+		put_device(&mdio->dev);
 	} else {
 		dev_err(&pdev->dev,
 			"No slave[%d] phy_id, phy-handle, or fixed-link property\n",
···
 	}

 	return 0;
+}
+
+static void cpsw_remove_dt(struct platform_device *pdev)
+{
+	struct net_device *ndev = platform_get_drvdata(pdev);
+	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
+	struct cpsw_platform_data *data = &cpsw->data;
+	struct device_node *node = pdev->dev.of_node;
+	struct device_node *slave_node;
+	int i = 0;
+
+	for_each_available_child_of_node(node, slave_node) {
+		struct cpsw_slave_data *slave_data = &data->slave_data[i];
+
+		if (strcmp(slave_node->name, "slave"))
+			continue;
+
+		if (of_phy_is_fixed_link(slave_node)) {
+			struct phy_device *phydev;
+
+			phydev = of_phy_find_device(slave_node);
+			if (phydev) {
+				fixed_phy_unregister(phydev);
+				/* Put references taken by
+				 * of_phy_find_device() and
+				 * of_phy_register_fixed_link().
+				 */
+				phy_device_free(phydev);
+				phy_device_free(phydev);
+			}
+		}
+
+		of_node_put(slave_data->phy_node);
+
+		i++;
+		if (i == data->slaves)
+			break;
+	}
+
+	of_platform_depopulate(&pdev->dev);
 }

 static int cpsw_probe_dual_emac(struct cpsw_priv *priv)
···
 	int irq;

 	cpsw = devm_kzalloc(&pdev->dev, sizeof(struct cpsw_common), GFP_KERNEL);
+	if (!cpsw)
+		return -ENOMEM;
+
 	cpsw->dev = &pdev->dev;

 	ndev = alloc_etherdev_mq(sizeof(struct cpsw_priv), CPSW_MAX_QUEUES);
···
 	/* Select default pin state */
 	pinctrl_pm_select_default_state(&pdev->dev);

-	if (cpsw_probe_dt(&cpsw->data, pdev)) {
-		dev_err(&pdev->dev, "cpsw: platform data missing\n");
-		ret = -ENODEV;
+	/* Need to enable clocks with runtime PM api to access module
+	 * registers
+	 */
+	ret = pm_runtime_get_sync(&pdev->dev);
+	if (ret < 0) {
+		pm_runtime_put_noidle(&pdev->dev);
 		goto clean_runtime_disable_ret;
 	}
+
+	ret = cpsw_probe_dt(&cpsw->data, pdev);
+	if (ret)
+		goto clean_dt_ret;
+
 	data = &cpsw->data;
 	cpsw->rx_ch_num = 1;
 	cpsw->tx_ch_num = 1;
···
 			       GFP_KERNEL);
 	if (!cpsw->slaves) {
 		ret = -ENOMEM;
-		goto clean_runtime_disable_ret;
+		goto clean_dt_ret;
 	}
 	for (i = 0; i < data->slaves; i++)
 		cpsw->slaves[i].slave_num = i;
···
 	if (IS_ERR(clk)) {
 		dev_err(priv->dev, "fck is not found\n");
 		ret = -ENODEV;
-		goto clean_runtime_disable_ret;
+		goto clean_dt_ret;
 	}
 	cpsw->bus_freq_mhz = clk_get_rate(clk) / 1000000;
···
 	ss_regs = devm_ioremap_resource(&pdev->dev, ss_res);
 	if (IS_ERR(ss_regs)) {
 		ret = PTR_ERR(ss_regs);
-		goto clean_runtime_disable_ret;
+		goto clean_dt_ret;
 	}
 	cpsw->regs = ss_regs;

-	/* Need to enable clocks with runtime PM api to access module
-	 * registers
-	 */
-	ret = pm_runtime_get_sync(&pdev->dev);
-	if (ret < 0) {
-		pm_runtime_put_noidle(&pdev->dev);
-		goto clean_runtime_disable_ret;
-	}
 	cpsw->version = readl(&cpsw->regs->id_ver);
-	pm_runtime_put_sync(&pdev->dev);

 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
 	cpsw->wr_regs = devm_ioremap_resource(&pdev->dev, res);
 	if (IS_ERR(cpsw->wr_regs)) {
 		ret = PTR_ERR(cpsw->wr_regs);
-		goto clean_runtime_disable_ret;
+		goto clean_dt_ret;
 	}

 	memset(&dma_params, 0, sizeof(dma_params));
···
 	default:
 		dev_err(priv->dev, "unknown version 0x%08x\n", cpsw->version);
 		ret = -ENODEV;
-		goto clean_runtime_disable_ret;
+		goto clean_dt_ret;
 	}
 	for (i = 0; i < cpsw->data.slaves; i++) {
 		struct cpsw_slave *slave = &cpsw->slaves[i];
···
 	if (!cpsw->dma) {
 		dev_err(priv->dev, "error initializing dma\n");
 		ret = -ENOMEM;
-		goto clean_runtime_disable_ret;
+		goto clean_dt_ret;
 	}

 	cpsw->txch[0] = cpdma_chan_create(cpsw->dma, 0, cpsw_tx_handler, 0);
···
 		ret = cpsw_probe_dual_emac(priv);
 		if (ret) {
 			cpsw_err(priv, probe, "error probe slave 2 emac interface\n");
-			goto clean_ale_ret;
+			goto clean_unregister_netdev_ret;
 		}
 	}

+	pm_runtime_put(&pdev->dev);
+
 	return 0;

+clean_unregister_netdev_ret:
+	unregister_netdev(ndev);
 clean_ale_ret:
 	cpsw_ale_destroy(cpsw->ale);
 clean_dma_ret:
 	cpdma_ctlr_destroy(cpsw->dma);
+clean_dt_ret:
+	cpsw_remove_dt(pdev);
+	pm_runtime_put_sync(&pdev->dev);
 clean_runtime_disable_ret:
 	pm_runtime_disable(&pdev->dev);
 clean_ndev_ret:
···

 	cpsw_ale_destroy(cpsw->ale);
 	cpdma_ctlr_destroy(cpsw->dma);
-	of_platform_depopulate(&pdev->dev);
+	cpsw_remove_dt(pdev);
 	pm_runtime_put_sync(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
 	if (cpsw->data.dual_emac)
drivers/net/ethernet/ti/davinci_emac.c (+6 -4)
···
 	int i = 0;
 	struct emac_priv *priv = netdev_priv(ndev);
 	struct phy_device *phydev = NULL;
+	struct device *phy = NULL;

 	ret = pm_runtime_get_sync(&priv->pdev->dev);
 	if (ret < 0) {
···

 	/* use the first phy on the bus if pdata did not give us a phy id */
 	if (!phydev && !priv->phy_id) {
-		struct device *phy;
-
 		phy = bus_find_device(&mdio_bus_type, NULL, NULL,
 				      match_first_device);
-		if (phy)
+		if (phy) {
 			priv->phy_id = dev_name(phy);
+			if (!priv->phy_id || !*priv->phy_id)
+				put_device(phy);
+		}
 	}

 	if (!phydev && priv->phy_id && *priv->phy_id) {
 		phydev = phy_connect(ndev, priv->phy_id,
 				     &emac_adjust_link,
 				     PHY_INTERFACE_MODE_MII);
-
+		put_device(phy);	/* reference taken by bus_find_device */
 		if (IS_ERR(phydev)) {
 			dev_err(emac_dev, "could not connect to phy %s\n",
 				priv->phy_id);
···
 /* Vitesse Extended Page Access Register */
 #define MII_VSC82X4_EXT_PAGE_ACCESS	0x1f

+/* Vitesse VSC8601 Extended PHY Control Register 1 */
+#define MII_VSC8601_EPHY_CTL		0x17
+#define MII_VSC8601_EPHY_CTL_RGMII_SKEW	(1 << 8)
+
 #define PHY_ID_VSC8234			0x000fc620
 #define PHY_ID_VSC8244			0x000fc6c0
 #define PHY_ID_VSC8514			0x00070670
···
 		err = vsc824x_add_skew(phydev);

 	return err;
+}
+
+/* This adds a skew for both TX and RX clocks, so the skew should only be
+ * applied to "rgmii-id" interfaces. It may not work as expected
+ * on "rgmii-txid", "rgmii-rxid" or "rgmii" interfaces. */
+static int vsc8601_add_skew(struct phy_device *phydev)
+{
+	int ret;
+
+	ret = phy_read(phydev, MII_VSC8601_EPHY_CTL);
+	if (ret < 0)
+		return ret;
+
+	ret |= MII_VSC8601_EPHY_CTL_RGMII_SKEW;
+	return phy_write(phydev, MII_VSC8601_EPHY_CTL, ret);
+}
+
+static int vsc8601_config_init(struct phy_device *phydev)
+{
+	int ret = 0;
+
+	if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID)
+		ret = vsc8601_add_skew(phydev);
+
+	if (ret < 0)
+		return ret;
+
+	return genphy_config_init(phydev);
 }

 static int vsc824x_ack_interrupt(struct phy_device *phydev)
···
 	.phy_id_mask	= 0x000ffff0,
 	.features	= PHY_GBIT_FEATURES,
 	.flags		= PHY_HAS_INTERRUPT,
-	.config_init	= &genphy_config_init,
+	.config_init	= &vsc8601_config_init,
 	.config_aneg	= &genphy_config_aneg,
 	.read_status	= &genphy_read_status,
 	.ack_interrupt	= &vsc824x_ack_interrupt,
···
 	/* store current 11d setting */
 	if (brcmf_fil_cmd_int_get(ifp, BRCMF_C_GET_REGULATORY,
 				  &ifp->vif->is_11d)) {
-		supports_11d = false;
+		is_11d = supports_11d = false;
 	} else {
 		country_ie = brcmf_parse_tlvs((u8 *)settings->beacon.tail,
 					      settings->beacon.tail_len,
drivers/net/wireless/intel/iwlwifi/mvm/d3.c (+38 -11)
···
 		ret = iwl_mvm_switch_to_d3(mvm);
 		if (ret)
 			return ret;
+	} else {
+		/* In theory, we wouldn't have to stop a running sched
+		 * scan in order to start another one (for
+		 * net-detect).  But in practice this doesn't seem to
+		 * work properly, so stop any running sched_scan now.
+		 */
+		ret = iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_SCHED, true);
+		if (ret)
+			return ret;
 	}

 	/* rfkill release can be either for wowlan or netdetect */
···
 out:
 	if (ret < 0) {
 		iwl_mvm_ref(mvm, IWL_MVM_REF_UCODE_DOWN);
-		ieee80211_restart_hw(mvm->hw);
+		if (mvm->restart_fw > 0) {
+			mvm->restart_fw--;
+			ieee80211_restart_hw(mvm->hw);
+		}
 		iwl_mvm_free_nd(mvm);
 	}
 out_noreset:
···
 	iwl_mvm_update_changed_regdom(mvm);

 	if (mvm->net_detect) {
+		/* If this is a non-unified image, we restart the FW,
+		 * so no need to stop the netdetect scan.  If that
+		 * fails, continue and try to get the wake-up reasons,
+		 * but trigger a HW restart by keeping a failure code
+		 * in ret.
+		 */
+		if (unified_image)
+			ret = iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_NETDETECT,
+						false);
+
 		iwl_mvm_query_netdetect_reasons(mvm, vif);
 		/* has unlocked the mutex, so skip that */
 		goto out;
···
 static int iwl_mvm_d3_test_release(struct inode *inode, struct file *file)
 {
 	struct iwl_mvm *mvm = inode->i_private;
-	int remaining_time = 10;
+	bool unified_image = fw_has_capa(&mvm->fw->ucode_capa,
+					 IWL_UCODE_TLV_CAPA_CNSLDTD_D3_D0_IMG);

 	mvm->d3_test_active = false;
···
 	mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_DISABLED;

 	iwl_abort_notification_waits(&mvm->notif_wait);
-	ieee80211_restart_hw(mvm->hw);
+	if (!unified_image) {
+		int remaining_time = 10;

-	/* wait for restart and disconnect all interfaces */
-	while (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
-	       remaining_time > 0) {
-		remaining_time--;
-		msleep(1000);
+		ieee80211_restart_hw(mvm->hw);
+
+		/* wait for restart and disconnect all interfaces */
+		while (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
+		       remaining_time > 0) {
+			remaining_time--;
+			msleep(1000);
+		}
+
+		if (remaining_time == 0)
+			IWL_ERR(mvm, "Timed out waiting for HW restart!\n");
 	}
-
-	if (remaining_time == 0)
-		IWL_ERR(mvm, "Timed out waiting for HW restart to finish!\n");

 	ieee80211_iterate_active_interfaces_atomic(
 		mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
···

 static int iwl_mvm_check_running_scans(struct iwl_mvm *mvm, int type)
 {
+	bool unified_image = fw_has_capa(&mvm->fw->ucode_capa,
+					 IWL_UCODE_TLV_CAPA_CNSLDTD_D3_D0_IMG);
+
 	/* This looks a bit arbitrary, but the idea is that if we run
 	 * out of possible simultaneous scans and the userspace is
 	 * trying to run a scan type that is already running, we
···
 			return -EBUSY;
 		return iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_REGULAR, true);
 	case IWL_MVM_SCAN_NETDETECT:
-		/* No need to stop anything for net-detect since the
-		 * firmware is restarted anyway.  This way, any sched
-		 * scans that were running will be restarted when we
-		 * resume.
-		 */
-		return 0;
+		/* For non-unified images, there's no need to stop
+		 * anything for net-detect since the firmware is
+		 * restarted anyway.  This way, any sched scans that
+		 * were running will be restarted when we resume.
+		 */
+		if (!unified_image)
+			return 0;
+
+		/* If this is a unified image and we ran out of scans,
+		 * we need to stop something.  Prefer stopping regular
+		 * scans, because the results are useless at this
+		 * point, and we should be able to keep running
+		 * another scheduled scan while suspended.
+		 */
+		if (mvm->scan_status & IWL_MVM_SCAN_REGULAR_MASK)
+			return iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_REGULAR,
+						 true);
+		if (mvm->scan_status & IWL_MVM_SCAN_SCHED_MASK)
+			return iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_SCHED,
+						 true);
+
+		/* fall through, something is wrong if no scan was
+		 * running but we ran out of scans.
+		 */
 	default:
 		WARN_ON(1);
 		break;
+49-32
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
···
 MODULE_DEVICE_TABLE(pci, iwl_hw_card_ids);
 
 #ifdef CONFIG_ACPI
-#define SPL_METHOD		"SPLC"
-#define SPL_DOMAINTYPE_MODULE	BIT(0)
-#define SPL_DOMAINTYPE_WIFI	BIT(1)
-#define SPL_DOMAINTYPE_WIGIG	BIT(2)
-#define SPL_DOMAINTYPE_RFEM	BIT(3)
+#define ACPI_SPLC_METHOD	"SPLC"
+#define ACPI_SPLC_DOMAIN_WIFI	(0x07)
 
-static u64 splx_get_pwr_limit(struct iwl_trans *trans, union acpi_object *splx)
+static u64 splc_get_pwr_limit(struct iwl_trans *trans, union acpi_object *splc)
 {
-	union acpi_object *limits, *domain_type, *power_limit;
+	union acpi_object *data_pkg, *dflt_pwr_limit;
+	int i;
 
-	if (splx->type != ACPI_TYPE_PACKAGE ||
-	    splx->package.count != 2 ||
-	    splx->package.elements[0].type != ACPI_TYPE_INTEGER ||
-	    splx->package.elements[0].integer.value != 0) {
-		IWL_ERR(trans, "Unsupported splx structure\n");
+	/* We need at least two elements, one for the revision and one
+	 * for the data itself.  Also check that the revision is
+	 * supported (currently only revision 0).
+	 */
+	if (splc->type != ACPI_TYPE_PACKAGE ||
+	    splc->package.count < 2 ||
+	    splc->package.elements[0].type != ACPI_TYPE_INTEGER ||
+	    splc->package.elements[0].integer.value != 0) {
+		IWL_DEBUG_INFO(trans,
+			       "Unsupported structure returned by the SPLC method.  Ignoring.\n");
 		return 0;
 	}
 
-	limits = &splx->package.elements[1];
-	if (limits->type != ACPI_TYPE_PACKAGE ||
-	    limits->package.count < 2 ||
-	    limits->package.elements[0].type != ACPI_TYPE_INTEGER ||
-	    limits->package.elements[1].type != ACPI_TYPE_INTEGER) {
-		IWL_ERR(trans, "Invalid limits element\n");
+	/* loop through all the packages to find the one for WiFi */
+	for (i = 1; i < splc->package.count; i++) {
+		union acpi_object *domain;
+
+		data_pkg = &splc->package.elements[i];
+
+		/* Skip anything that is not a package with the right
+		 * amount of elements (i.e. at least 2 integers).
+		 */
+		if (data_pkg->type != ACPI_TYPE_PACKAGE ||
+		    data_pkg->package.count < 2 ||
+		    data_pkg->package.elements[0].type != ACPI_TYPE_INTEGER ||
+		    data_pkg->package.elements[1].type != ACPI_TYPE_INTEGER)
+			continue;
+
+		domain = &data_pkg->package.elements[0];
+		if (domain->integer.value == ACPI_SPLC_DOMAIN_WIFI)
+			break;
+
+		data_pkg = NULL;
+	}
+
+	if (!data_pkg) {
+		IWL_DEBUG_INFO(trans,
+			       "No element for the WiFi domain returned by the SPLC method.\n");
 		return 0;
 	}
 
-	domain_type = &limits->package.elements[0];
-	power_limit = &limits->package.elements[1];
-	if (!(domain_type->integer.value & SPL_DOMAINTYPE_WIFI)) {
-		IWL_DEBUG_INFO(trans, "WiFi power is not limited\n");
-		return 0;
-	}
-
-	return power_limit->integer.value;
+	dflt_pwr_limit = &data_pkg->package.elements[1];
+	return dflt_pwr_limit->integer.value;
 }
 
 static void set_dflt_pwr_limit(struct iwl_trans *trans, struct pci_dev *pdev)
 {
 	acpi_handle pxsx_handle;
 	acpi_handle handle;
-	struct acpi_buffer splx = {ACPI_ALLOCATE_BUFFER, NULL};
+	struct acpi_buffer splc = {ACPI_ALLOCATE_BUFFER, NULL};
 	acpi_status status;
 
 	pxsx_handle = ACPI_HANDLE(&pdev->dev);
···
 	}
 
 	/* Get the method's handle */
-	status = acpi_get_handle(pxsx_handle, (acpi_string)SPL_METHOD, &handle);
+	status = acpi_get_handle(pxsx_handle, (acpi_string)ACPI_SPLC_METHOD,
+				 &handle);
 	if (ACPI_FAILURE(status)) {
-		IWL_DEBUG_INFO(trans, "SPL method not found\n");
+		IWL_DEBUG_INFO(trans, "SPLC method not found\n");
 		return;
 	}
 
 	/* Call SPLC with no arguments */
-	status = acpi_evaluate_object(handle, NULL, NULL, &splx);
+	status = acpi_evaluate_object(handle, NULL, NULL, &splc);
 	if (ACPI_FAILURE(status)) {
 		IWL_ERR(trans, "SPLC invocation failed (0x%x)\n", status);
 		return;
 	}
 
-	trans->dflt_pwr_limit = splx_get_pwr_limit(trans, splx.pointer);
+	trans->dflt_pwr_limit = splc_get_pwr_limit(trans, splc.pointer);
 	IWL_DEBUG_INFO(trans, "Default power limit set to %lld\n",
 		       trans->dflt_pwr_limit);
-	kfree(splx.pointer);
+	kfree(splc.pointer);
 }
 
 #else /* CONFIG_ACPI */
+8
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
···
 static int iwl_pcie_txq_init(struct iwl_trans *trans, struct iwl_txq *txq,
 			     int slots_num, u32 txq_id)
 {
+	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 	int ret;
 
 	txq->need_update = false;
···
 		return ret;
 
 	spin_lock_init(&txq->lock);
+
+	if (txq_id == trans_pcie->cmd_queue) {
+		static struct lock_class_key iwl_pcie_cmd_queue_lock_class;
+
+		lockdep_set_class(&txq->lock, &iwl_pcie_cmd_queue_lock_class);
+	}
+
 	__skb_queue_head_init(&txq->overflow_q);
 
 	/*
···
 
 static unsigned long db_init = 0x7;
 module_param(db_init, ulong, 0644);
-MODULE_PARM_DESC(delay_ms, "Initial doorbell bits to ring on the peer");
+MODULE_PARM_DESC(db_init, "Initial doorbell bits to ring on the peer");
 
 struct pp_ctx {
 	struct ntb_dev *ntb;
+1-1
drivers/nvme/host/lightnvm.c
···
 
 	ret = nvm_register(dev);
 
-	ns->lba_shift = ilog2(dev->sec_size) - 9;
+	ns->lba_shift = ilog2(dev->sec_size);
 
 	if (sysfs_create_group(&dev->dev.kobj, attrs))
 		pr_warn("%s: failed to create sysfs group for identification\n",
···
 
 static void nvmet_rdma_destroy_queue_ib(struct nvmet_rdma_queue *queue)
 {
+	ib_drain_qp(queue->cm_id->qp);
 	rdma_destroy_qp(queue->cm_id);
 	ib_free_cq(queue->cq);
 }
···
 	spin_lock_init(&queue->rsp_wr_wait_lock);
 	INIT_LIST_HEAD(&queue->free_rsps);
 	spin_lock_init(&queue->rsps_lock);
+	INIT_LIST_HEAD(&queue->queue_list);
 
 	queue->idx = ida_simple_get(&nvmet_rdma_queue_ida, 0, 0, GFP_KERNEL);
 	if (queue->idx < 0) {
···
 
 	if (disconnect) {
 		rdma_disconnect(queue->cm_id);
-		ib_drain_qp(queue->cm_id->qp);
 		schedule_work(&queue->release_work);
 	}
 }
···
 {
 	WARN_ON_ONCE(queue->state != NVMET_RDMA_Q_CONNECTING);
 
-	pr_err("failed to connect queue\n");
+	mutex_lock(&nvmet_rdma_queue_mutex);
+	if (!list_empty(&queue->queue_list))
+		list_del_init(&queue->queue_list);
+	mutex_unlock(&nvmet_rdma_queue_mutex);
+
+	pr_err("failed to connect queue %d\n", queue->idx);
 	schedule_work(&queue->release_work);
 }
···
 	case RDMA_CM_EVENT_ADDR_CHANGE:
 	case RDMA_CM_EVENT_DISCONNECTED:
 	case RDMA_CM_EVENT_TIMEWAIT_EXIT:
-		nvmet_rdma_queue_disconnect(queue);
+		/*
+		 * We might end up here when we already freed the qp
+		 * which means queue release sequence is in progress,
+		 * so don't get in the way...
+		 */
+		if (queue)
+			nvmet_rdma_queue_disconnect(queue);
 		break;
 	case RDMA_CM_EVENT_DEVICE_REMOVAL:
 		ret = nvmet_rdma_device_removal(cm_id, queue);
-2
drivers/of/base.c
···
 		name = of_get_property(of_aliases, "stdout", NULL);
 		if (name)
 			of_stdout = of_find_node_opts_by_path(name, &of_stdout_options);
-		if (of_stdout)
-			console_set_by_of();
 	}
 
 	if (!of_aliases)
···
 		return -EINVAL;
 	}
 
+	/*
+	 * If we have a shadow copy in RAM, the PCI device doesn't respond
+	 * to the shadow range, so we don't need to claim it, and upstream
+	 * bridges don't need to route the range to the device.
+	 */
+	if (res->flags & IORESOURCE_ROM_SHADOW)
+		return 0;
+
 	root = pci_find_parent_resource(dev, res);
 	if (!root) {
 		dev_info(&dev->dev, "can't claim BAR %d %pR: no compatible bridge window\n",
+1-1
drivers/pcmcia/soc_common.c
···
 
 		ret = regulator_enable(r->reg);
 	} else {
-		regulator_disable(r->reg);
+		ret = regulator_disable(r->reg);
 	}
 	if (ret == 0)
 		r->on = on;
+3-2
drivers/phy/phy-da8xx-usb.c
···
 	} else {
 		int ret;
 
-		ret = phy_create_lookup(d_phy->usb11_phy, "usb-phy", "ohci.0");
+		ret = phy_create_lookup(d_phy->usb11_phy, "usb-phy",
+					"ohci-da8xx");
 		if (ret)
 			dev_warn(dev, "Failed to create usb11 phy lookup\n");
 		ret = phy_create_lookup(d_phy->usb20_phy, "usb-phy",
···
 
 	if (!pdev->dev.of_node) {
 		phy_remove_lookup(d_phy->usb20_phy, "usb-phy", "musb-da8xx");
-		phy_remove_lookup(d_phy->usb11_phy, "usb-phy", "ohci.0");
+		phy_remove_lookup(d_phy->usb11_phy, "usb-phy", "ohci-da8xx");
 	}
 
 	return 0;
···
 		return AE_OK;
 
 	if (acpi_match_device_ids(dev, ids) == 0)
-		if (acpi_create_platform_device(dev))
+		if (acpi_create_platform_device(dev, NULL))
 			dev_info(&dev->dev,
 				 "intel-hid: created platform device\n");
 
+1-1
drivers/platform/x86/intel-vbtn.c
···
 		return AE_OK;
 
 	if (acpi_match_device_ids(dev, ids) == 0)
-		if (acpi_create_platform_device(dev))
+		if (acpi_create_platform_device(dev, NULL))
 			dev_info(&dev->dev,
 				 "intel-vbtn: created platform device\n");
 
+19-7
drivers/platform/x86/toshiba-wmi.c
···
 #include <linux/acpi.h>
 #include <linux/input.h>
 #include <linux/input/sparse-keymap.h>
+#include <linux/dmi.h>
 
 MODULE_AUTHOR("Azael Avalos");
 MODULE_DESCRIPTION("Toshiba WMI Hotkey Driver");
 MODULE_LICENSE("GPL");
 
-#define TOSHIBA_WMI_EVENT_GUID "59142400-C6A3-40FA-BADB-8A2652834100"
+#define WMI_EVENT_GUID "59142400-C6A3-40FA-BADB-8A2652834100"
 
-MODULE_ALIAS("wmi:"TOSHIBA_WMI_EVENT_GUID);
+MODULE_ALIAS("wmi:"WMI_EVENT_GUID);
 
 static struct input_dev *toshiba_wmi_input_dev;
···
 	kfree(response.pointer);
 }
 
+static struct dmi_system_id toshiba_wmi_dmi_table[] __initdata = {
+	{
+		.ident = "Toshiba laptop",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+		},
+	},
+	{}
+};
+
 static int __init toshiba_wmi_input_setup(void)
 {
 	acpi_status status;
···
 	if (err)
 		goto err_free_dev;
 
-	status = wmi_install_notify_handler(TOSHIBA_WMI_EVENT_GUID,
+	status = wmi_install_notify_handler(WMI_EVENT_GUID,
 					    toshiba_wmi_notify, NULL);
 	if (ACPI_FAILURE(status)) {
 		err = -EIO;
···
 	return 0;
 
  err_remove_notifier:
-	wmi_remove_notify_handler(TOSHIBA_WMI_EVENT_GUID);
+	wmi_remove_notify_handler(WMI_EVENT_GUID);
  err_free_keymap:
 	sparse_keymap_free(toshiba_wmi_input_dev);
  err_free_dev:
···
 
 static void toshiba_wmi_input_destroy(void)
 {
-	wmi_remove_notify_handler(TOSHIBA_WMI_EVENT_GUID);
+	wmi_remove_notify_handler(WMI_EVENT_GUID);
 	sparse_keymap_free(toshiba_wmi_input_dev);
 	input_unregister_device(toshiba_wmi_input_dev);
 }
···
 {
 	int ret;
 
-	if (!wmi_has_guid(TOSHIBA_WMI_EVENT_GUID))
+	if (!wmi_has_guid(WMI_EVENT_GUID) ||
+	    !dmi_check_system(toshiba_wmi_dmi_table))
 		return -ENODEV;
 
 	ret = toshiba_wmi_input_setup();
···
 
 static void __exit toshiba_wmi_exit(void)
 {
-	if (wmi_has_guid(TOSHIBA_WMI_EVENT_GUID))
+	if (wmi_has_guid(WMI_EVENT_GUID))
 		toshiba_wmi_input_destroy();
 }
 
···
 	srb_t *sp;
 	int rval;
 
+	if (unlikely(test_bit(UNLOADING, &base_vha->dpc_flags))) {
+		cmd->result = DID_NO_CONNECT << 16;
+		goto qc24_fail_command;
+	}
+
 	if (ha->flags.eeh_busy) {
 		if (ha->flags.pci_channel_io_perm_failure) {
 			ql_dbg(ql_dbg_aer, vha, 0x9010,
···
 	for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
 		sp = req->outstanding_cmds[cnt];
 		if (sp) {
+			/* Get a reference to the sp and drop the lock.
+			 * The reference ensures this sp->done() call
+			 * - and not the call in qla2xxx_eh_abort() -
+			 * ends the SCSI command (with result 'res').
+			 */
+			sp_get(sp);
+			spin_unlock_irqrestore(&ha->hardware_lock, flags);
+			qla2xxx_eh_abort(GET_CMD_SP(sp));
+			spin_lock_irqsave(&ha->hardware_lock, flags);
 			req->outstanding_cmds[cnt] = NULL;
 			sp->done(vha, sp, res);
 		}
···
 {
 	scsi_qla_host_t *vha = shost_priv(shost);
 
+	if (test_bit(UNLOADING, &vha->dpc_flags))
+		return 1;
 	if (!vha->host)
 		return 1;
 	if (time > vha->hw->loop_reset_delay * HZ)
+3-2
drivers/scsi/vmw_pvscsi.c
···
 	unsigned long flags;
 	int result = SUCCESS;
 	DECLARE_COMPLETION_ONSTACK(abort_cmp);
+	int done;
 
 	scmd_printk(KERN_DEBUG, cmd, "task abort on host %u, %p\n",
 		    adapter->host->host_no, cmd);
···
 	pvscsi_abort_cmd(adapter, ctx);
 	spin_unlock_irqrestore(&adapter->hw_lock, flags);
 	/* Wait for 2 secs for the completion. */
-	wait_for_completion_timeout(&abort_cmp, msecs_to_jiffies(2000));
+	done = wait_for_completion_timeout(&abort_cmp, msecs_to_jiffies(2000));
 	spin_lock_irqsave(&adapter->hw_lock, flags);
 
-	if (!completion_done(&abort_cmp)) {
+	if (!done) {
 		/*
 		 * Failed to abort the command, unmark the fact that it
 		 * was requested to be aborted.
···
 		 * clock period is specified by user with prescaling
 		 * already taken into account.
 		 */
-		return counter->clock_period_ps;
+		*period_ps = counter->clock_period_ps;
+		return 0;
 	}
 
 	switch (generic_clock_source & NI_GPCT_PRESCALE_MODE_CLOCK_SRC_MASK) {
···
 	}
 	val = readl(base + ext_cap_offset);
 
+	/* Auto handoff never worked for these devices. Force it and continue */
+	if ((pdev->vendor == PCI_VENDOR_ID_TI && pdev->device == 0x8241) ||
+	    (pdev->vendor == PCI_VENDOR_ID_RENESAS
+	     && pdev->device == 0x0014)) {
+		val = (val | XHCI_HC_OS_OWNED) & ~XHCI_HC_BIOS_OWNED;
+		writel(val, base + ext_cap_offset);
+	}
+
 	/* If the BIOS owns the HC, signal that the OS wants it, and wait */
 	if (val & XHCI_HC_BIOS_OWNED) {
 		writel(val | XHCI_HC_OS_OWNED, base + ext_cap_offset);
+2-1
drivers/usb/musb/da8xx.c
···
 
 	glue->phy = devm_phy_get(&pdev->dev, "usb-phy");
 	if (IS_ERR(glue->phy)) {
-		dev_err(&pdev->dev, "failed to get phy\n");
+		if (PTR_ERR(glue->phy) != -EPROBE_DEFER)
+			dev_err(&pdev->dev, "failed to get phy\n");
 		return PTR_ERR(glue->phy);
 	}
 
-5
drivers/usb/musb/musb_core.c
···
 		musb->io.ep_offset = musb_flat_ep_offset;
 		musb->io.ep_select = musb_flat_ep_select;
 	}
-	/* And override them with platform specific ops if specified. */
-	if (musb->ops->ep_offset)
-		musb->io.ep_offset = musb->ops->ep_offset;
-	if (musb->ops->ep_select)
-		musb->io.ep_select = musb->ops->ep_select;
 
 	/* At least tusb6010 has its own offsets */
 	if (musb->ops->ep_offset)
+13-3
drivers/uwb/lc-rc.c
···
 	struct uwb_rc *rc = NULL;
 
 	dev = class_find_device(&uwb_rc_class, NULL, &index, uwb_rc_index_match);
-	if (dev)
+	if (dev) {
 		rc = dev_get_drvdata(dev);
+		put_device(dev);
+	}
+
 	return rc;
 }
···
 	if (dev) {
 		rc = dev_get_drvdata(dev);
 		__uwb_rc_get(rc);
+		put_device(dev);
 	}
+
 	return rc;
 }
 EXPORT_SYMBOL_GPL(__uwb_rc_try_get);
···
 
 	dev = class_find_device(&uwb_rc_class, NULL, grandpa_dev,
 				find_rc_grandpa);
-	if (dev)
+	if (dev) {
 		rc = dev_get_drvdata(dev);
+		put_device(dev);
+	}
+
 	return rc;
 }
 EXPORT_SYMBOL_GPL(uwb_rc_get_by_grandpa);
···
 	struct uwb_rc *rc = NULL;
 
 	dev = class_find_device(&uwb_rc_class, NULL, addr, find_rc_dev);
-	if (dev)
+	if (dev) {
 		rc = dev_get_drvdata(dev);
+		put_device(dev);
+	}
 
 	return rc;
 }
···
 	np = of_find_matching_node_and_match(NULL, versatile_clcd_of_match,
 					     &clcd_id);
 	if (!np) {
-		dev_err(dev, "no Versatile syscon node\n");
-		return -ENODEV;
+		/* Vexpress does not have this */
+		return 0;
 	}
 	versatile_clcd_type = (enum versatile_clcd)clcd_id->data;
 
+114-103
fs/aio.c
···
 	unsigned tail, pos, head;
 	unsigned long flags;
 
+	if (kiocb->ki_flags & IOCB_WRITE) {
+		struct file *file = kiocb->ki_filp;
+
+		/*
+		 * Tell lockdep we inherited freeze protection from submission
+		 * thread.
+		 */
+		__sb_writers_acquired(file_inode(file)->i_sb, SB_FREEZE_WRITE);
+		file_end_write(file);
+	}
+
 	/*
 	 * Special case handling for sync iocbs:
 	 *  - events go directly into the iocb for fast handling
···
 	return -EINVAL;
 }
 
-typedef ssize_t (rw_iter_op)(struct kiocb *, struct iov_iter *);
-
-static int aio_setup_vectored_rw(int rw, char __user *buf, size_t len,
-				 struct iovec **iovec,
-				 bool compat,
-				 struct iov_iter *iter)
+static int aio_setup_rw(int rw, struct iocb *iocb, struct iovec **iovec,
+		bool vectored, bool compat, struct iov_iter *iter)
 {
+	void __user *buf = (void __user *)(uintptr_t)iocb->aio_buf;
+	size_t len = iocb->aio_nbytes;
+
+	if (!vectored) {
+		ssize_t ret = import_single_range(rw, buf, len, *iovec, iter);
+		*iovec = NULL;
+		return ret;
+	}
 #ifdef CONFIG_COMPAT
 	if (compat)
-		return compat_import_iovec(rw,
-				(struct compat_iovec __user *)buf,
-				len, UIO_FASTIOV, iovec, iter);
+		return compat_import_iovec(rw, buf, len, UIO_FASTIOV, iovec,
+				iter);
 #endif
-	return import_iovec(rw, (struct iovec __user *)buf,
-				len, UIO_FASTIOV, iovec, iter);
+	return import_iovec(rw, buf, len, UIO_FASTIOV, iovec, iter);
 }
 
-/*
- * aio_run_iocb:
- *	Performs the initial checks and io submission.
- */
-static ssize_t aio_run_iocb(struct kiocb *req, unsigned opcode,
-			    char __user *buf, size_t len, bool compat)
+static inline ssize_t aio_ret(struct kiocb *req, ssize_t ret)
 {
-	struct file *file = req->ki_filp;
-	ssize_t ret;
-	int rw;
-	fmode_t mode;
-	rw_iter_op *iter_op;
-	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
-	struct iov_iter iter;
-
-	switch (opcode) {
-	case IOCB_CMD_PREAD:
-	case IOCB_CMD_PREADV:
-		mode	= FMODE_READ;
-		rw	= READ;
-		iter_op	= file->f_op->read_iter;
-		goto rw_common;
-
-	case IOCB_CMD_PWRITE:
-	case IOCB_CMD_PWRITEV:
-		mode	= FMODE_WRITE;
-		rw	= WRITE;
-		iter_op	= file->f_op->write_iter;
-		goto rw_common;
-rw_common:
-		if (unlikely(!(file->f_mode & mode)))
-			return -EBADF;
-
-		if (!iter_op)
-			return -EINVAL;
-
-		if (opcode == IOCB_CMD_PREADV || opcode == IOCB_CMD_PWRITEV)
-			ret = aio_setup_vectored_rw(rw, buf, len,
-						&iovec, compat, &iter);
-		else {
-			ret = import_single_range(rw, buf, len, iovec, &iter);
-			iovec = NULL;
-		}
-		if (!ret)
-			ret = rw_verify_area(rw, file, &req->ki_pos,
-					     iov_iter_count(&iter));
-		if (ret < 0) {
-			kfree(iovec);
-			return ret;
-		}
-
-		if (rw == WRITE)
-			file_start_write(file);
-
-		ret = iter_op(req, &iter);
-
-		if (rw == WRITE)
-			file_end_write(file);
-		kfree(iovec);
-		break;
-
-	case IOCB_CMD_FDSYNC:
-		if (!file->f_op->aio_fsync)
-			return -EINVAL;
-
-		ret = file->f_op->aio_fsync(req, 1);
-		break;
-
-	case IOCB_CMD_FSYNC:
-		if (!file->f_op->aio_fsync)
-			return -EINVAL;
-
-		ret = file->f_op->aio_fsync(req, 0);
-		break;
-
-	default:
-		pr_debug("EINVAL: no operation provided\n");
-		return -EINVAL;
-	}
-
-	if (ret != -EIOCBQUEUED) {
+	switch (ret) {
+	case -EIOCBQUEUED:
+		return ret;
+	case -ERESTARTSYS:
+	case -ERESTARTNOINTR:
+	case -ERESTARTNOHAND:
+	case -ERESTART_RESTARTBLOCK:
 		/*
 		 * There's no easy way to restart the syscall since other AIO's
 		 * may be already running. Just fail this IO with EINTR.
 		 */
-		if (unlikely(ret == -ERESTARTSYS || ret == -ERESTARTNOINTR ||
-			     ret == -ERESTARTNOHAND ||
-			     ret == -ERESTART_RESTARTBLOCK))
-			ret = -EINTR;
+		ret = -EINTR;
+		/*FALLTHRU*/
+	default:
 		aio_complete(req, ret, 0);
+		return 0;
 	}
+}
 
-	return 0;
+static ssize_t aio_read(struct kiocb *req, struct iocb *iocb, bool vectored,
+		bool compat)
+{
+	struct file *file = req->ki_filp;
+	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+	struct iov_iter iter;
+	ssize_t ret;
+
+	if (unlikely(!(file->f_mode & FMODE_READ)))
+		return -EBADF;
+	if (unlikely(!file->f_op->read_iter))
+		return -EINVAL;
+
+	ret = aio_setup_rw(READ, iocb, &iovec, vectored, compat, &iter);
+	if (ret)
+		return ret;
+	ret = rw_verify_area(READ, file, &req->ki_pos, iov_iter_count(&iter));
+	if (!ret)
+		ret = aio_ret(req, file->f_op->read_iter(req, &iter));
+	kfree(iovec);
+	return ret;
+}
+
+static ssize_t aio_write(struct kiocb *req, struct iocb *iocb, bool vectored,
+		bool compat)
+{
+	struct file *file = req->ki_filp;
+	struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;
+	struct iov_iter iter;
+	ssize_t ret;
+
+	if (unlikely(!(file->f_mode & FMODE_WRITE)))
+		return -EBADF;
+	if (unlikely(!file->f_op->write_iter))
+		return -EINVAL;
+
+	ret = aio_setup_rw(WRITE, iocb, &iovec, vectored, compat, &iter);
+	if (ret)
+		return ret;
+	ret = rw_verify_area(WRITE, file, &req->ki_pos, iov_iter_count(&iter));
+	if (!ret) {
+		req->ki_flags |= IOCB_WRITE;
+		file_start_write(file);
+		ret = aio_ret(req, file->f_op->write_iter(req, &iter));
+		/*
+		 * We release freeze protection in aio_complete().  Fool lockdep
+		 * by telling it the lock got released so that it doesn't
+		 * complain about held lock when we return to userspace.
+		 */
+		__sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
+	}
+	kfree(iovec);
+	return ret;
 }
 
 static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb,
 			 struct iocb *iocb, bool compat)
 {
 	struct aio_kiocb *req;
+	struct file *file;
 	ssize_t ret;
 
 	/* enforce forwards compatibility on users */
···
 	if (unlikely(!req))
 		return -EAGAIN;
 
-	req->common.ki_filp = fget(iocb->aio_fildes);
+	req->common.ki_filp = file = fget(iocb->aio_fildes);
 	if (unlikely(!req->common.ki_filp)) {
 		ret = -EBADF;
 		goto out_put_req;
···
 	req->ki_user_iocb = user_iocb;
 	req->ki_user_data = iocb->aio_data;
 
-	ret = aio_run_iocb(&req->common, iocb->aio_lio_opcode,
-			   (char __user *)(unsigned long)iocb->aio_buf,
-			   iocb->aio_nbytes,
-			   compat);
-	if (ret)
-		goto out_put_req;
+	get_file(file);
+	switch (iocb->aio_lio_opcode) {
+	case IOCB_CMD_PREAD:
+		ret = aio_read(&req->common, iocb, false, compat);
+		break;
+	case IOCB_CMD_PWRITE:
+		ret = aio_write(&req->common, iocb, false, compat);
+		break;
+	case IOCB_CMD_PREADV:
+		ret = aio_read(&req->common, iocb, true, compat);
+		break;
+	case IOCB_CMD_PWRITEV:
+		ret = aio_write(&req->common, iocb, true, compat);
+		break;
+	default:
+		pr_debug("invalid aio operation %d\n", iocb->aio_lio_opcode);
+		ret = -EINVAL;
+		break;
+	}
+	fput(file);
 
+	if (ret && ret != -EIOCBQUEUED)
+		goto out_put_req;
 	return 0;
 out_put_req:
 	put_reqs_available(ctx, 1);
···
 #include <linux/slab.h>
 #include <linux/file.h>
 #include <linux/fdtable.h>
+#include <linux/freezer.h>
 #include <linux/mm.h>
 #include <linux/stat.h>
 #include <linux/fcntl.h>
···
 	if (core_waiters > 0) {
 		struct core_thread *ptr;
 
+		freezer_do_not_count();
 		wait_for_completion(&core_state->startup);
+		freezer_count();
 		/*
 		 * Wait for all the threads to become inactive, so that
 		 * all the thread context (extended register state, like
+21-32
fs/crypto/fname.c
···
 static int fname_encrypt(struct inode *inode,
 			const struct qstr *iname, struct fscrypt_str *oname)
 {
-	u32 ciphertext_len;
 	struct skcipher_request *req = NULL;
 	DECLARE_FS_COMPLETION_RESULT(ecr);
 	struct fscrypt_info *ci = inode->i_crypt_info;
 	struct crypto_skcipher *tfm = ci->ci_ctfm;
 	int res = 0;
 	char iv[FS_CRYPTO_BLOCK_SIZE];
-	struct scatterlist src_sg, dst_sg;
+	struct scatterlist sg;
 	int padding = 4 << (ci->ci_flags & FS_POLICY_FLAGS_PAD_MASK);
-	char *workbuf, buf[32], *alloc_buf = NULL;
-	unsigned lim;
+	unsigned int lim;
+	unsigned int cryptlen;
 
 	lim = inode->i_sb->s_cop->max_namelen(inode);
 	if (iname->len <= 0 || iname->len > lim)
 		return -EIO;
 
-	ciphertext_len = max(iname->len, (u32)FS_CRYPTO_BLOCK_SIZE);
-	ciphertext_len = round_up(ciphertext_len, padding);
-	ciphertext_len = min(ciphertext_len, lim);
+	/*
+	 * Copy the filename to the output buffer for encrypting in-place and
+	 * pad it with the needed number of NUL bytes.
+	 */
+	cryptlen = max_t(unsigned int, iname->len, FS_CRYPTO_BLOCK_SIZE);
+	cryptlen = round_up(cryptlen, padding);
+	cryptlen = min(cryptlen, lim);
+	memcpy(oname->name, iname->name, iname->len);
+	memset(oname->name + iname->len, 0, cryptlen - iname->len);
 
-	if (ciphertext_len <= sizeof(buf)) {
-		workbuf = buf;
-	} else {
-		alloc_buf = kmalloc(ciphertext_len, GFP_NOFS);
-		if (!alloc_buf)
-			return -ENOMEM;
-		workbuf = alloc_buf;
-	}
+	/* Initialize the IV */
+	memset(iv, 0, FS_CRYPTO_BLOCK_SIZE);
 
-	/* Allocate request */
+	/* Set up the encryption request */
 	req = skcipher_request_alloc(tfm, GFP_NOFS);
 	if (!req) {
 		printk_ratelimited(KERN_ERR
-			"%s: crypto_request_alloc() failed\n", __func__);
-		kfree(alloc_buf);
+			"%s: skcipher_request_alloc() failed\n", __func__);
 		return -ENOMEM;
 	}
 	skcipher_request_set_callback(req,
 			CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
 			fname_crypt_complete, &ecr);
+	sg_init_one(&sg, oname->name, cryptlen);
+	skcipher_request_set_crypt(req, &sg, &sg, cryptlen, iv);
 
-	/* Copy the input */
-	memcpy(workbuf, iname->name, iname->len);
-	if (iname->len < ciphertext_len)
-		memset(workbuf + iname->len, 0, ciphertext_len - iname->len);
-
-	/* Initialize IV */
-	memset(iv, 0, FS_CRYPTO_BLOCK_SIZE);
-
-	/* Create encryption request */
-	sg_init_one(&src_sg, workbuf, ciphertext_len);
-	sg_init_one(&dst_sg, oname->name, ciphertext_len);
-	skcipher_request_set_crypt(req, &src_sg, &dst_sg, ciphertext_len, iv);
+	/* Do the encryption */
 	res = crypto_skcipher_encrypt(req);
 	if (res == -EINPROGRESS || res == -EBUSY) {
+		/* Request is being completed asynchronously; wait for it */
 		wait_for_completion(&ecr.completion);
 		res = ecr.res;
 	}
-	kfree(alloc_buf);
 	skcipher_request_free(req);
 	if (res < 0) {
 		printk_ratelimited(KERN_ERR
···
 		return res;
 	}
 
-	oname->len = ciphertext_len;
+	oname->len = cryptlen;
 	return 0;
 }
+13-3
fs/crypto/keyinfo.c
···
 	struct crypto_skcipher *ctfm;
 	const char *cipher_str;
 	int keysize;
-	u8 raw_key[FS_MAX_KEY_SIZE];
+	u8 *raw_key = NULL;
 	int res;
 
 	res = fscrypt_initialize();
···
 	if (res)
 		goto out;
 
+	/*
+	 * This cannot be a stack buffer because it is passed to the scatterlist
+	 * crypto API as part of key derivation.
+	 */
+	res = -ENOMEM;
+	raw_key = kmalloc(FS_MAX_KEY_SIZE, GFP_NOFS);
+	if (!raw_key)
+		goto out;
+
 	if (fscrypt_dummy_context_enabled(inode)) {
 		memset(raw_key, 0x42, FS_AES_256_XTS_KEY_SIZE);
 		goto got_key;
···
 	if (res)
 		goto out;
 
-	memzero_explicit(raw_key, sizeof(raw_key));
+	kzfree(raw_key);
+	raw_key = NULL;
 	if (cmpxchg(&inode->i_crypt_info, NULL, crypt_info) != NULL) {
 		put_crypt_info(crypt_info);
 		goto retry;
···
 	if (res == -ENOKEY)
 		res = 0;
 	put_crypt_info(crypt_info);
-	memzero_explicit(raw_key, sizeof(raw_key));
+	kzfree(raw_key);
 	return res;
 }
···
 
 	err = -ENOMEM;
 	root = fuse_get_root_inode(sb, d.rootmode);
+	sb->s_d_op = &fuse_root_dentry_operations;
 	root_dentry = d_make_root(root);
 	if (!root_dentry)
 		goto err_dev_free;
-	/* only now - we want root dentry with NULL ->d_op */
+	/* Root dentry doesn't have .d_revalidate */
 	sb->s_d_op = &fuse_dentry_operations;
 
 	init_req = fuse_request_alloc(0);
+2-1
fs/nfs/client.c
···
 		/* Match the full socket address */
 		if (!rpc_cmp_addr_port(sap, clap))
 			/* Match all xprt_switch full socket addresses */
-			if (!rpc_clnt_xprt_switch_has_addr(clp->cl_rpcclient,
+			if (IS_ERR(clp->cl_rpcclient) ||
+			    !rpc_clnt_xprt_switch_has_addr(clp->cl_rpcclient,
 							   sap))
 				continue;
 
+1-1
fs/nfs/namespace.c
···
 		return end;
 	}
 	namelen = strlen(base);
-	if (flags & NFS_PATH_CANONICAL) {
+	if (*end == '/') {
 		/* Strip off excess slashes in base string */
 		while (namelen > 0 && base[namelen - 1] == '/')
 			namelen--;
+7-5
fs/nfs/nfs4session.c
···
 	__must_hold(&tbl->slot_tbl_lock)
 {
 	struct nfs4_slot *slot;
+	int ret;
 
 	slot = nfs4_lookup_slot(tbl, slotid);
-	if (IS_ERR(slot))
-		return PTR_ERR(slot);
-	*seq_nr = slot->seq_nr;
-	return 0;
+	ret = PTR_ERR_OR_ZERO(slot);
+	if (!ret)
+		*seq_nr = slot->seq_nr;
+
+	return ret;
 }
 
 /*
···
 static bool nfs4_slot_seqid_in_use(struct nfs4_slot_table *tbl,
 		u32 slotid, u32 seq_nr)
 {
-	u32 cur_seq;
+	u32 cur_seq = 0;
 	bool ret = false;
 
 	spin_lock(&tbl->slot_tbl_lock);
+2
fs/nfs/pnfs.c
···
 	u32 id;
 	int i;
 
+	if (fsinfo->nlayouttypes == 0)
+		goto out_no_driver;
 	if (!(server->nfs_client->cl_exchange_flags &
 	      (EXCHGID4_FLAG_USE_NON_PNFS | EXCHGID4_FLAG_USE_PNFS_MDS))) {
 		printk(KERN_ERR "NFS: %s: cl_exchange_flags 0x%x\n",
-2
fs/ntfs/dir.c
···
 	.iterate	= ntfs_readdir,		/* Read directory contents. */
 #ifdef NTFS_RW
 	.fsync		= ntfs_dir_fsync,	/* Sync a directory to disk. */
-	/*.aio_fsync	= ,*/			/* Sync all outstanding async
-						   i/o operations on a kiocb. */
 #endif /* NTFS_RW */
 	/*.ioctl	= ,*/			/* Perform function on the
 						   mounted filesystem. */
+1-1
fs/ocfs2/dir.c
···
 static int ocfs2_dx_dir_rebalance_credits(struct ocfs2_super *osb,
                       struct ocfs2_dx_root_block *dx_root)
 {
-    int credits = ocfs2_clusters_to_blocks(osb->sb, 2);
+    int credits = ocfs2_clusters_to_blocks(osb->sb, 3);

     credits += ocfs2_calc_extend_credits(osb->sb, &dx_root->dr_list);
     credits += ocfs2_quota_trans_credits(osb->sb);
···
                const void *value, size_t size, int flags)
 {
     struct inode *inode = dentry->d_inode;
-    int error = -EOPNOTSUPP;
+    int error = -EAGAIN;
     int issec = !strncmp(name, XATTR_SECURITY_PREFIX,
                  XATTR_SECURITY_PREFIX_LEN);
···
         security_inode_post_setxattr(dentry, name, value,
                          size, flags);
         }
-    } else if (issec) {
-        const char *suffix = name + XATTR_SECURITY_PREFIX_LEN;
-
+    } else {
         if (unlikely(is_bad_inode(inode)))
             return -EIO;
-        error = security_inode_setsecurity(inode, suffix, value,
-                           size, flags);
-        if (!error)
-            fsnotify_xattr(dentry);
+    }
+    if (error == -EAGAIN) {
+        error = -EOPNOTSUPP;
+
+        if (issec) {
+            const char *suffix = name + XATTR_SECURITY_PREFIX_LEN;
+
+            error = security_inode_setsecurity(inode, suffix, value,
+                               size, flags);
+            if (!error)
+                fsnotify_xattr(dentry);
+        }
     }

     return error;
+5-12
fs/xfs/libxfs/xfs_defer.c
···
     struct xfs_defer_pending *dfp;

     list_for_each_entry(dfp, &dop->dop_intake, dfp_list) {
-        trace_xfs_defer_intake_work(tp->t_mountp, dfp);
         dfp->dfp_intent = dfp->dfp_type->create_intent(tp,
                 dfp->dfp_count);
+        trace_xfs_defer_intake_work(tp->t_mountp, dfp);
         list_sort(tp->t_mountp, &dfp->dfp_work,
                 dfp->dfp_type->diff_items);
         list_for_each(li, &dfp->dfp_work)
···
     struct xfs_defer_pending *dfp;

     trace_xfs_defer_trans_abort(tp->t_mountp, dop);
-    /*
-     * If the transaction was committed, drop the intent reference
-     * since we're bailing out of here. The other reference is
-     * dropped when the intent hits the AIL. If the transaction
-     * was not committed, the intent is freed by the intent item
-     * unlock handler on abort.
-     */
-    if (!dop->dop_committed)
-        return;

-    /* Abort intent items. */
+    /* Abort intent items that don't have a done item. */
     list_for_each_entry(dfp, &dop->dop_pending, dfp_list) {
         trace_xfs_defer_pending_abort(tp->t_mountp, dfp);
-        if (!dfp->dfp_done)
+        if (dfp->dfp_intent && !dfp->dfp_done) {
             dfp->dfp_type->abort_intent(dfp->dfp_intent);
+            dfp->dfp_intent = NULL;
+        }
     }

     /* Shut down FS. */
+70-94
include/acpi/actbl.h
···
 /* Fields common to all versions of the FADT */

 struct acpi_table_fadt {
-    struct acpi_table_header header;    /* [V1] Common ACPI table header */
-    u32 facs;               /* [V1] 32-bit physical address of FACS */
-    u32 dsdt;               /* [V1] 32-bit physical address of DSDT */
-    u8 model;               /* [V1] System Interrupt Model (ACPI 1.0) - not used in ACPI 2.0+ */
-    u8 preferred_profile;   /* [V1] Conveys preferred power management profile to OSPM. */
-    u16 sci_interrupt;      /* [V1] System vector of SCI interrupt */
-    u32 smi_command;        /* [V1] 32-bit Port address of SMI command port */
-    u8 acpi_enable;         /* [V1] Value to write to SMI_CMD to enable ACPI */
-    u8 acpi_disable;        /* [V1] Value to write to SMI_CMD to disable ACPI */
-    u8 s4_bios_request;     /* [V1] Value to write to SMI_CMD to enter S4BIOS state */
-    u8 pstate_control;      /* [V1] Processor performance state control */
-    u32 pm1a_event_block;   /* [V1] 32-bit port address of Power Mgt 1a Event Reg Blk */
-    u32 pm1b_event_block;   /* [V1] 32-bit port address of Power Mgt 1b Event Reg Blk */
-    u32 pm1a_control_block; /* [V1] 32-bit port address of Power Mgt 1a Control Reg Blk */
-    u32 pm1b_control_block; /* [V1] 32-bit port address of Power Mgt 1b Control Reg Blk */
-    u32 pm2_control_block;  /* [V1] 32-bit port address of Power Mgt 2 Control Reg Blk */
-    u32 pm_timer_block;     /* [V1] 32-bit port address of Power Mgt Timer Ctrl Reg Blk */
-    u32 gpe0_block;         /* [V1] 32-bit port address of General Purpose Event 0 Reg Blk */
-    u32 gpe1_block;         /* [V1] 32-bit port address of General Purpose Event 1 Reg Blk */
-    u8 pm1_event_length;    /* [V1] Byte Length of ports at pm1x_event_block */
-    u8 pm1_control_length;  /* [V1] Byte Length of ports at pm1x_control_block */
-    u8 pm2_control_length;  /* [V1] Byte Length of ports at pm2_control_block */
-    u8 pm_timer_length;     /* [V1] Byte Length of ports at pm_timer_block */
-    u8 gpe0_block_length;   /* [V1] Byte Length of ports at gpe0_block */
-    u8 gpe1_block_length;   /* [V1] Byte Length of ports at gpe1_block */
-    u8 gpe1_base;           /* [V1] Offset in GPE number space where GPE1 events start */
-    u8 cst_control;         /* [V1] Support for the _CST object and C-States change notification */
-    u16 c2_latency;         /* [V1] Worst case HW latency to enter/exit C2 state */
-    u16 c3_latency;         /* [V1] Worst case HW latency to enter/exit C3 state */
-    u16 flush_size;         /* [V1] Processor memory cache line width, in bytes */
-    u16 flush_stride;       /* [V1] Number of flush strides that need to be read */
-    u8 duty_offset;         /* [V1] Processor duty cycle index in processor P_CNT reg */
-    u8 duty_width;          /* [V1] Processor duty cycle value bit width in P_CNT register */
-    u8 day_alarm;           /* [V1] Index to day-of-month alarm in RTC CMOS RAM */
-    u8 month_alarm;         /* [V1] Index to month-of-year alarm in RTC CMOS RAM */
-    u8 century;             /* [V1] Index to century in RTC CMOS RAM */
-    u16 boot_flags;         /* [V3] IA-PC Boot Architecture Flags (see below for individual flags) */
-    u8 reserved;            /* [V1] Reserved, must be zero */
-    u32 flags;              /* [V1] Miscellaneous flag bits (see below for individual flags) */
-    /* End of Version 1 FADT fields (ACPI 1.0) */
-
-    struct acpi_generic_address reset_register; /* [V3] 64-bit address of the Reset register */
-    u8 reset_value;         /* [V3] Value to write to the reset_register port to reset the system */
-    u16 arm_boot_flags;     /* [V5] ARM-Specific Boot Flags (see below for individual flags) (ACPI 5.1) */
-    u8 minor_revision;      /* [V5] FADT Minor Revision (ACPI 5.1) */
-    u64 Xfacs;              /* [V3] 64-bit physical address of FACS */
-    u64 Xdsdt;              /* [V3] 64-bit physical address of DSDT */
-    struct acpi_generic_address xpm1a_event_block;   /* [V3] 64-bit Extended Power Mgt 1a Event Reg Blk address */
-    struct acpi_generic_address xpm1b_event_block;   /* [V3] 64-bit Extended Power Mgt 1b Event Reg Blk address */
-    struct acpi_generic_address xpm1a_control_block; /* [V3] 64-bit Extended Power Mgt 1a Control Reg Blk address */
-    struct acpi_generic_address xpm1b_control_block; /* [V3] 64-bit Extended Power Mgt 1b Control Reg Blk address */
-    struct acpi_generic_address xpm2_control_block;  /* [V3] 64-bit Extended Power Mgt 2 Control Reg Blk address */
-    struct acpi_generic_address xpm_timer_block;     /* [V3] 64-bit Extended Power Mgt Timer Ctrl Reg Blk address */
-    struct acpi_generic_address xgpe0_block;         /* [V3] 64-bit Extended General Purpose Event 0 Reg Blk address */
-    struct acpi_generic_address xgpe1_block;         /* [V3] 64-bit Extended General Purpose Event 1 Reg Blk address */
-    /* End of Version 3 FADT fields (ACPI 2.0) */
-
-    struct acpi_generic_address sleep_control;       /* [V4] 64-bit Sleep Control register (ACPI 5.0) */
-    /* End of Version 4 FADT fields (ACPI 3.0 and ACPI 4.0) (Field was originally reserved in ACPI 3.0) */
-
-    struct acpi_generic_address sleep_status;        /* [V5] 64-bit Sleep Status register (ACPI 5.0) */
-    /* End of Version 5 FADT fields (ACPI 5.0) */
-
-    u64 hypervisor_id;      /* [V6] Hypervisor Vendor ID (ACPI 6.0) */
-    /* End of Version 6 FADT fields (ACPI 6.0) */
-
+    struct acpi_table_header header;    /* Common ACPI table header */
+    u32 facs;               /* 32-bit physical address of FACS */
+    u32 dsdt;               /* 32-bit physical address of DSDT */
+    u8 model;               /* System Interrupt Model (ACPI 1.0) - not used in ACPI 2.0+ */
+    u8 preferred_profile;   /* Conveys preferred power management profile to OSPM. */
+    u16 sci_interrupt;      /* System vector of SCI interrupt */
+    u32 smi_command;        /* 32-bit Port address of SMI command port */
+    u8 acpi_enable;         /* Value to write to SMI_CMD to enable ACPI */
+    u8 acpi_disable;        /* Value to write to SMI_CMD to disable ACPI */
+    u8 s4_bios_request;     /* Value to write to SMI_CMD to enter S4BIOS state */
+    u8 pstate_control;      /* Processor performance state control */
+    u32 pm1a_event_block;   /* 32-bit port address of Power Mgt 1a Event Reg Blk */
+    u32 pm1b_event_block;   /* 32-bit port address of Power Mgt 1b Event Reg Blk */
+    u32 pm1a_control_block; /* 32-bit port address of Power Mgt 1a Control Reg Blk */
+    u32 pm1b_control_block; /* 32-bit port address of Power Mgt 1b Control Reg Blk */
+    u32 pm2_control_block;  /* 32-bit port address of Power Mgt 2 Control Reg Blk */
+    u32 pm_timer_block;     /* 32-bit port address of Power Mgt Timer Ctrl Reg Blk */
+    u32 gpe0_block;         /* 32-bit port address of General Purpose Event 0 Reg Blk */
+    u32 gpe1_block;         /* 32-bit port address of General Purpose Event 1 Reg Blk */
+    u8 pm1_event_length;    /* Byte Length of ports at pm1x_event_block */
+    u8 pm1_control_length;  /* Byte Length of ports at pm1x_control_block */
+    u8 pm2_control_length;  /* Byte Length of ports at pm2_control_block */
+    u8 pm_timer_length;     /* Byte Length of ports at pm_timer_block */
+    u8 gpe0_block_length;   /* Byte Length of ports at gpe0_block */
+    u8 gpe1_block_length;   /* Byte Length of ports at gpe1_block */
+    u8 gpe1_base;           /* Offset in GPE number space where GPE1 events start */
+    u8 cst_control;         /* Support for the _CST object and C-States change notification */
+    u16 c2_latency;         /* Worst case HW latency to enter/exit C2 state */
+    u16 c3_latency;         /* Worst case HW latency to enter/exit C3 state */
+    u16 flush_size;         /* Processor memory cache line width, in bytes */
+    u16 flush_stride;       /* Number of flush strides that need to be read */
+    u8 duty_offset;         /* Processor duty cycle index in processor P_CNT reg */
+    u8 duty_width;          /* Processor duty cycle value bit width in P_CNT register */
+    u8 day_alarm;           /* Index to day-of-month alarm in RTC CMOS RAM */
+    u8 month_alarm;         /* Index to month-of-year alarm in RTC CMOS RAM */
+    u8 century;             /* Index to century in RTC CMOS RAM */
+    u16 boot_flags;         /* IA-PC Boot Architecture Flags (see below for individual flags) */
+    u8 reserved;            /* Reserved, must be zero */
+    u32 flags;              /* Miscellaneous flag bits (see below for individual flags) */
+    struct acpi_generic_address reset_register; /* 64-bit address of the Reset register */
+    u8 reset_value;         /* Value to write to the reset_register port to reset the system */
+    u16 arm_boot_flags;     /* ARM-Specific Boot Flags (see below for individual flags) (ACPI 5.1) */
+    u8 minor_revision;      /* FADT Minor Revision (ACPI 5.1) */
+    u64 Xfacs;              /* 64-bit physical address of FACS */
+    u64 Xdsdt;              /* 64-bit physical address of DSDT */
+    struct acpi_generic_address xpm1a_event_block;   /* 64-bit Extended Power Mgt 1a Event Reg Blk address */
+    struct acpi_generic_address xpm1b_event_block;   /* 64-bit Extended Power Mgt 1b Event Reg Blk address */
+    struct acpi_generic_address xpm1a_control_block; /* 64-bit Extended Power Mgt 1a Control Reg Blk address */
+    struct acpi_generic_address xpm1b_control_block; /* 64-bit Extended Power Mgt 1b Control Reg Blk address */
+    struct acpi_generic_address xpm2_control_block;  /* 64-bit Extended Power Mgt 2 Control Reg Blk address */
+    struct acpi_generic_address xpm_timer_block;     /* 64-bit Extended Power Mgt Timer Ctrl Reg Blk address */
+    struct acpi_generic_address xgpe0_block;         /* 64-bit Extended General Purpose Event 0 Reg Blk address */
+    struct acpi_generic_address xgpe1_block;         /* 64-bit Extended General Purpose Event 1 Reg Blk address */
+    struct acpi_generic_address sleep_control;       /* 64-bit Sleep Control register (ACPI 5.0) */
+    struct acpi_generic_address sleep_status;        /* 64-bit Sleep Status register (ACPI 5.0) */
+    u64 hypervisor_id;      /* Hypervisor Vendor ID (ACPI 6.0) */
 };

 /* Masks for FADT IA-PC Boot Architecture Flags (boot_flags) [Vx]=Introduced in this FADT revision */
···

 /* Masks for FADT ARM Boot Architecture Flags (arm_boot_flags) ACPI 5.1 */

-#define ACPI_FADT_PSCI_COMPLIANT    (1)     /* 00: [V5] PSCI 0.2+ is implemented */
-#define ACPI_FADT_PSCI_USE_HVC      (1<<1)  /* 01: [V5] HVC must be used instead of SMC as the PSCI conduit */
+#define ACPI_FADT_PSCI_COMPLIANT    (1)     /* 00: [V5+] PSCI 0.2+ is implemented */
+#define ACPI_FADT_PSCI_USE_HVC      (1<<1)  /* 01: [V5+] HVC must be used instead of SMC as the PSCI conduit */

 /* Masks for FADT flags */
···
  * match the expected length. In other words, the length of the
  * FADT is the bottom line as to what the version really is.
  *
- * NOTE: There is no officialy released V2 of the FADT. This
- * version was used only for prototyping and testing during the
- * 32-bit to 64-bit transition. V3 was the first official 64-bit
- * version of the FADT.
- *
- * Update this list of defines when a new version of the FADT is
- * added to the ACPI specification. Note that the FADT version is
- * only incremented when new fields are appended to the existing
- * version. Therefore, the FADT version is competely independent
- * from the version of the ACPI specification where it is
- * defined.
- *
- * For reference, the various FADT lengths are as follows:
- *     FADT V1 size: 0x074      ACPI 1.0
- *     FADT V3 size: 0x0F4      ACPI 2.0
- *     FADT V4 size: 0x100      ACPI 3.0 and ACPI 4.0
- *     FADT V5 size: 0x10C      ACPI 5.0
- *     FADT V6 size: 0x114      ACPI 6.0
+ * For reference, the values below are as follows:
+ *     FADT V1 size: 0x074
+ *     FADT V2 size: 0x084
+ *     FADT V3 size: 0x0F4
+ *     FADT V4 size: 0x0F4
+ *     FADT V5 size: 0x10C
+ *     FADT V6 size: 0x114
  */
-#define ACPI_FADT_V1_SIZE       (u32) (ACPI_FADT_OFFSET (flags) + 4)       /* ACPI 1.0 */
-#define ACPI_FADT_V3_SIZE       (u32) (ACPI_FADT_OFFSET (sleep_control))   /* ACPI 2.0 */
-#define ACPI_FADT_V4_SIZE       (u32) (ACPI_FADT_OFFSET (sleep_status))    /* ACPI 3.0 and ACPI 4.0 */
-#define ACPI_FADT_V5_SIZE       (u32) (ACPI_FADT_OFFSET (hypervisor_id))   /* ACPI 5.0 */
-#define ACPI_FADT_V6_SIZE       (u32) (sizeof (struct acpi_table_fadt))    /* ACPI 6.0 */
+#define ACPI_FADT_V1_SIZE       (u32) (ACPI_FADT_OFFSET (flags) + 4)
+#define ACPI_FADT_V2_SIZE       (u32) (ACPI_FADT_OFFSET (minor_revision) + 1)
+#define ACPI_FADT_V3_SIZE       (u32) (ACPI_FADT_OFFSET (sleep_control))
+#define ACPI_FADT_V5_SIZE       (u32) (ACPI_FADT_OFFSET (hypervisor_id))
+#define ACPI_FADT_V6_SIZE       (u32) (sizeof (struct acpi_table_fadt))

-/* Update these when new FADT versions are added */
-
-#define ACPI_FADT_MAX_VERSION   6
 #define ACPI_FADT_CONFORMANCE   "ACPI 6.1 (FADT version 6)"

 #endif /* __ACTBL_H__ */
+3
include/acpi/platform/aclinux.h
···
 #ifndef __init
 #define __init
 #endif
+#ifndef __iomem
+#define __iomem
+#endif

 /* Host-dependent types and defines for user-space ACPICA */
+3
include/asm-generic/sections.h
···
  * [_sdata, _edata]: contains .data.* sections, may also contain .rodata.*
  *                   and/or .init.* sections.
  * [__start_rodata, __end_rodata]: contains .rodata.* sections
+ * [__start_data_ro_after_init, __end_data_ro_after_init]:
+ *                   contains data.ro_after_init section
  * [__init_begin, __init_end]: contains .init.* sections, but .init.text.*
  *                   may be out of this range on some architectures.
  * [_sinittext, _einittext]: contains .init.text.* sections
···
 extern char __bss_start[], __bss_stop[];
 extern char __init_begin[], __init_end[];
 extern char _sinittext[], _einittext[];
+extern char __start_data_ro_after_init[], __end_data_ro_after_init[];
 extern char _end[];
 extern char __per_cpu_load[], __per_cpu_start[], __per_cpu_end[];
 extern char __kprobes_text_start[], __kprobes_text_end[];
+4-1
include/asm-generic/vmlinux.lds.h
···
  * own by defining an empty RO_AFTER_INIT_DATA.
  */
 #ifndef RO_AFTER_INIT_DATA
-#define RO_AFTER_INIT_DATA *(.data..ro_after_init)
+#define RO_AFTER_INIT_DATA                      \
+    __start_data_ro_after_init = .;             \
+    *(.data..ro_after_init)                     \
+    __end_data_ro_after_init = .;
 #endif

 /*
···
  * are obviously wrong for any sort of memory access.
  */
 #define BPF_REGISTER_MAX_RANGE (1024 * 1024 * 1024)
-#define BPF_REGISTER_MIN_RANGE -(1024 * 1024 * 1024)
+#define BPF_REGISTER_MIN_RANGE -1

 struct bpf_reg_state {
     enum bpf_reg_type type;
···
      * Used to determine if any memory access using this register will
      * result in a bad access.
      */
-    u64 min_value, max_value;
+    s64 min_value;
+    u64 max_value;
     union {
         /* valid when type == CONST_IMM | PTR_TO_STACK | UNKNOWN_VALUE */
         s64 imm;
···
 extern struct list_head net_namespace_list;

 struct net *get_net_ns_by_pid(pid_t pid);
-struct net *get_net_ns_by_fd(int pid);
+struct net *get_net_ns_by_fd(int fd);

 #ifdef CONFIG_SYSCTL
 void ipx_register_sysctl(void);
···

 #include <linux/atmapi.h>
 #include <linux/atmioc.h>
-#include <linux/time.h>

 #define ZATM_GETPOOL    _IOW('a',ATMIOC_SARPRV+1,struct atmif_sioc)
                         /* get pool statistics */
-2
include/uapi/linux/bpqether.h
···
  *  Defines for the BPQETHER pseudo device driver
  */

-#ifndef __LINUX_IF_ETHER_H
 #include <linux/if_ether.h>
-#endif

 #define SIOCSBPQETHOPT  (SIOCDEVPRIVATE+0)  /* reserved */
 #define SIOCSBPQETHADDR (SIOCDEVPRIVATE+1)
+7
include/uapi/linux/kvm.h
···
     __u8  pad[16];
 };

+/* For KVM_CAP_ADJUST_CLOCK */
+
+/* Do not use 1, KVM_CHECK_EXTENSION returned it before we had flags. */
+#define KVM_CLOCK_TSC_STABLE 2
+
 struct kvm_clock_data {
     __u64 clock;
     __u32 flags;
     __u32 pad[9];
 };
+
+/* For KVM_CAP_SW_TLB */

 #define KVM_MMU_FSL_BOOKE_NOHV      0
 #define KVM_MMU_FSL_BOOKE_HV        1
+2-1
kernel/bpf/hashtab.c
···

     hlist_for_each_entry_safe(l, n, head, hash_node) {
         hlist_del_rcu(&l->hash_node);
-        htab_elem_free(htab, l);
+        if (l->state != HTAB_EXTRA_ELEM_USED)
+            htab_elem_free(htab, l);
     }
 }
···
             reg->map_ptr->key_size,
             reg->map_ptr->value_size);
         if (reg->min_value != BPF_REGISTER_MIN_RANGE)
-            verbose(",min_value=%llu",
-                (unsigned long long)reg->min_value);
+            verbose(",min_value=%lld",
+                (long long)reg->min_value);
         if (reg->max_value != BPF_REGISTER_MAX_RANGE)
             verbose(",max_value=%llu",
                 (unsigned long long)reg->max_value);
···
      * index'es we need to make sure that whatever we use
      * will have a set floor within our range.
      */
-    if ((s64)reg->min_value < 0) {
+    if (reg->min_value < 0) {
         verbose("R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
             regno);
         return -EACCES;
···
 {
     if (reg->max_value > BPF_REGISTER_MAX_RANGE)
         reg->max_value = BPF_REGISTER_MAX_RANGE;
-    if ((s64)reg->min_value < BPF_REGISTER_MIN_RANGE)
+    if (reg->min_value < BPF_REGISTER_MIN_RANGE ||
+        reg->min_value > BPF_REGISTER_MAX_RANGE)
         reg->min_value = BPF_REGISTER_MIN_RANGE;
 }
···
                 struct bpf_insn *insn)
 {
     struct bpf_reg_state *regs = env->cur_state.regs, *dst_reg;
-    u64 min_val = BPF_REGISTER_MIN_RANGE, max_val = BPF_REGISTER_MAX_RANGE;
+    s64 min_val = BPF_REGISTER_MIN_RANGE;
+    u64 max_val = BPF_REGISTER_MAX_RANGE;
     bool min_set = false, max_set = false;
     u8 opcode = BPF_OP(insn->code);
···
         return;
     }

+    /* If one of our values was at the end of our ranges then we can't just
+     * do our normal operations to the register, we need to set the values
+     * to the min/max since they are undefined.
+     */
+    if (min_val == BPF_REGISTER_MIN_RANGE)
+        dst_reg->min_value = BPF_REGISTER_MIN_RANGE;
+    if (max_val == BPF_REGISTER_MAX_RANGE)
+        dst_reg->max_value = BPF_REGISTER_MAX_RANGE;
+
     switch (opcode) {
     case BPF_ADD:
-        dst_reg->min_value += min_val;
-        dst_reg->max_value += max_val;
+        if (dst_reg->min_value != BPF_REGISTER_MIN_RANGE)
+            dst_reg->min_value += min_val;
+        if (dst_reg->max_value != BPF_REGISTER_MAX_RANGE)
+            dst_reg->max_value += max_val;
         break;
     case BPF_SUB:
-        dst_reg->min_value -= min_val;
-        dst_reg->max_value -= max_val;
+        if (dst_reg->min_value != BPF_REGISTER_MIN_RANGE)
+            dst_reg->min_value -= min_val;
+        if (dst_reg->max_value != BPF_REGISTER_MAX_RANGE)
+            dst_reg->max_value -= max_val;
         break;
     case BPF_MUL:
-        dst_reg->min_value *= min_val;
-        dst_reg->max_value *= max_val;
+        if (dst_reg->min_value != BPF_REGISTER_MIN_RANGE)
+            dst_reg->min_value *= min_val;
+        if (dst_reg->max_value != BPF_REGISTER_MAX_RANGE)
+            dst_reg->max_value *= max_val;
         break;
     case BPF_AND:
-        /* & is special since it could end up with 0 bits set. */
-        dst_reg->min_value &= min_val;
+        /* Disallow AND'ing of negative numbers, ain't nobody got time
+         * for that. Otherwise the minimum is 0 and the max is the max
+         * value we could AND against.
+         */
+        if (min_val < 0)
+            dst_reg->min_value = BPF_REGISTER_MIN_RANGE;
+        else
+            dst_reg->min_value = 0;
         dst_reg->max_value = max_val;
         break;
     case BPF_LSH:
···
          */
         if (min_val > ilog2(BPF_REGISTER_MAX_RANGE))
             dst_reg->min_value = BPF_REGISTER_MIN_RANGE;
-        else
+        else if (dst_reg->min_value != BPF_REGISTER_MIN_RANGE)
             dst_reg->min_value <<= min_val;

         if (max_val > ilog2(BPF_REGISTER_MAX_RANGE))
             dst_reg->max_value = BPF_REGISTER_MAX_RANGE;
-        else
+        else if (dst_reg->max_value != BPF_REGISTER_MAX_RANGE)
             dst_reg->max_value <<= max_val;
         break;
     case BPF_RSH:
-        dst_reg->min_value >>= min_val;
-        dst_reg->max_value >>= max_val;
-        break;
-    case BPF_MOD:
-        /* % is special since it is an unsigned modulus, so the floor
-         * will always be 0.
+        /* RSH by a negative number is undefined, and the BPF_RSH is an
+         * unsigned shift, so make the appropriate casts.
          */
-        dst_reg->min_value = 0;
-        dst_reg->max_value = max_val - 1;
+        if (min_val < 0 || dst_reg->min_value < 0)
+            dst_reg->min_value = BPF_REGISTER_MIN_RANGE;
+        else
+            dst_reg->min_value =
+                (u64)(dst_reg->min_value) >> min_val;
+        if (dst_reg->max_value != BPF_REGISTER_MAX_RANGE)
+            dst_reg->max_value >>= max_val;
         break;
     default:
         reset_reg_range_values(regs, insn->dst_reg);
+2-2
kernel/irq/manage.c
···

     } else if (new->flags & IRQF_TRIGGER_MASK) {
         unsigned int nmsk = new->flags & IRQF_TRIGGER_MASK;
-        unsigned int omsk = irq_settings_get_trigger_mask(desc);
+        unsigned int omsk = irqd_get_trigger_type(&desc->irq_data);

         if (nmsk != omsk)
             /* hope the handler works with current trigger mode */
             pr_warn("irq %d uses trigger mode %u; requested %u\n",
-                irq, nmsk, omsk);
+                irq, omsk, nmsk);
     }

     *old_ptr = new;
+17-3
kernel/locking/lockdep_internals.h
···
     (LOCKF_USED_IN_HARDIRQ_READ | LOCKF_USED_IN_SOFTIRQ_READ)

 /*
+ * CONFIG_PROVE_LOCKING_SMALL is defined for sparc. Sparc requires .text,
+ * .data and .bss to fit in required 32MB limit for the kernel. With
+ * PROVE_LOCKING we could go over this limit and cause system boot-up problems.
+ * So, reduce the static allocations for lockdeps related structures so that
+ * everything fits in current required size limit.
+ */
+#ifdef CONFIG_PROVE_LOCKING_SMALL
+/*
  * MAX_LOCKDEP_ENTRIES is the maximum number of lock dependencies
  * we track.
  *
···
  * table (if it's not there yet), and we check it for lock order
  * conflicts and deadlocks.
  */
+#define MAX_LOCKDEP_ENTRIES 16384UL
+#define MAX_LOCKDEP_CHAINS_BITS 15
+#define MAX_STACK_TRACE_ENTRIES 262144UL
+#else
 #define MAX_LOCKDEP_ENTRIES 32768UL

 #define MAX_LOCKDEP_CHAINS_BITS 16
-#define MAX_LOCKDEP_CHAINS (1UL << MAX_LOCKDEP_CHAINS_BITS)
-
-#define MAX_LOCKDEP_CHAIN_HLOCKS (MAX_LOCKDEP_CHAINS*5)

 /*
  * Stack-trace: tightly packed array of stack backtrace
  * addresses. Protected by the hash_lock.
  */
 #define MAX_STACK_TRACE_ENTRIES 524288UL
+#endif
+
+#define MAX_LOCKDEP_CHAINS (1UL << MAX_LOCKDEP_CHAINS_BITS)
+
+#define MAX_LOCKDEP_CHAIN_HLOCKS (MAX_LOCKDEP_CHAINS*5)

 extern struct list_head all_lock_classes;
 extern struct lock_chain lock_chains[];
+3-1
kernel/power/suspend_test.c
···

     /* RTCs have initialized by now too ... can we use one? */
     dev = class_find_device(rtc_class, NULL, NULL, has_wakealarm);
-    if (dev)
+    if (dev) {
         rtc = rtc_class_open(dev_name(dev));
+        put_device(dev);
+    }
     if (!rtc) {
         printk(warn_no_rtc);
         return 0;
+1-23
kernel/printk/printk.c
···
 int console_set_on_cmdline;
 EXPORT_SYMBOL(console_set_on_cmdline);

-#ifdef CONFIG_OF
-static bool of_specified_console;
-
-void console_set_by_of(void)
-{
-    of_specified_console = true;
-}
-#else
-# define of_specified_console false
-#endif
-
 /* Flag: console code may call schedule() */
 static int console_may_schedule;
···
     return ret;
 }

-static void cont_flush(void);
-
 static ssize_t devkmsg_read(struct file *file, char __user *buf,
                 size_t count, loff_t *ppos)
 {
···
     if (ret)
         return ret;
     raw_spin_lock_irq(&logbuf_lock);
-    cont_flush();
     while (user->seq == log_next_seq) {
         if (file->f_flags & O_NONBLOCK) {
             ret = -EAGAIN;
···
         return -ESPIPE;

     raw_spin_lock_irq(&logbuf_lock);
-    cont_flush();
     switch (whence) {
     case SEEK_SET:
         /* the first record */
···
     poll_wait(file, &log_wait, wait);

     raw_spin_lock_irq(&logbuf_lock);
-    cont_flush();
     if (user->seq < log_next_seq) {
         /* return error when data has vanished underneath us */
         if (user->seq < log_first_seq)
···
     size_t skip;

     raw_spin_lock_irq(&logbuf_lock);
-    cont_flush();
     if (syslog_seq < log_first_seq) {
         /* messages are gone, move to first one */
         syslog_seq = log_first_seq;
···
         return -ENOMEM;

     raw_spin_lock_irq(&logbuf_lock);
-    cont_flush();
     if (buf) {
         u64 next_seq;
         u64 seq;
···
     /* Number of chars in the log buffer */
     case SYSLOG_ACTION_SIZE_UNREAD:
         raw_spin_lock_irq(&logbuf_lock);
-        cont_flush();
         if (syslog_seq < log_first_seq) {
             /* messages are gone, move to first one */
             syslog_seq = log_first_seq;
···
      * didn't select a console we take the first one
      * that registers here.
      */
-    if (preferred_console < 0 && !of_specified_console) {
+    if (preferred_console < 0) {
         if (newcon->index < 0)
             newcon->index = 0;
         if (newcon->setup == NULL ||
···
     dumper->active = true;

     raw_spin_lock_irqsave(&logbuf_lock, flags);
-    cont_flush();
     dumper->cur_seq = clear_seq;
     dumper->cur_idx = clear_idx;
     dumper->next_seq = log_next_seq;
···
     bool ret;

     raw_spin_lock_irqsave(&logbuf_lock, flags);
-    cont_flush();
     ret = kmsg_dump_get_line_nolock(dumper, syslog, line, size, len);
     raw_spin_unlock_irqrestore(&logbuf_lock, flags);
···
         goto out;

     raw_spin_lock_irqsave(&logbuf_lock, flags);
-    cont_flush();
     if (dumper->cur_seq < log_first_seq) {
         /* messages are gone, move to first available one */
         dumper->cur_seq = log_first_seq;
+5-1
kernel/taskstats.c
···
     [TASKSTATS_CMD_ATTR_REGISTER_CPUMASK]  = { .type = NLA_STRING },
     [TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK]  = { .type = NLA_STRING },};

-static const struct nla_policy cgroupstats_cmd_get_policy[CGROUPSTATS_CMD_ATTR_MAX+1] = {
+/*
+ * We have to use TASKSTATS_CMD_ATTR_MAX here, it is the maxattr in the family.
+ * Make sure they are always aligned.
+ */
+static const struct nla_policy cgroupstats_cmd_get_policy[TASKSTATS_CMD_ATTR_MAX+1] = {
     [CGROUPSTATS_CMD_ATTR_FD] = { .type = NLA_U32 },
 };
+23-1
kernel/trace/ftrace.c
···

     /* Update rec->flags */
     do_for_each_ftrace_rec(pg, rec) {
+
+        if (rec->flags & FTRACE_FL_DISABLED)
+            continue;
+
         /* We need to update only differences of filter_hash */
         in_old = !!ftrace_lookup_ip(old_hash, rec->ip);
         in_new = !!ftrace_lookup_ip(new_hash, rec->ip);
···

     /* Roll back what we did above */
     do_for_each_ftrace_rec(pg, rec) {
+
+        if (rec->flags & FTRACE_FL_DISABLED)
+            continue;
+
         if (rec == end)
             goto err_out;

···
         return;

     do_for_each_ftrace_rec(pg, rec) {
+
+        if (rec->flags & FTRACE_FL_DISABLED)
+            continue;
+
         failed = __ftrace_replace_code(rec, enable);
         if (failed) {
             ftrace_bug(failed, rec);
···
     struct dyn_ftrace *rec;

     do_for_each_ftrace_rec(pg, rec) {
-        if (FTRACE_WARN_ON_ONCE(rec->flags))
+        if (FTRACE_WARN_ON_ONCE(rec->flags & ~FTRACE_FL_DISABLED))
             pr_warn("  %pS flags:%lx\n",
                 (void *)rec->ip, rec->flags);
     } while_for_each_ftrace_rec();
···
         goto out_unlock;

     do_for_each_ftrace_rec(pg, rec) {
+
+        if (rec->flags & FTRACE_FL_DISABLED)
+            continue;
+
         if (ftrace_match_record(rec, &func_g, mod_match, exclude_mod)) {
             ret = enter_record(hash, rec, clear_filter);
             if (ret < 0) {
···
     mutex_lock(&ftrace_lock);

     do_for_each_ftrace_rec(pg, rec) {
+
+        if (rec->flags & FTRACE_FL_DISABLED)
+            continue;

         if (!ftrace_match_record(rec, &func_g, NULL, 0))
             continue;
···
     }

     do_for_each_ftrace_rec(pg, rec) {
+
+        if (rec->flags & FTRACE_FL_DISABLED)
+            continue;

         if (ftrace_match_record(rec, &func_g, NULL, 0)) {
             /* if it is in the array */
+3
lib/Kconfig.debug
···

      For more details, see Documentation/locking/lockdep-design.txt.

+config PROVE_LOCKING_SMALL
+	bool
+
 config LOCKDEP
 	bool
 	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+3-1
lib/iov_iter.c
···
     struct pipe_inode_info *pipe = i->pipe;
     struct pipe_buffer *buf;
     int idx = i->idx;
-    size_t off = i->iov_offset;
+    size_t off = i->iov_offset, orig_sz;

     if (unlikely(i->count < size))
         size = i->count;
+    orig_sz = size;

     if (size) {
         if (off) /* make it relative to the beginning of buffer */
···
             pipe->nrbufs--;
         }
     }
+    i->count -= orig_sz;
 }

 void iov_iter_advance(struct iov_iter *i, size_t size)
+2
lib/stackdepot.c
···192192 trace->entries = stack->entries;193193 trace->skip = 0;194194}195195+EXPORT_SYMBOL_GPL(depot_fetch_stack);195196196197/**197198 * depot_save_stack - save stack in a stack depot.···284283fast_exit:285284 return retval;286285}286286+EXPORT_SYMBOL_GPL(depot_save_stack);
···17321732 if (inode->i_blkbits == PAGE_SHIFT ||17331733 !mapping->a_ops->is_partially_uptodate)17341734 goto page_not_up_to_date;17351735+ /* pipes can't handle partially uptodate pages */17361736+ if (unlikely(iter->type & ITER_PIPE))17371737+ goto page_not_up_to_date;17351738 if (!trylock_page(page))17361739 goto page_not_up_to_date;17371740 /* Did it get truncated before we got the lock? */
···18261826 * is not the case is if a reserve map was changed between calls. It18271827 * is the responsibility of the caller to notice the difference and18281828 * take appropriate action.18291829+ *18301830+ * vma_add_reservation is used in error paths where a reservation must18311831+ * be restored when a newly allocated huge page must be freed. It is18321832+ * to be called after calling vma_needs_reservation to determine if a18331833+ * reservation exists.18291834 */18301835enum vma_resv_mode {18311836 VMA_NEEDS_RESV,18321837 VMA_COMMIT_RESV,18331838 VMA_END_RESV,18391839+ VMA_ADD_RESV,18341840};18351841static long __vma_reservation_common(struct hstate *h,18361842 struct vm_area_struct *vma, unsigned long addr,···18611855 case VMA_END_RESV:18621856 region_abort(resv, idx, idx + 1);18631857 ret = 0;18581858+ break;18591859+ case VMA_ADD_RESV:18601860+ if (vma->vm_flags & VM_MAYSHARE)18611861+ ret = region_add(resv, idx, idx + 1);18621862+ else {18631863+ region_abort(resv, idx, idx + 1);18641864+ ret = region_del(resv, idx, idx + 1);18651865+ }18641866 break;18651867 default:18661868 BUG();···19151901 struct vm_area_struct *vma, unsigned long addr)19161902{19171903 (void)__vma_reservation_common(h, vma, addr, VMA_END_RESV);19041904+}19051905+19061906+static long vma_add_reservation(struct hstate *h,19071907+ struct vm_area_struct *vma, unsigned long addr)19081908+{19091909+ return __vma_reservation_common(h, vma, addr, VMA_ADD_RESV);19101910+}19111911+19121912+/*19131913+ * This routine is called to restore a reservation on error paths. In the19141914+ * specific error paths, a huge page was allocated (via alloc_huge_page)19151915+ * and is about to be freed. If a reservation for the page existed,19161916+ * alloc_huge_page would have consumed the reservation and set PagePrivate19171917+ * in the newly allocated page. 
When the page is freed via free_huge_page,19181918+ * the global reservation count will be incremented if PagePrivate is set.19191919+ * However, free_huge_page can not adjust the reserve map. Adjust the19201920+ * reserve map here to be consistent with global reserve count adjustments19211921+ * to be made by free_huge_page.19221922+ */19231923+static void restore_reserve_on_error(struct hstate *h,19241924+ struct vm_area_struct *vma, unsigned long address,19251925+ struct page *page)19261926+{19271927+ if (unlikely(PagePrivate(page))) {19281928+ long rc = vma_needs_reservation(h, vma, address);19291929+19301930+ if (unlikely(rc < 0)) {19311931+ /*19321932+ * Rare out of memory condition in reserve map19331933+ * manipulation. Clear PagePrivate so that19341934+ * global reserve count will not be incremented19351935+ * by free_huge_page. This will make it appear19361936+ * as though the reservation for this page was19371937+ * consumed. This may prevent the task from19381938+ * faulting in the page at a later time. 
This19391939+ * is better than inconsistent global huge page19401940+ * accounting of reserve counts.19411941+ */19421942+ ClearPagePrivate(page);19431943+ } else if (rc) {19441944+ rc = vma_add_reservation(h, vma, address);19451945+ if (unlikely(rc < 0))19461946+ /*19471947+ * See above comment about rare out of19481948+ * memory condition.19491949+ */19501950+ ClearPagePrivate(page);19511951+ } else19521952+ vma_end_reservation(h, vma, address);19531953+ }19181954}1919195519201956struct page *alloc_huge_page(struct vm_area_struct *vma,···35623498 spin_unlock(ptl);35633499 mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);35643500out_release_all:35013501+ restore_reserve_on_error(h, vma, address, new_page);35653502 put_page(new_page);35663503out_release_old:35673504 put_page(old_page);···37453680 spin_unlock(ptl);37463681backout_unlocked:37473682 unlock_page(page);36833683+ restore_reserve_on_error(h, vma, address, page);37483684 put_page(page);37493685 goto out;37503686}
···11121112 }1113111311141114 if (!PageHuge(p) && PageTransHuge(hpage)) {11151115- lock_page(hpage);11161116- if (!PageAnon(hpage) || unlikely(split_huge_page(hpage))) {11171117- unlock_page(hpage);11181118- if (!PageAnon(hpage))11151115+ lock_page(p);11161116+ if (!PageAnon(p) || unlikely(split_huge_page(p))) {11171117+ unlock_page(p);11181118+ if (!PageAnon(p))11191119 pr_err("Memory failure: %#lx: non anonymous thp\n",11201120 pfn);11211121 else···11261126 put_hwpoison_page(p);11271127 return -EBUSY;11281128 }11291129- unlock_page(hpage);11301130- get_hwpoison_page(p);11311131- put_hwpoison_page(hpage);11291129+ unlock_page(p);11321130 VM_BUG_ON_PAGE(!page_count(p), p);11331131 hpage = compound_head(p);11341132 }
+21-9
mm/mremap.c
···104104static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,105105 unsigned long old_addr, unsigned long old_end,106106 struct vm_area_struct *new_vma, pmd_t *new_pmd,107107- unsigned long new_addr, bool need_rmap_locks)107107+ unsigned long new_addr, bool need_rmap_locks, bool *need_flush)108108{109109 struct mm_struct *mm = vma->vm_mm;110110 pte_t *old_pte, *new_pte, pte;111111 spinlock_t *old_ptl, *new_ptl;112112+ bool force_flush = false;113113+ unsigned long len = old_end - old_addr;112114113115 /*114116 * When need_rmap_locks is true, we take the i_mmap_rwsem and anon_vma···148146 new_pte++, new_addr += PAGE_SIZE) {149147 if (pte_none(*old_pte))150148 continue;149149+150150+ /*151151+ * We are remapping a dirty PTE, make sure to152152+ * flush TLB before we drop the PTL for the153153+ * old PTE or we may race with page_mkclean().154154+ */155155+ if (pte_present(*old_pte) && pte_dirty(*old_pte))156156+ force_flush = true;151157 pte = ptep_get_and_clear(mm, old_addr, old_pte);152158 pte = move_pte(pte, new_vma->vm_page_prot, old_addr, new_addr);153159 pte = move_soft_dirty_pte(pte);···166156 if (new_ptl != old_ptl)167157 spin_unlock(new_ptl);168158 pte_unmap(new_pte - 1);159159+ if (force_flush)160160+ flush_tlb_range(vma, old_end - len, old_end);161161+ else162162+ *need_flush = true;169163 pte_unmap_unlock(old_pte - 1, old_ptl);170164 if (need_rmap_locks)171165 drop_rmap_locks(vma);···215201 if (need_rmap_locks)216202 take_rmap_locks(vma);217203 moved = move_huge_pmd(vma, old_addr, new_addr,218218- old_end, old_pmd, new_pmd);204204+ old_end, old_pmd, new_pmd,205205+ &need_flush);219206 if (need_rmap_locks)220207 drop_rmap_locks(vma);221221- if (moved) {222222- need_flush = true;208208+ if (moved)223209 continue;224224- }225210 }226211 split_huge_pmd(vma, old_pmd, old_addr);227212 if (pmd_trans_unstable(old_pmd))···233220 extent = next - new_addr;234221 if (extent > LATENCY_LIMIT)235222 extent = LATENCY_LIMIT;236236- move_ptes(vma, old_pmd, 
old_addr, old_addr + extent,237237- new_vma, new_pmd, new_addr, need_rmap_locks);238238- need_flush = true;223223+ move_ptes(vma, old_pmd, old_addr, old_addr + extent, new_vma,224224+ new_pmd, new_addr, need_rmap_locks, &need_flush);239225 }240240- if (likely(need_flush))226226+ if (need_flush)241227 flush_tlb_range(vma, old_end-len, old_addr);242228243229 mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
+1-1
mm/page_alloc.c
···36583658 /* Make sure we know about allocations which stall for too long */36593659 if (time_after(jiffies, alloc_start + stall_timeout)) {36603660 warn_alloc(gfp_mask,36613661- "page alloction stalls for %ums, order:%u\n",36613661+ "page allocation stalls for %ums, order:%u",36623662 jiffies_to_msecs(jiffies-alloc_start), order);36633663 stall_timeout += 10 * HZ;36643664 }
···533533534534 s = create_cache(cache_name, root_cache->object_size,535535 root_cache->size, root_cache->align,536536- root_cache->flags, root_cache->ctor,537537- memcg, root_cache);536536+ root_cache->flags & CACHE_CREATE_MASK,537537+ root_cache->ctor, memcg, root_cache);538538 /*539539 * If we could not create a memcg cache, do not complain, because540540 * that's not critical at all as we can always proceed with the root
+2
mm/swapfile.c
···22242224 swab32s(&swap_header->info.version);22252225 swab32s(&swap_header->info.last_page);22262226 swab32s(&swap_header->info.nr_badpages);22272227+ if (swap_header->info.nr_badpages > MAX_SWAP_BADPAGES)22282228+ return 0;22272229 for (i = 0; i < swap_header->info.nr_badpages; i++)22282230 swab32s(&swap_header->info.badpages[i]);22292231 }
···10091009 __kfree_skb(skb);10101010 }1011101110121012+ /* If socket has been already reset kill it. */10131013+ if (sk->sk_state == DCCP_CLOSED)10141014+ goto adjudge_to_death;10151015+10121016 if (data_was_unread) {10131017 /* Unread data was tossed, send an appropriate Reset Code */10141018 DCCP_WARN("ABORT with %u bytes unread\n", data_was_unread);
+4-5
net/ipv4/af_inet.c
···533533534534static long inet_wait_for_connect(struct sock *sk, long timeo, int writebias)535535{536536- DEFINE_WAIT(wait);536536+ DEFINE_WAIT_FUNC(wait, woken_wake_function);537537538538- prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);538538+ add_wait_queue(sk_sleep(sk), &wait);539539 sk->sk_write_pending += writebias;540540541541 /* Basic assumption: if someone sets sk->sk_err, he _must_···545545 */546546 while ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) {547547 release_sock(sk);548548- timeo = schedule_timeout(timeo);548548+ timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);549549 lock_sock(sk);550550 if (signal_pending(current) || !timeo)551551 break;552552- prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);553552 }554554- finish_wait(sk_sleep(sk), &wait);553553+ remove_wait_queue(sk_sleep(sk), &wait);555554 sk->sk_write_pending -= writebias;556555 return timeo;557556}
+15-5
net/ipv4/fib_frontend.c
···151151152152int fib_unmerge(struct net *net)153153{154154- struct fib_table *old, *new;154154+ struct fib_table *old, *new, *main_table;155155156156 /* attempt to fetch local table if it has been allocated */157157 old = fib_get_table(net, RT_TABLE_LOCAL);···162162 if (!new)163163 return -ENOMEM;164164165165+ /* table is already unmerged */166166+ if (new == old)167167+ return 0;168168+165169 /* replace merged table with clean table */166166- if (new != old) {167167- fib_replace_table(net, old, new);168168- fib_free_table(old);169169- }170170+ fib_replace_table(net, old, new);171171+ fib_free_table(old);172172+173173+ /* attempt to fetch main table if it has been allocated */174174+ main_table = fib_get_table(net, RT_TABLE_MAIN);175175+ if (!main_table)176176+ return 0;177177+178178+ /* flush local entries from main table */179179+ fib_table_flush_external(main_table);170180171181 return 0;172182}
+77-13
net/ipv4/fib_trie.c
···17431743 local_l = fib_find_node(lt, &local_tp, l->key);1744174417451745 if (fib_insert_alias(lt, local_tp, local_l, new_fa,17461746- NULL, l->key))17461746+ NULL, l->key)) {17471747+ kmem_cache_free(fn_alias_kmem, new_fa);17471748 goto out;17491749+ }17481750 }1749175117501752 /* stop loop if key wrapped back to 0 */···17601758 fib_trie_free(local_tb);1761175917621760 return NULL;17611761+}17621762+17631763+/* Caller must hold RTNL */17641764+void fib_table_flush_external(struct fib_table *tb)17651765+{17661766+ struct trie *t = (struct trie *)tb->tb_data;17671767+ struct key_vector *pn = t->kv;17681768+ unsigned long cindex = 1;17691769+ struct hlist_node *tmp;17701770+ struct fib_alias *fa;17711771+17721772+ /* walk trie in reverse order */17731773+ for (;;) {17741774+ unsigned char slen = 0;17751775+ struct key_vector *n;17761776+17771777+ if (!(cindex--)) {17781778+ t_key pkey = pn->key;17791779+17801780+ /* cannot resize the trie vector */17811781+ if (IS_TRIE(pn))17821782+ break;17831783+17841784+ /* resize completed node */17851785+ pn = resize(t, pn);17861786+ cindex = get_index(pkey, pn);17871787+17881788+ continue;17891789+ }17901790+17911791+ /* grab the next available node */17921792+ n = get_child(pn, cindex);17931793+ if (!n)17941794+ continue;17951795+17961796+ if (IS_TNODE(n)) {17971797+ /* record pn and cindex for leaf walking */17981798+ pn = n;17991799+ cindex = 1ul << n->bits;18001800+18011801+ continue;18021802+ }18031803+18041804+ hlist_for_each_entry_safe(fa, tmp, &n->leaf, fa_list) {18051805+ /* if alias was cloned to local then we just18061806+ * need to remove the local copy from main18071807+ */18081808+ if (tb->tb_id != fa->tb_id) {18091809+ hlist_del_rcu(&fa->fa_list);18101810+ alias_free_mem_rcu(fa);18111811+ continue;18121812+ }18131813+18141814+ /* record local slen */18151815+ slen = fa->fa_slen;18161816+ }18171817+18181818+ /* update leaf slen */18191819+ n->slen = slen;18201820+18211821+ if (hlist_empty(&n->leaf)) {18221822+ 
put_child_root(pn, n->key, NULL);18231823+ node_free(n);18241824+ }18251825+ }17631826}1764182717651828/* Caller must hold RTNL. */···24802413 struct key_vector *l, **tp = &iter->tnode;24812414 t_key key;2482241524832483- /* use cache location of next-to-find key */24162416+ /* use cached location of previously found key */24842417 if (iter->pos > 0 && pos >= iter->pos) {24852485- pos -= iter->pos;24862418 key = iter->key;24872419 } else {24882488- iter->pos = 0;24202420+ iter->pos = 1;24892421 key = 0;24902422 }2491242324922492- while ((l = leaf_walk_rcu(tp, key)) != NULL) {24242424+ pos -= iter->pos;24252425+24262426+ while ((l = leaf_walk_rcu(tp, key)) && (pos-- > 0)) {24932427 key = l->key + 1;24942428 iter->pos++;24952495-24962496- if (--pos <= 0)24972497- break;24982498-24992429 l = NULL;2500243025012431 /* handle unlikely case of a key wrap */···25012437 }2502243825032439 if (l)25042504- iter->key = key; /* remember it */24402440+ iter->key = l->key; /* remember it */25052441 else25062442 iter->pos = 0; /* forget it */25072443···25292465 return fib_route_get_idx(iter, *pos);2530246625312467 iter->pos = 0;25322532- iter->key = 0;24682468+ iter->key = KEY_MAX;2533246925342470 return SEQ_START_TOKEN;25352471}···25382474{25392475 struct fib_route_iter *iter = seq->private;25402476 struct key_vector *l = NULL;25412541- t_key key = iter->key;24772477+ t_key key = iter->key + 1;2542247825432479 ++*pos;25442480···25472483 l = leaf_walk_rcu(&iter->tnode, key);2548248425492485 if (l) {25502550- iter->key = l->key + 1;24862486+ iter->key = l->key;25512487 iter->pos++;25522488 } else {25532489 iter->pos = 0;
···117117 if (opt->is_strictroute && rt->rt_uses_gateway)118118 goto sr_failed;119119120120- IPCB(skb)->flags |= IPSKB_FORWARDED | IPSKB_FRAG_SEGS;120120+ IPCB(skb)->flags |= IPSKB_FORWARDED;121121 mtu = ip_dst_mtu_maybe_forward(&rt->dst, true);122122 if (ip_exceeds_mtu(skb, mtu)) {123123 IP_INC_STATS(net, IPSTATS_MIB_FRAGFAILS);
+15-10
net/ipv4/ip_output.c
···239239 struct sk_buff *segs;240240 int ret = 0;241241242242- /* common case: fragmentation of segments is not allowed,243243- * or seglen is <= mtu242242+ /* common case: seglen is <= mtu244243 */245245- if (((IPCB(skb)->flags & IPSKB_FRAG_SEGS) == 0) ||246246- skb_gso_validate_mtu(skb, mtu))244244+ if (skb_gso_validate_mtu(skb, mtu))247245 return ip_finish_output2(net, sk, skb);248246249249- /* Slowpath - GSO segment length is exceeding the dst MTU.247247+ /* Slowpath - GSO segment length exceeds the egress MTU.250248 *251251- * This can happen in two cases:252252- * 1) TCP GRO packet, DF bit not set253253- * 2) skb arrived via virtio-net, we thus get TSO/GSO skbs directly254254- * from host network stack.249249+ * This can happen in several cases:250250+ * - Forwarding of a TCP GRO skb, when DF flag is not set.251251+ * - Forwarding of an skb that arrived on a virtualization interface252252+ * (virtio-net/vhost/tap) with TSO/GSO size set by other network253253+ * stack.254254+ * - Local GSO skb transmitted on an NETIF_F_TSO tunnel stacked over an255255+ * interface with a smaller MTU.256256+ * - Arriving GRO skb (or GSO skb in a virtualized environment) that is257257+ * bridged to a NETIF_F_TSO tunnel stacked over an interface with an258258+ * insufficent MTU.255259 */256260 features = netif_skb_features(skb);257261 BUILD_BUG_ON(sizeof(*IPCB(skb)) > SKB_SGO_CB_OFFSET);···15831579 }1584158015851581 oif = arg->bound_dev_if;15861586- oif = oif ? : skb->skb_iif;15821582+ if (!oif && netif_index_is_l3_master(net, skb->skb_iif))15831583+ oif = skb->skb_iif;1587158415881585 flowi4_init_output(&fl4, oif,15891586 IP4_REPLY_MARK(net, skb->mark),
-11
net/ipv4/ip_tunnel_core.c
···6363 int pkt_len = skb->len - skb_inner_network_offset(skb);6464 struct net *net = dev_net(rt->dst.dev);6565 struct net_device *dev = skb->dev;6666- int skb_iif = skb->skb_iif;6766 struct iphdr *iph;6867 int err;6968···7172 skb_clear_hash_if_not_l4(skb);7273 skb_dst_set(skb, &rt->dst);7374 memset(IPCB(skb), 0, sizeof(*IPCB(skb)));7474-7575- if (skb_iif && !(df & htons(IP_DF))) {7676- /* Arrived from an ingress interface, got encapsulated, with7777- * fragmentation of encapulating frames allowed.7878- * If skb is gso, the resulting encapsulated network segments7979- * may exceed dst mtu.8080- * Allow IP Fragmentation of segments.8181- */8282- IPCB(skb)->flags |= IPSKB_FRAG_SEGS;8383- }84758576 /* Push down and install the IP header. */8677 skb_push(skb, sizeof(struct iphdr));
+1-1
net/ipv4/ipmr.c
···17491749 vif->dev->stats.tx_bytes += skb->len;17501750 }1751175117521752- IPCB(skb)->flags |= IPSKB_FORWARDED | IPSKB_FRAG_SEGS;17521752+ IPCB(skb)->flags |= IPSKB_FORWARDED;1753175317541754 /* RFC1584 teaches, that DVMRP/PIM router must deliver packets locally17551755 * not only before forwarding, but after forwarding on all output
···753753 goto reject_redirect;754754 }755755756756- n = ipv4_neigh_lookup(&rt->dst, NULL, &new_gw);756756+ n = __ipv4_neigh_lookup(rt->dst.dev, new_gw);757757+ if (!n)758758+ n = neigh_create(&arp_tbl, &new_gw, rt->dst.dev);757759 if (!IS_ERR(n)) {758760 if (!(n->nud_state & NUD_VALID)) {759761 neigh_event_send(n, NULL);
+2-2
net/ipv4/tcp.c
···1164116411651165 err = -EPIPE;11661166 if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN))11671167- goto out_err;11671167+ goto do_error;1168116811691169 sg = !!(sk->sk_route_caps & NETIF_F_SG);11701170···1241124112421242 if (!skb_can_coalesce(skb, i, pfrag->page,12431243 pfrag->offset)) {12441244- if (i == sysctl_max_skb_frags || !sg) {12441244+ if (i >= sysctl_max_skb_frags || !sg) {12451245 tcp_mark_push(tp, skb);12461246 goto new_segment;12471247 }
+3-1
net/ipv4/tcp_cong.c
···200200 icsk->icsk_ca_ops = ca;201201 icsk->icsk_ca_setsockopt = 1;202202203203- if (sk->sk_state != TCP_CLOSE)203203+ if (sk->sk_state != TCP_CLOSE) {204204+ memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));204205 tcp_init_congestion_control(sk);206206+ }205207}206208207209/* Manage refcounts on socket close. */
···448448 if (__ipv6_addr_needs_scope_id(addr_type))449449 iif = skb->dev->ifindex;450450 else451451- iif = l3mdev_master_ifindex(skb->dev);451451+ iif = l3mdev_master_ifindex(skb_dst(skb)->dev);452452453453 /*454454 * Must not send error if the source does not uniquely
···10341034 int mtu;10351035 unsigned int psh_hlen = sizeof(struct ipv6hdr) + t->encap_hlen;10361036 unsigned int max_headroom = psh_hlen;10371037+ bool use_cache = false;10371038 u8 hop_limit;10381039 int err = -1;10391040···1067106610681067 memcpy(&fl6->daddr, addr6, sizeof(fl6->daddr));10691068 neigh_release(neigh);10701070- } else if (!fl6->flowi6_mark)10691069+ } else if (!(t->parms.flags &10701070+ (IP6_TNL_F_USE_ORIG_TCLASS | IP6_TNL_F_USE_ORIG_FWMARK))) {10711071+ /* enable the cache only if the routing decision does10721072+ * not depend on the current inner header value10731073+ */10741074+ use_cache = true;10751075+ }10761076+10771077+ if (use_cache)10711078 dst = dst_cache_get(&t->dst_cache);1072107910731080 if (!ip6_tnl_xmit_ctl(t, &fl6->saddr, &fl6->daddr))···11591150 if (t->encap.type != TUNNEL_ENCAP_NONE)11601151 goto tx_err_dst_release;11611152 } else {11621162- if (!fl6->flowi6_mark && ndst)11531153+ if (use_cache && ndst)11631154 dst_cache_set_ip6(&t->dst_cache, ndst, &fl6->saddr);11641155 }11651156 skb_dst_set(skb, dst);
···9797 unsigned int len = skb->len;9898 int ret = l2tp_xmit_skb(session, skb, session->hdr_len);9999100100- if (likely(ret == NET_XMIT_SUCCESS)) {100100+ if (likely(ret == NET_XMIT_SUCCESS || ret == NET_XMIT_CN)) {101101 atomic_long_add(len, &priv->tx_bytes);102102 atomic_long_inc(&priv->tx_packets);103103 } else {
+3-2
net/l2tp/l2tp_ip.c
···251251 int ret;252252 int chk_addr_ret;253253254254- if (!sock_flag(sk, SOCK_ZAPPED))255255- return -EINVAL;256254 if (addr_len < sizeof(struct sockaddr_l2tpip))257255 return -EINVAL;258256 if (addr->l2tp_family != AF_INET)···265267 read_unlock_bh(&l2tp_ip_lock);266268267269 lock_sock(sk);270270+ if (!sock_flag(sk, SOCK_ZAPPED))271271+ goto out;272272+268273 if (sk->sk_state != TCP_CLOSE || addr_len < sizeof(struct sockaddr_l2tpip))269274 goto out;270275
+3-2
net/l2tp/l2tp_ip6.c
···269269 int addr_type;270270 int err;271271272272- if (!sock_flag(sk, SOCK_ZAPPED))273273- return -EINVAL;274272 if (addr->l2tp_family != AF_INET6)275273 return -EINVAL;276274 if (addr_len < sizeof(*addr))···294296 lock_sock(sk);295297296298 err = -EINVAL;299299+ if (!sock_flag(sk, SOCK_ZAPPED))300300+ goto out_unlock;301301+297302 if (sk->sk_state != TCP_CLOSE)298303 goto out_unlock;299304
+1-1
net/mac80211/sta_info.c
···688688 }689689690690 /* No need to do anything if the driver does all */691691- if (!local->ops->set_tim)691691+ if (ieee80211_hw_check(&local->hw, AP_LINK_PS))692692 return;693693694694 if (sta->dead)
···270270 vht_cap->vht_mcs.tx_mcs_map |= cpu_to_le16(peer_tx << i * 2);271271 }272272273273+ /*274274+ * This is a workaround for VHT-enabled STAs which break the spec275275+ * and have the VHT-MCS Rx map filled in with value 3 for all eight276276+ * spacial streams, an example is AR9462.277277+ *278278+ * As per spec, in section 22.1.1 Introduction to the VHT PHY279279+ * A VHT STA shall support at least single spactial stream VHT-MCSs280280+ * 0 to 7 (transmit and receive) in all supported channel widths.281281+ */282282+ if (vht_cap->vht_mcs.rx_mcs_map == cpu_to_le16(0xFFFF)) {283283+ vht_cap->vht_supported = false;284284+ sdata_info(sdata, "Ignoring VHT IE from %pM due to invalid rx_mcs_map\n",285285+ sta->addr);286286+ return;287287+ }288288+273289 /* finally set up the bandwidth */274290 switch (vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK) {275291 case IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ:
+1-1
net/netfilter/ipvs/ip_vs_ctl.c
···28452845 .hdrsize = 0,28462846 .name = IPVS_GENL_NAME,28472847 .version = IPVS_GENL_VERSION,28482848- .maxattr = IPVS_CMD_MAX,28482848+ .maxattr = IPVS_CMD_ATTR_MAX,28492849 .netnsok = true, /* Make ipvsadm to work on netns */28502850};28512851
···7676 struct delayed_work dwork;7777 u32 last_bucket;7878 bool exiting;7979+ long next_gc_run;7980};80818182static __read_mostly struct kmem_cache *nf_conntrack_cachep;···8483static __read_mostly DEFINE_SPINLOCK(nf_conntrack_locks_all_lock);8584static __read_mostly bool nf_conntrack_locks_all;86858686+/* every gc cycle scans at most 1/GC_MAX_BUCKETS_DIV part of table */8787#define GC_MAX_BUCKETS_DIV 64u8888-#define GC_MAX_BUCKETS 8192u8989-#define GC_INTERVAL (5 * HZ)8888+/* upper bound of scan intervals */8989+#define GC_INTERVAL_MAX (2 * HZ)9090+/* maximum conntracks to evict per gc run */9091#define GC_MAX_EVICTS 256u91929293static struct conntrack_gc_work conntrack_gc_work;···939936static void gc_worker(struct work_struct *work)940937{941938 unsigned int i, goal, buckets = 0, expired_count = 0;942942- unsigned long next_run = GC_INTERVAL;943943- unsigned int ratio, scanned = 0;944939 struct conntrack_gc_work *gc_work;940940+ unsigned int ratio, scanned = 0;941941+ unsigned long next_run;945942946943 gc_work = container_of(work, struct conntrack_gc_work, dwork.work);947944948948- goal = min(nf_conntrack_htable_size / GC_MAX_BUCKETS_DIV, GC_MAX_BUCKETS);945945+ goal = nf_conntrack_htable_size / GC_MAX_BUCKETS_DIV;949946 i = gc_work->last_bucket;950947951948 do {···985982 if (gc_work->exiting)986983 return;987984985985+ /*986986+ * Eviction will normally happen from the packet path, and not987987+ * from this gc worker.988988+ *989989+ * This worker is only here to reap expired entries when system went990990+ * idle after a busy period.991991+ *992992+ * The heuristics below are supposed to balance conflicting goals:993993+ *994994+ * 1. Minimize time until we notice a stale entry995995+ * 2. 
Maximize scan intervals to not waste cycles996996+ *997997+ * Normally, expired_count will be 0, this increases the next_run time998998+ * to priorize 2) above.999999+ *10001000+ * As soon as a timed-out entry is found, move towards 1) and increase10011001+ * the scan frequency.10021002+ * In case we have lots of evictions next scan is done immediately.10031003+ */9881004 ratio = scanned ? expired_count * 100 / scanned : 0;989989- if (ratio >= 90 || expired_count == GC_MAX_EVICTS)10051005+ if (ratio >= 90 || expired_count == GC_MAX_EVICTS) {10061006+ gc_work->next_gc_run = 0;9901007 next_run = 0;10081008+ } else if (expired_count) {10091009+ gc_work->next_gc_run /= 2U;10101010+ next_run = msecs_to_jiffies(1);10111011+ } else {10121012+ if (gc_work->next_gc_run < GC_INTERVAL_MAX)10131013+ gc_work->next_gc_run += msecs_to_jiffies(1);10141014+10151015+ next_run = gc_work->next_gc_run;10161016+ }99110179921018 gc_work->last_bucket = i;993993- schedule_delayed_work(&gc_work->dwork, next_run);10191019+ queue_delayed_work(system_long_wq, &gc_work->dwork, next_run);9941020}99510219961022static void conntrack_gc_work_init(struct conntrack_gc_work *gc_work)9971023{9981024 INIT_DELAYED_WORK(&gc_work->dwork, gc_worker);10251025+ gc_work->next_gc_run = GC_INTERVAL_MAX;9991026 gc_work->exiting = false;10001027}10011028···19181885 nf_ct_untracked_status_or(IPS_CONFIRMED | IPS_UNTRACKED);1919188619201887 conntrack_gc_work_init(&conntrack_gc_work);19211921- schedule_delayed_work(&conntrack_gc_work.dwork, GC_INTERVAL);18881888+ queue_delayed_work(system_long_wq, &conntrack_gc_work.dwork, GC_INTERVAL_MAX);1922188919231890 return 0;19241891
+8-3
net/netfilter/nf_conntrack_helper.c
···138138139139 for (i = 0; i < nf_ct_helper_hsize; i++) {140140 hlist_for_each_entry_rcu(h, &nf_ct_helper_hash[i], hnode) {141141- if (!strcmp(h->name, name) &&142142- h->tuple.src.l3num == l3num &&143143- h->tuple.dst.protonum == protonum)141141+ if (strcmp(h->name, name))142142+ continue;143143+144144+ if (h->tuple.src.l3num != NFPROTO_UNSPEC &&145145+ h->tuple.src.l3num != l3num)146146+ continue;147147+148148+ if (h->tuple.dst.protonum == protonum)144149 return h;145150 }146151 }
+4-1
net/netfilter/nf_conntrack_sip.c
···14361436 handler = &sip_handlers[i];14371437 if (handler->request == NULL)14381438 continue;14391439- if (*datalen < handler->len ||14391439+ if (*datalen < handler->len + 2 ||14401440 strncasecmp(*dptr, handler->method, handler->len))14411441+ continue;14421442+ if ((*dptr)[handler->len] != ' ' ||14431443+ !isalpha((*dptr)[handler->len+1]))14411444 continue;1442144514431446 if (ct_sip_get_header(ct, *dptr, 0, *datalen, SIP_HDR_CSEQ,
+11-7
net/netfilter/nf_tables_api.c
···2956295629572957 err = nft_trans_set_add(&ctx, NFT_MSG_NEWSET, set);29582958 if (err < 0)29592959- goto err2;29592959+ goto err3;2960296029612961 list_add_tail_rcu(&set->list, &table->sets);29622962 table->use++;29632963 return 0;2964296429652965+err3:29662966+ ops->destroy(set);29652967err2:29662968 kfree(set);29672969err1:···34543452 return elem;34553453}3456345434573457-void nft_set_elem_destroy(const struct nft_set *set, void *elem)34553455+void nft_set_elem_destroy(const struct nft_set *set, void *elem,34563456+ bool destroy_expr)34583457{34593458 struct nft_set_ext *ext = nft_set_elem_ext(set, elem);3460345934613460 nft_data_uninit(nft_set_ext_key(ext), NFT_DATA_VALUE);34623461 if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA))34633462 nft_data_uninit(nft_set_ext_data(ext), set->dtype);34643464- if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPR))34633463+ if (destroy_expr && nft_set_ext_exists(ext, NFT_SET_EXT_EXPR))34653464 nf_tables_expr_destroy(NULL, nft_set_ext_expr(ext));3466346534673466 kfree(elem);···35683565 dreg = nft_type_to_reg(set->dtype);35693566 list_for_each_entry(binding, &set->bindings, list) {35703567 struct nft_ctx bind_ctx = {35683568+ .net = ctx->net,35713569 .afi = ctx->afi,35723570 .table = ctx->table,35733571 .chain = (struct nft_chain *)binding->chain,···3816381238173813 gcb = container_of(rcu, struct nft_set_gc_batch, head.rcu);38183814 for (i = 0; i < gcb->head.cnt; i++)38193819- nft_set_elem_destroy(gcb->head.set, gcb->elems[i]);38153815+ nft_set_elem_destroy(gcb->head.set, gcb->elems[i], true);38203816 kfree(gcb);38213817}38223818EXPORT_SYMBOL_GPL(nft_set_gc_batch_release);···40344030 break;40354031 case NFT_MSG_DELSETELEM:40364032 nft_set_elem_destroy(nft_trans_elem_set(trans),40374037- nft_trans_elem(trans).priv);40334033+ nft_trans_elem(trans).priv, true);40384034 break;40394035 }40404036 kfree(trans);···41754171 break;41764172 case NFT_MSG_NEWSETELEM:41774173 nft_set_elem_destroy(nft_trans_elem_set(trans),41784178- 
nft_trans_elem(trans).priv);41744174+ nft_trans_elem(trans).priv, true);41794175 break;41804176 }41814177 kfree(trans);···44254421 * Otherwise a 0 is returned and the attribute value is stored in the44264422 * destination variable.44274423 */44284428-unsigned int nft_parse_u32_check(const struct nlattr *attr, int max, u32 *dest)44244424+int nft_parse_u32_check(const struct nlattr *attr, int max, u32 *dest)44294425{44304426 u32 val;44314427
+13-6
net/netfilter/nft_dynset.c
···4444 ®s->data[priv->sreg_key],4545 ®s->data[priv->sreg_data],4646 timeout, GFP_ATOMIC);4747- if (elem == NULL) {4848- if (set->size)4949- atomic_dec(&set->nelems);5050- return NULL;5151- }4747+ if (elem == NULL)4848+ goto err1;52495350 ext = nft_set_elem_ext(set, elem);5451 if (priv->expr != NULL &&5552 nft_expr_clone(nft_set_ext_expr(ext), priv->expr) < 0)5656- return NULL;5353+ goto err2;57545855 return elem;5656+5757+err2:5858+ nft_set_elem_destroy(set, elem, false);5959+err1:6060+ if (set->size)6161+ atomic_dec(&set->nelems);6262+ return NULL;5963}60646165static void nft_dynset_eval(const struct nft_expr *expr,···142138 if (IS_ERR(set))143139 return PTR_ERR(set);144140 }141141+142142+ if (set->ops->update == NULL)143143+ return -EOPNOTSUPP;145144146145 if (set->flags & NFT_SET_CONSTANT)147146 return -EBUSY;
+14-5
net/netfilter/nft_set_hash.c
···9898 const struct nft_set_ext **ext)9999{100100 struct nft_hash *priv = nft_set_priv(set);101101- struct nft_hash_elem *he;101101+ struct nft_hash_elem *he, *prev;102102 struct nft_hash_cmp_arg arg = {103103 .genmask = NFT_GENMASK_ANY,104104 .set = set,···112112 he = new(set, expr, regs);113113 if (he == NULL)114114 goto err1;115115- if (rhashtable_lookup_insert_key(&priv->ht, &arg, &he->node,116116- nft_hash_params))115115+116116+ prev = rhashtable_lookup_get_insert_key(&priv->ht, &arg, &he->node,117117+ nft_hash_params);118118+ if (IS_ERR(prev))117119 goto err2;120120+121121+ /* Another cpu may race to insert the element with the same key */122122+ if (prev) {123123+ nft_set_elem_destroy(set, he, true);124124+ he = prev;125125+ }126126+118127out:119128 *ext = &he->ext;120129 return true;121130122131err2:123123- nft_set_elem_destroy(set, he);132132+ nft_set_elem_destroy(set, he, true);124133err1:125134 return false;126135}···341332342333static void nft_hash_elem_destroy(void *ptr, void *arg)343334{344344- nft_set_elem_destroy((const struct nft_set *)arg, ptr);335335+ nft_set_elem_destroy((const struct nft_set *)arg, ptr, true);345336}346337347338static void nft_hash_destroy(const struct nft_set *set)
···
  * being done.
  *
  * When the underlying transport disconnects, MRs are left in one of
- * three states:
+ * four states:
  *
  * INVALID:	The MR was not in use before the QP entered ERROR state.
- *		(Or, the LOCAL_INV WR has not completed or flushed yet).
- *
- * STALE:	The MR was being registered or unregistered when the QP
- *		entered ERROR state, and the pending WR was flushed.
  *
  * VALID:	The MR was registered before the QP entered ERROR state.
  *
- * When frwr_op_map encounters STALE and VALID MRs, they are recovered
- * with ib_dereg_mr and then are re-initialized. Beause MR recovery
+ * FLUSHED_FR:	The MR was being registered when the QP entered ERROR
+ *		state, and the pending WR was flushed.
+ *
+ * FLUSHED_LI:	The MR was being invalidated when the QP entered ERROR
+ *		state, and the pending WR was flushed.
+ *
+ * When frwr_op_map encounters FLUSHED and VALID MRs, they are recovered
+ * with ib_dereg_mr and then are re-initialized. Because MR recovery
  * allocates fresh resources, it is deferred to a workqueue, and the
  * recovered MRs are placed back on the rb_mws list when recovery is
  * complete. frwr_op_map allocates another MR for the current RPC while
···
 static void
 frwr_op_recover_mr(struct rpcrdma_mw *mw)
 {
+	enum rpcrdma_frmr_state state = mw->frmr.fr_state;
 	struct rpcrdma_xprt *r_xprt = mw->mw_xprt;
 	struct rpcrdma_ia *ia = &r_xprt->rx_ia;
 	int rc;
 
 	rc = __frwr_reset_mr(ia, mw);
-	ib_dma_unmap_sg(ia->ri_device, mw->mw_sg, mw->mw_nents, mw->mw_dir);
+	if (state != FRMR_FLUSHED_LI)
+		ib_dma_unmap_sg(ia->ri_device,
+				mw->mw_sg, mw->mw_nents, mw->mw_dir);
 	if (rc)
 		goto out_release;
 
···
 }
 
 static void
-__frwr_sendcompletion_flush(struct ib_wc *wc, struct rpcrdma_frmr *frmr,
-			    const char *wr)
+__frwr_sendcompletion_flush(struct ib_wc *wc, const char *wr)
 {
-	frmr->fr_state = FRMR_IS_STALE;
 	if (wc->status != IB_WC_WR_FLUSH_ERR)
 		pr_err("rpcrdma: %s: %s (%u/0x%x)\n",
 		       wr, ib_wc_status_msg(wc->status),
···
 	if (wc->status != IB_WC_SUCCESS) {
 		cqe = wc->wr_cqe;
 		frmr = container_of(cqe, struct rpcrdma_frmr, fr_cqe);
-		__frwr_sendcompletion_flush(wc, frmr, "fastreg");
+		frmr->fr_state = FRMR_FLUSHED_FR;
+		__frwr_sendcompletion_flush(wc, "fastreg");
 	}
 }
···
 	if (wc->status != IB_WC_SUCCESS) {
 		cqe = wc->wr_cqe;
 		frmr = container_of(cqe, struct rpcrdma_frmr, fr_cqe);
-		__frwr_sendcompletion_flush(wc, frmr, "localinv");
+		frmr->fr_state = FRMR_FLUSHED_LI;
+		__frwr_sendcompletion_flush(wc, "localinv");
 	}
 }
···
 	/* WARNING: Only wr_cqe and status are reliable at this point */
 	cqe = wc->wr_cqe;
 	frmr = container_of(cqe, struct rpcrdma_frmr, fr_cqe);
-	if (wc->status != IB_WC_SUCCESS)
-		__frwr_sendcompletion_flush(wc, frmr, "localinv");
+	if (wc->status != IB_WC_SUCCESS) {
+		frmr->fr_state = FRMR_FLUSHED_LI;
+		__frwr_sendcompletion_flush(wc, "localinv");
+	}
 	complete(&frmr->fr_linv_done);
 }
···
 enum rpcrdma_frmr_state {
 	FRMR_IS_INVALID,	/* ready to be used */
 	FRMR_IS_VALID,		/* in use */
-	FRMR_IS_STALE,		/* failed completion */
+	FRMR_FLUSHED_FR,	/* flushed FASTREG WR */
+	FRMR_FLUSHED_LI,	/* flushed LOCALINV WR */
 };
 
 struct rpcrdma_frmr {
+1-47
net/tipc/socket.c
···
 /*
  * net/tipc/socket.c: TIPC socket API
  *
- * Copyright (c) 2001-2007, 2012-2015, Ericsson AB
+ * Copyright (c) 2001-2007, 2012-2016, Ericsson AB
  * Copyright (c) 2004-2008, 2010-2013, Wind River Systems
  * All rights reserved.
  *
···
 static const struct proto_ops stream_ops;
 static const struct proto_ops msg_ops;
 static struct proto tipc_proto;
-
 static const struct rhashtable_params tsk_rht_params;
-
-/*
- * Revised TIPC socket locking policy:
- *
- * Most socket operations take the standard socket lock when they start
- * and hold it until they finish (or until they need to sleep). Acquiring
- * this lock grants the owner exclusive access to the fields of the socket
- * data structures, with the exception of the backlog queue. A few socket
- * operations can be done without taking the socket lock because they only
- * read socket information that never changes during the life of the socket.
- *
- * Socket operations may acquire the lock for the associated TIPC port if they
- * need to perform an operation on the port. If any routine needs to acquire
- * both the socket lock and the port lock it must take the socket lock first
- * to avoid the risk of deadlock.
- *
- * The dispatcher handling incoming messages cannot grab the socket lock in
- * the standard fashion, since invoked it runs at the BH level and cannot block.
- * Instead, it checks to see if the socket lock is currently owned by someone,
- * and either handles the message itself or adds it to the socket's backlog
- * queue; in the latter case the queued message is processed once the process
- * owning the socket lock releases it.
- *
- * NOTE: Releasing the socket lock while an operation is sleeping overcomes
- * the problem of a blocked socket operation preventing any other operations
- * from occurring. However, applications must be careful if they have
- * multiple threads trying to send (or receive) on the same socket, as these
- * operations might interfere with each other. For example, doing a connect
- * and a receive at the same time might allow the receive to consume the
- * ACK message meant for the connect. While additional work could be done
- * to try and overcome this, it doesn't seem to be worthwhile at the present.
- *
- * NOTE: Releasing the socket lock while an operation is sleeping also ensures
- * that another operation that must be performed in a non-blocking manner is
- * not delayed for very long because the lock has already been taken.
- *
- * NOTE: This code assumes that certain fields of a port/socket pair are
- * constant over its lifetime; such fields can be examined without taking
- * the socket lock and/or port lock, and do not need to be re-read even
- * after resuming processing after waiting. These fields include:
- * - socket type
- * - pointer to socket sk structure (aka tipc_sock structure)
- * - pointer to port structure
- * - port reference
- */
 
 static u32 tsk_own_node(struct tipc_sock *tsk)
 {
+13-7
net/unix/af_unix.c
···
  *	Sleep until more data has arrived. But check for races..
  */
 static long unix_stream_data_wait(struct sock *sk, long timeo,
-				  struct sk_buff *last, unsigned int last_len)
+				  struct sk_buff *last, unsigned int last_len,
+				  bool freezable)
 {
 	struct sk_buff *tail;
 	DEFINE_WAIT(wait);
···
 
 		sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
 		unix_state_unlock(sk);
-		timeo = freezable_schedule_timeout(timeo);
+		if (freezable)
+			timeo = freezable_schedule_timeout(timeo);
+		else
+			timeo = schedule_timeout(timeo);
 		unix_state_lock(sk);
 
 		if (sock_flag(sk, SOCK_DEAD))
···
 	unsigned int splice_flags;
 };
 
-static int unix_stream_read_generic(struct unix_stream_read_state *state)
+static int unix_stream_read_generic(struct unix_stream_read_state *state,
+				    bool freezable)
 {
 	struct scm_cookie scm;
 	struct socket *sock = state->socket;
···
 		mutex_unlock(&u->iolock);
 
 		timeo = unix_stream_data_wait(sk, timeo, last,
-					      last_len);
+					      last_len, freezable);
 
 		if (signal_pending(current)) {
 			err = sock_intr_errno(timeo);
···
 		.flags = flags
 	};
 
-	return unix_stream_read_generic(&state);
+	return unix_stream_read_generic(&state, true);
 }
 
 static int unix_stream_splice_actor(struct sk_buff *skb,
···
 	    flags & SPLICE_F_NONBLOCK)
 		state.flags = MSG_DONTWAIT;
 
-	return unix_stream_read_generic(&state);
+	return unix_stream_read_generic(&state, false);
 }
 
 static int unix_shutdown(struct socket *sock, int mode)
···
 			i++;
 		}
 		for ( ; i < len; i++)
-			seq_putc(seq, u->addr->name->sun_path[i]);
+			seq_putc(seq, u->addr->name->sun_path[i] ?:
+				 '@');
 	}
 	unix_state_unlock(s);
 	seq_putc(seq, '\n');
···
  * also linked into the probe response struct.
  */
 
+/*
+ * Limit the number of BSS entries stored in mac80211. Each one is
+ * a bit over 4k at most, so this limits to roughly 4-5M of memory.
+ * If somebody wants to really attack this though, they'd likely
+ * use small beacons, and only one type of frame, limiting each of
+ * the entries to a much smaller size (in order to generate more
+ * entries in total, so overhead is bigger.)
+ */
+static int bss_entries_limit = 1000;
+module_param(bss_entries_limit, int, 0644);
+MODULE_PARM_DESC(bss_entries_limit,
+		 "limit to number of scan BSS entries (per wiphy, default 1000)");
+
 #define IEEE80211_SCAN_RESULT_EXPIRE	(30 * HZ)
 
 static void bss_free(struct cfg80211_internal_bss *bss)
···
 
 	list_del_init(&bss->list);
 	rb_erase(&bss->rbn, &rdev->bss_tree);
+	rdev->bss_entries--;
+	WARN_ONCE((rdev->bss_entries == 0) ^ list_empty(&rdev->bss_list),
+		  "rdev bss entries[%d]/list[empty:%d] corruption\n",
+		  rdev->bss_entries, list_empty(&rdev->bss_list));
 	bss_ref_put(rdev, bss);
 	return true;
 }
···
 
 	if (expired)
 		rdev->bss_generation++;
+}
+
+static bool cfg80211_bss_expire_oldest(struct cfg80211_registered_device *rdev)
+{
+	struct cfg80211_internal_bss *bss, *oldest = NULL;
+	bool ret;
+
+	lockdep_assert_held(&rdev->bss_lock);
+
+	list_for_each_entry(bss, &rdev->bss_list, list) {
+		if (atomic_read(&bss->hold))
+			continue;
+
+		if (!list_empty(&bss->hidden_list) &&
+		    !bss->pub.hidden_beacon_bss)
+			continue;
+
+		if (oldest && time_before(oldest->ts, bss->ts))
+			continue;
+		oldest = bss;
+	}
+
+	if (WARN_ON(!oldest))
+		return false;
+
+	/*
+	 * The callers make sure to increase rdev->bss_generation if anything
+	 * gets removed (and a new entry added), so there's no need to also do
+	 * it here.
+	 */
+
+	ret = __cfg80211_unlink_bss(rdev, oldest);
+	WARN_ON(!ret);
+	return ret;
 }
 
 void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev,
···
 	const u8 *ie;
 	int i, ssidlen;
 	u8 fold = 0;
+	u32 n_entries = 0;
 
 	ies = rcu_access_pointer(new->pub.beacon_ies);
 	if (WARN_ON(!ies))
···
 	/* This is the bad part ... */
 
 	list_for_each_entry(bss, &rdev->bss_list, list) {
+		/*
+		 * we're iterating all the entries anyway, so take the
+		 * opportunity to validate the list length accounting
+		 */
+		n_entries++;
+
 		if (!ether_addr_equal(bss->pub.bssid, new->pub.bssid))
 			continue;
 		if (bss->pub.channel != new->pub.channel)
···
 		rcu_assign_pointer(bss->pub.beacon_ies,
 				   new->pub.beacon_ies);
 	}
+
+	WARN_ONCE(n_entries != rdev->bss_entries,
+		  "rdev bss entries[%d]/list[len:%d] corruption\n",
+		  rdev->bss_entries, n_entries);
 
 	return true;
 }
···
 		}
 	}
 
+	if (rdev->bss_entries >= bss_entries_limit &&
+	    !cfg80211_bss_expire_oldest(rdev)) {
+		kfree(new);
+		goto drop;
+	}
+
 	list_add_tail(&new->list, &rdev->bss_list);
+	rdev->bss_entries++;
 	rb_insert_bss(rdev, new);
 	found = new;
 }
···
 ifdef CONFIG_UBSAN_NULL
       CFLAGS_UBSAN += $(call cc-option, -fsanitize=null)
 endif
+
+      # -fsanitize=* options makes GCC less smart than usual and
+      # increase number of 'maybe-uninitialized false-positives
+      CFLAGS_UBSAN += $(call cc-option, -Wno-maybe-uninitialized)
 endif
+3
scripts/bloat-o-meter
···
 # of the GNU General Public License, incorporated herein by reference.
 
 import sys, os, re
+from signal import signal, SIGPIPE, SIG_DFL
+
+signal(SIGPIPE, SIG_DFL)
 
 if len(sys.argv) != 3:
     sys.stderr.write("usage: %s file1 file2\n" % sys.argv[0])
···
 		struct cpufreq_affected_cpus *cpus;
 
 		if (!bitmask_isbitset(cpus_chosen, cpu) ||
-		    cpupower_is_cpu_online(cpu))
+		    cpupower_is_cpu_online(cpu) != 1)
 			continue;
 
 		cpus = cpufreq_get_related_cpus(cpu);
···
 	     cpu <= bitmask_last(cpus_chosen); cpu++) {
 
 		if (!bitmask_isbitset(cpus_chosen, cpu) ||
-		    cpupower_is_cpu_online(cpu))
-			continue;
-
-		if (cpupower_is_cpu_online(cpu) != 1)
+		    cpupower_is_cpu_online(cpu) != 1)
 			continue;
 
 		printf(_("Setting cpu: %d\n"), cpu);
+5-3
virt/kvm/arm/pmu.c
···
 			continue;
 		type = vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
 		       & ARMV8_PMU_EVTYPE_EVENT;
-		if ((type == ARMV8_PMU_EVTYPE_EVENT_SW_INCR)
+		if ((type == ARMV8_PMUV3_PERFCTR_SW_INCR)
 		    && (enable & BIT(i))) {
 			reg = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
 			reg = lower_32_bits(reg);
···
 	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
 
 	/* Software increment event does't need to be backed by a perf event */
-	if (eventsel == ARMV8_PMU_EVTYPE_EVENT_SW_INCR)
+	if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR &&
+	    select_idx != ARMV8_PMU_CYCLE_IDX)
 		return;
 
 	memset(&attr, 0, sizeof(struct perf_event_attr));
···
 	attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
 	attr.exclude_hv = 1; /* Don't count EL2 events */
 	attr.exclude_host = 1; /* Don't count host events */
-	attr.config = eventsel;
+	attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
+		ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
 
 	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
 	/* The initial sample period (overflow count) of an event. */
+27-14
virt/kvm/arm/vgic/vgic-mmio.c
···
 	return container_of(dev, struct vgic_io_device, dev);
 }
 
-static bool check_region(const struct vgic_register_region *region,
+static bool check_region(const struct kvm *kvm,
+			 const struct vgic_register_region *region,
 			 gpa_t addr, int len)
 {
-	if ((region->access_flags & VGIC_ACCESS_8bit) && len == 1)
-		return true;
-	if ((region->access_flags & VGIC_ACCESS_32bit) &&
-	    len == sizeof(u32) && !(addr & 3))
-		return true;
-	if ((region->access_flags & VGIC_ACCESS_64bit) &&
-	    len == sizeof(u64) && !(addr & 7))
-		return true;
+	int flags, nr_irqs = kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS;
+
+	switch (len) {
+	case sizeof(u8):
+		flags = VGIC_ACCESS_8bit;
+		break;
+	case sizeof(u32):
+		flags = VGIC_ACCESS_32bit;
+		break;
+	case sizeof(u64):
+		flags = VGIC_ACCESS_64bit;
+		break;
+	default:
+		return false;
+	}
+
+	if ((region->access_flags & flags) && IS_ALIGNED(addr, len)) {
+		if (!region->bits_per_irq)
+			return true;
+
+		/* Do we access a non-allocated IRQ? */
+		return VGIC_ADDR_TO_INTID(addr, region->bits_per_irq) < nr_irqs;
+	}
 
 	return false;
 }
···
 
 	region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions,
 				       addr - iodev->base_addr);
-	if (!region || !check_region(region, addr, len)) {
+	if (!region || !check_region(vcpu->kvm, region, addr, len)) {
 		memset(val, 0, len);
 		return 0;
 	}
···
 
 	region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions,
 				       addr - iodev->base_addr);
-	if (!region)
-		return 0;
-
-	if (!check_region(region, addr, len))
+	if (!region || !check_region(vcpu->kvm, region, addr, len))
 		return 0;
 
 	switch (iodev->iodev_type) {
+7-7
virt/kvm/arm/vgic/vgic-mmio.h
···
 #define VGIC_ADDR_IRQ_MASK(bits) (((bits) * 1024 / 8) - 1)
 
 /*
- * (addr & mask) gives us the byte offset for the INT ID, so we want to
- * divide this with 'bytes per irq' to get the INT ID, which is given
- * by '(bits) / 8'. But we do this with fixed-point-arithmetic and
- * take advantage of the fact that division by a fraction equals
- * multiplication with the inverted fraction, and scale up both the
- * numerator and denominator with 8 to support at most 64 bits per IRQ:
+ * (addr & mask) gives us the _byte_ offset for the INT ID.
+ * We multiply this by 8 the get the _bit_ offset, then divide this by
+ * the number of bits to learn the actual INT ID.
+ * But instead of a division (which requires a "long long div" implementation),
+ * we shift by the binary logarithm of <bits>.
+ * This assumes that <bits> is a power of two.
  */
 #define VGIC_ADDR_TO_INTID(addr, bits)  (((addr) & VGIC_ADDR_IRQ_MASK(bits)) * \
-					64 / (bits) / 8)
+					8 >> ilog2(bits))
 
 /*
  * Some VGIC registers store per-IRQ information, with a different number
+12
virt/kvm/arm/vgic/vgic.c
···
 		 * no more work for us to do.
 		 */
 		spin_unlock(&irq->irq_lock);
+
+		/*
+		 * We have to kick the VCPU here, because we could be
+		 * queueing an edge-triggered interrupt for which we
+		 * get no EOI maintenance interrupt. In that case,
+		 * while the IRQ is already on the VCPU's AP list, the
+		 * VCPU could have EOI'ed the original interrupt and
+		 * won't see this one until it exits for some other
+		 * reason.
+		 */
+		if (vcpu)
+			kvm_vcpu_kick(vcpu);
 		return false;
 	}