···
 interrupt.
 
 Required Properties:
-- compatible: has to be "qca,<soctype>-cpu-intc", "qca,ar7100-misc-intc"
-  as fallback
+- compatible: has to be "qca,<soctype>-cpu-intc", "qca,ar7100-misc-intc" or
+  "qca,<soctype>-cpu-intc", "qca,ar7240-misc-intc"
 - reg: Base address and size of the controllers memory area
 - interrupt-parent: phandle of the parent interrupt controller.
 - interrupts: Interrupt specifier for the controllers interrupt.
 - interrupt-controller : Identifies the node as an interrupt controller
 - #interrupt-cells : Specifies the number of cells needed to encode interrupt
   source, should be 1
+
+Compatible fallback depends on the SoC. Use ar7100 for ar71xx and ar913x,
+use ar7240 for all other SoCs.
 
 Please refer to interrupts.txt in this directory for details of the common
 Interrupt Controllers bindings used by client devices.
···
 	interrupt-controller@18060010 {
 		compatible = "qca,ar9132-misc-intc", "qca,ar7100-misc-intc";
+		reg = <0x18060010 0x4>;
+
+		interrupt-parent = <&cpuintc>;
+		interrupts = <6>;
+
+		interrupt-controller;
+		#interrupt-cells = <1>;
+	};
+
+Another example:
+
+	interrupt-controller@18060010 {
+		compatible = "qca,ar9331-misc-intc", "qca,ar7240-misc-intc";
 		reg = <0x18060010 0x4>;
 
 		interrupt-parent = <&cpuintc>;
···
 	"lsi,zevio-usb"
 	"qcom,ci-hdrc"
 	"chipidea,usb2"
+	"xlnx,zynq-usb-2.20a"
 - reg: base address and length of the registers
 - interrupts: interrupt for the USB controller
 
+38-13
Documentation/power/pci.txt
···
 (alternatively, the runtime_suspend() callback will have to check if the
 device should really be suspended and return -EAGAIN if that is not the case).
 
-The runtime PM of PCI devices is disabled by default. It is also blocked by
-pci_pm_init() that runs the pm_runtime_forbid() helper function. If a PCI
-driver implements the runtime PM callbacks and intends to use the runtime PM
-framework provided by the PM core and the PCI subsystem, it should enable this
-feature by executing the pm_runtime_enable() helper function. However, the
-driver should not call the pm_runtime_allow() helper function unblocking
-the runtime PM of the device. Instead, it should allow user space or some
-platform-specific code to do that (user space can do it via sysfs), although
-once it has called pm_runtime_enable(), it must be prepared to handle the
+The runtime PM of PCI devices is enabled by default by the PCI core. PCI
+device drivers do not need to enable it and should not attempt to do so.
+However, it is blocked by pci_pm_init() that runs the pm_runtime_forbid()
+helper function. In addition to that, the runtime PM usage counter of
+each PCI device is incremented by local_pci_probe() before executing the
+probe callback provided by the device's driver.
+
+If a PCI driver implements the runtime PM callbacks and intends to use the
+runtime PM framework provided by the PM core and the PCI subsystem, it needs
+to decrement the device's runtime PM usage counter in its probe callback
+function. If it doesn't do that, the counter will always be different from
+zero for the device and it will never be runtime-suspended. The simplest
+way to do that is by calling pm_runtime_put_noidle(), but if the driver
+wants to schedule an autosuspend right away, for example, it may call
+pm_runtime_put_autosuspend() instead for this purpose. Generally, it
+just needs to call a function that decrements the device's usage counter
+from its probe routine to make runtime PM work for the device.
+
+It is important to remember that the driver's runtime_suspend() callback
+may be executed right after the usage counter has been decremented, because
+user space may already have caused the pm_runtime_allow() helper function
+unblocking the runtime PM of the device to run via sysfs, so the driver must
+be prepared to cope with that.
+
+The driver itself should not call pm_runtime_allow(), though. Instead, it
+should let user space or some platform-specific code do that (user space can
+do it via sysfs as stated above), but it must be prepared to handle the
 runtime PM of the device correctly as soon as pm_runtime_allow() is called
-(which may happen at any time). [It also is possible that user space causes
-pm_runtime_allow() to be called via sysfs before the driver is loaded, so in
-fact the driver has to be prepared to handle the runtime PM of the device as
-soon as it calls pm_runtime_enable().]
+(which may happen at any time, even before the driver is loaded).
+
+When the driver's remove callback runs, it has to balance the decrementation
+of the device's runtime PM usage counter at the probe time. For this reason,
+if it has decremented the counter in its probe callback, it must run
+pm_runtime_get_noresume() in its remove callback. [Since the core carries
+out a runtime resume of the device and bumps up the device's usage counter
+before running the driver's remove callback, the runtime PM of the device
+is effectively disabled for the duration of the remove execution and all
+runtime PM helper functions incrementing the device's usage counter are
+then effectively equivalent to pm_runtime_get_noresume().]
 
 The runtime PM framework works by processing requests to suspend or resume
 devices, or to check if they are idle (in which cases it is reasonable to
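The probe/remove contract described in the new text can be sketched as follows. This is a minimal illustration, not part of the patch: the foo_probe()/foo_remove() driver callbacks are hypothetical, and only the pm_runtime_* helpers come from the documentation above.

```c
#include <linux/pci.h>
#include <linux/pm_runtime.h>

static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	/* ... usual setup: pci_enable_device(), request regions, etc. ... */

	/*
	 * local_pci_probe() bumped the usage counter before calling us, so
	 * drop it here to let the device runtime-suspend later.  Note that
	 * runtime_suspend() may run as soon as user space unblocks runtime
	 * PM via sysfs, possibly right after this call.
	 */
	pm_runtime_put_noidle(&pdev->dev);
	return 0;
}

static void foo_remove(struct pci_dev *pdev)
{
	/* Balance the pm_runtime_put_noidle() done in foo_probe(). */
	pm_runtime_get_noresume(&pdev->dev);

	/* ... usual teardown ... */
}
```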
+1
Documentation/ptp/testptp.c
···
  * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
  */
 #define _GNU_SOURCE
+#define __SANE_USERSPACE_TYPES__        /* For PPC64, to get LL64 types */
 #include <errno.h>
 #include <fcntl.h>
 #include <inttypes.h>
···
  * This may need to be greater than __NR_last_syscall+1 in order to
  * account for the padding in the syscall table
  */
-#define __NR_syscalls  (388)
+#define __NR_syscalls  (392)
 
 /*
  * *NOTE*: This is a ghost syscall private to the kernel.  Only the
+2
arch/arm/include/uapi/asm/unistd.h
···
 #define __NR_memfd_create		(__NR_SYSCALL_BASE+385)
 #define __NR_bpf			(__NR_SYSCALL_BASE+386)
 #define __NR_execveat			(__NR_SYSCALL_BASE+387)
+#define __NR_userfaultfd		(__NR_SYSCALL_BASE+388)
+#define __NR_membarrier			(__NR_SYSCALL_BASE+389)
 
 /*
  * The following SWIs are ARM private.
···
 #define MIPS_CPU_CDMM		0x4000000000ull /* CPU has Common Device Memory Map */
 #define MIPS_CPU_BP_GHIST	0x8000000000ull /* R12K+ Branch Prediction Global History */
 #define MIPS_CPU_SP		0x10000000000ull /* Small (1KB) page support */
+#define MIPS_CPU_FTLB		0x20000000000ull /* CPU has Fixed-page-size TLB */
 
 /*
  * CPU ASE encodings
+9
arch/mips/include/asm/maar.h
···
 }
 
 /**
+ * maar_init() - initialise MAARs
+ *
+ * Performs initialisation of MAARs for the current CPU, making use of the
+ * platform's implementation of platform_maar_init where necessary and
+ * duplicating the setup it provides on secondary CPUs.
+ */
+extern void maar_init(void);
+
+/**
  * struct maar_config - MAAR configuration data
  * @lower:	The lowest address that the MAAR pair will affect. Must be
  *		aligned to a 2^16 byte boundary.
+39
arch/mips/include/asm/mips-cm.h
···
 BUILD_CM_R_(gic_status,		MIPS_CM_GCB_OFS + 0xd0)
 BUILD_CM_R_(cpc_status,		MIPS_CM_GCB_OFS + 0xf0)
 BUILD_CM_RW(l2_config,		MIPS_CM_GCB_OFS + 0x130)
+BUILD_CM_RW(sys_config2,	MIPS_CM_GCB_OFS + 0x150)
 
 /* Core Local & Core Other register accessor functions */
 BUILD_CM_Cx_RW(reset_release,	0x00)
···
 #define CM_GCR_L2_CONFIG_ASSOC_SHF		0
 #define CM_GCR_L2_CONFIG_ASSOC_MSK		(_ULCAST_(0xff) << 0)
 
+/* GCR_SYS_CONFIG2 register fields */
+#define CM_GCR_SYS_CONFIG2_MAXVPW_SHF		0
+#define CM_GCR_SYS_CONFIG2_MAXVPW_MSK		(_ULCAST_(0xf) << 0)
+
 /* GCR_Cx_COHERENCE register fields */
 #define CM_GCR_Cx_COHERENCE_COHDOMAINEN_SHF	0
 #define CM_GCR_Cx_COHERENCE_COHDOMAINEN_MSK	(_ULCAST_(0xff) << 0)
···
 		return 0;
 
 	return read_gcr_rev();
+}
+
+/**
+ * mips_cm_max_vp_width() - return the width in bits of VP indices
+ *
+ * Return: the width, in bits, of VP indices in fields that combine core & VP
+ * indices.
+ */
+static inline unsigned int mips_cm_max_vp_width(void)
+{
+	extern int smp_num_siblings;
+
+	if (mips_cm_revision() >= CM_REV_CM3)
+		return read_gcr_sys_config2() & CM_GCR_SYS_CONFIG2_MAXVPW_MSK;
+
+	return smp_num_siblings;
+}
+
+/**
+ * mips_cm_vp_id() - calculate the hardware VP ID for a CPU
+ * @cpu: the CPU whose VP ID to calculate
+ *
+ * Hardware such as the GIC uses identifiers for VPs which may not match the
+ * CPU numbers used by Linux. This function calculates the hardware VP
+ * identifier corresponding to a given CPU.
+ *
+ * Return: the VP ID for the CPU.
+ */
+static inline unsigned int mips_cm_vp_id(unsigned int cpu)
+{
+	unsigned int core = cpu_data[cpu].core;
+	unsigned int vp = cpu_vpe_id(&cpu_data[cpu]);
+
+	return (core * mips_cm_max_vp_width()) + vp;
 }
 
 #endif /* __MIPS_ASM_MIPS_CM_H__ */
···
 static inline unsigned int decode_config0(struct cpuinfo_mips *c)
 {
 	unsigned int config0;
-	int isa;
+	int isa, mt;
 
 	config0 = read_c0_config();
 
 	/*
 	 * Look for Standard TLB or Dual VTLB and FTLB
 	 */
-	if ((((config0 & MIPS_CONF_MT) >> 7) == 1) ||
-	    (((config0 & MIPS_CONF_MT) >> 7) == 4))
+	mt = config0 & MIPS_CONF_MT;
+	if (mt == MIPS_CONF_MT_TLB)
 		c->options |= MIPS_CPU_TLB;
+	else if (mt == MIPS_CONF_MT_FTLB)
+		c->options |= MIPS_CPU_TLB | MIPS_CPU_FTLB;
 
 	isa = (config0 & MIPS_CONF_AT) >> 13;
 	switch (isa) {
···
 	if (cpu_has_tlb) {
 		if (((config4 & MIPS_CONF4_IE) >> 29) == 2)
 			c->options |= MIPS_CPU_TLBINV;
+
 		/*
-		 * This is a bit ugly. R6 has dropped that field from
-		 * config4 and the only valid configuration is VTLB+FTLB so
-		 * set a good value for mmuextdef for that case.
+		 * R6 has dropped the MMUExtDef field from config4.
+		 * On R6 the fields always describe the FTLB, and only if it is
+		 * present according to Config.MT.
 		 */
-		if (cpu_has_mips_r6)
+		if (!cpu_has_mips_r6)
+			mmuextdef = config4 & MIPS_CONF4_MMUEXTDEF;
+		else if (cpu_has_ftlb)
 			mmuextdef = MIPS_CONF4_MMUEXTDEF_VTLBSIZEEXT;
 		else
-			mmuextdef = config4 & MIPS_CONF4_MMUEXTDEF;
+			mmuextdef = 0;
 
 		switch (mmuextdef) {
 		case MIPS_CONF4_MMUEXTDEF_MMUSIZEEXT:
+9-1
arch/mips/kernel/setup.c
···
 		if (end <= reserved_end)
 			continue;
 #ifdef CONFIG_BLK_DEV_INITRD
-		/* mapstart should be after initrd_end */
+		/* Skip zones before initrd and initrd itself */
 		if (initrd_end && end <= (unsigned long)PFN_UP(__pa(initrd_end)))
 			continue;
 #endif
···
 #endif
 		max_low_pfn = PFN_DOWN(HIGHMEM_START);
 	}
+
+#ifdef CONFIG_BLK_DEV_INITRD
+	/*
+	 * mapstart should be after initrd_end
+	 */
+	if (initrd_end)
+		mapstart = max(mapstart, (unsigned long)PFN_UP(__pa(initrd_end)));
+#endif
 
 	/*
 	 * Initialize the boot-time allocator with low memory only.
+2
arch/mips/kernel/smp.c
···
 #include <asm/mmu_context.h>
 #include <asm/time.h>
 #include <asm/setup.h>
+#include <asm/maar.h>
 
 cpumask_t cpu_callin_map;		/* Bitmask of started secondaries */
 
···
 	mips_clockevent_init();
 	mp_ops->init_secondary();
 	cpu_report();
+	maar_init();
 
 	/*
 	 * XXX parity protection should be folded in here when it's converted
···
 
 /* Runs on exception stack */
 ENTRY(nmi)
+	/*
+	 * Fix up the exception frame if we're on Xen.
+	 * PARAVIRT_ADJUST_EXCEPTION_FRAME is guaranteed to push at most
+	 * one value to the stack on native, so it may clobber the rdx
+	 * scratch slot, but it won't clobber any of the important
+	 * slots past it.
+	 *
+	 * Xen is a different story, because the Xen frame itself overlaps
+	 * the "NMI executing" variable.
+	 */
 	PARAVIRT_ADJUST_EXCEPTION_FRAME
+
 	/*
 	 * We allow breakpoints in NMIs. If a breakpoint occurs, then
 	 * the iretq it performs will take us out of NMI context.
···
 	 * we don't want to enable interrupts, because then we'll end
 	 * up in an awkward situation in which IRQs are on but NMIs
 	 * are off.
+	 *
+	 * We also must not push anything to the stack before switching
+	 * stacks lest we corrupt the "NMI executing" variable.
 	 */
 
-	SWAPGS
+	SWAPGS_UNSAFE_STACK
 	cld
 	movq	%rsp, %rdx
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+2
arch/x86/include/asm/efi.h
···
 extern void __iomem *__init efi_ioremap(unsigned long addr, unsigned long size,
 					u32 type, u64 attribute);
 
+#ifdef CONFIG_KASAN
 /*
  * CONFIG_KASAN may redefine memset to __memset.  __memset function is present
  * only in kernel binary.  Since the EFI stub linked into a separate binary it
···
 #undef memcpy
 #undef memset
 #undef memmove
+#endif
 
 #endif /* CONFIG_X86_32 */
 
···
 #include <asm/timer.h>
 #include <asm/special_insns.h>
 
-/* nop stub */
-void _paravirt_nop(void)
-{
-}
+/*
+ * nop stub, which must not clobber anything *including the stack* to
+ * avoid confusing the entry prologues.
+ */
+extern void _paravirt_nop(void);
+asm (".pushsection .entry.text, \"ax\"\n"
+     ".global _paravirt_nop\n"
+     "_paravirt_nop:\n\t"
+     "ret\n\t"
+     ".size _paravirt_nop, . - _paravirt_nop\n\t"
+     ".type _paravirt_nop, @function\n\t"
+     ".popsection");
 
 /* identity function, which can be inlined */
 u32 _paravirt_ident_32(u32 x)
+13-112
arch/x86/kvm/svm.c
···
 	struct vcpu_svm *svm = to_svm(vcpu);
 
 	if (svm->vmcb->control.next_rip != 0) {
-		WARN_ON(!static_cpu_has(X86_FEATURE_NRIPS));
+		WARN_ON_ONCE(!static_cpu_has(X86_FEATURE_NRIPS));
 		svm->next_rip = svm->vmcb->control.next_rip;
 	}
···
 	set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0);
 }
 
-#define MTRR_TYPE_UC_MINUS	7
-#define MTRR2PROTVAL_INVALID 0xff
-
-static u8 mtrr2protval[8];
-
-static u8 fallback_mtrr_type(int mtrr)
-{
-	/*
-	 * WT and WP aren't always available in the host PAT.  Treat
-	 * them as UC and UC- respectively.  Everything else should be
-	 * there.
-	 */
-	switch (mtrr)
-	{
-	case MTRR_TYPE_WRTHROUGH:
-		return MTRR_TYPE_UNCACHABLE;
-	case MTRR_TYPE_WRPROT:
-		return MTRR_TYPE_UC_MINUS;
-	default:
-		BUG();
-	}
-}
-
-static void build_mtrr2protval(void)
-{
-	int i;
-	u64 pat;
-
-	for (i = 0; i < 8; i++)
-		mtrr2protval[i] = MTRR2PROTVAL_INVALID;
-
-	/* Ignore the invalid MTRR types.  */
-	mtrr2protval[2] = 0;
-	mtrr2protval[3] = 0;
-
-	/*
-	 * Use host PAT value to figure out the mapping from guest MTRR
-	 * values to nested page table PAT/PCD/PWT values.  We do not
-	 * want to change the host PAT value every time we enter the
-	 * guest.
-	 */
-	rdmsrl(MSR_IA32_CR_PAT, pat);
-	for (i = 0; i < 8; i++) {
-		u8 mtrr = pat >> (8 * i);
-
-		if (mtrr2protval[mtrr] == MTRR2PROTVAL_INVALID)
-			mtrr2protval[mtrr] = __cm_idx2pte(i);
-	}
-
-	for (i = 0; i < 8; i++) {
-		if (mtrr2protval[i] == MTRR2PROTVAL_INVALID) {
-			u8 fallback = fallback_mtrr_type(i);
-			mtrr2protval[i] = mtrr2protval[fallback];
-			BUG_ON(mtrr2protval[i] == MTRR2PROTVAL_INVALID);
-		}
-	}
-}
-
 static __init int svm_hardware_setup(void)
 {
 	int cpu;
···
 	} else
 		kvm_disable_tdp();
 
-	build_mtrr2protval();
 	return 0;
 
 err:
···
 	return target_tsc - tsc;
 }
 
-static void svm_set_guest_pat(struct vcpu_svm *svm, u64 *g_pat)
-{
-	struct kvm_vcpu *vcpu = &svm->vcpu;
-
-	/* Unlike Intel, AMD takes the guest's CR0.CD into account.
-	 *
-	 * AMD does not have IPAT.  To emulate it for the case of guests
-	 * with no assigned devices, just set everything to WB.  If guests
-	 * have assigned devices, however, we cannot force WB for RAM
-	 * pages only, so use the guest PAT directly.
-	 */
-	if (!kvm_arch_has_assigned_device(vcpu->kvm))
-		*g_pat = 0x0606060606060606;
-	else
-		*g_pat = vcpu->arch.pat;
-}
-
-static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
-{
-	u8 mtrr;
-
-	/*
-	 * 1. MMIO: trust guest MTRR, so same as item 3.
-	 * 2. No passthrough: always map as WB, and force guest PAT to WB as well
-	 * 3. Passthrough: can't guarantee the result, try to trust guest.
-	 */
-	if (!is_mmio && !kvm_arch_has_assigned_device(vcpu->kvm))
-		return 0;
-
-	if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED) &&
-	    kvm_read_cr0(vcpu) & X86_CR0_CD)
-		return _PAGE_NOCACHE;
-
-	mtrr = kvm_mtrr_get_guest_memory_type(vcpu, gfn);
-	return mtrr2protval[mtrr];
-}
-
 static void init_vmcb(struct vcpu_svm *svm, bool init_event)
 {
 	struct vmcb_control_area *control = &svm->vmcb->control;
···
 		clr_cr_intercept(svm, INTERCEPT_CR3_READ);
 		clr_cr_intercept(svm, INTERCEPT_CR3_WRITE);
 		save->g_pat = svm->vcpu.arch.pat;
-		svm_set_guest_pat(svm, &save->g_pat);
 		save->cr3 = 0;
 		save->cr4 = 0;
 	}
···
 
 	if (!vcpu->fpu_active)
 		cr0 |= X86_CR0_TS;
-
-	/* These are emulated via page tables.  */
-	cr0 &= ~(X86_CR0_CD | X86_CR0_NW);
-
+	/*
+	 * re-enable caching here because the QEMU bios
+	 * does not do it - this results in some delay at
+	 * reboot
+	 */
+	if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
+		cr0 &= ~(X86_CR0_CD | X86_CR0_NW);
 	svm->vmcb->save.cr0 = cr0;
 	mark_dirty(svm->vmcb, VMCB_CR);
 	update_cr0_intercept(svm);
···
 	case MSR_VM_IGNNE:
 		vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
 		break;
-	case MSR_IA32_CR_PAT:
-		if (npt_enabled) {
-			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
-				return 1;
-			vcpu->arch.pat = data;
-			svm_set_guest_pat(svm, &svm->vmcb->save.g_pat);
-			mark_dirty(svm->vmcb, VMCB_NPT);
-			break;
-		}
-		/* fall through */
 	default:
 		return kvm_set_msr_common(vcpu, msr);
 	}
···
 static bool svm_has_high_real_mode_segbase(void)
 {
 	return true;
+}
+
+static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+{
+	return 0;
 }
 
 static void svm_cpuid_update(struct kvm_vcpu *vcpu)
+8-3
arch/x86/kvm/vmx.c
···
 	u64 ipat = 0;
 
 	/* For VT-d and EPT combination
-	 * 1. MMIO: guest may want to apply WC, trust it.
+	 * 1. MMIO: always map as UC
 	 * 2. EPT with VT-d:
 	 *   a. VT-d without snooping control feature: can't guarantee the
-	 *	result, try to trust guest.  So the same as item 1.
+	 *	result, try to trust guest.
 	 *   b. VT-d with snooping control feature: snooping control feature of
 	 *	VT-d engine can guarantee the cache correctness. Just set it
 	 *	to WB to keep consistent with host. So the same as item 3.
 	 * 3. EPT without VT-d: always map as WB and set IPAT=1 to keep
 	 *    consistent with host MTRR
 	 */
-	if (!is_mmio && !kvm_arch_has_noncoherent_dma(vcpu->kvm)) {
+	if (is_mmio) {
+		cache = MTRR_TYPE_UNCACHABLE;
+		goto exit;
+	}
+
+	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) {
 		ipat = VMX_EPT_IPAT_BIT;
 		cache = MTRR_TYPE_WRBACK;
 		goto exit;
-4
arch/x86/kvm/x86.c
···
 		vcpu->pvclock_set_guest_stopped_request = false;
 	}
 
-	pvclock_flags |= PVCLOCK_COUNTS_FROM_ZERO;
-
 	/* If the host uses TSC clocksource, then it is stable */
 	if (use_master_clock)
 		pvclock_flags |= PVCLOCK_TSC_STABLE_BIT;
···
 			 &vcpu->requests);
 
 		ka->boot_vcpu_runs_old_kvmclock = tmp;
-
-		ka->kvmclock_offset = -get_kernel_ns();
 	}
 
 	vcpu->arch.time = data;
···
 		goto err_exit;
 
 	mutex_lock(&ec->mutex);
+	result = -ENODATA;
 	list_for_each_entry(handler, &ec->list, node) {
 		if (value == handler->query_bit) {
+			result = 0;
 			q->handler = acpi_ec_get_query_handler(handler);
 			ec_dbg_evt("Query(0x%02x) scheduled",
 				   q->handler->query_bit);
+1
drivers/acpi/pci_irq.c
···
 
 	/* Interrupt Line values above 0xF are forbidden */
 	if (dev->irq > 0 && (dev->irq <= 0xF) &&
+	    acpi_isa_irq_available(dev->irq) &&
 	    (acpi_isa_irq_to_gsi(dev->irq, &dev_gsi) == 0)) {
 		dev_warn(&dev->dev, "PCI INT %c: no GSI - using ISA IRQ %d\n",
 			 pin_name(dev->pin), dev->irq);
+14-2
drivers/acpi/pci_link.c
···
 				PIRQ_PENALTY_PCI_POSSIBLE;
 		}
 	}
-	/* Add a penalty for the SCI */
-	acpi_irq_penalty[acpi_gbl_FADT.sci_interrupt] += PIRQ_PENALTY_PCI_USING;
+
 	return 0;
 }
···
 			    acpi_irq_penalty[link->irq.possible[i]])
 				irq = link->irq.possible[i];
 		}
+	}
+	if (acpi_irq_penalty[irq] >= PIRQ_PENALTY_ISA_ALWAYS) {
+		printk(KERN_ERR PREFIX "No IRQ available for %s [%s]. "
+			    "Try pci=noacpi or acpi=off\n",
+			    acpi_device_name(link->device),
+			    acpi_device_bid(link->device));
+		return -ENODEV;
 	}
 
 	/* Attempt to enable the link device at this IRQ. */
···
 		else
 			acpi_irq_penalty[irq] += PIRQ_PENALTY_PCI_USING;
 	}
+}
+
+bool acpi_isa_irq_available(int irq)
+{
+	return irq >= 0 && (irq >= ARRAY_SIZE(acpi_irq_penalty) ||
+			    acpi_irq_penalty[irq] < PIRQ_PENALTY_ISA_ALWAYS);
 }
 
 /*
+8-2
drivers/base/cacheinfo.c
···
 
 		if (sibling == cpu) /* skip itself */
 			continue;
+
 		sib_cpu_ci = get_cpu_cacheinfo(sibling);
+		if (!sib_cpu_ci->info_list)
+			continue;
+
 		sib_leaf = sib_cpu_ci->info_list + index;
 		cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map);
 		cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map);
···
 
 static void free_cache_attributes(unsigned int cpu)
 {
+	if (!per_cpu_cacheinfo(cpu))
+		return;
+
 	cache_shared_cpu_map_remove(cpu);
 
 	kfree(per_cpu_cacheinfo(cpu));
···
 		break;
 	case CPU_DEAD:
 		cache_remove_dev(cpu);
-		if (per_cpu_cacheinfo(cpu))
-			free_cache_attributes(cpu);
+		free_cache_attributes(cpu);
 		break;
 	}
 	return notifier_from_errno(rc);
+12-5
drivers/base/power/opp.c
···
 	u32 microvolt[3] = {0};
 	int count, ret;
 
-	count = of_property_count_u32_elems(opp->np, "opp-microvolt");
-	if (!count)
+	/* Missing property isn't a problem, but an invalid entry is */
+	if (!of_find_property(opp->np, "opp-microvolt", NULL))
 		return 0;
+
+	count = of_property_count_u32_elems(opp->np, "opp-microvolt");
+	if (count < 0) {
+		dev_err(dev, "%s: Invalid opp-microvolt property (%d)\n",
+			__func__, count);
+		return count;
+	}
 
 	/* There can be one or three elements here */
 	if (count != 1 && count != 3) {
···
  * share a common logic which is isolated here.
  *
  * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
- * copy operation, returns 0 if no modifcation was done OR modification was
+ * copy operation, returns 0 if no modification was done OR modification was
  * successful.
  *
  * Locking: The internal device_opp and opp structures are RCU protected.
···
  * mutex locking or synchronize_rcu() blocking calls cannot be used.
  *
  * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
- * copy operation, returns 0 if no modifcation was done OR modification was
+ * copy operation, returns 0 if no modification was done OR modification was
  * successful.
  */
 int dev_pm_opp_enable(struct device *dev, unsigned long freq)
···
  * mutex locking or synchronize_rcu() blocking calls cannot be used.
  *
  * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
- * copy operation, returns 0 if no modifcation was done OR modification was
+ * copy operation, returns 0 if no modification was done OR modification was
  * successful.
  */
 int dev_pm_opp_disable(struct device *dev, unsigned long freq)
+4-3
drivers/char/hw_random/xgene-rng.c
···
 	if (IS_ERR(ctx->csr_base))
 		return PTR_ERR(ctx->csr_base);
 
-	ctx->irq = platform_get_irq(pdev, 0);
-	if (ctx->irq < 0) {
+	rc = platform_get_irq(pdev, 0);
+	if (rc < 0) {
 		dev_err(&pdev->dev, "No IRQ resource\n");
-		return ctx->irq;
+		return rc;
 	}
+	ctx->irq = rc;
 
 	dev_dbg(&pdev->dev, "APM X-Gene RNG BASE %p ALARM IRQ %d",
 		ctx->csr_base, ctx->irq);
+27
drivers/crypto/marvell/cesa.h
···
 
 int mv_cesa_queue_req(struct crypto_async_request *req);
 
+/*
+ * Helper function that indicates whether a crypto request needs to be
+ * cleaned up or not after being enqueued using mv_cesa_queue_req().
+ */
+static inline int mv_cesa_req_needs_cleanup(struct crypto_async_request *req,
+					    int ret)
+{
+	/*
+	 * The queue still had some space, the request was queued
+	 * normally, so there's no need to clean it up.
+	 */
+	if (ret == -EINPROGRESS)
+		return false;
+
+	/*
+	 * The queue had no space left, but since the request is
+	 * flagged with CRYPTO_TFM_REQ_MAY_BACKLOG, it was added to
+	 * the backlog and will be processed later. There's no need to
+	 * clean it up.
+	 */
+	if (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
+		return false;
+
+	/* Request wasn't queued, we need to clean it up */
+	return true;
+}
+
 /* TDMA functions */
 
 static inline void mv_cesa_req_dma_iter_init(struct mv_cesa_dma_iter *iter,
+3-4
drivers/crypto/marvell/cipher.c
···
 {
 	struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
 	struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq);
-
 	creq->req.base.engine = engine;
 
 	if (creq->req.base.type == CESA_DMA_REQ)
···
 		return ret;
 
 	ret = mv_cesa_queue_req(&req->base);
-	if (ret && ret != -EINPROGRESS)
+	if (mv_cesa_req_needs_cleanup(&req->base, ret))
 		mv_cesa_ablkcipher_cleanup(req);
 
 	return ret;
···
 		return ret;
 
 	ret = mv_cesa_queue_req(&req->base);
-	if (ret && ret != -EINPROGRESS)
+	if (mv_cesa_req_needs_cleanup(&req->base, ret))
 		mv_cesa_ablkcipher_cleanup(req);
 
 	return ret;
···
 		return ret;
 
 	ret = mv_cesa_queue_req(&req->base);
-	if (ret && ret != -EINPROGRESS)
+	if (mv_cesa_req_needs_cleanup(&req->base, ret))
 		mv_cesa_ablkcipher_cleanup(req);
 
 	return ret;
+3-5
drivers/crypto/marvell/hash.c
···
 		return 0;
 
 	ret = mv_cesa_queue_req(&req->base);
-	if (ret && ret != -EINPROGRESS) {
+	if (mv_cesa_req_needs_cleanup(&req->base, ret))
 		mv_cesa_ahash_cleanup(req);
-		return ret;
-	}
 
 	return ret;
 }
···
 		return 0;
 
 	ret = mv_cesa_queue_req(&req->base);
-	if (ret && ret != -EINPROGRESS)
+	if (mv_cesa_req_needs_cleanup(&req->base, ret))
 		mv_cesa_ahash_cleanup(req);
 
 	return ret;
···
 		return 0;
 
 	ret = mv_cesa_queue_req(&req->base);
-	if (ret && ret != -EINPROGRESS)
+	if (mv_cesa_req_needs_cleanup(&req->base, ret))
 		mv_cesa_ahash_cleanup(req);
 
 	return ret;
···
+/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/io.h>
+#include <linux/errno.h>
+#include <linux/qcom_scm.h>
+
+/**
+ * qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus
+ * @entry: Entry point function for the cpus
+ * @cpus: The cpumask of cpus that will use the entry point
+ *
+ * Set the cold boot address of the cpus. Any cpu outside the supported
+ * range would be removed from the cpu present mask.
+ */
+int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus)
+{
+	return -ENOTSUPP;
+}
+
+/**
+ * qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus
+ * @entry: Entry point function for the cpus
+ * @cpus: The cpumask of cpus that will use the entry point
+ *
+ * Set the Linux entry point for the SCM to transfer control to when coming
+ * out of a power down. CPU power down may be executed on cpuidle or hotplug.
+ */
+int __qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus)
+{
+	return -ENOTSUPP;
+}
+
+/**
+ * qcom_scm_cpu_power_down() - Power down the cpu
+ * @flags - Flags to flush cache
+ *
+ * This is an end point to power down cpu. If there was a pending interrupt,
+ * the control would return from this function, otherwise, the cpu jumps to the
+ * warm boot entry point set for this cpu upon reset.
+ */
+void __qcom_scm_cpu_power_down(u32 flags)
+{
+}
+
+int __qcom_scm_is_call_available(u32 svc_id, u32 cmd_id)
+{
+	return -ENOTSUPP;
+}
+
+int __qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp)
+{
+	return -ENOTSUPP;
+}
+17
drivers/hv/channel_mgmt.c
···
 		spin_lock_irqsave(&vmbus_connection.channel_lock, flags);
 		list_del(&channel->listentry);
 		spin_unlock_irqrestore(&vmbus_connection.channel_lock, flags);
+
+		primary_channel = channel;
 	} else {
 		primary_channel = channel->primary_channel;
 		spin_lock_irqsave(&primary_channel->lock, flags);
···
 		primary_channel->num_sc--;
 		spin_unlock_irqrestore(&primary_channel->lock, flags);
 	}
+
+	/*
+	 * We need to free the bit for init_vp_index() to work in the case
+	 * of sub-channel, when we reload drivers like hv_netvsc.
+	 */
+	cpumask_clear_cpu(channel->target_cpu,
+			  &primary_channel->alloced_cpus_in_node);
+
 	free_channel(channel);
 }
···
 			continue;
 		}
 
+		/*
+		 * NOTE: in the case of sub-channel, we clear the sub-channel
+		 * related bit(s) in primary->alloced_cpus_in_node in
+		 * hv_process_channel_removal(), so when we reload drivers
+		 * like hv_netvsc in SMP guest, here we're able to re-allocate
+		 * bit from primary->alloced_cpus_in_node.
+		 */
 		if (!cpumask_test_cpu(cur_cpu,
 				      &primary->alloced_cpus_in_node)) {
 			cpumask_set_cpu(cur_cpu,
···
 	unsigned long dt;
 	unsigned long flags;
 	int i;
+	LIST_HEAD(remove_list);
+	struct ipoib_mcast *mcast, *tmcast;
+	struct net_device *dev = priv->dev;
 
 	if (test_bit(IPOIB_STOP_NEIGH_GC, &priv->flags))
 		return;
···
 			lockdep_is_held(&priv->lock))) != NULL) {
 		/* was the neigh idle for two GC periods */
 		if (time_after(neigh_obsolete, neigh->alive)) {
+			u8 *mgid = neigh->daddr + 4;
+
+			/* Is this multicast ? */
+			if (*mgid == 0xff) {
+				mcast = __ipoib_mcast_find(dev, mgid);
+
+				if (mcast && test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) {
+					list_del(&mcast->list);
+					rb_erase(&mcast->rb_node, &priv->multicast_tree);
+					list_add_tail(&mcast->list, &remove_list);
+				}
+			}
+
 			rcu_assign_pointer(*np,
 					   rcu_dereference_protected(neigh->hnext,
 					   lockdep_is_held(&priv->lock)));
···
 
 out_unlock:
 	spin_unlock_irqrestore(&priv->lock, flags);
+	list_for_each_entry_safe(mcast, tmcast, &remove_list, list)
+		ipoib_mcast_leave(dev, mcast);
 }
 
 static void ipoib_reap_neigh(struct work_struct *work)
+14-12
drivers/infiniband/ulp/ipoib/ipoib_multicast.c
···
 	return mcast;
 }
 
-static struct ipoib_mcast *__ipoib_mcast_find(struct net_device *dev, void *mgid)
+struct ipoib_mcast *__ipoib_mcast_find(struct net_device *dev, void *mgid)
 {
 	struct ipoib_dev_priv *priv = netdev_priv(dev);
 	struct rb_node *n = priv->multicast_tree.rb_node;
···
 	rec.hop_limit	  = priv->broadcast->mcmember.hop_limit;
 
 	/*
-	 * Historically Linux IPoIB has never properly supported SEND
-	 * ONLY join. It emulated it by not providing all the required
-	 * attributes, which is enough to prevent group creation and
-	 * detect if there are full members or not. A major problem
-	 * with supporting SEND ONLY is detecting when the group is
-	 * auto-destroyed as IPoIB will cache the MLID..
+	 * Send-only IB Multicast joins do not work at the core
+	 * IB layer yet, so we can't use them here. However,
+	 * we are emulating an Ethernet multicast send, which
+	 * does not require a multicast subscription and will
+	 * still send properly. The most appropriate thing to
+	 * do is to create the group if it doesn't exist as that
+	 * most closely emulates the behavior, from a user space
+	 * application perspecitive, of Ethernet multicast
+	 * operation. For now, we do a full join, maybe later
+	 * when the core IB layers support send only joins we
+	 * will use them.
 	 */
-#if 1
-	if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags))
-		comp_mask &= ~IB_SA_MCMEMBER_REC_TRAFFIC_CLASS;
-#else
+#if 0
 	if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags))
 		rec.join_state = 4;
 #endif
···
 	return 0;
 }
 
-static int ipoib_mcast_leave(struct net_device *dev, struct ipoib_mcast *mcast)
+int ipoib_mcast_leave(struct net_device *dev, struct ipoib_mcast *mcast)
 {
 	struct ipoib_dev_priv *priv = netdev_priv(dev);
 	int ret = 0;
+5
drivers/infiniband/ulp/iser/iscsi_iser.c
···
 module_param_named(max_sectors, iser_max_sectors, uint, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(max_sectors, "Max number of sectors in a single scsi command (default:1024");
 
+bool iser_always_reg = true;
+module_param_named(always_register, iser_always_reg, bool, S_IRUGO);
+MODULE_PARM_DESC(always_register,
+		 "Always register memory, even for continuous memory regions (default:true)");
+
 bool iser_pi_enable = false;
 module_param_named(pi_enable, iser_pi_enable, bool, S_IRUGO);
 MODULE_PARM_DESC(pi_enable, "Enable T10-PI offload support (default:disabled)");
+1
drivers/infiniband/ulp/iser/iscsi_iser.h
···
 extern bool iser_pi_enable;
 extern int iser_pi_guard;
 extern unsigned int iser_max_sectors;
+extern bool iser_always_reg;
 
 int iser_assign_reg_ops(struct iser_device *device);
 
···
 config JOYSTICK_ZHENHUA
 	tristate "5-byte Zhenhua RC transmitter"
 	select SERIO
+	select BITREVERSE
 	help
 	  Say Y here if you have a Zhen Hua PPM-4CH transmitter which is
 	  supplied with a ready to fly micro electric indoor helicopters
···
 
 	/* Restrict dma_mask to the width that the iommu can handle */
 	dma_mask = min_t(uint64_t, DOMAIN_MAX_ADDR(domain->gaw), dma_mask);
+	/* Ensure we reserve the whole size-aligned region */
+	nrpages = __roundup_pow_of_two(nrpages);
 
 	if (!dmar_forcedac && dma_mask > DMA_BIT_MASK(32)) {
 		/*
···
 static int __init iommu_init_mempool(void)
 {
 	int ret;
-	ret = iommu_iova_cache_init();
+	ret = iova_cache_get();
 	if (ret)
 		return ret;
···
 
 	kmem_cache_destroy(iommu_domain_cache);
 domain_error:
-	iommu_iova_cache_destroy();
+	iova_cache_put();
 
 	return -ENOMEM;
 }
···
 {
 	kmem_cache_destroy(iommu_devinfo_cache);
 	kmem_cache_destroy(iommu_domain_cache);
-	iommu_iova_cache_destroy();
+	iova_cache_put();
 }
 
 static void quirk_ioat_snb_local_iommu(struct pci_dev *pdev)
+69-51
drivers/iommu/iova.c
···
  */
 
 #include <linux/iova.h>
+#include <linux/module.h>
 #include <linux/slab.h>
-
-static struct kmem_cache *iommu_iova_cache;
-
-int iommu_iova_cache_init(void)
-{
-	int ret = 0;
-
-	iommu_iova_cache = kmem_cache_create("iommu_iova",
-					 sizeof(struct iova),
-					 0,
-					 SLAB_HWCACHE_ALIGN,
-					 NULL);
-	if (!iommu_iova_cache) {
-		pr_err("Couldn't create iova cache\n");
-		ret = -ENOMEM;
-	}
-
-	return ret;
-}
-
-void iommu_iova_cache_destroy(void)
-{
-	kmem_cache_destroy(iommu_iova_cache);
-}
-
-struct iova *alloc_iova_mem(void)
-{
-	return kmem_cache_alloc(iommu_iova_cache, GFP_ATOMIC);
-}
-
-void free_iova_mem(struct iova *iova)
-{
-	kmem_cache_free(iommu_iova_cache, iova);
-}
 
 void
 init_iova_domain(struct iova_domain *iovad, unsigned long granule,
···
 	iovad->start_pfn = start_pfn;
 	iovad->dma_32bit_pfn = pfn_32bit;
 }
+EXPORT_SYMBOL_GPL(init_iova_domain);
 
 static struct rb_node *
 __get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn)
···
 	}
 }
 
-/* Computes the padding size required, to make the
- * the start address naturally aligned on its size
+/*
+ * Computes the padding size required, to make the start address
+ * naturally aligned on the power-of-two order of its size
  */
-static int
-iova_get_pad_size(int size, unsigned int limit_pfn)
+static unsigned int
+iova_get_pad_size(unsigned int size, unsigned int limit_pfn)
 {
-	unsigned int pad_size = 0;
-	unsigned int order = ilog2(size);
-
-	if (order)
-		pad_size = (limit_pfn + 1) % (1 << order);
-
-	return pad_size;
+	return (limit_pfn + 1 - size) & (__roundup_pow_of_two(size) - 1);
 }
 
 static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
···
 	rb_insert_color(&iova->node, root);
 }
 
+static struct kmem_cache *iova_cache;
+static unsigned int iova_cache_users;
+static DEFINE_MUTEX(iova_cache_mutex);
+
+struct iova *alloc_iova_mem(void)
+{
+	return kmem_cache_alloc(iova_cache, GFP_ATOMIC);
+}
+EXPORT_SYMBOL(alloc_iova_mem);
+
+void free_iova_mem(struct iova *iova)
+{
+	kmem_cache_free(iova_cache, iova);
+}
+EXPORT_SYMBOL(free_iova_mem);
+
+int iova_cache_get(void)
+{
+	mutex_lock(&iova_cache_mutex);
+	if (!iova_cache_users) {
+		iova_cache = kmem_cache_create(
+			"iommu_iova", sizeof(struct iova), 0,
+			SLAB_HWCACHE_ALIGN, NULL);
+		if (!iova_cache) {
+			mutex_unlock(&iova_cache_mutex);
+			printk(KERN_ERR "Couldn't create iova cache\n");
+			return -ENOMEM;
+		}
+	}
+
+	iova_cache_users++;
+	mutex_unlock(&iova_cache_mutex);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(iova_cache_get);
+
+void iova_cache_put(void)
+{
+	mutex_lock(&iova_cache_mutex);
+	if (WARN_ON(!iova_cache_users)) {
+		mutex_unlock(&iova_cache_mutex);
+		return;
+	}
+	iova_cache_users--;
+	if (!iova_cache_users)
+		kmem_cache_destroy(iova_cache);
+	mutex_unlock(&iova_cache_mutex);
+}
+EXPORT_SYMBOL_GPL(iova_cache_put);
+
 /**
  * alloc_iova - allocates an iova
  * @iovad: - iova domain in question
···
 	if (!new_iova)
 		return NULL;
 
-	/* If size aligned is set then round the size to
-	 * to next power of two.
-	 */
-	if (size_aligned)
-		size = __roundup_pow_of_two(size);
-
 	ret = __alloc_and_insert_iova_range(iovad, size, limit_pfn,
 			new_iova, size_aligned);
···
 
 	return new_iova;
 }
+EXPORT_SYMBOL_GPL(alloc_iova);
 
 /**
  * find_iova - find's an iova for a given pfn
···
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 	return NULL;
 }
+EXPORT_SYMBOL_GPL(find_iova);
 
 /**
  * __free_iova - frees the given iova
···
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 	free_iova_mem(iova);
 }
+EXPORT_SYMBOL_GPL(__free_iova);
 
 /**
  * free_iova - finds and frees the iova for a given pfn
···
 		__free_iova(iovad, iova);
 
 }
+EXPORT_SYMBOL_GPL(free_iova);
 
 /**
  * put_iova_domain - destroys the iova doamin
···
 	}
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 }
+EXPORT_SYMBOL_GPL(put_iova_domain);
 
 static int
 __is_range_overlap(struct rb_node *node,
···
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 	return iova;
 }
+EXPORT_SYMBOL_GPL(reserve_iova);
 
 /**
  * copy_reserved_iova - copies the reserved between domains
···
 	}
 	spin_unlock_irqrestore(&from->iova_rbtree_lock, flags);
 }
+EXPORT_SYMBOL_GPL(copy_reserved_iova);
 
 struct iova *
 split_and_remove_iova(struct iova_domain *iovad, struct iova *iova,
···
 	free_iova_mem(prev);
 	return NULL;
 }
+
+MODULE_AUTHOR("Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>");
+MODULE_LICENSE("GPL");
+16-8
drivers/irqchip/irq-atmel-aic5.c
···
 {
 	struct irq_domain *domain = d->domain;
 	struct irq_domain_chip_generic *dgc = domain->gc;
-	struct irq_chip_generic *gc = dgc->gc[0];
+	struct irq_chip_generic *bgc = dgc->gc[0];
+	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
 
-	/* Disable interrupt on AIC5 */
-	irq_gc_lock(gc);
+	/*
+	 * Disable interrupt on AIC5. We always take the lock of the
+	 * first irq chip as all chips share the same registers.
+	 */
+	irq_gc_lock(bgc);
 	irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR);
 	irq_reg_writel(gc, 1, AT91_AIC5_IDCR);
 	gc->mask_cache &= ~d->mask;
-	irq_gc_unlock(gc);
+	irq_gc_unlock(bgc);
 }
 
 static void aic5_unmask(struct irq_data *d)
 {
 	struct irq_domain *domain = d->domain;
 	struct irq_domain_chip_generic *dgc = domain->gc;
-	struct irq_chip_generic *gc = dgc->gc[0];
+	struct irq_chip_generic *bgc = dgc->gc[0];
+	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
 
-	/* Enable interrupt on AIC5 */
-	irq_gc_lock(gc);
+	/*
+	 * Enable interrupt on AIC5. We always take the lock of the
+	 * first irq chip as all chips share the same registers.
+	 */
+	irq_gc_lock(bgc);
 	irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR);
 	irq_reg_writel(gc, 1, AT91_AIC5_IECR);
 	gc->mask_cache |= d->mask;
-	irq_gc_unlock(gc);
+	irq_gc_unlock(bgc);
 }
 
 static int aic5_retrigger(struct irq_data *d)
+10-2
drivers/irqchip/irq-mips-gic.c
···
 		intrmask[i] = gic_read(intrmask_reg);
 		pending_reg += gic_reg_step;
 		intrmask_reg += gic_reg_step;
+
+		if (!config_enabled(CONFIG_64BIT) || mips_cm_is64)
+			continue;
+
+		pending[i] |= (u64)gic_read(pending_reg) << 32;
+		intrmask[i] |= (u64)gic_read(intrmask_reg) << 32;
+		pending_reg += gic_reg_step;
+		intrmask_reg += gic_reg_step;
 	}
 
 	bitmap_and(pending, pending, intrmask, gic_shared_intrs);
···
 	spin_lock_irqsave(&gic_lock, flags);
 
 	/* Re-route this IRQ */
-	gic_map_to_vpe(irq, cpumask_first(&tmp));
+	gic_map_to_vpe(irq, mips_cm_vp_id(cpumask_first(&tmp)));
 
 	/* Update the pcpu_masks */
 	for (i = 0; i < NR_CPUS; i++)
···
 			GIC_SHARED_TO_HWIRQ(intr));
 	int i;
 
-	gic_map_to_vpe(intr, cpu);
+	gic_map_to_vpe(intr, mips_cm_vp_id(cpu));
 	for (i = 0; i < NR_CPUS; i++)
 		clear_bit(intr, pcpu_masks[i].pcpu_mask);
 	set_bit(intr, pcpu_masks[cpu].pcpu_mask);
···
 		goto bad;
 	}
 
+	if (data_size > ubi->leb_size) {
+		ubi_err(ubi, "bad data_size");
+		goto bad;
+	}
+
 	if (vol_type == UBI_VID_STATIC) {
 		/*
 		 * Although from high-level point of view static volumes may
+1
drivers/mtd/ubi/vtbl.c
···
 		if (ubi->corr_peb_count)
 			ubi_err(ubi, "%d PEBs are corrupted and not used",
 				ubi->corr_peb_count);
+		return -ENOSPC;
 	}
 	ubi->rsvd_pebs += reserved_pebs;
 	ubi->avail_pebs -= reserved_pebs;
+1
drivers/mtd/ubi/wl.c
···
 		if (ubi->corr_peb_count)
 			ubi_err(ubi, "%d PEBs are corrupted and not used",
 				ubi->corr_peb_count);
+		err = -ENOSPC;
 		goto out_free;
 	}
 	ubi->avail_pebs -= reserved_pebs;
···
 	/* take the lock before we start messing with the ring */
 	mutex_lock(&hw->aq.arq_mutex);
 
+	if (hw->aq.arq.count == 0) {
+		i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
+			   "AQRX: Admin queue not initialized.\n");
+		ret_code = I40E_ERR_QUEUE_EMPTY;
+		goto clean_arq_element_err;
+	}
+
 	/* set next_to_use to head */
 	ntu = (rd32(hw, hw->aq.arq.head) & I40E_PF_ARQH_ARQH_MASK);
 	if (ntu == ntc) {
···
 	/* Set pending if needed, unlock and return */
 	if (pending != NULL)
 		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+
+clean_arq_element_err:
 	mutex_unlock(&hw->aq.arq_mutex);
 
 	if (i40e_is_nvm_update_op(&e->desc)) {
···
 	/* take the lock before we start messing with the ring */
 	mutex_lock(&hw->aq.arq_mutex);
 
+	if (hw->aq.arq.count == 0) {
+		i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
+			   "AQRX: Admin queue not initialized.\n");
+		ret_code = I40E_ERR_QUEUE_EMPTY;
+		goto clean_arq_element_err;
+	}
+
 	/* set next_to_use to head */
 	ntu = (rd32(hw, hw->aq.arq.head) & I40E_VF_ARQH1_ARQH_MASK);
 	if (ntu == ntc) {
···
 	/* Set pending if needed, unlock and return */
 	if (pending != NULL)
 		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+
+clean_arq_element_err:
 	mutex_unlock(&hw->aq.arq_mutex);
 
 	return ret_code;
+4-3
drivers/net/ethernet/mellanox/mlx4/mcg.c
···
 	if (prot == MLX4_PROT_ETH) {
 		/* manage the steering entry for promisc mode */
 		if (new_entry)
-			new_steering_entry(dev, port, steer, index, qp->qpn);
+			err = new_steering_entry(dev, port, steer,
+						 index, qp->qpn);
 		else
-			existing_steering_entry(dev, port, steer,
-						index, qp->qpn);
+			err = existing_steering_entry(dev, port, steer,
+						      index, qp->qpn);
 	}
 	if (err && link && index != -1) {
 		if (index < dev->caps.num_mgms)
···
 	 * Unbound PCI devices are always put in D0, regardless of
 	 * runtime PM status. During probe, the device is set to
 	 * active and the usage count is incremented. If the driver
-	 * supports runtime PM, it should call pm_runtime_put_noidle()
-	 * in its probe routine and pm_runtime_get_noresume() in its
-	 * remove routine.
+	 * supports runtime PM, it should call pm_runtime_put_noidle(),
+	 * or any other runtime PM helper function decrementing the usage
+	 * count, in its probe routine and pm_runtime_get_noresume() in
+	 * its remove routine.
 	 */
 	pm_runtime_get_sync(dev);
 	pci_dev->driver = pci_drv;
+20
drivers/staging/android/TODO
···
  - add proper arch dependencies as needed
  - audit userspace interfaces to make sure they are sane
 
+
+ion/
+ - Remove ION_IOC_SYNC: Flushing for devices should be purely a kernel internal
+   interface on top of dma-buf. flush_for_device needs to be added to dma-buf
+   first.
+ - Remove ION_IOC_CUSTOM: Atm used for cache flushing for cpu access in some
+   vendor trees. Should be replaced with an ioctl on the dma-buf to expose the
+   begin/end_cpu_access hooks to userspace.
+ - Clarify the tricks ion plays with explicitly managing coherency behind the
+   dma api's back (this is absolutely needed for high-perf gpu drivers): Add an
+   explicit coherency management mode to flush_for_device to be used by drivers
+   which want to manage caches themselves and which indicates whether cpu caches
+   need flushing.
+ - With those removed there's probably no use for ION_IOC_IMPORT anymore either
+   since ion would just be the central allocator for shared buffers.
+ - Add dt-binding to expose cma regions as ion heaps, with the rule that any
+   such cma regions must already be used by some device for dma. I.e. ion only
+   exposes existing cma regions and doesn't reserve unecessarily memory when
+   booting a system which doesn't use ion.
+
 Please send patches to Greg Kroah-Hartman <greg@kroah.com> and Cc:
 Arve Hjønnevåg <arve@android.com> and Riley Andrews <riandrews@android.com>
+3-3
drivers/staging/android/ion/ion.c
···
 		mutex_unlock(&client->lock);
 		goto end;
 	}
-	mutex_unlock(&client->lock);
 
 	handle = ion_handle_create(client, buffer);
-	if (IS_ERR(handle))
+	if (IS_ERR(handle)) {
+		mutex_unlock(&client->lock);
 		goto end;
+	}
 
-	mutex_lock(&client->lock);
 	ret = ion_handle_add(client, handle);
 	mutex_unlock(&client->lock);
 	if (ret) {
+1-1
drivers/staging/fbtft/fb_uc1611.c
···
 
 	/* Set CS active high */
 	par->spi->mode |= SPI_CS_HIGH;
-	ret = par->spi->master->setup(par->spi);
+	ret = spi_setup(par->spi);
 	if (ret) {
 		dev_err(par->info->device, "Could not set SPI_CS_HIGH\n");
 		return ret;
+2-2
drivers/staging/fbtft/fb_watterott.c
···
 	/* enable SPI interface by having CS and MOSI low during reset */
 	save_mode = par->spi->mode;
 	par->spi->mode |= SPI_CS_HIGH;
-	ret = par->spi->master->setup(par->spi); /* set CS inactive low */
+	ret = spi_setup(par->spi); /* set CS inactive low */
 	if (ret) {
 		dev_err(par->info->device, "Could not set SPI_CS_HIGH\n");
 		return ret;
···
 	par->fbtftops.reset(par);
 	mdelay(1000);
 	par->spi->mode = save_mode;
-	ret = par->spi->master->setup(par->spi);
+	ret = spi_setup(par->spi);
 	if (ret) {
 		dev_err(par->info->device, "Could not restore SPI mode\n");
 		return ret;
+3-7
drivers/staging/fbtft/fbtft-core.c
···
 
 	/* 9-bit SPI setup */
 	if (par->spi && display->buswidth == 9) {
-		par->spi->bits_per_word = 9;
-		ret = par->spi->master->setup(par->spi);
-		if (ret) {
+		if (par->spi->master->bits_per_word_mask & SPI_BPW_MASK(9)) {
+			par->spi->bits_per_word = 9;
+		} else {
 			dev_warn(&par->spi->dev,
 				"9-bit SPI not available, emulating using 8-bit.\n");
-			par->spi->bits_per_word = 8;
-			ret = par->spi->master->setup(par->spi);
-			if (ret)
-				goto out_release;
 			/* allocate buffer with room for dc bits */
 			par->extra = devm_kzalloc(par->info->device,
 				par->txbuf.len + (par->txbuf.len / 8) + 8,
+4-7
drivers/staging/fbtft/flexfb.c
···
 	}
 	par->fbtftops.write_register = fbtft_write_reg8_bus9;
 	par->fbtftops.write_vmem = fbtft_write_vmem16_bus9;
-	sdev->bits_per_word = 9;
-	ret = sdev->master->setup(sdev);
-	if (ret) {
+	if (par->spi->master->bits_per_word_mask
+	    & SPI_BPW_MASK(9)) {
+		par->spi->bits_per_word = 9;
+	} else {
 		dev_warn(dev,
 			"9-bit SPI not available, emulating using 8-bit.\n");
-		sdev->bits_per_word = 8;
-		ret = sdev->master->setup(sdev);
-		if (ret)
-			goto out_release;
 		/* allocate buffer with room for dc bits */
 		par->extra = devm_kzalloc(par->info->device,
 			par->txbuf.len + (par->txbuf.len / 8) + 8,
+6-10
drivers/staging/lustre/README.txt
···
 Lustre has independent Metadata and Data servers that clients can access
 in parallel to maximize performance.
 
-In order to use Lustre client you will need to download lustre client
-tools from
-https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/
-the package name is lustre-client.
+In order to use Lustre client you will need to download the "lustre-client"
+package that contains the userspace tools from http://lustre.org/download/
 
 You will need to install and configure your Lustre servers separately.
 
···
 
 More Information
 ================
-You can get more information at
-OpenSFS website: http://lustre.opensfs.org/about/
-Intel HPDD wiki: https://wiki.hpdd.intel.com
+You can get more information at the Lustre website: http://wiki.lustre.org/
 
-Out of tree Lustre client and server code is available at:
-http://git.whamcloud.com/fs/lustre-release.git
+Source for the userspace tools and out-of-tree client and server code
+is available at: http://git.hpdd.intel.com/fs/lustre-release.git
 
 Latest binary packages:
-http://lustre.opensfs.org/download-lustre/
+http://lustre.org/download/
···
 config HDM_DIM2
 	tristate "DIM2 HDM"
 	depends on AIM_NETWORK
+	depends on HAS_IOMEM
 
 	---help---
 	  Say Y here if you want to connect via MediaLB to network transceiver.
+1-1
drivers/staging/most/hdm-usb/Kconfig
···
 
 config HDM_USB
 	tristate "USB HDM"
-	depends on USB
+	depends on USB && NET
 	select AIM_NETWORK
 	---help---
 	  Say Y here if you want to connect via USB to network tranceiver.
+1
drivers/staging/most/mostcore/Kconfig
···
 
 config MOSTCORE
 	tristate "MOST Core"
+	depends on HAS_DMA
 
 	---help---
 	  Say Y here if you want to enable MOST support.
···
 #define POLLJIFFIES_TESTWORK         100
 #define POLLJIFFIES_NORMALCHANNEL     10
 
+static int busreg_rc = -ENODEV; /* stores the result from bus registration */
+
 static int visorbus_uevent(struct device *xdev, struct kobj_uevent_env *env);
 static int visorbus_match(struct device *xdev, struct device_driver *xdrv);
 static void fix_vbus_dev_info(struct visor_device *visordev);
···
 {
 	int rc = 0;
 
+	if (busreg_rc < 0)
+		return -ENODEV; /*can't register on a nonexistent bus*/
+
 	drv->driver.name = drv->name;
 	drv->driver.bus = &visorbus_type;
 	drv->driver.probe = visordriver_probe_device;
···
 	if (rc < 0)
 		return rc;
 	rc = register_driver_attributes(drv);
+	if (rc < 0)
+		driver_unregister(&drv->driver);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(visorbus_register_visor_driver);
···
 static int
 create_bus_type(void)
 {
-	int rc = 0;
-
-	rc = bus_register(&visorbus_type);
-	return rc;
+	busreg_rc = bus_register(&visorbus_type);
+	return busreg_rc;
 }
 
 /** Remove the one-and-only one instance of the visor bus type (visorbus_type).
+11-7
drivers/staging/unisys/visornic/visornic_main.c
···
 	spin_lock_irqsave(&devdata->priv_lock, flags);
 	atomic_dec(&devdata->num_rcvbuf_in_iovm);
 
-	/* update rcv stats - call it with priv_lock held */
-	devdata->net_stats.rx_packets++;
-	devdata->net_stats.rx_bytes = skb->len;
-
 	/* set length to how much was ACTUALLY received -
 	 * NOTE: rcv_done_len includes actual length of data rcvd
 	 * including ethhdr
 	 */
 	skb->len = cmdrsp->net.rcv.rcv_done_len;
+
+	/* update rcv stats - call it with priv_lock held */
+	devdata->net_stats.rx_packets++;
+	devdata->net_stats.rx_bytes += skb->len;
 
 	/* test enabled while holding lock */
 	if (!(devdata->enabled && devdata->enab_dis_acked)) {
···
 			"%s debugfs_create_dir %s failed\n",
 			__func__, netdev->name);
 		err = -ENOMEM;
-		goto cleanup_xmit_cmdrsp;
+		goto cleanup_register_netdev;
 	}
 
 	dev_info(&dev->device, "%s success netdev=%s\n",
 		 __func__, netdev->name);
 	return 0;
+
+cleanup_register_netdev:
+	unregister_netdev(netdev);
 
 cleanup_napi_add:
 	del_timer_sync(&devdata->irq_poll_timer);
···
 	if (!dev_num_pool)
 		goto cleanup_workqueue;
 
-	visorbus_register_visor_driver(&visornic_driver);
-	return 0;
+	err = visorbus_register_visor_driver(&visornic_driver);
+	if (!err)
+		return 0;
 
 cleanup_workqueue:
 	if (visornic_timeout_reset_workqueue) {
+3-2
drivers/target/iscsi/iscsi_target_parameters.c
···
 			      TYPERANGE_UTF8, USE_INITIAL_ONLY);
 	if (!param)
 		goto out;
+
 	/*
 	 * Extra parameters for ISER from RFC-5046
 	 */
···
 	} else if (!strcmp(param->name, SESSIONTYPE)) {
 		SET_PSTATE_NEGOTIATE(param);
 	} else if (!strcmp(param->name, IFMARKER)) {
-		SET_PSTATE_NEGOTIATE(param);
+		SET_PSTATE_REJECT(param);
 	} else if (!strcmp(param->name, OFMARKER)) {
-		SET_PSTATE_NEGOTIATE(param);
+		SET_PSTATE_REJECT(param);
 	} else if (!strcmp(param->name, IFMARKINT)) {
 		SET_PSTATE_REJECT(param);
 	} else if (!strcmp(param->name, OFMARKINT)) {
+26-19
drivers/target/target_core_device.c
···
 	struct se_session *se_sess = se_cmd->se_sess;
 	struct se_node_acl *nacl = se_sess->se_node_acl;
 	struct se_dev_entry *deve;
+	sense_reason_t ret = TCM_NO_SENSE;
 
 	rcu_read_lock();
 	deve = target_nacl_find_deve(nacl, unpacked_lun);
 	if (deve) {
 		atomic_long_inc(&deve->total_cmds);
-
-		if ((se_cmd->data_direction == DMA_TO_DEVICE) &&
-		    (deve->lun_flags & TRANSPORT_LUNFLAGS_READ_ONLY)) {
-			pr_err("TARGET_CORE[%s]: Detected WRITE_PROTECTED LUN"
-				" Access for 0x%08llx\n",
-				se_cmd->se_tfo->get_fabric_name(),
-				unpacked_lun);
-			rcu_read_unlock();
-			return TCM_WRITE_PROTECTED;
-		}
 
 		if (se_cmd->data_direction == DMA_TO_DEVICE)
 			atomic_long_add(se_cmd->data_length,
···
 
 		percpu_ref_get(&se_lun->lun_ref);
 		se_cmd->lun_ref_active = true;
+
+		if ((se_cmd->data_direction == DMA_TO_DEVICE) &&
+		    (deve->lun_flags & TRANSPORT_LUNFLAGS_READ_ONLY)) {
+			pr_err("TARGET_CORE[%s]: Detected WRITE_PROTECTED LUN"
+				" Access for 0x%08llx\n",
+				se_cmd->se_tfo->get_fabric_name(),
+				unpacked_lun);
+			rcu_read_unlock();
+			ret = TCM_WRITE_PROTECTED;
+			goto ref_dev;
+		}
 	}
 	rcu_read_unlock();
 
···
 			unpacked_lun);
 		return TCM_NON_EXISTENT_LUN;
 	}
-	/*
-	 * Force WRITE PROTECT for virtual LUN 0
-	 */
-	if ((se_cmd->data_direction != DMA_FROM_DEVICE) &&
-	    (se_cmd->data_direction != DMA_NONE))
-		return TCM_WRITE_PROTECTED;
 
 	se_lun = se_sess->se_tpg->tpg_virt_lun0;
 	se_cmd->se_lun = se_sess->se_tpg->tpg_virt_lun0;
···
 
 	percpu_ref_get(&se_lun->lun_ref);
 	se_cmd->lun_ref_active = true;
+
+	/*
+	 * Force WRITE PROTECT for virtual LUN 0
+	 */
+	if ((se_cmd->data_direction != DMA_FROM_DEVICE) &&
+	    (se_cmd->data_direction != DMA_NONE)) {
+		ret = TCM_WRITE_PROTECTED;
+		goto ref_dev;
+	}
 	}
 	/*
 	 * RCU reference protected by percpu se_lun->lun_ref taken above that
···
 	 * pointer can be kfree_rcu() by the final se_lun->lun_group put via
 	 * target_core_fabric_configfs.c:target_fabric_port_release
 	 */
+ref_dev:
 	se_cmd->se_dev = rcu_dereference_raw(se_lun->lun_se_dev);
 	atomic_long_inc(&se_cmd->se_dev->num_cmds);
 
···
 		atomic_long_add(se_cmd->data_length,
 				&se_cmd->se_dev->read_bytes);
 
-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL(transport_lookup_cmd_lun);
···
 
 	hlist_del_rcu(&orig->link);
 	clear_bit(DEF_PR_REG_ACTIVE, &orig->deve_flags);
-	rcu_assign_pointer(orig->se_lun, NULL);
-	rcu_assign_pointer(orig->se_lun_acl, NULL);
 	orig->lun_flags = 0;
 	orig->creation_time = 0;
 	orig->attach_count--;
···
 	 */
 	kref_put(&orig->pr_kref, target_pr_kref_release);
 	wait_for_completion(&orig->pr_comp);
+
+	rcu_assign_pointer(orig->se_lun, NULL);
+	rcu_assign_pointer(orig->se_lun_acl, NULL);
 
 	kfree_rcu(orig, rcu_head);
 
···
 		switch_on_temp = 0;
 
 	temperature_threshold = control_temp - switch_on_temp;
+	/*
+	 * estimate_pid_constants() tries to find appropriate default
+	 * values for thermal zones that don't provide them. If a
+	 * system integrator has configured a thermal zone with two
+	 * passive trip points at the same temperature, that person
+	 * hasn't put any effort to set up the thermal zone properly
+	 * so just give up.
+	 */
+	if (!temperature_threshold)
+		return;
 
 	if (!tz->tzp->k_po || force)
 		tz->tzp->k_po = int_to_frac(sustainable_power) /
···
 	usb_del_gadget_udc(&u3d->gadget);
 
 	/* free memory allocated in probe */
-	if (u3d->trb_pool)
-		dma_pool_destroy(u3d->trb_pool);
+	dma_pool_destroy(u3d->trb_pool);
 
 	if (u3d->ep_context)
 		dma_free_coherent(&dev->dev, u3d->ep_context_size,
+1-2
drivers/usb/gadget/udc/mv_udc_core.c
···
 	}
 
 	/* free memory allocated in probe */
-	if (udc->dtd_pool)
-		dma_pool_destroy(udc->dtd_pool);
+	dma_pool_destroy(udc->dtd_pool);
 
 	if (udc->ep_dqh)
 		dma_free_coherent(&pdev->dev, udc->ep_dqh_size,
+8-9
drivers/usb/host/xhci-mem.c
···
 	 * use Event Data TRBs, and we don't chain in a link TRB on short
 	 * transfers, we're basically dividing by 1.
 	 *
-	 * xHCI 1.0 specification indicates that the Average TRB Length should
-	 * be set to 8 for control endpoints.
+	 * xHCI 1.0 and 1.1 specification indicates that the Average TRB Length
+	 * should be set to 8 for control endpoints.
 	 */
-	if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version == 0x100)
+	if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version >= 0x100)
 		ep_ctx->tx_info |= cpu_to_le32(AVG_TRB_LENGTH_FOR_EP(8));
 	else
 		ep_ctx->tx_info |=
···
 	int size;
 	int i, j, num_ports;
 
-	if (timer_pending(&xhci->cmd_timer))
-		del_timer_sync(&xhci->cmd_timer);
+	del_timer_sync(&xhci->cmd_timer);
 
 	/* Free the Event Ring Segment Table and the actual Event Ring */
 	size = sizeof(struct xhci_erst_entry)*(xhci->erst.num_entries);
···
 
 	INIT_LIST_HEAD(&xhci->cmd_list);
 
+	/* init command timeout timer */
+	setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout,
+		    (unsigned long)xhci);
+
 	page_size = readl(&xhci->op_regs->page_size);
 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
 			"Supported page size register = 0x%x", page_size);
···
 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
 			"Wrote ERST address to ir_set 0.");
 	xhci_print_ir_set(xhci, 0);
-
-	/* init command timeout timer */
-	setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout,
-		    (unsigned long)xhci);
 
 	/*
 	 * XXX: Might need to set the Interrupter Moderation Register to
+45-45
drivers/usb/host/xhci-pci.c
···
 		"QUIRK: Resetting on resume");
 }

-/*
- * In some Intel xHCI controllers, in order to get D3 working,
- * through a vendor specific SSIC CONFIG register at offset 0x883c,
- * SSIC PORT need to be marked as "unused" before putting xHCI
- * into D3. After D3 exit, the SSIC port need to be marked as "used".
- * Without this change, xHCI might not enter D3 state.
- * Make sure PME works on some Intel xHCI controllers by writing 1 to clear
- * the Internal PME flag bit in vendor specific PMCTRL register at offset 0x80a4
- */
-static void xhci_pme_quirk(struct usb_hcd *hcd, bool suspend)
-{
-	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
-	struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
-	u32 val;
-	void __iomem *reg;
-
-	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
-		 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) {
-
-		reg = (void __iomem *) xhci->cap_regs + PORT2_SSIC_CONFIG_REG2;
-
-		/* Notify SSIC that SSIC profile programming is not done */
-		val = readl(reg) & ~PROG_DONE;
-		writel(val, reg);
-
-		/* Mark SSIC port as unused(suspend) or used(resume) */
-		val = readl(reg);
-		if (suspend)
-			val |= SSIC_PORT_UNUSED;
-		else
-			val &= ~SSIC_PORT_UNUSED;
-		writel(val, reg);
-
-		/* Notify SSIC that SSIC profile programming is done */
-		val = readl(reg) | PROG_DONE;
-		writel(val, reg);
-		readl(reg);
-	}
-
-	reg = (void __iomem *) xhci->cap_regs + 0x80a4;
-	val = readl(reg);
-	writel(val | BIT(28), reg);
-	readl(reg);
-}
-
 #ifdef CONFIG_ACPI
 static void xhci_pme_acpi_rtd3_enable(struct pci_dev *dev)
 {
···
 }

 #ifdef CONFIG_PM
+/*
+ * In some Intel xHCI controllers, in order to get D3 working,
+ * through a vendor specific SSIC CONFIG register at offset 0x883c,
+ * SSIC PORT need to be marked as "unused" before putting xHCI
+ * into D3. After D3 exit, the SSIC port need to be marked as "used".
+ * Without this change, xHCI might not enter D3 state.
+ * Make sure PME works on some Intel xHCI controllers by writing 1 to clear
+ * the Internal PME flag bit in vendor specific PMCTRL register at offset 0x80a4
+ */
+static void xhci_pme_quirk(struct usb_hcd *hcd, bool suspend)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	struct pci_dev *pdev = to_pci_dev(hcd->self.controller);
+	u32 val;
+	void __iomem *reg;
+
+	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+		 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) {
+
+		reg = (void __iomem *) xhci->cap_regs + PORT2_SSIC_CONFIG_REG2;
+
+		/* Notify SSIC that SSIC profile programming is not done */
+		val = readl(reg) & ~PROG_DONE;
+		writel(val, reg);
+
+		/* Mark SSIC port as unused(suspend) or used(resume) */
+		val = readl(reg);
+		if (suspend)
+			val |= SSIC_PORT_UNUSED;
+		else
+			val &= ~SSIC_PORT_UNUSED;
+		writel(val, reg);
+
+		/* Notify SSIC that SSIC profile programming is done */
+		val = readl(reg) | PROG_DONE;
+		writel(val, reg);
+		readl(reg);
+	}
+
+	reg = (void __iomem *) xhci->cap_regs + 0x80a4;
+	val = readl(reg);
+	writel(val | BIT(28), reg);
+	readl(reg);
+}
+
 static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup)
 {
 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+11-2
drivers/usb/host/xhci-ring.c
···
 	ret = xhci_handshake(&xhci->op_regs->cmd_ring,
 			CMD_RING_RUNNING, 0, 5 * 1000 * 1000);
 	if (ret < 0) {
+		/* we are about to kill xhci, give it one more chance */
+		xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
+			      &xhci->op_regs->cmd_ring);
+		udelay(1000);
+		ret = xhci_handshake(&xhci->op_regs->cmd_ring,
+				CMD_RING_RUNNING, 0, 3 * 1000 * 1000);
+		if (ret == 0)
+			return 0;
+
 		xhci_err(xhci, "Stopped the command ring failed, "
 				"maybe the host is dead\n");
 		xhci->xhc_state |= XHCI_STATE_DYING;
···
 	if (start_cycle == 0)
 		field |= 0x1;

-	/* xHCI 1.0 6.4.1.2.1: Transfer Type field */
-	if (xhci->hci_version == 0x100) {
+	/* xHCI 1.0/1.1 6.4.1.2.1: Transfer Type field */
+	if (xhci->hci_version >= 0x100) {
 		if (urb->transfer_buffer_length > 0) {
 			if (setup->bRequestType & USB_DIR_IN)
 				field |= TRB_TX_TYPE(TRB_DATA_IN);
+11-13
drivers/usb/host/xhci.c
···
 			"waited %u microseconds.\n",
 			XHCI_MAX_HALT_USEC);
 	if (!ret)
-		xhci->xhc_state &= ~XHCI_STATE_HALTED;
+		xhci->xhc_state &= ~(XHCI_STATE_HALTED | XHCI_STATE_DYING);
+
 	return ret;
 }
···
 }
 EXPORT_SYMBOL_GPL(xhci_run);

-static void xhci_only_stop_hcd(struct usb_hcd *hcd)
-{
-	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
-
-	spin_lock_irq(&xhci->lock);
-	xhci_halt(xhci);
-	spin_unlock_irq(&xhci->lock);
-}
-
 /*
  * Stop xHCI driver.
  *
···
 	u32 temp;
 	struct xhci_hcd *xhci = hcd_to_xhci(hcd);

-	if (!usb_hcd_is_primary_hcd(hcd)) {
-		xhci_only_stop_hcd(xhci->shared_hcd);
+	if (xhci->xhc_state & XHCI_STATE_HALTED)
 		return;
-	}

+	mutex_lock(&xhci->mutex);
 	spin_lock_irq(&xhci->lock);
+	xhci->xhc_state |= XHCI_STATE_HALTED;
+	xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
+
 	/* Make sure the xHC is halted for a USB3 roothub
 	 * (xhci_stop() could be called as part of failed init).
 	 */
···
 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
 			"xhci_stop completed - status = %x",
 			readl(&xhci->op_regs->status));
+	mutex_unlock(&xhci->mutex);
 }

 /*
···
 	struct xhci_command *command = NULL;

 	mutex_lock(&xhci->mutex);
+
+	if (xhci->xhc_state)	/* dying or halted */
+		goto out;

 	if (!udev->slot_id) {
 		xhci_dbg_trace(xhci, trace_xhci_dbg_address,
···
 static int  whiteheat_firmware_attach(struct usb_serial *serial);

 /* function prototypes for the Connect Tech WhiteHEAT serial converter */
+static int  whiteheat_probe(struct usb_serial *serial,
+				const struct usb_device_id *id);
 static int  whiteheat_attach(struct usb_serial *serial);
 static void whiteheat_release(struct usb_serial *serial);
 static int  whiteheat_port_probe(struct usb_serial_port *port);
···
 	.description =		"Connect Tech - WhiteHEAT",
 	.id_table =		id_table_std,
 	.num_ports =		4,
+	.probe =		whiteheat_probe,
 	.attach =		whiteheat_attach,
 	.release =		whiteheat_release,
 	.port_probe =		whiteheat_port_probe,
···
 /*****************************************************************************
  * Connect Tech's White Heat serial driver functions
  *****************************************************************************/
+
+static int whiteheat_probe(struct usb_serial *serial,
+				const struct usb_device_id *id)
+{
+	struct usb_host_interface *iface_desc;
+	struct usb_endpoint_descriptor *endpoint;
+	size_t num_bulk_in = 0;
+	size_t num_bulk_out = 0;
+	size_t min_num_bulk;
+	unsigned int i;
+
+	iface_desc = serial->interface->cur_altsetting;
+
+	for (i = 0; i < iface_desc->desc.bNumEndpoints; i++) {
+		endpoint = &iface_desc->endpoint[i].desc;
+		if (usb_endpoint_is_bulk_in(endpoint))
+			++num_bulk_in;
+		if (usb_endpoint_is_bulk_out(endpoint))
+			++num_bulk_out;
+	}
+
+	min_num_bulk = COMMAND_PORT + 1;
+	if (num_bulk_in < min_num_bulk || num_bulk_out < min_num_bulk)
+		return -ENODEV;
+
+	return 0;
+}
+
 static int whiteheat_attach(struct usb_serial *serial)
 {
 	struct usb_serial_port *command_port;
+2-1
drivers/watchdog/Kconfig
···
 	tristate "Intel TCO Timer/Watchdog"
 	depends on (X86 || IA64) && PCI
 	select WATCHDOG_CORE
+	depends on I2C || I2C=n
 	select LPC_ICH if !EXPERT
-	select I2C_I801 if !EXPERT
+	select I2C_I801 if !EXPERT && I2C
 	---help---
 	  Hardware driver for the intel TCO timer based watchdog devices.
 	  These drivers are included in the Intel 82801 I/O Controller
+8-2
drivers/watchdog/bcm2835_wdt.c
···
 #define PM_RSTC_WRCFG_FULL_RESET	0x00000020
 #define PM_RSTC_RESET			0x00000102

+/*
+ * The Raspberry Pi firmware uses the RSTS register to know which partition
+ * to boot from. The partition value is spread into bits 0, 2, 4, 6, 8, 10.
+ * Partition 63 is a special partition used by the firmware to indicate halt.
+ */
+#define PM_RSTS_RASPBERRYPI_HALT	0x555
+
 #define SECS_TO_WDOG_TICKS(x) ((x) << 16)
 #define WDOG_TICKS_TO_SECS(x) ((x) >> 16)
···
 	 * hard reset.
 	 */
 	val = readl_relaxed(wdt->base + PM_RSTS);
-	val &= PM_RSTC_WRCFG_CLR;
-	val |= PM_PASSWORD | PM_RSTS_HADWRH_SET;
+	val |= PM_PASSWORD | PM_RSTS_RASPBERRYPI_HALT;
	writel_relaxed(val, wdt->base + PM_RSTS);

 	/* Continue with normal reset mechanism */
···
 	return 0;
 }

+/* Server has provided av pairs/target info in the type 2 challenge
+ * packet and we have plucked it and stored within smb session.
+ * We parse that blob here to find the server given timestamp
+ * as part of ntlmv2 authentication (or local current time as
+ * default in case of failure)
+ */
+static __le64
+find_timestamp(struct cifs_ses *ses)
+{
+	unsigned int attrsize;
+	unsigned int type;
+	unsigned int onesize = sizeof(struct ntlmssp2_name);
+	unsigned char *blobptr;
+	unsigned char *blobend;
+	struct ntlmssp2_name *attrptr;
+
+	if (!ses->auth_key.len || !ses->auth_key.response)
+		return 0;
+
+	blobptr = ses->auth_key.response;
+	blobend = blobptr + ses->auth_key.len;
+
+	while (blobptr + onesize < blobend) {
+		attrptr = (struct ntlmssp2_name *) blobptr;
+		type = le16_to_cpu(attrptr->type);
+		if (type == NTLMSSP_AV_EOL)
+			break;
+		blobptr += 2; /* advance attr type */
+		attrsize = le16_to_cpu(attrptr->length);
+		blobptr += 2; /* advance attr size */
+		if (blobptr + attrsize > blobend)
+			break;
+		if (type == NTLMSSP_AV_TIMESTAMP) {
+			if (attrsize == sizeof(u64))
+				return *((__le64 *)blobptr);
+		}
+		blobptr += attrsize; /* advance attr value */
+	}
+
+	return cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME));
+}
+
 static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash,
 			const struct nls_table *nls_cp)
 {
···
 	struct ntlmv2_resp *ntlmv2;
 	char ntlmv2_hash[16];
 	unsigned char *tiblob = NULL; /* target info blob */
+	__le64 rsp_timestamp;

 	if (ses->server->negflavor == CIFS_NEGFLAVOR_EXTENDED) {
 		if (!ses->domainName) {
···
 		}
 	}

+	/* Must be within 5 minutes of the server (or in range +/-2h
+	 * in case of Mac OS X), so simply carry over server timestamp
+	 * (as Windows 7 does)
+	 */
+	rsp_timestamp = find_timestamp(ses);
+
 	baselen = CIFS_SESS_KEY_SIZE + sizeof(struct ntlmv2_resp);
 	tilen = ses->auth_key.len;
 	tiblob = ses->auth_key.response;
···
 		(ses->auth_key.response + CIFS_SESS_KEY_SIZE);
 	ntlmv2->blob_signature = cpu_to_le32(0x00000101);
 	ntlmv2->reserved = 0;
-	/* Must be within 5 minutes of the server */
-	ntlmv2->time = cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME));
+	ntlmv2->time = rsp_timestamp;
+
 	get_random_bytes(&ntlmv2->client_chal, sizeof(ntlmv2->client_chal));
 	ntlmv2->reserved2 = 0;
···
 #define DF_EMULATED_VPD_UNIT_SERIAL		0x00000004
 #define DF_USING_UDEV_PATH			0x00000008
 #define DF_USING_ALIAS				0x00000010
+#define DF_READ_ONLY				0x00000020
 	/* Physical device queue depth */
 	u32		queue_depth;
 	/* Used for SPC-2 reservations enforce of ISIDs */
-2
include/uapi/linux/userfaultfd.h
···

 #include <linux/types.h>

-#include <linux/compiler.h>
-
 #define UFFD_API ((__u64)0xAA)
 /*
  * After implementing the respective features it will become:
+7-7
ipc/msg.c
···
 		return retval;
 	}

-	/* ipc_addid() locks msq upon success. */
-	id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
-	if (id < 0) {
-		ipc_rcu_putref(msq, msg_rcu_free);
-		return id;
-	}
-
 	msq->q_stime = msq->q_rtime = 0;
 	msq->q_ctime = get_seconds();
 	msq->q_cbytes = msq->q_qnum = 0;
···
 	INIT_LIST_HEAD(&msq->q_messages);
 	INIT_LIST_HEAD(&msq->q_receivers);
 	INIT_LIST_HEAD(&msq->q_senders);
+
+	/* ipc_addid() locks msq upon success. */
+	id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
+	if (id < 0) {
+		ipc_rcu_putref(msq, msg_rcu_free);
+		return id;
+	}

 	ipc_unlock_object(&msq->q_perm);
 	rcu_read_unlock();
+7-6
ipc/shm.c
···
 	if (IS_ERR(file))
 		goto no_file;

-	id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
-	if (id < 0) {
-		error = id;
-		goto no_id;
-	}
-
 	shp->shm_cprid = task_tgid_vnr(current);
 	shp->shm_lprid = 0;
 	shp->shm_atim = shp->shm_dtim = 0;
···
 	shp->shm_nattch = 0;
 	shp->shm_file = file;
 	shp->shm_creator = current;
+
+	id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
+	if (id < 0) {
+		error = id;
+		goto no_id;
+	}
+
 	list_add(&shp->shm_clist, &current->sysvshm.shm_clist);

 	/*
···
 		PERF_EVENT_STATE_INACTIVE;
 }

-/*
- * Called at perf_event creation and when events are attached/detached from a
- * group.
- */
-static void perf_event__read_size(struct perf_event *event)
+static void __perf_event_read_size(struct perf_event *event, int nr_siblings)
 {
 	int entry = sizeof(u64); /* value */
 	int size = 0;
···
 		entry += sizeof(u64);

 	if (event->attr.read_format & PERF_FORMAT_GROUP) {
-		nr += event->group_leader->nr_siblings;
+		nr += nr_siblings;
 		size += sizeof(u64);
 	}
···
 	event->read_size = size;
 }

-static void perf_event__header_size(struct perf_event *event)
+static void __perf_event_header_size(struct perf_event *event, u64 sample_type)
 {
 	struct perf_sample_data *data;
-	u64 sample_type = event->attr.sample_type;
 	u16 size = 0;
-
-	perf_event__read_size(event);

 	if (sample_type & PERF_SAMPLE_IP)
 		size += sizeof(data->ip);
···
 		size += sizeof(data->txn);

 	event->header_size = size;
+}
+
+/*
+ * Called at perf_event creation and when events are attached/detached from a
+ * group.
+ */
+static void perf_event__header_size(struct perf_event *event)
+{
+	__perf_event_read_size(event,
+			       event->group_leader->nr_siblings);
+	__perf_event_header_size(event, event->attr.sample_type);
 }

 static void perf_event__id_header_size(struct perf_event *event)
···
 		size += sizeof(data->cpu_entry);

 	event->id_header_size = size;
+}
+
+static bool perf_event_validate_size(struct perf_event *event)
+{
+	/*
+	 * The values computed here will be over-written when we actually
+	 * attach the event.
+	 */
+	__perf_event_read_size(event, event->group_leader->nr_siblings + 1);
+	__perf_event_header_size(event, event->attr.sample_type & ~PERF_SAMPLE_READ);
+	perf_event__id_header_size(event);
+
+	/*
+	 * Sum the lot; should not exceed the 64k limit we have on records.
+	 * Conservative limit to allow for callchains and other variable fields.
+	 */
+	if (event->read_size + event->header_size +
+	    event->id_header_size + sizeof(struct perf_event_header) >= 16*1024)
+		return false;
+
+	return true;
 }

 static void perf_group_attach(struct perf_event *event)
···

 	if (move_group) {
 		gctx = group_leader->ctx;
+		mutex_lock_double(&gctx->mutex, &ctx->mutex);
+	} else {
+		mutex_lock(&ctx->mutex);
+	}

+	if (!perf_event_validate_size(event)) {
+		err = -E2BIG;
+		goto err_locked;
+	}
+
+	/*
+	 * Must be under the same ctx::mutex as perf_install_in_context(),
+	 * because we need to serialize with concurrent event creation.
+	 */
+	if (!exclusive_event_installable(event, ctx)) {
+		/* exclusive and group stuff are assumed mutually exclusive */
+		WARN_ON_ONCE(move_group);
+
+		err = -EBUSY;
+		goto err_locked;
+	}
+
+	WARN_ON_ONCE(ctx->parent_ctx);
+
+	if (move_group) {
 		/*
 		 * See perf_event_ctx_lock() for comments on the details
 		 * of swizzling perf_event::ctx.
 		 */
-		mutex_lock_double(&gctx->mutex, &ctx->mutex);
-
 		perf_remove_from_context(group_leader, false);

 		list_for_each_entry(sibling, &group_leader->sibling_list,
···
 			perf_remove_from_context(sibling, false);
 			put_ctx(gctx);
 		}
-	} else {
-		mutex_lock(&ctx->mutex);
-	}

-	WARN_ON_ONCE(ctx->parent_ctx);
-
-	if (move_group) {
 		/*
 		 * Wait for everybody to stop referencing the events through
 		 * the old lists, before installing it on new lists.
···
 		perf_event__state_init(group_leader);
 		perf_install_in_context(ctx, group_leader, group_leader->cpu);
 		get_ctx(ctx);
+
+		/*
+		 * Now that all events are installed in @ctx, nothing
+		 * references @gctx anymore, so drop the last reference we have
+		 * on it.
+		 */
+		put_ctx(gctx);
 	}

-	if (!exclusive_event_installable(event, ctx)) {
-		err = -EBUSY;
-		mutex_unlock(&ctx->mutex);
-		fput(event_file);
-		goto err_context;
-	}
+	/*
+	 * Precalculate sample_data sizes; do while holding ctx::mutex such
+	 * that we're serialized against further additions and before
+	 * perf_install_in_context() which is the point the event is active and
+	 * can use these values.
+	 */
+	perf_event__header_size(event);
+	perf_event__id_header_size(event);

 	perf_install_in_context(ctx, event, event->cpu);
 	perf_unpin_context(ctx);

-	if (move_group) {
+	if (move_group)
 		mutex_unlock(&gctx->mutex);
-		put_ctx(gctx);
-	}
 	mutex_unlock(&ctx->mutex);

 	put_online_cpus();
···
 	mutex_unlock(&current->perf_event_mutex);

 	/*
-	 * Precalculate sample_data sizes
-	 */
-	perf_event__header_size(event);
-	perf_event__id_header_size(event);
-
-	/*
 	 * Drop the reference on the group_event after placing the
 	 * new event on the sibling_list. This ensures destruction
 	 * of the group leader will find the pointer to itself in
···
 	fd_install(event_fd, event_file);
 	return event_fd;

+err_locked:
+	if (move_group)
+		mutex_unlock(&gctx->mutex);
+	mutex_unlock(&ctx->mutex);
+/* err_file: */
+	fput(event_file);
 err_context:
 	perf_unpin_context(ctx);
 	put_ctx(ctx);
···
 	idle->state = TASK_RUNNING;
 	idle->se.exec_start = sched_clock();

-	do_set_cpus_allowed(idle, cpumask_of(cpu));
+#ifdef CONFIG_SMP
+	/*
+	 * It's possible that init_idle() gets called multiple times on a task,
+	 * in that case do_set_cpus_allowed() will not do the right thing.
+	 *
+	 * And since this is boot we can forgo the serialization.
+	 */
+	set_cpus_allowed_common(idle, cpumask_of(cpu));
+#endif
 	/*
 	 * We're having a chicken and egg problem, even though we are
 	 * holding rq->lock, the cpu isn't yet set to this cpu so the
···

 	rq->curr = rq->idle = idle;
 	idle->on_rq = TASK_ON_RQ_QUEUED;
-#if defined(CONFIG_SMP)
+#ifdef CONFIG_SMP
 	idle->on_cpu = 1;
 #endif
 	raw_spin_unlock(&rq->lock);
···
 	idle->sched_class = &idle_sched_class;
 	ftrace_graph_init_idle_task(idle, cpu);
 	vtime_init_idle(idle, cpu);
-#if defined(CONFIG_SMP)
+#ifdef CONFIG_SMP
 	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
 #endif
 }
+1-1
mm/dmapool.c
···
 	list_for_each_entry(page, &pool->page_list, page_list) {
 		if (dma < page->dma)
 			continue;
-		if (dma < (page->dma + pool->allocation))
+		if ((dma - page->dma) < pool->allocation)
 			return page;
 	}
 	return NULL;
+8
mm/hugetlb.c
···
 			continue;

 		/*
+		 * Shared VMAs have their own reserves and do not affect
+		 * MAP_PRIVATE accounting but it is possible that a shared
+		 * VMA is using the same page so check and skip such VMAs.
+		 */
+		if (iter_vma->vm_flags & VM_MAYSHARE)
+			continue;
+
+		/*
 		 * Unmap the page from other VMAs without their own reserves.
 		 * They get marked to be SIGKILLed if they fault in these
 		 * areas. This is because a future no-page fault on this VMA
+18-13
mm/memcontrol.c
···
 }

 /*
+ * Return page count for single (non recursive) @memcg.
+ *
  * Implementation Note: reading percpu statistics for memcg.
  *
  * Both of vmstat[] and percpu_counter has threshold and do periodic
  * synchronization to implement "quick" read. There are trade-off between
  * reading cost and precision of value. Then, we may have a chance to implement
- * a periodic synchronizion of counter in memcg's counter.
+ * a periodic synchronization of counter in memcg's counter.
  *
  * But this _read() function is used for user interface now. The user accounts
  * memory usage by memory cgroup and he _always_ requires exact value because
···
  *
  * If there are kernel internal actions which can make use of some not-exact
  * value, and reading all cpu value can be performance bottleneck in some
- * common workload, threashold and synchonization as vmstat[] should be
+ * common workload, threshold and synchronization as vmstat[] should be
  * implemented.
  */
-static long mem_cgroup_read_stat(struct mem_cgroup *memcg,
-				 enum mem_cgroup_stat_index idx)
+static unsigned long
+mem_cgroup_read_stat(struct mem_cgroup *memcg, enum mem_cgroup_stat_index idx)
 {
 	long val = 0;
 	int cpu;

+	/* Per-cpu values can be negative, use a signed accumulator */
 	for_each_possible_cpu(cpu)
 		val += per_cpu(memcg->stat->count[idx], cpu);
+	/*
+	 * Summing races with updates, so val may be negative.  Avoid exposing
+	 * transient negative values.
+	 */
+	if (val < 0)
+		val = 0;
 	return val;
 }
···
 	for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) {
 		if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account)
 			continue;
-		pr_cont(" %s:%ldKB", mem_cgroup_stat_names[i],
+		pr_cont(" %s:%luKB", mem_cgroup_stat_names[i],
 			K(mem_cgroup_read_stat(iter, i)));
 	}
···
 					     enum mem_cgroup_stat_index idx)
 {
 	struct mem_cgroup *iter;
-	long val = 0;
+	unsigned long val = 0;

-	/* Per-cpu values can be negative, use a signed accumulator */
 	for_each_mem_cgroup_tree(iter, memcg)
 		val += mem_cgroup_read_stat(iter, idx);

-	if (val < 0) /* race ? */
-		val = 0;
 	return val;
 }
···
 	for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) {
 		if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account)
 			continue;
-		seq_printf(m, "%s %ld\n", mem_cgroup_stat_names[i],
+		seq_printf(m, "%s %lu\n", mem_cgroup_stat_names[i],
 			   mem_cgroup_read_stat(memcg, i) * PAGE_SIZE);
 	}
···
 		   (u64)memsw * PAGE_SIZE);

 	for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) {
-		long long val = 0;
+		unsigned long long val = 0;

 		if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account)
 			continue;
 		for_each_mem_cgroup_tree(mi, memcg)
 			val += mem_cgroup_read_stat(mi, i) * PAGE_SIZE;
-		seq_printf(m, "total_%s %lld\n", mem_cgroup_stat_names[i], val);
+		seq_printf(m, "total_%s %llu\n", mem_cgroup_stat_names[i], val);
 	}
···
 	if (memcg_wb_domain_init(memcg, GFP_KERNEL))
 		goto out_free_stat;

-	spin_lock_init(&memcg->pcp_counter_lock);
 	return memcg;

 out_free_stat:
+11-1
mm/migrate.c
···
 	if (PageSwapBacked(page))
 		SetPageSwapBacked(newpage);

+	/*
+	 * Indirectly called below, migrate_page_copy() copies PG_dirty and thus
+	 * needs newpage's memcg set to transfer memcg dirty page accounting.
+	 * So perform memcg migration in two steps:
+	 * 1. set newpage->mem_cgroup (here)
+	 * 2. clear page->mem_cgroup (below)
+	 */
+	set_page_memcg(newpage, page_memcg(page));
+
 	mapping = page_mapping(page);
 	if (!mapping)
 		rc = migrate_page(mapping, newpage, page, mode);
···
 		rc = fallback_migrate_page(mapping, newpage, page, mode);

 	if (rc != MIGRATEPAGE_SUCCESS) {
+		set_page_memcg(newpage, NULL);
 		newpage->mapping = NULL;
 	} else {
-		mem_cgroup_migrate(page, newpage, false);
+		set_page_memcg(page, NULL);
 		if (page_was_mapped)
 			remove_migration_ptes(page, newpage);
 		page->mapping = NULL;
+10-3
mm/slab.c
···
 		size += BYTES_PER_WORD;
 	}
 #if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
-	if (size >= kmalloc_size(INDEX_NODE + 1)
-	    && cachep->object_size > cache_line_size()
-	    && ALIGN(size, cachep->align) < PAGE_SIZE) {
+	/*
+	 * To activate debug pagealloc, off-slab management is necessary
+	 * requirement. In early phase of initialization, small sized slab
+	 * doesn't get initialized so it would not be possible. So, we need
+	 * to check size >= 256. It guarantees that all necessary small
+	 * sized slab is initialized in current slab initialization sequence.
+	 */
+	if (!slab_early_init && size >= kmalloc_size(INDEX_NODE) &&
+	    size >= 256 && cachep->object_size > cache_line_size() &&
+	    ALIGN(size, cachep->align) < PAGE_SIZE) {
 		cachep->obj_offset += PAGE_SIZE - ALIGN(size, cachep->align);
 		size = PAGE_SIZE;
 	}
···

 	fl6->flowi6_iif = LOOPBACK_IFINDEX;

-	if ((sk && sk->sk_bound_dev_if) || rt6_need_strict(&fl6->daddr))
+	if ((sk && sk->sk_bound_dev_if) || rt6_need_strict(&fl6->daddr) ||
+	    fl6->flowi6_oif)
 		flags |= RT6_LOOKUP_F_IFACE;

 	if (!ipv6_addr_any(&fl6->saddr))
+9-2
net/l2tp/l2tp_core.c
···
 	tunnel = container_of(work, struct l2tp_tunnel, del_work);
 	sk = l2tp_tunnel_sock_lookup(tunnel);
 	if (!sk)
-		return;
+		goto out;

 	sock = sk->sk_socket;
···
 	}

 	l2tp_tunnel_sock_put(sk);
+out:
+	l2tp_tunnel_dec_refcount(tunnel);
 }

 /* Create a socket for the tunnel, if one isn't set up by
···
  */
 int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
 {
+	l2tp_tunnel_inc_refcount(tunnel);
 	l2tp_tunnel_closeall(tunnel);
-	return (false == queue_work(l2tp_wq, &tunnel->del_work));
+	if (false == queue_work(l2tp_wq, &tunnel->del_work)) {
+		l2tp_tunnel_dec_refcount(tunnel);
+		return 1;
+	}
+	return 0;
 }
 EXPORT_SYMBOL_GPL(l2tp_tunnel_delete);
+11-9
net/sctp/associola.c
···
  * within this document.
  *
  * Our basic strategy is to round-robin transports in priorities
- * according to sctp_state_prio_map[] e.g., if no such
+ * according to sctp_trans_score() e.g., if no such
  * transport with state SCTP_ACTIVE exists, round-robin through
  * SCTP_UNKNOWN, etc. You get the picture.
  */
-static const u8 sctp_trans_state_to_prio_map[] = {
-	[SCTP_ACTIVE]	= 3,	/* best case */
-	[SCTP_UNKNOWN]	= 2,
-	[SCTP_PF]	= 1,
-	[SCTP_INACTIVE] = 0,	/* worst case */
-};
-
 static u8 sctp_trans_score(const struct sctp_transport *trans)
 {
-	return sctp_trans_state_to_prio_map[trans->state];
+	switch (trans->state) {
+	case SCTP_ACTIVE:
+		return 3;	/* best case */
+	case SCTP_UNKNOWN:
+		return 2;
+	case SCTP_PF:
+		return 1;
+	default: /* case SCTP_INACTIVE */
+		return 0;	/* worst case */
+	}
 }

 static struct sctp_transport *sctp_trans_elect_tie(struct sctp_transport *trans1,
+24-20
net/sctp/sm_sideeffect.c
···
 	int error;
 	struct sctp_transport *transport = (struct sctp_transport *) peer;
 	struct sctp_association *asoc = transport->asoc;
-	struct net *net = sock_net(asoc->base.sk);
+	struct sock *sk = asoc->base.sk;
+	struct net *net = sock_net(sk);

 	/* Check whether a task is in the sock. */

-	bh_lock_sock(asoc->base.sk);
-	if (sock_owned_by_user(asoc->base.sk)) {
+	bh_lock_sock(sk);
+	if (sock_owned_by_user(sk)) {
 		pr_debug("%s: sock is busy\n", __func__);

 		/* Try again later. */
···
 			   transport, GFP_ATOMIC);

 	if (error)
-		asoc->base.sk->sk_err = -error;
+		sk->sk_err = -error;

 out_unlock:
-	bh_unlock_sock(asoc->base.sk);
+	bh_unlock_sock(sk);
 	sctp_transport_put(transport);
 }
···
 static void sctp_generate_timeout_event(struct sctp_association *asoc,
 					sctp_event_timeout_t timeout_type)
 {
-	struct net *net = sock_net(asoc->base.sk);
+	struct sock *sk = asoc->base.sk;
+	struct net *net = sock_net(sk);
 	int error = 0;

-	bh_lock_sock(asoc->base.sk);
-	if (sock_owned_by_user(asoc->base.sk)) {
+	bh_lock_sock(sk);
+	if (sock_owned_by_user(sk)) {
 		pr_debug("%s: sock is busy: timer %d\n", __func__,
 			 timeout_type);

···
 			   (void *)timeout_type, GFP_ATOMIC);

 	if (error)
-		asoc->base.sk->sk_err = -error;
+		sk->sk_err = -error;

 out_unlock:
-	bh_unlock_sock(asoc->base.sk);
+	bh_unlock_sock(sk);
 	sctp_association_put(asoc);
 }
···
 	int error = 0;
 	struct sctp_transport *transport = (struct sctp_transport *) data;
 	struct sctp_association *asoc = transport->asoc;
-	struct net *net = sock_net(asoc->base.sk);
+	struct sock *sk = asoc->base.sk;
+	struct net *net = sock_net(sk);

-	bh_lock_sock(asoc->base.sk);
-	if (sock_owned_by_user(asoc->base.sk)) {
+	bh_lock_sock(sk);
+	if (sock_owned_by_user(sk)) {
 		pr_debug("%s: sock is busy\n", __func__);

 		/* Try again later. */
···
 			   asoc->state, asoc->ep, asoc,
 			   transport, GFP_ATOMIC);

-	 if (error)
-		 asoc->base.sk->sk_err = -error;
+	if (error)
+		sk->sk_err = -error;

 out_unlock:
-	bh_unlock_sock(asoc->base.sk);
+	bh_unlock_sock(sk);
 	sctp_transport_put(transport);
 }
···
 {
 	struct sctp_transport *transport = (struct sctp_transport *) data;
 	struct sctp_association *asoc = transport->asoc;
-	struct net *net = sock_net(asoc->base.sk);
+	struct sock *sk = asoc->base.sk;
+	struct net *net = sock_net(sk);

-	bh_lock_sock(asoc->base.sk);
-	if (sock_owned_by_user(asoc->base.sk)) {
+	bh_lock_sock(sk);
+	if (sock_owned_by_user(sk)) {
 		pr_debug("%s: sock is busy\n", __func__);

 		/* Try again later. */
···
 		   asoc->state, asoc->ep, asoc, transport, GFP_ATOMIC);

 out_unlock:
-	bh_unlock_sock(asoc->base.sk);
+	bh_unlock_sock(sk);
 	sctp_association_put(asoc);
 }
-19
net/sunrpc/xprtrdma/fmr_ops.c
···3939fmr_op_open(struct rpcrdma_ia *ia, struct rpcrdma_ep *ep,4040 struct rpcrdma_create_data_internal *cdata)4141{4242- struct ib_device_attr *devattr = &ia->ri_devattr;4343- struct ib_mr *mr;4444-4545- /* Obtain an lkey to use for the regbufs, which are4646- * protected from remote access.4747- */4848- if (devattr->device_cap_flags & IB_DEVICE_LOCAL_DMA_LKEY) {4949- ia->ri_dma_lkey = ia->ri_device->local_dma_lkey;5050- } else {5151- mr = ib_get_dma_mr(ia->ri_pd, IB_ACCESS_LOCAL_WRITE);5252- if (IS_ERR(mr)) {5353- pr_err("%s: ib_get_dma_mr for failed with %lX\n",5454- __func__, PTR_ERR(mr));5555- return -ENOMEM;5656- }5757- ia->ri_dma_lkey = ia->ri_dma_mr->lkey;5858- ia->ri_dma_mr = mr;5959- }6060-6142 return 0;6243}6344
-5
net/sunrpc/xprtrdma/frwr_ops.c
···189189 struct ib_device_attr *devattr = &ia->ri_devattr;190190 int depth, delta;191191192192- /* Obtain an lkey to use for the regbufs, which are193193- * protected from remote access.194194- */195195- ia->ri_dma_lkey = ia->ri_device->local_dma_lkey;196196-197192 ia->ri_max_frmr_depth =198193 min_t(unsigned int, RPCRDMA_MAX_DATA_SEGS,199194 devattr->max_fast_reg_page_list_len);
+1-9
net/sunrpc/xprtrdma/physical_ops.c
···2323physical_op_open(struct rpcrdma_ia *ia, struct rpcrdma_ep *ep,2424 struct rpcrdma_create_data_internal *cdata)2525{2626- struct ib_device_attr *devattr = &ia->ri_devattr;2726 struct ib_mr *mr;28272928 /* Obtain an rkey to use for RPC data payloads.···3637 __func__, PTR_ERR(mr));3738 return -ENOMEM;3839 }4040+3941 ia->ri_dma_mr = mr;4040-4141- /* Obtain an lkey to use for regbufs.4242- */4343- if (devattr->device_cap_flags & IB_DEVICE_LOCAL_DMA_LKEY)4444- ia->ri_dma_lkey = ia->ri_device->local_dma_lkey;4545- else4646- ia->ri_dma_lkey = ia->ri_dma_mr->lkey;4747-4842 return 0;4943}5044
···6565 struct rdma_cm_id *ri_id;6666 struct ib_pd *ri_pd;6767 struct ib_mr *ri_dma_mr;6868- u32 ri_dma_lkey;6968 struct completion ri_done;7069 int ri_async_rc;7170 unsigned int ri_max_frmr_depth;
+14-1
net/unix/af_unix.c
···21792179 if (UNIXCB(skb).fp)21802180 scm.fp = scm_fp_dup(UNIXCB(skb).fp);2181218121822182- sk_peek_offset_fwd(sk, chunk);21822182+ if (skip) {21832183+ sk_peek_offset_fwd(sk, chunk);21842184+ skip -= chunk;21852185+ }2183218621872187+ if (UNIXCB(skb).fp)21882188+ break;21892189+21902190+ last = skb;21912191+ last_len = skb->len;21922192+ unix_state_lock(sk);21932193+ skb = skb_peek_next(skb, &sk->sk_receive_queue);21942194+ if (skb)21952195+ goto again;21962196+ unix_state_unlock(sk);21842197 break;21852198 }21862199 } while (size);
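The af_unix hunk above changes a MSG_PEEK read so that, once the current skb has been fully skipped or copied, the loop walks on to the next skb in the receive queue instead of breaking out after the first one. A standalone model of that accounting (hypothetical simulation, not kernel code):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for sk_buff: just a length and a next pointer. */
struct sim_skb {
	int len;
	struct sim_skb *next;
};

/* Model of the fixed peek loop: consume `skip` bytes across as many
 * buffers as needed, then count everything that remains peekable. */
static int peek_after_skip(const struct sim_skb *skb, int skip)
{
	int copied = 0;

	for (; skb; skb = skb->next) {
		if (skip >= skb->len) {
			skip -= skb->len;	/* whole buffer skipped */
			continue;
		}
		copied += skb->len - skip;	/* partial or full peek */
		skip = 0;
	}
	return copied;
}
```

In this model, the removed unconditional `break` would have stopped the walk at the first buffer, so a peek offset beyond one skb could return nothing even though later skbs held data.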
+7-7
samples/kprobes/jprobe_example.c
···11/*22 * Here's a sample kernel module showing the use of jprobes to dump33- * the arguments of do_fork().33+ * the arguments of _do_fork().44 *55 * For more information on theory of operation of jprobes, see66 * Documentation/kprobes.txt77 *88 * Build and insert the kernel module as done in the kprobe example.99 * You will see the trace data in /var/log/messages and on the1010- * console whenever do_fork() is invoked to create a new process.1010+ * console whenever _do_fork() is invoked to create a new process.1111 * (Some messages may be suppressed if syslogd is configured to1212 * eliminate duplicate messages.)1313 */···1717#include <linux/kprobes.h>18181919/*2020- * Jumper probe for do_fork.2020+ * Jumper probe for _do_fork.2121 * Mirror principle enables access to arguments of the probed routine2222 * from the probe handler.2323 */24242525-/* Proxy routine having the same arguments as actual do_fork() routine */2626-static long jdo_fork(unsigned long clone_flags, unsigned long stack_start,2525+/* Proxy routine having the same arguments as actual _do_fork() routine */2626+static long j_do_fork(unsigned long clone_flags, unsigned long stack_start,2727 unsigned long stack_size, int __user *parent_tidptr,2828 int __user *child_tidptr)2929{···3636}37373838static struct jprobe my_jprobe = {3939- .entry = jdo_fork,3939+ .entry = j_do_fork,4040 .kp = {4141- .symbol_name = "do_fork",4141+ .symbol_name = "_do_fork",4242 },4343};4444
+3-3
samples/kprobes/kprobe_example.c
···11/*22 * NOTE: This example is works on x86 and powerpc.33 * Here's a sample kernel module showing the use of kprobes to dump a44- * stack trace and selected registers when do_fork() is called.44+ * stack trace and selected registers when _do_fork() is called.55 *66 * For more information on theory of operation of kprobes, see77 * Documentation/kprobes.txt88 *99 * You will see the trace data in /var/log/messages and on the console1010- * whenever do_fork() is invoked to create a new process.1010+ * whenever _do_fork() is invoked to create a new process.1111 */12121313#include <linux/kernel.h>···16161717/* For each probe you need to allocate a kprobe structure */1818static struct kprobe kp = {1919- .symbol_name = "do_fork",1919+ .symbol_name = "_do_fork",2020};21212222/* kprobe pre_handler: called just before the probed instruction is executed */
+2-2
samples/kprobes/kretprobe_example.c
···77 *88 * usage: insmod kretprobe_example.ko func=<func_name>99 *1010- * If no func_name is specified, do_fork is instrumented1010+ * If no func_name is specified, _do_fork is instrumented1111 *1212 * For more information on theory of operation of kretprobes, see1313 * Documentation/kprobes.txt···2525#include <linux/limits.h>2626#include <linux/sched.h>27272828-static char func_name[NAME_MAX] = "do_fork";2828+static char func_name[NAME_MAX] = "_do_fork";2929module_param_string(func, func_name, NAME_MAX, S_IRUGO);3030MODULE_PARM_DESC(func, "Function to kretprobe; this module will report the"3131 " function's execution time");
···2020#include <getopt.h>2121#include <err.h>2222#include <arpa/inet.h>2323+#include <openssl/opensslv.h>2324#include <openssl/bio.h>2425#include <openssl/evp.h>2526#include <openssl/pem.h>2626-#include <openssl/cms.h>2727#include <openssl/err.h>2828#include <openssl/engine.h>2929+3030+/*3131+ * Use CMS if we have openssl-1.0.0 or newer available - otherwise we have to3232+ * assume that it's not available and its header file is missing and that we3333+ * should use PKCS#7 instead. Switching to the older PKCS#7 format restricts3434+ * the options we have on specifying the X.509 certificate we want.3535+ *3636+ * Further, older versions of OpenSSL don't support manually adding signers to3737+ * the PKCS#7 message so have to accept that we get a certificate included in3838+ * the signature message. Nor do such older versions of OpenSSL support3939+ * signing with anything other than SHA1 - so we're stuck with that if such is4040+ * the case.4141+ */4242+#if OPENSSL_VERSION_NUMBER < 0x10000000L4343+#define USE_PKCS74444+#endif4545+#ifndef USE_PKCS74646+#include <openssl/cms.h>4747+#else4848+#include <openssl/pkcs7.h>4949+#endif29503051struct module_signature {3152 uint8_t algo; /* Public-key crypto algorithm [0] */···131110 struct module_signature sig_info = { .id_type = PKEY_ID_PKCS7 };132111 char *hash_algo = NULL;133112 char *private_key_name, *x509_name, *module_name, *dest_name;134134- bool save_cms = false, replace_orig;113113+ bool save_sig = false, replace_orig;135114 bool sign_only = false;136115 unsigned char buf[4096];137137- unsigned long module_size, cms_size;138138- unsigned int use_keyid = 0, use_signed_attrs = CMS_NOATTR;116116+ unsigned long module_size, sig_size;117117+ unsigned int use_signed_attrs;139118 const EVP_MD *digest_algo;140119 EVP_PKEY *private_key;120120+#ifndef USE_PKCS7141121 CMS_ContentInfo *cms;122122+ unsigned int use_keyid = 0;123123+#else124124+ PKCS7 *pkcs7;125125+#endif142126 X509 *x509;143127 BIO *b, *bd = NULL, *bm;144128 int opt, n;145145-146129 OpenSSL_add_all_algorithms();147130 ERR_load_crypto_strings();148131 ERR_clear_error();149132150133 key_pass = getenv("KBUILD_SIGN_PIN");151134135135+#ifndef USE_PKCS7136136+ use_signed_attrs = CMS_NOATTR;137137+#else138138+ use_signed_attrs = PKCS7_NOATTR;139139+#endif140140+152141 do {153142 opt = getopt(argc, argv, "dpk");154143 switch (opt) {155155- case 'p': save_cms = true; break;156156- case 'd': sign_only = true; save_cms = true; break;144144+ case 'p': save_sig = true; break;145145+ case 'd': sign_only = true; save_sig = true; break;146146+#ifndef USE_PKCS7157147 case 'k': use_keyid = CMS_USE_KEYID; break;148148+#endif158149 case -1: break;159150 default: format();160151 }···189156 "asprintf");190157 replace_orig = true;191158 }159159+160160+#ifdef USE_PKCS7161161+ if (strcmp(hash_algo, "sha1") != 0) {162162+ fprintf(stderr, "sign-file: %s only supports SHA1 signing\n",163163+ OPENSSL_VERSION_TEXT);164164+ exit(3);165165+ }166166+#endif192167193168 /* Read the private key and the X.509 cert the PKCS#7 message194169 * will point to.···254213 bm = BIO_new_file(module_name, "rb");255214 ERR(!bm, "%s", module_name);256215257257- /* Load the CMS message from the digest buffer. */216216+#ifndef USE_PKCS7217217+ /* Load the signature message from the digest buffer. */258218 cms = CMS_sign(NULL, NULL, NULL, NULL,259219 CMS_NOCERTS | CMS_PARTIAL | CMS_BINARY | CMS_DETACHED | CMS_STREAM);260220 ERR(!cms, "CMS_sign");···263221 ERR(!CMS_add1_signer(cms, x509, private_key, digest_algo,264222 CMS_NOCERTS | CMS_BINARY | CMS_NOSMIMECAP |265223 use_keyid | use_signed_attrs),266266- "CMS_sign_add_signer");224224+ "CMS_add1_signer");267225 ERR(CMS_final(cms, bm, NULL, CMS_NOCERTS | CMS_BINARY) < 0,268226 "CMS_final");269227270270- if (save_cms) {271271- char *cms_name;228228+#else229229+ pkcs7 = PKCS7_sign(x509, private_key, NULL, bm,230230+ PKCS7_NOCERTS | PKCS7_BINARY |231231+ PKCS7_DETACHED | use_signed_attrs);232232+ ERR(!pkcs7, "PKCS7_sign");233233+#endif272234273273- ERR(asprintf(&cms_name, "%s.p7s", module_name) < 0, "asprintf");274274- b = BIO_new_file(cms_name, "wb");275275- ERR(!b, "%s", cms_name);276276- ERR(i2d_CMS_bio_stream(b, cms, NULL, 0) < 0, "%s", cms_name);235235+ if (save_sig) {236236+ char *sig_file_name;237237+238238+ ERR(asprintf(&sig_file_name, "%s.p7s", module_name) < 0,239239+ "asprintf");240240+ b = BIO_new_file(sig_file_name, "wb");241241+ ERR(!b, "%s", sig_file_name);242242+#ifndef USE_PKCS7243243+ ERR(i2d_CMS_bio_stream(b, cms, NULL, 0) < 0,244244+ "%s", sig_file_name);245245+#else246246+ ERR(i2d_PKCS7_bio(b, pkcs7) < 0,247247+ "%s", sig_file_name);248248+#endif277249 BIO_free(b);278250 }279251···303247 ERR(n < 0, "%s", module_name);304248 module_size = BIO_number_written(bd);305249250250+#ifndef USE_PKCS7306251 ERR(i2d_CMS_bio_stream(bd, cms, NULL, 0) < 0, "%s", dest_name);307307- cms_size = BIO_number_written(bd) - module_size;308308- sig_info.sig_len = htonl(cms_size);252252+#else253253+ ERR(i2d_PKCS7_bio(bd, pkcs7) < 0, "%s", dest_name);254254+#endif255255+ sig_size = BIO_number_written(bd) - module_size;256256+ sig_info.sig_len = htonl(sig_size);309257 ERR(BIO_write(bd, &sig_info, sizeof(sig_info)) < 0, "%s", dest_name);310258 ERR(BIO_write(bd, magic_number, sizeof(magic_number) - 1) < 0, "%s", dest_name);311259
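The CMS-vs-PKCS#7 selection above hinges on one compile-time check of `OPENSSL_VERSION_NUMBER`. A minimal sketch of that gate, with the version macro supplied locally (a hypothetical value, pretending we build against OpenSSL 0.9.8) so the snippet compiles without OpenSSL headers — in the real tool it comes from `<openssl/opensslv.h>`:

```c
#include <assert.h>

/* Hypothetical stand-in for the macro normally provided by
 * <openssl/opensslv.h>; 0x0090800fL corresponds to an 0.9.8 release. */
#define OPENSSL_VERSION_NUMBER 0x0090800fL

/* Same gate as in sign-file.c: CMS needs openssl-1.0.0 or newer. */
#if OPENSSL_VERSION_NUMBER < 0x10000000L
#define USE_PKCS7
#endif

static int using_pkcs7(void)
{
#ifdef USE_PKCS7
	return 1;	/* fall back to the PKCS#7 message format */
#else
	return 0;	/* CMS is available */
#endif
}
```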
+4-4
security/keys/gc.c
···134134 kdebug("- %u", key->serial);135135 key_check(key);136136137137+ /* Throw away the key data */138138+ if (key->type->destroy)139139+ key->type->destroy(key);140140+137141 security_key_free(key);138142139143 /* deal with the user's key tracking and quota */···151147 atomic_dec(&key->user->nkeys);152148 if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags))153149 atomic_dec(&key->user->nikeys);154154-155155- /* now throw away the key memory */156156- if (key->type->destroy)157157- key->type->destroy(key);158150159151 key_user_put(key->user);160152
+6-2
tools/build/Makefile.feature
···4141 libelf-getphdrnum \4242 libelf-mmap \4343 libnuma \4444+ numa_num_possible_cpus \4445 libperl \4546 libpython \4647 libpython-version \···5251 timerfd \5352 libdw-dwarf-unwind \5453 zlib \5555- lzma5454+ lzma \5555+ get_cpuid56565757FEATURE_DISPLAY ?= \5858 dwarf \···6361 libbfd \6462 libelf \6563 libnuma \6464+ numa_num_possible_cpus \6665 libperl \6766 libpython \6867 libslang \6968 libunwind \7069 libdw-dwarf-unwind \7170 zlib \7272- lzma7171+ lzma \7272+ get_cpuid73737474# Set FEATURE_CHECK_(C|LD)FLAGS-all for all FEATURE_TESTS features.7575# If in the future we need per-feature checks/flags for features not
···7777# include "test-libnuma.c"7878#undef main79798080+#define main main_test_numa_num_possible_cpus8181+# include "test-numa_num_possible_cpus.c"8282+#undef main8383+8084#define main main_test_timerfd8185# include "test-timerfd.c"8286#undef main···121117# include "test-lzma.c"122118#undef main123119120120+#define main main_test_get_cpuid121121+# include "test-get_cpuid.c"122122+#undef main123123+124124int main(int argc, char *argv[])125125{126126 main_test_libpython();···144136 main_test_libbfd();145137 main_test_backtrace();146138 main_test_libnuma();139139+ main_test_numa_num_possible_cpus();147140 main_test_timerfd();148141 main_test_stackprotector_all();149142 main_test_libdw_dwarf_unwind();···152143 main_test_zlib();153144 main_test_pthread_attr_setaffinity_np();154145 main_test_lzma();146146+ main_test_get_cpuid();155147156148 return 0;157149}
···37953795 struct format_field *field;37963796 struct printk_map *printk;37973797 long long val, fval;37983798- unsigned long addr;37983798+ unsigned long long addr;37993799 char *str;38003800 unsigned char *hex;38013801 int print;···38283828 */38293829 if (!(field->flags & FIELD_IS_ARRAY) &&38303830 field->size == pevent->long_size) {38313831- addr = *(unsigned long *)(data + field->offset);38313831+38323832+ /* Handle heterogeneous recording and processing38333833+ * architectures38343834+ *38353835+ * CASE I:38363836+ * Traces recorded on 32-bit devices (32-bit38373837+ * addressing) and processed on 64-bit devices:38383838+ * In this case, only 32 bits should be read.38393839+ *38403840+ * CASE II:38413841+ * Traces recorded on 64 bit devices and processed38423842+ * on 32-bit devices:38433843+ * In this case, 64 bits must be read.38443844+ */38453845+ addr = (pevent->long_size == 8) ?38463846+ *(unsigned long long *)(data + field->offset) :38473847+ (unsigned long long)*(unsigned int *)(data + field->offset);38483848+38323849 /* Check if it matches a print format */38333850 printk = find_printk(pevent, addr);38343851 if (printk)38353852 trace_seq_puts(s, printk->printk);38363853 else38373837- trace_seq_printf(s, "%lx", addr);38543854+ trace_seq_printf(s, "%llx", addr);38383855 break;38393856 }38403857 str = malloc(len + 1);
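The comment block added above explains why the field must be read at the *recorded* long size rather than the host's. A minimal standalone sketch of that read (the helper name is made up for illustration):

```c
#include <assert.h>
#include <string.h>

/* Read a pointer-sized trace field recorded with long_size bytes.
 * A trace taken on a 32-bit machine stores 4-byte addresses even when
 * it is decoded on a 64-bit host (CASE I), and a 64-bit trace decoded
 * on a 32-bit host still carries 8-byte addresses (CASE II). */
static unsigned long long read_recorded_addr(const void *data, int long_size)
{
	if (long_size == 8) {
		unsigned long long v;

		memcpy(&v, data, sizeof(v));	/* 64-bit recording */
		return v;
	} else {
		unsigned int v;

		memcpy(&v, data, sizeof(v));	/* 32-bit recording */
		return v;	/* zero-extend the 32-bit address */
	}
}
```

Reading through `unsigned long` on the host, as the old code did, would either truncate a 64-bit address on a 32-bit decoder or over-read a 32-bit one.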
-15
tools/perf/Documentation/intel-pt.txt
···364364365365 CYC packets are not requested by default.366366367367-no_force_psb This is a driver option and is not in the IA32_RTIT_CTL MSR.368368-369369- It stops the driver resetting the byte count to zero whenever370370- enabling the trace (for example on context switches) which in371371- turn results in no PSB being forced. However some processors372372- will produce a PSB anyway.373373-374374- In any case, there is still a PSB when the trace is enabled for375375- the first time.376376-377377- no_force_psb can be used to slightly decrease the trace size but378378- may make it harder for the decoder to recover from errors.379379-380380- no_force_psb is not selected by default.381381-382367383368new snapshot option384369-------------------
+15-5
tools/perf/config/Makefile
···573573 msg := $(warning No numa.h found, disables 'perf bench numa mem' benchmark, please install numactl-devel/libnuma-devel/libnuma-dev);574574 NO_LIBNUMA := 1575575 else576576- CFLAGS += -DHAVE_LIBNUMA_SUPPORT577577- EXTLIBS += -lnuma578578- $(call detected,CONFIG_NUMA)576576+ ifeq ($(feature-numa_num_possible_cpus), 0)577577+ msg := $(warning Old numa library found, disables 'perf bench numa mem' benchmark, please install numactl-devel/libnuma-devel/libnuma-dev >= 2.0.8);578578+ NO_LIBNUMA := 1579579+ else580580+ CFLAGS += -DHAVE_LIBNUMA_SUPPORT581581+ EXTLIBS += -lnuma582582+ $(call detected,CONFIG_NUMA)583583+ endif579584 endif580585endif581586···626621endif627622628623ifndef NO_AUXTRACE629629- $(call detected,CONFIG_AUXTRACE)630630- CFLAGS += -DHAVE_AUXTRACE_SUPPORT624624+ ifeq ($(feature-get_cpuid), 0)625625+ msg := $(warning Your gcc lacks the __get_cpuid() builtin, disables support for auxtrace/Intel PT, please install a newer gcc);626626+ NO_AUXTRACE := 1627627+ else628628+ $(call detected,CONFIG_AUXTRACE)629629+ CFLAGS += -DHAVE_AUXTRACE_SUPPORT630630+ endif631631endif632632633633# Among the variables below, these:
+7-6
tools/perf/util/probe-event.c
···270270 int ret = 0;271271272272 if (module) {273273- list_for_each_entry(dso, &host_machine->dsos.head, node) {274274- if (!dso->kernel)275275- continue;276276- if (strncmp(dso->short_name + 1, module,277277- dso->short_name_len - 2) == 0)278278- goto found;273273+ char module_name[128];274274+275275+ snprintf(module_name, sizeof(module_name), "[%s]", module);276276+ map = map_groups__find_by_name(&host_machine->kmaps, MAP__FUNCTION, module_name);277277+ if (map) {278278+ dso = map->dso;279279+ goto found;279280 }280281 pr_debug("Failed to find module %s.\n", module);281282 return -ENOENT;
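The new lookup above works because kernel module maps in perf's kmaps are keyed by the bracketed name "[modname]"; the patch builds that key with snprintf() before calling map_groups__find_by_name(). The key construction, as a trivial sketch (hypothetical helper name):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper mirroring the snprintf() in the patch: wrap the
 * module string in brackets to form the map-groups lookup key. */
static void module_map_name(char *buf, size_t len, const char *module)
{
	snprintf(buf, len, "[%s]", module);
}
```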
+4-1
tools/perf/util/session.c
···15801580 file_offset = page_offset;15811581 head = data_offset - page_offset;1582158215831583- if (data_size && (data_offset + data_size < file_size))15831583+ if (data_size == 0)15841584+ goto out;15851585+15861586+ if (data_offset + data_size < file_size)15841587 file_size = data_offset + data_size;1585158815861589 ui_progress__init(&prog, file_size, "Processing events...");
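The session.c fix above splits the old compound condition: a zero data_size now bails out early instead of silently meaning "process to end of file", and the clamp applies only when a size is actually present. The resulting logic as a standalone sketch (hypothetical helper, returning how many bytes to process):

```c
#include <assert.h>

/* Hypothetical helper mirroring the fixed logic: returns how many bytes
 * of the file should be processed, 0 meaning bail out early. */
static unsigned long long bytes_to_process(unsigned long long data_offset,
					   unsigned long long data_size,
					   unsigned long long file_size)
{
	if (data_size == 0)
		return 0;	/* no event data recorded: nothing to do */

	if (data_offset + data_size < file_size)
		file_size = data_offset + data_size;	/* clamp to data end */

	return file_size;
}
```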
+14-2
tools/perf/util/stat.c
···196196 memset(counter->per_pkg_mask, 0, MAX_NR_CPUS);197197}198198199199-static int check_per_pkg(struct perf_evsel *counter, int cpu, bool *skip)199199+static int check_per_pkg(struct perf_evsel *counter,200200+ struct perf_counts_values *vals, int cpu, bool *skip)200201{201202 unsigned long *mask = counter->per_pkg_mask;202203 struct cpu_map *cpus = perf_evsel__cpus(counter);···219218 counter->per_pkg_mask = mask;220219 }221220221221+ /*222222+ * we do not consider an event that has not run as a good223223+ * instance to mark a package as used (skip=1). Otherwise224224+ * we may run into a situation where the first CPU in a package225225+ * is not running anything, yet the second is, and this function226226+ * would mark the package as used after the first CPU and would227227+ * not read the values from the second CPU.228228+ */229229+ if (!(vals->run && vals->ena))230230+ return 0;231231+222232 s = cpu_map__get_socket(cpus, cpu);223233 if (s < 0)224234 return -1;···247235 static struct perf_counts_values zero;248236 bool skip = false;249237250250- if (check_per_pkg(evsel, cpu, &skip)) {238238+ if (check_per_pkg(evsel, count, cpu, &skip)) {251239 pr_err("failed to read per-pkg counter\n");252240 return -1;253241 }
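The guard added to check_per_pkg() above keeps an event instance that never ran from claiming its package. A reduced model of the decision (simplified types, a single-word mask instead of the per-CPU bitmap, hypothetical names):

```c
#include <assert.h>

struct counts {
	unsigned long long ena;	/* time the event was enabled */
	unsigned long long run;	/* time the event was running */
};

/* Returns 1 if this reading should be skipped because its package was
 * already counted. An instance that never ran (run == 0 or ena == 0)
 * neither skips nor marks, so a later CPU in the same package that did
 * run still gets read. */
static int per_pkg_skip(const struct counts *vals, unsigned long *mask, int pkg)
{
	if (!(vals->run && vals->ena))
		return 0;	/* did not run: leave the package unmarked */

	if (*mask & (1UL << pkg))
		return 1;	/* package already counted: skip */

	*mask |= 1UL << pkg;	/* first running CPU claims the package */
	return 0;
}
```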
···709709710710 dir = opendir(procfs__mountpoint());711711 if (!dir)712712- return -1;712712+ return false;713713714714 /* Walk through the directory. */715715 while (ret && (d = readdir(dir)) != NULL) {
+34-5
tools/power/x86/turbostat/turbostat.c
···7171unsigned int extra_msr_offset64;7272unsigned int extra_delta_offset32;7373unsigned int extra_delta_offset64;7474+unsigned int aperf_mperf_multiplier = 1;7475int do_smi;7576double bclk;7777+double base_hz;7878+double tsc_tweak = 1.0;7679unsigned int show_pkg;7780unsigned int show_core;7881unsigned int show_cpu;···505502 /* %Busy */506503 if (has_aperf) {507504 if (!skip_c0)508508- outp += sprintf(outp, "%8.2f", 100.0 * t->mperf/t->tsc);505505+ outp += sprintf(outp, "%8.2f", 100.0 * t->mperf/t->tsc/tsc_tweak);509506 else510507 outp += sprintf(outp, "********");511508 }···513510 /* Bzy_MHz */514511 if (has_aperf)515512 outp += sprintf(outp, "%8.0f",516516- 1.0 * t->tsc / units * t->aperf / t->mperf / interval_float);513513+ 1.0 * t->tsc * tsc_tweak / units * t->aperf / t->mperf / interval_float);517514518515 /* TSC_MHz */519516 outp += sprintf(outp, "%8.0f", 1.0 * t->tsc/units/interval_float);···987984 return -3;988985 if (get_msr(cpu, MSR_IA32_MPERF, &t->mperf))989986 return -4;987987+ t->aperf = t->aperf * aperf_mperf_multiplier;988988+ t->mperf = t->mperf * aperf_mperf_multiplier;990989 }991990992991 if (do_smi) {···11531148int slv_pkg_cstate_limits[16] = {PCL__0, PCL__1, PCLRSV, PCLRSV, PCL__4, PCLRSV, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};11541149int amt_pkg_cstate_limits[16] = {PCL__0, PCL__1, PCL__2, PCLRSV, PCLRSV, PCLRSV, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};11551150int phi_pkg_cstate_limits[16] = {PCL__0, PCL__2, PCL_6N, PCL_6R, PCLRSV, PCLRSV, PCLRSV, PCLUNL, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV};11511151+11521152+11531153+static void11541154+calculate_tsc_tweak()11551155+{11561156+ unsigned long long msr;11571157+ unsigned int base_ratio;11581158+11591159+ get_msr(base_cpu, MSR_NHM_PLATFORM_INFO, &msr);11601160+ base_ratio = (msr >> 8) & 0xFF;11611161+ base_hz = base_ratio * bclk * 1000000;11621162+ tsc_tweak = base_hz / tsc_hz;11631163+}1156116411571165static void11581166dump_nhm_platform_info(void)···1944192619451927 switch (model) {19461928 case 0x3A: /* IVB */19471947- case 0x3E: /* IVB Xeon */19481948-19491929 case 0x3C: /* HSW */19501930 case 0x3F: /* HSX */19511931 case 0x45: /* HSW */···25592543 return 0;25602544}2561254525462546+unsigned int get_aperf_mperf_multiplier(unsigned int family, unsigned int model)25472547+{25482548+ if (is_knl(family, model))25492549+ return 1024;25502550+ return 1;25512551+}25522552+25622553#define SLM_BCLK_FREQS 525632554double slm_freq_table[SLM_BCLK_FREQS] = { 83.3, 100.0, 133.3, 116.7, 80.0};25642555···27672744 }27682745 }2769274627472747+ if (has_aperf)27482748+ aperf_mperf_multiplier = get_aperf_mperf_multiplier(family, model);27492749+27702750 do_nhm_platform_info = do_nhm_cstates = do_smi = probe_nhm_msrs(family, model);27712751 do_snb_cstates = has_snb_msrs(family, model);27722752 do_pc2 = do_snb_cstates && (pkg_cstate_limit >= PCL__2);···2787276127882762 if (debug)27892763 dump_cstate_pstate_config_info();27642764+27652765+ if (has_skl_msrs(family, model))27662766+ calculate_tsc_tweak();2790276727912768 return;27922769}···31193090}3120309131213092void print_version() {31223122- fprintf(stderr, "turbostat version 4.7 17-June, 2015"30933093+ fprintf(stderr, "turbostat version 4.8 26-Sep, 2015"31233094 " - Len Brown <lenb@kernel.org>\n");31243095}31253096
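The tsc_tweak machinery above corrects %Busy on parts whose TSC does not tick at exactly base_ratio * bclk. A standalone sketch of the arithmetic, with no MSR access and made-up frequencies in the test:

```c
#include <assert.h>

/* Mirror of the %Busy scaling added above: tsc_tweak = base_hz / tsc_hz,
 * and the busy percentage divides mperf/tsc by that ratio so a skewed
 * TSC does not distort the result. */
static double busy_percent(unsigned long long mperf, unsigned long long tsc,
			   double base_hz, double tsc_hz)
{
	double tsc_tweak = base_hz / tsc_hz;

	return 100.0 * mperf / tsc / tsc_tweak;
}
```

With base_hz == tsc_hz the tweak is 1.0 and the formula reduces to the original 100 * mperf / tsc.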