···
 NTFS3
 =====
 
-
 Summary and Features
 ====================
 
-NTFS3 is fully functional NTFS Read-Write driver. The driver works with
-NTFS versions up to 3.1, normal/compressed/sparse files
-and journal replaying. File system type to use on mount is 'ntfs3'.
+NTFS3 is fully functional NTFS Read-Write driver. The driver works with NTFS
+versions up to 3.1. File system type to use on mount is *ntfs3*.
 
 - This driver implements NTFS read/write support for normal, sparse and
   compressed files.
-- Supports native journal replaying;
-- Supports extended attributes
-	Predefined extended attributes:
-	- 'system.ntfs_security' gets/sets security
-			descriptor (SECURITY_DESCRIPTOR_RELATIVE)
-	- 'system.ntfs_attrib' gets/sets ntfs file/dir attributes.
-		Note: applied to empty files, this allows to switch type between
-		sparse(0x200), compressed(0x800) and normal;
+- Supports native journal replaying.
 - Supports NFS export of mounted NTFS volumes.
+- Supports extended attributes. Predefined extended attributes:
+
+  - *system.ntfs_security* gets/sets security
+
+    Descriptor: SECURITY_DESCRIPTOR_RELATIVE
+
+  - *system.ntfs_attrib* gets/sets ntfs file/dir attributes.
+
+    Note: Applied to empty files, this allows to switch type between
+    sparse(0x200), compressed(0x800) and normal.
 
 Mount Options
 =============
 
 The list below describes mount options supported by NTFS3 driver in addition to
-generic ones.
+generic ones. You can use every mount option with **no** option. If it is in
+this table marked with no it means default is without **no**.
 
-===============================================================================
+.. flat-table::
+   :widths: 1 5
+   :fill-cells:
 
-nls=name		This option informs the driver how to interpret path
-			strings and translate them to Unicode and back. If
-			this option is not set, the default codepage will be
-			used (CONFIG_NLS_DEFAULT).
-			Examples:
-				'nls=utf8'
+   * - iocharset=name
+     - This option informs the driver how to interpret path strings and
+       translate them to Unicode and back. If this option is not set, the
+       default codepage will be used (CONFIG_NLS_DEFAULT).
 
-uid=
-gid=
-umask=			Controls the default permissions for files/directories created
-			after the NTFS volume is mounted.
+       Example: iocharset=utf8
 
-fmask=
-dmask=			Instead of specifying umask which applies both to
-			files and directories, fmask applies only to files and
-			dmask only to directories.
+   * - uid=
+     - :rspan:`1`
+   * - gid=
 
-nohidden		Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN)
-			attribute will not be shown under Linux.
+   * - umask=
+     - Controls the default permissions for files/directories created after
+       the NTFS volume is mounted.
 
-sys_immutable		Files with the Windows-specific SYSTEM
-			(FILE_ATTRIBUTE_SYSTEM) attribute will be marked as system
-			immutable files.
+   * - dmask=
+     - :rspan:`1` Instead of specifying umask which applies both to files and
+       directories, fmask applies only to files and dmask only to directories.
+   * - fmask=
 
-discard			Enable support of the TRIM command for improved performance
-			on delete operations, which is recommended for use with the
-			solid-state drives (SSD).
+   * - noacsrules
+     - "No access rules" mount option sets access rights for files/folders to
+       777 and owner/group to root. This mount option absorbs all other
+       permissions.
 
-force			Forces the driver to mount partitions even if 'dirty' flag
-			(volume dirty) is set. Not recommended for use.
+       - Permissions change for files/folders will be reported as successful,
+         but they will remain 777.
 
-sparse			Create new files as "sparse".
+       - Owner/group change will be reported as successful, but they will stay
+         as root.
 
-showmeta		Use this parameter to show all meta-files (System Files) on
-			a mounted NTFS partition.
-			By default, all meta-files are hidden.
+   * - nohidden
+     - Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN) attribute
+       will not be shown under Linux.
 
-prealloc		Preallocate space for files excessively when file size is
-			increasing on writes. Decreases fragmentation in case of
-			parallel write operations to different files.
+   * - sys_immutable
+     - Files with the Windows-specific SYSTEM (FILE_ATTRIBUTE_SYSTEM) attribute
+       will be marked as system immutable files.
 
-no_acs_rules		"No access rules" mount option sets access rights for
-			files/folders to 777 and owner/group to root. This mount
-			option absorbs all other permissions:
-			- permissions change for files/folders will be reported
-			  as successful, but they will remain 777;
-			- owner/group change will be reported as successful, but
-			  they will stay as root
+   * - discard
+     - Enable support of the TRIM command for improved performance on delete
+       operations, which is recommended for use with the solid-state drives
+       (SSD).
 
-acl			Support POSIX ACLs (Access Control Lists). Effective if
-			supported by Kernel. Not to be confused with NTFS ACLs.
-			The option specified as acl enables support for POSIX ACLs.
+   * - force
+     - Forces the driver to mount partitions even if volume is marked dirty.
+       Not recommended for use.
 
-noatime			All files and directories will not update their last access
-			time attribute if a partition is mounted with this parameter.
-			This option can speed up file system operation.
+   * - sparse
+     - Create new files as sparse.
 
-===============================================================================
+   * - showmeta
+     - Use this parameter to show all meta-files (System Files) on a mounted
+       NTFS partition. By default, all meta-files are hidden.
 
-ToDo list
+   * - prealloc
+     - Preallocate space for files excessively when file size is increasing on
+       writes. Decreases fragmentation in case of parallel write operations to
+       different files.
+
+   * - acl
+     - Support POSIX ACLs (Access Control Lists). Effective if supported by
+       Kernel. Not to be confused with NTFS ACLs. The option specified as acl
+       enables support for POSIX ACLs.
+
+Todo list
 =========
-
-- Full journaling support (currently journal replaying is supported) over JBD.
-
+- Full journaling support over JBD. Currently journal replaying is supported
+  which is not necessarily as effective as JBD would be.
 
 References
 ==========
-https://www.paragon-software.com/home/ntfs-linux-professional/
-	- Commercial version of the NTFS driver for Linux.
+- Commercial version of the NTFS driver for Linux.
+  https://www.paragon-software.com/home/ntfs-linux-professional/
 
-almaz.alexandrovich@paragon-software.com
-	- Direct e-mail address for feedback and requests on the NTFS3 implementation.
+- Direct e-mail address for feedback and requests on the NTFS3 implementation.
+  almaz.alexandrovich@paragon-software.com
Documentation/networking/devlink/ice.rst (+5 -4)
···
         PHY, link, etc.
     * - ``fw.mgmt.api``
       - running
-      - 1.5
-      - 2-digit version number of the API exported over the AdminQ by the
-        management firmware. Used by the driver to identify what commands
-        are supported.
+      - 1.5.1
+      - 3-digit version number (major.minor.patch) of the API exported over
+        the AdminQ by the management firmware. Used by the driver to
+        identify what commands are supported. Historical versions of the
+        kernel only displayed a 2-digit version number (major.minor).
     * - ``fw.mgmt.build``
       - running
       - 0x305d955f
Documentation/networking/mctp.rst (+5 -5)
···
     };
 
     struct sockaddr_mctp {
-            unsigned short int  smctp_family;
-            int                 smctp_network;
-            struct mctp_addr    smctp_addr;
-            __u8                smctp_type;
-            __u8                smctp_tag;
+            __kernel_sa_family_t smctp_family;
+            unsigned int         smctp_network;
+            struct mctp_addr     smctp_addr;
+            __u8                 smctp_type;
+            __u8                 smctp_tag;
     };
 
     #define MCTP_NET_ANY 0x0
Documentation/userspace-api/vduse.rst (+1 -1)
···
 is clarified or fixed in the future.
 
 Create/Destroy VDUSE devices
-------------------------
+----------------------------
 
 VDUSE devices are created as follows:
MAINTAINERS (+4 -4)
···
 
 FPGA MANAGER FRAMEWORK
 M:	Moritz Fischer <mdf@kernel.org>
+M:	Wu Hao <hao.wu@intel.com>
+M:	Xu Yilun <yilun.xu@intel.com>
 R:	Tom Rix <trix@redhat.com>
 L:	linux-fpga@vger.kernel.org
 S:	Maintained
-W:	http://www.rocketboards.org
 Q:	http://patchwork.kernel.org/project/linux-fpga/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mdf/linux-fpga.git
 F:	Documentation/devicetree/bindings/fpga/
···
 M:	Christian Borntraeger <borntraeger@de.ibm.com>
 M:	Janosch Frank <frankja@linux.ibm.com>
 R:	David Hildenbrand <david@redhat.com>
-R:	Cornelia Huck <cohuck@redhat.com>
 R:	Claudio Imbrenda <imbrenda@linux.ibm.com>
 L:	kvm@vger.kernel.org
 S:	Supported
···
 M:	Heiko Carstens <hca@linux.ibm.com>
 M:	Vasily Gorbik <gor@linux.ibm.com>
 M:	Christian Borntraeger <borntraeger@de.ibm.com>
+R:	Alexander Gordeev <agordeev@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 S:	Supported
 W:	http://www.ibm.com/developerworks/linux/linux390/
···
 F:	drivers/s390/crypto/vfio_ap_private.h
 
 S390 VFIO-CCW DRIVER
-M:	Cornelia Huck <cohuck@redhat.com>
 M:	Eric Farman <farman@linux.ibm.com>
 M:	Matthew Rosato <mjrosato@linux.ibm.com>
 R:	Halil Pasic <pasic@linux.ibm.com>
···
 SY8106A REGULATOR DRIVER
 M:	Icenowy Zheng <icenowy@aosc.io>
 S:	Maintained
-F:	Documentation/devicetree/bindings/regulator/sy8106a-regulator.txt
+F:	Documentation/devicetree/bindings/regulator/silergy,sy8106a.yaml
 F:	drivers/regulator/sy8106a-regulator.c
 
 SYNC FILE FRAMEWORK
···
 
 extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
 
-/* Macro to mark a page protection as uncacheable */
-#define pgprot_noncached(prot)	(__pgprot(pgprot_val(prot) & ~_PAGE_CACHEABLE))
-
-extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
-
 /* to cope with aliasing VIPT cache */
 #define HAVE_ARCH_UNMAPPED_AREA
 
···
 
 static void *host_s2_zalloc_pages_exact(size_t size)
 {
-	return hyp_alloc_pages(&host_s2_pool, get_order(size));
+	void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
+
+	hyp_split_page(hyp_virt_to_page(addr));
+
+	/*
+	 * The size of concatenated PGDs is always a power of two of PAGE_SIZE,
+	 * so there should be no need to free any of the tail pages to make the
+	 * allocation exact.
+	 */
+	WARN_ON(size != (PAGE_SIZE << get_order(size)));
+
+	return addr;
 }
 
 static void *host_s2_zalloc_page(void *pool)
arch/arm64/kvm/hyp/nvhe/page_alloc.c (+15)
···
 
 static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
 {
+	BUG_ON(!p->refcount);
 	p->refcount--;
 	return (p->refcount == 0);
 }
···
 	hyp_spin_lock(&pool->lock);
 	hyp_page_ref_inc(p);
 	hyp_spin_unlock(&pool->lock);
+}
+
+void hyp_split_page(struct hyp_page *p)
+{
+	unsigned short order = p->order;
+	unsigned int i;
+
+	p->order = 0;
+	for (i = 1; i < (1 << order); i++) {
+		struct hyp_page *tail = p + i;
+
+		tail->order = 0;
+		hyp_set_page_refcounted(tail);
+	}
 }
 
 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
arch/arm64/kvm/mmu.c (+4 -2)
···
 		 * when updating the PG_mte_tagged page flag, see
 		 * sanitise_mte_tags for more details.
 		 */
-		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED)
-			return -EINVAL;
+		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
+			ret = -EINVAL;
+			break;
+		}
 
 		if (vma->vm_flags & VM_PFNMAP) {
 			/* IO region dirty page logging not allowed */
arch/csky/Kconfig (+2 -1)
···
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_QUEUED_RWLOCKS
-	select ARCH_WANT_FRAME_POINTERS if !CPU_CK610
+	select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
 	select COMMON_CLK
 	select CLKSRC_MMIO
···
 
 menuconfig HAVE_TCM
 	bool "Tightly-Coupled/Sram Memory"
+	depends on !COMPILE_TEST
 	help
 	  The implementation are not only used by TCM (Tightly-Coupled Meory)
 	  but also used by sram on SOC bus. It follow existed linux tcm
arch/csky/include/asm/bitops.h (-1)
···
  * bug fix, why only could use atomic!!!!
  */
 #include <asm-generic/bitops/non-atomic.h>
-#define __clear_bit(nr, vaddr) clear_bit(nr, vaddr)
 
 #include <asm-generic/bitops/le.h>
 #include <asm-generic/bitops/ext2-atomic.h>
arch/csky/kernel/ptrace.c (+2 -1)
···
 	if (ret)
 		return ret;
 
-	regs.sr = task_pt_regs(target)->sr;
+	/* BIT(0) of regs.sr is Condition Code/Carry bit */
+	regs.sr = (regs.sr & BIT(0)) | (task_pt_regs(target)->sr & ~BIT(0));
 #ifdef CONFIG_CPU_HAS_HILO
 	regs.dcsr = task_pt_regs(target)->dcsr;
 #endif
arch/csky/kernel/signal.c (+4)
···
 			  struct sigcontext __user *sc)
 {
 	int err = 0;
+	unsigned long sr = regs->sr;
 
 	/* sc_pt_regs is structured the same as the start of pt_regs */
 	err |= __copy_from_user(regs, &sc->sc_pt_regs, sizeof(struct pt_regs));
+
+	/* BIT(0) of regs->sr is Condition Code/Carry bit */
+	regs->sr = (sr & ~1) | (regs->sr & 1);
 
 	/* Restore the floating-point state. */
 	err |= restore_fpu_state(sc);
···
 /*
  * This is the sequence required to execute idle instructions, as
  * specified in ISA v2.07 (and earlier). MSR[IR] and MSR[DR] must be 0.
- *
- * The 0(r1) slot is used to save r2 in isa206, so use that here.
+ * We have to store a GPR somewhere, ptesync, then reload it, and create
+ * a false dependency on the result of the load. It doesn't matter which
+ * GPR we store, or where we store it. We have already stored r2 to the
+ * stack at -8(r1) in isa206_idle_insn_mayloss, so use that.
  */
 #define IDLE_STATE_ENTER_SEQ_NORET(IDLE_INST)			\
 	/* Magic NAP/SLEEP/WINKLE mode enter sequence */	\
-	std	r2,0(r1);					\
+	std	r2,-8(r1);					\
 	ptesync;						\
-	ld	r2,0(r1);					\
+	ld	r2,-8(r1);					\
 236:	cmpd	cr0,r2,r2;					\
 	bne	236b;						\
 	IDLE_INST;						\
arch/powerpc/kernel/smp.c (-2)
···
 
 void arch_cpu_idle_dead(void)
 {
-	sched_preempt_enable_no_resched();
-
 	/*
 	 * Disable on the down path. This will be re-enabled by
 	 * start_secondary() via start_secondary_resume() below
arch/powerpc/kvm/book3s_hv_rmhandlers.S (+17 -11)
···
  * r3 contains the SRR1 wakeup value, SRR1 is trashed.
  */
_GLOBAL(idle_kvm_start_guest)
-	ld	r4,PACAEMERGSP(r13)
 	mfcr	r5
 	mflr	r0
-	std	r1,0(r4)
-	std	r5,8(r4)
-	std	r0,16(r4)
-	subi	r1,r4,STACK_FRAME_OVERHEAD
+	std	r5, 8(r1)	// Save CR in caller's frame
+	std	r0, 16(r1)	// Save LR in caller's frame
+	// Create frame on emergency stack
+	ld	r4, PACAEMERGSP(r13)
+	stdu	r1, -SWITCH_FRAME_SIZE(r4)
+	// Switch to new frame on emergency stack
+	mr	r1, r4
+	std	r3, 32(r1)	// Save SRR1 wakeup value
 	SAVE_NVGPRS(r1)
 
 	/*
···
 	beq	kvm_no_guest
 
kvm_secondary_got_guest:
+
+	// About to go to guest, clear saved SRR1
+	li	r0, 0
+	std	r0, 32(r1)
 
 	/* Set HSTATE_DSCR(r13) to something sensible */
 	ld	r6, PACA_DSCR_DEFAULT(r13)
···
 	mfspr	r4, SPRN_LPCR
 	rlwimi	r4, r3, 0, LPCR_PECE0 | LPCR_PECE1
 	mtspr	SPRN_LPCR, r4
-	/* set up r3 for return */
-	mfspr	r3,SPRN_SRR1
+	// Return SRR1 wakeup value, or 0 if we went into the guest
+	ld	r3, 32(r1)
 	REST_NVGPRS(r1)
-	addi	r1, r1, STACK_FRAME_OVERHEAD
-	ld	r0, 16(r1)
-	ld	r5, 8(r1)
-	ld	r1, 0(r1)
+	ld	r1, 0(r1)	// Switch back to caller stack
+	ld	r0, 16(r1)	// Reload LR
+	ld	r5, 8(r1)	// Reload CR
 	mtlr	r0
 	mtcr	r5
 	blr
arch/powerpc/sysdev/xive/common.c (+2 -1)
···
 		 * interrupt to be inactive in that case.
 		 */
 		*state = (pq != XIVE_ESB_INVALID) && !xd->stale_p &&
-			(xd->saved_p || !!(pq & XIVE_ESB_VAL_P));
+			(xd->saved_p || (!!(pq & XIVE_ESB_VAL_P) &&
+			 !irqd_irq_disabled(data)));
 		return 0;
 	default:
 		return -EINVAL;
arch/s390/kvm/gaccess.c (+12)
···
 
 /**
  * guest_translate_address - translate guest logical into guest absolute address
+ * @vcpu: virtual cpu
+ * @gva: Guest virtual address
+ * @ar: Access register
+ * @gpa: Guest physical address
+ * @mode: Translation access mode
  *
  * Parameter semantics are the same as the ones from guest_translate.
  * The memory contents at the guest address are not changed.
···
 
 /**
  * check_gva_range - test a range of guest virtual addresses for accessibility
+ * @vcpu: virtual cpu
+ * @gva: Guest virtual address
+ * @ar: Access register
+ * @length: Length of test range
+ * @mode: Translation access mode
  */
 int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
 		    unsigned long length, enum gacc_mode mode)
···
 
 /**
  * kvm_s390_check_low_addr_prot_real - check for low-address protection
+ * @vcpu: virtual cpu
  * @gra: Guest real address
  *
  * Checks whether an address is subject to low-address protection and set
···
  * @pgt: pointer to the beginning of the page table for the given address if
  *	 successful (return value 0), or to the first invalid DAT entry in
  *	 case of exceptions (return value > 0)
+ * @dat_protection: referenced memory is write protected
  * @fake: pgt references contiguous guest memory block, not a pgtable
  */
 static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
arch/s390/kvm/intercept.c (+3 -1)
···
 
 /**
  * handle_external_interrupt - used for external interruption interceptions
+ * @vcpu: virtual cpu
  *
  * This interception only occurs if the CPUSTAT_EXT_INT bit was set, or if
  * the new PSW does not have external interrupts disabled. In the first case,
···
 }
 
 /**
- * Handle MOVE PAGE partial execution interception.
+ * handle_mvpg_pei - Handle MOVE PAGE partial execution interception.
+ * @vcpu: virtual cpu
  *
  * This interception can only happen for guests with DAT disabled and
  * addresses that are currently not mapped in the host. Thus we try to
arch/s390/lib/string.c (+6 -7)
···
 #ifdef __HAVE_ARCH_STRRCHR
 char *strrchr(const char *s, int c)
 {
-	size_t len = __strend(s) - s;
+	ssize_t len = __strend(s) - s;
 
-	if (len)
-		do {
-			if (s[len] == (char) c)
-				return (char *) s + len;
-		} while (--len > 0);
-	return NULL;
+	do {
+		if (s[len] == (char)c)
+			return (char *)s + len;
+	} while (--len >= 0);
+	return NULL;
 }
 EXPORT_SYMBOL(strrchr);
 #endif
arch/x86/Kconfig (-1)
···
 
 config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
 	bool "Activate AMD Secure Memory Encryption (SME) by default"
-	default y
 	depends on AMD_MEM_ENCRYPT
 	help
 	  Say yes to have system memory encrypted by default if running on
arch/x86/events/msr.c (+1)
···
 	case INTEL_FAM6_BROADWELL_D:
 	case INTEL_FAM6_BROADWELL_G:
 	case INTEL_FAM6_BROADWELL_X:
+	case INTEL_FAM6_SAPPHIRERAPIDS_X:
 
 	case INTEL_FAM6_ATOM_SILVERMONT:
 	case INTEL_FAM6_ATOM_SILVERMONT_D:
arch/x86/kernel/fpu/signal.c (+1 -1)
···
 		return -EINVAL;
 	} else {
 		/* Mask invalid bits out for historical reasons (broken hardware). */
-		fpu->state.fxsave.mxcsr &= ~mxcsr_feature_mask;
+		fpu->state.fxsave.mxcsr &= mxcsr_feature_mask;
 	}
 
 	/* Enforce XFEATURE_MASK_FPSSE when XSAVE is enabled */
arch/x86/kvm/lapic.c (+13 -7)
···
 void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
 {
 	struct kvm_lapic *apic = vcpu->arch.apic;
+	u64 msr_val;
 	int i;
 
 	if (!init_event) {
-		vcpu->arch.apic_base = APIC_DEFAULT_PHYS_BASE |
-				       MSR_IA32_APICBASE_ENABLE;
+		msr_val = APIC_DEFAULT_PHYS_BASE | MSR_IA32_APICBASE_ENABLE;
 		if (kvm_vcpu_is_reset_bsp(vcpu))
-			vcpu->arch.apic_base |= MSR_IA32_APICBASE_BSP;
+			msr_val |= MSR_IA32_APICBASE_BSP;
+		kvm_lapic_set_base(vcpu, msr_val);
 	}
 
 	if (!apic)
···
 	/* Stop the timer in case it's a reset to an active apic */
 	hrtimer_cancel(&apic->lapic_timer.timer);
 
-	if (!init_event) {
-		apic->base_address = APIC_DEFAULT_PHYS_BASE;
-
+	/* The xAPIC ID is set at RESET even if the APIC was already enabled. */
+	if (!init_event)
 		kvm_apic_set_xapic_id(apic, vcpu->vcpu_id);
-	}
 	kvm_apic_set_version(apic->vcpu);
 
 	for (i = 0; i < KVM_APIC_LVT_NUM; i++)
···
 		lapic_timer_advance_dynamic = false;
 	}
 
+	/*
+	 * Stuff the APIC ENABLE bit in lieu of temporarily incrementing
+	 * apic_hw_disabled; the full RESET value is set by kvm_lapic_reset().
+	 */
+	vcpu->arch.apic_base = MSR_IA32_APICBASE_ENABLE;
 	static_branch_inc(&apic_sw_disabled.key); /* sw disabled at reset */
 	kvm_iodevice_init(&apic->dev, &apic_mmio_ops);
 
···
 void kvm_lapic_exit(void)
 {
 	static_key_deferred_flush(&apic_hw_disabled);
+	WARN_ON(static_branch_unlikely(&apic_hw_disabled.key));
 	static_key_deferred_flush(&apic_sw_disabled);
+	WARN_ON(static_branch_unlikely(&apic_sw_disabled.key));
 }
···
 
 	/* SEV-ES scratch area support */
 	void *ghcb_sa;
-	u64 ghcb_sa_len;
+	u32 ghcb_sa_len;
 	bool ghcb_sa_sync;
 	bool ghcb_sa_free;
 
arch/x86/kvm/vmx/vmx.c (+9 -6)
···
 
 static int handle_bus_lock_vmexit(struct kvm_vcpu *vcpu)
 {
-	vcpu->run->exit_reason = KVM_EXIT_X86_BUS_LOCK;
-	vcpu->run->flags |= KVM_RUN_X86_BUS_LOCK;
-	return 0;
+	/*
+	 * Hardware may or may not set the BUS_LOCK_DETECTED flag on BUS_LOCK
+	 * VM-Exits. Unconditionally set the flag here and leave the handling to
+	 * vmx_handle_exit().
+	 */
+	to_vmx(vcpu)->exit_reason.bus_lock_detected = true;
+	return 1;
 }
 
 /*
···
 	int ret = __vmx_handle_exit(vcpu, exit_fastpath);
 
 	/*
-	 * Even when current exit reason is handled by KVM internally, we
-	 * still need to exit to user space when bus lock detected to inform
-	 * that there is a bus lock in guest.
+	 * Exit to user space when bus lock detected to inform that there is
+	 * a bus lock in guest.
 	 */
 	if (to_vmx(vcpu)->exit_reason.bus_lock_detected) {
 		if (ret > 0)
arch/x86/kvm/x86.c (+2 -1)
···
 		int level = i + 1;
 		int lpages = __kvm_mmu_slot_lpages(slot, npages, level);
 
-		WARN_ON(slot->arch.rmap[i]);
+		if (slot->arch.rmap[i])
+			continue;
 
 		slot->arch.rmap[i] = kvcalloc(lpages, sz, GFP_KERNEL_ACCOUNT);
 		if (!slot->arch.rmap[i]) {
block/bfq-cgroup.c (+6)
···
 	bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
 	bfqg_and_blkg_put(bfqq_group(bfqq));
 
+	if (entity->parent &&
+	    entity->parent->last_bfqq_created == bfqq)
+		entity->parent->last_bfqq_created = NULL;
+	else if (bfqd->last_bfqq_created == bfqq)
+		bfqd->last_bfqq_created = NULL;
+
 	entity->parent = bfqg->my_entity;
 	entity->sched_data = &bfqg->sched_data;
 	/* pin down bfqg and its associated blkg */
block/blk-core.c (+78 -70)
···
 #include "blk-mq.h"
 #include "blk-mq-sched.h"
 #include "blk-pm.h"
-#include "blk-rq-qos.h"
 
 struct dentry *blk_debugfs_root;
 
···
 }
 EXPORT_SYMBOL(blk_put_queue);
 
-void blk_set_queue_dying(struct request_queue *q)
+void blk_queue_start_drain(struct request_queue *q)
 {
-	blk_queue_flag_set(QUEUE_FLAG_DYING, q);
-
 	/*
 	 * When queue DYING flag is set, we need to block new req
 	 * entering queue, so we call blk_freeze_queue_start() to
 	 * prevent I/O from crossing blk_queue_enter().
 	 */
 	blk_freeze_queue_start(q);
-
 	if (queue_is_mq(q))
 		blk_mq_wake_waiters(q);
-
 	/* Make blk_queue_enter() reexamine the DYING flag. */
 	wake_up_all(&q->mq_freeze_wq);
+}
+
+void blk_set_queue_dying(struct request_queue *q)
+{
+	blk_queue_flag_set(QUEUE_FLAG_DYING, q);
+	blk_queue_start_drain(q);
 }
 EXPORT_SYMBOL_GPL(blk_set_queue_dying);
 
···
 	 */
 	blk_freeze_queue(q);
 
-	rq_qos_exit(q);
-
 	blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
-
-	/* for synchronous bio-based driver finish in-flight integrity i/o */
-	blk_flush_integrity();
 
 	blk_sync_queue(q);
 	if (queue_is_mq(q))
···
 }
 EXPORT_SYMBOL(blk_cleanup_queue);
 
+static bool blk_try_enter_queue(struct request_queue *q, bool pm)
+{
+	rcu_read_lock();
+	if (!percpu_ref_tryget_live(&q->q_usage_counter))
+		goto fail;
+
+	/*
+	 * The code that increments the pm_only counter must ensure that the
+	 * counter is globally visible before the queue is unfrozen.
+	 */
+	if (blk_queue_pm_only(q) &&
+	    (!pm || queue_rpm_status(q) == RPM_SUSPENDED))
+		goto fail_put;
+
+	rcu_read_unlock();
+	return true;
+
+fail_put:
+	percpu_ref_put(&q->q_usage_counter);
+fail:
+	rcu_read_unlock();
+	return false;
+}
+
 /**
  * blk_queue_enter() - try to increase q->q_usage_counter
  * @q: request queue pointer
···
 {
 	const bool pm = flags & BLK_MQ_REQ_PM;
 
-	while (true) {
-		bool success = false;
-
-		rcu_read_lock();
-		if (percpu_ref_tryget_live(&q->q_usage_counter)) {
-			/*
-			 * The code that increments the pm_only counter is
-			 * responsible for ensuring that that counter is
-			 * globally visible before the queue is unfrozen.
-			 */
-			if ((pm && queue_rpm_status(q) != RPM_SUSPENDED) ||
-			    !blk_queue_pm_only(q)) {
-				success = true;
-			} else {
-				percpu_ref_put(&q->q_usage_counter);
-			}
-		}
-		rcu_read_unlock();
-
-		if (success)
-			return 0;
-
+	while (!blk_try_enter_queue(q, pm)) {
 		if (flags & BLK_MQ_REQ_NOWAIT)
 			return -EBUSY;
 
 		/*
-		 * read pair of barrier in blk_freeze_queue_start(),
-		 * we need to order reading __PERCPU_REF_DEAD flag of
-		 * .q_usage_counter and reading .mq_freeze_depth or
-		 * queue dying flag, otherwise the following wait may
-		 * never return if the two reads are reordered.
+		 * read pair of barrier in blk_freeze_queue_start(), we need to
+		 * order reading __PERCPU_REF_DEAD flag of .q_usage_counter and
+		 * reading .mq_freeze_depth or queue dying flag, otherwise the
+		 * following wait may never return if the two reads are
+		 * reordered.
 		 */
 		smp_rmb();
-
 		wait_event(q->mq_freeze_wq,
 			   (!q->mq_freeze_depth &&
 			    blk_pm_resume_queue(pm, q)) ||
···
 		if (blk_queue_dying(q))
 			return -ENODEV;
 	}
+
+	return 0;
 }
 
 static inline int bio_queue_enter(struct bio *bio)
 {
-	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
-	bool nowait = bio->bi_opf & REQ_NOWAIT;
-	int ret;
+	struct gendisk *disk = bio->bi_bdev->bd_disk;
+	struct request_queue *q = disk->queue;
 
-	ret = blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0);
-	if (unlikely(ret)) {
-		if (nowait && !blk_queue_dying(q))
+	while (!blk_try_enter_queue(q, false)) {
+		if (bio->bi_opf & REQ_NOWAIT) {
+			if (test_bit(GD_DEAD, &disk->state))
+				goto dead;
 			bio_wouldblock_error(bio);
-		else
-			bio_io_error(bio);
+			return -EBUSY;
+		}
+
+		/*
+		 * read pair of barrier in blk_freeze_queue_start(), we need to
+		 * order reading __PERCPU_REF_DEAD flag of .q_usage_counter and
+		 * reading .mq_freeze_depth or queue dying flag, otherwise the
+		 * following wait may never return if the two reads are
+		 * reordered.
+		 */
+		smp_rmb();
+		wait_event(q->mq_freeze_wq,
+			   (!q->mq_freeze_depth &&
+			    blk_pm_resume_queue(false, q)) ||
+			   test_bit(GD_DEAD, &disk->state));
+		if (test_bit(GD_DEAD, &disk->state))
+			goto dead;
 	}
 
-	return ret;
+	return 0;
+dead:
+	bio_io_error(bio);
+	return -ENODEV;
 }
 
 void blk_queue_exit(struct request_queue *q)
···
 	struct gendisk *disk = bio->bi_bdev->bd_disk;
 	blk_qc_t ret = BLK_QC_T_NONE;
 
-	if (blk_crypto_bio_prep(&bio)) {
-		if (!disk->fops->submit_bio)
-			return blk_mq_submit_bio(bio);
+	if (unlikely(bio_queue_enter(bio) != 0))
+		return BLK_QC_T_NONE;
+
+	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
+		goto queue_exit;
+	if (disk->fops->submit_bio) {
 		ret = disk->fops->submit_bio(bio);
+		goto queue_exit;
 	}
+	return blk_mq_submit_bio(bio);
+
+queue_exit:
 	blk_queue_exit(disk->queue);
 	return ret;
 }
···
 		struct request_queue *q = bio->bi_bdev->bd_disk->queue;
 		struct bio_list lower, same;
 
-		if (unlikely(bio_queue_enter(bio) != 0))
-			continue;
-
 		/*
 		 * Create a fresh bio_list for all subordinate requests.
 		 */
···
 static blk_qc_t __submit_bio_noacct_mq(struct bio *bio)
 {
 	struct bio_list bio_list[2] = { };
-	blk_qc_t ret = BLK_QC_T_NONE;
+	blk_qc_t ret;
 
 	current->bio_list = bio_list;
 
 	do {
-		struct gendisk *disk = bio->bi_bdev->bd_disk;
-
-		if (unlikely(bio_queue_enter(bio) != 0))
-			continue;
-
-		if (!blk_crypto_bio_prep(&bio)) {
-			blk_queue_exit(disk->queue);
-			ret = BLK_QC_T_NONE;
-			continue;
-		}
-
-		ret = blk_mq_submit_bio(bio);
+		ret = __submit_bio(bio);
 	} while ((bio = bio_list_pop(&bio_list[0])));
 
 	current->bio_list = NULL;
···
  */
 blk_qc_t submit_bio_noacct(struct bio *bio)
 {
-	if (!submit_bio_checks(bio))
-		return BLK_QC_T_NONE;
-
 	/*
 	 * We only want one ->submit_bio to be active at a time, else stack
 	 * usage with stacked devices could be a problem. Use current->bio_list
···
 #include <linux/earlycpio.h>
 #include <linux/initrd.h>
 #include <linux/security.h>
+#include <linux/kmemleak.h>
 #include "internal.h"
 
 #ifdef CONFIG_ACPI_CUSTOM_DSDT
···
 	 * works fine.
 	 */
 	arch_reserve_mem_area(acpi_tables_addr, all_tables_size);
+
+	kmemleak_ignore_phys(acpi_tables_addr);
 
 	/*
 	 * early_ioremap only can remap 256k one time. If we map all
drivers/acpi/x86/s2idle.c (+2 -1)
···
 		return 0;
 
 	if (acpi_s2idle_vendor_amd()) {
-		/* AMD0004, AMDI0005:
+		/* AMD0004, AMD0005, AMDI0005:
 		 * - Should use rev_id 0x0
 		 * - function mask > 0x3: Should use AMD method, but has off by one bug
 		 * - function mask = 0x3: Should use Microsoft method
···
 					ACPI_LPS0_DSM_UUID_MICROSOFT, 0,
 					&lps0_dsm_guid_microsoft);
 		if (lps0_dsm_func_mask > 0x3 && (!strcmp(hid, "AMD0004") ||
+						 !strcmp(hid, "AMD0005") ||
 						 !strcmp(hid, "AMDI0005"))) {
 			lps0_dsm_func_mask = (lps0_dsm_func_mask << 1) | 0x1;
 			acpi_handle_debug(adev->handle, "_DSM UUID %s: Adjusted function mask: 0x%x\n",
@@ -71 +71 @@
 	int opt_mask = 0;
 	int token;
 	int ret = -EINVAL;
-	int i, dest_port, nr_poll_queues;
+	int nr_poll_queues = 0;
+	int dest_port = 0;
 	int p_cnt = 0;
+	int i;
 
 	options = kstrdup(buf, GFP_KERNEL);
 	if (!options)
+6-31
drivers/block/virtio_blk.c
@@ -689 +689 @@
 static unsigned int virtblk_queue_depth;
 module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);
 
-static int virtblk_validate(struct virtio_device *vdev)
-{
-	u32 blk_size;
-
-	if (!vdev->config->get) {
-		dev_err(&vdev->dev, "%s failure: config access disabled\n",
-			__func__);
-		return -EINVAL;
-	}
-
-	if (!virtio_has_feature(vdev, VIRTIO_BLK_F_BLK_SIZE))
-		return 0;
-
-	blk_size = virtio_cread32(vdev,
-			offsetof(struct virtio_blk_config, blk_size));
-
-	if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE)
-		__virtio_clear_bit(vdev, VIRTIO_BLK_F_BLK_SIZE);
-
-	return 0;
-}
-
 static int virtblk_probe(struct virtio_device *vdev)
 {
 	struct virtio_blk *vblk;
@@ -699 +721 @@
 	u16 min_io_size;
 	u8 physical_block_exp, alignment_offset;
 	unsigned int queue_depth;
+
+	if (!vdev->config->get) {
+		dev_err(&vdev->dev, "%s failure: config access disabled\n",
+			__func__);
+		return -EINVAL;
+	}
 
 	err = ida_simple_get(&vd_index_ida, 0, minor_to_index(1 << MINORBITS),
 			     GFP_KERNEL);
@@ -819 +835 @@
 		blk_queue_logical_block_size(q, blk_size);
 	else
 		blk_size = queue_logical_block_size(q);
-
-	if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE) {
-		dev_err(&vdev->dev,
-			"block size is changed unexpectedly, now is %u\n",
-			blk_size);
-		err = -EINVAL;
-		goto out_cleanup_disk;
-	}
 
 	/* Use topology information if available */
 	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY,
@@ -985 +1009 @@
 	.driver.name			= KBUILD_MODNAME,
 	.driver.owner			= THIS_MODULE,
 	.id_table			= id_table,
-	.validate			= virtblk_validate,
 	.probe				= virtblk_probe,
 	.remove				= virtblk_remove,
 	.config_changed			= virtblk_config_changed,
-12
drivers/bus/Kconfig
@@ -152 +152 @@
 	  Interface 2, which can be used to connect things like NAND Flash,
 	  SRAM, ethernet adapters, FPGAs and LCD displays.
 
-config SIMPLE_PM_BUS
-	tristate "Simple Power-Managed Bus Driver"
-	depends on OF && PM
-	help
-	  Driver for transparent busses that don't need a real driver, but
-	  where the bus controller is part of a PM domain, or under the control
-	  of a functional clock, and thus relies on runtime PM for managing
-	  this PM domain and/or clock.
-	  An example of such a bus controller is the Renesas Bus State
-	  Controller (BSC, sometimes called "LBSC within Bus Bridge", or
-	  "External Bus Interface") as found on several Renesas ARM SoCs.
-
 config SUN50I_DE2_BUS
 	bool "Allwinner A64 DE2 Bus Driver"
 	default ARM64
@@ -13 +13 @@
 #include <linux/platform_device.h>
 #include <linux/pm_runtime.h>
 
-
 static int simple_pm_bus_probe(struct platform_device *pdev)
 {
-	const struct of_dev_auxdata *lookup = dev_get_platdata(&pdev->dev);
-	struct device_node *np = pdev->dev.of_node;
+	const struct device *dev = &pdev->dev;
+	const struct of_dev_auxdata *lookup = dev_get_platdata(dev);
+	struct device_node *np = dev->of_node;
+	const struct of_device_id *match;
+
+	/*
+	 * Allow user to use driver_override to bind this driver to a
+	 * transparent bus device which has a different compatible string
+	 * that's not listed in simple_pm_bus_of_match. We don't want to do any
+	 * of the simple-pm-bus tasks for these devices, so return early.
+	 */
+	if (pdev->driver_override)
+		return 0;
+
+	match = of_match_device(dev->driver->of_match_table, dev);
+	/*
+	 * These are transparent bus devices (not simple-pm-bus matches) that
+	 * have their child nodes populated automatically. So, don't need to
+	 * do anything more. We only match with the device if this driver is
+	 * the most specific match because we don't want to incorrectly bind to
+	 * a device that has a more specific driver.
+	 */
+	if (match && match->data) {
+		if (of_property_match_string(np, "compatible", match->compatible) == 0)
+			return 0;
+		else
+			return -ENODEV;
+	}
 
 	dev_dbg(&pdev->dev, "%s\n", __func__);
@@ -56 +31 @@
 
 static int simple_pm_bus_remove(struct platform_device *pdev)
 {
+	const void *data = of_device_get_match_data(&pdev->dev);
+
+	if (pdev->driver_override || data)
+		return 0;
+
 	dev_dbg(&pdev->dev, "%s\n", __func__);
 
 	pm_runtime_disable(&pdev->dev);
 	return 0;
 }
 
+#define ONLY_BUS	((void *) 1) /* Match if the device is only a bus. */
+
 static const struct of_device_id simple_pm_bus_of_match[] = {
 	{ .compatible = "simple-pm-bus", },
+	{ .compatible = "simple-bus",	.data = ONLY_BUS },
+	{ .compatible = "simple-mfd",	.data = ONLY_BUS },
+	{ .compatible = "isa",		.data = ONLY_BUS },
+	{ .compatible = "arm,amba-bus",	.data = ONLY_BUS },
 	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
+1
drivers/clk/qcom/Kconfig
@@ -564 +564 @@
 
 config SM_GCC_6350
 	tristate "SM6350 Global Clock Controller"
+	select QCOM_GDSC
 	help
 	  Support for the global clock controller on SM6350 devices.
 	  Say Y if you want to use peripheral devices such as UART,
@@ -25 +25 @@
 #include <acpi/ghes.h>
 #include <ras/ras_event.h>
 
-static char rcd_decode_str[CPER_REC_LEN];
-
 /*
  * CPER record ID need to be unique even after reboot, because record
  * ID is used as index for ERST storage, while CPER records from
@@ -310 +312 @@
 			      struct cper_mem_err_compact *cmem)
 {
 	const char *ret = trace_seq_buffer_ptr(p);
+	char rcd_decode_str[CPER_REC_LEN];
 
 	if (cper_mem_err_location(cmem, rcd_decode_str))
 		trace_seq_printf(p, "%s", rcd_decode_str);
@@ -325 +326 @@
 		       int len)
 {
 	struct cper_mem_err_compact cmem;
+	char rcd_decode_str[CPER_REC_LEN];
 
 	/* Don't trust UEFI 2.1/2.2 structure with bad validation bits */
 	if (len == sizeof(struct cper_sec_mem_err_old) &&
@@ -414 +414 @@
 			    unsigned long data_size,
 			    efi_char16_t *data)
 {
-	if (down_interruptible(&efi_runtime_lock)) {
+	if (down_trylock(&efi_runtime_lock)) {
 		pr_warn("failed to invoke the reset_system() runtime service:\n"
 			"could not get exclusive access to the firmware\n");
 		return;
@@ -1834 +1834 @@
 				    u8 *edid, int num_blocks)
 {
 	int i;
-	u8 num_of_ext = edid[0x7e];
+	u8 last_block;
+
+	/*
+	 * 0x7e in the EDID is the number of extension blocks. The EDID
+	 * is 1 (base block) + num_ext_blocks big. That means we can think
+	 * of 0x7e in the EDID of the _index_ of the last block in the
+	 * combined chunk of memory.
+	 */
+	last_block = edid[0x7e];
 
 	/* Calculate real checksum for the last edid extension block data */
-	connector->real_edid_checksum =
-		drm_edid_block_checksum(edid + num_of_ext * EDID_LENGTH);
+	if (last_block < num_blocks)
+		connector->real_edid_checksum =
+			drm_edid_block_checksum(edid + last_block * EDID_LENGTH);
 
 	if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS))
 		return;
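The EDID fix above bounds-checks the index read from byte 0x7e before using it to address the last block. A standalone userspace sketch of the same guard (function names here are ours, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define EDID_LENGTH 128

/* A valid EDID block's bytes sum to 0 (mod 256). */
static uint8_t edid_block_checksum(const uint8_t *block)
{
	uint8_t csum = 0;

	for (size_t i = 0; i < EDID_LENGTH; i++)
		csum += block[i];
	return csum;
}

/*
 * Byte 0x7e of the base block is the extension-block count, which is
 * also the index of the last block in the combined buffer. Refuse to
 * checksum when that index points past the blocks we actually hold.
 */
static int checksum_last_block(const uint8_t *edid, int num_blocks,
			       uint8_t *out)
{
	uint8_t last_block = edid[0x7e];

	if (last_block >= num_blocks)
		return -1;	/* would read past the buffer */
	*out = edid_block_checksum(edid + (size_t)last_block * EDID_LENGTH);
	return 0;
}
```

Without the `last_block >= num_blocks` check, a corrupt EDID claiming many extensions would make the checksum read out of bounds, which is exactly what the hunk prevents.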
+6
drivers/gpu/drm/drm_fb_helper.c
@@ -1506 +1506 @@
 {
 	struct drm_client_dev *client = &fb_helper->client;
 	struct drm_device *dev = fb_helper->dev;
+	struct drm_mode_config *config = &dev->mode_config;
 	int ret = 0;
 	int crtc_count = 0;
 	struct drm_connector_list_iter conn_iter;
@@ -1664 +1663 @@
 	/* Handle our overallocation */
 	sizes.surface_height *= drm_fbdev_overalloc;
 	sizes.surface_height /= 100;
+	if (sizes.surface_height > config->max_height) {
+		drm_dbg_kms(dev, "Fbdev over-allocation too large; clamping height to %d\n",
+			    config->max_height);
+		sizes.surface_height = config->max_height;
+	}
 
 	/* push down into drivers */
 	ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
@@ -299 +299 @@
 	return 0;
 }
 
+/*
+ * Hyper-V supports a hardware cursor feature. It's not used by Linux VM,
+ * but the Hyper-V host still draws a point as an extra mouse pointer,
+ * which is unwanted, especially when Xorg is running.
+ *
+ * The hyperv_fb driver uses synthvid_send_ptr() to hide the unwanted
+ * pointer, by setting msg.ptr_pos.is_visible = 1 and setting the
+ * msg.ptr_shape.data. Note: setting msg.ptr_pos.is_visible to 0 doesn't
+ * work in tests.
+ *
+ * Copy synthvid_send_ptr() to hyperv_drm and rename it to
+ * hyperv_hide_hw_ptr(). Note: hyperv_hide_hw_ptr() is also called in the
+ * handler of the SYNTHVID_FEATURE_CHANGE event, otherwise the host still
+ * draws an extra unwanted mouse pointer after the VM Connection window is
+ * closed and reopened.
+ */
+int hyperv_hide_hw_ptr(struct hv_device *hdev)
+{
+	struct synthvid_msg msg;
+
+	memset(&msg, 0, sizeof(struct synthvid_msg));
+	msg.vid_hdr.type = SYNTHVID_POINTER_POSITION;
+	msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
+		sizeof(struct synthvid_pointer_position);
+	msg.ptr_pos.is_visible = 1;
+	msg.ptr_pos.video_output = 0;
+	msg.ptr_pos.image_x = 0;
+	msg.ptr_pos.image_y = 0;
+	hyperv_sendpacket(hdev, &msg);
+
+	memset(&msg, 0, sizeof(struct synthvid_msg));
+	msg.vid_hdr.type = SYNTHVID_POINTER_SHAPE;
+	msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
+		sizeof(struct synthvid_pointer_shape);
+	msg.ptr_shape.part_idx = SYNTHVID_CURSOR_COMPLETE;
+	msg.ptr_shape.is_argb = 1;
+	msg.ptr_shape.width = 1;
+	msg.ptr_shape.height = 1;
+	msg.ptr_shape.hot_x = 0;
+	msg.ptr_shape.hot_y = 0;
+	msg.ptr_shape.data[0] = 0;
+	msg.ptr_shape.data[1] = 1;
+	msg.ptr_shape.data[2] = 1;
+	msg.ptr_shape.data[3] = 1;
+	hyperv_sendpacket(hdev, &msg);
+
+	return 0;
+}
+
 int hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect)
 {
 	struct hyperv_drm_device *hv = hv_get_drvdata(hdev);
@@ -441 +392 @@
 		return;
 	}
 
-	if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE)
+	if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE) {
 		hv->dirt_needed = msg->feature_chg.is_dirt_needed;
+		if (hv->dirt_needed)
+			hyperv_hide_hw_ptr(hv->hdev);
+	}
 }
 
 static void hyperv_receive(void *ctx)
@@ -571 +571 @@
 	}
 
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
-	ret = IS_ERR(icc_path);
-	if (ret)
+	if (IS_ERR(icc_path)) {
+		ret = PTR_ERR(icc_path);
 		goto fail;
+	}
 
 	ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
-	ret = IS_ERR(ocmem_icc_path);
-	if (ret) {
+	if (IS_ERR(ocmem_icc_path)) {
+		ret = PTR_ERR(ocmem_icc_path);
 		/* allow -ENODATA, ocmem icc is optional */
 		if (ret != -ENODATA)
 			goto fail;
+5-4
drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -699 +699 @@
 	}
 
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
-	ret = IS_ERR(icc_path);
-	if (ret)
+	if (IS_ERR(icc_path)) {
+		ret = PTR_ERR(icc_path);
 		goto fail;
+	}
 
 	ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
-	ret = IS_ERR(ocmem_icc_path);
-	if (ret) {
+	if (IS_ERR(ocmem_icc_path)) {
+		ret = PTR_ERR(ocmem_icc_path);
 		/* allow -ENODATA, ocmem icc is optional */
 		if (ret != -ENODATA)
 			goto fail;
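The a3xx and a4xx hunks fix the same pitfall: `ret = IS_ERR(p)` stores the truth value 1, not the encoded errno, so callers see a meaningless positive 1 instead of, say, `-ENODATA`. A minimal userspace re-creation of the ERR_PTR convention showing both the bug and the fix (the constants and helper names below are simplifications, not the kernel's `<linux/err.h>`):

```c
#include <assert.h>

#define MAX_ERRNO	4095
#define ENODATA_ERR	61	/* stand-in for the ENODATA errno value */

/* Errno codes live in the top MAX_ERRNO addresses of pointer space. */
static void *ERR_PTR(long error)
{
	return (void *)error;
}

static long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* The bug: ret holds 1 (the IS_ERR truth value); the errno is lost. */
static long buggy_unwrap(void *p)
{
	long ret = IS_ERR(p);

	return ret;
}

/* The fix: test with IS_ERR(), but fetch the code with PTR_ERR(). */
static long fixed_unwrap(void *p)
{
	if (IS_ERR(p))
		return PTR_ERR(p);
	return 0;
}
```

This is why the driver's optional-resource check `if (ret != -ENODATA)` could never match before the fix: `ret` was 1, so even the optional path bailed out.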
+6
drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -296 +296 @@
 	u32 val;
 	int request, ack;
 
+	WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
+
 	if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
 		return -EINVAL;
@@ -338 +336 @@
 void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
 {
 	int bit;
+
+	WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
 
 	if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
 		return;
@@ -1485 +1481 @@
 
 	if (!pdev)
 		return -ENODEV;
+
+	mutex_init(&gmu->lock);
 
 	gmu->dev = &pdev->dev;
 
+3
drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -44 +44 @@
 struct a6xx_gmu {
 	struct device *dev;
 
+	/* For serializing communication with the GMU: */
+	struct mutex lock;
+
 	struct msm_gem_address_space *aspace;
 
 	void * __iomem mmio;
+44-9
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -106 +106 @@
 	u32 asid;
 	u64 memptr = rbmemptr(ring, ttbr0);
 
-	if (ctx == a6xx_gpu->cur_ctx)
+	if (ctx->seqno == a6xx_gpu->cur_ctx_seqno)
 		return;
 
 	if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
@@ -139 +139 @@
 	OUT_PKT7(ring, CP_EVENT_WRITE, 1);
 	OUT_RING(ring, 0x31);
 
-	a6xx_gpu->cur_ctx = ctx;
+	a6xx_gpu->cur_ctx_seqno = ctx->seqno;
 }
 
 static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
@@ -881 +881 @@
 	  A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \
 	  A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR)
 
-static int a6xx_hw_init(struct msm_gpu *gpu)
+static int hw_init(struct msm_gpu *gpu)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
@@ -1081 +1081 @@
 	/* Always come up on rb 0 */
 	a6xx_gpu->cur_ring = gpu->rb[0];
 
-	a6xx_gpu->cur_ctx = NULL;
+	a6xx_gpu->cur_ctx_seqno = 0;
 
 	/* Enable the SQE_to start the CP engine */
 	gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1);
@@ -1131 +1131 @@
 		/* Take the GMU out of its special boot mode */
 		a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_BOOT_SLUMBER);
 	}
+
+	return ret;
+}
+
+static int a6xx_hw_init(struct msm_gpu *gpu)
+{
+	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+	int ret;
+
+	mutex_lock(&a6xx_gpu->gmu.lock);
+	ret = hw_init(gpu);
+	mutex_unlock(&a6xx_gpu->gmu.lock);
 
 	return ret;
 }
@@ -1522 +1509 @@
 
 	trace_msm_gpu_resume(0);
 
+	mutex_lock(&a6xx_gpu->gmu.lock);
 	ret = a6xx_gmu_resume(a6xx_gpu);
+	mutex_unlock(&a6xx_gpu->gmu.lock);
 	if (ret)
 		return ret;
@@ -1547 +1532 @@
 
 	msm_devfreq_suspend(gpu);
 
+	mutex_lock(&a6xx_gpu->gmu.lock);
 	ret = a6xx_gmu_stop(a6xx_gpu);
+	mutex_unlock(&a6xx_gpu->gmu.lock);
 	if (ret)
 		return ret;
@@ -1564 +1547 @@
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
-	static DEFINE_MUTEX(perfcounter_oob);
 
-	mutex_lock(&perfcounter_oob);
+	mutex_lock(&a6xx_gpu->gmu.lock);
 
 	/* Force the GPU power on so we can read this register */
 	a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
 
 	*value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
-		REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
+			    REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
 
 	a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
-	mutex_unlock(&perfcounter_oob);
+
+	mutex_unlock(&a6xx_gpu->gmu.lock);
+
 	return 0;
 }
@@ -1638 +1620 @@
 		return ~0LU;
 
 	return (unsigned long)busy_time;
+}
+
+void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
+{
+	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+
+	mutex_lock(&a6xx_gpu->gmu.lock);
+	a6xx_gmu_set_freq(gpu, opp);
+	mutex_unlock(&a6xx_gpu->gmu.lock);
 }
 
 static struct msm_gem_address_space *
@@ -1794 +1766 @@
 #endif
 		.gpu_busy = a6xx_gpu_busy,
 		.gpu_get_freq = a6xx_gmu_get_freq,
-		.gpu_set_freq = a6xx_gmu_set_freq,
+		.gpu_set_freq = a6xx_gpu_set_freq,
 #if defined(CONFIG_DRM_MSM_GPU_STATE)
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
@@ -1837 +1809 @@
 	if (info && (info->revn == 650 || info->revn == 660 ||
 			adreno_cmp_rev(ADRENO_REV(6, 3, 5, ANY_ID), info->rev)))
 		adreno_gpu->base.hw_apriv = true;
+
+	/*
+	 * For now only clamp to idle freq for devices where this is known not
+	 * to cause power supply issues:
+	 */
+	if (info && (info->revn == 618))
+		gpu->clamp_to_idle = true;
 
 	a6xx_llc_slices_init(pdev, a6xx_gpu);
 
+10-1
drivers/gpu/drm/msm/adreno/a6xx_gpu.h
@@ -19 +19 @@
 	uint64_t sqe_iova;
 
 	struct msm_ringbuffer *cur_ring;
-	struct msm_file_private *cur_ctx;
+
+	/**
+	 * cur_ctx_seqno:
+	 *
+	 * The ctx->seqno value of the context with current pgtables
+	 * installed.  Tracked by seqno rather than pointer value to
+	 * avoid dangling pointers, and cases where a ctx can be freed
+	 * and a new one created with the same address.
+	 */
+	int cur_ctx_seqno;
 
 	struct a6xx_gmu gmu;
 
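The new field's comment explains why the driver compares sequence numbers rather than pointers: a freed context's address can be recycled by the allocator, making a stale pointer compare equal to a brand-new context. A toy model of the idea, assuming each context gets a never-reused `seqno` (all names below are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdlib.h>

struct ctx {
	int seqno;
};

static int next_seqno;		/* monotonically increasing, never reused */
static int cur_ctx_seqno;	/* seqno of the currently installed context */

static struct ctx *ctx_create(void)
{
	struct ctx *c = malloc(sizeof(*c));

	c->seqno = ++next_seqno;
	return c;
}

/*
 * Pointer comparison would wrongly report "same context" if a freed
 * context's address were recycled; seqno comparison cannot, since the
 * recycled allocation carries a fresh seqno.
 */
static int need_switch(const struct ctx *c)
{
	return c->seqno != cur_ctx_seqno;
}

static void install(const struct ctx *c)
{
	cur_ctx_seqno = c->seqno;
}
```

With pointer tracking, `ctx_create()` returning the just-freed address would silently skip the page-table switch; with seqno tracking it is always detected.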
@@ -1309 +1309 @@
 	 * can not declared display is connected unless
 	 * HDMI cable is plugged in and sink_count of
 	 * dongle become 1
+	 * also only signal audio when disconnected
 	 */
-	if (dp->link->sink_count)
+	if (dp->link->sink_count) {
 		dp->dp_display.is_connected = true;
-	else
+	} else {
 		dp->dp_display.is_connected = false;
-
-	dp_display_handle_plugged_change(g_dp_display,
-			dp->dp_display.is_connected);
+		dp_display_handle_plugged_change(g_dp_display, false);
+	}
 
 	DRM_DEBUG_DP("After, sink_count=%d is_connected=%d core_inited=%d power_on=%d\n",
 		dp->link->sink_count, dp->dp_display.is_connected,
+3-1
drivers/gpu/drm/msm/dsi/dsi.c
@@ -215 +215 @@
 		goto fail;
 	}
 
-	if (!msm_dsi_manager_validate_current_config(msm_dsi->id))
+	if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) {
+		ret = -EINVAL;
 		goto fail;
+	}
 
 	msm_dsi->encoder = encoder;
 
@@ -295 +295 @@
 	depends on OF
 	depends on I2C
 	depends on BACKLIGHT_CLASS_DEVICE
+	select CRC32
 	help
 	  The panel is used with different sizes LCDs, from 480x272 to
 	  1280x800, and 24 bit per pixel.
@@ -86 +86 @@
 	}
 
 	/*
-	 * Create and initialize the encoder. On Gen3 skip the LVDS1 output if
+	 * Create and initialize the encoder. On Gen3, skip the LVDS1 output if
 	 * the LVDS1 encoder is used as a companion for LVDS0 in dual-link
-	 * mode.
+	 * mode, or any LVDS output if it isn't connected. The latter may happen
+	 * on D3 or E3 as the LVDS encoders are needed to provide the pixel
+	 * clock to the DU, even when the LVDS outputs are not used.
 	 */
-	if (rcdu->info->gen >= 3 && output == RCAR_DU_OUTPUT_LVDS1) {
-		if (rcar_lvds_dual_link(bridge))
+	if (rcdu->info->gen >= 3) {
+		if (output == RCAR_DU_OUTPUT_LVDS1 &&
+		    rcar_lvds_dual_link(bridge))
+			return -ENOLINK;
+
+		if ((output == RCAR_DU_OUTPUT_LVDS0 ||
+		     output == RCAR_DU_OUTPUT_LVDS1) &&
+		    !rcar_lvds_is_connected(bridge))
 			return -ENOLINK;
 	}
 
@@ -738 +738 @@
 
 	if (reg & FXLS8962AF_INT_STATUS_SRC_BUF) {
 		ret = fxls8962af_fifo_flush(indio_dev);
-		if (ret)
+		if (ret < 0)
 			return IRQ_NONE;
 
 		return IRQ_HANDLED;
@@ -353 +353 @@
 	if (dec > st->info->max_dec)
 		dec = st->info->max_dec;
 
-	ret = adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec);
+	ret = __adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec);
 	if (ret)
 		goto error;
 
+	adis_dev_unlock(&st->adis);
 	/*
 	 * If decimation is used, then gyro and accel data will have meaningful
 	 * bits on the LSB registers. This info is used on the trigger handler.
@@ -276 +276 @@
 		ret = wait_event_timeout(opt->result_ready_queue,
 				opt->result_ready,
 				msecs_to_jiffies(OPT3001_RESULT_READY_LONG));
+		if (ret == 0)
+			return -ETIMEDOUT;
 	} else {
 		/* Sleep for result ready time */
 		timeout = (opt->int_time == OPT3001_INT_TIME_SHORT) ?
@@ -314 +312 @@
 	/* Disallow IRQ to access the device while lock is active */
 	opt->ok_to_ignore_lock = false;
 
-	if (ret == 0)
-		return -ETIMEDOUT;
-	else if (ret < 0)
+	if (ret < 0)
 		return ret;
 
 	if (opt->use_irq) {
@@ -71 +71 @@
 		unsigned int z2 = touch_info[st->ch_map[GRTS_CH_Z2]];
 		unsigned int Rt;
 
-		Rt = z2;
-		Rt -= z1;
-		Rt *= st->x_plate_ohms;
-		Rt = DIV_ROUND_CLOSEST(Rt, 16);
-		Rt *= x;
-		Rt /= z1;
-		Rt = DIV_ROUND_CLOSEST(Rt, 256);
-		/*
-		 * On increased pressure the resistance (Rt) is decreasing
-		 * so, convert values to make it looks as real pressure.
-		 */
-		if (Rt < GRTS_DEFAULT_PRESSURE_MAX)
-			press = GRTS_DEFAULT_PRESSURE_MAX - Rt;
+		if (likely(x && z1)) {
+			Rt = z2;
+			Rt -= z1;
+			Rt *= st->x_plate_ohms;
+			Rt = DIV_ROUND_CLOSEST(Rt, 16);
+			Rt *= x;
+			Rt /= z1;
+			Rt = DIV_ROUND_CLOSEST(Rt, 256);
+			/*
+			 * On increased pressure the resistance (Rt) is
+			 * decreasing so, convert values to make it looks as
+			 * real pressure.
+			 */
+			if (Rt < GRTS_DEFAULT_PRESSURE_MAX)
+				press = GRTS_DEFAULT_PRESSURE_MAX - Rt;
+		}
 	}
 
 	if ((!x && !y) || (st->pressure && (press < st->pressure_min))) {
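The guard added above prevents a divide-by-zero in the plate-resistance formula when `x` or `z1` reads as 0 (no touch). The same arithmetic lifted into a plain function, for illustration only (the `0xffff` ceiling is our assumed stand-in for `GRTS_DEFAULT_PRESSURE_MAX`):

```c
#include <assert.h>

#define PRESSURE_MAX 0xffff	/* assumed stand-in for GRTS_DEFAULT_PRESSURE_MAX */
#define DIV_ROUND_CLOSEST(x, d) (((x) + ((d) / 2)) / (d))

/*
 * Rt from a 4-wire resistive reading:
 *   Rt = round((z2 - z1) * x_plate_ohms / 16) * x / z1, then round(/256)
 * Higher pressure means lower Rt, so report PRESSURE_MAX - Rt.
 * Returns 0 when x or z1 is 0, which would otherwise divide by zero.
 */
static unsigned int touch_pressure(unsigned int x, unsigned int z1,
				   unsigned int z2,
				   unsigned int x_plate_ohms)
{
	unsigned int Rt, press = 0;

	if (x && z1) {
		Rt = z2 - z1;
		Rt *= x_plate_ohms;
		Rt = DIV_ROUND_CLOSEST(Rt, 16);
		Rt *= x;
		Rt /= z1;
		Rt = DIV_ROUND_CLOSEST(Rt, 256);
		if (Rt < PRESSURE_MAX)
			press = PRESSURE_MAX - Rt;
	}
	return press;
}
```

Before the fix, an interrupt arriving with `z1 == 0` would fault in `Rt /= z1`; the guarded version simply reports zero pressure for that sample.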
+8
drivers/iommu/Kconfig
@@ -355 +355 @@
 	  'arm-smmu.disable_bypass' will continue to override this
 	  config.
 
+config ARM_SMMU_QCOM
+	def_tristate y
+	depends on ARM_SMMU && ARCH_QCOM
+	select QCOM_SCM
+	help
+	  When running on a Qualcomm platform that has the custom variant
+	  of the ARM SMMU, this needs to be built into the SMMU driver.
+
 config ARM_SMMU_V3
 	tristate "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
 	depends on ARM64
+4-4
drivers/isdn/hardware/mISDN/hfcpci.c
@@ -1994 +1994 @@
 	pci_set_master(hc->pdev);
 	if (!hc->irq) {
 		printk(KERN_WARNING "HFC-PCI: No IRQ for PCI card found\n");
-		return 1;
+		return -EINVAL;
 	}
 	hc->hw.pci_io =
 		(char __iomem *)(unsigned long)hc->pdev->resource[1].start;
 
 	if (!hc->hw.pci_io) {
 		printk(KERN_WARNING "HFC-PCI: No IO-Mem for PCI card found\n");
-		return 1;
+		return -ENOMEM;
 	}
 	/* Allocate memory for FIFOS */
 	/* the memory needs to be on a 32k boundary within the first 4G */
@@ -2012 +2012 @@
 	if (!buffer) {
 		printk(KERN_WARNING
 		       "HFC-PCI: Error allocating memory for FIFO!\n");
-		return 1;
+		return -ENOMEM;
 	}
 	hc->hw.fifos = buffer;
 	pci_write_config_dword(hc->pdev, 0x80, hc->hw.dmahandle);
@@ -2022 +2022 @@
 		       "HFC-PCI: Error in ioremap for PCI!\n");
 		dma_free_coherent(&hc->pdev->dev, 0x8000, hc->hw.fifos,
 				  hc->hw.dmahandle);
-		return 1;
+		return -ENOMEM;
 	}
 
 	printk(KERN_INFO
@@ -490 +490 @@
 	struct mapped_device *md = tio->md;
 	struct dm_target *ti = md->immutable_target;
 
+	/*
+	 * blk-mq's unquiesce may come from outside events, such as
+	 * elevator switch, updating nr_requests or others, and request may
+	 * come during suspend, so simply ask for blk-mq to requeue it.
+	 */
+	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)))
+		return BLK_STS_RESOURCE;
+
 	if (unlikely(!ti)) {
 		int srcu_idx;
 		struct dm_table *map = dm_get_live_table(md, &srcu_idx);
+12-3
drivers/md/dm-verity-target.c
@@ -475 +475 @@
 	struct bvec_iter start;
 	unsigned b;
 	struct crypto_wait wait;
+	struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
 
 	for (b = 0; b < io->n_blocks; b++) {
 		int r;
@@ -530 +529 @@
 		else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA,
 					   cur_block, NULL, &start) == 0)
 			continue;
-		else if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA,
-					   cur_block))
-			return -EIO;
+		else {
+			if (bio->bi_status) {
+				/*
+				 * Error correction failed; Just return error
+				 */
+				return -EIO;
+			}
+			if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA,
+					      cur_block))
+				return -EIO;
+		}
 	}
 
 	return 0;
+10-7
drivers/md/dm.c
@@ -496 +496 @@
 				    false, 0, &io->stats_aux);
 }
 
-static void end_io_acct(struct dm_io *io)
+static void end_io_acct(struct mapped_device *md, struct bio *bio,
+			unsigned long start_time, struct dm_stats_aux *stats_aux)
 {
-	struct mapped_device *md = io->md;
-	struct bio *bio = io->orig_bio;
-	unsigned long duration = jiffies - io->start_time;
+	unsigned long duration = jiffies - start_time;
 
-	bio_end_io_acct(bio, io->start_time);
+	bio_end_io_acct(bio, start_time);
 
 	if (unlikely(dm_stats_used(&md->stats)))
 		dm_stats_account_io(&md->stats, bio_data_dir(bio),
 				    bio->bi_iter.bi_sector, bio_sectors(bio),
-				    true, duration, &io->stats_aux);
+				    true, duration, stats_aux);
 
 	/* nudge anyone waiting on suspend queue */
 	if (unlikely(wq_has_sleeper(&md->wait)))
@@ -789 +790 @@
 	blk_status_t io_error;
 	struct bio *bio;
 	struct mapped_device *md = io->md;
+	unsigned long start_time = 0;
+	struct dm_stats_aux stats_aux;
 
 	/* Push-back supersedes any I/O errors */
 	if (unlikely(error)) {
@@ -822 +821 @@
 	}
 
 	io_error = io->status;
-	end_io_acct(io);
+	start_time = io->start_time;
+	stats_aux = io->stats_aux;
 	free_io(md, io);
+	end_io_acct(md, bio, start_time, &stats_aux);
 
 	if (io_error == BLK_STS_DM_REQUEUE)
 		return;
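The dm.c hunks enforce a common lifetime rule: copy everything the post-free work needs out of the object before `free_io()`, then do the accounting from the copies, so nothing dereferences freed memory. A reduced sketch of that ordering (the struct fields and names below are invented for illustration, not dm's real layout):

```c
#include <assert.h>
#include <stdlib.h>

struct stats_aux {
	int blocked;
};

struct io {
	unsigned long start_time;
	struct stats_aux stats_aux;
};

static unsigned long accounted_start;

static void account(unsigned long start_time, const struct stats_aux *aux)
{
	/* stands in for bio_end_io_acct() / dm_stats_account_io() */
	accounted_start = start_time;
	(void)aux;
}

/*
 * Copy the fields first, free the object, account last: the accounting
 * step only ever touches the stack copies, never the freed allocation.
 */
static void end_io(struct io *io)
{
	unsigned long start_time = io->start_time;
	struct stats_aux aux = io->stats_aux;

	free(io);
	account(start_time, &aux);
}
```

The pre-fix ordering (`end_io_acct(io); free_io(...)`) only worked because accounting happened before the free; once other code paths needed the free to come first, passing copies became the safe shape.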
+1
drivers/misc/Kconfig
@@ -224 +224 @@
 	tristate "HiSilicon Hi6421v600 IRQ and powerkey"
 	depends on OF
 	depends on SPMI
+	depends on HAS_IOMEM
 	select MFD_CORE
 	select REGMAP_SPMI
 	help
@@ -2649 +2649 @@
 free_seq_arr:
 	kfree(cs_seq_arr);
 
-	/* update output args */
-	memset(args, 0, sizeof(*args));
 	if (rc)
 		return rc;
+
+	if (mcs_data.wait_status == -ERESTARTSYS) {
+		dev_err_ratelimited(hdev->dev,
+				"user process got signal while waiting for Multi-CS\n");
+		return -EINTR;
+	}
+
+	/* update output args */
+	memset(args, 0, sizeof(*args));
 
 	if (mcs_data.completion_bitmap) {
 		args->out.status = HL_WAIT_CS_STATUS_COMPLETED;
@@ -2674 +2667 @@
 		/* update if some CS was gone */
 		if (mcs_data.timestamp)
 			args->out.flags |= HL_WAIT_CS_STATUS_FLAG_GONE;
-	} else if (mcs_data.wait_status == -ERESTARTSYS) {
-		args->out.status = HL_WAIT_CS_STATUS_INTERRUPTED;
 	} else {
 		args->out.status = HL_WAIT_CS_STATUS_BUSY;
 	}
@@ -2693 +2688 @@
 	rc = _hl_cs_wait_ioctl(hdev, hpriv->ctx, args->in.timeout_us, seq,
 				&status, &timestamp);
 
+	if (rc == -ERESTARTSYS) {
+		dev_err_ratelimited(hdev->dev,
+			"user process got signal while waiting for CS handle %llu\n",
+			seq);
+		return -EINTR;
+	}
+
 	memset(args, 0, sizeof(*args));
 
 	if (rc) {
-		if (rc == -ERESTARTSYS) {
-			dev_err_ratelimited(hdev->dev,
-				"user process got signal while waiting for CS handle %llu\n",
-				seq);
-			args->out.status = HL_WAIT_CS_STATUS_INTERRUPTED;
-			rc = -EINTR;
-		} else if (rc == -ETIMEDOUT) {
+		if (rc == -ETIMEDOUT) {
 			dev_err_ratelimited(hdev->dev,
 				"CS %llu has timed-out while user process is waiting for it\n",
 				seq);
@@ -2829 +2823 @@
 		dev_err_ratelimited(hdev->dev,
 			"user process got signal while waiting for interrupt ID %d\n",
 			interrupt->interrupt_id);
-		*status = HL_WAIT_CS_STATUS_INTERRUPTED;
 		rc = -EINTR;
 	} else {
 		*status = CS_WAIT_STATUS_BUSY;
@@ -2883 +2878 @@
 			args->in.interrupt_timeout_us, args->in.addr,
 			args->in.target, interrupt_offset, &status);
 
-	memset(args, 0, sizeof(*args));
-
 	if (rc) {
 		if (rc != -EINTR)
 			dev_err_ratelimited(hdev->dev,
@@ -2890 +2887 @@
 
 		return rc;
 	}
+
+	memset(args, 0, sizeof(*args));
 
 	switch (status) {
 	case CS_WAIT_STATUS_COMPLETED:
+8-4
drivers/misc/mei/hbm.c
@@ -1298 +1298 @@
 
 	if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
 	    dev->hbm_state != MEI_HBM_STARTING) {
-		if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+		if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+		    dev->dev_state == MEI_DEV_POWERING_DOWN) {
 			dev_dbg(dev->dev, "hbm: start: on shutdown, ignoring\n");
 			return 0;
 		}
@@ -1382 +1381 @@
 
 	if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
 	    dev->hbm_state != MEI_HBM_DR_SETUP) {
-		if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+		if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+		    dev->dev_state == MEI_DEV_POWERING_DOWN) {
 			dev_dbg(dev->dev, "hbm: dma setup response: on shutdown, ignoring\n");
 			return 0;
 		}
@@ -1450 +1448 @@
 
 	if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
 	    dev->hbm_state != MEI_HBM_CLIENT_PROPERTIES) {
-		if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+		if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+		    dev->dev_state == MEI_DEV_POWERING_DOWN) {
 			dev_dbg(dev->dev, "hbm: properties response: on shutdown, ignoring\n");
 			return 0;
 		}
@@ -1493 +1490 @@
 
 	if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
 	    dev->hbm_state != MEI_HBM_ENUM_CLIENTS) {
-		if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+		if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+		    dev->dev_state == MEI_DEV_POWERING_DOWN) {
 			dev_dbg(dev->dev, "hbm: enumeration response: on shutdown, ignoring\n");
 			return 0;
 		}
+1
drivers/misc/mei/hw-me-regs.h
@@ -92 +92 @@
 #define MEI_DEV_ID_CDF        0x18D3  /* Cedar Fork */
 
 #define MEI_DEV_ID_ICP_LP     0x34E0  /* Ice Lake Point LP */
+#define MEI_DEV_ID_ICP_N      0x38E0  /* Ice Lake Point N */
 
 #define MEI_DEV_ID_JSP_N      0x4DE0  /* Jasper Lake Point N */
 
@@ -752 +752 @@
 		struct net_device *prev_dev = chan->prev_dev;
 
 		dev_info(&pdev->dev, "removing device %s\n", dev->name);
+		/* do that only for first channel */
+		if (!prev_dev && chan->pciec_card)
+			peak_pciec_remove(chan->pciec_card);
 		unregister_sja1000dev(dev);
 		free_sja1000dev(dev);
 		dev = prev_dev;
 
-		if (!dev) {
-			/* do that only for first channel */
-			if (chan->pciec_card)
-				peak_pciec_remove(chan->pciec_card);
+		if (!dev)
 			break;
-		}
 		priv = netdev_priv(dev);
 		chan = priv->priv;
 	}
+3-5
drivers/net/can/usb/peak_usb/pcan_usb_fd.c
@@ -551 +551 @@
 	} else if (sm->channel_p_w_b & PUCAN_BUS_WARNING) {
 		new_state = CAN_STATE_ERROR_WARNING;
 	} else {
-		/* no error bit (so, no error skb, back to active state) */
-		dev->can.state = CAN_STATE_ERROR_ACTIVE;
+		/* back to (or still in) ERROR_ACTIVE state */
+		new_state = CAN_STATE_ERROR_ACTIVE;
 		pdev->bec.txerr = 0;
 		pdev->bec.rxerr = 0;
-		return 0;
 	}
 
 	/* state hasn't changed */
@@ -567 +568 @@
 
 	/* allocate an skb to store the error frame */
 	skb = alloc_can_err_skb(netdev, &cf);
-	if (skb)
-		can_change_state(netdev, cf, tx_state, rx_state);
+	can_change_state(netdev, cf, tx_state, rx_state);
 
 	/* things must be done even in case of OOM */
 	if (new_state == CAN_STATE_BUS_OFF)
@@ -1035 +1035 @@
 {
 	struct mt7530_priv *priv = ds->priv;
 
-	if (!dsa_is_user_port(ds, port))
-		return 0;
-
 	mutex_lock(&priv->reg_mutex);
 
 	/* Allow the user port gets connected to the cpu port and also
@@ -1056 +1059 @@
 mt7530_port_disable(struct dsa_switch *ds, int port)
 {
 	struct mt7530_priv *priv = ds->priv;
-
-	if (!dsa_is_user_port(ds, port))
-		return;
 
 	mutex_lock(&priv->reg_mutex);
 
@@ -3205 +3211 @@
 		return -ENOMEM;
 
 	priv->ds->dev = &mdiodev->dev;
-	priv->ds->num_ports = DSA_MAX_PORTS;
+	priv->ds->num_ports = MT7530_NUM_PORTS;
 
 	/* Use medatek,mcm property to distinguish hardware type that would
 	 * casues a little bit differences on power-on sequence.
···
 static LIST_HEAD(hnae3_client_list);
 static LIST_HEAD(hnae3_ae_dev_list);

+void hnae3_unregister_ae_algo_prepare(struct hnae3_ae_algo *ae_algo)
+{
+	const struct pci_device_id *pci_id;
+	struct hnae3_ae_dev *ae_dev;
+
+	if (!ae_algo)
+		return;
+
+	list_for_each_entry(ae_dev, &hnae3_ae_dev_list, node) {
+		if (!hnae3_get_bit(ae_dev->flag, HNAE3_DEV_INITED_B))
+			continue;
+
+		pci_id = pci_match_id(ae_algo->pdev_id_table, ae_dev->pdev);
+		if (!pci_id)
+			continue;
+		if (IS_ENABLED(CONFIG_PCI_IOV))
+			pci_disable_sriov(ae_dev->pdev);
+	}
+}
+EXPORT_SYMBOL(hnae3_unregister_ae_algo_prepare);
+
 /* we are keeping things simple and using single lock for all the
  * list. This is a non-critical code so other updations, if happen
  * in parallel, can wait.
···
 static int hns3_skb_linearize(struct hns3_enet_ring *ring,
 			      struct sk_buff *skb,
-			      u8 max_non_tso_bd_num,
 			      unsigned int bd_num)
 {
 	/* 'bd_num == UINT_MAX' means the skb' fraglist has a
···
 	 * will not help.
 	 */
 	if (skb->len > HNS3_MAX_TSO_SIZE ||
-	    (!skb_is_gso(skb) && skb->len >
-	     HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))) {
+	    (!skb_is_gso(skb) && skb->len > HNS3_MAX_NON_TSO_SIZE)) {
 		u64_stats_update_begin(&ring->syncp);
 		ring->stats.hw_limitation++;
 		u64_stats_update_end(&ring->syncp);
···
 		goto out;
 	}

-	if (hns3_skb_linearize(ring, skb, max_non_tso_bd_num,
-			       bd_num))
+	if (hns3_skb_linearize(ring, skb, bd_num))
 		return -ENOMEM;

 	bd_num = hns3_tx_bd_count(skb->len);
···
 {
 	hns3_unmap_buffer(ring, &ring->desc_cb[i]);
 	ring->desc[i].addr = 0;
+	ring->desc_cb[i].refill = 0;
 }

 static void hns3_free_buffer_detach(struct hns3_enet_ring *ring, int i,
···

 	ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma +
 					 ring->desc_cb[i].page_offset);
+	ring->desc_cb[i].refill = 1;

 	return 0;
 }
···
 {
 	hns3_unmap_buffer(ring, &ring->desc_cb[i]);
 	ring->desc_cb[i] = *res_cb;
+	ring->desc_cb[i].refill = 1;
 	ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma +
 					 ring->desc_cb[i].page_offset);
 	ring->desc[i].rx.bd_base_info = 0;
···
 static void hns3_reuse_buffer(struct hns3_enet_ring *ring, int i)
 {
 	ring->desc_cb[i].reuse_flag = 0;
+	ring->desc_cb[i].refill = 1;
 	ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma +
 					 ring->desc_cb[i].page_offset);
 	ring->desc[i].rx.bd_base_info = 0;
···
 	int ntc = ring->next_to_clean;
 	int ntu = ring->next_to_use;

+	if (unlikely(ntc == ntu && !ring->desc_cb[ntc].refill))
+		return ring->desc_num;
+
 	return ((ntc >= ntu) ? 0 : ring->desc_num) + ntc - ntu;
 }

-static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
+/* Return true if there is any allocation failure */
+static bool hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring,
 				      int cleand_count)
 {
 	struct hns3_desc_cb *desc_cb;
···
 			hns3_rl_err(ring_to_netdev(ring),
 				    "alloc rx buffer failed: %d\n",
 				    ret);
-			break;
+
+			writel(i, ring->tqp->io_base +
+			       HNS3_RING_RX_RING_HEAD_REG);
+			return true;
 		}
 		hns3_replace_buffer(ring, ring->next_to_use, &res_cbs);

···
 	}

 	writel(i, ring->tqp->io_base + HNS3_RING_RX_RING_HEAD_REG);
+	return false;
 }

 static bool hns3_can_reuse_page(struct hns3_desc_cb *cb)
···
 {
 	ring->desc[ring->next_to_clean].rx.bd_base_info &=
 		cpu_to_le32(~BIT(HNS3_RXD_VLD_B));
+	ring->desc_cb[ring->next_to_clean].refill = 0;
 	ring->next_to_clean += 1;

 	if (unlikely(ring->next_to_clean == ring->desc_num))
···
 {
 #define RCB_NOF_ALLOC_RX_BUFF_ONCE 16
 	int unused_count = hns3_desc_unused(ring);
+	bool failure = false;
 	int recv_pkts = 0;
 	int err;

···
 	while (recv_pkts < budget) {
 		/* Reuse or realloc buffers */
 		if (unused_count >= RCB_NOF_ALLOC_RX_BUFF_ONCE) {
-			hns3_nic_alloc_rx_buffers(ring, unused_count);
-			unused_count = hns3_desc_unused(ring) -
-					ring->pending_buf;
+			failure = failure ||
+				  hns3_nic_alloc_rx_buffers(ring, unused_count);
+			unused_count = 0;
 		}

 		/* Poll one pkt */
···
 	}

 out:
-	/* Make all data has been write before submit */
-	if (unused_count > 0)
-		hns3_nic_alloc_rx_buffers(ring, unused_count);
-
-	return recv_pkts;
+	return failure ? budget : recv_pkts;
 }

 static void hns3_update_rx_int_coalesce(struct hns3_enet_tqp_vector *tqp_vector)
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h (+3 -4)
···

 #define HNS3_MAX_BD_SIZE			65535
 #define HNS3_MAX_TSO_BD_NUM			63U
-#define HNS3_MAX_TSO_SIZE \
-	(HNS3_MAX_BD_SIZE * HNS3_MAX_TSO_BD_NUM)
+#define HNS3_MAX_TSO_SIZE			1048576U
+#define HNS3_MAX_NON_TSO_SIZE			9728U

-#define HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num) \
-	(HNS3_MAX_BD_SIZE * (max_non_tso_bd_num))

 #define HNS3_VECTOR_GL0_OFFSET			0x100
 #define HNS3_VECTOR_GL1_OFFSET			0x200
···
 	u32 length;     /* length of the buffer */

 	u16 reuse_flag;
+	u16 refill;

 	/* desc type, used by the ring user to mark the type of the priv data */
 	u16 type;
···
 			*changed = true;
 			break;
 		case IEEE_8021QAZ_TSA_ETS:
+			/* The hardware will switch to sp mode if bandwidth is
+			 * 0, so limit ets bandwidth must be greater than 0.
+			 */
+			if (!ets->tc_tx_bw[i]) {
+				dev_err(&hdev->pdev->dev,
+					"tc%u ets bw cannot be 0\n", i);
+				return -EINVAL;
+			}
+
 			if (hdev->tm_info.tc_info[i].tc_sch_mode !=
 			    HCLGE_SCH_MODE_DWRR)
 				*changed = true;
···
 	case ICE_DEV_ID_E810C_BACKPLANE:
 	case ICE_DEV_ID_E810C_QSFP:
 	case ICE_DEV_ID_E810C_SFP:
+	case ICE_DEV_ID_E810_XXV_BACKPLANE:
+	case ICE_DEV_ID_E810_XXV_QSFP:
 	case ICE_DEV_ID_E810_XXV_SFP:
 		hw->mac_type = ICE_MAC_E810;
 		break;
···
 	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
 		if (hw->tnl.tbl[i].valid &&
 		    hw->tnl.tbl[i].type == type &&
-		    idx--)
+		    idx-- == 0)
 			return i;

 	WARN_ON_ONCE(1);
···
 	u16 index;

 	tnl_type = ti->type == UDP_TUNNEL_TYPE_VXLAN ? TNL_VXLAN : TNL_GENEVE;
-	index = ice_tunnel_idx_to_entry(&pf->hw, idx, tnl_type);
+	index = ice_tunnel_idx_to_entry(&pf->hw, tnl_type, idx);

 	status = ice_create_tunnel(&pf->hw, index, tnl_type, ntohs(ti->port));
 	if (status) {
drivers/net/ethernet/intel/ice/ice_lib.c (+9)
···
  */
 int ice_vsi_release(struct ice_vsi *vsi)
 {
+	enum ice_status err;
 	struct ice_pf *pf;

 	if (!vsi->back)
···

 	ice_fltr_remove_all(vsi);
 	ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
+	err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
+	if (err)
+		dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
+			vsi->vsi_num, err);
 	ice_vsi_delete(vsi);
 	ice_vsi_free_q_vectors(vsi);
···
 	prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce);

 	ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
+	ret = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx);
+	if (ret)
+		dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n",
+			vsi->vsi_num, ret);
 	ice_vsi_free_q_vectors(vsi);

 	/* SR-IOV determines needed MSIX resources all at once instead of per
drivers/net/ethernet/intel/ice/ice_main.c (+7 -1)
···
 	if (!pf)
 		return -ENOMEM;

+	/* initialize Auxiliary index to invalid value */
+	pf->aux_idx = -1;
+
 	/* set up for high or low DMA */
 	err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
 	if (err)
···

 	ice_aq_cancel_waiting_tasks(pf);
 	ice_unplug_aux_dev(pf);
-	ida_free(&ice_aux_ida, pf->aux_idx);
+	if (pf->aux_idx >= 0)
+		ida_free(&ice_aux_ida, pf->aux_idx);
 	set_bit(ICE_DOWN, pf->state);

 	mutex_destroy(&(&pf->hw)->fdir_fltr_lock);
···
 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_BACKPLANE), 0 },
 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_QSFP), 0 },
 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_SFP), 0 },
+	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_BACKPLANE), 0 },
+	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_QSFP), 0 },
 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_SFP), 0 },
 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_BACKPLANE), 0 },
 	{ PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_QSFP), 0 },
drivers/net/ethernet/intel/ice/ice_sched.c (+13)
···
 }

 /**
+ * ice_rm_vsi_rdma_cfg - remove VSI and its RDMA children nodes
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ *
+ * This function clears the VSI and its RDMA children nodes from scheduler tree
+ * for all TCs.
+ */
+enum ice_status ice_rm_vsi_rdma_cfg(struct ice_port_info *pi, u16 vsi_handle)
+{
+	return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_RDMA);
+}
+
+/**
  * ice_get_agg_info - get the aggregator ID
  * @hw: pointer to the hardware structure
  * @agg_id: aggregator ID
···
 	case MC_CMD_MEDIA_SFP_PLUS:
 	case MC_CMD_MEDIA_QSFP_PLUS:
 		SET_BIT(FIBRE);
-		if (cap & (1 << MC_CMD_PHY_CAP_1000FDX_LBN))
+		if (cap & (1 << MC_CMD_PHY_CAP_1000FDX_LBN)) {
 			SET_BIT(1000baseT_Full);
-		if (cap & (1 << MC_CMD_PHY_CAP_10000FDX_LBN))
-			SET_BIT(10000baseT_Full);
-		if (cap & (1 << MC_CMD_PHY_CAP_40000FDX_LBN))
+			SET_BIT(1000baseX_Full);
+		}
+		if (cap & (1 << MC_CMD_PHY_CAP_10000FDX_LBN)) {
+			SET_BIT(10000baseCR_Full);
+			SET_BIT(10000baseLR_Full);
+			SET_BIT(10000baseSR_Full);
+		}
+		if (cap & (1 << MC_CMD_PHY_CAP_40000FDX_LBN)) {
 			SET_BIT(40000baseCR4_Full);
-		if (cap & (1 << MC_CMD_PHY_CAP_100000FDX_LBN))
+			SET_BIT(40000baseSR4_Full);
+		}
+		if (cap & (1 << MC_CMD_PHY_CAP_100000FDX_LBN)) {
 			SET_BIT(100000baseCR4_Full);
-		if (cap & (1 << MC_CMD_PHY_CAP_25000FDX_LBN))
+			SET_BIT(100000baseSR4_Full);
+		}
+		if (cap & (1 << MC_CMD_PHY_CAP_25000FDX_LBN)) {
 			SET_BIT(25000baseCR_Full);
+			SET_BIT(25000baseSR_Full);
+		}
 		if (cap & (1 << MC_CMD_PHY_CAP_50000FDX_LBN))
 			SET_BIT(50000baseCR2_Full);
 		break;
···
 		result |= (1 << MC_CMD_PHY_CAP_100FDX_LBN);
 	if (TEST_BIT(1000baseT_Half))
 		result |= (1 << MC_CMD_PHY_CAP_1000HDX_LBN);
-	if (TEST_BIT(1000baseT_Full) || TEST_BIT(1000baseKX_Full))
+	if (TEST_BIT(1000baseT_Full) || TEST_BIT(1000baseKX_Full) ||
+	    TEST_BIT(1000baseX_Full))
 		result |= (1 << MC_CMD_PHY_CAP_1000FDX_LBN);
-	if (TEST_BIT(10000baseT_Full) || TEST_BIT(10000baseKX4_Full))
+	if (TEST_BIT(10000baseT_Full) || TEST_BIT(10000baseKX4_Full) ||
+	    TEST_BIT(10000baseCR_Full) || TEST_BIT(10000baseLR_Full) ||
+	    TEST_BIT(10000baseSR_Full))
 		result |= (1 << MC_CMD_PHY_CAP_10000FDX_LBN);
-	if (TEST_BIT(40000baseCR4_Full) || TEST_BIT(40000baseKR4_Full))
+	if (TEST_BIT(40000baseCR4_Full) || TEST_BIT(40000baseKR4_Full) ||
+	    TEST_BIT(40000baseSR4_Full))
 		result |= (1 << MC_CMD_PHY_CAP_40000FDX_LBN);
-	if (TEST_BIT(100000baseCR4_Full))
+	if (TEST_BIT(100000baseCR4_Full) || TEST_BIT(100000baseSR4_Full))
 		result |= (1 << MC_CMD_PHY_CAP_100000FDX_LBN);
-	if (TEST_BIT(25000baseCR_Full))
+	if (TEST_BIT(25000baseCR_Full) || TEST_BIT(25000baseSR_Full))
 		result |= (1 << MC_CMD_PHY_CAP_25000FDX_LBN);
 	if (TEST_BIT(50000baseCR2_Full))
 		result |= (1 << MC_CMD_PHY_CAP_50000FDX_LBN);
drivers/net/ethernet/sfc/ptp.c (+2 -2)
···
 	} else if (rc == -EINVAL) {
 		fmt = MC_CMD_PTP_OUT_GET_ATTRIBUTES_SECONDS_NANOSECONDS;
 	} else if (rc == -EPERM) {
-		netif_info(efx, probe, efx->net_dev, "no PTP support\n");
+		pci_info(efx->pci_dev, "no PTP support\n");
 		return rc;
 	} else {
 		efx_mcdi_display_error(efx, MC_CMD_PTP, sizeof(inbuf),
···
 	 * should only have been called during probe.
 	 */
 	if (rc == -ENOSYS || rc == -EPERM)
-		netif_info(efx, probe, efx->net_dev, "no PTP support\n");
+		pci_info(efx->pci_dev, "no PTP support\n");
 	else if (rc)
 		efx_mcdi_display_error(efx, MC_CMD_PTP,
 				       MC_CMD_PTP_IN_DISABLE_LEN,
drivers/net/ethernet/sfc/siena_sriov.c (+1 -1)
···
 		return;

 	if (efx_siena_sriov_cmd(efx, false, &efx->vi_scale, &count)) {
-		netif_info(efx, probe, efx->net_dev, "no SR-IOV VFs probed\n");
+		pci_info(efx->pci_dev, "no SR-IOV VFs probed\n");
 		return;
 	}
 	if (count > 0 && count > max_vfs)
···
 	select PHYLIB
 	select MICROCHIP_PHY
 	select FIXED_PHY
+	select CRC32
 	help
 	  This option adds support for Microchip LAN78XX based USB 2
 	  & USB 3 10/100/1000 Ethernet adapters.
drivers/net/usb/usbnet.c (+4)
···
 	if (!dev->rx_urb_size)
 		dev->rx_urb_size = dev->hard_mtu;
 	dev->maxpacket = usb_maxpacket (dev->udev, dev->out, 1);
+	if (dev->maxpacket == 0) {
+		/* that is a broken device */
+		goto out4;
+	}

 	/* let userspace know we have a random address */
 	if (ether_addr_equal(net->dev_addr, node_id))
drivers/net/vrf.c (-4)
···
 	bool need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr);
 	bool is_ndisc = ipv6_ndisc_frame(skb);

-	nf_reset_ct(skb);
-
 	/* loopback, multicast & non-ND link-local traffic; do not push through
 	 * packet taps again. Reset pkt_type for upper layers to process skb.
 	 * For strict packets with a source LLA, determine the dst using the
···
 	skb->dev = vrf_dev;
 	skb->skb_iif = vrf_dev->ifindex;
 	IPCB(skb)->flags |= IPSKB_L3SLAVE;
-
-	nf_reset_ct(skb);

 	if (ipv4_is_multicast(ip_hdr(skb)->daddr))
 		goto out;
drivers/nfc/st95hf/core.c (+2 -4)
···
 				  &reset_cmd,
 				  ST95HF_RESET_CMD_LEN,
 				  ASYNC);
-	if (result) {
+	if (result)
 		dev_err(&spictx->spidev->dev,
 			"ST95HF reset failed in remove() err = %d\n", result);
-		return result;
-	}

 	/* wait for 3 ms to complete the controller reset process */
 	usleep_range(3000, 4000);
···
 	if (stcontext->st95hf_supply)
 		regulator_disable(stcontext->st95hf_supply);

-	return result;
+	return 0;
 }

 /* Register as SPI protocol driver */
···
 	priv = spi_controller_get_devdata(ctlr);
 	priv->spi = spi;

+	/*
+	 * Increase lockdep class as these lock are taken while the parent bus
+	 * already holds their instance's lock.
+	 */
+	lockdep_set_subclass(&ctlr->io_mutex, 1);
+	lockdep_set_subclass(&ctlr->add_lock, 1);
+
 	priv->mux = devm_mux_control_get(&spi->dev, NULL);
 	if (IS_ERR(priv->mux)) {
 		ret = dev_err_probe(&spi->dev, PTR_ERR(priv->mux),
drivers/spi/spi-nxp-fspi.c (+7 -19)
···

 #include <linux/acpi.h>
 #include <linux/bitops.h>
+#include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/completion.h>
 #include <linux/delay.h>
···
 #define NXP_FSPI_MIN_IOMAP	SZ_4M

 #define DCFG_RCWSR1		0x100
+#define SYS_PLL_RAT		GENMASK(6, 2)

 /* Access flash memory using IP bus only */
 #define FSPI_QUIRK_USE_IP_ONLY	BIT(0)
···
 	{ .family = "QorIQ LS1028A" },
 	{ /* sentinel */ }
 	};
-	struct device_node *np;
 	struct regmap *map;
-	u32 val = 0, sysclk = 0;
+	u32 val, sys_pll_ratio;
 	int ret;

 	/* Check for LS1028A family */
···
 		return;
 	}

-	/* Compute system clock frequency multiplier ratio */
 	map = syscon_regmap_lookup_by_compatible("fsl,ls1028a-dcfg");
 	if (IS_ERR(map)) {
 		dev_err(f->dev, "No syscon regmap\n");
···
 	if (ret < 0)
 		goto err;

-	/* Strap bits 6:2 define SYS_PLL_RAT i.e frequency multiplier ratio */
-	val = (val >> 2) & 0x1F;
-	WARN(val == 0, "Strapping is zero: Cannot determine ratio");
+	sys_pll_ratio = FIELD_GET(SYS_PLL_RAT, val);
+	dev_dbg(f->dev, "val: 0x%08x, sys_pll_ratio: %d\n", val, sys_pll_ratio);

-	/* Compute system clock frequency */
-	np = of_find_node_by_name(NULL, "clock-sysclk");
-	if (!np)
-		goto err;
-
-	if (of_property_read_u32(np, "clock-frequency", &sysclk))
-		goto err;
-
-	sysclk = (sysclk * val) / 1000000; /* Convert sysclk to Mhz */
-	dev_dbg(f->dev, "val: 0x%08x, sysclk: %dMhz\n", val, sysclk);
-
-	/* Use IP bus only if PLL is 300MHz */
-	if (sysclk == 300)
+	/* Use IP bus only if platform clock is 300MHz */
+	if (sys_pll_ratio == 3)
 		f->devtype_data->quirks |= FSPI_QUIRK_USE_IP_ONLY;

 	return;
···
 {
 	struct optee *optee = platform_get_drvdata(pdev);

+	/* Unregister OP-TEE specific client devices on TEE bus */
+	optee_unregister_devices();
+
 	/*
 	 * Ask OP-TEE to free all cached shared memory objects to decrease
 	 * reference counters and also avoid wild pointers in secure world
···
 	  If unsure, say N.

 config SERIAL_8250_FSL
-	bool
+	bool "Freescale 16550 UART support" if COMPILE_TEST && !(PPC || ARM || ARM64)
 	depends on SERIAL_8250_CONSOLE
-	default PPC || ARM || ARM64 || COMPILE_TEST
+	default PPC || ARM || ARM64
+	help
+	  Selecting this option enables a workaround for a break-detection
+	  erratum for Freescale 16550 UARTs in the 8250 driver. It also
+	  enables support for ACPI enumeration.

 config SERIAL_8250_DW
 	tristate "Support for Synopsys DesignWare 8250 quirks"
drivers/usb/host/xhci-dbgtty.c (+13 -15)
···
 		return -EBUSY;

 	xhci_dbc_tty_init_port(dbc, port);
-	tty_dev = tty_port_register_device(&port->port,
-					   dbc_tty_driver, 0, NULL);
-	if (IS_ERR(tty_dev)) {
-		ret = PTR_ERR(tty_dev);
-		goto register_fail;
-	}

 	ret = kfifo_alloc(&port->write_fifo, DBC_WRITE_BUF_SIZE, GFP_KERNEL);
 	if (ret)
-		goto buf_alloc_fail;
+		goto err_exit_port;

 	ret = xhci_dbc_alloc_requests(dbc, BULK_IN, &port->read_pool,
 				      dbc_read_complete);
 	if (ret)
-		goto request_fail;
+		goto err_free_fifo;

 	ret = xhci_dbc_alloc_requests(dbc, BULK_OUT, &port->write_pool,
 				      dbc_write_complete);
 	if (ret)
-		goto request_fail;
+		goto err_free_requests;
+
+	tty_dev = tty_port_register_device(&port->port,
+					   dbc_tty_driver, 0, NULL);
+	if (IS_ERR(tty_dev)) {
+		ret = PTR_ERR(tty_dev);
+		goto err_free_requests;
+	}

 	port->registered = true;

 	return 0;

-request_fail:
+err_free_requests:
 	xhci_dbc_free_requests(&port->read_pool);
 	xhci_dbc_free_requests(&port->write_pool);
+err_free_fifo:
 	kfifo_free(&port->write_fifo);
-
-buf_alloc_fail:
-	tty_unregister_device(dbc_tty_driver, 0);
-
-register_fail:
+err_exit_port:
 	xhci_dbc_tty_exit_port(port);

 	dev_err(dbc->dev, "can't register tty port, err %d\n", ret);
···
 /* Must be called with xhci->lock held, releases and aquires lock back */
 static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags)
 {
-	u64 temp_64;
+	u32 temp_32;
 	int ret;

 	xhci_dbg(xhci, "Abort command ring\n");

 	reinit_completion(&xhci->cmd_ring_stop_completion);

-	temp_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring);
-	xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
-		      &xhci->op_regs->cmd_ring);
+	/*
+	 * The control bits like command stop, abort are located in lower
+	 * dword of the command ring control register. Limit the write
+	 * to the lower dword to avoid corrupting the command ring pointer
+	 * in case if the command ring is stopped by the time upper dword
+	 * is written.
+	 */
+	temp_32 = readl(&xhci->op_regs->cmd_ring);
+	writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring);

 	/* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the
 	 * completion of the Command Abort operation. If CRR is not negated in 5
···
 	struct xhci_ring *ep_ring;
 	struct xhci_command *cmd;
 	struct xhci_segment *new_seg;
+	struct xhci_segment *halted_seg = NULL;
 	union xhci_trb *new_deq;
 	int new_cycle;
+	union xhci_trb *halted_trb;
+	int index = 0;
 	dma_addr_t addr;
 	u64 hw_dequeue;
 	bool cycle_found = false;
···
 	hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id);
 	new_seg = ep_ring->deq_seg;
 	new_deq = ep_ring->dequeue;
-	new_cycle = hw_dequeue & 0x1;
+
+	/*
+	 * Quirk: xHC write-back of the DCS field in the hardware dequeue
+	 * pointer is wrong - use the cycle state of the TRB pointed to by
+	 * the dequeue pointer.
+	 */
+	if (xhci->quirks & XHCI_EP_CTX_BROKEN_DCS &&
+	    !(ep->ep_state & EP_HAS_STREAMS))
+		halted_seg = trb_in_td(xhci, td->start_seg,
+				       td->first_trb, td->last_trb,
+				       hw_dequeue & ~0xf, false);
+	if (halted_seg) {
+		index = ((dma_addr_t)(hw_dequeue & ~0xf) - halted_seg->dma) /
+			 sizeof(*halted_trb);
+		halted_trb = &halted_seg->trbs[index];
+		new_cycle = halted_trb->generic.field[3] & 0x1;
+		xhci_dbg(xhci, "Endpoint DCS = %d TRB index = %d cycle = %d\n",
+			 (u8)(hw_dequeue & 0x1), index, new_cycle);
+	} else {
+		new_cycle = hw_dequeue & 0x1;
+	}

 	/*
 	 * We want to find the pointer, segment and cycle state of the new trb
drivers/usb/host/xhci.c (+5)
···
 		return;

 	/* Bail out if toggle is already being cleared by a endpoint reset */
+	spin_lock_irqsave(&xhci->lock, flags);
 	if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) {
 		ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE;
+		spin_unlock_irqrestore(&xhci->lock, flags);
 		return;
 	}
+	spin_unlock_irqrestore(&xhci->lock, flags);
 	/* Only interrupt and bulk ep's use data toggle, USB2 spec 5.5.4-> */
 	if (usb_endpoint_xfer_control(&host_ep->desc) ||
 	    usb_endpoint_xfer_isoc(&host_ep->desc))
···
 	xhci_free_command(xhci, cfg_cmd);
 cleanup:
 	xhci_free_command(xhci, stop_cmd);
+	spin_lock_irqsave(&xhci->lock, flags);
 	if (ep->ep_state & EP_SOFT_CLEAR_TOGGLE)
 		ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE;
+	spin_unlock_irqrestore(&xhci->lock, flags);
 }

 static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
drivers/usb/host/xhci.h (+1)
···
 #define XHCI_SG_TRB_CACHE_SIZE_QUIRK	BIT_ULL(39)
 #define XHCI_NO_SOFT_RETRY	BIT_ULL(40)
 #define XHCI_BROKEN_D3COLD	BIT_ULL(41)
+#define XHCI_EP_CTX_BROKEN_DCS	BIT_ULL(42)

 	unsigned int		num_active_eps;
 	unsigned int		limit_active_eps;
drivers/usb/musb/musb_dsps.c (+3 -1)
···
 	if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) {
 		ret = dsps_setup_optional_vbus_irq(pdev, glue);
 		if (ret)
-			goto err;
+			goto unregister_pdev;
 	}

 	return 0;

+unregister_pdev:
+	platform_device_unregister(glue->musb);
 err:
 	pm_runtime_disable(&pdev->dev);
 	iounmap(glue->usbss_base);
···
 	struct fd f = fdget(fd);
 	int ret = -EBADF;

-	if (!f.file)
+	if (!f.file || !(f.file->f_mode & FMODE_READ))
 		goto out;

 	ret = kernel_read_file(f.file, offset, buf, buf_size, file_size, id);
fs/kernfs/dir.c (+8 -1)
···

 	kn = kernfs_find_ns(parent, dentry->d_name.name, ns);
 	/* attach dentry and inode */
-	if (kn && kernfs_active(kn)) {
+	if (kn) {
+		/* Inactive nodes are invisible to the VFS so don't
+		 * create a negative.
+		 */
+		if (!kernfs_active(kn)) {
+			up_read(&kernfs_rwsem);
+			return NULL;
+		}
 		inode = kernfs_get_inode(dir->i_sb, kn);
 		if (!inode)
 			inode = ERR_PTR(-ENOMEM);
···
  *
  */

-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
 #include <linux/fs.h>
-#include <linux/iversion.h>
 #include <linux/nls.h>

 #include "debug.h"
···
 #include "ntfs_fs.h"

 /* Convert little endian UTF-16 to NLS string. */
-int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const struct le_str *uni,
+int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const __le16 *name, u32 len,
 		      u8 *buf, int buf_len)
 {
-	int ret, uni_len, warn;
-	const __le16 *ip;
+	int ret, warn;
 	u8 *op;
-	struct nls_table *nls = sbi->options.nls;
+	struct nls_table *nls = sbi->options->nls;

 	static_assert(sizeof(wchar_t) == sizeof(__le16));

 	if (!nls) {
 		/* UTF-16 -> UTF-8 */
-		ret = utf16s_to_utf8s((wchar_t *)uni->name, uni->len,
-				      UTF16_LITTLE_ENDIAN, buf, buf_len);
+		ret = utf16s_to_utf8s(name, len, UTF16_LITTLE_ENDIAN, buf,
+				      buf_len);
 		buf[ret] = '\0';
 		return ret;
 	}

-	ip = uni->name;
 	op = buf;
-	uni_len = uni->len;
 	warn = 0;

-	while (uni_len--) {
+	while (len--) {
 		u16 ec;
 		int charlen;
 		char dump[5];
···
 			break;
 		}

-		ec = le16_to_cpu(*ip++);
+		ec = le16_to_cpu(*name++);
 		charlen = nls->uni2char(ec, op, buf_len);

 		if (charlen > 0) {
···
 {
 	int ret, slen;
 	const u8 *end;
-	struct nls_table *nls = sbi->options.nls;
+	struct nls_table *nls = sbi->options->nls;
 	u16 *uname = uni->name;

 	static_assert(sizeof(wchar_t) == sizeof(u16));
···
 		return 0;

 	/* Skip meta files. Unless option to show metafiles is set. */
-	if (!sbi->options.showmeta && ntfs_is_meta_file(sbi, ino))
+	if (!sbi->options->showmeta && ntfs_is_meta_file(sbi, ino))
 		return 0;

-	if (sbi->options.nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN))
+	if (sbi->options->nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN))
 		return 0;

-	name_len = ntfs_utf16_to_nls(sbi, (struct le_str *)&fname->name_len,
-				     name, PATH_MAX);
+	name_len = ntfs_utf16_to_nls(sbi, fname->name, fname->name_len, name,
+				     PATH_MAX);
 	if (name_len <= 0) {
 		ntfs_warn(sbi->sb, "failed to convert name for inode %lx.",
 			  ino);
fs/ntfs3/file.c (+7 -5)
···
 #include <linux/compat.h>
 #include <linux/falloc.h>
 #include <linux/fiemap.h>
-#include <linux/nls.h>

 #include "debug.h"
 #include "ntfs.h"
···
 		truncate_pagecache(inode, vbo_down);

 		if (!is_sparsed(ni) && !is_compressed(ni)) {
-			/* Normal file. */
-			err = ntfs_zero_range(inode, vbo, end);
+			/*
+			 * Normal file, can't make hole.
+			 * TODO: Try to find way to save info about hole.
+			 */
+			err = -EOPNOTSUPP;
 			goto out;
 		}
···
 	umode_t mode = inode->i_mode;
 	int err;

-	if (sbi->options.no_acs_rules) {
+	if (sbi->options->noacsrules) {
 		/* "No access rules" - Force any changes of time etc. */
 		attr->ia_valid |= ATTR_FORCE;
 		/* and disable for editing some attributes. */
···
 	int err = 0;

 	/* If we are last writer on the inode, drop the block reservation. */
-	if (sbi->options.prealloc && ((file->f_mode & FMODE_WRITE) &&
+	if (sbi->options->prealloc && ((file->f_mode & FMODE_WRITE) &&
 				      atomic_read(&inode->i_writecount) == 1)) {
 		ni_lock(ni);
 		down_write(&ni->file.run_lock);
fs/ntfs3/frecord.c (+41 -14)
···
  *
  */

-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
 #include <linux/fiemap.h>
 #include <linux/fs.h>
-#include <linux/nls.h>
 #include <linux/vmalloc.h>

 #include "debug.h"
···
 			continue;

 		mi = ni_find_mi(ni, ino_get(&le->ref));
+		if (!mi) {
+			/* Should never happened, 'cause already checked. */
+			goto bad;
+		}

 		attr = mi_find_attr(mi, NULL, le->type, le_name(le),
 				    le->name_len, &le->id);
+		if (!attr) {
+			/* Should never happened, 'cause already checked. */
+			goto bad;
+		}
 		asize = le32_to_cpu(attr->size);

 		/* Insert into primary record. */
 		attr_ins = mi_insert_attr(&ni->mi, le->type, le_name(le),
 					  le->name_len, asize,
 					  le16_to_cpu(attr->name_off));
-		id = attr_ins->id;
+		if (!attr_ins) {
+			/*
+			 * Internal error.
+			 * Either no space in primary record (already checked).
+			 * Either tried to insert another
+			 * non indexed attribute (logic error).
+			 */
+			goto bad;
+		}

 		/* Copy all except id. */
+		id = attr_ins->id;
 		memcpy(attr_ins, attr, asize);
 		attr_ins->id = id;
···
 	ni->attr_list.dirty = false;

 	return 0;
+bad:
+	ntfs_inode_err(&ni->vfs_inode, "Internal error");
+	make_bad_inode(&ni->vfs_inode);
+	return -EINVAL;
 }

 /*
···
 			/* Only indexed attributes can share same record. */
 			continue;
 		}
+
+		/*
+		 * Do not try to insert this attribute
+		 * if there is no room in record.
+		 */
+		if (le32_to_cpu(mi->mrec->used) + asize > sbi->record_size)
+			continue;

 		/* Try to insert attribute into this subrecord. */
 		attr = ni_ins_new_attr(ni, mi, le, type, name, name_len, asize,
···
 		attr->res.flags = RESIDENT_FLAG_INDEXED;

 		/* is_attr_indexed(attr)) == true */
-		le16_add_cpu(&ni->mi.mrec->hard_links, +1);
+		le16_add_cpu(&ni->mi.mrec->hard_links, 1);
 		ni->mi.dirty = true;
 	}
 	attr->res.res = 0;
···

 	*le = NULL;

-	if (FILE_NAME_POSIX == name_type)
+	if (name_type == FILE_NAME_POSIX)
 		return NULL;

 	/* Enumerate all names. */
···
 /*
  * ni_parse_reparse
  *
- * Buffer is at least 24 bytes.
+ * buffer - memory for reparse buffer header
  */
 enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr,
-				   void *buffer)
+				   struct REPARSE_DATA_BUFFER *buffer)
 {
 	const struct REPARSE_DATA_BUFFER *rp = NULL;
 	u8 bits;
 	u16 len;
 	typeof(rp->CompressReparseBuffer) *cmpr;
-
-	static_assert(sizeof(struct REPARSE_DATA_BUFFER) <= 24);

 	/* Try to estimate reparse point. */
 	if (!attr->non_res) {
···
 		return REPARSE_NONE;
 	}
+
+	if (buffer != rp)
+		memcpy(buffer, rp, sizeof(struct REPARSE_DATA_BUFFER));

 	/* Looks like normal symlink. */
 	return REPARSE_LINK;
···
 		memcpy(Add2Ptr(attr, SIZEOF_RESIDENT), de + 1, de_key_size);
 		mi_get_ref(&ni->mi, &de->ref);

-		if (indx_insert_entry(&dir_ni->dir, dir_ni, de, sbi, NULL, 1)) {
+		if (indx_insert_entry(&dir_ni->dir, dir_ni, de, sbi, NULL, 1))
 			return false;
-		}
 	}

 	return true;
···
 		const struct EA_INFO *info;

 		info = resident_data_ex(attr, sizeof(struct EA_INFO));
-		dup->ea_size = info->size_pack;
+		/* If ATTR_EA_INFO exists 'info' can't be NULL. */
+		if (info)
+			dup->ea_size = info->size_pack;
 	}
 }
···
 		goto out;
 	}

-	err = al_update(ni);
+	err = al_update(ni, sync);
 	if (err)
 		goto out;
 	}
···
 #ifndef _LINUX_NTFS3_NTFS_H
 #define _LINUX_NTFS3_NTFS_H

-/* TODO: Check 4K MFT record and 512 bytes cluster. */
+#include <linux/blkdev.h>
+#include <linux/build_bug.h>
+#include <linux/kernel.h>
+#include <linux/stddef.h>
+#include <linux/string.h>
+#include <linux/types.h>

-/* Activate this define to use binary search in indexes. */
-#define NTFS3_INDEX_BINARY_SEARCH
+#include "debug.h"
+
+/* TODO: Check 4K MFT record and 512 bytes cluster. */

 /* Check each run for marked clusters. */
 #define NTFS3_CHECK_FREE_CLST

 #define NTFS_NAME_LEN 255

-/* ntfs.sys used 500 maximum links on-disk struct allows up to 0xffff. */
-#define NTFS_LINK_MAX 0x400
-//#define NTFS_LINK_MAX 0xffff
+/*
+ * ntfs.sys used 500 maximum links on-disk struct allows up to 0xffff.
+ * xfstest generic/041 creates 3003 hardlinks.
+ */
+#define NTFS_LINK_MAX 4000

 /*
  * Activate to use 64 bit clusters instead of 32 bits in ntfs.sys.
···
  *
  */

-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
 #include <linux/fs.h>
-#include <linux/nls.h>
 #include <linux/posix_acl.h>
 #include <linux/posix_acl_xattr.h>
 #include <linux/xattr.h>
···
 		       size_t add_bytes, const struct EA_INFO **info)
 {
 	int err;
+	struct ntfs_sb_info *sbi = ni->mi.sbi;
 	struct ATTR_LIST_ENTRY *le = NULL;
 	struct ATTRIB *attr_info, *attr_ea;
 	void *ea_p;
···
 	/* Check Ea limit. */
 	size = le32_to_cpu((*info)->size);
-	if (size > ni->mi.sbi->ea_max_size)
+	if (size > sbi->ea_max_size)
 		return -EFBIG;

-	if (attr_size(attr_ea) > ni->mi.sbi->ea_max_size)
+	if (attr_size(attr_ea) > sbi->ea_max_size)
 		return -EFBIG;

 	/* Allocate memory for packed Ea. */
···
 	if (!ea_p)
 		return -ENOMEM;

-	if (attr_ea->non_res) {
+	if (!size) {
+		;
+	} else if (attr_ea->non_res) {
 		struct runs_tree run;

 		run_init(&run);

 		err = attr_load_runs(attr_ea, ni, &run, NULL);
 		if (!err)
-			err = ntfs_read_run_nb(ni->mi.sbi, &run, 0, ea_p, size,
-					       NULL);
+			err = ntfs_read_run_nb(sbi, &run, 0, ea_p, size, NULL);
 		run_close(&run);

 		if (err)
···
 static noinline int ntfs_set_ea(struct inode *inode, const char *name,
 				size_t name_len, const void *value,
-				size_t val_size, int flags, int locked)
+				size_t val_size, int flags)
 {
 	struct ntfs_inode *ni = ntfs_i(inode);
 	struct ntfs_sb_info *sbi = ni->mi.sbi;
···
 	u64 new_sz;
 	void *p;

-	if (!locked)
-		ni_lock(ni);
+	ni_lock(ni);

 	run_init(&ea_run);
···
 	new_ea->name[name_len] = 0;
 	memcpy(new_ea->name + name_len + 1, value, val_size);
 	new_pack = le16_to_cpu(ea_info.size_pack) + packed_ea_size(new_ea);
-
-	/* Should fit into 16 bits. */
-	if (new_pack > 0xffff) {
-		err = -EFBIG; // -EINVAL?
-		goto out;
-	}
 	ea_info.size_pack = cpu_to_le16(new_pack);
-
 	/* New size of ATTR_EA. */
 	size += add;
-	if (size > sbi->ea_max_size) {
+	ea_info.size = cpu_to_le32(size);
+
+	/*
+	 * 1. Check ea_info.size_pack for overflow.
+	 * 2. New attibute size must fit value from $AttrDef
+	 */
+	if (new_pack > 0xffff || size > sbi->ea_max_size) {
+		ntfs_inode_warn(
+			inode,
+			"The size of extended attributes must not exceed 64KiB");
 		err = -EFBIG; // -EINVAL?
 		goto out;
 	}
-	ea_info.size = cpu_to_le32(size);

 update_ea:
···
 		/* Delete xattr, ATTR_EA */
 		ni_remove_attr_le(ni, attr, mi, le);
 	} else if (attr->non_res) {
-		err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size);
+		err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size, 0);
 		if (err)
 			goto out;
 	} else {
···
 	mark_inode_dirty(&ni->vfs_inode);

 out:
-	if (!locked)
-		ni_unlock(ni);
+	ni_unlock(ni);

 	run_close(&ea_run);
 	kfree(ea_all);
···
 }

 #ifdef CONFIG_NTFS3_FS_POSIX_ACL
-static inline void ntfs_posix_acl_release(struct posix_acl *acl)
-{
-	if (acl && refcount_dec_and_test(&acl->a_refcount))
-		kfree(acl);
-}
-
 static struct posix_acl *ntfs_get_acl_ex(struct user_namespace *mnt_userns,
 					 struct inode *inode, int type,
 					 int locked)
···
 	/* Translate extended attribute to acl. */
 	if (err >= 0) {
 		acl = posix_acl_from_xattr(mnt_userns, buf, err);
-		if (!IS_ERR(acl))
-			set_cached_acl(inode, type, acl);
+	} else if (err == -ENODATA) {
+		acl = NULL;
 	} else {
-		acl = err == -ENODATA ? NULL : ERR_PTR(err);
+		acl = ERR_PTR(err);
 	}
+
+	if (!IS_ERR(acl))
+		set_cached_acl(inode, type, acl);

 	__putname(buf);
···
 static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
 				    struct inode *inode, struct posix_acl *acl,
-				    int type, int locked)
+				    int type)
 {
 	const char *name;
 	size_t size, name_len;
 	void *value = NULL;
 	int err = 0;
+	int flags;

 	if (S_ISLNK(inode->i_mode))
 		return -EOPNOTSUPP;
···
 	if (acl) {
 		umode_t mode = inode->i_mode;

-		err = posix_acl_equiv_mode(acl, &mode);
-		if (err < 0)
-			return err;
+		err = posix_acl_update_mode(mnt_userns, inode, &mode,
+					    &acl);
+		if (err)
+			goto out;

 		if (inode->i_mode != mode) {
 			inode->i_mode = mode;
 			mark_inode_dirty(inode);
-		}
-
-		if (!err) {
-			/*
-			 * ACL can be exactly represented in the
-			 * traditional file mode permission bits.
-			 */
-			acl = NULL;
 		}
 	}
 	name = XATTR_NAME_POSIX_ACL_ACCESS;
···
 	}

 	if (!acl) {
+		/* Remove xattr if it can be presented via mode. */
 		size = 0;
 		value = NULL;
+		flags = XATTR_REPLACE;
 	} else {
 		size = posix_acl_xattr_size(acl->a_count);
 		value = kmalloc(size, GFP_NOFS);
 		if (!value)
 			return -ENOMEM;
-
 		err = posix_acl_to_xattr(mnt_userns, acl, value, size);
 		if (err < 0)
 			goto out;
+		flags = 0;
 	}

-	err = ntfs_set_ea(inode, name, name_len, value, size, 0, locked);
+	err = ntfs_set_ea(inode, name, name_len, value, size, flags);
+	if (err == -ENODATA && !size)
+		err = 0; /* Removing non existed xattr. */
 	if (!err)
 		set_cached_acl(inode, type, acl);
···
 int ntfs_set_acl(struct user_namespace *mnt_userns, struct inode *inode,
 		 struct posix_acl *acl, int type)
 {
-	return ntfs_set_acl_ex(mnt_userns, inode, acl, type, 0);
-}
-
-static int ntfs_xattr_get_acl(struct user_namespace *mnt_userns,
-			      struct inode *inode, int type, void *buffer,
-			      size_t size)
-{
-	struct posix_acl *acl;
-	int err;
-
-	if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
-		ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
-		return -EOPNOTSUPP;
-	}
-
-	acl = ntfs_get_acl(inode, type, false);
-	if (IS_ERR(acl))
-		return PTR_ERR(acl);
-
-	if (!acl)
-		return -ENODATA;
-
-	err = posix_acl_to_xattr(mnt_userns, acl, buffer, size);
-	ntfs_posix_acl_release(acl);
-
-	return err;
-}
-
-static int ntfs_xattr_set_acl(struct user_namespace *mnt_userns,
-			      struct inode *inode, int type, const void *value,
-			      size_t size)
-{
-	struct posix_acl *acl;
-	int err;
-
-	if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
-		ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
-		return -EOPNOTSUPP;
-	}
-
-	if (!inode_owner_or_capable(mnt_userns, inode))
-		return -EPERM;
-
-	if (!value) {
-		acl = NULL;
-	} else {
-		acl = posix_acl_from_xattr(mnt_userns, value, size);
-		if (IS_ERR(acl))
-			return PTR_ERR(acl);
-
-		if (acl) {
-			err = posix_acl_valid(mnt_userns, acl);
-			if (err)
-				goto release_and_out;
-		}
-	}
-
-	err = ntfs_set_acl(mnt_userns, inode, acl, type);
-
-release_and_out:
-	ntfs_posix_acl_release(acl);
-	return err;
+	return ntfs_set_acl_ex(mnt_userns, inode, acl, type);
 }

 /*
···
 	struct posix_acl *default_acl, *acl;
 	int err;

-	/*
-	 * TODO: Refactoring lock.
-	 * ni_lock(dir) ... -> posix_acl_create(dir,...) -> ntfs_get_acl -> ni_lock(dir)
-	 */
-	inode->i_default_acl = NULL;
+	err = posix_acl_create(dir, &inode->i_mode, &default_acl, &acl);
+	if (err)
+		return err;

-	default_acl = ntfs_get_acl_ex(mnt_userns, dir, ACL_TYPE_DEFAULT, 1);
-
-	if (!default_acl || default_acl == ERR_PTR(-EOPNOTSUPP)) {
-		inode->i_mode &= ~current_umask();
-		err = 0;
-		goto out;
-	}
-
-	if (IS_ERR(default_acl)) {
-		err = PTR_ERR(default_acl);
-		goto out;
-	}
-
-	acl = default_acl;
-	err = __posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
-	if (err < 0)
-		goto out1;
-	if (!err) {
-		posix_acl_release(acl);
-		acl = NULL;
-	}
-
-	if (!S_ISDIR(inode->i_mode)) {
-		posix_acl_release(default_acl);
-		default_acl = NULL;
-	}
-
-	if (default_acl)
+	if (default_acl) {
 		err = ntfs_set_acl_ex(mnt_userns, inode, default_acl,
-				      ACL_TYPE_DEFAULT, 1);
+				      ACL_TYPE_DEFAULT);
+		posix_acl_release(default_acl);
+	} else {
+		inode->i_default_acl = NULL;
+	}

 	if (!acl)
 		inode->i_acl = NULL;
-	else if (!err)
-		err = ntfs_set_acl_ex(mnt_userns, inode, acl, ACL_TYPE_ACCESS,
-				      1);
+	else {
+		if (!err)
+			err = ntfs_set_acl_ex(mnt_userns, inode, acl,
+					      ACL_TYPE_ACCESS);
+		posix_acl_release(acl);
+	}

-	posix_acl_release(acl);
-out1:
-	posix_acl_release(default_acl);
-
-out:
 	return err;
 }
 #endif
···
 int ntfs_permission(struct user_namespace *mnt_userns, struct inode *inode,
 		    int mask)
 {
-	if (ntfs_sb(inode->i_sb)->options.no_acs_rules) {
+	if (ntfs_sb(inode->i_sb)->options->noacsrules) {
 		/* "No access rules" mode - Allow all changes. */
 		return 0;
 	}
···
 		goto out;
 	}

-#ifdef CONFIG_NTFS3_FS_POSIX_ACL
-	if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
-	     !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
-		     sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
-	    (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
-	     !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
-		     sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
-		/* TODO: init_user_ns? */
-		err = ntfs_xattr_get_acl(
-			&init_user_ns, inode,
-			name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
-				? ACL_TYPE_ACCESS
-				: ACL_TYPE_DEFAULT,
-			buffer, size);
-		goto out;
-	}
-#endif
 	/* Deal with NTFS extended attribute. */
 	err = ntfs_get_ea(inode, name, name_len, buffer, size, NULL);
···
 		goto out;
 	}

-#ifdef CONFIG_NTFS3_FS_POSIX_ACL
-	if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
-	     !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
-		     sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
-	    (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
-	     !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
-		     sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
-		err = ntfs_xattr_set_acl(
-			mnt_userns, inode,
-			name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
-				? ACL_TYPE_ACCESS
-				: ACL_TYPE_DEFAULT,
-			value, size);
-		goto out;
-	}
-#endif
 	/* Deal with NTFS extended attribute. */
-	err = ntfs_set_ea(inode, name, name_len, value, size, flags, 0);
+	err = ntfs_set_ea(inode, name, name_len, value, size, flags);

 out:
 	return err;
···
 	int err;
 	__le32 value;

+	/* TODO: refactor this, so we don't lock 4 times in ntfs_set_ea */
 	value = cpu_to_le32(i_uid_read(inode));
 	err = ntfs_set_ea(inode, "$LXUID", sizeof("$LXUID") - 1, &value,
-			  sizeof(value), 0, 0);
+			  sizeof(value), 0);
 	if (err)
 		goto out;

 	value = cpu_to_le32(i_gid_read(inode));
 	err = ntfs_set_ea(inode, "$LXGID", sizeof("$LXGID") - 1, &value,
-			  sizeof(value), 0, 0);
+			  sizeof(value), 0);
 	if (err)
 		goto out;

 	value = cpu_to_le32(inode->i_mode);
 	err = ntfs_set_ea(inode, "$LXMOD", sizeof("$LXMOD") - 1, &value,
-			  sizeof(value), 0, 0);
+			  sizeof(value), 0);
 	if (err)
 		goto out;

 	if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
 		value = cpu_to_le32(inode->i_rdev);
 		err = ntfs_set_ea(inode, "$LXDEV", sizeof("$LXDEV") - 1, &value,
-				  sizeof(value), 0, 0);
+				  sizeof(value), 0);
 		if (err)
 			goto out;
 	}
fs/ocfs2/alloc.c (+12, -34)
···
 int ocfs2_convert_inline_data_to_extents(struct inode *inode,
 					 struct buffer_head *di_bh)
 {
-	int ret, i, has_data, num_pages = 0;
+	int ret, has_data, num_pages = 0;
 	int need_free = 0;
 	u32 bit_off, num;
 	handle_t *handle;
···
 	struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
 	struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data;
 	struct ocfs2_alloc_context *data_ac = NULL;
-	struct page **pages = NULL;
-	loff_t end = osb->s_clustersize;
+	struct page *page = NULL;
 	struct ocfs2_extent_tree et;
 	int did_quota = 0;

 	has_data = i_size_read(inode) ? 1 : 0;

 	if (has_data) {
-		pages = kcalloc(ocfs2_pages_per_cluster(osb->sb),
-				sizeof(struct page *), GFP_NOFS);
-		if (pages == NULL) {
-			ret = -ENOMEM;
-			mlog_errno(ret);
-			return ret;
-		}
-
 		ret = ocfs2_reserve_clusters(osb, 1, &data_ac);
 		if (ret) {
 			mlog_errno(ret);
-			goto free_pages;
+			goto out;
 		}
 	}
···
 	}

 	if (has_data) {
-		unsigned int page_end;
+		unsigned int page_end = min_t(unsigned, PAGE_SIZE,
+					      osb->s_clustersize);
 		u64 phys;

 		ret = dquot_alloc_space_nodirty(inode,
···
 		 */
 		block = phys = ocfs2_clusters_to_blocks(inode->i_sb, bit_off);

-		/*
-		 * Non sparse file systems zero on extend, so no need
-		 * to do that now.
-		 */
-		if (!ocfs2_sparse_alloc(osb) &&
-		    PAGE_SIZE < osb->s_clustersize)
-			end = PAGE_SIZE;
-
-		ret = ocfs2_grab_eof_pages(inode, 0, end, pages, &num_pages);
+		ret = ocfs2_grab_eof_pages(inode, 0, page_end, &page,
+					   &num_pages);
 		if (ret) {
 			mlog_errno(ret);
 			need_free = 1;
···
 		 * This should populate the 1st page for us and mark
 		 * it up to date.
 		 */
-		ret = ocfs2_read_inline_data(inode, pages[0], di_bh);
+		ret = ocfs2_read_inline_data(inode, page, di_bh);
 		if (ret) {
 			mlog_errno(ret);
 			need_free = 1;
 			goto out_unlock;
 		}

-		page_end = PAGE_SIZE;
-		if (PAGE_SIZE > osb->s_clustersize)
-			page_end = osb->s_clustersize;
-
-		for (i = 0; i < num_pages; i++)
-			ocfs2_map_and_dirty_page(inode, handle, 0, page_end,
-						 pages[i], i > 0, &phys);
+		ocfs2_map_and_dirty_page(inode, handle, 0, page_end, page, 0,
+					 &phys);
 	}

 	spin_lock(&oi->ip_lock);
···
 	}

 out_unlock:
-	if (pages)
-		ocfs2_unlock_and_free_pages(pages, num_pages);
+	if (page)
+		ocfs2_unlock_and_free_pages(&page, num_pages);

 out_commit:
 	if (ret < 0 && did_quota)
···
 out:
 	if (data_ac)
 		ocfs2_free_alloc_context(data_ac);
-free_pages:
-	kfree(pages);
 	return ret;
 }
fs/ocfs2/super.c (+10, -4)
···
 	}

 	if (ocfs2_clusterinfo_valid(osb)) {
+		/*
+		 * ci_stack and ci_cluster in ocfs2_cluster_info may not be null
+		 * terminated, so make sure no overflow happens here by using
+		 * memcpy. Destination strings will always be null terminated
+		 * because osb is allocated using kzalloc.
+		 */
 		osb->osb_stackflags =
 			OCFS2_RAW_SB(di)->s_cluster_info.ci_stackflags;
-		strlcpy(osb->osb_cluster_stack,
+		memcpy(osb->osb_cluster_stack,
 		       OCFS2_RAW_SB(di)->s_cluster_info.ci_stack,
-		       OCFS2_STACK_LABEL_LEN + 1);
+		       OCFS2_STACK_LABEL_LEN);
 		if (strlen(osb->osb_cluster_stack) != OCFS2_STACK_LABEL_LEN) {
 			mlog(ML_ERROR,
 			     "couldn't mount because of an invalid "
···
 			status = -EINVAL;
 			goto bail;
 		}
-		strlcpy(osb->osb_cluster_name,
+		memcpy(osb->osb_cluster_name,
 		       OCFS2_RAW_SB(di)->s_cluster_info.ci_cluster,
-		       OCFS2_CLUSTER_NAME_LEN + 1);
+		       OCFS2_CLUSTER_NAME_LEN);
 	} else {
 		/* The empty string is identical with classic tools that
 		 * don't know about s_cluster_info. */
fs/userfaultfd.c (+9, -3)
···
 	if (mode_wp && mode_dontwake)
 		return -EINVAL;

-	ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
-				  uffdio_wp.range.len, mode_wp,
-				  &ctx->mmap_changing);
+	if (mmget_not_zero(ctx->mm)) {
+		ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
+					  uffdio_wp.range.len, mode_wp,
+					  &ctx->mmap_changing);
+		mmput(ctx->mm);
+	} else {
+		return -ESRCH;
+	}
+
 	if (ret)
 		return ret;
include/linux/cpuhotplug.h (+4)
···
 	CPUHP_SLUB_DEAD,
 	CPUHP_DEBUG_OBJ_DEAD,
 	CPUHP_MM_WRITEBACK_DEAD,
+	/* Must be after CPUHP_MM_VMSTAT_DEAD */
+	CPUHP_MM_DEMOTION_DEAD,
 	CPUHP_MM_VMSTAT_DEAD,
 	CPUHP_SOFTIRQ_DEAD,
 	CPUHP_NET_MVNETA_DEAD,
···
 	CPUHP_AP_BASE_CACHEINFO_ONLINE,
 	CPUHP_AP_ONLINE_DYN,
 	CPUHP_AP_ONLINE_DYN_END		= CPUHP_AP_ONLINE_DYN + 30,
+	/* Must be after CPUHP_AP_ONLINE_DYN for node_states[N_CPU] update */
+	CPUHP_AP_MM_DEMOTION_ONLINE,
 	CPUHP_AP_X86_HPET_ONLINE,
 	CPUHP_AP_X86_KVM_CLK_ONLINE,
 	CPUHP_AP_DTPM_CPU_ONLINE,
include/linux/elfcore.h (+1, -1)
···
 #endif
 }

-#if defined(CONFIG_UM) || defined(CONFIG_IA64)
+#if (defined(CONFIG_UML) && defined(CONFIG_X86_32)) || defined(CONFIG_IA64)
 /*
  * These functions parameterize elf_core_dump in fs/binfmt_elf.c to write out
  * extra segments containing the gate DSO contents.  Dumping its
include/linux/genhd.h (+1)
···
 	unsigned long state;
 #define GD_NEED_PART_SCAN		0
 #define GD_READ_ONLY			1
+#define GD_DEAD				2

 	struct mutex open_mutex;	/* open/close mutex */
 	unsigned open_partitions;	/* number of open partitions */
include/linux/memory.h (+4, -1)
···
 #define register_hotmemory_notifier(nb)		register_memory_notifier(nb)
 #define unregister_hotmemory_notifier(nb) 	unregister_memory_notifier(nb)
 #else
-#define hotplug_memory_notifier(fn, pri)	({ 0; })
+static inline int hotplug_memory_notifier(notifier_fn_t fn, int pri)
+{
+	return 0;
+}
 /* These aren't inline functions due to a GCC bug. */
 #define register_hotmemory_notifier(nb)    ({ (void)(nb); 0; })
 #define unregister_hotmemory_notifier(nb)  ({ (void)(nb); })
···
 	/* I/O mutex */
 	struct mutex		io_mutex;

+	/* Used to avoid adding the same CS twice */
+	struct mutex		add_lock;
+
 	/* lock and mutex for SPI bus locking */
 	spinlock_t		bus_lock_spinlock;
 	struct mutex		bus_lock_mutex;
include/linux/trace_recursion.h (+9, -40)
···
  * When function tracing occurs, the following steps are made:
  *   If arch does not support a ftrace feature:
  *    call internal function (uses INTERNAL bits) which calls...
- *   If callback is registered to the "global" list, the list
- *    function is called and recursion checks the GLOBAL bits.
- *    then this function calls...
  *   The function callback, which can use the FTRACE bits to
  *    check for recursion.
- *
- * Now if the arch does not support a feature, and it calls
- * the global list function which calls the ftrace callback
- * all three of these steps will do a recursion protection.
- * There's no reason to do one if the previous caller already
- * did. The recursion that we are protecting against will
- * go through the same steps again.
- *
- * To prevent the multiple recursion checks, if a recursion
- * bit is set that is higher than the MAX bit of the current
- * check, then we know that the check was made by the previous
- * caller, and we can skip the current check.
  */
 enum {
 	/* Function recursion bits */
···
 	TRACE_FTRACE_NMI_BIT,
 	TRACE_FTRACE_IRQ_BIT,
 	TRACE_FTRACE_SIRQ_BIT,
+	TRACE_FTRACE_TRANSITION_BIT,

-	/* INTERNAL_BITs must be greater than FTRACE_BITs */
+	/* Internal use recursion bits */
 	TRACE_INTERNAL_BIT,
 	TRACE_INTERNAL_NMI_BIT,
 	TRACE_INTERNAL_IRQ_BIT,
 	TRACE_INTERNAL_SIRQ_BIT,
+	TRACE_INTERNAL_TRANSITION_BIT,

 	TRACE_BRANCH_BIT,
/*
···
 	 */
 	TRACE_GRAPH_NOTRACE_BIT,

-	/*
-	 * When transitioning between context, the preempt_count() may
-	 * not be correct. Allow for a single recursion to cover this case.
-	 */
-	TRACE_TRANSITION_BIT,
-
 	/* Used to prevent recursion recording from recursing. */
 	TRACE_RECORD_RECURSION_BIT,
 };
···
 #define TRACE_CONTEXT_BITS	4

 #define TRACE_FTRACE_START	TRACE_FTRACE_BIT
-#define TRACE_FTRACE_MAX	((1 << (TRACE_FTRACE_START + TRACE_CONTEXT_BITS)) - 1)

 #define TRACE_LIST_START	TRACE_INTERNAL_BIT
-#define TRACE_LIST_MAX		((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1)

-#define TRACE_CONTEXT_MASK	TRACE_LIST_MAX
+#define TRACE_CONTEXT_MASK	((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1)

 /*
  * Used for setting context
···
 	TRACE_CTX_IRQ,
 	TRACE_CTX_SOFTIRQ,
 	TRACE_CTX_NORMAL,
+	TRACE_CTX_TRANSITION,
 };

 static __always_inline int trace_get_context_bit(void)
···
 #endif

 static __always_inline int trace_test_and_set_recursion(unsigned long ip, unsigned long pip,
-							int start, int max)
+							int start)
 {
 	unsigned int val = READ_ONCE(current->trace_recursion);
 	int bit;
-
-	/* A previous recursion check was made */
-	if ((val & TRACE_CONTEXT_MASK) > max)
-		return 0;

 	bit = trace_get_context_bit() + start;
 	if (unlikely(val & (1 << bit))) {
···
 		 * It could be that preempt_count has not been updated during
 		 * a switch between contexts. Allow for a single recursion.
 		 */
-		bit = TRACE_TRANSITION_BIT;
+		bit = TRACE_CTX_TRANSITION + start;
 		if (val & (1 << bit)) {
 			do_ftrace_record_recursion(ip, pip);
 			return -1;
 		}
-	} else {
-		/* Normal check passed, clear the transition to allow it again */
-		val &= ~(1 << TRACE_TRANSITION_BIT);
 	}

 	val |= 1 << bit;
 	current->trace_recursion = val;
 	barrier();

-	return bit + 1;
+	return bit;
 }

 static __always_inline void trace_clear_recursion(int bit)
 {
-	if (!bit)
-		return;
-
 	barrier();
-	bit--;
 	trace_recursion_clear(bit);
 }
···
 static __always_inline int ftrace_test_recursion_trylock(unsigned long ip,
							 unsigned long parent_ip)
 {
-	return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX);
+	return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START);
 }

 /**
···
 	struct sock	sk;

 	/* bind() params */
-	int		bind_net;
+	unsigned int	bind_net;
 	mctp_eid_t	bind_addr;
 	__u8		bind_type;
include/net/sctp/sm.h (+3, -3)
···
 	 * Verification Tag value does not match the receiver's own
 	 * tag value, the receiver shall silently discard the packet...
 	 */
-	if (ntohl(chunk->sctp_hdr->vtag) == asoc->c.my_vtag)
-		return 1;
+	if (ntohl(chunk->sctp_hdr->vtag) != asoc->c.my_vtag)
+		return 0;

 	chunk->transport->encap_port = SCTP_INPUT_CB(chunk->skb)->encap_port;
-	return 0;
+	return 1;
}

/* Check VTAG of the packet matches the sender's own tag and the T bit is
include/net/tcp.h (+3, -2)
···
 	u8 keylen;
 	u8 family; /* AF_INET or AF_INET6 */
 	u8 prefixlen;
+	u8 flags;
 	union tcp_md5_addr addr;
 	int l3index; /* set if key added with L3 scope */
 	u8 key[TCP_MD5SIG_MAXKEYLEN];
···
 int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
 			const struct sock *sk, const struct sk_buff *skb);
 int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
-		   int family, u8 prefixlen, int l3index,
+		   int family, u8 prefixlen, int l3index, u8 flags,
 		   const u8 *newkey, u8 newkeylen, gfp_t gfp);
 int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr,
-		   int family, u8 prefixlen, int l3index);
+		   int family, u8 prefixlen, int l3index, u8 flags);
 struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk,
 					 const struct sock *addr_sk);
···
 #define __UAPI_MCTP_H

 #include <linux/types.h>
+#include <linux/socket.h>

 typedef __u8			mctp_eid_t;
···
 };

 struct sockaddr_mctp {
-	unsigned short int	smctp_family;
-	int			smctp_network;
+	__kernel_sa_family_t	smctp_family;
+	__u16			__smctp_pad0;
+	unsigned int		smctp_network;
 	struct mctp_addr	smctp_addr;
 	__u8			smctp_type;
 	__u8			smctp_tag;
+	__u8			__smctp_pad1;
 };

 #define MCTP_NET_ANY	0x0
include/uapi/misc/habanalabs.h (+2, -4)
···
 #define HL_WAIT_CS_STATUS_BUSY		1
 #define HL_WAIT_CS_STATUS_TIMEDOUT	2
 #define HL_WAIT_CS_STATUS_ABORTED	3
-#define HL_WAIT_CS_STATUS_INTERRUPTED	4

 #define HL_WAIT_CS_STATUS_FLAG_GONE		0x1
 #define HL_WAIT_CS_STATUS_FLAG_TIMESTAMP_VLD	0x2
···
  * EIO       - The CS was aborted (usually because the device was reset)
  * ENODEV    - The device wants to do hard-reset (so user need to close FD)
  *
- * The driver also returns a custom define inside the IOCTL which can be:
+ * The driver also returns a custom define in case the IOCTL call returned 0.
+ * The define can be one of the following:
  *
  * HL_WAIT_CS_STATUS_COMPLETED   - The CS has been completed successfully (0)
  * HL_WAIT_CS_STATUS_BUSY        - The CS is still executing (0)
···
  *                                 (ETIMEDOUT)
  * HL_WAIT_CS_STATUS_ABORTED     - The CS was aborted, usually because the
  *                                 device was reset (EIO)
- * HL_WAIT_CS_STATUS_INTERRUPTED - Waiting for the CS was interrupted (EINTR)
- *
  */

 #define HL_IOCTL_WAIT_CS \
init/main.c (+1)
···
 	ret = xbc_snprint_cmdline(new_cmdline, len + 1, root);
 	if (ret < 0 || ret > len) {
 		pr_err("Failed to print extra kernel cmdline.\n");
+		memblock_free_ptr(new_cmdline, len + 1);
 		return NULL;
 	}
kernel/auditsc.c (+1, -1)
···
 		result = audit_comparator(audit_loginuid_set(tsk), f->op, f->val);
 		break;
 	case AUDIT_SADDR_FAM:
-		if (ctx->sockaddr)
+		if (ctx && ctx->sockaddr)
 			result = audit_comparator(ctx->sockaddr->ss_family,
 						  f->op, f->val);
 		break;
···
  * Wrapper function for adding an entry to the hash.
  * This function takes care of locking itself.
  */
-static void add_dma_entry(struct dma_debug_entry *entry)
+static void add_dma_entry(struct dma_debug_entry *entry, unsigned long attrs)
 {
 	struct hash_bucket *bucket;
 	unsigned long flags;
···
 	if (rc == -ENOMEM) {
 		pr_err("cacheline tracking ENOMEM, dma-debug disabled\n");
 		global_disable = true;
-	} else if (rc == -EEXIST) {
+	} else if (rc == -EEXIST && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
 		err_printk(entry->dev, entry,
 			   "cacheline tracking EEXIST, overlapping mappings aren't supported\n");
 	}
···
 EXPORT_SYMBOL(debug_dma_map_single);

 void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
-			size_t size, int direction, dma_addr_t dma_addr)
+			size_t size, int direction, dma_addr_t dma_addr,
+			unsigned long attrs)
 {
 	struct dma_debug_entry *entry;
···
 		check_for_illegal_area(dev, addr, size);
 	}

-	add_dma_entry(entry);
+	add_dma_entry(entry, attrs);
 }

 void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
···
 }

 void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
-		      int nents, int mapped_ents, int direction)
+		      int nents, int mapped_ents, int direction,
+		      unsigned long attrs)
 {
 	struct dma_debug_entry *entry;
 	struct scatterlist *s;
···
 	if (unlikely(dma_debug_disabled()))
 		return;
+
+	for_each_sg(sg, s, nents, i) {
+		check_for_stack(dev, sg_page(s), s->offset);
+		if (!PageHighMem(sg_page(s)))
+			check_for_illegal_area(dev, sg_virt(s), s->length);
+	}

 	for_each_sg(sg, s, mapped_ents, i) {
 		entry = dma_entry_alloc();
···
 		entry->sg_call_ents   = nents;
 		entry->sg_mapped_ents = mapped_ents;

-		check_for_stack(dev, sg_page(s), s->offset);
-
-		if (!PageHighMem(sg_page(s))) {
-			check_for_illegal_area(dev, sg_virt(s), sg_dma_len(s));
-		}
-
 		check_sg_segment(dev, s);

-		add_dma_entry(entry);
+		add_dma_entry(entry, attrs);
 	}
 }
···
 }

 void debug_dma_alloc_coherent(struct device *dev, size_t size,
-			      dma_addr_t dma_addr, void *virt)
+			      dma_addr_t dma_addr, void *virt,
+			      unsigned long attrs)
 {
 	struct dma_debug_entry *entry;
···
 	else
 		entry->pfn = page_to_pfn(virt_to_page(virt));

-	add_dma_entry(entry);
+	add_dma_entry(entry, attrs);
 }

 void debug_dma_free_coherent(struct device *dev, size_t size,
···
 }

 void debug_dma_map_resource(struct device *dev, phys_addr_t addr, size_t size,
-			    int direction, dma_addr_t dma_addr)
+			    int direction, dma_addr_t dma_addr,
+			    unsigned long attrs)
 {
 	struct dma_debug_entry *entry;
···
 	entry->direction	= direction;
 	entry->map_err_type	= MAP_ERR_NOT_CHECKED;

-	add_dma_entry(entry);
+	add_dma_entry(entry, attrs);
 }

 void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr,
kernel/dma/debug.h (+16, -8)
···
 #ifdef CONFIG_DMA_API_DEBUG
 extern void debug_dma_map_page(struct device *dev, struct page *page,
 			       size_t offset, size_t size,
-			       int direction, dma_addr_t dma_addr);
+			       int direction, dma_addr_t dma_addr,
+			       unsigned long attrs);

 extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
 				 size_t size, int direction);

 extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
-			     int nents, int mapped_ents, int direction);
+			     int nents, int mapped_ents, int direction,
+			     unsigned long attrs);

 extern void debug_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
 			       int nelems, int dir);

 extern void debug_dma_alloc_coherent(struct device *dev, size_t size,
-				     dma_addr_t dma_addr, void *virt);
+				     dma_addr_t dma_addr, void *virt,
+				     unsigned long attrs);

 extern void debug_dma_free_coherent(struct device *dev, size_t size,
 				    void *virt, dma_addr_t addr);

 extern void debug_dma_map_resource(struct device *dev, phys_addr_t addr,
 				   size_t size, int direction,
-				   dma_addr_t dma_addr);
+				   dma_addr_t dma_addr,
+				   unsigned long attrs);

 extern void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr,
 				     size_t size, int direction);
···
 #else /* CONFIG_DMA_API_DEBUG */
 static inline void debug_dma_map_page(struct device *dev, struct page *page,
 				      size_t offset, size_t size,
-				      int direction, dma_addr_t dma_addr)
+				      int direction, dma_addr_t dma_addr,
+				      unsigned long attrs)
 {
 }
···
 }

 static inline void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
-				    int nents, int mapped_ents, int direction)
+				    int nents, int mapped_ents, int direction,
+				    unsigned long attrs)
 {
 }
···
 }

 static inline void debug_dma_alloc_coherent(struct device *dev, size_t size,
-					    dma_addr_t dma_addr, void *virt)
+					    dma_addr_t dma_addr, void *virt,
+					    unsigned long attrs)
 {
 }
···
 static inline void debug_dma_map_resource(struct device *dev, phys_addr_t addr,
 					  size_t size, int direction,
-					  dma_addr_t dma_addr)
+					  dma_addr_t dma_addr,
+					  unsigned long attrs)
 {
 }
+12-12
kernel/dma/mapping.c
···
		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
	else
		addr = ops->map_page(dev, page, offset, size, dir, attrs);
-	debug_dma_map_page(dev, page, offset, size, dir, addr);
+	debug_dma_map_page(dev, page, offset, size, dir, addr, attrs);

	return addr;
}
···
		ents = ops->map_sg(dev, sg, nents, dir, attrs);

	if (ents > 0)
-		debug_dma_map_sg(dev, sg, nents, ents, dir);
+		debug_dma_map_sg(dev, sg, nents, ents, dir, attrs);
	else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM &&
			      ents != -EIO))
		return -EIO;
···
 * Returns 0 on success or a negative error code on error. The following
 * error codes are supported with the given meaning:
 *
- *   -EINVAL - An invalid argument, unaligned access or other error
- *	       in usage. Will not succeed if retried.
- *   -ENOMEM - Insufficient resources (like memory or IOVA space) to
- *	       complete the mapping. Should succeed if retried later.
- *   -EIO    - Legacy error code with an unknown meaning. eg. this is
- *	       returned if a lower level call returned DMA_MAPPING_ERROR.
+ *   -EINVAL	An invalid argument, unaligned access or other error
+ *		in usage. Will not succeed if retried.
+ *   -ENOMEM	Insufficient resources (like memory or IOVA space) to
+ *		complete the mapping. Should succeed if retried later.
+ *   -EIO	Legacy error code with an unknown meaning. eg. this is
+ *		returned if a lower level call returned DMA_MAPPING_ERROR.
 */
int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
		    enum dma_data_direction dir, unsigned long attrs)
···
	else if (ops->map_resource)
		addr = ops->map_resource(dev, phys_addr, size, dir, attrs);

-	debug_dma_map_resource(dev, phys_addr, size, dir, addr);
+	debug_dma_map_resource(dev, phys_addr, size, dir, addr, attrs);
	return addr;
}
EXPORT_SYMBOL(dma_map_resource);
···
	else
		return NULL;

-	debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr);
+	debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr, attrs);
	return cpu_addr;
}
EXPORT_SYMBOL(dma_alloc_attrs);
···
	struct page *page = __dma_alloc_pages(dev, size, dma_handle, dir, gfp);

	if (page)
-		debug_dma_map_page(dev, page, 0, size, dir, *dma_handle);
+		debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0);
	return page;
}
EXPORT_SYMBOL_GPL(dma_alloc_pages);
···

	if (sgt) {
		sgt->nents = 1;
-		debug_dma_map_sg(dev, sgt->sgl, sgt->orig_nents, 1, dir);
+		debug_dma_map_sg(dev, sgt->sgl, sgt->orig_nents, 1, dir, attrs);
	}
	return sgt;
}
+6-19
kernel/signal.c
···
	 */
	rcu_read_lock();
	ucounts = task_ucounts(t);
-	sigpending = inc_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1);
-	switch (sigpending) {
-	case 1:
-		if (likely(get_ucounts(ucounts)))
-			break;
-		fallthrough;
-	case LONG_MAX:
-		/*
-		 * we need to decrease the ucount in the userns tree on any
-		 * failure to avoid counts leaking.
-		 */
-		dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1);
-		rcu_read_unlock();
-		return NULL;
-	}
+	sigpending = inc_rlimit_get_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING);
	rcu_read_unlock();
+	if (!sigpending)
+		return NULL;

	if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
		q = kmem_cache_alloc(sigqueue_cachep, gfp_flags);
···
	}

	if (unlikely(q == NULL)) {
-		if (dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1))
-			put_ucounts(ucounts);
+		dec_rlimit_put_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING);
	} else {
		INIT_LIST_HEAD(&q->list);
		q->flags = sigqueue_flags;
···
{
	if (q->flags & SIGQUEUE_PREALLOC)
		return;
-	if (q->ucounts && dec_rlimit_ucounts(q->ucounts, UCOUNT_RLIMIT_SIGPENDING, 1)) {
-		put_ucounts(q->ucounts);
+	if (q->ucounts) {
+		dec_rlimit_put_ucounts(q->ucounts, UCOUNT_RLIMIT_SIGPENDING);
		q->ucounts = NULL;
	}
	kmem_cache_free(sigqueue_cachep, q);
+2-2
kernel/trace/ftrace.c
···
	struct ftrace_ops *op;
	int bit;

-	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START, TRACE_LIST_MAX);
+	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START);
	if (bit < 0)
		return;
···
{
	int bit;

-	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START, TRACE_LIST_MAX);
+	bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START);
	if (bit < 0)
		return;
+4-7
kernel/trace/trace.c
···
	irq_work_queue(&tr->fsnotify_irqwork);
}

-/*
- * (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
- *  defined(CONFIG_FSNOTIFY)
- */
-#else
+#elif defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER) \
+	|| defined(CONFIG_OSNOISE_TRACER)

#define trace_create_maxlat_file(tr, d_tracer)				\
	trace_create_file("tracing_max_latency", 0644, d_tracer,	\
			  &tr->max_latency, &tracing_max_lat_fops)

+#else
+#define trace_create_maxlat_file(tr, d_tracer)	 do { } while (0)
#endif

#ifdef CONFIG_TRACER_MAX_TRACE
···

	create_trace_options_dir(tr);

-#if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
	trace_create_maxlat_file(tr, d_tracer);
-#endif

	if (ftrace_create_function_files(tr, d_tracer))
		MEM_FAIL(1, "Could not allocate function filter files");
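The trace.c hunk above removes the `#if`/`#endif` at the call site by defining the helper as a no-op macro when the feature is compiled out. A minimal userspace sketch of that idiom, with purely illustrative names (`FEATURE_ENABLED`, `create_feature_file`, `struct tracer` are not kernel API):

```c
#include <assert.h>

/* Sketch: define the helper as a no-op macro when the feature is
 * compiled out, so call sites need no #ifdef of their own. */
#define FEATURE_ENABLED 0

#if FEATURE_ENABLED
#define create_feature_file(tr) do { (tr)->files++; } while (0)
#else
/* do { } while (0) keeps the empty expansion safe as the sole
 * statement of an if/else branch */
#define create_feature_file(tr) do { } while (0)
#endif

struct tracer { int files; };

static int setup(struct tracer *tr)
{
	if (tr)
		create_feature_file(tr);	/* expands to a no-op here */
	else
		return -1;
	return 0;
}
```

The `do { } while (0)` wrapper matters: a plain empty expansion would leave a dangling `;` that breaks `if (x) macro(); else ...`.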
+58-3
kernel/trace/trace_eprobe.c
···
			int argc, const char **argv, struct dyn_event *ev)
{
	struct trace_eprobe *ep = to_trace_eprobe(ev);
+	const char *slash;

-	return strcmp(trace_probe_name(&ep->tp), event) == 0 &&
-	    (!system || strcmp(trace_probe_group_name(&ep->tp), system) == 0) &&
-	    trace_probe_match_command_args(&ep->tp, argc, argv);
+	/*
+	 * We match the following:
+	 *  event only			- match all eprobes with event name
+	 *  system and event only	- match all system/event probes
+	 *
+	 * The below has the above satisfied with more arguments:
+	 *
+	 *  attached system/event	- If the arg has the system and event
+	 *				  the probe is attached to, match
+	 *				  probes with the attachment.
+	 *
+	 *  If any more args are given, then it requires a full match.
+	 */
+
+	/*
+	 * If system exists, but this probe is not part of that system
+	 * do not match.
+	 */
+	if (system && strcmp(trace_probe_group_name(&ep->tp), system) != 0)
+		return false;
+
+	/* Must match the event name */
+	if (strcmp(trace_probe_name(&ep->tp), event) != 0)
+		return false;
+
+	/* No arguments match all */
+	if (argc < 1)
+		return true;
+
+	/* First argument is the system/event the probe is attached to */
+
+	slash = strchr(argv[0], '/');
+	if (!slash)
+		slash = strchr(argv[0], '.');
+	if (!slash)
+		return false;
+
+	if (strncmp(ep->event_system, argv[0], slash - argv[0]))
+		return false;
+	if (strcmp(ep->event_name, slash + 1))
+		return false;
+
+	argc--;
+	argv++;
+
+	/* If there are no other args, then match */
+	if (argc < 1)
+		return true;
+
+	return trace_probe_match_command_args(&ep->tp, argc, argv);
}

static struct dyn_event_operations eprobe_dyn_event_ops = {
···

	trace_event_trigger_enable_disable(file, 0);
	update_cond_flag(file);
+
+	/* Make sure nothing is using the edata or trigger */
+	tracepoint_synchronize_unregister();
+
+	kfree(edata);
+	kfree(trigger);
+
	return 0;
}
+1-1
kernel/trace/trace_events_hist.c
···
 * events. However, for convenience, users are allowed to directly
 * specify an event field in an action, which will be automatically
 * converted into a variable on their behalf.
-
+ *
 * If a user specifies a field on an event that isn't the event the
 * histogram currently being defined (the target event histogram), the
 * only way that can be accomplished is if a new hist trigger is
+49
kernel/ucount.c
···
	return (new == 0);
}

+static void do_dec_rlimit_put_ucounts(struct ucounts *ucounts,
+				struct ucounts *last, enum ucount_type type)
+{
+	struct ucounts *iter, *next;
+	for (iter = ucounts; iter != last; iter = next) {
+		long dec = atomic_long_add_return(-1, &iter->ucount[type]);
+		WARN_ON_ONCE(dec < 0);
+		next = iter->ns->ucounts;
+		if (dec == 0)
+			put_ucounts(iter);
+	}
+}
+
+void dec_rlimit_put_ucounts(struct ucounts *ucounts, enum ucount_type type)
+{
+	do_dec_rlimit_put_ucounts(ucounts, NULL, type);
+}
+
+long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum ucount_type type)
+{
+	/* Caller must hold a reference to ucounts */
+	struct ucounts *iter;
+	long dec, ret = 0;
+
+	for (iter = ucounts; iter; iter = iter->ns->ucounts) {
+		long max = READ_ONCE(iter->ns->ucount_max[type]);
+		long new = atomic_long_add_return(1, &iter->ucount[type]);
+		if (new < 0 || new > max)
+			goto unwind;
+		if (iter == ucounts)
+			ret = new;
+		/*
+		 * Grab an extra ucount reference for the caller when
+		 * the rlimit count was previously 0.
+		 */
+		if (new != 1)
+			continue;
+		if (!get_ucounts(iter))
+			goto dec_unwind;
+	}
+	return ret;
+dec_unwind:
+	dec = atomic_long_add_return(-1, &iter->ucount[type]);
+	WARN_ON_ONCE(dec < 0);
+unwind:
+	do_dec_rlimit_put_ucounts(ucounts, iter, type);
+	return 0;
+}
+
bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long max)
{
	struct ucounts *iter;
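The new `inc_rlimit_get_ucounts()` above increments a counter at every level of the parent-linked ucounts hierarchy and unwinds every increment already taken if any level hits its limit. A minimal userspace sketch of that increment-then-unwind pattern, without the kernel's atomics and reference counting (`struct counter` and `chain_inc` are illustrative, not kernel API):

```c
#include <stddef.h>

/* One counter per hierarchy level, linked toward the root. */
struct counter {
	long count;
	long max;
	struct counter *parent;
};

/* Increment every level against its max; on failure, undo all
 * increments taken so far and report 0, as inc_rlimit_get_ucounts()
 * does.  Returns the innermost level's new count on success. */
static long chain_inc(struct counter *c)
{
	struct counter *iter, *failed = NULL;
	long ret = 0;

	for (iter = c; iter; iter = iter->parent) {
		long new = ++iter->count;
		if (new > iter->max) {
			failed = iter;
			break;
		}
		if (iter == c)
			ret = new;	/* value seen at the charged level */
	}
	if (!failed)
		return ret;

	/* Unwind: levels below the failing one, then the failing one. */
	for (iter = c; iter != failed; iter = iter->parent)
		iter->count--;
	failed->count--;
	return 0;
}
```

The kernel version additionally takes a `ucounts` reference whenever a level goes 0→1 and drops it on 1→0, which is what pairs it with `dec_rlimit_put_ucounts()` in the signal.c hunk.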
+4-2
mm/huge_memory.c
···
	if (mapping) {
		int nr = thp_nr_pages(head);

-		if (PageSwapBacked(head))
+		if (PageSwapBacked(head)) {
			__mod_lruvec_page_state(head, NR_SHMEM_THPS,
						-nr);
-		else
+		} else {
			__mod_lruvec_page_state(head, NR_FILE_THPS,
						-nr);
+			filemap_nr_thps_dec(mapping);
+		}
	}

	__split_huge_page(page, list, end);
+4-1
mm/memblock.c
···
 * covered by the memory map. The struct page representing NOMAP memory
 * frames in the memory map will be PageReserved()
 *
+ * Note: if the memory being marked %MEMBLOCK_NOMAP was allocated from
+ * memblock, the caller must inform kmemleak to ignore that memory
+ *
 * Return: 0 on success, -errno on failure.
 */
int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size)
···
	if (!size)
		return;

-	if (memblock.memory.cnt <= 1) {
+	if (!memblock_memory->total_size) {
		pr_warn("%s: No memory registered yet\n", __func__);
		return;
	}
+5-11
mm/mempolicy.c
···
		goto out;
	}

-	if (flags & MPOL_F_NUMA_BALANCING) {
-		if (new && new->mode == MPOL_BIND) {
-			new->flags |= (MPOL_F_MOF | MPOL_F_MORON);
-		} else {
-			ret = -EINVAL;
-			mpol_put(new);
-			goto out;
-		}
-	}
-
	ret = mpol_set_nodemask(new, nodes, scratch);
	if (ret) {
		mpol_put(new);
···
		return -EINVAL;
	if ((*flags & MPOL_F_STATIC_NODES) && (*flags & MPOL_F_RELATIVE_NODES))
		return -EINVAL;
-
+	if (*flags & MPOL_F_NUMA_BALANCING) {
+		if (*mode != MPOL_BIND)
+			return -EINVAL;
+		*flags |= (MPOL_F_MOF | MPOL_F_MORON);
+	}
	return 0;
}
+37-25
mm/migrate.c
···
EXPORT_SYMBOL(migrate_vma_finalize);
#endif /* CONFIG_DEVICE_PRIVATE */

-#if defined(CONFIG_MEMORY_HOTPLUG)
+#if defined(CONFIG_HOTPLUG_CPU)
/* Disable reclaim-based migration. */
static void __disable_all_migrate_targets(void)
{
···
}

/*
- * React to hotplug events that might affect the migration targets
- * like events that online or offline NUMA nodes.
- *
- * The ordering is also currently dependent on which nodes have
- * CPUs. That means we need CPU on/offline notification too.
- */
-static int migration_online_cpu(unsigned int cpu)
-{
-	set_migration_target_nodes();
-	return 0;
-}
-
-static int migration_offline_cpu(unsigned int cpu)
-{
-	set_migration_target_nodes();
-	return 0;
-}
-
-/*
 * This leaves migrate-on-reclaim transiently disabled between
 * the MEM_GOING_OFFLINE and MEM_OFFLINE events. This runs
 * whether reclaim-based migration is enabled or not, which
···
 * set_migration_target_nodes().
 */
static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
-						 unsigned long action, void *arg)
+						 unsigned long action, void *_arg)
{
+	struct memory_notify *arg = _arg;
+
+	/*
+	 * Only update the node migration order when a node is
+	 * changing status, like online->offline.  This avoids
+	 * the overhead of synchronize_rcu() in most cases.
+	 */
+	if (arg->status_change_nid < 0)
+		return notifier_from_errno(0);
+
	switch (action) {
	case MEM_GOING_OFFLINE:
		/*
···
	return notifier_from_errno(0);
}

+/*
+ * React to hotplug events that might affect the migration targets
+ * like events that online or offline NUMA nodes.
+ *
+ * The ordering is also currently dependent on which nodes have
+ * CPUs. That means we need CPU on/offline notification too.
+ */
+static int migration_online_cpu(unsigned int cpu)
+{
+	set_migration_target_nodes();
+	return 0;
+}
+
+static int migration_offline_cpu(unsigned int cpu)
+{
+	set_migration_target_nodes();
+	return 0;
+}
+
static int __init migrate_on_reclaim_init(void)
{
	int ret;

-	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
-				migration_online_cpu,
-				migration_offline_cpu);
+	ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_DEAD, "mm/demotion:offline",
+					NULL, migration_offline_cpu);
	/*
	 * In the unlikely case that this fails, the automatic
	 * migration targets may become suboptimal for nodes
···
	 * rare case, do not bother trying to do anything special.
	 */
	WARN_ON(ret < 0);
+	ret = cpuhp_setup_state(CPUHP_AP_MM_DEMOTION_ONLINE, "mm/demotion:online",
+				migration_online_cpu, NULL);
+	WARN_ON(ret < 0);

	hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
	return 0;
}
late_initcall(migrate_on_reclaim_init);
-#endif /* CONFIG_MEMORY_HOTPLUG */
+#endif /* CONFIG_HOTPLUG_CPU */
+2-2
mm/slab.c
···
	return 0;
}

-#if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG)
+#if defined(CONFIG_NUMA)
/*
 * Drains freelist for a node on each slab cache, used for memory hot-remove.
 * Returns -EBUSY if all objects cannot be drained so that the node is not
···
out:
	return notifier_from_errno(ret);
}
-#endif /* CONFIG_NUMA && CONFIG_MEMORY_HOTPLUG */
+#endif /* CONFIG_NUMA */

/*
 * swap the static kmem_cache_node with kmalloced memory
+25-8
mm/slub.c
···
}

static inline bool slab_free_freelist_hook(struct kmem_cache *s,
-					   void **head, void **tail)
+					   void **head, void **tail,
+					   int *cnt)
{

	void *object;
···
			*head = object;
			if (!*tail)
				*tail = object;
+		} else {
+			/*
+			 * Adjust the reconstructed freelist depth
+			 * accordingly if object's reuse is delayed.
+			 */
+			--(*cnt);
		}
	} while (object != old_tail);
···
	struct kmem_cache_cpu *c;
	unsigned long tid;

-	memcg_slab_free_hook(s, &head, 1);
+	/* memcg_slab_free_hook() is already called for bulk free. */
+	if (!tail)
+		memcg_slab_free_hook(s, &head, 1);
redo:
	/*
	 * Determine the currently cpus per cpu slab.
···
	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
	 * to remove objects, whose reuse must be delayed.
	 */
-	if (slab_free_freelist_hook(s, &head, &tail))
+	if (slab_free_freelist_hook(s, &head, &tail, &cnt))
		do_slab_free(s, page, head, tail, cnt, addr);
}
···
	if (alloc_kmem_cache_cpus(s))
		return 0;

-	free_kmem_cache_nodes(s);
error:
+	__kmem_cache_release(s);
	return -EINVAL;
}
···
		return 0;

	err = sysfs_slab_add(s);
-	if (err)
+	if (err) {
		__kmem_cache_release(s);
+		return err;
+	}

	if (s->flags & SLAB_STORE_USER)
		debugfs_slab_add(s);

-	return err;
+	return 0;
}

void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
···
	struct kmem_cache *s = file_inode(filep)->i_private;
	unsigned long *obj_map;

-	obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
-	if (!obj_map)
+	if (!t)
		return -ENOMEM;
+
+	obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
+	if (!obj_map) {
+		seq_release_private(inode, filep);
+		return -ENOMEM;
+	}

	if (strcmp(filep->f_path.dentry->d_name.name, "alloc_traces") == 0)
		alloc = TRACK_ALLOC;
···

	if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL)) {
		bitmap_free(obj_map);
+		seq_release_private(inode, filep);
		return -ENOMEM;
	}
+1-3
net/bridge/br_private.h
···

static inline unsigned long br_multicast_gmi(const struct net_bridge_mcast *brmctx)
{
-	/* use the RFC default of 2 for QRV */
-	return 2 * brmctx->multicast_query_interval +
-	       brmctx->multicast_query_response_interval;
+	return brmctx->multicast_membership_interval;
}

static inline bool
+3-1
net/bridge/netfilter/ebtables.c
···
			return -ENOMEM;
		for_each_possible_cpu(i) {
			newinfo->chainstack[i] =
-				vmalloc(array_size(udc_cnt, sizeof(*(newinfo->chainstack[0]))));
+				vmalloc_node(array_size(udc_cnt,
+						sizeof(*(newinfo->chainstack[0]))),
+					     cpu_to_node(i));
			if (!newinfo->chainstack[i]) {
				while (i)
					vfree(newinfo->chainstack[--i]);
+36-15
net/can/isotp.c
···
struct tpcon {
	int idx;
	int len;
-	u8 state;
+	u32 state;
	u8 bs;
	u8 sn;
	u8 ll_dl;
···
{
	struct sock *sk = sock->sk;
	struct isotp_sock *so = isotp_sk(sk);
+	u32 old_state = so->tx.state;
	struct sk_buff *skb;
	struct net_device *dev;
	struct canfd_frame *cf;
···
		return -EADDRNOTAVAIL;

	/* we do not support multiple buffers - for now */
-	if (so->tx.state != ISOTP_IDLE || wq_has_sleeper(&so->wait)) {
-		if (msg->msg_flags & MSG_DONTWAIT)
-			return -EAGAIN;
+	if (cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SENDING) != ISOTP_IDLE ||
+	    wq_has_sleeper(&so->wait)) {
+		if (msg->msg_flags & MSG_DONTWAIT) {
+			err = -EAGAIN;
+			goto err_out;
+		}

		/* wait for complete transmission of current pdu */
-		wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
+		err = wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
+		if (err)
+			goto err_out;
	}

-	if (!size || size > MAX_MSG_LENGTH)
-		return -EINVAL;
+	if (!size || size > MAX_MSG_LENGTH) {
+		err = -EINVAL;
+		goto err_out;
+	}

	/* take care of a potential SF_DL ESC offset for TX_DL > 8 */
	off = (so->tx.ll_dl > CAN_MAX_DLEN) ? 1 : 0;

	/* does the given data fit into a single frame for SF_BROADCAST? */
	if ((so->opt.flags & CAN_ISOTP_SF_BROADCAST) &&
-	    (size > so->tx.ll_dl - SF_PCI_SZ4 - ae - off))
-		return -EINVAL;
+	    (size > so->tx.ll_dl - SF_PCI_SZ4 - ae - off)) {
+		err = -EINVAL;
+		goto err_out;
+	}

	err = memcpy_from_msg(so->tx.buf, msg, size);
	if (err < 0)
-		return err;
+		goto err_out;

	dev = dev_get_by_index(sock_net(sk), so->ifindex);
-	if (!dev)
-		return -ENXIO;
+	if (!dev) {
+		err = -ENXIO;
+		goto err_out;
+	}

	skb = sock_alloc_send_skb(sk, so->ll.mtu + sizeof(struct can_skb_priv),
				  msg->msg_flags & MSG_DONTWAIT, &err);
	if (!skb) {
		dev_put(dev);
-		return err;
+		goto err_out;
	}

	can_skb_reserve(skb);
	can_skb_prv(skb)->ifindex = dev->ifindex;
	can_skb_prv(skb)->skbcnt = 0;

-	so->tx.state = ISOTP_SENDING;
	so->tx.len = size;
	so->tx.idx = 0;
···
	if (err) {
		pr_notice_once("can-isotp: %s: can_send_ret %pe\n",
			       __func__, ERR_PTR(err));
-		return err;
+		goto err_out;
	}

	if (wait_tx_done) {
		/* wait for complete transmission of current pdu */
		wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
+
+		if (sk->sk_err)
+			return -sk->sk_err;
	}

	return size;
+
+err_out:
+	so->tx.state = old_state;
+	if (so->tx.state == ISOTP_IDLE)
+		wake_up_interruptible(&so->wait);
+
+	return err;
}

static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
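The sendmsg() change above replaces a racy "check state, then set state" sequence with a single cmpxchg() that claims the transfer slot atomically, and every error path now restores the previous state. A simplified, single-threaded userspace sketch of the pattern (`try_send` and the states are illustrative; a GCC/Clang builtin stands in for the kernel's cmpxchg()):

```c
/* Two-state transfer slot, as in so->tx.state. */
enum { ISOTP_IDLE, ISOTP_SENDING };

static int tx_state = ISOTP_IDLE;

static int try_send(int fail)
{
	int old_state = tx_state;	/* remembered for the error path */

	/* Atomically move IDLE -> SENDING; if someone else already
	 * claimed the slot, back off instead of corrupting it. */
	if (__sync_val_compare_and_swap(&tx_state, ISOTP_IDLE,
					ISOTP_SENDING) != ISOTP_IDLE)
		return -1;		/* busy, like -EAGAIN */

	if (fail) {
		tx_state = old_state;	/* error path restores old state */
		return -2;
	}

	tx_state = ISOTP_IDLE;		/* transmission complete */
	return 0;
}
```

The point of the cmpxchg is that the test and the assignment happen as one atomic step, so two concurrent senders can no longer both observe `ISOTP_IDLE` and proceed.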
+1-1
net/netfilter/Kconfig
···
config NF_CONNTRACK_SECMARK
	bool  'Connection tracking security mark support'
	depends on NETWORK_SECMARK
-	default m if NETFILTER_ADVANCED=n
+	default y if NETFILTER_ADVANCED=n
	help
	  This option enables security markings to be applied to
	  connections.  Typically they are copied to connections from
+5
net/netfilter/ipvs/ip_vs_ctl.c
···
	tbl[idx++].data = &ipvs->sysctl_ignore_tunneled;
	ipvs->sysctl_run_estimation = 1;
	tbl[idx++].data = &ipvs->sysctl_run_estimation;
+#ifdef CONFIG_IP_VS_DEBUG
+	/* Global sysctls must be ro in non-init netns */
+	if (!net_eq(net, &init_net))
+		tbl[idx++].mode = 0444;
+#endif

	ipvs->sysctl_hdr = register_net_sysctl(net, "net/ipv4/vs", tbl);
	if (ipvs->sysctl_hdr == NULL) {
+3-6
net/netfilter/nft_chain_filter.c
···
		return;
	}

-	/* UNREGISTER events are also happening on netns exit.
-	 *
-	 * Although nf_tables core releases all tables/chains, only this event
-	 * handler provides guarantee that hook->ops.dev is still accessible,
-	 * so we cannot skip exiting net namespaces.
-	 */
	__nft_release_basechain(ctx);
}
···

	if (event != NETDEV_UNREGISTER &&
	    event != NETDEV_CHANGENAME)
+		return NOTIFY_DONE;
+
+	if (!check_net(ctx.net))
		return NOTIFY_DONE;

	nft_net = nft_pernet(ctx.net);
+1-1
net/netfilter/xt_IDLETIMER.c
···
{
	int ret;

-	info->timer = kmalloc(sizeof(*info->timer), GFP_KERNEL);
+	info->timer = kzalloc(sizeof(*info->timer), GFP_KERNEL);
	if (!info->timer) {
		ret = -ENOMEM;
		goto out;
+3-3
sound/soc/codecs/wcd938x.c
···
{
	struct wcd938x_priv *wcd = dev_get_drvdata(comp->dev);

-	if (!jack)
+	if (jack)
		return wcd_mbhc_start(wcd->wcd_mbhc, &wcd->mbhc_cfg, jack);
-
-	wcd_mbhc_stop(wcd->wcd_mbhc);
+	else
+		wcd_mbhc_stop(wcd->wcd_mbhc);

	return 0;
}
+10-3
sound/soc/codecs/wm8960.c
···
	int i, j, k;
	int ret;

-	if (!(iface1 & (1<<6))) {
-		dev_dbg(component->dev,
-			"Codec is slave mode, no need to configure clock\n");
+	/*
+	 * For Slave mode clocking should still be configured,
+	 * so this if statement should be removed, but some platform
+	 * may not work if the sysclk is not configured, to avoid such
+	 * compatible issue, just add '!wm8960->sysclk' condition in
+	 * this if statement.
+	 */
+	if (!(iface1 & (1 << 6)) && !wm8960->sysclk) {
+		dev_warn(component->dev,
+			 "slave mode, but proceeding with no clock configuration\n");
		return 0;
	}
+12-5
sound/soc/fsl/fsl_xcvr.c
···
		return ret;
	}

-	/* clear DPATH RESET */
+	/* set DPATH RESET */
	m_ctl |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
+	v_ctl |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
	ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL, m_ctl, v_ctl);
	if (ret < 0) {
		dev_err(dai->dev, "Error while setting EXT_CTRL: %d\n", ret);
···
		val |= FSL_XCVR_EXT_CTRL_CMDC_RESET(tx);
	}

-	/* set DPATH RESET */
-	mask |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
-	val |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx);
-
	ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL, mask, val);
	if (ret < 0) {
		dev_err(dai->dev, "Err setting DPATH RESET: %d\n", ret);
···
			dev_err(dai->dev, "Failed to enable DMA: %d\n", ret);
			return ret;
		}
+
+		/* clear DPATH RESET */
+		ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL,
+					 FSL_XCVR_EXT_CTRL_DPTH_RESET(tx),
+					 0);
+		if (ret < 0) {
+			dev_err(dai->dev, "Failed to clear DPATH RESET: %d\n", ret);
+			return ret;
+		}
+
		break;
	case SNDRV_PCM_TRIGGER_STOP:
	case SNDRV_PCM_TRIGGER_SUSPEND:
+12-25
sound/soc/intel/boards/bytcht_es8316.c
···

static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
{
+	struct device *dev = &pdev->dev;
	static const char * const mic_name[] = { "in1", "in2" };
+	struct snd_soc_acpi_mach *mach = dev_get_platdata(dev);
	struct property_entry props[MAX_NO_PROPS] = {};
	struct byt_cht_es8316_private *priv;
	const struct dmi_system_id *dmi_id;
-	struct device *dev = &pdev->dev;
-	struct snd_soc_acpi_mach *mach;
	struct fwnode_handle *fwnode;
	const char *platform_name;
	struct acpi_device *adev;
···
	if (!priv)
		return -ENOMEM;

-	mach = dev->platform_data;
	/* fix index of codec dai */
	for (i = 0; i < ARRAY_SIZE(byt_cht_es8316_dais); i++) {
		if (!strcmp(byt_cht_es8316_dais[i].codecs->name,
···
		put_device(&adev->dev);
		byt_cht_es8316_dais[dai_index].codecs->name = codec_name;
	} else {
-		dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
+		dev_err(dev, "Error cannot find '%s' dev\n", mach->id);
		return -ENXIO;
	}
···

	/* get the clock */
	priv->mclk = devm_clk_get(dev, "pmc_plt_clk_3");
-	if (IS_ERR(priv->mclk)) {
-		ret = PTR_ERR(priv->mclk);
-		dev_err(dev, "clk_get pmc_plt_clk_3 failed: %d\n", ret);
-		return ret;
-	}
+	if (IS_ERR(priv->mclk))
+		return dev_err_probe(dev, PTR_ERR(priv->mclk), "clk_get pmc_plt_clk_3 failed\n");

	/* get speaker enable GPIO */
	codec_dev = acpi_get_first_physical_node(adev);
···

	devm_acpi_dev_add_driver_gpios(codec_dev, byt_cht_es8316_gpios);
	priv->speaker_en_gpio =
-		gpiod_get_index(codec_dev, "speaker-enable", 0,
-				/* see comment in byt_cht_es8316_resume */
-				GPIOD_OUT_LOW | GPIOD_FLAGS_BIT_NONEXCLUSIVE);
-
+		gpiod_get_optional(codec_dev, "speaker-enable",
+				   /* see comment in byt_cht_es8316_resume() */
+				   GPIOD_OUT_LOW | GPIOD_FLAGS_BIT_NONEXCLUSIVE);
	if (IS_ERR(priv->speaker_en_gpio)) {
-		ret = PTR_ERR(priv->speaker_en_gpio);
-		switch (ret) {
-		case -ENOENT:
-			priv->speaker_en_gpio = NULL;
-			break;
-		default:
-			dev_err(dev, "get speaker GPIO failed: %d\n", ret);
-			fallthrough;
-		case -EPROBE_DEFER:
-			goto err_put_codec;
-		}
+		ret = dev_err_probe(dev, PTR_ERR(priv->speaker_en_gpio),
+				    "get speaker GPIO failed\n");
+		goto err_put_codec;
	}

	snprintf(components_string, sizeof(components_string),
···
	byt_cht_es8316_card.long_name = long_name;
#endif

-	sof_parent = snd_soc_acpi_sof_parent(&pdev->dev);
+	sof_parent = snd_soc_acpi_sof_parent(dev);

	/* set card and driver name */
	if (sof_parent) {
+9
sound/usb/quirks.c
···
		 */
		fp->attributes &= ~UAC_EP_CS_ATTR_FILL_MAX;
		break;
+	case USB_ID(0x1224, 0x2a25): /* Jieli Technology USB PHY 2.0 */
+		/* mic works only when ep packet size is set to wMaxPacketSize */
+		fp->attributes |= UAC_EP_CS_ATTR_FILL_MAX;
+		break;
+
	}
}
···
		   QUIRK_FLAG_GET_SAMPLE_RATE),
	DEVICE_FLG(0x2912, 0x30c8, /* Audioengine D1 */
		   QUIRK_FLAG_GET_SAMPLE_RATE),
+	DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+		   QUIRK_FLAG_IGNORE_CTL_ERROR),
	DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */
		   QUIRK_FLAG_GET_SAMPLE_RATE),
	DEVICE_FLG(0x534d, 0x2109, /* MacroSilicon MS2109 */
		   QUIRK_FLAG_ALIGN_TRANSFER),
+	DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
+		   QUIRK_FLAG_GET_SAMPLE_RATE),

	/* Vendor matches */
	VENDOR_FLG(0x045e, /* MS Lifecam */
+1-1
tools/kvm/kvm_stat/kvm_stat
···
        The fields are all available KVM debugfs files

        """
-        exempt_list = ['halt_poll_fail_ns', 'halt_poll_success_ns']
+        exempt_list = ['halt_poll_fail_ns', 'halt_poll_success_ns', 'halt_wait_ns']
        fields = [field for field in self.walkdir(PATH_DEBUGFS_KVM)[2]
                  if field not in exempt_list]
+3-3
tools/lib/perf/tests/test-evlist.c
···
		.type	= PERF_TYPE_SOFTWARE,
		.config	= PERF_COUNT_SW_TASK_CLOCK,
	};
-	int err, cpu, tmp;
+	int err, idx;

	cpus = perf_cpu_map__new(NULL);
	__T("failed to create cpus", cpus);
···
	perf_evlist__for_each_evsel(evlist, evsel) {
		cpus = perf_evsel__cpus(evsel);

-		perf_cpu_map__for_each_cpu(cpu, tmp, cpus) {
+		for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) {
			struct perf_counts_values counts = { .val = 0 };

-			perf_evsel__read(evsel, cpu, 0, &counts);
+			perf_evsel__read(evsel, idx, 0, &counts);
			__T("failed to read value for evsel", counts.val != 0);
		}
	}
+4-3
tools/lib/perf/tests/test-evsel.c
···
		.type	= PERF_TYPE_SOFTWARE,
		.config	= PERF_COUNT_SW_CPU_CLOCK,
	};
-	int err, cpu, tmp;
+	int err, idx;

	cpus = perf_cpu_map__new(NULL);
	__T("failed to create cpus", cpus);
···
	err = perf_evsel__open(evsel, cpus, NULL);
	__T("failed to open evsel", err == 0);

-	perf_cpu_map__for_each_cpu(cpu, tmp, cpus) {
+	for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) {
		struct perf_counts_values counts = { .val = 0 };

-		perf_evsel__read(evsel, cpu, 0, &counts);
+		perf_evsel__read(evsel, idx, 0, &counts);
		__T("failed to read value for evsel", counts.val != 0);
	}
···
	__T("failed to mmap evsel", err == 0);

	pc = perf_evsel__mmap_base(evsel, 0, 0);
+	__T("failed to get mmapped address", pc);

#if defined(__i386__) || defined(__x86_64__)
	__T("userspace counter access not supported", pc->cap_user_rdpmc);
tools/objtool/elf.c (+25, -31)
@@ -508,6 +508,7 @@
 	list_add_tail(&reloc->list, &sec->reloc->reloc_list);
 	elf_hash_add(reloc, &reloc->hash, reloc_hash(reloc));
 
+	sec->reloc->sh.sh_size += sec->reloc->sh.sh_entsize;
 	sec->reloc->changed = true;
 
 	return 0;
@@ -978,63 +977,63 @@
 	}
 }
 
-static int elf_rebuild_rel_reloc_section(struct section *sec, int nr)
+static int elf_rebuild_rel_reloc_section(struct section *sec)
 {
 	struct reloc *reloc;
-	int idx = 0, size;
+	int idx = 0;
 	void *buf;
 
 	/* Allocate a buffer for relocations */
-	size = nr * sizeof(GElf_Rel);
-	buf = malloc(size);
+	buf = malloc(sec->sh.sh_size);
 	if (!buf) {
 		perror("malloc");
 		return -1;
 	}
 
 	sec->data->d_buf = buf;
-	sec->data->d_size = size;
+	sec->data->d_size = sec->sh.sh_size;
 	sec->data->d_type = ELF_T_REL;
-
-	sec->sh.sh_size = size;
 
 	idx = 0;
 	list_for_each_entry(reloc, &sec->reloc_list, list) {
 		reloc->rel.r_offset = reloc->offset;
 		reloc->rel.r_info = GELF_R_INFO(reloc->sym->idx, reloc->type);
-		gelf_update_rel(sec->data, idx, &reloc->rel);
+		if (!gelf_update_rel(sec->data, idx, &reloc->rel)) {
+			WARN_ELF("gelf_update_rel");
+			return -1;
+		}
 		idx++;
 	}
 
 	return 0;
 }
 
-static int elf_rebuild_rela_reloc_section(struct section *sec, int nr)
+static int elf_rebuild_rela_reloc_section(struct section *sec)
 {
 	struct reloc *reloc;
-	int idx = 0, size;
+	int idx = 0;
 	void *buf;
 
 	/* Allocate a buffer for relocations with addends */
-	size = nr * sizeof(GElf_Rela);
-	buf = malloc(size);
+	buf = malloc(sec->sh.sh_size);
 	if (!buf) {
 		perror("malloc");
 		return -1;
 	}
 
 	sec->data->d_buf = buf;
-	sec->data->d_size = size;
+	sec->data->d_size = sec->sh.sh_size;
 	sec->data->d_type = ELF_T_RELA;
-
-	sec->sh.sh_size = size;
 
 	idx = 0;
 	list_for_each_entry(reloc, &sec->reloc_list, list) {
 		reloc->rela.r_offset = reloc->offset;
 		reloc->rela.r_addend = reloc->addend;
 		reloc->rela.r_info = GELF_R_INFO(reloc->sym->idx, reloc->type);
-		gelf_update_rela(sec->data, idx, &reloc->rela);
+		if (!gelf_update_rela(sec->data, idx, &reloc->rela)) {
+			WARN_ELF("gelf_update_rela");
+			return -1;
+		}
 		idx++;
 	}
 
@@ -1043,16 +1042,9 @@
 
 static int elf_rebuild_reloc_section(struct elf *elf, struct section *sec)
 {
-	struct reloc *reloc;
-	int nr;
-
-	nr = 0;
-	list_for_each_entry(reloc, &sec->reloc_list, list)
-		nr++;
-
 	switch (sec->sh.sh_type) {
-	case SHT_REL:  return elf_rebuild_rel_reloc_section(sec, nr);
-	case SHT_RELA: return elf_rebuild_rela_reloc_section(sec, nr);
+	case SHT_REL:  return elf_rebuild_rel_reloc_section(sec);
+	case SHT_RELA: return elf_rebuild_rela_reloc_section(sec);
 	default:       return -1;
 	}
 }
@@ -1105,12 +1111,6 @@
 	/* Update changed relocation sections and section headers: */
 	list_for_each_entry(sec, &elf->sections, list) {
 		if (sec->changed) {
-			if (sec->base &&
-			    elf_rebuild_reloc_section(elf, sec)) {
-				WARN("elf_rebuild_reloc_section");
-				return -1;
-			}
-
 			s = elf_getscn(elf->elf, sec->idx);
 			if (!s) {
 				WARN_ELF("elf_getscn");
@@ -1112,6 +1124,12 @@
 			}
 			if (!gelf_update_shdr(s, &sec->sh)) {
 				WARN_ELF("gelf_update_shdr");
+				return -1;
+			}
+
+			if (sec->base &&
+			    elf_rebuild_reloc_section(elf, sec)) {
+				WARN("elf_rebuild_reloc_section");
 				return -1;
 			}
 
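The objtool change above stops recounting the relocation list at rebuild time; instead, `elf_add_reloc()` grows `sh_size` by `sh_entsize` each time a relocation is added, so the rebuild functions can just trust the header. A toy model of that bookkeeping in Python (illustrative only; the class and field names mirror the ELF section header, not objtool's structs):

```python
# Toy model of the objtool bookkeeping change: size is tracked incrementally
# as relocations are appended, instead of being recomputed by walking the list.
class RelocSection:
    def __init__(self, entsize):
        self.sh_entsize = entsize  # size of one relocation record
        self.sh_size = 0           # total section size, kept up to date
        self.relocs = []

    def add_reloc(self, reloc):
        self.relocs.append(reloc)
        self.sh_size += self.sh_entsize  # mirrors elf_add_reloc()

sec = RelocSection(entsize=24)  # 24 == sizeof(Elf64_Rela)
for r in ("r1", "r2", "r3"):
    sec.add_reloc(r)
print(sec.sh_size)  # 72 == 3 * 24
```

The invariant `sh_size == len(relocs) * sh_entsize` is what lets the rebuild step `malloc(sec->sh.sh_size)` directly.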
@@ -42,3 +42,6 @@
 # Flag for tc match, supposed to be skip_sw/skip_hw which means do not process
 # filter by software/hardware
 TC_FLAG=skip_hw
+# IPv6 traceroute utility name.
+TROUTE6=traceroute6
+
@@ -741,6 +741,149 @@
 	return $lret
 }
 
+# test port shadowing.
+# create two listening services, one on router (ns0), one
+# on client (ns2), which is masqueraded from ns1 point of view.
+# ns2 sends udp packet coming from service port to ns1, on a highport.
+# Later, if ns1 uses same highport to connect to ns0:service, packet
+# might be port-forwarded to ns2 instead.
+
+# second argument tells if we expect the 'fake-entry' to take effect
+# (CLIENT) or not (ROUTER).
+test_port_shadow()
+{
+	local test=$1
+	local expect=$2
+	local daddrc="10.0.1.99"
+	local daddrs="10.0.1.1"
+	local result=""
+	local logmsg=""
+
+	echo ROUTER | ip netns exec "$ns0" nc -w 5 -u -l -p 1405 >/dev/null 2>&1 &
+	nc_r=$!
+
+	echo CLIENT | ip netns exec "$ns2" nc -w 5 -u -l -p 1405 >/dev/null 2>&1 &
+	nc_c=$!
+
+	# make shadow entry, from client (ns2), going to (ns1), port 41404, sport 1405.
+	echo "fake-entry" | ip netns exec "$ns2" nc -w 1 -p 1405 -u "$daddrc" 41404 > /dev/null
+
+	# ns1 tries to connect to ns0:1405.  With default settings this should connect
+	# to client, it matches the conntrack entry created above.
+	result=$(echo "" | ip netns exec "$ns1" nc -w 1 -p 41404 -u "$daddrs" 1405)
+
+	if [ "$result" = "$expect" ] ;then
+		echo "PASS: portshadow test $test: got reply from ${expect}${logmsg}"
+	else
+		echo "ERROR: portshadow test $test: got reply from \"$result\", not $expect as intended"
+		ret=1
+	fi
+
+	kill $nc_r $nc_c 2>/dev/null
+
+	# flush udp entries for next test round, if any
+	ip netns exec "$ns0" conntrack -F >/dev/null 2>&1
+}
+
+# This prevents port shadow of router service via packet filter,
+# packets claiming to originate from service port from internal
+# network are dropped.
+test_port_shadow_filter()
+{
+	local family=$1
+
+ip netns exec "$ns0" nft -f /dev/stdin <<EOF
+table $family filter {
+	chain forward {
+		type filter hook forward priority 0; policy accept;
+		meta iif veth1 udp sport 1405 drop
+	}
+}
+EOF
+	test_port_shadow "port-filter" "ROUTER"
+
+	ip netns exec "$ns0" nft delete table $family filter
+}
+
+# This prevents port shadow of router service via notrack.
+test_port_shadow_notrack()
+{
+	local family=$1
+
+ip netns exec "$ns0" nft -f /dev/stdin <<EOF
+table $family raw {
+	chain prerouting {
+		type filter hook prerouting priority -300; policy accept;
+		meta iif veth0 udp dport 1405 notrack
+		udp dport 1405 notrack
+	}
+	chain output {
+		type filter hook output priority -300; policy accept;
+		udp sport 1405 notrack
+	}
+}
+EOF
+	test_port_shadow "port-notrack" "ROUTER"
+
+	ip netns exec "$ns0" nft delete table $family raw
+}
+
+# This prevents port shadow of router service via sport remap.
+test_port_shadow_pat()
+{
+	local family=$1
+
+ip netns exec "$ns0" nft -f /dev/stdin <<EOF
+table $family pat {
+	chain postrouting {
+		type nat hook postrouting priority -1; policy accept;
+		meta iif veth1 udp sport <= 1405 masquerade to : 1406-65535 random
+	}
+}
+EOF
+	test_port_shadow "pat" "ROUTER"
+
+	ip netns exec "$ns0" nft delete table $family pat
+}
+
+test_port_shadowing()
+{
+	local family="ip"
+
+	ip netns exec "$ns0" sysctl net.ipv4.conf.veth0.forwarding=1 > /dev/null
+	ip netns exec "$ns0" sysctl net.ipv4.conf.veth1.forwarding=1 > /dev/null
+
+	ip netns exec "$ns0" nft -f /dev/stdin <<EOF
+table $family nat {
+	chain postrouting {
+		type nat hook postrouting priority 0; policy accept;
+		meta oif veth0 masquerade
+	}
+}
+EOF
+	if [ $? -ne 0 ]; then
+		echo "SKIP: Could not add $family masquerade hook"
+		return $ksft_skip
+	fi
+
+	# test default behaviour.  Packet from ns1 to ns0 is redirected to ns2.
+	test_port_shadow "default" "CLIENT"
+
+	# test packet filter based mitigation: prevent forwarding of
+	# packets claiming to come from the service port.
+	test_port_shadow_filter "$family"
+
+	# test conntrack based mitigation: connections going or coming
+	# from router:service bypass connection tracking.
+	test_port_shadow_notrack "$family"
+
+	# test nat based mitigation: forwarded packets coming from service port
+	# are masqueraded with random highport.
+	test_port_shadow_pat "$family"
+
+	ip netns exec "$ns0" nft delete table $family nat
+}
 
 # ip netns exec "$ns0" ping -c 1 -q 10.0.$i.99
 for i in 0 1 2; do
@@ -1003,6 +860,8 @@
 reset_counters
 $test_inet_nat && test_redirect inet
 $test_inet_nat && test_redirect6 inet
+
+test_port_shadowing
 
 if [ $ret -ne 0 ];then
 	echo -n "FAIL: "
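The selftest above revolves around one conntrack trick: the client's outgoing UDP packet from the service port plants an entry that also matches the reply direction, so a later inbound packet to the router's service is steered to the client. A toy model in plain Python dicts (illustrative only; real conntrack/NAT keys on full five-tuples and directions):

```python
# Toy model of the UDP port-shadowing scenario the selftest exercises.
conntrack = {}

def client_sends(src, sport, dst, dport):
    # The masqueraded client's outgoing packet creates an entry that
    # also matches traffic arriving from (dst, dport) for port sport.
    conntrack[(dst, dport, sport)] = (src, sport)

def inbound(src, sport, dport):
    # Without mitigations, a matching entry shadows the router's service.
    return conntrack.get((src, sport, dport), ("router", dport))

client_sends("ns2", 1405, "ns1", 41404)  # the test's "fake-entry" packet
print(inbound("ns1", 41404, 1405))       # client answers, not the router
print(inbound("ns1", 40000, 1405))       # no shadow entry: router answers
```

Each mitigation in the test breaks one link of this chain: the packet filter drops the fake entry's packet, notrack prevents the entry from being created, and sport remapping moves the masqueraded source port out of the service range.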
tools/testing/selftests/vm/userfaultfd.c (+20, -3)
@@ -414,9 +414,6 @@
 	uffd_test_ops->allocate_area((void **)&area_src);
 	uffd_test_ops->allocate_area((void **)&area_dst);
 
-	uffd_test_ops->release_pages(area_src);
-	uffd_test_ops->release_pages(area_dst);
-
 	userfaultfd_open(features);
 
 	count_verify = malloc(nr_pages * sizeof(unsigned long long));
@@ -433,6 +436,26 @@
 		 */
 		*(area_count(area_src, nr) + 1) = 1;
 	}
+
+	/*
+	 * After initialization of area_src, we must explicitly release pages
+	 * for area_dst to make sure it's fully empty.  Otherwise we could have
+	 * some area_dst pages be erroneously initialized with zero pages,
+	 * hence we could hit memory corruption later in the test.
+	 *
+	 * One example is when THP is globally enabled, above allocate_area()
+	 * calls could have the two areas merged into a single VMA (as they
+	 * will have the same VMA flags so they're mergeable).  When we
+	 * initialize the area_src above, it's possible that some part of
+	 * area_dst could have been faulted in via one huge THP that will be
+	 * shared between area_src and area_dst.  It could cause some of the
+	 * area_dst pages not to be trapped by missing userfaults.
+	 *
+	 * This release_pages() will guarantee even if that happened, we'll
+	 * proactively split the thp and drop any accidentally initialized
+	 * pages within area_dst.
+	 */
+	uffd_test_ops->release_pages(area_dst);
 
 	pipefd = malloc(sizeof(int) * nr_cpus * 2);
 	if (!pipefd)