···
 --------------------------------
 
 ksmbd.mountd is userspace process to, transfer user account and password that
-are registered using ksmbd.adduser(part of utils for user space). Further it
+are registered using ksmbd.adduser (part of utils for user space). Further it
 allows sharing information parameters that parsed from smb.conf to ksmbd in
 kernel. For the execution part it has a daemon which is continuously running
 and connected to the kernel interface using netlink socket, it waits for the
-requests(dcerpc and share/user info). It handles RPC calls (at a minimum few
+requests (dcerpc and share/user info). It handles RPC calls (at a minimum few
 dozen) that are most important for file server from NetShareEnum and
 NetServerGetInfo. Complete DCE/RPC response is prepared from the user space
 and passed over to the associated kernel thread for the client.
···
 1. Enable all component prints
 	# sudo ksmbd.control -d "all"
 
-2. Enable one of components(smb, auth, vfs, oplock, ipc, conn, rdma)
+2. Enable one of components (smb, auth, vfs, oplock, ipc, conn, rdma)
 	# sudo ksmbd.control -d "smb"
 
-3. Show what prints are enable.
-	# cat/sys/class/ksmbd-control/debug
+3. Show what prints are enabled.
+	# cat /sys/class/ksmbd-control/debug
 	[smb] auth vfs oplock ipc conn [rdma]
 
 4. Disable prints:
+56-39
Documentation/filesystems/netfs_library.rst
···
 .. SPDX-License-Identifier: GPL-2.0
 
 =================================
-NETWORK FILESYSTEM HELPER LIBRARY
+Network Filesystem Helper Library
 =================================
 
 .. Contents:
···
 
 The following services are provided:
 
- * Handles transparent huge pages (THPs).
+ * Handle folios that span multiple pages.
 
- * Insulates the netfs from VM interface changes.
+ * Insulate the netfs from VM interface changes.
 
- * Allows the netfs to arbitrarily split reads up into pieces, even ones that
-   don't match page sizes or page alignments and that may cross pages.
+ * Allow the netfs to arbitrarily split reads up into pieces, even ones that
+   don't match folio sizes or folio alignments and that may cross folios.
 
- * Allows the netfs to expand a readahead request in both directions to meet
-   its needs.
+ * Allow the netfs to expand a readahead request in both directions to meet its
+   needs.
 
- * Allows the netfs to partially fulfil a read, which will then be resubmitted.
+ * Allow the netfs to partially fulfil a read, which will then be resubmitted.
 
- * Handles local caching, allowing cached data and server-read data to be
+ * Handle local caching, allowing cached data and server-read data to be
   interleaved for a single request.
 
- * Handles clearing of bufferage that aren't on the server.
+ * Handle clearing of bufferage that aren't on the server.
 
 * Handle retrying of reads that failed, switching reads from the cache to the
   server as necessary.
···
 
 Three read helpers are provided::
 
- * void netfs_readahead(struct readahead_control *ractl,
-			const struct netfs_read_request_ops *ops,
-			void *netfs_priv);``
- * int netfs_readpage(struct file *file,
-		      struct page *page,
-		      const struct netfs_read_request_ops *ops,
-		      void *netfs_priv);
- * int netfs_write_begin(struct file *file,
-			 struct address_space *mapping,
-			 loff_t pos,
-			 unsigned int len,
-			 unsigned int flags,
-			 struct page **_page,
-			 void **_fsdata,
-			 const struct netfs_read_request_ops *ops,
-			 void *netfs_priv);
+	void netfs_readahead(struct readahead_control *ractl,
+			     const struct netfs_read_request_ops *ops,
+			     void *netfs_priv);
+	int netfs_readpage(struct file *file,
+			   struct folio *folio,
+			   const struct netfs_read_request_ops *ops,
+			   void *netfs_priv);
+	int netfs_write_begin(struct file *file,
+			      struct address_space *mapping,
+			      loff_t pos,
+			      unsigned int len,
+			      unsigned int flags,
+			      struct folio **_folio,
+			      void **_fsdata,
+			      const struct netfs_read_request_ops *ops,
+			      void *netfs_priv);
 
 Each corresponds to a VM operation, with the addition of a couple of parameters
 for the use of the read helpers:
···
 For ->readahead() and ->readpage(), the network filesystem should just jump
 into the corresponding read helper; whereas for ->write_begin(), it may be a
 little more complicated as the network filesystem might want to flush
-conflicting writes or track dirty data and needs to put the acquired page if an
-error occurs after calling the helper.
+conflicting writes or track dirty data and needs to put the acquired folio if
+an error occurs after calling the helper.
 
 The helpers manage the read request, calling back into the network filesystem
 through the suppplied table of operations. Waits will be performed as
···
 	void (*issue_op)(struct netfs_read_subrequest *subreq);
 	bool (*is_still_valid)(struct netfs_read_request *rreq);
 	int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
-				 struct page *page, void **_fsdata);
+				 struct folio *folio, void **_fsdata);
 	void (*done)(struct netfs_read_request *rreq);
 	void (*cleanup)(struct address_space *mapping, void *netfs_priv);
 };
···
 
   There is no return value; the netfs_subreq_terminated() function should be
   called to indicate whether or not the operation succeeded and how much data
-  it transferred. The filesystem also should not deal with setting pages
+  it transferred. The filesystem also should not deal with setting folios
   uptodate, unlocking them or dropping their refs - the helpers need to deal
   with this as they have to coordinate with copying to the local cache.
 
-  Note that the helpers have the pages locked, but not pinned. It is possible
-  to use the ITER_XARRAY iov iterator to refer to the range of the inode that
-  is being operated upon without the need to allocate large bvec tables.
+  Note that the helpers have the folios locked, but not pinned. It is
+  possible to use the ITER_XARRAY iov iterator to refer to the range of the
+  inode that is being operated upon without the need to allocate large bvec
+  tables.
 
 * ``is_still_valid()``
 
···
 * ``check_write_begin()``
 
   [Optional] This is called from the netfs_write_begin() helper once it has
-  allocated/grabbed the page to be modified to allow the filesystem to flush
+  allocated/grabbed the folio to be modified to allow the filesystem to flush
   conflicting state before allowing it to be modified.
 
-  It should return 0 if everything is now fine, -EAGAIN if the page should be
+  It should return 0 if everything is now fine, -EAGAIN if the folio should be
   regrabbed and any other error code to abort the operation.
 
 * ``done``
 
-  [Optional] This is called after the pages in the request have all been
+  [Optional] This is called after the folios in the request have all been
   unlocked (and marked uptodate if applicable).
 
 * ``cleanup``
···
 * If NETFS_SREQ_CLEAR_TAIL was set, a short read will be cleared to the
   end of the slice instead of reissuing.
 
- * Once the data is read, the pages that have been fully read/cleared:
+ * Once the data is read, the folios that have been fully read/cleared:
 
   * Will be marked uptodate.
 
···
 
   * Unlocked
 
- * Any pages that need writing to the cache will then have DIO writes issued.
+ * Any folios that need writing to the cache will then have DIO writes issued.
 
 * Synchronous operations will wait for reading to be complete.
 
- * Writes to the cache will proceed asynchronously and the pages will have the
+ * Writes to the cache will proceed asynchronously and the folios will have the
   PG_fscache mark removed when that completes.
 
 * The request structures will be cleaned up when everything has completed.
···
 			 bool seek_data,
 			 netfs_io_terminated_t term_func,
 			 void *term_func_priv);
+
+	int (*prepare_write)(struct netfs_cache_resources *cres,
+			     loff_t *_start, size_t *_len, loff_t i_size);
 
 	int (*write)(struct netfs_cache_resources *cres,
 		     loff_t start_pos,
···
   indicating whether the termination is definitely happening in the caller's
   context.
 
+ * ``prepare_write()``
+
+   [Required] Called to adjust a write to the cache and check that there is
+   sufficient space in the cache. The start and length values indicate the
+   size of the write that netfslib is proposing, and this can be adjusted by
+   the cache to respect DIO boundaries. The file size is passed for
+   information.
+
 * ``write()``
 
   [Required] Called to write to the cache. The start file offset is given
···
 there isn't a read request structure as well, such as writing dirty data to the
 cache.
 
+
+API Function Reference
+======================
+
 .. kernel-doc:: include/linux/netfs.h
+.. kernel-doc:: fs/netfs/read_helper.c
+5-2
MAINTAINERS
···
 
 RANDOM NUMBER DRIVER
 M:	"Theodore Ts'o" <tytso@mit.edu>
+M:	Jason A. Donenfeld <Jason@zx2c4.com>
 S:	Maintained
 F:	drivers/char/random.c
···
 
 S390 IUCV NETWORK LAYER
 M:	Julian Wiedmann <jwi@linux.ibm.com>
-M:	Karsten Graul <kgraul@linux.ibm.com>
+M:	Alexandra Winter <wintera@linux.ibm.com>
+M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 L:	netdev@vger.kernel.org
 S:	Supported
···
 
 S390 NETWORK DRIVERS
 M:	Julian Wiedmann <jwi@linux.ibm.com>
-M:	Karsten Graul <kgraul@linux.ibm.com>
+M:	Alexandra Winter <wintera@linux.ibm.com>
+M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 L:	netdev@vger.kernel.org
 S:	Supported
···
 	  and vice-versa 32-bit applications to call 64-bit mmap().
 	  Required for applications doing different bitness syscalls.
 
+config PAGE_SIZE_LESS_THAN_64KB
+	def_bool y
+	depends on !ARM64_64K_PAGES
+	depends on !IA64_PAGE_SIZE_64KB
+	depends on !PAGE_SIZE_64KB
+	depends on !PARISC_PAGE_SIZE_64KB
+	depends on !PPC_64K_PAGES
+	depends on !PPC_256K_PAGES
+	depends on !PAGE_SIZE_256KB
+
 # This allows to use a set of generic functions to determine mmap base
 # address by giving priority to top-down scheme only if the process
 # is not in legacy mode (compat task, unlimited stack size or
···
 
 static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu);
 
+static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code);
+
 /*
  * Allow the hypervisor to handle the exit with an exit handler if it has one.
  *
···
  */
 static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
+	/*
+	 * Save PSTATE early so that we can evaluate the vcpu mode
+	 * early on.
+	 */
+	vcpu->arch.ctxt.regs.pstate = read_sysreg_el2(SYS_SPSR);
+
+	/*
+	 * Check whether we want to repaint the state one way or
+	 * another.
+	 */
+	early_exit_filter(vcpu, exit_code);
+
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
 
+6-1
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
···
 static inline void __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt)
 {
 	ctxt->regs.pc = read_sysreg_el2(SYS_ELR);
-	ctxt->regs.pstate = read_sysreg_el2(SYS_SPSR);
+	/*
+	 * Guest PSTATE gets saved at guest fixup time in all
+	 * cases. We still need to handle the nVHE host side here.
+	 */
+	if (!has_vhe() && ctxt->__hyp_running_vcpu)
+		ctxt->regs.pstate = read_sysreg_el2(SYS_SPSR);
 
 	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
 		ctxt_sys_reg(ctxt, DISR_EL1) = read_sysreg_s(SYS_VDISR_EL2);
+1-7
arch/arm64/kvm/hyp/nvhe/switch.c
···
  * Returns false if the guest ran in AArch32 when it shouldn't have, and
  * thus should exit to the host, or true if a the guest run loop can continue.
  */
-static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
+static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 
···
 		vcpu->arch.target = -1;
 		*exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
 		*exit_code |= ARM_EXCEPTION_IL;
-		return false;
 	}
-
-	return true;
 }
 
 /* Switch to the guest for legacy non-VHE systems */
···
 	do {
 		/* Jump in the fire! */
 		exit_code = __guest_enter(vcpu);
-
-		if (unlikely(!handle_aarch32_guest(vcpu, &exit_code)))
-			break;
 
 		/* And we're baaack! */
 	} while (fixup_guest_exit(vcpu, &exit_code));
+4
arch/arm64/kvm/hyp/vhe/switch.c
···
 	return hyp_exit_handlers;
 }
 
+static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+}
+
 /* Switch to the guest for VHE systems running in EL2 */
 static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
+1-1
arch/mips/Kconfig
···
 config PGTABLE_LEVELS
 	int
 	default 4 if PAGE_SIZE_4KB && MIPS_VA_BITS_48
-	default 3 if 64BIT && !PAGE_SIZE_64KB
+	default 3 if 64BIT && (!PAGE_SIZE_64KB || MIPS_VA_BITS_48)
 	default 2
 
 config MIPS_AUTO_PFN_OFFSET
···
  */
 extern int cpu_has_xfeatures(u64 xfeatures_mask, const char **feature_name);
 
-/*
- * Tasks that are not using SVA have mm->pasid set to zero to note that they
- * will not have the valid bit set in MSR_IA32_PASID while they are running.
- */
-#define PASID_DISABLED	0
-
 /* Trap handling */
 extern int fpu__exception_code(struct fpu *fpu, int trap_nr);
 extern void fpu_sync_fpstate(struct fpu *fpu);
···
 	struct kvm_io_device dev_master;
 	struct kvm_io_device dev_slave;
 	struct kvm_io_device dev_elcr;
-	void (*ack_notifier)(void *opaque, int irq);
 	unsigned long irq_states[PIC_NUM_PINS];
 };
 
+1-1
arch/x86/kvm/lapic.c
···
 static int apic_has_interrupt_for_ppr(struct kvm_lapic *apic, u32 ppr)
 {
 	int highest_irr;
-	if (apic->vcpu->arch.apicv_active)
+	if (kvm_x86_ops.sync_pir_to_irr)
 		highest_irr = static_call(kvm_x86_sync_pir_to_irr)(apic->vcpu);
-	else
+	else
 		highest_irr = apic_find_highest_irr(apic);
+52-45
arch/x86/kvm/mmu/mmu.c
···
 	flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
-		flush |= kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
+		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
 
 	return flush;
 }
···
 	iterator->shadow_addr = root;
 	iterator->level = vcpu->arch.mmu->shadow_root_level;
 
-	if (iterator->level == PT64_ROOT_4LEVEL &&
+	if (iterator->level >= PT64_ROOT_4LEVEL &&
 	    vcpu->arch.mmu->root_level < PT64_ROOT_4LEVEL &&
 	    !vcpu->arch.mmu->direct_map)
-		--iterator->level;
+		iterator->level = PT32E_ROOT_LEVEL;
 
 	if (iterator->level == PT32E_ROOT_LEVEL) {
 		/*
···
 	struct kvm_mmu *context = &vcpu->arch.guest_mmu;
 	struct kvm_mmu_role_regs regs = {
 		.cr0 = cr0,
-		.cr4 = cr4,
+		.cr4 = cr4 & ~X86_CR4_PKE,
 		.efer = efer,
 	};
 	union kvm_mmu_role new_role;
···
 	context->direct_map = false;
 
 	update_permission_bitmask(context, true);
-	update_pkru_bitmask(context);
+	context->pkru_mask = 0;
 	reset_rsvds_bits_mask_ept(vcpu, context, execonly);
 	reset_ept_shadow_zero_bits_mask(vcpu, context, execonly);
 }
···
 	/*
 	 * Invalidate all MMU roles to force them to reinitialize as CPUID
 	 * information is factored into reserved bit calculations.
+	 *
+	 * Correctly handling multiple vCPU models with respect to paging and
+	 * physical address properties) in a single VM would require tracking
+	 * all relevant CPUID information in kvm_mmu_page_role. That is very
+	 * undesirable as it would increase the memory requirements for
+	 * gfn_track (see struct kvm_mmu_page_role comments). For now that
+	 * problem is swept under the rug; KVM's CPUID API is horrific and
+	 * it's all but impossible to solve it without introducing a new API.
 	 */
 	vcpu->arch.root_mmu.mmu_role.ext.valid = 0;
 	vcpu->arch.guest_mmu.mmu_role.ext.valid = 0;
···
 	kvm_mmu_reset_context(vcpu);
 
 	/*
-	 * KVM does not correctly handle changing guest CPUID after KVM_RUN, as
-	 * MAXPHYADDR, GBPAGES support, AMD reserved bit behavior, etc.. aren't
-	 * tracked in kvm_mmu_page_role. As a result, KVM may miss guest page
-	 * faults due to reusing SPs/SPTEs. Alert userspace, but otherwise
-	 * sweep the problem under the rug.
-	 *
-	 * KVM's horrific CPUID ABI makes the problem all but impossible to
-	 * solve, as correctly handling multiple vCPU models (with respect to
-	 * paging and physical address properties) in a single VM would require
-	 * tracking all relevant CPUID information in kvm_mmu_page_role. That
-	 * is very undesirable as it would double the memory requirements for
-	 * gfn_track (see struct kvm_mmu_page_role comments), and in practice
-	 * no sane VMM mucks with the core vCPU model on the fly.
+	 * Changing guest CPUID after KVM_RUN is forbidden, see the comment in
+	 * kvm_arch_vcpu_ioctl().
 	 */
-	if (vcpu->arch.last_vmentry_cpu != -1) {
-		pr_warn_ratelimited("KVM: KVM_SET_CPUID{,2} after KVM_RUN may cause guest instability\n");
-		pr_warn_ratelimited("KVM: KVM_SET_CPUID{,2} will fail after KVM_RUN starting with Linux 5.16\n");
-	}
+	KVM_BUG_ON(vcpu->arch.last_vmentry_cpu != -1, vcpu->kvm);
 }
 
 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
···
 
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
-	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.mmu, gva, INVALID_PAGE);
+	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.walk_mmu, gva, INVALID_PAGE);
 	++vcpu->stat.invlpg;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
···
 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 				   const struct kvm_memory_slot *slot)
 {
-	bool flush = false;
-
 	if (kvm_memslots_have_rmaps(kvm)) {
 		write_lock(&kvm->mmu_lock);
 		/*
···
 		 * logging at a 4k granularity and never creates collapsible
 		 * 2m SPTEs during dirty logging.
 		 */
-		flush = slot_handle_level_4k(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
-		if (flush)
+		if (slot_handle_level_4k(kvm, slot, kvm_mmu_zap_collapsible_spte, true))
 			kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
 		write_unlock(&kvm->mmu_lock);
 	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
-		flush = kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot, flush);
-		if (flush)
-			kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+		kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot);
 		read_unlock(&kvm->mmu_lock);
 	}
 }
···
 	mmu_audit_disable();
 }
 
+/*
+ * Calculate the effective recovery period, accounting for '0' meaning "let KVM
+ * select a halving time of 1 hour". Returns true if recovery is enabled.
+ */
+static bool calc_nx_huge_pages_recovery_period(uint *period)
+{
+	/*
+	 * Use READ_ONCE to get the params, this may be called outside of the
+	 * param setters, e.g. by the kthread to compute its next timeout.
+	 */
+	bool enabled = READ_ONCE(nx_huge_pages);
+	uint ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
+
+	if (!enabled || !ratio)
+		return false;
+
+	*period = READ_ONCE(nx_huge_pages_recovery_period_ms);
+	if (!*period) {
+		/* Make sure the period is not less than one second. */
+		ratio = min(ratio, 3600u);
+		*period = 60 * 60 * 1000 / ratio;
+	}
+	return true;
+}
+
 static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel_param *kp)
 {
 	bool was_recovery_enabled, is_recovery_enabled;
 	uint old_period, new_period;
 	int err;
 
-	was_recovery_enabled = nx_huge_pages_recovery_ratio;
-	old_period = nx_huge_pages_recovery_period_ms;
+	was_recovery_enabled = calc_nx_huge_pages_recovery_period(&old_period);
 
 	err = param_set_uint(val, kp);
 	if (err)
 		return err;
 
-	is_recovery_enabled = nx_huge_pages_recovery_ratio;
-	new_period = nx_huge_pages_recovery_period_ms;
+	is_recovery_enabled = calc_nx_huge_pages_recovery_period(&new_period);
 
-	if (READ_ONCE(nx_huge_pages) && is_recovery_enabled &&
+	if (is_recovery_enabled &&
 	    (!was_recovery_enabled || old_period > new_period)) {
 		struct kvm *kvm;
 
···
 
 static long get_nx_lpage_recovery_timeout(u64 start_time)
 {
-	uint ratio = READ_ONCE(nx_huge_pages_recovery_ratio);
-	uint period = READ_ONCE(nx_huge_pages_recovery_period_ms);
+	bool enabled;
+	uint period;
 
-	if (!period && ratio) {
-		/* Make sure the period is not less than one second. */
-		ratio = min(ratio, 3600u);
-		period = 60 * 60 * 1000 / ratio;
-	}
+	enabled = calc_nx_huge_pages_recovery_period(&period);
 
-	return READ_ONCE(nx_huge_pages) && ratio
-		? start_time + msecs_to_jiffies(period) - get_jiffies_64()
-		: MAX_SCHEDULE_TIMEOUT;
+	return enabled ? start_time + msecs_to_jiffies(period) - get_jiffies_64()
+		       : MAX_SCHEDULE_TIMEOUT;
 }
 
 static int kvm_nx_lpage_recovery_worker(struct kvm *kvm, uintptr_t data)
+14-24
arch/x86/kvm/mmu/tdp_mmu.c
···
 	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
 	int level = sp->role.level;
 	gfn_t base_gfn = sp->gfn;
-	u64 old_child_spte;
-	u64 *sptep;
-	gfn_t gfn;
 	int i;
 
 	trace_kvm_mmu_prepare_zap_page(sp);
···
 	tdp_mmu_unlink_page(kvm, sp, shared);
 
 	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
-		sptep = rcu_dereference(pt) + i;
-		gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
+		u64 *sptep = rcu_dereference(pt) + i;
+		gfn_t gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
+		u64 old_child_spte;
 
 		if (shared) {
 			/*
···
 					    shared);
 	}
 
-	kvm_flush_remote_tlbs_with_address(kvm, gfn,
+	kvm_flush_remote_tlbs_with_address(kvm, base_gfn,
 					   KVM_PAGES_PER_HPAGE(level + 1));
 
 	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
···
 {
 	struct kvm_mmu_page *root;
 
-	for_each_tdp_mmu_root(kvm, root, range->slot->as_id)
-		flush |= zap_gfn_range(kvm, root, range->start, range->end,
-				       range->may_block, flush, false);
+	for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false)
+		flush = zap_gfn_range(kvm, root, range->start, range->end,
+				      range->may_block, flush, false);
 
 	return flush;
 }
···
  * Clear leaf entries which could be replaced by large mappings, for
  * GFNs within the slot.
  */
-static bool zap_collapsible_spte_range(struct kvm *kvm,
+static void zap_collapsible_spte_range(struct kvm *kvm,
 				       struct kvm_mmu_page *root,
-				       const struct kvm_memory_slot *slot,
-				       bool flush)
+				       const struct kvm_memory_slot *slot)
 {
 	gfn_t start = slot->base_gfn;
 	gfn_t end = start + slot->npages;
···
 
 	tdp_root_for_each_pte(iter, root, start, end) {
 retry:
-		if (tdp_mmu_iter_cond_resched(kvm, &iter, flush, true)) {
-			flush = false;
+		if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
 			continue;
-		}
 
 		if (!is_shadow_present_pte(iter.old_spte) ||
 		    !is_last_spte(iter.old_spte, iter.level))
···
 							    pfn, PG_LEVEL_NUM))
 			continue;
 
+		/* Note, a successful atomic zap also does a remote TLB flush. */
 		if (!tdp_mmu_zap_spte_atomic(kvm, &iter)) {
 			/*
 			 * The iter must explicitly re-read the SPTE because
···
 			iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep));
 			goto retry;
 		}
-		flush = true;
 	}
 
 	rcu_read_unlock();
-
-	return flush;
 }
 
 /*
  * Clear non-leaf entries (and free associated page tables) which could
  * be replaced by large mappings, for GFNs within the slot.
  */
-bool kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
-				       const struct kvm_memory_slot *slot,
-				       bool flush)
+void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
+				       const struct kvm_memory_slot *slot)
 {
 	struct kvm_mmu_page *root;
 
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
 	for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
-		flush = zap_collapsible_spte_range(kvm, root, slot, flush);
-
-	return flush;
+		zap_collapsible_spte_range(kvm, root, slot);
 }
 
 /*
···
 static void avic_set_running(struct kvm_vcpu *vcpu, bool is_run)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+	int cpu = get_cpu();
 
+	WARN_ON(cpu != vcpu->cpu);
 	svm->avic_is_running = is_run;
 
-	if (!kvm_vcpu_apicv_active(vcpu))
-		return;
-
-	if (is_run)
-		avic_vcpu_load(vcpu, vcpu->cpu);
-	else
-		avic_vcpu_put(vcpu);
+	if (kvm_vcpu_apicv_active(vcpu)) {
+		if (is_run)
+			avic_vcpu_load(vcpu, cpu);
+		else
+			avic_vcpu_put(vcpu);
+	}
+	put_cpu();
 }
 
 void svm_vcpu_blocking(struct kvm_vcpu *vcpu)
+86-75
arch/x86/kvm/svm/sev.c
···
 	return false;
 }
 
-static int sev_lock_for_migration(struct kvm *kvm)
+static int sev_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
 {
-	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct kvm_sev_info *dst_sev = &to_kvm_svm(dst_kvm)->sev_info;
+	struct kvm_sev_info *src_sev = &to_kvm_svm(src_kvm)->sev_info;
+	int r = -EBUSY;
+
+	if (dst_kvm == src_kvm)
+		return -EINVAL;
 
 	/*
-	 * Bail if this VM is already involved in a migration to avoid deadlock
-	 * between two VMs trying to migrate to/from each other.
+	 * Bail if these VMs are already involved in a migration to avoid
+	 * deadlock between two VMs trying to migrate to/from each other.
 	 */
-	if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1))
+	if (atomic_cmpxchg_acquire(&dst_sev->migration_in_progress, 0, 1))
 		return -EBUSY;
 
-	mutex_lock(&kvm->lock);
+	if (atomic_cmpxchg_acquire(&src_sev->migration_in_progress, 0, 1))
+		goto release_dst;
 
+	r = -EINTR;
+	if (mutex_lock_killable(&dst_kvm->lock))
+		goto release_src;
+	if (mutex_lock_killable(&src_kvm->lock))
+		goto unlock_dst;
 	return 0;
+
+unlock_dst:
+	mutex_unlock(&dst_kvm->lock);
+release_src:
+	atomic_set_release(&src_sev->migration_in_progress, 0);
+release_dst:
+	atomic_set_release(&dst_sev->migration_in_progress, 0);
+	return r;
 }
 
-static void sev_unlock_after_migration(struct kvm *kvm)
+static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
 {
-	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct kvm_sev_info *dst_sev = &to_kvm_svm(dst_kvm)->sev_info;
+	struct kvm_sev_info *src_sev = &to_kvm_svm(src_kvm)->sev_info;
 
-	mutex_unlock(&kvm->lock);
-	atomic_set_release(&sev->migration_in_progress, 0);
+	mutex_unlock(&dst_kvm->lock);
+	mutex_unlock(&src_kvm->lock);
+	atomic_set_release(&dst_sev->migration_in_progress, 0);
+	atomic_set_release(&src_sev->migration_in_progress, 0);
 }
 
···
 	dst->asid = src->asid;
 	dst->handle = src->handle;
 	dst->pages_locked = src->pages_locked;
+	dst->enc_context_owner = src->enc_context_owner;
 
 	src->asid = 0;
 	src->active = false;
 	src->handle = 0;
 	src->pages_locked = 0;
+	src->enc_context_owner = NULL;
 
-	INIT_LIST_HEAD(&dst->regions_list);
-	list_replace_init(&src->regions_list, &dst->regions_list);
+	list_cut_before(&dst->regions_list, &src->regions_list, &src->regions_list);
 }
 
 static int sev_es_migrate_from(struct kvm *dst, struct kvm *src)
···
 	bool charged = false;
 	int ret;
 
-	ret = sev_lock_for_migration(kvm);
-	if (ret)
-		return ret;
-
-	if (sev_guest(kvm)) {
-		ret = -EINVAL;
-		goto out_unlock;
-	}
-
 	source_kvm_file = fget(source_fd);
 	if (!file_is_kvm(source_kvm_file)) {
 		ret = -EBADF;
···
 	}
 
 	source_kvm = source_kvm_file->private_data;
-	ret = sev_lock_for_migration(source_kvm);
+	ret = sev_lock_two_vms(kvm, source_kvm);
 	if (ret)
 		goto out_fput;
 
-	if (!sev_guest(source_kvm)) {
+	if (sev_guest(kvm) || !sev_guest(source_kvm)) {
 		ret = -EINVAL;
-		goto out_source;
+		goto out_unlock;
 	}
 
 	src_sev = &to_kvm_svm(source_kvm)->sev_info;
+
+	/*
+	 * VMs mirroring src's encryption context rely on it to keep the
+	 * ASID allocated, but below we are clearing src_sev->asid.
+	 */
+	if (src_sev->num_mirrored_vms) {
+		ret = -EBUSY;
+		goto out_unlock;
+	}
+
 	dst_sev->misc_cg = get_current_misc_cg();
 	cg_cleanup_sev = dst_sev;
 	if (dst_sev->misc_cg != src_sev->misc_cg) {
···
 	sev_misc_cg_uncharge(cg_cleanup_sev);
 	put_misc_cg(cg_cleanup_sev->misc_cg);
 	cg_cleanup_sev->misc_cg = NULL;
-out_source:
-	sev_unlock_after_migration(source_kvm);
+out_unlock:
+	sev_unlock_two_vms(kvm, source_kvm);
 out_fput:
 	if (source_kvm_file)
 		fput(source_kvm_file);
-out_unlock:
-	sev_unlock_after_migration(kvm);
 	return ret;
 }
 
···
 {
 	struct file *source_kvm_file;
 	struct kvm *source_kvm;
-	struct kvm_sev_info source_sev, *mirror_sev;
+	struct kvm_sev_info *source_sev, *mirror_sev;
 	int ret;
 
 	source_kvm_file = fget(source_fd);
 	if (!file_is_kvm(source_kvm_file)) {
 		ret = -EBADF;
-		goto e_source_put;
+		goto e_source_fput;
 	}
 
 	source_kvm = source_kvm_file->private_data;
-	mutex_lock(&source_kvm->lock);
+	ret = sev_lock_two_vms(kvm, source_kvm);
+	if (ret)
+		goto e_source_fput;
 
-	if (!sev_guest(source_kvm)) {
+	/*
+	 * Mirrors of mirrors should work, but let's not get silly. Also
+	 * disallow out-of-band SEV/SEV-ES init if the target is already an
+	 * SEV guest, or if vCPUs have been created. KVM relies on vCPUs being
+	 * created after SEV/SEV-ES initialization, e.g. to init intercepts.
+	 */
+	if (sev_guest(kvm) || !sev_guest(source_kvm) ||
+	    is_mirroring_enc_context(source_kvm) || kvm->created_vcpus) {
 		ret = -EINVAL;
-		goto e_source_unlock;
+		goto e_unlock;
 	}
-
-	/* Mirrors of mirrors should work, but let's not get silly */
-	if (is_mirroring_enc_context(source_kvm) || source_kvm == kvm) {
-		ret = -EINVAL;
-		goto e_source_unlock;
-	}
-
-	memcpy(&source_sev, &to_kvm_svm(source_kvm)->sev_info,
-	       sizeof(source_sev));
 
 	/*
 	 * The mirror kvm holds an enc_context_owner ref so its asid can't
 	 * disappear until we're done with it
 	 */
+	source_sev = &to_kvm_svm(source_kvm)->sev_info;
 	kvm_get_kvm(source_kvm);
-
-	fput(source_kvm_file);
-	mutex_unlock(&source_kvm->lock);
-	mutex_lock(&kvm->lock);
-
-	/*
-	 * Disallow out-of-band SEV/SEV-ES init if the target is already an
-	 * SEV guest, or if vCPUs have been created. KVM relies on vCPUs being
-	 * created after SEV/SEV-ES initialization, e.g. to init intercepts.
-	 */
-	if (sev_guest(kvm) || kvm->created_vcpus) {
-		ret = -EINVAL;
-		goto e_mirror_unlock;
-	}
+	source_sev->num_mirrored_vms++;
 
 	/* Set enc_context_owner and copy its encryption context over */
 	mirror_sev = &to_kvm_svm(kvm)->sev_info;
 	mirror_sev->enc_context_owner = source_kvm;
 	mirror_sev->active = true;
-	mirror_sev->asid = source_sev.asid;
-	mirror_sev->fd = source_sev.fd;
-	mirror_sev->es_active = source_sev.es_active;
-	mirror_sev->handle = source_sev.handle;
+	mirror_sev->asid = source_sev->asid;
+	mirror_sev->fd = source_sev->fd;
+	mirror_sev->es_active = source_sev->es_active;
+	mirror_sev->handle = source_sev->handle;
+	INIT_LIST_HEAD(&mirror_sev->regions_list);
+	ret = 0;
+
 	/*
 	 * Do not copy ap_jump_table. Since the mirror does not share the same
 	 * KVM contexts as the original, and they may have different
 	 * memory-views.
 	 */
 
-	mutex_unlock(&kvm->lock);
-	return 0;
-
-e_mirror_unlock:
-	mutex_unlock(&kvm->lock);
-	kvm_put_kvm(source_kvm);
-	return ret;
-e_source_unlock:
-	mutex_unlock(&source_kvm->lock);
-e_source_put:
+e_unlock:
+	sev_unlock_two_vms(kvm, source_kvm);
+e_source_fput:
 	if (source_kvm_file)
 		fput(source_kvm_file);
 	return ret;
···
 	struct list_head *head = &sev->regions_list;
 	struct list_head *pos, *q;
 
+	WARN_ON(sev->num_mirrored_vms);
+
 	if (!sev_guest(kvm))
 		return;
 
 	/* If this is a mirror_kvm release the enc_context_owner and skip sev cleanup */
 	if (is_mirroring_enc_context(kvm)) {
-		kvm_put_kvm(sev->enc_context_owner);
+		struct kvm *owner_kvm = sev->enc_context_owner;
+		struct kvm_sev_info *owner_sev =
&to_kvm_svm(owner_kvm)->sev_info;20462046+20472047+ mutex_lock(&owner_kvm->lock);20482048+ if (!WARN_ON(!owner_sev->num_mirrored_vms))20492049+ owner_sev->num_mirrored_vms--;20502050+ mutex_unlock(&owner_kvm->lock);20512051+ kvm_put_kvm(owner_kvm);20492052 return;20502053 }20512051-20522052- mutex_lock(&kvm->lock);2053205420542055 /*20552056 * Ensure that all guest tagged cache entries are flushed before···20762063 cond_resched();20772064 }20782065 }20792079-20802080- mutex_unlock(&kvm->lock);2081206620822067 sev_unbind_asid(kvm, sev->handle);20832068 sev_asid_free(sev);
···7979 struct list_head regions_list; /* List of registered regions */8080 u64 ap_jump_table; /* SEV-ES AP Jump Table address */8181 struct kvm *enc_context_owner; /* Owner of copied encryption context */8282+ unsigned long num_mirrored_vms; /* Number of VMs sharing this ASID */8283 struct misc_cg *misc_cg; /* For misc cgroup accounting */8384 atomic_t migration_in_progress;8485};
+22-27
arch/x86/kvm/vmx/nested.c
···
 	WARN_ON(!enable_vpid);
 
 	/*
-	 * If VPID is enabled and used by vmc12, but L2 does not have a unique
-	 * TLB tag (ASID), i.e. EPT is disabled and KVM was unable to allocate
-	 * a VPID for L2, flush the current context as the effective ASID is
-	 * common to both L1 and L2.
-	 *
-	 * Defer the flush so that it runs after vmcs02.EPTP has been set by
-	 * KVM_REQ_LOAD_MMU_PGD (if nested EPT is enabled) and to avoid
-	 * redundant flushes further down the nested pipeline.
-	 *
-	 * If a TLB flush isn't required due to any of the above, and vpid12 is
-	 * changing then the new "virtual" VPID (vpid12) will reuse the same
-	 * "real" VPID (vpid02), and so needs to be flushed.  There's no direct
-	 * mapping between vpid02 and vpid12, vpid02 is per-vCPU and reused for
-	 * all nested vCPUs.  Remember, a flush on VM-Enter does not invalidate
-	 * guest-physical mappings, so there is no need to sync the nEPT MMU.
+	 * VPID is enabled and in use by vmcs12.  If vpid12 is changing, then
+	 * emulate a guest TLB flush as KVM does not track vpid12 history nor
+	 * is the VPID incorporated into the MMU context.  I.e. KVM must assume
+	 * that the new vpid12 has never been used and thus represents a new
+	 * guest ASID that cannot have entries in the TLB.
 	 */
-	if (!nested_has_guest_tlb_tag(vcpu)) {
-		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
-	} else if (is_vmenter &&
-		   vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
+	if (is_vmenter && vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
 		vmx->nested.last_vpid = vmcs12->virtual_processor_id;
-		vpid_sync_context(nested_get_vpid02(vcpu));
+		kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
+		return;
 	}
+
+	/*
+	 * If VPID is enabled, used by vmc12, and vpid12 is not changing but
+	 * does not have a unique TLB tag (ASID), i.e. EPT is disabled and
+	 * KVM was unable to allocate a VPID for L2, flush the current context
+	 * as the effective ASID is common to both L1 and L2.
+	 */
+	if (!nested_has_guest_tlb_tag(vcpu))
+		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 }
 
 static bool is_bitwise_subset(u64 superset, u64 subset, u64 mask)
···
 	};
 	u32 failed_index;
 
-	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
-		kvm_vcpu_flush_tlb_current(vcpu);
+	kvm_service_local_tlb_flush_requests(vcpu);
 
 	evaluate_pending_interrupts = exec_controls_get(vmx) &
 		(CPU_BASED_INTR_WINDOW_EXITING | CPU_BASED_NMI_WINDOW_EXITING);
···
 		(void)nested_get_evmcs_page(vcpu);
 	}
 
-	/* Service the TLB flush request for L2 before switching to L1. */
-	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
-		kvm_vcpu_flush_tlb_current(vcpu);
+	/* Service pending TLB flush requests for L2 before switching to L1. */
+	kvm_service_local_tlb_flush_requests(vcpu);
 
 	/*
 	 * VCPU_EXREG_PDPTR will be clobbered in arch/x86/kvm/vmx/vmx.h between
···
 	if (!vmx->nested.cached_vmcs12)
 		goto out_cached_vmcs12;
 
+	vmx->nested.shadow_vmcs12_cache.gpa = INVALID_GPA;
 	vmx->nested.cached_shadow_vmcs12 = kzalloc(VMCS12_SIZE, GFP_KERNEL_ACCOUNT);
 	if (!vmx->nested.cached_shadow_vmcs12)
 		goto out_cached_shadow_vmcs12;
···
 	struct gfn_to_hva_cache *ghc = &vmx->nested.vmcs12_cache;
 	struct vmcs_hdr hdr;
 
-	if (ghc->gpa != vmptr &&
-	    kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, vmptr, VMCS12_SIZE)) {
+	if (kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, vmptr, VMCS12_SIZE)) {
 		/*
 		 * Reads from an unbacked page return all 1s,
 		 * which means that the 32 bits located at the
+11-9
arch/x86/kvm/vmx/posted_intr.c
···
 #include <asm/cpu.h>
 
 #include "lapic.h"
+#include "irq.h"
 #include "posted_intr.h"
 #include "trace.h"
 #include "vmx.h"
···
 	pi_set_on(pi_desc);
 }
 
+static bool vmx_can_use_vtd_pi(struct kvm *kvm)
+{
+	return irqchip_in_kernel(kvm) && enable_apicv &&
+		kvm_arch_has_assigned_device(kvm) &&
+		irq_remapping_cap(IRQ_POSTING_CAP);
+}
+
 void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
 {
 	struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
 
-	if (!kvm_arch_has_assigned_device(vcpu->kvm) ||
-	    !irq_remapping_cap(IRQ_POSTING_CAP) ||
-	    !kvm_vcpu_apicv_active(vcpu))
+	if (!vmx_can_use_vtd_pi(vcpu->kvm))
 		return;
 
 	/* Set SN when the vCPU is preempted */
···
 	struct pi_desc old, new;
 	struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);
 
-	if (!kvm_arch_has_assigned_device(vcpu->kvm) ||
-	    !irq_remapping_cap(IRQ_POSTING_CAP) ||
-	    !kvm_vcpu_apicv_active(vcpu))
+	if (!vmx_can_use_vtd_pi(vcpu->kvm))
 		return 0;
 
 	WARN_ON(irqs_disabled());
···
 	struct vcpu_data vcpu_info;
 	int idx, ret = 0;
 
-	if (!kvm_arch_has_assigned_device(kvm) ||
-	    !irq_remapping_cap(IRQ_POSTING_CAP) ||
-	    !kvm_vcpu_apicv_active(kvm->vcpus[0]))
+	if (!vmx_can_use_vtd_pi(kvm))
 		return 0;
 
 	idx = srcu_read_lock(&kvm->irq_srcu);
+41-25
arch/x86/kvm/vmx/vmx.c
···
 	}
 }
 
+static inline int vmx_get_current_vpid(struct kvm_vcpu *vcpu)
+{
+	if (is_guest_mode(vcpu))
+		return nested_get_vpid02(vcpu);
+	return to_vmx(vcpu)->vpid;
+}
+
 static void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
···
 	if (enable_ept)
 		ept_sync_context(construct_eptp(vcpu, root_hpa,
 						mmu->shadow_root_level));
-	else if (!is_guest_mode(vcpu))
-		vpid_sync_context(to_vmx(vcpu)->vpid);
 	else
-		vpid_sync_context(nested_get_vpid02(vcpu));
+		vpid_sync_context(vmx_get_current_vpid(vcpu));
 }
 
 static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
 {
 	/*
-	 * vpid_sync_vcpu_addr() is a nop if vmx->vpid==0, see the comment in
+	 * vpid_sync_vcpu_addr() is a nop if vpid==0, see the comment in
 	 * vmx_flush_tlb_guest() for an explanation of why this is ok.
 	 */
-	vpid_sync_vcpu_addr(to_vmx(vcpu)->vpid, addr);
+	vpid_sync_vcpu_addr(vmx_get_current_vpid(vcpu), addr);
 }
 
 static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
 {
 	/*
-	 * vpid_sync_context() is a nop if vmx->vpid==0, e.g. if enable_vpid==0
-	 * or a vpid couldn't be allocated for this vCPU.  VM-Enter and VM-Exit
-	 * are required to flush GVA->{G,H}PA mappings from the TLB if vpid is
+	 * vpid_sync_context() is a nop if vpid==0, e.g. if enable_vpid==0 or a
+	 * vpid couldn't be allocated for this vCPU.  VM-Enter and VM-Exit are
+	 * required to flush GVA->{G,H}PA mappings from the TLB if vpid is
 	 * disabled (VM-Enter with vpid enabled and vpid==0 is disallowed),
 	 * i.e. no explicit INVVPID is necessary.
 	 */
-	vpid_sync_context(to_vmx(vcpu)->vpid);
+	vpid_sync_context(vmx_get_current_vpid(vcpu));
 }
 
 void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu)
···
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	int max_irr;
-	bool max_irr_updated;
+	bool got_posted_interrupt;
 
-	if (KVM_BUG_ON(!vcpu->arch.apicv_active, vcpu->kvm))
+	if (KVM_BUG_ON(!enable_apicv, vcpu->kvm))
 		return -EIO;
 
 	if (pi_test_on(&vmx->pi_desc)) {
···
 		 * But on x86 this is just a compiler barrier anyway.
 		 */
 		smp_mb__after_atomic();
-		max_irr_updated =
+		got_posted_interrupt =
 			kvm_apic_update_irr(vcpu, vmx->pi_desc.pir, &max_irr);
-
-		/*
-		 * If we are running L2 and L1 has a new pending interrupt
-		 * which can be injected, this may cause a vmexit or it may
-		 * be injected into L2.  Either way, this interrupt will be
-		 * processed via KVM_REQ_EVENT, not RVI, because we do not use
-		 * virtual interrupt delivery to inject L1 interrupts into L2.
-		 */
-		if (is_guest_mode(vcpu) && max_irr_updated)
-			kvm_make_request(KVM_REQ_EVENT, vcpu);
 	} else {
 		max_irr = kvm_lapic_find_highest_irr(vcpu);
+		got_posted_interrupt = false;
 	}
-	vmx_hwapic_irr_update(vcpu, max_irr);
+
+	/*
+	 * Newly recognized interrupts are injected via either virtual interrupt
+	 * delivery (RVI) or KVM_REQ_EVENT.  Virtual interrupt delivery is
+	 * disabled in two cases:
+	 *
+	 * 1) If L2 is running and the vCPU has a new pending interrupt.  If L1
+	 * wants to exit on interrupts, KVM_REQ_EVENT is needed to synthesize a
+	 * VM-Exit to L1.  If L1 doesn't want to exit, the interrupt is injected
+	 * into L2, but KVM doesn't use virtual interrupt delivery to inject
+	 * interrupts into L2, and so KVM_REQ_EVENT is again needed.
+	 *
+	 * 2) If APICv is disabled for this vCPU, assigned devices may still
+	 * attempt to post interrupts.  The posted interrupt vector will cause
+	 * a VM-Exit and the subsequent entry will call sync_pir_to_irr.
+	 */
+	if (!is_guest_mode(vcpu) && kvm_vcpu_apicv_active(vcpu))
+		vmx_set_rvi(max_irr);
+	else if (got_posted_interrupt)
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	return max_irr;
 }
 
···
 		ple_window_shrink = 0;
 	}
 
-	if (!cpu_has_vmx_apicv()) {
+	if (!cpu_has_vmx_apicv())
 		enable_apicv = 0;
+	if (!enable_apicv)
 		vmx_x86_ops.sync_pir_to_irr = NULL;
-	}
 
 	if (cpu_has_vmx_tsc_scaling()) {
 		kvm_has_tsc_control = true;
+53-13
arch/x86/kvm/x86.c
···
 	static_call(kvm_x86_tlb_flush_guest)(vcpu);
 }
 
+
+static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu)
+{
+	++vcpu->stat.tlb_flush;
+	static_call(kvm_x86_tlb_flush_current)(vcpu);
+}
+
+/*
+ * Service "local" TLB flush requests, which are specific to the current MMU
+ * context.  In addition to the generic event handling in vcpu_enter_guest(),
+ * TLB flushes that are targeted at an MMU context also need to be serviced
+ * prior before nested VM-Enter/VM-Exit.
+ */
+void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
+{
+	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
+		kvm_vcpu_flush_tlb_current(vcpu);
+
+	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
+		kvm_vcpu_flush_tlb_guest(vcpu);
+}
+EXPORT_SYMBOL_GPL(kvm_service_local_tlb_flush_requests);
+
 static void record_steal_time(struct kvm_vcpu *vcpu)
 {
 	struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
···
 	case KVM_CAP_SGX_ATTRIBUTE:
 #endif
 	case KVM_CAP_VM_COPY_ENC_CONTEXT_FROM:
+	case KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM:
 	case KVM_CAP_SREGS2:
 	case KVM_CAP_EXIT_ON_EMULATION_FAILURE:
 	case KVM_CAP_VCPU_ATTRIBUTES:
···
 static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
 				    struct kvm_lapic_state *s)
 {
-	if (vcpu->arch.apicv_active)
-		static_call(kvm_x86_sync_pir_to_irr)(vcpu);
+	static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
 
 	return kvm_apic_get_state(vcpu, s);
 }
···
 		struct kvm_cpuid __user *cpuid_arg = argp;
 		struct kvm_cpuid cpuid;
 
+		/*
+		 * KVM does not correctly handle changing guest CPUID after KVM_RUN, as
+		 * MAXPHYADDR, GBPAGES support, AMD reserved bit behavior, etc.. aren't
+		 * tracked in kvm_mmu_page_role.  As a result, KVM may miss guest page
+		 * faults due to reusing SPs/SPTEs.  In practice no sane VMM mucks with
+		 * the core vCPU model on the fly, so fail.
+		 */
+		r = -EINVAL;
+		if (vcpu->arch.last_vmentry_cpu != -1)
+			goto out;
+
 		r = -EFAULT;
 		if (copy_from_user(&cpuid, cpuid_arg, sizeof(cpuid)))
 			goto out;
···
 	case KVM_SET_CPUID2: {
 		struct kvm_cpuid2 __user *cpuid_arg = argp;
 		struct kvm_cpuid2 cpuid;
+
+		/*
+		 * KVM_SET_CPUID{,2} after KVM_RUN is forbidded, see the comment in
+		 * KVM_SET_CPUID case above.
+		 */
+		r = -EINVAL;
+		if (vcpu->arch.last_vmentry_cpu != -1)
+			goto out;
 
 		r = -EFAULT;
 		if (copy_from_user(&cpuid, cpuid_arg, sizeof(cpuid)))
···
 	if (irqchip_split(vcpu->kvm))
 		kvm_scan_ioapic_routes(vcpu, vcpu->arch.ioapic_handled_vectors);
 	else {
-		if (vcpu->arch.apicv_active)
-			static_call(kvm_x86_sync_pir_to_irr)(vcpu);
+		static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
 		if (ioapic_in_kernel(vcpu->kvm))
 			kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors);
 	}
···
 		/* Flushing all ASIDs flushes the current ASID... */
 		kvm_clear_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 	}
-	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
-		kvm_vcpu_flush_tlb_current(vcpu);
-	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
-		kvm_vcpu_flush_tlb_guest(vcpu);
+	kvm_service_local_tlb_flush_requests(vcpu);
 
 	if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
 		vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
···
 
 		/*
 		 * This handles the case where a posted interrupt was
-		 * notified with kvm_vcpu_kick.
+		 * notified with kvm_vcpu_kick.  Assigned devices can
+		 * use the POSTED_INTR_VECTOR even if APICv is disabled,
+		 * so do it even if APICv is disabled on this vCPU.
 		 */
-		if (kvm_lapic_enabled(vcpu) && vcpu->arch.apicv_active)
-			static_call(kvm_x86_sync_pir_to_irr)(vcpu);
+		if (kvm_lapic_enabled(vcpu))
+			static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
 
 		if (kvm_vcpu_exit_request(vcpu)) {
 			vcpu->mode = OUTSIDE_GUEST_MODE;
···
 		if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
 			break;
 
-		if (vcpu->arch.apicv_active)
-			static_call(kvm_x86_sync_pir_to_irr)(vcpu);
+		if (kvm_lapic_enabled(vcpu))
+			static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
 
 		if (unlikely(kvm_vcpu_exit_request(vcpu))) {
 			exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
drivers/iommu/amd/iommu_v2.c
···
 {
 	int ret;
 
-	pr_info("AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>\n");
-
 	if (!amd_iommu_v2_supported()) {
-		pr_info("AMD IOMMUv2 functionality not available on this system\n");
+		pr_info("AMD IOMMUv2 functionality not available on this system - This is not a bug.\n");
 		/*
 		 * Load anyway to provide the symbols to other modules
 		 * which may use AMD IOMMUv2 optionally.
···
 		goto out;
 
 	amd_iommu_register_ppr_notifier(&ppr_nb);
+
+	pr_info("AMD IOMMUv2 loaded and initialized\n");
 
 	return 0;
 
drivers/iommu/intel/iommu.c
···
 	pte = &pte[pfn_level_offset(pfn, level)];
 
 	do {
-		unsigned long level_pfn;
+		unsigned long level_pfn = pfn & level_mask(level);
 
 		if (!dma_pte_present(pte))
 			goto next;
-
-		level_pfn = pfn & level_mask(level);
 
 		/* If range covers entire pagetable, free it */
 		if (start_pfn <= level_pfn &&
···
 						       freelist);
 		}
 next:
-		pfn += level_size(level);
+		pfn = level_pfn + level_size(level);
 	} while (!first_pte_in_page(++pte) && pfn <= last_pfn);
 
 	if (first_pte)
+2-2
drivers/iommu/rockchip-iommu.c
···
 #define DTE_HI_MASK2	GENMASK(7, 4)
 #define DTE_HI_SHIFT1	24 /* shift bit 8 to bit 32 */
 #define DTE_HI_SHIFT2	32 /* shift bit 4 to bit 36 */
-#define PAGE_DESC_HI_MASK1	GENMASK_ULL(39, 36)
-#define PAGE_DESC_HI_MASK2	GENMASK_ULL(35, 32)
+#define PAGE_DESC_HI_MASK1	GENMASK_ULL(35, 32)
+#define PAGE_DESC_HI_MASK2	GENMASK_ULL(39, 36)
 
 static inline phys_addr_t rk_dte_pt_address_v2(u32 dte)
 {
+1-3
drivers/net/Kconfig
···
 config VMXNET3
 	tristate "VMware VMXNET3 ethernet driver"
 	depends on PCI && INET
-	depends on !(PAGE_SIZE_64KB || ARM64_64K_PAGES || \
-		     IA64_PAGE_SIZE_64KB || PARISC_PAGE_SIZE_64KB || \
-		     PPC_64K_PAGES)
+	depends on PAGE_SIZE_LESS_THAN_64KB
 	help
 	  This driver supports VMware's vmxnet3 virtual ethernet NIC.
 	  To compile this driver as a module, choose M here: the
drivers/net/dsa/mv88e6xxx/serdes.c
···
 }
 
 static int mv88e6xxx_serdes_pcs_get_state(struct mv88e6xxx_chip *chip,
-					  u16 status, u16 lpa,
+					  u16 ctrl, u16 status, u16 lpa,
 					  struct phylink_link_state *state)
 {
+	state->link = !!(status & MV88E6390_SGMII_PHY_STATUS_LINK);
+
 	if (status & MV88E6390_SGMII_PHY_STATUS_SPD_DPL_VALID) {
-		state->link = !!(status & MV88E6390_SGMII_PHY_STATUS_LINK);
+		/* The Spped and Duplex Resolved register is 1 if AN is enabled
+		 * and complete, or if AN is disabled. So with disabled AN we
+		 * still get here on link up. But we want to set an_complete
+		 * only if AN was enabled, thus we look at BMCR_ANENABLE.
+		 * (According to 802.3-2008 section 22.2.4.2.10, we should be
+		 * able to get this same value from BMSR_ANEGCAPABLE, but tests
+		 * show that these Marvell PHYs don't conform to this part of
+		 * the specificaion - BMSR_ANEGCAPABLE is simply always 1.)
+		 */
+		state->an_complete = !!(ctrl & BMCR_ANENABLE);
 		state->duplex = status &
 				MV88E6390_SGMII_PHY_STATUS_DUPLEX_FULL ?
 				DUPLEX_FULL : DUPLEX_HALF;
···
 			dev_err(chip->dev, "invalid PHY speed\n");
 			return -EINVAL;
 		}
+	} else if (state->link &&
+		   state->interface != PHY_INTERFACE_MODE_SGMII) {
+		/* If Speed and Duplex Resolved register is 0 and link is up, it
+		 * means that AN was enabled, but link partner had it disabled
+		 * and the PHY invoked the Auto-Negotiation Bypass feature and
+		 * linked anyway.
+		 */
+		state->duplex = DUPLEX_FULL;
+		if (state->interface == PHY_INTERFACE_MODE_2500BASEX)
+			state->speed = SPEED_2500;
+		else
+			state->speed = SPEED_1000;
 	} else {
 		state->link = false;
 	}
···
 int mv88e6352_serdes_pcs_get_state(struct mv88e6xxx_chip *chip, int port,
 				   int lane, struct phylink_link_state *state)
 {
-	u16 lpa, status;
+	u16 lpa, status, ctrl;
 	int err;
+
+	err = mv88e6352_serdes_read(chip, MII_BMCR, &ctrl);
+	if (err) {
+		dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err);
+		return err;
+	}
 
 	err = mv88e6352_serdes_read(chip, 0x11, &status);
 	if (err) {
···
 		return err;
 	}
 
-	return mv88e6xxx_serdes_pcs_get_state(chip, status, lpa, state);
+	return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state);
 }
 
 int mv88e6352_serdes_pcs_an_restart(struct mv88e6xxx_chip *chip, int port,
···
 static int mv88e6390_serdes_pcs_get_state_sgmii(struct mv88e6xxx_chip *chip,
 	int port, int lane, struct phylink_link_state *state)
 {
-	u16 lpa, status;
+	u16 lpa, status, ctrl;
 	int err;
+
+	err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
+				    MV88E6390_SGMII_BMCR, &ctrl);
+	if (err) {
+		dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err);
+		return err;
+	}
 
 	err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
 				    MV88E6390_SGMII_PHY_STATUS, &status);
···
 		return err;
 	}
 
-	return mv88e6xxx_serdes_pcs_get_state(chip, status, lpa, state);
+	return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state);
 }
 
 static int mv88e6390_serdes_pcs_get_state_10g(struct mv88e6xxx_chip *chip,
···
 	}
 }
 
-static int mv88e6393x_serdes_port_errata(struct mv88e6xxx_chip *chip, int lane)
+static int mv88e6393x_serdes_power_lane(struct mv88e6xxx_chip *chip, int lane,
+					bool on)
 {
-	u16 reg, pcs;
+	u16 reg;
+	int err;
+
+	err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
+				    MV88E6393X_SERDES_CTRL1, &reg);
+	if (err)
+		return err;
+
+	if (on)
+		reg &= ~(MV88E6393X_SERDES_CTRL1_TX_PDOWN |
+			 MV88E6393X_SERDES_CTRL1_RX_PDOWN);
+	else
+		reg |= MV88E6393X_SERDES_CTRL1_TX_PDOWN |
+		       MV88E6393X_SERDES_CTRL1_RX_PDOWN;
+
+	return mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,
+				      MV88E6393X_SERDES_CTRL1, reg);
+}
+
+static int mv88e6393x_serdes_erratum_4_6(struct mv88e6xxx_chip *chip, int lane)
+{
+	u16 reg;
 	int err;
 
 	/* mv88e6393x family errata 4.6:
···
 	 * It seems that after this workaround the SERDES is automatically
 	 * powered up (the bit is cleared), so power it down.
 	 */
-	if (lane == MV88E6393X_PORT0_LANE || lane == MV88E6393X_PORT9_LANE ||
-	    lane == MV88E6393X_PORT10_LANE) {
-		err = mv88e6390_serdes_read(chip, lane,
-					    MDIO_MMD_PHYXS,
-					    MV88E6393X_SERDES_POC, &reg);
-		if (err)
-			return err;
+	err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
+				    MV88E6393X_SERDES_POC, &reg);
+	if (err)
+		return err;
 
-		reg &= ~MV88E6393X_SERDES_POC_PDOWN;
-		reg |= MV88E6393X_SERDES_POC_RESET;
+	reg &= ~MV88E6393X_SERDES_POC_PDOWN;
+	reg |= MV88E6393X_SERDES_POC_RESET;
 
-		err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,
-					     MV88E6393X_SERDES_POC, reg);
-		if (err)
-			return err;
+	err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,
+				     MV88E6393X_SERDES_POC, reg);
+	if (err)
+		return err;
 
-		err = mv88e6390_serdes_power_sgmii(chip, lane, false);
-		if (err)
-			return err;
-	}
+	err = mv88e6390_serdes_power_sgmii(chip, lane, false);
+	if (err)
+		return err;
+
+	return mv88e6393x_serdes_power_lane(chip, lane, false);
+}
+
+int mv88e6393x_serdes_setup_errata(struct mv88e6xxx_chip *chip)
+{
+	int err;
+
+	err = mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT0_LANE);
+	if (err)
+		return err;
+
+	err = mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT9_LANE);
+	if (err)
+		return err;
+
+	return mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT10_LANE);
+}
+
+static int mv88e6393x_serdes_erratum_4_8(struct mv88e6xxx_chip *chip, int lane)
+{
+	u16 reg, pcs;
+	int err;
 
 	/* mv88e6393x family errata 4.8:
 	 * When a SERDES port is operating in 1000BASE-X or SGMII mode link may
···
 				       MV88E6393X_ERRATA_4_8_REG, reg);
 }
 
-int mv88e6393x_serdes_setup_errata(struct mv88e6xxx_chip *chip)
+static int mv88e6393x_serdes_erratum_5_2(struct mv88e6xxx_chip *chip, int lane,
+					 u8 cmode)
 {
+	static const struct {
+		u16 dev, reg, val, mask;
+	} fixes[] = {
+		{ MDIO_MMD_VEND1, 0x8093, 0xcb5a, 0xffff },
+		{ MDIO_MMD_VEND1, 0x8171, 0x7088, 0xffff },
+		{ MDIO_MMD_VEND1, 0x80c9, 0x311a, 0xffff },
+		{ MDIO_MMD_VEND1, 0x80a2, 0x8000, 0xff7f },
+		{ MDIO_MMD_VEND1, 0x80a9, 0x0000, 0xfff0 },
+		{ MDIO_MMD_VEND1, 0x80a3, 0x0000, 0xf8ff },
+		{ MDIO_MMD_PHYXS, MV88E6393X_SERDES_POC,
+		  MV88E6393X_SERDES_POC_RESET, MV88E6393X_SERDES_POC_RESET },
+	};
+	int err, i;
+	u16 reg;
+
+	/* mv88e6393x family errata 5.2:
+	 * For optimal signal integrity the following sequence should be applied
+	 * to SERDES operating in 10G mode. These registers only apply to 10G
+	 * operation and have no effect on other speeds.
+	 */
+	if (cmode != MV88E6393X_PORT_STS_CMODE_10GBASER)
+		return 0;
+
+	for (i = 0; i < ARRAY_SIZE(fixes); ++i) {
+		err = mv88e6390_serdes_read(chip, lane, fixes[i].dev,
+					    fixes[i].reg, &reg);
+		if (err)
+			return err;
+
+		reg &= ~fixes[i].mask;
+		reg |= fixes[i].val;
+
+		err = mv88e6390_serdes_write(chip, lane, fixes[i].dev,
+					     fixes[i].reg, reg);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int mv88e6393x_serdes_fix_2500basex_an(struct mv88e6xxx_chip *chip,
+					      int lane, u8 cmode, bool on)
+{
+	u16 reg;
 	int err;
 
-	err = mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT0_LANE);
+	if (cmode != MV88E6XXX_PORT_STS_CMODE_2500BASEX)
+		return 0;
+
+	/* Inband AN is broken on Amethyst in 2500base-x mode when set by
+	 * standard mechanism (via cmode).
+	 * We can get around this by configuring the PCS mode to 1000base-x
+	 * and then writing value 0x58 to register 1e.8000. (This must be done
+	 * while SerDes receiver and transmitter are disabled, which is, when
+	 * this function is called.)
+	 * It seem that when we do this configuration to 2500base-x mode (by
+	 * changing PCS mode to 1000base-x and frequency to 3.125 GHz from
+	 * 1.25 GHz) and then configure to sgmii or 1000base-x, the device
+	 * thinks that it already has SerDes at 1.25 GHz and does not change
+	 * the 1e.8000 register, leaving SerDes at 3.125 GHz.
+	 * To avoid this, change PCS mode back to 2500base-x when disabling
+	 * SerDes from 2500base-x mode.
+	 */
+	err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
+				    MV88E6393X_SERDES_POC, &reg);
 	if (err)
 		return err;
 
-	err = mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT9_LANE);
+	reg &= ~(MV88E6393X_SERDES_POC_PCS_MASK | MV88E6393X_SERDES_POC_AN);
+	if (on)
+		reg |= MV88E6393X_SERDES_POC_PCS_1000BASEX |
+		       MV88E6393X_SERDES_POC_AN;
+	else
+		reg |= MV88E6393X_SERDES_POC_PCS_2500BASEX;
+	reg |= MV88E6393X_SERDES_POC_RESET;
+
+	err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,
+				     MV88E6393X_SERDES_POC, reg);
 	if (err)
 		return err;
 
-	return mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT10_LANE);
+	err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_VEND1, 0x8000, 0x58);
+	if (err)
+		return err;
+
+	return 0;
 }
 
 int mv88e6393x_serdes_power(struct mv88e6xxx_chip *chip, int port, int lane,
 			    bool on)
 {
 	u8 cmode = chip->ports[port].cmode;
+	int err;
 
 	if (port != 0 && port != 9 && port != 10)
 		return -EOPNOTSUPP;
+
+	if (on) {
+		err = mv88e6393x_serdes_erratum_4_8(chip, lane);
+		if (err)
+			return err;
+
+		err = mv88e6393x_serdes_erratum_5_2(chip, lane, cmode);
+		if (err)
+			return err;
+
+		err = mv88e6393x_serdes_fix_2500basex_an(chip, lane, cmode,
+							 true);
+		if (err)
+			return err;
+
+		err = mv88e6393x_serdes_power_lane(chip, lane, true);
+		if (err)
+			return err;
+	}
 
 	switch (cmode) {
 	case MV88E6XXX_PORT_STS_CMODE_SGMII:
 	case MV88E6XXX_PORT_STS_CMODE_1000BASEX:
 	case MV88E6XXX_PORT_STS_CMODE_2500BASEX:
-		return mv88e6390_serdes_power_sgmii(chip, lane, on);
+		err = mv88e6390_serdes_power_sgmii(chip, lane, on);
+		break;
 	case MV88E6393X_PORT_STS_CMODE_5GBASER:
 	case MV88E6393X_PORT_STS_CMODE_10GBASER:
-		return mv88e6390_serdes_power_10g(chip, lane, on);
+		err = mv88e6390_serdes_power_10g(chip, lane, on);
+		break;
 	}
 
-	return 0;
+	if (err)
+		return err;
+
+	if (!on) {
+		err = mv88e6393x_serdes_power_lane(chip, lane, false);
+		if (err)
+			return err;
+
+		err = mv88e6393x_serdes_fix_2500basex_an(chip, lane, cmode,
+							 false);
+	}
+
+	return err;
}
···
 			      bool carry_xdp_prog)
 {
 	struct bpf_prog *xdp_prog;
-	int i, t;
+	int i, t, ret;

-	mlx4_en_copy_priv(tmp, priv, prof);
+	ret = mlx4_en_copy_priv(tmp, priv, prof);
+	if (ret) {
+		en_warn(priv, "%s: mlx4_en_copy_priv() failed, return\n",
+			__func__);
+		return ret;
+	}

 	if (mlx4_en_alloc_resources(tmp)) {
 		en_warn(priv,
+1-1
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
···
 	case MLX5_CMD_OP_DEALLOC_SF:
 	case MLX5_CMD_OP_DESTROY_UCTX:
 	case MLX5_CMD_OP_DESTROY_UMEM:
+	case MLX5_CMD_OP_MODIFY_RQT:
 		return MLX5_CMD_STAT_OK;

 	case MLX5_CMD_OP_QUERY_HCA_CAP:
···
 	case MLX5_CMD_OP_MODIFY_TIS:
 	case MLX5_CMD_OP_QUERY_TIS:
 	case MLX5_CMD_OP_CREATE_RQT:
-	case MLX5_CMD_OP_MODIFY_RQT:
 	case MLX5_CMD_OP_QUERY_RQT:

 	case MLX5_CMD_OP_CREATE_FLOW_TABLE:
···
 	/* If vports min rate divider is 0 but their group has bw_share configured, then
 	 * need to set bw_share for vports to minimal value.
 	 */
-	if (!group_level && !max_guarantee && group->bw_share)
+	if (!group_level && !max_guarantee && group && group->bw_share)
 		return 1;
 	return 0;
 }
···
 		return err;

 	/* Recalculate bw share weights of old and new groups */
-	if (vport->qos.bw_share) {
+	if (vport->qos.bw_share || new_group->bw_share) {
 		esw_qos_normalize_vports_min_rate(esw, curr_group, extack);
 		esw_qos_normalize_vports_min_rate(esw, new_group, extack);
 	}
···
 esw_is_indir_table(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr)
 {
 	struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr;
+	bool result = false;
 	int i;

-	for (i = esw_attr->split_count; i < esw_attr->out_count; i++)
+	/* Indirect table is supported only for flows with in_port uplink
+	 * and the destination is vport on the same eswitch as the uplink,
+	 * return false in case at least one of destinations doesn't meet
+	 * this criteria.
+	 */
+	for (i = esw_attr->split_count; i < esw_attr->out_count; i++) {
 		if (esw_attr->dests[i].rep &&
 		    mlx5_esw_indir_table_needed(esw, attr, esw_attr->dests[i].rep->vport,
-						esw_attr->dests[i].mdev))
-			return true;
-	return false;
+						esw_attr->dests[i].mdev)) {
+			result = true;
+		} else {
+			result = false;
+			break;
+		}
+	}
+	return result;
 }

 static int
···
 	struct mlx5_eswitch *esw = master->priv.eswitch;
 	struct mlx5_flow_table_attr ft_attr = {
 		.max_fte = 1, .prio = 0, .level = 0,
+		.flags = MLX5_FLOW_TABLE_OTHER_VPORT,
 	};
 	struct mlx5_flow_namespace *egress_ns;
 	struct mlx5_flow_table *acl;
···
 	dev->timeouts->to[type] = val;
 }

-static void tout_set_def_val(struct mlx5_core_dev *dev)
+void mlx5_tout_set_def_val(struct mlx5_core_dev *dev)
 {
 	int i;

-	for (i = MLX5_TO_FW_PRE_INIT_TIMEOUT_MS; i < MAX_TIMEOUT_TYPES; i++)
+	for (i = 0; i < MAX_TIMEOUT_TYPES; i++)
 		tout_set(dev, tout_def_sw_val[i], i);
 }
···
 	if (!dev->timeouts)
 		return -ENOMEM;

-	tout_set_def_val(dev);
 	return 0;
 }
···
 #define IWL_EXPORT_SYMBOL(sym)
 #endif

+/* max retry for init flow */
+#define IWL_MAX_INIT_RETRY 2
+
 #endif /* __iwl_drv_h__ */
+23-1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
···
 #include <net/ieee80211_radiotap.h>
 #include <net/tcp.h>

+#include "iwl-drv.h"
 #include "iwl-op-mode.h"
 #include "iwl-io.h"
 #include "mvm.h"
···
 {
 	struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
 	int ret;
+	int retry, max_retry = 0;

 	mutex_lock(&mvm->mutex);
-	ret = __iwl_mvm_mac_start(mvm);
+
+	/* we are starting the mac not in error flow, and restart is enabled */
+	if (!test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status) &&
+	    iwlwifi_mod_params.fw_restart) {
+		max_retry = IWL_MAX_INIT_RETRY;
+		/*
+		 * This will prevent mac80211 recovery flows to trigger during
+		 * init failures
+		 */
+		set_bit(IWL_MVM_STATUS_STARTING, &mvm->status);
+	}
+
+	for (retry = 0; retry <= max_retry; retry++) {
+		ret = __iwl_mvm_mac_start(mvm);
+		if (!ret)
+			break;
+
+		IWL_ERR(mvm, "mac start retry %d\n", retry);
+	}
+	clear_bit(IWL_MVM_STATUS_STARTING, &mvm->status);
+
 	mutex_unlock(&mvm->mutex);

 	return ret;
+3
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
···
  * @IWL_MVM_STATUS_FIRMWARE_RUNNING: firmware is running
  * @IWL_MVM_STATUS_NEED_FLUSH_P2P: need to flush P2P bcast STA
  * @IWL_MVM_STATUS_IN_D3: in D3 (or at least about to go into it)
+ * @IWL_MVM_STATUS_STARTING: starting mac,
+ *	used to disable restart flow while in STARTING state
  */
 enum iwl_mvm_status {
 	IWL_MVM_STATUS_HW_RFKILL,
···
 	IWL_MVM_STATUS_FIRMWARE_RUNNING,
 	IWL_MVM_STATUS_NEED_FLUSH_P2P,
 	IWL_MVM_STATUS_IN_D3,
+	IWL_MVM_STATUS_STARTING,
 };

 /* Keep track of completed init configuration */
+5
drivers/net/wireless/intel/iwlwifi/mvm/ops.c
···
 	int ret;

 	rtnl_lock();
+	wiphy_lock(mvm->hw->wiphy);
 	mutex_lock(&mvm->mutex);

 	ret = iwl_run_init_mvm_ucode(mvm);
···
 	iwl_mvm_stop_device(mvm);

 	mutex_unlock(&mvm->mutex);
+	wiphy_unlock(mvm->hw->wiphy);
 	rtnl_unlock();

 	if (ret < 0)
···
 	 */
 	if (!mvm->fw_restart && fw_error) {
 		iwl_fw_error_collect(&mvm->fwrt, false);
+	} else if (test_bit(IWL_MVM_STATUS_STARTING,
+			    &mvm->status)) {
+		IWL_ERR(mvm, "Starting mac, retry will be triggered anyway\n");
 	} else if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
 		struct iwl_mvm_reprobe *reprobe;
+8-2
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
···
 			      u16 mac_type, u8 mac_step,
 			      u16 rf_type, u8 cdb, u8 rf_id, u8 no_160, u8 cores)
 {
+	int num_devices = ARRAY_SIZE(iwl_dev_info_table);
 	int i;

-	for (i = ARRAY_SIZE(iwl_dev_info_table) - 1; i >= 0; i--) {
+	if (!num_devices)
+		return NULL;
+
+	for (i = num_devices - 1; i >= 0; i--) {
 		const struct iwl_dev_info *dev_info = &iwl_dev_info_table[i];

 		if (dev_info->device != (u16)IWL_CFG_ANY &&
···
 	 */
 	if (iwl_trans->trans_cfg->rf_id &&
 	    iwl_trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_9000 &&
-	    !CSR_HW_RFID_TYPE(iwl_trans->hw_rf_id) && get_crf_id(iwl_trans))
+	    !CSR_HW_RFID_TYPE(iwl_trans->hw_rf_id) && get_crf_id(iwl_trans)) {
+		ret = -EINVAL;
 		goto out_free_trans;
+	}

 	dev_info = iwl_pci_find_dev_info(pdev->device, pdev->subsystem_device,
 					 CSR_HW_REV_TYPE(iwl_trans->hw_rev),
···
 		if (!(cb->flags & MT_TX_CB_DMA_DONE))
 			continue;

-		if (!time_is_after_jiffies(cb->jiffies +
+		if (time_is_after_jiffies(cb->jiffies +
 					   MT_TX_STATUS_SKB_TIMEOUT))
 			continue;
 	}
+3
drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
···
 	if (status == -ENODEV || status == -ENOENT)
 		return true;

+	if (!test_bit(DEVICE_STATE_STARTED, &rt2x00dev->flags))
+		return false;
+
 	if (status == -EPROTO || status == -ETIMEDOUT)
 		rt2x00dev->num_proto_errs++;
 	else
···

 #define MPT_MAX_CALLBACKS	32

+#define MPT_MAX_HBA_NUM_PHYS	32
+
 #define INTERNAL_CMDS_COUNT	10	/* reserved cmds */
 /* reserved for issuing internally framed scsi io cmds */
 #define INTERNAL_SCSIIO_CMDS_COUNT	3
···
  * @enclosure_handle: handle for this a member of an enclosure
  * @device_info: bitwise defining capabilities of this sas_host/expander
  * @responding: used in _scsih_expander_device_mark_responding
+ * @nr_phys_allocated: Allocated memory for this many count phys
  * @phy: a list of phys that make up this sas_host/expander
  * @sas_port_list: list of ports attached to this sas_host/expander
  * @port: hba port entry containing node's port number info
···
 	u16	enclosure_handle;
 	u64	enclosure_logical_id;
 	u8	responding;
+	u8	nr_phys_allocated;
 	struct hba_port *port;
 	struct _sas_phy *phy;
 	struct list_head sas_port_list;
+57-2
drivers/scsi/mpt3sas/mpt3sas_scsih.c
···

 	shost_for_each_device(sdev, ioc->shost) {
 		sas_device_priv_data = sdev->hostdata;
-		if (!sas_device_priv_data)
+		if (!sas_device_priv_data || !sas_device_priv_data->sas_target)
 			continue;
 		if (sas_device_priv_data->sas_target->sas_address
 		    != sas_address)
···
 	int i, j, count = 0, lcount = 0;
 	int ret;
 	u64 sas_addr;
+	u8 num_phys;

 	drsprintk(ioc, ioc_info(ioc,
 	    "updating ports for sas_host(0x%016llx)\n",
 	    (unsigned long long)ioc->sas_hba.sas_address));
+
+	mpt3sas_config_get_number_hba_phys(ioc, &num_phys);
+	if (!num_phys) {
+		ioc_err(ioc, "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return;
+	}
+
+	if (num_phys > ioc->sas_hba.nr_phys_allocated) {
+		ioc_err(ioc, "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return;
+	}
+	ioc->sas_hba.num_phys = num_phys;

 	port_table = kcalloc(ioc->sas_hba.num_phys,
 	    sizeof(struct hba_port), GFP_KERNEL);
···
 			ioc->sas_hba.phy[i].hba_vphy = 1;
 		}

+		/*
+		 * Add new HBA phys to STL if these new phys got added as part
+		 * of HBA Firmware upgrade/downgrade operation.
+		 */
+		if (!ioc->sas_hba.phy[i].phy) {
+			if ((mpt3sas_config_get_phy_pg0(ioc, &mpi_reply,
+			    &phy_pg0, i))) {
+				ioc_err(ioc, "failure at %s:%d/%s()!\n",
+				    __FILE__, __LINE__, __func__);
+				continue;
+			}
+			ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+			    MPI2_IOCSTATUS_MASK;
+			if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
+				ioc_err(ioc, "failure at %s:%d/%s()!\n",
+				    __FILE__, __LINE__, __func__);
+				continue;
+			}
+			ioc->sas_hba.phy[i].phy_id = i;
+			mpt3sas_transport_add_host_phy(ioc,
+			    &ioc->sas_hba.phy[i], phy_pg0,
+			    ioc->sas_hba.parent_dev);
+			continue;
+		}
 		ioc->sas_hba.phy[i].handle = ioc->sas_hba.handle;
 		attached_handle = le16_to_cpu(sas_iounit_pg0->PhyData[i].
 		    AttachedDevHandle);
···
 		mpt3sas_transport_update_links(ioc, ioc->sas_hba.sas_address,
 		    attached_handle, i, link_rate,
 		    ioc->sas_hba.phy[i].port);
+	}
+	/*
+	 * Clear the phy details if this phy got disabled as part of
+	 * HBA Firmware upgrade/downgrade operation.
+	 */
+	for (i = ioc->sas_hba.num_phys;
+	    i < ioc->sas_hba.nr_phys_allocated; i++) {
+		if (ioc->sas_hba.phy[i].phy &&
+		    ioc->sas_hba.phy[i].phy->negotiated_linkrate >=
+		    SAS_LINK_RATE_1_5_GBPS)
+			mpt3sas_transport_update_links(ioc,
+			    ioc->sas_hba.sas_address, 0, i,
+			    MPI2_SAS_NEG_LINK_RATE_PHY_DISABLED, NULL);
 	}
  out:
 	kfree(sas_iounit_pg0);
···
 		    __FILE__, __LINE__, __func__);
 		return;
 	}
-	ioc->sas_hba.phy = kcalloc(num_phys,
+
+	ioc->sas_hba.nr_phys_allocated = max_t(u8,
+	    MPT_MAX_HBA_NUM_PHYS, num_phys);
+	ioc->sas_hba.phy = kcalloc(ioc->sas_hba.nr_phys_allocated,
 	    sizeof(struct _sas_phy), GFP_KERNEL);
 	if (!ioc->sas_hba.phy) {
 		ioc_err(ioc, "failure at %s:%d/%s()!\n",
+1-1
drivers/scsi/qla2xxx/qla_edif.c
···
 		    "APP request entry - portid=%06x.\n", tdid.b24);

 		/* Ran out of space */
-		if (pcnt > app_req.num_ports)
+		if (pcnt >= app_req.num_ports)
 			break;

 		if (tdid.b24 != 0 && tdid.b24 != fcport->d_id.b24)
···

 	mutex_lock(&sdev->state_mutex);
 	if (sdev->sdev_state == SDEV_RUNNING && state == SDEV_RUNNING) {
-		ret = count;
+		ret = 0;
 	} else {
 		ret = scsi_device_set_state(sdev, state);
 		if (ret == 0 && state == SDEV_RUNNING)
+1
drivers/scsi/ufs/ufs-mediatek.c
···
 	}
 	link = device_link_add(dev, &reset_pdev->dev,
 		DL_FLAG_AUTOPROBE_CONSUMER);
+	put_device(&reset_pdev->dev);
 	if (!link) {
 		dev_notice(dev, "add reset device_link fail\n");
 		goto skip_reset;
+1-1
drivers/scsi/ufs/ufshpb.c
···
 	cdb[0] = UFSHPB_READ;

 	if (hba->dev_quirks & UFS_DEVICE_QUIRK_SWAP_L2P_ENTRY_FOR_HPB_READ)
-		ppn_tmp = swab64(ppn);
+		ppn_tmp = (__force __be64)swab64((__force u64)ppn);

 	/* ppn value is stored as big-endian in the host memory */
 	memcpy(&cdb[6], &ppn_tmp, sizeof(__be64));
···

 	vhost_disable_notify(&vsock->dev, vq);
 	do {
-		u32 len;
-
 		if (!vhost_vsock_more_replies(vsock)) {
 			/* Stop tx until the device processes already
 			 * pending replies.  Leave tx virtqueue
···
 			continue;
 		}

-		len = pkt->len;
+		total_len += sizeof(pkt->hdr) + pkt->len;

 		/* Deliver to monitoring devices all received packets */
 		virtio_transport_deliver_tap_pkt(pkt);
···
 		else
 			virtio_transport_free_pkt(pkt);

-		len += sizeof(pkt->hdr);
-		vhost_add_used(vq, head, len);
-		total_len += len;
+		vhost_add_used(vq, head, 0);
 		added = true;
 	} while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));
-60
drivers/virtio/virtio_ring.c
···
 #include <linux/spinlock.h>
 #include <xen/xen.h>

-static bool force_used_validation = false;
-module_param(force_used_validation, bool, 0444);
-
 #ifdef DEBUG
 /* For development, we want to crash whenever the ring is screwed. */
 #define BAD_RING(_vq, fmt, args...)				\
···
 			size_t event_size_in_bytes;
 		} packed;
 	};
-
-	/* Per-descriptor in buffer length */
-	u32 *buflen;

 	/* How to notify other side. FIXME: commonalize hcalls! */
 	bool (*notify)(struct virtqueue *vq);
···
 	unsigned int i, n, avail, descs_used, prev, err_idx;
 	int head;
 	bool indirect;
-	u32 buflen = 0;

 	START_USE(vq);
···
 						     VRING_DESC_F_NEXT |
 						     VRING_DESC_F_WRITE,
 						     indirect);
-			buflen += sg->length;
 		}
 	}
 	/* Last one doesn't continue. */
···
 		vq->split.desc_state[head].indir_desc = desc;
 	else
 		vq->split.desc_state[head].indir_desc = ctx;
-
-	/* Store in buffer length if necessary */
-	if (vq->buflen)
-		vq->buflen[head] = buflen;

 	/* Put entry in available array (but don't update avail->idx until they
 	 * do sync). */
···
 	}
 	if (unlikely(!vq->split.desc_state[i].data)) {
 		BAD_RING(vq, "id %u is not a head!\n", i);
-		return NULL;
-	}
-	if (vq->buflen && unlikely(*len > vq->buflen[i])) {
-		BAD_RING(vq, "used len %d is larger than in buflen %u\n",
-			 *len, vq->buflen[i]);
 		return NULL;
 	}
···
 	unsigned int i, n, err_idx;
 	u16 head, id;
 	dma_addr_t addr;
-	u32 buflen = 0;

 	head = vq->packed.next_avail_idx;
 	desc = alloc_indirect_packed(total_sg, gfp);
···
 			desc[i].addr = cpu_to_le64(addr);
 			desc[i].len = cpu_to_le32(sg->length);
 			i++;
-			if (n >= out_sgs)
-				buflen += sg->length;
 		}
 	}
···
 	vq->packed.desc_state[id].indir_desc = desc;
 	vq->packed.desc_state[id].last = id;

-	/* Store in buffer length if necessary */
-	if (vq->buflen)
-		vq->buflen[id] = buflen;
-
 	vq->num_added += 1;

 	pr_debug("Added buffer head %i to %p\n", head, vq);
···
 	__le16 head_flags, flags;
 	u16 head, id, prev, curr, avail_used_flags;
 	int err;
-	u32 buflen = 0;

 	START_USE(vq);
···
 					1 << VRING_PACKED_DESC_F_AVAIL |
 					1 << VRING_PACKED_DESC_F_USED;
 			}
-			if (n >= out_sgs)
-				buflen += sg->length;
 		}
 	}
···
 	vq->packed.desc_state[id].data = data;
 	vq->packed.desc_state[id].indir_desc = ctx;
 	vq->packed.desc_state[id].last = prev;
-
-	/* Store in buffer length if necessary */
-	if (vq->buflen)
-		vq->buflen[id] = buflen;

 	/*
 	 * A driver MUST NOT make the first descriptor in the list
···
 	}
 	if (unlikely(!vq->packed.desc_state[id].data)) {
 		BAD_RING(vq, "id %u is not a head!\n", id);
-		return NULL;
-	}
-	if (vq->buflen && unlikely(*len > vq->buflen[id])) {
-		BAD_RING(vq, "used len %d is larger than in buflen %u\n",
-			 *len, vq->buflen[id]);
 		return NULL;
 	}
···
 	struct vring_virtqueue *vq;
 	struct vring_packed_desc *ring;
 	struct vring_packed_desc_event *driver, *device;
-	struct virtio_driver *drv = drv_to_virtio(vdev->dev.driver);
 	dma_addr_t ring_dma_addr, driver_event_dma_addr, device_event_dma_addr;
 	size_t ring_size_in_bytes, event_size_in_bytes;
···
 	if (!vq->packed.desc_extra)
 		goto err_desc_extra;

-	if (!drv->suppress_used_validation || force_used_validation) {
-		vq->buflen = kmalloc_array(num, sizeof(*vq->buflen),
-					   GFP_KERNEL);
-		if (!vq->buflen)
-			goto err_buflen;
-	} else {
-		vq->buflen = NULL;
-	}
-
 	/* No callback?  Tell other side not to bother us. */
 	if (!callback) {
 		vq->packed.event_flags_shadow = VRING_PACKED_EVENT_FLAG_DISABLE;
···
 	spin_unlock(&vdev->vqs_list_lock);
 	return &vq->vq;

-err_buflen:
-	kfree(vq->packed.desc_extra);
 err_desc_extra:
 	kfree(vq->packed.desc_state);
 err_desc_state:
···
 			      void (*callback)(struct virtqueue *),
 			      const char *name)
 {
-	struct virtio_driver *drv = drv_to_virtio(vdev->dev.driver);
 	struct vring_virtqueue *vq;

 	if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED))
···
 	if (!vq->split.desc_extra)
 		goto err_extra;

-	if (!drv->suppress_used_validation || force_used_validation) {
-		vq->buflen = kmalloc_array(vring.num, sizeof(*vq->buflen),
-					   GFP_KERNEL);
-		if (!vq->buflen)
-			goto err_buflen;
-	} else {
-		vq->buflen = NULL;
-	}
-
 	/* Put everything in free lists. */
 	vq->free_head = 0;
 	memset(vq->split.desc_state, 0, vring.num *
···
 	spin_unlock(&vdev->vqs_list_lock);
 	return &vq->vq;

-err_buflen:
-	kfree(vq->split.desc_extra);
 err_extra:
 	kfree(vq->split.desc_state);
 err_state:
+6-2
fs/erofs/utils.c
···
 	 * however in order to avoid some race conditions, add a
 	 * DBG_BUGON to observe this in advance.
 	 */
-	DBG_BUGON(xa_erase(&sbi->managed_pslots, grp->index) != grp);
+	DBG_BUGON(__xa_erase(&sbi->managed_pslots, grp->index) != grp);

 	/* last refcount should be connected with its managed pslot.  */
 	erofs_workgroup_unfreeze(grp, 0);
···
 	unsigned int freed = 0;
 	unsigned long index;

+	xa_lock(&sbi->managed_pslots);
 	xa_for_each(&sbi->managed_pslots, index, grp) {
 		/* try to shrink each valid workgroup */
 		if (!erofs_try_to_release_workgroup(sbi, grp))
 			continue;
+		xa_unlock(&sbi->managed_pslots);

 		++freed;
 		if (!--nr_shrink)
-			break;
+			return freed;
+		xa_lock(&sbi->managed_pslots);
 	}
+	xa_unlock(&sbi->managed_pslots);
 	return freed;
 }
+50-23
fs/io_uring.c
···

 static bool io_match_task(struct io_kiocb *head, struct task_struct *task,
 			  bool cancel_all)
+	__must_hold(&req->ctx->timeout_lock)
 {
 	struct io_kiocb *req;
···
 			return true;
 	}
 	return false;
+}
+
+static bool io_match_linked(struct io_kiocb *head)
+{
+	struct io_kiocb *req;
+
+	io_for_each_link(req, head) {
+		if (req->flags & REQ_F_INFLIGHT)
+			return true;
+	}
+	return false;
+}
+
+/*
+ * As io_match_task() but protected against racing with linked timeouts.
+ * User must not hold timeout_lock.
+ */
+static bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
+			       bool cancel_all)
+{
+	bool matched;
+
+	if (task && head->task != task)
+		return false;
+	if (cancel_all)
+		return true;
+
+	if (head->flags & REQ_F_LINK_TIMEOUT) {
+		struct io_ring_ctx *ctx = head->ctx;
+
+		/* protect against races with linked timeouts */
+		spin_lock_irq(&ctx->timeout_lock);
+		matched = io_match_linked(head);
+		spin_unlock_irq(&ctx->timeout_lock);
+	} else {
+		matched = io_match_linked(head);
+	}
+	return matched;
 }

 static inline bool req_has_async_data(struct io_kiocb *req)
···
 			kfree(nxt);
 		if (++i == nbufs)
 			return i;
+		cond_resched();
 	}
 	i++;
 	kfree(buf);
···
 	int posted = 0, i;

 	spin_lock(&ctx->completion_lock);
-	spin_lock_irq(&ctx->timeout_lock);
 	for (i = 0; i < (1U << ctx->cancel_hash_bits); i++) {
 		struct hlist_head *list;

 		list = &ctx->cancel_hash[i];
 		hlist_for_each_entry_safe(req, tmp, list, hash_node) {
-			if (io_match_task(req, tsk, cancel_all))
+			if (io_match_task_safe(req, tsk, cancel_all))
 				posted += io_poll_remove_one(req);
 		}
 	}
-	spin_unlock_irq(&ctx->timeout_lock);
 	spin_unlock(&ctx->completion_lock);

 	if (posted)
···

 	if (get_timespec64(&data->ts, u64_to_user_ptr(sqe->addr)))
 		return -EFAULT;
+
+	if (data->ts.tv_sec < 0 || data->ts.tv_nsec < 0)
+		return -EINVAL;

 	data->mode = io_translate_timeout_mode(flags);
 	hrtimer_init(&data->timer, io_timeout_get_clock(data), data->mode);
···
 static void io_req_task_link_timeout(struct io_kiocb *req, bool *locked)
 {
 	struct io_kiocb *prev = req->timeout.prev;
-	int ret;
+	int ret = -ENOENT;

 	if (prev) {
-		ret = io_try_cancel_userdata(req, prev->user_data);
+		if (!(req->task->flags & PF_EXITING))
+			ret = io_try_cancel_userdata(req, prev->user_data);
 		io_req_complete_post(req, ret ?: -ETIME, 0);
 		io_put_req(prev);
 	} else {
···
 	struct io_buffer *buf;
 	unsigned long index;

-	xa_for_each(&ctx->io_buffers, index, buf) {
+	xa_for_each(&ctx->io_buffers, index, buf)
 		__io_remove_buffers(ctx, buf, index, -1U);
-		cond_resched();
-	}
 }

 static void io_req_caches_free(struct io_ring_ctx *ctx)
···
 {
 	struct io_kiocb *req = container_of(work, struct io_kiocb, work);
 	struct io_task_cancel *cancel = data;
-	bool ret;

-	if (!cancel->all && (req->flags & REQ_F_LINK_TIMEOUT)) {
-		struct io_ring_ctx *ctx = req->ctx;
-
-		/* protect against races with linked timeouts */
-		spin_lock_irq(&ctx->timeout_lock);
-		ret = io_match_task(req, cancel->task, cancel->all);
-		spin_unlock_irq(&ctx->timeout_lock);
-	} else {
-		ret = io_match_task(req, cancel->task, cancel->all);
-	}
-	return ret;
+	return io_match_task_safe(req, cancel->task, cancel->all);
 }

 static __cold bool io_cancel_defer_files(struct io_ring_ctx *ctx,
···
 	LIST_HEAD(list);

 	spin_lock(&ctx->completion_lock);
-	spin_lock_irq(&ctx->timeout_lock);
 	list_for_each_entry_reverse(de, &ctx->defer_list, list) {
-		if (io_match_task(de->req, task, cancel_all)) {
+		if (io_match_task_safe(de->req, task, cancel_all)) {
 			list_cut_position(&list, &ctx->defer_list, &de->list);
 			break;
 		}
 	}
-	spin_unlock_irq(&ctx->timeout_lock);
 	spin_unlock(&ctx->completion_lock);
 	if (list_empty(&list))
 		return false;
+16-10
fs/iomap/buffered-io.c
···
 	struct readahead_control *rac;
 };

-static loff_t iomap_read_inline_data(const struct iomap_iter *iter,
+/**
+ * iomap_read_inline_data - copy inline data into the page cache
+ * @iter: iteration structure
+ * @page: page to copy to
+ *
+ * Copy the inline data in @iter into @page and zero out the rest of the page.
+ * Only a single IOMAP_INLINE extent is allowed at the end of each file.
+ * Returns zero for success to complete the read, or the usual negative errno.
+ */
+static int iomap_read_inline_data(const struct iomap_iter *iter,
 		struct page *page)
 {
 	const struct iomap *iomap = iomap_iter_srcmap(iter);
···
 	void *addr;

 	if (PageUptodate(page))
-		return PAGE_SIZE - poff;
+		return 0;

 	if (WARN_ON_ONCE(size > PAGE_SIZE - poff))
 		return -EIO;
···
 	memset(addr + size, 0, PAGE_SIZE - poff - size);
 	kunmap_local(addr);
 	iomap_set_range_uptodate(page, poff, PAGE_SIZE - poff);
-	return PAGE_SIZE - poff;
+	return 0;
 }

 static inline bool iomap_block_needs_zeroing(const struct iomap_iter *iter,
···
 	sector_t sector;

 	if (iomap->type == IOMAP_INLINE)
-		return min(iomap_read_inline_data(iter, page), length);
+		return iomap_read_inline_data(iter, page);

 	/* zero post-eof blocks as the page may be mapped */
 	iop = iomap_page_create(iter->inode, page);
···
 			ctx->cur_page_in_bio = false;
 		}
 		ret = iomap_readpage_iter(iter, ctx, done);
+		if (ret <= 0)
+			return ret;
 	}

 	return done;
···
 static int iomap_write_begin_inline(const struct iomap_iter *iter,
 		struct page *page)
 {
-	int ret;
-
 	/* needs more work for the tailpacking case; disable for now */
 	if (WARN_ON_ONCE(iomap_iter_srcmap(iter)->offset != 0))
 		return -EIO;
-	ret = iomap_read_inline_data(iter, page);
-	if (ret < 0)
-		return ret;
-	return 0;
+	return iomap_read_inline_data(iter, page);
 }

 static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
···
 }
 EXPORT_SYMBOL(netfs_readpage);

-/**
- * netfs_skip_folio_read - prep a folio for writing without reading first
+/*
+ * Prepare a folio for writing without reading first
  * @folio: The folio being prepared
  * @pos: starting position for the write
  * @len: length of write
···
 	loff_t newsize = pos + len;
 	loff_t end = newsize - 1;

-	truncate_pagecache_range(inode, pos, end);
+	WARN_ON_ONCE(invalidate_inode_pages2_range(inode->i_mapping,
+				pos >> PAGE_SHIFT, end >> PAGE_SHIFT));
+
 	spin_lock(&inode->i_lock);
 	if (newsize > i_size_read(inode))
 		i_size_write(inode, newsize);
+1-2
fs/nfs/nfs42xdr.c
···
 	status = decode_clone(xdr);
 	if (status)
 		goto out;
-	status = decode_getfattr(xdr, res->dst_fattr, res->server);
-
+	decode_getfattr(xdr, res->dst_fattr, res->server);
 out:
 	res->rpc_status = status;
 	return status;
+4
fs/nfs/nfs4state.c
···
 		dprintk("%s: exit with error %d for server %s\n",
 			__func__, -EPROTONOSUPPORT, clp->cl_hostname);
 		return -EPROTONOSUPPORT;
+	case -ENOSPC:
+		if (clp->cl_cons_state == NFS_CS_SESSION_INITING)
+			nfs_mark_client_ready(clp, -EIO);
+		return -EIO;
 	case -NFS4ERR_NOT_SAME: /* FixMe: implement recovery
 				 * in nfs4_exchange_id */
 	default:
···
 config NTFS_RW
 	bool "NTFS write support"
 	depends on NTFS_FS
+	depends on PAGE_SIZE_LESS_THAN_64KB
 	help
 	  This enables the partial, but safe, write support in the NTFS driver.
+7-10
fs/xfs/libxfs/xfs_attr.c
···

 	state = xfs_da_state_alloc(args);
 	if (statep != NULL)
-		*statep = NULL;
+		*statep = state;

 	/*
 	 * Search to see if name exists, and get back a pointer to it.
 	 */
 	error = xfs_da3_node_lookup_int(state, &retval);
-	if (error) {
-		xfs_da_state_free(state);
-		return error;
-	}
+	if (error)
+		retval = error;

-	if (statep != NULL)
-		*statep = state;
-	else
+	if (!statep)
 		xfs_da_state_free(state);
+
 	return retval;
 }
···
 	 */
 	retval = xfs_attr_node_hasname(args, &dac->da_state);
 	if (retval != -ENOATTR && retval != -EEXIST)
-		return retval;
+		goto error;

 	if (retval == -ENOATTR && (args->attr_flags & XATTR_REPLACE))
 		goto error;
···

 	error = xfs_attr_node_hasname(args, state);
 	if (error != -EEXIST)
-		return error;
+		goto out;
 	error = 0;

 	ASSERT((*state)->path.blk[(*state)->path.active - 1].bp != NULL);
-21
fs/xfs/xfs_icache.c
···
 	trace_xfs_perag_clear_inode_tag(mp, pag->pag_agno, tag, _RET_IP_);
 }

-static inline void
-xfs_inew_wait(
-	struct xfs_inode	*ip)
-{
-	wait_queue_head_t *wq = bit_waitqueue(&ip->i_flags, __XFS_INEW_BIT);
-	DEFINE_WAIT_BIT(wait, &ip->i_flags, __XFS_INEW_BIT);
-
-	do {
-		prepare_to_wait(wq, &wait.wq_entry, TASK_UNINTERRUPTIBLE);
-		if (!xfs_iflags_test(ip, XFS_INEW))
-			break;
-		schedule();
-	} while (true);
-	finish_wait(wq, &wait.wq_entry);
-}
-
 /*
  * When we recycle a reclaimable inode, we need to re-initialise the VFS inode
  * part of the structure. This is made more complex by the fact we store
···
 	ASSERT(!rwsem_is_locked(&inode->i_rwsem));
 	error = xfs_reinit_inode(mp, inode);
 	if (error) {
-		bool	wake;
-
 		/*
 		 * Re-initializing the inode failed, and we are in deep
 		 * trouble.  Try to re-add it to the reclaim list.
 		 */
 		rcu_read_lock();
 		spin_lock(&ip->i_flags_lock);
-		wake = !!__xfs_iflags_test(ip, XFS_INEW);
 		ip->i_flags &= ~(XFS_INEW | XFS_IRECLAIM);
-		if (wake)
-			wake_up_bit(&ip->i_flags, __XFS_INEW_BIT);
 		ASSERT(ip->i_flags & XFS_IRECLAIMABLE);
 		spin_unlock(&ip->i_flags_lock);
 		rcu_read_unlock();
+1-3
fs/xfs/xfs_inode.h
···
 #define XFS_IRECLAIM		(1 << 0) /* started reclaiming this inode */
 #define XFS_ISTALE		(1 << 1) /* inode has been staled */
 #define XFS_IRECLAIMABLE	(1 << 2) /* inode can be reclaimed */
-#define __XFS_INEW_BIT		3	 /* inode has just been allocated */
-#define XFS_INEW		(1 << __XFS_INEW_BIT)
+#define XFS_INEW		(1 << 3) /* inode has just been allocated */
 #define XFS_IPRESERVE_DM_FIELDS	(1 << 4) /* has legacy DMAPI fields set */
 #define XFS_ITRUNCATED		(1 << 5) /* truncated down so flush-on-close */
 #define XFS_IDIRTY_RELEASE	(1 << 6) /* dirty release already seen */
···
 	xfs_iflags_clear(ip, XFS_INEW);
 	barrier();
 	unlock_new_inode(VFS_I(ip));
-	wake_up_bit(&ip->i_flags, __XFS_INEW_BIT);
 }

 static inline void xfs_setup_existing_inode(struct xfs_inode *ip)
···152152 * @feature_table_size: number of entries in the feature table array.153153 * @feature_table_legacy: same as feature_table but when working in legacy mode.154154 * @feature_table_size_legacy: number of entries in feature table legacy array.155155- * @suppress_used_validation: set to not have core validate used length156155 * @probe: the function to call when a device is found. Returns 0 or -errno.157156 * @scan: optional function to call after successful probe; intended158157 * for virtio-scsi to invoke a scan.···168169 unsigned int feature_table_size;169170 const unsigned int *feature_table_legacy;170171 unsigned int feature_table_size_legacy;171171- bool suppress_used_validation;172172 int (*validate)(struct virtio_device *dev);173173 int (*probe)(struct virtio_device *dev);174174 void (*scan)(struct virtio_device *dev);
+1-1
include/net/busy_poll.h
···133133 if (unlikely(READ_ONCE(sk->sk_napi_id) != skb->napi_id))134134 WRITE_ONCE(sk->sk_napi_id, skb->napi_id);135135#endif136136- sk_rx_queue_set(sk, skb);136136+ sk_rx_queue_update(sk, skb);137137}138138139139static inline void __sk_mark_napi_id_once(struct sock *sk, unsigned int napi_id)
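The switch from sk_rx_queue_set() to sk_rx_queue_update() here means the receive-queue cache is only rewritten when the recorded queue actually changed, while the set variant forces the store. A minimal userspace sketch of that write-avoidance pattern (the names and types are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the force_set idea behind __sk_rx_queue_set():
 * skip the store when the cached value is already current, unless the
 * caller forces it. Returns true when a store was performed. */
static bool queue_cache_set(uint16_t *cached, uint16_t rx_queue, bool force_set)
{
    if (force_set || *cached != rx_queue) {
        *cached = rx_queue;
        return true;
    }
    return false;
}
```

Avoiding the redundant store keeps the cache line clean when many packets arrive on the same queue, which is the point of using the update variant on this hot path.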
+11
include/net/dst_cache.h
···8080}81818282/**8383+ * dst_cache_reset_now - invalidate the cache contents immediately8484+ * @dst_cache: the cache8585+ *8686+ * The caller must be sure there are no concurrent users, as this frees8787+ * all dst_cache users immediately, rather than waiting for the next8888+ * per-cpu usage like dst_cache_reset does. Most callers should use the8989+ * higher speed lazily-freed dst_cache_reset function instead.9090+ */9191+void dst_cache_reset_now(struct dst_cache *dst_cache);9292+9393+/**8394 * dst_cache_init - initialize the cache, allocating the required storage8495 * @dst_cache: the cache8596 * @gfp: allocation flags
···438438#ifdef CONFIG_IP_ROUTE_CLASSID439439static inline int fib_num_tclassid_users(struct net *net)440440{441441- return net->ipv4.fib_num_tclassid_users;441441+ return atomic_read(&net->ipv4.fib_num_tclassid_users);442442}443443#else444444static inline int fib_num_tclassid_users(struct net *net)
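The hunk above switches fib_num_tclassid_users() to atomic_read() because the counter is now an atomic_t updated from multiple contexts without a common lock. The equivalent userspace pattern with C11 atomics (illustrative names, not the kernel API):

```c
#include <assert.h>
#include <stdatomic.h>

/* Plain int loads and stores can race when several threads update a
 * shared counter; C11 atomics make each operation indivisible, which
 * is the userspace analogue of the kernel's atomic_t / atomic_read(). */
static atomic_int tclassid_users = 0;

static void user_add(void)   { atomic_fetch_add(&tclassid_users, 1); }
static void user_del(void)   { atomic_fetch_sub(&tclassid_users, 1); }
static int  user_count(void) { return atomic_load(&tclassid_users); }
```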
···19431943 return -1;19441944}1945194519461946-static inline void sk_rx_queue_set(struct sock *sk, const struct sk_buff *skb)19461946+static inline void __sk_rx_queue_set(struct sock *sk,19471947+ const struct sk_buff *skb,19481948+ bool force_set)19471949{19481950#ifdef CONFIG_SOCK_RX_QUEUE_MAPPING19491951 if (skb_rx_queue_recorded(skb)) {19501952 u16 rx_queue = skb_get_rx_queue(skb);1951195319521952- if (unlikely(READ_ONCE(sk->sk_rx_queue_mapping) != rx_queue))19541954+ if (force_set ||19551955+ unlikely(READ_ONCE(sk->sk_rx_queue_mapping) != rx_queue))19531956 WRITE_ONCE(sk->sk_rx_queue_mapping, rx_queue);19541957 }19551958#endif19591959+}19601960+19611961+static inline void sk_rx_queue_set(struct sock *sk, const struct sk_buff *skb)19621962+{19631963+ __sk_rx_queue_set(sk, skb, true);19641964+}19651965+19661966+static inline void sk_rx_queue_update(struct sock *sk, const struct sk_buff *skb)19671967+{19681968+ __sk_rx_queue_set(sk, skb, false);19561969}1957197019581971static inline void sk_rx_queue_clear(struct sock *sk)···24702457 * @sk: socket24712458 *24722459 * Use the per task page_frag instead of the per socket one for24732473- * optimization when we know that we're in the normal context and owns24602460+ * optimization when we know that we're in process context and own24742461 * everything that's associated with %current.24752462 *24762476- * gfpflags_allow_blocking() isn't enough here as direct reclaim may nest24772477- * inside other socket operations and end up recursing into sk_page_frag()24782478- * while it's already in use.24632463+ * Both direct reclaim and page faults can nest inside other24642464+ * socket operations and end up recursing into sk_page_frag()24652465+ * while it's already in use: explicitly avoid task page_frag24662466+ * usage if the caller is potentially doing any of them.24672467+ * This assumes that page fault handlers use the GFP_NOFS flags.24792468 *24802469 * Return: a per task page_frag if context allows that,24812470 * 
otherwise a per socket one.24822471 */24832472static inline struct page_frag *sk_page_frag(struct sock *sk)24842473{24852485- if (gfpflags_normal_context(sk->sk_allocation))24742474+ if ((sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==24752475+ (__GFP_DIRECT_RECLAIM | __GFP_FS))24862476 return &current->task_frag;2487247724882478 return &sk->sk_frag;
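The new sk_page_frag() test accepts the per-task page_frag only when the socket's allocation mask allows direct reclaim and FS re-entry and is not drawing on the MEMALLOC emergency reserve. A sketch of that masking logic with illustrative flag values (the real `__GFP_*` bits live in the kernel headers; only the check itself is the point):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag bits -- not the real __GFP_* values. */
#define GFP_DIRECT_RECLAIM 0x1u
#define GFP_MEMALLOC       0x2u
#define GFP_FS             0x4u

/* Mirrors the sk_page_frag() condition: use the per-task frag only
 * when blocking (direct reclaim) and filesystem re-entry are both
 * allowed, and the MEMALLOC reserve is not in use. */
static bool can_use_task_frag(unsigned int gfp)
{
    return (gfp & (GFP_DIRECT_RECLAIM | GFP_MEMALLOC | GFP_FS)) ==
           (GFP_DIRECT_RECLAIM | GFP_FS);
}
```

Comparing the masked value against the exact required combination rejects both missing bits (no reclaim, no FS) and forbidden ones (MEMALLOC) in a single test.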
···117117#define ETH_P_IFE 0xED3E /* ForCES inter-FE LFB type */118118#define ETH_P_AF_IUCV 0xFBFB /* IBM af_iucv [ NOT AN OFFICIALLY REGISTERED ID ] */119119120120-#define ETH_P_802_3_MIN 0x0600 /* If the value in the ethernet type is less than this value120120+#define ETH_P_802_3_MIN 0x0600 /* If the value in the ethernet type is more than this value121121 * then the frame is Ethernet II. Else it is 802.3 */122122123123/*
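Per the corrected comment, a frame is Ethernet II when the type/length field is at least ETH_P_802_3_MIN (0x0600); smaller values are an 802.3 payload length. A minimal classifier capturing that rule (hypothetical helper name, not a kernel function):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ETH_P_802_3_MIN 0x0600

/* Values >= 0x0600 in the type/length field name an EtherType
 * (Ethernet II framing); smaller values are an 802.3 payload length. */
static bool is_ethernet_ii(uint16_t type_or_len)
{
    return type_or_len >= ETH_P_802_3_MIN;
}
```

For example, 0x0800 (IPv4) classifies as Ethernet II, while 0x05DC (a 1500-byte length) classifies as 802.3.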
+7
kernel/cpu.c
···3131#include <linux/smpboot.h>3232#include <linux/relay.h>3333#include <linux/slab.h>3434+#include <linux/scs.h>3435#include <linux/percpu-rwsem.h>3536#include <linux/cpuset.h>3637···587586{588587 struct task_struct *idle = idle_thread_get(cpu);589588 int ret;589589+590590+ /*591591+ * Reset stale stack state from the last time this CPU was online.592592+ */593593+ scs_task_reset(idle);594594+ kasan_unpoison_task_stack(idle);590595591596 /*592597 * Some architectures have to walk the irq descriptors to
+3
kernel/events/core.c
···97599759 continue;97609760 if (event->attr.config != entry->type)97619761 continue;97629762+ /* Cannot deliver synchronous signal to other task. */97639763+ if (event->attr.sigtrap)97649764+ continue;97629765 if (perf_tp_event_match(event, &data, regs))97639766 perf_swevent_event(event, count, &data, regs);97649767 }
···105105 * atomic_long_cmpxchg() will be used to obtain writer lock.106106 *107107 * There are three places where the lock handoff bit may be set or cleared.108108- * 1) rwsem_mark_wake() for readers.109109- * 2) rwsem_try_write_lock() for writers.110110- * 3) Error path of rwsem_down_write_slowpath().108108+ * 1) rwsem_mark_wake() for readers -- set, clear109109+ * 2) rwsem_try_write_lock() for writers -- set, clear110110+ * 3) rwsem_del_waiter() -- clear111111 *112112 * For all the above cases, wait_lock will be held. A writer must also113113 * be the first one in the wait_list to be eligible for setting the handoff···334334 struct task_struct *task;335335 enum rwsem_waiter_type type;336336 unsigned long timeout;337337+338338+ /* Writer only, not initialized in reader */339339+ bool handoff_set;337340};338341#define rwsem_first_waiter(sem) \339342 list_first_entry(&sem->wait_list, struct rwsem_waiter, list)···345342 RWSEM_WAKE_ANY, /* Wake whatever's at head of wait list */346343 RWSEM_WAKE_READERS, /* Wake readers only */347344 RWSEM_WAKE_READ_OWNED /* Waker thread holds the read lock */348348-};349349-350350-enum writer_wait_state {351351- WRITER_NOT_FIRST, /* Writer is not first in wait list */352352- WRITER_FIRST, /* Writer is first in wait list */353353- WRITER_HANDOFF /* Writer is first & handoff needed */354345};355346356347/*···362365 */363366#define MAX_READERS_WAKEUP 0x100364367368368+static inline void369369+rwsem_add_waiter(struct rw_semaphore *sem, struct rwsem_waiter *waiter)370370+{371371+ lockdep_assert_held(&sem->wait_lock);372372+ list_add_tail(&waiter->list, &sem->wait_list);373373+ /* caller will set RWSEM_FLAG_WAITERS */374374+}375375+376376+/*377377+ * Remove a waiter from the wait_list and clear flags.378378+ *379379+ * Both rwsem_mark_wake() and rwsem_try_write_lock() contain a full 'copy' of380380+ * this function. 
Modify with care.381381+ */382382+static inline void383383+rwsem_del_waiter(struct rw_semaphore *sem, struct rwsem_waiter *waiter)384384+{385385+ lockdep_assert_held(&sem->wait_lock);386386+ list_del(&waiter->list);387387+ if (likely(!list_empty(&sem->wait_list)))388388+ return;389389+390390+ atomic_long_andnot(RWSEM_FLAG_HANDOFF | RWSEM_FLAG_WAITERS, &sem->count);391391+}392392+365393/*366394 * handle the lock release when processes blocked on it that can now run367395 * - if we come here from up_xxxx(), then the RWSEM_FLAG_WAITERS bit must···398376 * preferably when the wait_lock is released399377 * - woken process blocks are discarded from the list after having task zeroed400378 * - writers are only marked woken if downgrading is false379379+ *380380+ * Implies rwsem_del_waiter() for all woken readers.401381 */402382static void rwsem_mark_wake(struct rw_semaphore *sem,403383 enum rwsem_wake_type wake_type,···514490515491 adjustment = woken * RWSEM_READER_BIAS - adjustment;516492 lockevent_cond_inc(rwsem_wake_reader, woken);517517- if (list_empty(&sem->wait_list)) {518518- /* hit end of list above */519519- adjustment -= RWSEM_FLAG_WAITERS;520520- }521493522522- /*523523- * When we've woken a reader, we no longer need to force writers524524- * to give up the lock and we can clear HANDOFF.525525- */526526- if (woken && (atomic_long_read(&sem->count) & RWSEM_FLAG_HANDOFF))527527- adjustment -= RWSEM_FLAG_HANDOFF;494494+ oldcount = atomic_long_read(&sem->count);495495+ if (list_empty(&sem->wait_list)) {496496+ /*497497+ * Combined with list_move_tail() above, this implies498498+ * rwsem_del_waiter().499499+ */500500+ adjustment -= RWSEM_FLAG_WAITERS;501501+ if (oldcount & RWSEM_FLAG_HANDOFF)502502+ adjustment -= RWSEM_FLAG_HANDOFF;503503+ } else if (woken) {504504+ /*505505+ * When we've woken a reader, we no longer need to force506506+ * writers to give up the lock and we can clear HANDOFF.507507+ */508508+ if (oldcount & RWSEM_FLAG_HANDOFF)509509+ adjustment -= 
RWSEM_FLAG_HANDOFF;510510+ }528511529512 if (adjustment)530513 atomic_long_add(adjustment, &sem->count);···563532 * race conditions between checking the rwsem wait list and setting the564533 * sem->count accordingly.565534 *566566- * If wstate is WRITER_HANDOFF, it will make sure that either the handoff567567- * bit is set or the lock is acquired with handoff bit cleared.535535+ * Implies rwsem_del_waiter() on success.568536 */569537static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,570570- enum writer_wait_state wstate)538538+ struct rwsem_waiter *waiter)571539{540540+ bool first = rwsem_first_waiter(sem) == waiter;572541 long count, new;573542574543 lockdep_assert_held(&sem->wait_lock);···577546 do {578547 bool has_handoff = !!(count & RWSEM_FLAG_HANDOFF);579548580580- if (has_handoff && wstate == WRITER_NOT_FIRST)581581- return false;549549+ if (has_handoff) {550550+ if (!first)551551+ return false;552552+553553+ /* First waiter inherits a previously set handoff bit */554554+ waiter->handoff_set = true;555555+ }582556583557 new = count;584558585559 if (count & RWSEM_LOCK_MASK) {586586- if (has_handoff || (wstate != WRITER_HANDOFF))560560+ if (has_handoff || (!rt_task(waiter->task) &&561561+ !time_after(jiffies, waiter->timeout)))587562 return false;588563589564 new |= RWSEM_FLAG_HANDOFF;···606569 * We have either acquired the lock with handoff bit cleared or607570 * set the handoff bit.608571 */609609- if (new & RWSEM_FLAG_HANDOFF)572572+ if (new & RWSEM_FLAG_HANDOFF) {573573+ waiter->handoff_set = true;574574+ lockevent_inc(rwsem_wlock_handoff);610575 return false;576576+ }611577578578+ /*579579+ * Have rwsem_try_write_lock() fully imply rwsem_del_waiter() on580580+ * success.581581+ */582582+ list_del(&waiter->list);612583 rwsem_set_owner(sem);613584 return true;614585}···1001956 }1002957 adjustment += RWSEM_FLAG_WAITERS;1003958 }10041004- list_add_tail(&waiter.list, &sem->wait_list);959959+ rwsem_add_waiter(sem, &waiter);10059601006961 /* we're 
now waiting on the lock, but no longer actively locking */1007962 count = atomic_long_add_return(adjustment, &sem->count);···10471002 return sem;1048100310491004out_nolock:10501050- list_del(&waiter.list);10511051- if (list_empty(&sem->wait_list)) {10521052- atomic_long_andnot(RWSEM_FLAG_WAITERS|RWSEM_FLAG_HANDOFF,10531053- &sem->count);10541054- }10051005+ rwsem_del_waiter(sem, &waiter);10551006 raw_spin_unlock_irq(&sem->wait_lock);10561007 __set_current_state(TASK_RUNNING);10571008 lockevent_inc(rwsem_rlock_fail);···10611020rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)10621021{10631022 long count;10641064- enum writer_wait_state wstate;10651023 struct rwsem_waiter waiter;10661066- struct rw_semaphore *ret = sem;10671024 DEFINE_WAKE_Q(wake_q);1068102510691026 /* do optimistic spinning and steal lock if possible */···10771038 waiter.task = current;10781039 waiter.type = RWSEM_WAITING_FOR_WRITE;10791040 waiter.timeout = jiffies + RWSEM_WAIT_TIMEOUT;10411041+ waiter.handoff_set = false;1080104210811043 raw_spin_lock_irq(&sem->wait_lock);10821082-10831083- /* account for this before adding a new element to the list */10841084- wstate = list_empty(&sem->wait_list) ? 
WRITER_FIRST : WRITER_NOT_FIRST;10851085-10861086- list_add_tail(&waiter.list, &sem->wait_list);10441044+ rwsem_add_waiter(sem, &waiter);1087104510881046 /* we're now waiting on the lock */10891089- if (wstate == WRITER_NOT_FIRST) {10471047+ if (rwsem_first_waiter(sem) != &waiter) {10901048 count = atomic_long_read(&sem->count);1091104910921050 /*···11191083 /* wait until we successfully acquire the lock */11201084 set_current_state(state);11211085 for (;;) {11221122- if (rwsem_try_write_lock(sem, wstate)) {10861086+ if (rwsem_try_write_lock(sem, &waiter)) {11231087 /* rwsem_try_write_lock() implies ACQUIRE on success */11241088 break;11251089 }1126109011271091 raw_spin_unlock_irq(&sem->wait_lock);10921092+10931093+ if (signal_pending_state(state, current))10941094+ goto out_nolock;1128109511291096 /*11301097 * After setting the handoff bit and failing to acquire···11371098 * In this case, we attempt to acquire the lock again11381099 * without sleeping.11391100 */11401140- if (wstate == WRITER_HANDOFF) {11011101+ if (waiter.handoff_set) {11411102 enum owner_state owner_state;1142110311431104 preempt_disable();···11481109 goto trylock_again;11491110 }1150111111511151- /* Block until there are no active lockers. 
*/11521152- for (;;) {11531153- if (signal_pending_state(state, current))11541154- goto out_nolock;11551155-11561156- schedule();11571157- lockevent_inc(rwsem_sleep_writer);11581158- set_current_state(state);11591159- /*11601160- * If HANDOFF bit is set, unconditionally do11611161- * a trylock.11621162- */11631163- if (wstate == WRITER_HANDOFF)11641164- break;11651165-11661166- if ((wstate == WRITER_NOT_FIRST) &&11671167- (rwsem_first_waiter(sem) == &waiter))11681168- wstate = WRITER_FIRST;11691169-11701170- count = atomic_long_read(&sem->count);11711171- if (!(count & RWSEM_LOCK_MASK))11721172- break;11731173-11741174- /*11751175- * The setting of the handoff bit is deferred11761176- * until rwsem_try_write_lock() is called.11771177- */11781178- if ((wstate == WRITER_FIRST) && (rt_task(current) ||11791179- time_after(jiffies, waiter.timeout))) {11801180- wstate = WRITER_HANDOFF;11811181- lockevent_inc(rwsem_wlock_handoff);11821182- break;11831183- }11841184- }11121112+ schedule();11131113+ lockevent_inc(rwsem_sleep_writer);11141114+ set_current_state(state);11851115trylock_again:11861116 raw_spin_lock_irq(&sem->wait_lock);11871117 }11881118 __set_current_state(TASK_RUNNING);11891189- list_del(&waiter.list);11901119 raw_spin_unlock_irq(&sem->wait_lock);11911120 lockevent_inc(rwsem_wlock);11921192-11931193- return ret;11211121+ return sem;1194112211951123out_nolock:11961124 __set_current_state(TASK_RUNNING);11971125 raw_spin_lock_irq(&sem->wait_lock);11981198- list_del(&waiter.list);11991199-12001200- if (unlikely(wstate == WRITER_HANDOFF))12011201- atomic_long_add(-RWSEM_FLAG_HANDOFF, &sem->count);12021202-12031203- if (list_empty(&sem->wait_list))12041204- atomic_long_andnot(RWSEM_FLAG_WAITERS, &sem->count);12051205- else11261126+ rwsem_del_waiter(sem, &waiter);11271127+ if (!list_empty(&sem->wait_list))12061128 rwsem_mark_wake(sem, RWSEM_WAKE_ANY, &wake_q);12071129 raw_spin_unlock_irq(&sem->wait_lock);12081130 wake_up_q(&wake_q);12091131 
lockevent_inc(rwsem_wlock_fail);12101210-12111132 return ERR_PTR(-EINTR);12121133}12131134···1248124912491250 DEBUG_RWSEMS_WARN_ON(sem->magic != sem, sem);1250125112511251- /*12521252- * Optimize for the case when the rwsem is not locked at all.12531253- */12541254- tmp = RWSEM_UNLOCKED_VALUE;12551255- do {12521252+ tmp = atomic_long_read(&sem->count);12531253+ while (!(tmp & RWSEM_READ_FAILED_MASK)) {12561254 if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,12571257- tmp + RWSEM_READER_BIAS)) {12551255+ tmp + RWSEM_READER_BIAS)) {12581256 rwsem_set_reader_owned(sem);12591257 return 1;12601258 }12611261- } while (!(tmp & RWSEM_READ_FAILED_MASK));12591259+ }12621260 return 0;12631261}12641262
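The rewritten rwsem_try_write_lock() folds the old WRITER_HANDOFF state machine into one decision: a waiter that finds the lock held sets the handoff bit only if no handoff is already pending and it is either a realtime task or has waited past its timeout. A schematic distillation of that condition (illustrative function; `time_after()` wrap-around handling is omitted for brevity):

```c
#include <assert.h>
#include <stdbool.h>

/* Schematic sketch of the handoff decision in rwsem_try_write_lock():
 * set the handoff bit only when the lock is held, no handoff is
 * already pending, and the waiter is RT or has timed out waiting. */
static bool should_set_handoff(bool lock_held, bool handoff_pending,
                               bool rt_task, unsigned long now,
                               unsigned long timeout)
{
    if (!lock_held || handoff_pending)
        return false;
    return rt_task || now > timeout;
}
```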
-4
kernel/sched/core.c
···86198619 idle->flags |= PF_IDLE | PF_KTHREAD | PF_NO_SETAFFINITY;86208620 kthread_set_per_cpu(idle, cpu);8621862186228622- scs_task_reset(idle);86238623- kasan_unpoison_task_stack(idle);86248624-86258622#ifdef CONFIG_SMP86268623 /*86278624 * It's possible that init_idle() gets called multiple times on a task,···87748777 finish_arch_post_lock_switch();87758778 }8776877987778777- scs_task_reset(current);87788780 /* finish_cpu(), as ran on the BP, will clean up the active_mm state */87798781}87808782
···1515#include <linux/jhash.h>1616#include <linux/slab.h>1717#include <linux/sort.h>1818+#include <linux/kmemleak.h>18191920#include "tracing_map.h"2021#include "trace.h"···308307 for (i = 0; i < a->n_pages; i++) {309308 if (!a->pages[i])310309 break;310310+ kmemleak_free(a->pages[i]);311311 free_page((unsigned long)a->pages[i]);312312 }313313···344342 a->pages[i] = (void *)get_zeroed_page(GFP_KERNEL);345343 if (!a->pages[i])346344 goto free;345345+ kmemleak_alloc(a->pages[i], PAGE_SIZE, 1, GFP_KERNEL);347346 }348347 out:349348 return a;
+6-6
lib/siphash.c
···4949 SIPROUND; \5050 return (v0 ^ v1) ^ (v2 ^ v3);51515252+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS5253u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key)5354{5455 const u8 *end = data + len - (len % sizeof(u64));···8180 POSTAMBLE8281}8382EXPORT_SYMBOL(__siphash_aligned);8383+#endif84848585-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS8685u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key)8786{8887 const u8 *end = data + len - (len % sizeof(u64));···114113 POSTAMBLE115114}116115EXPORT_SYMBOL(__siphash_unaligned);117117-#endif118116119117/**120118 * siphash_1u64 - compute 64-bit siphash PRF value of a u64···250250 HSIPROUND; \251251 return (v0 ^ v1) ^ (v2 ^ v3);252252253253+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS253254u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)254255{255256 const u8 *end = data + len - (len % sizeof(u64));···281280 HPOSTAMBLE282281}283282EXPORT_SYMBOL(__hsiphash_aligned);283283+#endif284284285285-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS286285u32 __hsiphash_unaligned(const void *data, size_t len,287286 const hsiphash_key_t *key)288287{···314313 HPOSTAMBLE315314}316315EXPORT_SYMBOL(__hsiphash_unaligned);317317-#endif318316319317/**320318 * hsiphash_1u32 - compute 64-bit hsiphash PRF value of a u32···418418 HSIPROUND; \419419 return v1 ^ v3;420420421421+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS421422u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)422423{423424 const u8 *end = data + len - (len % sizeof(u32));···439438 HPOSTAMBLE440439}441440EXPORT_SYMBOL(__hsiphash_aligned);441441+#endif442442443443-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS444443u32 __hsiphash_unaligned(const void *data, size_t len,445444 const hsiphash_key_t *key)446445{···462461 HPOSTAMBLE463462}464463EXPORT_SYMBOL(__hsiphash_unaligned);465465-#endif466464467465/**468466 * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
+4-1
net/core/dev.c
···40934093 if (dev->flags & IFF_UP) {40944094 int cpu = smp_processor_id(); /* ok because BHs are off */4095409540964096- if (txq->xmit_lock_owner != cpu) {40964096+ /* Other cpus might concurrently change txq->xmit_lock_owner40974097+ * to -1 or to their cpu id, but not to our id.40984098+ */40994099+ if (READ_ONCE(txq->xmit_lock_owner) != cpu) {40974100 if (dev_xmit_recursion())40984101 goto recursion_alert;40994102
···409409 goto err;410410411411 /* Find the output device */412412- out_dev = rcu_dereference(nh->nh_dev);412412+ out_dev = nh->nh_dev;413413 if (!mpls_output_possible(out_dev))414414 goto tx_err;415415···698698 (dev->addr_len != nh->nh_via_alen))699699 goto errout;700700701701- RCU_INIT_POINTER(nh->nh_dev, dev);701701+ nh->nh_dev = dev;702702703703 if (!(dev->flags & IFF_UP)) {704704 nh->nh_flags |= RTNH_F_DEAD;···14911491 kfree(mdev);14921492}1493149314941494-static void mpls_ifdown(struct net_device *dev, int event)14941494+static int mpls_ifdown(struct net_device *dev, int event)14951495{14961496 struct mpls_route __rcu **platform_label;14971497 struct net *net = dev_net(dev);14981498- u8 alive, deleted;14991498 unsigned index;1500149915011500 platform_label = rtnl_dereference(net->mpls.platform_label);15021501 for (index = 0; index < net->mpls.platform_labels; index++) {15031502 struct mpls_route *rt = rtnl_dereference(platform_label[index]);15031503+ bool nh_del = false;15041504+ u8 alive = 0;1504150515051506 if (!rt)15061507 continue;1507150815081508- alive = 0;15091509- deleted = 0;15091509+ if (event == NETDEV_UNREGISTER) {15101510+ u8 deleted = 0;15111511+15121512+ for_nexthops(rt) {15131513+ if (!nh->nh_dev || nh->nh_dev == dev)15141514+ deleted++;15151515+ if (nh->nh_dev == dev)15161516+ nh_del = true;15171517+ } endfor_nexthops(rt);15181518+15191519+ /* if there are no more nexthops, delete the route */15201520+ if (deleted == rt->rt_nhn) {15211521+ mpls_route_update(net, index, NULL, NULL);15221522+ continue;15231523+ }15241524+15251525+ if (nh_del) {15261526+ size_t size = sizeof(*rt) + rt->rt_nhn *15271527+ rt->rt_nh_size;15281528+ struct mpls_route *orig = rt;15291529+15301530+ rt = kmalloc(size, GFP_KERNEL);15311531+ if (!rt)15321532+ return -ENOMEM;15331533+ memcpy(rt, orig, size);15341534+ }15351535+ }15361536+15101537 change_nexthops(rt) {15111538 unsigned int nh_flags = nh->nh_flags;1512153915131513- if (rtnl_dereference(nh->nh_dev) != 
dev)15401540+ if (nh->nh_dev != dev)15141541 goto next;1515154215161543 switch (event) {···15501523 break;15511524 }15521525 if (event == NETDEV_UNREGISTER)15531553- RCU_INIT_POINTER(nh->nh_dev, NULL);15261526+ nh->nh_dev = NULL;1554152715551528 if (nh->nh_flags != nh_flags)15561529 WRITE_ONCE(nh->nh_flags, nh_flags);15571530next:15581531 if (!(nh_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN)))15591532 alive++;15601560- if (!rtnl_dereference(nh->nh_dev))15611561- deleted++;15621533 } endfor_nexthops(rt);1563153415641535 WRITE_ONCE(rt->rt_nhn_alive, alive);1565153615661566- /* if there are no more nexthops, delete the route */15671567- if (event == NETDEV_UNREGISTER && deleted == rt->rt_nhn)15681568- mpls_route_update(net, index, NULL, NULL);15371537+ if (nh_del)15381538+ mpls_route_update(net, index, rt, NULL);15691539 }15401540+15411541+ return 0;15701542}1571154315721544static void mpls_ifup(struct net_device *dev, unsigned int flags)···15851559 alive = 0;15861560 change_nexthops(rt) {15871561 unsigned int nh_flags = nh->nh_flags;15881588- struct net_device *nh_dev =15891589- rtnl_dereference(nh->nh_dev);1590156215911563 if (!(nh_flags & flags)) {15921564 alive++;15931565 continue;15941566 }15951595- if (nh_dev != dev)15671567+ if (nh->nh_dev != dev)15961568 continue;15971569 alive++;15981570 nh_flags &= ~flags;···16211597 return NOTIFY_OK;1622159816231599 switch (event) {16001600+ int err;16011601+16241602 case NETDEV_DOWN:16251625- mpls_ifdown(dev, event);16031603+ err = mpls_ifdown(dev, event);16041604+ if (err)16051605+ return notifier_from_errno(err);16261606 break;16271607 case NETDEV_UP:16281608 flags = dev_get_flags(dev);···16371609 break;16381610 case NETDEV_CHANGE:16391611 flags = dev_get_flags(dev);16401640- if (flags & (IFF_RUNNING | IFF_LOWER_UP))16121612+ if (flags & (IFF_RUNNING | IFF_LOWER_UP)) {16411613 mpls_ifup(dev, RTNH_F_DEAD | RTNH_F_LINKDOWN);16421642- else16431643- mpls_ifdown(dev, event);16141614+ } else {16151615+ err = mpls_ifdown(dev, 
event);16161616+ if (err)16171617+ return notifier_from_errno(err);16181618+ }16441619 break;16451620 case NETDEV_UNREGISTER:16461646- mpls_ifdown(dev, event);16211621+ err = mpls_ifdown(dev, event);16221622+ if (err)16231623+ return notifier_from_errno(err);16471624 mdev = mpls_dev_get(dev);16481625 if (mdev) {16491626 mpls_dev_sysctl_unregister(dev, mdev);···16591626 case NETDEV_CHANGENAME:16601627 mdev = mpls_dev_get(dev);16611628 if (mdev) {16621662- int err;16631663-16641629 mpls_dev_sysctl_unregister(dev, mdev);16651630 err = mpls_dev_sysctl_register(dev, mdev);16661631 if (err)···20251994 nla_put_via(skb, nh->nh_via_table, mpls_nh_via(rt, nh),20261995 nh->nh_via_alen))20271996 goto nla_put_failure;20282028- dev = rtnl_dereference(nh->nh_dev);19971997+ dev = nh->nh_dev;20291998 if (dev && nla_put_u32(skb, RTA_OIF, dev->ifindex))20301999 goto nla_put_failure;20312000 if (nh->nh_flags & RTNH_F_LINKDOWN)···20432012 goto nla_put_failure;2044201320452014 for_nexthops(rt) {20462046- dev = rtnl_dereference(nh->nh_dev);20152015+ dev = nh->nh_dev;20472016 if (!dev)20482017 continue;20492018···21542123static bool mpls_rt_uses_dev(struct mpls_route *rt,21552124 const struct net_device *dev)21562125{21572157- struct net_device *nh_dev;21582158-21592126 if (rt->rt_nhn == 1) {21602127 struct mpls_nh *nh = rt->rt_nh;2161212821622162- nh_dev = rtnl_dereference(nh->nh_dev);21632163- if (dev == nh_dev)21292129+ if (nh->nh_dev == dev)21642130 return true;21652131 } else {21662132 for_nexthops(rt) {21672167- nh_dev = rtnl_dereference(nh->nh_dev);21682168- if (nh_dev == dev)21332133+ if (nh->nh_dev == dev)21692134 return true;21702135 } endfor_nexthops(rt);21712136 }···22492222 size_t nhsize = 0;2250222322512224 for_nexthops(rt) {22522252- if (!rtnl_dereference(nh->nh_dev))22252225+ if (!nh->nh_dev)22532226 continue;22542227 nhsize += nla_total_size(sizeof(struct rtnexthop));22552228 /* RTA_VIA */···24952468 nla_put_via(skb, nh->nh_via_table, mpls_nh_via(rt, nh),24962469 
nh->nh_via_alen))24972470 goto nla_put_failure;24982498- dev = rtnl_dereference(nh->nh_dev);24712471+ dev = nh->nh_dev;24992472 if (dev && nla_put_u32(skb, RTA_OIF, dev->ifindex))25002473 goto nla_put_failure;25012474···25342507 rt0 = mpls_rt_alloc(1, lo->addr_len, 0);25352508 if (IS_ERR(rt0))25362509 goto nort0;25372537- RCU_INIT_POINTER(rt0->rt_nh->nh_dev, lo);25102510+ rt0->rt_nh->nh_dev = lo;25382511 rt0->rt_protocol = RTPROT_KERNEL;25392512 rt0->rt_payload_type = MPT_IPV4;25402513 rt0->rt_ttl_propagate = MPLS_TTL_PROP_DEFAULT;···25482521 rt2 = mpls_rt_alloc(1, lo->addr_len, 0);25492522 if (IS_ERR(rt2))25502523 goto nort2;25512551- RCU_INIT_POINTER(rt2->rt_nh->nh_dev, lo);25242524+ rt2->rt_nh->nh_dev = lo;25522525 rt2->rt_protocol = RTPROT_KERNEL;25532526 rt2->rt_payload_type = MPT_IPV6;25542527 rt2->rt_ttl_propagate = MPLS_TTL_PROP_DEFAULT;
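With the `__rcu` annotation dropped from nh_dev, mpls_ifdown() can no longer clear device pointers in place under readers; instead it modifies a private copy of the route and publishes it via mpls_route_update(). A schematic userspace sketch of that copy-before-modify pattern (the struct layout and helper are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative route with up to four nexthop device ids; 0 == none. */
struct route {
    int nh_dev[4];
    int n;
};

/* Copy the route, clear the departing device in the copy only, and
 * return the copy for the caller to publish. Readers traversing the
 * original never observe a half-updated object. */
static struct route *route_drop_dev(const struct route *orig, int dev)
{
    struct route *rt = malloc(sizeof(*rt));
    if (!rt)
        return NULL;
    memcpy(rt, orig, sizeof(*rt));
    for (int i = 0; i < rt->n; i++)
        if (rt->nh_dev[i] == dev)
            rt->nh_dev[i] = 0;
    return rt;
}
```

This is also why mpls_ifdown() now returns an error: the copy allocation can fail, and the netdev notifier must be able to report that.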
+1-1
net/mpls/internal.h
···8787};88888989struct mpls_nh { /* next hop label forwarding entry */9090- struct net_device __rcu *nh_dev;9090+ struct net_device *nh_dev;91919292 /* nh_flags is accessed under RCU in the packet path; it is9393 * modified handling netdev events with rtnl lock held
+5
net/netlink/af_netlink.c
···18481848 if (msg->msg_flags & MSG_OOB)18491849 return -EOPNOTSUPP;1850185018511851+ if (len == 0) {18521852+ pr_warn_once("Zero length message leads to an empty skb\n");18531853+ return -ENODATA;18541854+ }18551855+18511856 err = scm_send(sock, msg, &scm, true);18521857 if (err < 0)18531858 return err;
···135135 return bundle;136136}137137138138+static void rxrpc_free_bundle(struct rxrpc_bundle *bundle)139139+{140140+ rxrpc_put_peer(bundle->params.peer);141141+ kfree(bundle);142142+}143143+138144void rxrpc_put_bundle(struct rxrpc_bundle *bundle)139145{140146 unsigned int d = bundle->debug_id;141147 unsigned int u = atomic_dec_return(&bundle->usage);142148143149 _debug("PUT B=%x %u", d, u);144144- if (u == 0) {145145- rxrpc_put_peer(bundle->params.peer);146146- kfree(bundle);147147- }150150+ if (u == 0)151151+ rxrpc_free_bundle(bundle);148152}149153150154/*···332328 return candidate;333329334330found_bundle_free:335335- kfree(candidate);331331+ rxrpc_free_bundle(candidate);336332found_bundle:337333 rxrpc_get_bundle(bundle);338334 spin_unlock(&local->client_bundles_lock);
+9-5
net/rxrpc/peer_object.c
···299299 return peer;300300}301301302302+static void rxrpc_free_peer(struct rxrpc_peer *peer)303303+{304304+ rxrpc_put_local(peer->local);305305+ kfree_rcu(peer, rcu);306306+}307307+302308/*303309 * Set up a new incoming peer. There shouldn't be any other matching peers304310 * since we've already done a search in the list from the non-reentrant context···371365 spin_unlock_bh(&rxnet->peer_hash_lock);372366373367 if (peer)374374- kfree(candidate);368368+ rxrpc_free_peer(candidate);375369 else376370 peer = candidate;377371 }···426420 list_del_init(&peer->keepalive_link);427421 spin_unlock_bh(&rxnet->peer_hash_lock);428422429429- rxrpc_put_local(peer->local);430430- kfree_rcu(peer, rcu);423423+ rxrpc_free_peer(peer);431424}432425433426/*···462457 if (n == 0) {463458 hash_del_rcu(&peer->hash_link);464459 list_del_init(&peer->keepalive_link);465465- rxrpc_put_local(peer->local);466466- kfree_rcu(peer, rcu);460460+ rxrpc_free_peer(peer);467461 }468462}469463
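The point of factoring rxrpc_free_peer() out is that every disposal path, including the error path that previously called bare kfree(), now drops the peer's reference on its local endpoint before freeing. A schematic sketch of the pattern (types and helpers are illustrative, not the rxrpc API):

```c
#include <assert.h>
#include <stdlib.h>

struct local { int refs; };
struct peer  { struct local *local; };

/* Allocate a peer that holds one reference on its local endpoint. */
static struct peer *peer_alloc(struct local *l)
{
    struct peer *p = malloc(sizeof(*p));
    if (p) {
        l->refs++;
        p->local = l;
    }
    return p;
}

/* Single teardown helper: drop held references, then free. Routing
 * every exit path through this prevents the reference leak that a
 * direct free() on an error path would cause. */
static void peer_free(struct peer *p)
{
    p->local->refs--;
    free(p);
}
```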
+6-2
net/smc/smc_close.c
···195195 int old_state;196196 long timeout;197197 int rc = 0;198198+ int rc1 = 0;198199199200 timeout = current->flags & PF_EXITING ?200201 0 : sock_flag(sk, SOCK_LINGER) ?···233232 /* actively shutdown clcsock before peer close it,234233 * prevent peer from entering TIME_WAIT state.235234 */236236- if (smc->clcsock && smc->clcsock->sk)237237- rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);235235+ if (smc->clcsock && smc->clcsock->sk) {236236+ rc1 = kernel_sock_shutdown(smc->clcsock,237237+ SHUT_RDWR);238238+ rc = rc ? rc : rc1;239239+ }238240 } else {239241 /* peer event has changed the state */240242 goto again;
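The `rc = rc ? rc : rc1;` line above is the common keep-the-first-error idiom: a later clcsock shutdown failure must not overwrite an earlier, more meaningful error code. Isolated as a tiny helper (hypothetical name):

```c
#include <assert.h>

/* Keep the first nonzero (error) code; later results only matter
 * when nothing has failed yet. */
static int first_error(int rc, int rc1)
{
    return rc ? rc : rc1;
}
```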
+3-4
net/smc/smc_core.c
···625625void smc_lgr_cleanup_early(struct smc_connection *conn)626626{627627 struct smc_link_group *lgr = conn->lgr;628628- struct list_head *lgr_list;629628 spinlock_t *lgr_lock;630629631630 if (!lgr)632631 return;633632634633 smc_conn_free(conn);635635- lgr_list = smc_lgr_list_head(lgr, &lgr_lock);634634+ smc_lgr_list_head(lgr, &lgr_lock);636635 spin_lock_bh(lgr_lock);637636 /* do not use this link group for new connections */638638- if (!list_empty(lgr_list))639639- list_del_init(lgr_list);637637+ if (!list_empty(&lgr->list))638638+ list_del_init(&lgr->list);640639 spin_unlock_bh(lgr_lock);641640 __smc_lgr_terminate(lgr, true);642641}
···438438#define for_each_hda_codec_node(nid, codec) \439439 for ((nid) = (codec)->core.start_nid; (nid) < (codec)->core.end_nid; (nid)++)440440441441+/* Set the codec power_state flag to indicate to allow unsol event handling;442442+ * see hda_codec_unsol_event() in hda_bind.c. Calling this might confuse the443443+ * state tracking, so use with care.444444+ */445445+static inline void snd_hda_codec_allow_unsol_events(struct hda_codec *codec)446446+{447447+ codec->core.dev.power.power_state = PMSG_ON;448448+}449449+441450/*442451 * get widget capabilities443452 */
+5
sound/pci/hda/patch_cs8409.c
···750750 if (cs42l42->full_scale_vol)751751 cs8409_i2c_write(cs42l42, 0x2001, 0x01);752752753753+ /* we have to explicitly allow unsol event handling even during the754754+ * resume phase so that the jack event is processed properly755755+ */756756+ snd_hda_codec_allow_unsol_events(cs42l42->codec);757757+753758 cs42l42_enable_jack_detect(cs42l42);754759}755760
···20202121 if (comp_ids) {2222 for (i = 0; i < comp_ids->num_codecs; i++) {2323- if (acpi_dev_present(comp_ids->codecs[i], NULL, -1))2323+ if (acpi_dev_present(comp_ids->codecs[i], NULL, -1)) {2424+ strscpy(machine->id, comp_ids->codecs[i], ACPI_ID_LEN);2425 return true;2626+ }2527 }2628 }2729
+7
sound/soc/sof/intel/hda.c
···
 		return -EINVAL;
 	}
 
+	/* DAI already configured, reset it before reconfiguring it */
+	if (sof_dai->configured) {
+		ret = hda_ctrl_dai_widget_free(w);
+		if (ret < 0)
+			return ret;
+	}
+
 	config = &sof_dai->dai_config[sof_dai->current_config];
 
 	/*
+146-33
sound/soc/tegra/tegra186_dspk.c
···
 	{ TEGRA186_DSPK_CODEC_CTRL, 0x03000000 },
 };
 
-static int tegra186_dspk_get_control(struct snd_kcontrol *kcontrol,
+static int tegra186_dspk_get_fifo_th(struct snd_kcontrol *kcontrol,
 				     struct snd_ctl_elem_value *ucontrol)
 {
 	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
 	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
 
-	if (strstr(kcontrol->id.name, "FIFO Threshold"))
-		ucontrol->value.integer.value[0] = dspk->rx_fifo_th;
-	else if (strstr(kcontrol->id.name, "OSR Value"))
-		ucontrol->value.integer.value[0] = dspk->osr_val;
-	else if (strstr(kcontrol->id.name, "LR Polarity Select"))
-		ucontrol->value.integer.value[0] = dspk->lrsel;
-	else if (strstr(kcontrol->id.name, "Channel Select"))
-		ucontrol->value.integer.value[0] = dspk->ch_sel;
-	else if (strstr(kcontrol->id.name, "Mono To Stereo"))
-		ucontrol->value.integer.value[0] = dspk->mono_to_stereo;
-	else if (strstr(kcontrol->id.name, "Stereo To Mono"))
-		ucontrol->value.integer.value[0] = dspk->stereo_to_mono;
+	ucontrol->value.integer.value[0] = dspk->rx_fifo_th;
 
 	return 0;
 }
 
-static int tegra186_dspk_put_control(struct snd_kcontrol *kcontrol,
+static int tegra186_dspk_put_fifo_th(struct snd_kcontrol *kcontrol,
 				     struct snd_ctl_elem_value *ucontrol)
 {
 	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
 	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
-	int val = ucontrol->value.integer.value[0];
+	int value = ucontrol->value.integer.value[0];
 
-	if (strstr(kcontrol->id.name, "FIFO Threshold"))
-		dspk->rx_fifo_th = val;
-	else if (strstr(kcontrol->id.name, "OSR Value"))
-		dspk->osr_val = val;
-	else if (strstr(kcontrol->id.name, "LR Polarity Select"))
-		dspk->lrsel = val;
-	else if (strstr(kcontrol->id.name, "Channel Select"))
-		dspk->ch_sel = val;
-	else if (strstr(kcontrol->id.name, "Mono To Stereo"))
-		dspk->mono_to_stereo = val;
-	else if (strstr(kcontrol->id.name, "Stereo To Mono"))
-		dspk->stereo_to_mono = val;
+	if (value == dspk->rx_fifo_th)
+		return 0;
+
+	dspk->rx_fifo_th = value;
+
+	return 1;
+}
+
+static int tegra186_dspk_get_osr_val(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+
+	ucontrol->value.enumerated.item[0] = dspk->osr_val;
 
 	return 0;
+}
+
+static int tegra186_dspk_put_osr_val(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == dspk->osr_val)
+		return 0;
+
+	dspk->osr_val = value;
+
+	return 1;
+}
+
+static int tegra186_dspk_get_pol_sel(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+
+	ucontrol->value.enumerated.item[0] = dspk->lrsel;
+
+	return 0;
+}
+
+static int tegra186_dspk_put_pol_sel(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == dspk->lrsel)
+		return 0;
+
+	dspk->lrsel = value;
+
+	return 1;
+}
+
+static int tegra186_dspk_get_ch_sel(struct snd_kcontrol *kcontrol,
+				    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+
+	ucontrol->value.enumerated.item[0] = dspk->ch_sel;
+
+	return 0;
+}
+
+static int tegra186_dspk_put_ch_sel(struct snd_kcontrol *kcontrol,
+				    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == dspk->ch_sel)
+		return 0;
+
+	dspk->ch_sel = value;
+
+	return 1;
+}
+
+static int tegra186_dspk_get_mono_to_stereo(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+
+	ucontrol->value.enumerated.item[0] = dspk->mono_to_stereo;
+
+	return 0;
+}
+
+static int tegra186_dspk_put_mono_to_stereo(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == dspk->mono_to_stereo)
+		return 0;
+
+	dspk->mono_to_stereo = value;
+
+	return 1;
+}
+
+static int tegra186_dspk_get_stereo_to_mono(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+
+	ucontrol->value.enumerated.item[0] = dspk->stereo_to_mono;
+
+	return 0;
+}
+
+static int tegra186_dspk_put_stereo_to_mono(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);
+	struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == dspk->stereo_to_mono)
+		return 0;
+
+	dspk->stereo_to_mono = value;
+
+	return 1;
 }
 
 static int __maybe_unused tegra186_dspk_runtime_suspend(struct device *dev)
···
 static const struct snd_kcontrol_new tegrat186_dspk_controls[] = {
 	SOC_SINGLE_EXT("FIFO Threshold", SND_SOC_NOPM, 0,
 		       TEGRA186_DSPK_RX_FIFO_DEPTH - 1, 0,
-		       tegra186_dspk_get_control, tegra186_dspk_put_control),
+		       tegra186_dspk_get_fifo_th, tegra186_dspk_put_fifo_th),
 	SOC_ENUM_EXT("OSR Value", tegra186_dspk_osr_enum,
-		     tegra186_dspk_get_control, tegra186_dspk_put_control),
+		     tegra186_dspk_get_osr_val, tegra186_dspk_put_osr_val),
 	SOC_ENUM_EXT("LR Polarity Select", tegra186_dspk_lrsel_enum,
-		     tegra186_dspk_get_control, tegra186_dspk_put_control),
+		     tegra186_dspk_get_pol_sel, tegra186_dspk_put_pol_sel),
 	SOC_ENUM_EXT("Channel Select", tegra186_dspk_ch_sel_enum,
-		     tegra186_dspk_get_control, tegra186_dspk_put_control),
+		     tegra186_dspk_get_ch_sel, tegra186_dspk_put_ch_sel),
 	SOC_ENUM_EXT("Mono To Stereo", tegra186_dspk_mono_conv_enum,
-		     tegra186_dspk_get_control, tegra186_dspk_put_control),
+		     tegra186_dspk_get_mono_to_stereo,
+		     tegra186_dspk_put_mono_to_stereo),
 	SOC_ENUM_EXT("Stereo To Mono", tegra186_dspk_stereo_conv_enum,
-		     tegra186_dspk_get_control, tegra186_dspk_put_control),
+		     tegra186_dspk_get_stereo_to_mono,
+		     tegra186_dspk_put_stereo_to_mono),
 };
 
 static const struct snd_soc_component_driver tegra186_dspk_cmpnt = {
···
 	struct soc_mixer_control *mc =
 		(struct soc_mixer_control *)kcontrol->private_value;
 
+	if (value == bytes_map[mc->reg])
+		return 0;
+
 	if (value >= 0 && value <= 255) {
 		/* update byte map and enable slot */
 		bytes_map[mc->reg] = value;
+7-4
sound/soc/tegra/tegra210_ahub.c
···
 	unsigned int *item = uctl->value.enumerated.item;
 	unsigned int value = e->values[item[0]];
 	unsigned int i, bit_pos, reg_idx = 0, reg_val = 0;
+	int change = 0;
 
 	if (item[0] >= e->items)
 		return -EINVAL;
···
 
 		/* Update widget power if state has changed */
 		if (snd_soc_component_test_bits(cmpnt, update[i].reg,
-						update[i].mask, update[i].val))
-			snd_soc_dapm_mux_update_power(dapm, kctl, item[0], e,
-						      &update[i]);
+						update[i].mask,
+						update[i].val))
+			change |= snd_soc_dapm_mux_update_power(dapm, kctl,
+								item[0], e,
+								&update[i]);
 	}
 
-	return 0;
+	return change;
 }
 
 static struct snd_soc_dai_driver tegra210_ahub_dais[] = {
+3
sound/soc/tegra/tegra210_amx.c
···
 	int reg = mc->reg;
 	int value = ucontrol->value.integer.value[0];
 
+	if (value == bytes_map[reg])
+		return 0;
+
 	if (value >= 0 && value <= 255) {
 		/* Update byte map and enable slot */
 		bytes_map[reg] = value;
+150-36
sound/soc/tegra/tegra210_dmic.c
···
 	return 0;
 }
 
-static int tegra210_dmic_get_control(struct snd_kcontrol *kcontrol,
-				     struct snd_ctl_elem_value *ucontrol)
+static int tegra210_dmic_get_boost_gain(struct snd_kcontrol *kcontrol,
+					struct snd_ctl_elem_value *ucontrol)
 {
 	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
 	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
 
-	if (strstr(kcontrol->id.name, "Boost Gain Volume"))
-		ucontrol->value.integer.value[0] = dmic->boost_gain;
-	else if (strstr(kcontrol->id.name, "Channel Select"))
-		ucontrol->value.integer.value[0] = dmic->ch_select;
-	else if (strstr(kcontrol->id.name, "Mono To Stereo"))
-		ucontrol->value.integer.value[0] = dmic->mono_to_stereo;
-	else if (strstr(kcontrol->id.name, "Stereo To Mono"))
-		ucontrol->value.integer.value[0] = dmic->stereo_to_mono;
-	else if (strstr(kcontrol->id.name, "OSR Value"))
-		ucontrol->value.integer.value[0] = dmic->osr_val;
-	else if (strstr(kcontrol->id.name, "LR Polarity Select"))
-		ucontrol->value.integer.value[0] = dmic->lrsel;
+	ucontrol->value.integer.value[0] = dmic->boost_gain;
 
 	return 0;
 }
 
-static int tegra210_dmic_put_control(struct snd_kcontrol *kcontrol,
-				     struct snd_ctl_elem_value *ucontrol)
+static int tegra210_dmic_put_boost_gain(struct snd_kcontrol *kcontrol,
+					struct snd_ctl_elem_value *ucontrol)
 {
 	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
 	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
 	int value = ucontrol->value.integer.value[0];
 
-	if (strstr(kcontrol->id.name, "Boost Gain Volume"))
-		dmic->boost_gain = value;
-	else if (strstr(kcontrol->id.name, "Channel Select"))
-		dmic->ch_select = ucontrol->value.integer.value[0];
-	else if (strstr(kcontrol->id.name, "Mono To Stereo"))
-		dmic->mono_to_stereo = value;
-	else if (strstr(kcontrol->id.name, "Stereo To Mono"))
-		dmic->stereo_to_mono = value;
-	else if (strstr(kcontrol->id.name, "OSR Value"))
-		dmic->osr_val = value;
-	else if (strstr(kcontrol->id.name, "LR Polarity Select"))
-		dmic->lrsel = value;
+	if (value == dmic->boost_gain)
+		return 0;
+
+	dmic->boost_gain = value;
+
+	return 1;
+}
+
+static int tegra210_dmic_get_ch_select(struct snd_kcontrol *kcontrol,
+				       struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+
+	ucontrol->value.enumerated.item[0] = dmic->ch_select;
 
 	return 0;
+}
+
+static int tegra210_dmic_put_ch_select(struct snd_kcontrol *kcontrol,
+				       struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == dmic->ch_select)
+		return 0;
+
+	dmic->ch_select = value;
+
+	return 1;
+}
+
+static int tegra210_dmic_get_mono_to_stereo(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+
+	ucontrol->value.enumerated.item[0] = dmic->mono_to_stereo;
+
+	return 0;
+}
+
+static int tegra210_dmic_put_mono_to_stereo(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == dmic->mono_to_stereo)
+		return 0;
+
+	dmic->mono_to_stereo = value;
+
+	return 1;
+}
+
+static int tegra210_dmic_get_stereo_to_mono(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+
+	ucontrol->value.enumerated.item[0] = dmic->stereo_to_mono;
+
+	return 0;
+}
+
+static int tegra210_dmic_put_stereo_to_mono(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == dmic->stereo_to_mono)
+		return 0;
+
+	dmic->stereo_to_mono = value;
+
+	return 1;
+}
+
+static int tegra210_dmic_get_osr_val(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+
+	ucontrol->value.enumerated.item[0] = dmic->osr_val;
+
+	return 0;
+}
+
+static int tegra210_dmic_put_osr_val(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == dmic->osr_val)
+		return 0;
+
+	dmic->osr_val = value;
+
+	return 1;
+}
+
+static int tegra210_dmic_get_pol_sel(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+
+	ucontrol->value.enumerated.item[0] = dmic->lrsel;
+
+	return 0;
+}
+
+static int tegra210_dmic_put_pol_sel(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == dmic->lrsel)
+		return 0;
+
+	dmic->lrsel = value;
+
+	return 1;
 }
 
 static const struct snd_soc_dai_ops tegra210_dmic_dai_ops = {
···
 
 static const struct snd_kcontrol_new tegra210_dmic_controls[] = {
 	SOC_SINGLE_EXT("Boost Gain Volume", 0, 0, MAX_BOOST_GAIN, 0,
-		       tegra210_dmic_get_control, tegra210_dmic_put_control),
+		       tegra210_dmic_get_boost_gain,
+		       tegra210_dmic_put_boost_gain),
 	SOC_ENUM_EXT("Channel Select", tegra210_dmic_ch_enum,
-		     tegra210_dmic_get_control, tegra210_dmic_put_control),
+		     tegra210_dmic_get_ch_select, tegra210_dmic_put_ch_select),
 	SOC_ENUM_EXT("Mono To Stereo",
-		     tegra210_dmic_mono_conv_enum, tegra210_dmic_get_control,
-		     tegra210_dmic_put_control),
+		     tegra210_dmic_mono_conv_enum,
+		     tegra210_dmic_get_mono_to_stereo,
+		     tegra210_dmic_put_mono_to_stereo),
 	SOC_ENUM_EXT("Stereo To Mono",
-		     tegra210_dmic_stereo_conv_enum, tegra210_dmic_get_control,
-		     tegra210_dmic_put_control),
+		     tegra210_dmic_stereo_conv_enum,
+		     tegra210_dmic_get_stereo_to_mono,
+		     tegra210_dmic_put_stereo_to_mono),
 	SOC_ENUM_EXT("OSR Value", tegra210_dmic_osr_enum,
-		     tegra210_dmic_get_control, tegra210_dmic_put_control),
+		     tegra210_dmic_get_osr_val, tegra210_dmic_put_osr_val),
 	SOC_ENUM_EXT("LR Polarity Select", tegra210_dmic_lrsel_enum,
-		     tegra210_dmic_get_control, tegra210_dmic_put_control),
+		     tegra210_dmic_get_pol_sel, tegra210_dmic_put_pol_sel),
 };
 
 static const struct snd_soc_component_driver tegra210_dmic_compnt = {
+240-84
sound/soc/tegra/tegra210_i2s.c
···
 	return 0;
 }
 
+static int tegra210_i2s_get_loopback(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+
+	ucontrol->value.integer.value[0] = i2s->loopback;
+
+	return 0;
+}
+
+static int tegra210_i2s_put_loopback(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+	int value = ucontrol->value.integer.value[0];
+
+	if (value == i2s->loopback)
+		return 0;
+
+	i2s->loopback = value;
+
+	regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL, I2S_CTRL_LPBK_MASK,
+			   i2s->loopback << I2S_CTRL_LPBK_SHIFT);
+
+	return 1;
+}
+
+static int tegra210_i2s_get_fsync_width(struct snd_kcontrol *kcontrol,
+					struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+
+	ucontrol->value.integer.value[0] = i2s->fsync_width;
+
+	return 0;
+}
+
+static int tegra210_i2s_put_fsync_width(struct snd_kcontrol *kcontrol,
+					struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+	int value = ucontrol->value.integer.value[0];
+
+	if (value == i2s->fsync_width)
+		return 0;
+
+	i2s->fsync_width = value;
+
+	/*
+	 * Frame sync width is used only for FSYNC modes and not
+	 * applicable for LRCK modes. Reset value for this field is "0",
+	 * which means the width is one bit clock wide.
+	 * The width requirement may depend on the codec and in such
+	 * cases mixer control is used to update custom values. A value
+	 * of "N" here means, width is "N + 1" bit clock wide.
+	 */
+	regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,
+			   I2S_CTRL_FSYNC_WIDTH_MASK,
+			   i2s->fsync_width << I2S_FSYNC_WIDTH_SHIFT);
+
+	return 1;
+}
+
+static int tegra210_i2s_cget_stereo_to_mono(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+
+	ucontrol->value.enumerated.item[0] = i2s->stereo_to_mono[I2S_TX_PATH];
+
+	return 0;
+}
+
+static int tegra210_i2s_cput_stereo_to_mono(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == i2s->stereo_to_mono[I2S_TX_PATH])
+		return 0;
+
+	i2s->stereo_to_mono[I2S_TX_PATH] = value;
+
+	return 1;
+}
+
+static int tegra210_i2s_cget_mono_to_stereo(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+
+	ucontrol->value.enumerated.item[0] = i2s->mono_to_stereo[I2S_TX_PATH];
+
+	return 0;
+}
+
+static int tegra210_i2s_cput_mono_to_stereo(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == i2s->mono_to_stereo[I2S_TX_PATH])
+		return 0;
+
+	i2s->mono_to_stereo[I2S_TX_PATH] = value;
+
+	return 1;
+}
+
+static int tegra210_i2s_pget_stereo_to_mono(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+
+	ucontrol->value.enumerated.item[0] = i2s->stereo_to_mono[I2S_RX_PATH];
+
+	return 0;
+}
+
+static int tegra210_i2s_pput_stereo_to_mono(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == i2s->stereo_to_mono[I2S_RX_PATH])
+		return 0;
+
+	i2s->stereo_to_mono[I2S_RX_PATH] = value;
+
+	return 1;
+}
+
+static int tegra210_i2s_pget_mono_to_stereo(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+
+	ucontrol->value.enumerated.item[0] = i2s->mono_to_stereo[I2S_RX_PATH];
+
+	return 0;
+}
+
+static int tegra210_i2s_pput_mono_to_stereo(struct snd_kcontrol *kcontrol,
+					    struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == i2s->mono_to_stereo[I2S_RX_PATH])
+		return 0;
+
+	i2s->mono_to_stereo[I2S_RX_PATH] = value;
+
+	return 1;
+}
+
+static int tegra210_i2s_pget_fifo_th(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+
+	ucontrol->value.integer.value[0] = i2s->rx_fifo_th;
+
+	return 0;
+}
+
+static int tegra210_i2s_pput_fifo_th(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+	int value = ucontrol->value.integer.value[0];
+
+	if (value == i2s->rx_fifo_th)
+		return 0;
+
+	i2s->rx_fifo_th = value;
+
+	return 1;
+}
+
+static int tegra210_i2s_get_bclk_ratio(struct snd_kcontrol *kcontrol,
+				       struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+
+	ucontrol->value.integer.value[0] = i2s->bclk_ratio;
+
+	return 0;
+}
+
+static int tegra210_i2s_put_bclk_ratio(struct snd_kcontrol *kcontrol,
+				       struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
+	int value = ucontrol->value.integer.value[0];
+
+	if (value == i2s->bclk_ratio)
+		return 0;
+
+	i2s->bclk_ratio = value;
+
+	return 1;
+}
+
 static int tegra210_i2s_set_dai_bclk_ratio(struct snd_soc_dai *dai,
 					   unsigned int ratio)
 {
 	struct tegra210_i2s *i2s = snd_soc_dai_get_drvdata(dai);
 
 	i2s->bclk_ratio = ratio;
-
-	return 0;
-}
-
-static int tegra210_i2s_get_control(struct snd_kcontrol *kcontrol,
-				    struct snd_ctl_elem_value *ucontrol)
-{
-	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
-	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
-	long *uctl_val = &ucontrol->value.integer.value[0];
-
-	if (strstr(kcontrol->id.name, "Loopback"))
-		*uctl_val = i2s->loopback;
-	else if (strstr(kcontrol->id.name, "FSYNC Width"))
-		*uctl_val = i2s->fsync_width;
-	else if (strstr(kcontrol->id.name, "Capture Stereo To Mono"))
-		*uctl_val = i2s->stereo_to_mono[I2S_TX_PATH];
-	else if (strstr(kcontrol->id.name, "Capture Mono To Stereo"))
-		*uctl_val = i2s->mono_to_stereo[I2S_TX_PATH];
-	else if (strstr(kcontrol->id.name, "Playback Stereo To Mono"))
-		*uctl_val = i2s->stereo_to_mono[I2S_RX_PATH];
-	else if (strstr(kcontrol->id.name, "Playback Mono To Stereo"))
-		*uctl_val = i2s->mono_to_stereo[I2S_RX_PATH];
-	else if (strstr(kcontrol->id.name, "Playback FIFO Threshold"))
-		*uctl_val = i2s->rx_fifo_th;
-	else if (strstr(kcontrol->id.name, "BCLK Ratio"))
-		*uctl_val = i2s->bclk_ratio;
-
-	return 0;
-}
-
-static int tegra210_i2s_put_control(struct snd_kcontrol *kcontrol,
-				    struct snd_ctl_elem_value *ucontrol)
-{
-	struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
-	struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
-	int value = ucontrol->value.integer.value[0];
-
-	if (strstr(kcontrol->id.name, "Loopback")) {
-		i2s->loopback = value;
-
-		regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,
-				   I2S_CTRL_LPBK_MASK,
-				   i2s->loopback << I2S_CTRL_LPBK_SHIFT);
-
-	} else if (strstr(kcontrol->id.name, "FSYNC Width")) {
-		/*
-		 * Frame sync width is used only for FSYNC modes and not
-		 * applicable for LRCK modes. Reset value for this field is "0",
-		 * which means the width is one bit clock wide.
-		 * The width requirement may depend on the codec and in such
-		 * cases mixer control is used to update custom values. A value
-		 * of "N" here means, width is "N + 1" bit clock wide.
-		 */
-		i2s->fsync_width = value;
-
-		regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,
-				   I2S_CTRL_FSYNC_WIDTH_MASK,
-				   i2s->fsync_width << I2S_FSYNC_WIDTH_SHIFT);
-
-	} else if (strstr(kcontrol->id.name, "Capture Stereo To Mono")) {
-		i2s->stereo_to_mono[I2S_TX_PATH] = value;
-	} else if (strstr(kcontrol->id.name, "Capture Mono To Stereo")) {
-		i2s->mono_to_stereo[I2S_TX_PATH] = value;
-	} else if (strstr(kcontrol->id.name, "Playback Stereo To Mono")) {
-		i2s->stereo_to_mono[I2S_RX_PATH] = value;
-	} else if (strstr(kcontrol->id.name, "Playback Mono To Stereo")) {
-		i2s->mono_to_stereo[I2S_RX_PATH] = value;
-	} else if (strstr(kcontrol->id.name, "Playback FIFO Threshold")) {
-		i2s->rx_fifo_th = value;
-	} else if (strstr(kcontrol->id.name, "BCLK Ratio")) {
-		i2s->bclk_ratio = value;
-	}
 
 	return 0;
 }
···
 			  tegra210_i2s_stereo_conv_text);
 
 static const struct snd_kcontrol_new tegra210_i2s_controls[] = {
-	SOC_SINGLE_EXT("Loopback", 0, 0, 1, 0, tegra210_i2s_get_control,
-		       tegra210_i2s_put_control),
-	SOC_SINGLE_EXT("FSYNC Width", 0, 0, 255, 0, tegra210_i2s_get_control,
-		       tegra210_i2s_put_control),
+	SOC_SINGLE_EXT("Loopback", 0, 0, 1, 0, tegra210_i2s_get_loopback,
+		       tegra210_i2s_put_loopback),
+	SOC_SINGLE_EXT("FSYNC Width", 0, 0, 255, 0,
+		       tegra210_i2s_get_fsync_width,
+		       tegra210_i2s_put_fsync_width),
 	SOC_ENUM_EXT("Capture Stereo To Mono", tegra210_i2s_stereo_conv_enum,
-		     tegra210_i2s_get_control, tegra210_i2s_put_control),
+		     tegra210_i2s_cget_stereo_to_mono,
+		     tegra210_i2s_cput_stereo_to_mono),
 	SOC_ENUM_EXT("Capture Mono To Stereo", tegra210_i2s_mono_conv_enum,
-		     tegra210_i2s_get_control, tegra210_i2s_put_control),
+		     tegra210_i2s_cget_mono_to_stereo,
+		     tegra210_i2s_cput_mono_to_stereo),
 	SOC_ENUM_EXT("Playback Stereo To Mono", tegra210_i2s_stereo_conv_enum,
-		     tegra210_i2s_get_control, tegra210_i2s_put_control),
+		     tegra210_i2s_pget_stereo_to_mono,
+		     tegra210_i2s_pput_stereo_to_mono),
 	SOC_ENUM_EXT("Playback Mono To Stereo", tegra210_i2s_mono_conv_enum,
-		     tegra210_i2s_get_control, tegra210_i2s_put_control),
+		     tegra210_i2s_pget_mono_to_stereo,
+		     tegra210_i2s_pput_mono_to_stereo),
 	SOC_SINGLE_EXT("Playback FIFO Threshold", 0, 0, I2S_RX_FIFO_DEPTH - 1,
-		       0, tegra210_i2s_get_control, tegra210_i2s_put_control),
-	SOC_SINGLE_EXT("BCLK Ratio", 0, 0, INT_MAX, 0, tegra210_i2s_get_control,
-		       tegra210_i2s_put_control),
+		       0, tegra210_i2s_pget_fifo_th, tegra210_i2s_pput_fifo_th),
+	SOC_SINGLE_EXT("BCLK Ratio", 0, 0, INT_MAX, 0,
+		       tegra210_i2s_get_bclk_ratio,
+		       tegra210_i2s_put_bclk_ratio),
 };
 
 static const struct snd_soc_dapm_widget tegra210_i2s_widgets[] = {
+19-7
sound/soc/tegra/tegra210_mixer.c
···
 	return 0;
 }
 
-static int tegra210_mixer_put_gain(struct snd_kcontrol *kcontrol,
-				   struct snd_ctl_elem_value *ucontrol)
+static int tegra210_mixer_apply_gain(struct snd_kcontrol *kcontrol,
+				     struct snd_ctl_elem_value *ucontrol,
+				     bool instant_gain)
 {
 	struct soc_mixer_control *mc =
 		(struct soc_mixer_control *)kcontrol->private_value;
 	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
 	struct tegra210_mixer *mixer = snd_soc_component_get_drvdata(cmpnt);
 	unsigned int reg = mc->reg, id;
-	bool instant_gain = false;
 	int err;
-
-	if (strstr(kcontrol->id.name, "Instant Gain Volume"))
-		instant_gain = true;
 
 	/* Save gain value for specific MIXER input */
 	id = (reg - TEGRA210_MIXER_GAIN_CFG_RAM_ADDR_0) /
 	     TEGRA210_MIXER_GAIN_CFG_RAM_ADDR_STRIDE;
+
+	if (mixer->gain_value[id] == ucontrol->value.integer.value[0])
+		return 0;
 
 	mixer->gain_value[id] = ucontrol->value.integer.value[0];
···
 	}
 
 	return 1;
+}
+
+static int tegra210_mixer_put_gain(struct snd_kcontrol *kcontrol,
+				   struct snd_ctl_elem_value *ucontrol)
+{
+	return tegra210_mixer_apply_gain(kcontrol, ucontrol, false);
+}
+
+static int tegra210_mixer_put_instant_gain(struct snd_kcontrol *kcontrol,
+					   struct snd_ctl_elem_value *ucontrol)
+{
+	return tegra210_mixer_apply_gain(kcontrol, ucontrol, true);
 }
 
 static int tegra210_mixer_set_audio_cif(struct tegra210_mixer *mixer,
···
 	SOC_SINGLE_EXT("RX" #id " Instant Gain Volume",		\
 		       MIXER_GAIN_CFG_RAM_ADDR((id) - 1), 0,	\
 		       0x20000, 0, tegra210_mixer_get_gain,	\
-		       tegra210_mixer_put_gain),
+		       tegra210_mixer_put_instant_gain),
 
 /* Volume controls for all MIXER inputs */
 static const struct snd_kcontrol_new tegra210_mixer_gain_ctls[] = {
sound/soc/tegra/tegra210_sfc.c
···
 	return tegra210_sfc_write_coeff_ram(cmpnt);
 }
 
-static int tegra210_sfc_get_control(struct snd_kcontrol *kcontrol,
+static int tegra210_sfc_iget_stereo_to_mono(struct snd_kcontrol *kcontrol,
 	struct snd_ctl_elem_value *ucontrol)
 {
 	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
 	struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
 
-	if (strstr(kcontrol->id.name, "Input Stereo To Mono"))
-		ucontrol->value.integer.value[0] =
-			sfc->stereo_to_mono[SFC_RX_PATH];
-	else if (strstr(kcontrol->id.name, "Input Mono To Stereo"))
-		ucontrol->value.integer.value[0] =
-			sfc->mono_to_stereo[SFC_RX_PATH];
-	else if (strstr(kcontrol->id.name, "Output Stereo To Mono"))
-		ucontrol->value.integer.value[0] =
-			sfc->stereo_to_mono[SFC_TX_PATH];
-	else if (strstr(kcontrol->id.name, "Output Mono To Stereo"))
-		ucontrol->value.integer.value[0] =
-			sfc->mono_to_stereo[SFC_TX_PATH];
+	ucontrol->value.enumerated.item[0] = sfc->stereo_to_mono[SFC_RX_PATH];
 
 	return 0;
 }
 
-static int tegra210_sfc_put_control(struct snd_kcontrol *kcontrol,
+static int tegra210_sfc_iput_stereo_to_mono(struct snd_kcontrol *kcontrol,
 	struct snd_ctl_elem_value *ucontrol)
 {
 	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
 	struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
-	int value = ucontrol->value.integer.value[0];
+	unsigned int value = ucontrol->value.enumerated.item[0];
 
-	if (strstr(kcontrol->id.name, "Input Stereo To Mono"))
-		sfc->stereo_to_mono[SFC_RX_PATH] = value;
-	else if (strstr(kcontrol->id.name, "Input Mono To Stereo"))
-		sfc->mono_to_stereo[SFC_RX_PATH] = value;
-	else if (strstr(kcontrol->id.name, "Output Stereo To Mono"))
-		sfc->stereo_to_mono[SFC_TX_PATH] = value;
-	else if (strstr(kcontrol->id.name, "Output Mono To Stereo"))
-		sfc->mono_to_stereo[SFC_TX_PATH] = value;
-	else
+	if (value == sfc->stereo_to_mono[SFC_RX_PATH])
 		return 0;
+
+	sfc->stereo_to_mono[SFC_RX_PATH] = value;
+
+	return 1;
+}
+
+static int tegra210_sfc_iget_mono_to_stereo(struct snd_kcontrol *kcontrol,
+	struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
+
+	ucontrol->value.enumerated.item[0] = sfc->mono_to_stereo[SFC_RX_PATH];
+
+	return 0;
+}
+
+static int tegra210_sfc_iput_mono_to_stereo(struct snd_kcontrol *kcontrol,
+	struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == sfc->mono_to_stereo[SFC_RX_PATH])
+		return 0;
+
+	sfc->mono_to_stereo[SFC_RX_PATH] = value;
+
+	return 1;
+}
+
+static int tegra210_sfc_oget_stereo_to_mono(struct snd_kcontrol *kcontrol,
+	struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
+
+	ucontrol->value.enumerated.item[0] = sfc->stereo_to_mono[SFC_TX_PATH];
+
+	return 0;
+}
+
+static int tegra210_sfc_oput_stereo_to_mono(struct snd_kcontrol *kcontrol,
+	struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == sfc->stereo_to_mono[SFC_TX_PATH])
+		return 0;
+
+	sfc->stereo_to_mono[SFC_TX_PATH] = value;
+
+	return 1;
+}
+
+static int tegra210_sfc_oget_mono_to_stereo(struct snd_kcontrol *kcontrol,
+	struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
+
+	ucontrol->value.enumerated.item[0] = sfc->mono_to_stereo[SFC_TX_PATH];
+
+	return 0;
+}
+
+static int tegra210_sfc_oput_mono_to_stereo(struct snd_kcontrol *kcontrol,
+	struct snd_ctl_elem_value *ucontrol)
+{
+	struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
+	struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
+	unsigned int value = ucontrol->value.enumerated.item[0];
+
+	if (value == sfc->mono_to_stereo[SFC_TX_PATH])
+		return 0;
+
+	sfc->mono_to_stereo[SFC_TX_PATH] = value;
 
 	return 1;
 }
···
 
 static const struct snd_kcontrol_new tegra210_sfc_controls[] = {
 	SOC_ENUM_EXT("Input Stereo To Mono", tegra210_sfc_stereo_conv_enum,
-		tegra210_sfc_get_control, tegra210_sfc_put_control),
+		tegra210_sfc_iget_stereo_to_mono,
+		tegra210_sfc_iput_stereo_to_mono),
 	SOC_ENUM_EXT("Input Mono To Stereo", tegra210_sfc_mono_conv_enum,
-		tegra210_sfc_get_control, tegra210_sfc_put_control),
+		tegra210_sfc_iget_mono_to_stereo,
+		tegra210_sfc_iput_mono_to_stereo),
 	SOC_ENUM_EXT("Output Stereo To Mono", tegra210_sfc_stereo_conv_enum,
-		tegra210_sfc_get_control, tegra210_sfc_put_control),
+		tegra210_sfc_oget_stereo_to_mono,
+		tegra210_sfc_oput_stereo_to_mono),
 	SOC_ENUM_EXT("Output Mono To Stereo", tegra210_sfc_mono_conv_enum,
-		tegra210_sfc_get_control, tegra210_sfc_put_control),
+		tegra210_sfc_oget_mono_to_stereo,
+		tegra210_sfc_oput_mono_to_stereo),
 };
 
 static const struct snd_soc_component_driver tegra210_sfc_cmpnt = {
+1-21
tools/include/linux/kernel.h
···
 #include <assert.h>
 #include <linux/build_bug.h>
 #include <linux/compiler.h>
+#include <linux/math.h>
 #include <endian.h>
 #include <byteswap.h>
 
 #ifndef UINT_MAX
 #define UINT_MAX (~0U)
 #endif
-
-#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
 
 #define PERF_ALIGN(x, a)	__PERF_ALIGN_MASK(x, (typeof(x))(a)-1)
 #define __PERF_ALIGN_MASK(x, mask)	(((x)+(mask))&~(mask))
···
 	typeof(y) _min2 = (y);			\
 	(void) (&_min1 == &_min2);		\
 	_min1 < _min2 ? _min1 : _min2; })
-#endif
-
-#ifndef roundup
-#define roundup(x, y) (				\
-{						\
-	const typeof(y) __y = y;		\
-	(((x) + (__y - 1)) / __y) * __y;	\
-}						\
-)
 #endif
 
 #ifndef BUG_ON
···
 int scnprintf_pad(char * buf, size_t size, const char * fmt, ...);
 
 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
-
-/*
- * This looks more complex than it should be. But we need to
- * get the type for the ~ right in round_down (it needs to be
- * as wide as the result!), and we want to evaluate the macro
- * arguments just once each.
- */
-#define __round_mask(x, y) ((__typeof__(x))((y)-1))
-#define round_up(x, y) ((((x)-1) | __round_mask(x, y))+1)
-#define round_down(x, y) ((x) & ~__round_mask(x, y))
 
 #define current_gfp_context(k) 0
 #define synchronize_rcu()
+25
tools/include/linux/math.h
···
+#ifndef _TOOLS_MATH_H
+#define _TOOLS_MATH_H
+
+/*
+ * This looks more complex than it should be. But we need to
+ * get the type for the ~ right in round_down (it needs to be
+ * as wide as the result!), and we want to evaluate the macro
+ * arguments just once each.
+ */
+#define __round_mask(x, y) ((__typeof__(x))((y)-1))
+#define round_up(x, y) ((((x)-1) | __round_mask(x, y))+1)
+#define round_down(x, y) ((x) & ~__round_mask(x, y))
+
+#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
+
+#ifndef roundup
+#define roundup(x, y) (				\
+{						\
+	const typeof(y) __y = y;		\
+	(((x) + (__y - 1)) / __y) * __y;	\
+}						\
+)
+#endif
+
+#endif
+3
tools/testing/radix-tree/linux/lockdep.h
···
 #ifndef _LINUX_LOCKDEP_H
 #define _LINUX_LOCKDEP_H
+
+#include <linux/spinlock.h>
+
 struct lock_class_key {
 	unsigned int a;
 };
tools/testing/selftests/wireguard/netns.sh
···
 n1 wg set wg0 peer "$pub2" endpoint 192.168.241.2:7
 ip2 link del wg0
 ip2 link del wg1
-! n0 ping -W 1 -c 10 -f 192.168.241.2 || false # Should not crash kernel
+read _ _ tx_bytes_before < <(n0 wg show wg1 transfer)
+! n0 ping -W 1 -c 10 -f 192.168.241.2 || false
+sleep 1
+read _ _ tx_bytes_after < <(n0 wg show wg1 transfer)
+(( tx_bytes_after - tx_bytes_before < 70000 ))
 
 ip0 link del wg1
 ip1 link del wg0
···
 kill $ncat_pid
 ip0 link del wg0
 
+# Ensure that dst_cache references don't outlive netns lifetime
+ip1 link add dev wg0 type wireguard
+ip2 link add dev wg0 type wireguard
+configure_peers
+ip1 link add veth1 type veth peer name veth2
+ip1 link set veth2 netns $netns2
+ip1 addr add fd00:aa::1/64 dev veth1
+ip2 addr add fd00:aa::2/64 dev veth2
+ip1 link set veth1 up
+ip2 link set veth2 up
+waitiface $netns1 veth1
+waitiface $netns2 veth2
+ip1 -6 route add default dev veth1 via fd00:aa::2
+ip2 -6 route add default dev veth2 via fd00:aa::1
+n1 wg set wg0 peer "$pub2" endpoint [fd00:aa::2]:2
+n2 wg set wg0 peer "$pub1" endpoint [fd00:aa::1]:1
+n1 ping6 -c 1 fd00::2
+pp ip netns delete $netns1
+pp ip netns delete $netns2
+pp ip netns add $netns1
+pp ip netns add $netns2
+
 # Ensure there aren't circular reference loops
 ip1 link add wg1 type wireguard
 ip2 link add wg2 type wireguard
···
 done < /dev/kmsg
 alldeleted=1
 for object in "${!objects[@]}"; do
-	if [[ ${objects["$object"]} != *createddestroyed ]]; then
+	if [[ ${objects["$object"]} != *createddestroyed && ${objects["$object"]} != *createdcreateddestroyeddestroyed ]]; then
 		echo "Error: $object: merely ${objects["$object"]}" >&3
 		alldeleted=0
 	fi