···
  Note that all fields in this file are hierarchical and the
  file modified event can be generated due to an event down the
- hierarchy. For for the local events at the cgroup level see
+ hierarchy. For the local events at the cgroup level see
  memory.events.local.

   low
···
 Cgroup v2 device controller has no interface files and is implemented
 on top of cgroup BPF. To control access to device files, a user may
-create bpf programs of the BPF_CGROUP_DEVICE type and attach them
-to cgroups. On an attempt to access a device file, corresponding
-BPF programs will be executed, and depending on the return value
-the attempt will succeed or fail with -EPERM.
+create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
+them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
+device file, corresponding BPF programs will be executed, and depending
+on the return value the attempt will succeed or fail with -EPERM.

-A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx
-structure, which describes the device access attempt: access type
-(mknod/read/write) and device (type, major and minor numbers).
-If the program returns 0, the attempt fails with -EPERM, otherwise
-it succeeds.
+A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
+bpf_cgroup_dev_ctx structure, which describes the device access attempt:
+access type (mknod/read/write) and device (type, major and minor numbers).
+If the program returns 0, the attempt fails with -EPERM, otherwise it
+succeeds.

-An example of BPF_CGROUP_DEVICE program may be found in the kernel
-source tree in the tools/testing/selftests/bpf/progs/dev_cgroup.c file.
+An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
+tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.

 RDMA
···
 NTFS3
 =====

-
 Summary and Features
 ====================

-NTFS3 is fully functional NTFS Read-Write driver. The driver works with
-NTFS versions up to 3.1, normal/compressed/sparse files
-and journal replaying. File system type to use on mount is 'ntfs3'.
+NTFS3 is fully functional NTFS Read-Write driver. The driver works with NTFS
+versions up to 3.1. File system type to use on mount is *ntfs3*.

 - This driver implements NTFS read/write support for normal, sparse and
   compressed files.
-- Supports native journal replaying;
-- Supports extended attributes
-        Predefined extended attributes:
-        - 'system.ntfs_security' gets/sets security
-                        descriptor (SECURITY_DESCRIPTOR_RELATIVE)
-        - 'system.ntfs_attrib' gets/sets ntfs file/dir attributes.
-                Note: applied to empty files, this allows to switch type between
-                sparse(0x200), compressed(0x800) and normal;
+- Supports native journal replaying.
 - Supports NFS export of mounted NTFS volumes.
+- Supports extended attributes. Predefined extended attributes:
+
+  - *system.ntfs_security* gets/sets security
+
+    Descriptor: SECURITY_DESCRIPTOR_RELATIVE
+
+  - *system.ntfs_attrib* gets/sets ntfs file/dir attributes.
+
+    Note: Applied to empty files, this allows to switch type between
+    sparse(0x200), compressed(0x800) and normal.

 Mount Options
 =============

 The list below describes mount options supported by NTFS3 driver in addition to
-generic ones.
+generic ones. You can use every mount option with **no** option. If it is in
+this table marked with no it means default is without **no**.

-===============================================================================
+.. flat-table::
+   :widths: 1 5
+   :fill-cells:

-nls=name                This option informs the driver how to interpret path
-                        strings and translate them to Unicode and back. If
-                        this option is not set, the default codepage will be
-                        used (CONFIG_NLS_DEFAULT).
-                        Examples:
-                                'nls=utf8'
+   * - iocharset=name
+     - This option informs the driver how to interpret path strings and
+       translate them to Unicode and back. If this option is not set, the
+       default codepage will be used (CONFIG_NLS_DEFAULT).

-uid=
-gid=
-umask=                  Controls the default permissions for files/directories created
-                        after the NTFS volume is mounted.
+       Example: iocharset=utf8

-fmask=
-dmask=                  Instead of specifying umask which applies both to
-                        files and directories, fmask applies only to files and
-                        dmask only to directories.
+   * - uid=
+     - :rspan:`1`
+   * - gid=

-nohidden                Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN)
-                        attribute will not be shown under Linux.
+   * - umask=
+     - Controls the default permissions for files/directories created after
+       the NTFS volume is mounted.

-sys_immutable           Files with the Windows-specific SYSTEM
-                        (FILE_ATTRIBUTE_SYSTEM) attribute will be marked as system
-                        immutable files.
+   * - dmask=
+     - :rspan:`1` Instead of specifying umask which applies both to files and
+       directories, fmask applies only to files and dmask only to directories.
+   * - fmask=

-discard                 Enable support of the TRIM command for improved performance
-                        on delete operations, which is recommended for use with the
-                        solid-state drives (SSD).
+   * - noacsrules
+     - "No access rules" mount option sets access rights for files/folders to
+       777 and owner/group to root. This mount option absorbs all other
+       permissions.

-force                   Forces the driver to mount partitions even if 'dirty' flag
-                        (volume dirty) is set. Not recommended for use.
+       - Permissions change for files/folders will be reported as successful,
+         but they will remain 777.

-sparse                  Create new files as "sparse".
+       - Owner/group change will be reported as successful, but they will stay
+         as root.

-showmeta                Use this parameter to show all meta-files (System Files) on
-                        a mounted NTFS partition.
-                        By default, all meta-files are hidden.
+   * - nohidden
+     - Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN) attribute
+       will not be shown under Linux.

-prealloc                Preallocate space for files excessively when file size is
-                        increasing on writes. Decreases fragmentation in case of
-                        parallel write operations to different files.
+   * - sys_immutable
+     - Files with the Windows-specific SYSTEM (FILE_ATTRIBUTE_SYSTEM) attribute
+       will be marked as system immutable files.

-no_acs_rules            "No access rules" mount option sets access rights for
-                        files/folders to 777 and owner/group to root. This mount
-                        option absorbs all other permissions:
-                        - permissions change for files/folders will be reported
-                          as successful, but they will remain 777;
-                        - owner/group change will be reported as successful, but
-                          they will stay as root
+   * - discard
+     - Enable support of the TRIM command for improved performance on delete
+       operations, which is recommended for use with the solid-state drives
+       (SSD).

-acl                     Support POSIX ACLs (Access Control Lists). Effective if
-                        supported by Kernel. Not to be confused with NTFS ACLs.
-                        The option specified as acl enables support for POSIX ACLs.
+   * - force
+     - Forces the driver to mount partitions even if volume is marked dirty.
+       Not recommended for use.

-noatime                 All files and directories will not update their last access
-                        time attribute if a partition is mounted with this parameter.
-                        This option can speed up file system operation.
+   * - sparse
+     - Create new files as sparse.

-===============================================================================
+   * - showmeta
+     - Use this parameter to show all meta-files (System Files) on a mounted
+       NTFS partition. By default, all meta-files are hidden.

-ToDo list
+   * - prealloc
+     - Preallocate space for files excessively when file size is increasing on
+       writes. Decreases fragmentation in case of parallel write operations to
+       different files.
+
+   * - acl
+     - Support POSIX ACLs (Access Control Lists). Effective if supported by
+       Kernel. Not to be confused with NTFS ACLs. The option specified as acl
+       enables support for POSIX ACLs.
+
+Todo list
 =========
-
-- Full journaling support (currently journal replaying is supported) over JBD.
-
+- Full journaling support over JBD. Currently journal replaying is supported
+  which is not necessarily as effective as JBD would be.

 References
 ==========
-https://www.paragon-software.com/home/ntfs-linux-professional/
-        - Commercial version of the NTFS driver for Linux.
+- Commercial version of the NTFS driver for Linux.
+  https://www.paragon-software.com/home/ntfs-linux-professional/

-almaz.alexandrovich@paragon-software.com
-        - Direct e-mail address for feedback and requests on the NTFS3 implementation.
+- Direct e-mail address for feedback and requests on the NTFS3 implementation.
+  almaz.alexandrovich@paragon-software.com
+1-1
Documentation/userspace-api/vduse.rst
···
 is clarified or fixed in the future.

 Create/Destroy VDUSE devices
------------------------
+----------------------------

 VDUSE devices are created as follows:
+8-7
MAINTAINERS
···

 FPGA MANAGER FRAMEWORK
 M:	Moritz Fischer <mdf@kernel.org>
+M:	Wu Hao <hao.wu@intel.com>
+M:	Xu Yilun <yilun.xu@intel.com>
 R:	Tom Rix <trix@redhat.com>
 L:	linux-fpga@vger.kernel.org
 S:	Maintained
-W:	http://www.rocketboards.org
 Q:	http://patchwork.kernel.org/project/linux-fpga/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mdf/linux-fpga.git
 F:	Documentation/devicetree/bindings/fpga/
···
 M:	Joakim Zhang <qiangqing.zhang@nxp.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
-F:	Documentation/devicetree/bindings/net/fsl-fec.txt
+F:	Documentation/devicetree/bindings/net/fsl,fec.yaml
 F:	drivers/net/ethernet/freescale/fec.h
 F:	drivers/net/ethernet/freescale/fec_main.c
 F:	drivers/net/ethernet/freescale/fec_ptp.c
···
 F:	drivers/platform/x86/intel/atomisp2/led.c

 INTEL BIOS SAR INT1092 DRIVER
-M:	Shravan S <s.shravan@intel.com>
+M:	Shravan Sudhakar <s.shravan@intel.com>
 M:	Intel Corporation <linuxwwan@intel.com>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
···
 F:	tools/power/x86/intel-speed-select/

 INTEL STRATIX10 FIRMWARE DRIVERS
-M:	Richard Gong <richard.gong@linux.intel.com>
+M:	Dinh Nguyen <dinguyen@kernel.org>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu
···
 M:	Christian Borntraeger <borntraeger@de.ibm.com>
 M:	Janosch Frank <frankja@linux.ibm.com>
 R:	David Hildenbrand <david@redhat.com>
-R:	Cornelia Huck <cohuck@redhat.com>
 R:	Claudio Imbrenda <imbrenda@linux.ibm.com>
 L:	kvm@vger.kernel.org
 S:	Supported
···
 F:	Documentation/devicetree/bindings/net/dsa/marvell.txt
 F:	Documentation/networking/devlink/mv88e6xxx.rst
 F:	drivers/net/dsa/mv88e6xxx/
+F:	include/linux/dsa/mv88e6xxx.h
 F:	include/linux/platform_data/mv88e6xxx.h

 MARVELL ARMADA 3700 PHY DRIVERS
···
 M:	Heiko Carstens <hca@linux.ibm.com>
 M:	Vasily Gorbik <gor@linux.ibm.com>
 M:	Christian Borntraeger <borntraeger@de.ibm.com>
+R:	Alexander Gordeev <agordeev@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 S:	Supported
 W:	http://www.ibm.com/developerworks/linux/linux390/
···
 F:	drivers/s390/crypto/vfio_ap_private.h

 S390 VFIO-CCW DRIVER
-M:	Cornelia Huck <cohuck@redhat.com>
 M:	Eric Farman <farman@linux.ibm.com>
 M:	Matthew Rosato <mjrosato@linux.ibm.com>
 R:	Halil Pasic <pasic@linux.ibm.com>
···
 SY8106A REGULATOR DRIVER
 M:	Icenowy Zheng <icenowy@aosc.io>
 S:	Maintained
-F:	Documentation/devicetree/bindings/regulator/sy8106a-regulator.txt
+F:	Documentation/devicetree/bindings/regulator/silergy,sy8106a.yaml
 F:	drivers/regulator/sy8106a-regulator.c

 SYNC FILE FRAMEWORK
···

 extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);

-/* Macro to mark a page protection as uncacheable */
-#define pgprot_noncached(prot)	(__pgprot(pgprot_val(prot) & ~_PAGE_CACHEABLE))
-
-extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
-
 /* to cope with aliasing VIPT cache */
 #define HAVE_ARCH_UNMAPPED_AREA
···
 #ifdef CONFIG_ARM64_4K_PAGES
 	order = PUD_SHIFT - PAGE_SHIFT;
 #else
-	order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT;
+	order = CONT_PMD_SHIFT - PAGE_SHIFT;
 #endif
 	/*
 	 * HugeTLB CMA reservation is required for gigantic
+2-1
arch/csky/Kconfig
···
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_QUEUED_RWLOCKS
-	select ARCH_WANT_FRAME_POINTERS if !CPU_CK610
+	select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
 	select COMMON_CLK
 	select CLKSRC_MMIO
···

 menuconfig HAVE_TCM
 	bool "Tightly-Coupled/Sram Memory"
+	depends on !COMPILE_TEST
 	help
 	  The implementation are not only used by TCM (Tightly-Coupled Meory)
 	  but also used by sram on SOC bus. It follow existed linux tcm
-1
arch/csky/include/asm/bitops.h
···
  * bug fix, why only could use atomic!!!!
  */
 #include <asm-generic/bitops/non-atomic.h>
-#define __clear_bit(nr, vaddr) clear_bit(nr, vaddr)

 #include <asm-generic/bitops/le.h>
 #include <asm-generic/bitops/ext2-atomic.h>
+2-1
arch/csky/kernel/ptrace.c
···
 	if (ret)
 		return ret;

-	regs.sr = task_pt_regs(target)->sr;
+	/* BIT(0) of regs.sr is Condition Code/Carry bit */
+	regs.sr = (regs.sr & BIT(0)) | (task_pt_regs(target)->sr & ~BIT(0));
 #ifdef CONFIG_CPU_HAS_HILO
 	regs.dcsr = task_pt_regs(target)->dcsr;
 #endif
+4
arch/csky/kernel/signal.c
···
 			      struct sigcontext __user *sc)
 {
 	int err = 0;
+	unsigned long sr = regs->sr;

 	/* sc_pt_regs is structured the same as the start of pt_regs */
 	err |= __copy_from_user(regs, &sc->sc_pt_regs, sizeof(struct pt_regs));
+
+	/* BIT(0) of regs->sr is Condition Code/Carry bit */
+	regs->sr = (sr & ~1) | (regs->sr & 1);

 	/* Restore the floating-point state. */
 	err |= restore_fpu_state(sc);
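The csky ptrace and signal hunks above apply the same guard from opposite directions: userspace may only influence BIT(0) of the status register (the Condition Code/Carry bit), and every other bit is taken from the trusted kernel-saved value. A standalone sketch of that masking pattern (`merge_sr` is a hypothetical helper for illustration, not kernel code):

```c
#include <stdint.h>

/*
 * Hypothetical illustration of the pattern in the csky fixes above:
 * take only BIT(0) (the Condition Code/Carry bit) from the untrusted,
 * user-supplied word; every other bit comes from the trusted,
 * kernel-saved word.
 */
static uint32_t merge_sr(uint32_t trusted, uint32_t untrusted)
{
	return (trusted & ~1u) | (untrusted & 1u);
}
```

With this shape, a forged `sr` coming from a signal frame or from PTRACE_SETREGSET can flip the carry flag but can no longer set any privileged mode bits.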
+17-11
arch/powerpc/kvm/book3s_hv_rmhandlers.S
···
  * r3 contains the SRR1 wakeup value, SRR1 is trashed.
  */
 _GLOBAL(idle_kvm_start_guest)
-	ld	r4,PACAEMERGSP(r13)
 	mfcr	r5
 	mflr	r0
-	std	r1,0(r4)
-	std	r5,8(r4)
-	std	r0,16(r4)
-	subi	r1,r4,STACK_FRAME_OVERHEAD
+	std	r5, 8(r1)	// Save CR in caller's frame
+	std	r0, 16(r1)	// Save LR in caller's frame
+	// Create frame on emergency stack
+	ld	r4, PACAEMERGSP(r13)
+	stdu	r1, -SWITCH_FRAME_SIZE(r4)
+	// Switch to new frame on emergency stack
+	mr	r1, r4
+	std	r3, 32(r1)	// Save SRR1 wakeup value
 	SAVE_NVGPRS(r1)

 	/*
···
 	beq	kvm_no_guest

 kvm_secondary_got_guest:
+
+	// About to go to guest, clear saved SRR1
+	li	r0, 0
+	std	r0, 32(r1)

 	/* Set HSTATE_DSCR(r13) to something sensible */
 	ld	r6, PACA_DSCR_DEFAULT(r13)
···
 	mfspr	r4, SPRN_LPCR
 	rlwimi	r4, r3, 0, LPCR_PECE0 | LPCR_PECE1
 	mtspr	SPRN_LPCR, r4
-	/* set up r3 for return */
-	mfspr	r3,SPRN_SRR1
+	// Return SRR1 wakeup value, or 0 if we went into the guest
+	ld	r3, 32(r1)
 	REST_NVGPRS(r1)
-	addi	r1, r1, STACK_FRAME_OVERHEAD
-	ld	r0, 16(r1)
-	ld	r5, 8(r1)
-	ld	r1, 0(r1)
+	ld	r1, 0(r1)	// Switch back to caller stack
+	ld	r0, 16(r1)	// Reload LR
+	ld	r5, 8(r1)	// Reload CR
 	mtlr	r0
 	mtcr	r5
 	blr
+2-1
arch/powerpc/sysdev/xive/common.c
···
 	 * interrupt to be inactive in that case.
 	 */
 	*state = (pq != XIVE_ESB_INVALID) && !xd->stale_p &&
-		(xd->saved_p || !!(pq & XIVE_ESB_VAL_P));
+		(xd->saved_p || (!!(pq & XIVE_ESB_VAL_P) &&
+		 !irqd_irq_disabled(data)));
 		return 0;
 	default:
 		return -EINVAL;
+6-7
arch/s390/lib/string.c
···
 #ifdef __HAVE_ARCH_STRRCHR
 char *strrchr(const char *s, int c)
 {
-	size_t len = __strend(s) - s;
+	ssize_t len = __strend(s) - s;

-	if (len)
-		do {
-			if (s[len] == (char) c)
-				return (char *) s + len;
-		} while (--len > 0);
-	return NULL;
+	do {
+		if (s[len] == (char)c)
+			return (char *)s + len;
+	} while (--len >= 0);
+	return NULL;
 }
 EXPORT_SYMBOL(strrchr);
 #endif
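The old loop returned early for the empty string and never tested index 0, so a match in the very first byte, and a search for `'\0'` itself, could be missed; the signed `ssize_t` is what lets `--len >= 0` terminate cleanly. A portable userspace model of the fixed logic (`my_strrchr` is illustrative only, built on `strlen()` instead of the s390 `__strend()` helper):

```c
#include <stddef.h>
#include <string.h>

/*
 * Userspace model of the fixed s390 strrchr() above: scan from the
 * terminating NUL down to index 0 inclusive, so both a match at
 * position 0 and a search for '\0' itself are found.  The signed
 * index is essential: with an unsigned type, --len would wrap at 0.
 */
static char *my_strrchr(const char *s, int c)
{
	ptrdiff_t len = (ptrdiff_t)strlen(s);	/* index of the trailing NUL */

	do {
		if (s[len] == (char)c)
			return (char *)s + len;
	} while (--len >= 0);
	return NULL;
}
```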
-1
arch/x86/Kconfig
···

 config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
 	bool "Activate AMD Secure Memory Encryption (SME) by default"
-	default y
 	depends on AMD_MEM_ENCRYPT
 	help
 	  Say yes to have system memory encrypted by default if running on
+1
arch/x86/events/msr.c
···
 	case INTEL_FAM6_BROADWELL_D:
 	case INTEL_FAM6_BROADWELL_G:
 	case INTEL_FAM6_BROADWELL_X:
+	case INTEL_FAM6_SAPPHIRERAPIDS_X:

 	case INTEL_FAM6_ATOM_SILVERMONT:
 	case INTEL_FAM6_ATOM_SILVERMONT_D:
+1-1
arch/x86/kernel/fpu/signal.c
···
 		return -EINVAL;
 	} else {
 		/* Mask invalid bits out for historical reasons (broken hardware). */
-		fpu->state.fxsave.mxcsr &= ~mxcsr_feature_mask;
+		fpu->state.fxsave.mxcsr &= mxcsr_feature_mask;
 	}

 	/* Enforce XFEATURE_MASK_FPSSE when XSAVE is enabled */
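The fix above is a single flipped operator: `mxcsr_feature_mask` has a 1 for every MXCSR bit the CPU supports, so keeping only valid bits means AND-ing with the mask, while AND-ing with its complement did the opposite and threw away the valid bits. A minimal illustration (the `0xffbf` mask value is an assumed typical value for the example, not read from hardware):

```c
#include <stdint.h>

/* Assumed feature mask for illustration: 1 = MXCSR bit supported by the FPU. */
#define MXCSR_FEATURE_MASK 0x0000ffbfu

/* The buggy form: clears every *supported* bit and keeps the garbage. */
static uint32_t sanitize_mxcsr_buggy(uint32_t mxcsr)
{
	return mxcsr & ~MXCSR_FEATURE_MASK;
}

/* The fixed form: keeps supported bits and clears the invalid ones. */
static uint32_t sanitize_mxcsr_fixed(uint32_t mxcsr)
{
	return mxcsr & MXCSR_FEATURE_MASK;
}
```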
+6
block/bfq-cgroup.c
···
 	bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
 	bfqg_and_blkg_put(bfqq_group(bfqq));

+	if (entity->parent &&
+	    entity->parent->last_bfqq_created == bfqq)
+		entity->parent->last_bfqq_created = NULL;
+	else if (bfqd->last_bfqq_created == bfqq)
+		bfqd->last_bfqq_created = NULL;
+
 	entity->parent = bfqg->my_entity;
 	entity->sched_data = &bfqg->sched_data;
 	/* pin down bfqg and its associated blkg */
+78-70
block/blk-core.c
···
 #include "blk-mq.h"
 #include "blk-mq-sched.h"
 #include "blk-pm.h"
-#include "blk-rq-qos.h"

 struct dentry *blk_debugfs_root;
···
 }
 EXPORT_SYMBOL(blk_put_queue);

-void blk_set_queue_dying(struct request_queue *q)
+void blk_queue_start_drain(struct request_queue *q)
 {
-	blk_queue_flag_set(QUEUE_FLAG_DYING, q);
-
 	/*
 	 * When queue DYING flag is set, we need to block new req
 	 * entering queue, so we call blk_freeze_queue_start() to
 	 * prevent I/O from crossing blk_queue_enter().
 	 */
 	blk_freeze_queue_start(q);
-
 	if (queue_is_mq(q))
 		blk_mq_wake_waiters(q);
-
 	/* Make blk_queue_enter() reexamine the DYING flag. */
 	wake_up_all(&q->mq_freeze_wq);
+}
+
+void blk_set_queue_dying(struct request_queue *q)
+{
+	blk_queue_flag_set(QUEUE_FLAG_DYING, q);
+	blk_queue_start_drain(q);
 }
 EXPORT_SYMBOL_GPL(blk_set_queue_dying);
···
 	 */
 	blk_freeze_queue(q);

-	rq_qos_exit(q);
-
 	blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
-
-	/* for synchronous bio-based driver finish in-flight integrity i/o */
-	blk_flush_integrity();

 	blk_sync_queue(q);
 	if (queue_is_mq(q))
···
 }
 EXPORT_SYMBOL(blk_cleanup_queue);

+static bool blk_try_enter_queue(struct request_queue *q, bool pm)
+{
+	rcu_read_lock();
+	if (!percpu_ref_tryget_live(&q->q_usage_counter))
+		goto fail;
+
+	/*
+	 * The code that increments the pm_only counter must ensure that the
+	 * counter is globally visible before the queue is unfrozen.
+	 */
+	if (blk_queue_pm_only(q) &&
+	    (!pm || queue_rpm_status(q) == RPM_SUSPENDED))
+		goto fail_put;
+
+	rcu_read_unlock();
+	return true;
+
+fail_put:
+	percpu_ref_put(&q->q_usage_counter);
+fail:
+	rcu_read_unlock();
+	return false;
+}
+
 /**
  * blk_queue_enter() - try to increase q->q_usage_counter
  * @q: request queue pointer
···
 {
 	const bool pm = flags & BLK_MQ_REQ_PM;

-	while (true) {
-		bool success = false;
-
-		rcu_read_lock();
-		if (percpu_ref_tryget_live(&q->q_usage_counter)) {
-			/*
-			 * The code that increments the pm_only counter is
-			 * responsible for ensuring that that counter is
-			 * globally visible before the queue is unfrozen.
-			 */
-			if ((pm && queue_rpm_status(q) != RPM_SUSPENDED) ||
-			    !blk_queue_pm_only(q)) {
-				success = true;
-			} else {
-				percpu_ref_put(&q->q_usage_counter);
-			}
-		}
-		rcu_read_unlock();
-
-		if (success)
-			return 0;
-
+	while (!blk_try_enter_queue(q, pm)) {
 		if (flags & BLK_MQ_REQ_NOWAIT)
 			return -EBUSY;

 		/*
-		 * read pair of barrier in blk_freeze_queue_start(),
-		 * we need to order reading __PERCPU_REF_DEAD flag of
-		 * .q_usage_counter and reading .mq_freeze_depth or
-		 * queue dying flag, otherwise the following wait may
-		 * never return if the two reads are reordered.
+		 * read pair of barrier in blk_freeze_queue_start(), we need to
+		 * order reading __PERCPU_REF_DEAD flag of .q_usage_counter and
+		 * reading .mq_freeze_depth or queue dying flag, otherwise the
+		 * following wait may never return if the two reads are
+		 * reordered.
 		 */
 		smp_rmb();
-
 		wait_event(q->mq_freeze_wq,
 			   (!q->mq_freeze_depth &&
 			    blk_pm_resume_queue(pm, q)) ||
···
 		if (blk_queue_dying(q))
 			return -ENODEV;
 	}
+
+	return 0;
 }

 static inline int bio_queue_enter(struct bio *bio)
 {
-	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
-	bool nowait = bio->bi_opf & REQ_NOWAIT;
-	int ret;
+	struct gendisk *disk = bio->bi_bdev->bd_disk;
+	struct request_queue *q = disk->queue;

-	ret = blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0);
-	if (unlikely(ret)) {
-		if (nowait && !blk_queue_dying(q))
+	while (!blk_try_enter_queue(q, false)) {
+		if (bio->bi_opf & REQ_NOWAIT) {
+			if (test_bit(GD_DEAD, &disk->state))
+				goto dead;
 			bio_wouldblock_error(bio);
-		else
-			bio_io_error(bio);
+			return -EBUSY;
+		}
+
+		/*
+		 * read pair of barrier in blk_freeze_queue_start(), we need to
+		 * order reading __PERCPU_REF_DEAD flag of .q_usage_counter and
+		 * reading .mq_freeze_depth or queue dying flag, otherwise the
+		 * following wait may never return if the two reads are
+		 * reordered.
+		 */
+		smp_rmb();
+		wait_event(q->mq_freeze_wq,
+			   (!q->mq_freeze_depth &&
+			    blk_pm_resume_queue(false, q)) ||
+			   test_bit(GD_DEAD, &disk->state));
+		if (test_bit(GD_DEAD, &disk->state))
+			goto dead;
 	}

-	return ret;
+	return 0;
+dead:
+	bio_io_error(bio);
+	return -ENODEV;
 }

 void blk_queue_exit(struct request_queue *q)
···
 	struct gendisk *disk = bio->bi_bdev->bd_disk;
 	blk_qc_t ret = BLK_QC_T_NONE;

-	if (blk_crypto_bio_prep(&bio)) {
-		if (!disk->fops->submit_bio)
-			return blk_mq_submit_bio(bio);
+	if (unlikely(bio_queue_enter(bio) != 0))
+		return BLK_QC_T_NONE;
+
+	if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio))
+		goto queue_exit;
+	if (disk->fops->submit_bio) {
 		ret = disk->fops->submit_bio(bio);
+		goto queue_exit;
 	}
+	return blk_mq_submit_bio(bio);
+
+queue_exit:
 	blk_queue_exit(disk->queue);
 	return ret;
 }
···
 	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
 	struct bio_list lower, same;

-	if (unlikely(bio_queue_enter(bio) != 0))
-		continue;
-
 	/*
 	 * Create a fresh bio_list for all subordinate requests.
 	 */
···
 static blk_qc_t __submit_bio_noacct_mq(struct bio *bio)
 {
 	struct bio_list bio_list[2] = { };
-	blk_qc_t ret = BLK_QC_T_NONE;
+	blk_qc_t ret;

 	current->bio_list = bio_list;

 	do {
-		struct gendisk *disk = bio->bi_bdev->bd_disk;
-
-		if (unlikely(bio_queue_enter(bio) != 0))
-			continue;
-
-		if (!blk_crypto_bio_prep(&bio)) {
-			blk_queue_exit(disk->queue);
-			ret = BLK_QC_T_NONE;
-			continue;
-		}
-
-		ret = blk_mq_submit_bio(bio);
+		ret = __submit_bio(bio);
 	} while ((bio = bio_list_pop(&bio_list[0])));

 	current->bio_list = NULL;
···
  */
 blk_qc_t submit_bio_noacct(struct bio *bio)
 {
-	if (!submit_bio_checks(bio))
-		return BLK_QC_T_NONE;
-
 	/*
 	 * We only want one ->submit_bio to be active at a time, else stack
 	 * usage with stacked devices could be a problem. Use current->bio_list
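The new `blk_try_enter_queue()` above follows a common lockless shape: optimistically take a reference, re-check the gating condition, and drop the reference again if the check fails. A simplified model in plain C (a plain counter and flags stand in for `percpu_ref`, RCU, and the runtime-PM status; illustration only, not kernel code):

```c
#include <stdbool.h>

/* Toy stand-in for a request queue's entry-gating state. */
struct queue {
	int usage;	/* stand-in for q_usage_counter */
	bool alive;	/* stand-in for percpu_ref_tryget_live() succeeding */
	bool pm_only;	/* stand-in for blk_queue_pm_only() */
};

/*
 * Model of the tryget/recheck/put shape: take the reference first,
 * then re-check the condition, and release the reference on failure
 * so a failed attempt leaves no trace.
 */
static bool try_enter_queue(struct queue *q, bool pm_capable)
{
	if (!q->alive)
		return false;
	q->usage++;			/* optimistic "tryget" */

	if (q->pm_only && !pm_capable) {
		q->usage--;		/* "put": condition failed after the get */
		return false;
	}
	return true;
}
```

Taking the reference before the check is what makes the pattern safe against the queue being frozen concurrently; the cost is only the extra put on the failure path.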
···
 		return 0;

 	if (acpi_s2idle_vendor_amd()) {
-		/* AMD0004, AMDI0005:
+		/* AMD0004, AMD0005, AMDI0005:
 		 * - Should use rev_id 0x0
 		 * - function mask > 0x3: Should use AMD method, but has off by one bug
 		 * - function mask = 0x3: Should use Microsoft method
···
 					ACPI_LPS0_DSM_UUID_MICROSOFT, 0,
 					&lps0_dsm_guid_microsoft);
 		if (lps0_dsm_func_mask > 0x3 && (!strcmp(hid, "AMD0004") ||
+						 !strcmp(hid, "AMD0005") ||
 						 !strcmp(hid, "AMDI0005"))) {
 			lps0_dsm_func_mask = (lps0_dsm_func_mask << 1) | 0x1;
 			acpi_handle_debug(adev->handle, "_DSM UUID %s: Adjusted function mask: 0x%x\n",
···
 	int opt_mask = 0;
 	int token;
 	int ret = -EINVAL;
-	int i, dest_port, nr_poll_queues;
+	int nr_poll_queues = 0;
+	int dest_port = 0;
 	int p_cnt = 0;
+	int i;

 	options = kstrdup(buf, GFP_KERNEL);
 	if (!options)
+6-31
drivers/block/virtio_blk.c
···
 static unsigned int virtblk_queue_depth;
 module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);

-static int virtblk_validate(struct virtio_device *vdev)
-{
-	u32 blk_size;
-
-	if (!vdev->config->get) {
-		dev_err(&vdev->dev, "%s failure: config access disabled\n",
-			__func__);
-		return -EINVAL;
-	}
-
-	if (!virtio_has_feature(vdev, VIRTIO_BLK_F_BLK_SIZE))
-		return 0;
-
-	blk_size = virtio_cread32(vdev,
-			offsetof(struct virtio_blk_config, blk_size));
-
-	if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE)
-		__virtio_clear_bit(vdev, VIRTIO_BLK_F_BLK_SIZE);
-
-	return 0;
-}
-
 static int virtblk_probe(struct virtio_device *vdev)
 {
 	struct virtio_blk *vblk;
···
 	u16 min_io_size;
 	u8 physical_block_exp, alignment_offset;
 	unsigned int queue_depth;
+
+	if (!vdev->config->get) {
+		dev_err(&vdev->dev, "%s failure: config access disabled\n",
+			__func__);
+		return -EINVAL;
+	}

 	err = ida_simple_get(&vd_index_ida, 0, minor_to_index(1 << MINORBITS),
 			     GFP_KERNEL);
···
 		blk_queue_logical_block_size(q, blk_size);
 	else
 		blk_size = queue_logical_block_size(q);
-
-	if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE) {
-		dev_err(&vdev->dev,
-			"block size is changed unexpectedly, now is %u\n",
-			blk_size);
-		err = -EINVAL;
-		goto out_cleanup_disk;
-	}

 	/* Use topology information if available */
 	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY,
···
 	.driver.name			= KBUILD_MODNAME,
 	.driver.owner			= THIS_MODULE,
 	.id_table			= id_table,
-	.validate			= virtblk_validate,
 	.probe				= virtblk_probe,
 	.remove				= virtblk_remove,
 	.config_changed			= virtblk_config_changed,
-12
drivers/bus/Kconfig
···
 	  Interface 2, which can be used to connect things like NAND Flash,
 	  SRAM, ethernet adapters, FPGAs and LCD displays.

-config SIMPLE_PM_BUS
-	tristate "Simple Power-Managed Bus Driver"
-	depends on OF && PM
-	help
-	  Driver for transparent busses that don't need a real driver, but
-	  where the bus controller is part of a PM domain, or under the control
-	  of a functional clock, and thus relies on runtime PM for managing
-	  this PM domain and/or clock.
-	  An example of such a bus controller is the Renesas Bus State
-	  Controller (BSC, sometimes called "LBSC within Bus Bridge", or
-	  "External Bus Interface") as found on several Renesas ARM SoCs.
-
 config SUN50I_DE2_BUS
 	bool "Allwinner A64 DE2 Bus Driver"
 	default ARM64
···
 #include <linux/platform_device.h>
 #include <linux/pm_runtime.h>

-
 static int simple_pm_bus_probe(struct platform_device *pdev)
 {
-	const struct of_dev_auxdata *lookup = dev_get_platdata(&pdev->dev);
-	struct device_node *np = pdev->dev.of_node;
+	const struct device *dev = &pdev->dev;
+	const struct of_dev_auxdata *lookup = dev_get_platdata(dev);
+	struct device_node *np = dev->of_node;
+	const struct of_device_id *match;
+
+	/*
+	 * Allow user to use driver_override to bind this driver to a
+	 * transparent bus device which has a different compatible string
+	 * that's not listed in simple_pm_bus_of_match. We don't want to do any
+	 * of the simple-pm-bus tasks for these devices, so return early.
+	 */
+	if (pdev->driver_override)
+		return 0;
+
+	match = of_match_device(dev->driver->of_match_table, dev);
+	/*
+	 * These are transparent bus devices (not simple-pm-bus matches) that
+	 * have their child nodes populated automatically.  So, don't need to
+	 * do anything more. We only match with the device if this driver is
+	 * the most specific match because we don't want to incorrectly bind to
+	 * a device that has a more specific driver.
+	 */
+	if (match && match->data) {
+		if (of_property_match_string(np, "compatible", match->compatible) == 0)
+			return 0;
+		else
+			return -ENODEV;
+	}

 	dev_dbg(&pdev->dev, "%s\n", __func__);
···
 static int simple_pm_bus_remove(struct platform_device *pdev)
 {
+	const void *data = of_device_get_match_data(&pdev->dev);
+
+	if (pdev->driver_override || data)
+		return 0;
+
 	dev_dbg(&pdev->dev, "%s\n", __func__);

 	pm_runtime_disable(&pdev->dev);
 	return 0;
 }

+#define ONLY_BUS	((void *) 1) /* Match if the device is only a bus. */
+
 static const struct of_device_id simple_pm_bus_of_match[] = {
 	{ .compatible = "simple-pm-bus", },
+	{ .compatible = "simple-bus",	.data = ONLY_BUS },
+	{ .compatible = "simple-mfd",	.data = ONLY_BUS },
+	{ .compatible = "isa",		.data = ONLY_BUS },
+	{ .compatible = "arm,amba-bus",	.data = ONLY_BUS },
 	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
+1
drivers/clk/qcom/Kconfig
···

 config SM_GCC_6350
 	tristate "SM6350 Global Clock Controller"
+	select QCOM_GDSC
 	help
 	  Support for the global clock controller on SM6350 devices.
 	  Say Y if you want to use peripheral devices such as UART,
···
 #include <acpi/ghes.h>
 #include <ras/ras_event.h>

-static char rcd_decode_str[CPER_REC_LEN];
-
 /*
  * CPER record ID need to be unique even after reboot, because record
  * ID is used as index for ERST storage, while CPER records from
···
 			       struct cper_mem_err_compact *cmem)
 {
 	const char *ret = trace_seq_buffer_ptr(p);
+	char rcd_decode_str[CPER_REC_LEN];

 	if (cper_mem_err_location(cmem, rcd_decode_str))
 		trace_seq_printf(p, "%s", rcd_decode_str);
···
 		       int len)
 {
 	struct cper_mem_err_compact cmem;
+	char rcd_decode_str[CPER_REC_LEN];

 	/* Don't trust UEFI 2.1/2.2 structure with bad validation bits */
 	if (len == sizeof(struct cper_sec_mem_err_old) &&
···
 				    unsigned long data_size,
 				    efi_char16_t *data)
 {
-	if (down_interruptible(&efi_runtime_lock)) {
+	if (down_trylock(&efi_runtime_lock)) {
 		pr_warn("failed to invoke the reset_system() runtime service:\n"
 			"could not get exclusive access to the firmware\n");
 		return;
···
 
 static void gpio_mockup_unregister_pdevs(void)
 {
+	struct platform_device *pdev;
+	struct fwnode_handle *fwnode;
 	int i;
 
-	for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++)
-		platform_device_unregister(gpio_mockup_pdevs[i]);
+	for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++) {
+		pdev = gpio_mockup_pdevs[i];
+		if (!pdev)
+			continue;
+
+		fwnode = dev_fwnode(&pdev->dev);
+		platform_device_unregister(pdev);
+		fwnode_remove_software_node(fwnode);
+	}
 }
 
 static __init char **gpio_mockup_make_line_names(const char *label,
···
 	struct property_entry properties[GPIO_MOCKUP_MAX_PROP];
 	struct platform_device_info pdevinfo;
 	struct platform_device *pdev;
+	struct fwnode_handle *fwnode;
 	char **line_names = NULL;
 	char chip_label[32];
 	int prop = 0, base;
···
 				      "gpio-line-names", line_names, ngpio);
 	}
 
+	fwnode = fwnode_create_software_node(properties, NULL);
+	if (IS_ERR(fwnode))
+		return PTR_ERR(fwnode);
+
 	pdevinfo.name = "gpio-mockup";
 	pdevinfo.id = idx;
-	pdevinfo.properties = properties;
+	pdevinfo.fwnode = fwnode;
 
 	pdev = platform_device_register_full(&pdevinfo);
 	kfree_strarray(line_names, ngpio);
 	if (IS_ERR(pdev)) {
+		fwnode_remove_software_node(fwnode);
 		pr_err("error registering device");
 		return PTR_ERR(pdev);
 	}
+9-7
drivers/gpio/gpio-pca953x.c
···
 
 	mutex_lock(&chip->i2c_lock);
 
-	/* Disable pull-up/pull-down */
-	ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
-	if (ret)
-		goto exit;
-
 	/* Configure pull-up/pull-down */
 	if (config == PIN_CONFIG_BIAS_PULL_UP)
 		ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, bit);
 	else if (config == PIN_CONFIG_BIAS_PULL_DOWN)
 		ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, 0);
+	else
+		ret = 0;
 	if (ret)
 		goto exit;
 
-	/* Enable pull-up/pull-down */
-	ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
+	/* Disable/Enable pull-up/pull-down */
+	if (config == PIN_CONFIG_BIAS_DISABLE)
+		ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0);
+	else
+		ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit);
 
 exit:
 	mutex_unlock(&chip->i2c_lock);
···
 
 	switch (pinconf_to_config_param(config)) {
 	case PIN_CONFIG_BIAS_PULL_UP:
+	case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT:
 	case PIN_CONFIG_BIAS_PULL_DOWN:
+	case PIN_CONFIG_BIAS_DISABLE:
 		return pca953x_gpio_set_pull_up_down(chip, offset, config);
 	default:
 		return -ENOTSUPP;
+12-3
drivers/gpu/drm/drm_edid.c
···
 			      u8 *edid, int num_blocks)
 {
 	int i;
-	u8 num_of_ext = edid[0x7e];
+	u8 last_block;
+
+	/*
+	 * 0x7e in the EDID is the number of extension blocks. The EDID
+	 * is 1 (base block) + num_ext_blocks big. That means we can think
+	 * of 0x7e in the EDID of the _index_ of the last block in the
+	 * combined chunk of memory.
+	 */
+	last_block = edid[0x7e];
 
 	/* Calculate real checksum for the last edid extension block data */
-	connector->real_edid_checksum =
-		drm_edid_block_checksum(edid + num_of_ext * EDID_LENGTH);
+	if (last_block < num_blocks)
+		connector->real_edid_checksum =
+			drm_edid_block_checksum(edid + last_block * EDID_LENGTH);
 
 	if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS))
 		return;
+6
drivers/gpu/drm/drm_fb_helper.c
···
 {
 	struct drm_client_dev *client = &fb_helper->client;
 	struct drm_device *dev = fb_helper->dev;
+	struct drm_mode_config *config = &dev->mode_config;
 	int ret = 0;
 	int crtc_count = 0;
 	struct drm_connector_list_iter conn_iter;
···
 	/* Handle our overallocation */
 	sizes.surface_height *= drm_fbdev_overalloc;
 	sizes.surface_height /= 100;
+	if (sizes.surface_height > config->max_height) {
+		drm_dbg_kms(dev, "Fbdev over-allocation too large; clamping height to %d\n",
+			    config->max_height);
+		sizes.surface_height = config->max_height;
+	}
 
 	/* push down into drivers */
 	ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
···
 	return 0;
 }
 
+/*
+ * Hyper-V supports a hardware cursor feature. It's not used by Linux VM,
+ * but the Hyper-V host still draws a point as an extra mouse pointer,
+ * which is unwanted, especially when Xorg is running.
+ *
+ * The hyperv_fb driver uses synthvid_send_ptr() to hide the unwanted
+ * pointer, by setting msg.ptr_pos.is_visible = 1 and setting the
+ * msg.ptr_shape.data. Note: setting msg.ptr_pos.is_visible to 0 doesn't
+ * work in tests.
+ *
+ * Copy synthvid_send_ptr() to hyperv_drm and rename it to
+ * hyperv_hide_hw_ptr(). Note: hyperv_hide_hw_ptr() is also called in the
+ * handler of the SYNTHVID_FEATURE_CHANGE event, otherwise the host still
+ * draws an extra unwanted mouse pointer after the VM Connection window is
+ * closed and reopened.
+ */
+int hyperv_hide_hw_ptr(struct hv_device *hdev)
+{
+	struct synthvid_msg msg;
+
+	memset(&msg, 0, sizeof(struct synthvid_msg));
+	msg.vid_hdr.type = SYNTHVID_POINTER_POSITION;
+	msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
+		sizeof(struct synthvid_pointer_position);
+	msg.ptr_pos.is_visible = 1;
+	msg.ptr_pos.video_output = 0;
+	msg.ptr_pos.image_x = 0;
+	msg.ptr_pos.image_y = 0;
+	hyperv_sendpacket(hdev, &msg);
+
+	memset(&msg, 0, sizeof(struct synthvid_msg));
+	msg.vid_hdr.type = SYNTHVID_POINTER_SHAPE;
+	msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) +
+		sizeof(struct synthvid_pointer_shape);
+	msg.ptr_shape.part_idx = SYNTHVID_CURSOR_COMPLETE;
+	msg.ptr_shape.is_argb = 1;
+	msg.ptr_shape.width = 1;
+	msg.ptr_shape.height = 1;
+	msg.ptr_shape.hot_x = 0;
+	msg.ptr_shape.hot_y = 0;
+	msg.ptr_shape.data[0] = 0;
+	msg.ptr_shape.data[1] = 1;
+	msg.ptr_shape.data[2] = 1;
+	msg.ptr_shape.data[3] = 1;
+	hyperv_sendpacket(hdev, &msg);
+
+	return 0;
+}
+
 int hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect)
 {
 	struct hyperv_drm_device *hv = hv_get_drvdata(hdev);
···
 		return;
 	}
 
-	if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE)
+	if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE) {
 		hv->dirt_needed = msg->feature_chg.is_dirt_needed;
+		if (hv->dirt_needed)
+			hyperv_hide_hw_ptr(hv->hdev);
+	}
 }
 
 static void hyperv_receive(void *ctx)
···
 	}
 
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
-	ret = IS_ERR(icc_path);
-	if (ret)
+	if (IS_ERR(icc_path)) {
+		ret = PTR_ERR(icc_path);
 		goto fail;
+	}
 
 	ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
-	ret = IS_ERR(ocmem_icc_path);
-	if (ret) {
+	if (IS_ERR(ocmem_icc_path)) {
+		ret = PTR_ERR(ocmem_icc_path);
 		/* allow -ENODATA, ocmem icc is optional */
 		if (ret != -ENODATA)
 			goto fail;
+5-4
drivers/gpu/drm/msm/adreno/a4xx_gpu.c
···
 	}
 
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
-	ret = IS_ERR(icc_path);
-	if (ret)
+	if (IS_ERR(icc_path)) {
+		ret = PTR_ERR(icc_path);
 		goto fail;
+	}
 
 	ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem");
-	ret = IS_ERR(ocmem_icc_path);
-	if (ret) {
+	if (IS_ERR(ocmem_icc_path)) {
+		ret = PTR_ERR(ocmem_icc_path);
 		/* allow -ENODATA, ocmem icc is optional */
 		if (ret != -ENODATA)
 			goto fail;
+6
drivers/gpu/drm/msm/adreno/a6xx_gmu.c
···
 	u32 val;
 	int request, ack;
 
+	WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
+
 	if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
 		return -EINVAL;
···
 void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state)
 {
 	int bit;
+
+	WARN_ON_ONCE(!mutex_is_locked(&gmu->lock));
 
 	if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits))
 		return;
···
 
 	if (!pdev)
 		return -ENODEV;
+
+	mutex_init(&gmu->lock);
 
 	gmu->dev = &pdev->dev;
 
+3
drivers/gpu/drm/msm/adreno/a6xx_gmu.h
···
struct a6xx_gmu {
 	struct device *dev;
 
+	/* For serializing communication with the GMU: */
+	struct mutex lock;
+
 	struct msm_gem_address_space *aspace;
 
 	void * __iomem mmio;
+37-9
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
···
 	u32 asid;
 	u64 memptr = rbmemptr(ring, ttbr0);
 
-	if (ctx == a6xx_gpu->cur_ctx)
+	if (ctx->seqno == a6xx_gpu->cur_ctx_seqno)
 		return;
 
 	if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
···
 	OUT_PKT7(ring, CP_EVENT_WRITE, 1);
 	OUT_RING(ring, 0x31);
 
-	a6xx_gpu->cur_ctx = ctx;
+	a6xx_gpu->cur_ctx_seqno = ctx->seqno;
 }
 
 static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
···
 	  A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \
 	  A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR)
 
-static int a6xx_hw_init(struct msm_gpu *gpu)
+static int hw_init(struct msm_gpu *gpu)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
···
 	/* Always come up on rb 0 */
 	a6xx_gpu->cur_ring = gpu->rb[0];
 
-	a6xx_gpu->cur_ctx = NULL;
+	a6xx_gpu->cur_ctx_seqno = 0;
 
 	/* Enable the SQE_to start the CP engine */
 	gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1);
···
 		/* Take the GMU out of its special boot mode */
 		a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_BOOT_SLUMBER);
 	}
+
+	return ret;
+}
+
+static int a6xx_hw_init(struct msm_gpu *gpu)
+{
+	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+	int ret;
+
+	mutex_lock(&a6xx_gpu->gmu.lock);
+	ret = hw_init(gpu);
+	mutex_unlock(&a6xx_gpu->gmu.lock);
 
 	return ret;
 }
···
 
 	trace_msm_gpu_resume(0);
 
+	mutex_lock(&a6xx_gpu->gmu.lock);
 	ret = a6xx_gmu_resume(a6xx_gpu);
+	mutex_unlock(&a6xx_gpu->gmu.lock);
 	if (ret)
 		return ret;
 
···
 
 	msm_devfreq_suspend(gpu);
 
+	mutex_lock(&a6xx_gpu->gmu.lock);
 	ret = a6xx_gmu_stop(a6xx_gpu);
+	mutex_unlock(&a6xx_gpu->gmu.lock);
 	if (ret)
 		return ret;
 
···
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
-	static DEFINE_MUTEX(perfcounter_oob);
 
-	mutex_lock(&perfcounter_oob);
+	mutex_lock(&a6xx_gpu->gmu.lock);
 
 	/* Force the GPU power on so we can read this register */
 	a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
 
 	*value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
-		REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
+			    REG_A6XX_CP_ALWAYS_ON_COUNTER_HI);
 
 	a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET);
-	mutex_unlock(&perfcounter_oob);
+
+	mutex_unlock(&a6xx_gpu->gmu.lock);
+
 	return 0;
 }
···
 		return ~0LU;
 
 	return (unsigned long)busy_time;
+}
+
+void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp)
+{
+	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+
+	mutex_lock(&a6xx_gpu->gmu.lock);
+	a6xx_gmu_set_freq(gpu, opp);
+	mutex_unlock(&a6xx_gpu->gmu.lock);
 }
 
 static struct msm_gem_address_space *
···
 #endif
 		.gpu_busy = a6xx_gpu_busy,
 		.gpu_get_freq = a6xx_gmu_get_freq,
-		.gpu_set_freq = a6xx_gmu_set_freq,
+		.gpu_set_freq = a6xx_gpu_set_freq,
 #if defined(CONFIG_DRM_MSM_GPU_STATE)
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
+10-1
drivers/gpu/drm/msm/adreno/a6xx_gpu.h
···
 	uint64_t sqe_iova;
 
 	struct msm_ringbuffer *cur_ring;
-	struct msm_file_private *cur_ctx;
+
+	/**
+	 * cur_ctx_seqno:
+	 *
+	 * The ctx->seqno value of the context with current pgtables
+	 * installed.  Tracked by seqno rather than pointer value to
+	 * avoid dangling pointers, and cases where a ctx can be freed
+	 * and a new one created with the same address.
+	 */
+	int cur_ctx_seqno;
 
 	struct a6xx_gmu gmu;
 
···
 		 * can not declared display is connected unless
 		 * HDMI cable is plugged in and sink_count of
 		 * dongle become 1
+		 * also only signal audio when disconnected
 		 */
-		if (dp->link->sink_count)
+		if (dp->link->sink_count) {
 			dp->dp_display.is_connected = true;
-		else
+		} else {
 			dp->dp_display.is_connected = false;
-
-		dp_display_handle_plugged_change(g_dp_display,
-				dp->dp_display.is_connected);
+			dp_display_handle_plugged_change(g_dp_display, false);
+		}
 
 		DRM_DEBUG_DP("After, sink_count=%d is_connected=%d core_inited=%d power_on=%d\n",
 				dp->link->sink_count, dp->dp_display.is_connected,
+3-1
drivers/gpu/drm/msm/dsi/dsi.c
···
 		goto fail;
 	}
 
-	if (!msm_dsi_manager_validate_current_config(msm_dsi->id))
+	if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) {
+		ret = -EINVAL;
 		goto fail;
+	}
 
 	msm_dsi->encoder = encoder;
 
···
 	if (!submit)
 		return ERR_PTR(-ENOMEM);
 
-	ret = drm_sched_job_init(&submit->base, &queue->entity, queue);
+	ret = drm_sched_job_init(&submit->base, queue->entity, queue);
 	if (ret) {
 		kfree(submit);
 		return ERR_PTR(ret);
···
 static int submit_lookup_cmds(struct msm_gem_submit *submit,
 		struct drm_msm_gem_submit *args, struct drm_file *file)
 {
-	unsigned i, sz;
+	unsigned i;
+	size_t sz;
 	int ret = 0;
 
 	for (i = 0; i < args->nr_cmds; i++) {
···
 	/* The scheduler owns a ref now: */
 	msm_gem_submit_get(submit);
 
-	drm_sched_entity_push_job(&submit->base, &queue->entity);
+	drm_sched_entity_push_job(&submit->base, queue->entity);
 
 	args->fence = submit->fence_id;
 
+64-2
drivers/gpu/drm/msm/msm_gpu.h
···
#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_MIN)
 
/**
+ * struct msm_file_private - per-drm_file context
+ *
+ * @queuelock:    synchronizes access to submitqueues list
+ * @submitqueues: list of &msm_gpu_submitqueue created by userspace
+ * @queueid:      counter incremented each time a submitqueue is created,
+ *                used to assign &msm_gpu_submitqueue.id
+ * @aspace:       the per-process GPU address-space
+ * @ref:          reference count
+ * @seqno:        unique per process seqno
+ */
+struct msm_file_private {
+	rwlock_t queuelock;
+	struct list_head submitqueues;
+	int queueid;
+	struct msm_gem_address_space *aspace;
+	struct kref ref;
+	int seqno;
+
+	/**
+	 * entities:
+	 *
+	 * Table of per-priority-level sched entities used by submitqueues
+	 * associated with this &drm_file.  Because some userspace apps
+	 * make assumptions about rendering from multiple gl contexts
+	 * (of the same priority) within the process happening in FIFO
+	 * order without requiring any fencing beyond MakeCurrent(), we
+	 * create at most one &drm_sched_entity per-process per-priority-
+	 * level.
+	 */
+	struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS];
+};
+
+/**
 * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
 *
 * @gpu: the gpu instance
···
}
 
/**
+ * struct msm_gpu_submitqueues - Userspace created context.
+ *
 * A submitqueue is associated with a gl context or vk queue (or equiv)
 * in userspace.
 *
···
 *   seqno, protected by submitqueue lock
 * @lock: submitqueue lock
 * @ref: reference count
- * @entity: the submit job-queue
+ * @entity: the submit job-queue
 */
struct msm_gpu_submitqueue {
 	int id;
···
 	struct idr fence_idr;
 	struct mutex lock;
 	struct kref ref;
-	struct drm_sched_entity entity;
+	struct drm_sched_entity *entity;
};
 
struct msm_gpu_state_bo {
···
 
int msm_gpu_pm_suspend(struct msm_gpu *gpu);
int msm_gpu_pm_resume(struct msm_gpu *gpu);
+
+int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx);
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+		u32 id);
+int msm_submitqueue_create(struct drm_device *drm,
+		struct msm_file_private *ctx,
+		u32 prio, u32 flags, u32 *id);
+int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+		struct drm_msm_submitqueue_query *args);
+int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id);
+void msm_submitqueue_close(struct msm_file_private *ctx);
+
+void msm_submitqueue_destroy(struct kref *kref);
+
+void __msm_file_private_destroy(struct kref *kref);
+
+static inline void msm_file_private_put(struct msm_file_private *ctx)
+{
+	kref_put(&ctx->ref, __msm_file_private_destroy);
+}
+
+static inline struct msm_file_private *msm_file_private_get(
+	struct msm_file_private *ctx)
+{
+	kref_get(&ctx->ref);
+	return ctx;
+}
 
void msm_devfreq_init(struct msm_gpu *gpu);
void msm_devfreq_cleanup(struct msm_gpu *gpu);
+6
drivers/gpu/drm/msm/msm_gpu_devfreq.c
···
 	unsigned int idle_time;
 	unsigned long target_freq = df->idle_freq;
 
+	if (!df->devfreq)
+		return;
+
 	/*
 	 * Hold devfreq lock to synchronize with get_dev_status()/
 	 * target() callbacks
···
{
 	struct msm_gpu_devfreq *df = &gpu->devfreq;
 	unsigned long idle_freq, target_freq = 0;
+
+	if (!df->devfreq)
+		return;
 
 	/*
 	 * Hold devfreq lock to synchronize with get_dev_status()/
+58-14
drivers/gpu/drm/msm/msm_submitqueue.c
···
 
#include "msm_gpu.h"
 
+void __msm_file_private_destroy(struct kref *kref)
+{
+	struct msm_file_private *ctx = container_of(kref,
+		struct msm_file_private, ref);
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) {
+		if (!ctx->entities[i])
+			continue;
+
+		drm_sched_entity_destroy(ctx->entities[i]);
+		kfree(ctx->entities[i]);
+	}
+
+	msm_gem_address_space_put(ctx->aspace);
+	kfree(ctx);
+}
+
void msm_submitqueue_destroy(struct kref *kref)
{
 	struct msm_gpu_submitqueue *queue = container_of(kref,
 		struct msm_gpu_submitqueue, ref);
 
 	idr_destroy(&queue->fence_idr);
-
-	drm_sched_entity_destroy(&queue->entity);
 
 	msm_file_private_put(queue->ctx);
 
···
 	}
}
 
+static struct drm_sched_entity *
+get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
+		 unsigned ring_nr, enum drm_sched_priority sched_prio)
+{
+	static DEFINE_MUTEX(entity_lock);
+	unsigned idx = (ring_nr * NR_SCHED_PRIORITIES) + sched_prio;
+
+	/* We should have already validated that the requested priority is
+	 * valid by the time we get here.
+	 */
+	if (WARN_ON(idx >= ARRAY_SIZE(ctx->entities)))
+		return ERR_PTR(-EINVAL);
+
+	mutex_lock(&entity_lock);
+
+	if (!ctx->entities[idx]) {
+		struct drm_sched_entity *entity;
+		struct drm_gpu_scheduler *sched = &ring->sched;
+		int ret;
+
+		entity = kzalloc(sizeof(*ctx->entities[idx]), GFP_KERNEL);
+
+		ret = drm_sched_entity_init(entity, sched_prio, &sched, 1, NULL);
+		if (ret) {
+			kfree(entity);
+			return ERR_PTR(ret);
+		}
+
+		ctx->entities[idx] = entity;
+	}
+
+	mutex_unlock(&entity_lock);
+
+	return ctx->entities[idx];
+}
+
int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
		u32 prio, u32 flags, u32 *id)
{
 	struct msm_drm_private *priv = drm->dev_private;
 	struct msm_gpu_submitqueue *queue;
-	struct msm_ringbuffer *ring;
-	struct drm_gpu_scheduler *sched;
 	enum drm_sched_priority sched_prio;
 	unsigned ring_nr;
 	int ret;
···
 	queue->flags = flags;
 	queue->ring_nr = ring_nr;
 
-	ring = priv->gpu->rb[ring_nr];
-	sched = &ring->sched;
-
-	ret = drm_sched_entity_init(&queue->entity,
-		sched_prio, &sched, 1, NULL);
-	if (ret) {
+	queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr],
+					 ring_nr, sched_prio);
+	if (IS_ERR(queue->entity)) {
+		ret = PTR_ERR(queue->entity);
 		kfree(queue);
 		return ret;
 	}
···
 	 * than the middle priority level.
 	 */
 	default_prio = DIV_ROUND_UP(max_priority, 2);
-
-	INIT_LIST_HEAD(&ctx->submitqueues);
-
-	rwlock_init(&ctx->queuelock);
 
 	return msm_submitqueue_create(drm, ctx, default_prio, 0, NULL);
}
···
 	depends on OF
 	depends on I2C
 	depends on BACKLIGHT_CLASS_DEVICE
+	select CRC32
 	help
 	  The panel is used with different sizes LCDs, from 480x272 to
 	  1280x800, and 24 bit per pixel.
···
 	}
 
 	/*
-	 * Create and initialize the encoder. On Gen3 skip the LVDS1 output if
+	 * Create and initialize the encoder. On Gen3, skip the LVDS1 output if
 	 * the LVDS1 encoder is used as a companion for LVDS0 in dual-link
-	 * mode.
+	 * mode, or any LVDS output if it isn't connected. The latter may happen
+	 * on D3 or E3 as the LVDS encoders are needed to provide the pixel
+	 * clock to the DU, even when the LVDS outputs are not used.
 	 */
-	if (rcdu->info->gen >= 3 && output == RCAR_DU_OUTPUT_LVDS1) {
-		if (rcar_lvds_dual_link(bridge))
+	if (rcdu->info->gen >= 3) {
+		if (output == RCAR_DU_OUTPUT_LVDS1 &&
+		    rcar_lvds_dual_link(bridge))
+			return -ENOLINK;
+
+		if ((output == RCAR_DU_OUTPUT_LVDS0 ||
+		     output == RCAR_DU_OUTPUT_LVDS1) &&
+		    !rcar_lvds_is_connected(bridge))
 			return -ENOLINK;
 	}
 
···
 
 	if (reg & FXLS8962AF_INT_STATUS_SRC_BUF) {
 		ret = fxls8962af_fifo_flush(indio_dev);
-		if (ret)
+		if (ret < 0)
 			return IRQ_NONE;
 
 		return IRQ_HANDLED;
···
 	if (dec > st->info->max_dec)
 		dec = st->info->max_dec;
 
-	ret = adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec);
+	ret = __adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec);
 	if (ret)
 		goto error;
 
+	adis_dev_unlock(&st->adis);
 	/*
 	 * If decimation is used, then gyro and accel data will have meaningful
 	 * bits on the LSB registers. This info is used on the trigger handler.
···
 		ret = wait_event_timeout(opt->result_ready_queue,
 				opt->result_ready,
 				msecs_to_jiffies(OPT3001_RESULT_READY_LONG));
+		if (ret == 0)
+			return -ETIMEDOUT;
 	} else {
 		/* Sleep for result ready time */
 		timeout = (opt->int_time == OPT3001_INT_TIME_SHORT) ?
···
 	/* Disallow IRQ to access the device while lock is active */
 	opt->ok_to_ignore_lock = false;
 
-	if (ret == 0)
-		return -ETIMEDOUT;
-	else if (ret < 0)
+	if (ret < 0)
 		return ret;
 
 	if (opt->use_irq) {
+1
drivers/iio/test/Makefile
···
 
# Keep in alphabetical order
obj-$(CONFIG_IIO_TEST_FORMAT) += iio-test-format.o
+CFLAGS_iio-test-format.o += $(DISABLE_STRUCTLEAK_PLUGIN)
···
 	unsigned int z2 = touch_info[st->ch_map[GRTS_CH_Z2]];
 	unsigned int Rt;
 
-	Rt = z2;
-	Rt -= z1;
-	Rt *= st->x_plate_ohms;
-	Rt = DIV_ROUND_CLOSEST(Rt, 16);
-	Rt *= x;
-	Rt /= z1;
-	Rt = DIV_ROUND_CLOSEST(Rt, 256);
-	/*
-	 * On increased pressure the resistance (Rt) is decreasing
-	 * so, convert values to make it looks as real pressure.
-	 */
-	if (Rt < GRTS_DEFAULT_PRESSURE_MAX)
-		press = GRTS_DEFAULT_PRESSURE_MAX - Rt;
+	if (likely(x && z1)) {
+		Rt = z2;
+		Rt -= z1;
+		Rt *= st->x_plate_ohms;
+		Rt = DIV_ROUND_CLOSEST(Rt, 16);
+		Rt *= x;
+		Rt /= z1;
+		Rt = DIV_ROUND_CLOSEST(Rt, 256);
+		/*
+		 * On increased pressure the resistance (Rt) is
+		 * decreasing so, convert values to make it looks as
+		 * real pressure.
+		 */
+		if (Rt < GRTS_DEFAULT_PRESSURE_MAX)
+			press = GRTS_DEFAULT_PRESSURE_MAX - Rt;
+	}
 	}
 
 	if ((!x && !y) || (st->pressure && (press < st->pressure_min))) {
+8
drivers/iommu/Kconfig
···
 	  'arm-smmu.disable_bypass' will continue to override this
 	  config.
 
+config ARM_SMMU_QCOM
+	def_tristate y
+	depends on ARM_SMMU && ARCH_QCOM
+	select QCOM_SCM
+	help
+	  When running on a Qualcomm platform that has the custom variant
+	  of the ARM SMMU, this needs to be built into the SMMU driver.
+
config ARM_SMMU_V3
	tristate "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
	depends on ARM64
···
 	struct mapped_device *md = tio->md;
 	struct dm_target *ti = md->immutable_target;
 
+	/*
+	 * blk-mq's unquiesce may come from outside events, such as
+	 * elevator switch, updating nr_requests or others, and request may
+	 * come during suspend, so simply ask for blk-mq to requeue it.
+	 */
+	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)))
+		return BLK_STS_RESOURCE;
+
 	if (unlikely(!ti)) {
 		int srcu_idx;
 		struct dm_table *map = dm_get_live_table(md, &srcu_idx);
+12-3
drivers/md/dm-verity-target.c
···
 	struct bvec_iter start;
 	unsigned b;
 	struct crypto_wait wait;
+	struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
 
 	for (b = 0; b < io->n_blocks; b++) {
 		int r;
···
 		else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA,
 					   cur_block, NULL, &start) == 0)
 			continue;
-		else if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA,
-					   cur_block))
-			return -EIO;
+		else {
+			if (bio->bi_status) {
+				/*
+				 * Error correction failed; Just return error
+				 */
+				return -EIO;
+			}
+			if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA,
+					      cur_block))
+				return -EIO;
+		}
 	}
 
 	return 0;
+10-7
drivers/md/dm.c
···
 			    false, 0, &io->stats_aux);
}
 
-static void end_io_acct(struct dm_io *io)
+static void end_io_acct(struct mapped_device *md, struct bio *bio,
+			unsigned long start_time, struct dm_stats_aux *stats_aux)
{
-	struct mapped_device *md = io->md;
-	struct bio *bio = io->orig_bio;
-	unsigned long duration = jiffies - io->start_time;
+	unsigned long duration = jiffies - start_time;
 
-	bio_end_io_acct(bio, io->start_time);
+	bio_end_io_acct(bio, start_time);
 
 	if (unlikely(dm_stats_used(&md->stats)))
 		dm_stats_account_io(&md->stats, bio_data_dir(bio),
 				    bio->bi_iter.bi_sector, bio_sectors(bio),
-				    true, duration, &io->stats_aux);
+				    true, duration, stats_aux);
 
 	/* nudge anyone waiting on suspend queue */
 	if (unlikely(wq_has_sleeper(&md->wait)))
···
 	blk_status_t io_error;
 	struct bio *bio;
 	struct mapped_device *md = io->md;
+	unsigned long start_time = 0;
+	struct dm_stats_aux stats_aux;
 
 	/* Push-back supersedes any I/O errors */
 	if (unlikely(error)) {
···
 	}
 
 	io_error = io->status;
-	end_io_acct(io);
+	start_time = io->start_time;
+	stats_aux = io->stats_aux;
 	free_io(md, io);
+	end_io_acct(md, bio, start_time, &stats_aux);
 
 	if (io_error == BLK_STS_DM_REQUEUE)
 		return;
+1
drivers/misc/Kconfig
···
 	tristate "HiSilicon Hi6421v600 IRQ and powerkey"
 	depends on OF
 	depends on SPMI
+	depends on HAS_IOMEM
 	select MFD_CORE
 	select REGMAP_SPMI
 	help
···
free_seq_arr:
 	kfree(cs_seq_arr);
 
-	/* update output args */
-	memset(args, 0, sizeof(*args));
 	if (rc)
 		return rc;
+
+	if (mcs_data.wait_status == -ERESTARTSYS) {
+		dev_err_ratelimited(hdev->dev,
+				"user process got signal while waiting for Multi-CS\n");
+		return -EINTR;
+	}
+
+	/* update output args */
+	memset(args, 0, sizeof(*args));
 
 	if (mcs_data.completion_bitmap) {
 		args->out.status = HL_WAIT_CS_STATUS_COMPLETED;
···
 		/* update if some CS was gone */
 		if (mcs_data.timestamp)
 			args->out.flags |= HL_WAIT_CS_STATUS_FLAG_GONE;
-	} else if (mcs_data.wait_status == -ERESTARTSYS) {
-		args->out.status = HL_WAIT_CS_STATUS_INTERRUPTED;
 	} else {
 		args->out.status = HL_WAIT_CS_STATUS_BUSY;
 	}
···
 	rc = _hl_cs_wait_ioctl(hdev, hpriv->ctx, args->in.timeout_us, seq,
 				&status, &timestamp);
 
+	if (rc == -ERESTARTSYS) {
+		dev_err_ratelimited(hdev->dev,
+			"user process got signal while waiting for CS handle %llu\n",
+			seq);
+		return -EINTR;
+	}
+
 	memset(args, 0, sizeof(*args));
 
 	if (rc) {
-		if (rc == -ERESTARTSYS) {
-			dev_err_ratelimited(hdev->dev,
-				"user process got signal while waiting for CS handle %llu\n",
-				seq);
-			args->out.status = HL_WAIT_CS_STATUS_INTERRUPTED;
-			rc = -EINTR;
-		} else if (rc == -ETIMEDOUT) {
+		if (rc == -ETIMEDOUT) {
 			dev_err_ratelimited(hdev->dev,
 				"CS %llu has timed-out while user process is waiting for it\n",
 				seq);
···
 		dev_err_ratelimited(hdev->dev,
 			"user process got signal while waiting for interrupt ID %d\n",
 			interrupt->interrupt_id);
-		*status = HL_WAIT_CS_STATUS_INTERRUPTED;
 		rc = -EINTR;
 	} else {
 		*status = CS_WAIT_STATUS_BUSY;
···
 			args->in.interrupt_timeout_us, args->in.addr,
 			args->in.target, interrupt_offset, &status);
 
-	memset(args, 0, sizeof(*args));
-
 	if (rc) {
 		if (rc != -EINTR)
 			dev_err_ratelimited(hdev->dev,
···
 
 		return rc;
 	}
+
+	memset(args, 0, sizeof(*args));
 
 	switch (status) {
+8-4
drivers/misc/mei/hbm.c
···
 
 	if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
 	    dev->hbm_state != MEI_HBM_STARTING) {
-		if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+		if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+		    dev->dev_state == MEI_DEV_POWERING_DOWN) {
 			dev_dbg(dev->dev, "hbm: start: on shutdown, ignoring\n");
 			return 0;
 		}
···
 
 	if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
 	    dev->hbm_state != MEI_HBM_DR_SETUP) {
-		if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+		if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+		    dev->dev_state == MEI_DEV_POWERING_DOWN) {
 			dev_dbg(dev->dev, "hbm: dma setup response: on shutdown, ignoring\n");
 			return 0;
 		}
···
 
 	if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
 	    dev->hbm_state != MEI_HBM_CLIENT_PROPERTIES) {
-		if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+		if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+		    dev->dev_state == MEI_DEV_POWERING_DOWN) {
 			dev_dbg(dev->dev, "hbm: properties response: on shutdown, ignoring\n");
 			return 0;
 		}
···
 
 	if (dev->dev_state != MEI_DEV_INIT_CLIENTS ||
 	    dev->hbm_state != MEI_HBM_ENUM_CLIENTS) {
-		if (dev->dev_state == MEI_DEV_POWER_DOWN) {
+		if (dev->dev_state == MEI_DEV_POWER_DOWN ||
+		    dev->dev_state == MEI_DEV_POWERING_DOWN) {
 			dev_dbg(dev->dev, "hbm: enumeration response: on shutdown, ignoring\n");
 			return 0;
 		}
+1
drivers/misc/mei/hw-me-regs.h
···
 #define MEI_DEV_ID_CDF        0x18D3  /* Cedar Fork */
 
 #define MEI_DEV_ID_ICP_LP     0x34E0  /* Ice Lake Point LP */
+#define MEI_DEV_ID_ICP_N      0x38E0  /* Ice Lake Point N */
 
 #define MEI_DEV_ID_JSP_N      0x4DE0  /* Jasper Lake Point N */
 
···
 void ksz_switch_remove(struct ksz_device *dev)
 {
 	/* timer started */
-	if (dev->mib_read_interval)
+	if (dev->mib_read_interval) {
+		dev->mib_read_interval = 0;
 		cancel_delayed_work_sync(&dev->mib_read);
+	}
 
 	dev->dev_ops->exit(dev);
 	dsa_unregister_switch(dev->ds);
+108-17
drivers/net/dsa/mv88e6xxx/chip.c
···
 
 #include <linux/bitfield.h>
 #include <linux/delay.h>
+#include <linux/dsa/mv88e6xxx.h>
 #include <linux/etherdevice.h>
 #include <linux/ethtool.h>
 #include <linux/if_bridge.h>
···
 	ops = chip->info->ops;
 
 	mv88e6xxx_reg_lock(chip);
-	if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
+	/* Internal PHYs propagate their configuration directly to the MAC.
+	 * External PHYs depend on whether the PPU is enabled for this port.
+	 */
+	if (((!mv88e6xxx_phy_is_internal(ds, port) &&
+	      !mv88e6xxx_port_ppu_updates(chip, port)) ||
 	     mode == MLO_AN_FIXED) && ops->port_sync_link)
 		err = ops->port_sync_link(chip, port, mode, false);
 	mv88e6xxx_reg_unlock(chip);
···
 	ops = chip->info->ops;
 
 	mv88e6xxx_reg_lock(chip);
-	if (!mv88e6xxx_port_ppu_updates(chip, port) || mode == MLO_AN_FIXED) {
+	/* Internal PHYs propagate their configuration directly to the MAC.
+	 * External PHYs depend on whether the PPU is enabled for this port.
+	 */
+	if ((!mv88e6xxx_phy_is_internal(ds, port) &&
+	     !mv88e6xxx_port_ppu_updates(chip, port)) ||
+	    mode == MLO_AN_FIXED) {
 		/* FIXME: for an automedia port, should we force the link
 		 * down here - what if the link comes up due to "other" media
 		 * while we're bringing the port up, how is the exclusivity
···
 	return 0;
 }
 
+static int mv88e6xxx_port_commit_pvid(struct mv88e6xxx_chip *chip, int port)
+{
+	struct dsa_port *dp = dsa_to_port(chip->ds, port);
+	struct mv88e6xxx_port *p = &chip->ports[port];
+	u16 pvid = MV88E6XXX_VID_STANDALONE;
+	bool drop_untagged = false;
+	int err;
+
+	if (dp->bridge_dev) {
+		if (br_vlan_enabled(dp->bridge_dev)) {
+			pvid = p->bridge_pvid.vid;
+			drop_untagged = !p->bridge_pvid.valid;
+		} else {
+			pvid = MV88E6XXX_VID_BRIDGED;
+		}
+	}
+
+	err = mv88e6xxx_port_set_pvid(chip, port, pvid);
+	if (err)
+		return err;
+
+	return mv88e6xxx_port_drop_untagged(chip, port, drop_untagged);
+}
+
 static int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
 					 bool vlan_filtering,
 					 struct netlink_ext_ack *extack)
···
 		return -EOPNOTSUPP;
 
 	mv88e6xxx_reg_lock(chip);
+
 	err = mv88e6xxx_port_set_8021q_mode(chip, port, mode);
+	if (err)
+		goto unlock;
+
+	err = mv88e6xxx_port_commit_pvid(chip, port);
+	if (err)
+		goto unlock;
+
+unlock:
 	mv88e6xxx_reg_unlock(chip);
 
 	return err;
···
 	u16 fid;
 	int err;
 
-	/* Null VLAN ID corresponds to the port private database */
+	/* Ports have two private address databases: one for when the port is
+	 * standalone and one for when the port is under a bridge and the
+	 * 802.1Q mode is disabled. When the port is standalone, DSA wants its
+	 * address database to remain 100% empty, so we never load an ATU entry
+	 * into a standalone port's database. Therefore, translate the null
+	 * VLAN ID into the port's database used for VLAN-unaware bridging.
+	 */
 	if (vid == 0) {
-		err = mv88e6xxx_port_get_fid(chip, port, &fid);
-		if (err)
-			return err;
+		fid = MV88E6XXX_FID_BRIDGED;
 	} else {
 		err = mv88e6xxx_vtu_get(chip, vid, &vlan);
 		if (err)
···
 	struct mv88e6xxx_chip *chip = ds->priv;
 	bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
 	bool pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
+	struct mv88e6xxx_port *p = &chip->ports[port];
 	bool warn;
 	u8 member;
 	int err;
···
 	}
 
 	if (pvid) {
-		err = mv88e6xxx_port_set_pvid(chip, port, vlan->vid);
-		if (err) {
-			dev_err(ds->dev, "p%d: failed to set PVID %d\n",
-				port, vlan->vid);
+		p->bridge_pvid.vid = vlan->vid;
+		p->bridge_pvid.valid = true;
+
+		err = mv88e6xxx_port_commit_pvid(chip, port);
+		if (err)
 			goto out;
-		}
+	} else if (vlan->vid && p->bridge_pvid.vid == vlan->vid) {
+		/* The old pvid was reinstalled as a non-pvid VLAN */
+		p->bridge_pvid.valid = false;
+
+		err = mv88e6xxx_port_commit_pvid(chip, port);
+		if (err)
+			goto out;
 	}
+
 out:
 	mv88e6xxx_reg_unlock(chip);
···
 	const struct switchdev_obj_port_vlan *vlan)
 {
 	struct mv88e6xxx_chip *chip = ds->priv;
+	struct mv88e6xxx_port *p = &chip->ports[port];
 	int err = 0;
 	u16 pvid;
···
 		goto unlock;
 
 	if (vlan->vid == pvid) {
-		err = mv88e6xxx_port_set_pvid(chip, port, 0);
+		p->bridge_pvid.valid = false;
+
+		err = mv88e6xxx_port_commit_pvid(chip, port);
 		if (err)
 			goto unlock;
 	}
···
 	int err;
 
 	mv88e6xxx_reg_lock(chip);
+
 	err = mv88e6xxx_bridge_map(chip, br);
+	if (err)
+		goto unlock;
+
+	err = mv88e6xxx_port_commit_pvid(chip, port);
+	if (err)
+		goto unlock;
+
+unlock:
 	mv88e6xxx_reg_unlock(chip);
 
 	return err;
···
 	struct net_device *br)
 {
 	struct mv88e6xxx_chip *chip = ds->priv;
+	int err;
 
 	mv88e6xxx_reg_lock(chip);
+
 	if (mv88e6xxx_bridge_map(chip, br) ||
 	    mv88e6xxx_port_vlan_map(chip, port))
 		dev_err(ds->dev, "failed to remap in-chip Port VLAN\n");
+
+	err = mv88e6xxx_port_commit_pvid(chip, port);
+	if (err)
+		dev_err(ds->dev,
+			"port %d failed to restore standalone pvid: %pe\n",
+			port, ERR_PTR(err));
+
 	mv88e6xxx_reg_unlock(chip);
 }
···
 	if (err)
 		return err;
 
+	/* Associate MV88E6XXX_VID_BRIDGED with MV88E6XXX_FID_BRIDGED in the
+	 * ATU by virtue of the fact that mv88e6xxx_atu_new() will pick it as
+	 * the first free FID after MV88E6XXX_FID_STANDALONE. This will be used
+	 * as the private PVID on ports under a VLAN-unaware bridge.
+	 * Shared (DSA and CPU) ports must also be members of it, to translate
+	 * the VID from the DSA tag into MV88E6XXX_FID_BRIDGED, instead of
+	 * relying on their port default FID.
+	 */
+	err = mv88e6xxx_port_vlan_join(chip, port, MV88E6XXX_VID_BRIDGED,
+				       MV88E6XXX_G1_VTU_DATA_MEMBER_TAG_UNTAGGED,
+				       false);
+	if (err)
+		return err;
+
 	if (chip->info->ops->port_set_jumbo_size) {
 		err = chip->info->ops->port_set_jumbo_size(chip, port, 10218);
 		if (err)
···
 	 * database, and allow bidirectional communication between the
 	 * CPU and DSA port(s), and the other ports.
 	 */
-	err = mv88e6xxx_port_set_fid(chip, port, 0);
+	err = mv88e6xxx_port_set_fid(chip, port, MV88E6XXX_FID_STANDALONE);
 	if (err)
 		return err;
···
 		}
 	}
 
+	err = mv88e6xxx_vtu_setup(chip);
+	if (err)
+		goto unlock;
+
 	/* Setup Switch Port Registers */
 	for (i = 0; i < mv88e6xxx_num_ports(chip); i++) {
 		if (dsa_is_unused_port(ds, i))
···
 		goto unlock;
 
 	err = mv88e6xxx_phy_setup(chip);
-	if (err)
-		goto unlock;
-
-	err = mv88e6xxx_vtu_setup(chip);
 	if (err)
 		goto unlock;
···
 static int sja1105_change_rxtstamping(struct sja1105_private *priv,
 				      bool on)
 {
+	struct sja1105_tagger_data *tagger_data = &priv->tagger_data;
 	struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
 	struct sja1105_general_params_entry *general_params;
 	struct sja1105_table *table;
···
 		priv->tagger_data.stampable_skb = NULL;
 	}
 	ptp_cancel_worker_sync(ptp_data->clock);
-	skb_queue_purge(&ptp_data->skb_txtstamp_queue);
+	skb_queue_purge(&tagger_data->skb_txtstamp_queue);
 	skb_queue_purge(&ptp_data->skb_rxtstamp_queue);
 
 	return sja1105_static_config_reload(priv, SJA1105_RX_HWTSTAMPING);
···
 	return priv->info->rxtstamp(ds, port, skb);
 }
 
-void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port, u8 ts_id,
-				 enum sja1110_meta_tstamp dir, u64 tstamp)
-{
-	struct sja1105_private *priv = ds->priv;
-	struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
-	struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
-	struct skb_shared_hwtstamps shwt = {0};
-
-	/* We don't care about RX timestamps on the CPU port */
-	if (dir == SJA1110_META_TSTAMP_RX)
-		return;
-
-	spin_lock(&ptp_data->skb_txtstamp_queue.lock);
-
-	skb_queue_walk_safe(&ptp_data->skb_txtstamp_queue, skb, skb_tmp) {
-		if (SJA1105_SKB_CB(skb)->ts_id != ts_id)
-			continue;
-
-		__skb_unlink(skb, &ptp_data->skb_txtstamp_queue);
-		skb_match = skb;
-
-		break;
-	}
-
-	spin_unlock(&ptp_data->skb_txtstamp_queue.lock);
-
-	if (WARN_ON(!skb_match))
-		return;
-
-	shwt.hwtstamp = ns_to_ktime(sja1105_ticks_to_ns(tstamp));
-	skb_complete_tx_timestamp(skb_match, &shwt);
-}
-EXPORT_SYMBOL_GPL(sja1110_process_meta_tstamp);
-
 /* In addition to cloning the skb which is done by the common
  * sja1105_port_txtstamp, we need to generate a timestamp ID and save the
  * packet to the TX timestamping queue.
···
 {
 	struct sk_buff *clone = SJA1105_SKB_CB(skb)->clone;
 	struct sja1105_private *priv = ds->priv;
-	struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
 	struct sja1105_port *sp = &priv->ports[port];
 	u8 ts_id;
···
 
 	spin_unlock(&sp->data->meta_lock);
 
-	skb_queue_tail(&ptp_data->skb_txtstamp_queue, clone);
+	skb_queue_tail(&sp->data->skb_txtstamp_queue, clone);
 }
 
 /* Called from dsa_skb_tx_timestamp. This callback is just to clone
···
 	/* Only used on SJA1105 */
 	skb_queue_head_init(&ptp_data->skb_rxtstamp_queue);
 	/* Only used on SJA1110 */
-	skb_queue_head_init(&ptp_data->skb_txtstamp_queue);
+	skb_queue_head_init(&tagger_data->skb_txtstamp_queue);
 	spin_lock_init(&tagger_data->meta_lock);
 
 	ptp_data->clock = ptp_clock_register(&ptp_data->caps, ds->dev);
···
 void sja1105_ptp_clock_unregister(struct dsa_switch *ds)
 {
 	struct sja1105_private *priv = ds->priv;
+	struct sja1105_tagger_data *tagger_data = &priv->tagger_data;
 	struct sja1105_ptp_data *ptp_data = &priv->ptp_data;
 
 	if (IS_ERR_OR_NULL(ptp_data->clock))
···
 
 	del_timer_sync(&ptp_data->extts_timer);
 	ptp_cancel_worker_sync(ptp_data->clock);
-	skb_queue_purge(&ptp_data->skb_txtstamp_queue);
+	skb_queue_purge(&tagger_data->skb_txtstamp_queue);
 	skb_queue_purge(&ptp_data->skb_rxtstamp_queue);
 	ptp_clock_unregister(ptp_data->clock);
 	ptp_data->clock = NULL;
-19
drivers/net/dsa/sja1105/sja1105_ptp.h
···
 
 #if IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP)
 
-/* Timestamps are in units of 8 ns clock ticks (equivalent to
- * a fixed 125 MHz clock).
- */
-#define SJA1105_TICK_NS			8
-
-static inline s64 ns_to_sja1105_ticks(s64 ns)
-{
-	return ns / SJA1105_TICK_NS;
-}
-
-static inline s64 sja1105_ticks_to_ns(s64 ticks)
-{
-	return ticks * SJA1105_TICK_NS;
-}
-
 /* Calculate the first base_time in the future that satisfies this
  * relationship:
  *
···
 	struct timer_list extts_timer;
 	/* Used only on SJA1105 to reconstruct partial timestamps */
 	struct sk_buff_head skb_rxtstamp_queue;
-	/* Used on SJA1110 where meta frames are generated only for
-	 * 2-step TX timestamps
-	 */
-	struct sk_buff_head skb_txtstamp_queue;
 	struct ptp_clock_info caps;
 	struct ptp_clock *clock;
 	struct sja1105_ptp_cmd cmd;
+1
drivers/net/ethernet/Kconfig
···
 config KORINA
 	tristate "Korina (IDT RC32434) Ethernet support"
 	depends on MIKROTIK_RB532 || COMPILE_TEST
+	select CRC32
 	select MII
 	help
 	  If you have a Mikrotik RouterBoard 500 or IDT RC32434
···
 		agg_count += mqprio->qopt.count[i];
 	}
 
-	if (priv->channels.params.num_channels < agg_count) {
-		netdev_err(netdev, "Num of queues (%d) exceeds available (%d)\n",
+	if (priv->channels.params.num_channels != agg_count) {
+		netdev_err(netdev, "Num of queues (%d) does not match available (%d)\n",
 			   agg_count, priv->channels.params.num_channels);
 		return -EINVAL;
 	}
···
 	return mlx5_set_port_fcs(mdev, !enable);
 }
 
+static int mlx5e_set_rx_port_ts(struct mlx5_core_dev *mdev, bool enable)
+{
+	u32 in[MLX5_ST_SZ_DW(pcmr_reg)] = {};
+	bool supported, curr_state;
+	int err;
+
+	if (!MLX5_CAP_GEN(mdev, ports_check))
+		return 0;
+
+	err = mlx5_query_ports_check(mdev, in, sizeof(in));
+	if (err)
+		return err;
+
+	supported = MLX5_GET(pcmr_reg, in, rx_ts_over_crc_cap);
+	curr_state = MLX5_GET(pcmr_reg, in, rx_ts_over_crc);
+
+	if (!supported || enable == curr_state)
+		return 0;
+
+	MLX5_SET(pcmr_reg, in, local_port, 1);
+	MLX5_SET(pcmr_reg, in, rx_ts_over_crc, enable);
+
+	return mlx5_set_ports_check(mdev, in, sizeof(in));
+}
+
 static int set_feature_rx_fcs(struct net_device *netdev, bool enable)
 {
 	struct mlx5e_priv *priv = netdev_priv(netdev);
+	struct mlx5e_channels *chs = &priv->channels;
+	struct mlx5_core_dev *mdev = priv->mdev;
 	int err;
 
 	mutex_lock(&priv->state_lock);
 
-	priv->channels.params.scatter_fcs_en = enable;
-	err = mlx5e_modify_channels_scatter_fcs(&priv->channels, enable);
-	if (err)
-		priv->channels.params.scatter_fcs_en = !enable;
+	if (enable) {
+		err = mlx5e_set_rx_port_ts(mdev, false);
+		if (err)
+			goto out;
 
+		chs->params.scatter_fcs_en = true;
+		err = mlx5e_modify_channels_scatter_fcs(chs, true);
+		if (err) {
+			chs->params.scatter_fcs_en = false;
+			mlx5e_set_rx_port_ts(mdev, true);
+		}
+	} else {
+		chs->params.scatter_fcs_en = false;
+		err = mlx5e_modify_channels_scatter_fcs(chs, false);
+		if (err) {
+			chs->params.scatter_fcs_en = true;
+			goto out;
+		}
+		err = mlx5e_set_rx_port_ts(mdev, true);
+		if (err) {
+			mlx5_core_warn(mdev, "Failed to set RX port timestamp %d\n", err);
+			err = 0;
+		}
+	}
+
+out:
 	mutex_unlock(&priv->state_lock);
-
 	return err;
 }
+5-1
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
···
 	params->mqprio.num_tc       = 1;
 	params->tunneled_offload_en = false;
 
+	/* Set an initial non-zero value, so that mlx5e_select_queue won't
+	 * divide by zero if called before first activating channels.
+	 */
+	priv->num_tc_x_num_ch = params->num_channels * params->mqprio.num_tc;
+
 	mlx5_query_min_inline(mdev, &params->tx_min_inline_mode);
 }
···
 	netdev->hw_features    |= NETIF_F_RXCSUM;
 
 	netdev->features |= netdev->hw_features;
-	netdev->features |= NETIF_F_VLAN_CHALLENGED;
 	netdev->features |= NETIF_F_NETNS_LOCAL;
 }
···
 #define MLXSW_THERMAL_ZONE_MAX_NAME	16
 #define MLXSW_THERMAL_TEMP_SCORE_MAX	GENMASK(31, 0)
 #define MLXSW_THERMAL_MAX_STATE	10
+#define MLXSW_THERMAL_MIN_STATE	2
 #define MLXSW_THERMAL_MAX_DUTY	255
-/* Minimum and maximum fan allowed speed in percent: from 20% to 100%. Values
- * MLXSW_THERMAL_MAX_STATE + x, where x is between 2 and 10 are used for
- * setting fan speed dynamic minimum. For example, if value is set to 14 (40%)
- * cooling levels vector will be set to 4, 4, 4, 4, 4, 5, 6, 7, 8, 9, 10 to
- * introduce PWM speed in percent: 40, 40, 40, 40, 40, 50, 60. 70, 80, 90, 100.
- */
-#define MLXSW_THERMAL_SPEED_MIN		(MLXSW_THERMAL_MAX_STATE + 2)
-#define MLXSW_THERMAL_SPEED_MAX		(MLXSW_THERMAL_MAX_STATE * 2)
-#define MLXSW_THERMAL_SPEED_MIN_LEVEL	2		/* 20% */
 
 /* External cooling devices, allowed for binding to mlxsw thermal zones. */
 static char * const mlxsw_thermal_external_allowed_cdev[] = {
···
 	struct mlxsw_thermal *thermal = cdev->devdata;
 	struct device *dev = thermal->bus_info->dev;
 	char mfsc_pl[MLXSW_REG_MFSC_LEN];
-	unsigned long cur_state, i;
 	int idx;
-	u8 duty;
 	int err;
+
+	if (state > MLXSW_THERMAL_MAX_STATE)
+		return -EINVAL;
 
 	idx = mlxsw_get_cooling_device_idx(thermal, cdev);
 	if (idx < 0)
 		return idx;
-
-	/* Verify if this request is for changing allowed fan dynamical
-	 * minimum. If it is - update cooling levels accordingly and update
-	 * state, if current state is below the newly requested minimum state.
-	 * For example, if current state is 5, and minimal state is to be
-	 * changed from 4 to 6, thermal->cooling_levels[0 to 5] will be changed
-	 * all from 4 to 6. And state 5 (thermal->cooling_levels[4]) should be
-	 * overwritten.
-	 */
-	if (state >= MLXSW_THERMAL_SPEED_MIN &&
-	    state <= MLXSW_THERMAL_SPEED_MAX) {
-		state -= MLXSW_THERMAL_MAX_STATE;
-		for (i = 0; i <= MLXSW_THERMAL_MAX_STATE; i++)
-			thermal->cooling_levels[i] = max(state, i);
-
-		mlxsw_reg_mfsc_pack(mfsc_pl, idx, 0);
-		err = mlxsw_reg_query(thermal->core, MLXSW_REG(mfsc), mfsc_pl);
-		if (err)
-			return err;
-
-		duty = mlxsw_reg_mfsc_pwm_duty_cycle_get(mfsc_pl);
-		cur_state = mlxsw_duty_to_state(duty);
-
-		/* If current fan state is lower than requested dynamical
-		 * minimum, increase fan speed up to dynamical minimum.
-		 */
-		if (state < cur_state)
-			return 0;
-
-		state = cur_state;
-	}
-
-	if (state > MLXSW_THERMAL_MAX_STATE)
-		return -EINVAL;
 
 	/* Normalize the state to the valid speed range. */
 	state = thermal->cooling_levels[state];
···
 
 	/* Initialize cooling levels per PWM state. */
 	for (i = 0; i < MLXSW_THERMAL_MAX_STATE; i++)
-		thermal->cooling_levels[i] = max(MLXSW_THERMAL_SPEED_MIN_LEVEL,
-						 i);
+		thermal->cooling_levels[i] = max(MLXSW_THERMAL_MIN_STATE, i);
 
 	thermal->polling_delay = bus_info->low_frequency ?
 				 MLXSW_THERMAL_SLOW_POLL_INT :
···
 		return;
 	}
 
-	if (s2io_set_mac_addr(netdev, netdev->dev_addr) == FAILURE) {
+	if (do_s2io_prog_unicast(netdev, netdev->dev_addr) == FAILURE) {
 		s2io_card_down(sp);
 		pr_err("Can't restore mac addr after reset.\n");
 		return;
+14-5
drivers/net/ethernet/netronome/nfp/flower/main.c
···
 	if (err)
 		goto err_cleanup;
 
-	err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app);
-	if (err)
-		goto err_cleanup;
-
 	if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM)
 		nfp_flower_qos_init(app);
···
 		return err;
 	}
 
-	return nfp_tunnel_config_start(app);
+	err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app);
+	if (err)
+		return err;
+
+	err = nfp_tunnel_config_start(app);
+	if (err)
+		goto err_tunnel_config;
+
+	return 0;
+
+err_tunnel_config:
+	flow_indr_dev_unregister(nfp_flower_indr_setup_tc_cb, app,
+				 nfp_flower_setup_indr_tc_release);
+	return err;
 }
 
 static void nfp_flower_stop(struct nfp_app *app)
+4
drivers/net/ethernet/pensando/ionic/ionic_lif.c
···
 
 static int ionic_addr_del(struct net_device *netdev, const u8 *addr)
 {
+	/* Don't delete our own address from the uc list */
+	if (ether_addr_equal(addr, netdev->dev_addr))
+		return 0;
+
 	return ionic_lif_list_addr(netdev_priv(netdev), addr, DEL_ADDR);
 }
···
 config USB_RTL8152
 	tristate "Realtek RTL8152/RTL8153 Based USB Ethernet Adapters"
 	select MII
+	select CRC32
+	select CRYPTO
+	select CRYPTO_HASH
+	select CRYPTO_SHA256
 	help
 	  This option adds support for Realtek RTL8152 based USB 2.0
 	  10/100 Ethernet adapters and RTL8153 based USB 3.0 10/100/1000
+1-1
drivers/net/virtio_net.c
···
 	 * add_recvbuf_mergeable() + get_mergeable_buf_len()
 	 */
 	truesize = headroom ? PAGE_SIZE : truesize;
-	tailroom = truesize - len - headroom;
+	tailroom = truesize - len - headroom - (hdr_padded_len - hdr_len);
 	buf = p - headroom;
 
 	len -= hdr_len;
···
 config DELL_WMI_PRIVACY
 	bool "Dell WMI Hardware Privacy Support"
 	depends on LEDS_TRIGGER_AUDIO = y || DELL_WMI = LEDS_TRIGGER_AUDIO
+	depends on DELL_WMI
 	help
 	  This option adds integration with the "Dell Hardware Privacy"
 	  feature of Dell laptops to the dell-wmi driver.
···
 
 	gpiod_remove_lookup_table(&int3472->gpios);
 
-	if (int3472->clock.ena_gpio)
+	if (int3472->clock.cl)
 		skl_int3472_unregister_clock(int3472);
 
 	gpiod_put(int3472->clock.ena_gpio);
+3-3
drivers/platform/x86/intel_scu_ipc.c
···
 #define IPC_READ_BUFFER		0x90
 
 /* Timeout in jiffies */
-#define IPC_TIMEOUT		(5 * HZ)
+#define IPC_TIMEOUT		(10 * HZ)
 
 static struct intel_scu_ipc_dev *ipcdev; /* Only one for now */
 static DEFINE_MUTEX(ipclock); /* lock used to prevent multiple call to SCU */
···
 /* Wait till scu status is busy */
 static inline int busy_loop(struct intel_scu_ipc_dev *scu)
 {
-	unsigned long end = jiffies + msecs_to_jiffies(IPC_TIMEOUT);
+	unsigned long end = jiffies + IPC_TIMEOUT;
 
 	do {
 		u32 status;
···
 	return -ETIMEDOUT;
 }
 
-/* Wait till ipc ioc interrupt is received or timeout in 3 HZ */
+/* Wait till ipc ioc interrupt is received or timeout in 10 HZ */
 static inline int ipc_wait_for_interrupt(struct intel_scu_ipc_dev *scu)
 {
 	int status;
-1
drivers/soc/canaan/Kconfig
···
 	depends on RISCV && SOC_CANAAN && OF
 	default SOC_CANAAN
 	select PM
-	select SIMPLE_PM_BUS
 	select SYSCON
 	select MFD_SYSCON
 	help
+2-2
drivers/spi/spi-atmel.c
···
 	 * DMA map early, for performance (empties dcache ASAP) and
 	 * better fault reporting.
 	 */
-	if ((!master->cur_msg_mapped)
+	if ((!master->cur_msg->is_dma_mapped)
 	    && as->use_pdc) {
 		if (atmel_spi_dma_map_xfer(as, xfer) < 0)
 			return -ENOMEM;
···
 		}
 	}
 
-	if (!master->cur_msg_mapped
+	if (!master->cur_msg->is_dma_mapped
 	    && as->use_pdc)
 		atmel_spi_dma_unmap_xfer(master, xfer);
 
+45-32
drivers/spi/spi-bcm-qspi.c
···
 
 static void bcm_qspi_hw_uninit(struct bcm_qspi *qspi)
 {
+	u32 status = bcm_qspi_read(qspi, MSPI, MSPI_MSPI_STATUS);
+
 	bcm_qspi_write(qspi, MSPI, MSPI_SPCR2, 0);
 	if (has_bspi(qspi))
 		bcm_qspi_write(qspi, MSPI, MSPI_WRITE_LOCK, 0);
 
+	/* clear interrupt */
+	bcm_qspi_write(qspi, MSPI, MSPI_MSPI_STATUS, status & ~1);
 }
 
 static const struct spi_controller_mem_ops bcm_qspi_mem_ops = {
···
 	if (!qspi->dev_ids)
 		return -ENOMEM;
 
+	/*
+	 * Some SoCs integrate spi controller (e.g., its interrupt bits)
+	 * in specific ways
+	 */
+	if (soc_intc) {
+		qspi->soc_intc = soc_intc;
+		soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true);
+	} else {
+		qspi->soc_intc = NULL;
+	}
+
+	if (qspi->clk) {
+		ret = clk_prepare_enable(qspi->clk);
+		if (ret) {
+			dev_err(dev, "failed to prepare clock\n");
+			goto qspi_probe_err;
+		}
+		qspi->base_clk = clk_get_rate(qspi->clk);
+	} else {
+		qspi->base_clk = MSPI_BASE_FREQ;
+	}
+
+	if (data->has_mspi_rev) {
+		rev = bcm_qspi_read(qspi, MSPI, MSPI_REV);
+		/* some older revs do not have a MSPI_REV register */
+		if ((rev & 0xff) == 0xff)
+			rev = 0;
+	}
+
+	qspi->mspi_maj_rev = (rev >> 4) & 0xf;
+	qspi->mspi_min_rev = rev & 0xf;
+	qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk;
+
+	qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2);
+
+	/*
+	 * On SW resets it is possible to have the mask still enabled
+	 * Need to disable the mask and clear the status while we init
+	 */
+	bcm_qspi_hw_uninit(qspi);
+
 	for (val = 0; val < num_irqs; val++) {
 		irq = -1;
 		name = qspi_irq_tab[val].irq_name;
···
 		ret = -EINVAL;
 		goto qspi_probe_err;
 	}
-
-	/*
-	 * Some SoCs integrate spi controller (e.g., its interrupt bits)
-	 * in specific ways
-	 */
-	if (soc_intc) {
-		qspi->soc_intc = soc_intc;
-		soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true);
-	} else {
-		qspi->soc_intc = NULL;
-	}
-
-	ret = clk_prepare_enable(qspi->clk);
-	if (ret) {
-		dev_err(dev, "failed to prepare clock\n");
-		goto qspi_probe_err;
-	}
-
-	qspi->base_clk = clk_get_rate(qspi->clk);
-
-	if (data->has_mspi_rev) {
-		rev = bcm_qspi_read(qspi, MSPI, MSPI_REV);
-		/* some older revs do not have a MSPI_REV register */
-		if ((rev & 0xff) == 0xff)
-			rev = 0;
-	}
-
-	qspi->mspi_maj_rev = (rev >> 4) & 0xf;
-	qspi->mspi_min_rev = rev & 0xf;
-	qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk;
-
-	qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2);
 
 	bcm_qspi_hw_init(qspi);
 	init_completion(&qspi->mspi_done);
···
 	priv = spi_controller_get_devdata(ctlr);
 	priv->spi = spi;
 
+	/*
+	 * Increase lockdep class as these lock are taken while the parent bus
+	 * already holds their instance's lock.
+	 */
+	lockdep_set_subclass(&ctlr->io_mutex, 1);
+	lockdep_set_subclass(&ctlr->add_lock, 1);
+
 	priv->mux = devm_mux_control_get(&spi->dev, NULL);
 	if (IS_ERR(priv->mux)) {
 		ret = dev_err_probe(&spi->dev, PTR_ERR(priv->mux),
+7-19
drivers/spi/spi-nxp-fspi.c
···
 
 #include <linux/acpi.h>
 #include <linux/bitops.h>
+#include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/completion.h>
 #include <linux/delay.h>
···
 #define NXP_FSPI_MIN_IOMAP	SZ_4M
 
 #define DCFG_RCWSR1		0x100
+#define SYS_PLL_RAT		GENMASK(6, 2)
 
 /* Access flash memory using IP bus only */
 #define FSPI_QUIRK_USE_IP_ONLY	BIT(0)
···
 		{ .family = "QorIQ LS1028A" },
 		{ /* sentinel */ }
 	};
-	struct device_node *np;
 	struct regmap *map;
-	u32 val = 0, sysclk = 0;
+	u32 val, sys_pll_ratio;
 	int ret;
 
 	/* Check for LS1028A family */
···
 		return;
 	}
 
-	/* Compute system clock frequency multiplier ratio */
 	map = syscon_regmap_lookup_by_compatible("fsl,ls1028a-dcfg");
 	if (IS_ERR(map)) {
 		dev_err(f->dev, "No syscon regmap\n");
···
 	if (ret < 0)
 		goto err;
 
-	/* Strap bits 6:2 define SYS_PLL_RAT i.e frequency multiplier ratio */
-	val = (val >> 2) & 0x1F;
-	WARN(val == 0, "Strapping is zero: Cannot determine ratio");
+	sys_pll_ratio = FIELD_GET(SYS_PLL_RAT, val);
+	dev_dbg(f->dev, "val: 0x%08x, sys_pll_ratio: %d\n", val, sys_pll_ratio);
 
-	/* Compute system clock frequency */
-	np = of_find_node_by_name(NULL, "clock-sysclk");
-	if (!np)
-		goto err;
-
-	if (of_property_read_u32(np, "clock-frequency", &sysclk))
-		goto err;
-
-	sysclk = (sysclk * val) / 1000000; /* Convert sysclk to Mhz */
-	dev_dbg(f->dev, "val: 0x%08x, sysclk: %dMhz\n", val, sysclk);
-
-	/* Use IP bus only if PLL is 300MHz */
-	if (sysclk == 300)
+	/* Use IP bus only if platform clock is 300MHz */
+	if (sys_pll_ratio == 3)
 		f->devtype_data->quirks |= FSPI_QUIRK_USE_IP_ONLY;
 
 	return;
···
 {
 	struct optee *optee = platform_get_drvdata(pdev);
 
+	/* Unregister OP-TEE specific client devices on TEE bus */
+	optee_unregister_devices();
+
 	/*
 	 * Ask OP-TEE to free all cached shared memory objects to decrease
 	 * reference counters and also avoid wild pointers in secure world
···
 	  If unsure, say N.
 
 config SERIAL_8250_FSL
-	bool
+	bool "Freescale 16550 UART support" if COMPILE_TEST && !(PPC || ARM || ARM64)
 	depends on SERIAL_8250_CONSOLE
-	default PPC || ARM || ARM64 || COMPILE_TEST
+	default PPC || ARM || ARM64
+	help
+	  Selecting this option enables a workaround for a break-detection
+	  erratum for Freescale 16550 UARTs in the 8250 driver. It also
+	  enables support for ACPI enumeration.
 
 config SERIAL_8250_DW
 	tristate "Support for Synopsys DesignWare 8250 quirks"
+13-15
drivers/usb/host/xhci-dbgtty.c
···
 		return -EBUSY;
 
 	xhci_dbc_tty_init_port(dbc, port);
-	tty_dev = tty_port_register_device(&port->port,
-					   dbc_tty_driver, 0, NULL);
-	if (IS_ERR(tty_dev)) {
-		ret = PTR_ERR(tty_dev);
-		goto register_fail;
-	}
 
 	ret = kfifo_alloc(&port->write_fifo, DBC_WRITE_BUF_SIZE, GFP_KERNEL);
 	if (ret)
-		goto buf_alloc_fail;
+		goto err_exit_port;
 
 	ret = xhci_dbc_alloc_requests(dbc, BULK_IN, &port->read_pool,
 				      dbc_read_complete);
 	if (ret)
-		goto request_fail;
+		goto err_free_fifo;
 
 	ret = xhci_dbc_alloc_requests(dbc, BULK_OUT, &port->write_pool,
 				      dbc_write_complete);
 	if (ret)
-		goto request_fail;
+		goto err_free_requests;
+
+	tty_dev = tty_port_register_device(&port->port,
+					   dbc_tty_driver, 0, NULL);
+	if (IS_ERR(tty_dev)) {
+		ret = PTR_ERR(tty_dev);
+		goto err_free_requests;
+	}
 
 	port->registered = true;
 
 	return 0;
 
-request_fail:
+err_free_requests:
 	xhci_dbc_free_requests(&port->read_pool);
 	xhci_dbc_free_requests(&port->write_pool);
+err_free_fifo:
 	kfifo_free(&port->write_fifo);
-
-buf_alloc_fail:
-	tty_unregister_device(dbc_tty_driver, 0);
-
-register_fail:
+err_exit_port:
 	xhci_dbc_tty_exit_port(port);
 
 	dev_err(dbc->dev, "can't register tty port, err %d\n", ret);
···
 /* Must be called with xhci->lock held, releases and aquires lock back */
 static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags)
 {
-    u64 temp_64;
+    u32 temp_32;
     int ret;

     xhci_dbg(xhci, "Abort command ring\n");

     reinit_completion(&xhci->cmd_ring_stop_completion);

-    temp_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring);
-    xhci_write_64(xhci, temp_64 | CMD_RING_ABORT,
-                  &xhci->op_regs->cmd_ring);
+    /*
+     * The control bits like command stop, abort are located in lower
+     * dword of the command ring control register. Limit the write
+     * to the lower dword to avoid corrupting the command ring pointer
+     * in case if the command ring is stopped by the time upper dword
+     * is written.
+     */
+    temp_32 = readl(&xhci->op_regs->cmd_ring);
+    writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring);

     /* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the
      * completion of the Command Abort operation. If CRR is not negated in 5···
···
     struct xhci_ring *ep_ring;
     struct xhci_command *cmd;
     struct xhci_segment *new_seg;
+    struct xhci_segment *halted_seg = NULL;
     union xhci_trb *new_deq;
     int new_cycle;
+    union xhci_trb *halted_trb;
+    int index = 0;
     dma_addr_t addr;
     u64 hw_dequeue;
     bool cycle_found = false;
···
     hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id);
     new_seg = ep_ring->deq_seg;
     new_deq = ep_ring->dequeue;
-    new_cycle = hw_dequeue & 0x1;
+
+    /*
+     * Quirk: xHC write-back of the DCS field in the hardware dequeue
+     * pointer is wrong - use the cycle state of the TRB pointed to by
+     * the dequeue pointer.
+     */
+    if (xhci->quirks & XHCI_EP_CTX_BROKEN_DCS &&
+        !(ep->ep_state & EP_HAS_STREAMS))
+        halted_seg = trb_in_td(xhci, td->start_seg,
+                               td->first_trb, td->last_trb,
+                               hw_dequeue & ~0xf, false);
+    if (halted_seg) {
+        index = ((dma_addr_t)(hw_dequeue & ~0xf) - halted_seg->dma) /
+            sizeof(*halted_trb);
+        halted_trb = &halted_seg->trbs[index];
+        new_cycle = halted_trb->generic.field[3] & 0x1;
+        xhci_dbg(xhci, "Endpoint DCS = %d TRB index = %d cycle = %d\n",
+                 (u8)(hw_dequeue & 0x1), index, new_cycle);
+    } else {
+        new_cycle = hw_dequeue & 0x1;
+    }

     /*
      * We want to find the pointer, segment and cycle state of the new trb
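The rationale in the added comment above — keep the read-modify-write confined to the dword that actually holds the control bits, so a concurrent hardware update of the other dword cannot be clobbered — can be sketched in plain userspace C. This is a hypothetical model, not the driver code: the register layout, the `CMD_RING_ABORT` value, and the helper names are illustrative only.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model: bit 2 of the lower dword is the abort bit,
 * the upper dword holds command ring pointer bits owned by hardware. */
#define CMD_RING_ABORT (1u << 2)

/* A 32-bit write, like writel(), can only touch the lower dword. */
static void write_low_dword(uint64_t *reg, uint32_t val)
{
    *reg = (*reg & 0xffffffff00000000ull) | val;
}

/* Set the abort bit with a read-modify-write limited to 32 bits,
 * leaving the upper dword untouched no matter what it holds. */
static uint64_t abort_cmd_ring(uint64_t reg)
{
    uint32_t lo = (uint32_t)reg;   /* readl() of the lower dword */

    write_low_dword(&reg, lo | CMD_RING_ABORT);
    return reg;
}
```

A 64-bit read-modify-write of the same register would write back a stale copy of the upper dword if hardware changed it in between; the 32-bit variant cannot.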
+5
drivers/usb/host/xhci.c
···
         return;

     /* Bail out if toggle is already being cleared by a endpoint reset */
+    spin_lock_irqsave(&xhci->lock, flags);
     if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) {
         ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE;
+        spin_unlock_irqrestore(&xhci->lock, flags);
         return;
     }
+    spin_unlock_irqrestore(&xhci->lock, flags);
     /* Only interrupt and bulk ep's use data toggle, USB2 spec 5.5.4-> */
     if (usb_endpoint_xfer_control(&host_ep->desc) ||
         usb_endpoint_xfer_isoc(&host_ep->desc))
···
     xhci_free_command(xhci, cfg_cmd);
 cleanup:
     xhci_free_command(xhci, stop_cmd);
+    spin_lock_irqsave(&xhci->lock, flags);
     if (ep->ep_state & EP_SOFT_CLEAR_TOGGLE)
         ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE;
+    spin_unlock_irqrestore(&xhci->lock, flags);
 }

 static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
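The hunks above put a lock around the test-and-clear of an `ep_state` flag so that two paths racing on the same flag cannot both observe it set. The same pattern can be sketched with a pthread mutex in userspace C; the state word, flag value, and function names here are invented for illustration.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical state word and flag, protected by a lock as in the hunk. */
#define EP_HARD_CLEAR_TOGGLE 0x1

static unsigned int ep_state = EP_HARD_CLEAR_TOGGLE;
static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

/* Test and clear the flag under the lock; returns true only for the
 * single caller that actually observed the flag set. */
static bool try_clear_toggle(void)
{
    bool cleared = false;

    pthread_mutex_lock(&state_lock);
    if (ep_state & EP_HARD_CLEAR_TOGGLE) {
        ep_state &= ~EP_HARD_CLEAR_TOGGLE;
        cleared = true;
    }
    pthread_mutex_unlock(&state_lock);
    return cleared;
}

static void *worker(void *arg)
{
    (void)arg;
    return (void *)(long)try_clear_toggle();
}

/* Run two racing workers; exactly one must win the clear. */
static int racing_clears(void)
{
    pthread_t t1, t2;
    void *r1, *r2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, &r1);
    pthread_join(t2, &r2);
    return (int)(long)r1 + (int)(long)r2;
}
```

Without the lock, both workers could see the flag set before either clears it, which is exactly the race the spinlock in the patch closes.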
+1
drivers/usb/host/xhci.h
···
 #define XHCI_SG_TRB_CACHE_SIZE_QUIRK    BIT_ULL(39)
 #define XHCI_NO_SOFT_RETRY              BIT_ULL(40)
 #define XHCI_BROKEN_D3COLD              BIT_ULL(41)
+#define XHCI_EP_CTX_BROKEN_DCS          BIT_ULL(42)

     unsigned int num_active_eps;
     unsigned int limit_active_eps;
+3-1
drivers/usb/musb/musb_dsps.c
···
     if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) {
         ret = dsps_setup_optional_vbus_irq(pdev, glue);
         if (ret)
-            goto err;
+            goto unregister_pdev;
     }

     return 0;

+unregister_pdev:
+    platform_device_unregister(glue->musb);
 err:
     pm_runtime_disable(&pdev->dev);
     iounmap(glue->usbss_base);
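Both this fix and the xhci-dbgtty reordering earlier follow the kernel's goto-based unwind convention: each failure jumps to a label that releases exactly the resources acquired before it, in reverse order. A minimal userspace sketch of the pattern, with invented resource names standing in for the real ones:

```c
#include <assert.h>
#include <stdbool.h>

/* Counters standing in for the resources acquired in a probe path. */
static int registered, irq_requested, unwound_register;

static int register_device(void)        { registered++; return 0; }
static void unregister_device(void)     { unwound_register++; }
static int request_irq_step(bool fail)  { if (fail) return -1; irq_requested++; return 0; }

/* Probe-style function: each failure jumps to a label that unwinds
 * everything set up before it, in reverse order of acquisition. */
static int probe(bool irq_fails)
{
    int ret;

    ret = register_device();
    if (ret)
        goto err;

    ret = request_irq_step(irq_fails);
    if (ret)
        goto unregister;

    return 0;

unregister:
    unregister_device();
err:
    return ret;
}
```

The musb_dsps bug was precisely a missing rung on this ladder: the late failure jumped to a label that skipped `platform_device_unregister()`, leaking the device the probe had already registered.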
···
 }

 /*
- * lookup a directory item based on name. 'dir' is the objectid
- * we're searching in, and 'mod' tells us if you plan on deleting the
- * item (use mod < 0) or changing the options (use mod > 0)
+ * Lookup for a directory item by name.
+ *
+ * @trans:    The transaction handle to use. Can be NULL if @mod is 0.
+ * @root:     The root of the target tree.
+ * @path:     Path to use for the search.
+ * @dir:      The inode number (objectid) of the directory.
+ * @name:     The name associated to the directory entry we are looking for.
+ * @name_len: The length of the name.
+ * @mod:      Used to indicate if the tree search is meant for a read only
+ *            lookup, for a modification lookup or for a deletion lookup, so
+ *            its value should be 0, 1 or -1, respectively.
+ *
+ * Returns: NULL if the dir item does not exists, an error pointer if an error
+ * happened, or a pointer to a dir item if a dir item exists for the given name.
  */
 struct btrfs_dir_item *btrfs_lookup_dir_item(struct btrfs_trans_handle *trans,
                                              struct btrfs_root *root,
···
 }

 /*
- * lookup a directory item based on index. 'dir' is the objectid
- * we're searching in, and 'mod' tells us if you plan on deleting the
- * item (use mod < 0) or changing the options (use mod > 0)
- *
- * The name is used to make sure the index really points to the name you were
- * looking for.
+ * Lookup for a directory index item by name and index number.
+ *
+ * @trans:    The transaction handle to use. Can be NULL if @mod is 0.
+ * @root:     The root of the target tree.
+ * @path:     Path to use for the search.
+ * @dir:      The inode number (objectid) of the directory.
+ * @index:    The index number.
+ * @name:     The name associated to the directory entry we are looking for.
+ * @name_len: The length of the name.
+ * @mod:      Used to indicate if the tree search is meant for a read only
+ *            lookup, for a modification lookup or for a deletion lookup, so
+ *            its value should be 0, 1 or -1, respectively.
+ *
+ * Returns: NULL if the dir index item does not exists, an error pointer if an
+ * error happened, or a pointer to a dir item if the dir index item exists and
+ * matches the criteria (name and index number).
  */
 struct btrfs_dir_item *
 btrfs_lookup_dir_index_item(struct btrfs_trans_handle *trans,
                             struct btrfs_root *root,
                             struct btrfs_path *path, u64 dir,
-                            u64 objectid, const char *name, int name_len,
+                            u64 index, const char *name, int name_len,
                             int mod)
 {
+    struct btrfs_dir_item *di;
     struct btrfs_key key;

     key.objectid = dir;
     key.type = BTRFS_DIR_INDEX_KEY;
-    key.offset = objectid;
+    key.offset = index;

-    return btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod);
+    di = btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod);
+    if (di == ERR_PTR(-ENOENT))
+        return NULL;
+
+    return di;
 }

 struct btrfs_dir_item *
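The new return convention documented above (NULL = not found, `ERR_PTR(-E...)` = real error, valid pointer = found) is the kernel's standard way of folding an error code into a pointer. A userspace re-creation of the `ERR_PTR`/`IS_ERR` helpers shows how the three cases stay distinguishable; the `lookup()` function here is a hypothetical stand-in, not the btrfs code.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace re-creation of the kernel's ERR_PTR/IS_ERR helpers:
 * error codes live in the last page of the address space, which no
 * valid allocation can occupy. */
#define MAX_ERRNO 4095

static void *ERR_PTR(long err)        { return (void *)err; }
static long  PTR_ERR(const void *ptr) { return (long)ptr; }
static int   IS_ERR(const void *ptr)
{
    return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static int item; /* stands in for a found dir item */

/* Hypothetical lookup following the documented convention:
 * NULL = does not exist, ERR_PTR(-E...) = failure, pointer = found. */
static void *lookup(int which)
{
    if (which == 0)
        return NULL;           /* entry does not exist */
    if (which < 0)
        return ERR_PTR(-EIO);  /* real failure */
    return &item;              /* found */
}
```

Callers must check `IS_ERR()` before `NULL`: the old btrfs code pattern `if (di && !IS_ERR(di))` silently lumped real errors together with "not found", which is exactly what the tree-log changes below untangle.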
···
     if (args->start >= inode->disk_i_size && !args->replace_extent)
         modify_tree = 0;

-    update_refs = (test_bit(BTRFS_ROOT_SHAREABLE, &root->state) ||
-                   root == fs_info->tree_root);
+    update_refs = (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID);
     while (1) {
         recow = 0;
         ret = btrfs_lookup_file_extent(trans, root, path, ino,
···
                          drop_args.bytes_found);
         if (ret != -ENOSPC) {
             /*
-             * When cloning we want to avoid transaction aborts when
-             * nothing was done and we are attempting to clone parts
-             * of inline extents, in such cases -EOPNOTSUPP is
-             * returned by __btrfs_drop_extents() without having
-             * changed anything in the file.
+             * The only time we don't want to abort is if we are
+             * attempting to clone a partial inline extent, in which
+             * case we'll get EOPNOTSUPP. However if we aren't
+             * clone we need to abort no matter what, because if we
+             * got EOPNOTSUPP via prealloc then we messed up and
+             * need to abort.
              */
-            if (extent_info && !extent_info->is_new_extent &&
-                ret && ret != -EOPNOTSUPP)
+            if (ret &&
+                (ret != -EOPNOTSUPP ||
+                 (extent_info && extent_info->is_new_extent)))
                 btrfs_abort_transaction(trans, ret);
             break;
         }
+48-31
fs/btrfs/tree-log.c
···
 }

 /*
- * helper function to see if a given name and sequence number found
- * in an inode back reference are already in a directory and correctly
- * point to this inode
+ * See if a given name and sequence number found in an inode back reference are
+ * already in a directory and correctly point to this inode.
+ *
+ * Returns: < 0 on error, 0 if the directory entry does not exists and 1 if it
+ * exists.
  */
 static noinline int inode_in_dir(struct btrfs_root *root,
                                  struct btrfs_path *path,
···
 {
     struct btrfs_dir_item *di;
     struct btrfs_key location;
-    int match = 0;
+    int ret = 0;

     di = btrfs_lookup_dir_index_item(NULL, root, path, dirid,
                                      index, name, name_len, 0);
-    if (di && !IS_ERR(di)) {
+    if (IS_ERR(di)) {
+        ret = PTR_ERR(di);
+        goto out;
+    } else if (di) {
         btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
         if (location.objectid != objectid)
             goto out;
-    } else
+    } else {
         goto out;
-    btrfs_release_path(path);
+    }

+    btrfs_release_path(path);
     di = btrfs_lookup_dir_item(NULL, root, path, dirid, name, name_len, 0);
-    if (di && !IS_ERR(di)) {
-        btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
-        if (location.objectid != objectid)
-            goto out;
-    } else
+    if (IS_ERR(di)) {
+        ret = PTR_ERR(di);
         goto out;
-    match = 1;
+    } else if (di) {
+        btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location);
+        if (location.objectid == objectid)
+            ret = 1;
+    }
 out:
     btrfs_release_path(path);
-    return match;
+    return ret;
 }

 /*
···
     /* look for a conflicting sequence number */
     di = btrfs_lookup_dir_index_item(trans, root, path, btrfs_ino(dir),
                                      ref_index, name, namelen, 0);
-    if (di && !IS_ERR(di)) {
+    if (IS_ERR(di)) {
+        return PTR_ERR(di);
+    } else if (di) {
         ret = drop_one_dir_item(trans, root, path, dir, di);
         if (ret)
             return ret;
···
     /* look for a conflicting name */
     di = btrfs_lookup_dir_item(trans, root, path, btrfs_ino(dir),
                                name, namelen, 0);
-    if (di && !IS_ERR(di)) {
+    if (IS_ERR(di)) {
+        return PTR_ERR(di);
+    } else if (di) {
         ret = drop_one_dir_item(trans, root, path, dir, di);
         if (ret)
             return ret;
···
     if (ret)
         goto out;

-    /* if we already have a perfect match, we're done */
-    if (!inode_in_dir(root, path, btrfs_ino(BTRFS_I(dir)),
-                      btrfs_ino(BTRFS_I(inode)), ref_index,
-                      name, namelen)) {
+    ret = inode_in_dir(root, path, btrfs_ino(BTRFS_I(dir)),
+                       btrfs_ino(BTRFS_I(inode)), ref_index,
+                       name, namelen);
+    if (ret < 0) {
+        goto out;
+    } else if (ret == 0) {
         /*
          * look for a conflicting back reference in the
          * metadata. if we find one we have to unlink that name
···
         if (ret)
             goto out;
     }
+    /* Else, ret == 1, we already have a perfect match, we're done. */

     ref_ptr = (unsigned long)(ref_ptr + ref_struct_size) + namelen;
     kfree(name);
···
     struct btrfs_key log_key;
     struct inode *dir;
     u8 log_type;
-    int exists;
-    int ret = 0;
+    bool exists;
+    int ret;
     bool update_size = (key->type == BTRFS_DIR_INDEX_KEY);
     bool name_added = false;
···
                 name_len);

     btrfs_dir_item_key_to_cpu(eb, di, &log_key);
-    exists = btrfs_lookup_inode(trans, root, path, &log_key, 0);
-    if (exists == 0)
-        exists = 1;
-    else
-        exists = 0;
+    ret = btrfs_lookup_inode(trans, root, path, &log_key, 0);
     btrfs_release_path(path);
+    if (ret < 0)
+        goto out;
+    exists = (ret == 0);
+    ret = 0;

     if (key->type == BTRFS_DIR_ITEM_KEY) {
         dst_di = btrfs_lookup_dir_item(trans, root, path, key->objectid,
···
         ret = -EINVAL;
         goto out;
     }
-    if (IS_ERR_OR_NULL(dst_di)) {
+
+    if (IS_ERR(dst_di)) {
+        ret = PTR_ERR(dst_di);
+        goto out;
+    } else if (!dst_di) {
         /* we need a sequence number to insert, so we only
          * do inserts for the BTRFS_DIR_INDEX_KEY types
          */
···
                               dir_key->offset,
                               name, name_len, 0);
         }
-        if (!log_di || log_di == ERR_PTR(-ENOENT)) {
+        if (!log_di) {
             btrfs_dir_item_key_to_cpu(eb, di, &location);
             btrfs_release_path(path);
             btrfs_release_path(log_path);
···
     if (err == -ENOSPC) {
         btrfs_set_log_full_commit(trans);
         err = 0;
-    } else if (err < 0 && err != -ENOENT) {
-        /* ENOENT can be returned if the entry hasn't been fsynced yet */
+    } else if (err < 0) {
         btrfs_abort_transaction(trans, err);
     }
···
  *
  */

-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
 #include <linux/fs.h>
-#include <linux/iversion.h>
 #include <linux/nls.h>

 #include "debug.h"
···
 #include "ntfs_fs.h"

 /* Convert little endian UTF-16 to NLS string. */
-int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const struct le_str *uni,
+int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const __le16 *name, u32 len,
                       u8 *buf, int buf_len)
 {
-    int ret, uni_len, warn;
-    const __le16 *ip;
+    int ret, warn;
     u8 *op;
-    struct nls_table *nls = sbi->options.nls;
+    struct nls_table *nls = sbi->options->nls;

     static_assert(sizeof(wchar_t) == sizeof(__le16));

     if (!nls) {
         /* UTF-16 -> UTF-8 */
-        ret = utf16s_to_utf8s((wchar_t *)uni->name, uni->len,
-                              UTF16_LITTLE_ENDIAN, buf, buf_len);
+        ret = utf16s_to_utf8s(name, len, UTF16_LITTLE_ENDIAN, buf,
+                              buf_len);
         buf[ret] = '\0';
         return ret;
     }

-    ip = uni->name;
     op = buf;
-    uni_len = uni->len;
     warn = 0;

-    while (uni_len--) {
+    while (len--) {
         u16 ec;
         int charlen;
         char dump[5];
···
             break;
         }

-        ec = le16_to_cpu(*ip++);
+        ec = le16_to_cpu(*name++);
         charlen = nls->uni2char(ec, op, buf_len);

         if (charlen > 0) {
···
 {
     int ret, slen;
     const u8 *end;
-    struct nls_table *nls = sbi->options.nls;
+    struct nls_table *nls = sbi->options->nls;
     u16 *uname = uni->name;

     static_assert(sizeof(wchar_t) == sizeof(u16));
···
         return 0;

     /* Skip meta files. Unless option to show metafiles is set. */
-    if (!sbi->options.showmeta && ntfs_is_meta_file(sbi, ino))
+    if (!sbi->options->showmeta && ntfs_is_meta_file(sbi, ino))
         return 0;

-    if (sbi->options.nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN))
+    if (sbi->options->nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN))
         return 0;

-    name_len = ntfs_utf16_to_nls(sbi, (struct le_str *)&fname->name_len,
-                                 name, PATH_MAX);
+    name_len = ntfs_utf16_to_nls(sbi, fname->name, fname->name_len, name,
+                                 PATH_MAX);
     if (name_len <= 0) {
         ntfs_warn(sbi->sb, "failed to convert name for inode %lx.",
                   ino);
+7-5
fs/ntfs3/file.c
···
 #include <linux/compat.h>
 #include <linux/falloc.h>
 #include <linux/fiemap.h>
-#include <linux/nls.h>

 #include "debug.h"
 #include "ntfs.h"
···
     truncate_pagecache(inode, vbo_down);

     if (!is_sparsed(ni) && !is_compressed(ni)) {
-        /* Normal file. */
-        err = ntfs_zero_range(inode, vbo, end);
+        /*
+         * Normal file, can't make hole.
+         * TODO: Try to find way to save info about hole.
+         */
+        err = -EOPNOTSUPP;
         goto out;
     }
···
     umode_t mode = inode->i_mode;
     int err;

-    if (sbi->options.no_acs_rules) {
+    if (sbi->options->noacsrules) {
         /* "No access rules" - Force any changes of time etc. */
         attr->ia_valid |= ATTR_FORCE;
         /* and disable for editing some attributes. */
···
     int err = 0;

     /* If we are last writer on the inode, drop the block reservation. */
-    if (sbi->options.prealloc && ((file->f_mode & FMODE_WRITE) &&
+    if (sbi->options->prealloc && ((file->f_mode & FMODE_WRITE) &&
                                   atomic_read(&inode->i_writecount) == 1)) {
         ni_lock(ni);
         down_write(&ni->file.run_lock);
+41-14
fs/ntfs3/frecord.c
···
  *
  */

-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
 #include <linux/fiemap.h>
 #include <linux/fs.h>
-#include <linux/nls.h>
 #include <linux/vmalloc.h>

 #include "debug.h"
···
         continue;

     mi = ni_find_mi(ni, ino_get(&le->ref));
+    if (!mi) {
+        /* Should never happened, 'cause already checked. */
+        goto bad;
+    }

     attr = mi_find_attr(mi, NULL, le->type, le_name(le),
                         le->name_len, &le->id);
+    if (!attr) {
+        /* Should never happened, 'cause already checked. */
+        goto bad;
+    }
     asize = le32_to_cpu(attr->size);

     /* Insert into primary record. */
     attr_ins = mi_insert_attr(&ni->mi, le->type, le_name(le),
                               le->name_len, asize,
                               le16_to_cpu(attr->name_off));
-    id = attr_ins->id;
+    if (!attr_ins) {
+        /*
+         * Internal error.
+         * Either no space in primary record (already checked).
+         * Either tried to insert another
+         * non indexed attribute (logic error).
+         */
+        goto bad;
+    }

     /* Copy all except id. */
+    id = attr_ins->id;
     memcpy(attr_ins, attr, asize);
     attr_ins->id = id;
···
     ni->attr_list.dirty = false;

     return 0;
+bad:
+    ntfs_inode_err(&ni->vfs_inode, "Internal error");
+    make_bad_inode(&ni->vfs_inode);
+    return -EINVAL;
 }

 /*
···
         /* Only indexed attributes can share same record. */
         continue;
     }
+
+    /*
+     * Do not try to insert this attribute
+     * if there is no room in record.
+     */
+    if (le32_to_cpu(mi->mrec->used) + asize > sbi->record_size)
+        continue;

     /* Try to insert attribute into this subrecord. */
     attr = ni_ins_new_attr(ni, mi, le, type, name, name_len, asize,
···
     attr->res.flags = RESIDENT_FLAG_INDEXED;

     /* is_attr_indexed(attr)) == true */
-    le16_add_cpu(&ni->mi.mrec->hard_links, +1);
+    le16_add_cpu(&ni->mi.mrec->hard_links, 1);
     ni->mi.dirty = true;
     }
     attr->res.res = 0;
···

     *le = NULL;

-    if (FILE_NAME_POSIX == name_type)
+    if (name_type == FILE_NAME_POSIX)
         return NULL;

     /* Enumerate all names. */
···
 /*
  * ni_parse_reparse
  *
- * Buffer is at least 24 bytes.
+ * buffer - memory for reparse buffer header
  */
 enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr,
-                                   void *buffer)
+                                   struct REPARSE_DATA_BUFFER *buffer)
 {
     const struct REPARSE_DATA_BUFFER *rp = NULL;
     u8 bits;
     u16 len;
     typeof(rp->CompressReparseBuffer) *cmpr;
-
-    static_assert(sizeof(struct REPARSE_DATA_BUFFER) <= 24);

     /* Try to estimate reparse point. */
     if (!attr->non_res) {
···
         return REPARSE_NONE;
     }
+
+    if (buffer != rp)
+        memcpy(buffer, rp, sizeof(struct REPARSE_DATA_BUFFER));

     /* Looks like normal symlink. */
     return REPARSE_LINK;
···
     memcpy(Add2Ptr(attr, SIZEOF_RESIDENT), de + 1, de_key_size);
     mi_get_ref(&ni->mi, &de->ref);

-    if (indx_insert_entry(&dir_ni->dir, dir_ni, de, sbi, NULL, 1)) {
+    if (indx_insert_entry(&dir_ni->dir, dir_ni, de, sbi, NULL, 1))
         return false;
-    }
     }

     return true;
···
     const struct EA_INFO *info;

     info = resident_data_ex(attr, sizeof(struct EA_INFO));
-    dup->ea_size = info->size_pack;
+    /* If ATTR_EA_INFO exists 'info' can't be NULL. */
+    if (info)
+        dup->ea_size = info->size_pack;
     }
 }
···
         goto out;
     }

-    err = al_update(ni);
+    err = al_update(ni, sync);
     if (err)
         goto out;
     }
···
 #ifndef _LINUX_NTFS3_NTFS_H
 #define _LINUX_NTFS3_NTFS_H

-/* TODO: Check 4K MFT record and 512 bytes cluster. */
+#include <linux/blkdev.h>
+#include <linux/build_bug.h>
+#include <linux/kernel.h>
+#include <linux/stddef.h>
+#include <linux/string.h>
+#include <linux/types.h>

-/* Activate this define to use binary search in indexes. */
-#define NTFS3_INDEX_BINARY_SEARCH
+#include "debug.h"
+
+/* TODO: Check 4K MFT record and 512 bytes cluster. */

 /* Check each run for marked clusters. */
 #define NTFS3_CHECK_FREE_CLST

 #define NTFS_NAME_LEN 255

-/* ntfs.sys used 500 maximum links on-disk struct allows up to 0xffff. */
-#define NTFS_LINK_MAX 0x400
-//#define NTFS_LINK_MAX 0xffff
+/*
+ * ntfs.sys used 500 maximum links on-disk struct allows up to 0xffff.
+ * xfstest generic/041 creates 3003 hardlinks.
+ */
+#define NTFS_LINK_MAX 4000

 /*
  * Activate to use 64 bit clusters instead of 32 bits in ntfs.sys.
···
  *
  */

-#include <linux/blkdev.h>
-#include <linux/buffer_head.h>
 #include <linux/fs.h>
-#include <linux/nls.h>
 #include <linux/posix_acl.h>
 #include <linux/posix_acl_xattr.h>
 #include <linux/xattr.h>
···
           size_t add_bytes, const struct EA_INFO **info)
 {
     int err;
+    struct ntfs_sb_info *sbi = ni->mi.sbi;
     struct ATTR_LIST_ENTRY *le = NULL;
     struct ATTRIB *attr_info, *attr_ea;
     void *ea_p;
···

     /* Check Ea limit. */
     size = le32_to_cpu((*info)->size);
-    if (size > ni->mi.sbi->ea_max_size)
+    if (size > sbi->ea_max_size)
         return -EFBIG;

-    if (attr_size(attr_ea) > ni->mi.sbi->ea_max_size)
+    if (attr_size(attr_ea) > sbi->ea_max_size)
         return -EFBIG;

     /* Allocate memory for packed Ea. */
···
     if (!ea_p)
         return -ENOMEM;

-    if (attr_ea->non_res) {
+    if (!size) {
+        ;
+    } else if (attr_ea->non_res) {
         struct runs_tree run;

         run_init(&run);

         err = attr_load_runs(attr_ea, ni, &run, NULL);
         if (!err)
-            err = ntfs_read_run_nb(ni->mi.sbi, &run, 0, ea_p, size,
-                                   NULL);
+            err = ntfs_read_run_nb(sbi, &run, 0, ea_p, size, NULL);
         run_close(&run);

         if (err)
···
 }

 static noinline int ntfs_set_ea(struct inode *inode, const char *name,
                                 size_t name_len, const void *value,
-                                size_t val_size, int flags, int locked)
+                                size_t val_size, int flags)
 {
     struct ntfs_inode *ni = ntfs_i(inode);
     struct ntfs_sb_info *sbi = ni->mi.sbi;
···
     u64 new_sz;
     void *p;

-    if (!locked)
-        ni_lock(ni);
+    ni_lock(ni);

     run_init(&ea_run);

···
     new_ea->name[name_len] = 0;
     memcpy(new_ea->name + name_len + 1, value, val_size);
     new_pack = le16_to_cpu(ea_info.size_pack) + packed_ea_size(new_ea);
-
-    /* Should fit into 16 bits. */
-    if (new_pack > 0xffff) {
-        err = -EFBIG; // -EINVAL?
-        goto out;
-    }
     ea_info.size_pack = cpu_to_le16(new_pack);
-
     /* New size of ATTR_EA. */
     size += add;
-    if (size > sbi->ea_max_size) {
+    ea_info.size = cpu_to_le32(size);
+
+    /*
+     * 1. Check ea_info.size_pack for overflow.
+     * 2. New attibute size must fit value from $AttrDef
+     */
+    if (new_pack > 0xffff || size > sbi->ea_max_size) {
+        ntfs_inode_warn(
+            inode,
+            "The size of extended attributes must not exceed 64KiB");
         err = -EFBIG; // -EINVAL?
         goto out;
     }
-    ea_info.size = cpu_to_le32(size);

 update_ea:

···
         /* Delete xattr, ATTR_EA */
         ni_remove_attr_le(ni, attr, mi, le);
     } else if (attr->non_res) {
-        err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size);
+        err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size, 0);
         if (err)
             goto out;
     } else {
···
     mark_inode_dirty(&ni->vfs_inode);

 out:
-    if (!locked)
-        ni_unlock(ni);
+    ni_unlock(ni);

     run_close(&ea_run);
     kfree(ea_all);
···
 }

 #ifdef CONFIG_NTFS3_FS_POSIX_ACL
-static inline void ntfs_posix_acl_release(struct posix_acl *acl)
-{
-    if (acl && refcount_dec_and_test(&acl->a_refcount))
-        kfree(acl);
-}
-
 static struct posix_acl *ntfs_get_acl_ex(struct user_namespace *mnt_userns,
                                          struct inode *inode, int type,
                                          int locked)
···
     /* Translate extended attribute to acl. */
     if (err >= 0) {
         acl = posix_acl_from_xattr(mnt_userns, buf, err);
-        if (!IS_ERR(acl))
-            set_cached_acl(inode, type, acl);
+    } else if (err == -ENODATA) {
+        acl = NULL;
     } else {
-        acl = err == -ENODATA ? NULL : ERR_PTR(err);
+        acl = ERR_PTR(err);
     }
+
+    if (!IS_ERR(acl))
+        set_cached_acl(inode, type, acl);

     __putname(buf);

···

 static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
                                     struct inode *inode, struct posix_acl *acl,
-                                    int type, int locked)
+                                    int type)
 {
     const char *name;
     size_t size, name_len;
     void *value = NULL;
     int err = 0;
+    int flags;

     if (S_ISLNK(inode->i_mode))
         return -EOPNOTSUPP;
···
     if (acl) {
         umode_t mode = inode->i_mode;

-        err = posix_acl_equiv_mode(acl, &mode);
-        if (err < 0)
-            return err;
+        err = posix_acl_update_mode(mnt_userns, inode, &mode,
+                                    &acl);
+        if (err)
+            goto out;

         if (inode->i_mode != mode) {
             inode->i_mode = mode;
             mark_inode_dirty(inode);
-        }
-
-        if (!err) {
-            /*
-             * ACL can be exactly represented in the
-             * traditional file mode permission bits.
-             */
-            acl = NULL;
         }
     }
     name = XATTR_NAME_POSIX_ACL_ACCESS;
···
     }

     if (!acl) {
+        /* Remove xattr if it can be presented via mode. */
         size = 0;
         value = NULL;
+        flags = XATTR_REPLACE;
     } else {
         size = posix_acl_xattr_size(acl->a_count);
         value = kmalloc(size, GFP_NOFS);
         if (!value)
             return -ENOMEM;
-
         err = posix_acl_to_xattr(mnt_userns, acl, value, size);
         if (err < 0)
             goto out;
+        flags = 0;
     }

-    err = ntfs_set_ea(inode, name, name_len, value, size, 0, locked);
+    err = ntfs_set_ea(inode, name, name_len, value, size, flags);
+    if (err == -ENODATA && !size)
+        err = 0; /* Removing non existed xattr. */
     if (!err)
         set_cached_acl(inode, type, acl);

···
 int ntfs_set_acl(struct user_namespace *mnt_userns, struct inode *inode,
                  struct posix_acl *acl, int type)
 {
-    return ntfs_set_acl_ex(mnt_userns, inode, acl, type, 0);
-}
-
-static int ntfs_xattr_get_acl(struct user_namespace *mnt_userns,
-                              struct inode *inode, int type, void *buffer,
-                              size_t size)
-{
-    struct posix_acl *acl;
-    int err;
-
-    if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
-        ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
-        return -EOPNOTSUPP;
-    }
-
-    acl = ntfs_get_acl(inode, type, false);
-    if (IS_ERR(acl))
-        return PTR_ERR(acl);
-
-    if (!acl)
-        return -ENODATA;
-
-    err = posix_acl_to_xattr(mnt_userns, acl, buffer, size);
-    ntfs_posix_acl_release(acl);
-
-    return err;
-}
-
-static int ntfs_xattr_set_acl(struct user_namespace *mnt_userns,
-                              struct inode *inode, int type, const void *value,
-                              size_t size)
-{
-    struct posix_acl *acl;
-    int err;
-
-    if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
-        ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
-        return -EOPNOTSUPP;
-    }
-
-    if (!inode_owner_or_capable(mnt_userns, inode))
-        return -EPERM;
-
-    if (!value) {
-        acl = NULL;
-    } else {
-        acl = posix_acl_from_xattr(mnt_userns, value, size);
-        if (IS_ERR(acl))
-            return PTR_ERR(acl);
-
-        if (acl) {
-            err = posix_acl_valid(mnt_userns, acl);
-            if (err)
-                goto release_and_out;
-        }
-    }
-
-    err = ntfs_set_acl(mnt_userns, inode, acl, type);
-
-release_and_out:
-    ntfs_posix_acl_release(acl);
-    return err;
+    return ntfs_set_acl_ex(mnt_userns, inode, acl, type);
 }

 /*
···
     struct posix_acl *default_acl, *acl;
     int err;

-    /*
-     * TODO: Refactoring lock.
-     * ni_lock(dir) ... -> posix_acl_create(dir,...) -> ntfs_get_acl -> ni_lock(dir)
-     */
-    inode->i_default_acl = NULL;
+    err = posix_acl_create(dir, &inode->i_mode, &default_acl, &acl);
+    if (err)
+        return err;

-    default_acl = ntfs_get_acl_ex(mnt_userns, dir, ACL_TYPE_DEFAULT, 1);
-
-    if (!default_acl || default_acl == ERR_PTR(-EOPNOTSUPP)) {
-        inode->i_mode &= ~current_umask();
-        err = 0;
-        goto out;
-    }
-
-    if (IS_ERR(default_acl)) {
-        err = PTR_ERR(default_acl);
-        goto out;
-    }
-
-    acl = default_acl;
-    err = __posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
-    if (err < 0)
-        goto out1;
-    if (!err) {
-        posix_acl_release(acl);
-        acl = NULL;
-    }
-
-    if (!S_ISDIR(inode->i_mode)) {
-        posix_acl_release(default_acl);
-        default_acl = NULL;
-    }
-
-    if (default_acl)
+    if (default_acl) {
         err = ntfs_set_acl_ex(mnt_userns, inode, default_acl,
-                              ACL_TYPE_DEFAULT, 1);
+                              ACL_TYPE_DEFAULT);
+        posix_acl_release(default_acl);
+    } else {
+        inode->i_default_acl = NULL;
+    }

     if (!acl)
         inode->i_acl = NULL;
-    else if (!err)
-        err = ntfs_set_acl_ex(mnt_userns, inode, acl, ACL_TYPE_ACCESS,
-                              1);
+    else {
+        if (!err)
+            err = ntfs_set_acl_ex(mnt_userns, inode, acl,
+                                  ACL_TYPE_ACCESS);
+        posix_acl_release(acl);
+    }

-    posix_acl_release(acl);
-out1:
-    posix_acl_release(default_acl);
-
-out:
     return err;
 }
 #endif
···
 int ntfs_permission(struct user_namespace *mnt_userns, struct inode *inode,
                     int mask)
 {
-    if (ntfs_sb(inode->i_sb)->options.no_acs_rules) {
+    if (ntfs_sb(inode->i_sb)->options->noacsrules) {
         /* "No access rules" mode - Allow all changes. */
         return 0;
     }
···
         goto out;
     }

-#ifdef CONFIG_NTFS3_FS_POSIX_ACL
-    if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
-         !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
-                 sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
-        (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
-         !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
-                 sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
-        /* TODO: init_user_ns? */
-        err = ntfs_xattr_get_acl(
-            &init_user_ns, inode,
-            name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
-                ? ACL_TYPE_ACCESS
-                : ACL_TYPE_DEFAULT,
-            buffer, size);
-        goto out;
-    }
-#endif
     /* Deal with NTFS extended attribute. */
     err = ntfs_get_ea(inode, name, name_len, buffer, size, NULL);

···
         goto out;
     }

-#ifdef CONFIG_NTFS3_FS_POSIX_ACL
-    if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
-         !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
-                 sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
-        (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
-         !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
-                 sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
-        err = ntfs_xattr_set_acl(
-            mnt_userns, inode,
-            name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
-                ? ACL_TYPE_ACCESS
-                : ACL_TYPE_DEFAULT,
-            value, size);
-        goto out;
-    }
-#endif
     /* Deal with NTFS extended attribute. */
-    err = ntfs_set_ea(inode, name, name_len, value, size, flags, 0);
+    err = ntfs_set_ea(inode, name, name_len, value, size, flags);

 out:
     return err;
···
     int err;
     __le32 value;

+    /* TODO: refactor this, so we don't lock 4 times in ntfs_set_ea */
     value = cpu_to_le32(i_uid_read(inode));
     err = ntfs_set_ea(inode, "$LXUID", sizeof("$LXUID") - 1, &value,
-                      sizeof(value), 0, 0);
+                      sizeof(value), 0);
     if (err)
         goto out;

     value = cpu_to_le32(i_gid_read(inode));
     err = ntfs_set_ea(inode, "$LXGID", sizeof("$LXGID") - 1, &value,
-                      sizeof(value), 0, 0);
+                      sizeof(value), 0);
     if (err)
         goto out;

     value = cpu_to_le32(inode->i_mode);
     err = ntfs_set_ea(inode, "$LXMOD", sizeof("$LXMOD") - 1, &value,
-                      sizeof(value), 0, 0);
+                      sizeof(value), 0);
     if (err)
         goto out;

     if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
         value = cpu_to_le32(inode->i_rdev);
         err = ntfs_set_ea(inode, "$LXDEV", sizeof("$LXDEV") - 1, &value,
-                          sizeof(value), 0, 0);
+                          sizeof(value), 0);
         if (err)
             goto out;
     }
+3-3
include/kunit/test.h
···
 * and is automatically cleaned up after the test case concludes. See &struct
 * kunit_resource for more information.
 */
-void *kunit_kmalloc_array(struct kunit *test, size_t n, size_t size, gfp_t flags);
+void *kunit_kmalloc_array(struct kunit *test, size_t n, size_t size, gfp_t gfp);

/**
 * kunit_kmalloc() - Like kmalloc() except the allocation is *test managed*.
···
 *
 * See kcalloc() and kunit_kmalloc_array() for more information.
 */
-static inline void *kunit_kcalloc(struct kunit *test, size_t n, size_t size, gfp_t flags)
+static inline void *kunit_kcalloc(struct kunit *test, size_t n, size_t size, gfp_t gfp)
{
-	return kunit_kmalloc_array(test, n, size, flags | __GFP_ZERO);
+	return kunit_kmalloc_array(test, n, size, gfp | __GFP_ZERO);
}

void kunit_cleanup(struct kunit *test);
···
 	/* I/O mutex */
 	struct mutex io_mutex;

+	/* Used to avoid adding the same CS twice */
+	struct mutex add_lock;
+
 	/* lock and mutex for SPI bus locking */
 	spinlock_t bus_lock_spinlock;
 	struct mutex bus_lock_mutex;
+2-3
include/linux/workqueue.h
···
 * RETURNS:
 * Pointer to the allocated workqueue on success, %NULL on failure.
 */
-struct workqueue_struct *alloc_workqueue(const char *fmt,
-					 unsigned int flags,
-					 int max_active, ...);
+__printf(1, 4) struct workqueue_struct *
+alloc_workqueue(const char *fmt, unsigned int flags, int max_active, ...);

/**
 * alloc_ordered_workqueue - allocate an ordered workqueue
+4-51
include/soc/mscc/ocelot.h
···
/* Source PGIDs, one per physical port */
#define PGID_SRC 80

-#define IFH_TAG_TYPE_C 0
-#define IFH_TAG_TYPE_S 1
-
-#define IFH_REW_OP_NOOP 0x0
-#define IFH_REW_OP_DSCP 0x1
-#define IFH_REW_OP_ONE_STEP_PTP 0x2
-#define IFH_REW_OP_TWO_STEP_PTP 0x3
-#define IFH_REW_OP_ORIGIN_PTP 0x5
-
#define OCELOT_NUM_TC 8

#define OCELOT_SPEED_2500 0
···
 	/* The VLAN ID that will be transmitted as untagged, on egress */
 	struct ocelot_vlan native_vlan;

+	unsigned int ptp_skbs_in_flight;
 	u8 ptp_cmd;
 	struct sk_buff_head tx_skbs;
 	u8 ts_id;
-	spinlock_t ts_id_lock;

 	phy_interface_t phy_mode;
···
 	struct ptp_clock *ptp_clock;
 	struct ptp_clock_info ptp_info;
 	struct hwtstamp_config hwtstamp_config;
+	unsigned int ptp_skbs_in_flight;
+	/* Protects the 2-step TX timestamp ID logic */
+	spinlock_t ts_id_lock;
 	/* Protects the PTP interface state */
 	struct mutex ptp_lock;
 	/* Protects the PTP clock */
···
 	u32 rate; /* kilobit per second */
 	u32 burst; /* bytes */
};
-
-struct ocelot_skb_cb {
-	struct sk_buff *clone;
-	u8 ptp_cmd;
-	u8 ts_id;
-};
-
-#define OCELOT_SKB_CB(skb) \
-	((struct ocelot_skb_cb *)((skb)->cb))

#define ocelot_read_ix(ocelot, reg, gi, ri) __ocelot_read_ix(ocelot, reg, reg##_GSZ * (gi) + reg##_RSZ * (ri))
#define ocelot_read_gix(ocelot, reg, gi) __ocelot_read_ix(ocelot, reg, reg##_GSZ * (gi))
···
void __ocelot_target_write_ix(struct ocelot *ocelot, enum ocelot_target target,
			      u32 val, u32 reg, u32 offset);

-#if IS_ENABLED(CONFIG_MSCC_OCELOT_SWITCH_LIB)
-
/* Packet I/O */
bool ocelot_can_inject(struct ocelot *ocelot, int grp);
void ocelot_port_inject_frame(struct ocelot *ocelot, int port, int grp,
			      u32 rew_op, struct sk_buff *skb);
int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, struct sk_buff **skb);
void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp);
-
-u32 ocelot_ptp_rew_op(struct sk_buff *skb);
-#else
-
-static inline bool ocelot_can_inject(struct ocelot *ocelot, int grp)
-{
-	return false;
-}
-
-static inline void ocelot_port_inject_frame(struct ocelot *ocelot, int port,
-					    int grp, u32 rew_op,
-					    struct sk_buff *skb)
-{
-}
-
-static inline int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp,
-					struct sk_buff **skb)
-{
-	return -EIO;
-}
-
-static inline void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp)
-{
-}
-
-static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb)
-{
-	return 0;
-}
-#endif

/* Hardware initialization */
int ocelot_regfields_init(struct ocelot *ocelot,
···
#endif

 	/* misc flags */
+	unsigned int configured:1; /* codec was configured */
 	unsigned int in_freeing:1; /* being released */
 	unsigned int registered:1; /* codec was registered */
 	unsigned int display_power_control:1; /* needs display power */
···
#define HL_WAIT_CS_STATUS_BUSY 1
#define HL_WAIT_CS_STATUS_TIMEDOUT 2
#define HL_WAIT_CS_STATUS_ABORTED 3
-#define HL_WAIT_CS_STATUS_INTERRUPTED 4

#define HL_WAIT_CS_STATUS_FLAG_GONE 0x1
#define HL_WAIT_CS_STATUS_FLAG_TIMESTAMP_VLD 0x2
···
 * EIO - The CS was aborted (usually because the device was reset)
 * ENODEV - The device wants to do hard-reset (so user need to close FD)
 *
- * The driver also returns a custom define inside the IOCTL which can be:
+ * The driver also returns a custom define in case the IOCTL call returned 0.
+ * The define can be one of the following:
 *
 * HL_WAIT_CS_STATUS_COMPLETED - The CS has been completed successfully (0)
 * HL_WAIT_CS_STATUS_BUSY - The CS is still executing (0)
···
 *                             (ETIMEDOUT)
 * HL_WAIT_CS_STATUS_ABORTED - The CS was aborted, usually because the
 *                             device was reset (EIO)
- * HL_WAIT_CS_STATUS_INTERRUPTED - Waiting for the CS was interrupted (EINTR)
- *
 */

#define HL_IOCTL_WAIT_CS \
+1
init/main.c
···
 	ret = xbc_snprint_cmdline(new_cmdline, len + 1, root);
 	if (ret < 0 || ret > len) {
 		pr_err("Failed to print extra kernel cmdline.\n");
+		memblock_free_ptr(new_cmdline, len + 1);
 		return NULL;
 	}
+29-27
kernel/cgroup/cpuset.c
···
 		if (is_cpuset_online(((des_cs) = css_cs((pos_css)))))

/*
- * There are two global locks guarding cpuset structures - cpuset_mutex and
+ * There are two global locks guarding cpuset structures - cpuset_rwsem and
 * callback_lock. We also require taking task_lock() when dereferencing a
 * task's cpuset pointer. See "The task_lock() exception", at the end of this
- * comment.
+ * comment. The cpuset code uses only cpuset_rwsem write lock. Other
+ * kernel subsystems can use cpuset_read_lock()/cpuset_read_unlock() to
+ * prevent change to cpuset structures.
 *
 * A task must hold both locks to modify cpusets. If a task holds
- * cpuset_mutex, then it blocks others wanting that mutex, ensuring that it
+ * cpuset_rwsem, it blocks others wanting that rwsem, ensuring that it
 * is the only task able to also acquire callback_lock and be able to
 * modify cpusets. It can perform various checks on the cpuset structure
 * first, knowing nothing will change. It can also allocate memory while
- * just holding cpuset_mutex. While it is performing these checks, various
+ * just holding cpuset_rwsem. While it is performing these checks, various
 * callback routines can briefly acquire callback_lock to query cpusets.
 * Once it is ready to make the changes, it takes callback_lock, blocking
 * everyone else.
···
 * One way or another, we guarantee to return some non-empty subset
 * of cpu_online_mask.
 *
- * Call with callback_lock or cpuset_mutex held.
+ * Call with callback_lock or cpuset_rwsem held.
 */
static void guarantee_online_cpus(struct task_struct *tsk,
				  struct cpumask *pmask)
···
 * One way or another, we guarantee to return some non-empty subset
 * of node_states[N_MEMORY].
 *
- * Call with callback_lock or cpuset_mutex held.
+ * Call with callback_lock or cpuset_rwsem held.
 */
static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
{
···
/*
 * update task's spread flag if cpuset's page/slab spread flag is set
 *
- * Call with callback_lock or cpuset_mutex held.
+ * Call with callback_lock or cpuset_rwsem held.
 */
static void cpuset_update_task_spread_flag(struct cpuset *cs,
					struct task_struct *tsk)
···
 *
 * One cpuset is a subset of another if all its allowed CPUs and
 * Memory Nodes are a subset of the other, and its exclusive flags
- * are only set if the other's are set. Call holding cpuset_mutex.
+ * are only set if the other's are set. Call holding cpuset_rwsem.
 */

static int is_cpuset_subset(const struct cpuset *p, const struct cpuset *q)
···
 * If we replaced the flag and mask values of the current cpuset
 * (cur) with those values in the trial cpuset (trial), would
 * our various subset and exclusive rules still be valid? Presumes
- * cpuset_mutex held.
+ * cpuset_rwsem held.
 *
 * 'cur' is the address of an actual, in-use cpuset. Operations
 * such as list traversal that depend on the actual address of the
···
 	rcu_read_unlock();
}

-/* Must be called with cpuset_mutex held. */
+/* Must be called with cpuset_rwsem held. */
static inline int nr_cpusets(void)
{
 	/* jump label reference count + the top-level cpuset */
···
 * domains when operating in the severe memory shortage situations
 * that could cause allocation failures below.
 *
- * Must be called with cpuset_mutex held.
+ * Must be called with cpuset_rwsem held.
 *
 * The three key local variables below are:
 *    cp - cpuset pointer, used (together with pos_css) to perform a
···
 * 'cpus' is removed, then call this routine to rebuild the
 * scheduler's dynamic sched domains.
 *
- * Call with cpuset_mutex held.  Takes cpus_read_lock().
+ * Call with cpuset_rwsem held.  Takes cpus_read_lock().
 */
static void rebuild_sched_domains_locked(void)
{
···
 * @cs: the cpuset in which each task's cpus_allowed mask needs to be changed
 *
 * Iterate through each task of @cs updating its cpus_allowed to the
- * effective cpuset's.  As this function is called with cpuset_mutex held,
+ * effective cpuset's.  As this function is called with cpuset_rwsem held,
 * cpuset membership stays stable.
 */
static void update_tasks_cpumask(struct cpuset *cs)
···
 *
 * On legacy hierarchy, effective_cpus will be the same with cpu_allowed.
 *
- * Called with cpuset_mutex held
+ * Called with cpuset_rwsem held
 */
static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp)
{
···
 * @cs: the cpuset in which each task's mems_allowed mask needs to be changed
 *
 * Iterate through each task of @cs updating its mems_allowed to the
- * effective cpuset's.  As this function is called with cpuset_mutex held,
+ * effective cpuset's.  As this function is called with cpuset_rwsem held,
 * cpuset membership stays stable.
 */
static void update_tasks_nodemask(struct cpuset *cs)
{
-	static nodemask_t newmems;	/* protected by cpuset_mutex */
+	static nodemask_t newmems;	/* protected by cpuset_rwsem */
 	struct css_task_iter it;
 	struct task_struct *task;
···
 	 * take while holding tasklist_lock.  Forks can happen - the
 	 * mpol_dup() cpuset_being_rebound check will catch such forks,
 	 * and rebind their vma mempolicies too.  Because we still hold
-	 * the global cpuset_mutex, we know that no other rebind effort
+	 * the global cpuset_rwsem, we know that no other rebind effort
 	 * will be contending for the global variable cpuset_being_rebound.
 	 * It's ok if we rebind the same mm twice; mpol_rebind_mm()
 	 * is idempotent.  Also migrate pages in each mm to new nodes.
···
 *
 * On legacy hierarchy, effective_mems will be the same with mems_allowed.
 *
- * Called with cpuset_mutex held
+ * Called with cpuset_rwsem held
 */
static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
{
···
 * mempolicies and if the cpuset is marked 'memory_migrate',
 * migrate the tasks pages to the new memory.
 *
- * Call with cpuset_mutex held. May take callback_lock during call.
+ * Call with cpuset_rwsem held. May take callback_lock during call.
 * Will take tasklist_lock, scan tasklist for tasks in cpuset cs,
 * lock each such tasks mm->mmap_lock, scan its vma's and rebind
 * their mempolicies to the cpusets new mems_allowed.
···
 * @cs: the cpuset in which each task's spread flags needs to be changed
 *
 * Iterate through each task of @cs updating its spread flags.  As this
- * function is called with cpuset_mutex held, cpuset membership stays
+ * function is called with cpuset_rwsem held, cpuset membership stays
 * stable.
 */
static void update_tasks_flags(struct cpuset *cs)
···
 * cs: the cpuset to update
 * turning_on: whether the flag is being set or cleared
 *
- * Call with cpuset_mutex held.
+ * Call with cpuset_rwsem held.
 */

static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
···
 * cs: the cpuset to update
 * new_prs: new partition root state
 *
- * Call with cpuset_mutex held.
+ * Call with cpuset_rwsem held.
 */
static int update_prstate(struct cpuset *cs, int new_prs)
{
···

static struct cpuset *cpuset_attach_old_cs;

-/* Called by cgroups to determine if a cpuset is usable; cpuset_mutex held */
+/* Called by cgroups to determine if a cpuset is usable; cpuset_rwsem held */
static int cpuset_can_attach(struct cgroup_taskset *tset)
{
 	struct cgroup_subsys_state *css;
···
}

/*
- * Protected by cpuset_mutex.  cpus_attach is used only by cpuset_attach()
+ * Protected by cpuset_rwsem.  cpus_attach is used only by cpuset_attach()
 * but we can't allocate it dynamically there.  Define it global and
 * allocate from cpuset_init().
 */
···

static void cpuset_attach(struct cgroup_taskset *tset)
{
-	/* static buf protected by cpuset_mutex */
+	/* static buf protected by cpuset_rwsem */
 	static nodemask_t cpuset_attach_nodemask_to;
 	struct task_struct *task;
 	struct task_struct *leader;
···
 	 * operation like this one can lead to a deadlock through kernfs
 	 * active_ref protection.  Let's break the protection.  Losing the
 	 * protection is okay as we check whether @cs is online after
-	 * grabbing cpuset_mutex anyway.  This only happens on the legacy
+	 * grabbing cpuset_rwsem anyway.  This only happens on the legacy
 	 * hierarchies.
 	 */
 	css_get(&cs->css);
···
 *  - Used for /proc/<pid>/cpuset.
 *  - No need to task_lock(tsk) on this tsk->cpuset reference, as it
 *    doesn't really matter if tsk->cpuset changes after we read it,
- *    and we take cpuset_mutex, keeping cpuset_attach() from changing it
+ *    and we take cpuset_rwsem, keeping cpuset_attach() from changing it
 *    anyway.
 */
int proc_cpuset_show(struct seq_file *m, struct pid_namespace *ns,
···
 	irq_work_queue(&tr->fsnotify_irqwork);
}

-/*
- * (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
- * defined(CONFIG_FSNOTIFY)
- */
-#else
+#elif defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER) \
+	|| defined(CONFIG_OSNOISE_TRACER)

#define trace_create_maxlat_file(tr, d_tracer) \
	trace_create_file("tracing_max_latency", 0644, d_tracer, \
			  &tr->max_latency, &tracing_max_lat_fops)

+#else
+#define trace_create_maxlat_file(tr, d_tracer) do { } while (0)
#endif
···

 	create_trace_options_dir(tr);

-#if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
 	trace_create_maxlat_file(tr, d_tracer);
-#endif

 	if (ftrace_create_function_files(tr, d_tracer))
 		MEM_FAIL(1, "Could not allocate function filter files");
+58-3
kernel/trace/trace_eprobe.c
···
 			int argc, const char **argv, struct dyn_event *ev)
{
 	struct trace_eprobe *ep = to_trace_eprobe(ev);
+	const char *slash;

-	return strcmp(trace_probe_name(&ep->tp), event) == 0 &&
-	    (!system || strcmp(trace_probe_group_name(&ep->tp), system) == 0) &&
-	    trace_probe_match_command_args(&ep->tp, argc, argv);
+	/*
+	 * We match the following:
+	 *  event only - match all eprobes with event name
+	 *  system and event only - match all system/event probes
+	 *
+	 * The below has the above satisfied with more arguments:
+	 *
+	 *  attached system/event - If the arg has the system and event
+	 *                          the probe is attached to, match
+	 *                          probes with the attachment.
+	 *
+	 *  If any more args are given, then it requires a full match.
+	 */
+
+	/*
+	 * If system exists, but this probe is not part of that system
+	 * do not match.
+	 */
+	if (system && strcmp(trace_probe_group_name(&ep->tp), system) != 0)
+		return false;
+
+	/* Must match the event name */
+	if (strcmp(trace_probe_name(&ep->tp), event) != 0)
+		return false;
+
+	/* No arguments match all */
+	if (argc < 1)
+		return true;
+
+	/* First argument is the system/event the probe is attached to */
+
+	slash = strchr(argv[0], '/');
+	if (!slash)
+		slash = strchr(argv[0], '.');
+	if (!slash)
+		return false;
+
+	if (strncmp(ep->event_system, argv[0], slash - argv[0]))
+		return false;
+	if (strcmp(ep->event_name, slash + 1))
+		return false;
+
+	argc--;
+	argv++;
+
+	/* If there are no other args, then match */
+	if (argc < 1)
+		return true;
+
+	return trace_probe_match_command_args(&ep->tp, argc, argv);
}

static struct dyn_event_operations eprobe_dyn_event_ops = {
···

 	trace_event_trigger_enable_disable(file, 0);
 	update_cond_flag(file);
+
+	/* Make sure nothing is using the edata or trigger */
+	tracepoint_synchronize_unregister();
+
+	kfree(edata);
+	kfree(trigger);
+
 	return 0;
}
+1-1
kernel/trace/trace_events_hist.c
···
 * events. However, for convenience, users are allowed to directly
 * specify an event field in an action, which will be automatically
 * converted into a variable on their behalf.
-
+ *
 * If a user specifies a field on an event that isn't the event the
 * histogram currently being defined (the target event histogram), the
 * only way that can be accomplished is if a new hist trigger is
+16-2
kernel/workqueue.c
···

 	for_each_pwq(pwq, wq) {
 		raw_spin_lock_irqsave(&pwq->pool->lock, flags);
-		if (pwq->nr_active || !list_empty(&pwq->inactive_works))
+		if (pwq->nr_active || !list_empty(&pwq->inactive_works)) {
+			/*
+			 * Defer printing to avoid deadlocks in console
+			 * drivers that queue work while holding locks
+			 * also taken in their write paths.
+			 */
+			printk_deferred_enter();
 			show_pwq(pwq);
+			printk_deferred_exit();
+		}
 		raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
 		/*
 		 * We could be printing a lot from atomic context, e.g.
···
 		raw_spin_lock_irqsave(&pool->lock, flags);
 		if (pool->nr_workers == pool->nr_idle)
 			goto next_pool;
-
+		/*
+		 * Defer printing to avoid deadlocks in console drivers that
+		 * queue work while holding locks also taken in their write
+		 * paths.
+		 */
+		printk_deferred_enter();
 		pr_info("pool %d:", pool->id);
 		pr_cont_pool_info(pool);
 		pr_cont(" hung=%us workers=%d",
···
 			first = false;
 		}
 		pr_cont("\n");
+		printk_deferred_exit();
 	next_pool:
 		raw_spin_unlock_irqrestore(&pool->lock, flags);
 		/*
···

config NET_DSA_TAG_OCELOT
	tristate "Tag driver for Ocelot family of switches, using NPI port"
-	depends on MSCC_OCELOT_SWITCH_LIB || \
-		   (MSCC_OCELOT_SWITCH_LIB=n && COMPILE_TEST)
	select PACKING
	help
	  Say Y or M if you want to enable NPI tagging for the Ocelot switches
···

config NET_DSA_TAG_OCELOT_8021Q
	tristate "Tag driver for Ocelot family of switches, using VLAN"
-	depends on MSCC_OCELOT_SWITCH_LIB || \
-		   (MSCC_OCELOT_SWITCH_LIB=n && COMPILE_TEST)
	help
	  Say Y or M if you want to enable support for tagging frames with a
	  custom VLAN-based header. Frames that require timestamping, such as
···

config NET_DSA_TAG_SJA1105
	tristate "Tag driver for NXP SJA1105 switches"
-	depends on NET_DSA_SJA1105 || !NET_DSA_SJA1105
	select PACKING
	help
	  Say Y or M if you want to enable support for tagging frames with the
+3-1
net/dsa/dsa2.c
···
 	/* Check if the bridge is still in use, otherwise it is time
 	 * to clean it up so we can reuse this bridge_num later.
 	 */
-	if (!dsa_bridge_num_find(bridge_dev))
+	if (dsa_bridge_num_find(bridge_dev) < 0)
 		clear_bit(bridge_num, &dsa_fwd_offloading_bridges);
}
···
 		if (!dsa_is_cpu_port(ds, port))
 			continue;

+		rtnl_lock();
 		err = ds->ops->change_tag_protocol(ds, port, tag_ops->proto);
+		rtnl_unlock();
 		if (err) {
 			dev_err(ds->dev, "Unable to use tag protocol \"%s\": %pe\n",
 				tag_ops->name, ERR_PTR(err));
+1-1
net/dsa/switch.c
···
 		if (extack._msg)
 			dev_err(ds->dev, "port %d: %s\n", info->port,
 				extack._msg);
-		if (err && err != EOPNOTSUPP)
+		if (err && err != -EOPNOTSUPP)
 			return err;
 	}
···
#include <linux/if_vlan.h>
#include <linux/dsa/sja1105.h>
#include <linux/dsa/8021q.h>
+#include <linux/skbuff.h>
#include <linux/packing.h>
#include "dsa_priv.h"
···
#define SJA1110_RX_TRAILER_LEN 13
#define SJA1110_TX_TRAILER_LEN 4
#define SJA1110_MAX_PADDING_LEN 15
+
+enum sja1110_meta_tstamp {
+	SJA1110_META_TSTAMP_TX = 0,
+	SJA1110_META_TSTAMP_RX = 1,
+};

/* Similar to is_link_local_ether_addr(hdr->h_dest) but also covers PTP */
static inline bool sja1105_is_link_local(const struct sk_buff *skb)
···

 	return sja1105_rcv_meta_state_machine(skb, &meta, is_link_local,
 					      is_meta);
+}
+
+static void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port,
+					u8 ts_id, enum sja1110_meta_tstamp dir,
+					u64 tstamp)
+{
+	struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
+	struct dsa_port *dp = dsa_to_port(ds, port);
+	struct skb_shared_hwtstamps shwt = {0};
+	struct sja1105_port *sp = dp->priv;
+
+	if (!dsa_port_is_sja1105(dp))
+		return;
+
+	/* We don't care about RX timestamps on the CPU port */
+	if (dir == SJA1110_META_TSTAMP_RX)
+		return;
+
+	spin_lock(&sp->data->skb_txtstamp_queue.lock);
+
+	skb_queue_walk_safe(&sp->data->skb_txtstamp_queue, skb, skb_tmp) {
+		if (SJA1105_SKB_CB(skb)->ts_id != ts_id)
+			continue;
+
+		__skb_unlink(skb, &sp->data->skb_txtstamp_queue);
+		skb_match = skb;
+
+		break;
+	}
+
+	spin_unlock(&sp->data->skb_txtstamp_queue.lock);
+
+	if (WARN_ON(!skb_match))
+		return;
+
+	shwt.hwtstamp = ns_to_ktime(sja1105_ticks_to_ns(tstamp));
+	skb_complete_tx_timestamp(skb_match, &shwt);
}

static struct sk_buff *sja1110_rcv_meta(struct sk_buff *skb, u16 rx_header)
+11-12
net/ipv4/icmp.c
···
 	iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr), &_iio);
 	if (!ext_hdr || !iio)
 		goto send_mal_query;
-	if (ntohs(iio->extobj_hdr.length) <= sizeof(iio->extobj_hdr))
+	if (ntohs(iio->extobj_hdr.length) <= sizeof(iio->extobj_hdr) ||
+	    ntohs(iio->extobj_hdr.length) > sizeof(_iio))
 		goto send_mal_query;
 	ident_len = ntohs(iio->extobj_hdr.length) - sizeof(iio->extobj_hdr);
+	iio = skb_header_pointer(skb, sizeof(_ext_hdr),
+				 sizeof(iio->extobj_hdr) + ident_len, &_iio);
+	if (!iio)
+		goto send_mal_query;
+
 	status = 0;
 	dev = NULL;
 	switch (iio->extobj_hdr.class_type) {
 	case ICMP_EXT_ECHO_CTYPE_NAME:
-		iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(_iio), &_iio);
 		if (ident_len >= IFNAMSIZ)
 			goto send_mal_query;
 		memset(buff, 0, sizeof(buff));
···
 		dev = dev_get_by_name(net, buff);
 		break;
 	case ICMP_EXT_ECHO_CTYPE_INDEX:
-		iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr) +
-					 sizeof(iio->ident.ifindex), &_iio);
 		if (ident_len != sizeof(iio->ident.ifindex))
 			goto send_mal_query;
 		dev = dev_get_by_index(net, ntohl(iio->ident.ifindex));
 		break;
 	case ICMP_EXT_ECHO_CTYPE_ADDR:
-		if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
+		if (ident_len < sizeof(iio->ident.addr.ctype3_hdr) ||
+		    ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
 				 iio->ident.addr.ctype3_hdr.addrlen)
 			goto send_mal_query;
 		switch (ntohs(iio->ident.addr.ctype3_hdr.afi)) {
 		case ICMP_AFI_IP:
-			iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr) +
-						 sizeof(struct in_addr), &_iio);
-			if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
-					 sizeof(struct in_addr))
+			if (iio->ident.addr.ctype3_hdr.addrlen != sizeof(struct in_addr))
 				goto send_mal_query;
 			dev = ip_dev_find(net, iio->ident.addr.ip_addr.ipv4_addr);
 			break;
#if IS_ENABLED(CONFIG_IPV6)
 		case ICMP_AFI_IP6:
-			iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(_iio), &_iio);
-			if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) +
-					 sizeof(struct in6_addr))
+			if (iio->ident.addr.ctype3_hdr.addrlen != sizeof(struct in6_addr))
 				goto send_mal_query;
 			dev = ipv6_stub->ipv6_dev_find(net, &iio->ident.addr.ip_addr.ipv6_addr, dev);
 			dev_hold(dev);
+62-8
net/ipv6/ioam6.c
···
 		data += sizeof(__be32);
 	}

+	/* bit12 undefined: filled with empty value */
+	if (trace->type.bit12) {
+		*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+		data += sizeof(__be32);
+	}
+
+	/* bit13 undefined: filled with empty value */
+	if (trace->type.bit13) {
+		*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+		data += sizeof(__be32);
+	}
+
+	/* bit14 undefined: filled with empty value */
+	if (trace->type.bit14) {
+		*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+		data += sizeof(__be32);
+	}
+
+	/* bit15 undefined: filled with empty value */
+	if (trace->type.bit15) {
+		*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+		data += sizeof(__be32);
+	}
+
+	/* bit16 undefined: filled with empty value */
+	if (trace->type.bit16) {
+		*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+		data += sizeof(__be32);
+	}
+
+	/* bit17 undefined: filled with empty value */
+	if (trace->type.bit17) {
+		*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+		data += sizeof(__be32);
+	}
+
+	/* bit18 undefined: filled with empty value */
+	if (trace->type.bit18) {
+		*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+		data += sizeof(__be32);
+	}
+
+	/* bit19 undefined: filled with empty value */
+	if (trace->type.bit19) {
+		*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+		data += sizeof(__be32);
+	}
+
+	/* bit20 undefined: filled with empty value */
+	if (trace->type.bit20) {
+		*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+		data += sizeof(__be32);
+	}
+
+	/* bit21 undefined: filled with empty value */
+	if (trace->type.bit21) {
+		*(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE);
+		data += sizeof(__be32);
+	}
+
 	/* opaque state snapshot */
 	if (trace->type.bit22) {
 		if (!sc) {
···
 	struct ioam6_schema *sc;
 	u8 sclen = 0;

-	/* Skip if Overflow flag is set OR
-	 * if an unknown type (bit 12-21) is set
+	/* Skip if Overflow flag is set
 	 */
-	if (trace->overflow ||
-	    trace->type.bit12 | trace->type.bit13 | trace->type.bit14 |
-	    trace->type.bit15 | trace->type.bit16 | trace->type.bit17 |
-	    trace->type.bit18 | trace->type.bit19 | trace->type.bit20 |
-	    trace->type.bit21) {
+	if (trace->overflow)
 		return;
-	}

 	/* NodeLen does not include Opaque State Snapshot length. We need to
 	 * take it into account if the corresponding bit is set (bit 22) and
···
#include <sound/core.h>
#include <sound/initval.h>
#include "hda_controller.h"
+#include "hda_local.h"

#define CREATE_TRACE_POINTS
#include "hda_controller_trace.h"
···
int azx_codec_configure(struct azx *chip)
{
 	struct hda_codec *codec, *next;
+	int success = 0;

-	/* use _safe version here since snd_hda_codec_configure() deregisters
-	 * the device upon error and deletes itself from the bus list.
-	 */
-	list_for_each_codec_safe(codec, next, &chip->bus) {
-		snd_hda_codec_configure(codec);
+	list_for_each_codec(codec, &chip->bus) {
+		if (!snd_hda_codec_configure(codec))
+			success++;
 	}

-	if (!azx_bus(chip)->num_codecs)
-		return -ENODEV;
-	return 0;
+	if (success) {
+		/* unregister failed codecs if any codec has been probed */
+		list_for_each_codec_safe(codec, next, &chip->bus) {
+			if (!codec->configured) {
+				codec_err(codec, "Unable to configure, disabling\n");
+				snd_hdac_device_unregister(&codec->core);
+			}
+		}
+	}
+
+	return success ? 0 : -ENODEV;
}
EXPORT_SYMBOL_GPL(azx_codec_configure);

+1-1
sound/pci/hda/hda_controller.h
···4141/* 24 unused */4242#define AZX_DCAPS_COUNT_LPIB_DELAY (1 << 25) /* Take LPIB as delay */4343#define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */4444-/* 27 unused */4444+#define AZX_DCAPS_RETRY_PROBE (1 << 27) /* retry probe if no codec is configured */4545#define AZX_DCAPS_CORBRP_SELF_CLEAR (1 << 28) /* CORBRP clears itself after reset */4646#define AZX_DCAPS_NO_MSI64 (1 << 29) /* Stick to 32-bit MSIs */4747#define AZX_DCAPS_SEPARATE_STREAM_TAG (1 << 30) /* capture and playback use separate stream tag */
+23-6
sound/pci/hda/hda_intel.c
···307307/* quirks for AMD SB */308308#define AZX_DCAPS_PRESET_AMD_SB \309309 (AZX_DCAPS_NO_TCSEL | AZX_DCAPS_AMD_WORKAROUND |\310310- AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME)310310+ AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME |\311311+ AZX_DCAPS_RETRY_PROBE)311312312313/* quirks for Nvidia */313314#define AZX_DCAPS_PRESET_NVIDIA \···1724172317251724static void azx_probe_work(struct work_struct *work)17261725{17271727- struct hda_intel *hda = container_of(work, struct hda_intel, probe_work);17261726+ struct hda_intel *hda = container_of(work, struct hda_intel, probe_work.work);17281727 azx_probe_continue(&hda->chip);17291728}17301729···18291828 }1830182918311830 /* continue probing in work context as may trigger request module */18321832- INIT_WORK(&hda->probe_work, azx_probe_work);18311831+ INIT_DELAYED_WORK(&hda->probe_work, azx_probe_work);1833183218341833 *rchip = chip;18351834···21432142#endif2144214321452144 if (schedule_probe)21462146- schedule_work(&hda->probe_work);21452145+ schedule_delayed_work(&hda->probe_work, 0);2147214621482147 dev++;21492148 if (chip->disabled)···22292228 int dev = chip->dev_index;22302229 int err;2231223022312231+ if (chip->disabled || hda->init_failed)22322232+ return -EIO;22332233+ if (hda->probe_retry)22342234+ goto probe_retry;22352235+22322236 to_hda_bus(bus)->bus_probing = 1;22332237 hda->probe_continued = 1;22342238···22952289#endif22962290 }22972291#endif22922292+22932293+ probe_retry:22982294 if (bus->codec_mask && !(probe_only[dev] & 1)) {22992295 err = azx_codec_configure(chip);23002300- if (err < 0)22962296+ if (err) {22972297+ if ((chip->driver_caps & AZX_DCAPS_RETRY_PROBE) &&22982298+ ++hda->probe_retry < 60) {22992299+ schedule_delayed_work(&hda->probe_work,23002300+ msecs_to_jiffies(1000));23012301+ return 0; /* keep things up */23022302+ }23032303+ dev_err(chip->card->dev, "Cannot probe codecs, giving up\n");23012304 goto out_free;23052305+ }23022306 }2303230723042308 err = 
snd_card_register(chip->card);···23382322 display_power(chip, false);23392323 complete_all(&hda->probe_wait);23402324 to_hda_bus(bus)->bus_probing = 0;23252325+ hda->probe_retry = 0;23412326 return 0;23422327}23432328···23642347 * device during cancel_work_sync() call.23652348 */23662349 device_unlock(&pci->dev);23672367- cancel_work_sync(&hda->probe_work);23502350+ cancel_delayed_work_sync(&hda->probe_work);23682351 device_lock(&pci->dev);2369235223702353 snd_card_free(card);
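The AZX_DCAPS_RETRY_PROBE path above reschedules the probe work with a one-second delay, up to 60 attempts, before giving up with "Cannot probe codecs". In the driver each attempt returns to the workqueue between tries; modelled as a plain loop in Python:

```python
def probe_with_retry(configure_once, max_retries=60):
    """Sketch of the delayed-work retry: keep re-running the configure
    step (in the driver, rescheduled 1000 ms later each time) until it
    succeeds or the 60-attempt budget is exhausted.
    configure_once() returns 0 on success, like azx_codec_configure()."""
    for retry in range(max_retries):
        if configure_once() == 0:
            return retry  # how many failed attempts preceded success
    raise RuntimeError("Cannot probe codecs, giving up")
```

A configure step that succeeds on its third call therefore completes after two failed attempts, while a permanently failing one exhausts the budget and gives up.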
+3-1
sound/pci/hda/hda_intel.h
···14141515 /* sync probing */1616 struct completion probe_wait;1717- struct work_struct probe_work;1717+ struct delayed_work probe_work;18181919 /* card list (for power_save trigger) */2020 struct list_head list;···3030 unsigned int freed:1; /* resources already released */31313232 bool need_i915_power:1; /* the hda controller needs i915 power */3333+3434+ int probe_retry; /* being probe-retry */3335};34363537#endif
+62-4
sound/pci/hda/patch_realtek.c
···526526 struct alc_spec *spec = codec->spec;527527528528 switch (codec->core.vendor_id) {529529+ case 0x10ec0236:530530+ case 0x10ec0256:529531 case 0x10ec0283:530532 case 0x10ec0286:531533 case 0x10ec0288:···25392537 SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),25402538 SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),25412539 SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),25422542- SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),25402540+ SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170SM", ALC1220_FIXUP_CLEVO_PB51ED_PINS),25412541+ SND_PCI_QUIRK(0x1558, 0x7715, "Clevo X170KM-G", ALC1220_FIXUP_CLEVO_PB51ED),25432542 SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),25442543 SND_PCI_QUIRK(0x1558, 0x9506, "Clevo P955HQ", ALC1220_FIXUP_CLEVO_P950),25452544 SND_PCI_QUIRK(0x1558, 0x950a, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950),···35313528 /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly35323529 * when booting with headset plugged. So skip setting it for the codec alc25735333530 */35343534- if (codec->core.vendor_id != 0x10ec0257)35313531+ if (spec->codec_variant != ALC269_TYPE_ALC257 &&35323532+ spec->codec_variant != ALC269_TYPE_ALC256)35353533 alc_update_coef_idx(codec, 0x46, 0, 3 << 12);3536353435373535 if (!spec->no_shutup_pins)···64536449/* for alc285_fixup_ideapad_s740_coef() */64546450#include "ideapad_s740_helper.c"6455645164526452+static void alc256_fixup_tongfang_reset_persistent_settings(struct hda_codec *codec,64536453+ const struct hda_fixup *fix,64546454+ int action)64556455+{64566456+ /*64576457+ * A certain other OS sets these coeffs to different values. On at least one TongFang64586458+ * barebone these settings might survive even a cold reboot. So to restore a clean slate the64596459+ * values are explicitly reset to default here. 
Without this, the external microphone is64606460+ * always in a plugged-in state, while the internal microphone is always in an unplugged64616461+ * state, breaking the ability to use the internal microphone.64626462+ */64636463+ alc_write_coef_idx(codec, 0x24, 0x0000);64646464+ alc_write_coef_idx(codec, 0x26, 0x0000);64656465+ alc_write_coef_idx(codec, 0x29, 0x3000);64666466+ alc_write_coef_idx(codec, 0x37, 0xfe05);64676467+ alc_write_coef_idx(codec, 0x45, 0x5089);64686468+}64696469+64566470enum {64576471 ALC269_FIXUP_GPIO2,64586472 ALC269_FIXUP_SONY_VAIO,···66856663 ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS,66866664 ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,66876665 ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,66886688- ALC287_FIXUP_13S_GEN2_SPEAKERS66666666+ ALC287_FIXUP_13S_GEN2_SPEAKERS,66676667+ ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS,66896668};6690666966916670static const struct hda_fixup alc269_fixups[] = {···83678344 .v.verbs = (const struct hda_verb[]) {83688345 { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 },83698346 { 0x20, AC_VERB_SET_PROC_COEF, 0x41 },83708370- { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 },83478347+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 },83718348 { 0x20, AC_VERB_SET_PROC_COEF, 0x2 },83728349 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },83738350 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 },···83838360 },83848361 .chained = true,83858362 .chain_id = ALC269_FIXUP_HEADSET_MODE,83638363+ },83648364+ [ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS] = {83658365+ .type = HDA_FIXUP_FUNC,83668366+ .v.func = alc256_fixup_tongfang_reset_persistent_settings,83868367 },83878368};83888369···84798452 SND_PCI_QUIRK(0x1028, 0x0a30, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC),84808453 SND_PCI_QUIRK(0x1028, 0x0a58, "Dell", ALC255_FIXUP_DELL_HEADSET_MIC),84818454 SND_PCI_QUIRK(0x1028, 0x0a61, "Dell XPS 15 9510", ALC289_FIXUP_DUAL_SPK),84558455+ SND_PCI_QUIRK(0x1028, 0x0a62, "Dell Precision 5560", ALC289_FIXUP_DUAL_SPK),84568456+ SND_PCI_QUIRK(0x1028, 0x0a9d, "Dell Latitude 5430", 
ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),84578457+ SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),84828458 SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),84838459 SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),84848460 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),···88198789 SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */88208790 SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),88218791 SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),87928792+ SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS),88228793 SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),88238794 SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE),88248795 SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),···1019710166 ALC671_FIXUP_HP_HEADSET_MIC2,1019810167 ALC662_FIXUP_ACER_X2660G_HEADSET_MODE,1019910168 ALC662_FIXUP_ACER_NITRO_HEADSET_MODE,1016910169+ ALC668_FIXUP_ASUS_NO_HEADSET_MIC,1017010170+ ALC668_FIXUP_HEADSET_MIC,1017110171+ ALC668_FIXUP_MIC_DET_COEF,1020010172};10201101731020210174static const struct hda_fixup alc662_fixups[] = {···1058310549 .chained = true,1058410550 .chain_id = ALC662_FIXUP_USI_FUNC1058510551 },1055210552+ [ALC668_FIXUP_ASUS_NO_HEADSET_MIC] = {1055310553+ .type = HDA_FIXUP_PINS,1055410554+ .v.pins = (const struct hda_pintbl[]) {1055510555+ { 0x1b, 0x04a1112c },1055610556+ { }1055710557+ },1055810558+ .chained = true,1055910559+ .chain_id = ALC668_FIXUP_HEADSET_MIC1056010560+ },1056110561+ [ALC668_FIXUP_HEADSET_MIC] = {1056210562+ .type = HDA_FIXUP_FUNC,1056310563+ .v.func = alc269_fixup_headset_mic,1056410564+ .chained = true,1056510565+ .chain_id = ALC668_FIXUP_MIC_DET_COEF1056610566+ },1056710567+ 
[ALC668_FIXUP_MIC_DET_COEF] = {1056810568+ .type = HDA_FIXUP_VERBS,1056910569+ .v.verbs = (const struct hda_verb[]) {1057010570+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x15 },1057110571+ { 0x20, AC_VERB_SET_PROC_COEF, 0x0d60 },1057210572+ {}1057310573+ },1057410574+ },1058610575};10587105761058810577static const struct snd_pci_quirk alc662_fixup_tbl[] = {···1064110584 SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),1064210585 SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),1064310586 SND_PCI_QUIRK(0x1043, 0x17bd, "ASUS N751", ALC668_FIXUP_ASUS_Nx51),1058710587+ SND_PCI_QUIRK(0x1043, 0x185d, "ASUS G551JW", ALC668_FIXUP_ASUS_NO_HEADSET_MIC),1064410588 SND_PCI_QUIRK(0x1043, 0x1963, "ASUS X71SL", ALC662_FIXUP_ASUS_MODE8),1064510589 SND_PCI_QUIRK(0x1043, 0x1b73, "ASUS N55SF", ALC662_FIXUP_BASS_16),1064610590 SND_PCI_QUIRK(0x1043, 0x1bf3, "ASUS N76VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
+2
sound/usb/mixer_scarlett_gen2.c
···24502450 err = scarlett2_usb_get_config(mixer,24512451 SCARLETT2_CONFIG_TALKBACK_MAP,24522452 1, &bitmap);24532453+ if (err < 0)24542454+ return err;24532455 for (i = 0; i < num_mixes; i++, bitmap >>= 1)24542456 private->talkback_map[i] = bitmap & 1;24552457 }
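The added `if (err < 0) return err;` stops the talkback bitmap from being unpacked when the config read failed. For reference, the unpack loop itself keeps the low bit for each mix and shifts right; in Python:

```python
def unpack_talkback_map(bitmap, num_mixes):
    """Mirror of the loop above: one flag per mix, taken from
    successive low bits of the bitmap."""
    talkback_map = []
    for _ in range(num_mixes):
        talkback_map.append(bitmap & 1)
        bitmap >>= 1
    return talkback_map
```

Without the error check, a failed scarlett2_usb_get_config() would leave the bitmap uninitialized and this loop would fill the map with garbage.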
+42
sound/usb/quirks-table.h
···7878{ USB_DEVICE_VENDOR_SPEC(0x041e, 0x3f19) },79798080/*8181+ * Creative Technology, Ltd Live! Cam Sync HD [VF0770]8282+ * The device advertises 8 formats, but only a rate of 48kHz is honored by the8383+ * hardware and 24 bits give chopped audio, so only report the one working8484+ * combination.8585+ */8686+{8787+ USB_DEVICE(0x041e, 0x4095),8888+ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {8989+ .ifnum = QUIRK_ANY_INTERFACE,9090+ .type = QUIRK_COMPOSITE,9191+ .data = &(const struct snd_usb_audio_quirk[]) {9292+ {9393+ .ifnum = 2,9494+ .type = QUIRK_AUDIO_STANDARD_MIXER,9595+ },9696+ {9797+ .ifnum = 3,9898+ .type = QUIRK_AUDIO_FIXED_ENDPOINT,9999+ .data = &(const struct audioformat) {100100+ .formats = SNDRV_PCM_FMTBIT_S16_LE,101101+ .channels = 2,102102+ .fmt_bits = 16,103103+ .iface = 3,104104+ .altsetting = 4,105105+ .altset_idx = 4,106106+ .endpoint = 0x82,107107+ .ep_attr = 0x05,108108+ .rates = SNDRV_PCM_RATE_48000,109109+ .rate_min = 48000,110110+ .rate_max = 48000,111111+ .nr_rates = 1,112112+ .rate_table = (unsigned int[]) { 48000 },113113+ },114114+ },115115+ {116116+ .ifnum = -1117117+ },118118+ },119119+ },120120+},121121+122122+/*81123 * HP Wireless Audio82124 * When not ignored, causes instability issues for some users, forcing them to83125 * skip the entire module.
···16161717from collections import namedtuple1818from enum import Enum, auto1919-from typing import Iterable1919+from typing import Iterable, Sequence20202121import kunit_config2222import kunit_json···186186 exec_result.elapsed_time))187187 return parse_result188188189189+# Problem:190190+# $ kunit.py run --json191191+# works as one would expect and prints the parsed test results as JSON.192192+# $ kunit.py run --json suite_name193193+# would *not* pass suite_name as the filter_glob and print as json.194194+# argparse will consider it to be another way of writing195195+# $ kunit.py run --json=suite_name196196+# i.e. it would run all tests, and dump the json to a `suite_name` file.197197+# So we hackily automatically rewrite --json => --json=stdout198198+pseudo_bool_flag_defaults = {199199+ '--json': 'stdout',200200+ '--raw_output': 'kunit',201201+}202202+def massage_argv(argv: Sequence[str]) -> Sequence[str]:203203+ def massage_arg(arg: str) -> str:204204+ if arg not in pseudo_bool_flag_defaults:205205+ return arg206206+ return f'{arg}={pseudo_bool_flag_defaults[arg]}'207207+ return list(map(massage_arg, argv))208208+189209def add_common_opts(parser) -> None:190210 parser.add_argument('--build_dir',191211 help='As in the make command, it specifies the build '···323303 help='Specifies the file to read results from.',324304 type=str, nargs='?', metavar='input_file')325305326326- cli_args = parser.parse_args(argv)306306+ cli_args = parser.parse_args(massage_argv(argv))327307328308 if get_kernel_root_path():329309 os.chdir(get_kernel_root_path())
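massage_argv() above rewrites a bare `--json` into `--json=stdout` (and `--raw_output` into `--raw_output=kunit`) so that argparse, which treats these as flags taking an optional value, does not swallow the following positional argument. The same rewrite as a standalone sketch:

```python
pseudo_bool_flag_defaults = {
    '--json': 'stdout',
    '--raw_output': 'kunit',
}

def massage_argv(argv):
    """Give bare pseudo-bool flags their explicit default value so a
    following positional arg (e.g. a filter glob) is not consumed as
    the flag's value. Flags already written as --flag=value pass through."""
    return [arg if arg not in pseudo_bool_flag_defaults
            else f'{arg}={pseudo_bool_flag_defaults[arg]}'
            for arg in argv]
```

So `kunit.py run --json suite_name` becomes `run --json=stdout suite_name`, running only `suite_name` and printing JSON to stdout, instead of running everything and dumping JSON to a file named `suite_name`.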
+8
tools/testing/kunit/kunit_tool_test.py
···408408 self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))409409 self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))410410411411+ def test_run_raw_output_does_not_take_positional_args(self):412412+ # --raw_output is a string flag, but we don't want it to consume413413+ # any positional arguments, only ones after an '='414414+ self.linux_source_mock.run_kernel = mock.Mock(return_value=[])415415+ kunit.main(['run', '--raw_output', 'filter_glob'], self.linux_source_mock)416416+ self.linux_source_mock.run_kernel.assert_called_once_with(417417+ args=None, build_dir='.kunit', filter_glob='filter_glob', timeout=300)418418+411419 def test_exec_timeout(self):412420 timeout = 3453413421 kunit.main(['exec', '--timeout', str(timeout)], self.linux_source_mock)