···
 title: Lattice Slave SPI sysCONFIG FPGA manager
 
 maintainers:
-  - Ivan Bornyakov <i.bornyakov@metrotek.ru>
+  - Vladimir Georgiev <v.georgiev@metrotek.ru>
 
 description: |
   Lattice sysCONFIG port, which is used for FPGA configuration, among others,
···
 title: Microchip Polarfire FPGA manager.
 
 maintainers:
-  - Ivan Bornyakov <i.bornyakov@metrotek.ru>
+  - Vladimir Georgiev <v.georgiev@metrotek.ru>
 
 description:
   Device Tree Bindings for Microchip Polarfire FPGA Manager using slave SPI to
···
       of the MAX chips to the GyroADC, while MISO line of each Maxim
       ADC connects to a shared input pin of the GyroADC.
     enum:
-      - adi,7476
+      - adi,ad7476
       - fujitsu,mb88101a
       - maxim,max1162
       - maxim,max11100
···
     description:
       High-Speed PHY interface selection between UTMI+ and ULPI when the
       DWC_USB3_HSPHY_INTERFACE has value 3.
-    $ref: /schemas/types.yaml#/definitions/uint8
+    $ref: /schemas/types.yaml#/definitions/string
     enum: [utmi, ulpi]
 
   snps,quirk-frame-length-adjustment:
Documentation/mm/page_table_check.rst | +19
···
 
 Optionally, build kernel with PAGE_TABLE_CHECK_ENFORCED in order to have page
 table support without extra kernel parameter.
+
+Implementation notes
+====================
+
+We specifically decided not to use VMA information in order to avoid relying on
+MM states (except for limited "struct page" info). The page table check is a
+separate from Linux-MM state machine that verifies that the user accessible
+pages are not falsely shared.
+
+PAGE_TABLE_CHECK depends on EXCLUSIVE_SYSTEM_RAM. The reason is that without
+EXCLUSIVE_SYSTEM_RAM, users are allowed to map arbitrary physical memory
+regions into the userspace via /dev/mem. At the same time, pages may change
+their properties (e.g., from anonymous pages to named pages) while they are
+still being mapped in the userspace, leading to "corruption" detected by the
+page table check.
+
+Even with EXCLUSIVE_SYSTEM_RAM, I/O pages may be still allowed to be mapped via
+/dev/mem. However, these pages are always considered as named pages, so they
+won't break the logic used in the page table check.
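The "falsely shared" invariant described in the added notes can be illustrated with a toy userspace model: a physical page may be mapped as anonymous or as named/file-backed, but never as both at once. This is only a sketch of the idea; `page_state` and `map_page` are illustrative names, not kernel APIs.

```c
#include <assert.h>

/* Toy model: per-page counters of how the page is mapped to userspace. */
struct page_state {
    int anon_mappings;
    int named_mappings;
};

/* Returns 0 on success, -1 if the new mapping would "falsely share"
 * the page across anonymous and named uses (the condition the page
 * table check reports as corruption). */
static int map_page(struct page_state *p, int anon)
{
    if (anon) {
        if (p->named_mappings)
            return -1;
        p->anon_mappings++;
    } else {
        if (p->anon_mappings)
            return -1;
        p->named_mappings++;
    }
    return 0;
}
```

I/O pages mapped via /dev/mem fit this model because they always count as named mappings, so they never trip the anonymous-vs-named conflict.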
···
 ---------------------------------------------
 The flow steering mode parameter controls the flow steering mode of the driver.
 Two modes are supported:
+
 1. 'dmfs' - Device managed flow steering.
 2. 'smfs' - Software/Driver managed flow steering.
 
···
 By default metadata is enabled on the supported devices in E-switch.
 Metadata is applicable only for E-switch in switchdev mode and
 users may disable it when NONE of the below use cases will be in use:
+
 1. HCA is in Dual/multi-port RoCE mode.
 2. VF/SF representor bonding (Usually used for Live migration)
 3. Stacked devices
···
 
     $ devlink health diagnose pci/0000:82:00.0 reporter tx
 
-NOTE: This command has valid output only when interface is up, otherwise the command has empty output.
+.. note::
+   This command has valid output only when interface is up, otherwise the command has empty output.
 
 - Show number of tx errors indicated, number of recover flows ended successfully,
   is autorecover enabled and graceful period from last recover::
···
 
     $ devlink health dump show pci/0000:82:00.0 reporter fw
 
-NOTE: This command can run only on the PF which has fw tracer ownership,
-running it on other PF or any VF will return "Operation not permitted".
+.. note::
+   This command can run only on the PF which has fw tracer ownership,
+   running it on other PF or any VF will return "Operation not permitted".
 
 fw fatal reporter
 -----------------
···
 
     $ devlink health dump show pci/0000:82:00.1 reporter fw_fatal
 
-NOTE: This command can run only on PF.
+.. note::
+   This command can run only on PF.
 
 vnic reporter
 -------------
···
 them in realtime.
 
 Description of the vnic counters:
-total_q_under_processor_handle: number of queues in an error state due to
-an async error or errored command.
-send_queue_priority_update_flow: number of QP/SQ priority/SL update
-events.
-cq_overrun: number of times CQ entered an error state due to an
-overflow.
-async_eq_overrun: number of times an EQ mapped to async events was
-overrun.
-comp_eq_overrun: number of times an EQ mapped to completion events was
-overrun.
-quota_exceeded_command: number of commands issued and failed due to quota
-exceeded.
-invalid_command: number of commands issued and failed dues to any reason
-other than quota exceeded.
-nic_receive_steering_discard: number of packets that completed RX flow
-steering but were discarded due to a mismatch in flow table.
+
+- total_q_under_processor_handle
+        number of queues in an error state due to
+        an async error or errored command.
+- send_queue_priority_update_flow
+        number of QP/SQ priority/SL update events.
+- cq_overrun
+        number of times CQ entered an error state due to an overflow.
+- async_eq_overrun
+        number of times an EQ mapped to async events was overrun.
+- comp_eq_overrun
+        number of times an EQ mapped to completion events was
+        overrun.
+- quota_exceeded_command
+        number of commands issued and failed due to quota exceeded.
+- invalid_command
+        number of commands issued and failed due to any reason other than quota
+        exceeded.
+- nic_receive_steering_discard
+        number of packets that completed RX flow
+        steering but were discarded due to a mismatch in flow table.
 
 User commands examples:
-- Diagnose PF/VF vnic counters
+
+- Diagnose PF/VF vnic counters::
+
     $ devlink health diagnose pci/0000:82:00.1 reporter vnic
+
 - Diagnose representor vnic counters (performed by supplying devlink port of the
-  representor, which can be obtained via devlink port command)
+  representor, which can be obtained via devlink port command)::
+
     $ devlink health diagnose pci/0000:82:00.1/65537 reporter vnic
 
-NOTE: This command can run over all interfaces such as PF/VF and representor ports.
+.. note::
+   This command can run over all interfaces such as PF/VF and representor ports.
Documentation/trace/histogram.rst | +32 -32
···
   in place of an explicit value field - this is simply a count of
   event hits. If 'values' isn't specified, an implicit 'hitcount'
   value will be automatically created and used as the only value.
-  Keys can be any field, or the special string 'stacktrace', which
+  Keys can be any field, or the special string 'common_stacktrace', which
   will use the event's kernel stacktrace as the key. The keywords
   'keys' or 'key' can be used to specify keys, and the keywords
   'values', 'vals', or 'val' can be used to specify values. Compound
···
   'compatible' if the fields named in the trigger share the same
   number and type of fields and those fields also have the same names.
   Note that any two events always share the compatible 'hitcount' and
-  'stacktrace' fields and can therefore be combined using those
+  'common_stacktrace' fields and can therefore be combined using those
   fields, however pointless that may be.
 
   'hist' triggers add a 'hist' file to each event's subdirectory.
···
   the hist trigger display symbolic call_sites, we can have the hist
   trigger additionally display the complete set of kernel stack traces
   that led to each call_site. To do that, we simply use the special
-  value 'stacktrace' for the key parameter::
+  value 'common_stacktrace' for the key parameter::
 
-  # echo 'hist:keys=stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
+  # echo 'hist:keys=common_stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
         /sys/kernel/tracing/events/kmem/kmalloc/trigger
 
   The above trigger will use the kernel stack trace in effect when an
···
   every callpath to a kmalloc for a kernel compile)::
 
   # cat /sys/kernel/tracing/events/kmem/kmalloc/hist
-  # trigger info: hist:keys=stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]
+  # trigger info: hist:keys=common_stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]
 
-  { stacktrace:
+  { common_stacktrace:
          __kmalloc_track_caller+0x10b/0x1a0
          kmemdup+0x20/0x50
          hidraw_report_event+0x8a/0x120 [hid]
···
          cpu_startup_entry+0x315/0x3e0
          rest_init+0x7c/0x80
   } hitcount: 3 bytes_req: 21 bytes_alloc: 24
-  { stacktrace:
+  { common_stacktrace:
          __kmalloc_track_caller+0x10b/0x1a0
          kmemdup+0x20/0x50
          hidraw_report_event+0x8a/0x120 [hid]
···
          do_IRQ+0x5a/0xf0
          ret_from_intr+0x0/0x30
   } hitcount: 3 bytes_req: 21 bytes_alloc: 24
-  { stacktrace:
+  { common_stacktrace:
          kmem_cache_alloc_trace+0xeb/0x150
          aa_alloc_task_context+0x27/0x40
          apparmor_cred_prepare+0x1f/0x50
···
   .
   .
   .
-  { stacktrace:
+  { common_stacktrace:
          __kmalloc+0x11b/0x1b0
          i915_gem_execbuffer2+0x6c/0x2c0 [i915]
          drm_ioctl+0x349/0x670 [drm]
···
          SyS_ioctl+0x81/0xa0
          system_call_fastpath+0x12/0x6a
   } hitcount: 17726 bytes_req: 13944120 bytes_alloc: 19593808
-  { stacktrace:
+  { common_stacktrace:
          __kmalloc+0x11b/0x1b0
          load_elf_phdrs+0x76/0xa0
          load_elf_binary+0x102/0x1650
···
          SyS_execve+0x3a/0x50
          return_from_execve+0x0/0x23
   } hitcount: 33348 bytes_req: 17152128 bytes_alloc: 20226048
-  { stacktrace:
+  { common_stacktrace:
          kmem_cache_alloc_trace+0xeb/0x150
          apparmor_file_alloc_security+0x27/0x40
          security_file_alloc+0x16/0x20
···
          SyS_open+0x1e/0x20
          system_call_fastpath+0x12/0x6a
   } hitcount: 4766422 bytes_req: 9532844 bytes_alloc: 38131376
-  { stacktrace:
+  { common_stacktrace:
          __kmalloc+0x11b/0x1b0
          seq_buf_alloc+0x1b/0x50
          seq_read+0x2cc/0x370
···
   First we set up an initially paused stacktrace trigger on the
   netif_receive_skb event::
 
-  # echo 'hist:key=stacktrace:vals=len:pause' > \
+  # echo 'hist:key=common_stacktrace:vals=len:pause' > \
         /sys/kernel/tracing/events/net/netif_receive_skb/trigger
 
   Next, we set up an 'enable_hist' trigger on the sched_process_exec
···
   $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
 
   # cat /sys/kernel/tracing/events/net/netif_receive_skb/hist
-  # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
+  # trigger info: hist:keys=common_stacktrace:vals=len:sort=hitcount:size=2048 [paused]
 
-  { stacktrace:
+  { common_stacktrace:
          __netif_receive_skb_core+0x46d/0x990
          __netif_receive_skb+0x18/0x60
          netif_receive_skb_internal+0x23/0x90
···
          kthread+0xd2/0xf0
          ret_from_fork+0x42/0x70
   } hitcount: 85 len: 28884
-  { stacktrace:
+  { common_stacktrace:
          __netif_receive_skb_core+0x46d/0x990
          __netif_receive_skb+0x18/0x60
          netif_receive_skb_internal+0x23/0x90
···
          irq_thread+0x11f/0x150
          kthread+0xd2/0xf0
   } hitcount: 98 len: 664329
-  { stacktrace:
+  { common_stacktrace:
          __netif_receive_skb_core+0x46d/0x990
          __netif_receive_skb+0x18/0x60
          process_backlog+0xa8/0x150
···
          inet_sendmsg+0x64/0xa0
          sock_sendmsg+0x3d/0x50
   } hitcount: 115 len: 13030
-  { stacktrace:
+  { common_stacktrace:
          __netif_receive_skb_core+0x46d/0x990
          __netif_receive_skb+0x18/0x60
          netif_receive_skb_internal+0x23/0x90
···
   into the histogram. In order to avoid having to set everything up
   again, we can just clear the histogram first::
 
-  # echo 'hist:key=stacktrace:vals=len:clear' >> \
+  # echo 'hist:key=common_stacktrace:vals=len:clear' >> \
         /sys/kernel/tracing/events/net/netif_receive_skb/trigger
 
   Just to verify that it is in fact cleared, here's what we now see in
   the hist file::
 
   # cat /sys/kernel/tracing/events/net/netif_receive_skb/hist
-  # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
+  # trigger info: hist:keys=common_stacktrace:vals=len:sort=hitcount:size=2048 [paused]
 
   Totals:
       Hits: 0
···
 
   And here's an example that shows how to combine histogram data from
   any two events even if they don't share any 'compatible' fields
-  other than 'hitcount' and 'stacktrace'. These commands create a
+  other than 'hitcount' and 'common_stacktrace'. These commands create a
   couple of triggers named 'bar' using those fields::
 
-  # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
+  # echo 'hist:name=bar:key=common_stacktrace:val=hitcount' > \
         /sys/kernel/tracing/events/sched/sched_process_fork/trigger
-  # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
+  # echo 'hist:name=bar:key=common_stacktrace:val=hitcount' > \
         /sys/kernel/tracing/events/net/netif_rx/trigger
 
   And displaying the output of either shows some interesting if
···
 
   # event histogram
   #
-  # trigger info: hist:name=bar:keys=stacktrace:vals=hitcount:sort=hitcount:size=2048 [active]
+  # trigger info: hist:name=bar:keys=common_stacktrace:vals=hitcount:sort=hitcount:size=2048 [active]
   #
 
-  { stacktrace:
+  { common_stacktrace:
          kernel_clone+0x18e/0x330
          kernel_thread+0x29/0x30
          kthreadd+0x154/0x1b0
          ret_from_fork+0x3f/0x70
   } hitcount: 1
-  { stacktrace:
+  { common_stacktrace:
          netif_rx_internal+0xb2/0xd0
          netif_rx_ni+0x20/0x70
          dev_loopback_xmit+0xaa/0xd0
···
          call_cpuidle+0x3b/0x60
          cpu_startup_entry+0x22d/0x310
   } hitcount: 1
-  { stacktrace:
+  { common_stacktrace:
          netif_rx_internal+0xb2/0xd0
          netif_rx_ni+0x20/0x70
          dev_loopback_xmit+0xaa/0xd0
···
          SyS_sendto+0xe/0x10
          entry_SYSCALL_64_fastpath+0x12/0x6a
   } hitcount: 2
-  { stacktrace:
+  { common_stacktrace:
          netif_rx_internal+0xb2/0xd0
          netif_rx+0x1c/0x60
          loopback_xmit+0x6c/0xb0
···
          sock_sendmsg+0x38/0x50
          ___sys_sendmsg+0x14e/0x270
   } hitcount: 76
-  { stacktrace:
+  { common_stacktrace:
          netif_rx_internal+0xb2/0xd0
          netif_rx+0x1c/0x60
          loopback_xmit+0x6c/0xb0
···
          sock_sendmsg+0x38/0x50
          ___sys_sendmsg+0x269/0x270
   } hitcount: 77
-  { stacktrace:
+  { common_stacktrace:
          netif_rx_internal+0xb2/0xd0
          netif_rx+0x1c/0x60
          loopback_xmit+0x6c/0xb0
···
          sock_sendmsg+0x38/0x50
          SYSC_sendto+0xef/0x170
   } hitcount: 88
-  { stacktrace:
+  { common_stacktrace:
          kernel_clone+0x18e/0x330
          SyS_clone+0x19/0x20
          entry_SYSCALL_64_fastpath+0x12/0x6a
···
 
   # cd /sys/kernel/tracing
   # echo 's:block_lat pid_t pid; u64 delta; unsigned long[] stack;' > dynamic_events
-  # echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=stacktrace if prev_state == 2' >> events/sched/sched_switch/trigger
+  # echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=common_stacktrace if prev_state == 2' >> events/sched/sched_switch/trigger
   # echo 'hist:keys=prev_pid:delta=common_timestamp.usecs-$ts,s=$st:onmax($delta).trace(block_lat,prev_pid,$delta,$s)' >> events/sched/sched_switch/trigger
   # echo 1 > events/synthetic/block_lat/enable
   # cat trace
MAINTAINERS | +9 -8
···
 F:	drivers/net/ethernet/amazon/
 
 AMAZON RDMA EFA DRIVER
-M:	Gal Pressman <galpress@amazon.com>
+M:	Michael Margolin <mrgolin@amazon.com>
+R:	Gal Pressman <gal.pressman@linux.dev>
 R:	Yossi Leybovich <sleybo@amazon.com>
 L:	linux-rdma@vger.kernel.org
 S:	Supported
···
 
 ARASAN NAND CONTROLLER DRIVER
 M:	Miquel Raynal <miquel.raynal@bootlin.com>
-M:	Naga Sureshkumar Relli <nagasure@xilinx.com>
+R:	Michal Simek <michal.simek@amd.com>
 L:	linux-mtd@lists.infradead.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/mtd/arasan,nand-controller.yaml
···
 
 ARM PRIMECELL PL35X NAND CONTROLLER DRIVER
 M:	Miquel Raynal <miquel.raynal@bootlin.com>
-M:	Naga Sureshkumar Relli <nagasure@xilinx.com>
+R:	Michal Simek <michal.simek@amd.com>
 L:	linux-mtd@lists.infradead.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/mtd/arm,pl353-nand-r2p1.yaml
···
 
 ARM PRIMECELL PL35X SMC DRIVER
 M:	Miquel Raynal <miquel.raynal@bootlin.com>
-M:	Naga Sureshkumar Relli <nagasure@xilinx.com>
+R:	Michal Simek <michal.simek@amd.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	Documentation/devicetree/bindings/memory-controllers/arm,pl35x-smc.yaml
···
 
 COMMON INTERNET FILE SYSTEM CLIENT (CIFS and SMB3)
 M:	Steve French <sfrench@samba.org>
-R:	Paulo Alcantara <pc@cjr.nz> (DFS, global name space)
+R:	Paulo Alcantara <pc@manguebit.com> (DFS, global name space)
 R:	Ronnie Sahlberg <lsahlber@redhat.com> (directory leases, sparse files)
 R:	Shyam Prasad N <sprasad@microsoft.com> (multichannel)
 R:	Tom Talpey <tom@talpey.com> (RDMA, smbdirect)
···
 
 HISILICON ROCE DRIVER
 M:	Haoyue Xu <xuhaoyue1@hisilicon.com>
-M:	Wenpeng Liang <liangwenpeng@huawei.com>
+M:	Junxian Huang <huangjunxian6@hisilicon.com>
 L:	linux-rdma@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/infiniband/hisilicon-hns-roce.txt
···
 F:	Documentation/process/kernel-docs.rst
 
 INDUSTRY PACK SUBSYSTEM (IPACK)
-M:	Samuel Iglesias Gonsalvez <siglesias@igalia.com>
+M:	Vaibhav Gupta <vaibhavgupta40@gmail.com>
 M:	Jens Taprogge <jens.taprogge@taprogge.org>
 M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 L:	industrypack-devel@lists.sourceforge.net
···
 
 MICROCHIP POLARFIRE FPGA DRIVERS
 M:	Conor Dooley <conor.dooley@microchip.com>
-R:	Ivan Bornyakov <i.bornyakov@metrotek.ru>
+R:	Vladimir Georgiev <v.georgiev@metrotek.ru>
 L:	linux-fpga@vger.kernel.org
 S:	Supported
 F:	Documentation/devicetree/bindings/fpga/microchip,mpf-spi-fpga-mgr.yaml
···
  *
  * The walker will walk the page-table entries corresponding to the input
  * address range specified, visiting entries according to the walker flags.
- * Invalid entries are treated as leaf entries. Leaf entries are reloaded
- * after invoking the walker callback, allowing the walker to descend into
- * a newly installed table.
+ * Invalid entries are treated as leaf entries. The visited page table entry is
+ * reloaded after invoking the walker callback, allowing the walker to descend
+ * into a newly installed table.
  *
  * Returning a negative error code from the walker callback function will
  * terminate the walk immediately with the same error code.
···
 
 static struct arm_pmu *kvm_pmu_probe_armpmu(void)
 {
-        struct perf_event_attr attr = { };
-        struct perf_event *event;
-        struct arm_pmu *pmu = NULL;
+        struct arm_pmu *tmp, *pmu = NULL;
+        struct arm_pmu_entry *entry;
+        int cpu;
 
-        /*
-         * Create a dummy event that only counts user cycles. As we'll never
-         * leave this function with the event being live, it will never
-         * count anything. But it allows us to probe some of the PMU
-         * details. Yes, this is terrible.
-         */
-        attr.type = PERF_TYPE_RAW;
-        attr.size = sizeof(attr);
-        attr.pinned = 1;
-        attr.disabled = 0;
-        attr.exclude_user = 0;
-        attr.exclude_kernel = 1;
-        attr.exclude_hv = 1;
-        attr.exclude_host = 1;
-        attr.config = ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
-        attr.sample_period = GENMASK(63, 0);
+        mutex_lock(&arm_pmus_lock);
 
-        event = perf_event_create_kernel_counter(&attr, -1, current,
-                                                 kvm_pmu_perf_overflow, &attr);
+        cpu = smp_processor_id();
+        list_for_each_entry(entry, &arm_pmus, entry) {
+                tmp = entry->arm_pmu;
 
-        if (IS_ERR(event)) {
-                pr_err_once("kvm: pmu event creation failed %ld\n",
-                            PTR_ERR(event));
-                return NULL;
+                if (cpumask_test_cpu(cpu, &tmp->supported_cpus)) {
+                        pmu = tmp;
+                        break;
+                }
         }
 
-        if (event->pmu) {
-                pmu = to_arm_pmu(event->pmu);
-                if (pmu->pmuver == ID_AA64DFR0_EL1_PMUVer_NI ||
-                    pmu->pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF)
-                        pmu = NULL;
-        }
-
-        perf_event_disable(event);
-        perf_event_release_kernel(event);
+        mutex_unlock(&arm_pmus_lock);
 
         return pmu;
 }
···
                 return -EBUSY;
 
         if (!kvm->arch.arm_pmu) {
-                /* No PMU set, get the default one */
+                /*
+                 * No PMU set, get the default one.
+                 *
+                 * The observant among you will notice that the supported_cpus
+                 * mask does not get updated for the default PMU even though it
+                 * is quite possible the selected instance supports only a
+                 * subset of cores in the system. This is intentional, and
+                 * upholds the preexisting behavior on heterogeneous systems
+                 * where vCPUs can be scheduled on any core but the guest
+                 * counters could stop working.
+                 */
                 kvm->arch.arm_pmu = kvm_pmu_probe_armpmu();
                 if (!kvm->arch.arm_pmu)
                         return -ENODEV;
···
          * KVM io device for the redistributor that belongs to this VCPU.
          */
         if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
-                mutex_lock(&vcpu->kvm->arch.config_lock);
+                mutex_lock(&vcpu->kvm->slots_lock);
                 ret = vgic_register_redist_iodev(vcpu);
-                mutex_unlock(&vcpu->kvm->arch.config_lock);
+                mutex_unlock(&vcpu->kvm->slots_lock);
         }
         return ret;
 }
···
 
 /**
  * vgic_lazy_init: Lazy init is only allowed if the GIC exposed to the guest
- * is a GICv2. A GICv3 must be explicitly initialized by the guest using the
+ * is a GICv2. A GICv3 must be explicitly initialized by userspace using the
  * KVM_DEV_ARM_VGIC_GRP_CTRL KVM_DEVICE group.
  * @kvm: kvm struct pointer
  */
···
 int kvm_vgic_map_resources(struct kvm *kvm)
 {
         struct vgic_dist *dist = &kvm->arch.vgic;
+        gpa_t dist_base;
         int ret = 0;
 
         if (likely(vgic_ready(kvm)))
                 return 0;
 
+        mutex_lock(&kvm->slots_lock);
         mutex_lock(&kvm->arch.config_lock);
         if (vgic_ready(kvm))
                 goto out;
···
         else
                 ret = vgic_v3_map_resources(kvm);
 
-        if (ret)
+        if (ret) {
                 __kvm_vgic_destroy(kvm);
-        else
-                dist->ready = true;
+                goto out;
+        }
+        dist->ready = true;
+        dist_base = dist->vgic_dist_base;
+        mutex_unlock(&kvm->arch.config_lock);
+
+        ret = vgic_register_dist_iodev(kvm, dist_base,
+                                       kvm_vgic_global_state.type);
+        if (ret) {
+                kvm_err("Unable to register VGIC dist MMIO regions\n");
+                kvm_vgic_destroy(kvm);
+        }
+        mutex_unlock(&kvm->slots_lock);
+        return ret;
 
 out:
         mutex_unlock(&kvm->arch.config_lock);
+        mutex_unlock(&kvm->slots_lock);
         return ret;
 }
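The locking shape this vgic series converges on (outer `slots_lock`, inner `config_lock`, and MMIO registration done with only the outer lock held, since registration may sleep) can be sketched with a toy order-checker in plain C. All names below are illustrative stand-ins, not the kernel's primitives.

```c
#include <assert.h>

/* Toy lock-order tracker: slots (outer) before config (inner). */
static int slots_held, config_held, violations;

static void lock_slots(void)   { if (config_held) violations++; slots_held = 1; }
static void unlock_slots(void) { slots_held = 0; }
static void lock_config(void)  { config_held = 1; }
static void unlock_config(void){ config_held = 0; }

/* Stand-in for iodev registration: may sleep, so the inner lock must
 * not be held, but the outer lock must be. */
static int register_iodev(void)
{
    if (config_held)
        violations++;
    return slots_held ? 0 : -1;
}

/* Mirrors the patched kvm_vgic_map_resources() flow: mutate state under
 * both locks, drop the inner lock, then register under the outer lock. */
static int map_resources(void)
{
    int ret;

    lock_slots();
    lock_config();
    /* ...mark ready, snapshot dist_base... */
    unlock_config();
    ret = register_iodev();
    unlock_slots();
    return ret;
}
```

Taking `config_lock` while another path holds it across a registration is exactly the inversion the series removes.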
arch/arm64/kvm/vgic/vgic-its.c | +10 -4
···
 
 static int vgic_its_create(struct kvm_device *dev, u32 type)
 {
+        int ret;
         struct vgic_its *its;
 
         if (type != KVM_DEV_TYPE_ARM_VGIC_ITS)
···
         if (!its)
                 return -ENOMEM;
 
+        mutex_lock(&dev->kvm->arch.config_lock);
+
         if (vgic_initialized(dev->kvm)) {
-                int ret = vgic_v4_init(dev->kvm);
+                ret = vgic_v4_init(dev->kvm);
                 if (ret < 0) {
+                        mutex_unlock(&dev->kvm->arch.config_lock);
                         kfree(its);
                         return ret;
                 }
···
 
         /* Yep, even more trickery for lock ordering... */
 #ifdef CONFIG_LOCKDEP
-        mutex_lock(&dev->kvm->arch.config_lock);
         mutex_lock(&its->cmd_lock);
         mutex_lock(&its->its_lock);
         mutex_unlock(&its->its_lock);
         mutex_unlock(&its->cmd_lock);
-        mutex_unlock(&dev->kvm->arch.config_lock);
 #endif
 
         its->vgic_its_base = VGIC_ADDR_UNDEF;
···
 
         dev->private = its;
 
-        return vgic_its_set_abi(its, NR_ITS_ABIS - 1);
+        ret = vgic_its_set_abi(its, NR_ITS_ABIS - 1);
+
+        mutex_unlock(&dev->kvm->arch.config_lock);
+
+        return ret;
 }
 
 static void vgic_its_destroy(struct kvm_device *kvm_dev)
arch/arm64/kvm/vgic/vgic-kvm-device.c | +8 -2
···
         if (get_user(addr, uaddr))
                 return -EFAULT;
 
-        mutex_lock(&kvm->arch.config_lock);
+        /*
+         * Since we can't hold config_lock while registering the redistributor
+         * iodevs, take the slots_lock immediately.
+         */
+        mutex_lock(&kvm->slots_lock);
         switch (attr->attr) {
         case KVM_VGIC_V2_ADDR_TYPE_DIST:
                 r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V2);
···
         if (r)
                 goto out;
 
+        mutex_lock(&kvm->arch.config_lock);
         if (write) {
                 r = vgic_check_iorange(kvm, *addr_ptr, addr, alignment, size);
                 if (!r)
···
         } else {
                 addr = *addr_ptr;
         }
+        mutex_unlock(&kvm->arch.config_lock);
 
 out:
-        mutex_unlock(&kvm->arch.config_lock);
+        mutex_unlock(&kvm->slots_lock);
 
         if (!r && !write)
                 r = put_user(addr, uaddr);
arch/arm64/kvm/vgic/vgic-mmio-v3.c | +21 -10
···
         struct vgic_io_device *rd_dev = &vcpu->arch.vgic_cpu.rd_iodev;
         struct vgic_redist_region *rdreg;
         gpa_t rd_base;
-        int ret;
+        int ret = 0;
+
+        lockdep_assert_held(&kvm->slots_lock);
+        mutex_lock(&kvm->arch.config_lock);
 
         if (!IS_VGIC_ADDR_UNDEF(vgic_cpu->rd_iodev.base_addr))
-                return 0;
+                goto out_unlock;
 
         /*
          * We may be creating VCPUs before having set the base address for the
···
          */
         rdreg = vgic_v3_rdist_free_slot(&vgic->rd_regions);
         if (!rdreg)
-                return 0;
+                goto out_unlock;
 
-        if (!vgic_v3_check_base(kvm))
-                return -EINVAL;
+        if (!vgic_v3_check_base(kvm)) {
+                ret = -EINVAL;
+                goto out_unlock;
+        }
 
         vgic_cpu->rdreg = rdreg;
         vgic_cpu->rdreg_index = rdreg->free_index;
···
         rd_dev->nr_regions = ARRAY_SIZE(vgic_v3_rd_registers);
         rd_dev->redist_vcpu = vcpu;
 
-        mutex_lock(&kvm->slots_lock);
+        mutex_unlock(&kvm->arch.config_lock);
+
         ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, rd_base,
                                       2 * SZ_64K, &rd_dev->dev);
-        mutex_unlock(&kvm->slots_lock);
-
         if (ret)
                 return ret;
 
+        /* Protected by slots_lock */
         rdreg->free_index++;
         return 0;
+
+out_unlock:
+        mutex_unlock(&kvm->arch.config_lock);
+        return ret;
 }
 
 static void vgic_unregister_redist_iodev(struct kvm_vcpu *vcpu)
···
                 /* The current c failed, so iterate over the previous ones. */
                 int i;
 
-                mutex_lock(&kvm->slots_lock);
                 for (i = 0; i < c; i++) {
                         vcpu = kvm_get_vcpu(kvm, i);
                         vgic_unregister_redist_iodev(vcpu);
                 }
-                mutex_unlock(&kvm->slots_lock);
         }
 
         return ret;
···
 {
         int ret;
 
+        mutex_lock(&kvm->arch.config_lock);
         ret = vgic_v3_alloc_redist_region(kvm, index, addr, count);
+        mutex_unlock(&kvm->arch.config_lock);
         if (ret)
                 return ret;
 
···
         if (ret) {
                 struct vgic_redist_region *rdreg;
 
+                mutex_lock(&kvm->arch.config_lock);
                 rdreg = vgic_v3_rdist_region_from_index(kvm, index);
                 vgic_v3_free_redist_region(rdreg);
+                mutex_unlock(&kvm->arch.config_lock);
                 return ret;
         }
 
···
                 return ret;
         }
 
-        ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V2);
-        if (ret) {
-                kvm_err("Unable to register VGIC MMIO regions\n");
-                return ret;
-        }
-
         if (!static_branch_unlikely(&vgic_v2_cpuif_trap)) {
                 ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
                                             kvm_vgic_global_state.vcpu_base,
arch/arm64/kvm/vgic/vgic-v3.c | -7
···
 {
         struct vgic_dist *dist = &kvm->arch.vgic;
         struct kvm_vcpu *vcpu;
-        int ret = 0;
         unsigned long c;
 
         kvm_for_each_vcpu(c, vcpu, kvm) {
···
          */
         if (!vgic_initialized(kvm)) {
                 return -EBUSY;
-        }
-
-        ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V3);
-        if (ret) {
-                kvm_err("Unable to register VGICv3 dist MMIO regions\n");
-                return ret;
         }
 
         if (kvm_vgic_global_state.has_gicv4_1)
arch/arm64/kvm/vgic/vgic-v4.c | +2 -1
···
         }
 }
 
-/* Must be called with the kvm lock held */
 void vgic_v4_configure_vsgis(struct kvm *kvm)
 {
         struct vgic_dist *dist = &kvm->arch.vgic;
         struct kvm_vcpu *vcpu;
         unsigned long i;
+
+        lockdep_assert_held(&kvm->arch.config_lock);
 
         kvm_arm_halt_guest(kvm);
 
···
 static void tce_freemulti_pSeriesLP(struct iommu_table *tbl, long tcenum, long npages)
 {
         u64 rc;
+        long rpages = npages;
+        unsigned long limit;
 
         if (!firmware_has_feature(FW_FEATURE_STUFF_TCE))
                 return tce_free_pSeriesLP(tbl->it_index, tcenum,
                                           tbl->it_page_shift, npages);
 
-        rc = plpar_tce_stuff((u64)tbl->it_index,
-                             (u64)tcenum << tbl->it_page_shift, 0, npages);
+        do {
+                limit = min_t(unsigned long, rpages, 512);
+
+                rc = plpar_tce_stuff((u64)tbl->it_index,
+                                     (u64)tcenum << tbl->it_page_shift, 0, limit);
+
+                rpages -= limit;
+                tcenum += limit;
+        } while (rpages > 0 && !rc);
 
         if (rc && printk_ratelimit()) {
                 printk("tce_freemulti_pSeriesLP: plpar_tce_stuff failed\n");
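The batching loop added above caps each hypervisor call at 512 TCEs and advances the start index between batches, a pattern that is easy to get wrong at the boundaries. It can be exercised in plain userspace C; `stuff_tces` below is a hypothetical stand-in for `plpar_tce_stuff()` that simply records what it was asked to do.

```c
#include <assert.h>

/* Hypothetical stand-in for the hcall: count invocations and pages. */
static long calls;
static long freed;

static long stuff_tces(long n)
{
    calls++;
    freed += n;
    return 0; /* success */
}

/* The chunked-free pattern from the patch: never pass more than 512
 * entries per call, stop early if a call fails. */
static long free_in_batches(long npages)
{
    long rpages = npages, limit, rc = 0;

    do {
        limit = rpages < 512 ? rpages : 512;
        rc = stuff_tces(limit);
        rpages -= limit;
    } while (rpages > 0 && !rc);

    return rc;
}
```

For 1300 pages this issues three calls (512 + 512 + 276), matching the kernel loop's behavior of clamping with `min_t(unsigned long, rpages, 512)`.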
arch/powerpc/xmon/xmon.c | +1 -1
···
 static unsigned long nidump = 16;
 static unsigned long ncsum = 4096;
 static int termch;
-static char tmpstr[128];
+static char tmpstr[KSYM_NAME_LEN];
 static int tracing_enabled;
 
 static long bus_error_jmp[JMP_BUF_LEN];
arch/riscv/Kconfig | +4 -1
···
 
 source "kernel/power/Kconfig"
 
+# Hibernation is only possible on systems where the SBI implementation has
+# marked its reserved memory as not accessible from, or does not run
+# from the same memory as, Linux
 config ARCH_HIBERNATION_POSSIBLE
-	def_bool y
+	def_bool NONPORTABLE
 
 config ARCH_HIBERNATION_HEADER
 	def_bool HIBERNATION
···
         u32 physical_id;
 
         /*
+         * For simplicity, KVM always allocates enough space for all possible
+         * xAPIC IDs. Yell, but don't kill the VM, as KVM can continue on
+         * without the optimized map.
+         */
+        if (WARN_ON_ONCE(xapic_id > new->max_apic_id))
+                return -EINVAL;
+
+        /*
+         * Bail if a vCPU was added and/or enabled its APIC between allocating
+         * the map and doing the actual calculations for the map. Note, KVM
+         * hardcodes the x2APIC ID to vcpu_id, i.e. there's no TOCTOU bug if
+         * the compiler decides to reload x2apic_id after this check.
+         */
+        if (x2apic_id > new->max_apic_id)
+                return -E2BIG;
+
+        /*
          * Deliberately truncate the vCPU ID when detecting a mismatched APIC
          * ID to avoid false positives if the vCPU ID, i.e. x2APIC ID, is a
          * 32-bit value. Any unwanted aliasing due to truncation results will
···
          */
         if (vcpu->kvm->arch.x2apic_format) {
                 /* See also kvm_apic_match_physical_addr(). */
-                if ((apic_x2apic_mode(apic) || x2apic_id > 0xff) &&
-                    x2apic_id <= new->max_apic_id)
+                if (apic_x2apic_mode(apic) || x2apic_id > 0xff)
                         new->phys_map[x2apic_id] = apic;
 
                 if (!apic_x2apic_mode(apic) && !new->phys_map[xapic_id])
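The point of hoisting the `max_apic_id` checks above any `phys_map[]` access is that an ID beyond the allocated map must be rejected before it can be used as an index. A userspace sketch of that bounds discipline follows; `struct map`, `map_insert`, and the sizes are illustrative, not KVM's layout.

```c
#include <assert.h>
#include <string.h>

#define MAX_APIC_ID 7

/* Toy optimized-map: phys_map is sized for max_apic_id + 1 entries. */
struct map {
    unsigned int max_apic_id;
    int phys_map[MAX_APIC_ID + 1];
};

/* Reject out-of-range ids up front, as the patch does, so a racing
 * hotplug (an id assigned after the map was allocated) can never
 * index past the end of phys_map. */
static int map_insert(struct map *m, unsigned int x2apic_id, int vcpu)
{
    if (x2apic_id > m->max_apic_id)
        return -1; /* kernel returns -E2BIG and retries with a bigger map */
    m->phys_map[x2apic_id] = vcpu;
    return 0;
}
```

The pre-patch code only applied the bound inside a compound condition, which left the unchecked index reachable on other paths; checking once, early, removes that class of bug.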
drivers/ata/libata-scsi.c
@@ -2694,17 +2694,35 @@
 	return 0;
 }
 
-static struct ata_device *ata_find_dev(struct ata_port *ap, int devno)
+static struct ata_device *ata_find_dev(struct ata_port *ap, unsigned int devno)
 {
-	if (!sata_pmp_attached(ap)) {
-		if (likely(devno >= 0 &&
-			   devno < ata_link_max_devices(&ap->link)))
+	/*
+	 * For the non-PMP case, ata_link_max_devices() returns 1 (SATA case),
+	 * or 2 (IDE master + slave case). However, the former case includes
+	 * libsas hosted devices which are numbered per scsi host, leading
+	 * to devno potentially being larger than 0 but with each struct
+	 * ata_device having its own struct ata_port and struct ata_link.
+	 * To accommodate these, ignore devno and always use device number 0.
+	 */
+	if (likely(!sata_pmp_attached(ap))) {
+		int link_max_devices = ata_link_max_devices(&ap->link);
+
+		if (link_max_devices == 1)
+			return &ap->link.device[0];
+
+		if (devno < link_max_devices)
 			return &ap->link.device[devno];
-	} else {
-		if (likely(devno >= 0 &&
-			   devno < ap->nr_pmp_links))
-			return &ap->pmp_link[devno].device[0];
+
+		return NULL;
 	}
+
+	/*
+	 * For PMP-attached devices, the device number corresponds to C
+	 * (channel) of SCSI [H:C:I:L], indicating the port pmp link
+	 * for the device.
+	 */
+	if (devno < ap->nr_pmp_links)
+		return &ap->pmp_link[devno].device[0];
 
 	return NULL;
 }
+26
drivers/base/cacheinfo.c
@@ -388,6 +388,16 @@
 			continue;/* skip if itself or no cacheinfo */
 		for (sib_index = 0; sib_index < cache_leaves(i); sib_index++) {
 			sib_leaf = per_cpu_cacheinfo_idx(i, sib_index);
+
+			/*
+			 * Comparing cache IDs only makes sense if the leaves
+			 * belong to the same cache level of same type. Skip
+			 * the check if level and type do not match.
+			 */
+			if (sib_leaf->level != this_leaf->level ||
+			    sib_leaf->type != this_leaf->type)
+				continue;
+
 			if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
 				cpumask_set_cpu(cpu, &sib_leaf->shared_cpu_map);
 				cpumask_set_cpu(i, &this_leaf->shared_cpu_map);
@@ -400,10 +410,13 @@
 			coherency_max_size = this_leaf->coherency_line_size;
 	}
 
+	/* shared_cpu_map is now populated for the cpu */
+	this_cpu_ci->cpu_map_populated = true;
 	return 0;
 }
 
 static void cache_shared_cpu_map_remove(unsigned int cpu)
 {
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
 	struct cacheinfo *this_leaf, *sib_leaf;
 	unsigned int sibling, index, sib_index;
@@ -419,5 +432,15 @@
 		for (sib_index = 0; sib_index < cache_leaves(sibling); sib_index++) {
 			sib_leaf = per_cpu_cacheinfo_idx(sibling, sib_index);
+
+			/*
+			 * Comparing cache IDs only makes sense if the leaves
+			 * belong to the same cache level of same type. Skip
+			 * the check if level and type do not match.
+			 */
+			if (sib_leaf->level != this_leaf->level ||
+			    sib_leaf->type != this_leaf->type)
+				continue;
+
 			if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
 				cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map);
 				cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map);
@@ -427,6 +450,9 @@
 			}
 		}
 	}
+
+	/* cpu is no longer populated in the shared map */
+	this_cpu_ci->cpu_map_populated = false;
 }
 
 static void free_cache_attributes(unsigned int cpu)
+1-1
drivers/base/firmware_loader/main.c
@@ -812,7 +812,7 @@
 	char *outbuf;
 
 	alg = crypto_alloc_shash("sha256", 0, 0);
-	if (!alg)
+	if (IS_ERR(alg))
 		return;
 
 	sha256buf = kmalloc(SHA256_DIGEST_SIZE, GFP_KERNEL);
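The one-character fix above matters because crypto_alloc_shash(), like much of the kernel crypto API, reports failure through an ERR_PTR-encoded pointer rather than NULL, so `if (!alg)` can never observe an error. Below is a minimal userspace sketch of that convention (a simplified re-implementation for illustration; `fake_alloc_fail()` is a hypothetical stand-in, not a kernel function):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Simplified userspace re-implementation of the kernel's ERR_PTR scheme:
 * small negative errno values are encoded in the top page of the address
 * space, so a failed "allocation" returns a non-NULL pointer that only
 * IS_ERR() recognizes as an error.
 */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical stand-in for crypto_alloc_shash() failing with -ENOMEM (12). */
static void *fake_alloc_fail(void)
{
	return ERR_PTR(-12);
}
```

With this encoding, the failed return value is non-NULL, which is exactly why the original `!alg` check silently passed a poisoned pointer onward.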
+10-3
drivers/base/regmap/Kconfig
@@ -4,16 +4,23 @@
 # subsystems should select the appropriate symbols.
 
 config REGMAP
+	bool "Register Map support" if KUNIT_ALL_TESTS
 	default y if (REGMAP_I2C || REGMAP_SPI || REGMAP_SPMI || REGMAP_W1 || REGMAP_AC97 || REGMAP_MMIO || REGMAP_IRQ || REGMAP_SOUNDWIRE || REGMAP_SOUNDWIRE_MBQ || REGMAP_SCCB || REGMAP_I3C || REGMAP_SPI_AVMM || REGMAP_MDIO || REGMAP_FSI)
 	select IRQ_DOMAIN if REGMAP_IRQ
 	select MDIO_BUS if REGMAP_MDIO
-	bool
+	help
+	  Enable support for the Register Map (regmap) access API.
+
+	  Usually, this option is automatically selected when needed.
+	  However, you may want to enable it manually for running the regmap
+	  KUnit tests.
+
+	  If unsure, say N.
 
 config REGMAP_KUNIT
 	tristate "KUnit tests for regmap"
-	depends on KUNIT
+	depends on KUNIT && REGMAP
 	default KUNIT_ALL_TESTS
-	select REGMAP
 	select REGMAP_RAM
 
 config REGMAP_AC97
+4-1
drivers/base/regmap/regcache-maple.c
@@ -203,15 +203,18 @@
 
 	mas_for_each(&mas, entry, max) {
 		for (r = max(mas.index, lmin); r <= min(mas.last, lmax); r++) {
+			mas_pause(&mas);
+			rcu_read_unlock();
 			ret = regcache_sync_val(map, r, entry[r - mas.index]);
 			if (ret != 0)
 				goto out;
+			rcu_read_lock();
 		}
 	}
 
-out:
 	rcu_read_unlock();
 
+out:
 	map->cache_bypass = false;
 
 	return ret;
+4
drivers/base/regmap/regmap-sdw.c
@@ -59,6 +59,10 @@
 	if (config->pad_bits != 0)
 		return -ENOTSUPP;
 
+	/* Only bulk writes are supported not multi-register writes */
+	if (config->can_multi_write)
+		return -ENOTSUPP;
+
 	return 0;
 }
drivers/dma/at_xdmac.c
@@ -1102,6 +1102,8 @@
 							NULL,
 							src_addr, dst_addr,
 							xt, xt->sgl);
+		if (!first)
+			return NULL;
 
 		/* Length of the block is (BLEN+1) microblocks. */
 		for (i = 0; i < xt->numf - 1; i++)
@@ -1132,7 +1134,8 @@
 							       src_addr, dst_addr,
 							       xt, chunk);
 			if (!desc) {
-				list_splice_tail_init(&first->descs_list,
-						      &atchan->free_descs_list);
+				if (first)
+					list_splice_tail_init(&first->descs_list,
+							      &atchan->free_descs_list);
 				return NULL;
 			}
-1
drivers/dma/idxd/cdev.c
@@ -277,7 +277,6 @@
 	if (wq_dedicated(wq)) {
 		rc = idxd_wq_set_pasid(wq, pasid);
 		if (rc < 0) {
-			iommu_sva_unbind_device(sva);
 			dev_err(dev, "wq set pasid failed: %d\n", rc);
 			goto failed_set_pasid;
 		}
+4-4
drivers/dma/pl330.c
@@ -1050,7 +1050,7 @@
 	return true;
 }
 
-static bool _start(struct pl330_thread *thrd)
+static bool pl330_start_thread(struct pl330_thread *thrd)
 {
 	switch (_state(thrd)) {
 	case PL330_STATE_FAULT_COMPLETING:
@@ -1702,7 +1702,7 @@
 	thrd->req_running = -1;
 
 	/* Get going again ASAP */
-	_start(thrd);
+	pl330_start_thread(thrd);
 
 	/* For now, just make a list of callbacks to be done */
 	list_add_tail(&descdone->rqd, &pl330->req_done);
@@ -2089,7 +2089,7 @@
 	} else {
 		/* Make sure the PL330 Channel thread is active */
 		spin_lock(&pch->thread->dmac->lock);
-		_start(pch->thread);
+		pl330_start_thread(pch->thread);
 		spin_unlock(&pch->thread->dmac->lock);
 	}
 
@@ -2107,7 +2107,7 @@
 	if (power_down) {
 		pch->active = true;
 		spin_lock(&pch->thread->dmac->lock);
-		_start(pch->thread);
+		pl330_start_thread(pch->thread);
 		spin_unlock(&pch->thread->dmac->lock);
 		power_down = false;
 	}
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
@@ -983,36 +983,13 @@
 }
 
 void dcn30_prepare_bandwidth(struct dc *dc,
-	struct dc_state *context)
+			     struct dc_state *context)
 {
-	bool p_state_change_support = context->bw_ctx.bw.dcn.clk.p_state_change_support;
-	/* Any transition into an FPO config should disable MCLK switching first to avoid
-	 * driver and FW P-State synchronization issues.
-	 */
-	if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching || dc->clk_mgr->clks.fw_based_mclk_switching) {
-		dc->optimized_required = true;
-		context->bw_ctx.bw.dcn.clk.p_state_change_support = false;
-	}
-
 	if (dc->clk_mgr->dc_mode_softmax_enabled)
 		if (dc->clk_mgr->clks.dramclk_khz <= dc->clk_mgr->bw_params->dc_mode_softmax_memclk * 1000 &&
 				context->bw_ctx.bw.dcn.clk.dramclk_khz > dc->clk_mgr->bw_params->dc_mode_softmax_memclk * 1000)
 			dc->clk_mgr->funcs->set_max_memclk(dc->clk_mgr, dc->clk_mgr->bw_params->clk_table.entries[dc->clk_mgr->bw_params->clk_table.num_entries - 1].memclk_mhz);
 
 	dcn20_prepare_bandwidth(dc, context);
-	/*
-	 * enabled -> enabled: do not disable
-	 * enabled -> disabled: disable
-	 * disabled -> enabled: don't care
-	 * disabled -> disabled: don't care
-	 */
-	if (!context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching)
-		dc_dmub_srv_p_state_delegate(dc, false, context);
-
-	if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching || dc->clk_mgr->clks.fw_based_mclk_switching) {
-		/* After disabling P-State, restore the original value to ensure we get the correct P-State
-		 * on the next optimize. */
-		context->bw_ctx.bw.dcn.clk.p_state_change_support = p_state_change_support;
-	}
 }
-29
drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
@@ -6925,23 +6925,6 @@
 	return 0;
 }
 
-static int si_set_temperature_range(struct amdgpu_device *adev)
-{
-	int ret;
-
-	ret = si_thermal_enable_alert(adev, false);
-	if (ret)
-		return ret;
-	ret = si_thermal_set_temperature_range(adev, R600_TEMP_RANGE_MIN, R600_TEMP_RANGE_MAX);
-	if (ret)
-		return ret;
-	ret = si_thermal_enable_alert(adev, true);
-	if (ret)
-		return ret;
-
-	return ret;
-}
-
 static void si_dpm_disable(struct amdgpu_device *adev)
 {
 	struct rv7xx_power_info *pi = rv770_get_pi(adev);
@@ -7626,17 +7609,5 @@
 
 static int si_dpm_late_init(void *handle)
 {
-	int ret;
-	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-
-	if (!adev->pm.dpm_enabled)
-		return 0;
-
-	ret = si_set_temperature_range(adev);
-	if (ret)
-		return ret;
-#if 0 //TODO ?
-	si_dpm_powergate_uvd(adev, true);
-#endif
 	return 0;
 }
+6-4
drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
@@ -582,7 +582,7 @@
 	DpmClocks_t *clk_table = smu->smu_table.clocks_table;
 	SmuMetrics_legacy_t metrics;
 	struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
-	int i, size = 0, ret = 0;
+	int i, idx, size = 0, ret = 0;
 	uint32_t cur_value = 0, value = 0, count = 0;
 	bool cur_value_match_level = false;
 
@@ -656,7 +656,8 @@
 	case SMU_MCLK:
 	case SMU_FCLK:
 		for (i = 0; i < count; i++) {
-			ret = vangogh_get_dpm_clk_limited(smu, clk_type, i, &value);
+			idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i;
+			ret = vangogh_get_dpm_clk_limited(smu, clk_type, idx, &value);
 			if (ret)
 				return ret;
 			if (!value)
@@ -683,7 +684,7 @@
 	DpmClocks_t *clk_table = smu->smu_table.clocks_table;
 	SmuMetrics_t metrics;
 	struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
-	int i, size = 0, ret = 0;
+	int i, idx, size = 0, ret = 0;
 	uint32_t cur_value = 0, value = 0, count = 0;
 	bool cur_value_match_level = false;
 	uint32_t min, max;
@@ -765,7 +766,8 @@
 	case SMU_MCLK:
 	case SMU_FCLK:
 		for (i = 0; i < count; i++) {
-			ret = vangogh_get_dpm_clk_limited(smu, clk_type, i, &value);
+			idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i;
+			ret = vangogh_get_dpm_clk_limited(smu, clk_type, idx, &value);
 			if (ret)
 				return ret;
 			if (!value)
+3-2
drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
@@ -494,7 +494,7 @@
 static int renoir_print_clk_levels(struct smu_context *smu,
 			enum smu_clk_type clk_type, char *buf)
 {
-	int i, size = 0, ret = 0;
+	int i, idx, size = 0, ret = 0;
 	uint32_t cur_value = 0, value = 0, count = 0, min = 0, max = 0;
 	SmuMetrics_t metrics;
 	struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
@@ -594,7 +594,8 @@
 	case SMU_VCLK:
 	case SMU_DCLK:
 		for (i = 0; i < count; i++) {
-			ret = renoir_get_dpm_clk_limited(smu, clk_type, i, &value);
+			idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i;
+			ret = renoir_get_dpm_clk_limited(smu, clk_type, idx, &value);
 			if (ret)
 				return ret;
 			if (!value)
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_5_ppt.c
@@ -866,7 +866,7 @@
 static int smu_v13_0_5_print_clk_levels(struct smu_context *smu,
 				enum smu_clk_type clk_type, char *buf)
 {
-	int i, size = 0, ret = 0;
+	int i, idx, size = 0, ret = 0;
 	uint32_t cur_value = 0, value = 0, count = 0;
 	uint32_t min = 0, max = 0;
 
@@ -898,7 +898,8 @@
 		goto print_clk_out;
 
 	for (i = 0; i < count; i++) {
-		ret = smu_v13_0_5_get_dpm_freq_by_index(smu, clk_type, i, &value);
+		idx = (clk_type == SMU_MCLK) ? (count - i - 1) : i;
+		ret = smu_v13_0_5_get_dpm_freq_by_index(smu, clk_type, idx, &value);
 		if (ret)
 			goto print_clk_out;
drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
@@ -1000,7 +1000,7 @@
 static int yellow_carp_print_clk_levels(struct smu_context *smu,
 				enum smu_clk_type clk_type, char *buf)
 {
-	int i, size = 0, ret = 0;
+	int i, idx, size = 0, ret = 0;
 	uint32_t cur_value = 0, value = 0, count = 0;
 	uint32_t min, max;
 
@@ -1033,7 +1033,8 @@
 		goto print_clk_out;
 
 	for (i = 0; i < count; i++) {
-		ret = yellow_carp_get_dpm_freq_by_index(smu, clk_type, i, &value);
+		idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i;
+		ret = yellow_carp_get_dpm_freq_by_index(smu, clk_type, idx, &value);
 		if (ret)
 			goto print_clk_out;
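The vangogh, renoir, smu_v13_0_5, and yellow_carp hunks above all apply one pattern: for MCLK/FCLK (only MCLK on some parts) the firmware's DPM table is ordered from highest to lowest level, while the sysfs clock-level listing is expected ascending, so printed row `i` maps to table entry `count - i - 1`. A standalone sketch of that mapping (hypothetical enum names standing in for the real SMU clock types):

```c
#include <assert.h>

/* Hypothetical clock-type tags mirroring the SMU_MCLK/SMU_FCLK special
 * case in the hunks above; not the real driver enums. */
enum clk_type { CLK_SCLK, CLK_MCLK, CLK_FCLK };

/*
 * Map a printed row `i` (0..count-1, ascending in sysfs) to the DPM
 * table index, reversing the order for tables the firmware stores
 * from highest to lowest frequency.
 */
static int dpm_table_index(enum clk_type t, int i, int count)
{
	if (t == CLK_FCLK || t == CLK_MCLK)
		return count - i - 1;
	return i;
}
```

With a four-entry descending MCLK table, row 0 reads entry 3 (the lowest frequency) and row 3 reads entry 0, while clock types stored ascending pass through unchanged.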
+11-6
drivers/gpu/drm/i915/i915_perf.c
@@ -877,12 +877,17 @@
 		stream->oa_buffer.last_ctx_id = ctx_id;
 	}
 
-	/*
-	 * Clear out the report id and timestamp as a means to detect unlanded
-	 * reports.
-	 */
-	oa_report_id_clear(stream, report32);
-	oa_timestamp_clear(stream, report32);
+	if (is_power_of_2(report_size)) {
+		/*
+		 * Clear out the report id and timestamp as a means
+		 * to detect unlanded reports.
+		 */
+		oa_report_id_clear(stream, report32);
+		oa_timestamp_clear(stream, report32);
+	} else {
+		/* Zero out the entire report */
+		memset(report32, 0, report_size);
+	}
 }
 
 if (start_offset != *offset) {
drivers/hid/wacom_wac.c
@@ -831,7 +831,7 @@
 	/* Enter report */
 	if ((data[1] & 0xfc) == 0xc0) {
 		/* serial number of the tool */
-		wacom->serial[idx] = ((data[3] & 0x0f) << 28) +
+		wacom->serial[idx] = ((__u64)(data[3] & 0x0f) << 28) +
 			(data[4] << 20) + (data[5] << 12) +
 			(data[6] << 4) + (data[7] >> 4);
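The `(__u64)` cast above is needed because the `data[]` bytes promote to 32-bit int, so `(data[3] & 0x0f) << 28` can land in bit 31; when the negative 32-bit intermediate is then widened into the 64-bit serial, it sign-extends and corrupts the high word. A small sketch of the two behaviours (illustrative helpers, not the driver code; the sign-extension case is shown via an explicit negative intermediate rather than an overflowing shift):

```c
#include <assert.h>
#include <stdint.h>

/* The fixed pattern: widen the operand to 64 bits before shifting, so
 * bit 31 is just an ordinary bit of a wider value. */
static uint64_t widen_then_shift(uint8_t hi)
{
	return (uint64_t)(hi & 0x0f) << 28;
}

/* What the old code effectively produced once bit 31 was set: the 32-bit
 * intermediate is negative, and converting it to 64 bits sign-extends,
 * filling the top 32 bits with ones. */
static uint64_t sign_extended(int32_t intermediate)
{
	return (uint64_t)intermediate;
}
```

For a nibble of 0x8, the fixed form yields 0x80000000, whereas a negative 32-bit intermediate of the same bit pattern widens to 0xffffffff80000000, which is the corruption the cast prevents.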
+1-1
drivers/iio/accel/kionix-kx022a.c
@@ -1048,7 +1048,7 @@
 		data->ien_reg = KX022A_REG_INC4;
 	} else {
 		irq = fwnode_irq_get_byname(fwnode, "INT2");
-		if (irq <= 0)
+		if (irq < 0)
 			return dev_err_probe(dev, irq, "No suitable IRQ\n");
 
 		data->inc_reg = KX022A_REG_INC5;
+2-2
drivers/iio/accel/st_accel_core.c
@@ -1291,12 +1291,12 @@
 
 	adev = ACPI_COMPANION(indio_dev->dev.parent);
 	if (!adev)
-		return 0;
+		return -ENXIO;
 
 	/* Read _ONT data, which should be a package of 6 integers. */
 	status = acpi_evaluate_object(adev->handle, "_ONT", NULL, &buffer);
 	if (status == AE_NOT_FOUND) {
-		return 0;
+		return -ENXIO;
 	} else if (ACPI_FAILURE(status)) {
 		dev_warn(&indio_dev->dev, "failed to execute _ONT: %d\n",
 			 status);
+11-1
drivers/iio/adc/ad4130.c
@@ -1817,6 +1817,11 @@
 	.unprepare = ad4130_int_clk_unprepare,
 };
 
+static void ad4130_clk_del_provider(void *of_node)
+{
+	of_clk_del_provider(of_node);
+}
+
 static int ad4130_setup_int_clk(struct ad4130_state *st)
 {
 	struct device *dev = &st->spi->dev;
@@ -1824,6 +1829,7 @@
 	struct clk_init_data init;
 	const char *clk_name;
 	struct clk *clk;
+	int ret;
 
 	if (st->int_pin_sel == AD4130_INT_PIN_CLK ||
 	    st->mclk_sel != AD4130_MCLK_76_8KHZ)
@@ -1843,7 +1849,11 @@
 	if (IS_ERR(clk))
 		return PTR_ERR(clk);
 
-	return of_clk_add_provider(of_node, of_clk_src_simple_get, clk);
+	ret = of_clk_add_provider(of_node, of_clk_src_simple_get, clk);
+	if (ret)
+		return ret;
+
+	return devm_add_action_or_reset(dev, ad4130_clk_del_provider, of_node);
 }
 
 static int ad4130_setup(struct iio_dev *indio_dev)
drivers/iio/adc/ad_sigma_delta.c
@@ -584,6 +584,10 @@
 	init_completion(&sigma_delta->completion);
 
 	sigma_delta->irq_dis = true;
+
+	/* the IRQ core clears IRQ_DISABLE_UNLAZY flag when freeing an IRQ */
+	irq_set_status_flags(sigma_delta->spi->irq, IRQ_DISABLE_UNLAZY);
+
 	ret = devm_request_irq(dev, sigma_delta->spi->irq,
 			       ad_sd_data_rdy_trig_poll,
 			       sigma_delta->info->irq_flags | IRQF_NO_AUTOEN,
+3-4
drivers/iio/adc/imx93_adc.c
@@ -236,8 +236,7 @@
 {
 	struct imx93_adc *adc = iio_priv(indio_dev);
 	struct device *dev = adc->dev;
-	long ret;
-	u32 vref_uv;
+	int ret;
 
 	switch (mask) {
 	case IIO_CHAN_INFO_RAW:
@@ -253,10 +252,10 @@
 		return IIO_VAL_INT;
 
 	case IIO_CHAN_INFO_SCALE:
-		ret = vref_uv = regulator_get_voltage(adc->vref);
+		ret = regulator_get_voltage(adc->vref);
 		if (ret < 0)
 			return ret;
-		*val = vref_uv / 1000;
+		*val = ret / 1000;
 		*val2 = 12;
 		return IIO_VAL_FRACTIONAL_LOG2;
+51-2
drivers/iio/adc/mt6370-adc.c
@@ -19,6 +19,7 @@
 
 #include <dt-bindings/iio/adc/mediatek,mt6370_adc.h>
 
+#define MT6370_REG_DEV_INFO		0x100
 #define MT6370_REG_CHG_CTRL3		0x113
 #define MT6370_REG_CHG_CTRL7		0x117
 #define MT6370_REG_CHG_ADC		0x121
@@ -27,6 +28,7 @@
 #define MT6370_ADC_START_MASK		BIT(0)
 #define MT6370_ADC_IN_SEL_MASK		GENMASK(7, 4)
 #define MT6370_AICR_ICHG_MASK		GENMASK(7, 2)
+#define MT6370_VENID_MASK		GENMASK(7, 4)
 
 #define MT6370_AICR_100_mA		0x0
 #define MT6370_AICR_150_mA		0x1
@@ -47,6 +49,10 @@
 #define ADC_CONV_TIME_MS		35
 #define ADC_CONV_POLLING_TIME_US	1000
 
+#define MT6370_VID_RT5081		0x8
+#define MT6370_VID_RT5081A		0xA
+#define MT6370_VID_MT6370		0xE
+
 struct mt6370_adc_data {
 	struct device *dev;
 	struct regmap *regmap;
@@ -55,6 +61,7 @@
 	 * from being read at the same time.
 	 */
 	struct mutex adc_lock;
+	unsigned int vid;
 };
 
 static int mt6370_adc_read_channel(struct mt6370_adc_data *priv, int chan,
@@ -98,6 +105,30 @@
 	return ret;
 }
 
+static int mt6370_adc_get_ibus_scale(struct mt6370_adc_data *priv)
+{
+	switch (priv->vid) {
+	case MT6370_VID_RT5081:
+	case MT6370_VID_RT5081A:
+	case MT6370_VID_MT6370:
+		return 3350;
+	default:
+		return 3875;
+	}
+}
+
+static int mt6370_adc_get_ibat_scale(struct mt6370_adc_data *priv)
+{
+	switch (priv->vid) {
+	case MT6370_VID_RT5081:
+	case MT6370_VID_RT5081A:
+	case MT6370_VID_MT6370:
+		return 2680;
+	default:
+		return 3870;
+	}
+}
+
 static int mt6370_adc_read_scale(struct mt6370_adc_data *priv,
 				 int chan, int *val1, int *val2)
 {
@@ -123,7 +154,7 @@
 	case MT6370_AICR_250_mA:
 	case MT6370_AICR_300_mA:
 	case MT6370_AICR_350_mA:
-		*val1 = 3350;
+		*val1 = mt6370_adc_get_ibus_scale(priv);
 		break;
 	default:
 		*val1 = 5000;
@@ -150,7 +181,7 @@
 	case MT6370_ICHG_600_mA:
 	case MT6370_ICHG_700_mA:
 	case MT6370_ICHG_800_mA:
-		*val1 = 2680;
+		*val1 = mt6370_adc_get_ibat_scale(priv);
 		break;
 	default:
 		*val1 = 5000;
@@ -251,6 +282,20 @@
 	MT6370_ADC_CHAN(TEMP_JC, IIO_TEMP, 12, BIT(IIO_CHAN_INFO_OFFSET)),
 };
 
+static int mt6370_get_vendor_info(struct mt6370_adc_data *priv)
+{
+	unsigned int dev_info;
+	int ret;
+
+	ret = regmap_read(priv->regmap, MT6370_REG_DEV_INFO, &dev_info);
+	if (ret)
+		return ret;
+
+	priv->vid = FIELD_GET(MT6370_VENID_MASK, dev_info);
+
+	return 0;
+}
+
 static int mt6370_adc_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
@@ -271,6 +316,10 @@
 	priv->dev = dev;
 	priv->regmap = regmap;
 	mutex_init(&priv->adc_lock);
+
+	ret = mt6370_get_vendor_info(priv);
+	if (ret)
+		return dev_err_probe(dev, ret, "Failed to get vid\n");
 
 	ret = regmap_write(priv->regmap, MT6370_REG_CHG_ADC, 0);
 	if (ret)
+5-5
drivers/iio/adc/mxs-lradc-adc.c
@@ -757,13 +757,13 @@
 
 	ret = mxs_lradc_adc_trigger_init(iio);
 	if (ret)
-		goto err_trig;
+		return ret;
 
 	ret = iio_triggered_buffer_setup(iio, &iio_pollfunc_store_time,
 					 &mxs_lradc_adc_trigger_handler,
 					 &mxs_lradc_adc_buffer_ops);
 	if (ret)
-		return ret;
+		goto err_trig;
 
 	adc->vref_mv = mxs_lradc_adc_vref_mv[lradc->soc];
 
@@ -801,8 +801,8 @@
 
 err_dev:
 	mxs_lradc_adc_hw_stop(adc);
-	mxs_lradc_adc_trigger_remove(iio);
-err_trig:
 	iio_triggered_buffer_cleanup(iio);
+err_trig:
+	mxs_lradc_adc_trigger_remove(iio);
 	return ret;
 }
@@ -814,8 +814,8 @@
 
 	iio_device_unregister(iio);
 	mxs_lradc_adc_hw_stop(adc);
-	mxs_lradc_adc_trigger_remove(iio);
 	iio_triggered_buffer_cleanup(iio);
+	mxs_lradc_adc_trigger_remove(iio);
 
 	return 0;
 }
+5-5
drivers/iio/adc/palmas_gpadc.c
@@ -547,7 +547,7 @@
 	int adc_chan = chan->channel;
 	int ret = 0;
 
-	if (adc_chan > PALMAS_ADC_CH_MAX)
+	if (adc_chan >= PALMAS_ADC_CH_MAX)
 		return -EINVAL;
 
 	mutex_lock(&adc->lock);
@@ -595,7 +595,7 @@
 	int adc_chan = chan->channel;
 	int ret = 0;
 
-	if (adc_chan > PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH)
+	if (adc_chan >= PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH)
 		return -EINVAL;
 
 	mutex_lock(&adc->lock);
@@ -684,7 +684,7 @@
 	int adc_chan = chan->channel;
 	int ret;
 
-	if (adc_chan > PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH)
+	if (adc_chan >= PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH)
 		return -EINVAL;
 
 	mutex_lock(&adc->lock);
@@ -710,7 +710,7 @@
 	int adc_chan = chan->channel;
 	int ret;
 
-	if (adc_chan > PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH)
+	if (adc_chan >= PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH)
 		return -EINVAL;
 
 	mutex_lock(&adc->lock);
@@ -744,7 +744,7 @@
 	int old;
 	int ret;
 
-	if (adc_chan > PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH)
+	if (adc_chan >= PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH)
 		return -EINVAL;
 
 	mutex_lock(&adc->lock);
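Every hunk above is the same off-by-one: valid channels run from 0 to PALMAS_ADC_CH_MAX - 1, so rejecting only `adc_chan > PALMAS_ADC_CH_MAX` still lets the one-past-the-end index reach the per-channel arrays. A minimal sketch of the two checks (`NUM_CH` is a hypothetical stand-in for PALMAS_ADC_CH_MAX, which the diff treats as a channel count, not a last valid index):

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_CH 16	/* hypothetical channel count */

/* The old check: only indexes strictly above NUM_CH are rejected, so the
 * out-of-range index NUM_CH itself slips through. */
static bool chan_valid_buggy(int chan)
{
	return !(chan > NUM_CH);
}

/* The fixed check: NUM_CH is one past the last valid index, so >= is the
 * correct comparison. */
static bool chan_valid_fixed(int chan)
{
	return !(chan >= NUM_CH);
}
```

The interesting input is exactly `NUM_CH`: the buggy predicate accepts it and a caller would index one element past the end of a `NUM_CH`-sized array.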
+32-29
drivers/iio/adc/stm32-adc.c
@@ -2006,16 +2006,15 @@
 	 * to get the *real* number of channels.
 	 */
 	ret = device_property_count_u32(dev, "st,adc-diff-channels");
-	if (ret < 0)
-		return ret;
-
-	ret /= (int)(sizeof(struct stm32_adc_diff_channel) / sizeof(u32));
-	if (ret > adc_info->max_channels) {
-		dev_err(&indio_dev->dev, "Bad st,adc-diff-channels?\n");
-		return -EINVAL;
-	} else if (ret > 0) {
-		adc->num_diff = ret;
-		num_channels += ret;
+	if (ret > 0) {
+		ret /= (int)(sizeof(struct stm32_adc_diff_channel) / sizeof(u32));
+		if (ret > adc_info->max_channels) {
+			dev_err(&indio_dev->dev, "Bad st,adc-diff-channels?\n");
+			return -EINVAL;
+		} else if (ret > 0) {
+			adc->num_diff = ret;
+			num_channels += ret;
+		}
 	}
 
 	/* Optional sample time is provided either for each, or all channels */
@@ -2037,7 +2036,8 @@
 	struct stm32_adc_diff_channel diff[STM32_ADC_CH_MAX];
 	struct device *dev = &indio_dev->dev;
 	u32 num_diff = adc->num_diff;
+	int num_se = nchans - num_diff;
 	int size = num_diff * sizeof(*diff) / sizeof(u32);
 	int scan_index = 0, ret, i, c;
 	u32 smp = 0, smps[STM32_ADC_CH_MAX], chans[STM32_ADC_CH_MAX];
@@ -2063,29 +2063,32 @@
 			scan_index++;
 		}
 	}
-
-	ret = device_property_read_u32_array(dev, "st,adc-channels", chans,
-					     nchans);
-	if (ret)
-		return ret;
-
-	for (c = 0; c < nchans; c++) {
-		if (chans[c] >= adc_info->max_channels) {
-			dev_err(&indio_dev->dev, "Invalid channel %d\n",
-				chans[c]);
-			return -EINVAL;
+	if (num_se > 0) {
+		ret = device_property_read_u32_array(dev, "st,adc-channels", chans, num_se);
+		if (ret) {
+			dev_err(&indio_dev->dev, "Failed to get st,adc-channels %d\n", ret);
+			return ret;
 		}
 
-		/* Channel can't be configured both as single-ended & diff */
-		for (i = 0; i < num_diff; i++) {
-			if (chans[c] == diff[i].vinp) {
-				dev_err(&indio_dev->dev, "channel %d misconfigured\n", chans[c]);
+		for (c = 0; c < num_se; c++) {
+			if (chans[c] >= adc_info->max_channels) {
+				dev_err(&indio_dev->dev, "Invalid channel %d\n",
+					chans[c]);
 				return -EINVAL;
 			}
+
+			/* Channel can't be configured both as single-ended & diff */
+			for (i = 0; i < num_diff; i++) {
+				if (chans[c] == diff[i].vinp) {
+					dev_err(&indio_dev->dev, "channel %d misconfigured\n",
+						chans[c]);
+					return -EINVAL;
+				}
+			}
+			stm32_adc_chan_init_one(indio_dev, &channels[scan_index],
+						chans[c], 0, scan_index, false);
+			scan_index++;
 		}
-		stm32_adc_chan_init_one(indio_dev, &channels[scan_index],
-					chans[c], 0, scan_index, false);
-		scan_index++;
 	}
 
 	if (adc->nsmps > 0) {
@@ -2306,7 +2309,7 @@
 
 	if (legacy)
 		ret = stm32_adc_legacy_chan_init(indio_dev, adc, channels,
-						 num_channels);
+						 timestamping ? num_channels - 1 : num_channels);
 	else
 		ret = stm32_adc_generic_chan_init(indio_dev, adc, channels);
 	if (ret < 0)
drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
@@ -275,8 +275,13 @@
 {
 	struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev);
 	struct device *dev = regmap_get_device(st->map);
+	struct inv_icm42600_timestamp *ts = iio_priv(indio_dev);
 
 	pm_runtime_get_sync(dev);
+
+	mutex_lock(&st->lock);
+	inv_icm42600_timestamp_reset(ts);
+	mutex_unlock(&st->lock);
 
 	return 0;
 }
@@ -375,7 +380,6 @@
 	struct device *dev = regmap_get_device(st->map);
 	unsigned int sensor;
 	unsigned int *watermark;
-	struct inv_icm42600_timestamp *ts;
 	struct inv_icm42600_sensor_conf conf = INV_ICM42600_SENSOR_CONF_INIT;
 	unsigned int sleep_temp = 0;
 	unsigned int sleep_sensor = 0;
@@ -385,11 +389,9 @@
 	if (indio_dev == st->indio_gyro) {
 		sensor = INV_ICM42600_SENSOR_GYRO;
 		watermark = &st->fifo.watermark.gyro;
-		ts = iio_priv(st->indio_gyro);
 	} else if (indio_dev == st->indio_accel) {
 		sensor = INV_ICM42600_SENSOR_ACCEL;
 		watermark = &st->fifo.watermark.accel;
-		ts = iio_priv(st->indio_accel);
 	} else {
 		return -EINVAL;
 	}
@@ -416,8 +418,6 @@
 	/* if FIFO is off, turn temperature off */
 	if (!st->fifo.on)
 		ret = inv_icm42600_set_temp_conf(st, false, &sleep_temp);
-
-	inv_icm42600_timestamp_reset(ts);
 
 out_unlock:
 	mutex_unlock(&st->lock);
+32-10
drivers/iio/industrialio-gts-helper.c
@@ -337,6 +337,17 @@
 	return ret;
 }
 
+static void iio_gts_us_to_int_micro(int *time_us, int *int_micro_times,
+				    int num_times)
+{
+	int i;
+
+	for (i = 0; i < num_times; i++) {
+		int_micro_times[i * 2] = time_us[i] / 1000000;
+		int_micro_times[i * 2 + 1] = time_us[i] % 1000000;
+	}
+}
+
 /**
  * iio_gts_build_avail_time_table - build table of available integration times
  * @gts: Gain time scale descriptor
@@ -351,7 +362,7 @@
  */
 static int iio_gts_build_avail_time_table(struct iio_gts *gts)
 {
-	int *times, i, j, idx = 0;
+	int *times, i, j, idx = 0, *int_micro_times;
 
 	if (!gts->num_itime)
 		return 0;
@@ -378,13 +389,22 @@
 			}
 		}
 	}
-	gts->avail_time_tables = times;
-	/*
-	 * This is just to survive a unlikely corner-case where times in the
-	 * given time table were not unique. Else we could just trust the
-	 * gts->num_itime.
-	 */
-	gts->num_avail_time_tables = idx;
+
+	/* create a list of times formatted as list of IIO_VAL_INT_PLUS_MICRO */
+	int_micro_times = kcalloc(idx, sizeof(int) * 2, GFP_KERNEL);
+	if (int_micro_times) {
+		/*
+		 * This is just to survive a unlikely corner-case where times in
+		 * the given time table were not unique. Else we could just
+		 * trust the gts->num_itime.
+		 */
+		gts->num_avail_time_tables = idx;
+		iio_gts_us_to_int_micro(times, int_micro_times, idx);
+	}
+
+	gts->avail_time_tables = int_micro_times;
+	kfree(times);
+
+	if (!int_micro_times)
+		return -ENOMEM;
 
 	return 0;
 }
@@ -683,8 +705,8 @@
 		return -EINVAL;
 
 	*vals = gts->avail_time_tables;
-	*type = IIO_VAL_INT;
-	*length = gts->num_avail_time_tables;
+	*type = IIO_VAL_INT_PLUS_MICRO;
+	*length = gts->num_avail_time_tables * 2;
 
 	return IIO_AVAIL_LIST;
 }
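The core of the fix above is the conversion step: IIO_VAL_INT_PLUS_MICRO lists are laid out as flat (integer, microseconds) pairs, so each integration time in microseconds must be split in two and the reported list length doubled. A userspace mirror of that conversion (same arithmetic as the iio_gts_us_to_int_micro() helper in the hunk, re-typed here purely for illustration):

```c
#include <assert.h>

/*
 * Split each time in microseconds into an (integer seconds, remaining
 * microseconds) pair, writing num_times * 2 entries into out[]. This is
 * the layout IIO_VAL_INT_PLUS_MICRO availability lists expect.
 */
static void us_to_int_micro(const int *time_us, int *out, int num_times)
{
	for (int i = 0; i < num_times; i++) {
		out[i * 2] = time_us[i] / 1000000;	/* whole seconds */
		out[i * 2 + 1] = time_us[i] % 1000000;	/* leftover microseconds */
	}
}
```

For example, the times {400000, 1000000, 2500000} become the pairs (0, 400000), (1, 0), (2, 500000), which also shows why the reported `*length` in the last hunk is multiplied by 2.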
+20-6
drivers/iio/light/rohm-bu27034.c
@@ -231,6 +231,9 @@
 
 static const struct regmap_range bu27034_volatile_ranges[] = {
 	{
+		.range_min = BU27034_REG_SYSTEM_CONTROL,
+		.range_max = BU27034_REG_SYSTEM_CONTROL,
+	}, {
 		.range_min = BU27034_REG_MODE_CONTROL4,
 		.range_max = BU27034_REG_MODE_CONTROL4,
 	}, {
@@ -1167,13 +1170,14 @@
 
 	switch (mask) {
 	case IIO_CHAN_INFO_INT_TIME:
-		*val = bu27034_get_int_time(data);
-		if (*val < 0)
-			return *val;
+		*val = 0;
+		*val2 = bu27034_get_int_time(data);
+		if (*val2 < 0)
+			return *val2;
 
-		return IIO_VAL_INT;
+		return IIO_VAL_INT_PLUS_MICRO;
 
 	case IIO_CHAN_INFO_SCALE:
 		return bu27034_get_scale(data, chan->channel, val, val2);
@@ -1229,7 +1233,10 @@
 		ret = bu27034_set_scale(data, chan->channel, val, val2);
 		break;
 	case IIO_CHAN_INFO_INT_TIME:
-		ret = bu27034_try_set_int_time(data, val);
+		if (!val)
+			ret = bu27034_try_set_int_time(data, val2);
+		else
+			ret = -EINVAL;
 		break;
 	default:
 		ret = -EINVAL;
@@ -1268,12 +1275,19 @@
 	int ret, sel;
 
 	/* Reset */
-	ret = regmap_update_bits(data->regmap, BU27034_REG_SYSTEM_CONTROL,
+	ret = regmap_write_bits(data->regmap, BU27034_REG_SYSTEM_CONTROL,
 				 BU27034_MASK_SW_RESET, BU27034_MASK_SW_RESET);
 	if (ret)
 		return dev_err_probe(data->dev, ret, "Sensor reset failed\n");
 
 	msleep(1);
+
+	ret = regmap_reinit_cache(data->regmap, &bu27034_regmap);
+	if (ret) {
+		dev_err(data->dev, "Failed to reinit reg cache\n");
+		return ret;
+	}
+
 	/*
	 * Read integration time here to ensure it is in regmap cache. We do
	 * this to speed-up the int-time acquisition in the start of the buffer
drivers/iio/magnetometer/tmag5273.c
@@ -296,11 +296,12 @@
 		return ret;
 
 	ret = tmag5273_get_measure(data, &t, &x, &y, &z, &angle, &magnitude);
-	if (ret)
-		return ret;
 
 	pm_runtime_mark_last_busy(data->dev);
 	pm_runtime_put_autosuspend(data->dev);
+
+	if (ret)
+		return ret;
 
 	switch (chan->address) {
 	case TEMPERATURE:
+1-3
drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -3341,9 +3341,7 @@
 	udwr.remote_qkey = gsi_sqp->qplib_qp.qkey;
 
 	/* post data received in the send queue */
-	rc = bnxt_re_post_send_shadow_qp(rdev, gsi_sqp, swr);
-
-	return 0;
+	return bnxt_re_post_send_shadow_qp(rdev, gsi_sqp, swr);
 }
 
 static void bnxt_re_process_res_rawqp1_wc(struct ib_wc *wc,
+4
drivers/infiniband/hw/bnxt_re/main.c
@@ -1336,6 +1336,10 @@
 {
 	struct bnxt_qplib_cc_param cc_param = {};
 
+	/* Do not enable congestion control on VFs */
+	if (rdev->is_virtfn)
+		return;
+
 	/* Currently enabling only for GenP5 adapters */
 	if (!bnxt_qplib_is_chip_gen_p5(rdev->chip_ctx))
 		return;
+6-5
drivers/infiniband/hw/bnxt_re/qplib_fp.c
···20562056 u32 pg_sz_lvl;20572057 int rc;2058205820592059+ if (!cq->dpi) {20602060+ dev_err(&rcfw->pdev->dev,20612061+ "FP: CREATE_CQ failed due to NULL DPI\n");20622062+ return -EINVAL;20632063+ }20642064+20592065 hwq_attr.res = res;20602066 hwq_attr.depth = cq->max_wqe;20612067 hwq_attr.stride = sizeof(struct cq_base);···20752069 CMDQ_BASE_OPCODE_CREATE_CQ,20762070 sizeof(req));2077207120782078- if (!cq->dpi) {20792079- dev_err(&rcfw->pdev->dev,20802080- "FP: CREATE_CQ failed due to NULL DPI\n");20812081- return -EINVAL;20822082- }20832072 req.dpi = cpu_to_le32(cq->dpi->dpi);20842073 req.cq_handle = cpu_to_le64(cq->cq_handle);20852074 req.cq_size = cpu_to_le32(cq->hwq.max_elements);
···759759}
760760
761761/*
762762+ * This function restarts event logging in case the IOMMU experienced
763763+ * a GA log overflow.
764764+ */
765765+void amd_iommu_restart_ga_log(struct amd_iommu *iommu)
766766+{
767767+	u32 status;
768768+
769769+	status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
770770+	if (status & MMIO_STATUS_GALOG_RUN_MASK)
771771+		return;
772772+
773773+	pr_info_ratelimited("IOMMU GA Log restarting\n");
774774+
775775+	iommu_feature_disable(iommu, CONTROL_GALOG_EN);
776776+	iommu_feature_disable(iommu, CONTROL_GAINT_EN);
777777+
778778+	writel(MMIO_STATUS_GALOG_OVERFLOW_MASK,
779779+	       iommu->mmio_base + MMIO_STATUS_OFFSET);
780780+
781781+	iommu_feature_enable(iommu, CONTROL_GAINT_EN);
782782+	iommu_feature_enable(iommu, CONTROL_GALOG_EN);
783783+}
784784+
785785+/*
762786 * This function resets the command buffer if the IOMMU stopped fetching
763787 * commands from it.
764788 */
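The restart sequence added above (check that the log is running, disable logging and its interrupt, acknowledge the overflow, then re-enable both) can be modeled in plain userspace C. The bit positions and names below are illustrative assumptions, not the real AMD IOMMU MMIO layout:

```c
#include <assert.h>
#include <stdint.h>

#define GALOG_RUN      (1u << 8)   /* illustrative bit positions, */
#define GALOG_OVERFLOW (1u << 9)   /* not the real MMIO layout    */
#define CTRL_GALOG_EN  (1u << 0)
#define CTRL_GAINT_EN  (1u << 1)

/* Userspace model of the restart sequence: if the log is already
 * running there is nothing to do; otherwise disable logging and its
 * interrupt, acknowledge the overflow, and re-enable both. */
static void restart_ga_log(uint32_t *status, uint32_t *ctrl)
{
	if (*status & GALOG_RUN)
		return;

	*ctrl &= ~(CTRL_GALOG_EN | CTRL_GAINT_EN);
	*status &= ~GALOG_OVERFLOW;	/* hardware is write-1-to-clear; modeled as a clear */
	*ctrl |= CTRL_GAINT_EN | CTRL_GALOG_EN;
}
```

Disabling before acknowledging mirrors the patch: the overflow bit is only cleared while the log is quiesced, so the re-enabled log starts from a clean state.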
···10911091 mutex_lock(&adap->lock);10921092 dprintk(2, "%s: %*ph\n", __func__, msg->len, msg->msg);1093109310941094- adap->last_initiator = 0xff;10941094+ if (!adap->transmit_in_progress)10951095+ adap->last_initiator = 0xff;1095109610961097 /* Check if this message was for us (directed or broadcast). */10971098 if (!cec_msg_is_broadcast(msg)) {···15861585 *15871586 * This function is called with adap->lock held.15881587 */15891589-static int cec_adap_enable(struct cec_adapter *adap)15881588+int cec_adap_enable(struct cec_adapter *adap)15901589{15911590 bool enable;15921591 int ret = 0;···15951594 adap->log_addrs.num_log_addrs;15961595 if (adap->needs_hpd)15971596 enable = enable && adap->phys_addr != CEC_PHYS_ADDR_INVALID;15971597+15981598+ if (adap->devnode.unregistered)15991599+ enable = false;1598160015991601 if (enable == adap->is_enabled)16001602 return 0;
+2
drivers/media/cec/core/cec-core.c
···191191 mutex_lock(&adap->lock);192192 __cec_s_phys_addr(adap, CEC_PHYS_ADDR_INVALID, false);193193 __cec_s_log_addrs(adap, NULL, false);194194+ // Disable the adapter (since adap->devnode.unregistered is true)195195+ cec_adap_enable(adap);194196 mutex_unlock(&adap->lock);195197196198 cdev_device_del(&devnode->cdev, &devnode->dev);
···584584585585 if (!(ctx->dev->dec_capability & VCODEC_CAPABILITY_4K_DISABLED)) {586586 for (i = 0; i < num_supported_formats; i++) {587587+ if (mtk_video_formats[i].type != MTK_FMT_DEC)588588+ continue;589589+587590 mtk_video_formats[i].frmsize.max_width =588591 VCODEC_DEC_4K_CODED_WIDTH;589592 mtk_video_formats[i].frmsize.max_height =
-1
drivers/media/platform/qcom/camss/camss-video.c
···353353 if (subdev == NULL)354354 return -EPIPE;355355356356- memset(&fmt, 0, sizeof(fmt));357356 fmt.pad = pad;358357359358 ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &fmt);
+4-2
drivers/media/platform/verisilicon/hantro_v4l2.c
···397397 if (!raw_vpu_fmt)398398 return -EINVAL;399399400400- if (ctx->is_encoder)400400+ if (ctx->is_encoder) {401401 encoded_fmt = &ctx->dst_fmt;402402- else402402+ ctx->vpu_src_fmt = raw_vpu_fmt;403403+ } else {403404 encoded_fmt = &ctx->src_fmt;405405+ }404406405407 hantro_reset_fmt(&raw_fmt, raw_vpu_fmt);406408 raw_fmt.width = encoded_fmt->width;
+11-5
drivers/media/usb/uvc/uvc_driver.c
···251251 /* Find the format descriptor from its GUID. */252252 fmtdesc = uvc_format_by_guid(&buffer[5]);253253254254- if (fmtdesc != NULL) {255255- format->fcc = fmtdesc->fcc;256256- } else {254254+ if (!fmtdesc) {255255+ /*256256+ * Unknown video formats are not fatal errors, the257257+ * caller will skip this descriptor.258258+ */257259 dev_info(&streaming->intf->dev,258260 "Unknown video format %pUl\n", &buffer[5]);259259- format->fcc = 0;261261+ return 0;260262 }261263264264+ format->fcc = fmtdesc->fcc;262265 format->bpp = buffer[21];263266264267 /*···678675 interval = (u32 *)&frame[nframes];679676680677 streaming->format = format;681681- streaming->nformats = nformats;678678+ streaming->nformats = 0;682679683680 /* Parse the format descriptors. */684681 while (buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE) {···692689 &interval, buffer, buflen);693690 if (ret < 0)694691 goto error;692692+ if (!ret)693693+ break;695694695695+ streaming->nformats++;696696 frame += format->nframes;697697 format++;698698
+1-2
drivers/media/v4l2-core/v4l2-mc.c
···314314{315315 struct fwnode_handle *endpoint;316316317317- if (!(sink->flags & MEDIA_PAD_FL_SINK) ||318318- !is_media_entity_v4l2_subdev(sink->entity))317317+ if (!(sink->flags & MEDIA_PAD_FL_SINK))319318 return -EINVAL;320319321320 fwnode_graph_for_each_endpoint(dev_fwnode(src_sd->dev), endpoint) {
+23-8
drivers/misc/fastrpc.c
···316316	if (map->table) {
317317		if (map->attr & FASTRPC_ATTR_SECUREMAP) {
318318			struct qcom_scm_vmperm perm;
319319+			int vmid = map->fl->cctx->vmperms[0].vmid;
320320+			u64 src_perms = BIT(QCOM_SCM_VMID_HLOS) | BIT(vmid);
319321			int err = 0;
320322
321323			perm.vmid = QCOM_SCM_VMID_HLOS;
322324			perm.perm = QCOM_SCM_PERM_RWX;
323325			err = qcom_scm_assign_mem(map->phys, map->size,
324324-				&map->fl->cctx->perms, &perm, 1);
326326+				&src_perms, &perm, 1);
325327			if (err) {
326328				dev_err(map->fl->sctx->dev, "Failed to assign memory phys 0x%llx size 0x%llx err %d",
327329					map->phys, map->size, err);
···789787		goto map_err;
790788	}
791789
792792-	map->phys = sg_dma_address(map->table->sgl);
793793-	map->phys += ((u64)fl->sctx->sid << 32);
790790+	if (attr & FASTRPC_ATTR_SECUREMAP) {
791791+		map->phys = sg_phys(map->table->sgl);
792792+	} else {
793793+		map->phys = sg_dma_address(map->table->sgl);
794794+		map->phys += ((u64)fl->sctx->sid << 32);
795795+	}
794796	map->size = len;
795797	map->va = sg_virt(map->table->sgl);
796798	map->len = len;
···804798		 * If subsystem VMIDs are defined in DTSI, then do
805799		 * hyp_assign from HLOS to those VM(s)
806800		 */
801801+		u64 src_perms = BIT(QCOM_SCM_VMID_HLOS);
802802+		struct qcom_scm_vmperm dst_perms[2] = {0};
803803+
804804+		dst_perms[0].vmid = QCOM_SCM_VMID_HLOS;
805805+		dst_perms[0].perm = QCOM_SCM_PERM_RW;
806806+		dst_perms[1].vmid = fl->cctx->vmperms[0].vmid;
807807+		dst_perms[1].perm = QCOM_SCM_PERM_RWX;
807808		map->attr = attr;
808808-		err = qcom_scm_assign_mem(map->phys, (u64)map->size, &fl->cctx->perms,
809809-			fl->cctx->vmperms, fl->cctx->vmcount);
809809+		err = qcom_scm_assign_mem(map->phys, (u64)map->size, &src_perms, dst_perms, 2);
810810		if (err) {
811811			dev_err(sess->dev, "Failed to assign memory with phys 0x%llx size 0x%llx err %d",
812812				map->phys, map->size, err);
···19041892	req.vaddrout = rsp_msg.vaddr;
19051893
19061894	/* Add memory to static PD pool, protection thru hypervisor */
19071907-	if (req.flags != ADSP_MMAP_REMOTE_HEAP_ADDR && fl->cctx->vmcount) {
18951895+	if (req.flags == ADSP_MMAP_REMOTE_HEAP_ADDR && fl->cctx->vmcount) {
19081896		struct qcom_scm_vmperm perm;
19091897
19101898		perm.vmid = QCOM_SCM_VMID_HLOS;
···23492337	struct fastrpc_invoke_ctx *ctx;
23502338
23512339	spin_lock(&user->lock);
23522352-	list_for_each_entry(ctx, &user->pending, node)
23402340+	list_for_each_entry(ctx, &user->pending, node) {
23412341+		ctx->retval = -EPIPE;
23532342		complete(&ctx->work);
23432343+	}
23542344	spin_unlock(&user->lock);
23552345}
23562346
···23632349	struct fastrpc_user *user;
23642350	unsigned long flags;
23652351
23522352+	/* No invocations past this point */
23662353	spin_lock_irqsave(&cctx->lock, flags);
23542354+	cctx->rpdev = NULL;
23672355	list_for_each_entry(user, &cctx->users, user)
23682356		fastrpc_notify_users(user);
23692357	spin_unlock_irqrestore(&cctx->lock, flags);
···23842368
23852369	of_platform_depopulate(&rpdev->dev);
23862370
23872387-	cctx->rpdev = NULL;
23882371	fastrpc_channel_ctx_put(cctx);
23892372}
23902373
···24572457 NDTR1_WAIT_MODE;24582458 }2459245924602460+ /*24612461+ * Reset nfc->selected_chip so the next command will cause the timing24622462+ * registers to be updated in marvell_nfc_select_target().24632463+ */24642464+ nfc->selected_chip = NULL;24652465+24602466 return 0;24612467}24622468···29002894 regmap_update_bits(sysctrl_base, GENCONF_CLK_GATING_CTRL,29012895 GENCONF_CLK_GATING_CTRL_ND_GATE,29022896 GENCONF_CLK_GATING_CTRL_ND_GATE);29032903-29042904- regmap_update_bits(sysctrl_base, GENCONF_ND_CLK_CTRL,29052905- GENCONF_ND_CLK_CTRL_EN,29062906- GENCONF_ND_CLK_CTRL_EN);29072897 }2908289829092899 /* Configure the DMA if appropriate */
+4-1
drivers/mtd/spi-nor/core.c
···2018201820192019static const struct flash_info spi_nor_generic_flash = {20202020 .name = "spi-nor-generic",20212021+ .n_banks = 1,20212022 /*20222023 * JESD216 rev A doesn't specify the page size, therefore we need a20232024 * sane default.···29222921 if (nor->flags & SNOR_F_HAS_LOCK && !nor->params->locking_ops)29232922 spi_nor_init_default_locking_ops(nor);2924292329252925- nor->params->bank_size = div64_u64(nor->params->size, nor->info->n_banks);29242924+ if (nor->info->n_banks > 1)29252925+ params->bank_size = div64_u64(params->size, nor->info->n_banks);29262926}2927292729282928/**···29892987 /* Set SPI NOR sizes. */29902988 params->writesize = 1;29912989 params->size = (u64)info->sector_size * info->n_sectors;29902990+ params->bank_size = params->size;29922991 params->page_size = info->page_size;2993299229942993 if (!(info->flags & SPI_NOR_NO_FR)) {
+2-2
drivers/mtd/spi-nor/spansion.c
···361361 */362362static int cypress_nor_set_addr_mode_nbytes(struct spi_nor *nor)363363{364364- struct spi_mem_op op;364364+ struct spi_mem_op op = {};365365 u8 addr_mode;366366 int ret;367367···492492 const struct sfdp_parameter_header *bfpt_header,493493 const struct sfdp_bfpt *bfpt)494494{495495- struct spi_mem_op op;495495+ struct spi_mem_op op = {};496496 int ret;497497498498 ret = cypress_nor_set_addr_mode_nbytes(nor);
+1-1
drivers/net/dsa/mv88e6xxx/chip.c
···71707170 goto out;71717171 }71727172 if (chip->reset)71737173- usleep_range(1000, 2000);71737173+ usleep_range(10000, 20000);7174717471757175 /* Detect if the device is configured in single chip addressing mode,71767176 * otherwise continue with address specific smi init/detection.
···926926 if (err)927927 return err;928928929929- for (i = 0; i < MLX5E_MAX_BUFFER; i++)929929+ for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++)930930 dcb_buffer->buffer_size[i] = port_buffer.buffer[i].size;931931- dcb_buffer->total_size = port_buffer.port_buffer_size;931931+ dcb_buffer->total_size = port_buffer.port_buffer_size -932932+ port_buffer.internal_buffers_size;932933933934 return 0;934935}···971970 if (err)972971 return err;973972974974- for (i = 0; i < MLX5E_MAX_BUFFER; i++) {973973+ for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {975974 if (port_buffer.buffer[i].size != dcb_buffer->buffer_size[i]) {976975 changed |= MLX5E_PORT_BUFFER_SIZE;977976 buffer_size = dcb_buffer->buffer_size;
···141141 irq_update_affinity_hint(irq->map.virq, NULL);142142#ifdef CONFIG_RFS_ACCEL143143 rmap = mlx5_eq_table_get_rmap(pool->dev);144144- if (rmap && irq->map.index)144144+ if (rmap)145145 irq_cpu_rmap_remove(rmap, irq->map.virq);146146#endif147147···232232 if (!irq)233233 return ERR_PTR(-ENOMEM);234234 if (!i || !pci_msix_can_alloc_dyn(dev->pdev)) {235235- /* The vector at index 0 was already allocated.236236- * Just get the irq number. If dynamic irq is not supported237237- * vectors have also been allocated.235235+ /* The vector at index 0 is always statically allocated. If236236+ * dynamic irq is not supported all vectors are statically237237+ * allocated. In both cases just get the irq number and set238238+ * the index.238239 */239240 irq->map.virq = pci_irq_vector(dev->pdev, i);240240- irq->map.index = 0;241241+ irq->map.index = i;241242 } else {242243 irq->map = pci_msix_alloc_irq_at(dev->pdev, MSI_ANY_INDEX, af_desc);243244 if (!irq->map.virq) {···571570572571 af_desc.is_managed = false;573572 for (i = 0; i < nirqs; i++) {573573+ cpumask_clear(&af_desc.mask);574574 cpumask_set_cpu(cpus[i], &af_desc.mask);575575 irq = mlx5_irq_request(dev, i + 1, &af_desc, rmap);576576 if (IS_ERR(irq))577577 break;578578- cpumask_clear(&af_desc.mask);579578 irqs[i] = irq;580579 }581580
···117117 return -EOPNOTSUPP;118118 }119119120120+ if (!prog)121121+ xdp_features_clear_redirect_target(dev);122122+120123 need_update = !!priv->xdp_prog != !!prog;121124 if (if_running && need_update)122125 stmmac_xdp_release(dev);···133130134131 if (if_running && need_update)135132 stmmac_xdp_open(dev);133133+134134+ if (prog)135135+ xdp_features_set_redirect_target(dev, false);136136137137 return 0;138138}
+1-1
drivers/net/ipa/ipa_endpoint.c
···119119};120120121121/* Size in bytes of an IPA packet status structure */122122-#define IPA_STATUS_SIZE sizeof(__le32[4])122122+#define IPA_STATUS_SIZE sizeof(__le32[8])123123124124/* IPA status structure decoder; looks up field values for a structure */125125static u32 ipa_status_extract(struct ipa *ipa, const void *data,
+3-13
drivers/net/phy/mxl-gpy.c
···274274 return ret < 0 ? ret : 0;275275}276276277277-static bool gpy_has_broken_mdint(struct phy_device *phydev)278278-{279279- /* At least these PHYs are known to have broken interrupt handling */280280- return phydev->drv->phy_id == PHY_ID_GPY215B ||281281- phydev->drv->phy_id == PHY_ID_GPY215C;282282-}283283-284277static int gpy_probe(struct phy_device *phydev)285278{286279 struct device *dev = &phydev->mdio.dev;···293300 phydev->priv = priv;294301 mutex_init(&priv->mbox_lock);295302296296- if (gpy_has_broken_mdint(phydev) &&297297- !device_property_present(dev, "maxlinear,use-broken-interrupts"))303303+ if (!device_property_present(dev, "maxlinear,use-broken-interrupts"))298304 phydev->dev_flags |= PHY_F_NO_IRQ;299305300306 fw_version = phy_read(phydev, PHY_FWV);···651659 * frame. Therefore, polling is the best we can do and won't do any more652660 * harm.653661 * It was observed that this bug happens on link state and link speed654654- * changes on a GPY215B and GYP215C independent of the firmware version655655- * (which doesn't mean that this list is exhaustive).662662+ * changes independent of the firmware version.656663 */657657- if (gpy_has_broken_mdint(phydev) &&658658- (reg & (PHY_IMASK_LSTC | PHY_IMASK_LSPC))) {664664+ if (reg & (PHY_IMASK_LSTC | PHY_IMASK_LSPC)) {659665 reg = gpy_mbox_read(phydev, REG_GPIO0_OUT);660666 if (reg < 0) {661667 phy_error(phydev);
···762762763763config SERIAL_CPM764764 tristate "CPM SCC/SMC serial port support"765765- depends on CPM2 || CPM1 || (PPC32 && COMPILE_TEST)765765+ depends on CPM2 || CPM1766766 select SERIAL_CORE767767 help768768 This driver supports the SCC and SMC serial ports on Motorola
···14951495
14961496static void lpuart32_break_ctl(struct uart_port *port, int break_state)
14971497{
14981498-	unsigned long temp, modem;
14991499-	struct tty_struct *tty;
15001500-	unsigned int cflag = 0;
14981498+	unsigned long temp;
15011499
15021502-	tty = tty_port_tty_get(&port->state->port);
15031503-	if (tty) {
15041504-		cflag = tty->termios.c_cflag;
15051505-		tty_kref_put(tty);
15061506-	}
15001500+	temp = lpuart32_read(port, UARTCTRL);
15071501
15081508-	temp = lpuart32_read(port, UARTCTRL) & ~UARTCTRL_SBK;
15091509-	modem = lpuart32_read(port, UARTMODIR);
15101510-
15021502+	/*
15031503+	 * The LPUART IP has two known bugs. One is that CTS has higher priority
15041504+	 * than the break signal, so a break sent through UARTCTRL_SBK may be
15051505+	 * impacted by the CTS input if HW flow control is enabled. It exists on
15061506+	 * all platforms we support in this driver.
15071507+	 * The other is that i.MX8QM LPUART may send an additional break
15081508+	 * character after SBK was cleared.
15091509+	 * To avoid both bugs, we use the Transmit Data Inversion function to
15101510+	 * send the break signal instead of UARTCTRL_SBK.
15111511+	 */
15111512	if (break_state != 0) {
15121512-		temp |= UARTCTRL_SBK;
15131513		/*
15141514-		 * LPUART CTS has higher priority than SBK, need to disable CTS before
15151515-		 * asserting SBK to avoid any interference if flow control is enabled.
15141514+		 * Disable the transmitter to prevent any data from being sent out
15151515+		 * during break, then invert the TX line to send break.
15161516		 */
15171517-		if (cflag & CRTSCTS && modem & UARTMODIR_TXCTSE)
15181518-			lpuart32_write(port, modem & ~UARTMODIR_TXCTSE, UARTMODIR);
15171517+		temp &= ~UARTCTRL_TE;
15181518+		lpuart32_write(port, temp, UARTCTRL);
15191519+		temp |= UARTCTRL_TXINV;
15201520+		lpuart32_write(port, temp, UARTCTRL);
15191521	} else {
15201520-		/* Re-enable the CTS when break off. */
15211521-		if (cflag & CRTSCTS && !(modem & UARTMODIR_TXCTSE))
15221522-			lpuart32_write(port, modem | UARTMODIR_TXCTSE, UARTMODIR);
15221522+		/* Disable the TXINV to turn off break and re-enable transmitter. */
15231523+		temp &= ~UARTCTRL_TXINV;
15241524+		lpuart32_write(port, temp, UARTCTRL);
15251525+		temp |= UARTCTRL_TE;
15261526+		lpuart32_write(port, temp, UARTCTRL);
15231527	}
15241524-
15251525-	lpuart32_write(port, temp, UARTCTRL);
15261528}
15271529
15281530static void lpuart_setup_watermark(struct lpuart_port *sport)
+13
drivers/usb/cdns3/cdns3-gadget.c
···20972097	else
20982098		priv_ep->trb_burst_size = 16;
20992099
21002100+	/*
21012101+	 * In versions preceding DEV_VER_V2, for example, iMX8QM, there exist bugs
21022102+	 * in the DMA. These bugs occur when the trb_burst_size exceeds 16 and the
21032103+	 * address is not aligned to 128 Bytes (which is a product of the 64-bit AXI
21042104+	 * and AXI maximum burst length of 16 or 0xF+1, dma_axi_ctrl0[3:0]). This
21052105+	 * results in data corruption when it crosses the 4K border. The corruption
21062106+	 * specifically occurs from the position (4K - (address & 0x7F)) to 4K.
21072107+	 *
21082108+	 * So force trb_burst_size to 16 on such platforms.
21092109+	 */
21102110+	if (priv_dev->dev_ver < DEV_VER_V2)
21112111+		priv_ep->trb_burst_size = 16;
21122112+
21002113	mult = min_t(u8, mult, EP_CFG_MULT_MAX);
21012114	buffering = min_t(u8, buffering, EP_CFG_BUFFERING_MAX);
21022115	maxburst = min_t(u8, maxburst, EP_CFG_MAXBURST_MAX);
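As a quick sanity check of the window described in the comment above — corruption from (4K - (address & 0x7F)) up to the 4K border when the buffer is not 128-byte aligned — here is a tiny illustrative helper, not driver code:

```c
#include <assert.h>
#include <stdint.h>

/* Computes where the corruption window would start within a 4 KiB
 * page for a given buffer address; a 128-byte aligned address yields
 * 4096, i.e. an empty window right at the border. */
static uint32_t corruption_start(uint64_t addr)
{
	return 4096 - (uint32_t)(addr & 0x7F);
}
```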
+41
drivers/usb/core/buffer.c
···172172 }173173 dma_free_coherent(hcd->self.sysdev, size, addr, dma);174174}175175+176176+void *hcd_buffer_alloc_pages(struct usb_hcd *hcd,177177+ size_t size, gfp_t mem_flags, dma_addr_t *dma)178178+{179179+ if (size == 0)180180+ return NULL;181181+182182+ if (hcd->localmem_pool)183183+ return gen_pool_dma_alloc_align(hcd->localmem_pool,184184+ size, dma, PAGE_SIZE);185185+186186+ /* some USB hosts just use PIO */187187+ if (!hcd_uses_dma(hcd)) {188188+ *dma = DMA_MAPPING_ERROR;189189+ return (void *)__get_free_pages(mem_flags,190190+ get_order(size));191191+ }192192+193193+ return dma_alloc_coherent(hcd->self.sysdev,194194+ size, dma, mem_flags);195195+}196196+197197+void hcd_buffer_free_pages(struct usb_hcd *hcd,198198+ size_t size, void *addr, dma_addr_t dma)199199+{200200+ if (!addr)201201+ return;202202+203203+ if (hcd->localmem_pool) {204204+ gen_pool_free(hcd->localmem_pool,205205+ (unsigned long)addr, size);206206+ return;207207+ }208208+209209+ if (!hcd_uses_dma(hcd)) {210210+ free_pages((unsigned long)addr, get_order(size));211211+ return;212212+ }213213+214214+ dma_free_coherent(hcd->self.sysdev, size, addr, dma);215215+}
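The allocation order in hcd_buffer_alloc_pages() above — local memory pool first, plain pages for PIO-only hosts (with the DMA handle flagged as unmapped), DMA-coherent memory otherwise — can be mimicked in a userspace sketch. `struct fake_hcd`, the capability fields, and the fake addresses are assumptions purely for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define DMA_MAPPING_ERROR ((uintptr_t)-1)

/* Illustrative capability flags, not the real struct usb_hcd fields. */
struct fake_hcd {
	int has_localmem;
	int uses_dma;
};

/* Mirrors the fallback order: local pool, then plain pages for
 * PIO-only hosts (marking the handle as unmapped so the caller
 * knows not to DMA), then coherent memory. */
static void *buf_alloc(struct fake_hcd *hcd, size_t size, uintptr_t *dma)
{
	if (size == 0)
		return NULL;

	if (hcd->has_localmem) {
		*dma = 0x1000;		/* pretend pool address */
		return malloc(size);
	}

	if (!hcd->uses_dma) {
		*dma = DMA_MAPPING_ERROR;	/* caller must not DMA-map this */
		return malloc(size);
	}

	*dma = 0x2000;			/* pretend coherent mapping */
	return malloc(size);
}
```

The DMA_MAPPING_ERROR sentinel is what lets the devio.c mmap path below decide between remap_pfn_range() and dma_mmap_coherent() without re-checking host capabilities.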
+14-6
drivers/usb/core/devio.c
···186186static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count)187187{188188 struct usb_dev_state *ps = usbm->ps;189189+ struct usb_hcd *hcd = bus_to_hcd(ps->dev->bus);189190 unsigned long flags;190191191192 spin_lock_irqsave(&ps->lock, flags);···195194 list_del(&usbm->memlist);196195 spin_unlock_irqrestore(&ps->lock, flags);197196198198- usb_free_coherent(ps->dev, usbm->size, usbm->mem,199199- usbm->dma_handle);197197+ hcd_buffer_free_pages(hcd, usbm->size,198198+ usbm->mem, usbm->dma_handle);200199 usbfs_decrease_memory_usage(201200 usbm->size + sizeof(struct usb_memory));202201 kfree(usbm);···235234 size_t size = vma->vm_end - vma->vm_start;236235 void *mem;237236 unsigned long flags;238238- dma_addr_t dma_handle;237237+ dma_addr_t dma_handle = DMA_MAPPING_ERROR;239238 int ret;240239241240 ret = usbfs_increase_memory_usage(size + sizeof(struct usb_memory));···248247 goto error_decrease_mem;249248 }250249251251- mem = usb_alloc_coherent(ps->dev, size, GFP_USER | __GFP_NOWARN,252252- &dma_handle);250250+ mem = hcd_buffer_alloc_pages(hcd,251251+ size, GFP_USER | __GFP_NOWARN, &dma_handle);253252 if (!mem) {254253 ret = -ENOMEM;255254 goto error_free_usbm;···265264 usbm->vma_use_count = 1;266265 INIT_LIST_HEAD(&usbm->memlist);267266268268- if (hcd->localmem_pool || !hcd_uses_dma(hcd)) {267267+ /*268268+ * In DMA-unavailable cases, hcd_buffer_alloc_pages allocates269269+ * normal pages and assigns DMA_MAPPING_ERROR to dma_handle. Check270270+ * whether we are in such cases, and then use remap_pfn_range (or271271+ * dma_mmap_coherent) to map normal (or DMA) pages into the user272272+ * space, respectively.273273+ */274274+ if (dma_handle == DMA_MAPPING_ERROR) {269275 if (remap_pfn_range(vma, vma->vm_start,270276 virt_to_phys(usbm->mem) >> PAGE_SHIFT,271277 size, vma->vm_page_prot) < 0) {
···920920 enable_irq(client->irq);921921 }922922923923- if (client->irq)923923+ if (!client->irq)924924 queue_delayed_work(system_power_efficient_wq, &tps->wq_poll,925925 msecs_to_jiffies(POLL_INTERVAL));926926
+5-17
drivers/vhost/vhost.c
···256256 * test_and_set_bit() implies a memory barrier.257257 */258258 llist_add(&work->node, &dev->worker->work_list);259259- wake_up_process(dev->worker->vtsk->task);259259+ vhost_task_wake(dev->worker->vtsk);260260 }261261}262262EXPORT_SYMBOL_GPL(vhost_work_queue);···333333 __vhost_vq_meta_reset(vq);334334}335335336336-static int vhost_worker(void *data)336336+static bool vhost_worker(void *data)337337{338338 struct vhost_worker *worker = data;339339 struct vhost_work *work, *work_next;340340 struct llist_node *node;341341342342- for (;;) {343343- /* mb paired w/ kthread_stop */344344- set_current_state(TASK_INTERRUPTIBLE);345345-346346- if (vhost_task_should_stop(worker->vtsk)) {347347- __set_current_state(TASK_RUNNING);348348- break;349349- }350350-351351- node = llist_del_all(&worker->work_list);352352- if (!node)353353- schedule();354354-342342+ node = llist_del_all(&worker->work_list);343343+ if (node) {355344 node = llist_reverse_order(node);356345 /* make sure flag is seen after deletion */357346 smp_wmb();358347 llist_for_each_entry_safe(work, work_next, node, node) {359348 clear_bit(VHOST_WORK_QUEUED, &work->flags);360360- __set_current_state(TASK_RUNNING);361349 kcov_remote_start_common(worker->kcov_handle);362350 work->fn(work);363351 kcov_remote_stop();···353365 }354366 }355367356356- return 0;368368+ return !!node;357369}358370359371static void vhost_vq_free_iovecs(struct vhost_virtqueue *vq)
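The reworked vhost_worker() above drains the lock-free list with llist_del_all() and then calls llist_reverse_order(), because nodes are pushed at the head (LIFO) while work items should run in submission order. A minimal userspace version of that reversal step:

```c
#include <assert.h>
#include <stddef.h>

struct node {
	int val;
	struct node *next;
};

/* In-place singly-linked list reversal, the same operation
 * llist_reverse_order() performs on the drained work list. */
static struct node *reverse(struct node *head)
{
	struct node *prev = NULL;

	while (head) {
		struct node *next = head->next;

		head->next = prev;	/* point this node backwards */
		prev = head;
		head = next;
	}
	return prev;
}
```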
···371371		if (t != current && !(t->flags & PF_POSTCOREDUMP)) {
372372			sigaddset(&t->pending.signal, SIGKILL);
373373			signal_wake_up(t, 1);
374374-			nr++;
374374+			/* The vhost_worker does not participate in coredumps */
375375+			if ((t->flags & (PF_USER_WORKER | PF_IO_WORKER)) != PF_USER_WORKER)
376376+				nr++;
375377		}
376378	}
377379
+4-1
fs/ext4/ext4.h
···918918 * where the second inode has larger inode number919919 * than the first920920 * I_DATA_SEM_QUOTA - Used for quota inodes only921921+ * I_DATA_SEM_EA - Used for ea_inodes only921922 */922923enum {923924 I_DATA_SEM_NORMAL = 0,924925 I_DATA_SEM_OTHER,925926 I_DATA_SEM_QUOTA,927927+ I_DATA_SEM_EA926928};927929928930···29032901 EXT4_IGET_NORMAL = 0,29042902 EXT4_IGET_SPECIAL = 0x0001, /* OK to iget a system inode */29052903 EXT4_IGET_HANDLE = 0x0002, /* Inode # is from a handle */29062906- EXT4_IGET_BAD = 0x0004 /* Allow to iget a bad inode */29042904+ EXT4_IGET_BAD = 0x0004, /* Allow to iget a bad inode */29052905+ EXT4_IGET_EA_INODE = 0x0008 /* Inode should contain an EA value */29072906} ext4_iget_flags;2908290729092908extern struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+7
fs/ext4/fsync.c
···108108 journal_t *journal = EXT4_SB(inode->i_sb)->s_journal;109109 tid_t commit_tid = datasync ? ei->i_datasync_tid : ei->i_sync_tid;110110111111+ /*112112+ * Fastcommit does not really support fsync on directories or other113113+ * special files. Force a full commit.114114+ */115115+ if (!S_ISREG(inode->i_mode))116116+ return ext4_force_commit(inode->i_sb);117117+111118 if (journal->j_flags & JBD2_BARRIER &&112119 !jbd2_trans_will_send_data_barrier(journal, commit_tid))113120 *needs_barrier = true;
+29-5
fs/ext4/inode.c
···46414641	inode_set_iversion_queried(inode, val);
46424642}
46434643
46444644+static const char *check_igot_inode(struct inode *inode, ext4_iget_flags flags)
46454645+
46464646+{
46474647+	if (flags & EXT4_IGET_EA_INODE) {
46484648+		if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL))
46494649+			return "missing EA_INODE flag";
46504650+		if (ext4_test_inode_state(inode, EXT4_STATE_XATTR) ||
46514651+		    EXT4_I(inode)->i_file_acl)
46524652+			return "ea_inode with extended attributes";
46534653+	} else {
46544654+		if ((EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL))
46554655+			return "unexpected EA_INODE flag";
46564656+	}
46574657+	if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD))
46584658+		return "unexpected bad inode w/o EXT4_IGET_BAD";
46594659+	return NULL;
46604660+}
46614661+
46444662struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
46454663			  ext4_iget_flags flags, const char *function,
46464664			  unsigned int line)
···46684650	struct ext4_inode_info *ei;
46694651	struct ext4_super_block *es = EXT4_SB(sb)->s_es;
46704652	struct inode *inode;
46534653+	const char *err_str;
46714654	journal_t *journal = EXT4_SB(sb)->s_journal;
46724655	long ret;
46734656	loff_t size;
···46964677	inode = iget_locked(sb, ino);
46974678	if (!inode)
46984679		return ERR_PTR(-ENOMEM);
46994699-	if (!(inode->i_state & I_NEW))
46804680+	if (!(inode->i_state & I_NEW)) {
46814681+		if ((err_str = check_igot_inode(inode, flags)) != NULL) {
46824682+			ext4_error_inode(inode, function, line, 0, err_str);
46834683+			iput(inode);
46844684+			return ERR_PTR(-EFSCORRUPTED);
46854685+		}
47004686		return inode;
46874687+	}
47014688
47024689	ei = EXT4_I(inode);
47034690	iloc.bh = NULL;
···49694944	if (IS_CASEFOLDED(inode) && !ext4_has_feature_casefold(inode->i_sb))
49704945		ext4_error_inode(inode, function, line, 0,
49714946				 "casefold flag without casefold feature");
49724972-	if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD)) {
49734973-		ext4_error_inode(inode, function, line, 0,
49744974-				 "bad inode without EXT4_IGET_BAD flag");
49754975-		ret = -EUCLEAN;
49474947+	if ((err_str = check_igot_inode(inode, flags)) != NULL) {
49484948+		ext4_error_inode(inode, function, line, 0, err_str);
49494949+		ret = -EFSCORRUPTED;
49764950		goto bad_inode;
49774951	}
49784952
+15-1
fs/ext4/mballoc.c
···20622062	if (bex->fe_len < gex->fe_len)
20632063		return;
20642064
20652065-	if (finish_group)
20652065+	if (finish_group || ac->ac_found > sbi->s_mb_min_to_scan)
20662066		ext4_mb_use_best_found(ac, e4b);
20672067}
20682068
···20732073 * previous found extent and if new one is better, then it's stored
20742074 * in the context. Later, the best found extent will be used, if
20752075 * mballoc can't find good enough extent.
20762076+ *
20772077+ * The algorithm used is roughly as follows:
20782078+ *
20792079+ * * If free extent found is exactly as big as goal, then
20802080+ *   stop the scan and use it immediately
20812081+ *
20822082+ * * If free extent found is smaller than goal, then keep retrying
20832083+ *   up to a max of sbi->s_mb_max_to_scan times (default 200). After
20842084+ *   that stop scanning and use whatever we have.
20852085+ *
20862086+ * * If free extent found is bigger than goal, then keep retrying
20872087+ *   up to a max of sbi->s_mb_min_to_scan times (default 10) before
20882088+ *   stopping the scan and using the extent.
20892089+ *
20762090 *
20772091 * FIXME: real allocation policy is to be designed yet!
20782092 */
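The policy spelled out in the comment above can be condensed into a single predicate. This is a sketch, not mballoc itself; the constants stand in for the tunable sbi->s_mb_min_to_scan / s_mb_max_to_scan defaults:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the sysfs-tunable defaults. */
#define MIN_TO_SCAN	10
#define MAX_TO_SCAN	200

/* Returns true when the scan should stop and the best extent found so
 * far be used: an exact fit stops immediately, an oversized fit stops
 * after MIN_TO_SCAN candidates, an undersized fit only after
 * MAX_TO_SCAN candidates. */
static bool should_stop_scan(int found_len, int goal_len, int nr_found)
{
	if (found_len == goal_len)
		return true;
	if (found_len > goal_len)
		return nr_found > MIN_TO_SCAN;
	return nr_found > MAX_TO_SCAN;
}
```

The asymmetry is deliberate: an oversized extent is already usable, so scanning longer buys little, while for undersized extents a longer scan may still turn up a full-sized fit.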
+12-12
fs/ext4/super.c
···65896589 }6590659065916591 /*65926592- * Reinitialize lazy itable initialization thread based on65936593- * current settings65946594- */65956595- if (sb_rdonly(sb) || !test_opt(sb, INIT_INODE_TABLE))65966596- ext4_unregister_li_request(sb);65976597- else {65986598- ext4_group_t first_not_zeroed;65996599- first_not_zeroed = ext4_has_uninit_itable(sb);66006600- ext4_register_li_request(sb, first_not_zeroed);66016601- }66026602-66036603- /*66046592 * Handle creation of system zone data early because it can fail.66056593 * Releasing of existing data is done when we are sure remount will66066594 * succeed.···6624663666256637 if (enable_rw)66266638 sb->s_flags &= ~SB_RDONLY;66396639+66406640+ /*66416641+ * Reinitialize lazy itable initialization thread based on66426642+ * current settings66436643+ */66446644+ if (sb_rdonly(sb) || !test_opt(sb, INIT_INODE_TABLE))66456645+ ext4_unregister_li_request(sb);66466646+ else {66476647+ ext4_group_t first_not_zeroed;66486648+ first_not_zeroed = ext4_has_uninit_itable(sb);66496649+ ext4_register_li_request(sb, first_not_zeroed);66506650+ }6627665166286652 if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))66296653 ext4_stop_mmpd(sbi);
+12-29
fs/ext4/xattr.c
···121121#ifdef CONFIG_LOCKDEP
122122void ext4_xattr_inode_set_class(struct inode *ea_inode)
123123{
124124+	struct ext4_inode_info *ei = EXT4_I(ea_inode);
125125+
124126	lockdep_set_subclass(&ea_inode->i_rwsem, 1);
127127+	(void) ei;	/* shut up clang warning if !CONFIG_LOCKDEP */
128128+	lockdep_set_subclass(&ei->i_data_sem, I_DATA_SEM_EA);
125129}
126130#endif
127131
···437433		return -EFSCORRUPTED;
438434	}
439435
440440-	inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_NORMAL);
436436+	inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_EA_INODE);
441437	if (IS_ERR(inode)) {
442438		err = PTR_ERR(inode);
443439		ext4_error(parent->i_sb,
···445441			   err);
446442		return err;
447443	}
448448-
449449-	if (is_bad_inode(inode)) {
450450-		ext4_error(parent->i_sb,
451451-			   "error while reading EA inode %lu is_bad_inode",
452452-			   ea_ino);
453453-		err = -EIO;
454454-		goto error;
455455-	}
456456-
457457-	if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) {
458458-		ext4_error(parent->i_sb,
459459-			   "EA inode %lu does not have EXT4_EA_INODE_FL flag",
460460-			   ea_ino);
461461-		err = -EINVAL;
462462-		goto error;
463463-	}
464464-
465444	ext4_xattr_inode_set_class(inode);
466445
467446	/*
···465478
466479	*ea_inode = inode;
467480	return 0;
468468-error:
469469-	iput(inode);
470470-	return err;
471481}
472482
473483/* Remove entry from mbcache when EA inode is getting evicted */
···15401556
15411557	while (ce) {
15421558		ea_inode = ext4_iget(inode->i_sb, ce->e_value,
15431543-				     EXT4_IGET_NORMAL);
15441544-		if (!IS_ERR(ea_inode) &&
15451545-		    !is_bad_inode(ea_inode) &&
15461546-		    (EXT4_I(ea_inode)->i_flags & EXT4_EA_INODE_FL) &&
15471547-		    i_size_read(ea_inode) == value_len &&
15591559+				     EXT4_IGET_EA_INODE);
15601560+		if (IS_ERR(ea_inode))
15611561+			goto next_entry;
15621562+		ext4_xattr_inode_set_class(ea_inode);
15631563+		if (i_size_read(ea_inode) == value_len &&
15481564		    !ext4_xattr_inode_read(ea_inode, ea_data, value_len) &&
15491565		    !ext4_xattr_inode_verify_hashes(ea_inode, NULL, ea_data,
15501566						    value_len) &&
···15541570			kvfree(ea_data);
15551571			return ea_inode;
15561572		}
15571557-
15581558-		if (!IS_ERR(ea_inode))
15591559-			iput(ea_inode);
15731573+		iput(ea_inode);
15741574+	next_entry:
15601575		ce = mb_cache_entry_find_next(ea_inode_cache, ce);
15611576	}
15621577	kvfree(ea_data);
+1-6
fs/nfsd/nfsctl.c
···690690 if (err != 0 || fd < 0)691691 return -EINVAL;692692693693- if (svc_alien_sock(net, fd)) {694694- printk(KERN_ERR "%s: socket net is different to NFSd's one\n", __func__);695695- return -EINVAL;696696- }697697-698693 err = nfsd_create_serv(net);699694 if (err != 0)700695 return err;701696702702- err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);697697+ err = svc_addsock(nn->nfsd_serv, net, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);703698704699 if (err >= 0 &&705700 !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
+9-1
fs/nfsd/vfs.c
···536536537537 inode_lock(inode);538538 for (retries = 1;;) {539539- host_err = __nfsd_setattr(dentry, iap);539539+ struct iattr attrs;540540+541541+ /*542542+ * notify_change() can alter its iattr argument, making543543+ * @iap unsuitable for submission multiple times. Make a544544+ * copy for every loop iteration.545545+ */546546+ attrs = *iap;547547+ host_err = __nfsd_setattr(dentry, &attrs);540548 if (host_err != -EAGAIN || !retries--)541549 break;542550 if (!nfsd_wait_for_delegreturn(rqstp, inode))
···135135/**136136 * iio_gts_find_sel_by_int_time - find selector matching integration time137137 * @gts: Gain time scale descriptor138138- * @gain: HW-gain for which matching selector is searched for138138+ * @time: Integration time for which matching selector is searched for139139 *140140 * Return: a selector matching given integration time or -EINVAL if141141 * selector was not found.
···617617 * Please note that, confusingly, "page_mapping" refers to the inode618618 * address_space which maps the page from disk; whereas "page_mapped"619619 * refers to user virtual address space into which the page is mapped.620620+ *621621+ * For slab pages, since slab reuses the bits in struct page to store its622622+ * internal states, the page->mapping does not exist as such, nor do these623623+ * flags below. So in order to avoid testing non-existent bits, please624624+ * make sure that PageSlab(page) actually evaluates to false before calling625625+ * the following functions (e.g., PageAnon). See mm/slab.h.620626 */621627#define PAGE_MAPPING_ANON 0x1622628#define PAGE_MAPPING_MOVABLE 0x2
+13-12
include/linux/pe.h
···1111#include <linux/types.h>12121313/*1414- * Linux EFI stub v1.0 adds the following functionality:1515- * - Loading initrd from the LINUX_EFI_INITRD_MEDIA_GUID device path,1616- * - Loading/starting the kernel from firmware that targets a different1717- * machine type, via the entrypoint exposed in the .compat PE/COFF section.1414+ * Starting from version v3.0, the major version field should be interpreted as1515+ * a bit mask of features supported by the kernel's EFI stub:1616+ * - 0x1: initrd loading from the LINUX_EFI_INITRD_MEDIA_GUID device path,1717+ * - 0x2: initrd loading using the initrd= command line option, where the file1818+ * may be specified using device path notation, and is not required to1919+ * reside on the same volume as the loaded kernel image.1820 *1921 * The recommended way of loading and starting v1.0 or later kernels is to use2022 * the LoadImage() and StartImage() EFI boot services, and expose the initrd2123 * via the LINUX_EFI_INITRD_MEDIA_GUID device path.2224 *2323- * Versions older than v1.0 support initrd loading via the image load options2424- * (using initrd=, limited to the volume from which the kernel itself was2525- * loaded), or via arch specific means (bootparams, DT, etc).2525+ * Versions older than v1.0 may support initrd loading via the image load2626+ * options (using initrd=, limited to the volume from which the kernel itself2727+ * was loaded), or only via arch specific means (bootparams, DT, etc).2628 *2727- * On x86, LoadImage() and StartImage() can be omitted if the EFI handover2828- * protocol is implemented, which can be inferred from the version,2929- * handover_offset and xloadflags fields in the bootparams structure.2929+ * The minor version field must remain 0x0.3030+ * (https://lore.kernel.org/all/efd6f2d4-547c-1378-1faa-53c044dbd297@gmail.com/)3031 */3131-#define LINUX_EFISTUB_MAJOR_VERSION 0x13232-#define LINUX_EFISTUB_MINOR_VERSION 0x13232+#define LINUX_EFISTUB_MAJOR_VERSION 0x33333+#define LINUX_EFISTUB_MINOR_VERSION 0x033343435/*3536 * LINUX_PE_MAGIC appears at offset 0x38 into the MS-DOS header of EFI bootable
-1
include/linux/sched/task.h
···2929 u32 io_thread:1;3030 u32 user_worker:1;3131 u32 no_files:1;3232- u32 ignore_signals:1;3332 unsigned long stack;3433 unsigned long stack_size;3534 unsigned long tls;
···11191119 * @vfh: pointer to &struct v4l2_fh11201120 * @state: pointer to &struct v4l2_subdev_state11211121 * @owner: module pointer to the owner of this file handle11221122+ * @client_caps: bitmask of ``V4L2_SUBDEV_CLIENT_CAP_*``11221123 */11231124struct v4l2_subdev_fh {11241125 struct v4l2_fh vfh;
···336336 * @sk_cgrp_data: cgroup data for this cgroup337337 * @sk_memcg: this socket's memory cgroup association338338 * @sk_write_pending: a write to stream socket waits to start339339+ * @sk_wait_pending: number of threads blocked on this socket339340 * @sk_state_change: callback to indicate change in the state of the sock340341 * @sk_data_ready: callback to indicate there is data to be processed341342 * @sk_write_space: callback to indicate there is bf sending space available···429428 unsigned int sk_napi_id;430429#endif431430 int sk_rcvbuf;431431+ int sk_wait_pending;432432433433 struct sk_filter __rcu *sk_filter;434434 union {···1176117411771175#define sk_wait_event(__sk, __timeo, __condition, __wait) \11781176 ({ int __rc; \11771177+ __sk->sk_wait_pending++; \11791178 release_sock(__sk); \11801179 __rc = __condition; \11811180 if (!__rc) { \···11861183 } \11871184 sched_annotate_sleep(); \11881185 lock_sock(__sk); \11861186+ __sk->sk_wait_pending--; \11891187 __rc = __condition; \11901188 __rc; \11911189 })
···2525{2626 struct io_epoll *epoll = io_kiocb_to_cmd(req, struct io_epoll);27272828- pr_warn_once("%s: epoll_ctl support in io_uring is deprecated and will "2929- "be removed in a future Linux kernel version.\n",3030- current->comm);3131-3228 if (sqe->buf_index || sqe->splice_fd_in)3329 return -EINVAL;3430
+4-1
kernel/exit.c
···411411 tsk->flags |= PF_POSTCOREDUMP;412412 core_state = tsk->signal->core_state;413413 spin_unlock_irq(&tsk->sighand->siglock);414414- if (core_state) {414414+415415+ /* The vhost_worker does not particpate in coredumps */416416+ if (core_state &&417417+ ((tsk->flags & (PF_IO_WORKER | PF_USER_WORKER)) != PF_USER_WORKER)) {415418 struct core_thread self;416419417420 self.task = current;
+5-8
kernel/fork.c
···23362336 p->flags &= ~PF_KTHREAD;23372337 if (args->kthread)23382338 p->flags |= PF_KTHREAD;23392339- if (args->user_worker)23402340- p->flags |= PF_USER_WORKER;23412341- if (args->io_thread) {23392339+ if (args->user_worker) {23422340 /*23432343- * Mark us an IO worker, and block any signal that isn't23412341+ * Mark us a user worker, and block any signal that isn't23442342 * fatal or STOP23452343 */23462346- p->flags |= PF_IO_WORKER;23442344+ p->flags |= PF_USER_WORKER;23472345 siginitsetinv(&p->blocked, sigmask(SIGKILL)|sigmask(SIGSTOP));23482346 }23472347+ if (args->io_thread)23482348+ p->flags |= PF_IO_WORKER;2349234923502350 if (args->name)23512351 strscpy_pad(p->comm, args->name, sizeof(p->comm));···25162516 retval = copy_thread(p, args);25172517 if (retval)25182518 goto bad_fork_cleanup_io;25192519-25202520- if (args->ignore_signals)25212521- ignore_signals(p);2522251925232520 stackleak_task_init(p);25242521
+1-1
kernel/module/decompress.c
···257257 do {258258 struct page *page = module_get_next_page(info);259259260260- if (!IS_ERR(page)) {260260+ if (IS_ERR(page)) {261261 retval = PTR_ERR(page);262262 goto out;263263 }
+24-52
kernel/module/main.c
···15211521 MOD_RODATA,15221522 MOD_RO_AFTER_INIT,15231523 MOD_DATA,15241524- MOD_INVALID, /* This is needed to match the masks array */15241524+ MOD_DATA,15251525 };15261526 static const int init_m_to_mem_type[] = {15271527 MOD_INIT_TEXT,15281528 MOD_INIT_RODATA,15291529 MOD_INVALID,15301530 MOD_INIT_DATA,15311531- MOD_INVALID, /* This is needed to match the masks array */15311531+ MOD_INIT_DATA,15321532 };1533153315341534 for (m = 0; m < ARRAY_SIZE(masks); ++m) {···30573057 return load_module(&info, uargs, 0);30583058}3059305930603060-static int file_init_module(struct file *file, const char __user * uargs, int flags)30603060+SYSCALL_DEFINE3(finit_module, int, fd, const char __user *, uargs, int, flags)30613061{30623062 struct load_info info = { };30633063 void *buf = NULL;30643064 int len;30653065-30663066- len = kernel_read_file(file, 0, &buf, INT_MAX, NULL,30673067- READING_MODULE);30683068- if (len < 0) {30693069- mod_stat_inc(&failed_kreads);30703070- mod_stat_add_long(len, &invalid_kread_bytes);30713071- return len;30723072- }30733073-30743074- if (flags & MODULE_INIT_COMPRESSED_FILE) {30753075- int err = module_decompress(&info, buf, len);30763076- vfree(buf); /* compressed data is no longer needed */30773077- if (err) {30783078- mod_stat_inc(&failed_decompress);30793079- mod_stat_add_long(len, &invalid_decompress_bytes);30803080- return err;30813081- }30823082- } else {30833083- info.hdr = buf;30843084- info.len = len;30853085- }30863086-30873087- return load_module(&info, uargs, flags);30883088-}30893089-30903090-/*30913091- * kernel_read_file() will already deny write access, but module30923092- * loading wants _exclusive_ access to the file, so we do that30933093- * here, along with basic sanity checks.30943094- */30953095-static int prepare_file_for_module_load(struct file *file)30963096-{30973097- if (!file || !(file->f_mode & FMODE_READ))30983098- return -EBADF;30993099- if (!S_ISREG(file_inode(file)->i_mode))31003100- return -EINVAL;31013101- 
return exclusive_deny_write_access(file);31023102-}31033103-31043104-SYSCALL_DEFINE3(finit_module, int, fd, const char __user *, uargs, int, flags)31053105-{31063106- struct fd f;31073065 int err;3108306631093067 err = may_init_module();···30753117 |MODULE_INIT_COMPRESSED_FILE))30763118 return -EINVAL;3077311930783078- f = fdget(fd);30793079- err = prepare_file_for_module_load(f.file);30803080- if (!err) {30813081- err = file_init_module(f.file, uargs, flags);30823082- allow_write_access(f.file);31203120+ len = kernel_read_file_from_fd(fd, 0, &buf, INT_MAX, NULL,31213121+ READING_MODULE);31223122+ if (len < 0) {31233123+ mod_stat_inc(&failed_kreads);31243124+ mod_stat_add_long(len, &invalid_kread_bytes);31253125+ return len;30833126 }30843084- fdput(f);30853085- return err;31273127+31283128+ if (flags & MODULE_INIT_COMPRESSED_FILE) {31293129+ err = module_decompress(&info, buf, len);31303130+ vfree(buf); /* compressed data is no longer needed */31313131+ if (err) {31323132+ mod_stat_inc(&failed_decompress);31333133+ mod_stat_add_long(len, &invalid_decompress_bytes);31343134+ return err;31353135+ }31363136+ } else {31373137+ info.hdr = buf;31383138+ info.len = len;31393139+ }31403140+31413141+ return load_module(&info, uargs, flags);30863142}3087314330883144/* Keep in sync with MODULE_FLAGS_BUF_SIZE !!! */
+5-3
kernel/signal.c
···1368136813691369 while_each_thread(p, t) {13701370 task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);13711371- count++;13711371+ /* Don't require de_thread to wait for the vhost_worker */13721372+ if ((t->flags & (PF_IO_WORKER | PF_USER_WORKER)) != PF_USER_WORKER)13731373+ count++;1372137413731375 /* Don't bother with already dead threads */13741376 if (t->exit_state)···28632861 }2864286228652863 /*28662866- * PF_IO_WORKER threads will catch and exit on fatal signals28642864+ * PF_USER_WORKER threads will catch and exit on fatal signals28672865 * themselves. They have cleanup that must be performed, so28682866 * we cannot call do_exit() on their behalf.28692867 */28702870- if (current->flags & PF_IO_WORKER)28682868+ if (current->flags & PF_USER_WORKER)28712869 goto out;2872287028732871 /*
+36-8
kernel/trace/trace.c
···6060 */6161bool ring_buffer_expanded;62626363+#ifdef CONFIG_FTRACE_STARTUP_TEST6364/*6465 * We need to change this state when a selftest is running.6566 * A selftest will lurk into the ring-buffer to count the···7675 */7776bool __read_mostly tracing_selftest_disabled;78777979-#ifdef CONFIG_FTRACE_STARTUP_TEST8078void __init disable_tracing_selftest(const char *reason)8179{8280 if (!tracing_selftest_disabled) {···8383 pr_info("Ftrace startup test is disabled due to %s\n", reason);8484 }8585}8686+#else8787+#define tracing_selftest_running 08888+#define tracing_selftest_disabled 08689#endif87908891/* Pipe tracepoints to printk */···10541051 if (!(tr->trace_flags & TRACE_ITER_PRINTK))10551052 return 0;1056105310571057- if (unlikely(tracing_selftest_running || tracing_disabled))10541054+ if (unlikely(tracing_selftest_running && tr == &global_trace))10551055+ return 0;10561056+10571057+ if (unlikely(tracing_disabled))10581058 return 0;1059105910601060 alloc = sizeof(*entry) + size + 2; /* possible \n added */···20472041 return 0;20482042}2049204320442044+static int do_run_tracer_selftest(struct tracer *type)20452045+{20462046+ int ret;20472047+20482048+ /*20492049+ * Tests can take a long time, especially if they are run one after the20502050+ * other, as does happen during bootup when all the tracers are20512051+ * registered. 
This could cause the soft lockup watchdog to trigger.20522052+ */20532053+ cond_resched();20542054+20552055+ tracing_selftest_running = true;20562056+ ret = run_tracer_selftest(type);20572057+ tracing_selftest_running = false;20582058+20592059+ return ret;20602060+}20612061+20502062static __init int init_trace_selftests(void)20512063{20522064 struct trace_selftests *p, *n;···21162092{21172093 return 0;21182094}20952095+static inline int do_run_tracer_selftest(struct tracer *type)20962096+{20972097+ return 0;20982098+}21192099#endif /* CONFIG_FTRACE_STARTUP_TEST */2120210021212101static void add_tracer_options(struct trace_array *tr, struct tracer *t);···2155212721562128 mutex_lock(&trace_types_lock);2157212921582158- tracing_selftest_running = true;21592159-21602130 for (t = trace_types; t; t = t->next) {21612131 if (strcmp(type->name, t->name) == 0) {21622132 /* already found */···21832157 /* store the tracer for __set_tracer_option */21842158 type->flags->trace = type;2185215921862186- ret = run_tracer_selftest(type);21602160+ ret = do_run_tracer_selftest(type);21872161 if (ret < 0)21882162 goto out;21892163···21922166 add_tracer_options(&global_trace, type);2193216721942168 out:21952195- tracing_selftest_running = false;21962169 mutex_unlock(&trace_types_lock);2197217021982171 if (ret || !default_bootup_tracer)···35153490 unsigned int trace_ctx;35163491 char *tbuffer;3517349235183518- if (tracing_disabled || tracing_selftest_running)34933493+ if (tracing_disabled)35193494 return 0;3520349535213496 /* Don't pollute graph traces with trace_vprintk internals */···35633538int trace_array_vprintk(struct trace_array *tr,35643539 unsigned long ip, const char *fmt, va_list args)35653540{35413541+ if (tracing_selftest_running && tr == &global_trace)35423542+ return 0;35433543+35663544 return __trace_array_vprintk(tr->array_buffer.buffer, ip, fmt, args);35673545}35683546···57805752 "\t table using the key(s) and value(s) named, and the value of a\n"57815753 "\t sum called 
'hitcount' is incremented. Keys and values\n"57825754 "\t correspond to fields in the event's format description. Keys\n"57835783- "\t can be any field, or the special string 'stacktrace'.\n"57555755+ "\t can be any field, or the special string 'common_stacktrace'.\n"57845756 "\t Compound keys consisting of up to two fields can be specified\n"57855757 "\t by the 'keys' keyword. Values must correspond to numeric\n"57865758 "\t fields. Sort keys consisting of up to two fields can be\n"
···13641364 if (field->field)13651365 field_name = field->field->name;13661366 else13671367- field_name = "stacktrace";13671367+ field_name = "common_stacktrace";13681368 } else if (field->flags & HIST_FIELD_FL_HITCOUNT)13691369 field_name = "hitcount";13701370···23672367 hist_data->enable_timestamps = true;23682368 if (*flags & HIST_FIELD_FL_TIMESTAMP_USECS)23692369 hist_data->attrs->ts_in_usecs = true;23702370- } else if (strcmp(field_name, "stacktrace") == 0) {23702370+ } else if (strcmp(field_name, "common_stacktrace") == 0) {23712371 *flags |= HIST_FIELD_FL_STACKTRACE;23722372 } else if (strcmp(field_name, "common_cpu") == 0)23732373 *flags |= HIST_FIELD_FL_CPU;···23782378 if (!field || !field->size) {23792379 /*23802380 * For backward compatibility, if field_name23812381- * was "cpu", then we treat this the same as23822382- * common_cpu. This also works for "CPU".23812381+ * was "cpu" or "stacktrace", then we treat this23822382+ * the same as common_cpu and common_stacktrace23832383+ * respectively. 
This also works for "CPU", and23842384+ * "STACKTRACE".23832385 */23842386 if (field && field->filter_type == FILTER_CPU) {23852387 *flags |= HIST_FIELD_FL_CPU;23882388+ } else if (field && field->filter_type == FILTER_STACKTRACE) {23892389+ *flags |= HIST_FIELD_FL_STACKTRACE;23862390 } else {23872391 hist_err(tr, HIST_ERR_FIELD_NOT_FOUND,23882392 errpos(field_name));···42424238 goto out;42434239 }4244424042454245- /* Some types cannot be a value */42464246- if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT |42474247- HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2 |42484248- HIST_FIELD_FL_SYM | HIST_FIELD_FL_SYM_OFFSET |42494249- HIST_FIELD_FL_SYSCALL | HIST_FIELD_FL_STACKTRACE)) {42504250- hist_err(file->tr, HIST_ERR_BAD_FIELD_MODIFIER, errpos(field_str));42514251- ret = -EINVAL;42414241+ /* values and variables should not have some modifiers */42424242+ if (hist_field->flags & HIST_FIELD_FL_VAR) {42434243+ /* Variable */42444244+ if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT |42454245+ HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2))42464246+ goto err;42474247+ } else {42484248+ /* Value */42494249+ if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT |42504250+ HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2 |42514251+ HIST_FIELD_FL_SYM | HIST_FIELD_FL_SYM_OFFSET |42524252+ HIST_FIELD_FL_SYSCALL | HIST_FIELD_FL_STACKTRACE))42534253+ goto err;42524254 }4253425542544256 hist_data->fields[val_idx] = hist_field;···42664256 ret = -EINVAL;42674257 out:42684258 return ret;42594259+ err:42604260+ hist_err(file->tr, HIST_ERR_BAD_FIELD_MODIFIER, errpos(field_str));42614261+ return -EINVAL;42694262}4270426342714264static int create_val_field(struct hist_trigger_data *hist_data,···53985385 if (key_field->field)53995386 seq_printf(m, "%s.stacktrace", key_field->field->name);54005387 else54015401- seq_puts(m, "stacktrace:\n");53885388+ seq_puts(m, "common_stacktrace:\n");54025389 hist_trigger_stacktrace_print(m,54035390 key + 
key_field->offset,54045391 HIST_STACKTRACE_DEPTH);···59815968 if (field->field)59825969 seq_printf(m, "%s.stacktrace", field->field->name);59835970 else59845984- seq_puts(m, "stacktrace");59715971+ seq_puts(m, "common_stacktrace");59855972 } else59865973 hist_field_print(m, field);59875974 }
+73-39
kernel/trace/trace_events_user.c
···9696 * these to track enablement sites that are tied to an event.9797 */9898struct user_event_enabler {9999- struct list_head link;9999+ struct list_head mm_enablers_link;100100 struct user_event *event;101101 unsigned long addr;102102103103 /* Track enable bit, flags, etc. Aligned for bitops. */104104- unsigned int values;104104+ unsigned long values;105105};106106107107/* Bits 0-5 are for the bit to update upon enable/disable (0-63 allowed) */···116116/* Only duplicate the bit value */117117#define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK118118119119-#define ENABLE_BITOPS(e) ((unsigned long *)&(e)->values)119119+#define ENABLE_BITOPS(e) (&(e)->values)120120+121121+#define ENABLE_BIT(e) ((int)((e)->values & ENABLE_VAL_BIT_MASK))120122121123/* Used for asynchronous faulting in of pages */122124struct user_event_enabler_fault {···155153#define VALIDATOR_REL (1 << 1)156154157155struct user_event_validator {158158- struct list_head link;156156+ struct list_head user_event_link;159157 int offset;160158 int flags;161159};···261259262260static void user_event_enabler_destroy(struct user_event_enabler *enabler)263261{264264- list_del_rcu(&enabler->link);262262+ list_del_rcu(&enabler->mm_enablers_link);265263266264 /* No longer tracking the event via the enabler */267265 refcount_dec(&enabler->event->refcnt);···425423426424 /* Update bit atomically, user tracers must be atomic as well */427425 if (enabler->event && enabler->event->status)428428- set_bit(enabler->values & ENABLE_VAL_BIT_MASK, ptr);426426+ set_bit(ENABLE_BIT(enabler), ptr);429427 else430430- clear_bit(enabler->values & ENABLE_VAL_BIT_MASK, ptr);428428+ clear_bit(ENABLE_BIT(enabler), ptr);431429432430 kunmap_local(kaddr);433431 unpin_user_pages_dirty_lock(&page, 1, true);···439437 unsigned long uaddr, unsigned char bit)440438{441439 struct user_event_enabler *enabler;442442- struct user_event_enabler *next;443440444444- list_for_each_entry_safe(enabler, next, &mm->enablers, link) {445445- if (enabler->addr 
== uaddr &&446446- (enabler->values & ENABLE_VAL_BIT_MASK) == bit)441441+ list_for_each_entry(enabler, &mm->enablers, mm_enablers_link) {442442+ if (enabler->addr == uaddr && ENABLE_BIT(enabler) == bit)447443 return true;448444 }449445···451451static void user_event_enabler_update(struct user_event *user)452452{453453 struct user_event_enabler *enabler;454454- struct user_event_mm *mm = user_event_mm_get_all(user);455454 struct user_event_mm *next;455455+ struct user_event_mm *mm;456456 int attempt;457457+458458+ lockdep_assert_held(&event_mutex);459459+460460+ /*461461+ * We need to build a one-shot list of all the mms that have an462462+ * enabler for the user_event passed in. This list is only valid463463+ * while holding the event_mutex. The only reason for this is due464464+ * to the global mm list being RCU protected and we use methods465465+ * which can wait (mmap_read_lock and pin_user_pages_remote).466466+ *467467+ * NOTE: user_event_mm_get_all() increments the ref count of each468468+ * mm that is added to the list to prevent removal timing windows.469469+ * We must always put each mm after they are used, which may wait.470470+ */471471+ mm = user_event_mm_get_all(user);457472458473 while (mm) {459474 next = mm->next;460475 mmap_read_lock(mm->mm);461461- rcu_read_lock();462476463463- list_for_each_entry_rcu(enabler, &mm->enablers, link) {477477+ list_for_each_entry(enabler, &mm->enablers, mm_enablers_link) {464478 if (enabler->event == user) {465479 attempt = 0;466480 user_event_enabler_write(mm, enabler, true, &attempt);467481 }468482 }469483470470- rcu_read_unlock();471484 mmap_read_unlock(mm->mm);472485 user_event_mm_put(mm);473486 mm = next;···508495 enabler->values = orig->values & ENABLE_VAL_DUP_MASK;509496510497 refcount_inc(&enabler->event->refcnt);511511- list_add_rcu(&enabler->link, &mm->enablers);498498+499499+ /* Enablers not exposed yet, RCU not required */500500+ list_add(&enabler->mm_enablers_link, &mm->enablers);512501513502 return 
true;514503}···529514 struct user_event_mm *mm;530515531516 /*517517+ * We use the mm->next field to build a one-shot list from the global518518+ * RCU protected list. To build this list the event_mutex must be held.519519+ * This lets us build a list without requiring allocs that could fail520520+ * when user based events are most wanted for diagnostics.521521+ */522522+ lockdep_assert_held(&event_mutex);523523+524524+ /*532525 * We do not want to block fork/exec while enablements are being533526 * updated, so we use RCU to walk the current tasks that have used534527 * user_events ABI for 1 or more events. Each enabler found in each···548525 */549526 rcu_read_lock();550527551551- list_for_each_entry_rcu(mm, &user_event_mms, link)552552- list_for_each_entry_rcu(enabler, &mm->enablers, link)528528+ list_for_each_entry_rcu(mm, &user_event_mms, mms_link) {529529+ list_for_each_entry_rcu(enabler, &mm->enablers, mm_enablers_link) {553530 if (enabler->event == user) {554531 mm->next = found;555532 found = user_event_mm_get(mm);556533 break;557534 }535535+ }536536+ }558537559538 rcu_read_unlock();560539561540 return found;562541}563542564564-static struct user_event_mm *user_event_mm_create(struct task_struct *t)543543+static struct user_event_mm *user_event_mm_alloc(struct task_struct *t)565544{566545 struct user_event_mm *user_mm;567567- unsigned long flags;568546569547 user_mm = kzalloc(sizeof(*user_mm), GFP_KERNEL_ACCOUNT);570548···576552 INIT_LIST_HEAD(&user_mm->enablers);577553 refcount_set(&user_mm->refcnt, 1);578554 refcount_set(&user_mm->tasks, 1);579579-580580- spin_lock_irqsave(&user_event_mms_lock, flags);581581- list_add_rcu(&user_mm->link, &user_event_mms);582582- spin_unlock_irqrestore(&user_event_mms_lock, flags);583583-584584- t->user_event_mm = user_mm;585555586556 /*587557 * The lifetime of the memory descriptor can slightly outlast···590572 return user_mm;591573}592574575575+static void user_event_mm_attach(struct user_event_mm *user_mm, struct 
task_struct *t)576576+{577577+ unsigned long flags;578578+579579+ spin_lock_irqsave(&user_event_mms_lock, flags);580580+ list_add_rcu(&user_mm->mms_link, &user_event_mms);581581+ spin_unlock_irqrestore(&user_event_mms_lock, flags);582582+583583+ t->user_event_mm = user_mm;584584+}585585+593586static struct user_event_mm *current_user_event_mm(void)594587{595588 struct user_event_mm *user_mm = current->user_event_mm;···608579 if (user_mm)609580 goto inc;610581611611- user_mm = user_event_mm_create(current);582582+ user_mm = user_event_mm_alloc(current);612583613584 if (!user_mm)614585 goto error;586586+587587+ user_event_mm_attach(user_mm, current);615588inc:616589 refcount_inc(&user_mm->refcnt);617590error:···624593{625594 struct user_event_enabler *enabler, *next;626595627627- list_for_each_entry_safe(enabler, next, &mm->enablers, link)596596+ list_for_each_entry_safe(enabler, next, &mm->enablers, mm_enablers_link)628597 user_event_enabler_destroy(enabler);629598630599 mmdrop(mm->mm);···661630662631 /* Remove the mm from the list, so it can no longer be enabled */663632 spin_lock_irqsave(&user_event_mms_lock, flags);664664- list_del_rcu(&mm->link);633633+ list_del_rcu(&mm->mms_link);665634 spin_unlock_irqrestore(&user_event_mms_lock, flags);666635667636 /*···701670702671void user_event_mm_dup(struct task_struct *t, struct user_event_mm *old_mm)703672{704704- struct user_event_mm *mm = user_event_mm_create(t);673673+ struct user_event_mm *mm = user_event_mm_alloc(t);705674 struct user_event_enabler *enabler;706675707676 if (!mm)···709678710679 rcu_read_lock();711680712712- list_for_each_entry_rcu(enabler, &old_mm->enablers, link)681681+ list_for_each_entry_rcu(enabler, &old_mm->enablers, mm_enablers_link) {713682 if (!user_event_enabler_dup(enabler, mm))714683 goto error;684684+ }715685716686 rcu_read_unlock();717687688688+ user_event_mm_attach(mm, t);718689 return;719690error:720691 rcu_read_unlock();721721- user_event_mm_remove(t);692692+ 
user_event_mm_destroy(mm);722693}723694724695static bool current_user_event_enabler_exists(unsigned long uaddr,···781748 */782749 if (!*write_result) {783750 refcount_inc(&enabler->event->refcnt);784784- list_add_rcu(&enabler->link, &user_mm->enablers);751751+ list_add_rcu(&enabler->mm_enablers_link, &user_mm->enablers);785752 }786753787754 mutex_unlock(&event_mutex);···937904 struct user_event_validator *validator, *next;938905 struct list_head *head = &user->validators;939906940940- list_for_each_entry_safe(validator, next, head, link) {941941- list_del(&validator->link);907907+ list_for_each_entry_safe(validator, next, head, user_event_link) {908908+ list_del(&validator->user_event_link);942909 kfree(validator);943910 }944911}···992959 validator->offset = offset;993960994961 /* Want sequential access when validating */995995- list_add_tail(&validator->link, &user->validators);962962+ list_add_tail(&validator->user_event_link, &user->validators);996963997964add_field:998965 field->type = type;···13821349 void *pos, *end = data + len;13831350 u32 loc, offset, size;1384135113851385- list_for_each_entry(validator, head, link) {13521352+ list_for_each_entry(validator, head, user_event_link) {13861353 pos = data + validator->offset;1387135413881355 /* Already done min_size check, no bounds check here */···23032270 */23042271 mutex_lock(&event_mutex);2305227223062306- list_for_each_entry_safe(enabler, next, &mm->enablers, link)22732273+ list_for_each_entry_safe(enabler, next, &mm->enablers, mm_enablers_link) {23072274 if (enabler->addr == reg.disable_addr &&23082308- (enabler->values & ENABLE_VAL_BIT_MASK) == reg.disable_bit) {22752275+ ENABLE_BIT(enabler) == reg.disable_bit) {23092276 set_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler));2310227723112278 if (!test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)))···23142281 /* Removed at least one */23152282 ret = 0;23162283 }22842284+ }2317228523182286 mutex_unlock(&event_mutex);23192287
···848848 }849849850850#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS851851+ /*852852+ * These tests can take some time to run. Make sure on non PREEMPT853853+ * kernels, we do not trigger the softlockup detector.854854+ */855855+ cond_resched();856856+851857 tracing_reset_online_cpus(&tr->array_buffer);852858 set_graph_array(tr);853859···874868 (unsigned long)ftrace_stub_direct_tramp);875869 if (ret)876870 goto out;871871+872872+ cond_resched();877873878874 ret = register_ftrace_graph(&fgraph_ops);879875 if (ret) {···898890 true);899891 if (ret)900892 goto out;893893+894894+ cond_resched();901895902896 tracing_start();903897
+61-31
kernel/vhost_task.c
···1212 VHOST_TASK_FLAGS_STOP,1313};14141515+struct vhost_task {1616+ bool (*fn)(void *data);1717+ void *data;1818+ struct completion exited;1919+ unsigned long flags;2020+ struct task_struct *task;2121+};2222+1523static int vhost_task_fn(void *data)1624{1725 struct vhost_task *vtsk = data;1818- int ret;2626+ bool dead = false;19272020- ret = vtsk->fn(vtsk->data);2828+ for (;;) {2929+ bool did_work;3030+3131+ /* mb paired w/ vhost_task_stop */3232+ if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags))3333+ break;3434+3535+ if (!dead && signal_pending(current)) {3636+ struct ksignal ksig;3737+ /*3838+ * Calling get_signal will block in SIGSTOP,3939+ * or clear fatal_signal_pending, but remember4040+ * what was set.4141+ *4242+ * This thread won't actually exit until all4343+ * of the file descriptors are closed, and4444+ * the release function is called.4545+ */4646+ dead = get_signal(&ksig);4747+ if (dead)4848+ clear_thread_flag(TIF_SIGPENDING);4949+ }5050+5151+ did_work = vtsk->fn(vtsk->data);5252+ if (!did_work) {5353+ set_current_state(TASK_INTERRUPTIBLE);5454+ schedule();5555+ }5656+ }5757+2158 complete(&vtsk->exited);2222- do_exit(ret);5959+ do_exit(0);2360}6161+6262+/**6363+ * vhost_task_wake - wakeup the vhost_task6464+ * @vtsk: vhost_task to wake6565+ *6666+ * wake up the vhost_task worker thread6767+ */6868+void vhost_task_wake(struct vhost_task *vtsk)6969+{7070+ wake_up_process(vtsk->task);7171+}7272+EXPORT_SYMBOL_GPL(vhost_task_wake);24732574/**2675 * vhost_task_stop - stop a vhost_task2776 * @vtsk: vhost_task to stop2877 *2929- * Callers must call vhost_task_should_stop and return from their worker3030- * function when it returns true;7878+ * vhost_task_fn ensures the worker thread exits after7979+ * VHOST_TASK_FLAGS_SOP becomes true.3180 */3281void vhost_task_stop(struct vhost_task *vtsk)3382{3434- pid_t pid = vtsk->task->pid;3535-3683 set_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags);3737- wake_up_process(vtsk->task);8484+ vhost_task_wake(vtsk);3885 /*3986 * 
Make sure vhost_task_fn is no longer accessing the vhost_task before4040- * freeing it below. If userspace crashed or exited without closing,4141- * then the vhost_task->task could already be marked dead so4242- * kernel_wait will return early.8787+ * freeing it below.4388 */4489 wait_for_completion(&vtsk->exited);4545- /*4646- * If we are just closing/removing a device and the parent process is4747- * not exiting then reap the task.4848- */4949- kernel_wait4(pid, NULL, __WCLONE, NULL);5090 kfree(vtsk);5191}5292EXPORT_SYMBOL_GPL(vhost_task_stop);53935494/**5555- * vhost_task_should_stop - should the vhost task return from the work function5656- * @vtsk: vhost_task to stop5757- */5858-bool vhost_task_should_stop(struct vhost_task *vtsk)5959-{6060- return test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags);6161-}6262-EXPORT_SYMBOL_GPL(vhost_task_should_stop);6363-6464-/**6565- * vhost_task_create - create a copy of a process to be used by the kernel6666- * @fn: thread stack9595+ * vhost_task_create - create a copy of a task to be used by the kernel9696+ * @fn: vhost worker function6797 * @arg: data to be passed to fn6898 * @name: the thread's name6999 *···10171 * failure. The returned task is inactive, and the caller must fire it up10272 * through vhost_task_start().10373 */104104-struct vhost_task *vhost_task_create(int (*fn)(void *), void *arg,7474+struct vhost_task *vhost_task_create(bool (*fn)(void *), void *arg,10575 const char *name)10676{10777 struct kernel_clone_args args = {108108- .flags = CLONE_FS | CLONE_UNTRACED | CLONE_VM,7878+ .flags = CLONE_FS | CLONE_UNTRACED | CLONE_VM |7979+ CLONE_THREAD | CLONE_SIGHAND,10980 .exit_signal = 0,11081 .fn = vhost_task_fn,11182 .name = name,11283 .user_worker = 1,11384 .no_files = 1,114114- .ignore_signals = 1,11585 };11686 struct vhost_task *vtsk;11787 struct task_struct *tsk;
+65-20
lib/test_firmware.c
···4545 bool sent;4646 const struct firmware *fw;4747 const char *name;4848+ const char *fw_buf;4849 struct completion completion;4950 struct task_struct *task;5051 struct device *dev;···176175177176 for (i = 0; i < test_fw_config->num_requests; i++) {178177 req = &test_fw_config->reqs[i];179179- if (req->fw)178178+ if (req->fw) {179179+ if (req->fw_buf) {180180+ kfree_const(req->fw_buf);181181+ req->fw_buf = NULL;182182+ }180183 release_firmware(req->fw);184184+ req->fw = NULL;185185+ }181186 }182187183188 vfree(test_fw_config->reqs);···360353 return len;361354}362355356356+static inline int __test_dev_config_update_bool(const char *buf, size_t size,357357+ bool *cfg)358358+{359359+ int ret;360360+361361+ if (kstrtobool(buf, cfg) < 0)362362+ ret = -EINVAL;363363+ else364364+ ret = size;365365+366366+ return ret;367367+}368368+363369static int test_dev_config_update_bool(const char *buf, size_t size,364370 bool *cfg)365371{366372 int ret;367373368374 mutex_lock(&test_fw_mutex);369369- if (kstrtobool(buf, cfg) < 0)370370- ret = -EINVAL;371371- else372372- ret = size;375375+ ret = __test_dev_config_update_bool(buf, size, cfg);373376 mutex_unlock(&test_fw_mutex);374377375378 return ret;···390373 return snprintf(buf, PAGE_SIZE, "%d\n", val);391374}392375393393-static int test_dev_config_update_size_t(const char *buf,376376+static int __test_dev_config_update_size_t(377377+ const char *buf,394378 size_t size,395379 size_t *cfg)396380{···402384 if (ret)403385 return ret;404386405405- mutex_lock(&test_fw_mutex);406387 *(size_t *)cfg = new;407407- mutex_unlock(&test_fw_mutex);408388409389 /* Always return full write size even if we didn't consume all */410390 return size;···418402 return snprintf(buf, PAGE_SIZE, "%d\n", val);419403}420404421421-static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)405405+static int __test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)422406{423407 u8 val;424408 int ret;···427411 if (ret)428412 return 
ret;429413430430- mutex_lock(&test_fw_mutex);431414 *(u8 *)cfg = val;432432- mutex_unlock(&test_fw_mutex);433415434416 /* Always return full write size even if we didn't consume all */435417 return size;418418+}419419+420420+static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)421421+{422422+ int ret;423423+424424+ mutex_lock(&test_fw_mutex);425425+ ret = __test_dev_config_update_u8(buf, size, cfg);426426+ mutex_unlock(&test_fw_mutex);427427+428428+ return ret;436429}437430438431static ssize_t test_dev_config_show_u8(char *buf, u8 val)···496471 mutex_unlock(&test_fw_mutex);497472 goto out;498473 }499499- mutex_unlock(&test_fw_mutex);500474501501- rc = test_dev_config_update_u8(buf, count,502502- &test_fw_config->num_requests);475475+ rc = __test_dev_config_update_u8(buf, count,476476+ &test_fw_config->num_requests);477477+ mutex_unlock(&test_fw_mutex);503478504479out:505480 return rc;···543518 mutex_unlock(&test_fw_mutex);544519 goto out;545520 }546546- mutex_unlock(&test_fw_mutex);547521548548- rc = test_dev_config_update_size_t(buf, count,549549- &test_fw_config->buf_size);522522+ rc = __test_dev_config_update_size_t(buf, count,523523+ &test_fw_config->buf_size);524524+ mutex_unlock(&test_fw_mutex);550525551526out:552527 return rc;···573548 mutex_unlock(&test_fw_mutex);574549 goto out;575550 }576576- mutex_unlock(&test_fw_mutex);577551578578- rc = test_dev_config_update_size_t(buf, count,579579- &test_fw_config->file_offset);552552+ rc = __test_dev_config_update_size_t(buf, count,553553+ &test_fw_config->file_offset);554554+ mutex_unlock(&test_fw_mutex);580555581556out:582557 return rc;···677652678653 mutex_lock(&test_fw_mutex);679654 release_firmware(test_firmware);655655+ if (test_fw_config->reqs)656656+ __test_release_all_firmware();680657 test_firmware = NULL;681658 rc = request_firmware(&test_firmware, name, dev);682659 if (rc) {···779752 mutex_lock(&test_fw_mutex);780753 release_firmware(test_firmware);781754 test_firmware = 
NULL;755755+ if (test_fw_config->reqs)756756+ __test_release_all_firmware();782757 rc = request_firmware_nowait(THIS_MODULE, 1, name, dev, GFP_KERNEL,783758 NULL, trigger_async_request_cb);784759 if (rc) {···823794824795 mutex_lock(&test_fw_mutex);825796 release_firmware(test_firmware);797797+ if (test_fw_config->reqs)798798+ __test_release_all_firmware();826799 test_firmware = NULL;827800 rc = request_firmware_nowait(THIS_MODULE, FW_ACTION_NOUEVENT, name,828801 dev, GFP_KERNEL, NULL,···887856 test_fw_config->buf_size);888857 if (!req->fw)889858 kfree(test_buf);859859+ else860860+ req->fw_buf = test_buf;890861 } else {891862 req->rc = test_fw_config->req_firmware(&req->fw,892863 req->name,···928895929896 mutex_lock(&test_fw_mutex);930897898898+ if (test_fw_config->reqs) {899899+ rc = -EBUSY;900900+ goto out_bail;901901+ }902902+931903 test_fw_config->reqs =932904 vzalloc(array3_size(sizeof(struct test_batched_req),933905 test_fw_config->num_requests, 2));···949911 req->fw = NULL;950912 req->idx = i;951913 req->name = test_fw_config->name;914914+ req->fw_buf = NULL;952915 req->dev = dev;953916 init_completion(&req->completion);954917 req->task = kthread_run(test_fw_run_batch_request, req,···10329931033994 mutex_lock(&test_fw_mutex);1034995996996+ if (test_fw_config->reqs) {997997+ rc = -EBUSY;998998+ goto out_bail;999999+ }10001000+10351001 test_fw_config->reqs =10361002 vzalloc(array3_size(sizeof(struct test_batched_req),10371003 test_fw_config->num_requests, 2));···10541010 for (i = 0; i < test_fw_config->num_requests; i++) {10551011 req = &test_fw_config->reqs[i];10561012 req->name = test_fw_config->name;10131013+ req->fw_buf = NULL;10571014 req->fw = NULL;10581015 req->idx = i;10591016 init_completion(&req->completion);
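Most of the test_firmware changes follow one idiom: store handlers must check `test_fw_config->reqs` and apply the update under a single hold of test_fw_mutex, so each locking helper is split into a lock-free `__`-prefixed variant plus a thin locking wrapper. A sketch of that idiom with hypothetical names (a depth counter stands in for the mutex so misuse trips an assert):

```c
#include <assert.h>

int lock_depth;	/* stand-in for test_fw_mutex; the real code uses a mutex */

void toy_lock(void)   { assert(lock_depth == 0); lock_depth++; }
void toy_unlock(void) { assert(lock_depth == 1); lock_depth--; }

/* __helper: caller must already hold the lock. */
int __toy_update_u8(unsigned char val, unsigned char *cfg)
{
	assert(lock_depth == 1);	/* in spirit: lockdep_assert_held() */
	*cfg = val;
	return 0;
}

/* Plain wrapper for callers that hold nothing, as before the split. */
int toy_update_u8(unsigned char val, unsigned char *cfg)
{
	int ret;

	toy_lock();
	ret = __toy_update_u8(val, cfg);
	toy_unlock();
	return ret;
}

/* A store handler that must check state and update in ONE critical
 * section; calling the non-__ wrapper here would either self-deadlock
 * or force dropping the lock between the check and the update. */
int toy_store(unsigned char val, unsigned char *cfg, int busy)
{
	int ret;

	toy_lock();
	if (busy) {
		toy_unlock();
		return -16;	/* -EBUSY */
	}
	ret = __toy_update_u8(val, cfg);
	toy_unlock();
	return ret;
}
```

The same motivation explains why the stores above release test_fw_mutex only after the `__` helper returns, instead of unlocking before the update as the old code did.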
+1
mm/Kconfig.debug
···9898config PAGE_TABLE_CHECK9999 bool "Check for invalid mappings in user page tables"100100 depends on ARCH_SUPPORTS_PAGE_TABLE_CHECK101101+ depends on EXCLUSIVE_SYSTEM_RAM101102 select PAGE_EXTENSION102103 help103104 Check that anonymous page is not being mapped twice with read write
+6
mm/page_table_check.c
···71717272 page = pfn_to_page(pfn);7373 page_ext = page_ext_get(page);7474+7575+ BUG_ON(PageSlab(page));7476 anon = PageAnon(page);75777678 for (i = 0; i < pgcnt; i++) {···109107110108 page = pfn_to_page(pfn);111109 page_ext = page_ext_get(page);110110+111111+ BUG_ON(PageSlab(page));112112 anon = PageAnon(page);113113114114 for (i = 0; i < pgcnt; i++) {···136132{137133 struct page_ext *page_ext;138134 unsigned long i;135135+136136+ BUG_ON(PageSlab(page));139137140138 page_ext = page_ext_get(page);141139 BUG_ON(!page_ext);
+38-16
net/core/rtnetlink.c
···23852385 if (tb[IFLA_BROADCAST] &&23862386 nla_len(tb[IFLA_BROADCAST]) < dev->addr_len)23872387 return -EINVAL;23882388+23892389+ if (tb[IFLA_GSO_MAX_SIZE] &&23902390+ nla_get_u32(tb[IFLA_GSO_MAX_SIZE]) > dev->tso_max_size) {23912391+ NL_SET_ERR_MSG(extack, "too big gso_max_size");23922392+ return -EINVAL;23932393+ }23942394+23952395+ if (tb[IFLA_GSO_MAX_SEGS] &&23962396+ (nla_get_u32(tb[IFLA_GSO_MAX_SEGS]) > GSO_MAX_SEGS ||23972397+ nla_get_u32(tb[IFLA_GSO_MAX_SEGS]) > dev->tso_max_segs)) {23982398+ NL_SET_ERR_MSG(extack, "too big gso_max_segs");23992399+ return -EINVAL;24002400+ }24012401+24022402+ if (tb[IFLA_GRO_MAX_SIZE] &&24032403+ nla_get_u32(tb[IFLA_GRO_MAX_SIZE]) > GRO_MAX_SIZE) {24042404+ NL_SET_ERR_MSG(extack, "too big gro_max_size");24052405+ return -EINVAL;24062406+ }24072407+24082408+ if (tb[IFLA_GSO_IPV4_MAX_SIZE] &&24092409+ nla_get_u32(tb[IFLA_GSO_IPV4_MAX_SIZE]) > dev->tso_max_size) {24102410+ NL_SET_ERR_MSG(extack, "too big gso_ipv4_max_size");24112411+ return -EINVAL;24122412+ }24132413+24142414+ if (tb[IFLA_GRO_IPV4_MAX_SIZE] &&24152415+ nla_get_u32(tb[IFLA_GRO_IPV4_MAX_SIZE]) > GRO_MAX_SIZE) {24162416+ NL_SET_ERR_MSG(extack, "too big gro_ipv4_max_size");24172417+ return -EINVAL;24182418+ }23882419 }2389242023902421 if (tb[IFLA_AF_SPEC]) {···28892858 if (tb[IFLA_GSO_MAX_SIZE]) {28902859 u32 max_size = nla_get_u32(tb[IFLA_GSO_MAX_SIZE]);2891286028922892- if (max_size > dev->tso_max_size) {28932893- err = -EINVAL;28942894- goto errout;28952895- }28962896-28972861 if (dev->gso_max_size ^ max_size) {28982862 netif_set_gso_max_size(dev, max_size);28992863 status |= DO_SETLINK_MODIFIED;···2897287128982872 if (tb[IFLA_GSO_MAX_SEGS]) {28992873 u32 max_segs = nla_get_u32(tb[IFLA_GSO_MAX_SEGS]);29002900-29012901- if (max_segs > GSO_MAX_SEGS || max_segs > dev->tso_max_segs) {29022902- err = -EINVAL;29032903- goto errout;29042904- }2905287429062875 if (dev->gso_max_segs ^ max_segs) {29072876 netif_set_gso_max_segs(dev, max_segs);···2915289429162895 if 
(tb[IFLA_GSO_IPV4_MAX_SIZE]) {29172896 u32 max_size = nla_get_u32(tb[IFLA_GSO_IPV4_MAX_SIZE]);29182918-29192919- if (max_size > dev->tso_max_size) {29202920- err = -EINVAL;29212921- goto errout;29222922- }2923289729242898 if (dev->gso_ipv4_max_size ^ max_size) {29252899 netif_set_gso_ipv4_max_size(dev, max_size);···33013285 struct net_device *dev;33023286 unsigned int num_tx_queues = 1;33033287 unsigned int num_rx_queues = 1;32883288+ int err;3304328933053290 if (tb[IFLA_NUM_TX_QUEUES])33063291 num_tx_queues = nla_get_u32(tb[IFLA_NUM_TX_QUEUES]);···33373320 if (!dev)33383321 return ERR_PTR(-ENOMEM);3339332233233323+ err = validate_linkmsg(dev, tb, extack);33243324+ if (err < 0) {33253325+ free_netdev(dev);33263326+ return ERR_PTR(err);33273327+ }33283328+33403329 dev_net_set(dev, net);33413330 dev->rtnl_link_ops = ops;33423331 dev->rtnl_link_state = RTNL_LINK_INITIALIZING;3343333233443333 if (tb[IFLA_MTU]) {33453334 u32 mtu = nla_get_u32(tb[IFLA_MTU]);33463346- int err;3347333533483336 err = dev_validate_mtu(dev, mtu, extack);33493337 if (err) {
···30813081 int old_state = sk->sk_state;30823082 u32 seq;3083308330843084+ /* Deny disconnect if other threads are blocked in sk_wait_event()30853085+ * or inet_wait_for_connect().30863086+ */30873087+ if (sk->sk_wait_pending)30883088+ return -EBUSY;30893089+30843090 if (old_state != TCP_CLOSE)30853091 tcp_set_state(sk, TCP_CLOSE);30863092···40784072 switch (optname) {40794073 case TCP_MAXSEG:40804074 val = tp->mss_cache;40814081- if (!val && ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)))40754075+ if (tp->rx_opt.user_mss &&40764076+ ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)))40824077 val = tp->rx_opt.user_mss;40834078 if (tp->repair)40844079 val = tp->rx_opt.mss_clamp;
···290290void tcp_delack_timer_handler(struct sock *sk)291291{292292 struct inet_connection_sock *icsk = inet_csk(sk);293293+ struct tcp_sock *tp = tcp_sk(sk);293294294294- if (((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) ||295295- !(icsk->icsk_ack.pending & ICSK_ACK_TIMER))295295+ if ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))296296+ return;297297+298298+ /* Handling the sack compression case */299299+ if (tp->compressed_ack) {300300+ tcp_mstamp_refresh(tp);301301+ tcp_sack_compress_send_ack(sk);302302+ return;303303+ }304304+305305+ if (!(icsk->icsk_ack.pending & ICSK_ACK_TIMER))296306 return;297307298308 if (time_after(icsk->icsk_ack.timeout, jiffies)) {···322312 inet_csk_exit_pingpong_mode(sk);323313 icsk->icsk_ack.ato = TCP_ATO_MIN;324314 }325325- tcp_mstamp_refresh(tcp_sk(sk));315315+ tcp_mstamp_refresh(tp);326316 tcp_send_ack(sk);327317 __NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKS);328318 }
+78-62
net/mptcp/protocol.c
···9090 if (err)9191 return err;92929393- msk->first = ssock->sk;9494- msk->subflow = ssock;9393+ WRITE_ONCE(msk->first, ssock->sk);9494+ WRITE_ONCE(msk->subflow, ssock);9595 subflow = mptcp_subflow_ctx(ssock->sk);9696 list_add(&subflow->node, &msk->conn_list);9797 sock_hold(ssock->sk);···603603 WRITE_ONCE(msk->ack_seq, msk->ack_seq + 1);604604 WRITE_ONCE(msk->rcv_data_fin, 0);605605606606- sk->sk_shutdown |= RCV_SHUTDOWN;606606+ WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN);607607 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */608608609609 switch (sk->sk_state) {···825825 mptcp_data_unlock(sk);826826}827827828828+static void mptcp_subflow_joined(struct mptcp_sock *msk, struct sock *ssk)829829+{830830+ mptcp_subflow_ctx(ssk)->map_seq = READ_ONCE(msk->ack_seq);831831+ WRITE_ONCE(msk->allow_infinite_fallback, false);832832+ mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC);833833+}834834+828835static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)829836{830837 struct sock *sk = (struct sock *)msk;···846839 mptcp_sock_graft(ssk, sk->sk_socket);847840848841 mptcp_sockopt_sync_locked(msk, ssk);842842+ mptcp_subflow_joined(msk, ssk);849843 return true;850844}851845···918910 /* hopefully temporary hack: propagate shutdown status919911 * to msk, when all subflows agree on it920912 */921921- sk->sk_shutdown |= RCV_SHUTDOWN;913913+ WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN);922914923915 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */924916 sk->sk_data_ready(sk);···1710170217111703 lock_sock(ssk);17121704 msg->msg_flags |= MSG_DONTWAIT;17131713- msk->connect_flags = O_NONBLOCK;17141705 msk->fastopening = 1;17151706 ret = tcp_sendmsg_fastopen(ssk, msg, copied_syn, len, NULL);17161707 msk->fastopening = 0;···22902283{22912284 if (msk->subflow) {22922285 iput(SOCK_INODE(msk->subflow));22932293- msk->subflow = NULL;22862286+ WRITE_ONCE(msk->subflow, NULL);22942287 
}22952288}22962289···24272420 sock_put(ssk);2428242124292422 if (ssk == msk->first)24302430- msk->first = NULL;24232423+ WRITE_ONCE(msk->first, NULL);2431242424322425out:24332426 if (ssk == msk->last_snd)···25342527 }2535252825362529 inet_sk_state_store(sk, TCP_CLOSE);25372537- sk->sk_shutdown = SHUTDOWN_MASK;25302530+ WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);25382531 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */25392532 set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags);25402533···27282721 WRITE_ONCE(msk->rmem_released, 0);27292722 msk->timer_ival = TCP_RTO_MIN;2730272327312731- msk->first = NULL;27242724+ WRITE_ONCE(msk->first, NULL);27322725 inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;27332726 WRITE_ONCE(msk->csum_enabled, mptcp_is_checksum_enabled(sock_net(sk)));27342727 WRITE_ONCE(msk->allow_infinite_fallback, true);···29662959 bool do_cancel_work = false;29672960 int subflows_alive = 0;2968296129692969- sk->sk_shutdown = SHUTDOWN_MASK;29622962+ WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK);2970296329712964 if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) {29722965 mptcp_listen_inuse_dec(sk);···30463039 sock_put(sk);30473040}3048304130493049-void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk)30423042+static void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk)30503043{30513044#if IS_ENABLED(CONFIG_MPTCP_IPV6)30523045 const struct ipv6_pinfo *ssk6 = inet6_sk(ssk);···31093102 mptcp_pm_data_reset(msk);31103103 mptcp_ca_reset(sk);3111310431123112- sk->sk_shutdown = 0;31053105+ WRITE_ONCE(sk->sk_shutdown, 0);31133106 sk_error_report(sk);31143107 return 0;31153108}···31233116}31243117#endif3125311831263126-struct sock *mptcp_sk_clone(const struct sock *sk,31273127- const struct mptcp_options_received *mp_opt,31283128- struct request_sock *req)31193119+struct sock *mptcp_sk_clone_init(const struct sock *sk,31203120+ const struct mptcp_options_received *mp_opt,31213121+ struct sock *ssk,31223122+ struct request_sock 
*req)31293123{31303124 struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req);31313125 struct sock *nsk = sk_clone_lock(sk, GFP_ATOMIC);···31453137 msk = mptcp_sk(nsk);31463138 msk->local_key = subflow_req->local_key;31473139 msk->token = subflow_req->token;31483148- msk->subflow = NULL;31403140+ WRITE_ONCE(msk->subflow, NULL);31493141 msk->in_accept_queue = 1;31503142 WRITE_ONCE(msk->fully_established, false);31513143 if (mp_opt->suboptions & OPTION_MPTCP_CSUMREQD)···31583150 msk->setsockopt_seq = mptcp_sk(sk)->setsockopt_seq;3159315131603152 sock_reset_flag(nsk, SOCK_RCU_FREE);31613161- /* will be fully established after successful MPC subflow creation */31623162- inet_sk_state_store(nsk, TCP_SYN_RECV);31633163-31643153 security_inet_csk_clone(nsk, req);31543154+31553155+ /* this can't race with mptcp_close(), as the msk is31563156+ * not yet exposed to user-space31573157+ */31583158+ inet_sk_state_store(nsk, TCP_ESTABLISHED);31593159+31603160+ /* The msk maintains a ref to each subflow in the connections list */31613161+ WRITE_ONCE(msk->first, ssk);31623162+ list_add(&mptcp_subflow_ctx(ssk)->node, &msk->conn_list);31633163+ sock_hold(ssk);31643164+31653165+ /* new mpc subflow takes ownership of the newly31663166+ * created mptcp socket31673167+ */31683168+ mptcp_token_accept(subflow_req, msk);31693169+31703170+ /* set msk addresses early to ensure mptcp_pm_get_local_id()31713171+ * uses the correct data31723172+ */31733173+ mptcp_copy_inaddrs(nsk, ssk);31743174+ mptcp_propagate_sndbuf(nsk, ssk);31753175+31763176+ mptcp_rcv_space_init(msk, ssk);31653177 bh_unlock_sock(nsk);3166317831673179 /* note: the newly allocated socket refcount is 2 now */···32133185 struct socket *listener;32143186 struct sock *newsk;3215318732163216- listener = msk->subflow;31883188+ listener = READ_ONCE(msk->subflow);32173189 if (WARN_ON_ONCE(!listener)) {32183190 *err = -EINVAL;32193191 return NULL;···34933465 return false;34943466 }3495346734963496- if
(!list_empty(&subflow->node))34973497- goto out;34683468+ /* active subflow, already present inside the conn_list */34693469+ if (!list_empty(&subflow->node)) {34703470+ mptcp_subflow_joined(msk, ssk);34713471+ return true;34723472+ }3498347334993474 if (!mptcp_pm_allow_new_subflow(msk))35003475 goto err_prohibited;3501347635023502- /* active connections are already on conn_list.35033503- * If we can't acquire msk socket lock here, let the release callback34773477+ /* If we can't acquire msk socket lock here, let the release callback35043478 * handle it35053479 */35063480 mptcp_data_lock(parent);···35253495 return false;35263496 }3527349735283528- subflow->map_seq = READ_ONCE(msk->ack_seq);35293529- WRITE_ONCE(msk->allow_infinite_fallback, false);35303530-35313531-out:35323532- mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC);35333498 return true;35343499}35353500···36423617 * acquired the subflow socket lock, too.36433618 */36443619 if (msk->fastopening)36453645- err = __inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags, 1);36203620+ err = __inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK, 1);36463621 else36473647- err = inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags);36223622+ err = inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK);36483623 inet_sk(sk)->defer_connect = inet_sk(ssock->sk)->defer_connect;3649362436503625 /* on successful connect, the msk state will be moved to established by···3657363236583633 mptcp_copy_inaddrs(sk, ssock->sk);3659363436603660- /* unblocking connect, mptcp-level inet_stream_connect will error out36613661- * without changing the socket state, update it here.36353635+ /* silence EINPROGRESS and let the caller inet_stream_connect36363636+ * handle the connection in progress36623637 */36633663- if (err == -EINPROGRESS)36643664- sk->sk_socket->state = ssock->state;36653665- return err;36383638+ return 0;36663639}3667364036683641static struct proto mptcp_prot = {···37193696 return 
err;37203697}3721369837223722-static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr,37233723- int addr_len, int flags)37243724-{37253725- int ret;37263726-37273727- lock_sock(sock->sk);37283728- mptcp_sk(sock->sk)->connect_flags = flags;37293729- ret = __inet_stream_connect(sock, uaddr, addr_len, flags, 0);37303730- release_sock(sock->sk);37313731- return ret;37323732-}37333733-37343699static int mptcp_listen(struct socket *sock, int backlog)37353700{37363701 struct mptcp_sock *msk = mptcp_sk(sock->sk);···3762375137633752 pr_debug("msk=%p", msk);3764375337653765- /* buggy applications can call accept on socket states other then LISTEN37543754+ /* Buggy applications can call accept on socket states other than LISTEN37663755 * but no need to allocate the first subflow just to error out.37673756 */37683768- ssock = msk->subflow;37573757+ ssock = READ_ONCE(msk->subflow);37693758 if (!ssock)37703759 return -EINVAL;37713760···38113800{38123801 struct sock *sk = (struct sock *)msk;3813380238143814- if (unlikely(sk->sk_shutdown & SEND_SHUTDOWN))38153815- return EPOLLOUT | EPOLLWRNORM;38163816-38173803 if (sk_stream_is_writeable(sk))38183804 return EPOLLOUT | EPOLLWRNORM;38193805
EPOLLHUP;38423842+ if (shutdown & RCV_SHUTDOWN)38433843+ mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;3845384438463845 if (state != TCP_SYN_SENT && state != TCP_SYN_RECV) {38473846 mask |= mptcp_check_readable(msk);38483848- mask |= mptcp_check_writeable(msk);38473847+ if (shutdown & SEND_SHUTDOWN)38483848+ mask |= EPOLLOUT | EPOLLWRNORM;38493849+ else38503850+ mask |= mptcp_check_writeable(msk);38493851 } else if (state == TCP_SYN_SENT && inet_sk(sk)->defer_connect) {38503852 /* cf tcp_poll() note about TFO */38513853 mask |= EPOLLOUT | EPOLLWRNORM;38523854 }38533853- if (sk->sk_shutdown == SHUTDOWN_MASK || state == TCP_CLOSE)38543854- mask |= EPOLLHUP;38553855- if (sk->sk_shutdown & RCV_SHUTDOWN)38563856- mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;3857385538583856 /* This barrier is coupled with smp_wmb() in __mptcp_error_report() */38593857 smp_rmb();···38753859 .owner = THIS_MODULE,38763860 .release = inet_release,38773861 .bind = mptcp_bind,38783878- .connect = mptcp_stream_connect,38623862+ .connect = inet_stream_connect,38793863 .socketpair = sock_no_socketpair,38803864 .accept = mptcp_stream_accept,38813865 .getname = inet_getname,···39703954 .owner = THIS_MODULE,39713955 .release = inet6_release,39723956 .bind = mptcp_bind,39733973- .connect = mptcp_stream_connect,39573957+ .connect = inet_stream_connect,39743958 .socketpair = sock_no_socketpair,39753959 .accept = mptcp_stream_accept,39763960 .getname = inet6_getname,
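The mptcp_poll() rework takes a single READ_ONCE() snapshot of sk_shutdown and derives the poll bits from it: EPOLLHUP for a fully shut or closed socket, the read bits for RCV_SHUTDOWN, and writability reported unconditionally once SEND_SHUTDOWN is set. A sketch of just that mapping (the shutdown and EPOLL constants are reproduced here with their Linux values for illustration):

```c
/* Illustrative copies of the kernel's constants. */
#define RCV_SHUTDOWN	1
#define SEND_SHUTDOWN	2
#define SHUTDOWN_MASK	3

#define EPOLLIN		0x001
#define EPOLLOUT	0x004
#define EPOLLHUP	0x010
#define EPOLLRDNORM	0x040
#define EPOLLWRNORM	0x100
#define EPOLLRDHUP	0x2000

/* Mirrors the reworked mptcp_poll() logic: one shutdown snapshot,
 * with stream writability consulted only when sends are still open. */
unsigned int poll_mask(unsigned char shutdown, int closed, int writable)
{
	unsigned int mask = 0;

	if (shutdown == SHUTDOWN_MASK || closed)
		mask |= EPOLLHUP;
	if (shutdown & RCV_SHUTDOWN)
		mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;

	if (shutdown & SEND_SHUTDOWN)
		mask |= EPOLLOUT | EPOLLWRNORM;
	else if (writable)	/* kernel: mptcp_check_writeable(msk) */
		mask |= EPOLLOUT | EPOLLWRNORM;

	return mask;
}
```

Using one snapshot avoids the races the old code had between reading sk_shutdown twice and a concurrent shutdown.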
+9-6
net/mptcp/protocol.h
···297297 nodelay:1,298298 fastopening:1,299299 in_accept_queue:1;300300- int connect_flags;301300 struct work_struct work;302301 struct sk_buff *ooo_last_skb;303302 struct rb_root out_of_order_queue;···305306 struct list_head rtx_queue;306307 struct mptcp_data_frag *first_pending;307308 struct list_head join_list;308308- struct socket *subflow; /* outgoing connect/listener/!mp_capable */309309+ struct socket *subflow; /* outgoing connect/listener/!mp_capable310310+ * The mptcp ops can safely dereference, using suitable311311+ * ONCE annotation, the subflow outside the socket312312+ * lock as such sock is freed after close().313313+ */309314 struct sock *first;310315 struct mptcp_pm_data pm;311316 struct {···616613int mptcp_allow_join_id0(const struct net *net);617614unsigned int mptcp_stale_loss_cnt(const struct net *net);618615int mptcp_get_pm_type(const struct net *net);619619-void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk);620616void mptcp_subflow_fully_established(struct mptcp_subflow_context *subflow,621617 const struct mptcp_options_received *mp_opt);622618bool __mptcp_retransmit_pending_data(struct sock *sk);···685683int __init mptcp_proto_v6_init(void);686684#endif687685688688-struct sock *mptcp_sk_clone(const struct sock *sk,689689- const struct mptcp_options_received *mp_opt,690690- struct request_sock *req);686686+struct sock *mptcp_sk_clone_init(const struct sock *sk,687687+ const struct mptcp_options_received *mp_opt,688688+ struct sock *ssk,689689+ struct request_sock *req);691690void mptcp_get_options(const struct sk_buff *skb,692691 struct mptcp_options_received *mp_opt);693692
+1-27
net/mptcp/subflow.c
···815815 ctx->setsockopt_seq = listener->setsockopt_seq;816816817817 if (ctx->mp_capable) {818818- ctx->conn = mptcp_sk_clone(listener->conn, &mp_opt, req);818818+ ctx->conn = mptcp_sk_clone_init(listener->conn, &mp_opt, child, req);819819 if (!ctx->conn)820820 goto fallback;821821822822 owner = mptcp_sk(ctx->conn);823823-824824- /* this can't race with mptcp_close(), as the msk is825825- * not yet exposted to user-space826826- */827827- inet_sk_state_store(ctx->conn, TCP_ESTABLISHED);828828-829829- /* record the newly created socket as the first msk830830- * subflow, but don't link it yet into conn_list831831- */832832- WRITE_ONCE(owner->first, child);833833-834834- /* new mpc subflow takes ownership of the newly835835- * created mptcp socket836836- */837837- owner->setsockopt_seq = ctx->setsockopt_seq;838823 mptcp_pm_new_connection(owner, child, 1);839839- mptcp_token_accept(subflow_req, owner);840840-841841- /* set msk addresses early to ensure mptcp_pm_get_local_id()842842- * uses the correct data843843- */844844- mptcp_copy_inaddrs(ctx->conn, child);845845- mptcp_propagate_sndbuf(ctx->conn, child);846846-847847- mptcp_rcv_space_init(owner, child);848848- list_add(&ctx->node, &owner->conn_list);849849- sock_hold(child);850824851825 /* with OoO packets we can reach here without ingress852826 * mpc option
+1-1
net/netlink/af_netlink.c
···17791779 break;17801780 }17811781 }17821782- if (put_user(ALIGN(nlk->ngroups / 8, sizeof(u32)), optlen))17821782+ if (put_user(ALIGN(BITS_TO_BYTES(nlk->ngroups), sizeof(u32)), optlen))17831783 err = -EFAULT;17841784 netlink_unlock_table();17851785 return err;
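The af_netlink fix replaces `ngroups / 8` with BITS_TO_BYTES(ngroups) when reporting the buffer size for NETLINK_LIST_MEMBERSHIPS: plain division truncates, so a group count that is not a multiple of 8 under-reports by a byte and the last few groups' bits don't fit. A quick check of the two roundings (bits_to_bytes here matches the kernel macro's DIV_ROUND_UP semantics):

```c
/* Same rounding as the kernel's BITS_TO_BYTES(): DIV_ROUND_UP(nr, 8). */
unsigned int bits_to_bytes(unsigned int nr)
{
	return (nr + 7) / 8;
}

/* The old expression, kept for comparison: truncates instead of rounding. */
unsigned int bits_div8(unsigned int nr)
{
	return nr / 8;
}
```

With 33 groups the old code reports 4 bytes while 5 are needed; the subsequent ALIGN to sizeof(u32) then rounds the corrected value up to 8.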
+4-3
net/netrom/nr_subr.c
···123123 unsigned char *dptr;124124 int len, timeout;125125126126- len = NR_NETWORK_LEN + NR_TRANSPORT_LEN;126126+ len = NR_TRANSPORT_LEN;127127128128 switch (frametype & 0x0F) {129129 case NR_CONNREQ:···141141 return;142142 }143143144144- if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL)144144+ skb = alloc_skb(NR_NETWORK_LEN + len, GFP_ATOMIC);145145+ if (!skb)145146 return;146147147148 /*···150149 */151150 skb_reserve(skb, NR_NETWORK_LEN);152151153153- dptr = skb_put(skb, skb_tailroom(skb));152152+ dptr = skb_put(skb, len);154153155154 switch (frametype & 0x0F) {156155 case NR_CONNREQ:
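The netrom fix separates the reserved network header from the transport payload: the skb is sized for NR_NETWORK_LEN plus the frame length, the network part is reserved as headroom, and then exactly `len` bytes are put, rather than claiming all of skb_tailroom(). A toy model of that head/data/tail arithmetic (offsets only, no real sk_buff):

```c
/* Toy sk_buff geometry: just the offsets alloc/reserve/put manipulate. */
struct toy_skb {
	unsigned int size;	/* total buffer size from alloc_skb() */
	unsigned int data;	/* start of payload */
	unsigned int tail;	/* one past the last written byte */
};

void toy_reserve(struct toy_skb *skb, unsigned int n)
{
	skb->data += n;		/* headroom grows, tailroom shrinks */
	skb->tail += n;
}

/* Returns the offset of the newly claimed area, like skb_put()'s pointer. */
unsigned int toy_put(struct toy_skb *skb, unsigned int n)
{
	unsigned int old = skb->tail;

	skb->tail += n;
	return old;
}

unsigned int toy_tailroom(const struct toy_skb *skb)
{
	return skb->size - skb->tail;
}
```

After reserving the header, the remaining tailroom is exactly the transport length, so putting `len` (not `skb_tailroom()`) keeps the write bounded by what the frame actually needs.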
+5-3
net/packet/af_packet.c
···3201320132023202 lock_sock(sk);32033203 spin_lock(&po->bind_lock);32043204+ if (!proto)32053205+ proto = po->num;32063206+32043207 rcu_read_lock();3205320832063209 if (po->fanout) {···33023299 memcpy(name, uaddr->sa_data, sizeof(uaddr->sa_data_min));33033300 name[sizeof(uaddr->sa_data_min)] = 0;3304330133053305- return packet_do_bind(sk, name, 0, pkt_sk(sk)->num);33023302+ return packet_do_bind(sk, name, 0, 0);33063303}3307330433083305static int packet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)···33193316 if (sll->sll_family != AF_PACKET)33203317 return -EINVAL;3321331833223322- return packet_do_bind(sk, NULL, sll->sll_ifindex,33233323- sll->sll_protocol ? : pkt_sk(sk)->num);33193319+ return packet_do_bind(sk, NULL, sll->sll_ifindex, sll->sll_protocol);33243320}3325332133263322static struct proto packet_proto = {
···578578{579579 struct smc_buf_desc *buf_next;580580581581- if (!buf_pos || list_is_last(&buf_pos->list, &lgr->rmbs[*buf_lst])) {581581+ if (!buf_pos)582582+ return _smc_llc_get_next_rmb(lgr, buf_lst);583583+584584+ if (list_is_last(&buf_pos->list, &lgr->rmbs[*buf_lst])) {582585 (*buf_lst)++;583586 return _smc_llc_get_next_rmb(lgr, buf_lst);584587 }···617614 goto out;618615 buf_pos = smc_llc_get_first_rmb(lgr, &buf_lst);619616 for (i = 0; i < ext->num_rkeys; i++) {617617+ while (buf_pos && !(buf_pos)->used)618618+ buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos);620619 if (!buf_pos)621620 break;622621 rmb = buf_pos;···628623 cpu_to_be64((uintptr_t)rmb->cpu_addr) :629624 cpu_to_be64((u64)sg_dma_address(rmb->sgt[lnk_idx].sgl));630625 buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos);631631- while (buf_pos && !(buf_pos)->used)632632- buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos);633626 }634627 len += i * sizeof(ext->rt[0]);635628out:
+6-18
net/sunrpc/svcsock.c
···14801480 return svsk;14811481}1482148214831483-bool svc_alien_sock(struct net *net, int fd)14841484-{14851485- int err;14861486- struct socket *sock = sockfd_lookup(fd, &err);14871487- bool ret = false;14881488-14891489- if (!sock)14901490- goto out;14911491- if (sock_net(sock->sk) != net)14921492- ret = true;14931493- sockfd_put(sock);14941494-out:14951495- return ret;14961496-}14971497-EXPORT_SYMBOL_GPL(svc_alien_sock);14981498-14991483/**15001484 * svc_addsock - add a listener socket to an RPC service15011485 * @serv: pointer to RPC service to which to add a new listener14861486+ * @net: caller's network namespace15021487 * @fd: file descriptor of the new listener15031488 * @name_return: pointer to buffer to fill in with name of listener15041489 * @len: size of the buffer···14931508 * Name is terminated with '\n'. On error, returns a negative errno14941509 * value.14951510 */14961496-int svc_addsock(struct svc_serv *serv, const int fd, char *name_return,14971497- const size_t len, const struct cred *cred)15111511+int svc_addsock(struct svc_serv *serv, struct net *net, const int fd,15121512+ char *name_return, const size_t len, const struct cred *cred)14981513{14991514 int err = 0;15001515 struct socket *so = sockfd_lookup(fd, &err);···1505152015061521 if (!so)15071522 return err;15231523+ err = -EINVAL;15241524+ if (sock_net(so->sk) != net)15251525+ goto out;15081526 err = -EAFNOSUPPORT;15091527 if ((so->sk->sk_family != PF_INET) && (so->sk->sk_family != PF_INET6))15101528 goto out;
+3-1
net/tls/tls_strp.c
···2020 strp->stopped = 1;21212222 /* Report an error on the lower socket */2323- strp->sk->sk_err = -err;2323+ WRITE_ONCE(strp->sk->sk_err, -err);2424+ /* Paired with smp_rmb() in tcp_poll() */2525+ smp_wmb();2426 sk_error_report(strp->sk);2527}2628
+3-1
net/tls/tls_sw.c
···7070{7171 WARN_ON_ONCE(err >= 0);7272 /* sk->sk_err should contain a positive error code. */7373- sk->sk_err = -err;7373+ WRITE_ONCE(sk->sk_err, -err);7474+ /* Paired with smp_rmb() in tcp_poll() */7575+ smp_wmb();7476 sk_error_report(sk);7577}7678
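Both tls changes pair a WRITE_ONCE() of sk_err with an smp_wmb(), matching the smp_rmb() in tcp_poll() so the error value is published before the wakeup becomes observable. A loose C11 sketch of the same publish/consume pattern (the fences map only roughly to smp_wmb()/smp_rmb(), and all names are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>

atomic_int toy_sk_err;		/* positive error code, like sk->sk_err */
atomic_bool toy_reported;	/* stands in for the sk_error_report() wakeup */

/* Writer side, shaped like tls_err_abort(): store the (negated) errno,
 * fence, then make the event visible. */
void toy_report_error(int err)
{
	atomic_store_explicit(&toy_sk_err, -err, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);	/* ~ smp_wmb() */
	atomic_store_explicit(&toy_reported, true, memory_order_relaxed);
}

/* Reader side, shaped like tcp_poll(): observe the event, fence, then
 * read the error code, which the ordering guarantees is already there. */
int toy_poll_error(void)
{
	if (!atomic_load_explicit(&toy_reported, memory_order_relaxed))
		return 0;
	atomic_thread_fence(memory_order_acquire);	/* ~ smp_rmb() */
	return atomic_load_explicit(&toy_sk_err, memory_order_relaxed);
}
```

Without the write-side barrier, a reader could observe the wakeup yet still read a stale (zero) error code.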
+5-1
security/selinux/Makefile
···2626 cmd_flask = $< $(obj)/flask.h $(obj)/av_permissions.h27272828targets += flask.h av_permissions.h2929-$(obj)/flask.h $(obj)/av_permissions.h &: scripts/selinux/genheaders/genheaders FORCE2929+# once make >= 4.3 is required, we can use grouped targets in the rule below,3030+# which basically involves adding both headers and a '&' before the colon, see3131+# the example below:3232+# $(obj)/flask.h $(obj)/av_permissions.h &: scripts/selinux/...3333+$(obj)/flask.h: scripts/selinux/genheaders/genheaders FORCE3034 $(call if_changed,flask)
-13
tools/include/linux/coresight-pmu.h
···2121 */2222#define CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) (0x10 + (cpu * 2))23232424-/* CoreSight trace ID is currently the bottom 7 bits of the value */2525-#define CORESIGHT_TRACE_ID_VAL_MASK GENMASK(6, 0)2626-2727-/*2828- * perf record will set the legacy meta data values as unused initially.2929- * This allows perf report to manage the decoders created when dynamic3030- * allocation in operation.3131- */3232-#define CORESIGHT_TRACE_ID_UNUSED_FLAG BIT(31)3333-3434-/* Value to set for unused trace ID values */3535-#define CORESIGHT_TRACE_ID_UNUSED_VAL 0x7F3636-3724/*3825 * Below are the definition of bit offsets for perf option, and works as3926 * arbitrary values for all ETM versions.
···2525} __attribute__((preserve_access_index));26262727/* new kernel perf_mem_data_src definition */2828-union perf_mem_data_src__new {2828+union perf_mem_data_src___new {2929 __u64 val;3030 struct {3131 __u64 mem_op:5, /* type of opcode */···108108 if (entry->part == 7)109109 return kctx->data->data_src.mem_blk;110110 if (entry->part == 8) {111111- union perf_mem_data_src__new *data = (void *)&kctx->data->data_src;111111+ union perf_mem_data_src___new *data = (void *)&kctx->data->data_src;112112113113 if (bpf_core_field_exists(data->mem_hops))114114 return data->mem_hops;
+13
tools/perf/util/cs-etm.h
···227227#define INFO_HEADER_SIZE (sizeof(((struct perf_record_auxtrace_info *)0)->type) + \228228 sizeof(((struct perf_record_auxtrace_info *)0)->reserved__))229229230230+/* CoreSight trace ID is currently the bottom 7 bits of the value */231231+#define CORESIGHT_TRACE_ID_VAL_MASK GENMASK(6, 0)232232+233233+/*234234+ * perf record will set the legacy meta data values as unused initially.235235+ * This allows perf report to manage the decoders created when dynamic236236+ * allocation in operation.237237+ */238238+#define CORESIGHT_TRACE_ID_UNUSED_FLAG BIT(31)239239+240240+/* Value to set for unused trace ID values */241241+#define CORESIGHT_TRACE_ID_UNUSED_VAL 0x7F242242+230243int cs_etm__process_auxtrace_info(union perf_event *event,231244 struct perf_session *session);232245struct perf_event_attr *cs_etm_get_default_config(struct perf_pmu *pmu);
···1010# because it's invoked by variable name, see how the "tests" array is used1111#shellcheck disable=SC231712121313+. "$(dirname "${0}")/mptcp_lib.sh"1414+1315ret=01416sin=""1517sinfail=""···1917cin=""2018cinfail=""2119cinsent=""2020+tmpfile=""2221cout=""2322capout=""2423ns1=""···139136140137check_tools()141138{139139+ mptcp_lib_check_mptcp140140+142141 if ! ip -Version &> /dev/null; then143142 echo "SKIP: Could not run test without ip tool"144143 exit $ksft_skip···180175{181176 rm -f "$cin" "$cout" "$sinfail"182177 rm -f "$sin" "$sout" "$cinsent" "$cinfail"178178+ rm -f "$tmpfile"183179 rm -rf $evts_ns1 $evts_ns2184180 cleanup_partial185181}···389383 fail_test390384 return 1391385 fi392392- bytes="--bytes=${bytes}"386386+387387+ # note: BusyBox's "cmp" command doesn't support --bytes388388+ tmpfile=$(mktemp)389389+ head --bytes="$bytes" "$in" > "$tmpfile"390390+ mv "$tmpfile" "$in"391391+ head --bytes="$bytes" "$out" > "$tmpfile"392392+ mv "$tmpfile" "$out"393393+ tmpfile=""393394 fi394394- cmp -l "$in" "$out" ${bytes} | while read -r i a b; do395395+ cmp -l "$in" "$out" | while read -r i a b; do395396 local sum=$((0${a} + 0${b}))396397 if [ $check_invert -eq 0 ] || [ $sum -ne $((0xff)) ]; then397398 echo "[ FAIL ] $what does not match (in, out):"
+40
tools/testing/selftests/net/mptcp/mptcp_lib.sh
···11+#! /bin/bash22+# SPDX-License-Identifier: GPL-2.033+44+readonly KSFT_FAIL=155+readonly KSFT_SKIP=466+77+# SELFTESTS_MPTCP_LIB_EXPECT_ALL_FEATURES env var can be set when validating all88+# features using the last version of the kernel and the selftests to make sure99+# a test is not being skipped by mistake.1010+mptcp_lib_expect_all_features() {1111+ [ "${SELFTESTS_MPTCP_LIB_EXPECT_ALL_FEATURES:-}" = "1" ]1212+}1313+1414+# $1: msg1515+mptcp_lib_fail_if_expected_feature() {1616+ if mptcp_lib_expect_all_features; then1717+ echo "ERROR: missing feature: ${*}"1818+ exit ${KSFT_FAIL}1919+ fi2020+2121+ return 12222+}2323+2424+# $1: file2525+mptcp_lib_has_file() {2626+ local f="${1}"2727+2828+ if [ -f "${f}" ]; then2929+ return 03030+ fi3131+3232+ mptcp_lib_fail_if_expected_feature "${f} file not found"3333+}3434+3535+mptcp_lib_check_mptcp() {3636+ if ! mptcp_lib_has_file "/proc/sys/net/mptcp/enabled"; then3737+ echo "SKIP: MPTCP support is not available"3838+ exit ${KSFT_SKIP}3939+ fi4040+}
···11#!/bin/bash22# SPDX-License-Identifier: GPL-2.03344+. "$(dirname "${0}")/mptcp_lib.sh"55+66+mptcp_lib_check_mptcp77+48ip -Version > /dev/null 2>&159if [ $? -ne 0 ];then610 echo "SKIP: Cannot not run test without ip tool"