.mailmap | +1 -1
···
435
435
Martin Kepplinger <martink@posteo.de> <martin.kepplinger@puri.sm>
436
436
Martin Kepplinger <martink@posteo.de> <martin.kepplinger@theobroma-systems.com>
437
437
Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com> <martyna.szapar-mudlaw@intel.com>
438
-
Mathieu Othacehe <m.othacehe@gmail.com> <othacehe@gnu.org>
438
+
Mathieu Othacehe <othacehe@gnu.org> <m.othacehe@gmail.com>
439
439
Mat Martineau <martineau@kernel.org> <mathew.j.martineau@linux.intel.com>
440
440
Mat Martineau <martineau@kernel.org> <mathewm@codeaurora.org>
441
441
Matthew Wilcox <willy@infradead.org> <matthew.r.wilcox@intel.com>
Documentation/admin-guide/laptops/thinkpad-acpi.rst | +7 -3
···
445
445
0x1008 0x07 FN+F8 IBM: toggle screen expand
446
446
Lenovo: configure UltraNav,
447
447
or toggle screen expand.
448
-
On newer platforms (2024+)
449
-
replaced by 0x131f (see below)
448
+
On 2024 platforms replaced by
449
+
0x131f (see below) and on newer
450
+
platforms (2025 +) keycode is
451
+
replaced by 0x1401 (see below).
450
452
451
453
0x1009 0x08 FN+F9 -
452
454
···
508
506
509
507
0x1019 0x18 unknown
510
508
511
-
0x131f ... FN+F8 Platform Mode change.
509
+
0x131f ... FN+F8 Platform Mode change (2024 systems).
512
510
Implemented in driver.
513
511
512
+
0x1401 ... FN+F8 Platform Mode change (2025 + systems).
513
+
Implemented in driver.
514
514
... ... ...
515
515
516
516
0x1020 0x1F unknown
Documentation/admin-guide/mm/transhuge.rst | +1 -1
···
436
436
The number of file transparent huge pages mapped to userspace is available
437
437
by reading ShmemPmdMapped and ShmemHugePages fields in ``/proc/meminfo``.
438
438
To identify what applications are mapping file transparent huge pages, it
439
-
is necessary to read ``/proc/PID/smaps`` and count the FileHugeMapped fields
439
+
is necessary to read ``/proc/PID/smaps`` and count the FilePmdMapped fields
440
440
for each mapping.
441
441
442
442
Note that reading the smaps file is expensive and reading it
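As a side note to the documentation fix above (FileHugeMapped -> FilePmdMapped), the renamed fields can be tallied straight from smaps. The following is a minimal userspace sketch, not part of the patch, using /proc/self/smaps as a stand-in for /proc/PID/smaps:

  #include <stdio.h>
  #include <string.h>

  /* Sum the FilePmdMapped fields across all mappings of a process. */
  int main(void)
  {
          FILE *f = fopen("/proc/self/smaps", "r");
          char line[256];
          unsigned long kb, total = 0;

          if (!f) {
                  perror("smaps");
                  return 1;
          }
          while (fgets(line, sizeof(line), f)) {
                  if (sscanf(line, "FilePmdMapped: %lu kB", &kb) == 1)
                          total += kb;
          }
          fclose(f);
          printf("FilePmdMapped total: %lu kB\n", total);
          return 0;
  }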
Documentation/devicetree/bindings/display/bridge/adi,adv7533.yaml | +1 -1
Documentation/devicetree/bindings/sound/realtek,rt5645.yaml | +1 -1
Documentation/netlink/specs/mptcp_pm.yaml | +31 -29
···
22
22
doc: unused event
23
23
-
24
24
name: created
25
-
doc:
26
-
token, family, saddr4 | saddr6, daddr4 | daddr6, sport, dport
25
+
doc: >-
27
26
A new MPTCP connection has been created. It is the good time to
28
27
allocate memory and send ADD_ADDR if needed. Depending on the
29
28
traffic-patterns it can take a long time until the
30
29
MPTCP_EVENT_ESTABLISHED is sent.
30
+
Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, sport,
31
+
dport, server-side.
31
32
-
32
33
name: established
33
-
doc:
34
-
token, family, saddr4 | saddr6, daddr4 | daddr6, sport, dport
34
+
doc: >-
35
35
A MPTCP connection is established (can start new subflows).
36
+
Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, sport,
37
+
dport, server-side.
36
38
-
37
39
name: closed
38
-
doc:
39
-
token
40
+
doc: >-
40
41
A MPTCP connection has stopped.
42
+
Attribute: token.
41
43
-
42
44
name: announced
43
45
value: 6
44
-
doc:
45
-
token, rem_id, family, daddr4 | daddr6 [, dport]
46
+
doc: >-
46
47
A new address has been announced by the peer.
48
+
Attributes: token, rem_id, family, daddr4 | daddr6 [, dport].
47
49
-
48
50
name: removed
49
-
doc:
50
-
token, rem_id
51
+
doc: >-
51
52
An address has been lost by the peer.
53
+
Attributes: token, rem_id.
52
54
-
53
55
name: sub-established
54
56
value: 10
55
-
doc:
56
-
token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport,
57
-
dport, backup, if_idx [, error]
57
+
doc: >-
58
58
A new subflow has been established. 'error' should not be set.
59
+
Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
60
+
daddr6, sport, dport, backup, if_idx [, error].
59
61
-
60
62
name: sub-closed
61
-
doc:
62
-
token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport,
63
-
dport, backup, if_idx [, error]
63
+
doc: >-
64
64
A subflow has been closed. An error (copy of sk_err) could be set if an
65
65
error has been detected for this subflow.
66
+
Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
67
+
daddr6, sport, dport, backup, if_idx [, error].
66
68
-
67
69
name: sub-priority
68
70
value: 13
69
-
doc:
70
-
token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport,
71
-
dport, backup, if_idx [, error]
71
+
doc: >-
72
72
The priority of a subflow has changed. 'error' should not be set.
73
+
Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
74
+
daddr6, sport, dport, backup, if_idx [, error].
73
75
-
74
76
name: listener-created
75
77
value: 15
76
-
doc:
77
-
family, sport, saddr4 | saddr6
78
+
doc: >-
78
79
A new PM listener is created.
80
+
Attributes: family, sport, saddr4 | saddr6.
79
81
-
80
82
name: listener-closed
81
-
doc:
82
-
family, sport, saddr4 | saddr6
83
+
doc: >-
83
84
A PM listener is closed.
85
+
Attributes: family, sport, saddr4 | saddr6.
84
86
85
87
attribute-sets:
86
88
-
···
308
306
attributes:
309
307
- addr
310
308
-
311
-
name: flush-addrs
312
-
doc: flush addresses
309
+
name: flush-addrs
310
+
doc: Flush addresses
313
311
attribute-set: endpoint
314
312
dont-validate: [ strict ]
315
313
flags: [ uns-admin-perm ]
···
353
351
- addr-remote
354
352
-
355
353
name: announce
356
-
doc: announce new sf
354
+
doc: Announce new address
357
355
attribute-set: attr
358
356
dont-validate: [ strict ]
359
357
flags: [ uns-admin-perm ]
···
364
362
- token
365
363
-
366
364
name: remove
367
-
doc: announce removal
365
+
doc: Announce removal
368
366
attribute-set: attr
369
367
dont-validate: [ strict ]
370
368
flags: [ uns-admin-perm ]
···
375
373
- loc-id
376
374
-
377
375
name: subflow-create
378
-
doc: todo
376
+
doc: Create subflow
379
377
attribute-set: attr
380
378
dont-validate: [ strict ]
381
379
flags: [ uns-admin-perm ]
···
387
385
- addr-remote
388
386
-
389
387
name: subflow-destroy
390
-
doc: todo
388
+
doc: Destroy subflow
391
389
attribute-set: attr
392
390
dont-validate: [ strict ]
393
391
flags: [ uns-admin-perm ]
Documentation/virt/kvm/api.rst | +3
···
1914
1914
#define KVM_IRQ_ROUTING_HV_SINT 4
1915
1915
#define KVM_IRQ_ROUTING_XEN_EVTCHN 5
1916
1916
1917
+
On s390, adding a KVM_IRQ_ROUTING_S390_ADAPTER is rejected on ucontrol VMs with
1918
+
error -EINVAL.
1919
+
1917
1920
flags:
1918
1921
1919
1922
- KVM_MSI_VALID_DEVID: used along with KVM_IRQ_ROUTING_MSI routing entry
Documentation/virt/kvm/devices/s390_flic.rst | +4
···
58
58
Enables async page faults for the guest. So in case of a major page fault
59
59
the host is allowed to handle this async and continues the guest.
60
60
61
+
-EINVAL is returned when called on the FLIC of a ucontrol VM.
62
+
61
63
KVM_DEV_FLIC_APF_DISABLE_WAIT
62
64
Disables async page faults for the guest and waits until already pending
63
65
async page faults are done. This is necessary to trigger a completion interrupt
64
66
for every init interrupt before migrating the interrupt list.
67
+
68
+
-EINVAL is returned when called on the FLIC of a ucontrol VM.
65
69
66
70
KVM_DEV_FLIC_ADAPTER_REGISTER
67
71
Register an I/O adapter interrupt source. Takes a kvm_s390_io_adapter
MAINTAINERS | +6 -4
···
1797
1797
1798
1798
ARM AND ARM64 SoC SUB-ARCHITECTURES (COMMON PARTS)
1799
1799
M: Arnd Bergmann <arnd@arndb.de>
1800
-
M: Olof Johansson <olof@lixom.net>
1801
1800
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
1802
1801
L: soc@lists.linux.dev
1803
1802
S: Maintained
···
3607
3608
3608
3609
ATHEROS ATH GENERIC UTILITIES
3609
3610
M: Kalle Valo <kvalo@kernel.org>
3611
+
M: Jeff Johnson <jjohnson@kernel.org>
3610
3612
L: linux-wireless@vger.kernel.org
3611
3613
S: Supported
3612
3614
F: drivers/net/wireless/ath/*
···
14756
14756
F: include/soc/mediatek/smi.h
14757
14757
14758
14758
MEDIATEK SWITCH DRIVER
14759
-
M: Arınç ÜNAL <arinc.unal@arinc9.com>
14759
+
M: Chester A. Unal <chester.a.unal@arinc9.com>
14760
14760
M: Daniel Golle <daniel@makrotopia.org>
14761
14761
M: DENG Qingfang <dqfext@gmail.com>
14762
14762
M: Sean Wang <sean.wang@mediatek.com>
···
18460
18460
F: drivers/pinctrl/mediatek/
18461
18461
18462
18462
PIN CONTROLLER - MEDIATEK MIPS
18463
-
M: Arınç ÜNAL <arinc.unal@arinc9.com>
18463
+
M: Chester A. Unal <chester.a.unal@arinc9.com>
18464
18464
M: Sergio Paracuellos <sergio.paracuellos@gmail.com>
18465
18465
L: linux-mediatek@lists.infradead.org (moderated for non-subscribers)
18466
18466
L: linux-mips@vger.kernel.org
···
19504
19504
F: arch/mips/ralink
19505
19505
19506
19506
RALINK MT7621 MIPS ARCHITECTURE
19507
-
M: Arınç ÜNAL <arinc.unal@arinc9.com>
19507
+
M: Chester A. Unal <chester.a.unal@arinc9.com>
19508
19508
M: Sergio Paracuellos <sergio.paracuellos@gmail.com>
19509
19509
L: linux-mips@vger.kernel.org
19510
19510
S: Maintained
···
20907
20907
SCHEDULER - SCHED_EXT
20908
20908
R: Tejun Heo <tj@kernel.org>
20909
20909
R: David Vernet <void@manifault.com>
20910
+
R: Andrea Righi <arighi@nvidia.com>
20911
+
R: Changwoo Min <changwoo@igalia.com>
20910
20912
L: linux-kernel@vger.kernel.org
20911
20913
S: Maintained
20912
20914
W: https://github.com/sched-ext/scx
Makefile | +1 -1
arch/arm/mach-imx/Kconfig | +1
arch/nios2/kernel/cpuinfo.c | +5 -5
···
143
143
" DIV:\t\t%s\n"
144
144
" BMX:\t\t%s\n"
145
145
" CDX:\t\t%s\n",
146
-
cpuinfo.has_mul ? "yes" : "no",
147
-
cpuinfo.has_mulx ? "yes" : "no",
148
-
cpuinfo.has_div ? "yes" : "no",
149
-
cpuinfo.has_bmx ? "yes" : "no",
150
-
cpuinfo.has_cdx ? "yes" : "no");
146
+
str_yes_no(cpuinfo.has_mul),
147
+
str_yes_no(cpuinfo.has_mulx),
148
+
str_yes_no(cpuinfo.has_div),
149
+
str_yes_no(cpuinfo.has_bmx),
150
+
str_yes_no(cpuinfo.has_cdx));
151
151
152
152
seq_printf(m,
153
153
"Icache:\t\t%ukB, line length: %u\n",
arch/powerpc/platforms/book3s/vas-api.c | +36
···
464
464
return VM_FAULT_SIGBUS;
465
465
}
466
466
467
+
/*
468
+
* During mmap() paste address, mapping VMA is saved in VAS window
469
+
* struct which is used to unmap during migration if the window is
470
+
* still open. But the user space can remove this mapping with
471
+
* munmap() before closing the window and the VMA address will
472
+
* be invalid. Set VAS window VMA to NULL in this function which
473
+
* is called before VMA free.
474
+
*/
475
+
static void vas_mmap_close(struct vm_area_struct *vma)
476
+
{
477
+
struct file *fp = vma->vm_file;
478
+
struct coproc_instance *cp_inst = fp->private_data;
479
+
struct vas_window *txwin;
480
+
481
+
/* Should not happen */
482
+
if (!cp_inst || !cp_inst->txwin) {
483
+
pr_err("No attached VAS window for the paste address mmap\n");
484
+
return;
485
+
}
486
+
487
+
txwin = cp_inst->txwin;
488
+
/*
489
+
* task_ref.vma is set in coproc_mmap() during mmap paste
490
+
* address. So it has to be the same VMA that is getting freed.
491
+
*/
492
+
if (WARN_ON(txwin->task_ref.vma != vma)) {
493
+
pr_err("Invalid paste address mmaping\n");
494
+
return;
495
+
}
496
+
497
+
mutex_lock(&txwin->task_ref.mmap_mutex);
498
+
txwin->task_ref.vma = NULL;
499
+
mutex_unlock(&txwin->task_ref.mmap_mutex);
500
+
}
501
+
467
502
static const struct vm_operations_struct vas_vm_ops = {
503
+
.close = vas_mmap_close,
468
504
.fault = vas_mmap_fault,
469
505
};
470
506
arch/s390/kvm/interrupt.c | +6
···
2678
2678
kvm_s390_clear_float_irqs(dev->kvm);
2679
2679
break;
2680
2680
case KVM_DEV_FLIC_APF_ENABLE:
2681
+
if (kvm_is_ucontrol(dev->kvm))
2682
+
return -EINVAL;
2681
2683
dev->kvm->arch.gmap->pfault_enabled = 1;
2682
2684
break;
2683
2685
case KVM_DEV_FLIC_APF_DISABLE_WAIT:
2686
+
if (kvm_is_ucontrol(dev->kvm))
2687
+
return -EINVAL;
2684
2688
dev->kvm->arch.gmap->pfault_enabled = 0;
2685
2689
/*
2686
2690
* Make sure no async faults are in transition when
···
2898
2894
switch (ue->type) {
2899
2895
/* we store the userspace addresses instead of the guest addresses */
2900
2896
case KVM_IRQ_ROUTING_S390_ADAPTER:
2897
+
if (kvm_is_ucontrol(kvm))
2898
+
return -EINVAL;
2901
2899
e->set = set_adapter_int;
2902
2900
uaddr = gmap_translate(kvm->arch.gmap, ue->u.adapter.summary_addr);
2903
2901
if (uaddr == -EFAULT)
arch/s390/kvm/vsie.c | +1 -1
arch/x86/events/intel/core.c | +11 -1
···
429
429
EVENT_CONSTRAINT_END
430
430
};
431
431
432
+
static struct extra_reg intel_lnc_extra_regs[] __read_mostly = {
433
+
INTEL_UEVENT_EXTRA_REG(0x012a, MSR_OFFCORE_RSP_0, 0xfffffffffffull, RSP_0),
434
+
INTEL_UEVENT_EXTRA_REG(0x012b, MSR_OFFCORE_RSP_1, 0xfffffffffffull, RSP_1),
435
+
INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd),
436
+
INTEL_UEVENT_EXTRA_REG(0x02c6, MSR_PEBS_FRONTEND, 0x9, FE),
437
+
INTEL_UEVENT_EXTRA_REG(0x03c6, MSR_PEBS_FRONTEND, 0x7fff1f, FE),
438
+
INTEL_UEVENT_EXTRA_REG(0x40ad, MSR_PEBS_FRONTEND, 0xf, FE),
439
+
INTEL_UEVENT_EXTRA_REG(0x04c2, MSR_PEBS_FRONTEND, 0x8, FE),
440
+
EVENT_EXTRA_END
441
+
};
432
442
433
443
EVENT_ATTR_STR(mem-loads, mem_ld_nhm, "event=0x0b,umask=0x10,ldlat=3");
434
444
EVENT_ATTR_STR(mem-loads, mem_ld_snb, "event=0xcd,umask=0x1,ldlat=3");
···
6432
6422
intel_pmu_init_glc(pmu);
6433
6423
hybrid(pmu, event_constraints) = intel_lnc_event_constraints;
6434
6424
hybrid(pmu, pebs_constraints) = intel_lnc_pebs_event_constraints;
6435
-
hybrid(pmu, extra_regs) = intel_rwc_extra_regs;
6425
+
hybrid(pmu, extra_regs) = intel_lnc_extra_regs;
6436
6426
}
6437
6427
6438
6428
static __always_inline void intel_pmu_init_skt(struct pmu *pmu)
arch/x86/events/intel/ds.c | +1
arch/x86/events/intel/uncore.c | +1
···
1910
1910
X86_MATCH_VFM(INTEL_ATOM_GRACEMONT, &adl_uncore_init),
1911
1911
X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, &gnr_uncore_init),
1912
1912
X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, &gnr_uncore_init),
1913
+
X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, &gnr_uncore_init),
1913
1914
{},
1914
1915
};
1915
1916
MODULE_DEVICE_TABLE(x86cpu, intel_uncore_match);
arch/x86/kernel/cet.c | +30
···
81
81
82
82
static __ro_after_init bool ibt_fatal = true;
83
83
84
+
/*
85
+
* By definition, all missing-ENDBRANCH #CPs are a result of WFE && !ENDBR.
86
+
*
87
+
* For the kernel IBT no ENDBR selftest where #CPs are deliberately triggered,
88
+
* the WFE state of the interrupted context needs to be cleared to let execution
89
+
* continue. Otherwise when the CPU resumes from the instruction that just
90
+
* caused the previous #CP, another missing-ENDBRANCH #CP is raised and the CPU
91
+
* enters a dead loop.
92
+
*
93
+
* This is not a problem with IDT because it doesn't preserve WFE and IRET doesn't
94
+
* set WFE. But FRED provides space on the entry stack (in an expanded CS area)
95
+
* to save and restore the WFE state, thus the WFE state is no longer clobbered,
96
+
* so software must clear it.
97
+
*/
98
+
static void ibt_clear_fred_wfe(struct pt_regs *regs)
99
+
{
100
+
/*
101
+
* No need to do any FRED checks.
102
+
*
103
+
* For IDT event delivery, the high-order 48 bits of CS are pushed
104
+
* as 0s into the stack, and later IRET ignores these bits.
105
+
*
106
+
* For FRED, a test to check if fred_cs.wfe is set would be dropped
107
+
* by compilers.
108
+
*/
109
+
regs->fred_cs.wfe = 0;
110
+
}
111
+
84
112
static void do_kernel_cp_fault(struct pt_regs *regs, unsigned long error_code)
85
113
{
86
114
if ((error_code & CP_EC) != CP_ENDBR) {
···
118
90
119
91
if (unlikely(regs->ip == (unsigned long)&ibt_selftest_noendbr)) {
120
92
regs->ax = 0;
93
+
ibt_clear_fred_wfe(regs);
121
94
return;
122
95
}
123
96
···
126
97
if (!ibt_fatal) {
127
98
printk(KERN_DEFAULT CUT_HERE);
128
99
__warn(__FILE__, __LINE__, (void *)regs->ip, TAINT_WARN, regs, NULL);
100
+
ibt_clear_fred_wfe(regs);
129
101
return;
130
102
}
131
103
BUG();
drivers/block/ublk_drv.c | +17 -9
···
1618
1618
blk_mq_kick_requeue_list(ub->ub_disk->queue);
1619
1619
}
1620
1620
1621
+
static struct gendisk *ublk_detach_disk(struct ublk_device *ub)
1622
+
{
1623
+
struct gendisk *disk;
1624
+
1625
+
/* Sync with ublk_abort_queue() by holding the lock */
1626
+
spin_lock(&ub->lock);
1627
+
disk = ub->ub_disk;
1628
+
ub->dev_info.state = UBLK_S_DEV_DEAD;
1629
+
ub->dev_info.ublksrv_pid = -1;
1630
+
ub->ub_disk = NULL;
1631
+
spin_unlock(&ub->lock);
1632
+
1633
+
return disk;
1634
+
}
1635
+
1621
1636
static void ublk_stop_dev(struct ublk_device *ub)
1622
1637
{
1623
1638
struct gendisk *disk;
···
1646
1631
ublk_unquiesce_dev(ub);
1647
1632
}
1648
1633
del_gendisk(ub->ub_disk);
1649
-
1650
-
/* Sync with ublk_abort_queue() by holding the lock */
1651
-
spin_lock(&ub->lock);
1652
-
disk = ub->ub_disk;
1653
-
ub->dev_info.state = UBLK_S_DEV_DEAD;
1654
-
ub->dev_info.ublksrv_pid = -1;
1655
-
ub->ub_disk = NULL;
1656
-
spin_unlock(&ub->lock);
1634
+
disk = ublk_detach_disk(ub);
1657
1635
put_disk(disk);
1658
1636
unlock:
1659
1637
mutex_unlock(&ub->mutex);
···
2344
2336
2345
2337
out_put_cdev:
2346
2338
if (ret) {
2347
-
ub->dev_info.state = UBLK_S_DEV_DEAD;
2339
+
ublk_detach_disk(ub);
2348
2340
ublk_put_device(ub);
2349
2341
}
2350
2342
if (ret)
drivers/cdrom/cdrom.c | +1 -1
drivers/clk/imx/clk-imx8mp-audiomix.c | +2 -1
···
278
278
279
279
#else /* !CONFIG_RESET_CONTROLLER */
280
280
281
-
static int clk_imx8mp_audiomix_reset_controller_register(struct clk_imx8mp_audiomix_priv *priv)
281
+
static int clk_imx8mp_audiomix_reset_controller_register(struct device *dev,
282
+
struct clk_imx8mp_audiomix_priv *priv)
282
283
{
283
284
return 0;
284
285
}
drivers/clk/thead/clk-th1520-ap.c | +12 -1
···
779
779
},
780
780
};
781
781
782
+
static CLK_FIXED_FACTOR_HW(emmc_sdio_ref_clk, "emmc-sdio-ref",
783
+
&video_pll_clk.common.hw, 4, 1, 0);
784
+
785
+
static const struct clk_parent_data emmc_sdio_ref_clk_pd[] = {
786
+
{ .hw = &emmc_sdio_ref_clk.hw },
787
+
};
788
+
782
789
static CCU_GATE(CLK_BROM, brom_clk, "brom", ahb2_cpusys_hclk_pd, 0x100, BIT(4), 0);
783
790
static CCU_GATE(CLK_BMU, bmu_clk, "bmu", axi4_cpusys2_aclk_pd, 0x100, BIT(5), 0);
784
791
static CCU_GATE(CLK_AON2CPU_A2X, aon2cpu_a2x_clk, "aon2cpu-a2x", axi4_cpusys2_aclk_pd,
···
805
798
0x150, BIT(12), 0);
806
799
static CCU_GATE(CLK_NPU_AXI, npu_axi_clk, "npu-axi", axi_aclk_pd, 0x1c8, BIT(5), 0);
807
800
static CCU_GATE(CLK_CPU2VP, cpu2vp_clk, "cpu2vp", axi_aclk_pd, 0x1e0, BIT(13), 0);
808
-
static CCU_GATE(CLK_EMMC_SDIO, emmc_sdio_clk, "emmc-sdio", video_pll_clk_pd, 0x204, BIT(30), 0);
801
+
static CCU_GATE(CLK_EMMC_SDIO, emmc_sdio_clk, "emmc-sdio", emmc_sdio_ref_clk_pd, 0x204, BIT(30), 0);
809
802
static CCU_GATE(CLK_GMAC1, gmac1_clk, "gmac1", gmac_pll_clk_pd, 0x204, BIT(26), 0);
810
803
static CCU_GATE(CLK_PADCTRL1, padctrl1_clk, "padctrl1", perisys_apb_pclk_pd, 0x204, BIT(24), 0);
811
804
static CCU_GATE(CLK_DSMART, dsmart_clk, "dsmart", perisys_apb_pclk_pd, 0x204, BIT(23), 0);
···
1065
1058
if (ret)
1066
1059
return ret;
1067
1060
priv->hws[CLK_PLL_GMAC_100M] = &gmac_pll_clk_100m.hw;
1061
+
1062
+
ret = devm_clk_hw_register(dev, &emmc_sdio_ref_clk.hw);
1063
+
if (ret)
1064
+
return ret;
1068
1065
1069
1066
ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get, priv);
1070
1067
if (ret)
drivers/dma/amd/qdma/qdma.c | +12 -16
···
7
7
#include <linux/bitfield.h>
8
8
#include <linux/bitops.h>
9
9
#include <linux/dmaengine.h>
10
+
#include <linux/dma-mapping.h>
10
11
#include <linux/module.h>
11
12
#include <linux/mod_devicetable.h>
12
-
#include <linux/dma-map-ops.h>
13
13
#include <linux/platform_device.h>
14
14
#include <linux/platform_data/amd_qdma.h>
15
15
#include <linux/regmap.h>
···
492
492
493
493
static int qdma_device_setup(struct qdma_device *qdev)
494
494
{
495
-
struct device *dev = &qdev->pdev->dev;
496
495
u32 ring_sz = QDMA_DEFAULT_RING_SIZE;
497
496
int ret = 0;
498
-
499
-
while (dev && get_dma_ops(dev))
500
-
dev = dev->parent;
501
-
if (!dev) {
502
-
qdma_err(qdev, "dma device not found");
503
-
return -EINVAL;
504
-
}
505
-
set_dma_ops(&qdev->pdev->dev, get_dma_ops(dev));
506
497
507
498
ret = qdma_setup_fmap_context(qdev);
508
499
if (ret) {
···
539
548
{
540
549
struct qdma_queue *queue = to_qdma_queue(chan);
541
550
struct qdma_device *qdev = queue->qdev;
542
-
struct device *dev = qdev->dma_dev.dev;
551
+
struct qdma_platdata *pdata;
543
552
544
553
qdma_clear_queue_context(queue);
545
554
vchan_free_chan_resources(&queue->vchan);
546
-
dma_free_coherent(dev, queue->ring_size * QDMA_MM_DESC_SIZE,
555
+
pdata = dev_get_platdata(&qdev->pdev->dev);
556
+
dma_free_coherent(pdata->dma_dev, queue->ring_size * QDMA_MM_DESC_SIZE,
547
557
queue->desc_base, queue->dma_desc_base);
548
558
}
549
559
···
557
565
struct qdma_queue *queue = to_qdma_queue(chan);
558
566
struct qdma_device *qdev = queue->qdev;
559
567
struct qdma_ctxt_sw_desc desc;
568
+
struct qdma_platdata *pdata;
560
569
size_t size;
561
570
int ret;
562
571
···
565
572
if (ret)
566
573
return ret;
567
574
575
+
pdata = dev_get_platdata(&qdev->pdev->dev);
568
576
size = queue->ring_size * QDMA_MM_DESC_SIZE;
569
-
queue->desc_base = dma_alloc_coherent(qdev->dma_dev.dev, size,
577
+
queue->desc_base = dma_alloc_coherent(pdata->dma_dev, size,
570
578
&queue->dma_desc_base,
571
579
GFP_KERNEL);
572
580
if (!queue->desc_base) {
···
582
588
if (ret) {
583
589
qdma_err(qdev, "Failed to setup SW desc ctxt for %s",
584
590
chan->name);
585
-
dma_free_coherent(qdev->dma_dev.dev, size, queue->desc_base,
591
+
dma_free_coherent(pdata->dma_dev, size, queue->desc_base,
586
592
queue->dma_desc_base);
587
593
return ret;
588
594
}
···
942
948
943
949
static int qdmam_alloc_qintr_rings(struct qdma_device *qdev)
944
950
{
945
-
u32 ctxt[QDMA_CTXT_REGMAP_LEN];
951
+
struct qdma_platdata *pdata = dev_get_platdata(&qdev->pdev->dev);
946
952
struct device *dev = &qdev->pdev->dev;
953
+
u32 ctxt[QDMA_CTXT_REGMAP_LEN];
947
954
struct qdma_intr_ring *ring;
948
955
struct qdma_ctxt_intr intr_ctxt;
949
956
u32 vector;
···
964
969
ring->msix_id = qdev->err_irq_idx + i + 1;
965
970
ring->ridx = i;
966
971
ring->color = 1;
967
-
ring->base = dmam_alloc_coherent(dev, QDMA_INTR_RING_SIZE,
972
+
ring->base = dmam_alloc_coherent(pdata->dma_dev,
973
+
QDMA_INTR_RING_SIZE,
968
974
&ring->dev_base, GFP_KERNEL);
969
975
if (!ring->base) {
970
976
qdma_err(qdev, "Failed to alloc intr ring %d", i);
drivers/dma/apple-admac.c | +2 -5
···
153
153
{
154
154
struct admac_sram *sram;
155
155
int i, ret = 0, nblocks;
156
+
ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE);
157
+
ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE);
156
158
157
159
if (dir == DMA_MEM_TO_DEV)
158
160
sram = &ad->txcache;
···
914
912
goto free_irq;
915
913
}
916
914
917
-
ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE);
918
-
ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE);
919
-
920
915
dev_info(&pdev->dev, "Audio DMA Controller\n");
921
-
dev_info(&pdev->dev, "imprint %x TX cache %u RX cache %u\n",
922
-
readl_relaxed(ad->base + REG_IMPRINT), ad->txcache.size, ad->rxcache.size);
923
916
924
917
return 0;
925
918
drivers/dma/at_xdmac.c | +2
drivers/dma/dw/acpi.c | +4 -2
···
8
8
9
9
static bool dw_dma_acpi_filter(struct dma_chan *chan, void *param)
10
10
{
11
+
struct dw_dma *dw = to_dw_dma(chan->device);
12
+
struct dw_dma_chip_pdata *data = dev_get_drvdata(dw->dma.dev);
11
13
struct acpi_dma_spec *dma_spec = param;
12
14
struct dw_dma_slave slave = {
13
15
.dma_dev = dma_spec->dev,
14
16
.src_id = dma_spec->slave_id,
15
17
.dst_id = dma_spec->slave_id,
16
-
.m_master = 0,
17
-
.p_master = 1,
18
+
.m_master = data->m_master,
19
+
.p_master = data->p_master,
18
20
};
19
21
20
22
return dw_dma_filter(chan, &slave);
drivers/dma/dw/internal.h | +8
···
51
51
int (*probe)(struct dw_dma_chip *chip);
52
52
int (*remove)(struct dw_dma_chip *chip);
53
53
struct dw_dma_chip *chip;
54
+
u8 m_master;
55
+
u8 p_master;
54
56
};
55
57
56
58
static __maybe_unused const struct dw_dma_chip_pdata dw_dma_chip_pdata = {
57
59
.probe = dw_dma_probe,
58
60
.remove = dw_dma_remove,
61
+
.m_master = 0,
62
+
.p_master = 1,
59
63
};
60
64
61
65
static const struct dw_dma_platform_data idma32_pdata = {
···
76
72
.pdata = &idma32_pdata,
77
73
.probe = idma32_dma_probe,
78
74
.remove = idma32_dma_remove,
75
+
.m_master = 0,
76
+
.p_master = 0,
79
77
};
80
78
81
79
static const struct dw_dma_platform_data xbar_pdata = {
···
94
88
.pdata = &xbar_pdata,
95
89
.probe = idma32_dma_probe,
96
90
.remove = idma32_dma_remove,
91
+
.m_master = 0,
92
+
.p_master = 0,
97
93
};
98
94
99
95
#endif /* _DMA_DW_INTERNAL_H */
drivers/dma/dw/pci.c | +2 -2
drivers/dma/fsl-edma-common.h | +1
drivers/dma/fsl-edma-main.c | +36 -5
···
417
417
};
418
418
MODULE_DEVICE_TABLE(of, fsl_edma_dt_ids);
419
419
420
+
static void fsl_edma3_detach_pd(struct fsl_edma_engine *fsl_edma)
421
+
{
422
+
struct fsl_edma_chan *fsl_chan;
423
+
int i;
424
+
425
+
for (i = 0; i < fsl_edma->n_chans; i++) {
426
+
if (fsl_edma->chan_masked & BIT(i))
427
+
continue;
428
+
fsl_chan = &fsl_edma->chans[i];
429
+
if (fsl_chan->pd_dev_link)
430
+
device_link_del(fsl_chan->pd_dev_link);
431
+
if (fsl_chan->pd_dev) {
432
+
dev_pm_domain_detach(fsl_chan->pd_dev, false);
433
+
pm_runtime_dont_use_autosuspend(fsl_chan->pd_dev);
434
+
pm_runtime_set_suspended(fsl_chan->pd_dev);
435
+
}
436
+
}
437
+
}
438
+
439
+
static void devm_fsl_edma3_detach_pd(void *data)
440
+
{
441
+
fsl_edma3_detach_pd(data);
442
+
}
443
+
420
444
static int fsl_edma3_attach_pd(struct platform_device *pdev, struct fsl_edma_engine *fsl_edma)
421
445
{
422
446
struct fsl_edma_chan *fsl_chan;
423
-
struct device_link *link;
424
447
struct device *pd_chan;
425
448
struct device *dev;
426
449
int i;
···
459
436
pd_chan = dev_pm_domain_attach_by_id(dev, i);
460
437
if (IS_ERR_OR_NULL(pd_chan)) {
461
438
dev_err(dev, "Failed attach pd %d\n", i);
462
-
return -EINVAL;
439
+
goto detach;
463
440
}
464
441
465
-
link = device_link_add(dev, pd_chan, DL_FLAG_STATELESS |
442
+
fsl_chan->pd_dev_link = device_link_add(dev, pd_chan, DL_FLAG_STATELESS |
466
443
DL_FLAG_PM_RUNTIME |
467
444
DL_FLAG_RPM_ACTIVE);
468
-
if (!link) {
445
+
if (!fsl_chan->pd_dev_link) {
469
446
dev_err(dev, "Failed to add device_link to %d\n", i);
470
-
return -EINVAL;
447
+
dev_pm_domain_detach(pd_chan, false);
448
+
goto detach;
471
449
}
472
450
473
451
fsl_chan->pd_dev = pd_chan;
···
479
455
}
480
456
481
457
return 0;
458
+
459
+
detach:
460
+
fsl_edma3_detach_pd(fsl_edma);
461
+
return -EINVAL;
482
462
}
483
463
484
464
static int fsl_edma_probe(struct platform_device *pdev)
···
570
542
571
543
if (drvdata->flags & FSL_EDMA_DRV_HAS_PD) {
572
544
ret = fsl_edma3_attach_pd(pdev, fsl_edma);
545
+
if (ret)
546
+
return ret;
547
+
ret = devm_add_action_or_reset(&pdev->dev, devm_fsl_edma3_detach_pd, fsl_edma);
573
548
if (ret)
574
549
return ret;
575
550
}
drivers/dma/loongson2-apb-dma.c | +1 -1
···
31
31
#define LDMA_ASK_VALID BIT(2)
32
32
#define LDMA_START BIT(3) /* DMA start operation */
33
33
#define LDMA_STOP BIT(4) /* DMA stop operation */
34
-
#define LDMA_CONFIG_MASK GENMASK(4, 0) /* DMA controller config bits mask */
34
+
#define LDMA_CONFIG_MASK GENMASK_ULL(4, 0) /* DMA controller config bits mask */
35
35
36
36
/* Bitfields in ndesc_addr field of HW descriptor */
37
37
#define LDMA_DESC_EN BIT(0) /*1: The next descriptor is valid */
drivers/dma/mv_xor.c | +2
···
1388
1388
irq = irq_of_parse_and_map(np, 0);
1389
1389
if (!irq) {
1390
1390
ret = -ENODEV;
1391
+
of_node_put(np);
1391
1392
goto err_channel_add;
1392
1393
}
1393
1394
···
1397
1396
if (IS_ERR(chan)) {
1398
1397
ret = PTR_ERR(chan);
1399
1398
irq_dispose_mapping(irq);
1399
+
of_node_put(np);
1400
1400
goto err_channel_add;
1401
1401
}
1402
1402
drivers/dma/tegra186-gpc-dma.c | +10
···
231
231
bool config_init;
232
232
char name[30];
233
233
enum dma_transfer_direction sid_dir;
234
+
enum dma_status status;
234
235
int id;
235
236
int irq;
236
237
int slave_id;
···
394
393
tegra_dma_dump_chan_regs(tdc);
395
394
}
396
395
396
+
tdc->status = DMA_PAUSED;
397
+
397
398
return ret;
398
399
}
399
400
···
422
419
val = tdc_read(tdc, TEGRA_GPCDMA_CHAN_CSRE);
423
420
val &= ~TEGRA_GPCDMA_CHAN_CSRE_PAUSE;
424
421
tdc_write(tdc, TEGRA_GPCDMA_CHAN_CSRE, val);
422
+
423
+
tdc->status = DMA_IN_PROGRESS;
425
424
}
426
425
427
426
static int tegra_dma_device_resume(struct dma_chan *dc)
···
549
544
550
545
tegra_dma_sid_free(tdc);
551
546
tdc->dma_desc = NULL;
547
+
tdc->status = DMA_COMPLETE;
552
548
}
553
549
554
550
static void tegra_dma_chan_decode_error(struct tegra_dma_channel *tdc,
···
722
716
tdc->dma_desc = NULL;
723
717
}
724
718
719
+
tdc->status = DMA_COMPLETE;
725
720
tegra_dma_sid_free(tdc);
726
721
vchan_get_all_descriptors(&tdc->vc, &head);
727
722
spin_unlock_irqrestore(&tdc->vc.lock, flags);
···
775
768
ret = dma_cookie_status(dc, cookie, txstate);
776
769
if (ret == DMA_COMPLETE)
777
770
return ret;
771
+
772
+
if (tdc->status == DMA_PAUSED)
773
+
ret = DMA_PAUSED;
778
774
779
775
spin_lock_irqsave(&tdc->vc.lock, flags);
780
776
vd = vchan_find_desc(&tdc->vc, cookie);
drivers/gpu/drm/bridge/adv7511/adv7511_audio.c | +12 -2
···
153
153
ADV7511_AUDIO_CFG3_LEN_MASK, len);
154
154
regmap_update_bits(adv7511->regmap, ADV7511_REG_I2C_FREQ_ID_CFG,
155
155
ADV7511_I2C_FREQ_ID_CFG_RATE_MASK, rate << 4);
156
-
regmap_write(adv7511->regmap, 0x73, 0x1);
156
+
157
+
/* send current Audio infoframe values while updating */
158
+
regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE,
159
+
BIT(5), BIT(5));
160
+
161
+
regmap_write(adv7511->regmap, ADV7511_REG_AUDIO_INFOFRAME(0), 0x1);
162
+
163
+
/* use Audio infoframe updated info */
164
+
regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE,
165
+
BIT(5), 0);
157
166
158
167
return 0;
159
168
}
···
193
184
regmap_update_bits(adv7511->regmap, ADV7511_REG_GC(0),
194
185
BIT(7) | BIT(6), BIT(7));
195
186
/* use Audio infoframe updated info */
196
-
regmap_update_bits(adv7511->regmap, ADV7511_REG_GC(1),
187
+
regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE,
197
188
BIT(5), 0);
189
+
198
190
/* enable SPDIF receiver */
199
191
if (adv7511->audio_source == ADV7511_AUDIO_SOURCE_SPDIF)
200
192
regmap_update_bits(adv7511->regmap, ADV7511_REG_AUDIO_CONFIG,
drivers/gpu/drm/bridge/adv7511/adv7511_drv.c | +8 -2
···
1241
1241
return ret;
1242
1242
1243
1243
ret = adv7511_init_regulators(adv7511);
1244
-
if (ret)
1245
-
return dev_err_probe(dev, ret, "failed to init regulators\n");
1244
+
if (ret) {
1245
+
dev_err_probe(dev, ret, "failed to init regulators\n");
1246
+
goto err_of_node_put;
1247
+
}
1246
1248
1247
1249
/*
1248
1250
* The power down GPIO is optional. If present, toggle it from active to
···
1365
1363
i2c_unregister_device(adv7511->i2c_edid);
1366
1364
uninit_regulators:
1367
1365
adv7511_uninit_regulators(adv7511);
1366
+
err_of_node_put:
1367
+
of_node_put(adv7511->host_node);
1368
1368
1369
1369
return ret;
1370
1370
}
···
1374
1370
static void adv7511_remove(struct i2c_client *i2c)
1375
1371
{
1376
1372
struct adv7511 *adv7511 = i2c_get_clientdata(i2c);
1373
+
1374
+
of_node_put(adv7511->host_node);
1377
1375
1378
1376
adv7511_uninit_regulators(adv7511);
1379
1377
drivers/gpu/drm/bridge/adv7511/adv7533.c | +1 -3
···
172
172
173
173
of_property_read_u32(np, "adi,dsi-lanes", &num_lanes);
174
174
175
-
if (num_lanes < 1 || num_lanes > 4)
175
+
if (num_lanes < 2 || num_lanes > 4)
176
176
return -EINVAL;
177
177
178
178
adv->num_dsi_lanes = num_lanes;
···
180
180
adv->host_node = of_graph_get_remote_node(np, 0, 0);
181
181
if (!adv->host_node)
182
182
return -ENODEV;
183
-
184
-
of_node_put(adv->host_node);
185
183
186
184
adv->use_timing_gen = !of_property_read_bool(np,
187
185
"adi,disable-timing-generator");
drivers/gpu/drm/i915/display/intel_cx0_phy.c | +4 -8
···
2115
2115
0, C10_VDR_CTRL_MSGBUS_ACCESS,
2116
2116
MB_WRITE_COMMITTED);
2117
2117
2118
-
/* Custom width needs to be programmed to 0 for both the phy lanes */
2119
-
intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH,
2120
-
C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10,
2121
-
MB_WRITE_COMMITTED);
2122
-
intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1),
2123
-
0, C10_VDR_CTRL_UPDATE_CFG,
2124
-
MB_WRITE_COMMITTED);
2125
-
2126
2118
/* Program the pll values only for the master lane */
2127
2119
for (i = 0; i < ARRAY_SIZE(pll_state->pll); i++)
2128
2120
intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_PLL(i),
···
2124
2132
intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CMN(0), pll_state->cmn, MB_WRITE_COMMITTED);
2125
2133
intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_TX(0), pll_state->tx, MB_WRITE_COMMITTED);
2126
2134
2135
+
/* Custom width needs to be programmed to 0 for both the phy lanes */
2136
+
intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH,
2137
+
C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10,
2138
+
MB_WRITE_COMMITTED);
2127
2139
intel_cx0_rmw(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CONTROL(1),
2128
2140
0, C10_VDR_CTRL_MASTER_LANE | C10_VDR_CTRL_UPDATE_CFG,
2129
2141
MB_WRITE_COMMITTED);
drivers/gpu/drm/i915/gt/intel_rc6.c | +1 -1
···
133
133
GEN9_MEDIA_PG_ENABLE |
134
134
GEN11_MEDIA_SAMPLER_PG_ENABLE;
135
135
136
-
if (GRAPHICS_VER(gt->i915) >= 12) {
136
+
if (GRAPHICS_VER(gt->i915) >= 12 && !IS_DG1(gt->i915)) {
137
137
for (i = 0; i < I915_MAX_VCS; i++)
138
138
if (HAS_ENGINE(gt, _VCS(i)))
139
139
pg_enable |= (VDN_HCP_POWERGATE_ENABLE(i) |
drivers/gpu/drm/xe/xe_bo.c | +10 -2
···
724
724
new_mem->mem_type == XE_PL_SYSTEM) {
725
725
long timeout = dma_resv_wait_timeout(ttm_bo->base.resv,
726
726
DMA_RESV_USAGE_BOOKKEEP,
727
-
true,
727
+
false,
728
728
MAX_SCHEDULE_TIMEOUT);
729
729
if (timeout < 0) {
730
730
ret = timeout;
···
848
848
849
849
out:
850
850
if ((!ttm_bo->resource || ttm_bo->resource->mem_type == XE_PL_SYSTEM) &&
851
-
ttm_bo->ttm)
851
+
ttm_bo->ttm) {
852
+
long timeout = dma_resv_wait_timeout(ttm_bo->base.resv,
853
+
DMA_RESV_USAGE_KERNEL,
854
+
false,
855
+
MAX_SCHEDULE_TIMEOUT);
856
+
if (timeout < 0)
857
+
ret = timeout;
858
+
852
859
xe_tt_unmap_sg(ttm_bo->ttm);
860
+
}
853
861
854
862
return ret;
855
863
}
drivers/gpu/drm/xe/xe_devcoredump.c | +14 -1
···
109
109
drm_puts(&p, "\n**** GuC CT ****\n");
110
110
xe_guc_ct_snapshot_print(ss->guc.ct, &p);
111
111
112
-
drm_puts(&p, "\n**** Contexts ****\n");
112
+
/*
113
+
* Don't add a new section header here because the mesa debug decoder
114
+
* tool expects the context information to be in the 'GuC CT' section.
115
+
*/
116
+
/* drm_puts(&p, "\n**** Contexts ****\n"); */
113
117
xe_guc_exec_queue_snapshot_print(ss->ge, &p);
114
118
115
119
drm_puts(&p, "\n**** Job ****\n");
···
366
362
const u32 *blob32 = (const u32 *)blob;
367
363
char buff[ASCII85_BUFSZ], *line_buff;
368
364
size_t line_pos = 0;
365
+
366
+
/*
367
+
* Splitting blobs across multiple lines is not compatible with the mesa
368
+
* debug decoder tool. Note that even dropping the explicit '\n' below
369
+
* doesn't help because the GuC log is so big some underlying implementation
370
+
* still splits the lines at 512K characters. So just bail completely for
371
+
* the moment.
372
+
*/
373
+
return;
369
374
370
375
#define DMESG_MAX_LINE_LEN 800
371
376
#define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "\n\0" */
drivers/gpu/drm/xe/xe_exec_queue.c | +9
···
8
8
#include <linux/nospec.h>
9
9
10
10
#include <drm/drm_device.h>
11
+
#include <drm/drm_drv.h>
11
12
#include <drm/drm_file.h>
12
13
#include <uapi/drm/xe_drm.h>
13
14
···
763
762
*/
764
763
void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)
765
764
{
765
+
struct xe_device *xe = gt_to_xe(q->gt);
766
766
struct xe_file *xef;
767
767
struct xe_lrc *lrc;
768
768
u32 old_ts, new_ts;
769
+
int idx;
769
770
770
771
/*
771
772
* Jobs that are run during driver load may use an exec_queue, but are
···
775
772
* for kernel specific work.
776
773
*/
777
774
if (!q->vm || !q->vm->xef)
775
+
return;
776
+
777
+
/* Synchronize with unbind while holding the xe file open */
778
+
if (!drm_dev_enter(&xe->drm, &idx))
778
779
return;
779
780
780
781
xef = q->vm->xef;
···
794
787
lrc = q->lrc[0];
795
788
new_ts = xe_lrc_update_timestamp(lrc, &old_ts);
796
789
xef->run_ticks[q->class] += (new_ts - old_ts) * q->width;
790
+
791
+
drm_dev_exit(idx);
797
792
}
798
793
799
794
/**
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | +1 -1
···
2046
2046
valid_any = valid_any || (valid_ggtt && is_primary);
2047
2047
2048
2048
if (IS_DGFX(xe)) {
2049
-
bool valid_lmem = pf_get_vf_config_ggtt(primary_gt, vfid);
2049
+
bool valid_lmem = pf_get_vf_config_lmem(primary_gt, vfid);
2050
2050
2051
2051
valid_any = valid_any || (valid_lmem && is_primary);
2052
2052
valid_all = valid_all && valid_lmem;
drivers/gpu/drm/xe/xe_oa.c | +45 -89
···
74
74
struct rcu_head rcu;
75
75
};
76
76
77
-
struct flex {
78
-
struct xe_reg reg;
79
-
u32 offset;
80
-
u32 value;
81
-
};
82
-
83
77
struct xe_oa_open_param {
84
78
struct xe_file *xef;
85
79
u32 oa_unit_id;
···
590
596
return ret;
591
597
}
592
598
599
+
static void xe_oa_lock_vma(struct xe_exec_queue *q)
600
+
{
601
+
if (q->vm) {
602
+
down_read(&q->vm->lock);
603
+
xe_vm_lock(q->vm, false);
604
+
}
605
+
}
606
+
607
+
static void xe_oa_unlock_vma(struct xe_exec_queue *q)
608
+
{
609
+
if (q->vm) {
610
+
xe_vm_unlock(q->vm);
611
+
up_read(&q->vm->lock);
612
+
}
613
+
}
614
+
593
615
static struct dma_fence *xe_oa_submit_bb(struct xe_oa_stream *stream, enum xe_oa_submit_deps deps,
594
616
struct xe_bb *bb)
595
617
{
618
+
struct xe_exec_queue *q = stream->exec_q ?: stream->k_exec_q;
596
619
struct xe_sched_job *job;
597
620
struct dma_fence *fence;
598
621
int err = 0;
599
622
600
-
/* Kernel configuration is issued on stream->k_exec_q, not stream->exec_q */
601
-
job = xe_bb_create_job(stream->k_exec_q, bb);
623
+
xe_oa_lock_vma(q);
624
+
625
+
job = xe_bb_create_job(q, bb);
602
626
if (IS_ERR(job)) {
603
627
err = PTR_ERR(job);
604
628
goto exit;
605
629
}
630
+
job->ggtt = true;
606
631
607
632
if (deps == XE_OA_SUBMIT_ADD_DEPS) {
608
633
for (int i = 0; i < stream->num_syncs && !err; i++)
···
636
623
fence = dma_fence_get(&job->drm.s_fence->finished);
637
624
xe_sched_job_push(job);
638
625
626
+
xe_oa_unlock_vma(q);
627
+
639
628
return fence;
640
629
err_put_job:
641
630
xe_sched_job_put(job);
642
631
exit:
632
+
xe_oa_unlock_vma(q);
643
633
return ERR_PTR(err);
644
634
}
645
635
···
691
675
dma_fence_put(stream->last_fence);
692
676
}
693
677
694
-
static void xe_oa_store_flex(struct xe_oa_stream *stream, struct xe_lrc *lrc,
695
-
struct xe_bb *bb, const struct flex *flex, u32 count)
696
-
{
697
-
u32 offset = xe_bo_ggtt_addr(lrc->bo);
698
-
699
-
do {
700
-
bb->cs[bb->len++] = MI_STORE_DATA_IMM | MI_SDI_GGTT | MI_SDI_NUM_DW(1);
701
-
bb->cs[bb->len++] = offset + flex->offset * sizeof(u32);
702
-
bb->cs[bb->len++] = 0;
703
-
bb->cs[bb->len++] = flex->value;
704
-
705
-
} while (flex++, --count);
706
-
}
707
-
708
-
static int xe_oa_modify_ctx_image(struct xe_oa_stream *stream, struct xe_lrc *lrc,
709
-
const struct flex *flex, u32 count)
678
+
static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri, u32 count)
710
679
{
711
680
struct dma_fence *fence;
712
681
struct xe_bb *bb;
713
682
int err;
714
683
715
-
bb = xe_bb_new(stream->gt, 4 * count, false);
684
+
bb = xe_bb_new(stream->gt, 2 * count + 1, false);
716
685
if (IS_ERR(bb)) {
717
686
err = PTR_ERR(bb);
718
687
goto exit;
719
688
}
720
689
721
-
xe_oa_store_flex(stream, lrc, bb, flex, count);
722
-
723
-
fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_NO_DEPS, bb);
724
-
if (IS_ERR(fence)) {
725
-
err = PTR_ERR(fence);
726
-
goto free_bb;
727
-
}
728
-
xe_bb_free(bb, fence);
729
-
dma_fence_put(fence);
730
-
731
-
return 0;
732
-
free_bb:
733
-
xe_bb_free(bb, NULL);
734
-
exit:
735
-
return err;
736
-
}
737
-
738
-
static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri)
739
-
{
740
-
struct dma_fence *fence;
741
-
struct xe_bb *bb;
742
-
int err;
743
-
744
-
bb = xe_bb_new(stream->gt, 3, false);
745
-
if (IS_ERR(bb)) {
746
-
err = PTR_ERR(bb);
747
-
goto exit;
748
-
}
749
-
750
-
write_cs_mi_lri(bb, reg_lri, 1);
690
+
write_cs_mi_lri(bb, reg_lri, count);
751
691
752
692
fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_NO_DEPS, bb);
753
693
if (IS_ERR(fence)) {
···
723
751
static int xe_oa_configure_oar_context(struct xe_oa_stream *stream, bool enable)
724
752
{
725
753
const struct xe_oa_format *format = stream->oa_buffer.format;
726
-
struct xe_lrc *lrc = stream->exec_q->lrc[0];
727
-
u32 regs_offset = xe_lrc_regs_offset(lrc) / sizeof(u32);
728
754
u32 oacontrol = __format_to_oactrl(format, OAR_OACONTROL_COUNTER_SEL_MASK) |
729
755
(enable ? OAR_OACONTROL_COUNTER_ENABLE : 0);
730
756
731
-
struct flex regs_context[] = {
757
+
struct xe_oa_reg reg_lri[] = {
732
758
{
733
759
OACTXCONTROL(stream->hwe->mmio_base),
734
-
stream->oa->ctx_oactxctrl_offset[stream->hwe->class] + 1,
735
760
enable ? OA_COUNTER_RESUME : 0,
736
761
},
737
762
{
763
+
OAR_OACONTROL,
764
+
oacontrol,
765
+
},
766
+
{
738
767
RING_CONTEXT_CONTROL(stream->hwe->mmio_base),
739
-
regs_offset + CTX_CONTEXT_CONTROL,
740
-
_MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE),
768
+
_MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE,
769
+
enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0)
741
770
},
742
771
};
743
-
struct xe_oa_reg reg_lri = { OAR_OACONTROL, oacontrol };
744
-
int err;
745
772
746
-
/* Modify stream hwe context image with regs_context */
747
-
err = xe_oa_modify_ctx_image(stream, stream->exec_q->lrc[0],
748
-
regs_context, ARRAY_SIZE(regs_context));
749
-
if (err)
750
-
return err;
751
-
752
-
/* Apply reg_lri using LRI */
753
-
return xe_oa_load_with_lri(stream, &reg_lri);
773
+
return xe_oa_load_with_lri(stream, reg_lri, ARRAY_SIZE(reg_lri));
754
774
}
755
775
756
776
static int xe_oa_configure_oac_context(struct xe_oa_stream *stream, bool enable)
757
777
{
758
778
const struct xe_oa_format *format = stream->oa_buffer.format;
759
-
struct xe_lrc *lrc = stream->exec_q->lrc[0];
760
-
u32 regs_offset = xe_lrc_regs_offset(lrc) / sizeof(u32);
761
779
u32 oacontrol = __format_to_oactrl(format, OAR_OACONTROL_COUNTER_SEL_MASK) |
762
780
(enable ? OAR_OACONTROL_COUNTER_ENABLE : 0);
763
-
struct flex regs_context[] = {
781
+
struct xe_oa_reg reg_lri[] = {
764
782
{
765
783
OACTXCONTROL(stream->hwe->mmio_base),
766
-
stream->oa->ctx_oactxctrl_offset[stream->hwe->class] + 1,
767
784
enable ? OA_COUNTER_RESUME : 0,
768
785
},
769
786
{
787
+
OAC_OACONTROL,
788
+
oacontrol
789
+
},
790
+
{
770
791
RING_CONTEXT_CONTROL(stream->hwe->mmio_base),
771
-
regs_offset + CTX_CONTEXT_CONTROL,
772
-
_MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE) |
792
+
_MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE,
793
+
enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0) |
773
794
_MASKED_FIELD(CTX_CTRL_RUN_ALONE, enable ? CTX_CTRL_RUN_ALONE : 0),
774
795
},
775
796
};
776
-
struct xe_oa_reg reg_lri = { OAC_OACONTROL, oacontrol };
777
-
int err;
778
797
779
798
/* Set ccs select to enable programming of OAC_OACONTROL */
780
799
xe_mmio_write32(&stream->gt->mmio, __oa_regs(stream)->oa_ctrl,
781
800
__oa_ccs_select(stream));
782
801
783
-
/* Modify stream hwe context image with regs_context */
784
-
err = xe_oa_modify_ctx_image(stream, stream->exec_q->lrc[0],
785
-
regs_context, ARRAY_SIZE(regs_context));
786
-
if (err)
787
-
return err;
788
-
789
-
/* Apply reg_lri using LRI */
790
-
return xe_oa_load_with_lri(stream, &reg_lri);
802
+
return xe_oa_load_with_lri(stream, reg_lri, ARRAY_SIZE(reg_lri));
791
803
}
792
804
793
805
static int xe_oa_configure_oa_context(struct xe_oa_stream *stream, bool enable)
···
2022
2066
if (XE_IOCTL_DBG(oa->xe, !param.exec_q))
2023
2067
return -ENOENT;
2024
2068
2025
-
if (param.exec_q->width > 1)
2026
-
drm_dbg(&oa->xe->drm, "exec_q->width > 1, programming only exec_q->lrc[0]\n");
2069
+
if (XE_IOCTL_DBG(oa->xe, param.exec_q->width > 1))
2070
+
return -EOPNOTSUPP;
2027
2071
}
2028
2072
2029
2073
/*
drivers/gpu/drm/xe/xe_ring_ops.c | +4 -1
drivers/gpu/drm/xe/xe_sched_job_types.h | +2
drivers/i2c/busses/i2c-imx.c | +4 -5
···
335
335
{ .compatible = "fsl,imx6sll-i2c", .data = &imx6_i2c_hwdata, },
336
336
{ .compatible = "fsl,imx6sx-i2c", .data = &imx6_i2c_hwdata, },
337
337
{ .compatible = "fsl,imx6ul-i2c", .data = &imx6_i2c_hwdata, },
338
+
{ .compatible = "fsl,imx7d-i2c", .data = &imx6_i2c_hwdata, },
338
339
{ .compatible = "fsl,imx7s-i2c", .data = &imx6_i2c_hwdata, },
339
340
{ .compatible = "fsl,imx8mm-i2c", .data = &imx6_i2c_hwdata, },
340
341
{ .compatible = "fsl,imx8mn-i2c", .data = &imx6_i2c_hwdata, },
···
533
532
534
533
static int i2c_imx_bus_busy(struct imx_i2c_struct *i2c_imx, int for_busy, bool atomic)
535
534
{
535
+
bool multi_master = i2c_imx->multi_master;
536
536
unsigned long orig_jiffies = jiffies;
537
537
unsigned int temp;
538
-
539
-
if (!i2c_imx->multi_master)
540
-
return 0;
541
538
542
539
while (1) {
543
540
temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2SR);
544
541
545
542
/* check for arbitration lost */
546
-
if (temp & I2SR_IAL) {
543
+
if (multi_master && (temp & I2SR_IAL)) {
547
544
i2c_imx_clear_irq(i2c_imx, I2SR_IAL);
548
545
return -EAGAIN;
549
546
}
550
547
551
-
if (for_busy && (temp & I2SR_IBB)) {
548
+
if (for_busy && (!multi_master || (temp & I2SR_IBB))) {
552
549
i2c_imx->stopped = 0;
553
550
break;
554
551
}
drivers/i2c/busses/i2c-microchip-corei2c.c | +96 -30
···
93
93
* @base: pointer to register struct
94
94
* @dev: device reference
95
95
* @i2c_clk: clock reference for i2c input clock
96
+
* @msg_queue: pointer to the messages requiring sending
96
97
* @buf: pointer to msg buffer for easier use
97
98
* @msg_complete: xfer completion object
98
99
* @adapter: core i2c abstraction
99
100
* @msg_err: error code for completed message
100
101
* @bus_clk_rate: current i2c bus clock rate
101
102
* @isr_status: cached copy of local ISR status
103
+
* @total_num: total number of messages to be sent/received
104
+
* @current_num: index of the current message being sent/received
102
105
* @msg_len: number of bytes transferred in msg
103
106
* @addr: address of the current slave
107
+
* @restart_needed: whether or not a repeated start is required after current message
104
108
*/
105
109
struct mchp_corei2c_dev {
106
110
void __iomem *base;
107
111
struct device *dev;
108
112
struct clk *i2c_clk;
113
+
struct i2c_msg *msg_queue;
109
114
u8 *buf;
110
115
struct completion msg_complete;
111
116
struct i2c_adapter adapter;
112
117
int msg_err;
118
+
int total_num;
119
+
int current_num;
113
120
u32 bus_clk_rate;
114
121
u32 isr_status;
115
122
u16 msg_len;
116
123
u8 addr;
124
+
bool restart_needed;
117
125
};
118
126
119
127
static void mchp_corei2c_core_disable(struct mchp_corei2c_dev *idev)
···
230
222
return 0;
231
223
}
232
224
225
+
static void mchp_corei2c_next_msg(struct mchp_corei2c_dev *idev)
226
+
{
227
+
struct i2c_msg *this_msg;
228
+
u8 ctrl;
229
+
230
+
if (idev->current_num >= idev->total_num) {
231
+
complete(&idev->msg_complete);
232
+
return;
233
+
}
234
+
235
+
/*
236
+
* If there's been an error, the isr needs to return control
237
+
* to the "main" part of the driver, so as not to keep sending
238
+
* messages once it completes and clears the SI bit.
239
+
*/
240
+
if (idev->msg_err) {
241
+
complete(&idev->msg_complete);
242
+
return;
243
+
}
244
+
245
+
this_msg = idev->msg_queue++;
246
+
247
+
if (idev->current_num < (idev->total_num - 1)) {
248
+
struct i2c_msg *next_msg = idev->msg_queue;
249
+
250
+
idev->restart_needed = next_msg->flags & I2C_M_RD;
251
+
} else {
252
+
idev->restart_needed = false;
253
+
}
254
+
255
+
idev->addr = i2c_8bit_addr_from_msg(this_msg);
256
+
idev->msg_len = this_msg->len;
257
+
idev->buf = this_msg->buf;
258
+
259
+
ctrl = readb(idev->base + CORE_I2C_CTRL);
260
+
ctrl |= CTRL_STA;
261
+
writeb(ctrl, idev->base + CORE_I2C_CTRL);
262
+
263
+
idev->current_num++;
264
+
}
265
+
233
266
static irqreturn_t mchp_corei2c_handle_isr(struct mchp_corei2c_dev *idev)
234
267
{
235
268
u32 status = idev->isr_status;
···
287
238
ctrl &= ~CTRL_STA;
288
239
writeb(idev->addr, idev->base + CORE_I2C_DATA);
289
240
writeb(ctrl, idev->base + CORE_I2C_CTRL);
290
-
if (idev->msg_len == 0)
291
-
finished = true;
292
241
break;
293
242
case STATUS_M_ARB_LOST:
294
243
idev->msg_err = -EAGAIN;
···
294
247
break;
295
248
case STATUS_M_SLAW_ACK:
296
249
case STATUS_M_TX_DATA_ACK:
297
-
if (idev->msg_len > 0)
250
+
if (idev->msg_len > 0) {
298
251
mchp_corei2c_fill_tx(idev);
299
-
else
300
-
last_byte = true;
252
+
} else {
253
+
if (idev->restart_needed)
254
+
finished = true;
255
+
else
256
+
last_byte = true;
257
+
}
301
258
break;
302
259
case STATUS_M_TX_DATA_NACK:
303
260
case STATUS_M_SLAR_NACK:
···
338
287
mchp_corei2c_stop(idev);
339
288
340
289
if (last_byte || finished)
341
-
complete(&idev->msg_complete);
290
+
mchp_corei2c_next_msg(idev);
342
291
343
292
return IRQ_HANDLED;
344
293
}
···
362
311
return ret;
363
312
}
364
313
365
-
static int mchp_corei2c_xfer_msg(struct mchp_corei2c_dev *idev,
366
-
struct i2c_msg *msg)
314
+
static int mchp_corei2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
315
+
int num)
367
316
{
368
-
u8 ctrl;
317
+
struct mchp_corei2c_dev *idev = i2c_get_adapdata(adap);
318
+
struct i2c_msg *this_msg = msgs;
369
319
unsigned long time_left;
370
-
371
-
idev->addr = i2c_8bit_addr_from_msg(msg);
372
-
idev->msg_len = msg->len;
373
-
idev->buf = msg->buf;
374
-
idev->msg_err = 0;
375
-
376
-
reinit_completion(&idev->msg_complete);
320
+
u8 ctrl;
377
321
378
322
mchp_corei2c_core_enable(idev);
379
323
324
+
/*
325
+
* The isr controls the flow of a transfer, this info needs to be saved
326
+
* to a location that it can access the queue information from.
327
+
*/
328
+
idev->restart_needed = false;
329
+
idev->msg_queue = msgs;
330
+
idev->total_num = num;
331
+
idev->current_num = 0;
332
+
333
+
/*
334
+
* But the first entry to the isr is triggered by the start in this
335
+
* function, so the first message needs to be "dequeued".
336
+
*/
337
+
idev->addr = i2c_8bit_addr_from_msg(this_msg);
338
+
idev->msg_len = this_msg->len;
339
+
idev->buf = this_msg->buf;
340
+
idev->msg_err = 0;
341
+
342
+
if (idev->total_num > 1) {
343
+
struct i2c_msg *next_msg = msgs + 1;
344
+
345
+
idev->restart_needed = next_msg->flags & I2C_M_RD;
346
+
}
347
+
348
+
idev->current_num++;
349
+
idev->msg_queue++;
350
+
351
+
reinit_completion(&idev->msg_complete);
352
+
353
+
/*
354
+
* Send the first start to pass control to the isr
355
+
*/
380
356
ctrl = readb(idev->base + CORE_I2C_CTRL);
381
357
ctrl |= CTRL_STA;
382
358
writeb(ctrl, idev->base + CORE_I2C_CTRL);
···
413
335
if (!time_left)
414
336
return -ETIMEDOUT;
415
337
416
-
return idev->msg_err;
417
-
}
418
-
419
-
static int mchp_corei2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
420
-
int num)
421
-
{
422
-
struct mchp_corei2c_dev *idev = i2c_get_adapdata(adap);
423
-
int i, ret;
424
-
425
-
for (i = 0; i < num; i++) {
426
-
ret = mchp_corei2c_xfer_msg(idev, msgs++);
427
-
if (ret)
428
-
return ret;
429
-
}
338
+
if (idev->msg_err)
339
+
return idev->msg_err;
430
340
431
341
return num;
432
342
}
drivers/infiniband/core/cma.c | +16
···
690
690
int bound_if_index = dev_addr->bound_dev_if;
691
691
int dev_type = dev_addr->dev_type;
692
692
struct net_device *ndev = NULL;
693
+
struct net_device *pdev = NULL;
693
694
694
695
if (!rdma_dev_access_netns(device, id_priv->id.route.addr.dev_addr.net))
695
696
goto out;
···
715
714
716
715
rcu_read_lock();
717
716
ndev = rcu_dereference(sgid_attr->ndev);
717
+
if (ndev->ifindex != bound_if_index) {
718
+
pdev = dev_get_by_index_rcu(dev_addr->net, bound_if_index);
719
+
if (pdev) {
720
+
if (is_vlan_dev(pdev)) {
721
+
pdev = vlan_dev_real_dev(pdev);
722
+
if (ndev->ifindex == pdev->ifindex)
723
+
bound_if_index = pdev->ifindex;
724
+
}
725
+
if (is_vlan_dev(ndev)) {
726
+
pdev = vlan_dev_real_dev(ndev);
727
+
if (bound_if_index == pdev->ifindex)
728
+
bound_if_index = ndev->ifindex;
729
+
}
730
+
}
731
+
}
718
732
if (!net_eq(dev_net(ndev), dev_addr->net) ||
719
733
ndev->ifindex != bound_if_index) {
720
734
rdma_put_gid_attr(sgid_attr);
drivers/infiniband/core/nldev.c | +1 -1
drivers/infiniband/core/uverbs_cmd.c | +9 -7
···
161
161
{
162
162
const void __user *res = iter->cur;
163
163
164
-
if (iter->cur + len > iter->end)
164
+
if (len > iter->end - iter->cur)
165
165
return (void __force __user *)ERR_PTR(-ENOSPC);
166
166
iter->cur += len;
167
167
return res;
···
2008
2008
ret = uverbs_request_start(attrs, &iter, &cmd, sizeof(cmd));
2009
2009
if (ret)
2010
2010
return ret;
2011
-
wqes = uverbs_request_next_ptr(&iter, cmd.wqe_size * cmd.wr_count);
2011
+
wqes = uverbs_request_next_ptr(&iter, size_mul(cmd.wqe_size,
2012
+
cmd.wr_count));
2012
2013
if (IS_ERR(wqes))
2013
2014
return PTR_ERR(wqes);
2014
-
sgls = uverbs_request_next_ptr(
2015
-
&iter, cmd.sge_count * sizeof(struct ib_uverbs_sge));
2015
+
sgls = uverbs_request_next_ptr(&iter,
2016
+
size_mul(cmd.sge_count,
2017
+
sizeof(struct ib_uverbs_sge)));
2016
2018
if (IS_ERR(sgls))
2017
2019
return PTR_ERR(sgls);
2018
2020
ret = uverbs_request_finish(&iter);
···
2200
2198
if (wqe_size < sizeof(struct ib_uverbs_recv_wr))
2201
2199
return ERR_PTR(-EINVAL);
2202
2200
2203
-
wqes = uverbs_request_next_ptr(iter, wqe_size * wr_count);
2201
+
wqes = uverbs_request_next_ptr(iter, size_mul(wqe_size, wr_count));
2204
2202
if (IS_ERR(wqes))
2205
2203
return ERR_CAST(wqes);
2206
-
sgls = uverbs_request_next_ptr(
2207
-
iter, sge_count * sizeof(struct ib_uverbs_sge));
2204
+
sgls = uverbs_request_next_ptr(iter, size_mul(sge_count,
2205
+
sizeof(struct ib_uverbs_sge)));
2208
2206
if (IS_ERR(sgls))
2209
2207
return ERR_CAST(sgls);
2210
2208
ret = uverbs_request_finish(iter);
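The uverbs_request_next_ptr() changes above replace raw u32 multiplications with size_mul() so that a userspace-controlled wqe_size * wr_count cannot wrap before the bounds check. A minimal userspace sketch of the saturating-multiply idea, assumed to match the kernel helper's behaviour of returning SIZE_MAX on overflow (the helper below is hypothetical, not the kernel implementation):

  #include <stdint.h>
  #include <stddef.h>
  #include <stdio.h>

  /* Saturate instead of wrapping: a saturated length can never pass a
   * "len > remaining" bounds check, whereas a wrapped u32 product could. */
  static size_t size_mul_sat(size_t a, size_t b)
  {
          size_t r;

          if (__builtin_mul_overflow(a, b, &r))
                  return SIZE_MAX;
          return r;
  }

  int main(void)
  {
          uint32_t wqe_size = 0x10000000, wr_count = 0x100;  /* hypothetical userspace input */

          /* The 32-bit product wraps to 0; the size_t computation does not. */
          printf("wrapped: %u, saturated: %zu\n",
                 (uint32_t)(wqe_size * wr_count), size_mul_sat(wqe_size, wr_count));
          return 0;
  }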
drivers/infiniband/hw/bnxt_re/ib_verbs.c | +25 -25
···
199
199
200
200
ib_attr->vendor_id = rdev->en_dev->pdev->vendor;
201
201
ib_attr->vendor_part_id = rdev->en_dev->pdev->device;
202
-
ib_attr->hw_ver = rdev->en_dev->pdev->subsystem_device;
202
+
ib_attr->hw_ver = rdev->en_dev->pdev->revision;
203
203
ib_attr->max_qp = dev_attr->max_qp;
204
204
ib_attr->max_qp_wr = dev_attr->max_qp_wqes;
205
205
ib_attr->device_cap_flags =
···
967
967
unsigned int flags;
968
968
int rc;
969
969
970
+
bnxt_re_debug_rem_qpinfo(rdev, qp);
971
+
970
972
bnxt_qplib_flush_cqn_wq(&qp->qplib_qp);
971
973
972
974
rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp);
973
-
if (rc) {
975
+
if (rc)
974
976
ibdev_err(&rdev->ibdev, "Failed to destroy HW QP");
975
-
return rc;
976
-
}
977
977
978
978
if (rdma_is_kernel_res(&qp->ib_qp.res)) {
979
979
flags = bnxt_re_lock_cqs(qp);
···
983
983
984
984
bnxt_qplib_free_qp_res(&rdev->qplib_res, &qp->qplib_qp);
985
985
986
-
if (ib_qp->qp_type == IB_QPT_GSI && rdev->gsi_ctx.gsi_sqp) {
987
-
rc = bnxt_re_destroy_gsi_sqp(qp);
988
-
if (rc)
989
-
return rc;
990
-
}
986
+
if (ib_qp->qp_type == IB_QPT_GSI && rdev->gsi_ctx.gsi_sqp)
987
+
bnxt_re_destroy_gsi_sqp(qp);
991
988
992
989
mutex_lock(&rdev->qp_lock);
993
990
list_del(&qp->list);
···
994
997
atomic_dec(&rdev->stats.res.rc_qp_count);
995
998
else if (qp->qplib_qp.type == CMDQ_CREATE_QP_TYPE_UD)
996
999
atomic_dec(&rdev->stats.res.ud_qp_count);
997
-
998
-
bnxt_re_debug_rem_qpinfo(rdev, qp);
999
1000
1000
1001
ib_umem_release(qp->rumem);
1001
1002
ib_umem_release(qp->sumem);
···
2162
2167
}
2163
2168
}
2164
2169
2165
-
if (qp_attr_mask & IB_QP_PATH_MTU) {
2166
-
qp->qplib_qp.modify_flags |=
2167
-
CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
2168
-
qp->qplib_qp.path_mtu = __from_ib_mtu(qp_attr->path_mtu);
2169
-
qp->qplib_qp.mtu = ib_mtu_enum_to_int(qp_attr->path_mtu);
2170
-
} else if (qp_attr->qp_state == IB_QPS_RTR) {
2171
-
qp->qplib_qp.modify_flags |=
2172
-
CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
2173
-
qp->qplib_qp.path_mtu =
2174
-
__from_ib_mtu(iboe_get_mtu(rdev->netdev->mtu));
2175
-
qp->qplib_qp.mtu =
2176
-
ib_mtu_enum_to_int(iboe_get_mtu(rdev->netdev->mtu));
2170
+
if (qp_attr->qp_state == IB_QPS_RTR) {
2171
+
enum ib_mtu qpmtu;
2172
+
2173
+
qpmtu = iboe_get_mtu(rdev->netdev->mtu);
2174
+
if (qp_attr_mask & IB_QP_PATH_MTU) {
2175
+
if (ib_mtu_enum_to_int(qp_attr->path_mtu) >
2176
+
ib_mtu_enum_to_int(qpmtu))
2177
+
return -EINVAL;
2178
+
qpmtu = qp_attr->path_mtu;
2179
+
}
2180
+
2181
+
qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
2182
+
qp->qplib_qp.path_mtu = __from_ib_mtu(qpmtu);
2183
+
qp->qplib_qp.mtu = ib_mtu_enum_to_int(qpmtu);
2177
2184
}
2178
2185
2179
2186
if (qp_attr_mask & IB_QP_TIMEOUT) {
···
2325
2328
qp_attr->retry_cnt = qplib_qp->retry_cnt;
2326
2329
qp_attr->rnr_retry = qplib_qp->rnr_retry;
2327
2330
qp_attr->min_rnr_timer = qplib_qp->min_rnr_timer;
2331
+
qp_attr->port_num = __to_ib_port_num(qplib_qp->port_id);
2328
2332
qp_attr->rq_psn = qplib_qp->rq.psn;
2329
2333
qp_attr->max_rd_atomic = qplib_qp->max_rd_atomic;
2330
2334
qp_attr->sq_psn = qplib_qp->sq.psn;
···
2822
2824
wr = wr->next;
2823
2825
}
2824
2826
bnxt_qplib_post_send_db(&qp->qplib_qp);
2825
-
bnxt_ud_qp_hw_stall_workaround(qp);
2827
+
if (!bnxt_qplib_is_chip_gen_p5_p7(qp->rdev->chip_ctx))
2828
+
bnxt_ud_qp_hw_stall_workaround(qp);
2826
2829
spin_unlock_irqrestore(&qp->sq_lock, flags);
2827
2830
return rc;
2828
2831
}
···
2935
2936
wr = wr->next;
2936
2937
}
2937
2938
bnxt_qplib_post_send_db(&qp->qplib_qp);
2938
-
bnxt_ud_qp_hw_stall_workaround(qp);
2939
+
if (!bnxt_qplib_is_chip_gen_p5_p7(qp->rdev->chip_ctx))
2940
+
bnxt_ud_qp_hw_stall_workaround(qp);
2939
2941
spin_unlock_irqrestore(&qp->sq_lock, flags);
2940
2942
2941
2943
return rc;
+4
drivers/infiniband/hw/bnxt_re/ib_verbs.h
···
268
268
int bnxt_re_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
269
269
void bnxt_re_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
270
270
271
+
static inline u32 __to_ib_port_num(u16 port_id)
272
+
{
273
+
return (u32)port_id + 1;
274
+
}
271
275
272
276
unsigned long bnxt_re_lock_cqs(struct bnxt_re_qp *qp);
273
277
void bnxt_re_unlock_cqs(struct bnxt_re_qp *qp, unsigned long flags);
+1
-7
drivers/infiniband/hw/bnxt_re/main.c
···
1715
1715
1716
1716
static void bnxt_re_dev_stop(struct bnxt_re_dev *rdev)
1717
1717
{
1718
-
int mask = IB_QP_STATE;
1719
-
struct ib_qp_attr qp_attr;
1720
1718
struct bnxt_re_qp *qp;
1721
1719
1722
-
qp_attr.qp_state = IB_QPS_ERR;
1723
1720
mutex_lock(&rdev->qp_lock);
1724
1721
list_for_each_entry(qp, &rdev->qp_list, list) {
1725
1722
/* Modify the state of all QPs except QP1/Shadow QP */
···
1724
1727
if (qp->qplib_qp.state !=
1725
1728
CMDQ_MODIFY_QP_NEW_STATE_RESET &&
1726
1729
qp->qplib_qp.state !=
1727
-
CMDQ_MODIFY_QP_NEW_STATE_ERR) {
1730
+
CMDQ_MODIFY_QP_NEW_STATE_ERR)
1728
1731
bnxt_re_dispatch_event(&rdev->ibdev, &qp->ib_qp,
1729
1732
1, IB_EVENT_QP_FATAL);
1730
-
bnxt_re_modify_qp(&qp->ib_qp, &qp_attr, mask,
1731
-
NULL);
1732
-
}
1733
1733
}
1734
1734
}
1735
1735
mutex_unlock(&rdev->qp_lock);
+50
-29
drivers/infiniband/hw/bnxt_re/qplib_fp.c
···
659
659
rc = bnxt_qplib_alloc_init_hwq(&srq->hwq, &hwq_attr);
660
660
if (rc)
661
661
return rc;
662
-
663
-
srq->swq = kcalloc(srq->hwq.max_elements, sizeof(*srq->swq),
664
-
GFP_KERNEL);
665
-
if (!srq->swq) {
666
-
rc = -ENOMEM;
667
-
goto fail;
668
-
}
669
662
srq->dbinfo.flags = 0;
670
663
bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
671
664
CMDQ_BASE_OPCODE_CREATE_SRQ,
···
687
694
spin_lock_init(&srq->lock);
688
695
srq->start_idx = 0;
689
696
srq->last_idx = srq->hwq.max_elements - 1;
690
-
for (idx = 0; idx < srq->hwq.max_elements; idx++)
691
-
srq->swq[idx].next_idx = idx + 1;
692
-
srq->swq[srq->last_idx].next_idx = -1;
697
+
if (!srq->hwq.is_user) {
698
+
srq->swq = kcalloc(srq->hwq.max_elements, sizeof(*srq->swq),
699
+
GFP_KERNEL);
700
+
if (!srq->swq) {
701
+
rc = -ENOMEM;
702
+
goto fail;
703
+
}
704
+
for (idx = 0; idx < srq->hwq.max_elements; idx++)
705
+
srq->swq[idx].next_idx = idx + 1;
706
+
srq->swq[srq->last_idx].next_idx = -1;
707
+
}
693
708
694
709
srq->id = le32_to_cpu(resp.xid);
695
710
srq->dbinfo.hwq = &srq->hwq;
···
1001
1000
u32 tbl_indx;
1002
1001
u16 nsge;
1003
1002
1004
-
if (res->dattr)
1005
-
qp->is_host_msn_tbl = _is_host_msn_table(res->dattr->dev_cap_flags2);
1006
-
1003
+
qp->is_host_msn_tbl = _is_host_msn_table(res->dattr->dev_cap_flags2);
1007
1004
sq->dbinfo.flags = 0;
1008
1005
bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
1009
1006
CMDQ_BASE_OPCODE_CREATE_QP,
···
1033
1034
: 0;
1034
1035
/* Update msn tbl size */
1035
1036
if (qp->is_host_msn_tbl && psn_sz) {
1036
-
hwq_attr.aux_depth = roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
1037
+
if (qp->wqe_mode == BNXT_QPLIB_WQE_MODE_STATIC)
1038
+
hwq_attr.aux_depth =
1039
+
roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
1040
+
else
1041
+
hwq_attr.aux_depth =
1042
+
roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)) / 2;
1037
1043
qp->msn_tbl_sz = hwq_attr.aux_depth;
1038
1044
qp->msn = 0;
1039
1045
}
···
1048
1044
if (rc)
1049
1045
return rc;
1050
1046
1051
-
rc = bnxt_qplib_alloc_init_swq(sq);
1052
-
if (rc)
1053
-
goto fail_sq;
1047
+
if (!sq->hwq.is_user) {
1048
+
rc = bnxt_qplib_alloc_init_swq(sq);
1049
+
if (rc)
1050
+
goto fail_sq;
1054
1051
1055
-
if (psn_sz)
1056
-
bnxt_qplib_init_psn_ptr(qp, psn_sz);
1057
-
1052
+
if (psn_sz)
1053
+
bnxt_qplib_init_psn_ptr(qp, psn_sz);
1054
+
}
1058
1055
req.sq_size = cpu_to_le32(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
1059
1056
pbl = &sq->hwq.pbl[PBL_LVL_0];
1060
1057
req.sq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
···
1081
1076
rc = bnxt_qplib_alloc_init_hwq(&rq->hwq, &hwq_attr);
1082
1077
if (rc)
1083
1078
goto sq_swq;
1084
-
rc = bnxt_qplib_alloc_init_swq(rq);
1085
-
if (rc)
1086
-
goto fail_rq;
1079
+
if (!rq->hwq.is_user) {
1080
+
rc = bnxt_qplib_alloc_init_swq(rq);
1081
+
if (rc)
1082
+
goto fail_rq;
1083
+
}
1087
1084
1088
1085
req.rq_size = cpu_to_le32(rq->max_wqe);
1089
1086
pbl = &rq->hwq.pbl[PBL_LVL_0];
···
1181
1174
rq->dbinfo.db = qp->dpi->dbr;
1182
1175
rq->dbinfo.max_slot = bnxt_qplib_set_rq_max_slot(rq->wqe_size);
1183
1176
}
1177
+
spin_lock_bh(&rcfw->tbl_lock);
1184
1178
tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw);
1185
1179
rcfw->qp_tbl[tbl_indx].qp_id = qp->id;
1186
1180
rcfw->qp_tbl[tbl_indx].qp_handle = (void *)qp;
1181
+
spin_unlock_bh(&rcfw->tbl_lock);
1187
1182
1188
1183
return 0;
1189
1184
fail:
···
1292
1283
}
1293
1284
}
1294
1285
1295
-
static void bnxt_set_mandatory_attributes(struct bnxt_qplib_qp *qp,
1286
+
static void bnxt_set_mandatory_attributes(struct bnxt_qplib_res *res,
1287
+
struct bnxt_qplib_qp *qp,
1296
1288
struct cmdq_modify_qp *req)
1297
1289
{
1298
1290
u32 mandatory_flags = 0;
···
1306
1296
if (qp->type == CMDQ_MODIFY_QP_QP_TYPE_RC && qp->srq)
1307
1297
req->flags = cpu_to_le16(CMDQ_MODIFY_QP_FLAGS_SRQ_USED);
1308
1298
mandatory_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PKEY;
1299
+
}
1300
+
1301
+
if (_is_min_rnr_in_rtr_rts_mandatory(res->dattr->dev_cap_flags2) &&
1302
+
(qp->cur_qp_state == CMDQ_MODIFY_QP_NEW_STATE_RTR &&
1303
+
qp->state == CMDQ_MODIFY_QP_NEW_STATE_RTS)) {
1304
+
if (qp->type == CMDQ_MODIFY_QP_QP_TYPE_RC)
1305
+
mandatory_flags |=
1306
+
CMDQ_MODIFY_QP_MODIFY_MASK_MIN_RNR_TIMER;
1309
1307
}
1310
1308
1311
1309
if (qp->type == CMDQ_MODIFY_QP_QP_TYPE_UD ||
···
1356
1338
/* Set mandatory attributes for INIT -> RTR and RTR -> RTS transition */
1357
1339
if (_is_optimize_modify_qp_supported(res->dattr->dev_cap_flags2) &&
1358
1340
is_optimized_state_transition(qp))
1359
-
bnxt_set_mandatory_attributes(qp, &req);
1341
+
bnxt_set_mandatory_attributes(res, qp, &req);
1360
1342
}
1361
1343
bmask = qp->modify_flags;
1362
1344
req.modify_mask = cpu_to_le32(qp->modify_flags);
···
1539
1521
qp->dest_qpn = le32_to_cpu(sb->dest_qp_id);
1540
1522
memcpy(qp->smac, sb->src_mac, 6);
1541
1523
qp->vlan_id = le16_to_cpu(sb->vlan_pcp_vlan_dei_vlan_id);
1524
+
qp->port_id = le16_to_cpu(sb->port_id);
1542
1525
bail:
1543
1526
dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
1544
1527
sbuf.sb, sbuf.dma_addr);
···
2686
2667
bnxt_qplib_add_flush_qp(qp);
2687
2668
} else {
2688
2669
/* Before we complete, do WA 9060 */
2689
-
if (do_wa9060(qp, cq, cq_cons, sq->swq_last,
2690
-
cqe_sq_cons)) {
2691
-
*lib_qp = qp;
2692
-
goto out;
2670
+
if (!bnxt_qplib_is_chip_gen_p5_p7(qp->cctx)) {
2671
+
if (do_wa9060(qp, cq, cq_cons, sq->swq_last,
2672
+
cqe_sq_cons)) {
2673
+
*lib_qp = qp;
2674
+
goto out;
2675
+
}
2693
2676
}
2694
2677
if (swq->flags & SQ_SEND_FLAGS_SIGNAL_COMP) {
2695
2678
cqe->status = CQ_REQ_STATUS_OK;
+2
-2
drivers/infiniband/hw/bnxt_re/qplib_fp.h
···
114
114
u32 size;
115
115
};
116
116
117
-
#define BNXT_QPLIB_QP_MAX_SGL 6
118
117
struct bnxt_qplib_swq {
119
118
u64 wr_id;
120
119
int next_idx;
···
153
154
#define BNXT_QPLIB_SWQE_FLAGS_UC_FENCE BIT(2)
154
155
#define BNXT_QPLIB_SWQE_FLAGS_SOLICIT_EVENT BIT(3)
155
156
#define BNXT_QPLIB_SWQE_FLAGS_INLINE BIT(4)
156
-
struct bnxt_qplib_sge sg_list[BNXT_QPLIB_QP_MAX_SGL];
157
+
struct bnxt_qplib_sge sg_list[BNXT_VAR_MAX_SGE];
157
158
int num_sge;
158
159
/* Max inline data is 96 bytes */
159
160
u32 inline_len;
···
298
299
u32 dest_qpn;
299
300
u8 smac[6];
300
301
u16 vlan_id;
302
+
u16 port_id;
301
303
u8 nw_type;
302
304
struct bnxt_qplib_ah ah;
303
305
+3
-2
drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
···
424
424
425
425
/* Prevent posting if f/w is not in a state to process */
426
426
if (test_bit(ERR_DEVICE_DETACHED, &rcfw->cmdq.flags))
427
-
return bnxt_qplib_map_rc(opcode);
427
+
return -ENXIO;
428
+
428
429
if (test_bit(FIRMWARE_STALL_DETECTED, &cmdq->flags))
429
430
return -ETIMEDOUT;
430
431
···
494
493
495
494
rc = __send_message_basic_sanity(rcfw, msg, opcode);
496
495
if (rc)
497
-
return rc;
496
+
return rc == -ENXIO ? bnxt_qplib_map_rc(opcode) : rc;
498
497
499
498
rc = __send_message(rcfw, msg, opcode);
500
499
if (rc)
+5
drivers/infiniband/hw/bnxt_re/qplib_res.h
···
584
584
return dev_cap_ext_flags2 & CREQ_QUERY_FUNC_RESP_SB_OPTIMIZE_MODIFY_QP_SUPPORTED;
585
585
}
586
586
587
+
static inline bool _is_min_rnr_in_rtr_rts_mandatory(u16 dev_cap_ext_flags2)
588
+
{
589
+
return !!(dev_cap_ext_flags2 & CREQ_QUERY_FUNC_RESP_SB_MIN_RNR_RTR_RTS_OPT_SUPPORTED);
590
+
}
591
+
587
592
static inline bool _is_cq_coalescing_supported(u16 dev_cap_ext_flags2)
588
593
{
589
594
return dev_cap_ext_flags2 & CREQ_QUERY_FUNC_RESP_SB_CQ_COALESCING_SUPPORTED;
+12
-6
drivers/infiniband/hw/bnxt_re/qplib_sp.c
···
129
129
attr->max_qp_init_rd_atom =
130
130
sb->max_qp_init_rd_atom > BNXT_QPLIB_MAX_OUT_RD_ATOM ?
131
131
BNXT_QPLIB_MAX_OUT_RD_ATOM : sb->max_qp_init_rd_atom;
132
-
attr->max_qp_wqes = le16_to_cpu(sb->max_qp_wr);
133
-
/*
134
-
* 128 WQEs needs to be reserved for the HW (8916). Prevent
135
-
* reporting the max number
136
-
*/
137
-
attr->max_qp_wqes -= BNXT_QPLIB_RESERVED_QP_WRS + 1;
132
+
attr->max_qp_wqes = le16_to_cpu(sb->max_qp_wr) - 1;
133
+
if (!bnxt_qplib_is_chip_gen_p5_p7(rcfw->res->cctx)) {
134
+
/*
135
+
* 128 WQEs needs to be reserved for the HW (8916). Prevent
136
+
* reporting the max number on legacy devices
137
+
*/
138
+
attr->max_qp_wqes -= BNXT_QPLIB_RESERVED_QP_WRS + 1;
139
+
}
140
+
141
+
/* Adjust for max_qp_wqes for variable wqe */
142
+
if (cctx->modes.wqe_mode == BNXT_QPLIB_WQE_MODE_VARIABLE)
143
+
attr->max_qp_wqes = BNXT_VAR_MAX_WQE - 1;
138
144
139
145
attr->max_qp_sges = cctx->modes.wqe_mode == BNXT_QPLIB_WQE_MODE_VARIABLE ?
140
146
min_t(u32, sb->max_sge_var_wqe, BNXT_VAR_MAX_SGE) : 6;
+1
drivers/infiniband/hw/bnxt_re/roce_hsi.h
···
2215
2215
#define CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_IQM_MSN_TABLE (0x2UL << 4)
2216
2216
#define CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_LAST \
2217
2217
CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_IQM_MSN_TABLE
2218
+
#define CREQ_QUERY_FUNC_RESP_SB_MIN_RNR_RTR_RTS_OPT_SUPPORTED 0x1000UL
2218
2219
__le16 max_xp_qp_size;
2219
2220
__le16 create_qp_batch_size;
2220
2221
__le16 destroy_qp_batch_size;
+29
-14
drivers/infiniband/hw/hns/hns_roce_hem.c
···
931
931
size_t count; /* max ba numbers */
932
932
int start; /* start buf offset in this hem */
933
933
int end; /* end buf offset in this hem */
934
+
bool exist_bt;
934
935
};
935
936
936
937
/* All HEM items are linked in a tree structure */
···
960
959
}
961
960
}
962
961
962
+
hem->exist_bt = exist_bt;
963
963
hem->count = count;
964
964
hem->start = start;
965
965
hem->end = end;
···
971
969
}
972
970
973
971
static void hem_list_free_item(struct hns_roce_dev *hr_dev,
974
-
struct hns_roce_hem_item *hem, bool exist_bt)
972
+
struct hns_roce_hem_item *hem)
975
973
{
976
-
if (exist_bt)
974
+
if (hem->exist_bt)
977
975
dma_free_coherent(hr_dev->dev, hem->count * BA_BYTE_LEN,
978
976
hem->addr, hem->dma_addr);
979
977
kfree(hem);
980
978
}
981
979
982
980
static void hem_list_free_all(struct hns_roce_dev *hr_dev,
983
-
struct list_head *head, bool exist_bt)
981
+
struct list_head *head)
984
982
{
985
983
struct hns_roce_hem_item *hem, *temp_hem;
986
984
987
985
list_for_each_entry_safe(hem, temp_hem, head, list) {
988
986
list_del(&hem->list);
989
-
hem_list_free_item(hr_dev, hem, exist_bt);
987
+
hem_list_free_item(hr_dev, hem);
990
988
}
991
989
}
992
990
···
1086
1084
1087
1085
for (i = 0; i < region_cnt; i++) {
1088
1086
r = (struct hns_roce_buf_region *)&regions[i];
1087
+
/* when r->hopnum = 0, the region should not occupy root_ba. */
1088
+
if (!r->hopnum)
1089
+
continue;
1090
+
1089
1091
if (r->hopnum > 1) {
1090
1092
step = hem_list_calc_ba_range(r->hopnum, 1, unit);
1091
1093
if (step > 0)
···
1183
1177
1184
1178
err_exit:
1185
1179
for (level = 1; level < hopnum; level++)
1186
-
hem_list_free_all(hr_dev, &temp_list[level], true);
1180
+
hem_list_free_all(hr_dev, &temp_list[level]);
1187
1181
1188
1182
return ret;
1189
1183
}
···
1224
1218
{
1225
1219
struct hns_roce_hem_item *hem;
1226
1220
1221
+
/* This is on the has_mtt branch, if r->hopnum
1222
+
* is 0, there is no root_ba to reuse for the
1223
+
* region's fake hem, so a dma_alloc request is
1224
+
* necessary here.
1225
+
*/
1227
1226
hem = hem_list_alloc_item(hr_dev, r->offset, r->offset + r->count - 1,
1228
-
r->count, false);
1227
+
r->count, !r->hopnum);
1229
1228
if (!hem)
1230
1229
return -ENOMEM;
1231
1230
1232
-
hem_list_assign_bt(hem, cpu_base, phy_base);
1231
+
/* The root_ba can be reused only when r->hopnum > 0. */
1232
+
if (r->hopnum)
1233
+
hem_list_assign_bt(hem, cpu_base, phy_base);
1233
1234
list_add(&hem->list, branch_head);
1234
1235
list_add(&hem->sibling, leaf_head);
1235
1236
1236
-
return r->count;
1237
+
/* If r->hopnum == 0, 0 is returned,
1238
+
* so that the root_bt entry is not occupied.
1239
+
*/
1240
+
return r->hopnum ? r->count : 0;
1237
1241
}
1238
1242
1239
1243
static int setup_middle_bt(struct hns_roce_dev *hr_dev, void *cpu_base,
···
1287
1271
return -ENOMEM;
1288
1272
1289
1273
total = 0;
1290
-
for (i = 0; i < region_cnt && total < max_ba_num; i++) {
1274
+
for (i = 0; i < region_cnt && total <= max_ba_num; i++) {
1291
1275
r = &regions[i];
1292
1276
if (!r->count)
1293
1277
continue;
···
1353
1337
region_cnt);
1354
1338
if (ret) {
1355
1339
for (i = 0; i < region_cnt; i++)
1356
-
hem_list_free_all(hr_dev, &head.branch[i], false);
1340
+
hem_list_free_all(hr_dev, &head.branch[i]);
1357
1341
1358
-
hem_list_free_all(hr_dev, &head.root, true);
1342
+
hem_list_free_all(hr_dev, &head.root);
1359
1343
}
1360
1344
1361
1345
return ret;
···
1418
1402
1419
1403
for (i = 0; i < HNS_ROCE_MAX_BT_REGION; i++)
1420
1404
for (j = 0; j < HNS_ROCE_MAX_BT_LEVEL; j++)
1421
-
hem_list_free_all(hr_dev, &hem_list->mid_bt[i][j],
1422
-
j != 0);
1405
+
hem_list_free_all(hr_dev, &hem_list->mid_bt[i][j]);
1423
1406
1424
-
hem_list_free_all(hr_dev, &hem_list->root_bt, true);
1407
+
hem_list_free_all(hr_dev, &hem_list->root_bt);
1425
1408
INIT_LIST_HEAD(&hem_list->btm_bt);
1426
1409
hem_list->root_ba = 0;
1427
1410
}
+9
-2
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
···
468
468
valid_num_sge = calc_wr_sge_num(wr, &msg_len);
469
469
470
470
ret = set_ud_opcode(ud_sq_wqe, wr);
471
-
if (WARN_ON(ret))
471
+
if (WARN_ON_ONCE(ret))
472
472
return ret;
473
473
474
474
ud_sq_wqe->msg_len = cpu_to_le32(msg_len);
···
572
572
rc_sq_wqe->msg_len = cpu_to_le32(msg_len);
573
573
574
574
ret = set_rc_opcode(hr_dev, rc_sq_wqe, wr);
575
-
if (WARN_ON(ret))
575
+
if (WARN_ON_ONCE(ret))
576
576
return ret;
577
577
578
578
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SO,
···
670
670
#define HNS_ROCE_SL_SHIFT 2
671
671
struct hns_roce_v2_rc_send_wqe *rc_sq_wqe = wqe;
672
672
673
+
if (unlikely(qp->state == IB_QPS_ERR)) {
674
+
flush_cqe(hr_dev, qp);
675
+
return;
676
+
}
673
677
/* All kinds of DirectWQE have the same header field layout */
674
678
hr_reg_enable(rc_sq_wqe, RC_SEND_WQE_FLAG);
675
679
hr_reg_write(rc_sq_wqe, RC_SEND_WQE_DB_SL_L, qp->sl);
···
5622
5618
struct hns_roce_qp *hr_qp)
5623
5619
{
5624
5620
struct hns_roce_dip *hr_dip = hr_qp->dip;
5621
+
5622
+
if (!hr_dip)
5623
+
return;
5625
5624
5626
5625
xa_lock(&hr_dev->qp_table.dip_xa);
5627
5626
-5
drivers/infiniband/hw/hns/hns_roce_mr.c
···
814
814
for (i = 0, mapped_cnt = 0; i < mtr->hem_cfg.region_count &&
815
815
mapped_cnt < page_cnt; i++) {
816
816
r = &mtr->hem_cfg.region[i];
817
-
/* if hopnum is 0, no need to map pages in this region */
818
-
if (!r->hopnum) {
819
-
mapped_cnt += r->count;
820
-
continue;
821
-
}
822
817
823
818
if (r->offset + r->count > page_cnt) {
824
819
ret = -EINVAL;
+5
-3
drivers/infiniband/hw/mlx5/main.c
···
2839
2839
int err;
2840
2840
2841
2841
*num_plane = 0;
2842
-
if (!MLX5_CAP_GEN(mdev, ib_virt))
2842
+
if (!MLX5_CAP_GEN(mdev, ib_virt) || !MLX5_CAP_GEN_2(mdev, multiplane))
2843
2843
return 0;
2844
2844
2845
2845
err = mlx5_query_hca_vport_context(mdev, 0, 1, 0, &vport_ctx);
···
3639
3639
list_for_each_entry(mpi, &mlx5_ib_unaffiliated_port_list,
3640
3640
list) {
3641
3641
if (dev->sys_image_guid == mpi->sys_image_guid &&
3642
-
(mlx5_core_native_port_num(mpi->mdev) - 1) == i) {
3642
+
(mlx5_core_native_port_num(mpi->mdev) - 1) == i &&
3643
+
mlx5_core_same_coredev_type(dev->mdev, mpi->mdev)) {
3643
3644
bound = mlx5_ib_bind_slave_port(dev, mpi);
3644
3645
}
3645
3646
···
4786
4785
4787
4786
mutex_lock(&mlx5_ib_multiport_mutex);
4788
4787
list_for_each_entry(dev, &mlx5_ib_dev_list, ib_dev_list) {
4789
-
if (dev->sys_image_guid == mpi->sys_image_guid)
4788
+
if (dev->sys_image_guid == mpi->sys_image_guid &&
4789
+
mlx5_core_same_coredev_type(dev->mdev, mpi->mdev))
4790
4790
bound = mlx5_ib_bind_slave_port(dev, mpi);
4791
4791
4792
4792
if (bound) {
+19
-4
drivers/infiniband/sw/rxe/rxe.c
···
40
40
/* initialize rxe device parameters */
41
41
static void rxe_init_device_param(struct rxe_dev *rxe)
42
42
{
43
+
struct net_device *ndev;
44
+
43
45
rxe->max_inline_data = RXE_MAX_INLINE_DATA;
44
46
45
47
rxe->attr.vendor_id = RXE_VENDOR_ID;
···
73
71
rxe->attr.max_fast_reg_page_list_len = RXE_MAX_FMR_PAGE_LIST_LEN;
74
72
rxe->attr.max_pkeys = RXE_MAX_PKEYS;
75
73
rxe->attr.local_ca_ack_delay = RXE_LOCAL_CA_ACK_DELAY;
74
+
75
+
ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
76
+
if (!ndev)
77
+
return;
78
+
76
79
addrconf_addr_eui48((unsigned char *)&rxe->attr.sys_image_guid,
77
-
rxe->ndev->dev_addr);
80
+
ndev->dev_addr);
81
+
82
+
dev_put(ndev);
78
83
79
84
rxe->max_ucontext = RXE_MAX_UCONTEXT;
80
85
}
···
118
109
static void rxe_init_ports(struct rxe_dev *rxe)
119
110
{
120
111
struct rxe_port *port = &rxe->port;
112
+
struct net_device *ndev;
121
113
122
114
rxe_init_port_param(port);
115
+
ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
116
+
if (!ndev)
117
+
return;
123
118
addrconf_addr_eui48((unsigned char *)&port->port_guid,
124
-
rxe->ndev->dev_addr);
119
+
ndev->dev_addr);
120
+
dev_put(ndev);
125
121
spin_lock_init(&port->port_lock);
126
122
}
127
123
···
181
167
/* called by ifc layer to create new rxe device.
182
168
* The caller should allocate memory for rxe by calling ib_alloc_device.
183
169
*/
184
-
int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name)
170
+
int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name,
171
+
struct net_device *ndev)
185
172
{
186
173
rxe_init(rxe);
187
174
rxe_set_mtu(rxe, mtu);
188
175
189
-
return rxe_register_device(rxe, ibdev_name);
176
+
return rxe_register_device(rxe, ibdev_name, ndev);
190
177
}
191
178
192
179
static int rxe_newlink(const char *ibdev_name, struct net_device *ndev)
+2
-1
drivers/infiniband/sw/rxe/rxe.h
···
139
139
140
140
void rxe_set_mtu(struct rxe_dev *rxe, unsigned int dev_mtu);
141
141
142
-
int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name);
142
+
int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name,
143
+
struct net_device *ndev);
143
144
144
145
void rxe_rcv(struct sk_buff *skb);
145
146
+20
-2
drivers/infiniband/sw/rxe/rxe_mcast.c
···
31
31
static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
32
32
{
33
33
unsigned char ll_addr[ETH_ALEN];
34
+
struct net_device *ndev;
35
+
int ret;
36
+
37
+
ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
38
+
if (!ndev)
39
+
return -ENODEV;
34
40
35
41
ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
36
42
37
-
return dev_mc_add(rxe->ndev, ll_addr);
43
+
ret = dev_mc_add(ndev, ll_addr);
44
+
dev_put(ndev);
45
+
46
+
return ret;
38
47
}
39
48
40
49
/**
···
56
47
static int rxe_mcast_del(struct rxe_dev *rxe, union ib_gid *mgid)
57
48
{
58
49
unsigned char ll_addr[ETH_ALEN];
50
+
struct net_device *ndev;
51
+
int ret;
52
+
53
+
ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
54
+
if (!ndev)
55
+
return -ENODEV;
59
56
60
57
ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
61
58
62
-
return dev_mc_del(rxe->ndev, ll_addr);
59
+
ret = dev_mc_del(ndev, ll_addr);
60
+
dev_put(ndev);
61
+
62
+
return ret;
63
63
}
64
64
65
65
/**
+20
-4
drivers/infiniband/sw/rxe/rxe_net.c
···
524
524
*/
525
525
const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num)
526
526
{
527
-
return rxe->ndev->name;
527
+
struct net_device *ndev;
528
+
char *ndev_name;
529
+
530
+
ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
531
+
if (!ndev)
532
+
return NULL;
533
+
ndev_name = ndev->name;
534
+
dev_put(ndev);
535
+
536
+
return ndev_name;
528
537
}
529
538
530
539
int rxe_net_add(const char *ibdev_name, struct net_device *ndev)
···
545
536
if (!rxe)
546
537
return -ENOMEM;
547
538
548
-
rxe->ndev = ndev;
549
539
ib_mark_name_assigned_by_user(&rxe->ib_dev);
550
540
551
-
err = rxe_add(rxe, ndev->mtu, ibdev_name);
541
+
err = rxe_add(rxe, ndev->mtu, ibdev_name, ndev);
552
542
if (err) {
553
543
ib_dealloc_device(&rxe->ib_dev);
554
544
return err;
···
595
587
596
588
void rxe_set_port_state(struct rxe_dev *rxe)
597
589
{
598
-
if (netif_running(rxe->ndev) && netif_carrier_ok(rxe->ndev))
590
+
struct net_device *ndev;
591
+
592
+
ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
593
+
if (!ndev)
594
+
return;
595
+
596
+
if (netif_running(ndev) && netif_carrier_ok(ndev))
599
597
rxe_port_up(rxe);
600
598
else
601
599
rxe_port_down(rxe);
600
+
601
+
dev_put(ndev);
602
602
}
603
603
604
604
static int rxe_notify(struct notifier_block *not_blk,
+21
-5
drivers/infiniband/sw/rxe/rxe_verbs.c
···
41
41
u32 port_num, struct ib_port_attr *attr)
42
42
{
43
43
struct rxe_dev *rxe = to_rdev(ibdev);
44
+
struct net_device *ndev;
44
45
int err, ret;
45
46
46
47
if (port_num != 1) {
47
48
err = -EINVAL;
48
49
rxe_dbg_dev(rxe, "bad port_num = %d\n", port_num);
50
+
goto err_out;
51
+
}
52
+
53
+
ndev = rxe_ib_device_get_netdev(ibdev);
54
+
if (!ndev) {
55
+
err = -ENODEV;
49
56
goto err_out;
50
57
}
51
58
···
64
57
65
58
if (attr->state == IB_PORT_ACTIVE)
66
59
attr->phys_state = IB_PORT_PHYS_STATE_LINK_UP;
67
-
else if (dev_get_flags(rxe->ndev) & IFF_UP)
60
+
else if (dev_get_flags(ndev) & IFF_UP)
68
61
attr->phys_state = IB_PORT_PHYS_STATE_POLLING;
69
62
else
70
63
attr->phys_state = IB_PORT_PHYS_STATE_DISABLED;
71
64
72
65
mutex_unlock(&rxe->usdev_lock);
73
66
67
+
dev_put(ndev);
74
68
return ret;
75
69
76
70
err_out:
···
1433
1425
static int rxe_enable_driver(struct ib_device *ib_dev)
1434
1426
{
1435
1427
struct rxe_dev *rxe = container_of(ib_dev, struct rxe_dev, ib_dev);
1428
+
struct net_device *ndev;
1429
+
1430
+
ndev = rxe_ib_device_get_netdev(ib_dev);
1431
+
if (!ndev)
1432
+
return -ENODEV;
1436
1433
1437
1434
rxe_set_port_state(rxe);
1438
-
dev_info(&rxe->ib_dev.dev, "added %s\n", netdev_name(rxe->ndev));
1435
+
dev_info(&rxe->ib_dev.dev, "added %s\n", netdev_name(ndev));
1436
+
1437
+
dev_put(ndev);
1439
1438
return 0;
1440
1439
}
1441
1440
···
1510
1495
INIT_RDMA_OBJ_SIZE(ib_mw, rxe_mw, ibmw),
1511
1496
};
1512
1497
1513
-
int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name)
1498
+
int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name,
1499
+
struct net_device *ndev)
1514
1500
{
1515
1501
int err;
1516
1502
struct ib_device *dev = &rxe->ib_dev;
···
1523
1507
dev->num_comp_vectors = num_possible_cpus();
1524
1508
dev->local_dma_lkey = 0;
1525
1509
addrconf_addr_eui48((unsigned char *)&dev->node_guid,
1526
-
rxe->ndev->dev_addr);
1510
+
ndev->dev_addr);
1527
1511
1528
1512
dev->uverbs_cmd_mask |= BIT_ULL(IB_USER_VERBS_CMD_POST_SEND) |
1529
1513
BIT_ULL(IB_USER_VERBS_CMD_REQ_NOTIFY_CQ);
1530
1514
1531
1515
ib_set_device_ops(dev, &rxe_dev_ops);
1532
-
err = ib_device_set_netdev(&rxe->ib_dev, rxe->ndev, 1);
1516
+
err = ib_device_set_netdev(&rxe->ib_dev, ndev, 1);
1533
1517
if (err)
1534
1518
return err;
1535
1519
+8
-3
drivers/infiniband/sw/rxe/rxe_verbs.h
···
370
370
u32 qp_gsi_index;
371
371
};
372
372
373
+
#define RXE_PORT 1
373
374
struct rxe_dev {
374
375
struct ib_device ib_dev;
375
376
struct ib_device_attr attr;
376
377
int max_ucontext;
377
378
int max_inline_data;
378
379
struct mutex usdev_lock;
379
-
380
-
struct net_device *ndev;
381
380
382
381
struct rxe_pool uc_pool;
383
382
struct rxe_pool pd_pool;
···
404
405
struct rxe_port port;
405
406
struct crypto_shash *tfm;
406
407
};
408
+
409
+
static inline struct net_device *rxe_ib_device_get_netdev(struct ib_device *dev)
410
+
{
411
+
return ib_device_get_netdev(dev, RXE_PORT);
412
+
}
407
413
408
414
static inline void rxe_counter_inc(struct rxe_dev *rxe, enum rxe_counters index)
409
415
{
···
475
471
return to_rpd(mw->ibmw.pd);
476
472
}
477
473
478
-
int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name);
474
+
int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name,
475
+
struct net_device *ndev);
479
476
480
477
#endif /* RXE_VERBS_H */
+3
-4
drivers/infiniband/sw/siw/siw.h
···
46
46
*/
47
47
#define SIW_IRQ_MAXBURST_SQ_ACTIVE 4
48
48
49
+
/* There is always only a port 1 per siw device */
50
+
#define SIW_PORT 1
51
+
49
52
struct siw_dev_cap {
50
53
int max_qp;
51
54
int max_qp_wr;
···
72
69
73
70
struct siw_device {
74
71
struct ib_device base_dev;
75
-
struct net_device *netdev;
76
72
struct siw_dev_cap attrs;
77
73
78
74
u32 vendor_part_id;
79
75
int numa_node;
80
76
char raw_gid[ETH_ALEN];
81
-
82
-
/* physical port state (only one port per device) */
83
-
enum ib_port_state state;
84
77
85
78
spinlock_t lock;
86
79
+21
-6
drivers/infiniband/sw/siw/siw_cm.c
···
1759
1759
{
1760
1760
struct socket *s;
1761
1761
struct siw_cep *cep = NULL;
1762
+
struct net_device *ndev = NULL;
1762
1763
struct siw_device *sdev = to_siw_dev(id->device);
1763
1764
int addr_family = id->local_addr.ss_family;
1764
1765
int rv = 0;
···
1780
1779
struct sockaddr_in *laddr = &to_sockaddr_in(id->local_addr);
1781
1780
1782
1781
/* For wildcard addr, limit binding to current device only */
1783
-
if (ipv4_is_zeronet(laddr->sin_addr.s_addr))
1784
-
s->sk->sk_bound_dev_if = sdev->netdev->ifindex;
1785
-
1782
+
if (ipv4_is_zeronet(laddr->sin_addr.s_addr)) {
1783
+
ndev = ib_device_get_netdev(id->device, SIW_PORT);
1784
+
if (ndev) {
1785
+
s->sk->sk_bound_dev_if = ndev->ifindex;
1786
+
} else {
1787
+
rv = -ENODEV;
1788
+
goto error;
1789
+
}
1790
+
}
1786
1791
rv = s->ops->bind(s, (struct sockaddr *)laddr,
1787
1792
sizeof(struct sockaddr_in));
1788
1793
} else {
···
1804
1797
}
1805
1798
1806
1799
/* For wildcard addr, limit binding to current device only */
1807
-
if (ipv6_addr_any(&laddr->sin6_addr))
1808
-
s->sk->sk_bound_dev_if = sdev->netdev->ifindex;
1809
-
1800
+
if (ipv6_addr_any(&laddr->sin6_addr)) {
1801
+
ndev = ib_device_get_netdev(id->device, SIW_PORT);
1802
+
if (ndev) {
1803
+
s->sk->sk_bound_dev_if = ndev->ifindex;
1804
+
} else {
1805
+
rv = -ENODEV;
1806
+
goto error;
1807
+
}
1808
+
}
1810
1809
rv = s->ops->bind(s, (struct sockaddr *)laddr,
1811
1810
sizeof(struct sockaddr_in6));
1812
1811
}
···
1873
1860
}
1874
1861
list_add_tail(&cep->listenq, (struct list_head *)id->provider_data);
1875
1862
cep->state = SIW_EPSTATE_LISTENING;
1863
+
dev_put(ndev);
1876
1864
1877
1865
siw_dbg(id->device, "Listen at laddr %pISp\n", &id->local_addr);
1878
1866
···
1893
1879
siw_cep_set_free_and_put(cep);
1894
1880
}
1895
1881
sock_release(s);
1882
+
dev_put(ndev);
1896
1883
1897
1884
return rv;
1898
1885
}
+1
-14
drivers/infiniband/sw/siw/siw_main.c
···
287
287
return NULL;
288
288
289
289
base_dev = &sdev->base_dev;
290
-
sdev->netdev = netdev;
291
290
292
291
if (netdev->addr_len) {
293
292
memcpy(sdev->raw_gid, netdev->dev_addr,
···
380
381
381
382
switch (event) {
382
383
case NETDEV_UP:
383
-
sdev->state = IB_PORT_ACTIVE;
384
384
siw_port_event(sdev, 1, IB_EVENT_PORT_ACTIVE);
385
385
break;
386
386
387
387
case NETDEV_DOWN:
388
-
sdev->state = IB_PORT_DOWN;
389
388
siw_port_event(sdev, 1, IB_EVENT_PORT_ERR);
390
389
break;
391
390
···
404
407
siw_port_event(sdev, 1, IB_EVENT_LID_CHANGE);
405
408
break;
406
409
/*
407
-
* Todo: Below netdev events are currently not handled.
410
+
* All other events are not handled
408
411
*/
409
-
case NETDEV_CHANGEMTU:
410
-
case NETDEV_CHANGE:
411
-
break;
412
-
413
412
default:
414
413
break;
415
414
}
···
435
442
sdev = siw_device_create(netdev);
436
443
if (sdev) {
437
444
dev_dbg(&netdev->dev, "siw: new device\n");
438
-
439
-
if (netif_running(netdev) && netif_carrier_ok(netdev))
440
-
sdev->state = IB_PORT_ACTIVE;
441
-
else
442
-
sdev->state = IB_PORT_DOWN;
443
-
444
445
ib_mark_name_assigned_by_user(&sdev->base_dev);
445
446
rv = siw_device_register(sdev, basedev_name);
446
447
if (rv)
+24
-11
drivers/infiniband/sw/siw/siw_verbs.c
···
171
171
int siw_query_port(struct ib_device *base_dev, u32 port,
172
172
struct ib_port_attr *attr)
173
173
{
174
-
struct siw_device *sdev = to_siw_dev(base_dev);
174
+
struct net_device *ndev;
175
175
int rv;
176
176
177
177
memset(attr, 0, sizeof(*attr));
178
178
179
179
rv = ib_get_eth_speed(base_dev, port, &attr->active_speed,
180
180
&attr->active_width);
181
+
if (rv)
182
+
return rv;
183
+
184
+
ndev = ib_device_get_netdev(base_dev, SIW_PORT);
185
+
if (!ndev)
186
+
return -ENODEV;
187
+
181
188
attr->gid_tbl_len = 1;
182
189
attr->max_msg_sz = -1;
183
-
attr->max_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu);
184
-
attr->active_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu);
185
-
attr->phys_state = sdev->state == IB_PORT_ACTIVE ?
190
+
attr->max_mtu = ib_mtu_int_to_enum(ndev->max_mtu);
191
+
attr->active_mtu = ib_mtu_int_to_enum(READ_ONCE(ndev->mtu));
192
+
attr->phys_state = (netif_running(ndev) && netif_carrier_ok(ndev)) ?
186
193
IB_PORT_PHYS_STATE_LINK_UP : IB_PORT_PHYS_STATE_DISABLED;
194
+
attr->state = attr->phys_state == IB_PORT_PHYS_STATE_LINK_UP ?
195
+
IB_PORT_ACTIVE : IB_PORT_DOWN;
187
196
attr->port_cap_flags = IB_PORT_CM_SUP | IB_PORT_DEVICE_MGMT_SUP;
188
-
attr->state = sdev->state;
189
197
/*
190
198
* All zero
191
199
*
···
207
199
* attr->subnet_timeout = 0;
208
200
* attr->init_type_repy = 0;
209
201
*/
202
+
dev_put(ndev);
210
203
return rv;
211
204
}
212
205
···
514
505
int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
515
506
{
516
507
struct siw_qp *qp;
517
-
struct siw_device *sdev;
508
+
struct net_device *ndev;
518
509
519
-
if (base_qp && qp_attr && qp_init_attr) {
510
+
if (base_qp && qp_attr && qp_init_attr)
520
511
qp = to_siw_qp(base_qp);
521
-
sdev = to_siw_dev(base_qp->device);
522
-
} else {
512
+
else
523
513
return -EINVAL;
524
-
}
514
+
515
+
ndev = ib_device_get_netdev(base_qp->device, SIW_PORT);
516
+
if (!ndev)
517
+
return -ENODEV;
518
+
525
519
qp_attr->qp_state = siw_qp_state_to_ib_qp_state[qp->attrs.state];
526
520
qp_attr->cap.max_inline_data = SIW_MAX_INLINE;
527
521
qp_attr->cap.max_send_wr = qp->attrs.sq_size;
528
522
qp_attr->cap.max_send_sge = qp->attrs.sq_max_sges;
529
523
qp_attr->cap.max_recv_wr = qp->attrs.rq_size;
530
524
qp_attr->cap.max_recv_sge = qp->attrs.rq_max_sges;
531
-
qp_attr->path_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu);
525
+
qp_attr->path_mtu = ib_mtu_int_to_enum(READ_ONCE(ndev->mtu));
532
526
qp_attr->max_rd_atomic = qp->attrs.irq_size;
533
527
qp_attr->max_dest_rd_atomic = qp->attrs.orq_size;
534
528
···
546
534
547
535
qp_init_attr->cap = qp_attr->cap;
548
536
537
+
dev_put(ndev);
549
538
return 0;
550
539
}
551
540
+1
-1
drivers/infiniband/ulp/rtrs/rtrs-srv.c
···
349
349
struct rtrs_srv_mr *srv_mr;
350
350
bool need_inval = false;
351
351
enum ib_send_flags flags;
352
+
struct ib_sge list;
352
353
u32 imm;
353
354
int err;
354
355
···
402
401
imm = rtrs_to_io_rsp_imm(id->msg_id, errno, need_inval);
403
402
imm_wr.wr.next = NULL;
404
403
if (always_invalidate) {
405
-
struct ib_sge list;
406
404
struct rtrs_msg_rkey_rsp *msg;
407
405
408
406
srv_mr = &srv_path->mrs[id->msg_id];
+8
-8
drivers/mmc/host/sdhci-msm.c
···
1867
1867
struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
1868
1868
union cqhci_crypto_cap_entry cap;
1869
1869
1870
+
if (!(cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE))
1871
+
return qcom_ice_evict_key(msm_host->ice, slot);
1872
+
1870
1873
/* Only AES-256-XTS has been tested so far. */
1871
1874
cap = cq_host->crypto_cap_array[cfg->crypto_cap_idx];
1872
1875
if (cap.algorithm_id != CQHCI_CRYPTO_ALG_AES_XTS ||
1873
1876
cap.key_size != CQHCI_CRYPTO_KEY_SIZE_256)
1874
1877
return -EINVAL;
1875
1878
1876
-
if (cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE)
1877
-
return qcom_ice_program_key(msm_host->ice,
1878
-
QCOM_ICE_CRYPTO_ALG_AES_XTS,
1879
-
QCOM_ICE_CRYPTO_KEY_SIZE_256,
1880
-
cfg->crypto_key,
1881
-
cfg->data_unit_size, slot);
1882
-
else
1883
-
return qcom_ice_evict_key(msm_host->ice, slot);
1879
+
return qcom_ice_program_key(msm_host->ice,
1880
+
QCOM_ICE_CRYPTO_ALG_AES_XTS,
1881
+
QCOM_ICE_CRYPTO_KEY_SIZE_256,
1882
+
cfg->crypto_key,
1883
+
cfg->data_unit_size, slot);
1884
1884
}
1885
1885
1886
1886
#else /* CONFIG_MMC_CRYPTO */
+9
-2
drivers/mtd/nand/raw/arasan-nand-controller.c
···
1409
1409
* case, the "not" chosen CS is assigned to nfc->spare_cs and selected
1410
1410
* whenever a GPIO CS must be asserted.
1411
1411
*/
1412
-
if (nfc->cs_array && nfc->ncs > 2) {
1413
-
if (!nfc->cs_array[0] && !nfc->cs_array[1]) {
1412
+
if (nfc->cs_array) {
1413
+
if (nfc->ncs > 2 && !nfc->cs_array[0] && !nfc->cs_array[1]) {
1414
1414
dev_err(nfc->dev,
1415
1415
"Assign a single native CS when using GPIOs\n");
1416
1416
return -EINVAL;
···
1478
1478
1479
1479
static void anfc_remove(struct platform_device *pdev)
1480
1480
{
1481
+
int i;
1481
1482
struct arasan_nfc *nfc = platform_get_drvdata(pdev);
1483
+
1484
+
for (i = 0; i < nfc->ncs; i++) {
1485
+
if (nfc->cs_array[i]) {
1486
+
gpiod_put(nfc->cs_array[i]);
1487
+
}
1488
+
}
1482
1489
1483
1490
anfc_chips_cleanup(nfc);
1484
1491
}
+1
-3
drivers/mtd/nand/raw/atmel/pmecc.c
+1
-1
drivers/mtd/nand/raw/diskonchip.c
···
1098
1098
(i == 0) && (ip->firstUnit > 0)) {
1099
1099
parts[0].name = " DiskOnChip IPL / Media Header partition";
1100
1100
parts[0].offset = 0;
1101
-
parts[0].size = mtd->erasesize * ip->firstUnit;
1101
+
parts[0].size = (uint64_t)mtd->erasesize * ip->firstUnit;
1102
1102
numparts = 1;
1103
1103
}
1104
1104
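The diskonchip.c one-liner above is a width-promotion fix: mtd->erasesize and ip->firstUnit are both 32-bit, so their product used to be computed in 32-bit arithmetic and could wrap before being stored in the 64-bit partition size. A tiny sketch with made-up values (the variable names are illustrative, not the driver's fields):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t erasesize = 0x40000;	/* 256 KiB eraseblock, illustrative */
	uint32_t first_unit = 20000;	/* large unit index, illustrative */

	/* Both operands are 32-bit, so the multiply wraps before the
	 * result is widened for the 64-bit destination. */
	uint64_t wrapped = erasesize * first_unit;

	/* Casting one operand first makes the whole multiply 64-bit. */
	uint64_t correct = (uint64_t)erasesize * first_unit;

	printf("wrapped: %llu, correct: %llu\n",
	       (unsigned long long)wrapped, (unsigned long long)correct);
	return 0;
}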
+16
drivers/mtd/nand/raw/omap2.c
···
254
254
255
255
/**
256
256
* omap_nand_data_in_pref - NAND data in using prefetch engine
257
+
* @chip: NAND chip
258
+
* @buf: output buffer where NAND data is placed into
259
+
* @len: length of transfer
260
+
* @force_8bit: force 8-bit transfers
257
261
*/
258
262
static void omap_nand_data_in_pref(struct nand_chip *chip, void *buf,
259
263
unsigned int len, bool force_8bit)
···
301
297
302
298
/**
303
299
* omap_nand_data_out_pref - NAND data out using Write Posting engine
300
+
* @chip: NAND chip
301
+
* @buf: input buffer that is sent to NAND
302
+
* @len: length of transfer
303
+
* @force_8bit: force 8-bit transfers
304
304
*/
305
305
static void omap_nand_data_out_pref(struct nand_chip *chip,
306
306
const void *buf, unsigned int len,
···
448
440
449
441
/**
450
442
* omap_nand_data_in_dma_pref - NAND data in using DMA and Prefetch
443
+
* @chip: NAND chip
444
+
* @buf: output buffer where NAND data is placed into
445
+
* @len: length of transfer
446
+
* @force_8bit: force 8-bit transfers
451
447
*/
452
448
static void omap_nand_data_in_dma_pref(struct nand_chip *chip, void *buf,
453
449
unsigned int len, bool force_8bit)
···
472
460
473
461
/**
474
462
* omap_nand_data_out_dma_pref - NAND data out using DMA and write posting
463
+
* @chip: NAND chip
464
+
* @buf: input buffer that is sent to NAND
465
+
* @len: length of transfer
466
+
* @force_8bit: force 8-bit transfers
475
467
*/
476
468
static void omap_nand_data_out_dma_pref(struct nand_chip *chip,
477
469
const void *buf, unsigned int len,
+36
-11
drivers/net/dsa/microchip/ksz9477.c
···
2
2
/*
3
3
* Microchip KSZ9477 switch driver main logic
4
4
*
5
-
* Copyright (C) 2017-2019 Microchip Technology Inc.
5
+
* Copyright (C) 2017-2024 Microchip Technology Inc.
6
6
*/
7
7
8
8
#include <linux/kernel.h>
···
983
983
int ksz9477_set_ageing_time(struct ksz_device *dev, unsigned int msecs)
984
984
{
985
985
u32 secs = msecs / 1000;
986
-
u8 value;
987
-
u8 data;
986
+
u8 data, mult, value;
987
+
u32 max_val;
988
988
int ret;
989
989
990
-
value = FIELD_GET(SW_AGE_PERIOD_7_0_M, secs);
990
+
#define MAX_TIMER_VAL ((1 << 8) - 1)
991
991
992
-
ret = ksz_write8(dev, REG_SW_LUE_CTRL_3, value);
993
-
if (ret < 0)
994
-
return ret;
992
+
/* The aging timer comprises a 3-bit multiplier and an 8-bit second
993
+
* value. Either of them cannot be zero. The maximum timer is then
994
+
* 7 * 255 = 1785 seconds.
995
+
*/
996
+
if (!secs)
997
+
secs = 1;
995
998
996
-
data = FIELD_GET(SW_AGE_PERIOD_10_8_M, secs);
999
+
/* Return error if too large. */
1000
+
else if (secs > 7 * MAX_TIMER_VAL)
1001
+
return -EINVAL;
997
1002
998
1003
ret = ksz_read8(dev, REG_SW_LUE_CTRL_0, &value);
999
1004
if (ret < 0)
1000
1005
return ret;
1001
1006
1002
-
value &= ~SW_AGE_CNT_M;
1003
-
value |= FIELD_PREP(SW_AGE_CNT_M, data);
1007
+
/* Check whether there is need to update the multiplier. */
1008
+
mult = FIELD_GET(SW_AGE_CNT_M, value);
1009
+
max_val = MAX_TIMER_VAL;
1010
+
if (mult > 0) {
1011
+
/* Try to use the same multiplier already in the register as
1012
+
* the hardware default uses multiplier 4 and 75 seconds for
1013
+
* 300 seconds.
1014
+
*/
1015
+
max_val = DIV_ROUND_UP(secs, mult);
1016
+
if (max_val > MAX_TIMER_VAL || max_val * mult != secs)
1017
+
max_val = MAX_TIMER_VAL;
1018
+
}
1004
1019
1005
-
return ksz_write8(dev, REG_SW_LUE_CTRL_0, value);
1020
+
data = DIV_ROUND_UP(secs, max_val);
1021
+
if (mult != data) {
1022
+
value &= ~SW_AGE_CNT_M;
1023
+
value |= FIELD_PREP(SW_AGE_CNT_M, data);
1024
+
ret = ksz_write8(dev, REG_SW_LUE_CTRL_0, value);
1025
+
if (ret < 0)
1026
+
return ret;
1027
+
}
1028
+
1029
+
value = DIV_ROUND_UP(secs, data);
1030
+
return ksz_write8(dev, REG_SW_LUE_CTRL_3, value);
1006
1031
}
1007
1032
1008
1033
void ksz9477_port_queue_split(struct ksz_device *dev, int port)
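The rewritten ksz9477_set_ageing_time() above splits the requested ageing time into the 3-bit SW_AGE_CNT multiplier and the 8-bit SW_AGE_PERIOD value, preferring to keep whatever multiplier is already programmed (the hardware default is 4 * 75 = 300 seconds). A standalone sketch of just that arithmetic; split_ageing_time() is a hypothetical helper mirroring the hunk's logic, not the driver's actual interface:

#include <stdio.h>

#define MAX_TIMER_VAL	((1 << 8) - 1)	/* 8-bit period register */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Pick a 3-bit multiplier and an 8-bit period whose product covers
 * @secs, reusing the multiplier currently in the register (@cur_mult)
 * when @secs still divides evenly into an 8-bit period. */
static int split_ageing_time(unsigned int secs, unsigned int cur_mult,
			     unsigned int *mult, unsigned int *period)
{
	unsigned int max_val = MAX_TIMER_VAL;

	if (!secs)
		secs = 1;
	else if (secs > 7 * MAX_TIMER_VAL)	/* 1785 s upper bound */
		return -1;

	if (cur_mult > 0) {
		max_val = DIV_ROUND_UP(secs, cur_mult);
		if (max_val > MAX_TIMER_VAL || max_val * cur_mult != secs)
			max_val = MAX_TIMER_VAL;
	}

	*mult = DIV_ROUND_UP(secs, max_val);
	*period = DIV_ROUND_UP(secs, *mult);
	return 0;
}

int main(void)
{
	unsigned int m, p;

	if (!split_ageing_time(300, 4, &m, &p))		/* hardware default */
		printf("300 s  -> mult %u, period %u\n", m, p);
	if (!split_ageing_time(1000, 4, &m, &p))
		printf("1000 s -> mult %u, period %u\n", m, p);
	return 0;
}

lan937x_set_ageing_time() further down reuses the same decomposition, only with a 20-bit period register and an optional finer-grained count selected by SW_AGE_CNT_IN_MICROSEC.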
+1
-3
drivers/net/dsa/microchip/ksz9477_reg.h
···
2
2
/*
3
3
* Microchip KSZ9477 register definitions
4
4
*
5
-
* Copyright (C) 2017-2018 Microchip Technology Inc.
5
+
* Copyright (C) 2017-2024 Microchip Technology Inc.
6
6
*/
7
7
8
8
#ifndef __KSZ9477_REGS_H
···
165
165
#define SW_VLAN_ENABLE BIT(7)
166
166
#define SW_DROP_INVALID_VID BIT(6)
167
167
#define SW_AGE_CNT_M GENMASK(5, 3)
168
-
#define SW_AGE_CNT_S 3
169
-
#define SW_AGE_PERIOD_10_8_M GENMASK(10, 8)
170
168
#define SW_RESV_MCAST_ENABLE BIT(2)
171
169
#define SW_HASH_OPTION_M 0x03
172
170
#define SW_HASH_OPTION_CRC 1
+59
-3
drivers/net/dsa/microchip/lan937x_main.c
···
1
1
// SPDX-License-Identifier: GPL-2.0
2
2
/* Microchip LAN937X switch driver main logic
3
-
* Copyright (C) 2019-2022 Microchip Technology Inc.
3
+
* Copyright (C) 2019-2024 Microchip Technology Inc.
4
4
*/
5
5
#include <linux/kernel.h>
6
6
#include <linux/module.h>
···
461
461
462
462
int lan937x_set_ageing_time(struct ksz_device *dev, unsigned int msecs)
463
463
{
464
-
u32 secs = msecs / 1000;
465
-
u32 value;
464
+
u8 data, mult, value8;
465
+
bool in_msec = false;
466
+
u32 max_val, value;
467
+
u32 secs = msecs;
466
468
int ret;
469
+
470
+
#define MAX_TIMER_VAL ((1 << 20) - 1)
471
+
472
+
/* The aging timer comprises a 3-bit multiplier and a 20-bit second
473
+
* value. Either of them cannot be zero. The maximum timer is then
474
+
* 7 * 1048575 = 7340025 seconds. As this value is too large for
475
+
* practical use it can be interpreted as microseconds, making the
476
+
* maximum timer 7340 seconds with finer control. This allows for
477
+
* maximum 122 minutes compared to 29 minutes in KSZ9477 switch.
478
+
*/
479
+
if (msecs % 1000)
480
+
in_msec = true;
481
+
else
482
+
secs /= 1000;
483
+
if (!secs)
484
+
secs = 1;
485
+
486
+
/* Return error if too large. */
487
+
else if (secs > 7 * MAX_TIMER_VAL)
488
+
return -EINVAL;
489
+
490
+
/* Configure how to interpret the number value. */
491
+
ret = ksz_rmw8(dev, REG_SW_LUE_CTRL_2, SW_AGE_CNT_IN_MICROSEC,
492
+
in_msec ? SW_AGE_CNT_IN_MICROSEC : 0);
493
+
if (ret < 0)
494
+
return ret;
495
+
496
+
ret = ksz_read8(dev, REG_SW_LUE_CTRL_0, &value8);
497
+
if (ret < 0)
498
+
return ret;
499
+
500
+
/* Check whether there is need to update the multiplier. */
501
+
mult = FIELD_GET(SW_AGE_CNT_M, value8);
502
+
max_val = MAX_TIMER_VAL;
503
+
if (mult > 0) {
504
+
/* Try to use the same multiplier already in the register as
505
+
* the hardware default uses multiplier 4 and 75 seconds for
506
+
* 300 seconds.
507
+
*/
508
+
max_val = DIV_ROUND_UP(secs, mult);
509
+
if (max_val > MAX_TIMER_VAL || max_val * mult != secs)
510
+
max_val = MAX_TIMER_VAL;
511
+
}
512
+
513
+
data = DIV_ROUND_UP(secs, max_val);
514
+
if (mult != data) {
515
+
value8 &= ~SW_AGE_CNT_M;
516
+
value8 |= FIELD_PREP(SW_AGE_CNT_M, data);
517
+
ret = ksz_write8(dev, REG_SW_LUE_CTRL_0, value8);
518
+
if (ret < 0)
519
+
return ret;
520
+
}
521
+
522
+
secs = DIV_ROUND_UP(secs, data);
467
523
468
524
value = FIELD_GET(SW_AGE_PERIOD_7_0_M, secs);
469
525
+6
-3
drivers/net/dsa/microchip/lan937x_reg.h
···
1
1
/* SPDX-License-Identifier: GPL-2.0 */
2
2
/* Microchip LAN937X switch register definitions
3
-
* Copyright (C) 2019-2021 Microchip Technology Inc.
3
+
* Copyright (C) 2019-2024 Microchip Technology Inc.
4
4
*/
5
5
#ifndef __LAN937X_REG_H
6
6
#define __LAN937X_REG_H
···
56
56
57
57
#define SW_VLAN_ENABLE BIT(7)
58
58
#define SW_DROP_INVALID_VID BIT(6)
59
-
#define SW_AGE_CNT_M 0x7
60
-
#define SW_AGE_CNT_S 3
59
+
#define SW_AGE_CNT_M GENMASK(5, 3)
61
60
#define SW_RESV_MCAST_ENABLE BIT(2)
62
61
63
62
#define REG_SW_LUE_CTRL_1 0x0311
···
68
69
#define SW_AGING_ENABLE BIT(2)
69
70
#define SW_FAST_AGING BIT(1)
70
71
#define SW_LINK_AUTO_AGING BIT(0)
72
+
73
+
#define REG_SW_LUE_CTRL_2 0x0312
74
+
75
+
#define SW_AGE_CNT_IN_MICROSEC BIT(7)
71
76
72
77
#define REG_SW_AGE_PERIOD__1 0x0313
73
78
#define SW_AGE_PERIOD_7_0_M GENMASK(7, 0)
+18
-3
drivers/net/ethernet/broadcom/bcmsysport.c
···
1933
1933
unsigned int i;
1934
1934
int ret;
1935
1935
1936
-
clk_prepare_enable(priv->clk);
1936
+
ret = clk_prepare_enable(priv->clk);
1937
+
if (ret) {
1938
+
netdev_err(dev, "could not enable priv clock\n");
1939
+
return ret;
1940
+
}
1937
1941
1938
1942
/* Reset UniMAC */
1939
1943
umac_reset(priv);
···
2595
2591
goto err_deregister_notifier;
2596
2592
}
2597
2593
2598
-
clk_prepare_enable(priv->clk);
2594
+
ret = clk_prepare_enable(priv->clk);
2595
+
if (ret) {
2596
+
dev_err(&pdev->dev, "could not enable priv clock\n");
2597
+
goto err_deregister_netdev;
2598
+
}
2599
2599
2600
2600
priv->rev = topctrl_readl(priv, REV_CNTL) & REV_MASK;
2601
2601
dev_info(&pdev->dev,
···
2613
2605
2614
2606
return 0;
2615
2607
2608
+
err_deregister_netdev:
2609
+
unregister_netdev(dev);
2616
2610
err_deregister_notifier:
2617
2611
unregister_netdevice_notifier(&priv->netdev_notifier);
2618
2612
err_deregister_fixed_link:
···
2784
2774
if (!netif_running(dev))
2785
2775
return 0;
2786
2776
2787
-
clk_prepare_enable(priv->clk);
2777
+
ret = clk_prepare_enable(priv->clk);
2778
+
if (ret) {
2779
+
netdev_err(dev, "could not enable priv clock\n");
2780
+
return ret;
2781
+
}
2782
+
2788
2783
if (priv->wolopts)
2789
2784
clk_disable_unprepare(priv->wol_clk);
2790
2785
+1
drivers/net/ethernet/google/gve/gve.h
···
1140
1140
void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
1141
1141
bool gve_tx_poll(struct gve_notify_block *block, int budget);
1142
1142
bool gve_xdp_poll(struct gve_notify_block *block, int budget);
1143
+
int gve_xsk_tx_poll(struct gve_notify_block *block, int budget);
1143
1144
int gve_tx_alloc_rings_gqi(struct gve_priv *priv,
1144
1145
struct gve_tx_alloc_rings_cfg *cfg);
1145
1146
void gve_tx_free_rings_gqi(struct gve_priv *priv,
+36
-27
drivers/net/ethernet/google/gve/gve_main.c
···
333
333
334
334
if (block->rx) {
335
335
work_done = gve_rx_poll(block, budget);
336
+
337
+
/* Poll XSK TX as part of RX NAPI. Setup re-poll based on max of
338
+
* TX and RX work done.
339
+
*/
340
+
if (priv->xdp_prog)
341
+
work_done = max_t(int, work_done,
342
+
gve_xsk_tx_poll(block, budget));
343
+
336
344
reschedule |= work_done == budget;
337
345
}
338
346
···
930
922
static void gve_tx_get_curr_alloc_cfg(struct gve_priv *priv,
931
923
struct gve_tx_alloc_rings_cfg *cfg)
932
924
{
925
+
int num_xdp_queues = priv->xdp_prog ? priv->rx_cfg.num_queues : 0;
926
+
933
927
cfg->qcfg = &priv->tx_cfg;
934
928
cfg->raw_addressing = !gve_is_qpl(priv);
935
929
cfg->ring_size = priv->tx_desc_cnt;
936
930
cfg->start_idx = 0;
937
-
cfg->num_rings = gve_num_tx_queues(priv);
931
+
cfg->num_rings = priv->tx_cfg.num_queues + num_xdp_queues;
938
932
cfg->tx = priv->tx;
939
933
}
940
934
···
1633
1623
if (err)
1634
1624
return err;
1635
1625
1636
-
/* If XDP prog is not installed, return */
1637
-
if (!priv->xdp_prog)
1626
+
/* If XDP prog is not installed or interface is down, return. */
1627
+
if (!priv->xdp_prog || !netif_running(dev))
1638
1628
return 0;
1639
1629
1640
1630
rx = &priv->rx[qid];
···
1679
1669
if (qid >= priv->rx_cfg.num_queues)
1680
1670
return -EINVAL;
1681
1671
1682
-
/* If XDP prog is not installed, unmap DMA and return */
1683
-
if (!priv->xdp_prog)
1672
+
/* If XDP prog is not installed or interface is down, unmap DMA and
1673
+
* return.
1674
+
*/
1675
+
if (!priv->xdp_prog || !netif_running(dev))
1684
1676
goto done;
1685
-
1686
-
tx_qid = gve_xdp_tx_queue_id(priv, qid);
1687
-
if (!netif_running(dev)) {
1688
-
priv->rx[qid].xsk_pool = NULL;
1689
-
xdp_rxq_info_unreg(&priv->rx[qid].xsk_rxq);
1690
-
priv->tx[tx_qid].xsk_pool = NULL;
1691
-
goto done;
1692
-
}
1693
1677
1694
1678
napi_rx = &priv->ntfy_blocks[priv->rx[qid].ntfy_id].napi;
1695
1679
napi_disable(napi_rx); /* make sure current rx poll is done */
1696
1680
1681
+
tx_qid = gve_xdp_tx_queue_id(priv, qid);
1697
1682
napi_tx = &priv->ntfy_blocks[priv->tx[tx_qid].ntfy_id].napi;
1698
1683
napi_disable(napi_tx); /* make sure current tx poll is done */
1699
1684
···
1714
1709
static int gve_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
1715
1710
{
1716
1711
struct gve_priv *priv = netdev_priv(dev);
1717
-
int tx_queue_id = gve_xdp_tx_queue_id(priv, queue_id);
1712
+
struct napi_struct *napi;
1713
+
1714
+
if (!gve_get_napi_enabled(priv))
1715
+
return -ENETDOWN;
1718
1716
1719
1717
if (queue_id >= priv->rx_cfg.num_queues || !priv->xdp_prog)
1720
1718
return -EINVAL;
1721
1719
1722
-
if (flags & XDP_WAKEUP_TX) {
1723
-
struct gve_tx_ring *tx = &priv->tx[tx_queue_id];
1724
-
struct napi_struct *napi =
1725
-
&priv->ntfy_blocks[tx->ntfy_id].napi;
1726
-
1727
-
if (!napi_if_scheduled_mark_missed(napi)) {
1728
-
/* Call local_bh_enable to trigger SoftIRQ processing */
1729
-
local_bh_disable();
1730
-
napi_schedule(napi);
1731
-
local_bh_enable();
1732
-
}
1733
-
1734
-
tx->xdp_xsk_wakeup++;
1720
+
napi = &priv->ntfy_blocks[gve_rx_idx_to_ntfy(priv, queue_id)].napi;
1721
+
if (!napi_if_scheduled_mark_missed(napi)) {
1722
+
/* Call local_bh_enable to trigger SoftIRQ processing */
1723
+
local_bh_disable();
1724
+
napi_schedule(napi);
1725
+
local_bh_enable();
1735
1726
}
1736
1727
1737
1728
return 0;
···
1838
1837
{
1839
1838
struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0};
1840
1839
struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0};
1840
+
int num_xdp_queues;
1841
1841
int err;
1842
1842
1843
1843
gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg);
···
1848
1846
rx_alloc_cfg.qcfg_tx = &new_tx_config;
1849
1847
rx_alloc_cfg.qcfg = &new_rx_config;
1850
1848
tx_alloc_cfg.num_rings = new_tx_config.num_queues;
1849
+
1850
+
/* Add dedicated XDP TX queues if enabled. */
1851
+
num_xdp_queues = priv->xdp_prog ? new_rx_config.num_queues : 0;
1852
+
tx_alloc_cfg.num_rings += num_xdp_queues;
1851
1853
1852
1854
if (netif_running(priv->dev)) {
1853
1855
err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg);
···
1905
1899
1906
1900
gve_clear_napi_enabled(priv);
1907
1901
gve_clear_report_stats(priv);
1902
+
1903
+
/* Make sure that all traffic is finished processing. */
1904
+
synchronize_net();
1908
1905
}
1909
1906
1910
1907
static void gve_turnup(struct gve_priv *priv)
+30
-16
drivers/net/ethernet/google/gve/gve_tx.c
···
206
206
return;
207
207
208
208
gve_remove_napi(priv, ntfy_idx);
209
-
gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
209
+
if (tx->q_num < priv->tx_cfg.num_queues)
210
+
gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
211
+
else
212
+
gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt);
210
213
netdev_tx_reset_queue(tx->netdev_txq);
211
214
gve_tx_remove_from_block(priv, idx);
212
215
}
···
837
834
struct gve_tx_ring *tx;
838
835
int i, err = 0, qid;
839
836
840
-
if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
837
+
if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK) || !priv->xdp_prog)
841
838
return -EINVAL;
839
+
840
+
if (!gve_get_napi_enabled(priv))
841
+
return -ENETDOWN;
842
842
843
843
qid = gve_xdp_tx_queue_id(priv,
844
844
smp_processor_id() % priv->num_xdp_queues);
···
981
975
return sent;
982
976
}
983
977
978
+
int gve_xsk_tx_poll(struct gve_notify_block *rx_block, int budget)
979
+
{
980
+
struct gve_rx_ring *rx = rx_block->rx;
981
+
struct gve_priv *priv = rx->gve;
982
+
struct gve_tx_ring *tx;
983
+
int sent = 0;
984
+
985
+
tx = &priv->tx[gve_xdp_tx_queue_id(priv, rx->q_num)];
986
+
if (tx->xsk_pool) {
987
+
sent = gve_xsk_tx(priv, tx, budget);
988
+
989
+
u64_stats_update_begin(&tx->statss);
990
+
tx->xdp_xsk_sent += sent;
991
+
u64_stats_update_end(&tx->statss);
992
+
if (xsk_uses_need_wakeup(tx->xsk_pool))
993
+
xsk_set_tx_need_wakeup(tx->xsk_pool);
994
+
}
995
+
996
+
return sent;
997
+
}
998
+
984
999
bool gve_xdp_poll(struct gve_notify_block *block, int budget)
985
1000
{
986
1001
struct gve_priv *priv = block->priv;
987
1002
struct gve_tx_ring *tx = block->tx;
988
1003
u32 nic_done;
989
-
bool repoll;
990
1004
u32 to_do;
991
1005
992
1006
/* Find out how much work there is to be done */
993
1007
nic_done = gve_tx_load_event_counter(priv, tx);
994
1008
to_do = min_t(u32, (nic_done - tx->done), budget);
995
1009
gve_clean_xdp_done(priv, tx, to_do);
996
-
repoll = nic_done != tx->done;
997
-
998
-
if (tx->xsk_pool) {
999
-
int sent = gve_xsk_tx(priv, tx, budget);
1000
-
1001
-
u64_stats_update_begin(&tx->statss);
1002
-
tx->xdp_xsk_sent += sent;
1003
-
u64_stats_update_end(&tx->statss);
1004
-
repoll |= (sent == budget);
1005
-
if (xsk_uses_need_wakeup(tx->xsk_pool))
1006
-
xsk_set_tx_need_wakeup(tx->xsk_pool);
1007
-
}
1008
1010
1009
1011
/* If we still have work we want to repoll */
1010
-
return repoll;
1012
+
return nic_done != tx->done;
1011
1013
}
1012
1014
1013
1015
bool gve_tx_poll(struct gve_notify_block *block, int budget)
+12
-2
drivers/net/ethernet/marvell/mv643xx_eth.c
···
2704
2704
2705
2705
static void mv643xx_eth_shared_of_remove(void)
2706
2706
{
2707
+
struct mv643xx_eth_platform_data *pd;
2707
2708
int n;
2708
2709
2709
2710
for (n = 0; n < 3; n++) {
2711
+
if (!port_platdev[n])
2712
+
continue;
2713
+
pd = dev_get_platdata(&port_platdev[n]->dev);
2714
+
if (pd)
2715
+
of_node_put(pd->phy_node);
2710
2716
platform_device_del(port_platdev[n]);
2711
2717
port_platdev[n] = NULL;
2712
2718
}
···
2775
2769
}
2776
2770
2777
2771
ppdev = platform_device_alloc(MV643XX_ETH_NAME, dev_num);
2778
-
if (!ppdev)
2779
-
return -ENOMEM;
2772
+
if (!ppdev) {
2773
+
ret = -ENOMEM;
2774
+
goto put_err;
2775
+
}
2780
2776
ppdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
2781
2777
ppdev->dev.of_node = pnp;
2782
2778
···
2800
2792
2801
2793
port_err:
2802
2794
platform_device_put(ppdev);
2795
+
put_err:
2796
+
of_node_put(ppd.phy_node);
2803
2797
return ret;
2804
2798
}
2805
2799
+1
drivers/net/ethernet/marvell/sky2.c
···
130
130
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436C) }, /* 88E8072 */
131
131
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436D) }, /* 88E8055 */
132
132
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4370) }, /* 88E8075 */
133
+
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4373) }, /* 88E8075 */
133
134
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4380) }, /* 88E8057 */
134
135
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4381) }, /* 88E8059 */
135
136
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4382) }, /* 88E8079 */
+4
drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
···
339
339
{
340
340
struct mlx5e_priv *priv = macsec_netdev_priv(ctx->netdev);
341
341
struct mlx5_macsec_fs *macsec_fs = priv->mdev->macsec_fs;
342
+
const struct macsec_tx_sc *tx_sc = &ctx->secy->tx_sc;
342
343
struct mlx5_macsec_rule_attrs rule_attrs;
343
344
union mlx5_macsec_rule *macsec_rule;
345
+
346
+
if (is_tx && tx_sc->encoding_sa != sa->assoc_num)
347
+
return 0;
344
348
345
349
rule_attrs.macsec_obj_id = sa->macsec_obj_id;
346
350
rule_attrs.sci = sa->sci;
+17
-2
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
···
6542
6542
6543
6543
mlx5_core_uplink_netdev_set(mdev, NULL);
6544
6544
mlx5e_dcbnl_delete_app(priv);
6545
-
unregister_netdev(priv->netdev);
6546
-
_mlx5e_suspend(adev, false);
6545
+
/* When unload driver, the netdev is in registered state
6546
+
* if it's from legacy mode. If from switchdev mode, it
6547
+
* is already unregistered before changing to NIC profile.
6548
+
*/
6549
+
if (priv->netdev->reg_state == NETREG_REGISTERED) {
6550
+
unregister_netdev(priv->netdev);
6551
+
_mlx5e_suspend(adev, false);
6552
+
} else {
6553
+
struct mlx5_core_dev *pos;
6554
+
int i;
6555
+
6556
+
if (test_bit(MLX5E_STATE_DESTROYING, &priv->state))
6557
+
mlx5_sd_for_each_dev(i, mdev, pos)
6558
+
mlx5e_destroy_mdev_resources(pos);
6559
+
else
6560
+
_mlx5e_suspend(adev, true);
6561
+
}
6547
6562
/* Avoid cleanup if profile rollback failed. */
6548
6563
if (priv->profile)
6549
6564
priv->profile->cleanup(priv);
+15
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
···
1509
1509
1510
1510
priv = netdev_priv(netdev);
1511
1511
1512
+
/* This bit is set when using devlink to change eswitch mode from
1513
+
* switchdev to legacy. As need to keep uplink netdev ifindex, we
1514
+
* detach uplink representor profile and attach NIC profile only.
1515
+
* The netdev will be unregistered later when unload NIC auxiliary
1516
+
* driver for this case.
1517
+
* We explicitly block devlink eswitch mode change if any IPSec rules
1518
+
* offloaded, but can't block other cases, such as driver unload
1519
+
* and devlink reload. We have to unregister netdev before profile
1520
+
* change for those cases. This is to avoid resource leak because
1521
+
* the offloaded rules don't have the chance to be unoffloaded before
1522
+
* cleanup which is triggered by detach uplink representor profile.
1523
+
*/
1524
+
if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_SWITCH_LEGACY))
1525
+
unregister_netdev(netdev);
1526
+
1512
1527
mlx5e_netdev_attach_nic_profile(priv);
1513
1528
}
1514
1529
+3
-3
drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
···
150
150
unsigned long i;
151
151
int err;
152
152
153
-
xa_for_each(&esw->offloads.vport_reps, i, rep) {
154
-
rpriv = rep->rep_data[REP_ETH].priv;
155
-
if (!rpriv || !rpriv->netdev)
153
+
mlx5_esw_for_each_rep(esw, i, rep) {
154
+
if (atomic_read(&rep->rep_data[REP_ETH].state) != REP_LOADED)
156
155
continue;
157
156
157
+
rpriv = rep->rep_data[REP_ETH].priv;
158
158
rhashtable_walk_enter(&rpriv->tc_ht, &iter);
159
159
rhashtable_walk_start(&iter);
160
160
while ((flow = rhashtable_walk_next(&iter)) != NULL) {
+3
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
···
714
714
MLX5_CAP_GEN_2((esw->dev), ec_vf_vport_base) +\
715
715
(last) - 1)
716
716
717
+
#define mlx5_esw_for_each_rep(esw, i, rep) \
718
+
xa_for_each(&((esw)->offloads.vport_reps), i, rep)
719
+
717
720
struct mlx5_eswitch *__must_check
718
721
mlx5_devlink_eswitch_get(struct devlink *devlink);
719
722
+2
-3
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
···
53
53
#include "lag/lag.h"
54
54
#include "en/tc/post_meter.h"
55
55
56
-
#define mlx5_esw_for_each_rep(esw, i, rep) \
57
-
xa_for_each(&((esw)->offloads.vport_reps), i, rep)
58
-
59
56
/* There are two match-all miss flows, one for unicast dst mac and
60
57
* one for multicast.
61
58
*/
···
3777
3780
esw->eswitch_operation_in_progress = true;
3778
3781
up_write(&esw->mode_lock);
3779
3782
3783
+
if (mode == DEVLINK_ESWITCH_MODE_LEGACY)
3784
+
esw->dev->priv.flags |= MLX5_PRIV_FLAGS_SWITCH_LEGACY;
3780
3785
mlx5_eswitch_disable_locked(esw);
3781
3786
if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) {
3782
3787
if (mlx5_devlink_trap_get_num_active(esw->dev)) {
+1
-3
drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_send.c
···
1067
1067
int inlen, err, eqn;
1068
1068
void *cqc, *in;
1069
1069
__be64 *pas;
1070
-
int vector;
1071
1070
u32 i;
1072
1071
1073
1072
cq = kzalloc(sizeof(*cq), GFP_KERNEL);
···
1095
1096
if (!in)
1096
1097
goto err_cqwq;
1097
1098
1098
-
vector = raw_smp_processor_id() % mlx5_comp_vectors_max(mdev);
1099
-
err = mlx5_comp_eqn_get(mdev, vector, &eqn);
1099
+
err = mlx5_comp_eqn_get(mdev, 0, &eqn);
1100
1100
if (err) {
1101
1101
kvfree(in);
1102
1102
goto err_cqwq;
+1
-2
drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
···
423
423
424
424
parms = mlxsw_sp_ipip_netdev_parms4(to_dev);
425
425
ip_tunnel_init_flow(&fl4, parms.iph.protocol, *daddrp, *saddrp,
426
-
0, 0, dev_net(to_dev), parms.link, tun->fwmark, 0,
427
-
0);
426
+
0, 0, tun->net, parms.link, tun->fwmark, 0, 0);
428
427
429
428
rt = ip_route_output_key(tun->net, &fl4);
430
429
if (IS_ERR(rt))
+1
-1
drivers/net/ethernet/meta/fbnic/fbnic_csr.c
+1
-1
drivers/net/ethernet/sfc/tc_conntrack.c
···
16
16
void *cb_priv);
17
17
18
18
static const struct rhashtable_params efx_tc_ct_zone_ht_params = {
19
-
.key_len = offsetof(struct efx_tc_ct_zone, linkage),
19
+
.key_len = sizeof_field(struct efx_tc_ct_zone, zone),
20
20
.key_offset = 0,
21
21
.head_offset = offsetof(struct efx_tc_ct_zone, linkage),
22
22
};
+17
-26
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
···
406
406
}
407
407
408
408
/**
409
-
* stmmac_remove_config_dt - undo the effects of stmmac_probe_config_dt()
410
-
* @pdev: platform_device structure
411
-
* @plat: driver data platform structure
412
-
*
413
-
* Release resources claimed by stmmac_probe_config_dt().
414
-
*/
415
-
static void stmmac_remove_config_dt(struct platform_device *pdev,
416
-
struct plat_stmmacenet_data *plat)
417
-
{
418
-
clk_disable_unprepare(plat->stmmac_clk);
419
-
clk_disable_unprepare(plat->pclk);
420
-
of_node_put(plat->phy_node);
421
-
of_node_put(plat->mdio_node);
422
-
}
423
-
424
-
/**
425
409
* stmmac_probe_config_dt - parse device-tree driver parameters
426
410
* @pdev: platform_device structure
427
411
* @mac: MAC address to use
···
474
490
dev_warn(&pdev->dev, "snps,phy-addr property is deprecated\n");
475
491
476
492
rc = stmmac_mdio_setup(plat, np, &pdev->dev);
477
-
if (rc)
478
-
return ERR_PTR(rc);
493
+
if (rc) {
494
+
ret = ERR_PTR(rc);
495
+
goto error_put_phy;
496
+
}
479
497
480
498
of_property_read_u32(np, "tx-fifo-depth", &plat->tx_fifo_size);
481
499
···
567
581
dma_cfg = devm_kzalloc(&pdev->dev, sizeof(*dma_cfg),
568
582
GFP_KERNEL);
569
583
if (!dma_cfg) {
570
-
stmmac_remove_config_dt(pdev, plat);
571
-
return ERR_PTR(-ENOMEM);
584
+
ret = ERR_PTR(-ENOMEM);
585
+
goto error_put_mdio;
572
586
}
573
587
plat->dma_cfg = dma_cfg;
574
588
···
596
610
597
611
rc = stmmac_mtl_setup(pdev, plat);
598
612
if (rc) {
599
-
stmmac_remove_config_dt(pdev, plat);
600
-
return ERR_PTR(rc);
613
+
ret = ERR_PTR(rc);
614
+
goto error_put_mdio;
601
615
}
602
616
603
617
/* clock setup */
···
649
663
clk_disable_unprepare(plat->pclk);
650
664
error_pclk_get:
651
665
clk_disable_unprepare(plat->stmmac_clk);
666
+
error_put_mdio:
667
+
of_node_put(plat->mdio_node);
668
+
error_put_phy:
669
+
of_node_put(plat->phy_node);
652
670
653
671
return ret;
654
672
}
···
661
671
{
662
672
struct plat_stmmacenet_data *plat = data;
663
673
664
-
/* Platform data argument is unused */
665
-
stmmac_remove_config_dt(NULL, plat);
674
+
clk_disable_unprepare(plat->stmmac_clk);
675
+
clk_disable_unprepare(plat->pclk);
676
+
of_node_put(plat->mdio_node);
677
+
of_node_put(plat->phy_node);
666
678
}
667
679
668
680
/**
669
681
* devm_stmmac_probe_config_dt
670
682
* @pdev: platform_device structure
671
683
* @mac: MAC address to use
672
-
* Description: Devres variant of stmmac_probe_config_dt(). Does not require
673
-
* the user to call stmmac_remove_config_dt() at driver detach.
684
+
* Description: Devres variant of stmmac_probe_config_dt().
674
685
*/
675
686
struct plat_stmmacenet_data *
676
687
devm_stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
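The stmmac_platform.c rework drops stmmac_remove_config_dt() in favour of devres-managed cleanup, and the probe path gains error labels so that whatever was acquired so far is released in reverse order (mdio node, then phy node). A generic sketch of that goto-ladder pattern in plain user-space C, with malloc/free standing in for the clock and OF-node resources (names are invented for the illustration):

#include <stdio.h>
#include <stdlib.h>

static int probe(void)
{
    char *phy = NULL, *mdio = NULL, *dma_cfg = NULL;
    int ret = -1;

    phy = malloc(16);           /* like taking the phy_node reference */
    if (!phy)
        goto err;
    mdio = malloc(16);          /* like stmmac_mdio_setup() */
    if (!mdio)
        goto err_put_phy;
    dma_cfg = malloc(16);       /* like the dma_cfg allocation */
    if (!dma_cfg)
        goto err_put_mdio;

    printf("probe ok\n");
    free(dma_cfg);
    free(mdio);
    free(phy);
    return 0;

err_put_mdio:
    free(mdio);                 /* release in reverse order of acquisition */
err_put_phy:
    free(phy);
err:
    return ret;
}

int main(void)
{
    return probe();
}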
+1
-1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
···
3551
3551
init_completion(&common->tdown_complete);
3552
3552
common->tx_ch_num = AM65_CPSW_DEFAULT_TX_CHNS;
3553
3553
common->rx_ch_num_flows = AM65_CPSW_DEFAULT_RX_CHN_FLOWS;
3554
-
common->pf_p0_rx_ptype_rrobin = false;
3554
+
common->pf_p0_rx_ptype_rrobin = true;
3555
3555
common->default_vlan = 1;
3556
3556
3557
3557
common->ports = devm_kcalloc(dev, common->port_num,
+8
drivers/net/ethernet/ti/icssg/icss_iep.c
···
215
215
for (cmp = IEP_MIN_CMP; cmp < IEP_MAX_CMP; cmp++) {
216
216
regmap_update_bits(iep->map, ICSS_IEP_CMP_STAT_REG,
217
217
IEP_CMP_STATUS(cmp), IEP_CMP_STATUS(cmp));
218
+
219
+
regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
220
+
IEP_CMP_CFG_CMP_EN(cmp), 0);
218
221
}
219
222
220
223
/* enable reset counter on CMP0 event */
···
782
779
iep->ptp_clock = NULL;
783
780
}
784
781
icss_iep_disable(iep);
782
+
783
+
if (iep->pps_enabled)
784
+
icss_iep_pps_enable(iep, false);
785
+
else if (iep->perout_enabled)
786
+
icss_iep_perout_enable(iep, NULL, false);
785
787
786
788
return 0;
787
789
}
-25
drivers/net/ethernet/ti/icssg/icssg_common.c
···
855
855
}
856
856
EXPORT_SYMBOL_GPL(prueth_rx_irq);
857
857
858
-
void prueth_emac_stop(struct prueth_emac *emac)
859
-
{
860
-
struct prueth *prueth = emac->prueth;
861
-
int slice;
862
-
863
-
switch (emac->port_id) {
864
-
case PRUETH_PORT_MII0:
865
-
slice = ICSS_SLICE0;
866
-
break;
867
-
case PRUETH_PORT_MII1:
868
-
slice = ICSS_SLICE1;
869
-
break;
870
-
default:
871
-
netdev_err(emac->ndev, "invalid port\n");
872
-
return;
873
-
}
874
-
875
-
emac->fw_running = 0;
876
-
if (!emac->is_sr1)
877
-
rproc_shutdown(prueth->txpru[slice]);
878
-
rproc_shutdown(prueth->rtu[slice]);
879
-
rproc_shutdown(prueth->pru[slice]);
880
-
}
881
-
EXPORT_SYMBOL_GPL(prueth_emac_stop);
882
-
883
858
void prueth_cleanup_tx_ts(struct prueth_emac *emac)
884
859
{
885
860
int i;
+28
-13
drivers/net/ethernet/ti/icssg/icssg_config.c
···
397
397
return 0;
398
398
}
399
399
400
-
static void icssg_init_emac_mode(struct prueth *prueth)
400
+
void icssg_init_emac_mode(struct prueth *prueth)
401
401
{
402
402
/* When the device is configured as a bridge and it is being brought
403
403
* back to the emac mode, the host mac address has to be set as 0.
···
405
405
u32 addr = prueth->shram.pa + EMAC_ICSSG_SWITCH_DEFAULT_VLAN_TABLE_OFFSET;
406
406
int i;
407
407
u8 mac[ETH_ALEN] = { 0 };
408
-
409
-
if (prueth->emacs_initialized)
410
-
return;
411
408
412
409
/* Set VLAN TABLE address base */
413
410
regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK,
···
420
423
/* Clear host MAC address */
421
424
icssg_class_set_host_mac_addr(prueth->miig_rt, mac);
422
425
}
426
+
EXPORT_SYMBOL_GPL(icssg_init_emac_mode);
423
427
424
-
static void icssg_init_fw_offload_mode(struct prueth *prueth)
428
+
void icssg_init_fw_offload_mode(struct prueth *prueth)
425
429
{
426
430
u32 addr = prueth->shram.pa + EMAC_ICSSG_SWITCH_DEFAULT_VLAN_TABLE_OFFSET;
427
431
int i;
428
-
429
-
if (prueth->emacs_initialized)
430
-
return;
431
432
432
433
/* Set VLAN TABLE address base */
433
434
regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK,
···
443
448
icssg_class_set_host_mac_addr(prueth->miig_rt, prueth->hw_bridge_dev->dev_addr);
444
449
icssg_set_pvid(prueth, prueth->default_vlan, PRUETH_PORT_HOST);
445
450
}
451
+
EXPORT_SYMBOL_GPL(icssg_init_fw_offload_mode);
446
452
447
453
int icssg_config(struct prueth *prueth, struct prueth_emac *emac, int slice)
448
454
{
449
455
void __iomem *config = emac->dram.va + ICSSG_CONFIG_OFFSET;
450
456
struct icssg_flow_cfg __iomem *flow_cfg;
451
457
int ret;
452
-
453
-
if (prueth->is_switch_mode || prueth->is_hsr_offload_mode)
454
-
icssg_init_fw_offload_mode(prueth);
455
-
else
456
-
icssg_init_emac_mode(prueth);
457
458
458
459
memset_io(config, 0, TAS_GATE_MASK_LIST0);
459
460
icssg_miig_queues_init(prueth, slice);
···
777
786
writel(pvid, prueth->shram.va + EMAC_ICSSG_SWITCH_PORT0_DEFAULT_VLAN_OFFSET);
778
787
}
779
788
EXPORT_SYMBOL_GPL(icssg_set_pvid);
789
+
790
+
int emac_fdb_flow_id_updated(struct prueth_emac *emac)
791
+
{
792
+
struct mgmt_cmd_rsp fdb_cmd_rsp = { 0 };
793
+
int slice = prueth_emac_slice(emac);
794
+
struct mgmt_cmd fdb_cmd = { 0 };
795
+
int ret;
796
+
797
+
fdb_cmd.header = ICSSG_FW_MGMT_CMD_HEADER;
798
+
fdb_cmd.type = ICSSG_FW_MGMT_FDB_CMD_TYPE_RX_FLOW;
799
+
fdb_cmd.seqnum = ++(emac->prueth->icssg_hwcmdseq);
800
+
fdb_cmd.param = 0;
801
+
802
+
fdb_cmd.param |= (slice << 4);
803
+
fdb_cmd.cmd_args[0] = 0;
804
+
805
+
ret = icssg_send_fdb_msg(emac, &fdb_cmd, &fdb_cmd_rsp);
806
+
if (ret)
807
+
return ret;
808
+
809
+
WARN_ON(fdb_cmd.seqnum != fdb_cmd_rsp.seqnum);
810
+
return fdb_cmd_rsp.status == 1 ? 0 : -EINVAL;
811
+
}
812
+
EXPORT_SYMBOL_GPL(emac_fdb_flow_id_updated);
+1
drivers/net/ethernet/ti/icssg/icssg_config.h
+191
-90
drivers/net/ethernet/ti/icssg/icssg_prueth.c
···
164
164
}
165
165
};
166
166
167
-
static int prueth_emac_start(struct prueth *prueth, struct prueth_emac *emac)
167
+
static int prueth_start(struct rproc *rproc, const char *fw_name)
168
+
{
169
+
int ret;
170
+
171
+
ret = rproc_set_firmware(rproc, fw_name);
172
+
if (ret)
173
+
return ret;
174
+
return rproc_boot(rproc);
175
+
}
176
+
177
+
static void prueth_shutdown(struct rproc *rproc)
178
+
{
179
+
rproc_shutdown(rproc);
180
+
}
181
+
182
+
static int prueth_emac_start(struct prueth *prueth)
168
183
{
169
184
struct icssg_firmwares *firmwares;
170
185
struct device *dev = prueth->dev;
171
-
int slice, ret;
186
+
int ret, slice;
172
187
173
188
if (prueth->is_switch_mode)
174
189
firmwares = icssg_switch_firmwares;
···
192
177
else
193
178
firmwares = icssg_emac_firmwares;
194
179
195
-
slice = prueth_emac_slice(emac);
196
-
if (slice < 0) {
197
-
netdev_err(emac->ndev, "invalid port\n");
198
-
return -EINVAL;
180
+
for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
181
+
ret = prueth_start(prueth->pru[slice], firmwares[slice].pru);
182
+
if (ret) {
183
+
dev_err(dev, "failed to boot PRU%d: %d\n", slice, ret);
184
+
goto unwind_slices;
185
+
}
186
+
187
+
ret = prueth_start(prueth->rtu[slice], firmwares[slice].rtu);
188
+
if (ret) {
189
+
dev_err(dev, "failed to boot RTU%d: %d\n", slice, ret);
190
+
rproc_shutdown(prueth->pru[slice]);
191
+
goto unwind_slices;
192
+
}
193
+
194
+
ret = prueth_start(prueth->txpru[slice], firmwares[slice].txpru);
195
+
if (ret) {
196
+
dev_err(dev, "failed to boot TX_PRU%d: %d\n", slice, ret);
197
+
rproc_shutdown(prueth->rtu[slice]);
198
+
rproc_shutdown(prueth->pru[slice]);
199
+
goto unwind_slices;
200
+
}
199
201
}
200
202
201
-
ret = icssg_config(prueth, emac, slice);
202
-
if (ret)
203
-
return ret;
204
-
205
-
ret = rproc_set_firmware(prueth->pru[slice], firmwares[slice].pru);
206
-
ret = rproc_boot(prueth->pru[slice]);
207
-
if (ret) {
208
-
dev_err(dev, "failed to boot PRU%d: %d\n", slice, ret);
209
-
return -EINVAL;
210
-
}
211
-
212
-
ret = rproc_set_firmware(prueth->rtu[slice], firmwares[slice].rtu);
213
-
ret = rproc_boot(prueth->rtu[slice]);
214
-
if (ret) {
215
-
dev_err(dev, "failed to boot RTU%d: %d\n", slice, ret);
216
-
goto halt_pru;
217
-
}
218
-
219
-
ret = rproc_set_firmware(prueth->txpru[slice], firmwares[slice].txpru);
220
-
ret = rproc_boot(prueth->txpru[slice]);
221
-
if (ret) {
222
-
dev_err(dev, "failed to boot TX_PRU%d: %d\n", slice, ret);
223
-
goto halt_rtu;
224
-
}
225
-
226
-
emac->fw_running = 1;
227
203
return 0;
228
204
229
-
halt_rtu:
230
-
rproc_shutdown(prueth->rtu[slice]);
231
-
232
-
halt_pru:
233
-
rproc_shutdown(prueth->pru[slice]);
205
+
unwind_slices:
206
+
while (--slice >= 0) {
207
+
prueth_shutdown(prueth->txpru[slice]);
208
+
prueth_shutdown(prueth->rtu[slice]);
209
+
prueth_shutdown(prueth->pru[slice]);
210
+
}
234
211
235
212
return ret;
213
+
}
214
+
215
+
static void prueth_emac_stop(struct prueth *prueth)
216
+
{
217
+
int slice;
218
+
219
+
for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
220
+
prueth_shutdown(prueth->txpru[slice]);
221
+
prueth_shutdown(prueth->rtu[slice]);
222
+
prueth_shutdown(prueth->pru[slice]);
223
+
}
224
+
}
225
+
226
+
static int prueth_emac_common_start(struct prueth *prueth)
227
+
{
228
+
struct prueth_emac *emac;
229
+
int ret = 0;
230
+
int slice;
231
+
232
+
if (!prueth->emac[ICSS_SLICE0] && !prueth->emac[ICSS_SLICE1])
233
+
return -EINVAL;
234
+
235
+
/* clear SMEM and MSMC settings for all slices */
236
+
memset_io(prueth->msmcram.va, 0, prueth->msmcram.size);
237
+
memset_io(prueth->shram.va, 0, ICSSG_CONFIG_OFFSET_SLICE1 * PRUETH_NUM_MACS);
238
+
239
+
icssg_class_default(prueth->miig_rt, ICSS_SLICE0, 0, false);
240
+
icssg_class_default(prueth->miig_rt, ICSS_SLICE1, 0, false);
241
+
242
+
if (prueth->is_switch_mode || prueth->is_hsr_offload_mode)
243
+
icssg_init_fw_offload_mode(prueth);
244
+
else
245
+
icssg_init_emac_mode(prueth);
246
+
247
+
for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
248
+
emac = prueth->emac[slice];
249
+
if (!emac)
250
+
continue;
251
+
ret = icssg_config(prueth, emac, slice);
252
+
if (ret)
253
+
goto disable_class;
254
+
}
255
+
256
+
ret = prueth_emac_start(prueth);
257
+
if (ret)
258
+
goto disable_class;
259
+
260
+
emac = prueth->emac[ICSS_SLICE0] ? prueth->emac[ICSS_SLICE0] :
261
+
prueth->emac[ICSS_SLICE1];
262
+
ret = icss_iep_init(emac->iep, &prueth_iep_clockops,
263
+
emac, IEP_DEFAULT_CYCLE_TIME_NS);
264
+
if (ret) {
265
+
dev_err(prueth->dev, "Failed to initialize IEP module\n");
266
+
goto stop_pruss;
267
+
}
268
+
269
+
return 0;
270
+
271
+
stop_pruss:
272
+
prueth_emac_stop(prueth);
273
+
274
+
disable_class:
275
+
icssg_class_disable(prueth->miig_rt, ICSS_SLICE0);
276
+
icssg_class_disable(prueth->miig_rt, ICSS_SLICE1);
277
+
278
+
return ret;
279
+
}
280
+
281
+
static int prueth_emac_common_stop(struct prueth *prueth)
282
+
{
283
+
struct prueth_emac *emac;
284
+
285
+
if (!prueth->emac[ICSS_SLICE0] && !prueth->emac[ICSS_SLICE1])
286
+
return -EINVAL;
287
+
288
+
icssg_class_disable(prueth->miig_rt, ICSS_SLICE0);
289
+
icssg_class_disable(prueth->miig_rt, ICSS_SLICE1);
290
+
291
+
prueth_emac_stop(prueth);
292
+
293
+
emac = prueth->emac[ICSS_SLICE0] ? prueth->emac[ICSS_SLICE0] :
294
+
prueth->emac[ICSS_SLICE1];
295
+
icss_iep_exit(emac->iep);
296
+
297
+
return 0;
236
298
}
237
299
238
300
/* called back by PHY layer if there is change in link state of hw port*/
···
465
373
u64 cyclecount;
466
374
u32 cycletime;
467
375
int timeout;
468
-
469
-
if (!emac->fw_running)
470
-
return;
471
376
472
377
sc_descp = emac->prueth->shram.va + TIMESYNC_FW_WC_SETCLOCK_DESC_OFFSET;
473
378
···
632
543
{
633
544
struct prueth_emac *emac = netdev_priv(ndev);
634
545
int ret, i, num_data_chn = emac->tx_ch_num;
546
+
struct icssg_flow_cfg __iomem *flow_cfg;
635
547
struct prueth *prueth = emac->prueth;
636
548
int slice = prueth_emac_slice(emac);
637
549
struct device *dev = prueth->dev;
638
550
int max_rx_flows;
639
551
int rx_flow;
640
552
641
-
/* clear SMEM and MSMC settings for all slices */
642
-
if (!prueth->emacs_initialized) {
643
-
memset_io(prueth->msmcram.va, 0, prueth->msmcram.size);
644
-
memset_io(prueth->shram.va, 0, ICSSG_CONFIG_OFFSET_SLICE1 * PRUETH_NUM_MACS);
645
-
}
646
-
647
553
/* set h/w MAC as user might have re-configured */
648
554
ether_addr_copy(emac->mac_addr, ndev->dev_addr);
649
555
650
556
icssg_class_set_mac_addr(prueth->miig_rt, slice, emac->mac_addr);
651
-
icssg_class_default(prueth->miig_rt, slice, 0, false);
652
557
icssg_ft1_set_mac_addr(prueth->miig_rt, slice, emac->mac_addr);
653
558
654
559
/* Notify the stack of the actual queue counts. */
···
680
597
goto cleanup_napi;
681
598
}
682
599
683
-
/* reset and start PRU firmware */
684
-
ret = prueth_emac_start(prueth, emac);
685
-
if (ret)
686
-
goto free_rx_irq;
600
+
if (!prueth->emacs_initialized) {
601
+
ret = prueth_emac_common_start(prueth);
602
+
if (ret)
603
+
goto free_rx_irq;
604
+
}
605
+
606
+
flow_cfg = emac->dram.va + ICSSG_CONFIG_OFFSET + PSI_L_REGULAR_FLOW_ID_BASE_OFFSET;
607
+
writew(emac->rx_flow_id_base, &flow_cfg->rx_base_flow);
608
+
ret = emac_fdb_flow_id_updated(emac);
609
+
610
+
if (ret) {
611
+
netdev_err(ndev, "Failed to update Rx Flow ID %d", ret);
612
+
goto stop;
613
+
}
687
614
688
615
icssg_mii_update_mtu(prueth->mii_rt, slice, ndev->max_mtu);
689
-
690
-
if (!prueth->emacs_initialized) {
691
-
ret = icss_iep_init(emac->iep, &prueth_iep_clockops,
692
-
emac, IEP_DEFAULT_CYCLE_TIME_NS);
693
-
}
694
616
695
617
ret = request_threaded_irq(emac->tx_ts_irq, NULL, prueth_tx_ts_irq,
696
618
IRQF_ONESHOT, dev_name(dev), emac);
···
741
653
free_tx_ts_irq:
742
654
free_irq(emac->tx_ts_irq, emac);
743
655
stop:
744
-
prueth_emac_stop(emac);
656
+
if (!prueth->emacs_initialized)
657
+
prueth_emac_common_stop(prueth);
745
658
free_rx_irq:
746
659
free_irq(emac->rx_chns.irq[rx_flow], emac);
747
660
cleanup_napi:
···
777
688
/* block packets from wire */
778
689
if (ndev->phydev)
779
690
phy_stop(ndev->phydev);
780
-
781
-
icssg_class_disable(prueth->miig_rt, prueth_emac_slice(emac));
782
691
783
692
if (emac->prueth->is_hsr_offload_mode)
784
693
__dev_mc_unsync(ndev, icssg_prueth_hsr_del_mcast);
···
815
728
/* Destroying the queued work in ndo_stop() */
816
729
cancel_delayed_work_sync(&emac->stats_work);
817
730
818
-
if (prueth->emacs_initialized == 1)
819
-
icss_iep_exit(emac->iep);
820
-
821
731
/* stop PRUs */
822
-
prueth_emac_stop(emac);
732
+
if (prueth->emacs_initialized == 1)
733
+
prueth_emac_common_stop(prueth);
823
734
824
735
free_irq(emac->tx_ts_irq, emac);
825
736
···
1138
1053
}
1139
1054
}
1140
1055
1141
-
static void prueth_emac_restart(struct prueth *prueth)
1056
+
static int prueth_emac_restart(struct prueth *prueth)
1142
1057
{
1143
1058
struct prueth_emac *emac0 = prueth->emac[PRUETH_MAC0];
1144
1059
struct prueth_emac *emac1 = prueth->emac[PRUETH_MAC1];
1060
+
int ret;
1145
1061
1146
1062
/* Detach the net_device for both PRUeth ports*/
1147
1063
if (netif_running(emac0->ndev))
···
1151
1065
netif_device_detach(emac1->ndev);
1152
1066
1153
1067
/* Disable both PRUeth ports */
1154
-
icssg_set_port_state(emac0, ICSSG_EMAC_PORT_DISABLE);
1155
-
icssg_set_port_state(emac1, ICSSG_EMAC_PORT_DISABLE);
1068
+
ret = icssg_set_port_state(emac0, ICSSG_EMAC_PORT_DISABLE);
1069
+
ret |= icssg_set_port_state(emac1, ICSSG_EMAC_PORT_DISABLE);
1070
+
if (ret)
1071
+
return ret;
1156
1072
1157
1073
/* Stop both pru cores for both PRUeth ports*/
1158
-
prueth_emac_stop(emac0);
1159
-
prueth->emacs_initialized--;
1160
-
prueth_emac_stop(emac1);
1161
-
prueth->emacs_initialized--;
1074
+
ret = prueth_emac_common_stop(prueth);
1075
+
if (ret) {
1076
+
dev_err(prueth->dev, "Failed to stop the firmwares");
1077
+
return ret;
1078
+
}
1162
1079
1163
1080
/* Start both pru cores for both PRUeth ports */
1164
-
prueth_emac_start(prueth, emac0);
1165
-
prueth->emacs_initialized++;
1166
-
prueth_emac_start(prueth, emac1);
1167
-
prueth->emacs_initialized++;
1081
+
ret = prueth_emac_common_start(prueth);
1082
+
if (ret) {
1083
+
dev_err(prueth->dev, "Failed to start the firmwares");
1084
+
return ret;
1085
+
}
1168
1086
1169
1087
/* Enable forwarding for both PRUeth ports */
1170
-
icssg_set_port_state(emac0, ICSSG_EMAC_PORT_FORWARD);
1171
-
icssg_set_port_state(emac1, ICSSG_EMAC_PORT_FORWARD);
1088
+
ret = icssg_set_port_state(emac0, ICSSG_EMAC_PORT_FORWARD);
1089
+
ret |= icssg_set_port_state(emac1, ICSSG_EMAC_PORT_FORWARD);
1172
1090
1173
1091
/* Attache net_device for both PRUeth ports */
1174
1092
netif_device_attach(emac0->ndev);
1175
1093
netif_device_attach(emac1->ndev);
1094
+
1095
+
return ret;
1176
1096
}
1177
1097
1178
1098
static void icssg_change_mode(struct prueth *prueth)
1179
1099
{
1180
1100
struct prueth_emac *emac;
1181
-
int mac;
1101
+
int mac, ret;
1182
1102
1183
-
prueth_emac_restart(prueth);
1103
+
ret = prueth_emac_restart(prueth);
1104
+
if (ret) {
1105
+
dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
1106
+
return;
1107
+
}
1184
1108
1185
1109
for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) {
1186
1110
emac = prueth->emac[mac];
···
1269
1173
{
1270
1174
struct prueth_emac *emac = netdev_priv(ndev);
1271
1175
struct prueth *prueth = emac->prueth;
1176
+
int ret;
1272
1177
1273
1178
prueth->br_members &= ~BIT(emac->port_id);
1274
1179
1275
1180
if (prueth->is_switch_mode) {
1276
1181
prueth->is_switch_mode = false;
1277
1182
emac->port_vlan = 0;
1278
-
prueth_emac_restart(prueth);
1183
+
ret = prueth_emac_restart(prueth);
1184
+
if (ret) {
1185
+
dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
1186
+
return;
1187
+
}
1279
1188
}
1280
1189
1281
1190
prueth_offload_fwd_mark_update(prueth);
···
1329
1228
struct prueth *prueth = emac->prueth;
1330
1229
struct prueth_emac *emac0;
1331
1230
struct prueth_emac *emac1;
1231
+
int ret;
1332
1232
1333
1233
emac0 = prueth->emac[PRUETH_MAC0];
1334
1234
emac1 = prueth->emac[PRUETH_MAC1];
···
1340
1238
emac0->port_vlan = 0;
1341
1239
emac1->port_vlan = 0;
1342
1240
prueth->hsr_dev = NULL;
1343
-
prueth_emac_restart(prueth);
1241
+
ret = prueth_emac_restart(prueth);
1242
+
if (ret) {
1243
+
dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
1244
+
return;
1245
+
}
1344
1246
netdev_dbg(ndev, "Disabling HSR Offload mode\n");
1345
1247
}
1346
1248
}
···
1519
1413
prueth->pa_stats = NULL;
1520
1414
}
1521
1415
1522
-
if (eth0_node) {
1416
+
if (eth0_node || eth1_node) {
1523
1417
ret = prueth_get_cores(prueth, ICSS_SLICE0, false);
1524
1418
if (ret)
1525
1419
goto put_cores;
1526
-
}
1527
-
1528
-
if (eth1_node) {
1529
1420
ret = prueth_get_cores(prueth, ICSS_SLICE1, false);
1530
1421
if (ret)
1531
1422
goto put_cores;
···
1721
1618
pruss_put(prueth->pruss);
1722
1619
1723
1620
put_cores:
1724
-
if (eth1_node) {
1725
-
prueth_put_cores(prueth, ICSS_SLICE1);
1726
-
of_node_put(eth1_node);
1727
-
}
1728
-
1729
-
if (eth0_node) {
1621
+
if (eth0_node || eth1_node) {
1730
1622
prueth_put_cores(prueth, ICSS_SLICE0);
1731
1623
of_node_put(eth0_node);
1624
+
1625
+
prueth_put_cores(prueth, ICSS_SLICE1);
1626
+
of_node_put(eth1_node);
1732
1627
}
1733
1628
1734
1629
return ret;
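Much of the icssg_prueth.c diff is the new prueth_emac_common_start()/prueth_emac_stop() pair: firmware for every slice is booted up front, and if slice N fails to boot, the `while (--slice >= 0)` loop shuts down only the slices that did come up, in reverse order. A self-contained sketch of that unwind pattern (user-space C; boot_slice() and shutdown_slice() are stand-ins, not the driver's helpers):

#include <stdio.h>

#define NUM_SLICES 2

static int boot_slice(int slice, int fail_at)
{
    if (slice == fail_at)
        return -1;
    printf("slice %d booted\n", slice);
    return 0;
}

static void shutdown_slice(int slice)
{
    printf("slice %d shut down\n", slice);
}

static int common_start(int fail_at)
{
    int slice;

    for (slice = 0; slice < NUM_SLICES; slice++) {
        if (boot_slice(slice, fail_at))
            goto unwind_slices;
    }
    return 0;

unwind_slices:
    /* Only the slices that booted successfully are unwound. */
    while (--slice >= 0)
        shutdown_slice(slice);
    return -1;
}

int main(void)
{
    common_start(1);    /* boots slice 0, fails on slice 1, unwinds slice 0 */
    return 0;
}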
+3
-2
drivers/net/ethernet/ti/icssg/icssg_prueth.h
···
140
140
/* data for each emac port */
141
141
struct prueth_emac {
142
142
bool is_sr1;
143
-
bool fw_running;
144
143
struct prueth *prueth;
145
144
struct net_device *ndev;
146
145
u8 mac_addr[6];
···
360
361
enum icssg_port_state_cmd state);
361
362
void icssg_config_set_speed(struct prueth_emac *emac);
362
363
void icssg_config_half_duplex(struct prueth_emac *emac);
364
+
void icssg_init_emac_mode(struct prueth *prueth);
365
+
void icssg_init_fw_offload_mode(struct prueth *prueth);
363
366
364
367
/* Buffer queue helpers */
365
368
int icssg_queue_pop(struct prueth *prueth, u8 queue);
···
378
377
u8 untag_mask, bool add);
379
378
u16 icssg_get_pvid(struct prueth_emac *emac);
380
379
void icssg_set_pvid(struct prueth *prueth, u8 vid, u8 port);
380
+
int emac_fdb_flow_id_updated(struct prueth_emac *emac);
381
381
#define prueth_napi_to_tx_chn(pnapi) \
382
382
container_of(pnapi, struct prueth_tx_chn, napi_tx)
383
383
···
409
407
struct sk_buff *skb, u32 *psdata);
410
408
enum netdev_tx icssg_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev);
411
409
irqreturn_t prueth_rx_irq(int irq, void *dev_id);
412
-
void prueth_emac_stop(struct prueth_emac *emac);
413
410
void prueth_cleanup_tx_ts(struct prueth_emac *emac);
414
411
int icssg_napi_rx_poll(struct napi_struct *napi_rx, int budget);
415
412
int prueth_prepare_rx_chan(struct prueth_emac *emac,
+23
-1
drivers/net/ethernet/ti/icssg/icssg_prueth_sr1.c
···
440
440
goto halt_pru;
441
441
}
442
442
443
-
emac->fw_running = 1;
444
443
return 0;
445
444
446
445
halt_pru:
447
446
rproc_shutdown(prueth->pru[slice]);
448
447
449
448
return ret;
449
+
}
450
+
451
+
static void prueth_emac_stop(struct prueth_emac *emac)
452
+
{
453
+
struct prueth *prueth = emac->prueth;
454
+
int slice;
455
+
456
+
switch (emac->port_id) {
457
+
case PRUETH_PORT_MII0:
458
+
slice = ICSS_SLICE0;
459
+
break;
460
+
case PRUETH_PORT_MII1:
461
+
slice = ICSS_SLICE1;
462
+
break;
463
+
default:
464
+
netdev_err(emac->ndev, "invalid port\n");
465
+
return;
466
+
}
467
+
468
+
if (!emac->is_sr1)
469
+
rproc_shutdown(prueth->txpru[slice]);
470
+
rproc_shutdown(prueth->rtu[slice]);
471
+
rproc_shutdown(prueth->pru[slice]);
450
472
}
451
473
452
474
/**
+101
-13
drivers/net/phy/micrel.c
···
432
432
struct kszphy_priv {
433
433
struct kszphy_ptp_priv ptp_priv;
434
434
const struct kszphy_type *type;
435
+
struct clk *clk;
435
436
int led_mode;
436
437
u16 vct_ctrl1000;
437
438
bool rmii_ref_clk_sel;
438
439
bool rmii_ref_clk_sel_val;
440
+
bool clk_enable;
439
441
u64 stats[ARRAY_SIZE(kszphy_hw_stats)];
440
442
};
441
443
···
2052
2050
data[i] = kszphy_get_stat(phydev, i);
2053
2051
}
2054
2052
2053
+
static void kszphy_enable_clk(struct phy_device *phydev)
2054
+
{
2055
+
struct kszphy_priv *priv = phydev->priv;
2056
+
2057
+
if (!priv->clk_enable && priv->clk) {
2058
+
clk_prepare_enable(priv->clk);
2059
+
priv->clk_enable = true;
2060
+
}
2061
+
}
2062
+
2063
+
static void kszphy_disable_clk(struct phy_device *phydev)
2064
+
{
2065
+
struct kszphy_priv *priv = phydev->priv;
2066
+
2067
+
if (priv->clk_enable && priv->clk) {
2068
+
clk_disable_unprepare(priv->clk);
2069
+
priv->clk_enable = false;
2070
+
}
2071
+
}
2072
+
2073
+
static int kszphy_generic_resume(struct phy_device *phydev)
2074
+
{
2075
+
kszphy_enable_clk(phydev);
2076
+
2077
+
return genphy_resume(phydev);
2078
+
}
2079
+
2080
+
static int kszphy_generic_suspend(struct phy_device *phydev)
2081
+
{
2082
+
int ret;
2083
+
2084
+
ret = genphy_suspend(phydev);
2085
+
if (ret)
2086
+
return ret;
2087
+
2088
+
kszphy_disable_clk(phydev);
2089
+
2090
+
return 0;
2091
+
}
2092
+
2055
2093
static int kszphy_suspend(struct phy_device *phydev)
2056
2094
{
2057
2095
/* Disable PHY Interrupts */
···
2101
2059
phydev->drv->config_intr(phydev);
2102
2060
}
2103
2061
2104
-
return genphy_suspend(phydev);
2062
+
return kszphy_generic_suspend(phydev);
2105
2063
}
2106
2064
2107
2065
static void kszphy_parse_led_mode(struct phy_device *phydev)
···
2132
2090
{
2133
2091
int ret;
2134
2092
2135
-
genphy_resume(phydev);
2093
+
ret = kszphy_generic_resume(phydev);
2094
+
if (ret)
2095
+
return ret;
2136
2096
2137
2097
/* After switching from power-down to normal mode, an internal global
2138
2098
* reset is automatically generated. Wait a minimum of 1 ms before
···
2152
2108
if (phydev->drv->config_intr)
2153
2109
phydev->drv->config_intr(phydev);
2154
2110
}
2111
+
2112
+
return 0;
2113
+
}
2114
+
2115
+
/* Because of errata DS80000700A, receiver error following software
2116
+
* power down. Suspend and resume callbacks only disable and enable
2117
+
* external rmii reference clock.
2118
+
*/
2119
+
static int ksz8041_resume(struct phy_device *phydev)
2120
+
{
2121
+
kszphy_enable_clk(phydev);
2122
+
2123
+
return 0;
2124
+
}
2125
+
2126
+
static int ksz8041_suspend(struct phy_device *phydev)
2127
+
{
2128
+
kszphy_disable_clk(phydev);
2155
2129
2156
2130
return 0;
2157
2131
}
···
2221
2159
if (!(ret & BMCR_PDOWN))
2222
2160
return 0;
2223
2161
2224
-
genphy_resume(phydev);
2162
+
ret = kszphy_generic_resume(phydev);
2163
+
if (ret)
2164
+
return ret;
2165
+
2225
2166
usleep_range(1000, 2000);
2226
2167
2227
2168
/* Re-program the value after chip is reset. */
···
2240
2175
}
2241
2176
2242
2177
return 0;
2178
+
}
2179
+
2180
+
static int ksz8061_suspend(struct phy_device *phydev)
2181
+
{
2182
+
return kszphy_suspend(phydev);
2243
2183
}
2244
2184
2245
2185
static int kszphy_probe(struct phy_device *phydev)
···
2287
2217
} else if (!clk) {
2288
2218
/* unnamed clock from the generic ethernet-phy binding */
2289
2219
clk = devm_clk_get_optional_enabled(&phydev->mdio.dev, NULL);
2290
-
if (IS_ERR(clk))
2291
-
return PTR_ERR(clk);
2292
2220
}
2221
+
2222
+
if (IS_ERR(clk))
2223
+
return PTR_ERR(clk);
2224
+
2225
+
clk_disable_unprepare(clk);
2226
+
priv->clk = clk;
2293
2227
2294
2228
if (ksz8041_fiber_mode(phydev))
2295
2229
phydev->port = PORT_FIBRE;
···
5364
5290
return 0;
5365
5291
}
5366
5292
5293
+
static int lan8804_resume(struct phy_device *phydev)
5294
+
{
5295
+
return kszphy_resume(phydev);
5296
+
}
5297
+
5298
+
static int lan8804_suspend(struct phy_device *phydev)
5299
+
{
5300
+
return kszphy_generic_suspend(phydev);
5301
+
}
5302
+
5303
+
static int lan8841_resume(struct phy_device *phydev)
5304
+
{
5305
+
return kszphy_generic_resume(phydev);
5306
+
}
5307
+
5367
5308
static int lan8841_suspend(struct phy_device *phydev)
5368
5309
{
5369
5310
struct kszphy_priv *priv = phydev->priv;
···
5387
5298
if (ptp_priv->ptp_clock)
5388
5299
ptp_cancel_worker_sync(ptp_priv->ptp_clock);
5389
5300
5390
-
return genphy_suspend(phydev);
5301
+
return kszphy_generic_suspend(phydev);
5391
5302
}
5392
5303
5393
5304
static struct phy_driver ksphy_driver[] = {
···
5447
5358
.get_sset_count = kszphy_get_sset_count,
5448
5359
.get_strings = kszphy_get_strings,
5449
5360
.get_stats = kszphy_get_stats,
5450
-
/* No suspend/resume callbacks because of errata DS80000700A,
5451
-
* receiver error following software power down.
5452
-
*/
5361
+
.suspend = ksz8041_suspend,
5362
+
.resume = ksz8041_resume,
5453
5363
}, {
5454
5364
.phy_id = PHY_ID_KSZ8041RNLI,
5455
5365
.phy_id_mask = MICREL_PHY_ID_MASK,
···
5524
5436
.soft_reset = genphy_soft_reset,
5525
5437
.config_intr = kszphy_config_intr,
5526
5438
.handle_interrupt = kszphy_handle_interrupt,
5527
-
.suspend = kszphy_suspend,
5439
+
.suspend = ksz8061_suspend,
5528
5440
.resume = ksz8061_resume,
5529
5441
}, {
5530
5442
.phy_id = PHY_ID_KSZ9021,
···
5595
5507
.get_sset_count = kszphy_get_sset_count,
5596
5508
.get_strings = kszphy_get_strings,
5597
5509
.get_stats = kszphy_get_stats,
5598
-
.suspend = genphy_suspend,
5599
-
.resume = kszphy_resume,
5510
+
.suspend = lan8804_suspend,
5511
+
.resume = lan8804_resume,
5600
5512
.config_intr = lan8804_config_intr,
5601
5513
.handle_interrupt = lan8804_handle_interrupt,
5602
5514
}, {
···
5614
5526
.get_strings = kszphy_get_strings,
5615
5527
.get_stats = kszphy_get_stats,
5616
5528
.suspend = lan8841_suspend,
5617
-
.resume = genphy_resume,
5529
+
.resume = lan8841_resume,
5618
5530
.cable_test_start = lan8814_cable_test_start,
5619
5531
.cable_test_get_status = ksz886x_cable_test_get_status,
5620
5532
}, {
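The micrel.c changes funnel the suspend/resume paths through kszphy_generic_suspend()/kszphy_generic_resume(), which gate the external reference clock and track the state in a `clk_enable` flag so the clock is never enabled or disabled twice. A small illustrative analogue of that idempotent enable/disable guard (plain C, invented names, not the PHY driver's code):

#include <stdbool.h>
#include <stdio.h>

struct phy_priv {
    bool have_clk;      /* a clock was actually provided */
    bool clk_enabled;   /* current gate state */
};

static void clk_gate_enable(struct phy_priv *p)
{
    if (p->have_clk && !p->clk_enabled) {
        printf("clock enabled\n");
        p->clk_enabled = true;
    }
}

static void clk_gate_disable(struct phy_priv *p)
{
    if (p->have_clk && p->clk_enabled) {
        printf("clock disabled\n");
        p->clk_enabled = false;
    }
}

int main(void)
{
    struct phy_priv p = { .have_clk = true };

    clk_gate_enable(&p);
    clk_gate_enable(&p);    /* no-op: already enabled */
    clk_gate_disable(&p);
    clk_gate_disable(&p);   /* no-op: already disabled */
    return 0;
}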
+4
-12
drivers/net/pse-pd/tps23881.c
···
64
64
if (id >= TPS23881_MAX_CHANS)
65
65
return -ERANGE;
66
66
67
-
ret = i2c_smbus_read_word_data(client, TPS23881_REG_PW_STATUS);
68
-
if (ret < 0)
69
-
return ret;
70
-
71
67
chan = priv->port[id].chan[0];
72
68
if (chan < 4)
73
-
val = (u16)(ret | BIT(chan));
69
+
val = BIT(chan);
74
70
else
75
-
val = (u16)(ret | BIT(chan + 4));
71
+
val = BIT(chan + 4);
76
72
77
73
if (priv->port[id].is_4p) {
78
74
chan = priv->port[id].chan[1];
···
96
100
if (id >= TPS23881_MAX_CHANS)
97
101
return -ERANGE;
98
102
99
-
ret = i2c_smbus_read_word_data(client, TPS23881_REG_PW_STATUS);
100
-
if (ret < 0)
101
-
return ret;
102
-
103
103
chan = priv->port[id].chan[0];
104
104
if (chan < 4)
105
-
val = (u16)(ret | BIT(chan + 4));
105
+
val = BIT(chan + 4);
106
106
else
107
-
val = (u16)(ret | BIT(chan + 8));
107
+
val = BIT(chan + 8);
108
108
109
109
if (priv->port[id].is_4p) {
110
110
chan = priv->port[id].chan[1];
+1
drivers/net/wireless/intel/iwlwifi/cfg/bz.c
···
161
161
162
162
const char iwl_bz_name[] = "Intel(R) TBD Bz device";
163
163
const char iwl_fm_name[] = "Intel(R) Wi-Fi 7 BE201 320MHz";
164
+
const char iwl_wh_name[] = "Intel(R) Wi-Fi 7 BE211 320MHz";
164
165
const char iwl_gl_name[] = "Intel(R) Wi-Fi 7 BE200 320MHz";
165
166
const char iwl_mtp_name[] = "Intel(R) Wi-Fi 7 BE202 160MHz";
166
167
+1
drivers/net/wireless/intel/iwlwifi/iwl-config.h
+11
-3
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
···
2954
2954
int idx)
2955
2955
{
2956
2956
int i;
2957
+
int n_channels = 0;
2957
2958
2958
2959
if (fw_has_api(&mvm->fw->ucode_capa,
2959
2960
IWL_UCODE_TLV_API_SCAN_OFFLOAD_CHANS)) {
···
2963
2962
2964
2963
for (i = 0; i < SCAN_OFFLOAD_MATCHING_CHANNELS_LEN * 8; i++)
2965
2964
if (matches[idx].matching_channels[i / 8] & (BIT(i % 8)))
2966
-
match->channels[match->n_channels++] =
2965
+
match->channels[n_channels++] =
2967
2966
mvm->nd_channels[i]->center_freq;
2968
2967
} else {
2969
2968
struct iwl_scan_offload_profile_match_v1 *matches =
···
2971
2970
2972
2971
for (i = 0; i < SCAN_OFFLOAD_MATCHING_CHANNELS_LEN_V1 * 8; i++)
2973
2972
if (matches[idx].matching_channels[i / 8] & (BIT(i % 8)))
2974
-
match->channels[match->n_channels++] =
2973
+
match->channels[n_channels++] =
2975
2974
mvm->nd_channels[i]->center_freq;
2976
2975
}
2976
+
/* We may have ended up with fewer channels than we allocated. */
2977
+
match->n_channels = n_channels;
2977
2978
}
2978
2979
2979
2980
/**
···
3056
3053
GFP_KERNEL);
3057
3054
if (!net_detect || !n_matches)
3058
3055
goto out_report_nd;
3056
+
net_detect->n_matches = n_matches;
3057
+
n_matches = 0;
3059
3058
3060
3059
for_each_set_bit(i, &matched_profiles, mvm->n_nd_match_sets) {
3061
3060
struct cfg80211_wowlan_nd_match *match;
···
3071
3066
GFP_KERNEL);
3072
3067
if (!match)
3073
3068
goto out_report_nd;
3069
+
match->n_channels = n_channels;
3074
3070
3075
-
net_detect->matches[net_detect->n_matches++] = match;
3071
+
net_detect->matches[n_matches++] = match;
3076
3072
3077
3073
/* We inverted the order of the SSIDs in the scan
3078
3074
* request, so invert the index here.
···
3088
3082
3089
3083
iwl_mvm_query_set_freqs(mvm, d3_data->nd_results, match, i);
3090
3084
}
3085
+
/* We may have fewer matches than we allocated. */
3086
+
net_detect->n_matches = n_matches;
3091
3087
3092
3088
out_report_nd:
3093
3089
wakeup.net_detect = net_detect;
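The d3.c fix counts channels and matches into local variables while the arrays are being filled and publishes n_channels / n_matches only afterwards, so a partially filled result never claims more entries than were allocated. The same shape as a hedged user-space sketch (fill_channels() and the frequencies are made up for the example):

#include <stdio.h>

#define MAX_CHANNELS 8

struct match {
    int n_channels;
    int channels[MAX_CHANNELS];
};

/* Fill from a bitmap, publish the count only at the end. */
static void fill_channels(struct match *m, unsigned int bitmap)
{
    int n = 0;

    for (int i = 0; i < MAX_CHANNELS; i++)
        if (bitmap & (1u << i))
            m->channels[n++] = 2412 + 5 * i;    /* made-up frequencies */

    /* We may have ended up with fewer channels than the array holds. */
    m->n_channels = n;
}

int main(void)
{
    struct match m = { 0 };

    fill_channels(&m, 0x15);    /* bits 0, 2, 4 -> 3 channels */
    printf("n_channels=%d\n", m.n_channels);
    return 0;
}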
+38
-3
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
···
1106
1106
iwlax210_2ax_cfg_so_jf_b0, iwl9462_name),
1107
1107
1108
1108
/* Bz */
1109
-
/* FIXME: need to change the naming according to the actual CRF */
1110
1109
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1111
1110
IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
1111
+
IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
1112
1112
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1113
+
iwl_cfg_bz, iwl_ax201_name),
1114
+
1115
+
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1116
+
IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
1117
+
IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
1118
+
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1119
+
iwl_cfg_bz, iwl_ax211_name),
1120
+
1121
+
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1122
+
IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
1123
+
IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
1124
+
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1125
+
iwl_cfg_bz, iwl_fm_name),
1126
+
1127
+
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1128
+
IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
1129
+
IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
1130
+
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1131
+
iwl_cfg_bz, iwl_wh_name),
1132
+
1133
+
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1134
+
IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
1135
+
IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
1136
+
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1137
+
iwl_cfg_bz, iwl_ax201_name),
1138
+
1139
+
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1140
+
IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
1141
+
IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
1142
+
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1143
+
iwl_cfg_bz, iwl_ax211_name),
1144
+
1145
+
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1146
+
IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
1147
+
IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
1113
1148
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1114
1149
iwl_cfg_bz, iwl_fm_name),
1115
1150
1116
1151
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1117
1152
IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
1153
+
IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
1118
1154
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1119
-
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1120
-
iwl_cfg_bz, iwl_fm_name),
1155
+
iwl_cfg_bz, iwl_wh_name),
1121
1156
1122
1157
/* Ga (Gl) */
1123
1158
_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+1
-1
drivers/net/wireless/st/cw1200/cw1200_spi.c
+1
-1
drivers/net/wwan/iosm/iosm_ipc_mmio.c
+17
-9
drivers/net/wwan/t7xx/t7xx_state_monitor.c
···
104
104
fsm_state_notify(ctl->md, state);
105
105
}
106
106
107
+
static void fsm_release_command(struct kref *ref)
108
+
{
109
+
struct t7xx_fsm_command *cmd = container_of(ref, typeof(*cmd), refcnt);
110
+
111
+
kfree(cmd);
112
+
}
113
+
107
114
static void fsm_finish_command(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd, int result)
108
115
{
109
116
if (cmd->flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
110
-
*cmd->ret = result;
111
-
complete_all(cmd->done);
117
+
cmd->result = result;
118
+
complete_all(&cmd->done);
112
119
}
113
120
114
-
kfree(cmd);
121
+
kref_put(&cmd->refcnt, fsm_release_command);
115
122
}
116
123
117
124
static void fsm_del_kf_event(struct t7xx_fsm_event *event)
···
482
475
483
476
int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id, unsigned int flag)
484
477
{
485
-
DECLARE_COMPLETION_ONSTACK(done);
486
478
struct t7xx_fsm_command *cmd;
487
479
unsigned long flags;
488
480
int ret;
···
493
487
INIT_LIST_HEAD(&cmd->entry);
494
488
cmd->cmd_id = cmd_id;
495
489
cmd->flag = flag;
490
+
kref_init(&cmd->refcnt);
496
491
if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
497
-
cmd->done = &done;
498
-
cmd->ret = &ret;
492
+
init_completion(&cmd->done);
493
+
kref_get(&cmd->refcnt);
499
494
}
500
495
496
+
kref_get(&cmd->refcnt);
501
497
spin_lock_irqsave(&ctl->command_lock, flags);
502
498
list_add_tail(&cmd->entry, &ctl->command_queue);
503
499
spin_unlock_irqrestore(&ctl->command_lock, flags);
···
509
501
if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
510
502
unsigned long wait_ret;
511
503
512
-
wait_ret = wait_for_completion_timeout(&done,
504
+
wait_ret = wait_for_completion_timeout(&cmd->done,
513
505
msecs_to_jiffies(FSM_CMD_TIMEOUT_MS));
514
-
if (!wait_ret)
515
-
return -ETIMEDOUT;
516
506
507
+
ret = wait_ret ? cmd->result : -ETIMEDOUT;
508
+
kref_put(&cmd->refcnt, fsm_release_command);
517
509
return ret;
518
510
}
519
511
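The t7xx_state_monitor.c change gives each FSM command its own refcount and embedded completion, so a waiter that gives up and the worker that eventually completes the command can each drop their reference safely, and the last reference frees the object. A user-space analogue of that lifetime rule using pthreads (cmd_get()/cmd_put() are stand-ins for kref_get()/kref_put(); the timeout case is omitted here, but the refcount is exactly what keeps the object alive for whichever side finishes last):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct command {
    int refs;
    pthread_mutex_t lock;
    pthread_cond_t done;
    int completed;
    int result;
};

static struct command *cmd_get(struct command *c)
{
    pthread_mutex_lock(&c->lock);
    c->refs++;
    pthread_mutex_unlock(&c->lock);
    return c;
}

static void cmd_put(struct command *c)
{
    int last;

    pthread_mutex_lock(&c->lock);
    last = (--c->refs == 0);
    pthread_mutex_unlock(&c->lock);
    if (last) {
        pthread_mutex_destroy(&c->lock);
        pthread_cond_destroy(&c->done);
        free(c);
        printf("command freed\n");
    }
}

static void *worker(void *arg)
{
    struct command *c = arg;

    pthread_mutex_lock(&c->lock);
    c->result = 42;
    c->completed = 1;
    pthread_cond_broadcast(&c->done);
    pthread_mutex_unlock(&c->lock);
    cmd_put(c);     /* the queue's reference */
    return NULL;
}

int main(void)
{
    struct command *c = calloc(1, sizeof(*c));
    pthread_t t;

    c->refs = 1;    /* the issuer's reference */
    pthread_mutex_init(&c->lock, NULL);
    pthread_cond_init(&c->done, NULL);

    pthread_create(&t, NULL, worker, cmd_get(c));   /* +1 for the queue */

    pthread_mutex_lock(&c->lock);
    while (!c->completed)
        pthread_cond_wait(&c->done, &c->lock);
    printf("result=%d\n", c->result);
    pthread_mutex_unlock(&c->lock);

    cmd_put(c);     /* the issuer's reference; last put frees, in either order */
    pthread_join(t, NULL);
    return 0;
}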
+3
-2
drivers/net/wwan/t7xx/t7xx_state_monitor.h
+5
drivers/nvme/host/nvme.h
+7
-2
drivers/nvme/host/pci.c
···
2834
2834
2835
2835
static int nvme_setup_prp_pools(struct nvme_dev *dev)
2836
2836
{
2837
+
size_t small_align = 256;
2838
+
2837
2839
dev->prp_page_pool = dma_pool_create("prp list page", dev->dev,
2838
2840
NVME_CTRL_PAGE_SIZE,
2839
2841
NVME_CTRL_PAGE_SIZE, 0);
2840
2842
if (!dev->prp_page_pool)
2841
2843
return -ENOMEM;
2842
2844
2845
+
if (dev->ctrl.quirks & NVME_QUIRK_DMAPOOL_ALIGN_512)
2846
+
small_align = 512;
2847
+
2843
2848
/* Optimisation for I/Os between 4k and 128k */
2844
2849
dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev,
2845
-
256, 256, 0);
2850
+
256, small_align, 0);
2846
2851
if (!dev->prp_small_pool) {
2847
2852
dma_pool_destroy(dev->prp_page_pool);
2848
2853
return -ENOMEM;
···
3612
3607
{ PCI_VDEVICE(REDHAT, 0x0010), /* Qemu emulated controller */
3613
3608
.driver_data = NVME_QUIRK_BOGUS_NID, },
3614
3609
{ PCI_DEVICE(0x1217, 0x8760), /* O2 Micro 64GB Steam Deck */
3615
-
.driver_data = NVME_QUIRK_QDEPTH_ONE },
3610
+
.driver_data = NVME_QUIRK_DMAPOOL_ALIGN_512, },
3616
3611
{ PCI_DEVICE(0x126f, 0x2262), /* Silicon Motion generic */
3617
3612
.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
3618
3613
NVME_QUIRK_BOGUS_NID, },
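In pci.c the small PRP pool keeps its 256-byte element size, but its alignment is raised to 512 bytes for controllers carrying the new NVME_QUIRK_DMAPOOL_ALIGN_512 quirk. The pattern, picking an allocation alignment from a quirk bitmask, sketched in user-space C with posix_memalign() standing in for dma_pool_create() (illustration only; QUIRK_DMAPOOL_ALIGN_512 below is a local stand-in flag):

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdlib.h>

#define QUIRK_DMAPOOL_ALIGN_512  (1u << 0)  /* stand-in flag for the sketch */

static void *alloc_prp_small(unsigned int quirks)
{
    size_t align = 256;
    void *buf = NULL;

    if (quirks & QUIRK_DMAPOOL_ALIGN_512)
        align = 512;

    /* 256-byte objects, but aligned more strictly when the quirk is set. */
    if (posix_memalign(&buf, align, 256))
        return NULL;
    return buf;
}

int main(void)
{
    void *a = alloc_prp_small(0);
    void *b = alloc_prp_small(QUIRK_DMAPOOL_ALIGN_512);

    printf("default: %p, quirk: %p\n", a, b);
    free(a);
    free(b);
    return 0;
}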
+7
-11
drivers/nvme/host/tcp.c
···
2024
2024
return __nvme_tcp_alloc_io_queues(ctrl);
2025
2025
}
2026
2026
2027
-
static void nvme_tcp_destroy_io_queues(struct nvme_ctrl *ctrl, bool remove)
2028
-
{
2029
-
nvme_tcp_stop_io_queues(ctrl);
2030
-
if (remove)
2031
-
nvme_remove_io_tag_set(ctrl);
2032
-
nvme_tcp_free_io_queues(ctrl);
2033
-
}
2034
-
2035
2027
static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
2036
2028
{
2037
2029
int ret, nr_queues;
···
2168
2176
nvme_sync_io_queues(ctrl);
2169
2177
nvme_tcp_stop_io_queues(ctrl);
2170
2178
nvme_cancel_tagset(ctrl);
2171
-
if (remove)
2179
+
if (remove) {
2172
2180
nvme_unquiesce_io_queues(ctrl);
2173
-
nvme_tcp_destroy_io_queues(ctrl, remove);
2181
+
nvme_remove_io_tag_set(ctrl);
2182
+
}
2183
+
nvme_tcp_free_io_queues(ctrl);
2174
2184
}
2175
2185
2176
2186
static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl,
···
2261
2267
nvme_sync_io_queues(ctrl);
2262
2268
nvme_tcp_stop_io_queues(ctrl);
2263
2269
nvme_cancel_tagset(ctrl);
2264
-
nvme_tcp_destroy_io_queues(ctrl, new);
2270
+
if (new)
2271
+
nvme_remove_io_tag_set(ctrl);
2272
+
nvme_tcp_free_io_queues(ctrl);
2265
2273
}
2266
2274
destroy_admin:
2267
2275
nvme_stop_keep_alive(ctrl);
+5
-4
drivers/nvme/target/admin-cmd.c
···
139
139
unsigned long idx;
140
140
141
141
ctrl = req->sq->ctrl;
142
-
xa_for_each(&ctrl->subsys->namespaces, idx, ns) {
142
+
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
143
143
/* we don't have the right data for file backed ns */
144
144
if (!ns->bdev)
145
145
continue;
···
331
331
u32 count = 0;
332
332
333
333
if (!(req->cmd->get_log_page.lsp & NVME_ANA_LOG_RGO)) {
334
-
xa_for_each(&ctrl->subsys->namespaces, idx, ns)
334
+
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
335
335
if (ns->anagrpid == grpid)
336
336
desc->nsids[count++] = cpu_to_le32(ns->nsid);
337
+
}
337
338
}
338
339
339
340
desc->grpid = cpu_to_le32(grpid);
···
773
772
goto out;
774
773
}
775
774
776
-
xa_for_each(&ctrl->subsys->namespaces, idx, ns) {
775
+
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
777
776
if (ns->nsid <= min_endgid)
778
777
continue;
779
778
···
816
815
goto out;
817
816
}
818
817
819
-
xa_for_each(&ctrl->subsys->namespaces, idx, ns) {
818
+
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
820
819
if (ns->nsid <= min_nsid)
821
820
continue;
822
821
if (match_css && req->ns->csi != req->cmd->identify.csi)
+9
-14
drivers/nvme/target/configfs.c
···
810
810
NULL,
811
811
};
812
812
813
-
bool nvmet_subsys_nsid_exists(struct nvmet_subsys *subsys, u32 nsid)
814
-
{
815
-
struct config_item *ns_item;
816
-
char name[12];
817
-
818
-
snprintf(name, sizeof(name), "%u", nsid);
819
-
mutex_lock(&subsys->namespaces_group.cg_subsys->su_mutex);
820
-
ns_item = config_group_find_item(&subsys->namespaces_group, name);
821
-
mutex_unlock(&subsys->namespaces_group.cg_subsys->su_mutex);
822
-
return ns_item != NULL;
823
-
}
824
-
825
813
static void nvmet_ns_release(struct config_item *item)
826
814
{
827
815
struct nvmet_ns *ns = to_nvmet_ns(item);
···
2242
2254
const char *page, size_t count)
2243
2255
{
2244
2256
struct list_head *entry;
2257
+
char *old_nqn, *new_nqn;
2245
2258
size_t len;
2246
2259
2247
2260
len = strcspn(page, "\n");
2248
2261
if (!len || len > NVMF_NQN_FIELD_LEN - 1)
2249
2262
return -EINVAL;
2263
+
2264
+
new_nqn = kstrndup(page, len, GFP_KERNEL);
2265
+
if (!new_nqn)
2266
+
return -ENOMEM;
2250
2267
2251
2268
down_write(&nvmet_config_sem);
2252
2269
list_for_each(entry, &nvmet_subsystems_group.cg_children) {
···
2261
2268
if (!strncmp(config_item_name(item), page, len)) {
2262
2269
pr_err("duplicate NQN %s\n", config_item_name(item));
2263
2270
up_write(&nvmet_config_sem);
2271
+
kfree(new_nqn);
2264
2272
return -EINVAL;
2265
2273
}
2266
2274
}
2267
-
memset(nvmet_disc_subsys->subsysnqn, 0, NVMF_NQN_FIELD_LEN);
2268
-
memcpy(nvmet_disc_subsys->subsysnqn, page, len);
2275
+
old_nqn = nvmet_disc_subsys->subsysnqn;
2276
+
nvmet_disc_subsys->subsysnqn = new_nqn;
2269
2277
up_write(&nvmet_config_sem);
2270
2278
2279
+
kfree(old_nqn);
2271
2280
return len;
2272
2281
}
2273
2282
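The configfs.c change stops copying the new discovery NQN into the live buffer and instead builds the replacement with kstrndup(), swaps the pointer while holding the configuration lock, and frees the old string afterwards, so a reader holding the lock sees either the old or the new NQN, never a partial update. A user-space sketch of that allocate-swap-free ordering (a pthread mutex stands in for nvmet_config_sem; set_nqn() and the NQN strings are invented for the example):

#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t config_lock = PTHREAD_MUTEX_INITIALIZER;
static char *subsysnqn;

static int set_nqn(const char *page, size_t len)
{
    char *new_nqn, *old_nqn;

    new_nqn = strndup(page, len);   /* build the replacement first */
    if (!new_nqn)
        return -1;

    pthread_mutex_lock(&config_lock);
    old_nqn = subsysnqn;
    subsysnqn = new_nqn;            /* readers taking the lock see old or new, never a torn copy */
    pthread_mutex_unlock(&config_lock);

    free(old_nqn);                  /* old buffer released only after the swap */
    return 0;
}

int main(void)
{
    const char *a = "nqn.2014-08.org.nvmexpress.discovery";
    const char *b = "nqn.2024-01.com.example:disc";

    set_nqn(a, strlen(a));
    printf("%s\n", subsysnqn);
    set_nqn(b, strlen(b));
    printf("%s\n", subsysnqn);
    free(subsysnqn);
    return 0;
}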
+63
-45
drivers/nvme/target/core.c
···
127
127
unsigned long idx;
128
128
u32 nsid = 0;
129
129
130
-
xa_for_each(&subsys->namespaces, idx, cur)
130
+
nvmet_for_each_enabled_ns(&subsys->namespaces, idx, cur)
131
131
nsid = cur->nsid;
132
132
133
133
return nsid;
···
441
441
struct nvmet_subsys *subsys = nvmet_req_subsys(req);
442
442
443
443
req->ns = xa_load(&subsys->namespaces, nsid);
444
-
if (unlikely(!req->ns)) {
444
+
if (unlikely(!req->ns || !req->ns->enabled)) {
445
445
req->error_loc = offsetof(struct nvme_common_command, nsid);
446
-
if (nvmet_subsys_nsid_exists(subsys, nsid))
447
-
return NVME_SC_INTERNAL_PATH_ERROR;
448
-
return NVME_SC_INVALID_NS | NVME_STATUS_DNR;
446
+
if (!req->ns) /* ns doesn't exist! */
447
+
return NVME_SC_INVALID_NS | NVME_STATUS_DNR;
448
+
449
+
/* ns exists but it's disabled */
450
+
req->ns = NULL;
451
+
return NVME_SC_INTERNAL_PATH_ERROR;
449
452
}
450
453
451
454
percpu_ref_get(&req->ns->ref);
···
586
583
goto out_unlock;
587
584
588
585
ret = -EMFILE;
589
-
if (subsys->nr_namespaces == NVMET_MAX_NAMESPACES)
590
-
goto out_unlock;
591
586
592
587
ret = nvmet_bdev_ns_enable(ns);
593
588
if (ret == -ENOTBLK)
···
600
599
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
601
600
nvmet_p2pmem_ns_add_p2p(ctrl, ns);
602
601
603
-
ret = percpu_ref_init(&ns->ref, nvmet_destroy_namespace,
604
-
0, GFP_KERNEL);
605
-
if (ret)
606
-
goto out_dev_put;
607
-
608
-
if (ns->nsid > subsys->max_nsid)
609
-
subsys->max_nsid = ns->nsid;
610
-
611
-
ret = xa_insert(&subsys->namespaces, ns->nsid, ns, GFP_KERNEL);
612
-
if (ret)
613
-
goto out_restore_subsys_maxnsid;
614
-
615
602
if (ns->pr.enable) {
616
603
ret = nvmet_pr_init_ns(ns);
617
604
if (ret)
618
-
goto out_remove_from_subsys;
605
+
goto out_dev_put;
619
606
}
620
-
621
-
subsys->nr_namespaces++;
622
607
623
608
nvmet_ns_changed(subsys, ns->nsid);
624
609
ns->enabled = true;
610
+
xa_set_mark(&subsys->namespaces, ns->nsid, NVMET_NS_ENABLED);
625
611
ret = 0;
626
612
out_unlock:
627
613
mutex_unlock(&subsys->lock);
628
614
return ret;
629
-
630
-
out_remove_from_subsys:
631
-
xa_erase(&subsys->namespaces, ns->nsid);
632
-
out_restore_subsys_maxnsid:
633
-
subsys->max_nsid = nvmet_max_nsid(subsys);
634
-
percpu_ref_exit(&ns->ref);
635
615
out_dev_put:
636
616
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
637
617
pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid));
···
631
649
goto out_unlock;
632
650
633
651
ns->enabled = false;
634
-
xa_erase(&ns->subsys->namespaces, ns->nsid);
635
-
if (ns->nsid == subsys->max_nsid)
636
-
subsys->max_nsid = nvmet_max_nsid(subsys);
652
+
xa_clear_mark(&subsys->namespaces, ns->nsid, NVMET_NS_ENABLED);
637
653
638
654
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
639
655
pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid));
656
+
657
+
mutex_unlock(&subsys->lock);
658
+
659
+
if (ns->pr.enable)
660
+
nvmet_pr_exit_ns(ns);
661
+
662
+
mutex_lock(&subsys->lock);
663
+
nvmet_ns_changed(subsys, ns->nsid);
664
+
nvmet_ns_dev_disable(ns);
665
+
out_unlock:
666
+
mutex_unlock(&subsys->lock);
667
+
}
668
+
669
+
void nvmet_ns_free(struct nvmet_ns *ns)
670
+
{
671
+
struct nvmet_subsys *subsys = ns->subsys;
672
+
673
+
nvmet_ns_disable(ns);
674
+
675
+
mutex_lock(&subsys->lock);
676
+
677
+
xa_erase(&subsys->namespaces, ns->nsid);
678
+
if (ns->nsid == subsys->max_nsid)
679
+
subsys->max_nsid = nvmet_max_nsid(subsys);
640
680
641
681
mutex_unlock(&subsys->lock);
642
682
···
675
671
wait_for_completion(&ns->disable_done);
676
672
percpu_ref_exit(&ns->ref);
677
673
678
-
if (ns->pr.enable)
679
-
nvmet_pr_exit_ns(ns);
680
-
681
674
mutex_lock(&subsys->lock);
682
-
683
675
subsys->nr_namespaces--;
684
-
nvmet_ns_changed(subsys, ns->nsid);
685
-
nvmet_ns_dev_disable(ns);
686
-
out_unlock:
687
676
mutex_unlock(&subsys->lock);
688
-
}
689
-
690
-
void nvmet_ns_free(struct nvmet_ns *ns)
691
-
{
692
-
nvmet_ns_disable(ns);
693
677
694
678
down_write(&nvmet_ana_sem);
695
679
nvmet_ana_group_enabled[ns->anagrpid]--;
···
691
699
{
692
700
struct nvmet_ns *ns;
693
701
702
+
mutex_lock(&subsys->lock);
703
+
704
+
if (subsys->nr_namespaces == NVMET_MAX_NAMESPACES)
705
+
goto out_unlock;
706
+
694
707
ns = kzalloc(sizeof(*ns), GFP_KERNEL);
695
708
if (!ns)
696
-
return NULL;
709
+
goto out_unlock;
697
710
698
711
init_completion(&ns->disable_done);
699
712
700
713
ns->nsid = nsid;
701
714
ns->subsys = subsys;
715
+
716
+
if (percpu_ref_init(&ns->ref, nvmet_destroy_namespace, 0, GFP_KERNEL))
717
+
goto out_free;
718
+
719
+
if (ns->nsid > subsys->max_nsid)
720
+
subsys->max_nsid = nsid;
721
+
722
+
if (xa_insert(&subsys->namespaces, ns->nsid, ns, GFP_KERNEL))
723
+
goto out_exit;
724
+
725
+
subsys->nr_namespaces++;
726
+
727
+
mutex_unlock(&subsys->lock);
702
728
703
729
down_write(&nvmet_ana_sem);
704
730
ns->anagrpid = NVMET_DEFAULT_ANA_GRPID;
···
728
718
ns->csi = NVME_CSI_NVM;
729
719
730
720
return ns;
721
+
out_exit:
722
+
subsys->max_nsid = nvmet_max_nsid(subsys);
723
+
percpu_ref_exit(&ns->ref);
724
+
out_free:
725
+
kfree(ns);
726
+
out_unlock:
727
+
mutex_unlock(&subsys->lock);
728
+
return NULL;
731
729
}
732
730
733
731
static void nvmet_update_sq_head(struct nvmet_req *req)
···
1412
1394
1413
1395
ctrl->p2p_client = get_device(req->p2p_client);
1414
1396
1415
-
xa_for_each(&ctrl->subsys->namespaces, idx, ns)
1397
+
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns)
1416
1398
nvmet_p2pmem_ns_add_p2p(ctrl, ns);
1417
1399
}
1418
1400
+1
-1
drivers/nvme/target/io-cmd-bdev.c
···
36
36
*/
37
37
id->nsfeat |= 1 << 4;
38
38
/* NPWG = Namespace Preferred Write Granularity. 0's based */
39
-
id->npwg = lpp0b;
39
+
id->npwg = to0based(bdev_io_min(bdev) / bdev_logical_block_size(bdev));
40
40
/* NPWA = Namespace Preferred Write Alignment. 0's based */
41
41
id->npwa = id->npwg;
42
42
/* NPDG = Namespace Preferred Deallocate Granularity. 0's based */
+7
drivers/nvme/target/nvmet.h
···
24
24
25
25
#define NVMET_DEFAULT_VS NVME_VS(2, 1, 0)
26
26
27
+
#define NVMET_NS_ENABLED XA_MARK_1
27
28
#define NVMET_ASYNC_EVENTS 4
28
29
#define NVMET_ERROR_LOG_SLOTS 128
29
30
#define NVMET_NO_ERROR_LOC ((u16)-1)
···
33
32
#define NVMET_SN_MAX_SIZE 20
34
33
#define NVMET_FR_MAX_SIZE 8
35
34
#define NVMET_PR_LOG_QUEUE_SIZE 64
35
+
36
+
#define nvmet_for_each_ns(xa, index, entry) \
37
+
xa_for_each(xa, index, entry)
38
+
39
+
#define nvmet_for_each_enabled_ns(xa, index, entry) \
40
+
xa_for_each_marked(xa, index, entry, NVMET_NS_ENABLED)
36
41
37
42
/*
38
43
* Supported optional AENs:
+4
-4
drivers/nvme/target/pr.c
+4
-4
drivers/nvme/target/pr.c
···
60
60
goto success;
61
61
}
62
62
63
-
xa_for_each(&ctrl->subsys->namespaces, idx, ns) {
63
+
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
64
64
if (ns->pr.enable)
65
65
WRITE_ONCE(ns->pr.notify_mask, mask);
66
66
}
···
1056
1056
* nvmet_pr_init_ns(), see more details in nvmet_ns_enable().
1057
1057
* So just check ns->pr.enable.
1058
1058
*/
1059
-
xa_for_each(&subsys->namespaces, idx, ns) {
1059
+
nvmet_for_each_enabled_ns(&subsys->namespaces, idx, ns) {
1060
1060
if (ns->pr.enable) {
1061
1061
ret = nvmet_pr_alloc_and_insert_pc_ref(ns, ctrl->cntlid,
1062
1062
&ctrl->hostid);
···
1067
1067
return 0;
1068
1068
1069
1069
free_per_ctrl_refs:
1070
-
xa_for_each(&subsys->namespaces, idx, ns) {
1070
+
nvmet_for_each_enabled_ns(&subsys->namespaces, idx, ns) {
1071
1071
if (ns->pr.enable) {
1072
1072
pc_ref = xa_erase(&ns->pr_per_ctrl_refs, ctrl->cntlid);
1073
1073
if (pc_ref)
···
1087
1087
kfifo_free(&ctrl->pr_log_mgr.log_queue);
1088
1088
mutex_destroy(&ctrl->pr_log_mgr.lock);
1089
1089
1090
-
xa_for_each(&ctrl->subsys->namespaces, idx, ns) {
1090
+
nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) {
1091
1091
if (ns->pr.enable) {
1092
1092
pc_ref = xa_erase(&ns->pr_per_ctrl_refs, ctrl->cntlid);
1093
1093
if (pc_ref)
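Across the nvmet changes, loops that walked every namespace with xa_for_each() now use nvmet_for_each_enabled_ns(), i.e. xa_for_each_marked() filtering on the new NVMET_NS_ENABLED mark, so namespaces that are allocated but disabled are skipped without repeating an enabled check at every call site. A toy analogue of marked iteration in plain C (an array with an `enabled` flag standing in for the XArray mark; the macros are invented for the example):

#include <stdbool.h>
#include <stdio.h>

struct ns {
    unsigned int nsid;
    bool enabled;       /* stand-in for the NVMET_NS_ENABLED XArray mark */
};

/* Visit every slot. */
#define for_each_ns(tbl, count, i, p) \
    for ((i) = 0; (i) < (count) && ((p) = &(tbl)[i], 1); (i)++)

/* Visit only the slots carrying the "enabled" mark. */
#define for_each_enabled_ns(tbl, count, i, p) \
    for_each_ns(tbl, count, i, p) if (!(p)->enabled) continue; else

int main(void)
{
    struct ns table[] = { { 1, true }, { 2, false }, { 3, true }, { 4, false } };
    struct ns *ns;
    unsigned int i;

    for_each_enabled_ns(table, 4, i, ns)
        printf("enabled nsid %u\n", ns->nsid);  /* prints 1 and 3 */
    return 0;
}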
+5
-2
drivers/pci/msi/irqdomain.c
···
350
350
351
351
domain = dev_get_msi_domain(&pdev->dev);
352
352
353
-
if (!domain || !irq_domain_is_hierarchy(domain))
354
-
return mode == ALLOW_LEGACY;
353
+
if (!domain || !irq_domain_is_hierarchy(domain)) {
354
+
if (IS_ENABLED(CONFIG_PCI_MSI_ARCH_FALLBACKS))
355
+
return mode == ALLOW_LEGACY;
356
+
return false;
357
+
}
355
358
356
359
if (!irq_domain_is_msi_parent(domain)) {
357
360
/*
+4
drivers/pci/msi/msi.c
···
433
433
if (WARN_ON_ONCE(dev->msi_enabled))
434
434
return -EINVAL;
435
435
436
+
/* Test for the availability of MSI support */
437
+
if (!pci_msi_domain_supports(dev, 0, ALLOW_LEGACY))
438
+
return -ENOTSUPP;
439
+
436
440
nvec = pci_msi_vec_count(dev);
437
441
if (nvec < 0)
438
442
return nvec;
+6
drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
···
325
325
void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
326
326
327
327
USB_CTRL_UNSET(ctrl, USB_PM, XHC_S2_CLK_SWITCH_EN);
328
+
329
+
/*
330
+
* The PHY might be in a bad state if it is already powered
331
+
* up. Toggle the power just in case.
332
+
*/
333
+
USB_CTRL_SET(ctrl, USB_PM, USB_PWRDN);
328
334
USB_CTRL_UNSET(ctrl, USB_PM, USB_PWRDN);
329
335
330
336
/* 1 millisecond - for USB clocks to settle down */
+1
-2
drivers/phy/freescale/phy-fsl-samsung-hdmi.c
+1
drivers/phy/mediatek/Kconfig
+13
-8
drivers/phy/phy-core.c
···
145
145
return phy_provider;
146
146
147
147
for_each_child_of_node(phy_provider->children, child)
148
-
if (child == node)
148
+
if (child == node) {
149
+
of_node_put(child);
149
150
return phy_provider;
151
+
}
150
152
}
151
153
152
154
return ERR_PTR(-EPROBE_DEFER);
···
631
629
return ERR_PTR(-ENODEV);
632
630
633
631
/* This phy type handled by the usb-phy subsystem for now */
634
-
if (of_device_is_compatible(args.np, "usb-nop-xceiv"))
635
-
return ERR_PTR(-ENODEV);
632
+
if (of_device_is_compatible(args.np, "usb-nop-xceiv")) {
633
+
phy = ERR_PTR(-ENODEV);
634
+
goto out_put_node;
635
+
}
636
636
637
637
mutex_lock(&phy_provider_mutex);
638
638
phy_provider = of_phy_provider_lookup(args.np);
···
656
652
657
653
out_unlock:
658
654
mutex_unlock(&phy_provider_mutex);
655
+
out_put_node:
659
656
of_node_put(args.np);
660
657
661
658
return phy;
···
742
737
if (!phy)
743
738
return;
744
739
745
-
r = devres_destroy(dev, devm_phy_release, devm_phy_match, phy);
740
+
r = devres_release(dev, devm_phy_release, devm_phy_match, phy);
746
741
dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n");
747
742
}
748
743
EXPORT_SYMBOL_GPL(devm_phy_put);
···
1126
1121
{
1127
1122
int r;
1128
1123
1129
-
r = devres_destroy(dev, devm_phy_consume, devm_phy_match, phy);
1124
+
r = devres_release(dev, devm_phy_consume, devm_phy_match, phy);
1130
1125
dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n");
1131
1126
}
1132
1127
EXPORT_SYMBOL_GPL(devm_phy_destroy);
···
1264
1259
* of_phy_provider_unregister to unregister the phy provider.
1265
1260
*/
1266
1261
void devm_of_phy_provider_unregister(struct device *dev,
1267
-
struct phy_provider *phy_provider)
1262
+
struct phy_provider *phy_provider)
1268
1263
{
1269
1264
int r;
1270
1265
1271
-
r = devres_destroy(dev, devm_phy_provider_release, devm_phy_match,
1272
-
phy_provider);
1266
+
r = devres_release(dev, devm_phy_provider_release, devm_phy_match,
1267
+
phy_provider);
1273
1268
dev_WARN_ONCE(dev, r, "couldn't find PHY provider device resource\n");
1274
1269
}
1275
1270
EXPORT_SYMBOL_GPL(devm_of_phy_provider_unregister);
+1
-1
drivers/phy/qualcomm/phy-qcom-qmp-usb.c
···
1052
1052
QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_FO_GAIN, 0x2f),
1053
1053
QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_COUNT_LOW, 0xff),
1054
1054
QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_COUNT_HIGH, 0x0f),
1055
-
QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_SO_GAIN, 0x0a),
1055
+
QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FO_GAIN, 0x0a),
1056
1056
QMP_PHY_INIT_CFG(QSERDES_V5_RX_VGA_CAL_CNTRL1, 0x54),
1057
1057
QMP_PHY_INIT_CFG(QSERDES_V5_RX_VGA_CAL_CNTRL2, 0x0f),
1058
1058
QMP_PHY_INIT_CFG(QSERDES_V5_RX_RX_EQU_ADAPTOR_CNTRL2, 0x0f),
+1
-1
drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
···
309
309
310
310
priv->ext_refclk = device_property_present(dev, "rockchip,ext-refclk");
311
311
312
-
priv->phy_rst = devm_reset_control_array_get_exclusive(dev);
312
+
priv->phy_rst = devm_reset_control_get(dev, "phy");
313
313
if (IS_ERR(priv->phy_rst))
314
314
return dev_err_probe(dev, PTR_ERR(priv->phy_rst), "failed to get phy reset\n");
315
315
+2
-1
drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
···
1101
1101
return dev_err_probe(dev, PTR_ERR(hdptx->grf),
1102
1102
"Could not get GRF syscon\n");
1103
1103
1104
+
platform_set_drvdata(pdev, hdptx);
1105
+
1104
1106
ret = devm_pm_runtime_enable(dev);
1105
1107
if (ret)
1106
1108
return dev_err_probe(dev, ret, "Failed to enable runtime PM\n");
···
1112
1110
return dev_err_probe(dev, PTR_ERR(hdptx->phy),
1113
1111
"Failed to create HDMI PHY\n");
1114
1112
1115
-
platform_set_drvdata(pdev, hdptx);
1116
1113
phy_set_drvdata(hdptx->phy, hdptx);
1117
1114
phy_set_bus_width(hdptx->phy, 8);
1118
1115
+15
-6
drivers/phy/st/phy-stm32-combophy.c
···
122
122
u32 max_vswing = imp_lookup[imp_size - 1].vswing[vswing_size - 1];
123
123
u32 min_vswing = imp_lookup[0].vswing[0];
124
124
u32 val;
125
+
u32 regval;
125
126
126
127
if (!of_property_read_u32(combophy->dev->of_node, "st,output-micro-ohms", &val)) {
127
128
if (val < min_imp || val > max_imp) {
···
130
129
return -EINVAL;
131
130
}
132
131
133
-
for (imp_of = 0; imp_of < ARRAY_SIZE(imp_lookup); imp_of++)
134
-
if (imp_lookup[imp_of].microohm <= val)
132
+
regval = 0;
133
+
for (imp_of = 0; imp_of < ARRAY_SIZE(imp_lookup); imp_of++) {
134
+
if (imp_lookup[imp_of].microohm <= val) {
135
+
regval = FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_OHM, imp_of);
135
136
break;
137
+
}
138
+
}
136
139
137
140
dev_dbg(combophy->dev, "Set %u micro-ohms output impedance\n",
138
141
imp_lookup[imp_of].microohm);
139
142
140
143
regmap_update_bits(combophy->regmap, SYSCFG_PCIEPRGCR,
141
144
STM32MP25_PCIEPRG_IMPCTRL_OHM,
142
-
FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_OHM, imp_of));
145
+
regval);
143
146
} else {
144
147
regmap_read(combophy->regmap, SYSCFG_PCIEPRGCR, &val);
145
148
imp_of = FIELD_GET(STM32MP25_PCIEPRG_IMPCTRL_OHM, val);
···
155
150
return -EINVAL;
156
151
}
157
152
158
-
for (vswing_of = 0; vswing_of < ARRAY_SIZE(imp_lookup[imp_of].vswing); vswing_of++)
159
-
if (imp_lookup[imp_of].vswing[vswing_of] >= val)
153
+
regval = 0;
154
+
for (vswing_of = 0; vswing_of < ARRAY_SIZE(imp_lookup[imp_of].vswing); vswing_of++) {
155
+
if (imp_lookup[imp_of].vswing[vswing_of] >= val) {
156
+
regval = FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_VSWING, vswing_of);
160
157
break;
158
+
}
159
+
}
161
160
162
161
dev_dbg(combophy->dev, "Set %u microvolt swing\n",
163
162
imp_lookup[imp_of].vswing[vswing_of]);
164
163
165
164
regmap_update_bits(combophy->regmap, SYSCFG_PCIEPRGCR,
166
165
STM32MP25_PCIEPRG_IMPCTRL_VSWING,
167
-
FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_VSWING, vswing_of));
166
+
regval);
168
167
}
169
168
170
169
return 0;
+6
drivers/pinctrl/pinctrl-mcp23s08.c
···
86
86
.num_reg_defaults = ARRAY_SIZE(mcp23x08_defaults),
87
87
.cache_type = REGCACHE_FLAT,
88
88
.max_register = MCP_OLAT,
89
+
.disable_locking = true, /* mcp->lock protects the regmap */
89
90
};
90
91
EXPORT_SYMBOL_GPL(mcp23x08_regmap);
91
92
···
133
132
.num_reg_defaults = ARRAY_SIZE(mcp23x17_defaults),
134
133
.cache_type = REGCACHE_FLAT,
135
134
.val_format_endian = REGMAP_ENDIAN_LITTLE,
135
+
.disable_locking = true, /* mcp->lock protects the regmap */
136
136
};
137
137
EXPORT_SYMBOL_GPL(mcp23x17_regmap);
138
138
···
230
228
231
229
switch (param) {
232
230
case PIN_CONFIG_BIAS_PULL_UP:
231
+
mutex_lock(&mcp->lock);
233
232
ret = mcp_read(mcp, MCP_GPPU, &data);
233
+
mutex_unlock(&mcp->lock);
234
234
if (ret < 0)
235
235
return ret;
236
236
status = (data & BIT(pin)) ? 1 : 0;
···
261
257
262
258
switch (param) {
263
259
case PIN_CONFIG_BIAS_PULL_UP:
260
+
mutex_lock(&mcp->lock);
264
261
ret = mcp_set_bit(mcp, MCP_GPPU, pin, arg);
262
+
mutex_unlock(&mcp->lock);
265
263
break;
266
264
default:
267
265
dev_dbg(mcp->dev, "Invalid config param %04x\n", param);
+2
-2
drivers/platform/chrome/cros_ec_lpc.c
···
707
707
/* Framework Laptop (12th Gen Intel Core) */
708
708
.matches = {
709
709
DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
710
-
DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "12th Gen Intel Core"),
710
+
DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Laptop (12th Gen Intel Core)"),
711
711
},
712
712
.driver_data = (void *)&framework_laptop_mec_lpc_driver_data,
713
713
},
···
715
715
/* Framework Laptop (13th Gen Intel Core) */
716
716
.matches = {
717
717
DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
718
-
DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "13th Gen Intel Core"),
718
+
DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Laptop (13th Gen Intel Core)"),
719
719
},
720
720
.driver_data = (void *)&framework_laptop_mec_lpc_driver_data,
721
721
},
+2
-2
drivers/platform/x86/hp/hp-wmi.c
···
64
64
"874A", "8603", "8604", "8748", "886B", "886C", "878A", "878B", "878C",
65
65
"88C8", "88CB", "8786", "8787", "8788", "88D1", "88D2", "88F4", "88FD",
66
66
"88F5", "88F6", "88F7", "88FE", "88FF", "8900", "8901", "8902", "8912",
67
-
"8917", "8918", "8949", "894A", "89EB", "8BAD", "8A42"
67
+
"8917", "8918", "8949", "894A", "89EB", "8BAD", "8A42", "8A15"
68
68
};
69
69
70
70
/* DMI Board names of Omen laptops that are specifically set to be thermal
···
80
80
* "balanced" when reaching zero.
81
81
*/
82
82
static const char * const omen_timed_thermal_profile_boards[] = {
83
-
"8BAD", "8A42"
83
+
"8BAD", "8A42", "8A15"
84
84
};
85
85
86
86
/* DMI Board names of Victus laptops */
+2
drivers/platform/x86/mlx-platform.c
···
6237
6237
fail_pci_request_regions:
6238
6238
pci_disable_device(pci_dev);
6239
6239
fail_pci_enable_device:
6240
+
pci_dev_put(pci_dev);
6240
6241
return err;
6241
6242
}
6242
6243
···
6248
6247
iounmap(pci_bridge_addr);
6249
6248
pci_release_regions(pci_bridge);
6250
6249
pci_disable_device(pci_bridge);
6250
+
pci_dev_put(pci_bridge);
6251
6251
}
6252
6252
6253
6253
static int
+3
-1
drivers/platform/x86/thinkpad_acpi.c
···
184
184
*/
185
185
TP_HKEY_EV_AMT_TOGGLE = 0x131a, /* Toggle AMT on/off */
186
186
TP_HKEY_EV_DOUBLETAP_TOGGLE = 0x131c, /* Toggle trackpoint doubletap on/off */
187
-
TP_HKEY_EV_PROFILE_TOGGLE = 0x131f, /* Toggle platform profile */
187
+
TP_HKEY_EV_PROFILE_TOGGLE = 0x131f, /* Toggle platform profile in 2024 systems */
188
+
TP_HKEY_EV_PROFILE_TOGGLE2 = 0x1401, /* Toggle platform profile in 2025 + systems */
188
189
189
190
/* Reasons for waking up from S3/S4 */
190
191
TP_HKEY_EV_WKUP_S3_UNDOCK = 0x2304, /* undock requested, S3 */
···
11201
11200
tp_features.trackpoint_doubletap = !tp_features.trackpoint_doubletap;
11202
11201
return true;
11203
11202
case TP_HKEY_EV_PROFILE_TOGGLE:
11203
+
case TP_HKEY_EV_PROFILE_TOGGLE2:
11204
11204
platform_profile_cycle();
11205
11205
return true;
11206
11206
}
+6
drivers/pmdomain/core.c
···
2142
2142
return 0;
2143
2143
}
2144
2144
2145
+
static void genpd_provider_release(struct device *dev)
2146
+
{
2147
+
/* nothing to be done here */
2148
+
}
2149
+
2145
2150
static int genpd_alloc_data(struct generic_pm_domain *genpd)
2146
2151
{
2147
2152
struct genpd_governor_data *gd = NULL;
···
2178
2173
2179
2174
genpd->gd = gd;
2180
2175
device_initialize(&genpd->dev);
2176
+
genpd->dev.release = genpd_provider_release;
2181
2177
2182
2178
if (!genpd_is_dev_name_fw(genpd)) {
2183
2179
dev_set_name(&genpd->dev, "%s", genpd->name);
+2
-2
drivers/pmdomain/imx/gpcv2.c
···
1458
1458
.max_register = SZ_4K,
1459
1459
};
1460
1460
struct device *dev = &pdev->dev;
1461
-
struct device_node *pgc_np;
1461
+
struct device_node *pgc_np __free(device_node) =
1462
+
of_get_child_by_name(dev->of_node, "pgc");
1462
1463
struct regmap *regmap;
1463
1464
void __iomem *base;
1464
1465
int ret;
1465
1466
1466
-
pgc_np = of_get_child_by_name(dev->of_node, "pgc");
1467
1467
if (!pgc_np) {
1468
1468
dev_err(dev, "No power domains specified in DT\n");
1469
1469
return -EINVAL;
+9
-3
drivers/power/supply/bq24190_charger.c
···
567
567
568
568
static int bq24296_set_otg_vbus(struct bq24190_dev_info *bdi, bool enable)
569
569
{
570
+
union power_supply_propval val = { .intval = bdi->charge_type };
570
571
int ret;
571
572
572
573
ret = pm_runtime_resume_and_get(bdi->dev);
···
588
587
589
588
ret = bq24190_write_mask(bdi, BQ24190_REG_POC,
590
589
BQ24296_REG_POC_OTG_CONFIG_MASK,
591
-
BQ24296_REG_POC_CHG_CONFIG_SHIFT,
590
+
BQ24296_REG_POC_OTG_CONFIG_SHIFT,
592
591
BQ24296_REG_POC_OTG_CONFIG_OTG);
593
-
} else
592
+
} else {
594
593
ret = bq24190_write_mask(bdi, BQ24190_REG_POC,
595
594
BQ24296_REG_POC_OTG_CONFIG_MASK,
596
-
BQ24296_REG_POC_CHG_CONFIG_SHIFT,
595
+
BQ24296_REG_POC_OTG_CONFIG_SHIFT,
597
596
BQ24296_REG_POC_OTG_CONFIG_DISABLE);
597
+
if (ret < 0)
598
+
goto out;
599
+
600
+
ret = bq24190_charger_set_charge_type(bdi, &val);
601
+
}
598
602
599
603
out:
600
604
pm_runtime_mark_last_busy(bdi->dev);
+27
-9
drivers/power/supply/cros_charge-control.c
···
7
7
#include <acpi/battery.h>
8
8
#include <linux/container_of.h>
9
9
#include <linux/dmi.h>
10
+
#include <linux/lockdep.h>
10
11
#include <linux/mod_devicetable.h>
11
12
#include <linux/module.h>
13
+
#include <linux/mutex.h>
12
14
#include <linux/platform_data/cros_ec_commands.h>
13
15
#include <linux/platform_data/cros_ec_proto.h>
14
16
#include <linux/platform_device.h>
···
51
49
struct attribute *attributes[_CROS_CHCTL_ATTR_COUNT];
52
50
struct attribute_group group;
53
51
52
+
struct mutex lock; /* protects fields below and cros_ec */
54
53
enum power_supply_charge_behaviour current_behaviour;
55
54
u8 current_start_threshold, current_end_threshold;
56
55
};
···
87
84
static int cros_chctl_configure_ec(struct cros_chctl_priv *priv)
88
85
{
89
86
struct ec_params_charge_control req = {};
87
+
88
+
lockdep_assert_held(&priv->lock);
90
89
91
90
req.cmd = EC_CHARGE_CONTROL_CMD_SET;
92
91
···
139
134
return -EINVAL;
140
135
141
136
if (is_end_threshold) {
142
-
if (val <= priv->current_start_threshold)
137
+
/* Start threshold is not exposed, use fixed value */
138
+
if (priv->cmd_version == 2)
139
+
priv->current_start_threshold = val == 100 ? 0 : val;
140
+
141
+
if (val < priv->current_start_threshold)
143
142
return -EINVAL;
144
143
priv->current_end_threshold = val;
145
144
} else {
146
-
if (val >= priv->current_end_threshold)
145
+
if (val > priv->current_end_threshold)
147
146
return -EINVAL;
148
147
priv->current_start_threshold = val;
149
148
}
···
168
159
struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
169
160
CROS_CHCTL_ATTR_START_THRESHOLD);
170
161
162
+
guard(mutex)(&priv->lock);
171
163
return sysfs_emit(buf, "%u\n", (unsigned int)priv->current_start_threshold);
172
164
}
173
165
···
179
169
struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
180
170
CROS_CHCTL_ATTR_START_THRESHOLD);
181
171
172
+
guard(mutex)(&priv->lock);
182
173
return cros_chctl_store_threshold(dev, priv, 0, buf, count);
183
174
}
184
175
···
189
178
struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
190
179
CROS_CHCTL_ATTR_END_THRESHOLD);
191
180
181
+
guard(mutex)(&priv->lock);
192
182
return sysfs_emit(buf, "%u\n", (unsigned int)priv->current_end_threshold);
193
183
}
194
184
···
199
187
struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
200
188
CROS_CHCTL_ATTR_END_THRESHOLD);
201
189
190
+
guard(mutex)(&priv->lock);
202
191
return cros_chctl_store_threshold(dev, priv, 1, buf, count);
203
192
}
204
193
···
208
195
struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
209
196
CROS_CHCTL_ATTR_CHARGE_BEHAVIOUR);
210
197
198
+
guard(mutex)(&priv->lock);
211
199
return power_supply_charge_behaviour_show(dev, EC_CHARGE_CONTROL_BEHAVIOURS,
212
200
priv->current_behaviour, buf);
213
201
}
···
224
210
if (ret < 0)
225
211
return ret;
226
212
213
+
guard(mutex)(&priv->lock);
227
214
priv->current_behaviour = ret;
228
215
229
216
ret = cros_chctl_configure_ec(priv);
···
238
223
{
239
224
struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(attr, n);
240
225
241
-
if (priv->cmd_version < 2) {
242
-
if (n == CROS_CHCTL_ATTR_START_THRESHOLD)
243
-
return 0;
244
-
if (n == CROS_CHCTL_ATTR_END_THRESHOLD)
245
-
return 0;
246
-
}
226
+
if (n == CROS_CHCTL_ATTR_START_THRESHOLD && priv->cmd_version < 3)
227
+
return 0;
228
+
else if (n == CROS_CHCTL_ATTR_END_THRESHOLD && priv->cmd_version < 2)
229
+
return 0;
247
230
248
231
return attr->mode;
249
232
}
···
303
290
if (!priv)
304
291
return -ENOMEM;
305
292
293
+
ret = devm_mutex_init(dev, &priv->lock);
294
+
if (ret)
295
+
return ret;
296
+
306
297
ret = cros_ec_get_cmd_versions(cros_ec, EC_CMD_CHARGE_CONTROL);
307
298
if (ret < 0)
308
299
return ret;
···
344
327
priv->current_end_threshold = 100;
345
328
346
329
/* Bring EC into well-known state */
347
-
ret = cros_chctl_configure_ec(priv);
330
+
scoped_guard(mutex, &priv->lock)
331
+
ret = cros_chctl_configure_ec(priv);
348
332
if (ret < 0)
349
333
return ret;
350
334
+8
drivers/power/supply/gpio-charger.c
···
67
67
if (gpio_charger->current_limit_map[i].limit_ua <= val)
68
68
break;
69
69
}
70
+
71
+
/*
72
+
* If a valid charge current limit isn't found, default to smallest
73
+
* current limitation for safety reasons.
74
+
*/
75
+
if (i >= gpio_charger->current_limit_map_size)
76
+
i = gpio_charger->current_limit_map_size - 1;
77
+
70
78
mapping = gpio_charger->current_limit_map[i];
71
79
72
80
for (i = 0; i < ndescs; i++) {
+1
-3
drivers/virt/coco/tdx-guest/tdx-guest.c
+1
-1
drivers/watchdog/stm32_iwdg.c
+4
-7
fs/btrfs/ctree.c
···
654
654
goto error_unlock_cow;
655
655
}
656
656
}
657
+
658
+
trace_btrfs_cow_block(root, buf, cow);
657
659
if (unlock_orig)
658
660
btrfs_tree_unlock(buf);
659
661
free_extent_buffer_stale(buf);
···
712
710
{
713
711
struct btrfs_fs_info *fs_info = root->fs_info;
714
712
u64 search_start;
715
-
int ret;
716
713
717
714
if (unlikely(test_bit(BTRFS_ROOT_DELETING, &root->state))) {
718
715
btrfs_abort_transaction(trans, -EUCLEAN);
···
752
751
* Also We don't care about the error, as it's handled internally.
753
752
*/
754
753
btrfs_qgroup_trace_subtree_after_cow(trans, root, buf);
755
-
ret = btrfs_force_cow_block(trans, root, buf, parent, parent_slot,
756
-
cow_ret, search_start, 0, nest);
757
-
758
-
trace_btrfs_cow_block(root, buf, *cow_ret);
759
-
760
-
return ret;
754
+
return btrfs_force_cow_block(trans, root, buf, parent, parent_slot,
755
+
cow_ret, search_start, 0, nest);
761
756
}
762
757
ALLOW_ERROR_INJECTION(btrfs_cow_block, ERRNO);
763
758
+110
-44
fs/btrfs/inode.c
···
9078
9078
}
9079
9079
9080
9080
struct btrfs_encoded_read_private {
9081
-
wait_queue_head_t wait;
9081
+
struct completion done;
9082
9082
void *uring_ctx;
9083
-
atomic_t pending;
9083
+
refcount_t pending_refs;
9084
9084
blk_status_t status;
9085
9085
};
9086
9086
···
9099
9099
*/
9100
9100
WRITE_ONCE(priv->status, bbio->bio.bi_status);
9101
9101
}
9102
-
if (atomic_dec_and_test(&priv->pending)) {
9102
+
if (refcount_dec_and_test(&priv->pending_refs)) {
9103
9103
int err = blk_status_to_errno(READ_ONCE(priv->status));
9104
9104
9105
9105
if (priv->uring_ctx) {
9106
9106
btrfs_uring_read_extent_endio(priv->uring_ctx, err);
9107
9107
kfree(priv);
9108
9108
} else {
9109
-
wake_up(&priv->wait);
9109
+
complete(&priv->done);
9110
9110
}
9111
9111
}
9112
9112
bio_put(&bbio->bio);
···
9126
9126
if (!priv)
9127
9127
return -ENOMEM;
9128
9128
9129
-
init_waitqueue_head(&priv->wait);
9130
-
atomic_set(&priv->pending, 1);
9129
+
init_completion(&priv->done);
9130
+
refcount_set(&priv->pending_refs, 1);
9131
9131
priv->status = 0;
9132
9132
priv->uring_ctx = uring_ctx;
9133
9133
···
9140
9140
size_t bytes = min_t(u64, disk_io_size, PAGE_SIZE);
9141
9141
9142
9142
if (bio_add_page(&bbio->bio, pages[i], bytes, 0) < bytes) {
9143
-
atomic_inc(&priv->pending);
9143
+
refcount_inc(&priv->pending_refs);
9144
9144
btrfs_submit_bbio(bbio, 0);
9145
9145
9146
9146
bbio = btrfs_bio_alloc(BIO_MAX_VECS, REQ_OP_READ, fs_info,
···
9155
9155
disk_io_size -= bytes;
9156
9156
} while (disk_io_size);
9157
9157
9158
-
atomic_inc(&priv->pending);
9158
+
refcount_inc(&priv->pending_refs);
9159
9159
btrfs_submit_bbio(bbio, 0);
9160
9160
9161
9161
if (uring_ctx) {
9162
-
if (atomic_dec_return(&priv->pending) == 0) {
9162
+
if (refcount_dec_and_test(&priv->pending_refs)) {
9163
9163
ret = blk_status_to_errno(READ_ONCE(priv->status));
9164
9164
btrfs_uring_read_extent_endio(uring_ctx, ret);
9165
9165
kfree(priv);
···
9168
9168
9169
9169
return -EIOCBQUEUED;
9170
9170
} else {
9171
-
if (atomic_dec_return(&priv->pending) != 0)
9172
-
io_wait_event(priv->wait, !atomic_read(&priv->pending));
9171
+
if (!refcount_dec_and_test(&priv->pending_refs))
9172
+
wait_for_completion_io(&priv->done);
9173
9173
/* See btrfs_encoded_read_endio() for ordering. */
9174
9174
ret = blk_status_to_errno(READ_ONCE(priv->status));
9175
9175
kfree(priv);
···
9799
9799
struct btrfs_fs_info *fs_info = root->fs_info;
9800
9800
struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
9801
9801
struct extent_state *cached_state = NULL;
9802
-
struct extent_map *em = NULL;
9803
9802
struct btrfs_chunk_map *map = NULL;
9804
9803
struct btrfs_device *device = NULL;
9805
9804
struct btrfs_swap_info bsi = {
9806
9805
.lowest_ppage = (sector_t)-1ULL,
9807
9806
};
9807
+
struct btrfs_backref_share_check_ctx *backref_ctx = NULL;
9808
+
struct btrfs_path *path = NULL;
9808
9809
int ret = 0;
9809
9810
u64 isize;
9810
-
u64 start;
9811
+
u64 prev_extent_end = 0;
9812
+
9813
+
/*
9814
+
* Acquire the inode's mmap lock to prevent races with memory mapped
9815
+
* writes, as they could happen after we flush delalloc below and before
9816
+
* we lock the extent range further below. The inode was already locked
9817
+
* up in the call chain.
9818
+
*/
9819
+
btrfs_assert_inode_locked(BTRFS_I(inode));
9820
+
down_write(&BTRFS_I(inode)->i_mmap_lock);
9811
9821
9812
9822
/*
9813
9823
* If the swap file was just created, make sure delalloc is done. If the
···
9826
9816
*/
9827
9817
ret = btrfs_wait_ordered_range(BTRFS_I(inode), 0, (u64)-1);
9828
9818
if (ret)
9829
-
return ret;
9819
+
goto out_unlock_mmap;
9830
9820
9831
9821
/*
9832
9822
* The inode is locked, so these flags won't change after we check them.
9833
9823
*/
9834
9824
if (BTRFS_I(inode)->flags & BTRFS_INODE_COMPRESS) {
9835
9825
btrfs_warn(fs_info, "swapfile must not be compressed");
9836
-
return -EINVAL;
9826
+
ret = -EINVAL;
9827
+
goto out_unlock_mmap;
9837
9828
}
9838
9829
if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW)) {
9839
9830
btrfs_warn(fs_info, "swapfile must not be copy-on-write");
9840
-
return -EINVAL;
9831
+
ret = -EINVAL;
9832
+
goto out_unlock_mmap;
9841
9833
}
9842
9834
if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) {
9843
9835
btrfs_warn(fs_info, "swapfile must not be checksummed");
9844
-
return -EINVAL;
9836
+
ret = -EINVAL;
9837
+
goto out_unlock_mmap;
9838
+
}
9839
+
9840
+
path = btrfs_alloc_path();
9841
+
backref_ctx = btrfs_alloc_backref_share_check_ctx();
9842
+
if (!path || !backref_ctx) {
9843
+
ret = -ENOMEM;
9844
+
goto out_unlock_mmap;
9845
9845
}
9846
9846
9847
9847
/*
···
9866
9846
if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_SWAP_ACTIVATE)) {
9867
9847
btrfs_warn(fs_info,
9868
9848
"cannot activate swapfile while exclusive operation is running");
9869
-
return -EBUSY;
9849
+
ret = -EBUSY;
9850
+
goto out_unlock_mmap;
9870
9851
}
9871
9852
9872
9853
/*
···
9881
9860
btrfs_exclop_finish(fs_info);
9882
9861
btrfs_warn(fs_info,
9883
9862
"cannot activate swapfile because snapshot creation is in progress");
9884
-
return -EINVAL;
9863
+
ret = -EINVAL;
9864
+
goto out_unlock_mmap;
9885
9865
}
9886
9866
/*
9887
9867
* Snapshots can create extents which require COW even if NODATACOW is
···
9903
9881
btrfs_warn(fs_info,
9904
9882
"cannot activate swapfile because subvolume %llu is being deleted",
9905
9883
btrfs_root_id(root));
9906
-
return -EPERM;
9884
+
ret = -EPERM;
9885
+
goto out_unlock_mmap;
9907
9886
}
9908
9887
atomic_inc(&root->nr_swapfiles);
9909
9888
spin_unlock(&root->root_item_lock);
···
9912
9889
isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize);
9913
9890
9914
9891
lock_extent(io_tree, 0, isize - 1, &cached_state);
9915
-
start = 0;
9916
-
while (start < isize) {
9917
-
u64 logical_block_start, physical_block_start;
9892
+
while (prev_extent_end < isize) {
9893
+
struct btrfs_key key;
9894
+
struct extent_buffer *leaf;
9895
+
struct btrfs_file_extent_item *ei;
9918
9896
struct btrfs_block_group *bg;
9919
-
u64 len = isize - start;
9897
+
u64 logical_block_start;
9898
+
u64 physical_block_start;
9899
+
u64 extent_gen;
9900
+
u64 disk_bytenr;
9901
+
u64 len;
9920
9902
9921
-
em = btrfs_get_extent(BTRFS_I(inode), NULL, start, len);
9922
-
if (IS_ERR(em)) {
9923
-
ret = PTR_ERR(em);
9903
+
key.objectid = btrfs_ino(BTRFS_I(inode));
9904
+
key.type = BTRFS_EXTENT_DATA_KEY;
9905
+
key.offset = prev_extent_end;
9906
+
9907
+
ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
9908
+
if (ret < 0)
9924
9909
goto out;
9925
-
}
9926
9910
9927
-
if (em->disk_bytenr == EXTENT_MAP_HOLE) {
9911
+
/*
9912
+
* If key not found it means we have an implicit hole (NO_HOLES
9913
+
* is enabled).
9914
+
*/
9915
+
if (ret > 0) {
9928
9916
btrfs_warn(fs_info, "swapfile must not have holes");
9929
9917
ret = -EINVAL;
9930
9918
goto out;
9931
9919
}
9932
-
if (em->disk_bytenr == EXTENT_MAP_INLINE) {
9920
+
9921
+
leaf = path->nodes[0];
9922
+
ei = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_file_extent_item);
9923
+
9924
+
if (btrfs_file_extent_type(leaf, ei) == BTRFS_FILE_EXTENT_INLINE) {
9933
9925
/*
9934
9926
* It's unlikely we'll ever actually find ourselves
9935
9927
* here, as a file small enough to fit inline won't be
···
9956
9918
ret = -EINVAL;
9957
9919
goto out;
9958
9920
}
9959
-
if (extent_map_is_compressed(em)) {
9921
+
9922
+
if (btrfs_file_extent_compression(leaf, ei) != BTRFS_COMPRESS_NONE) {
9960
9923
btrfs_warn(fs_info, "swapfile must not be compressed");
9961
9924
ret = -EINVAL;
9962
9925
goto out;
9963
9926
}
9964
9927
9965
-
logical_block_start = extent_map_block_start(em) + (start - em->start);
9966
-
len = min(len, em->len - (start - em->start));
9967
-
free_extent_map(em);
9968
-
em = NULL;
9928
+
disk_bytenr = btrfs_file_extent_disk_bytenr(leaf, ei);
9929
+
if (disk_bytenr == 0) {
9930
+
btrfs_warn(fs_info, "swapfile must not have holes");
9931
+
ret = -EINVAL;
9932
+
goto out;
9933
+
}
9969
9934
9970
-
ret = can_nocow_extent(inode, start, &len, NULL, false, true);
9935
+
logical_block_start = disk_bytenr + btrfs_file_extent_offset(leaf, ei);
9936
+
extent_gen = btrfs_file_extent_generation(leaf, ei);
9937
+
prev_extent_end = btrfs_file_extent_end(path);
9938
+
9939
+
if (prev_extent_end > isize)
9940
+
len = isize - key.offset;
9941
+
else
9942
+
len = btrfs_file_extent_num_bytes(leaf, ei);
9943
+
9944
+
backref_ctx->curr_leaf_bytenr = leaf->start;
9945
+
9946
+
/*
9947
+
* Don't need the path anymore, release to avoid deadlocks when
9948
+
* calling btrfs_is_data_extent_shared() because when joining a
9949
+
* transaction it can block waiting for the current one's commit
9950
+
* which in turn may be trying to lock the same leaf to flush
9951
+
* delayed items for example.
9952
+
*/
9953
+
btrfs_release_path(path);
9954
+
9955
+
ret = btrfs_is_data_extent_shared(BTRFS_I(inode), disk_bytenr,
9956
+
extent_gen, backref_ctx);
9971
9957
if (ret < 0) {
9972
9958
goto out;
9973
-
} else if (ret) {
9974
-
ret = 0;
9975
-
} else {
9959
+
} else if (ret > 0) {
9976
9960
btrfs_warn(fs_info,
9977
9961
"swapfile must not be copy-on-write");
9978
9962
ret = -EINVAL;
···
10029
9969
10030
9970
physical_block_start = (map->stripes[0].physical +
10031
9971
(logical_block_start - map->start));
10032
-
len = min(len, map->chunk_len - (logical_block_start - map->start));
10033
9972
btrfs_free_chunk_map(map);
10034
9973
map = NULL;
10035
9974
···
10069
10010
if (ret)
10070
10011
goto out;
10071
10012
}
10072
-
bsi.start = start;
10013
+
bsi.start = key.offset;
10073
10014
bsi.block_start = physical_block_start;
10074
10015
bsi.block_len = len;
10075
10016
}
10076
10017
10077
-
start += len;
10018
+
if (fatal_signal_pending(current)) {
10019
+
ret = -EINTR;
10020
+
goto out;
10021
+
}
10022
+
10023
+
cond_resched();
10078
10024
}
10079
10025
10080
10026
if (bsi.block_len)
10081
10027
ret = btrfs_add_swap_extent(sis, &bsi);
10082
10028
10083
10029
out:
10084
-
if (!IS_ERR_OR_NULL(em))
10085
-
free_extent_map(em);
10086
10030
if (!IS_ERR_OR_NULL(map))
10087
10031
btrfs_free_chunk_map(map);
10088
10032
···
10098
10036
10099
10037
btrfs_exclop_finish(fs_info);
10100
10038
10039
+
out_unlock_mmap:
10040
+
up_write(&BTRFS_I(inode)->i_mmap_lock);
10041
+
btrfs_free_backref_share_ctx(backref_ctx);
10042
+
btrfs_free_path(path);
10101
10043
if (ret)
10102
10044
return ret;
10103
10045
+1
-2
fs/btrfs/qgroup.c
···
1121
1121
fs_info->qgroup_flags = BTRFS_QGROUP_STATUS_FLAG_ON;
1122
1122
if (simple) {
1123
1123
fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_SIMPLE_MODE;
1124
+
btrfs_set_fs_incompat(fs_info, SIMPLE_QUOTA);
1124
1125
btrfs_set_qgroup_status_enable_gen(leaf, ptr, trans->transid);
1125
1126
} else {
1126
1127
fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
···
1255
1254
spin_lock(&fs_info->qgroup_lock);
1256
1255
fs_info->quota_root = quota_root;
1257
1256
set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
1258
-
if (simple)
1259
-
btrfs_set_fs_incompat(fs_info, SIMPLE_QUOTA);
1260
1257
spin_unlock(&fs_info->qgroup_lock);
1261
1258
1262
1259
/* Skip rescan for simple qgroups. */
+6
fs/btrfs/relocation.c
···
2902
2902
const bool use_rst = btrfs_need_stripe_tree_update(fs_info, rc->block_group->flags);
2903
2903
2904
2904
ASSERT(index <= last_index);
2905
+
again:
2905
2906
folio = filemap_lock_folio(inode->i_mapping, index);
2906
2907
if (IS_ERR(folio)) {
2907
2908
···
2937
2936
if (!folio_test_uptodate(folio)) {
2938
2937
ret = -EIO;
2939
2938
goto release_folio;
2939
+
}
2940
+
if (folio->mapping != inode->i_mapping) {
2941
+
folio_unlock(folio);
2942
+
folio_put(folio);
2943
+
goto again;
2940
2944
}
2941
2945
}
2942
2946
+6
fs/btrfs/send.c
···
5280
5280
unsigned cur_len = min_t(unsigned, len,
5281
5281
PAGE_SIZE - pg_offset);
5282
5282
5283
+
again:
5283
5284
folio = filemap_lock_folio(mapping, index);
5284
5285
if (IS_ERR(folio)) {
5285
5286
page_cache_sync_readahead(mapping,
···
5312
5311
folio_put(folio);
5313
5312
ret = -EIO;
5314
5313
break;
5314
+
}
5315
+
if (folio->mapping != mapping) {
5316
+
folio_unlock(folio);
5317
+
folio_put(folio);
5318
+
goto again;
5315
5319
}
5316
5320
}
5317
5321
+3
-3
fs/btrfs/sysfs.c
···
1118
1118
{
1119
1119
struct btrfs_fs_info *fs_info = to_fs_info(kobj);
1120
1120
1121
-
return sysfs_emit(buf, "%u\n", fs_info->super_copy->nodesize);
1121
+
return sysfs_emit(buf, "%u\n", fs_info->nodesize);
1122
1122
}
1123
1123
1124
1124
BTRFS_ATTR(, nodesize, btrfs_nodesize_show);
···
1128
1128
{
1129
1129
struct btrfs_fs_info *fs_info = to_fs_info(kobj);
1130
1130
1131
-
return sysfs_emit(buf, "%u\n", fs_info->super_copy->sectorsize);
1131
+
return sysfs_emit(buf, "%u\n", fs_info->sectorsize);
1132
1132
}
1133
1133
1134
1134
BTRFS_ATTR(, sectorsize, btrfs_sectorsize_show);
···
1180
1180
{
1181
1181
struct btrfs_fs_info *fs_info = to_fs_info(kobj);
1182
1182
1183
-
return sysfs_emit(buf, "%u\n", fs_info->super_copy->sectorsize);
1183
+
return sysfs_emit(buf, "%u\n", fs_info->sectorsize);
1184
1184
}
1185
1185
1186
1186
BTRFS_ATTR(, clone_alignment, btrfs_clone_alignment_show);
+1
-1
fs/ocfs2/quota_global.c
+1
fs/ocfs2/quota_local.c
+1
-1
fs/proc/task_mmu.c
-2
fs/smb/client/cifsproto.h
···
614
614
void cifs_free_hash(struct shash_desc **sdesc);
615
615
616
616
int cifs_try_adding_channels(struct cifs_ses *ses);
617
-
bool is_server_using_iface(struct TCP_Server_Info *server,
618
-
struct cifs_server_iface *iface);
619
617
bool is_ses_using_iface(struct cifs_ses *ses, struct cifs_server_iface *iface);
620
618
void cifs_ses_mark_for_reconnect(struct cifs_ses *ses);
621
619
+5
-1
fs/smb/client/file.c
···
990
990
}
991
991
992
992
/* Get the cached handle as SMB2 close is deferred */
993
-
rc = cifs_get_readable_path(tcon, full_path, &cfile);
993
+
if (OPEN_FMODE(file->f_flags) & FMODE_WRITE) {
994
+
rc = cifs_get_writable_path(tcon, full_path, FIND_WR_FSUID_ONLY, &cfile);
995
+
} else {
996
+
rc = cifs_get_readable_path(tcon, full_path, &cfile);
997
+
}
994
998
if (rc == 0) {
995
999
if (file->f_flags == cfile->f_flags) {
996
1000
file->private_data = cfile;
-25
fs/smb/client/sess.c
···
27
27
cifs_ses_add_channel(struct cifs_ses *ses,
28
28
struct cifs_server_iface *iface);
29
29
30
-
bool
31
-
is_server_using_iface(struct TCP_Server_Info *server,
32
-
struct cifs_server_iface *iface)
33
-
{
34
-
struct sockaddr_in *i4 = (struct sockaddr_in *)&iface->sockaddr;
35
-
struct sockaddr_in6 *i6 = (struct sockaddr_in6 *)&iface->sockaddr;
36
-
struct sockaddr_in *s4 = (struct sockaddr_in *)&server->dstaddr;
37
-
struct sockaddr_in6 *s6 = (struct sockaddr_in6 *)&server->dstaddr;
38
-
39
-
if (server->dstaddr.ss_family != iface->sockaddr.ss_family)
40
-
return false;
41
-
if (server->dstaddr.ss_family == AF_INET) {
42
-
if (s4->sin_addr.s_addr != i4->sin_addr.s_addr)
43
-
return false;
44
-
} else if (server->dstaddr.ss_family == AF_INET6) {
45
-
if (memcmp(&s6->sin6_addr, &i6->sin6_addr,
46
-
sizeof(i6->sin6_addr)) != 0)
47
-
return false;
48
-
} else {
49
-
/* unknown family.. */
50
-
return false;
51
-
}
52
-
return true;
53
-
}
54
-
55
30
bool is_ses_using_iface(struct cifs_ses *ses, struct cifs_server_iface *iface)
56
31
{
57
32
int i;
+10
-3
include/linux/dmaengine.h
···
84
84
DMA_TRANS_NONE,
85
85
};
86
86
87
-
/**
87
+
/*
88
88
* Interleaved Transfer Request
89
89
* ----------------------------
90
90
* A chunk is collection of contiguous bytes to be transferred.
···
223
223
};
224
224
225
225
/**
226
-
* enum pq_check_flags - result of async_{xor,pq}_zero_sum operations
226
+
* enum sum_check_flags - result of async_{xor,pq}_zero_sum operations
227
227
* @SUM_CHECK_P_RESULT - 1 if xor zero sum error, 0 otherwise
228
228
* @SUM_CHECK_Q_RESULT - 1 if reed-solomon zero sum error, 0 otherwise
229
229
*/
···
286
286
* pointer to the engine's metadata area
287
287
* 4. Read out the metadata from the pointer
288
288
*
289
-
* Note: the two mode is not compatible and clients must use one mode for a
289
+
* Warning: the two modes are not compatible and clients must use one mode for a
290
290
* descriptor.
291
291
*/
292
292
enum dma_desc_metadata_mode {
···
594
594
* @phys: physical address of the descriptor
595
595
* @chan: target channel for this operation
596
596
* @tx_submit: accept the descriptor, assign ordered cookie and mark the
597
+
* @desc_free: driver's callback function to free a reusable descriptor
598
+
* after completion
597
599
* descriptor pending. To be pushed on .issue_pending() call
598
600
* @callback: routine to call after this operation is complete
601
+
* @callback_result: error result from a DMA transaction
599
602
* @callback_param: general parameter to pass to the callback routine
603
+
* @unmap: hook for generic DMA unmap data
600
604
* @desc_metadata_mode: core managed metadata mode to protect mixed use of
601
605
* DESC_METADATA_CLIENT or DESC_METADATA_ENGINE. Otherwise
602
606
* DESC_METADATA_NONE
···
831
827
* @device_prep_dma_memset: prepares a memset operation
832
828
* @device_prep_dma_memset_sg: prepares a memset operation over a scatter list
833
829
* @device_prep_dma_interrupt: prepares an end of chain interrupt operation
830
+
* @device_prep_peripheral_dma_vec: prepares a scatter-gather DMA transfer,
831
+
* where the address and size of each segment is located in one entry of
832
+
* the dma_vec array.
834
833
* @device_prep_slave_sg: prepares a slave dma operation
835
834
* @device_prep_dma_cyclic: prepare a cyclic dma operation suitable for audio.
836
835
* The function takes a buffer of size buf_len. The callback function will
+13
-3
include/linux/if_vlan.h
···
585
585
* vlan_get_protocol - get protocol EtherType.
586
586
* @skb: skbuff to query
587
587
* @type: first vlan protocol
588
+
* @mac_offset: MAC offset
588
589
* @depth: buffer to store length of eth and vlan tags in bytes
589
590
*
590
591
* Returns the EtherType of the packet, regardless of whether it is
591
592
* vlan encapsulated (normal or hardware accelerated) or not.
592
593
*/
593
-
static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
594
-
int *depth)
594
+
static inline __be16 __vlan_get_protocol_offset(const struct sk_buff *skb,
595
+
__be16 type,
596
+
int mac_offset,
597
+
int *depth)
595
598
{
596
599
unsigned int vlan_depth = skb->mac_len, parse_depth = VLAN_MAX_DEPTH;
597
600
···
613
610
do {
614
611
struct vlan_hdr vhdr, *vh;
615
612
616
-
vh = skb_header_pointer(skb, vlan_depth, sizeof(vhdr), &vhdr);
613
+
vh = skb_header_pointer(skb, mac_offset + vlan_depth,
614
+
sizeof(vhdr), &vhdr);
617
615
if (unlikely(!vh || !--parse_depth))
618
616
return 0;
619
617
···
627
623
*depth = vlan_depth;
628
624
629
625
return type;
626
+
}
627
+
628
+
static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
629
+
int *depth)
630
+
{
631
+
return __vlan_get_protocol_offset(skb, type, 0, depth);
630
632
}
631
633
632
634
/**
+14
include/linux/memfd.h
···
7
7
#ifdef CONFIG_MEMFD_CREATE
8
8
extern long memfd_fcntl(struct file *file, unsigned int cmd, unsigned int arg);
9
9
struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx);
10
+
unsigned int *memfd_file_seals_ptr(struct file *file);
10
11
#else
11
12
static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned int a)
12
13
{
···
17
16
{
18
17
return ERR_PTR(-EINVAL);
19
18
}
19
+
20
+
static inline unsigned int *memfd_file_seals_ptr(struct file *file)
21
+
{
22
+
return NULL;
23
+
}
20
24
#endif
25
+
26
+
/* Retrieve memfd seals associated with the file, if any. */
27
+
static inline unsigned int memfd_file_seals(struct file *file)
28
+
{
29
+
unsigned int *sealsp = memfd_file_seals_ptr(file);
30
+
31
+
return sealsp ? *sealsp : 0;
32
+
}
21
33
22
34
#endif /* __LINUX_MEMFD_H */
+7
include/linux/mlx5/driver.h
···
524
524
* creation/deletion on drivers rescan. Unset during device attach.
525
525
*/
526
526
MLX5_PRIV_FLAGS_DETACH = 1 << 2,
527
+
MLX5_PRIV_FLAGS_SWITCH_LEGACY = 1 << 3,
527
528
};
528
529
529
530
struct mlx5_adev {
···
1201
1200
static inline bool mlx5_core_is_vf(const struct mlx5_core_dev *dev)
1202
1201
{
1203
1202
return dev->coredev_type == MLX5_COREDEV_VF;
1203
+
}
1204
+
1205
+
static inline bool mlx5_core_same_coredev_type(const struct mlx5_core_dev *dev1,
1206
+
const struct mlx5_core_dev *dev2)
1207
+
{
1208
+
return dev1->coredev_type == dev2->coredev_type;
1204
1209
}
1205
1210
1206
1211
static inline bool mlx5_core_is_ecpf(const struct mlx5_core_dev *dev)
+3
-1
include/linux/mlx5/mlx5_ifc.h
···
2119
2119
u8 migration_in_chunks[0x1];
2120
2120
u8 reserved_at_d1[0x1];
2121
2121
u8 sf_eq_usage[0x1];
2122
-
u8 reserved_at_d3[0xd];
2122
+
u8 reserved_at_d3[0x5];
2123
+
u8 multiplane[0x1];
2124
+
u8 reserved_at_d9[0x7];
2123
2125
2124
2126
u8 cross_vhca_object_to_object_supported[0x20];
2125
2127
+40
-17
include/linux/mm.h
···
3125
3125
if (!pmd_ptlock_init(ptdesc))
3126
3126
return false;
3127
3127
__folio_set_pgtable(folio);
3128
+
ptdesc_pmd_pts_init(ptdesc);
3128
3129
lruvec_stat_add_folio(folio, NR_PAGETABLE);
3129
3130
return true;
3130
3131
}
···
4102
4101
static inline void mem_dump_obj(void *object) {}
4103
4102
#endif
4104
4103
4104
+
static inline bool is_write_sealed(int seals)
4105
+
{
4106
+
return seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE);
4107
+
}
4108
+
4109
+
/**
4110
+
* is_readonly_sealed - Checks whether write-sealed but mapped read-only,
4111
+
* in which case writes should be disallowing moving
4112
+
* forwards.
4113
+
* @seals: the seals to check
4114
+
* @vm_flags: the VMA flags to check
4115
+
*
4116
+
* Returns whether readonly sealed, in which case writes should be disallowed
4117
+
* going forward.
4118
+
*/
4119
+
static inline bool is_readonly_sealed(int seals, vm_flags_t vm_flags)
4120
+
{
4121
+
/*
4122
+
* Since an F_SEAL_[FUTURE_]WRITE sealed memfd can be mapped as
4123
+
* MAP_SHARED and read-only, take care to not allow mprotect to
4124
+
* revert protections on such mappings. Do this only for shared
4125
+
* mappings. For private mappings, don't need to mask
4126
+
* VM_MAYWRITE as we still want them to be COW-writable.
4127
+
*/
4128
+
if (is_write_sealed(seals) &&
4129
+
((vm_flags & (VM_SHARED | VM_WRITE)) == VM_SHARED))
4130
+
return true;
4131
+
4132
+
return false;
4133
+
}
4134
+
4105
4135
/**
4106
4136
* seal_check_write - Check for F_SEAL_WRITE or F_SEAL_FUTURE_WRITE flags and
4107
4137
* handle them.
···
4144
4112
*/
4145
4113
static inline int seal_check_write(int seals, struct vm_area_struct *vma)
4146
4114
{
4147
-
if (seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {
4148
-
/*
4149
-
* New PROT_WRITE and MAP_SHARED mmaps are not allowed when
4150
-
* write seals are active.
4151
-
*/
4152
-
if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
4153
-
return -EPERM;
4115
+
if (!is_write_sealed(seals))
4116
+
return 0;
4154
4117
4155
-
/*
4156
-
* Since an F_SEAL_[FUTURE_]WRITE sealed memfd can be mapped as
4157
-
* MAP_SHARED and read-only, take care to not allow mprotect to
4158
-
* revert protections on such mappings. Do this only for shared
4159
-
* mappings. For private mappings, don't need to mask
4160
-
* VM_MAYWRITE as we still want them to be COW-writable.
4161
-
*/
4162
-
if (vma->vm_flags & VM_SHARED)
4163
-
vm_flags_clear(vma, VM_MAYWRITE);
4164
-
}
4118
+
/*
4119
+
* New PROT_WRITE and MAP_SHARED mmaps are not allowed when
4120
+
* write seals are active.
4121
+
*/
4122
+
if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
4123
+
return -EPERM;
4165
4124
4166
4125
return 0;
4167
4126
}
+30
include/linux/mm_types.h
···
445
445
* @pt_index: Used for s390 gmap.
446
446
* @pt_mm: Used for x86 pgds.
447
447
* @pt_frag_refcount: For fragmented page table tracking. Powerpc only.
448
+
* @pt_share_count: Used for HugeTLB PMD page table share count.
448
449
* @_pt_pad_2: Padding to ensure proper alignment.
449
450
* @ptl: Lock for the page table.
450
451
* @__page_type: Same as page->page_type. Unused for page tables.
···
472
471
pgoff_t pt_index;
473
472
struct mm_struct *pt_mm;
474
473
atomic_t pt_frag_refcount;
474
+
#ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING
475
+
atomic_t pt_share_count;
476
+
#endif
475
477
};
476
478
477
479
union {
···
519
515
#define page_ptdesc(p) (_Generic((p), \
520
516
const struct page *: (const struct ptdesc *)(p), \
521
517
struct page *: (struct ptdesc *)(p)))
518
+
519
+
#ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING
520
+
static inline void ptdesc_pmd_pts_init(struct ptdesc *ptdesc)
521
+
{
522
+
atomic_set(&ptdesc->pt_share_count, 0);
523
+
}
524
+
525
+
static inline void ptdesc_pmd_pts_inc(struct ptdesc *ptdesc)
526
+
{
527
+
atomic_inc(&ptdesc->pt_share_count);
528
+
}
529
+
530
+
static inline void ptdesc_pmd_pts_dec(struct ptdesc *ptdesc)
531
+
{
532
+
atomic_dec(&ptdesc->pt_share_count);
533
+
}
534
+
535
+
static inline int ptdesc_pmd_pts_count(struct ptdesc *ptdesc)
536
+
{
537
+
return atomic_read(&ptdesc->pt_share_count);
538
+
}
539
+
#else
540
+
static inline void ptdesc_pmd_pts_init(struct ptdesc *ptdesc)
541
+
{
542
+
}
543
+
#endif
522
544
523
545
/*
524
546
* Used for sizing the vmemmap region on some architectures
+1
-4
include/linux/percpu-defs.h
···
221
221
} while (0)
222
222
223
223
#define PERCPU_PTR(__p) \
224
-
({ \
225
-
unsigned long __pcpu_ptr = (__force unsigned long)(__p); \
226
-
(typeof(*(__p)) __force __kernel *)(__pcpu_ptr); \
227
-
})
224
+
(typeof(*(__p)) __force __kernel *)((__force unsigned long)(__p))
228
225
229
226
#ifdef CONFIG_SMP
230
227
+2
include/linux/platform_data/amd_qdma.h
···
26
26
* @max_mm_channels: Maximum number of MM DMA channels in each direction
27
27
* @device_map: DMA slave map
28
28
* @irq_index: The index of first IRQ
29
+
* @dma_dev: The device pointer for dma operations
29
30
*/
30
31
struct qdma_platdata {
31
32
u32 max_mm_channels;
32
33
u32 irq_index;
33
34
struct dma_slave_map *device_map;
35
+
struct device *dma_dev;
34
36
};
35
37
36
38
#endif /* _PLATDATA_AMD_QDMA_H */
+2
-1
include/linux/sched.h
···
1637
1637
* We're lying here, but rather than expose a completely new task state
1638
1638
* to userspace, we can make this appear as if the task has gone through
1639
1639
* a regular rt_mutex_lock() call.
1640
+
* Report frozen tasks as uninterruptible.
1640
1641
*/
1641
-
if (tsk_state & TASK_RTLOCK_WAIT)
1642
+
if ((tsk_state & TASK_RTLOCK_WAIT) || (tsk_state & TASK_FROZEN))
1642
1643
state = TASK_UNINTERRUPTIBLE;
1643
1644
1644
1645
return fls(state);
+1
-1
include/linux/trace_events.h
+3
-3
include/linux/vermagic.h
···
15
15
#else
16
16
#define MODULE_VERMAGIC_SMP ""
17
17
#endif
18
-
#ifdef CONFIG_PREEMPT_BUILD
19
-
#define MODULE_VERMAGIC_PREEMPT "preempt "
20
-
#elif defined(CONFIG_PREEMPT_RT)
18
+
#ifdef CONFIG_PREEMPT_RT
21
19
#define MODULE_VERMAGIC_PREEMPT "preempt_rt "
20
+
#elif defined(CONFIG_PREEMPT_BUILD)
21
+
#define MODULE_VERMAGIC_PREEMPT "preempt "
22
22
#else
23
23
#define MODULE_VERMAGIC_PREEMPT ""
24
24
#endif
+5
-2
include/net/netfilter/nf_tables.h
···
733
733
/**
734
734
* struct nft_set_ext - set extensions
735
735
*
736
-
* @genmask: generation mask
736
+
* @genmask: generation mask, but also flags (see NFT_SET_ELEM_DEAD_BIT)
737
737
* @offset: offsets of individual extension types
738
738
* @data: beginning of extension data
739
+
*
740
+
* This structure must be aligned to word size, otherwise atomic bitops
741
+
* on genmask field can cause alignment failure on some archs.
739
742
*/
740
743
struct nft_set_ext {
741
744
u8 genmask;
742
745
u8 offset[NFT_SET_EXT_NUM];
743
746
char data[];
744
-
};
747
+
} __aligned(BITS_PER_LONG / 8);
745
748
746
749
static inline void nft_set_ext_prepare(struct nft_set_ext_tmpl *tmpl)
747
750
{
+26
-24
include/uapi/linux/mptcp_pm.h
···
12
12
/**
13
13
* enum mptcp_event_type
14
14
* @MPTCP_EVENT_UNSPEC: unused event
15
-
* @MPTCP_EVENT_CREATED: token, family, saddr4 | saddr6, daddr4 | daddr6,
16
-
* sport, dport A new MPTCP connection has been created. It is the good time
17
-
* to allocate memory and send ADD_ADDR if needed. Depending on the
15
+
* @MPTCP_EVENT_CREATED: A new MPTCP connection has been created. It is the
16
+
* good time to allocate memory and send ADD_ADDR if needed. Depending on the
18
17
* traffic-patterns it can take a long time until the MPTCP_EVENT_ESTABLISHED
19
-
* is sent.
20
-
* @MPTCP_EVENT_ESTABLISHED: token, family, saddr4 | saddr6, daddr4 | daddr6,
21
-
* sport, dport A MPTCP connection is established (can start new subflows).
22
-
* @MPTCP_EVENT_CLOSED: token A MPTCP connection has stopped.
23
-
* @MPTCP_EVENT_ANNOUNCED: token, rem_id, family, daddr4 | daddr6 [, dport] A
24
-
* new address has been announced by the peer.
25
-
* @MPTCP_EVENT_REMOVED: token, rem_id An address has been lost by the peer.
26
-
* @MPTCP_EVENT_SUB_ESTABLISHED: token, family, loc_id, rem_id, saddr4 |
27
-
* saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error] A new
28
-
* subflow has been established. 'error' should not be set.
29
-
* @MPTCP_EVENT_SUB_CLOSED: token, family, loc_id, rem_id, saddr4 | saddr6,
30
-
* daddr4 | daddr6, sport, dport, backup, if_idx [, error] A subflow has been
31
-
* closed. An error (copy of sk_err) could be set if an error has been
32
-
* detected for this subflow.
33
-
* @MPTCP_EVENT_SUB_PRIORITY: token, family, loc_id, rem_id, saddr4 | saddr6,
34
-
* daddr4 | daddr6, sport, dport, backup, if_idx [, error] The priority of a
35
-
* subflow has changed. 'error' should not be set.
36
-
* @MPTCP_EVENT_LISTENER_CREATED: family, sport, saddr4 | saddr6 A new PM
37
-
* listener is created.
38
-
* @MPTCP_EVENT_LISTENER_CLOSED: family, sport, saddr4 | saddr6 A PM listener
39
-
* is closed.
18
+
* is sent. Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6,
19
+
* sport, dport, server-side.
20
+
* @MPTCP_EVENT_ESTABLISHED: A MPTCP connection is established (can start new
21
+
* subflows). Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6,
22
+
* sport, dport, server-side.
23
+
* @MPTCP_EVENT_CLOSED: A MPTCP connection has stopped. Attribute: token.
24
+
* @MPTCP_EVENT_ANNOUNCED: A new address has been announced by the peer.
25
+
* Attributes: token, rem_id, family, daddr4 | daddr6 [, dport].
26
+
* @MPTCP_EVENT_REMOVED: An address has been lost by the peer. Attributes:
27
+
* token, rem_id.
28
+
* @MPTCP_EVENT_SUB_ESTABLISHED: A new subflow has been established. 'error'
29
+
* should not be set. Attributes: token, family, loc_id, rem_id, saddr4 |
30
+
* saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error].
31
+
* @MPTCP_EVENT_SUB_CLOSED: A subflow has been closed. An error (copy of
32
+
* sk_err) could be set if an error has been detected for this subflow.
33
+
* Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
34
+
* daddr6, sport, dport, backup, if_idx [, error].
35
+
* @MPTCP_EVENT_SUB_PRIORITY: The priority of a subflow has changed. 'error'
36
+
* should not be set. Attributes: token, family, loc_id, rem_id, saddr4 |
37
+
* saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error].
38
+
* @MPTCP_EVENT_LISTENER_CREATED: A new PM listener is created. Attributes:
39
+
* family, sport, saddr4 | saddr6.
40
+
* @MPTCP_EVENT_LISTENER_CLOSED: A PM listener is closed. Attributes: family,
41
+
* sport, saddr4 | saddr6.
40
42
*/
41
43
enum mptcp_event_type {
42
44
MPTCP_EVENT_UNSPEC,
+10
-3
include/uapi/linux/stddef.h
+10
-3
include/uapi/linux/stddef.h
···
8
8
#define __always_inline inline
9
9
#endif
10
10
11
+
/* Not all C++ standards support type declarations inside an anonymous union */
12
+
#ifndef __cplusplus
13
+
#define __struct_group_tag(TAG) TAG
14
+
#else
15
+
#define __struct_group_tag(TAG)
16
+
#endif
17
+
11
18
/**
12
19
* __struct_group() - Create a mirrored named and anonyomous struct
13
20
*
···
27
20
* and size: one anonymous and one named. The former's members can be used
28
21
* normally without sub-struct naming, and the latter can be used to
29
22
* reason about the start, end, and size of the group of struct members.
30
-
* The named struct can also be explicitly tagged for layer reuse, as well
31
-
* as both having struct attributes appended.
23
+
* The named struct can also be explicitly tagged for layer reuse (C only),
24
+
* as well as both having struct attributes appended.
32
25
*/
33
26
#define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \
34
27
union { \
35
28
struct { MEMBERS } ATTRS; \
36
-
struct TAG { MEMBERS } ATTRS NAME; \
29
+
struct __struct_group_tag(TAG) { MEMBERS } ATTRS NAME; \
37
30
} ATTRS
38
31
39
32
#ifdef __cplusplus
+3
-1
io_uring/kbuf.c
···
139
139
struct io_uring_buf_ring *br = bl->buf_ring;
140
140
__u16 tail, head = bl->head;
141
141
struct io_uring_buf *buf;
142
+
void __user *ret;
142
143
143
144
tail = smp_load_acquire(&br->tail);
144
145
if (unlikely(tail == head))
···
154
153
req->flags |= REQ_F_BUFFER_RING | REQ_F_BUFFERS_COMMIT;
155
154
req->buf_list = bl;
156
155
req->buf_index = buf->bid;
156
+
ret = u64_to_user_ptr(buf->addr);
157
157
158
158
if (issue_flags & IO_URING_F_UNLOCKED || !io_file_can_poll(req)) {
159
159
/*
···
170
168
io_kbuf_commit(req, bl, *len, 1);
171
169
req->buf_list = NULL;
172
170
}
173
-
return u64_to_user_ptr(buf->addr);
171
+
return ret;
174
172
}
175
173
176
174
void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
+1
io_uring/net.c
+2
io_uring/rw.c
+6
io_uring/sqpoll.c
···
405
405
__cold int io_sq_offload_create(struct io_ring_ctx *ctx,
406
406
struct io_uring_params *p)
407
407
{
408
+
struct task_struct *task_to_put = NULL;
408
409
int ret;
409
410
410
411
/* Retain compatibility with failing for an invalid attach attempt */
···
481
480
}
482
481
483
482
sqd->thread = tsk;
483
+
task_to_put = get_task_struct(tsk);
484
484
ret = io_uring_alloc_task_context(tsk, ctx);
485
485
wake_up_new_task(tsk);
486
486
if (ret)
···
492
490
goto err;
493
491
}
494
492
493
+
if (task_to_put)
494
+
put_task_struct(task_to_put);
495
495
return 0;
496
496
err_sqpoll:
497
497
complete(&ctx->sq_data->exited);
498
498
err:
499
499
io_sq_thread_finish(ctx);
500
+
if (task_to_put)
501
+
put_task_struct(task_to_put);
500
502
return ret;
501
503
}
502
504
+31
-14
io_uring/timeout.c
···
85
85
io_req_task_complete(req, ts);
86
86
}
87
87
88
-
static bool io_kill_timeout(struct io_kiocb *req, int status)
88
+
static __cold bool io_flush_killed_timeouts(struct list_head *list, int err)
89
+
{
90
+
if (list_empty(list))
91
+
return false;
92
+
93
+
while (!list_empty(list)) {
94
+
struct io_timeout *timeout;
95
+
struct io_kiocb *req;
96
+
97
+
timeout = list_first_entry(list, struct io_timeout, list);
98
+
list_del_init(&timeout->list);
99
+
req = cmd_to_io_kiocb(timeout);
100
+
if (err)
101
+
req_set_fail(req);
102
+
io_req_queue_tw_complete(req, err);
103
+
}
104
+
105
+
return true;
106
+
}
107
+
108
+
static void io_kill_timeout(struct io_kiocb *req, struct list_head *list)
89
109
__must_hold(&req->ctx->timeout_lock)
90
110
{
91
111
struct io_timeout_data *io = req->async_data;
···
113
93
if (hrtimer_try_to_cancel(&io->timer) != -1) {
114
94
struct io_timeout *timeout = io_kiocb_to_cmd(req, struct io_timeout);
115
95
116
-
if (status)
117
-
req_set_fail(req);
118
96
atomic_set(&req->ctx->cq_timeouts,
119
97
atomic_read(&req->ctx->cq_timeouts) + 1);
120
-
list_del_init(&timeout->list);
121
-
io_req_queue_tw_complete(req, status);
122
-
return true;
98
+
list_move_tail(&timeout->list, list);
123
99
}
124
-
return false;
125
100
}
126
101
127
102
__cold void io_flush_timeouts(struct io_ring_ctx *ctx)
128
103
{
129
-
u32 seq;
130
104
struct io_timeout *timeout, *tmp;
105
+
LIST_HEAD(list);
106
+
u32 seq;
131
107
132
108
raw_spin_lock_irq(&ctx->timeout_lock);
133
109
seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts);
···
147
131
if (events_got < events_needed)
148
132
break;
149
133
150
-
io_kill_timeout(req, 0);
134
+
io_kill_timeout(req, &list);
151
135
}
152
136
ctx->cq_last_tm_flush = seq;
153
137
raw_spin_unlock_irq(&ctx->timeout_lock);
138
+
io_flush_killed_timeouts(&list, 0);
154
139
}
155
140
156
141
static void io_req_tw_fail_links(struct io_kiocb *link, struct io_tw_state *ts)
···
678
661
bool cancel_all)
679
662
{
680
663
struct io_timeout *timeout, *tmp;
681
-
int canceled = 0;
664
+
LIST_HEAD(list);
682
665
683
666
/*
684
667
* completion_lock is needed for io_match_task(). Take it before
···
689
672
list_for_each_entry_safe(timeout, tmp, &ctx->timeout_list, list) {
690
673
struct io_kiocb *req = cmd_to_io_kiocb(timeout);
691
674
692
-
if (io_match_task(req, tctx, cancel_all) &&
693
-
io_kill_timeout(req, -ECANCELED))
694
-
canceled++;
675
+
if (io_match_task(req, tctx, cancel_all))
676
+
io_kill_timeout(req, &list);
695
677
}
696
678
raw_spin_unlock_irq(&ctx->timeout_lock);
697
679
spin_unlock(&ctx->completion_lock);
698
-
return canceled != 0;
680
+
681
+
return io_flush_killed_timeouts(&list, -ECANCELED);
699
682
}
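Note: the timeout rework above follows a common shape — detach the expired entries onto a local list while the lock is held, then complete them after the lock is dropped. A minimal userspace sketch of that shape, using a pthread mutex and made-up entry/complete names (not io_uring's actual structures):

	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* hypothetical entry type standing in for struct io_timeout */
	struct entry {
		int id;
		struct entry *next;
	};

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static struct entry *pending;		/* protected by lock */

	static void complete_entry(struct entry *e, int err)
	{
		/* runs without the lock held, like io_req_queue_tw_complete() */
		printf("completing entry %d with status %d\n", e->id, err);
		free(e);
	}

	/* detach everything under the lock, complete after unlocking */
	static void flush_killed(int err)
	{
		struct entry *list;

		pthread_mutex_lock(&lock);
		list = pending;
		pending = NULL;
		pthread_mutex_unlock(&lock);

		while (list) {
			struct entry *e = list;

			list = e->next;
			complete_entry(e, err);
		}
	}

	int main(void)
	{
		for (int i = 0; i < 3; i++) {
			struct entry *e = malloc(sizeof(*e));

			e->id = i;
			pthread_mutex_lock(&lock);
			e->next = pending;
			pending = e;
			pthread_mutex_unlock(&lock);
		}
		flush_killed(-1);
		return 0;
	}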
+1 -1 kernel/kcov.c
···
166
166
* Unlike in_serving_softirq(), this function returns false when called during
167
167
* a hardirq or an NMI that happened in the softirq context.
168
168
*/
169
-
static inline bool in_softirq_really(void)
169
+
static __always_inline bool in_softirq_really(void)
170
170
{
171
171
return in_serving_softirq() && !in_hardirq() && !in_nmi();
172
172
}
+16 -2 kernel/locking/rtmutex.c
···
1292
1292
*/
1293
1293
get_task_struct(owner);
1294
1294
1295
+
preempt_disable();
1295
1296
raw_spin_unlock_irq(&lock->wait_lock);
1297
+
/* wake up any tasks on the wake_q before calling rt_mutex_adjust_prio_chain */
1298
+
wake_up_q(wake_q);
1299
+
wake_q_init(wake_q);
1300
+
preempt_enable();
1301
+
1296
1302
1297
1303
res = rt_mutex_adjust_prio_chain(owner, chwalk, lock,
1298
1304
next_lock, waiter, task);
···
1602
1596
* or TASK_UNINTERRUPTIBLE)
1603
1597
* @timeout: the pre-initialized and started timer, or NULL for none
1604
1598
* @waiter: the pre-initialized rt_mutex_waiter
1599
+
* @wake_q: wake_q of tasks to wake when we drop the lock->wait_lock
1605
1600
*
1606
1601
* Must be called with lock->wait_lock held and interrupts disabled
1607
1602
*/
···
1610
1603
struct ww_acquire_ctx *ww_ctx,
1611
1604
unsigned int state,
1612
1605
struct hrtimer_sleeper *timeout,
1613
-
struct rt_mutex_waiter *waiter)
1606
+
struct rt_mutex_waiter *waiter,
1607
+
struct wake_q_head *wake_q)
1614
1608
__releases(&lock->wait_lock) __acquires(&lock->wait_lock)
1615
1609
{
1616
1610
struct rt_mutex *rtm = container_of(lock, struct rt_mutex, rtmutex);
···
1642
1634
owner = rt_mutex_owner(lock);
1643
1635
else
1644
1636
owner = NULL;
1637
+
preempt_disable();
1645
1638
raw_spin_unlock_irq(&lock->wait_lock);
1639
+
if (wake_q) {
1640
+
wake_up_q(wake_q);
1641
+
wake_q_init(wake_q);
1642
+
}
1643
+
preempt_enable();
1646
1644
1647
1645
if (!owner || !rtmutex_spin_on_owner(lock, waiter, owner))
1648
1646
rt_mutex_schedule();
···
1722
1708
1723
1709
ret = task_blocks_on_rt_mutex(lock, waiter, current, ww_ctx, chwalk, wake_q);
1724
1710
if (likely(!ret))
1725
-
ret = rt_mutex_slowlock_block(lock, ww_ctx, state, NULL, waiter);
1711
+
ret = rt_mutex_slowlock_block(lock, ww_ctx, state, NULL, waiter, wake_q);
1726
1712
1727
1713
if (likely(!ret)) {
1728
1714
/* acquired the lock */
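Note: the wake_q handling added above collects wakeups while the wait_lock is held and only issues them once the lock has been dropped, with preemption disabled across the unlock so the woken task cannot immediately preempt the releaser. A rough kernel-style sketch of that shape only — my_lock and the way a task gets selected are assumptions, this is not the rtmutex code:

	/* caller is assumed to hold my_lock with IRQs disabled */
	static void release_and_wake(raw_spinlock_t *my_lock, struct task_struct *task)
	{
		DEFINE_WAKE_Q(wake_q);

		wake_q_add(&wake_q, task);	/* queue the wakeup, do not wake yet */

		preempt_disable();
		raw_spin_unlock_irq(my_lock);
		wake_up_q(&wake_q);		/* safe: the lock is no longer held */
		preempt_enable();
	}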
+1 -1 kernel/locking/rtmutex_api.c
···
383
383
raw_spin_lock_irq(&lock->wait_lock);
384
384
/* sleep on the mutex */
385
385
set_current_state(TASK_INTERRUPTIBLE);
386
-
ret = rt_mutex_slowlock_block(lock, NULL, TASK_INTERRUPTIBLE, to, waiter);
386
+
ret = rt_mutex_slowlock_block(lock, NULL, TASK_INTERRUPTIBLE, to, waiter, NULL);
387
387
/*
388
388
* try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
389
389
* have to fix that up.
+2 -2 kernel/sched/ext.c
···
4763
4763
* sees scx_rq_bypassing() before moving tasks to SCX.
4764
4764
*/
4765
4765
if (!scx_enabled()) {
4766
-
rq_unlock_irqrestore(rq, &rf);
4766
+
rq_unlock(rq, &rf);
4767
4767
continue;
4768
4768
}
4769
4769
···
7013
7013
return -ENOENT;
7014
7014
7015
7015
INIT_LIST_HEAD(&kit->cursor.node);
7016
-
kit->cursor.flags |= SCX_DSQ_LNODE_ITER_CURSOR | flags;
7016
+
kit->cursor.flags = SCX_DSQ_LNODE_ITER_CURSOR | flags;
7017
7017
kit->cursor.priv = READ_ONCE(kit->dsq->seq);
7018
7018
7019
7019
return 0;
+1 -1 kernel/trace/fgraph.c
+2 -6 kernel/trace/ftrace.c
···
902
902
}
903
903
904
904
static struct fgraph_ops fprofiler_ops = {
905
-
.ops = {
906
-
.flags = FTRACE_OPS_FL_INITIALIZED,
907
-
INIT_OPS_HASH(fprofiler_ops.ops)
908
-
},
909
905
.entryfunc = &profile_graph_entry,
910
906
.retfunc = &profile_graph_return,
911
907
};
912
908
913
909
static int register_ftrace_profiler(void)
914
910
{
911
+
ftrace_ops_set_global_filter(&fprofiler_ops.ops);
915
912
return register_ftrace_graph(&fprofiler_ops);
916
913
}
917
914
···
919
922
#else
920
923
static struct ftrace_ops ftrace_profile_ops __read_mostly = {
921
924
.func = function_profile_call,
922
-
.flags = FTRACE_OPS_FL_INITIALIZED,
923
-
INIT_OPS_HASH(ftrace_profile_ops)
924
925
};
925
926
926
927
static int register_ftrace_profiler(void)
927
928
{
929
+
ftrace_ops_set_global_filter(&ftrace_profile_ops);
928
930
return register_ftrace_function(&ftrace_profile_ops);
929
931
}
930
932
+3 kernel/trace/trace.c
+12 kernel/trace/trace_events.c
···
365
365
} while (s < e);
366
366
367
367
/*
368
+
* Check for arrays. If the argument has: foo[REC->val]
369
+
* then it is very likely that foo is an array of strings
370
+
* that are safe to use.
371
+
*/
372
+
r = strstr(s, "[");
373
+
if (r && r < e) {
374
+
r = strstr(r, "REC->");
375
+
if (r && r < e)
376
+
return true;
377
+
}
378
+
379
+
/*
368
380
* If there's any strings in the argument consider this arg OK as it
369
381
* could be: REC->field ? "foo" : "bar" and we don't want to get into
370
382
* verifying that logic here.
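Note: the new check above looks, within the bounds of the argument, for a '[' followed by "REC->", i.e. patterns like foo[REC->val]. A small standalone C sketch of that bounded strstr scan (the format strings here are invented for illustration):

	#include <stdio.h>
	#include <string.h>
	#include <stdbool.h>

	/* return true if the argument ending at 'e' looks like foo[REC->val] */
	static bool arg_indexes_rec(const char *s, const char *e)
	{
		const char *r = strstr(s, "[");

		if (r && r < e) {
			r = strstr(r, "REC->");
			if (r && r < e)
				return true;
		}
		return false;
	}

	int main(void)
	{
		const char *arg1 = "names[REC->idx], REC->other";
		const char *arg2 = "REC->name, more";

		/* only scan up to the first ',' like the field-by-field parser does */
		printf("%d\n", arg_indexes_rec(arg1, strchr(arg1, ',')));	/* prints 1 */
		printf("%d\n", arg_indexes_rec(arg2, strchr(arg2, ',')));	/* prints 0 */
		return 0;
	}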
+1 -1 kernel/trace/trace_kprobe.c
···
725
725
726
726
static struct notifier_block trace_kprobe_module_nb = {
727
727
.notifier_call = trace_kprobe_module_callback,
728
-
.priority = 1 /* Invoked after kprobe module callback */
728
+
.priority = 2 /* Invoked after kprobe and jump_label module callback */
729
729
};
730
730
static int trace_kprobe_register_module_notifier(void)
731
731
{
+14 -9 kernel/workqueue.c
···
3680
3680
* check_flush_dependency - check for flush dependency sanity
3681
3681
* @target_wq: workqueue being flushed
3682
3682
* @target_work: work item being flushed (NULL for workqueue flushes)
3683
+
* @from_cancel: are we called from the work cancel path
3683
3684
*
3684
3685
* %current is trying to flush the whole @target_wq or @target_work on it.
3685
-
* If @target_wq doesn't have %WQ_MEM_RECLAIM, verify that %current is not
3686
-
* reclaiming memory or running on a workqueue which doesn't have
3687
-
* %WQ_MEM_RECLAIM as that can break forward-progress guarantee leading to
3688
-
* a deadlock.
3686
+
* If this is not the cancel path (which implies work being flushed is either
3687
+
* already running, or will not be at all), check if @target_wq doesn't have
3688
+
* %WQ_MEM_RECLAIM and verify that %current is not reclaiming memory or running
3689
+
* on a workqueue which doesn't have %WQ_MEM_RECLAIM as that can break forward-
3690
+
* progress guarantee leading to a deadlock.
3689
3691
*/
3690
3692
static void check_flush_dependency(struct workqueue_struct *target_wq,
3691
-
struct work_struct *target_work)
3693
+
struct work_struct *target_work,
3694
+
bool from_cancel)
3692
3695
{
3693
-
work_func_t target_func = target_work ? target_work->func : NULL;
3696
+
work_func_t target_func;
3694
3697
struct worker *worker;
3695
3698
3696
-
if (target_wq->flags & WQ_MEM_RECLAIM)
3699
+
if (from_cancel || target_wq->flags & WQ_MEM_RECLAIM)
3697
3700
return;
3698
3701
3699
3702
worker = current_wq_worker();
3703
+
target_func = target_work ? target_work->func : NULL;
3700
3704
3701
3705
WARN_ONCE(current->flags & PF_MEMALLOC,
3702
3706
"workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%ps",
···
3984
3980
list_add_tail(&this_flusher.list, &wq->flusher_overflow);
3985
3981
}
3986
3982
3987
-
check_flush_dependency(wq, NULL);
3983
+
check_flush_dependency(wq, NULL, false);
3988
3984
3989
3985
mutex_unlock(&wq->mutex);
3990
3986
···
4159
4155
}
4160
4156
4161
4157
wq = pwq->wq;
4162
-
check_flush_dependency(wq, work);
4158
+
check_flush_dependency(wq, work, from_cancel);
4163
4159
4164
4160
insert_wq_barrier(pwq, barr, work, worker);
4165
4161
raw_spin_unlock_irq(&pool->lock);
···
5645
5641
} while (activated);
5646
5642
}
5647
5643
5644
+
__printf(1, 0)
5648
5645
static struct workqueue_struct *__alloc_workqueue(const char *fmt,
5649
5646
unsigned int flags,
5650
5647
int max_active, va_list args)
+1 lib/maple_tree.c
+9 -1 mm/damon/core.c
···
868
868
NUMA_NO_NODE);
869
869
if (!new_scheme)
870
870
return -ENOMEM;
871
+
err = damos_commit(new_scheme, src_scheme);
872
+
if (err) {
873
+
damon_destroy_scheme(new_scheme);
874
+
return err;
875
+
}
871
876
damon_add_scheme(dst, new_scheme);
872
877
}
873
878
return 0;
···
966
961
return -ENOMEM;
967
962
err = damon_commit_target(new_target, false,
968
963
src_target, damon_target_has_pid(src));
969
-
if (err)
964
+
if (err) {
965
+
damon_destroy_target(new_target);
970
966
return err;
967
+
}
968
+
damon_add_target(dst, new_target);
971
969
}
972
970
return 0;
973
971
}
-9 mm/filemap.c
···
124
124
* ->private_lock (zap_pte_range->block_dirty_folio)
125
125
*/
126
126
127
-
static void mapping_set_update(struct xa_state *xas,
128
-
struct address_space *mapping)
129
-
{
130
-
if (dax_mapping(mapping) || shmem_mapping(mapping))
131
-
return;
132
-
xas_set_update(xas, workingset_update_node);
133
-
xas_set_lru(xas, &shadow_nodes);
134
-
}
135
-
136
127
static void page_cache_delete(struct address_space *mapping,
137
128
struct folio *folio, void *shadow)
138
129
{
+7 -9 mm/hugetlb.c
···
7211
7211
spte = hugetlb_walk(svma, saddr,
7212
7212
vma_mmu_pagesize(svma));
7213
7213
if (spte) {
7214
-
get_page(virt_to_page(spte));
7214
+
ptdesc_pmd_pts_inc(virt_to_ptdesc(spte));
7215
7215
break;
7216
7216
}
7217
7217
}
···
7226
7226
(pmd_t *)((unsigned long)spte & PAGE_MASK));
7227
7227
mm_inc_nr_pmds(mm);
7228
7228
} else {
7229
-
put_page(virt_to_page(spte));
7229
+
ptdesc_pmd_pts_dec(virt_to_ptdesc(spte));
7230
7230
}
7231
7231
spin_unlock(&mm->page_table_lock);
7232
7232
out:
···
7238
7238
/*
7239
7239
* unmap huge page backed by shared pte.
7240
7240
*
7241
-
* Hugetlb pte page is ref counted at the time of mapping. If pte is shared
7242
-
* indicated by page_count > 1, unmap is achieved by clearing pud and
7243
-
* decrementing the ref count. If count == 1, the pte page is not shared.
7244
-
*
7245
7241
* Called with page table lock held.
7246
7242
*
7247
7243
* returns: 1 successfully unmapped a shared pte page
···
7246
7250
int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
7247
7251
unsigned long addr, pte_t *ptep)
7248
7252
{
7253
+
unsigned long sz = huge_page_size(hstate_vma(vma));
7249
7254
pgd_t *pgd = pgd_offset(mm, addr);
7250
7255
p4d_t *p4d = p4d_offset(pgd, addr);
7251
7256
pud_t *pud = pud_offset(p4d, addr);
7252
7257
7253
7258
i_mmap_assert_write_locked(vma->vm_file->f_mapping);
7254
7259
hugetlb_vma_assert_locked(vma);
7255
-
BUG_ON(page_count(virt_to_page(ptep)) == 0);
7256
-
if (page_count(virt_to_page(ptep)) == 1)
7260
+
if (sz != PMD_SIZE)
7261
+
return 0;
7262
+
if (!ptdesc_pmd_pts_count(virt_to_ptdesc(ptep)))
7257
7263
return 0;
7258
7264
7259
7265
pud_clear(pud);
7260
-
put_page(virt_to_page(ptep));
7266
+
ptdesc_pmd_pts_dec(virt_to_ptdesc(ptep));
7261
7267
mm_dec_nr_pmds(mm);
7262
7268
return 1;
7263
7269
}
+6 mm/internal.h
···
1504
1504
/* Only track the nodes of mappings with shadow entries */
1505
1505
void workingset_update_node(struct xa_node *node);
1506
1506
extern struct list_lru shadow_nodes;
1507
+
#define mapping_set_update(xas, mapping) do { \
1508
+
if (!dax_mapping(mapping) && !shmem_mapping(mapping)) { \
1509
+
xas_set_update(xas, workingset_update_node); \
1510
+
xas_set_lru(xas, &shadow_nodes); \
1511
+
} \
1512
+
} while (0)
1507
1513
1508
1514
/* mremap.c */
1509
1515
unsigned long move_page_tables(struct vm_area_struct *vma,
+3 mm/khugepaged.c
···
19
19
#include <linux/rcupdate_wait.h>
20
20
#include <linux/swapops.h>
21
21
#include <linux/shmem_fs.h>
22
+
#include <linux/dax.h>
22
23
#include <linux/ksm.h>
23
24
24
25
#include <asm/tlb.h>
···
1837
1836
result = alloc_charge_folio(&new_folio, mm, cc);
1838
1837
if (result != SCAN_SUCCEED)
1839
1838
goto out;
1839
+
1840
+
mapping_set_update(&xas, mapping);
1840
1841
1841
1842
__folio_set_locked(new_folio);
1842
1843
if (is_shmem)
+1 -1 mm/kmemleak.c
+1 -1 mm/list_lru.c
···
77
77
spin_lock(&l->lock);
78
78
nr_items = READ_ONCE(l->nr_items);
79
79
if (likely(nr_items != LONG_MIN)) {
80
-
WARN_ON(nr_items < 0);
81
80
rcu_read_unlock();
82
81
return l;
83
82
}
···
449
450
450
451
list_splice_init(&src->list, &dst->list);
451
452
if (src->nr_items) {
453
+
WARN_ON(src->nr_items < 0);
452
454
dst->nr_items += src->nr_items;
453
455
set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));
454
456
}
+1 -1 mm/memfd.c
+5 -1 mm/mmap.c
···
47
47
#include <linux/oom.h>
48
48
#include <linux/sched/mm.h>
49
49
#include <linux/ksm.h>
50
+
#include <linux/memfd.h>
50
51
51
52
#include <linux/uaccess.h>
52
53
#include <asm/cacheflush.h>
···
369
368
370
369
if (file) {
371
370
struct inode *inode = file_inode(file);
371
+
unsigned int seals = memfd_file_seals(file);
372
372
unsigned long flags_mask;
373
373
374
374
if (!file_mmap_ok(file, inode, pgoff, len))
···
410
408
vm_flags |= VM_SHARED | VM_MAYSHARE;
411
409
if (!(file->f_mode & FMODE_WRITE))
412
410
vm_flags &= ~(VM_MAYWRITE | VM_SHARED);
411
+
else if (is_readonly_sealed(seals, vm_flags))
412
+
vm_flags &= ~VM_MAYWRITE;
413
413
fallthrough;
414
414
case MAP_PRIVATE:
415
415
if (!(file->f_mode & FMODE_READ))
···
892
888
893
889
if (get_area) {
894
890
addr = get_area(file, addr, len, pgoff, flags);
895
-
} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)
891
+
} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && !file
896
892
&& !addr /* no hint */
897
893
&& IS_ALIGNED(len, PMD_SIZE)) {
898
894
/* Ensures that larger anonymous mappings are THP aligned. */
+5 -1 mm/readahead.c
···
646
646
1UL << order);
647
647
if (index == expected) {
648
648
ra->start += ra->size;
649
-
ra->size = get_next_ra_size(ra, max_pages);
649
+
/*
650
+
* In the case of MADV_HUGEPAGE, the actual size might exceed
651
+
* the readahead window.
652
+
*/
653
+
ra->size = max(ra->size, get_next_ra_size(ra, max_pages));
650
654
ra->async_size = ra->size;
651
655
goto readit;
652
656
}
+4 -3 mm/shmem.c
···
1535
1535
!shmem_falloc->waitq &&
1536
1536
index >= shmem_falloc->start &&
1537
1537
index < shmem_falloc->next)
1538
-
shmem_falloc->nr_unswapped++;
1538
+
shmem_falloc->nr_unswapped += nr_pages;
1539
1539
else
1540
1540
shmem_falloc = NULL;
1541
1541
spin_unlock(&inode->i_lock);
···
1689
1689
unsigned long mask = READ_ONCE(huge_shmem_orders_always);
1690
1690
unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
1691
1691
unsigned long vm_flags = vma ? vma->vm_flags : 0;
1692
+
pgoff_t aligned_index;
1692
1693
bool global_huge;
1693
1694
loff_t i_size;
1694
1695
int order;
···
1724
1723
/* Allow mTHP that will be fully within i_size. */
1725
1724
order = highest_order(within_size_orders);
1726
1725
while (within_size_orders) {
1727
-
index = round_up(index + 1, order);
1726
+
aligned_index = round_up(index + 1, 1 << order);
1728
1727
i_size = round_up(i_size_read(inode), PAGE_SIZE);
1729
-
if (i_size >> PAGE_SHIFT >= index) {
1728
+
if (i_size >> PAGE_SHIFT >= aligned_index) {
1730
1729
mask |= within_size_orders;
1731
1730
break;
1732
1731
}
+1 -6 mm/util.c
···
297
297
{
298
298
char *p;
299
299
300
-
/*
301
-
* Always use GFP_KERNEL, since copy_from_user() can sleep and
302
-
* cause pagefault, which makes it pointless to use GFP_NOFS
303
-
* or GFP_ATOMIC.
304
-
*/
305
-
p = kmalloc_track_caller(len + 1, GFP_KERNEL);
300
+
p = kmem_buckets_alloc_track_caller(user_buckets, len + 1, GFP_USER | __GFP_NOWARN);
306
301
if (!p)
307
302
return ERR_PTR(-ENOMEM);
308
303
+8 -1 mm/vmscan.c
···
374
374
if (can_reclaim_anon_pages(NULL, zone_to_nid(zone), NULL))
375
375
nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
376
376
zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
377
-
377
+
/*
378
+
* If there are no reclaimable file-backed or anonymous pages,
379
+
* ensure zones with sufficient free pages are not skipped.
380
+
* This prevents zones like DMA32 from being ignored in reclaim
381
+
* scenarios where they can still help alleviate memory pressure.
382
+
*/
383
+
if (nr == 0)
384
+
nr = zone_page_state_snapshot(zone, NR_FREE_PAGES);
378
385
return nr;
379
386
}
380
387
+2 -1 mm/vmstat.c
···
2148
2148
if (!node_state(cpu_to_node(cpu), N_CPU)) {
2149
2149
node_set_state(cpu_to_node(cpu), N_CPU);
2150
2150
}
2151
+
enable_delayed_work(&per_cpu(vmstat_work, cpu));
2151
2152
2152
2153
return 0;
2153
2154
}
2154
2155
2155
2156
static int vmstat_cpu_down_prep(unsigned int cpu)
2156
2157
{
2157
-
cancel_delayed_work_sync(&per_cpu(vmstat_work, cpu));
2158
+
disable_delayed_work_sync(&per_cpu(vmstat_work, cpu));
2158
2159
return 0;
2159
2160
}
2160
2161
+16 -3 mm/zswap.c
···
880
880
return 0;
881
881
}
882
882
883
+
/* Prevent CPU hotplug from freeing up the per-CPU acomp_ctx resources */
884
+
static struct crypto_acomp_ctx *acomp_ctx_get_cpu(struct crypto_acomp_ctx __percpu *acomp_ctx)
885
+
{
886
+
cpus_read_lock();
887
+
return raw_cpu_ptr(acomp_ctx);
888
+
}
889
+
890
+
static void acomp_ctx_put_cpu(void)
891
+
{
892
+
cpus_read_unlock();
893
+
}
894
+
883
895
static bool zswap_compress(struct page *page, struct zswap_entry *entry,
884
896
struct zswap_pool *pool)
885
897
{
···
905
893
gfp_t gfp;
906
894
u8 *dst;
907
895
908
-
acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
909
-
896
+
acomp_ctx = acomp_ctx_get_cpu(pool->acomp_ctx);
910
897
mutex_lock(&acomp_ctx->mutex);
911
898
912
899
dst = acomp_ctx->buffer;
···
961
950
zswap_reject_alloc_fail++;
962
951
963
952
mutex_unlock(&acomp_ctx->mutex);
953
+
acomp_ctx_put_cpu();
964
954
return comp_ret == 0 && alloc_ret == 0;
965
955
}
966
956
···
972
960
struct crypto_acomp_ctx *acomp_ctx;
973
961
u8 *src;
974
962
975
-
acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
963
+
acomp_ctx = acomp_ctx_get_cpu(entry->pool->acomp_ctx);
976
964
mutex_lock(&acomp_ctx->mutex);
977
965
978
966
src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
···
1002
990
1003
991
if (src != acomp_ctx->buffer)
1004
992
zpool_unmap_handle(zpool, entry->handle);
993
+
acomp_ctx_put_cpu();
1005
994
}
1006
995
1007
996
/*********************************
+3 -1 net/core/dev.c
···
3642
3642
3643
3643
if (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
3644
3644
if (vlan_get_protocol(skb) == htons(ETH_P_IPV6) &&
3645
-
skb_network_header_len(skb) != sizeof(struct ipv6hdr))
3645
+
skb_network_header_len(skb) != sizeof(struct ipv6hdr) &&
3646
+
!ipv6_has_hopopt_jumbo(skb))
3646
3647
goto sw_checksum;
3648
+
3647
3649
switch (skb->csum_offset) {
3648
3650
case offsetof(struct tcphdr, check):
3649
3651
case offsetof(struct udphdr, check):
+5 -1 net/core/netdev-genl.c
+4 -1 net/core/sock.c
···
1295
1295
sk->sk_reuse = (valbool ? SK_CAN_REUSE : SK_NO_REUSE);
1296
1296
break;
1297
1297
case SO_REUSEPORT:
1298
-
sk->sk_reuseport = valbool;
1298
+
if (valbool && !sk_is_inet(sk))
1299
+
ret = -EOPNOTSUPP;
1300
+
else
1301
+
sk->sk_reuseport = valbool;
1299
1302
break;
1300
1303
case SO_DONTROUTE:
1301
1304
sock_valbool_flag(sk, SOCK_LOCALROUTE, valbool);
+3 -3 net/ipv4/ip_tunnel.c
···
294
294
295
295
ip_tunnel_init_flow(&fl4, iph->protocol, iph->daddr,
296
296
iph->saddr, tunnel->parms.o_key,
297
-
iph->tos & INET_DSCP_MASK, dev_net(dev),
297
+
iph->tos & INET_DSCP_MASK, tunnel->net,
298
298
tunnel->parms.link, tunnel->fwmark, 0, 0);
299
299
rt = ip_route_output_key(tunnel->net, &fl4);
300
300
···
611
611
}
612
612
ip_tunnel_init_flow(&fl4, proto, key->u.ipv4.dst, key->u.ipv4.src,
613
613
tunnel_id_to_key32(key->tun_id),
614
-
tos & INET_DSCP_MASK, dev_net(dev), 0, skb->mark,
614
+
tos & INET_DSCP_MASK, tunnel->net, 0, skb->mark,
615
615
skb_get_hash(skb), key->flow_flags);
616
616
617
617
if (!tunnel_hlen)
···
774
774
775
775
ip_tunnel_init_flow(&fl4, protocol, dst, tnl_params->saddr,
776
776
tunnel->parms.o_key, tos & INET_DSCP_MASK,
777
-
dev_net(dev), READ_ONCE(tunnel->parms.link),
777
+
tunnel->net, READ_ONCE(tunnel->parms.link),
778
778
tunnel->fwmark, skb_get_hash(skb), 0);
779
779
780
780
if (ip_tunnel_encap(skb, &tunnel->encap, &protocol, &fl4) < 0)
+1 net/ipv4/tcp_input.c
+11 -5 net/ipv6/ila/ila_xlat.c
···
195
195
},
196
196
};
197
197
198
+
static DEFINE_MUTEX(ila_mutex);
199
+
198
200
static int ila_add_mapping(struct net *net, struct ila_xlat_params *xp)
199
201
{
200
202
struct ila_net *ilan = net_generic(net, ila_net_id);
···
204
202
spinlock_t *lock = ila_get_lock(ilan, xp->ip.locator_match);
205
203
int err = 0, order;
206
204
207
-
if (!ilan->xlat.hooks_registered) {
205
+
if (!READ_ONCE(ilan->xlat.hooks_registered)) {
208
206
/* We defer registering net hooks in the namespace until the
209
207
* first mapping is added.
210
208
*/
211
-
err = nf_register_net_hooks(net, ila_nf_hook_ops,
212
-
ARRAY_SIZE(ila_nf_hook_ops));
209
+
mutex_lock(&ila_mutex);
210
+
if (!ilan->xlat.hooks_registered) {
211
+
err = nf_register_net_hooks(net, ila_nf_hook_ops,
212
+
ARRAY_SIZE(ila_nf_hook_ops));
213
+
if (!err)
214
+
WRITE_ONCE(ilan->xlat.hooks_registered, true);
215
+
}
216
+
mutex_unlock(&ila_mutex);
213
217
if (err)
214
218
return err;
215
-
216
-
ilan->xlat.hooks_registered = true;
217
219
}
218
220
219
221
ila = kzalloc(sizeof(*ila), GFP_KERNEL);
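Note: the hook-registration change above is a double-checked pattern — a lockless READ_ONCE() fast path, and a mutex-protected slow path that re-checks before registering and only then publishes the flag. A minimal userspace sketch of the same shape with C11 atomics and a pthread mutex (register_hooks() is a made-up stand-in for nf_register_net_hooks()):

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	static pthread_mutex_t reg_mutex = PTHREAD_MUTEX_INITIALIZER;
	static atomic_bool hooks_registered;

	/* stand-in for nf_register_net_hooks(); always succeeds here */
	static int register_hooks(void)
	{
		printf("hooks registered\n");
		return 0;
	}

	static int ensure_hooks_registered(void)
	{
		int err = 0;

		/* fast path: already registered, no lock taken */
		if (atomic_load_explicit(&hooks_registered, memory_order_acquire))
			return 0;

		pthread_mutex_lock(&reg_mutex);
		/* re-check under the mutex: another thread may have won the race */
		if (!atomic_load_explicit(&hooks_registered, memory_order_relaxed)) {
			err = register_hooks();
			if (!err)
				atomic_store_explicit(&hooks_registered, true,
						      memory_order_release);
		}
		pthread_mutex_unlock(&reg_mutex);
		return err;
	}

	int main(void)
	{
		ensure_hooks_registered();
		ensure_hooks_registered();	/* second call hits the fast path */
		return 0;
	}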
+1 -1 net/llc/llc_input.c
+7 net/mptcp/options.c
···
667
667
&echo, &drop_other_suboptions))
668
668
return false;
669
669
670
+
/*
671
+
* Later on, mptcp_write_options() will enforce mutually exclusion with
672
+
* DSS, bail out if such option is set and we can't drop it.
673
+
*/
670
674
if (drop_other_suboptions)
671
675
remaining += opt_size;
676
+
else if (opts->suboptions & OPTION_MPTCP_DSS)
677
+
return false;
678
+
672
679
len = mptcp_add_addr_len(opts->addr.family, echo, !!opts->addr.port);
673
680
if (remaining < len)
674
681
return false;
+12 -11 net/mptcp/protocol.c
···
136
136
int delta;
137
137
138
138
if (MPTCP_SKB_CB(from)->offset ||
139
+
((to->len + from->len) > (sk->sk_rcvbuf >> 3)) ||
139
140
!skb_try_coalesce(to, from, &fragstolen, &delta))
140
141
return false;
141
142
···
529
528
mptcp_subflow_send_ack(mptcp_subflow_tcp_sock(subflow));
530
529
}
531
530
532
-
static void mptcp_subflow_cleanup_rbuf(struct sock *ssk)
531
+
static void mptcp_subflow_cleanup_rbuf(struct sock *ssk, int copied)
533
532
{
534
533
bool slow;
535
534
536
535
slow = lock_sock_fast(ssk);
537
536
if (tcp_can_send_ack(ssk))
538
-
tcp_cleanup_rbuf(ssk, 1);
537
+
tcp_cleanup_rbuf(ssk, copied);
539
538
unlock_sock_fast(ssk, slow);
540
539
}
541
540
···
552
551
(ICSK_ACK_PUSHED2 | ICSK_ACK_PUSHED)));
553
552
}
554
553
555
-
static void mptcp_cleanup_rbuf(struct mptcp_sock *msk)
554
+
static void mptcp_cleanup_rbuf(struct mptcp_sock *msk, int copied)
556
555
{
557
556
int old_space = READ_ONCE(msk->old_wspace);
558
557
struct mptcp_subflow_context *subflow;
···
560
559
int space = __mptcp_space(sk);
561
560
bool cleanup, rx_empty;
562
561
563
-
cleanup = (space > 0) && (space >= (old_space << 1));
564
-
rx_empty = !__mptcp_rmem(sk);
562
+
cleanup = (space > 0) && (space >= (old_space << 1)) && copied;
563
+
rx_empty = !__mptcp_rmem(sk) && copied;
565
564
566
565
mptcp_for_each_subflow(msk, subflow) {
567
566
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
568
567
569
568
if (cleanup || mptcp_subflow_could_cleanup(ssk, rx_empty))
570
-
mptcp_subflow_cleanup_rbuf(ssk);
569
+
mptcp_subflow_cleanup_rbuf(ssk, copied);
571
570
}
572
571
}
573
572
···
1940
1939
goto out;
1941
1940
}
1942
1941
1942
+
static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied);
1943
+
1943
1944
static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk,
1944
1945
struct msghdr *msg,
1945
1946
size_t len, int flags,
···
1995
1992
break;
1996
1993
}
1997
1994
1995
+
mptcp_rcv_space_adjust(msk, copied);
1998
1996
return copied;
1999
1997
}
2000
1998
···
2221
2217
2222
2218
copied += bytes_read;
2223
2219
2224
-
/* be sure to advertise window change */
2225
-
mptcp_cleanup_rbuf(msk);
2226
-
2227
2220
if (skb_queue_empty(&msk->receive_queue) && __mptcp_move_skbs(msk))
2228
2221
continue;
2229
2222
···
2269
2268
}
2270
2269
2271
2270
pr_debug("block timeout %ld\n", timeo);
2272
-
mptcp_rcv_space_adjust(msk, copied);
2271
+
mptcp_cleanup_rbuf(msk, copied);
2273
2272
err = sk_wait_data(sk, &timeo, NULL);
2274
2273
if (err < 0) {
2275
2274
err = copied ? : err;
···
2277
2276
}
2278
2277
}
2279
2278
2280
-
mptcp_rcv_space_adjust(msk, copied);
2279
+
mptcp_cleanup_rbuf(msk, copied);
2281
2280
2282
2281
out_err:
2283
2282
if (cmsg_flags && copied >= 0) {
+6 net/netrom/nr_route.c
···
754
754
int ret;
755
755
struct sk_buff *skbn;
756
756
757
+
/*
758
+
* Reject malformed packets early. Check that it contains at least 2
759
+
* addresses and 1 byte more for Time-To-Live
760
+
*/
761
+
if (skb->len < 2 * sizeof(ax25_address) + 1)
762
+
return 0;
757
763
758
764
nr_src = (ax25_address *)(skb->data + 0);
759
765
nr_dest = (ax25_address *)(skb->data + 7);
+7 -21 net/packet/af_packet.c
···
538
538
return packet_lookup_frame(po, rb, rb->head, status);
539
539
}
540
540
541
-
static u16 vlan_get_tci(struct sk_buff *skb, struct net_device *dev)
541
+
static u16 vlan_get_tci(const struct sk_buff *skb, struct net_device *dev)
542
542
{
543
-
u8 *skb_orig_data = skb->data;
544
-
int skb_orig_len = skb->len;
545
543
struct vlan_hdr vhdr, *vh;
546
544
unsigned int header_len;
547
545
···
560
562
else
561
563
return 0;
562
564
563
-
skb_push(skb, skb->data - skb_mac_header(skb));
564
-
vh = skb_header_pointer(skb, header_len, sizeof(vhdr), &vhdr);
565
-
if (skb_orig_data != skb->data) {
566
-
skb->data = skb_orig_data;
567
-
skb->len = skb_orig_len;
568
-
}
565
+
vh = skb_header_pointer(skb, skb_mac_offset(skb) + header_len,
566
+
sizeof(vhdr), &vhdr);
569
567
if (unlikely(!vh))
570
568
return 0;
571
569
572
570
return ntohs(vh->h_vlan_TCI);
573
571
}
574
572
575
-
static __be16 vlan_get_protocol_dgram(struct sk_buff *skb)
573
+
static __be16 vlan_get_protocol_dgram(const struct sk_buff *skb)
576
574
{
577
575
__be16 proto = skb->protocol;
578
576
579
-
if (unlikely(eth_type_vlan(proto))) {
580
-
u8 *skb_orig_data = skb->data;
581
-
int skb_orig_len = skb->len;
582
-
583
-
skb_push(skb, skb->data - skb_mac_header(skb));
584
-
proto = __vlan_get_protocol(skb, proto, NULL);
585
-
if (skb_orig_data != skb->data) {
586
-
skb->data = skb_orig_data;
587
-
skb->len = skb_orig_len;
588
-
}
589
-
}
577
+
if (unlikely(eth_type_vlan(proto)))
578
+
proto = __vlan_get_protocol_offset(skb, proto,
579
+
skb_mac_offset(skb), NULL);
590
580
591
581
return proto;
592
582
}
+2 -1 net/sctp/associola.c
···
137
137
= 5 * asoc->rto_max;
138
138
139
139
asoc->timeouts[SCTP_EVENT_TIMEOUT_SACK] = asoc->sackdelay;
140
-
asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = sp->autoclose * HZ;
140
+
asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] =
141
+
(unsigned long)sp->autoclose * HZ;
141
142
142
143
/* Initializes the timers */
143
144
for (i = SCTP_EVENT_TIMEOUT_NONE; i < SCTP_NUM_TIMEOUT_TYPES; ++i)
+16 -2 rust/kernel/workqueue.rs
···
519
519
impl{T} HasWork<Self> for ClosureWork<T> { self.work }
520
520
}
521
521
522
-
// SAFETY: TODO.
522
+
// SAFETY: The `__enqueue` implementation in RawWorkItem uses a `work_struct` initialized with the
523
+
// `run` method of this trait as the function pointer because:
524
+
// - `__enqueue` gets the `work_struct` from the `Work` field, using `T::raw_get_work`.
525
+
// - The only safe way to create a `Work` object is through `Work::new`.
526
+
// - `Work::new` makes sure that `T::Pointer::run` is passed to `init_work_with_key`.
527
+
// - Finally `Work` and `RawWorkItem` guarantee that the correct `Work` field
528
+
// will be used because of the ID const generic bound. This makes sure that `T::raw_get_work`
529
+
// uses the correct offset for the `Work` field, and `Work::new` picks the correct
530
+
// implementation of `WorkItemPointer` for `Arc<T>`.
523
531
unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Arc<T>
524
532
where
525
533
T: WorkItem<ID, Pointer = Self>,
···
545
537
}
546
538
}
547
539
548
-
// SAFETY: TODO.
540
+
// SAFETY: The `work_struct` raw pointer is guaranteed to be valid for the duration of the call to
541
+
// the closure because we get it from an `Arc`, which means that the ref count will be at least 1,
542
+
// and we don't drop the `Arc` ourselves. If `queue_work_on` returns true, it is further guaranteed
543
+
// to be valid until a call to the function pointer in `work_struct` because we leak the memory it
544
+
// points to, and only reclaim it if the closure returns false, or in `WorkItemPointer::run`, which
545
+
// is what the function pointer in the `work_struct` must be pointing to, according to the safety
546
+
// requirements of `WorkItemPointer`.
549
547
unsafe impl<T, const ID: u64> RawWorkItem<ID> for Arc<T>
550
548
where
551
549
T: WorkItem<ID, Pointer = Self>,
+2 -2 scripts/mksysmap
···
26
26
# (do not forget a space before each pattern)
27
27
28
28
# local symbols for ARM, MIPS, etc.
29
-
/ \\$/d
29
+
/ \$/d
30
30
31
31
# local labels, .LBB, .Ltmpxxx, .L__unnamed_xx, .LASANPC, etc.
32
32
/ \.L/d
···
39
39
/ __pi_\.L/d
40
40
41
41
# arm64 local symbols in non-VHE KVM namespace
42
-
/ __kvm_nvhe_\\$/d
42
+
/ __kvm_nvhe_\$/d
43
43
/ __kvm_nvhe_\.L/d
44
44
45
45
# lld arm/aarch64/mips thunks
+17 -19 scripts/mod/file2alias.c
···
132
132
* based at address m.
133
133
*/
134
134
#define DEF_FIELD(m, devid, f) \
135
-
typeof(((struct devid *)0)->f) f = TO_NATIVE(*(typeof(f) *)((m) + OFF_##devid##_##f))
135
+
typeof(((struct devid *)0)->f) f = \
136
+
get_unaligned_native((typeof(f) *)((m) + OFF_##devid##_##f))
136
137
137
138
/* Define a variable f that holds the address of field f of struct devid
138
139
* based at address m. Due to the way typeof works, for a field of type
···
601
600
static void do_pcmcia_entry(struct module *mod, void *symval)
602
601
{
603
602
char alias[256] = {};
604
-
unsigned int i;
603
+
605
604
DEF_FIELD(symval, pcmcia_device_id, match_flags);
606
605
DEF_FIELD(symval, pcmcia_device_id, manf_id);
607
606
DEF_FIELD(symval, pcmcia_device_id, card_id);
···
609
608
DEF_FIELD(symval, pcmcia_device_id, function);
610
609
DEF_FIELD(symval, pcmcia_device_id, device_no);
611
610
DEF_FIELD_ADDR(symval, pcmcia_device_id, prod_id_hash);
612
-
613
-
for (i=0; i<4; i++) {
614
-
(*prod_id_hash)[i] = TO_NATIVE((*prod_id_hash)[i]);
615
-
}
616
611
617
612
ADD(alias, "m", match_flags & PCMCIA_DEV_ID_MATCH_MANF_ID,
618
613
manf_id);
···
620
623
function);
621
624
ADD(alias, "pfn", match_flags & PCMCIA_DEV_ID_MATCH_DEVICE_NO,
622
625
device_no);
623
-
ADD(alias, "pa", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID1, (*prod_id_hash)[0]);
624
-
ADD(alias, "pb", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID2, (*prod_id_hash)[1]);
625
-
ADD(alias, "pc", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID3, (*prod_id_hash)[2]);
626
-
ADD(alias, "pd", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID4, (*prod_id_hash)[3]);
626
+
ADD(alias, "pa", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID1,
627
+
get_unaligned_native(*prod_id_hash + 0));
628
+
ADD(alias, "pb", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID2,
629
+
get_unaligned_native(*prod_id_hash + 1));
630
+
ADD(alias, "pc", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID3,
631
+
get_unaligned_native(*prod_id_hash + 2));
632
+
ADD(alias, "pd", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID4,
633
+
get_unaligned_native(*prod_id_hash + 3));
627
634
628
635
module_alias_printf(mod, true, "pcmcia:%s", alias);
629
636
}
···
655
654
{
656
655
unsigned int i;
657
656
658
-
for (i = min / BITS_PER_LONG; i < max / BITS_PER_LONG + 1; i++)
659
-
arr[i] = TO_NATIVE(arr[i]);
660
-
for (i = min; i < max; i++)
661
-
if (arr[i / BITS_PER_LONG] & (1ULL << (i%BITS_PER_LONG)))
657
+
for (i = min; i <= max; i++)
658
+
if (get_unaligned_native(arr + i / BITS_PER_LONG) &
659
+
(1ULL << (i % BITS_PER_LONG)))
662
660
sprintf(alias + strlen(alias), "%X,*", i);
663
661
}
664
662
···
812
812
* Each byte of the guid will be represented by two hex characters
813
813
* in the name.
814
814
*/
815
-
816
815
static void do_vmbus_entry(struct module *mod, void *symval)
817
816
{
818
-
int i;
819
817
DEF_FIELD_ADDR(symval, hv_vmbus_device_id, guid);
820
-
char guid_name[(sizeof(*guid) + 1) * 2];
818
+
char guid_name[sizeof(*guid) * 2 + 1];
821
819
822
-
for (i = 0; i < (sizeof(*guid) * 2); i += 2)
823
-
sprintf(&guid_name[i], "%02x", TO_NATIVE((guid->b)[i/2]));
820
+
for (int i = 0; i < sizeof(*guid); i++)
821
+
sprintf(&guid_name[i * 2], "%02x", guid->b[i]);
824
822
825
823
module_alias_printf(mod, false, "vmbus:%s", guid_name);
826
824
}
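Note: the guid_name fix above sizes the buffer as two hex characters per GUID byte plus one NUL, and hex-encodes byte by byte. A standalone illustration of that sizing and loop (the 16-byte GUID value is arbitrary):

	#include <stdio.h>

	int main(void)
	{
		const unsigned char guid[16] = {
			0xde, 0xad, 0xbe, 0xef, 0x00, 0x11, 0x22, 0x33,
			0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb,
		};
		/* two hex digits per byte, plus the terminating NUL */
		char guid_name[sizeof(guid) * 2 + 1];

		for (int i = 0; i < (int)sizeof(guid); i++)
			sprintf(&guid_name[i * 2], "%02x", guid[i]);

		printf("vmbus:%s\n", guid_name);
		return 0;
	}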
+12 -12 scripts/mod/modpost.c
···
1138
1138
{
1139
1139
switch (r_type) {
1140
1140
case R_386_32:
1141
-
return TO_NATIVE(*location);
1141
+
return get_unaligned_native(location);
1142
1142
case R_386_PC32:
1143
-
return TO_NATIVE(*location) + 4;
1143
+
return get_unaligned_native(location) + 4;
1144
1144
}
1145
1145
1146
1146
return (Elf_Addr)(-1);
···
1161
1161
switch (r_type) {
1162
1162
case R_ARM_ABS32:
1163
1163
case R_ARM_REL32:
1164
-
inst = TO_NATIVE(*(uint32_t *)loc);
1164
+
inst = get_unaligned_native((uint32_t *)loc);
1165
1165
return inst + sym->st_value;
1166
1166
case R_ARM_MOVW_ABS_NC:
1167
1167
case R_ARM_MOVT_ABS:
1168
-
inst = TO_NATIVE(*(uint32_t *)loc);
1168
+
inst = get_unaligned_native((uint32_t *)loc);
1169
1169
offset = sign_extend32(((inst & 0xf0000) >> 4) | (inst & 0xfff),
1170
1170
15);
1171
1171
return offset + sym->st_value;
1172
1172
case R_ARM_PC24:
1173
1173
case R_ARM_CALL:
1174
1174
case R_ARM_JUMP24:
1175
-
inst = TO_NATIVE(*(uint32_t *)loc);
1175
+
inst = get_unaligned_native((uint32_t *)loc);
1176
1176
offset = sign_extend32((inst & 0x00ffffff) << 2, 25);
1177
1177
return offset + sym->st_value + 8;
1178
1178
case R_ARM_THM_MOVW_ABS_NC:
1179
1179
case R_ARM_THM_MOVT_ABS:
1180
-
upper = TO_NATIVE(*(uint16_t *)loc);
1181
-
lower = TO_NATIVE(*((uint16_t *)loc + 1));
1180
+
upper = get_unaligned_native((uint16_t *)loc);
1181
+
lower = get_unaligned_native((uint16_t *)loc + 1);
1182
1182
offset = sign_extend32(((upper & 0x000f) << 12) |
1183
1183
((upper & 0x0400) << 1) |
1184
1184
((lower & 0x7000) >> 4) |
···
1195
1195
* imm11 = lower[10:0]
1196
1196
* imm32 = SignExtend(S:J2:J1:imm6:imm11:'0')
1197
1197
*/
1198
-
upper = TO_NATIVE(*(uint16_t *)loc);
1199
-
lower = TO_NATIVE(*((uint16_t *)loc + 1));
1198
+
upper = get_unaligned_native((uint16_t *)loc);
1199
+
lower = get_unaligned_native((uint16_t *)loc + 1);
1200
1200
1201
1201
sign = (upper >> 10) & 1;
1202
1202
j1 = (lower >> 13) & 1;
···
1219
1219
* I2 = NOT(J2 XOR S)
1220
1220
* imm32 = SignExtend(S:I1:I2:imm10:imm11:'0')
1221
1221
*/
1222
-
upper = TO_NATIVE(*(uint16_t *)loc);
1223
-
lower = TO_NATIVE(*((uint16_t *)loc + 1));
1222
+
upper = get_unaligned_native((uint16_t *)loc);
1223
+
lower = get_unaligned_native((uint16_t *)loc + 1);
1224
1224
1225
1225
sign = (upper >> 10) & 1;
1226
1226
j1 = (lower >> 13) & 1;
···
1241
1241
{
1242
1242
uint32_t inst;
1243
1243
1244
-
inst = TO_NATIVE(*location);
1244
+
inst = get_unaligned_native(location);
1245
1245
switch (r_type) {
1246
1246
case R_MIPS_LO16:
1247
1247
return inst & 0xffff;
+14 scripts/mod/modpost.h
···
65
65
#define TO_NATIVE(x) \
66
66
(target_is_big_endian == host_is_big_endian ? x : bswap(x))
67
67
68
+
#define __get_unaligned_t(type, ptr) ({ \
69
+
const struct { type x; } __attribute__((__packed__)) *__pptr = \
70
+
(typeof(__pptr))(ptr); \
71
+
__pptr->x; \
72
+
})
73
+
74
+
#define get_unaligned(ptr) __get_unaligned_t(typeof(*(ptr)), (ptr))
75
+
76
+
#define get_unaligned_native(ptr) \
77
+
({ \
78
+
typeof(*(ptr)) _val = get_unaligned(ptr); \
79
+
TO_NATIVE(_val); \
80
+
})
81
+
68
82
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
69
83
70
84
#define strstarts(str, prefix) (strncmp(str, prefix, strlen(prefix)) == 0)
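Note: the helpers added above use the usual packed-struct trick to read a possibly unaligned value; get_unaligned_native() then byte-swaps it via TO_NATIVE() when host and target endianness differ. A standalone illustration of just the unaligned read (the byte-swap step is omitted, so the printed value assumes a little-endian host):

	#include <stdint.h>
	#include <stdio.h>

	#define __get_unaligned_t(type, ptr) ({					\
		const struct { type x; } __attribute__((__packed__)) *__pptr =	\
			(typeof(__pptr))(ptr);					\
		__pptr->x;							\
	})

	#define get_unaligned(ptr) __get_unaligned_t(typeof(*(ptr)), (ptr))

	int main(void)
	{
		/* a 32-bit little-endian value stored at an odd (unaligned) offset */
		unsigned char buf[8] = { 0x00, 0x78, 0x56, 0x34, 0x12 };
		uint32_t val = get_unaligned((uint32_t *)(buf + 1));

		printf("0x%08x\n", val);	/* 0x12345678 on a little-endian host */
		return 0;
	}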
+1 -1 scripts/package/PKGBUILD
+4 -1 scripts/sorttable.h
···
110
110
111
111
static int orc_sort_cmp(const void *_a, const void *_b)
112
112
{
113
-
struct orc_entry *orc_a;
113
+
struct orc_entry *orc_a, *orc_b;
114
114
const int *a = g_orc_ip_table + *(int *)_a;
115
115
const int *b = g_orc_ip_table + *(int *)_b;
116
116
unsigned long a_val = orc_ip(a);
···
128
128
* whitelisted .o files which didn't get objtool generation.
129
129
*/
130
130
orc_a = g_orc_table + (a - g_orc_ip_table);
131
+
orc_b = g_orc_table + (b - g_orc_ip_table);
132
+
if (orc_a->type == ORC_TYPE_UNDEFINED && orc_b->type == ORC_TYPE_UNDEFINED)
133
+
return 0;
131
134
return orc_a->type == ORC_TYPE_UNDEFINED ? -1 : 1;
132
135
}
133
136
+26 -17 sound/core/compress_offload.c
···
1025
1025
static int snd_compr_task_new(struct snd_compr_stream *stream, struct snd_compr_task *utask)
1026
1026
{
1027
1027
struct snd_compr_task_runtime *task;
1028
-
int retval;
1028
+
int retval, fd_i, fd_o;
1029
1029
1030
1030
if (stream->runtime->total_tasks >= stream->runtime->fragments)
1031
1031
return -EBUSY;
···
1039
1039
retval = stream->ops->task_create(stream, task);
1040
1040
if (retval < 0)
1041
1041
goto cleanup;
1042
-
utask->input_fd = dma_buf_fd(task->input, O_WRONLY|O_CLOEXEC);
1043
-
if (utask->input_fd < 0) {
1044
-
retval = utask->input_fd;
1042
+
/* similar functionality as in dma_buf_fd(), but ensure that both
1043
+
file descriptors are allocated before fd_install() */
1044
+
if (!task->input || !task->input->file || !task->output || !task->output->file) {
1045
+
retval = -EINVAL;
1045
1046
goto cleanup;
1046
1047
}
1047
-
utask->output_fd = dma_buf_fd(task->output, O_RDONLY|O_CLOEXEC);
1048
-
if (utask->output_fd < 0) {
1049
-
retval = utask->output_fd;
1048
+
fd_i = get_unused_fd_flags(O_WRONLY|O_CLOEXEC);
1049
+
if (fd_i < 0)
1050
+
goto cleanup;
1051
+
fd_o = get_unused_fd_flags(O_RDONLY|O_CLOEXEC);
1052
+
if (fd_o < 0) {
1053
+
put_unused_fd(fd_i);
1050
1054
goto cleanup;
1051
1055
}
1052
1056
/* keep dmabuf reference until freed with task free ioctl */
1053
-
dma_buf_get(utask->input_fd);
1054
-
dma_buf_get(utask->output_fd);
1057
+
get_dma_buf(task->input);
1058
+
get_dma_buf(task->output);
1059
+
fd_install(fd_i, task->input->file);
1060
+
fd_install(fd_o, task->output->file);
1061
+
utask->input_fd = fd_i;
1062
+
utask->output_fd = fd_o;
1055
1063
list_add_tail(&task->list, &stream->runtime->tasks);
1056
1064
stream->runtime->total_tasks++;
1057
1065
return 0;
···
1077
1069
return -EPERM;
1078
1070
task = memdup_user((void __user *)arg, sizeof(*task));
1079
1071
if (IS_ERR(task))
1080
-
return PTR_ERR(no_free_ptr(task));
1072
+
return PTR_ERR(task);
1081
1073
retval = snd_compr_task_new(stream, task);
1082
1074
if (retval >= 0)
1083
1075
if (copy_to_user((void __user *)arg, task, sizeof(*task)))
···
1138
1130
return -EPERM;
1139
1131
task = memdup_user((void __user *)arg, sizeof(*task));
1140
1132
if (IS_ERR(task))
1141
-
return PTR_ERR(no_free_ptr(task));
1133
+
return PTR_ERR(task);
1142
1134
retval = snd_compr_task_start(stream, task);
1143
1135
if (retval >= 0)
1144
1136
if (copy_to_user((void __user *)arg, task, sizeof(*task)))
···
1182
1174
static int snd_compr_task_seq(struct snd_compr_stream *stream, unsigned long arg,
1183
1175
snd_compr_seq_func_t fcn)
1184
1176
{
1185
-
struct snd_compr_task_runtime *task;
1177
+
struct snd_compr_task_runtime *task, *temp;
1186
1178
__u64 seqno;
1187
1179
int retval;
1188
1180
1189
1181
if (stream->runtime->state != SNDRV_PCM_STATE_SETUP)
1190
1182
return -EPERM;
1191
-
retval = get_user(seqno, (__u64 __user *)arg);
1192
-
if (retval < 0)
1193
-
return retval;
1183
+
retval = copy_from_user(&seqno, (__u64 __user *)arg, sizeof(seqno));
1184
+
if (retval)
1185
+
return -EFAULT;
1194
1186
retval = 0;
1195
1187
if (seqno == 0) {
1196
-
list_for_each_entry_reverse(task, &stream->runtime->tasks, list)
1188
+
list_for_each_entry_safe_reverse(task, temp, &stream->runtime->tasks, list)
1197
1189
fcn(stream, task);
1198
1190
} else {
1199
1191
task = snd_compr_find_task(stream, seqno);
···
1229
1221
return -EPERM;
1230
1222
status = memdup_user((void __user *)arg, sizeof(*status));
1231
1223
if (IS_ERR(status))
1232
-
return PTR_ERR(no_free_ptr(status));
1224
+
return PTR_ERR(status);
1233
1225
retval = snd_compr_task_status(stream, status);
1234
1226
if (retval >= 0)
1235
1227
if (copy_to_user((void __user *)arg, status, sizeof(*status)))
···
1255
1247
}
1256
1248
EXPORT_SYMBOL_GPL(snd_compr_task_finished);
1257
1249
1250
+
MODULE_IMPORT_NS("DMA_BUF");
1258
1251
#endif /* CONFIG_SND_COMPRESS_ACCEL */
1259
1252
1260
1253
static long snd_compr_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
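Note: the reworked task creation above reserves both descriptor numbers with get_unused_fd_flags() before publishing either one with fd_install(), so a failure on the second descriptor can still be unwound with put_unused_fd(). A compressed kernel-style sketch of just that reserve/install shape (the two struct file pointers and the wrapper function are assumptions, not the driver's full error handling):

	/* publish two files to userspace only after both fds are reserved;
	 * the caller is assumed to hold one reference per file, which each
	 * fd_install() consumes */
	static int install_two_fds(struct file *in, struct file *out,
				   int *in_fd, int *out_fd)
	{
		int fd_i, fd_o;

		fd_i = get_unused_fd_flags(O_WRONLY | O_CLOEXEC);
		if (fd_i < 0)
			return fd_i;
		fd_o = get_unused_fd_flags(O_RDONLY | O_CLOEXEC);
		if (fd_o < 0) {
			put_unused_fd(fd_i);	/* nothing was installed yet */
			return fd_o;
		}

		/* no failure paths after this point: fd_install() cannot be undone */
		fd_install(fd_i, in);
		fd_install(fd_o, out);
		*in_fd = fd_i;
		*out_fd = fd_o;
		return 0;
	}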
+1 -1 sound/core/memalloc.c
···
505
505
if (!p)
506
506
return NULL;
507
507
dmab->addr = dma_map_single(dmab->dev.dev, p, size, DMA_BIDIRECTIONAL);
508
-
if (dmab->addr == DMA_MAPPING_ERROR) {
508
+
if (dma_mapping_error(dmab->dev.dev, dmab->addr)) {
509
509
do_free_pages(dmab->area, size, true);
510
510
return NULL;
511
511
}
+2 sound/core/seq/oss/seq_oss_synth.c
···
66
66
};
67
67
68
68
static DEFINE_SPINLOCK(register_lock);
69
+
static DEFINE_MUTEX(sysex_mutex);
69
70
70
71
/*
71
72
* prototypes
···
498
497
if (!info)
499
498
return -ENXIO;
500
499
500
+
guard(mutex)(&sysex_mutex);
501
501
sysex = info->sysex;
502
502
if (sysex == NULL) {
503
503
sysex = kzalloc(sizeof(*sysex), GFP_KERNEL);
+10 -4 sound/core/seq/seq_clientmgr.c
···
1275
1275
if (client->type != client_info->type)
1276
1276
return -EINVAL;
1277
1277
1278
-
/* check validity of midi_version field */
1279
-
if (client->user_pversion >= SNDRV_PROTOCOL_VERSION(1, 0, 3) &&
1280
-
client_info->midi_version > SNDRV_SEQ_CLIENT_UMP_MIDI_2_0)
1281
-
return -EINVAL;
1278
+
if (client->user_pversion >= SNDRV_PROTOCOL_VERSION(1, 0, 3)) {
1279
+
/* check validity of midi_version field */
1280
+
if (client_info->midi_version > SNDRV_SEQ_CLIENT_UMP_MIDI_2_0)
1281
+
return -EINVAL;
1282
+
1283
+
/* check if UMP is supported in kernel */
1284
+
if (!IS_ENABLED(CONFIG_SND_SEQ_UMP) &&
1285
+
client_info->midi_version > 0)
1286
+
return -EINVAL;
1287
+
}
1282
1288
1283
1289
/* fill the info fields */
1284
1290
if (client_info->name[0])
+1 -1 sound/core/ump.c
+1 sound/pci/hda/patch_realtek.c
···
11009
11009
SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
11010
11010
SND_PCI_QUIRK(0xf111, 0x0006, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
11011
11011
SND_PCI_QUIRK(0xf111, 0x0009, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
11012
+
SND_PCI_QUIRK(0xf111, 0x000c, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
11012
11013
11013
11014
#if 0
11014
11015
/* Below is a quirk table taken from the old code.
+4 sound/pci/hda/tas2781_hda_i2c.c
···
142
142
}
143
143
sub = acpi_get_subsystem_id(ACPI_HANDLE(physdev));
144
144
if (IS_ERR(sub)) {
145
+
/* No subsys id in older tas2563 projects. */
146
+
if (!strncmp(hid, "INT8866", sizeof("INT8866")))
147
+
goto end_2563;
145
148
dev_err(p->dev, "Failed to get SUBSYS ID.\n");
146
149
ret = PTR_ERR(sub);
147
150
goto err;
···
167
164
p->speaker_id = NULL;
168
165
}
169
166
167
+
end_2563:
170
168
acpi_dev_free_resource_list(&resources);
171
169
strscpy(p->dev_name, hid, sizeof(p->dev_name));
172
170
put_device(physdev);
+1 -1 sound/sh/sh_dac_audio.c
···
163
163
/* channel is not used (interleaved data) */
164
164
struct snd_sh_dac *chip = snd_pcm_substream_chip(substream);
165
165
166
-
if (copy_from_iter(chip->data_buffer + pos, src, count) != count)
166
+
if (copy_from_iter(chip->data_buffer + pos, count, src) != count)
167
167
return -EFAULT;
168
168
chip->buffer_end = chip->data_buffer + pos + count;
169
169
+16 -1 sound/soc/amd/ps/pci-ps.c
···
375
375
{
376
376
struct acpi_device *pdm_dev;
377
377
const union acpi_object *obj;
378
+
acpi_handle handle;
379
+
acpi_integer dmic_status;
378
380
u32 config;
379
381
bool is_dmic_dev = false;
380
382
bool is_sdw_dev = false;
383
+
bool wov_en, dmic_en;
381
384
int ret;
385
+
386
+
/* IF WOV entry not found, enable dmic based on acp-audio-device-type entry*/
387
+
wov_en = true;
388
+
dmic_en = false;
382
389
383
390
config = readl(acp_data->acp63_base + ACP_PIN_CONFIG);
384
391
switch (config) {
···
419
412
if (!acpi_dev_get_property(pdm_dev, "acp-audio-device-type",
420
413
ACPI_TYPE_INTEGER, &obj) &&
421
414
obj->integer.value == ACP_DMIC_DEV)
422
-
is_dmic_dev = true;
415
+
dmic_en = true;
423
416
}
417
+
418
+
handle = ACPI_HANDLE(&pci->dev);
419
+
ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status);
420
+
if (!ACPI_FAILURE(ret))
421
+
wov_en = dmic_status;
424
422
}
423
+
424
+
if (dmic_en && wov_en)
425
+
is_dmic_dev = true;
425
426
426
427
if (acp_data->is_sdw_config) {
427
428
ret = acp_scan_sdw_devices(&pci->dev, ACP63_SDW_ADDR);
+6 -1 sound/soc/codecs/rt722-sdca.c
···
1468
1468
0x008d);
1469
1469
/* check HP calibration FSM status */
1470
1470
for (loop_check = 0; loop_check < chk_cnt; loop_check++) {
1471
+
usleep_range(10000, 11000);
1471
1472
ret = rt722_sdca_index_read(rt722, RT722_VENDOR_CALI,
1472
1473
RT722_DAC_DC_CALI_CTL3, &calib_status);
1473
-
if (ret < 0 || loop_check == chk_cnt)
1474
+
if (ret < 0)
1474
1475
dev_dbg(&rt722->slave->dev, "calibration failed!, ret=%d\n", ret);
1475
1476
if ((calib_status & 0x0040) == 0x0)
1476
1477
break;
1477
1478
}
1479
+
1480
+
if (loop_check == chk_cnt)
1481
+
dev_dbg(&rt722->slave->dev, "%s, calibration time-out!\n", __func__);
1482
+
1478
1483
/* Set ADC09 power entity floating control */
1479
1484
rt722_sdca_index_write(rt722, RT722_VENDOR_HDA_CTL, RT722_ADC0A_08_PDE_FLOAT_CTL,
1480
1485
0x2a12);
+20 -3 sound/soc/intel/boards/sof_sdw.c
···
632
632
.callback = sof_sdw_quirk_cb,
633
633
.matches = {
634
634
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
635
-
DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233C")
635
+
DMI_MATCH(DMI_PRODUCT_NAME, "21QB")
636
636
},
637
637
/* Note this quirk excludes the CODEC mic */
638
638
.driver_data = (void *)(SOC_SDW_CODEC_MIC),
···
641
641
.callback = sof_sdw_quirk_cb,
642
642
.matches = {
643
643
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
644
-
DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233B")
644
+
DMI_MATCH(DMI_PRODUCT_NAME, "21QA")
645
645
},
646
-
.driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
646
+
/* Note this quirk excludes the CODEC mic */
647
+
.driver_data = (void *)(SOC_SDW_CODEC_MIC),
648
+
},
649
+
{
650
+
.callback = sof_sdw_quirk_cb,
651
+
.matches = {
652
+
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
653
+
DMI_MATCH(DMI_PRODUCT_NAME, "21Q6")
654
+
},
655
+
.driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
656
+
},
657
+
{
658
+
.callback = sof_sdw_quirk_cb,
659
+
.matches = {
660
+
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
661
+
DMI_MATCH(DMI_PRODUCT_NAME, "21Q7")
662
+
},
663
+
.driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
647
664
},
648
665
649
666
/* ArrowLake devices */
+2 -2 sound/soc/mediatek/common/mtk-afe-platform-driver.c
···
120
120
struct mtk_base_afe *afe = snd_soc_component_get_drvdata(component);
121
121
122
122
size = afe->mtk_afe_hardware->buffer_bytes_max;
123
-
snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV,
124
-
afe->dev, size, size);
123
+
snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, afe->dev, 0, size);
124
+
125
125
return 0;
126
126
}
127
127
EXPORT_SYMBOL_GPL(mtk_afe_pcm_new);
+19 -6 sound/soc/sof/intel/hda-dai.c
···
103
103
return sdai->platform_private;
104
104
}
105
105
106
-
int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_stream *hext_stream,
107
-
struct snd_soc_dai *cpu_dai)
106
+
static int
107
+
hda_link_dma_cleanup(struct snd_pcm_substream *substream,
108
+
struct hdac_ext_stream *hext_stream,
109
+
struct snd_soc_dai *cpu_dai, bool release)
108
110
{
109
111
const struct hda_dai_widget_dma_ops *ops = hda_dai_get_ops(substream, cpu_dai);
110
112
struct sof_intel_hda_stream *hda_stream;
···
128
126
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
129
127
stream_tag = hdac_stream(hext_stream)->stream_tag;
130
128
snd_hdac_ext_bus_link_clear_stream_id(hlink, stream_tag);
129
+
}
130
+
131
+
if (!release) {
132
+
/*
133
+
* Force stream reconfiguration without releasing the channel on
134
+
* subsequent stream restart (without free), including LinkDMA
135
+
* reset.
136
+
* The stream is released via hda_dai_hw_free()
137
+
*/
138
+
hext_stream->link_prepared = 0;
139
+
return 0;
131
140
}
132
141
133
142
if (ops->release_hext_stream)
···
224
211
if (!hext_stream)
225
212
return 0;
226
213
227
-
return hda_link_dma_cleanup(substream, hext_stream, cpu_dai);
214
+
return hda_link_dma_cleanup(substream, hext_stream, cpu_dai, true);
228
215
}
229
216
230
217
static int __maybe_unused hda_dai_hw_params_data(struct snd_pcm_substream *substream,
···
317
304
switch (cmd) {
318
305
case SNDRV_PCM_TRIGGER_STOP:
319
306
case SNDRV_PCM_TRIGGER_SUSPEND:
320
-
ret = hda_link_dma_cleanup(substream, hext_stream, dai);
307
+
ret = hda_link_dma_cleanup(substream, hext_stream, dai,
308
+
cmd == SNDRV_PCM_TRIGGER_STOP ? false : true);
321
309
if (ret < 0) {
322
310
dev_err(sdev->dev, "%s: failed to clean up link DMA\n", __func__);
323
311
return ret;
···
674
660
}
675
661
676
662
ret = hda_link_dma_cleanup(hext_stream->link_substream,
677
-
hext_stream,
678
-
cpu_dai);
663
+
hext_stream, cpu_dai, true);
679
664
if (ret < 0)
680
665
return ret;
681
666
}
-2 sound/soc/sof/intel/hda.h
···
1038
1038
hda_select_dai_widget_ops(struct snd_sof_dev *sdev, struct snd_sof_widget *swidget);
1039
1039
int hda_dai_config(struct snd_soc_dapm_widget *w, unsigned int flags,
1040
1040
struct snd_sof_dai_config_data *data);
1041
-
int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_stream *hext_stream,
1042
-
struct snd_soc_dai *cpu_dai);
1043
1041
1044
1042
static inline struct snd_sof_dev *widget_to_sdev(struct snd_soc_dapm_widget *w)
1045
1043
{
+1 -1 sound/usb/mixer_us16x08.c
···
687
687
struct usb_mixer_elem_info *elem = kcontrol->private_data;
688
688
struct snd_usb_audio *chip = elem->head.mixer->chip;
689
689
struct snd_us16x08_meter_store *store = elem->private_data;
690
-
u8 meter_urb[64];
690
+
u8 meter_urb[64] = {0};
691
691
692
692
switch (kcontrol->private_value) {
693
693
case 0: {
+11 -4 tools/include/uapi/linux/stddef.h
···
8
8
#define __always_inline __inline__
9
9
#endif
10
10
11
+
/* Not all C++ standards support type declarations inside an anonymous union */
12
+
#ifndef __cplusplus
13
+
#define __struct_group_tag(TAG) TAG
14
+
#else
15
+
#define __struct_group_tag(TAG)
16
+
#endif
17
+
11
18
/**
12
19
* __struct_group() - Create a mirrored named and anonyomous struct
13
20
*
···
27
20
* and size: one anonymous and one named. The former's members can be used
28
21
* normally without sub-struct naming, and the latter can be used to
29
22
* reason about the start, end, and size of the group of struct members.
30
-
* The named struct can also be explicitly tagged for layer reuse, as well
31
-
* as both having struct attributes appended.
23
+
* The named struct can also be explicitly tagged for layer reuse (C only),
24
+
* as well as both having struct attributes appended.
32
25
*/
33
26
#define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \
34
27
union { \
35
28
struct { MEMBERS } ATTRS; \
36
-
struct TAG { MEMBERS } ATTRS NAME; \
37
-
}
29
+
struct __struct_group_tag(TAG) { MEMBERS } ATTRS NAME; \
30
+
} ATTRS
38
31
39
32
/**
40
33
* __DECLARE_FLEX_ARRAY() - Declare a flexible array usable in a union
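Note: __struct_group() mirrors the same members twice — as an anonymous struct for direct member access and as a tagged, named struct for copying or sizing the group as a unit; the C++ change above only drops the tag, since C++ does not allow that kind of type declaration inside an anonymous union. A small standalone C example using a simplified copy of the macro (the pkt/pkt_hdr names are made up):

	#include <stdio.h>
	#include <string.h>

	/* simplified C-only copy of __struct_group() for illustration */
	#define struct_group_demo(TAG, NAME, MEMBERS...)	\
		union {						\
			struct { MEMBERS };			\
			struct TAG { MEMBERS } NAME;		\
		}

	struct pkt {
		struct_group_demo(pkt_hdr, hdr,
			unsigned char type;
			unsigned char len;
		);
		unsigned char payload[4];
	};

	int main(void)
	{
		struct pkt p = {0};
		struct pkt_hdr saved;

		p.type = 7;		/* members usable without naming the group */
		p.len = 2;
		memcpy(&saved, &p.hdr, sizeof(saved));	/* or the group as one unit */
		printf("type=%u len=%u (group size %zu)\n",
		       saved.type, saved.len, sizeof(p.hdr));
		return 0;
	}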
+1 tools/objtool/noreturns.h
+3 -3 tools/sched_ext/include/scx/common.bpf.h
···
40
40
void scx_bpf_dsq_insert_vtime(struct task_struct *p, u64 dsq_id, u64 slice, u64 vtime, u64 enq_flags) __ksym __weak;
41
41
u32 scx_bpf_dispatch_nr_slots(void) __ksym;
42
42
void scx_bpf_dispatch_cancel(void) __ksym;
43
-
bool scx_bpf_dsq_move_to_local(u64 dsq_id) __ksym;
44
-
void scx_bpf_dsq_move_set_slice(struct bpf_iter_scx_dsq *it__iter, u64 slice) __ksym;
45
-
void scx_bpf_dsq_move_set_vtime(struct bpf_iter_scx_dsq *it__iter, u64 vtime) __ksym;
43
+
bool scx_bpf_dsq_move_to_local(u64 dsq_id) __ksym __weak;
44
+
void scx_bpf_dsq_move_set_slice(struct bpf_iter_scx_dsq *it__iter, u64 slice) __ksym __weak;
45
+
void scx_bpf_dsq_move_set_vtime(struct bpf_iter_scx_dsq *it__iter, u64 vtime) __ksym __weak;
46
46
bool scx_bpf_dsq_move(struct bpf_iter_scx_dsq *it__iter, struct task_struct *p, u64 dsq_id, u64 enq_flags) __ksym __weak;
47
47
bool scx_bpf_dsq_move_vtime(struct bpf_iter_scx_dsq *it__iter, struct task_struct *p, u64 dsq_id, u64 enq_flags) __ksym __weak;
48
48
u32 scx_bpf_reenqueue_local(void) __ksym;
+1 -1 tools/sched_ext/scx_central.c
···
97
97
SCX_BUG_ON(!cpuset, "Failed to allocate cpuset");
98
98
CPU_ZERO(cpuset);
99
99
CPU_SET(skel->rodata->central_cpu, cpuset);
100
-
SCX_BUG_ON(sched_setaffinity(0, sizeof(cpuset), cpuset),
100
+
SCX_BUG_ON(sched_setaffinity(0, sizeof(*cpuset), cpuset),
101
101
"Failed to affinitize to central CPU %d (max %d)",
102
102
skel->rodata->central_cpu, skel->rodata->nr_cpu_ids - 1);
103
103
CPU_FREE(cpuset);
+1 -1 tools/testing/selftests/alsa/Makefile
+24 -4 tools/testing/selftests/drivers/net/queues.py
···
1
1
#!/usr/bin/env python3
2
2
# SPDX-License-Identifier: GPL-2.0
3
3
4
-
from lib.py import ksft_run, ksft_exit, ksft_eq, KsftSkipEx
5
-
from lib.py import EthtoolFamily, NetdevFamily
4
+
from lib.py import ksft_disruptive, ksft_exit, ksft_run
5
+
from lib.py import ksft_eq, ksft_raises, KsftSkipEx
6
+
from lib.py import EthtoolFamily, NetdevFamily, NlError
6
7
from lib.py import NetDrvEnv
7
-
from lib.py import cmd
8
+
from lib.py import cmd, defer, ip
9
+
import errno
8
10
import glob
9
11
10
12
···
61
59
ksft_eq(queues, expected)
62
60
63
61
62
+
@ksft_disruptive
63
+
def check_down(cfg, nl) -> None:
64
+
# Check the NAPI IDs before interface goes down and hides them
65
+
napis = nl.napi_get({'ifindex': cfg.ifindex}, dump=True)
66
+
67
+
ip(f"link set dev {cfg.dev['ifname']} down")
68
+
defer(ip, f"link set dev {cfg.dev['ifname']} up")
69
+
70
+
with ksft_raises(NlError) as cm:
71
+
nl.queue_get({'ifindex': cfg.ifindex, 'id': 0, 'type': 'rx'})
72
+
ksft_eq(cm.exception.nl_msg.error, -errno.ENOENT)
73
+
74
+
if napis:
75
+
with ksft_raises(NlError) as cm:
76
+
nl.napi_get({'id': napis[0]['id']})
77
+
ksft_eq(cm.exception.nl_msg.error, -errno.ENOENT)
78
+
79
+
64
80
def main() -> None:
65
81
with NetDrvEnv(__file__, queue_count=100) as cfg:
66
-
ksft_run([get_queues, addremove_queues], args=(cfg, NetdevFamily()))
82
+
ksft_run([get_queues, addremove_queues, check_down], args=(cfg, NetdevFamily()))
67
83
ksft_exit()
68
84
69
85
+171 -1 tools/testing/selftests/kvm/s390x/ucontrol_test.c
···
210
210
struct kvm_device_attr attr = {
211
211
.group = KVM_S390_VM_MEM_CTRL,
212
212
.attr = KVM_S390_VM_MEM_LIMIT_SIZE,
213
-
.addr = (unsigned long)&limit,
213
+
.addr = (u64)&limit,
214
214
};
215
215
int rc;
216
+
217
+
rc = ioctl(self->vm_fd, KVM_HAS_DEVICE_ATTR, &attr);
218
+
EXPECT_EQ(0, rc);
216
219
217
220
rc = ioctl(self->vm_fd, KVM_GET_DEVICE_ATTR, &attr);
218
221
EXPECT_EQ(0, rc);
···
636
633
ASSERT_EQ(skeyvalue & 0xfa, sync_regs->gprs[1]);
637
634
ASSERT_EQ(0, sync_regs->gprs[1] & 0x04);
638
635
uc_assert_diag44(self);
636
+
}
637
+
638
+
static char uc_flic_b[PAGE_SIZE];
639
+
static struct kvm_s390_io_adapter uc_flic_ioa = { .id = 0 };
640
+
static struct kvm_s390_io_adapter_req uc_flic_ioam = { .id = 0 };
641
+
static struct kvm_s390_ais_req uc_flic_asim = { .isc = 0 };
642
+
static struct kvm_s390_ais_all uc_flic_asima = { .simm = 0 };
643
+
static struct uc_flic_attr_test {
644
+
char *name;
645
+
struct kvm_device_attr a;
646
+
int hasrc;
647
+
int geterrno;
648
+
int seterrno;
649
+
} uc_flic_attr_tests[] = {
650
+
{
651
+
.name = "KVM_DEV_FLIC_GET_ALL_IRQS",
652
+
.seterrno = EINVAL,
653
+
.a = {
654
+
.group = KVM_DEV_FLIC_GET_ALL_IRQS,
655
+
.addr = (u64)&uc_flic_b,
656
+
.attr = PAGE_SIZE,
657
+
},
658
+
},
659
+
{
660
+
.name = "KVM_DEV_FLIC_ENQUEUE",
661
+
.geterrno = EINVAL,
662
+
.a = { .group = KVM_DEV_FLIC_ENQUEUE, },
663
+
},
664
+
{
665
+
.name = "KVM_DEV_FLIC_CLEAR_IRQS",
666
+
.geterrno = EINVAL,
667
+
.a = { .group = KVM_DEV_FLIC_CLEAR_IRQS, },
668
+
},
669
+
{
670
+
.name = "KVM_DEV_FLIC_ADAPTER_REGISTER",
671
+
.geterrno = EINVAL,
672
+
.a = {
673
+
.group = KVM_DEV_FLIC_ADAPTER_REGISTER,
674
+
.addr = (u64)&uc_flic_ioa,
675
+
},
676
+
},
677
+
{
678
+
.name = "KVM_DEV_FLIC_ADAPTER_MODIFY",
679
+
.geterrno = EINVAL,
680
+
.seterrno = EINVAL,
681
+
.a = {
682
+
.group = KVM_DEV_FLIC_ADAPTER_MODIFY,
683
+
.addr = (u64)&uc_flic_ioam,
684
+
.attr = sizeof(uc_flic_ioam),
685
+
},
686
+
},
687
+
{
688
+
.name = "KVM_DEV_FLIC_CLEAR_IO_IRQ",
689
+
.geterrno = EINVAL,
690
+
.seterrno = EINVAL,
691
+
.a = {
692
+
.group = KVM_DEV_FLIC_CLEAR_IO_IRQ,
693
+
.attr = 32,
694
+
},
695
+
},
696
+
{
697
+
.name = "KVM_DEV_FLIC_AISM",
698
+
.geterrno = EINVAL,
699
+
.seterrno = ENOTSUP,
700
+
.a = {
701
+
.group = KVM_DEV_FLIC_AISM,
702
+
.addr = (u64)&uc_flic_asim,
703
+
},
704
+
},
705
+
{
706
+
.name = "KVM_DEV_FLIC_AIRQ_INJECT",
707
+
.geterrno = EINVAL,
708
+
.a = { .group = KVM_DEV_FLIC_AIRQ_INJECT, },
709
+
},
710
+
{
711
+
.name = "KVM_DEV_FLIC_AISM_ALL",
712
+
.geterrno = ENOTSUP,
713
+
.seterrno = ENOTSUP,
714
+
.a = {
715
+
.group = KVM_DEV_FLIC_AISM_ALL,
716
+
.addr = (u64)&uc_flic_asima,
717
+
.attr = sizeof(uc_flic_asima),
718
+
},
719
+
},
720
+
{
721
+
.name = "KVM_DEV_FLIC_APF_ENABLE",
722
+
.geterrno = EINVAL,
723
+
.seterrno = EINVAL,
724
+
.a = { .group = KVM_DEV_FLIC_APF_ENABLE, },
725
+
},
726
+
{
727
+
.name = "KVM_DEV_FLIC_APF_DISABLE_WAIT",
728
+
.geterrno = EINVAL,
729
+
.seterrno = EINVAL,
730
+
.a = { .group = KVM_DEV_FLIC_APF_DISABLE_WAIT, },
731
+
},
732
+
};
733
+
734
+
TEST_F(uc_kvm, uc_flic_attrs)
735
+
{
736
+
struct kvm_create_device cd = { .type = KVM_DEV_TYPE_FLIC };
737
+
struct kvm_device_attr attr;
738
+
u64 value;
739
+
int rc, i;
740
+
741
+
rc = ioctl(self->vm_fd, KVM_CREATE_DEVICE, &cd);
742
+
ASSERT_EQ(0, rc) TH_LOG("create device failed with err %s (%i)",
743
+
strerror(errno), errno);
744
+
745
+
for (i = 0; i < ARRAY_SIZE(uc_flic_attr_tests); i++) {
746
+
TH_LOG("test %s", uc_flic_attr_tests[i].name);
747
+
attr = (struct kvm_device_attr) {
748
+
.group = uc_flic_attr_tests[i].a.group,
749
+
.attr = uc_flic_attr_tests[i].a.attr,
750
+
.addr = uc_flic_attr_tests[i].a.addr,
751
+
};
752
+
if (attr.addr == 0)
753
+
attr.addr = (u64)&value;
754
+
755
+
rc = ioctl(cd.fd, KVM_HAS_DEVICE_ATTR, &attr);
756
+
EXPECT_EQ(uc_flic_attr_tests[i].hasrc, !!rc)
757
+
TH_LOG("expected dev attr missing %s",
758
+
uc_flic_attr_tests[i].name);
759
+
760
+
rc = ioctl(cd.fd, KVM_GET_DEVICE_ATTR, &attr);
761
+
EXPECT_EQ(!!uc_flic_attr_tests[i].geterrno, !!rc)
762
+
TH_LOG("get dev attr rc not expected on %s %s (%i)",
763
+
uc_flic_attr_tests[i].name,
764
+
strerror(errno), errno);
765
+
if (uc_flic_attr_tests[i].geterrno)
766
+
EXPECT_EQ(uc_flic_attr_tests[i].geterrno, errno)
767
+
TH_LOG("get dev attr errno not expected on %s %s (%i)",
768
+
uc_flic_attr_tests[i].name,
769
+
strerror(errno), errno);
770
+
771
+
rc = ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);
772
+
EXPECT_EQ(!!uc_flic_attr_tests[i].seterrno, !!rc)
773
+
TH_LOG("set sev attr rc not expected on %s %s (%i)",
774
+
uc_flic_attr_tests[i].name,
775
+
strerror(errno), errno);
776
+
if (uc_flic_attr_tests[i].seterrno)
777
+
EXPECT_EQ(uc_flic_attr_tests[i].seterrno, errno)
778
+
TH_LOG("set dev attr errno not expected on %s %s (%i)",
779
+
uc_flic_attr_tests[i].name,
780
+
strerror(errno), errno);
781
+
}
782
+
783
+
close(cd.fd);
784
+
}
785
+
786
+
TEST_F(uc_kvm, uc_set_gsi_routing)
787
+
{
788
+
struct kvm_irq_routing *routing = kvm_gsi_routing_create();
789
+
struct kvm_irq_routing_entry ue = {
790
+
.type = KVM_IRQ_ROUTING_S390_ADAPTER,
791
+
.gsi = 1,
792
+
.u.adapter = (struct kvm_irq_routing_s390_adapter) {
793
+
.ind_addr = 0,
794
+
},
795
+
};
796
+
int rc;
797
+
798
+
routing->entries[0] = ue;
799
+
routing->nr = 1;
800
+
rc = ioctl(self->vm_fd, KVM_SET_GSI_ROUTING, routing);
801
+
ASSERT_EQ(-1, rc) TH_LOG("err %s (%i)", strerror(errno), errno);
802
+
ASSERT_EQ(EINVAL, errno) TH_LOG("err %s (%i)", strerror(errno), errno);
639
803
}
640
804
641
805
TEST_HARNESS_MAIN
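The added hunks probe each FLIC attribute group with KVM_HAS_DEVICE_ATTR before exercising KVM_GET_DEVICE_ATTR and KVM_SET_DEVICE_ATTR and comparing the resulting errno against the table. A condensed, hedged sketch of that probe-then-access pattern for a single group; probe_flic_group and its error handling are illustrative, not lifted from the test, and vm_fd is assumed to be an open ucontrol VM descriptor.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void probe_flic_group(int vm_fd, __u32 group, __u64 attr_nr, __u64 addr)
{
	struct kvm_create_device cd = { .type = KVM_DEV_TYPE_FLIC };
	struct kvm_device_attr attr = {
		.group = group,
		.attr = attr_nr,
		.addr = addr,
	};

	if (ioctl(vm_fd, KVM_CREATE_DEVICE, &cd)) {
		perror("KVM_CREATE_DEVICE");
		return;
	}

	/* 0 means the device recognizes this group/attribute pair */
	if (ioctl(cd.fd, KVM_HAS_DEVICE_ATTR, &attr))
		fprintf(stderr, "group %u unsupported: %s\n", group, strerror(errno));

	/* several groups are expected to fail get/set with EINVAL or ENOTSUP */
	if (ioctl(cd.fd, KVM_GET_DEVICE_ATTR, &attr))
		fprintf(stderr, "get: %s\n", strerror(errno));
	if (ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr))
		fprintf(stderr, "set: %s\n", strerror(errno));

	close(cd.fd);
}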
+43
tools/testing/selftests/memfd/memfd_test.c
···
282
282
return p;
283
283
}
284
284
285
+
static void *mfd_assert_mmap_read_shared(int fd)
286
+
{
287
+
void *p;
288
+
289
+
p = mmap(NULL,
290
+
mfd_def_size,
291
+
PROT_READ,
292
+
MAP_SHARED,
293
+
fd,
294
+
0);
295
+
if (p == MAP_FAILED) {
296
+
printf("mmap() failed: %m\n");
297
+
abort();
298
+
}
299
+
300
+
return p;
301
+
}
302
+
285
303
static void *mfd_assert_mmap_private(int fd)
286
304
{
287
305
void *p;
···
998
980
close(fd);
999
981
}
1000
982
983
+
static void test_seal_write_map_read_shared(void)
984
+
{
985
+
int fd;
986
+
void *p;
987
+
988
+
printf("%s SEAL-WRITE-MAP-READ\n", memfd_str);
989
+
990
+
fd = mfd_assert_new("kern_memfd_seal_write_map_read",
991
+
mfd_def_size,
992
+
MFD_CLOEXEC | MFD_ALLOW_SEALING);
993
+
994
+
mfd_assert_add_seals(fd, F_SEAL_WRITE);
995
+
mfd_assert_has_seals(fd, F_SEAL_WRITE);
996
+
997
+
p = mfd_assert_mmap_read_shared(fd);
998
+
999
+
mfd_assert_read(fd);
1000
+
mfd_assert_read_shared(fd);
1001
+
mfd_fail_write(fd);
1002
+
1003
+
munmap(p, mfd_def_size);
1004
+
close(fd);
1005
+
}
1006
+
1001
1007
/*
1002
1008
* Test SEAL_SHRINK
1003
1009
* Test whether SEAL_SHRINK actually prevents shrinking
···
1635
1593
1636
1594
test_seal_write();
1637
1595
test_seal_future_write();
1596
+
test_seal_write_map_read_shared();
1638
1597
test_seal_shrink();
1639
1598
test_seal_grow();
1640
1599
test_seal_resize();
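The new SEAL-WRITE-MAP-READ case pins down that F_SEAL_WRITE still allows read-only shared mappings while rejecting writes. A small standalone sketch of the same semantics outside the kselftest harness; the memfd name and sizes are illustrative.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t size = 4096;
	int fd = memfd_create("seal_demo", MFD_CLOEXEC | MFD_ALLOW_SEALING);
	void *p;

	if (fd < 0 || ftruncate(fd, size))
		return 1;

	if (fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE))
		return 1;

	/* Writes are rejected once F_SEAL_WRITE is set... */
	if (write(fd, "x", 1) < 0)
		perror("write (expected to fail)");

	/* ...but a read-only MAP_SHARED mapping is still permitted. */
	p = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		perror("mmap PROT_READ MAP_SHARED (unexpected)");
	else
		munmap(p, size);

	close(fd);
	return 0;
}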
-1
tools/testing/selftests/net/forwarding/local_termination.sh
+1
-1
tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
+2
-2
tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
···
17
17
18
18
if (cpu >= 0) {
19
19
/* Shouldn't be allowed to vtime dispatch to a builtin DSQ. */
20
-
scx_bpf_dispatch_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
21
-
p->scx.dsq_vtime, 0);
20
+
scx_bpf_dsq_insert_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
21
+
p->scx.dsq_vtime, 0);
22
22
return cpu;
23
23
}
24
24
+5
-2
tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
···
43
43
if (!p)
44
44
return;
45
45
46
-
target = bpf_get_prandom_u32() % nr_cpus;
46
+
if (p->nr_cpus_allowed == nr_cpus)
47
+
target = bpf_get_prandom_u32() % nr_cpus;
48
+
else
49
+
target = scx_bpf_task_cpu(p);
47
50
48
-
scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0);
51
+
scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0);
49
52
bpf_task_release(p);
50
53
}
51
54
+3
-2
tools/testing/selftests/sched_ext/dsp_local_on.c
···
34
34
/* Just sleeping is fine, plenty of scheduling events happening */
35
35
sleep(1);
36
36
37
-
SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_ERROR));
38
37
bpf_link__destroy(link);
38
+
39
+
SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG));
39
40
40
41
return SCX_TEST_PASS;
41
42
}
···
51
50
struct scx_test dsp_local_on = {
52
51
.name = "dsp_local_on",
53
52
.description = "Verify we can directly dispatch tasks to a local DSQs "
54
-
"from osp.dispatch()",
53
+
"from ops.dispatch()",
55
54
.setup = setup,
56
55
.run = run,
57
56
.cleanup = cleanup,
+1
-1
tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
+2
-2
tools/testing/selftests/sched_ext/exit.bpf.c
···
33
33
if (exit_point == EXIT_ENQUEUE)
34
34
EXIT_CLEANLY();
35
35
36
-
scx_bpf_dispatch(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
36
+
scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
37
37
}
38
38
39
39
void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p)
···
41
41
if (exit_point == EXIT_DISPATCH)
42
42
EXIT_CLEANLY();
43
43
44
-
scx_bpf_consume(DSQ_ID);
44
+
scx_bpf_dsq_move_to_local(DSQ_ID);
45
45
}
46
46
47
47
void BPF_STRUCT_OPS(exit_enable, struct task_struct *p)
+5
-3
tools/testing/selftests/sched_ext/maximal.bpf.c
···
12
12
13
13
char _license[] SEC("license") = "GPL";
14
14
15
+
#define DSQ_ID 0
16
+
15
17
s32 BPF_STRUCT_OPS(maximal_select_cpu, struct task_struct *p, s32 prev_cpu,
16
18
u64 wake_flags)
17
19
{
···
22
20
23
21
void BPF_STRUCT_OPS(maximal_enqueue, struct task_struct *p, u64 enq_flags)
24
22
{
25
-
scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
23
+
scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
26
24
}
27
25
28
26
void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags)
···
30
28
31
29
void BPF_STRUCT_OPS(maximal_dispatch, s32 cpu, struct task_struct *prev)
32
30
{
33
-
scx_bpf_consume(SCX_DSQ_GLOBAL);
31
+
scx_bpf_dsq_move_to_local(DSQ_ID);
34
32
}
35
33
36
34
void BPF_STRUCT_OPS(maximal_runnable, struct task_struct *p, u64 enq_flags)
···
125
123
126
124
s32 BPF_STRUCT_OPS_SLEEPABLE(maximal_init)
127
125
{
128
-
return 0;
126
+
return scx_bpf_create_dsq(DSQ_ID, -1);
129
127
}
130
128
131
129
void BPF_STRUCT_OPS(maximal_exit, struct scx_exit_info *info)
+1
-1
tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
+1
-1
tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
+1
-1
tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
+1
-1
tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
+2
-2
tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
···
18
18
s32 prev_cpu, u64 wake_flags)
19
19
{
20
20
/* Dispatching twice in a row is disallowed. */
21
-
scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
22
-
scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
21
+
scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
22
+
scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
23
23
24
24
return prev_cpu;
25
25
}
+4
-4
tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
···
2
2
/*
3
3
* A scheduler that validates that enqueue flags are properly stored and
4
4
* applied at dispatch time when a task is directly dispatched from
5
-
* ops.select_cpu(). We validate this by using scx_bpf_dispatch_vtime(), and
6
-
* making the test a very basic vtime scheduler.
5
+
* ops.select_cpu(). We validate this by using scx_bpf_dsq_insert_vtime(),
6
+
* and making the test a very basic vtime scheduler.
7
7
*
8
8
* Copyright (c) 2024 Meta Platforms, Inc. and affiliates.
9
9
* Copyright (c) 2024 David Vernet <dvernet@meta.com>
···
47
47
cpu = prev_cpu;
48
48
scx_bpf_test_and_clear_cpu_idle(cpu);
49
49
ddsp:
50
-
scx_bpf_dispatch_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
50
+
scx_bpf_dsq_insert_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
51
51
return cpu;
52
52
}
53
53
54
54
void BPF_STRUCT_OPS(select_cpu_vtime_dispatch, s32 cpu, struct task_struct *p)
55
55
{
56
-
if (scx_bpf_consume(VTIME_DSQ))
56
+
if (scx_bpf_dsq_move_to_local(VTIME_DSQ))
57
57
consumed = true;
58
58
}
59
59
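The sched_ext selftest churn above tracks the kfunc renames: scx_bpf_dispatch() becomes scx_bpf_dsq_insert(), scx_bpf_dispatch_vtime() becomes scx_bpf_dsq_insert_vtime(), and scx_bpf_consume() becomes scx_bpf_dsq_move_to_local(). A hedged minimal enqueue/dispatch pair written against the new names; DSQ_DEMO and the demo_* ops are hypothetical, and the scx common header is assumed.

#include <scx/common.bpf.h>

char _license[] SEC("license") = "GPL";

#define DSQ_DEMO 0	/* hypothetical custom DSQ id */

s32 BPF_STRUCT_OPS_SLEEPABLE(demo_init)
{
	/* the custom DSQ must exist before tasks are inserted into it */
	return scx_bpf_create_dsq(DSQ_DEMO, -1);
}

void BPF_STRUCT_OPS(demo_enqueue, struct task_struct *p, u64 enq_flags)
{
	/* formerly scx_bpf_dispatch() */
	scx_bpf_dsq_insert(p, DSQ_DEMO, SCX_SLICE_DFL, enq_flags);
}

void BPF_STRUCT_OPS(demo_dispatch, s32 cpu, struct task_struct *prev)
{
	/* formerly scx_bpf_consume() */
	scx_bpf_dsq_move_to_local(DSQ_DEMO);
}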
+96
-81
tools/tracing/rtla/src/timerlat_hist.c
···
282
282
}
283
283
284
284
/*
285
+
* format_summary_value - format a line of summary value (min, max or avg)
286
+
* of hist data
287
+
*/
288
+
static void format_summary_value(struct trace_seq *seq,
289
+
int count,
290
+
unsigned long long val,
291
+
bool avg)
292
+
{
293
+
if (count)
294
+
trace_seq_printf(seq, "%9llu ", avg ? val / count : val);
295
+
else
296
+
trace_seq_printf(seq, "%9c ", '-');
297
+
}
298
+
299
+
/*
285
300
* timerlat_print_summary - print the summary of the hist data to the output
286
301
*/
287
302
static void
···
343
328
if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
344
329
continue;
345
330
346
-
if (!params->no_irq) {
347
-
if (data->hist[cpu].irq_count)
348
-
trace_seq_printf(trace->seq, "%9llu ",
349
-
data->hist[cpu].min_irq);
350
-
else
351
-
trace_seq_printf(trace->seq, " - ");
352
-
}
331
+
if (!params->no_irq)
332
+
format_summary_value(trace->seq,
333
+
data->hist[cpu].irq_count,
334
+
data->hist[cpu].min_irq,
335
+
false);
353
336
354
-
if (!params->no_thread) {
355
-
if (data->hist[cpu].thread_count)
356
-
trace_seq_printf(trace->seq, "%9llu ",
357
-
data->hist[cpu].min_thread);
358
-
else
359
-
trace_seq_printf(trace->seq, " - ");
360
-
}
337
+
if (!params->no_thread)
338
+
format_summary_value(trace->seq,
339
+
data->hist[cpu].thread_count,
340
+
data->hist[cpu].min_thread,
341
+
false);
361
342
362
-
if (params->user_hist) {
363
-
if (data->hist[cpu].user_count)
364
-
trace_seq_printf(trace->seq, "%9llu ",
365
-
data->hist[cpu].min_user);
366
-
else
367
-
trace_seq_printf(trace->seq, " - ");
368
-
}
343
+
if (params->user_hist)
344
+
format_summary_value(trace->seq,
345
+
data->hist[cpu].user_count,
346
+
data->hist[cpu].min_user,
347
+
false);
369
348
}
370
349
trace_seq_printf(trace->seq, "\n");
371
350
···
373
364
if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
374
365
continue;
375
366
376
-
if (!params->no_irq) {
377
-
if (data->hist[cpu].irq_count)
378
-
trace_seq_printf(trace->seq, "%9llu ",
379
-
data->hist[cpu].sum_irq / data->hist[cpu].irq_count);
380
-
else
381
-
trace_seq_printf(trace->seq, " - ");
382
-
}
367
+
if (!params->no_irq)
368
+
format_summary_value(trace->seq,
369
+
data->hist[cpu].irq_count,
370
+
data->hist[cpu].sum_irq,
371
+
true);
383
372
384
-
if (!params->no_thread) {
385
-
if (data->hist[cpu].thread_count)
386
-
trace_seq_printf(trace->seq, "%9llu ",
387
-
data->hist[cpu].sum_thread / data->hist[cpu].thread_count);
388
-
else
389
-
trace_seq_printf(trace->seq, " - ");
390
-
}
373
+
if (!params->no_thread)
374
+
format_summary_value(trace->seq,
375
+
data->hist[cpu].thread_count,
376
+
data->hist[cpu].sum_thread,
377
+
true);
391
378
392
-
if (params->user_hist) {
393
-
if (data->hist[cpu].user_count)
394
-
trace_seq_printf(trace->seq, "%9llu ",
395
-
data->hist[cpu].sum_user / data->hist[cpu].user_count);
396
-
else
397
-
trace_seq_printf(trace->seq, " - ");
398
-
}
379
+
if (params->user_hist)
380
+
format_summary_value(trace->seq,
381
+
data->hist[cpu].user_count,
382
+
data->hist[cpu].sum_user,
383
+
true);
399
384
}
400
385
trace_seq_printf(trace->seq, "\n");
401
386
···
403
400
if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
404
401
continue;
405
402
406
-
if (!params->no_irq) {
407
-
if (data->hist[cpu].irq_count)
408
-
trace_seq_printf(trace->seq, "%9llu ",
409
-
data->hist[cpu].max_irq);
410
-
else
411
-
trace_seq_printf(trace->seq, " - ");
412
-
}
403
+
if (!params->no_irq)
404
+
format_summary_value(trace->seq,
405
+
data->hist[cpu].irq_count,
406
+
data->hist[cpu].max_irq,
407
+
false);
413
408
414
-
if (!params->no_thread) {
415
-
if (data->hist[cpu].thread_count)
416
-
trace_seq_printf(trace->seq, "%9llu ",
417
-
data->hist[cpu].max_thread);
418
-
else
419
-
trace_seq_printf(trace->seq, " - ");
420
-
}
409
+
if (!params->no_thread)
410
+
format_summary_value(trace->seq,
411
+
data->hist[cpu].thread_count,
412
+
data->hist[cpu].max_thread,
413
+
false);
421
414
422
-
if (params->user_hist) {
423
-
if (data->hist[cpu].user_count)
424
-
trace_seq_printf(trace->seq, "%9llu ",
425
-
data->hist[cpu].max_user);
426
-
else
427
-
trace_seq_printf(trace->seq, " - ");
428
-
}
415
+
if (params->user_hist)
416
+
format_summary_value(trace->seq,
417
+
data->hist[cpu].user_count,
418
+
data->hist[cpu].max_user,
419
+
false);
429
420
}
430
421
trace_seq_printf(trace->seq, "\n");
431
422
trace_seq_do_printf(trace->seq);
···
503
506
trace_seq_printf(trace->seq, "min: ");
504
507
505
508
if (!params->no_irq)
506
-
trace_seq_printf(trace->seq, "%9llu ",
507
-
sum.min_irq);
509
+
format_summary_value(trace->seq,
510
+
sum.irq_count,
511
+
sum.min_irq,
512
+
false);
508
513
509
514
if (!params->no_thread)
510
-
trace_seq_printf(trace->seq, "%9llu ",
511
-
sum.min_thread);
515
+
format_summary_value(trace->seq,
516
+
sum.thread_count,
517
+
sum.min_thread,
518
+
false);
512
519
513
520
if (params->user_hist)
514
-
trace_seq_printf(trace->seq, "%9llu ",
515
-
sum.min_user);
521
+
format_summary_value(trace->seq,
522
+
sum.user_count,
523
+
sum.min_user,
524
+
false);
516
525
517
526
trace_seq_printf(trace->seq, "\n");
518
527
···
526
523
trace_seq_printf(trace->seq, "avg: ");
527
524
528
525
if (!params->no_irq)
529
-
trace_seq_printf(trace->seq, "%9llu ",
530
-
sum.sum_irq / sum.irq_count);
526
+
format_summary_value(trace->seq,
527
+
sum.irq_count,
528
+
sum.sum_irq,
529
+
true);
531
530
532
531
if (!params->no_thread)
533
-
trace_seq_printf(trace->seq, "%9llu ",
534
-
sum.sum_thread / sum.thread_count);
532
+
format_summary_value(trace->seq,
533
+
sum.thread_count,
534
+
sum.sum_thread,
535
+
true);
535
536
536
537
if (params->user_hist)
537
-
trace_seq_printf(trace->seq, "%9llu ",
538
-
sum.sum_user / sum.user_count);
538
+
format_summary_value(trace->seq,
539
+
sum.user_count,
540
+
sum.sum_user,
541
+
true);
539
542
540
543
trace_seq_printf(trace->seq, "\n");
541
544
···
549
540
trace_seq_printf(trace->seq, "max: ");
550
541
551
542
if (!params->no_irq)
552
-
trace_seq_printf(trace->seq, "%9llu ",
553
-
sum.max_irq);
543
+
format_summary_value(trace->seq,
544
+
sum.irq_count,
545
+
sum.max_irq,
546
+
false);
554
547
555
548
if (!params->no_thread)
556
-
trace_seq_printf(trace->seq, "%9llu ",
557
-
sum.max_thread);
549
+
format_summary_value(trace->seq,
550
+
sum.thread_count,
551
+
sum.max_thread,
552
+
false);
558
553
559
554
if (params->user_hist)
560
-
trace_seq_printf(trace->seq, "%9llu ",
561
-
sum.max_user);
555
+
format_summary_value(trace->seq,
556
+
sum.user_count,
557
+
sum.max_user,
558
+
false);
562
559
563
560
trace_seq_printf(trace->seq, "\n");
564
561
trace_seq_do_printf(trace->seq);
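The rework above routes every min/avg/max cell through format_summary_value(), which prints '-' when a bucket has no samples and so never divides by zero when asked for an average. A tiny standalone demonstration of that behaviour; the function and values here are illustrative, not rtla code.

#include <stdio.h>

/* same logic as the new helper, reduced to plain printf for illustration */
static void print_summary_cell(int count, unsigned long long val, int avg)
{
	if (count)
		printf("%9llu ", avg ? val / count : val);
	else
		printf("%9c ", '-');
}

int main(void)
{
	print_summary_cell(0, 0, 1);	/* no samples: prints '-', no division */
	print_summary_cell(4, 100, 1);	/* average: prints 25 */
	print_summary_cell(4, 100, 0);	/* min/max/raw: prints 100 */
	printf("\n");
	return 0;
}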