Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

No conflicts.

Adjacent changes:

drivers/net/ethernet/sfc/tc.c
622ab656344a ("sfc: fix error unwinds in TC offload")
b6583d5e9e94 ("sfc: support TC decap rules matching on enc_src_port")

net/mptcp/protocol.c
5b825727d087 ("mptcp: add annotations around msk->subflow accesses")
e76c8ef5cc5b ("mptcp: refactor mptcp_stream_accept()")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2479 -1380
+2 -2
Documentation/admin-guide/cifs/changes.rst
···
5 5  See https://wiki.samba.org/index.php/LinuxCIFSKernel for summary
6 6  information about fixes/improvements to CIFS/SMB2/SMB3 support (changes
7 7  to cifs.ko module) by kernel version (and cifs internal module version).
8 -  This may be easier to read than parsing the output of "git log fs/cifs"
9 -  by release.
8 +  This may be easier to read than parsing the output of
9 +  "git log fs/smb/client" by release.
+4 -4
Documentation/admin-guide/cifs/usage.rst
···
45 45
46 46  If you have built the CIFS vfs as module (successfully) simply
47 47  type ``make modules_install`` (or if you prefer, manually copy the file to
48 -  the modules directory e.g. /lib/modules/2.4.10-4GB/kernel/fs/cifs/cifs.ko).
48 +  the modules directory e.g. /lib/modules/6.3.0-060300-generic/kernel/fs/smb/client/cifs.ko).
49 49
50 50  If you have built the CIFS vfs into the kernel itself, follow the instructions
51 51  for your distribution on how to install a new kernel (usually you
···
66 66  and maximum number of simultaneous requests to one server can be configured.
67 67  Changing these from their defaults is not recommended. By executing modinfo::
68 68
69 -    modinfo kernel/fs/cifs/cifs.ko
69 +    modinfo <path to cifs.ko>
70 70
71 -  on kernel/fs/cifs/cifs.ko the list of configuration changes that can be made
71 +  on kernel/fs/smb/client/cifs.ko the list of configuration changes that can be made
72 72  at module initialization time (by running insmod cifs.ko) can be seen.
73 73
74 74  Recommendations
75 75  ===============
76 76
77 -  To improve security the SMB2.1 dialect or later (usually will get SMB3) is now
77 +  To improve security the SMB2.1 dialect or later (usually will get SMB3.1.1) is now
78 78  the new default. To use old dialects (e.g. to mount Windows XP) use "vers=1.0"
79 79  on mount (or vers=2.0 for Windows Vista). Note that the CIFS (vers=1.0) is
80 80  much older and less secure than the default dialect SMB3 which includes
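The modinfo step the hunk above documents can be tried as a short shell sketch. The explicit path is an assumption (it varies by kernel build and distribution); the version-agnostic form, which lets kmod resolve the module for the running kernel, is preferred:

```shell
# List the cifs.ko module parameters; `modinfo -p` prints only the
# parameter names and descriptions settable at module load time.
modinfo -p cifs 2>/dev/null ||
modinfo -p "/lib/modules/$(uname -r)/kernel/fs/smb/client/cifs.ko"
```

Both invocations assume the cifs module is installed for the running kernel; the second path reflects the fs/smb/client location introduced by this merge.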
+6
Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.yaml
···
166 166    resets:
167 167      maxItems: 1
168 168
169 +    mediatek,broken-save-restore-fw:
170 +      type: boolean
171 +      description:
172 +        Asserts that the firmware on this device has issues saving and restoring
173 +        GICR registers when the GIC redistributors are powered off.
174 +
169 175  dependencies:
170 176    mbi-ranges: [ msi-controller ]
171 177    msi-controller: [ mbi-ranges ]
+1 -1
Documentation/devicetree/bindings/usb/cdns,usb3.yaml
···
64 64      description:
65 65        size of memory intended as internal memory for endpoints
66 66        buffers expressed in KB
67 -      $ref: /schemas/types.yaml#/definitions/uint32
67 +      $ref: /schemas/types.yaml#/definitions/uint16
68 68
69 69    cdns,phyrst-a-enable:
70 70      description: Enable resetting of PHY if Rx fail is detected
+1 -1
Documentation/filesystems/cifs/cifsroot.rst → Documentation/filesystems/smb/cifsroot.rst
···
59 59  Enables the kernel to mount the root file system via SMB that are
60 60  located in the <server-ip> and <share> specified in this option.
61 61
62 -  The default mount options are set in fs/cifs/cifsroot.c.
62 +  The default mount options are set in fs/smb/client/cifsroot.c.
63 63
64 64  server-ip
65 65      IPv4 address of the server.
Documentation/filesystems/cifs/index.rst → Documentation/filesystems/smb/index.rst
Documentation/filesystems/cifs/ksmbd.rst → Documentation/filesystems/smb/ksmbd.rst
+1 -1
Documentation/filesystems/index.rst
···
72 72     befs
73 73     bfs
74 74     btrfs
75 -     cifs/index
76 75     ceph
77 76     coda
78 77     configfs
···
110 111     ramfs-rootfs-initramfs
111 112     relay
112 113     romfs
114 +     smb/index
113 115     spufs/index
114 116     squashfs
115 117     sysfs
+8 -24
Documentation/netlink/specs/ethtool.yaml
···
61 61          nested-attributes: bitset-bits
62 62
63 63    -
64 -      name: u64-array
65 -      attributes:
66 -        -
67 -          name: u64
68 -          type: nest
69 -          multi-attr: true
70 -          nested-attributes: u64
71 -    -
72 -      name: s32-array
73 -      attributes:
74 -        -
75 -          name: s32
76 -          type: nest
77 -          multi-attr: true
78 -          nested-attributes: s32
79 -    -
80 64      name: string
81 65      attributes:
82 66        -
···
689 705          type: u8
690 706        -
691 707          name: corrected
692 -          type: nest
693 -          nested-attributes: u64-array
708 +          type: binary
709 +          sub-type: u64
694 710        -
695 711          name: uncorr
696 -          type: nest
697 -          nested-attributes: u64-array
712 +          type: binary
713 +          sub-type: u64
698 714        -
699 715          name: corr-bits
700 -          type: nest
701 -          nested-attributes: u64-array
716 +          type: binary
717 +          sub-type: u64
702 718    -
703 719      name: fec
704 720      attributes:
···
811 827          type: u32
812 828        -
813 829          name: index
814 -          type: nest
815 -          nested-attributes: s32-array
830 +          type: binary
831 +          sub-type: s32
816 832    -
817 833      name: module
818 834      attributes:
+37 -23
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/devlink.rst
···
40 40  ---------------------------------------------
41 41  The flow steering mode parameter controls the flow steering mode of the driver.
42 42  Two modes are supported:
43 +
43 44  1. 'dmfs' - Device managed flow steering.
44 45  2. 'smfs' - Software/Driver managed flow steering.
45 46
···
100 99  By default metadata is enabled on the supported devices in E-switch.
101 100  Metadata is applicable only for E-switch in switchdev mode and
102 101  users may disable it when NONE of the below use cases will be in use:
102 +
103 103  1. HCA is in Dual/multi-port RoCE mode.
104 104  2. VF/SF representor bonding (Usually used for Live migration)
105 105  3. Stacked devices
···
182 180
183 181    $ devlink health diagnose pci/0000:82:00.0 reporter tx
184 182
185 -  NOTE: This command has valid output only when interface is up, otherwise the command has empty output.
183 +  .. note::
184 +     This command has valid output only when interface is up, otherwise the command has empty output.
186 185
187 186  - Show number of tx errors indicated, number of recover flows ended successfully,
188 187    is autorecover enabled and graceful period from last recover::
···
235 232
236 233    $ devlink health dump show pci/0000:82:00.0 reporter fw
237 234
238 -  NOTE: This command can run only on the PF which has fw tracer ownership,
239 -  running it on other PF or any VF will return "Operation not permitted".
235 +  .. note::
236 +     This command can run only on the PF which has fw tracer ownership,
237 +     running it on other PF or any VF will return "Operation not permitted".
240 238
241 239  fw fatal reporter
242 240  -----------------
···
260 256
261 257    $ devlink health dump show pci/0000:82:00.1 reporter fw_fatal
262 258
263 -  NOTE: This command can run only on PF.
259 +  .. note::
260 +     This command can run only on PF.
264 261
265 262  vnic reporter
266 263  -------------
···
270 265  them in realtime.
271 266
272 267  Description of the vnic counters:
273 -  total_q_under_processor_handle: number of queues in an error state due to
274 -  an async error or errored command.
275 -  send_queue_priority_update_flow: number of QP/SQ priority/SL update
276 -  events.
277 -  cq_overrun: number of times CQ entered an error state due to an
278 -  overflow.
279 -  async_eq_overrun: number of times an EQ mapped to async events was
280 -  overrun.
281 -  comp_eq_overrun: number of times an EQ mapped to completion events was
282 -  overrun.
283 -  quota_exceeded_command: number of commands issued and failed due to quota
284 -  exceeded.
285 -  invalid_command: number of commands issued and failed dues to any reason
286 -  other than quota exceeded.
287 -  nic_receive_steering_discard: number of packets that completed RX flow
288 -  steering but were discarded due to a mismatch in flow table.
268 +
269 +  - total_q_under_processor_handle
270 +    number of queues in an error state due to
271 +    an async error or errored command.
272 +  - send_queue_priority_update_flow
273 +    number of QP/SQ priority/SL update events.
274 +  - cq_overrun
275 +    number of times CQ entered an error state due to an overflow.
276 +  - async_eq_overrun
277 +    number of times an EQ mapped to async events was overrun.
278 +  - comp_eq_overrun
279 +    number of times an EQ mapped to completion events was overrun.
280 +  - quota_exceeded_command
281 +    number of commands issued and failed due to quota exceeded.
282 +  - invalid_command
283 +    number of commands issued and failed dues to any reason other than quota
284 +    exceeded.
285 +  - nic_receive_steering_discard
286 +    number of packets that completed RX flow
287 +    steering but were discarded due to a mismatch in flow table.
···
289 288
290 289  User commands examples:
291 -  - Diagnose PF/VF vnic counters
290 +
291 +  - Diagnose PF/VF vnic counters::
292 +
292 293      $ devlink health diagnose pci/0000:82:00.1 reporter vnic
294 +
293 295  - Diagnose representor vnic counters (performed by supplying devlink port of the
294 -    representor, which can be obtained via devlink port command)
296 +    representor, which can be obtained via devlink port command)::
297 +
295 298      $ devlink health diagnose pci/0000:82:00.1/65537 reporter vnic
296 299
297 -  NOTE: This command can run over all interfaces such as PF/VF and representor ports.
300 +  .. note::
301 +     This command can run over all interfaces such as PF/VF and representor ports.
+32 -32
Documentation/trace/histogram.rst
···
35 35  in place of an explicit value field - this is simply a count of
36 36  event hits. If 'values' isn't specified, an implicit 'hitcount'
37 37  value will be automatically created and used as the only value.
38 -  Keys can be any field, or the special string 'stacktrace', which
38 +  Keys can be any field, or the special string 'common_stacktrace', which
39 39  will use the event's kernel stacktrace as the key. The keywords
40 40  'keys' or 'key' can be used to specify keys, and the keywords
41 41  'values', 'vals', or 'val' can be used to specify values. Compound
···
54 54  'compatible' if the fields named in the trigger share the same
55 55  number and type of fields and those fields also have the same names.
56 56  Note that any two events always share the compatible 'hitcount' and
57 -  'stacktrace' fields and can therefore be combined using those
57 +  'common_stacktrace' fields and can therefore be combined using those
58 58  fields, however pointless that may be.
59 59
60 60  'hist' triggers add a 'hist' file to each event's subdirectory.
···
547 547  the hist trigger display symbolic call_sites, we can have the hist
548 548  trigger additionally display the complete set of kernel stack traces
549 549  that led to each call_site. To do that, we simply use the special
550 -  value 'stacktrace' for the key parameter::
550 +  value 'common_stacktrace' for the key parameter::
551 551
552 -    # echo 'hist:keys=stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
552 +    # echo 'hist:keys=common_stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
553 553        /sys/kernel/tracing/events/kmem/kmalloc/trigger
554 554
555 555  The above trigger will use the kernel stack trace in effect when an
···
561 561  every callpath to a kmalloc for a kernel compile)::
562 562
563 563    # cat /sys/kernel/tracing/events/kmem/kmalloc/hist
564 -    # trigger info: hist:keys=stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]
564 +    # trigger info: hist:keys=common_stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]
565 565
566 -    { stacktrace:
566 +    { common_stacktrace:
567 567        __kmalloc_track_caller+0x10b/0x1a0
568 568        kmemdup+0x20/0x50
569 569        hidraw_report_event+0x8a/0x120 [hid]
···
581 581        cpu_startup_entry+0x315/0x3e0
582 582        rest_init+0x7c/0x80
583 583    } hitcount: 3 bytes_req: 21 bytes_alloc: 24
584 -    { stacktrace:
584 +    { common_stacktrace:
585 585        __kmalloc_track_caller+0x10b/0x1a0
586 586        kmemdup+0x20/0x50
587 587        hidraw_report_event+0x8a/0x120 [hid]
···
596 596        do_IRQ+0x5a/0xf0
597 597        ret_from_intr+0x0/0x30
598 598    } hitcount: 3 bytes_req: 21 bytes_alloc: 24
599 -    { stacktrace:
599 +    { common_stacktrace:
600 600        kmem_cache_alloc_trace+0xeb/0x150
601 601        aa_alloc_task_context+0x27/0x40
602 602        apparmor_cred_prepare+0x1f/0x50
···
608 608    .
609 609    .
610 610    .
611 -    { stacktrace:
611 +    { common_stacktrace:
612 612        __kmalloc+0x11b/0x1b0
613 613        i915_gem_execbuffer2+0x6c/0x2c0 [i915]
614 614        drm_ioctl+0x349/0x670 [drm]
···
616 616        SyS_ioctl+0x81/0xa0
617 617        system_call_fastpath+0x12/0x6a
618 618    } hitcount: 17726 bytes_req: 13944120 bytes_alloc: 19593808
619 -    { stacktrace:
619 +    { common_stacktrace:
620 620        __kmalloc+0x11b/0x1b0
621 621        load_elf_phdrs+0x76/0xa0
622 622        load_elf_binary+0x102/0x1650
···
625 625        SyS_execve+0x3a/0x50
626 626        return_from_execve+0x0/0x23
627 627    } hitcount: 33348 bytes_req: 17152128 bytes_alloc: 20226048
628 -    { stacktrace:
628 +    { common_stacktrace:
629 629        kmem_cache_alloc_trace+0xeb/0x150
630 630        apparmor_file_alloc_security+0x27/0x40
631 631        security_file_alloc+0x16/0x20
···
636 636        SyS_open+0x1e/0x20
637 637        system_call_fastpath+0x12/0x6a
638 638    } hitcount: 4766422 bytes_req: 9532844 bytes_alloc: 38131376
639 -    { stacktrace:
639 +    { common_stacktrace:
640 640        __kmalloc+0x11b/0x1b0
641 641        seq_buf_alloc+0x1b/0x50
642 642        seq_read+0x2cc/0x370
···
1026 1026  First we set up an initially paused stacktrace trigger on the
1027 1027  netif_receive_skb event::
1028 1028
1029 -    # echo 'hist:key=stacktrace:vals=len:pause' > \
1029 +    # echo 'hist:key=common_stacktrace:vals=len:pause' > \
1030 1030        /sys/kernel/tracing/events/net/netif_receive_skb/trigger
1031 1031
1032 1032  Next, we set up an 'enable_hist' trigger on the sched_process_exec
···
1060 1060    $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
1061 1061
1062 1062    # cat /sys/kernel/tracing/events/net/netif_receive_skb/hist
1063 -    # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
1063 +    # trigger info: hist:keys=common_stacktrace:vals=len:sort=hitcount:size=2048 [paused]
1064 1064
1065 -    { stacktrace:
1065 +    { common_stacktrace:
1066 1066        __netif_receive_skb_core+0x46d/0x990
1067 1067        __netif_receive_skb+0x18/0x60
1068 1068        netif_receive_skb_internal+0x23/0x90
···
1079 1079        kthread+0xd2/0xf0
1080 1080        ret_from_fork+0x42/0x70
1081 1081    } hitcount: 85 len: 28884
1082 -    { stacktrace:
1082 +    { common_stacktrace:
1083 1083        __netif_receive_skb_core+0x46d/0x990
1084 1084        __netif_receive_skb+0x18/0x60
1085 1085        netif_receive_skb_internal+0x23/0x90
···
1097 1097        irq_thread+0x11f/0x150
1098 1098        kthread+0xd2/0xf0
1099 1099    } hitcount: 98 len: 664329
1100 -    { stacktrace:
1100 +    { common_stacktrace:
1101 1101        __netif_receive_skb_core+0x46d/0x990
1102 1102        __netif_receive_skb+0x18/0x60
1103 1103        process_backlog+0xa8/0x150
···
1115 1115        inet_sendmsg+0x64/0xa0
1116 1116        sock_sendmsg+0x3d/0x50
1117 1117    } hitcount: 115 len: 13030
1118 -    { stacktrace:
1118 +    { common_stacktrace:
1119 1119        __netif_receive_skb_core+0x46d/0x990
1120 1120        __netif_receive_skb+0x18/0x60
1121 1121        netif_receive_skb_internal+0x23/0x90
···
1142 1142  into the histogram. In order to avoid having to set everything up
1143 1143  again, we can just clear the histogram first::
1144 1144
1145 -    # echo 'hist:key=stacktrace:vals=len:clear' >> \
1145 +    # echo 'hist:key=common_stacktrace:vals=len:clear' >> \
1146 1146        /sys/kernel/tracing/events/net/netif_receive_skb/trigger
1147 1147
1148 1148  Just to verify that it is in fact cleared, here's what we now see in
1149 1149  the hist file::
1150 1150
1151 1151    # cat /sys/kernel/tracing/events/net/netif_receive_skb/hist
1152 -    # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
1152 +    # trigger info: hist:keys=common_stacktrace:vals=len:sort=hitcount:size=2048 [paused]
1153 1153
1154 1154    Totals:
1155 1155        Hits: 0
···
1485 1485
1486 1486  And here's an example that shows how to combine histogram data from
1487 1487  any two events even if they don't share any 'compatible' fields
1488 -  other than 'hitcount' and 'stacktrace'. These commands create a
1488 +  other than 'hitcount' and 'common_stacktrace'. These commands create a
1489 1489  couple of triggers named 'bar' using those fields::
1490 1490
1491 -    # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
1491 +    # echo 'hist:name=bar:key=common_stacktrace:val=hitcount' > \
1492 1492        /sys/kernel/tracing/events/sched/sched_process_fork/trigger
1493 -    # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
1493 +    # echo 'hist:name=bar:key=common_stacktrace:val=hitcount' > \
1494 1494        /sys/kernel/tracing/events/net/netif_rx/trigger
1495 1495
1496 1496  And displaying the output of either shows some interesting if
···
1501 1501
1502 1502    # event histogram
1503 1503    #
1504 -    # trigger info: hist:name=bar:keys=stacktrace:vals=hitcount:sort=hitcount:size=2048 [active]
1504 +    # trigger info: hist:name=bar:keys=common_stacktrace:vals=hitcount:sort=hitcount:size=2048 [active]
1505 1505    #
1506 1506
1507 -    { stacktrace:
1507 +    { common_stacktrace:
1508 1508        kernel_clone+0x18e/0x330
1509 1509        kernel_thread+0x29/0x30
1510 1510        kthreadd+0x154/0x1b0
1511 1511        ret_from_fork+0x3f/0x70
1512 1512    } hitcount: 1
1513 -    { stacktrace:
1513 +    { common_stacktrace:
1514 1514        netif_rx_internal+0xb2/0xd0
1515 1515        netif_rx_ni+0x20/0x70
1516 1516        dev_loopback_xmit+0xaa/0xd0
···
1528 1528        call_cpuidle+0x3b/0x60
1529 1529        cpu_startup_entry+0x22d/0x310
1530 1530    } hitcount: 1
1531 -    { stacktrace:
1531 +    { common_stacktrace:
1532 1532        netif_rx_internal+0xb2/0xd0
1533 1533        netif_rx_ni+0x20/0x70
1534 1534        dev_loopback_xmit+0xaa/0xd0
···
1543 1543        SyS_sendto+0xe/0x10
1544 1544        entry_SYSCALL_64_fastpath+0x12/0x6a
1545 1545    } hitcount: 2
1546 -    { stacktrace:
1546 +    { common_stacktrace:
1547 1547        netif_rx_internal+0xb2/0xd0
1548 1548        netif_rx+0x1c/0x60
1549 1549        loopback_xmit+0x6c/0xb0
···
1561 1561        sock_sendmsg+0x38/0x50
1562 1562        ___sys_sendmsg+0x14e/0x270
1563 1563    } hitcount: 76
1564 -    { stacktrace:
1564 +    { common_stacktrace:
1565 1565        netif_rx_internal+0xb2/0xd0
1566 1566        netif_rx+0x1c/0x60
1567 1567        loopback_xmit+0x6c/0xb0
···
1579 1579        sock_sendmsg+0x38/0x50
1580 1580        ___sys_sendmsg+0x269/0x270
1581 1581    } hitcount: 77
1582 -    { stacktrace:
1582 +    { common_stacktrace:
1583 1583        netif_rx_internal+0xb2/0xd0
1584 1584        netif_rx+0x1c/0x60
1585 1585        loopback_xmit+0x6c/0xb0
···
1597 1597        sock_sendmsg+0x38/0x50
1598 1598        SYSC_sendto+0xef/0x170
1599 1599    } hitcount: 88
1600 -    { stacktrace:
1600 +    { common_stacktrace:
1601 1601        kernel_clone+0x18e/0x330
1602 1602        SyS_clone+0x19/0x20
1603 1603        entry_SYSCALL_64_fastpath+0x12/0x6a
···
1949 1949
1950 1950    # cd /sys/kernel/tracing
1951 1951    # echo 's:block_lat pid_t pid; u64 delta; unsigned long[] stack;' > dynamic_events
1952 -    # echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=stacktrace if prev_state == 2' >> events/sched/sched_switch/trigger
1952 +    # echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=common_stacktrace if prev_state == 2' >> events/sched/sched_switch/trigger
1953 1953    # echo 'hist:keys=prev_pid:delta=common_timestamp.usecs-$ts,s=$st:onmax($delta).trace(block_lat,prev_pid,$delta,$s)' >> events/sched/sched_switch/trigger
1954 1954    # echo 1 > events/synthetic/block_lat/enable
1955 1955    # cat trace
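The key rename above ('stacktrace' to 'common_stacktrace') can be exercised end to end with a small sketch, assuming root, CONFIG_HIST_TRIGGERS=y, and tracefs mounted at /sys/kernel/tracing:

```shell
cd /sys/kernel/tracing

# Attach: aggregate kmalloc request sizes per kernel call stack,
# using the renamed common_stacktrace key.
echo 'hist:keys=common_stacktrace:vals=bytes_req:sort=hitcount' \
        > events/kmem/kmalloc/trigger

sleep 1
head -40 events/kmem/kmalloc/hist

# Detach: the same trigger string prefixed with '!'.
echo '!hist:keys=common_stacktrace:vals=bytes_req:sort=hitcount' \
        > events/kmem/kmalloc/trigger
```

On kernels from before this merge the same commands are spelled with `keys=stacktrace`; the histogram output format is unchanged apart from the key name.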
+1 -1
Documentation/userspace-api/ioctl/ioctl-number.rst
···
363 363  0xCC  00-0F  drivers/misc/ibmvmc.h                                   pseries VMC driver
364 364  0xCD  01     linux/reiserfs_fs.h
365 365  0xCE  01-02  uapi/linux/cxl_mem.h                                    Compute Express Link Memory Devices
366 -  0xCF  02     fs/cifs/ioctl.c
366 +  0xCF  02     fs/smb/client/cifs_ioctl.h
367 367  0xDB  00-0F  drivers/char/mwave/mwavepub.h
368 368  0xDD  00-3F  ZFCP device driver see drivers/s390/scsi/
369 369               <mailto:aherrman@de.ibm.com>
+21 -11
MAINTAINERS
···
956 956  F:	drivers/net/ethernet/amazon/
957 957
958 958  AMAZON RDMA EFA DRIVER
959 -  M:	Gal Pressman <galpress@amazon.com>
959 +  M:	Michael Margolin <mrgolin@amazon.com>
960 +  R:	Gal Pressman <gal.pressman@linux.dev>
960 961  R:	Yossi Leybovich <sleybo@amazon.com>
961 962  L:	linux-rdma@vger.kernel.org
962 963  S:	Supported
···
2430 2429  N:	at91
2431 2430  N:	atmel
2432 2431
2432 +  ARM/MICROCHIP (ARM64) SoC support
2433 +  M:	Conor Dooley <conor@kernel.org>
2434 +  M:	Nicolas Ferre <nicolas.ferre@microchip.com>
2435 +  M:	Claudiu Beznea <claudiu.beznea@microchip.com>
2436 +  L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
2437 +  S:	Supported
2438 +  T:	git https://git.kernel.org/pub/scm/linux/kernel/git/at91/linux.git
2439 +  F:	arch/arm64/boot/dts/microchip/
2440 +
2433 2441  ARM/Microchip Sparx5 SoC support
2434 2442  M:	Lars Povlsen <lars.povlsen@microchip.com>
2435 2443  M:	Steen Hegelund <Steen.Hegelund@microchip.com>
···
2446 2436  M:	UNGLinuxDriver@microchip.com
2447 2437  L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
2448 2438  S:	Supported
2449 -  T:	git git://github.com/microchip-ung/linux-upstream.git
2450 -  F:	arch/arm64/boot/dts/microchip/
2439 +  F:	arch/arm64/boot/dts/microchip/sparx*
2451 2440  F:	drivers/net/ethernet/microchip/vcap/
2452 2441  F:	drivers/pinctrl/pinctrl-microchip-sgpio.c
2453 2442  N:	sparx5
···
3545 3536  F:	fs/befs/
3546 3537
3547 3538  BFQ I/O SCHEDULER
3548 -  M:	Paolo Valente <paolo.valente@linaro.org>
3539 +  M:	Paolo Valente <paolo.valente@unimore.it>
3549 3540  M:	Jens Axboe <axboe@kernel.dk>
3550 3541  L:	linux-block@vger.kernel.org
3551 3542  S:	Maintained
···
5139 5130
5140 5131  COMMON INTERNET FILE SYSTEM CLIENT (CIFS and SMB3)
5141 5132  M:	Steve French <sfrench@samba.org>
5142 -  R:	Paulo Alcantara <pc@cjr.nz> (DFS, global name space)
5133 +  R:	Paulo Alcantara <pc@manguebit.com> (DFS, global name space)
5143 5134  R:	Ronnie Sahlberg <lsahlber@redhat.com> (directory leases, sparse files)
5144 5135  R:	Shyam Prasad N <sprasad@microsoft.com> (multichannel)
5145 5136  R:	Tom Talpey <tom@talpey.com> (RDMA, smbdirect)
···
5149 5140  W:	https://wiki.samba.org/index.php/LinuxCIFS
5150 5141  T:	git git://git.samba.org/sfrench/cifs-2.6.git
5151 5142  F:	Documentation/admin-guide/cifs/
5152 -  F:	fs/cifs/
5153 -  F:	fs/smbfs_common/
5143 +  F:	fs/smb/client/
5144 +  F:	fs/smb/common/
5154 5145  F:	include/uapi/linux/cifs
5155 5146
5156 5147  COMPACTPCI HOTPLUG CORE
···
9350 9341
9351 9342  HISILICON ROCE DRIVER
9352 9343  M:	Haoyue Xu <xuhaoyue1@hisilicon.com>
9353 -  M:	Wenpeng Liang <liangwenpeng@huawei.com>
9344 +  M:	Junxian Huang <huangjunxian6@hisilicon.com>
9354 9345  L:	linux-rdma@vger.kernel.org
9355 9346  S:	Maintained
9356 9347  F:	Documentation/devicetree/bindings/infiniband/hisilicon-hns-roce.txt
···
11315 11306  L:	linux-cifs@vger.kernel.org
11316 11307  S:	Maintained
11317 11308  T:	git git://git.samba.org/ksmbd.git
11318 -  F:	Documentation/filesystems/cifs/ksmbd.rst
11319 -  F:	fs/ksmbd/
11320 -  F:	fs/smbfs_common/
11309 +  F:	Documentation/filesystems/smb/ksmbd.rst
11310 +  F:	fs/smb/common/
11311 +  F:	fs/smb/server/
11321 11312
11322 11313  KERNEL UNIT TESTING FRAMEWORK (KUnit)
11323 11314  M:	Brendan Higgins <brendanhiggins@google.com>
···
14941 14932
14942 14933  NTFS FILESYSTEM
14943 14934  M:	Anton Altaparmakov <anton@tuxera.com>
14935 +  R:	Namjae Jeon <linkinjeon@kernel.org>
14944 14936  L:	linux-ntfs-dev@lists.sourceforge.net
14945 14937  S:	Supported
14946 14938  W:	http://www.tuxera.com/
+1 -1
Makefile
···
2 2  VERSION = 6
3 3  PATCHLEVEL = 4
4 4  SUBLEVEL = 0
5 -  EXTRAVERSION = -rc3
5 +  EXTRAVERSION = -rc4
6 6  NAME = Hurr durr I'ma ninja sloth
7 7
8 8  # *DOCUMENTATION*
+1
arch/arm/boot/dts/imx6qdl-mba6.dtsi
···
209 209  	pinctrl-names = "default";
210 210  	pinctrl-0 = <&pinctrl_pcie>;
211 211  	reset-gpio = <&gpio6 7 GPIO_ACTIVE_LOW>;
212 +  	vpcie-supply = <&reg_pcie>;
212 213  	status = "okay";
213 214  };
214 215
+7
arch/arm/boot/dts/imx6ull-dhcor-som.dtsi
···
8 8  #include <dt-bindings/input/input.h>
9 9  #include <dt-bindings/leds/common.h>
10 10  #include <dt-bindings/pwm/pwm.h>
11 +  #include <dt-bindings/regulator/dlg,da9063-regulator.h>
11 12  #include "imx6ull.dtsi"
12 13
13 14  / {
···
85 84
86 85  		regulators {
87 86  			vdd_soc_in_1v4: buck1 {
87 +  				regulator-allowed-modes = <DA9063_BUCK_MODE_SLEEP>; /* PFM */
88 88  				regulator-always-on;
89 89  				regulator-boot-on;
90 +  				regulator-initial-mode = <DA9063_BUCK_MODE_SLEEP>;
90 91  				regulator-max-microvolt = <1400000>;
91 92  				regulator-min-microvolt = <1400000>;
92 93  				regulator-name = "vdd_soc_in_1v4";
93 94  			};
94 95
95 96  			vcc_3v3: buck2 {
97 +  				regulator-allowed-modes = <DA9063_BUCK_MODE_SYNC>; /* PWM */
96 98  				regulator-always-on;
97 99  				regulator-boot-on;
100 +  				regulator-initial-mode = <DA9063_BUCK_MODE_SYNC>;
98 101  				regulator-max-microvolt = <3300000>;
99 102  				regulator-min-microvolt = <3300000>;
100 103  				regulator-name = "vcc_3v3";
···
111 106  			 * the voltage is set to 1.5V.
112 107  			 */
113 108  			vcc_ddr_1v35: buck3 {
109 +  				regulator-allowed-modes = <DA9063_BUCK_MODE_SYNC>; /* PWM */
114 110  				regulator-always-on;
115 111  				regulator-boot-on;
112 +  				regulator-initial-mode = <DA9063_BUCK_MODE_SYNC>;
116 113  				regulator-max-microvolt = <1500000>;
117 114  				regulator-min-microvolt = <1500000>;
118 115  				regulator-name = "vcc_ddr_1v35";
+1
arch/arm/boot/dts/vexpress-v2p-ca5s.dts
···
132 132  		reg = <0x2c0f0000 0x1000>;
133 133  		interrupts = <0 84 4>;
134 134  		cache-level = <2>;
135 +  		cache-unified;
135 136  	};
136 137
137 138  	pmu {
+1
arch/arm64/boot/dts/arm/foundation-v8.dtsi
···
59 59  	L2_0: l2-cache0 {
60 60  		compatible = "cache";
61 61  		cache-level = <2>;
62 +  		cache-unified;
62 63  	};
63 64  };
64 65
+1
arch/arm64/boot/dts/arm/rtsm_ve-aemv8a.dts
···
72 72  	L2_0: l2-cache0 {
73 73  		compatible = "cache";
74 74  		cache-level = <2>;
75 +  		cache-unified;
75 76  	};
76 77  };
77 78
+1
arch/arm64/boot/dts/arm/vexpress-v2f-1xv7-ca53x2.dts
···
58 58  	L2_0: l2-cache0 {
59 59  		compatible = "cache";
60 60  		cache-level = <2>;
61 +  		cache-unified;
61 62  	};
62 63  };
63 64
+1
arch/arm64/boot/dts/freescale/imx8-ss-conn.dtsi
···
171 171  		interrupt-names = "host", "peripheral", "otg", "wakeup";
172 172  		phys = <&usb3_phy>;
173 173  		phy-names = "cdns3,usb3-phy";
174 +  		cdns,on-chip-buff-size = /bits/ 16 <18>;
174 175  		status = "disabled";
175 176  	};
176 177  };
+7 -1
arch/arm64/boot/dts/freescale/imx8mn-var-som.dtsi
···
98 98  		#address-cells = <1>;
99 99  		#size-cells = <0>;
100 100
101 -  		ethphy: ethernet-phy@4 {
101 +  		ethphy: ethernet-phy@4 { /* AR8033 or ADIN1300 */
102 102  			compatible = "ethernet-phy-ieee802.3-c22";
103 103  			reg = <4>;
104 104  			reset-gpios = <&gpio1 9 GPIO_ACTIVE_LOW>;
105 105  			reset-assert-us = <10000>;
106 +  			/*
107 +  			 * Deassert delay:
108 +  			 * ADIN1300 requires 5ms.
109 +  			 * AR8033 requires 1ms.
110 +  			 */
111 +  			reset-deassert-us = <20000>;
106 112  		};
107 113  	};
108 114  };
+15 -13
arch/arm64/boot/dts/freescale/imx8mn.dtsi
···
1069 1069  					 <&clk IMX8MN_CLK_DISP_APB_ROOT>,
1070 1070  					 <&clk IMX8MN_CLK_DISP_AXI_ROOT>;
1071 1071  				clock-names = "pix", "axi", "disp_axi";
1072 -  				assigned-clocks = <&clk IMX8MN_CLK_DISP_PIXEL_ROOT>,
1073 -  						  <&clk IMX8MN_CLK_DISP_AXI>,
1074 -  						  <&clk IMX8MN_CLK_DISP_APB>;
1075 -  				assigned-clock-parents = <&clk IMX8MN_CLK_DISP_PIXEL>,
1076 -  							 <&clk IMX8MN_SYS_PLL2_1000M>,
1077 -  							 <&clk IMX8MN_SYS_PLL1_800M>;
1078 -  				assigned-clock-rates = <594000000>, <500000000>, <200000000>;
1079 1072  				interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
1080 1073  				power-domains = <&disp_blk_ctrl IMX8MN_DISPBLK_PD_LCDIF>;
1081 1074  				status = "disabled";
···
1086 1093  				clocks = <&clk IMX8MN_CLK_DSI_CORE>,
1087 1094  					 <&clk IMX8MN_CLK_DSI_PHY_REF>;
1088 1095  				clock-names = "bus_clk", "sclk_mipi";
1089 -  				assigned-clocks = <&clk IMX8MN_CLK_DSI_CORE>,
1090 -  						  <&clk IMX8MN_CLK_DSI_PHY_REF>;
1091 -  				assigned-clock-parents = <&clk IMX8MN_SYS_PLL1_266M>,
1092 -  							 <&clk IMX8MN_CLK_24M>;
1093 -  				assigned-clock-rates = <266000000>, <24000000>;
1094 -  				samsung,pll-clock-frequency = <24000000>;
1095 1096  				interrupts = <GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>;
1096 1097  				power-domains = <&disp_blk_ctrl IMX8MN_DISPBLK_PD_MIPI_DSI>;
1097 1098  				status = "disabled";
···
1129 1142  					      "lcdif-axi", "lcdif-apb", "lcdif-pix",
1130 1143  					      "dsi-pclk", "dsi-ref",
1131 1144  					      "csi-aclk", "csi-pclk";
1145 +  				assigned-clocks = <&clk IMX8MN_CLK_DSI_CORE>,
1146 +  						  <&clk IMX8MN_CLK_DSI_PHY_REF>,
1147 +  						  <&clk IMX8MN_CLK_DISP_PIXEL>,
1148 +  						  <&clk IMX8MN_CLK_DISP_AXI>,
1149 +  						  <&clk IMX8MN_CLK_DISP_APB>;
1150 +  				assigned-clock-parents = <&clk IMX8MN_SYS_PLL1_266M>,
1151 +  							 <&clk IMX8MN_CLK_24M>,
1152 +  							 <&clk IMX8MN_VIDEO_PLL1_OUT>,
1153 +  							 <&clk IMX8MN_SYS_PLL2_1000M>,
1154 +  							 <&clk IMX8MN_SYS_PLL1_800M>;
1155 +  				assigned-clock-rates = <266000000>,
1156 +  						       <24000000>,
1157 +  						       <594000000>,
1158 +  						       <500000000>,
1159 +  						       <200000000>;
1132 1160  				#power-domain-cells = <1>;
1133 1161
1134 1162
+9 -16
arch/arm64/boot/dts/freescale/imx8mp.dtsi
···
1211 1211  					 <&clk IMX8MP_CLK_MEDIA_APB_ROOT>,
1212 1212  					 <&clk IMX8MP_CLK_MEDIA_AXI_ROOT>;
1213 1213  				clock-names = "pix", "axi", "disp_axi";
1214 -  				assigned-clocks = <&clk IMX8MP_CLK_MEDIA_DISP1_PIX_ROOT>,
1215 -  						  <&clk IMX8MP_CLK_MEDIA_AXI>,
1216 -  						  <&clk IMX8MP_CLK_MEDIA_APB>;
1217 -  				assigned-clock-parents = <&clk IMX8MP_CLK_MEDIA_DISP1_PIX>,
1218 -  							 <&clk IMX8MP_SYS_PLL2_1000M>,
1219 -  							 <&clk IMX8MP_SYS_PLL1_800M>;
1220 -  				assigned-clock-rates = <594000000>, <500000000>, <200000000>;
1221 1214  				interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
1222 1215  				power-domains = <&media_blk_ctrl IMX8MP_MEDIABLK_PD_LCDIF_1>;
1223 1216  				status = "disabled";
···
1230 1237  					 <&clk IMX8MP_CLK_MEDIA_APB_ROOT>,
1231 1238  					 <&clk IMX8MP_CLK_MEDIA_AXI_ROOT>;
1232 1239  				clock-names = "pix", "axi", "disp_axi";
1233 -  				assigned-clocks = <&clk IMX8MP_CLK_MEDIA_DISP2_PIX>,
1234 -  						  <&clk IMX8MP_VIDEO_PLL1>;
1235 -  				assigned-clock-parents = <&clk IMX8MP_VIDEO_PLL1_OUT>,
1236 -  							 <&clk IMX8MP_VIDEO_PLL1_REF_SEL>;
1237 -  				assigned-clock-rates = <0>, <1039500000>;
1238 1240  				power-domains = <&media_blk_ctrl IMX8MP_MEDIABLK_PD_LCDIF_2>;
1239 1241  				status = "disabled";
1240 1242
···
1284 1296  						    "disp1", "disp2", "isp", "phy";
1285 1297
1286 1298  				assigned-clocks = <&clk IMX8MP_CLK_MEDIA_AXI>,
1287 -  						  <&clk IMX8MP_CLK_MEDIA_APB>;
1299 +  						  <&clk IMX8MP_CLK_MEDIA_APB>,
1300 +  						  <&clk IMX8MP_CLK_MEDIA_DISP1_PIX>,
1301 +  						  <&clk IMX8MP_CLK_MEDIA_DISP2_PIX>,
1302 +  						  <&clk IMX8MP_VIDEO_PLL1>;
1288 1303  				assigned-clock-parents = <&clk IMX8MP_SYS_PLL2_1000M>,
1289 -  							 <&clk IMX8MP_SYS_PLL1_800M>;
1290 -  				assigned-clock-rates = <500000000>, <200000000>;
1291 -
1304 +  							 <&clk IMX8MP_SYS_PLL1_800M>,
1305 +  							 <&clk IMX8MP_VIDEO_PLL1_OUT>,
1306 +  							 <&clk IMX8MP_VIDEO_PLL1_OUT>;
1307 +  				assigned-clock-rates = <500000000>, <200000000>,
1308 +  						       <0>, <0>, <1039500000>;
1292 1309  				#power-domain-cells = <1>;
1293 1310
1294 1311  				lvds_bridge: bridge@5c {
+6
arch/arm64/boot/dts/freescale/imx8x-colibri-eval-v3.dtsi
···
33 33  	};
34 34  };
35 35
36 +  &iomuxc {
37 +  	pinctrl-names = "default";
38 +  	pinctrl-0 = <&pinctrl_ext_io0>, <&pinctrl_hog0>, <&pinctrl_hog1>,
39 +  		    <&pinctrl_lpspi2_cs2>;
40 +  };
41 +
36 42  /* Colibri SPI */
37 43  &lpspi2 {
38 44  	status = "okay";
+1 -2
arch/arm64/boot/dts/freescale/imx8x-colibri-iris.dtsi
···
48 48  			   <IMX8QXP_SAI0_TXFS_LSIO_GPIO0_IO28		0x20>,		/* SODIMM 101 */
49 49  			   <IMX8QXP_SAI0_RXD_LSIO_GPIO0_IO27		0x20>,		/* SODIMM 97 */
50 50  			   <IMX8QXP_ENET0_RGMII_RXC_LSIO_GPIO5_IO03	0x06000020>,	/* SODIMM 85 */
51 -  			   <IMX8QXP_SAI0_TXC_LSIO_GPIO0_IO26		0x20>,		/* SODIMM 79 */
52 -  			   <IMX8QXP_QSPI0A_DATA1_LSIO_GPIO3_IO10	0x06700041>;	/* SODIMM 45 */
51 +  			   <IMX8QXP_SAI0_TXC_LSIO_GPIO0_IO26		0x20>;		/* SODIMM 79 */
53 52
54 53  	pinctrl_uart1_forceoff: uart1forceoffgrp {
+8 -6
arch/arm64/boot/dts/freescale/imx8x-colibri.dtsi
···
363 363  /* TODO VPU Encoder/Decoder */
364 364
365 365  &iomuxc {
366 -  	pinctrl-names = "default";
367 -  	pinctrl-0 = <&pinctrl_ext_io0>, <&pinctrl_hog0>, <&pinctrl_hog1>,
368 -  		    <&pinctrl_hog2>, <&pinctrl_lpspi2_cs2>;
369 -
370 366  	/* On-module touch pen-down interrupt */
371 367  	pinctrl_ad7879_int: ad7879intgrp {
372 368  		fsl,pins = <IMX8QXP_MIPI_CSI0_I2C0_SCL_LSIO_GPIO3_IO05	0x21>;
···
495 499  	};
496 500
497 501  	pinctrl_hog1: hog1grp {
498 -  		fsl,pins = <IMX8QXP_CSI_MCLK_LSIO_GPIO3_IO01		0x20>,		/* SODIMM 75 */
499 -  			   <IMX8QXP_QSPI0A_SCLK_LSIO_GPIO3_IO16		0x20>;		/* SODIMM 93 */
502 +  		fsl,pins = <IMX8QXP_QSPI0A_SCLK_LSIO_GPIO3_IO16		0x20>;		/* SODIMM 93 */
500 503  	};
501 504
502 505  	pinctrl_hog2: hog2grp {
···
769 774  		fsl,pins = <IMX8QXP_SCU_BOOT_MODE3_SCU_DSC_RTC_CLOCK_OUTPUT_32K	0x20>;
770 775  	};
771 776  };
777 +
778 +  /* Delete peripherals which are not present on SOC, but are defined in imx8-ss-*.dtsi */
779 +
780 +  /delete-node/ &adc1;
781 +  /delete-node/ &adc1_lpcg;
782 +  /delete-node/ &dsp;
783 +  /delete-node/ &dsp_lpcg;
+1
arch/mips/Kconfig
··· 79 79 select HAVE_LD_DEAD_CODE_DATA_ELIMINATION 80 80 select HAVE_MOD_ARCH_SPECIFIC 81 81 select HAVE_NMI 82 + select HAVE_PATA_PLATFORM 82 83 select HAVE_PERF_EVENTS 83 84 select HAVE_PERF_REGS 84 85 select HAVE_PERF_USER_STACK_DUMP
+15 -12
arch/mips/alchemy/common/dbdma.c
··· 30 30 * 31 31 */ 32 32 33 + #include <linux/dma-map-ops.h> /* for dma_default_coherent */ 33 34 #include <linux/init.h> 34 35 #include <linux/kernel.h> 35 36 #include <linux/slab.h> ··· 624 623 dp->dscr_cmd0 &= ~DSCR_CMD0_IE; 625 624 626 625 /* 627 - * There is an errata on the Au1200/Au1550 parts that could result 628 - * in "stale" data being DMA'ed. It has to do with the snoop logic on 629 - * the cache eviction buffer. DMA_NONCOHERENT is on by default for 630 - * these parts. If it is fixed in the future, these dma_cache_inv will 631 - * just be nothing more than empty macros. See io.h. 626 + * There is an erratum on certain Au1200/Au1550 revisions that could 627 + * result in "stale" data being DMA'ed. It has to do with the snoop 628 + * logic on the cache eviction buffer. dma_default_coherent is set 629 + * to false on these parts. 632 630 */ 633 - dma_cache_wback_inv((unsigned long)buf, nbytes); 631 + if (!dma_default_coherent) 632 + dma_cache_wback_inv(KSEG0ADDR(buf), nbytes); 634 633 dp->dscr_cmd0 |= DSCR_CMD0_V; /* Let it rip */ 635 634 wmb(); /* drain writebuffer */ 636 635 dma_cache_wback_inv((unsigned long)dp, sizeof(*dp)); 637 636 ctp->chan_ptr->ddma_dbell = 0; 638 + wmb(); /* force doorbell write out to dma engine */ 638 639 639 640 /* Get next descriptor pointer. */ 640 641 ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr)); ··· 687 685 dp->dscr_source1, dp->dscr_dest0, dp->dscr_dest1); 688 686 #endif 689 687 /* 690 - * There is an errata on the Au1200/Au1550 parts that could result in 691 - * "stale" data being DMA'ed. It has to do with the snoop logic on the 692 - * cache eviction buffer. DMA_NONCOHERENT is on by default for these 693 - * parts. If it is fixed in the future, these dma_cache_inv will just 694 - * be nothing more than empty macros. See io.h. 688 + * There is an erratum on certain Au1200/Au1550 revisions that could 689 + * result in "stale" data being DMA'ed. It has to do with the snoop 690 + * logic on the cache eviction buffer. dma_default_coherent is set 691 + * to false on these parts. 695 692 */ 696 - dma_cache_inv((unsigned long)buf, nbytes); 693 + if (!dma_default_coherent) 694 + dma_cache_inv(KSEG0ADDR(buf), nbytes); 697 695 dp->dscr_cmd0 |= DSCR_CMD0_V; /* Let it rip */ 698 696 wmb(); /* drain writebuffer */ 699 697 dma_cache_wback_inv((unsigned long)dp, sizeof(*dp)); 700 698 ctp->chan_ptr->ddma_dbell = 0; 699 + wmb(); /* force doorbell write out to dma engine */ 701 700 702 701 /* Get next descriptor pointer. */ 703 702 ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr));
+5
arch/mips/kernel/cpu-probe.c
··· 1502 1502 break; 1503 1503 } 1504 1504 break; 1505 + case PRID_IMP_NETLOGIC_AU13XX: 1506 + c->cputype = CPU_ALCHEMY; 1507 + __cpu_name[cpu] = "Au1300"; 1508 + break; 1505 1509 } 1506 1510 } 1507 1511 ··· 1867 1863 cpu_probe_mips(c, cpu); 1868 1864 break; 1869 1865 case PRID_COMP_ALCHEMY: 1866 + case PRID_COMP_NETLOGIC: 1870 1867 cpu_probe_alchemy(c, cpu); 1871 1868 break; 1872 1869 case PRID_COMP_SIBYTE:
+5 -4
arch/mips/kernel/setup.c
··· 158 158 pr_err("initrd start must be page aligned\n"); 159 159 goto disable; 160 160 } 161 - if (initrd_start < PAGE_OFFSET) { 162 - pr_err("initrd start < PAGE_OFFSET\n"); 163 - goto disable; 164 - } 165 161 166 162 /* 167 163 * Sanitize initrd addresses. For example firmware ··· 169 173 end = __pa(initrd_end); 170 174 initrd_end = (unsigned long)__va(end); 171 175 initrd_start = (unsigned long)__va(__pa(initrd_start)); 176 + 177 + if (initrd_start < PAGE_OFFSET) { 178 + pr_err("initrd start < PAGE_OFFSET\n"); 179 + goto disable; 180 + } 172 181 173 182 ROOT_DEV = Root_RAM0; 174 183 return PFN_UP(end);
+4
arch/parisc/Kconfig
··· 130 130 config STACKTRACE_SUPPORT 131 131 def_bool y 132 132 133 + config LOCKDEP_SUPPORT 134 + bool 135 + default y 136 + 133 137 config ISA_DMA_API 134 138 bool 135 139
+11
arch/parisc/Kconfig.debug
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 + # 3 + config LIGHTWEIGHT_SPINLOCK_CHECK 4 + bool "Enable lightweight spinlock checks" 5 + depends on SMP && !DEBUG_SPINLOCK 6 + default y 7 + help 8 + Add checks with low performance impact to the spinlock functions 9 + to catch memory overwrites at runtime. For more advanced 10 + spinlock debugging you should choose the DEBUG_SPINLOCK option 11 + which will detect uninitialized spinlocks too. 12 + If unsure say Y here.
+4
arch/parisc/include/asm/cacheflush.h
··· 48 48 49 49 #define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->i_pages) 50 50 #define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages) 51 + #define flush_dcache_mmap_lock_irqsave(mapping, flags) \ 52 + xa_lock_irqsave(&mapping->i_pages, flags) 53 + #define flush_dcache_mmap_unlock_irqrestore(mapping, flags) \ 54 + xa_unlock_irqrestore(&mapping->i_pages, flags) 51 55 52 56 #define flush_icache_page(vma,page) do { \ 53 57 flush_kernel_dcache_page_addr(page_address(page)); \
+34 -5
arch/parisc/include/asm/spinlock.h
··· 7 7 #include <asm/processor.h> 8 8 #include <asm/spinlock_types.h> 9 9 10 + #define SPINLOCK_BREAK_INSN 0x0000c006 /* break 6,6 */ 11 + 12 + static inline void arch_spin_val_check(int lock_val) 13 + { 14 + if (IS_ENABLED(CONFIG_LIGHTWEIGHT_SPINLOCK_CHECK)) 15 + asm volatile( "andcm,= %0,%1,%%r0\n" 16 + ".word %2\n" 17 + : : "r" (lock_val), "r" (__ARCH_SPIN_LOCK_UNLOCKED_VAL), 18 + "i" (SPINLOCK_BREAK_INSN)); 19 + } 20 + 10 21 static inline int arch_spin_is_locked(arch_spinlock_t *x) 11 22 { 12 - volatile unsigned int *a = __ldcw_align(x); 13 - return READ_ONCE(*a) == 0; 23 + volatile unsigned int *a; 24 + int lock_val; 25 + 26 + a = __ldcw_align(x); 27 + lock_val = READ_ONCE(*a); 28 + arch_spin_val_check(lock_val); 29 + return (lock_val == 0); 14 30 } 15 31 16 32 static inline void arch_spin_lock(arch_spinlock_t *x) ··· 34 18 volatile unsigned int *a; 35 19 36 20 a = __ldcw_align(x); 37 - while (__ldcw(a) == 0) 21 + do { 22 + int lock_val_old; 23 + 24 + lock_val_old = __ldcw(a); 25 + arch_spin_val_check(lock_val_old); 26 + if (lock_val_old) 27 + return; /* got lock */ 28 + 29 + /* wait until we should try to get lock again */ 38 30 while (*a == 0) 39 31 continue; 32 + } while (1); 40 33 } 41 34 42 35 static inline void arch_spin_unlock(arch_spinlock_t *x) ··· 54 29 55 30 a = __ldcw_align(x); 56 31 /* Release with ordered store. */ 57 - __asm__ __volatile__("stw,ma %0,0(%1)" : : "r"(1), "r"(a) : "memory"); 32 + __asm__ __volatile__("stw,ma %0,0(%1)" 33 + : : "r"(__ARCH_SPIN_LOCK_UNLOCKED_VAL), "r"(a) : "memory"); 58 34 } 59 35 60 36 static inline int arch_spin_trylock(arch_spinlock_t *x) 61 37 { 62 38 volatile unsigned int *a; 39 + int lock_val; 63 40 64 41 a = __ldcw_align(x); 65 - return __ldcw(a) != 0; 42 + lock_val = __ldcw(a); 43 + arch_spin_val_check(lock_val); 44 + return lock_val != 0; 66 45 } 67 46 68 47 /*
+6 -2
arch/parisc/include/asm/spinlock_types.h
··· 2 2 #ifndef __ASM_SPINLOCK_TYPES_H 3 3 #define __ASM_SPINLOCK_TYPES_H 4 4 5 + #define __ARCH_SPIN_LOCK_UNLOCKED_VAL 0x1a46 6 + 5 7 typedef struct { 6 8 #ifdef CONFIG_PA20 7 9 volatile unsigned int slock; 8 - # define __ARCH_SPIN_LOCK_UNLOCKED { 1 } 10 + # define __ARCH_SPIN_LOCK_UNLOCKED { __ARCH_SPIN_LOCK_UNLOCKED_VAL } 9 11 #else 10 12 volatile unsigned int lock[4]; 11 - # define __ARCH_SPIN_LOCK_UNLOCKED { { 1, 1, 1, 1 } } 13 + # define __ARCH_SPIN_LOCK_UNLOCKED \ 14 + { { __ARCH_SPIN_LOCK_UNLOCKED_VAL, __ARCH_SPIN_LOCK_UNLOCKED_VAL, \ 15 + __ARCH_SPIN_LOCK_UNLOCKED_VAL, __ARCH_SPIN_LOCK_UNLOCKED_VAL } } 12 16 #endif 13 17 } arch_spinlock_t; 14 18
+1 -1
arch/parisc/kernel/alternative.c
··· 25 25 { 26 26 struct alt_instr *entry; 27 27 int index = 0, applied = 0; 28 - int num_cpus = num_online_cpus(); 28 + int num_cpus = num_present_cpus(); 29 29 u16 cond_check; 30 30 31 31 cond_check = ALT_COND_ALWAYS |
+3 -2
arch/parisc/kernel/cache.c
··· 399 399 unsigned long offset; 400 400 unsigned long addr, old_addr = 0; 401 401 unsigned long count = 0; 402 + unsigned long flags; 402 403 pgoff_t pgoff; 403 404 404 405 if (mapping && !mapping_mapped(mapping)) { ··· 421 420 * to flush one address here for them all to become coherent 422 421 * on machines that support equivalent aliasing 423 422 */ 424 - flush_dcache_mmap_lock(mapping); 423 + flush_dcache_mmap_lock_irqsave(mapping, flags); 425 424 vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { 426 425 offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; 427 426 addr = mpnt->vm_start + offset; ··· 461 460 } 462 461 WARN_ON(++count == 4096); 463 462 } 464 - flush_dcache_mmap_unlock(mapping); 463 + flush_dcache_mmap_unlock_irqrestore(mapping, flags); 465 464 } 466 465 EXPORT_SYMBOL(flush_dcache_page); 467 466
+17 -1
arch/parisc/kernel/pci-dma.c
··· 446 446 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size, 447 447 enum dma_data_direction dir) 448 448 { 449 + /* 450 + * fdc: The data cache line is written back to memory, if and only if 451 + * it is dirty, and then invalidated from the data cache. 452 + */ 449 453 flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size); 450 454 } 451 455 452 456 void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, 453 457 enum dma_data_direction dir) 454 458 { 455 - flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size); 459 + unsigned long addr = (unsigned long) phys_to_virt(paddr); 460 + 461 + switch (dir) { 462 + case DMA_TO_DEVICE: 463 + case DMA_BIDIRECTIONAL: 464 + flush_kernel_dcache_range(addr, size); 465 + return; 466 + case DMA_FROM_DEVICE: 467 + purge_kernel_dcache_range_asm(addr, addr + size); 468 + return; 469 + default: 470 + BUG(); 471 + } 456 472 }
+8 -3
arch/parisc/kernel/process.c
··· 122 122 /* It seems we have no way to power the system off via 123 123 * software. The user has to press the button himself. */ 124 124 125 - printk(KERN_EMERG "System shut down completed.\n" 126 - "Please power this system off now."); 125 + printk("Power off or press RETURN to reboot.\n"); 127 126 128 127 /* prevent soft lockup/stalled CPU messages for endless loop. */ 129 128 rcu_sysrq_start(); 130 129 lockup_detector_soft_poweroff(); 131 - for (;;); 130 + while (1) { 131 + /* reboot if user presses RETURN key */ 132 + if (pdc_iodc_getc() == 13) { 133 + printk("Rebooting...\n"); 134 + machine_restart(NULL); 135 + } 136 + } 132 137 } 133 138 134 139 void (*pm_power_off)(void);
+14 -4
arch/parisc/kernel/traps.c
··· 47 47 #include <linux/kgdb.h> 48 48 #include <linux/kprobes.h> 49 49 50 + #if defined(CONFIG_LIGHTWEIGHT_SPINLOCK_CHECK) 51 + #include <asm/spinlock.h> 52 + #endif 53 + 50 54 #include "../math-emu/math-emu.h" /* for handle_fpe() */ 51 55 52 56 static void parisc_show_stack(struct task_struct *task, ··· 295 291 } 296 292 297 293 #ifdef CONFIG_KPROBES 298 - if (unlikely(iir == PARISC_KPROBES_BREAK_INSN)) { 294 + if (unlikely(iir == PARISC_KPROBES_BREAK_INSN && !user_mode(regs))) { 299 295 parisc_kprobe_break_handler(regs); 300 296 return; 301 297 } 302 - if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2)) { 298 + if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2 && !user_mode(regs))) { 303 299 parisc_kprobe_ss_handler(regs); 304 300 return; 305 301 } 306 302 #endif 307 303 308 304 #ifdef CONFIG_KGDB 309 - if (unlikely(iir == PARISC_KGDB_COMPILED_BREAK_INSN || 310 - iir == PARISC_KGDB_BREAK_INSN)) { 305 + if (unlikely((iir == PARISC_KGDB_COMPILED_BREAK_INSN || 306 + iir == PARISC_KGDB_BREAK_INSN)) && !user_mode(regs)) { 311 307 kgdb_handle_exception(9, SIGTRAP, 0, regs); 312 308 return; 309 + } 310 + #endif 311 + 312 + #ifdef CONFIG_LIGHTWEIGHT_SPINLOCK_CHECK 313 + if ((iir == SPINLOCK_BREAK_INSN) && !user_mode(regs)) { 314 + die_if_kernel("Spinlock was trashed", regs, 1); 313 315 } 314 316 #endif 315 317
+6
arch/powerpc/Kconfig
··· 906 906 907 907 config ARCH_FORCE_MAX_ORDER 908 908 int "Order of maximal physically contiguous allocations" 909 + range 7 8 if PPC64 && PPC_64K_PAGES 909 910 default "8" if PPC64 && PPC_64K_PAGES 911 + range 12 12 if PPC64 && !PPC_64K_PAGES 910 912 default "12" if PPC64 && !PPC_64K_PAGES 913 + range 8 10 if PPC32 && PPC_16K_PAGES 911 914 default "8" if PPC32 && PPC_16K_PAGES 915 + range 6 10 if PPC32 && PPC_64K_PAGES 912 916 default "6" if PPC32 && PPC_64K_PAGES 917 + range 4 10 if PPC32 && PPC_256K_PAGES 913 918 default "4" if PPC32 && PPC_256K_PAGES 919 + range 10 10 914 920 default "10" 915 921 help 916 922 The kernel page allocator limits the size of maximal physically
-2
arch/x86/crypto/aria-aesni-avx-asm_64.S
··· 773 773 .octa 0x3F893781E95FE1576CDA64D2BA0CB204 774 774 775 775 #ifdef CONFIG_AS_GFNI 776 - .section .rodata.cst8, "aM", @progbits, 8 777 - .align 8 778 776 /* AES affine: */ 779 777 #define tf_aff_const BV8(1, 1, 0, 0, 0, 1, 1, 0) 780 778 .Ltf_aff_bitmatrix:
+1 -1
arch/x86/events/intel/core.c
··· 4074 4074 if (x86_pmu.intel_cap.pebs_baseline) { 4075 4075 arr[(*nr)++] = (struct perf_guest_switch_msr){ 4076 4076 .msr = MSR_PEBS_DATA_CFG, 4077 - .host = cpuc->pebs_data_cfg, 4077 + .host = cpuc->active_pebs_data_cfg, 4078 4078 .guest = kvm_pmu->pebs_data_cfg, 4079 4079 }; 4080 4080 }
+11
arch/x86/events/intel/uncore_snbep.c
··· 6150 6150 }; 6151 6151 6152 6152 #define UNCORE_SPR_NUM_UNCORE_TYPES 12 6153 + #define UNCORE_SPR_CHA 0 6153 6154 #define UNCORE_SPR_IIO 1 6154 6155 #define UNCORE_SPR_IMC 6 6155 6156 #define UNCORE_SPR_UPI 8 ··· 6461 6460 return max + 1; 6462 6461 } 6463 6462 6463 + #define SPR_MSR_UNC_CBO_CONFIG 0x2FFE 6464 + 6464 6465 void spr_uncore_cpu_init(void) 6465 6466 { 6467 + struct intel_uncore_type *type; 6468 + u64 num_cbo; 6469 + 6466 6470 uncore_msr_uncores = uncore_get_uncores(UNCORE_ACCESS_MSR, 6467 6471 UNCORE_SPR_MSR_EXTRA_UNCORES, 6468 6472 spr_msr_uncores); 6469 6473 6474 + type = uncore_find_type_by_id(uncore_msr_uncores, UNCORE_SPR_CHA); 6475 + if (type) { 6476 + rdmsrl(SPR_MSR_UNC_CBO_CONFIG, num_cbo); 6477 + type->num_boxes = num_cbo; 6478 + } 6470 6479 spr_uncore_iio_free_running.num_boxes = uncore_type_max_boxes(uncore_msr_uncores, UNCORE_SPR_IIO); 6471 6480 } 6472 6481
+1 -1
arch/x86/include/asm/fpu/sched.h
··· 39 39 static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu) 40 40 { 41 41 if (cpu_feature_enabled(X86_FEATURE_FPU) && 42 - !(current->flags & (PF_KTHREAD | PF_IO_WORKER))) { 42 + !(current->flags & (PF_KTHREAD | PF_USER_WORKER))) { 43 43 save_fpregs_to_fpstate(old_fpu); 44 44 /* 45 45 * The save operation preserved register state, so the
+3 -2
arch/x86/kernel/cpu/topology.c
··· 79 79 * initial apic id, which also represents 32-bit extended x2apic id. 80 80 */ 81 81 c->initial_apicid = edx; 82 - smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx); 82 + smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx)); 83 83 #endif 84 84 return 0; 85 85 } ··· 109 109 */ 110 110 cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx); 111 111 c->initial_apicid = edx; 112 - core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx); 112 + core_level_siblings = LEVEL_MAX_SIBLINGS(ebx); 113 + smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx)); 113 114 core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax); 114 115 die_level_siblings = LEVEL_MAX_SIBLINGS(ebx); 115 116 pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+5 -2
arch/x86/kernel/dumpstack.c
··· 195 195 printk("%sCall Trace:\n", log_lvl); 196 196 197 197 unwind_start(&state, task, regs, stack); 198 - stack = stack ? : get_stack_pointer(task, regs); 199 198 regs = unwind_get_entry_regs(&state, &partial); 200 199 201 200 /* ··· 213 214 * - hardirq stack 214 215 * - entry stack 215 216 */ 216 - for ( ; stack; stack = PTR_ALIGN(stack_info.next_sp, sizeof(long))) { 217 + for (stack = stack ?: get_stack_pointer(task, regs); 218 + stack; 219 + stack = stack_info.next_sp) { 217 220 const char *stack_name; 221 + 222 + stack = PTR_ALIGN(stack, sizeof(long)); 218 223 219 224 if (get_stack_info(stack, task, &stack_info, &visit_mask)) { 220 225 /*
+1 -1
arch/x86/kernel/fpu/context.h
··· 57 57 struct fpu *fpu = &current->thread.fpu; 58 58 int cpu = smp_processor_id(); 59 59 60 - if (WARN_ON_ONCE(current->flags & (PF_KTHREAD | PF_IO_WORKER))) 60 + if (WARN_ON_ONCE(current->flags & (PF_KTHREAD | PF_USER_WORKER))) 61 61 return; 62 62 63 63 if (!fpregs_state_valid(fpu, cpu)) {
+1 -1
arch/x86/kernel/fpu/core.c
··· 426 426 427 427 this_cpu_write(in_kernel_fpu, true); 428 428 429 - if (!(current->flags & (PF_KTHREAD | PF_IO_WORKER)) && 429 + if (!(current->flags & (PF_KTHREAD | PF_USER_WORKER)) && 430 430 !test_thread_flag(TIF_NEED_FPU_LOAD)) { 431 431 set_thread_flag(TIF_NEED_FPU_LOAD); 432 432 save_fpregs_to_fpstate(&current->thread.fpu);
+9 -1
arch/x86/lib/copy_user_64.S
··· 7 7 */ 8 8 9 9 #include <linux/linkage.h> 10 + #include <asm/cpufeatures.h> 11 + #include <asm/alternative.h> 10 12 #include <asm/asm.h> 11 13 #include <asm/export.h> 12 14 ··· 31 29 */ 32 30 SYM_FUNC_START(rep_movs_alternative) 33 31 cmpq $64,%rcx 34 - jae .Lunrolled 32 + jae .Llarge 35 33 36 34 cmp $8,%ecx 37 35 jae .Lword ··· 66 64 67 65 _ASM_EXTABLE_UA( 2b, .Lcopy_user_tail) 68 66 _ASM_EXTABLE_UA( 3b, .Lcopy_user_tail) 67 + 68 + .Llarge: 69 + 0: ALTERNATIVE "jmp .Lunrolled", "rep movsb", X86_FEATURE_ERMS 70 + 1: RET 71 + 72 + _ASM_EXTABLE_UA( 0b, 1b) 69 73 70 74 .p2align 4 71 75 .Lunrolled:
+5 -3
arch/x86/pci/xen.c
··· 198 198 i++; 199 199 } 200 200 kfree(v); 201 - return 0; 201 + return msi_device_populate_sysfs(&dev->dev); 202 202 203 203 error: 204 204 if (ret == -ENOSYS) ··· 254 254 dev_dbg(&dev->dev, 255 255 "xen: msi --> pirq=%d --> irq=%d\n", pirq, irq); 256 256 } 257 - return 0; 257 + return msi_device_populate_sysfs(&dev->dev); 258 258 259 259 error: 260 260 dev_err(&dev->dev, "Failed to create MSI%s! ret=%d!\n", ··· 346 346 if (ret < 0) 347 347 goto out; 348 348 } 349 - ret = 0; 349 + ret = msi_device_populate_sysfs(&dev->dev); 350 350 out: 351 351 return ret; 352 352 } ··· 394 394 xen_destroy_irq(msidesc->irq + i); 395 395 msidesc->irq = 0; 396 396 } 397 + 398 + msi_device_destroy_sysfs(&dev->dev); 397 399 } 398 400 399 401 static void xen_pv_teardown_msi_irqs(struct pci_dev *dev)
+1 -1
block/blk-core.c
··· 520 520 sector_t maxsector = bdev_nr_sectors(bio->bi_bdev); 521 521 unsigned int nr_sectors = bio_sectors(bio); 522 522 523 - if (nr_sectors && maxsector && 523 + if (nr_sectors && 524 524 (nr_sectors > maxsector || 525 525 bio->bi_iter.bi_sector > maxsector - nr_sectors)) { 526 526 pr_info_ratelimited("%s: attempt to access beyond end of device\n"
+1 -1
block/blk-map.c
··· 248 248 { 249 249 struct bio *bio; 250 250 251 - if (rq->cmd_flags & REQ_ALLOC_CACHE) { 251 + if (rq->cmd_flags & REQ_ALLOC_CACHE && (nr_vecs <= BIO_INLINE_VECS)) { 252 252 bio = bio_alloc_bioset(NULL, nr_vecs, rq->cmd_flags, gfp_mask, 253 253 &fs_bio_set); 254 254 if (!bio)
+8 -4
block/blk-mq-tag.c
··· 39 39 { 40 40 unsigned int users; 41 41 42 + /* 43 + * calling test_bit() prior to test_and_set_bit() is intentional, 44 + * it avoids dirtying the cacheline if the queue is already active. 45 + */ 42 46 if (blk_mq_is_shared_tags(hctx->flags)) { 43 47 struct request_queue *q = hctx->queue; 44 48 45 - if (test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags)) 49 + if (test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags) || 50 + test_and_set_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags)) 46 51 return; 47 - set_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags); 48 52 } else { 49 - if (test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state)) 53 + if (test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) || 54 + test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state)) 50 55 return; 51 - set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state); 52 56 } 53 57 54 58 users = atomic_inc_return(&hctx->tags->active_queues);
+7 -5
block/blk-wbt.c
··· 730 730 { 731 731 struct request_queue *q = disk->queue; 732 732 struct rq_qos *rqos; 733 - bool disable_flag = q->elevator && 734 - test_bit(ELEVATOR_FLAG_DISABLE_WBT, &q->elevator->flags); 733 + bool enable = IS_ENABLED(CONFIG_BLK_WBT_MQ); 734 + 735 + if (q->elevator && 736 + test_bit(ELEVATOR_FLAG_DISABLE_WBT, &q->elevator->flags)) 737 + enable = false; 735 738 736 739 /* Throttling already enabled? */ 737 740 rqos = wbt_rq_qos(q); 738 741 if (rqos) { 739 - if (!disable_flag && 740 - RQWB(rqos)->enable_state == WBT_STATE_OFF_DEFAULT) 742 + if (enable && RQWB(rqos)->enable_state == WBT_STATE_OFF_DEFAULT) 741 743 RQWB(rqos)->enable_state = WBT_STATE_ON_DEFAULT; 742 744 return; 743 745 } ··· 748 746 if (!blk_queue_registered(q)) 749 747 return; 750 748 751 - if (queue_is_mq(q) && !disable_flag) 749 + if (queue_is_mq(q) && enable) 752 750 wbt_init(disk); 753 751 } 754 752 EXPORT_SYMBOL_GPL(wbt_enable_default);
+25 -16
drivers/accel/qaic/qaic_control.c
··· 997 997 struct xfer_queue_elem elem; 998 998 struct wire_msg *out_buf; 999 999 struct wrapper_msg *w; 1000 + long ret = -EAGAIN; 1001 + int xfer_count = 0; 1000 1002 int retry_count; 1001 - long ret; 1002 1003 1003 1004 if (qdev->in_reset) { 1004 1005 mutex_unlock(&qdev->cntl_mutex); 1005 1006 return ERR_PTR(-ENODEV); 1007 + } 1008 + 1009 + /* Attempt to avoid a partial commit of a message */ 1010 + list_for_each_entry(w, &wrappers->list, list) 1011 + xfer_count++; 1012 + 1013 + for (retry_count = 0; retry_count < QAIC_MHI_RETRY_MAX; retry_count++) { 1014 + if (xfer_count <= mhi_get_free_desc_count(qdev->cntl_ch, DMA_TO_DEVICE)) { 1015 + ret = 0; 1016 + break; 1017 + } 1018 + msleep_interruptible(QAIC_MHI_RETRY_WAIT_MS); 1019 + if (signal_pending(current)) 1020 + break; 1021 + } 1022 + 1023 + if (ret) { 1024 + mutex_unlock(&qdev->cntl_mutex); 1025 + return ERR_PTR(ret); 1006 1026 } 1007 1027 1008 1028 elem.seq_num = seq_num; ··· 1058 1038 list_for_each_entry(w, &wrappers->list, list) { 1059 1039 kref_get(&w->ref_count); 1060 1040 retry_count = 0; 1061 - retry: 1062 1041 ret = mhi_queue_buf(qdev->cntl_ch, DMA_TO_DEVICE, &w->msg, w->len, 1063 1042 list_is_last(&w->list, &wrappers->list) ? MHI_EOT : MHI_CHAIN); 1064 1043 if (ret) { 1065 - if (ret == -EAGAIN && retry_count++ < QAIC_MHI_RETRY_MAX) { 1066 - msleep_interruptible(QAIC_MHI_RETRY_WAIT_MS); 1067 - if (!signal_pending(current)) 1068 - goto retry; 1069 - } 1070 - 1071 1044 qdev->cntl_lost_buf = true; 1072 1045 kref_put(&w->ref_count, free_wrapper); 1073 1046 mutex_unlock(&qdev->cntl_mutex); ··· 1262 1249 1263 1250 int qaic_manage_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv) 1264 1251 { 1265 - struct qaic_manage_msg *user_msg; 1252 + struct qaic_manage_msg *user_msg = data; 1266 1253 struct qaic_device *qdev; 1267 1254 struct manage_msg *msg; 1268 1255 struct qaic_user *usr; ··· 1270 1257 int qdev_rcu_id; 1271 1258 int usr_rcu_id; 1272 1259 int ret; 1260 + 1261 + if (user_msg->len > QAIC_MANAGE_MAX_MSG_LENGTH) 1262 + return -EINVAL; 1273 1263 1274 1264 usr = file_priv->driver_priv; ··· 1289 1273 srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); 1290 1274 srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); 1291 1275 return -ENODEV; 1292 - } 1293 - 1294 - user_msg = data; 1295 - 1296 - if (user_msg->len > QAIC_MANAGE_MAX_MSG_LENGTH) { 1297 - ret = -EINVAL; 1298 - goto out; 1299 1276 } 1300 1277 1301 1278 msg = kzalloc(QAIC_MANAGE_MAX_MSG_LENGTH + sizeof(*msg), GFP_KERNEL);
+46 -47
drivers/accel/qaic/qaic_data.c
··· 591 591 struct qaic_bo *bo = to_qaic_bo(obj); 592 592 unsigned long offset = 0; 593 593 struct scatterlist *sg; 594 - int ret; 594 + int ret = 0; 595 595 596 596 if (obj->import_attach) 597 597 return -EINVAL; ··· 663 663 if (args->pad) 664 664 return -EINVAL; 665 665 666 + size = PAGE_ALIGN(args->size); 667 + if (size == 0) 668 + return -EINVAL; 669 + 666 670 usr = file_priv->driver_priv; 667 671 usr_rcu_id = srcu_read_lock(&usr->qddev_lock); 668 672 if (!usr->qddev) { ··· 678 674 qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); 679 675 if (qdev->in_reset) { 680 676 ret = -ENODEV; 681 - goto unlock_dev_srcu; 682 - } 683 - 684 - size = PAGE_ALIGN(args->size); 685 - if (size == 0) { 686 - ret = -EINVAL; 687 677 goto unlock_dev_srcu; 688 678 } 689 679 ··· 924 926 { 925 927 struct qaic_attach_slice_entry *slice_ent; 926 928 struct qaic_attach_slice *args = data; 929 + int rcu_id, usr_rcu_id, qdev_rcu_id; 927 930 struct dma_bridge_chan *dbc; 928 - int usr_rcu_id, qdev_rcu_id; 929 931 struct drm_gem_object *obj; 930 932 struct qaic_device *qdev; 931 933 unsigned long arg_size; ··· 933 935 u8 __user *user_data; 934 936 struct qaic_bo *bo; 935 937 int ret; 938 + 939 + if (args->hdr.count == 0) 940 + return -EINVAL; 941 + 942 + arg_size = args->hdr.count * sizeof(*slice_ent); 943 + if (arg_size / args->hdr.count != sizeof(*slice_ent)) 944 + return -EINVAL; 945 + 946 + if (args->hdr.size == 0) 947 + return -EINVAL; 948 + 949 + if (!(args->hdr.dir == DMA_TO_DEVICE || args->hdr.dir == DMA_FROM_DEVICE)) 950 + return -EINVAL; 951 + 952 + if (args->data == 0) 953 + return -EINVAL; 936 954 937 955 usr = file_priv->driver_priv; 938 956 usr_rcu_id = srcu_read_lock(&usr->qddev_lock); ··· 964 950 goto unlock_dev_srcu; 965 951 } 966 952 967 - if (args->hdr.count == 0) { 968 - ret = -EINVAL; 969 - goto unlock_dev_srcu; 970 - } 971 - 972 - arg_size = args->hdr.count * sizeof(*slice_ent); 973 - if (arg_size / args->hdr.count != sizeof(*slice_ent)) { 974 - ret = -EINVAL; 975 - goto unlock_dev_srcu; 976 - } 977 - 978 953 if (args->hdr.dbc_id >= qdev->num_dbc) { 979 - ret = -EINVAL; 980 - goto unlock_dev_srcu; 981 - } 982 - 983 - if (args->hdr.size == 0) { 984 - ret = -EINVAL; 985 - goto unlock_dev_srcu; 986 - } 987 - 988 - if (!(args->hdr.dir == DMA_TO_DEVICE || args->hdr.dir == DMA_FROM_DEVICE)) { 989 - ret = -EINVAL; 990 - goto unlock_dev_srcu; 991 - } 992 - 993 - dbc = &qdev->dbc[args->hdr.dbc_id]; 994 - if (dbc->usr != usr) { 995 - ret = -EINVAL; 996 - goto unlock_dev_srcu; 997 - } 998 - 999 - if (args->data == 0) { 1000 954 ret = -EINVAL; 1001 955 goto unlock_dev_srcu; 1002 956 } ··· 995 1013 996 1014 bo = to_qaic_bo(obj); 997 1015 1016 + if (bo->sliced) { 1017 + ret = -EINVAL; 1018 + goto put_bo; 1019 + } 1020 + 1021 + dbc = &qdev->dbc[args->hdr.dbc_id]; 1022 + rcu_id = srcu_read_lock(&dbc->ch_lock); 1023 + if (dbc->usr != usr) { 1024 + ret = -EINVAL; 1025 + goto unlock_ch_srcu; 1026 + } 1027 + 998 1028 ret = qaic_prepare_bo(qdev, bo, &args->hdr); 999 1029 if (ret) 1000 - goto put_bo; 1030 + goto unlock_ch_srcu; 1001 1031 1002 1032 ret = qaic_attach_slicing_bo(qdev, bo, &args->hdr, slice_ent); 1003 1033 if (ret) ··· 1019 1025 dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, args->hdr.dir); 1020 1026 1021 1027 bo->dbc = dbc; 1028 + srcu_read_unlock(&dbc->ch_lock, rcu_id); 1022 1029 drm_gem_object_put(obj); 1023 1030 srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id); 1024 1031 srcu_read_unlock(&usr->qddev_lock, usr_rcu_id); ··· 1028 1033 1029 1034 unprepare_bo: 1030 1035 qaic_unprepare_bo(qdev, bo); 1036 + unlock_ch_srcu: 1037 + srcu_read_unlock(&dbc->ch_lock, rcu_id); 1031 1038 put_bo: 1032 1039 drm_gem_object_put(obj); 1033 1040 free_slice_ent: ··· 1313 1316 received_ts = ktime_get_ns(); 1314 1317 1315 size = is_partial ? sizeof(*pexec) : sizeof(*exec); 1316 - 1317 1319 n = (unsigned long)size * args->hdr.count; 1318 1320 if (args->hdr.count == 0 || n / args->hdr.count != size) 1319 1321 return -EINVAL; ··· 1661 1665 int rcu_id; 1662 1666 int ret; 1663 1667 1668 + if (args->pad != 0) 1669 + return -EINVAL; 1670 + 1664 1671 usr = file_priv->driver_priv; 1665 1672 usr_rcu_id = srcu_read_lock(&usr->qddev_lock); 1666 1673 if (!usr->qddev) { ··· 1675 1676 qdev_rcu_id = srcu_read_lock(&qdev->dev_lock); 1676 1677 if (qdev->in_reset) { 1677 1678 ret = -ENODEV; 1678 - goto unlock_dev_srcu; 1679 - } 1680 - 1681 - if (args->pad != 0) { 1682 - ret = -EINVAL; 1683 1679 goto unlock_dev_srcu; 1684 1680 } 1685 1681 ··· 1849 1855 dbc->usr = NULL; 1850 1856 empty_xfer_list(qdev, dbc); 1851 1857 synchronize_srcu(&dbc->ch_lock); 1858 + /* 1859 + * Threads holding the channel lock may add more elements to the xfer_list. 1860 + * Flush out these elements from the xfer_list. 1861 + */ 1862 + empty_xfer_list(qdev, dbc); 1852 1863 } 1853 1864 1854 1865 void release_dbc(struct qaic_device *qdev, u32 dbc_id)
+1 -1
drivers/accel/qaic/qaic_drv.c
··· 262 262 263 263 static int qaic_mhi_probe(struct mhi_device *mhi_dev, const struct mhi_device_id *id) 264 264 { 265 + u16 major = -1, minor = -1; 265 266 struct qaic_device *qdev; 266 - u16 major, minor; 267 267 int ret; 268 268 269 269 /*
+21 -7
drivers/android/binder.c
··· 1934 1934 static void binder_transaction_buffer_release(struct binder_proc *proc, 1935 1935 struct binder_thread *thread, 1936 1936 struct binder_buffer *buffer, 1937 - binder_size_t failed_at, 1937 + binder_size_t off_end_offset, 1938 1938 bool is_failure) 1939 1939 { 1940 1940 int debug_id = buffer->debug_id; 1941 - binder_size_t off_start_offset, buffer_offset, off_end_offset; 1941 + binder_size_t off_start_offset, buffer_offset; 1942 1942 1943 1943 binder_debug(BINDER_DEBUG_TRANSACTION, 1944 1944 "%d buffer release %d, size %zd-%zd, failed at %llx\n", 1945 1945 proc->pid, buffer->debug_id, 1946 1946 buffer->data_size, buffer->offsets_size, 1947 - (unsigned long long)failed_at); 1947 + (unsigned long long)off_end_offset); 1948 1948 1949 1949 if (buffer->target_node) 1950 1950 binder_dec_node(buffer->target_node, 1, 0); 1951 1951 1952 1952 off_start_offset = ALIGN(buffer->data_size, sizeof(void *)); 1953 - off_end_offset = is_failure && failed_at ? failed_at : 1954 - off_start_offset + buffer->offsets_size; 1953 + 1955 1954 for (buffer_offset = off_start_offset; buffer_offset < off_end_offset; 1956 1955 buffer_offset += sizeof(binder_size_t)) { 1957 1956 struct binder_object_header *hdr; ··· 2108 2109 break; 2109 2110 } 2110 2111 } 2112 + } 2113 + 2114 + /* Clean up all the objects in the buffer */ 2115 + static inline void binder_release_entire_buffer(struct binder_proc *proc, 2116 + struct binder_thread *thread, 2117 + struct binder_buffer *buffer, 2118 + bool is_failure) 2119 + { 2120 + binder_size_t off_end_offset; 2121 + 2122 + off_end_offset = ALIGN(buffer->data_size, sizeof(void *)); 2123 + off_end_offset += buffer->offsets_size; 2124 + 2125 + binder_transaction_buffer_release(proc, thread, buffer, 2126 + off_end_offset, is_failure); 2111 2127 } 2112 2128 2113 2129 static int binder_translate_binder(struct flat_binder_object *fp, ··· 2820 2806 t_outdated->buffer = NULL; 2821 2807 buffer->transaction = NULL; 2822 2808 trace_binder_transaction_update_buffer_release(buffer); 2823 - binder_transaction_buffer_release(proc, NULL, buffer, 0, 0); 2809 + binder_release_entire_buffer(proc, NULL, buffer, false); 2824 2810 binder_alloc_free_buf(&proc->alloc, buffer); 2825 2811 kfree(t_outdated); 2826 2812 binder_stats_deleted(BINDER_STAT_TRANSACTION); ··· 3789 3775 binder_node_inner_unlock(buf_node); 3790 3776 } 3791 3777 trace_binder_transaction_buffer_release(buffer); 3792 - binder_transaction_buffer_release(proc, thread, buffer, 0, is_failure); 3778 + binder_release_entire_buffer(proc, thread, buffer, is_failure); 3793 3779 binder_alloc_free_buf(&proc->alloc, buffer); 3794 3780 }
+29 -35
drivers/android/binder_alloc.c
··· 212 212 mm = alloc->mm; 213 213 214 214 if (mm) { 215 - mmap_read_lock(mm); 216 - vma = vma_lookup(mm, alloc->vma_addr); 215 + mmap_write_lock(mm); 216 + vma = alloc->vma; 217 217 } 218 218 219 219 if (!vma && need_mm) { ··· 270 270 trace_binder_alloc_page_end(alloc, index); 271 271 } 272 272 if (mm) { 273 - mmap_read_unlock(mm); 273 + mmap_write_unlock(mm); 274 274 mmput(mm); 275 275 } 276 276 return 0; ··· 303 303 } 304 304 err_no_vma: 305 305 if (mm) { 306 - mmap_read_unlock(mm); 306 + mmap_write_unlock(mm); 307 307 mmput(mm); 308 308 } 309 309 return vma ? -ENOMEM : -ESRCH; 310 310 } 311 311 312 + static inline void binder_alloc_set_vma(struct binder_alloc *alloc, 313 + struct vm_area_struct *vma) 314 + { 315 + /* pairs with smp_load_acquire in binder_alloc_get_vma() */ 316 + smp_store_release(&alloc->vma, vma); 317 + } 318 + 312 319 static inline struct vm_area_struct *binder_alloc_get_vma( 313 320 struct binder_alloc *alloc) 314 321 { 315 - struct vm_area_struct *vma = NULL; 316 - 317 - if (alloc->vma_addr) 318 - vma = vma_lookup(alloc->mm, alloc->vma_addr); 319 - 320 - return vma; 322 + /* pairs with smp_store_release in binder_alloc_set_vma() */ 323 + return smp_load_acquire(&alloc->vma); 321 324 } 322 325 323 326 static bool debug_low_async_space_locked(struct binder_alloc *alloc, int pid) ··· 383 380 size_t size, data_offsets_size; 384 381 int ret; 385 382 386 - mmap_read_lock(alloc->mm); 383 + /* Check binder_alloc is fully initialized */ 387 384 if (!binder_alloc_get_vma(alloc)) { 388 - mmap_read_unlock(alloc->mm); 389 385 binder_alloc_debug(BINDER_DEBUG_USER_ERROR, 390 386 "%d: binder_alloc_buf, no vma\n", 391 387 alloc->pid); 392 388 return ERR_PTR(-ESRCH); 393 389 } 394 - mmap_read_unlock(alloc->mm); 395 390 396 391 data_offsets_size = ALIGN(data_size, sizeof(void *)) + 397 392 ALIGN(offsets_size, sizeof(void *)); ··· 779 778 buffer->free = 1; 780 779 binder_insert_free_buffer(alloc, buffer); 781 780 alloc->free_async_space = alloc->buffer_size / 2; 782 - alloc->vma_addr = vma->vm_start; 781 + 782 + /* Signal binder_alloc is fully initialized */ 783 + binder_alloc_set_vma(alloc, vma); 783 784 784 785 return 0; 785 786 ··· 811 808 812 809 buffers = 0; 813 810 mutex_lock(&alloc->mutex); 814 - BUG_ON(alloc->vma_addr && 815 - vma_lookup(alloc->mm, alloc->vma_addr)); 811 + BUG_ON(alloc->vma); 816 812 817 813 while ((n = rb_first(&alloc->allocated_buffers))) { 818 814 buffer = rb_entry(n, struct binder_buffer, rb_node); ··· 918 916 * Make sure the binder_alloc is fully initialized, otherwise we might 919 917 * read inconsistent state. 920 918 */ 921 - 922 - mmap_read_lock(alloc->mm); 923 - if (binder_alloc_get_vma(alloc) == NULL) { 924 - mmap_read_unlock(alloc->mm); 925 - goto uninitialized; 919 + if (binder_alloc_get_vma(alloc) != NULL) { 920 + for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) { 921 + page = &alloc->pages[i]; 922 + if (!page->page_ptr) 923 + free++; 924 + else if (list_empty(&page->lru)) 925 + active++; 926 + else 927 + lru++; 928 + } 926 929 } 927 - 928 - mmap_read_unlock(alloc->mm); 929 - for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) { 930 - page = &alloc->pages[i]; 931 - if (!page->page_ptr) 932 - free++; 933 - else if (list_empty(&page->lru)) 934 - active++; 935 - else 936 - lru++; 937 - } 938 - 939 - uninitialized: 940 930 mutex_unlock(&alloc->mutex); 941 931 seq_printf(m, " pages: %d:%d:%d\n", active, lru, free); 942 932 seq_printf(m, " pages high watermark: %zu\n", alloc->pages_high); ··· 963 969 */ 964 970 void binder_alloc_vma_close(struct binder_alloc *alloc) 965 971 { 966 - alloc->vma_addr = 0; 972 + binder_alloc_set_vma(alloc, NULL); 967 973 } 968 975 /**
+2 -2
drivers/android/binder_alloc.h
··· 75 75 /** 76 76 * struct binder_alloc - per-binder proc state for binder allocator 77 77 * @mutex: protects binder_alloc fields 78 - * @vma_addr: vm_area_struct->vm_start passed to mmap_handler 78 + * @vma: vm_area_struct passed to mmap_handler 79 79 * (invariant after mmap) 80 80 * @mm: copy of task->mm (invariant after open) 81 81 * @buffer: base of per-proc address space mapped via mmap ··· 99 99 */ 100 100 struct binder_alloc { 101 101 struct mutex mutex; 102 - unsigned long vma_addr; 102 + struct vm_area_struct *vma; 103 103 struct mm_struct *mm; 104 104 void __user *buffer; 105 105 struct list_head buffers;
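The binder change above replaces repeated `vma_lookup()` calls with a cached pointer published via `smp_store_release()` and read via `smp_load_acquire()`, so a non-NULL read guarantees the allocator is fully initialized. A minimal userspace analogue of this publish/consume pattern, sketched with C11 atomics (the `struct vma` type and function names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct vma { unsigned long vm_start; };

/* Pointer published by the mmap path, consumed by allocator paths. */
static _Atomic(struct vma *) cached_vma;

/* Publisher: release ordering makes all prior initialization of *v
 * visible before the pointer itself becomes visible to readers. */
static void set_vma(struct vma *v)
{
	atomic_store_explicit(&cached_vma, v, memory_order_release);
}

/* Consumer: acquire ordering pairs with the release above, so a
 * non-NULL result implies the pointed-to data is initialized. */
static struct vma *get_vma(void)
{
	return atomic_load_explicit(&cached_vma, memory_order_acquire);
}
```

The kernel's `smp_store_release()`/`smp_load_acquire()` pair provides the same one-way ordering guarantee without a lock around every lookup.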
+1 -1
drivers/android/binder_alloc_selftest.c
··· 287 287 if (!binder_selftest_run) 288 288 return; 289 289 mutex_lock(&binder_selftest_lock); 290 - if (!binder_selftest_run || !alloc->vma_addr) 290 + if (!binder_selftest_run || !alloc->vma) 291 291 goto done; 292 292 pr_info("STARTED\n"); 293 293 binder_selftest_alloc_offset(alloc, end_offset, 0);
+26 -8
drivers/ata/libata-scsi.c
··· 2694 2694 return 0; 2695 2695 } 2696 2696 2697 - static struct ata_device *ata_find_dev(struct ata_port *ap, int devno) 2697 + static struct ata_device *ata_find_dev(struct ata_port *ap, unsigned int devno) 2698 2698 { 2699 - if (!sata_pmp_attached(ap)) { 2700 - if (likely(devno >= 0 && 2701 - devno < ata_link_max_devices(&ap->link))) 2699 + /* 2700 + * For the non-PMP case, ata_link_max_devices() returns 1 (SATA case), 2701 + * or 2 (IDE master + slave case). However, the former case includes 2702 + * libsas hosted devices which are numbered per scsi host, leading 2703 + * to devno potentially being larger than 0 but with each struct 2704 + * ata_device having its own struct ata_port and struct ata_link. 2705 + * To accommodate these, ignore devno and always use device number 0. 2706 + */ 2707 + if (likely(!sata_pmp_attached(ap))) { 2708 + int link_max_devices = ata_link_max_devices(&ap->link); 2709 + 2710 + if (link_max_devices == 1) 2711 + return &ap->link.device[0]; 2712 + 2713 + if (devno < link_max_devices) 2702 2714 return &ap->link.device[devno]; 2703 - } else { 2704 - if (likely(devno >= 0 && 2705 - devno < ap->nr_pmp_links)) 2706 - return &ap->pmp_link[devno].device[0]; 2715 + 2716 + return NULL; 2707 2717 } 2718 + 2719 + /* 2720 + * For PMP-attached devices, the device number corresponds to C 2721 + * (channel) of SCSI [H:C:I:L], indicating the port pmp link 2722 + * for the device. 2723 + */ 2724 + if (devno < ap->nr_pmp_links) 2725 + return &ap->pmp_link[devno].device[0]; 2708 2726 2709 2727 return NULL; 2710 2728 }
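The reworked `ata_find_dev()` encodes a small decision table: non-PMP links with a single device always map to device 0 (covering libsas hosts that number devices per SCSI host), two-device IDE links bounds-check `devno`, and PMP links index by channel. A standalone sketch of that mapping (types and the function name simplified; these are not the kernel structs):

```c
#include <assert.h>

/* Return the selected device index for a devno, or -1 for none.
 * Mirrors the decision order in the patch: non-PMP single-device
 * links ignore devno, IDE master/slave links bounds-check it, and
 * PMP links index per channel. */
static int find_dev_index(int pmp_attached, unsigned int link_max_devices,
			  unsigned int nr_pmp_links, unsigned int devno)
{
	if (!pmp_attached) {
		if (link_max_devices == 1)
			return 0;		/* SATA / libsas: always device 0 */
		if (devno < link_max_devices)
			return (int)devno;	/* IDE master (0) or slave (1) */
		return -1;
	}
	if (devno < nr_pmp_links)
		return (int)devno;		/* PMP: devno selects the port link */
	return -1;
}
```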
+10 -3
drivers/base/regmap/Kconfig
··· 4 4 # subsystems should select the appropriate symbols. 5 5 6 6 config REGMAP 7 + bool "Register Map support" if KUNIT_ALL_TESTS 7 8 default y if (REGMAP_I2C || REGMAP_SPI || REGMAP_SPMI || REGMAP_W1 || REGMAP_AC97 || REGMAP_MMIO || REGMAP_IRQ || REGMAP_SOUNDWIRE || REGMAP_SOUNDWIRE_MBQ || REGMAP_SCCB || REGMAP_I3C || REGMAP_SPI_AVMM || REGMAP_MDIO || REGMAP_FSI) 8 9 select IRQ_DOMAIN if REGMAP_IRQ 9 10 select MDIO_BUS if REGMAP_MDIO 10 - bool 11 + help 12 + Enable support for the Register Map (regmap) access API. 13 + 14 + Usually, this option is automatically selected when needed. 15 + However, you may want to enable it manually for running the regmap 16 + KUnit tests. 17 + 18 + If unsure, say N. 11 19 12 20 config REGMAP_KUNIT 13 21 tristate "KUnit tests for regmap" 14 - depends on KUNIT 22 + depends on KUNIT && REGMAP 15 23 default KUNIT_ALL_TESTS 16 - select REGMAP 17 24 select REGMAP_RAM 18 25 19 26 config REGMAP_AC97
+4 -1
drivers/base/regmap/regcache-maple.c
··· 203 203 204 204 mas_for_each(&mas, entry, max) { 205 205 for (r = max(mas.index, lmin); r <= min(mas.last, lmax); r++) { 206 + mas_pause(&mas); 207 + rcu_read_unlock(); 206 208 ret = regcache_sync_val(map, r, entry[r - mas.index]); 207 209 if (ret != 0) 208 210 goto out; 211 + rcu_read_lock(); 209 212 } 210 213 } 211 214 212 - out: 213 215 rcu_read_unlock(); 214 216 217 + out: 215 218 map->cache_bypass = false; 216 219 217 220 return ret;
+4
drivers/base/regmap/regmap-sdw.c
··· 59 59 if (config->pad_bits != 0) 60 60 return -ENOTSUPP; 61 61 62 + /* Only bulk writes are supported not multi-register writes */ 63 + if (config->can_multi_write) 64 + return -ENOTSUPP; 65 + 62 66 return 0; 63 67 } 64 68
+4 -2
drivers/base/regmap/regmap.c
··· 2082 2082 size_t val_count = val_len / val_bytes; 2083 2083 size_t chunk_count, chunk_bytes; 2084 2084 size_t chunk_regs = val_count; 2085 + size_t max_data = map->max_raw_write - map->format.reg_bytes - 2086 + map->format.pad_bytes; 2085 2087 int ret, i; 2086 2088 2087 2089 if (!val_count) ··· 2091 2089 2092 2090 if (map->use_single_write) 2093 2091 chunk_regs = 1; 2094 - else if (map->max_raw_write && val_len > map->max_raw_write) 2095 - chunk_regs = map->max_raw_write / val_bytes; 2092 + else if (map->max_raw_write && val_len > max_data) 2093 + chunk_regs = max_data / val_bytes; 2096 2094 2097 2095 chunk_count = val_count / chunk_regs; 2098 2096 chunk_bytes = chunk_regs * val_bytes;
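The regmap fix above subtracts the register-address and pad bytes from `max_raw_write` before computing how many register values fit in one bus transfer; previously the address overhead could push a chunk past the transport's limit. The arithmetic, as a hedged standalone sketch:

```c
#include <assert.h>
#include <stddef.h>

/* Number of whole register values that fit in one raw write once the
 * register-address and padding overhead is subtracted, mirroring
 * max_data = max_raw_write - reg_bytes - pad_bytes in the patch. */
static size_t chunk_regs_for(size_t max_raw_write, size_t reg_bytes,
			     size_t pad_bytes, size_t val_bytes)
{
	size_t max_data = max_raw_write - reg_bytes - pad_bytes;

	return max_data / val_bytes;
}
```

For example, with a 16-byte transport limit, 2 address bytes and 2-byte values, the old code would pick 8 registers (18 bytes on the wire); accounting for the address yields 7.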
+2 -1
drivers/block/xen-blkfront.c
··· 780 780 ring_req->u.rw.handle = info->handle; 781 781 ring_req->operation = rq_data_dir(req) ? 782 782 BLKIF_OP_WRITE : BLKIF_OP_READ; 783 - if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) { 783 + if (req_op(req) == REQ_OP_FLUSH || 784 + (req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) { 784 785 /* 785 786 * Ideally we can do an unordered flush-to-disk. 786 787 * In case the backend onlysupports barriers, use that.
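The xen-blkfront fix narrows the flush path: `REQ_FUA` should only trigger the flush/barrier handling for writes, since the flag has no meaning on other operations. The predicate, as a small sketch with stand-in constants (not the real block-layer definitions):

```c
#include <assert.h>
#include <stdbool.h>

enum req_op { OP_READ, OP_WRITE, OP_FLUSH };
#define REQ_FUA (1u << 0)	/* stand-in for the block-layer flag */

/* True when a request must take the flush/barrier path: explicit
 * flushes, or writes carrying FUA (the patch adds the write check). */
static bool needs_flush(enum req_op op, unsigned int cmd_flags)
{
	return op == OP_FLUSH ||
	       (op == OP_WRITE && (cmd_flags & REQ_FUA));
}
```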
+14 -1
drivers/char/agp/parisc-agp.c
··· 90 90 { 91 91 struct _parisc_agp_info *info = &parisc_agp_info; 92 92 93 + /* force fdc ops to be visible to IOMMU */ 94 + asm_io_sync(); 95 + 93 96 writeq(info->gart_base | ilog2(info->gart_size), info->ioc_regs+IOC_PCOM); 94 97 readq(info->ioc_regs+IOC_PCOM); /* flush */ 95 98 } ··· 161 158 info->gatt[j] = 162 159 parisc_agp_mask_memory(agp_bridge, 163 160 paddr, type); 161 + asm_io_fdc(&info->gatt[j]); 164 162 } 165 163 } 166 164 ··· 195 191 parisc_agp_mask_memory(struct agp_bridge_data *bridge, dma_addr_t addr, 196 192 int type) 197 193 { 198 - return SBA_PDIR_VALID_BIT | addr; 194 + unsigned ci; /* coherent index */ 195 + dma_addr_t pa; 196 + 197 + pa = addr & IOVP_MASK; 198 + asm("lci 0(%1), %0" : "=r" (ci) : "r" (phys_to_virt(pa))); 199 + 200 + pa |= (ci >> PAGE_SHIFT) & 0xff;/* move CI (8 bits) into lowest byte */ 201 + pa |= SBA_PDIR_VALID_BIT; /* set "valid" bit */ 202 + 203 + return cpu_to_le64(pa); 199 204 } 200 205 201 206 static void
+37 -9
drivers/cpufreq/amd-pstate.c
··· 444 444 return 0; 445 445 } 446 446 447 - static int amd_pstate_target(struct cpufreq_policy *policy, 448 - unsigned int target_freq, 449 - unsigned int relation) 447 + static int amd_pstate_update_freq(struct cpufreq_policy *policy, 448 + unsigned int target_freq, bool fast_switch) 450 449 { 451 450 struct cpufreq_freqs freqs; 452 451 struct amd_cpudata *cpudata = policy->driver_data; ··· 464 465 des_perf = DIV_ROUND_CLOSEST(target_freq * cap_perf, 465 466 cpudata->max_freq); 466 467 467 - cpufreq_freq_transition_begin(policy, &freqs); 468 + WARN_ON(fast_switch && !policy->fast_switch_enabled); 469 + /* 470 + * If fast_switch is desired, then there aren't any registered 471 + * transition notifiers. See comment for 472 + * cpufreq_enable_fast_switch(). 473 + */ 474 + if (!fast_switch) 475 + cpufreq_freq_transition_begin(policy, &freqs); 476 + 468 477 amd_pstate_update(cpudata, min_perf, des_perf, 469 - max_perf, false, policy->governor->flags); 470 - cpufreq_freq_transition_end(policy, &freqs, false); 478 + max_perf, fast_switch, policy->governor->flags); 479 + 480 + if (!fast_switch) 481 + cpufreq_freq_transition_end(policy, &freqs, false); 471 482 472 483 return 0; 484 + } 485 + 486 + static int amd_pstate_target(struct cpufreq_policy *policy, 487 + unsigned int target_freq, 488 + unsigned int relation) 489 + { 490 + return amd_pstate_update_freq(policy, target_freq, false); 491 + } 492 + 493 + static unsigned int amd_pstate_fast_switch(struct cpufreq_policy *policy, 494 + unsigned int target_freq) 495 + { 496 + return amd_pstate_update_freq(policy, target_freq, true); 473 497 } 474 498 475 499 static void amd_pstate_adjust_perf(unsigned int cpu, ··· 501 479 unsigned long capacity) 502 480 { 503 481 unsigned long max_perf, min_perf, des_perf, 504 - cap_perf, lowest_nonlinear_perf; 482 + cap_perf, lowest_nonlinear_perf, max_freq; 505 483 struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); 506 484 struct amd_cpudata *cpudata = policy->driver_data; 485 + unsigned int target_freq; 507 486 508 487 cap_perf = READ_ONCE(cpudata->highest_perf); 509 488 lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf); 489 + max_freq = READ_ONCE(cpudata->max_freq); 510 490 511 491 des_perf = cap_perf; 512 492 if (target_perf < capacity) ··· 524 500 max_perf = cap_perf; 525 501 if (max_perf < min_perf) 526 502 max_perf = min_perf; 503 + 504 + des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf); 505 + target_freq = div_u64(des_perf * max_freq, max_perf); 506 + policy->cur = target_freq; 527 507 528 508 amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true, 529 509 policy->governor->flags); ··· 743 715 744 716 freq_qos_remove_request(&cpudata->req[1]); 745 717 freq_qos_remove_request(&cpudata->req[0]); 718 + policy->fast_switch_possible = false; 746 719 kfree(cpudata); 747 720 748 721 return 0; ··· 1108 1079 policy->policy = CPUFREQ_POLICY_POWERSAVE; 1109 1080 1110 1081 if (boot_cpu_has(X86_FEATURE_CPPC)) { 1111 - policy->fast_switch_possible = true; 1112 1082 ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value); 1113 1083 if (ret) 1114 1084 return ret; ··· 1130 1102 static int amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy) 1131 1103 { 1132 1104 pr_debug("CPU %d exiting\n", policy->cpu); 1133 - policy->fast_switch_possible = false; 1134 1105 return 0; 1135 1106 } 1136 1107 ··· 1336 1309 .flags = CPUFREQ_CONST_LOOPS | CPUFREQ_NEED_UPDATE_LIMITS, 1337 1310 .verify = amd_pstate_verify, 1338 1311 .target = amd_pstate_target, 1312 + .fast_switch = amd_pstate_fast_switch, 1339 1313 .init = amd_pstate_cpu_init, 1340 1314 .exit = amd_pstate_cpu_exit, 1341 1315 .suspend = amd_pstate_cpu_suspend,
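In the adjust_perf path above, the patch clamps `des_perf` into `[min_perf, max_perf]` and converts it back to a frequency so `policy->cur` stays in sync. That clamp-and-convert step, sketched as a standalone helper (names are illustrative, not the driver's):

```c
#include <assert.h>

/* Clamp a desired performance level and convert it to a frequency,
 * mirroring des_perf = clamp_t(...) followed by
 * target_freq = div_u64(des_perf * max_freq, max_perf). */
static unsigned long perf_to_freq(unsigned long des_perf,
				  unsigned long min_perf,
				  unsigned long max_perf,
				  unsigned long max_freq)
{
	if (des_perf < min_perf)
		des_perf = min_perf;
	if (des_perf > max_perf)
		des_perf = max_perf;

	return des_perf * max_freq / max_perf;
}
```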
+11 -1
drivers/cxl/core/mbox.c
··· 1028 1028 * cxl_dev_state_identify() - Send the IDENTIFY command to the device. 1029 1029 * @cxlds: The device data for the operation 1030 1030 * 1031 - * Return: 0 if identify was executed successfully. 1031 + * Return: 0 if identify was executed successfully or media not ready. 1032 1032 * 1033 1033 * This will dispatch the identify command to the device and on success populate 1034 1034 * structures to be exported to sysfs. ··· 1040 1040 struct cxl_mbox_cmd mbox_cmd; 1041 1041 u32 val; 1042 1042 int rc; 1043 + 1044 + if (!cxlds->media_ready) 1045 + return 0; 1043 1046 1044 1047 mbox_cmd = (struct cxl_mbox_cmd) { 1045 1048 .opcode = CXL_MBOX_OP_IDENTIFY, ··· 1104 1101 { 1105 1102 struct device *dev = cxlds->dev; 1106 1103 int rc; 1104 + 1105 + if (!cxlds->media_ready) { 1106 + cxlds->dpa_res = DEFINE_RES_MEM(0, 0); 1107 + cxlds->ram_res = DEFINE_RES_MEM(0, 0); 1108 + cxlds->pmem_res = DEFINE_RES_MEM(0, 0); 1109 + return 0; 1110 + } 1107 1111 1108 1112 cxlds->dpa_res = 1109 1113 (struct resource)DEFINE_RES_MEM(0, cxlds->total_bytes);
+99 -13
drivers/cxl/core/pci.c
··· 101 101 } 102 102 EXPORT_SYMBOL_NS_GPL(devm_cxl_port_enumerate_dports, CXL); 103 103 104 - /* 105 - * Wait up to @media_ready_timeout for the device to report memory 106 - * active. 107 - */ 108 - int cxl_await_media_ready(struct cxl_dev_state *cxlds) 104 + static int cxl_dvsec_mem_range_valid(struct cxl_dev_state *cxlds, int id) 105 + { 106 + struct pci_dev *pdev = to_pci_dev(cxlds->dev); 107 + int d = cxlds->cxl_dvsec; 108 + bool valid = false; 109 + int rc, i; 110 + u32 temp; 111 + 112 + if (id > CXL_DVSEC_RANGE_MAX) 113 + return -EINVAL; 114 + 115 + /* Check MEM INFO VALID bit first, give up after 1s */ 116 + i = 1; 117 + do { 118 + rc = pci_read_config_dword(pdev, 119 + d + CXL_DVSEC_RANGE_SIZE_LOW(id), 120 + &temp); 121 + if (rc) 122 + return rc; 123 + 124 + valid = FIELD_GET(CXL_DVSEC_MEM_INFO_VALID, temp); 125 + if (valid) 126 + break; 127 + msleep(1000); 128 + } while (i--); 129 + 130 + if (!valid) { 131 + dev_err(&pdev->dev, 132 + "Timeout awaiting memory range %d valid after 1s.\n", 133 + id); 134 + return -ETIMEDOUT; 135 + } 136 + 137 + return 0; 138 + } 139 + 140 + static int cxl_dvsec_mem_range_active(struct cxl_dev_state *cxlds, int id) 109 141 { 110 142 struct pci_dev *pdev = to_pci_dev(cxlds->dev); 111 143 int d = cxlds->cxl_dvsec; 112 144 bool active = false; 113 - u64 md_status; 114 145 int rc, i; 146 + u32 temp; 115 147 148 + if (id > CXL_DVSEC_RANGE_MAX) 149 + return -EINVAL; 150 + 151 + /* Check MEM ACTIVE bit, up to 60s timeout by default */ 116 152 for (i = media_ready_timeout; i; i--) { 117 - u32 temp; 118 - 119 153 rc = pci_read_config_dword( 120 - pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &temp); 154 + pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(id), &temp); 121 155 if (rc) 122 156 return rc; 123 157 ··· 166 132 "timeout awaiting memory active after %d seconds\n", 167 133 media_ready_timeout); 168 134 return -ETIMEDOUT; 135 + } 136 + 137 + return 0; 138 + } 139 + 140 + /* 141 + * Wait up to @media_ready_timeout for the device to report memory 142 + * active. 143 + */ 144 + int cxl_await_media_ready(struct cxl_dev_state *cxlds) 145 + { 146 + struct pci_dev *pdev = to_pci_dev(cxlds->dev); 147 + int d = cxlds->cxl_dvsec; 148 + int rc, i, hdm_count; 149 + u64 md_status; 150 + u16 cap; 151 + 152 + rc = pci_read_config_word(pdev, 153 + d + CXL_DVSEC_CAP_OFFSET, &cap); 154 + if (rc) 155 + return rc; 156 + 157 + hdm_count = FIELD_GET(CXL_DVSEC_HDM_COUNT_MASK, cap); 158 + for (i = 0; i < hdm_count; i++) { 159 + rc = cxl_dvsec_mem_range_valid(cxlds, i); 160 + if (rc) 161 + return rc; 162 + } 163 + 164 + for (i = 0; i < hdm_count; i++) { 165 + rc = cxl_dvsec_mem_range_active(cxlds, i); 166 + if (rc) 167 + return rc; 169 168 } 170 169 171 170 md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET); ··· 308 241 hdm + CXL_HDM_DECODER_CTRL_OFFSET); 309 242 } 310 243 311 - static int devm_cxl_enable_hdm(struct device *host, struct cxl_hdm *cxlhdm) 244 + int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm) 312 245 { 313 - void __iomem *hdm = cxlhdm->regs.hdm_decoder; 246 + void __iomem *hdm; 314 247 u32 global_ctrl; 315 248 249 + /* 250 + * If the hdm capability was not mapped there is nothing to enable and 251 + * the caller is responsible for what happens next. For example, 252 + * emulate a passthrough decoder. 253 + */ 254 + if (IS_ERR(cxlhdm)) 255 + return 0; 256 + 257 + hdm = cxlhdm->regs.hdm_decoder; 316 258 global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET); 259 + 260 + /* 261 + * If the HDM decoder capability was enabled on entry, skip 262 + * registering disable_hdm() since this decode capability may be 263 + * owned by platform firmware. 264 + */ 265 + if (global_ctrl & CXL_HDM_DECODER_ENABLE) 266 + return 0; 267 + 317 268 writel(global_ctrl | CXL_HDM_DECODER_ENABLE, 318 269 hdm + CXL_HDM_DECODER_CTRL_OFFSET); 319 270 320 - return devm_add_action_or_reset(host, disable_hdm, cxlhdm); 271 + return devm_add_action_or_reset(&port->dev, disable_hdm, cxlhdm); 321 272 } 273 + EXPORT_SYMBOL_NS_GPL(devm_cxl_enable_hdm, CXL); 322 274 323 275 int cxl_dvsec_rr_decode(struct device *dev, int d, 324 276 struct cxl_endpoint_dvsec_info *info) ··· 511 425 if (info->mem_enabled) 512 426 return 0; 513 427 514 - rc = devm_cxl_enable_hdm(&port->dev, cxlhdm); 428 + rc = devm_cxl_enable_hdm(port, cxlhdm); 515 429 if (rc) 516 430 return rc; 517 431
+3 -4
drivers/cxl/core/port.c
··· 750 750 751 751 parent_port = parent_dport ? parent_dport->port : NULL; 752 752 if (IS_ERR(port)) { 753 - dev_dbg(uport, "Failed to add %s%s%s%s: %ld\n", 754 - dev_name(&port->dev), 755 - parent_port ? " to " : "", 753 + dev_dbg(uport, "Failed to add%s%s%s: %ld\n", 754 + parent_port ? " port to " : "", 756 755 parent_port ? dev_name(&parent_port->dev) : "", 757 - parent_port ? "" : " (root port)", 756 + parent_port ? "" : " root port", 758 757 PTR_ERR(port)); 759 758 } else { 760 759 dev_dbg(uport, "%s added%s%s%s\n",
+1
drivers/cxl/cxl.h
··· 710 710 struct cxl_hdm; 711 711 struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port, 712 712 struct cxl_endpoint_dvsec_info *info); 713 + int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm); 713 714 int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm, 714 715 struct cxl_endpoint_dvsec_info *info); 715 716 int devm_cxl_add_passthrough_decoder(struct cxl_port *port);
+2
drivers/cxl/cxlmem.h
··· 266 266 * @regs: Parsed register blocks 267 267 * @cxl_dvsec: Offset to the PCIe device DVSEC 268 268 * @rcd: operating in RCD mode (CXL 3.0 9.11.8 CXL Devices Attached to an RCH) 269 + * @media_ready: Indicate whether the device media is usable 269 270 * @payload_size: Size of space for payload 270 271 * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) 271 272 * @lsa_size: Size of Label Storage Area ··· 304 303 int cxl_dvsec; 305 304 306 305 bool rcd; 306 + bool media_ready; 307 307 size_t payload_size; 308 308 size_t lsa_size; 309 309 struct mutex mbox_mutex; /* Protects device mailbox and firmware */
+2
drivers/cxl/cxlpci.h
··· 31 31 #define CXL_DVSEC_RANGE_BASE_LOW(i) (0x24 + (i * 0x10)) 32 32 #define CXL_DVSEC_MEM_BASE_LOW_MASK GENMASK(31, 28) 33 33 34 + #define CXL_DVSEC_RANGE_MAX 2 35 + 34 36 /* CXL 2.0 8.1.4: Non-CXL Function Map DVSEC */ 35 37 #define CXL_DVSEC_FUNCTION_MAP 2 36 38
+3
drivers/cxl/mem.c
··· 124 124 struct dentry *dentry; 125 125 int rc; 126 126 127 + if (!cxlds->media_ready) 128 + return -EBUSY; 129 + 127 130 /* 128 131 * Someone is trying to reattach this device after it lost its port 129 132 * connection (an endpoint port previously registered by this memdev was
+6
drivers/cxl/pci.c
··· 708 708 if (rc) 709 709 dev_dbg(&pdev->dev, "Failed to map RAS capability.\n"); 710 710 711 + rc = cxl_await_media_ready(cxlds); 712 + if (rc == 0) 713 + cxlds->media_ready = true; 714 + else 715 + dev_warn(&pdev->dev, "Media not active (%d)\n", rc); 716 + 711 717 rc = cxl_pci_setup_mailbox(cxlds); 712 718 if (rc) 713 719 return rc;
+9 -11
drivers/cxl/port.c
··· 60 60 static int cxl_switch_port_probe(struct cxl_port *port) 61 61 { 62 62 struct cxl_hdm *cxlhdm; 63 - int rc; 63 + int rc, nr_dports; 64 64 65 - rc = devm_cxl_port_enumerate_dports(port); 66 - if (rc < 0) 67 - return rc; 65 + nr_dports = devm_cxl_port_enumerate_dports(port); 66 + if (nr_dports < 0) 67 + return nr_dports; 68 68 69 69 cxlhdm = devm_cxl_setup_hdm(port, NULL); 70 + rc = devm_cxl_enable_hdm(port, cxlhdm); 71 + if (rc) 72 + return rc; 73 + 70 74 if (!IS_ERR(cxlhdm)) 71 75 return devm_cxl_enumerate_decoders(cxlhdm, NULL); 72 76 ··· 79 75 return PTR_ERR(cxlhdm); 80 76 } 81 77 82 - if (rc == 1) { 78 + if (nr_dports == 1) { 83 79 dev_dbg(&port->dev, "Fallback to passthrough decoder\n"); 84 80 return devm_cxl_add_passthrough_decoder(port); 85 81 } ··· 116 112 rc = cxl_hdm_decode_init(cxlds, cxlhdm, &info); 117 113 if (rc) 118 114 return rc; 119 - 120 - rc = cxl_await_media_ready(cxlds); 121 - if (rc) { 122 - dev_err(&port->dev, "Media not active (%d)\n", rc); 123 - return rc; 124 - } 125 115 126 116 rc = devm_cxl_enumerate_decoders(cxlhdm, &info); 127 117 if (rc)
+10 -7
drivers/dma/at_hdmac.c
··· 132 132 #define ATC_DST_PIP BIT(12) /* Destination Picture-in-Picture enabled */ 133 133 #define ATC_SRC_DSCR_DIS BIT(16) /* Src Descriptor fetch disable */ 134 134 #define ATC_DST_DSCR_DIS BIT(20) /* Dst Descriptor fetch disable */ 135 - #define ATC_FC GENMASK(22, 21) /* Choose Flow Controller */ 135 + #define ATC_FC GENMASK(23, 21) /* Choose Flow Controller */ 136 136 #define ATC_FC_MEM2MEM 0x0 /* Mem-to-Mem (DMA) */ 137 137 #define ATC_FC_MEM2PER 0x1 /* Mem-to-Periph (DMA) */ 138 138 #define ATC_FC_PER2MEM 0x2 /* Periph-to-Mem (DMA) */ ··· 153 153 #define ATC_AUTO BIT(31) /* Auto multiple buffer tx enable */ 154 154 155 155 /* Bitfields in CFG */ 156 - #define ATC_PER_MSB(h) ((0x30U & (h)) >> 4) /* Extract most significant bits of a handshaking identifier */ 157 - 158 156 #define ATC_SRC_PER GENMASK(3, 0) /* Channel src rq associated with periph handshaking ifc h */ 159 157 #define ATC_DST_PER GENMASK(7, 4) /* Channel dst rq associated with periph handshaking ifc h */ 160 158 #define ATC_SRC_REP BIT(8) /* Source Replay Mod */ ··· 179 181 #define ATC_DPIP_HOLE GENMASK(15, 0) 180 182 #define ATC_DPIP_BOUNDARY GENMASK(25, 16) 181 183 182 - #define ATC_SRC_PER_ID(id) (FIELD_PREP(ATC_SRC_PER_MSB, (id)) | \ 183 - FIELD_PREP(ATC_SRC_PER, (id))) 184 - #define ATC_DST_PER_ID(id) (FIELD_PREP(ATC_DST_PER_MSB, (id)) | \ 185 - FIELD_PREP(ATC_DST_PER, (id))) 184 + #define ATC_PER_MSB GENMASK(5, 4) /* Extract MSBs of a handshaking identifier */ 185 + #define ATC_SRC_PER_ID(id) \ 186 + ({ typeof(id) _id = (id); \ 187 + FIELD_PREP(ATC_SRC_PER_MSB, FIELD_GET(ATC_PER_MSB, _id)) | \ 188 + FIELD_PREP(ATC_SRC_PER, _id); }) 189 + #define ATC_DST_PER_ID(id) \ 190 + ({ typeof(id) _id = (id); \ 191 + FIELD_PREP(ATC_DST_PER_MSB, FIELD_GET(ATC_PER_MSB, _id)) | \ 192 + FIELD_PREP(ATC_DST_PER, _id); }) 186 193 187 194 188 195
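The at_hdmac rework routes the two MSBs of a 6-bit handshaking ID into a separate `*_PER_MSB` register field with `FIELD_GET()`/`FIELD_PREP()` instead of the old open-coded shift. Minimal userspace versions of those helpers make the packing explicit (a sketch; the real macros live in the kernel's `linux/bitfield.h`, and the MSB field position here is a stand-in):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal GENMASK/FIELD_PREP/FIELD_GET for 32-bit fields. */
#define GENMASK32(h, l)		(((~0u) >> (31 - (h))) & ((~0u) << (l)))
#define FIELD_PREP32(mask, val)	(((val) << __builtin_ctz(mask)) & (mask))
#define FIELD_GET32(mask, reg)	(((reg) & (mask)) >> __builtin_ctz(mask))

#define ATC_SRC_PER	GENMASK32(3, 0)		/* low 4 bits of the ID */
#define ATC_SRC_PER_MSB	GENMASK32(11, 10)	/* stand-in MSB field position */
#define ATC_PER_MSB	GENMASK32(5, 4)		/* bits 5:4 of the raw ID */

/* Split a 6-bit handshaking ID across the two register fields, as the
 * patched ATC_SRC_PER_ID() macro does. */
static uint32_t src_per_id(uint32_t id)
{
	return FIELD_PREP32(ATC_SRC_PER_MSB, FIELD_GET32(ATC_PER_MSB, id)) |
	       FIELD_PREP32(ATC_SRC_PER, id & 0xf);
}
```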
+5 -2
drivers/dma/at_xdmac.c
··· 1102 1102 NULL, 1103 1103 src_addr, dst_addr, 1104 1104 xt, xt->sgl); 1105 + if (!first) 1106 + return NULL; 1105 1107 1106 1108 /* Length of the block is (BLEN+1) microblocks. */ 1107 1109 for (i = 0; i < xt->numf - 1; i++) ··· 1134 1132 src_addr, dst_addr, 1135 1133 xt, chunk); 1136 1134 if (!desc) { 1137 - list_splice_tail_init(&first->descs_list, 1138 - &atchan->free_descs_list); 1135 + if (first) 1136 + list_splice_tail_init(&first->descs_list, 1137 + &atchan->free_descs_list); 1139 1138 return NULL; 1140 1139 } 1141 1140
-1
drivers/dma/idxd/cdev.c
··· 277 277 if (wq_dedicated(wq)) { 278 278 rc = idxd_wq_set_pasid(wq, pasid); 279 279 if (rc < 0) { 280 - iommu_sva_unbind_device(sva); 281 280 dev_err(dev, "wq set pasid failed: %d\n", rc); 282 281 goto failed_set_pasid; 283 282 }
+4 -4
drivers/dma/pl330.c
··· 1050 1050 return true; 1051 1051 } 1052 1052 1053 - static bool _start(struct pl330_thread *thrd) 1053 + static bool pl330_start_thread(struct pl330_thread *thrd) 1054 1054 { 1055 1055 switch (_state(thrd)) { 1056 1056 case PL330_STATE_FAULT_COMPLETING: ··· 1702 1702 thrd->req_running = -1; 1703 1703 1704 1704 /* Get going again ASAP */ 1705 - _start(thrd); 1705 + pl330_start_thread(thrd); 1706 1706 1707 1707 /* For now, just make a list of callbacks to be done */ 1708 1708 list_add_tail(&descdone->rqd, &pl330->req_done); ··· 2089 2089 } else { 2090 2090 /* Make sure the PL330 Channel thread is active */ 2091 2091 spin_lock(&pch->thread->dmac->lock); 2092 - _start(pch->thread); 2092 + pl330_start_thread(pch->thread); 2093 2093 spin_unlock(&pch->thread->dmac->lock); 2094 2094 } 2095 2095 ··· 2107 2107 if (power_down) { 2108 2108 pch->active = true; 2109 2109 spin_lock(&pch->thread->dmac->lock); 2110 - _start(pch->thread); 2110 + pl330_start_thread(pch->thread); 2111 2111 spin_unlock(&pch->thread->dmac->lock); 2112 2112 power_down = false; 2113 2113 }
+2 -2
drivers/dma/ti/k3-udma.c
··· 5527 5527 return ret; 5528 5528 } 5529 5529 5530 - static int udma_pm_suspend(struct device *dev) 5530 + static int __maybe_unused udma_pm_suspend(struct device *dev) 5531 5531 { 5532 5532 struct udma_dev *ud = dev_get_drvdata(dev); 5533 5533 struct dma_device *dma_dev = &ud->ddev; ··· 5549 5549 return 0; 5550 5550 } 5551 5551 5552 - static int udma_pm_resume(struct device *dev) 5552 + static int __maybe_unused udma_pm_resume(struct device *dev) 5553 5553 { 5554 5554 struct udma_dev *ud = dev_get_drvdata(dev); 5555 5555 struct dma_device *dma_dev = &ud->ddev;
+16 -5
drivers/firmware/arm_ffa/bus.c
··· 15 15 16 16 #include "common.h" 17 17 18 + static DEFINE_IDA(ffa_bus_id); 19 + 18 20 static int ffa_device_match(struct device *dev, struct device_driver *drv) 19 21 { 20 22 const struct ffa_device_id *id_table; ··· 55 53 { 56 54 struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver); 57 55 58 - ffa_drv->remove(to_ffa_dev(dev)); 56 + if (ffa_drv->remove) 57 + ffa_drv->remove(to_ffa_dev(dev)); 59 58 } 60 59 61 60 static int ffa_device_uevent(const struct device *dev, struct kobj_uevent_env *env) ··· 133 130 { 134 131 struct ffa_device *ffa_dev = to_ffa_dev(dev); 135 132 133 + ida_free(&ffa_bus_id, ffa_dev->id); 136 134 kfree(ffa_dev); 137 135 } 138 136 ··· 174 170 struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id, 175 171 const struct ffa_ops *ops) 176 172 { 177 - int ret; 173 + int id, ret; 178 174 struct device *dev; 179 175 struct ffa_device *ffa_dev; 180 176 181 - ffa_dev = kzalloc(sizeof(*ffa_dev), GFP_KERNEL); 182 - if (!ffa_dev) 177 + id = ida_alloc_min(&ffa_bus_id, 1, GFP_KERNEL); 178 + if (id < 0) 183 179 return NULL; 180 + 181 + ffa_dev = kzalloc(sizeof(*ffa_dev), GFP_KERNEL); 182 + if (!ffa_dev) { 183 + ida_free(&ffa_bus_id, id); 184 + return NULL; 185 + } 184 186 185 187 dev = &ffa_dev->dev; 186 188 dev->bus = &ffa_bus_type; 187 189 dev->release = ffa_release_device; 188 - dev_set_name(&ffa_dev->dev, "arm-ffa-%04x", vm_id); 190 + dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id); 189 191 190 192 ffa_dev->vm_id = vm_id; 191 193 ffa_dev->ops = ops; ··· 227 217 { 228 218 ffa_devices_unregister(); 229 219 bus_unregister(&ffa_bus_type); 220 + ida_destroy(&ffa_bus_id); 230 221 }
+8 -1
drivers/firmware/arm_ffa/driver.c
··· 193 193 int idx, count, flags = 0, sz, buf_sz; 194 194 ffa_value_t partition_info; 195 195 196 - if (!buffer || !num_partitions) /* Just get the count for now */ 196 + if (drv_info->version > FFA_VERSION_1_0 && 197 + (!buffer || !num_partitions)) /* Just get the count for now */ 197 198 flags = PARTITION_INFO_GET_RETURN_COUNT_ONLY; 198 199 199 200 mutex_lock(&drv_info->rx_lock); ··· 421 420 ep_mem_access->receiver = args->attrs[idx].receiver; 422 421 ep_mem_access->attrs = args->attrs[idx].attrs; 423 422 ep_mem_access->composite_off = COMPOSITE_OFFSET(args->nattrs); 423 + ep_mem_access->flag = 0; 424 + ep_mem_access->reserved = 0; 424 425 } 426 + mem_region->reserved_0 = 0; 427 + mem_region->reserved_1 = 0; 425 428 mem_region->ep_count = args->nattrs; 426 429 427 430 composite = buffer + COMPOSITE_OFFSET(args->nattrs); 428 431 composite->total_pg_cnt = ffa_get_num_pages_sg(args->sg); 429 432 composite->addr_range_cnt = num_entries; 433 + composite->reserved = 0; 430 434 431 435 length = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, num_entries); 432 436 frag_len = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, 0); ··· 466 460 467 461 constituents->address = sg_phys(args->sg); 468 462 constituents->pg_cnt = args->sg->length / FFA_PAGE_SIZE; 463 + constituents->reserved = 0; 469 464 constituents++; 470 465 frag_len += sizeof(struct ffa_mem_region_addr_range); 471 466 } while ((args->sg = sg_next(args->sg)));
+1 -1
drivers/firmware/arm_scmi/raw_mode.c
··· 1066 1066 1067 1067 raw->wait_wq = alloc_workqueue("scmi-raw-wait-wq-%d", 1068 1068 WQ_UNBOUND | WQ_FREEZABLE | 1069 - WQ_HIGHPRI, WQ_SYSFS, raw->id); 1069 + WQ_HIGHPRI | WQ_SYSFS, 0, raw->id); 1070 1070 if (!raw->wait_wq) 1071 1071 return -ENOMEM; 1072 1072
+1 -1
drivers/gpio/Kconfig
··· 897 897 help 898 898 This option enables support for GPIOs found on Fintek Super-I/O 899 899 chips F71869, F71869A, F71882FG, F71889F and F81866. 900 - As well as Nuvoton Super-I/O chip NCT6116D. 900 + As well as Nuvoton Super-I/O chip NCT6126D. 901 901 902 902 To compile this driver as a module, choose M here: the module will 903 903 be called f7188x-gpio.
+14 -14
drivers/gpio/gpio-f7188x.c
··· 48 48 /* 49 49 * Nuvoton devices. 50 50 */ 51 - #define SIO_NCT6116D_ID 0xD283 /* NCT6116D chipset ID */ 51 + #define SIO_NCT6126D_ID 0xD283 /* NCT6126D chipset ID */ 52 52 53 53 #define SIO_LD_GPIO_NUVOTON 0x07 /* GPIO logical device */ 54 54 ··· 62 62 f81866, 63 63 f81804, 64 64 f81865, 65 - nct6116d, 65 + nct6126d, 66 66 }; 67 67 68 68 static const char * const f7188x_names[] = { ··· 74 74 "f81866", 75 75 "f81804", 76 76 "f81865", 77 - "nct6116d", 77 + "nct6126d", 78 78 }; 79 79 80 80 struct f7188x_sio { ··· 187 187 /* Output mode register (0:open drain 1:push-pull). */ 188 188 #define f7188x_gpio_out_mode(base) ((base) + 3) 189 189 190 - #define f7188x_gpio_dir_invert(type) ((type) == nct6116d) 191 - #define f7188x_gpio_data_single(type) ((type) == nct6116d) 190 + #define f7188x_gpio_dir_invert(type) ((type) == nct6126d) 191 + #define f7188x_gpio_data_single(type) ((type) == nct6126d) 192 192 193 193 static struct f7188x_gpio_bank f71869_gpio_bank[] = { 194 194 F7188X_GPIO_BANK(0, 6, 0xF0, DRVNAME "-0"), ··· 274 274 F7188X_GPIO_BANK(60, 5, 0x90, DRVNAME "-6"), 275 275 }; 276 276 277 - static struct f7188x_gpio_bank nct6116d_gpio_bank[] = { 277 + static struct f7188x_gpio_bank nct6126d_gpio_bank[] = { 278 278 F7188X_GPIO_BANK(0, 8, 0xE0, DRVNAME "-0"), 279 279 F7188X_GPIO_BANK(10, 8, 0xE4, DRVNAME "-1"), 280 280 F7188X_GPIO_BANK(20, 8, 0xE8, DRVNAME "-2"), ··· 282 282 F7188X_GPIO_BANK(40, 8, 0xF0, DRVNAME "-4"), 283 283 F7188X_GPIO_BANK(50, 8, 0xF4, DRVNAME "-5"), 284 284 F7188X_GPIO_BANK(60, 8, 0xF8, DRVNAME "-6"), 285 - F7188X_GPIO_BANK(70, 1, 0xFC, DRVNAME "-7"), 285 + F7188X_GPIO_BANK(70, 8, 0xFC, DRVNAME "-7"), 286 286 }; 287 287 288 288 static int f7188x_gpio_get_direction(struct gpio_chip *chip, unsigned offset) ··· 490 490 data->nr_bank = ARRAY_SIZE(f81865_gpio_bank); 491 491 data->bank = f81865_gpio_bank; 492 492 break; 493 - case nct6116d: 494 - data->nr_bank = ARRAY_SIZE(nct6116d_gpio_bank); 495 - data->bank = nct6116d_gpio_bank; 493 + case nct6126d: 494 + data->nr_bank = ARRAY_SIZE(nct6126d_gpio_bank); 495 + data->bank = nct6126d_gpio_bank; 496 496 break; 497 497 default: 498 498 return -ENODEV; ··· 559 559 case SIO_F81865_ID: 560 560 sio->type = f81865; 561 561 break; 562 - case SIO_NCT6116D_ID: 562 + case SIO_NCT6126D_ID: 563 563 sio->device = SIO_LD_GPIO_NUVOTON; 564 - sio->type = nct6116d; 564 + sio->type = nct6126d; 565 565 break; 566 566 default: 567 567 pr_info("Unsupported Fintek device 0x%04x\n", devid); ··· 569 569 } 570 570 571 571 /* double check manufacturer where possible */ 572 - if (sio->type != nct6116d) { 572 + if (sio->type != nct6126d) { 573 573 manid = superio_inw(addr, SIO_FINTEK_MANID); 574 574 if (manid != SIO_FINTEK_ID) { 575 575 pr_debug("Not a Fintek device at 0x%08x\n", addr); ··· 581 581 err = 0; 582 582 583 583 pr_info("Found %s at %#x\n", f7188x_names[sio->type], (unsigned int)addr); 584 - if (sio->type != nct6116d) 584 + if (sio->type != nct6126d) 585 585 pr_info(" revision %d\n", superio_inb(addr, SIO_FINTEK_DEVREV)); 586 586 587 587 err:
+1 -1
drivers/gpio/gpio-mockup.c
··· 369 369 priv->offset = i; 370 370 priv->desc = gpiochip_get_desc(gc, i); 371 371 372 - debugfs_create_file(name, 0200, chip->dbg_dir, priv, 372 + debugfs_create_file(name, 0600, chip->dbg_dir, priv, 373 373 &gpio_mockup_debugfs_ops); 374 374 } 375 375 }
+2
drivers/gpio/gpiolib.c
··· 209 209 break; 210 210 /* nope, check the space right after the chip */ 211 211 base = gdev->base + gdev->ngpio; 212 + if (base < GPIO_DYNAMIC_BASE) 213 + base = GPIO_DYNAMIC_BASE; 212 214 } 213 215 214 216 if (gpio_is_valid(base)) {
+3 -1
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
··· 6892 6892 return r; 6893 6893 6894 6894 r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr); 6895 - if (unlikely(r != 0)) 6895 + if (unlikely(r != 0)) { 6896 + amdgpu_bo_unreserve(ring->mqd_obj); 6896 6897 return r; 6898 + } 6897 6899 6898 6900 gfx_v10_0_kiq_init_queue(ring); 6899 6901 amdgpu_bo_kunmap(ring->mqd_obj);
+3 -1
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 3617 3617 return r; 3618 3618 3619 3619 r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr); 3620 - if (unlikely(r != 0)) 3620 + if (unlikely(r != 0)) { 3621 + amdgpu_bo_unreserve(ring->mqd_obj); 3621 3622 return r; 3623 + } 3622 3624 3623 3625 gfx_v9_0_kiq_init_queue(ring); 3624 3626 amdgpu_bo_kunmap(ring->mqd_obj);
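The gfx_v9_0/gfx_v10_0 hunks above plug a leaked buffer reservation: when `amdgpu_bo_kmap()` fails after `amdgpu_bo_reserve()` succeeded, the early return must first undo the reserve. A minimal userspace sketch of that unwind rule (the helper names below are illustrative, not the amdgpu API):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical acquire/release pair standing in for reserve/unreserve:
 * any resource taken before a failing step must be released on the
 * early-return path, exactly what the fix adds before "return r". */
static int reserved;

static int obj_reserve(void)  { reserved++; return 0; }
static void obj_unreserve(void) { reserved--; }

/* fail_kmap simulates the kmap step failing after a successful reserve */
static int init_queue(bool fail_kmap)
{
	int r = obj_reserve();

	if (r)
		return r;

	if (fail_kmap) {
		obj_unreserve();	/* the fix: unwind before bailing out */
		return -1;
	}

	/* ... program the queue ... */
	obj_unreserve();
	return 0;
}
```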
+7 -1
drivers/gpu/drm/amd/amdgpu/psp_v10_0.c
··· 57 57 if (err) 58 58 return err; 59 59 60 - return psp_init_ta_microcode(psp, ucode_prefix); 60 + err = psp_init_ta_microcode(psp, ucode_prefix); 61 + if ((adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 1, 0)) && 62 + (adev->pdev->revision == 0xa1) && 63 + (psp->securedisplay_context.context.bin_desc.fw_version >= 0x27000008)) { 64 + adev->psp.securedisplay_context.context.bin_desc.size_bytes = 0; 65 + } 66 + return err; 61 67 } 62 68 63 69 static int psp_v10_0_ring_create(struct psp_context *psp,
+15 -10
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 2479 2479 if (acrtc && state->stream_status[i].plane_count != 0) { 2480 2480 irq_source = IRQ_TYPE_PFLIP + acrtc->otg_inst; 2481 2481 rc = dc_interrupt_set(adev->dm.dc, irq_source, enable) ? 0 : -EBUSY; 2482 - DRM_DEBUG_VBL("crtc %d - vupdate irq %sabling: r=%d\n", 2483 - acrtc->crtc_id, enable ? "en" : "dis", rc); 2484 2482 if (rc) 2485 2483 DRM_WARN("Failed to %s pflip interrupts\n", 2486 2484 enable ? "enable" : "disable"); 2487 2485 2488 2486 if (enable) { 2489 - rc = amdgpu_dm_crtc_enable_vblank(&acrtc->base); 2490 - if (rc) 2491 - DRM_WARN("Failed to enable vblank interrupts\n"); 2492 - } else { 2493 - amdgpu_dm_crtc_disable_vblank(&acrtc->base); 2494 - } 2487 + if (amdgpu_dm_crtc_vrr_active(to_dm_crtc_state(acrtc->base.state))) 2488 + rc = amdgpu_dm_crtc_set_vupdate_irq(&acrtc->base, true); 2489 + } else 2490 + rc = amdgpu_dm_crtc_set_vupdate_irq(&acrtc->base, false); 2495 2491 2492 + if (rc) 2493 + DRM_WARN("Failed to %sable vupdate interrupt\n", enable ? "en" : "dis"); 2494 + 2495 + irq_source = IRQ_TYPE_VBLANK + acrtc->otg_inst; 2496 + /* During gpu-reset we disable and then enable vblank irq, so 2497 + * don't use amdgpu_irq_get/put() to avoid refcount change. 2498 + */ 2499 + if (!dc_interrupt_set(adev->dm.dc, irq_source, enable)) 2500 + DRM_WARN("Failed to %sable vblank interrupt\n", enable ? "en" : "dis"); 2496 2501 } 2497 2502 } 2498 2503 ··· 2857 2852 * this is the case when traversing through already created 2858 2853 * MST connectors, should be skipped 2859 2854 */ 2860 - if (aconnector->dc_link->type == dc_connection_mst_branch) 2855 + if (aconnector && aconnector->mst_root) 2861 2856 continue; 2862 2857 2863 2858 mutex_lock(&aconnector->hpd_lock); ··· 6742 6737 int clock, bpp = 0; 6743 6738 bool is_y420 = false; 6744 6739 6745 - if (!aconnector->mst_output_port || !aconnector->dc_sink) 6740 + if (!aconnector->mst_output_port) 6746 6741 return 0; 6747 6742 6748 6743 mst_port = aconnector->mst_output_port;
+3 -13
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
··· 146 146 147 147 static inline int dm_set_vblank(struct drm_crtc *crtc, bool enable) 148 148 { 149 - enum dc_irq_source irq_source; 150 149 struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc); 151 150 struct amdgpu_device *adev = drm_to_adev(crtc->dev); 152 151 struct dm_crtc_state *acrtc_state = to_dm_crtc_state(crtc->state); ··· 168 169 if (rc) 169 170 return rc; 170 171 171 - if (amdgpu_in_reset(adev)) { 172 - irq_source = IRQ_TYPE_VBLANK + acrtc->otg_inst; 173 - /* During gpu-reset we disable and then enable vblank irq, so 174 - * don't use amdgpu_irq_get/put() to avoid refcount change. 175 - */ 176 - if (!dc_interrupt_set(adev->dm.dc, irq_source, enable)) 177 - rc = -EBUSY; 178 - } else { 179 - rc = (enable) 180 - ? amdgpu_irq_get(adev, &adev->crtc_irq, acrtc->crtc_id) 181 - : amdgpu_irq_put(adev, &adev->crtc_irq, acrtc->crtc_id); 182 - } 172 + rc = (enable) 173 + ? amdgpu_irq_get(adev, &adev->crtc_irq, acrtc->crtc_id) 174 + : amdgpu_irq_put(adev, &adev->crtc_irq, acrtc->crtc_id); 183 175 184 176 if (rc) 185 177 return rc;
+5 -7
drivers/gpu/drm/amd/pm/amdgpu_pm.c
··· 871 871 } 872 872 if (ret == -ENOENT) { 873 873 size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf); 874 - if (size > 0) { 875 - size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf + size); 876 - size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf + size); 877 - size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf + size); 878 - size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf + size); 879 - size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf + size); 880 - } 874 + size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf + size); 875 + size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf + size); 876 + size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf + size); 877 + size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf + size); 878 + size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf + size); 881 879 } 882 880 883 881 if (size == 0)
+1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 125 125 MSG_MAP(ArmD3, PPSMC_MSG_ArmD3, 0), 126 126 MSG_MAP(AllowGpo, PPSMC_MSG_SetGpoAllow, 0), 127 127 MSG_MAP(GetPptLimit, PPSMC_MSG_GetPptLimit, 0), 128 + MSG_MAP(NotifyPowerSource, PPSMC_MSG_NotifyPowerSource, 0), 128 129 }; 129 130 130 131 static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = {
+2 -20
drivers/gpu/drm/drm_managed.c
··· 264 264 } 265 265 EXPORT_SYMBOL(drmm_kfree); 266 266 267 - static void drmm_mutex_release(struct drm_device *dev, void *res) 267 + void __drmm_mutex_release(struct drm_device *dev, void *res) 268 268 { 269 269 struct mutex *lock = res; 270 270 271 271 mutex_destroy(lock); 272 272 } 273 - 274 - /** 275 - * drmm_mutex_init - &drm_device-managed mutex_init() 276 - * @dev: DRM device 277 - * @lock: lock to be initialized 278 - * 279 - * Returns: 280 - * 0 on success, or a negative errno code otherwise. 281 - * 282 - * This is a &drm_device-managed version of mutex_init(). The initialized 283 - * lock is automatically destroyed on the final drm_dev_put(). 284 - */ 285 - int drmm_mutex_init(struct drm_device *dev, struct mutex *lock) 286 - { 287 - mutex_init(lock); 288 - 289 - return drmm_add_action_or_reset(dev, drmm_mutex_release, lock); 290 - } 291 - EXPORT_SYMBOL(drmm_mutex_init); 273 + EXPORT_SYMBOL(__drmm_mutex_release);
+1 -1
drivers/gpu/drm/drm_panel_orientation_quirks.c
··· 179 179 }, { /* AYA NEO AIR */ 180 180 .matches = { 181 181 DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"), 182 - DMI_MATCH(DMI_BOARD_NAME, "AIR"), 182 + DMI_MATCH(DMI_PRODUCT_NAME, "AIR"), 183 183 }, 184 184 .driver_data = (void *)&lcd1080x1920_leftside_up, 185 185 }, { /* AYA NEO NEXT */
+10 -2
drivers/gpu/drm/i915/display/intel_display.c
··· 1851 1851 1852 1852 intel_disable_shared_dpll(old_crtc_state); 1853 1853 1854 - intel_encoders_post_pll_disable(state, crtc); 1854 + if (!intel_crtc_is_bigjoiner_slave(old_crtc_state)) { 1855 + struct intel_crtc *slave_crtc; 1855 1856 1856 - intel_dmc_disable_pipe(i915, crtc->pipe); 1857 + intel_encoders_post_pll_disable(state, crtc); 1858 + 1859 + intel_dmc_disable_pipe(i915, crtc->pipe); 1860 + 1861 + for_each_intel_crtc_in_pipe_mask(&i915->drm, slave_crtc, 1862 + intel_crtc_bigjoiner_slave_pipes(old_crtc_state)) 1863 + intel_dmc_disable_pipe(i915, slave_crtc->pipe); 1864 + } 1857 1865 } 1858 1866 1859 1867 static void i9xx_pfit_enable(const struct intel_crtc_state *crtc_state)
+5
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 642 642 if (funcs->pixpllc_atomic_update) 643 643 funcs->pixpllc_atomic_update(crtc, old_state); 644 644 645 + if (crtc_state->gamma_lut) 646 + mgag200_crtc_set_gamma(mdev, format, crtc_state->gamma_lut->data); 647 + else 648 + mgag200_crtc_set_gamma_linear(mdev, format); 649 + 645 650 mgag200_enable_display(mdev); 646 651 647 652 if (funcs->enable_vidrst)
+1 -1
drivers/gpu/drm/pl111/pl111_display.c
··· 53 53 { 54 54 struct drm_device *drm = pipe->crtc.dev; 55 55 struct pl111_drm_dev_private *priv = drm->dev_private; 56 - u32 cpp = priv->variant->fb_bpp / 8; 56 + u32 cpp = DIV_ROUND_UP(priv->variant->fb_depth, 8); 57 57 u64 bw; 58 58 59 59 /*
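The pl111 hunk above replaces `fb_bpp / 8` with `DIV_ROUND_UP(fb_depth, 8)`, which matters once a variant reports a 15-bit depth: truncating division would undercount a pixel as 1 byte. A standalone sketch of that rounding (the kernel macro is reproduced here for illustration):

```c
#include <assert.h>

/* Kernel-style DIV_ROUND_UP: integer ceiling division. With fb_depth = 15,
 * plain division by 8 would report 1 byte per pixel and undersize the
 * memory-bandwidth estimate; rounding up yields the correct 2 bytes. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static unsigned int bytes_per_pixel(unsigned int fb_depth)
{
	return DIV_ROUND_UP(fb_depth, 8);
}
```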
+2 -2
drivers/gpu/drm/pl111/pl111_drm.h
··· 114 114 * extensions to the control register 115 115 * @formats: array of supported pixel formats on this variant 116 116 * @nformats: the length of the array of supported pixel formats 117 - * @fb_bpp: desired bits per pixel on the default framebuffer 117 + * @fb_depth: desired depth per pixel on the default framebuffer 118 118 */ 119 119 struct pl111_variant_data { 120 120 const char *name; ··· 126 126 bool st_bitmux_control; 127 127 const u32 *formats; 128 128 unsigned int nformats; 129 - unsigned int fb_bpp; 129 + unsigned int fb_depth; 130 130 }; 131 131 132 132 struct pl111_drm_dev_private {
+4 -4
drivers/gpu/drm/pl111/pl111_drv.c
··· 308 308 if (ret < 0) 309 309 goto dev_put; 310 310 311 - drm_fbdev_dma_setup(drm, priv->variant->fb_bpp); 311 + drm_fbdev_dma_setup(drm, priv->variant->fb_depth); 312 312 313 313 return 0; 314 314 ··· 351 351 .is_pl110 = true, 352 352 .formats = pl110_pixel_formats, 353 353 .nformats = ARRAY_SIZE(pl110_pixel_formats), 354 - .fb_bpp = 16, 354 + .fb_depth = 16, 355 355 }; 356 356 357 357 /* RealView, Versatile Express etc use this modern variant */ ··· 376 376 .name = "PL111", 377 377 .formats = pl111_pixel_formats, 378 378 .nformats = ARRAY_SIZE(pl111_pixel_formats), 379 - .fb_bpp = 32, 379 + .fb_depth = 32, 380 380 }; 381 381 382 382 static const u32 pl110_nomadik_pixel_formats[] = { ··· 405 405 .is_lcdc = true, 406 406 .st_bitmux_control = true, 407 407 .broken_vblank = true, 408 - .fb_bpp = 16, 408 + .fb_depth = 16, 409 409 }; 410 410 411 411 static const struct amba_id pl111_id_table[] = {
+5 -5
drivers/gpu/drm/pl111/pl111_versatile.c
··· 316 316 .broken_vblank = true, 317 317 .formats = pl110_integrator_pixel_formats, 318 318 .nformats = ARRAY_SIZE(pl110_integrator_pixel_formats), 319 - .fb_bpp = 16, 319 + .fb_depth = 16, 320 320 }; 321 321 322 322 /* ··· 330 330 .broken_vblank = true, 331 331 .formats = pl110_integrator_pixel_formats, 332 332 .nformats = ARRAY_SIZE(pl110_integrator_pixel_formats), 333 - .fb_bpp = 16, 333 + .fb_depth = 15, 334 334 }; 335 335 336 336 /* ··· 343 343 .external_bgr = true, 344 344 .formats = pl110_versatile_pixel_formats, 345 345 .nformats = ARRAY_SIZE(pl110_versatile_pixel_formats), 346 - .fb_bpp = 16, 346 + .fb_depth = 16, 347 347 }; 348 348 349 349 /* ··· 355 355 .name = "PL111 RealView", 356 356 .formats = pl111_realview_pixel_formats, 357 357 .nformats = ARRAY_SIZE(pl111_realview_pixel_formats), 358 - .fb_bpp = 16, 358 + .fb_depth = 16, 359 359 }; 360 360 361 361 /* ··· 367 367 .name = "PL111 Versatile Express", 368 368 .formats = pl111_realview_pixel_formats, 369 369 .nformats = ARRAY_SIZE(pl111_realview_pixel_formats), 370 - .fb_bpp = 16, 370 + .fb_depth = 16, 371 371 .broken_clockdivider = true, 372 372 }; 373 373
+10
drivers/gpu/drm/radeon/radeon_irq_kms.c
··· 99 99 100 100 static void radeon_dp_work_func(struct work_struct *work) 101 101 { 102 + struct radeon_device *rdev = container_of(work, struct radeon_device, 103 + dp_work); 104 + struct drm_device *dev = rdev->ddev; 105 + struct drm_mode_config *mode_config = &dev->mode_config; 106 + struct drm_connector *connector; 107 + 108 + mutex_lock(&mode_config->mutex); 109 + list_for_each_entry(connector, &mode_config->connector_list, head) 110 + radeon_connector_hotplug(connector); 111 + mutex_unlock(&mode_config->mutex); 102 112 } 103 113 104 114 /**
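The radeon hunk above fills in the previously empty `radeon_dp_work_func()`, which starts by recovering the device from its embedded `work_struct` via `container_of()`. A self-contained sketch of that idiom (the structure names below are illustrative, not the radeon definitions):

```c
#include <assert.h>
#include <stddef.h>

/* container_of as used by radeon_dp_work_func(): given a pointer to a
 * member, recover the enclosing structure by subtracting the member's
 * offset within it. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct work { int pending; };

struct device_ctx {
	int id;
	struct work dp_work;	/* embedded, like rdev->dp_work */
};

static int ctx_id_from_work(struct work *w)
{
	struct device_ctx *ctx = container_of(w, struct device_ctx, dp_work);

	return ctx->id;
}
```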
-3
drivers/gpu/drm/scheduler/sched_main.c
··· 1141 1141 for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { 1142 1142 struct drm_sched_rq *rq = &sched->sched_rq[i]; 1143 1143 1144 - if (!rq) 1145 - continue; 1146 - 1147 1144 spin_lock(&rq->lock); 1148 1145 list_for_each_entry(s_entity, &rq->entities, list) 1149 1146 /*
+2
drivers/hid/hid-google-hammer.c
··· 587 587 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 588 588 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) }, 589 589 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 590 + USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_JEWEL) }, 591 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 590 592 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MAGNEMITE) }, 591 593 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 592 594 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MASTERBALL) },
+1
drivers/hid/hid-ids.h
··· 529 529 #define USB_DEVICE_ID_GOOGLE_MOONBALL 0x5044 530 530 #define USB_DEVICE_ID_GOOGLE_DON 0x5050 531 531 #define USB_DEVICE_ID_GOOGLE_EEL 0x5057 532 + #define USB_DEVICE_ID_GOOGLE_JEWEL 0x5061 532 533 533 534 #define USB_VENDOR_ID_GOTOP 0x08f2 534 535 #define USB_DEVICE_ID_SUPER_Q2 0x007f
+1
drivers/hid/hid-logitech-hidpp.c
··· 314 314 dbg_hid("%s:timeout waiting for response\n", __func__); 315 315 memset(response, 0, sizeof(struct hidpp_report)); 316 316 ret = -ETIMEDOUT; 317 + goto exit; 317 318 } 318 319 319 320 if (response->report_id == REPORT_ID_HIDPP_SHORT &&
+16 -5
drivers/hid/wacom_sys.c
··· 2224 2224 } else if (strstr(product_name, "Wacom") || 2225 2225 strstr(product_name, "wacom") || 2226 2226 strstr(product_name, "WACOM")) { 2227 - strscpy(name, product_name, sizeof(name)); 2227 + if (strscpy(name, product_name, sizeof(name)) < 0) { 2228 + hid_warn(wacom->hdev, "String overflow while assembling device name"); 2229 + } 2228 2230 } else { 2229 2231 snprintf(name, sizeof(name), "Wacom %s", product_name); 2230 2232 } ··· 2244 2242 if (name[strlen(name)-1] == ' ') 2245 2243 name[strlen(name)-1] = '\0'; 2246 2244 } else { 2247 - strscpy(name, features->name, sizeof(name)); 2245 + if (strscpy(name, features->name, sizeof(name)) < 0) { 2246 + hid_warn(wacom->hdev, "String overflow while assembling device name"); 2247 + } 2248 2248 } 2249 2249 2250 2250 snprintf(wacom_wac->name, sizeof(wacom_wac->name), "%s%s", ··· 2414 2410 goto fail_quirks; 2415 2411 } 2416 2412 2417 - if (features->device_type & WACOM_DEVICETYPE_WL_MONITOR) 2413 + if (features->device_type & WACOM_DEVICETYPE_WL_MONITOR) { 2418 2414 error = hid_hw_open(hdev); 2415 + if (error) { 2416 + hid_err(hdev, "hw open failed\n"); 2417 + goto fail_quirks; 2418 + } 2419 + } 2419 2420 2420 2421 wacom_set_shared_values(wacom_wac); 2421 2422 devres_close_group(&hdev->dev, wacom); ··· 2509 2500 goto fail; 2510 2501 } 2511 2502 2512 - strscpy(wacom_wac->name, wacom_wac1->name, 2513 - sizeof(wacom_wac->name)); 2503 + if (strscpy(wacom_wac->name, wacom_wac1->name, 2504 + sizeof(wacom_wac->name)) < 0) { 2505 + hid_warn(wacom->hdev, "String overflow while assembling device name"); 2506 + } 2514 2507 } 2515 2508 2516 2509 return;
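The wacom_sys.c hunks above stop discarding `strscpy()`'s return value: it returns the copied length, or a negative error when the source did not fit and was truncated, which the driver now reports with `hid_warn()`. A userspace model of that contract (a simplified stand-in, not the kernel implementation; -1 stands in for -E2BIG):

```c
#include <assert.h>
#include <string.h>

/* Minimal model of strscpy(): always NUL-terminates, returns the copied
 * length, or -1 when the source had to be truncated - the condition the
 * patch now warns about instead of silently ignoring. */
static long strscpy_model(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size == 0)
		return -1;
	if (len >= size) {
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return -1;		/* truncated */
	}
	memcpy(dst, src, len + 1);
	return (long)len;
}
```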
+1 -1
drivers/hid/wacom_wac.c
··· 831 831 /* Enter report */ 832 832 if ((data[1] & 0xfc) == 0xc0) { 833 833 /* serial number of the tool */ 834 - wacom->serial[idx] = ((data[3] & 0x0f) << 28) + 834 + wacom->serial[idx] = ((__u64)(data[3] & 0x0f) << 28) + 835 835 (data[4] << 20) + (data[5] << 12) + 836 836 (data[6] << 4) + (data[7] >> 4); 837 837
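The wacom_wac.c one-liner above casts before shifting: `(data[3] & 0x0f) << 28` is evaluated as a signed `int`, so a nibble of 0x8..0xf lands in the sign bit and sign-extends garbage into the upper half of the 64-bit serial number. A sketch of both variants (behavior of the buggy path is what common two's-complement targets produce, not guaranteed by the C standard):

```c
#include <assert.h>
#include <stdint.h>

/* Buggy shape: the shift result passes through a signed 32-bit value, so
 * widening to 64 bits sign-extends when the top nibble is >= 0x8. The
 * int32_t round-trip below reproduces that on typical targets. */
static uint64_t serial_buggy(uint8_t nibble)
{
	return (uint64_t)(int32_t)((uint32_t)(nibble & 0x0f) << 28);
}

/* Fixed shape, matching the patch: widen to 64 bits before shifting. */
static uint64_t serial_fixed(uint8_t nibble)
{
	return (uint64_t)(nibble & 0x0f) << 28;
}
```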
+1
drivers/hwtracing/coresight/coresight-etm-perf.c
··· 402 402 trace_id = coresight_trace_id_get_cpu_id(cpu); 403 403 if (!IS_VALID_CS_TRACE_ID(trace_id)) { 404 404 cpumask_clear_cpu(cpu, mask); 405 + coresight_release_path(path); 405 406 continue; 406 407 } 407 408
+1 -1
drivers/hwtracing/coresight/coresight-tmc-etr.c
··· 942 942 943 943 len = tmc_etr_buf_get_data(etr_buf, offset, 944 944 CORESIGHT_BARRIER_PKT_SIZE, &bufp); 945 - if (WARN_ON(len < CORESIGHT_BARRIER_PKT_SIZE)) 945 + if (WARN_ON(len < 0 || len < CORESIGHT_BARRIER_PKT_SIZE)) 946 946 return -EINVAL; 947 947 coresight_insert_barrier_packet(bufp); 948 948 return offset + CORESIGHT_BARRIER_PKT_SIZE;
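The coresight-tmc-etr hunk above adds an explicit `len < 0` test before the size comparison. The hazard it guards against: if the size bound is an unsigned expression, a negative error value promotes to a huge unsigned number and slips past a plain `len < size` check. An illustrative sketch (using `long`/`size_t` as stand-ins; whether the real macro is unsigned is not shown in the hunk):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define BARRIER_PKT_SIZE ((size_t)4)	/* unsigned bound, for illustration */

/* Naive check: len is converted to size_t by the usual arithmetic
 * conversions, so -22 compares as a huge value and "passes". */
static bool too_short_naive(long len)
{
	return len < (long)0 ? false : (size_t)len < BARRIER_PKT_SIZE;
	/* written without the cast, "len < BARRIER_PKT_SIZE" would promote
	 * len to size_t - this helper models that by never flagging < 0 */
}

/* Fixed check, matching the patch: reject negatives first. */
static bool too_short_fixed(long len)
{
	return len < 0 || (size_t)len < BARRIER_PKT_SIZE;
}
```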
+1 -3
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 3341 3341 udwr.remote_qkey = gsi_sqp->qplib_qp.qkey; 3342 3342 3343 3343 /* post data received in the send queue */ 3344 - rc = bnxt_re_post_send_shadow_qp(rdev, gsi_sqp, swr); 3345 - 3346 - return 0; 3344 + return bnxt_re_post_send_shadow_qp(rdev, gsi_sqp, swr); 3347 3345 } 3348 3346 3349 3347 static void bnxt_re_process_res_rawqp1_wc(struct ib_wc *wc,
+4
drivers/infiniband/hw/bnxt_re/main.c
··· 1336 1336 { 1337 1337 struct bnxt_qplib_cc_param cc_param = {}; 1338 1338 1339 + /* Do not enable congestion control on VFs */ 1340 + if (rdev->is_virtfn) 1341 + return; 1342 + 1339 1343 /* Currently enabling only for GenP5 adapters */ 1340 1344 if (!bnxt_qplib_is_chip_gen_p5(rdev->chip_ctx)) 1341 1345 return;
+6 -5
drivers/infiniband/hw/bnxt_re/qplib_fp.c
··· 2056 2056 u32 pg_sz_lvl; 2057 2057 int rc; 2058 2058 2059 + if (!cq->dpi) { 2060 + dev_err(&rcfw->pdev->dev, 2061 + "FP: CREATE_CQ failed due to NULL DPI\n"); 2062 + return -EINVAL; 2063 + } 2064 + 2059 2065 hwq_attr.res = res; 2060 2066 hwq_attr.depth = cq->max_wqe; 2061 2067 hwq_attr.stride = sizeof(struct cq_base); ··· 2075 2069 CMDQ_BASE_OPCODE_CREATE_CQ, 2076 2070 sizeof(req)); 2077 2071 2078 - if (!cq->dpi) { 2079 - dev_err(&rcfw->pdev->dev, 2080 - "FP: CREATE_CQ failed due to NULL DPI\n"); 2081 - return -EINVAL; 2082 - } 2083 2072 req.dpi = cpu_to_le32(cq->dpi->dpi); 2084 2073 req.cq_handle = cpu_to_le64(cq->cq_handle); 2085 2074 req.cq_size = cpu_to_le32(cq->hwq.max_elements);
+2 -10
drivers/infiniband/hw/bnxt_re/qplib_res.c
··· 215 215 return -EINVAL; 216 216 hwq_attr->sginfo->npages = npages; 217 217 } else { 218 - unsigned long sginfo_num_pages = ib_umem_num_dma_blocks( 219 - hwq_attr->sginfo->umem, hwq_attr->sginfo->pgsize); 220 - 218 + npages = ib_umem_num_dma_blocks(hwq_attr->sginfo->umem, 219 + hwq_attr->sginfo->pgsize); 221 220 hwq->is_user = true; 222 - npages = sginfo_num_pages; 223 - npages = (npages * PAGE_SIZE) / 224 - BIT_ULL(hwq_attr->sginfo->pgshft); 225 - if ((sginfo_num_pages * PAGE_SIZE) % 226 - BIT_ULL(hwq_attr->sginfo->pgshft)) 227 - if (!npages) 228 - npages++; 229 221 } 230 222 231 223 if (npages == MAX_PBL_LVL_0_PGS && !hwq_attr->sginfo->nopte) {
+3 -4
drivers/infiniband/hw/bnxt_re/qplib_sp.c
··· 617 617 /* Free the hwq if it already exist, must be a rereg */ 618 618 if (mr->hwq.max_elements) 619 619 bnxt_qplib_free_hwq(res, &mr->hwq); 620 - /* Use system PAGE_SIZE */ 621 620 hwq_attr.res = res; 622 621 hwq_attr.depth = pages; 623 - hwq_attr.stride = buf_pg_size; 622 + hwq_attr.stride = sizeof(dma_addr_t); 624 623 hwq_attr.type = HWQ_TYPE_MR; 625 624 hwq_attr.sginfo = &sginfo; 626 625 hwq_attr.sginfo->umem = umem; 627 626 hwq_attr.sginfo->npages = pages; 628 - hwq_attr.sginfo->pgsize = PAGE_SIZE; 629 - hwq_attr.sginfo->pgshft = PAGE_SHIFT; 627 + hwq_attr.sginfo->pgsize = buf_pg_size; 628 + hwq_attr.sginfo->pgshft = ilog2(buf_pg_size); 630 629 rc = bnxt_qplib_alloc_init_hwq(&mr->hwq, &hwq_attr); 631 630 if (rc) { 632 631 dev_err(&res->pdev->dev,
+1 -1
drivers/infiniband/hw/efa/efa_verbs.c
··· 1403 1403 */ 1404 1404 static int pbl_indirect_initialize(struct efa_dev *dev, struct pbl_context *pbl) 1405 1405 { 1406 - u32 size_in_pages = DIV_ROUND_UP(pbl->pbl_buf_size_in_bytes, PAGE_SIZE); 1406 + u32 size_in_pages = DIV_ROUND_UP(pbl->pbl_buf_size_in_bytes, EFA_CHUNK_PAYLOAD_SIZE); 1407 1407 struct scatterlist *sgl; 1408 1408 int sg_dma_cnt, err; 1409 1409
+17 -8
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 4583 4583 mtu = ib_mtu_enum_to_int(ib_mtu); 4584 4584 if (WARN_ON(mtu <= 0)) 4585 4585 return -EINVAL; 4586 - #define MAX_LP_MSG_LEN 16384 4587 - /* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 16KB */ 4588 - lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / mtu); 4589 - if (WARN_ON(lp_pktn_ini >= 0xF)) 4590 - return -EINVAL; 4586 + #define MIN_LP_MSG_LEN 1024 4587 + /* mtu * (2 ^ lp_pktn_ini) should be in the range of 1024 to mtu */ 4588 + lp_pktn_ini = ilog2(max(mtu, MIN_LP_MSG_LEN) / mtu); 4591 4589 4592 4590 if (attr_mask & IB_QP_PATH_MTU) { 4593 4591 hr_reg_write(context, QPC_MTU, ib_mtu); ··· 5010 5012 static bool check_qp_timeout_cfg_range(struct hns_roce_dev *hr_dev, u8 *timeout) 5011 5013 { 5012 5014 #define QP_ACK_TIMEOUT_MAX_HIP08 20 5013 - #define QP_ACK_TIMEOUT_OFFSET 10 5014 5015 #define QP_ACK_TIMEOUT_MAX 31 5015 5016 5016 5017 if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) { ··· 5018 5021 "local ACK timeout shall be 0 to 20.\n"); 5019 5022 return false; 5020 5023 } 5021 - *timeout += QP_ACK_TIMEOUT_OFFSET; 5024 + *timeout += HNS_ROCE_V2_QP_ACK_TIMEOUT_OFS_HIP08; 5022 5025 } else if (hr_dev->pci_dev->revision > PCI_REVISION_ID_HIP08) { 5023 5026 if (*timeout > QP_ACK_TIMEOUT_MAX) { 5024 5027 ibdev_warn(&hr_dev->ib_dev, ··· 5304 5307 return ret; 5305 5308 } 5306 5309 5310 + static u8 get_qp_timeout_attr(struct hns_roce_dev *hr_dev, 5311 + struct hns_roce_v2_qp_context *context) 5312 + { 5313 + u8 timeout; 5314 + 5315 + timeout = (u8)hr_reg_read(context, QPC_AT); 5316 + if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) 5317 + timeout -= HNS_ROCE_V2_QP_ACK_TIMEOUT_OFS_HIP08; 5318 + 5319 + return timeout; 5320 + } 5321 + 5307 5322 static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr, 5308 5323 int qp_attr_mask, 5309 5324 struct ib_qp_init_attr *qp_init_attr) ··· 5393 5384 qp_attr->max_dest_rd_atomic = 1 << hr_reg_read(&context, QPC_RR_MAX); 5394 5385 5395 5386 qp_attr->min_rnr_timer = (u8)hr_reg_read(&context, 
QPC_MIN_RNR_TIME); 5396 - qp_attr->timeout = (u8)hr_reg_read(&context, QPC_AT); 5387 + qp_attr->timeout = get_qp_timeout_attr(hr_dev, &context); 5397 5388 qp_attr->retry_cnt = hr_reg_read(&context, QPC_RETRY_NUM_INIT); 5398 5389 qp_attr->rnr_retry = hr_reg_read(&context, QPC_RNR_NUM_INIT); 5399 5390
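The hns_roce hunk above recomputes `lp_pktn_ini` as `ilog2(max(mtu, 1024) / mtu)`, so `mtu * 2^lp_pktn_ini` is pinned at 1024 bytes for small MTUs and collapses to the MTU itself once it reaches 1024. A standalone model of that formula (`ilog2_u` is a plain floor-log2 sketch standing in for the kernel's `ilog2()`):

```c
#include <assert.h>

#define MIN_LP_MSG_LEN 1024

/* floor(log2(v)) for v >= 1 */
static unsigned int ilog2_u(unsigned int v)
{
	unsigned int l = 0;

	while (v >>= 1)
		l++;
	return l;
}

/* Model of the patched computation: 2^result packets of size mtu cover
 * max(mtu, 1024) bytes, never more. */
static unsigned int lp_pktn_ini(unsigned int mtu)
{
	unsigned int len = mtu > MIN_LP_MSG_LEN ? mtu : MIN_LP_MSG_LEN;

	return ilog2_u(len / mtu);
}
```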
+2
drivers/infiniband/hw/hns/hns_roce_hw_v2.h
··· 44 44 #define HNS_ROCE_V2_MAX_XRCD_NUM 0x1000000 45 45 #define HNS_ROCE_V2_RSV_XRCD_NUM 0 46 46 47 + #define HNS_ROCE_V2_QP_ACK_TIMEOUT_OFS_HIP08 10 48 + 47 49 #define HNS_ROCE_V3_SCCC_SZ 64 48 50 #define HNS_ROCE_V3_GMV_ENTRY_SZ 32 49 51
+43
drivers/infiniband/hw/hns/hns_roce_mr.c
··· 33 33 34 34 #include <linux/vmalloc.h> 35 35 #include <rdma/ib_umem.h> 36 + #include <linux/math.h> 36 37 #include "hns_roce_device.h" 37 38 #include "hns_roce_cmd.h" 38 39 #include "hns_roce_hem.h" ··· 910 909 return page_cnt; 911 910 } 912 911 912 + static u64 cal_pages_per_l1ba(unsigned int ba_per_bt, unsigned int hopnum) 913 + { 914 + return int_pow(ba_per_bt, hopnum - 1); 915 + } 916 + 917 + static unsigned int cal_best_bt_pg_sz(struct hns_roce_dev *hr_dev, 918 + struct hns_roce_mtr *mtr, 919 + unsigned int pg_shift) 920 + { 921 + unsigned long cap = hr_dev->caps.page_size_cap; 922 + struct hns_roce_buf_region *re; 923 + unsigned int pgs_per_l1ba; 924 + unsigned int ba_per_bt; 925 + unsigned int ba_num; 926 + int i; 927 + 928 + for_each_set_bit_from(pg_shift, &cap, sizeof(cap) * BITS_PER_BYTE) { 929 + if (!(BIT(pg_shift) & cap)) 930 + continue; 931 + 932 + ba_per_bt = BIT(pg_shift) / BA_BYTE_LEN; 933 + ba_num = 0; 934 + for (i = 0; i < mtr->hem_cfg.region_count; i++) { 935 + re = &mtr->hem_cfg.region[i]; 936 + if (re->hopnum == 0) 937 + continue; 938 + 939 + pgs_per_l1ba = cal_pages_per_l1ba(ba_per_bt, re->hopnum); 940 + ba_num += DIV_ROUND_UP(re->count, pgs_per_l1ba); 941 + } 942 + 943 + if (ba_num <= ba_per_bt) 944 + return pg_shift; 945 + } 946 + 947 + return 0; 948 + } 949 + 913 950 static int mtr_alloc_mtt(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr, 914 951 unsigned int ba_page_shift) 915 952 { ··· 956 917 957 918 hns_roce_hem_list_init(&mtr->hem_list); 958 919 if (!cfg->is_direct) { 920 + ba_page_shift = cal_best_bt_pg_sz(hr_dev, mtr, ba_page_shift); 921 + if (!ba_page_shift) 922 + return -ERANGE; 923 + 959 924 ret = hns_roce_hem_list_request(hr_dev, &mtr->hem_list, 960 925 cfg->region, cfg->region_count, 961 926 ba_page_shift);
+7 -5
drivers/infiniband/hw/irdma/verbs.c
··· 522 522 if (!iwqp->user_mode) 523 523 cancel_delayed_work_sync(&iwqp->dwork_flush); 524 524 525 - irdma_qp_rem_ref(&iwqp->ibqp); 526 - wait_for_completion(&iwqp->free_qp); 527 - irdma_free_lsmm_rsrc(iwqp); 528 - irdma_cqp_qp_destroy_cmd(&iwdev->rf->sc_dev, &iwqp->sc_qp); 529 - 530 525 if (!iwqp->user_mode) { 531 526 if (iwqp->iwscq) { 532 527 irdma_clean_cqes(iwqp, iwqp->iwscq); ··· 529 534 irdma_clean_cqes(iwqp, iwqp->iwrcq); 530 535 } 531 536 } 537 + 538 + irdma_qp_rem_ref(&iwqp->ibqp); 539 + wait_for_completion(&iwqp->free_qp); 540 + irdma_free_lsmm_rsrc(iwqp); 541 + irdma_cqp_qp_destroy_cmd(&iwdev->rf->sc_dev, &iwqp->sc_qp); 542 + 532 543 irdma_remove_push_mmap_entries(iwqp); 533 544 irdma_free_qp_rsrc(iwqp); 534 545 ··· 3292 3291 break; 3293 3292 case IB_WR_LOCAL_INV: 3294 3293 info.op_type = IRDMA_OP_TYPE_INV_STAG; 3294 + info.local_fence = info.read_fence; 3295 3295 info.op.inv_local_stag.target_stag = ib_wr->ex.invalidate_rkey; 3296 3296 err = irdma_uk_stag_local_invalidate(ukqp, &info, true); 3297 3297 break;
+16 -10
drivers/infiniband/sw/rxe/rxe_comp.c
··· 115 115 void retransmit_timer(struct timer_list *t) 116 116 { 117 117 struct rxe_qp *qp = from_timer(qp, t, retrans_timer); 118 + unsigned long flags; 118 119 119 120 rxe_dbg_qp(qp, "retransmit timer fired\n"); 120 121 121 - spin_lock_bh(&qp->state_lock); 122 + spin_lock_irqsave(&qp->state_lock, flags); 122 123 if (qp->valid) { 123 124 qp->comp.timeout = 1; 124 125 rxe_sched_task(&qp->comp.task); 125 126 } 126 - spin_unlock_bh(&qp->state_lock); 127 + spin_unlock_irqrestore(&qp->state_lock, flags); 127 128 } 128 129 129 130 void rxe_comp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb) ··· 482 481 483 482 static void comp_check_sq_drain_done(struct rxe_qp *qp) 484 483 { 485 - spin_lock_bh(&qp->state_lock); 484 + unsigned long flags; 485 + 486 + spin_lock_irqsave(&qp->state_lock, flags); 486 487 if (unlikely(qp_state(qp) == IB_QPS_SQD)) { 487 488 if (qp->attr.sq_draining && qp->comp.psn == qp->req.psn) { 488 489 qp->attr.sq_draining = 0; 489 - spin_unlock_bh(&qp->state_lock); 490 + spin_unlock_irqrestore(&qp->state_lock, flags); 490 491 491 492 if (qp->ibqp.event_handler) { 492 493 struct ib_event ev; ··· 502 499 return; 503 500 } 504 501 } 505 - spin_unlock_bh(&qp->state_lock); 502 + spin_unlock_irqrestore(&qp->state_lock, flags); 506 503 } 507 504 508 505 static inline enum comp_state complete_ack(struct rxe_qp *qp, ··· 628 625 */ 629 626 static void reset_retry_timer(struct rxe_qp *qp) 630 627 { 628 + unsigned long flags; 629 + 631 630 if (qp_type(qp) == IB_QPT_RC && qp->qp_timeout_jiffies) { 632 - spin_lock_bh(&qp->state_lock); 631 + spin_lock_irqsave(&qp->state_lock, flags); 633 632 if (qp_state(qp) >= IB_QPS_RTS && 634 633 psn_compare(qp->req.psn, qp->comp.psn) > 0) 635 634 mod_timer(&qp->retrans_timer, 636 635 jiffies + qp->qp_timeout_jiffies); 637 - spin_unlock_bh(&qp->state_lock); 636 + spin_unlock_irqrestore(&qp->state_lock, flags); 638 637 } 639 638 } 640 639 ··· 648 643 struct rxe_pkt_info *pkt = NULL; 649 644 enum comp_state state; 650 645 int ret; 
646 + unsigned long flags; 651 647 652 - spin_lock_bh(&qp->state_lock); 648 + spin_lock_irqsave(&qp->state_lock, flags); 653 649 if (!qp->valid || qp_state(qp) == IB_QPS_ERR || 654 650 qp_state(qp) == IB_QPS_RESET) { 655 651 bool notify = qp->valid && (qp_state(qp) == IB_QPS_ERR); 656 652 657 653 drain_resp_pkts(qp); 658 654 flush_send_queue(qp, notify); 659 - spin_unlock_bh(&qp->state_lock); 655 + spin_unlock_irqrestore(&qp->state_lock, flags); 660 656 goto exit; 661 657 } 662 - spin_unlock_bh(&qp->state_lock); 658 + spin_unlock_irqrestore(&qp->state_lock, flags); 663 659 664 660 if (qp->comp.timeout) { 665 661 qp->comp.timeout_retry = 1;
+4 -3
drivers/infiniband/sw/rxe/rxe_net.c
··· 412 412 int err; 413 413 int is_request = pkt->mask & RXE_REQ_MASK; 414 414 struct rxe_dev *rxe = to_rdev(qp->ibqp.device); 415 + unsigned long flags; 415 416 416 - spin_lock_bh(&qp->state_lock); 417 + spin_lock_irqsave(&qp->state_lock, flags); 417 418 if ((is_request && (qp_state(qp) < IB_QPS_RTS)) || 418 419 (!is_request && (qp_state(qp) < IB_QPS_RTR))) { 419 - spin_unlock_bh(&qp->state_lock); 420 + spin_unlock_irqrestore(&qp->state_lock, flags); 420 421 rxe_dbg_qp(qp, "Packet dropped. QP is not in ready state\n"); 421 422 goto drop; 422 423 } 423 - spin_unlock_bh(&qp->state_lock); 424 + spin_unlock_irqrestore(&qp->state_lock, flags); 424 425 425 426 rxe_icrc_generate(skb, pkt); 426 427
+24 -13
drivers/infiniband/sw/rxe/rxe_qp.c
··· 300 300 struct rxe_cq *rcq = to_rcq(init->recv_cq); 301 301 struct rxe_cq *scq = to_rcq(init->send_cq); 302 302 struct rxe_srq *srq = init->srq ? to_rsrq(init->srq) : NULL; 303 + unsigned long flags; 303 304 304 305 rxe_get(pd); 305 306 rxe_get(rcq); ··· 326 325 if (err) 327 326 goto err2; 328 327 329 - spin_lock_bh(&qp->state_lock); 328 + spin_lock_irqsave(&qp->state_lock, flags); 330 329 qp->attr.qp_state = IB_QPS_RESET; 331 330 qp->valid = 1; 332 - spin_unlock_bh(&qp->state_lock); 331 + spin_unlock_irqrestore(&qp->state_lock, flags); 333 332 334 333 return 0; 335 334 ··· 493 492 /* move the qp to the error state */ 494 493 void rxe_qp_error(struct rxe_qp *qp) 495 494 { 496 - spin_lock_bh(&qp->state_lock); 495 + unsigned long flags; 496 + 497 + spin_lock_irqsave(&qp->state_lock, flags); 497 498 qp->attr.qp_state = IB_QPS_ERR; 498 499 499 500 /* drain work and packet queues */ 500 501 rxe_sched_task(&qp->resp.task); 501 502 rxe_sched_task(&qp->comp.task); 502 503 rxe_sched_task(&qp->req.task); 503 - spin_unlock_bh(&qp->state_lock); 504 + spin_unlock_irqrestore(&qp->state_lock, flags); 504 505 } 505 506 506 507 static void rxe_qp_sqd(struct rxe_qp *qp, struct ib_qp_attr *attr, 507 508 int mask) 508 509 { 509 - spin_lock_bh(&qp->state_lock); 510 + unsigned long flags; 511 + 512 + spin_lock_irqsave(&qp->state_lock, flags); 510 513 qp->attr.sq_draining = 1; 511 514 rxe_sched_task(&qp->comp.task); 512 515 rxe_sched_task(&qp->req.task); 513 - spin_unlock_bh(&qp->state_lock); 516 + spin_unlock_irqrestore(&qp->state_lock, flags); 514 517 } 515 518 516 519 /* caller should hold qp->state_lock */ ··· 560 555 qp->attr.cur_qp_state = attr->qp_state; 561 556 562 557 if (mask & IB_QP_STATE) { 563 - spin_lock_bh(&qp->state_lock); 558 + unsigned long flags; 559 + 560 + spin_lock_irqsave(&qp->state_lock, flags); 564 561 err = __qp_chk_state(qp, attr, mask); 565 562 if (!err) { 566 563 qp->attr.qp_state = attr->qp_state; 567 564 rxe_dbg_qp(qp, "state -> %s\n", 568 565 
qps2str[attr->qp_state]); 569 566 } 570 - spin_unlock_bh(&qp->state_lock); 567 + spin_unlock_irqrestore(&qp->state_lock, flags); 571 568 572 569 if (err) 573 570 return err; ··· 695 688 /* called by the query qp verb */ 696 689 int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask) 697 690 { 691 + unsigned long flags; 692 + 698 693 *attr = qp->attr; 699 694 700 695 attr->rq_psn = qp->resp.psn; ··· 717 708 /* Applications that get this state typically spin on it. 718 709 * Yield the processor 719 710 */ 720 - spin_lock_bh(&qp->state_lock); 711 + spin_lock_irqsave(&qp->state_lock, flags); 721 712 if (qp->attr.sq_draining) { 722 - spin_unlock_bh(&qp->state_lock); 713 + spin_unlock_irqrestore(&qp->state_lock, flags); 723 714 cond_resched(); 715 + } else { 716 + spin_unlock_irqrestore(&qp->state_lock, flags); 724 717 } 725 - spin_unlock_bh(&qp->state_lock); 726 718 727 719 return 0; 728 720 } ··· 746 736 static void rxe_qp_do_cleanup(struct work_struct *work) 747 737 { 748 738 struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work); 739 + unsigned long flags; 749 740 750 - spin_lock_bh(&qp->state_lock); 741 + spin_lock_irqsave(&qp->state_lock, flags); 751 742 qp->valid = 0; 752 - spin_unlock_bh(&qp->state_lock); 743 + spin_unlock_irqrestore(&qp->state_lock, flags); 753 744 qp->qp_timeout_jiffies = 0; 754 745 755 746 if (qp_type(qp) == IB_QPT_RC) {
+5 -4
drivers/infiniband/sw/rxe/rxe_recv.c
··· 14 14 struct rxe_qp *qp) 15 15 { 16 16 unsigned int pkt_type; 17 + unsigned long flags; 17 18 18 19 if (unlikely(!qp->valid)) 19 20 return -EINVAL; ··· 39 38 return -EINVAL; 40 39 } 41 40 42 - spin_lock_bh(&qp->state_lock); 41 + spin_lock_irqsave(&qp->state_lock, flags); 43 42 if (pkt->mask & RXE_REQ_MASK) { 44 43 if (unlikely(qp_state(qp) < IB_QPS_RTR)) { 45 - spin_unlock_bh(&qp->state_lock); 44 + spin_unlock_irqrestore(&qp->state_lock, flags); 46 45 return -EINVAL; 47 46 } 48 47 } else { 49 48 if (unlikely(qp_state(qp) < IB_QPS_RTS)) { 50 - spin_unlock_bh(&qp->state_lock); 49 + spin_unlock_irqrestore(&qp->state_lock, flags); 51 50 return -EINVAL; 52 51 } 53 52 } 54 - spin_unlock_bh(&qp->state_lock); 53 + spin_unlock_irqrestore(&qp->state_lock, flags); 55 54 56 55 return 0; 57 56 }
+17 -13
drivers/infiniband/sw/rxe/rxe_req.c
··· 99 99 void rnr_nak_timer(struct timer_list *t) 100 100 { 101 101 struct rxe_qp *qp = from_timer(qp, t, rnr_nak_timer); 102 + unsigned long flags; 102 103 103 104 rxe_dbg_qp(qp, "nak timer fired\n"); 104 105 105 - spin_lock_bh(&qp->state_lock); 106 + spin_lock_irqsave(&qp->state_lock, flags); 106 107 if (qp->valid) { 107 108 /* request a send queue retry */ 108 109 qp->req.need_retry = 1; 109 110 qp->req.wait_for_rnr_timer = 0; 110 111 rxe_sched_task(&qp->req.task); 111 112 } 112 - spin_unlock_bh(&qp->state_lock); 113 + spin_unlock_irqrestore(&qp->state_lock, flags); 113 114 } 114 115 115 116 static void req_check_sq_drain_done(struct rxe_qp *qp) ··· 119 118 unsigned int index; 120 119 unsigned int cons; 121 120 struct rxe_send_wqe *wqe; 121 + unsigned long flags; 122 122 123 - spin_lock_bh(&qp->state_lock); 123 + spin_lock_irqsave(&qp->state_lock, flags); 124 124 if (qp_state(qp) == IB_QPS_SQD) { 125 125 q = qp->sq.queue; 126 126 index = qp->req.wqe_index; ··· 142 140 break; 143 141 144 142 qp->attr.sq_draining = 0; 145 - spin_unlock_bh(&qp->state_lock); 143 + spin_unlock_irqrestore(&qp->state_lock, flags); 146 144 147 145 if (qp->ibqp.event_handler) { 148 146 struct ib_event ev; ··· 156 154 return; 157 155 } while (0); 158 156 } 159 - spin_unlock_bh(&qp->state_lock); 157 + spin_unlock_irqrestore(&qp->state_lock, flags); 160 158 } 161 159 162 160 static struct rxe_send_wqe *__req_next_wqe(struct rxe_qp *qp) ··· 175 173 static struct rxe_send_wqe *req_next_wqe(struct rxe_qp *qp) 176 174 { 177 175 struct rxe_send_wqe *wqe; 176 + unsigned long flags; 178 177 179 178 req_check_sq_drain_done(qp); 180 179 ··· 183 180 if (wqe == NULL) 184 181 return NULL; 185 182 186 - spin_lock_bh(&qp->state_lock); 183 + spin_lock_irqsave(&qp->state_lock, flags); 187 184 if (unlikely((qp_state(qp) == IB_QPS_SQD) && 188 185 (wqe->state != wqe_state_processing))) { 189 - spin_unlock_bh(&qp->state_lock); 186 + spin_unlock_irqrestore(&qp->state_lock, flags); 190 187 return NULL; 191 188 
} 192 - spin_unlock_bh(&qp->state_lock); 189 + spin_unlock_irqrestore(&qp->state_lock, flags); 193 190 194 191 wqe->mask = wr_opcode_mask(wqe->wr.opcode, qp); 195 192 return wqe; ··· 679 676 struct rxe_queue *q = qp->sq.queue; 680 677 struct rxe_ah *ah; 681 678 struct rxe_av *av; 679 + unsigned long flags; 682 680 683 - spin_lock_bh(&qp->state_lock); 681 + spin_lock_irqsave(&qp->state_lock, flags); 684 682 if (unlikely(!qp->valid)) { 685 - spin_unlock_bh(&qp->state_lock); 683 + spin_unlock_irqrestore(&qp->state_lock, flags); 686 684 goto exit; 687 685 } 688 686 689 687 if (unlikely(qp_state(qp) == IB_QPS_ERR)) { 690 688 wqe = __req_next_wqe(qp); 691 - spin_unlock_bh(&qp->state_lock); 689 + spin_unlock_irqrestore(&qp->state_lock, flags); 692 690 if (wqe) 693 691 goto err; 694 692 else ··· 704 700 qp->req.wait_psn = 0; 705 701 qp->req.need_retry = 0; 706 702 qp->req.wait_for_rnr_timer = 0; 707 - spin_unlock_bh(&qp->state_lock); 703 + spin_unlock_irqrestore(&qp->state_lock, flags); 708 704 goto exit; 709 705 } 710 - spin_unlock_bh(&qp->state_lock); 706 + spin_unlock_irqrestore(&qp->state_lock, flags); 711 707 712 708 /* we come here if the retransmit timer has fired 713 709 * or if the rnr timer has fired. If the retransmit
+8 -6
drivers/infiniband/sw/rxe/rxe_resp.c
··· 1047 1047 struct ib_uverbs_wc *uwc = &cqe.uibwc; 1048 1048 struct rxe_recv_wqe *wqe = qp->resp.wqe; 1049 1049 struct rxe_dev *rxe = to_rdev(qp->ibqp.device); 1050 + unsigned long flags; 1050 1051 1051 1052 if (!wqe) 1052 1053 goto finish; ··· 1138 1137 return RESPST_ERR_CQ_OVERFLOW; 1139 1138 1140 1139 finish: 1141 - spin_lock_bh(&qp->state_lock); 1140 + spin_lock_irqsave(&qp->state_lock, flags); 1142 1141 if (unlikely(qp_state(qp) == IB_QPS_ERR)) { 1143 - spin_unlock_bh(&qp->state_lock); 1142 + spin_unlock_irqrestore(&qp->state_lock, flags); 1144 1143 return RESPST_CHK_RESOURCE; 1145 1144 } 1146 - spin_unlock_bh(&qp->state_lock); 1145 + spin_unlock_irqrestore(&qp->state_lock, flags); 1147 1146 1148 1147 if (unlikely(!pkt)) 1149 1148 return RESPST_DONE; ··· 1469 1468 enum resp_states state; 1470 1469 struct rxe_pkt_info *pkt = NULL; 1471 1470 int ret; 1471 + unsigned long flags; 1472 1472 1473 - spin_lock_bh(&qp->state_lock); 1473 + spin_lock_irqsave(&qp->state_lock, flags); 1474 1474 if (!qp->valid || qp_state(qp) == IB_QPS_ERR || 1475 1475 qp_state(qp) == IB_QPS_RESET) { 1476 1476 bool notify = qp->valid && (qp_state(qp) == IB_QPS_ERR); 1477 1477 1478 1478 drain_req_pkts(qp); 1479 1479 flush_recv_queue(qp, notify); 1480 - spin_unlock_bh(&qp->state_lock); 1480 + spin_unlock_irqrestore(&qp->state_lock, flags); 1481 1481 goto exit; 1482 1482 } 1483 - spin_unlock_bh(&qp->state_lock); 1483 + spin_unlock_irqrestore(&qp->state_lock, flags); 1484 1484 1485 1485 qp->resp.aeth_syndrome = AETH_ACK_UNLIMITED; 1486 1486
+13 -12
drivers/infiniband/sw/rxe/rxe_verbs.c
··· 904 904 if (!err) 905 905 rxe_sched_task(&qp->req.task); 906 906 907 - spin_lock_bh(&qp->state_lock); 907 + spin_lock_irqsave(&qp->state_lock, flags); 908 908 if (qp_state(qp) == IB_QPS_ERR) 909 909 rxe_sched_task(&qp->comp.task); 910 - spin_unlock_bh(&qp->state_lock); 910 + spin_unlock_irqrestore(&qp->state_lock, flags); 911 911 912 912 return err; 913 913 } ··· 917 917 { 918 918 struct rxe_qp *qp = to_rqp(ibqp); 919 919 int err; 920 + unsigned long flags; 920 921 921 - spin_lock_bh(&qp->state_lock); 922 + spin_lock_irqsave(&qp->state_lock, flags); 922 923 /* caller has already called destroy_qp */ 923 924 if (WARN_ON_ONCE(!qp->valid)) { 924 - spin_unlock_bh(&qp->state_lock); 925 + spin_unlock_irqrestore(&qp->state_lock, flags); 925 926 rxe_err_qp(qp, "qp has been destroyed"); 926 927 return -EINVAL; 927 928 } 928 929 929 930 if (unlikely(qp_state(qp) < IB_QPS_RTS)) { 930 - spin_unlock_bh(&qp->state_lock); 931 + spin_unlock_irqrestore(&qp->state_lock, flags); 931 932 *bad_wr = wr; 932 933 rxe_err_qp(qp, "qp not ready to send"); 933 934 return -EINVAL; 934 935 } 935 - spin_unlock_bh(&qp->state_lock); 936 + spin_unlock_irqrestore(&qp->state_lock, flags); 936 937 937 938 if (qp->is_user) { 938 939 /* Utilize process context to do protocol processing */ ··· 1009 1008 struct rxe_rq *rq = &qp->rq; 1010 1009 unsigned long flags; 1011 1010 1012 - spin_lock_bh(&qp->state_lock); 1011 + spin_lock_irqsave(&qp->state_lock, flags); 1013 1012 /* caller has already called destroy_qp */ 1014 1013 if (WARN_ON_ONCE(!qp->valid)) { 1015 - spin_unlock_bh(&qp->state_lock); 1014 + spin_unlock_irqrestore(&qp->state_lock, flags); 1016 1015 rxe_err_qp(qp, "qp has been destroyed"); 1017 1016 return -EINVAL; 1018 1017 } 1019 1018 1020 1019 /* see C10-97.2.1 */ 1021 1020 if (unlikely((qp_state(qp) < IB_QPS_INIT))) { 1022 - spin_unlock_bh(&qp->state_lock); 1021 + spin_unlock_irqrestore(&qp->state_lock, flags); 1023 1022 *bad_wr = wr; 1024 1023 rxe_dbg_qp(qp, "qp not ready to post recv"); 
1025 1024 return -EINVAL; 1026 1025 } 1027 - spin_unlock_bh(&qp->state_lock); 1026 + spin_unlock_irqrestore(&qp->state_lock, flags); 1028 1027 1029 1028 if (unlikely(qp->srq)) { 1030 1029 *bad_wr = wr; ··· 1045 1044 1046 1045 spin_unlock_irqrestore(&rq->producer_lock, flags); 1047 1046 1048 - spin_lock_bh(&qp->state_lock); 1047 + spin_lock_irqsave(&qp->state_lock, flags); 1049 1048 if (qp_state(qp) == IB_QPS_ERR) 1050 1049 rxe_sched_task(&qp->resp.task); 1051 - spin_unlock_bh(&qp->state_lock); 1050 + spin_unlock_irqrestore(&qp->state_lock, flags); 1052 1051 1053 1052 return err; 1054 1053 }
+6 -2
drivers/irqchip/irq-gic-common.c
··· 16 16 const struct gic_quirk *quirks, void *data) 17 17 { 18 18 for (; quirks->desc; quirks++) { 19 - if (!of_device_is_compatible(np, quirks->compatible)) 19 + if (quirks->compatible && 20 + !of_device_is_compatible(np, quirks->compatible)) 21 + continue; 22 + if (quirks->property && 23 + !of_property_read_bool(np, quirks->property)) 20 24 continue; 21 25 if (quirks->init(data)) 22 26 pr_info("GIC: enabling workaround for %s\n", ··· 32 28 void *data) 33 29 { 34 30 for (; quirks->desc; quirks++) { 35 - if (quirks->compatible) 31 + if (quirks->compatible || quirks->property) 36 32 continue; 37 33 if (quirks->iidr != (quirks->mask & iidr)) 38 34 continue;
+1
drivers/irqchip/irq-gic-common.h
··· 13 13 struct gic_quirk { 14 14 const char *desc; 15 15 const char *compatible; 16 + const char *property; 16 17 bool (*init)(void *data); 17 18 u32 iidr; 18 19 u32 mask;
+20
drivers/irqchip/irq-gic-v3.c
··· 39 39 40 40 #define FLAGS_WORKAROUND_GICR_WAKER_MSM8996 (1ULL << 0) 41 41 #define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539 (1ULL << 1) 42 + #define FLAGS_WORKAROUND_MTK_GICR_SAVE (1ULL << 2) 42 43 43 44 #define GIC_IRQ_TYPE_PARTITION (GIC_IRQ_TYPE_LPI + 1) 44 45 ··· 1721 1720 return true; 1722 1721 } 1723 1722 1723 + static bool gic_enable_quirk_mtk_gicr(void *data) 1724 + { 1725 + struct gic_chip_data *d = data; 1726 + 1727 + d->flags |= FLAGS_WORKAROUND_MTK_GICR_SAVE; 1728 + 1729 + return true; 1730 + } 1731 + 1724 1732 static bool gic_enable_quirk_cavium_38539(void *data) 1725 1733 { 1726 1734 struct gic_chip_data *d = data; ··· 1803 1793 .init = gic_enable_quirk_msm8996, 1804 1794 }, 1805 1795 { 1796 + .desc = "GICv3: Mediatek Chromebook GICR save problem", 1797 + .property = "mediatek,broken-save-restore-fw", 1798 + .init = gic_enable_quirk_mtk_gicr, 1799 + }, 1800 + { 1806 1801 .desc = "GICv3: HIP06 erratum 161010803", 1807 1802 .iidr = 0x0204043b, 1808 1803 .mask = 0xffffffff, ··· 1848 1833 1849 1834 if (!gic_prio_masking_enabled()) 1850 1835 return; 1836 + 1837 + if (gic_data.flags & FLAGS_WORKAROUND_MTK_GICR_SAVE) { 1838 + pr_warn("Skipping NMI enable due to firmware issues\n"); 1839 + return; 1840 + } 1851 1841 1852 1842 ppi_nmi_refs = kcalloc(gic_data.ppi_nr, sizeof(*ppi_nmi_refs), GFP_KERNEL); 1853 1843 if (!ppi_nmi_refs)
+18 -13
drivers/irqchip/irq-mbigen.c
··· 240 240 struct irq_domain *domain; 241 241 struct device_node *np; 242 242 u32 num_pins; 243 + int ret = 0; 244 + 245 + parent = bus_get_dev_root(&platform_bus_type); 246 + if (!parent) 247 + return -ENODEV; 243 248 244 249 for_each_child_of_node(pdev->dev.of_node, np) { 245 250 if (!of_property_read_bool(np, "interrupt-controller")) 246 251 continue; 247 252 248 - parent = bus_get_dev_root(&platform_bus_type); 249 - if (parent) { 250 - child = of_platform_device_create(np, NULL, parent); 251 - put_device(parent); 252 - if (!child) { 253 - of_node_put(np); 254 - return -ENOMEM; 255 - } 253 + child = of_platform_device_create(np, NULL, parent); 254 + if (!child) { 255 + ret = -ENOMEM; 256 + break; 256 257 } 257 258 258 259 if (of_property_read_u32(child->dev.of_node, "num-pins", 259 260 &num_pins) < 0) { 260 261 dev_err(&pdev->dev, "No num-pins property\n"); 261 - of_node_put(np); 262 - return -EINVAL; 262 + ret = -EINVAL; 263 + break; 263 264 } 264 265 265 266 domain = platform_msi_create_device_domain(&child->dev, num_pins, ··· 268 267 &mbigen_domain_ops, 269 268 mgn_chip); 270 269 if (!domain) { 271 - of_node_put(np); 272 - return -ENOMEM; 270 + ret = -ENOMEM; 271 + break; 273 272 } 274 273 } 275 274 276 - return 0; 275 + put_device(parent); 276 + if (ret) 277 + of_node_put(np); 278 + 279 + return ret; 277 280 } 278 281 279 282 #ifdef CONFIG_ACPI
+1 -1
drivers/irqchip/irq-meson-gpio.c
··· 150 150 INIT_MESON_S4_COMMON_DATA(82) 151 151 }; 152 152 153 - static const struct of_device_id meson_irq_gpio_matches[] = { 153 + static const struct of_device_id meson_irq_gpio_matches[] __maybe_unused = { 154 154 { .compatible = "amlogic,meson8-gpio-intc", .data = &meson8_params }, 155 155 { .compatible = "amlogic,meson8b-gpio-intc", .data = &meson8b_params }, 156 156 { .compatible = "amlogic,meson-gxbb-gpio-intc", .data = &gxbb_params },
+17 -15
drivers/irqchip/irq-mips-gic.c
··· 50 50 51 51 static DEFINE_PER_CPU_READ_MOSTLY(unsigned long[GIC_MAX_LONGS], pcpu_masks); 52 52 53 - static DEFINE_SPINLOCK(gic_lock); 53 + static DEFINE_RAW_SPINLOCK(gic_lock); 54 54 static struct irq_domain *gic_irq_domain; 55 55 static int gic_shared_intrs; 56 56 static unsigned int gic_cpu_pin; ··· 210 210 211 211 irq = GIC_HWIRQ_TO_SHARED(d->hwirq); 212 212 213 - spin_lock_irqsave(&gic_lock, flags); 213 + raw_spin_lock_irqsave(&gic_lock, flags); 214 214 switch (type & IRQ_TYPE_SENSE_MASK) { 215 215 case IRQ_TYPE_EDGE_FALLING: 216 216 pol = GIC_POL_FALLING_EDGE; ··· 250 250 else 251 251 irq_set_chip_handler_name_locked(d, &gic_level_irq_controller, 252 252 handle_level_irq, NULL); 253 - spin_unlock_irqrestore(&gic_lock, flags); 253 + raw_spin_unlock_irqrestore(&gic_lock, flags); 254 254 255 255 return 0; 256 256 } ··· 268 268 return -EINVAL; 269 269 270 270 /* Assumption : cpumask refers to a single CPU */ 271 - spin_lock_irqsave(&gic_lock, flags); 271 + raw_spin_lock_irqsave(&gic_lock, flags); 272 272 273 273 /* Re-route this IRQ */ 274 274 write_gic_map_vp(irq, BIT(mips_cm_vp_id(cpu))); ··· 279 279 set_bit(irq, per_cpu_ptr(pcpu_masks, cpu)); 280 280 281 281 irq_data_update_effective_affinity(d, cpumask_of(cpu)); 282 - spin_unlock_irqrestore(&gic_lock, flags); 282 + raw_spin_unlock_irqrestore(&gic_lock, flags); 283 283 284 284 return IRQ_SET_MASK_OK; 285 285 } ··· 357 357 cd = irq_data_get_irq_chip_data(d); 358 358 cd->mask = false; 359 359 360 - spin_lock_irqsave(&gic_lock, flags); 360 + raw_spin_lock_irqsave(&gic_lock, flags); 361 361 for_each_online_cpu(cpu) { 362 362 write_gic_vl_other(mips_cm_vp_id(cpu)); 363 363 write_gic_vo_rmask(BIT(intr)); 364 364 } 365 - spin_unlock_irqrestore(&gic_lock, flags); 365 + raw_spin_unlock_irqrestore(&gic_lock, flags); 366 366 } 367 367 368 368 static void gic_unmask_local_irq_all_vpes(struct irq_data *d) ··· 375 375 cd = irq_data_get_irq_chip_data(d); 376 376 cd->mask = true; 377 377 378 - spin_lock_irqsave(&gic_lock, 
flags); 378 + raw_spin_lock_irqsave(&gic_lock, flags); 379 379 for_each_online_cpu(cpu) { 380 380 write_gic_vl_other(mips_cm_vp_id(cpu)); 381 381 write_gic_vo_smask(BIT(intr)); 382 382 } 383 - spin_unlock_irqrestore(&gic_lock, flags); 383 + raw_spin_unlock_irqrestore(&gic_lock, flags); 384 384 } 385 385 386 386 static void gic_all_vpes_irq_cpu_online(void) ··· 393 393 unsigned long flags; 394 394 int i; 395 395 396 - spin_lock_irqsave(&gic_lock, flags); 396 + raw_spin_lock_irqsave(&gic_lock, flags); 397 397 398 398 for (i = 0; i < ARRAY_SIZE(local_intrs); i++) { 399 399 unsigned int intr = local_intrs[i]; 400 400 struct gic_all_vpes_chip_data *cd; 401 401 402 + if (!gic_local_irq_is_routable(intr)) 403 + continue; 402 404 cd = &gic_all_vpes_chip_data[intr]; 403 405 write_gic_vl_map(mips_gic_vx_map_reg(intr), cd->map); 404 406 if (cd->mask) 405 407 write_gic_vl_smask(BIT(intr)); 406 408 } 407 409 408 - spin_unlock_irqrestore(&gic_lock, flags); 410 + raw_spin_unlock_irqrestore(&gic_lock, flags); 409 411 } 410 412 411 413 static struct irq_chip gic_all_vpes_local_irq_controller = { ··· 437 435 438 436 data = irq_get_irq_data(virq); 439 437 440 - spin_lock_irqsave(&gic_lock, flags); 438 + raw_spin_lock_irqsave(&gic_lock, flags); 441 439 write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin); 442 440 write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu))); 443 441 irq_data_update_effective_affinity(data, cpumask_of(cpu)); 444 - spin_unlock_irqrestore(&gic_lock, flags); 442 + raw_spin_unlock_irqrestore(&gic_lock, flags); 445 443 446 444 return 0; 447 445 } ··· 533 531 if (!gic_local_irq_is_routable(intr)) 534 532 return -EPERM; 535 533 536 - spin_lock_irqsave(&gic_lock, flags); 534 + raw_spin_lock_irqsave(&gic_lock, flags); 537 535 for_each_online_cpu(cpu) { 538 536 write_gic_vl_other(mips_cm_vp_id(cpu)); 539 537 write_gic_vo_map(mips_gic_vx_map_reg(intr), map); 540 538 } 541 - spin_unlock_irqrestore(&gic_lock, flags); 539 + raw_spin_unlock_irqrestore(&gic_lock, flags); 
542 540 543 541 return 0; 544 542 }
+6 -4
drivers/mailbox/mailbox-test.c
··· 98 98 size_t count, loff_t *ppos) 99 99 { 100 100 struct mbox_test_device *tdev = filp->private_data; 101 + char *message; 101 102 void *data; 102 103 int ret; 103 104 ··· 114 113 return -EINVAL; 115 114 } 116 115 117 - mutex_lock(&tdev->mutex); 118 - 119 - tdev->message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL); 120 - if (!tdev->message) 116 + message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL); 117 + if (!message) 121 118 return -ENOMEM; 122 119 120 + mutex_lock(&tdev->mutex); 121 + 122 + tdev->message = message; 123 123 ret = copy_from_user(tdev->message, userbuf, count); 124 124 if (ret) { 125 125 ret = -EFAULT;
+1 -1
drivers/net/dsa/mv88e6xxx/chip.c
··· 7271 7271 goto out; 7272 7272 } 7273 7273 if (chip->reset) 7274 - usleep_range(1000, 2000); 7274 + usleep_range(10000, 20000); 7275 7275 7276 7276 /* Detect if the device is configured in single chip addressing mode, 7277 7277 * otherwise continue with address specific smi init/detection.
+9 -3
drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
··· 1329 1329 return pdata->phy_if.phy_impl.an_outcome(pdata); 1330 1330 } 1331 1331 1332 - static void xgbe_phy_status_result(struct xgbe_prv_data *pdata) 1332 + static bool xgbe_phy_status_result(struct xgbe_prv_data *pdata) 1333 1333 { 1334 1334 struct ethtool_link_ksettings *lks = &pdata->phy.lks; 1335 1335 enum xgbe_mode mode; ··· 1367 1367 1368 1368 pdata->phy.duplex = DUPLEX_FULL; 1369 1369 1370 - if (xgbe_set_mode(pdata, mode) && pdata->an_again) 1370 + if (!xgbe_set_mode(pdata, mode)) 1371 + return false; 1372 + 1373 + if (pdata->an_again) 1371 1374 xgbe_phy_reconfig_aneg(pdata); 1375 + 1376 + return true; 1372 1377 } 1373 1378 1374 1379 static void xgbe_phy_status(struct xgbe_prv_data *pdata) ··· 1403 1398 return; 1404 1399 } 1405 1400 1406 - xgbe_phy_status_result(pdata); 1401 + if (xgbe_phy_status_result(pdata)) 1402 + return; 1407 1403 1408 1404 if (test_bit(XGBE_LINK_INIT, &pdata->dev_state)) 1409 1405 clear_bit(XGBE_LINK_INIT, &pdata->dev_state);
+1 -1
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 1152 1152 unsigned int total_rx_bytes = 0, total_rx_pkts = 0; 1153 1153 unsigned int offset = rx_ring->rx_offset; 1154 1154 struct xdp_buff *xdp = &rx_ring->xdp; 1155 + u32 cached_ntc = rx_ring->first_desc; 1155 1156 struct ice_tx_ring *xdp_ring = NULL; 1156 1157 struct bpf_prog *xdp_prog = NULL; 1157 1158 u32 ntc = rx_ring->next_to_clean; 1158 1159 u32 cnt = rx_ring->count; 1159 - u32 cached_ntc = ntc; 1160 1160 u32 xdp_xmit = 0; 1161 1161 u32 cached_ntu; 1162 1162 bool failure;
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
··· 490 490 (u64)timestamp_low; 491 491 break; 492 492 default: 493 - if (tracer_event->event_id >= tracer->str_db.first_string_trace || 493 + if (tracer_event->event_id >= tracer->str_db.first_string_trace && 494 494 tracer_event->event_id <= tracer->str_db.first_string_trace + 495 495 tracer->str_db.num_string_trace) { 496 496 tracer_event->type = TRACER_EVENT_TYPE_STRING;
+1
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 327 327 unsigned int sw_mtu; 328 328 int hard_mtu; 329 329 bool ptp_rx; 330 + __be32 terminate_lkey_be; 330 331 }; 331 332 332 333 static inline u8 mlx5e_get_dcb_num_tc(struct mlx5e_params *params)
+28 -16
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
··· 51 51 if (err) 52 52 goto out; 53 53 54 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 54 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 55 55 buffer = MLX5_ADDR_OF(pbmc_reg, out, buffer[i]); 56 56 port_buffer->buffer[i].lossy = 57 57 MLX5_GET(bufferx_reg, buffer, lossy); ··· 73 73 port_buffer->buffer[i].lossy); 74 74 } 75 75 76 - port_buffer->headroom_size = total_used; 76 + port_buffer->internal_buffers_size = 0; 77 + for (i = MLX5E_MAX_NETWORK_BUFFER; i < MLX5E_TOTAL_BUFFERS; i++) { 78 + buffer = MLX5_ADDR_OF(pbmc_reg, out, buffer[i]); 79 + port_buffer->internal_buffers_size += 80 + MLX5_GET(bufferx_reg, buffer, size) * port_buff_cell_sz; 81 + } 82 + 77 83 port_buffer->port_buffer_size = 78 84 MLX5_GET(pbmc_reg, out, port_buffer_size) * port_buff_cell_sz; 79 - port_buffer->spare_buffer_size = 80 - port_buffer->port_buffer_size - total_used; 85 + port_buffer->headroom_size = total_used; 86 + port_buffer->spare_buffer_size = port_buffer->port_buffer_size - 87 + port_buffer->internal_buffers_size - 88 + port_buffer->headroom_size; 81 89 82 - mlx5e_dbg(HW, priv, "total buffer size=%d, spare buffer size=%d\n", 83 - port_buffer->port_buffer_size, 90 + mlx5e_dbg(HW, priv, 91 + "total buffer size=%u, headroom buffer size=%u, internal buffers size=%u, spare buffer size=%u\n", 92 + port_buffer->port_buffer_size, port_buffer->headroom_size, 93 + port_buffer->internal_buffers_size, 84 94 port_buffer->spare_buffer_size); 85 95 out: 86 96 kfree(out); ··· 216 206 if (!MLX5_CAP_GEN(mdev, sbcam_reg)) 217 207 return 0; 218 208 219 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) 209 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) 220 210 lossless_buff_count += ((port_buffer->buffer[i].size) && 221 211 (!(port_buffer->buffer[i].lossy))); 222 212 223 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 213 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 224 214 p = select_sbcm_params(&port_buffer->buffer[i], lossless_buff_count); 225 215 err = mlx5e_port_set_sbcm(mdev, 0, i, 226 216 
MLX5_INGRESS_DIR, ··· 303 293 if (err) 304 294 goto out; 305 295 306 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 296 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 307 297 void *buffer = MLX5_ADDR_OF(pbmc_reg, in, buffer[i]); 308 298 u64 size = port_buffer->buffer[i].size; 309 299 u64 xoff = port_buffer->buffer[i].xoff; ··· 361 351 { 362 352 int i; 363 353 364 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 354 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 365 355 if (port_buffer->buffer[i].lossy) { 366 356 port_buffer->buffer[i].xoff = 0; 367 357 port_buffer->buffer[i].xon = 0; ··· 418 408 int err; 419 409 int i; 420 410 421 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 411 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 422 412 prio_count = 0; 423 413 lossy_count = 0; 424 414 ··· 442 432 } 443 433 444 434 if (changed) { 445 - err = port_update_pool_cfg(mdev, port_buffer); 435 + err = update_xoff_threshold(port_buffer, xoff, max_mtu, port_buff_cell_sz); 446 436 if (err) 447 437 return err; 448 438 449 - err = update_xoff_threshold(port_buffer, xoff, max_mtu, port_buff_cell_sz); 439 + err = port_update_pool_cfg(mdev, port_buffer); 450 440 if (err) 451 441 return err; 452 442 ··· 525 515 526 516 if (change & MLX5E_PORT_BUFFER_PRIO2BUFFER) { 527 517 update_prio2buffer = true; 528 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) 518 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) 529 519 mlx5e_dbg(HW, priv, "%s: requested to map prio[%d] to buffer %d\n", 530 520 __func__, i, prio2buffer[i]); 531 521 ··· 540 530 } 541 531 542 532 if (change & MLX5E_PORT_BUFFER_SIZE) { 543 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 533 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 544 534 mlx5e_dbg(HW, priv, "%s: buffer[%d]=%d\n", __func__, i, buffer_size[i]); 545 535 if (!port_buffer.buffer[i].lossy && !buffer_size[i]) { 546 536 mlx5e_dbg(HW, priv, "%s: lossless buffer[%d] size cannot be zero\n", ··· 554 544 555 545 mlx5e_dbg(HW, priv, "%s: total buffer requested=%d\n", __func__, 
total_used); 556 546 557 - if (total_used > port_buffer.port_buffer_size) 547 + if (total_used > port_buffer.headroom_size && 548 + (total_used - port_buffer.headroom_size) > 549 + port_buffer.spare_buffer_size) 558 550 return -EINVAL; 559 551 560 552 update_buffer = true;
+5 -3
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
··· 35 35 #include "en.h" 36 36 #include "port.h" 37 37 38 - #define MLX5E_MAX_BUFFER 8 38 + #define MLX5E_MAX_NETWORK_BUFFER 8 39 + #define MLX5E_TOTAL_BUFFERS 10 39 40 #define MLX5E_DEFAULT_CABLE_LEN 7 /* 7 meters */ 40 41 41 42 #define MLX5_BUFFER_SUPPORTED(mdev) (MLX5_CAP_GEN(mdev, pcam_reg) && \ ··· 61 60 struct mlx5e_port_buffer { 62 61 u32 port_buffer_size; 63 62 u32 spare_buffer_size; 64 - u32 headroom_size; 65 - struct mlx5e_bufferx_reg buffer[MLX5E_MAX_BUFFER]; 63 + u32 headroom_size; /* Buffers 0-7 */ 64 + u32 internal_buffers_size; /* Buffers 8-9 */ 65 + struct mlx5e_bufferx_reg buffer[MLX5E_MAX_NETWORK_BUFFER]; 66 66 }; 67 67 68 68 int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+6 -1
drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
··· 84 84 85 85 int 86 86 mlx5e_tc_act_post_parse(struct mlx5e_tc_act_parse_state *parse_state, 87 - struct flow_action *flow_action, 87 + struct flow_action *flow_action, int from, int to, 88 88 struct mlx5_flow_attr *attr, 89 89 enum mlx5_flow_namespace_type ns_type) 90 90 { ··· 96 96 priv = parse_state->flow->priv; 97 97 98 98 flow_action_for_each(i, act, flow_action) { 99 + if (i < from) 100 + continue; 101 + else if (i > to) 102 + break; 103 + 99 104 tc_act = mlx5e_tc_act_get(act->id, ns_type); 100 105 if (!tc_act || !tc_act->post_parse) 101 106 continue;
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
··· 112 112 113 113 int 114 114 mlx5e_tc_act_post_parse(struct mlx5e_tc_act_parse_state *parse_state, 115 - struct flow_action *flow_action, 115 + struct flow_action *flow_action, int from, int to, 116 116 struct mlx5_flow_attr *attr, 117 117 enum mlx5_flow_namespace_type ns_type); 118 118
+103 -17
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
··· 492 492 mlx5e_encap_dealloc(priv, e); 493 493 } 494 494 495 + static void mlx5e_encap_put_locked(struct mlx5e_priv *priv, struct mlx5e_encap_entry *e) 496 + { 497 + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; 498 + 499 + lockdep_assert_held(&esw->offloads.encap_tbl_lock); 500 + 501 + if (!refcount_dec_and_test(&e->refcnt)) 502 + return; 503 + list_del(&e->route_list); 504 + hash_del_rcu(&e->encap_hlist); 505 + mlx5e_encap_dealloc(priv, e); 506 + } 507 + 495 508 static void mlx5e_decap_put(struct mlx5e_priv *priv, struct mlx5e_decap_entry *d) 496 509 { 497 510 struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; ··· 829 816 uintptr_t hash_key; 830 817 int err = 0; 831 818 819 + lockdep_assert_held(&esw->offloads.encap_tbl_lock); 820 + 832 821 parse_attr = attr->parse_attr; 833 822 tun_info = parse_attr->tun_info[out_index]; 834 823 mpls_info = &parse_attr->mpls_info[out_index]; ··· 844 829 845 830 hash_key = hash_encap_info(&key); 846 831 847 - mutex_lock(&esw->offloads.encap_tbl_lock); 848 832 e = mlx5e_encap_get(priv, &key, hash_key); 849 833 850 834 /* must verify if encap is valid or not */ ··· 854 840 goto out_err; 855 841 } 856 842 857 - mutex_unlock(&esw->offloads.encap_tbl_lock); 858 - wait_for_completion(&e->res_ready); 859 - 860 - /* Protect against concurrent neigh update. */ 861 - mutex_lock(&esw->offloads.encap_tbl_lock); 862 - if (e->compl_result < 0) { 863 - err = -EREMOTEIO; 864 - goto out_err; 865 - } 866 843 goto attach_flow; 867 844 } 868 845 ··· 882 877 INIT_LIST_HEAD(&e->flows); 883 878 hash_add_rcu(esw->offloads.encap_tbl, &e->encap_hlist, hash_key); 884 879 tbl_time_before = mlx5e_route_tbl_get_last_update(priv); 885 - mutex_unlock(&esw->offloads.encap_tbl_lock); 886 880 887 881 if (family == AF_INET) 888 882 err = mlx5e_tc_tun_create_header_ipv4(priv, mirred_dev, e); 889 883 else if (family == AF_INET6) 890 884 err = mlx5e_tc_tun_create_header_ipv6(priv, mirred_dev, e); 891 885 892 - /* Protect against concurrent neigh update. 
*/ 893 - mutex_lock(&esw->offloads.encap_tbl_lock); 894 886 complete_all(&e->res_ready); 895 887 if (err) { 896 888 e->compl_result = err; ··· 922 920 } else { 923 921 flow_flag_set(flow, SLOW); 924 922 } 925 - mutex_unlock(&esw->offloads.encap_tbl_lock); 926 923 927 924 return err; 928 925 929 926 out_err: 930 - mutex_unlock(&esw->offloads.encap_tbl_lock); 931 927 if (e) 932 - mlx5e_encap_put(priv, e); 928 + mlx5e_encap_put_locked(priv, e); 933 929 return err; 934 930 935 931 out_err_init: 936 - mutex_unlock(&esw->offloads.encap_tbl_lock); 937 932 kfree(tun_info); 938 933 kfree(e); 939 934 return err; ··· 1013 1014 out_err: 1014 1015 mutex_unlock(&esw->offloads.decap_tbl_lock); 1015 1016 return err; 1017 + } 1018 + 1019 + int mlx5e_tc_tun_encap_dests_set(struct mlx5e_priv *priv, 1020 + struct mlx5e_tc_flow *flow, 1021 + struct mlx5_flow_attr *attr, 1022 + struct netlink_ext_ack *extack, 1023 + bool *vf_tun) 1024 + { 1025 + struct mlx5e_tc_flow_parse_attr *parse_attr; 1026 + struct mlx5_esw_flow_attr *esw_attr; 1027 + struct net_device *encap_dev = NULL; 1028 + struct mlx5e_rep_priv *rpriv; 1029 + struct mlx5e_priv *out_priv; 1030 + struct mlx5_eswitch *esw; 1031 + int out_index; 1032 + int err = 0; 1033 + 1034 + if (!mlx5e_is_eswitch_flow(flow)) 1035 + return 0; 1036 + 1037 + parse_attr = attr->parse_attr; 1038 + esw_attr = attr->esw_attr; 1039 + *vf_tun = false; 1040 + 1041 + esw = priv->mdev->priv.eswitch; 1042 + mutex_lock(&esw->offloads.encap_tbl_lock); 1043 + for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) { 1044 + struct net_device *out_dev; 1045 + int mirred_ifindex; 1046 + 1047 + if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) 1048 + continue; 1049 + 1050 + mirred_ifindex = parse_attr->mirred_ifindex[out_index]; 1051 + out_dev = dev_get_by_index(dev_net(priv->netdev), mirred_ifindex); 1052 + if (!out_dev) { 1053 + NL_SET_ERR_MSG_MOD(extack, "Requested mirred device not found"); 1054 + err = -ENODEV; 1055 + goto out; 
1056 + } 1057 + err = mlx5e_attach_encap(priv, flow, attr, out_dev, out_index, 1058 + extack, &encap_dev); 1059 + dev_put(out_dev); 1060 + if (err) 1061 + goto out; 1062 + 1063 + if (esw_attr->dests[out_index].flags & 1064 + MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE && 1065 + !esw_attr->dest_int_port) 1066 + *vf_tun = true; 1067 + 1068 + out_priv = netdev_priv(encap_dev); 1069 + rpriv = out_priv->ppriv; 1070 + esw_attr->dests[out_index].rep = rpriv->rep; 1071 + esw_attr->dests[out_index].mdev = out_priv->mdev; 1072 + } 1073 + 1074 + if (*vf_tun && esw_attr->out_count > 1) { 1075 + NL_SET_ERR_MSG_MOD(extack, "VF tunnel encap with mirroring is not supported"); 1076 + err = -EOPNOTSUPP; 1077 + goto out; 1078 + } 1079 + 1080 + out: 1081 + mutex_unlock(&esw->offloads.encap_tbl_lock); 1082 + return err; 1083 + } 1084 + 1085 + void mlx5e_tc_tun_encap_dests_unset(struct mlx5e_priv *priv, 1086 + struct mlx5e_tc_flow *flow, 1087 + struct mlx5_flow_attr *attr) 1088 + { 1089 + struct mlx5_esw_flow_attr *esw_attr; 1090 + int out_index; 1091 + 1092 + if (!mlx5e_is_eswitch_flow(flow)) 1093 + return; 1094 + 1095 + esw_attr = attr->esw_attr; 1096 + 1097 + for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) { 1098 + if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) 1099 + continue; 1100 + 1101 + mlx5e_detach_encap(flow->priv, flow, attr, out_index); 1102 + kfree(attr->parse_attr->tun_info[out_index]); 1103 + } 1016 1104 } 1017 1105 1018 1106 static int cmp_route_info(struct mlx5e_route_key *a,
+9
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.h
··· 30 30 void mlx5e_detach_decap_route(struct mlx5e_priv *priv, 31 31 struct mlx5e_tc_flow *flow); 32 32 33 + int mlx5e_tc_tun_encap_dests_set(struct mlx5e_priv *priv, 34 + struct mlx5e_tc_flow *flow, 35 + struct mlx5_flow_attr *attr, 36 + struct netlink_ext_ack *extack, 37 + bool *vf_tun); 38 + void mlx5e_tc_tun_encap_dests_unset(struct mlx5e_priv *priv, 39 + struct mlx5e_tc_flow *flow, 40 + struct mlx5_flow_attr *attr); 41 + 33 42 struct ip_tunnel_info *mlx5e_dup_tun_info(const struct ip_tunnel_info *tun_info); 34 43 35 44 int mlx5e_tc_set_attr_rx_tun(struct mlx5e_tc_flow *flow,
+4 -7
drivers/net/ethernet/mellanox/mlx5/core/en_common.c
··· 150 150 151 151 inlen = MLX5_ST_SZ_BYTES(modify_tir_in); 152 152 in = kvzalloc(inlen, GFP_KERNEL); 153 - if (!in) { 154 - err = -ENOMEM; 155 - goto out; 156 - } 153 + if (!in) 154 + return -ENOMEM; 157 155 158 156 if (enable_uc_lb) 159 157 lb_flags = MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST; ··· 169 171 tirn = tir->tirn; 170 172 err = mlx5_core_modify_tir(mdev, tirn, in); 171 173 if (err) 172 - goto out; 174 + break; 173 175 } 176 + mutex_unlock(&mdev->mlx5e_res.hw_objs.td.list_lock); 174 177 175 - out: 176 178 kvfree(in); 177 179 if (err) 178 180 netdev_err(priv->netdev, "refresh tir(0x%x) failed, %d\n", tirn, err); 179 - mutex_unlock(&mdev->mlx5e_res.hw_objs.td.list_lock); 180 181 181 182 return err; 182 183 }
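The en_common.c hunk above drops the `out:` label: the allocation failure now returns before the lock is taken, and a loop failure uses `break` so the mutex is released on a single path before the free. A minimal sketch of that shape, with hypothetical names and errno values standing in for the mlx5 calls:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of the en_common.c refactor: allocate scratch
 * memory *before* entering the locked region so that failure can simply
 * return, and use `break` (not goto) so the lock is released exactly
 * once after the loop, before the free. */
static int refresh_all(int *objs, int nobjs, int fail_at, int *lock_depth)
{
    int err = 0;

    char *scratch = malloc(64);
    if (!scratch)
        return -12;            /* -ENOMEM: nothing to unwind yet */

    (*lock_depth)++;           /* stand-in for mutex_lock() */
    for (int i = 0; i < nobjs; i++) {
        if (i == fail_at) {    /* stand-in for a failing modify call */
            err = -5;          /* -EIO */
            break;             /* leave the loop; still unlock below */
        }
        objs[i] = 1;
    }
    (*lock_depth)--;           /* stand-in for mutex_unlock() */

    free(scratch);
    return err;
}
```

The point of the reorder is that the unlock no longer sits after the error log, so the lock is never held across the failure reporting.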
+4 -3
drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
··· 926 926 if (err) 927 927 return err; 928 928 929 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) 929 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) 930 930 dcb_buffer->buffer_size[i] = port_buffer.buffer[i].size; 931 - dcb_buffer->total_size = port_buffer.port_buffer_size; 931 + dcb_buffer->total_size = port_buffer.port_buffer_size - 932 + port_buffer.internal_buffers_size; 932 933 933 934 return 0; 934 935 } ··· 971 970 if (err) 972 971 return err; 973 972 974 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 973 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 975 974 if (port_buffer.buffer[i].size != dcb_buffer->buffer_size[i]) { 976 975 changed |= MLX5E_PORT_BUFFER_SIZE; 977 976 buffer_size = dcb_buffer->buffer_size;
+39 -30
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 727 727 mlx5e_rq_shampo_hd_free(rq); 728 728 } 729 729 730 - static __be32 mlx5e_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev) 731 - { 732 - u32 out[MLX5_ST_SZ_DW(query_special_contexts_out)] = {}; 733 - u32 in[MLX5_ST_SZ_DW(query_special_contexts_in)] = {}; 734 - int res; 735 - 736 - if (!MLX5_CAP_GEN(dev, terminate_scatter_list_mkey)) 737 - return MLX5_TERMINATE_SCATTER_LIST_LKEY; 738 - 739 - MLX5_SET(query_special_contexts_in, in, opcode, 740 - MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS); 741 - res = mlx5_cmd_exec_inout(dev, query_special_contexts, in, out); 742 - if (res) 743 - return MLX5_TERMINATE_SCATTER_LIST_LKEY; 744 - 745 - res = MLX5_GET(query_special_contexts_out, out, 746 - terminate_scatter_list_mkey); 747 - return cpu_to_be32(res); 748 - } 749 - 750 730 static int mlx5e_alloc_rq(struct mlx5e_params *params, 751 731 struct mlx5e_xsk_param *xsk, 752 732 struct mlx5e_rq_param *rqp, ··· 888 908 /* check if num_frags is not a pow of two */ 889 909 if (rq->wqe.info.num_frags < (1 << rq->wqe.info.log_num_frags)) { 890 910 wqe->data[f].byte_count = 0; 891 - wqe->data[f].lkey = mlx5e_get_terminate_scatter_list_mkey(mdev); 911 + wqe->data[f].lkey = params->terminate_lkey_be; 892 912 wqe->data[f].addr = 0; 893 913 } 894 914 } ··· 4987 5007 /* RQ */ 4988 5008 mlx5e_build_rq_params(mdev, params); 4989 5009 5010 + params->terminate_lkey_be = mlx5_core_get_terminate_scatter_list_mkey(mdev); 5011 + 4990 5012 params->packet_merge.timeout = mlx5e_choose_lro_timeout(mdev, MLX5E_DEFAULT_LRO_TIMEOUT); 4991 5013 4992 5014 /* CQ moderation params */ ··· 5261 5279 5262 5280 mlx5e_timestamp_init(priv); 5263 5281 5282 + priv->dfs_root = debugfs_create_dir("nic", 5283 + mlx5_debugfs_get_dev_root(mdev)); 5284 + 5264 5285 fs = mlx5e_fs_init(priv->profile, mdev, 5265 5286 !test_bit(MLX5E_STATE_DESTROYING, &priv->state), 5266 5287 priv->dfs_root); 5267 5288 if (!fs) { 5268 5289 err = -ENOMEM; 5269 5290 mlx5_core_err(mdev, "FS initialization failed, %d\n", err); 5291 + 
debugfs_remove_recursive(priv->dfs_root); 5270 5292 return err; 5271 5293 } 5272 5294 priv->fs = fs; ··· 5291 5305 mlx5e_health_destroy_reporters(priv); 5292 5306 mlx5e_ktls_cleanup(priv); 5293 5307 mlx5e_fs_cleanup(priv->fs); 5308 + debugfs_remove_recursive(priv->dfs_root); 5294 5309 priv->fs = NULL; 5295 5310 } 5296 5311 ··· 5838 5851 } 5839 5852 5840 5853 static int 5841 - mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mdev, 5842 - const struct mlx5e_profile *new_profile, void *new_ppriv) 5854 + mlx5e_netdev_init_profile(struct net_device *netdev, struct mlx5_core_dev *mdev, 5855 + const struct mlx5e_profile *new_profile, void *new_ppriv) 5843 5856 { 5844 5857 struct mlx5e_priv *priv = netdev_priv(netdev); 5845 5858 int err; ··· 5855 5868 err = new_profile->init(priv->mdev, priv->netdev); 5856 5869 if (err) 5857 5870 goto priv_cleanup; 5871 + 5872 + return 0; 5873 + 5874 + priv_cleanup: 5875 + mlx5e_priv_cleanup(priv); 5876 + return err; 5877 + } 5878 + 5879 + static int 5880 + mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mdev, 5881 + const struct mlx5e_profile *new_profile, void *new_ppriv) 5882 + { 5883 + struct mlx5e_priv *priv = netdev_priv(netdev); 5884 + int err; 5885 + 5886 + err = mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv); 5887 + if (err) 5888 + return err; 5889 + 5858 5890 err = mlx5e_attach_netdev(priv); 5859 5891 if (err) 5860 5892 goto profile_cleanup; ··· 5881 5875 5882 5876 profile_cleanup: 5883 5877 new_profile->cleanup(priv); 5884 - priv_cleanup: 5885 5878 mlx5e_priv_cleanup(priv); 5886 5879 return err; 5887 5880 } ··· 5898 5893 mlx5e_detach_netdev(priv); 5899 5894 priv->profile->cleanup(priv); 5900 5895 mlx5e_priv_cleanup(priv); 5896 + 5897 + if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) { 5898 + mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv); 5899 + set_bit(MLX5E_STATE_DESTROYING, &priv->state); 5900 + return -EIO; 5901 + } 5901 5902 5902 
5903 err = mlx5e_netdev_attach_profile(netdev, mdev, new_profile, new_ppriv); 5903 5904 if (err) { /* roll back to original profile */ ··· 5966 5955 struct net_device *netdev = priv->netdev; 5967 5956 struct mlx5_core_dev *mdev = priv->mdev; 5968 5957 5969 - if (!netif_device_present(netdev)) 5958 + if (!netif_device_present(netdev)) { 5959 + if (test_bit(MLX5E_STATE_DESTROYING, &priv->state)) 5960 + mlx5e_destroy_mdev_resources(mdev); 5970 5961 return -ENODEV; 5962 + } 5971 5963 5972 5964 mlx5e_detach_netdev(priv); 5973 5965 mlx5e_destroy_mdev_resources(mdev); ··· 6016 6002 priv->profile = profile; 6017 6003 priv->ppriv = NULL; 6018 6004 6019 - priv->dfs_root = debugfs_create_dir("nic", 6020 - mlx5_debugfs_get_dev_root(priv->mdev)); 6021 - 6022 6005 err = profile->init(mdev, netdev); 6023 6006 if (err) { 6024 6007 mlx5_core_err(mdev, "mlx5e_nic_profile init failed, %d\n", err); ··· 6044 6033 err_profile_cleanup: 6045 6034 profile->cleanup(priv); 6046 6035 err_destroy_netdev: 6047 - debugfs_remove_recursive(priv->dfs_root); 6048 6036 mlx5e_destroy_netdev(priv); 6049 6037 err_devlink_port_unregister: 6050 6038 mlx5e_devlink_port_unregister(mlx5e_dev); ··· 6063 6053 unregister_netdev(priv->netdev); 6064 6054 mlx5e_suspend(adev, state); 6065 6055 priv->profile->cleanup(priv); 6066 - debugfs_remove_recursive(priv->dfs_root); 6067 6056 mlx5e_destroy_netdev(priv); 6068 6057 mlx5e_devlink_port_unregister(mlx5e_dev); 6069 6058 mlx5e_destroy_devlink(mlx5e_dev);
+6
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 30 30 * SOFTWARE. 31 31 */ 32 32 33 + #include <linux/debugfs.h> 33 34 #include <linux/mlx5/fs.h> 34 35 #include <net/switchdev.h> 35 36 #include <net/pkt_cls.h> ··· 813 812 { 814 813 struct mlx5e_priv *priv = netdev_priv(netdev); 815 814 815 + priv->dfs_root = debugfs_create_dir("nic", 816 + mlx5_debugfs_get_dev_root(mdev)); 817 + 816 818 priv->fs = mlx5e_fs_init(priv->profile, mdev, 817 819 !test_bit(MLX5E_STATE_DESTROYING, &priv->state), 818 820 priv->dfs_root); 819 821 if (!priv->fs) { 820 822 netdev_err(priv->netdev, "FS allocation failed\n"); 823 + debugfs_remove_recursive(priv->dfs_root); 821 824 return -ENOMEM; 822 825 } 823 826 ··· 834 829 static void mlx5e_cleanup_rep(struct mlx5e_priv *priv) 835 830 { 836 831 mlx5e_fs_cleanup(priv->fs); 832 + debugfs_remove_recursive(priv->dfs_root); 837 833 priv->fs = NULL; 838 834 } 839 835
+7 -90
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1700 1700 } 1701 1701 1702 1702 static int 1703 - set_encap_dests(struct mlx5e_priv *priv, 1704 - struct mlx5e_tc_flow *flow, 1705 - struct mlx5_flow_attr *attr, 1706 - struct netlink_ext_ack *extack, 1707 - bool *vf_tun) 1708 - { 1709 - struct mlx5e_tc_flow_parse_attr *parse_attr; 1710 - struct mlx5_esw_flow_attr *esw_attr; 1711 - struct net_device *encap_dev = NULL; 1712 - struct mlx5e_rep_priv *rpriv; 1713 - struct mlx5e_priv *out_priv; 1714 - int out_index; 1715 - int err = 0; 1716 - 1717 - if (!mlx5e_is_eswitch_flow(flow)) 1718 - return 0; 1719 - 1720 - parse_attr = attr->parse_attr; 1721 - esw_attr = attr->esw_attr; 1722 - *vf_tun = false; 1723 - 1724 - for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) { 1725 - struct net_device *out_dev; 1726 - int mirred_ifindex; 1727 - 1728 - if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) 1729 - continue; 1730 - 1731 - mirred_ifindex = parse_attr->mirred_ifindex[out_index]; 1732 - out_dev = dev_get_by_index(dev_net(priv->netdev), mirred_ifindex); 1733 - if (!out_dev) { 1734 - NL_SET_ERR_MSG_MOD(extack, "Requested mirred device not found"); 1735 - err = -ENODEV; 1736 - goto out; 1737 - } 1738 - err = mlx5e_attach_encap(priv, flow, attr, out_dev, out_index, 1739 - extack, &encap_dev); 1740 - dev_put(out_dev); 1741 - if (err) 1742 - goto out; 1743 - 1744 - if (esw_attr->dests[out_index].flags & 1745 - MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE && 1746 - !esw_attr->dest_int_port) 1747 - *vf_tun = true; 1748 - 1749 - out_priv = netdev_priv(encap_dev); 1750 - rpriv = out_priv->ppriv; 1751 - esw_attr->dests[out_index].rep = rpriv->rep; 1752 - esw_attr->dests[out_index].mdev = out_priv->mdev; 1753 - } 1754 - 1755 - if (*vf_tun && esw_attr->out_count > 1) { 1756 - NL_SET_ERR_MSG_MOD(extack, "VF tunnel encap with mirroring is not supported"); 1757 - err = -EOPNOTSUPP; 1758 - goto out; 1759 - } 1760 - 1761 - out: 1762 - return err; 1763 - } 1764 - 1765 - static void 1766 - 
clean_encap_dests(struct mlx5e_priv *priv, 1767 - struct mlx5e_tc_flow *flow, 1768 - struct mlx5_flow_attr *attr) 1769 - { 1770 - struct mlx5_esw_flow_attr *esw_attr; 1771 - int out_index; 1772 - 1773 - if (!mlx5e_is_eswitch_flow(flow)) 1774 - return; 1775 - 1776 - esw_attr = attr->esw_attr; 1777 - 1778 - for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) { 1779 - if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) 1780 - continue; 1781 - 1782 - mlx5e_detach_encap(priv, flow, attr, out_index); 1783 - kfree(attr->parse_attr->tun_info[out_index]); 1784 - } 1785 - } 1786 - 1787 - static int 1788 1703 verify_attr_actions(u32 actions, struct netlink_ext_ack *extack) 1789 1704 { 1790 1705 if (!(actions & ··· 1735 1820 if (err) 1736 1821 goto err_out; 1737 1822 1738 - err = set_encap_dests(flow->priv, flow, attr, extack, &vf_tun); 1823 + err = mlx5e_tc_tun_encap_dests_set(flow->priv, flow, attr, extack, &vf_tun); 1739 1824 if (err) 1740 1825 goto err_out; 1741 1826 ··· 3865 3950 struct mlx5_flow_attr *prev_attr; 3866 3951 struct flow_action_entry *act; 3867 3952 struct mlx5e_tc_act *tc_act; 3953 + int err, i, i_split = 0; 3868 3954 bool is_missable; 3869 - int err, i; 3870 3955 3871 3956 ns_type = mlx5e_get_flow_namespace(flow); 3872 3957 list_add(&attr->list, &flow->attrs); ··· 3907 3992 i < flow_action->num_entries - 1)) { 3908 3993 is_missable = tc_act->is_missable ? 
tc_act->is_missable(act) : false; 3909 3994 3910 - err = mlx5e_tc_act_post_parse(parse_state, flow_action, attr, ns_type); 3995 + err = mlx5e_tc_act_post_parse(parse_state, flow_action, i_split, i, attr, 3996 + ns_type); 3911 3997 if (err) 3912 3998 goto out_free_post_acts; 3913 3999 ··· 3918 4002 goto out_free_post_acts; 3919 4003 } 3920 4004 4005 + i_split = i + 1; 3921 4006 list_add(&attr->list, &flow->attrs); 3922 4007 } 3923 4008 ··· 3933 4016 } 3934 4017 } 3935 4018 3936 - err = mlx5e_tc_act_post_parse(parse_state, flow_action, attr, ns_type); 4019 + err = mlx5e_tc_act_post_parse(parse_state, flow_action, i_split, i, attr, ns_type); 3937 4020 if (err) 3938 4021 goto out_free_post_acts; 3939 4022 ··· 4247 4330 if (attr->post_act_handle) 4248 4331 mlx5e_tc_post_act_del(get_post_action(flow->priv), attr->post_act_handle); 4249 4332 4250 - clean_encap_dests(flow->priv, flow, attr); 4333 + mlx5e_tc_tun_encap_dests_unset(flow->priv, flow, attr); 4251 4334 4252 4335 if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) 4253 4336 mlx5_fc_destroy(counter_dev, attr->counter);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 824 824 ncomp_eqs = table->num_comp_eqs; 825 825 cpus = kcalloc(ncomp_eqs, sizeof(*cpus), GFP_KERNEL); 826 826 if (!cpus) 827 - ret = -ENOMEM; 827 + return -ENOMEM; 828 828 829 829 i = 0; 830 830 rcu_read_lock();
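The one-line eq.c change fixes a classic pattern: recording `-ENOMEM` in a local without returning lets execution fall through and dereference the failed allocation. A hedged stand-alone sketch of the fixed shape (names and error values are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of the eq.c fix: an allocation failure must
 * return immediately; merely assigning an error code would let the
 * code below dereference the NULL buffer. */
static int comp_irqs_request(int **cpus_out, int n, int simulate_oom)
{
    int *cpus = simulate_oom ? NULL : calloc(n, sizeof(*cpus));
    if (!cpus)
        return -12;   /* -ENOMEM: return, don't just record it */

    for (int i = 0; i < n; i++)
        cpus[i] = i;  /* would crash here had we fallen through */

    *cpus_out = cpus;
    return 0;
}
```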
+5 -4
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 923 923 } 924 924 925 925 mlx5_pci_vsc_init(dev); 926 - dev->caps.embedded_cpu = mlx5_read_embedded_cpu(dev); 927 926 return 0; 928 927 929 928 err_clr_master: ··· 1154 1155 goto err_cmd_cleanup; 1155 1156 } 1156 1157 1158 + dev->caps.embedded_cpu = mlx5_read_embedded_cpu(dev); 1157 1159 mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_UP); 1158 1160 1159 1161 mlx5_start_health_poll(dev); ··· 1802 1802 struct devlink *devlink = priv_to_devlink(dev); 1803 1803 1804 1804 set_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state); 1805 - /* mlx5_drain_fw_reset() is using devlink APIs. Hence, we must drain 1806 - * fw_reset before unregistering the devlink. 1805 + /* mlx5_drain_fw_reset() and mlx5_drain_health_wq() are using 1806 + * devlink notify APIs. 1807 + * Hence, we must drain them before unregistering the devlink. 1807 1808 */ 1808 1809 mlx5_drain_fw_reset(dev); 1810 + mlx5_drain_health_wq(dev); 1809 1811 devlink_unregister(devlink); 1810 1812 mlx5_sriov_disable(pdev); 1811 1813 mlx5_thermal_uninit(dev); 1812 1814 mlx5_crdump_disable(dev); 1813 - mlx5_drain_health_wq(dev); 1814 1815 mlx5_uninit_one(dev); 1815 1816 mlx5_pci_close(dev); 1816 1817 mlx5_mdev_uninit(dev);
+21
drivers/net/ethernet/mellanox/mlx5/core/mr.c
··· 32 32 33 33 #include <linux/kernel.h> 34 34 #include <linux/mlx5/driver.h> 35 + #include <linux/mlx5/qp.h> 35 36 #include "mlx5_core.h" 36 37 37 38 int mlx5_core_create_mkey(struct mlx5_core_dev *dev, u32 *mkey, u32 *in, ··· 123 122 return mlx5_cmd_exec_in(dev, destroy_psv, in); 124 123 } 125 124 EXPORT_SYMBOL(mlx5_core_destroy_psv); 125 + 126 + __be32 mlx5_core_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev) 127 + { 128 + u32 out[MLX5_ST_SZ_DW(query_special_contexts_out)] = {}; 129 + u32 in[MLX5_ST_SZ_DW(query_special_contexts_in)] = {}; 130 + u32 mkey; 131 + 132 + if (!MLX5_CAP_GEN(dev, terminate_scatter_list_mkey)) 133 + return MLX5_TERMINATE_SCATTER_LIST_LKEY; 134 + 135 + MLX5_SET(query_special_contexts_in, in, opcode, 136 + MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS); 137 + if (mlx5_cmd_exec_inout(dev, query_special_contexts, in, out)) 138 + return MLX5_TERMINATE_SCATTER_LIST_LKEY; 139 + 140 + mkey = MLX5_GET(query_special_contexts_out, out, 141 + terminate_scatter_list_mkey); 142 + return cpu_to_be32(mkey); 143 + } 144 + EXPORT_SYMBOL(mlx5_core_get_terminate_scatter_list_mkey);
+7 -6
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
··· 141 141 irq_update_affinity_hint(irq->map.virq, NULL); 142 142 #ifdef CONFIG_RFS_ACCEL 143 143 rmap = mlx5_eq_table_get_rmap(pool->dev); 144 - if (rmap && irq->map.index) 144 + if (rmap) 145 145 irq_cpu_rmap_remove(rmap, irq->map.virq); 146 146 #endif 147 147 ··· 232 232 if (!irq) 233 233 return ERR_PTR(-ENOMEM); 234 234 if (!i || !pci_msix_can_alloc_dyn(dev->pdev)) { 235 - /* The vector at index 0 was already allocated. 236 - * Just get the irq number. If dynamic irq is not supported 237 - * vectors have also been allocated. 235 + /* The vector at index 0 is always statically allocated. If 236 + * dynamic irq is not supported all vectors are statically 237 + * allocated. In both cases just get the irq number and set 238 + * the index. 238 239 */ 239 240 irq->map.virq = pci_irq_vector(dev->pdev, i); 240 - irq->map.index = 0; 241 + irq->map.index = i; 241 242 } else { 242 243 irq->map = pci_msix_alloc_irq_at(dev->pdev, MSI_ANY_INDEX, af_desc); 243 244 if (!irq->map.virq) { ··· 571 570 572 571 af_desc.is_managed = false; 573 572 for (i = 0; i < nirqs; i++) { 573 + cpumask_clear(&af_desc.mask); 574 574 cpumask_set_cpu(cpus[i], &af_desc.mask); 575 575 irq = mlx5_irq_request(dev, i + 1, &af_desc, rmap); 576 576 if (IS_ERR(irq)) 577 577 break; 578 - cpumask_clear(&af_desc.mask); 579 578 irqs[i] = irq; 580 579 } 581 580
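In the pci_irq.c hunk the `cpumask_clear()` moves to the top of the loop body: the affinity descriptor is reused across iterations, so clearing it only after a successful request both skips the clear when the loop breaks early and lets CPU bits accumulate. A small sketch using a plain 64-bit mask in place of a `cpumask` (all names hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the pci_irq.c fix: clear the reused mask
 * *before* setting the next CPU bit, so each request sees exactly one
 * CPU rather than the union of all earlier iterations. */
static uint64_t request_masks(const int *cpus, int n, uint64_t *last_mask)
{
    uint64_t mask = 0, combined = 0;

    for (int i = 0; i < n; i++) {
        mask = 0;                        /* cpumask_clear() first */
        mask |= UINT64_C(1) << cpus[i];  /* cpumask_set_cpu()     */
        combined |= mask;
        *last_mask = mask;   /* the mask an irq request would see */
    }
    return combined;
}
```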
+1
drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
··· 63 63 struct mlx5_sf_dev *sf_dev = container_of(adev, struct mlx5_sf_dev, adev); 64 64 struct devlink *devlink = priv_to_devlink(sf_dev->mdev); 65 65 66 + mlx5_drain_health_wq(sf_dev->mdev); 66 67 devlink_unregister(devlink); 67 68 mlx5_uninit_one(sf_dev->mdev); 68 69 iounmap(sf_dev->mdev->iseg);
+3
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ptrn.c
··· 213 213 } 214 214 215 215 INIT_LIST_HEAD(&mgr->ptrn_list); 216 + mutex_init(&mgr->modify_hdr_mutex); 217 + 216 218 return mgr; 217 219 218 220 free_mgr: ··· 239 237 } 240 238 241 239 mlx5dr_icm_pool_destroy(mgr->ptrn_icm_pool); 240 + mutex_destroy(&mgr->modify_hdr_mutex); 242 241 kfree(mgr); 243 242 }
+7 -6
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
··· 245 245 246 246 skb = priv->rx_skb[rx_pi_rem]; 247 247 248 - skb_put(skb, datalen); 249 - 250 - skb->ip_summed = CHECKSUM_NONE; /* device did not checksum packet */ 251 - 252 - skb->protocol = eth_type_trans(skb, netdev); 253 - 254 248 /* Alloc another RX SKB for this same index */ 255 249 rx_skb = mlxbf_gige_alloc_skb(priv, MLXBF_GIGE_DEFAULT_BUF_SZ, 256 250 &rx_buf_dma, DMA_FROM_DEVICE); ··· 253 259 priv->rx_skb[rx_pi_rem] = rx_skb; 254 260 dma_unmap_single(priv->dev, *rx_wqe_addr, 255 261 MLXBF_GIGE_DEFAULT_BUF_SZ, DMA_FROM_DEVICE); 262 + 263 + skb_put(skb, datalen); 264 + 265 + skb->ip_summed = CHECKSUM_NONE; /* device did not checksum packet */ 266 + 267 + skb->protocol = eth_type_trans(skb, netdev); 268 + 256 269 *rx_wqe_addr = rx_buf_dma; 257 270 } else if (rx_cqe & MLXBF_GIGE_RX_CQE_PKT_STATUS_MAC_ERR) { 258 271 priv->stats.rx_mac_errors++;
-10
drivers/net/ethernet/microsoft/mana/mana_en.c
··· 1279 1279 if (comp_read < 1) 1280 1280 return; 1281 1281 1282 - apc->eth_stats.tx_cqes = comp_read; 1283 - 1284 1282 for (i = 0; i < comp_read; i++) { 1285 1283 struct mana_tx_comp_oob *cqe_oob; 1286 1284 ··· 1361 1363 WARN_ON_ONCE(1); 1362 1364 1363 1365 cq->work_done = pkt_transmitted; 1364 - 1365 - apc->eth_stats.tx_cqes -= pkt_transmitted; 1366 1366 } 1367 1367 1368 1368 static void mana_post_pkt_rxq(struct mana_rxq *rxq) ··· 1622 1626 { 1623 1627 struct gdma_comp *comp = cq->gdma_comp_buf; 1624 1628 struct mana_rxq *rxq = cq->rxq; 1625 - struct mana_port_context *apc; 1626 1629 int comp_read, i; 1627 - 1628 - apc = netdev_priv(rxq->ndev); 1629 1630 1630 1631 comp_read = mana_gd_poll_cq(cq->gdma_cq, comp, CQE_POLLING_BUFFER); 1631 1632 WARN_ON_ONCE(comp_read > CQE_POLLING_BUFFER); 1632 1633 1633 - apc->eth_stats.rx_cqes = comp_read; 1634 1634 rxq->xdp_flush = false; 1635 1635 1636 1636 for (i = 0; i < comp_read; i++) { ··· 1638 1646 return; 1639 1647 1640 1648 mana_process_rx_cqe(rxq, cq, &comp[i]); 1641 - 1642 - apc->eth_stats.rx_cqes--; 1643 1649 } 1644 1650 1645 1651 if (rxq->xdp_flush)
-2
drivers/net/ethernet/microsoft/mana/mana_ethtool.c
··· 13 13 } mana_eth_stats[] = { 14 14 {"stop_queue", offsetof(struct mana_ethtool_stats, stop_queue)}, 15 15 {"wake_queue", offsetof(struct mana_ethtool_stats, wake_queue)}, 16 - {"tx_cqes", offsetof(struct mana_ethtool_stats, tx_cqes)}, 17 16 {"tx_cq_err", offsetof(struct mana_ethtool_stats, tx_cqe_err)}, 18 17 {"tx_cqe_unknown_type", offsetof(struct mana_ethtool_stats, 19 18 tx_cqe_unknown_type)}, 20 - {"rx_cqes", offsetof(struct mana_ethtool_stats, rx_cqes)}, 21 19 {"rx_coalesced_err", offsetof(struct mana_ethtool_stats, 22 20 rx_coalesced_err)}, 23 21 {"rx_cqe_unknown_type", offsetof(struct mana_ethtool_stats,
+1 -1
drivers/net/ethernet/renesas/rswitch.c
··· 1485 1485 1486 1486 if (rswitch_get_num_cur_queues(gq) >= gq->ring_size - 1) { 1487 1487 netif_stop_subqueue(ndev, 0); 1488 - return ret; 1488 + return NETDEV_TX_BUSY; 1489 1489 } 1490 1490 1491 1491 if (skb_put_padto(skb, ETH_ZLEN))
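The rswitch.c fix replaces `return ret` (a stale local) with `NETDEV_TX_BUSY` when the TX ring is full, so the stack requeues the skb instead of treating it as sent. A hedged sketch with stand-in return codes:

```c
#include <assert.h>

/* Hypothetical sketch of the rswitch fix: a full ring must report
 * "busy" so the core requeues the packet; the stand-in values below
 * mirror NETDEV_TX_OK / NETDEV_TX_BUSY, not the real constants. */
enum { TX_OK = 0x00, TX_BUSY = 0x10 };

static int start_xmit(int queued, int ring_size)
{
    if (queued >= ring_size - 1)
        return TX_BUSY;   /* stop the queue and ask the stack to retry */
    return TX_OK;         /* descriptor posted */
}
```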
+12 -15
drivers/net/ethernet/sfc/tc.c
··· 687 687 if (!found) { /* We don't care. */ 688 688 netif_dbg(efx, drv, efx->net_dev, 689 689 "Ignoring foreign filter that doesn't egdev us\n"); 690 - rc = -EOPNOTSUPP; 691 - goto release; 690 + return -EOPNOTSUPP; 692 691 } 693 692 694 693 rc = efx_mae_match_check_caps(efx, &match.mask, NULL); 695 694 if (rc) 696 - goto release; 695 + return rc; 697 696 698 697 if (efx_tc_match_is_encap(&match.mask)) { 699 698 enum efx_encap_type type; ··· 701 702 if (type == EFX_ENCAP_TYPE_NONE) { 702 703 NL_SET_ERR_MSG_MOD(extack, 703 704 "Egress encap match on unsupported tunnel device"); 704 - rc = -EOPNOTSUPP; 705 - goto release; 705 + return -EOPNOTSUPP; 706 706 } 707 707 708 708 rc = efx_mae_check_encap_type_supported(efx, type); ··· 709 711 NL_SET_ERR_MSG_FMT_MOD(extack, 710 712 "Firmware reports no support for %s encap match", 711 713 efx_tc_encap_type_name(type)); 712 - goto release; 714 + return rc; 713 715 } 714 716 715 717 rc = efx_tc_flower_record_encap_match(efx, &match, type, 716 718 EFX_TC_EM_DIRECT, 0, 0, 717 719 extack); 718 720 if (rc) 719 - goto release; 721 + return rc; 720 722 } else { 721 723 /* This is not a tunnel decap rule, ignore it */ 722 724 netif_dbg(efx, drv, efx->net_dev, 723 725 "Ignoring foreign filter without encap match\n"); 724 - rc = -EOPNOTSUPP; 725 - goto release; 726 + return -EOPNOTSUPP; 726 727 } 727 728 728 729 rule = kzalloc(sizeof(*rule), GFP_USER); 729 730 if (!rule) { 730 731 rc = -ENOMEM; 731 - goto release; 732 + goto out_free; 732 733 } 733 734 INIT_LIST_HEAD(&rule->acts.list); 734 735 rule->cookie = tc->cookie; ··· 739 742 "Ignoring already-offloaded rule (cookie %lx)\n", 740 743 tc->cookie); 741 744 rc = -EEXIST; 742 - goto release; 745 + goto out_free; 743 746 } 744 747 745 748 act = kzalloc(sizeof(*act), GFP_USER); ··· 904 907 efx_tc_match_action_ht_params); 905 908 efx_tc_free_action_set_list(efx, &rule->acts, false); 906 909 } 910 + out_free: 907 911 kfree(rule); 908 912 if (match.encap) 909 913 
efx_tc_flower_release_encap_match(efx, match.encap); ··· 961 963 return rc; 962 964 if (efx_tc_match_is_encap(&match.mask)) { 963 965 NL_SET_ERR_MSG_MOD(extack, "Ingress enc_key matches not supported"); 964 - rc = -EOPNOTSUPP; 965 - goto release; 966 + return -EOPNOTSUPP; 966 967 } 967 968 968 969 if (tc->common.chain_index) { ··· 985 988 if (old) { 986 989 netif_dbg(efx, drv, efx->net_dev, 987 990 "Already offloaded rule (cookie %lx)\n", tc->cookie); 988 - rc = -EEXIST; 989 991 NL_SET_ERR_MSG_MOD(extack, "Rule already offloaded"); 990 - goto release; 992 + kfree(rule); 993 + return -EEXIST; 991 994 } 992 995 993 996 /* Parse actions */
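The sfc/tc.c hunk (622ab656344a, "sfc: fix error unwinds in TC offload") restructures the unwind: failures before anything is acquired return directly, while failures after the rule allocation jump to a label that frees only what exists. A minimal sketch of that staged-unwind discipline (names and error values are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of the sfc unwind fix: never release a resource
 * on a path that runs before it was taken. Early failures return;
 * late failures goto a label scoped to what was actually allocated. */
static int offload_rule(int match_fails, int insert_fails, char **rule_out)
{
    int rc;

    if (match_fails)
        return -95;        /* -EOPNOTSUPP: nothing allocated yet */

    char *rule = malloc(32);
    if (!rule)
        return -12;        /* -ENOMEM */

    if (insert_fails) {
        rc = -17;          /* -EEXIST: already offloaded */
        goto out_free;     /* unwind exactly the rule allocation */
    }

    *rule_out = rule;      /* success: caller owns the rule */
    return 0;

out_free:
    free(rule);
    return rc;
}
```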
+1 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 7233 7233 ndev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | 7234 7234 NETIF_F_RXCSUM; 7235 7235 ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | 7236 - NETDEV_XDP_ACT_XSK_ZEROCOPY | 7237 - NETDEV_XDP_ACT_NDO_XMIT; 7236 + NETDEV_XDP_ACT_XSK_ZEROCOPY; 7238 7237 7239 7238 ret = stmmac_tc_init(priv, priv); 7240 7239 if (!ret) {
+6
drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
··· 117 117 return -EOPNOTSUPP; 118 118 } 119 119 120 + if (!prog) 121 + xdp_features_clear_redirect_target(dev); 122 + 120 123 need_update = !!priv->xdp_prog != !!prog; 121 124 if (if_running && need_update) 122 125 stmmac_xdp_release(dev); ··· 133 130 134 131 if (if_running && need_update) 135 132 stmmac_xdp_open(dev); 133 + 134 + if (prog) 135 + xdp_features_set_redirect_target(dev, false); 136 136 137 137 return 0; 138 138 }
+1 -1
drivers/net/ipa/ipa_endpoint.c
··· 119 119 }; 120 120 121 121 /* Size in bytes of an IPA packet status structure */ 122 - #define IPA_STATUS_SIZE sizeof(__le32[4]) 122 + #define IPA_STATUS_SIZE sizeof(__le32[8]) 123 123 124 124 /* IPA status structure decoder; looks up field values for a structure */ 125 125 static u32 ipa_status_extract(struct ipa *ipa, const void *data,
+3 -13
drivers/net/phy/mxl-gpy.c
··· 274 274 return ret < 0 ? ret : 0; 275 275 } 276 276 277 - static bool gpy_has_broken_mdint(struct phy_device *phydev) 278 - { 279 - /* At least these PHYs are known to have broken interrupt handling */ 280 - return phydev->drv->phy_id == PHY_ID_GPY215B || 281 - phydev->drv->phy_id == PHY_ID_GPY215C; 282 - } 283 - 284 277 static int gpy_probe(struct phy_device *phydev) 285 278 { 286 279 struct device *dev = &phydev->mdio.dev; ··· 293 300 phydev->priv = priv; 294 301 mutex_init(&priv->mbox_lock); 295 302 296 - if (gpy_has_broken_mdint(phydev) && 297 - !device_property_present(dev, "maxlinear,use-broken-interrupts")) 303 + if (!device_property_present(dev, "maxlinear,use-broken-interrupts")) 298 304 phydev->dev_flags |= PHY_F_NO_IRQ; 299 305 300 306 fw_version = phy_read(phydev, PHY_FWV); ··· 651 659 * frame. Therefore, polling is the best we can do and won't do any more 652 660 * harm. 653 661 * It was observed that this bug happens on link state and link speed 654 - * changes on a GPY215B and GYP215C independent of the firmware version 655 - * (which doesn't mean that this list is exhaustive). 662 + * changes independent of the firmware version. 656 663 */ 657 - if (gpy_has_broken_mdint(phydev) && 658 - (reg & (PHY_IMASK_LSTC | PHY_IMASK_LSPC))) { 664 + if (reg & (PHY_IMASK_LSTC | PHY_IMASK_LSPC)) { 659 665 reg = gpy_mbox_read(phydev, REG_GPIO0_OUT); 660 666 if (reg < 0) { 661 667 phy_error(phydev);
+1 -1
drivers/net/usb/qmi_wwan.c
··· 1325 1325 {QMI_FIXED_INTF(0x2001, 0x7e3d, 4)}, /* D-Link DWM-222 A2 */ 1326 1326 {QMI_FIXED_INTF(0x2020, 0x2031, 4)}, /* Olicard 600 */ 1327 1327 {QMI_FIXED_INTF(0x2020, 0x2033, 4)}, /* BroadMobi BM806U */ 1328 - {QMI_FIXED_INTF(0x2020, 0x2060, 4)}, /* BroadMobi BM818 */ 1328 + {QMI_QUIRK_SET_DTR(0x2020, 0x2060, 4)}, /* BroadMobi BM818 */ 1329 1329 {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ 1330 1330 {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ 1331 1331 {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */
-4
drivers/nfc/nfcsim.c
··· 336 336 static void nfcsim_debugfs_init(void) 337 337 { 338 338 nfcsim_debugfs_root = debugfs_create_dir("nfcsim", NULL); 339 - 340 - if (!nfcsim_debugfs_root) 341 - pr_err("Could not create debugfs entry\n"); 342 - 343 339 } 344 340 345 341 static void nfcsim_debugfs_remove(void)
+2
drivers/nvme/host/pci.c
··· 3424 3424 .driver_data = NVME_QUIRK_BOGUS_NID, }, 3425 3425 { PCI_DEVICE(0x1e4B, 0x1202), /* MAXIO MAP1202 */ 3426 3426 .driver_data = NVME_QUIRK_BOGUS_NID, }, 3427 + { PCI_DEVICE(0x1e4B, 0x1602), /* MAXIO MAP1602 */ 3428 + .driver_data = NVME_QUIRK_BOGUS_NID, }, 3427 3429 { PCI_DEVICE(0x1cc1, 0x5350), /* ADATA XPG GAMMIX S50 */ 3428 3430 .driver_data = NVME_QUIRK_BOGUS_NID, }, 3429 3431 { PCI_DEVICE(0x1dbe, 0x5236), /* ADATA XPG GAMMIX S70 */
+7 -2
drivers/pci/quirks.c
··· 6003 6003 6004 6004 #ifdef CONFIG_PCIE_DPC 6005 6005 /* 6006 - * Intel Tiger Lake and Alder Lake BIOS has a bug that clears the DPC 6007 - * RP PIO Log Size of the integrated Thunderbolt PCIe Root Ports. 6006 + * Intel Ice Lake, Tiger Lake and Alder Lake BIOS has a bug that clears 6007 + * the DPC RP PIO Log Size of the integrated Thunderbolt PCIe Root 6008 + * Ports. 6008 6009 */ 6009 6010 static void dpc_log_size(struct pci_dev *dev) 6010 6011 { ··· 6028 6027 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x462f, dpc_log_size); 6029 6028 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x463f, dpc_log_size); 6030 6029 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x466e, dpc_log_size); 6030 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a1d, dpc_log_size); 6031 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a1f, dpc_log_size); 6032 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a21, dpc_log_size); 6033 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8a23, dpc_log_size); 6031 6034 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a23, dpc_log_size); 6032 6035 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a25, dpc_log_size); 6033 6036 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a27, dpc_log_size);
+1 -1
drivers/phy/amlogic/phy-meson-g12a-mipi-dphy-analog.c
··· 70 70 HHI_MIPI_CNTL1_BANDGAP); 71 71 72 72 regmap_write(priv->regmap, HHI_MIPI_CNTL2, 73 - FIELD_PREP(HHI_MIPI_CNTL2_DIF_TX_CTL0, 0x459) | 73 + FIELD_PREP(HHI_MIPI_CNTL2_DIF_TX_CTL0, 0x45a) | 74 74 FIELD_PREP(HHI_MIPI_CNTL2_DIF_TX_CTL1, 0x2680)); 75 75 76 76 reg = DSI_LANE_CLK;
+5 -5
drivers/phy/mediatek/phy-mtk-hdmi-mt8195.c
··· 237 237 */ 238 238 if (tmds_clk < 54 * MEGA) 239 239 txposdiv = 8; 240 - else if (tmds_clk >= 54 * MEGA && tmds_clk < 148.35 * MEGA) 240 + else if (tmds_clk >= 54 * MEGA && (tmds_clk * 100) < 14835 * MEGA) 241 241 txposdiv = 4; 242 - else if (tmds_clk >= 148.35 * MEGA && tmds_clk < 296.7 * MEGA) 242 + else if ((tmds_clk * 100) >= 14835 * MEGA && (tmds_clk * 10) < 2967 * MEGA) 243 243 txposdiv = 2; 244 - else if (tmds_clk >= 296.7 * MEGA && tmds_clk <= 594 * MEGA) 244 + else if ((tmds_clk * 10) >= 2967 * MEGA && tmds_clk <= 594 * MEGA) 245 245 txposdiv = 1; 246 246 else 247 247 return -EINVAL; ··· 324 324 clk_channel_bias = 0x34; /* 20mA */ 325 325 impedance_en = 0xf; 326 326 impedance = 0x36; /* 100ohm */ 327 - } else if (pixel_clk >= 74.175 * MEGA && pixel_clk <= 300 * MEGA) { 327 + } else if (((u64)pixel_clk * 1000) >= 74175 * MEGA && pixel_clk <= 300 * MEGA) { 328 328 data_channel_bias = 0x34; /* 20mA */ 329 329 clk_channel_bias = 0x2c; /* 16mA */ 330 330 impedance_en = 0xf; 331 331 impedance = 0x36; /* 100ohm */ 332 - } else if (pixel_clk >= 27 * MEGA && pixel_clk < 74.175 * MEGA) { 332 + } else if (pixel_clk >= 27 * MEGA && ((u64)pixel_clk * 1000) < 74175 * MEGA) { 333 333 data_channel_bias = 0x14; /* 10mA */ 334 334 clk_channel_bias = 0x14; /* 10mA */ 335 335 impedance_en = 0x0;
+3 -2
drivers/phy/qualcomm/phy-qcom-qmp-combo.c
··· 2472 2472 ret = regulator_bulk_enable(cfg->num_vregs, qmp->vregs); 2473 2473 if (ret) { 2474 2474 dev_err(qmp->dev, "failed to enable regulators, err=%d\n", ret); 2475 - goto err_unlock; 2475 + goto err_decrement_count; 2476 2476 } 2477 2477 2478 2478 ret = reset_control_bulk_assert(cfg->num_resets, qmp->resets); ··· 2522 2522 reset_control_bulk_assert(cfg->num_resets, qmp->resets); 2523 2523 err_disable_regulators: 2524 2524 regulator_bulk_disable(cfg->num_vregs, qmp->vregs); 2525 - err_unlock: 2525 + err_decrement_count: 2526 + qmp->init_count--; 2526 2527 mutex_unlock(&qmp->phy_mutex); 2527 2528 2528 2529 return ret;
+3 -2
drivers/phy/qualcomm/phy-qcom-qmp-pcie-msm8996.c
··· 379 379 ret = regulator_bulk_enable(cfg->num_vregs, qmp->vregs); 380 380 if (ret) { 381 381 dev_err(qmp->dev, "failed to enable regulators, err=%d\n", ret); 382 - goto err_unlock; 382 + goto err_decrement_count; 383 383 } 384 384 385 385 ret = reset_control_bulk_assert(cfg->num_resets, qmp->resets); ··· 409 409 reset_control_bulk_assert(cfg->num_resets, qmp->resets); 410 410 err_disable_regulators: 411 411 regulator_bulk_disable(cfg->num_vregs, qmp->vregs); 412 - err_unlock: 412 + err_decrement_count: 413 + qmp->init_count--; 413 414 mutex_unlock(&qmp->phy_mutex); 414 415 415 416 return ret;
+1 -1
drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
··· 115 115 * 116 116 * @cfg_ahb_clk: AHB2PHY interface clock 117 117 * @ref_clk: phy reference clock 118 - * @iface_clk: phy interface clock 119 118 * @phy_reset: phy reset control 120 119 * @vregs: regulator supplies bulk data 121 120 * @phy_initialized: if PHY has been initialized correctly 122 121 * @mode: contains the current mode the PHY is in 122 + * @update_seq_cfg: tuning parameters for phy init 123 123 */ 124 124 struct qcom_snps_hsphy { 125 125 struct phy *phy;
+3 -1
drivers/tee/optee/smc_abi.c
··· 1004 1004 1005 1005 invoke_fn(OPTEE_SMC_GET_ASYNC_NOTIF_VALUE, 0, 0, 0, 0, 0, 0, 0, &res); 1006 1006 1007 - if (res.a0) 1007 + if (res.a0) { 1008 + *value_valid = false; 1008 1009 return 0; 1010 + } 1009 1011 *value_valid = (res.a2 & OPTEE_SMC_ASYNC_NOTIF_VALUE_VALID); 1010 1012 *value_pending = (res.a2 & OPTEE_SMC_ASYNC_NOTIF_VALUE_PENDING); 1011 1013 return res.a1;
+2 -2
drivers/thermal/intel/int340x_thermal/int3400_thermal.c
··· 131 131 132 132 for (i = 0; i < INT3400_THERMAL_MAXIMUM_UUID; i++) { 133 133 if (priv->uuid_bitmap & (1 << i)) 134 - length += sysfs_emit_at(buf, length, int3400_thermal_uuids[i]); 134 + length += sysfs_emit_at(buf, length, "%s\n", int3400_thermal_uuids[i]); 135 135 } 136 136 137 137 return length; ··· 149 149 150 150 for (i = 0; i <= INT3400_THERMAL_CRITICAL; i++) { 151 151 if (priv->os_uuid_mask & BIT(i)) 152 - length += sysfs_emit_at(buf, length, int3400_thermal_uuids[i]); 152 + length += sysfs_emit_at(buf, length, "%s\n", int3400_thermal_uuids[i]); 153 153 } 154 154 155 155 if (length)
+5
drivers/vfio/vfio_iommu_type1.c
··· 860 860 if (ret) 861 861 goto pin_unwind; 862 862 863 + if (!pfn_valid(phys_pfn)) { 864 + ret = -EINVAL; 865 + goto pin_unwind; 866 + } 867 + 863 868 ret = vfio_add_to_pfn_list(dma, iova, phys_pfn); 864 869 if (ret) { 865 870 if (put_pfn(phys_pfn, dma->prot) && do_accounting)
+5 -17
drivers/vhost/vhost.c
··· 256 256 * test_and_set_bit() implies a memory barrier. 257 257 */ 258 258 llist_add(&work->node, &dev->worker->work_list); 259 - wake_up_process(dev->worker->vtsk->task); 259 + vhost_task_wake(dev->worker->vtsk); 260 260 } 261 261 } 262 262 EXPORT_SYMBOL_GPL(vhost_work_queue); ··· 333 333 __vhost_vq_meta_reset(vq); 334 334 } 335 335 336 - static int vhost_worker(void *data) 336 + static bool vhost_worker(void *data) 337 337 { 338 338 struct vhost_worker *worker = data; 339 339 struct vhost_work *work, *work_next; 340 340 struct llist_node *node; 341 341 342 - for (;;) { 343 - /* mb paired w/ kthread_stop */ 344 - set_current_state(TASK_INTERRUPTIBLE); 345 - 346 - if (vhost_task_should_stop(worker->vtsk)) { 347 - __set_current_state(TASK_RUNNING); 348 - break; 349 - } 350 - 351 - node = llist_del_all(&worker->work_list); 352 - if (!node) 353 - schedule(); 354 - 342 + node = llist_del_all(&worker->work_list); 343 + if (node) { 355 344 node = llist_reverse_order(node); 356 345 /* make sure flag is seen after deletion */ 357 346 smp_wmb(); 358 347 llist_for_each_entry_safe(work, work_next, node, node) { 359 348 clear_bit(VHOST_WORK_QUEUED, &work->flags); 360 - __set_current_state(TASK_RUNNING); 361 349 kcov_remote_start_common(worker->kcov_handle); 362 350 work->fn(work); 363 351 kcov_remote_stop(); ··· 353 365 } 354 366 } 355 367 356 - return 0; 368 + return !!node; 357 369 } 358 370 359 371 static void vhost_vq_free_iovecs(struct vhost_virtqueue *vq)
+4 -5
drivers/xen/pvcalls-back.c
··· 325 325 void *page; 326 326 327 327 map = kzalloc(sizeof(*map), GFP_KERNEL); 328 - if (map == NULL) 328 + if (map == NULL) { 329 + sock_release(sock); 329 330 return NULL; 331 + } 330 332 331 333 map->fedata = fedata; 332 334 map->sock = sock; ··· 420 418 req->u.connect.ref, 421 419 req->u.connect.evtchn, 422 420 sock); 423 - if (!map) { 421 + if (!map) 424 422 ret = -EFAULT; 425 - sock_release(sock); 426 - } 427 423 428 424 out: 429 425 rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++); ··· 561 561 sock); 562 562 if (!map) { 563 563 ret = -EFAULT; 564 - sock_release(sock); 565 564 goto out_error; 566 565 } 567 566
+1 -8
fs/Kconfig
··· 368 368 source "net/sunrpc/Kconfig" 369 369 source "fs/ceph/Kconfig" 370 370 371 - source "fs/cifs/Kconfig" 372 - source "fs/ksmbd/Kconfig" 373 - 374 - config SMBFS_COMMON 375 - tristate 376 - default y if CIFS=y || SMB_SERVER=y 377 - default m if CIFS=m || SMB_SERVER=m 378 - 371 + source "fs/smb/Kconfig" 379 372 source "fs/coda/Kconfig" 380 373 source "fs/afs/Kconfig" 381 374 source "fs/9p/Kconfig"
+1 -3
fs/Makefile
··· 95 95 obj-$(CONFIG_NLS) += nls/ 96 96 obj-y += unicode/ 97 97 obj-$(CONFIG_SYSV_FS) += sysv/ 98 - obj-$(CONFIG_SMBFS_COMMON) += smbfs_common/ 99 - obj-$(CONFIG_CIFS) += cifs/ 100 - obj-$(CONFIG_SMB_SERVER) += ksmbd/ 98 + obj-$(CONFIG_SMBFS) += smb/ 101 99 obj-$(CONFIG_HPFS_FS) += hpfs/ 102 100 obj-$(CONFIG_NTFS_FS) += ntfs/ 103 101 obj-$(CONFIG_NTFS3_FS) += ntfs3/
+1 -1
fs/btrfs/bio.c
··· 330 330 if (bbio->inode && !(bbio->bio.bi_opf & REQ_META)) 331 331 btrfs_check_read_bio(bbio, bbio->bio.bi_private); 332 332 else 333 - bbio->end_io(bbio); 333 + btrfs_orig_bbio_end_io(bbio); 334 334 } 335 335 336 336 static void btrfs_simple_end_io(struct bio *bio)
+12 -2
fs/btrfs/block-group.c
··· 2818 2818 } 2819 2819 2820 2820 ret = inc_block_group_ro(cache, 0); 2821 - if (!do_chunk_alloc || ret == -ETXTBSY) 2822 - goto unlock_out; 2823 2821 if (!ret) 2824 2822 goto out; 2823 + if (ret == -ETXTBSY) 2824 + goto unlock_out; 2825 + 2826 + /* 2827 + * Skip chunk allocation if the bg is SYSTEM, this is to avoid system 2828 + * chunk allocation storm to exhaust the system chunk array. Otherwise 2829 + * we still want to try our best to mark the block group read-only. 2830 + */ 2831 + if (!do_chunk_alloc && ret == -ENOSPC && 2832 + (cache->flags & BTRFS_BLOCK_GROUP_SYSTEM)) 2833 + goto unlock_out; 2834 + 2825 2835 alloc_flags = btrfs_get_alloc_profile(fs_info, cache->space_info->flags); 2826 2836 ret = btrfs_chunk_alloc(trans, alloc_flags, CHUNK_ALLOC_FORCE); 2827 2837 if (ret < 0)
+10 -1
fs/btrfs/disk-io.c
··· 96 96 crypto_shash_update(shash, kaddr + BTRFS_CSUM_SIZE, 97 97 first_page_part - BTRFS_CSUM_SIZE); 98 98 99 - for (i = 1; i < num_pages; i++) { 99 + for (i = 1; i < num_pages && INLINE_EXTENT_BUFFER_PAGES > 1; i++) { 100 100 kaddr = page_address(buf->pages[i]); 101 101 crypto_shash_update(shash, kaddr, PAGE_SIZE); 102 102 } ··· 4936 4936 */ 4937 4937 inode = igrab(&btrfs_inode->vfs_inode); 4938 4938 if (inode) { 4939 + unsigned int nofs_flag; 4940 + 4941 + nofs_flag = memalloc_nofs_save(); 4939 4942 invalidate_inode_pages2(inode->i_mapping); 4943 + memalloc_nofs_restore(nofs_flag); 4940 4944 iput(inode); 4941 4945 } 4942 4946 spin_lock(&root->delalloc_lock); ··· 5046 5042 5047 5043 inode = cache->io_ctl.inode; 5048 5044 if (inode) { 5045 + unsigned int nofs_flag; 5046 + 5047 + nofs_flag = memalloc_nofs_save(); 5049 5048 invalidate_inode_pages2(inode->i_mapping); 5049 + memalloc_nofs_restore(nofs_flag); 5050 + 5050 5051 BTRFS_I(inode)->generation = 0; 5051 5052 cache->io_ctl.inode = NULL; 5052 5053 iput(inode);
+3 -1
fs/btrfs/file-item.c
··· 792 792 sums = kvzalloc(btrfs_ordered_sum_size(fs_info, 793 793 bytes_left), GFP_KERNEL); 794 794 memalloc_nofs_restore(nofs_flag); 795 - BUG_ON(!sums); /* -ENOMEM */ 795 + if (!sums) 796 + return BLK_STS_RESOURCE; 797 + 796 798 sums->len = bytes_left; 797 799 ordered = btrfs_lookup_ordered_extent(inode, 798 800 offset);
+8 -1
fs/btrfs/scrub.c
··· 2518 2518 2519 2519 if (ret == 0) { 2520 2520 ro_set = 1; 2521 - } else if (ret == -ENOSPC && !sctx->is_dev_replace) { 2521 + } else if (ret == -ENOSPC && !sctx->is_dev_replace && 2522 + !(cache->flags & BTRFS_BLOCK_GROUP_RAID56_MASK)) { 2522 2523 /* 2523 2524 * btrfs_inc_block_group_ro return -ENOSPC when it 2524 2525 * failed in creating new chunk for metadata. 2525 2526 * It is not a problem for scrub, because 2526 2527 * metadata are always cowed, and our scrub paused 2527 2528 * commit_transactions. 2529 + * 2530 + * For RAID56 chunks, we have to mark them read-only 2531 + * for scrub, as later we would use our own cache 2532 + * out of RAID56 realm. 2533 + * Thus we want the RAID56 bg to be marked RO to 2534 + * prevent RMW from screwing up our cache. 2528 2535 */ 2529 2536 ro_set = 0; 2530 2537 } else if (ret == -ETXTBSY) {
+1 -1
fs/btrfs/tree-log.c
··· 6158 6158 { 6159 6159 struct btrfs_root *log = inode->root->log_root; 6160 6160 const struct btrfs_delayed_item *curr; 6161 - u64 last_range_start; 6161 + u64 last_range_start = 0; 6162 6162 u64 last_range_end = 0; 6163 6163 struct btrfs_key key; 6164 6164
fs/cifs/Kconfig fs/smb/client/Kconfig
fs/cifs/Makefile fs/smb/client/Makefile
fs/cifs/asn1.c fs/smb/client/asn1.c
fs/cifs/cached_dir.c fs/smb/client/cached_dir.c
fs/cifs/cached_dir.h fs/smb/client/cached_dir.h
+6 -2
fs/cifs/cifs_debug.c fs/smb/client/cifs_debug.c
··· 108 108 if ((tcon->seal) || 109 109 (tcon->ses->session_flags & SMB2_SESSION_FLAG_ENCRYPT_DATA) || 110 110 (tcon->share_flags & SHI1005_FLAGS_ENCRYPT_DATA)) 111 - seq_printf(m, " Encrypted"); 111 + seq_puts(m, " encrypted"); 112 112 if (tcon->nocase) 113 113 seq_printf(m, " nocase"); 114 114 if (tcon->unix_ext) ··· 415 415 416 416 /* dump session id helpful for use with network trace */ 417 417 seq_printf(m, " SessionId: 0x%llx", ses->Suid); 418 - if (ses->session_flags & SMB2_SESSION_FLAG_ENCRYPT_DATA) 418 + if (ses->session_flags & SMB2_SESSION_FLAG_ENCRYPT_DATA) { 419 419 seq_puts(m, " encrypted"); 420 + /* can help in debugging to show encryption type */ 421 + if (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM) 422 + seq_puts(m, "(gcm256)"); 423 + } 420 424 if (ses->sign) 421 425 seq_puts(m, " signed"); 422 426
fs/cifs/cifs_debug.h fs/smb/client/cifs_debug.h
fs/cifs/cifs_dfs_ref.c fs/smb/client/cifs_dfs_ref.c
fs/cifs/cifs_fs_sb.h fs/smb/client/cifs_fs_sb.h
fs/cifs/cifs_ioctl.h fs/smb/client/cifs_ioctl.h
fs/cifs/cifs_spnego.c fs/smb/client/cifs_spnego.c
fs/cifs/cifs_spnego.h fs/smb/client/cifs_spnego.h
fs/cifs/cifs_spnego_negtokeninit.asn1 fs/smb/client/cifs_spnego_negtokeninit.asn1
fs/cifs/cifs_swn.c fs/smb/client/cifs_swn.c
fs/cifs/cifs_swn.h fs/smb/client/cifs_swn.h
fs/cifs/cifs_unicode.c fs/smb/client/cifs_unicode.c
fs/cifs/cifs_unicode.h fs/smb/client/cifs_unicode.h
fs/cifs/cifs_uniupr.h fs/smb/client/cifs_uniupr.h
fs/cifs/cifsacl.c fs/smb/client/cifsacl.c
fs/cifs/cifsacl.h fs/smb/client/cifsacl.h
+1 -1
fs/cifs/cifsencrypt.c fs/smb/client/cifsencrypt.c
··· 21 21 #include <linux/random.h> 22 22 #include <linux/highmem.h> 23 23 #include <linux/fips.h> 24 - #include "../smbfs_common/arc4.h" 24 + #include "../common/arc4.h" 25 25 #include <crypto/aead.h> 26 26 27 27 /*
fs/cifs/cifsfs.c fs/smb/client/cifsfs.c
fs/cifs/cifsfs.h fs/smb/client/cifsfs.h
+1 -1
fs/cifs/cifsglob.h fs/smb/client/cifsglob.h
··· 24 24 #include "cifsacl.h" 25 25 #include <crypto/internal/hash.h> 26 26 #include <uapi/linux/cifs/cifs_mount.h> 27 - #include "../smbfs_common/smb2pdu.h" 27 + #include "../common/smb2pdu.h" 28 28 #include "smb2pdu.h" 29 29 #include <linux/filelock.h> 30 30
+1 -1
fs/cifs/cifspdu.h fs/smb/client/cifspdu.h
··· 11 11 12 12 #include <net/sock.h> 13 13 #include <asm/unaligned.h> 14 - #include "../smbfs_common/smbfsctl.h" 14 + #include "../common/smbfsctl.h" 15 15 16 16 #define CIFS_PROT 0 17 17 #define POSIX_PROT (CIFS_PROT+1)
fs/cifs/cifsproto.h fs/smb/client/cifsproto.h
fs/cifs/cifsroot.c fs/smb/client/cifsroot.c
fs/cifs/cifssmb.c fs/smb/client/cifssmb.c
fs/cifs/connect.c fs/smb/client/connect.c
+1 -1
fs/cifs/dfs.c fs/smb/client/dfs.c
··· 303 303 if (!nodfs) { 304 304 rc = dfs_get_referral(mnt_ctx, ctx->UNC + 1, NULL, NULL); 305 305 if (rc) { 306 - if (rc != -ENOENT && rc != -EOPNOTSUPP) 306 + if (rc != -ENOENT && rc != -EOPNOTSUPP && rc != -EIO) 307 307 goto out; 308 308 nodfs = true; 309 309 }
fs/cifs/dfs.h fs/smb/client/dfs.h
fs/cifs/dfs_cache.c fs/smb/client/dfs_cache.c
fs/cifs/dfs_cache.h fs/smb/client/dfs_cache.h
fs/cifs/dir.c fs/smb/client/dir.c
fs/cifs/dns_resolve.c fs/smb/client/dns_resolve.c
fs/cifs/dns_resolve.h fs/smb/client/dns_resolve.h
fs/cifs/export.c fs/smb/client/export.c
+2 -1
fs/cifs/file.c fs/smb/client/file.c
··· 3353 3353 while (n && ix < nbv) { 3354 3354 len = min3(n, bvecs[ix].bv_len - skip, max_size); 3355 3355 span += len; 3356 + max_size -= len; 3356 3357 nsegs++; 3357 3358 ix++; 3358 - if (span >= max_size || nsegs >= max_segs) 3359 + if (max_size == 0 || nsegs >= max_segs) 3359 3360 break; 3360 3361 skip = 0; 3361 3362 n -= len;
+8
fs/cifs/fs_context.c fs/smb/client/fs_context.c
··· 904 904 ctx->sfu_remap = false; /* disable SFU mapping */ 905 905 } 906 906 break; 907 + case Opt_mapchars: 908 + if (result.negated) 909 + ctx->sfu_remap = false; 910 + else { 911 + ctx->sfu_remap = true; 912 + ctx->remap = false; /* disable SFM (mapposix) mapping */ 913 + } 914 + break; 907 915 case Opt_user_xattr: 908 916 if (result.negated) 909 917 ctx->no_xattr = 1;
fs/cifs/fs_context.h fs/smb/client/fs_context.h
fs/cifs/fscache.c fs/smb/client/fscache.c
fs/cifs/fscache.h fs/smb/client/fscache.h
fs/cifs/inode.c fs/smb/client/inode.c
+5 -1
fs/cifs/ioctl.c fs/smb/client/ioctl.c
··· 321 321 struct tcon_link *tlink; 322 322 struct cifs_sb_info *cifs_sb; 323 323 __u64 ExtAttrBits = 0; 324 + #ifdef CONFIG_CIFS_POSIX 325 + #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY 324 326 __u64 caps; 327 + #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */ 328 + #endif /* CONFIG_CIFS_POSIX */ 325 329 326 330 xid = get_xid(); 327 331 ··· 335 331 if (pSMBFile == NULL) 336 332 break; 337 333 tcon = tlink_tcon(pSMBFile->tlink); 338 - caps = le64_to_cpu(tcon->fsUnixInfo.Capability); 339 334 #ifdef CONFIG_CIFS_POSIX 340 335 #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY 336 + caps = le64_to_cpu(tcon->fsUnixInfo.Capability); 341 337 if (CIFS_UNIX_EXTATTR_CAP & caps) { 342 338 __u64 ExtAttrMask = 0; 343 339 rc = CIFSGetExtAttr(xid, tcon,
fs/cifs/link.c fs/smb/client/link.c
fs/cifs/misc.c fs/smb/client/misc.c
fs/cifs/netlink.c fs/smb/client/netlink.c
fs/cifs/netlink.h fs/smb/client/netlink.h
fs/cifs/netmisc.c fs/smb/client/netmisc.c
fs/cifs/nterr.c fs/smb/client/nterr.c
fs/cifs/nterr.h fs/smb/client/nterr.h
fs/cifs/ntlmssp.h fs/smb/client/ntlmssp.h
fs/cifs/readdir.c fs/smb/client/readdir.c
fs/cifs/rfc1002pdu.h fs/smb/client/rfc1002pdu.h
fs/cifs/sess.c fs/smb/client/sess.c
fs/cifs/smb1ops.c fs/smb/client/smb1ops.c
fs/cifs/smb2file.c fs/smb/client/smb2file.c
fs/cifs/smb2glob.h fs/smb/client/smb2glob.h
fs/cifs/smb2inode.c fs/smb/client/smb2inode.c
fs/cifs/smb2maperror.c fs/smb/client/smb2maperror.c
fs/cifs/smb2misc.c fs/smb/client/smb2misc.c
-1
fs/cifs/smb2ops.c fs/smb/client/smb2ops.c
··· 618 618 * Add a new one instead 619 619 */ 620 620 spin_lock(&ses->iface_lock); 621 - iface = niface = NULL; 622 621 list_for_each_entry_safe(iface, niface, &ses->iface_list, 623 622 iface_head) { 624 623 ret = iface_cmp(iface, &tmp_iface);
+1 -1
fs/cifs/smb2pdu.c fs/smb/client/smb2pdu.c
··· 3725 3725 if (*out_data == NULL) { 3726 3726 rc = -ENOMEM; 3727 3727 goto cnotify_exit; 3728 - } else 3728 + } else if (plen) 3729 3729 *plen = le32_to_cpu(smb_rsp->OutputBufferLength); 3730 3730 } 3731 3731
fs/cifs/smb2pdu.h fs/smb/client/smb2pdu.h
fs/cifs/smb2proto.h fs/smb/client/smb2proto.h
fs/cifs/smb2status.h fs/smb/client/smb2status.h
fs/cifs/smb2transport.c fs/smb/client/smb2transport.c
fs/cifs/smbdirect.c fs/smb/client/smbdirect.c
fs/cifs/smbdirect.h fs/smb/client/smbdirect.h
+1 -1
fs/cifs/smbencrypt.c fs/smb/client/smbencrypt.c
··· 24 24 #include "cifsglob.h" 25 25 #include "cifs_debug.h" 26 26 #include "cifsproto.h" 27 - #include "../smbfs_common/md4.h" 27 + #include "../common/md4.h" 28 28 29 29 #ifndef false 30 30 #define false 0
fs/cifs/smberr.h fs/smb/client/smberr.h
fs/cifs/trace.c fs/smb/client/trace.c
fs/cifs/trace.h fs/smb/client/trace.h
fs/cifs/transport.c fs/smb/client/transport.c
fs/cifs/unc.c fs/smb/client/unc.c
fs/cifs/winucase.c fs/smb/client/winucase.c
fs/cifs/xattr.c fs/smb/client/xattr.c
+3 -1
fs/coredump.c
··· 371 371 if (t != current && !(t->flags & PF_POSTCOREDUMP)) { 372 372 sigaddset(&t->pending.signal, SIGKILL); 373 373 signal_wake_up(t, 1); 374 - nr++; 374 + /* The vhost_worker does not participate in coredumps */ 375 + if ((t->flags & (PF_USER_WORKER | PF_IO_WORKER)) != PF_USER_WORKER) 376 + nr++; 375 377 } 376 378 }
+4 -1
fs/ext4/ext4.h
··· 918 918 * where the second inode has larger inode number 919 919 * than the first 920 920 * I_DATA_SEM_QUOTA - Used for quota inodes only 921 + * I_DATA_SEM_EA - Used for ea_inodes only 921 922 */ 922 923 enum { 923 924 I_DATA_SEM_NORMAL = 0, 924 925 I_DATA_SEM_OTHER, 925 926 I_DATA_SEM_QUOTA, 927 + I_DATA_SEM_EA 926 928 }; 927 929 928 930 ··· 2903 2901 EXT4_IGET_NORMAL = 0, 2904 2902 EXT4_IGET_SPECIAL = 0x0001, /* OK to iget a system inode */ 2905 2903 EXT4_IGET_HANDLE = 0x0002, /* Inode # is from a handle */ 2906 - EXT4_IGET_BAD = 0x0004 /* Allow to iget a bad inode */ 2904 + EXT4_IGET_BAD = 0x0004, /* Allow to iget a bad inode */ 2905 + EXT4_IGET_EA_INODE = 0x0008 /* Inode should contain an EA value */ 2907 2906 } ext4_iget_flags; 2908 2907 2909 2908 extern struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+7
fs/ext4/fsync.c
··· 108 108 journal_t *journal = EXT4_SB(inode->i_sb)->s_journal; 109 109 tid_t commit_tid = datasync ? ei->i_datasync_tid : ei->i_sync_tid; 110 110 111 + /* 112 + * Fastcommit does not really support fsync on directories or other 113 + * special files. Force a full commit. 114 + */ 115 + if (!S_ISREG(inode->i_mode)) 116 + return ext4_force_commit(inode->i_sb); 117 + 111 118 if (journal->j_flags & JBD2_BARRIER && 112 119 !jbd2_trans_will_send_data_barrier(journal, commit_tid)) 113 120 *needs_barrier = true;
+29 -5
fs/ext4/inode.c
··· 4641 4641 inode_set_iversion_queried(inode, val); 4642 4642 } 4643 4643 4644 + static const char *check_igot_inode(struct inode *inode, ext4_iget_flags flags) 4645 + 4646 + { 4647 + if (flags & EXT4_IGET_EA_INODE) { 4648 + if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) 4649 + return "missing EA_INODE flag"; 4650 + if (ext4_test_inode_state(inode, EXT4_STATE_XATTR) || 4651 + EXT4_I(inode)->i_file_acl) 4652 + return "ea_inode with extended attributes"; 4653 + } else { 4654 + if ((EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) 4655 + return "unexpected EA_INODE flag"; 4656 + } 4657 + if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD)) 4658 + return "unexpected bad inode w/o EXT4_IGET_BAD"; 4659 + return NULL; 4660 + } 4661 + 4644 4662 struct inode *__ext4_iget(struct super_block *sb, unsigned long ino, 4645 4663 ext4_iget_flags flags, const char *function, 4646 4664 unsigned int line) ··· 4668 4650 struct ext4_inode_info *ei; 4669 4651 struct ext4_super_block *es = EXT4_SB(sb)->s_es; 4670 4652 struct inode *inode; 4653 + const char *err_str; 4671 4654 journal_t *journal = EXT4_SB(sb)->s_journal; 4672 4655 long ret; 4673 4656 loff_t size; ··· 4696 4677 inode = iget_locked(sb, ino); 4697 4678 if (!inode) 4698 4679 return ERR_PTR(-ENOMEM); 4699 - if (!(inode->i_state & I_NEW)) 4680 + if (!(inode->i_state & I_NEW)) { 4681 + if ((err_str = check_igot_inode(inode, flags)) != NULL) { 4682 + ext4_error_inode(inode, function, line, 0, err_str); 4683 + iput(inode); 4684 + return ERR_PTR(-EFSCORRUPTED); 4685 + } 4700 4686 return inode; 4687 + } 4701 4688 4702 4689 ei = EXT4_I(inode); 4703 4690 iloc.bh = NULL; ··· 4969 4944 if (IS_CASEFOLDED(inode) && !ext4_has_feature_casefold(inode->i_sb)) 4970 4945 ext4_error_inode(inode, function, line, 0, 4971 4946 "casefold flag without casefold feature"); 4972 - if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD)) { 4973 - ext4_error_inode(inode, function, line, 0, 4974 - "bad inode without EXT4_IGET_BAD flag"); 4975 - ret = -EUCLEAN; 
4947 + if ((err_str = check_igot_inode(inode, flags)) != NULL) { 4948 + ext4_error_inode(inode, function, line, 0, err_str); 4949 + ret = -EFSCORRUPTED; 4976 4950 goto bad_inode; 4977 4951 } 4978 4952
+12 -12
fs/ext4/super.c
··· 6589 6589 } 6590 6590 6591 6591 /* 6592 - * Reinitialize lazy itable initialization thread based on 6593 - * current settings 6594 - */ 6595 - if (sb_rdonly(sb) || !test_opt(sb, INIT_INODE_TABLE)) 6596 - ext4_unregister_li_request(sb); 6597 - else { 6598 - ext4_group_t first_not_zeroed; 6599 - first_not_zeroed = ext4_has_uninit_itable(sb); 6600 - ext4_register_li_request(sb, first_not_zeroed); 6601 - } 6602 - 6603 - /* 6604 6592 * Handle creation of system zone data early because it can fail. 6605 6593 * Releasing of existing data is done when we are sure remount will 6606 6594 * succeed. ··· 6624 6636 6625 6637 if (enable_rw) 6626 6638 sb->s_flags &= ~SB_RDONLY; 6639 + 6640 + /* 6641 + * Reinitialize lazy itable initialization thread based on 6642 + * current settings 6643 + */ 6644 + if (sb_rdonly(sb) || !test_opt(sb, INIT_INODE_TABLE)) 6645 + ext4_unregister_li_request(sb); 6646 + else { 6647 + ext4_group_t first_not_zeroed; 6648 + first_not_zeroed = ext4_has_uninit_itable(sb); 6649 + ext4_register_li_request(sb, first_not_zeroed); 6650 + } 6627 6651 6628 6652 if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb)) 6629 6653 ext4_stop_mmpd(sbi);
+12 -29
fs/ext4/xattr.c
··· 121 121 #ifdef CONFIG_LOCKDEP 122 122 void ext4_xattr_inode_set_class(struct inode *ea_inode) 123 123 { 124 + struct ext4_inode_info *ei = EXT4_I(ea_inode); 125 + 124 126 lockdep_set_subclass(&ea_inode->i_rwsem, 1); 127 + (void) ei; /* shut up clang warning if !CONFIG_LOCKDEP */ 128 + lockdep_set_subclass(&ei->i_data_sem, I_DATA_SEM_EA); 125 129 } 126 130 #endif 127 131 ··· 437 433 return -EFSCORRUPTED; 438 434 } 439 435 440 - inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_NORMAL); 436 + inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_EA_INODE); 441 437 if (IS_ERR(inode)) { 442 438 err = PTR_ERR(inode); 443 439 ext4_error(parent->i_sb, ··· 445 441 err); 446 442 return err; 447 443 } 448 - 449 - if (is_bad_inode(inode)) { 450 - ext4_error(parent->i_sb, 451 - "error while reading EA inode %lu is_bad_inode", 452 - ea_ino); 453 - err = -EIO; 454 - goto error; 455 - } 456 - 457 - if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) { 458 - ext4_error(parent->i_sb, 459 - "EA inode %lu does not have EXT4_EA_INODE_FL flag", 460 - ea_ino); 461 - err = -EINVAL; 462 - goto error; 463 - } 464 - 465 444 ext4_xattr_inode_set_class(inode); 466 445 467 446 /* ··· 465 478 466 479 *ea_inode = inode; 467 480 return 0; 468 - error: 469 - iput(inode); 470 - return err; 471 481 } 472 482 473 483 /* Remove entry from mbcache when EA inode is getting evicted */ ··· 1540 1556 1541 1557 while (ce) { 1542 1558 ea_inode = ext4_iget(inode->i_sb, ce->e_value, 1543 - EXT4_IGET_NORMAL); 1544 - if (!IS_ERR(ea_inode) && 1545 - !is_bad_inode(ea_inode) && 1546 - (EXT4_I(ea_inode)->i_flags & EXT4_EA_INODE_FL) && 1547 - i_size_read(ea_inode) == value_len && 1559 + EXT4_IGET_EA_INODE); 1560 + if (IS_ERR(ea_inode)) 1561 + goto next_entry; 1562 + ext4_xattr_inode_set_class(ea_inode); 1563 + if (i_size_read(ea_inode) == value_len && 1548 1564 !ext4_xattr_inode_read(ea_inode, ea_data, value_len) && 1549 1565 !ext4_xattr_inode_verify_hashes(ea_inode, NULL, ea_data, 1550 1566 value_len) && ··· 1554 1570 
kvfree(ea_data); 1555 1571 return ea_inode; 1556 1572 } 1557 - 1558 - if (!IS_ERR(ea_inode)) 1559 - iput(ea_inode); 1573 + iput(ea_inode); 1574 + next_entry: 1560 1575 ce = mb_cache_entry_find_next(ea_inode_cache, ce); 1561 1576 } 1562 1577 kvfree(ea_data);
fs/ksmbd/Kconfig fs/smb/server/Kconfig
fs/ksmbd/Makefile fs/smb/server/Makefile
fs/ksmbd/asn1.c fs/smb/server/asn1.c
fs/ksmbd/asn1.h fs/smb/server/asn1.h
+1 -1
fs/ksmbd/auth.c fs/smb/server/auth.c
··· 29 29 #include "mgmt/user_config.h" 30 30 #include "crypto_ctx.h" 31 31 #include "transport_ipc.h" 32 - #include "../smbfs_common/arc4.h" 32 + #include "../common/arc4.h" 33 33 34 34 /* 35 35 * Fixed format data defining GSS header and fixed string
fs/ksmbd/auth.h fs/smb/server/auth.h
fs/ksmbd/connection.c fs/smb/server/connection.c
fs/ksmbd/connection.h fs/smb/server/connection.h
fs/ksmbd/crypto_ctx.c fs/smb/server/crypto_ctx.c
fs/ksmbd/crypto_ctx.h fs/smb/server/crypto_ctx.h
fs/ksmbd/glob.h fs/smb/server/glob.h
fs/ksmbd/ksmbd_netlink.h fs/smb/server/ksmbd_netlink.h
fs/ksmbd/ksmbd_spnego_negtokeninit.asn1 fs/smb/server/ksmbd_spnego_negtokeninit.asn1
fs/ksmbd/ksmbd_spnego_negtokentarg.asn1 fs/smb/server/ksmbd_spnego_negtokentarg.asn1
fs/ksmbd/ksmbd_work.c fs/smb/server/ksmbd_work.c
fs/ksmbd/ksmbd_work.h fs/smb/server/ksmbd_work.h
fs/ksmbd/mgmt/ksmbd_ida.c fs/smb/server/mgmt/ksmbd_ida.c
fs/ksmbd/mgmt/ksmbd_ida.h fs/smb/server/mgmt/ksmbd_ida.h
fs/ksmbd/mgmt/share_config.c fs/smb/server/mgmt/share_config.c
fs/ksmbd/mgmt/share_config.h fs/smb/server/mgmt/share_config.h
fs/ksmbd/mgmt/tree_connect.c fs/smb/server/mgmt/tree_connect.c
fs/ksmbd/mgmt/tree_connect.h fs/smb/server/mgmt/tree_connect.h
fs/ksmbd/mgmt/user_config.c fs/smb/server/mgmt/user_config.c
fs/ksmbd/mgmt/user_config.h fs/smb/server/mgmt/user_config.h
fs/ksmbd/mgmt/user_session.c fs/smb/server/mgmt/user_session.c
fs/ksmbd/mgmt/user_session.h fs/smb/server/mgmt/user_session.h
fs/ksmbd/misc.c fs/smb/server/misc.c
fs/ksmbd/misc.h fs/smb/server/misc.h
fs/ksmbd/ndr.c fs/smb/server/ndr.c
fs/ksmbd/ndr.h fs/smb/server/ndr.h
fs/ksmbd/nterr.h fs/smb/server/nterr.h
fs/ksmbd/ntlmssp.h fs/smb/server/ntlmssp.h
+47 -25
fs/ksmbd/oplock.c fs/smb/server/oplock.c
··· 157 157 rcu_read_lock(); 158 158 opinfo = list_first_or_null_rcu(&ci->m_op_list, struct oplock_info, 159 159 op_entry); 160 - if (opinfo && !atomic_inc_not_zero(&opinfo->refcount)) 161 - opinfo = NULL; 160 + if (opinfo) { 161 + if (!atomic_inc_not_zero(&opinfo->refcount)) 162 + opinfo = NULL; 163 + else { 164 + atomic_inc(&opinfo->conn->r_count); 165 + if (ksmbd_conn_releasing(opinfo->conn)) { 166 + atomic_dec(&opinfo->conn->r_count); 167 + atomic_dec(&opinfo->refcount); 168 + opinfo = NULL; 169 + } 170 + } 171 + } 172 + 162 173 rcu_read_unlock(); 163 174 164 175 return opinfo; 176 + } 177 + 178 + static void opinfo_conn_put(struct oplock_info *opinfo) 179 + { 180 + struct ksmbd_conn *conn; 181 + 182 + if (!opinfo) 183 + return; 184 + 185 + conn = opinfo->conn; 186 + /* 187 + * Checking waitqueue to dropping pending requests on 188 + * disconnection. waitqueue_active is safe because it 189 + * uses atomic operation for condition. 190 + */ 191 + if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) 192 + wake_up(&conn->r_count_q); 193 + opinfo_put(opinfo); 165 194 } 166 195 167 196 void opinfo_put(struct oplock_info *opinfo) ··· 695 666 696 667 out: 697 668 ksmbd_free_work_struct(work); 698 - /* 699 - * Checking waitqueue to dropping pending requests on 700 - * disconnection. waitqueue_active is safe because it 701 - * uses atomic operation for condition. 702 - */ 703 - if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) 704 - wake_up(&conn->r_count_q); 705 669 } 706 670 707 671 /** ··· 728 706 work->conn = conn; 729 707 work->sess = opinfo->sess; 730 708 731 - atomic_inc(&conn->r_count); 732 709 if (opinfo->op_state == OPLOCK_ACK_WAIT) { 733 710 INIT_WORK(&work->work, __smb2_oplock_break_noti); 734 711 ksmbd_queue_work(work); ··· 797 776 798 777 out: 799 778 ksmbd_free_work_struct(work); 800 - /* 801 - * Checking waitqueue to dropping pending requests on 802 - * disconnection. 
waitqueue_active is safe because it 803 - * uses atomic operation for condition. 804 - */ 805 - if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) 806 - wake_up(&conn->r_count_q); 807 779 } 808 780 809 781 /** ··· 836 822 work->conn = conn; 837 823 work->sess = opinfo->sess; 838 824 839 - atomic_inc(&conn->r_count); 840 825 if (opinfo->op_state == OPLOCK_ACK_WAIT) { 841 826 list_for_each_safe(tmp, t, &opinfo->interim_list) { 842 827 struct ksmbd_work *in_work; ··· 1157 1144 } 1158 1145 prev_opinfo = opinfo_get_list(ci); 1159 1146 if (!prev_opinfo || 1160 - (prev_opinfo->level == SMB2_OPLOCK_LEVEL_NONE && lctx)) 1147 + (prev_opinfo->level == SMB2_OPLOCK_LEVEL_NONE && lctx)) { 1148 + opinfo_conn_put(prev_opinfo); 1161 1149 goto set_lev; 1150 + } 1162 1151 prev_op_has_lease = prev_opinfo->is_lease; 1163 1152 if (prev_op_has_lease) 1164 1153 prev_op_state = prev_opinfo->o_lease->state; ··· 1168 1153 if (share_ret < 0 && 1169 1154 prev_opinfo->level == SMB2_OPLOCK_LEVEL_EXCLUSIVE) { 1170 1155 err = share_ret; 1171 - opinfo_put(prev_opinfo); 1156 + opinfo_conn_put(prev_opinfo); 1172 1157 goto err_out; 1173 1158 } 1174 1159 1175 1160 if (prev_opinfo->level != SMB2_OPLOCK_LEVEL_BATCH && 1176 1161 prev_opinfo->level != SMB2_OPLOCK_LEVEL_EXCLUSIVE) { 1177 - opinfo_put(prev_opinfo); 1162 + opinfo_conn_put(prev_opinfo); 1178 1163 goto op_break_not_needed; 1179 1164 } 1180 1165 1181 1166 list_add(&work->interim_entry, &prev_opinfo->interim_list); 1182 1167 err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II); 1183 - opinfo_put(prev_opinfo); 1168 + opinfo_conn_put(prev_opinfo); 1184 1169 if (err == -ENOENT) 1185 1170 goto set_lev; 1186 1171 /* Check all oplock was freed by close */ ··· 1243 1228 return; 1244 1229 if (brk_opinfo->level != SMB2_OPLOCK_LEVEL_BATCH && 1245 1230 brk_opinfo->level != SMB2_OPLOCK_LEVEL_EXCLUSIVE) { 1246 - opinfo_put(brk_opinfo); 1231 + opinfo_conn_put(brk_opinfo); 1247 1232 return; 1248 1233 } 1249 1234 1250 1235 
brk_opinfo->open_trunc = is_trunc; 1251 1236 list_add(&work->interim_entry, &brk_opinfo->interim_list); 1252 1237 oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II); 1253 - opinfo_put(brk_opinfo); 1238 + opinfo_conn_put(brk_opinfo); 1254 1239 } 1255 1240 1256 1241 /** ··· 1278 1263 list_for_each_entry_rcu(brk_op, &ci->m_op_list, op_entry) { 1279 1264 if (!atomic_inc_not_zero(&brk_op->refcount)) 1280 1265 continue; 1266 + 1267 + atomic_inc(&brk_op->conn->r_count); 1268 + if (ksmbd_conn_releasing(brk_op->conn)) { 1269 + atomic_dec(&brk_op->conn->r_count); 1270 + continue; 1271 + } 1272 + 1281 1273 rcu_read_unlock(); 1282 1274 if (brk_op->is_lease && (brk_op->o_lease->state & 1283 1275 (~(SMB2_LEASE_READ_CACHING_LE | ··· 1314 1292 brk_op->open_trunc = is_trunc; 1315 1293 oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE); 1316 1294 next: 1317 - opinfo_put(brk_op); 1295 + opinfo_conn_put(brk_op); 1318 1296 rcu_read_lock(); 1319 1297 } 1320 1298 rcu_read_unlock();
fs/ksmbd/oplock.h fs/smb/server/oplock.h
fs/ksmbd/server.c fs/smb/server/server.c
fs/ksmbd/server.h fs/smb/server/server.h
fs/ksmbd/smb2misc.c fs/smb/server/smb2misc.c
fs/ksmbd/smb2ops.c fs/smb/server/smb2ops.c
+46 -50
fs/ksmbd/smb2pdu.c fs/smb/server/smb2pdu.c
··· 326 326 if (hdr->Command == SMB2_NEGOTIATE) 327 327 aux_max = 1; 328 328 else 329 - aux_max = conn->vals->max_credits - credit_charge; 329 + aux_max = conn->vals->max_credits - conn->total_credits; 330 330 credits_granted = min_t(unsigned short, credits_requested, aux_max); 331 - 332 - if (conn->vals->max_credits - conn->total_credits < credits_granted) 333 - credits_granted = conn->vals->max_credits - 334 - conn->total_credits; 335 331 336 332 conn->total_credits += credits_granted; 337 333 work->credits_granted += credits_granted; ··· 845 849 846 850 static __le32 decode_preauth_ctxt(struct ksmbd_conn *conn, 847 851 struct smb2_preauth_neg_context *pneg_ctxt, 848 - int len_of_ctxts) 852 + int ctxt_len) 849 853 { 850 854 /* 851 855 * sizeof(smb2_preauth_neg_context) assumes SMB311_SALT_SIZE Salt, 852 856 * which may not be present. Only check for used HashAlgorithms[1]. 853 857 */ 854 - if (len_of_ctxts < MIN_PREAUTH_CTXT_DATA_LEN) 858 + if (ctxt_len < 859 + sizeof(struct smb2_neg_context) + MIN_PREAUTH_CTXT_DATA_LEN) 855 860 return STATUS_INVALID_PARAMETER; 856 861 857 862 if (pneg_ctxt->HashAlgorithms != SMB2_PREAUTH_INTEGRITY_SHA512) ··· 864 867 865 868 static void decode_encrypt_ctxt(struct ksmbd_conn *conn, 866 869 struct smb2_encryption_neg_context *pneg_ctxt, 867 - int len_of_ctxts) 870 + int ctxt_len) 868 871 { 869 - int cph_cnt = le16_to_cpu(pneg_ctxt->CipherCount); 870 - int i, cphs_size = cph_cnt * sizeof(__le16); 872 + int cph_cnt; 873 + int i, cphs_size; 874 + 875 + if (sizeof(struct smb2_encryption_neg_context) > ctxt_len) { 876 + pr_err("Invalid SMB2_ENCRYPTION_CAPABILITIES context size\n"); 877 + return; 878 + } 871 879 872 880 conn->cipher_type = 0; 873 881 882 + cph_cnt = le16_to_cpu(pneg_ctxt->CipherCount); 883 + cphs_size = cph_cnt * sizeof(__le16); 884 + 874 885 if (sizeof(struct smb2_encryption_neg_context) + cphs_size > 875 - len_of_ctxts) { 886 + ctxt_len) { 876 887 pr_err("Invalid cipher count(%d)\n", cph_cnt); 877 888 return; 878 889 
} ··· 928 923 929 924 static void decode_sign_cap_ctxt(struct ksmbd_conn *conn, 930 925 struct smb2_signing_capabilities *pneg_ctxt, 931 - int len_of_ctxts) 926 + int ctxt_len) 932 927 { 933 - int sign_algo_cnt = le16_to_cpu(pneg_ctxt->SigningAlgorithmCount); 934 - int i, sign_alos_size = sign_algo_cnt * sizeof(__le16); 928 + int sign_algo_cnt; 929 + int i, sign_alos_size; 930 + 931 + if (sizeof(struct smb2_signing_capabilities) > ctxt_len) { 932 + pr_err("Invalid SMB2_SIGNING_CAPABILITIES context length\n"); 933 + return; 934 + } 935 935 936 936 conn->signing_negotiated = false; 937 + sign_algo_cnt = le16_to_cpu(pneg_ctxt->SigningAlgorithmCount); 938 + sign_alos_size = sign_algo_cnt * sizeof(__le16); 937 939 938 940 if (sizeof(struct smb2_signing_capabilities) + sign_alos_size > 939 - len_of_ctxts) { 941 + ctxt_len) { 940 942 pr_err("Invalid signing algorithm count(%d)\n", sign_algo_cnt); 941 943 return; 942 944 } ··· 981 969 len_of_ctxts = len_of_smb - offset; 982 970 983 971 while (i++ < neg_ctxt_cnt) { 984 - int clen; 985 - 986 - /* check that offset is not beyond end of SMB */ 987 - if (len_of_ctxts == 0) 988 - break; 972 + int clen, ctxt_len; 989 973 990 974 if (len_of_ctxts < sizeof(struct smb2_neg_context)) 991 975 break; 992 976 993 977 pctx = (struct smb2_neg_context *)((char *)pctx + offset); 994 978 clen = le16_to_cpu(pctx->DataLength); 995 - if (clen + sizeof(struct smb2_neg_context) > len_of_ctxts) 979 + ctxt_len = clen + sizeof(struct smb2_neg_context); 980 + 981 + if (ctxt_len > len_of_ctxts) 996 982 break; 997 983 998 984 if (pctx->ContextType == SMB2_PREAUTH_INTEGRITY_CAPABILITIES) { ··· 1001 991 1002 992 status = decode_preauth_ctxt(conn, 1003 993 (struct smb2_preauth_neg_context *)pctx, 1004 - len_of_ctxts); 994 + ctxt_len); 1005 995 if (status != STATUS_SUCCESS) 1006 996 break; 1007 997 } else if (pctx->ContextType == SMB2_ENCRYPTION_CAPABILITIES) { ··· 1012 1002 1013 1003 decode_encrypt_ctxt(conn, 1014 1004 (struct smb2_encryption_neg_context 
*)pctx, 1015 - len_of_ctxts); 1005 + ctxt_len); 1016 1006 } else if (pctx->ContextType == SMB2_COMPRESSION_CAPABILITIES) { 1017 1007 ksmbd_debug(SMB, 1018 1008 "deassemble SMB2_COMPRESSION_CAPABILITIES context\n"); ··· 1031 1021 } else if (pctx->ContextType == SMB2_SIGNING_CAPABILITIES) { 1032 1022 ksmbd_debug(SMB, 1033 1023 "deassemble SMB2_SIGNING_CAPABILITIES context\n"); 1024 + 1034 1025 decode_sign_cap_ctxt(conn, 1035 1026 (struct smb2_signing_capabilities *)pctx, 1036 - len_of_ctxts); 1027 + ctxt_len); 1037 1028 } 1038 1029 1039 1030 /* offsets must be 8 byte aligned */ ··· 1068 1057 return rc; 1069 1058 } 1070 1059 1071 - if (req->DialectCount == 0) { 1072 - pr_err("malformed packet\n"); 1060 + smb2_buf_len = get_rfc1002_len(work->request_buf); 1061 + smb2_neg_size = offsetof(struct smb2_negotiate_req, Dialects); 1062 + if (smb2_neg_size > smb2_buf_len) { 1073 1063 rsp->hdr.Status = STATUS_INVALID_PARAMETER; 1074 1064 rc = -EINVAL; 1075 1065 goto err_out; 1076 1066 } 1077 1067 1078 - smb2_buf_len = get_rfc1002_len(work->request_buf); 1079 - smb2_neg_size = offsetof(struct smb2_negotiate_req, Dialects); 1080 - if (smb2_neg_size > smb2_buf_len) { 1068 + if (req->DialectCount == 0) { 1069 + pr_err("malformed packet\n"); 1081 1070 rsp->hdr.Status = STATUS_INVALID_PARAMETER; 1082 1071 rc = -EINVAL; 1083 1072 goto err_out; ··· 4369 4358 return 0; 4370 4359 } 4371 4360 4372 - static unsigned long long get_allocation_size(struct inode *inode, 4373 - struct kstat *stat) 4374 - { 4375 - unsigned long long alloc_size = 0; 4376 - 4377 - if (!S_ISDIR(stat->mode)) { 4378 - if ((inode->i_blocks << 9) <= stat->size) 4379 - alloc_size = stat->size; 4380 - else 4381 - alloc_size = inode->i_blocks << 9; 4382 - } 4383 - 4384 - return alloc_size; 4385 - } 4386 - 4387 4361 static void get_file_standard_info(struct smb2_query_info_rsp *rsp, 4388 4362 struct ksmbd_file *fp, void *rsp_org) 4389 4363 { ··· 4383 4387 sinfo = (struct smb2_file_standard_info *)rsp->Buffer; 4384 4388 
delete_pending = ksmbd_inode_pending_delete(fp); 4385 4389 4386 - sinfo->AllocationSize = cpu_to_le64(get_allocation_size(inode, &stat)); 4390 + sinfo->AllocationSize = cpu_to_le64(inode->i_blocks << 9); 4387 4391 sinfo->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4388 4392 sinfo->NumberOfLinks = cpu_to_le32(get_nlink(&stat) - delete_pending); 4389 4393 sinfo->DeletePending = delete_pending; ··· 4448 4452 file_info->Attributes = fp->f_ci->m_fattr; 4449 4453 file_info->Pad1 = 0; 4450 4454 file_info->AllocationSize = 4451 - cpu_to_le64(get_allocation_size(inode, &stat)); 4455 + cpu_to_le64(inode->i_blocks << 9); 4452 4456 file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4453 4457 file_info->NumberOfLinks = 4454 4458 cpu_to_le32(get_nlink(&stat) - delete_pending); ··· 4637 4641 file_info->ChangeTime = cpu_to_le64(time); 4638 4642 file_info->Attributes = fp->f_ci->m_fattr; 4639 4643 file_info->AllocationSize = 4640 - cpu_to_le64(get_allocation_size(inode, &stat)); 4644 + cpu_to_le64(inode->i_blocks << 9); 4641 4645 file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4642 4646 file_info->Reserved = cpu_to_le32(0); 4643 4647 rsp->OutputBufferLength = ··· 5502 5506 { 5503 5507 char *link_name = NULL, *target_name = NULL, *pathname = NULL; 5504 5508 struct path path; 5505 - bool file_present = true; 5509 + bool file_present = false; 5506 5510 int rc; 5507 5511 5508 5512 if (buf_len < (u64)sizeof(struct smb2_file_link_info) + ··· 5535 5539 if (rc) { 5536 5540 if (rc != -ENOENT) 5537 5541 goto out; 5538 - file_present = false; 5539 - } 5542 + } else 5543 + file_present = true; 5540 5544 5541 5545 if (file_info->ReplaceIfExists) { 5542 5546 if (file_present) {
fs/ksmbd/smb2pdu.h fs/smb/server/smb2pdu.h
fs/ksmbd/smb_common.c fs/smb/server/smb_common.c
+1 -1
fs/ksmbd/smb_common.h fs/smb/server/smb_common.h
··· 10 10 11 11 #include "glob.h" 12 12 #include "nterr.h" 13 - #include "../smbfs_common/smb2pdu.h" 13 + #include "../common/smb2pdu.h" 14 14 #include "smb2pdu.h" 15 15 16 16 /* ksmbd's Specific ERRNO */
fs/ksmbd/smbacl.c fs/smb/server/smbacl.c
fs/ksmbd/smbacl.h fs/smb/server/smbacl.h
+1 -1
fs/ksmbd/smbfsctl.h fs/smb/server/smbfsctl.h
··· 1 1 /* SPDX-License-Identifier: LGPL-2.1+ */ 2 2 /* 3 - * fs/cifs/smbfsctl.h: SMB, CIFS, SMB2 FSCTL definitions 3 + * fs/smb/server/smbfsctl.h: SMB, CIFS, SMB2 FSCTL definitions 4 4 * 5 5 * Copyright (c) International Business Machines Corp., 2002,2009 6 6 * Author(s): Steve French (sfrench@us.ibm.com)
+1 -1
fs/ksmbd/smbstatus.h fs/smb/server/smbstatus.h
··· 1 1 /* SPDX-License-Identifier: LGPL-2.1+ */ 2 2 /* 3 - * fs/cifs/smb2status.h 3 + * fs/server/smb2status.h 4 4 * 5 5 * SMB2 Status code (network error) definitions 6 6 * Definitions are from MS-ERREF
fs/ksmbd/transport_ipc.c fs/smb/server/transport_ipc.c
fs/ksmbd/transport_ipc.h fs/smb/server/transport_ipc.h
fs/ksmbd/transport_rdma.c fs/smb/server/transport_rdma.c
fs/ksmbd/transport_rdma.h fs/smb/server/transport_rdma.h
fs/ksmbd/transport_tcp.c fs/smb/server/transport_tcp.c
fs/ksmbd/transport_tcp.h fs/smb/server/transport_tcp.h
fs/ksmbd/unicode.c fs/smb/server/unicode.c
fs/ksmbd/unicode.h fs/smb/server/unicode.h
fs/ksmbd/uniupr.h fs/smb/server/uniupr.h
+7 -2
fs/ksmbd/vfs.c fs/smb/server/vfs.c
··· 86 86 err = vfs_path_parent_lookup(filename, flags, 87 87 &parent_path, &last, &type, 88 88 root_share_path); 89 - putname(filename); 90 - if (err) 89 + if (err) { 90 + putname(filename); 91 91 return err; 92 + } 92 93 93 94 if (unlikely(type != LAST_NORM)) { 94 95 path_put(&parent_path); 96 + putname(filename); 95 97 return -ENOENT; 96 98 } 97 99 ··· 110 108 path->dentry = d; 111 109 path->mnt = share_conf->vfs_path.mnt; 112 110 path_put(&parent_path); 111 + putname(filename); 113 112 114 113 return 0; 115 114 116 115 err_out: 117 116 inode_unlock(parent_path.dentry->d_inode); 118 117 path_put(&parent_path); 118 + putname(filename); 119 119 return -ENOENT; 120 120 } 121 121 ··· 747 743 rd.new_dir = new_path.dentry->d_inode, 748 744 rd.new_dentry = new_dentry, 749 745 rd.flags = flags, 746 + rd.delegated_inode = NULL, 750 747 err = vfs_rename(&rd); 751 748 if (err) 752 749 ksmbd_debug(VFS, "vfs_rename failed err %d\n", err);
fs/ksmbd/vfs.h fs/smb/server/vfs.h
fs/ksmbd/vfs_cache.c fs/smb/server/vfs_cache.c
fs/ksmbd/vfs_cache.h fs/smb/server/vfs_cache.h
fs/ksmbd/xattr.h fs/smb/server/xattr.h
+11
fs/smb/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + # 3 + # smbfs configuration 4 + 5 + source "fs/smb/client/Kconfig" 6 + source "fs/smb/server/Kconfig" 7 + 8 + config SMBFS 9 + tristate 10 + default y if CIFS=y || SMB_SERVER=y 11 + default m if CIFS=m || SMB_SERVER=m
+5
fs/smb/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + obj-$(CONFIG_SMBFS) += common/ 4 + obj-$(CONFIG_CIFS) += client/ 5 + obj-$(CONFIG_SMB_SERVER) += server/
+2 -2
fs/smbfs_common/Makefile fs/smb/common/Makefile
··· 3 3 # Makefile for Linux filesystem routines that are shared by client and server. 4 4 # 5 5 6 - obj-$(CONFIG_SMBFS_COMMON) += cifs_arc4.o 7 - obj-$(CONFIG_SMBFS_COMMON) += cifs_md4.o 6 + obj-$(CONFIG_SMBFS) += cifs_arc4.o 7 + obj-$(CONFIG_SMBFS) += cifs_md4.o
fs/smbfs_common/arc4.h fs/smb/common/arc4.h
fs/smbfs_common/cifs_arc4.c fs/smb/common/cifs_arc4.c
fs/smbfs_common/cifs_md4.c fs/smb/common/cifs_md4.c
fs/smbfs_common/md4.h fs/smb/common/md4.h
fs/smbfs_common/smb2pdu.h fs/smb/common/smb2pdu.h
fs/smbfs_common/smbfsctl.h fs/smb/common/smbfsctl.h
+9 -6
fs/xattr.c
··· 985 985 return 0; 986 986 } 987 987 988 - /* 988 + /** 989 + * generic_listxattr - run through a dentry's xattr list() operations 990 + * @dentry: dentry to list the xattrs 991 + * @buffer: result buffer 992 + * @buffer_size: size of @buffer 993 + * 989 994 * Combine the results of the list() operation from every xattr_handler in the 990 - * list. 995 + * xattr_handler stack. 996 + * 997 + * Note that this will not include the entries for POSIX ACLs. 991 998 */ 992 999 ssize_t 993 1000 generic_listxattr(struct dentry *dentry, char *buffer, size_t buffer_size) ··· 1002 995 const struct xattr_handler *handler, **handlers = dentry->d_sb->s_xattr; 1003 996 ssize_t remaining_size = buffer_size; 1004 997 int err = 0; 1005 - 1006 - err = posix_acl_listxattr(d_inode(dentry), &buffer, &remaining_size); 1007 - if (err) 1008 - return err; 1009 998 1010 999 for_each_xattr_handler(handlers, handler) { 1011 1000 if (!handler->name || (handler->list && !handler->list(dentry)))
+8 -1
include/asm-generic/vmlinux.lds.h
··· 891 891 /* 892 892 * Discard .note.GNU-stack, which is emitted as PROGBITS by the compiler. 893 893 * Otherwise, the type of .notes section would become PROGBITS instead of NOTES. 894 + * 895 + * Also, discard .note.gnu.property, otherwise it forces the notes section to 896 + * be 8-byte aligned which causes alignment mismatches with the kernel's custom 897 + * 4-byte aligned notes. 894 898 */ 895 899 #define NOTES \ 896 - /DISCARD/ : { *(.note.GNU-stack) } \ 900 + /DISCARD/ : { \ 901 + *(.note.GNU-stack) \ 902 + *(.note.gnu.property) \ 903 + } \ 897 904 .notes : AT(ADDR(.notes) - LOAD_OFFSET) { \ 898 905 BOUNDED_SECTION_BY(.note.*, _notes) \ 899 906 } NOTES_HEADERS \
+17 -1
include/drm/drm_managed.h
··· 105 105 106 106 void drmm_kfree(struct drm_device *dev, void *data); 107 107 108 - int drmm_mutex_init(struct drm_device *dev, struct mutex *lock); 108 + void __drmm_mutex_release(struct drm_device *dev, void *res); 109 + 110 + /** 111 + * drmm_mutex_init - &drm_device-managed mutex_init() 112 + * @dev: DRM device 113 + * @lock: lock to be initialized 114 + * 115 + * Returns: 116 + * 0 on success, or a negative errno code otherwise. 117 + * 118 + * This is a &drm_device-managed version of mutex_init(). The initialized 119 + * lock is automatically destroyed on the final drm_dev_put(). 120 + */ 121 + #define drmm_mutex_init(dev, lock) ({ \ 122 + mutex_init(lock); \ 123 + drmm_add_action_or_reset(dev, __drmm_mutex_release, lock); \ 124 + }) \ 109 125 110 126 #endif
+1
include/linux/arm_ffa.h
··· 96 96 97 97 /* FFA Bus/Device/Driver related */ 98 98 struct ffa_device { 99 + u32 id; 99 100 int vm_id; 100 101 bool mode_32bit; 101 102 uuid_t uuid;
+1 -1
include/linux/firewire.h
··· 391 391 u32 tag:2; /* tx: Tag in packet header */ 392 392 u32 sy:4; /* tx: Sy in packet header */ 393 393 u32 header_length:8; /* Length of immediate header */ 394 - u32 header[0]; /* tx: Top of 1394 isoch. data_block */ 394 + u32 header[]; /* tx: Top of 1394 isoch. data_block */ 395 395 }; 396 396 397 397 #define FW_ISO_CONTEXT_TRANSMIT 0
+21 -21
include/linux/fs.h
··· 1076 1076 * sb->s_flags. Note that these mirror the equivalent MS_* flags where 1077 1077 * represented in both. 1078 1078 */ 1079 - #define SB_RDONLY 1 /* Mount read-only */ 1080 - #define SB_NOSUID 2 /* Ignore suid and sgid bits */ 1081 - #define SB_NODEV 4 /* Disallow access to device special files */ 1082 - #define SB_NOEXEC 8 /* Disallow program execution */ 1083 - #define SB_SYNCHRONOUS 16 /* Writes are synced at once */ 1084 - #define SB_MANDLOCK 64 /* Allow mandatory locks on an FS */ 1085 - #define SB_DIRSYNC 128 /* Directory modifications are synchronous */ 1086 - #define SB_NOATIME 1024 /* Do not update access times. */ 1087 - #define SB_NODIRATIME 2048 /* Do not update directory access times */ 1088 - #define SB_SILENT 32768 1089 - #define SB_POSIXACL (1<<16) /* VFS does not apply the umask */ 1090 - #define SB_INLINECRYPT (1<<17) /* Use blk-crypto for encrypted files */ 1091 - #define SB_KERNMOUNT (1<<22) /* this is a kern_mount call */ 1092 - #define SB_I_VERSION (1<<23) /* Update inode I_version field */ 1093 - #define SB_LAZYTIME (1<<25) /* Update the on-disk [acm]times lazily */ 1079 + #define SB_RDONLY BIT(0) /* Mount read-only */ 1080 + #define SB_NOSUID BIT(1) /* Ignore suid and sgid bits */ 1081 + #define SB_NODEV BIT(2) /* Disallow access to device special files */ 1082 + #define SB_NOEXEC BIT(3) /* Disallow program execution */ 1083 + #define SB_SYNCHRONOUS BIT(4) /* Writes are synced at once */ 1084 + #define SB_MANDLOCK BIT(6) /* Allow mandatory locks on an FS */ 1085 + #define SB_DIRSYNC BIT(7) /* Directory modifications are synchronous */ 1086 + #define SB_NOATIME BIT(10) /* Do not update access times. 
*/ 1087 + #define SB_NODIRATIME BIT(11) /* Do not update directory access times */ 1088 + #define SB_SILENT BIT(15) 1089 + #define SB_POSIXACL BIT(16) /* VFS does not apply the umask */ 1090 + #define SB_INLINECRYPT BIT(17) /* Use blk-crypto for encrypted files */ 1091 + #define SB_KERNMOUNT BIT(22) /* this is a kern_mount call */ 1092 + #define SB_I_VERSION BIT(23) /* Update inode I_version field */ 1093 + #define SB_LAZYTIME BIT(25) /* Update the on-disk [acm]times lazily */ 1094 1094 1095 1095 /* These sb flags are internal to the kernel */ 1096 - #define SB_SUBMOUNT (1<<26) 1097 - #define SB_FORCE (1<<27) 1098 - #define SB_NOSEC (1<<28) 1099 - #define SB_BORN (1<<29) 1100 - #define SB_ACTIVE (1<<30) 1101 - #define SB_NOUSER (1<<31) 1096 + #define SB_SUBMOUNT BIT(26) 1097 + #define SB_FORCE BIT(27) 1098 + #define SB_NOSEC BIT(28) 1099 + #define SB_BORN BIT(29) 1100 + #define SB_ACTIVE BIT(30) 1101 + #define SB_NOUSER BIT(31) 1102 1102 1103 1103 /* These flags relate to encoding and casefolding */ 1104 1104 #define SB_ENC_STRICT_MODE_FL (1 << 0)
+14
include/linux/lockdep.h
··· 344 344 #define lockdep_repin_lock(l,c) lock_repin_lock(&(l)->dep_map, (c)) 345 345 #define lockdep_unpin_lock(l,c) lock_unpin_lock(&(l)->dep_map, (c)) 346 346 347 + /* 348 + * Must use lock_map_aquire_try() with override maps to avoid 349 + * lockdep thinking they participate in the block chain. 350 + */ 351 + #define DEFINE_WAIT_OVERRIDE_MAP(_name, _wait_type) \ 352 + struct lockdep_map _name = { \ 353 + .name = #_name "-wait-type-override", \ 354 + .wait_type_inner = _wait_type, \ 355 + .lock_type = LD_LOCK_WAIT_OVERRIDE, } 356 + 347 357 #else /* !CONFIG_LOCKDEP */ 348 358 349 359 static inline void lockdep_init_task(struct task_struct *task) ··· 441 431 #define lockdep_pin_lock(l) ({ struct pin_cookie cookie = { }; cookie; }) 442 432 #define lockdep_repin_lock(l, c) do { (void)(l); (void)(c); } while (0) 443 433 #define lockdep_unpin_lock(l, c) do { (void)(l); (void)(c); } while (0) 434 + 435 + #define DEFINE_WAIT_OVERRIDE_MAP(_name, _wait_type) \ 436 + struct lockdep_map __maybe_unused _name = {} 444 437 445 438 #endif /* !LOCKDEP */ 446 439 ··· 569 556 #define rwsem_release(l, i) lock_release(l, i) 570 557 571 558 #define lock_map_acquire(l) lock_acquire_exclusive(l, 0, 0, NULL, _THIS_IP_) 559 + #define lock_map_acquire_try(l) lock_acquire_exclusive(l, 0, 1, NULL, _THIS_IP_) 572 560 #define lock_map_acquire_read(l) lock_acquire_shared_recursive(l, 0, 0, NULL, _THIS_IP_) 573 561 #define lock_map_acquire_tryread(l) lock_acquire_shared_recursive(l, 0, 1, NULL, _THIS_IP_) 574 562 #define lock_map_release(l) lock_release(l, _THIS_IP_)
+1
include/linux/lockdep_types.h
··· 33 33 enum lockdep_lock_type { 34 34 LD_LOCK_NORMAL = 0, /* normal, catch all */ 35 35 LD_LOCK_PERCPU, /* percpu */ 36 + LD_LOCK_WAIT_OVERRIDE, /* annotation */ 36 37 LD_LOCK_MAX, 37 38 }; 38 39
+1
include/linux/mlx5/driver.h
··· 1093 1093 int mlx5_core_create_psv(struct mlx5_core_dev *dev, u32 pdn, 1094 1094 int npsvs, u32 *sig_index); 1095 1095 int mlx5_core_destroy_psv(struct mlx5_core_dev *dev, int psv_num); 1096 + __be32 mlx5_core_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev); 1096 1097 void mlx5_core_put_rsc(struct mlx5_core_rsc_common *common); 1097 1098 int mlx5_query_odp_caps(struct mlx5_core_dev *dev, 1098 1099 struct mlx5_odp_caps *odp_caps);
+8 -1
include/linux/msi.h
··· 383 383 void arch_teardown_msi_irq(unsigned int irq); 384 384 int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type); 385 385 void arch_teardown_msi_irqs(struct pci_dev *dev); 386 + #endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */ 387 + 388 + /* 389 + * Xen uses non-default msi_domain_ops and hence needs a way to populate sysfs 390 + * entries of MSI IRQs. 391 + */ 392 + #if defined(CONFIG_PCI_XEN) || defined(CONFIG_PCI_MSI_ARCH_FALLBACKS) 386 393 #ifdef CONFIG_SYSFS 387 394 int msi_device_populate_sysfs(struct device *dev); 388 395 void msi_device_destroy_sysfs(struct device *dev); ··· 397 390 static inline int msi_device_populate_sysfs(struct device *dev) { return 0; } 398 391 static inline void msi_device_destroy_sysfs(struct device *dev) { } 399 392 #endif /* !CONFIG_SYSFS */ 400 - #endif /* CONFIG_PCI_MSI_ARCH_FALLBACKS */ 393 + #endif /* CONFIG_PCI_XEN || CONFIG_PCI_MSI_ARCH_FALLBACKS */ 401 394 402 395 /* 403 396 * The restore hook is still available even for fully irq domain based
-1
include/linux/sched/task.h
··· 29 29 u32 io_thread:1; 30 30 u32 user_worker:1; 31 31 u32 no_files:1; 32 - u32 ignore_signals:1; 33 32 unsigned long stack; 34 33 unsigned long stack_size; 35 34 unsigned long tls;
+3 -12
include/linux/sched/vhost_task.h
··· 2 2 #ifndef _LINUX_VHOST_TASK_H 3 3 #define _LINUX_VHOST_TASK_H 4 4 5 - #include <linux/completion.h> 6 5 7 - struct task_struct; 6 + struct vhost_task; 8 7 9 - struct vhost_task { 10 - int (*fn)(void *data); 11 - void *data; 12 - struct completion exited; 13 - unsigned long flags; 14 - struct task_struct *task; 15 - }; 16 - 17 - struct vhost_task *vhost_task_create(int (*fn)(void *), void *arg, 8 + struct vhost_task *vhost_task_create(bool (*fn)(void *), void *arg, 18 9 const char *name); 19 10 void vhost_task_start(struct vhost_task *vtsk); 20 11 void vhost_task_stop(struct vhost_task *vtsk); 21 - bool vhost_task_should_stop(struct vhost_task *vtsk); 12 + void vhost_task_wake(struct vhost_task *vtsk); 22 13 23 14 #endif
+1
include/linux/trace_events.h
··· 806 806 FILTER_TRACE_FN, 807 807 FILTER_COMM, 808 808 FILTER_CPU, 809 + FILTER_STACKTRACE, 809 810 }; 810 811 811 812 extern int trace_event_raw_init(struct trace_event_call *call);
+2 -1
include/linux/user_events.h
··· 17 17 18 18 #ifdef CONFIG_USER_EVENTS 19 19 struct user_event_mm { 20 - struct list_head link; 20 + struct list_head mms_link; 21 21 struct list_head enablers; 22 22 struct mm_struct *mm; 23 + /* Used for one-shot lists, protected by event_mutex */ 23 24 struct user_event_mm *next; 24 25 refcount_t refcnt; 25 26 refcount_t tasks;
-2
include/net/mana/mana.h
··· 347 347 struct mana_ethtool_stats { 348 348 u64 stop_queue; 349 349 u64 wake_queue; 350 - u64 tx_cqes; 351 350 u64 tx_cqe_err; 352 351 u64 tx_cqe_unknown_type; 353 - u64 rx_cqes; 354 352 u64 rx_coalesced_err; 355 353 u64 rx_cqe_unknown_type; 356 354 };
+4
include/net/sock.h
··· 336 336 * @sk_cgrp_data: cgroup data for this cgroup 337 337 * @sk_memcg: this socket's memory cgroup association 338 338 * @sk_write_pending: a write to stream socket waits to start 339 + * @sk_wait_pending: number of threads blocked on this socket 339 340 * @sk_state_change: callback to indicate change in the state of the sock 340 341 * @sk_data_ready: callback to indicate there is data to be processed 341 342 * @sk_write_space: callback to indicate there is bf sending space available ··· 429 428 unsigned int sk_napi_id; 430 429 #endif 431 430 int sk_rcvbuf; 431 + int sk_wait_pending; 432 432 433 433 struct sk_filter __rcu *sk_filter; 434 434 union { ··· 1176 1174 1177 1175 #define sk_wait_event(__sk, __timeo, __condition, __wait) \ 1178 1176 ({ int __rc; \ 1177 + __sk->sk_wait_pending++; \ 1179 1178 release_sock(__sk); \ 1180 1179 __rc = __condition; \ 1181 1180 if (!__rc) { \ ··· 1186 1183 } \ 1187 1184 sched_annotate_sleep(); \ 1188 1185 lock_sock(__sk); \ 1186 + __sk->sk_wait_pending--; \ 1189 1187 __rc = __condition; \ 1190 1188 __rc; \ 1191 1189 })
+1
include/net/tcp.h
··· 628 628 void tcp_skb_mark_lost_uncond_verify(struct tcp_sock *tp, struct sk_buff *skb); 629 629 void tcp_fin(struct sock *sk); 630 630 void tcp_check_space(struct sock *sk); 631 + void tcp_sack_compress_send_ack(struct sock *sk); 631 632 632 633 /* tcp_timer.c */ 633 634 void tcp_init_xmit_timers(struct sock *);
+5 -1
io_uring/sqpoll.c
··· 255 255 sqt_spin = true; 256 256 257 257 if (sqt_spin || !time_after(jiffies, timeout)) { 258 - cond_resched(); 259 258 if (sqt_spin) 260 259 timeout = jiffies + sqd->sq_thread_idle; 260 + if (unlikely(need_resched())) { 261 + mutex_unlock(&sqd->lock); 262 + cond_resched(); 263 + mutex_lock(&sqd->lock); 264 + } 261 265 continue; 262 266 } 263 267
+4 -1
kernel/exit.c
··· 411 411 tsk->flags |= PF_POSTCOREDUMP; 412 412 core_state = tsk->signal->core_state; 413 413 spin_unlock_irq(&tsk->sighand->siglock); 414 - if (core_state) { 414 + 415 + /* The vhost_worker does not particpate in coredumps */ 416 + if (core_state && 417 + ((tsk->flags & (PF_IO_WORKER | PF_USER_WORKER)) != PF_USER_WORKER)) { 415 418 struct core_thread self; 416 419 417 420 self.task = current;
+5 -8
kernel/fork.c
··· 2336 2336 p->flags &= ~PF_KTHREAD; 2337 2337 if (args->kthread) 2338 2338 p->flags |= PF_KTHREAD; 2339 - if (args->user_worker) 2340 - p->flags |= PF_USER_WORKER; 2341 - if (args->io_thread) { 2339 + if (args->user_worker) { 2342 2340 /* 2343 - * Mark us an IO worker, and block any signal that isn't 2341 + * Mark us a user worker, and block any signal that isn't 2344 2342 * fatal or STOP 2345 2343 */ 2346 - p->flags |= PF_IO_WORKER; 2344 + p->flags |= PF_USER_WORKER; 2347 2345 siginitsetinv(&p->blocked, sigmask(SIGKILL)|sigmask(SIGSTOP)); 2348 2346 } 2347 + if (args->io_thread) 2348 + p->flags |= PF_IO_WORKER; 2349 2349 2350 2350 if (args->name) 2351 2351 strscpy_pad(p->comm, args->name, sizeof(p->comm)); ··· 2516 2516 retval = copy_thread(p, args); 2517 2517 if (retval) 2518 2518 goto bad_fork_cleanup_io; 2519 - 2520 - if (args->ignore_signals) 2521 - ignore_signals(p); 2522 2519 2523 2520 stackleak_task_init(p); 2524 2521
+2 -2
kernel/irq/msi.c
··· 542 542 return ret; 543 543 } 544 544 545 - #ifdef CONFIG_PCI_MSI_ARCH_FALLBACKS 545 + #if defined(CONFIG_PCI_MSI_ARCH_FALLBACKS) || defined(CONFIG_PCI_XEN) 546 546 /** 547 547 * msi_device_populate_sysfs - Populate msi_irqs sysfs entries for a device 548 548 * @dev: The device (PCI, platform etc) which will get sysfs entries ··· 574 574 msi_for_each_desc(desc, dev, MSI_DESC_ALL) 575 575 msi_sysfs_remove_desc(dev, desc); 576 576 } 577 - #endif /* CONFIG_PCI_MSI_ARCH_FALLBACK */ 577 + #endif /* CONFIG_PCI_MSI_ARCH_FALLBACK || CONFIG_PCI_XEN */ 578 578 #else /* CONFIG_SYSFS */ 579 579 static inline int msi_sysfs_create_group(struct device *dev) { return 0; } 580 580 static inline int msi_sysfs_populate_desc(struct device *dev, struct msi_desc *desc) { return 0; }
+21 -7
kernel/locking/lockdep.c
··· 2263 2263 2264 2264 static inline bool usage_skip(struct lock_list *entry, void *mask) 2265 2265 { 2266 + if (entry->class->lock_type == LD_LOCK_NORMAL) 2267 + return false; 2268 + 2266 2269 /* 2267 2270 * Skip local_lock() for irq inversion detection. 2268 2271 * ··· 2292 2289 * As a result, we will skip local_lock(), when we search for irq 2293 2290 * inversion bugs. 2294 2291 */ 2295 - if (entry->class->lock_type == LD_LOCK_PERCPU) { 2296 - if (DEBUG_LOCKS_WARN_ON(entry->class->wait_type_inner < LD_WAIT_CONFIG)) 2297 - return false; 2292 + if (entry->class->lock_type == LD_LOCK_PERCPU && 2293 + DEBUG_LOCKS_WARN_ON(entry->class->wait_type_inner < LD_WAIT_CONFIG)) 2294 + return false; 2298 2295 2299 - return true; 2300 - } 2296 + /* 2297 + * Skip WAIT_OVERRIDE for irq inversion detection -- it's not actually 2298 + * a lock and only used to override the wait_type. 2299 + */ 2301 2300 2302 - return false; 2301 + return true; 2303 2302 } 2304 2303 2305 2304 /* ··· 4773 4768 4774 4769 for (; depth < curr->lockdep_depth; depth++) { 4775 4770 struct held_lock *prev = curr->held_locks + depth; 4776 - u8 prev_inner = hlock_class(prev)->wait_type_inner; 4771 + struct lock_class *class = hlock_class(prev); 4772 + u8 prev_inner = class->wait_type_inner; 4777 4773 4778 4774 if (prev_inner) { 4779 4775 /* ··· 4784 4778 * Also due to trylocks. 4785 4779 */ 4786 4780 curr_inner = min(curr_inner, prev_inner); 4781 + 4782 + /* 4783 + * Allow override for annotations -- this is typically 4784 + * only valid/needed for code that only exists when 4785 + * CONFIG_PREEMPT_RT=n. 4786 + */ 4787 + if (unlikely(class->lock_type == LD_LOCK_WAIT_OVERRIDE)) 4788 + curr_inner = prev_inner; 4787 4789 } 4788 4790 } 4789 4791
+2 -2
kernel/module/main.c
··· 1521 1521 MOD_RODATA, 1522 1522 MOD_RO_AFTER_INIT, 1523 1523 MOD_DATA, 1524 - MOD_INVALID, /* This is needed to match the masks array */ 1524 + MOD_DATA, 1525 1525 }; 1526 1526 static const int init_m_to_mem_type[] = { 1527 1527 MOD_INIT_TEXT, 1528 1528 MOD_INIT_RODATA, 1529 1529 MOD_INVALID, 1530 1530 MOD_INIT_DATA, 1531 - MOD_INVALID, /* This is needed to match the masks array */ 1531 + MOD_INIT_DATA, 1532 1532 }; 1533 1533 1534 1534 for (m = 0; m < ARRAY_SIZE(masks); ++m) {
+5 -3
kernel/signal.c
··· 1368 1368 1369 1369 while_each_thread(p, t) { 1370 1370 task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK); 1371 - count++; 1371 + /* Don't require de_thread to wait for the vhost_worker */ 1372 + if ((t->flags & (PF_IO_WORKER | PF_USER_WORKER)) != PF_USER_WORKER) 1373 + count++; 1372 1374 1373 1375 /* Don't bother with already dead threads */ 1374 1376 if (t->exit_state) ··· 2863 2861 } 2864 2862 2865 2863 /* 2866 - * PF_IO_WORKER threads will catch and exit on fatal signals 2864 + * PF_USER_WORKER threads will catch and exit on fatal signals 2867 2865 * themselves. They have cleanup that must be performed, so 2868 2866 * we cannot call do_exit() on their behalf. 2869 2867 */ 2870 - if (current->flags & PF_IO_WORKER) 2868 + if (current->flags & PF_USER_WORKER) 2871 2869 goto out; 2872 2870 2873 2871 /*
+36 -8
kernel/trace/trace.c
··· 60 60 */ 61 61 bool ring_buffer_expanded; 62 62 63 + #ifdef CONFIG_FTRACE_STARTUP_TEST 63 64 /* 64 65 * We need to change this state when a selftest is running. 65 66 * A selftest will lurk into the ring-buffer to count the ··· 76 75 */ 77 76 bool __read_mostly tracing_selftest_disabled; 78 77 79 - #ifdef CONFIG_FTRACE_STARTUP_TEST 80 78 void __init disable_tracing_selftest(const char *reason) 81 79 { 82 80 if (!tracing_selftest_disabled) { ··· 83 83 pr_info("Ftrace startup test is disabled due to %s\n", reason); 84 84 } 85 85 } 86 + #else 87 + #define tracing_selftest_running 0 88 + #define tracing_selftest_disabled 0 86 89 #endif 87 90 88 91 /* Pipe tracepoints to printk */ ··· 1054 1051 if (!(tr->trace_flags & TRACE_ITER_PRINTK)) 1055 1052 return 0; 1056 1053 1057 - if (unlikely(tracing_selftest_running || tracing_disabled)) 1054 + if (unlikely(tracing_selftest_running && tr == &global_trace)) 1055 + return 0; 1056 + 1057 + if (unlikely(tracing_disabled)) 1058 1058 return 0; 1059 1059 1060 1060 alloc = sizeof(*entry) + size + 2; /* possible \n added */ ··· 2047 2041 return 0; 2048 2042 } 2049 2043 2044 + static int do_run_tracer_selftest(struct tracer *type) 2045 + { 2046 + int ret; 2047 + 2048 + /* 2049 + * Tests can take a long time, especially if they are run one after the 2050 + * other, as does happen during bootup when all the tracers are 2051 + * registered. This could cause the soft lockup watchdog to trigger. 
2052 + */ 2053 + cond_resched(); 2054 + 2055 + tracing_selftest_running = true; 2056 + ret = run_tracer_selftest(type); 2057 + tracing_selftest_running = false; 2058 + 2059 + return ret; 2060 + } 2061 + 2050 2062 static __init int init_trace_selftests(void) 2051 2063 { 2052 2064 struct trace_selftests *p, *n; ··· 2116 2092 { 2117 2093 return 0; 2118 2094 } 2095 + static inline int do_run_tracer_selftest(struct tracer *type) 2096 + { 2097 + return 0; 2098 + } 2119 2099 #endif /* CONFIG_FTRACE_STARTUP_TEST */ 2120 2100 2121 2101 static void add_tracer_options(struct trace_array *tr, struct tracer *t); ··· 2155 2127 2156 2128 mutex_lock(&trace_types_lock); 2157 2129 2158 - tracing_selftest_running = true; 2159 - 2160 2130 for (t = trace_types; t; t = t->next) { 2161 2131 if (strcmp(type->name, t->name) == 0) { 2162 2132 /* already found */ ··· 2183 2157 /* store the tracer for __set_tracer_option */ 2184 2158 type->flags->trace = type; 2185 2159 2186 - ret = run_tracer_selftest(type); 2160 + ret = do_run_tracer_selftest(type); 2187 2161 if (ret < 0) 2188 2162 goto out; 2189 2163 ··· 2192 2166 add_tracer_options(&global_trace, type); 2193 2167 2194 2168 out: 2195 - tracing_selftest_running = false; 2196 2169 mutex_unlock(&trace_types_lock); 2197 2170 2198 2171 if (ret || !default_bootup_tracer) ··· 3515 3490 unsigned int trace_ctx; 3516 3491 char *tbuffer; 3517 3492 3518 - if (tracing_disabled || tracing_selftest_running) 3493 + if (tracing_disabled) 3519 3494 return 0; 3520 3495 3521 3496 /* Don't pollute graph traces with trace_vprintk internals */ ··· 3563 3538 int trace_array_vprintk(struct trace_array *tr, 3564 3539 unsigned long ip, const char *fmt, va_list args) 3565 3540 { 3541 + if (tracing_selftest_running && tr == &global_trace) 3542 + return 0; 3543 + 3566 3544 return __trace_array_vprintk(tr->array_buffer.buffer, ip, fmt, args); 3567 3545 } 3568 3546 ··· 5780 5752 "\t table using the key(s) and value(s) named, and the value of a\n" 5781 5753 "\t sum called 
'hitcount' is incremented. Keys and values\n" 5782 5754 "\t correspond to fields in the event's format description. Keys\n" 5783 - "\t can be any field, or the special string 'stacktrace'.\n" 5755 + "\t can be any field, or the special string 'common_stacktrace'.\n" 5784 5756 "\t Compound keys consisting of up to two fields can be specified\n" 5785 5757 "\t by the 'keys' keyword. Values must correspond to numeric\n" 5786 5758 "\t fields. Sort keys consisting of up to two fields can be\n"
+2
kernel/trace/trace_events.c
··· 194 194 __generic_field(int, common_cpu, FILTER_CPU); 195 195 __generic_field(char *, COMM, FILTER_COMM); 196 196 __generic_field(char *, comm, FILTER_COMM); 197 + __generic_field(char *, stacktrace, FILTER_STACKTRACE); 198 + __generic_field(char *, STACKTRACE, FILTER_STACKTRACE); 197 199 198 200 return ret; 199 201 }
+26 -13
kernel/trace/trace_events_hist.c
··· 1364 1364 if (field->field) 1365 1365 field_name = field->field->name; 1366 1366 else 1367 - field_name = "stacktrace"; 1367 + field_name = "common_stacktrace"; 1368 1368 } else if (field->flags & HIST_FIELD_FL_HITCOUNT) 1369 1369 field_name = "hitcount"; 1370 1370 ··· 2367 2367 hist_data->enable_timestamps = true; 2368 2368 if (*flags & HIST_FIELD_FL_TIMESTAMP_USECS) 2369 2369 hist_data->attrs->ts_in_usecs = true; 2370 - } else if (strcmp(field_name, "stacktrace") == 0) { 2370 + } else if (strcmp(field_name, "common_stacktrace") == 0) { 2371 2371 *flags |= HIST_FIELD_FL_STACKTRACE; 2372 2372 } else if (strcmp(field_name, "common_cpu") == 0) 2373 2373 *flags |= HIST_FIELD_FL_CPU; ··· 2378 2378 if (!field || !field->size) { 2379 2379 /* 2380 2380 * For backward compatibility, if field_name 2381 - * was "cpu", then we treat this the same as 2382 - * common_cpu. This also works for "CPU". 2381 + * was "cpu" or "stacktrace", then we treat this 2382 + * the same as common_cpu and common_stacktrace 2383 + * respectively. This also works for "CPU", and 2384 + * "STACKTRACE". 
2383 2385 */ 2384 2386 if (field && field->filter_type == FILTER_CPU) { 2385 2387 *flags |= HIST_FIELD_FL_CPU; 2388 + } else if (field && field->filter_type == FILTER_STACKTRACE) { 2389 + *flags |= HIST_FIELD_FL_STACKTRACE; 2386 2390 } else { 2387 2391 hist_err(tr, HIST_ERR_FIELD_NOT_FOUND, 2388 2392 errpos(field_name)); ··· 4242 4238 goto out; 4243 4239 } 4244 4240 4245 - /* Some types cannot be a value */ 4246 - if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT | 4247 - HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2 | 4248 - HIST_FIELD_FL_SYM | HIST_FIELD_FL_SYM_OFFSET | 4249 - HIST_FIELD_FL_SYSCALL | HIST_FIELD_FL_STACKTRACE)) { 4250 - hist_err(file->tr, HIST_ERR_BAD_FIELD_MODIFIER, errpos(field_str)); 4251 - ret = -EINVAL; 4241 + /* values and variables should not have some modifiers */ 4242 + if (hist_field->flags & HIST_FIELD_FL_VAR) { 4243 + /* Variable */ 4244 + if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT | 4245 + HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2)) 4246 + goto err; 4247 + } else { 4248 + /* Value */ 4249 + if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT | 4250 + HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2 | 4251 + HIST_FIELD_FL_SYM | HIST_FIELD_FL_SYM_OFFSET | 4252 + HIST_FIELD_FL_SYSCALL | HIST_FIELD_FL_STACKTRACE)) 4253 + goto err; 4252 4254 } 4253 4255 4254 4256 hist_data->fields[val_idx] = hist_field; ··· 4266 4256 ret = -EINVAL; 4267 4257 out: 4268 4258 return ret; 4259 + err: 4260 + hist_err(file->tr, HIST_ERR_BAD_FIELD_MODIFIER, errpos(field_str)); 4261 + return -EINVAL; 4269 4262 } 4270 4263 4271 4264 static int create_val_field(struct hist_trigger_data *hist_data, ··· 5398 5385 if (key_field->field) 5399 5386 seq_printf(m, "%s.stacktrace", key_field->field->name); 5400 5387 else 5401 - seq_puts(m, "stacktrace:\n"); 5388 + seq_puts(m, "common_stacktrace:\n"); 5402 5389 hist_trigger_stacktrace_print(m, 5403 5390 key + key_field->offset, 5404 5391 HIST_STACKTRACE_DEPTH); ··· 5981 5968 
if (field->field) 5982 5969 seq_printf(m, "%s.stacktrace", field->field->name); 5983 5970 else 5984 - seq_puts(m, "stacktrace"); 5971 + seq_puts(m, "common_stacktrace"); 5985 5972 } else 5986 5973 hist_field_print(m, field); 5987 5974 }
+73 -39
kernel/trace/trace_events_user.c
··· 96 96 * these to track enablement sites that are tied to an event. 97 97 */ 98 98 struct user_event_enabler { 99 - struct list_head link; 99 + struct list_head mm_enablers_link; 100 100 struct user_event *event; 101 101 unsigned long addr; 102 102 103 103 /* Track enable bit, flags, etc. Aligned for bitops. */ 104 - unsigned int values; 104 + unsigned long values; 105 105 }; 106 106 107 107 /* Bits 0-5 are for the bit to update upon enable/disable (0-63 allowed) */ ··· 116 116 /* Only duplicate the bit value */ 117 117 #define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK 118 118 119 - #define ENABLE_BITOPS(e) ((unsigned long *)&(e)->values) 119 + #define ENABLE_BITOPS(e) (&(e)->values) 120 + 121 + #define ENABLE_BIT(e) ((int)((e)->values & ENABLE_VAL_BIT_MASK)) 120 122 121 123 /* Used for asynchronous faulting in of pages */ 122 124 struct user_event_enabler_fault { ··· 155 153 #define VALIDATOR_REL (1 << 1) 156 154 157 155 struct user_event_validator { 158 - struct list_head link; 156 + struct list_head user_event_link; 159 157 int offset; 160 158 int flags; 161 159 }; ··· 261 259 262 260 static void user_event_enabler_destroy(struct user_event_enabler *enabler) 263 261 { 264 - list_del_rcu(&enabler->link); 262 + list_del_rcu(&enabler->mm_enablers_link); 265 263 266 264 /* No longer tracking the event via the enabler */ 267 265 refcount_dec(&enabler->event->refcnt); ··· 425 423 426 424 /* Update bit atomically, user tracers must be atomic as well */ 427 425 if (enabler->event && enabler->event->status) 428 - set_bit(enabler->values & ENABLE_VAL_BIT_MASK, ptr); 426 + set_bit(ENABLE_BIT(enabler), ptr); 429 427 else 430 - clear_bit(enabler->values & ENABLE_VAL_BIT_MASK, ptr); 428 + clear_bit(ENABLE_BIT(enabler), ptr); 431 429 432 430 kunmap_local(kaddr); 433 431 unpin_user_pages_dirty_lock(&page, 1, true); ··· 439 437 unsigned long uaddr, unsigned char bit) 440 438 { 441 439 struct user_event_enabler *enabler; 442 - struct user_event_enabler *next; 443 440 444 - 
list_for_each_entry_safe(enabler, next, &mm->enablers, link) { 445 - if (enabler->addr == uaddr && 446 - (enabler->values & ENABLE_VAL_BIT_MASK) == bit) 441 + list_for_each_entry(enabler, &mm->enablers, mm_enablers_link) { 442 + if (enabler->addr == uaddr && ENABLE_BIT(enabler) == bit) 447 443 return true; 448 444 } 449 445 ··· 451 451 static void user_event_enabler_update(struct user_event *user) 452 452 { 453 453 struct user_event_enabler *enabler; 454 - struct user_event_mm *mm = user_event_mm_get_all(user); 455 454 struct user_event_mm *next; 455 + struct user_event_mm *mm; 456 456 int attempt; 457 + 458 + lockdep_assert_held(&event_mutex); 459 + 460 + /* 461 + * We need to build a one-shot list of all the mms that have an 462 + * enabler for the user_event passed in. This list is only valid 463 + * while holding the event_mutex. The only reason for this is due 464 + * to the global mm list being RCU protected and we use methods 465 + * which can wait (mmap_read_lock and pin_user_pages_remote). 466 + * 467 + * NOTE: user_event_mm_get_all() increments the ref count of each 468 + * mm that is added to the list to prevent removal timing windows. 469 + * We must always put each mm after they are used, which may wait. 
470 + */ 471 + mm = user_event_mm_get_all(user); 457 472 458 473 while (mm) { 459 474 next = mm->next; 460 475 mmap_read_lock(mm->mm); 461 - rcu_read_lock(); 462 476 463 - list_for_each_entry_rcu(enabler, &mm->enablers, link) { 477 + list_for_each_entry(enabler, &mm->enablers, mm_enablers_link) { 464 478 if (enabler->event == user) { 465 479 attempt = 0; 466 480 user_event_enabler_write(mm, enabler, true, &attempt); 467 481 } 468 482 } 469 483 470 - rcu_read_unlock(); 471 484 mmap_read_unlock(mm->mm); 472 485 user_event_mm_put(mm); 473 486 mm = next; ··· 508 495 enabler->values = orig->values & ENABLE_VAL_DUP_MASK; 509 496 510 497 refcount_inc(&enabler->event->refcnt); 511 - list_add_rcu(&enabler->link, &mm->enablers); 498 + 499 + /* Enablers not exposed yet, RCU not required */ 500 + list_add(&enabler->mm_enablers_link, &mm->enablers); 512 501 513 502 return true; 514 503 } ··· 529 514 struct user_event_mm *mm; 530 515 531 516 /* 517 + * We use the mm->next field to build a one-shot list from the global 518 + * RCU protected list. To build this list the event_mutex must be held. 519 + * This lets us build a list without requiring allocs that could fail 520 + * when user based events are most wanted for diagnostics. 521 + */ 522 + lockdep_assert_held(&event_mutex); 523 + 524 + /* 532 525 * We do not want to block fork/exec while enablements are being 533 526 * updated, so we use RCU to walk the current tasks that have used 534 527 * user_events ABI for 1 or more events. 
Each enabler found in each ··· 548 525 */ 549 526 rcu_read_lock(); 550 527 551 - list_for_each_entry_rcu(mm, &user_event_mms, link) 552 - list_for_each_entry_rcu(enabler, &mm->enablers, link) 528 + list_for_each_entry_rcu(mm, &user_event_mms, mms_link) { 529 + list_for_each_entry_rcu(enabler, &mm->enablers, mm_enablers_link) { 553 530 if (enabler->event == user) { 554 531 mm->next = found; 555 532 found = user_event_mm_get(mm); 556 533 break; 557 534 } 535 + } 536 + } 558 537 559 538 rcu_read_unlock(); 560 539 561 540 return found; 562 541 } 563 542 564 - static struct user_event_mm *user_event_mm_create(struct task_struct *t) 543 + static struct user_event_mm *user_event_mm_alloc(struct task_struct *t) 565 544 { 566 545 struct user_event_mm *user_mm; 567 - unsigned long flags; 568 546 569 547 user_mm = kzalloc(sizeof(*user_mm), GFP_KERNEL_ACCOUNT); 570 548 ··· 576 552 INIT_LIST_HEAD(&user_mm->enablers); 577 553 refcount_set(&user_mm->refcnt, 1); 578 554 refcount_set(&user_mm->tasks, 1); 579 - 580 - spin_lock_irqsave(&user_event_mms_lock, flags); 581 - list_add_rcu(&user_mm->link, &user_event_mms); 582 - spin_unlock_irqrestore(&user_event_mms_lock, flags); 583 - 584 - t->user_event_mm = user_mm; 585 555 586 556 /* 587 557 * The lifetime of the memory descriptor can slightly outlast ··· 590 572 return user_mm; 591 573 } 592 574 575 + static void user_event_mm_attach(struct user_event_mm *user_mm, struct task_struct *t) 576 + { 577 + unsigned long flags; 578 + 579 + spin_lock_irqsave(&user_event_mms_lock, flags); 580 + list_add_rcu(&user_mm->mms_link, &user_event_mms); 581 + spin_unlock_irqrestore(&user_event_mms_lock, flags); 582 + 583 + t->user_event_mm = user_mm; 584 + } 585 + 593 586 static struct user_event_mm *current_user_event_mm(void) 594 587 { 595 588 struct user_event_mm *user_mm = current->user_event_mm; ··· 608 579 if (user_mm) 609 580 goto inc; 610 581 611 - user_mm = user_event_mm_create(current); 582 + user_mm = user_event_mm_alloc(current); 612 583 
613 584 if (!user_mm) 614 585 goto error; 586 + 587 + user_event_mm_attach(user_mm, current); 615 588 inc: 616 589 refcount_inc(&user_mm->refcnt); 617 590 error: ··· 624 593 { 625 594 struct user_event_enabler *enabler, *next; 626 595 627 - list_for_each_entry_safe(enabler, next, &mm->enablers, link) 596 + list_for_each_entry_safe(enabler, next, &mm->enablers, mm_enablers_link) 628 597 user_event_enabler_destroy(enabler); 629 598 630 599 mmdrop(mm->mm); ··· 661 630 662 631 /* Remove the mm from the list, so it can no longer be enabled */ 663 632 spin_lock_irqsave(&user_event_mms_lock, flags); 664 - list_del_rcu(&mm->link); 633 + list_del_rcu(&mm->mms_link); 665 634 spin_unlock_irqrestore(&user_event_mms_lock, flags); 666 635 667 636 /* ··· 701 670 702 671 void user_event_mm_dup(struct task_struct *t, struct user_event_mm *old_mm) 703 672 { 704 - struct user_event_mm *mm = user_event_mm_create(t); 673 + struct user_event_mm *mm = user_event_mm_alloc(t); 705 674 struct user_event_enabler *enabler; 706 675 707 676 if (!mm) ··· 709 678 710 679 rcu_read_lock(); 711 680 712 - list_for_each_entry_rcu(enabler, &old_mm->enablers, link) 681 + list_for_each_entry_rcu(enabler, &old_mm->enablers, mm_enablers_link) { 713 682 if (!user_event_enabler_dup(enabler, mm)) 714 683 goto error; 684 + } 715 685 716 686 rcu_read_unlock(); 717 687 688 + user_event_mm_attach(mm, t); 718 689 return; 719 690 error: 720 691 rcu_read_unlock(); 721 - user_event_mm_remove(t); 692 + user_event_mm_destroy(mm); 722 693 } 723 694 724 695 static bool current_user_event_enabler_exists(unsigned long uaddr, ··· 781 748 */ 782 749 if (!*write_result) { 783 750 refcount_inc(&enabler->event->refcnt); 784 - list_add_rcu(&enabler->link, &user_mm->enablers); 751 + list_add_rcu(&enabler->mm_enablers_link, &user_mm->enablers); 785 752 } 786 753 787 754 mutex_unlock(&event_mutex); ··· 937 904 struct user_event_validator *validator, *next; 938 905 struct list_head *head = &user->validators; 939 906 940 - 
list_for_each_entry_safe(validator, next, head, link) { 941 - list_del(&validator->link); 907 + list_for_each_entry_safe(validator, next, head, user_event_link) { 908 + list_del(&validator->user_event_link); 942 909 kfree(validator); 943 910 } 944 911 } ··· 992 959 validator->offset = offset; 993 960 994 961 /* Want sequential access when validating */ 995 - list_add_tail(&validator->link, &user->validators); 962 + list_add_tail(&validator->user_event_link, &user->validators); 996 963 997 964 add_field: 998 965 field->type = type; ··· 1382 1349 void *pos, *end = data + len; 1383 1350 u32 loc, offset, size; 1384 1351 1385 - list_for_each_entry(validator, head, link) { 1352 + list_for_each_entry(validator, head, user_event_link) { 1386 1353 pos = data + validator->offset; 1387 1354 1388 1355 /* Already done min_size check, no bounds check here */ ··· 2303 2270 */ 2304 2271 mutex_lock(&event_mutex); 2305 2272 2306 - list_for_each_entry_safe(enabler, next, &mm->enablers, link) 2273 + list_for_each_entry_safe(enabler, next, &mm->enablers, mm_enablers_link) { 2307 2274 if (enabler->addr == reg.disable_addr && 2308 - (enabler->values & ENABLE_VAL_BIT_MASK) == reg.disable_bit) { 2275 + ENABLE_BIT(enabler) == reg.disable_bit) { 2309 2276 set_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler)); 2310 2277 2311 2278 if (!test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler))) ··· 2314 2281 /* Removed at least one */ 2315 2282 ret = 0; 2316 2283 } 2284 + } 2317 2285 2318 2286 mutex_unlock(&event_mutex); 2319 2287
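The trace_events_user.c hunks above widen enabler->values to unsigned long and add ENABLE_BIT(), so the low six bits always name which bit to flip in the user-space word while higher bits carry bookkeeping flags. A standalone sketch of that packing; the FAULTING flag position here is illustrative, not copied from the kernel source:

```c
/* Bits 0-5 select the bit to update in the user-space word (0-63),
 * matching the kernel's ENABLE_VAL_BIT_MASK; bit 6 is an illustrative
 * bookkeeping flag standing in for the real FAULTING/FREEING bits. */
#define ENABLE_VAL_BIT_MASK	0x3FUL
#define ENABLE_VAL_FAULTING	(1UL << 6)

/* Mirrors the new ENABLE_BIT() macro: extract only the bit index,
 * ignoring any flag bits packed above it. */
static int enable_bit(unsigned long values)
{
	return (int)(values & ENABLE_VAL_BIT_MASK);
}

/* Return 'word' with the selected bit set or cleared; a pure-function
 * version of what user_event_enabler_write() does to the pinned page. */
static unsigned long apply_enabler(unsigned long values, int enabled,
				   unsigned long word)
{
	if (enabled)
		return word | (1UL << enable_bit(values));
	return word & ~(1UL << enable_bit(values));
}
```

The widening matters because ENABLE_BITOPS() hands the field straight to set_bit()/test_bit(), which operate on long-sized words; keeping it unsigned int would make those bitops touch memory past the field on 64-bit.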
+2
kernel/trace/trace_osnoise.c
··· 1652 1652 osnoise_stop_tracing(); 1653 1653 notify_new_max_latency(diff); 1654 1654 1655 + wake_up_process(tlat->kthread); 1656 + 1655 1657 return HRTIMER_NORESTART; 1656 1658 } 1657 1659 }
+10
kernel/trace/trace_selftest.c
··· 848 848 } 849 849 850 850 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS 851 + /* 852 + * These tests can take some time to run. Make sure on non PREEMPT 853 + * kernels, we do not trigger the softlockup detector. 854 + */ 855 + cond_resched(); 856 + 851 857 tracing_reset_online_cpus(&tr->array_buffer); 852 858 set_graph_array(tr); 853 859 ··· 874 868 (unsigned long)ftrace_stub_direct_tramp); 875 869 if (ret) 876 870 goto out; 871 + 872 + cond_resched(); 877 873 878 874 ret = register_ftrace_graph(&fgraph_ops); 879 875 if (ret) { ··· 898 890 true); 899 891 if (ret) 900 892 goto out; 893 + 894 + cond_resched(); 901 895 902 896 tracing_start(); 903 897
+61 -31
kernel/vhost_task.c
··· 12 12 VHOST_TASK_FLAGS_STOP, 13 13 }; 14 14 15 + struct vhost_task { 16 + bool (*fn)(void *data); 17 + void *data; 18 + struct completion exited; 19 + unsigned long flags; 20 + struct task_struct *task; 21 + }; 22 + 15 23 static int vhost_task_fn(void *data) 16 24 { 17 25 struct vhost_task *vtsk = data; 18 - int ret; 26 + bool dead = false; 19 27 20 - ret = vtsk->fn(vtsk->data); 28 + for (;;) { 29 + bool did_work; 30 + 31 + /* mb paired w/ vhost_task_stop */ 32 + if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags)) 33 + break; 34 + 35 + if (!dead && signal_pending(current)) { 36 + struct ksignal ksig; 37 + /* 38 + * Calling get_signal will block in SIGSTOP, 39 + * or clear fatal_signal_pending, but remember 40 + * what was set. 41 + * 42 + * This thread won't actually exit until all 43 + * of the file descriptors are closed, and 44 + * the release function is called. 45 + */ 46 + dead = get_signal(&ksig); 47 + if (dead) 48 + clear_thread_flag(TIF_SIGPENDING); 49 + } 50 + 51 + did_work = vtsk->fn(vtsk->data); 52 + if (!did_work) { 53 + set_current_state(TASK_INTERRUPTIBLE); 54 + schedule(); 55 + } 56 + } 57 + 21 58 complete(&vtsk->exited); 22 - do_exit(ret); 59 + do_exit(0); 23 60 } 61 + 62 + /** 63 + * vhost_task_wake - wakeup the vhost_task 64 + * @vtsk: vhost_task to wake 65 + * 66 + * wake up the vhost_task worker thread 67 + */ 68 + void vhost_task_wake(struct vhost_task *vtsk) 69 + { 70 + wake_up_process(vtsk->task); 71 + } 72 + EXPORT_SYMBOL_GPL(vhost_task_wake); 24 73 25 74 /** 26 75 * vhost_task_stop - stop a vhost_task 27 76 * @vtsk: vhost_task to stop 28 77 * 29 - * Callers must call vhost_task_should_stop and return from their worker 30 - * function when it returns true; 78 + * vhost_task_fn ensures the worker thread exits after 79 + * VHOST_TASK_FLAGS_STOP becomes true. 
31 80 */ 32 81 void vhost_task_stop(struct vhost_task *vtsk) 33 82 { 34 - pid_t pid = vtsk->task->pid; 35 - 36 83 set_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags); 37 - wake_up_process(vtsk->task); 84 + vhost_task_wake(vtsk); 38 85 /* 39 86 * Make sure vhost_task_fn is no longer accessing the vhost_task before 40 - * freeing it below. If userspace crashed or exited without closing, 41 - * then the vhost_task->task could already be marked dead so 42 - * kernel_wait will return early. 87 + * freeing it below. 43 88 */ 44 89 wait_for_completion(&vtsk->exited); 45 - /* 46 - * If we are just closing/removing a device and the parent process is 47 - * not exiting then reap the task. 48 - */ 49 - kernel_wait4(pid, NULL, __WCLONE, NULL); 50 90 kfree(vtsk); 51 91 } 52 92 EXPORT_SYMBOL_GPL(vhost_task_stop); 53 93 54 94 /** 55 - * vhost_task_should_stop - should the vhost task return from the work function 56 - * @vtsk: vhost_task to stop 57 - */ 58 - bool vhost_task_should_stop(struct vhost_task *vtsk) 59 - { 60 - return test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags); 61 - } 62 - EXPORT_SYMBOL_GPL(vhost_task_should_stop); 63 - 64 - /** 65 - * vhost_task_create - create a copy of a process to be used by the kernel 66 - * @fn: thread stack 95 + * vhost_task_create - create a copy of a task to be used by the kernel 96 + * @fn: vhost worker function 67 97 * @arg: data to be passed to fn 68 98 * @name: the thread's name 69 99 * ··· 101 71 * failure. The returned task is inactive, and the caller must fire it up 102 72 * through vhost_task_start(). 
103 73 */ 104 - struct vhost_task *vhost_task_create(int (*fn)(void *), void *arg, 74 + struct vhost_task *vhost_task_create(bool (*fn)(void *), void *arg, 105 75 const char *name) 106 76 { 107 77 struct kernel_clone_args args = { 108 - .flags = CLONE_FS | CLONE_UNTRACED | CLONE_VM, 78 + .flags = CLONE_FS | CLONE_UNTRACED | CLONE_VM | 79 + CLONE_THREAD | CLONE_SIGHAND, 109 80 .exit_signal = 0, 110 81 .fn = vhost_task_fn, 111 82 .name = name, 112 83 .user_worker = 1, 113 84 .no_files = 1, 114 - .ignore_signals = 1, 115 85 }; 116 86 struct vhost_task *vtsk; 117 87 struct task_struct *tsk;
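The reworked vhost_task_fn above turns the worker from a run-once function into a loop: leave when VHOST_TASK_FLAGS_STOP is observed, otherwise call fn() and sleep only when it reports no work (signal handling via get_signal() is elided here). The per-iteration decision reduces to a small pure function; the names below are illustrative, not the kernel API:

```c
/* What should the worker thread do next, given the stop flag and
 * whether the last vtsk->fn(vtsk->data) call found work? */
enum task_action {
	TASK_EXIT,	/* VHOST_TASK_FLAGS_STOP set: break out and exit */
	TASK_RUN_AGAIN,	/* fn() did work: loop around immediately */
	TASK_SLEEP	/* idle: set TASK_INTERRUPTIBLE and schedule() */
};

static enum task_action vhost_loop_step(int stop_flag_set, int did_work)
{
	if (stop_flag_set)
		return TASK_EXIT;
	return did_work ? TASK_RUN_AGAIN : TASK_SLEEP;
}
```

vhost_task_wake() pairs with the TASK_SLEEP branch: wake_up_process() gets the thread out of schedule() so it can re-check the stop flag and run fn() again.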
+14 -3
lib/debugobjects.c
··· 126 126 127 127 static void fill_pool(void) 128 128 { 129 - gfp_t gfp = GFP_ATOMIC | __GFP_NORETRY | __GFP_NOWARN; 129 + gfp_t gfp = __GFP_HIGH | __GFP_NOWARN; 130 130 struct debug_obj *obj; 131 131 unsigned long flags; 132 132 ··· 591 591 { 592 592 /* 593 593 * On RT enabled kernels the pool refill must happen in preemptible 594 - * context: 594 + * context -- for !RT kernels we rely on the fact that spinlock_t and 595 + * raw_spinlock_t are basically the same type and this lock-type 596 + * inversion works just fine. 595 597 */ 596 - if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible()) 598 + if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible()) { 599 + /* 600 + * Annotate away the spinlock_t inside raw_spinlock_t warning 601 + * by temporarily raising the wait-type to WAIT_SLEEP, matching 602 + * the preemptible() condition above. 603 + */ 604 + static DEFINE_WAIT_OVERRIDE_MAP(fill_pool_map, LD_WAIT_SLEEP); 605 + lock_map_acquire_try(&fill_pool_map); 597 606 fill_pool(); 607 + lock_map_release(&fill_pool_map); 608 + } 598 609 } 599 610 600 611 static void
+38 -16
net/core/rtnetlink.c
··· 2385 2385 if (tb[IFLA_BROADCAST] && 2386 2386 nla_len(tb[IFLA_BROADCAST]) < dev->addr_len) 2387 2387 return -EINVAL; 2388 + 2389 + if (tb[IFLA_GSO_MAX_SIZE] && 2390 + nla_get_u32(tb[IFLA_GSO_MAX_SIZE]) > dev->tso_max_size) { 2391 + NL_SET_ERR_MSG(extack, "too big gso_max_size"); 2392 + return -EINVAL; 2393 + } 2394 + 2395 + if (tb[IFLA_GSO_MAX_SEGS] && 2396 + (nla_get_u32(tb[IFLA_GSO_MAX_SEGS]) > GSO_MAX_SEGS || 2397 + nla_get_u32(tb[IFLA_GSO_MAX_SEGS]) > dev->tso_max_segs)) { 2398 + NL_SET_ERR_MSG(extack, "too big gso_max_segs"); 2399 + return -EINVAL; 2400 + } 2401 + 2402 + if (tb[IFLA_GRO_MAX_SIZE] && 2403 + nla_get_u32(tb[IFLA_GRO_MAX_SIZE]) > GRO_MAX_SIZE) { 2404 + NL_SET_ERR_MSG(extack, "too big gro_max_size"); 2405 + return -EINVAL; 2406 + } 2407 + 2408 + if (tb[IFLA_GSO_IPV4_MAX_SIZE] && 2409 + nla_get_u32(tb[IFLA_GSO_IPV4_MAX_SIZE]) > dev->tso_max_size) { 2410 + NL_SET_ERR_MSG(extack, "too big gso_ipv4_max_size"); 2411 + return -EINVAL; 2412 + } 2413 + 2414 + if (tb[IFLA_GRO_IPV4_MAX_SIZE] && 2415 + nla_get_u32(tb[IFLA_GRO_IPV4_MAX_SIZE]) > GRO_MAX_SIZE) { 2416 + NL_SET_ERR_MSG(extack, "too big gro_ipv4_max_size"); 2417 + return -EINVAL; 2418 + } 2388 2419 } 2389 2420 2390 2421 if (tb[IFLA_AF_SPEC]) { ··· 2889 2858 if (tb[IFLA_GSO_MAX_SIZE]) { 2890 2859 u32 max_size = nla_get_u32(tb[IFLA_GSO_MAX_SIZE]); 2891 2860 2892 - if (max_size > dev->tso_max_size) { 2893 - err = -EINVAL; 2894 - goto errout; 2895 - } 2896 - 2897 2861 if (dev->gso_max_size ^ max_size) { 2898 2862 netif_set_gso_max_size(dev, max_size); 2899 2863 status |= DO_SETLINK_MODIFIED; ··· 2897 2871 2898 2872 if (tb[IFLA_GSO_MAX_SEGS]) { 2899 2873 u32 max_segs = nla_get_u32(tb[IFLA_GSO_MAX_SEGS]); 2900 - 2901 - if (max_segs > GSO_MAX_SEGS || max_segs > dev->tso_max_segs) { 2902 - err = -EINVAL; 2903 - goto errout; 2904 - } 2905 2874 2906 2875 if (dev->gso_max_segs ^ max_segs) { 2907 2876 netif_set_gso_max_segs(dev, max_segs); ··· 2915 2894 2916 2895 if (tb[IFLA_GSO_IPV4_MAX_SIZE]) { 2917 2896 
u32 max_size = nla_get_u32(tb[IFLA_GSO_IPV4_MAX_SIZE]); 2918 - 2919 - if (max_size > dev->tso_max_size) { 2920 - err = -EINVAL; 2921 - goto errout; 2922 - } 2923 2897 2924 2898 if (dev->gso_ipv4_max_size ^ max_size) { 2925 2899 netif_set_gso_ipv4_max_size(dev, max_size); ··· 3301 3285 struct net_device *dev; 3302 3286 unsigned int num_tx_queues = 1; 3303 3287 unsigned int num_rx_queues = 1; 3288 + int err; 3304 3289 3305 3290 if (tb[IFLA_NUM_TX_QUEUES]) 3306 3291 num_tx_queues = nla_get_u32(tb[IFLA_NUM_TX_QUEUES]); ··· 3337 3320 if (!dev) 3338 3321 return ERR_PTR(-ENOMEM); 3339 3322 3323 + err = validate_linkmsg(dev, tb, extack); 3324 + if (err < 0) { 3325 + free_netdev(dev); 3326 + return ERR_PTR(err); 3327 + } 3328 + 3340 3329 dev_net_set(dev, net); 3341 3330 dev->rtnl_link_ops = ops; 3342 3331 dev->rtnl_link_state = RTNL_LINK_INITIALIZING; 3343 3332 3344 3333 if (tb[IFLA_MTU]) { 3345 3334 u32 mtu = nla_get_u32(tb[IFLA_MTU]); 3346 - int err; 3347 3335 3348 3336 err = dev_validate_mtu(dev, mtu, extack); 3349 3337 if (err) {
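The rtnetlink change above hoists the GSO/GRO bound checks out of do_setlink() into validate_linkmsg(), so rtnl_create_link() can reject oversized attributes before any device state is touched. Stripped of the netlink plumbing, the checks reduce to a predicate like this (limits are passed as plain parameters rather than read from struct net_device):

```c
#include <errno.h>
#include <stdint.h>

/* Reject a gso_max_size/gso_max_segs request that exceeds the device
 * TSO limits, mirroring the new checks in validate_linkmsg(). */
static int validate_gso_limits(uint32_t gso_max_size, uint32_t tso_max_size,
			       uint32_t gso_max_segs, uint32_t tso_max_segs)
{
	if (gso_max_size > tso_max_size)
		return -EINVAL;		/* "too big gso_max_size" */
	if (gso_max_segs > tso_max_segs)
		return -EINVAL;		/* "too big gso_max_segs" */
	return 0;
}
```

Validating before allocation keeps the failure path to free_netdev() plus ERR_PTR(err), instead of unwinding a half-configured device.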
+1 -1
net/core/sock.c
··· 2381 2381 { 2382 2382 u32 max_segs = 1; 2383 2383 2384 - sk_dst_set(sk, dst); 2385 2384 sk->sk_route_caps = dst->dev->features; 2386 2385 if (sk_is_tcp(sk)) 2387 2386 sk->sk_route_caps |= NETIF_F_GSO; ··· 2399 2400 } 2400 2401 } 2401 2402 sk->sk_gso_max_segs = max_segs; 2403 + sk_dst_set(sk, dst); 2402 2404 } 2403 2405 EXPORT_SYMBOL_GPL(sk_setup_caps); 2404 2406
+2
net/ipv4/af_inet.c
··· 586 586 587 587 add_wait_queue(sk_sleep(sk), &wait); 588 588 sk->sk_write_pending += writebias; 589 + sk->sk_wait_pending++; 589 590 590 591 /* Basic assumption: if someone sets sk->sk_err, he _must_ 591 592 * change state of the socket from TCP_SYN_*. ··· 602 601 } 603 602 remove_wait_queue(sk_sleep(sk), &wait); 604 603 sk->sk_write_pending -= writebias; 604 + sk->sk_wait_pending--; 605 605 return timeo; 606 606 } 607 607
+1
net/ipv4/inet_connection_sock.c
··· 1142 1142 if (newsk) { 1143 1143 struct inet_connection_sock *newicsk = inet_csk(newsk); 1144 1144 1145 + newsk->sk_wait_pending = 0; 1145 1146 inet_sk_set_state(newsk, TCP_SYN_RECV); 1146 1147 newicsk->icsk_bind_hash = NULL; 1147 1148 newicsk->icsk_bind2_hash = NULL;
+8 -1
net/ipv4/tcp.c
··· 2961 2961 int old_state = sk->sk_state; 2962 2962 u32 seq; 2963 2963 2964 + /* Deny disconnect if other threads are blocked in sk_wait_event() 2965 + * or inet_wait_for_connect(). 2966 + */ 2967 + if (sk->sk_wait_pending) 2968 + return -EBUSY; 2969 + 2964 2970 if (old_state != TCP_CLOSE) 2965 2971 tcp_set_state(sk, TCP_CLOSE); 2966 2972 ··· 3958 3952 switch (optname) { 3959 3953 case TCP_MAXSEG: 3960 3954 val = tp->mss_cache; 3961 - if (!val && ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) 3955 + if (tp->rx_opt.user_mss && 3956 + ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) 3962 3957 val = tp->rx_opt.user_mss; 3963 3958 if (tp->repair) 3964 3959 val = tp->rx_opt.mss_clamp;
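The tcp.c hunk above adds an sk_wait_pending counter (bumped around the sleep in inet_wait_for_connect(), per the af_inet.c change) so tcp_disconnect() can refuse to run while another thread is still blocked on the socket. The guard itself is tiny; the struct below is a hypothetical stand-in for struct sock, not kernel code:

```c
#include <errno.h>

/* Waiters increment wait_pending before sleeping on the socket and
 * decrement it afterwards; disconnect bails out while any remain. */
struct conn {
	int wait_pending;
	int connected;
};

static int conn_disconnect(struct conn *c)
{
	/* Deny disconnect if other threads are blocked waiting,
	 * mirroring the new -EBUSY check in tcp_disconnect(). */
	if (c->wait_pending)
		return -EBUSY;

	c->connected = 0;
	return 0;
}
```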
+1 -1
net/ipv4/tcp_input.c
··· 4530 4530 } 4531 4531 } 4532 4532 4533 - static void tcp_sack_compress_send_ack(struct sock *sk) 4533 + void tcp_sack_compress_send_ack(struct sock *sk) 4534 4534 { 4535 4535 struct tcp_sock *tp = tcp_sk(sk); 4536 4536
+13 -3
net/ipv4/tcp_timer.c
··· 295 295 void tcp_delack_timer_handler(struct sock *sk) 296 296 { 297 297 struct inet_connection_sock *icsk = inet_csk(sk); 298 + struct tcp_sock *tp = tcp_sk(sk); 298 299 299 - if (((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) || 300 - !(icsk->icsk_ack.pending & ICSK_ACK_TIMER)) 300 + if ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) 301 + return; 302 + 303 + /* Handling the sack compression case */ 304 + if (tp->compressed_ack) { 305 + tcp_mstamp_refresh(tp); 306 + tcp_sack_compress_send_ack(sk); 307 + return; 308 + } 309 + 310 + if (!(icsk->icsk_ack.pending & ICSK_ACK_TIMER)) 301 311 return; 302 312 303 313 if (time_after(icsk->icsk_ack.timeout, jiffies)) { ··· 327 317 inet_csk_exit_pingpong_mode(sk); 328 318 icsk->icsk_ack.ato = TCP_ATO_MIN; 329 319 } 330 - tcp_mstamp_refresh(tcp_sk(sk)); 320 + tcp_mstamp_refresh(tp); 331 321 tcp_send_ack(sk); 332 322 __NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKS); 333 323 }
+78 -62
net/mptcp/protocol.c
··· 90 90 if (err) 91 91 return err; 92 92 93 - msk->first = ssock->sk; 94 - msk->subflow = ssock; 93 + WRITE_ONCE(msk->first, ssock->sk); 94 + WRITE_ONCE(msk->subflow, ssock); 95 95 subflow = mptcp_subflow_ctx(ssock->sk); 96 96 list_add(&subflow->node, &msk->conn_list); 97 97 sock_hold(ssock->sk); ··· 603 603 WRITE_ONCE(msk->ack_seq, msk->ack_seq + 1); 604 604 WRITE_ONCE(msk->rcv_data_fin, 0); 605 605 606 - sk->sk_shutdown |= RCV_SHUTDOWN; 606 + WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN); 607 607 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */ 608 608 609 609 switch (sk->sk_state) { ··· 825 825 mptcp_data_unlock(sk); 826 826 } 827 827 828 + static void mptcp_subflow_joined(struct mptcp_sock *msk, struct sock *ssk) 829 + { 830 + mptcp_subflow_ctx(ssk)->map_seq = READ_ONCE(msk->ack_seq); 831 + WRITE_ONCE(msk->allow_infinite_fallback, false); 832 + mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC); 833 + } 834 + 828 835 static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk) 829 836 { 830 837 struct sock *sk = (struct sock *)msk; ··· 846 839 mptcp_sock_graft(ssk, sk->sk_socket); 847 840 848 841 mptcp_sockopt_sync_locked(msk, ssk); 842 + mptcp_subflow_joined(msk, ssk); 849 843 return true; 850 844 } 851 845 ··· 918 910 /* hopefully temporary hack: propagate shutdown status 919 911 * to msk, when all subflows agree on it 920 912 */ 921 - sk->sk_shutdown |= RCV_SHUTDOWN; 913 + WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN); 922 914 923 915 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */ 924 916 sk->sk_data_ready(sk); ··· 1710 1702 1711 1703 lock_sock(ssk); 1712 1704 msg->msg_flags |= MSG_DONTWAIT; 1713 - msk->connect_flags = O_NONBLOCK; 1714 1705 msk->fastopening = 1; 1715 1706 ret = tcp_sendmsg_fastopen(ssk, msg, copied_syn, len, NULL); 1716 1707 msk->fastopening = 0; ··· 2290 2283 { 2291 2284 if (msk->subflow) { 2292 2285 iput(SOCK_INODE(msk->subflow)); 2293 - msk->subflow = NULL; 
2286 + WRITE_ONCE(msk->subflow, NULL); 2294 2287 } 2295 2288 } 2296 2289 ··· 2427 2420 sock_put(ssk); 2428 2421 2429 2422 if (ssk == msk->first) 2430 - msk->first = NULL; 2423 + WRITE_ONCE(msk->first, NULL); 2431 2424 2432 2425 out: 2433 2426 if (ssk == msk->last_snd) ··· 2534 2527 } 2535 2528 2536 2529 inet_sk_state_store(sk, TCP_CLOSE); 2537 - sk->sk_shutdown = SHUTDOWN_MASK; 2530 + WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK); 2538 2531 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */ 2539 2532 set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags); 2540 2533 ··· 2728 2721 WRITE_ONCE(msk->rmem_released, 0); 2729 2722 msk->timer_ival = TCP_RTO_MIN; 2730 2723 2731 - msk->first = NULL; 2724 + WRITE_ONCE(msk->first, NULL); 2732 2725 inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss; 2733 2726 WRITE_ONCE(msk->csum_enabled, mptcp_is_checksum_enabled(sock_net(sk))); 2734 2727 WRITE_ONCE(msk->allow_infinite_fallback, true); ··· 2966 2959 bool do_cancel_work = false; 2967 2960 int subflows_alive = 0; 2968 2961 2969 - sk->sk_shutdown = SHUTDOWN_MASK; 2962 + WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK); 2970 2963 2971 2964 if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) { 2972 2965 mptcp_listen_inuse_dec(sk); ··· 3046 3039 sock_put(sk); 3047 3040 } 3048 3041 3049 - void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk) 3042 + static void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk) 3050 3043 { 3051 3044 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 3052 3045 const struct ipv6_pinfo *ssk6 = inet6_sk(ssk); ··· 3109 3102 mptcp_pm_data_reset(msk); 3110 3103 mptcp_ca_reset(sk); 3111 3104 3112 - sk->sk_shutdown = 0; 3105 + WRITE_ONCE(sk->sk_shutdown, 0); 3113 3106 sk_error_report(sk); 3114 3107 return 0; 3115 3108 } ··· 3123 3116 } 3124 3117 #endif 3125 3118 3126 - struct sock *mptcp_sk_clone(const struct sock *sk, 3127 - const struct mptcp_options_received *mp_opt, 3128 - struct request_sock *req) 3119 + struct sock *mptcp_sk_clone_init(const struct sock *sk, 
3120 + const struct mptcp_options_received *mp_opt, 3121 + struct sock *ssk, 3122 + struct request_sock *req) 3129 3123 { 3130 3124 struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req); 3131 3125 struct sock *nsk = sk_clone_lock(sk, GFP_ATOMIC); ··· 3145 3137 msk = mptcp_sk(nsk); 3146 3138 msk->local_key = subflow_req->local_key; 3147 3139 msk->token = subflow_req->token; 3148 - msk->subflow = NULL; 3140 + WRITE_ONCE(msk->subflow, NULL); 3149 3141 msk->in_accept_queue = 1; 3150 3142 WRITE_ONCE(msk->fully_established, false); 3151 3143 if (mp_opt->suboptions & OPTION_MPTCP_CSUMREQD) ··· 3158 3150 msk->setsockopt_seq = mptcp_sk(sk)->setsockopt_seq; 3159 3151 3160 3152 sock_reset_flag(nsk, SOCK_RCU_FREE); 3161 - /* will be fully established after successful MPC subflow creation */ 3162 - inet_sk_state_store(nsk, TCP_SYN_RECV); 3163 - 3164 3153 security_inet_csk_clone(nsk, req); 3154 + 3155 + /* this can't race with mptcp_close(), as the msk is 3156 + * not yet exposed to user-space 3157 + */ 3158 + inet_sk_state_store(nsk, TCP_ESTABLISHED); 3159 + 3160 + /* The msk maintains a ref to each subflow in the connections list */ 3161 + WRITE_ONCE(msk->first, ssk); 3162 + list_add(&mptcp_subflow_ctx(ssk)->node, &msk->conn_list); 3163 + sock_hold(ssk); 3164 + 3165 + /* new mpc subflow takes ownership of the newly 3166 + * created mptcp socket 3167 + */ 3168 + mptcp_token_accept(subflow_req, msk); 3169 + 3170 + /* set msk addresses early to ensure mptcp_pm_get_local_id() 3171 + * uses the correct data 3172 + */ 3173 + mptcp_copy_inaddrs(nsk, ssk); 3174 + mptcp_propagate_sndbuf(nsk, ssk); 3175 + 3176 + mptcp_rcv_space_init(msk, ssk); 3165 3177 bh_unlock_sock(nsk); 3166 3178 3167 3179 /* note: the newly allocated socket refcount is 2 now */ ··· 3213 3185 struct socket *listener; 3214 3186 struct sock *newsk; 3215 3187 3216 - listener = msk->subflow; 3188 + listener = READ_ONCE(msk->subflow); 3217 3189 if (WARN_ON_ONCE(!listener)) { 3218 3190 *err = -EINVAL; 
3219 3191 return NULL; ··· 3493 3465 return false; 3494 3466 } 3495 3467 3496 - if (!list_empty(&subflow->node)) 3497 - goto out; 3468 + /* active subflow, already present inside the conn_list */ 3469 + if (!list_empty(&subflow->node)) { 3470 + mptcp_subflow_joined(msk, ssk); 3471 + return true; 3472 + } 3498 3473 3499 3474 if (!mptcp_pm_allow_new_subflow(msk)) 3500 3475 goto err_prohibited; 3501 3476 3502 - /* active connections are already on conn_list. 3503 - * If we can't acquire msk socket lock here, let the release callback 3477 + /* If we can't acquire msk socket lock here, let the release callback 3504 3478 * handle it 3505 3479 */ 3506 3480 mptcp_data_lock(parent); ··· 3525 3495 return false; 3526 3496 } 3527 3497 3528 - subflow->map_seq = READ_ONCE(msk->ack_seq); 3529 - WRITE_ONCE(msk->allow_infinite_fallback, false); 3530 - 3531 - out: 3532 - mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC); 3533 3498 return true; 3534 3499 } 3535 3500 ··· 3642 3617 * acquired the subflow socket lock, too. 3643 3618 */ 3644 3619 if (msk->fastopening) 3645 - err = __inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags, 1); 3620 + err = __inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK, 1); 3646 3621 else 3647 - err = inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags); 3622 + err = inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK); 3648 3623 inet_sk(sk)->defer_connect = inet_sk(ssock->sk)->defer_connect; 3649 3624 3650 3625 /* on successful connect, the msk state will be moved to established by ··· 3657 3632 3658 3633 mptcp_copy_inaddrs(sk, ssock->sk); 3659 3634 3660 - /* unblocking connect, mptcp-level inet_stream_connect will error out 3661 - * without changing the socket state, update it here. 
3635 + /* silence EINPROGRESS and let the caller inet_stream_connect 3636 + * handle the connection in progress 3662 3637 */ 3663 - if (err == -EINPROGRESS) 3664 - sk->sk_socket->state = ssock->state; 3665 - return err; 3638 + return 0; 3666 3639 } 3667 3640 3668 3641 static struct proto mptcp_prot = { ··· 3719 3696 return err; 3720 3697 } 3721 3698 3722 - static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr, 3723 - int addr_len, int flags) 3724 - { 3725 - int ret; 3726 - 3727 - lock_sock(sock->sk); 3728 - mptcp_sk(sock->sk)->connect_flags = flags; 3729 - ret = __inet_stream_connect(sock, uaddr, addr_len, flags, 0); 3730 - release_sock(sock->sk); 3731 - return ret; 3732 - } 3733 - 3734 3699 static int mptcp_listen(struct socket *sock, int backlog) 3735 3700 { 3736 3701 struct mptcp_sock *msk = mptcp_sk(sock->sk); ··· 3763 3752 3764 3753 pr_debug("msk=%p", msk); 3765 3754 3766 - /* buggy applications can call accept on socket states other then LISTEN 3755 + /* Buggy applications can call accept on socket states other than LISTEN 3767 3756 * but no need to allocate the first subflow just to error out. 
3768 3757 */ 3769 - ssock = msk->subflow; 3758 + ssock = READ_ONCE(msk->subflow); 3770 3759 if (!ssock) 3771 3760 return -EINVAL; 3772 3761 ··· 3814 3803 { 3815 3804 struct sock *sk = (struct sock *)msk; 3816 3805 3817 - if (unlikely(sk->sk_shutdown & SEND_SHUTDOWN)) 3818 - return EPOLLOUT | EPOLLWRNORM; 3819 - 3820 3806 if (sk_stream_is_writeable(sk)) 3821 3807 return EPOLLOUT | EPOLLWRNORM; 3822 3808 ··· 3831 3823 struct sock *sk = sock->sk; 3832 3824 struct mptcp_sock *msk; 3833 3825 __poll_t mask = 0; 3826 + u8 shutdown; 3834 3827 int state; 3835 3828 3836 3829 msk = mptcp_sk(sk); ··· 3840 3831 state = inet_sk_state_load(sk); 3841 3832 pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags); 3842 3833 if (state == TCP_LISTEN) { 3843 - if (WARN_ON_ONCE(!msk->subflow || !msk->subflow->sk)) 3834 + struct socket *ssock = READ_ONCE(msk->subflow); 3835 + 3836 + if (WARN_ON_ONCE(!ssock || !ssock->sk)) 3844 3837 return 0; 3845 3838 3846 - return inet_csk_listen_poll(msk->subflow->sk); 3839 + return inet_csk_listen_poll(ssock->sk); 3847 3840 } 3841 + 3842 + shutdown = READ_ONCE(sk->sk_shutdown); 3843 + if (shutdown == SHUTDOWN_MASK || state == TCP_CLOSE) 3844 + mask |= EPOLLHUP; 3845 + if (shutdown & RCV_SHUTDOWN) 3846 + mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP; 3848 3847 3849 3848 if (state != TCP_SYN_SENT && state != TCP_SYN_RECV) { 3850 3849 mask |= mptcp_check_readable(msk); 3851 - mask |= mptcp_check_writeable(msk); 3850 + if (shutdown & SEND_SHUTDOWN) 3851 + mask |= EPOLLOUT | EPOLLWRNORM; 3852 + else 3853 + mask |= mptcp_check_writeable(msk); 3852 3854 } else if (state == TCP_SYN_SENT && inet_sk(sk)->defer_connect) { 3853 3855 /* cf tcp_poll() note about TFO */ 3854 3856 mask |= EPOLLOUT | EPOLLWRNORM; 3855 3857 } 3856 - if (sk->sk_shutdown == SHUTDOWN_MASK || state == TCP_CLOSE) 3857 - mask |= EPOLLHUP; 3858 - if (sk->sk_shutdown & RCV_SHUTDOWN) 3859 - mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP; 3860 3858 3861 3859 /* This barrier is coupled with 
smp_wmb() in __mptcp_error_report() */ 3862 3860 smp_rmb(); ··· 3878 3862 .owner = THIS_MODULE, 3879 3863 .release = inet_release, 3880 3864 .bind = mptcp_bind, 3881 - .connect = mptcp_stream_connect, 3865 + .connect = inet_stream_connect, 3882 3866 .socketpair = sock_no_socketpair, 3883 3867 .accept = mptcp_stream_accept, 3884 3868 .getname = inet_getname, ··· 3973 3957 .owner = THIS_MODULE, 3974 3958 .release = inet6_release, 3975 3959 .bind = mptcp_bind, 3976 - .connect = mptcp_stream_connect, 3960 + .connect = inet_stream_connect, 3977 3961 .socketpair = sock_no_socketpair, 3978 3962 .accept = mptcp_stream_accept, 3979 3963 .getname = inet6_getname,
+9 -6
net/mptcp/protocol.h
··· 297 297 nodelay:1, 298 298 fastopening:1, 299 299 in_accept_queue:1; 300 - int connect_flags; 301 300 struct work_struct work; 302 301 struct sk_buff *ooo_last_skb; 303 302 struct rb_root out_of_order_queue; ··· 305 306 struct list_head rtx_queue; 306 307 struct mptcp_data_frag *first_pending; 307 308 struct list_head join_list; 308 - struct socket *subflow; /* outgoing connect/listener/!mp_capable */ 309 + struct socket *subflow; /* outgoing connect/listener/!mp_capable 310 + * The mptcp ops can safely dereference, using suitable 311 + * ONCE annotation, the subflow outside the socket 312 + * lock as such sock is freed after close(). 313 + */ 309 314 struct sock *first; 310 315 struct mptcp_pm_data pm; 311 316 struct { ··· 616 613 int mptcp_allow_join_id0(const struct net *net); 617 614 unsigned int mptcp_stale_loss_cnt(const struct net *net); 618 615 int mptcp_get_pm_type(const struct net *net); 619 - void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk); 620 616 void mptcp_subflow_fully_established(struct mptcp_subflow_context *subflow, 621 617 const struct mptcp_options_received *mp_opt); 622 618 bool __mptcp_retransmit_pending_data(struct sock *sk); ··· 685 683 int __init mptcp_proto_v6_init(void); 686 684 #endif 687 685 688 - struct sock *mptcp_sk_clone(const struct sock *sk, 689 - const struct mptcp_options_received *mp_opt, 690 - struct request_sock *req); 686 + struct sock *mptcp_sk_clone_init(const struct sock *sk, 687 + const struct mptcp_options_received *mp_opt, 688 + struct sock *ssk, 689 + struct request_sock *req); 691 690 void mptcp_get_options(const struct sk_buff *skb, 692 691 struct mptcp_options_received *mp_opt); 693 692
+1 -27
net/mptcp/subflow.c
··· 815 815 ctx->setsockopt_seq = listener->setsockopt_seq; 816 816 817 817 if (ctx->mp_capable) { 818 - ctx->conn = mptcp_sk_clone(listener->conn, &mp_opt, req); 818 + ctx->conn = mptcp_sk_clone_init(listener->conn, &mp_opt, child, req); 819 819 if (!ctx->conn) 820 820 goto fallback; 821 821 822 822 owner = mptcp_sk(ctx->conn); 823 - 824 - /* this can't race with mptcp_close(), as the msk is 825 - * not yet exposted to user-space 826 - */ 827 - inet_sk_state_store(ctx->conn, TCP_ESTABLISHED); 828 - 829 - /* record the newly created socket as the first msk 830 - * subflow, but don't link it yet into conn_list 831 - */ 832 - WRITE_ONCE(owner->first, child); 833 - 834 - /* new mpc subflow takes ownership of the newly 835 - * created mptcp socket 836 - */ 837 - owner->setsockopt_seq = ctx->setsockopt_seq; 838 823 mptcp_pm_new_connection(owner, child, 1); 839 - mptcp_token_accept(subflow_req, owner); 840 - 841 - /* set msk addresses early to ensure mptcp_pm_get_local_id() 842 - * uses the correct data 843 - */ 844 - mptcp_copy_inaddrs(ctx->conn, child); 845 - mptcp_propagate_sndbuf(ctx->conn, child); 846 - 847 - mptcp_rcv_space_init(owner, child); 848 - list_add(&ctx->node, &owner->conn_list); 849 - sock_hold(child); 850 824 851 825 /* with OoO packets we can reach here without ingress 852 826 * mpc option
+1 -1
net/netlink/af_netlink.c
··· 1779 1779 break; 1780 1780 } 1781 1781 } 1782 - if (put_user(ALIGN(nlk->ngroups / 8, sizeof(u32)), optlen)) 1782 + if (put_user(ALIGN(BITS_TO_BYTES(nlk->ngroups), sizeof(u32)), optlen)) 1783 1783 err = -EFAULT; 1784 1784 netlink_unlock_table(); 1785 1785 return err;
+4 -3
net/netrom/nr_subr.c
··· 123 123 unsigned char *dptr; 124 124 int len, timeout; 125 125 126 - len = NR_NETWORK_LEN + NR_TRANSPORT_LEN; 126 + len = NR_TRANSPORT_LEN; 127 127 128 128 switch (frametype & 0x0F) { 129 129 case NR_CONNREQ: ··· 141 141 return; 142 142 } 143 143 144 - if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL) 144 + skb = alloc_skb(NR_NETWORK_LEN + len, GFP_ATOMIC); 145 + if (!skb) 145 146 return; 146 147 147 148 /* ··· 150 149 */ 151 150 skb_reserve(skb, NR_NETWORK_LEN); 152 151 153 - dptr = skb_put(skb, skb_tailroom(skb)); 152 + dptr = skb_put(skb, len); 154 153 155 154 switch (frametype & 0x0F) { 156 155 case NR_CONNREQ:
+5 -3
net/packet/af_packet.c
··· 3201 3201 3202 3202 lock_sock(sk); 3203 3203 spin_lock(&po->bind_lock); 3204 + if (!proto) 3205 + proto = po->num; 3206 + 3204 3207 rcu_read_lock(); 3205 3208 3206 3209 if (po->fanout) { ··· 3302 3299 memcpy(name, uaddr->sa_data, sizeof(uaddr->sa_data_min)); 3303 3300 name[sizeof(uaddr->sa_data_min)] = 0; 3304 3301 3305 - return packet_do_bind(sk, name, 0, pkt_sk(sk)->num); 3302 + return packet_do_bind(sk, name, 0, 0); 3306 3303 } 3307 3304 3308 3305 static int packet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) ··· 3319 3316 if (sll->sll_family != AF_PACKET) 3320 3317 return -EINVAL; 3321 3318 3322 - return packet_do_bind(sk, NULL, sll->sll_ifindex, 3323 - sll->sll_protocol ? : pkt_sk(sk)->num); 3319 + return packet_do_bind(sk, NULL, sll->sll_ifindex, sll->sll_protocol); 3324 3320 } 3325 3321 3326 3322 static struct proto packet_proto = {
+1 -1
net/packet/diag.c
··· 143 143 rp = nlmsg_data(nlh); 144 144 rp->pdiag_family = AF_PACKET; 145 145 rp->pdiag_type = sk->sk_type; 146 - rp->pdiag_num = ntohs(po->num); 146 + rp->pdiag_num = ntohs(READ_ONCE(po->num)); 147 147 rp->pdiag_ino = sk_ino; 148 148 sock_diag_save_cookie(sk, rp->pdiag_cookie); 149 149
+1
net/rxrpc/af_rxrpc.c
··· 980 980 BUILD_BUG_ON(sizeof(struct rxrpc_skb_priv) > sizeof_field(struct sk_buff, cb)); 981 981 982 982 ret = -ENOMEM; 983 + rxrpc_gen_version_string(); 983 984 rxrpc_call_jar = kmem_cache_create( 984 985 "rxrpc_call_jar", sizeof(struct rxrpc_call), 0, 985 986 SLAB_HWCACHE_ALIGN, NULL);
+1
net/rxrpc/ar-internal.h
··· 1068 1068 /* 1069 1069 * local_event.c 1070 1070 */ 1071 + void rxrpc_gen_version_string(void); 1071 1072 void rxrpc_send_version_request(struct rxrpc_local *local, 1072 1073 struct rxrpc_host_header *hdr, 1073 1074 struct sk_buff *skb);
+10 -1
net/rxrpc/local_event.c
··· 16 16 #include <generated/utsrelease.h> 17 17 #include "ar-internal.h" 18 18 19 - static const char rxrpc_version_string[65] = "linux-" UTS_RELEASE " AF_RXRPC"; 19 + static char rxrpc_version_string[65]; // "linux-" UTS_RELEASE " AF_RXRPC"; 20 + 21 + /* 22 + * Generate the VERSION packet string. 23 + */ 24 + void rxrpc_gen_version_string(void) 25 + { 26 + snprintf(rxrpc_version_string, sizeof(rxrpc_version_string), 27 + "linux-%.49s AF_RXRPC", UTS_RELEASE); 28 + } 20 29 21 30 /* 22 31 * Reply to a version request
+3
net/sched/cls_flower.c
··· 1157 1157 if (option_len > sizeof(struct geneve_opt)) 1158 1158 data_len = option_len - sizeof(struct geneve_opt); 1159 1159 1160 + if (key->enc_opts.len > FLOW_DIS_TUN_OPTS_MAX - 4) 1161 + return -ERANGE; 1162 + 1160 1163 opt = (struct geneve_opt *)&key->enc_opts.data[key->enc_opts.len]; 1161 1164 memset(opt, 0xff, option_len); 1162 1165 opt->length = data_len / 4;
+15 -1
net/sched/sch_api.c
··· 1252 1252 sch->parent = parent; 1253 1253 1254 1254 if (handle == TC_H_INGRESS) { 1255 - sch->flags |= TCQ_F_INGRESS; 1255 + if (!(sch->flags & TCQ_F_INGRESS)) { 1256 + NL_SET_ERR_MSG(extack, 1257 + "Specified parent ID is reserved for ingress and clsact Qdiscs"); 1258 + err = -EINVAL; 1259 + goto err_out3; 1260 + } 1256 1261 handle = TC_H_MAKE(TC_H_INGRESS, 0); 1257 1262 } else { 1258 1263 if (handle == 0) { ··· 1596 1591 NL_SET_ERR_MSG(extack, "Invalid qdisc name"); 1597 1592 return -EINVAL; 1598 1593 } 1594 + if (q->flags & TCQ_F_INGRESS) { 1595 + NL_SET_ERR_MSG(extack, 1596 + "Cannot regraft ingress or clsact Qdiscs"); 1597 + return -EINVAL; 1598 + } 1599 1599 if (q == p || 1600 1600 (p && check_loop(q, p, 0))) { 1601 1601 NL_SET_ERR_MSG(extack, "Qdisc parent/child loop detected"); 1602 1602 return -ELOOP; 1603 + } 1604 + if (clid == TC_H_INGRESS) { 1605 + NL_SET_ERR_MSG(extack, "Ingress cannot graft directly"); 1606 + return -EINVAL; 1603 1607 } 1604 1608 qdisc_refcount_inc(q); 1605 1609 goto graft;
+14 -2
net/sched/sch_ingress.c
··· 80 80 struct net_device *dev = qdisc_dev(sch); 81 81 int err; 82 82 83 + if (sch->parent != TC_H_INGRESS) 84 + return -EOPNOTSUPP; 85 + 83 86 net_inc_ingress_queue(); 84 87 85 88 mini_qdisc_pair_init(&q->miniqp, sch, &dev->miniq_ingress); ··· 103 100 static void ingress_destroy(struct Qdisc *sch) 104 101 { 105 102 struct ingress_sched_data *q = qdisc_priv(sch); 103 + 104 + if (sch->parent != TC_H_INGRESS) 105 + return; 106 106 107 107 tcf_block_put_ext(q->block, sch, &q->block_info); 108 108 net_dec_ingress_queue(); ··· 140 134 .cl_ops = &ingress_class_ops, 141 135 .id = "ingress", 142 136 .priv_size = sizeof(struct ingress_sched_data), 143 - .static_flags = TCQ_F_CPUSTATS, 137 + .static_flags = TCQ_F_INGRESS | TCQ_F_CPUSTATS, 144 138 .init = ingress_init, 145 139 .destroy = ingress_destroy, 146 140 .dump = ingress_dump, ··· 225 219 struct net_device *dev = qdisc_dev(sch); 226 220 int err; 227 221 222 + if (sch->parent != TC_H_CLSACT) 223 + return -EOPNOTSUPP; 224 + 228 225 net_inc_ingress_queue(); 229 226 net_inc_egress_queue(); 230 227 ··· 257 248 { 258 249 struct clsact_sched_data *q = qdisc_priv(sch); 259 250 251 + if (sch->parent != TC_H_CLSACT) 252 + return; 253 + 260 254 tcf_block_put_ext(q->egress_block, sch, &q->egress_block_info); 261 255 tcf_block_put_ext(q->ingress_block, sch, &q->ingress_block_info); 262 256 ··· 281 269 .cl_ops = &clsact_class_ops, 282 270 .id = "clsact", 283 271 .priv_size = sizeof(struct clsact_sched_data), 284 - .static_flags = TCQ_F_CPUSTATS, 272 + .static_flags = TCQ_F_INGRESS | TCQ_F_CPUSTATS, 285 273 .init = clsact_init, 286 274 .destroy = clsact_destroy, 287 275 .dump = ingress_dump,
+6 -3
net/smc/smc_llc.c
··· 578 578 { 579 579 struct smc_buf_desc *buf_next; 580 580 581 - if (!buf_pos || list_is_last(&buf_pos->list, &lgr->rmbs[*buf_lst])) { 581 + if (!buf_pos) 582 + return _smc_llc_get_next_rmb(lgr, buf_lst); 583 + 584 + if (list_is_last(&buf_pos->list, &lgr->rmbs[*buf_lst])) { 582 585 (*buf_lst)++; 583 586 return _smc_llc_get_next_rmb(lgr, buf_lst); 584 587 } ··· 617 614 goto out; 618 615 buf_pos = smc_llc_get_first_rmb(lgr, &buf_lst); 619 616 for (i = 0; i < ext->num_rkeys; i++) { 617 + while (buf_pos && !(buf_pos)->used) 618 + buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos); 620 619 if (!buf_pos) 621 620 break; 622 621 rmb = buf_pos; ··· 628 623 cpu_to_be64((uintptr_t)rmb->cpu_addr) : 629 624 cpu_to_be64((u64)sg_dma_address(rmb->sgt[lnk_idx].sgl)); 630 625 buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos); 631 - while (buf_pos && !(buf_pos)->used) 632 - buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos); 633 626 } 634 627 len += i * sizeof(ext->rt[0]); 635 628 out:
+3 -1
net/tls/tls_strp.c
··· 20 20 strp->stopped = 1; 21 21 22 22 /* Report an error on the lower socket */ 23 - strp->sk->sk_err = -err; 23 + WRITE_ONCE(strp->sk->sk_err, -err); 24 + /* Paired with smp_rmb() in tcp_poll() */ 25 + smp_wmb(); 24 26 sk_error_report(strp->sk); 25 27 } 26 28
+3 -1
net/tls/tls_sw.c
··· 70 70 { 71 71 WARN_ON_ONCE(err >= 0); 72 72 /* sk->sk_err should contain a positive error code. */ 73 - sk->sk_err = -err; 73 + WRITE_ONCE(sk->sk_err, -err); 74 + /* Paired with smp_rmb() in tcp_poll() */ 75 + smp_wmb(); 74 76 sk_error_report(sk); 75 77 } 76 78
+1 -1
tools/gpio/lsgpio.c
··· 94 94 for (i = 0; i < info->num_attrs; i++) { 95 95 if (info->attrs[i].id == GPIO_V2_LINE_ATTR_ID_DEBOUNCE) 96 96 fprintf(stdout, ", debounce_period=%dusec", 97 - info->attrs[0].debounce_period_us); 97 + info->attrs[i].debounce_period_us); 98 98 } 99 99 } 100 100
-13
tools/include/linux/coresight-pmu.h
··· 21 21 */ 22 22 #define CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) (0x10 + (cpu * 2)) 23 23 24 - /* CoreSight trace ID is currently the bottom 7 bits of the value */ 25 - #define CORESIGHT_TRACE_ID_VAL_MASK GENMASK(6, 0) 26 - 27 - /* 28 - * perf record will set the legacy meta data values as unused initially. 29 - * This allows perf report to manage the decoders created when dynamic 30 - * allocation in operation. 31 - */ 32 - #define CORESIGHT_TRACE_ID_UNUSED_FLAG BIT(31) 33 - 34 - /* Value to set for unused trace ID values */ 35 - #define CORESIGHT_TRACE_ID_UNUSED_VAL 0x7F 36 - 37 24 /* 38 25 * Below are the definition of bit offsets for perf option, and works as 39 26 * arbitrary values for all ETM versions.
+1
tools/include/uapi/linux/in.h
··· 163 163 #define IP_MULTICAST_ALL 49 164 164 #define IP_UNICAST_IF 50 165 165 #define IP_LOCAL_PORT_RANGE 51 166 + #define IP_PROTOCOL 52 166 167 167 168 #define MCAST_EXCLUDE 0 168 169 #define MCAST_INCLUDE 1
+3 -2
tools/net/ynl/lib/ynl.py
··· 582 582 print('Unexpected message: ' + repr(gm)) 583 583 continue 584 584 585 - rsp.append(self._decode(gm.raw_attrs, op.attr_set.name) 586 - | gm.fixed_header_attrs) 585 + rsp_msg = self._decode(gm.raw_attrs, op.attr_set.name) 586 + rsp_msg.update(gm.fixed_header_attrs) 587 + rsp.append(rsp_msg) 587 588 588 589 if not rsp: 589 590 return None
+1
tools/perf/Makefile.config
··· 927 927 EXTLIBS += -lstdc++ 928 928 CFLAGS += -DHAVE_CXA_DEMANGLE_SUPPORT 929 929 CXXFLAGS += -DHAVE_CXA_DEMANGLE_SUPPORT 930 + $(call detected,CONFIG_CXX_DEMANGLE) 930 931 endif 931 932 ifdef BUILD_NONDISTRO 932 933 ifeq ($(filter -liberty,$(EXTLIBS)),)
+1 -2
tools/perf/Makefile.perf
··· 181 181 HOSTLD ?= ld 182 182 HOSTAR ?= ar 183 183 CLANG ?= clang 184 - LLVM_STRIP ?= llvm-strip 185 184 186 185 PKG_CONFIG = $(CROSS_COMPILE)pkg-config 187 186 ··· 1082 1083 1083 1084 $(SKEL_TMP_OUT)/%.bpf.o: util/bpf_skel/%.bpf.c $(LIBBPF) | $(SKEL_TMP_OUT) 1084 1085 $(QUIET_CLANG)$(CLANG) -g -O2 -target bpf -Wall -Werror $(BPF_INCLUDE) $(TOOLS_UAPI_INCLUDE) \ 1085 - -c $(filter util/bpf_skel/%.bpf.c,$^) -o $@ && $(LLVM_STRIP) -g $@ 1086 + -c $(filter util/bpf_skel/%.bpf.c,$^) -o $@ 1086 1087 1087 1088 $(SKEL_OUT)/%.skel.h: $(SKEL_TMP_OUT)/%.bpf.o | $(BPFTOOL) 1088 1089 $(QUIET_GENSKEL)$(BPFTOOL) gen skeleton $< > $@
+1 -1
tools/perf/arch/arm/util/pmu.c
··· 12 12 #include "arm-spe.h" 13 13 #include "hisi-ptt.h" 14 14 #include "../../../util/pmu.h" 15 - #include "../cs-etm.h" 15 + #include "../../../util/cs-etm.h" 16 16 17 17 struct perf_event_attr 18 18 *perf_pmu__get_default_config(struct perf_pmu *pmu __maybe_unused)
+1 -1
tools/perf/builtin-ftrace.c
··· 1175 1175 OPT_BOOLEAN('b', "use-bpf", &ftrace.target.use_bpf, 1176 1176 "Use BPF to measure function latency"), 1177 1177 #endif 1178 - OPT_BOOLEAN('n', "--use-nsec", &ftrace.use_nsec, 1178 + OPT_BOOLEAN('n', "use-nsec", &ftrace.use_nsec, 1179 1179 "Use nano-second histogram"), 1180 1180 OPT_PARENT(common_options), 1181 1181 };
+1 -1
tools/perf/util/Build
··· 214 214 215 215 perf-$(CONFIG_LIBCAP) += cap.o 216 216 217 - perf-y += demangle-cxx.o 217 + perf-$(CONFIG_CXX_DEMANGLE) += demangle-cxx.o 218 218 perf-y += demangle-ocaml.o 219 219 perf-y += demangle-java.o 220 220 perf-y += demangle-rust.o
+2 -2
tools/perf/util/bpf_skel/sample_filter.bpf.c
··· 25 25 } __attribute__((preserve_access_index)); 26 26 27 27 /* new kernel perf_mem_data_src definition */ 28 - union perf_mem_data_src__new { 28 + union perf_mem_data_src___new { 29 29 __u64 val; 30 30 struct { 31 31 __u64 mem_op:5, /* type of opcode */ ··· 108 108 if (entry->part == 7) 109 109 return kctx->data->data_src.mem_blk; 110 110 if (entry->part == 8) { 111 - union perf_mem_data_src__new *data = (void *)&kctx->data->data_src; 111 + union perf_mem_data_src___new *data = (void *)&kctx->data->data_src; 112 112 113 113 if (bpf_core_field_exists(data->mem_hops)) 114 114 return data->mem_hops;
+13
tools/perf/util/cs-etm.h
··· 227 227 #define INFO_HEADER_SIZE (sizeof(((struct perf_record_auxtrace_info *)0)->type) + \ 228 228 sizeof(((struct perf_record_auxtrace_info *)0)->reserved__)) 229 229 230 + /* CoreSight trace ID is currently the bottom 7 bits of the value */ 231 + #define CORESIGHT_TRACE_ID_VAL_MASK GENMASK(6, 0) 232 + 233 + /* 234 + * perf record will set the legacy meta data values as unused initially. 235 + * This allows perf report to manage the decoders created when dynamic 236 + * allocation in operation. 237 + */ 238 + #define CORESIGHT_TRACE_ID_UNUSED_FLAG BIT(31) 239 + 240 + /* Value to set for unused trace ID values */ 241 + #define CORESIGHT_TRACE_ID_UNUSED_VAL 0x7F 242 + 230 243 int cs_etm__process_auxtrace_info(union perf_event *event, 231 244 struct perf_session *session); 232 245 struct perf_event_attr *cs_etm_get_default_config(struct perf_pmu *pmu);
+1
tools/perf/util/evsel.c
··· 282 282 evsel->bpf_fd = -1; 283 283 INIT_LIST_HEAD(&evsel->config_terms); 284 284 INIT_LIST_HEAD(&evsel->bpf_counter_list); 285 + INIT_LIST_HEAD(&evsel->bpf_filters); 285 286 perf_evsel__object.init(evsel); 286 287 evsel->sample_size = __evsel__sample_size(attr->sample_type); 287 288 evsel__calc_id_pos(evsel);
+2 -4
tools/perf/util/evsel.h
··· 151 151 */ 152 152 struct bpf_counter_ops *bpf_counter_ops; 153 153 154 - union { 155 - struct list_head bpf_counter_list; /* for perf-stat -b */ 156 - struct list_head bpf_filters; /* for perf-record --filter */ 157 - }; 154 + struct list_head bpf_counter_list; /* for perf-stat -b */ 155 + struct list_head bpf_filters; /* for perf-record --filter */ 158 156 159 157 /* for perf-stat --use-bpf */ 160 158 int bperf_leader_prog_fd;
+27
tools/perf/util/symbol-elf.c
··· 31 31 #include <bfd.h> 32 32 #endif 33 33 34 + #if defined(HAVE_LIBBFD_SUPPORT) || defined(HAVE_CPLUS_DEMANGLE_SUPPORT) 35 + #ifndef DMGL_PARAMS 36 + #define DMGL_PARAMS (1 << 0) /* Include function args */ 37 + #define DMGL_ANSI (1 << 1) /* Include const, volatile, etc */ 38 + #endif 39 + #endif 40 + 34 41 #ifndef EM_AARCH64 35 42 #define EM_AARCH64 183 /* ARM 64 bit */ 36 43 #endif ··· 276 269 static bool want_demangle(bool is_kernel_sym) 277 270 { 278 271 return is_kernel_sym ? symbol_conf.demangle_kernel : symbol_conf.demangle; 272 + } 273 + 274 + /* 275 + * Demangle C++ function signature, typically replaced by demangle-cxx.cpp 276 + * version. 277 + */ 278 + __weak char *cxx_demangle_sym(const char *str __maybe_unused, bool params __maybe_unused, 279 + bool modifiers __maybe_unused) 280 + { 281 + #ifdef HAVE_LIBBFD_SUPPORT 282 + int flags = (params ? DMGL_PARAMS : 0) | (modifiers ? DMGL_ANSI : 0); 283 + 284 + return bfd_demangle(NULL, str, flags); 285 + #elif defined(HAVE_CPLUS_DEMANGLE_SUPPORT) 286 + int flags = (params ? DMGL_PARAMS : 0) | (modifiers ? DMGL_ANSI : 0); 287 + 288 + return cplus_demangle(str, flags); 289 + #else 290 + return NULL; 291 + #endif 279 292 } 280 293 281 294 static char *demangle_sym(struct dso *dso, int kmodule, const char *elf_name)
+1
tools/testing/cxl/Kbuild
··· 6 6 ldflags-y += --wrap=nvdimm_bus_register 7 7 ldflags-y += --wrap=devm_cxl_port_enumerate_dports 8 8 ldflags-y += --wrap=devm_cxl_setup_hdm 9 + ldflags-y += --wrap=devm_cxl_enable_hdm 9 10 ldflags-y += --wrap=devm_cxl_add_passthrough_decoder 10 11 ldflags-y += --wrap=devm_cxl_enumerate_decoders 11 12 ldflags-y += --wrap=cxl_await_media_ready
+1
tools/testing/cxl/test/mem.c
··· 1256 1256 if (rc) 1257 1257 return rc; 1258 1258 1259 + cxlds->media_ready = true; 1259 1260 rc = cxl_dev_state_identify(cxlds); 1260 1261 if (rc) 1261 1262 return rc;
+15
tools/testing/cxl/test/mock.c
··· 149 149 } 150 150 EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_setup_hdm, CXL); 151 151 152 + int __wrap_devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm) 153 + { 154 + int index, rc; 155 + struct cxl_mock_ops *ops = get_cxl_mock_ops(&index); 156 + 157 + if (ops && ops->is_mock_port(port->uport)) 158 + rc = 0; 159 + else 160 + rc = devm_cxl_enable_hdm(port, cxlhdm); 161 + put_cxl_mock_ops(index); 162 + 163 + return rc; 164 + } 165 + EXPORT_SYMBOL_NS_GPL(__wrap_devm_cxl_enable_hdm, CXL); 166 + 152 167 int __wrap_devm_cxl_add_passthrough_decoder(struct cxl_port *port) 153 168 { 154 169 int rc, index;
+24
tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-stack-legacy.tc
··· 1 + #!/bin/sh 2 + # SPDX-License-Identifier: GPL-2.0 3 + # description: event trigger - test inter-event histogram trigger trace action with dynamic string param (legacy stack) 4 + # requires: set_event synthetic_events events/sched/sched_process_exec/hist "long[] stack' >> synthetic_events":README 5 + 6 + fail() { #msg 7 + echo $1 8 + exit_fail 9 + } 10 + 11 + echo "Test create synthetic event with stack" 12 + 13 + # Test the old stacktrace keyword (for backward compatibility) 14 + echo 's:wake_lat pid_t pid; u64 delta; unsigned long[] stack;' > dynamic_events 15 + echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=stacktrace if prev_state == 1||prev_state == 2' >> events/sched/sched_switch/trigger 16 + echo 'hist:keys=prev_pid:delta=common_timestamp.usecs-$ts,s=$st:onmax($delta).trace(wake_lat,prev_pid,$delta,$s)' >> events/sched/sched_switch/trigger 17 + echo 1 > events/synthetic/wake_lat/enable 18 + sleep 1 19 + 20 + if ! grep -q "=>.*sched" trace; then 21 + fail "Failed to create synthetic event with stack" 22 + fi 23 + 24 + exit 0
+2 -3
tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-stack.tc
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # description: event trigger - test inter-event histogram trigger trace action with dynamic string param 4 - # requires: set_event synthetic_events events/sched/sched_process_exec/hist "long[]' >> synthetic_events":README 4 + # requires: set_event synthetic_events events/sched/sched_process_exec/hist "can be any field, or the special string 'common_stacktrace'":README 5 5 6 6 fail() { #msg 7 7 echo $1 ··· 10 10 11 11 echo "Test create synthetic event with stack" 12 12 13 - 14 13 echo 's:wake_lat pid_t pid; u64 delta; unsigned long[] stack;' > dynamic_events 15 - echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=stacktrace if prev_state == 1||prev_state == 2' >> events/sched/sched_switch/trigger 14 + echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=common_stacktrace if prev_state == 1||prev_state == 2' >> events/sched/sched_switch/trigger 16 15 echo 'hist:keys=prev_pid:delta=common_timestamp.usecs-$ts,s=$st:onmax($delta).trace(wake_lat,prev_pid,$delta,$s)' >> events/sched/sched_switch/trigger 17 16 echo 1 > events/synthetic/wake_lat/enable 18 17 sleep 1
+3
tools/testing/selftests/gpio/gpio-sim.sh
··· 389 389 create_bank chip bank 390 390 set_num_lines chip bank 8 391 391 enable_chip chip 392 + DEVNAME=`configfs_dev_name chip` 393 + CHIPNAME=`configfs_chip_name chip bank` 394 + SYSFS_PATH="/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/value" 392 395 $BASE_DIR/gpio-mockup-cdev -b pull-up /dev/`configfs_chip_name chip bank` 0 393 396 test `cat $SYSFS_PATH` = "1" || fail "bias setting does not work" 394 397 remove_chip chip
+1 -1
tools/testing/selftests/net/mptcp/Makefile
··· 9 9 10 10 TEST_GEN_FILES = mptcp_connect pm_nl_ctl mptcp_sockopt mptcp_inq 11 11 12 - TEST_FILES := settings 12 + TEST_FILES := mptcp_lib.sh settings 13 13 14 14 EXTRA_CLEAN := *.pcap 15 15
+4
tools/testing/selftests/net/mptcp/diag.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 4 6 sec=$(date +%s) 5 7 rndh=$(printf %x $sec)-$(mktemp -u XXXXXX) 6 8 ns="ns1-$rndh" ··· 32 30 33 31 ip netns del $ns 34 32 } 33 + 34 + mptcp_lib_check_mptcp 35 35 36 36 ip -Version > /dev/null 2>&1 37 37 if [ $? -ne 0 ];then
+4
tools/testing/selftests/net/mptcp/mptcp_connect.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 4 6 time_start=$(date +%s) 5 7 6 8 optstring="S:R:d:e:l:r:h4cm:f:tC" ··· 142 140 rm -f /tmp/$netns.{nstat,out} 143 141 done 144 142 } 143 + 144 + mptcp_lib_check_mptcp 145 145 146 146 ip -Version > /dev/null 2>&1 147 147 if [ $? -ne 0 ];then
+15 -2
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 10 10 # because it's invoked by variable name, see how the "tests" array is used 11 11 #shellcheck disable=SC2317 12 12 13 + . "$(dirname "${0}")/mptcp_lib.sh" 14 + 13 15 ret=0 14 16 sin="" 15 17 sinfail="" ··· 19 17 cin="" 20 18 cinfail="" 21 19 cinsent="" 20 + tmpfile="" 22 21 cout="" 23 22 capout="" 24 23 ns1="" ··· 141 138 142 139 check_tools() 143 140 { 141 + mptcp_lib_check_mptcp 142 + 144 143 if ! ip -Version &> /dev/null; then 145 144 echo "SKIP: Could not run test without ip tool" 146 145 exit $ksft_skip ··· 182 177 { 183 178 rm -f "$cin" "$cout" "$sinfail" 184 179 rm -f "$sin" "$sout" "$cinsent" "$cinfail" 180 + rm -f "$tmpfile" 185 181 rm -rf $evts_ns1 $evts_ns2 186 182 cleanup_partial 187 183 } ··· 394 388 fail_test 395 389 return 1 396 390 fi 397 - bytes="--bytes=${bytes}" 391 + 392 + # note: BusyBox's "cmp" command doesn't support --bytes 393 + tmpfile=$(mktemp) 394 + head --bytes="$bytes" "$in" > "$tmpfile" 395 + mv "$tmpfile" "$in" 396 + head --bytes="$bytes" "$out" > "$tmpfile" 397 + mv "$tmpfile" "$out" 398 + tmpfile="" 398 399 fi 399 - cmp -l "$in" "$out" ${bytes} | while read -r i a b; do 400 + cmp -l "$in" "$out" | while read -r i a b; do 400 401 local sum=$((0${a} + 0${b})) 401 402 if [ $check_invert -eq 0 ] || [ $sum -ne $((0xff)) ]; then 402 403 echo "[ FAIL ] $what does not match (in, out):"
+40
tools/testing/selftests/net/mptcp/mptcp_lib.sh
··· 1 + #! /bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + readonly KSFT_FAIL=1 5 + readonly KSFT_SKIP=4 6 + 7 + # SELFTESTS_MPTCP_LIB_EXPECT_ALL_FEATURES env var can be set when validating all 8 + # features using the last version of the kernel and the selftests to make sure 9 + # a test is not being skipped by mistake. 10 + mptcp_lib_expect_all_features() { 11 + [ "${SELFTESTS_MPTCP_LIB_EXPECT_ALL_FEATURES:-}" = "1" ] 12 + } 13 + 14 + # $1: msg 15 + mptcp_lib_fail_if_expected_feature() { 16 + if mptcp_lib_expect_all_features; then 17 + echo "ERROR: missing feature: ${*}" 18 + exit ${KSFT_FAIL} 19 + fi 20 + 21 + return 1 22 + } 23 + 24 + # $1: file 25 + mptcp_lib_has_file() { 26 + local f="${1}" 27 + 28 + if [ -f "${f}" ]; then 29 + return 0 30 + fi 31 + 32 + mptcp_lib_fail_if_expected_feature "${f} file not found" 33 + } 34 + 35 + mptcp_lib_check_mptcp() { 36 + if ! mptcp_lib_has_file "/proc/sys/net/mptcp/enabled"; then 37 + echo "SKIP: MPTCP support is not available" 38 + exit ${KSFT_SKIP} 39 + fi 40 + }
+4
tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 4 6 ret=0 5 7 sin="" 6 8 sout="" ··· 85 83 rm -f "$cin" "$cout" 86 84 rm -f "$sin" "$sout" 87 85 } 86 + 87 + mptcp_lib_check_mptcp 88 88 89 89 ip -Version > /dev/null 2>&1 90 90 if [ $? -ne 0 ];then
+4
tools/testing/selftests/net/mptcp/pm_netlink.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 4 6 ksft_skip=4 5 7 ret=0 6 8 ··· 35 33 rm -f $err 36 34 ip netns del $ns1 37 35 } 36 + 37 + mptcp_lib_check_mptcp 38 38 39 39 ip -Version > /dev/null 2>&1 40 40 if [ $? -ne 0 ];then
+4
tools/testing/selftests/net/mptcp/simult_flows.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 4 6 sec=$(date +%s) 5 7 rndh=$(printf %x $sec)-$(mktemp -u XXXXXX) 6 8 ns1="ns1-$rndh" ··· 35 33 ip netns del $netns 36 34 done 37 35 } 36 + 37 + mptcp_lib_check_mptcp 38 38 39 39 ip -Version > /dev/null 2>&1 40 40 if [ $? -ne 0 ];then
+4
tools/testing/selftests/net/mptcp/userspace_pm.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 6 + mptcp_lib_check_mptcp 7 + 4 8 ip -Version > /dev/null 2>&1 5 9 if [ $? -ne 0 ];then 6 10 echo "SKIP: Cannot not run test without ip tool"