Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 6.4-rc5 into driver-core-next

We need the driver core fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+2948 -1641
+1 -1
Documentation/devicetree/bindings/fpga/lattice,sysconfig.yaml
···
 title: Lattice Slave SPI sysCONFIG FPGA manager

 maintainers:
-  - Ivan Bornyakov <i.bornyakov@metrotek.ru>
+  - Vladimir Georgiev <v.georgiev@metrotek.ru>

 description: |
   Lattice sysCONFIG port, which is used for FPGA configuration, among others,
+1 -1
Documentation/devicetree/bindings/fpga/microchip,mpf-spi-fpga-mgr.yaml
···
 title: Microchip Polarfire FPGA manager.

 maintainers:
-  - Ivan Bornyakov <i.bornyakov@metrotek.ru>
+  - Vladimir Georgiev <v.georgiev@metrotek.ru>

 description:
   Device Tree Bindings for Microchip Polarfire FPGA Manager using slave SPI to
+7
Documentation/devicetree/bindings/iio/adc/nxp,imx8qxp-adc.yaml
···
   power-domains:
     maxItems: 1

+  vref-supply:
+    description: |
+      External ADC reference voltage supply on VREFH pad. If VERID[MVI] is
+      set, there are additional, internal reference voltages selectable.
+      VREFH1 is always from VREFH pad.
+
   "#io-channel-cells":
     const: 1

···
         assigned-clocks = <&clk IMX_SC_R_ADC_0>;
         assigned-clock-rates = <24000000>;
         power-domains = <&pd IMX_SC_R_ADC_0>;
+        vref-supply = <&reg_1v8>;
         #io-channel-cells = <1>;
     };
 };
+1 -1
Documentation/devicetree/bindings/iio/adc/renesas,rcar-gyroadc.yaml
···
       of the MAX chips to the GyroADC, while MISO line of each Maxim
       ADC connects to a shared input pin of the GyroADC.
     enum:
-      - adi,7476
+      - adi,ad7476
       - fujitsu,mb88101a
       - maxim,max1162
       - maxim,max11100
+1
Documentation/devicetree/bindings/serial/8250_omap.yaml
···
   dsr-gpios: true
   rng-gpios: true
   dcd-gpios: true
+  rs485-rts-active-high: true
   rts-gpio: true
   power-domains: true
   clock-frequency: true
+1 -1
Documentation/devicetree/bindings/usb/snps,dwc3.yaml
···
     description:
       High-Speed PHY interface selection between UTMI+ and ULPI when the
       DWC_USB3_HSPHY_INTERFACE has value 3.
-    $ref: /schemas/types.yaml#/definitions/uint8
+    $ref: /schemas/types.yaml#/definitions/string
     enum: [utmi, ulpi]

   snps,quirk-frame-length-adjustment:
+19
Documentation/mm/page_table_check.rst
···

 Optionally, build kernel with PAGE_TABLE_CHECK_ENFORCED in order to have page
 table support without extra kernel parameter.
+
+Implementation notes
+====================
+
+We specifically decided not to use VMA information in order to avoid relying on
+MM states (except for limited "struct page" info). The page table check is a
+separate from Linux-MM state machine that verifies that the user accessible
+pages are not falsely shared.
+
+PAGE_TABLE_CHECK depends on EXCLUSIVE_SYSTEM_RAM. The reason is that without
+EXCLUSIVE_SYSTEM_RAM, users are allowed to map arbitrary physical memory
+regions into the userspace via /dev/mem. At the same time, pages may change
+their properties (e.g., from anonymous pages to named pages) while they are
+still being mapped in the userspace, leading to "corruption" detected by the
+page table check.
+
+Even with EXCLUSIVE_SYSTEM_RAM, I/O pages may be still allowed to be mapped via
+/dev/mem. However, these pages are always considered as named pages, so they
+won't break the logic used in the page table check.
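For orientation, the enforcement option referenced in that hunk sits on top of the base checker. A minimal sketch of how it is typically switched on, assuming the usual CONFIG_PAGE_TABLE_CHECK symbol and the page_table_check=on boot parameter described elsewhere in that document (neither appears in the hunk itself):

    # Build-time configuration
    CONFIG_PAGE_TABLE_CHECK=y
    # Either enforce the checks unconditionally...
    CONFIG_PAGE_TABLE_CHECK_ENFORCED=y
    # ...or leave ENFORCED unset and enable it on the kernel command line:
    #   page_table_check=on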
+8 -24
Documentation/netlink/specs/ethtool.yaml
··· 61 61 nested-attributes: bitset-bits 62 62 63 63 - 64 - name: u64-array 65 - attributes: 66 - - 67 - name: u64 68 - type: nest 69 - multi-attr: true 70 - nested-attributes: u64 71 - - 72 - name: s32-array 73 - attributes: 74 - - 75 - name: s32 76 - type: nest 77 - multi-attr: true 78 - nested-attributes: s32 79 - - 80 64 name: string 81 65 attributes: 82 66 - ··· 689 705 type: u8 690 706 - 691 707 name: corrected 692 - type: nest 693 - nested-attributes: u64-array 708 + type: binary 709 + sub-type: u64 694 710 - 695 711 name: uncorr 696 - type: nest 697 - nested-attributes: u64-array 712 + type: binary 713 + sub-type: u64 698 714 - 699 715 name: corr-bits 700 - type: nest 701 - nested-attributes: u64-array 716 + type: binary 717 + sub-type: u64 702 718 - 703 719 name: fec 704 720 attributes: ··· 811 827 type: u32 812 828 - 813 829 name: index 814 - type: nest 815 - nested-attributes: s32-array 830 + type: binary 831 + sub-type: s32 816 832 - 817 833 name: module 818 834 attributes:
+37 -23
Documentation/networking/device_drivers/ethernet/mellanox/mlx5/devlink.rst
··· 40 40 --------------------------------------------- 41 41 The flow steering mode parameter controls the flow steering mode of the driver. 42 42 Two modes are supported: 43 + 43 44 1. 'dmfs' - Device managed flow steering. 44 45 2. 'smfs' - Software/Driver managed flow steering. 45 46 ··· 100 99 By default metadata is enabled on the supported devices in E-switch. 101 100 Metadata is applicable only for E-switch in switchdev mode and 102 101 users may disable it when NONE of the below use cases will be in use: 102 + 103 103 1. HCA is in Dual/multi-port RoCE mode. 104 104 2. VF/SF representor bonding (Usually used for Live migration) 105 105 3. Stacked devices ··· 182 180 183 181 $ devlink health diagnose pci/0000:82:00.0 reporter tx 184 182 185 - NOTE: This command has valid output only when interface is up, otherwise the command has empty output. 183 + .. note:: 184 + This command has valid output only when interface is up, otherwise the command has empty output. 186 185 187 186 - Show number of tx errors indicated, number of recover flows ended successfully, 188 187 is autorecover enabled and graceful period from last recover:: ··· 235 232 236 233 $ devlink health dump show pci/0000:82:00.0 reporter fw 237 234 238 - NOTE: This command can run only on the PF which has fw tracer ownership, 239 - running it on other PF or any VF will return "Operation not permitted". 235 + .. note:: 236 + This command can run only on the PF which has fw tracer ownership, 237 + running it on other PF or any VF will return "Operation not permitted". 240 238 241 239 fw fatal reporter 242 240 ----------------- ··· 260 256 261 257 $ devlink health dump show pci/0000:82:00.1 reporter fw_fatal 262 258 263 - NOTE: This command can run only on PF. 259 + .. note:: 260 + This command can run only on PF. 264 261 265 262 vnic reporter 266 263 ------------- ··· 270 265 them in realtime. 271 266 272 267 Description of the vnic counters: 273 - total_q_under_processor_handle: number of queues in an error state due to 274 - an async error or errored command. 275 - send_queue_priority_update_flow: number of QP/SQ priority/SL update 276 - events. 277 - cq_overrun: number of times CQ entered an error state due to an 278 - overflow. 279 - async_eq_overrun: number of times an EQ mapped to async events was 280 - overrun. 281 - comp_eq_overrun: number of times an EQ mapped to completion events was 282 - overrun. 283 - quota_exceeded_command: number of commands issued and failed due to quota 284 - exceeded. 285 - invalid_command: number of commands issued and failed dues to any reason 286 - other than quota exceeded. 287 - nic_receive_steering_discard: number of packets that completed RX flow 288 - steering but were discarded due to a mismatch in flow table. 268 + 269 + - total_q_under_processor_handle 270 + number of queues in an error state due to 271 + an async error or errored command. 272 + - send_queue_priority_update_flow 273 + number of QP/SQ priority/SL update events. 274 + - cq_overrun 275 + number of times CQ entered an error state due to an overflow. 276 + - async_eq_overrun 277 + number of times an EQ mapped to async events was overrun. 278 + comp_eq_overrun number of times an EQ mapped to completion events was 279 + overrun. 280 + - quota_exceeded_command 281 + number of commands issued and failed due to quota exceeded. 282 + - invalid_command 283 + number of commands issued and failed dues to any reason other than quota 284 + exceeded. 
285 + - nic_receive_steering_discard 286 + number of packets that completed RX flow 287 + steering but were discarded due to a mismatch in flow table. 289 288 290 289 User commands examples: 291 - - Diagnose PF/VF vnic counters 290 + 291 + - Diagnose PF/VF vnic counters:: 292 + 292 293 $ devlink health diagnose pci/0000:82:00.1 reporter vnic 294 + 293 295 - Diagnose representor vnic counters (performed by supplying devlink port of the 294 - representor, which can be obtained via devlink port command) 296 + representor, which can be obtained via devlink port command):: 297 + 295 298 $ devlink health diagnose pci/0000:82:00.1/65537 reporter vnic 296 299 297 - NOTE: This command can run over all interfaces such as PF/VF and representor ports. 300 + .. note:: 301 + This command can run over all interfaces such as PF/VF and representor ports.
+32 -32
Documentation/trace/histogram.rst
··· 35 35 in place of an explicit value field - this is simply a count of 36 36 event hits. If 'values' isn't specified, an implicit 'hitcount' 37 37 value will be automatically created and used as the only value. 38 - Keys can be any field, or the special string 'stacktrace', which 38 + Keys can be any field, or the special string 'common_stacktrace', which 39 39 will use the event's kernel stacktrace as the key. The keywords 40 40 'keys' or 'key' can be used to specify keys, and the keywords 41 41 'values', 'vals', or 'val' can be used to specify values. Compound ··· 54 54 'compatible' if the fields named in the trigger share the same 55 55 number and type of fields and those fields also have the same names. 56 56 Note that any two events always share the compatible 'hitcount' and 57 - 'stacktrace' fields and can therefore be combined using those 57 + 'common_stacktrace' fields and can therefore be combined using those 58 58 fields, however pointless that may be. 59 59 60 60 'hist' triggers add a 'hist' file to each event's subdirectory. ··· 547 547 the hist trigger display symbolic call_sites, we can have the hist 548 548 trigger additionally display the complete set of kernel stack traces 549 549 that led to each call_site. To do that, we simply use the special 550 - value 'stacktrace' for the key parameter:: 550 + value 'common_stacktrace' for the key parameter:: 551 551 552 - # echo 'hist:keys=stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \ 552 + # echo 'hist:keys=common_stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \ 553 553 /sys/kernel/tracing/events/kmem/kmalloc/trigger 554 554 555 555 The above trigger will use the kernel stack trace in effect when an ··· 561 561 every callpath to a kmalloc for a kernel compile):: 562 562 563 563 # cat /sys/kernel/tracing/events/kmem/kmalloc/hist 564 - # trigger info: hist:keys=stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active] 564 + # trigger info: hist:keys=common_stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active] 565 565 566 - { stacktrace: 566 + { common_stacktrace: 567 567 __kmalloc_track_caller+0x10b/0x1a0 568 568 kmemdup+0x20/0x50 569 569 hidraw_report_event+0x8a/0x120 [hid] ··· 581 581 cpu_startup_entry+0x315/0x3e0 582 582 rest_init+0x7c/0x80 583 583 } hitcount: 3 bytes_req: 21 bytes_alloc: 24 584 - { stacktrace: 584 + { common_stacktrace: 585 585 __kmalloc_track_caller+0x10b/0x1a0 586 586 kmemdup+0x20/0x50 587 587 hidraw_report_event+0x8a/0x120 [hid] ··· 596 596 do_IRQ+0x5a/0xf0 597 597 ret_from_intr+0x0/0x30 598 598 } hitcount: 3 bytes_req: 21 bytes_alloc: 24 599 - { stacktrace: 599 + { common_stacktrace: 600 600 kmem_cache_alloc_trace+0xeb/0x150 601 601 aa_alloc_task_context+0x27/0x40 602 602 apparmor_cred_prepare+0x1f/0x50 ··· 608 608 . 609 609 . 610 610 . 
611 - { stacktrace: 611 + { common_stacktrace: 612 612 __kmalloc+0x11b/0x1b0 613 613 i915_gem_execbuffer2+0x6c/0x2c0 [i915] 614 614 drm_ioctl+0x349/0x670 [drm] ··· 616 616 SyS_ioctl+0x81/0xa0 617 617 system_call_fastpath+0x12/0x6a 618 618 } hitcount: 17726 bytes_req: 13944120 bytes_alloc: 19593808 619 - { stacktrace: 619 + { common_stacktrace: 620 620 __kmalloc+0x11b/0x1b0 621 621 load_elf_phdrs+0x76/0xa0 622 622 load_elf_binary+0x102/0x1650 ··· 625 625 SyS_execve+0x3a/0x50 626 626 return_from_execve+0x0/0x23 627 627 } hitcount: 33348 bytes_req: 17152128 bytes_alloc: 20226048 628 - { stacktrace: 628 + { common_stacktrace: 629 629 kmem_cache_alloc_trace+0xeb/0x150 630 630 apparmor_file_alloc_security+0x27/0x40 631 631 security_file_alloc+0x16/0x20 ··· 636 636 SyS_open+0x1e/0x20 637 637 system_call_fastpath+0x12/0x6a 638 638 } hitcount: 4766422 bytes_req: 9532844 bytes_alloc: 38131376 639 - { stacktrace: 639 + { common_stacktrace: 640 640 __kmalloc+0x11b/0x1b0 641 641 seq_buf_alloc+0x1b/0x50 642 642 seq_read+0x2cc/0x370 ··· 1026 1026 First we set up an initially paused stacktrace trigger on the 1027 1027 netif_receive_skb event:: 1028 1028 1029 - # echo 'hist:key=stacktrace:vals=len:pause' > \ 1029 + # echo 'hist:key=common_stacktrace:vals=len:pause' > \ 1030 1030 /sys/kernel/tracing/events/net/netif_receive_skb/trigger 1031 1031 1032 1032 Next, we set up an 'enable_hist' trigger on the sched_process_exec ··· 1060 1060 $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz 1061 1061 1062 1062 # cat /sys/kernel/tracing/events/net/netif_receive_skb/hist 1063 - # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused] 1063 + # trigger info: hist:keys=common_stacktrace:vals=len:sort=hitcount:size=2048 [paused] 1064 1064 1065 - { stacktrace: 1065 + { common_stacktrace: 1066 1066 __netif_receive_skb_core+0x46d/0x990 1067 1067 __netif_receive_skb+0x18/0x60 1068 1068 netif_receive_skb_internal+0x23/0x90 ··· 1079 1079 kthread+0xd2/0xf0 1080 1080 ret_from_fork+0x42/0x70 1081 1081 } hitcount: 85 len: 28884 1082 - { stacktrace: 1082 + { common_stacktrace: 1083 1083 __netif_receive_skb_core+0x46d/0x990 1084 1084 __netif_receive_skb+0x18/0x60 1085 1085 netif_receive_skb_internal+0x23/0x90 ··· 1097 1097 irq_thread+0x11f/0x150 1098 1098 kthread+0xd2/0xf0 1099 1099 } hitcount: 98 len: 664329 1100 - { stacktrace: 1100 + { common_stacktrace: 1101 1101 __netif_receive_skb_core+0x46d/0x990 1102 1102 __netif_receive_skb+0x18/0x60 1103 1103 process_backlog+0xa8/0x150 ··· 1115 1115 inet_sendmsg+0x64/0xa0 1116 1116 sock_sendmsg+0x3d/0x50 1117 1117 } hitcount: 115 len: 13030 1118 - { stacktrace: 1118 + { common_stacktrace: 1119 1119 __netif_receive_skb_core+0x46d/0x990 1120 1120 __netif_receive_skb+0x18/0x60 1121 1121 netif_receive_skb_internal+0x23/0x90 ··· 1142 1142 into the histogram. 
In order to avoid having to set everything up 1143 1143 again, we can just clear the histogram first:: 1144 1144 1145 - # echo 'hist:key=stacktrace:vals=len:clear' >> \ 1145 + # echo 'hist:key=common_stacktrace:vals=len:clear' >> \ 1146 1146 /sys/kernel/tracing/events/net/netif_receive_skb/trigger 1147 1147 1148 1148 Just to verify that it is in fact cleared, here's what we now see in 1149 1149 the hist file:: 1150 1150 1151 1151 # cat /sys/kernel/tracing/events/net/netif_receive_skb/hist 1152 - # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused] 1152 + # trigger info: hist:keys=common_stacktrace:vals=len:sort=hitcount:size=2048 [paused] 1153 1153 1154 1154 Totals: 1155 1155 Hits: 0 ··· 1485 1485 1486 1486 And here's an example that shows how to combine histogram data from 1487 1487 any two events even if they don't share any 'compatible' fields 1488 - other than 'hitcount' and 'stacktrace'. These commands create a 1488 + other than 'hitcount' and 'common_stacktrace'. These commands create a 1489 1489 couple of triggers named 'bar' using those fields:: 1490 1490 1491 - # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \ 1491 + # echo 'hist:name=bar:key=common_stacktrace:val=hitcount' > \ 1492 1492 /sys/kernel/tracing/events/sched/sched_process_fork/trigger 1493 - # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \ 1493 + # echo 'hist:name=bar:key=common_stacktrace:val=hitcount' > \ 1494 1494 /sys/kernel/tracing/events/net/netif_rx/trigger 1495 1495 1496 1496 And displaying the output of either shows some interesting if ··· 1501 1501 1502 1502 # event histogram 1503 1503 # 1504 - # trigger info: hist:name=bar:keys=stacktrace:vals=hitcount:sort=hitcount:size=2048 [active] 1504 + # trigger info: hist:name=bar:keys=common_stacktrace:vals=hitcount:sort=hitcount:size=2048 [active] 1505 1505 # 1506 1506 1507 - { stacktrace: 1507 + { common_stacktrace: 1508 1508 kernel_clone+0x18e/0x330 1509 1509 kernel_thread+0x29/0x30 1510 1510 kthreadd+0x154/0x1b0 1511 1511 ret_from_fork+0x3f/0x70 1512 1512 } hitcount: 1 1513 - { stacktrace: 1513 + { common_stacktrace: 1514 1514 netif_rx_internal+0xb2/0xd0 1515 1515 netif_rx_ni+0x20/0x70 1516 1516 dev_loopback_xmit+0xaa/0xd0 ··· 1528 1528 call_cpuidle+0x3b/0x60 1529 1529 cpu_startup_entry+0x22d/0x310 1530 1530 } hitcount: 1 1531 - { stacktrace: 1531 + { common_stacktrace: 1532 1532 netif_rx_internal+0xb2/0xd0 1533 1533 netif_rx_ni+0x20/0x70 1534 1534 dev_loopback_xmit+0xaa/0xd0 ··· 1543 1543 SyS_sendto+0xe/0x10 1544 1544 entry_SYSCALL_64_fastpath+0x12/0x6a 1545 1545 } hitcount: 2 1546 - { stacktrace: 1546 + { common_stacktrace: 1547 1547 netif_rx_internal+0xb2/0xd0 1548 1548 netif_rx+0x1c/0x60 1549 1549 loopback_xmit+0x6c/0xb0 ··· 1561 1561 sock_sendmsg+0x38/0x50 1562 1562 ___sys_sendmsg+0x14e/0x270 1563 1563 } hitcount: 76 1564 - { stacktrace: 1564 + { common_stacktrace: 1565 1565 netif_rx_internal+0xb2/0xd0 1566 1566 netif_rx+0x1c/0x60 1567 1567 loopback_xmit+0x6c/0xb0 ··· 1579 1579 sock_sendmsg+0x38/0x50 1580 1580 ___sys_sendmsg+0x269/0x270 1581 1581 } hitcount: 77 1582 - { stacktrace: 1582 + { common_stacktrace: 1583 1583 netif_rx_internal+0xb2/0xd0 1584 1584 netif_rx+0x1c/0x60 1585 1585 loopback_xmit+0x6c/0xb0 ··· 1597 1597 sock_sendmsg+0x38/0x50 1598 1598 SYSC_sendto+0xef/0x170 1599 1599 } hitcount: 88 1600 - { stacktrace: 1600 + { common_stacktrace: 1601 1601 kernel_clone+0x18e/0x330 1602 1602 SyS_clone+0x19/0x20 1603 1603 entry_SYSCALL_64_fastpath+0x12/0x6a ··· 1949 1949 1950 1950 # cd /sys/kernel/tracing 1951 1951 # 
echo 's:block_lat pid_t pid; u64 delta; unsigned long[] stack;' > dynamic_events 1952 - # echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=stacktrace if prev_state == 2' >> events/sched/sched_switch/trigger 1952 + # echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=common_stacktrace if prev_state == 2' >> events/sched/sched_switch/trigger 1953 1953 # echo 'hist:keys=prev_pid:delta=common_timestamp.usecs-$ts,s=$st:onmax($delta).trace(block_lat,prev_pid,$delta,$s)' >> events/sched/sched_switch/trigger 1954 1954 # echo 1 > events/synthetic/block_lat/enable 1955 1955 # cat trace
+9 -8
MAINTAINERS
···
 F: drivers/net/ethernet/amazon/

 AMAZON RDMA EFA DRIVER
-M: Gal Pressman <galpress@amazon.com>
+M: Michael Margolin <mrgolin@amazon.com>
+R: Gal Pressman <gal.pressman@linux.dev>
 R: Yossi Leybovich <sleybo@amazon.com>
 L: linux-rdma@vger.kernel.org
 S: Supported

···

 ARASAN NAND CONTROLLER DRIVER
 M: Miquel Raynal <miquel.raynal@bootlin.com>
-M: Naga Sureshkumar Relli <nagasure@xilinx.com>
+R: Michal Simek <michal.simek@amd.com>
 L: linux-mtd@lists.infradead.org
 S: Maintained
 F: Documentation/devicetree/bindings/mtd/arasan,nand-controller.yaml

···

 ARM PRIMECELL PL35X NAND CONTROLLER DRIVER
 M: Miquel Raynal <miquel.raynal@bootlin.com>
-M: Naga Sureshkumar Relli <nagasure@xilinx.com>
+R: Michal Simek <michal.simek@amd.com>
 L: linux-mtd@lists.infradead.org
 S: Maintained
 F: Documentation/devicetree/bindings/mtd/arm,pl353-nand-r2p1.yaml

···

 ARM PRIMECELL PL35X SMC DRIVER
 M: Miquel Raynal <miquel.raynal@bootlin.com>
-M: Naga Sureshkumar Relli <nagasure@xilinx.com>
+R: Michal Simek <michal.simek@amd.com>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/memory-controllers/arm,pl35x-smc.yaml

···

 COMMON INTERNET FILE SYSTEM CLIENT (CIFS and SMB3)
 M: Steve French <sfrench@samba.org>
-R: Paulo Alcantara <pc@cjr.nz> (DFS, global name space)
+R: Paulo Alcantara <pc@manguebit.com> (DFS, global name space)
 R: Ronnie Sahlberg <lsahlber@redhat.com> (directory leases, sparse files)
 R: Shyam Prasad N <sprasad@microsoft.com> (multichannel)
 R: Tom Talpey <tom@talpey.com> (RDMA, smbdirect)

···

 HISILICON ROCE DRIVER
 M: Haoyue Xu <xuhaoyue1@hisilicon.com>
-M: Wenpeng Liang <liangwenpeng@huawei.com>
+M: Junxian Huang <huangjunxian6@hisilicon.com>
 L: linux-rdma@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/infiniband/hisilicon-hns-roce.txt

···
 F: Documentation/process/kernel-docs.rst

 INDUSTRY PACK SUBSYSTEM (IPACK)
-M: Samuel Iglesias Gonsalvez <siglesias@igalia.com>
+M: Vaibhav Gupta <vaibhavgupta40@gmail.com>
 M: Jens Taprogge <jens.taprogge@taprogge.org>
 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 L: industrypack-devel@lists.sourceforge.net

···

 MICROCHIP POLARFIRE FPGA DRIVERS
 M: Conor Dooley <conor.dooley@microchip.com>
-R: Ivan Bornyakov <i.bornyakov@metrotek.ru>
+R: Vladimir Georgiev <v.georgiev@metrotek.ru>
 L: linux-fpga@vger.kernel.org
 S: Supported
 F: Documentation/devicetree/bindings/fpga/microchip,mpf-spi-fpga-mgr.yaml
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 4
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = Hurr durr I'ma ninja sloth

 # *DOCUMENTATION*
+3 -3
arch/arm64/include/asm/kvm_pgtable.h
···
  *
  * The walker will walk the page-table entries corresponding to the input
  * address range specified, visiting entries according to the walker flags.
- * Invalid entries are treated as leaf entries. Leaf entries are reloaded
- * after invoking the walker callback, allowing the walker to descend into
- * a newly installed table.
+ * Invalid entries are treated as leaf entries. The visited page table entry is
+ * reloaded after invoking the walker callback, allowing the walker to descend
+ * into a newly installed table.
  *
  * Returning a negative error code from the walker callback function will
  * terminate the walk immediately with the same error code.
+6
arch/arm64/include/asm/sysreg.h
···
 #define SB_BARRIER_INSN __SYS_BARRIER_INSN(0, 7, 31)

 #define SYS_DC_ISW sys_insn(1, 0, 7, 6, 2)
+#define SYS_DC_IGSW sys_insn(1, 0, 7, 6, 4)
+#define SYS_DC_IGDSW sys_insn(1, 0, 7, 6, 6)
 #define SYS_DC_CSW sys_insn(1, 0, 7, 10, 2)
+#define SYS_DC_CGSW sys_insn(1, 0, 7, 10, 4)
+#define SYS_DC_CGDSW sys_insn(1, 0, 7, 10, 6)
 #define SYS_DC_CISW sys_insn(1, 0, 7, 14, 2)
+#define SYS_DC_CIGSW sys_insn(1, 0, 7, 14, 4)
+#define SYS_DC_CIGDSW sys_insn(1, 0, 7, 14, 6)

 /*
  * Automatically generated definitions for system registers, the
+6 -2
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 412 412 return false; 413 413 } 414 414 415 - static bool kvm_hyp_handle_iabt_low(struct kvm_vcpu *vcpu, u64 *exit_code) 415 + static bool kvm_hyp_handle_memory_fault(struct kvm_vcpu *vcpu, u64 *exit_code) 416 416 { 417 417 if (!__populate_fault_info(vcpu)) 418 418 return true; 419 419 420 420 return false; 421 421 } 422 + static bool kvm_hyp_handle_iabt_low(struct kvm_vcpu *vcpu, u64 *exit_code) 423 + __alias(kvm_hyp_handle_memory_fault); 424 + static bool kvm_hyp_handle_watchpt_low(struct kvm_vcpu *vcpu, u64 *exit_code) 425 + __alias(kvm_hyp_handle_memory_fault); 422 426 423 427 static bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code) 424 428 { 425 - if (!__populate_fault_info(vcpu)) 429 + if (kvm_hyp_handle_memory_fault(vcpu, exit_code)) 426 430 return true; 427 431 428 432 if (static_branch_unlikely(&vgic_v2_cpuif_trap)) {
+7 -7
arch/arm64/kvm/hyp/nvhe/mem_protect.c
··· 575 575 576 576 struct check_walk_data { 577 577 enum pkvm_page_state desired; 578 - enum pkvm_page_state (*get_page_state)(kvm_pte_t pte); 578 + enum pkvm_page_state (*get_page_state)(kvm_pte_t pte, u64 addr); 579 579 }; 580 580 581 581 static int __check_page_state_visitor(const struct kvm_pgtable_visit_ctx *ctx, ··· 583 583 { 584 584 struct check_walk_data *d = ctx->arg; 585 585 586 - if (kvm_pte_valid(ctx->old) && !addr_is_allowed_memory(kvm_pte_to_phys(ctx->old))) 587 - return -EINVAL; 588 - 589 - return d->get_page_state(ctx->old) == d->desired ? 0 : -EPERM; 586 + return d->get_page_state(ctx->old, ctx->addr) == d->desired ? 0 : -EPERM; 590 587 } 591 588 592 589 static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size, ··· 598 601 return kvm_pgtable_walk(pgt, addr, size, &walker); 599 602 } 600 603 601 - static enum pkvm_page_state host_get_page_state(kvm_pte_t pte) 604 + static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr) 602 605 { 606 + if (!addr_is_allowed_memory(addr)) 607 + return PKVM_NOPAGE; 608 + 603 609 if (!kvm_pte_valid(pte) && pte) 604 610 return PKVM_NOPAGE; 605 611 ··· 709 709 return host_stage2_set_owner_locked(addr, size, host_id); 710 710 } 711 711 712 - static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte) 712 + static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte, u64 addr) 713 713 { 714 714 if (!kvm_pte_valid(pte)) 715 715 return PKVM_NOPAGE;
+2
arch/arm64/kvm/hyp/nvhe/switch.c
··· 186 186 [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, 187 187 [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, 188 188 [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, 189 + [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, 189 190 [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, 190 191 }; 191 192 ··· 197 196 [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, 198 197 [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, 199 198 [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, 199 + [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, 200 200 [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, 201 201 }; 202 202
+16 -1
arch/arm64/kvm/hyp/pgtable.c
··· 209 209 .flags = flags, 210 210 }; 211 211 int ret = 0; 212 + bool reload = false; 212 213 kvm_pteref_t childp; 213 214 bool table = kvm_pte_table(ctx.old, level); 214 215 215 - if (table && (ctx.flags & KVM_PGTABLE_WALK_TABLE_PRE)) 216 + if (table && (ctx.flags & KVM_PGTABLE_WALK_TABLE_PRE)) { 216 217 ret = kvm_pgtable_visitor_cb(data, &ctx, KVM_PGTABLE_WALK_TABLE_PRE); 218 + reload = true; 219 + } 217 220 218 221 if (!table && (ctx.flags & KVM_PGTABLE_WALK_LEAF)) { 219 222 ret = kvm_pgtable_visitor_cb(data, &ctx, KVM_PGTABLE_WALK_LEAF); 223 + reload = true; 224 + } 225 + 226 + /* 227 + * Reload the page table after invoking the walker callback for leaf 228 + * entries or after pre-order traversal, to allow the walker to descend 229 + * into a newly installed or replaced table. 230 + */ 231 + if (reload) { 220 232 ctx.old = READ_ONCE(*ptep); 221 233 table = kvm_pte_table(ctx.old, level); 222 234 } ··· 1332 1320 }; 1333 1321 1334 1322 WARN_ON(__kvm_pgtable_walk(&data, mm_ops, ptep, level + 1)); 1323 + 1324 + WARN_ON(mm_ops->page_count(pgtable) != 1); 1325 + mm_ops->put_page(pgtable); 1335 1326 }
+1
arch/arm64/kvm/hyp/vhe/switch.c
··· 110 110 [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, 111 111 [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, 112 112 [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, 113 + [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, 113 114 [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, 114 115 }; 115 116
+23 -35
arch/arm64/kvm/pmu-emul.c
··· 694 694 695 695 static struct arm_pmu *kvm_pmu_probe_armpmu(void) 696 696 { 697 - struct perf_event_attr attr = { }; 698 - struct perf_event *event; 699 - struct arm_pmu *pmu = NULL; 697 + struct arm_pmu *tmp, *pmu = NULL; 698 + struct arm_pmu_entry *entry; 699 + int cpu; 700 700 701 - /* 702 - * Create a dummy event that only counts user cycles. As we'll never 703 - * leave this function with the event being live, it will never 704 - * count anything. But it allows us to probe some of the PMU 705 - * details. Yes, this is terrible. 706 - */ 707 - attr.type = PERF_TYPE_RAW; 708 - attr.size = sizeof(attr); 709 - attr.pinned = 1; 710 - attr.disabled = 0; 711 - attr.exclude_user = 0; 712 - attr.exclude_kernel = 1; 713 - attr.exclude_hv = 1; 714 - attr.exclude_host = 1; 715 - attr.config = ARMV8_PMUV3_PERFCTR_CPU_CYCLES; 716 - attr.sample_period = GENMASK(63, 0); 701 + mutex_lock(&arm_pmus_lock); 717 702 718 - event = perf_event_create_kernel_counter(&attr, -1, current, 719 - kvm_pmu_perf_overflow, &attr); 703 + cpu = smp_processor_id(); 704 + list_for_each_entry(entry, &arm_pmus, entry) { 705 + tmp = entry->arm_pmu; 720 706 721 - if (IS_ERR(event)) { 722 - pr_err_once("kvm: pmu event creation failed %ld\n", 723 - PTR_ERR(event)); 724 - return NULL; 707 + if (cpumask_test_cpu(cpu, &tmp->supported_cpus)) { 708 + pmu = tmp; 709 + break; 710 + } 725 711 } 726 712 727 - if (event->pmu) { 728 - pmu = to_arm_pmu(event->pmu); 729 - if (pmu->pmuver == ID_AA64DFR0_EL1_PMUVer_NI || 730 - pmu->pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) 731 - pmu = NULL; 732 - } 733 - 734 - perf_event_disable(event); 735 - perf_event_release_kernel(event); 713 + mutex_unlock(&arm_pmus_lock); 736 714 737 715 return pmu; 738 716 } ··· 890 912 return -EBUSY; 891 913 892 914 if (!kvm->arch.arm_pmu) { 893 - /* No PMU set, get the default one */ 915 + /* 916 + * No PMU set, get the default one. 917 + * 918 + * The observant among you will notice that the supported_cpus 919 + * mask does not get updated for the default PMU even though it 920 + * is quite possible the selected instance supports only a 921 + * subset of cores in the system. This is intentional, and 922 + * upholds the preexisting behavior on heterogeneous systems 923 + * where vCPUs can be scheduled on any core but the guest 924 + * counters could stop working. 925 + */ 894 926 kvm->arch.arm_pmu = kvm_pmu_probe_armpmu(); 895 927 if (!kvm->arch.arm_pmu) 896 928 return -ENODEV;
+19
arch/arm64/kvm/sys_regs.c
··· 211 211 return true; 212 212 } 213 213 214 + static bool access_dcgsw(struct kvm_vcpu *vcpu, 215 + struct sys_reg_params *p, 216 + const struct sys_reg_desc *r) 217 + { 218 + if (!kvm_has_mte(vcpu->kvm)) { 219 + kvm_inject_undefined(vcpu); 220 + return false; 221 + } 222 + 223 + /* Treat MTE S/W ops as we treat the classic ones: with contempt */ 224 + return access_dcsw(vcpu, p, r); 225 + } 226 + 214 227 static void get_access_mask(const struct sys_reg_desc *r, u64 *mask, u64 *shift) 215 228 { 216 229 switch (r->aarch32_map) { ··· 1769 1756 */ 1770 1757 static const struct sys_reg_desc sys_reg_descs[] = { 1771 1758 { SYS_DESC(SYS_DC_ISW), access_dcsw }, 1759 + { SYS_DESC(SYS_DC_IGSW), access_dcgsw }, 1760 + { SYS_DESC(SYS_DC_IGDSW), access_dcgsw }, 1772 1761 { SYS_DESC(SYS_DC_CSW), access_dcsw }, 1762 + { SYS_DESC(SYS_DC_CGSW), access_dcgsw }, 1763 + { SYS_DESC(SYS_DC_CGDSW), access_dcgsw }, 1773 1764 { SYS_DESC(SYS_DC_CISW), access_dcsw }, 1765 + { SYS_DESC(SYS_DC_CIGSW), access_dcgsw }, 1766 + { SYS_DESC(SYS_DC_CIGDSW), access_dcgsw }, 1774 1767 1775 1768 DBG_BCR_BVR_WCR_WVR_EL1(0), 1776 1769 DBG_BCR_BVR_WCR_WVR_EL1(1),
+21 -6
arch/arm64/kvm/vgic/vgic-init.c
··· 235 235 * KVM io device for the redistributor that belongs to this VCPU. 236 236 */ 237 237 if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) { 238 - mutex_lock(&vcpu->kvm->arch.config_lock); 238 + mutex_lock(&vcpu->kvm->slots_lock); 239 239 ret = vgic_register_redist_iodev(vcpu); 240 - mutex_unlock(&vcpu->kvm->arch.config_lock); 240 + mutex_unlock(&vcpu->kvm->slots_lock); 241 241 } 242 242 return ret; 243 243 } ··· 406 406 407 407 /** 408 408 * vgic_lazy_init: Lazy init is only allowed if the GIC exposed to the guest 409 - * is a GICv2. A GICv3 must be explicitly initialized by the guest using the 409 + * is a GICv2. A GICv3 must be explicitly initialized by userspace using the 410 410 * KVM_DEV_ARM_VGIC_GRP_CTRL KVM_DEVICE group. 411 411 * @kvm: kvm struct pointer 412 412 */ ··· 446 446 int kvm_vgic_map_resources(struct kvm *kvm) 447 447 { 448 448 struct vgic_dist *dist = &kvm->arch.vgic; 449 + gpa_t dist_base; 449 450 int ret = 0; 450 451 451 452 if (likely(vgic_ready(kvm))) 452 453 return 0; 453 454 455 + mutex_lock(&kvm->slots_lock); 454 456 mutex_lock(&kvm->arch.config_lock); 455 457 if (vgic_ready(kvm)) 456 458 goto out; ··· 465 463 else 466 464 ret = vgic_v3_map_resources(kvm); 467 465 468 - if (ret) 466 + if (ret) { 469 467 __kvm_vgic_destroy(kvm); 470 - else 471 - dist->ready = true; 468 + goto out; 469 + } 470 + dist->ready = true; 471 + dist_base = dist->vgic_dist_base; 472 + mutex_unlock(&kvm->arch.config_lock); 473 + 474 + ret = vgic_register_dist_iodev(kvm, dist_base, 475 + kvm_vgic_global_state.type); 476 + if (ret) { 477 + kvm_err("Unable to register VGIC dist MMIO regions\n"); 478 + kvm_vgic_destroy(kvm); 479 + } 480 + mutex_unlock(&kvm->slots_lock); 481 + return ret; 472 482 473 483 out: 474 484 mutex_unlock(&kvm->arch.config_lock); 485 + mutex_unlock(&kvm->slots_lock); 475 486 return ret; 476 487 } 477 488
+10 -4
arch/arm64/kvm/vgic/vgic-its.c
··· 1936 1936 1937 1937 static int vgic_its_create(struct kvm_device *dev, u32 type) 1938 1938 { 1939 + int ret; 1939 1940 struct vgic_its *its; 1940 1941 1941 1942 if (type != KVM_DEV_TYPE_ARM_VGIC_ITS) ··· 1946 1945 if (!its) 1947 1946 return -ENOMEM; 1948 1947 1948 + mutex_lock(&dev->kvm->arch.config_lock); 1949 + 1949 1950 if (vgic_initialized(dev->kvm)) { 1950 - int ret = vgic_v4_init(dev->kvm); 1951 + ret = vgic_v4_init(dev->kvm); 1951 1952 if (ret < 0) { 1953 + mutex_unlock(&dev->kvm->arch.config_lock); 1952 1954 kfree(its); 1953 1955 return ret; 1954 1956 } ··· 1964 1960 1965 1961 /* Yep, even more trickery for lock ordering... */ 1966 1962 #ifdef CONFIG_LOCKDEP 1967 - mutex_lock(&dev->kvm->arch.config_lock); 1968 1963 mutex_lock(&its->cmd_lock); 1969 1964 mutex_lock(&its->its_lock); 1970 1965 mutex_unlock(&its->its_lock); 1971 1966 mutex_unlock(&its->cmd_lock); 1972 - mutex_unlock(&dev->kvm->arch.config_lock); 1973 1967 #endif 1974 1968 1975 1969 its->vgic_its_base = VGIC_ADDR_UNDEF; ··· 1988 1986 1989 1987 dev->private = its; 1990 1988 1991 - return vgic_its_set_abi(its, NR_ITS_ABIS - 1); 1989 + ret = vgic_its_set_abi(its, NR_ITS_ABIS - 1); 1990 + 1991 + mutex_unlock(&dev->kvm->arch.config_lock); 1992 + 1993 + return ret; 1992 1994 } 1993 1995 1994 1996 static void vgic_its_destroy(struct kvm_device *kvm_dev)
+8 -2
arch/arm64/kvm/vgic/vgic-kvm-device.c
··· 102 102 if (get_user(addr, uaddr)) 103 103 return -EFAULT; 104 104 105 - mutex_lock(&kvm->arch.config_lock); 105 + /* 106 + * Since we can't hold config_lock while registering the redistributor 107 + * iodevs, take the slots_lock immediately. 108 + */ 109 + mutex_lock(&kvm->slots_lock); 106 110 switch (attr->attr) { 107 111 case KVM_VGIC_V2_ADDR_TYPE_DIST: 108 112 r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V2); ··· 186 182 if (r) 187 183 goto out; 188 184 185 + mutex_lock(&kvm->arch.config_lock); 189 186 if (write) { 190 187 r = vgic_check_iorange(kvm, *addr_ptr, addr, alignment, size); 191 188 if (!r) ··· 194 189 } else { 195 190 addr = *addr_ptr; 196 191 } 192 + mutex_unlock(&kvm->arch.config_lock); 197 193 198 194 out: 199 - mutex_unlock(&kvm->arch.config_lock); 195 + mutex_unlock(&kvm->slots_lock); 200 196 201 197 if (!r && !write) 202 198 r = put_user(addr, uaddr);
+21 -10
arch/arm64/kvm/vgic/vgic-mmio-v3.c
··· 769 769 struct vgic_io_device *rd_dev = &vcpu->arch.vgic_cpu.rd_iodev; 770 770 struct vgic_redist_region *rdreg; 771 771 gpa_t rd_base; 772 - int ret; 772 + int ret = 0; 773 + 774 + lockdep_assert_held(&kvm->slots_lock); 775 + mutex_lock(&kvm->arch.config_lock); 773 776 774 777 if (!IS_VGIC_ADDR_UNDEF(vgic_cpu->rd_iodev.base_addr)) 775 - return 0; 778 + goto out_unlock; 776 779 777 780 /* 778 781 * We may be creating VCPUs before having set the base address for the ··· 785 782 */ 786 783 rdreg = vgic_v3_rdist_free_slot(&vgic->rd_regions); 787 784 if (!rdreg) 788 - return 0; 785 + goto out_unlock; 789 786 790 - if (!vgic_v3_check_base(kvm)) 791 - return -EINVAL; 787 + if (!vgic_v3_check_base(kvm)) { 788 + ret = -EINVAL; 789 + goto out_unlock; 790 + } 792 791 793 792 vgic_cpu->rdreg = rdreg; 794 793 vgic_cpu->rdreg_index = rdreg->free_index; ··· 804 799 rd_dev->nr_regions = ARRAY_SIZE(vgic_v3_rd_registers); 805 800 rd_dev->redist_vcpu = vcpu; 806 801 807 - mutex_lock(&kvm->slots_lock); 802 + mutex_unlock(&kvm->arch.config_lock); 803 + 808 804 ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, rd_base, 809 805 2 * SZ_64K, &rd_dev->dev); 810 - mutex_unlock(&kvm->slots_lock); 811 - 812 806 if (ret) 813 807 return ret; 814 808 809 + /* Protected by slots_lock */ 815 810 rdreg->free_index++; 816 811 return 0; 812 + 813 + out_unlock: 814 + mutex_unlock(&kvm->arch.config_lock); 815 + return ret; 817 816 } 818 817 819 818 static void vgic_unregister_redist_iodev(struct kvm_vcpu *vcpu) ··· 843 834 /* The current c failed, so iterate over the previous ones. */ 844 835 int i; 845 836 846 - mutex_lock(&kvm->slots_lock); 847 837 for (i = 0; i < c; i++) { 848 838 vcpu = kvm_get_vcpu(kvm, i); 849 839 vgic_unregister_redist_iodev(vcpu); 850 840 } 851 - mutex_unlock(&kvm->slots_lock); 852 841 } 853 842 854 843 return ret; ··· 945 938 { 946 939 int ret; 947 940 941 + mutex_lock(&kvm->arch.config_lock); 948 942 ret = vgic_v3_alloc_redist_region(kvm, index, addr, count); 943 + mutex_unlock(&kvm->arch.config_lock); 949 944 if (ret) 950 945 return ret; 951 946 ··· 959 950 if (ret) { 960 951 struct vgic_redist_region *rdreg; 961 952 953 + mutex_lock(&kvm->arch.config_lock); 962 954 rdreg = vgic_v3_rdist_region_from_index(kvm, index); 963 955 vgic_v3_free_redist_region(rdreg); 956 + mutex_unlock(&kvm->arch.config_lock); 964 957 return ret; 965 958 } 966 959
+2 -7
arch/arm64/kvm/vgic/vgic-mmio.c
··· 1096 1096 enum vgic_type type) 1097 1097 { 1098 1098 struct vgic_io_device *io_device = &kvm->arch.vgic.dist_iodev; 1099 - int ret = 0; 1100 1099 unsigned int len; 1101 1100 1102 1101 switch (type) { ··· 1113 1114 io_device->iodev_type = IODEV_DIST; 1114 1115 io_device->redist_vcpu = NULL; 1115 1116 1116 - mutex_lock(&kvm->slots_lock); 1117 - ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, dist_base_address, 1118 - len, &io_device->dev); 1119 - mutex_unlock(&kvm->slots_lock); 1120 - 1121 - return ret; 1117 + return kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, dist_base_address, 1118 + len, &io_device->dev); 1122 1119 }
-6
arch/arm64/kvm/vgic/vgic-v2.c
··· 312 312 return ret; 313 313 } 314 314 315 - ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V2); 316 - if (ret) { 317 - kvm_err("Unable to register VGIC MMIO regions\n"); 318 - return ret; 319 - } 320 - 321 315 if (!static_branch_unlikely(&vgic_v2_cpuif_trap)) { 322 316 ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base, 323 317 kvm_vgic_global_state.vcpu_base,
-7
arch/arm64/kvm/vgic/vgic-v3.c
··· 539 539 { 540 540 struct vgic_dist *dist = &kvm->arch.vgic; 541 541 struct kvm_vcpu *vcpu; 542 - int ret = 0; 543 542 unsigned long c; 544 543 545 544 kvm_for_each_vcpu(c, vcpu, kvm) { ··· 566 567 */ 567 568 if (!vgic_initialized(kvm)) { 568 569 return -EBUSY; 569 - } 570 - 571 - ret = vgic_register_dist_iodev(kvm, dist->vgic_dist_base, VGIC_V3); 572 - if (ret) { 573 - kvm_err("Unable to register VGICv3 dist MMIO regions\n"); 574 - return ret; 575 570 } 576 571 577 572 if (kvm_vgic_global_state.has_gicv4_1)
+2 -1
arch/arm64/kvm/vgic/vgic-v4.c
··· 184 184 } 185 185 } 186 186 187 - /* Must be called with the kvm lock held */ 188 187 void vgic_v4_configure_vsgis(struct kvm *kvm) 189 188 { 190 189 struct vgic_dist *dist = &kvm->arch.vgic; 191 190 struct kvm_vcpu *vcpu; 192 191 unsigned long i; 192 + 193 + lockdep_assert_held(&kvm->arch.config_lock); 193 194 194 195 kvm_arm_halt_guest(kvm); 195 196
+5 -5
arch/powerpc/crypto/Makefile
···
 sha256-ppc-spe-y := sha256-spe-asm.o sha256-spe-glue.o
 crc32c-vpmsum-y := crc32c-vpmsum_asm.o crc32c-vpmsum_glue.o
 crct10dif-vpmsum-y := crct10dif-vpmsum_asm.o crct10dif-vpmsum_glue.o
-aes-gcm-p10-crypto-y := aes-gcm-p10-glue.o aes-gcm-p10.o ghashp8-ppc.o aesp8-ppc.o
+aes-gcm-p10-crypto-y := aes-gcm-p10-glue.o aes-gcm-p10.o ghashp10-ppc.o aesp10-ppc.o

 quiet_cmd_perl = PERL $@
       cmd_perl = $(PERL) $< $(if $(CONFIG_CPU_LITTLE_ENDIAN), linux-ppc64le, linux-ppc64) > $@

-targets += aesp8-ppc.S ghashp8-ppc.S
+targets += aesp10-ppc.S ghashp10-ppc.S

-$(obj)/aesp8-ppc.S $(obj)/ghashp8-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE
+$(obj)/aesp10-ppc.S $(obj)/ghashp10-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE
         $(call if_changed,perl)

-OBJECT_FILES_NON_STANDARD_aesp8-ppc.o := y
-OBJECT_FILES_NON_STANDARD_ghashp8-ppc.o := y
+OBJECT_FILES_NON_STANDARD_aesp10-ppc.o := y
+OBJECT_FILES_NON_STANDARD_ghashp10-ppc.o := y
+9 -9
arch/powerpc/crypto/aes-gcm-p10-glue.c
··· 30 30 MODULE_LICENSE("GPL v2"); 31 31 MODULE_ALIAS_CRYPTO("aes"); 32 32 33 - asmlinkage int aes_p8_set_encrypt_key(const u8 *userKey, const int bits, 33 + asmlinkage int aes_p10_set_encrypt_key(const u8 *userKey, const int bits, 34 34 void *key); 35 - asmlinkage void aes_p8_encrypt(const u8 *in, u8 *out, const void *key); 35 + asmlinkage void aes_p10_encrypt(const u8 *in, u8 *out, const void *key); 36 36 asmlinkage void aes_p10_gcm_encrypt(u8 *in, u8 *out, size_t len, 37 37 void *rkey, u8 *iv, void *Xi); 38 38 asmlinkage void aes_p10_gcm_decrypt(u8 *in, u8 *out, size_t len, 39 39 void *rkey, u8 *iv, void *Xi); 40 40 asmlinkage void gcm_init_htable(unsigned char htable[256], unsigned char Xi[16]); 41 - asmlinkage void gcm_ghash_p8(unsigned char *Xi, unsigned char *Htable, 41 + asmlinkage void gcm_ghash_p10(unsigned char *Xi, unsigned char *Htable, 42 42 unsigned char *aad, unsigned int alen); 43 43 44 44 struct aes_key { ··· 93 93 gctx->aadLen = alen; 94 94 i = alen & ~0xf; 95 95 if (i) { 96 - gcm_ghash_p8(nXi, hash->Htable+32, aad, i); 96 + gcm_ghash_p10(nXi, hash->Htable+32, aad, i); 97 97 aad += i; 98 98 alen -= i; 99 99 } ··· 102 102 nXi[i] ^= aad[i]; 103 103 104 104 memset(gctx->aad_hash, 0, 16); 105 - gcm_ghash_p8(gctx->aad_hash, hash->Htable+32, nXi, 16); 105 + gcm_ghash_p10(gctx->aad_hash, hash->Htable+32, nXi, 16); 106 106 } else { 107 107 memcpy(gctx->aad_hash, nXi, 16); 108 108 } ··· 115 115 { 116 116 __be32 counter = cpu_to_be32(1); 117 117 118 - aes_p8_encrypt(hash->H, hash->H, rdkey); 118 + aes_p10_encrypt(hash->H, hash->H, rdkey); 119 119 set_subkey(hash->H); 120 120 gcm_init_htable(hash->Htable+32, hash->H); 121 121 ··· 126 126 /* 127 127 * Encrypt counter vector as iv tag and increment counter. 128 128 */ 129 - aes_p8_encrypt(iv, gctx->ivtag, rdkey); 129 + aes_p10_encrypt(iv, gctx->ivtag, rdkey); 130 130 131 131 counter = cpu_to_be32(2); 132 132 *((__be32 *)(iv+12)) = counter; ··· 160 160 /* 161 161 * hash (AAD len and len) 162 162 */ 163 - gcm_ghash_p8(hash->Htable, hash->Htable+32, aclen, 16); 163 + gcm_ghash_p10(hash->Htable, hash->Htable+32, aclen, 16); 164 164 165 165 for (i = 0; i < 16; i++) 166 166 hash->Htable[i] ^= gctx->ivtag[i]; ··· 192 192 int ret; 193 193 194 194 vsx_begin(); 195 - ret = aes_p8_set_encrypt_key(key, keylen * 8, &ctx->enc_key); 195 + ret = aes_p10_set_encrypt_key(key, keylen * 8, &ctx->enc_key); 196 196 vsx_end(); 197 197 198 198 return ret ? -EINVAL : 0;
+1 -1
arch/powerpc/crypto/aesp8-ppc.pl → arch/powerpc/crypto/aesp10-ppc.pl
···
 open STDOUT,"| $^X $xlate $flavour ".shift || die "can't call $xlate: $!";

 $FRAME=8*$SIZE_T;
-$prefix="aes_p8";
+$prefix="aes_p10";

 $sp="r1";
 $vrsave="r12";
+6 -6
arch/powerpc/crypto/ghashp8-ppc.pl → arch/powerpc/crypto/ghashp10-ppc.pl
··· 64 64 65 65 .text 66 66 67 - .globl .gcm_init_p8 67 + .globl .gcm_init_p10 68 68 lis r0,0xfff0 69 69 li r8,0x10 70 70 mfspr $vrsave,256 ··· 110 110 .long 0 111 111 .byte 0,12,0x14,0,0,0,2,0 112 112 .long 0 113 - .size .gcm_init_p8,.-.gcm_init_p8 113 + .size .gcm_init_p10,.-.gcm_init_p10 114 114 115 115 .globl .gcm_init_htable 116 116 lis r0,0xfff0 ··· 237 237 .long 0 238 238 .size .gcm_init_htable,.-.gcm_init_htable 239 239 240 - .globl .gcm_gmult_p8 240 + .globl .gcm_gmult_p10 241 241 lis r0,0xfff8 242 242 li r8,0x10 243 243 mfspr $vrsave,256 ··· 283 283 .long 0 284 284 .byte 0,12,0x14,0,0,0,2,0 285 285 .long 0 286 - .size .gcm_gmult_p8,.-.gcm_gmult_p8 286 + .size .gcm_gmult_p10,.-.gcm_gmult_p10 287 287 288 - .globl .gcm_ghash_p8 288 + .globl .gcm_ghash_p10 289 289 lis r0,0xfff8 290 290 li r8,0x10 291 291 mfspr $vrsave,256 ··· 350 350 .long 0 351 351 .byte 0,12,0x14,0,0,0,4,0 352 352 .long 0 353 - .size .gcm_ghash_p8,.-.gcm_ghash_p8 353 + .size .gcm_ghash_p10,.-.gcm_ghash_p10 354 354 355 355 .asciz "GHASH for PowerISA 2.07, CRYPTOGAMS by <appro\@openssl.org>" 356 356 .align 2
+11 -2
arch/powerpc/platforms/pseries/iommu.c
···
 static void tce_freemulti_pSeriesLP(struct iommu_table *tbl, long tcenum, long npages)
 {
         u64 rc;
+        long rpages = npages;
+        unsigned long limit;

         if (!firmware_has_feature(FW_FEATURE_STUFF_TCE))
                 return tce_free_pSeriesLP(tbl->it_index, tcenum,
                                           tbl->it_page_shift, npages);

-        rc = plpar_tce_stuff((u64)tbl->it_index,
-                             (u64)tcenum << tbl->it_page_shift, 0, npages);
+        do {
+                limit = min_t(unsigned long, rpages, 512);
+
+                rc = plpar_tce_stuff((u64)tbl->it_index,
+                                     (u64)tcenum << tbl->it_page_shift, 0, limit);
+
+                rpages -= limit;
+                tcenum += limit;
+        } while (rpages > 0 && !rc);

         if (rc && printk_ratelimit()) {
                 printk("tce_freemulti_pSeriesLP: plpar_tce_stuff failed\n");
+1 -1
arch/powerpc/xmon/xmon.c
···
 static unsigned long nidump = 16;
 static unsigned long ncsum = 4096;
 static int termch;
-static char tmpstr[128];
+static char tmpstr[KSYM_NAME_LEN];
 static int tracing_enabled;

 static long bus_error_jmp[JMP_BUF_LEN];
+4 -1
arch/riscv/Kconfig
···

 source "kernel/power/Kconfig"

+# Hibernation is only possible on systems where the SBI implementation has
+# marked its reserved memory as not accessible from, or does not run
+# from the same memory as, Linux
 config ARCH_HIBERNATION_POSSIBLE
-        def_bool y
+        def_bool NONPORTABLE

 config ARCH_HIBERNATION_HEADER
         def_bool HIBERNATION
+4
arch/riscv/errata/Makefile
···
+ifdef CONFIG_RELOCATABLE
+KBUILD_CFLAGS += -fno-pie
+endif
+
 obj-$(CONFIG_ERRATA_SIFIVE) += sifive/
 obj-$(CONFIG_ERRATA_THEAD) += thead/
+3
arch/riscv/include/asm/hugetlb.h
···
                              unsigned long addr, pte_t *ptep,
                              pte_t pte, int dirty);

+#define __HAVE_ARCH_HUGE_PTEP_GET
+pte_t huge_ptep_get(pte_t *ptep);
+
 pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
 #define arch_make_huge_pte arch_make_huge_pte

+7
arch/riscv/include/asm/perf_event.h
···


 #include <linux/perf_event.h>
 #define perf_arch_bpf_user_pt_regs(regs) (struct user_regs_struct *)regs
+
+#define perf_arch_fetch_caller_regs(regs, __ip) { \
+        (regs)->epc = (__ip); \
+        (regs)->s0 = (unsigned long) __builtin_frame_address(0); \
+        (regs)->sp = current_stack_pointer; \
+        (regs)->status = SR_PP; \
+}
 #endif /* _ASM_RISCV_PERF_EVENT_H */
+4
arch/riscv/kernel/Makefile
···
 CFLAGS_REMOVE_alternative.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_cpufeature.o = $(CC_FLAGS_FTRACE)
 endif
+ifdef CONFIG_RELOCATABLE
+CFLAGS_alternative.o += -fno-pie
+CFLAGS_cpufeature.o += -fno-pie
+endif
 ifdef CONFIG_KASAN
 KASAN_SANITIZE_alternative.o := n
 KASAN_SANITIZE_cpufeature.o := n
+29 -1
arch/riscv/mm/hugetlbpage.c
··· 3 3 #include <linux/err.h> 4 4 5 5 #ifdef CONFIG_RISCV_ISA_SVNAPOT 6 + pte_t huge_ptep_get(pte_t *ptep) 7 + { 8 + unsigned long pte_num; 9 + int i; 10 + pte_t orig_pte = ptep_get(ptep); 11 + 12 + if (!pte_present(orig_pte) || !pte_napot(orig_pte)) 13 + return orig_pte; 14 + 15 + pte_num = napot_pte_num(napot_cont_order(orig_pte)); 16 + 17 + for (i = 0; i < pte_num; i++, ptep++) { 18 + pte_t pte = ptep_get(ptep); 19 + 20 + if (pte_dirty(pte)) 21 + orig_pte = pte_mkdirty(orig_pte); 22 + 23 + if (pte_young(pte)) 24 + orig_pte = pte_mkyoung(orig_pte); 25 + } 26 + 27 + return orig_pte; 28 + } 29 + 6 30 pte_t *huge_pte_alloc(struct mm_struct *mm, 7 31 struct vm_area_struct *vma, 8 32 unsigned long addr, ··· 242 218 { 243 219 pte_t pte = ptep_get(ptep); 244 220 unsigned long order; 221 + pte_t orig_pte; 245 222 int i, pte_num; 246 223 247 224 if (!pte_napot(pte)) { ··· 253 228 order = napot_cont_order(pte); 254 229 pte_num = napot_pte_num(order); 255 230 ptep = huge_pte_offset(mm, addr, napot_cont_size(order)); 231 + orig_pte = get_clear_contig_flush(mm, addr, ptep, pte_num); 232 + 233 + orig_pte = pte_wrprotect(orig_pte); 256 234 257 235 for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++) 258 - ptep_set_wrprotect(mm, addr, ptep); 236 + set_pte_at(mm, addr, ptep, orig_pte); 259 237 } 260 238 261 239 pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+1 -1
arch/riscv/mm/init.c
···
 static void __init create_fdt_early_page_table(uintptr_t fix_fdt_va,
                                                uintptr_t dtb_pa)
 {
+#ifndef CONFIG_BUILTIN_DTB
         uintptr_t pa = dtb_pa & ~(PMD_SIZE - 1);

-#ifndef CONFIG_BUILTIN_DTB
         /* Make sure the fdt fixmap address is always aligned on PMD size */
         BUILD_BUG_ON(FIX_FDT % (PMD_SIZE / PAGE_SIZE));

-2
arch/x86/crypto/aria-aesni-avx-asm_64.S
···
 .octa 0x3F893781E95FE1576CDA64D2BA0CB204

 #ifdef CONFIG_AS_GFNI
-.section .rodata.cst8, "aM", @progbits, 8
-.align 8
 /* AES affine: */
 #define tf_aff_const BV8(1, 1, 0, 0, 0, 1, 1, 0)
 .Ltf_aff_bitmatrix:
+1 -1
arch/x86/include/asm/fpu/sched.h
···
 static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu)
 {
         if (cpu_feature_enabled(X86_FEATURE_FPU) &&
-            !(current->flags & (PF_KTHREAD | PF_IO_WORKER))) {
+            !(current->flags & (PF_KTHREAD | PF_USER_WORKER))) {
                 save_fpregs_to_fpstate(old_fpu);
                 /*
                  * The save operation preserved register state, so the
+1 -1
arch/x86/kernel/fpu/context.h
···
         struct fpu *fpu = &current->thread.fpu;
         int cpu = smp_processor_id();

-        if (WARN_ON_ONCE(current->flags & (PF_KTHREAD | PF_IO_WORKER)))
+        if (WARN_ON_ONCE(current->flags & (PF_KTHREAD | PF_USER_WORKER)))
                 return;

         if (!fpregs_state_valid(fpu, cpu)) {
+1 -1
arch/x86/kernel/fpu/core.c
···

         this_cpu_write(in_kernel_fpu, true);

-        if (!(current->flags & (PF_KTHREAD | PF_IO_WORKER)) &&
+        if (!(current->flags & (PF_KTHREAD | PF_USER_WORKER)) &&
             !test_thread_flag(TIF_NEED_FPU_LOAD)) {
                 set_thread_flag(TIF_NEED_FPU_LOAD);
                 save_fpregs_to_fpstate(&current->thread.fpu);
+18 -2
arch/x86/kvm/lapic.c
··· 229 229 u32 physical_id; 230 230 231 231 /* 232 + * For simplicity, KVM always allocates enough space for all possible 233 + * xAPIC IDs. Yell, but don't kill the VM, as KVM can continue on 234 + * without the optimized map. 235 + */ 236 + if (WARN_ON_ONCE(xapic_id > new->max_apic_id)) 237 + return -EINVAL; 238 + 239 + /* 240 + * Bail if a vCPU was added and/or enabled its APIC between allocating 241 + * the map and doing the actual calculations for the map. Note, KVM 242 + * hardcodes the x2APIC ID to vcpu_id, i.e. there's no TOCTOU bug if 243 + * the compiler decides to reload x2apic_id after this check. 244 + */ 245 + if (x2apic_id > new->max_apic_id) 246 + return -E2BIG; 247 + 248 + /* 232 249 * Deliberately truncate the vCPU ID when detecting a mismatched APIC 233 250 * ID to avoid false positives if the vCPU ID, i.e. x2APIC ID, is a 234 251 * 32-bit value. Any unwanted aliasing due to truncation results will ··· 270 253 */ 271 254 if (vcpu->kvm->arch.x2apic_format) { 272 255 /* See also kvm_apic_match_physical_addr(). */ 273 - if ((apic_x2apic_mode(apic) || x2apic_id > 0xff) && 274 - x2apic_id <= new->max_apic_id) 256 + if (apic_x2apic_mode(apic) || x2apic_id > 0xff) 275 257 new->phys_map[x2apic_id] = apic; 276 258 277 259 if (!apic_x2apic_mode(apic) && !new->phys_map[xapic_id])
+4 -1
arch/x86/kvm/mmu/mmu.c
···
          */
         slot = NULL;
         if (atomic_read(&kvm->nr_memslots_dirty_logging)) {
-                slot = gfn_to_memslot(kvm, sp->gfn);
+                struct kvm_memslots *slots;
+
+                slots = kvm_memslots_for_spte_role(kvm, sp->role);
+                slot = __gfn_to_memslot(slots, sp->gfn);
                 WARN_ON_ONCE(!slot);
         }

+1 -1
arch/x86/kvm/svm/svm.c
···
         if (!is_vnmi_enabled(svm))
                 return false;

-        return !!(svm->vmcb->control.int_ctl & V_NMI_BLOCKING_MASK);
+        return !!(svm->vmcb->control.int_ctl & V_NMI_PENDING_MASK);
 }

 static bool svm_set_vnmi_pending(struct kvm_vcpu *vcpu)
+3
arch/x86/kvm/x86.c
···
                         exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
                         break;
                 }
+
+                /* Note, VM-Exits that go down the "slow" path are accounted below. */
+                ++vcpu->stat.exits;
         }

         /*
+2 -1
block/blk-settings.c
···
 void disk_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
 {
         struct request_queue *q = disk->queue;
+        unsigned int old_model = q->limits.zoned;

         switch (model) {
         case BLK_ZONED_HM:

···
                  */
                 blk_queue_zone_write_granularity(q,
                                                 queue_logical_block_size(q));
-        } else {
+        } else if (old_model != BLK_ZONED_NONE) {
                 disk_clear_zone_settings(disk);
         }
 }
-6
drivers/acpi/apei/apei-internal.h
···
 #ifndef APEI_INTERNAL_H
 #define APEI_INTERNAL_H

-#include <linux/cper.h>
 #include <linux/acpi.h>

 struct apei_exec_context;

···
         else
                 return sizeof(*estatus) + estatus->data_length;
 }
-
-void cper_estatus_print(const char *pfx,
-                        const struct acpi_hest_generic_status *estatus);
-int cper_estatus_check_header(const struct acpi_hest_generic_status *estatus);
-int cper_estatus_check(const struct acpi_hest_generic_status *estatus);

 int apei_osc_setup(void);
 #endif
+1
drivers/acpi/apei/bert.c
···
 #include <linux/module.h>
 #include <linux/init.h>
 #include <linux/acpi.h>
+#include <linux/cper.h>
 #include <linux/io.h>

 #include "apei-internal.h"
+26 -8
drivers/ata/libata-scsi.c
··· 2694 2694 return 0; 2695 2695 } 2696 2696 2697 - static struct ata_device *ata_find_dev(struct ata_port *ap, int devno) 2697 + static struct ata_device *ata_find_dev(struct ata_port *ap, unsigned int devno) 2698 2698 { 2699 - if (!sata_pmp_attached(ap)) { 2700 - if (likely(devno >= 0 && 2701 - devno < ata_link_max_devices(&ap->link))) 2699 + /* 2700 + * For the non-PMP case, ata_link_max_devices() returns 1 (SATA case), 2701 + * or 2 (IDE master + slave case). However, the former case includes 2702 + * libsas hosted devices which are numbered per scsi host, leading 2703 + * to devno potentially being larger than 0 but with each struct 2704 + * ata_device having its own struct ata_port and struct ata_link. 2705 + * To accommodate these, ignore devno and always use device number 0. 2706 + */ 2707 + if (likely(!sata_pmp_attached(ap))) { 2708 + int link_max_devices = ata_link_max_devices(&ap->link); 2709 + 2710 + if (link_max_devices == 1) 2711 + return &ap->link.device[0]; 2712 + 2713 + if (devno < link_max_devices) 2702 2714 return &ap->link.device[devno]; 2703 - } else { 2704 - if (likely(devno >= 0 && 2705 - devno < ap->nr_pmp_links)) 2706 - return &ap->pmp_link[devno].device[0]; 2715 + 2716 + return NULL; 2707 2717 } 2718 + 2719 + /* 2720 + * For PMP-attached devices, the device number corresponds to C 2721 + * (channel) of SCSI [H:C:I:L], indicating the port pmp link 2722 + * for the device. 2723 + */ 2724 + if (devno < ap->nr_pmp_links) 2725 + return &ap->pmp_link[devno].device[0]; 2708 2726 2709 2727 return NULL; 2710 2728 }
+26
drivers/base/cacheinfo.c
··· 388 388 continue;/* skip if itself or no cacheinfo */ 389 389 for (sib_index = 0; sib_index < cache_leaves(i); sib_index++) { 390 390 sib_leaf = per_cpu_cacheinfo_idx(i, sib_index); 391 + 392 + /* 393 + * Comparing cache IDs only makes sense if the leaves 394 + * belong to the same cache level of same type. Skip 395 + * the check if level and type do not match. 396 + */ 397 + if (sib_leaf->level != this_leaf->level || 398 + sib_leaf->type != this_leaf->type) 399 + continue; 400 + 391 401 if (cache_leaves_are_shared(this_leaf, sib_leaf)) { 392 402 cpumask_set_cpu(cpu, &sib_leaf->shared_cpu_map); 393 403 cpumask_set_cpu(i, &this_leaf->shared_cpu_map); ··· 410 400 coherency_max_size = this_leaf->coherency_line_size; 411 401 } 412 402 403 + /* shared_cpu_map is now populated for the cpu */ 404 + this_cpu_ci->cpu_map_populated = true; 413 405 return 0; 414 406 } 415 407 416 408 static void cache_shared_cpu_map_remove(unsigned int cpu) 417 409 { 410 + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); 418 411 struct cacheinfo *this_leaf, *sib_leaf; 419 412 unsigned int sibling, index, sib_index; 420 413 ··· 432 419 433 420 for (sib_index = 0; sib_index < cache_leaves(sibling); sib_index++) { 434 421 sib_leaf = per_cpu_cacheinfo_idx(sibling, sib_index); 422 + 423 + /* 424 + * Comparing cache IDs only makes sense if the leaves 425 + * belong to the same cache level of same type. Skip 426 + * the check if level and type do not match. 427 + */ 428 + if (sib_leaf->level != this_leaf->level || 429 + sib_leaf->type != this_leaf->type) 430 + continue; 431 + 435 432 if (cache_leaves_are_shared(this_leaf, sib_leaf)) { 436 433 cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map); 437 434 cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map); ··· 450 427 } 451 428 } 452 429 } 430 + 431 + /* cpu is no longer populated in the shared map */ 432 + this_cpu_ci->cpu_map_populated = false; 453 433 } 454 434 455 435 static void free_cache_attributes(unsigned int cpu)
+1 -1
drivers/base/firmware_loader/main.c
··· 812 812 char *outbuf; 813 813 814 814 alg = crypto_alloc_shash("sha256", 0, 0); 815 - if (!alg) 815 + if (IS_ERR(alg)) 816 816 return; 817 817 818 818 sha256buf = kmalloc(SHA256_DIGEST_SIZE, GFP_KERNEL);
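Context for the check above: crypto_alloc_shash() reports failure through an ERR_PTR()-encoded pointer and never returns NULL, so the previous !alg test could not catch a failed allocation; IS_ERR() is the appropriate test.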
+10 -3
drivers/base/regmap/Kconfig
··· 4 4 # subsystems should select the appropriate symbols. 5 5 6 6 config REGMAP 7 + bool "Register Map support" if KUNIT_ALL_TESTS 7 8 default y if (REGMAP_I2C || REGMAP_SPI || REGMAP_SPMI || REGMAP_W1 || REGMAP_AC97 || REGMAP_MMIO || REGMAP_IRQ || REGMAP_SOUNDWIRE || REGMAP_SOUNDWIRE_MBQ || REGMAP_SCCB || REGMAP_I3C || REGMAP_SPI_AVMM || REGMAP_MDIO || REGMAP_FSI) 8 9 select IRQ_DOMAIN if REGMAP_IRQ 9 10 select MDIO_BUS if REGMAP_MDIO 10 - bool 11 + help 12 + Enable support for the Register Map (regmap) access API. 13 + 14 + Usually, this option is automatically selected when needed. 15 + However, you may want to enable it manually for running the regmap 16 + KUnit tests. 17 + 18 + If unsure, say N. 11 19 12 20 config REGMAP_KUNIT 13 21 tristate "KUnit tests for regmap" 14 - depends on KUNIT 22 + depends on KUNIT && REGMAP 15 23 default KUNIT_ALL_TESTS 16 - select REGMAP 17 24 select REGMAP_RAM 18 25 19 26 config REGMAP_AC97
+4 -1
drivers/base/regmap/regcache-maple.c
··· 203 203 204 204 mas_for_each(&mas, entry, max) { 205 205 for (r = max(mas.index, lmin); r <= min(mas.last, lmax); r++) { 206 + mas_pause(&mas); 207 + rcu_read_unlock(); 206 208 ret = regcache_sync_val(map, r, entry[r - mas.index]); 207 209 if (ret != 0) 208 210 goto out; 211 + rcu_read_lock(); 209 212 } 210 213 } 211 214 212 - out: 213 215 rcu_read_unlock(); 214 216 217 + out: 215 218 map->cache_bypass = false; 216 219 217 220 return ret;
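Context for the locking change above: regcache_sync_val() can end up doing bus I/O that may sleep, which is not allowed inside an RCU read-side critical section. mas_pause() records the walk position so the maple-tree iteration can resume after the lock is re-taken, and the error path now jumps past the final rcu_read_unlock() because the lock has already been dropped by the time the sync call fails.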
+4
drivers/base/regmap/regmap-sdw.c
··· 59 59 	if (config->pad_bits != 0)
 60 60 		return -ENOTSUPP;
 61 61 
 62 + 	/* Only bulk writes are supported, not multi-register writes */
 63 + 	if (config->can_multi_write)
 64 + 		return -ENOTSUPP;
 65 + 
 62 66 	return 0;
 63 67 }
 64 68 
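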
+4 -2
drivers/base/regmap/regmap.c
··· 2082 2082 size_t val_count = val_len / val_bytes; 2083 2083 size_t chunk_count, chunk_bytes; 2084 2084 size_t chunk_regs = val_count; 2085 + size_t max_data = map->max_raw_write - map->format.reg_bytes - 2086 + map->format.pad_bytes; 2085 2087 int ret, i; 2086 2088 2087 2089 if (!val_count) ··· 2091 2089 2092 2090 if (map->use_single_write) 2093 2091 chunk_regs = 1; 2094 - else if (map->max_raw_write && val_len > map->max_raw_write) 2095 - chunk_regs = map->max_raw_write / val_bytes; 2092 + else if (map->max_raw_write && val_len > max_data) 2093 + chunk_regs = max_data / val_bytes; 2096 2094 2097 2095 chunk_count = val_count / chunk_regs; 2098 2096 chunk_bytes = chunk_regs * val_bytes;
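A worked example with hypothetical numbers: on a bus with max_raw_write = 32, reg_bytes = 2, pad_bytes = 0 and val_bytes = 2, the old code chose chunk_regs = 32 / 2 = 16, producing 2 + 32 = 34-byte transfers that exceed the limit; with max_data = 32 - 2 - 0 = 30 the chunk drops to 15 registers and each raw write stays within the 32-byte bound.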
+4 -4
drivers/char/tpm/tpm_tis_core.h
··· 84 84 #define ILB_REMAP_SIZE 0x100 85 85 86 86 enum tpm_tis_flags { 87 - TPM_TIS_ITPM_WORKAROUND = BIT(0), 88 - TPM_TIS_INVALID_STATUS = BIT(1), 89 - TPM_TIS_DEFAULT_CANCELLATION = BIT(2), 90 - TPM_TIS_IRQ_TESTED = BIT(3), 87 + TPM_TIS_ITPM_WORKAROUND = 0, 88 + TPM_TIS_INVALID_STATUS = 1, 89 + TPM_TIS_DEFAULT_CANCELLATION = 2, 90 + TPM_TIS_IRQ_TESTED = 3, 91 91 }; 92 92 93 93 struct tpm_tis_data {
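The enum values switch from BIT() masks to plain bit numbers, which matches use with the set_bit()/test_bit() family, since those helpers take a bit index rather than a mask. A minimal sketch of that usage, assuming a flags word of type unsigned long; the struct and functions below are made up for illustration and are not part of the driver:

	#include <linux/bitops.h>

	struct example_data {
		unsigned long flags;	/* hypothetical flags word */
	};

	static void example_mark_irq_tested(struct example_data *d)
	{
		/* set_bit() takes a bit number, e.g. TPM_TIS_IRQ_TESTED == 3 */
		set_bit(TPM_TIS_IRQ_TESTED, &d->flags);
	}

	static bool example_irq_was_tested(struct example_data *d)
	{
		return test_bit(TPM_TIS_IRQ_TESTED, &d->flags);
	}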
+10 -7
drivers/dma/at_hdmac.c
··· 132 132 #define ATC_DST_PIP BIT(12) /* Destination Picture-in-Picture enabled */ 133 133 #define ATC_SRC_DSCR_DIS BIT(16) /* Src Descriptor fetch disable */ 134 134 #define ATC_DST_DSCR_DIS BIT(20) /* Dst Descriptor fetch disable */ 135 - #define ATC_FC GENMASK(22, 21) /* Choose Flow Controller */ 135 + #define ATC_FC GENMASK(23, 21) /* Choose Flow Controller */ 136 136 #define ATC_FC_MEM2MEM 0x0 /* Mem-to-Mem (DMA) */ 137 137 #define ATC_FC_MEM2PER 0x1 /* Mem-to-Periph (DMA) */ 138 138 #define ATC_FC_PER2MEM 0x2 /* Periph-to-Mem (DMA) */ ··· 153 153 #define ATC_AUTO BIT(31) /* Auto multiple buffer tx enable */ 154 154 155 155 /* Bitfields in CFG */ 156 - #define ATC_PER_MSB(h) ((0x30U & (h)) >> 4) /* Extract most significant bits of a handshaking identifier */ 157 - 158 156 #define ATC_SRC_PER GENMASK(3, 0) /* Channel src rq associated with periph handshaking ifc h */ 159 157 #define ATC_DST_PER GENMASK(7, 4) /* Channel dst rq associated with periph handshaking ifc h */ 160 158 #define ATC_SRC_REP BIT(8) /* Source Replay Mod */ ··· 179 181 #define ATC_DPIP_HOLE GENMASK(15, 0) 180 182 #define ATC_DPIP_BOUNDARY GENMASK(25, 16) 181 183 182 - #define ATC_SRC_PER_ID(id) (FIELD_PREP(ATC_SRC_PER_MSB, (id)) | \ 183 - FIELD_PREP(ATC_SRC_PER, (id))) 184 - #define ATC_DST_PER_ID(id) (FIELD_PREP(ATC_DST_PER_MSB, (id)) | \ 185 - FIELD_PREP(ATC_DST_PER, (id))) 184 + #define ATC_PER_MSB GENMASK(5, 4) /* Extract MSBs of a handshaking identifier */ 185 + #define ATC_SRC_PER_ID(id) \ 186 + ({ typeof(id) _id = (id); \ 187 + FIELD_PREP(ATC_SRC_PER_MSB, FIELD_GET(ATC_PER_MSB, _id)) | \ 188 + FIELD_PREP(ATC_SRC_PER, _id); }) 189 + #define ATC_DST_PER_ID(id) \ 190 + ({ typeof(id) _id = (id); \ 191 + FIELD_PREP(ATC_DST_PER_MSB, FIELD_GET(ATC_PER_MSB, _id)) | \ 192 + FIELD_PREP(ATC_DST_PER, _id); }) 186 193 187 194 188 195
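A worked example, assuming ATC_SRC_PER_MSB holds the upper bits of the peripheral ID field: FIELD_PREP() keeps only as many low-order bits of its value as the field is wide, so the old ATC_SRC_PER_ID(0x35) placed the low nibble 0x5 into ATC_SRC_PER but only the lowest bits of the id into the MSB field. With the fix, FIELD_GET(ATC_PER_MSB, 0x35) first extracts bits 5:4 (0x3) and FIELD_PREP() then places that value into ATC_SRC_PER_MSB, so handshaking ids above 15 are encoded correctly.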
+5 -2
drivers/dma/at_xdmac.c
··· 1102 1102 NULL, 1103 1103 src_addr, dst_addr, 1104 1104 xt, xt->sgl); 1105 + if (!first) 1106 + return NULL; 1105 1107 1106 1108 /* Length of the block is (BLEN+1) microblocks. */ 1107 1109 for (i = 0; i < xt->numf - 1; i++) ··· 1134 1132 src_addr, dst_addr, 1135 1133 xt, chunk); 1136 1134 if (!desc) { 1137 - list_splice_tail_init(&first->descs_list, 1138 - &atchan->free_descs_list); 1135 + if (first) 1136 + list_splice_tail_init(&first->descs_list, 1137 + &atchan->free_descs_list); 1139 1138 return NULL; 1140 1139 } 1141 1140
-1
drivers/dma/idxd/cdev.c
··· 277 277 if (wq_dedicated(wq)) { 278 278 rc = idxd_wq_set_pasid(wq, pasid); 279 279 if (rc < 0) { 280 - iommu_sva_unbind_device(sva); 281 280 dev_err(dev, "wq set pasid failed: %d\n", rc); 282 281 goto failed_set_pasid; 283 282 }
+4 -4
drivers/dma/pl330.c
··· 1050 1050 return true; 1051 1051 } 1052 1052 1053 - static bool _start(struct pl330_thread *thrd) 1053 + static bool pl330_start_thread(struct pl330_thread *thrd) 1054 1054 { 1055 1055 switch (_state(thrd)) { 1056 1056 case PL330_STATE_FAULT_COMPLETING: ··· 1702 1702 thrd->req_running = -1; 1703 1703 1704 1704 /* Get going again ASAP */ 1705 - _start(thrd); 1705 + pl330_start_thread(thrd); 1706 1706 1707 1707 /* For now, just make a list of callbacks to be done */ 1708 1708 list_add_tail(&descdone->rqd, &pl330->req_done); ··· 2089 2089 } else { 2090 2090 /* Make sure the PL330 Channel thread is active */ 2091 2091 spin_lock(&pch->thread->dmac->lock); 2092 - _start(pch->thread); 2092 + pl330_start_thread(pch->thread); 2093 2093 spin_unlock(&pch->thread->dmac->lock); 2094 2094 } 2095 2095 ··· 2107 2107 if (power_down) { 2108 2108 pch->active = true; 2109 2109 spin_lock(&pch->thread->dmac->lock); 2110 - _start(pch->thread); 2110 + pl330_start_thread(pch->thread); 2111 2111 spin_unlock(&pch->thread->dmac->lock); 2112 2112 power_down = false; 2113 2113 }
+2 -2
drivers/dma/ti/k3-udma.c
··· 5527 5527 return ret; 5528 5528 } 5529 5529 5530 - static int udma_pm_suspend(struct device *dev) 5530 + static int __maybe_unused udma_pm_suspend(struct device *dev) 5531 5531 { 5532 5532 struct udma_dev *ud = dev_get_drvdata(dev); 5533 5533 struct dma_device *dma_dev = &ud->ddev; ··· 5549 5549 return 0; 5550 5550 } 5551 5551 5552 - static int udma_pm_resume(struct device *dev) 5552 + static int __maybe_unused udma_pm_resume(struct device *dev) 5553 5553 { 5554 5554 struct udma_dev *ud = dev_get_drvdata(dev); 5555 5555 struct dma_device *dma_dev = &ud->ddev;
+2 -1
drivers/firmware/efi/libstub/Makefile.zboot
··· 32 32 $(obj)/vmlinuz: $(obj)/vmlinux.bin FORCE 33 33 $(call if_changed,$(zboot-method-y)) 34 34 35 - OBJCOPYFLAGS_vmlinuz.o := -I binary -O $(EFI_ZBOOT_BFD_TARGET) $(EFI_ZBOOT_OBJCOPY_FLAGS) \ 35 + # avoid eager evaluation to prevent references to non-existent build artifacts 36 + OBJCOPYFLAGS_vmlinuz.o = -I binary -O $(EFI_ZBOOT_BFD_TARGET) $(EFI_ZBOOT_OBJCOPY_FLAGS) \ 36 37 --rename-section .data=.gzdata,load,alloc,readonly,contents 37 38 $(obj)/vmlinuz.o: $(obj)/vmlinuz FORCE 38 39 $(call if_changed,objcopy)
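For background on the comment above: with GNU make, := expands the right-hand side once when the makefile is parsed, before the build artifacts it may refer to exist, whereas the recursively expanded = defers evaluation until the objcopy rule actually runs.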
+3
drivers/firmware/efi/libstub/efistub.h
··· 1133 1133 void efi_remap_image(unsigned long image_base, unsigned alloc_size, 1134 1134 unsigned long code_size); 1135 1135 1136 + asmlinkage efi_status_t __efiapi 1137 + efi_zboot_entry(efi_handle_t handle, efi_system_table_t *systab); 1138 + 1136 1139 #endif
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
··· 593 593 case IP_VERSION(9, 3, 0): 594 594 /* GC 10.3.7 */ 595 595 case IP_VERSION(10, 3, 7): 596 + /* GC 11.0.1 */ 597 + case IP_VERSION(11, 0, 1): 596 598 if (amdgpu_tmz == 0) { 597 599 adev->gmc.tmz_enabled = false; 598 600 dev_info(adev->dev, ··· 618 616 case IP_VERSION(10, 3, 1): 619 617 /* YELLOW_CARP*/ 620 618 case IP_VERSION(10, 3, 3): 621 - case IP_VERSION(11, 0, 1): 622 619 case IP_VERSION(11, 0, 4): 623 620 /* Don't enable it by default yet. 624 621 */
+26 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
··· 241 241 return 0; 242 242 } 243 243 244 + int amdgpu_jpeg_ras_late_init(struct amdgpu_device *adev, struct ras_common_if *ras_block) 245 + { 246 + int r, i; 247 + 248 + r = amdgpu_ras_block_late_init(adev, ras_block); 249 + if (r) 250 + return r; 251 + 252 + if (amdgpu_ras_is_supported(adev, ras_block->block)) { 253 + for (i = 0; i < adev->jpeg.num_jpeg_inst; ++i) { 254 + if (adev->jpeg.harvest_config & (1 << i)) 255 + continue; 256 + 257 + r = amdgpu_irq_get(adev, &adev->jpeg.inst[i].ras_poison_irq, 0); 258 + if (r) 259 + goto late_fini; 260 + } 261 + } 262 + return 0; 263 + 264 + late_fini: 265 + amdgpu_ras_block_late_fini(adev, ras_block); 266 + return r; 267 + } 268 + 244 269 int amdgpu_jpeg_ras_sw_init(struct amdgpu_device *adev) 245 270 { 246 271 int err; ··· 287 262 adev->jpeg.ras_if = &ras->ras_block.ras_comm; 288 263 289 264 if (!ras->ras_block.ras_late_init) 290 - ras->ras_block.ras_late_init = amdgpu_ras_block_late_init; 265 + ras->ras_block.ras_late_init = amdgpu_jpeg_ras_late_init; 291 266 292 267 return 0; 293 268 }
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.h
··· 38 38 struct amdgpu_jpeg_inst { 39 39 struct amdgpu_ring ring_dec; 40 40 struct amdgpu_irq_src irq; 41 + struct amdgpu_irq_src ras_poison_irq; 41 42 struct amdgpu_jpeg_reg external; 42 43 }; 43 44 ··· 73 72 int amdgpu_jpeg_process_poison_irq(struct amdgpu_device *adev, 74 73 struct amdgpu_irq_src *source, 75 74 struct amdgpu_iv_entry *entry); 75 + int amdgpu_jpeg_ras_late_init(struct amdgpu_device *adev, 76 + struct ras_common_if *ras_block); 76 77 int amdgpu_jpeg_ras_sw_init(struct amdgpu_device *adev); 77 78 78 79 #endif /*__AMDGPU_JPEG_H__*/
+26 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
··· 1181 1181 return 0; 1182 1182 } 1183 1183 1184 + int amdgpu_vcn_ras_late_init(struct amdgpu_device *adev, struct ras_common_if *ras_block) 1185 + { 1186 + int r, i; 1187 + 1188 + r = amdgpu_ras_block_late_init(adev, ras_block); 1189 + if (r) 1190 + return r; 1191 + 1192 + if (amdgpu_ras_is_supported(adev, ras_block->block)) { 1193 + for (i = 0; i < adev->vcn.num_vcn_inst; i++) { 1194 + if (adev->vcn.harvest_config & (1 << i)) 1195 + continue; 1196 + 1197 + r = amdgpu_irq_get(adev, &adev->vcn.inst[i].ras_poison_irq, 0); 1198 + if (r) 1199 + goto late_fini; 1200 + } 1201 + } 1202 + return 0; 1203 + 1204 + late_fini: 1205 + amdgpu_ras_block_late_fini(adev, ras_block); 1206 + return r; 1207 + } 1208 + 1184 1209 int amdgpu_vcn_ras_sw_init(struct amdgpu_device *adev) 1185 1210 { 1186 1211 int err; ··· 1227 1202 adev->vcn.ras_if = &ras->ras_block.ras_comm; 1228 1203 1229 1204 if (!ras->ras_block.ras_late_init) 1230 - ras->ras_block.ras_late_init = amdgpu_ras_block_late_init; 1205 + ras->ras_block.ras_late_init = amdgpu_vcn_ras_late_init; 1231 1206 1232 1207 return 0; 1233 1208 }
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
··· 234 234 struct amdgpu_ring ring_enc[AMDGPU_VCN_MAX_ENC_RINGS]; 235 235 atomic_t sched_score; 236 236 struct amdgpu_irq_src irq; 237 + struct amdgpu_irq_src ras_poison_irq; 237 238 struct amdgpu_vcn_reg external; 238 239 struct amdgpu_bo *dpg_sram_bo; 239 240 struct dpg_pause_state pause_state; ··· 401 400 int amdgpu_vcn_process_poison_irq(struct amdgpu_device *adev, 402 401 struct amdgpu_irq_src *source, 403 402 struct amdgpu_iv_entry *entry); 403 + int amdgpu_vcn_ras_late_init(struct amdgpu_device *adev, 404 + struct ras_common_if *ras_block); 404 405 int amdgpu_vcn_ras_sw_init(struct amdgpu_device *adev); 405 406 406 407 #endif
+22 -6
drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
··· 102 102 103 103 /* JPEG DJPEG POISON EVENT */ 104 104 r = amdgpu_irq_add_id(adev, amdgpu_ih_clientid_jpeg[i], 105 - VCN_2_6__SRCID_DJPEG0_POISON, &adev->jpeg.inst[i].irq); 105 + VCN_2_6__SRCID_DJPEG0_POISON, &adev->jpeg.inst[i].ras_poison_irq); 106 106 if (r) 107 107 return r; 108 108 109 109 /* JPEG EJPEG POISON EVENT */ 110 110 r = amdgpu_irq_add_id(adev, amdgpu_ih_clientid_jpeg[i], 111 - VCN_2_6__SRCID_EJPEG0_POISON, &adev->jpeg.inst[i].irq); 111 + VCN_2_6__SRCID_EJPEG0_POISON, &adev->jpeg.inst[i].ras_poison_irq); 112 112 if (r) 113 113 return r; 114 114 } ··· 221 221 if (adev->jpeg.cur_state != AMD_PG_STATE_GATE && 222 222 RREG32_SOC15(JPEG, i, mmUVD_JRBC_STATUS)) 223 223 jpeg_v2_5_set_powergating_state(adev, AMD_PG_STATE_GATE); 224 + 225 + if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__JPEG)) 226 + amdgpu_irq_put(adev, &adev->jpeg.inst[i].ras_poison_irq, 0); 224 227 } 225 228 226 229 return 0; ··· 572 569 return 0; 573 570 } 574 571 572 + static int jpeg_v2_6_set_ras_interrupt_state(struct amdgpu_device *adev, 573 + struct amdgpu_irq_src *source, 574 + unsigned int type, 575 + enum amdgpu_interrupt_state state) 576 + { 577 + return 0; 578 + } 579 + 575 580 static int jpeg_v2_5_process_interrupt(struct amdgpu_device *adev, 576 581 struct amdgpu_irq_src *source, 577 582 struct amdgpu_iv_entry *entry) ··· 603 592 switch (entry->src_id) { 604 593 case VCN_2_0__SRCID__JPEG_DECODE: 605 594 amdgpu_fence_process(&adev->jpeg.inst[ip_instance].ring_dec); 606 - break; 607 - case VCN_2_6__SRCID_DJPEG0_POISON: 608 - case VCN_2_6__SRCID_EJPEG0_POISON: 609 - amdgpu_jpeg_process_poison_irq(adev, source, entry); 610 595 break; 611 596 default: 612 597 DRM_ERROR("Unhandled interrupt: %d %d\n", ··· 732 725 .process = jpeg_v2_5_process_interrupt, 733 726 }; 734 727 728 + static const struct amdgpu_irq_src_funcs jpeg_v2_6_ras_irq_funcs = { 729 + .set = jpeg_v2_6_set_ras_interrupt_state, 730 + .process = amdgpu_jpeg_process_poison_irq, 731 + }; 732 + 735 733 static void jpeg_v2_5_set_irq_funcs(struct amdgpu_device *adev) 736 734 { 737 735 int i; ··· 747 735 748 736 adev->jpeg.inst[i].irq.num_types = 1; 749 737 adev->jpeg.inst[i].irq.funcs = &jpeg_v2_5_irq_funcs; 738 + 739 + adev->jpeg.inst[i].ras_poison_irq.num_types = 1; 740 + adev->jpeg.inst[i].ras_poison_irq.funcs = &jpeg_v2_6_ras_irq_funcs; 750 741 } 751 742 } 752 743 ··· 815 800 static struct amdgpu_jpeg_ras jpeg_v2_6_ras = { 816 801 .ras_block = { 817 802 .hw_ops = &jpeg_v2_6_ras_hw_ops, 803 + .ras_late_init = amdgpu_jpeg_ras_late_init, 818 804 }, 819 805 }; 820 806
+21 -7
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
··· 87 87 88 88 /* JPEG DJPEG POISON EVENT */ 89 89 r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_VCN, 90 - VCN_4_0__SRCID_DJPEG0_POISON, &adev->jpeg.inst->irq); 90 + VCN_4_0__SRCID_DJPEG0_POISON, &adev->jpeg.inst->ras_poison_irq); 91 91 if (r) 92 92 return r; 93 93 94 94 /* JPEG EJPEG POISON EVENT */ 95 95 r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_VCN, 96 - VCN_4_0__SRCID_EJPEG0_POISON, &adev->jpeg.inst->irq); 96 + VCN_4_0__SRCID_EJPEG0_POISON, &adev->jpeg.inst->ras_poison_irq); 97 97 if (r) 98 98 return r; 99 99 ··· 202 202 RREG32_SOC15(JPEG, 0, regUVD_JRBC_STATUS)) 203 203 jpeg_v4_0_set_powergating_state(adev, AMD_PG_STATE_GATE); 204 204 } 205 - amdgpu_irq_put(adev, &adev->jpeg.inst->irq, 0); 205 + if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__JPEG)) 206 + amdgpu_irq_put(adev, &adev->jpeg.inst->ras_poison_irq, 0); 206 207 207 208 return 0; 208 209 } ··· 671 670 return 0; 672 671 } 673 672 673 + static int jpeg_v4_0_set_ras_interrupt_state(struct amdgpu_device *adev, 674 + struct amdgpu_irq_src *source, 675 + unsigned int type, 676 + enum amdgpu_interrupt_state state) 677 + { 678 + return 0; 679 + } 680 + 674 681 static int jpeg_v4_0_process_interrupt(struct amdgpu_device *adev, 675 682 struct amdgpu_irq_src *source, 676 683 struct amdgpu_iv_entry *entry) ··· 688 679 switch (entry->src_id) { 689 680 case VCN_4_0__SRCID__JPEG_DECODE: 690 681 amdgpu_fence_process(&adev->jpeg.inst->ring_dec); 691 - break; 692 - case VCN_4_0__SRCID_DJPEG0_POISON: 693 - case VCN_4_0__SRCID_EJPEG0_POISON: 694 - amdgpu_jpeg_process_poison_irq(adev, source, entry); 695 682 break; 696 683 default: 697 684 DRM_DEV_ERROR(adev->dev, "Unhandled interrupt: %d %d\n", ··· 758 753 .process = jpeg_v4_0_process_interrupt, 759 754 }; 760 755 756 + static const struct amdgpu_irq_src_funcs jpeg_v4_0_ras_irq_funcs = { 757 + .set = jpeg_v4_0_set_ras_interrupt_state, 758 + .process = amdgpu_jpeg_process_poison_irq, 759 + }; 760 + 761 761 static void jpeg_v4_0_set_irq_funcs(struct amdgpu_device *adev) 762 762 { 763 763 adev->jpeg.inst->irq.num_types = 1; 764 764 adev->jpeg.inst->irq.funcs = &jpeg_v4_0_irq_funcs; 765 + 766 + adev->jpeg.inst->ras_poison_irq.num_types = 1; 767 + adev->jpeg.inst->ras_poison_irq.funcs = &jpeg_v4_0_ras_irq_funcs; 765 768 } 766 769 767 770 const struct amdgpu_ip_block_version jpeg_v4_0_ip_block = { ··· 824 811 static struct amdgpu_jpeg_ras jpeg_v4_0_ras = { 825 812 .ras_block = { 826 813 .hw_ops = &jpeg_v4_0_ras_hw_ops, 814 + .ras_late_init = amdgpu_jpeg_ras_late_init, 827 815 }, 828 816 }; 829 817
+21 -4
drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
··· 143 143 144 144 /* VCN POISON TRAP */ 145 145 r = amdgpu_irq_add_id(adev, amdgpu_ih_clientid_vcns[j], 146 - VCN_2_6__SRCID_UVD_POISON, &adev->vcn.inst[j].irq); 146 + VCN_2_6__SRCID_UVD_POISON, &adev->vcn.inst[j].ras_poison_irq); 147 147 if (r) 148 148 return r; 149 149 } ··· 354 354 (adev->vcn.cur_state != AMD_PG_STATE_GATE && 355 355 RREG32_SOC15(VCN, i, mmUVD_STATUS))) 356 356 vcn_v2_5_set_powergating_state(adev, AMD_PG_STATE_GATE); 357 + 358 + if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__VCN)) 359 + amdgpu_irq_put(adev, &adev->vcn.inst[i].ras_poison_irq, 0); 357 360 } 358 361 359 362 return 0; ··· 1810 1807 return 0; 1811 1808 } 1812 1809 1810 + static int vcn_v2_6_set_ras_interrupt_state(struct amdgpu_device *adev, 1811 + struct amdgpu_irq_src *source, 1812 + unsigned int type, 1813 + enum amdgpu_interrupt_state state) 1814 + { 1815 + return 0; 1816 + } 1817 + 1813 1818 static int vcn_v2_5_process_interrupt(struct amdgpu_device *adev, 1814 1819 struct amdgpu_irq_src *source, 1815 1820 struct amdgpu_iv_entry *entry) ··· 1848 1837 case VCN_2_0__SRCID__UVD_ENC_LOW_LATENCY: 1849 1838 amdgpu_fence_process(&adev->vcn.inst[ip_instance].ring_enc[1]); 1850 1839 break; 1851 - case VCN_2_6__SRCID_UVD_POISON: 1852 - amdgpu_vcn_process_poison_irq(adev, source, entry); 1853 - break; 1854 1840 default: 1855 1841 DRM_ERROR("Unhandled interrupt: %d %d\n", 1856 1842 entry->src_id, entry->src_data[0]); ··· 1862 1854 .process = vcn_v2_5_process_interrupt, 1863 1855 }; 1864 1856 1857 + static const struct amdgpu_irq_src_funcs vcn_v2_6_ras_irq_funcs = { 1858 + .set = vcn_v2_6_set_ras_interrupt_state, 1859 + .process = amdgpu_vcn_process_poison_irq, 1860 + }; 1861 + 1865 1862 static void vcn_v2_5_set_irq_funcs(struct amdgpu_device *adev) 1866 1863 { 1867 1864 int i; ··· 1876 1863 continue; 1877 1864 adev->vcn.inst[i].irq.num_types = adev->vcn.num_enc_rings + 1; 1878 1865 adev->vcn.inst[i].irq.funcs = &vcn_v2_5_irq_funcs; 1866 + 1867 + adev->vcn.inst[i].ras_poison_irq.num_types = adev->vcn.num_enc_rings + 1; 1868 + adev->vcn.inst[i].ras_poison_irq.funcs = &vcn_v2_6_ras_irq_funcs; 1879 1869 } 1880 1870 } 1881 1871 ··· 1981 1965 static struct amdgpu_vcn_ras vcn_v2_6_ras = { 1982 1966 .ras_block = { 1983 1967 .hw_ops = &vcn_v2_6_ras_hw_ops, 1968 + .ras_late_init = amdgpu_vcn_ras_late_init, 1984 1969 }, 1985 1970 }; 1986 1971
+30 -6
drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
··· 139 139 140 140 /* VCN POISON TRAP */ 141 141 r = amdgpu_irq_add_id(adev, amdgpu_ih_clientid_vcns[i], 142 - VCN_4_0__SRCID_UVD_POISON, &adev->vcn.inst[i].irq); 142 + VCN_4_0__SRCID_UVD_POISON, &adev->vcn.inst[i].ras_poison_irq); 143 143 if (r) 144 144 return r; 145 145 ··· 305 305 vcn_v4_0_set_powergating_state(adev, AMD_PG_STATE_GATE); 306 306 } 307 307 } 308 - 309 - amdgpu_irq_put(adev, &adev->vcn.inst[i].irq, 0); 308 + if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__VCN)) 309 + amdgpu_irq_put(adev, &adev->vcn.inst[i].ras_poison_irq, 0); 310 310 } 311 311 312 312 return 0; ··· 1976 1976 } 1977 1977 1978 1978 /** 1979 + * vcn_v4_0_set_ras_interrupt_state - set VCN block RAS interrupt state 1980 + * 1981 + * @adev: amdgpu_device pointer 1982 + * @source: interrupt sources 1983 + * @type: interrupt types 1984 + * @state: interrupt states 1985 + * 1986 + * Set VCN block RAS interrupt state 1987 + */ 1988 + static int vcn_v4_0_set_ras_interrupt_state(struct amdgpu_device *adev, 1989 + struct amdgpu_irq_src *source, 1990 + unsigned int type, 1991 + enum amdgpu_interrupt_state state) 1992 + { 1993 + return 0; 1994 + } 1995 + 1996 + /** 1979 1997 * vcn_v4_0_process_interrupt - process VCN block interrupt 1980 1998 * 1981 1999 * @adev: amdgpu_device pointer ··· 2025 2007 case VCN_4_0__SRCID__UVD_ENC_GENERAL_PURPOSE: 2026 2008 amdgpu_fence_process(&adev->vcn.inst[ip_instance].ring_enc[0]); 2027 2009 break; 2028 - case VCN_4_0__SRCID_UVD_POISON: 2029 - amdgpu_vcn_process_poison_irq(adev, source, entry); 2030 - break; 2031 2010 default: 2032 2011 DRM_ERROR("Unhandled interrupt: %d %d\n", 2033 2012 entry->src_id, entry->src_data[0]); ··· 2037 2022 static const struct amdgpu_irq_src_funcs vcn_v4_0_irq_funcs = { 2038 2023 .set = vcn_v4_0_set_interrupt_state, 2039 2024 .process = vcn_v4_0_process_interrupt, 2025 + }; 2026 + 2027 + static const struct amdgpu_irq_src_funcs vcn_v4_0_ras_irq_funcs = { 2028 + .set = vcn_v4_0_set_ras_interrupt_state, 2029 + .process = amdgpu_vcn_process_poison_irq, 2040 2030 }; 2041 2031 2042 2032 /** ··· 2061 2041 2062 2042 adev->vcn.inst[i].irq.num_types = adev->vcn.num_enc_rings + 1; 2063 2043 adev->vcn.inst[i].irq.funcs = &vcn_v4_0_irq_funcs; 2044 + 2045 + adev->vcn.inst[i].ras_poison_irq.num_types = adev->vcn.num_enc_rings + 1; 2046 + adev->vcn.inst[i].ras_poison_irq.funcs = &vcn_v4_0_ras_irq_funcs; 2064 2047 } 2065 2048 } 2066 2049 ··· 2137 2114 static struct amdgpu_vcn_ras vcn_v4_0_ras = { 2138 2115 .ras_block = { 2139 2116 .hw_ops = &vcn_v4_0_ras_hw_ops, 2117 + .ras_late_init = amdgpu_vcn_ras_late_init, 2140 2118 }, 2141 2119 }; 2142 2120
-9
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
··· 2113 2113 if (hubbub->funcs->program_compbuf_size) 2114 2114 hubbub->funcs->program_compbuf_size(hubbub, context->bw_ctx.bw.dcn.compbuf_size_kb, true); 2115 2115 2116 - if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching) { 2117 - dc_dmub_srv_p_state_delegate(dc, 2118 - true, context); 2119 - context->bw_ctx.bw.dcn.clk.p_state_change_support = true; 2120 - dc->clk_mgr->clks.fw_based_mclk_switching = true; 2121 - } else { 2122 - dc->clk_mgr->clks.fw_based_mclk_switching = false; 2123 - } 2124 - 2125 2116 dc->clk_mgr->funcs->update_clocks( 2126 2117 dc->clk_mgr, 2127 2118 context,
+1 -24
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c
··· 983 983 } 984 984 985 985 void dcn30_prepare_bandwidth(struct dc *dc, 986 - struct dc_state *context) 986 + struct dc_state *context) 987 987 { 988 - bool p_state_change_support = context->bw_ctx.bw.dcn.clk.p_state_change_support; 989 - /* Any transition into an FPO config should disable MCLK switching first to avoid 990 - * driver and FW P-State synchronization issues. 991 - */ 992 - if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching || dc->clk_mgr->clks.fw_based_mclk_switching) { 993 - dc->optimized_required = true; 994 - context->bw_ctx.bw.dcn.clk.p_state_change_support = false; 995 - } 996 - 997 988 if (dc->clk_mgr->dc_mode_softmax_enabled) 998 989 if (dc->clk_mgr->clks.dramclk_khz <= dc->clk_mgr->bw_params->dc_mode_softmax_memclk * 1000 && 999 990 context->bw_ctx.bw.dcn.clk.dramclk_khz > dc->clk_mgr->bw_params->dc_mode_softmax_memclk * 1000) 1000 991 dc->clk_mgr->funcs->set_max_memclk(dc->clk_mgr, dc->clk_mgr->bw_params->clk_table.entries[dc->clk_mgr->bw_params->clk_table.num_entries - 1].memclk_mhz); 1001 992 1002 993 dcn20_prepare_bandwidth(dc, context); 1003 - /* 1004 - * enabled -> enabled: do not disable 1005 - * enabled -> disabled: disable 1006 - * disabled -> enabled: don't care 1007 - * disabled -> disabled: don't care 1008 - */ 1009 - if (!context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching) 1010 - dc_dmub_srv_p_state_delegate(dc, false, context); 1011 - 1012 - if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching || dc->clk_mgr->clks.fw_based_mclk_switching) { 1013 - /* After disabling P-State, restore the original value to ensure we get the correct P-State 1014 - * on the next optimize. */ 1015 - context->bw_ctx.bw.dcn.clk.p_state_change_support = p_state_change_support; 1016 - } 1017 994 } 1018 995
-29
drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
··· 6925 6925 return 0; 6926 6926 } 6927 6927 6928 - static int si_set_temperature_range(struct amdgpu_device *adev) 6929 - { 6930 - int ret; 6931 - 6932 - ret = si_thermal_enable_alert(adev, false); 6933 - if (ret) 6934 - return ret; 6935 - ret = si_thermal_set_temperature_range(adev, R600_TEMP_RANGE_MIN, R600_TEMP_RANGE_MAX); 6936 - if (ret) 6937 - return ret; 6938 - ret = si_thermal_enable_alert(adev, true); 6939 - if (ret) 6940 - return ret; 6941 - 6942 - return ret; 6943 - } 6944 - 6945 6928 static void si_dpm_disable(struct amdgpu_device *adev) 6946 6929 { 6947 6930 struct rv7xx_power_info *pi = rv770_get_pi(adev); ··· 7609 7626 7610 7627 static int si_dpm_late_init(void *handle) 7611 7628 { 7612 - int ret; 7613 - struct amdgpu_device *adev = (struct amdgpu_device *)handle; 7614 - 7615 - if (!adev->pm.dpm_enabled) 7616 - return 0; 7617 - 7618 - ret = si_set_temperature_range(adev); 7619 - if (ret) 7620 - return ret; 7621 - #if 0 //TODO ? 7622 - si_dpm_powergate_uvd(adev, true); 7623 - #endif 7624 7629 return 0; 7625 7630 } 7626 7631
+6 -4
drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
··· 582 582 DpmClocks_t *clk_table = smu->smu_table.clocks_table; 583 583 SmuMetrics_legacy_t metrics; 584 584 struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm); 585 - int i, size = 0, ret = 0; 585 + int i, idx, size = 0, ret = 0; 586 586 uint32_t cur_value = 0, value = 0, count = 0; 587 587 bool cur_value_match_level = false; 588 588 ··· 656 656 case SMU_MCLK: 657 657 case SMU_FCLK: 658 658 for (i = 0; i < count; i++) { 659 - ret = vangogh_get_dpm_clk_limited(smu, clk_type, i, &value); 659 + idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i; 660 + ret = vangogh_get_dpm_clk_limited(smu, clk_type, idx, &value); 660 661 if (ret) 661 662 return ret; 662 663 if (!value) ··· 684 683 DpmClocks_t *clk_table = smu->smu_table.clocks_table; 685 684 SmuMetrics_t metrics; 686 685 struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm); 687 - int i, size = 0, ret = 0; 686 + int i, idx, size = 0, ret = 0; 688 687 uint32_t cur_value = 0, value = 0, count = 0; 689 688 bool cur_value_match_level = false; 690 689 uint32_t min, max; ··· 766 765 case SMU_MCLK: 767 766 case SMU_FCLK: 768 767 for (i = 0; i < count; i++) { 769 - ret = vangogh_get_dpm_clk_limited(smu, clk_type, i, &value); 768 + idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i; 769 + ret = vangogh_get_dpm_clk_limited(smu, clk_type, idx, &value); 770 770 if (ret) 771 771 return ret; 772 772 if (!value)
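The index reversal presumably reflects the firmware DPM table listing FCLK and MCLK levels in descending order: with a hypothetical count of 4, the loop now reads indices 3, 2, 1, 0, so the levels are printed in ascending frequency order, while the other clock types keep their natural indexing.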
+3 -2
drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
··· 494 494 static int renoir_print_clk_levels(struct smu_context *smu, 495 495 enum smu_clk_type clk_type, char *buf) 496 496 { 497 - int i, size = 0, ret = 0; 497 + int i, idx, size = 0, ret = 0; 498 498 uint32_t cur_value = 0, value = 0, count = 0, min = 0, max = 0; 499 499 SmuMetrics_t metrics; 500 500 struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm); ··· 594 594 case SMU_VCLK: 595 595 case SMU_DCLK: 596 596 for (i = 0; i < count; i++) { 597 - ret = renoir_get_dpm_clk_limited(smu, clk_type, i, &value); 597 + idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i; 598 + ret = renoir_get_dpm_clk_limited(smu, clk_type, idx, &value); 598 599 if (ret) 599 600 return ret; 600 601 if (!value)
+3 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
··· 478 478 static int smu_v13_0_4_print_clk_levels(struct smu_context *smu, 479 479 enum smu_clk_type clk_type, char *buf) 480 480 { 481 - int i, size = 0, ret = 0; 481 + int i, idx, size = 0, ret = 0; 482 482 uint32_t cur_value = 0, value = 0, count = 0; 483 483 uint32_t min, max; 484 484 ··· 512 512 break; 513 513 514 514 for (i = 0; i < count; i++) { 515 - ret = smu_v13_0_4_get_dpm_freq_by_index(smu, clk_type, i, &value); 515 + idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i; 516 + ret = smu_v13_0_4_get_dpm_freq_by_index(smu, clk_type, idx, &value); 516 517 if (ret) 517 518 break; 518 519
+3 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_5_ppt.c
··· 866 866 static int smu_v13_0_5_print_clk_levels(struct smu_context *smu, 867 867 enum smu_clk_type clk_type, char *buf) 868 868 { 869 - int i, size = 0, ret = 0; 869 + int i, idx, size = 0, ret = 0; 870 870 uint32_t cur_value = 0, value = 0, count = 0; 871 871 uint32_t min = 0, max = 0; 872 872 ··· 898 898 goto print_clk_out; 899 899 900 900 for (i = 0; i < count; i++) { 901 - ret = smu_v13_0_5_get_dpm_freq_by_index(smu, clk_type, i, &value); 901 + idx = (clk_type == SMU_MCLK) ? (count - i - 1) : i; 902 + ret = smu_v13_0_5_get_dpm_freq_by_index(smu, clk_type, idx, &value); 902 903 if (ret) 903 904 goto print_clk_out; 904 905
+3 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
··· 1000 1000 static int yellow_carp_print_clk_levels(struct smu_context *smu, 1001 1001 enum smu_clk_type clk_type, char *buf) 1002 1002 { 1003 - int i, size = 0, ret = 0; 1003 + int i, idx, size = 0, ret = 0; 1004 1004 uint32_t cur_value = 0, value = 0, count = 0; 1005 1005 uint32_t min, max; 1006 1006 ··· 1033 1033 goto print_clk_out; 1034 1034 1035 1035 for (i = 0; i < count; i++) { 1036 - ret = yellow_carp_get_dpm_freq_by_index(smu, clk_type, i, &value); 1036 + idx = (clk_type == SMU_FCLK || clk_type == SMU_MCLK) ? (count - i - 1) : i; 1037 + ret = yellow_carp_get_dpm_freq_by_index(smu, clk_type, idx, &value); 1037 1038 if (ret) 1038 1039 goto print_clk_out; 1039 1040
+11 -6
drivers/gpu/drm/i915/i915_perf.c
··· 877 877 stream->oa_buffer.last_ctx_id = ctx_id; 878 878 } 879 879 880 - /* 881 - * Clear out the report id and timestamp as a means to detect unlanded 882 - * reports. 883 - */ 884 - oa_report_id_clear(stream, report32); 885 - oa_timestamp_clear(stream, report32); 880 + if (is_power_of_2(report_size)) { 881 + /* 882 + * Clear out the report id and timestamp as a means 883 + * to detect unlanded reports. 884 + */ 885 + oa_report_id_clear(stream, report32); 886 + oa_timestamp_clear(stream, report32); 887 + } else { 888 + /* Zero out the entire report */ 889 + memset(report32, 0, report_size); 890 + } 886 891 } 887 892 888 893 if (start_offset != *offset) {
+2
drivers/hid/hid-google-hammer.c
··· 587 587 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 588 588 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) }, 589 589 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 590 + USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_JEWEL) }, 591 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 590 592 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MAGNEMITE) }, 591 593 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 592 594 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MASTERBALL) },
+1
drivers/hid/hid-ids.h
··· 529 529 #define USB_DEVICE_ID_GOOGLE_MOONBALL 0x5044 530 530 #define USB_DEVICE_ID_GOOGLE_DON 0x5050 531 531 #define USB_DEVICE_ID_GOOGLE_EEL 0x5057 532 + #define USB_DEVICE_ID_GOOGLE_JEWEL 0x5061 532 533 533 534 #define USB_VENDOR_ID_GOTOP 0x08f2 534 535 #define USB_DEVICE_ID_SUPER_Q2 0x007f
+1
drivers/hid/hid-logitech-hidpp.c
··· 314 314 dbg_hid("%s:timeout waiting for response\n", __func__); 315 315 memset(response, 0, sizeof(struct hidpp_report)); 316 316 ret = -ETIMEDOUT; 317 + goto exit; 317 318 } 318 319 319 320 if (response->report_id == REPORT_ID_HIDPP_SHORT &&
+16 -5
drivers/hid/wacom_sys.c
··· 2224 2224 } else if (strstr(product_name, "Wacom") || 2225 2225 strstr(product_name, "wacom") || 2226 2226 strstr(product_name, "WACOM")) { 2227 - strscpy(name, product_name, sizeof(name)); 2227 + if (strscpy(name, product_name, sizeof(name)) < 0) { 2228 + hid_warn(wacom->hdev, "String overflow while assembling device name"); 2229 + } 2228 2230 } else { 2229 2231 snprintf(name, sizeof(name), "Wacom %s", product_name); 2230 2232 } ··· 2244 2242 if (name[strlen(name)-1] == ' ') 2245 2243 name[strlen(name)-1] = '\0'; 2246 2244 } else { 2247 - strscpy(name, features->name, sizeof(name)); 2245 + if (strscpy(name, features->name, sizeof(name)) < 0) { 2246 + hid_warn(wacom->hdev, "String overflow while assembling device name"); 2247 + } 2248 2248 } 2249 2249 2250 2250 snprintf(wacom_wac->name, sizeof(wacom_wac->name), "%s%s", ··· 2414 2410 goto fail_quirks; 2415 2411 } 2416 2412 2417 - if (features->device_type & WACOM_DEVICETYPE_WL_MONITOR) 2413 + if (features->device_type & WACOM_DEVICETYPE_WL_MONITOR) { 2418 2414 error = hid_hw_open(hdev); 2415 + if (error) { 2416 + hid_err(hdev, "hw open failed\n"); 2417 + goto fail_quirks; 2418 + } 2419 + } 2419 2420 2420 2421 wacom_set_shared_values(wacom_wac); 2421 2422 devres_close_group(&hdev->dev, wacom); ··· 2509 2500 goto fail; 2510 2501 } 2511 2502 2512 - strscpy(wacom_wac->name, wacom_wac1->name, 2513 - sizeof(wacom_wac->name)); 2503 + if (strscpy(wacom_wac->name, wacom_wac1->name, 2504 + sizeof(wacom_wac->name)) < 0) { 2505 + hid_warn(wacom->hdev, "String overflow while assembling device name"); 2506 + } 2514 2507 } 2515 2508 2516 2509 return;
+1 -1
drivers/hid/wacom_wac.c
··· 831 831 /* Enter report */ 832 832 if ((data[1] & 0xfc) == 0xc0) { 833 833 /* serial number of the tool */ 834 - wacom->serial[idx] = ((data[3] & 0x0f) << 28) + 834 + wacom->serial[idx] = ((__u64)(data[3] & 0x0f) << 28) + 835 835 (data[4] << 20) + (data[5] << 12) + 836 836 (data[6] << 4) + (data[7] >> 4); 837 837
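Background on the (__u64) cast: data[3] & 0x0f is promoted to int, so shifting it left by 28 can reach the sign bit, and the negative intermediate value then sign-extends when it is added into the 64-bit serial. A small stand-alone sketch with made-up values (not driver code) showing the difference:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		unsigned char data3 = 0xff;	/* pretend report byte */

		/* int arithmetic: 0x0f << 28 overflows a signed 32-bit int and
		 * typically sign-extends when widened to 64 bits */
		uint64_t bad = (data3 & 0x0f) << 28;

		/* 64-bit arithmetic from the start keeps the value intact */
		uint64_t good = (uint64_t)(data3 & 0x0f) << 28;

		printf("bad  = 0x%llx\n", (unsigned long long)bad);	/* typically 0xfffffffff0000000 */
		printf("good = 0x%llx\n", (unsigned long long)good);	/* 0xf0000000 */
		return 0;
	}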
+1 -1
drivers/iio/accel/kionix-kx022a.c
··· 1048 1048 data->ien_reg = KX022A_REG_INC4; 1049 1049 } else { 1050 1050 irq = fwnode_irq_get_byname(fwnode, "INT2"); 1051 - if (irq <= 0) 1051 + if (irq < 0) 1052 1052 return dev_err_probe(dev, irq, "No suitable IRQ\n"); 1053 1053 1054 1054 data->inc_reg = KX022A_REG_INC5;
+2 -2
drivers/iio/accel/st_accel_core.c
··· 1291 1291 1292 1292 adev = ACPI_COMPANION(indio_dev->dev.parent); 1293 1293 if (!adev) 1294 - return 0; 1294 + return -ENXIO; 1295 1295 1296 1296 /* Read _ONT data, which should be a package of 6 integers. */ 1297 1297 status = acpi_evaluate_object(adev->handle, "_ONT", NULL, &buffer); 1298 1298 if (status == AE_NOT_FOUND) { 1299 - return 0; 1299 + return -ENXIO; 1300 1300 } else if (ACPI_FAILURE(status)) { 1301 1301 dev_warn(&indio_dev->dev, "failed to execute _ONT: %d\n", 1302 1302 status);
+11 -1
drivers/iio/adc/ad4130.c
··· 1817 1817 .unprepare = ad4130_int_clk_unprepare, 1818 1818 }; 1819 1819 1820 + static void ad4130_clk_del_provider(void *of_node) 1821 + { 1822 + of_clk_del_provider(of_node); 1823 + } 1824 + 1820 1825 static int ad4130_setup_int_clk(struct ad4130_state *st) 1821 1826 { 1822 1827 struct device *dev = &st->spi->dev; ··· 1829 1824 struct clk_init_data init; 1830 1825 const char *clk_name; 1831 1826 struct clk *clk; 1827 + int ret; 1832 1828 1833 1829 if (st->int_pin_sel == AD4130_INT_PIN_CLK || 1834 1830 st->mclk_sel != AD4130_MCLK_76_8KHZ) ··· 1849 1843 if (IS_ERR(clk)) 1850 1844 return PTR_ERR(clk); 1851 1845 1852 - return of_clk_add_provider(of_node, of_clk_src_simple_get, clk); 1846 + ret = of_clk_add_provider(of_node, of_clk_src_simple_get, clk); 1847 + if (ret) 1848 + return ret; 1849 + 1850 + return devm_add_action_or_reset(dev, ad4130_clk_del_provider, of_node); 1853 1851 } 1854 1852 1855 1853 static int ad4130_setup(struct iio_dev *indio_dev)
+2 -6
drivers/iio/adc/ad7192.c
··· 897 897 __AD719x_CHANNEL(_si, _channel1, -1, _address, NULL, IIO_VOLTAGE, \ 898 898 BIT(IIO_CHAN_INFO_SCALE), ad7192_calibsys_ext_info) 899 899 900 - #define AD719x_SHORTED_CHANNEL(_si, _channel1, _address) \ 901 - __AD719x_CHANNEL(_si, _channel1, -1, _address, "shorted", IIO_VOLTAGE, \ 902 - BIT(IIO_CHAN_INFO_SCALE), ad7192_calibsys_ext_info) 903 - 904 900 #define AD719x_TEMP_CHANNEL(_si, _address) \ 905 901 __AD719x_CHANNEL(_si, 0, -1, _address, NULL, IIO_TEMP, 0, NULL) 906 902 ··· 904 908 AD719x_DIFF_CHANNEL(0, 1, 2, AD7192_CH_AIN1P_AIN2M), 905 909 AD719x_DIFF_CHANNEL(1, 3, 4, AD7192_CH_AIN3P_AIN4M), 906 910 AD719x_TEMP_CHANNEL(2, AD7192_CH_TEMP), 907 - AD719x_SHORTED_CHANNEL(3, 2, AD7192_CH_AIN2P_AIN2M), 911 + AD719x_DIFF_CHANNEL(3, 2, 2, AD7192_CH_AIN2P_AIN2M), 908 912 AD719x_CHANNEL(4, 1, AD7192_CH_AIN1), 909 913 AD719x_CHANNEL(5, 2, AD7192_CH_AIN2), 910 914 AD719x_CHANNEL(6, 3, AD7192_CH_AIN3), ··· 918 922 AD719x_DIFF_CHANNEL(2, 5, 6, AD7193_CH_AIN5P_AIN6M), 919 923 AD719x_DIFF_CHANNEL(3, 7, 8, AD7193_CH_AIN7P_AIN8M), 920 924 AD719x_TEMP_CHANNEL(4, AD7193_CH_TEMP), 921 - AD719x_SHORTED_CHANNEL(5, 2, AD7193_CH_AIN2P_AIN2M), 925 + AD719x_DIFF_CHANNEL(5, 2, 2, AD7193_CH_AIN2P_AIN2M), 922 926 AD719x_CHANNEL(6, 1, AD7193_CH_AIN1), 923 927 AD719x_CHANNEL(7, 2, AD7193_CH_AIN2), 924 928 AD719x_CHANNEL(8, 3, AD7193_CH_AIN3),
+4
drivers/iio/adc/ad_sigma_delta.c
··· 584 584 init_completion(&sigma_delta->completion); 585 585 586 586 sigma_delta->irq_dis = true; 587 + 588 + /* the IRQ core clears IRQ_DISABLE_UNLAZY flag when freeing an IRQ */ 589 + irq_set_status_flags(sigma_delta->spi->irq, IRQ_DISABLE_UNLAZY); 590 + 587 591 ret = devm_request_irq(dev, sigma_delta->spi->irq, 588 592 ad_sd_data_rdy_trig_poll, 589 593 sigma_delta->info->irq_flags | IRQF_NO_AUTOEN,
+3 -4
drivers/iio/adc/imx93_adc.c
··· 236 236 { 237 237 struct imx93_adc *adc = iio_priv(indio_dev); 238 238 struct device *dev = adc->dev; 239 - long ret; 240 - u32 vref_uv; 239 + int ret; 241 240 242 241 switch (mask) { 243 242 case IIO_CHAN_INFO_RAW: ··· 252 253 return IIO_VAL_INT; 253 254 254 255 case IIO_CHAN_INFO_SCALE: 255 - ret = vref_uv = regulator_get_voltage(adc->vref); 256 + ret = regulator_get_voltage(adc->vref); 256 257 if (ret < 0) 257 258 return ret; 258 - *val = vref_uv / 1000; 259 + *val = ret / 1000; 259 260 *val2 = 12; 260 261 return IIO_VAL_FRACTIONAL_LOG2; 261 262
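On the scale math: regulator_get_voltage() returns the reference in microvolts, dividing by 1000 gives millivolts, and IIO_VAL_FRACTIONAL_LOG2 makes the reported scale val / 2^12. With a hypothetical 1.8 V reference that works out to 1800 / 4096, roughly 0.44 mV per LSB.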
+51 -2
drivers/iio/adc/mt6370-adc.c
··· 19 19 20 20 #include <dt-bindings/iio/adc/mediatek,mt6370_adc.h> 21 21 22 + #define MT6370_REG_DEV_INFO 0x100 22 23 #define MT6370_REG_CHG_CTRL3 0x113 23 24 #define MT6370_REG_CHG_CTRL7 0x117 24 25 #define MT6370_REG_CHG_ADC 0x121 ··· 28 27 #define MT6370_ADC_START_MASK BIT(0) 29 28 #define MT6370_ADC_IN_SEL_MASK GENMASK(7, 4) 30 29 #define MT6370_AICR_ICHG_MASK GENMASK(7, 2) 30 + #define MT6370_VENID_MASK GENMASK(7, 4) 31 31 32 32 #define MT6370_AICR_100_mA 0x0 33 33 #define MT6370_AICR_150_mA 0x1 ··· 49 47 #define ADC_CONV_TIME_MS 35 50 48 #define ADC_CONV_POLLING_TIME_US 1000 51 49 50 + #define MT6370_VID_RT5081 0x8 51 + #define MT6370_VID_RT5081A 0xA 52 + #define MT6370_VID_MT6370 0xE 53 + 52 54 struct mt6370_adc_data { 53 55 struct device *dev; 54 56 struct regmap *regmap; ··· 61 55 * from being read at the same time. 62 56 */ 63 57 struct mutex adc_lock; 58 + unsigned int vid; 64 59 }; 65 60 66 61 static int mt6370_adc_read_channel(struct mt6370_adc_data *priv, int chan, ··· 105 98 return ret; 106 99 } 107 100 101 + static int mt6370_adc_get_ibus_scale(struct mt6370_adc_data *priv) 102 + { 103 + switch (priv->vid) { 104 + case MT6370_VID_RT5081: 105 + case MT6370_VID_RT5081A: 106 + case MT6370_VID_MT6370: 107 + return 3350; 108 + default: 109 + return 3875; 110 + } 111 + } 112 + 113 + static int mt6370_adc_get_ibat_scale(struct mt6370_adc_data *priv) 114 + { 115 + switch (priv->vid) { 116 + case MT6370_VID_RT5081: 117 + case MT6370_VID_RT5081A: 118 + case MT6370_VID_MT6370: 119 + return 2680; 120 + default: 121 + return 3870; 122 + } 123 + } 124 + 108 125 static int mt6370_adc_read_scale(struct mt6370_adc_data *priv, 109 126 int chan, int *val1, int *val2) 110 127 { ··· 154 123 case MT6370_AICR_250_mA: 155 124 case MT6370_AICR_300_mA: 156 125 case MT6370_AICR_350_mA: 157 - *val1 = 3350; 126 + *val1 = mt6370_adc_get_ibus_scale(priv); 158 127 break; 159 128 default: 160 129 *val1 = 5000; ··· 181 150 case MT6370_ICHG_600_mA: 182 151 case MT6370_ICHG_700_mA: 183 152 case MT6370_ICHG_800_mA: 184 - *val1 = 2680; 153 + *val1 = mt6370_adc_get_ibat_scale(priv); 185 154 break; 186 155 default: 187 156 *val1 = 5000; ··· 282 251 MT6370_ADC_CHAN(TEMP_JC, IIO_TEMP, 12, BIT(IIO_CHAN_INFO_OFFSET)), 283 252 }; 284 253 254 + static int mt6370_get_vendor_info(struct mt6370_adc_data *priv) 255 + { 256 + unsigned int dev_info; 257 + int ret; 258 + 259 + ret = regmap_read(priv->regmap, MT6370_REG_DEV_INFO, &dev_info); 260 + if (ret) 261 + return ret; 262 + 263 + priv->vid = FIELD_GET(MT6370_VENID_MASK, dev_info); 264 + 265 + return 0; 266 + } 267 + 285 268 static int mt6370_adc_probe(struct platform_device *pdev) 286 269 { 287 270 struct device *dev = &pdev->dev; ··· 316 271 priv->dev = dev; 317 272 priv->regmap = regmap; 318 273 mutex_init(&priv->adc_lock); 274 + 275 + ret = mt6370_get_vendor_info(priv); 276 + if (ret) 277 + return dev_err_probe(dev, ret, "Failed to get vid\n"); 319 278 320 279 ret = regmap_write(priv->regmap, MT6370_REG_CHG_ADC, 0); 321 280 if (ret)
+5 -5
drivers/iio/adc/mxs-lradc-adc.c
··· 757 757 758 758 ret = mxs_lradc_adc_trigger_init(iio); 759 759 if (ret) 760 - goto err_trig; 760 + return ret; 761 761 762 762 ret = iio_triggered_buffer_setup(iio, &iio_pollfunc_store_time, 763 763 &mxs_lradc_adc_trigger_handler, 764 764 &mxs_lradc_adc_buffer_ops); 765 765 if (ret) 766 - return ret; 766 + goto err_trig; 767 767 768 768 adc->vref_mv = mxs_lradc_adc_vref_mv[lradc->soc]; 769 769 ··· 801 801 802 802 err_dev: 803 803 mxs_lradc_adc_hw_stop(adc); 804 - mxs_lradc_adc_trigger_remove(iio); 805 - err_trig: 806 804 iio_triggered_buffer_cleanup(iio); 805 + err_trig: 806 + mxs_lradc_adc_trigger_remove(iio); 807 807 return ret; 808 808 } 809 809 ··· 814 814 815 815 iio_device_unregister(iio); 816 816 mxs_lradc_adc_hw_stop(adc); 817 - mxs_lradc_adc_trigger_remove(iio); 818 817 iio_triggered_buffer_cleanup(iio); 818 + mxs_lradc_adc_trigger_remove(iio); 819 819 820 820 return 0; 821 821 }
+5 -5
drivers/iio/adc/palmas_gpadc.c
··· 547 547 int adc_chan = chan->channel; 548 548 int ret = 0; 549 549 550 - if (adc_chan > PALMAS_ADC_CH_MAX) 550 + if (adc_chan >= PALMAS_ADC_CH_MAX) 551 551 return -EINVAL; 552 552 553 553 mutex_lock(&adc->lock); ··· 595 595 int adc_chan = chan->channel; 596 596 int ret = 0; 597 597 598 - if (adc_chan > PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH) 598 + if (adc_chan >= PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH) 599 599 return -EINVAL; 600 600 601 601 mutex_lock(&adc->lock); ··· 684 684 int adc_chan = chan->channel; 685 685 int ret; 686 686 687 - if (adc_chan > PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH) 687 + if (adc_chan >= PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH) 688 688 return -EINVAL; 689 689 690 690 mutex_lock(&adc->lock); ··· 710 710 int adc_chan = chan->channel; 711 711 int ret; 712 712 713 - if (adc_chan > PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH) 713 + if (adc_chan >= PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH) 714 714 return -EINVAL; 715 715 716 716 mutex_lock(&adc->lock); ··· 744 744 int old; 745 745 int ret; 746 746 747 - if (adc_chan > PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH) 747 + if (adc_chan >= PALMAS_ADC_CH_MAX || type != IIO_EV_TYPE_THRESH) 748 748 return -EINVAL; 749 749 750 750 mutex_lock(&adc->lock);
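The comparison change is an off-by-one guard: channel indices presumably run from 0 to PALMAS_ADC_CH_MAX - 1, so a channel equal to PALMAS_ADC_CH_MAX has to be rejected rather than allowed through.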
+32 -29
drivers/iio/adc/stm32-adc.c
··· 2006 2006 * to get the *real* number of channels. 2007 2007 */ 2008 2008 ret = device_property_count_u32(dev, "st,adc-diff-channels"); 2009 - if (ret < 0) 2010 - return ret; 2011 - 2012 - ret /= (int)(sizeof(struct stm32_adc_diff_channel) / sizeof(u32)); 2013 - if (ret > adc_info->max_channels) { 2014 - dev_err(&indio_dev->dev, "Bad st,adc-diff-channels?\n"); 2015 - return -EINVAL; 2016 - } else if (ret > 0) { 2017 - adc->num_diff = ret; 2018 - num_channels += ret; 2009 + if (ret > 0) { 2010 + ret /= (int)(sizeof(struct stm32_adc_diff_channel) / sizeof(u32)); 2011 + if (ret > adc_info->max_channels) { 2012 + dev_err(&indio_dev->dev, "Bad st,adc-diff-channels?\n"); 2013 + return -EINVAL; 2014 + } else if (ret > 0) { 2015 + adc->num_diff = ret; 2016 + num_channels += ret; 2017 + } 2019 2018 } 2020 2019 2021 2020 /* Optional sample time is provided either for each, or all channels */ ··· 2036 2037 struct stm32_adc_diff_channel diff[STM32_ADC_CH_MAX]; 2037 2038 struct device *dev = &indio_dev->dev; 2038 2039 u32 num_diff = adc->num_diff; 2040 + int num_se = nchans - num_diff; 2039 2041 int size = num_diff * sizeof(*diff) / sizeof(u32); 2040 2042 int scan_index = 0, ret, i, c; 2041 2043 u32 smp = 0, smps[STM32_ADC_CH_MAX], chans[STM32_ADC_CH_MAX]; ··· 2063 2063 scan_index++; 2064 2064 } 2065 2065 } 2066 - 2067 - ret = device_property_read_u32_array(dev, "st,adc-channels", chans, 2068 - nchans); 2069 - if (ret) 2070 - return ret; 2071 - 2072 - for (c = 0; c < nchans; c++) { 2073 - if (chans[c] >= adc_info->max_channels) { 2074 - dev_err(&indio_dev->dev, "Invalid channel %d\n", 2075 - chans[c]); 2076 - return -EINVAL; 2066 + if (num_se > 0) { 2067 + ret = device_property_read_u32_array(dev, "st,adc-channels", chans, num_se); 2068 + if (ret) { 2069 + dev_err(&indio_dev->dev, "Failed to get st,adc-channels %d\n", ret); 2070 + return ret; 2077 2071 } 2078 2072 2079 - /* Channel can't be configured both as single-ended & diff */ 2080 - for (i = 0; i < num_diff; i++) { 2081 - if (chans[c] == diff[i].vinp) { 2082 - dev_err(&indio_dev->dev, "channel %d misconfigured\n", chans[c]); 2073 + for (c = 0; c < num_se; c++) { 2074 + if (chans[c] >= adc_info->max_channels) { 2075 + dev_err(&indio_dev->dev, "Invalid channel %d\n", 2076 + chans[c]); 2083 2077 return -EINVAL; 2084 2078 } 2079 + 2080 + /* Channel can't be configured both as single-ended & diff */ 2081 + for (i = 0; i < num_diff; i++) { 2082 + if (chans[c] == diff[i].vinp) { 2083 + dev_err(&indio_dev->dev, "channel %d misconfigured\n", 2084 + chans[c]); 2085 + return -EINVAL; 2086 + } 2087 + } 2088 + stm32_adc_chan_init_one(indio_dev, &channels[scan_index], 2089 + chans[c], 0, scan_index, false); 2090 + scan_index++; 2085 2091 } 2086 - stm32_adc_chan_init_one(indio_dev, &channels[scan_index], 2087 - chans[c], 0, scan_index, false); 2088 - scan_index++; 2089 2092 } 2090 2093 2091 2094 if (adc->nsmps > 0) { ··· 2309 2306 2310 2307 if (legacy) 2311 2308 ret = stm32_adc_legacy_chan_init(indio_dev, adc, channels, 2312 - num_channels); 2309 + timestamping ? num_channels - 1 : num_channels); 2313 2310 else 2314 2311 ret = stm32_adc_generic_chan_init(indio_dev, adc, channels); 2315 2312 if (ret < 0)
+1 -1
drivers/iio/addac/ad74413r.c
··· 1007 1007 1008 1008 ret = ad74413r_get_single_adc_result(indio_dev, chan->channel, 1009 1009 val); 1010 - if (ret) 1010 + if (ret < 0) 1011 1011 return ret; 1012 1012 1013 1013 ad74413r_adc_to_resistance_result(*val, val);
+1 -1
drivers/iio/dac/Makefile
··· 17 17 obj-$(CONFIG_AD5592R) += ad5592r.o 18 18 obj-$(CONFIG_AD5593R) += ad5593r.o 19 19 obj-$(CONFIG_AD5755) += ad5755.o 20 - obj-$(CONFIG_AD5755) += ad5758.o 20 + obj-$(CONFIG_AD5758) += ad5758.o 21 21 obj-$(CONFIG_AD5761) += ad5761.o 22 22 obj-$(CONFIG_AD5764) += ad5764.o 23 23 obj-$(CONFIG_AD5766) += ad5766.o
+14 -2
drivers/iio/dac/mcp4725.c
··· 47 47 struct mcp4725_data *data = iio_priv(i2c_get_clientdata( 48 48 to_i2c_client(dev))); 49 49 u8 outbuf[2]; 50 + int ret; 50 51 51 52 outbuf[0] = (data->powerdown_mode + 1) << 4; 52 53 outbuf[1] = 0; 53 54 data->powerdown = true; 54 55 55 - return i2c_master_send(data->client, outbuf, 2); 56 + ret = i2c_master_send(data->client, outbuf, 2); 57 + if (ret < 0) 58 + return ret; 59 + else if (ret != 2) 60 + return -EIO; 61 + return 0; 56 62 } 57 63 58 64 static int mcp4725_resume(struct device *dev) ··· 66 60 struct mcp4725_data *data = iio_priv(i2c_get_clientdata( 67 61 to_i2c_client(dev))); 68 62 u8 outbuf[2]; 63 + int ret; 69 64 70 65 /* restore previous DAC value */ 71 66 outbuf[0] = (data->dac_value >> 8) & 0xf; 72 67 outbuf[1] = data->dac_value & 0xff; 73 68 data->powerdown = false; 74 69 75 - return i2c_master_send(data->client, outbuf, 2); 70 + ret = i2c_master_send(data->client, outbuf, 2); 71 + if (ret < 0) 72 + return ret; 73 + else if (ret != 2) 74 + return -EIO; 75 + return 0; 76 76 } 77 77 static DEFINE_SIMPLE_DEV_PM_OPS(mcp4725_pm_ops, mcp4725_suspend, 78 78 mcp4725_resume);
+5 -5
drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
··· 275 275 { 276 276 struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev); 277 277 struct device *dev = regmap_get_device(st->map); 278 + struct inv_icm42600_timestamp *ts = iio_priv(indio_dev); 278 279 279 280 pm_runtime_get_sync(dev); 281 + 282 + mutex_lock(&st->lock); 283 + inv_icm42600_timestamp_reset(ts); 284 + mutex_unlock(&st->lock); 280 285 281 286 return 0; 282 287 } ··· 380 375 struct device *dev = regmap_get_device(st->map); 381 376 unsigned int sensor; 382 377 unsigned int *watermark; 383 - struct inv_icm42600_timestamp *ts; 384 378 struct inv_icm42600_sensor_conf conf = INV_ICM42600_SENSOR_CONF_INIT; 385 379 unsigned int sleep_temp = 0; 386 380 unsigned int sleep_sensor = 0; ··· 389 385 if (indio_dev == st->indio_gyro) { 390 386 sensor = INV_ICM42600_SENSOR_GYRO; 391 387 watermark = &st->fifo.watermark.gyro; 392 - ts = iio_priv(st->indio_gyro); 393 388 } else if (indio_dev == st->indio_accel) { 394 389 sensor = INV_ICM42600_SENSOR_ACCEL; 395 390 watermark = &st->fifo.watermark.accel; 396 - ts = iio_priv(st->indio_accel); 397 391 } else { 398 392 return -EINVAL; 399 393 } ··· 418 416 /* if FIFO is off, turn temperature off */ 419 417 if (!st->fifo.on) 420 418 ret = inv_icm42600_set_temp_conf(st, false, &sleep_temp); 421 - 422 - inv_icm42600_timestamp_reset(ts); 423 419 424 420 out_unlock: 425 421 mutex_unlock(&st->lock);
+32 -10
drivers/iio/industrialio-gts-helper.c
··· 337 337 return ret; 338 338 } 339 339 340 + static void iio_gts_us_to_int_micro(int *time_us, int *int_micro_times, 341 + int num_times) 342 + { 343 + int i; 344 + 345 + for (i = 0; i < num_times; i++) { 346 + int_micro_times[i * 2] = time_us[i] / 1000000; 347 + int_micro_times[i * 2 + 1] = time_us[i] % 1000000; 348 + } 349 + } 350 + 340 351 /** 341 352 * iio_gts_build_avail_time_table - build table of available integration times 342 353 * @gts: Gain time scale descriptor ··· 362 351 */ 363 352 static int iio_gts_build_avail_time_table(struct iio_gts *gts) 364 353 { 365 - int *times, i, j, idx = 0; 354 + int *times, i, j, idx = 0, *int_micro_times; 366 355 367 356 if (!gts->num_itime) 368 357 return 0; ··· 389 378 } 390 379 } 391 380 } 392 - gts->avail_time_tables = times; 393 - /* 394 - * This is just to survive a unlikely corner-case where times in the 395 - * given time table were not unique. Else we could just trust the 396 - * gts->num_itime. 397 - */ 398 - gts->num_avail_time_tables = idx; 381 + 382 + /* create a list of times formatted as list of IIO_VAL_INT_PLUS_MICRO */ 383 + int_micro_times = kcalloc(idx, sizeof(int) * 2, GFP_KERNEL); 384 + if (int_micro_times) { 385 + /* 386 + * This is just to survive a unlikely corner-case where times in 387 + * the given time table were not unique. Else we could just 388 + * trust the gts->num_itime. 389 + */ 390 + gts->num_avail_time_tables = idx; 391 + iio_gts_us_to_int_micro(times, int_micro_times, idx); 392 + } 393 + 394 + gts->avail_time_tables = int_micro_times; 395 + kfree(times); 396 + 397 + if (!int_micro_times) 398 + return -ENOMEM; 399 399 400 400 return 0; 401 401 } ··· 705 683 return -EINVAL; 706 684 707 685 *vals = gts->avail_time_tables; 708 - *type = IIO_VAL_INT; 709 - *length = gts->num_avail_time_tables; 686 + *type = IIO_VAL_INT_PLUS_MICRO; 687 + *length = gts->num_avail_time_tables * 2; 710 688 711 689 return IIO_AVAIL_LIST; 712 690 }
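A worked example of the conversion: integration times stored in microseconds are split into IIO_VAL_INT_PLUS_MICRO pairs, so a hypothetical 55000 us entry becomes {0, 55000} and 1600000 us becomes {1, 600000}; the reported list length doubles because each time now occupies two slots.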
+20 -6
drivers/iio/light/rohm-bu27034.c
··· 231 231 232 232 static const struct regmap_range bu27034_volatile_ranges[] = { 233 233 { 234 + .range_min = BU27034_REG_SYSTEM_CONTROL, 235 + .range_max = BU27034_REG_SYSTEM_CONTROL, 236 + }, { 234 237 .range_min = BU27034_REG_MODE_CONTROL4, 235 238 .range_max = BU27034_REG_MODE_CONTROL4, 236 239 }, { ··· 1170 1167 1171 1168 switch (mask) { 1172 1169 case IIO_CHAN_INFO_INT_TIME: 1173 - *val = bu27034_get_int_time(data); 1174 - if (*val < 0) 1175 - return *val; 1170 + *val = 0; 1171 + *val2 = bu27034_get_int_time(data); 1172 + if (*val2 < 0) 1173 + return *val2; 1176 1174 1177 - return IIO_VAL_INT; 1175 + return IIO_VAL_INT_PLUS_MICRO; 1178 1176 1179 1177 case IIO_CHAN_INFO_SCALE: 1180 1178 return bu27034_get_scale(data, chan->channel, val, val2); ··· 1233 1229 ret = bu27034_set_scale(data, chan->channel, val, val2); 1234 1230 break; 1235 1231 case IIO_CHAN_INFO_INT_TIME: 1236 - ret = bu27034_try_set_int_time(data, val); 1232 + if (!val) 1233 + ret = bu27034_try_set_int_time(data, val2); 1234 + else 1235 + ret = -EINVAL; 1237 1236 break; 1238 1237 default: 1239 1238 ret = -EINVAL; ··· 1275 1268 int ret, sel; 1276 1269 1277 1270 /* Reset */ 1278 - ret = regmap_update_bits(data->regmap, BU27034_REG_SYSTEM_CONTROL, 1271 + ret = regmap_write_bits(data->regmap, BU27034_REG_SYSTEM_CONTROL, 1279 1272 BU27034_MASK_SW_RESET, BU27034_MASK_SW_RESET); 1280 1273 if (ret) 1281 1274 return dev_err_probe(data->dev, ret, "Sensor reset failed\n"); 1282 1275 1283 1276 msleep(1); 1277 + 1278 + ret = regmap_reinit_cache(data->regmap, &bu27034_regmap); 1279 + if (ret) { 1280 + dev_err(data->dev, "Failed to reinit reg cache\n"); 1281 + return ret; 1282 + } 1283 + 1284 1284 /* 1285 1285 * Read integration time here to ensure it is in regmap cache. We do 1286 1286 * this to speed-up the int-time acquisition in the start of the buffer
+3
drivers/iio/light/vcnl4035.c
··· 8 8 * TODO: Proximity 9 9 */ 10 10 #include <linux/bitops.h> 11 + #include <linux/bitfield.h> 11 12 #include <linux/i2c.h> 12 13 #include <linux/module.h> 13 14 #include <linux/pm_runtime.h> ··· 43 42 #define VCNL4035_ALS_PERS_MASK GENMASK(3, 2) 44 43 #define VCNL4035_INT_ALS_IF_H_MASK BIT(12) 45 44 #define VCNL4035_INT_ALS_IF_L_MASK BIT(13) 45 + #define VCNL4035_DEV_ID_MASK GENMASK(7, 0) 46 46 47 47 /* Default values */ 48 48 #define VCNL4035_MODE_ALS_ENABLE BIT(0) ··· 415 413 return ret; 416 414 } 417 415 416 + id = FIELD_GET(VCNL4035_DEV_ID_MASK, id); 418 417 if (id != VCNL4035_DEV_ID_VAL) { 419 418 dev_err(&data->client->dev, "Wrong id, got %x, expected %x\n", 420 419 id, VCNL4035_DEV_ID_VAL);
+3 -2
drivers/iio/magnetometer/tmag5273.c
··· 296 296 return ret; 297 297 298 298 ret = tmag5273_get_measure(data, &t, &x, &y, &z, &angle, &magnitude); 299 - if (ret) 300 - return ret; 301 299 302 300 pm_runtime_mark_last_busy(data->dev); 303 301 pm_runtime_put_autosuspend(data->dev); 302 + 303 + if (ret) 304 + return ret; 304 305 305 306 switch (chan->address) { 306 307 case TEMPERATURE:
+1 -3
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 3341 3341 udwr.remote_qkey = gsi_sqp->qplib_qp.qkey; 3342 3342 3343 3343 /* post data received in the send queue */ 3344 - rc = bnxt_re_post_send_shadow_qp(rdev, gsi_sqp, swr); 3345 - 3346 - return 0; 3344 + return bnxt_re_post_send_shadow_qp(rdev, gsi_sqp, swr); 3347 3345 } 3348 3346 3349 3347 static void bnxt_re_process_res_rawqp1_wc(struct ib_wc *wc,
+4
drivers/infiniband/hw/bnxt_re/main.c
··· 1336 1336 { 1337 1337 struct bnxt_qplib_cc_param cc_param = {}; 1338 1338 1339 + /* Do not enable congestion control on VFs */ 1340 + if (rdev->is_virtfn) 1341 + return; 1342 + 1339 1343 /* Currently enabling only for GenP5 adapters */ 1340 1344 if (!bnxt_qplib_is_chip_gen_p5(rdev->chip_ctx)) 1341 1345 return;
+6 -5
drivers/infiniband/hw/bnxt_re/qplib_fp.c
··· 2056 2056 u32 pg_sz_lvl; 2057 2057 int rc; 2058 2058 2059 + if (!cq->dpi) { 2060 + dev_err(&rcfw->pdev->dev, 2061 + "FP: CREATE_CQ failed due to NULL DPI\n"); 2062 + return -EINVAL; 2063 + } 2064 + 2059 2065 hwq_attr.res = res; 2060 2066 hwq_attr.depth = cq->max_wqe; 2061 2067 hwq_attr.stride = sizeof(struct cq_base); ··· 2075 2069 CMDQ_BASE_OPCODE_CREATE_CQ, 2076 2070 sizeof(req)); 2077 2071 2078 - if (!cq->dpi) { 2079 - dev_err(&rcfw->pdev->dev, 2080 - "FP: CREATE_CQ failed due to NULL DPI\n"); 2081 - return -EINVAL; 2082 - } 2083 2072 req.dpi = cpu_to_le32(cq->dpi->dpi); 2084 2073 req.cq_handle = cpu_to_le64(cq->cq_handle); 2085 2074 req.cq_size = cpu_to_le32(cq->hwq.max_elements);
+2 -10
drivers/infiniband/hw/bnxt_re/qplib_res.c
··· 215 215 return -EINVAL; 216 216 hwq_attr->sginfo->npages = npages; 217 217 } else { 218 - unsigned long sginfo_num_pages = ib_umem_num_dma_blocks( 219 - hwq_attr->sginfo->umem, hwq_attr->sginfo->pgsize); 220 - 218 + npages = ib_umem_num_dma_blocks(hwq_attr->sginfo->umem, 219 + hwq_attr->sginfo->pgsize); 221 220 hwq->is_user = true; 222 - npages = sginfo_num_pages; 223 - npages = (npages * PAGE_SIZE) / 224 - BIT_ULL(hwq_attr->sginfo->pgshft); 225 - if ((sginfo_num_pages * PAGE_SIZE) % 226 - BIT_ULL(hwq_attr->sginfo->pgshft)) 227 - if (!npages) 228 - npages++; 229 221 } 230 222 231 223 if (npages == MAX_PBL_LVL_0_PGS && !hwq_attr->sginfo->nopte) {
+3 -4
drivers/infiniband/hw/bnxt_re/qplib_sp.c
··· 617 617 /* Free the hwq if it already exist, must be a rereg */ 618 618 if (mr->hwq.max_elements) 619 619 bnxt_qplib_free_hwq(res, &mr->hwq); 620 - /* Use system PAGE_SIZE */ 621 620 hwq_attr.res = res; 622 621 hwq_attr.depth = pages; 623 - hwq_attr.stride = buf_pg_size; 622 + hwq_attr.stride = sizeof(dma_addr_t); 624 623 hwq_attr.type = HWQ_TYPE_MR; 625 624 hwq_attr.sginfo = &sginfo; 626 625 hwq_attr.sginfo->umem = umem; 627 626 hwq_attr.sginfo->npages = pages; 628 - hwq_attr.sginfo->pgsize = PAGE_SIZE; 629 - hwq_attr.sginfo->pgshft = PAGE_SHIFT; 627 + hwq_attr.sginfo->pgsize = buf_pg_size; 628 + hwq_attr.sginfo->pgshft = ilog2(buf_pg_size); 630 629 rc = bnxt_qplib_alloc_init_hwq(&mr->hwq, &hwq_attr); 631 630 if (rc) { 632 631 dev_err(&res->pdev->dev,
+1 -1
drivers/infiniband/hw/efa/efa_verbs.c
··· 1403 1403 */ 1404 1404 static int pbl_indirect_initialize(struct efa_dev *dev, struct pbl_context *pbl) 1405 1405 { 1406 - u32 size_in_pages = DIV_ROUND_UP(pbl->pbl_buf_size_in_bytes, PAGE_SIZE); 1406 + u32 size_in_pages = DIV_ROUND_UP(pbl->pbl_buf_size_in_bytes, EFA_CHUNK_PAYLOAD_SIZE); 1407 1407 struct scatterlist *sgl; 1408 1408 int sg_dma_cnt, err; 1409 1409
+17 -8
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 4583 4583 mtu = ib_mtu_enum_to_int(ib_mtu); 4584 4584 if (WARN_ON(mtu <= 0)) 4585 4585 return -EINVAL; 4586 - #define MAX_LP_MSG_LEN 16384 4587 - /* MTU * (2 ^ LP_PKTN_INI) shouldn't be bigger than 16KB */ 4588 - lp_pktn_ini = ilog2(MAX_LP_MSG_LEN / mtu); 4589 - if (WARN_ON(lp_pktn_ini >= 0xF)) 4590 - return -EINVAL; 4586 + #define MIN_LP_MSG_LEN 1024 4587 + /* mtu * (2 ^ lp_pktn_ini) should be in the range of 1024 to mtu */ 4588 + lp_pktn_ini = ilog2(max(mtu, MIN_LP_MSG_LEN) / mtu); 4591 4589 4592 4590 if (attr_mask & IB_QP_PATH_MTU) { 4593 4591 hr_reg_write(context, QPC_MTU, ib_mtu); ··· 5010 5012 static bool check_qp_timeout_cfg_range(struct hns_roce_dev *hr_dev, u8 *timeout) 5011 5013 { 5012 5014 #define QP_ACK_TIMEOUT_MAX_HIP08 20 5013 - #define QP_ACK_TIMEOUT_OFFSET 10 5014 5015 #define QP_ACK_TIMEOUT_MAX 31 5015 5016 5016 5017 if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) { ··· 5018 5021 "local ACK timeout shall be 0 to 20.\n"); 5019 5022 return false; 5020 5023 } 5021 - *timeout += QP_ACK_TIMEOUT_OFFSET; 5024 + *timeout += HNS_ROCE_V2_QP_ACK_TIMEOUT_OFS_HIP08; 5022 5025 } else if (hr_dev->pci_dev->revision > PCI_REVISION_ID_HIP08) { 5023 5026 if (*timeout > QP_ACK_TIMEOUT_MAX) { 5024 5027 ibdev_warn(&hr_dev->ib_dev, ··· 5304 5307 return ret; 5305 5308 } 5306 5309 5310 + static u8 get_qp_timeout_attr(struct hns_roce_dev *hr_dev, 5311 + struct hns_roce_v2_qp_context *context) 5312 + { 5313 + u8 timeout; 5314 + 5315 + timeout = (u8)hr_reg_read(context, QPC_AT); 5316 + if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) 5317 + timeout -= HNS_ROCE_V2_QP_ACK_TIMEOUT_OFS_HIP08; 5318 + 5319 + return timeout; 5320 + } 5321 + 5307 5322 static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr, 5308 5323 int qp_attr_mask, 5309 5324 struct ib_qp_init_attr *qp_init_attr) ··· 5393 5384 qp_attr->max_dest_rd_atomic = 1 << hr_reg_read(&context, QPC_RR_MAX); 5394 5385 5395 5386 qp_attr->min_rnr_timer = (u8)hr_reg_read(&context, QPC_MIN_RNR_TIME); 5396 - qp_attr->timeout = (u8)hr_reg_read(&context, QPC_AT); 5387 + qp_attr->timeout = get_qp_timeout_attr(hr_dev, &context); 5397 5388 qp_attr->retry_cnt = hr_reg_read(&context, QPC_RETRY_NUM_INIT); 5398 5389 qp_attr->rnr_retry = hr_reg_read(&context, QPC_RNR_NUM_INIT); 5399 5390
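The hns_roce change replaces the old 16 KB cap with lp_pktn_ini = ilog2(max(mtu, 1024) / mtu), so mtu << lp_pktn_ini lands between 1024 bytes and the MTU itself. A small worked example over the standard IB MTU values; ilog2_u32() and max_u32() are local stand-ins for the kernel helpers:

#include <stdio.h>

#define MIN_LP_MSG_LEN 1024

/* ilog2() equivalent for non-zero values: index of the highest set bit. */
static unsigned int ilog2_u32(unsigned int v)
{
	return 31 - __builtin_clz(v);
}

static unsigned int max_u32(unsigned int a, unsigned int b)
{
	return a > b ? a : b;
}

int main(void)
{
	/* IB path MTUs enumerate as 256, 512, 1024, 2048, 4096 bytes. */
	const unsigned int mtus[] = { 256, 512, 1024, 2048, 4096 };
	int i;

	for (i = 0; i < 5; i++) {
		unsigned int mtu = mtus[i];
		unsigned int lp_pktn_ini = ilog2_u32(max_u32(mtu, MIN_LP_MSG_LEN) / mtu);

		/* mtu << lp_pktn_ini is the message length the field encodes. */
		printf("mtu=%-4u lp_pktn_ini=%u -> %u bytes\n",
		       mtu, lp_pktn_ini, mtu << lp_pktn_ini);
	}
	return 0;
}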
+2
drivers/infiniband/hw/hns/hns_roce_hw_v2.h
··· 44 44 #define HNS_ROCE_V2_MAX_XRCD_NUM 0x1000000 45 45 #define HNS_ROCE_V2_RSV_XRCD_NUM 0 46 46 47 + #define HNS_ROCE_V2_QP_ACK_TIMEOUT_OFS_HIP08 10 48 + 47 49 #define HNS_ROCE_V3_SCCC_SZ 64 48 50 #define HNS_ROCE_V3_GMV_ENTRY_SZ 32 49 51
+43
drivers/infiniband/hw/hns/hns_roce_mr.c
··· 33 33 34 34 #include <linux/vmalloc.h> 35 35 #include <rdma/ib_umem.h> 36 + #include <linux/math.h> 36 37 #include "hns_roce_device.h" 37 38 #include "hns_roce_cmd.h" 38 39 #include "hns_roce_hem.h" ··· 910 909 return page_cnt; 911 910 } 912 911 912 + static u64 cal_pages_per_l1ba(unsigned int ba_per_bt, unsigned int hopnum) 913 + { 914 + return int_pow(ba_per_bt, hopnum - 1); 915 + } 916 + 917 + static unsigned int cal_best_bt_pg_sz(struct hns_roce_dev *hr_dev, 918 + struct hns_roce_mtr *mtr, 919 + unsigned int pg_shift) 920 + { 921 + unsigned long cap = hr_dev->caps.page_size_cap; 922 + struct hns_roce_buf_region *re; 923 + unsigned int pgs_per_l1ba; 924 + unsigned int ba_per_bt; 925 + unsigned int ba_num; 926 + int i; 927 + 928 + for_each_set_bit_from(pg_shift, &cap, sizeof(cap) * BITS_PER_BYTE) { 929 + if (!(BIT(pg_shift) & cap)) 930 + continue; 931 + 932 + ba_per_bt = BIT(pg_shift) / BA_BYTE_LEN; 933 + ba_num = 0; 934 + for (i = 0; i < mtr->hem_cfg.region_count; i++) { 935 + re = &mtr->hem_cfg.region[i]; 936 + if (re->hopnum == 0) 937 + continue; 938 + 939 + pgs_per_l1ba = cal_pages_per_l1ba(ba_per_bt, re->hopnum); 940 + ba_num += DIV_ROUND_UP(re->count, pgs_per_l1ba); 941 + } 942 + 943 + if (ba_num <= ba_per_bt) 944 + return pg_shift; 945 + } 946 + 947 + return 0; 948 + } 949 + 913 950 static int mtr_alloc_mtt(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr, 914 951 unsigned int ba_page_shift) 915 952 { ··· 956 917 957 918 hns_roce_hem_list_init(&mtr->hem_list); 958 919 if (!cfg->is_direct) { 920 + ba_page_shift = cal_best_bt_pg_sz(hr_dev, mtr, ba_page_shift); 921 + if (!ba_page_shift) 922 + return -ERANGE; 923 + 959 924 ret = hns_roce_hem_list_request(hr_dev, &mtr->hem_list, 960 925 cfg->region, cfg->region_count, 961 926 ba_page_shift);
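cal_best_bt_pg_sz() above picks the smallest supported BA page size whose single base-address table can still cover every region: with BA_BYTE_LEN-sized entries, one level-1 BA reaches ba_per_bt^(hopnum - 1) buffer pages. A standalone sketch of that capacity check with made-up region numbers:

#include <stdio.h>
#include <stdint.h>

#define BA_BYTE_LEN 8	/* each base-address entry is 8 bytes */

/* Integer power, mirroring the kernel's int_pow(). */
static uint64_t int_pow_u64(uint64_t base, unsigned int exp)
{
	uint64_t r = 1;

	while (exp--)
		r *= base;
	return r;
}

int main(void)
{
	/* Hypothetical region: 1,000,000 buffer pages reached through 2 hops. */
	uint64_t region_pages = 1000000;
	unsigned int hopnum = 2;
	unsigned int shift;

	for (shift = 12; shift <= 16; shift++) {
		uint64_t ba_per_bt = (1ULL << shift) / BA_BYTE_LEN;
		uint64_t pgs_per_l1ba = int_pow_u64(ba_per_bt, hopnum - 1);
		uint64_t ba_num = (region_pages + pgs_per_l1ba - 1) / pgs_per_l1ba;

		printf("pg=%2uKiB ba_per_bt=%llu ba_num=%llu -> %s\n",
		       1u << (shift - 10),
		       (unsigned long long)ba_per_bt,
		       (unsigned long long)ba_num,
		       ba_num <= ba_per_bt ? "fits in one BT" : "too small");
	}
	return 0;
}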
+7 -5
drivers/infiniband/hw/irdma/verbs.c
··· 522 522 if (!iwqp->user_mode) 523 523 cancel_delayed_work_sync(&iwqp->dwork_flush); 524 524 525 - irdma_qp_rem_ref(&iwqp->ibqp); 526 - wait_for_completion(&iwqp->free_qp); 527 - irdma_free_lsmm_rsrc(iwqp); 528 - irdma_cqp_qp_destroy_cmd(&iwdev->rf->sc_dev, &iwqp->sc_qp); 529 - 530 525 if (!iwqp->user_mode) { 531 526 if (iwqp->iwscq) { 532 527 irdma_clean_cqes(iwqp, iwqp->iwscq); ··· 529 534 irdma_clean_cqes(iwqp, iwqp->iwrcq); 530 535 } 531 536 } 537 + 538 + irdma_qp_rem_ref(&iwqp->ibqp); 539 + wait_for_completion(&iwqp->free_qp); 540 + irdma_free_lsmm_rsrc(iwqp); 541 + irdma_cqp_qp_destroy_cmd(&iwdev->rf->sc_dev, &iwqp->sc_qp); 542 + 532 543 irdma_remove_push_mmap_entries(iwqp); 533 544 irdma_free_qp_rsrc(iwqp); 534 545 ··· 3292 3291 break; 3293 3292 case IB_WR_LOCAL_INV: 3294 3293 info.op_type = IRDMA_OP_TYPE_INV_STAG; 3294 + info.local_fence = info.read_fence; 3295 3295 info.op.inv_local_stag.target_stag = ib_wr->ex.invalidate_rkey; 3296 3296 err = irdma_uk_stag_local_invalidate(ukqp, &info, true); 3297 3297 break;
+16 -10
drivers/infiniband/sw/rxe/rxe_comp.c
··· 115 115 void retransmit_timer(struct timer_list *t) 116 116 { 117 117 struct rxe_qp *qp = from_timer(qp, t, retrans_timer); 118 + unsigned long flags; 118 119 119 120 rxe_dbg_qp(qp, "retransmit timer fired\n"); 120 121 121 - spin_lock_bh(&qp->state_lock); 122 + spin_lock_irqsave(&qp->state_lock, flags); 122 123 if (qp->valid) { 123 124 qp->comp.timeout = 1; 124 125 rxe_sched_task(&qp->comp.task); 125 126 } 126 - spin_unlock_bh(&qp->state_lock); 127 + spin_unlock_irqrestore(&qp->state_lock, flags); 127 128 } 128 129 129 130 void rxe_comp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb) ··· 482 481 483 482 static void comp_check_sq_drain_done(struct rxe_qp *qp) 484 483 { 485 - spin_lock_bh(&qp->state_lock); 484 + unsigned long flags; 485 + 486 + spin_lock_irqsave(&qp->state_lock, flags); 486 487 if (unlikely(qp_state(qp) == IB_QPS_SQD)) { 487 488 if (qp->attr.sq_draining && qp->comp.psn == qp->req.psn) { 488 489 qp->attr.sq_draining = 0; 489 - spin_unlock_bh(&qp->state_lock); 490 + spin_unlock_irqrestore(&qp->state_lock, flags); 490 491 491 492 if (qp->ibqp.event_handler) { 492 493 struct ib_event ev; ··· 502 499 return; 503 500 } 504 501 } 505 - spin_unlock_bh(&qp->state_lock); 502 + spin_unlock_irqrestore(&qp->state_lock, flags); 506 503 } 507 504 508 505 static inline enum comp_state complete_ack(struct rxe_qp *qp, ··· 628 625 */ 629 626 static void reset_retry_timer(struct rxe_qp *qp) 630 627 { 628 + unsigned long flags; 629 + 631 630 if (qp_type(qp) == IB_QPT_RC && qp->qp_timeout_jiffies) { 632 - spin_lock_bh(&qp->state_lock); 631 + spin_lock_irqsave(&qp->state_lock, flags); 633 632 if (qp_state(qp) >= IB_QPS_RTS && 634 633 psn_compare(qp->req.psn, qp->comp.psn) > 0) 635 634 mod_timer(&qp->retrans_timer, 636 635 jiffies + qp->qp_timeout_jiffies); 637 - spin_unlock_bh(&qp->state_lock); 636 + spin_unlock_irqrestore(&qp->state_lock, flags); 638 637 } 639 638 } 640 639 ··· 648 643 struct rxe_pkt_info *pkt = NULL; 649 644 enum comp_state state; 650 645 int ret; 646 + unsigned long flags; 651 647 652 - spin_lock_bh(&qp->state_lock); 648 + spin_lock_irqsave(&qp->state_lock, flags); 653 649 if (!qp->valid || qp_state(qp) == IB_QPS_ERR || 654 650 qp_state(qp) == IB_QPS_RESET) { 655 651 bool notify = qp->valid && (qp_state(qp) == IB_QPS_ERR); 656 652 657 653 drain_resp_pkts(qp); 658 654 flush_send_queue(qp, notify); 659 - spin_unlock_bh(&qp->state_lock); 655 + spin_unlock_irqrestore(&qp->state_lock, flags); 660 656 goto exit; 661 657 } 662 - spin_unlock_bh(&qp->state_lock); 658 + spin_unlock_irqrestore(&qp->state_lock, flags); 663 659 664 660 if (qp->comp.timeout) { 665 661 qp->comp.timeout_retry = 1;
+4 -3
drivers/infiniband/sw/rxe/rxe_net.c
··· 412 412 int err; 413 413 int is_request = pkt->mask & RXE_REQ_MASK; 414 414 struct rxe_dev *rxe = to_rdev(qp->ibqp.device); 415 + unsigned long flags; 415 416 416 - spin_lock_bh(&qp->state_lock); 417 + spin_lock_irqsave(&qp->state_lock, flags); 417 418 if ((is_request && (qp_state(qp) < IB_QPS_RTS)) || 418 419 (!is_request && (qp_state(qp) < IB_QPS_RTR))) { 419 - spin_unlock_bh(&qp->state_lock); 420 + spin_unlock_irqrestore(&qp->state_lock, flags); 420 421 rxe_dbg_qp(qp, "Packet dropped. QP is not in ready state\n"); 421 422 goto drop; 422 423 } 423 - spin_unlock_bh(&qp->state_lock); 424 + spin_unlock_irqrestore(&qp->state_lock, flags); 424 425 425 426 rxe_icrc_generate(skb, pkt); 426 427
+24 -13
drivers/infiniband/sw/rxe/rxe_qp.c
··· 300 300 struct rxe_cq *rcq = to_rcq(init->recv_cq); 301 301 struct rxe_cq *scq = to_rcq(init->send_cq); 302 302 struct rxe_srq *srq = init->srq ? to_rsrq(init->srq) : NULL; 303 + unsigned long flags; 303 304 304 305 rxe_get(pd); 305 306 rxe_get(rcq); ··· 326 325 if (err) 327 326 goto err2; 328 327 329 - spin_lock_bh(&qp->state_lock); 328 + spin_lock_irqsave(&qp->state_lock, flags); 330 329 qp->attr.qp_state = IB_QPS_RESET; 331 330 qp->valid = 1; 332 - spin_unlock_bh(&qp->state_lock); 331 + spin_unlock_irqrestore(&qp->state_lock, flags); 333 332 334 333 return 0; 335 334 ··· 493 492 /* move the qp to the error state */ 494 493 void rxe_qp_error(struct rxe_qp *qp) 495 494 { 496 - spin_lock_bh(&qp->state_lock); 495 + unsigned long flags; 496 + 497 + spin_lock_irqsave(&qp->state_lock, flags); 497 498 qp->attr.qp_state = IB_QPS_ERR; 498 499 499 500 /* drain work and packet queues */ 500 501 rxe_sched_task(&qp->resp.task); 501 502 rxe_sched_task(&qp->comp.task); 502 503 rxe_sched_task(&qp->req.task); 503 - spin_unlock_bh(&qp->state_lock); 504 + spin_unlock_irqrestore(&qp->state_lock, flags); 504 505 } 505 506 506 507 static void rxe_qp_sqd(struct rxe_qp *qp, struct ib_qp_attr *attr, 507 508 int mask) 508 509 { 509 - spin_lock_bh(&qp->state_lock); 510 + unsigned long flags; 511 + 512 + spin_lock_irqsave(&qp->state_lock, flags); 510 513 qp->attr.sq_draining = 1; 511 514 rxe_sched_task(&qp->comp.task); 512 515 rxe_sched_task(&qp->req.task); 513 - spin_unlock_bh(&qp->state_lock); 516 + spin_unlock_irqrestore(&qp->state_lock, flags); 514 517 } 515 518 516 519 /* caller should hold qp->state_lock */ ··· 560 555 qp->attr.cur_qp_state = attr->qp_state; 561 556 562 557 if (mask & IB_QP_STATE) { 563 - spin_lock_bh(&qp->state_lock); 558 + unsigned long flags; 559 + 560 + spin_lock_irqsave(&qp->state_lock, flags); 564 561 err = __qp_chk_state(qp, attr, mask); 565 562 if (!err) { 566 563 qp->attr.qp_state = attr->qp_state; 567 564 rxe_dbg_qp(qp, "state -> %s\n", 568 565 qps2str[attr->qp_state]); 569 566 } 570 - spin_unlock_bh(&qp->state_lock); 567 + spin_unlock_irqrestore(&qp->state_lock, flags); 571 568 572 569 if (err) 573 570 return err; ··· 695 688 /* called by the query qp verb */ 696 689 int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask) 697 690 { 691 + unsigned long flags; 692 + 698 693 *attr = qp->attr; 699 694 700 695 attr->rq_psn = qp->resp.psn; ··· 717 708 /* Applications that get this state typically spin on it. 718 709 * Yield the processor 719 710 */ 720 - spin_lock_bh(&qp->state_lock); 711 + spin_lock_irqsave(&qp->state_lock, flags); 721 712 if (qp->attr.sq_draining) { 722 - spin_unlock_bh(&qp->state_lock); 713 + spin_unlock_irqrestore(&qp->state_lock, flags); 723 714 cond_resched(); 715 + } else { 716 + spin_unlock_irqrestore(&qp->state_lock, flags); 724 717 } 725 - spin_unlock_bh(&qp->state_lock); 726 718 727 719 return 0; 728 720 } ··· 746 736 static void rxe_qp_do_cleanup(struct work_struct *work) 747 737 { 748 738 struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work); 739 + unsigned long flags; 749 740 750 - spin_lock_bh(&qp->state_lock); 741 + spin_lock_irqsave(&qp->state_lock, flags); 751 742 qp->valid = 0; 752 - spin_unlock_bh(&qp->state_lock); 743 + spin_unlock_irqrestore(&qp->state_lock, flags); 753 744 qp->qp_timeout_jiffies = 0; 754 745 755 746 if (qp_type(qp) == IB_QPT_RC) {
+5 -4
drivers/infiniband/sw/rxe/rxe_recv.c
··· 14 14 struct rxe_qp *qp) 15 15 { 16 16 unsigned int pkt_type; 17 + unsigned long flags; 17 18 18 19 if (unlikely(!qp->valid)) 19 20 return -EINVAL; ··· 39 38 return -EINVAL; 40 39 } 41 40 42 - spin_lock_bh(&qp->state_lock); 41 + spin_lock_irqsave(&qp->state_lock, flags); 43 42 if (pkt->mask & RXE_REQ_MASK) { 44 43 if (unlikely(qp_state(qp) < IB_QPS_RTR)) { 45 - spin_unlock_bh(&qp->state_lock); 44 + spin_unlock_irqrestore(&qp->state_lock, flags); 46 45 return -EINVAL; 47 46 } 48 47 } else { 49 48 if (unlikely(qp_state(qp) < IB_QPS_RTS)) { 50 - spin_unlock_bh(&qp->state_lock); 49 + spin_unlock_irqrestore(&qp->state_lock, flags); 51 50 return -EINVAL; 52 51 } 53 52 } 54 - spin_unlock_bh(&qp->state_lock); 53 + spin_unlock_irqrestore(&qp->state_lock, flags); 55 54 56 55 return 0; 57 56 }
+17 -13
drivers/infiniband/sw/rxe/rxe_req.c
··· 99 99 void rnr_nak_timer(struct timer_list *t) 100 100 { 101 101 struct rxe_qp *qp = from_timer(qp, t, rnr_nak_timer); 102 + unsigned long flags; 102 103 103 104 rxe_dbg_qp(qp, "nak timer fired\n"); 104 105 105 - spin_lock_bh(&qp->state_lock); 106 + spin_lock_irqsave(&qp->state_lock, flags); 106 107 if (qp->valid) { 107 108 /* request a send queue retry */ 108 109 qp->req.need_retry = 1; 109 110 qp->req.wait_for_rnr_timer = 0; 110 111 rxe_sched_task(&qp->req.task); 111 112 } 112 - spin_unlock_bh(&qp->state_lock); 113 + spin_unlock_irqrestore(&qp->state_lock, flags); 113 114 } 114 115 115 116 static void req_check_sq_drain_done(struct rxe_qp *qp) ··· 119 118 unsigned int index; 120 119 unsigned int cons; 121 120 struct rxe_send_wqe *wqe; 121 + unsigned long flags; 122 122 123 - spin_lock_bh(&qp->state_lock); 123 + spin_lock_irqsave(&qp->state_lock, flags); 124 124 if (qp_state(qp) == IB_QPS_SQD) { 125 125 q = qp->sq.queue; 126 126 index = qp->req.wqe_index; ··· 142 140 break; 143 141 144 142 qp->attr.sq_draining = 0; 145 - spin_unlock_bh(&qp->state_lock); 143 + spin_unlock_irqrestore(&qp->state_lock, flags); 146 144 147 145 if (qp->ibqp.event_handler) { 148 146 struct ib_event ev; ··· 156 154 return; 157 155 } while (0); 158 156 } 159 - spin_unlock_bh(&qp->state_lock); 157 + spin_unlock_irqrestore(&qp->state_lock, flags); 160 158 } 161 159 162 160 static struct rxe_send_wqe *__req_next_wqe(struct rxe_qp *qp) ··· 175 173 static struct rxe_send_wqe *req_next_wqe(struct rxe_qp *qp) 176 174 { 177 175 struct rxe_send_wqe *wqe; 176 + unsigned long flags; 178 177 179 178 req_check_sq_drain_done(qp); 180 179 ··· 183 180 if (wqe == NULL) 184 181 return NULL; 185 182 186 - spin_lock_bh(&qp->state_lock); 183 + spin_lock_irqsave(&qp->state_lock, flags); 187 184 if (unlikely((qp_state(qp) == IB_QPS_SQD) && 188 185 (wqe->state != wqe_state_processing))) { 189 - spin_unlock_bh(&qp->state_lock); 186 + spin_unlock_irqrestore(&qp->state_lock, flags); 190 187 return NULL; 191 188 } 192 - spin_unlock_bh(&qp->state_lock); 189 + spin_unlock_irqrestore(&qp->state_lock, flags); 193 190 194 191 wqe->mask = wr_opcode_mask(wqe->wr.opcode, qp); 195 192 return wqe; ··· 679 676 struct rxe_queue *q = qp->sq.queue; 680 677 struct rxe_ah *ah; 681 678 struct rxe_av *av; 679 + unsigned long flags; 682 680 683 - spin_lock_bh(&qp->state_lock); 681 + spin_lock_irqsave(&qp->state_lock, flags); 684 682 if (unlikely(!qp->valid)) { 685 - spin_unlock_bh(&qp->state_lock); 683 + spin_unlock_irqrestore(&qp->state_lock, flags); 686 684 goto exit; 687 685 } 688 686 689 687 if (unlikely(qp_state(qp) == IB_QPS_ERR)) { 690 688 wqe = __req_next_wqe(qp); 691 - spin_unlock_bh(&qp->state_lock); 689 + spin_unlock_irqrestore(&qp->state_lock, flags); 692 690 if (wqe) 693 691 goto err; 694 692 else ··· 704 700 qp->req.wait_psn = 0; 705 701 qp->req.need_retry = 0; 706 702 qp->req.wait_for_rnr_timer = 0; 707 - spin_unlock_bh(&qp->state_lock); 703 + spin_unlock_irqrestore(&qp->state_lock, flags); 708 704 goto exit; 709 705 } 710 - spin_unlock_bh(&qp->state_lock); 706 + spin_unlock_irqrestore(&qp->state_lock, flags); 711 707 712 708 /* we come here if the retransmit timer has fired 713 709 * or if the rnr timer has fired. If the retransmit
+8 -6
drivers/infiniband/sw/rxe/rxe_resp.c
··· 1047 1047 struct ib_uverbs_wc *uwc = &cqe.uibwc; 1048 1048 struct rxe_recv_wqe *wqe = qp->resp.wqe; 1049 1049 struct rxe_dev *rxe = to_rdev(qp->ibqp.device); 1050 + unsigned long flags; 1050 1051 1051 1052 if (!wqe) 1052 1053 goto finish; ··· 1138 1137 return RESPST_ERR_CQ_OVERFLOW; 1139 1138 1140 1139 finish: 1141 - spin_lock_bh(&qp->state_lock); 1140 + spin_lock_irqsave(&qp->state_lock, flags); 1142 1141 if (unlikely(qp_state(qp) == IB_QPS_ERR)) { 1143 - spin_unlock_bh(&qp->state_lock); 1142 + spin_unlock_irqrestore(&qp->state_lock, flags); 1144 1143 return RESPST_CHK_RESOURCE; 1145 1144 } 1146 - spin_unlock_bh(&qp->state_lock); 1145 + spin_unlock_irqrestore(&qp->state_lock, flags); 1147 1146 1148 1147 if (unlikely(!pkt)) 1149 1148 return RESPST_DONE; ··· 1469 1468 enum resp_states state; 1470 1469 struct rxe_pkt_info *pkt = NULL; 1471 1470 int ret; 1471 + unsigned long flags; 1472 1472 1473 - spin_lock_bh(&qp->state_lock); 1473 + spin_lock_irqsave(&qp->state_lock, flags); 1474 1474 if (!qp->valid || qp_state(qp) == IB_QPS_ERR || 1475 1475 qp_state(qp) == IB_QPS_RESET) { 1476 1476 bool notify = qp->valid && (qp_state(qp) == IB_QPS_ERR); 1477 1477 1478 1478 drain_req_pkts(qp); 1479 1479 flush_recv_queue(qp, notify); 1480 - spin_unlock_bh(&qp->state_lock); 1480 + spin_unlock_irqrestore(&qp->state_lock, flags); 1481 1481 goto exit; 1482 1482 } 1483 - spin_unlock_bh(&qp->state_lock); 1483 + spin_unlock_irqrestore(&qp->state_lock, flags); 1484 1484 1485 1485 qp->resp.aeth_syndrome = AETH_ACK_UNLIMITED; 1486 1486
+13 -12
drivers/infiniband/sw/rxe/rxe_verbs.c
··· 904 904 if (!err) 905 905 rxe_sched_task(&qp->req.task); 906 906 907 - spin_lock_bh(&qp->state_lock); 907 + spin_lock_irqsave(&qp->state_lock, flags); 908 908 if (qp_state(qp) == IB_QPS_ERR) 909 909 rxe_sched_task(&qp->comp.task); 910 - spin_unlock_bh(&qp->state_lock); 910 + spin_unlock_irqrestore(&qp->state_lock, flags); 911 911 912 912 return err; 913 913 } ··· 917 917 { 918 918 struct rxe_qp *qp = to_rqp(ibqp); 919 919 int err; 920 + unsigned long flags; 920 921 921 - spin_lock_bh(&qp->state_lock); 922 + spin_lock_irqsave(&qp->state_lock, flags); 922 923 /* caller has already called destroy_qp */ 923 924 if (WARN_ON_ONCE(!qp->valid)) { 924 - spin_unlock_bh(&qp->state_lock); 925 + spin_unlock_irqrestore(&qp->state_lock, flags); 925 926 rxe_err_qp(qp, "qp has been destroyed"); 926 927 return -EINVAL; 927 928 } 928 929 929 930 if (unlikely(qp_state(qp) < IB_QPS_RTS)) { 930 - spin_unlock_bh(&qp->state_lock); 931 + spin_unlock_irqrestore(&qp->state_lock, flags); 931 932 *bad_wr = wr; 932 933 rxe_err_qp(qp, "qp not ready to send"); 933 934 return -EINVAL; 934 935 } 935 - spin_unlock_bh(&qp->state_lock); 936 + spin_unlock_irqrestore(&qp->state_lock, flags); 936 937 937 938 if (qp->is_user) { 938 939 /* Utilize process context to do protocol processing */ ··· 1009 1008 struct rxe_rq *rq = &qp->rq; 1010 1009 unsigned long flags; 1011 1010 1012 - spin_lock_bh(&qp->state_lock); 1011 + spin_lock_irqsave(&qp->state_lock, flags); 1013 1012 /* caller has already called destroy_qp */ 1014 1013 if (WARN_ON_ONCE(!qp->valid)) { 1015 - spin_unlock_bh(&qp->state_lock); 1014 + spin_unlock_irqrestore(&qp->state_lock, flags); 1016 1015 rxe_err_qp(qp, "qp has been destroyed"); 1017 1016 return -EINVAL; 1018 1017 } 1019 1018 1020 1019 /* see C10-97.2.1 */ 1021 1020 if (unlikely((qp_state(qp) < IB_QPS_INIT))) { 1022 - spin_unlock_bh(&qp->state_lock); 1021 + spin_unlock_irqrestore(&qp->state_lock, flags); 1023 1022 *bad_wr = wr; 1024 1023 rxe_dbg_qp(qp, "qp not ready to post recv"); 1025 1024 return -EINVAL; 1026 1025 } 1027 - spin_unlock_bh(&qp->state_lock); 1026 + spin_unlock_irqrestore(&qp->state_lock, flags); 1028 1027 1029 1028 if (unlikely(qp->srq)) { 1030 1029 *bad_wr = wr; ··· 1045 1044 1046 1045 spin_unlock_irqrestore(&rq->producer_lock, flags); 1047 1046 1048 - spin_lock_bh(&qp->state_lock); 1047 + spin_lock_irqsave(&qp->state_lock, flags); 1049 1048 if (qp_state(qp) == IB_QPS_ERR) 1050 1049 rxe_sched_task(&qp->resp.task); 1051 - spin_unlock_bh(&qp->state_lock); 1050 + spin_unlock_irqrestore(&qp->state_lock, flags); 1052 1051 1053 1052 return err; 1054 1053 }
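All of the rxe hunks above make the same conversion: qp->state_lock moves from the spin_lock_bh()/spin_unlock_bh() variants to spin_lock_irqsave()/spin_unlock_irqrestore(), presumably so the lock is safe to take regardless of whether the caller already runs with interrupts disabled. A minimal kernel-style sketch of the pattern; struct my_obj and my_obj_update() are hypothetical and only illustrate the shape of the change:

#include <linux/spinlock.h>

/* Hypothetical object guarded by a lock that may be taken from any context. */
struct my_obj {
	spinlock_t lock;
	int state;
};

static void my_obj_update(struct my_obj *obj, int new_state)
{
	unsigned long flags;

	/* was: spin_lock_bh(&obj->lock); */
	spin_lock_irqsave(&obj->lock, flags);
	obj->state = new_state;
	spin_unlock_irqrestore(&obj->lock, flags);
}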
+1
drivers/iommu/Kconfig
··· 282 282 config IPMMU_VMSA 283 283 bool "Renesas VMSA-compatible IPMMU" 284 284 depends on ARCH_RENESAS || COMPILE_TEST 285 + depends on ARM || ARM64 || COMPILE_TEST 285 286 depends on !GENERIC_ATOMIC64 # for IOMMU_IO_PGTABLE_LPAE 286 287 select IOMMU_API 287 288 select IOMMU_IO_PGTABLE_LPAE
+1 -3
drivers/iommu/amd/amd_iommu.h
··· 15 15 extern irqreturn_t amd_iommu_int_handler(int irq, void *data); 16 16 extern void amd_iommu_apply_erratum_63(struct amd_iommu *iommu, u16 devid); 17 17 extern void amd_iommu_restart_event_logging(struct amd_iommu *iommu); 18 - extern int amd_iommu_init_devices(void); 19 - extern void amd_iommu_uninit_devices(void); 20 - extern void amd_iommu_init_notifier(void); 18 + extern void amd_iommu_restart_ga_log(struct amd_iommu *iommu); 21 19 extern void amd_iommu_set_rlookup_table(struct amd_iommu *iommu, u16 devid); 22 20 23 21 #ifdef CONFIG_AMD_IOMMU_DEBUGFS
+24
drivers/iommu/amd/init.c
··· 759 759 } 760 760 761 761 /* 762 + * This function restarts event logging in case the IOMMU experienced 763 + * an GA log overflow. 764 + */ 765 + void amd_iommu_restart_ga_log(struct amd_iommu *iommu) 766 + { 767 + u32 status; 768 + 769 + status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET); 770 + if (status & MMIO_STATUS_GALOG_RUN_MASK) 771 + return; 772 + 773 + pr_info_ratelimited("IOMMU GA Log restarting\n"); 774 + 775 + iommu_feature_disable(iommu, CONTROL_GALOG_EN); 776 + iommu_feature_disable(iommu, CONTROL_GAINT_EN); 777 + 778 + writel(MMIO_STATUS_GALOG_OVERFLOW_MASK, 779 + iommu->mmio_base + MMIO_STATUS_OFFSET); 780 + 781 + iommu_feature_enable(iommu, CONTROL_GAINT_EN); 782 + iommu_feature_enable(iommu, CONTROL_GALOG_EN); 783 + } 784 + 785 + /* 762 786 * This function resets the command buffer if the IOMMU stopped fetching 763 787 * commands from it. 764 788 */
+25 -6
drivers/iommu/amd/iommu.c
··· 845 845 (MMIO_STATUS_EVT_OVERFLOW_INT_MASK | \ 846 846 MMIO_STATUS_EVT_INT_MASK | \ 847 847 MMIO_STATUS_PPR_INT_MASK | \ 848 + MMIO_STATUS_GALOG_OVERFLOW_MASK | \ 848 849 MMIO_STATUS_GALOG_INT_MASK) 849 850 850 851 irqreturn_t amd_iommu_int_thread(int irq, void *data) ··· 869 868 } 870 869 871 870 #ifdef CONFIG_IRQ_REMAP 872 - if (status & MMIO_STATUS_GALOG_INT_MASK) { 871 + if (status & (MMIO_STATUS_GALOG_INT_MASK | 872 + MMIO_STATUS_GALOG_OVERFLOW_MASK)) { 873 873 pr_devel("Processing IOMMU GA Log\n"); 874 874 iommu_poll_ga_log(iommu); 875 + } 876 + 877 + if (status & MMIO_STATUS_GALOG_OVERFLOW_MASK) { 878 + pr_info_ratelimited("IOMMU GA Log overflow\n"); 879 + amd_iommu_restart_ga_log(iommu); 875 880 } 876 881 #endif 877 882 ··· 2074 2067 { 2075 2068 struct io_pgtable_ops *pgtbl_ops; 2076 2069 struct protection_domain *domain; 2077 - int pgtable = amd_iommu_pgtable; 2070 + int pgtable; 2078 2071 int mode = DEFAULT_PGTABLE_LEVEL; 2079 2072 int ret; 2080 2073 ··· 2091 2084 mode = PAGE_MODE_NONE; 2092 2085 } else if (type == IOMMU_DOMAIN_UNMANAGED) { 2093 2086 pgtable = AMD_IOMMU_V1; 2087 + } else if (type == IOMMU_DOMAIN_DMA || type == IOMMU_DOMAIN_DMA_FQ) { 2088 + pgtable = amd_iommu_pgtable; 2089 + } else { 2090 + return NULL; 2094 2091 } 2095 2092 2096 2093 switch (pgtable) { ··· 2129 2118 return NULL; 2130 2119 } 2131 2120 2121 + static inline u64 dma_max_address(void) 2122 + { 2123 + if (amd_iommu_pgtable == AMD_IOMMU_V1) 2124 + return ~0ULL; 2125 + 2126 + /* V2 with 4/5 level page table */ 2127 + return ((1ULL << PM_LEVEL_SHIFT(amd_iommu_gpt_level)) - 1); 2128 + } 2129 + 2132 2130 static struct iommu_domain *amd_iommu_domain_alloc(unsigned type) 2133 2131 { 2134 2132 struct protection_domain *domain; ··· 2154 2134 return NULL; 2155 2135 2156 2136 domain->domain.geometry.aperture_start = 0; 2157 - domain->domain.geometry.aperture_end = ~0ULL; 2137 + domain->domain.geometry.aperture_end = dma_max_address(); 2158 2138 domain->domain.geometry.force_aperture = true; 2159 2139 2160 2140 return &domain->domain; ··· 2407 2387 unsigned long flags; 2408 2388 2409 2389 spin_lock_irqsave(&dom->lock, flags); 2410 - domain_flush_pages(dom, gather->start, gather->end - gather->start, 1); 2390 + domain_flush_pages(dom, gather->start, gather->end - gather->start + 1, 1); 2411 2391 amd_iommu_domain_flush_complete(dom); 2412 2392 spin_unlock_irqrestore(&dom->lock, flags); 2413 2393 } ··· 3513 3493 struct irte_ga *entry = (struct irte_ga *) ir_data->entry; 3514 3494 u64 valid; 3515 3495 3516 - if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) || 3517 - !entry || entry->lo.fields_vapic.guest_mode) 3496 + if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) || !entry) 3518 3497 return 0; 3519 3498 3520 3499 valid = entry->lo.fields_vapic.valid;
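Two details in the amd_iommu hunk are easy to miss: the v2 aperture end becomes the last byte addressable by the guest page table rather than ~0ULL, and the iotlb_sync flush length gains a +1 because gather->end is inclusive. A standalone sketch, assuming the usual 48-bit (4-level) and 57-bit (5-level) address widths:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Assumed widths: 4-level tables cover 48 bits, 5-level tables 57 bits. */
	uint64_t end_4lvl = (1ULL << 48) - 1;
	uint64_t end_5lvl = (1ULL << 57) - 1;

	printf("4-level aperture end: 0x%llx\n", (unsigned long long)end_4lvl);
	printf("5-level aperture end: 0x%llx\n", (unsigned long long)end_5lvl);

	/*
	 * gather->end is the address of the last byte in the range, so the
	 * length to flush is end - start + 1, not end - start.
	 */
	uint64_t start = 0x1000, end = 0x2fff;
	printf("flush length: 0x%llx\n", (unsigned long long)(end - start + 1));
	return 0;
}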
+2 -1
drivers/iommu/mtk_iommu.c
··· 781 781 { 782 782 struct mtk_iommu_domain *dom = to_mtk_domain(domain); 783 783 784 - mtk_iommu_tlb_flush_all(dom->bank->parent_data); 784 + if (dom->bank) 785 + mtk_iommu_tlb_flush_all(dom->bank->parent_data); 785 786 } 786 787 787 788 static void mtk_iommu_iotlb_sync(struct iommu_domain *domain,
+8 -6
drivers/iommu/rockchip-iommu.c
··· 1335 1335 for (i = 0; i < iommu->num_irq; i++) { 1336 1336 int irq = platform_get_irq(pdev, i); 1337 1337 1338 - if (irq < 0) 1339 - return irq; 1338 + if (irq < 0) { 1339 + err = irq; 1340 + goto err_pm_disable; 1341 + } 1340 1342 1341 1343 err = devm_request_irq(iommu->dev, irq, rk_iommu_irq, 1342 1344 IRQF_SHARED, dev_name(dev), iommu); 1343 - if (err) { 1344 - pm_runtime_disable(dev); 1345 - goto err_remove_sysfs; 1346 - } 1345 + if (err) 1346 + goto err_pm_disable; 1347 1347 } 1348 1348 1349 1349 dma_set_mask_and_coherent(dev, rk_ops->dma_bit_mask); 1350 1350 1351 1351 return 0; 1352 + err_pm_disable: 1353 + pm_runtime_disable(dev); 1352 1354 err_remove_sysfs: 1353 1355 iommu_device_sysfs_remove(&iommu->iommu); 1354 1356 err_put_group:
+2
drivers/irqchip/irq-gic-common.c
··· 16 16 const struct gic_quirk *quirks, void *data) 17 17 { 18 18 for (; quirks->desc; quirks++) { 19 + if (!quirks->compatible && !quirks->property) 20 + continue; 19 21 if (quirks->compatible && 20 22 !of_device_is_compatible(np, quirks->compatible)) 21 23 continue;
+4 -4
drivers/leds/rgb/leds-qcom-lpg.c
··· 312 312 max_res = LPG_RESOLUTION_9BIT; 313 313 } 314 314 315 - min_period = (u64)NSEC_PER_SEC * 316 - div64_u64((1 << pwm_resolution_arr[0]), clk_rate_arr[clk_len - 1]); 315 + min_period = div64_u64((u64)NSEC_PER_SEC * (1 << pwm_resolution_arr[0]), 316 + clk_rate_arr[clk_len - 1]); 317 317 if (period <= min_period) 318 318 return -EINVAL; 319 319 320 320 /* Limit period to largest possible value, to avoid overflows */ 321 - max_period = (u64)NSEC_PER_SEC * max_res * LPG_MAX_PREDIV * 322 - div64_u64((1 << LPG_MAX_M), 1024); 321 + max_period = div64_u64((u64)NSEC_PER_SEC * max_res * LPG_MAX_PREDIV * (1 << LPG_MAX_M), 322 + 1024); 323 323 if (period > max_period) 324 324 period = max_period; 325 325
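The leds-qcom-lpg change reorders the period math so the multiplication happens before div64_u64(): dividing first truncates 2^resolution / clk_rate to zero for any realistic clock, wiping out the minimum period. A standalone illustration with made-up resolution and clock numbers:

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

int main(void)
{
	/* Illustrative numbers: a 9-bit PWM resolution fed by a 19.2 MHz clock. */
	uint64_t ticks = 1 << 9;		/* 512 */
	uint64_t clk_rate = 19200000;		/* Hz */

	/* Old ordering: the inner division truncates to 0, so min_period is 0. */
	uint64_t divide_first = NSEC_PER_SEC * (ticks / clk_rate);

	/* New ordering: multiply first, then divide, keeping the precision. */
	uint64_t multiply_first = (NSEC_PER_SEC * ticks) / clk_rate;

	printf("divide-first: %llu ns, multiply-first: %llu ns\n",
	       (unsigned long long)divide_first,
	       (unsigned long long)multiply_first);
	return 0;
}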
+6 -4
drivers/mailbox/mailbox-test.c
··· 98 98 size_t count, loff_t *ppos) 99 99 { 100 100 struct mbox_test_device *tdev = filp->private_data; 101 + char *message; 101 102 void *data; 102 103 int ret; 103 104 ··· 114 113 return -EINVAL; 115 114 } 116 115 117 - mutex_lock(&tdev->mutex); 118 - 119 - tdev->message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL); 120 - if (!tdev->message) 116 + message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL); 117 + if (!message) 121 118 return -ENOMEM; 122 119 120 + mutex_lock(&tdev->mutex); 121 + 122 + tdev->message = message; 123 123 ret = copy_from_user(tdev->message, userbuf, count); 124 124 if (ret) { 125 125 ret = -EFAULT;
+1 -1
drivers/md/raid5.c
··· 5516 5516 5517 5517 sector = raid5_compute_sector(conf, raid_bio->bi_iter.bi_sector, 0, 5518 5518 &dd_idx, NULL); 5519 - end_sector = bio_end_sector(raid_bio); 5519 + end_sector = sector + bio_sectors(raid_bio); 5520 5520 5521 5521 rcu_read_lock(); 5522 5522 if (r5c_big_stripe_cached(conf, sector))
+6 -2
drivers/media/cec/core/cec-adap.c
··· 1091 1091 mutex_lock(&adap->lock); 1092 1092 dprintk(2, "%s: %*ph\n", __func__, msg->len, msg->msg); 1093 1093 1094 - adap->last_initiator = 0xff; 1094 + if (!adap->transmit_in_progress) 1095 + adap->last_initiator = 0xff; 1095 1096 1096 1097 /* Check if this message was for us (directed or broadcast). */ 1097 1098 if (!cec_msg_is_broadcast(msg)) { ··· 1586 1585 * 1587 1586 * This function is called with adap->lock held. 1588 1587 */ 1589 - static int cec_adap_enable(struct cec_adapter *adap) 1588 + int cec_adap_enable(struct cec_adapter *adap) 1590 1589 { 1591 1590 bool enable; 1592 1591 int ret = 0; ··· 1595 1594 adap->log_addrs.num_log_addrs; 1596 1595 if (adap->needs_hpd) 1597 1596 enable = enable && adap->phys_addr != CEC_PHYS_ADDR_INVALID; 1597 + 1598 + if (adap->devnode.unregistered) 1599 + enable = false; 1598 1600 1599 1601 if (enable == adap->is_enabled) 1600 1602 return 0;
+2
drivers/media/cec/core/cec-core.c
··· 191 191 mutex_lock(&adap->lock); 192 192 __cec_s_phys_addr(adap, CEC_PHYS_ADDR_INVALID, false); 193 193 __cec_s_log_addrs(adap, NULL, false); 194 + // Disable the adapter (since adap->devnode.unregistered is true) 195 + cec_adap_enable(adap); 194 196 mutex_unlock(&adap->lock); 195 197 196 198 cdev_device_del(&devnode->cdev, &devnode->dev);
+1
drivers/media/cec/core/cec-priv.h
··· 47 47 void cec_monitor_pin_cnt_dec(struct cec_adapter *adap); 48 48 int cec_adap_status(struct seq_file *file, void *priv); 49 49 int cec_thread_func(void *_adap); 50 + int cec_adap_enable(struct cec_adapter *adap); 50 51 void __cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block); 51 52 int __cec_s_log_addrs(struct cec_adapter *adap, 52 53 struct cec_log_addrs *log_addrs, bool block);
+3
drivers/media/platform/mediatek/vcodec/mtk_vcodec_dec_stateful.c
··· 584 584 585 585 if (!(ctx->dev->dec_capability & VCODEC_CAPABILITY_4K_DISABLED)) { 586 586 for (i = 0; i < num_supported_formats; i++) { 587 + if (mtk_video_formats[i].type != MTK_FMT_DEC) 588 + continue; 589 + 587 590 mtk_video_formats[i].frmsize.max_width = 588 591 VCODEC_DEC_4K_CODED_WIDTH; 589 592 mtk_video_formats[i].frmsize.max_height =
-1
drivers/media/platform/qcom/camss/camss-video.c
··· 353 353 if (subdev == NULL) 354 354 return -EPIPE; 355 355 356 - memset(&fmt, 0, sizeof(fmt)); 357 356 fmt.pad = pad; 358 357 359 358 ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &fmt);
+4 -2
drivers/media/platform/verisilicon/hantro_v4l2.c
··· 397 397 if (!raw_vpu_fmt) 398 398 return -EINVAL; 399 399 400 - if (ctx->is_encoder) 400 + if (ctx->is_encoder) { 401 401 encoded_fmt = &ctx->dst_fmt; 402 - else 402 + ctx->vpu_src_fmt = raw_vpu_fmt; 403 + } else { 403 404 encoded_fmt = &ctx->src_fmt; 405 + } 404 406 405 407 hantro_reset_fmt(&raw_fmt, raw_vpu_fmt); 406 408 raw_fmt.width = encoded_fmt->width;
+11 -5
drivers/media/usb/uvc/uvc_driver.c
··· 251 251 /* Find the format descriptor from its GUID. */ 252 252 fmtdesc = uvc_format_by_guid(&buffer[5]); 253 253 254 - if (fmtdesc != NULL) { 255 - format->fcc = fmtdesc->fcc; 256 - } else { 254 + if (!fmtdesc) { 255 + /* 256 + * Unknown video formats are not fatal errors, the 257 + * caller will skip this descriptor. 258 + */ 257 259 dev_info(&streaming->intf->dev, 258 260 "Unknown video format %pUl\n", &buffer[5]); 259 - format->fcc = 0; 261 + return 0; 260 262 } 261 263 264 + format->fcc = fmtdesc->fcc; 262 265 format->bpp = buffer[21]; 263 266 264 267 /* ··· 678 675 interval = (u32 *)&frame[nframes]; 679 676 680 677 streaming->format = format; 681 - streaming->nformats = nformats; 678 + streaming->nformats = 0; 682 679 683 680 /* Parse the format descriptors. */ 684 681 while (buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE) { ··· 692 689 &interval, buffer, buflen); 693 690 if (ret < 0) 694 691 goto error; 692 + if (!ret) 693 + break; 695 694 695 + streaming->nformats++; 696 696 frame += format->nframes; 697 697 format++; 698 698
+1 -2
drivers/media/v4l2-core/v4l2-mc.c
··· 314 314 { 315 315 struct fwnode_handle *endpoint; 316 316 317 - if (!(sink->flags & MEDIA_PAD_FL_SINK) || 318 - !is_media_entity_v4l2_subdev(sink->entity)) 317 + if (!(sink->flags & MEDIA_PAD_FL_SINK)) 319 318 return -EINVAL; 320 319 321 320 fwnode_graph_for_each_endpoint(dev_fwnode(src_sd->dev), endpoint) {
+23 -8
drivers/misc/fastrpc.c
··· 316 316 if (map->table) { 317 317 if (map->attr & FASTRPC_ATTR_SECUREMAP) { 318 318 struct qcom_scm_vmperm perm; 319 + int vmid = map->fl->cctx->vmperms[0].vmid; 320 + u64 src_perms = BIT(QCOM_SCM_VMID_HLOS) | BIT(vmid); 319 321 int err = 0; 320 322 321 323 perm.vmid = QCOM_SCM_VMID_HLOS; 322 324 perm.perm = QCOM_SCM_PERM_RWX; 323 325 err = qcom_scm_assign_mem(map->phys, map->size, 324 - &map->fl->cctx->perms, &perm, 1); 326 + &src_perms, &perm, 1); 325 327 if (err) { 326 328 dev_err(map->fl->sctx->dev, "Failed to assign memory phys 0x%llx size 0x%llx err %d", 327 329 map->phys, map->size, err); ··· 789 787 goto map_err; 790 788 } 791 789 792 - map->phys = sg_dma_address(map->table->sgl); 793 - map->phys += ((u64)fl->sctx->sid << 32); 790 + if (attr & FASTRPC_ATTR_SECUREMAP) { 791 + map->phys = sg_phys(map->table->sgl); 792 + } else { 793 + map->phys = sg_dma_address(map->table->sgl); 794 + map->phys += ((u64)fl->sctx->sid << 32); 795 + } 794 796 map->size = len; 795 797 map->va = sg_virt(map->table->sgl); 796 798 map->len = len; ··· 804 798 * If subsystem VMIDs are defined in DTSI, then do 805 799 * hyp_assign from HLOS to those VM(s) 806 800 */ 801 + u64 src_perms = BIT(QCOM_SCM_VMID_HLOS); 802 + struct qcom_scm_vmperm dst_perms[2] = {0}; 803 + 804 + dst_perms[0].vmid = QCOM_SCM_VMID_HLOS; 805 + dst_perms[0].perm = QCOM_SCM_PERM_RW; 806 + dst_perms[1].vmid = fl->cctx->vmperms[0].vmid; 807 + dst_perms[1].perm = QCOM_SCM_PERM_RWX; 807 808 map->attr = attr; 808 - err = qcom_scm_assign_mem(map->phys, (u64)map->size, &fl->cctx->perms, 809 - fl->cctx->vmperms, fl->cctx->vmcount); 809 + err = qcom_scm_assign_mem(map->phys, (u64)map->size, &src_perms, dst_perms, 2); 810 810 if (err) { 811 811 dev_err(sess->dev, "Failed to assign memory with phys 0x%llx size 0x%llx err %d", 812 812 map->phys, map->size, err); ··· 1904 1892 req.vaddrout = rsp_msg.vaddr; 1905 1893 1906 1894 /* Add memory to static PD pool, protection thru hypervisor */ 1907 - if (req.flags != ADSP_MMAP_REMOTE_HEAP_ADDR && fl->cctx->vmcount) { 1895 + if (req.flags == ADSP_MMAP_REMOTE_HEAP_ADDR && fl->cctx->vmcount) { 1908 1896 struct qcom_scm_vmperm perm; 1909 1897 1910 1898 perm.vmid = QCOM_SCM_VMID_HLOS; ··· 2349 2337 struct fastrpc_invoke_ctx *ctx; 2350 2338 2351 2339 spin_lock(&user->lock); 2352 - list_for_each_entry(ctx, &user->pending, node) 2340 + list_for_each_entry(ctx, &user->pending, node) { 2341 + ctx->retval = -EPIPE; 2353 2342 complete(&ctx->work); 2343 + } 2354 2344 spin_unlock(&user->lock); 2355 2345 } 2356 2346 ··· 2363 2349 struct fastrpc_user *user; 2364 2350 unsigned long flags; 2365 2351 2352 + /* No invocations past this point */ 2366 2353 spin_lock_irqsave(&cctx->lock, flags); 2354 + cctx->rpdev = NULL; 2367 2355 list_for_each_entry(user, &cctx->users, user) 2368 2356 fastrpc_notify_users(user); 2369 2357 spin_unlock_irqrestore(&cctx->lock, flags); ··· 2384 2368 2385 2369 of_platform_depopulate(&rpdev->dev); 2386 2370 2387 - cctx->rpdev = NULL; 2388 2371 fastrpc_channel_ctx_put(cctx); 2389 2372 } 2390 2373
+26 -8
drivers/mmc/core/pwrseq_sd8787.c
··· 28 28 struct mmc_pwrseq pwrseq; 29 29 struct gpio_desc *reset_gpio; 30 30 struct gpio_desc *pwrdn_gpio; 31 - u32 reset_pwrdwn_delay_ms; 32 31 }; 33 32 34 33 #define to_pwrseq_sd8787(p) container_of(p, struct mmc_pwrseq_sd8787, pwrseq) ··· 38 39 39 40 gpiod_set_value_cansleep(pwrseq->reset_gpio, 1); 40 41 41 - msleep(pwrseq->reset_pwrdwn_delay_ms); 42 + msleep(300); 42 43 gpiod_set_value_cansleep(pwrseq->pwrdn_gpio, 1); 43 44 } 44 45 ··· 50 51 gpiod_set_value_cansleep(pwrseq->reset_gpio, 0); 51 52 } 52 53 54 + static void mmc_pwrseq_wilc1000_pre_power_on(struct mmc_host *host) 55 + { 56 + struct mmc_pwrseq_sd8787 *pwrseq = to_pwrseq_sd8787(host->pwrseq); 57 + 58 + /* The pwrdn_gpio is really CHIP_EN, reset_gpio is RESETN */ 59 + gpiod_set_value_cansleep(pwrseq->pwrdn_gpio, 1); 60 + msleep(5); 61 + gpiod_set_value_cansleep(pwrseq->reset_gpio, 1); 62 + } 63 + 64 + static void mmc_pwrseq_wilc1000_power_off(struct mmc_host *host) 65 + { 66 + struct mmc_pwrseq_sd8787 *pwrseq = to_pwrseq_sd8787(host->pwrseq); 67 + 68 + gpiod_set_value_cansleep(pwrseq->reset_gpio, 0); 69 + gpiod_set_value_cansleep(pwrseq->pwrdn_gpio, 0); 70 + } 71 + 53 72 static const struct mmc_pwrseq_ops mmc_pwrseq_sd8787_ops = { 54 73 .pre_power_on = mmc_pwrseq_sd8787_pre_power_on, 55 74 .power_off = mmc_pwrseq_sd8787_power_off, 56 75 }; 57 76 58 - static const u32 sd8787_delay_ms = 300; 59 - static const u32 wilc1000_delay_ms = 5; 77 + static const struct mmc_pwrseq_ops mmc_pwrseq_wilc1000_ops = { 78 + .pre_power_on = mmc_pwrseq_wilc1000_pre_power_on, 79 + .power_off = mmc_pwrseq_wilc1000_power_off, 80 + }; 60 81 61 82 static const struct of_device_id mmc_pwrseq_sd8787_of_match[] = { 62 - { .compatible = "mmc-pwrseq-sd8787", .data = &sd8787_delay_ms }, 63 - { .compatible = "mmc-pwrseq-wilc1000", .data = &wilc1000_delay_ms }, 83 + { .compatible = "mmc-pwrseq-sd8787", .data = &mmc_pwrseq_sd8787_ops }, 84 + { .compatible = "mmc-pwrseq-wilc1000", .data = &mmc_pwrseq_wilc1000_ops }, 64 85 {/* sentinel */}, 65 86 }; 66 87 MODULE_DEVICE_TABLE(of, mmc_pwrseq_sd8787_of_match); ··· 96 77 return -ENOMEM; 97 78 98 79 match = of_match_node(mmc_pwrseq_sd8787_of_match, pdev->dev.of_node); 99 - pwrseq->reset_pwrdwn_delay_ms = *(u32 *)match->data; 100 80 101 81 pwrseq->pwrdn_gpio = devm_gpiod_get(dev, "powerdown", GPIOD_OUT_LOW); 102 82 if (IS_ERR(pwrseq->pwrdn_gpio)) ··· 106 88 return PTR_ERR(pwrseq->reset_gpio); 107 89 108 90 pwrseq->pwrseq.dev = dev; 109 - pwrseq->pwrseq.ops = &mmc_pwrseq_sd8787_ops; 91 + pwrseq->pwrseq.ops = match->data; 110 92 pwrseq->pwrseq.owner = THIS_MODULE; 111 93 platform_set_drvdata(pdev, pwrseq); 112 94
+3
drivers/mmc/host/vub300.c
··· 1713 1713 int bytes = 3 & less_cmd; 1714 1714 int words = less_cmd >> 2; 1715 1715 u8 *r = vub300->resp.response.command_response; 1716 + 1717 + if (!resp_len) 1718 + return; 1716 1719 if (bytes == 3) { 1717 1720 cmd->resp[words] = (r[1 + (words << 2)] << 24) 1718 1721 | (r[2 + (words << 2)] << 16)
+4 -4
drivers/mtd/mtdchar.c
··· 590 590 (end_page - start_page + 1) * oob_per_page); 591 591 } 592 592 593 - static int mtdchar_write_ioctl(struct mtd_info *mtd, 594 - struct mtd_write_req __user *argp) 593 + static noinline_for_stack int 594 + mtdchar_write_ioctl(struct mtd_info *mtd, struct mtd_write_req __user *argp) 595 595 { 596 596 struct mtd_info *master = mtd_get_master(mtd); 597 597 struct mtd_write_req req; ··· 688 688 return ret; 689 689 } 690 690 691 - static int mtdchar_read_ioctl(struct mtd_info *mtd, 692 - struct mtd_read_req __user *argp) 691 + static noinline_for_stack int 692 + mtdchar_read_ioctl(struct mtd_info *mtd, struct mtd_read_req __user *argp) 693 693 { 694 694 struct mtd_info *master = mtd_get_master(mtd); 695 695 struct mtd_read_req req;
+4 -4
drivers/mtd/nand/raw/ingenic/ingenic_ecc.h
··· 36 36 void ingenic_ecc_release(struct ingenic_ecc *ecc); 37 37 struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np); 38 38 #else /* CONFIG_MTD_NAND_INGENIC_ECC */ 39 - int ingenic_ecc_calculate(struct ingenic_ecc *ecc, 39 + static inline int ingenic_ecc_calculate(struct ingenic_ecc *ecc, 40 40 struct ingenic_ecc_params *params, 41 41 const u8 *buf, u8 *ecc_code) 42 42 { 43 43 return -ENODEV; 44 44 } 45 45 46 - int ingenic_ecc_correct(struct ingenic_ecc *ecc, 46 + static inline int ingenic_ecc_correct(struct ingenic_ecc *ecc, 47 47 struct ingenic_ecc_params *params, u8 *buf, 48 48 u8 *ecc_code) 49 49 { 50 50 return -ENODEV; 51 51 } 52 52 53 - void ingenic_ecc_release(struct ingenic_ecc *ecc) 53 + static inline void ingenic_ecc_release(struct ingenic_ecc *ecc) 54 54 { 55 55 } 56 56 57 - struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np) 57 + static inline struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np) 58 58 { 59 59 return ERR_PTR(-ENODEV); 60 60 }
+6 -4
drivers/mtd/nand/raw/marvell_nand.c
··· 2457 2457 NDTR1_WAIT_MODE; 2458 2458 } 2459 2459 2460 + /* 2461 + * Reset nfc->selected_chip so the next command will cause the timing 2462 + * registers to be updated in marvell_nfc_select_target(). 2463 + */ 2464 + nfc->selected_chip = NULL; 2465 + 2460 2466 return 0; 2461 2467 } 2462 2468 ··· 2900 2894 regmap_update_bits(sysctrl_base, GENCONF_CLK_GATING_CTRL, 2901 2895 GENCONF_CLK_GATING_CTRL_ND_GATE, 2902 2896 GENCONF_CLK_GATING_CTRL_ND_GATE); 2903 - 2904 - regmap_update_bits(sysctrl_base, GENCONF_ND_CLK_CTRL, 2905 - GENCONF_ND_CLK_CTRL_EN, 2906 - GENCONF_ND_CLK_CTRL_EN); 2907 2897 } 2908 2898 2909 2899 /* Configure the DMA if appropriate */
+4 -1
drivers/mtd/spi-nor/core.c
··· 2018 2018 2019 2019 static const struct flash_info spi_nor_generic_flash = { 2020 2020 .name = "spi-nor-generic", 2021 + .n_banks = 1, 2021 2022 /* 2022 2023 * JESD216 rev A doesn't specify the page size, therefore we need a 2023 2024 * sane default. ··· 2922 2921 if (nor->flags & SNOR_F_HAS_LOCK && !nor->params->locking_ops) 2923 2922 spi_nor_init_default_locking_ops(nor); 2924 2923 2925 - nor->params->bank_size = div64_u64(nor->params->size, nor->info->n_banks); 2924 + if (nor->info->n_banks > 1) 2925 + params->bank_size = div64_u64(params->size, nor->info->n_banks); 2926 2926 } 2927 2927 2928 2928 /** ··· 2989 2987 /* Set SPI NOR sizes. */ 2990 2988 params->writesize = 1; 2991 2989 params->size = (u64)info->sector_size * info->n_sectors; 2990 + params->bank_size = params->size; 2992 2991 params->page_size = info->page_size; 2993 2992 2994 2993 if (!(info->flags & SPI_NOR_NO_FR)) {
+2 -2
drivers/mtd/spi-nor/spansion.c
··· 361 361 */ 362 362 static int cypress_nor_set_addr_mode_nbytes(struct spi_nor *nor) 363 363 { 364 - struct spi_mem_op op; 364 + struct spi_mem_op op = {}; 365 365 u8 addr_mode; 366 366 int ret; 367 367 ··· 492 492 const struct sfdp_parameter_header *bfpt_header, 493 493 const struct sfdp_bfpt *bfpt) 494 494 { 495 - struct spi_mem_op op; 495 + struct spi_mem_op op = {}; 496 496 int ret; 497 497 498 498 ret = cypress_nor_set_addr_mode_nbytes(nor);
+1 -1
drivers/net/dsa/mv88e6xxx/chip.c
··· 7170 7170 goto out; 7171 7171 } 7172 7172 if (chip->reset) 7173 - usleep_range(1000, 2000); 7173 + usleep_range(10000, 20000); 7174 7174 7175 7175 /* Detect if the device is configured in single chip addressing mode, 7176 7176 * otherwise continue with address specific smi init/detection.
+9 -3
drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
··· 1329 1329 return pdata->phy_if.phy_impl.an_outcome(pdata); 1330 1330 } 1331 1331 1332 - static void xgbe_phy_status_result(struct xgbe_prv_data *pdata) 1332 + static bool xgbe_phy_status_result(struct xgbe_prv_data *pdata) 1333 1333 { 1334 1334 struct ethtool_link_ksettings *lks = &pdata->phy.lks; 1335 1335 enum xgbe_mode mode; ··· 1367 1367 1368 1368 pdata->phy.duplex = DUPLEX_FULL; 1369 1369 1370 - if (xgbe_set_mode(pdata, mode) && pdata->an_again) 1370 + if (!xgbe_set_mode(pdata, mode)) 1371 + return false; 1372 + 1373 + if (pdata->an_again) 1371 1374 xgbe_phy_reconfig_aneg(pdata); 1375 + 1376 + return true; 1372 1377 } 1373 1378 1374 1379 static void xgbe_phy_status(struct xgbe_prv_data *pdata) ··· 1403 1398 return; 1404 1399 } 1405 1400 1406 - xgbe_phy_status_result(pdata); 1401 + if (xgbe_phy_status_result(pdata)) 1402 + return; 1407 1403 1408 1404 if (test_bit(XGBE_LINK_INIT, &pdata->dev_state)) 1409 1405 clear_bit(XGBE_LINK_INIT, &pdata->dev_state);
+1 -1
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 1152 1152 unsigned int total_rx_bytes = 0, total_rx_pkts = 0; 1153 1153 unsigned int offset = rx_ring->rx_offset; 1154 1154 struct xdp_buff *xdp = &rx_ring->xdp; 1155 + u32 cached_ntc = rx_ring->first_desc; 1155 1156 struct ice_tx_ring *xdp_ring = NULL; 1156 1157 struct bpf_prog *xdp_prog = NULL; 1157 1158 u32 ntc = rx_ring->next_to_clean; 1158 1159 u32 cnt = rx_ring->count; 1159 - u32 cached_ntc = ntc; 1160 1160 u32 xdp_xmit = 0; 1161 1161 u32 cached_ntu; 1162 1162 bool failure;
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
··· 490 490 (u64)timestamp_low; 491 491 break; 492 492 default: 493 - if (tracer_event->event_id >= tracer->str_db.first_string_trace || 493 + if (tracer_event->event_id >= tracer->str_db.first_string_trace && 494 494 tracer_event->event_id <= tracer->str_db.first_string_trace + 495 495 tracer->str_db.num_string_trace) { 496 496 tracer_event->type = TRACER_EVENT_TYPE_STRING;
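The fw_tracer fix turns the `||` in the string-trace range check into `&&`; with `||` the condition was effectively always true, so event ids outside the string-db window were still classified as string events. A tiny illustration with hypothetical ids:

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
	/* Hypothetical string-trace window: ids 100..116. */
	unsigned int first = 100, num = 16;
	unsigned int id = 250;	/* outside the window */

	bool with_or  = (id >= first) || (id <= first + num);	/* old check: true */
	bool with_and = (id >= first) && (id <= first + num);	/* fixed check: false */

	printf("id=%u: ||=%d &&=%d\n", id, with_or, with_and);
	return 0;
}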
+1
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 327 327 unsigned int sw_mtu; 328 328 int hard_mtu; 329 329 bool ptp_rx; 330 + __be32 terminate_lkey_be; 330 331 }; 331 332 332 333 static inline u8 mlx5e_get_dcb_num_tc(struct mlx5e_params *params)
+28 -16
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
··· 51 51 if (err) 52 52 goto out; 53 53 54 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 54 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 55 55 buffer = MLX5_ADDR_OF(pbmc_reg, out, buffer[i]); 56 56 port_buffer->buffer[i].lossy = 57 57 MLX5_GET(bufferx_reg, buffer, lossy); ··· 73 73 port_buffer->buffer[i].lossy); 74 74 } 75 75 76 - port_buffer->headroom_size = total_used; 76 + port_buffer->internal_buffers_size = 0; 77 + for (i = MLX5E_MAX_NETWORK_BUFFER; i < MLX5E_TOTAL_BUFFERS; i++) { 78 + buffer = MLX5_ADDR_OF(pbmc_reg, out, buffer[i]); 79 + port_buffer->internal_buffers_size += 80 + MLX5_GET(bufferx_reg, buffer, size) * port_buff_cell_sz; 81 + } 82 + 77 83 port_buffer->port_buffer_size = 78 84 MLX5_GET(pbmc_reg, out, port_buffer_size) * port_buff_cell_sz; 79 - port_buffer->spare_buffer_size = 80 - port_buffer->port_buffer_size - total_used; 85 + port_buffer->headroom_size = total_used; 86 + port_buffer->spare_buffer_size = port_buffer->port_buffer_size - 87 + port_buffer->internal_buffers_size - 88 + port_buffer->headroom_size; 81 89 82 - mlx5e_dbg(HW, priv, "total buffer size=%d, spare buffer size=%d\n", 83 - port_buffer->port_buffer_size, 90 + mlx5e_dbg(HW, priv, 91 + "total buffer size=%u, headroom buffer size=%u, internal buffers size=%u, spare buffer size=%u\n", 92 + port_buffer->port_buffer_size, port_buffer->headroom_size, 93 + port_buffer->internal_buffers_size, 84 94 port_buffer->spare_buffer_size); 85 95 out: 86 96 kfree(out); ··· 216 206 if (!MLX5_CAP_GEN(mdev, sbcam_reg)) 217 207 return 0; 218 208 219 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) 209 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) 220 210 lossless_buff_count += ((port_buffer->buffer[i].size) && 221 211 (!(port_buffer->buffer[i].lossy))); 222 212 223 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 213 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 224 214 p = select_sbcm_params(&port_buffer->buffer[i], lossless_buff_count); 225 215 err = mlx5e_port_set_sbcm(mdev, 0, i, 226 216 MLX5_INGRESS_DIR, ··· 303 293 if (err) 304 294 goto out; 305 295 306 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 296 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 307 297 void *buffer = MLX5_ADDR_OF(pbmc_reg, in, buffer[i]); 308 298 u64 size = port_buffer->buffer[i].size; 309 299 u64 xoff = port_buffer->buffer[i].xoff; ··· 361 351 { 362 352 int i; 363 353 364 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 354 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 365 355 if (port_buffer->buffer[i].lossy) { 366 356 port_buffer->buffer[i].xoff = 0; 367 357 port_buffer->buffer[i].xon = 0; ··· 418 408 int err; 419 409 int i; 420 410 421 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 411 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 422 412 prio_count = 0; 423 413 lossy_count = 0; 424 414 ··· 442 432 } 443 433 444 434 if (changed) { 445 - err = port_update_pool_cfg(mdev, port_buffer); 435 + err = update_xoff_threshold(port_buffer, xoff, max_mtu, port_buff_cell_sz); 446 436 if (err) 447 437 return err; 448 438 449 - err = update_xoff_threshold(port_buffer, xoff, max_mtu, port_buff_cell_sz); 439 + err = port_update_pool_cfg(mdev, port_buffer); 450 440 if (err) 451 441 return err; 452 442 ··· 525 515 526 516 if (change & MLX5E_PORT_BUFFER_PRIO2BUFFER) { 527 517 update_prio2buffer = true; 528 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) 518 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) 529 519 mlx5e_dbg(HW, priv, "%s: requested to map prio[%d] to buffer %d\n", 530 520 __func__, i, prio2buffer[i]); 531 521 ··· 540 530 } 541 531 542 532 if (change & 
MLX5E_PORT_BUFFER_SIZE) { 543 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 533 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 544 534 mlx5e_dbg(HW, priv, "%s: buffer[%d]=%d\n", __func__, i, buffer_size[i]); 545 535 if (!port_buffer.buffer[i].lossy && !buffer_size[i]) { 546 536 mlx5e_dbg(HW, priv, "%s: lossless buffer[%d] size cannot be zero\n", ··· 554 544 555 545 mlx5e_dbg(HW, priv, "%s: total buffer requested=%d\n", __func__, total_used); 556 546 557 - if (total_used > port_buffer.port_buffer_size) 547 + if (total_used > port_buffer.headroom_size && 548 + (total_used - port_buffer.headroom_size) > 549 + port_buffer.spare_buffer_size) 558 550 return -EINVAL; 559 551 560 552 update_buffer = true;
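The port_buffer rework above splits the pool into the eight network buffers (headroom), the two internal buffers, and whatever remains as spare, and a resize request is now validated against headroom plus spare rather than the whole port buffer. A standalone sketch of that check; all sizes are invented byte counts, not values read from any device:

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
	unsigned int port_buffer_size = 1024 << 10;	/* total on-chip buffer */
	unsigned int internal_buffers = 64 << 10;	/* buffers 8-9 */
	unsigned int headroom = 512 << 10;		/* buffers 0-7 */
	unsigned int spare = port_buffer_size - internal_buffers - headroom;

	/* A user request to grow the network buffers to total_used bytes. */
	unsigned int total_used = 900 << 10;

	/* Accept only if the growth beyond current headroom fits in the spare pool. */
	bool ok = !(total_used > headroom && (total_used - headroom) > spare);

	printf("spare=%u KiB, request=%u KiB -> %s\n",
	       spare >> 10, total_used >> 10, ok ? "accepted" : "rejected");
	return 0;
}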
+5 -3
drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.h
··· 35 35 #include "en.h" 36 36 #include "port.h" 37 37 38 - #define MLX5E_MAX_BUFFER 8 38 + #define MLX5E_MAX_NETWORK_BUFFER 8 39 + #define MLX5E_TOTAL_BUFFERS 10 39 40 #define MLX5E_DEFAULT_CABLE_LEN 7 /* 7 meters */ 40 41 41 42 #define MLX5_BUFFER_SUPPORTED(mdev) (MLX5_CAP_GEN(mdev, pcam_reg) && \ ··· 61 60 struct mlx5e_port_buffer { 62 61 u32 port_buffer_size; 63 62 u32 spare_buffer_size; 64 - u32 headroom_size; 65 - struct mlx5e_bufferx_reg buffer[MLX5E_MAX_BUFFER]; 63 + u32 headroom_size; /* Buffers 0-7 */ 64 + u32 internal_buffers_size; /* Buffers 8-9 */ 65 + struct mlx5e_bufferx_reg buffer[MLX5E_MAX_NETWORK_BUFFER]; 66 66 }; 67 67 68 68 int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+6 -1
drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.c
··· 84 84 85 85 int 86 86 mlx5e_tc_act_post_parse(struct mlx5e_tc_act_parse_state *parse_state, 87 - struct flow_action *flow_action, 87 + struct flow_action *flow_action, int from, int to, 88 88 struct mlx5_flow_attr *attr, 89 89 enum mlx5_flow_namespace_type ns_type) 90 90 { ··· 96 96 priv = parse_state->flow->priv; 97 97 98 98 flow_action_for_each(i, act, flow_action) { 99 + if (i < from) 100 + continue; 101 + else if (i > to) 102 + break; 103 + 99 104 tc_act = mlx5e_tc_act_get(act->id, ns_type); 100 105 if (!tc_act || !tc_act->post_parse) 101 106 continue;
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/tc/act/act.h
··· 112 112 113 113 int 114 114 mlx5e_tc_act_post_parse(struct mlx5e_tc_act_parse_state *parse_state, 115 - struct flow_action *flow_action, 115 + struct flow_action *flow_action, int from, int to, 116 116 struct mlx5_flow_attr *attr, 117 117 enum mlx5_flow_namespace_type ns_type); 118 118
+103 -17
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
··· 492 492 mlx5e_encap_dealloc(priv, e); 493 493 } 494 494 495 + static void mlx5e_encap_put_locked(struct mlx5e_priv *priv, struct mlx5e_encap_entry *e) 496 + { 497 + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; 498 + 499 + lockdep_assert_held(&esw->offloads.encap_tbl_lock); 500 + 501 + if (!refcount_dec_and_test(&e->refcnt)) 502 + return; 503 + list_del(&e->route_list); 504 + hash_del_rcu(&e->encap_hlist); 505 + mlx5e_encap_dealloc(priv, e); 506 + } 507 + 495 508 static void mlx5e_decap_put(struct mlx5e_priv *priv, struct mlx5e_decap_entry *d) 496 509 { 497 510 struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; ··· 829 816 uintptr_t hash_key; 830 817 int err = 0; 831 818 819 + lockdep_assert_held(&esw->offloads.encap_tbl_lock); 820 + 832 821 parse_attr = attr->parse_attr; 833 822 tun_info = parse_attr->tun_info[out_index]; 834 823 mpls_info = &parse_attr->mpls_info[out_index]; ··· 844 829 845 830 hash_key = hash_encap_info(&key); 846 831 847 - mutex_lock(&esw->offloads.encap_tbl_lock); 848 832 e = mlx5e_encap_get(priv, &key, hash_key); 849 833 850 834 /* must verify if encap is valid or not */ ··· 854 840 goto out_err; 855 841 } 856 842 857 - mutex_unlock(&esw->offloads.encap_tbl_lock); 858 - wait_for_completion(&e->res_ready); 859 - 860 - /* Protect against concurrent neigh update. */ 861 - mutex_lock(&esw->offloads.encap_tbl_lock); 862 - if (e->compl_result < 0) { 863 - err = -EREMOTEIO; 864 - goto out_err; 865 - } 866 843 goto attach_flow; 867 844 } 868 845 ··· 882 877 INIT_LIST_HEAD(&e->flows); 883 878 hash_add_rcu(esw->offloads.encap_tbl, &e->encap_hlist, hash_key); 884 879 tbl_time_before = mlx5e_route_tbl_get_last_update(priv); 885 - mutex_unlock(&esw->offloads.encap_tbl_lock); 886 880 887 881 if (family == AF_INET) 888 882 err = mlx5e_tc_tun_create_header_ipv4(priv, mirred_dev, e); 889 883 else if (family == AF_INET6) 890 884 err = mlx5e_tc_tun_create_header_ipv6(priv, mirred_dev, e); 891 885 892 - /* Protect against concurrent neigh update. 
*/ 893 - mutex_lock(&esw->offloads.encap_tbl_lock); 894 886 complete_all(&e->res_ready); 895 887 if (err) { 896 888 e->compl_result = err; ··· 922 920 } else { 923 921 flow_flag_set(flow, SLOW); 924 922 } 925 - mutex_unlock(&esw->offloads.encap_tbl_lock); 926 923 927 924 return err; 928 925 929 926 out_err: 930 - mutex_unlock(&esw->offloads.encap_tbl_lock); 931 927 if (e) 932 - mlx5e_encap_put(priv, e); 928 + mlx5e_encap_put_locked(priv, e); 933 929 return err; 934 930 935 931 out_err_init: 936 - mutex_unlock(&esw->offloads.encap_tbl_lock); 937 932 kfree(tun_info); 938 933 kfree(e); 939 934 return err; ··· 1013 1014 out_err: 1014 1015 mutex_unlock(&esw->offloads.decap_tbl_lock); 1015 1016 return err; 1017 + } 1018 + 1019 + int mlx5e_tc_tun_encap_dests_set(struct mlx5e_priv *priv, 1020 + struct mlx5e_tc_flow *flow, 1021 + struct mlx5_flow_attr *attr, 1022 + struct netlink_ext_ack *extack, 1023 + bool *vf_tun) 1024 + { 1025 + struct mlx5e_tc_flow_parse_attr *parse_attr; 1026 + struct mlx5_esw_flow_attr *esw_attr; 1027 + struct net_device *encap_dev = NULL; 1028 + struct mlx5e_rep_priv *rpriv; 1029 + struct mlx5e_priv *out_priv; 1030 + struct mlx5_eswitch *esw; 1031 + int out_index; 1032 + int err = 0; 1033 + 1034 + if (!mlx5e_is_eswitch_flow(flow)) 1035 + return 0; 1036 + 1037 + parse_attr = attr->parse_attr; 1038 + esw_attr = attr->esw_attr; 1039 + *vf_tun = false; 1040 + 1041 + esw = priv->mdev->priv.eswitch; 1042 + mutex_lock(&esw->offloads.encap_tbl_lock); 1043 + for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) { 1044 + struct net_device *out_dev; 1045 + int mirred_ifindex; 1046 + 1047 + if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) 1048 + continue; 1049 + 1050 + mirred_ifindex = parse_attr->mirred_ifindex[out_index]; 1051 + out_dev = dev_get_by_index(dev_net(priv->netdev), mirred_ifindex); 1052 + if (!out_dev) { 1053 + NL_SET_ERR_MSG_MOD(extack, "Requested mirred device not found"); 1054 + err = -ENODEV; 1055 + goto out; 1056 + } 1057 + err = mlx5e_attach_encap(priv, flow, attr, out_dev, out_index, 1058 + extack, &encap_dev); 1059 + dev_put(out_dev); 1060 + if (err) 1061 + goto out; 1062 + 1063 + if (esw_attr->dests[out_index].flags & 1064 + MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE && 1065 + !esw_attr->dest_int_port) 1066 + *vf_tun = true; 1067 + 1068 + out_priv = netdev_priv(encap_dev); 1069 + rpriv = out_priv->ppriv; 1070 + esw_attr->dests[out_index].rep = rpriv->rep; 1071 + esw_attr->dests[out_index].mdev = out_priv->mdev; 1072 + } 1073 + 1074 + if (*vf_tun && esw_attr->out_count > 1) { 1075 + NL_SET_ERR_MSG_MOD(extack, "VF tunnel encap with mirroring is not supported"); 1076 + err = -EOPNOTSUPP; 1077 + goto out; 1078 + } 1079 + 1080 + out: 1081 + mutex_unlock(&esw->offloads.encap_tbl_lock); 1082 + return err; 1083 + } 1084 + 1085 + void mlx5e_tc_tun_encap_dests_unset(struct mlx5e_priv *priv, 1086 + struct mlx5e_tc_flow *flow, 1087 + struct mlx5_flow_attr *attr) 1088 + { 1089 + struct mlx5_esw_flow_attr *esw_attr; 1090 + int out_index; 1091 + 1092 + if (!mlx5e_is_eswitch_flow(flow)) 1093 + return; 1094 + 1095 + esw_attr = attr->esw_attr; 1096 + 1097 + for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) { 1098 + if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) 1099 + continue; 1100 + 1101 + mlx5e_detach_encap(flow->priv, flow, attr, out_index); 1102 + kfree(attr->parse_attr->tun_info[out_index]); 1103 + } 1016 1104 } 1017 1105 1018 1106 static int cmp_route_info(struct mlx5e_route_key *a,
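mlx5e_encap_put_locked() is the variant for callers that already hold encap_tbl_lock; the lockdep assertion documents that requirement, and mlx5e_tc_tun_encap_dests_set() now takes the lock once around the whole destination loop instead of re-acquiring it per entry. A rough userspace sketch of the same put()/put_locked() split, using a pthread mutex and a plain reference count (the names here are illustrative, not from the driver):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t tbl_lock = PTHREAD_MUTEX_INITIALIZER;

struct entry {
    int refcnt;
};

/* Caller must already hold tbl_lock (mirrors the lockdep_assert_held()). */
static void entry_put_locked(struct entry *e)
{
    if (--e->refcnt == 0) {
        /* a real table would also unhash/unlink the entry here */
        free(e);
    }
}

/* Convenience wrapper for callers that do not hold the lock yet. */
static void entry_put(struct entry *e)
{
    pthread_mutex_lock(&tbl_lock);
    entry_put_locked(e);
    pthread_mutex_unlock(&tbl_lock);
}

int main(void)
{
    struct entry *e = malloc(sizeof(*e));

    e->refcnt = 2;

    pthread_mutex_lock(&tbl_lock);
    entry_put_locked(e);        /* drop one reference while holding the lock */
    pthread_mutex_unlock(&tbl_lock);

    entry_put(e);               /* last reference, frees the entry */
    printf("done\n");
    return 0;
}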
+9
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.h
··· 30 30 void mlx5e_detach_decap_route(struct mlx5e_priv *priv, 31 31 struct mlx5e_tc_flow *flow); 32 32 33 + int mlx5e_tc_tun_encap_dests_set(struct mlx5e_priv *priv, 34 + struct mlx5e_tc_flow *flow, 35 + struct mlx5_flow_attr *attr, 36 + struct netlink_ext_ack *extack, 37 + bool *vf_tun); 38 + void mlx5e_tc_tun_encap_dests_unset(struct mlx5e_priv *priv, 39 + struct mlx5e_tc_flow *flow, 40 + struct mlx5_flow_attr *attr); 41 + 33 42 struct ip_tunnel_info *mlx5e_dup_tun_info(const struct ip_tunnel_info *tun_info); 34 43 35 44 int mlx5e_tc_set_attr_rx_tun(struct mlx5e_tc_flow *flow,
+4 -7
drivers/net/ethernet/mellanox/mlx5/core/en_common.c
··· 150 150 151 151 inlen = MLX5_ST_SZ_BYTES(modify_tir_in); 152 152 in = kvzalloc(inlen, GFP_KERNEL); 153 - if (!in) { 154 - err = -ENOMEM; 155 - goto out; 156 - } 153 + if (!in) 154 + return -ENOMEM; 157 155 158 156 if (enable_uc_lb) 159 157 lb_flags = MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST; ··· 169 171 tirn = tir->tirn; 170 172 err = mlx5_core_modify_tir(mdev, tirn, in); 171 173 if (err) 172 - goto out; 174 + break; 173 175 } 176 + mutex_unlock(&mdev->mlx5e_res.hw_objs.td.list_lock); 174 177 175 - out: 176 178 kvfree(in); 177 179 if (err) 178 180 netdev_err(priv->netdev, "refresh tir(0x%x) failed, %d\n", tirn, err); 179 - mutex_unlock(&mdev->mlx5e_res.hw_objs.td.list_lock); 180 181 181 182 return err; 182 183 }
+4 -3
drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
··· 926 926 if (err) 927 927 return err; 928 928 929 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) 929 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) 930 930 dcb_buffer->buffer_size[i] = port_buffer.buffer[i].size; 931 - dcb_buffer->total_size = port_buffer.port_buffer_size; 931 + dcb_buffer->total_size = port_buffer.port_buffer_size - 932 + port_buffer.internal_buffers_size; 932 933 933 934 return 0; 934 935 } ··· 971 970 if (err) 972 971 return err; 973 972 974 - for (i = 0; i < MLX5E_MAX_BUFFER; i++) { 973 + for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) { 975 974 if (port_buffer.buffer[i].size != dcb_buffer->buffer_size[i]) { 976 975 changed |= MLX5E_PORT_BUFFER_SIZE; 977 976 buffer_size = dcb_buffer->buffer_size;
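With the split into eight network buffers and two internal buffers, the size reported to DCBNL is now the port buffer size minus the internal buffers, so userspace only sees the configurable part. A small worked sketch of that accounting (all sizes below are made up for illustration):

#include <stdio.h>

#define MAX_NETWORK_BUFFER 8

int main(void)
{
    unsigned int port_buffer_size = 256 * 1024;     /* hypothetical total */
    unsigned int internal_buffers_size = 16 * 1024; /* buffers 8-9 */
    unsigned int buffer_size[MAX_NETWORK_BUFFER] = {
        32 * 1024, 32 * 1024, 32 * 1024, 32 * 1024,
        28 * 1024, 28 * 1024, 28 * 1024, 28 * 1024,
    };
    unsigned int total = port_buffer_size - internal_buffers_size;
    unsigned int i, sum = 0;

    for (i = 0; i < MAX_NETWORK_BUFFER; i++)
        sum += buffer_size[i];

    printf("reported total_size: %u\n", total);     /* 245760 */
    printf("sum of network buffers: %u\n", sum);    /* 245760 */
    return 0;
}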
+39 -30
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 727 727 mlx5e_rq_shampo_hd_free(rq); 728 728 } 729 729 730 - static __be32 mlx5e_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev) 731 - { 732 - u32 out[MLX5_ST_SZ_DW(query_special_contexts_out)] = {}; 733 - u32 in[MLX5_ST_SZ_DW(query_special_contexts_in)] = {}; 734 - int res; 735 - 736 - if (!MLX5_CAP_GEN(dev, terminate_scatter_list_mkey)) 737 - return MLX5_TERMINATE_SCATTER_LIST_LKEY; 738 - 739 - MLX5_SET(query_special_contexts_in, in, opcode, 740 - MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS); 741 - res = mlx5_cmd_exec_inout(dev, query_special_contexts, in, out); 742 - if (res) 743 - return MLX5_TERMINATE_SCATTER_LIST_LKEY; 744 - 745 - res = MLX5_GET(query_special_contexts_out, out, 746 - terminate_scatter_list_mkey); 747 - return cpu_to_be32(res); 748 - } 749 - 750 730 static int mlx5e_alloc_rq(struct mlx5e_params *params, 751 731 struct mlx5e_xsk_param *xsk, 752 732 struct mlx5e_rq_param *rqp, ··· 888 908 /* check if num_frags is not a pow of two */ 889 909 if (rq->wqe.info.num_frags < (1 << rq->wqe.info.log_num_frags)) { 890 910 wqe->data[f].byte_count = 0; 891 - wqe->data[f].lkey = mlx5e_get_terminate_scatter_list_mkey(mdev); 911 + wqe->data[f].lkey = params->terminate_lkey_be; 892 912 wqe->data[f].addr = 0; 893 913 } 894 914 } ··· 4987 5007 /* RQ */ 4988 5008 mlx5e_build_rq_params(mdev, params); 4989 5009 5010 + params->terminate_lkey_be = mlx5_core_get_terminate_scatter_list_mkey(mdev); 5011 + 4990 5012 params->packet_merge.timeout = mlx5e_choose_lro_timeout(mdev, MLX5E_DEFAULT_LRO_TIMEOUT); 4991 5013 4992 5014 /* CQ moderation params */ ··· 5261 5279 5262 5280 mlx5e_timestamp_init(priv); 5263 5281 5282 + priv->dfs_root = debugfs_create_dir("nic", 5283 + mlx5_debugfs_get_dev_root(mdev)); 5284 + 5264 5285 fs = mlx5e_fs_init(priv->profile, mdev, 5265 5286 !test_bit(MLX5E_STATE_DESTROYING, &priv->state), 5266 5287 priv->dfs_root); 5267 5288 if (!fs) { 5268 5289 err = -ENOMEM; 5269 5290 mlx5_core_err(mdev, "FS initialization failed, %d\n", err); 5291 + debugfs_remove_recursive(priv->dfs_root); 5270 5292 return err; 5271 5293 } 5272 5294 priv->fs = fs; ··· 5291 5305 mlx5e_health_destroy_reporters(priv); 5292 5306 mlx5e_ktls_cleanup(priv); 5293 5307 mlx5e_fs_cleanup(priv->fs); 5308 + debugfs_remove_recursive(priv->dfs_root); 5294 5309 priv->fs = NULL; 5295 5310 } 5296 5311 ··· 5838 5851 } 5839 5852 5840 5853 static int 5841 - mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mdev, 5842 - const struct mlx5e_profile *new_profile, void *new_ppriv) 5854 + mlx5e_netdev_init_profile(struct net_device *netdev, struct mlx5_core_dev *mdev, 5855 + const struct mlx5e_profile *new_profile, void *new_ppriv) 5843 5856 { 5844 5857 struct mlx5e_priv *priv = netdev_priv(netdev); 5845 5858 int err; ··· 5855 5868 err = new_profile->init(priv->mdev, priv->netdev); 5856 5869 if (err) 5857 5870 goto priv_cleanup; 5871 + 5872 + return 0; 5873 + 5874 + priv_cleanup: 5875 + mlx5e_priv_cleanup(priv); 5876 + return err; 5877 + } 5878 + 5879 + static int 5880 + mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mdev, 5881 + const struct mlx5e_profile *new_profile, void *new_ppriv) 5882 + { 5883 + struct mlx5e_priv *priv = netdev_priv(netdev); 5884 + int err; 5885 + 5886 + err = mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv); 5887 + if (err) 5888 + return err; 5889 + 5858 5890 err = mlx5e_attach_netdev(priv); 5859 5891 if (err) 5860 5892 goto profile_cleanup; ··· 5881 5875 5882 5876 profile_cleanup: 5883 5877 new_profile->cleanup(priv); 5884 
- priv_cleanup: 5885 5878 mlx5e_priv_cleanup(priv); 5886 5879 return err; 5887 5880 } ··· 5898 5893 mlx5e_detach_netdev(priv); 5899 5894 priv->profile->cleanup(priv); 5900 5895 mlx5e_priv_cleanup(priv); 5896 + 5897 + if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) { 5898 + mlx5e_netdev_init_profile(netdev, mdev, new_profile, new_ppriv); 5899 + set_bit(MLX5E_STATE_DESTROYING, &priv->state); 5900 + return -EIO; 5901 + } 5901 5902 5902 5903 err = mlx5e_netdev_attach_profile(netdev, mdev, new_profile, new_ppriv); 5903 5904 if (err) { /* roll back to original profile */ ··· 5966 5955 struct net_device *netdev = priv->netdev; 5967 5956 struct mlx5_core_dev *mdev = priv->mdev; 5968 5957 5969 - if (!netif_device_present(netdev)) 5958 + if (!netif_device_present(netdev)) { 5959 + if (test_bit(MLX5E_STATE_DESTROYING, &priv->state)) 5960 + mlx5e_destroy_mdev_resources(mdev); 5970 5961 return -ENODEV; 5962 + } 5971 5963 5972 5964 mlx5e_detach_netdev(priv); 5973 5965 mlx5e_destroy_mdev_resources(mdev); ··· 6016 6002 priv->profile = profile; 6017 6003 priv->ppriv = NULL; 6018 6004 6019 - priv->dfs_root = debugfs_create_dir("nic", 6020 - mlx5_debugfs_get_dev_root(priv->mdev)); 6021 - 6022 6005 err = profile->init(mdev, netdev); 6023 6006 if (err) { 6024 6007 mlx5_core_err(mdev, "mlx5e_nic_profile init failed, %d\n", err); ··· 6044 6033 err_profile_cleanup: 6045 6034 profile->cleanup(priv); 6046 6035 err_destroy_netdev: 6047 - debugfs_remove_recursive(priv->dfs_root); 6048 6036 mlx5e_destroy_netdev(priv); 6049 6037 err_devlink_port_unregister: 6050 6038 mlx5e_devlink_port_unregister(mlx5e_dev); ··· 6063 6053 unregister_netdev(priv->netdev); 6064 6054 mlx5e_suspend(adev, state); 6065 6055 priv->profile->cleanup(priv); 6066 - debugfs_remove_recursive(priv->dfs_root); 6067 6056 mlx5e_destroy_netdev(priv); 6068 6057 mlx5e_devlink_port_unregister(mlx5e_dev); 6069 6058 mlx5e_destroy_devlink(mlx5e_dev);
+6
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 30 30 * SOFTWARE. 31 31 */ 32 32 33 + #include <linux/debugfs.h> 33 34 #include <linux/mlx5/fs.h> 34 35 #include <net/switchdev.h> 35 36 #include <net/pkt_cls.h> ··· 813 812 { 814 813 struct mlx5e_priv *priv = netdev_priv(netdev); 815 814 815 + priv->dfs_root = debugfs_create_dir("nic", 816 + mlx5_debugfs_get_dev_root(mdev)); 817 + 816 818 priv->fs = mlx5e_fs_init(priv->profile, mdev, 817 819 !test_bit(MLX5E_STATE_DESTROYING, &priv->state), 818 820 priv->dfs_root); 819 821 if (!priv->fs) { 820 822 netdev_err(priv->netdev, "FS allocation failed\n"); 823 + debugfs_remove_recursive(priv->dfs_root); 821 824 return -ENOMEM; 822 825 } 823 826 ··· 834 829 static void mlx5e_cleanup_rep(struct mlx5e_priv *priv) 835 830 { 836 831 mlx5e_fs_cleanup(priv->fs); 832 + debugfs_remove_recursive(priv->dfs_root); 837 833 priv->fs = NULL; 838 834 } 839 835
+7 -90
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1700 1700 } 1701 1701 1702 1702 static int 1703 - set_encap_dests(struct mlx5e_priv *priv, 1704 - struct mlx5e_tc_flow *flow, 1705 - struct mlx5_flow_attr *attr, 1706 - struct netlink_ext_ack *extack, 1707 - bool *vf_tun) 1708 - { 1709 - struct mlx5e_tc_flow_parse_attr *parse_attr; 1710 - struct mlx5_esw_flow_attr *esw_attr; 1711 - struct net_device *encap_dev = NULL; 1712 - struct mlx5e_rep_priv *rpriv; 1713 - struct mlx5e_priv *out_priv; 1714 - int out_index; 1715 - int err = 0; 1716 - 1717 - if (!mlx5e_is_eswitch_flow(flow)) 1718 - return 0; 1719 - 1720 - parse_attr = attr->parse_attr; 1721 - esw_attr = attr->esw_attr; 1722 - *vf_tun = false; 1723 - 1724 - for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) { 1725 - struct net_device *out_dev; 1726 - int mirred_ifindex; 1727 - 1728 - if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) 1729 - continue; 1730 - 1731 - mirred_ifindex = parse_attr->mirred_ifindex[out_index]; 1732 - out_dev = dev_get_by_index(dev_net(priv->netdev), mirred_ifindex); 1733 - if (!out_dev) { 1734 - NL_SET_ERR_MSG_MOD(extack, "Requested mirred device not found"); 1735 - err = -ENODEV; 1736 - goto out; 1737 - } 1738 - err = mlx5e_attach_encap(priv, flow, attr, out_dev, out_index, 1739 - extack, &encap_dev); 1740 - dev_put(out_dev); 1741 - if (err) 1742 - goto out; 1743 - 1744 - if (esw_attr->dests[out_index].flags & 1745 - MLX5_ESW_DEST_CHAIN_WITH_SRC_PORT_CHANGE && 1746 - !esw_attr->dest_int_port) 1747 - *vf_tun = true; 1748 - 1749 - out_priv = netdev_priv(encap_dev); 1750 - rpriv = out_priv->ppriv; 1751 - esw_attr->dests[out_index].rep = rpriv->rep; 1752 - esw_attr->dests[out_index].mdev = out_priv->mdev; 1753 - } 1754 - 1755 - if (*vf_tun && esw_attr->out_count > 1) { 1756 - NL_SET_ERR_MSG_MOD(extack, "VF tunnel encap with mirroring is not supported"); 1757 - err = -EOPNOTSUPP; 1758 - goto out; 1759 - } 1760 - 1761 - out: 1762 - return err; 1763 - } 1764 - 1765 - static void 1766 - clean_encap_dests(struct mlx5e_priv *priv, 1767 - struct mlx5e_tc_flow *flow, 1768 - struct mlx5_flow_attr *attr) 1769 - { 1770 - struct mlx5_esw_flow_attr *esw_attr; 1771 - int out_index; 1772 - 1773 - if (!mlx5e_is_eswitch_flow(flow)) 1774 - return; 1775 - 1776 - esw_attr = attr->esw_attr; 1777 - 1778 - for (out_index = 0; out_index < MLX5_MAX_FLOW_FWD_VPORTS; out_index++) { 1779 - if (!(esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) 1780 - continue; 1781 - 1782 - mlx5e_detach_encap(priv, flow, attr, out_index); 1783 - kfree(attr->parse_attr->tun_info[out_index]); 1784 - } 1785 - } 1786 - 1787 - static int 1788 1703 verify_attr_actions(u32 actions, struct netlink_ext_ack *extack) 1789 1704 { 1790 1705 if (!(actions & ··· 1735 1820 if (err) 1736 1821 goto err_out; 1737 1822 1738 - err = set_encap_dests(flow->priv, flow, attr, extack, &vf_tun); 1823 + err = mlx5e_tc_tun_encap_dests_set(flow->priv, flow, attr, extack, &vf_tun); 1739 1824 if (err) 1740 1825 goto err_out; 1741 1826 ··· 3859 3944 struct mlx5_flow_attr *prev_attr; 3860 3945 struct flow_action_entry *act; 3861 3946 struct mlx5e_tc_act *tc_act; 3947 + int err, i, i_split = 0; 3862 3948 bool is_missable; 3863 - int err, i; 3864 3949 3865 3950 ns_type = mlx5e_get_flow_namespace(flow); 3866 3951 list_add(&attr->list, &flow->attrs); ··· 3901 3986 i < flow_action->num_entries - 1)) { 3902 3987 is_missable = tc_act->is_missable ? 
tc_act->is_missable(act) : false; 3903 3988 3904 - err = mlx5e_tc_act_post_parse(parse_state, flow_action, attr, ns_type); 3989 + err = mlx5e_tc_act_post_parse(parse_state, flow_action, i_split, i, attr, 3990 + ns_type); 3905 3991 if (err) 3906 3992 goto out_free_post_acts; 3907 3993 ··· 3912 3996 goto out_free_post_acts; 3913 3997 } 3914 3998 3999 + i_split = i + 1; 3915 4000 list_add(&attr->list, &flow->attrs); 3916 4001 } 3917 4002 ··· 3927 4010 } 3928 4011 } 3929 4012 3930 - err = mlx5e_tc_act_post_parse(parse_state, flow_action, attr, ns_type); 4013 + err = mlx5e_tc_act_post_parse(parse_state, flow_action, i_split, i, attr, ns_type); 3931 4014 if (err) 3932 4015 goto out_free_post_acts; 3933 4016 ··· 4241 4324 if (attr->post_act_handle) 4242 4325 mlx5e_tc_post_act_del(get_post_action(flow->priv), attr->post_act_handle); 4243 4326 4244 - clean_encap_dests(flow->priv, flow, attr); 4327 + mlx5e_tc_tun_encap_dests_unset(flow->priv, flow, attr); 4245 4328 4246 4329 if (attr->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) 4247 4330 mlx5_fc_destroy(counter_dev, attr->counter);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 824 824 ncomp_eqs = table->num_comp_eqs; 825 825 cpus = kcalloc(ncomp_eqs, sizeof(*cpus), GFP_KERNEL); 826 826 if (!cpus) 827 - ret = -ENOMEM; 827 + return -ENOMEM; 828 828 829 829 i = 0; 830 830 rcu_read_lock();
+5 -4
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 923 923 } 924 924 925 925 mlx5_pci_vsc_init(dev); 926 - dev->caps.embedded_cpu = mlx5_read_embedded_cpu(dev); 927 926 return 0; 928 927 929 928 err_clr_master: ··· 1154 1155 goto err_cmd_cleanup; 1155 1156 } 1156 1157 1158 + dev->caps.embedded_cpu = mlx5_read_embedded_cpu(dev); 1157 1159 mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_UP); 1158 1160 1159 1161 mlx5_start_health_poll(dev); ··· 1802 1802 struct devlink *devlink = priv_to_devlink(dev); 1803 1803 1804 1804 set_bit(MLX5_BREAK_FW_WAIT, &dev->intf_state); 1805 - /* mlx5_drain_fw_reset() is using devlink APIs. Hence, we must drain 1806 - * fw_reset before unregistering the devlink. 1805 + /* mlx5_drain_fw_reset() and mlx5_drain_health_wq() are using 1806 + * devlink notify APIs. 1807 + * Hence, we must drain them before unregistering the devlink. 1807 1808 */ 1808 1809 mlx5_drain_fw_reset(dev); 1810 + mlx5_drain_health_wq(dev); 1809 1811 devlink_unregister(devlink); 1810 1812 mlx5_sriov_disable(pdev); 1811 1813 mlx5_thermal_uninit(dev); 1812 1814 mlx5_crdump_disable(dev); 1813 - mlx5_drain_health_wq(dev); 1814 1815 mlx5_uninit_one(dev); 1815 1816 mlx5_pci_close(dev); 1816 1817 mlx5_mdev_uninit(dev);
+21
drivers/net/ethernet/mellanox/mlx5/core/mr.c
··· 32 32 33 33 #include <linux/kernel.h> 34 34 #include <linux/mlx5/driver.h> 35 + #include <linux/mlx5/qp.h> 35 36 #include "mlx5_core.h" 36 37 37 38 int mlx5_core_create_mkey(struct mlx5_core_dev *dev, u32 *mkey, u32 *in, ··· 123 122 return mlx5_cmd_exec_in(dev, destroy_psv, in); 124 123 } 125 124 EXPORT_SYMBOL(mlx5_core_destroy_psv); 125 + 126 + __be32 mlx5_core_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev) 127 + { 128 + u32 out[MLX5_ST_SZ_DW(query_special_contexts_out)] = {}; 129 + u32 in[MLX5_ST_SZ_DW(query_special_contexts_in)] = {}; 130 + u32 mkey; 131 + 132 + if (!MLX5_CAP_GEN(dev, terminate_scatter_list_mkey)) 133 + return MLX5_TERMINATE_SCATTER_LIST_LKEY; 134 + 135 + MLX5_SET(query_special_contexts_in, in, opcode, 136 + MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS); 137 + if (mlx5_cmd_exec_inout(dev, query_special_contexts, in, out)) 138 + return MLX5_TERMINATE_SCATTER_LIST_LKEY; 139 + 140 + mkey = MLX5_GET(query_special_contexts_out, out, 141 + terminate_scatter_list_mkey); 142 + return cpu_to_be32(mkey); 143 + } 144 + EXPORT_SYMBOL(mlx5_core_get_terminate_scatter_list_mkey);
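mlx5_core_get_terminate_scatter_list_mkey() queries the special-contexts mkey and falls back to the default MLX5_TERMINATE_SCATTER_LIST_LKEY when the capability is missing or the command fails; en_main.c then caches the big-endian result in params->terminate_lkey_be at init time instead of re-querying per RQ. A generic sketch of the query-with-fallback-and-cache pattern, assuming a stand-in query function rather than the real firmware command:

#include <stdint.h>
#include <stdio.h>

#define DEFAULT_TERMINATE_LKEY 0x100u   /* hypothetical safe default */

/* Stand-in for the firmware query; returns non-zero on failure. */
static int query_special_mkey(uint32_t *mkey)
{
    *mkey = 0xabcd;
    return 0;
}

static uint32_t get_terminate_mkey(int cap_supported)
{
    uint32_t mkey;

    if (!cap_supported)
        return DEFAULT_TERMINATE_LKEY;
    if (query_special_mkey(&mkey))
        return DEFAULT_TERMINATE_LKEY;
    return mkey;
}

int main(void)
{
    /* Cache once at init, reuse for every queue afterwards. */
    uint32_t cached = get_terminate_mkey(1);

    printf("terminate lkey: 0x%x\n", (unsigned int)cached);
    return 0;
}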
+7 -6
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
··· 141 141 irq_update_affinity_hint(irq->map.virq, NULL); 142 142 #ifdef CONFIG_RFS_ACCEL 143 143 rmap = mlx5_eq_table_get_rmap(pool->dev); 144 - if (rmap && irq->map.index) 144 + if (rmap) 145 145 irq_cpu_rmap_remove(rmap, irq->map.virq); 146 146 #endif 147 147 ··· 232 232 if (!irq) 233 233 return ERR_PTR(-ENOMEM); 234 234 if (!i || !pci_msix_can_alloc_dyn(dev->pdev)) { 235 - /* The vector at index 0 was already allocated. 236 - * Just get the irq number. If dynamic irq is not supported 237 - * vectors have also been allocated. 235 + /* The vector at index 0 is always statically allocated. If 236 + * dynamic irq is not supported all vectors are statically 237 + * allocated. In both cases just get the irq number and set 238 + * the index. 238 239 */ 239 240 irq->map.virq = pci_irq_vector(dev->pdev, i); 240 - irq->map.index = 0; 241 + irq->map.index = i; 241 242 } else { 242 243 irq->map = pci_msix_alloc_irq_at(dev->pdev, MSI_ANY_INDEX, af_desc); 243 244 if (!irq->map.virq) { ··· 571 570 572 571 af_desc.is_managed = false; 573 572 for (i = 0; i < nirqs; i++) { 573 + cpumask_clear(&af_desc.mask); 574 574 cpumask_set_cpu(cpus[i], &af_desc.mask); 575 575 irq = mlx5_irq_request(dev, i + 1, &af_desc, rmap); 576 576 if (IS_ERR(irq)) 577 577 break; 578 - cpumask_clear(&af_desc.mask); 579 578 irqs[i] = irq; 580 579 } 581 580
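The pci_irq.c hunk fixes two things: statically allocated vectors now keep their real index instead of 0, and the affinity descriptor is cleared at the top of each loop iteration so every IRQ request carries exactly the one CPU chosen for that vector, regardless of what the mask held beforehand. A tiny sketch of why clear-before-set matters, using a plain bitmask in place of struct cpumask:

#include <stdio.h>

static unsigned long build_affinity(unsigned long prev_mask, int cpu,
                                    int clear_first)
{
    unsigned long mask = prev_mask;

    if (clear_first)
        mask = 0;               /* what cpumask_clear() does */
    mask |= 1UL << cpu;         /* what cpumask_set_cpu() does */
    return mask;
}

int main(void)
{
    /* Bits left over from earlier use of the same descriptor. */
    unsigned long stale = (1UL << 0) | (1UL << 3);

    printf("without clearing: 0x%lx\n", build_affinity(stale, 5, 0)); /* 0x29 */
    printf("with clearing:    0x%lx\n", build_affinity(stale, 5, 1)); /* 0x20 */
    return 0;
}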
+1
drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
··· 63 63 struct mlx5_sf_dev *sf_dev = container_of(adev, struct mlx5_sf_dev, adev); 64 64 struct devlink *devlink = priv_to_devlink(sf_dev->mdev); 65 65 66 + mlx5_drain_health_wq(sf_dev->mdev); 66 67 devlink_unregister(devlink); 67 68 mlx5_uninit_one(sf_dev->mdev); 68 69 iounmap(sf_dev->mdev->iseg);
+3
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ptrn.c
··· 213 213 } 214 214 215 215 INIT_LIST_HEAD(&mgr->ptrn_list); 216 + mutex_init(&mgr->modify_hdr_mutex); 217 + 216 218 return mgr; 217 219 218 220 free_mgr: ··· 239 237 } 240 238 241 239 mlx5dr_icm_pool_destroy(mgr->ptrn_icm_pool); 240 + mutex_destroy(&mgr->modify_hdr_mutex); 242 241 kfree(mgr); 243 242 }
+7 -6
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_rx.c
··· 245 245 246 246 skb = priv->rx_skb[rx_pi_rem]; 247 247 248 - skb_put(skb, datalen); 249 - 250 - skb->ip_summed = CHECKSUM_NONE; /* device did not checksum packet */ 251 - 252 - skb->protocol = eth_type_trans(skb, netdev); 253 - 254 248 /* Alloc another RX SKB for this same index */ 255 249 rx_skb = mlxbf_gige_alloc_skb(priv, MLXBF_GIGE_DEFAULT_BUF_SZ, 256 250 &rx_buf_dma, DMA_FROM_DEVICE); ··· 253 259 priv->rx_skb[rx_pi_rem] = rx_skb; 254 260 dma_unmap_single(priv->dev, *rx_wqe_addr, 255 261 MLXBF_GIGE_DEFAULT_BUF_SZ, DMA_FROM_DEVICE); 262 + 263 + skb_put(skb, datalen); 264 + 265 + skb->ip_summed = CHECKSUM_NONE; /* device did not checksum packet */ 266 + 267 + skb->protocol = eth_type_trans(skb, netdev); 268 + 256 269 *rx_wqe_addr = rx_buf_dma; 257 270 } else if (rx_cqe & MLXBF_GIGE_RX_CQE_PKT_STATUS_MAC_ERR) { 258 271 priv->stats.rx_mac_errors++;
-10
drivers/net/ethernet/microsoft/mana/mana_en.c
··· 1279 1279 if (comp_read < 1) 1280 1280 return; 1281 1281 1282 - apc->eth_stats.tx_cqes = comp_read; 1283 - 1284 1282 for (i = 0; i < comp_read; i++) { 1285 1283 struct mana_tx_comp_oob *cqe_oob; 1286 1284 ··· 1361 1363 WARN_ON_ONCE(1); 1362 1364 1363 1365 cq->work_done = pkt_transmitted; 1364 - 1365 - apc->eth_stats.tx_cqes -= pkt_transmitted; 1366 1366 } 1367 1367 1368 1368 static void mana_post_pkt_rxq(struct mana_rxq *rxq) ··· 1622 1626 { 1623 1627 struct gdma_comp *comp = cq->gdma_comp_buf; 1624 1628 struct mana_rxq *rxq = cq->rxq; 1625 - struct mana_port_context *apc; 1626 1629 int comp_read, i; 1627 - 1628 - apc = netdev_priv(rxq->ndev); 1629 1630 1630 1631 comp_read = mana_gd_poll_cq(cq->gdma_cq, comp, CQE_POLLING_BUFFER); 1631 1632 WARN_ON_ONCE(comp_read > CQE_POLLING_BUFFER); 1632 1633 1633 - apc->eth_stats.rx_cqes = comp_read; 1634 1634 rxq->xdp_flush = false; 1635 1635 1636 1636 for (i = 0; i < comp_read; i++) { ··· 1638 1646 return; 1639 1647 1640 1648 mana_process_rx_cqe(rxq, cq, &comp[i]); 1641 - 1642 - apc->eth_stats.rx_cqes--; 1643 1649 } 1644 1650 1645 1651 if (rxq->xdp_flush)
-2
drivers/net/ethernet/microsoft/mana/mana_ethtool.c
··· 13 13 } mana_eth_stats[] = { 14 14 {"stop_queue", offsetof(struct mana_ethtool_stats, stop_queue)}, 15 15 {"wake_queue", offsetof(struct mana_ethtool_stats, wake_queue)}, 16 - {"tx_cqes", offsetof(struct mana_ethtool_stats, tx_cqes)}, 17 16 {"tx_cq_err", offsetof(struct mana_ethtool_stats, tx_cqe_err)}, 18 17 {"tx_cqe_unknown_type", offsetof(struct mana_ethtool_stats, 19 18 tx_cqe_unknown_type)}, 20 - {"rx_cqes", offsetof(struct mana_ethtool_stats, rx_cqes)}, 21 19 {"rx_coalesced_err", offsetof(struct mana_ethtool_stats, 22 20 rx_coalesced_err)}, 23 21 {"rx_cqe_unknown_type", offsetof(struct mana_ethtool_stats,
+1 -1
drivers/net/ethernet/renesas/rswitch.c
··· 1485 1485 1486 1486 if (rswitch_get_num_cur_queues(gq) >= gq->ring_size - 1) { 1487 1487 netif_stop_subqueue(ndev, 0); 1488 - return ret; 1488 + return NETDEV_TX_BUSY; 1489 1489 } 1490 1490 1491 1491 if (skb_put_padto(skb, ETH_ZLEN))
+12 -15
drivers/net/ethernet/sfc/tc.c
··· 624 624 if (!found) { /* We don't care. */ 625 625 netif_dbg(efx, drv, efx->net_dev, 626 626 "Ignoring foreign filter that doesn't egdev us\n"); 627 - rc = -EOPNOTSUPP; 628 - goto release; 627 + return -EOPNOTSUPP; 629 628 } 630 629 631 630 rc = efx_mae_match_check_caps(efx, &match.mask, NULL); 632 631 if (rc) 633 - goto release; 632 + return rc; 634 633 635 634 if (efx_tc_match_is_encap(&match.mask)) { 636 635 enum efx_encap_type type; ··· 638 639 if (type == EFX_ENCAP_TYPE_NONE) { 639 640 NL_SET_ERR_MSG_MOD(extack, 640 641 "Egress encap match on unsupported tunnel device"); 641 - rc = -EOPNOTSUPP; 642 - goto release; 642 + return -EOPNOTSUPP; 643 643 } 644 644 645 645 rc = efx_mae_check_encap_type_supported(efx, type); ··· 646 648 NL_SET_ERR_MSG_FMT_MOD(extack, 647 649 "Firmware reports no support for %s encap match", 648 650 efx_tc_encap_type_name(type)); 649 - goto release; 651 + return rc; 650 652 } 651 653 652 654 rc = efx_tc_flower_record_encap_match(efx, &match, type, 653 655 extack); 654 656 if (rc) 655 - goto release; 657 + return rc; 656 658 } else { 657 659 /* This is not a tunnel decap rule, ignore it */ 658 660 netif_dbg(efx, drv, efx->net_dev, 659 661 "Ignoring foreign filter without encap match\n"); 660 - rc = -EOPNOTSUPP; 661 - goto release; 662 + return -EOPNOTSUPP; 662 663 } 663 664 664 665 rule = kzalloc(sizeof(*rule), GFP_USER); 665 666 if (!rule) { 666 667 rc = -ENOMEM; 667 - goto release; 668 + goto out_free; 668 669 } 669 670 INIT_LIST_HEAD(&rule->acts.list); 670 671 rule->cookie = tc->cookie; ··· 675 678 "Ignoring already-offloaded rule (cookie %lx)\n", 676 679 tc->cookie); 677 680 rc = -EEXIST; 678 - goto release; 681 + goto out_free; 679 682 } 680 683 681 684 act = kzalloc(sizeof(*act), GFP_USER); ··· 840 843 efx_tc_match_action_ht_params); 841 844 efx_tc_free_action_set_list(efx, &rule->acts, false); 842 845 } 846 + out_free: 843 847 kfree(rule); 844 848 if (match.encap) 845 849 efx_tc_flower_release_encap_match(efx, match.encap); ··· 897 899 return rc; 898 900 if (efx_tc_match_is_encap(&match.mask)) { 899 901 NL_SET_ERR_MSG_MOD(extack, "Ingress enc_key matches not supported"); 900 - rc = -EOPNOTSUPP; 901 - goto release; 902 + return -EOPNOTSUPP; 902 903 } 903 904 904 905 if (tc->common.chain_index) { ··· 921 924 if (old) { 922 925 netif_dbg(efx, drv, efx->net_dev, 923 926 "Already offloaded rule (cookie %lx)\n", tc->cookie); 924 - rc = -EEXIST; 925 927 NL_SET_ERR_MSG_MOD(extack, "Rule already offloaded"); 926 - goto release; 928 + kfree(rule); 929 + return -EEXIST; 927 930 } 928 931 929 932 /* Parse actions */
+1 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 7233 7233 ndev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | 7234 7234 NETIF_F_RXCSUM; 7235 7235 ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | 7236 - NETDEV_XDP_ACT_XSK_ZEROCOPY | 7237 - NETDEV_XDP_ACT_NDO_XMIT; 7236 + NETDEV_XDP_ACT_XSK_ZEROCOPY; 7238 7237 7239 7238 ret = stmmac_tc_init(priv, priv); 7240 7239 if (!ret) {
+6
drivers/net/ethernet/stmicro/stmmac/stmmac_xdp.c
··· 117 117 return -EOPNOTSUPP; 118 118 } 119 119 120 + if (!prog) 121 + xdp_features_clear_redirect_target(dev); 122 + 120 123 need_update = !!priv->xdp_prog != !!prog; 121 124 if (if_running && need_update) 122 125 stmmac_xdp_release(dev); ··· 133 130 134 131 if (if_running && need_update) 135 132 stmmac_xdp_open(dev); 133 + 134 + if (prog) 135 + xdp_features_set_redirect_target(dev, false); 136 136 137 137 return 0; 138 138 }
+1 -1
drivers/net/ipa/ipa_endpoint.c
··· 119 119 }; 120 120 121 121 /* Size in bytes of an IPA packet status structure */ 122 - #define IPA_STATUS_SIZE sizeof(__le32[4]) 122 + #define IPA_STATUS_SIZE sizeof(__le32[8]) 123 123 124 124 /* IPA status structure decoder; looks up field values for a structure */ 125 125 static u32 ipa_status_extract(struct ipa *ipa, const void *data,
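The status size fix relies on sizeof applied to an array type: sizeof(__le32[4]) is 16 bytes, while the corrected sizeof(__le32[8]) is 32 bytes, matching the hardware status structure. A one-file check of that arithmetic, with a plain uint32_t standing in for the kernel's __le32:

#include <stdint.h>
#include <stdio.h>

typedef uint32_t le32;  /* stand-in for __le32 */

int main(void)
{
    printf("sizeof(le32[4]) = %zu\n", sizeof(le32[4])); /* 16 */
    printf("sizeof(le32[8]) = %zu\n", sizeof(le32[8])); /* 32 */
    return 0;
}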
+3 -13
drivers/net/phy/mxl-gpy.c
··· 274 274 return ret < 0 ? ret : 0; 275 275 } 276 276 277 - static bool gpy_has_broken_mdint(struct phy_device *phydev) 278 - { 279 - /* At least these PHYs are known to have broken interrupt handling */ 280 - return phydev->drv->phy_id == PHY_ID_GPY215B || 281 - phydev->drv->phy_id == PHY_ID_GPY215C; 282 - } 283 - 284 277 static int gpy_probe(struct phy_device *phydev) 285 278 { 286 279 struct device *dev = &phydev->mdio.dev; ··· 293 300 phydev->priv = priv; 294 301 mutex_init(&priv->mbox_lock); 295 302 296 - if (gpy_has_broken_mdint(phydev) && 297 - !device_property_present(dev, "maxlinear,use-broken-interrupts")) 303 + if (!device_property_present(dev, "maxlinear,use-broken-interrupts")) 298 304 phydev->dev_flags |= PHY_F_NO_IRQ; 299 305 300 306 fw_version = phy_read(phydev, PHY_FWV); ··· 651 659 * frame. Therefore, polling is the best we can do and won't do any more 652 660 * harm. 653 661 * It was observed that this bug happens on link state and link speed 654 - * changes on a GPY215B and GYP215C independent of the firmware version 655 - * (which doesn't mean that this list is exhaustive). 662 + * changes independent of the firmware version. 656 663 */ 657 - if (gpy_has_broken_mdint(phydev) && 658 - (reg & (PHY_IMASK_LSTC | PHY_IMASK_LSPC))) { 664 + if (reg & (PHY_IMASK_LSTC | PHY_IMASK_LSPC)) { 659 665 reg = gpy_mbox_read(phydev, REG_GPIO0_OUT); 660 666 if (reg < 0) { 661 667 phy_error(phydev);
+1 -1
drivers/net/usb/qmi_wwan.c
··· 1325 1325 {QMI_FIXED_INTF(0x2001, 0x7e3d, 4)}, /* D-Link DWM-222 A2 */ 1326 1326 {QMI_FIXED_INTF(0x2020, 0x2031, 4)}, /* Olicard 600 */ 1327 1327 {QMI_FIXED_INTF(0x2020, 0x2033, 4)}, /* BroadMobi BM806U */ 1328 - {QMI_FIXED_INTF(0x2020, 0x2060, 4)}, /* BroadMobi BM818 */ 1328 + {QMI_QUIRK_SET_DTR(0x2020, 0x2060, 4)}, /* BroadMobi BM818 */ 1329 1329 {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ 1330 1330 {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ 1331 1331 {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */
-4
drivers/nfc/nfcsim.c
··· 336 336 static void nfcsim_debugfs_init(void) 337 337 { 338 338 nfcsim_debugfs_root = debugfs_create_dir("nfcsim", NULL); 339 - 340 - if (!nfcsim_debugfs_root) 341 - pr_err("Could not create debugfs entry\n"); 342 - 343 339 } 344 340 345 341 static void nfcsim_debugfs_remove(void)
+1 -1
drivers/nvme/host/constants.c
··· 21 21 [nvme_cmd_resv_release] = "Reservation Release", 22 22 [nvme_cmd_zone_mgmt_send] = "Zone Management Send", 23 23 [nvme_cmd_zone_mgmt_recv] = "Zone Management Receive", 24 - [nvme_cmd_zone_append] = "Zone Management Append", 24 + [nvme_cmd_zone_append] = "Zone Append", 25 25 }; 26 26 27 27 static const char * const nvme_admin_ops[] = {
+48 -4
drivers/nvme/host/core.c
··· 397 397 trace_nvme_complete_rq(req); 398 398 nvme_cleanup_cmd(req); 399 399 400 - if (ctrl->kas) 400 + /* 401 + * Completions of long-running commands should not be able to 402 + * defer sending of periodic keep alives, since the controller 403 + * may have completed processing such commands a long time ago 404 + * (arbitrarily close to command submission time). 405 + * req->deadline - req->timeout is the command submission time 406 + * in jiffies. 407 + */ 408 + if (ctrl->kas && 409 + req->deadline - req->timeout >= ctrl->ka_last_check_time) 401 410 ctrl->comp_seen = true; 402 411 403 412 switch (nvme_decide_disposition(req)) { ··· 1124 1115 } 1125 1116 EXPORT_SYMBOL_NS_GPL(nvme_passthru_start, NVME_TARGET_PASSTHRU); 1126 1117 1127 - void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects, 1118 + void nvme_passthru_end(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u32 effects, 1128 1119 struct nvme_command *cmd, int status) 1129 1120 { 1130 1121 if (effects & NVME_CMD_EFFECTS_CSE_MASK) { ··· 1141 1132 nvme_queue_scan(ctrl); 1142 1133 flush_work(&ctrl->scan_work); 1143 1134 } 1135 + if (ns) 1136 + return; 1144 1137 1145 1138 switch (cmd->common.opcode) { 1146 1139 case nvme_admin_set_features: ··· 1172 1161 * The host should send Keep Alive commands at half of the Keep Alive Timeout 1173 1162 * accounting for transport roundtrip times [..]. 1174 1163 */ 1164 + static unsigned long nvme_keep_alive_work_period(struct nvme_ctrl *ctrl) 1165 + { 1166 + unsigned long delay = ctrl->kato * HZ / 2; 1167 + 1168 + /* 1169 + * When using Traffic Based Keep Alive, we need to run 1170 + * nvme_keep_alive_work at twice the normal frequency, as one 1171 + * command completion can postpone sending a keep alive command 1172 + * by up to twice the delay between runs. 1173 + */ 1174 + if (ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) 1175 + delay /= 2; 1176 + return delay; 1177 + } 1178 + 1175 1179 static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl) 1176 1180 { 1177 - queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ / 2); 1181 + queue_delayed_work(nvme_wq, &ctrl->ka_work, 1182 + nvme_keep_alive_work_period(ctrl)); 1178 1183 } 1179 1184 1180 1185 static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq, ··· 1199 1172 struct nvme_ctrl *ctrl = rq->end_io_data; 1200 1173 unsigned long flags; 1201 1174 bool startka = false; 1175 + unsigned long rtt = jiffies - (rq->deadline - rq->timeout); 1176 + unsigned long delay = nvme_keep_alive_work_period(ctrl); 1177 + 1178 + /* 1179 + * Subtract off the keepalive RTT so nvme_keep_alive_work runs 1180 + * at the desired frequency. 
1181 + */ 1182 + if (rtt <= delay) { 1183 + delay -= rtt; 1184 + } else { 1185 + dev_warn(ctrl->device, "long keepalive RTT (%u ms)\n", 1186 + jiffies_to_msecs(rtt)); 1187 + delay = 0; 1188 + } 1202 1189 1203 1190 blk_mq_free_request(rq); 1204 1191 ··· 1223 1182 return RQ_END_IO_NONE; 1224 1183 } 1225 1184 1185 + ctrl->ka_last_check_time = jiffies; 1226 1186 ctrl->comp_seen = false; 1227 1187 spin_lock_irqsave(&ctrl->lock, flags); 1228 1188 if (ctrl->state == NVME_CTRL_LIVE || ··· 1231 1189 startka = true; 1232 1190 spin_unlock_irqrestore(&ctrl->lock, flags); 1233 1191 if (startka) 1234 - nvme_queue_keep_alive_work(ctrl); 1192 + queue_delayed_work(nvme_wq, &ctrl->ka_work, delay); 1235 1193 return RQ_END_IO_NONE; 1236 1194 } 1237 1195 ··· 1241 1199 struct nvme_ctrl, ka_work); 1242 1200 bool comp_seen = ctrl->comp_seen; 1243 1201 struct request *rq; 1202 + 1203 + ctrl->ka_last_check_time = jiffies; 1244 1204 1245 1205 if ((ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) && comp_seen) { 1246 1206 dev_dbg(ctrl->device,
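Two pieces of arithmetic drive the new keep-alive scheduling: the work period is kato*HZ/2 and is halved again when the controller uses Traffic Based Keep Alive (NVME_CTRL_ATTR_TBKAS), and the round-trip time of the previous keep-alive is subtracted from the next delay (clamped at zero) so the effective frequency stays on target. A compact sketch of that calculation in plain C, using seconds instead of jiffies and made-up values:

#include <stdio.h>

static unsigned int ka_work_period(unsigned int kato, int tbkas)
{
    unsigned int delay = kato / 2;

    if (tbkas)
        delay /= 2;     /* run twice as often under TBKAS */
    return delay;
}

static unsigned int next_delay(unsigned int period, unsigned int rtt)
{
    /* Subtract the previous keep-alive round trip, but never go negative. */
    return rtt <= period ? period - rtt : 0;
}

int main(void)
{
    unsigned int kato = 10;     /* hypothetical 10 second keep-alive timeout */

    printf("period, plain: %u s\n", ka_work_period(kato, 0));                 /* 5 */
    printf("period, TBKAS: %u s\n", ka_work_period(kato, 1));                 /* 2 */
    printf("next delay:    %u s\n", next_delay(ka_work_period(kato, 1), 1));  /* 1 */
    return 0;
}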
+1 -1
drivers/nvme/host/ioctl.c
··· 254 254 blk_mq_free_request(req); 255 255 256 256 if (effects) 257 - nvme_passthru_end(ctrl, effects, cmd, ret); 257 + nvme_passthru_end(ctrl, ns, effects, cmd, ret); 258 258 259 259 return ret; 260 260 }
+2 -1
drivers/nvme/host/nvme.h
··· 328 328 struct delayed_work ka_work; 329 329 struct delayed_work failfast_work; 330 330 struct nvme_command ka_cmd; 331 + unsigned long ka_last_check_time; 331 332 struct work_struct fw_act_work; 332 333 unsigned long events; 333 334 ··· 1078 1077 u8 opcode); 1079 1078 u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u8 opcode); 1080 1079 int nvme_execute_rq(struct request *rq, bool at_head); 1081 - void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects, 1080 + void nvme_passthru_end(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u32 effects, 1082 1081 struct nvme_command *cmd, int status); 1083 1082 struct nvme_ctrl *nvme_ctrl_from_file(struct file *file); 1084 1083 struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid);
+1 -1
drivers/nvme/target/passthru.c
··· 243 243 blk_mq_free_request(rq); 244 244 245 245 if (effects) 246 - nvme_passthru_end(ctrl, effects, req->cmd, status); 246 + nvme_passthru_end(ctrl, ns, effects, req->cmd, status); 247 247 } 248 248 249 249 static enum rq_end_io_ret nvmet_passthru_req_done(struct request *rq,
+1 -1
drivers/phy/amlogic/phy-meson-g12a-mipi-dphy-analog.c
··· 70 70 HHI_MIPI_CNTL1_BANDGAP); 71 71 72 72 regmap_write(priv->regmap, HHI_MIPI_CNTL2, 73 - FIELD_PREP(HHI_MIPI_CNTL2_DIF_TX_CTL0, 0x459) | 73 + FIELD_PREP(HHI_MIPI_CNTL2_DIF_TX_CTL0, 0x45a) | 74 74 FIELD_PREP(HHI_MIPI_CNTL2_DIF_TX_CTL1, 0x2680)); 75 75 76 76 reg = DSI_LANE_CLK;
+5 -5
drivers/phy/mediatek/phy-mtk-hdmi-mt8195.c
··· 237 237 */ 238 238 if (tmds_clk < 54 * MEGA) 239 239 txposdiv = 8; 240 - else if (tmds_clk >= 54 * MEGA && tmds_clk < 148.35 * MEGA) 240 + else if (tmds_clk >= 54 * MEGA && (tmds_clk * 100) < 14835 * MEGA) 241 241 txposdiv = 4; 242 - else if (tmds_clk >= 148.35 * MEGA && tmds_clk < 296.7 * MEGA) 242 + else if ((tmds_clk * 100) >= 14835 * MEGA && (tmds_clk * 10) < 2967 * MEGA) 243 243 txposdiv = 2; 244 - else if (tmds_clk >= 296.7 * MEGA && tmds_clk <= 594 * MEGA) 244 + else if ((tmds_clk * 10) >= 2967 * MEGA && tmds_clk <= 594 * MEGA) 245 245 txposdiv = 1; 246 246 else 247 247 return -EINVAL; ··· 324 324 clk_channel_bias = 0x34; /* 20mA */ 325 325 impedance_en = 0xf; 326 326 impedance = 0x36; /* 100ohm */ 327 - } else if (pixel_clk >= 74.175 * MEGA && pixel_clk <= 300 * MEGA) { 327 + } else if (((u64)pixel_clk * 1000) >= 74175 * MEGA && pixel_clk <= 300 * MEGA) { 328 328 data_channel_bias = 0x34; /* 20mA */ 329 329 clk_channel_bias = 0x2c; /* 16mA */ 330 330 impedance_en = 0xf; 331 331 impedance = 0x36; /* 100ohm */ 332 - } else if (pixel_clk >= 27 * MEGA && pixel_clk < 74.175 * MEGA) { 332 + } else if (pixel_clk >= 27 * MEGA && ((u64)pixel_clk * 1000) < 74175 * MEGA) { 333 333 data_channel_bias = 0x14; /* 10mA */ 334 334 clk_channel_bias = 0x14; /* 10mA */ 335 335 impedance_en = 0x0;
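The PHY hunk removes floating-point constants from kernel code by scaling both sides of each comparison: tmds_clk < 148.35 MHz becomes tmds_clk * 100 < 14835 * MEGA, and the pixel-clock checks cast to u64 before multiplying by 1000 so the scaled value cannot overflow 32 bits. A userspace sketch of the same rewrite (MEGA as 10^6; the sample clock values are arbitrary):

#include <stdint.h>
#include <stdio.h>

#define MEGA 1000000ULL

int main(void)
{
    uint32_t tmds_clk = 148500000;  /* 148.5 MHz, just above 148.35 MHz */
    uint32_t pixel_clk = 74250000;  /* 74.25 MHz, just above 74.175 MHz */

    /* 148.35 * MEGA without floats: scale both sides by 100. */
    int below_14835 = (uint64_t)tmds_clk * 100 < 14835 * MEGA;

    /* 74.175 * MEGA without floats: scale both sides by 1000. */
    int above_74175 = (uint64_t)pixel_clk * 1000 >= 74175 * MEGA;

    printf("tmds below 148.35 MHz: %d\n", below_14835);        /* 0 */
    printf("pixel at/above 74.175 MHz: %d\n", above_74175);    /* 1 */
    return 0;
}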
+3 -2
drivers/phy/qualcomm/phy-qcom-qmp-combo.c
··· 2472 2472 ret = regulator_bulk_enable(cfg->num_vregs, qmp->vregs); 2473 2473 if (ret) { 2474 2474 dev_err(qmp->dev, "failed to enable regulators, err=%d\n", ret); 2475 - goto err_unlock; 2475 + goto err_decrement_count; 2476 2476 } 2477 2477 2478 2478 ret = reset_control_bulk_assert(cfg->num_resets, qmp->resets); ··· 2522 2522 reset_control_bulk_assert(cfg->num_resets, qmp->resets); 2523 2523 err_disable_regulators: 2524 2524 regulator_bulk_disable(cfg->num_vregs, qmp->vregs); 2525 - err_unlock: 2525 + err_decrement_count: 2526 + qmp->init_count--; 2526 2527 mutex_unlock(&qmp->phy_mutex); 2527 2528 2528 2529 return ret;
+3 -2
drivers/phy/qualcomm/phy-qcom-qmp-pcie-msm8996.c
··· 379 379 ret = regulator_bulk_enable(cfg->num_vregs, qmp->vregs); 380 380 if (ret) { 381 381 dev_err(qmp->dev, "failed to enable regulators, err=%d\n", ret); 382 - goto err_unlock; 382 + goto err_decrement_count; 383 383 } 384 384 385 385 ret = reset_control_bulk_assert(cfg->num_resets, qmp->resets); ··· 409 409 reset_control_bulk_assert(cfg->num_resets, qmp->resets); 410 410 err_disable_regulators: 411 411 regulator_bulk_disable(cfg->num_vregs, qmp->vregs); 412 - err_unlock: 412 + err_decrement_count: 413 + qmp->init_count--; 413 414 mutex_unlock(&qmp->phy_mutex); 414 415 415 416 return ret;
+1 -1
drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
··· 115 115 * 116 116 * @cfg_ahb_clk: AHB2PHY interface clock 117 117 * @ref_clk: phy reference clock 118 - * @iface_clk: phy interface clock 119 118 * @phy_reset: phy reset control 120 119 * @vregs: regulator supplies bulk data 121 120 * @phy_initialized: if PHY has been initialized correctly 122 121 * @mode: contains the current mode the PHY is in 122 + * @update_seq_cfg: tuning parameters for phy init 123 123 */ 124 124 struct qcom_snps_hsphy { 125 125 struct phy *phy;
+1
drivers/scsi/qla2xxx/qla_def.h
··· 3796 3796 uint64_t retry_term_jiff; 3797 3797 struct qla_tgt_counters tgt_counters; 3798 3798 uint16_t cpuid; 3799 + bool cpu_mapped; 3799 3800 struct qla_fw_resources fwres ____cacheline_aligned; 3800 3801 struct qla_buf_pool buf_pool; 3801 3802 u32 cmd_cnt;
+3
drivers/scsi/qla2xxx/qla_init.c
··· 9426 9426 qpair->rsp->req = qpair->req; 9427 9427 qpair->rsp->qpair = qpair; 9428 9428 9429 + if (!qpair->cpu_mapped) 9430 + qla_cpu_update(qpair, raw_smp_processor_id()); 9431 + 9429 9432 if (IS_T10_PI_CAPABLE(ha) && ql2xenabledif) { 9430 9433 if (ha->fw_attributes & BIT_4) 9431 9434 qpair->difdix_supported = 1;
+3
drivers/scsi/qla2xxx/qla_inline.h
··· 539 539 if (!ha->qp_cpu_map) 540 540 return; 541 541 mask = pci_irq_get_affinity(ha->pdev, msix->vector_base0); 542 + if (!mask) 543 + return; 542 544 qpair->cpuid = cpumask_first(mask); 543 545 for_each_cpu(cpu, mask) { 544 546 ha->qp_cpu_map[cpu] = qpair; 545 547 } 546 548 msix->cpuid = qpair->cpuid; 549 + qpair->cpu_mapped = true; 547 550 } 548 551 549 552 static inline void
+3
drivers/scsi/qla2xxx/qla_isr.c
··· 3770 3770 3771 3771 if (rsp->qpair->cpuid != smp_processor_id() || !rsp->qpair->rcv_intr) { 3772 3772 rsp->qpair->rcv_intr = 1; 3773 + 3774 + if (!rsp->qpair->cpu_mapped) 3775 + qla_cpu_update(rsp->qpair, raw_smp_processor_id()); 3773 3776 } 3774 3777 3775 3778 #define __update_rsp_in(_is_shadow_hba, _rsp, _rsp_in) \
+4
drivers/scsi/stex.c
··· 109 109 TASK_ATTRIBUTE_HEADOFQUEUE = 0x1, 110 110 TASK_ATTRIBUTE_ORDERED = 0x2, 111 111 TASK_ATTRIBUTE_ACA = 0x4, 112 + }; 112 113 114 + enum { 113 115 SS_STS_NORMAL = 0x80000000, 114 116 SS_STS_DONE = 0x40000000, 115 117 SS_STS_HANDSHAKE = 0x20000000, ··· 123 121 SS_I2H_REQUEST_RESET = 0x2000, 124 122 125 123 SS_MU_OPERATIONAL = 0x80000000, 124 + }; 126 125 126 + enum { 127 127 STEX_CDB_LENGTH = 16, 128 128 STATUS_VAR_LEN = 128, 129 129
+2 -2
drivers/soc/fsl/qe/Kconfig
··· 36 36 config CPM_TSA 37 37 tristate "CPM TSA support" 38 38 depends on OF && HAS_IOMEM 39 - depends on CPM1 || COMPILE_TEST 39 + depends on CPM1 || (CPM && COMPILE_TEST) 40 40 help 41 41 Freescale CPM Time Slot Assigner (TSA) 42 42 controller. ··· 47 47 config CPM_QMC 48 48 tristate "CPM QMC support" 49 49 depends on OF && HAS_IOMEM 50 - depends on CPM1 || (FSL_SOC && COMPILE_TEST) 50 + depends on CPM1 || (FSL_SOC && CPM && COMPILE_TEST) 51 51 depends on CPM_TSA 52 52 help 53 53 Freescale CPM QUICC Multichannel Controller
+2 -2
drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
··· 373 373 static int ov2680_detect(struct i2c_client *client) 374 374 { 375 375 struct i2c_adapter *adapter = client->adapter; 376 - u32 high, low; 376 + u32 high = 0, low = 0; 377 377 int ret; 378 378 u16 id; 379 379 u8 revision; ··· 383 383 384 384 ret = ov_read_reg8(client, OV2680_SC_CMMN_CHIP_ID_H, &high); 385 385 if (ret) { 386 - dev_err(&client->dev, "sensor_id_high = 0x%x\n", high); 386 + dev_err(&client->dev, "sensor_id_high read failed (%d)\n", ret); 387 387 return -ENODEV; 388 388 } 389 389 ret = ov_read_reg8(client, OV2680_SC_CMMN_CHIP_ID_L, &low);
+1 -1
drivers/staging/media/imx/imx8mq-mipi-csi2.c
··· 354 354 struct v4l2_subdev_state *sd_state) 355 355 { 356 356 int ret; 357 - u32 hs_settle; 357 + u32 hs_settle = 0; 358 358 359 359 ret = imx8mq_mipi_csi_sw_reset(state); 360 360 if (ret)
-2
drivers/target/iscsi/iscsi_target.c
··· 364 364 init_completion(&np->np_restart_comp); 365 365 INIT_LIST_HEAD(&np->np_list); 366 366 367 - timer_setup(&np->np_login_timer, iscsi_handle_login_thread_timeout, 0); 368 - 369 367 ret = iscsi_target_setup_login_socket(np, sockaddr); 370 368 if (ret != 0) { 371 369 kfree(np);
+5 -58
drivers/target/iscsi/iscsi_target_login.c
··· 811 811 iscsit_dec_conn_usage_count(conn); 812 812 } 813 813 814 - void iscsi_handle_login_thread_timeout(struct timer_list *t) 815 - { 816 - struct iscsi_np *np = from_timer(np, t, np_login_timer); 817 - 818 - spin_lock_bh(&np->np_thread_lock); 819 - pr_err("iSCSI Login timeout on Network Portal %pISpc\n", 820 - &np->np_sockaddr); 821 - 822 - if (np->np_login_timer_flags & ISCSI_TF_STOP) { 823 - spin_unlock_bh(&np->np_thread_lock); 824 - return; 825 - } 826 - 827 - if (np->np_thread) 828 - send_sig(SIGINT, np->np_thread, 1); 829 - 830 - np->np_login_timer_flags &= ~ISCSI_TF_RUNNING; 831 - spin_unlock_bh(&np->np_thread_lock); 832 - } 833 - 834 - static void iscsi_start_login_thread_timer(struct iscsi_np *np) 835 - { 836 - /* 837 - * This used the TA_LOGIN_TIMEOUT constant because at this 838 - * point we do not have access to ISCSI_TPG_ATTRIB(tpg)->login_timeout 839 - */ 840 - spin_lock_bh(&np->np_thread_lock); 841 - np->np_login_timer_flags &= ~ISCSI_TF_STOP; 842 - np->np_login_timer_flags |= ISCSI_TF_RUNNING; 843 - mod_timer(&np->np_login_timer, jiffies + TA_LOGIN_TIMEOUT * HZ); 844 - 845 - pr_debug("Added timeout timer to iSCSI login request for" 846 - " %u seconds.\n", TA_LOGIN_TIMEOUT); 847 - spin_unlock_bh(&np->np_thread_lock); 848 - } 849 - 850 - static void iscsi_stop_login_thread_timer(struct iscsi_np *np) 851 - { 852 - spin_lock_bh(&np->np_thread_lock); 853 - if (!(np->np_login_timer_flags & ISCSI_TF_RUNNING)) { 854 - spin_unlock_bh(&np->np_thread_lock); 855 - return; 856 - } 857 - np->np_login_timer_flags |= ISCSI_TF_STOP; 858 - spin_unlock_bh(&np->np_thread_lock); 859 - 860 - del_timer_sync(&np->np_login_timer); 861 - 862 - spin_lock_bh(&np->np_thread_lock); 863 - np->np_login_timer_flags &= ~ISCSI_TF_RUNNING; 864 - spin_unlock_bh(&np->np_thread_lock); 865 - } 866 - 867 814 int iscsit_setup_np( 868 815 struct iscsi_np *np, 869 816 struct sockaddr_storage *sockaddr) ··· 1070 1123 spin_lock_init(&conn->nopin_timer_lock); 1071 1124 spin_lock_init(&conn->response_queue_lock); 1072 1125 spin_lock_init(&conn->state_lock); 1126 + spin_lock_init(&conn->login_worker_lock); 1127 + spin_lock_init(&conn->login_timer_lock); 1073 1128 1074 1129 timer_setup(&conn->nopin_response_timer, 1075 1130 iscsit_handle_nopin_response_timeout, 0); 1076 1131 timer_setup(&conn->nopin_timer, iscsit_handle_nopin_timeout, 0); 1132 + timer_setup(&conn->login_timer, iscsit_login_timeout, 0); 1077 1133 1078 1134 if (iscsit_conn_set_transport(conn, np->np_transport) < 0) 1079 1135 goto free_conn; ··· 1254 1304 goto new_sess_out; 1255 1305 } 1256 1306 1257 - iscsi_start_login_thread_timer(np); 1307 + iscsit_start_login_timer(conn, current); 1258 1308 1259 1309 pr_debug("Moving to TARG_CONN_STATE_XPT_UP.\n"); 1260 1310 conn->conn_state = TARG_CONN_STATE_XPT_UP; ··· 1367 1417 if (ret < 0) 1368 1418 goto new_sess_out; 1369 1419 1370 - iscsi_stop_login_thread_timer(np); 1371 - 1372 1420 if (ret == 1) { 1373 1421 tpg_np = conn->tpg_np; 1374 1422 ··· 1382 1434 new_sess_out: 1383 1435 new_sess = true; 1384 1436 old_sess_out: 1385 - iscsi_stop_login_thread_timer(np); 1437 + iscsit_stop_login_timer(conn); 1386 1438 tpg_np = conn->tpg_np; 1387 1439 iscsi_target_login_sess_out(conn, zero_tsih, new_sess); 1388 1440 new_sess = false; ··· 1396 1448 return 1; 1397 1449 1398 1450 exit: 1399 - iscsi_stop_login_thread_timer(np); 1400 1451 spin_lock_bh(&np->np_thread_lock); 1401 1452 np->np_thread_state = ISCSI_NP_THREAD_EXIT; 1402 1453 spin_unlock_bh(&np->np_thread_lock);
+42 -32
drivers/target/iscsi/iscsi_target_nego.c
··· 535 535 iscsi_target_login_sess_out(conn, zero_tsih, true); 536 536 } 537 537 538 - struct conn_timeout { 539 - struct timer_list timer; 540 - struct iscsit_conn *conn; 541 - }; 542 - 543 - static void iscsi_target_login_timeout(struct timer_list *t) 544 - { 545 - struct conn_timeout *timeout = from_timer(timeout, t, timer); 546 - struct iscsit_conn *conn = timeout->conn; 547 - 548 - pr_debug("Entering iscsi_target_login_timeout >>>>>>>>>>>>>>>>>>>\n"); 549 - 550 - if (conn->login_kworker) { 551 - pr_debug("Sending SIGINT to conn->login_kworker %s/%d\n", 552 - conn->login_kworker->comm, conn->login_kworker->pid); 553 - send_sig(SIGINT, conn->login_kworker, 1); 554 - } 555 - } 556 - 557 538 static void iscsi_target_do_login_rx(struct work_struct *work) 558 539 { 559 540 struct iscsit_conn *conn = container_of(work, ··· 543 562 struct iscsi_np *np = login->np; 544 563 struct iscsi_portal_group *tpg = conn->tpg; 545 564 struct iscsi_tpg_np *tpg_np = conn->tpg_np; 546 - struct conn_timeout timeout; 547 565 int rc, zero_tsih = login->zero_tsih; 548 566 bool state; 549 567 550 568 pr_debug("entering iscsi_target_do_login_rx, conn: %p, %s:%d\n", 551 569 conn, current->comm, current->pid); 570 + 571 + spin_lock(&conn->login_worker_lock); 572 + set_bit(LOGIN_FLAGS_WORKER_RUNNING, &conn->login_flags); 573 + spin_unlock(&conn->login_worker_lock); 552 574 /* 553 575 * If iscsi_target_do_login_rx() has been invoked by ->sk_data_ready() 554 576 * before initial PDU processing in iscsi_target_start_negotiation() ··· 581 597 goto err; 582 598 } 583 599 584 - conn->login_kworker = current; 585 600 allow_signal(SIGINT); 586 - 587 - timeout.conn = conn; 588 - timer_setup_on_stack(&timeout.timer, iscsi_target_login_timeout, 0); 589 - mod_timer(&timeout.timer, jiffies + TA_LOGIN_TIMEOUT * HZ); 590 - pr_debug("Starting login timer for %s/%d\n", current->comm, current->pid); 601 + rc = iscsit_set_login_timer_kworker(conn, current); 602 + if (rc < 0) { 603 + /* The login timer has already expired */ 604 + pr_debug("iscsi_target_do_login_rx, login failed\n"); 605 + goto err; 606 + } 591 607 592 608 rc = conn->conn_transport->iscsit_get_login_rx(conn, login); 593 - del_timer_sync(&timeout.timer); 594 - destroy_timer_on_stack(&timeout.timer); 595 609 flush_signals(current); 596 - conn->login_kworker = NULL; 597 610 598 611 if (rc < 0) 599 612 goto err; ··· 627 646 if (iscsi_target_sk_check_and_clear(conn, 628 647 LOGIN_FLAGS_WRITE_ACTIVE)) 629 648 goto err; 649 + 650 + /* 651 + * Set the login timer thread pointer to NULL to prevent the 652 + * login process from getting stuck if the initiator 653 + * stops sending data. 
654 + */ 655 + rc = iscsit_set_login_timer_kworker(conn, NULL); 656 + if (rc < 0) 657 + goto err; 630 658 } else if (rc == 1) { 659 + iscsit_stop_login_timer(conn); 631 660 cancel_delayed_work(&conn->login_work); 632 661 iscsi_target_nego_release(conn); 633 662 iscsi_post_login_handler(np, conn, zero_tsih); ··· 647 656 648 657 err: 649 658 iscsi_target_restore_sock_callbacks(conn); 659 + iscsit_stop_login_timer(conn); 650 660 cancel_delayed_work(&conn->login_work); 651 661 iscsi_target_login_drop(conn, login); 652 662 iscsit_deaccess_np(np, tpg, tpg_np); ··· 1122 1130 iscsi_target_set_sock_callbacks(conn); 1123 1131 1124 1132 login->np = np; 1133 + conn->tpg = NULL; 1125 1134 1126 1135 login_req = (struct iscsi_login_req *) login->req; 1127 1136 payload_length = ntoh24(login_req->dlength); ··· 1190 1197 */ 1191 1198 sessiontype = strncmp(s_buf, DISCOVERY, 9); 1192 1199 if (!sessiontype) { 1193 - conn->tpg = iscsit_global->discovery_tpg; 1194 1200 if (!login->leading_connection) 1195 1201 goto get_target; 1196 1202 ··· 1206 1214 * Serialize access across the discovery struct iscsi_portal_group to 1207 1215 * process login attempt. 1208 1216 */ 1217 + conn->tpg = iscsit_global->discovery_tpg; 1209 1218 if (iscsit_access_np(np, conn->tpg) < 0) { 1210 1219 iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, 1211 1220 ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE); 1221 + conn->tpg = NULL; 1212 1222 ret = -1; 1213 1223 goto out; 1214 1224 } ··· 1362 1368 * and perform connection cleanup now. 1363 1369 */ 1364 1370 ret = iscsi_target_do_login(conn, login); 1365 - if (!ret && iscsi_target_sk_check_and_clear(conn, LOGIN_FLAGS_INITIAL_PDU)) 1366 - ret = -1; 1371 + if (!ret) { 1372 + spin_lock(&conn->login_worker_lock); 1373 + 1374 + if (iscsi_target_sk_check_and_clear(conn, LOGIN_FLAGS_INITIAL_PDU)) 1375 + ret = -1; 1376 + else if (!test_bit(LOGIN_FLAGS_WORKER_RUNNING, &conn->login_flags)) { 1377 + if (iscsit_set_login_timer_kworker(conn, NULL) < 0) { 1378 + /* 1379 + * The timeout has expired already. 1380 + * Schedule login_work to perform the cleanup. 1381 + */ 1382 + schedule_delayed_work(&conn->login_work, 0); 1383 + } 1384 + } 1385 + 1386 + spin_unlock(&conn->login_worker_lock); 1387 + } 1367 1388 1368 1389 if (ret < 0) { 1369 1390 iscsi_target_restore_sock_callbacks(conn); 1370 1391 iscsi_remove_failed_auth_entry(conn); 1371 1392 } 1372 1393 if (ret != 0) { 1394 + iscsit_stop_login_timer(conn); 1373 1395 cancel_delayed_work_sync(&conn->login_work); 1374 1396 iscsi_target_nego_release(conn); 1375 1397 }
+51
drivers/target/iscsi/iscsi_target_util.c
··· 1040 1040 spin_unlock_bh(&conn->nopin_timer_lock); 1041 1041 } 1042 1042 1043 + void iscsit_login_timeout(struct timer_list *t) 1044 + { 1045 + struct iscsit_conn *conn = from_timer(conn, t, login_timer); 1046 + struct iscsi_login *login = conn->login; 1047 + 1048 + pr_debug("Entering iscsi_target_login_timeout >>>>>>>>>>>>>>>>>>>\n"); 1049 + 1050 + spin_lock_bh(&conn->login_timer_lock); 1051 + login->login_failed = 1; 1052 + 1053 + if (conn->login_kworker) { 1054 + pr_debug("Sending SIGINT to conn->login_kworker %s/%d\n", 1055 + conn->login_kworker->comm, conn->login_kworker->pid); 1056 + send_sig(SIGINT, conn->login_kworker, 1); 1057 + } else { 1058 + schedule_delayed_work(&conn->login_work, 0); 1059 + } 1060 + spin_unlock_bh(&conn->login_timer_lock); 1061 + } 1062 + 1063 + void iscsit_start_login_timer(struct iscsit_conn *conn, struct task_struct *kthr) 1064 + { 1065 + pr_debug("Login timer started\n"); 1066 + 1067 + conn->login_kworker = kthr; 1068 + mod_timer(&conn->login_timer, jiffies + TA_LOGIN_TIMEOUT * HZ); 1069 + } 1070 + 1071 + int iscsit_set_login_timer_kworker(struct iscsit_conn *conn, struct task_struct *kthr) 1072 + { 1073 + struct iscsi_login *login = conn->login; 1074 + int ret = 0; 1075 + 1076 + spin_lock_bh(&conn->login_timer_lock); 1077 + if (login->login_failed) { 1078 + /* The timer has already expired */ 1079 + ret = -1; 1080 + } else { 1081 + conn->login_kworker = kthr; 1082 + } 1083 + spin_unlock_bh(&conn->login_timer_lock); 1084 + 1085 + return ret; 1086 + } 1087 + 1088 + void iscsit_stop_login_timer(struct iscsit_conn *conn) 1089 + { 1090 + pr_debug("Login timer stopped\n"); 1091 + timer_delete_sync(&conn->login_timer); 1092 + } 1093 + 1043 1094 int iscsit_send_tx_data( 1044 1095 struct iscsit_cmd *cmd, 1045 1096 struct iscsit_conn *conn,
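The reworked login timeout keeps one timer per connection: when it fires it marks the login as failed under login_timer_lock and either signals the kworker currently blocked in the receive path or schedules login_work to clean up, and iscsit_set_login_timer_kworker() refuses the handoff (returns -1) if the timer has already expired. A small pthread sketch of that hand-off protocol, with invented names and a flag in place of a real timer:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t timer_lock = PTHREAD_MUTEX_INITIALIZER;
static int login_failed;            /* set by the "timer" */
static pthread_t *login_kworker;    /* worker currently owning the login */

/* Called when the login timeout expires. */
static void login_timeout(void)
{
    pthread_mutex_lock(&timer_lock);
    login_failed = 1;
    if (login_kworker)
        printf("timeout: signalling blocked worker\n");
    else
        printf("timeout: scheduling cleanup work\n");
    pthread_mutex_unlock(&timer_lock);
}

/* Returns -1 if the timeout already fired, 0 if the handoff succeeded. */
static int set_login_kworker(pthread_t *kthr)
{
    int ret = 0;

    pthread_mutex_lock(&timer_lock);
    if (login_failed)
        ret = -1;
    else
        login_kworker = kthr;
    pthread_mutex_unlock(&timer_lock);
    return ret;
}

int main(void)
{
    pthread_t self = pthread_self();

    printf("handoff before timeout: %d\n", set_login_kworker(&self));   /* 0 */
    login_timeout();
    printf("handoff after timeout:  %d\n", set_login_kworker(NULL));    /* -1 */
    return 0;
}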
+4
drivers/target/iscsi/iscsi_target_util.h
··· 56 56 extern void __iscsit_start_nopin_timer(struct iscsit_conn *); 57 57 extern void iscsit_start_nopin_timer(struct iscsit_conn *); 58 58 extern void iscsit_stop_nopin_timer(struct iscsit_conn *); 59 + extern void iscsit_login_timeout(struct timer_list *t); 60 + extern void iscsit_start_login_timer(struct iscsit_conn *, struct task_struct *kthr); 61 + extern void iscsit_stop_login_timer(struct iscsit_conn *); 62 + extern int iscsit_set_login_timer_kworker(struct iscsit_conn *, struct task_struct *kthr); 59 63 extern int iscsit_send_tx_data(struct iscsit_cmd *, struct iscsit_conn *, int); 60 64 extern int iscsit_fe_sendpage_sg(struct iscsit_cmd *, struct iscsit_conn *); 61 65 extern int iscsit_tx_login_rsp(struct iscsit_conn *, u8, u8);
+3 -1
drivers/tty/serial/8250/8250_tegra.c
··· 113 113 114 114 ret = serial8250_register_8250_port(&port8250); 115 115 if (ret < 0) 116 - goto err_clkdisable; 116 + goto err_ctrl_assert; 117 117 118 118 platform_set_drvdata(pdev, uart); 119 119 uart->line = ret; 120 120 121 121 return 0; 122 122 123 + err_ctrl_assert: 124 + reset_control_assert(uart->rst); 123 125 err_clkdisable: 124 126 clk_disable_unprepare(uart->clk); 125 127
+1 -1
drivers/tty/serial/Kconfig
··· 762 762 763 763 config SERIAL_CPM 764 764 tristate "CPM SCC/SMC serial port support" 765 - depends on CPM2 || CPM1 || (PPC32 && COMPILE_TEST) 765 + depends on CPM2 || CPM1 766 766 select SERIAL_CORE 767 767 help 768 768 This driver supports the SCC and SMC serial ports on Motorola
-2
drivers/tty/serial/cpm_uart/cpm_uart.h
··· 19 19 #include "cpm_uart_cpm2.h" 20 20 #elif defined(CONFIG_CPM1) 21 21 #include "cpm_uart_cpm1.h" 22 - #elif defined(CONFIG_COMPILE_TEST) 23 - #include "cpm_uart_cpm2.h" 24 22 #endif 25 23 26 24 #define SERIAL_CPM_MAJOR 204
+23 -21
drivers/tty/serial/fsl_lpuart.c
··· 1495 1495 1496 1496 static void lpuart32_break_ctl(struct uart_port *port, int break_state) 1497 1497 { 1498 - unsigned long temp, modem; 1499 - struct tty_struct *tty; 1500 - unsigned int cflag = 0; 1498 + unsigned long temp; 1501 1499 1502 - tty = tty_port_tty_get(&port->state->port); 1503 - if (tty) { 1504 - cflag = tty->termios.c_cflag; 1505 - tty_kref_put(tty); 1506 - } 1500 + temp = lpuart32_read(port, UARTCTRL); 1507 1501 1508 - temp = lpuart32_read(port, UARTCTRL) & ~UARTCTRL_SBK; 1509 - modem = lpuart32_read(port, UARTMODIR); 1510 - 1502 + /* 1503 + * LPUART IP now has two known bugs, one is CTS has higher priority than the 1504 + * break signal, which causes the break signal sending through UARTCTRL_SBK 1505 + * may impacted by the CTS input if the HW flow control is enabled. It 1506 + * exists on all platforms we support in this driver. 1507 + * Another bug is i.MX8QM LPUART may have an additional break character 1508 + * being sent after SBK was cleared. 1509 + * To avoid above two bugs, we use Transmit Data Inversion function to send 1510 + * the break signal instead of UARTCTRL_SBK. 1511 + */ 1511 1512 if (break_state != 0) { 1512 - temp |= UARTCTRL_SBK; 1513 1513 /* 1514 - * LPUART CTS has higher priority than SBK, need to disable CTS before 1515 - * asserting SBK to avoid any interference if flow control is enabled. 1514 + * Disable the transmitter to prevent any data from being sent out 1515 + * during break, then invert the TX line to send break. 1516 1516 */ 1517 - if (cflag & CRTSCTS && modem & UARTMODIR_TXCTSE) 1518 - lpuart32_write(port, modem & ~UARTMODIR_TXCTSE, UARTMODIR); 1517 + temp &= ~UARTCTRL_TE; 1518 + lpuart32_write(port, temp, UARTCTRL); 1519 + temp |= UARTCTRL_TXINV; 1520 + lpuart32_write(port, temp, UARTCTRL); 1519 1521 } else { 1520 - /* Re-enable the CTS when break off. */ 1521 - if (cflag & CRTSCTS && !(modem & UARTMODIR_TXCTSE)) 1522 - lpuart32_write(port, modem | UARTMODIR_TXCTSE, UARTMODIR); 1522 + /* Disable the TXINV to turn off break and re-enable transmitter. */ 1523 + temp &= ~UARTCTRL_TXINV; 1524 + lpuart32_write(port, temp, UARTCTRL); 1525 + temp |= UARTCTRL_TE; 1526 + lpuart32_write(port, temp, UARTCTRL); 1523 1527 } 1524 - 1525 - lpuart32_write(port, temp, UARTCTRL); 1526 1528 } 1527 1529 1528 1530 static void lpuart_setup_watermark(struct lpuart_port *sport)
+13
drivers/usb/cdns3/cdns3-gadget.c
··· 2097 2097 else 2098 2098 priv_ep->trb_burst_size = 16; 2099 2099 2100 + /* 2101 + * In versions preceding DEV_VER_V2, for example, iMX8QM, there exist bugs 2102 + * in the DMA. These bugs occur when the trb_burst_size exceeds 16 and the 2103 + * address is not aligned to 128 Bytes (which is a product of the 64-bit AXI 2104 + * and AXI maximum burst length of 16 or 0xF+1, dma_axi_ctrl0[3:0]). This 2105 + * results in data corruption when it crosses the 4K border. The corruption 2106 + * specifically occurs from the position (4K - (address & 0x7F)) to 4K. 2107 + * 2108 + * So force trb_burst_size to 16 on such platforms. 2109 + */ 2110 + if (priv_dev->dev_ver < DEV_VER_V2) 2111 + priv_ep->trb_burst_size = 16; 2112 + 2100 2113 mult = min_t(u8, mult, EP_CFG_MULT_MAX); 2101 2114 buffering = min_t(u8, buffering, EP_CFG_BUFFERING_MAX); 2102 2115 maxburst = min_t(u8, maxburst, EP_CFG_MAXBURST_MAX);
+41
drivers/usb/core/buffer.c
··· 172 172 } 173 173 dma_free_coherent(hcd->self.sysdev, size, addr, dma); 174 174 } 175 + 176 + void *hcd_buffer_alloc_pages(struct usb_hcd *hcd, 177 + size_t size, gfp_t mem_flags, dma_addr_t *dma) 178 + { 179 + if (size == 0) 180 + return NULL; 181 + 182 + if (hcd->localmem_pool) 183 + return gen_pool_dma_alloc_align(hcd->localmem_pool, 184 + size, dma, PAGE_SIZE); 185 + 186 + /* some USB hosts just use PIO */ 187 + if (!hcd_uses_dma(hcd)) { 188 + *dma = DMA_MAPPING_ERROR; 189 + return (void *)__get_free_pages(mem_flags, 190 + get_order(size)); 191 + } 192 + 193 + return dma_alloc_coherent(hcd->self.sysdev, 194 + size, dma, mem_flags); 195 + } 196 + 197 + void hcd_buffer_free_pages(struct usb_hcd *hcd, 198 + size_t size, void *addr, dma_addr_t dma) 199 + { 200 + if (!addr) 201 + return; 202 + 203 + if (hcd->localmem_pool) { 204 + gen_pool_free(hcd->localmem_pool, 205 + (unsigned long)addr, size); 206 + return; 207 + } 208 + 209 + if (!hcd_uses_dma(hcd)) { 210 + free_pages((unsigned long)addr, get_order(size)); 211 + return; 212 + } 213 + 214 + dma_free_coherent(hcd->self.sysdev, size, addr, dma); 215 + }
+14 -6
drivers/usb/core/devio.c
··· 186 186 static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count) 187 187 { 188 188 struct usb_dev_state *ps = usbm->ps; 189 + struct usb_hcd *hcd = bus_to_hcd(ps->dev->bus); 189 190 unsigned long flags; 190 191 191 192 spin_lock_irqsave(&ps->lock, flags); ··· 195 194 list_del(&usbm->memlist); 196 195 spin_unlock_irqrestore(&ps->lock, flags); 197 196 198 - usb_free_coherent(ps->dev, usbm->size, usbm->mem, 199 - usbm->dma_handle); 197 + hcd_buffer_free_pages(hcd, usbm->size, 198 + usbm->mem, usbm->dma_handle); 200 199 usbfs_decrease_memory_usage( 201 200 usbm->size + sizeof(struct usb_memory)); 202 201 kfree(usbm); ··· 235 234 size_t size = vma->vm_end - vma->vm_start; 236 235 void *mem; 237 236 unsigned long flags; 238 - dma_addr_t dma_handle; 237 + dma_addr_t dma_handle = DMA_MAPPING_ERROR; 239 238 int ret; 240 239 241 240 ret = usbfs_increase_memory_usage(size + sizeof(struct usb_memory)); ··· 248 247 goto error_decrease_mem; 249 248 } 250 249 251 - mem = usb_alloc_coherent(ps->dev, size, GFP_USER | __GFP_NOWARN, 252 - &dma_handle); 250 + mem = hcd_buffer_alloc_pages(hcd, 251 + size, GFP_USER | __GFP_NOWARN, &dma_handle); 253 252 if (!mem) { 254 253 ret = -ENOMEM; 255 254 goto error_free_usbm; ··· 265 264 usbm->vma_use_count = 1; 266 265 INIT_LIST_HEAD(&usbm->memlist); 267 266 268 - if (hcd->localmem_pool || !hcd_uses_dma(hcd)) { 267 + /* 268 + * In DMA-unavailable cases, hcd_buffer_alloc_pages allocates 269 + * normal pages and assigns DMA_MAPPING_ERROR to dma_handle. Check 270 + * whether we are in such cases, and then use remap_pfn_range (or 271 + * dma_mmap_coherent) to map normal (or DMA) pages into the user 272 + * space, respectively. 273 + */ 274 + if (dma_handle == DMA_MAPPING_ERROR) { 269 275 if (remap_pfn_range(vma, vma->vm_start, 270 276 virt_to_phys(usbm->mem) >> PAGE_SHIFT, 271 277 size, vma->vm_page_prot) < 0) {
+1 -1
drivers/usb/gadget/function/f_fs.c
··· 3535 3535 /* Drain any pending AIO completions */ 3536 3536 drain_workqueue(ffs->io_completion_wq); 3537 3537 3538 + ffs_event_add(ffs, FUNCTIONFS_UNBIND); 3538 3539 if (!--opts->refcnt) 3539 3540 functionfs_unbind(ffs); 3540 3541 ··· 3560 3559 func->function.ssp_descriptors = NULL; 3561 3560 func->interfaces_nums = NULL; 3562 3561 3563 - ffs_event_add(ffs, FUNCTIONFS_UNBIND); 3564 3562 } 3565 3563 3566 3564 static struct usb_function *ffs_alloc(struct usb_function_instance *fi)
+3
drivers/usb/gadget/udc/amd5536udc_pci.c
··· 170 170 retval = -ENODEV; 171 171 goto err_probe; 172 172 } 173 + 174 + udc = dev; 175 + 173 176 return 0; 174 177 175 178 err_probe:
+1 -1
drivers/usb/typec/tipd/core.c
··· 920 920 enable_irq(client->irq); 921 921 } 922 922 923 - if (client->irq) 923 + if (!client->irq) 924 924 queue_delayed_work(system_power_efficient_wq, &tps->wq_poll, 925 925 msecs_to_jiffies(POLL_INTERVAL)); 926 926
+5 -17
drivers/vhost/vhost.c
··· 256 256 * test_and_set_bit() implies a memory barrier. 257 257 */ 258 258 llist_add(&work->node, &dev->worker->work_list); 259 - wake_up_process(dev->worker->vtsk->task); 259 + vhost_task_wake(dev->worker->vtsk); 260 260 } 261 261 } 262 262 EXPORT_SYMBOL_GPL(vhost_work_queue); ··· 333 333 __vhost_vq_meta_reset(vq); 334 334 } 335 335 336 - static int vhost_worker(void *data) 336 + static bool vhost_worker(void *data) 337 337 { 338 338 struct vhost_worker *worker = data; 339 339 struct vhost_work *work, *work_next; 340 340 struct llist_node *node; 341 341 342 - for (;;) { 343 - /* mb paired w/ kthread_stop */ 344 - set_current_state(TASK_INTERRUPTIBLE); 345 - 346 - if (vhost_task_should_stop(worker->vtsk)) { 347 - __set_current_state(TASK_RUNNING); 348 - break; 349 - } 350 - 351 - node = llist_del_all(&worker->work_list); 352 - if (!node) 353 - schedule(); 354 - 342 + node = llist_del_all(&worker->work_list); 343 + if (node) { 355 344 node = llist_reverse_order(node); 356 345 /* make sure flag is seen after deletion */ 357 346 smp_wmb(); 358 347 llist_for_each_entry_safe(work, work_next, node, node) { 359 348 clear_bit(VHOST_WORK_QUEUED, &work->flags); 360 - __set_current_state(TASK_RUNNING); 361 349 kcov_remote_start_common(worker->kcov_handle); 362 350 work->fn(work); 363 351 kcov_remote_stop(); ··· 353 365 } 354 366 } 355 367 356 - return 0; 368 + return !!node; 357 369 } 358 370 359 371 static void vhost_vq_free_iovecs(struct vhost_virtqueue *vq)
+2 -3
drivers/video/fbdev/arcfb.c
··· 590 590 return retval; 591 591 } 592 592 593 - static int arcfb_remove(struct platform_device *dev) 593 + static void arcfb_remove(struct platform_device *dev) 594 594 { 595 595 struct fb_info *info = platform_get_drvdata(dev); 596 596 ··· 601 601 vfree((void __force *)info->screen_base); 602 602 framebuffer_release(info); 603 603 } 604 - return 0; 605 604 } 606 605 607 606 static struct platform_driver arcfb_driver = { 608 607 .probe = arcfb_probe, 609 - .remove = arcfb_remove, 608 + .remove_new = arcfb_remove, 610 609 .driver = { 611 610 .name = "arcfb", 612 611 },
+3 -8
drivers/video/fbdev/au1100fb.c
··· 520 520 return -ENODEV; 521 521 } 522 522 523 - int au1100fb_drv_remove(struct platform_device *dev) 523 + void au1100fb_drv_remove(struct platform_device *dev) 524 524 { 525 525 struct au1100fb_device *fbdev = NULL; 526 - 527 - if (!dev) 528 - return -ENODEV; 529 526 530 527 fbdev = platform_get_drvdata(dev); 531 528 ··· 540 543 clk_disable_unprepare(fbdev->lcdclk); 541 544 clk_put(fbdev->lcdclk); 542 545 } 543 - 544 - return 0; 545 546 } 546 547 547 548 #ifdef CONFIG_PM ··· 588 593 .name = "au1100-lcd", 589 594 }, 590 595 .probe = au1100fb_drv_probe, 591 - .remove = au1100fb_drv_remove, 596 + .remove_new = au1100fb_drv_remove, 592 597 .suspend = au1100fb_drv_suspend, 593 - .resume = au1100fb_drv_resume, 598 + .resume = au1100fb_drv_resume, 594 599 }; 595 600 module_platform_driver(au1100fb_driver); 596 601
+2 -4
drivers/video/fbdev/au1200fb.c
··· 1765 1765 return ret; 1766 1766 } 1767 1767 1768 - static int au1200fb_drv_remove(struct platform_device *dev) 1768 + static void au1200fb_drv_remove(struct platform_device *dev) 1769 1769 { 1770 1770 struct au1200fb_platdata *pd = platform_get_drvdata(dev); 1771 1771 struct fb_info *fbi; ··· 1788 1788 } 1789 1789 1790 1790 free_irq(platform_get_irq(dev, 0), (void *)dev); 1791 - 1792 - return 0; 1793 1791 } 1794 1792 1795 1793 #ifdef CONFIG_PM ··· 1838 1840 .pm = AU1200FB_PMOPS, 1839 1841 }, 1840 1842 .probe = au1200fb_drv_probe, 1841 - .remove = au1200fb_drv_remove, 1843 + .remove_new = au1200fb_drv_remove, 1842 1844 }; 1843 1845 module_platform_driver(au1200fb_driver); 1844 1846
+2 -3
drivers/video/fbdev/broadsheetfb.c
··· 1193 1193 1194 1194 } 1195 1195 1196 - static int broadsheetfb_remove(struct platform_device *dev) 1196 + static void broadsheetfb_remove(struct platform_device *dev) 1197 1197 { 1198 1198 struct fb_info *info = platform_get_drvdata(dev); 1199 1199 ··· 1209 1209 module_put(par->board->owner); 1210 1210 framebuffer_release(info); 1211 1211 } 1212 - return 0; 1213 1212 } 1214 1213 1215 1214 static struct platform_driver broadsheetfb_driver = { 1216 1215 .probe = broadsheetfb_probe, 1217 - .remove = broadsheetfb_remove, 1216 + .remove_new = broadsheetfb_remove, 1218 1217 .driver = { 1219 1218 .name = "broadsheetfb", 1220 1219 },
+2 -4
drivers/video/fbdev/bw2.c
··· 352 352 return err; 353 353 } 354 354 355 - static int bw2_remove(struct platform_device *op) 355 + static void bw2_remove(struct platform_device *op) 356 356 { 357 357 struct fb_info *info = dev_get_drvdata(&op->dev); 358 358 struct bw2_par *par = info->par; ··· 363 363 of_iounmap(&op->resource[0], info->screen_base, info->fix.smem_len); 364 364 365 365 framebuffer_release(info); 366 - 367 - return 0; 368 366 } 369 367 370 368 static const struct of_device_id bw2_match[] = { ··· 379 381 .of_match_table = bw2_match, 380 382 }, 381 383 .probe = bw2_probe, 382 - .remove = bw2_remove, 384 + .remove_new = bw2_remove, 383 385 }; 384 386 385 387 static int __init bw2_init(void)
+3
drivers/video/fbdev/core/bitblit.c
··· 247 247 248 248 cursor.set = 0; 249 249 250 + if (!vc->vc_font.data) 251 + return; 252 + 250 253 c = scr_readw((u16 *) vc->vc_pos); 251 254 attribute = get_attribute(info, c); 252 255 src = vc->vc_font.data + ((c & charmask) * (w * vc->vc_font.height));
+9 -3
drivers/video/fbdev/imsttfb.c
··· 1452 1452 FBINFO_HWACCEL_FILLRECT | 1453 1453 FBINFO_HWACCEL_YPAN; 1454 1454 1455 - fb_alloc_cmap(&info->cmap, 0, 0); 1455 + if (fb_alloc_cmap(&info->cmap, 0, 0)) { 1456 + framebuffer_release(info); 1457 + return -ENODEV; 1458 + } 1456 1459 1457 1460 if (register_framebuffer(info) < 0) { 1461 + fb_dealloc_cmap(&info->cmap); 1458 1462 framebuffer_release(info); 1459 1463 return -ENODEV; 1460 1464 } ··· 1535 1531 goto error; 1536 1532 info->pseudo_palette = par->palette; 1537 1533 ret = init_imstt(info); 1538 - if (!ret) 1539 - pci_set_drvdata(pdev, info); 1534 + if (ret) 1535 + goto error; 1536 + 1537 + pci_set_drvdata(pdev, info); 1540 1538 return ret; 1541 1539 1542 1540 error:
+1 -1
drivers/video/fbdev/matrox/matroxfb_maven.c
··· 1291 1291 .driver = { 1292 1292 .name = "maven", 1293 1293 }, 1294 - .probe_new = maven_probe, 1294 + .probe = maven_probe, 1295 1295 .remove = maven_remove, 1296 1296 .id_table = maven_id, 1297 1297 };
+1 -1
drivers/video/fbdev/ssd1307fb.c
··· 844 844 MODULE_DEVICE_TABLE(i2c, ssd1307fb_i2c_id); 845 845 846 846 static struct i2c_driver ssd1307fb_driver = { 847 - .probe_new = ssd1307fb_probe, 847 + .probe = ssd1307fb_probe, 848 848 .remove = ssd1307fb_remove, 849 849 .id_table = ssd1307fb_i2c_id, 850 850 .driver = {
+1 -5
fs/btrfs/bio.c
··· 330 330 if (bbio->inode && !(bbio->bio.bi_opf & REQ_META)) 331 331 btrfs_check_read_bio(bbio, bbio->bio.bi_private); 332 332 else 333 - bbio->end_io(bbio); 333 + btrfs_orig_bbio_end_io(bbio); 334 334 } 335 335 336 336 static void btrfs_simple_end_io(struct bio *bio) ··· 811 811 goto fail; 812 812 813 813 if (dev_replace) { 814 - if (btrfs_op(&bbio->bio) == BTRFS_MAP_WRITE && btrfs_is_zoned(fs_info)) { 815 - bbio->bio.bi_opf &= ~REQ_OP_WRITE; 816 - bbio->bio.bi_opf |= REQ_OP_ZONE_APPEND; 817 - } 818 814 ASSERT(smap.dev == fs_info->dev_replace.srcdev); 819 815 smap.dev = fs_info->dev_replace.tgtdev; 820 816 }
+1 -1
fs/btrfs/disk-io.c
··· 96 96 crypto_shash_update(shash, kaddr + BTRFS_CSUM_SIZE, 97 97 first_page_part - BTRFS_CSUM_SIZE); 98 98 99 - for (i = 1; i < num_pages; i++) { 99 + for (i = 1; i < num_pages && INLINE_EXTENT_BUFFER_PAGES > 1; i++) { 100 100 kaddr = page_address(buf->pages[i]); 101 101 crypto_shash_update(shash, kaddr, PAGE_SIZE); 102 102 }
+32 -16
fs/btrfs/scrub.c
··· 1137 1137 wake_up(&stripe->io_wait); 1138 1138 } 1139 1139 1140 + static void scrub_submit_write_bio(struct scrub_ctx *sctx, 1141 + struct scrub_stripe *stripe, 1142 + struct btrfs_bio *bbio, bool dev_replace) 1143 + { 1144 + struct btrfs_fs_info *fs_info = sctx->fs_info; 1145 + u32 bio_len = bbio->bio.bi_iter.bi_size; 1146 + u32 bio_off = (bbio->bio.bi_iter.bi_sector << SECTOR_SHIFT) - 1147 + stripe->logical; 1148 + 1149 + fill_writer_pointer_gap(sctx, stripe->physical + bio_off); 1150 + atomic_inc(&stripe->pending_io); 1151 + btrfs_submit_repair_write(bbio, stripe->mirror_num, dev_replace); 1152 + if (!btrfs_is_zoned(fs_info)) 1153 + return; 1154 + /* 1155 + * For zoned writeback, queue depth must be 1, thus we must wait for 1156 + * the write to finish before the next write. 1157 + */ 1158 + wait_scrub_stripe_io(stripe); 1159 + 1160 + /* 1161 + * And also need to update the write pointer if write finished 1162 + * successfully. 1163 + */ 1164 + if (!test_bit(bio_off >> fs_info->sectorsize_bits, 1165 + &stripe->write_error_bitmap)) 1166 + sctx->write_pointer += bio_len; 1167 + } 1168 + 1140 1169 /* 1141 1170 * Submit the write bio(s) for the sectors specified by @write_bitmap. 1142 1171 * ··· 1184 1155 { 1185 1156 struct btrfs_fs_info *fs_info = stripe->bg->fs_info; 1186 1157 struct btrfs_bio *bbio = NULL; 1187 - const bool zoned = btrfs_is_zoned(fs_info); 1188 1158 int sector_nr; 1189 1159 1190 1160 for_each_set_bit(sector_nr, &write_bitmap, stripe->nr_sectors) { ··· 1196 1168 1197 1169 /* Cannot merge with previous sector, submit the current one. */ 1198 1170 if (bbio && sector_nr && !test_bit(sector_nr - 1, &write_bitmap)) { 1199 - fill_writer_pointer_gap(sctx, stripe->physical + 1200 - (sector_nr << fs_info->sectorsize_bits)); 1201 - atomic_inc(&stripe->pending_io); 1202 - btrfs_submit_repair_write(bbio, stripe->mirror_num, dev_replace); 1203 - /* For zoned writeback, queue depth must be 1. */ 1204 - if (zoned) 1205 - wait_scrub_stripe_io(stripe); 1171 + scrub_submit_write_bio(sctx, stripe, bbio, dev_replace); 1206 1172 bbio = NULL; 1207 1173 } 1208 1174 if (!bbio) { ··· 1209 1187 ret = bio_add_page(&bbio->bio, page, fs_info->sectorsize, pgoff); 1210 1188 ASSERT(ret == fs_info->sectorsize); 1211 1189 } 1212 - if (bbio) { 1213 - fill_writer_pointer_gap(sctx, bbio->bio.bi_iter.bi_sector << 1214 - SECTOR_SHIFT); 1215 - atomic_inc(&stripe->pending_io); 1216 - btrfs_submit_repair_write(bbio, stripe->mirror_num, dev_replace); 1217 - if (zoned) 1218 - wait_scrub_stripe_io(stripe); 1219 - } 1190 + if (bbio) 1191 + scrub_submit_write_bio(sctx, stripe, bbio, dev_replace); 1220 1192 } 1221 1193 1222 1194 /*
+1 -1
fs/btrfs/tree-log.c
··· 6158 6158 { 6159 6159 struct btrfs_root *log = inode->root->log_root; 6160 6160 const struct btrfs_delayed_item *curr; 6161 - u64 last_range_start; 6161 + u64 last_range_start = 0; 6162 6162 u64 last_range_end = 0; 6163 6163 struct btrfs_key key; 6164 6164
+3 -1
fs/coredump.c
··· 371 371 if (t != current && !(t->flags & PF_POSTCOREDUMP)) { 372 372 sigaddset(&t->pending.signal, SIGKILL); 373 373 signal_wake_up(t, 1); 374 - nr++; 374 + /* The vhost_worker does not participate in coredumps */ 375 + if ((t->flags & (PF_USER_WORKER | PF_IO_WORKER)) != PF_USER_WORKER) 376 + nr++; 375 377 } 376 378 } 377 379
+4 -1
fs/ext4/ext4.h
··· 918 918 * where the second inode has larger inode number 919 919 * than the first 920 920 * I_DATA_SEM_QUOTA - Used for quota inodes only 921 + * I_DATA_SEM_EA - Used for ea_inodes only 921 922 */ 922 923 enum { 923 924 I_DATA_SEM_NORMAL = 0, 924 925 I_DATA_SEM_OTHER, 925 926 I_DATA_SEM_QUOTA, 927 + I_DATA_SEM_EA 926 928 }; 927 929 928 930 ··· 2903 2901 EXT4_IGET_NORMAL = 0, 2904 2902 EXT4_IGET_SPECIAL = 0x0001, /* OK to iget a system inode */ 2905 2903 EXT4_IGET_HANDLE = 0x0002, /* Inode # is from a handle */ 2906 - EXT4_IGET_BAD = 0x0004 /* Allow to iget a bad inode */ 2904 + EXT4_IGET_BAD = 0x0004, /* Allow to iget a bad inode */ 2905 + EXT4_IGET_EA_INODE = 0x0008 /* Inode should contain an EA value */ 2907 2906 } ext4_iget_flags; 2908 2907 2909 2908 extern struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+7
fs/ext4/fsync.c
··· 108 108 journal_t *journal = EXT4_SB(inode->i_sb)->s_journal; 109 109 tid_t commit_tid = datasync ? ei->i_datasync_tid : ei->i_sync_tid; 110 110 111 + /* 112 + * Fastcommit does not really support fsync on directories or other 113 + * special files. Force a full commit. 114 + */ 115 + if (!S_ISREG(inode->i_mode)) 116 + return ext4_force_commit(inode->i_sb); 117 + 111 118 if (journal->j_flags & JBD2_BARRIER && 112 119 !jbd2_trans_will_send_data_barrier(journal, commit_tid)) 113 120 *needs_barrier = true;
+29 -5
fs/ext4/inode.c
··· 4641 4641 inode_set_iversion_queried(inode, val); 4642 4642 } 4643 4643 4644 + static const char *check_igot_inode(struct inode *inode, ext4_iget_flags flags) 4645 + 4646 + { 4647 + if (flags & EXT4_IGET_EA_INODE) { 4648 + if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) 4649 + return "missing EA_INODE flag"; 4650 + if (ext4_test_inode_state(inode, EXT4_STATE_XATTR) || 4651 + EXT4_I(inode)->i_file_acl) 4652 + return "ea_inode with extended attributes"; 4653 + } else { 4654 + if ((EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) 4655 + return "unexpected EA_INODE flag"; 4656 + } 4657 + if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD)) 4658 + return "unexpected bad inode w/o EXT4_IGET_BAD"; 4659 + return NULL; 4660 + } 4661 + 4644 4662 struct inode *__ext4_iget(struct super_block *sb, unsigned long ino, 4645 4663 ext4_iget_flags flags, const char *function, 4646 4664 unsigned int line) ··· 4668 4650 struct ext4_inode_info *ei; 4669 4651 struct ext4_super_block *es = EXT4_SB(sb)->s_es; 4670 4652 struct inode *inode; 4653 + const char *err_str; 4671 4654 journal_t *journal = EXT4_SB(sb)->s_journal; 4672 4655 long ret; 4673 4656 loff_t size; ··· 4696 4677 inode = iget_locked(sb, ino); 4697 4678 if (!inode) 4698 4679 return ERR_PTR(-ENOMEM); 4699 - if (!(inode->i_state & I_NEW)) 4680 + if (!(inode->i_state & I_NEW)) { 4681 + if ((err_str = check_igot_inode(inode, flags)) != NULL) { 4682 + ext4_error_inode(inode, function, line, 0, err_str); 4683 + iput(inode); 4684 + return ERR_PTR(-EFSCORRUPTED); 4685 + } 4700 4686 return inode; 4687 + } 4701 4688 4702 4689 ei = EXT4_I(inode); 4703 4690 iloc.bh = NULL; ··· 4969 4944 if (IS_CASEFOLDED(inode) && !ext4_has_feature_casefold(inode->i_sb)) 4970 4945 ext4_error_inode(inode, function, line, 0, 4971 4946 "casefold flag without casefold feature"); 4972 - if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD)) { 4973 - ext4_error_inode(inode, function, line, 0, 4974 - "bad inode without EXT4_IGET_BAD flag"); 4975 - ret = -EUCLEAN; 4947 + if ((err_str = check_igot_inode(inode, flags)) != NULL) { 4948 + ext4_error_inode(inode, function, line, 0, err_str); 4949 + ret = -EFSCORRUPTED; 4976 4950 goto bad_inode; 4977 4951 } 4978 4952
+15 -1
fs/ext4/mballoc.c
··· 2062 2062 if (bex->fe_len < gex->fe_len) 2063 2063 return; 2064 2064 2065 - if (finish_group) 2065 + if (finish_group || ac->ac_found > sbi->s_mb_min_to_scan) 2066 2066 ext4_mb_use_best_found(ac, e4b); 2067 2067 } 2068 2068 ··· 2073 2073 * previous found extent and if new one is better, then it's stored 2074 2074 * in the context. Later, the best found extent will be used, if 2075 2075 * mballoc can't find good enough extent. 2076 + * 2077 + * The algorithm used is roughly as follows: 2078 + * 2079 + * * If free extent found is exactly as big as goal, then 2080 + * stop the scan and use it immediately 2081 + * 2082 + * * If free extent found is smaller than goal, then keep retrying 2083 + * upto a max of sbi->s_mb_max_to_scan times (default 200). After 2084 + * that stop scanning and use whatever we have. 2085 + * 2086 + * * If free extent found is bigger than goal, then keep retrying 2087 + * upto a max of sbi->s_mb_min_to_scan times (default 10) before 2088 + * stopping the scan and using the extent. 2089 + * 2076 2090 * 2077 2091 * FIXME: real allocation policy is to be designed yet! 2078 2092 */
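The comment added above spells out mballoc's stopping rule. As a standalone illustration (not kernel code; the function name and the 10/200 values simply mirror the defaults quoted in the comment), the policy can be modelled like this:

        #include <stdbool.h>
        #include <stdio.h>

        /* Model of the scan cut-offs described in the mballoc comment above:
         * an exact fit is used immediately; smaller-than-goal candidates are
         * retried up to max_to_scan times; larger-than-goal candidates only
         * up to min_to_scan times before the best one found is used. */
        static bool should_stop_scan(unsigned int goal_len, unsigned int best_len,
                                     unsigned int found, unsigned int min_to_scan,
                                     unsigned int max_to_scan)
        {
                if (best_len == goal_len)
                        return true;            /* exact fit: stop at once */
                if (found > max_to_scan)
                        return true;            /* scanned enough, take what we have */
                if (best_len > goal_len && found > min_to_scan)
                        return true;            /* good-enough oversized extent */
                return false;
        }

        int main(void)
        {
                /* defaults quoted in the comment: min_to_scan 10, max_to_scan 200 */
                printf("%d\n", should_stop_scan(8, 8, 1, 10, 200));     /* 1 */
                printf("%d\n", should_stop_scan(8, 16, 11, 10, 200));   /* 1 */
                printf("%d\n", should_stop_scan(8, 4, 50, 10, 200));    /* 0 */
                return 0;
        }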
+12 -12
fs/ext4/super.c
··· 6589 6589 } 6590 6590 6591 6591 /* 6592 - * Reinitialize lazy itable initialization thread based on 6593 - * current settings 6594 - */ 6595 - if (sb_rdonly(sb) || !test_opt(sb, INIT_INODE_TABLE)) 6596 - ext4_unregister_li_request(sb); 6597 - else { 6598 - ext4_group_t first_not_zeroed; 6599 - first_not_zeroed = ext4_has_uninit_itable(sb); 6600 - ext4_register_li_request(sb, first_not_zeroed); 6601 - } 6602 - 6603 - /* 6604 6592 * Handle creation of system zone data early because it can fail. 6605 6593 * Releasing of existing data is done when we are sure remount will 6606 6594 * succeed. ··· 6624 6636 6625 6637 if (enable_rw) 6626 6638 sb->s_flags &= ~SB_RDONLY; 6639 + 6640 + /* 6641 + * Reinitialize lazy itable initialization thread based on 6642 + * current settings 6643 + */ 6644 + if (sb_rdonly(sb) || !test_opt(sb, INIT_INODE_TABLE)) 6645 + ext4_unregister_li_request(sb); 6646 + else { 6647 + ext4_group_t first_not_zeroed; 6648 + first_not_zeroed = ext4_has_uninit_itable(sb); 6649 + ext4_register_li_request(sb, first_not_zeroed); 6650 + } 6627 6651 6628 6652 if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb)) 6629 6653 ext4_stop_mmpd(sbi);
+12 -29
fs/ext4/xattr.c
··· 121 121 #ifdef CONFIG_LOCKDEP 122 122 void ext4_xattr_inode_set_class(struct inode *ea_inode) 123 123 { 124 + struct ext4_inode_info *ei = EXT4_I(ea_inode); 125 + 124 126 lockdep_set_subclass(&ea_inode->i_rwsem, 1); 127 + (void) ei; /* shut up clang warning if !CONFIG_LOCKDEP */ 128 + lockdep_set_subclass(&ei->i_data_sem, I_DATA_SEM_EA); 125 129 } 126 130 #endif 127 131 ··· 437 433 return -EFSCORRUPTED; 438 434 } 439 435 440 - inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_NORMAL); 436 + inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_EA_INODE); 441 437 if (IS_ERR(inode)) { 442 438 err = PTR_ERR(inode); 443 439 ext4_error(parent->i_sb, ··· 445 441 err); 446 442 return err; 447 443 } 448 - 449 - if (is_bad_inode(inode)) { 450 - ext4_error(parent->i_sb, 451 - "error while reading EA inode %lu is_bad_inode", 452 - ea_ino); 453 - err = -EIO; 454 - goto error; 455 - } 456 - 457 - if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) { 458 - ext4_error(parent->i_sb, 459 - "EA inode %lu does not have EXT4_EA_INODE_FL flag", 460 - ea_ino); 461 - err = -EINVAL; 462 - goto error; 463 - } 464 - 465 444 ext4_xattr_inode_set_class(inode); 466 445 467 446 /* ··· 465 478 466 479 *ea_inode = inode; 467 480 return 0; 468 - error: 469 - iput(inode); 470 - return err; 471 481 } 472 482 473 483 /* Remove entry from mbcache when EA inode is getting evicted */ ··· 1540 1556 1541 1557 while (ce) { 1542 1558 ea_inode = ext4_iget(inode->i_sb, ce->e_value, 1543 - EXT4_IGET_NORMAL); 1544 - if (!IS_ERR(ea_inode) && 1545 - !is_bad_inode(ea_inode) && 1546 - (EXT4_I(ea_inode)->i_flags & EXT4_EA_INODE_FL) && 1547 - i_size_read(ea_inode) == value_len && 1559 + EXT4_IGET_EA_INODE); 1560 + if (IS_ERR(ea_inode)) 1561 + goto next_entry; 1562 + ext4_xattr_inode_set_class(ea_inode); 1563 + if (i_size_read(ea_inode) == value_len && 1548 1564 !ext4_xattr_inode_read(ea_inode, ea_data, value_len) && 1549 1565 !ext4_xattr_inode_verify_hashes(ea_inode, NULL, ea_data, 1550 1566 value_len) && ··· 1554 1570 kvfree(ea_data); 1555 1571 return ea_inode; 1556 1572 } 1557 - 1558 - if (!IS_ERR(ea_inode)) 1559 - iput(ea_inode); 1573 + iput(ea_inode); 1574 + next_entry: 1560 1575 ce = mb_cache_entry_find_next(ea_inode_cache, ce); 1561 1576 } 1562 1577 kvfree(ea_data);
+1 -6
fs/nfsd/nfsctl.c
··· 690 690 if (err != 0 || fd < 0) 691 691 return -EINVAL; 692 692 693 - if (svc_alien_sock(net, fd)) { 694 - printk(KERN_ERR "%s: socket net is different to NFSd's one\n", __func__); 695 - return -EINVAL; 696 - } 697 - 698 693 err = nfsd_create_serv(net); 699 694 if (err != 0) 700 695 return err; 701 696 702 - err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred); 697 + err = svc_addsock(nn->nfsd_serv, net, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred); 703 698 704 699 if (err >= 0 && 705 700 !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
+9 -1
fs/nfsd/vfs.c
··· 536 536 537 537 inode_lock(inode); 538 538 for (retries = 1;;) { 539 - host_err = __nfsd_setattr(dentry, iap); 539 + struct iattr attrs; 540 + 541 + /* 542 + * notify_change() can alter its iattr argument, making 543 + * @iap unsuitable for submission multiple times. Make a 544 + * copy for every loop iteration. 545 + */ 546 + attrs = *iap; 547 + host_err = __nfsd_setattr(dentry, &attrs); 540 548 if (host_err != -EAGAIN || !retries--) 541 549 break; 542 550 if (!nfsd_wait_for_delegreturn(rqstp, inode))
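The comment in the hunk above is the whole point of the change: a callee that mutates its argument cannot be handed the same struct twice. A self-contained C sketch of that pattern (mutate_args() is a made-up stand-in for notify_change(), and -11 stands in for -EAGAIN):

        #include <stdio.h>

        struct attrs { int valid; int mode; };

        /* Made-up callee that, like notify_change(), may modify the request it
         * is given while handling it, so the caller must not resubmit it. */
        static int mutate_args(struct attrs *a)
        {
                a->valid = 0;           /* request consumed/cleared by the callee */
                return -11;             /* pretend "try again" */
        }

        int main(void)
        {
                const struct attrs request = { .valid = 1, .mode = 0644 };
                int err, retries;

                for (retries = 1;;) {
                        struct attrs copy = request;    /* fresh copy per iteration */

                        err = mutate_args(&copy);
                        if (err != -11 || !retries--)
                                break;
                }
                printf("original request still intact: valid=%d\n", request.valid);
                return 0;
        }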
+5 -1
fs/smb/client/ioctl.c
··· 321 321 struct tcon_link *tlink; 322 322 struct cifs_sb_info *cifs_sb; 323 323 __u64 ExtAttrBits = 0; 324 + #ifdef CONFIG_CIFS_POSIX 325 + #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY 324 326 __u64 caps; 327 + #endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */ 328 + #endif /* CONFIG_CIFS_POSIX */ 325 329 326 330 xid = get_xid(); 327 331 ··· 335 331 if (pSMBFile == NULL) 336 332 break; 337 333 tcon = tlink_tcon(pSMBFile->tlink); 338 - caps = le64_to_cpu(tcon->fsUnixInfo.Capability); 339 334 #ifdef CONFIG_CIFS_POSIX 340 335 #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY 336 + caps = le64_to_cpu(tcon->fsUnixInfo.Capability); 341 337 if (CIFS_UNIX_EXTATTR_CAP & caps) { 342 338 __u64 ExtAttrMask = 0; 343 339 rc = CIFSGetExtAttr(xid, tcon,
-1
fs/smb/client/smb2ops.c
··· 618 618 * Add a new one instead 619 619 */ 620 620 spin_lock(&ses->iface_lock); 621 - iface = niface = NULL; 622 621 list_for_each_entry_safe(iface, niface, &ses->iface_list, 623 622 iface_head) { 624 623 ret = iface_cmp(iface, &tmp_iface);
+1 -1
fs/smb/client/smb2pdu.c
··· 3725 3725 if (*out_data == NULL) { 3726 3726 rc = -ENOMEM; 3727 3727 goto cnotify_exit; 3728 - } else 3728 + } else if (plen) 3729 3729 *plen = le32_to_cpu(smb_rsp->OutputBufferLength); 3730 3730 } 3731 3731
+47 -25
fs/smb/server/oplock.c
··· 157 157 rcu_read_lock(); 158 158 opinfo = list_first_or_null_rcu(&ci->m_op_list, struct oplock_info, 159 159 op_entry); 160 - if (opinfo && !atomic_inc_not_zero(&opinfo->refcount)) 161 - opinfo = NULL; 160 + if (opinfo) { 161 + if (!atomic_inc_not_zero(&opinfo->refcount)) 162 + opinfo = NULL; 163 + else { 164 + atomic_inc(&opinfo->conn->r_count); 165 + if (ksmbd_conn_releasing(opinfo->conn)) { 166 + atomic_dec(&opinfo->conn->r_count); 167 + atomic_dec(&opinfo->refcount); 168 + opinfo = NULL; 169 + } 170 + } 171 + } 172 + 162 173 rcu_read_unlock(); 163 174 164 175 return opinfo; 176 + } 177 + 178 + static void opinfo_conn_put(struct oplock_info *opinfo) 179 + { 180 + struct ksmbd_conn *conn; 181 + 182 + if (!opinfo) 183 + return; 184 + 185 + conn = opinfo->conn; 186 + /* 187 + * Checking waitqueue to dropping pending requests on 188 + * disconnection. waitqueue_active is safe because it 189 + * uses atomic operation for condition. 190 + */ 191 + if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) 192 + wake_up(&conn->r_count_q); 193 + opinfo_put(opinfo); 165 194 } 166 195 167 196 void opinfo_put(struct oplock_info *opinfo) ··· 695 666 696 667 out: 697 668 ksmbd_free_work_struct(work); 698 - /* 699 - * Checking waitqueue to dropping pending requests on 700 - * disconnection. waitqueue_active is safe because it 701 - * uses atomic operation for condition. 702 - */ 703 - if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) 704 - wake_up(&conn->r_count_q); 705 669 } 706 670 707 671 /** ··· 728 706 work->conn = conn; 729 707 work->sess = opinfo->sess; 730 708 731 - atomic_inc(&conn->r_count); 732 709 if (opinfo->op_state == OPLOCK_ACK_WAIT) { 733 710 INIT_WORK(&work->work, __smb2_oplock_break_noti); 734 711 ksmbd_queue_work(work); ··· 797 776 798 777 out: 799 778 ksmbd_free_work_struct(work); 800 - /* 801 - * Checking waitqueue to dropping pending requests on 802 - * disconnection. waitqueue_active is safe because it 803 - * uses atomic operation for condition. 
804 - */ 805 - if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) 806 - wake_up(&conn->r_count_q); 807 779 } 808 780 809 781 /** ··· 836 822 work->conn = conn; 837 823 work->sess = opinfo->sess; 838 824 839 - atomic_inc(&conn->r_count); 840 825 if (opinfo->op_state == OPLOCK_ACK_WAIT) { 841 826 list_for_each_safe(tmp, t, &opinfo->interim_list) { 842 827 struct ksmbd_work *in_work; ··· 1157 1144 } 1158 1145 prev_opinfo = opinfo_get_list(ci); 1159 1146 if (!prev_opinfo || 1160 - (prev_opinfo->level == SMB2_OPLOCK_LEVEL_NONE && lctx)) 1147 + (prev_opinfo->level == SMB2_OPLOCK_LEVEL_NONE && lctx)) { 1148 + opinfo_conn_put(prev_opinfo); 1161 1149 goto set_lev; 1150 + } 1162 1151 prev_op_has_lease = prev_opinfo->is_lease; 1163 1152 if (prev_op_has_lease) 1164 1153 prev_op_state = prev_opinfo->o_lease->state; ··· 1168 1153 if (share_ret < 0 && 1169 1154 prev_opinfo->level == SMB2_OPLOCK_LEVEL_EXCLUSIVE) { 1170 1155 err = share_ret; 1171 - opinfo_put(prev_opinfo); 1156 + opinfo_conn_put(prev_opinfo); 1172 1157 goto err_out; 1173 1158 } 1174 1159 1175 1160 if (prev_opinfo->level != SMB2_OPLOCK_LEVEL_BATCH && 1176 1161 prev_opinfo->level != SMB2_OPLOCK_LEVEL_EXCLUSIVE) { 1177 - opinfo_put(prev_opinfo); 1162 + opinfo_conn_put(prev_opinfo); 1178 1163 goto op_break_not_needed; 1179 1164 } 1180 1165 1181 1166 list_add(&work->interim_entry, &prev_opinfo->interim_list); 1182 1167 err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II); 1183 - opinfo_put(prev_opinfo); 1168 + opinfo_conn_put(prev_opinfo); 1184 1169 if (err == -ENOENT) 1185 1170 goto set_lev; 1186 1171 /* Check all oplock was freed by close */ ··· 1243 1228 return; 1244 1229 if (brk_opinfo->level != SMB2_OPLOCK_LEVEL_BATCH && 1245 1230 brk_opinfo->level != SMB2_OPLOCK_LEVEL_EXCLUSIVE) { 1246 - opinfo_put(brk_opinfo); 1231 + opinfo_conn_put(brk_opinfo); 1247 1232 return; 1248 1233 } 1249 1234 1250 1235 brk_opinfo->open_trunc = is_trunc; 1251 1236 list_add(&work->interim_entry, &brk_opinfo->interim_list); 1252 1237 oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II); 1253 - opinfo_put(brk_opinfo); 1238 + opinfo_conn_put(brk_opinfo); 1254 1239 } 1255 1240 1256 1241 /** ··· 1278 1263 list_for_each_entry_rcu(brk_op, &ci->m_op_list, op_entry) { 1279 1264 if (!atomic_inc_not_zero(&brk_op->refcount)) 1280 1265 continue; 1266 + 1267 + atomic_inc(&brk_op->conn->r_count); 1268 + if (ksmbd_conn_releasing(brk_op->conn)) { 1269 + atomic_dec(&brk_op->conn->r_count); 1270 + continue; 1271 + } 1272 + 1281 1273 rcu_read_unlock(); 1282 1274 if (brk_op->is_lease && (brk_op->o_lease->state & 1283 1275 (~(SMB2_LEASE_READ_CACHING_LE | ··· 1314 1292 brk_op->open_trunc = is_trunc; 1315 1293 oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE); 1316 1294 next: 1317 - opinfo_put(brk_op); 1295 + opinfo_conn_put(brk_op); 1318 1296 rcu_read_lock(); 1319 1297 } 1320 1298 rcu_read_unlock();
+46 -50
fs/smb/server/smb2pdu.c
··· 326 326 if (hdr->Command == SMB2_NEGOTIATE) 327 327 aux_max = 1; 328 328 else 329 - aux_max = conn->vals->max_credits - credit_charge; 329 + aux_max = conn->vals->max_credits - conn->total_credits; 330 330 credits_granted = min_t(unsigned short, credits_requested, aux_max); 331 - 332 - if (conn->vals->max_credits - conn->total_credits < credits_granted) 333 - credits_granted = conn->vals->max_credits - 334 - conn->total_credits; 335 331 336 332 conn->total_credits += credits_granted; 337 333 work->credits_granted += credits_granted; ··· 845 849 846 850 static __le32 decode_preauth_ctxt(struct ksmbd_conn *conn, 847 851 struct smb2_preauth_neg_context *pneg_ctxt, 848 - int len_of_ctxts) 852 + int ctxt_len) 849 853 { 850 854 /* 851 855 * sizeof(smb2_preauth_neg_context) assumes SMB311_SALT_SIZE Salt, 852 856 * which may not be present. Only check for used HashAlgorithms[1]. 853 857 */ 854 - if (len_of_ctxts < MIN_PREAUTH_CTXT_DATA_LEN) 858 + if (ctxt_len < 859 + sizeof(struct smb2_neg_context) + MIN_PREAUTH_CTXT_DATA_LEN) 855 860 return STATUS_INVALID_PARAMETER; 856 861 857 862 if (pneg_ctxt->HashAlgorithms != SMB2_PREAUTH_INTEGRITY_SHA512) ··· 864 867 865 868 static void decode_encrypt_ctxt(struct ksmbd_conn *conn, 866 869 struct smb2_encryption_neg_context *pneg_ctxt, 867 - int len_of_ctxts) 870 + int ctxt_len) 868 871 { 869 - int cph_cnt = le16_to_cpu(pneg_ctxt->CipherCount); 870 - int i, cphs_size = cph_cnt * sizeof(__le16); 872 + int cph_cnt; 873 + int i, cphs_size; 874 + 875 + if (sizeof(struct smb2_encryption_neg_context) > ctxt_len) { 876 + pr_err("Invalid SMB2_ENCRYPTION_CAPABILITIES context size\n"); 877 + return; 878 + } 871 879 872 880 conn->cipher_type = 0; 873 881 882 + cph_cnt = le16_to_cpu(pneg_ctxt->CipherCount); 883 + cphs_size = cph_cnt * sizeof(__le16); 884 + 874 885 if (sizeof(struct smb2_encryption_neg_context) + cphs_size > 875 - len_of_ctxts) { 886 + ctxt_len) { 876 887 pr_err("Invalid cipher count(%d)\n", cph_cnt); 877 888 return; 878 889 } ··· 928 923 929 924 static void decode_sign_cap_ctxt(struct ksmbd_conn *conn, 930 925 struct smb2_signing_capabilities *pneg_ctxt, 931 - int len_of_ctxts) 926 + int ctxt_len) 932 927 { 933 - int sign_algo_cnt = le16_to_cpu(pneg_ctxt->SigningAlgorithmCount); 934 - int i, sign_alos_size = sign_algo_cnt * sizeof(__le16); 928 + int sign_algo_cnt; 929 + int i, sign_alos_size; 930 + 931 + if (sizeof(struct smb2_signing_capabilities) > ctxt_len) { 932 + pr_err("Invalid SMB2_SIGNING_CAPABILITIES context length\n"); 933 + return; 934 + } 935 935 936 936 conn->signing_negotiated = false; 937 + sign_algo_cnt = le16_to_cpu(pneg_ctxt->SigningAlgorithmCount); 938 + sign_alos_size = sign_algo_cnt * sizeof(__le16); 937 939 938 940 if (sizeof(struct smb2_signing_capabilities) + sign_alos_size > 939 - len_of_ctxts) { 941 + ctxt_len) { 940 942 pr_err("Invalid signing algorithm count(%d)\n", sign_algo_cnt); 941 943 return; 942 944 } ··· 981 969 len_of_ctxts = len_of_smb - offset; 982 970 983 971 while (i++ < neg_ctxt_cnt) { 984 - int clen; 985 - 986 - /* check that offset is not beyond end of SMB */ 987 - if (len_of_ctxts == 0) 988 - break; 972 + int clen, ctxt_len; 989 973 990 974 if (len_of_ctxts < sizeof(struct smb2_neg_context)) 991 975 break; 992 976 993 977 pctx = (struct smb2_neg_context *)((char *)pctx + offset); 994 978 clen = le16_to_cpu(pctx->DataLength); 995 - if (clen + sizeof(struct smb2_neg_context) > len_of_ctxts) 979 + ctxt_len = clen + sizeof(struct smb2_neg_context); 980 + 981 + if (ctxt_len > len_of_ctxts) 996 982 break; 997 
983 998 984 if (pctx->ContextType == SMB2_PREAUTH_INTEGRITY_CAPABILITIES) { ··· 1001 991 1002 992 status = decode_preauth_ctxt(conn, 1003 993 (struct smb2_preauth_neg_context *)pctx, 1004 - len_of_ctxts); 994 + ctxt_len); 1005 995 if (status != STATUS_SUCCESS) 1006 996 break; 1007 997 } else if (pctx->ContextType == SMB2_ENCRYPTION_CAPABILITIES) { ··· 1012 1002 1013 1003 decode_encrypt_ctxt(conn, 1014 1004 (struct smb2_encryption_neg_context *)pctx, 1015 - len_of_ctxts); 1005 + ctxt_len); 1016 1006 } else if (pctx->ContextType == SMB2_COMPRESSION_CAPABILITIES) { 1017 1007 ksmbd_debug(SMB, 1018 1008 "deassemble SMB2_COMPRESSION_CAPABILITIES context\n"); ··· 1031 1021 } else if (pctx->ContextType == SMB2_SIGNING_CAPABILITIES) { 1032 1022 ksmbd_debug(SMB, 1033 1023 "deassemble SMB2_SIGNING_CAPABILITIES context\n"); 1024 + 1034 1025 decode_sign_cap_ctxt(conn, 1035 1026 (struct smb2_signing_capabilities *)pctx, 1036 - len_of_ctxts); 1027 + ctxt_len); 1037 1028 } 1038 1029 1039 1030 /* offsets must be 8 byte aligned */ ··· 1068 1057 return rc; 1069 1058 } 1070 1059 1071 - if (req->DialectCount == 0) { 1072 - pr_err("malformed packet\n"); 1060 + smb2_buf_len = get_rfc1002_len(work->request_buf); 1061 + smb2_neg_size = offsetof(struct smb2_negotiate_req, Dialects); 1062 + if (smb2_neg_size > smb2_buf_len) { 1073 1063 rsp->hdr.Status = STATUS_INVALID_PARAMETER; 1074 1064 rc = -EINVAL; 1075 1065 goto err_out; 1076 1066 } 1077 1067 1078 - smb2_buf_len = get_rfc1002_len(work->request_buf); 1079 - smb2_neg_size = offsetof(struct smb2_negotiate_req, Dialects); 1080 - if (smb2_neg_size > smb2_buf_len) { 1068 + if (req->DialectCount == 0) { 1069 + pr_err("malformed packet\n"); 1081 1070 rsp->hdr.Status = STATUS_INVALID_PARAMETER; 1082 1071 rc = -EINVAL; 1083 1072 goto err_out; ··· 4369 4358 return 0; 4370 4359 } 4371 4360 4372 - static unsigned long long get_allocation_size(struct inode *inode, 4373 - struct kstat *stat) 4374 - { 4375 - unsigned long long alloc_size = 0; 4376 - 4377 - if (!S_ISDIR(stat->mode)) { 4378 - if ((inode->i_blocks << 9) <= stat->size) 4379 - alloc_size = stat->size; 4380 - else 4381 - alloc_size = inode->i_blocks << 9; 4382 - } 4383 - 4384 - return alloc_size; 4385 - } 4386 - 4387 4361 static void get_file_standard_info(struct smb2_query_info_rsp *rsp, 4388 4362 struct ksmbd_file *fp, void *rsp_org) 4389 4363 { ··· 4383 4387 sinfo = (struct smb2_file_standard_info *)rsp->Buffer; 4384 4388 delete_pending = ksmbd_inode_pending_delete(fp); 4385 4389 4386 - sinfo->AllocationSize = cpu_to_le64(get_allocation_size(inode, &stat)); 4390 + sinfo->AllocationSize = cpu_to_le64(inode->i_blocks << 9); 4387 4391 sinfo->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4388 4392 sinfo->NumberOfLinks = cpu_to_le32(get_nlink(&stat) - delete_pending); 4389 4393 sinfo->DeletePending = delete_pending; ··· 4448 4452 file_info->Attributes = fp->f_ci->m_fattr; 4449 4453 file_info->Pad1 = 0; 4450 4454 file_info->AllocationSize = 4451 - cpu_to_le64(get_allocation_size(inode, &stat)); 4455 + cpu_to_le64(inode->i_blocks << 9); 4452 4456 file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4453 4457 file_info->NumberOfLinks = 4454 4458 cpu_to_le32(get_nlink(&stat) - delete_pending); ··· 4637 4641 file_info->ChangeTime = cpu_to_le64(time); 4638 4642 file_info->Attributes = fp->f_ci->m_fattr; 4639 4643 file_info->AllocationSize = 4640 - cpu_to_le64(get_allocation_size(inode, &stat)); 4644 + cpu_to_le64(inode->i_blocks << 9); 4641 4645 file_info->EndOfFile = S_ISDIR(stat.mode) ? 
0 : cpu_to_le64(stat.size); 4642 4646 file_info->Reserved = cpu_to_le32(0); 4643 4647 rsp->OutputBufferLength = ··· 5502 5506 { 5503 5507 char *link_name = NULL, *target_name = NULL, *pathname = NULL; 5504 5508 struct path path; 5505 - bool file_present = true; 5509 + bool file_present = false; 5506 5510 int rc; 5507 5511 5508 5512 if (buf_len < (u64)sizeof(struct smb2_file_link_info) + ··· 5535 5539 if (rc) { 5536 5540 if (rc != -ENOENT) 5537 5541 goto out; 5538 - file_present = false; 5539 - } 5542 + } else 5543 + file_present = true; 5540 5544 5541 5545 if (file_info->ReplaceIfExists) { 5542 5546 if (file_present) {
+7 -2
fs/smb/server/vfs.c
··· 86 86 err = vfs_path_parent_lookup(filename, flags, 87 87 &parent_path, &last, &type, 88 88 root_share_path); 89 - putname(filename); 90 - if (err) 89 + if (err) { 90 + putname(filename); 91 91 return err; 92 + } 92 93 93 94 if (unlikely(type != LAST_NORM)) { 94 95 path_put(&parent_path); 96 + putname(filename); 95 97 return -ENOENT; 96 98 } 97 99 ··· 110 108 path->dentry = d; 111 109 path->mnt = share_conf->vfs_path.mnt; 112 110 path_put(&parent_path); 111 + putname(filename); 113 112 114 113 return 0; 115 114 116 115 err_out: 117 116 inode_unlock(parent_path.dentry->d_inode); 118 117 path_put(&parent_path); 118 + putname(filename); 119 119 return -ENOENT; 120 120 } 121 121 ··· 747 743 rd.new_dir = new_path.dentry->d_inode, 748 744 rd.new_dentry = new_dentry, 749 745 rd.flags = flags, 746 + rd.delegated_inode = NULL, 750 747 err = vfs_rename(&rd); 751 748 if (err) 752 749 ksmbd_debug(VFS, "vfs_rename failed err %d\n", err);
+6
include/linux/cper.h
··· 572 572 int cper_mem_err_location(struct cper_mem_err_compact *mem, char *msg); 573 573 int cper_dimm_err_location(struct cper_mem_err_compact *mem, char *msg); 574 574 575 + struct acpi_hest_generic_status; 576 + void cper_estatus_print(const char *pfx, 577 + const struct acpi_hest_generic_status *estatus); 578 + int cper_estatus_check_header(const struct acpi_hest_generic_status *estatus); 579 + int cper_estatus_check(const struct acpi_hest_generic_status *estatus); 580 + 575 581 #endif
+2
include/linux/efi.h
··· 1338 1338 return xen_efi_config_table_is_usable(guid, table); 1339 1339 } 1340 1340 1341 + umode_t efi_attr_is_visible(struct kobject *kobj, struct attribute *attr, int n); 1342 + 1341 1343 #endif /* _LINUX_EFI_H */
+1 -1
include/linux/firewire.h
··· 391 391 u32 tag:2; /* tx: Tag in packet header */ 392 392 u32 sy:4; /* tx: Sy in packet header */ 393 393 u32 header_length:8; /* Length of immediate header */ 394 - u32 header[0]; /* tx: Top of 1394 isoch. data_block */ 394 + u32 header[]; /* tx: Top of 1394 isoch. data_block */ 395 395 }; 396 396 397 397 #define FW_ISO_CONTEXT_TRANSMIT 0
-6
include/linux/fs.h
··· 2566 2566 struct inode *inode = file_inode(file); 2567 2567 return atomic_dec_unless_positive(&inode->i_writecount) ? 0 : -ETXTBSY; 2568 2568 } 2569 - static inline int exclusive_deny_write_access(struct file *file) 2570 - { 2571 - int old = 0; 2572 - struct inode *inode = file_inode(file); 2573 - return atomic_try_cmpxchg(&inode->i_writecount, &old, -1) ? 0 : -ETXTBSY; 2574 - } 2575 2569 static inline void put_write_access(struct inode * inode) 2576 2570 { 2577 2571 atomic_dec(&inode->i_writecount);
+1 -1
include/linux/iio/iio-gts-helper.h
··· 135 135 /** 136 136 * iio_gts_find_sel_by_int_time - find selector matching integration time 137 137 * @gts: Gain time scale descriptor 138 - * @gain: HW-gain for which matching selector is searched for 138 + * @time: Integration time for which matching selector is searched for 139 139 * 140 140 * Return: a selector matching given integration time or -EINVAL if 141 141 * selector was not found.
+1
include/linux/mlx5/driver.h
··· 1093 1093 int mlx5_core_create_psv(struct mlx5_core_dev *dev, u32 pdn, 1094 1094 int npsvs, u32 *sig_index); 1095 1095 int mlx5_core_destroy_psv(struct mlx5_core_dev *dev, int psv_num); 1096 + __be32 mlx5_core_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev); 1096 1097 void mlx5_core_put_rsc(struct mlx5_core_rsc_common *common); 1097 1098 int mlx5_query_odp_caps(struct mlx5_core_dev *dev, 1098 1099 struct mlx5_odp_caps *odp_caps);
+6
include/linux/page-flags.h
··· 617 617 * Please note that, confusingly, "page_mapping" refers to the inode 618 618 * address_space which maps the page from disk; whereas "page_mapped" 619 619 * refers to user virtual address space into which the page is mapped. 620 + * 621 + * For slab pages, since slab reuses the bits in struct page to store its 622 + * internal states, the page->mapping does not exist as such, nor do these 623 + * flags below. So in order to avoid testing non-existent bits, please 624 + * make sure that PageSlab(page) actually evaluates to false before calling 625 + * the following functions (e.g., PageAnon). See mm/slab.h. 620 626 */ 621 627 #define PAGE_MAPPING_ANON 0x1 622 628 #define PAGE_MAPPING_MOVABLE 0x2
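A kernel-context sketch of the calling convention the new comment asks for; PageSlab() and PageAnon() are the existing page-flag helpers, while the wrapper name is invented here purely for illustration:

        #include <linux/page-flags.h>

        /* Sketch: never consult PAGE_MAPPING_*-derived tests on a slab page,
         * since slab reuses page->mapping for its own internal state. */
        static inline bool page_is_anon_checked(struct page *page)
        {
                if (PageSlab(page))
                        return false;
                return PageAnon(page);
        }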
+13 -12
include/linux/pe.h
··· 11 11 #include <linux/types.h> 12 12 13 13 /* 14 - * Linux EFI stub v1.0 adds the following functionality: 15 - * - Loading initrd from the LINUX_EFI_INITRD_MEDIA_GUID device path, 16 - * - Loading/starting the kernel from firmware that targets a different 17 - * machine type, via the entrypoint exposed in the .compat PE/COFF section. 14 + * Starting from version v3.0, the major version field should be interpreted as 15 + * a bit mask of features supported by the kernel's EFI stub: 16 + * - 0x1: initrd loading from the LINUX_EFI_INITRD_MEDIA_GUID device path, 17 + * - 0x2: initrd loading using the initrd= command line option, where the file 18 + * may be specified using device path notation, and is not required to 19 + * reside on the same volume as the loaded kernel image. 18 20 * 19 21 * The recommended way of loading and starting v1.0 or later kernels is to use 20 22 * the LoadImage() and StartImage() EFI boot services, and expose the initrd 21 23 * via the LINUX_EFI_INITRD_MEDIA_GUID device path. 22 24 * 23 - * Versions older than v1.0 support initrd loading via the image load options 24 - * (using initrd=, limited to the volume from which the kernel itself was 25 - * loaded), or via arch specific means (bootparams, DT, etc). 25 + * Versions older than v1.0 may support initrd loading via the image load 26 + * options (using initrd=, limited to the volume from which the kernel itself 27 + * was loaded), or only via arch specific means (bootparams, DT, etc). 26 28 * 27 - * On x86, LoadImage() and StartImage() can be omitted if the EFI handover 28 - * protocol is implemented, which can be inferred from the version, 29 - * handover_offset and xloadflags fields in the bootparams structure. 29 + * The minor version field must remain 0x0. 30 + * (https://lore.kernel.org/all/efd6f2d4-547c-1378-1faa-53c044dbd297@gmail.com/) 30 31 */ 31 - #define LINUX_EFISTUB_MAJOR_VERSION 0x1 32 - #define LINUX_EFISTUB_MINOR_VERSION 0x1 32 + #define LINUX_EFISTUB_MAJOR_VERSION 0x3 33 + #define LINUX_EFISTUB_MINOR_VERSION 0x0 33 34 34 35 /* 35 36 * LINUX_PE_MAGIC appears at offset 0x38 into the MS-DOS header of EFI bootable
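With the major version reinterpreted as a feature mask, a loader probes individual bits instead of comparing whole versions. A minimal standalone sketch (the FEAT_* macro names are invented here; only the 0x1 and 0x2 meanings come from the comment above):

        #include <stdio.h>

        #define FEAT_INITRD_MEDIA_GUID  0x1     /* initrd via LINUX_EFI_INITRD_MEDIA_GUID */
        #define FEAT_INITRD_CMDLINE_DP  0x2     /* initrd= may use device path notation */

        int main(void)
        {
                unsigned int stub_major = 0x3;  /* as read from the image's version field */

                if (stub_major & FEAT_INITRD_MEDIA_GUID)
                        printf("load the initrd through the media GUID device path\n");
                if (stub_major & FEAT_INITRD_CMDLINE_DP)
                        printf("initrd= may reference a file on another volume\n");
                return 0;
        }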
-1
include/linux/sched/task.h
··· 29 29 u32 io_thread:1; 30 30 u32 user_worker:1; 31 31 u32 no_files:1; 32 - u32 ignore_signals:1; 33 32 unsigned long stack; 34 33 unsigned long stack_size; 35 34 unsigned long tls;
+3 -12
include/linux/sched/vhost_task.h
··· 2 2 #ifndef _LINUX_VHOST_TASK_H 3 3 #define _LINUX_VHOST_TASK_H 4 4 5 - #include <linux/completion.h> 6 5 7 - struct task_struct; 6 + struct vhost_task; 8 7 9 - struct vhost_task { 10 - int (*fn)(void *data); 11 - void *data; 12 - struct completion exited; 13 - unsigned long flags; 14 - struct task_struct *task; 15 - }; 16 - 17 - struct vhost_task *vhost_task_create(int (*fn)(void *), void *arg, 8 + struct vhost_task *vhost_task_create(bool (*fn)(void *), void *arg, 18 9 const char *name); 19 10 void vhost_task_start(struct vhost_task *vtsk); 20 11 void vhost_task_stop(struct vhost_task *vtsk); 21 - bool vhost_task_should_stop(struct vhost_task *vtsk); 12 + void vhost_task_wake(struct vhost_task *vtsk); 22 13 23 14 #endif
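A kernel-context sketch of the reworked API, based on what the vhost.c conversion earlier in this series shows: the callback simply drains its queue and reports whether it found work, while sleeping and stopping are handled by the vhost_task core. The example_dev structure and names are hypothetical.

        #include <linux/errno.h>
        #include <linux/llist.h>
        #include <linux/sched/vhost_task.h>

        struct example_dev {
                struct llist_head work_list;
                struct vhost_task *vtsk;
        };

        static bool example_worker(void *data)
        {
                struct example_dev *dev = data;
                struct llist_node *node = llist_del_all(&dev->work_list);

                /* process each queued node here */
                return node != NULL;    /* true: did work, core may call us again */
        }

        static int example_start(struct example_dev *dev)
        {
                dev->vtsk = vhost_task_create(example_worker, dev, "example-worker");
                if (!dev->vtsk)
                        return -ENOMEM;
                vhost_task_start(dev->vtsk);
                return 0;
        }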
+3 -4
include/linux/sunrpc/svcsock.h
··· 61 61 void svc_send(struct svc_rqst *rqstp); 62 62 void svc_drop(struct svc_rqst *); 63 63 void svc_sock_update_bufs(struct svc_serv *serv); 64 - bool svc_alien_sock(struct net *net, int fd); 65 - int svc_addsock(struct svc_serv *serv, const int fd, 66 - char *name_return, const size_t len, 67 - const struct cred *cred); 64 + int svc_addsock(struct svc_serv *serv, struct net *net, 65 + const int fd, char *name_return, const size_t len, 66 + const struct cred *cred); 68 67 void svc_init_xprt_sock(void); 69 68 void svc_cleanup_xprt_sock(void); 70 69 struct svc_xprt *svc_sock_create(struct svc_serv *serv, int prot);
+1
include/linux/trace_events.h
··· 806 806 FILTER_TRACE_FN, 807 807 FILTER_COMM, 808 808 FILTER_CPU, 809 + FILTER_STACKTRACE, 809 810 }; 810 811 811 812 extern int trace_event_raw_init(struct trace_event_call *call);
+5
include/linux/usb/hcd.h
··· 501 501 void hcd_buffer_free(struct usb_bus *bus, size_t size, 502 502 void *addr, dma_addr_t dma); 503 503 504 + void *hcd_buffer_alloc_pages(struct usb_hcd *hcd, 505 + size_t size, gfp_t mem_flags, dma_addr_t *dma); 506 + void hcd_buffer_free_pages(struct usb_hcd *hcd, 507 + size_t size, void *addr, dma_addr_t dma); 508 + 504 509 /* generic bus glue, needed for host controllers that don't use PCI */ 505 510 extern irqreturn_t usb_hcd_irq(int irq, void *__hcd); 506 511
+2 -1
include/linux/user_events.h
··· 17 17 18 18 #ifdef CONFIG_USER_EVENTS 19 19 struct user_event_mm { 20 - struct list_head link; 20 + struct list_head mms_link; 21 21 struct list_head enablers; 22 22 struct mm_struct *mm; 23 + /* Used for one-shot lists, protected by event_mutex */ 23 24 struct user_event_mm *next; 24 25 refcount_t refcnt; 25 26 refcount_t tasks;
+1
include/media/v4l2-subdev.h
··· 1119 1119 * @vfh: pointer to &struct v4l2_fh 1120 1120 * @state: pointer to &struct v4l2_subdev_state 1121 1121 * @owner: module pointer to the owner of this file handle 1122 + * @client_caps: bitmask of ``V4L2_SUBDEV_CLIENT_CAP_*`` 1122 1123 */ 1123 1124 struct v4l2_subdev_fh { 1124 1125 struct v4l2_fh vfh;
-2
include/net/mana/mana.h
··· 347 347 struct mana_ethtool_stats { 348 348 u64 stop_queue; 349 349 u64 wake_queue; 350 - u64 tx_cqes; 351 350 u64 tx_cqe_err; 352 351 u64 tx_cqe_unknown_type; 353 - u64 rx_cqes; 354 352 u64 rx_coalesced_err; 355 353 u64 rx_cqe_unknown_type; 356 354 };
+4
include/net/sock.h
··· 336 336 * @sk_cgrp_data: cgroup data for this cgroup 337 337 * @sk_memcg: this socket's memory cgroup association 338 338 * @sk_write_pending: a write to stream socket waits to start 339 + * @sk_wait_pending: number of threads blocked on this socket 339 340 * @sk_state_change: callback to indicate change in the state of the sock 340 341 * @sk_data_ready: callback to indicate there is data to be processed 341 342 * @sk_write_space: callback to indicate there is bf sending space available ··· 429 428 unsigned int sk_napi_id; 430 429 #endif 431 430 int sk_rcvbuf; 431 + int sk_wait_pending; 432 432 433 433 struct sk_filter __rcu *sk_filter; 434 434 union { ··· 1176 1174 1177 1175 #define sk_wait_event(__sk, __timeo, __condition, __wait) \ 1178 1176 ({ int __rc; \ 1177 + __sk->sk_wait_pending++; \ 1179 1178 release_sock(__sk); \ 1180 1179 __rc = __condition; \ 1181 1180 if (!__rc) { \ ··· 1186 1183 } \ 1187 1184 sched_annotate_sleep(); \ 1188 1185 lock_sock(__sk); \ 1186 + __sk->sk_wait_pending--; \ 1189 1187 __rc = __condition; \ 1190 1188 __rc; \ 1191 1189 })
+1
include/net/tcp.h
··· 632 632 void tcp_skb_mark_lost_uncond_verify(struct tcp_sock *tp, struct sk_buff *skb); 633 633 void tcp_fin(struct sock *sk); 634 634 void tcp_check_space(struct sock *sk); 635 + void tcp_sack_compress_send_ack(struct sock *sk); 635 636 636 637 /* tcp_timer.c */ 637 638 void tcp_init_xmit_timers(struct sock *);
+4 -3
include/target/iscsi/iscsi_target_core.h
··· 562 562 #define LOGIN_FLAGS_READ_ACTIVE 2 563 563 #define LOGIN_FLAGS_WRITE_ACTIVE 3 564 564 #define LOGIN_FLAGS_CLOSED 4 565 + #define LOGIN_FLAGS_WORKER_RUNNING 5 565 566 unsigned long login_flags; 566 567 struct delayed_work login_work; 567 568 struct iscsi_login *login; 568 569 struct timer_list nopin_timer; 569 570 struct timer_list nopin_response_timer; 570 - struct timer_list transport_timer; 571 + struct timer_list login_timer; 571 572 struct task_struct *login_kworker; 572 573 /* Spinlock used for add/deleting cmd's from conn_cmd_list */ 573 574 spinlock_t cmd_lock; ··· 577 576 spinlock_t nopin_timer_lock; 578 577 spinlock_t response_queue_lock; 579 578 spinlock_t state_lock; 579 + spinlock_t login_timer_lock; 580 + spinlock_t login_worker_lock; 580 581 /* libcrypto RX and TX contexts for crc32c */ 581 582 struct ahash_request *conn_rx_hash; 582 583 struct ahash_request *conn_tx_hash; ··· 795 792 enum np_thread_state_table np_thread_state; 796 793 bool enabled; 797 794 atomic_t np_reset_count; 798 - enum iscsi_timer_flags_table np_login_timer_flags; 799 795 u32 np_exports; 800 796 enum np_flags_table np_flags; 801 797 spinlock_t np_thread_lock; ··· 802 800 struct socket *np_socket; 803 801 struct sockaddr_storage np_sockaddr; 804 802 struct task_struct *np_thread; 805 - struct timer_list np_login_timer; 806 803 void *np_context; 807 804 struct iscsit_transport *np_transport; 808 805 struct list_head np_list;
-4
io_uring/epoll.c
··· 25 25 { 26 26 struct io_epoll *epoll = io_kiocb_to_cmd(req, struct io_epoll); 27 27 28 - pr_warn_once("%s: epoll_ctl support in io_uring is deprecated and will " 29 - "be removed in a future Linux kernel version.\n", 30 - current->comm); 31 - 32 28 if (sqe->buf_index || sqe->splice_fd_in) 33 29 return -EINVAL; 34 30
+4 -1
kernel/exit.c
··· 411 411 tsk->flags |= PF_POSTCOREDUMP; 412 412 core_state = tsk->signal->core_state; 413 413 spin_unlock_irq(&tsk->sighand->siglock); 414 - if (core_state) { 414 + 415 + /* The vhost_worker does not participate in coredumps */ 416 + if (core_state && 417 + ((tsk->flags & (PF_IO_WORKER | PF_USER_WORKER)) != PF_USER_WORKER)) { 415 418 struct core_thread self; 416 419 417 420 self.task = current;
+5 -8
kernel/fork.c
··· 2336 2336 p->flags &= ~PF_KTHREAD; 2337 2337 if (args->kthread) 2338 2338 p->flags |= PF_KTHREAD; 2339 - if (args->user_worker) 2340 - p->flags |= PF_USER_WORKER; 2341 - if (args->io_thread) { 2339 + if (args->user_worker) { 2342 2340 /* 2343 - * Mark us an IO worker, and block any signal that isn't 2341 + * Mark us a user worker, and block any signal that isn't 2344 2342 * fatal or STOP 2345 2343 */ 2346 - p->flags |= PF_IO_WORKER; 2344 + p->flags |= PF_USER_WORKER; 2347 2345 siginitsetinv(&p->blocked, sigmask(SIGKILL)|sigmask(SIGSTOP)); 2348 2346 } 2347 + if (args->io_thread) 2348 + p->flags |= PF_IO_WORKER; 2349 2349 2350 2350 if (args->name) 2351 2351 strscpy_pad(p->comm, args->name, sizeof(p->comm)); ··· 2516 2516 retval = copy_thread(p, args); 2517 2517 if (retval) 2518 2518 goto bad_fork_cleanup_io; 2519 - 2520 - if (args->ignore_signals) 2521 - ignore_signals(p); 2522 2519 2523 2520 stackleak_task_init(p); 2524 2521
+1 -1
kernel/module/decompress.c
··· 257 257 do { 258 258 struct page *page = module_get_next_page(info); 259 259 260 - if (!IS_ERR(page)) { 260 + if (IS_ERR(page)) { 261 261 retval = PTR_ERR(page); 262 262 goto out; 263 263 }
+24 -52
kernel/module/main.c
··· 1521 1521 MOD_RODATA, 1522 1522 MOD_RO_AFTER_INIT, 1523 1523 MOD_DATA, 1524 - MOD_INVALID, /* This is needed to match the masks array */ 1524 + MOD_DATA, 1525 1525 }; 1526 1526 static const int init_m_to_mem_type[] = { 1527 1527 MOD_INIT_TEXT, 1528 1528 MOD_INIT_RODATA, 1529 1529 MOD_INVALID, 1530 1530 MOD_INIT_DATA, 1531 - MOD_INVALID, /* This is needed to match the masks array */ 1531 + MOD_INIT_DATA, 1532 1532 }; 1533 1533 1534 1534 for (m = 0; m < ARRAY_SIZE(masks); ++m) { ··· 3057 3057 return load_module(&info, uargs, 0); 3058 3058 } 3059 3059 3060 - static int file_init_module(struct file *file, const char __user * uargs, int flags) 3060 + SYSCALL_DEFINE3(finit_module, int, fd, const char __user *, uargs, int, flags) 3061 3061 { 3062 3062 struct load_info info = { }; 3063 3063 void *buf = NULL; 3064 3064 int len; 3065 - 3066 - len = kernel_read_file(file, 0, &buf, INT_MAX, NULL, 3067 - READING_MODULE); 3068 - if (len < 0) { 3069 - mod_stat_inc(&failed_kreads); 3070 - mod_stat_add_long(len, &invalid_kread_bytes); 3071 - return len; 3072 - } 3073 - 3074 - if (flags & MODULE_INIT_COMPRESSED_FILE) { 3075 - int err = module_decompress(&info, buf, len); 3076 - vfree(buf); /* compressed data is no longer needed */ 3077 - if (err) { 3078 - mod_stat_inc(&failed_decompress); 3079 - mod_stat_add_long(len, &invalid_decompress_bytes); 3080 - return err; 3081 - } 3082 - } else { 3083 - info.hdr = buf; 3084 - info.len = len; 3085 - } 3086 - 3087 - return load_module(&info, uargs, flags); 3088 - } 3089 - 3090 - /* 3091 - * kernel_read_file() will already deny write access, but module 3092 - * loading wants _exclusive_ access to the file, so we do that 3093 - * here, along with basic sanity checks. 3094 - */ 3095 - static int prepare_file_for_module_load(struct file *file) 3096 - { 3097 - if (!file || !(file->f_mode & FMODE_READ)) 3098 - return -EBADF; 3099 - if (!S_ISREG(file_inode(file)->i_mode)) 3100 - return -EINVAL; 3101 - return exclusive_deny_write_access(file); 3102 - } 3103 - 3104 - SYSCALL_DEFINE3(finit_module, int, fd, const char __user *, uargs, int, flags) 3105 - { 3106 - struct fd f; 3107 3065 int err; 3108 3066 3109 3067 err = may_init_module(); ··· 3075 3117 |MODULE_INIT_COMPRESSED_FILE)) 3076 3118 return -EINVAL; 3077 3119 3078 - f = fdget(fd); 3079 - err = prepare_file_for_module_load(f.file); 3080 - if (!err) { 3081 - err = file_init_module(f.file, uargs, flags); 3082 - allow_write_access(f.file); 3120 + len = kernel_read_file_from_fd(fd, 0, &buf, INT_MAX, NULL, 3121 + READING_MODULE); 3122 + if (len < 0) { 3123 + mod_stat_inc(&failed_kreads); 3124 + mod_stat_add_long(len, &invalid_kread_bytes); 3125 + return len; 3083 3126 } 3084 - fdput(f); 3085 - return err; 3127 + 3128 + if (flags & MODULE_INIT_COMPRESSED_FILE) { 3129 + err = module_decompress(&info, buf, len); 3130 + vfree(buf); /* compressed data is no longer needed */ 3131 + if (err) { 3132 + mod_stat_inc(&failed_decompress); 3133 + mod_stat_add_long(len, &invalid_decompress_bytes); 3134 + return err; 3135 + } 3136 + } else { 3137 + info.hdr = buf; 3138 + info.len = len; 3139 + } 3140 + 3141 + return load_module(&info, uargs, flags); 3086 3142 } 3087 3143 3088 3144 /* Keep in sync with MODULE_FLAGS_BUF_SIZE !!! */
+5 -3
kernel/signal.c
··· 1368 1368 1369 1369 while_each_thread(p, t) { 1370 1370 task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK); 1371 - count++; 1371 + /* Don't require de_thread to wait for the vhost_worker */ 1372 + if ((t->flags & (PF_IO_WORKER | PF_USER_WORKER)) != PF_USER_WORKER) 1373 + count++; 1372 1374 1373 1375 /* Don't bother with already dead threads */ 1374 1376 if (t->exit_state) ··· 2863 2861 } 2864 2862 2865 2863 /* 2866 - * PF_IO_WORKER threads will catch and exit on fatal signals 2864 + * PF_USER_WORKER threads will catch and exit on fatal signals 2867 2865 * themselves. They have cleanup that must be performed, so 2868 2866 * we cannot call do_exit() on their behalf. 2869 2867 */ 2870 - if (current->flags & PF_IO_WORKER) 2868 + if (current->flags & PF_USER_WORKER) 2871 2869 goto out; 2872 2870 2873 2871 /*
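The new condition in zap_other_threads() is a mask comparison: a vhost worker carries PF_USER_WORKER but not PF_IO_WORKER, so only that exact combination is exempted from the de_thread() count, while io_uring workers (which also carry PF_IO_WORKER) and ordinary threads still count. A small illustrative helper, not in the patch, spelling the test out:

#include <linux/sched.h>

/* True only for PF_USER_WORKER threads that are not also PF_IO_WORKER. */
static inline bool is_vhost_style_worker(const struct task_struct *t)
{
	return (t->flags & (PF_IO_WORKER | PF_USER_WORKER)) == PF_USER_WORKER;
}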
+36 -8
kernel/trace/trace.c
··· 60 60 */
61 61 bool ring_buffer_expanded;
62 62
63 + #ifdef CONFIG_FTRACE_STARTUP_TEST
63 64 /*
64 65 * We need to change this state when a selftest is running.
65 66 * A selftest will lurk into the ring-buffer to count the
··· 76 75 */
77 76 bool __read_mostly tracing_selftest_disabled;
78 77
79 - #ifdef CONFIG_FTRACE_STARTUP_TEST
80 78 void __init disable_tracing_selftest(const char *reason)
81 79 {
82 80 if (!tracing_selftest_disabled) {
··· 83 83 pr_info("Ftrace startup test is disabled due to %s\n", reason);
84 84 }
85 85 }
86 + #else
87 + #define tracing_selftest_running 0
88 + #define tracing_selftest_disabled 0
86 89 #endif
87 90
88 91 /* Pipe tracepoints to printk */
··· 1054 1051 if (!(tr->trace_flags & TRACE_ITER_PRINTK))
1055 1052 return 0;
1056 1053
1057 - if (unlikely(tracing_selftest_running || tracing_disabled))
1054 + if (unlikely(tracing_selftest_running && tr == &global_trace))
1055 + return 0;
1056 +
1057 + if (unlikely(tracing_disabled))
1058 1058 return 0;
1059 1059
1060 1060 alloc = sizeof(*entry) + size + 2; /* possible \n added */
··· 2047 2041 return 0;
2048 2042 }
2049 2043
2044 + static int do_run_tracer_selftest(struct tracer *type)
2045 + {
2046 + int ret;
2047 +
2048 + /*
2049 + * Tests can take a long time, especially if they are run one after the
2050 + * other, as does happen during bootup when all the tracers are
2051 + * registered. This could cause the soft lockup watchdog to trigger.
2052 + */
2053 + cond_resched();
2054 +
2055 + tracing_selftest_running = true;
2056 + ret = run_tracer_selftest(type);
2057 + tracing_selftest_running = false;
2058 +
2059 + return ret;
2060 + }
2061 +
2050 2062 static __init int init_trace_selftests(void)
2051 2063 {
2052 2064 struct trace_selftests *p, *n;
··· 2116 2092 {
2117 2093 return 0;
2118 2094 }
2095 + static inline int do_run_tracer_selftest(struct tracer *type)
2096 + {
2097 + return 0;
2098 + }
2119 2099 #endif /* CONFIG_FTRACE_STARTUP_TEST */
2120 2100
2121 2101 static void add_tracer_options(struct trace_array *tr, struct tracer *t);
··· 2155 2127
2156 2128 mutex_lock(&trace_types_lock);
2157 2129
2158 - tracing_selftest_running = true;
2159 -
2160 2130 for (t = trace_types; t; t = t->next) {
2161 2131 if (strcmp(type->name, t->name) == 0) {
2162 2132 /* already found */
··· 2183 2157 /* store the tracer for __set_tracer_option */
2184 2158 type->flags->trace = type;
2185 2159
2186 - ret = run_tracer_selftest(type);
2160 + ret = do_run_tracer_selftest(type);
2187 2161 if (ret < 0)
2188 2162 goto out;
2189 2163
··· 2192 2166 add_tracer_options(&global_trace, type);
2193 2167
2194 2168 out:
2195 - tracing_selftest_running = false;
2196 2169 mutex_unlock(&trace_types_lock);
2197 2170
2198 2171 if (ret || !default_bootup_tracer)
··· 3515 3490 unsigned int trace_ctx;
3516 3491 char *tbuffer;
3517 3492
3518 - if (tracing_disabled || tracing_selftest_running)
3493 + if (tracing_disabled)
3519 3494 return 0;
3520 3495
3521 3496 /* Don't pollute graph traces with trace_vprintk internals */
··· 3563 3538 int trace_array_vprintk(struct trace_array *tr,
3564 3539 unsigned long ip, const char *fmt, va_list args)
3565 3540 {
3541 + if (tracing_selftest_running && tr == &global_trace)
3542 + return 0;
3543 +
3566 3544 return __trace_array_vprintk(tr->array_buffer.buffer, ip, fmt, args);
3567 3545 }
3568 3546
··· 5780 5752 "\t table using the key(s) and value(s) named, and the value of a\n"
5781 5753 "\t sum called 'hitcount' is incremented.  Keys and values\n"
5782 5754 "\t correspond to fields in the event's format description. Keys\n"
5783 - "\t can be any field, or the special string 'stacktrace'.\n"
5755 + "\t can be any field, or the special string 'common_stacktrace'.\n"
5784 5756 "\t Compound keys consisting of up to two fields can be specified\n"
5785 5757 "\t by the 'keys' keyword. Values must correspond to numeric\n"
5786 5758 "\t fields. Sort keys consisting of up to two fields can be\n"
+2
kernel/trace/trace_events.c
··· 194 194 __generic_field(int, common_cpu, FILTER_CPU); 195 195 __generic_field(char *, COMM, FILTER_COMM); 196 196 __generic_field(char *, comm, FILTER_COMM); 197 + __generic_field(char *, stacktrace, FILTER_STACKTRACE); 198 + __generic_field(char *, STACKTRACE, FILTER_STACKTRACE); 197 199 198 200 return ret; 199 201 }
+26 -13
kernel/trace/trace_events_hist.c
··· 1364 1364 if (field->field) 1365 1365 field_name = field->field->name; 1366 1366 else 1367 - field_name = "stacktrace"; 1367 + field_name = "common_stacktrace"; 1368 1368 } else if (field->flags & HIST_FIELD_FL_HITCOUNT) 1369 1369 field_name = "hitcount"; 1370 1370 ··· 2367 2367 hist_data->enable_timestamps = true; 2368 2368 if (*flags & HIST_FIELD_FL_TIMESTAMP_USECS) 2369 2369 hist_data->attrs->ts_in_usecs = true; 2370 - } else if (strcmp(field_name, "stacktrace") == 0) { 2370 + } else if (strcmp(field_name, "common_stacktrace") == 0) { 2371 2371 *flags |= HIST_FIELD_FL_STACKTRACE; 2372 2372 } else if (strcmp(field_name, "common_cpu") == 0) 2373 2373 *flags |= HIST_FIELD_FL_CPU; ··· 2378 2378 if (!field || !field->size) { 2379 2379 /* 2380 2380 * For backward compatibility, if field_name 2381 - * was "cpu", then we treat this the same as 2382 - * common_cpu. This also works for "CPU". 2381 + * was "cpu" or "stacktrace", then we treat this 2382 + * the same as common_cpu and common_stacktrace 2383 + * respectively. This also works for "CPU", and 2384 + * "STACKTRACE". 2383 2385 */ 2384 2386 if (field && field->filter_type == FILTER_CPU) { 2385 2387 *flags |= HIST_FIELD_FL_CPU; 2388 + } else if (field && field->filter_type == FILTER_STACKTRACE) { 2389 + *flags |= HIST_FIELD_FL_STACKTRACE; 2386 2390 } else { 2387 2391 hist_err(tr, HIST_ERR_FIELD_NOT_FOUND, 2388 2392 errpos(field_name)); ··· 4242 4238 goto out; 4243 4239 } 4244 4240 4245 - /* Some types cannot be a value */ 4246 - if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT | 4247 - HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2 | 4248 - HIST_FIELD_FL_SYM | HIST_FIELD_FL_SYM_OFFSET | 4249 - HIST_FIELD_FL_SYSCALL | HIST_FIELD_FL_STACKTRACE)) { 4250 - hist_err(file->tr, HIST_ERR_BAD_FIELD_MODIFIER, errpos(field_str)); 4251 - ret = -EINVAL; 4241 + /* values and variables should not have some modifiers */ 4242 + if (hist_field->flags & HIST_FIELD_FL_VAR) { 4243 + /* Variable */ 4244 + if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT | 4245 + HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2)) 4246 + goto err; 4247 + } else { 4248 + /* Value */ 4249 + if (hist_field->flags & (HIST_FIELD_FL_GRAPH | HIST_FIELD_FL_PERCENT | 4250 + HIST_FIELD_FL_BUCKET | HIST_FIELD_FL_LOG2 | 4251 + HIST_FIELD_FL_SYM | HIST_FIELD_FL_SYM_OFFSET | 4252 + HIST_FIELD_FL_SYSCALL | HIST_FIELD_FL_STACKTRACE)) 4253 + goto err; 4252 4254 } 4253 4255 4254 4256 hist_data->fields[val_idx] = hist_field; ··· 4266 4256 ret = -EINVAL; 4267 4257 out: 4268 4258 return ret; 4259 + err: 4260 + hist_err(file->tr, HIST_ERR_BAD_FIELD_MODIFIER, errpos(field_str)); 4261 + return -EINVAL; 4269 4262 } 4270 4263 4271 4264 static int create_val_field(struct hist_trigger_data *hist_data, ··· 5398 5385 if (key_field->field) 5399 5386 seq_printf(m, "%s.stacktrace", key_field->field->name); 5400 5387 else 5401 - seq_puts(m, "stacktrace:\n"); 5388 + seq_puts(m, "common_stacktrace:\n"); 5402 5389 hist_trigger_stacktrace_print(m, 5403 5390 key + key_field->offset, 5404 5391 HIST_STACKTRACE_DEPTH); ··· 5981 5968 if (field->field) 5982 5969 seq_printf(m, "%s.stacktrace", field->field->name); 5983 5970 else 5984 - seq_puts(m, "stacktrace"); 5971 + seq_puts(m, "common_stacktrace"); 5985 5972 } else 5986 5973 hist_field_print(m, field); 5987 5974 }
+73 -39
kernel/trace/trace_events_user.c
··· 96 96 * these to track enablement sites that are tied to an event. 97 97 */ 98 98 struct user_event_enabler { 99 - struct list_head link; 99 + struct list_head mm_enablers_link; 100 100 struct user_event *event; 101 101 unsigned long addr; 102 102 103 103 /* Track enable bit, flags, etc. Aligned for bitops. */ 104 - unsigned int values; 104 + unsigned long values; 105 105 }; 106 106 107 107 /* Bits 0-5 are for the bit to update upon enable/disable (0-63 allowed) */ ··· 116 116 /* Only duplicate the bit value */ 117 117 #define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK 118 118 119 - #define ENABLE_BITOPS(e) ((unsigned long *)&(e)->values) 119 + #define ENABLE_BITOPS(e) (&(e)->values) 120 + 121 + #define ENABLE_BIT(e) ((int)((e)->values & ENABLE_VAL_BIT_MASK)) 120 122 121 123 /* Used for asynchronous faulting in of pages */ 122 124 struct user_event_enabler_fault { ··· 155 153 #define VALIDATOR_REL (1 << 1) 156 154 157 155 struct user_event_validator { 158 - struct list_head link; 156 + struct list_head user_event_link; 159 157 int offset; 160 158 int flags; 161 159 }; ··· 261 259 262 260 static void user_event_enabler_destroy(struct user_event_enabler *enabler) 263 261 { 264 - list_del_rcu(&enabler->link); 262 + list_del_rcu(&enabler->mm_enablers_link); 265 263 266 264 /* No longer tracking the event via the enabler */ 267 265 refcount_dec(&enabler->event->refcnt); ··· 425 423 426 424 /* Update bit atomically, user tracers must be atomic as well */ 427 425 if (enabler->event && enabler->event->status) 428 - set_bit(enabler->values & ENABLE_VAL_BIT_MASK, ptr); 426 + set_bit(ENABLE_BIT(enabler), ptr); 429 427 else 430 - clear_bit(enabler->values & ENABLE_VAL_BIT_MASK, ptr); 428 + clear_bit(ENABLE_BIT(enabler), ptr); 431 429 432 430 kunmap_local(kaddr); 433 431 unpin_user_pages_dirty_lock(&page, 1, true); ··· 439 437 unsigned long uaddr, unsigned char bit) 440 438 { 441 439 struct user_event_enabler *enabler; 442 - struct user_event_enabler *next; 443 440 444 - list_for_each_entry_safe(enabler, next, &mm->enablers, link) { 445 - if (enabler->addr == uaddr && 446 - (enabler->values & ENABLE_VAL_BIT_MASK) == bit) 441 + list_for_each_entry(enabler, &mm->enablers, mm_enablers_link) { 442 + if (enabler->addr == uaddr && ENABLE_BIT(enabler) == bit) 447 443 return true; 448 444 } 449 445 ··· 451 451 static void user_event_enabler_update(struct user_event *user) 452 452 { 453 453 struct user_event_enabler *enabler; 454 - struct user_event_mm *mm = user_event_mm_get_all(user); 455 454 struct user_event_mm *next; 455 + struct user_event_mm *mm; 456 456 int attempt; 457 + 458 + lockdep_assert_held(&event_mutex); 459 + 460 + /* 461 + * We need to build a one-shot list of all the mms that have an 462 + * enabler for the user_event passed in. This list is only valid 463 + * while holding the event_mutex. The only reason for this is due 464 + * to the global mm list being RCU protected and we use methods 465 + * which can wait (mmap_read_lock and pin_user_pages_remote). 466 + * 467 + * NOTE: user_event_mm_get_all() increments the ref count of each 468 + * mm that is added to the list to prevent removal timing windows. 469 + * We must always put each mm after they are used, which may wait. 
470 + */ 471 + mm = user_event_mm_get_all(user); 457 472 458 473 while (mm) { 459 474 next = mm->next; 460 475 mmap_read_lock(mm->mm); 461 - rcu_read_lock(); 462 476 463 - list_for_each_entry_rcu(enabler, &mm->enablers, link) { 477 + list_for_each_entry(enabler, &mm->enablers, mm_enablers_link) { 464 478 if (enabler->event == user) { 465 479 attempt = 0; 466 480 user_event_enabler_write(mm, enabler, true, &attempt); 467 481 } 468 482 } 469 483 470 - rcu_read_unlock(); 471 484 mmap_read_unlock(mm->mm); 472 485 user_event_mm_put(mm); 473 486 mm = next; ··· 508 495 enabler->values = orig->values & ENABLE_VAL_DUP_MASK; 509 496 510 497 refcount_inc(&enabler->event->refcnt); 511 - list_add_rcu(&enabler->link, &mm->enablers); 498 + 499 + /* Enablers not exposed yet, RCU not required */ 500 + list_add(&enabler->mm_enablers_link, &mm->enablers); 512 501 513 502 return true; 514 503 } ··· 529 514 struct user_event_mm *mm; 530 515 531 516 /* 517 + * We use the mm->next field to build a one-shot list from the global 518 + * RCU protected list. To build this list the event_mutex must be held. 519 + * This lets us build a list without requiring allocs that could fail 520 + * when user based events are most wanted for diagnostics. 521 + */ 522 + lockdep_assert_held(&event_mutex); 523 + 524 + /* 532 525 * We do not want to block fork/exec while enablements are being 533 526 * updated, so we use RCU to walk the current tasks that have used 534 527 * user_events ABI for 1 or more events. Each enabler found in each ··· 548 525 */ 549 526 rcu_read_lock(); 550 527 551 - list_for_each_entry_rcu(mm, &user_event_mms, link) 552 - list_for_each_entry_rcu(enabler, &mm->enablers, link) 528 + list_for_each_entry_rcu(mm, &user_event_mms, mms_link) { 529 + list_for_each_entry_rcu(enabler, &mm->enablers, mm_enablers_link) { 553 530 if (enabler->event == user) { 554 531 mm->next = found; 555 532 found = user_event_mm_get(mm); 556 533 break; 557 534 } 535 + } 536 + } 558 537 559 538 rcu_read_unlock(); 560 539 561 540 return found; 562 541 } 563 542 564 - static struct user_event_mm *user_event_mm_create(struct task_struct *t) 543 + static struct user_event_mm *user_event_mm_alloc(struct task_struct *t) 565 544 { 566 545 struct user_event_mm *user_mm; 567 - unsigned long flags; 568 546 569 547 user_mm = kzalloc(sizeof(*user_mm), GFP_KERNEL_ACCOUNT); 570 548 ··· 576 552 INIT_LIST_HEAD(&user_mm->enablers); 577 553 refcount_set(&user_mm->refcnt, 1); 578 554 refcount_set(&user_mm->tasks, 1); 579 - 580 - spin_lock_irqsave(&user_event_mms_lock, flags); 581 - list_add_rcu(&user_mm->link, &user_event_mms); 582 - spin_unlock_irqrestore(&user_event_mms_lock, flags); 583 - 584 - t->user_event_mm = user_mm; 585 555 586 556 /* 587 557 * The lifetime of the memory descriptor can slightly outlast ··· 590 572 return user_mm; 591 573 } 592 574 575 + static void user_event_mm_attach(struct user_event_mm *user_mm, struct task_struct *t) 576 + { 577 + unsigned long flags; 578 + 579 + spin_lock_irqsave(&user_event_mms_lock, flags); 580 + list_add_rcu(&user_mm->mms_link, &user_event_mms); 581 + spin_unlock_irqrestore(&user_event_mms_lock, flags); 582 + 583 + t->user_event_mm = user_mm; 584 + } 585 + 593 586 static struct user_event_mm *current_user_event_mm(void) 594 587 { 595 588 struct user_event_mm *user_mm = current->user_event_mm; ··· 608 579 if (user_mm) 609 580 goto inc; 610 581 611 - user_mm = user_event_mm_create(current); 582 + user_mm = user_event_mm_alloc(current); 612 583 613 584 if (!user_mm) 614 585 goto error; 586 + 587 + 
user_event_mm_attach(user_mm, current); 615 588 inc: 616 589 refcount_inc(&user_mm->refcnt); 617 590 error: ··· 624 593 { 625 594 struct user_event_enabler *enabler, *next; 626 595 627 - list_for_each_entry_safe(enabler, next, &mm->enablers, link) 596 + list_for_each_entry_safe(enabler, next, &mm->enablers, mm_enablers_link) 628 597 user_event_enabler_destroy(enabler); 629 598 630 599 mmdrop(mm->mm); ··· 661 630 662 631 /* Remove the mm from the list, so it can no longer be enabled */ 663 632 spin_lock_irqsave(&user_event_mms_lock, flags); 664 - list_del_rcu(&mm->link); 633 + list_del_rcu(&mm->mms_link); 665 634 spin_unlock_irqrestore(&user_event_mms_lock, flags); 666 635 667 636 /* ··· 701 670 702 671 void user_event_mm_dup(struct task_struct *t, struct user_event_mm *old_mm) 703 672 { 704 - struct user_event_mm *mm = user_event_mm_create(t); 673 + struct user_event_mm *mm = user_event_mm_alloc(t); 705 674 struct user_event_enabler *enabler; 706 675 707 676 if (!mm) ··· 709 678 710 679 rcu_read_lock(); 711 680 712 - list_for_each_entry_rcu(enabler, &old_mm->enablers, link) 681 + list_for_each_entry_rcu(enabler, &old_mm->enablers, mm_enablers_link) { 713 682 if (!user_event_enabler_dup(enabler, mm)) 714 683 goto error; 684 + } 715 685 716 686 rcu_read_unlock(); 717 687 688 + user_event_mm_attach(mm, t); 718 689 return; 719 690 error: 720 691 rcu_read_unlock(); 721 - user_event_mm_remove(t); 692 + user_event_mm_destroy(mm); 722 693 } 723 694 724 695 static bool current_user_event_enabler_exists(unsigned long uaddr, ··· 781 748 */ 782 749 if (!*write_result) { 783 750 refcount_inc(&enabler->event->refcnt); 784 - list_add_rcu(&enabler->link, &user_mm->enablers); 751 + list_add_rcu(&enabler->mm_enablers_link, &user_mm->enablers); 785 752 } 786 753 787 754 mutex_unlock(&event_mutex); ··· 937 904 struct user_event_validator *validator, *next; 938 905 struct list_head *head = &user->validators; 939 906 940 - list_for_each_entry_safe(validator, next, head, link) { 941 - list_del(&validator->link); 907 + list_for_each_entry_safe(validator, next, head, user_event_link) { 908 + list_del(&validator->user_event_link); 942 909 kfree(validator); 943 910 } 944 911 } ··· 992 959 validator->offset = offset; 993 960 994 961 /* Want sequential access when validating */ 995 - list_add_tail(&validator->link, &user->validators); 962 + list_add_tail(&validator->user_event_link, &user->validators); 996 963 997 964 add_field: 998 965 field->type = type; ··· 1382 1349 void *pos, *end = data + len; 1383 1350 u32 loc, offset, size; 1384 1351 1385 - list_for_each_entry(validator, head, link) { 1352 + list_for_each_entry(validator, head, user_event_link) { 1386 1353 pos = data + validator->offset; 1387 1354 1388 1355 /* Already done min_size check, no bounds check here */ ··· 2303 2270 */ 2304 2271 mutex_lock(&event_mutex); 2305 2272 2306 - list_for_each_entry_safe(enabler, next, &mm->enablers, link) 2273 + list_for_each_entry_safe(enabler, next, &mm->enablers, mm_enablers_link) { 2307 2274 if (enabler->addr == reg.disable_addr && 2308 - (enabler->values & ENABLE_VAL_BIT_MASK) == reg.disable_bit) { 2275 + ENABLE_BIT(enabler) == reg.disable_bit) { 2309 2276 set_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler)); 2310 2277 2311 2278 if (!test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler))) ··· 2314 2281 /* Removed at least one */ 2315 2282 ret = 0; 2316 2283 } 2284 + } 2317 2285 2318 2286 mutex_unlock(&event_mutex); 2319 2287
+2
kernel/trace/trace_osnoise.c
··· 1652 1652 osnoise_stop_tracing(); 1653 1653 notify_new_max_latency(diff); 1654 1654 1655 + wake_up_process(tlat->kthread); 1656 + 1655 1657 return HRTIMER_NORESTART; 1656 1658 } 1657 1659 }
+1 -1
kernel/trace/trace_probe.h
··· 308 308 { 309 309 struct trace_probe_event *tpe = trace_probe_event_from_call(call); 310 310 311 - return list_first_entry(&tpe->probes, struct trace_probe, list); 311 + return list_first_entry_or_null(&tpe->probes, struct trace_probe, list); 312 312 } 313 313 314 314 static inline struct list_head *trace_probe_probe_list(struct trace_probe *tp)
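This fix matters because list_first_entry() on an empty list hands back a bogus pointer computed from the list head itself, whereas list_first_entry_or_null() lets the caller test for the empty case. A hedged sketch with an illustrative struct:

#include <linux/list.h>

struct demo_probe {
	struct list_head list;
};

/* Returns NULL when @probes is empty instead of a pointer derived from the head. */
static struct demo_probe *demo_first_probe(struct list_head *probes)
{
	return list_first_entry_or_null(probes, struct demo_probe, list);
}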
+10
kernel/trace/trace_selftest.c
··· 848 848 } 849 849 850 850 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS 851 + /* 852 + * These tests can take some time to run. Make sure on non PREEMPT 853 + * kernels, we do not trigger the softlockup detector. 854 + */ 855 + cond_resched(); 856 + 851 857 tracing_reset_online_cpus(&tr->array_buffer); 852 858 set_graph_array(tr); 853 859 ··· 874 868 (unsigned long)ftrace_stub_direct_tramp); 875 869 if (ret) 876 870 goto out; 871 + 872 + cond_resched(); 877 873 878 874 ret = register_ftrace_graph(&fgraph_ops); 879 875 if (ret) { ··· 898 890 true); 899 891 if (ret) 900 892 goto out; 893 + 894 + cond_resched(); 901 895 902 896 tracing_start(); 903 897
+61 -31
kernel/vhost_task.c
··· 12 12 VHOST_TASK_FLAGS_STOP, 13 13 }; 14 14 15 + struct vhost_task { 16 + bool (*fn)(void *data); 17 + void *data; 18 + struct completion exited; 19 + unsigned long flags; 20 + struct task_struct *task; 21 + }; 22 + 15 23 static int vhost_task_fn(void *data) 16 24 { 17 25 struct vhost_task *vtsk = data; 18 - int ret; 26 + bool dead = false; 19 27 20 - ret = vtsk->fn(vtsk->data); 28 + for (;;) { 29 + bool did_work; 30 + 31 + /* mb paired w/ vhost_task_stop */ 32 + if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags)) 33 + break; 34 + 35 + if (!dead && signal_pending(current)) { 36 + struct ksignal ksig; 37 + /* 38 + * Calling get_signal will block in SIGSTOP, 39 + * or clear fatal_signal_pending, but remember 40 + * what was set. 41 + * 42 + * This thread won't actually exit until all 43 + * of the file descriptors are closed, and 44 + * the release function is called. 45 + */ 46 + dead = get_signal(&ksig); 47 + if (dead) 48 + clear_thread_flag(TIF_SIGPENDING); 49 + } 50 + 51 + did_work = vtsk->fn(vtsk->data); 52 + if (!did_work) { 53 + set_current_state(TASK_INTERRUPTIBLE); 54 + schedule(); 55 + } 56 + } 57 + 21 58 complete(&vtsk->exited); 22 - do_exit(ret); 59 + do_exit(0); 23 60 } 61 + 62 + /** 63 + * vhost_task_wake - wakeup the vhost_task 64 + * @vtsk: vhost_task to wake 65 + * 66 + * wake up the vhost_task worker thread 67 + */ 68 + void vhost_task_wake(struct vhost_task *vtsk) 69 + { 70 + wake_up_process(vtsk->task); 71 + } 72 + EXPORT_SYMBOL_GPL(vhost_task_wake); 24 73 25 74 /** 26 75 * vhost_task_stop - stop a vhost_task 27 76 * @vtsk: vhost_task to stop 28 77 * 29 - * Callers must call vhost_task_should_stop and return from their worker 30 - * function when it returns true; 78 + * vhost_task_fn ensures the worker thread exits after 79 + * VHOST_TASK_FLAGS_SOP becomes true. 31 80 */ 32 81 void vhost_task_stop(struct vhost_task *vtsk) 33 82 { 34 - pid_t pid = vtsk->task->pid; 35 - 36 83 set_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags); 37 - wake_up_process(vtsk->task); 84 + vhost_task_wake(vtsk); 38 85 /* 39 86 * Make sure vhost_task_fn is no longer accessing the vhost_task before 40 - * freeing it below. If userspace crashed or exited without closing, 41 - * then the vhost_task->task could already be marked dead so 42 - * kernel_wait will return early. 87 + * freeing it below. 43 88 */ 44 89 wait_for_completion(&vtsk->exited); 45 - /* 46 - * If we are just closing/removing a device and the parent process is 47 - * not exiting then reap the task. 48 - */ 49 - kernel_wait4(pid, NULL, __WCLONE, NULL); 50 90 kfree(vtsk); 51 91 } 52 92 EXPORT_SYMBOL_GPL(vhost_task_stop); 53 93 54 94 /** 55 - * vhost_task_should_stop - should the vhost task return from the work function 56 - * @vtsk: vhost_task to stop 57 - */ 58 - bool vhost_task_should_stop(struct vhost_task *vtsk) 59 - { 60 - return test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags); 61 - } 62 - EXPORT_SYMBOL_GPL(vhost_task_should_stop); 63 - 64 - /** 65 - * vhost_task_create - create a copy of a process to be used by the kernel 66 - * @fn: thread stack 95 + * vhost_task_create - create a copy of a task to be used by the kernel 96 + * @fn: vhost worker function 67 97 * @arg: data to be passed to fn 68 98 * @name: the thread's name 69 99 * ··· 101 71 * failure. The returned task is inactive, and the caller must fire it up 102 72 * through vhost_task_start(). 
103 73 */ 104 - struct vhost_task *vhost_task_create(int (*fn)(void *), void *arg, 74 + struct vhost_task *vhost_task_create(bool (*fn)(void *), void *arg, 105 75 const char *name) 106 76 { 107 77 struct kernel_clone_args args = { 108 - .flags = CLONE_FS | CLONE_UNTRACED | CLONE_VM, 78 + .flags = CLONE_FS | CLONE_UNTRACED | CLONE_VM | 79 + CLONE_THREAD | CLONE_SIGHAND, 109 80 .exit_signal = 0, 110 81 .fn = vhost_task_fn, 111 82 .name = name, 112 83 .user_worker = 1, 113 84 .no_files = 1, 114 - .ignore_signals = 1, 115 85 }; 116 86 struct vhost_task *vtsk; 117 87 struct task_struct *tsk;
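After this rework the worker callback no longer loops or polls a stop flag itself: it returns a bool meaning "I did work", and vhost_task_fn() takes care of signals, sleeping and the stop bit. A rough usage sketch under that model; the callback and names below are invented, and it assumes the matching declarations (including the new vhost_task_wake()) are exported via <linux/sched/vhost_task.h>:

#include <linux/sched/vhost_task.h>

/* Hypothetical worker: return true if a batch was processed, false to let the task sleep. */
static bool demo_worker_fn(void *data)
{
	/* ... drain one batch of queued work ... */
	return false;
}

static struct vhost_task *demo_worker_start(void *data)
{
	struct vhost_task *vtsk;

	vtsk = vhost_task_create(demo_worker_fn, data, "vhost-demo");
	if (!vtsk)
		return NULL;

	vhost_task_start(vtsk);		/* the task is created inactive */
	return vtsk;
}

/*
 * Producers then call vhost_task_wake(vtsk) whenever new work is queued, and
 * the release path calls vhost_task_stop(vtsk), which sets VHOST_TASK_FLAGS_STOP,
 * wakes the task and waits for it to exit.
 */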
+65 -20
lib/test_firmware.c
··· 45 45 bool sent; 46 46 const struct firmware *fw; 47 47 const char *name; 48 + const char *fw_buf; 48 49 struct completion completion; 49 50 struct task_struct *task; 50 51 struct device *dev; ··· 176 175 177 176 for (i = 0; i < test_fw_config->num_requests; i++) { 178 177 req = &test_fw_config->reqs[i]; 179 - if (req->fw) 178 + if (req->fw) { 179 + if (req->fw_buf) { 180 + kfree_const(req->fw_buf); 181 + req->fw_buf = NULL; 182 + } 180 183 release_firmware(req->fw); 184 + req->fw = NULL; 185 + } 181 186 } 182 187 183 188 vfree(test_fw_config->reqs); ··· 360 353 return len; 361 354 } 362 355 356 + static inline int __test_dev_config_update_bool(const char *buf, size_t size, 357 + bool *cfg) 358 + { 359 + int ret; 360 + 361 + if (kstrtobool(buf, cfg) < 0) 362 + ret = -EINVAL; 363 + else 364 + ret = size; 365 + 366 + return ret; 367 + } 368 + 363 369 static int test_dev_config_update_bool(const char *buf, size_t size, 364 370 bool *cfg) 365 371 { 366 372 int ret; 367 373 368 374 mutex_lock(&test_fw_mutex); 369 - if (kstrtobool(buf, cfg) < 0) 370 - ret = -EINVAL; 371 - else 372 - ret = size; 375 + ret = __test_dev_config_update_bool(buf, size, cfg); 373 376 mutex_unlock(&test_fw_mutex); 374 377 375 378 return ret; ··· 390 373 return snprintf(buf, PAGE_SIZE, "%d\n", val); 391 374 } 392 375 393 - static int test_dev_config_update_size_t(const char *buf, 376 + static int __test_dev_config_update_size_t( 377 + const char *buf, 394 378 size_t size, 395 379 size_t *cfg) 396 380 { ··· 402 384 if (ret) 403 385 return ret; 404 386 405 - mutex_lock(&test_fw_mutex); 406 387 *(size_t *)cfg = new; 407 - mutex_unlock(&test_fw_mutex); 408 388 409 389 /* Always return full write size even if we didn't consume all */ 410 390 return size; ··· 418 402 return snprintf(buf, PAGE_SIZE, "%d\n", val); 419 403 } 420 404 421 - static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg) 405 + static int __test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg) 422 406 { 423 407 u8 val; 424 408 int ret; ··· 427 411 if (ret) 428 412 return ret; 429 413 430 - mutex_lock(&test_fw_mutex); 431 414 *(u8 *)cfg = val; 432 - mutex_unlock(&test_fw_mutex); 433 415 434 416 /* Always return full write size even if we didn't consume all */ 435 417 return size; 418 + } 419 + 420 + static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg) 421 + { 422 + int ret; 423 + 424 + mutex_lock(&test_fw_mutex); 425 + ret = __test_dev_config_update_u8(buf, size, cfg); 426 + mutex_unlock(&test_fw_mutex); 427 + 428 + return ret; 436 429 } 437 430 438 431 static ssize_t test_dev_config_show_u8(char *buf, u8 val) ··· 496 471 mutex_unlock(&test_fw_mutex); 497 472 goto out; 498 473 } 499 - mutex_unlock(&test_fw_mutex); 500 474 501 - rc = test_dev_config_update_u8(buf, count, 502 - &test_fw_config->num_requests); 475 + rc = __test_dev_config_update_u8(buf, count, 476 + &test_fw_config->num_requests); 477 + mutex_unlock(&test_fw_mutex); 503 478 504 479 out: 505 480 return rc; ··· 543 518 mutex_unlock(&test_fw_mutex); 544 519 goto out; 545 520 } 546 - mutex_unlock(&test_fw_mutex); 547 521 548 - rc = test_dev_config_update_size_t(buf, count, 549 - &test_fw_config->buf_size); 522 + rc = __test_dev_config_update_size_t(buf, count, 523 + &test_fw_config->buf_size); 524 + mutex_unlock(&test_fw_mutex); 550 525 551 526 out: 552 527 return rc; ··· 573 548 mutex_unlock(&test_fw_mutex); 574 549 goto out; 575 550 } 576 - mutex_unlock(&test_fw_mutex); 577 551 578 - rc = test_dev_config_update_size_t(buf, count, 579 - 
&test_fw_config->file_offset); 552 + rc = __test_dev_config_update_size_t(buf, count, 553 + &test_fw_config->file_offset); 554 + mutex_unlock(&test_fw_mutex); 580 555 581 556 out: 582 557 return rc; ··· 677 652 678 653 mutex_lock(&test_fw_mutex); 679 654 release_firmware(test_firmware); 655 + if (test_fw_config->reqs) 656 + __test_release_all_firmware(); 680 657 test_firmware = NULL; 681 658 rc = request_firmware(&test_firmware, name, dev); 682 659 if (rc) { ··· 779 752 mutex_lock(&test_fw_mutex); 780 753 release_firmware(test_firmware); 781 754 test_firmware = NULL; 755 + if (test_fw_config->reqs) 756 + __test_release_all_firmware(); 782 757 rc = request_firmware_nowait(THIS_MODULE, 1, name, dev, GFP_KERNEL, 783 758 NULL, trigger_async_request_cb); 784 759 if (rc) { ··· 823 794 824 795 mutex_lock(&test_fw_mutex); 825 796 release_firmware(test_firmware); 797 + if (test_fw_config->reqs) 798 + __test_release_all_firmware(); 826 799 test_firmware = NULL; 827 800 rc = request_firmware_nowait(THIS_MODULE, FW_ACTION_NOUEVENT, name, 828 801 dev, GFP_KERNEL, NULL, ··· 887 856 test_fw_config->buf_size); 888 857 if (!req->fw) 889 858 kfree(test_buf); 859 + else 860 + req->fw_buf = test_buf; 890 861 } else { 891 862 req->rc = test_fw_config->req_firmware(&req->fw, 892 863 req->name, ··· 928 895 929 896 mutex_lock(&test_fw_mutex); 930 897 898 + if (test_fw_config->reqs) { 899 + rc = -EBUSY; 900 + goto out_bail; 901 + } 902 + 931 903 test_fw_config->reqs = 932 904 vzalloc(array3_size(sizeof(struct test_batched_req), 933 905 test_fw_config->num_requests, 2)); ··· 949 911 req->fw = NULL; 950 912 req->idx = i; 951 913 req->name = test_fw_config->name; 914 + req->fw_buf = NULL; 952 915 req->dev = dev; 953 916 init_completion(&req->completion); 954 917 req->task = kthread_run(test_fw_run_batch_request, req, ··· 1032 993 1033 994 mutex_lock(&test_fw_mutex); 1034 995 996 + if (test_fw_config->reqs) { 997 + rc = -EBUSY; 998 + goto out_bail; 999 + } 1000 + 1035 1001 test_fw_config->reqs = 1036 1002 vzalloc(array3_size(sizeof(struct test_batched_req), 1037 1003 test_fw_config->num_requests, 2)); ··· 1054 1010 for (i = 0; i < test_fw_config->num_requests; i++) { 1055 1011 req = &test_fw_config->reqs[i]; 1056 1012 req->name = test_fw_config->name; 1013 + req->fw_buf = NULL; 1057 1014 req->fw = NULL; 1058 1015 req->idx = i; 1059 1016 init_completion(&req->completion);
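The pattern behind the test_firmware changes above is worth calling out: lock-free double-underscore helpers do the raw update, and the sysfs store paths take test_fw_mutex once around both the "requests already in flight" check and the update, instead of dropping and retaking the lock. A generic sketch of that shape, with made-up names:

#include <linux/mutex.h>
#include <linux/kernel.h>
#include <linux/errno.h>

static DEFINE_MUTEX(demo_cfg_mutex);
static u8 demo_num_requests;
static bool demo_busy;

/* Caller must hold demo_cfg_mutex. */
static int __demo_update_u8(const char *buf, size_t size, u8 *cfg)
{
	u8 val;
	int ret;

	ret = kstrtou8(buf, 10, &val);
	if (ret)
		return ret;

	*cfg = val;
	return size;		/* sysfs stores report the consumed length */
}

static ssize_t demo_store(const char *buf, size_t count)
{
	ssize_t rc;

	mutex_lock(&demo_cfg_mutex);
	if (demo_busy)
		rc = -EBUSY;	/* check and update share one critical section */
	else
		rc = __demo_update_u8(buf, count, &demo_num_requests);
	mutex_unlock(&demo_cfg_mutex);

	return rc;
}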
+1
mm/Kconfig.debug
··· 98 98 config PAGE_TABLE_CHECK 99 99 bool "Check for invalid mappings in user page tables" 100 100 depends on ARCH_SUPPORTS_PAGE_TABLE_CHECK 101 + depends on EXCLUSIVE_SYSTEM_RAM 101 102 select PAGE_EXTENSION 102 103 help 103 104 Check that anonymous page is not being mapped twice with read write
+6
mm/page_table_check.c
··· 71 71 72 72 page = pfn_to_page(pfn); 73 73 page_ext = page_ext_get(page); 74 + 75 + BUG_ON(PageSlab(page)); 74 76 anon = PageAnon(page); 75 77 76 78 for (i = 0; i < pgcnt; i++) { ··· 109 107 110 108 page = pfn_to_page(pfn); 111 109 page_ext = page_ext_get(page); 110 + 111 + BUG_ON(PageSlab(page)); 112 112 anon = PageAnon(page); 113 113 114 114 for (i = 0; i < pgcnt; i++) { ··· 136 132 { 137 133 struct page_ext *page_ext; 138 134 unsigned long i; 135 + 136 + BUG_ON(PageSlab(page)); 139 137 140 138 page_ext = page_ext_get(page); 141 139 BUG_ON(!page_ext);
+38 -16
net/core/rtnetlink.c
··· 2385 2385 if (tb[IFLA_BROADCAST] && 2386 2386 nla_len(tb[IFLA_BROADCAST]) < dev->addr_len) 2387 2387 return -EINVAL; 2388 + 2389 + if (tb[IFLA_GSO_MAX_SIZE] && 2390 + nla_get_u32(tb[IFLA_GSO_MAX_SIZE]) > dev->tso_max_size) { 2391 + NL_SET_ERR_MSG(extack, "too big gso_max_size"); 2392 + return -EINVAL; 2393 + } 2394 + 2395 + if (tb[IFLA_GSO_MAX_SEGS] && 2396 + (nla_get_u32(tb[IFLA_GSO_MAX_SEGS]) > GSO_MAX_SEGS || 2397 + nla_get_u32(tb[IFLA_GSO_MAX_SEGS]) > dev->tso_max_segs)) { 2398 + NL_SET_ERR_MSG(extack, "too big gso_max_segs"); 2399 + return -EINVAL; 2400 + } 2401 + 2402 + if (tb[IFLA_GRO_MAX_SIZE] && 2403 + nla_get_u32(tb[IFLA_GRO_MAX_SIZE]) > GRO_MAX_SIZE) { 2404 + NL_SET_ERR_MSG(extack, "too big gro_max_size"); 2405 + return -EINVAL; 2406 + } 2407 + 2408 + if (tb[IFLA_GSO_IPV4_MAX_SIZE] && 2409 + nla_get_u32(tb[IFLA_GSO_IPV4_MAX_SIZE]) > dev->tso_max_size) { 2410 + NL_SET_ERR_MSG(extack, "too big gso_ipv4_max_size"); 2411 + return -EINVAL; 2412 + } 2413 + 2414 + if (tb[IFLA_GRO_IPV4_MAX_SIZE] && 2415 + nla_get_u32(tb[IFLA_GRO_IPV4_MAX_SIZE]) > GRO_MAX_SIZE) { 2416 + NL_SET_ERR_MSG(extack, "too big gro_ipv4_max_size"); 2417 + return -EINVAL; 2418 + } 2388 2419 } 2389 2420 2390 2421 if (tb[IFLA_AF_SPEC]) { ··· 2889 2858 if (tb[IFLA_GSO_MAX_SIZE]) { 2890 2859 u32 max_size = nla_get_u32(tb[IFLA_GSO_MAX_SIZE]); 2891 2860 2892 - if (max_size > dev->tso_max_size) { 2893 - err = -EINVAL; 2894 - goto errout; 2895 - } 2896 - 2897 2861 if (dev->gso_max_size ^ max_size) { 2898 2862 netif_set_gso_max_size(dev, max_size); 2899 2863 status |= DO_SETLINK_MODIFIED; ··· 2897 2871 2898 2872 if (tb[IFLA_GSO_MAX_SEGS]) { 2899 2873 u32 max_segs = nla_get_u32(tb[IFLA_GSO_MAX_SEGS]); 2900 - 2901 - if (max_segs > GSO_MAX_SEGS || max_segs > dev->tso_max_segs) { 2902 - err = -EINVAL; 2903 - goto errout; 2904 - } 2905 2874 2906 2875 if (dev->gso_max_segs ^ max_segs) { 2907 2876 netif_set_gso_max_segs(dev, max_segs); ··· 2915 2894 2916 2895 if (tb[IFLA_GSO_IPV4_MAX_SIZE]) { 2917 2896 u32 max_size = nla_get_u32(tb[IFLA_GSO_IPV4_MAX_SIZE]); 2918 - 2919 - if (max_size > dev->tso_max_size) { 2920 - err = -EINVAL; 2921 - goto errout; 2922 - } 2923 2897 2924 2898 if (dev->gso_ipv4_max_size ^ max_size) { 2925 2899 netif_set_gso_ipv4_max_size(dev, max_size); ··· 3301 3285 struct net_device *dev; 3302 3286 unsigned int num_tx_queues = 1; 3303 3287 unsigned int num_rx_queues = 1; 3288 + int err; 3304 3289 3305 3290 if (tb[IFLA_NUM_TX_QUEUES]) 3306 3291 num_tx_queues = nla_get_u32(tb[IFLA_NUM_TX_QUEUES]); ··· 3337 3320 if (!dev) 3338 3321 return ERR_PTR(-ENOMEM); 3339 3322 3323 + err = validate_linkmsg(dev, tb, extack); 3324 + if (err < 0) { 3325 + free_netdev(dev); 3326 + return ERR_PTR(err); 3327 + } 3328 + 3340 3329 dev_net_set(dev, net); 3341 3330 dev->rtnl_link_ops = ops; 3342 3331 dev->rtnl_link_state = RTNL_LINK_INITIALIZING; 3343 3332 3344 3333 if (tb[IFLA_MTU]) { 3345 3334 u32 mtu = nla_get_u32(tb[IFLA_MTU]); 3346 - int err; 3347 3335 3348 3336 err = dev_validate_mtu(dev, mtu, extack); 3349 3337 if (err) {
+1 -1
net/core/sock.c
··· 2381 2381 { 2382 2382 u32 max_segs = 1; 2383 2383 2384 - sk_dst_set(sk, dst); 2385 2384 sk->sk_route_caps = dst->dev->features; 2386 2385 if (sk_is_tcp(sk)) 2387 2386 sk->sk_route_caps |= NETIF_F_GSO; ··· 2399 2400 } 2400 2401 } 2401 2402 sk->sk_gso_max_segs = max_segs; 2403 + sk_dst_set(sk, dst); 2402 2404 } 2403 2405 EXPORT_SYMBOL_GPL(sk_setup_caps); 2404 2406
+2
net/ipv4/af_inet.c
··· 586 586 587 587 add_wait_queue(sk_sleep(sk), &wait); 588 588 sk->sk_write_pending += writebias; 589 + sk->sk_wait_pending++; 589 590 590 591 /* Basic assumption: if someone sets sk->sk_err, he _must_ 591 592 * change state of the socket from TCP_SYN_*. ··· 602 601 } 603 602 remove_wait_queue(sk_sleep(sk), &wait); 604 603 sk->sk_write_pending -= writebias; 604 + sk->sk_wait_pending--; 605 605 return timeo; 606 606 } 607 607
+1
net/ipv4/inet_connection_sock.c
··· 1142 1142 if (newsk) { 1143 1143 struct inet_connection_sock *newicsk = inet_csk(newsk); 1144 1144 1145 + newsk->sk_wait_pending = 0; 1145 1146 inet_sk_set_state(newsk, TCP_SYN_RECV); 1146 1147 newicsk->icsk_bind_hash = NULL; 1147 1148 newicsk->icsk_bind2_hash = NULL;
+8 -1
net/ipv4/tcp.c
··· 3081 3081 int old_state = sk->sk_state; 3082 3082 u32 seq; 3083 3083 3084 + /* Deny disconnect if other threads are blocked in sk_wait_event() 3085 + * or inet_wait_for_connect(). 3086 + */ 3087 + if (sk->sk_wait_pending) 3088 + return -EBUSY; 3089 + 3084 3090 if (old_state != TCP_CLOSE) 3085 3091 tcp_set_state(sk, TCP_CLOSE); 3086 3092 ··· 4078 4072 switch (optname) { 4079 4073 case TCP_MAXSEG: 4080 4074 val = tp->mss_cache; 4081 - if (!val && ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) 4075 + if (tp->rx_opt.user_mss && 4076 + ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) 4082 4077 val = tp->rx_opt.user_mss; 4083 4078 if (tp->repair) 4084 4079 val = tp->rx_opt.mss_clamp;
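Taken together with the af_inet.c and inet_connection_sock.c hunks above, the new sk_wait_pending counter is a simple contract: anyone about to park on the socket bumps it under the socket lock, and tcp_disconnect() now refuses to tear the socket down while a waiter is still registered. A condensed, illustrative sketch of that contract:

#include <net/sock.h>
#include <linux/errno.h>

/* Called with the socket locked, before releasing it to sleep (cf. inet_wait_for_connect()). */
static void demo_register_waiter(struct sock *sk)
{
	sk->sk_wait_pending++;
}

static void demo_unregister_waiter(struct sock *sk)
{
	sk->sk_wait_pending--;
}

/* Mirrors the new guard at the top of tcp_disconnect(). */
static int demo_try_disconnect(struct sock *sk)
{
	if (sk->sk_wait_pending)
		return -EBUSY;
	/* ... proceed with the actual teardown ... */
	return 0;
}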
+1 -1
net/ipv4/tcp_input.c
··· 4530 4530 } 4531 4531 } 4532 4532 4533 - static void tcp_sack_compress_send_ack(struct sock *sk) 4533 + void tcp_sack_compress_send_ack(struct sock *sk) 4534 4534 { 4535 4535 struct tcp_sock *tp = tcp_sk(sk); 4536 4536
+13 -3
net/ipv4/tcp_timer.c
··· 290 290 void tcp_delack_timer_handler(struct sock *sk) 291 291 { 292 292 struct inet_connection_sock *icsk = inet_csk(sk); 293 + struct tcp_sock *tp = tcp_sk(sk); 293 294 294 - if (((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) || 295 - !(icsk->icsk_ack.pending & ICSK_ACK_TIMER)) 295 + if ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) 296 + return; 297 + 298 + /* Handling the sack compression case */ 299 + if (tp->compressed_ack) { 300 + tcp_mstamp_refresh(tp); 301 + tcp_sack_compress_send_ack(sk); 302 + return; 303 + } 304 + 305 + if (!(icsk->icsk_ack.pending & ICSK_ACK_TIMER)) 296 306 return; 297 307 298 308 if (time_after(icsk->icsk_ack.timeout, jiffies)) { ··· 322 312 inet_csk_exit_pingpong_mode(sk); 323 313 icsk->icsk_ack.ato = TCP_ATO_MIN; 324 314 } 325 - tcp_mstamp_refresh(tcp_sk(sk)); 315 + tcp_mstamp_refresh(tp); 326 316 tcp_send_ack(sk); 327 317 __NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKS); 328 318 }
+78 -62
net/mptcp/protocol.c
··· 90 90 if (err) 91 91 return err; 92 92 93 - msk->first = ssock->sk; 94 - msk->subflow = ssock; 93 + WRITE_ONCE(msk->first, ssock->sk); 94 + WRITE_ONCE(msk->subflow, ssock); 95 95 subflow = mptcp_subflow_ctx(ssock->sk); 96 96 list_add(&subflow->node, &msk->conn_list); 97 97 sock_hold(ssock->sk); ··· 603 603 WRITE_ONCE(msk->ack_seq, msk->ack_seq + 1); 604 604 WRITE_ONCE(msk->rcv_data_fin, 0); 605 605 606 - sk->sk_shutdown |= RCV_SHUTDOWN; 606 + WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN); 607 607 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */ 608 608 609 609 switch (sk->sk_state) { ··· 825 825 mptcp_data_unlock(sk); 826 826 } 827 827 828 + static void mptcp_subflow_joined(struct mptcp_sock *msk, struct sock *ssk) 829 + { 830 + mptcp_subflow_ctx(ssk)->map_seq = READ_ONCE(msk->ack_seq); 831 + WRITE_ONCE(msk->allow_infinite_fallback, false); 832 + mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC); 833 + } 834 + 828 835 static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk) 829 836 { 830 837 struct sock *sk = (struct sock *)msk; ··· 846 839 mptcp_sock_graft(ssk, sk->sk_socket); 847 840 848 841 mptcp_sockopt_sync_locked(msk, ssk); 842 + mptcp_subflow_joined(msk, ssk); 849 843 return true; 850 844 } 851 845 ··· 918 910 /* hopefully temporary hack: propagate shutdown status 919 911 * to msk, when all subflows agree on it 920 912 */ 921 - sk->sk_shutdown |= RCV_SHUTDOWN; 913 + WRITE_ONCE(sk->sk_shutdown, sk->sk_shutdown | RCV_SHUTDOWN); 922 914 923 915 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */ 924 916 sk->sk_data_ready(sk); ··· 1710 1702 1711 1703 lock_sock(ssk); 1712 1704 msg->msg_flags |= MSG_DONTWAIT; 1713 - msk->connect_flags = O_NONBLOCK; 1714 1705 msk->fastopening = 1; 1715 1706 ret = tcp_sendmsg_fastopen(ssk, msg, copied_syn, len, NULL); 1716 1707 msk->fastopening = 0; ··· 2290 2283 { 2291 2284 if (msk->subflow) { 2292 2285 iput(SOCK_INODE(msk->subflow)); 2293 - msk->subflow = NULL; 2286 + WRITE_ONCE(msk->subflow, NULL); 2294 2287 } 2295 2288 } 2296 2289 ··· 2427 2420 sock_put(ssk); 2428 2421 2429 2422 if (ssk == msk->first) 2430 - msk->first = NULL; 2423 + WRITE_ONCE(msk->first, NULL); 2431 2424 2432 2425 out: 2433 2426 if (ssk == msk->last_snd) ··· 2534 2527 } 2535 2528 2536 2529 inet_sk_state_store(sk, TCP_CLOSE); 2537 - sk->sk_shutdown = SHUTDOWN_MASK; 2530 + WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK); 2538 2531 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */ 2539 2532 set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags); 2540 2533 ··· 2728 2721 WRITE_ONCE(msk->rmem_released, 0); 2729 2722 msk->timer_ival = TCP_RTO_MIN; 2730 2723 2731 - msk->first = NULL; 2724 + WRITE_ONCE(msk->first, NULL); 2732 2725 inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss; 2733 2726 WRITE_ONCE(msk->csum_enabled, mptcp_is_checksum_enabled(sock_net(sk))); 2734 2727 WRITE_ONCE(msk->allow_infinite_fallback, true); ··· 2966 2959 bool do_cancel_work = false; 2967 2960 int subflows_alive = 0; 2968 2961 2969 - sk->sk_shutdown = SHUTDOWN_MASK; 2962 + WRITE_ONCE(sk->sk_shutdown, SHUTDOWN_MASK); 2970 2963 2971 2964 if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) { 2972 2965 mptcp_listen_inuse_dec(sk); ··· 3046 3039 sock_put(sk); 3047 3040 } 3048 3041 3049 - void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk) 3042 + static void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk) 3050 3043 { 3051 3044 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 3052 3045 const struct ipv6_pinfo *ssk6 = inet6_sk(ssk); ··· 3109 3102 
mptcp_pm_data_reset(msk); 3110 3103 mptcp_ca_reset(sk); 3111 3104 3112 - sk->sk_shutdown = 0; 3105 + WRITE_ONCE(sk->sk_shutdown, 0); 3113 3106 sk_error_report(sk); 3114 3107 return 0; 3115 3108 } ··· 3123 3116 } 3124 3117 #endif 3125 3118 3126 - struct sock *mptcp_sk_clone(const struct sock *sk, 3127 - const struct mptcp_options_received *mp_opt, 3128 - struct request_sock *req) 3119 + struct sock *mptcp_sk_clone_init(const struct sock *sk, 3120 + const struct mptcp_options_received *mp_opt, 3121 + struct sock *ssk, 3122 + struct request_sock *req) 3129 3123 { 3130 3124 struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req); 3131 3125 struct sock *nsk = sk_clone_lock(sk, GFP_ATOMIC); ··· 3145 3137 msk = mptcp_sk(nsk); 3146 3138 msk->local_key = subflow_req->local_key; 3147 3139 msk->token = subflow_req->token; 3148 - msk->subflow = NULL; 3140 + WRITE_ONCE(msk->subflow, NULL); 3149 3141 msk->in_accept_queue = 1; 3150 3142 WRITE_ONCE(msk->fully_established, false); 3151 3143 if (mp_opt->suboptions & OPTION_MPTCP_CSUMREQD) ··· 3158 3150 msk->setsockopt_seq = mptcp_sk(sk)->setsockopt_seq; 3159 3151 3160 3152 sock_reset_flag(nsk, SOCK_RCU_FREE); 3161 - /* will be fully established after successful MPC subflow creation */ 3162 - inet_sk_state_store(nsk, TCP_SYN_RECV); 3163 - 3164 3153 security_inet_csk_clone(nsk, req); 3154 + 3155 + /* this can't race with mptcp_close(), as the msk is 3156 + * not yet exposted to user-space 3157 + */ 3158 + inet_sk_state_store(nsk, TCP_ESTABLISHED); 3159 + 3160 + /* The msk maintain a ref to each subflow in the connections list */ 3161 + WRITE_ONCE(msk->first, ssk); 3162 + list_add(&mptcp_subflow_ctx(ssk)->node, &msk->conn_list); 3163 + sock_hold(ssk); 3164 + 3165 + /* new mpc subflow takes ownership of the newly 3166 + * created mptcp socket 3167 + */ 3168 + mptcp_token_accept(subflow_req, msk); 3169 + 3170 + /* set msk addresses early to ensure mptcp_pm_get_local_id() 3171 + * uses the correct data 3172 + */ 3173 + mptcp_copy_inaddrs(nsk, ssk); 3174 + mptcp_propagate_sndbuf(nsk, ssk); 3175 + 3176 + mptcp_rcv_space_init(msk, ssk); 3165 3177 bh_unlock_sock(nsk); 3166 3178 3167 3179 /* note: the newly allocated socket refcount is 2 now */ ··· 3213 3185 struct socket *listener; 3214 3186 struct sock *newsk; 3215 3187 3216 - listener = msk->subflow; 3188 + listener = READ_ONCE(msk->subflow); 3217 3189 if (WARN_ON_ONCE(!listener)) { 3218 3190 *err = -EINVAL; 3219 3191 return NULL; ··· 3493 3465 return false; 3494 3466 } 3495 3467 3496 - if (!list_empty(&subflow->node)) 3497 - goto out; 3468 + /* active subflow, already present inside the conn_list */ 3469 + if (!list_empty(&subflow->node)) { 3470 + mptcp_subflow_joined(msk, ssk); 3471 + return true; 3472 + } 3498 3473 3499 3474 if (!mptcp_pm_allow_new_subflow(msk)) 3500 3475 goto err_prohibited; 3501 3476 3502 - /* active connections are already on conn_list. 3503 - * If we can't acquire msk socket lock here, let the release callback 3477 + /* If we can't acquire msk socket lock here, let the release callback 3504 3478 * handle it 3505 3479 */ 3506 3480 mptcp_data_lock(parent); ··· 3525 3495 return false; 3526 3496 } 3527 3497 3528 - subflow->map_seq = READ_ONCE(msk->ack_seq); 3529 - WRITE_ONCE(msk->allow_infinite_fallback, false); 3530 - 3531 - out: 3532 - mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC); 3533 3498 return true; 3534 3499 } 3535 3500 ··· 3642 3617 * acquired the subflow socket lock, too. 
3643 3618 */ 3644 3619 if (msk->fastopening) 3645 - err = __inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags, 1); 3620 + err = __inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK, 1); 3646 3621 else 3647 - err = inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags); 3622 + err = inet_stream_connect(ssock, uaddr, addr_len, O_NONBLOCK); 3648 3623 inet_sk(sk)->defer_connect = inet_sk(ssock->sk)->defer_connect; 3649 3624 3650 3625 /* on successful connect, the msk state will be moved to established by ··· 3657 3632 3658 3633 mptcp_copy_inaddrs(sk, ssock->sk); 3659 3634 3660 - /* unblocking connect, mptcp-level inet_stream_connect will error out 3661 - * without changing the socket state, update it here. 3635 + /* silence EINPROGRESS and let the caller inet_stream_connect 3636 + * handle the connection in progress 3662 3637 */ 3663 - if (err == -EINPROGRESS) 3664 - sk->sk_socket->state = ssock->state; 3665 - return err; 3638 + return 0; 3666 3639 } 3667 3640 3668 3641 static struct proto mptcp_prot = { ··· 3719 3696 return err; 3720 3697 } 3721 3698 3722 - static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr, 3723 - int addr_len, int flags) 3724 - { 3725 - int ret; 3726 - 3727 - lock_sock(sock->sk); 3728 - mptcp_sk(sock->sk)->connect_flags = flags; 3729 - ret = __inet_stream_connect(sock, uaddr, addr_len, flags, 0); 3730 - release_sock(sock->sk); 3731 - return ret; 3732 - } 3733 - 3734 3699 static int mptcp_listen(struct socket *sock, int backlog) 3735 3700 { 3736 3701 struct mptcp_sock *msk = mptcp_sk(sock->sk); ··· 3762 3751 3763 3752 pr_debug("msk=%p", msk); 3764 3753 3765 - /* buggy applications can call accept on socket states other then LISTEN 3754 + /* Buggy applications can call accept on socket states other then LISTEN 3766 3755 * but no need to allocate the first subflow just to error out. 
3767 3756 */ 3768 - ssock = msk->subflow; 3757 + ssock = READ_ONCE(msk->subflow); 3769 3758 if (!ssock) 3770 3759 return -EINVAL; 3771 3760 ··· 3811 3800 { 3812 3801 struct sock *sk = (struct sock *)msk; 3813 3802 3814 - if (unlikely(sk->sk_shutdown & SEND_SHUTDOWN)) 3815 - return EPOLLOUT | EPOLLWRNORM; 3816 - 3817 3803 if (sk_stream_is_writeable(sk)) 3818 3804 return EPOLLOUT | EPOLLWRNORM; 3819 3805 ··· 3828 3820 struct sock *sk = sock->sk; 3829 3821 struct mptcp_sock *msk; 3830 3822 __poll_t mask = 0; 3823 + u8 shutdown; 3831 3824 int state; 3832 3825 3833 3826 msk = mptcp_sk(sk); ··· 3837 3828 state = inet_sk_state_load(sk); 3838 3829 pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags); 3839 3830 if (state == TCP_LISTEN) { 3840 - if (WARN_ON_ONCE(!msk->subflow || !msk->subflow->sk)) 3831 + struct socket *ssock = READ_ONCE(msk->subflow); 3832 + 3833 + if (WARN_ON_ONCE(!ssock || !ssock->sk)) 3841 3834 return 0; 3842 3835 3843 - return inet_csk_listen_poll(msk->subflow->sk); 3836 + return inet_csk_listen_poll(ssock->sk); 3844 3837 } 3838 + 3839 + shutdown = READ_ONCE(sk->sk_shutdown); 3840 + if (shutdown == SHUTDOWN_MASK || state == TCP_CLOSE) 3841 + mask |= EPOLLHUP; 3842 + if (shutdown & RCV_SHUTDOWN) 3843 + mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP; 3845 3844 3846 3845 if (state != TCP_SYN_SENT && state != TCP_SYN_RECV) { 3847 3846 mask |= mptcp_check_readable(msk); 3848 - mask |= mptcp_check_writeable(msk); 3847 + if (shutdown & SEND_SHUTDOWN) 3848 + mask |= EPOLLOUT | EPOLLWRNORM; 3849 + else 3850 + mask |= mptcp_check_writeable(msk); 3849 3851 } else if (state == TCP_SYN_SENT && inet_sk(sk)->defer_connect) { 3850 3852 /* cf tcp_poll() note about TFO */ 3851 3853 mask |= EPOLLOUT | EPOLLWRNORM; 3852 3854 } 3853 - if (sk->sk_shutdown == SHUTDOWN_MASK || state == TCP_CLOSE) 3854 - mask |= EPOLLHUP; 3855 - if (sk->sk_shutdown & RCV_SHUTDOWN) 3856 - mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP; 3857 3855 3858 3856 /* This barrier is coupled with smp_wmb() in __mptcp_error_report() */ 3859 3857 smp_rmb(); ··· 3875 3859 .owner = THIS_MODULE, 3876 3860 .release = inet_release, 3877 3861 .bind = mptcp_bind, 3878 - .connect = mptcp_stream_connect, 3862 + .connect = inet_stream_connect, 3879 3863 .socketpair = sock_no_socketpair, 3880 3864 .accept = mptcp_stream_accept, 3881 3865 .getname = inet_getname, ··· 3970 3954 .owner = THIS_MODULE, 3971 3955 .release = inet6_release, 3972 3956 .bind = mptcp_bind, 3973 - .connect = mptcp_stream_connect, 3957 + .connect = inet_stream_connect, 3974 3958 .socketpair = sock_no_socketpair, 3975 3959 .accept = mptcp_stream_accept, 3976 3960 .getname = inet6_getname,
+9 -6
net/mptcp/protocol.h
··· 297 297 nodelay:1, 298 298 fastopening:1, 299 299 in_accept_queue:1; 300 - int connect_flags; 301 300 struct work_struct work; 302 301 struct sk_buff *ooo_last_skb; 303 302 struct rb_root out_of_order_queue; ··· 305 306 struct list_head rtx_queue; 306 307 struct mptcp_data_frag *first_pending; 307 308 struct list_head join_list; 308 - struct socket *subflow; /* outgoing connect/listener/!mp_capable */ 309 + struct socket *subflow; /* outgoing connect/listener/!mp_capable 310 + * The mptcp ops can safely dereference, using suitable 311 + * ONCE annotation, the subflow outside the socket 312 + * lock as such sock is freed after close(). 313 + */ 309 314 struct sock *first; 310 315 struct mptcp_pm_data pm; 311 316 struct { ··· 616 613 int mptcp_allow_join_id0(const struct net *net); 617 614 unsigned int mptcp_stale_loss_cnt(const struct net *net); 618 615 int mptcp_get_pm_type(const struct net *net); 619 - void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk); 620 616 void mptcp_subflow_fully_established(struct mptcp_subflow_context *subflow, 621 617 const struct mptcp_options_received *mp_opt); 622 618 bool __mptcp_retransmit_pending_data(struct sock *sk); ··· 685 683 int __init mptcp_proto_v6_init(void); 686 684 #endif 687 685 688 - struct sock *mptcp_sk_clone(const struct sock *sk, 689 - const struct mptcp_options_received *mp_opt, 690 - struct request_sock *req); 686 + struct sock *mptcp_sk_clone_init(const struct sock *sk, 687 + const struct mptcp_options_received *mp_opt, 688 + struct sock *ssk, 689 + struct request_sock *req); 691 690 void mptcp_get_options(const struct sk_buff *skb, 692 691 struct mptcp_options_received *mp_opt); 693 692
+1 -27
net/mptcp/subflow.c
··· 815 815 ctx->setsockopt_seq = listener->setsockopt_seq; 816 816 817 817 if (ctx->mp_capable) { 818 - ctx->conn = mptcp_sk_clone(listener->conn, &mp_opt, req); 818 + ctx->conn = mptcp_sk_clone_init(listener->conn, &mp_opt, child, req); 819 819 if (!ctx->conn) 820 820 goto fallback; 821 821 822 822 owner = mptcp_sk(ctx->conn); 823 - 824 - /* this can't race with mptcp_close(), as the msk is 825 - * not yet exposted to user-space 826 - */ 827 - inet_sk_state_store(ctx->conn, TCP_ESTABLISHED); 828 - 829 - /* record the newly created socket as the first msk 830 - * subflow, but don't link it yet into conn_list 831 - */ 832 - WRITE_ONCE(owner->first, child); 833 - 834 - /* new mpc subflow takes ownership of the newly 835 - * created mptcp socket 836 - */ 837 - owner->setsockopt_seq = ctx->setsockopt_seq; 838 823 mptcp_pm_new_connection(owner, child, 1); 839 - mptcp_token_accept(subflow_req, owner); 840 - 841 - /* set msk addresses early to ensure mptcp_pm_get_local_id() 842 - * uses the correct data 843 - */ 844 - mptcp_copy_inaddrs(ctx->conn, child); 845 - mptcp_propagate_sndbuf(ctx->conn, child); 846 - 847 - mptcp_rcv_space_init(owner, child); 848 - list_add(&ctx->node, &owner->conn_list); 849 - sock_hold(child); 850 824 851 825 /* with OoO packets we can reach here without ingress 852 826 * mpc option
+1 -1
net/netlink/af_netlink.c
··· 1779 1779 break; 1780 1780 } 1781 1781 } 1782 - if (put_user(ALIGN(nlk->ngroups / 8, sizeof(u32)), optlen)) 1782 + if (put_user(ALIGN(BITS_TO_BYTES(nlk->ngroups), sizeof(u32)), optlen)) 1783 1783 err = -EFAULT; 1784 1784 netlink_unlock_table(); 1785 1785 return err;
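The netlink getsockopt fix is a rounding bug: ngroups / 8 truncates, so group counts that are not a multiple of eight under-report the membership buffer size, while BITS_TO_BYTES() rounds up before the ALIGN() to a u32 boundary. A small illustrative helper plus a worked value:

#include <linux/bitops.h>
#include <linux/kernel.h>
#include <linux/types.h>

/*
 * For ngroups = 33: the old "33 / 8" gives 4 and ALIGN(4, 4) = 4 bytes,
 * dropping the 33rd group; BITS_TO_BYTES(33) = 5 and ALIGN(5, 4) = 8 bytes.
 */
static unsigned int demo_groups_optlen(unsigned int ngroups)
{
	return ALIGN(BITS_TO_BYTES(ngroups), sizeof(u32));
}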
+4 -3
net/netrom/nr_subr.c
··· 123 123 unsigned char *dptr; 124 124 int len, timeout; 125 125 126 - len = NR_NETWORK_LEN + NR_TRANSPORT_LEN; 126 + len = NR_TRANSPORT_LEN; 127 127 128 128 switch (frametype & 0x0F) { 129 129 case NR_CONNREQ: ··· 141 141 return; 142 142 } 143 143 144 - if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL) 144 + skb = alloc_skb(NR_NETWORK_LEN + len, GFP_ATOMIC); 145 + if (!skb) 145 146 return; 146 147 147 148 /* ··· 150 149 */ 151 150 skb_reserve(skb, NR_NETWORK_LEN); 152 151 153 - dptr = skb_put(skb, skb_tailroom(skb)); 152 + dptr = skb_put(skb, len); 154 153 155 154 switch (frametype & 0x0F) { 156 155 case NR_CONNREQ:
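The nr_subr.c change separates headroom from payload: NR_NETWORK_LEN is only reserved for headers pushed later, and exactly len bytes are claimed with skb_put(), rather than skb_put(skb, skb_tailroom(skb)) which also swallowed any allocator padding. A generic reserve/put sketch with illustrative sizes:

#include <linux/skbuff.h>
#include <linux/string.h>

#define DEMO_HEADROOM	20	/* space a lower layer will push in front later */
#define DEMO_PAYLOAD	5	/* bytes this layer actually writes */

static struct sk_buff *demo_build_frame(void)
{
	struct sk_buff *skb;
	unsigned char *dptr;

	skb = alloc_skb(DEMO_HEADROOM + DEMO_PAYLOAD, GFP_ATOMIC);
	if (!skb)
		return NULL;

	skb_reserve(skb, DEMO_HEADROOM);	/* keep the headroom out of the data area */
	dptr = skb_put(skb, DEMO_PAYLOAD);	/* claim only what will be filled in */
	memset(dptr, 0, DEMO_PAYLOAD);

	return skb;
}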
+5 -3
net/packet/af_packet.c
··· 3201 3201 3202 3202 lock_sock(sk); 3203 3203 spin_lock(&po->bind_lock); 3204 + if (!proto) 3205 + proto = po->num; 3206 + 3204 3207 rcu_read_lock(); 3205 3208 3206 3209 if (po->fanout) { ··· 3302 3299 memcpy(name, uaddr->sa_data, sizeof(uaddr->sa_data_min)); 3303 3300 name[sizeof(uaddr->sa_data_min)] = 0; 3304 3301 3305 - return packet_do_bind(sk, name, 0, pkt_sk(sk)->num); 3302 + return packet_do_bind(sk, name, 0, 0); 3306 3303 } 3307 3304 3308 3305 static int packet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len) ··· 3319 3316 if (sll->sll_family != AF_PACKET) 3320 3317 return -EINVAL; 3321 3318 3322 - return packet_do_bind(sk, NULL, sll->sll_ifindex, 3323 - sll->sll_protocol ? : pkt_sk(sk)->num); 3319 + return packet_do_bind(sk, NULL, sll->sll_ifindex, sll->sll_protocol); 3324 3320 } 3325 3321 3326 3322 static struct proto packet_proto = {
+1 -1
net/packet/diag.c
··· 143 143 rp = nlmsg_data(nlh); 144 144 rp->pdiag_family = AF_PACKET; 145 145 rp->pdiag_type = sk->sk_type; 146 - rp->pdiag_num = ntohs(po->num); 146 + rp->pdiag_num = ntohs(READ_ONCE(po->num)); 147 147 rp->pdiag_ino = sk_ino; 148 148 sock_diag_save_cookie(sk, rp->pdiag_cookie); 149 149
+1
net/rxrpc/af_rxrpc.c
··· 980 980 BUILD_BUG_ON(sizeof(struct rxrpc_skb_priv) > sizeof_field(struct sk_buff, cb)); 981 981 982 982 ret = -ENOMEM; 983 + rxrpc_gen_version_string(); 983 984 rxrpc_call_jar = kmem_cache_create( 984 985 "rxrpc_call_jar", sizeof(struct rxrpc_call), 0, 985 986 SLAB_HWCACHE_ALIGN, NULL);
+1
net/rxrpc/ar-internal.h
··· 1068 1068 /* 1069 1069 * local_event.c 1070 1070 */ 1071 + void rxrpc_gen_version_string(void); 1071 1072 void rxrpc_send_version_request(struct rxrpc_local *local, 1072 1073 struct rxrpc_host_header *hdr, 1073 1074 struct sk_buff *skb);
+10 -1
net/rxrpc/local_event.c
··· 16 16 #include <generated/utsrelease.h> 17 17 #include "ar-internal.h" 18 18 19 - static const char rxrpc_version_string[65] = "linux-" UTS_RELEASE " AF_RXRPC"; 19 + static char rxrpc_version_string[65]; // "linux-" UTS_RELEASE " AF_RXRPC"; 20 + 21 + /* 22 + * Generate the VERSION packet string. 23 + */ 24 + void rxrpc_gen_version_string(void) 25 + { 26 + snprintf(rxrpc_version_string, sizeof(rxrpc_version_string), 27 + "linux-%.49s AF_RXRPC", UTS_RELEASE); 28 + } 20 29 21 30 /* 22 31 * Reply to a version request
+3
net/sched/cls_flower.c
··· 1153 1153 if (option_len > sizeof(struct geneve_opt)) 1154 1154 data_len = option_len - sizeof(struct geneve_opt); 1155 1155 1156 + if (key->enc_opts.len > FLOW_DIS_TUN_OPTS_MAX - 4) 1157 + return -ERANGE; 1158 + 1156 1159 opt = (struct geneve_opt *)&key->enc_opts.data[key->enc_opts.len]; 1157 1160 memset(opt, 0xff, option_len); 1158 1161 opt->length = data_len / 4;
+15 -1
net/sched/sch_api.c
··· 1252 1252 sch->parent = parent; 1253 1253 1254 1254 if (handle == TC_H_INGRESS) { 1255 - sch->flags |= TCQ_F_INGRESS; 1255 + if (!(sch->flags & TCQ_F_INGRESS)) { 1256 + NL_SET_ERR_MSG(extack, 1257 + "Specified parent ID is reserved for ingress and clsact Qdiscs"); 1258 + err = -EINVAL; 1259 + goto err_out3; 1260 + } 1256 1261 handle = TC_H_MAKE(TC_H_INGRESS, 0); 1257 1262 } else { 1258 1263 if (handle == 0) { ··· 1596 1591 NL_SET_ERR_MSG(extack, "Invalid qdisc name"); 1597 1592 return -EINVAL; 1598 1593 } 1594 + if (q->flags & TCQ_F_INGRESS) { 1595 + NL_SET_ERR_MSG(extack, 1596 + "Cannot regraft ingress or clsact Qdiscs"); 1597 + return -EINVAL; 1598 + } 1599 1599 if (q == p || 1600 1600 (p && check_loop(q, p, 0))) { 1601 1601 NL_SET_ERR_MSG(extack, "Qdisc parent/child loop detected"); 1602 1602 return -ELOOP; 1603 + } 1604 + if (clid == TC_H_INGRESS) { 1605 + NL_SET_ERR_MSG(extack, "Ingress cannot graft directly"); 1606 + return -EINVAL; 1603 1607 } 1604 1608 qdisc_refcount_inc(q); 1605 1609 goto graft;
+14 -2
net/sched/sch_ingress.c
··· 80 80 struct net_device *dev = qdisc_dev(sch); 81 81 int err; 82 82 83 + if (sch->parent != TC_H_INGRESS) 84 + return -EOPNOTSUPP; 85 + 83 86 net_inc_ingress_queue(); 84 87 85 88 mini_qdisc_pair_init(&q->miniqp, sch, &dev->miniq_ingress); ··· 103 100 static void ingress_destroy(struct Qdisc *sch) 104 101 { 105 102 struct ingress_sched_data *q = qdisc_priv(sch); 103 + 104 + if (sch->parent != TC_H_INGRESS) 105 + return; 106 106 107 107 tcf_block_put_ext(q->block, sch, &q->block_info); 108 108 net_dec_ingress_queue(); ··· 140 134 .cl_ops = &ingress_class_ops, 141 135 .id = "ingress", 142 136 .priv_size = sizeof(struct ingress_sched_data), 143 - .static_flags = TCQ_F_CPUSTATS, 137 + .static_flags = TCQ_F_INGRESS | TCQ_F_CPUSTATS, 144 138 .init = ingress_init, 145 139 .destroy = ingress_destroy, 146 140 .dump = ingress_dump, ··· 225 219 struct net_device *dev = qdisc_dev(sch); 226 220 int err; 227 221 222 + if (sch->parent != TC_H_CLSACT) 223 + return -EOPNOTSUPP; 224 + 228 225 net_inc_ingress_queue(); 229 226 net_inc_egress_queue(); 230 227 ··· 257 248 { 258 249 struct clsact_sched_data *q = qdisc_priv(sch); 259 250 251 + if (sch->parent != TC_H_CLSACT) 252 + return; 253 + 260 254 tcf_block_put_ext(q->egress_block, sch, &q->egress_block_info); 261 255 tcf_block_put_ext(q->ingress_block, sch, &q->ingress_block_info); 262 256 ··· 281 269 .cl_ops = &clsact_class_ops, 282 270 .id = "clsact", 283 271 .priv_size = sizeof(struct clsact_sched_data), 284 - .static_flags = TCQ_F_CPUSTATS, 272 + .static_flags = TCQ_F_INGRESS | TCQ_F_CPUSTATS, 285 273 .init = clsact_init, 286 274 .destroy = clsact_destroy, 287 275 .dump = ingress_dump,
+6 -3
net/smc/smc_llc.c
··· 578 578 { 579 579 struct smc_buf_desc *buf_next; 580 580 581 - if (!buf_pos || list_is_last(&buf_pos->list, &lgr->rmbs[*buf_lst])) { 581 + if (!buf_pos) 582 + return _smc_llc_get_next_rmb(lgr, buf_lst); 583 + 584 + if (list_is_last(&buf_pos->list, &lgr->rmbs[*buf_lst])) { 582 585 (*buf_lst)++; 583 586 return _smc_llc_get_next_rmb(lgr, buf_lst); 584 587 } ··· 617 614 goto out; 618 615 buf_pos = smc_llc_get_first_rmb(lgr, &buf_lst); 619 616 for (i = 0; i < ext->num_rkeys; i++) { 617 + while (buf_pos && !(buf_pos)->used) 618 + buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos); 620 619 if (!buf_pos) 621 620 break; 622 621 rmb = buf_pos; ··· 628 623 cpu_to_be64((uintptr_t)rmb->cpu_addr) : 629 624 cpu_to_be64((u64)sg_dma_address(rmb->sgt[lnk_idx].sgl)); 630 625 buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos); 631 - while (buf_pos && !(buf_pos)->used) 632 - buf_pos = smc_llc_get_next_rmb(lgr, &buf_lst, buf_pos); 633 626 } 634 627 len += i * sizeof(ext->rt[0]); 635 628 out:
+6 -18
net/sunrpc/svcsock.c
··· 1480 1480 return svsk; 1481 1481 } 1482 1482 1483 - bool svc_alien_sock(struct net *net, int fd) 1484 - { 1485 - int err; 1486 - struct socket *sock = sockfd_lookup(fd, &err); 1487 - bool ret = false; 1488 - 1489 - if (!sock) 1490 - goto out; 1491 - if (sock_net(sock->sk) != net) 1492 - ret = true; 1493 - sockfd_put(sock); 1494 - out: 1495 - return ret; 1496 - } 1497 - EXPORT_SYMBOL_GPL(svc_alien_sock); 1498 - 1499 1483 /** 1500 1484 * svc_addsock - add a listener socket to an RPC service 1501 1485 * @serv: pointer to RPC service to which to add a new listener 1486 + * @net: caller's network namespace 1502 1487 * @fd: file descriptor of the new listener 1503 1488 * @name_return: pointer to buffer to fill in with name of listener 1504 1489 * @len: size of the buffer ··· 1493 1508 * Name is terminated with '\n'. On error, returns a negative errno 1494 1509 * value. 1495 1510 */ 1496 - int svc_addsock(struct svc_serv *serv, const int fd, char *name_return, 1497 - const size_t len, const struct cred *cred) 1511 + int svc_addsock(struct svc_serv *serv, struct net *net, const int fd, 1512 + char *name_return, const size_t len, const struct cred *cred) 1498 1513 { 1499 1514 int err = 0; 1500 1515 struct socket *so = sockfd_lookup(fd, &err); ··· 1505 1520 1506 1521 if (!so) 1507 1522 return err; 1523 + err = -EINVAL; 1524 + if (sock_net(so->sk) != net) 1525 + goto out; 1508 1526 err = -EAFNOSUPPORT; 1509 1527 if ((so->sk->sk_family != PF_INET) && (so->sk->sk_family != PF_INET6)) 1510 1528 goto out;
+3 -1
net/tls/tls_strp.c
··· 20 20 strp->stopped = 1; 21 21 22 22 /* Report an error on the lower socket */ 23 - strp->sk->sk_err = -err; 23 + WRITE_ONCE(strp->sk->sk_err, -err); 24 + /* Paired with smp_rmb() in tcp_poll() */ 25 + smp_wmb(); 24 26 sk_error_report(strp->sk); 25 27 } 26 28
+3 -1
net/tls/tls_sw.c
··· 70 70 { 71 71 WARN_ON_ONCE(err >= 0); 72 72 /* sk->sk_err should contain a positive error code. */ 73 - sk->sk_err = -err; 73 + WRITE_ONCE(sk->sk_err, -err); 74 + /* Paired with smp_rmb() in tcp_poll() */ 75 + smp_wmb(); 74 76 sk_error_report(sk); 75 77 } 76 78
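(Editor's note on the two tls hunks above: both publish the error with WRITE_ONCE() followed by smp_wmb(), which the added comments say pairs with the smp_rmb() in tcp_poll(). As a rough userspace analogy only — C11 release/acquire, not the kernel primitives, and every name below is invented — the same publish pattern looks like this.)

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;            /* data written before the "error" flag */
static atomic_int sk_err_like; /* stands in for sk->sk_err             */

static void *writer(void *arg)
{
	(void)arg;
	payload = 42;                               /* plain store          */
	atomic_store_explicit(&sk_err_like, 1,
			      memory_order_release); /* order store, publish */
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, writer, NULL);

	/* Reader: the acquire load pairs with the release store above. */
	while (atomic_load_explicit(&sk_err_like, memory_order_acquire) == 0)
		;				     /* spin until published */

	printf("error published, payload = %d\n", payload); /* always 42 */

	pthread_join(t, NULL);
	return 0;
}

Compile with -pthread: once the reader observes the flag via the acquire load, it is guaranteed to also observe the payload store that preceded the release, which is the property the WRITE_ONCE()/smp_wmb() change is after.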
+5 -1
security/selinux/Makefile
··· 26 26 cmd_flask = $< $(obj)/flask.h $(obj)/av_permissions.h 27 27 28 28 targets += flask.h av_permissions.h 29 - $(obj)/flask.h $(obj)/av_permissions.h &: scripts/selinux/genheaders/genheaders FORCE 29 + # once make >= 4.3 is required, we can use grouped targets in the rule below, 30 + # which basically involves adding both headers and a '&' before the colon, see 31 + # the example below: 32 + # $(obj)/flask.h $(obj)/av_permissions.h &: scripts/selinux/... 33 + $(obj)/flask.h: scripts/selinux/genheaders/genheaders FORCE 30 34 $(call if_changed,flask)
-13
tools/include/linux/coresight-pmu.h
··· 21 21 */ 22 22 #define CORESIGHT_LEGACY_CPU_TRACE_ID(cpu) (0x10 + (cpu * 2)) 23 23 24 - /* CoreSight trace ID is currently the bottom 7 bits of the value */ 25 - #define CORESIGHT_TRACE_ID_VAL_MASK GENMASK(6, 0) 26 - 27 - /* 28 - * perf record will set the legacy meta data values as unused initially. 29 - * This allows perf report to manage the decoders created when dynamic 30 - * allocation in operation. 31 - */ 32 - #define CORESIGHT_TRACE_ID_UNUSED_FLAG BIT(31) 33 - 34 - /* Value to set for unused trace ID values */ 35 - #define CORESIGHT_TRACE_ID_UNUSED_VAL 0x7F 36 - 37 24 /* 38 25 * Below are the definition of bit offsets for perf option, and works as 39 26 * arbitrary values for all ETM versions.
+1
tools/include/uapi/linux/in.h
··· 163 163 #define IP_MULTICAST_ALL 49 164 164 #define IP_UNICAST_IF 50 165 165 #define IP_LOCAL_PORT_RANGE 51 166 + #define IP_PROTOCOL 52 166 167 167 168 #define MCAST_EXCLUDE 0 168 169 #define MCAST_INCLUDE 1
+3 -2
tools/net/ynl/lib/ynl.py
··· 591 591 print('Unexpected message: ' + repr(gm)) 592 592 continue 593 593 594 - rsp.append(self._decode(gm.raw_attrs, op.attr_set.name) 595 - | gm.fixed_header_attrs) 594 + rsp_msg = self._decode(gm.raw_attrs, op.attr_set.name) 595 + rsp_msg.update(gm.fixed_header_attrs) 596 + rsp.append(rsp_msg) 596 597 597 598 if not rsp: 598 599 return None
+1
tools/perf/Makefile.config
··· 927 927 EXTLIBS += -lstdc++ 928 928 CFLAGS += -DHAVE_CXA_DEMANGLE_SUPPORT 929 929 CXXFLAGS += -DHAVE_CXA_DEMANGLE_SUPPORT 930 + $(call detected,CONFIG_CXX_DEMANGLE) 930 931 endif 931 932 ifdef BUILD_NONDISTRO 932 933 ifeq ($(filter -liberty,$(EXTLIBS)),)
+1 -2
tools/perf/Makefile.perf
··· 181 181 HOSTLD ?= ld 182 182 HOSTAR ?= ar 183 183 CLANG ?= clang 184 - LLVM_STRIP ?= llvm-strip 185 184 186 185 PKG_CONFIG = $(CROSS_COMPILE)pkg-config 187 186 ··· 1082 1083 1083 1084 $(SKEL_TMP_OUT)/%.bpf.o: util/bpf_skel/%.bpf.c $(LIBBPF) | $(SKEL_TMP_OUT) 1084 1085 $(QUIET_CLANG)$(CLANG) -g -O2 -target bpf -Wall -Werror $(BPF_INCLUDE) $(TOOLS_UAPI_INCLUDE) \ 1085 - -c $(filter util/bpf_skel/%.bpf.c,$^) -o $@ && $(LLVM_STRIP) -g $@ 1086 + -c $(filter util/bpf_skel/%.bpf.c,$^) -o $@ 1086 1087 1087 1088 $(SKEL_OUT)/%.skel.h: $(SKEL_TMP_OUT)/%.bpf.o | $(BPFTOOL) 1088 1089 $(QUIET_GENSKEL)$(BPFTOOL) gen skeleton $< > $@
+1 -1
tools/perf/arch/arm/util/pmu.c
··· 12 12 #include "arm-spe.h" 13 13 #include "hisi-ptt.h" 14 14 #include "../../../util/pmu.h" 15 - #include "../cs-etm.h" 15 + #include "../../../util/cs-etm.h" 16 16 17 17 struct perf_event_attr 18 18 *perf_pmu__get_default_config(struct perf_pmu *pmu __maybe_unused)
+1 -1
tools/perf/builtin-ftrace.c
··· 1175 1175 OPT_BOOLEAN('b', "use-bpf", &ftrace.target.use_bpf, 1176 1176 "Use BPF to measure function latency"), 1177 1177 #endif 1178 - OPT_BOOLEAN('n', "--use-nsec", &ftrace.use_nsec, 1178 + OPT_BOOLEAN('n', "use-nsec", &ftrace.use_nsec, 1179 1179 "Use nano-second histogram"), 1180 1180 OPT_PARENT(common_options), 1181 1181 };
+1 -1
tools/perf/util/Build
··· 214 214 215 215 perf-$(CONFIG_LIBCAP) += cap.o 216 216 217 - perf-y += demangle-cxx.o 217 + perf-$(CONFIG_CXX_DEMANGLE) += demangle-cxx.o 218 218 perf-y += demangle-ocaml.o 219 219 perf-y += demangle-java.o 220 220 perf-y += demangle-rust.o
+2 -2
tools/perf/util/bpf_skel/sample_filter.bpf.c
··· 25 25 } __attribute__((preserve_access_index)); 26 26 27 27 /* new kernel perf_mem_data_src definition */ 28 - union perf_mem_data_src__new { 28 + union perf_mem_data_src___new { 29 29 __u64 val; 30 30 struct { 31 31 __u64 mem_op:5, /* type of opcode */ ··· 108 108 if (entry->part == 7) 109 109 return kctx->data->data_src.mem_blk; 110 110 if (entry->part == 8) { 111 - union perf_mem_data_src__new *data = (void *)&kctx->data->data_src; 111 + union perf_mem_data_src___new *data = (void *)&kctx->data->data_src; 112 112 113 113 if (bpf_core_field_exists(data->mem_hops)) 114 114 return data->mem_hops;
+13
tools/perf/util/cs-etm.h
··· 227 227 #define INFO_HEADER_SIZE (sizeof(((struct perf_record_auxtrace_info *)0)->type) + \ 228 228 sizeof(((struct perf_record_auxtrace_info *)0)->reserved__)) 229 229 230 + /* CoreSight trace ID is currently the bottom 7 bits of the value */ 231 + #define CORESIGHT_TRACE_ID_VAL_MASK GENMASK(6, 0) 232 + 233 + /* 234 + * perf record will set the legacy meta data values as unused initially. 235 + * This allows perf report to manage the decoders created when dynamic 236 + * allocation in operation. 237 + */ 238 + #define CORESIGHT_TRACE_ID_UNUSED_FLAG BIT(31) 239 + 240 + /* Value to set for unused trace ID values */ 241 + #define CORESIGHT_TRACE_ID_UNUSED_VAL 0x7F 242 + 230 243 int cs_etm__process_auxtrace_info(union perf_event *event, 231 244 struct perf_session *session); 232 245 struct perf_event_attr *cs_etm_get_default_config(struct perf_pmu *pmu);
+1
tools/perf/util/evsel.c
··· 282 282 evsel->bpf_fd = -1; 283 283 INIT_LIST_HEAD(&evsel->config_terms); 284 284 INIT_LIST_HEAD(&evsel->bpf_counter_list); 285 + INIT_LIST_HEAD(&evsel->bpf_filters); 285 286 perf_evsel__object.init(evsel); 286 287 evsel->sample_size = __evsel__sample_size(attr->sample_type); 287 288 evsel__calc_id_pos(evsel);
+2 -4
tools/perf/util/evsel.h
··· 151 151 */ 152 152 struct bpf_counter_ops *bpf_counter_ops; 153 153 154 - union { 155 - struct list_head bpf_counter_list; /* for perf-stat -b */ 156 - struct list_head bpf_filters; /* for perf-record --filter */ 157 - }; 154 + struct list_head bpf_counter_list; /* for perf-stat -b */ 155 + struct list_head bpf_filters; /* for perf-record --filter */ 158 156 159 157 /* for perf-stat --use-bpf */ 160 158 int bperf_leader_prog_fd;
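(Editor's note on the evsel.h hunk above: it stops overlaying bpf_counter_list and bpf_filters in a union, giving each list its own storage — presumably because both can be populated on the same evsel. A small standalone sketch of why the overlay is unsafe, using a simplified stand-in for the kernel's struct list_head; all names below are illustrative only.)

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void init_list(struct list_head *h) { h->next = h->prev = h; }

static void add_node(struct list_head *h, struct list_head *n)
{
	n->next = h->next;
	n->prev = h;
	h->next->prev = n;
	h->next = n;
}

int main(void)
{
	/* Old layout: both list heads share the same bytes. */
	union {
		struct list_head counter_list;
		struct list_head filters;
	} old;
	struct list_head node;

	init_list(&old.counter_list);
	add_node(&old.counter_list, &node);
	init_list(&old.filters);             /* clobbers counter_list */
	printf("old layout, node still linked: %s\n",
	       old.counter_list.next == &node ? "yes" : "no");  /* "no" */

	/* New layout: independent members, no interference. */
	struct { struct list_head counter_list, filters; } new_layout;

	init_list(&new_layout.counter_list);
	add_node(&new_layout.counter_list, &node);
	init_list(&new_layout.filters);
	printf("new layout, node still linked: %s\n",
	       new_layout.counter_list.next == &node ? "yes" : "no"); /* "yes" */
	return 0;
}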
+27
tools/perf/util/symbol-elf.c
··· 31 31 #include <bfd.h> 32 32 #endif 33 33 34 + #if defined(HAVE_LIBBFD_SUPPORT) || defined(HAVE_CPLUS_DEMANGLE_SUPPORT) 35 + #ifndef DMGL_PARAMS 36 + #define DMGL_PARAMS (1 << 0) /* Include function args */ 37 + #define DMGL_ANSI (1 << 1) /* Include const, volatile, etc */ 38 + #endif 39 + #endif 40 + 34 41 #ifndef EM_AARCH64 35 42 #define EM_AARCH64 183 /* ARM 64 bit */ 36 43 #endif ··· 276 269 static bool want_demangle(bool is_kernel_sym) 277 270 { 278 271 return is_kernel_sym ? symbol_conf.demangle_kernel : symbol_conf.demangle; 272 + } 273 + 274 + /* 275 + * Demangle C++ function signature, typically replaced by demangle-cxx.cpp 276 + * version. 277 + */ 278 + __weak char *cxx_demangle_sym(const char *str __maybe_unused, bool params __maybe_unused, 279 + bool modifiers __maybe_unused) 280 + { 281 + #ifdef HAVE_LIBBFD_SUPPORT 282 + int flags = (params ? DMGL_PARAMS : 0) | (modifiers ? DMGL_ANSI : 0); 283 + 284 + return bfd_demangle(NULL, str, flags); 285 + #elif defined(HAVE_CPLUS_DEMANGLE_SUPPORT) 286 + int flags = (params ? DMGL_PARAMS : 0) | (modifiers ? DMGL_ANSI : 0); 287 + 288 + return cplus_demangle(str, flags); 289 + #else 290 + return NULL; 291 + #endif 279 292 } 280 293 281 294 static char *demangle_sym(struct dso *dso, int kmodule, const char *elf_name)
+28 -19
tools/testing/selftests/ftrace/test.d/filter/event-filter-function.tc
··· 9 9 exit_fail 10 10 } 11 11 12 - echo "Test event filter function name" 12 + sample_events() { 13 + echo > trace 14 + echo 1 > events/kmem/kmem_cache_free/enable 15 + echo 1 > tracing_on 16 + ls > /dev/null 17 + echo 0 > tracing_on 18 + echo 0 > events/kmem/kmem_cache_free/enable 19 + } 20 + 13 21 echo 0 > tracing_on 14 22 echo 0 > events/enable 15 - echo > trace 16 - echo 'call_site.function == exit_mmap' > events/kmem/kmem_cache_free/filter 17 - echo 1 > events/kmem/kmem_cache_free/enable 18 - echo 1 > tracing_on 19 - ls > /dev/null 20 - echo 0 > events/kmem/kmem_cache_free/enable 21 23 22 - hitcnt=`grep kmem_cache_free trace| grep exit_mmap | wc -l` 23 - misscnt=`grep kmem_cache_free trace| grep -v exit_mmap | wc -l` 24 + echo "Get the most frequently calling function" 25 + sample_events 26 + 27 + target_func=`cut -d: -f3 trace | sed 's/call_site=\([^+]*\)+0x.*/\1/' | sort | uniq -c | sort | tail -n 1 | sed 's/^[ 0-9]*//'` 28 + if [ -z "$target_func" ]; then 29 + exit_fail 30 + fi 31 + echo > trace 32 + 33 + echo "Test event filter function name" 34 + echo "call_site.function == $target_func" > events/kmem/kmem_cache_free/filter 35 + sample_events 36 + 37 + hitcnt=`grep kmem_cache_free trace| grep $target_func | wc -l` 38 + misscnt=`grep kmem_cache_free trace| grep -v $target_func | wc -l` 24 39 25 40 if [ $hitcnt -eq 0 ]; then 26 41 exit_fail ··· 45 30 exit_fail 46 31 fi 47 32 48 - address=`grep ' exit_mmap$' /proc/kallsyms | cut -d' ' -f1` 33 + address=`grep " ${target_func}\$" /proc/kallsyms | cut -d' ' -f1` 49 34 50 35 echo "Test event filter function address" 51 - echo 0 > tracing_on 52 - echo 0 > events/enable 53 - echo > trace 54 36 echo "call_site.function == 0x$address" > events/kmem/kmem_cache_free/filter 55 - echo 1 > events/kmem/kmem_cache_free/enable 56 - echo 1 > tracing_on 57 - sleep 1 58 - echo 0 > events/kmem/kmem_cache_free/enable 37 + sample_events 59 38 60 - hitcnt=`grep kmem_cache_free trace| grep exit_mmap | wc -l` 61 - misscnt=`grep kmem_cache_free trace| grep -v exit_mmap | wc -l` 39 + hitcnt=`grep kmem_cache_free trace| grep $target_func | wc -l` 40 + misscnt=`grep kmem_cache_free trace| grep -v $target_func | wc -l` 62 41 63 42 if [ $hitcnt -eq 0 ]; then 64 43 exit_fail
+24
tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-stack-legacy.tc
··· 1 + #!/bin/sh 2 + # SPDX-License-Identifier: GPL-2.0 3 + # description: event trigger - test inter-event histogram trigger trace action with dynamic string param (legacy stack) 4 + # requires: set_event synthetic_events events/sched/sched_process_exec/hist "long[] stack' >> synthetic_events":README 5 + 6 + fail() { #msg 7 + echo $1 8 + exit_fail 9 + } 10 + 11 + echo "Test create synthetic event with stack" 12 + 13 + # Test the old stacktrace keyword (for backward compatibility) 14 + echo 's:wake_lat pid_t pid; u64 delta; unsigned long[] stack;' > dynamic_events 15 + echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=stacktrace if prev_state == 1||prev_state == 2' >> events/sched/sched_switch/trigger 16 + echo 'hist:keys=prev_pid:delta=common_timestamp.usecs-$ts,s=$st:onmax($delta).trace(wake_lat,prev_pid,$delta,$s)' >> events/sched/sched_switch/trigger 17 + echo 1 > events/synthetic/wake_lat/enable 18 + sleep 1 19 + 20 + if ! grep -q "=>.*sched" trace; then 21 + fail "Failed to create synthetic event with stack" 22 + fi 23 + 24 + exit 0
+2 -3
tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-stack.tc
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # description: event trigger - test inter-event histogram trigger trace action with dynamic string param 4 - # requires: set_event synthetic_events events/sched/sched_process_exec/hist "long[]' >> synthetic_events":README 4 + # requires: set_event synthetic_events events/sched/sched_process_exec/hist "can be any field, or the special string 'common_stacktrace'":README 5 5 6 6 fail() { #msg 7 7 echo $1 ··· 10 10 11 11 echo "Test create synthetic event with stack" 12 12 13 - 14 13 echo 's:wake_lat pid_t pid; u64 delta; unsigned long[] stack;' > dynamic_events 15 - echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=stacktrace if prev_state == 1||prev_state == 2' >> events/sched/sched_switch/trigger 14 + echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=common_stacktrace if prev_state == 1||prev_state == 2' >> events/sched/sched_switch/trigger 16 15 echo 'hist:keys=prev_pid:delta=common_timestamp.usecs-$ts,s=$st:onmax($delta).trace(wake_lat,prev_pid,$delta,$s)' >> events/sched/sched_switch/trigger 17 16 echo 1 > events/synthetic/wake_lat/enable 18 17 sleep 1
+1
tools/testing/selftests/kvm/Makefile
··· 116 116 TEST_GEN_PROGS_x86_64 += x86_64/amx_test 117 117 TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test 118 118 TEST_GEN_PROGS_x86_64 += x86_64/triple_fault_event_test 119 + TEST_GEN_PROGS_x86_64 += x86_64/recalc_apic_map_test 119 120 TEST_GEN_PROGS_x86_64 += access_tracking_perf_test 120 121 TEST_GEN_PROGS_x86_64 += demand_paging_test 121 122 TEST_GEN_PROGS_x86_64 += dirty_log_test
+74
tools/testing/selftests/kvm/x86_64/recalc_apic_map_test.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Test edge cases and race conditions in kvm_recalculate_apic_map(). 4 + */ 5 + 6 + #include <sys/ioctl.h> 7 + #include <pthread.h> 8 + #include <time.h> 9 + 10 + #include "processor.h" 11 + #include "test_util.h" 12 + #include "kvm_util.h" 13 + #include "apic.h" 14 + 15 + #define TIMEOUT 5 /* seconds */ 16 + 17 + #define LAPIC_DISABLED 0 18 + #define LAPIC_X2APIC (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE) 19 + #define MAX_XAPIC_ID 0xff 20 + 21 + static void *race(void *arg) 22 + { 23 + struct kvm_lapic_state lapic = {}; 24 + struct kvm_vcpu *vcpu = arg; 25 + 26 + while (1) { 27 + /* Trigger kvm_recalculate_apic_map(). */ 28 + vcpu_ioctl(vcpu, KVM_SET_LAPIC, &lapic); 29 + pthread_testcancel(); 30 + } 31 + 32 + return NULL; 33 + } 34 + 35 + int main(void) 36 + { 37 + struct kvm_vcpu *vcpus[KVM_MAX_VCPUS]; 38 + struct kvm_vcpu *vcpuN; 39 + struct kvm_vm *vm; 40 + pthread_t thread; 41 + time_t t; 42 + int i; 43 + 44 + kvm_static_assert(KVM_MAX_VCPUS > MAX_XAPIC_ID); 45 + 46 + /* 47 + * Create the max number of vCPUs supported by selftests so that KVM 48 + * has decent amount of work to do when recalculating the map, i.e. to 49 + * make the problematic window large enough to hit. 50 + */ 51 + vm = vm_create_with_vcpus(KVM_MAX_VCPUS, NULL, vcpus); 52 + 53 + /* 54 + * Enable x2APIC on all vCPUs so that KVM doesn't bail from the recalc 55 + * due to vCPUs having aliased xAPIC IDs (truncated to 8 bits). 56 + */ 57 + for (i = 0; i < KVM_MAX_VCPUS; i++) 58 + vcpu_set_msr(vcpus[i], MSR_IA32_APICBASE, LAPIC_X2APIC); 59 + 60 + ASSERT_EQ(pthread_create(&thread, NULL, race, vcpus[0]), 0); 61 + 62 + vcpuN = vcpus[KVM_MAX_VCPUS - 1]; 63 + for (t = time(NULL) + TIMEOUT; time(NULL) < t;) { 64 + vcpu_set_msr(vcpuN, MSR_IA32_APICBASE, LAPIC_X2APIC); 65 + vcpu_set_msr(vcpuN, MSR_IA32_APICBASE, LAPIC_DISABLED); 66 + } 67 + 68 + ASSERT_EQ(pthread_cancel(thread), 0); 69 + ASSERT_EQ(pthread_join(thread, NULL), 0); 70 + 71 + kvm_vm_free(vm); 72 + 73 + return 0; 74 + }
+1 -1
tools/testing/selftests/net/mptcp/Makefile
··· 9 9 10 10 TEST_GEN_FILES = mptcp_connect pm_nl_ctl mptcp_sockopt mptcp_inq 11 11 12 - TEST_FILES := settings 12 + TEST_FILES := mptcp_lib.sh settings 13 13 14 14 EXTRA_CLEAN := *.pcap 15 15
+4
tools/testing/selftests/net/mptcp/diag.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 4 6 sec=$(date +%s) 5 7 rndh=$(printf %x $sec)-$(mktemp -u XXXXXX) 6 8 ns="ns1-$rndh" ··· 32 30 33 31 ip netns del $ns 34 32 } 33 + 34 + mptcp_lib_check_mptcp 35 35 36 36 ip -Version > /dev/null 2>&1 37 37 if [ $? -ne 0 ];then
+4
tools/testing/selftests/net/mptcp/mptcp_connect.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 4 6 time_start=$(date +%s) 5 7 6 8 optstring="S:R:d:e:l:r:h4cm:f:tC" ··· 142 140 rm -f /tmp/$netns.{nstat,out} 143 141 done 144 142 } 143 + 144 + mptcp_lib_check_mptcp 145 145 146 146 ip -Version > /dev/null 2>&1 147 147 if [ $? -ne 0 ];then
+15 -2
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 10 10 # because it's invoked by variable name, see how the "tests" array is used 11 11 #shellcheck disable=SC2317 12 12 13 + . "$(dirname "${0}")/mptcp_lib.sh" 14 + 13 15 ret=0 14 16 sin="" 15 17 sinfail="" ··· 19 17 cin="" 20 18 cinfail="" 21 19 cinsent="" 20 + tmpfile="" 22 21 cout="" 23 22 capout="" 24 23 ns1="" ··· 139 136 140 137 check_tools() 141 138 { 139 + mptcp_lib_check_mptcp 140 + 142 141 if ! ip -Version &> /dev/null; then 143 142 echo "SKIP: Could not run test without ip tool" 144 143 exit $ksft_skip ··· 180 175 { 181 176 rm -f "$cin" "$cout" "$sinfail" 182 177 rm -f "$sin" "$sout" "$cinsent" "$cinfail" 178 + rm -f "$tmpfile" 183 179 rm -rf $evts_ns1 $evts_ns2 184 180 cleanup_partial 185 181 } ··· 389 383 fail_test 390 384 return 1 391 385 fi 392 - bytes="--bytes=${bytes}" 386 + 387 + # note: BusyBox's "cmp" command doesn't support --bytes 388 + tmpfile=$(mktemp) 389 + head --bytes="$bytes" "$in" > "$tmpfile" 390 + mv "$tmpfile" "$in" 391 + head --bytes="$bytes" "$out" > "$tmpfile" 392 + mv "$tmpfile" "$out" 393 + tmpfile="" 393 394 fi 394 - cmp -l "$in" "$out" ${bytes} | while read -r i a b; do 395 + cmp -l "$in" "$out" | while read -r i a b; do 395 396 local sum=$((0${a} + 0${b})) 396 397 if [ $check_invert -eq 0 ] || [ $sum -ne $((0xff)) ]; then 397 398 echo "[ FAIL ] $what does not match (in, out):"
+40
tools/testing/selftests/net/mptcp/mptcp_lib.sh
··· 1 + #! /bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + readonly KSFT_FAIL=1 5 + readonly KSFT_SKIP=4 6 + 7 + # SELFTESTS_MPTCP_LIB_EXPECT_ALL_FEATURES env var can be set when validating all 8 + # features using the last version of the kernel and the selftests to make sure 9 + # a test is not being skipped by mistake. 10 + mptcp_lib_expect_all_features() { 11 + [ "${SELFTESTS_MPTCP_LIB_EXPECT_ALL_FEATURES:-}" = "1" ] 12 + } 13 + 14 + # $1: msg 15 + mptcp_lib_fail_if_expected_feature() { 16 + if mptcp_lib_expect_all_features; then 17 + echo "ERROR: missing feature: ${*}" 18 + exit ${KSFT_FAIL} 19 + fi 20 + 21 + return 1 22 + } 23 + 24 + # $1: file 25 + mptcp_lib_has_file() { 26 + local f="${1}" 27 + 28 + if [ -f "${f}" ]; then 29 + return 0 30 + fi 31 + 32 + mptcp_lib_fail_if_expected_feature "${f} file not found" 33 + } 34 + 35 + mptcp_lib_check_mptcp() { 36 + if ! mptcp_lib_has_file "/proc/sys/net/mptcp/enabled"; then 37 + echo "SKIP: MPTCP support is not available" 38 + exit ${KSFT_SKIP} 39 + fi 40 + }
+4
tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 4 6 ret=0 5 7 sin="" 6 8 sout="" ··· 85 83 rm -f "$cin" "$cout" 86 84 rm -f "$sin" "$sout" 87 85 } 86 + 87 + mptcp_lib_check_mptcp 88 88 89 89 ip -Version > /dev/null 2>&1 90 90 if [ $? -ne 0 ];then
+4
tools/testing/selftests/net/mptcp/pm_netlink.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 4 6 ksft_skip=4 5 7 ret=0 6 8 ··· 35 33 rm -f $err 36 34 ip netns del $ns1 37 35 } 36 + 37 + mptcp_lib_check_mptcp 38 38 39 39 ip -Version > /dev/null 2>&1 40 40 if [ $? -ne 0 ];then
+4
tools/testing/selftests/net/mptcp/simult_flows.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 4 6 sec=$(date +%s) 5 7 rndh=$(printf %x $sec)-$(mktemp -u XXXXXX) 6 8 ns1="ns1-$rndh" ··· 35 33 ip netns del $netns 36 34 done 37 35 } 36 + 37 + mptcp_lib_check_mptcp 38 38 39 39 ip -Version > /dev/null 2>&1 40 40 if [ $? -ne 0 ];then
+4
tools/testing/selftests/net/mptcp/userspace_pm.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + . "$(dirname "${0}")/mptcp_lib.sh" 5 + 6 + mptcp_lib_check_mptcp 7 + 4 8 ip -Version > /dev/null 2>&1 5 9 if [ $? -ne 0 ];then 6 10 echo "SKIP: Cannot not run test without ip tool"