···
 		size in 512B sectors of the zones of the device, with
 		the eventual exception of the last zone of the device
 		which may be smaller.
+
+What:		/sys/block/<disk>/queue/io_timeout
+Date:		November 2018
+Contact:	Weiping Zhang <zhangweiping@didiglobal.com>
+Description:
+		io_timeout is the request timeout in milliseconds. If a request
+		does not complete in this time then the block driver timeout
+		handler is invoked. That timeout handler can decide to retry
+		the request, to fail it or to start a device recovery strategy.
+9-2
Documentation/ABI/testing/sysfs-block-zram
···
 		statistics (bd_count, bd_reads, bd_writes) in a format
 		similar to block layer statistics file format.
 
+What:		/sys/block/zram<id>/writeback_limit_enable
+Date:		November 2018
+Contact:	Minchan Kim <minchan@kernel.org>
+Description:
+		The writeback_limit_enable file is read-write and enables
+		the writeback_limit feature. "1" means enable the feature.
+		No limit, "0", is the initial state.
+
 What:		/sys/block/zram<id>/writeback_limit
 Date:		November 2018
 Contact:	Minchan Kim <minchan@kernel.org>
 Description:
 		The writeback_limit file is read-write and specifies the maximum
 		amount of writeback ZRAM can do. The limit could be changed
-		in run time and "0" means disable the limit.
-		No limit is the initial state.
+		in run time.
+32
Documentation/ABI/testing/sysfs-class-chromeos
···
+What:		/sys/class/chromeos/<ec-device-name>/flashinfo
+Date:		August 2015
+KernelVersion:	4.2
+Description:
+		Show the EC flash information.
+
+What:		/sys/class/chromeos/<ec-device-name>/kb_wake_angle
+Date:		March 2018
+KernelVersion:	4.17
+Description:
+		Control the keyboard wake lid angle. Values are between
+		0 and 360. This file will also show the keyboard wake lid
+		angle by querying the hardware.
+
+What:		/sys/class/chromeos/<ec-device-name>/reboot
+Date:		August 2015
+KernelVersion:	4.2
+Description:
+		Tell the EC to reboot in various ways. Options are:
+		"cancel": Cancel a pending reboot.
+		"ro": Jump to RO without rebooting.
+		"rw": Jump to RW without rebooting.
+		"cold": Cold reboot.
+		"disable-jump": Disable jump until next reboot.
+		"hibernate": Hibernate the EC.
+		"at-shutdown": Reboot after an AP shutdown.
+
+What:		/sys/class/chromeos/<ec-device-name>/version
+Date:		August 2015
+KernelVersion:	4.2
+Description:
+		Show the information about the EC software and hardware.
···
+What:		/sys/class/chromeos/<ec-device-name>/lightbar/brightness
+Date:		August 2015
+KernelVersion:	4.2
+Description:
+		Writing to this file adjusts the overall brightness of
+		the lightbar, separate from any color intensity. The
+		valid range is 0 (off) to 255 (maximum brightness).
+
+What:		/sys/class/chromeos/<ec-device-name>/lightbar/interval_msec
+Date:		August 2015
+KernelVersion:	4.2
+Description:
+		The lightbar is controlled by an embedded controller (EC),
+		which also manages the keyboard, battery charging, fans,
+		and other system hardware. To prevent unprivileged users
+		from interfering with the other EC functions, the rate at
+		which the lightbar control files can be read or written is
+		limited.
+
+		Reading this file will return the number of milliseconds
+		that must elapse between accessing any of the lightbar
+		functions through this interface. Going faster will simply
+		block until the necessary interval has elapsed. The interval
+		applies uniformly to all accesses of any kind by any user.
+
+What:		/sys/class/chromeos/<ec-device-name>/lightbar/led_rgb
+Date:		August 2015
+KernelVersion:	4.2
+Description:
+		This allows you to control each LED segment. If the
+		lightbar is already running one of the automatic
+		sequences, you probably won't see anything change because
+		your color setting will be almost immediately replaced.
+		To get useful results, you should stop the lightbar
+		sequence first.
+
+		The values written to this file are sets of four integers,
+		indicating LED, RED, GREEN, BLUE. The LED number is 0 to 3
+		to select a single segment, or 4 to set all four segments
+		to the same value at once. The RED, GREEN, and BLUE
+		numbers should be in the range 0 (off) to 255 (maximum).
+		You can update more than one segment at a time by writing
+		more than one set of four integers.
+
+What:		/sys/class/chromeos/<ec-device-name>/lightbar/program
+Date:		August 2015
+KernelVersion:	4.2
+Description:
+		This allows you to upload and run custom lightbar sequences.
+
+What:		/sys/class/chromeos/<ec-device-name>/lightbar/sequence
+Date:		August 2015
+KernelVersion:	4.2
+Description:
+		The Pixel lightbar has a number of built-in sequences
+		that it displays under various conditions, such as at
+		power on, shut down, or while running. Reading from this
+		file displays the current sequence that the lightbar is
+		displaying. Writing to this file allows you to change the
+		sequence.
+
+What:		/sys/class/chromeos/<ec-device-name>/lightbar/userspace_control
+Date:		August 2015
+KernelVersion:	4.2
+Description:
+		This allows you to take control of the lightbar. This
+		prevents the kernel from going through its normal
+		sequences.
+
+What:		/sys/class/chromeos/<ec-device-name>/lightbar/version
+Date:		August 2015
+KernelVersion:	4.2
+Description:
+		Show information about the lightbar version.
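Tying the lightbar files above together, a hypothetical session might look like this (the exact device name under /sys/class/chromeos and the "stop"/"run" sequence names are assumptions here; the writes need root):

```shell
LB=/sys/class/chromeos/cros_ec/lightbar   # device name is an assumption
RGB="4 255 128 0"    # LED=4 (all four segments), R=255, G=128, B=0
echo "$RGB"
if [ -w "$LB/led_rgb" ]; then
    echo stop > "$LB/sequence"    # keep the automatic sequence from overwriting us
    echo "$RGB" > "$LB/led_rgb"
    echo run > "$LB/sequence"     # hand control back to the EC
fi
```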
···
+What:		/sys/class/chromeos/<ec-device-name>/vbc/vboot_context
+Date:		October 2015
+KernelVersion:	4.4
+Description:
+		Read/write the verified boot context data stored in a
+		small nvram space on some EC implementations.
+7
Documentation/block/bfq-iosched.txt
···
 than maximum throughput. In these cases, consider setting the
 strict_guarantees parameter.
 
+slice_idle_us
+-------------
+
+Controls the same tuning parameter as slice_idle, but in microseconds.
+Either tunable can be used to set idling behavior. Afterwards, the
+other tunable will reflect the newly set value in sysfs.
+
 strict_guarantees
 -----------------
 
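A quick sketch of the mirroring described above (the device name and an active bfq scheduler are assumptions, and the write needs root; this also assumes slice_idle reports milliseconds):

```shell
Q=/sys/block/sda/queue/iosched    # hypothetical bfq-managed device
US=8000                           # 8 ms of idling, expressed in microseconds
echo $((US / 1000))               # value slice_idle should then report
if [ -w "$Q/slice_idle_us" ]; then
    echo "$US" > "$Q/slice_idle_us"
    cat "$Q/slice_idle"           # reflects the newly set value
fi
```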
+2-1
Documentation/block/null_blk.txt
···
 
 zoned=[0/1]: Default: 0
  0: Block device is exposed as a random-access block device.
- 1: Block device is exposed as a host-managed zoned block device.
+ 1: Block device is exposed as a host-managed zoned block device. Requires
+    CONFIG_BLK_DEV_ZONED.
 
 zone_size=[MB]: Default: 256
  Per zone size when exposed as a zoned block device. Must be a power of two.
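As an illustration of the zoned parameters (the modprobe line is a sketch only; it needs root and CONFIG_BLK_DEV_ZONED, and the 1 GB size is a hypothetical choice):

```shell
# modprobe null_blk zoned=1 zone_size=256 gb=1
# A 1 GB nullb device split into 256 MB zones would expose this many zones:
echo $((1024 / 256))
```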
+7
Documentation/block/queue-sysfs.txt
···
 IO to sleep for this amount of microseconds before entering classic
 polling.
 
+io_timeout (RW)
+---------------
+io_timeout is the request timeout in milliseconds. If a request does not
+complete in this time then the block driver timeout handler is invoked.
+That timeout handler can decide to retry the request, to fail it or to start
+a device recovery strategy.
+
 iostats (RW)
 -------------
 This file is used to control (on/off) the iostats accounting of the
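A sketch of using the new io_timeout attribute (the device name is an assumption and the write needs root):

```shell
DEV=sda                      # illustrative device name
T=/sys/block/$DEV/queue/io_timeout
MS=$((30 * 1000))            # a 30 second timeout, expressed in milliseconds
echo "$MS"
if [ -w "$T" ]; then
    echo "$MS" > "$T"        # applies to subsequent requests on this queue
fi
```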
+47-27
Documentation/blockdev/zram.txt
···
 A brief description of exported device attributes. For more details please
 read Documentation/ABI/testing/sysfs-block-zram.
 
-Name              access  description
-----              ------  -----------
-disksize          RW      show and set the device's disk size
-initstate         RO      shows the initialization state of the device
-reset             WO      trigger device reset
-mem_used_max      WO      reset the `mem_used_max' counter (see later)
-mem_limit         WO      specifies the maximum amount of memory ZRAM can use
-                          to store the compressed data
-writeback_limit   WO      specifies the maximum amount of write IO zram can
-                          write out to backing device as 4KB unit
-max_comp_streams  RW      the number of possible concurrent compress operations
-comp_algorithm    RW      show and change the compression algorithm
-compact           WO      trigger memory compaction
-debug_stat        RO      this file is used for zram debugging purposes
-backing_dev       RW      set up backend storage for zram to write out
-idle              WO      mark allocated slot as idle
+Name                    access  description
+----                    ------  -----------
+disksize                RW      show and set the device's disk size
+initstate               RO      shows the initialization state of the device
+reset                   WO      trigger device reset
+mem_used_max            WO      reset the `mem_used_max' counter (see later)
+mem_limit               WO      specifies the maximum amount of memory ZRAM
+                                can use to store the compressed data
+writeback_limit         WO      specifies the maximum amount of write IO zram
+                                can write out to backing device as 4KB unit
+writeback_limit_enable  RW      show and set the writeback_limit feature
+max_comp_streams        RW      the number of possible concurrent compress
+                                operations
+comp_algorithm          RW      show and change the compression algorithm
+compact                 WO      trigger memory compaction
+debug_stat              RO      this file is used for zram debugging purposes
+backing_dev             RW      set up backend storage for zram to write out
+idle                    WO      mark allocated slot as idle
 
 
 User space is advised to use the following files to read the device statistics.
···
 If there are lots of write IO with flash device, potentially, it has
 flash wearout problem so that admin needs to design write limitation
 to guarantee storage health for entire product life.
-To overcome the concern, zram supports "writeback_limit".
-The "writeback_limit"'s default value is 0 so that it doesn't limit
-any writeback. If admin want to measure writeback count in a certain
-period, he could know it via /sys/block/zram0/bd_stat's 3rd column.
+
+To overcome the concern, zram supports the "writeback_limit" feature.
+The default value of "writeback_limit_enable" is 0, so that it doesn't
+limit any writeback. IOW, if the admin wants to apply a writeback budget,
+they should enable writeback_limit_enable via
+
+	$ echo 1 > /sys/block/zramX/writeback_limit_enable
+
+Once writeback_limit_enable is set, zram doesn't allow any writeback
+until the admin sets the budget via /sys/block/zramX/writeback_limit.
+
+(If the admin doesn't enable writeback_limit_enable, any value
+assigned via /sys/block/zramX/writeback_limit is meaningless.)
 
 If the admin wants to limit writeback to 400M per day, it could be done
 like below.
 
-	MB_SHIFT=20
-	4K_SHIFT=12
-	echo $((400<<MB_SHIFT>>4K_SHIFT)) > \
-		/sys/block/zram0/writeback_limit.
+	$ MB_SHIFT=20
+	$ FOURK_SHIFT=12
+	$ echo $((400<<MB_SHIFT>>FOURK_SHIFT)) > \
+		/sys/block/zram0/writeback_limit
+	$ echo 1 > /sys/block/zram0/writeback_limit_enable
 
-If admin want to allow further write again, he could do it like below
+If the admin wants to allow further writes again once the budget is
+exhausted, it can be done like below
 
-	echo 0 > /sys/block/zram0/writeback_limit
+	$ echo $((400<<MB_SHIFT>>FOURK_SHIFT)) > \
+		/sys/block/zram0/writeback_limit
 
 If the admin wants to see the remaining writeback budget since it was set,
 
-	cat /sys/block/zram0/writeback_limit
+	$ cat /sys/block/zramX/writeback_limit
+
+If the admin wants to disable the writeback limit, it can be done via
+
+	$ echo 0 > /sys/block/zramX/writeback_limit_enable
 
 The writeback_limit count will reset whenever you reset zram (e.g.,
 system reboot, echo 1 > /sys/block/zramX/reset) so keeping track of how
 much writeback happened until you reset the zram, to allocate extra
 writeback budget in the next setting, is the user's job.
+
+If the admin wants to measure the writeback count in a certain period,
+it is available in /sys/block/zram0/bd_stat's 3rd column.
 
 = memory tracking
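The unit conversion used above (a 400 MB budget expressed as a count of 4 KB writeback pages) can be checked in any POSIX shell:

```shell
MB_SHIFT=20       # 2^20 bytes per MB
FOURK_SHIFT=12    # 2^12 bytes per 4 KB page
echo $((400 << MB_SHIFT >> FOURK_SHIFT))   # pages in a 400 MB budget
```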
+5-6
Documentation/bpf/bpf_design_QA.rst
···
 ------------------------------
 A: YES. BPF instructions, arguments to BPF programs, set of helper
 functions and their arguments, recognized return codes are all part
-of ABI. However when tracing programs are using bpf_probe_read() helper
-to walk kernel internal datastructures and compile with kernel
-internal headers these accesses can and will break with newer
-kernels. The union bpf_attr -> kern_version is checked at load time
-to prevent accidentally loading kprobe-based bpf programs written
-for a different kernel. Networking programs don't do kern_version check.
+of ABI. However, there is one specific exception for tracing programs
+that use helpers like bpf_probe_read() to walk kernel internal
+data structures and compile with kernel internal headers. Both of these
+kernel internals are subject to change and can break with newer kernels
+such that the program needs to be adapted accordingly.
 
 Q: How much stack space a BPF program uses?
 -------------------------------------------
···
 ===========================================
 
 [1] ARM Linux Kernel documentation - CPUs bindings
-    Documentation/devicetree/bindings/arm/cpus.txt
+    Documentation/devicetree/bindings/arm/cpus.yaml
···
 ===========================================
 
 [1] ARM Linux Kernel documentation - CPUs bindings
-    Documentation/devicetree/bindings/arm/cpus.txt
+    Documentation/devicetree/bindings/arm/cpus.yaml
 
 [2] ARM Linux Kernel documentation - PSCI bindings
     Documentation/devicetree/bindings/arm/psci.txt
+1-1
Documentation/devicetree/bindings/arm/sp810.txt
···
 Required properties:
 
 - compatible: standard compatible string for a Primecell peripheral,
-	see Documentation/devicetree/bindings/arm/primecell.txt
+	see Documentation/devicetree/bindings/arm/primecell.yaml
 	for more details
 	should be: "arm,sp810", "arm,primecell"
 
···
 
 ===============================================================================
 [1] ARM Linux kernel documentation
-    Documentation/devicetree/bindings/arm/cpus.txt
+    Documentation/devicetree/bindings/arm/cpus.yaml
···
 Each clock is assigned an identifier and client nodes use this identifier
 to specify the clock which they consume.
 
-All these identifier could be found in <dt-bindings/clock/marvell-mmp2.h>.
+All these identifiers could be found in <dt-bindings/clock/marvell,mmp2.h>.
···
 * ARM PrimeCell Color LCD Controller PL110/PL111
 
-See also Documentation/devicetree/bindings/arm/primecell.txt
+See also Documentation/devicetree/bindings/arm/primecell.yaml
 
 Required properties:
 
···
 
 	"marvell,armada-8k-gpio" should be used for the Armada 7K and 8K
 	SoCs (either from AP or CP), see
-	Documentation/devicetree/bindings/arm/marvell/cp110-system-controller0.txt
-	and
 	Documentation/devicetree/bindings/arm/marvell/ap806-system-controller.txt
 	for specific details about the offset property.
···
+STMicroelectronics STPMIC1 Onkey
+
+Required properties:
+
+- compatible = "st,stpmic1-onkey";
+- interrupts: interrupt line to use
+- interrupt-names = "onkey-falling", "onkey-rising"
+  onkey-falling: happens when onkey is pressed; IT_PONKEY_F of pmic
+  onkey-rising: happens when onkey is released; IT_PONKEY_R of pmic
+
+Optional properties:
+
+- st,onkey-clear-cc-flag: onkey is able to power on after an
+  over-current shutdown event.
+- st,onkey-pu-inactive: onkey pull up is not active
+- power-off-time-sec: Duration in seconds which the key should be kept
+  pressed for the device to power off automatically (from 1 to 16 seconds).
+  See Documentation/devicetree/bindings/input/keys.txt
+
+Example:
+
+onkey {
+	compatible = "st,stpmic1-onkey";
+	interrupt-parent = <&pmic>;
+	interrupts = <IT_PONKEY_F 0>, <IT_PONKEY_R 1>;
+	interrupt-names = "onkey-falling", "onkey-rising";
+	power-off-time-sec = <10>;
+};
···
 PPI affinity can be expressed as a single "ppi-partitions" node,
 containing a set of sub-nodes, each with the following property:
 - affinity: Should be a list of phandles to CPU nodes (as described in
-  Documentation/devicetree/bindings/arm/cpus.txt).
+  Documentation/devicetree/bindings/arm/cpus.yaml).
 
 GICv3 has one or more Interrupt Translation Services (ITS) that are
 used to route Message Signalled Interrupts (MSI) to the CPUs.
···
 Altera SOCFPGA Reset Manager
 
 Required properties:
-- compatible : "altr,rst-mgr"
+- compatible : "altr,rst-mgr" for Cyclone5/Arria5/Arria10
+	       "altr,stratix10-rst-mgr", "altr,rst-mgr" for Stratix10 ARM64 SoC
 - reg : Should contain 1 register range (address and length)
 - altr,modrst-offset : Should contain the offset of the first modrst register.
 - #reset-cells: 1
···
 	};
 
 
-USB3 core reset
----------------
+Peripheral core reset in glue layer
+-----------------------------------
 
-USB3 core reset belongs to USB3 glue layer. Before using the core reset,
-it is necessary to control the clocks and resets to enable this layer.
-These clocks and resets should be described in each property.
+Some peripheral core resets belong to their own glue layer. Before using
+such a core reset, it is necessary to control the clocks and resets to
+enable this layer. These clocks and resets should be described in each
+property.
 
 Required properties:
 - compatible: Should be
-	"socionext,uniphier-pro4-usb3-reset" - for Pro4 SoC
-	"socionext,uniphier-pxs2-usb3-reset" - for PXs2 SoC
-	"socionext,uniphier-ld20-usb3-reset" - for LD20 SoC
-	"socionext,uniphier-pxs3-usb3-reset" - for PXs3 SoC
+	"socionext,uniphier-pro4-usb3-reset" - for Pro4 SoC USB3
+	"socionext,uniphier-pxs2-usb3-reset" - for PXs2 SoC USB3
+	"socionext,uniphier-ld20-usb3-reset" - for LD20 SoC USB3
+	"socionext,uniphier-pxs3-usb3-reset" - for PXs3 SoC USB3
+	"socionext,uniphier-pro4-ahci-reset" - for Pro4 SoC AHCI
+	"socionext,uniphier-pxs2-ahci-reset" - for PXs2 SoC AHCI
+	"socionext,uniphier-pxs3-ahci-reset" - for PXs3 SoC AHCI
 - #reset-cells: Should be 1.
 - reg: Specifies offset and length of the register set for the device.
-- clocks: A list of phandles to the clock gate for USB3 glue layer.
+- clocks: A list of phandles to the clock gate for the glue layer.
 	According to the clock-names, appropriate clocks are required.
 - clock-names: Should contain
 	"gio", "link" - for Pro4 SoC
 	"link" - for others
-- resets: A list of phandles to the reset control for USB3 glue layer.
+- resets: A list of phandles to the reset control for the glue layer.
 	According to the reset-names, appropriate resets are required.
 - reset-names: Should contain
 	"gio", "link" - for Pro4 SoC
···
 = EXAMPLE
 The following example represents the GLINK RPM node on a MSM8996 device, with
 the function for the "rpm_request" channel defined, which is used for
-regualtors and root clocks.
+regulators and root clocks.
 
 	apcs_glb: mailbox@9820000 {
 		compatible = "qcom,msm8996-apcs-hmss-global";
···
 - qcom,local-pid:
 	Usage: required
 	Value type: <u32>
-	Definition: specifies the identfier of the local endpoint of this edge
+	Definition: specifies the identifier of the local endpoint of this edge
 
 - qcom,remote-pid:
 	Usage: required
 	Value type: <u32>
-	Definition: specifies the identfier of the remote endpoint of this edge
+	Definition: specifies the identifier of the remote endpoint of this edge
 
 = SUBNODES
 Each SMP2P pair contain a set of inbound and outbound entries, these are
···
+STMicroelectronics STPMIC1 Watchdog
+
+Required properties:
+
+- compatible : should be "st,stpmic1-wdt"
+
+Example:
+
+watchdog {
+	compatible = "st,stpmic1-wdt";
+};
+4-4
Documentation/driver-model/bus.txt
···
 	ssize_t (*store)(struct bus_type *, const char * buf, size_t count);
 };
 
-Bus drivers can export attributes using the BUS_ATTR macro that works
-similarly to the DEVICE_ATTR macro for devices. For example, a definition
-like this:
+Bus drivers can export attributes using the BUS_ATTR_RW macro that works
+similarly to the DEVICE_ATTR_RW macro for devices. For example, a
+definition like this:
 
-static BUS_ATTR(debug,0644,show_debug,store_debug);
+static BUS_ATTR_RW(debug);
 
 is equivalent to declaring:
+8
Documentation/fb/fbcon.txt
···
 	be preserved until there actually is some text output to the console.
 	This option causes fbcon to bind immediately to the fbdev device.
 
+7. fbcon=logo-pos:<location>
+
+	The only possible 'location' is 'center' (without quotes), and when
+	given, the bootup logo is moved from the default top-left corner
+	location to the center of the framebuffer. If more than one logo is
+	displayed due to multiple CPUs, the collected line of logos is moved
+	as a whole.
+
 C. Attaching, Detaching and Unloading
 
 Before going on to how to attach, detach and unload the framebuffer console, an
···
     size should be set when the call is begun. tx_total_len may not be less
     than zero.
 
- (*) Check to see the completion state of a call so that the caller can assess
-     whether it needs to be retried.
-
-	enum rxrpc_call_completion {
-		RXRPC_CALL_SUCCEEDED,
-		RXRPC_CALL_REMOTELY_ABORTED,
-		RXRPC_CALL_LOCALLY_ABORTED,
-		RXRPC_CALL_LOCAL_ERROR,
-		RXRPC_CALL_NETWORK_ERROR,
-	};
-
-	int rxrpc_kernel_check_call(struct socket *sock, struct rxrpc_call *call,
-				    enum rxrpc_call_completion *_compl,
-				    u32 *_abort_code);
-
-     On return, -EINPROGRESS will be returned if the call is still ongoing; if
-     it is finished, *_compl will be set to indicate the manner of completion,
-     *_abort_code will be set to any abort code that occurred. 0 will be
-     returned on a successful completion, -ECONNABORTED will be returned if the
-     client failed due to a remote abort and anything else will return an
-     appropriate error code.
-
-     The caller should look at this information to decide if it's worth
-     retrying the call.
-
- (*) Retry a client call.
-
-	int rxrpc_kernel_retry_call(struct socket *sock,
-				    struct rxrpc_call *call,
-				    struct sockaddr_rxrpc *srx,
-				    struct key *key);
-
-     This attempts to partially reinitialise a call and submit it again while
-     reusing the original call's Tx queue to avoid the need to repackage and
-     re-encrypt the data to be sent. call indicates the call to retry, srx the
-     new address to send it to and key the encryption key to use for signing or
-     encrypting the packets.
-
-     For this to work, the first Tx data packet must still be in the transmit
-     queue, and currently this is only permitted for local and network errors
-     and the call must not have been aborted. Any partially constructed Tx
-     packet is left as is and can continue being filled afterwards.
-
-     It returns 0 if the call was requeued and an error otherwise.
-
 (*) Get call RTT.
 
	u64 rxrpc_kernel_get_rtt(struct socket *sock, struct rxrpc_call *call);
+125-5
Documentation/networking/snmp_counter.rst
···
 to the accept queue.
 
 
-TCP Fast Open
+* TcpEstabResets
+Defined in `RFC1213 tcpEstabResets`_.
+
+.. _RFC1213 tcpEstabResets: https://tools.ietf.org/html/rfc1213#page-48
+
+* TcpAttemptFails
+Defined in `RFC1213 tcpAttemptFails`_.
+
+.. _RFC1213 tcpAttemptFails: https://tools.ietf.org/html/rfc1213#page-48
+
+* TcpOutRsts
+Defined in `RFC1213 tcpOutRsts`_. The RFC says this counter indicates
+the 'segments sent containing the RST flag', but in the linux kernel this
+counter indicates the segments the kernel tried to send. The sending
+process might fail due to some errors (e.g. a memory allocation failure).
+
+.. _RFC1213 tcpOutRsts: https://tools.ietf.org/html/rfc1213#page-52
+
+
+TCP Fast Path
 =============
 When kernel receives a TCP packet, it has two paths to handle the
 packet, one is fast path, another is slow path. The comment in kernel
···
 
 TCP abort
 =========
-
-
 * TcpExtTCPAbortOnData
 It means TCP layer has data in flight, but needs to close the
 connection. So TCP layer sends a RST to the other side, indicate the
···
 stack of kernel will increase TcpExtTCPSACKReorder for both of the
 above scenarios.
 
-
 DSACK
 =====
 The DSACK is defined in `RFC2883`_. The receiver uses DSACK to report
···
 DSACK to the sender.
 
 * TcpExtTCPDSACKRecv
-The TCP stack receives a DSACK, which indicate an acknowledged
+The TCP stack receives a DSACK, which indicates an acknowledged
 duplicate packet is received.
 
 * TcpExtTCPDSACKOfoRecv
 The TCP stack receives a DSACK, which indicates an out of order
 duplicate packet is received.
+
+invalid SACK and DSACK
+======================
+When a SACK (or DSACK) block is invalid, a corresponding counter would
+be updated. The validation method is based on the start/end sequence
+numbers of the SACK block. For more details, please refer to the comment
+of the function tcp_is_sackblock_valid in the kernel source code. A
+SACK option could have up to 4 blocks; they are checked
+individually. E.g., if 3 blocks of a SACK are invalid, the
+corresponding counter would be updated 3 times. The comment of the
+`Add counters for discarded SACK blocks`_ patch has additional
+explanation:
+
+.. _Add counters for discarded SACK blocks: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=18f02545a9a16c9a89778b91a162ad16d510bb32
+
+* TcpExtTCPSACKDiscard
+This counter indicates how many SACK blocks are invalid. If the invalid
+SACK block is caused by ACK recording, the TCP stack will only ignore
+it and won't update this counter.
+
+* TcpExtTCPDSACKIgnoredOld and TcpExtTCPDSACKIgnoredNoUndo
+When a DSACK block is invalid, one of these two counters would be
+updated. Which counter will be updated depends on the undo_marker flag
+of the TCP socket. If the undo_marker is not set, the TCP stack isn't
+likely to re-transmit any packets, and if we still receive an invalid
+DSACK block, the reason might be that the packet was duplicated in the
+middle of the network. In such a scenario, TcpExtTCPDSACKIgnoredNoUndo
+will be updated. If the undo_marker is set, TcpExtTCPDSACKIgnoredOld
+will be updated. As implied in its name, it might be an old packet.
+
+SACK shift
+==========
+The linux networking stack stores data in sk_buff structs (skb for
+short). If a SACK block spans multiple skbs, the TCP stack will try
+to re-arrange data in these skbs. E.g. if a SACK block acknowledges seq
+10 to 15, skb1 has seq 10 to 13, skb2 has seq 14 to 20. The seq 14 and
+15 in skb2 would be moved to skb1. This operation is 'shift'. If a
+SACK block acknowledges seq 10 to 20, skb1 has seq 10 to 13, skb2 has
+seq 14 to 20. All data in skb2 will be moved to skb1, and skb2 will be
+discarded; this operation is 'merge'.
+
+* TcpExtTCPSackShifted
+A skb is shifted
+
+* TcpExtTCPSackMerged
+A skb is merged
+
+* TcpExtTCPSackShiftFallback
+A skb should be shifted or merged, but the TCP stack doesn't do it for
+some reason.
 
 TCP out of order
 ================
···
 .. _RFC 5961 section 4.2: https://tools.ietf.org/html/rfc5961#page-9
 .. _RFC 5961 section 5.2: https://tools.ietf.org/html/rfc5961#page-11
 
+TCP receive window
+==================
+* TcpExtTCPWantZeroWindowAdv
+Depending on current memory usage, the TCP stack tries to set the
+receive window to zero. But the receive window might still be a non-zero
+value. For example, if the previous window size is 10, and the TCP
+stack receives 3 bytes, the current window size would be 7 even if the
+window size calculated from the memory usage is zero.
+
+* TcpExtTCPToZeroWindowAdv
+The TCP receive window is set to zero from a non-zero value.
+
+* TcpExtTCPFromZeroWindowAdv
+The TCP receive window is set to a non-zero value from zero.
+
+
+Delayed ACK
+===========
+The TCP Delayed ACK is a technique which is used for reducing the
+packet count in the network. For more details, please refer to the
+`Delayed ACK wiki`_
+
+.. _Delayed ACK wiki: https://en.wikipedia.org/wiki/TCP_delayed_acknowledgment
+
+* TcpExtDelayedACKs
+A delayed ACK timer expires. The TCP stack will send a pure ACK packet
+and exit the delayed ACK mode.
+
+* TcpExtDelayedACKLocked
+A delayed ACK timer expires, but the TCP stack can't send an ACK
+immediately because the socket is locked by a userspace program. The
+TCP stack will send a pure ACK later (after the userspace program
+unlocks the socket). When the TCP stack sends the pure ACK later, the
+TCP stack will also update TcpExtDelayedACKs and exit the delayed ACK
+mode.
+
+* TcpExtDelayedACKLost
+It will be updated when the TCP stack receives a packet which has
+already been ACKed. A Delayed ACK loss might cause this issue, but it
+could also be triggered by other reasons, such as a packet being
+duplicated in the network.
+
+Tail Loss Probe (TLP)
+=====================
+TLP is an algorithm which is used to detect TCP packet loss. For more
+details, please refer to the `TLP paper`_.
+
+.. _TLP paper: https://tools.ietf.org/html/draft-dukkipati-tcpm-tcp-loss-probe-01
+
+* TcpExtTCPLossProbes
+A TLP probe packet is sent.
+
+* TcpExtTCPLossProbeRecovery
+A packet loss is detected and recovered by TLP.
 
 examples
 ========
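The TcpExt counters described above can be read with nstat (from iproute2) or straight from /proc/net/netstat; a minimal sketch:

```shell
# nstat -az TcpExtTCPLossProbes TcpExtDelayedACKs   # if iproute2 is installed
# The raw source is the pair of TcpExt lines in /proc/net/netstat:
awk '/^TcpExt:/ { print $1; exit }' /proc/net/netstat 2>/dev/null || echo TcpExt:
```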
+2-2
Documentation/networking/timestamping.txt
···
 
 Hardware time stamping must also be initialized for each device driver
 that is expected to do hardware time stamping. The parameter is defined in
-/include/linux/net_tstamp.h as:
+include/uapi/linux/net_tstamp.h as:
 
 struct hwtstamp_config {
 	int flags;	/* no flags defined right now, must be zero */
···
 	HWTSTAMP_FILTER_PTP_V1_L4_EVENT,
 
 	/* for the complete list of values, please check
-	 * the include file /include/linux/net_tstamp.h
+	 * the include file include/uapi/linux/net_tstamp.h
 	 */
 };
+1-1
Documentation/trace/coresight-cpu-debug.txt
···
 The same can also be done from an application program.
 
 Disable specific CPU's specific idle state from cpuidle sysfs (see
-Documentation/cpuidle/sysfs.txt):
+Documentation/admin-guide/pm/cpuidle.rst):
 # echo 1 > /sys/devices/system/cpu/cpu$cpu/cpuidle/state$state/disable
···
 Tony Luck <tony.luck@intel.com>
 Vikas Shivappa <vikas.shivappa@intel.com>
 
-This feature is enabled by the CONFIG_RESCTRL and the X86 /proc/cpuinfo
+This feature is enabled by the CONFIG_X86_RESCTRL and the x86 /proc/cpuinfo
 flag bits:
 RDT (Resource Director Technology) Allocation - "rdt_a"
 CAT (Cache Allocation Technology) - "cat_l3", "cat_l2"
···22#ifndef __ASM_PROTOTYPES_H33#define __ASM_PROTOTYPES_H44/*55- * CONFIG_MODEVERIONS requires a C declaration to generate the appropriate CRC55+ * CONFIG_MODVERSIONS requires a C declaration to generate the appropriate CRC66 * for each symbol. Since commit:77 *88 * 4efca4ed05cbdfd1 ("kbuild: modversions for EXPORT_SYMBOL() for asm")
···1616#ifndef __ASM_MMU_H1717#define __ASM_MMU_H18181919+#include <asm/cputype.h>2020+1921#define MMCF_AARCH32 0x1 /* mm context flag for AArch32 executables */2022#define USER_ASID_BIT 482123#define USER_ASID_FLAG (UL(1) << USER_ASID_BIT)···4442{4543 return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&4644 cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);4545+}4646+4747+static inline bool arm64_kernel_use_ng_mappings(void)4848+{4949+ bool tx1_bug;5050+5151+ /* What's a kpti? Use global mappings if we don't know. */5252+ if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))5353+ return false;5454+5555+ /*5656+ * Note: this function is called before the CPU capabilities have5757+ * been configured, so our early mappings will be global. If we5858+ * later determine that kpti is required, then5959+ * kpti_install_ng_mappings() will make them non-global.6060+ */6161+ if (arm64_kernel_unmapped_at_el0())6262+ return true;6363+6464+ if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))6565+ return false;6666+6767+ /*6868+ * KASLR is enabled so we're going to be enabling kpti on non-broken6969+ * CPUs regardless of their susceptibility to Meltdown. Rather7070+ * than force everybody to go through the G -> nG dance later on,7171+ * just put down non-global mappings from the beginning.7272+ */7373+ if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {7474+ tx1_bug = false;7575+#ifndef MODULE7676+ } else if (!static_branch_likely(&arm64_const_caps_ready)) {7777+ extern const struct midr_range cavium_erratum_27456_cpus[];7878+7979+ tx1_bug = is_midr_in_range_list(read_cpuid_id(),8080+ cavium_erratum_27456_cpus);8181+#endif8282+ } else {8383+ tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);8484+ }8585+8686+ return !tx1_bug && kaslr_offset() > 0;4787}48884989typedef void (*bp_hardening_cb_t)(void);
···983983984984 /* Useful for KASLR robustness */985985 if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))986986- return true;986986+ return kaslr_offset() > 0;987987988988 /* Don't force KPTI for CPUs that are not vulnerable */989989 if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))···10031003 static bool kpti_applied = false;10041004 int cpu = smp_processor_id();1005100510061006- if (kpti_applied)10061006+ /*10071007+ * We don't need to rewrite the page-tables if either we've done10081008+ * it already or we have KASLR enabled and therefore have not10091009+ * created any global mappings at all.10101010+ */10111011+ if (kpti_applied || kaslr_offset() > 0)10071012 return;1008101310091014 remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
+1
arch/arm64/kernel/head.S
···475475476476ENTRY(kimage_vaddr)477477 .quad _text - TEXT_OFFSET478478+EXPORT_SYMBOL(kimage_vaddr)478479479480/*480481 * If we're fortunate enough to boot at EL2, ensure that the world is
+6-2
arch/arm64/kernel/kaslr.c
···1414#include <linux/sched.h>1515#include <linux/types.h>16161717+#include <asm/cacheflush.h>1718#include <asm/fixmap.h>1819#include <asm/kernel-pgtable.h>1920#include <asm/memory.h>···4443 return ret;4544}46454747-static __init const u8 *get_cmdline(void *fdt)4646+static __init const u8 *kaslr_get_cmdline(void *fdt)4847{4948 static __initconst const u8 default_cmdline[] = CONFIG_CMDLINE;5049···110109 * Check if 'nokaslr' appears on the command line, and111110 * return 0 if that is the case.112111 */113113- cmdline = get_cmdline(fdt);112112+ cmdline = kaslr_get_cmdline(fdt);114113 str = strstr(cmdline, "nokaslr");115114 if (str == cmdline || (str > cmdline && *(str - 1) == ' '))116115 return 0;···169168 /* use the lower 21 bits to randomize the base of the module region */170169 module_alloc_base += (module_range * (seed & ((1 << 21) - 1))) >> 21;171170 module_alloc_base &= PAGE_MASK;171171+172172+ __flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));173173+ __flush_dcache_area(&memstart_offset_seed, sizeof(memstart_offset_seed));172174173175 return offset;174176}
+3-1
arch/arm64/kernel/machine_kexec_file.c
···87878888 /* add kaslr-seed */8989 ret = fdt_delprop(dtb, off, FDT_PROP_KASLR_SEED);9090- if (ret && (ret != -FDT_ERR_NOTFOUND))9090+ if (ret == -FDT_ERR_NOTFOUND)9191+ ret = 0;9292+ else if (ret)9193 goto out;92949395 if (rng_is_initialized()) {
···31553155config MIPS32_N3231563156 bool "Kernel support for n32 binaries"31573157 depends on 64BIT31583158+ select ARCH_WANT_COMPAT_IPC_PARSE_VERSION31583159 select COMPAT31593160 select MIPS32_COMPAT31603161 select SYSVIPC_COMPAT if SYSVIPC
+31
arch/mips/bcm47xx/setup.c
···173173 pm_power_off = bcm47xx_machine_halt;174174}175175176176+#ifdef CONFIG_BCM47XX_BCMA177177+static struct device * __init bcm47xx_setup_device(void)178178+{179179+ struct device *dev;180180+ int err;181181+182182+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);183183+ if (!dev)184184+ return NULL;185185+186186+ err = dev_set_name(dev, "bcm47xx_soc");187187+ if (err) {188188+ pr_err("Failed to set SoC device name: %d\n", err);189189+ kfree(dev);190190+ return NULL;191191+ }192192+193193+ err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32));194194+ if (err)195195+ pr_err("Failed to set SoC DMA mask: %d\n", err);196196+197197+ return dev;198198+}199199+#endif200200+176201/*177202 * This finishes bus initialization doing things that were not possible without178203 * kmalloc. Make sure to call it late enough (after mm_init).···207182#ifdef CONFIG_BCM47XX_BCMA208183 if (bcm47xx_bus_type == BCM47XX_BUS_TYPE_BCMA) {209184 int err;185185+186186+ bcm47xx_bus.bcma.dev = bcm47xx_setup_device();187187+ if (!bcm47xx_bus.bcma.dev)188188+ panic("Failed to setup SoC device\n");210189211190 err = bcma_host_soc_init(&bcm47xx_bus.bcma);212191 if (err)···264235#endif265236#ifdef CONFIG_BCM47XX_BCMA266237 case BCM47XX_BUS_TYPE_BCMA:238238+ if (device_register(bcm47xx_bus.bcma.dev))239239+ pr_err("Failed to register SoC device\n");267240 bcma_bus_register(&bcm47xx_bus.bcma.bus);268241 break;269242#endif
···6666# CONFIG_SERIAL_8250_PCI is not set6767CONFIG_SERIAL_8250_NR_UARTS=16868CONFIG_SERIAL_8250_RUNTIME_UARTS=16969+CONFIG_SERIAL_OF_PLATFORM=y6970CONFIG_SERIAL_AR933X=y7071CONFIG_SERIAL_AR933X_CONSOLE=y7172# CONFIG_HW_RANDOM is not set
···852852853853 /* set up the PTE pointers for the Abatron bdiGDB.854854 */855855- tovirt(r6,r6)856855 lis r5, abatron_pteptrs@h857856 ori r5, r5, abatron_pteptrs@l858857 stw r5, 0xf0(0) /* Must match your Abatron config file */859858 tophys(r5,r5)859859+ lis r6, swapper_pg_dir@h860860+ ori r6, r6, swapper_pg_dir@l860861 stw r6, 0(r5)861862862863/* Now turn on the MMU for real! */
+4-3
arch/powerpc/kernel/signal_64.c
···755755 if (restore_tm_sigcontexts(current, &uc->uc_mcontext,756756 &uc_transact->uc_mcontext))757757 goto badframe;758758- }758758+ } else759759#endif760760- /* Fall through, for non-TM restore */761761- if (!MSR_TM_ACTIVE(msr)) {760760+ {762761 /*762762+ * Fall through, for non-TM restore763763+ *763764 * Unset MSR[TS] on the thread regs since MSR from user764765 * context does not have MSR active, and recheckpoint was765766 * not called since restore_tm_sigcontexts() was not called
···538538 /* see if there is a keyboard in the device tree539539 with a parent of type "adb" */540540 for_each_node_by_name(kbd, "keyboard")541541- if (kbd->parent && kbd->parent->type542542- && strcmp(kbd->parent->type, "adb") == 0)541541+ if (of_node_is_type(kbd->parent, "adb"))543542 break;544543 of_node_put(kbd);545544 if (kbd)
···201201 REG_S s2, PT_SEPC(sp)202202 /* Trace syscalls, but only if requested by the user. */203203 REG_L t0, TASK_TI_FLAGS(tp)204204- andi t0, t0, _TIF_SYSCALL_TRACE204204+ andi t0, t0, _TIF_SYSCALL_WORK205205 bnez t0, handle_syscall_trace_enter206206check_syscall_nr:207207 /* Check to make sure we don't jump to a bogus syscall number. */···221221 REG_S a0, PT_A0(sp)222222 /* Trace syscalls, but only if requested by the user. */223223 REG_L t0, TASK_TI_FLAGS(tp)224224- andi t0, t0, _TIF_SYSCALL_TRACE224224+ andi t0, t0, _TIF_SYSCALL_WORK225225 bnez t0, handle_syscall_trace_exit226226227227ret_from_exception:
+16-14
arch/riscv/kernel/module-sections.c
···99#include <linux/kernel.h>1010#include <linux/module.h>11111212-u64 module_emit_got_entry(struct module *mod, u64 val)1212+unsigned long module_emit_got_entry(struct module *mod, unsigned long val)1313{1414 struct mod_section *got_sec = &mod->arch.got;1515 int i = got_sec->num_entries;1616 struct got_entry *got = get_got_entry(val, got_sec);17171818 if (got)1919- return (u64)got;1919+ return (unsigned long)got;20202121 /* There is no duplicate entry, create a new one */2222 got = (struct got_entry *)got_sec->shdr->sh_addr;···2525 got_sec->num_entries++;2626 BUG_ON(got_sec->num_entries > got_sec->max_entries);27272828- return (u64)&got[i];2828+ return (unsigned long)&got[i];2929}30303131-u64 module_emit_plt_entry(struct module *mod, u64 val)3131+unsigned long module_emit_plt_entry(struct module *mod, unsigned long val)3232{3333 struct mod_section *got_plt_sec = &mod->arch.got_plt;3434 struct got_entry *got_plt;···3737 int i = plt_sec->num_entries;38383939 if (plt)4040- return (u64)plt;4040+ return (unsigned long)plt;41414242 /* There is no duplicate entry, create a new one */4343 got_plt = (struct got_entry *)got_plt_sec->shdr->sh_addr;4444 got_plt[i] = emit_got_entry(val);4545 plt = (struct plt_entry *)plt_sec->shdr->sh_addr;4646- plt[i] = emit_plt_entry(val, (u64)&plt[i], (u64)&got_plt[i]);4646+ plt[i] = emit_plt_entry(val,4747+ (unsigned long)&plt[i],4848+ (unsigned long)&got_plt[i]);47494850 plt_sec->num_entries++;4951 got_plt_sec->num_entries++;5052 BUG_ON(plt_sec->num_entries > plt_sec->max_entries);51535252- return (u64)&plt[i];5454+ return (unsigned long)&plt[i];5355}54565555-static int is_rela_equal(const Elf64_Rela *x, const Elf64_Rela *y)5757+static int is_rela_equal(const Elf_Rela *x, const Elf_Rela *y)5658{5759 return x->r_info == y->r_info && x->r_addend == y->r_addend;5860}59616060-static bool duplicate_rela(const Elf64_Rela *rela, int idx)6262+static bool duplicate_rela(const Elf_Rela *rela, int idx)6163{6264 int i;6365 for (i = 0; i < idx; i++) 
{···6967 return false;7068}71697272-static void count_max_entries(Elf64_Rela *relas, int num,7070+static void count_max_entries(Elf_Rela *relas, int num,7371 unsigned int *plts, unsigned int *gots)7472{7573 unsigned int type, i;76747775 for (i = 0; i < num; i++) {7878- type = ELF64_R_TYPE(relas[i].r_info);7676+ type = ELF_RISCV_R_TYPE(relas[i].r_info);7977 if (type == R_RISCV_CALL_PLT) {8078 if (!duplicate_rela(relas, i))8179 (*plts)++;···120118121119 /* Calculate the maxinum number of entries */122120 for (i = 0; i < ehdr->e_shnum; i++) {123123- Elf64_Rela *relas = (void *)ehdr + sechdrs[i].sh_offset;124124- int num_rela = sechdrs[i].sh_size / sizeof(Elf64_Rela);125125- Elf64_Shdr *dst_sec = sechdrs + sechdrs[i].sh_info;121121+ Elf_Rela *relas = (void *)ehdr + sechdrs[i].sh_offset;122122+ int num_rela = sechdrs[i].sh_size / sizeof(Elf_Rela);123123+ Elf_Shdr *dst_sec = sechdrs + sechdrs[i].sh_info;126124127125 if (sechdrs[i].sh_type != SHT_RELA)128126 continue;
···446446 branches. Requires a compiler with -mindirect-branch=thunk-extern447447 support for full protection. The kernel may run slower.448448449449-config RESCTRL449449+config X86_RESCTRL450450 bool "Resource Control support"451451 depends on X86 && (CPU_SUP_INTEL || CPU_SUP_AMD)452452 select KERNFS···617617618618config X86_INTEL_LPSS619619 bool "Intel Low Power Subsystem Support"620620- depends on X86 && ACPI620620+ depends on X86 && ACPI && PCI621621 select COMMON_CLK622622 select PINCTRL623623 select IOSF_MBI
···45404540 * given physical address won't match the required45414541 * VMCS12_REVISION identifier.45424542 */45434543- nested_vmx_failValid(vcpu,45434543+ return nested_vmx_failValid(vcpu,45444544 VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID);45454545- return kvm_skip_emulated_instruction(vcpu);45464545 }45474546 new_vmcs12 = kmap(page);45484547 if (new_vmcs12->hdr.revision_id != VMCS12_REVISION ||
+2-2
arch/x86/kvm/vmx/vmx.c
···453453 struct kvm_tlb_range *range)454454{455455 struct kvm_vcpu *vcpu;456456- int ret = -ENOTSUPP, i;456456+ int ret = 0, i;457457458458 spin_lock(&to_kvm_vmx(kvm)->ept_pointer_lock);459459···7044704470457045 /* unmask address range configure area */70467046 for (i = 0; i < vmx->pt_desc.addr_range; i++)70477047- vmx->pt_desc.ctl_bitmask &= ~(0xf << (32 + i * 4));70477047+ vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4));70487048}7049704970507050static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
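The `0xf` to `0xfULL` change in the hunk above matters because a plain `0xf` in C is a 32-bit int, and shifting it by 32 or more bits is undefined behavior; widening the operand to 64 bits makes the shift well defined. A small Python model of fixed-width shifts, for illustration only:

```python
def shl_c_int(value, shift, width=32):
    """Model a C left shift on a fixed-width unsigned operand."""
    if shift >= width:
        # A shift count >= the operand width is undefined behavior in C.
        raise ValueError("shift >= operand width is undefined in C")
    return (value << shift) % (1 << width)

# 0xfULL << 32 is fine on a 64-bit operand:
print(hex(shl_c_int(0xf, 32, width=64)))  # 0xf00000000
```

With the default 32-bit width the same call raises, mirroring why the kernel fix was needed.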
+1-4
arch/x86/xen/enlighten_pv.c
···898898 val = native_read_msr_safe(msr, err);899899 switch (msr) {900900 case MSR_IA32_APICBASE:901901-#ifdef CONFIG_X86_X2APIC902902- if (!(cpuid_ecx(1) & (1 << (X86_FEATURE_X2APIC & 31))))903903-#endif904904- val &= ~X2APIC_ENABLE;901901+ val &= ~X2APIC_ENABLE;905902 break;906903 }907904 return val;
+9-3
arch/x86/xen/time.c
···361361{362362 int cpu;363363364364- pvclock_resume();365365-366364 if (xen_clockevent != &xen_vcpuop_clockevent)367365 return;368366···377379};378380379381static struct pvclock_vsyscall_time_info *xen_clock __read_mostly;382382+static u64 xen_clock_value_saved;380383381384void xen_save_time_memory_area(void)382385{383386 struct vcpu_register_time_memory_area t;384387 int ret;388388+389389+ xen_clock_value_saved = xen_clocksource_read() - xen_sched_clock_offset;385390386391 if (!xen_clock)387392 return;···405404 int ret;406405407406 if (!xen_clock)408408- return;407407+ goto out;409408410409 t.addr.v = &xen_clock->pvti;411410···422421 if (ret != 0)423422 pr_notice("Cannot restore secondary vcpu_time_info (err %d)",424423 ret);424424+425425+out:426426+ /* Need pvclock_resume() before using xen_clocksource_read(). */427427+ pvclock_resume();428428+ xen_sched_clock_offset = xen_clocksource_read() - xen_clock_value_saved;425429}426430427431static void xen_setup_vsyscall_time_info(void)
+5-6
block/bfq-wf2q.c
···11541154}1155115511561156/**11571157- * __bfq_deactivate_entity - deactivate an entity from its service tree.11581158- * @entity: the entity to deactivate.11571157+ * __bfq_deactivate_entity - update sched_data and service trees for11581158+ * entity, so as to represent entity as inactive11591159+ * @entity: the entity being deactivated.11591160 * @ins_into_idle_tree: if false, the entity will not be put into the11601161 * idle tree.11611162 *11621162- * Deactivates an entity, independently of its previous state. Must11631163- * be invoked only if entity is on a service tree. Extracts the entity11641164- * from that tree, and if necessary and allowed, puts it into the idle11651165- * tree.11631163+ * If necessary and allowed, puts entity into the idle tree. NOTE:11641164+ * entity may be on no tree if in service.11661165 */11671166bool __bfq_deactivate_entity(struct bfq_entity *entity, bool ins_into_idle_tree)11681167{
+19-1
block/blk-core.c
···661661 * blk_attempt_plug_merge - try to merge with %current's plugged list662662 * @q: request_queue new bio is being queued at663663 * @bio: new bio being queued664664- * @request_count: out parameter for number of traversed plugged requests665664 * @same_queue_rq: pointer to &struct request that gets filled in when666665 * another request associated with @q is found on the plug list667666 * (optional, may be %NULL)···16821683 * @plug: The &struct blk_plug that needs to be initialized16831684 *16841685 * Description:16861686+ * blk_start_plug() indicates to the block layer an intent by the caller16871687+ * to submit multiple I/O requests in a batch. The block layer may use16881688+ * this hint to defer submitting I/Os from the caller until blk_finish_plug()16891689+ * is called. However, the block layer may choose to submit requests16901690+ * before a call to blk_finish_plug() if the number of queued I/Os16911691+ * exceeds %BLK_MAX_REQUEST_COUNT, or if the size of the I/O is larger than16921692+ * %BLK_PLUG_FLUSH_SIZE. The queued I/Os may also be submitted early if16931693+ * the task schedules (see below).16941694+ *16851695 * Tracking blk_plug inside the task_struct will help with auto-flushing the16861696 * pending I/O should the task end up blocking between blk_start_plug() and16871697 * blk_finish_plug(). This is important from a performance perspective, but···17731765 blk_mq_flush_plug_list(plug, from_schedule);17741766}1775176717681768+/**17691769+ * blk_finish_plug - mark the end of a batch of submitted I/O17701770+ * @plug: The &struct blk_plug passed to blk_start_plug()17711771+ *17721772+ * Description:17731773+ * Indicate that a batch of I/O submissions is complete. This function17741774+ * must be paired with an initial call to blk_start_plug(). The intent17751775+ * is to allow the block layer to optimize I/O submission. 
See the17761776+ * documentation for blk_start_plug() for more information.17771777+ */17761778void blk_finish_plug(struct blk_plug *plug)17771779{17781780 if (plug != current->plug)
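The plugging contract spelled out in the blk_start_plug()/blk_finish_plug() kernel-doc above — queue I/O while plugged, flush early once the batch grows past a limit, and always flush on finish — can be sketched as a toy model. This is an illustration, not kernel code; `MAX_BATCH` stands in for %BLK_MAX_REQUEST_COUNT:

```python
class Plug:
    """Toy model of block-layer plugging: defer submissions and flush
    the batch when it exceeds a threshold or when the plug is finished."""
    MAX_BATCH = 16  # stand-in for BLK_MAX_REQUEST_COUNT

    def __init__(self, submit):
        self.submit = submit  # driver submission callback
        self.pending = []

    def queue(self, req):
        self.pending.append(req)
        if len(self.pending) >= self.MAX_BATCH:
            self.flush()      # early flush, like an oversized plug

    def flush(self):
        if self.pending:
            self.submit(self.pending)
            self.pending = []

batches = []
plug = Plug(batches.append)   # "blk_start_plug()"
for i in range(20):
    plug.queue(i)
plug.flush()                  # "blk_finish_plug()"
print([len(b) for b in batches])  # [16, 4]
```

Twenty queued requests arrive at the "driver" as one full batch of 16 plus the remainder flushed at finish time.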
-2
block/blk-mq-debugfs-zoned.c
···11// SPDX-License-Identifier: GPL-2.022/*33 * Copyright (C) 2017 Western Digital Corporation or its affiliates.44- *55- * This file is released under the GPL.64 */7586#include <linux/blkdev.h>
···5858 return -EINVAL;5959 if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)6060 return -EINVAL;6161- if (RTA_PAYLOAD(rta) < sizeof(*param))6161+6262+ /*6363+ * RTA_OK() didn't align the rtattr's payload when validating that it6464+ * fits in the buffer. Yet, the keys should start on the next 4-byte6565+ * aligned boundary. To avoid confusion, require that the rtattr6666+ * payload be exactly the param struct, which has a 4-byte aligned size.6767+ */6868+ if (RTA_PAYLOAD(rta) != sizeof(*param))6269 return -EINVAL;7070+ BUILD_BUG_ON(sizeof(*param) % RTA_ALIGNTO);63716472 param = RTA_DATA(rta);6573 keys->enckeylen = be32_to_cpu(param->enckeylen);66746767- key += RTA_ALIGN(rta->rta_len);6868- keylen -= RTA_ALIGN(rta->rta_len);7575+ key += rta->rta_len;7676+ keylen -= rta->rta_len;69777078 if (keylen < keys->enckeylen)7179 return -EINVAL;
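The authenc key blob being validated above is an rtattr header, a big-endian `enckeylen` parameter, then the authentication key followed by the encryption key. A rough Python sketch of that layout, assuming the exact-size param check introduced by the patch (so the keys start right at `rta_len`); error handling is simplified:

```python
import struct

CRYPTO_AUTHENC_KEYA_PARAM = 1  # rta_type used for the authenc key param

def parse_authenc_key(blob):
    """Split an authenc key blob into (authkey, enckey)."""
    rta_len, rta_type = struct.unpack_from("=HH", blob, 0)
    if rta_type != CRYPTO_AUTHENC_KEYA_PARAM:
        raise ValueError("bad rta_type")
    (enckeylen,) = struct.unpack_from(">I", blob, 4)  # big-endian u32
    keys = blob[rta_len:]            # key material follows the rtattr
    if len(keys) < enckeylen:
        raise ValueError("truncated key material")
    return keys[:-enckeylen], keys[-enckeylen:]

# Hypothetical blob: 8-byte rtattr+param, 7-byte auth key, 3-byte enc key.
blob = struct.pack("=HH", 8, 1) + struct.pack(">I", 3) + b"AUTHKEY" + b"ENC"
authkey, enckey = parse_authenc_key(blob)
print(authkey, enckey)  # b'AUTHKEY' b'ENC'
```

The key lengths and contents here are invented for illustration.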
···100100101101 for (i = 0; i <= 63; i++) {102102103103- ss1 = rol32((rol32(a, 12) + e + rol32(t(i), i)), 7);103103+ ss1 = rol32((rol32(a, 12) + e + rol32(t(i), i & 31)), 7);104104105105 ss2 = ss1 ^ rol32(a, 12);106106
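The sm3 change above masks the rotate count with `i & 31` because a 32-bit rotate by 32 or more is undefined in C. A Python rol32 applying the same masking (the rotate itself is the standard construction; the 0x79cc4519 constant is one of sm3's round constants):

```python
def rol32(value, shift):
    """32-bit rotate-left with the shift masked to [0, 31],
    mirroring the `i & 31` fix in the patch above."""
    shift &= 31
    value &= 0xffffffff
    return ((value << shift) | (value >> ((32 - shift) & 31))) & 0xffffffff

# For round indices 32..63, masking makes the rotate equal to i - 32:
print(rol32(0x79cc4519, 33) == rol32(0x79cc4519, 1))  # True
```

Without the mask, the C expression `rol32(t(i), i)` would shift by up to 63 bits on a 32-bit operand.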
+1
drivers/acpi/Kconfig
···1010 bool "ACPI (Advanced Configuration and Power Interface) Support"1111 depends on ARCH_SUPPORTS_ACPI1212 select PNP1313+ select NLS1314 default y if X861415 help1516 Advanced Configuration and Power Interface (ACPI) support for
···10541054 goto error0;10551055 }1056105610571057- /*10581058- * ACPI 2.0 requires the EC driver to be loaded and work before10591059- * the EC device is found in the namespace (i.e. before10601060- * acpi_load_tables() is called).10611061- *10621062- * This is accomplished by looking for the ECDT table, and getting10631063- * the EC parameters out of that.10641064- *10651065- * Ignore the result. Not having an ECDT is not fatal.10661066- */10671067- status = acpi_ec_ecdt_probe();10681068-10691057#ifdef CONFIG_X8610701058 if (!acpi_ioapic) {10711059 /* compatible (0) means level (3) */···11291141 "Unable to load the System Description Tables\n");11301142 goto error1;11311143 }11441144+11451145+ /*11461146+ * ACPI 2.0 requires the EC driver to be loaded and work before the EC11471147+ * device is found in the namespace.11481148+ *11491149+ * This is accomplished by looking for the ECDT table and getting the EC11501150+ * parameters out of that.11511151+ *11521152+ * Do that before calling acpi_initialize_objects() which may trigger EC11531153+ * address space accesses.11541154+ */11551155+ acpi_ec_ecdt_probe();1132115611331157 status = acpi_enable_subsystem(ACPI_NO_ACPI_ENABLE);11341158 if (ACPI_FAILURE(status)) {
···2020#define GPI1_LDO_ON (3 << 0)2121#define GPI1_LDO_OFF (4 << 0)22222323-#define AXP288_ADC_TS_PIN_GPADC 0xf22424-#define AXP288_ADC_TS_PIN_ON 0xf32323+#define AXP288_ADC_TS_CURRENT_ON_OFF_MASK GENMASK(1, 0)2424+#define AXP288_ADC_TS_CURRENT_OFF (0 << 0)2525+#define AXP288_ADC_TS_CURRENT_ON_WHEN_CHARGING (1 << 0)2626+#define AXP288_ADC_TS_CURRENT_ON_ONDEMAND (2 << 0)2727+#define AXP288_ADC_TS_CURRENT_ON (3 << 0)25282629static struct pmic_table power_table[] = {2730 {···215212 */216213static int intel_xpower_pmic_get_raw_temp(struct regmap *regmap, int reg)217214{215215+ int ret, adc_ts_pin_ctrl;218216 u8 buf[2];219219- int ret;220217221221- ret = regmap_write(regmap, AXP288_ADC_TS_PIN_CTRL,222222- AXP288_ADC_TS_PIN_GPADC);218218+ /*219219+ * The current-source used for the battery temp-sensor (TS) is shared220220+ * with the GPADC. For proper fuel-gauge and charger operation the TS221221+ * current-source needs to be permanently on. But to read the GPADC we222222+ * need to temporarily switch the TS current-source to ondemand, so that223223+ * the GPADC can use it, otherwise we will always read an all 0 value.224224+ *225225+ * Note that the switching from on to on-ondemand is not necessary226226+ * when the TS current-source is off (this happens on devices which227227+ * do not use the TS-pin).228228+ */229229+ ret = regmap_read(regmap, AXP288_ADC_TS_PIN_CTRL, &adc_ts_pin_ctrl);223230 if (ret)224231 return ret;225232226226- /* After switching to the GPADC pin give things some time to settle */227227- usleep_range(6000, 10000);233233+ if (adc_ts_pin_ctrl & AXP288_ADC_TS_CURRENT_ON_OFF_MASK) {234234+ ret = regmap_update_bits(regmap, AXP288_ADC_TS_PIN_CTRL,235235+ AXP288_ADC_TS_CURRENT_ON_OFF_MASK,236236+ AXP288_ADC_TS_CURRENT_ON_ONDEMAND);237237+ if (ret)238238+ return ret;239239+240240+ /* Wait a bit after switching the current-source */241241+ usleep_range(6000, 10000);242242+ }228243229244 ret = regmap_bulk_read(regmap, AXP288_GP_ADC_H, buf, 2);230245 if (ret 

== 0)231246 ret = (buf[0] << 4) + ((buf[1] >> 4) & 0x0f);232247233233- regmap_write(regmap, AXP288_ADC_TS_PIN_CTRL, AXP288_ADC_TS_PIN_ON);248248+ if (adc_ts_pin_ctrl & AXP288_ADC_TS_CURRENT_ON_OFF_MASK) {249249+ regmap_update_bits(regmap, AXP288_ADC_TS_PIN_CTRL,250250+ AXP288_ADC_TS_CURRENT_ON_OFF_MASK,251251+ AXP288_ADC_TS_CURRENT_ON);252252+ }234253235254 return ret;236255}
+22
drivers/acpi/power.c
···131131 }132132}133133134134+static bool acpi_power_resource_is_dup(union acpi_object *package,135135+ unsigned int start, unsigned int i)136136+{137137+ acpi_handle rhandle, dup;138138+ unsigned int j;139139+140140+ /* The caller is expected to check the package element types */141141+ rhandle = package->package.elements[i].reference.handle;142142+ for (j = start; j < i; j++) {143143+ dup = package->package.elements[j].reference.handle;144144+ if (dup == rhandle)145145+ return true;146146+ }147147+148148+ return false;149149+}150150+134151int acpi_extract_power_resources(union acpi_object *package, unsigned int start,135152 struct list_head *list)136153{···167150 err = -ENODEV;168151 break;169152 }153153+154154+ /* Some ACPI tables contain duplicate power resource references */155155+ if (acpi_power_resource_is_dup(package, start, i))156156+ continue;157157+170158 err = acpi_add_power_resource(rhandle);171159 if (err)172160 break;
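acpi_power_resource_is_dup() above is a back-scan over the earlier package elements, skipping a reference when the same handle was already seen rather than failing. The same first-occurrence-wins logic in a few lines of Python (the handle strings are stand-ins):

```python
def add_unique(handles):
    """Skip duplicate resource handles, keeping first-occurrence order,
    like the O(n^2) back-scan in acpi_power_resource_is_dup()."""
    out = []
    for i, h in enumerate(handles):
        if any(h == prev for prev in handles[:i]):
            continue  # duplicate reference: skip it, don't error out
        out.append(h)
    return out

print(add_unique(["PWR0", "PWR1", "PWR0", "PWR2"]))  # ['PWR0', 'PWR1', 'PWR2']
```

This tolerance matters because, as the patch comment notes, some ACPI tables ship duplicate power resource references.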
+1-1
drivers/ata/Kconfig
···1091109110921092config PATA_ACPI10931093 tristate "ACPI firmware driver for PATA"10941094- depends on ATA_ACPI && ATA_BMDMA10941094+ depends on ATA_ACPI && ATA_BMDMA && PCI10951095 help10961096 This option enables an ACPI method driver which drives10971097 motherboard PATA controller interfaces through the ACPI
···121121 * Compute the autosuspend-delay expiration time based on the device's122122 * power.last_busy time. If the delay has already expired or is disabled123123 * (negative) or the power.use_autosuspend flag isn't set, return 0.124124- * Otherwise return the expiration time in jiffies (adjusted to be nonzero).124124+ * Otherwise return the expiration time in nanoseconds (adjusted to be nonzero).125125 *126126 * This function may be called either with or without dev->power.lock held.127127 * Either way it can be racy, since power.last_busy may be updated at any time.···141141142142 last_busy = READ_ONCE(dev->power.last_busy);143143144144- expires = last_busy + autosuspend_delay * NSEC_PER_MSEC;144144+ expires = last_busy + (u64)autosuspend_delay * NSEC_PER_MSEC;145145 if (expires <= now)146146 expires = 0; /* Already expired. */147147···525525 * We add a slack of 25% to gather wakeups526526 * without sacrificing the granularity.527527 */528528- u64 slack = READ_ONCE(dev->power.autosuspend_delay) *528528+ u64 slack = (u64)READ_ONCE(dev->power.autosuspend_delay) *529529 (NSEC_PER_MSEC >> 2);530530531531 dev->power.timer_expires = expires;···905905 spin_lock_irqsave(&dev->power.lock, flags);906906907907 expires = dev->power.timer_expires;908908- /* If 'expire' is after 'jiffies' we've been called too early. */908908+ /*909909+ * If 'expires' is after the current time, we've been called910910+ * too early.911911+ */909912 if (expires > 0 && expires < ktime_to_ns(ktime_get())) {910913 dev->power.timer_expires = 0;911914 rpm_suspend(dev, dev->power.timer_autosuspends ?
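The `(u64)` casts added above keep the millisecond delay from being multiplied in 32-bit arithmetic, which wraps once the product exceeds 2^32 ns, i.e. for autosuspend delays over roughly 2147 ms (signed overflow is formally undefined in C; wraparound is shown here purely for illustration). A quick Python check:

```python
NSEC_PER_MSEC = 1_000_000

def to_u32(x):
    """Truncate to 32 bits, as a 32-bit multiply result would be."""
    return x & 0xffffffff

delay_ms = 5000                            # a delay > ~2147 ms
narrow = to_u32(delay_ms * NSEC_PER_MSEC)  # 32-bit product wraps
wide = delay_ms * NSEC_PER_MSEC            # the (u64) cast avoids that
print(narrow == wide)  # False: the 32-bit result wrapped
```

For a 5-second delay the truncated product is about 0.7 s, which would fire the autosuspend timer far too early.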
+7-1
drivers/base/regmap/regmap-irq.c
···108108 * suppress pointless writes.109109 */110110 for (i = 0; i < d->chip->num_regs; i++) {111111+ if (!d->chip->mask_base)112112+ continue;113113+111114 reg = d->chip->mask_base +112115 (i * map->reg_stride * d->irq_reg_stride);113116 if (d->chip->mask_invert) {···261258 const struct regmap_irq_type *t = &irq_data->type;262259263260 if ((t->types_supported & type) != type)264264- return -ENOTSUPP;261261+ return 0;265262266263 reg = t->type_reg_offset / map->reg_stride;267264···591588 /* Mask all the interrupts by default */592589 for (i = 0; i < chip->num_regs; i++) {593590 d->mask_buf[i] = d->mask_buf_def[i];591591+ if (!chip->mask_base)592592+ continue;593593+594594 reg = chip->mask_base +595595 (i * map->reg_stride * d->irq_reg_stride);596596 if (chip->mask_invert)
+33-2
drivers/block/loop.c
···11901190 goto out_unlock;11911191 }1192119211931193+ if (lo->lo_offset != info->lo_offset ||11941194+ lo->lo_sizelimit != info->lo_sizelimit) {11951195+ sync_blockdev(lo->lo_device);11961196+ kill_bdev(lo->lo_device);11971197+ }11981198+11931199 /* I/O need to be drained during transfer transition */11941200 blk_mq_freeze_queue(lo->lo_queue);11951201···1224121812251219 if (lo->lo_offset != info->lo_offset ||12261220 lo->lo_sizelimit != info->lo_sizelimit) {12211221+ /* kill_bdev should have truncated all the pages */12221222+ if (lo->lo_device->bd_inode->i_mapping->nrpages) {12231223+ err = -EAGAIN;12241224+ pr_warn("%s: loop%d (%s) has still dirty pages (nrpages=%lu)\n",12251225+ __func__, lo->lo_number, lo->lo_file_name,12261226+ lo->lo_device->bd_inode->i_mapping->nrpages);12271227+ goto out_unfreeze;12281228+ }12271229 if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit)) {12281230 err = -EFBIG;12291231 goto out_unfreeze;···1457144314581444static int loop_set_block_size(struct loop_device *lo, unsigned long arg)14591445{14461446+ int err = 0;14471447+14601448 if (lo->lo_state != Lo_bound)14611449 return -ENXIO;1462145014631451 if (arg < 512 || arg > PAGE_SIZE || !is_power_of_2(arg))14641452 return -EINVAL;1465145314541454+ if (lo->lo_queue->limits.logical_block_size != arg) {14551455+ sync_blockdev(lo->lo_device);14561456+ kill_bdev(lo->lo_device);14571457+ }14581458+14661459 blk_mq_freeze_queue(lo->lo_queue);14601460+14611461+ /* kill_bdev should have truncated all the pages */14621462+ if (lo->lo_queue->limits.logical_block_size != arg &&14631463+ lo->lo_device->bd_inode->i_mapping->nrpages) {14641464+ err = -EAGAIN;14651465+ pr_warn("%s: loop%d (%s) has still dirty pages (nrpages=%lu)\n",14661466+ __func__, lo->lo_number, lo->lo_file_name,14671467+ lo->lo_device->bd_inode->i_mapping->nrpages);14681468+ goto out_unfreeze;14691469+ }1467147014681471 blk_queue_logical_block_size(lo->lo_queue, arg);14691472 blk_queue_physical_block_size(lo->lo_queue, 
arg);14701473 blk_queue_io_min(lo->lo_queue, arg);14711474 loop_update_dio(lo);14721472-14751475+out_unfreeze:14731476 blk_mq_unfreeze_queue(lo->lo_queue);1474147714751475- return 0;14781478+ return err;14761479}1477148014781481static int lo_simple_ioctl(struct loop_device *lo, unsigned int cmd,
+3-2
drivers/block/nbd.c
···288288 blk_queue_physical_block_size(nbd->disk->queue, config->blksize);289289 set_capacity(nbd->disk, config->bytesize >> 9);290290 if (bdev) {291291- if (bdev->bd_disk)291291+ if (bdev->bd_disk) {292292 bd_set_size(bdev, config->bytesize);293293- else293293+ set_blocksize(bdev, config->blksize);294294+ } else294295 bdev->bd_invalidated = 1;295296 bdput(bdev);296297 }
+1
drivers/block/null_blk.h
···9797#else9898static inline int null_zone_init(struct nullb_device *dev)9999{100100+ pr_err("null_blk: CONFIG_BLK_DEV_ZONED not enabled\n");100101 return -EINVAL;101102}102103static inline void null_zone_exit(struct nullb_device *dev) {}
+4-5
drivers/block/rbd.c
···59865986 struct list_head *tmp;59875987 int dev_id;59885988 char opt_buf[6];59895989- bool already = false;59905989 bool force = false;59915990 int ret;59925991···60186019 spin_lock_irq(&rbd_dev->lock);60196020 if (rbd_dev->open_count && !force)60206021 ret = -EBUSY;60216021- else60226022- already = test_and_set_bit(RBD_DEV_FLAG_REMOVING,60236023- &rbd_dev->flags);60226022+ else if (test_and_set_bit(RBD_DEV_FLAG_REMOVING,60236023+ &rbd_dev->flags))60246024+ ret = -EINPROGRESS;60246025 spin_unlock_irq(&rbd_dev->lock);60256026 }60266027 spin_unlock(&rbd_dev_list_lock);60276027- if (ret < 0 || already)60286028+ if (ret)60286029 return ret;6029603060306031 if (force) {
···8686 atomic64_t bd_count; /* no. of pages in backing device */8787 atomic64_t bd_reads; /* no. of reads from backing device */8888 atomic64_t bd_writes; /* no. of writes from backing device */8989- atomic64_t bd_wb_limit; /* writeback limit of backing device */9089#endif9190};9291···113114 */114115 bool claim; /* Protected by bdev->bd_mutex */115116 struct file *backing_dev;116116- bool stop_writeback;117117#ifdef CONFIG_ZRAM_WRITEBACK118118+ spinlock_t wb_limit_lock;119119+ bool wb_limit_enable;120120+ u64 bd_wb_limit;118121 struct block_device *bdev;119122 unsigned int old_block_size;120123 unsigned long *bitmap;
+4-8
drivers/cpufreq/cpufreq.c
···15301530{15311531 unsigned int ret_freq = 0;1532153215331533- if (!cpufreq_driver->get)15331533+ if (unlikely(policy_is_inactive(policy)) || !cpufreq_driver->get)15341534 return ret_freq;1535153515361536 ret_freq = cpufreq_driver->get(policy->cpu);1537153715381538 /*15391539- * Updating inactive policies is invalid, so avoid doing that. Also15401540- * if fast frequency switching is used with the given policy, the check15391539+ * If fast frequency switching is used with the given policy, the check15411540 * against policy->cur is pointless, so skip it in that case too.15421541 */15431543- if (unlikely(policy_is_inactive(policy)) || policy->fast_switch_enabled)15421542+ if (policy->fast_switch_enabled)15441543 return ret_freq;1545154415461545 if (ret_freq && policy->cur &&···1568156915691570 if (policy) {15701571 down_read(&policy->rwsem);15711571-15721572- if (!policy_is_inactive(policy))15731573- ret_freq = __cpufreq_get(policy);15741574-15721572+ ret_freq = __cpufreq_get(policy);15751573 up_read(&policy->rwsem);1576157415771575 cpufreq_cpu_put(policy);
···
 	struct spu_hw *spu = &iproc_priv.spu;
 	struct iproc_ctx_s *ctx = crypto_aead_ctx(cipher);
 	struct crypto_tfm *tfm = crypto_aead_tfm(cipher);
-	struct rtattr *rta = (void *)key;
-	struct crypto_authenc_key_param *param;
-	const u8 *origkey = key;
-	const unsigned int origkeylen = keylen;
-
-	int ret = 0;
+	struct crypto_authenc_keys keys;
+	int ret;

 	flow_log("%s() aead:%p key:%p keylen:%u\n", __func__, cipher, key,
 		 keylen);
 	flow_dump("  key: ", key, keylen);

-	if (!RTA_OK(rta, keylen))
-		goto badkey;
-	if (rta->rta_type != CRYPTO_AUTHENC_KEYA_PARAM)
-		goto badkey;
-	if (RTA_PAYLOAD(rta) < sizeof(*param))
+	ret = crypto_authenc_extractkeys(&keys, key, keylen);
+	if (ret)
 		goto badkey;

-	param = RTA_DATA(rta);
-	ctx->enckeylen = be32_to_cpu(param->enckeylen);
-
-	key += RTA_ALIGN(rta->rta_len);
-	keylen -= RTA_ALIGN(rta->rta_len);
-
-	if (keylen < ctx->enckeylen)
-		goto badkey;
-	if (ctx->enckeylen > MAX_KEY_SIZE)
+	if (keys.enckeylen > MAX_KEY_SIZE ||
+	    keys.authkeylen > MAX_KEY_SIZE)
 		goto badkey;

-	ctx->authkeylen = keylen - ctx->enckeylen;
+	ctx->enckeylen = keys.enckeylen;
+	ctx->authkeylen = keys.authkeylen;

-	if (ctx->authkeylen > MAX_KEY_SIZE)
-		goto badkey;
-
-	memcpy(ctx->enckey, key + ctx->authkeylen, ctx->enckeylen);
+	memcpy(ctx->enckey, keys.enckey, keys.enckeylen);
 	/* May end up padding auth key. So make sure it's zeroed. */
 	memset(ctx->authkey, 0, sizeof(ctx->authkey));
-	memcpy(ctx->authkey, key, ctx->authkeylen);
+	memcpy(ctx->authkey, keys.authkey, keys.authkeylen);

 	switch (ctx->alg->cipher_info.alg) {
 	case CIPHER_ALG_DES:
···
 			u32 tmp[DES_EXPKEY_WORDS];
 			u32 flags = CRYPTO_TFM_RES_WEAK_KEY;

-			if (des_ekey(tmp, key) == 0) {
+			if (des_ekey(tmp, keys.enckey) == 0) {
 				if (crypto_aead_get_flags(cipher) &
 				    CRYPTO_TFM_REQ_WEAK_KEY) {
 					crypto_aead_set_flags(cipher, flags);
···
 		break;
 	case CIPHER_ALG_3DES:
 		if (ctx->enckeylen == (DES_KEY_SIZE * 3)) {
-			const u32 *K = (const u32 *)key;
+			const u32 *K = (const u32 *)keys.enckey;
 			u32 flags = CRYPTO_TFM_RES_BAD_KEY_SCHED;

 			if (!((K[0] ^ K[2]) | (K[1] ^ K[3])) ||
···
 		ctx->fallback_cipher->base.crt_flags &= ~CRYPTO_TFM_REQ_MASK;
 		ctx->fallback_cipher->base.crt_flags |=
 		    tfm->crt_flags & CRYPTO_TFM_REQ_MASK;
-		ret =
-		    crypto_aead_setkey(ctx->fallback_cipher, origkey,
-				       origkeylen);
+		ret = crypto_aead_setkey(ctx->fallback_cipher, key, keylen);
 		if (ret) {
 			flow_log("  fallback setkey() returned:%d\n", ret);
 			tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
+1-1
drivers/crypto/caam/caamalg.c
···
 		 * Skip algorithms requiring message digests
 		 * if MD or MD size is not supported by device.
 		 */
-		if ((c2_alg_sel & ~OP_ALG_ALGSEL_SUBMASK) == 0x40 &&
+		if (is_mdha(c2_alg_sel) &&
 		    (!md_inst || t_alg->aead.maxauthsize > md_limit))
 			continue;
···
 	amdgpu_xgmi_add_device(adev);
 	amdgpu_amdkfd_device_init(adev);

-	if (amdgpu_sriov_vf(adev))
+	if (amdgpu_sriov_vf(adev)) {
+		amdgpu_virt_init_data_exchange(adev);
 		amdgpu_virt_release_full_gpu(adev, true);
+	}

 	return 0;
 }
···
 		goto failed;
 	}

-	if (amdgpu_sriov_vf(adev))
-		amdgpu_virt_init_data_exchange(adev);
-
 	amdgpu_fbdev_init(adev);

 	r = amdgpu_pm_sysfs_init(adev);
···
 		struct drm_framebuffer *fb = crtc->primary->fb;
 		struct amdgpu_bo *robj;

-		if (amdgpu_crtc->cursor_bo) {
+		if (amdgpu_crtc->cursor_bo && !adev->enable_virtual_display) {
 			struct amdgpu_bo *aobj = gem_to_amdgpu_bo(amdgpu_crtc->cursor_bo);
 			r = amdgpu_bo_reserve(aobj, true);
 			if (r == 0) {
···
 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
 		struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);

-		if (amdgpu_crtc->cursor_bo) {
+		if (amdgpu_crtc->cursor_bo && !adev->enable_virtual_display) {
 			struct amdgpu_bo *aobj = gem_to_amdgpu_bo(amdgpu_crtc->cursor_bo);
 			r = amdgpu_bo_reserve(aobj, true);
 			if (r == 0) {
···
 	r = amdgpu_ib_ring_tests(adev);

 error:
+	amdgpu_virt_init_data_exchange(adev);
 	amdgpu_virt_release_full_gpu(adev, true);
 	if (!r && adev->virt.gim_feature & AMDGIM_FEATURE_GIM_FLR_VRAMLOST) {
 		atomic_inc(&adev->vram_lost_counter);
+12-8
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
···
 		goto cleanup;
 	}

-	r = amdgpu_bo_pin(new_abo, amdgpu_display_supported_domains(adev));
-	if (unlikely(r != 0)) {
-		DRM_ERROR("failed to pin new abo buffer before flip\n");
-		goto unreserve;
+	if (!adev->enable_virtual_display) {
+		r = amdgpu_bo_pin(new_abo, amdgpu_display_supported_domains(adev));
+		if (unlikely(r != 0)) {
+			DRM_ERROR("failed to pin new abo buffer before flip\n");
+			goto unreserve;
+		}
 	}

 	r = amdgpu_ttm_alloc_gart(&new_abo->tbo);
···
 	amdgpu_bo_get_tiling_flags(new_abo, &tiling_flags);
 	amdgpu_bo_unreserve(new_abo);

-	work->base = amdgpu_bo_gpu_offset(new_abo);
+	if (!adev->enable_virtual_display)
+		work->base = amdgpu_bo_gpu_offset(new_abo);
 	work->target_vblank = target - (uint32_t)drm_crtc_vblank_count(crtc) +
 		amdgpu_get_vblank_counter_kms(dev, work->crtc_id);
···
 		goto cleanup;
 	}
 unpin:
-	if (unlikely(amdgpu_bo_unpin(new_abo) != 0)) {
-		DRM_ERROR("failed to unpin new abo in error path\n");
-	}
+	if (!adev->enable_virtual_display)
+		if (unlikely(amdgpu_bo_unpin(new_abo) != 0))
+			DRM_ERROR("failed to unpin new abo in error path\n");
+
 unreserve:
 	amdgpu_bo_unreserve(new_abo);
+14-8
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
···

 int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)
 {
+	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
 	int ret;

 	if (adev->pm.sysfs_initialized)
···
 			  "pp_power_profile_mode\n");
 		return ret;
 	}
-	ret = device_create_file(adev->dev,
-			&dev_attr_pp_od_clk_voltage);
-	if (ret) {
-		DRM_ERROR("failed to create device file "
-				"pp_od_clk_voltage\n");
-		return ret;
+	if (hwmgr->od_enabled) {
+		ret = device_create_file(adev->dev,
+				&dev_attr_pp_od_clk_voltage);
+		if (ret) {
+			DRM_ERROR("failed to create device file "
+					"pp_od_clk_voltage\n");
+			return ret;
+		}
 	}
 	ret = device_create_file(adev->dev,
 			&dev_attr_gpu_busy_percent);
···
 void amdgpu_pm_sysfs_fini(struct amdgpu_device *adev)
 {
+	struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
+
 	if (adev->pm.dpm_enabled == 0)
 		return;
···
 	device_remove_file(adev->dev, &dev_attr_pp_mclk_od);
 	device_remove_file(adev->dev,
 			&dev_attr_pp_power_profile_mode);
-	device_remove_file(adev->dev,
-			&dev_attr_pp_od_clk_voltage);
+	if (hwmgr->od_enabled)
+		device_remove_file(adev->dev,
+				&dev_attr_pp_od_clk_voltage);
 	device_remove_file(adev->dev, &dev_attr_gpu_busy_percent);
 }
···
 	struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);

 	dce_virtual_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
-	if (crtc->primary->fb) {
-		int r;
-		struct amdgpu_bo *abo;
-
-		abo = gem_to_amdgpu_bo(crtc->primary->fb->obj[0]);
-		r = amdgpu_bo_reserve(abo, true);
-		if (unlikely(r))
-			DRM_ERROR("failed to reserve abo before unpin\n");
-		else {
-			amdgpu_bo_unpin(abo);
-			amdgpu_bo_unreserve(abo);
-		}
-	}

 	amdgpu_crtc->pll_id = ATOM_PPLL_INVALID;
 	amdgpu_crtc->encoder = NULL;
···
 	spin_unlock_irqrestore(&adev->ddev->event_lock, flags);

 	drm_crtc_vblank_put(&amdgpu_crtc->base);
-	schedule_work(&works->unpin_work);
+	amdgpu_bo_unref(&works->old_abo);
+	kfree(works->shared);
+	kfree(works);

 	return 0;
 }
+35-15
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
···
 	u32 tmp;
 	u32 rb_bufsz;
 	u64 rb_addr, rptr_addr, wptr_gpu_addr;
-	int r;

 	/* Set the write pointer delay */
 	WREG32(mmCP_RB_WPTR_DELAY, 0);
···
 	amdgpu_ring_clear_ring(ring);
 	gfx_v8_0_cp_gfx_start(adev);
 	ring->sched.ready = true;
-	r = amdgpu_ring_test_helper(ring);

-	return r;
+	return 0;
 }

 static void gfx_v8_0_cp_compute_enable(struct amdgpu_device *adev, bool enable)
···
 		amdgpu_ring_write(kiq_ring, upper_32_bits(wptr_addr));
 	}

-	r = amdgpu_ring_test_helper(kiq_ring);
-	if (r)
-		DRM_ERROR("KCQ enable failed\n");
-	return r;
+	amdgpu_ring_commit(kiq_ring);
+
+	return 0;
 }

 static int gfx_v8_0_deactivate_hqd(struct amdgpu_device *adev, u32 req)
···
 	if (r)
 		goto done;

-	/* Test KCQs - reversing the order of rings seems to fix ring test failure
-	 * after GPU reset
-	 */
-	for (i = adev->gfx.num_compute_rings - 1; i >= 0; i--) {
-		ring = &adev->gfx.compute_ring[i];
-		r = amdgpu_ring_test_helper(ring);
-	}
-
 done:
 	return r;
+}
+
+static int gfx_v8_0_cp_test_all_rings(struct amdgpu_device *adev)
+{
+	int r, i;
+	struct amdgpu_ring *ring;
+
+	/* collect all the ring_tests here, gfx, kiq, compute */
+	ring = &adev->gfx.gfx_ring[0];
+	r = amdgpu_ring_test_helper(ring);
+	if (r)
+		return r;
+
+	ring = &adev->gfx.kiq.ring;
+	r = amdgpu_ring_test_helper(ring);
+	if (r)
+		return r;
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+		ring = &adev->gfx.compute_ring[i];
+		amdgpu_ring_test_helper(ring);
+	}
+
+	return 0;
 }

 static int gfx_v8_0_cp_resume(struct amdgpu_device *adev)
···
 	r = gfx_v8_0_kcq_resume(adev);
 	if (r)
 		return r;
+
+	r = gfx_v8_0_cp_test_all_rings(adev);
+	if (r)
+		return r;
+
 	gfx_v8_0_enable_gui_idle_interrupt(adev, true);

 	return 0;
···
 	if (REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_CP) ||
 	    REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_GFX))
 		gfx_v8_0_cp_gfx_resume(adev);
+
+	gfx_v8_0_cp_test_all_rings(adev);

 	adev->gfx.rlc.funcs->start(adev);
···
 config HSA_AMD
 	bool "HSA kernel driver for AMD GPU devices"
-	depends on DRM_AMDGPU && X86_64
-	imply AMD_IOMMU_V2
+	depends on DRM_AMDGPU && (X86_64 || ARM64)
+	imply AMD_IOMMU_V2 if X86_64
 	select MMU_NOTIFIER
 	help
 	  Enable this if you want to use HSA features on AMD GPU devices.
+8
drivers/gpu/drm/amd/amdkfd/kfd_crat.c
···
 	return 0;
 }

+#ifdef CONFIG_X86_64
 static int kfd_fill_iolink_info_for_cpu(int numa_node_id, int *avail_size,
 				uint32_t *num_entries,
 				struct crat_subtype_iolink *sub_type_hdr)
···
 	return 0;
 }
+#endif

 /* kfd_create_vcrat_image_cpu - Create Virtual CRAT for CPU
  *
···
 	struct crat_subtype_generic *sub_type_hdr;
 	int avail_size = *size;
 	int numa_node_id;
+#ifdef CONFIG_X86_64
 	uint32_t entries = 0;
+#endif
 	int ret = 0;

 	if (!pcrat_image || avail_size < VCRAT_SIZE_FOR_CPU)
···
 				sub_type_hdr->length);

 		/* Fill in Subtype: IO Link */
+#ifdef CONFIG_X86_64
 		ret = kfd_fill_iolink_info_for_cpu(numa_node_id, &avail_size,
 				&entries,
 				(struct crat_subtype_iolink *)sub_type_hdr);
···
 		sub_type_hdr = (typeof(sub_type_hdr))((char *)sub_type_hdr +
 				sub_type_hdr->length * entries);
+#else
+		pr_info("IO link not available for non x86 platforms\n");
+#endif

 		crat_table->num_domains++;
 	}
+14-7
drivers/gpu/drm/amd/amdkfd/kfd_topology.c
···
  * the GPU device is not already present in the topology device
  * list then return NULL. This means a new topology device has to
  * be created for this GPU.
- * TODO: Rather than assiging @gpu to first topology device withtout
- *	gpu attached, it will better to have more stringent check.
  */
 static struct kfd_topology_device *kfd_assign_gpu(struct kfd_dev *gpu)
 {
···
 	struct kfd_topology_device *out_dev = NULL;

 	down_write(&topology_lock);
-	list_for_each_entry(dev, &topology_device_list, list)
+	list_for_each_entry(dev, &topology_device_list, list) {
+		/* Discrete GPUs need their own topology device list
+		 * entries. Don't assign them to CPU/APU nodes.
+		 */
+		if (!gpu->device_info->needs_iommu_device &&
+		    dev->node_props.cpu_cores_count)
+			continue;
+
 		if (!dev->gpu && (dev->node_props.simd_count > 0)) {
 			dev->gpu = gpu;
 			out_dev = dev;
 			break;
 		}
+	}
 	up_write(&topology_lock);
 	return out_dev;
 }
···
 static int kfd_cpumask_to_apic_id(const struct cpumask *cpumask)
 {
-	const struct cpuinfo_x86 *cpuinfo;
 	int first_cpu_of_numa_node;

 	if (!cpumask || cpumask == cpu_none_mask)
···
 	first_cpu_of_numa_node = cpumask_first(cpumask);
 	if (first_cpu_of_numa_node >= nr_cpu_ids)
 		return -1;
-	cpuinfo = &cpu_data(first_cpu_of_numa_node);
-
-	return cpuinfo->apicid;
+#ifdef CONFIG_X86_64
+	return cpu_data(first_cpu_of_numa_node).apicid;
+#else
+	return first_cpu_of_numa_node;
+#endif
 }

 /* kfd_numa_node_to_apic_id - Returns the APIC ID of the first logical processor
+27-14
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···
 {
 	struct amdgpu_dm_connector *aconnector;
 	struct drm_connector *connector;
+	struct drm_dp_mst_topology_mgr *mgr;
+	int ret;
+	bool need_hotplug = false;

 	drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);

-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
-		aconnector = to_amdgpu_dm_connector(connector);
-		if (aconnector->dc_link->type == dc_connection_mst_branch &&
-		    !aconnector->mst_port) {
+	list_for_each_entry(connector, &dev->mode_config.connector_list,
+			    head) {
+		aconnector = to_amdgpu_dm_connector(connector);
+		if (aconnector->dc_link->type != dc_connection_mst_branch ||
+		    aconnector->mst_port)
+			continue;

-			if (suspend)
-				drm_dp_mst_topology_mgr_suspend(&aconnector->mst_mgr);
-			else
-				drm_dp_mst_topology_mgr_resume(&aconnector->mst_mgr);
-		}
+		mgr = &aconnector->mst_mgr;
+
+		if (suspend) {
+			drm_dp_mst_topology_mgr_suspend(mgr);
+		} else {
+			ret = drm_dp_mst_topology_mgr_resume(mgr);
+			if (ret < 0) {
+				drm_dp_mst_topology_mgr_set_mst(mgr, false);
+				need_hotplug = true;
+			}
+		}
 	}

 	drm_modeset_unlock(&dev->mode_config.connection_mutex);
+
+	if (need_hotplug)
+		drm_kms_helper_hotplug_event(dev);
 }

 /**
···
 	struct drm_plane_state *new_plane_state;
 	struct dm_plane_state *dm_new_plane_state;
 	enum dc_connection_type new_connection_type = dc_connection_none;
-	int ret;
 	int i;

 	/* power on hardware */
···
 		}
 	}

-	ret = drm_atomic_helper_resume(ddev, dm->cached_state);
+	drm_atomic_helper_resume(ddev, dm->cached_state);

 	dm->cached_state = NULL;

 	amdgpu_dm_irq_resume_late(adev);

-	return ret;
+	return 0;
 }

 /**
···
 			+ caps.min_input_signal * 0x101;

 	if (dc_link_set_backlight_level(dm->backlight_link,
-			brightness, 0, 0))
+			brightness, 0))
 		return 0;
 	else
 		return 1;
···
 	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
 		if (!drm_atomic_crtc_needs_modeset(new_crtc_state) &&
 		    !new_crtc_state->color_mgmt_changed &&
-		    !new_crtc_state->vrr_enabled)
+		    old_crtc_state->vrr_enabled == new_crtc_state->vrr_enabled)
 			continue;

 		if (!new_crtc_state->enable)
···

 	/* DMCU info */
 	unsigned int abm_level;
-	unsigned int bl_pwm_level;

 	/* from core_stream struct */
 	struct dc_context *ctx;
···

 		pipe_ctx->stream_res.audio->funcs->az_enable(pipe_ctx->stream_res.audio);

-		if (num_audio == 1 && pp_smu != NULL && pp_smu->set_pme_wa_enable != NULL)
+		if (num_audio >= 1 && pp_smu != NULL && pp_smu->set_pme_wa_enable != NULL)
 			/*this is the first audio. apply the PME w/a in order to wake AZ from D3*/
 			pp_smu->set_pme_wa_enable(&pp_smu->pp_smu);
 		/* un-mute audio */
···
 	pipe_ctx->stream_res.stream_enc->funcs->audio_mute_control(
 			pipe_ctx->stream_res.stream_enc, true);
 	if (pipe_ctx->stream_res.audio) {
+		struct pp_smu_funcs_rv *pp_smu = dc->res_pool->pp_smu;
+
 		if (option != KEEP_ACQUIRED_RESOURCE ||
 		    !dc->debug.az_endpoint_mute_only) {
 			/*only disalbe az_endpoint if power down or free*/
···
 			update_audio_usage(&dc->current_state->res_ctx, dc->res_pool, pipe_ctx->stream_res.audio, false);
 			pipe_ctx->stream_res.audio = NULL;
 		}
+		if (pp_smu != NULL && pp_smu->set_pme_wa_enable != NULL)
+			/*this is the first audio. apply the PME w/a in order to wake AZ from D3*/
+			pp_smu->set_pme_wa_enable(&pp_smu->pp_smu);

 		/* TODO: notify audio driver for if audio modes list changed
 		 * add audio mode list change flag */
+1-1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c
···
 	if (src_y_offset >= (int)param->viewport.height)
 		cur_en = 0;  /* not visible beyond bottom edge*/

-	if (src_y_offset < 0)
+	if (src_y_offset + (int)height <= 0)
 		cur_en = 0;  /* not visible beyond top edge*/

 	REG_UPDATE(CURSOR0_CONTROL,
+1-1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hubp.c
···
 	if (src_y_offset >= (int)param->viewport.height)
 		cur_en = 0;  /* not visible beyond bottom edge*/

-	if (src_y_offset < 0) //+ (int)hubp->curs_attr.height
+	if (src_y_offset + (int)hubp->curs_attr.height <= 0)
 		cur_en = 0;  /* not visible beyond top edge*/

 	if (cur_en && REG_READ(CURSOR_SURFACE_ADDRESS) == 0)
···
 	int (*host_init)(struct device *dev, void *gvt, const void *ops);
 	void (*host_exit)(struct device *dev, void *gvt);
 	int (*attach_vgpu)(void *vgpu, unsigned long *handle);
-	void (*detach_vgpu)(unsigned long handle);
+	void (*detach_vgpu)(void *vgpu);
 	int (*inject_msi)(unsigned long handle, u32 addr, u16 data);
 	unsigned long (*from_virt_to_mfn)(void *p);
 	int (*enable_page_track)(unsigned long handle, u64 gfn);
+26-4
drivers/gpu/drm/i915/gvt/kvmgt.c
···
 {
 	unsigned int index;
 	u64 virtaddr;
-	unsigned long req_size, pgoff = 0;
+	unsigned long req_size, pgoff, req_start;
 	pgprot_t pg_prot;
 	struct intel_vgpu *vgpu = mdev_get_drvdata(mdev);
···
 	pg_prot = vma->vm_page_prot;
 	virtaddr = vma->vm_start;
 	req_size = vma->vm_end - vma->vm_start;
-	pgoff = vgpu_aperture_pa_base(vgpu) >> PAGE_SHIFT;
+	pgoff = vma->vm_pgoff &
+		((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
+	req_start = pgoff << PAGE_SHIFT;
+
+	if (!intel_vgpu_in_aperture(vgpu, req_start))
+		return -EINVAL;
+	if (req_start + req_size >
+	    vgpu_aperture_offset(vgpu) + vgpu_aperture_sz(vgpu))
+		return -EINVAL;
+
+	pgoff = (gvt_aperture_pa_base(vgpu->gvt) >> PAGE_SHIFT) + pgoff;

 	return remap_pfn_range(vma, virtaddr, pgoff, req_size, pg_prot);
 }
···
 	return 0;
 }

-static void kvmgt_detach_vgpu(unsigned long handle)
+static void kvmgt_detach_vgpu(void *p_vgpu)
 {
-	/* nothing to do here */
+	int i;
+	struct intel_vgpu *vgpu = (struct intel_vgpu *)p_vgpu;
+
+	if (!vgpu->vdev.region)
+		return;
+
+	for (i = 0; i < vgpu->vdev.num_regions; i++)
+		if (vgpu->vdev.region[i].ops->release)
+			vgpu->vdev.region[i].ops->release(vgpu,
+					&vgpu->vdev.region[i]);
+	vgpu->vdev.num_regions = 0;
+	kfree(vgpu->vdev.region);
+	vgpu->vdev.region = NULL;
 }

 static int kvmgt_inject_msi(unsigned long handle, u32 addr, u16 data)
+1-1
drivers/gpu/drm/i915/gvt/mpt.h
···
 	if (!intel_gvt_host.mpt->detach_vgpu)
 		return;

-	intel_gvt_host.mpt->detach_vgpu(vgpu->handle);
+	intel_gvt_host.mpt->detach_vgpu(vgpu);
 }

 #define MSI_CAP_CONTROL(offset)	(offset + 2)
+42-22
drivers/gpu/drm/i915/gvt/scheduler.c
···
 	return 0;
 }

+static int
+intel_gvt_workload_req_alloc(struct intel_vgpu_workload *workload)
+{
+	struct intel_vgpu *vgpu = workload->vgpu;
+	struct intel_vgpu_submission *s = &vgpu->submission;
+	struct i915_gem_context *shadow_ctx = s->shadow_ctx;
+	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
+	struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id];
+	struct i915_request *rq;
+	int ret = 0;
+
+	lockdep_assert_held(&dev_priv->drm.struct_mutex);
+
+	if (workload->req)
+		goto out;
+
+	rq = i915_request_alloc(engine, shadow_ctx);
+	if (IS_ERR(rq)) {
+		gvt_vgpu_err("fail to allocate gem request\n");
+		ret = PTR_ERR(rq);
+		goto out;
+	}
+	workload->req = i915_request_get(rq);
+out:
+	return ret;
+}
+
 /**
  * intel_gvt_scan_and_shadow_workload - audit the workload by scanning and
  * shadow it as well, include ringbuffer,wa_ctx and ctx.
···
 	struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
 	struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id];
 	struct intel_context *ce;
-	struct i915_request *rq;
 	int ret;

 	lockdep_assert_held(&dev_priv->drm.struct_mutex);

-	if (workload->req)
+	if (workload->shadow)
 		return 0;

 	ret = set_context_ppgtt_from_shadow(workload, shadow_ctx);
···
 		goto err_shadow;
 	}

-	rq = i915_request_alloc(engine, shadow_ctx);
-	if (IS_ERR(rq)) {
-		gvt_vgpu_err("fail to allocate gem request\n");
-		ret = PTR_ERR(rq);
-		goto err_shadow;
-	}
-	workload->req = i915_request_get(rq);
-
-	ret = populate_shadow_context(workload);
-	if (ret)
-		goto err_req;
-
+	workload->shadow = true;
 	return 0;
-err_req:
-	rq = fetch_and_zero(&workload->req);
-	i915_request_put(rq);
 err_shadow:
 	release_shadow_wa_ctx(&workload->wa_ctx);
 err_unpin:
···
 	mutex_lock(&vgpu->vgpu_lock);
 	mutex_lock(&dev_priv->drm.struct_mutex);

+	ret = intel_gvt_workload_req_alloc(workload);
+	if (ret)
+		goto err_req;
+
 	ret = intel_gvt_scan_and_shadow_workload(workload);
 	if (ret)
 		goto out;

+	ret = populate_shadow_context(workload);
+	if (ret) {
+		release_shadow_wa_ctx(&workload->wa_ctx);
+		goto out;
+	}
+
 	ret = prepare_workload(workload);
-
 out:
-	if (ret)
-		workload->status = ret;
-
 	if (!IS_ERR_OR_NULL(workload->req)) {
 		gvt_dbg_sched("ring id %d submit workload to i915 %p\n",
 			      ring_id, workload->req);
 		i915_request_add(workload->req);
 		workload->dispatched = true;
 	}
-
+err_req:
+	if (ret)
+		workload->status = ret;
 	mutex_unlock(&dev_priv->drm.struct_mutex);
 	mutex_unlock(&vgpu->vgpu_lock);
 	return ret;
+1
drivers/gpu/drm/i915/gvt/scheduler.h
···
 	struct i915_request *req;
 	/* if this workload has been dispatched to i915? */
 	bool dispatched;
+	bool shadow;      /* if workload has done shadow of guest request */
 	int status;

 	struct intel_vgpu_mm *shadow_mm;
···
 int gen6_ppgtt_pin(struct i915_hw_ppgtt *base)
 {
 	struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(base);
+	int err;

 	/*
 	 * Workaround the limited maximum vma->pin_count and the aliasing_ppgtt
···
 	 * allocator works in address space sizes, so it's multiplied by page
 	 * size. We allocate at the top of the GTT to avoid fragmentation.
 	 */
-	return i915_vma_pin(ppgtt->vma,
-			    0, GEN6_PD_ALIGN,
-			    PIN_GLOBAL | PIN_HIGH);
+	err = i915_vma_pin(ppgtt->vma,
+			   0, GEN6_PD_ALIGN,
+			   PIN_GLOBAL | PIN_HIGH);
+	if (err)
+		goto unpin;
+
+	return 0;
+
+unpin:
+	ppgtt->pin_count = 0;
+	return err;
 }

 void gen6_ppgtt_unpin(struct i915_hw_ppgtt *base)
+14-9
drivers/gpu/drm/i915/i915_gpu_error.c
···
 {
 	struct i915_gpu_state *error;

+	/* Check if GPU capture has been disabled */
+	error = READ_ONCE(i915->gpu_error.first_error);
+	if (IS_ERR(error))
+		return error;
+
 	error = kzalloc(sizeof(*error), GFP_ATOMIC);
-	if (!error)
-		return NULL;
+	if (!error) {
+		i915_disable_error_state(i915, -ENOMEM);
+		return ERR_PTR(-ENOMEM);
+	}

 	kref_init(&error->ref);
 	error->i915 = i915;
···
 		return;

 	error = i915_capture_gpu_state(i915);
-	if (!error) {
-		DRM_DEBUG_DRIVER("out of memory, not capturing error state\n");
-		i915_disable_error_state(i915, -ENOMEM);
+	if (IS_ERR(error))
 		return;
-	}

 	i915_error_capture_msg(i915, error, engine_mask, error_msg);
 	DRM_INFO("%s\n", error->error_msg);
···
 	spin_lock_irq(&i915->gpu_error.lock);
 	error = i915->gpu_error.first_error;
-	if (error)
+	if (!IS_ERR_OR_NULL(error))
 		i915_gpu_state_get(error);
 	spin_unlock_irq(&i915->gpu_error.lock);
···
 	spin_lock_irq(&i915->gpu_error.lock);
 	error = i915->gpu_error.first_error;
-	i915->gpu_error.first_error = NULL;
+	if (error != ERR_PTR(-ENODEV)) /* if disabled, always disabled */
+		i915->gpu_error.first_error = NULL;
 	spin_unlock_irq(&i915->gpu_error.lock);

-	if (!IS_ERR(error))
+	if (!IS_ERR_OR_NULL(error))
 		i915_gpu_state_put(error);
 }
+3-1
drivers/gpu/drm/i915/i915_sysfs.c
···
 	ssize_t ret;

 	gpu = i915_first_error_state(i915);
-	if (gpu) {
+	if (IS_ERR(gpu)) {
+		ret = PTR_ERR(gpu);
+	} else if (gpu) {
 		ret = i915_gpu_state_copy_to_buffer(gpu, buf, off, count);
 		i915_gpu_state_put(gpu);
 	} else {
···
 	DRM_DEBUG_KMS("eDP panel supports PSR version %x\n",
 		      intel_dp->psr_dpcd[0]);

+	if (drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_NO_PSR)) {
+		DRM_DEBUG_KMS("PSR support not currently available for this panel\n");
+		return;
+	}
+
 	if (!(intel_dp->edp_dpcd[1] & DP_EDP_SET_POWER_CAP)) {
 		DRM_DEBUG_KMS("Panel lacks power state control, PSR cannot be enabled\n");
 		return;
 	}
+
 	dev_priv->psr.sink_support = true;
 	dev_priv->psr.sink_sync_latency =
 		intel_dp_get_sink_sync_latency(intel_dp);
···
 	case NV_DEVICE_INFO_V0_FERMI:
 	case NV_DEVICE_INFO_V0_KEPLER:
 	case NV_DEVICE_INFO_V0_MAXWELL:
+	case NV_DEVICE_INFO_V0_PASCAL:
+	case NV_DEVICE_INFO_V0_VOLTA:
+	case NV_DEVICE_INFO_V0_TURING:
 		ret = nv50_backlight_init(nv_encoder, &props, &ops);
 		break;
 	default:
···
 	bool "Laptop Hybrid Graphics - GPU switching support"
 	depends on X86
 	depends on ACPI
+	depends on PCI
 	select VGA_ARB
 	help
 	  Many laptops released in 2008/9/10 have two GPUs with a multiplexer
···
 		val *= 1000000ULL;
 		break;
 	case 2:
-		val = get_unaligned_be32(&power->update_tag) *
-			occ->powr_sample_time_us;
+		val = (u64)get_unaligned_be32(&power->update_tag) *
+			occ->powr_sample_time_us;
 		break;
 	case 3:
 		val = get_unaligned_be16(&power->value) * 1000000ULL;
···
 				       &power->update_tag);
 		break;
 	case 2:
-		val = get_unaligned_be32(&power->update_tag) *
-			occ->powr_sample_time_us;
+		val = (u64)get_unaligned_be32(&power->update_tag) *
+			occ->powr_sample_time_us;
 		break;
 	case 3:
 		val = get_unaligned_be16(&power->value) * 1000000ULL;
···
 				       &power->system.update_tag);
 		break;
 	case 2:
-		val = get_unaligned_be32(&power->system.update_tag) *
-			occ->powr_sample_time_us;
+		val = (u64)get_unaligned_be32(&power->system.update_tag) *
+			occ->powr_sample_time_us;
 		break;
 	case 3:
 		val = get_unaligned_be16(&power->system.value) * 1000000ULL;
···
 				       &power->proc.update_tag);
 		break;
 	case 6:
-		val = get_unaligned_be32(&power->proc.update_tag) *
-			occ->powr_sample_time_us;
+		val = (u64)get_unaligned_be32(&power->proc.update_tag) *
+			occ->powr_sample_time_us;
 		break;
 	case 7:
 		val = get_unaligned_be16(&power->proc.value) * 1000000ULL;
···
 				       &power->vdd.update_tag);
 		break;
 	case 10:
-		val = get_unaligned_be32(&power->vdd.update_tag) *
-			occ->powr_sample_time_us;
+		val = (u64)get_unaligned_be32(&power->vdd.update_tag) *
+			occ->powr_sample_time_us;
 		break;
 	case 11:
 		val = get_unaligned_be16(&power->vdd.value) * 1000000ULL;
···
 				       &power->vdn.update_tag);
 		break;
 	case 14:
-		val = get_unaligned_be32(&power->vdn.update_tag) *
-			occ->powr_sample_time_us;
+		val = (u64)get_unaligned_be32(&power->vdn.update_tag) *
+			occ->powr_sample_time_us;
 		break;
 	case 15:
 		val = get_unaligned_be16(&power->vdn.value) * 1000000ULL;
···
 					   data_arg.data);
 	}
 	case I2C_RETRIES:
+		if (arg > INT_MAX)
+			return -EINVAL;
+
 		client->adapter->retries = arg;
 		break;
 	case I2C_TIMEOUT:
+		if (arg > INT_MAX)
+			return -EINVAL;
+
 		/* For historical reasons, user-space sets the timeout
 		 * value in units of 10 ms.
 		 */
···
 {
 	int ret;

+	if (uverbs_attr_is_valid(attrs, UVERBS_ATTR_CORE_OUT))
+		return uverbs_copy_to_struct_or_zero(
+			attrs, UVERBS_ATTR_CORE_OUT, resp, resp_len);
+
 	if (copy_to_user(attrs->ucore.outbuf, resp,
 			 min(attrs->ucore.outlen, resp_len)))
 		return -EFAULT;
···
 		goto out_put;
 	}

+	if (uverbs_attr_is_valid(attrs, UVERBS_ATTR_CORE_OUT))
+		ret = uverbs_output_written(attrs, UVERBS_ATTR_CORE_OUT);
+
 	ret = 0;

 out_put:
···
 		return -ENOMEM;

 	qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs);
-	if (!qp)
+	if (!qp) {
+		ret = -EINVAL;
 		goto out;
+	}

 	is_ud = qp->qp_type == IB_QPT_UD;
 	sg_ind = 0;
+49-13
drivers/infiniband/core/uverbs_ioctl.c
···
 		       0, uattr->len - len);
 }

+static int uverbs_set_output(const struct uverbs_attr_bundle *bundle,
+			     const struct uverbs_attr *attr)
+{
+	struct bundle_priv *pbundle =
+		container_of(bundle, struct bundle_priv, bundle);
+	u16 flags;
+
+	flags = pbundle->uattrs[attr->ptr_attr.uattr_idx].flags |
+		UVERBS_ATTR_F_VALID_OUTPUT;
+	if (put_user(flags,
+		     &pbundle->user_attrs[attr->ptr_attr.uattr_idx].flags))
+		return -EFAULT;
+	return 0;
+}
+
 static int uverbs_process_idrs_array(struct bundle_priv *pbundle,
 				     const struct uverbs_api_attr *attr_uapi,
 				     struct uverbs_objs_arr_attr *attr,
···
 	}

 	/*
+	 * Until the drivers are revised to use the bundle directly we have to
+	 * assume that the driver wrote to its UHW_OUT and flag userspace
+	 * appropriately.
+	 */
+	if (!ret && pbundle->method_elm->has_udata) {
+		const struct uverbs_attr *attr =
+			uverbs_attr_get(&pbundle->bundle, UVERBS_ATTR_UHW_OUT);
+
+		if (!IS_ERR(attr))
+			ret = uverbs_set_output(&pbundle->bundle, attr);
+	}
+
+	/*
 	 * EPROTONOSUPPORT is ONLY to be returned if the ioctl framework can
 	 * not invoke the method because the request is not supported.  No
 	 * other cases should return this code.
···
 int uverbs_copy_to(const struct uverbs_attr_bundle *bundle, size_t idx,
 		   const void *from, size_t size)
 {
-	struct bundle_priv *pbundle =
-		container_of(bundle, struct bundle_priv, bundle);
 	const struct uverbs_attr *attr = uverbs_attr_get(bundle, idx);
-	u16 flags;
 	size_t min_size;

 	if (IS_ERR(attr))
···
 	if (copy_to_user(u64_to_user_ptr(attr->ptr_attr.data), from, min_size))
 		return -EFAULT;

-	flags = pbundle->uattrs[attr->ptr_attr.uattr_idx].flags |
-		UVERBS_ATTR_F_VALID_OUTPUT;
-	if (put_user(flags,
-		     &pbundle->user_attrs[attr->ptr_attr.uattr_idx].flags))
-		return -EFAULT;
-
-	return 0;
+	return uverbs_set_output(bundle, attr);
 }
 EXPORT_SYMBOL(uverbs_copy_to);
+
+/*
+ * This is only used if the caller has directly used copy_to_use to write the
+ * data. It signals to user space that the buffer is filled in.
+ */
+int uverbs_output_written(const struct uverbs_attr_bundle *bundle, size_t idx)
+{
+	const struct uverbs_attr *attr = uverbs_attr_get(bundle, idx);
+
+	if (IS_ERR(attr))
+		return PTR_ERR(attr);
+
+	return uverbs_set_output(bundle, attr);
+}

 int _uverbs_get_const(s64 *to, const struct uverbs_attr_bundle *attrs_bundle,
 		      size_t idx, s64 lower_bound, u64 upper_bound,
···
 {
 	const struct uverbs_attr *attr = uverbs_attr_get(bundle, idx);

-	if (clear_user(u64_to_user_ptr(attr->ptr_attr.data),
-		       attr->ptr_attr.len))
-		return -EFAULT;
+	if (size < attr->ptr_attr.len) {
+		if (clear_user(u64_to_user_ptr(attr->ptr_attr.data) + size,
+			       attr->ptr_attr.len - size))
+			return -EFAULT;
+	}
 	return uverbs_copy_to(bundle, idx, from, size);
 }
···504504 INIT_LIST_HEAD(&ctx->mm_head);505505 mutex_init(&ctx->mm_list_lock);506506507507- ctx->ah_tbl.va = dma_zalloc_coherent(&pdev->dev, map_len,508508- &ctx->ah_tbl.pa, GFP_KERNEL);507507+ ctx->ah_tbl.va = dma_alloc_coherent(&pdev->dev, map_len,508508+ &ctx->ah_tbl.pa, GFP_KERNEL);509509 if (!ctx->ah_tbl.va) {510510 kfree(ctx);511511 return ERR_PTR(-ENOMEM);···838838 return -ENOMEM;839839840840 for (i = 0; i < mr->num_pbls; i++) {841841- va = dma_zalloc_coherent(&pdev->dev, dma_len, &pa, GFP_KERNEL);841841+ va = dma_alloc_coherent(&pdev->dev, dma_len, &pa, GFP_KERNEL);842842 if (!va) {843843 ocrdma_free_mr_pbl_tbl(dev, mr);844844 status = -ENOMEM;
+2-2
drivers/infiniband/hw/qedr/verbs.c
···556556 return ERR_PTR(-ENOMEM);557557558558 for (i = 0; i < pbl_info->num_pbls; i++) {559559- va = dma_zalloc_coherent(&pdev->dev, pbl_info->pbl_size,560560- &pa, flags);559559+ va = dma_alloc_coherent(&pdev->dev, pbl_info->pbl_size, &pa,560560+ flags);561561 if (!va)562562 goto err;563563
+34-1
drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
···427427428428static inline enum pvrdma_wr_opcode ib_wr_opcode_to_pvrdma(enum ib_wr_opcode op)429429{430430- return (enum pvrdma_wr_opcode)op;430430+ switch (op) {431431+ case IB_WR_RDMA_WRITE:432432+ return PVRDMA_WR_RDMA_WRITE;433433+ case IB_WR_RDMA_WRITE_WITH_IMM:434434+ return PVRDMA_WR_RDMA_WRITE_WITH_IMM;435435+ case IB_WR_SEND:436436+ return PVRDMA_WR_SEND;437437+ case IB_WR_SEND_WITH_IMM:438438+ return PVRDMA_WR_SEND_WITH_IMM;439439+ case IB_WR_RDMA_READ:440440+ return PVRDMA_WR_RDMA_READ;441441+ case IB_WR_ATOMIC_CMP_AND_SWP:442442+ return PVRDMA_WR_ATOMIC_CMP_AND_SWP;443443+ case IB_WR_ATOMIC_FETCH_AND_ADD:444444+ return PVRDMA_WR_ATOMIC_FETCH_AND_ADD;445445+ case IB_WR_LSO:446446+ return PVRDMA_WR_LSO;447447+ case IB_WR_SEND_WITH_INV:448448+ return PVRDMA_WR_SEND_WITH_INV;449449+ case IB_WR_RDMA_READ_WITH_INV:450450+ return PVRDMA_WR_RDMA_READ_WITH_INV;451451+ case IB_WR_LOCAL_INV:452452+ return PVRDMA_WR_LOCAL_INV;453453+ case IB_WR_REG_MR:454454+ return PVRDMA_WR_FAST_REG_MR;455455+ case IB_WR_MASKED_ATOMIC_CMP_AND_SWP:456456+ return PVRDMA_WR_MASKED_ATOMIC_CMP_AND_SWP;457457+ case IB_WR_MASKED_ATOMIC_FETCH_AND_ADD:458458+ return PVRDMA_WR_MASKED_ATOMIC_FETCH_AND_ADD;459459+ case IB_WR_REG_SIG_MR:460460+ return PVRDMA_WR_REG_SIG_MR;461461+ default:462462+ return PVRDMA_WR_ERROR;463463+ }431464}432465433466static inline enum ib_wc_status pvrdma_wc_status_to_ib(
+2-2
drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
···890890 dev_info(&pdev->dev, "device version %d, driver version %d\n",891891 dev->dsr_version, PVRDMA_VERSION);892892893893- dev->dsr = dma_zalloc_coherent(&pdev->dev, sizeof(*dev->dsr),894894- &dev->dsrbase, GFP_KERNEL);893893+ dev->dsr = dma_alloc_coherent(&pdev->dev, sizeof(*dev->dsr),894894+ &dev->dsrbase, GFP_KERNEL);895895 if (!dev->dsr) {896896 dev_err(&pdev->dev, "failed to allocate shared region\n");897897 ret = -ENOMEM;
+6
drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c
···721721 wr->opcode == IB_WR_RDMA_WRITE_WITH_IMM)722722 wqe_hdr->ex.imm_data = wr->ex.imm_data;723723724724+ if (unlikely(wqe_hdr->opcode == PVRDMA_WR_ERROR)) {725725+ *bad_wr = wr;726726+ ret = -EINVAL;727727+ goto out;728728+ }729729+724730 switch (qp->ibqp.qp_type) {725731 case IB_QPT_GSI:726732 case IB_QPT_UD:
+11
drivers/input/misc/Kconfig
···851851 To compile this driver as a module, choose M here. The module will852852 be called sc27xx_vibra.853853854854+config INPUT_STPMIC1_ONKEY855855+ tristate "STPMIC1 PMIC Onkey support"856856+ depends on MFD_STPMIC1857857+ help858858+ Say Y to enable support for the onkey embedded in the STPMIC1 PMIC.859859+ The onkey can be used to wake up from low power modes and force a860860+ shut-down on long press.861861+862862+ To compile this driver as a module, choose M here: the863863+ module will be called stpmic1_onkey.864864+854865endif
···318318319319 /* Let the programs run for couple of ms and check the engine status */320320 usleep_range(3000, 6000);321321- lp55xx_read(chip, LP5523_REG_STATUS, &status);321321+ ret = lp55xx_read(chip, LP5523_REG_STATUS, &status);322322+ if (ret)323323+ return ret;322324 status &= LP5523_ENG_STATUS_MASK;323325324326 if (status != LP5523_ENG_STATUS_MASK) {
+1-6
drivers/md/md.c
···207207struct bio *bio_alloc_mddev(gfp_t gfp_mask, int nr_iovecs,208208 struct mddev *mddev)209209{210210- struct bio *b;211211-212210 if (!mddev || !bioset_initialized(&mddev->bio_set))213211 return bio_alloc(gfp_mask, nr_iovecs);214212215215- b = bio_alloc_bioset(gfp_mask, nr_iovecs, &mddev->bio_set);216216- if (!b)217217- return NULL;218218- return b;213213+ return bio_alloc_bioset(gfp_mask, nr_iovecs, &mddev->bio_set);219214}220215EXPORT_SYMBOL_GPL(bio_alloc_mddev);221216
···807807 struct vb2_v4l2_buffer *vbuf;808808 unsigned long flags;809809810810- cancel_delayed_work_sync(&dev->work_run);810810+ if (v4l2_m2m_get_curr_priv(dev->m2m_dev) == ctx)811811+ cancel_delayed_work_sync(&dev->work_run);812812+811813 for (;;) {812814 if (V4L2_TYPE_IS_OUTPUT(q->type))813815 vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+19-5
drivers/media/v4l2-core/v4l2-ioctl.c
···287287 const struct v4l2_window *win;288288 const struct v4l2_sdr_format *sdr;289289 const struct v4l2_meta_format *meta;290290+ u32 planes;290291 unsigned i;291292292293 pr_cont("type=%s", prt_names(p->type, v4l2_type_names));···318317 prt_names(mp->field, v4l2_field_names),319318 mp->colorspace, mp->num_planes, mp->flags,320319 mp->ycbcr_enc, mp->quantization, mp->xfer_func);321321- for (i = 0; i < mp->num_planes; i++)320320+ planes = min_t(u32, mp->num_planes, VIDEO_MAX_PLANES);321321+ for (i = 0; i < planes; i++)322322 printk(KERN_DEBUG "plane %u: bytesperline=%u sizeimage=%u\n", i,323323 mp->plane_fmt[i].bytesperline,324324 mp->plane_fmt[i].sizeimage);···15531551 if (unlikely(!ops->vidioc_s_fmt_vid_cap_mplane))15541552 break;15551553 CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);15541554+ if (p->fmt.pix_mp.num_planes > VIDEO_MAX_PLANES)15551555+ break;15561556 for (i = 0; i < p->fmt.pix_mp.num_planes; i++)15571557- CLEAR_AFTER_FIELD(p, fmt.pix_mp.plane_fmt[i].bytesperline);15571557+ CLEAR_AFTER_FIELD(&p->fmt.pix_mp.plane_fmt[i],15581558+ bytesperline);15581559 return ops->vidioc_s_fmt_vid_cap_mplane(file, fh, arg);15591560 case V4L2_BUF_TYPE_VIDEO_OVERLAY:15601561 if (unlikely(!ops->vidioc_s_fmt_vid_overlay))···15861581 if (unlikely(!ops->vidioc_s_fmt_vid_out_mplane))15871582 break;15881583 CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);15841584+ if (p->fmt.pix_mp.num_planes > VIDEO_MAX_PLANES)15851585+ break;15891586 for (i = 0; i < p->fmt.pix_mp.num_planes; i++)15901590- CLEAR_AFTER_FIELD(p, fmt.pix_mp.plane_fmt[i].bytesperline);15871587+ CLEAR_AFTER_FIELD(&p->fmt.pix_mp.plane_fmt[i],15881588+ bytesperline);15911589 return ops->vidioc_s_fmt_vid_out_mplane(file, fh, arg);15921590 case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:15931591 if (unlikely(!ops->vidioc_s_fmt_vid_out_overlay))···16561648 if (unlikely(!ops->vidioc_try_fmt_vid_cap_mplane))16571649 break;16581650 CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);16511651+ if (p->fmt.pix_mp.num_planes > 
VIDEO_MAX_PLANES)16521652+ break;16591653 for (i = 0; i < p->fmt.pix_mp.num_planes; i++)16601660- CLEAR_AFTER_FIELD(p, fmt.pix_mp.plane_fmt[i].bytesperline);16541654+ CLEAR_AFTER_FIELD(&p->fmt.pix_mp.plane_fmt[i],16551655+ bytesperline);16611656 return ops->vidioc_try_fmt_vid_cap_mplane(file, fh, arg);16621657 case V4L2_BUF_TYPE_VIDEO_OVERLAY:16631658 if (unlikely(!ops->vidioc_try_fmt_vid_overlay))···16891678 if (unlikely(!ops->vidioc_try_fmt_vid_out_mplane))16901679 break;16911680 CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);16811681+ if (p->fmt.pix_mp.num_planes > VIDEO_MAX_PLANES)16821682+ break;16921683 for (i = 0; i < p->fmt.pix_mp.num_planes; i++)16931693- CLEAR_AFTER_FIELD(p, fmt.pix_mp.plane_fmt[i].bytesperline);16841684+ CLEAR_AFTER_FIELD(&p->fmt.pix_mp.plane_fmt[i],16851685+ bytesperline);16941686 return ops->vidioc_try_fmt_vid_out_mplane(file, fh, arg);16951687 case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:16961688 if (unlikely(!ops->vidioc_try_fmt_vid_out_overlay))
+17-1
drivers/mfd/Kconfig
···102102config MFD_AT91_USART103103 tristate "AT91 USART Driver"104104 select MFD_CORE105105+ depends on ARCH_AT91 || COMPILE_TEST105106 help106107 Select this to get support for AT91 USART IP. This is a wrapper107108 over at91-usart-serial driver and usart-spi-driver. Only one function···215214config MFD_CROS_EC_CHARDEV216215 tristate "Chrome OS Embedded Controller userspace device interface"217216 depends on MFD_CROS_EC218218- select CROS_EC_CTL219217 ---help---220218 This driver adds support to talk with the ChromeOS EC from userspace.221219···18711871 Select this option to enable STM32 timers driver used18721872 for PWM and IIO Timer. This driver allow to share the18731873 registers between the others drivers.18741874+18751875+config MFD_STPMIC118761876+ tristate "Support for STPMIC1 PMIC"18771877+ depends on (I2C=y && OF)18781878+ select REGMAP_I2C18791879+ select REGMAP_IRQ18801880+ select MFD_CORE18811881+ help18821882+ Support for ST Microelectronics STPMIC1 PMIC. STPMIC1 has power on18831883+ key, watchdog and regulator functionalities which are supported via18841884+ the relevant subsystems. This driver provides core support for the18851885+ STPMIC1. In order to use the actual functionality of the device other18861886+ drivers must be enabled.18871887+18881888+ To compile this driver as a module, choose M here: the18891889+ module will be called stpmic1.1874189018751891menu "Multimedia Capabilities Port drivers"18761892 depends on ARCH_SA1100
···979979 * letting it generate the right frequencies for USB, MADC, and980980 * other purposes.981981 */982982-static inline int __init protect_pm_master(void)982982+static inline int protect_pm_master(void)983983{984984 int e = 0;985985···988988 return e;989989}990990991991-static inline int __init unprotect_pm_master(void)991991+static inline int unprotect_pm_master(void)992992{993993 int e = 0;994994
···394394 struct _vop_vdev *vdev = to_vopvdev(dev);395395 struct vop_device *vpdev = vdev->vpdev;396396 struct mic_device_ctrl __iomem *dc = vdev->dc;397397- int i, err, retry;397397+ int i, err, retry, queue_idx = 0;398398399399 /* We must have this many virtqueues. */400400 if (nvqs > ioread8(&vdev->desc->num_vq))401401 return -ENOENT;402402403403 for (i = 0; i < nvqs; ++i) {404404+ if (!names[i]) {405405+ vqs[i] = NULL;406406+ continue;407407+ }408408+404409 dev_dbg(_vop_dev(vdev), "%s: %d: %s\n",405410 __func__, i, names[i]);406406- vqs[i] = vop_find_vq(dev, i, callbacks[i], names[i],411411+ vqs[i] = vop_find_vq(dev, queue_idx++, callbacks[i], names[i],407412 ctx ? ctx[i] : false);408413 if (IS_ERR(vqs[i])) {409414 err = PTR_ERR(vqs[i]);
+1-1
drivers/mmc/core/host.c
···234234 if (device_property_read_bool(dev, "broken-cd"))235235 host->caps |= MMC_CAP_NEEDS_POLL;236236237237- ret = mmc_gpiod_request_cd(host, "cd", 0, true,237237+ ret = mmc_gpiod_request_cd(host, "cd", 0, false,238238 cd_debounce_delay_ms * 1000,239239 &cd_gpio_invert);240240 if (!ret)
+3-2
drivers/mmc/host/sdhci.c
···37633763 * Use zalloc to zero the reserved high 32-bits of 128-bit37643764 * descriptors so that they never need to be written.37653765 */37663766- buf = dma_zalloc_coherent(mmc_dev(mmc), host->align_buffer_sz +37673767- host->adma_table_sz, &dma, GFP_KERNEL);37663766+ buf = dma_alloc_coherent(mmc_dev(mmc),37673767+ host->align_buffer_sz + host->adma_table_sz,37683768+ &dma, GFP_KERNEL);37683769 if (!buf) {37693770 pr_warn("%s: Unable to allocate ADMA buffers - falling back to standard DMA\n",37703771 mmc_hostname(mmc));
+1-1
drivers/mtd/mtdcore.c
···522522 mtd->nvmem = nvmem_register(&config);523523 if (IS_ERR(mtd->nvmem)) {524524 /* Just ignore if there is no NVMEM support in the kernel */525525- if (PTR_ERR(mtd->nvmem) == -ENOSYS) {525525+ if (PTR_ERR(mtd->nvmem) == -EOPNOTSUPP) {526526 mtd->nvmem = NULL;527527 } else {528528 dev_err(&mtd->dev, "Failed to register NVMEM device\n");
···618618 list_add(&new->list, &mtd_partitions);619619 mutex_unlock(&mtd_partitions_mutex);620620621621- add_mtd_device(&new->mtd);621621+ ret = add_mtd_device(&new->mtd);622622+ if (ret)623623+ goto err_remove_part;622624623625 mtd_add_partition_attrs(new);626626+627627+ return 0;628628+629629+err_remove_part:630630+ mutex_lock(&mtd_partitions_mutex);631631+ list_del(&new->list);632632+ mutex_unlock(&mtd_partitions_mutex);633633+634634+ free_partition(new);635635+ pr_info("%s:%i\n", __func__, __LINE__);624636625637 return ret;626638}···724712{725713 struct mtd_part *slave;726714 uint64_t cur_offset = 0;727727- int i;715715+ int i, ret;728716729717 printk(KERN_NOTICE "Creating %d MTD partitions on \"%s\":\n", nbparts, master->name);730718731719 for (i = 0; i < nbparts; i++) {732720 slave = allocate_partition(master, parts + i, i, cur_offset);733721 if (IS_ERR(slave)) {734734- del_mtd_partitions(master);735735- return PTR_ERR(slave);722722+ ret = PTR_ERR(slave);723723+ goto err_del_partitions;736724 }737725738726 mutex_lock(&mtd_partitions_mutex);739727 list_add(&slave->list, &mtd_partitions);740728 mutex_unlock(&mtd_partitions_mutex);741729742742- add_mtd_device(&slave->mtd);730730+ ret = add_mtd_device(&slave->mtd);731731+ if (ret) {732732+ mutex_lock(&mtd_partitions_mutex);733733+ list_del(&slave->list);734734+ mutex_unlock(&mtd_partitions_mutex);735735+736736+ free_partition(slave);737737+ goto err_del_partitions;738738+ }739739+743740 mtd_add_partition_attrs(slave);744741 /* Look for subpartitions */745742 parse_mtd_partitions(&slave->mtd, parts[i].types, NULL);···757736 }758737759738 return 0;739739+740740+err_del_partitions:741741+ del_mtd_partitions(master);742742+743743+ return ret;760744}761745762746static DEFINE_SPINLOCK(part_parser_lock);
+1-1
drivers/mtd/nand/raw/denali.c
···13221322 }1323132313241324 /* clk rate info is needed for setup_data_interface */13251325- if (denali->clk_rate && denali->clk_x_rate)13251325+ if (!denali->clk_rate || !denali->clk_x_rate)13261326 chip->options |= NAND_KEEP_TIMINGS;1327132713281328 chip->legacy.dummy_controller.ops = &denali_controller_ops;
-21
drivers/mtd/nand/raw/fsmc_nand.c
···593593 dma_xfer(host, (void *)buf, len, DMA_TO_DEVICE);594594}595595596596-/* fsmc_select_chip - assert or deassert nCE */597597-static void fsmc_ce_ctrl(struct fsmc_nand_data *host, bool assert)598598-{599599- u32 pc = readl(host->regs_va + FSMC_PC);600600-601601- if (!assert)602602- writel_relaxed(pc & ~FSMC_ENABLE, host->regs_va + FSMC_PC);603603- else604604- writel_relaxed(pc | FSMC_ENABLE, host->regs_va + FSMC_PC);605605-606606- /*607607- * nCE line changes must be applied before returning from this608608- * function.609609- */610610- mb();611611-}612612-613596/*614597 * fsmc_exec_op - hook called by the core to execute NAND operations615598 *···609626 int i;610627611628 pr_debug("Executing operation [%d instructions]:\n", op->ninstrs);612612-613613- fsmc_ce_ctrl(host, true);614629615630 for (op_id = 0; op_id < op->ninstrs; op_id++) {616631 instr = &op->instrs[op_id];···666685 break;667686 }668687 }669669-670670- fsmc_ce_ctrl(host, false);671688672689 return ret;673690}
···28332833 if (ret)28342834 return ret;2835283528362836+ if (nandc->props->is_bam) {28372837+ free_bam_transaction(nandc);28382838+ nandc->bam_txn = alloc_bam_transaction(nandc);28392839+ if (!nandc->bam_txn) {28402840+ dev_err(nandc->dev,28412841+ "failed to allocate bam transaction\n");28422842+ return -ENOMEM;28432843+ }28442844+ }28452845+28362846 ret = mtd_device_register(mtd, NULL, 0);28372847 if (ret)28382848 nand_cleanup(chip);···28562846 struct device_node *dn = dev->of_node, *child;28572847 struct qcom_nand_host *host;28582848 int ret;28592859-28602860- if (nandc->props->is_bam) {28612861- free_bam_transaction(nandc);28622862- nandc->bam_txn = alloc_bam_transaction(nandc);28632863- if (!nandc->bam_txn) {28642864- dev_err(nandc->dev,28652865- "failed to allocate bam transaction\n");28662866- return -ENOMEM;28672867- }28682868- }2869284928702850 for_each_available_child_of_node(dn, child) {28712851 host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL);
+1-1
drivers/net/Kconfig
···519519 and destroy a failover master netdev and manages a primary and520520 standby slave netdevs that get registered via the generic failover521521 infrastructure. This can be used by paravirtual drivers to enable522522- an alternate low latency datapath. It alsoenables live migration of522522+ an alternate low latency datapath. It also enables live migration of523523 a VM with direct attached VF by failing over to the paravirtual524524 datapath when the VF is unplugged.525525
···24032403 return mv88e6xxx_g1_stats_clear(chip);24042404}2405240524062406+/* The mv88e6390 has some hidden registers used for debug and24072407+ * development. The errata also makes use of them.24082408+ */24092409+static int mv88e6390_hidden_write(struct mv88e6xxx_chip *chip, int port,24102410+ int reg, u16 val)24112411+{24122412+ u16 ctrl;24132413+ int err;24142414+24152415+ err = mv88e6xxx_port_write(chip, PORT_RESERVED_1A_DATA_PORT,24162416+ PORT_RESERVED_1A, val);24172417+ if (err)24182418+ return err;24192419+24202420+ ctrl = PORT_RESERVED_1A_BUSY | PORT_RESERVED_1A_WRITE |24212421+ PORT_RESERVED_1A_BLOCK | port << PORT_RESERVED_1A_PORT_SHIFT |24222422+ reg;24232423+24242424+ return mv88e6xxx_port_write(chip, PORT_RESERVED_1A_CTRL_PORT,24252425+ PORT_RESERVED_1A, ctrl);24262426+}24272427+24282428+static int mv88e6390_hidden_wait(struct mv88e6xxx_chip *chip)24292429+{24302430+ return mv88e6xxx_wait(chip, PORT_RESERVED_1A_CTRL_PORT,24312431+ PORT_RESERVED_1A, PORT_RESERVED_1A_BUSY);24322432+}24332433+24342434+24352435+static int mv88e6390_hidden_read(struct mv88e6xxx_chip *chip, int port,24362436+ int reg, u16 *val)24372437+{24382438+ u16 ctrl;24392439+ int err;24402440+24412441+ ctrl = PORT_RESERVED_1A_BUSY | PORT_RESERVED_1A_READ |24422442+ PORT_RESERVED_1A_BLOCK | port << PORT_RESERVED_1A_PORT_SHIFT |24432443+ reg;24442444+24452445+ err = mv88e6xxx_port_write(chip, PORT_RESERVED_1A_CTRL_PORT,24462446+ PORT_RESERVED_1A, ctrl);24472447+ if (err)24482448+ return err;24492449+24502450+ err = mv88e6390_hidden_wait(chip);24512451+ if (err)24522452+ return err;24532453+24542454+ return mv88e6xxx_port_read(chip, PORT_RESERVED_1A_DATA_PORT,24552455+ PORT_RESERVED_1A, val);24562456+}24572457+24582458+/* Check if the errata has already been applied. 
*/24592459+static bool mv88e6390_setup_errata_applied(struct mv88e6xxx_chip *chip)24602460+{24612461+ int port;24622462+ int err;24632463+ u16 val;24642464+24652465+ for (port = 0; port < mv88e6xxx_num_ports(chip); port++) {24662466+ err = mv88e6390_hidden_read(chip, port, 0, &val);24672467+ if (err) {24682468+ dev_err(chip->dev,24692469+ "Error reading hidden register: %d\n", err);24702470+ return false;24712471+ }24722472+ if (val != 0x01c0)24732473+ return false;24742474+ }24752475+24762476+ return true;24772477+}24782478+24792479+/* The 6390 copper ports have an errata which require poking magic24802480+ * values into undocumented hidden registers and then performing a24812481+ * software reset.24822482+ */24832483+static int mv88e6390_setup_errata(struct mv88e6xxx_chip *chip)24842484+{24852485+ int port;24862486+ int err;24872487+24882488+ if (mv88e6390_setup_errata_applied(chip))24892489+ return 0;24902490+24912491+ /* Set the ports into blocking mode */24922492+ for (port = 0; port < mv88e6xxx_num_ports(chip); port++) {24932493+ err = mv88e6xxx_port_set_state(chip, port, BR_STATE_DISABLED);24942494+ if (err)24952495+ return err;24962496+ }24972497+24982498+ for (port = 0; port < mv88e6xxx_num_ports(chip); port++) {24992499+ err = mv88e6390_hidden_write(chip, port, 0, 0x01c0);25002500+ if (err)25012501+ return err;25022502+ }25032503+25042504+ return mv88e6xxx_software_reset(chip);25052505+}25062506+24062507static int mv88e6xxx_setup(struct dsa_switch *ds)24072508{24082509 struct mv88e6xxx_chip *chip = ds->priv;···25152414 ds->slave_mii_bus = mv88e6xxx_default_mdio_bus(chip);2516241525172416 mutex_lock(&chip->reg_lock);24172417+24182418+ if (chip->info->ops->setup_errata) {24192419+ err = chip->info->ops->setup_errata(chip);24202420+ if (err)24212421+ goto unlock;24222422+ }2518242325192424 /* Cache the cmode of each port. 
*/25202425 for (i = 0; i < mv88e6xxx_num_ports(chip); i++) {···3333322633343227static const struct mv88e6xxx_ops mv88e6190_ops = {33353228 /* MV88E6XXX_FAMILY_6390 */32293229+ .setup_errata = mv88e6390_setup_errata,33363230 .irl_init_all = mv88e6390_g2_irl_init_all,33373231 .get_eeprom = mv88e6xxx_g2_get_eeprom8,33383232 .set_eeprom = mv88e6xxx_g2_set_eeprom8,···3377326933783270static const struct mv88e6xxx_ops mv88e6190x_ops = {33793271 /* MV88E6XXX_FAMILY_6390 */32723272+ .setup_errata = mv88e6390_setup_errata,33803273 .irl_init_all = mv88e6390_g2_irl_init_all,33813274 .get_eeprom = mv88e6xxx_g2_get_eeprom8,33823275 .set_eeprom = mv88e6xxx_g2_set_eeprom8,···3421331234223313static const struct mv88e6xxx_ops mv88e6191_ops = {34233314 /* MV88E6XXX_FAMILY_6390 */33153315+ .setup_errata = mv88e6390_setup_errata,34243316 .irl_init_all = mv88e6390_g2_irl_init_all,34253317 .get_eeprom = mv88e6xxx_g2_get_eeprom8,34263318 .set_eeprom = mv88e6xxx_g2_set_eeprom8,···3514340435153405static const struct mv88e6xxx_ops mv88e6290_ops = {35163406 /* MV88E6XXX_FAMILY_6390 */34073407+ .setup_errata = mv88e6390_setup_errata,35173408 .irl_init_all = mv88e6390_g2_irl_init_all,35183409 .get_eeprom = mv88e6xxx_g2_get_eeprom8,35193410 .set_eeprom = mv88e6xxx_g2_set_eeprom8,···3820370938213710static const struct mv88e6xxx_ops mv88e6390_ops = {38223711 /* MV88E6XXX_FAMILY_6390 */37123712+ .setup_errata = mv88e6390_setup_errata,38233713 .irl_init_all = mv88e6390_g2_irl_init_all,38243714 .get_eeprom = mv88e6xxx_g2_get_eeprom8,38253715 .set_eeprom = mv88e6xxx_g2_set_eeprom8,···3868375638693757static const struct mv88e6xxx_ops mv88e6390x_ops = {38703758 /* MV88E6XXX_FAMILY_6390 */37593759+ .setup_errata = mv88e6390_setup_errata,38713760 .irl_init_all = mv88e6390_g2_irl_init_all,38723761 .get_eeprom = mv88e6xxx_g2_get_eeprom8,38733762 .set_eeprom = mv88e6xxx_g2_set_eeprom8,
+5
drivers/net/dsa/mv88e6xxx/chip.h
···300300};301301302302struct mv88e6xxx_ops {303303+ /* Switch Setup Errata, called early in the switch setup to304304+ * allow any errata actions to be performed305305+ */306306+ int (*setup_errata)(struct mv88e6xxx_chip *chip);307307+303308 int (*ieee_pri_map)(struct mv88e6xxx_chip *chip);304309 int (*ip_pri_map)(struct mv88e6xxx_chip *chip);305310
···10191019 sizeof(struct atl1c_recv_ret_status) * rx_desc_count +10201020 8 * 4;1021102110221022- ring_header->desc = dma_zalloc_coherent(&pdev->dev, ring_header->size,10231023- &ring_header->dma, GFP_KERNEL);10221022+ ring_header->desc = dma_alloc_coherent(&pdev->dev, ring_header->size,10231023+ &ring_header->dma, GFP_KERNEL);10241024 if (unlikely(!ring_header->desc)) {10251025 dev_err(&pdev->dev, "could not get memory for DMA buffer\n");10261026 goto err_nomem;
+4-4
drivers/net/ethernet/broadcom/bcm63xx_enet.c
···936936937937 /* allocate rx dma ring */938938 size = priv->rx_ring_size * sizeof(struct bcm_enet_desc);939939- p = dma_zalloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL);939939+ p = dma_alloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL);940940 if (!p) {941941 ret = -ENOMEM;942942 goto out_freeirq_tx;···947947948948 /* allocate tx dma ring */949949 size = priv->tx_ring_size * sizeof(struct bcm_enet_desc);950950- p = dma_zalloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL);950950+ p = dma_alloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL);951951 if (!p) {952952 ret = -ENOMEM;953953 goto out_free_rx_ring;···2120212021212121 /* allocate rx dma ring */21222122 size = priv->rx_ring_size * sizeof(struct bcm_enet_desc);21232123- p = dma_zalloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL);21232123+ p = dma_alloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL);21242124 if (!p) {21252125 dev_err(kdev, "cannot allocate rx ring %u\n", size);21262126 ret = -ENOMEM;···2132213221332133 /* allocate tx dma ring */21342134 size = priv->tx_ring_size * sizeof(struct bcm_enet_desc);21352135- p = dma_zalloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL);21352135+ p = dma_alloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL);21362136 if (!p) {21372137 dev_err(kdev, "cannot allocate tx ring\n");21382138 ret = -ENOMEM;
+2-2
drivers/net/ethernet/broadcom/bcmsysport.c
···15061506 /* We just need one DMA descriptor which is DMA-able, since writing to15071507 * the port will allocate a new descriptor in its internal linked-list15081508 */15091509- p = dma_zalloc_coherent(kdev, sizeof(struct dma_desc), &ring->desc_dma,15101510- GFP_KERNEL);15091509+ p = dma_alloc_coherent(kdev, sizeof(struct dma_desc), &ring->desc_dma,15101510+ GFP_KERNEL);15111511 if (!p) {15121512 netif_err(priv, hw, priv->netdev, "DMA alloc failed\n");15131513 return -ENOMEM;
+6-6
drivers/net/ethernet/broadcom/bgmac.c
···634634635635 /* Alloc ring of descriptors */636636 size = BGMAC_TX_RING_SLOTS * sizeof(struct bgmac_dma_desc);637637- ring->cpu_base = dma_zalloc_coherent(dma_dev, size,638638- &ring->dma_base,639639- GFP_KERNEL);637637+ ring->cpu_base = dma_alloc_coherent(dma_dev, size,638638+ &ring->dma_base,639639+ GFP_KERNEL);640640 if (!ring->cpu_base) {641641 dev_err(bgmac->dev, "Allocation of TX ring 0x%X failed\n",642642 ring->mmio_base);···659659660660 /* Alloc ring of descriptors */661661 size = BGMAC_RX_RING_SLOTS * sizeof(struct bgmac_dma_desc);662662- ring->cpu_base = dma_zalloc_coherent(dma_dev, size,663663- &ring->dma_base,664664- GFP_KERNEL);662662+ ring->cpu_base = dma_alloc_coherent(dma_dev, size,663663+ &ring->dma_base,664664+ GFP_KERNEL);665665 if (!ring->cpu_base) {666666 dev_err(bgmac->dev, "Allocation of RX ring 0x%X failed\n",667667 ring->mmio_base);
···20812081 bool is_pf);2082208220832083#define BNX2X_ILT_ZALLOC(x, y, size) \20842084- x = dma_zalloc_coherent(&bp->pdev->dev, size, y, GFP_KERNEL)20842084+ x = dma_alloc_coherent(&bp->pdev->dev, size, y, GFP_KERNEL)2085208520862086#define BNX2X_ILT_FREE(x, y, size) \20872087 do { \
···37943794 /* If we have version number support, then check to see if the adapter37953795 * already has up-to-date PHY firmware loaded.37963796 */37973797- if (phy_fw_version) {37973797+ if (phy_fw_version) {37983798 new_phy_fw_vers = phy_fw_version(phy_fw_data, phy_fw_size);37993799 ret = t4_phy_fw_ver(adap, &cur_phy_fw_ver);38003800 if (ret < 0)
+1-1
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
···756756 * Allocate the hardware ring and PCI DMA bus address space for said.757757 */758758 size_t hwlen = nelem * hwsize + stat_size;759759- void *hwring = dma_zalloc_coherent(dev, hwlen, busaddrp, GFP_KERNEL);759759+ void *hwring = dma_alloc_coherent(dev, hwlen, busaddrp, GFP_KERNEL);760760761761 if (!hwring)762762 return NULL;
···732732 ((struct ipv6hdr *)ip_p)->nexthdr;733733}734734735735+#define short_frame(size) ((size) <= ETH_ZLEN + ETH_FCS_LEN)736736+735737static inline void mlx5e_handle_csum(struct net_device *netdev,736738 struct mlx5_cqe64 *cqe,737739 struct mlx5e_rq *rq,···754752 }755753756754 if (unlikely(test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state)))755755+ goto csum_unnecessary;756756+757757+ /* CQE csum doesn't cover padding octets in short ethernet758758+ * frames. And the pad field is appended prior to calculating759759+ * and appending the FCS field.760760+ *761761+ * Detecting these padded frames requires verifying and parsing762762+ * IP headers, so we simply force all those small frames to be763763+ * CHECKSUM_UNNECESSARY even if they are not padded.764764+ */765765+ if (short_frame(skb->len))757766 goto csum_unnecessary;758767759768 if (likely(is_last_ethertype_ip(skb, &network_depth, &proto))) {
+1
drivers/net/ethernet/mellanox/mlxsw/Kconfig
···7878 depends on IPV6 || IPV6=n7979 depends on NET_IPGRE || NET_IPGRE=n8080 depends on IPV6_GRE || IPV6_GRE=n8181+ depends on VXLAN || VXLAN=n8182 select GENERIC_ALLOCATOR8283 select PARMAN8384 select OBJAGG
···445445 if (pskb_trim_rcsum(skb, len))446446 goto drop;447447448448+ ph = pppoe_hdr(skb);448449 pn = pppoe_pernet(dev_net(dev));449450450451 /* Note that get_item does a sock_hold(), so sk_pppox(po)
+7-4
drivers/net/tun.c
···856856 err = 0;857857 }858858859859- rcu_assign_pointer(tfile->tun, tun);860860- rcu_assign_pointer(tun->tfiles[tun->numqueues], tfile);861861- tun->numqueues++;862862-863859 if (tfile->detached) {864860 tun_enable_queue(tfile);865861 } else {···872876 * refcnt.873877 */874878879879+ /* Publish tfile->tun and tun->tfiles only after we've fully880880+ * initialized tfile; otherwise we risk using a half-initialized881881+ * object.882882+ */883883+ rcu_assign_pointer(tfile->tun, tun);884884+ rcu_assign_pointer(tun->tfiles[tun->numqueues], tfile);885885+ tun->numqueues++;875886out:876887 return err;877888}
···9999 /* Status messages are allocated and initialized to 0. This is necessary100100 * since DR bit should be initialized to 0.101101 */102102- sring->va = dma_zalloc_coherent(dev, sz, &sring->pa, GFP_KERNEL);102102+ sring->va = dma_alloc_coherent(dev, sz, &sring->pa, GFP_KERNEL);103103 if (!sring->va)104104 return -ENOMEM;105105···381381 if (!ring->ctx)382382 goto err;383383384384- ring->va = dma_zalloc_coherent(dev, sz, &ring->pa, GFP_KERNEL);384384+ ring->va = dma_alloc_coherent(dev, sz, &ring->pa, GFP_KERNEL);385385 if (!ring->va)386386 goto err_free_ctx;387387388388 if (ring->is_rx) {389389 sz = sizeof(*ring->edma_rx_swtail.va);390390 ring->edma_rx_swtail.va =391391- dma_zalloc_coherent(dev, sz, &ring->edma_rx_swtail.pa,392392- GFP_KERNEL);391391+ dma_alloc_coherent(dev, sz, &ring->edma_rx_swtail.pa,392392+ GFP_KERNEL);393393 if (!ring->edma_rx_swtail.va)394394 goto err_free_va;395395 }
···9090 * Set MEDIUM priority on SQ creation9191 */9292 NVME_QUIRK_MEDIUM_PRIO_SQ = (1 << 7),9393+9494+ /*9595+ * Ignore device provided subnqn.9696+ */9797+ NVME_QUIRK_IGNORE_DEV_SUBNQN = (1 << 8),9398};949995100/*
+64-32
drivers/nvme/host/pci.c
···9595struct nvme_queue;96969797static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown);9898+static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode);989999100/*100101 * Represents an NVM Express device. Each nvme_dev is a PCI function.
···1020101910211020static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)10221021{10231023-	if (++nvmeq->cq_head == nvmeq->q_depth) {10221022+	if (nvmeq->cq_head == nvmeq->q_depth - 1) {10241023		nvmeq->cq_head = 0;10251024		nvmeq->cq_phase = !nvmeq->cq_phase;10251025+	} else {10261026+		nvmeq->cq_head++;10261027	}10271028}10281029
···14231420	return 0;14241421}1425142214231423+static void nvme_suspend_io_queues(struct nvme_dev *dev)14241424+{14251425+	int i;14261426+14271427+	for (i = dev->ctrl.queue_count - 1; i > 0; i--)14281428+		nvme_suspend_queue(&dev->queues[i]);14291429+}14301430+14261431static void nvme_disable_admin_queue(struct nvme_dev *dev, bool shutdown)14271432{14281433	struct nvme_queue *nvmeq = &dev->queues[0];
···14961485	if (dev->ctrl.queue_count > qid)14971486		return 0;1498148714991499-	nvmeq->cqes = dma_zalloc_coherent(dev->dev, CQ_SIZE(depth),15001500-					  &nvmeq->cq_dma_addr, GFP_KERNEL);14881488+	nvmeq->cqes = dma_alloc_coherent(dev->dev, CQ_SIZE(depth),14891489+					 &nvmeq->cq_dma_addr, GFP_KERNEL);15011490	if (!nvmeq->cqes)15021491		goto free_nvmeq;15031492
···18961885		struct nvme_host_mem_buf_desc *desc = &dev->host_mem_descs[i];18971886		size_t size = le32_to_cpu(desc->size) * dev->ctrl.page_size;1898188718991899-		dma_free_coherent(dev->dev, size, dev->host_mem_desc_bufs[i],19001900-				le64_to_cpu(desc->addr));18881888+		dma_free_attrs(dev->dev, size, dev->host_mem_desc_bufs[i],18891889+			       le64_to_cpu(desc->addr),18901890+			       DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_NO_WARN);19011891	}1902189219031893	kfree(dev->host_mem_desc_bufs);
···19271915	if (dev->ctrl.hmmaxd && dev->ctrl.hmmaxd < max_entries)19281916		max_entries = dev->ctrl.hmmaxd;1929191719301930-	descs = dma_zalloc_coherent(dev->dev, max_entries * sizeof(*descs),19311931-			&descs_dma, GFP_KERNEL);19181918+	descs = dma_alloc_coherent(dev->dev, max_entries * sizeof(*descs),19191919+				   &descs_dma, GFP_KERNEL);19321920	if (!descs)19331921		goto out;19341922
···19641952	while (--i >= 0) {19651953		size_t size = le32_to_cpu(descs[i].size) * dev->ctrl.page_size;1966195419671967-		dma_free_coherent(dev->dev, size, bufs[i],19681968-				le64_to_cpu(descs[i].addr));19551955+		dma_free_attrs(dev->dev, size, bufs[i],19561956+			       le64_to_cpu(descs[i].addr),19571957+			       DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_NO_WARN);19691958	}1970195919711960	kfree(bufs);
···20412028	return ret;20422029}2043203020312031+/* irq_queues covers admin queue */20442032static void nvme_calc_io_queues(struct nvme_dev *dev, unsigned int irq_queues)20452033{20462034	unsigned int this_w_queues = write_queues;2047203520362036+	WARN_ON(!irq_queues);20372037+20482038	/*20492049-	 * Setup read/write queue split20392039+	 * Setup read/write queue split, assign admin queue one independent20402040+	 * irq vector if irq_queues is > 1.20502041	 */20512051-	if (irq_queues == 1) {20422042+	if (irq_queues <= 2) {20522043		dev->io_queues[HCTX_TYPE_DEFAULT] = 1;20532044		dev->io_queues[HCTX_TYPE_READ] = 0;20542045		return;
···2060204320612044	/*20622045	 * If 'write_queues' is set, ensure it leaves room for at least20632063-	 * one read queue20462046+	 * one read queue and one admin queue20642047	 */20652048	if (this_w_queues >= irq_queues)20662066-		this_w_queues = irq_queues - 1;20492049+		this_w_queues = irq_queues - 2;2067205020682051	/*20692052	 * If 'write_queues' is set to zero, reads and writes will share20702053	 * a queue set.20712054	 */20722055	if (!this_w_queues) {20732073-		dev->io_queues[HCTX_TYPE_DEFAULT] = irq_queues;20562056+		dev->io_queues[HCTX_TYPE_DEFAULT] = irq_queues - 1;20742057		dev->io_queues[HCTX_TYPE_READ] = 0;20752058	} else {20762059		dev->io_queues[HCTX_TYPE_DEFAULT] = this_w_queues;20772077-		dev->io_queues[HCTX_TYPE_READ] = irq_queues - this_w_queues;20602060+		dev->io_queues[HCTX_TYPE_READ] = irq_queues - this_w_queues - 1;20782061	}20792062}20802063
···20992082		this_p_queues = nr_io_queues - 1;21002083		irq_queues = 1;21012084	} else {21022102-		irq_queues = nr_io_queues - this_p_queues;20852085+		irq_queues = nr_io_queues - this_p_queues + 1;21032086	}21042087	dev->io_queues[HCTX_TYPE_POLL] = this_p_queues;21052088
···21192102		 * If we got a failure and we're down to asking for just21202103		 * 1 + 1 queues, just ask for a single vector. We'll share21212104		 * that between the single IO queue and the admin queue.21052105+		 * Otherwise, we assign one independent vector to admin queue.21222106		 */21232123-		if (result >= 0 && irq_queues > 1)21072107+		if (irq_queues > 1)21242108			irq_queues = irq_sets[0] + irq_sets[1] + 1;2125210921262110		result = pci_alloc_irq_vectors_affinity(pdev, irq_queues,
···21482130	} while (1);2149213121502132	return result;21332133+}21342134+21352135+static void nvme_disable_io_queues(struct nvme_dev *dev)21362136+{21372137+	if (__nvme_disable_io_queues(dev, nvme_admin_delete_sq))21382138+		__nvme_disable_io_queues(dev, nvme_admin_delete_cq);21512139}2152214021532141static int nvme_setup_io_queues(struct nvme_dev *dev)
···21922168	} while (1);21932169	adminq->q_db = dev->dbs;2194217021712171+ retry:21952172	/* Deregister the admin queue's interrupt */21962173	pci_free_irq(pdev, 0, adminq);21972174
···22102185	result = max(result - 1, 1);22112186	dev->max_qid = result + dev->io_queues[HCTX_TYPE_POLL];2212218722132213-	dev_info(dev->ctrl.device, "%d/%d/%d default/read/poll queues\n",22142214-			dev->io_queues[HCTX_TYPE_DEFAULT],22152215-			dev->io_queues[HCTX_TYPE_READ],22162216-			dev->io_queues[HCTX_TYPE_POLL]);22172217-22182188	/*22192189	 * Should investigate if there's a performance win from allocating22202190	 * more queues than interrupt vectors; it might allow the submission22212191	 * path to scale better, even if the receive path is limited by the22222192	 * number of interrupts.22232193	 */22242224-22252194	result = queue_request_irq(adminq);22262195	if (result) {22272196		adminq->cq_vector = -1;22282197		return result;22292198	}22302199	set_bit(NVMEQ_ENABLED, &adminq->flags);22312231-	return nvme_create_io_queues(dev);22002200+22012201+	result = nvme_create_io_queues(dev);22022202+	if (result || dev->online_queues < 2)22032203+		return result;22042204+22052205+	if (dev->online_queues - 1 < dev->max_qid) {22062206+		nr_io_queues = dev->online_queues - 1;22072207+		nvme_disable_io_queues(dev);22082208+		nvme_suspend_io_queues(dev);22092209+		goto retry;22102210+	}22112211+	dev_info(dev->ctrl.device, "%d/%d/%d default/read/poll queues\n",22122212+					dev->io_queues[HCTX_TYPE_DEFAULT],22132213+					dev->io_queues[HCTX_TYPE_READ],22142214+					dev->io_queues[HCTX_TYPE_POLL]);22152215+	return 0;22322216}2233221722342218static void nvme_del_queue_end(struct request *req, blk_status_t error)
···22822248	return 0;22832249}2284225022852285-static bool nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode)22512251+static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode)22862252{22872253	int nr_queues = dev->online_queues - 1, sent = 0;22882254	unsigned long timeout;
···23282294		dev->tagset.nr_maps = 2; /* default + read */23292295		if (dev->io_queues[HCTX_TYPE_POLL])23302296			dev->tagset.nr_maps++;23312331-		dev->tagset.nr_maps = HCTX_MAX_TYPES;23322297		dev->tagset.timeout = NVME_IO_TIMEOUT;23332298		dev->tagset.numa_node = dev_to_node(dev->dev);23342299		dev->tagset.queue_depth =
···2443241024442411static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)24452412{24462446-	int i;24472413	bool dead = true;24482414	struct pci_dev *pdev = to_pci_dev(dev->dev);24492415
···24692437	nvme_stop_queues(&dev->ctrl);2470243824712439	if (!dead && dev->ctrl.queue_count > 0) {24722472-		if (nvme_disable_io_queues(dev, nvme_admin_delete_sq))24732473-			nvme_disable_io_queues(dev, nvme_admin_delete_cq);24402440+		nvme_disable_io_queues(dev);24742441		nvme_disable_admin_queue(dev, shutdown);24752442	}24762476-	for (i = dev->ctrl.queue_count - 1; i >= 0; i--)24772477-		nvme_suspend_queue(&dev->queues[i]);24782478-24432443+	nvme_suspend_io_queues(dev);24442444+	nvme_suspend_queue(&dev->queues[0]);24792445	nvme_pci_disable(dev);2480244624812447	blk_mq_tagset_busy_iter(&dev->tagset, nvme_cancel_request, &dev->ctrl);
···29762946	{ PCI_VDEVICE(INTEL, 0xf1a5),	/* Intel 600P/P3100 */29772947		.driver_data = NVME_QUIRK_NO_DEEPEST_PS |29782948				NVME_QUIRK_MEDIUM_PRIO_SQ },29492949+	{ PCI_VDEVICE(INTEL, 0xf1a6),	/* Intel 760p/Pro 7600p */29502950+		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },29792951	{ PCI_VDEVICE(INTEL, 0x5845),	/* Qemu emulated controller */29802952		.driver_data = NVME_QUIRK_IDENTIFY_CNS, },29812953	{ PCI_DEVICE(0x1bb1, 0x0100),   /* Seagate Nytro Flash Storage */
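The nvme_update_cq_head() hunk above replaces "increment, then compare against q_depth" with "test for the last valid slot, then increment", so cq_head never transiently holds an out-of-range value. A stand-alone userspace sketch of the fixed logic (struct and names modeled on the diff, not the actual driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of an NVMe completion queue's head/phase state. */
struct cq_model {
	uint16_t cq_head;	/* next completion slot to consume */
	uint16_t q_depth;	/* number of slots in the queue */
	int cq_phase;		/* expected phase bit for valid entries */
};

static void cq_model_update_head(struct cq_model *q)
{
	if (q->cq_head == q->q_depth - 1) {
		q->cq_head = 0;
		q->cq_phase = !q->cq_phase;	/* phase flips on wraparound */
	} else {
		q->cq_head++;
	}
}
```

Stepping a depth-3 queue three times wraps head back to 0 and flips the phase exactly once.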
+6-10
drivers/nvme/host/tcp.c
···15651565{15661566 nvme_tcp_stop_io_queues(ctrl);15671567 if (remove) {15681568- if (ctrl->ops->flags & NVME_F_FABRICS)15691569- blk_cleanup_queue(ctrl->connect_q);15681568+ blk_cleanup_queue(ctrl->connect_q);15701569 blk_mq_free_tag_set(ctrl->tagset);15711570 }15721571 nvme_tcp_free_io_queues(ctrl);···15861587 goto out_free_io_queues;15871588 }1588158915891589- if (ctrl->ops->flags & NVME_F_FABRICS) {15901590- ctrl->connect_q = blk_mq_init_queue(ctrl->tagset);15911591- if (IS_ERR(ctrl->connect_q)) {15921592- ret = PTR_ERR(ctrl->connect_q);15931593- goto out_free_tag_set;15941594- }15901590+ ctrl->connect_q = blk_mq_init_queue(ctrl->tagset);15911591+ if (IS_ERR(ctrl->connect_q)) {15921592+ ret = PTR_ERR(ctrl->connect_q);15931593+ goto out_free_tag_set;15951594 }15961595 } else {15971596 blk_mq_update_nr_hw_queues(ctrl->tagset,···16031606 return 0;1604160716051608out_cleanup_connect_q:16061606- if (new && (ctrl->ops->flags & NVME_F_FABRICS))16091609+ if (new)16071610 blk_cleanup_queue(ctrl->connect_q);16081611out_free_tag_set:16091612 if (new)···16171620{16181621 nvme_tcp_stop_queue(ctrl, 0);16191622 if (remove) {16201620- free_opal_dev(ctrl->opal_dev);16211623 blk_cleanup_queue(ctrl->admin_q);16221624 blk_mq_free_tag_set(ctrl->admin_tagset);16231625 }
+1-1
drivers/nvme/target/tcp.c
···1089108910901090static int nvmet_tcp_try_recv_one(struct nvmet_tcp_queue *queue)10911091{10921092- int result;10921092+ int result = 0;1093109310941094 if (unlikely(queue->rcv_state == NVMET_TCP_RECV_ERR))10951095 return 0;
-3
drivers/of/dynamic.c
···207207208208 if (!of_node_check_flag(np, OF_OVERLAY)) {209209 np->name = __of_get_property(np, "name", NULL);210210- np->type = __of_get_property(np, "device_type", NULL);211210 if (!np->name)212211 np->name = "<NULL>";213213- if (!np->type)214214- np->type = "<NULL>";215212216213 phandle = __of_get_property(np, "phandle", &sz);217214 if (!phandle)
···806806807807 if (!of_device_is_available(remote)) {808808 pr_debug("not available for remote node\n");809809+ of_node_put(remote);809810 return NULL;810811 }811812
+58-5
drivers/opp/core.c
···988988	kfree(opp);990990991991-static void _opp_kref_release(struct kref *kref)991991+static void _opp_kref_release(struct dev_pm_opp *opp,992992+			      struct opp_table *opp_table)992993{993993-	struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref);994994-	struct opp_table *opp_table = opp->opp_table;995995-996994	/*997995	 * Notify the changes in the availability of the operable998996	 * frequency/voltage list.
···10001002	opp_debug_remove_one(opp);10011003	list_del(&opp->node);10021004	kfree(opp);10051005+}1003100610071007+static void _opp_kref_release_unlocked(struct kref *kref)10081008+{10091009+	struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref);10101010+	struct opp_table *opp_table = opp->opp_table;10111011+10121012+	_opp_kref_release(opp, opp_table);10131013+}10141014+10151015+static void _opp_kref_release_locked(struct kref *kref)10161016+{10171017+	struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref);10181018+	struct opp_table *opp_table = opp->opp_table;10191019+10201020+	_opp_kref_release(opp, opp_table);10041021	mutex_unlock(&opp_table->lock);10051022}10061023
···1026101310271014void dev_pm_opp_put(struct dev_pm_opp *opp)10281015{10291029-	kref_put_mutex(&opp->kref, _opp_kref_release, &opp->opp_table->lock);10161016+	kref_put_mutex(&opp->kref, _opp_kref_release_locked,10171017+		       &opp->opp_table->lock);10301018}10311019EXPORT_SYMBOL_GPL(dev_pm_opp_put);10201020+10211021+static void dev_pm_opp_put_unlocked(struct dev_pm_opp *opp)10221022+{10231023+	kref_put(&opp->kref, _opp_kref_release_unlocked);10241024+}1032102510331026/**10341027 * dev_pm_opp_remove()  - Remove an OPP from OPP table
···10781059	dev_pm_opp_put_opp_table(opp_table);10791060}10801061EXPORT_SYMBOL_GPL(dev_pm_opp_remove);10621062+10631063+/**10641064+ * dev_pm_opp_remove_all_dynamic() - Remove all dynamically created OPPs10651065+ * @dev:	device for which we do this operation10661066+ *10671067+ * This function removes all dynamically created OPPs from the opp table.10681068+ */10691069+void dev_pm_opp_remove_all_dynamic(struct device *dev)10701070+{10711071+	struct opp_table *opp_table;10721072+	struct dev_pm_opp *opp, *temp;10731073+	int count = 0;10741074+10751075+	opp_table = _find_opp_table(dev);10761076+	if (IS_ERR(opp_table))10771077+		return;10781078+10791079+	mutex_lock(&opp_table->lock);10801080+	list_for_each_entry_safe(opp, temp, &opp_table->opp_list, node) {10811081+		if (opp->dynamic) {10821082+			dev_pm_opp_put_unlocked(opp);10831083+			count++;10841084+		}10851085+	}10861086+	mutex_unlock(&opp_table->lock);10871087+10881088+	/* Drop the references taken by dev_pm_opp_add() */10891089+	while (count--)10901090+		dev_pm_opp_put_opp_table(opp_table);10911091+10921092+	/* Drop the reference taken by _find_opp_table() */10931093+	dev_pm_opp_put_opp_table(opp_table);10941094+}10951095+EXPORT_SYMBOL_GPL(dev_pm_opp_remove_all_dynamic);1081109610821097struct dev_pm_opp *_opp_allocate(struct opp_table *table)10831098{
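The opp/core.c hunks above factor the release work out of _opp_kref_release() so it can be reached on two paths: a "locked" wrapper for kref_put_mutex() callers that need the table lock taken and dropped, and an "unlocked" wrapper for dev_pm_opp_remove_all_dynamic(), which already holds the lock while walking the list. A toy userspace model of that split (all names here are illustrative, not the kernel API):

```c
#include <assert.h>

/* Stand-ins for the opp_table/dev_pm_opp pair. "locked" models whether
 * the table mutex is currently held. */
struct toy_table { int locked; int live_opps; };
struct toy_opp { int refs; struct toy_table *table; };

/* Common teardown, shared by both wrappers (like _opp_kref_release). */
static void toy_release(struct toy_opp *opp)
{
	opp->table->live_opps--;
}

/* Caller did NOT hold the lock: take it, release, drop it
 * (models kref_put_mutex() + _opp_kref_release_locked()). */
static void toy_put_take_lock(struct toy_opp *opp)
{
	if (--opp->refs)
		return;
	opp->table->locked = 1;		/* kref_put_mutex takes the mutex */
	toy_release(opp);
	opp->table->locked = 0;		/* release path drops it */
}

/* Caller already holds the lock: just release
 * (models kref_put() + _opp_kref_release_unlocked()). */
static void toy_put_lock_held(struct toy_opp *opp)
{
	if (--opp->refs)
		return;
	toy_release(opp);		/* lock state untouched */
}
```

The point of the split is that the unlocked wrapper never touches the mutex, so it is safe to call while iterating the list under that same lock.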
+8-14
drivers/pci/Kconfig
···2121	  support for PCI-X and the foundations for PCI Express support.2222	  Say 'Y' here unless you know what you are doing.23232424+if PCI2525+2426config PCI_DOMAINS2527	bool2628	depends on PCI27292830config PCI_DOMAINS_GENERIC2931	bool3030-	depends on PCI3132	select PCI_DOMAINS32333334config PCI_SYSCALL
···38373938config PCI_MSI4039	bool "Message Signaled Interrupts (MSI and MSI-X)"4141-	depends on PCI4240	select GENERIC_MSI_IRQ4341	help4442	   This allows device drivers to enable MSI (Message Signaled
···5959config PCI_QUIRKS6060	default y6161	bool "Enable PCI quirk workarounds" if EXPERT6262-	depends on PCI6362	help6463	  This enables workarounds for various PCI chipset bugs/quirks.6564	  Disable this only if your target machine is unaffected by PCI
···66676768config PCI_DEBUG6869	bool "PCI Debugging"6969-	depends on PCI && DEBUG_KERNEL7070+	depends on DEBUG_KERNEL7071	help7172	  Say Y here if you want the PCI core to produce a bunch of debug7273	  messages to the system log.  Select this if you are having a
···76777778config PCI_REALLOC_ENABLE_AUTO7879	bool "Enable PCI resource re-allocation detection"7979-	depends on PCI8080	depends on PCI_IOV8181	help8282	  Say Y here if you want the PCI core to detect if PCI resource
···88908991config PCI_STUB9092	tristate "PCI Stub driver"9191-	depends on PCI9293	help9394	  Say Y or M here if you want be able to reserve a PCI device9495	  when it is going to be assigned to a guest operating system.
···969997100config PCI_PF_STUB98101	tristate "PCI PF Stub driver"9999-	depends on PCI100102	depends on PCI_IOV101103	help102104	  Say Y or M here if you want to enable support for devices that
···107111108112config XEN_PCIDEV_FRONTEND109113	tristate "Xen PCI Frontend"110110-	depends on PCI && X86 && XEN114114+	depends on X86 && XEN111115	select PCI_XEN112116	select XEN_XENBUS_FRONTEND113117	default y
···129133130134config PCI_IOV131135	bool "PCI IOV support"132132-	depends on PCI133136	select PCI_ATS134137	help135138	  I/O Virtualization is a PCI feature supported by some devices
···139144140145config PCI_PRI141146	bool "PCI PRI support"142142-	depends on PCI143147	select PCI_ATS144148	help145149	  PRI is the PCI Page Request Interface. It allows PCI devices that are
···148154149155config PCI_PASID150156	bool "PCI PASID support"151151-	depends on PCI152157	select PCI_ATS153158	help154159	  Process Address Space Identifiers (PASIDs) can be used by PCI devices
···160167161168config PCI_P2PDMA162169	bool "PCI peer-to-peer transfer support"163163-	depends on PCI && ZONE_DEVICE170170+	depends on ZONE_DEVICE164171	select GENERIC_ALLOCATOR165172	help166173	  Enables drivers to do PCI peer-to-peer transactions to and from
···177184178185config PCI_LABEL179186	def_bool y if (DMI || ACPI)180180-	depends on PCI181187	select NLS182188183189config PCI_HYPERV184190	tristate "Hyper-V PCI Frontend"185185-	depends on PCI && X86 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && X86_64191191+	depends on X86 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && X86_64186192	help187193	  The PCI device frontend driver allows the kernel to import arbitrary188194	  PCI devices from a PCI backend to support PCI driver domains.
···190198source "drivers/pci/controller/Kconfig"191199source "drivers/pci/endpoint/Kconfig"192200source "drivers/pci/switch/Kconfig"201201+202202+endif
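The drivers/pci/Kconfig hunks above all make the same move: the per-symbol `depends on PCI` lines are dropped in favor of one enclosing `if PCI` ... `endif` block, since symbols inside such a block inherit the dependency. A minimal sketch of the pattern (FOO_PCI_DRIVER is an illustrative symbol, not a real one):

```kconfig
if PCI

config FOO_PCI_DRIVER
	tristate "Hypothetical PCI driver"
	help
	  No "depends on PCI" line is needed here; the dependency is
	  inherited from the enclosing if-block.

endif
```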
···32323333/* register 0x01 */3434#define REF_FREF_SEL_25 BIT(0)3535-#define PHY_MODE_SATA (0x0 << 5)3535+#define PHY_BERLIN_MODE_SATA (0x0 << 5)36363737/* register 0x02 */3838#define USE_MAX_PLL_RATE BIT(12)···102102103103 /* set PHY mode and ref freq to 25 MHz */104104 phy_berlin_sata_reg_setbits(ctrl_reg, priv->phy_base, 0x01,105105- 0x00ff, REF_FREF_SEL_25 | PHY_MODE_SATA);105105+ 0x00ff,106106+ REF_FREF_SEL_25 | PHY_BERLIN_MODE_SATA);106107107108 /* set PHY up to 6 Gbps */108109 phy_berlin_sata_reg_setbits(ctrl_reg, priv->phy_base, 0x25,
+1
drivers/phy/ti/Kconfig
···8282 default y if TI_CPSW=y8383 depends on TI_CPSW || COMPILE_TEST8484 select GENERIC_PHY8585+ select REGMAP8586 default m8687 help8788 This driver supports configuring of the TI CPSW Port mode depending on
+44-3
drivers/platform/chrome/Kconfig
···4949 To compile this driver as a module, choose M here: the5050 module will be called chromeos_tbmc.51515252-config CROS_EC_CTL5353- tristate5454-5552config CROS_EC_I2C5653 tristate "ChromeOS Embedded Controller (I2C)"5754 depends on MFD_CROS_EC && I2C···107110108111 To compile this driver as a module, choose M here: the109112 module will be called cros_kbd_led_backlight.113113+114114+config CROS_EC_LIGHTBAR115115+ tristate "Chromebook Pixel's lightbar support"116116+ depends on MFD_CROS_EC_CHARDEV117117+ default MFD_CROS_EC_CHARDEV118118+ help119119+ This option exposes the Chromebook Pixel's lightbar to120120+ userspace.121121+122122+ To compile this driver as a module, choose M here: the123123+ module will be called cros_ec_lightbar.124124+125125+config CROS_EC_VBC126126+ tristate "ChromeOS EC vboot context support"127127+ depends on MFD_CROS_EC_CHARDEV && OF128128+ default MFD_CROS_EC_CHARDEV129129+ help130130+ This option exposes the ChromeOS EC vboot context nvram to131131+ userspace.132132+133133+ To compile this driver as a module, choose M here: the134134+ module will be called cros_ec_vbc.135135+136136+config CROS_EC_DEBUGFS137137+ tristate "Export ChromeOS EC internals in DebugFS"138138+ depends on MFD_CROS_EC_CHARDEV && DEBUG_FS139139+ default MFD_CROS_EC_CHARDEV140140+ help141141+ This option exposes the ChromeOS EC device internals to142142+ userspace.143143+144144+ To compile this driver as a module, choose M here: the145145+ module will be called cros_ec_debugfs.146146+147147+config CROS_EC_SYSFS148148+ tristate "ChromeOS EC control and information through sysfs"149149+ depends on MFD_CROS_EC_CHARDEV && SYSFS150150+ default MFD_CROS_EC_CHARDEV151151+ help152152+ This option exposes some sysfs attributes to control and get153153+ information from ChromeOS EC.154154+155155+ To compile this driver as a module, choose M here: the156156+ module will be called cros_ec_sysfs.110157111158endif # CHROMEOS_PLATFORMS
···1009100910101010config INTEL_IPS10111011 tristate "Intel Intelligent Power Sharing"10121012- depends on ACPI10121012+ depends on ACPI && PCI10131013 ---help---10141014 Intel Calpella platforms support dynamic power sharing between the10151015 CPU and GPU, maximizing performance in a given TDP. This driver,···1135113511361136config APPLE_GMUX11371137 tristate "Apple Gmux Driver"11381138- depends on ACPI11381138+ depends on ACPI && PCI11391139 depends on PNP11401140 depends on BACKLIGHT_CLASS_DEVICE11411141 depends on BACKLIGHT_APPLE=n || BACKLIGHT_APPLE···1174117411751175config INTEL_PMC_IPC11761176 tristate "Intel PMC IPC Driver"11771177- depends on ACPI11771177+ depends on ACPI && PCI11781178 ---help---11791179 This driver provides support for PMC control on some Intel platforms.11801180 The PMC is an ARC processor which defines IPC commands for communication
···9090 * Allocate space for DMA descriptors9191 * (add an extra element for link descriptor)9292 */9393- bd_ptr = dma_zalloc_coherent(dev,9494- (bd_num + 1) * sizeof(struct tsi721_dma_desc),9595- &bd_phys, GFP_ATOMIC);9393+ bd_ptr = dma_alloc_coherent(dev,9494+ (bd_num + 1) * sizeof(struct tsi721_dma_desc),9595+ &bd_phys, GFP_ATOMIC);9696 if (!bd_ptr)9797 return -ENOMEM;9898···108108 sts_size = ((bd_num + 1) >= TSI721_DMA_MINSTSSZ) ?109109 (bd_num + 1) : TSI721_DMA_MINSTSSZ;110110 sts_size = roundup_pow_of_two(sts_size);111111- sts_ptr = dma_zalloc_coherent(dev,111111+ sts_ptr = dma_alloc_coherent(dev,112112 sts_size * sizeof(struct tsi721_dma_sts),113113 &sts_phys, GFP_ATOMIC);114114 if (!sts_ptr) {
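The dma_zalloc_coherent() to dma_alloc_coherent() renames that recur throughout this series (tsi721, lpfc, megaraid, qedf, and others) are possible because dma_alloc_coherent() itself now returns zeroed memory, making the zalloc wrapper redundant. A userspace stand-in that models the same guarantee (calloc zeroes its allocation, like the DMA allocator; this is illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy substitute for dma_alloc_coherent(): callers may rely on the
 * returned buffer being zero-filled, with no separate "zalloc" variant. */
static void *toy_alloc_coherent(size_t size)
{
	return calloc(1, size);	/* zeroed, like dma_alloc_coherent */
}
```

Because the allocation is already zeroed, none of the converted call sites need an explicit memset after the rename.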
+7-2
drivers/remoteproc/remoteproc_virtio.c
···153153 const bool * ctx,154154 struct irq_affinity *desc)155155{156156- int i, ret;156156+ int i, ret, queue_idx = 0;157157158158 for (i = 0; i < nvqs; ++i) {159159- vqs[i] = rp_find_vq(vdev, i, callbacks[i], names[i],159159+ if (!names[i]) {160160+ vqs[i] = NULL;161161+ continue;162162+ }163163+164164+ vqs[i] = rp_find_vq(vdev, queue_idx++, callbacks[i], names[i],160165 ctx ? ctx[i] : false);161166 if (IS_ERR(vqs[i])) {162167 ret = PTR_ERR(vqs[i]);
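The remoteproc_virtio.c hunk above changes the find_vqs path so that entries with a NULL name are skipped entirely, and only the real virtqueues consume hardware queue indexes via a separate queue_idx counter, keeping the indexes dense. A stand-alone sketch of that index-assignment logic (illustrative, not the remoteproc implementation):

```c
#include <assert.h>
#include <stddef.h>

/* Assign dense queue indexes to named slots; NULL-named slots get no
 * queue (marked -1), mirroring the vqs[i] = NULL; continue; pattern. */
static int assign_queue_indexes(const char *names[], int nvqs, int idx_out[])
{
	int i, queue_idx = 0;

	for (i = 0; i < nvqs; i++) {
		if (!names[i]) {
			idx_out[i] = -1;	/* no queue for this slot */
			continue;
		}
		idx_out[i] = queue_idx++;
	}
	return queue_idx;	/* number of real queues created */
}
```

Without the separate counter, slot 2 below would be asked for as hardware queue 2 even though only one queue precedes it.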
+14-6
drivers/reset/Kconfig
···109109110110config RESET_SIMPLE111111 bool "Simple Reset Controller Driver" if COMPILE_TEST112112- default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED112112+ default ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED113113 help114114 This enables a simple reset controller driver for reset lines that115115 that can be asserted and deasserted by toggling bits in a contiguous,···127127 default MACH_STM32MP157128128 help129129 This enables the RCC reset controller driver for STM32 MPUs.130130+131131+config RESET_SOCFPGA132132+ bool "SoCFPGA Reset Driver" if COMPILE_TEST && !ARCH_SOCFPGA133133+ default ARCH_SOCFPGA134134+ select RESET_SIMPLE135135+ help136136+ This enables the reset driver for the SoCFPGA ARMv7 platforms. This137137+ driver gets initialized early during platform init calls.130138131139config RESET_SUNXI132140 bool "Allwinner SoCs Reset Driver" if COMPILE_TEST && !ARCH_SUNXI···171163 Say Y if you want to control reset signals provided by System Control172164 block, Media I/O block, Peripheral Block.173165174174-config RESET_UNIPHIER_USB3175175- tristate "USB3 reset driver for UniPhier SoCs"166166+config RESET_UNIPHIER_GLUE167167+ tristate "Reset driver in glue layer for UniPhier SoCs"176168 depends on (ARCH_UNIPHIER || COMPILE_TEST) && OF177169 default ARCH_UNIPHIER178170 select RESET_SIMPLE179171 help180180- Support for the USB3 core reset on UniPhier SoCs.181181- Say Y if you want to control reset signals provided by182182- USB3 glue layer.172172+ Support for peripheral core reset included in its own glue layer173173+ on UniPhier SoCs. Say Y if you want to control reset signals174174+ provided by the glue layer.183175184176config RESET_ZYNQ185177 bool "ZYNQ Reset Driver" if COMPILE_TEST
···795795 return rstc;796796}797797EXPORT_SYMBOL_GPL(devm_reset_control_array_get);798798+799799+static int reset_control_get_count_from_lookup(struct device *dev)800800+{801801+ const struct reset_control_lookup *lookup;802802+ const char *dev_id;803803+ int count = 0;804804+805805+ if (!dev)806806+ return -EINVAL;807807+808808+ dev_id = dev_name(dev);809809+ mutex_lock(&reset_lookup_mutex);810810+811811+ list_for_each_entry(lookup, &reset_lookup_list, list) {812812+ if (!strcmp(lookup->dev_id, dev_id))813813+ count++;814814+ }815815+816816+ mutex_unlock(&reset_lookup_mutex);817817+818818+ if (count == 0)819819+ count = -ENOENT;820820+821821+ return count;822822+}823823+824824+/**825825+ * reset_control_get_count - Count number of resets available with a device826826+ *827827+ * @dev: device for which to return the number of resets828828+ *829829+ * Returns positive reset count on success, or error number on failure and830830+ * on count being zero.831831+ */832832+int reset_control_get_count(struct device *dev)833833+{834834+ if (dev->of_node)835835+ return of_reset_control_get_count(dev->of_node);836836+837837+ return reset_control_get_count_from_lookup(dev);838838+}839839+EXPORT_SYMBOL_GPL(reset_control_get_count);
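The new reset_control_get_count_from_lookup() above walks the board-file lookup list, counts entries whose dev_id matches the device, and returns -ENOENT when nothing matches. A simplified model with an array standing in for the kernel's mutex-protected list (names are illustrative):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

struct toy_lookup { const char *dev_id; };

/* Count lookup entries matching dev_id; negative errno on failure,
 * mirroring reset_control_get_count_from_lookup(). */
static int toy_reset_count(const struct toy_lookup *tbl, int n,
			   const char *dev_id)
{
	int i, count = 0;

	if (!dev_id)
		return -EINVAL;
	for (i = 0; i < n; i++)
		if (!strcmp(tbl[i].dev_id, dev_id))
			count++;
	return count ? count : -ENOENT;
}
```

As in the kernel helper, a zero count is reported as an error rather than as 0, so callers can treat "no resets" and "lookup failed" uniformly.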
···18271827 * page, this is used as a priori size of SLI4_PAGE_SIZE for18281828 * the later DMA memory free.18291829 */18301830- viraddr = dma_zalloc_coherent(&phba->pcidev->dev,18311831- SLI4_PAGE_SIZE, &phyaddr,18321832- GFP_KERNEL);18301830+ viraddr = dma_alloc_coherent(&phba->pcidev->dev,18311831+ SLI4_PAGE_SIZE, &phyaddr,18321832+ GFP_KERNEL);18331833 /* In case of malloc fails, proceed with whatever we have */18341834 if (!viraddr)18351835 break;
+18-17
drivers/scsi/lpfc/lpfc_sli.c
···53625362	 * mailbox command.53635363	 */53645364	dma_size = *vpd_size;53655365-	dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev, dma_size,53665366-					   &dmabuf->phys, GFP_KERNEL);53655365+	dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev, dma_size,53665366+					  &dmabuf->phys, GFP_KERNEL);53675367	if (!dmabuf->virt) {53685368		kfree(dmabuf);53695369		return -ENOMEM;
···63006300		goto free_mem;63016301	}6302630263036303-	dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev,63036303+	dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev,63046304					  LPFC_RAS_MAX_ENTRY_SIZE,63056305-					  &dmabuf->phys,63066306-					  GFP_KERNEL);63056305+					  &dmabuf->phys, GFP_KERNEL);63076306	if (!dmabuf->virt) {63086307		kfree(dmabuf);63096308		rc = -ENOMEM;
···94079408		cmnd = CMD_XMIT_SEQUENCE64_CR;94089409		if (phba->link_flag & LS_LOOPBACK_MODE)94099410			bf_set(wqe_xo, &wqe->xmit_sequence.wge_ctl, 1);94119411+		/* fall through */94109412	case CMD_XMIT_SEQUENCE64_CR:94119413		/* word3 iocb=io_tag32 wqe=reserved */94129414		wqe->xmit_sequence.rsvd3 = 0;
···1352913529		case FC_STATUS_RQ_BUF_LEN_EXCEEDED:1353013530			lpfc_printf_log(phba, KERN_ERR, LOG_SLI,1353113531					"2537 Receive Frame Truncated!!\n");1353213532+			/* fall through */1353213533		case FC_STATUS_RQ_SUCCESS:1353313534			spin_lock_irqsave(&phba->hbalock, iflags);1353413535			lpfc_sli4_rq_release(hrq, drq);
···1393913938	case FC_STATUS_RQ_BUF_LEN_EXCEEDED:1394013939		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,1394113940				"6126 Receive Frame Truncated!!\n");1394213942-		/* Drop thru */1394113941+		/* fall through */1394313942	case FC_STATUS_RQ_SUCCESS:1394413943		spin_lock_irqsave(&phba->hbalock, iflags);1394513944		lpfc_sli4_rq_release(hrq, drq);
···1461414613		dmabuf = kzalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL);1461514614		if (!dmabuf)1461614615			goto out_fail;1461714617-		dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev,1461814618-						   hw_page_size, &dmabuf->phys,1461914619-						   GFP_KERNEL);1461614616+		dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev,1461714617+						  hw_page_size, &dmabuf->phys,1461814618+						  GFP_KERNEL);1462014619		if (!dmabuf->virt) {1462114620			kfree(dmabuf);1462214621			goto out_fail;
···1485114850				eq->entry_count);1485214851		if (eq->entry_count < 256)1485314852			return -EINVAL;1485414854-		/* otherwise default to smallest count (drop through) */1485314853+		/* fall through - otherwise default to smallest count */1485514854	case 256:1485614855		bf_set(lpfc_eq_context_count, &eq_create->u.request.context,1485714856		       LPFC_EQ_CNT_256);
···1498214981			       LPFC_CQ_CNT_WORD7);1498314982			break;1498414983		}1498514985-		/* Fall Thru */1498414984+		/* fall through */1498614985	default:1498714986		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,1498814987				"0361 Unsupported CQ count: "
···1499314992			status = -EINVAL;1499414993			goto out;1499514994		}1499614996-		/* otherwise default to smallest count (drop through) */1499514995+		/* fall through - otherwise default to smallest count */1499714996	case 256:1499814997		bf_set(lpfc_cq_context_count, &cq_create->u.request.context,1499914998		       LPFC_CQ_CNT_256);
···1515315152				       LPFC_CQ_CNT_WORD7);1515415153				break;1515515154			}1515615156-			/* Fall Thru */1515515155+			/* fall through */1515715156		default:1515815157			lpfc_printf_log(phba, KERN_ERR, LOG_SLI,1515915158					"3118 Bad CQ count. (%d)\n",
···1516215161				status = -EINVAL;1516315162				goto out;1516415163			}1516515165-			/* otherwise default to smallest (drop thru) */1516415164+			/* fall through - otherwise default to smallest */1516615165		case 256:1516715166			bf_set(lpfc_mbx_cq_create_set_cqe_cnt,1516815167			       &cq_set->u.request, LPFC_CQ_CNT_256);
···1543415433			status = -EINVAL;1543515434			goto out;1543615435		}1543715437-		/* otherwise default to smallest count (drop through) */1543615436+		/* fall through - otherwise default to smallest count */1543815437	case 16:1543915438		bf_set(lpfc_mq_context_ring_size,1544015439		       &mq_create_ext->u.request.context,
···1585315852			status = -EINVAL;1585415853			goto out;1585515854		}1585615856-		/* otherwise default to smallest count (drop through) */1585515855+		/* fall through - otherwise default to smallest count */1585715856	case 512:1585815857		bf_set(lpfc_rq_context_rqe_count,1585915858		       &rq_create->u.request.context,
···1599015989			status = -EINVAL;1599115990			goto out;1599215991		}1599315993-		/* otherwise default to smallest count (drop through) */1599215992+		/* fall through - otherwise default to smallest count */1599415993	case 512:1599515994		bf_set(lpfc_rq_context_rqe_count,1599615995		       &rq_create->u.request.context,
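The lpfc_sli.c hunks above rewrite ad-hoc "Drop thru" / "Fall Thru" comments into the wording that GCC's -Wimplicit-fallthrough comment heuristics recognize. A minimal sketch of the clamp-to-supported-count pattern those switches implement, with the fall-through explicitly marked (values and names are illustrative, not the lpfc limits):

```c
#include <assert.h>

/* Clamp a requested queue size to a supported power-of-two count,
 * defaulting to the smallest supported count for odd values. */
static int clamp_queue_count(int requested)
{
	int count;

	switch (requested) {
	default:
		if (requested > 1024)
			return -1;	/* unsupported */
		/* fall through - otherwise default to smallest count */
	case 256:
		count = 256;
		break;
	case 512:
		count = 512;
		break;
	case 1024:
		count = 1024;
		break;
	}
	return count;
}
```

With the comment in the recognized form, the intentional fall-through from `default:` into `case 256:` no longer trips the warning.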
+8-7
drivers/scsi/megaraid/megaraid_mbox.c
···967967 * Allocate the common 16-byte aligned memory for the handshake968968 * mailbox.969969 */970970- raid_dev->una_mbox64 = dma_zalloc_coherent(&adapter->pdev->dev,971971- sizeof(mbox64_t), &raid_dev->una_mbox64_dma,972972- GFP_KERNEL);970970+ raid_dev->una_mbox64 = dma_alloc_coherent(&adapter->pdev->dev,971971+ sizeof(mbox64_t),972972+ &raid_dev->una_mbox64_dma,973973+ GFP_KERNEL);973974974975 if (!raid_dev->una_mbox64) {975976 con_log(CL_ANN, (KERN_WARNING···996995 align;997996998997 // Allocate memory for commands issued internally999999- adapter->ibuf = dma_zalloc_coherent(&pdev->dev, MBOX_IBUF_SIZE,10001000- &adapter->ibuf_dma_h, GFP_KERNEL);998998+ adapter->ibuf = dma_alloc_coherent(&pdev->dev, MBOX_IBUF_SIZE,999999+ &adapter->ibuf_dma_h, GFP_KERNEL);10011000 if (!adapter->ibuf) {1002100110031002 con_log(CL_ANN, (KERN_WARNING···28982897 * Issue an ENQUIRY3 command to find out certain adapter parameters,28992898 * e.g., max channels, max commands etc.29002899 */29012901- pinfo = dma_zalloc_coherent(&adapter->pdev->dev, sizeof(mraid_pinfo_t),29022902- &pinfo_dma_h, GFP_KERNEL);29002900+ pinfo = dma_alloc_coherent(&adapter->pdev->dev, sizeof(mraid_pinfo_t),29012901+ &pinfo_dma_h, GFP_KERNEL);29032902 if (pinfo == NULL) {29042903 con_log(CL_ANN, (KERN_WARNING29052904 "megaraid: out of memory, %s %d\n", __func__,
···175175 /*176176 * Check if it is our interrupt177177 */178178- status = readl(®s->outbound_intr_status);178178+ status = megasas_readl(instance,179179+ ®s->outbound_intr_status);179180180181 if (status & 1) {181182 writel(status, ®s->outbound_intr_status);···690689 array_size = sizeof(struct MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY) *691690 MAX_MSIX_QUEUES_FUSION;692691693693- fusion->rdpq_virt = dma_zalloc_coherent(&instance->pdev->dev,694694- array_size, &fusion->rdpq_phys, GFP_KERNEL);692692+ fusion->rdpq_virt = dma_alloc_coherent(&instance->pdev->dev,693693+ array_size, &fusion->rdpq_phys,694694+ GFP_KERNEL);695695 if (!fusion->rdpq_virt) {696696 dev_err(&instance->pdev->dev,697697 "Failed from %s %d\n", __func__, __LINE__);
+3-2
drivers/scsi/mesh.c
···19151915 /* We use the PCI APIs for now until the generic one gets fixed19161916 * enough or until we get some macio-specific versions19171917 */19181918- dma_cmd_space = dma_zalloc_coherent(&macio_get_pci_dev(mdev)->dev,19191919- ms->dma_cmd_size, &dma_cmd_bus, GFP_KERNEL);19181918+ dma_cmd_space = dma_alloc_coherent(&macio_get_pci_dev(mdev)->dev,19191919+ ms->dma_cmd_size, &dma_cmd_bus,19201920+ GFP_KERNEL);19201921 if (dma_cmd_space == NULL) {19211922 printk(KERN_ERR "mesh: can't allocate DMA table\n");19221923 goto out_unmap;
+5-4
drivers/scsi/mvumi.c
···143143144144 case RESOURCE_UNCACHED_MEMORY:145145 size = round_up(size, 8);146146- res->virt_addr = dma_zalloc_coherent(&mhba->pdev->dev, size,147147- &res->bus_addr, GFP_KERNEL);146146+ res->virt_addr = dma_alloc_coherent(&mhba->pdev->dev, size,147147+ &res->bus_addr,148148+ GFP_KERNEL);148149 if (!res->virt_addr) {149150 dev_err(&mhba->pdev->dev,150151 "unable to allocate consistent mem,"···247246 if (size == 0)248247 return 0;249248250250- virt_addr = dma_zalloc_coherent(&mhba->pdev->dev, size, &phy_addr,251251- GFP_KERNEL);249249+ virt_addr = dma_alloc_coherent(&mhba->pdev->dev, size, &phy_addr,250250+ GFP_KERNEL);252251 if (!virt_addr)253252 return -1;254253
+3-3
drivers/scsi/pm8001/pm8001_sas.c
···116116 u64 align_offset = 0;117117 if (align)118118 align_offset = (dma_addr_t)align - 1;119119- mem_virt_alloc = dma_zalloc_coherent(&pdev->dev, mem_size + align,120120- &mem_dma_handle, GFP_KERNEL);119119+ mem_virt_alloc = dma_alloc_coherent(&pdev->dev, mem_size + align,120120+ &mem_dma_handle, GFP_KERNEL);121121 if (!mem_virt_alloc) {122122 pm8001_printk("memory allocation error\n");123123 return -1;···657657 if (dev->dev_type == SAS_SATA_DEV) {658658 pm8001_device->attached_phy =659659 dev->rphy->identify.phy_identifier;660660- flag = 1; /* directly sata*/660660+ flag = 1; /* directly sata */661661 }662662 } /*register this device to HBA*/663663 PM8001_DISC_DBG(pm8001_ha, pm8001_printk("Found device\n"));
+17-12
drivers/scsi/qedf/qedf_main.c
···10501050 sizeof(void *);10511051 fcport->sq_pbl_size = fcport->sq_pbl_size + QEDF_PAGE_SIZE;1052105210531053- fcport->sq = dma_zalloc_coherent(&qedf->pdev->dev,10541054- fcport->sq_mem_size, &fcport->sq_dma, GFP_KERNEL);10531053+ fcport->sq = dma_alloc_coherent(&qedf->pdev->dev, fcport->sq_mem_size,10541054+ &fcport->sq_dma, GFP_KERNEL);10551055 if (!fcport->sq) {10561056 QEDF_WARN(&(qedf->dbg_ctx), "Could not allocate send queue.\n");10571057 rval = 1;10581058 goto out;10591059 }1060106010611061- fcport->sq_pbl = dma_zalloc_coherent(&qedf->pdev->dev,10621062- fcport->sq_pbl_size, &fcport->sq_pbl_dma, GFP_KERNEL);10611061+ fcport->sq_pbl = dma_alloc_coherent(&qedf->pdev->dev,10621062+ fcport->sq_pbl_size,10631063+ &fcport->sq_pbl_dma, GFP_KERNEL);10631064 if (!fcport->sq_pbl) {10641065 QEDF_WARN(&(qedf->dbg_ctx), "Could not allocate send queue PBL.\n");10651066 rval = 1;···26812680 }2682268126832682 /* Allocate list of PBL pages */26842684- qedf->bdq_pbl_list = dma_zalloc_coherent(&qedf->pdev->dev,26852685- QEDF_PAGE_SIZE, &qedf->bdq_pbl_list_dma, GFP_KERNEL);26832683+ qedf->bdq_pbl_list = dma_alloc_coherent(&qedf->pdev->dev,26842684+ QEDF_PAGE_SIZE,26852685+ &qedf->bdq_pbl_list_dma,26862686+ GFP_KERNEL);26862687 if (!qedf->bdq_pbl_list) {26872688 QEDF_ERR(&(qedf->dbg_ctx), "Could not allocate list of PBL pages.\n");26882689 return -ENOMEM;···27732770 ALIGN(qedf->global_queues[i]->cq_pbl_size, QEDF_PAGE_SIZE);2774277127752772 qedf->global_queues[i]->cq =27762776- dma_zalloc_coherent(&qedf->pdev->dev,27772777- qedf->global_queues[i]->cq_mem_size,27782778- &qedf->global_queues[i]->cq_dma, GFP_KERNEL);27732773+ dma_alloc_coherent(&qedf->pdev->dev,27742774+ qedf->global_queues[i]->cq_mem_size,27752775+ &qedf->global_queues[i]->cq_dma,27762776+ GFP_KERNEL);2779277727802778 if (!qedf->global_queues[i]->cq) {27812779 QEDF_WARN(&(qedf->dbg_ctx), "Could not allocate cq.\n");···27852781 }2786278227872783 qedf->global_queues[i]->cq_pbl =27882788- 
dma_zalloc_coherent(&qedf->pdev->dev,27892789- qedf->global_queues[i]->cq_pbl_size,27902790- &qedf->global_queues[i]->cq_pbl_dma, GFP_KERNEL);27842784+ dma_alloc_coherent(&qedf->pdev->dev,27852785+ qedf->global_queues[i]->cq_pbl_size,27862786+ &qedf->global_queues[i]->cq_pbl_dma,27872787+ GFP_KERNEL);2791278827922789 if (!qedf->global_queues[i]->cq_pbl) {27932790 QEDF_WARN(&(qedf->dbg_ctx), "Could not allocate cq PBL.\n");
+3
drivers/scsi/qedi/qedi_iscsi.c
···953953954954 qedi_ep = ep->dd_data;955955 if (qedi_ep->state == EP_STATE_IDLE ||956956+ qedi_ep->state == EP_STATE_OFLDCONN_NONE ||956957 qedi_ep->state == EP_STATE_OFLDCONN_FAILED)957958 return -1;958959···1036103510371036 switch (qedi_ep->state) {10381037 case EP_STATE_OFLDCONN_START:10381038+ case EP_STATE_OFLDCONN_NONE:10391039 goto ep_release_conn;10401040 case EP_STATE_OFLDCONN_FAILED:10411041 break;···1227122512281226 if (!is_valid_ether_addr(&path_data->mac_addr[0])) {12291227 QEDI_NOTICE(&qedi->dbg_ctx, "dst mac NOT VALID\n");12281228+ qedi_ep->state = EP_STATE_OFLDCONN_NONE;12301229 ret = -EIO;12311230 goto set_path_exit;12321231 }
···80808181 if (err == 0) {8282 pm_runtime_disable(dev);8383- pm_runtime_set_active(dev);8383+ err = pm_runtime_set_active(dev);8484 pm_runtime_enable(dev);8585+8686+ /*8787+ * Forcibly set runtime PM status of request queue to "active"8888+ * to make sure we can again get requests from the queue8989+ * (see also blk_pm_peek_request()).9090+ *9191+ * The resume hook will correct runtime PM status of the disk.9292+ */9393+ if (!err && scsi_is_sdev_device(dev)) {9494+ struct scsi_device *sdev = to_scsi_device(dev);9595+9696+ if (sdev->request_queue->dev)9797+ blk_set_runtime_active(sdev->request_queue);9898+ }8599 }8610087101 return err;···153139 fn = async_sdev_restore;154140 else155141 fn = NULL;156156-157157- /*158158- * Forcibly set runtime PM status of request queue to "active" to159159- * make sure we can again get requests from the queue (see also160160- * blk_pm_peek_request()).161161- *162162- * The resume hook will correct runtime PM status of the disk.163163- */164164- if (scsi_is_sdev_device(dev) && pm_runtime_suspended(dev))165165- blk_set_runtime_active(to_scsi_device(dev)->request_queue);166142167143 if (fn) {168144 async_schedule_domain(fn, dev, &scsi_sd_pm_domain);
+6
drivers/scsi/sd.c
···206206 sp = buffer_data[0] & 0x80 ? 1 : 0;207207 buffer_data[0] &= ~0x80;208208209209+ /*210210+ * Ensure WP, DPOFUA, and RESERVED fields are cleared in211211+ * received mode parameter buffer before doing MODE SELECT.212212+ */213213+ data.device_specific = 0;214214+209215 if (scsi_mode_select(sdp, 1, sp, 8, buffer_data, len, SD_TIMEOUT,210216 SD_MAX_RETRIES, &data, &sshdr)) {211217 if (scsi_sense_valid(&sshdr))
···714714 sizeof(struct iscsi_queue_req),715715 __alignof__(struct iscsi_queue_req), 0, NULL);716716 if (!lio_qr_cache) {717717- pr_err("nable to kmem_cache_create() for"717717+ pr_err("Unable to kmem_cache_create() for"718718 " lio_qr_cache\n");719719 goto bitmap_out;720720 }
+61-27
drivers/target/target_core_user.c
···148148 size_t ring_size;149149150150 struct mutex cmdr_lock;151151- struct list_head cmdr_queue;151151+ struct list_head qfull_queue;152152153153 uint32_t dbi_max;154154 uint32_t dbi_thresh;···159159160160 struct timer_list cmd_timer;161161 unsigned int cmd_time_out;162162+ struct list_head inflight_queue;162163163164 struct timer_list qfull_timer;164165 int qfull_time_out;···180179struct tcmu_cmd {181180 struct se_cmd *se_cmd;182181 struct tcmu_dev *tcmu_dev;183183- struct list_head cmdr_queue_entry;182182+ struct list_head queue_entry;184183185184 uint16_t cmd_id;186185···193192 unsigned long deadline;194193195194#define TCMU_CMD_BIT_EXPIRED 0195195+#define TCMU_CMD_BIT_INFLIGHT 1196196 unsigned long flags;197197};198198/*···588586 if (!tcmu_cmd)589587 return NULL;590588591591- INIT_LIST_HEAD(&tcmu_cmd->cmdr_queue_entry);589589+ INIT_LIST_HEAD(&tcmu_cmd->queue_entry);592590 tcmu_cmd->se_cmd = se_cmd;593591 tcmu_cmd->tcmu_dev = udev;594592···917915 return 0;918916919917 tcmu_cmd->deadline = round_jiffies_up(jiffies + msecs_to_jiffies(tmo));920920- mod_timer(timer, tcmu_cmd->deadline);918918+ if (!timer_pending(timer))919919+ mod_timer(timer, tcmu_cmd->deadline);920920+921921 return 0;922922}923923924924-static int add_to_cmdr_queue(struct tcmu_cmd *tcmu_cmd)924924+static int add_to_qfull_queue(struct tcmu_cmd *tcmu_cmd)925925{926926 struct tcmu_dev *udev = tcmu_cmd->tcmu_dev;927927 unsigned int tmo;···946942 if (ret)947943 return ret;948944949949- list_add_tail(&tcmu_cmd->cmdr_queue_entry, &udev->cmdr_queue);945945+ list_add_tail(&tcmu_cmd->queue_entry, &udev->qfull_queue);950946 pr_debug("adding cmd %u on dev %s to ring space wait queue\n",951947 tcmu_cmd->cmd_id, udev->name);952948 return 0;···1003999 base_command_size = tcmu_cmd_get_base_cmd_size(tcmu_cmd->dbi_cnt);10041000 command_size = tcmu_cmd_get_cmd_size(tcmu_cmd, base_command_size);1005100110061006- if (!list_empty(&udev->cmdr_queue))10021002+ if (!list_empty(&udev->qfull_queue))10071003 goto 
queue;1008100410091005 mb = udev->mb_addr;···11001096 UPDATE_HEAD(mb->cmd_head, command_size, udev->cmdr_size);11011097 tcmu_flush_dcache_range(mb, sizeof(*mb));1102109810991099+ list_add_tail(&tcmu_cmd->queue_entry, &udev->inflight_queue);11001100+ set_bit(TCMU_CMD_BIT_INFLIGHT, &tcmu_cmd->flags);11011101+11031102 /* TODO: only if FLUSH and FUA? */11041103 uio_event_notify(&udev->uio_info);1105110411061105 return 0;1107110611081107queue:11091109- if (add_to_cmdr_queue(tcmu_cmd)) {11081108+ if (add_to_qfull_queue(tcmu_cmd)) {11101109 *scsi_err = TCM_OUT_OF_RESOURCES;11111110 return -1;11121111 }···11511144 */11521145 if (test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags))11531146 goto out;11471147+11481148+ list_del_init(&cmd->queue_entry);1154114911551150 tcmu_cmd_reset_dbi_cur(cmd);11561151···12031194 tcmu_free_cmd(cmd);12041195}1205119611971197+static void tcmu_set_next_deadline(struct list_head *queue,11981198+ struct timer_list *timer)11991199+{12001200+ struct tcmu_cmd *tcmu_cmd, *tmp_cmd;12011201+ unsigned long deadline = 0;12021202+12031203+ list_for_each_entry_safe(tcmu_cmd, tmp_cmd, queue, queue_entry) {12041204+ if (!time_after(jiffies, tcmu_cmd->deadline)) {12051205+ deadline = tcmu_cmd->deadline;12061206+ break;12071207+ }12081208+ }12091209+12101210+ if (deadline)12111211+ mod_timer(timer, deadline);12121212+ else12131213+ del_timer(timer);12141214+}12151215+12061216static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)12071217{12081218 struct tcmu_mailbox *mb;12191219+ struct tcmu_cmd *cmd;12091220 int handled = 0;1210122112111222 if (test_bit(TCMU_DEV_BIT_BROKEN, &udev->flags)) {···12391210 while (udev->cmdr_last_cleaned != READ_ONCE(mb->cmd_tail)) {1240121112411212 struct tcmu_cmd_entry *entry = (void *) mb + CMDR_OFF + udev->cmdr_last_cleaned;12421242- struct tcmu_cmd *cmd;1243121312441214 tcmu_flush_dcache_range(entry, sizeof(*entry));12451215···12711243 /* no more pending commands */12721244 del_timer(&udev->cmd_timer);1273124512741274- if 
(list_empty(&udev->cmdr_queue)) {12461246+ if (list_empty(&udev->qfull_queue)) {12751247 /*12761248 * no more pending or waiting commands so try to12771249 * reclaim blocks if needed.···12801252 tcmu_global_max_blocks)12811253 schedule_delayed_work(&tcmu_unmap_work, 0);12821254 }12551255+ } else if (udev->cmd_time_out) {12561256+ tcmu_set_next_deadline(&udev->inflight_queue, &udev->cmd_timer);12831257 }1284125812851259 return handled;···13011271 if (!time_after(jiffies, cmd->deadline))13021272 return 0;1303127313041304- is_running = list_empty(&cmd->cmdr_queue_entry);12741274+ is_running = test_bit(TCMU_CMD_BIT_INFLIGHT, &cmd->flags);13051275 se_cmd = cmd->se_cmd;1306127613071277 if (is_running) {···13181288 */13191289 scsi_status = SAM_STAT_CHECK_CONDITION;13201290 } else {13211321- list_del_init(&cmd->cmdr_queue_entry);13221322-13231291 idr_remove(&udev->commands, id);13241292 tcmu_free_cmd(cmd);13251293 scsi_status = SAM_STAT_TASK_SET_FULL;13261294 }12951295+ list_del_init(&cmd->queue_entry);1327129613281297 pr_debug("Timing out cmd %u on dev %s that is %s.\n",13291298 id, udev->name, is_running ? 
"inflight" : "queued");···1401137214021373 INIT_LIST_HEAD(&udev->node);14031374 INIT_LIST_HEAD(&udev->timedout_entry);14041404- INIT_LIST_HEAD(&udev->cmdr_queue);13751375+ INIT_LIST_HEAD(&udev->qfull_queue);13761376+ INIT_LIST_HEAD(&udev->inflight_queue);14051377 idr_init(&udev->commands);1406137814071379 timer_setup(&udev->qfull_timer, tcmu_qfull_timedout, 0);···14131383 return &udev->se_dev;14141384}1415138514161416-static bool run_cmdr_queue(struct tcmu_dev *udev, bool fail)13861386+static bool run_qfull_queue(struct tcmu_dev *udev, bool fail)14171387{14181388 struct tcmu_cmd *tcmu_cmd, *tmp_cmd;14191389 LIST_HEAD(cmds);···14211391 sense_reason_t scsi_ret;14221392 int ret;1423139314241424- if (list_empty(&udev->cmdr_queue))13941394+ if (list_empty(&udev->qfull_queue))14251395 return true;1426139614271397 pr_debug("running %s's cmdr queue forcefail %d\n", udev->name, fail);1428139814291429- list_splice_init(&udev->cmdr_queue, &cmds);13991399+ list_splice_init(&udev->qfull_queue, &cmds);1430140014311431- list_for_each_entry_safe(tcmu_cmd, tmp_cmd, &cmds, cmdr_queue_entry) {14321432- list_del_init(&tcmu_cmd->cmdr_queue_entry);14011401+ list_for_each_entry_safe(tcmu_cmd, tmp_cmd, &cmds, queue_entry) {14021402+ list_del_init(&tcmu_cmd->queue_entry);1433140314341404 pr_debug("removing cmd %u on dev %s from queue\n",14351405 tcmu_cmd->cmd_id, udev->name);···14671437 * cmd was requeued, so just put all cmds back in14681438 * the queue14691439 */14701470- list_splice_tail(&cmds, &udev->cmdr_queue);14401440+ list_splice_tail(&cmds, &udev->qfull_queue);14711441 drained = false;14721472- goto done;14421442+ break;14731443 }14741444 }14751475- if (list_empty(&udev->cmdr_queue))14761476- del_timer(&udev->qfull_timer);14771477-done:14451445+14461446+ tcmu_set_next_deadline(&udev->qfull_queue, &udev->qfull_timer);14781447 return drained;14791448}14801449···1483145414841455 mutex_lock(&udev->cmdr_lock);14851456 tcmu_handle_completions(udev);14861486- run_cmdr_queue(udev, 
false);14571457+ run_qfull_queue(udev, false);14871458 mutex_unlock(&udev->cmdr_lock);1488145914891460 return 0;···20111982 /* complete IO that has executed successfully */20121983 tcmu_handle_completions(udev);20131984 /* fail IO waiting to be queued */20142014- run_cmdr_queue(udev, true);19851985+ run_qfull_queue(udev, true);2015198620161987unlock:20171988 mutex_unlock(&udev->cmdr_lock);···20261997 mutex_lock(&udev->cmdr_lock);2027199820281999 idr_for_each_entry(&udev->commands, cmd, i) {20292029- if (!list_empty(&cmd->cmdr_queue_entry))20002000+ if (!test_bit(TCMU_CMD_BIT_INFLIGHT, &cmd->flags))20302001 continue;2031200220322003 pr_debug("removing cmd %u on dev %s from ring (is expired %d)\n",···2035200620362007 idr_remove(&udev->commands, i);20372008 if (!test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags)) {20092009+ list_del_init(&cmd->queue_entry);20382010 if (err_level == 1) {20392011 /*20402012 * Userspace was not able to start the···2696266626972667 mutex_lock(&udev->cmdr_lock);26982668 idr_for_each(&udev->commands, tcmu_check_expired_cmd, NULL);26692669+26702670+ tcmu_set_next_deadline(&udev->inflight_queue, &udev->cmd_timer);26712671+ tcmu_set_next_deadline(&udev->qfull_queue, &udev->qfull_timer);26722672+26992673 mutex_unlock(&udev->cmdr_lock);2700267427012675 spin_lock_bh(&timed_out_udevs_lock);
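The new tcmu_set_next_deadline() helper above re-arms a timer from whichever queued command will expire next; since commands are appended with non-decreasing deadlines, the first unexpired entry is the one to use, and an empty result stops the timer. A userspace sketch of that selection, with hypothetical names and an array standing in for the kernel list:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Walk the queue in order and return the first deadline that has not
 * already expired; 0 means every entry expired, i.e. the caller should
 * stop the timer (the kernel code calls mod_timer()/del_timer()).
 */
static unsigned long next_deadline(const unsigned long *deadlines,
				   size_t n, unsigned long now)
{
	for (size_t i = 0; i < n; i++) {
		/* !time_after(now, d): d is still in the future (or now) */
		if (!(now > deadlines[i]))
			return deadlines[i];
	}
	return 0;
}
```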
+1-1
drivers/thermal/intel/int340x_thermal/Kconfig
···4455config INT340X_THERMAL66 tristate "ACPI INT340X thermal drivers"77- depends on X86 && ACPI77+ depends on X86 && ACPI && PCI88 select THERMAL_GOV_USER_SPACE99 select ACPI_THERMAL_REL1010 select ACPI_FAN
+12
drivers/tty/serial/Kconfig
···8585 with "earlycon=smh" on the kernel command line. The console is8686 enabled when early_param is processed.87878888+config SERIAL_EARLYCON_RISCV_SBI8989+	bool "Early console using RISC-V SBI"9090+	depends on RISCV9191+	select SERIAL_CORE9292+	select SERIAL_CORE_CONSOLE9393+	select SERIAL_EARLYCON9494+	help9595+	  Support for early debug console using RISC-V SBI. This enables9696+	  the console before the standard serial driver is probed. This is enabled9797+	  with "earlycon=sbi" on the kernel command line. The console is9898+	  enabled when early_param is processed.9999+88100config SERIAL_SB1250_DUART89101	tristate "BCM1xxx on-chip DUART serial support"90102	depends on SIBYTE_SB1xxx_SOC=y
+1
drivers/tty/serial/Makefile
···7788obj-$(CONFIG_SERIAL_EARLYCON) += earlycon.o99obj-$(CONFIG_SERIAL_EARLYCON_ARM_SEMIHOST) += earlycon-arm-semihost.o1010+obj-$(CONFIG_SERIAL_EARLYCON_RISCV_SBI) += earlycon-riscv-sbi.o10111112# These Sparc drivers have to appear before others such as 82501213# which share ttySx minor node space. Otherwise console device
+28
drivers/tty/serial/earlycon-riscv-sbi.c
···11+// SPDX-License-Identifier: GPL-2.022+/*33+ * RISC-V SBI based earlycon44+ *55+ * Copyright (C) 2018 Anup Patel <anup@brainfault.org>66+ */77+#include <linux/kernel.h>88+#include <linux/console.h>99+#include <linux/init.h>1010+#include <linux/serial_core.h>1111+#include <asm/sbi.h>1212+1313+static void sbi_console_write(struct console *con,1414+ const char *s, unsigned int n)1515+{1616+ int i;1717+1818+ for (i = 0; i < n; ++i)1919+ sbi_console_putchar(s[i]);2020+}2121+2222+static int __init early_sbi_setup(struct earlycon_device *device,2323+ const char *opt)2424+{2525+ device->con->write = sbi_console_write;2626+ return 0;2727+}2828+EARLYCON_DECLARE(sbi, early_sbi_setup);
···12561256static int tty_reopen(struct tty_struct *tty)12571257{12581258 struct tty_driver *driver = tty->driver;12591259- int retval;12591259+ struct tty_ldisc *ld;12601260+ int retval = 0;1260126112611262 if (driver->type == TTY_DRIVER_TYPE_PTY &&12621263 driver->subtype == PTY_TYPE_MASTER)···12691268 if (test_bit(TTY_EXCLUSIVE, &tty->flags) && !capable(CAP_SYS_ADMIN))12701269 return -EBUSY;1271127012721272- retval = tty_ldisc_lock(tty, 5 * HZ);12731273- if (retval)12741274- return retval;12711271+ ld = tty_ldisc_ref_wait(tty);12721272+ if (ld) {12731273+ tty_ldisc_deref(ld);12741274+ } else {12751275+ retval = tty_ldisc_lock(tty, 5 * HZ);12761276+ if (retval)12771277+ return retval;1275127812761276- if (!tty->ldisc)12771277- retval = tty_ldisc_reinit(tty, tty->termios.c_line);12781278- tty_ldisc_unlock(tty);12791279+ if (!tty->ldisc)12801280+ retval = tty_ldisc_reinit(tty, tty->termios.c_line);12811281+ tty_ldisc_unlock(tty);12821282+ }1279128312801284 if (retval == 0)12811285 tty->count++;
+7
drivers/usb/class/cdc-acm.c
···18651865 .driver_info = IGNORE_DEVICE,18661866 },1867186718681868+ { USB_DEVICE(0x1bc7, 0x0021), /* Telit 3G ACM only composition */18691869+ .driver_info = SEND_ZERO_PACKET,18701870+ },18711871+ { USB_DEVICE(0x1bc7, 0x0023), /* Telit 3G ACM + ECM composition */18721872+ .driver_info = SEND_ZERO_PACKET,18731873+ },18741874+18681875 /* control interfaces without any protocol set */18691876 { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM,18701877 USB_CDC_PROTO_NONE) },
+6-3
drivers/usb/core/generic.c
···143143 continue;144144 }145145146146- if (i > 0 && desc && is_audio(desc) && is_uac3_config(desc)) {147147- best = c;148148- break;146146+ if (i > 0 && desc && is_audio(desc)) {147147+ if (is_uac3_config(desc)) {148148+ best = c;149149+ break;150150+ }151151+ continue;149152 }150153151154 /* From the remaining configs, choose the first one whose
···235235	if (!(us->fflags & US_FL_NEEDS_CAP16))236236		sdev->try_rc_10_first = 1;237237238238-	/* assume SPC3 or latter devices support sense size > 18 */239239-	if (sdev->scsi_level > SCSI_SPC_2)238238+	/*239239+	 * assume SPC3 or later devices support sense size > 18240240+	 * unless US_FL_BAD_SENSE quirk is specified.241241+	 */242242+	if (sdev->scsi_level > SCSI_SPC_2 &&243243+	    !(us->fflags & US_FL_BAD_SENSE))240244		us->fflags |= US_FL_SANE_SENSE;241245242246	/*
+12
drivers/usb/storage/unusual_devs.h
···12661266		US_FL_FIX_CAPACITY ),1267126712681268/*12691269+ * Reported by Icenowy Zheng <icenowy@aosc.io>12701270+ * The SMI SM3350 USB-UFS bridge controller will enter a wrong state12711271+ * that does not process read/write commands if a long sense is requested,12721272+ * so force it to use 18-byte sense.12731273+ */12741274+UNUSUAL_DEV(  0x090c, 0x3350, 0x0000, 0xffff,12751275+		"SMI",12761276+		"SM3350 UFS-to-USB-Mass-Storage bridge",12771277+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,12781278+		US_FL_BAD_SENSE ),12791279+12801280+/*12691281 * Reported by Paul Hartman <paul.hartman+linux@gmail.com>12701282 * This card reader returns "Illegal Request, Logical Block Address12711283 * Out of Range" for the first READ(10) after a new card is inserted.
···10341034 int type, ret;1035103510361036 ret = copy_from_iter(&type, sizeof(type), from);10371037- if (ret != sizeof(type))10371037+ if (ret != sizeof(type)) {10381038+ ret = -EINVAL;10381039 goto done;10401040+ }1039104110401042 switch (type) {10411043 case VHOST_IOTLB_MSG:···1056105410571055 iov_iter_advance(from, offset);10581056 ret = copy_from_iter(&msg, sizeof(msg), from);10591059- if (ret != sizeof(msg))10571057+ if (ret != sizeof(msg)) {10581058+ ret = -EINVAL;10601059 goto done;10601060+ }10611061 if (vhost_process_iotlb_msg(dev, &msg)) {10621062 ret = -EFAULT;10631063 goto done;···17371733 return r;17381734}1739173517361736+static int log_write_hva(struct vhost_virtqueue *vq, u64 hva, u64 len)17371737+{17381738+ struct vhost_umem *umem = vq->umem;17391739+ struct vhost_umem_node *u;17401740+ u64 start, end, l, min;17411741+ int r;17421742+ bool hit = false;17431743+17441744+ while (len) {17451745+ min = len;17461746+ /* More than one GPAs can be mapped into a single HVA. So17471747+ * iterate all possible umems here to be safe.17481748+ */17491749+ list_for_each_entry(u, &umem->umem_list, link) {17501750+ if (u->userspace_addr > hva - 1 + len ||17511751+ u->userspace_addr - 1 + u->size < hva)17521752+ continue;17531753+ start = max(u->userspace_addr, hva);17541754+ end = min(u->userspace_addr - 1 + u->size,17551755+ hva - 1 + len);17561756+ l = end - start + 1;17571757+ r = log_write(vq->log_base,17581758+ u->start + start - u->userspace_addr,17591759+ l);17601760+ if (r < 0)17611761+ return r;17621762+ hit = true;17631763+ min = min(l, min);17641764+ }17651765+17661766+ if (!hit)17671767+ return -EFAULT;17681768+17691769+ len -= min;17701770+ hva += min;17711771+ }17721772+17731773+ return 0;17741774+}17751775+17761776+static int log_used(struct vhost_virtqueue *vq, u64 used_offset, u64 len)17771777+{17781778+ struct iovec iov[64];17791779+ int i, ret;17801780+17811781+ if (!vq->iotlb)17821782+ return log_write(vq->log_base, vq->log_addr + used_offset, 
len);17831783+17841784+ ret = translate_desc(vq, (uintptr_t)vq->used + used_offset,17851785+ len, iov, 64, VHOST_ACCESS_WO);17861786+ if (ret)17871787+ return ret;17881788+17891789+ for (i = 0; i < ret; i++) {17901790+ ret = log_write_hva(vq, (uintptr_t)iov[i].iov_base,17911791+ iov[i].iov_len);17921792+ if (ret)17931793+ return ret;17941794+ }17951795+17961796+ return 0;17971797+}17981798+17401799int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,17411741- unsigned int log_num, u64 len)18001800+ unsigned int log_num, u64 len, struct iovec *iov, int count)17421801{17431802 int i, r;1744180317451804 /* Make sure data written is seen before log. */17461805 smp_wmb();18061806+18071807+ if (vq->iotlb) {18081808+ for (i = 0; i < count; i++) {18091809+ r = log_write_hva(vq, (uintptr_t)iov[i].iov_base,18101810+ iov[i].iov_len);18111811+ if (r < 0)18121812+ return r;18131813+ }18141814+ return 0;18151815+ }18161816+17471817 for (i = 0; i < log_num; ++i) {17481818 u64 l = min(log[i].len, len);17491819 r = log_write(vq->log_base, log[i].addr, l);···18471769 smp_wmb();18481770 /* Log used flag write. */18491771 used = &vq->used->flags;18501850- log_write(vq->log_base, vq->log_addr +18511851- (used - (void __user *)vq->used),18521852- sizeof vq->used->flags);17721772+ log_used(vq, (used - (void __user *)vq->used),17731773+ sizeof vq->used->flags);18531774 if (vq->log_ctx)18541775 eventfd_signal(vq->log_ctx, 1);18551776 }···18661789 smp_wmb();18671790 /* Log avail event write */18681791 used = vhost_avail_event(vq);18691869- log_write(vq->log_base, vq->log_addr +18701870- (used - (void __user *)vq->used),18711871- sizeof *vhost_avail_event(vq));17921792+ log_used(vq, (used - (void __user *)vq->used),17931793+ sizeof *vhost_avail_event(vq));18721794 if (vq->log_ctx)18731795 eventfd_signal(vq->log_ctx, 1);18741796 }···22672191 /* Make sure data is seen before log. */22682192 smp_wmb();22692193 /* Log used ring entry write. 
*/22702270- log_write(vq->log_base,22712271- vq->log_addr +22722272- ((void __user *)used - (void __user *)vq->used),22732273- count * sizeof *used);21942194+ log_used(vq, ((void __user *)used - (void __user *)vq->used),21952195+ count * sizeof *used);22742196 }22752197 old = vq->last_used_idx;22762198 new = (vq->last_used_idx += count);···23102236 /* Make sure used idx is seen before log. */23112237 smp_wmb();23122238 /* Log used index update. */23132313- log_write(vq->log_base,23142314- vq->log_addr + offsetof(struct vring_used, idx),23152315- sizeof vq->used->idx);22392239+ log_used(vq, offsetof(struct vring_used, idx),22402240+ sizeof vq->used->idx);23162241 if (vq->log_ctx)23172242 eventfd_signal(vq->log_ctx, 1);23182243 }
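log_write_hva() above clips the dirty range against each GPA-to-HVA mapping, since a single HVA can be backed by several mappings. The core of its loop is a closed-interval intersection; a userspace sketch with hypothetical names follows. Note the kernel writes `addr - 1 + size` rather than `addr + size - 1` so a mapping ending at the top of the address space cannot overflow, a concern this sketch ignores.

```c
#include <assert.h>

/*
 * Clip the dirty range [hva, hva + len - 1] against one mapping
 * [ustart, ustart + usize - 1]. Returns the overlap length,
 * 0 if the ranges are disjoint. Open-coded max()/min().
 */
static unsigned long long overlap_len(unsigned long long ustart,
				      unsigned long long usize,
				      unsigned long long hva,
				      unsigned long long len)
{
	unsigned long long uend = ustart + usize - 1;
	unsigned long long hend = hva + len - 1;
	unsigned long long start, end;

	if (ustart > hend || uend < hva)
		return 0;                        /* no intersection */
	start = ustart > hva ? ustart : hva;     /* max */
	end = uend < hend ? uend : hend;         /* min */
	return end - start + 1;
}
```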
+2-1
drivers/vhost/vhost.h
···205205bool vhost_enable_notify(struct vhost_dev *, struct vhost_virtqueue *);206206207207int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,208208- unsigned int log_num, u64 len);208208+ unsigned int log_num, u64 len,209209+ struct iovec *iov, int count);209210int vq_iotlb_prefetch(struct vhost_virtqueue *vq);210211211212struct vhost_msg_node *vhost_new_msg(struct vhost_virtqueue *vq, int type);
···3030 struct device *dev;3131 unsigned int lth_brightness;3232 unsigned int *levels;3333+ bool enabled;3334 struct regulator *power_supply;3435 struct gpio_desc *enable_gpio;3536 unsigned int scale;···5150 int err;52515352 pwm_get_state(pb->pwm, &state);5454- if (state.enabled)5353+ if (pb->enabled)5554 return;56555756 err = regulator_enable(pb->power_supply);···66656766 if (pb->enable_gpio)6867 gpiod_set_value_cansleep(pb->enable_gpio, 1);6868+6969+ pb->enabled = true;6970}70717172static void pwm_backlight_power_off(struct pwm_bl_data *pb)···7572 struct pwm_state state;76737774 pwm_get_state(pb->pwm, &state);7878- if (!state.enabled)7575+ if (!pb->enabled)7976 return;80778178 if (pb->enable_gpio)···8986 pwm_apply_state(pb->pwm, &state);90879188 regulator_disable(pb->power_supply);8989+ pb->enabled = false;9290}93919492static int compute_duty_cycle(struct pwm_bl_data *pb, int brightness)···273269 memset(data, 0, sizeof(*data));274270275271 /*272272+ * These values are optional and set as 0 by default, the out values273273+ * are modified only if a valid u32 value can be decoded.274274+ */275275+ of_property_read_u32(node, "post-pwm-on-delay-ms",276276+ &data->post_pwm_on_delay);277277+ of_property_read_u32(node, "pwm-off-delay-ms", &data->pwm_off_delay);278278+279279+ data->enable_gpio = -EINVAL;280280+281281+ /*276282 * Determine the number of brightness levels, if this property is not277283 * set a default table of brightness levels will be used.278284 */···394380 data->max_brightness--;395381 }396382397397- /*398398- * These values are optional and set as 0 by default, the out values399399- * are modified only if a valid u32 value can be decoded.400400- */401401- of_property_read_u32(node, "post-pwm-on-delay-ms",402402- &data->post_pwm_on_delay);403403- of_property_read_u32(node, "pwm-off-delay-ms", &data->pwm_off_delay);404404-405405- data->enable_gpio = -EINVAL;406383 return 0;407384}408385···488483 pb->check_fb = data->check_fb;489484 pb->exit = 
data->exit;490485 pb->dev = &pdev->dev;486486+ pb->enabled = false;491487 pb->post_pwm_on_delay = data->post_pwm_on_delay;492488 pb->pwm_off_delay = data->pwm_off_delay;493489
···10101111if LOGO12121313-config FB_LOGO_CENTER1414- bool "Center the logo"1515- depends on FB=y1616- help1717- When this option is selected, the bootup logo is centered both1818- horizontally and vertically. If more than one logo is displayed1919- due to multiple CPUs, the collected line of logos is centered2020- as a whole.2121-2213config FB_LOGO_EXTRA2314 bool2415 depends on FB=y
+65-33
drivers/virtio/virtio_balloon.c
···6161 VIRTIO_BALLOON_VQ_MAX6262};63636464+enum virtio_balloon_config_read {6565+ VIRTIO_BALLOON_CONFIG_READ_CMD_ID = 0,6666+};6767+6468struct virtio_balloon {6569 struct virtio_device *vdev;6670 struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;···8177 /* Prevent updating balloon when it is being canceled. */8278 spinlock_t stop_update_lock;8379 bool stop_update;8080+ /* Bitmap to indicate if reading the related config fields are needed */8181+ unsigned long config_read_bitmap;84828583 /* The list of allocated free pages, waiting to be given back to mm */8684 struct list_head free_page_list;8785 spinlock_t free_page_list_lock;8886 /* The number of free page blocks on the above list */8987 unsigned long num_free_page_blocks;9090- /* The cmd id received from host */9191- u32 cmd_id_received;8888+ /*8989+ * The cmd id received from host.9090+ * Read it via virtio_balloon_cmd_id_received to get the latest value9191+ * sent from host.9292+ */9393+ u32 cmd_id_received_cache;9294 /* The cmd id that is actively in use */9395 __virtio32 cmd_id_active;9496 /* Buffer to store the stop sign */···400390 return num_returned;401391}402392393393+static void virtio_balloon_queue_free_page_work(struct virtio_balloon *vb)394394+{395395+ if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))396396+ return;397397+398398+ /* No need to queue the work if the bit was already set. 
*/399399+ if (test_and_set_bit(VIRTIO_BALLOON_CONFIG_READ_CMD_ID,400400+ &vb->config_read_bitmap))401401+ return;402402+403403+ queue_work(vb->balloon_wq, &vb->report_free_page_work);404404+}405405+403406static void virtballoon_changed(struct virtio_device *vdev)404407{405408 struct virtio_balloon *vb = vdev->priv;406409 unsigned long flags;407407- s64 diff = towards_target(vb);408410409409- if (diff) {410410- spin_lock_irqsave(&vb->stop_update_lock, flags);411411- if (!vb->stop_update)412412- queue_work(system_freezable_wq,413413- &vb->update_balloon_size_work);414414- spin_unlock_irqrestore(&vb->stop_update_lock, flags);411411+ spin_lock_irqsave(&vb->stop_update_lock, flags);412412+ if (!vb->stop_update) {413413+ queue_work(system_freezable_wq,414414+ &vb->update_balloon_size_work);415415+ virtio_balloon_queue_free_page_work(vb);415416 }416416-417417- if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {418418- virtio_cread(vdev, struct virtio_balloon_config,419419- free_page_report_cmd_id, &vb->cmd_id_received);420420- if (vb->cmd_id_received == VIRTIO_BALLOON_CMD_ID_DONE) {421421- /* Pass ULONG_MAX to give back all the free pages */422422- return_free_pages_to_mm(vb, ULONG_MAX);423423- } else if (vb->cmd_id_received != VIRTIO_BALLOON_CMD_ID_STOP &&424424- vb->cmd_id_received !=425425- virtio32_to_cpu(vdev, vb->cmd_id_active)) {426426- spin_lock_irqsave(&vb->stop_update_lock, flags);427427- if (!vb->stop_update) {428428- queue_work(vb->balloon_wq,429429- &vb->report_free_page_work);430430- }431431- spin_unlock_irqrestore(&vb->stop_update_lock, flags);432432- }433433- }417417+ spin_unlock_irqrestore(&vb->stop_update_lock, flags);434418}435419436420static void update_balloon_size(struct virtio_balloon *vb)···531527 return 0;532528}533529530530+static u32 virtio_balloon_cmd_id_received(struct virtio_balloon *vb)531531+{532532+ if (test_and_clear_bit(VIRTIO_BALLOON_CONFIG_READ_CMD_ID,533533+ &vb->config_read_bitmap))534534+ virtio_cread(vb->vdev, struct 
virtio_balloon_config,535535+ free_page_report_cmd_id,536536+ &vb->cmd_id_received_cache);537537+538538+ return vb->cmd_id_received_cache;539539+}540540+534541static int send_cmd_id_start(struct virtio_balloon *vb)535542{536543 struct scatterlist sg;···552537 while (virtqueue_get_buf(vq, &unused))553538 ;554539555555- vb->cmd_id_active = cpu_to_virtio32(vb->vdev, vb->cmd_id_received);540540+ vb->cmd_id_active = virtio32_to_cpu(vb->vdev,541541+ virtio_balloon_cmd_id_received(vb));556542 sg_init_one(&sg, &vb->cmd_id_active, sizeof(vb->cmd_id_active));557543 err = virtqueue_add_outbuf(vq, &sg, 1, &vb->cmd_id_active, GFP_KERNEL);558544 if (!err)···636620 * stop the reporting.637621 */638622 cmd_id_active = virtio32_to_cpu(vb->vdev, vb->cmd_id_active);639639- if (cmd_id_active != vb->cmd_id_received)623623+ if (unlikely(cmd_id_active !=624624+ virtio_balloon_cmd_id_received(vb)))640625 break;641626642627 /*···654637 return 0;655638}656639657657-static void report_free_page_func(struct work_struct *work)640640+static void virtio_balloon_report_free_page(struct virtio_balloon *vb)658641{659642 int err;660660- struct virtio_balloon *vb = container_of(work, struct virtio_balloon,661661- report_free_page_work);662643 struct device *dev = &vb->vdev->dev;663644664645 /* Start by sending the received cmd id to host with an outbuf. 
*/···672657 err = send_cmd_id_stop(vb);673658 if (unlikely(err))674659 dev_err(dev, "Failed to send a stop id, err = %d\n", err);660660+}661661+662662+static void report_free_page_func(struct work_struct *work)663663+{664664+ struct virtio_balloon *vb = container_of(work, struct virtio_balloon,665665+ report_free_page_work);666666+ u32 cmd_id_received;667667+668668+ cmd_id_received = virtio_balloon_cmd_id_received(vb);669669+ if (cmd_id_received == VIRTIO_BALLOON_CMD_ID_DONE) {670670+ /* Pass ULONG_MAX to give back all the free pages */671671+ return_free_pages_to_mm(vb, ULONG_MAX);672672+ } else if (cmd_id_received != VIRTIO_BALLOON_CMD_ID_STOP &&673673+ cmd_id_received !=674674+ virtio32_to_cpu(vb->vdev, vb->cmd_id_active)) {675675+ virtio_balloon_report_free_page(vb);676676+ }675677}676678677679#ifdef CONFIG_BALLOON_COMPACTION···917885 goto out_del_vqs;918886 }919887 INIT_WORK(&vb->report_free_page_work, report_free_page_func);920920- vb->cmd_id_received = VIRTIO_BALLOON_CMD_ID_STOP;888888+ vb->cmd_id_received_cache = VIRTIO_BALLOON_CMD_ID_STOP;921889 vb->cmd_id_active = cpu_to_virtio32(vb->vdev,922890 VIRTIO_BALLOON_CMD_ID_STOP);923891 vb->cmd_id_stop = cpu_to_virtio32(vb->vdev,
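The config_read_bitmap added above decouples the notification path from the slow virtio_cread(): the handler only marks the field stale, and virtio_balloon_cmd_id_received() re-reads the device at most once per mark. A single-threaded userspace sketch of the pattern; a plain bool stands in for the atomic test_and_set_bit/test_and_clear_bit pair, and all names and values are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>

static bool stale;        /* the "config read needed" mark */
static int device_reads;  /* counts stand-in virtio_cread() calls */

/* Notification side: only set the mark, never touch the device. */
static void config_changed(void)
{
	stale = true;
}

/* Worker side: do the expensive device read only when marked stale,
 * then serve the cached value on later calls. */
static unsigned int read_cmd_id(unsigned int *cache)
{
	if (stale) {
		stale = false;
		device_reads++;
		*cache = 42;  /* hypothetical value read from the device */
	}
	return *cache;
}
```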
+7-2
drivers/virtio/virtio_mmio.c
···468468{469469 struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);470470 unsigned int irq = platform_get_irq(vm_dev->pdev, 0);471471- int i, err;471471+ int i, err, queue_idx = 0;472472473473 err = request_irq(irq, vm_interrupt, IRQF_SHARED,474474 dev_name(&vdev->dev), vm_dev);···476476 return err;477477478478 for (i = 0; i < nvqs; ++i) {479479- vqs[i] = vm_setup_vq(vdev, i, callbacks[i], names[i],479479+ if (!names[i]) {480480+ vqs[i] = NULL;481481+ continue;482482+ }483483+484484+ vqs[i] = vm_setup_vq(vdev, queue_idx++, callbacks[i], names[i],480485 ctx ? ctx[i] : false);481486 if (IS_ERR(vqs[i])) {482487 vm_del_vqs(vdev);
+4-4
drivers/virtio/virtio_pci_common.c
···285285{286286 struct virtio_pci_device *vp_dev = to_vp_device(vdev);287287 u16 msix_vec;288288- int i, err, nvectors, allocated_vectors;288288+ int i, err, nvectors, allocated_vectors, queue_idx = 0;289289290290 vp_dev->vqs = kcalloc(nvqs, sizeof(*vp_dev->vqs), GFP_KERNEL);291291 if (!vp_dev->vqs)···321321 msix_vec = allocated_vectors++;322322 else323323 msix_vec = VP_MSIX_VQ_VECTOR;324324- vqs[i] = vp_setup_vq(vdev, i, callbacks[i], names[i],324324+ vqs[i] = vp_setup_vq(vdev, queue_idx++, callbacks[i], names[i],325325 ctx ? ctx[i] : false,326326 msix_vec);327327 if (IS_ERR(vqs[i])) {···356356 const char * const names[], const bool *ctx)357357{358358 struct virtio_pci_device *vp_dev = to_vp_device(vdev);359359- int i, err;359359+ int i, err, queue_idx = 0;360360361361 vp_dev->vqs = kcalloc(nvqs, sizeof(*vp_dev->vqs), GFP_KERNEL);362362 if (!vp_dev->vqs)···374374 vqs[i] = NULL;375375 continue;376376 }377377- vqs[i] = vp_setup_vq(vdev, i, callbacks[i], names[i],377377+ vqs[i] = vp_setup_vq(vdev, queue_idx++, callbacks[i], names[i],378378 ctx ? ctx[i] : false,379379 VIRTIO_MSI_NO_VECTOR);380380 if (IS_ERR(vqs[i])) {
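Both the virtio-mmio and virtio-pci hunks apply the same rule: a NULL name means "this virtqueue is unused", and unused slots must not consume a hardware queue index. A hedged sketch of the `queue_idx++` bookkeeping (function name is mine, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Assigns a contiguous hardware queue index to every named queue and -1 to
 * unused (NULL-named) slots, mirroring the queue_idx++ logic in the diffs
 * above (where the -1 slots correspond to vqs[i] = NULL).
 * Returns the number of hardware queues actually set up.
 */
int assign_queue_indices(const char *const names[], int out_idx[], int nvqs)
{
	int i, queue_idx = 0;

	for (i = 0; i < nvqs; i++) {
		if (!names[i]) {
			out_idx[i] = -1;
			continue;
		}
		out_idx[i] = queue_idx++;
	}
	return queue_idx;
}
```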
+12
drivers/watchdog/Kconfig
···817817	  To compile this driver as a module, choose M here: the
818818	  module will be called stm32_iwdg.
819819
820820+config STPMIC1_WATCHDOG
821821+	tristate "STPMIC1 PMIC watchdog support"
822822+	depends on MFD_STPMIC1
823823+	select WATCHDOG_CORE
824824+	help
825825+	  Say Y here to include watchdog support embedded into STPMIC1 PMIC.
826826+	  If the watchdog timer expires, stpmic1 will shut down all its power
827827+	  supplies.
828828+
829829+	  To compile this driver as a module, choose M here: the
830830+	  module will be called stpmic1_wdt.
831831+
820832config UNIPHIER_WATCHDOG
821833	tristate "UniPhier watchdog support"
822834	depends on ARCH_UNIPHIER || COMPILE_TEST
···7979 return -ENOMEM;80808181 res = platform_get_resource(pdev, IORESOURCE_IO, 0);8282- if (IS_ERR(res))8383- return PTR_ERR(res);8282+ if (!res)8383+ return -ENODEV;84848585 priv->io_base = devm_ioport_map(&pdev->dev, res->start,8686 resource_size(res));8787- if (IS_ERR(priv->io_base))8888- return PTR_ERR(priv->io_base);8787+ if (!priv->io_base)8888+ return -ENOMEM;89899090 watchdog_set_drvdata(&priv->wdd, priv);9191
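The watchdog probe hunk above swaps `IS_ERR()`/`PTR_ERR()` checks for plain NULL checks: `platform_get_resource()` and `devm_ioport_map()` report failure by returning NULL, not an ERR_PTR-encoded pointer, so `IS_ERR()` on their result can never fire. A simplified userspace re-derivation of the ERR_PTR convention (macros rebuilt here for illustration, not copied from kernel headers) shows why the two failure styles must not be mixed:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_ERRNO 4095

/*
 * Simplified ERR_PTR convention: small negative errnos are encoded in the
 * top page of the address space. NULL is *not* in that range, which is
 * exactly why IS_ERR() cannot catch a function that fails by returning NULL.
 */
static inline void *ERR_PTR(intptr_t error) { return (void *)error; }
static inline intptr_t PTR_ERR(const void *ptr) { return (intptr_t)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```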
+1-1
drivers/xen/events/events_base.c
···16501650 xen_have_vector_callback = 0;16511651 return;16521652 }16531653- pr_info("Xen HVM callback vector for event delivery is enabled\n");16531653+ pr_info_once("Xen HVM callback vector for event delivery is enabled\n");16541654 alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR,16551655 xen_hvm_callback_vector);16561656 }
+4-5
drivers/xen/pvcalls-back.c
···160160161161 /* write the data, then modify the indexes */162162 virt_wmb();163163- if (ret < 0)163163+ if (ret < 0) {164164+ atomic_set(&map->read, 0);164165 intf->in_error = ret;165165- else166166+ } else166167 intf->in_prod = prod + ret;167168 /* update the indexes, then notify the other end */168169 virt_wmb();···283282static void pvcalls_sk_state_change(struct sock *sock)284283{285284 struct sock_mapping *map = sock->sk_user_data;286286- struct pvcalls_data_intf *intf;287285288286 if (map == NULL)289287 return;290288291291- intf = map->ring;292292- intf->in_error = -ENOTCONN;289289+ atomic_inc(&map->read);293290 notify_remote_via_irq(map->irq);294291}295292
+76-30
drivers/xen/pvcalls-front.c
···3131#define PVCALLS_NR_RSP_PER_RING __CONST_RING_SIZE(xen_pvcalls, XEN_PAGE_SIZE)3232#define PVCALLS_FRONT_MAX_SPIN 500033333434+static struct proto pvcalls_proto = {3535+ .name = "PVCalls",3636+ .owner = THIS_MODULE,3737+ .obj_size = sizeof(struct sock),3838+};3939+3440struct pvcalls_bedata {3541 struct xen_pvcalls_front_ring ring;3642 grant_ref_t ref;···341335 return ret;342336}343337338338+static void free_active_ring(struct sock_mapping *map)339339+{340340+ if (!map->active.ring)341341+ return;342342+343343+ free_pages((unsigned long)map->active.data.in,344344+ map->active.ring->ring_order);345345+ free_page((unsigned long)map->active.ring);346346+}347347+348348+static int alloc_active_ring(struct sock_mapping *map)349349+{350350+ void *bytes;351351+352352+ map->active.ring = (struct pvcalls_data_intf *)353353+ get_zeroed_page(GFP_KERNEL);354354+ if (!map->active.ring)355355+ goto out;356356+357357+ map->active.ring->ring_order = PVCALLS_RING_ORDER;358358+ bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,359359+ PVCALLS_RING_ORDER);360360+ if (!bytes)361361+ goto out;362362+363363+ map->active.data.in = bytes;364364+ map->active.data.out = bytes +365365+ XEN_FLEX_RING_SIZE(PVCALLS_RING_ORDER);366366+367367+ return 0;368368+369369+out:370370+ free_active_ring(map);371371+ return -ENOMEM;372372+}373373+344374static int create_active(struct sock_mapping *map, int *evtchn)345375{346376 void *bytes;···385343 *evtchn = -1;386344 init_waitqueue_head(&map->active.inflight_conn_req);387345388388- map->active.ring = (struct pvcalls_data_intf *)389389- __get_free_page(GFP_KERNEL | __GFP_ZERO);390390- if (map->active.ring == NULL)391391- goto out_error;392392- map->active.ring->ring_order = PVCALLS_RING_ORDER;393393- bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,394394- PVCALLS_RING_ORDER);395395- if (bytes == NULL)396396- goto out_error;346346+ bytes = map->active.data.in;397347 for (i = 0; i < (1 << PVCALLS_RING_ORDER); i++)398348 
map->active.ring->ref[i] = gnttab_grant_foreign_access(399349 pvcalls_front_dev->otherend_id,···394360 map->active.ref = gnttab_grant_foreign_access(395361 pvcalls_front_dev->otherend_id,396362 pfn_to_gfn(virt_to_pfn((void *)map->active.ring)), 0);397397-398398- map->active.data.in = bytes;399399- map->active.data.out = bytes +400400- XEN_FLEX_RING_SIZE(PVCALLS_RING_ORDER);401363402364 ret = xenbus_alloc_evtchn(pvcalls_front_dev, evtchn);403365 if (ret)···415385out_error:416386 if (*evtchn >= 0)417387 xenbus_free_evtchn(pvcalls_front_dev, *evtchn);418418- free_pages((unsigned long)map->active.data.in, PVCALLS_RING_ORDER);419419- free_page((unsigned long)map->active.ring);420388 return ret;421389}422390···434406 return PTR_ERR(map);435407436408 bedata = dev_get_drvdata(&pvcalls_front_dev->dev);409409+ ret = alloc_active_ring(map);410410+ if (ret < 0) {411411+ pvcalls_exit_sock(sock);412412+ return ret;413413+ }437414438415 spin_lock(&bedata->socket_lock);439416 ret = get_request(bedata, &req_id);440417 if (ret < 0) {441418 spin_unlock(&bedata->socket_lock);419419+ free_active_ring(map);442420 pvcalls_exit_sock(sock);443421 return ret;444422 }445423 ret = create_active(map, &evtchn);446424 if (ret < 0) {447425 spin_unlock(&bedata->socket_lock);426426+ free_active_ring(map);448427 pvcalls_exit_sock(sock);449428 return ret;450429 }···504469 virt_mb();505470506471 size = pvcalls_queued(prod, cons, array_size);507507- if (size >= array_size)472472+ if (size > array_size)508473 return -EINVAL;474474+ if (size == array_size)475475+ return 0;509476 if (len > array_size - size)510477 len = array_size - size;511478···597560 error = intf->in_error;598561 /* get pointers before reading from the ring */599562 virt_rmb();600600- if (error < 0)601601- return error;602563603564 size = pvcalls_queued(prod, cons, array_size);604565 masked_prod = pvcalls_mask(prod, array_size);605566 masked_cons = pvcalls_mask(cons, array_size);606567607568 if (size == 0)608608- return 0;569569+ 
return error ?: size;609570610571 if (len > size)611572 len = size;···815780 }816781 }817782783783+ map2 = kzalloc(sizeof(*map2), GFP_KERNEL);784784+ if (map2 == NULL) {785785+ clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,786786+ (void *)&map->passive.flags);787787+ pvcalls_exit_sock(sock);788788+ return -ENOMEM;789789+ }790790+ ret = alloc_active_ring(map2);791791+ if (ret < 0) {792792+ clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,793793+ (void *)&map->passive.flags);794794+ kfree(map2);795795+ pvcalls_exit_sock(sock);796796+ return ret;797797+ }818798 spin_lock(&bedata->socket_lock);819799 ret = get_request(bedata, &req_id);820800 if (ret < 0) {821801 clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,822802 (void *)&map->passive.flags);823803 spin_unlock(&bedata->socket_lock);804804+ free_active_ring(map2);805805+ kfree(map2);824806 pvcalls_exit_sock(sock);825807 return ret;826808 }827827- map2 = kzalloc(sizeof(*map2), GFP_ATOMIC);828828- if (map2 == NULL) {829829- clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,830830- (void *)&map->passive.flags);831831- spin_unlock(&bedata->socket_lock);832832- pvcalls_exit_sock(sock);833833- return -ENOMEM;834834- }809809+835810 ret = create_active(map2, &evtchn);836811 if (ret < 0) {812812+ free_active_ring(map2);837813 kfree(map2);838814 clear_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,839815 (void *)&map->passive.flags);···885839886840received:887841 map2->sock = newsock;888888- newsock->sk = kzalloc(sizeof(*newsock->sk), GFP_KERNEL);842842+ newsock->sk = sk_alloc(sock_net(sock->sk), PF_INET, GFP_KERNEL, &pvcalls_proto, false);889843 if (!newsock->sk) {890844 bedata->rsp[req_id].req_id = PVCALLS_INVALID_ID;891845 map->passive.inflight_req_id = PVCALLS_INVALID_ID;···10781032 spin_lock(&bedata->socket_lock);10791033 list_del(&map->list);10801034 spin_unlock(&bedata->socket_lock);10811081- if (READ_ONCE(map->passive.inflight_req_id) !=10821082- PVCALLS_INVALID_ID) {10351035+ if (READ_ONCE(map->passive.inflight_req_id) != PVCALLS_INVALID_ID &&10361036+ 
READ_ONCE(map->passive.inflight_req_id) != 0) {10831037 pvcalls_front_free_map(bedata,10841038 map->passive.accept_map);10851039 }
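The pvcalls-front changes tighten the ring-index arithmetic: with free-running 32-bit producer/consumer counters on a power-of-two ring, `prod - cons` equal to the ring size means "full" (now reported as 0 bytes writable rather than an error), while anything larger can only mean corrupted indices. A sketch of that arithmetic under the same assumptions (names and the tiny ring order are mine):

```c
#include <assert.h>
#include <stdint.h>

#define RING_ORDER 2
#define RING_SIZE (1u << RING_ORDER)	/* must be a power of two */

uint32_t ring_mask(uint32_t idx) { return idx & (RING_SIZE - 1); }

/* Bytes queued between free-running prod and cons counters. */
uint32_t ring_queued(uint32_t prod, uint32_t cons)
{
	return prod - cons;	/* well-defined even across wrap-around */
}

/* Writable space; -1 flags corrupted indices (queued > ring size). */
int ring_writable(uint32_t prod, uint32_t cons)
{
	uint32_t size = ring_queued(prod, cons);

	if (size > RING_SIZE)
		return -1;		/* -EINVAL in the driver */
	return (int)(RING_SIZE - size);	/* 0 when the ring is full */
}
```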
+2-2
fs/afs/flock.c
···208208 /* The new front of the queue now owns the state variables. */209209 next = list_entry(vnode->pending_locks.next,210210 struct file_lock, fl_u.afs.link);211211- vnode->lock_key = afs_file_key(next->fl_file);211211+ vnode->lock_key = key_get(afs_file_key(next->fl_file));212212 vnode->lock_type = (next->fl_type == F_RDLCK) ? AFS_LOCK_READ : AFS_LOCK_WRITE;213213 vnode->lock_state = AFS_VNODE_LOCK_WAITING_FOR_CB;214214 goto again;···413413 /* The new front of the queue now owns the state variables. */414414 next = list_entry(vnode->pending_locks.next,415415 struct file_lock, fl_u.afs.link);416416- vnode->lock_key = afs_file_key(next->fl_file);416416+ vnode->lock_key = key_get(afs_file_key(next->fl_file));417417 vnode->lock_type = (next->fl_type == F_RDLCK) ? AFS_LOCK_READ : AFS_LOCK_WRITE;418418 vnode->lock_state = AFS_VNODE_LOCK_WAITING_FOR_CB;419419 afs_lock_may_be_available(vnode);
···2323static void afs_wake_up_call_waiter(struct sock *, struct rxrpc_call *, unsigned long);2424static long afs_wait_for_call_to_complete(struct afs_call *, struct afs_addr_cursor *);2525static void afs_wake_up_async_call(struct sock *, struct rxrpc_call *, unsigned long);2626+static void afs_delete_async_call(struct work_struct *);2627static void afs_process_async_call(struct work_struct *);2728static void afs_rx_new_call(struct sock *, struct rxrpc_call *, unsigned long);2829static void afs_rx_discard_new_call(struct rxrpc_call *, unsigned long);···204203 }205204}206205206206+static struct afs_call *afs_get_call(struct afs_call *call,207207+ enum afs_call_trace why)208208+{209209+ int u = atomic_inc_return(&call->usage);210210+211211+ trace_afs_call(call, why, u,212212+ atomic_read(&call->net->nr_outstanding_calls),213213+ __builtin_return_address(0));214214+ return call;215215+}216216+207217/*208218 * Queue the call for actual work.209219 */210220static void afs_queue_call_work(struct afs_call *call)211221{212222 if (call->type->work) {213213- int u = atomic_inc_return(&call->usage);214214-215215- trace_afs_call(call, afs_call_trace_work, u,216216- atomic_read(&call->net->nr_outstanding_calls),217217- __builtin_return_address(0));218218-219223 INIT_WORK(&call->work, call->type->work);220224225225+ afs_get_call(call, afs_call_trace_work);221226 if (!queue_work(afs_wq, &call->work))222227 afs_put_call(call);223228 }···405398 }406399 }407400401401+ /* If the call is going to be asynchronous, we need an extra ref for402402+ * the call to hold itself so the caller need not hang on to its ref.403403+ */404404+ if (call->async)405405+ afs_get_call(call, afs_call_trace_get);406406+408407 /* create a call */409408 rxcall = rxrpc_kernel_begin_call(call->net->socket, srx, call->key,410409 (unsigned long)call,···451438 goto error_do_abort;452439 }453440454454- /* at this point, an async call may no longer exist as it may have455455- * already completed */456456- if 
(call->async)441441+ /* Note that at this point, we may have received the reply or an abort442442+ * - and an asynchronous call may already have completed.443443+ */444444+ if (call->async) {445445+ afs_put_call(call);457446 return -EINPROGRESS;447447+ }458448459449 return afs_wait_for_call_to_complete(call, ac);460450461451error_do_abort:462462- call->state = AFS_CALL_COMPLETE;463452 if (ret != -ECONNABORTED) {464453 rxrpc_kernel_abort_call(call->net->socket, rxcall,465454 RX_USER_ABORT, ret, "KSD");···478463error_kill_call:479464 if (call->type->done)480465 call->type->done(call);481481- afs_put_call(call);466466+467467+ /* We need to dispose of the extra ref we grabbed for an async call.468468+ * The call, however, might be queued on afs_async_calls and we need to469469+ * make sure we don't get any more notifications that might requeue it.470470+ */471471+ if (call->rxcall) {472472+ rxrpc_kernel_end_call(call->net->socket, call->rxcall);473473+ call->rxcall = NULL;474474+ }475475+ if (call->async) {476476+ if (cancel_work_sync(&call->async_work))477477+ afs_put_call(call);478478+ afs_put_call(call);479479+ }480480+482481 ac->error = ret;482482+ call->state = AFS_CALL_COMPLETE;483483+ afs_put_call(call);483484 _leave(" = %d", ret);484485 return ret;485486}
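The rxrpc hunks introduce `afs_get_call()` so the async path takes an extra reference before the call is handed to the network layer, then drops its own reference and returns `-EINPROGRESS` immediately. A minimal model of that get/put pairing (tracing stripped out, names and the `freed` test hook are mine):

```c
#include <assert.h>
#include <stdatomic.h>

/*
 * Minimal model of the afs_get_call()/afs_put_call() pairing: an async
 * call takes one extra reference for itself, so the caller can drop its
 * own reference and return without waiting for completion.
 */
struct call {
	atomic_int usage;
	int freed;	/* test hook: set when the final reference is dropped */
};

struct call *call_get(struct call *c)
{
	atomic_fetch_add(&c->usage, 1);
	return c;
}

void call_put(struct call *c)
{
	if (atomic_fetch_sub(&c->usage, 1) == 1)
		c->freed = 1;	/* afs_put_call() would free the call here */
}
```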
···803803 bp = xdr_encode_YFSFid(bp, &vnode->fid);804804 bp = xdr_encode_string(bp, name, namesz);805805 bp = xdr_encode_YFSStoreStatus_mode(bp, mode);806806- bp = xdr_encode_u32(bp, 0); /* ViceLockType */806806+ bp = xdr_encode_u32(bp, yfs_LockNone); /* ViceLockType */807807 yfs_check_req(call, bp);808808809809 afs_use_fs_server(call, fc->cbi);
+18-10
fs/block_dev.c
···104104}105105EXPORT_SYMBOL(invalidate_bdev);106106107107+static void set_init_blocksize(struct block_device *bdev)108108+{109109+ unsigned bsize = bdev_logical_block_size(bdev);110110+ loff_t size = i_size_read(bdev->bd_inode);111111+112112+ while (bsize < PAGE_SIZE) {113113+ if (size & bsize)114114+ break;115115+ bsize <<= 1;116116+ }117117+ bdev->bd_block_size = bsize;118118+ bdev->bd_inode->i_blkbits = blksize_bits(bsize);119119+}120120+107121int set_blocksize(struct block_device *bdev, int size)108122{109123 /* Size must be a power of two, and between 512 and PAGE_SIZE */···1445143114461432void bd_set_size(struct block_device *bdev, loff_t size)14471433{14481448- unsigned bsize = bdev_logical_block_size(bdev);14491449-14501434 inode_lock(bdev->bd_inode);14511435 i_size_write(bdev->bd_inode, size);14521436 inode_unlock(bdev->bd_inode);14531453- while (bsize < PAGE_SIZE) {14541454- if (size & bsize)14551455- break;14561456- bsize <<= 1;14571457- }14581458- bdev->bd_block_size = bsize;14591459- bdev->bd_inode->i_blkbits = blksize_bits(bsize);14601437}14611438EXPORT_SYMBOL(bd_set_size);14621439···15241519 }15251520 }1526152115271527- if (!ret)15221522+ if (!ret) {15281523 bd_set_size(bdev,(loff_t)get_capacity(disk)<<9);15241524+ set_init_blocksize(bdev);15251525+ }1529152615301527 /*15311528 * If the device is invalidated, rescan partition···15621555 goto out_clear;15631556 }15641557 bd_set_size(bdev, (loff_t)bdev->bd_part->nr_sects << 9);15581558+ set_init_blocksize(bdev);15651559 }1566156015671561 if (bdev->bd_bdi == &noop_backing_dev_info)
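`set_init_blocksize()` above picks the largest power-of-two block size, starting at the logical block size and capped at PAGE_SIZE, that still divides the device size evenly (each set low bit in `size` stops the doubling). The same computation as a standalone function, with PAGE_SIZE assumed to be 4096 for the sketch:

```c
#include <assert.h>
#include <stdint.h>

#define SKETCH_PAGE_SIZE 4096u	/* assumption for this sketch */

/*
 * Largest power-of-two block size >= lbs and <= PAGE_SIZE that divides
 * `size` evenly; mirrors the loop in set_init_blocksize().
 */
unsigned int init_blocksize(uint64_t size, unsigned int lbs)
{
	unsigned int bsize = lbs;

	while (bsize < SKETCH_PAGE_SIZE) {
		if (size & bsize)
			break;
		bsize <<= 1;
	}
	return bsize;
}
```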
+9-7
fs/btrfs/ctree.c
···10161016 parent_start = parent->start;1017101710181018 /*10191019- * If we are COWing a node/leaf from the extent, chunk or device trees,10201020- * make sure that we do not finish block group creation of pending block10211021- * groups. We do this to avoid a deadlock.10191019+ * If we are COWing a node/leaf from the extent, chunk, device or free10201020+ * space trees, make sure that we do not finish block group creation of10211021+ * pending block groups. We do this to avoid a deadlock.10221022 * COWing can result in allocation of a new chunk, and flushing pending10231023 * block groups (btrfs_create_pending_block_groups()) can be triggered10241024 * when finishing allocation of a new chunk. Creation of a pending block10251025- * group modifies the extent, chunk and device trees, therefore we could10261026- * deadlock with ourselves since we are holding a lock on an extent10271027- * buffer that btrfs_create_pending_block_groups() may try to COW later.10251025+ * group modifies the extent, chunk, device and free space trees,10261026+ * therefore we could deadlock with ourselves since we are holding a10271027+ * lock on an extent buffer that btrfs_create_pending_block_groups() may10281028+ * try to COW later.10281029 */10291030 if (root == fs_info->extent_root ||10301031 root == fs_info->chunk_root ||10311031- root == fs_info->dev_root)10321032+ root == fs_info->dev_root ||10331033+ root == fs_info->free_space_root)10321034 trans->can_flush_pending_bgs = false;1033103510341036 cow = btrfs_alloc_tree_block(trans, root, parent_start,
+7
fs/btrfs/ctree.h
···3535struct btrfs_trans_handle;3636struct btrfs_transaction;3737struct btrfs_pending_snapshot;3838+struct btrfs_delayed_ref_root;3839extern struct kmem_cache *btrfs_trans_handle_cachep;3940extern struct kmem_cache *btrfs_bit_radix_cachep;4041extern struct kmem_cache *btrfs_path_cachep;···787786 * main phase. The fs_info::balance_ctl is initialized.788787 */789788 BTRFS_FS_BALANCE_RUNNING,789789+790790+ /* Indicate that the cleaner thread is awake and doing something. */791791+ BTRFS_FS_CLEANER_RUNNING,790792};791793792794struct btrfs_fs_info {···26652661 unsigned long count);26662662int btrfs_async_run_delayed_refs(struct btrfs_fs_info *fs_info,26672663 unsigned long count, u64 transid, int wait);26642664+void btrfs_cleanup_ref_head_accounting(struct btrfs_fs_info *fs_info,26652665+ struct btrfs_delayed_ref_root *delayed_refs,26662666+ struct btrfs_delayed_ref_head *head);26682667int btrfs_lookup_data_extent(struct btrfs_fs_info *fs_info, u64 start, u64 len);26692668int btrfs_lookup_extent_info(struct btrfs_trans_handle *trans,26702669 struct btrfs_fs_info *fs_info, u64 bytenr,
+12
fs/btrfs/disk-io.c
···16821682 while (1) {16831683 again = 0;1684168416851685+ set_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags);16861686+16851687 /* Make the cleaner go to sleep early. */16861688 if (btrfs_need_cleaner_sleep(fs_info))16871689 goto sleep;···17301728 */17311729 btrfs_delete_unused_bgs(fs_info);17321730sleep:17311731+ clear_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags);17331732 if (kthread_should_park())17341733 kthread_parkme();17351734 if (kthread_should_stop())···42044201 spin_lock(&fs_info->ordered_root_lock);42054202 }42064203 spin_unlock(&fs_info->ordered_root_lock);42044204+42054205+ /*42064206+ * We need this here because if we've been flipped read-only we won't42074207+ * get sync() from the umount, so we need to make sure any ordered42084208+ * extents that haven't had their dirty pages IO start writeout yet42094209+ * actually get run and error out properly.42104210+ */42114211+ btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);42074212}4208421342094214static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,···42764265 if (pin_bytes)42774266 btrfs_pin_extent(fs_info, head->bytenr,42784267 head->num_bytes, 1);42684268+ btrfs_cleanup_ref_head_accounting(fs_info, delayed_refs, head);42794269 btrfs_put_delayed_ref_head(head);42804270 cond_resched();42814271 spin_lock(&delayed_refs->lock);
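The disk-io hunk brackets each cleaner iteration with `BTRFS_FS_CLEANER_RUNNING`, and the fs/btrfs/inode.c hunk further down uses that flag to wake the cleaner only when it is actually asleep. A toy model of the pattern (global state and names are mine, for illustration):

```c
#include <assert.h>
#include <stdatomic.h>

atomic_bool cleaner_running;	/* models BTRFS_FS_CLEANER_RUNNING */
int wakeups;			/* test hook: counts wake_up_process() calls */

/* One cleaner loop iteration: publish "awake" for the whole iteration. */
void cleaner_iteration(void)
{
	atomic_store(&cleaner_running, 1);
	/* ... run delayed iputs, delete unused block groups ... */
	atomic_store(&cleaner_running, 0);
}

/* Mirrors the btrfs_add_delayed_iput() change: only wake a sleeping cleaner. */
void add_delayed_iput(void)
{
	/* ... queue the inode on the delayed-iput list ... */
	if (!atomic_load(&cleaner_running))
		wakeups++;
}
```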
+14-7
fs/btrfs/extent-tree.c
···24562456 return ret ? ret : 1;24572457}2458245824592459-static void cleanup_ref_head_accounting(struct btrfs_trans_handle *trans,24602460- struct btrfs_delayed_ref_head *head)24592459+void btrfs_cleanup_ref_head_accounting(struct btrfs_fs_info *fs_info,24602460+ struct btrfs_delayed_ref_root *delayed_refs,24612461+ struct btrfs_delayed_ref_head *head)24612462{24622462- struct btrfs_fs_info *fs_info = trans->fs_info;24632463- struct btrfs_delayed_ref_root *delayed_refs =24642464- &trans->transaction->delayed_refs;24652463 int nr_items = 1; /* Dropping this ref head update. */2466246424672465 if (head->total_ref_mod < 0) {···25422544 }25432545 }2544254625452545- cleanup_ref_head_accounting(trans, head);25472547+ btrfs_cleanup_ref_head_accounting(fs_info, delayed_refs, head);2546254825472549 trace_run_delayed_ref_head(fs_info, head, 0);25482550 btrfs_delayed_ref_unlock(head);···49524954 ret = 0;49534955 break;49544956 case COMMIT_TRANS:49574957+ /*49584958+ * If we have pending delayed iputs then we could free up a49594959+ * bunch of pinned space, so make sure we run the iputs before49604960+ * we do our pinned bytes check below.49614961+ */49624962+ mutex_lock(&fs_info->cleaner_delayed_iput_mutex);49634963+ btrfs_run_delayed_iputs(fs_info);49644964+ mutex_unlock(&fs_info->cleaner_delayed_iput_mutex);49654965+49554966 ret = may_commit_transaction(fs_info, space_info);49564967 break;49574968 default:···71957188 if (head->must_insert_reserved)71967189 ret = 1;7197719071987198- cleanup_ref_head_accounting(trans, head);71917191+ btrfs_cleanup_ref_head_accounting(trans->fs_info, delayed_refs, head);71997192 mutex_unlock(&head->mutex);72007193 btrfs_put_delayed_ref_head(head);72017194 return ret;
+2-3
fs/btrfs/inode.c
···31293129 /* once for the tree */31303130 btrfs_put_ordered_extent(ordered_extent);3131313131323132- /* Try to release some metadata so we don't get an OOM but don't wait */31333133- btrfs_btree_balance_dirty_nodelay(fs_info);31343134-31353132 return ret;31363133}31373134···32513254 ASSERT(list_empty(&binode->delayed_iput));32523255 list_add_tail(&binode->delayed_iput, &fs_info->delayed_iputs);32533256 spin_unlock(&fs_info->delayed_iput_lock);32573257+ if (!test_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags))32583258+ wake_up_process(fs_info->cleaner_kthread);32543259}3255326032563261void btrfs_run_delayed_iputs(struct btrfs_fs_info *fs_info)
+43-6
fs/btrfs/ioctl.c
···32213221 inode_lock_nested(inode2, I_MUTEX_CHILD);32223222}3223322332243224+static void btrfs_double_extent_unlock(struct inode *inode1, u64 loff1,32253225+ struct inode *inode2, u64 loff2, u64 len)32263226+{32273227+ unlock_extent(&BTRFS_I(inode1)->io_tree, loff1, loff1 + len - 1);32283228+ unlock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1);32293229+}32303230+32313231+static void btrfs_double_extent_lock(struct inode *inode1, u64 loff1,32323232+ struct inode *inode2, u64 loff2, u64 len)32333233+{32343234+ if (inode1 < inode2) {32353235+ swap(inode1, inode2);32363236+ swap(loff1, loff2);32373237+ } else if (inode1 == inode2 && loff2 < loff1) {32383238+ swap(loff1, loff2);32393239+ }32403240+ lock_extent(&BTRFS_I(inode1)->io_tree, loff1, loff1 + len - 1);32413241+ lock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1);32423242+}32433243+32243244static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 olen,32253245 struct inode *dst, u64 dst_loff)32263246{···32623242 return -EINVAL;3263324332643244 /*32653265- * Lock destination range to serialize with concurrent readpages().32453245+ * Lock destination range to serialize with concurrent readpages() and32463246+ * source range to serialize with relocation.32663247 */32673267- lock_extent(&BTRFS_I(dst)->io_tree, dst_loff, dst_loff + len - 1);32483248+ btrfs_double_extent_lock(src, loff, dst, dst_loff, len);32683249 ret = btrfs_clone(src, dst, loff, olen, len, dst_loff, 1);32693269- unlock_extent(&BTRFS_I(dst)->io_tree, dst_loff, dst_loff + len - 1);32503250+ btrfs_double_extent_unlock(src, loff, dst, dst_loff, len);3270325132713252 return ret;32723253}···39263905 len = ALIGN(src->i_size, bs) - off;3927390639283907 if (destoff > inode->i_size) {39083908+ const u64 wb_start = ALIGN_DOWN(inode->i_size, bs);39093909+39293910 ret = btrfs_cont_expand(inode, inode->i_size, destoff);39113911+ if (ret)39123912+ return ret;39133913+ /*39143914+ * We may have truncated the last block if the 
inode's size is39153915+ * not sector size aligned, so we need to wait for writeback to39163916+ * complete before proceeding further, otherwise we can race39173917+ * with cloning and attempt to increment a reference to an39183918+ * extent that no longer exists (writeback completed right after39193919+ * we found the previous extent covering eof and before we39203920+ * attempted to increment its reference count).39213921+ */39223922+ ret = btrfs_wait_ordered_range(inode, wb_start,39233923+ destoff - wb_start);39303924 if (ret)39313925 return ret;39323926 }3933392739343928 /*39353935- * Lock destination range to serialize with concurrent readpages().39293929+ * Lock destination range to serialize with concurrent readpages() and39303930+ * source range to serialize with relocation.39363931 */39373937- lock_extent(&BTRFS_I(inode)->io_tree, destoff, destoff + len - 1);39323932+ btrfs_double_extent_lock(src, off, inode, destoff, len);39383933 ret = btrfs_clone(src, inode, off, olen, len, destoff, 0);39393939- unlock_extent(&BTRFS_I(inode)->io_tree, destoff, destoff + len - 1);39343934+ btrfs_double_extent_unlock(src, off, inode, destoff, len);39403935 /*39413936 * Truncate page cache pages so that future reads will see the cloned39423937 * data immediately and not the previous data.
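`btrfs_double_extent_lock()` above avoids ABBA deadlocks between two tasks cloning in opposite directions by always locking the two ranges in a canonical order: it compares the raw inode pointers, and for ranges within one inode, sorts by offset. A sketch of just the ordering rule (struct and function names are mine):

```c
#include <assert.h>
#include <stdint.h>

struct range {
	const void *inode;	/* compared as a raw pointer, as in the diff */
	uint64_t off;
};

/*
 * Canonical lock ordering of btrfs_double_extent_lock(): every task sorts
 * the pair the same way before locking, so two clones running in opposite
 * directions can never deadlock ABBA-style.
 */
void order_ranges(struct range *a, struct range *b)
{
	if (a->inode < b->inode) {
		struct range t = *a; *a = *b; *b = t;
	} else if (a->inode == b->inode && b->off < a->off) {
		uint64_t t = a->off; a->off = b->off; b->off = t;
	}
}
```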
+12
fs/btrfs/volumes.c
···78257825 ret = -EUCLEAN;78267826 goto out;78277827 }78287828+78297829+ /* It's possible this device is a dummy for seed device */78307830+ if (dev->disk_total_bytes == 0) {78317831+ dev = find_device(fs_info->fs_devices->seed, devid, NULL);78327832+ if (!dev) {78337833+ btrfs_err(fs_info, "failed to find seed devid %llu",78347834+ devid);78357835+ ret = -EUCLEAN;78367836+ goto out;78377837+ }78387838+ }78397839+78287840 if (physical_offset + physical_len > dev->disk_total_bytes) {78297841 btrfs_err(fs_info,78307842"dev extent devid %llu physical offset %llu len %llu is beyond device boundary %llu",
+1-4
fs/ceph/addr.c
···14941494 if (err < 0 || off >= i_size_read(inode)) {14951495 unlock_page(page);14961496 put_page(page);14971497- if (err == -ENOMEM)14981498- ret = VM_FAULT_OOM;14991499- else15001500- ret = VM_FAULT_SIGBUS;14971497+ ret = vmf_error(err);15011498 goto out_inline;15021499 }15031500 if (err < PAGE_SIZE)
+2-2
fs/ceph/super.c
···530530 seq_putc(m, ',');531531 pos = m->count;532532533533- ret = ceph_print_client_options(m, fsc->client);533533+ ret = ceph_print_client_options(m, fsc->client, false);534534 if (ret)535535 return ret;536536···640640 opt = NULL; /* fsc->client now owns this */641641642642 fsc->client->extra_mon_dispatch = extra_mon_dispatch;643643- fsc->client->osdc.abort_on_full = true;643643+ ceph_set_opt(fsc->client, ABORT_ON_FULL);644644645645 if (!fsopt->mds_namespace) {646646 ceph_monc_want_map(&fsc->client->monc, CEPH_SUB_MDSMAP,
···14381438 int mid_state; /* wish this were enum but can not pass to wait_event */14391439 unsigned int mid_flags;14401440 __le16 command; /* smb command code */14411441+ unsigned int optype; /* operation type */14411442 bool large_buf:1; /* if valid response, is pointer to large buf */14421443 bool multiRsp:1; /* multiple trans2 responses for one request */14431444 bool multiEnd:1; /* both received */···15731572 kfree(param[i].node_name);15741573 }15751574 kfree(param);15751575+}15761576+15771577+static inline bool is_interrupt_error(int error)15781578+{15791579+ switch (error) {15801580+ case -EINTR:15811581+ case -ERESTARTSYS:15821582+ case -ERESTARTNOHAND:15831583+ case -ERESTARTNOINTR:15841584+ return true;15851585+ }15861586+ return false;15871587+}15881588+15891589+static inline bool is_retryable_error(int error)15901590+{15911591+ if (is_interrupt_error(error) || error == -EAGAIN)15921592+ return true;15931593+ return false;15761594}1577159515781596#define MID_FREE 0
···387387 if (rc < 0 && rc != -EINTR)388388 cifs_dbg(VFS, "Error %d sending data on socket to server\n",389389 rc);390390- else390390+ else if (rc > 0)391391 rc = 0;392392393393 return rc;···783783}784784785785static void786786-cifs_noop_callback(struct mid_q_entry *mid)786786+cifs_compound_callback(struct mid_q_entry *mid)787787{788788+ struct TCP_Server_Info *server = mid->server;789789+ unsigned int optype = mid->optype;790790+ unsigned int credits_received = 0;791791+792792+ if (mid->mid_state == MID_RESPONSE_RECEIVED) {793793+ if (mid->resp_buf)794794+ credits_received = server->ops->get_credits(mid);795795+ else796796+ cifs_dbg(FYI, "Bad state for cancelled MID\n");797797+ }798798+799799+ add_credits(server, credits_received, optype);800800+}801801+802802+static void803803+cifs_compound_last_callback(struct mid_q_entry *mid)804804+{805805+ cifs_compound_callback(mid);806806+ cifs_wake_up_task(mid);807807+}808808+809809+static void810810+cifs_cancelled_callback(struct mid_q_entry *mid)811811+{812812+ cifs_compound_callback(mid);813813+ DeleteMidQEntry(mid);788814}789815790816int···821795 int i, j, rc = 0;822796 int timeout, optype;823797 struct mid_q_entry *midQ[MAX_COMPOUND];824824- unsigned int credits = 0;798798+ bool cancelled_mid[MAX_COMPOUND] = {false};799799+ unsigned int credits[MAX_COMPOUND] = {0};825800 char *buf;826801827802 timeout = flags & CIFS_TIMEOUT_MASK;···840813 return -ENOENT;841814842815 /*843843- * Ensure that we do not send more than 50 overlapping requests844844- * to the same server. 
We may make this configurable later or845845- * use ses->maxReq.816816+ * Ensure we obtain 1 credit per request in the compound chain.817817+ * It can be optimized further by waiting for all the credits818818+ * at once but this can wait long enough if we don't have enough819819+ * credits due to some heavy operations in progress or the server820820+ * not granting us much, so a fallback to the current approach is821821+ * needed anyway.846822 */847847- rc = wait_for_free_request(ses->server, timeout, optype);848848- if (rc)849849- return rc;823823+ for (i = 0; i < num_rqst; i++) {824824+ rc = wait_for_free_request(ses->server, timeout, optype);825825+ if (rc) {826826+ /*827827+ * We haven't sent an SMB packet to the server yet but828828+ * we already obtained credits for i requests in the829829+ * compound chain - need to return those credits back830830+ * for future use. Note that we need to call add_credits831831+ * multiple times to match the way we obtained credits832832+ * in the first place and to account for in flight833833+ * requests correctly.834834+ */835835+ for (j = 0; j < i; j++)836836+ add_credits(ses->server, 1, optype);837837+ return rc;838838+ }839839+ credits[i] = 1;840840+ }850841851842 /*852843 * Make sure that we sign in the same order that we send on this socket···880835 for (j = 0; j < i; j++)881836 cifs_delete_mid(midQ[j]);882837 mutex_unlock(&ses->server->srv_mutex);838838+883839 /* Update # of requests on wire to server */884884- add_credits(ses->server, 1, optype);840840+ for (j = 0; j < num_rqst; j++)841841+ add_credits(ses->server, credits[j], optype);885842 return PTR_ERR(midQ[i]);886843 }887844888845 midQ[i]->mid_state = MID_REQUEST_SUBMITTED;846846+ midQ[i]->optype = optype;889847 /*890890- * We don't invoke the callback compounds unless it is the last891891- * request.848848+ * Invoke callback for every part of the compound chain849849+ * to calculate credits properly. 
Wake up this thread only when850850+ * the last element is received.892851 */893852 if (i < num_rqst - 1)894894- midQ[i]->callback = cifs_noop_callback;853853+ midQ[i]->callback = cifs_compound_callback;854854+ else855855+ midQ[i]->callback = cifs_compound_last_callback;895856 }896857 cifs_in_send_inc(ses->server);897858 rc = smb_send_rqst(ses->server, num_rqst, rqst, flags);···911860912861 mutex_unlock(&ses->server->srv_mutex);913862914914- if (rc < 0)863863+ if (rc < 0) {864864+ /* Sending failed for some reason - return credits back */865865+ for (i = 0; i < num_rqst; i++)866866+ add_credits(ses->server, credits[i], optype);915867 goto out;868868+ }869869+870870+ /*871871+ * At this point the request is passed to the network stack - we assume872872+ * that any credits taken from the server structure on the client have873873+ * been spent and we can't return them back. Once we receive responses874874+ * we will collect credits granted by the server in the mid callbacks875875+ * and add those credits to the server structure.876876+ */916877917878 /*918879 * Compounding is never used during session establish.···938875939876 for (i = 0; i < num_rqst; i++) {940877 rc = wait_for_response(ses->server, midQ[i]);941941- if (rc != 0) {878878+ if (rc != 0)879879+ break;880880+ }881881+ if (rc != 0) {882882+ for (; i < num_rqst; i++) {942883 cifs_dbg(VFS, "Cancelling wait for mid %llu cmd: %d\n",943884 midQ[i]->mid, le16_to_cpu(midQ[i]->command));944885 send_cancel(ses->server, &rqst[i], midQ[i]);945886 spin_lock(&GlobalMid_Lock);946887 if (midQ[i]->mid_state == MID_REQUEST_SUBMITTED) {947888 midQ[i]->mid_flags |= MID_WAIT_CANCELLED;948948- midQ[i]->callback = DeleteMidQEntry;949949- spin_unlock(&GlobalMid_Lock);950950- add_credits(ses->server, 1, optype);951951- return rc;889889+ midQ[i]->callback = cifs_cancelled_callback;890890+ cancelled_mid[i] = true;891891+ credits[i] = 0;952892 }953893 spin_unlock(&GlobalMid_Lock);954894 }955895 }956956-957957- for (i = 0; i < 
num_rqst; i++)958958- if (midQ[i]->resp_buf)959959- credits += ses->server->ops->get_credits(midQ[i]);960960- if (!credits)961961- credits = 1;962896963897 for (i = 0; i < num_rqst; i++) {964898 if (rc < 0)···963903964904 rc = cifs_sync_mid_result(midQ[i], ses->server);965905 if (rc != 0) {966966- add_credits(ses->server, credits, optype);967967- return rc;906906+ /* mark this mid as cancelled to not free it below */907907+ cancelled_mid[i] = true;908908+ goto out;968909 }969910970911 if (!midQ[i]->resp_buf ||···1012951 * This is prevented above by using a noop callback that will not1013952 * wake this thread except for the very last PDU.1014953 */10151015- for (i = 0; i < num_rqst; i++)10161016- cifs_delete_mid(midQ[i]);10171017- add_credits(ses->server, credits, optype);954954+ for (i = 0; i < num_rqst; i++) {955955+ if (!cancelled_mid[i])956956+ cifs_delete_mid(midQ[i]);957957+ }10189581019959 return rc;1020960}
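The rollback path in the hunk above is the essence of the credit change: the chain takes one credit per request, and if acquisition fails midway the credits already taken are returned one by one, matching how they were obtained. A minimal userspace sketch of that acquire-with-rollback pattern (the `struct server` and helper names are simplified stand-ins, not the CIFS implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the TCP_Server_Info credit counter. */
struct server {
	int credits;
};

static bool wait_for_free_request(struct server *s)
{
	if (s->credits <= 0)
		return false;	/* the real code would block here */
	s->credits--;
	return true;
}

static void add_credits(struct server *s, int n)
{
	s->credits += n;
}

/*
 * Obtain one credit per request in a compound chain of num_rqst
 * requests. On failure, return the credits already taken one at a
 * time, mirroring how they were obtained (as the patch comment says,
 * this keeps the in-flight request accounting consistent).
 */
static int obtain_compound_credits(struct server *s, int num_rqst)
{
	int i, j;

	for (i = 0; i < num_rqst; i++) {
		if (!wait_for_free_request(s)) {
			for (j = 0; j < i; j++)
				add_credits(s, 1);
			return -1;
		}
	}
	return 0;
}
```

With 3 credits available, a 5-request chain fails but leaves the counter untouched at 3, while a 2-request chain succeeds and consumes exactly 2.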
+33-28
fs/hugetlbfs/inode.c
···383383 * truncation is indicated by end of range being LLONG_MAX384384 * In this case, we first scan the range and release found pages.385385 * After releasing pages, hugetlb_unreserve_pages cleans up region/reserv386386- * maps and global counts.386386+ * maps and global counts. Page faults can not race with truncation387387+ * in this routine. hugetlb_no_page() prevents page faults in the388388+ * truncated range. It checks i_size before allocation, and again after389389+ * with the page table lock for the page held. The same lock must be390390+ * acquired to unmap a page.387391 * hole punch is indicated if end is not LLONG_MAX388392 * In the hole punch case we scan the range and release found pages.389393 * Only when releasing a page is the associated region/reserv map390394 * deleted. The region/reserv map for ranges without associated391391- * pages are not modified.392392- *393393- * Callers of this routine must hold the i_mmap_rwsem in write mode to prevent394394- * races with page faults.395395- *395395+ * pages are not modified. Page faults can race with hole punch.396396+ * This is indicated if we find a mapped page.396397 * Note: If the passed end of range value is beyond the end of file, but397398 * not LLONG_MAX this routine still performs a hole punch operation.398399 */···423422424423 for (i = 0; i < pagevec_count(&pvec); ++i) {425424 struct page *page = pvec.pages[i];425425+ u32 hash;426426427427 index = page->index;428428+ hash = hugetlb_fault_mutex_hash(h, current->mm,429429+ &pseudo_vma,430430+ mapping, index, 0);431431+ mutex_lock(&hugetlb_fault_mutex_table[hash]);432432+428433 /*429429- * A mapped page is impossible as callers should unmap430430- * all references before calling. And, i_mmap_rwsem431431- * prevents the creation of additional mappings.434434+ * If page is mapped, it was faulted in after being435435+ * unmapped in caller. Unmap (again) now after taking436436+ * the fault mutex. 
The mutex will prevent faults437437+ * until we finish removing the page.438438+ *439439+ * This race can only happen in the hole punch case.440440+ * Getting here in a truncate operation is a bug.432441 */433433- VM_BUG_ON(page_mapped(page));442442+ if (unlikely(page_mapped(page))) {443443+ BUG_ON(truncate_op);444444+445445+ i_mmap_lock_write(mapping);446446+ hugetlb_vmdelete_list(&mapping->i_mmap,447447+ index * pages_per_huge_page(h),448448+ (index + 1) * pages_per_huge_page(h));449449+ i_mmap_unlock_write(mapping);450450+ }434451435452 lock_page(page);436453 /*···470451 }471452472453 unlock_page(page);454454+ mutex_unlock(&hugetlb_fault_mutex_table[hash]);473455 }474456 huge_pagevec_release(&pvec);475457 cond_resched();···482462483463static void hugetlbfs_evict_inode(struct inode *inode)484464{485485- struct address_space *mapping = inode->i_mapping;486465 struct resv_map *resv_map;487466488488- /*489489- * The vfs layer guarantees that there are no other users of this490490- * inode. Therefore, it would be safe to call remove_inode_hugepages491491- * without holding i_mmap_rwsem. We acquire and hold here to be492492- * consistent with other callers. 
Since there will be no contention493493- * on the semaphore, overhead is negligible.494494- */495495- i_mmap_lock_write(mapping);496467 remove_inode_hugepages(inode, 0, LLONG_MAX);497497- i_mmap_unlock_write(mapping);498498-499468 resv_map = (struct resv_map *)inode->i_mapping->private_data;500469 /* root inode doesn't have the resv_map, so we should check it */501470 if (resv_map)···505496 i_mmap_lock_write(mapping);506497 if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root))507498 hugetlb_vmdelete_list(&mapping->i_mmap, pgoff, 0);508508- remove_inode_hugepages(inode, offset, LLONG_MAX);509499 i_mmap_unlock_write(mapping);500500+ remove_inode_hugepages(inode, offset, LLONG_MAX);510501 return 0;511502}512503···540531 hugetlb_vmdelete_list(&mapping->i_mmap,541532 hole_start >> PAGE_SHIFT,542533 hole_end >> PAGE_SHIFT);543543- remove_inode_hugepages(inode, hole_start, hole_end);544534 i_mmap_unlock_write(mapping);535535+ remove_inode_hugepages(inode, hole_start, hole_end);545536 inode_unlock(inode);546537 }547538···624615 /* addr is the offset within the file (zero based) */625616 addr = index * hpage_size;626617627627- /*628628- * fault mutex taken here, protects against fault path629629- * and hole punch. inode_lock previously taken protects630630- * against truncation.631631- */618618+ /* mutex taken here, fault path and hole punch */632619 hash = hugetlb_fault_mutex_hash(h, mm, &pseudo_vma, mapping,633620 index, addr);634621 mutex_lock(&hugetlb_fault_mutex_table[hash]);
+1-7
fs/nfs/nfs4file.c
···133133 struct file *file_out, loff_t pos_out,134134 size_t count, unsigned int flags)135135{136136- ssize_t ret;137137-138136 if (file_inode(file_in) == file_inode(file_out))139137 return -EINVAL;140140-retry:141141- ret = nfs42_proc_copy(file_in, pos_in, file_out, pos_out, count);142142- if (ret == -EAGAIN)143143- goto retry;144144- return ret;138138+ return nfs42_proc_copy(file_in, pos_in, file_out, pos_out, count);145139}146140147141static loff_t nfs4_file_llseek(struct file *filep, loff_t offset, int whence)
+4-8
fs/pstore/ram.c
···128128 struct pstore_record *record)129129{130130 struct persistent_ram_zone *prz;131131- bool update = (record->type == PSTORE_TYPE_DMESG);132131133132 /* Give up if we never existed or have hit the end. */134133 if (!przs)···138139 return NULL;139140140141 /* Update old/shadowed buffer. */141141- if (update)142142+ if (prz->type == PSTORE_TYPE_DMESG)142143 persistent_ram_save_old(prz);143144144145 if (!persistent_ram_old_size(prz))···710711{711712 struct device *dev = &pdev->dev;712713 struct ramoops_platform_data *pdata = dev->platform_data;714714+ struct ramoops_platform_data pdata_local;713715 struct ramoops_context *cxt = &oops_cxt;714716 size_t dump_mem_sz;715717 phys_addr_t paddr;716718 int err = -EINVAL;717719718720 if (dev_of_node(dev) && !pdata) {719719- pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);720720- if (!pdata) {721721- pr_err("cannot allocate platform data buffer\n");722722- err = -ENOMEM;723723- goto fail_out;724724- }721721+ pdata = &pdata_local;722722+ memset(pdata, 0, sizeof(*pdata));725723726724 err = ramoops_parse_dt(pdev, pdata);727725 if (err < 0)
+2-1
fs/sysfs/dir.c
···4343 kuid_t uid;4444 kgid_t gid;45454646- BUG_ON(!kobj);4646+ if (WARN_ON(!kobj))4747+ return -EINVAL;47484849 if (kobj->parent)4950 parent = kobj->parent->sd;
···112112 kgid_t gid;113113 int error;114114115115- BUG_ON(!kobj || (!update && !kobj->sd));115115+ if (WARN_ON(!kobj || (!update && !kobj->sd)))116116+ return -EINVAL;116117117118 /* Updates may happen before the object has been instantiated */118119 if (unlikely(update && !kobj->sd))
+2-1
fs/sysfs/symlink.c
···2323{2424 struct kernfs_node *kn, *target = NULL;25252626- BUG_ON(!name || !parent);2626+ if (WARN_ON(!name || !parent))2727+ return -EINVAL;27282829 /*2930 * We don't own @target_kobj and it may be removed at any time.
+7
include/drm/drm_dp_helper.h
···13651365 * to 16 bits. So will give a constant value (0x8000) for compatability.13661366 */13671367 DP_DPCD_QUIRK_CONSTANT_N,13681368+ /**13691369+ * @DP_DPCD_QUIRK_NO_PSR:13701370+ *13711371+ * The device does not support PSR even if it reports that it does, or13721372+ * the driver still needs to implement proper handling for such a device.13731373+ */13741374+ DP_DPCD_QUIRK_NO_PSR,13681375};1369137613701377/**
···33#define _LINUX_BPFILTER_H4455#include <uapi/linux/bpfilter.h>66+#include <linux/umh.h>6778struct sock;89int bpfilter_ip_set_sockopt(struct sock *sk, int optname, char __user *optval,910 unsigned int optlen);1011int bpfilter_ip_get_sockopt(struct sock *sk, int optname, char __user *optval,1112 int __user *optlen);1212-extern int (*bpfilter_process_sockopt)(struct sock *sk, int optname,1313- char __user *optval,1414- unsigned int optlen, bool is_set);1313+struct bpfilter_umh_ops {1414+ struct umh_info info;1515+ /* since ip_getsockopt() can run in parallel, serialize access to umh */1616+ struct mutex lock;1717+ int (*sockopt)(struct sock *sk, int optname,1818+ char __user *optval,1919+ unsigned int optlen, bool is_set);2020+ int (*start)(void);2121+ bool stop;2222+};2323+extern struct bpfilter_umh_ops bpfilter_ops;1524#endif
+4-2
include/linux/ceph/libceph.h
···3535#define CEPH_OPT_NOMSGAUTH (1<<4) /* don't require msg signing feat */3636#define CEPH_OPT_TCP_NODELAY (1<<5) /* TCP_NODELAY on TCP sockets */3737#define CEPH_OPT_NOMSGSIGN (1<<6) /* don't sign msgs */3838+#define CEPH_OPT_ABORT_ON_FULL (1<<7) /* abort w/ ENOSPC when full */38393940#define CEPH_OPT_DEFAULT (CEPH_OPT_TCP_NODELAY)4041···5453 unsigned long osd_request_timeout; /* jiffies */55545655 /*5757- * any type that can't be simply compared or doesn't need need5656+ * any type that can't be simply compared or doesn't need5857 * to be compared should go beyond this point,5958 * ceph_compare_options() should be updated accordingly6059 */···282281 const char *dev_name, const char *dev_name_end,283282 int (*parse_extra_token)(char *c, void *private),284283 void *private);285285-int ceph_print_client_options(struct seq_file *m, struct ceph_client *client);284284+int ceph_print_client_options(struct seq_file *m, struct ceph_client *client,285285+ bool show_all);286286extern void ceph_destroy_options(struct ceph_options *opt);287287extern int ceph_compare_options(struct ceph_options *new_opt,288288 struct ceph_client *client);
-1
include/linux/ceph/osd_client.h
···354354 struct rb_root linger_map_checks;355355 atomic_t num_requests;356356 atomic_t num_homeless;357357- bool abort_on_full; /* abort w/ ENOSPC when full */358357 int abort_err;359358 struct delayed_work timeout_work;360359 struct delayed_work osds_timeout_work;
+2-3
include/linux/compiler-clang.h
···33#error "Please don't include <linux/compiler-clang.h> directly, include <linux/compiler.h> instead."44#endif5566-/* Some compiler specific definitions are overwritten here77- * for Clang compiler88- */66+/* Compiler specific definitions for Clang compiler */77+98#define uninitialized_var(x) x = *(&(x))1091110/* same as gcc, this was present in clang-2.6 so we can assume it works
+1-5
include/linux/compiler-gcc.h
···5858 (typeof(ptr)) (__ptr + (off)); \5959})60606161-/* Make the optimizer believe the variable can be manipulated arbitrarily. */6262-#define OPTIMIZER_HIDE_VAR(var) \6363- __asm__ ("" : "=r" (var) : "0" (var))6464-6561/*6662 * A trick to suppress uninitialized variable warning without generating any6763 * code6864 */6965#define uninitialized_var(x) x = x70667171-#ifdef RETPOLINE6767+#ifdef CONFIG_RETPOLINE7268#define __noretpoline __attribute__((__indirect_branch__("keep")))7369#endif7470
+1-3
include/linux/compiler-intel.h
···5566#ifdef __ECC7788-/* Some compiler specific definitions are overwritten here99- * for Intel ECC compiler1010- */88+/* Compiler specific definitions for Intel ECC compiler */1191210#include <asm/intrinsics.h>1311
+3-1
include/linux/compiler.h
···161161#endif162162163163#ifndef OPTIMIZER_HIDE_VAR164164-#define OPTIMIZER_HIDE_VAR(var) barrier()164164+/* Make the optimizer believe the variable can be manipulated arbitrarily. */165165+#define OPTIMIZER_HIDE_VAR(var) \166166+ __asm__ ("" : "=r" (var) : "0" (var))165167#endif166168167169/* Not-quite-unique ID. */
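Hoisting the asm-based definition into compiler.h makes the value-preserving barrier the default for every compiler that understands GCC extended asm. The empty asm with a matching `"=r"`/`"0"` constraint pair forces the variable through a register without changing its value, so the optimizer can no longer constant-fold through it. A userspace illustration:

```c
#define OPTIMIZER_HIDE_VAR(var) \
	__asm__ ("" : "=r" (var) : "0" (var))

static int hidden_square(int x)
{
	/*
	 * After the barrier the compiler cannot prove what x holds, so
	 * it must emit a real multiplication below even for constant
	 * callers; the runtime value itself is unchanged.
	 */
	OPTIMIZER_HIDE_VAR(x);
	return x * x;
}
```

The previous fallback (`barrier()`) hid nothing about the variable itself, which is why the gcc-specific version was promoted to the generic header.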
-9
include/linux/dma-mapping.h
···717717}718718#endif719719720720-/*721721- * Please always use dma_alloc_coherent instead as it already zeroes the memory!722722- */723723-static inline void *dma_zalloc_coherent(struct device *dev, size_t size,724724- dma_addr_t *dma_handle, gfp_t flag)725725-{726726- return dma_alloc_coherent(dev, size, dma_handle, flag);727727-}728728-729720static inline int dma_get_cache_alignment(void)730721{731722#ifdef ARCH_DMA_MINALIGN
+1
include/linux/fb.h
···653653654654extern struct fb_info *registered_fb[FB_MAX];655655extern int num_registered_fb;656656+extern bool fb_center_logo;656657extern struct class *fb_class;657658658659#define for_each_registered_fb(i) \
···7979/* Some controllers have a CBSY bit */8080#define TMIO_MMC_HAVE_CBSY BIT(11)81818282-/* Some controllers that support HS400 use use 4 taps while others use 8. */8282+/* Some controllers that support HS400 use 4 taps while others use 8. */8383#define TMIO_MMC_HAVE_4TAP_HS400 BIT(13)84848585int tmio_core_mmc_enable(void __iomem *cnf, int shift, unsigned long base);
+6
include/linux/mmzone.h
···520520 PGDAT_RECLAIM_LOCKED, /* prevents concurrent reclaim */521521};522522523523+enum zone_flags {524524+ ZONE_BOOSTED_WATERMARK, /* zone recently boosted watermarks.525525+ * Cleared when kswapd is woken.526526+ */527527+};528528+523529static inline unsigned long zone_managed_pages(struct zone *zone)524530{525531 return (unsigned long)atomic_long_read(&zone->managed_pages);
···4848extern __ETHTOOL_DECLARE_LINK_MODE_MASK(phy_gbit_fibre_features) __ro_after_init;4949extern __ETHTOOL_DECLARE_LINK_MODE_MASK(phy_gbit_all_ports_features) __ro_after_init;5050extern __ETHTOOL_DECLARE_LINK_MODE_MASK(phy_10gbit_features) __ro_after_init;5151+extern __ETHTOOL_DECLARE_LINK_MODE_MASK(phy_10gbit_fec_features) __ro_after_init;5152extern __ETHTOOL_DECLARE_LINK_MODE_MASK(phy_10gbit_full_features) __ro_after_init;52535354#define PHY_BASIC_FEATURES ((unsigned long *)&phy_basic_features)···5756#define PHY_GBIT_FIBRE_FEATURES ((unsigned long *)&phy_gbit_fibre_features)5857#define PHY_GBIT_ALL_PORTS_FEATURES ((unsigned long *)&phy_gbit_all_ports_features)5958#define PHY_10GBIT_FEATURES ((unsigned long *)&phy_10gbit_features)5959+#define PHY_10GBIT_FEC_FEATURES ((unsigned long *)&phy_10gbit_fec_features)6060#define PHY_10GBIT_FULL_FEATURES ((unsigned long *)&phy_10gbit_full_features)61616262extern const int phy_10_100_features_array[4];···469467 * only works for PHYs with IDs which match this field470468 * name: The friendly name of this PHY type471469 * phy_id_mask: Defines the important bits of the phy_id472472- * features: A list of features (speed, duplex, etc) supported473473- * by this PHY470470+ * features: A mandatory list of features (speed, duplex, etc)471471+ * supported by this PHY474472 * flags: A bitfield defining certain other features this PHY475473 * supports (like interrupts)476474 *
···663663static inline void qed_chain_set_prod(struct qed_chain *p_chain,664664 u32 prod_idx, void *p_prod_elem)665665{666666+ if (p_chain->mode == QED_CHAIN_MODE_PBL) {667667+ u32 cur_prod, page_mask, page_cnt, page_diff;668668+669669+ cur_prod = is_chain_u16(p_chain) ? p_chain->u.chain16.prod_idx :670670+ p_chain->u.chain32.prod_idx;671671+672672+ /* Assume that number of elements in a page is power of 2 */673673+ page_mask = ~p_chain->elem_per_page_mask;674674+675675+ /* Use "cur_prod - 1" and "prod_idx - 1" since producer index676676+ * reaches the first element of next page before the page index677677+ * is incremented. See qed_chain_produce().678678+ * Index wrap around is not a problem because the difference679679+ * between current and given producer indices is always680680+ * positive and lower than the chain's capacity.681681+ */682682+ page_diff = (((cur_prod - 1) & page_mask) -683683+ ((prod_idx - 1) & page_mask)) /684684+ p_chain->elem_per_page;685685+686686+ page_cnt = qed_chain_get_page_cnt(p_chain);687687+ if (is_chain_u16(p_chain))688688+ p_chain->pbl.c.u16.prod_page_idx =689689+ (p_chain->pbl.c.u16.prod_page_idx -690690+ page_diff + page_cnt) % page_cnt;691691+ else692692+ p_chain->pbl.c.u32.prod_page_idx =693693+ (p_chain->pbl.c.u32.prod_page_idx -694694+ page_diff + page_cnt) % page_cnt;695695+ }696696+666697 if (is_chain_u16(p_chain))667698 p_chain->u.chain16.prod_idx = (u16) prod_idx;668699 else
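The producer-rewind arithmetic added above is subtle: the producer index steps onto the first element of the next page before the page index is bumped, so both indices are backed off by one element before masking. A simplified userspace model of just that calculation (assuming, as the kernel comment does, a power-of-two elements-per-page and a forward distance below the chain capacity):

```c
#include <stdint.h>

/*
 * Model of the PBL producer-page rewind: given the current producer
 * index, the producer index being set, and the current producer page
 * index, compute the page index that matches the new producer.
 */
static uint16_t rewind_prod_page_idx(uint16_t cur_prod, uint16_t new_prod,
				     uint16_t prod_page_idx,
				     uint32_t elem_per_page,
				     uint32_t page_cnt)
{
	uint32_t page_mask = ~(elem_per_page - 1);
	/*
	 * "cur_prod - 1" / "new_prod - 1": back each index off by one
	 * element, because the producer reaches the first element of
	 * the next page before the page index is incremented.
	 */
	uint32_t page_diff = (((cur_prod - 1) & page_mask) -
			      ((new_prod - 1) & page_mask)) / elem_per_page;

	return (uint16_t)((prod_page_idx - page_diff + page_cnt) % page_cnt);
}
```

With 8 elements per page and 4 pages, rewinding the producer from 17 (page 2) to 10 (page 1) yields a page_diff of 1, and rewinding within the same page yields 0, which is what the modular update above preserves.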
+11-4
include/linux/reset.h
···3232struct reset_control *of_reset_control_array_get(struct device_node *np,3333 bool shared, bool optional);34343535+int reset_control_get_count(struct device *dev);3636+3537#else36383739static inline int reset_control_reset(struct reset_control *rstc)···9997 return optional ? NULL : ERR_PTR(-ENOTSUPP);10098}10199100100+static inline int reset_control_get_count(struct device *dev)101101+{102102+ return -ENOENT;103103+}104104+102105#endif /* CONFIG_RESET_CONTROLLER */103106104107static inline int __must_check device_reset(struct device *dev)···145138 *146139 * Returns a struct reset_control or IS_ERR() condition containing errno.147140 * This function is intended for use with reset-controls which are shared148148- * between hardware-blocks.141141+ * between hardware blocks.149142 *150143 * When a reset-control is shared, the behavior of reset_control_assert /151144 * deassert is changed, the reset-core will keep track of a deassert_count···194187}195188196189/**197197- * of_reset_control_get_shared - Lookup and obtain an shared reference190190+ * of_reset_control_get_shared - Lookup and obtain a shared reference198191 * to a reset controller.199192 * @node: device to be reset by the controller200193 * @id: reset line name···236229}237230238231/**239239- * of_reset_control_get_shared_by_index - Lookup and obtain an shared232232+ * of_reset_control_get_shared_by_index - Lookup and obtain a shared240233 * reference to a reset controller241234 * by index.242235 * @node: device to be reset by the controller···329322330323/**331324 * devm_reset_control_get_shared_by_index - resource managed332332- * reset_control_get_shared325325+ * reset_control_get_shared333326 * @dev: device to be reset by the controller334327 * @index: index of the reset controller335328 *
+10-1
include/linux/sched.h
···995995 /* cg_list protected by css_set_lock and tsk->alloc_lock: */996996 struct list_head cg_list;997997#endif998998-#ifdef CONFIG_RESCTRL998998+#ifdef CONFIG_X86_RESCTRL999999 u32 closid;10001000 u32 rmid;10011001#endif···14061406#define PF_RANDOMIZE 0x00400000 /* Randomize virtual address space */14071407#define PF_SWAPWRITE 0x00800000 /* Allowed to write to swap */14081408#define PF_MEMSTALL 0x01000000 /* Stalled due to lack of memory */14091409+#define PF_UMH 0x02000000 /* I'm an Usermodehelper process */14091410#define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_allowed */14101411#define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */14111412#define PF_MUTEX_TESTER 0x20000000 /* Thread belongs to the rt mutex tester */···19041903}1905190419061905#endif19061906+19071907+void __exit_umh(struct task_struct *tsk);19081908+19091909+static inline void exit_umh(struct task_struct *tsk)19101910+{19111911+ if (unlikely(tsk->flags & PF_UMH))19121912+ __exit_umh(tsk);19131913+}1907191419081915#ifdef CONFIG_DEBUG_RSEQ19091916
+1
include/linux/skbuff.h
···32183218 *32193219 * This is exactly the same as pskb_trim except that it ensures the32203220 * checksum of received packets are still valid after the operation.32213221+ * It can change skb pointers.32213222 */3222322332233224static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len)
···12121313/**1414 * virtio_config_ops - operations for configuring a virtio device1515+ * Note: Do not assume that a transport implements all of the operations1616+ * getting/setting a value as a simple read/write! Generally speaking,1717+ * any of @get/@set, @get_status/@set_status, or @get_features/1818+ * @finalize_features are NOT safe to be called from an atomic1919+ * context.1520 * @get: read the value of a configuration field1621 * vdev: the virtio_device1722 * offset: the offset of the configuration field···2722 * offset: the offset of the configuration field2823 * buf: the buffer to read the field value from.2924 * len: the length of the buffer3030- * @generation: config generation counter2525+ * @generation: config generation counter (optional)3126 * vdev: the virtio_device3227 * Returns the config generation counter3328 * @get_status: read the status byte···5348 * @del_vqs: free virtqueues found by find_vqs().5449 * @get_features: get the array of feature bits for this device.5550 * vdev: the virtio_device5656- * Returns the first 32 feature bits (all we currently need).5151+ * Returns the first 64 feature bits (all we currently need).5752 * @finalize_features: confirm what device features we'll be using.5853 * vdev: the virtio_device5954 * This gives the final feature bits for the device: it can change6055 * the dev->feature bits if it wants.6156 * Returns 0 on success or error status6262- * @bus_name: return the bus name associated with the device5757+ * @bus_name: return the bus name associated with the device (optional)6358 * vdev: the virtio_device6459 * This returns a pointer to the bus name a la pci_name from which6560 * the caller can then copy.6666- * @set_vq_affinity: set the affinity for a virtqueue.6161+ * @set_vq_affinity: set the affinity for a virtqueue (optional).6762 * @get_vq_affinity: get the affinity for a virtqueue (optional).6863 */6964typedef void vq_callback_t(struct virtqueue *);
···400400/* do not define AUDIT_ARCH_PPCLE since it is not supported by audit */401401#define AUDIT_ARCH_PPC64 (EM_PPC64|__AUDIT_ARCH_64BIT)402402#define AUDIT_ARCH_PPC64LE (EM_PPC64|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE)403403+#define AUDIT_ARCH_RISCV32 (EM_RISCV|__AUDIT_ARCH_LE)404404+#define AUDIT_ARCH_RISCV64 (EM_RISCV|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE)403405#define AUDIT_ARCH_S390 (EM_S390)404406#define AUDIT_ARCH_S390X (EM_S390|__AUDIT_ARCH_64BIT)405407#define AUDIT_ARCH_SH (EM_SH)
···11241124 bool "Dead code and data elimination (EXPERIMENTAL)"11251125 depends on HAVE_LD_DEAD_CODE_DATA_ELIMINATION11261126 depends on EXPERT11271127+ depends on !(FUNCTION_TRACER && CC_IS_GCC && GCC_VERSION < 40800)11271128 depends on $(cc-option,-ffunction-sections -fdata-sections)11281129 depends on $(ld-option,--gc-sections)11291130 help
···718718 case BPF_FUNC_trace_printk:719719 if (capable(CAP_SYS_ADMIN))720720 return bpf_get_trace_printk_proto();721721+ /* fall through */721722 default:722723 return NULL;723724 }
+15-2
kernel/bpf/map_in_map.c
···1212struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd)1313{1414 struct bpf_map *inner_map, *inner_map_meta;1515+ u32 inner_map_meta_size;1516 struct fd f;16171718 f = fdget(inner_map_ufd);···3736 return ERR_PTR(-EINVAL);3837 }39384040- inner_map_meta = kzalloc(sizeof(*inner_map_meta), GFP_USER);3939+ inner_map_meta_size = sizeof(*inner_map_meta);4040+ /* In some cases verifier needs to access beyond just base map. */4141+ if (inner_map->ops == &array_map_ops)4242+ inner_map_meta_size = sizeof(struct bpf_array);4343+4444+ inner_map_meta = kzalloc(inner_map_meta_size, GFP_USER);4145 if (!inner_map_meta) {4246 fdput(f);4347 return ERR_PTR(-ENOMEM);···5246 inner_map_meta->key_size = inner_map->key_size;5347 inner_map_meta->value_size = inner_map->value_size;5448 inner_map_meta->map_flags = inner_map->map_flags;5555- inner_map_meta->ops = inner_map->ops;5649 inner_map_meta->max_entries = inner_map->max_entries;5050+5151+ /* Misc members not needed in bpf_map_meta_equal() check. */5252+ inner_map_meta->ops = inner_map->ops;5353+ if (inner_map->ops == &array_map_ops) {5454+ inner_map_meta->unpriv_array = inner_map->unpriv_array;5555+ container_of(inner_map_meta, struct bpf_array, map)->index_mask =5656+ container_of(inner_map, struct bpf_array, map)->index_mask;5757+ }57585859 fdput(f);5960 return inner_map_meta;
···31033103 }31043104}3105310531063106+static bool can_skip_alu_sanitation(const struct bpf_verifier_env *env,31073107+ const struct bpf_insn *insn)31083108+{31093109+ return env->allow_ptr_leaks || BPF_SRC(insn->code) == BPF_K;31103110+}31113111+31123112+static int update_alu_sanitation_state(struct bpf_insn_aux_data *aux,31133113+ u32 alu_state, u32 alu_limit)31143114+{31153115+ /* If we arrived here from different branches with different31163116+ * state or limits to sanitize, then this won't work.31173117+ */31183118+ if (aux->alu_state &&31193119+ (aux->alu_state != alu_state ||31203120+ aux->alu_limit != alu_limit))31213121+ return -EACCES;31223122+31233123+ /* Corresponding fixup done in fixup_bpf_calls(). */31243124+ aux->alu_state = alu_state;31253125+ aux->alu_limit = alu_limit;31263126+ return 0;31273127+}31283128+31293129+static int sanitize_val_alu(struct bpf_verifier_env *env,31303130+ struct bpf_insn *insn)31313131+{31323132+ struct bpf_insn_aux_data *aux = cur_aux(env);31333133+31343134+ if (can_skip_alu_sanitation(env, insn))31353135+ return 0;31363136+31373137+ return update_alu_sanitation_state(aux, BPF_ALU_NON_POINTER, 0);31383138+}31393139+31063140static int sanitize_ptr_alu(struct bpf_verifier_env *env,31073141 struct bpf_insn *insn,31083142 const struct bpf_reg_state *ptr_reg,···31513117 struct bpf_reg_state tmp;31523118 bool ret;3153311931543154- if (env->allow_ptr_leaks || BPF_SRC(insn->code) == BPF_K)31203120+ if (can_skip_alu_sanitation(env, insn))31553121 return 0;3156312231573123 /* We already marked aux for masking from non-speculative···3167313331683134 if (retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg))31693135 return 0;31703170-31713171- /* If we arrived here from different branches with different31723172- * limits to sanitize, then this won't work.31733173- */31743174- if (aux->alu_state &&31753175- (aux->alu_state != alu_state ||31763176- aux->alu_limit != alu_limit))31363136+ if (update_alu_sanitation_state(aux, 
alu_state, alu_limit))31773137 return -EACCES;31783178-31793179- /* Corresponding fixup done in fixup_bpf_calls(). */31803180- aux->alu_state = alu_state;31813181- aux->alu_limit = alu_limit;31823182-31833138do_sim:31843139 /* Simulate and find potential out-of-bounds access under31853140 * speculative execution from truncation as a result of···34413418 s64 smin_val, smax_val;34423419 u64 umin_val, umax_val;34433420 u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32;34213421+ u32 dst = insn->dst_reg;34223422+ int ret;3444342334453424 if (insn_bitness == 32) {34463425 /* Relevant for 32-bit RSH: Information can propagate towards···3477345234783453 switch (opcode) {34793454 case BPF_ADD:34553455+ ret = sanitize_val_alu(env, insn);34563456+ if (ret < 0) {34573457+ verbose(env, "R%d tried to add from different pointers or scalars\n", dst);34583458+ return ret;34593459+ }34803460 if (signed_add_overflows(dst_reg->smin_value, smin_val) ||34813461 signed_add_overflows(dst_reg->smax_value, smax_val)) {34823462 dst_reg->smin_value = S64_MIN;···35013471 dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off);35023472 break;35033473 case BPF_SUB:34743474+ ret = sanitize_val_alu(env, insn);34753475+ if (ret < 0) {34763476+ verbose(env, "R%d tried to sub from different pointers or scalars\n", dst);34773477+ return ret;34783478+ }35043479 if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||35053480 signed_sub_overflows(dst_reg->smax_value, smin_val)) {35063481 /* Overflow possible, we know nothing */
···866866 exit_task_namespaces(tsk);867867 exit_task_work(tsk);868868 exit_thread(tsk);869869+ exit_umh(tsk);869870870871 /*871872 * Flush inherited counters to the parent - before the parent
+12-2
kernel/fork.c
···217217 memset(s->addr, 0, THREAD_SIZE);218218219219 tsk->stack_vm_area = s;220220+ tsk->stack = s->addr;220221 return s->addr;221222 }222223···1834183318351834 posix_cpu_timers_init(p);1836183518371837- p->start_time = ktime_get_ns();18381838- p->real_start_time = ktime_get_boot_ns();18391836 p->io_context = NULL;18401837 audit_set_context(p, NULL);18411838 cgroup_fork(p);···19981999 retval = cgroup_can_fork(p);19992000 if (retval)20002001 goto bad_fork_free_pid;20022002+20032003+ /*20042004+ * From this point on we must avoid any synchronous user-space20052005+ * communication until we take the tasklist-lock. In particular, we do20062006+ * not want user-space to be able to predict the process start-time by20072007+ * stalling fork(2) after we recorded the start_time but before it is20082008+ * visible to the system.20092009+ */20102010+20112011+ p->start_time = ktime_get_ns();20122012+ p->real_start_time = ktime_get_boot_ns();2001201320022014 /*20032015 * Make it visible to the rest of the system, but dont wake it up yet.
···12071207/*12081208 * Work around broken programs that cannot handle "Linux 3.0".12091209 * Instead we map 3.x to 2.6.40+x, so e.g. 3.0 would be 2.6.4012101210- * And we map 4.x to 2.6.60+x, so 4.0 would be 2.6.60.12101210+ * And we map 4.x and later versions to 2.6.60+x, so 4.0/5.0/6.0/... would be12111211+ * 2.6.60.12111212 */12121213static int override_release(char __user *release, size_t len)12131214{
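The mapping described by the updated comment is easy to misread: under UNAME26, 3.x becomes 2.6.(40+x), while 4.x and every later major collapses onto the 2.6.60+x line, so 4.0, 5.0 and 6.0 all report 2.6.60. A hedged model of just the numeric mapping (the real override_release() rewrites the release string based on the task's personality, not a (major, minor) pair):

```c
/*
 * Model: the 2.6.z sublevel reported to UNAME26 programs for a kernel
 * version major.minor. 3.x -> 40 + x; 4.x and later -> 60 + x.
 */
static int uname26_sublevel(int major, int minor)
{
	if (major == 3)
		return 40 + minor;
	if (major >= 4)
		return 60 + minor;
	return -1;	/* pre-3.0 kernels report their real version */
}
```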
+9-3
kernel/trace/trace_kprobe.c
···607607 char buf[MAX_EVENT_NAME_LEN];608608 unsigned int flags = TPARG_FL_KERNEL;609609610610- /* argc must be >= 1 */611611- if (argv[0][0] == 'r') {610610+ switch (argv[0][0]) {611611+ case 'r':612612 is_return = true;613613 flags |= TPARG_FL_RETURN;614614- } else if (argv[0][0] != 'p' || argc < 2)614614+ break;615615+ case 'p':616616+ break;617617+ default:618618+ return -ECANCELED;619619+ }620620+ if (argc < 2)615621 return -ECANCELED;616622617623 event = strchr(&argv[0][1], ':');
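The rewrite turns the combined condition into an explicit switch: 'r' marks a return probe, 'p' a plain probe, any other leading character is rejected, and the argument-count check now applies to both forms. A minimal sketch of the same dispatch (a hypothetical standalone parser, not the tracefs code itself):

```c
#include <stdbool.h>

#define ECANCELED_ERR (-125)	/* stand-in for -ECANCELED */

/* Returns 0 on success and sets *is_return; mirrors the switch above. */
static int parse_probe_prefix(const char *argv0, int argc, bool *is_return)
{
	*is_return = false;
	switch (argv0[0]) {
	case 'r':			/* kretprobe definition */
		*is_return = true;
		break;
	case 'p':			/* kprobe definition */
		break;
	default:
		return ECANCELED_ERR;
	}
	if (argc < 2)			/* need at least a probe location */
		return ECANCELED_ERR;
	return 0;
}
```

Compared with the old `argv[0][0] != 'p' || argc < 2` form, the switch makes the rejection of unknown prefixes unconditional instead of entangled with the argument count.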
···5252 if (x <= ULONG_MAX)5353 return int_sqrt((unsigned long) x);54545555- m = 1ULL << (fls64(x) & ~1ULL);5555+ m = 1ULL << ((fls64(x) - 1) & ~1ULL);5656 while (m != 0) {5757 b = y + m;5858 y >>= 1;
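The off-by-one in the one-line fix above matters because fls64() is 1-based: `fls64(x) & ~1` can land one position above the most significant bit, and for a value with bit 63 set it becomes a shift by 64, so the digit-by-digit loop starts from an estimate larger than x. Subtracting 1 first yields the highest even bit position at or below the MSB. A userspace check of the corrected routine (fls64 modeled here with `__builtin_clzll`):

```c
#include <stdint.h>

static int fls64(uint64_t x)
{
	return x ? 64 - __builtin_clzll(x) : 0;
}

/* Digit-by-digit integer square root with the corrected initial mask. */
static uint32_t int_sqrt64(uint64_t x)
{
	uint64_t b, m, y = 0;

	if (x == 0)
		return 0;
	/* Highest power of 4 not exceeding x: even bit position <= MSB. */
	m = 1ULL << ((fls64(x) - 1) & ~1ULL);
	while (m != 0) {
		b = y + m;
		y >>= 1;
		if (x >= b) {
			x -= b;
			y += m;
		}
		m >>= 2;
	}
	return (uint32_t)y;
}
```

With the old mask, any input whose MSB sat on an even bit position (e.g. 2^62 <= x) started the loop with m > x and produced a wrong result; the fixed mask handles the full 64-bit range.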
+3-10
lib/sbitmap.c
···2626static inline bool sbitmap_deferred_clear(struct sbitmap *sb, int index)2727{2828 unsigned long mask, val;2929- unsigned long __maybe_unused flags;3029 bool ret = false;3030+ unsigned long flags;31313232- /* Silence bogus lockdep warning */3333-#if defined(CONFIG_LOCKDEP)3434- local_irq_save(flags);3535-#endif3636- spin_lock(&sb->map[index].swap_lock);3232+ spin_lock_irqsave(&sb->map[index].swap_lock, flags);37333834 if (!sb->map[index].cleared)3935 goto out_unlock;···50545155 ret = true;5256out_unlock:5353- spin_unlock(&sb->map[index].swap_lock);5454-#if defined(CONFIG_LOCKDEP)5555- local_irq_restore(flags);5656-#endif5757+ spin_unlock_irqrestore(&sb->map[index].swap_lock, flags);5758 return ret;5859}5960
+24-57
mm/hugetlb.c
···32383238 struct page *ptepage;32393239 unsigned long addr;32403240 int cow;32413241- struct address_space *mapping = vma->vm_file->f_mapping;32423241 struct hstate *h = hstate_vma(vma);32433242 unsigned long sz = huge_page_size(h);32443243 struct mmu_notifier_range range;···32493250 mmu_notifier_range_init(&range, src, vma->vm_start,32503251 vma->vm_end);32513252 mmu_notifier_invalidate_range_start(&range);32523252- } else {32533253- /*32543254- * For shared mappings i_mmap_rwsem must be held to call32553255- * huge_pte_alloc, otherwise the returned ptep could go32563256- * away if part of a shared pmd and another thread calls32573257- * huge_pmd_unshare.32583258- */32593259- i_mmap_lock_read(mapping);32603253 }3261325432623255 for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {32633256 spinlock_t *src_ptl, *dst_ptl;32643264-32653257 src_pte = huge_pte_offset(src, addr, sz);32663258 if (!src_pte)32673259 continue;32683268-32693260 dst_pte = huge_pte_alloc(dst, addr, sz);32703261 if (!dst_pte) {32713262 ret = -ENOMEM;···3326333733273338 if (cow)33283339 mmu_notifier_invalidate_range_end(&range);33293329- else33303330- i_mmap_unlock_read(mapping);3331334033323341 return ret;33333342}···37423755 }3743375637443757 /*37453745- * We can not race with truncation due to holding i_mmap_rwsem.37463746- * Check once here for faults beyond end of file.37583758+ * Use page lock to guard against racing truncation37593759+ * before we get page_table_lock.37473760 */37483748- size = i_size_read(mapping->host) >> huge_page_shift(h);37493749- if (idx >= size)37503750- goto out;37513751-37523761retry:37533762 page = find_lock_page(mapping, idx);37543763 if (!page) {37643764+ size = i_size_read(mapping->host) >> huge_page_shift(h);37653765+ if (idx >= size)37663766+ goto out;37673767+37553768 /*37563769 * Check for page in userfault range37573770 */···37713784 };3772378537733786 /*37743774- * hugetlb_fault_mutex and i_mmap_rwsem must be37753775- * dropped before handling 
userfault. Reacquire37763776- * after handling fault to make calling code simpler.37873787+ * hugetlb_fault_mutex must be dropped before37883788+ * handling userfault. Reacquire after handling37893789+ * fault to make calling code simpler.37773790 */37783791 hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping,37793792 idx, haddr);37803793 mutex_unlock(&hugetlb_fault_mutex_table[hash]);37813781- i_mmap_unlock_read(mapping);37823782-37833794 ret = handle_userfault(&vmf, VM_UFFD_MISSING);37843784-37853785- i_mmap_lock_read(mapping);37863795 mutex_lock(&hugetlb_fault_mutex_table[hash]);37873796 goto out;37883797 }···38373854 }3838385538393856 ptl = huge_pte_lock(h, mm, ptep);38573857+ size = i_size_read(mapping->host) >> huge_page_shift(h);38583858+ if (idx >= size)38593859+ goto backout;3840386038413861 ret = 0;38423862 if (!huge_pte_none(huge_ptep_get(ptep)))···3926394039273941 ptep = huge_pte_offset(mm, haddr, huge_page_size(h));39283942 if (ptep) {39293929- /*39303930- * Since we hold no locks, ptep could be stale. That is39313931- * OK as we are only making decisions based on content and39323932- * not actually modifying content here.39333933- */39343943 entry = huge_ptep_get(ptep);39353944 if (unlikely(is_hugetlb_entry_migration(entry))) {39363945 migration_entry_wait_huge(vma, mm, ptep);···39333952 } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))39343953 return VM_FAULT_HWPOISON_LARGE |39353954 VM_FAULT_SET_HINDEX(hstate_index(h));39553955+ } else {39563956+ ptep = huge_pte_alloc(mm, haddr, huge_page_size(h));39573957+ if (!ptep)39583958+ return VM_FAULT_OOM;39363959 }3937396039383938- /*39393939- * Acquire i_mmap_rwsem before calling huge_pte_alloc and hold39403940- * until finished with ptep. 
This serves two purposes:39413941- * 1) It prevents huge_pmd_unshare from being called elsewhere39423942- * and making the ptep no longer valid.39433943- * 2) It synchronizes us with file truncation.39443944- *39453945- * ptep could have already be assigned via huge_pte_offset. That39463946- * is OK, as huge_pte_alloc will return the same value unless39473947- * something changed.39483948- */39493961 mapping = vma->vm_file->f_mapping;39503950- i_mmap_lock_read(mapping);39513951- ptep = huge_pte_alloc(mm, haddr, huge_page_size(h));39523952- if (!ptep) {39533953- i_mmap_unlock_read(mapping);39543954- return VM_FAULT_OOM;39553955- }39623962+ idx = vma_hugecache_offset(h, vma, haddr);3956396339573964 /*39583965 * Serialize hugepage allocation and instantiation, so that we don't39593966 * get spurious allocation failures if two CPUs race to instantiate39603967 * the same page in the page cache.39613968 */39623962- idx = vma_hugecache_offset(h, vma, haddr);39633969 hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping, idx, haddr);39643970 mutex_lock(&hugetlb_fault_mutex_table[hash]);39653971···40344066 }40354067out_mutex:40364068 mutex_unlock(&hugetlb_fault_mutex_table[hash]);40374037- i_mmap_unlock_read(mapping);40384069 /*40394070 * Generally it's safe to hold refcount during waiting page lock. But40404071 * here we just wait to defer the next page fault to avoid busy loop and···46384671 * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()46394672 * and returns the corresponding pte. While this is not necessary for the46404673 * !shared pmd case because we can allocate the pmd later as well, it makes the46414641- * code much cleaner.46424642- *46434643- * This routine must be called with i_mmap_rwsem held in at least read mode.46444644- * For hugetlbfs, this prevents removal of any page table entries associated46454645- * with the address space. 
This is important as we are setting up sharing46464646- * based on existing page table entries (mappings).46744674+ * code much cleaner. pmd allocation is essential for the shared case because46754675+ * pud has to be populated inside the same i_mmap_rwsem section - otherwise46764676+ * racing tasks could either miss the sharing (see huge_pte_offset) or select a46774677+ * bad pmd for sharing.46474678 */46484679pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)46494680{···46584693 if (!vma_shareable(vma, addr))46594694 return (pte_t *)pmd_alloc(mm, pud, addr);4660469546964696+ i_mmap_lock_write(mapping);46614697 vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {46624698 if (svma == vma)46634699 continue;···46884722 spin_unlock(ptl);46894723out:46904724 pte = (pte_t *)pmd_alloc(mm, pud, addr);47254725+ i_mmap_unlock_write(mapping);46914726 return pte;46924727}46934728···46994732 * indicated by page_count > 1, unmap is achieved by clearing pud and47004733 * decrementing the ref count. If count == 1, the pte page is not shared.47014734 *47024702- * Called with page table lock held and i_mmap_rwsem held in write mode.47354735+ * called with page table lock held.47034736 *47044737 * returns: 1 successfully unmapped a shared pte page47054738 * 0 the underlying pte page is not shared, or it is the last user
+44-23
mm/kasan/common.c
···298298 return;299299 }300300301301- cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);302302-303301 *flags |= SLAB_KASAN;304302}305303···347349}348350349351/*350350- * Since it's desirable to only call object contructors once during slab351351- * allocation, we preassign tags to all such objects. Also preassign tags for352352- * SLAB_TYPESAFE_BY_RCU slabs to avoid use-after-free reports.353353- * For SLAB allocator we can't preassign tags randomly since the freelist is354354- * stored as an array of indexes instead of a linked list. Assign tags based355355- * on objects indexes, so that objects that are next to each other get356356- * different tags.357357- * After a tag is assigned, the object always gets allocated with the same tag.358358- * The reason is that we can't change tags for objects with constructors on359359- * reallocation (even for non-SLAB_TYPESAFE_BY_RCU), because the constructor360360- * code can save the pointer to the object somewhere (e.g. in the object361361- * itself). Then if we retag it, the old saved pointer will become invalid.352352+ * This function assigns a tag to an object considering the following:353353+ * 1. A cache might have a constructor, which might save a pointer to a slab354354+ * object somewhere (e.g. in the object itself). We preassign a tag for355355+ * each object in caches with constructors during slab creation and reuse356356+ * the same tag each time a particular object is allocated.357357+ * 2. A cache might be SLAB_TYPESAFE_BY_RCU, which means objects can be358358+ * accessed after being freed. We preassign tags for objects in these359359+ * caches as well.360360+ * 3. For SLAB allocator we can't preassign tags randomly since the freelist361361+ * is stored as an array of indexes instead of a linked list. 
Assign tags362362+ * based on objects indexes, so that objects that are next to each other363363+ * get different tags.362364 */363363-static u8 assign_tag(struct kmem_cache *cache, const void *object, bool new)365365+static u8 assign_tag(struct kmem_cache *cache, const void *object,366366+ bool init, bool krealloc)364367{365365- if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU))366366- return new ? KASAN_TAG_KERNEL : random_tag();368368+ /* Reuse the same tag for krealloc'ed objects. */369369+ if (krealloc)370370+ return get_tag(object);367371372372+ /*373373+ * If the cache neither has a constructor nor has SLAB_TYPESAFE_BY_RCU374374+ * set, assign a tag when the object is being allocated (init == false).375375+ */376376+ if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU))377377+ return init ? KASAN_TAG_KERNEL : random_tag();378378+379379+ /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */368380#ifdef CONFIG_SLAB381381+ /* For SLAB assign tags based on the object index in the freelist. */369382 return (u8)obj_to_index(cache, virt_to_page(object), (void *)object);370383#else371371- return new ? random_tag() : get_tag(object);384384+ /*385385+ * For SLUB assign a random tag during slab creation, otherwise reuse386386+ * the already assigned tag.387387+ */388388+ return init ? 
random_tag() : get_tag(object);372389#endif373390}374391···399386 __memset(alloc_info, 0, sizeof(*alloc_info));400387401388 if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))402402- object = set_tag(object, assign_tag(cache, object, true));389389+ object = set_tag(object,390390+ assign_tag(cache, object, true, false));403391404392 return (void *)object;405393}···466452 return __kasan_slab_free(cache, object, ip, true);467453}468454469469-void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,470470- size_t size, gfp_t flags)455455+static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,456456+ size_t size, gfp_t flags, bool krealloc)471457{472458 unsigned long redzone_start;473459 unsigned long redzone_end;···485471 KASAN_SHADOW_SCALE_SIZE);486472487473 if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))488488- tag = assign_tag(cache, object, false);474474+ tag = assign_tag(cache, object, false, krealloc);489475490476 /* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */491477 kasan_unpoison_shadow(set_tag(object, tag), size);···496482 set_track(&get_alloc_info(cache, object)->alloc_track, flags);497483498484 return set_tag(object, tag);485485+}486486+487487+void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,488488+ size_t size, gfp_t flags)489489+{490490+ return __kasan_kmalloc(cache, object, size, flags, false);499491}500492EXPORT_SYMBOL(kasan_kmalloc);501493···542522 if (unlikely(!PageSlab(page)))543523 return kasan_kmalloc_large(object, size, flags);544524 else545545- return kasan_kmalloc(page->slab_cache, object, size, flags);525525+ return __kasan_kmalloc(page->slab_cache, object, size,526526+ flags, true);546527}547528548529void kasan_poison_kfree(void *ptr, unsigned long ip)
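The rewritten comment above is a three-case decision table. A toy model of that table (covering the krealloc, no-ctor/no-RCU, and SLUB branches; the `cache_model` struct, tag constants, and helper names are illustrative stand-ins, not kernel types):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define TAG_KERNEL 0xFFu            /* stand-in for KASAN_TAG_KERNEL */

struct cache_model {
    bool has_ctor;                  /* cache->ctor != NULL */
    bool typesafe_by_rcu;           /* SLAB_TYPESAFE_BY_RCU set */
};

static uint8_t random_tag_model(void)
{
    return (uint8_t)(rand() & 0x3F);
}

static uint8_t assign_tag_model(const struct cache_model *c, uint8_t cur_tag,
                                bool init, bool krealloc)
{
    if (krealloc)                                   /* krealloc keeps the old tag */
        return cur_tag;
    if (!c->has_ctor && !c->typesafe_by_rcu)        /* tag chosen at allocation time */
        return init ? TAG_KERNEL : random_tag_model();
    /* ctor or RCU cache: tag fixed once at slab creation, then reused */
    return init ? random_tag_model() : cur_tag;
}
```

The SLAB index-based branch is omitted here since it depends on freelist layout; the point is that only the first allocation-time decision ever consults randomness for ctor/RCU caches.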
+2-14
mm/memory-failure.c
···966966 enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;967967 struct address_space *mapping;968968 LIST_HEAD(tokill);969969- bool unmap_success = true;969969+ bool unmap_success;970970 int kill = 1, forcekill;971971 struct page *hpage = *hpagep;972972 bool mlocked = PageMlocked(hpage);···10281028 if (kill)10291029 collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED);1030103010311031- if (!PageHuge(hpage)) {10321032- unmap_success = try_to_unmap(hpage, ttu);10331033- } else if (mapping) {10341034- /*10351035- * For hugetlb pages, try_to_unmap could potentially call10361036- * huge_pmd_unshare. Because of this, take semaphore in10371037- * write mode here and set TTU_RMAP_LOCKED to indicate we10381038- * have taken the lock at this higer level.10391039- */10401040- i_mmap_lock_write(mapping);10411041- unmap_success = try_to_unmap(hpage, ttu|TTU_RMAP_LOCKED);10421042- i_mmap_unlock_write(mapping);10431043- }10311031+ unmap_success = try_to_unmap(hpage, ttu);10441032 if (!unmap_success)10451033 pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n",10461034 pfn, page_mapcount(hpage));
+24-2
mm/memory.c
···29942994 struct vm_area_struct *vma = vmf->vma;29952995 vm_fault_t ret;2996299629972997+ /*29982998+ * Preallocate pte before we take page_lock because this might lead to29992999+ * deadlocks for memcg reclaim which waits for pages under writeback:30003000+ * lock_page(A)30013001+ * SetPageWriteback(A)30023002+ * unlock_page(A)30033003+ * lock_page(B)30043004+ * lock_page(B)30053005+ * pte_alloc_pne30063006+ * shrink_page_list30073007+ * wait_on_page_writeback(A)30083008+ * SetPageWriteback(B)30093009+ * unlock_page(B)30103010+ * # flush A, B to clear the writeback30113011+ */30123012+ if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) {30133013+ vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm);30143014+ if (!vmf->prealloc_pte)30153015+ return VM_FAULT_OOM;30163016+ smp_wmb(); /* See comment in __pte_alloc() */30173017+ }30183018+29973019 ret = vma->vm_ops->fault(vmf);29983020 if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY |29993021 VM_FAULT_DONE_COW)))···40994077 goto out;4100407841014079 if (range) {41024102- range->start = address & PAGE_MASK;41034103- range->end = range->start + PAGE_SIZE;40804080+ mmu_notifier_range_init(range, mm, address & PAGE_MASK,40814081+ (address & PAGE_MASK) + PAGE_SIZE);41044082 mmu_notifier_invalidate_range_start(range);41054083 }41064084 ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
+1-12
mm/migrate.c
···13241324 goto put_anon;1325132513261326 if (page_mapped(hpage)) {13271327- struct address_space *mapping = page_mapping(hpage);13281328-13291329- /*13301330- * try_to_unmap could potentially call huge_pmd_unshare.13311331- * Because of this, take semaphore in write mode here and13321332- * set TTU_RMAP_LOCKED to let lower levels know we have13331333- * taken the lock.13341334- */13351335- i_mmap_lock_write(mapping);13361327 try_to_unmap(hpage,13371337- TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS|13381338- TTU_RMAP_LOCKED);13391339- i_mmap_unlock_write(mapping);13281328+ TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);13401329 page_was_mapped = 1;13411330 }13421331
+7-1
mm/page_alloc.c
···22142214 */22152215 boost_watermark(zone);22162216 if (alloc_flags & ALLOC_KSWAPD)22172217- wakeup_kswapd(zone, 0, 0, zone_idx(zone));22172217+ set_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);2218221822192219 /* We are not allowed to try stealing from the whole block */22202220 if (!whole_block)···31023102 local_irq_restore(flags);3103310331043104out:31053105+ /* Separate test+clear to avoid unnecessary atomics */31063106+ if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) {31073107+ clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);31083108+ wakeup_kswapd(zone, 0, 0, zone_idx(zone));31093109+ }31103110+31053111 VM_BUG_ON_PAGE(page && bad_range(zone, page), page);31063112 return page;31073113
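The split above between setting `ZONE_BOOSTED_WATERMARK` in the stealing path and the separate test-then-clear before waking kswapd can be modeled in userspace with C11 atomics. This is only an illustrative model of the "separate test+clear to avoid unnecessary atomics" pattern; all names are made up:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool zone_boosted;    /* stands in for the zone flag bit */
static int kswapd_wakeups;

static void record_boost(void)
{
    atomic_store(&zone_boosted, true);
}

static void maybe_wake_kswapd(void)
{
    /* plain load filters the common case, so the read-modify-write
     * (the clear) only runs when a boost was actually recorded */
    if (atomic_load_explicit(&zone_boosted, memory_order_relaxed) &&
        atomic_exchange(&zone_boosted, false))
        kswapd_wakeups++;
}
```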
+2-6
mm/rmap.c
···2525 * page->flags PG_locked (lock_page)2626 * hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share)2727 * mapping->i_mmap_rwsem2828- * hugetlb_fault_mutex (hugetlbfs specific page fault mutex)2928 * anon_vma->rwsem3029 * mm->page_table_lock or pte_lock3130 * zone_lru_lock (in mark_page_accessed, isolate_lru_page)···13711372 * Note that the page can not be free in this function as call of13721373 * try_to_unmap() must hold a reference on the page.13731374 */13741374- mmu_notifier_range_init(&range, vma->vm_mm, vma->vm_start,13751375- min(vma->vm_end, vma->vm_start +13751375+ mmu_notifier_range_init(&range, vma->vm_mm, address,13761376+ min(vma->vm_end, address +13761377 (PAGE_SIZE << compound_order(page))));13771378 if (PageHuge(page)) {13781379 /*13791380 * If sharing is possible, start and end will be adjusted13801381 * accordingly.13811381- *13821382- * If called for a huge page, caller must hold i_mmap_rwsem13831383- * in write mode as it is possible to call huge_pmd_unshare.13841382 */13851383 adjust_range_if_pmd_sharing_possible(vma, &range.start,13861384 &range.end);
+5-4
mm/usercopy.c
···247247/*248248 * Validates that the given object is:249249 * - not bogus address250250- * - known-safe heap or stack object250250+ * - fully contained by stack (or stack frame, when available)251251+ * - fully within SLAB object (or object whitelist area, when available)251252 * - not in kernel text252253 */253254void __check_object_size(const void *ptr, unsigned long n, bool to_user)···262261263262 /* Check for invalid addresses. */264263 check_bogus_address((const unsigned long)ptr, n, to_user);265265-266266- /* Check for bad heap object. */267267- check_heap_object(ptr, n, to_user);268264269265 /* Check for bad stack object. */270266 switch (check_stack_object(ptr, n)) {···279281 default:280282 usercopy_abort("process stack", NULL, to_user, 0, n);281283 }284284+285285+ /* Check for bad heap object. */286286+ check_heap_object(ptr, n, to_user);282287283288 /* Check for object in kernel to avoid text exposure. */284289 check_kernel_text_object((const unsigned long)ptr, n, to_user);
+2-9
mm/userfaultfd.c
···267267 VM_BUG_ON(dst_addr & ~huge_page_mask(h));268268269269 /*270270- * Serialize via i_mmap_rwsem and hugetlb_fault_mutex.271271- * i_mmap_rwsem ensures the dst_pte remains valid even272272- * in the case of shared pmds. fault mutex prevents273273- * races with other faulting threads.270270+ * Serialize via hugetlb_fault_mutex274271 */275275- mapping = dst_vma->vm_file->f_mapping;276276- i_mmap_lock_read(mapping);277272 idx = linear_page_index(dst_vma, dst_addr);273273+ mapping = dst_vma->vm_file->f_mapping;278274 hash = hugetlb_fault_mutex_hash(h, dst_mm, dst_vma, mapping,279275 idx, dst_addr);280276 mutex_lock(&hugetlb_fault_mutex_table[hash]);···279283 dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));280284 if (!dst_pte) {281285 mutex_unlock(&hugetlb_fault_mutex_table[hash]);282282- i_mmap_unlock_read(mapping);283286 goto out_unlock;284287 }285288···286291 dst_pteval = huge_ptep_get(dst_pte);287292 if (!huge_pte_none(dst_pteval)) {288293 mutex_unlock(&hugetlb_fault_mutex_table[hash]);289289- i_mmap_unlock_read(mapping);290294 goto out_unlock;291295 }292296···293299 dst_addr, src_addr, &page);294300295301 mutex_unlock(&hugetlb_fault_mutex_table[hash]);296296- i_mmap_unlock_read(mapping);297302 vm_alloc_shared = vm_shared;298303299304 cond_resched();
+1-1
mm/util.c
···478478 return true;479479 if (PageHuge(page))480480 return false;481481- for (i = 0; i < hpage_nr_pages(page); i++) {481481+ for (i = 0; i < (1 << compound_order(page)); i++) {482482 if (atomic_read(&page[i]._mapcount) >= 0)483483 return true;484484 }
+52-44
net/bpfilter/bpfilter_kern.c
···1313extern char bpfilter_umh_start;1414extern char bpfilter_umh_end;15151616-static struct umh_info info;1717-/* since ip_getsockopt() can run in parallel, serialize access to umh */1818-static DEFINE_MUTEX(bpfilter_lock);1919-2020-static void shutdown_umh(struct umh_info *info)1616+static void shutdown_umh(void)2117{2218 struct task_struct *tsk;23192424- if (!info->pid)2020+ if (bpfilter_ops.stop)2521 return;2626- tsk = get_pid_task(find_vpid(info->pid), PIDTYPE_PID);2222+2323+ tsk = get_pid_task(find_vpid(bpfilter_ops.info.pid), PIDTYPE_PID);2724 if (tsk) {2825 force_sig(SIGKILL, tsk);2926 put_task_struct(tsk);3027 }3131- fput(info->pipe_to_umh);3232- fput(info->pipe_from_umh);3333- info->pid = 0;3428}35293630static void __stop_umh(void)3731{3838- if (IS_ENABLED(CONFIG_INET)) {3939- bpfilter_process_sockopt = NULL;4040- shutdown_umh(&info);4141- }4242-}4343-4444-static void stop_umh(void)4545-{4646- mutex_lock(&bpfilter_lock);4747- __stop_umh();4848- mutex_unlock(&bpfilter_lock);3232+ if (IS_ENABLED(CONFIG_INET))3333+ shutdown_umh();4934}50355136static int __bpfilter_process_sockopt(struct sock *sk, int optname,···4863 req.cmd = optname;4964 req.addr = (long __force __user)optval;5065 req.len = optlen;5151- mutex_lock(&bpfilter_lock);5252- if (!info.pid)6666+ if (!bpfilter_ops.info.pid)5367 goto out;5454- n = __kernel_write(info.pipe_to_umh, &req, sizeof(req), &pos);6868+ n = __kernel_write(bpfilter_ops.info.pipe_to_umh, &req, sizeof(req),6969+ &pos);5570 if (n != sizeof(req)) {5671 pr_err("write fail %zd\n", n);5772 __stop_umh();···5974 goto out;6075 }6176 pos = 0;6262- n = kernel_read(info.pipe_from_umh, &reply, sizeof(reply), &pos);7777+ n = kernel_read(bpfilter_ops.info.pipe_from_umh, &reply, sizeof(reply),7878+ &pos);6379 if (n != sizeof(reply)) {6480 pr_err("read fail %zd\n", n);6581 __stop_umh();···6983 }7084 ret = reply.status;7185out:7272- mutex_unlock(&bpfilter_lock);7386 return ret;8787+}8888+8989+static int start_umh(void)9090+{9191+ int 
err;9292+9393+ /* fork usermode process */9494+ err = fork_usermode_blob(&bpfilter_umh_start,9595+ &bpfilter_umh_end - &bpfilter_umh_start,9696+ &bpfilter_ops.info);9797+ if (err)9898+ return err;9999+ bpfilter_ops.stop = false;100100+ pr_info("Loaded bpfilter_umh pid %d\n", bpfilter_ops.info.pid);101101+102102+ /* health check that usermode process started correctly */103103+ if (__bpfilter_process_sockopt(NULL, 0, NULL, 0, 0) != 0) {104104+ shutdown_umh();105105+ return -EFAULT;106106+ }107107+108108+ return 0;74109}7511076111static int __init load_umh(void)77112{78113 int err;791148080- /* fork usermode process */8181- info.cmdline = "bpfilter_umh";8282- err = fork_usermode_blob(&bpfilter_umh_start,8383- &bpfilter_umh_end - &bpfilter_umh_start,8484- &info);8585- if (err)8686- return err;8787- pr_info("Loaded bpfilter_umh pid %d\n", info.pid);8888-8989- /* health check that usermode process started correctly */9090- if (__bpfilter_process_sockopt(NULL, 0, NULL, 0, 0) != 0) {9191- stop_umh();9292- return -EFAULT;115115+ mutex_lock(&bpfilter_ops.lock);116116+ if (!bpfilter_ops.stop) {117117+ err = -EFAULT;118118+ goto out;93119 }9494- if (IS_ENABLED(CONFIG_INET))9595- bpfilter_process_sockopt = &__bpfilter_process_sockopt;9696-9797- return 0;120120+ err = start_umh();121121+ if (!err && IS_ENABLED(CONFIG_INET)) {122122+ bpfilter_ops.sockopt = &__bpfilter_process_sockopt;123123+ bpfilter_ops.start = &start_umh;124124+ }125125+out:126126+ mutex_unlock(&bpfilter_ops.lock);127127+ return err;98128}99129100130static void __exit fini_umh(void)101131{102102- stop_umh();132132+ mutex_lock(&bpfilter_ops.lock);133133+ if (IS_ENABLED(CONFIG_INET)) {134134+ shutdown_umh();135135+ bpfilter_ops.start = NULL;136136+ bpfilter_ops.sockopt = NULL;137137+ }138138+ mutex_unlock(&bpfilter_ops.lock);103139}104140module_init(load_umh);105141module_exit(fini_umh);
+28-4
net/can/gw.c
···416416 while (modidx < MAX_MODFUNCTIONS && gwj->mod.modfunc[modidx])417417 (*gwj->mod.modfunc[modidx++])(cf, &gwj->mod);418418419419- /* check for checksum updates when the CAN frame has been modified */419419+ /* Has the CAN frame been modified? */420420 if (modidx) {421421- if (gwj->mod.csumfunc.crc8)422422- (*gwj->mod.csumfunc.crc8)(cf, &gwj->mod.csum.crc8);421421+ /* get available space for the processed CAN frame type */422422+ int max_len = nskb->len - offsetof(struct can_frame, data);423423424424- if (gwj->mod.csumfunc.xor)424424+ /* dlc may have changed, make sure it fits to the CAN frame */425425+ if (cf->can_dlc > max_len)426426+ goto out_delete;427427+428428+ /* check for checksum updates in classic CAN length only */429429+ if (gwj->mod.csumfunc.crc8) {430430+ if (cf->can_dlc > 8)431431+ goto out_delete;432432+433433+ (*gwj->mod.csumfunc.crc8)(cf, &gwj->mod.csum.crc8);434434+ }435435+436436+ if (gwj->mod.csumfunc.xor) {437437+ if (cf->can_dlc > 8)438438+ goto out_delete;439439+425440 (*gwj->mod.csumfunc.xor)(cf, &gwj->mod.csum.xor);441441+ }426442 }427443428444 /* clear the skb timestamp if not configured the other way */···450434 gwj->dropped_frames++;451435 else452436 gwj->handled_frames++;437437+438438+ return;439439+440440+ out_delete:441441+ /* delete frame due to misconfiguration */442442+ gwj->deleted_frames++;443443+ kfree_skb(nskb);444444+ return;453445}454446455447static inline int cgw_register_filter(struct net *net, struct cgw_job *gwj)
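The hunk above enforces two limits on a modified frame: the dlc must still fit the skb payload, and checksum modifications only make sense for classic CAN (dlc <= 8). A minimal userspace sketch of those checks, with simplified stand-in types for `struct can_frame` and the skb length:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct can_frame_demo {             /* simplified stand-in, not the uapi struct */
    uint32_t can_id;
    uint8_t  can_dlc;
    uint8_t  data[64];
};

enum verdict { FORWARD, DELETE };

static enum verdict check_modified_frame(const struct can_frame_demo *cf,
                                         size_t skb_payload_len,
                                         bool wants_csum_update)
{
    /* available space for frame data in the received buffer */
    size_t max_len = skb_payload_len - offsetof(struct can_frame_demo, data);

    if (cf->can_dlc > max_len)                    /* dlc grew past the buffer */
        return DELETE;
    if (wants_csum_update && cf->can_dlc > 8)     /* csum mods: classic CAN only */
        return DELETE;
    return FORWARD;
}
```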
···375375 struct ceph_client *client = s->private;376376 int ret;377377378378- ret = ceph_print_client_options(s, client);378378+ ret = ceph_print_client_options(s, client, true);379379 if (ret)380380 return ret;381381
+2-2
net/ceph/osd_client.c
···23152315 (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) ||23162316 pool_full(osdc, req->r_t.base_oloc.pool))) {23172317 dout("req %p full/pool_full\n", req);23182318- if (osdc->abort_on_full) {23182318+ if (ceph_test_opt(osdc->client, ABORT_ON_FULL)) {23192319 err = -ENOSPC;23202320 } else {23212321 pr_warn_ratelimited("FULL or reached pool quota\n");···25452545{25462546 bool victims = false;2547254725482548- if (osdc->abort_on_full &&25482548+ if (ceph_test_opt(osdc->client, ABORT_ON_FULL) &&25492549 (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) || have_pool_full(osdc)))25502550 for_each_request(osdc, abort_on_full_fn, &victims);25512551}
+21-13
net/core/filter.c
···20202020static int __bpf_redirect_no_mac(struct sk_buff *skb, struct net_device *dev,20212021 u32 flags)20222022{20232023- /* skb->mac_len is not set on normal egress */20242024- unsigned int mlen = skb->network_header - skb->mac_header;20232023+ unsigned int mlen = skb_network_offset(skb);2025202420262026- __skb_pull(skb, mlen);20252025+ if (mlen) {20262026+ __skb_pull(skb, mlen);2027202720282028- /* At ingress, the mac header has already been pulled once.20292029- * At egress, skb_pospull_rcsum has to be done in case that20302030- * the skb is originated from ingress (i.e. a forwarded skb)20312031- * to ensure that rcsum starts at net header.20322032- */20332033- if (!skb_at_tc_ingress(skb))20342034- skb_postpull_rcsum(skb, skb_mac_header(skb), mlen);20282028+ /* At ingress, the mac header has already been pulled once.20292029+ * At egress, skb_pospull_rcsum has to be done in case that20302030+ * the skb is originated from ingress (i.e. a forwarded skb)20312031+ * to ensure that rcsum starts at net header.20322032+ */20332033+ if (!skb_at_tc_ingress(skb))20342034+ skb_postpull_rcsum(skb, skb_mac_header(skb), mlen);20352035+ }20352036 skb_pop_mac_header(skb);20362037 skb_reset_mac_len(skb);20372038 return flags & BPF_F_INGRESS ?···41204119 sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF);41214120 break;41224121 case SO_MAX_PACING_RATE: /* 32bit version */41224122+ if (val != ~0U)41234123+ cmpxchg(&sk->sk_pacing_status,41244124+ SK_PACING_NONE,41254125+ SK_PACING_NEEDED);41234126 sk->sk_max_pacing_rate = (val == ~0U) ? ~0UL : val;41244127 sk->sk_pacing_rate = min(sk->sk_pacing_rate,41254128 sk->sk_max_pacing_rate);···41374132 sk->sk_rcvlowat = val ? 
: 1;41384133 break;41394134 case SO_MARK:41404140- sk->sk_mark = val;41354135+ if (sk->sk_mark != val) {41364136+ sk->sk_mark = val;41374137+ sk_dst_reset(sk);41384138+ }41414139 break;41424140 default:41434141 ret = -EINVAL;···42114203 /* Only some options are supported */42124204 switch (optname) {42134205 case TCP_BPF_IW:42144214- if (val <= 0 || tp->data_segs_out > 0)42064206+ if (val <= 0 || tp->data_segs_out > tp->syn_data)42154207 ret = -EINVAL;42164208 else42174209 tp->snd_cwnd = val;···53175309 case BPF_FUNC_trace_printk:53185310 if (capable(CAP_SYS_ADMIN))53195311 return bpf_get_trace_printk_proto();53205320- /* else: fall through */53125312+ /* else, fall through */53215313 default:53225314 return NULL;53235315 }
+1
net/core/lwt_bpf.c
···6363 lwt->name ? : "<unknown>");6464 ret = BPF_OK;6565 } else {6666+ skb_reset_mac_header(skb);6667 ret = skb_do_redirect(skb);6768 if (ret == 0)6869 ret = BPF_REDIRECT;
+10-5
net/core/neighbour.c
···1818#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt19192020#include <linux/slab.h>2121+#include <linux/kmemleak.h>2122#include <linux/types.h>2223#include <linux/kernel.h>2324#include <linux/module.h>···444443 ret = kmalloc(sizeof(*ret), GFP_ATOMIC);445444 if (!ret)446445 return NULL;447447- if (size <= PAGE_SIZE)446446+ if (size <= PAGE_SIZE) {448447 buckets = kzalloc(size, GFP_ATOMIC);449449- else448448+ } else {450449 buckets = (struct neighbour __rcu **)451450 __get_free_pages(GFP_ATOMIC | __GFP_ZERO,452451 get_order(size));452452+ kmemleak_alloc(buckets, size, 1, GFP_ATOMIC);453453+ }453454 if (!buckets) {454455 kfree(ret);455456 return NULL;···471468 size_t size = (1 << nht->hash_shift) * sizeof(struct neighbour *);472469 struct neighbour __rcu **buckets = nht->hash_buckets;473470474474- if (size <= PAGE_SIZE)471471+ if (size <= PAGE_SIZE) {475472 kfree(buckets);476476- else473473+ } else {474474+ kmemleak_free(buckets);477475 free_pages((unsigned long)buckets, get_order(size));476476+ }478477 kfree(nht);479478}480479···10071002 if (neigh->ops->solicit)10081003 neigh->ops->solicit(neigh, skb);10091004 atomic_inc(&neigh->probes);10101010- kfree_skb(skb);10051005+ consume_skb(skb);10111006}1012100710131008/* Called when a timer expires for a neighbour entry. */
+1-6
net/core/skbuff.c
···52705270 unsigned long chunk;52715271 struct sk_buff *skb;52725272 struct page *page;52735273- gfp_t gfp_head;52745273 int i;5275527452765275 *errcode = -EMSGSIZE;···52795280 if (npages > MAX_SKB_FRAGS)52805281 return NULL;5281528252825282- gfp_head = gfp_mask;52835283- if (gfp_head & __GFP_DIRECT_RECLAIM)52845284- gfp_head |= __GFP_RETRY_MAYFAIL;52855285-52865283 *errcode = -ENOBUFS;52875287- skb = alloc_skb(header_len, gfp_head);52845284+ skb = alloc_skb(header_len, gfp_mask);52885285 if (!skb)52895286 return NULL;52905287
+48-10
net/ipv4/bpfilter/sockopt.c
···11// SPDX-License-Identifier: GPL-2.022+#include <linux/init.h>33+#include <linux/module.h>24#include <linux/uaccess.h>35#include <linux/bpfilter.h>46#include <uapi/linux/bpf.h>57#include <linux/wait.h>68#include <linux/kmod.h>99+#include <linux/fs.h>1010+#include <linux/file.h>71188-int (*bpfilter_process_sockopt)(struct sock *sk, int optname,99- char __user *optval,1010- unsigned int optlen, bool is_set);1111-EXPORT_SYMBOL_GPL(bpfilter_process_sockopt);1212+struct bpfilter_umh_ops bpfilter_ops;1313+EXPORT_SYMBOL_GPL(bpfilter_ops);1414+1515+static void bpfilter_umh_cleanup(struct umh_info *info)1616+{1717+ mutex_lock(&bpfilter_ops.lock);1818+ bpfilter_ops.stop = true;1919+ fput(info->pipe_to_umh);2020+ fput(info->pipe_from_umh);2121+ info->pid = 0;2222+ mutex_unlock(&bpfilter_ops.lock);2323+}12241325static int bpfilter_mbox_request(struct sock *sk, int optname,1426 char __user *optval,1527 unsigned int optlen, bool is_set)1628{1717- if (!bpfilter_process_sockopt) {1818- int err = request_module("bpfilter");2929+ int err;3030+ mutex_lock(&bpfilter_ops.lock);3131+ if (!bpfilter_ops.sockopt) {3232+ mutex_unlock(&bpfilter_ops.lock);3333+ err = request_module("bpfilter");3434+ mutex_lock(&bpfilter_ops.lock);19352036 if (err)2121- return err;2222- if (!bpfilter_process_sockopt)2323- return -ECHILD;3737+ goto out;3838+ if (!bpfilter_ops.sockopt) {3939+ err = -ECHILD;4040+ goto out;4141+ }2442 }2525- return bpfilter_process_sockopt(sk, optname, optval, optlen, is_set);4343+ if (bpfilter_ops.stop) {4444+ err = bpfilter_ops.start();4545+ if (err)4646+ goto out;4747+ }4848+ err = bpfilter_ops.sockopt(sk, optname, optval, optlen, is_set);4949+out:5050+ mutex_unlock(&bpfilter_ops.lock);5151+ return err;2652}27532854int bpfilter_ip_set_sockopt(struct sock *sk, int optname, char __user *optval,···67416842 return bpfilter_mbox_request(sk, optname, optval, len, false);6943}4444+4545+static int __init bpfilter_sockopt_init(void)4646+{4747+ mutex_init(&bpfilter_ops.lock);4848+ 
bpfilter_ops.stop = true;4949+ bpfilter_ops.info.cmdline = "bpfilter_umh";5050+ bpfilter_ops.info.cleanup = &bpfilter_umh_cleanup;5151+5252+ return 0;5353+}5454+5555+module_init(bpfilter_sockopt_init);
···488488 goto drop;489489 }490490491491+ iph = ip_hdr(skb);491492 skb->transport_header = skb->network_header + iph->ihl*4;492493493494 /* Remove any debris in the socket control block */
+5-7
net/ipv4/ip_sockglue.c
···148148149149static void ip_cmsg_recv_dstaddr(struct msghdr *msg, struct sk_buff *skb)150150{151151+ __be16 _ports[2], *ports;151152 struct sockaddr_in sin;152152- __be16 *ports;153153- int end;154154-155155- end = skb_transport_offset(skb) + 4;156156- if (end > 0 && !pskb_may_pull(skb, end))157157- return;158153159154 /* All current transport protocols have the port numbers in the160155 * first four bytes of the transport header and this function is161156 * written with this assumption in mind.162157 */163163- ports = (__be16 *)skb_transport_header(skb);158158+ ports = skb_header_pointer(skb, skb_transport_offset(skb),159159+ sizeof(_ports), &_ports);160160+ if (!ports)161161+ return;164162165163 sin.sin_family = AF_INET;166164 sin.sin_addr.s_addr = ip_hdr(skb)->daddr;
···310310311311 /* Check if the address belongs to the host. */312312 if (addr_type == IPV6_ADDR_MAPPED) {313313+ struct net_device *dev = NULL;313314 int chk_addr_ret;314315315316 /* Binding to v4-mapped address on a v6-only socket···321320 goto out;322321 }323322323323+ rcu_read_lock();324324+ if (sk->sk_bound_dev_if) {325325+ dev = dev_get_by_index_rcu(net, sk->sk_bound_dev_if);326326+ if (!dev) {327327+ err = -ENODEV;328328+ goto out_unlock;329329+ }330330+ }331331+324332 /* Reproduce AF_INET checks to make the bindings consistent */325333 v4addr = addr->sin6_addr.s6_addr32[3];326326- chk_addr_ret = inet_addr_type(net, v4addr);334334+ chk_addr_ret = inet_addr_type_dev_table(net, dev, v4addr);335335+ rcu_read_unlock();336336+327337 if (!inet_can_nonlocal_bind(net, inet) &&328338 v4addr != htonl(INADDR_ANY) &&329339 chk_addr_ret != RTN_LOCAL &&
+5-6
net/ipv6/datagram.c
···341341 skb_reset_network_header(skb);342342 iph = ipv6_hdr(skb);343343 iph->daddr = fl6->daddr;344344+ ip6_flow_hdr(iph, 0, 0);344345345346 serr = SKB_EXT_ERR(skb);346347 serr->ee.ee_errno = err;···701700 }702701 if (np->rxopt.bits.rxorigdstaddr) {703702 struct sockaddr_in6 sin6;704704- __be16 *ports;705705- int end;703703+ __be16 _ports[2], *ports;706704707707- end = skb_transport_offset(skb) + 4;708708- if (end <= 0 || pskb_may_pull(skb, end)) {705705+ ports = skb_header_pointer(skb, skb_transport_offset(skb),706706+ sizeof(_ports), &_ports);707707+ if (ports) {709708 /* All current transport protocols have the port numbers in the710709 * first four bytes of the transport header and this function is711710 * written with this assumption in mind.712711 */713713- ports = (__be16 *)skb_transport_header(skb);714714-715712 sin6.sin6_family = AF_INET6;716713 sin6.sin6_addr = ipv6_hdr(skb)->daddr;717714 sin6.sin6_port = ports[1];
+15-2
net/ipv6/fou6.c
···9090{9191 int transport_offset = skb_transport_offset(skb);9292 struct guehdr *guehdr;9393- size_t optlen;9393+ size_t len, optlen;9494 int ret;95959696- if (skb->len < sizeof(struct udphdr) + sizeof(struct guehdr))9696+ len = sizeof(struct udphdr) + sizeof(struct guehdr);9797+ if (!pskb_may_pull(skb, len))9798 return -EINVAL;989999100 guehdr = (struct guehdr *)&udp_hdr(skb)[1];···129128130129 optlen = guehdr->hlen << 2;131130131131+ if (!pskb_may_pull(skb, len + optlen))132132+ return -EINVAL;133133+134134+ guehdr = (struct guehdr *)&udp_hdr(skb)[1];132135 if (validate_gue_flags(guehdr, optlen))133136 return -EINVAL;137137+138138+ /* Handling exceptions for direct UDP encapsulation in GUE would lead to139139+ * recursion. Besides, this kind of encapsulation can't even be140140+ * configured currently. Discard this.141141+ */142142+ if (guehdr->proto_ctype == IPPROTO_UDP ||143143+ guehdr->proto_ctype == IPPROTO_UDPLITE)144144+ return -EOPNOTSUPP;134145135146 skb_set_transport_header(skb, -(int)sizeof(struct icmp6hdr));136147 ret = gue6_err_proto_handler(guehdr->proto_ctype, skb,
+6-2
net/ipv6/icmp.c
···423423static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,424424 const struct in6_addr *force_saddr)425425{426426- struct net *net = dev_net(skb->dev);427426 struct inet6_dev *idev = NULL;428427 struct ipv6hdr *hdr = ipv6_hdr(skb);429428 struct sock *sk;429429+ struct net *net;430430 struct ipv6_pinfo *np;431431 const struct in6_addr *saddr = NULL;432432 struct dst_entry *dst;···437437 int iif = 0;438438 int addr_type = 0;439439 int len;440440- u32 mark = IP6_REPLY_MARK(net, skb->mark);440440+ u32 mark;441441442442 if ((u8 *)hdr < skb->head ||443443 (skb_network_header(skb) + sizeof(*hdr)) > skb_tail_pointer(skb))444444 return;445445446446+ if (!skb->dev)447447+ return;448448+ net = dev_net(skb->dev);449449+ mark = IP6_REPLY_MARK(net, skb->mark);446450 /*447451 * Make sure we respect the rules448452 * i.e. RFC 1885 2.4(e)
···26282628 addr = saddr->sll_halen ? saddr->sll_addr : NULL;26292629 dev = dev_get_by_index(sock_net(&po->sk), saddr->sll_ifindex);26302630 if (addr && dev && saddr->sll_halen < dev->addr_len)26312631- goto out;26312631+ goto out_put;26322632 }2633263326342634 err = -ENXIO;···28282828 addr = saddr->sll_halen ? saddr->sll_addr : NULL;28292829 dev = dev_get_by_index(sock_net(sk), saddr->sll_ifindex);28302830 if (addr && dev && saddr->sll_halen < dev->addr_len)28312831- goto out;28312831+ goto out_unlock;28322832 }2833283328342834 err = -ENXIO;···28872887 goto out_free;28882888 } else if (reserve) {28892889 skb_reserve(skb, -reserve);28902890- if (len < reserve)28902890+ if (len < reserve + sizeof(struct ipv6hdr) &&28912891+ dev->min_header_len != dev->hard_header_len)28912892 skb_reset_network_header(skb);28922893 }28932894
+2-2
net/rds/ib_send.c
···522522 if (be32_to_cpu(rm->m_inc.i_hdr.h_len) == 0)523523 i = 1;524524 else525525- i = ceil(be32_to_cpu(rm->m_inc.i_hdr.h_len), RDS_FRAG_SIZE);525525+ i = DIV_ROUND_UP(be32_to_cpu(rm->m_inc.i_hdr.h_len), RDS_FRAG_SIZE);526526527527 work_alloc = rds_ib_ring_alloc(&ic->i_send_ring, i, &pos);528528 if (work_alloc == 0) {···879879 * Instead of knowing how to return a partial rdma read/write we insist that there880880 * be enough work requests to send the entire message.881881 */882882- i = ceil(op->op_count, max_sge);882882+ i = DIV_ROUND_UP(op->op_count, max_sge);883883884884 work_alloc = rds_ib_ring_alloc(&ic->i_send_ring, i, &pos);885885 if (work_alloc != i) {
+2-2
net/rds/message.c
···341341{342342 struct rds_message *rm;343343 unsigned int i;344344- int num_sgs = ceil(total_len, PAGE_SIZE);344344+ int num_sgs = DIV_ROUND_UP(total_len, PAGE_SIZE);345345 int extra_bytes = num_sgs * sizeof(struct scatterlist);346346 int ret;347347···351351352352 set_bit(RDS_MSG_PAGEVEC, &rm->m_flags);353353 rm->m_inc.i_hdr.h_len = cpu_to_be32(total_len);354354- rm->data.op_nents = ceil(total_len, PAGE_SIZE);354354+ rm->data.op_nents = DIV_ROUND_UP(total_len, PAGE_SIZE);355355 rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs, &ret);356356 if (!rm->data.op_sg) {357357 rds_message_put(rm);
-4
net/rds/rds.h
···4848}4949#endif50505151-/* XXX is there one of these somewhere? */5252-#define ceil(x, y) \5353- ({ unsigned long __x = (x), __y = (y); (__x + __y - 1) / __y; })5454-5551#define RDS_FRAG_SHIFT 125652#define RDS_FRAG_SIZE ((unsigned int)(1 << RDS_FRAG_SHIFT))5753
+1-1
net/rds/send.c
···11071107 size_t total_payload_len = payload_len, rdma_payload_len = 0;11081108 bool zcopy = ((msg->msg_flags & MSG_ZEROCOPY) &&11091109 sock_flag(rds_rs_to_sk(rs), SOCK_ZEROCOPY));11101110- int num_sgs = ceil(payload_len, PAGE_SIZE);11101110+ int num_sgs = DIV_ROUND_UP(payload_len, PAGE_SIZE);11111111 int namelen;11121112 struct rds_iov_vector_arr vct;11131113 int ind;
-70
net/rxrpc/af_rxrpc.c
···419419EXPORT_SYMBOL(rxrpc_kernel_get_epoch);420420421421/**422422- * rxrpc_kernel_check_call - Check a call's state423423- * @sock: The socket the call is on424424- * @call: The call to check425425- * @_compl: Where to store the completion state426426- * @_abort_code: Where to store any abort code427427- *428428- * Allow a kernel service to query the state of a call and find out the manner429429- * of its termination if it has completed. Returns -EINPROGRESS if the call is430430- * still going, 0 if the call finished successfully, -ECONNABORTED if the call431431- * was aborted and an appropriate error if the call failed in some other way.432432- */433433-int rxrpc_kernel_check_call(struct socket *sock, struct rxrpc_call *call,434434- enum rxrpc_call_completion *_compl, u32 *_abort_code)435435-{436436- if (call->state != RXRPC_CALL_COMPLETE)437437- return -EINPROGRESS;438438- smp_rmb();439439- *_compl = call->completion;440440- *_abort_code = call->abort_code;441441- return call->error;442442-}443443-EXPORT_SYMBOL(rxrpc_kernel_check_call);444444-445445-/**446446- * rxrpc_kernel_retry_call - Allow a kernel service to retry a call447447- * @sock: The socket the call is on448448- * @call: The call to retry449449- * @srx: The address of the peer to contact450450- * @key: The security context to use (defaults to socket setting)451451- *452452- * Allow a kernel service to try resending a client call that failed due to a453453- * network error to a new address. 
The Tx queue is maintained intact, thereby454454- * relieving the need to re-encrypt any request data that has already been455455- * buffered.456456- */457457-int rxrpc_kernel_retry_call(struct socket *sock, struct rxrpc_call *call,458458- struct sockaddr_rxrpc *srx, struct key *key)459459-{460460- struct rxrpc_conn_parameters cp;461461- struct rxrpc_sock *rx = rxrpc_sk(sock->sk);462462- int ret;463463-464464- _enter("%d{%d}", call->debug_id, atomic_read(&call->usage));465465-466466- if (!key)467467- key = rx->key;468468- if (key && !key->payload.data[0])469469- key = NULL; /* a no-security key */470470-471471- memset(&cp, 0, sizeof(cp));472472- cp.local = rx->local;473473- cp.key = key;474474- cp.security_level = 0;475475- cp.exclusive = false;476476- cp.service_id = srx->srx_service;477477-478478- mutex_lock(&call->user_mutex);479479-480480- ret = rxrpc_prepare_call_for_retry(rx, call);481481- if (ret == 0)482482- ret = rxrpc_retry_client_call(rx, call, &cp, srx, GFP_KERNEL);483483-484484- mutex_unlock(&call->user_mutex);485485- rxrpc_put_peer(cp.peer);486486- _leave(" = %d", ret);487487- return ret;488488-}489489-EXPORT_SYMBOL(rxrpc_kernel_retry_call);490490-491491-/**492422 * rxrpc_kernel_new_call_notification - Get notifications of new calls493423 * @sock: The socket to intercept received messages on494424 * @notify_new_call: Function to be called when new calls appear
+12-7
net/rxrpc/ar-internal.h
···476476 RXRPC_CALL_EXPOSED, /* The call was exposed to the world */477477 RXRPC_CALL_RX_LAST, /* Received the last packet (at rxtx_top) */478478 RXRPC_CALL_TX_LAST, /* Last packet in Tx buffer (at rxtx_top) */479479- RXRPC_CALL_TX_LASTQ, /* Last packet has been queued */480479 RXRPC_CALL_SEND_PING, /* A ping will need to be sent */481480 RXRPC_CALL_PINGING, /* Ping in process */482481 RXRPC_CALL_RETRANS_TIMEOUT, /* Retransmission due to timeout occurred */···514515 RXRPC_CALL_SERVER_AWAIT_ACK, /* - server awaiting final ACK */515516 RXRPC_CALL_COMPLETE, /* - call complete */516517 NR__RXRPC_CALL_STATES518518+};519519+520520+/*521521+ * Call completion condition (state == RXRPC_CALL_COMPLETE).522522+ */523523+enum rxrpc_call_completion {524524+ RXRPC_CALL_SUCCEEDED, /* - Normal termination */525525+ RXRPC_CALL_REMOTELY_ABORTED, /* - call aborted by peer */526526+ RXRPC_CALL_LOCALLY_ABORTED, /* - call aborted locally on error or close */527527+ RXRPC_CALL_LOCAL_ERROR, /* - call failed due to local error */528528+ RXRPC_CALL_NETWORK_ERROR, /* - call terminated by network error */529529+ NR__RXRPC_CALL_COMPLETIONS517530};518531519532/*···772761 struct sockaddr_rxrpc *,773762 struct rxrpc_call_params *, gfp_t,774763 unsigned int);775775-int rxrpc_retry_client_call(struct rxrpc_sock *,776776- struct rxrpc_call *,777777- struct rxrpc_conn_parameters *,778778- struct sockaddr_rxrpc *,779779- gfp_t);780764void rxrpc_incoming_call(struct rxrpc_sock *, struct rxrpc_call *,781765 struct sk_buff *);782766void rxrpc_release_call(struct rxrpc_sock *, struct rxrpc_call *);783783-int rxrpc_prepare_call_for_retry(struct rxrpc_sock *, struct rxrpc_call *);784767void rxrpc_release_calls_on_socket(struct rxrpc_sock *);785768bool __rxrpc_queue_call(struct rxrpc_call *);786769bool rxrpc_queue_call(struct rxrpc_call *);
-97
net/rxrpc/call_object.c
···325325}326326327327/*328328- * Retry a call to a new address. It is expected that the Tx queue of the call329329- * will contain data previously packaged for an old call.330330- */331331-int rxrpc_retry_client_call(struct rxrpc_sock *rx,332332- struct rxrpc_call *call,333333- struct rxrpc_conn_parameters *cp,334334- struct sockaddr_rxrpc *srx,335335- gfp_t gfp)336336-{337337- const void *here = __builtin_return_address(0);338338- int ret;339339-340340- /* Set up or get a connection record and set the protocol parameters,341341- * including channel number and call ID.342342- */343343- ret = rxrpc_connect_call(rx, call, cp, srx, gfp);344344- if (ret < 0)345345- goto error;346346-347347- trace_rxrpc_call(call, rxrpc_call_connected, atomic_read(&call->usage),348348- here, NULL);349349-350350- rxrpc_start_call_timer(call);351351-352352- _net("CALL new %d on CONN %d", call->debug_id, call->conn->debug_id);353353-354354- if (!test_and_set_bit(RXRPC_CALL_EV_RESEND, &call->events))355355- rxrpc_queue_call(call);356356-357357- _leave(" = 0");358358- return 0;359359-360360-error:361361- rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,362362- RX_CALL_DEAD, ret);363363- trace_rxrpc_call(call, rxrpc_call_error, atomic_read(&call->usage),364364- here, ERR_PTR(ret));365365- _leave(" = %d", ret);366366- return ret;367367-}368368-369369-/*370328 * Set up an incoming call. 
call->conn points to the connection.371329 * This is called in BH context and isn't allowed to fail.372330 */···489531 }490532491533 _leave("");492492-}493493-494494-/*495495- * Prepare a kernel service call for retry.496496- */497497-int rxrpc_prepare_call_for_retry(struct rxrpc_sock *rx, struct rxrpc_call *call)498498-{499499- const void *here = __builtin_return_address(0);500500- int i;501501- u8 last = 0;502502-503503- _enter("{%d,%d}", call->debug_id, atomic_read(&call->usage));504504-505505- trace_rxrpc_call(call, rxrpc_call_release, atomic_read(&call->usage),506506- here, (const void *)call->flags);507507-508508- ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);509509- ASSERTCMP(call->completion, !=, RXRPC_CALL_REMOTELY_ABORTED);510510- ASSERTCMP(call->completion, !=, RXRPC_CALL_LOCALLY_ABORTED);511511- ASSERT(list_empty(&call->recvmsg_link));512512-513513- del_timer_sync(&call->timer);514514-515515- _debug("RELEASE CALL %p (%d CONN %p)", call, call->debug_id, call->conn);516516-517517- if (call->conn)518518- rxrpc_disconnect_call(call);519519-520520- if (rxrpc_is_service_call(call) ||521521- !call->tx_phase ||522522- call->tx_hard_ack != 0 ||523523- call->rx_hard_ack != 0 ||524524- call->rx_top != 0)525525- return -EINVAL;526526-527527- call->state = RXRPC_CALL_UNINITIALISED;528528- call->completion = RXRPC_CALL_SUCCEEDED;529529- call->call_id = 0;530530- call->cid = 0;531531- call->cong_cwnd = 0;532532- call->cong_extra = 0;533533- call->cong_ssthresh = 0;534534- call->cong_mode = 0;535535- call->cong_dup_acks = 0;536536- call->cong_cumul_acks = 0;537537- call->acks_lowest_nak = 0;538538-539539- for (i = 0; i < RXRPC_RXTX_BUFF_SIZE; i++) {540540- last |= call->rxtx_annotations[i];541541- call->rxtx_annotations[i] &= RXRPC_TX_ANNO_LAST;542542- call->rxtx_annotations[i] |= RXRPC_TX_ANNO_RETRANS;543543- }544544-545545- _leave(" = 0");546546- return 0;547534}548535549536/*
···169169170170 ASSERTCMP(seq, ==, call->tx_top + 1);171171172172- if (last) {172172+ if (last)173173 annotation |= RXRPC_TX_ANNO_LAST;174174- set_bit(RXRPC_CALL_TX_LASTQ, &call->flags);175175- }176174177175 /* We have to set the timestamp before queueing as the retransmit178176 * algorithm can see the packet as soon as we queue it.···384386 call->tx_total_len -= copy;385387 }386388389389+ /* check for the far side aborting the call or a network error390390+ * occurring */391391+ if (call->state == RXRPC_CALL_COMPLETE)392392+ goto call_terminated;393393+387394 /* add the packet to the send queue if it's now full */388395 if (sp->remain <= 0 ||389396 (msg_data_left(msg) == 0 && !more)) {···428425 notify_end_tx);429426 skb = NULL;430427 }431431-432432- /* Check for the far side aborting the call or a network error433433- * occurring. If this happens, save any packet that was under434434- * construction so that in the case of a network error, the435435- * call can be retried or redirected.436436- */437437- if (call->state == RXRPC_CALL_COMPLETE) {438438- ret = call->error;439439- goto out;440440- }441428 } while (msg_data_left(msg) > 0);442429443430success:···436443 call->tx_pending = skb;437444 _leave(" = %d", ret);438445 return ret;446446+447447+call_terminated:448448+ rxrpc_free_skb(skb, rxrpc_skb_tx_freed);449449+ _leave(" = %d", call->error);450450+ return call->error;439451440452maybe_error:441453 if (copied)
+11-8
net/sched/act_tunnel_key.c
···197197 [TCA_TUNNEL_KEY_ENC_TTL] = { .type = NLA_U8 },198198};199199200200+static void tunnel_key_release_params(struct tcf_tunnel_key_params *p)201201+{202202+ if (!p)203203+ return;204204+ if (p->tcft_action == TCA_TUNNEL_KEY_ACT_SET)205205+ dst_release(&p->tcft_enc_metadata->dst);206206+ kfree_rcu(p, rcu);207207+}208208+200209static int tunnel_key_init(struct net *net, struct nlattr *nla,201210 struct nlattr *est, struct tc_action **a,202211 int ovr, int bind, bool rtnl_held,···369360 rcu_swap_protected(t->params, params_new,370361 lockdep_is_held(&t->tcf_lock));371362 spin_unlock_bh(&t->tcf_lock);372372- if (params_new)373373- kfree_rcu(params_new, rcu);363363+ tunnel_key_release_params(params_new);374364375365 if (ret == ACT_P_CREATED)376366 tcf_idr_insert(tn, *a);···393385 struct tcf_tunnel_key_params *params;394386395387 params = rcu_dereference_protected(t->params, 1);396396- if (params) {397397- if (params->tcft_action == TCA_TUNNEL_KEY_ACT_SET)398398- dst_release(&params->tcft_enc_metadata->dst);399399-400400- kfree_rcu(params, rcu);401401- }388388+ tunnel_key_release_params(params);402389}403390404391static int tunnel_key_geneve_opts_dump(struct sk_buff *skb,
···17391739 xdr_buf_init(&req->rq_rcv_buf,17401740 req->rq_rbuffer,17411741 req->rq_rcvsize);17421742- req->rq_bytes_sent = 0;1743174217441743 p = rpc_encode_header(task);17451745- if (p == NULL) {17461746- printk(KERN_INFO "RPC: couldn't encode RPC header, exit EIO\n");17471747- rpc_exit(task, -EIO);17441744+ if (p == NULL)17481745 return;17491749- }1750174617511747 encode = task->tk_msg.rpc_proc->p_encode;17521748 if (encode == NULL)···17671771 /* Did the encode result in an error condition? */17681772 if (task->tk_status != 0) {17691773 /* Was the error nonfatal? */17701770- if (task->tk_status == -EAGAIN || task->tk_status == -ENOMEM)17741774+ switch (task->tk_status) {17751775+ case -EAGAIN:17761776+ case -ENOMEM:17711777 rpc_delay(task, HZ >> 4);17721772- else17781778+ break;17791779+ case -EKEYEXPIRED:17801780+ task->tk_action = call_refresh;17811781+ break;17821782+ default:17731783 rpc_exit(task, task->tk_status);17841784+ }17741785 return;17751786 }17761787···23392336 *p++ = htonl(clnt->cl_vers); /* program version */23402337 *p++ = htonl(task->tk_msg.rpc_proc->p_proc); /* procedure */23412338 p = rpcauth_marshcred(task, p);23422342- req->rq_slen = xdr_adjust_iovec(&req->rq_svec[0], p);23392339+ if (p)23402340+ req->rq_slen = xdr_adjust_iovec(&req->rq_svec[0], p);23432341 return p;23442342}23452343
+2-1
net/sunrpc/xprt.c
···11511151 struct rpc_xprt *xprt = req->rq_xprt;1152115211531153 if (xprt_request_need_enqueue_transmit(task, req)) {11541154+ req->rq_bytes_sent = 0;11541155 spin_lock(&xprt->queue_lock);11551156 /*11561157 * Requests that carry congestion control credits are added···11781177 INIT_LIST_HEAD(&req->rq_xmit2);11791178 goto out;11801179 }11811181- } else {11801180+ } else if (!req->rq_seqno) {11821181 list_for_each_entry(pos, &xprt->xmit_queue, rq_xmit) {11831182 if (pos->rq_task->tk_owner != task->tk_owner)11841183 continue;
+4-6
net/sunrpc/xprtrdma/verbs.c
···845845 for (i = 0; i <= buf->rb_sc_last; i++) {846846 sc = rpcrdma_sendctx_create(&r_xprt->rx_ia);847847 if (!sc)848848- goto out_destroy;848848+ return -ENOMEM;849849850850 sc->sc_xprt = r_xprt;851851 buf->rb_sc_ctxs[i] = sc;852852 }853853854854 return 0;855855-856856-out_destroy:857857- rpcrdma_sendctxs_destroy(buf);858858- return -ENOMEM;859855}860856861857/* The sendctx queue is not guaranteed to have a size that is a···11091113 WQ_MEM_RECLAIM | WQ_HIGHPRI,11101114 0,11111115 r_xprt->rx_xprt.address_strings[RPC_DISPLAY_ADDR]);11121112- if (!buf->rb_completion_wq)11161116+ if (!buf->rb_completion_wq) {11171117+ rc = -ENOMEM;11131118 goto out;11191119+ }1114112011151121 return 0;11161122out:
···11+/* SPDX-License-Identifier: GPL-2.0 */22+/* Copyright (c) 2019 Facebook */33+#ifndef __ASM_GOTO_WORKAROUND_H44+#define __ASM_GOTO_WORKAROUND_H55+66+/* this will bring in asm_volatile_goto macro definition77+ * if enabled by compiler and config options.88+ */99+#include <linux/types.h>1010+1111+#ifdef asm_volatile_goto1212+#undef asm_volatile_goto1313+#define asm_volatile_goto(x...) asm volatile("invalid use of asm_volatile_goto")1414+#endif1515+1616+#endif
+7-7
samples/bpf/test_cgrp2_attach2.c
···77777878 /* Create cgroup /foo, get fd, and join it */7979 foo = create_and_get_cgroup(FOO);8080- if (!foo)8080+ if (foo < 0)8181 goto err;82828383 if (join_cgroup(FOO))···94949595 /* Create cgroup /foo/bar, get fd, and join it */9696 bar = create_and_get_cgroup(BAR);9797- if (!bar)9797+ if (bar < 0)9898 goto err;9999100100 if (join_cgroup(BAR))···298298 goto err;299299300300 cg1 = create_and_get_cgroup("/cg1");301301- if (!cg1)301301+ if (cg1 < 0)302302 goto err;303303 cg2 = create_and_get_cgroup("/cg1/cg2");304304- if (!cg2)304304+ if (cg2 < 0)305305 goto err;306306 cg3 = create_and_get_cgroup("/cg1/cg2/cg3");307307- if (!cg3)307307+ if (cg3 < 0)308308 goto err;309309 cg4 = create_and_get_cgroup("/cg1/cg2/cg3/cg4");310310- if (!cg4)310310+ if (cg4 < 0)311311 goto err;312312 cg5 = create_and_get_cgroup("/cg1/cg2/cg3/cg4/cg5");313313- if (!cg5)313313+ if (cg5 < 0)314314 goto err;315315316316 if (join_cgroup("/cg1/cg2/cg3/cg4/cg5"))
+1-1
samples/bpf/test_current_task_under_cgroup_user.c
···32323333 cg2 = create_and_get_cgroup(CGROUP_PATH);34343535- if (!cg2)3535+ if (cg2 < 0)3636 goto err;37373838 if (bpf_map_update_elem(map_fd[0], &idx, &cg2, BPF_ANY)) {
···2424basetarget = $(basename $(notdir $@))25252626###2727-# filename of first prerequisite with directory and extension stripped2828-baseprereq = $(basename $(notdir $<))2929-3030-###3127# Escape single quote for use in echo statements3228escsq = $(subst $(squote),'\$(squote)',$1)3329
···6969- x = (T)vmalloc(E1);7070+ x = (T)vzalloc(E1);7171|7272-- x = dma_alloc_coherent(E2,E1,E3,E4);7373-+ x = dma_zalloc_coherent(E2,E1,E3,E4);7474-|7575-- x = (T *)dma_alloc_coherent(E2,E1,E3,E4);7676-+ x = dma_zalloc_coherent(E2,E1,E3,E4);7777-|7878-- x = (T)dma_alloc_coherent(E2,E1,E3,E4);7979-+ x = (T)dma_zalloc_coherent(E2,E1,E3,E4);8080-|8172- x = kmalloc_node(E1,E2,E3);8273+ x = kzalloc_node(E1,E2,E3);8374|···216225x << r2.x;217226@@218227219219-msg="WARNING: dma_zalloc_coherent should be used for %s, instead of dma_alloc_coherent/memset" % (x)228228+msg="WARNING: dma_alloc_coherent use in %s already zeroes out memory, so memset is not needed" % (x)220229coccilib.report.print_report(p[0], msg)221230222231//-----------------------------------------------------------------
+21-2
scripts/gcc-plugins/arm_ssp_per_task_plugin.c
···1313 for (insn = get_insns(); insn; insn = NEXT_INSN(insn)) {1414 const char *sym;1515 rtx body;1616- rtx masked_sp;1616+ rtx mask, masked_sp;17171818 /*1919 * Find a SET insn involving a SYMBOL_REF to __stack_chk_guard···3333 * produces the address of the copy of the stack canary value3434 * stored in struct thread_info3535 */3636+ mask = GEN_INT(sext_hwi(sp_mask, GET_MODE_PRECISION(Pmode)));3637 masked_sp = gen_reg_rtx(Pmode);37383839 emit_insn_before(gen_rtx_SET(masked_sp,3940 gen_rtx_AND(Pmode,4041 stack_pointer_rtx,4141- GEN_INT(sp_mask))),4242+ mask)),4243 insn);43444445 SET_SRC(body) = gen_rtx_PLUS(Pmode, masked_sp,···52515352#define NO_GATE5453#include "gcc-generate-rtl-pass.h"5454+5555+#if BUILDING_GCC_VERSION >= 90005656+static bool no(void)5757+{5858+ return false;5959+}6060+6161+static void arm_pertask_ssp_start_unit(void *gcc_data, void *user_data)6262+{6363+ targetm.have_stack_protect_combined_set = no;6464+ targetm.have_stack_protect_combined_test = no;6565+}6666+#endif55675668__visible int plugin_init(struct plugin_name_args *plugin_info,5769 struct plugin_gcc_version *version)···1129811399 register_callback(plugin_info->base_name, PLUGIN_PASS_MANAGER_SETUP,114100 NULL, &arm_pertask_ssp_rtl_pass_info);101101+102102+#if BUILDING_GCC_VERSION >= 9000103103+ register_callback(plugin_info->base_name, PLUGIN_START_UNIT,104104+ arm_pertask_ssp_start_unit, NULL);105105+#endif115106116107 return 0;117108}
···1027102710281028void security_cred_free(struct cred *cred)10291029{10301030+ /*10311031+ * There is a failure case in prepare_creds() that10321032+ * may result in a call here with ->security being NULL.10331033+ */10341034+ if (unlikely(cred->security == NULL))10351035+ return;10361036+10301037 call_void_hook(cred_free, cred);10311038}10321039
+2-1
security/selinux/ss/policydb.c
···732732 kfree(key);733733 if (datum) {734734 levdatum = datum;735735- ebitmap_destroy(&levdatum->level->cat);735735+ if (levdatum->level)736736+ ebitmap_destroy(&levdatum->level->cat);736737 kfree(levdatum->level);737738 }738739 kfree(datum);
+3-1
security/yama/yama_lsm.c
···368368 break;369369 case YAMA_SCOPE_RELATIONAL:370370 rcu_read_lock();371371- if (!task_is_descendant(current, child) &&371371+ if (!pid_alive(child))372372+ rc = -EPERM;373373+ if (!rc && !task_is_descendant(current, child) &&372374 !ptracer_exception_found(current, child) &&373375 !ns_capable(__task_cred(child)->user_ns, CAP_SYS_PTRACE))374376 rc = -EPERM;
+2-2
sound/aoa/soundbus/i2sbus/core.c
···4747 /* We use the PCI APIs for now until the generic one gets fixed4848 * enough or until we get some macio-specific versions4949 */5050- r->space = dma_zalloc_coherent(&macio_get_pci_dev(i2sdev->macio)->dev,5151- r->size, &r->bus_addr, GFP_KERNEL);5050+ r->space = dma_alloc_coherent(&macio_get_pci_dev(i2sdev->macio)->dev,5151+ r->size, &r->bus_addr, GFP_KERNEL);5252 if (!r->space)5353 return -ENOMEM;5454
+3
sound/pci/cs46xx/dsp_spos.c
···903903 struct dsp_spos_instance * ins = chip->dsp_spos_instance;904904 int i;905905906906+ if (!ins)907907+ return 0;908908+906909 snd_info_free_entry(ins->proc_sym_info_entry);907910 ins->proc_sym_info_entry = NULL;908911
···11+/*22+ * Copyright (C) 2012 ARM Ltd.33+ * Copyright (C) 2015 Regents of the University of California44+ *55+ * This program is free software; you can redistribute it and/or modify66+ * it under the terms of the GNU General Public License version 2 as77+ * published by the Free Software Foundation.88+ *99+ * This program is distributed in the hope that it will be useful,1010+ * but WITHOUT ANY WARRANTY; without even the implied warranty of1111+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the1212+ * GNU General Public License for more details.1313+ *1414+ * You should have received a copy of the GNU General Public License1515+ * along with this program. If not, see <http://www.gnu.org/licenses/>.1616+ */1717+1818+#ifndef _UAPI_ASM_RISCV_BITSPERLONG_H1919+#define _UAPI_ASM_RISCV_BITSPERLONG_H2020+2121+#define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8)2222+2323+#include <asm-generic/bitsperlong.h>2424+2525+#endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */
···11-// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)11+// SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause)22/*33 * Simple streaming JSON writer44 *55 * This takes care of the annoying bits of JSON syntax like the commas66 * after elements77- *88- * This program is free software; you can redistribute it and/or99- * modify it under the terms of the GNU General Public License1010- * as published by the Free Software Foundation; either version1111- * 2 of the License, or (at your option) any later version.127 *138 * Authors: Stephen Hemminger <stephen@networkplumber.org>149 */
-5
tools/bpf/bpftool/json_writer.h
···55 * This takes care of the annoying bits of JSON syntax like the commas66 * after elements77 *88- * This program is free software; you can redistribute it and/or99- * modify it under the terms of the GNU General Public License1010- * as published by the Free Software Foundation; either version1111- * 2 of the License, or (at your option) any later version.1212- *138 * Authors: Stephen Hemminger <stephen@networkplumber.org>149 */1510
+3-1
tools/include/uapi/asm-generic/unistd.h
···738738__SC_COMP(__NR_io_pgetevents, sys_io_pgetevents, compat_sys_io_pgetevents)739739#define __NR_rseq 293740740__SYSCALL(__NR_rseq, sys_rseq)741741+#define __NR_kexec_file_load 294742742+__SYSCALL(__NR_kexec_file_load, sys_kexec_file_load)741743742744#undef __NR_syscalls743743-#define __NR_syscalls 294745745+#define __NR_syscalls 295744746745747/*746748 * 32 bit systems traditionally used different
···412412 int irq_seq;413413} drm_i915_irq_wait_t;414414415415+/*416416+ * Different modes of per-process Graphics Translation Table,417417+ * see I915_PARAM_HAS_ALIASING_PPGTT418418+ */419419+#define I915_GEM_PPGTT_NONE 0420420+#define I915_GEM_PPGTT_ALIASING 1421421+#define I915_GEM_PPGTT_FULL 2422422+415423/* Ioctl to query kernel params:416424 */417425#define I915_PARAM_IRQ_ACTIVE 1
+8-52
tools/include/uapi/linux/fs.h
···1414#include <linux/ioctl.h>1515#include <linux/types.h>16161717+/* Use of MS_* flags within the kernel is restricted to core mount(2) code. */1818+#if !defined(__KERNEL__)1919+#include <linux/mount.h>2020+#endif2121+1722/*1823 * It's silly to have NR_OPEN bigger than NR_FILE, but you can change1924 * the file limit at runtime and only root can increase the per-process···105100106101107102#define NR_FILE 8192 /* this can well be larger on a larger system */108108-109109-110110-/*111111- * These are the fs-independent mount-flags: up to 32 flags are supported112112- */113113-#define MS_RDONLY 1 /* Mount read-only */114114-#define MS_NOSUID 2 /* Ignore suid and sgid bits */115115-#define MS_NODEV 4 /* Disallow access to device special files */116116-#define MS_NOEXEC 8 /* Disallow program execution */117117-#define MS_SYNCHRONOUS 16 /* Writes are synced at once */118118-#define MS_REMOUNT 32 /* Alter flags of a mounted FS */119119-#define MS_MANDLOCK 64 /* Allow mandatory locks on an FS */120120-#define MS_DIRSYNC 128 /* Directory modifications are synchronous */121121-#define MS_NOATIME 1024 /* Do not update access times. */122122-#define MS_NODIRATIME 2048 /* Do not update directory access times */123123-#define MS_BIND 4096124124-#define MS_MOVE 8192125125-#define MS_REC 16384126126-#define MS_VERBOSE 32768 /* War is peace. Verbosity is silence.127127- MS_VERBOSE is deprecated. */128128-#define MS_SILENT 32768129129-#define MS_POSIXACL (1<<16) /* VFS does not apply the umask */130130-#define MS_UNBINDABLE (1<<17) /* change to unbindable */131131-#define MS_PRIVATE (1<<18) /* change to private */132132-#define MS_SLAVE (1<<19) /* change to slave */133133-#define MS_SHARED (1<<20) /* change to shared */134134-#define MS_RELATIME (1<<21) /* Update atime relative to mtime/ctime. 
*/135135-#define MS_KERNMOUNT (1<<22) /* this is a kern_mount call */136136-#define MS_I_VERSION (1<<23) /* Update inode I_version field */137137-#define MS_STRICTATIME (1<<24) /* Always perform atime updates */138138-#define MS_LAZYTIME (1<<25) /* Update the on-disk [acm]times lazily */139139-140140-/* These sb flags are internal to the kernel */141141-#define MS_SUBMOUNT (1<<26)142142-#define MS_NOREMOTELOCK (1<<27)143143-#define MS_NOSEC (1<<28)144144-#define MS_BORN (1<<29)145145-#define MS_ACTIVE (1<<30)146146-#define MS_NOUSER (1<<31)147147-148148-/*149149- * Superblock flags that can be altered by MS_REMOUNT150150- */151151-#define MS_RMT_MASK (MS_RDONLY|MS_SYNCHRONOUS|MS_MANDLOCK|MS_I_VERSION|\152152- MS_LAZYTIME)153153-154154-/*155155- * Old magic mount flag and mask156156- */157157-#define MS_MGC_VAL 0xC0ED0000158158-#define MS_MGC_MSK 0xffff0000159103160104/*161105 * Structure for FS_IOC_FSGETXATTR[A] and FS_IOC_FSSETXATTR.···223269#define FS_POLICY_FLAGS_PAD_16 0x02224270#define FS_POLICY_FLAGS_PAD_32 0x03225271#define FS_POLICY_FLAGS_PAD_MASK 0x03226226-#define FS_POLICY_FLAGS_VALID 0x03272272+#define FS_POLICY_FLAG_DIRECT_KEY 0x04 /* use master key directly */273273+#define FS_POLICY_FLAGS_VALID 0x07227274228275/* Encryption algorithms */229276#define FS_ENCRYPTION_MODE_INVALID 0···236281#define FS_ENCRYPTION_MODE_AES_128_CTS 6237282#define FS_ENCRYPTION_MODE_SPECK128_256_XTS 7 /* Removed, do not use. */238283#define FS_ENCRYPTION_MODE_SPECK128_256_CTS 8 /* Removed, do not use. */284284+#define FS_ENCRYPTION_MODE_ADIANTUM 9239285240286struct fscrypt_policy {241287 __u8 version;
···492492 };493493};494494495495+/* for KVM_CLEAR_DIRTY_LOG */496496+struct kvm_clear_dirty_log {497497+ __u32 slot;498498+ __u32 num_pages;499499+ __u64 first_page;500500+ union {501501+ void __user *dirty_bitmap; /* one bit per page */502502+ __u64 padding2;503503+ };504504+};505505+495506/* for KVM_SET_SIGNAL_MASK */496507struct kvm_signal_mask {497508 __u32 len;···986975#define KVM_CAP_HYPERV_ENLIGHTENED_VMCS 163987976#define KVM_CAP_EXCEPTION_PAYLOAD 164988977#define KVM_CAP_ARM_VM_IPA_SIZE 165978978+#define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166979979+#define KVM_CAP_HYPERV_CPUID 167989980990981#ifdef KVM_CAP_IRQ_ROUTING991982···14331420/* Available with KVM_CAP_NESTED_STATE */14341421#define KVM_GET_NESTED_STATE _IOWR(KVMIO, 0xbe, struct kvm_nested_state)14351422#define KVM_SET_NESTED_STATE _IOW(KVMIO, 0xbf, struct kvm_nested_state)14231423+14241424+/* Available with KVM_CAP_MANUAL_DIRTY_LOG_PROTECT */14251425+#define KVM_CLEAR_DIRTY_LOG _IOWR(KVMIO, 0xc0, struct kvm_clear_dirty_log)14261426+14271427+/* Available with KVM_CAP_HYPERV_CPUID */14281428+#define KVM_GET_SUPPORTED_HV_CPUID _IOWR(KVMIO, 0xc1, struct kvm_cpuid2)1436142914371430/* Secure Encrypted Virtualization command */14381431enum sev_cmd_id {
+58
tools/include/uapi/linux/mount.h
···11+#ifndef _UAPI_LINUX_MOUNT_H22+#define _UAPI_LINUX_MOUNT_H33+44+/*55+ * These are the fs-independent mount-flags: up to 32 flags are supported66+ *77+ * Usage of these is restricted within the kernel to core mount(2) code and88+ * callers of sys_mount() only. Filesystems should be using the SB_*99+ * equivalent instead.1010+ */1111+#define MS_RDONLY 1 /* Mount read-only */1212+#define MS_NOSUID 2 /* Ignore suid and sgid bits */1313+#define MS_NODEV 4 /* Disallow access to device special files */1414+#define MS_NOEXEC 8 /* Disallow program execution */1515+#define MS_SYNCHRONOUS 16 /* Writes are synced at once */1616+#define MS_REMOUNT 32 /* Alter flags of a mounted FS */1717+#define MS_MANDLOCK 64 /* Allow mandatory locks on an FS */1818+#define MS_DIRSYNC 128 /* Directory modifications are synchronous */1919+#define MS_NOATIME 1024 /* Do not update access times. */2020+#define MS_NODIRATIME 2048 /* Do not update directory access times */2121+#define MS_BIND 40962222+#define MS_MOVE 81922323+#define MS_REC 163842424+#define MS_VERBOSE 32768 /* War is peace. Verbosity is silence.2525+ MS_VERBOSE is deprecated. */2626+#define MS_SILENT 327682727+#define MS_POSIXACL (1<<16) /* VFS does not apply the umask */2828+#define MS_UNBINDABLE (1<<17) /* change to unbindable */2929+#define MS_PRIVATE (1<<18) /* change to private */3030+#define MS_SLAVE (1<<19) /* change to slave */3131+#define MS_SHARED (1<<20) /* change to shared */3232+#define MS_RELATIME (1<<21) /* Update atime relative to mtime/ctime. 
*/3333+#define MS_KERNMOUNT (1<<22) /* this is a kern_mount call */3434+#define MS_I_VERSION (1<<23) /* Update inode I_version field */3535+#define MS_STRICTATIME (1<<24) /* Always perform atime updates */3636+#define MS_LAZYTIME (1<<25) /* Update the on-disk [acm]times lazily */3737+3838+/* These sb flags are internal to the kernel */3939+#define MS_SUBMOUNT (1<<26)4040+#define MS_NOREMOTELOCK (1<<27)4141+#define MS_NOSEC (1<<28)4242+#define MS_BORN (1<<29)4343+#define MS_ACTIVE (1<<30)4444+#define MS_NOUSER (1<<31)4545+4646+/*4747+ * Superblock flags that can be altered by MS_REMOUNT4848+ */4949+#define MS_RMT_MASK (MS_RDONLY|MS_SYNCHRONOUS|MS_MANDLOCK|MS_I_VERSION|\5050+ MS_LAZYTIME)5151+5252+/*5353+ * Old magic mount flag and mask5454+ */5555+#define MS_MGC_VAL 0xC0ED00005656+#define MS_MGC_MSK 0xffff00005757+5858+#endif /* _UAPI_LINUX_MOUNT_H */
+1163
tools/include/uapi/linux/pkt_sched.h
···11+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */22+#ifndef __LINUX_PKT_SCHED_H33+#define __LINUX_PKT_SCHED_H44+55+#include <linux/types.h>66+77+/* Logical priority bands not depending on specific packet scheduler.88+ Every scheduler will map them to real traffic classes, if it has99+ no more precise mechanism to classify packets.1010+1111+ These numbers have no special meaning, though their coincidence1212+ with obsolete IPv6 values is not occasional :-). New IPv6 drafts1313+ preferred full anarchy inspired by diffserv group.1414+1515+ Note: TC_PRIO_BESTEFFORT does not mean that it is the most unhappy1616+ class, actually, as rule it will be handled with more care than1717+ filler or even bulk.1818+ */1919+2020+#define TC_PRIO_BESTEFFORT 02121+#define TC_PRIO_FILLER 12222+#define TC_PRIO_BULK 22323+#define TC_PRIO_INTERACTIVE_BULK 42424+#define TC_PRIO_INTERACTIVE 62525+#define TC_PRIO_CONTROL 72626+2727+#define TC_PRIO_MAX 152828+2929+/* Generic queue statistics, available for all the elements.3030+ Particular schedulers may have also their private records.3131+ */3232+3333+struct tc_stats {3434+ __u64 bytes; /* Number of enqueued bytes */3535+ __u32 packets; /* Number of enqueued packets */3636+ __u32 drops; /* Packets dropped because of lack of resources */3737+ __u32 overlimits; /* Number of throttle events when this3838+ * flow goes out of allocated bandwidth */3939+ __u32 bps; /* Current flow byte rate */4040+ __u32 pps; /* Current flow packet rate */4141+ __u32 qlen;4242+ __u32 backlog;4343+};4444+4545+struct tc_estimator {4646+ signed char interval;4747+ unsigned char ewma_log;4848+};4949+5050+/* "Handles"5151+ ---------5252+5353+ All the traffic control objects have 32bit identifiers, or "handles".5454+5555+ They can be considered as opaque numbers from user API viewpoint,5656+ but actually they always consist of two fields: major and5757+ minor numbers, which are interpreted by kernel specially,5858+ that may be used by applications, 
though not recommended.5959+6060+ F.e. qdisc handles always have minor number equal to zero,6161+ classes (or flows) have major equal to parent qdisc major, and6262+ minor uniquely identifying class inside qdisc.6363+6464+ Macros to manipulate handles:6565+ */6666+6767+#define TC_H_MAJ_MASK (0xFFFF0000U)6868+#define TC_H_MIN_MASK (0x0000FFFFU)6969+#define TC_H_MAJ(h) ((h)&TC_H_MAJ_MASK)7070+#define TC_H_MIN(h) ((h)&TC_H_MIN_MASK)7171+#define TC_H_MAKE(maj,min) (((maj)&TC_H_MAJ_MASK)|((min)&TC_H_MIN_MASK))7272+7373+#define TC_H_UNSPEC (0U)7474+#define TC_H_ROOT (0xFFFFFFFFU)7575+#define TC_H_INGRESS (0xFFFFFFF1U)7676+#define TC_H_CLSACT TC_H_INGRESS7777+7878+#define TC_H_MIN_PRIORITY 0xFFE0U7979+#define TC_H_MIN_INGRESS 0xFFF2U8080+#define TC_H_MIN_EGRESS 0xFFF3U8181+8282+/* Need to corrospond to iproute2 tc/tc_core.h "enum link_layer" */8383+enum tc_link_layer {8484+ TC_LINKLAYER_UNAWARE, /* Indicate unaware old iproute2 util */8585+ TC_LINKLAYER_ETHERNET,8686+ TC_LINKLAYER_ATM,8787+};8888+#define TC_LINKLAYER_MASK 0x0F /* limit use to lower 4 bits */8989+9090+struct tc_ratespec {9191+ unsigned char cell_log;9292+ __u8 linklayer; /* lower 4 bits */9393+ unsigned short overhead;9494+ short cell_align;9595+ unsigned short mpu;9696+ __u32 rate;9797+};9898+9999+#define TC_RTAB_SIZE 1024100100+101101+struct tc_sizespec {102102+ unsigned char cell_log;103103+ unsigned char size_log;104104+ short cell_align;105105+ int overhead;106106+ unsigned int linklayer;107107+ unsigned int mpu;108108+ unsigned int mtu;109109+ unsigned int tsize;110110+};111111+112112+enum {113113+ TCA_STAB_UNSPEC,114114+ TCA_STAB_BASE,115115+ TCA_STAB_DATA,116116+ __TCA_STAB_MAX117117+};118118+119119+#define TCA_STAB_MAX (__TCA_STAB_MAX - 1)120120+121121+/* FIFO section */122122+123123+struct tc_fifo_qopt {124124+ __u32 limit; /* Queue length: bytes for bfifo, packets for pfifo */125125+};126126+127127+/* SKBPRIO section */128128+129129+/*130130+ * Priorities go from zero to (SKBPRIO_MAX_PRIORITY - 
1).131131+ * SKBPRIO_MAX_PRIORITY should be at least 64 in order for skbprio to be able132132+ * to map one to one the DS field of IPV4 and IPV6 headers.133133+ * Memory allocation grows linearly with SKBPRIO_MAX_PRIORITY.134134+ */135135+136136+#define SKBPRIO_MAX_PRIORITY 64137137+138138+struct tc_skbprio_qopt {139139+ __u32 limit; /* Queue length in packets. */140140+};141141+142142+/* PRIO section */143143+144144+#define TCQ_PRIO_BANDS 16145145+#define TCQ_MIN_PRIO_BANDS 2146146+147147+struct tc_prio_qopt {148148+ int bands; /* Number of bands */149149+ __u8 priomap[TC_PRIO_MAX+1]; /* Map: logical priority -> PRIO band */150150+};151151+152152+/* MULTIQ section */153153+154154+struct tc_multiq_qopt {155155+ __u16 bands; /* Number of bands */156156+ __u16 max_bands; /* Maximum number of queues */157157+};158158+159159+/* PLUG section */160160+161161+#define TCQ_PLUG_BUFFER 0162162+#define TCQ_PLUG_RELEASE_ONE 1163163+#define TCQ_PLUG_RELEASE_INDEFINITE 2164164+#define TCQ_PLUG_LIMIT 3165165+166166+struct tc_plug_qopt {167167+ /* TCQ_PLUG_BUFFER: Inset a plug into the queue and168168+ * buffer any incoming packets169169+ * TCQ_PLUG_RELEASE_ONE: Dequeue packets from queue head170170+ * to beginning of the next plug.171171+ * TCQ_PLUG_RELEASE_INDEFINITE: Dequeue all packets from queue.172172+ * Stop buffering packets until the next TCQ_PLUG_BUFFER173173+ * command is received (just act as a pass-thru queue).174174+ * TCQ_PLUG_LIMIT: Increase/decrease queue size175175+ */176176+ int action;177177+ __u32 limit;178178+};179179+180180+/* TBF section */181181+182182+struct tc_tbf_qopt {183183+ struct tc_ratespec rate;184184+ struct tc_ratespec peakrate;185185+ __u32 limit;186186+ __u32 buffer;187187+ __u32 mtu;188188+};189189+190190+enum {191191+ TCA_TBF_UNSPEC,192192+ TCA_TBF_PARMS,193193+ TCA_TBF_RTAB,194194+ TCA_TBF_PTAB,195195+ TCA_TBF_RATE64,196196+ TCA_TBF_PRATE64,197197+ TCA_TBF_BURST,198198+ TCA_TBF_PBURST,199199+ TCA_TBF_PAD,200200+ 
__TCA_TBF_MAX,201201+};202202+203203+#define TCA_TBF_MAX (__TCA_TBF_MAX - 1)204204+205205+206206+/* TEQL section */207207+208208+/* TEQL does not require any parameters */209209+210210+/* SFQ section */211211+212212+struct tc_sfq_qopt {213213+ unsigned quantum; /* Bytes per round allocated to flow */214214+ int perturb_period; /* Period of hash perturbation */215215+ __u32 limit; /* Maximal packets in queue */216216+ unsigned divisor; /* Hash divisor */217217+ unsigned flows; /* Maximal number of flows */218218+};219219+220220+struct tc_sfqred_stats {221221+ __u32 prob_drop; /* Early drops, below max threshold */222222+ __u32 forced_drop; /* Early drops, after max threshold */223223+ __u32 prob_mark; /* Marked packets, below max threshold */224224+ __u32 forced_mark; /* Marked packets, after max threshold */225225+ __u32 prob_mark_head; /* Marked packets, below max threshold */226226+ __u32 forced_mark_head;/* Marked packets, after max threshold */227227+};228228+229229+struct tc_sfq_qopt_v1 {230230+ struct tc_sfq_qopt v0;231231+ unsigned int depth; /* max number of packets per flow */232232+ unsigned int headdrop;233233+/* SFQRED parameters */234234+ __u32 limit; /* HARD maximal flow queue length (bytes) */235235+ __u32 qth_min; /* Min average length threshold (bytes) */236236+ __u32 qth_max; /* Max average length threshold (bytes) */237237+ unsigned char Wlog; /* log(W) */238238+ unsigned char Plog; /* log(P_max/(qth_max-qth_min)) */239239+ unsigned char Scell_log; /* cell size for idle damping */240240+ unsigned char flags;241241+ __u32 max_P; /* probability, high resolution */242242+/* SFQRED stats */243243+ struct tc_sfqred_stats stats;244244+};245245+246246+247247+struct tc_sfq_xstats {248248+ __s32 allot;249249+};250250+251251+/* RED section */252252+253253+enum {254254+ TCA_RED_UNSPEC,255255+ TCA_RED_PARMS,256256+ TCA_RED_STAB,257257+ TCA_RED_MAX_P,258258+ __TCA_RED_MAX,259259+};260260+261261+#define TCA_RED_MAX (__TCA_RED_MAX - 1)262262+263263+struct 
tc_red_qopt {264264+ __u32 limit; /* HARD maximal queue length (bytes) */265265+ __u32 qth_min; /* Min average length threshold (bytes) */266266+ __u32 qth_max; /* Max average length threshold (bytes) */267267+ unsigned char Wlog; /* log(W) */268268+ unsigned char Plog; /* log(P_max/(qth_max-qth_min)) */269269+ unsigned char Scell_log; /* cell size for idle damping */270270+ unsigned char flags;271271+#define TC_RED_ECN 1272272+#define TC_RED_HARDDROP 2273273+#define TC_RED_ADAPTATIVE 4274274+};275275+276276+struct tc_red_xstats {277277+ __u32 early; /* Early drops */278278+ __u32 pdrop; /* Drops due to queue limits */279279+ __u32 other; /* Drops due to drop() calls */280280+ __u32 marked; /* Marked packets */281281+};282282+283283+/* GRED section */284284+285285+#define MAX_DPs 16286286+287287+enum {288288+ TCA_GRED_UNSPEC,289289+ TCA_GRED_PARMS,290290+ TCA_GRED_STAB,291291+ TCA_GRED_DPS,292292+ TCA_GRED_MAX_P,293293+ TCA_GRED_LIMIT,294294+ TCA_GRED_VQ_LIST, /* nested TCA_GRED_VQ_ENTRY */295295+ __TCA_GRED_MAX,296296+};297297+298298+#define TCA_GRED_MAX (__TCA_GRED_MAX - 1)299299+300300+enum {301301+ TCA_GRED_VQ_ENTRY_UNSPEC,302302+ TCA_GRED_VQ_ENTRY, /* nested TCA_GRED_VQ_* */303303+ __TCA_GRED_VQ_ENTRY_MAX,304304+};305305+#define TCA_GRED_VQ_ENTRY_MAX (__TCA_GRED_VQ_ENTRY_MAX - 1)306306+307307+enum {308308+ TCA_GRED_VQ_UNSPEC,309309+ TCA_GRED_VQ_PAD,310310+ TCA_GRED_VQ_DP, /* u32 */311311+ TCA_GRED_VQ_STAT_BYTES, /* u64 */312312+ TCA_GRED_VQ_STAT_PACKETS, /* u32 */313313+ TCA_GRED_VQ_STAT_BACKLOG, /* u32 */314314+ TCA_GRED_VQ_STAT_PROB_DROP, /* u32 */315315+ TCA_GRED_VQ_STAT_PROB_MARK, /* u32 */316316+ TCA_GRED_VQ_STAT_FORCED_DROP, /* u32 */317317+ TCA_GRED_VQ_STAT_FORCED_MARK, /* u32 */318318+ TCA_GRED_VQ_STAT_PDROP, /* u32 */319319+ TCA_GRED_VQ_STAT_OTHER, /* u32 */320320+ TCA_GRED_VQ_FLAGS, /* u32 */321321+ __TCA_GRED_VQ_MAX322322+};323323+324324+#define TCA_GRED_VQ_MAX (__TCA_GRED_VQ_MAX - 1)325325+326326+struct tc_gred_qopt {327327+ __u32 limit; /* HARD 
maximal queue length (bytes) */328328+ __u32 qth_min; /* Min average length threshold (bytes) */329329+ __u32 qth_max; /* Max average length threshold (bytes) */330330+ __u32 DP; /* up to 2^32 DPs */331331+ __u32 backlog;332332+ __u32 qave;333333+ __u32 forced;334334+ __u32 early;335335+ __u32 other;336336+ __u32 pdrop;337337+ __u8 Wlog; /* log(W) */338338+ __u8 Plog; /* log(P_max/(qth_max-qth_min)) */339339+ __u8 Scell_log; /* cell size for idle damping */340340+ __u8 prio; /* prio of this VQ */341341+ __u32 packets;342342+ __u32 bytesin;343343+};344344+345345+/* gred setup */346346+struct tc_gred_sopt {347347+ __u32 DPs;348348+ __u32 def_DP;349349+ __u8 grio;350350+ __u8 flags;351351+ __u16 pad1;352352+};353353+354354+/* CHOKe section */355355+356356+enum {357357+ TCA_CHOKE_UNSPEC,358358+ TCA_CHOKE_PARMS,359359+ TCA_CHOKE_STAB,360360+ TCA_CHOKE_MAX_P,361361+ __TCA_CHOKE_MAX,362362+};363363+364364+#define TCA_CHOKE_MAX (__TCA_CHOKE_MAX - 1)365365+366366+struct tc_choke_qopt {367367+ __u32 limit; /* Hard queue length (packets) */368368+ __u32 qth_min; /* Min average threshold (packets) */369369+ __u32 qth_max; /* Max average threshold (packets) */370370+ unsigned char Wlog; /* log(W) */371371+ unsigned char Plog; /* log(P_max/(qth_max-qth_min)) */372372+ unsigned char Scell_log; /* cell size for idle damping */373373+ unsigned char flags; /* see RED flags */374374+};375375+376376+struct tc_choke_xstats {377377+ __u32 early; /* Early drops */378378+ __u32 pdrop; /* Drops due to queue limits */379379+ __u32 other; /* Drops due to drop() calls */380380+ __u32 marked; /* Marked packets */381381+ __u32 matched; /* Drops due to flow match */382382+};383383+384384+/* HTB section */385385+#define TC_HTB_NUMPRIO 8386386+#define TC_HTB_MAXDEPTH 8387387+#define TC_HTB_PROTOVER 3 /* the same as HTB and TC's major */388388+389389+struct tc_htb_opt {390390+ struct tc_ratespec rate;391391+ struct tc_ratespec ceil;392392+ __u32 buffer;393393+ __u32 cbuffer;394394+ __u32 
quantum;395395+ __u32 level; /* out only */396396+ __u32 prio;397397+};398398+struct tc_htb_glob {399399+ __u32 version; /* to match HTB/TC */400400+ __u32 rate2quantum; /* bps->quantum divisor */401401+ __u32 defcls; /* default class number */402402+ __u32 debug; /* debug flags */403403+404404+ /* stats */405405+ __u32 direct_pkts; /* count of non shaped packets */406406+};407407+enum {408408+ TCA_HTB_UNSPEC,409409+ TCA_HTB_PARMS,410410+ TCA_HTB_INIT,411411+ TCA_HTB_CTAB,412412+ TCA_HTB_RTAB,413413+ TCA_HTB_DIRECT_QLEN,414414+ TCA_HTB_RATE64,415415+ TCA_HTB_CEIL64,416416+ TCA_HTB_PAD,417417+ __TCA_HTB_MAX,418418+};419419+420420+#define TCA_HTB_MAX (__TCA_HTB_MAX - 1)421421+422422+struct tc_htb_xstats {423423+ __u32 lends;424424+ __u32 borrows;425425+ __u32 giants; /* unused since 'Make HTB scheduler work with TSO.' */426426+ __s32 tokens;427427+ __s32 ctokens;428428+};429429+430430+/* HFSC section */431431+432432+struct tc_hfsc_qopt {433433+ __u16 defcls; /* default class */434434+};435435+436436+struct tc_service_curve {437437+ __u32 m1; /* slope of the first segment in bps */438438+ __u32 d; /* x-projection of the first segment in us */439439+ __u32 m2; /* slope of the second segment in bps */440440+};441441+442442+struct tc_hfsc_stats {443443+ __u64 work; /* total work done */444444+ __u64 rtwork; /* work done by real-time criteria */445445+ __u32 period; /* current period */446446+ __u32 level; /* class level in hierarchy */447447+};448448+449449+enum {450450+ TCA_HFSC_UNSPEC,451451+ TCA_HFSC_RSC,452452+ TCA_HFSC_FSC,453453+ TCA_HFSC_USC,454454+ __TCA_HFSC_MAX,455455+};456456+457457+#define TCA_HFSC_MAX (__TCA_HFSC_MAX - 1)458458+459459+460460+/* CBQ section */461461+462462+#define TC_CBQ_MAXPRIO 8463463+#define TC_CBQ_MAXLEVEL 8464464+#define TC_CBQ_DEF_EWMA 5465465+466466+struct tc_cbq_lssopt {467467+ unsigned char change;468468+ unsigned char flags;469469+#define TCF_CBQ_LSS_BOUNDED 1470470+#define TCF_CBQ_LSS_ISOLATED 2471471+ unsigned char 
ewma_log;472472+ unsigned char level;473473+#define TCF_CBQ_LSS_FLAGS 1474474+#define TCF_CBQ_LSS_EWMA 2475475+#define TCF_CBQ_LSS_MAXIDLE 4476476+#define TCF_CBQ_LSS_MINIDLE 8477477+#define TCF_CBQ_LSS_OFFTIME 0x10478478+#define TCF_CBQ_LSS_AVPKT 0x20479479+ __u32 maxidle;480480+ __u32 minidle;481481+ __u32 offtime;482482+ __u32 avpkt;483483+};484484+485485+struct tc_cbq_wrropt {486486+ unsigned char flags;487487+ unsigned char priority;488488+ unsigned char cpriority;489489+ unsigned char __reserved;490490+ __u32 allot;491491+ __u32 weight;492492+};493493+494494+struct tc_cbq_ovl {495495+ unsigned char strategy;496496+#define TC_CBQ_OVL_CLASSIC 0497497+#define TC_CBQ_OVL_DELAY 1498498+#define TC_CBQ_OVL_LOWPRIO 2499499+#define TC_CBQ_OVL_DROP 3500500+#define TC_CBQ_OVL_RCLASSIC 4501501+ unsigned char priority2;502502+ __u16 pad;503503+ __u32 penalty;504504+};505505+506506+struct tc_cbq_police {507507+ unsigned char police;508508+ unsigned char __res1;509509+ unsigned short __res2;510510+};511511+512512+struct tc_cbq_fopt {513513+ __u32 split;514514+ __u32 defmap;515515+ __u32 defchange;516516+};517517+518518+struct tc_cbq_xstats {519519+ __u32 borrows;520520+ __u32 overactions;521521+ __s32 avgidle;522522+ __s32 undertime;523523+};524524+525525+enum {526526+ TCA_CBQ_UNSPEC,527527+ TCA_CBQ_LSSOPT,528528+ TCA_CBQ_WRROPT,529529+ TCA_CBQ_FOPT,530530+ TCA_CBQ_OVL_STRATEGY,531531+ TCA_CBQ_RATE,532532+ TCA_CBQ_RTAB,533533+ TCA_CBQ_POLICE,534534+ __TCA_CBQ_MAX,535535+};536536+537537+#define TCA_CBQ_MAX (__TCA_CBQ_MAX - 1)538538+539539+/* dsmark section */540540+541541+enum {542542+ TCA_DSMARK_UNSPEC,543543+ TCA_DSMARK_INDICES,544544+ TCA_DSMARK_DEFAULT_INDEX,545545+ TCA_DSMARK_SET_TC_INDEX,546546+ TCA_DSMARK_MASK,547547+ TCA_DSMARK_VALUE,548548+ __TCA_DSMARK_MAX,549549+};550550+551551+#define TCA_DSMARK_MAX (__TCA_DSMARK_MAX - 1)552552+553553+/* ATM section */554554+555555+enum {556556+ TCA_ATM_UNSPEC,557557+ TCA_ATM_FD, /* file/socket descriptor */558558+ TCA_ATM_PTR, 
/* pointer to descriptor - later */559559+ TCA_ATM_HDR, /* LL header */560560+ TCA_ATM_EXCESS, /* excess traffic class (0 for CLP) */561561+ TCA_ATM_ADDR, /* PVC address (for output only) */562562+ TCA_ATM_STATE, /* VC state (ATM_VS_*; for output only) */563563+ __TCA_ATM_MAX,564564+};565565+566566+#define TCA_ATM_MAX (__TCA_ATM_MAX - 1)567567+568568+/* Network emulator */569569+570570+enum {571571+ TCA_NETEM_UNSPEC,572572+ TCA_NETEM_CORR,573573+ TCA_NETEM_DELAY_DIST,574574+ TCA_NETEM_REORDER,575575+ TCA_NETEM_CORRUPT,576576+ TCA_NETEM_LOSS,577577+ TCA_NETEM_RATE,578578+ TCA_NETEM_ECN,579579+ TCA_NETEM_RATE64,580580+ TCA_NETEM_PAD,581581+ TCA_NETEM_LATENCY64,582582+ TCA_NETEM_JITTER64,583583+ TCA_NETEM_SLOT,584584+ TCA_NETEM_SLOT_DIST,585585+ __TCA_NETEM_MAX,586586+};587587+588588+#define TCA_NETEM_MAX (__TCA_NETEM_MAX - 1)589589+590590+struct tc_netem_qopt {591591+ __u32 latency; /* added delay (us) */592592+ __u32 limit; /* fifo limit (packets) */593593+ __u32 loss; /* random packet loss (0=none ~0=100%) */594594+ __u32 gap; /* re-ordering gap (0 for none) */595595+ __u32 duplicate; /* random packet dup (0=none ~0=100%) */596596+ __u32 jitter; /* random jitter in latency (us) */597597+};598598+599599+struct tc_netem_corr {600600+ __u32 delay_corr; /* delay correlation */601601+ __u32 loss_corr; /* packet loss correlation */602602+ __u32 dup_corr; /* duplicate correlation */603603+};604604+605605+struct tc_netem_reorder {606606+ __u32 probability;607607+ __u32 correlation;608608+};609609+610610+struct tc_netem_corrupt {611611+ __u32 probability;612612+ __u32 correlation;613613+};614614+615615+struct tc_netem_rate {616616+ __u32 rate; /* byte/s */617617+ __s32 packet_overhead;618618+ __u32 cell_size;619619+ __s32 cell_overhead;620620+};621621+622622+struct tc_netem_slot {623623+ __s64 min_delay; /* nsec */624624+ __s64 max_delay;625625+ __s32 max_packets;626626+ __s32 max_bytes;627627+ __s64 dist_delay; /* nsec */628628+ __s64 dist_jitter; /* nsec 
*/629629+};630630+631631+enum {632632+ NETEM_LOSS_UNSPEC,633633+ NETEM_LOSS_GI, /* General Intuitive - 4 state model */634634+ NETEM_LOSS_GE, /* Gilbert Elliot models */635635+ __NETEM_LOSS_MAX636636+};637637+#define NETEM_LOSS_MAX (__NETEM_LOSS_MAX - 1)638638+639639+/* State transition probabilities for 4 state model */640640+struct tc_netem_gimodel {641641+ __u32 p13;642642+ __u32 p31;643643+ __u32 p32;644644+ __u32 p14;645645+ __u32 p23;646646+};647647+648648+/* Gilbert-Elliot models */649649+struct tc_netem_gemodel {650650+ __u32 p;651651+ __u32 r;652652+ __u32 h;653653+ __u32 k1;654654+};655655+656656+#define NETEM_DIST_SCALE 8192657657+#define NETEM_DIST_MAX 16384658658+659659+/* DRR */660660+661661+enum {662662+ TCA_DRR_UNSPEC,663663+ TCA_DRR_QUANTUM,664664+ __TCA_DRR_MAX665665+};666666+667667+#define TCA_DRR_MAX (__TCA_DRR_MAX - 1)668668+669669+struct tc_drr_stats {670670+ __u32 deficit;671671+};672672+673673+/* MQPRIO */674674+#define TC_QOPT_BITMASK 15675675+#define TC_QOPT_MAX_QUEUE 16676676+677677+enum {678678+ TC_MQPRIO_HW_OFFLOAD_NONE, /* no offload requested */679679+ TC_MQPRIO_HW_OFFLOAD_TCS, /* offload TCs, no queue counts */680680+ __TC_MQPRIO_HW_OFFLOAD_MAX681681+};682682+683683+#define TC_MQPRIO_HW_OFFLOAD_MAX (__TC_MQPRIO_HW_OFFLOAD_MAX - 1)684684+685685+enum {686686+ TC_MQPRIO_MODE_DCB,687687+ TC_MQPRIO_MODE_CHANNEL,688688+ __TC_MQPRIO_MODE_MAX689689+};690690+691691+#define __TC_MQPRIO_MODE_MAX (__TC_MQPRIO_MODE_MAX - 1)692692+693693+enum {694694+ TC_MQPRIO_SHAPER_DCB,695695+ TC_MQPRIO_SHAPER_BW_RATE, /* Add new shapers below */696696+ __TC_MQPRIO_SHAPER_MAX697697+};698698+699699+#define __TC_MQPRIO_SHAPER_MAX (__TC_MQPRIO_SHAPER_MAX - 1)700700+701701+struct tc_mqprio_qopt {702702+ __u8 num_tc;703703+ __u8 prio_tc_map[TC_QOPT_BITMASK + 1];704704+ __u8 hw;705705+ __u16 count[TC_QOPT_MAX_QUEUE];706706+ __u16 offset[TC_QOPT_MAX_QUEUE];707707+};708708+709709+#define TC_MQPRIO_F_MODE 0x1710710+#define TC_MQPRIO_F_SHAPER 0x2711711+#define 
TC_MQPRIO_F_MIN_RATE 0x4712712+#define TC_MQPRIO_F_MAX_RATE 0x8713713+714714+enum {715715+ TCA_MQPRIO_UNSPEC,716716+ TCA_MQPRIO_MODE,717717+ TCA_MQPRIO_SHAPER,718718+ TCA_MQPRIO_MIN_RATE64,719719+ TCA_MQPRIO_MAX_RATE64,720720+ __TCA_MQPRIO_MAX,721721+};722722+723723+#define TCA_MQPRIO_MAX (__TCA_MQPRIO_MAX - 1)724724+725725+/* SFB */726726+727727+enum {728728+ TCA_SFB_UNSPEC,729729+ TCA_SFB_PARMS,730730+ __TCA_SFB_MAX,731731+};732732+733733+#define TCA_SFB_MAX (__TCA_SFB_MAX - 1)734734+735735+/*736736+ * Note: increment, decrement are Q0.16 fixed-point values.737737+ */738738+struct tc_sfb_qopt {739739+ __u32 rehash_interval; /* delay between hash move, in ms */740740+ __u32 warmup_time; /* double buffering warmup time in ms (warmup_time < rehash_interval) */741741+ __u32 max; /* max len of qlen_min */742742+ __u32 bin_size; /* maximum queue length per bin */743743+ __u32 increment; /* probability increment, (d1 in Blue) */744744+ __u32 decrement; /* probability decrement, (d2 in Blue) */745745+ __u32 limit; /* max SFB queue length */746746+ __u32 penalty_rate; /* inelastic flows are rate limited to 'rate' pps */747747+ __u32 penalty_burst;748748+};749749+750750+struct tc_sfb_xstats {751751+ __u32 earlydrop;752752+ __u32 penaltydrop;753753+ __u32 bucketdrop;754754+ __u32 queuedrop;755755+ __u32 childdrop; /* drops in child qdisc */756756+ __u32 marked;757757+ __u32 maxqlen;758758+ __u32 maxprob;759759+ __u32 avgprob;760760+};761761+762762+#define SFB_MAX_PROB 0xFFFF763763+764764+/* QFQ */765765+enum {766766+ TCA_QFQ_UNSPEC,767767+ TCA_QFQ_WEIGHT,768768+ TCA_QFQ_LMAX,769769+ __TCA_QFQ_MAX770770+};771771+772772+#define TCA_QFQ_MAX (__TCA_QFQ_MAX - 1)773773+774774+struct tc_qfq_stats {775775+ __u32 weight;776776+ __u32 lmax;777777+};778778+779779+/* CODEL */780780+781781+enum {782782+ TCA_CODEL_UNSPEC,783783+ TCA_CODEL_TARGET,784784+ TCA_CODEL_LIMIT,785785+ TCA_CODEL_INTERVAL,786786+ TCA_CODEL_ECN,787787+ TCA_CODEL_CE_THRESHOLD,788788+ 
__TCA_CODEL_MAX789789+};790790+791791+#define TCA_CODEL_MAX (__TCA_CODEL_MAX - 1)792792+793793+struct tc_codel_xstats {794794+ __u32 maxpacket; /* largest packet we've seen so far */795795+ __u32 count; /* how many drops we've done since the last time we796796+ * entered dropping state797797+ */798798+ __u32 lastcount; /* count at entry to dropping state */799799+ __u32 ldelay; /* in-queue delay seen by most recently dequeued packet */800800+ __s32 drop_next; /* time to drop next packet */801801+ __u32 drop_overlimit; /* number of time max qdisc packet limit was hit */802802+ __u32 ecn_mark; /* number of packets we ECN marked instead of dropped */803803+ __u32 dropping; /* are we in dropping state ? */804804+ __u32 ce_mark; /* number of CE marked packets because of ce_threshold */805805+};806806+807807+/* FQ_CODEL */808808+809809+enum {810810+ TCA_FQ_CODEL_UNSPEC,811811+ TCA_FQ_CODEL_TARGET,812812+ TCA_FQ_CODEL_LIMIT,813813+ TCA_FQ_CODEL_INTERVAL,814814+ TCA_FQ_CODEL_ECN,815815+ TCA_FQ_CODEL_FLOWS,816816+ TCA_FQ_CODEL_QUANTUM,817817+ TCA_FQ_CODEL_CE_THRESHOLD,818818+ TCA_FQ_CODEL_DROP_BATCH_SIZE,819819+ TCA_FQ_CODEL_MEMORY_LIMIT,820820+ __TCA_FQ_CODEL_MAX821821+};822822+823823+#define TCA_FQ_CODEL_MAX (__TCA_FQ_CODEL_MAX - 1)824824+825825+enum {826826+ TCA_FQ_CODEL_XSTATS_QDISC,827827+ TCA_FQ_CODEL_XSTATS_CLASS,828828+};829829+830830+struct tc_fq_codel_qd_stats {831831+ __u32 maxpacket; /* largest packet we've seen so far */832832+ __u32 drop_overlimit; /* number of time max qdisc833833+ * packet limit was hit834834+ */835835+ __u32 ecn_mark; /* number of packets we ECN marked836836+ * instead of being dropped837837+ */838838+ __u32 new_flow_count; /* number of time packets839839+ * created a 'new flow'840840+ */841841+ __u32 new_flows_len; /* count of flows in new list */842842+ __u32 old_flows_len; /* count of flows in old list */843843+ __u32 ce_mark; /* packets above ce_threshold */844844+ __u32 memory_usage; /* in bytes */845845+ __u32 
drop_overmemory;846846+};847847+848848+struct tc_fq_codel_cl_stats {849849+ __s32 deficit;850850+ __u32 ldelay; /* in-queue delay seen by most recently851851+ * dequeued packet852852+ */853853+ __u32 count;854854+ __u32 lastcount;855855+ __u32 dropping;856856+ __s32 drop_next;857857+};858858+859859+struct tc_fq_codel_xstats {860860+ __u32 type;861861+ union {862862+ struct tc_fq_codel_qd_stats qdisc_stats;863863+ struct tc_fq_codel_cl_stats class_stats;864864+ };865865+};866866+867867+/* FQ */868868+869869+enum {870870+ TCA_FQ_UNSPEC,871871+872872+ TCA_FQ_PLIMIT, /* limit of total number of packets in queue */873873+874874+ TCA_FQ_FLOW_PLIMIT, /* limit of packets per flow */875875+876876+ TCA_FQ_QUANTUM, /* RR quantum */877877+878878+ TCA_FQ_INITIAL_QUANTUM, /* RR quantum for new flow */879879+880880+ TCA_FQ_RATE_ENABLE, /* enable/disable rate limiting */881881+882882+ TCA_FQ_FLOW_DEFAULT_RATE,/* obsolete, do not use */883883+884884+ TCA_FQ_FLOW_MAX_RATE, /* per flow max rate */885885+886886+ TCA_FQ_BUCKETS_LOG, /* log2(number of buckets) */887887+888888+ TCA_FQ_FLOW_REFILL_DELAY, /* flow credit refill delay in usec */889889+890890+ TCA_FQ_ORPHAN_MASK, /* mask applied to orphaned skb hashes */891891+892892+ TCA_FQ_LOW_RATE_THRESHOLD, /* per packet delay under this rate */893893+894894+ TCA_FQ_CE_THRESHOLD, /* DCTCP-like CE-marking threshold */895895+896896+ __TCA_FQ_MAX897897+};898898+899899+#define TCA_FQ_MAX (__TCA_FQ_MAX - 1)900900+901901+struct tc_fq_qd_stats {902902+ __u64 gc_flows;903903+ __u64 highprio_packets;904904+ __u64 tcp_retrans;905905+ __u64 throttled;906906+ __u64 flows_plimit;907907+ __u64 pkts_too_long;908908+ __u64 allocation_errors;909909+ __s64 time_next_delayed_flow;910910+ __u32 flows;911911+ __u32 inactive_flows;912912+ __u32 throttled_flows;913913+ __u32 unthrottle_latency_ns;914914+ __u64 ce_mark; /* packets above ce_threshold */915915+};916916+917917+/* Heavy-Hitter Filter */918918+919919+enum {920920+ TCA_HHF_UNSPEC,921921+ 
TCA_HHF_BACKLOG_LIMIT,922922+ TCA_HHF_QUANTUM,923923+ TCA_HHF_HH_FLOWS_LIMIT,924924+ TCA_HHF_RESET_TIMEOUT,925925+ TCA_HHF_ADMIT_BYTES,926926+ TCA_HHF_EVICT_TIMEOUT,927927+ TCA_HHF_NON_HH_WEIGHT,928928+ __TCA_HHF_MAX929929+};930930+931931+#define TCA_HHF_MAX (__TCA_HHF_MAX - 1)932932+933933+struct tc_hhf_xstats {934934+ __u32 drop_overlimit; /* number of times max qdisc packet limit935935+ * was hit936936+ */937937+ __u32 hh_overlimit; /* number of times max heavy-hitters was hit */938938+ __u32 hh_tot_count; /* number of captured heavy-hitters so far */939939+ __u32 hh_cur_count; /* number of current heavy-hitters */940940+};941941+942942+/* PIE */943943+enum {944944+ TCA_PIE_UNSPEC,945945+ TCA_PIE_TARGET,946946+ TCA_PIE_LIMIT,947947+ TCA_PIE_TUPDATE,948948+ TCA_PIE_ALPHA,949949+ TCA_PIE_BETA,950950+ TCA_PIE_ECN,951951+ TCA_PIE_BYTEMODE,952952+ __TCA_PIE_MAX953953+};954954+#define TCA_PIE_MAX (__TCA_PIE_MAX - 1)955955+956956+struct tc_pie_xstats {957957+ __u32 prob; /* current probability */958958+ __u32 delay; /* current delay in ms */959959+ __u32 avg_dq_rate; /* current average dq_rate in bits/pie_time */960960+ __u32 packets_in; /* total number of packets enqueued */961961+ __u32 dropped; /* packets dropped due to pie_action */962962+ __u32 overlimit; /* dropped due to lack of space in queue */963963+ __u32 maxq; /* maximum queue size */964964+ __u32 ecn_mark; /* packets marked with ecn*/965965+};966966+967967+/* CBS */968968+struct tc_cbs_qopt {969969+ __u8 offload;970970+ __u8 _pad[3];971971+ __s32 hicredit;972972+ __s32 locredit;973973+ __s32 idleslope;974974+ __s32 sendslope;975975+};976976+977977+enum {978978+ TCA_CBS_UNSPEC,979979+ TCA_CBS_PARMS,980980+ __TCA_CBS_MAX,981981+};982982+983983+#define TCA_CBS_MAX (__TCA_CBS_MAX - 1)984984+985985+986986+/* ETF */987987+struct tc_etf_qopt {988988+ __s32 delta;989989+ __s32 clockid;990990+ __u32 flags;991991+#define TC_ETF_DEADLINE_MODE_ON BIT(0)992992+#define TC_ETF_OFFLOAD_ON BIT(1)993993+};994994+995995+enum 
{996996+ TCA_ETF_UNSPEC,997997+ TCA_ETF_PARMS,998998+ __TCA_ETF_MAX,999999+};10001000+10011001+#define TCA_ETF_MAX (__TCA_ETF_MAX - 1)10021002+10031003+10041004+/* CAKE */10051005+enum {10061006+ TCA_CAKE_UNSPEC,10071007+ TCA_CAKE_PAD,10081008+ TCA_CAKE_BASE_RATE64,10091009+ TCA_CAKE_DIFFSERV_MODE,10101010+ TCA_CAKE_ATM,10111011+ TCA_CAKE_FLOW_MODE,10121012+ TCA_CAKE_OVERHEAD,10131013+ TCA_CAKE_RTT,10141014+ TCA_CAKE_TARGET,10151015+ TCA_CAKE_AUTORATE,10161016+ TCA_CAKE_MEMORY,10171017+ TCA_CAKE_NAT,10181018+ TCA_CAKE_RAW,10191019+ TCA_CAKE_WASH,10201020+ TCA_CAKE_MPU,10211021+ TCA_CAKE_INGRESS,10221022+ TCA_CAKE_ACK_FILTER,10231023+ TCA_CAKE_SPLIT_GSO,10241024+ __TCA_CAKE_MAX10251025+};10261026+#define TCA_CAKE_MAX (__TCA_CAKE_MAX - 1)10271027+10281028+enum {10291029+ __TCA_CAKE_STATS_INVALID,10301030+ TCA_CAKE_STATS_PAD,10311031+ TCA_CAKE_STATS_CAPACITY_ESTIMATE64,10321032+ TCA_CAKE_STATS_MEMORY_LIMIT,10331033+ TCA_CAKE_STATS_MEMORY_USED,10341034+ TCA_CAKE_STATS_AVG_NETOFF,10351035+ TCA_CAKE_STATS_MIN_NETLEN,10361036+ TCA_CAKE_STATS_MAX_NETLEN,10371037+ TCA_CAKE_STATS_MIN_ADJLEN,10381038+ TCA_CAKE_STATS_MAX_ADJLEN,10391039+ TCA_CAKE_STATS_TIN_STATS,10401040+ TCA_CAKE_STATS_DEFICIT,10411041+ TCA_CAKE_STATS_COBALT_COUNT,10421042+ TCA_CAKE_STATS_DROPPING,10431043+ TCA_CAKE_STATS_DROP_NEXT_US,10441044+ TCA_CAKE_STATS_P_DROP,10451045+ TCA_CAKE_STATS_BLUE_TIMER_US,10461046+ __TCA_CAKE_STATS_MAX10471047+};10481048+#define TCA_CAKE_STATS_MAX (__TCA_CAKE_STATS_MAX - 1)10491049+10501050+enum {10511051+ __TCA_CAKE_TIN_STATS_INVALID,10521052+ TCA_CAKE_TIN_STATS_PAD,10531053+ TCA_CAKE_TIN_STATS_SENT_PACKETS,10541054+ TCA_CAKE_TIN_STATS_SENT_BYTES64,10551055+ TCA_CAKE_TIN_STATS_DROPPED_PACKETS,10561056+ TCA_CAKE_TIN_STATS_DROPPED_BYTES64,10571057+ TCA_CAKE_TIN_STATS_ACKS_DROPPED_PACKETS,10581058+ TCA_CAKE_TIN_STATS_ACKS_DROPPED_BYTES64,10591059+ TCA_CAKE_TIN_STATS_ECN_MARKED_PACKETS,10601060+ TCA_CAKE_TIN_STATS_ECN_MARKED_BYTES64,10611061+ 
TCA_CAKE_TIN_STATS_BACKLOG_PACKETS,10621062+ TCA_CAKE_TIN_STATS_BACKLOG_BYTES,10631063+ TCA_CAKE_TIN_STATS_THRESHOLD_RATE64,10641064+ TCA_CAKE_TIN_STATS_TARGET_US,10651065+ TCA_CAKE_TIN_STATS_INTERVAL_US,10661066+ TCA_CAKE_TIN_STATS_WAY_INDIRECT_HITS,10671067+ TCA_CAKE_TIN_STATS_WAY_MISSES,10681068+ TCA_CAKE_TIN_STATS_WAY_COLLISIONS,10691069+ TCA_CAKE_TIN_STATS_PEAK_DELAY_US,10701070+ TCA_CAKE_TIN_STATS_AVG_DELAY_US,10711071+ TCA_CAKE_TIN_STATS_BASE_DELAY_US,10721072+ TCA_CAKE_TIN_STATS_SPARSE_FLOWS,10731073+ TCA_CAKE_TIN_STATS_BULK_FLOWS,10741074+ TCA_CAKE_TIN_STATS_UNRESPONSIVE_FLOWS,10751075+ TCA_CAKE_TIN_STATS_MAX_SKBLEN,10761076+ TCA_CAKE_TIN_STATS_FLOW_QUANTUM,10771077+ __TCA_CAKE_TIN_STATS_MAX10781078+};10791079+#define TCA_CAKE_TIN_STATS_MAX (__TCA_CAKE_TIN_STATS_MAX - 1)10801080+#define TC_CAKE_MAX_TINS (8)10811081+10821082+enum {10831083+ CAKE_FLOW_NONE = 0,10841084+ CAKE_FLOW_SRC_IP,10851085+ CAKE_FLOW_DST_IP,10861086+ CAKE_FLOW_HOSTS, /* = CAKE_FLOW_SRC_IP | CAKE_FLOW_DST_IP */10871087+ CAKE_FLOW_FLOWS,10881088+ CAKE_FLOW_DUAL_SRC, /* = CAKE_FLOW_SRC_IP | CAKE_FLOW_FLOWS */10891089+ CAKE_FLOW_DUAL_DST, /* = CAKE_FLOW_DST_IP | CAKE_FLOW_FLOWS */10901090+ CAKE_FLOW_TRIPLE, /* = CAKE_FLOW_HOSTS | CAKE_FLOW_FLOWS */10911091+ CAKE_FLOW_MAX,10921092+};10931093+10941094+enum {10951095+ CAKE_DIFFSERV_DIFFSERV3 = 0,10961096+ CAKE_DIFFSERV_DIFFSERV4,10971097+ CAKE_DIFFSERV_DIFFSERV8,10981098+ CAKE_DIFFSERV_BESTEFFORT,10991099+ CAKE_DIFFSERV_PRECEDENCE,11001100+ CAKE_DIFFSERV_MAX11011101+};11021102+11031103+enum {11041104+ CAKE_ACK_NONE = 0,11051105+ CAKE_ACK_FILTER,11061106+ CAKE_ACK_AGGRESSIVE,11071107+ CAKE_ACK_MAX11081108+};11091109+11101110+enum {11111111+ CAKE_ATM_NONE = 0,11121112+ CAKE_ATM_ATM,11131113+ CAKE_ATM_PTM,11141114+ CAKE_ATM_MAX11151115+};11161116+11171117+11181118+/* TAPRIO */11191119+enum {11201120+ TC_TAPRIO_CMD_SET_GATES = 0x00,11211121+ TC_TAPRIO_CMD_SET_AND_HOLD = 0x01,11221122+ TC_TAPRIO_CMD_SET_AND_RELEASE = 
0x02,11231123+};11241124+11251125+enum {11261126+ TCA_TAPRIO_SCHED_ENTRY_UNSPEC,11271127+ TCA_TAPRIO_SCHED_ENTRY_INDEX, /* u32 */11281128+ TCA_TAPRIO_SCHED_ENTRY_CMD, /* u8 */11291129+ TCA_TAPRIO_SCHED_ENTRY_GATE_MASK, /* u32 */11301130+ TCA_TAPRIO_SCHED_ENTRY_INTERVAL, /* u32 */11311131+ __TCA_TAPRIO_SCHED_ENTRY_MAX,11321132+};11331133+#define TCA_TAPRIO_SCHED_ENTRY_MAX (__TCA_TAPRIO_SCHED_ENTRY_MAX - 1)11341134+11351135+/* The format for schedule entry list is:11361136+ * [TCA_TAPRIO_SCHED_ENTRY_LIST]11371137+ * [TCA_TAPRIO_SCHED_ENTRY]11381138+ * [TCA_TAPRIO_SCHED_ENTRY_CMD]11391139+ * [TCA_TAPRIO_SCHED_ENTRY_GATES]11401140+ * [TCA_TAPRIO_SCHED_ENTRY_INTERVAL]11411141+ */11421142+enum {11431143+ TCA_TAPRIO_SCHED_UNSPEC,11441144+ TCA_TAPRIO_SCHED_ENTRY,11451145+ __TCA_TAPRIO_SCHED_MAX,11461146+};11471147+11481148+#define TCA_TAPRIO_SCHED_MAX (__TCA_TAPRIO_SCHED_MAX - 1)11491149+11501150+enum {11511151+ TCA_TAPRIO_ATTR_UNSPEC,11521152+ TCA_TAPRIO_ATTR_PRIOMAP, /* struct tc_mqprio_qopt */11531153+ TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST, /* nested of entry */11541154+ TCA_TAPRIO_ATTR_SCHED_BASE_TIME, /* s64 */11551155+ TCA_TAPRIO_ATTR_SCHED_SINGLE_ENTRY, /* single entry */11561156+ TCA_TAPRIO_ATTR_SCHED_CLOCKID, /* s32 */11571157+ TCA_TAPRIO_PAD,11581158+ __TCA_TAPRIO_ATTR_MAX,11591159+};11601160+11611161+#define TCA_TAPRIO_ATTR_MAX (__TCA_TAPRIO_ATTR_MAX - 1)11621162+11631163+#endif
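The composite CAKE_FLOW_* modes above are bitwise combinations of the basic ones, as the inline comments state. A minimal standalone sketch (enum values copied from the header; the helper name is illustrative, not kernel API) that checks those identities:

```c
/* Values copied from the CAKE_FLOW_* enum above; the composite
 * modes are bitwise ORs of the basic ones. */
enum {
	CAKE_FLOW_NONE = 0,
	CAKE_FLOW_SRC_IP,	/* 1 */
	CAKE_FLOW_DST_IP,	/* 2 */
	CAKE_FLOW_HOSTS,	/* 3 = SRC_IP | DST_IP */
	CAKE_FLOW_FLOWS,	/* 4 */
	CAKE_FLOW_DUAL_SRC,	/* 5 = SRC_IP | FLOWS */
	CAKE_FLOW_DUAL_DST,	/* 6 = DST_IP | FLOWS */
	CAKE_FLOW_TRIPLE,	/* 7 = HOSTS | FLOWS */
	CAKE_FLOW_MAX,
};

/* Returns 1 if every composite mode equals the OR its comment claims. */
static int cake_flow_bits_consistent(void)
{
	return CAKE_FLOW_HOSTS == (CAKE_FLOW_SRC_IP | CAKE_FLOW_DST_IP) &&
	       CAKE_FLOW_DUAL_SRC == (CAKE_FLOW_SRC_IP | CAKE_FLOW_FLOWS) &&
	       CAKE_FLOW_DUAL_DST == (CAKE_FLOW_DST_IP | CAKE_FLOW_FLOWS) &&
	       CAKE_FLOW_TRIPLE == (CAKE_FLOW_HOSTS | CAKE_FLOW_FLOWS);
}
```

This is why userspace can treat the flow mode as three independent bits (src host, dst host, flow) rather than an opaque enumeration.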
···1111 * device configuration.1212 */13131414+#include <linux/vhost_types.h>1415#include <linux/types.h>1515-#include <linux/compiler.h>1616#include <linux/ioctl.h>1717-#include <linux/virtio_config.h>1818-#include <linux/virtio_ring.h>1919-2020-struct vhost_vring_state {2121- unsigned int index;2222- unsigned int num;2323-};2424-2525-struct vhost_vring_file {2626- unsigned int index;2727- int fd; /* Pass -1 to unbind from file. */2828-2929-};3030-3131-struct vhost_vring_addr {3232- unsigned int index;3333- /* Option flags. */3434- unsigned int flags;3535- /* Flag values: */3636- /* Whether log address is valid. If set enables logging. */3737-#define VHOST_VRING_F_LOG 03838-3939- /* Start of array of descriptors (virtually contiguous) */4040- __u64 desc_user_addr;4141- /* Used structure address. Must be 32 bit aligned */4242- __u64 used_user_addr;4343- /* Available structure address. Must be 16 bit aligned */4444- __u64 avail_user_addr;4545- /* Logging support. */4646- /* Log writes to used structure, at offset calculated from specified4747- * address. Address must be 32 bit aligned. 
*/4848- __u64 log_guest_addr;4949-};5050-5151-/* no alignment requirement */5252-struct vhost_iotlb_msg {5353- __u64 iova;5454- __u64 size;5555- __u64 uaddr;5656-#define VHOST_ACCESS_RO 0x15757-#define VHOST_ACCESS_WO 0x25858-#define VHOST_ACCESS_RW 0x35959- __u8 perm;6060-#define VHOST_IOTLB_MISS 16161-#define VHOST_IOTLB_UPDATE 26262-#define VHOST_IOTLB_INVALIDATE 36363-#define VHOST_IOTLB_ACCESS_FAIL 46464- __u8 type;6565-};6666-6767-#define VHOST_IOTLB_MSG 0x16868-#define VHOST_IOTLB_MSG_V2 0x26969-7070-struct vhost_msg {7171- int type;7272- union {7373- struct vhost_iotlb_msg iotlb;7474- __u8 padding[64];7575- };7676-};7777-7878-struct vhost_msg_v2 {7979- __u32 type;8080- __u32 reserved;8181- union {8282- struct vhost_iotlb_msg iotlb;8383- __u8 padding[64];8484- };8585-};8686-8787-struct vhost_memory_region {8888- __u64 guest_phys_addr;8989- __u64 memory_size; /* bytes */9090- __u64 userspace_addr;9191- __u64 flags_padding; /* No flags are currently specified. */9292-};9393-9494-/* All region addresses and sizes must be 4K aligned. */9595-#define VHOST_PAGE_SIZE 0x10009696-9797-struct vhost_memory {9898- __u32 nregions;9999- __u32 padding;100100- struct vhost_memory_region regions[0];101101-};1021710318/* ioctls */10419···101186 * device. This can be used to stop the ring (e.g. for migration). */102187#define VHOST_NET_SET_BACKEND _IOW(VHOST_VIRTIO, 0x30, struct vhost_vring_file)103188104104-/* Feature bits */105105-/* Log all write descriptors. Can be changed while device is active. */106106-#define VHOST_F_LOG_ALL 26107107-/* vhost-net should add virtio_net_hdr for RX, and strip for TX packets. */108108-#define VHOST_NET_F_VIRTIO_NET_HDR 27109109-110110-/* VHOST_SCSI specific definitions */111111-112112-/*113113- * Used by QEMU userspace to ensure a consistent vhost-scsi ABI.114114- *115115- * ABI Rev 0: July 2012 version starting point for v3.6-rc merge candidate +116116- * RFC-v2 vhost-scsi userspace. 
Add GET_ABI_VERSION ioctl usage117117- * ABI Rev 1: January 2013. Ignore vhost_tpgt filed in struct vhost_scsi_target.118118- * All the targets under vhost_wwpn can be seen and used by guset.119119- */120120-121121-#define VHOST_SCSI_ABI_VERSION 1122122-123123-struct vhost_scsi_target {124124- int abi_version;125125- char vhost_wwpn[224]; /* TRANSPORT_IQN_LEN */126126- unsigned short vhost_tpgt;127127- unsigned short reserved;128128-};189189+/* VHOST_SCSI specific defines */129190130191#define VHOST_SCSI_SET_ENDPOINT _IOW(VHOST_VIRTIO, 0x40, struct vhost_scsi_target)131192#define VHOST_SCSI_CLEAR_ENDPOINT _IOW(VHOST_VIRTIO, 0x41, struct vhost_scsi_target)
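The vhost_iotlb_msg layout moved to vhost_types.h keeps its field order and the VHOST_ACCESS_*/VHOST_IOTLB_* values shown in the removed lines. As a hedged sketch (the local typedefs and the helper function are illustrative stand-ins, not kernel API), filling an IOTLB "update" message looks like:

```c
/* Struct layout and constants copied from the vhost_iotlb_msg
 * definition being moved to vhost_types.h; typedefs are local
 * stand-ins for the linux/types.h ones. */
typedef unsigned long long __u64;
typedef unsigned char __u8;

#define VHOST_ACCESS_RO    0x1
#define VHOST_ACCESS_WO    0x2
#define VHOST_ACCESS_RW    0x3
#define VHOST_IOTLB_UPDATE 2

struct vhost_iotlb_msg {
	__u64 iova;
	__u64 size;
	__u64 uaddr;
	__u8 perm;
	__u8 type;
};

/* Build an IOTLB update mapping one IOVA range read-write.
 * Hypothetical helper for illustration only. */
static struct vhost_iotlb_msg iotlb_update(__u64 iova, __u64 uaddr, __u64 size)
{
	struct vhost_iotlb_msg msg = {
		.iova = iova,
		.size = size,
		.uaddr = uaddr,
		.perm = VHOST_ACCESS_RW,
		.type = VHOST_IOTLB_UPDATE,
	};
	return msg;
}
```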
···132132Format of version script and ways to handle ABI changes, including133133incompatible ones, described in details in [1].134134135135+Stand-alone build136136+=================137137+138138+Under https://github.com/libbpf/libbpf there is a (semi-)automated139139+mirror of the mainline's version of libbpf for a stand-alone build.140140+141141+However, all changes to libbpf's code base must be upstreamed through142142+the mainline kernel tree.143143+144144+License145145+=======146146+147147+libbpf is dual-licensed under LGPL 2.1 and BSD 2-Clause.148148+135149Links136150=====137151
···194194}195195196196/**197197- * tep_is_file_bigendian - get if the file is in big endian order197197+ * tep_file_bigendian - get if the file is in big endian order198198 * @pevent: a handle to the tep_handle199199 *200200 * This returns if the file is in big endian order201201 * If @pevent is NULL, 0 is returned.202202 */203203-int tep_is_file_bigendian(struct tep_handle *pevent)203203+int tep_file_bigendian(struct tep_handle *pevent)204204{205205 if(pevent)206206 return pevent->file_bigendian;
+2-2
tools/lib/traceevent/event-parse-local.h
···77#ifndef _PARSE_EVENTS_INT_H88#define _PARSE_EVENTS_INT_H991010-struct cmdline;1010+struct tep_cmdline;1111struct cmdline_list;1212struct func_map;1313struct func_list;···3636 int long_size;3737 int page_size;38383939- struct cmdline *cmdlines;3939+ struct tep_cmdline *cmdlines;4040 struct cmdline_list *cmdlist;4141 int cmdline_count;4242
+82-47
tools/lib/traceevent/event-parse.c
···124124 return calloc(1, sizeof(struct tep_print_arg));125125}126126127127-struct cmdline {127127+struct tep_cmdline {128128 char *comm;129129 int pid;130130};131131132132static int cmdline_cmp(const void *a, const void *b)133133{134134- const struct cmdline *ca = a;135135- const struct cmdline *cb = b;134134+ const struct tep_cmdline *ca = a;135135+ const struct tep_cmdline *cb = b;136136137137 if (ca->pid < cb->pid)138138 return -1;···152152{153153 struct cmdline_list *cmdlist = pevent->cmdlist;154154 struct cmdline_list *item;155155- struct cmdline *cmdlines;155155+ struct tep_cmdline *cmdlines;156156 int i;157157158158 cmdlines = malloc(sizeof(*cmdlines) * pevent->cmdline_count);···179179180180static const char *find_cmdline(struct tep_handle *pevent, int pid)181181{182182- const struct cmdline *comm;183183- struct cmdline key;182182+ const struct tep_cmdline *comm;183183+ struct tep_cmdline key;184184185185 if (!pid)186186 return "<idle>";···208208 */209209int tep_pid_is_registered(struct tep_handle *pevent, int pid)210210{211211- const struct cmdline *comm;212212- struct cmdline key;211211+ const struct tep_cmdline *comm;212212+ struct tep_cmdline key;213213214214 if (!pid)215215 return 1;···232232 * we must add this pid. 
This is much slower than when cmdlines233233 * are added before the array is initialized.234234 */235235-static int add_new_comm(struct tep_handle *pevent, const char *comm, int pid)235235+static int add_new_comm(struct tep_handle *pevent,236236+ const char *comm, int pid, bool override)236237{237237- struct cmdline *cmdlines = pevent->cmdlines;238238- const struct cmdline *cmdline;239239- struct cmdline key;238238+ struct tep_cmdline *cmdlines = pevent->cmdlines;239239+ struct tep_cmdline *cmdline;240240+ struct tep_cmdline key;241241+ char *new_comm;240242241243 if (!pid)242244 return 0;···249247 cmdline = bsearch(&key, pevent->cmdlines, pevent->cmdline_count,250248 sizeof(*pevent->cmdlines), cmdline_cmp);251249 if (cmdline) {252252- errno = EEXIST;253253- return -1;250250+ if (!override) {251251+ errno = EEXIST;252252+ return -1;253253+ }254254+ new_comm = strdup(comm);255255+ if (!new_comm) {256256+ errno = ENOMEM;257257+ return -1;258258+ }259259+ free(cmdline->comm);260260+ cmdline->comm = new_comm;261261+262262+ return 0;254263 }255264256265 cmdlines = realloc(cmdlines, sizeof(*cmdlines) * (pevent->cmdline_count + 1));···288275 return 0;289276}290277291291-/**292292- * tep_register_comm - register a pid / comm mapping293293- * @pevent: handle for the pevent294294- * @comm: the command line to register295295- * @pid: the pid to map the command line to296296- *297297- * This adds a mapping to search for command line names with298298- * a given pid. 
The comm is duplicated.299299- */300300-int tep_register_comm(struct tep_handle *pevent, const char *comm, int pid)278278+static int _tep_register_comm(struct tep_handle *pevent,279279+ const char *comm, int pid, bool override)301280{302281 struct cmdline_list *item;303282304283 if (pevent->cmdlines)305305- return add_new_comm(pevent, comm, pid);284284+ return add_new_comm(pevent, comm, pid, override);306285307286 item = malloc(sizeof(*item));308287 if (!item)···315310 pevent->cmdline_count++;316311317312 return 0;313313+}314314+315315+/**316316+ * tep_register_comm - register a pid / comm mapping317317+ * @pevent: handle for the pevent318318+ * @comm: the command line to register319319+ * @pid: the pid to map the command line to320320+ *321321+ * This adds a mapping to search for command line names with322322+ * a given pid. The comm is duplicated. If a command with the same pid323323+ * already exists, -1 is returned and errno is set to EEXIST324324+ */325325+int tep_register_comm(struct tep_handle *pevent, const char *comm, int pid)326326+{327327+ return _tep_register_comm(pevent, comm, pid, false);328328+}329329+330330+/**331331+ * tep_override_comm - register a pid / comm mapping332332+ * @pevent: handle for the pevent333333+ * @comm: the command line to register334334+ * @pid: the pid to map the command line to335335+ *336336+ * This adds a mapping to search for command line names with337337+ * a given pid. The comm is duplicated. 
If a command with the same pid338338+ * already exists, the command string is updated with the new one339339+ */340340+int tep_override_comm(struct tep_handle *pevent, const char *comm, int pid)341341+{342342+ if (!pevent->cmdlines && cmdline_init(pevent)) {343343+ errno = ENOMEM;344344+ return -1;345345+ }346346+ return _tep_register_comm(pevent, comm, pid, true);318347}319348320349int tep_register_trace_clock(struct tep_handle *pevent, const char *trace_clock)···52665227}5267522852685229/**52695269- * tep_data_event_from_type - find the event by a given type52705270- * @pevent: a handle to the pevent52715271- * @type: the type of the event.52725272- *52735273- * This returns the event form a given @type;52745274- */52755275-struct tep_event *tep_data_event_from_type(struct tep_handle *pevent, int type)52765276-{52775277- return tep_find_event(pevent, type);52785278-}52795279-52805280-/**52815230 * tep_data_pid - parse the PID from record52825231 * @pevent: a handle to the pevent52835232 * @rec: the record to parse···53195292 return comm;53205293}5321529453225322-static struct cmdline *53235323-pid_from_cmdlist(struct tep_handle *pevent, const char *comm, struct cmdline *next)52955295+static struct tep_cmdline *52965296+pid_from_cmdlist(struct tep_handle *pevent, const char *comm, struct tep_cmdline *next)53245297{53255298 struct cmdline_list *cmdlist = (struct cmdline_list *)next;53265299···53325305 while (cmdlist && strcmp(cmdlist->comm, comm) != 0)53335306 cmdlist = cmdlist->next;5334530753355335- return (struct cmdline *)cmdlist;53085308+ return (struct tep_cmdline *)cmdlist;53365309}5337531053385311/**···53485321 * next pid.53495322 * Also, it does a linear search, so it may be slow.53505323 */53515351-struct cmdline *tep_data_pid_from_comm(struct tep_handle *pevent, const char *comm,53525352- struct cmdline *next)53245324+struct tep_cmdline *tep_data_pid_from_comm(struct tep_handle *pevent, const char *comm,53255325+ struct tep_cmdline 
*next)53535326{53545354- struct cmdline *cmdline;53275327+ struct tep_cmdline *cmdline;5355532853565329 /*53575330 * If the cmdlines have not been converted yet, then use···53905363 * Returns the pid for a give cmdline. If @cmdline is NULL, then53915364 * -1 is returned.53925365 */53935393-int tep_cmdline_pid(struct tep_handle *pevent, struct cmdline *cmdline)53665366+int tep_cmdline_pid(struct tep_handle *pevent, struct tep_cmdline *cmdline)53945367{53955368 struct cmdline_list *cmdlist = (struct cmdline_list *)cmdline;53965369···66206593 *66216594 * If @id is >= 0, then it is used to find the event.66226595 * else @sys_name and @event_name are used.65966596+ *65976597+ * Returns:65986598+ * TEP_REGISTER_SUCCESS_OVERWRITE if an existing handler is overwritten65996599+ * TEP_REGISTER_SUCCESS if a new handler is registered successfully66006600+ * negative TEP_ERRNO_... in case of an error66016601+ *66236602 */66246603int tep_register_event_handler(struct tep_handle *pevent, int id,66256604 const char *sys_name, const char *event_name,···6643661066446611 event->handler = func;66456612 event->context = context;66466646- return 0;66136613+ return TEP_REGISTER_SUCCESS_OVERWRITE;6647661466486615 not_found:66496616 /* Save for later use. */···66736640 pevent->handlers = handle;66746641 handle->context = context;6675664266766676- return -1;66436643+ return TEP_REGISTER_SUCCESS;66776644}6678664566796646static int handle_matches(struct event_handler *handler, int id,···67566723{67576724 struct tep_handle *pevent = calloc(1, sizeof(*pevent));6758672567596759- if (pevent)67266726+ if (pevent) {67606727 pevent->ref_count = 1;67286728+ pevent->host_bigendian = tep_host_bigendian();67296729+ }6761673067626731 return pevent;67636732}
···389389 * We can only use the structure if file is of the same390390 * endianness.391391 */392392- if (tep_is_file_bigendian(event->pevent) ==392392+ if (tep_file_bigendian(event->pevent) ==393393 tep_is_host_bigendian(event->pevent)) {394394395395 trace_seq_printf(s, "%u q%u%s %s%s %spae %snxe %swp%s%s%s",
+12-5
tools/lib/traceevent/trace-seq.c
···100100 * @fmt: printf format string101101 *102102 * It returns 0 if the trace oversizes the buffer's free103103- * space, 1 otherwise.103103+ * space, the number of characters printed, or a negative104104+ * value in case of an error.104105 *105106 * The tracer may use either sequence operations or its own106107 * copy to user routines. To simplify formating of a trace···130129 goto try_again;131130 }132131133133- s->len += ret;132132+ if (ret > 0)133133+ s->len += ret;134134135135- return 1;135135+ return ret;136136}137137138138/**···141139 * @s: trace sequence descriptor142140 * @fmt: printf format string143141 *142142+ * It returns 0 if the trace oversizes the buffer's free143143+ * space, the number of characters printed, or a negative144144+ * value in case of an error.145145+ *144146 * The tracer may use either sequence operations or its own145147 * copy to user routines. To simplify formating of a trace146148 * trace_seq_printf is used to store strings into a special···169163 goto try_again;170164 }171165172172- s->len += ret;166166+ if (ret > 0)167167+ s->len += ret;173168174174- return len;169169+ return ret;175170}176171177172/**
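The reworked return convention for trace_seq_printf()/trace_seq_vprintf() is: 0 when the message would not fit the buffer's free space, the number of characters printed on success, and a negative value on a formatting error. A minimal model of that convention (the demo_* names and the fixed 64-byte buffer are illustrative; the real trace-seq code grows its buffer and retries instead of reporting 0):

```c
#include <stdarg.h>
#include <stdio.h>

struct demo_seq { char buf[64]; int len; };

static struct demo_seq gseq;

/* Append formatted text; return 0 on overflow, the character count
 * on success, or vsnprintf's negative value on a format error. */
static int demo_seq_printf(struct demo_seq *s, const char *fmt, ...)
{
	int free_len = (int)sizeof(s->buf) - s->len;
	va_list ap;
	int ret;

	va_start(ap, fmt);
	ret = vsnprintf(s->buf + s->len, free_len, fmt, ap);
	va_end(ap);

	if (ret < 0)
		return ret;		/* formatting error */
	if (ret >= free_len)
		return 0;		/* message oversizes the free space */
	s->len += ret;
	return ret;			/* characters printed */
}
```

Note the guard on `ret > 0` before advancing `len`, matching the `if (ret > 0) s->len += ret;` hunks in the patch: a negative vsnprintf result must never shrink or corrupt the sequence length.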
···11+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note22+#33+# system call numbers and entry vectors for powerpc44+#55+# The format is:66+# <number> <abi> <name> <entry point> <compat entry point>77+#88+# The <abi> can be common, spu, nospu, 64, or 32 for this file.99+#1010+0 nospu restart_syscall sys_restart_syscall1111+1 nospu exit sys_exit1212+2 nospu fork ppc_fork1313+3 common read sys_read1414+4 common write sys_write1515+5 common open sys_open compat_sys_open1616+6 common close sys_close1717+7 common waitpid sys_waitpid1818+8 common creat sys_creat1919+9 common link sys_link2020+10 common unlink sys_unlink2121+11 nospu execve sys_execve compat_sys_execve2222+12 common chdir sys_chdir2323+13 common time sys_time compat_sys_time2424+14 common mknod sys_mknod2525+15 common chmod sys_chmod2626+16 common lchown sys_lchown2727+17 common break sys_ni_syscall2828+18 32 oldstat sys_stat sys_ni_syscall2929+18 64 oldstat sys_ni_syscall3030+18 spu oldstat sys_ni_syscall3131+19 common lseek sys_lseek compat_sys_lseek3232+20 common getpid sys_getpid3333+21 nospu mount sys_mount compat_sys_mount3434+22 32 umount sys_oldumount3535+22 64 umount sys_ni_syscall3636+22 spu umount sys_ni_syscall3737+23 common setuid sys_setuid3838+24 common getuid sys_getuid3939+25 common stime sys_stime compat_sys_stime4040+26 nospu ptrace sys_ptrace compat_sys_ptrace4141+27 common alarm sys_alarm4242+28 32 oldfstat sys_fstat sys_ni_syscall4343+28 64 oldfstat sys_ni_syscall4444+28 spu oldfstat sys_ni_syscall4545+29 nospu pause sys_pause4646+30 nospu utime sys_utime compat_sys_utime4747+31 common stty sys_ni_syscall4848+32 common gtty sys_ni_syscall4949+33 common access sys_access5050+34 common nice sys_nice5151+35 common ftime sys_ni_syscall5252+36 common sync sys_sync5353+37 common kill sys_kill5454+38 common rename sys_rename5555+39 common mkdir sys_mkdir5656+40 common rmdir sys_rmdir5757+41 common dup sys_dup5858+42 common pipe sys_pipe5959+43 common times sys_times 
compat_sys_times6060+44 common prof sys_ni_syscall6161+45 common brk sys_brk6262+46 common setgid sys_setgid6363+47 common getgid sys_getgid6464+48 nospu signal sys_signal6565+49 common geteuid sys_geteuid6666+50 common getegid sys_getegid6767+51 nospu acct sys_acct6868+52 nospu umount2 sys_umount6969+53 common lock sys_ni_syscall7070+54 common ioctl sys_ioctl compat_sys_ioctl7171+55 common fcntl sys_fcntl compat_sys_fcntl7272+56 common mpx sys_ni_syscall7373+57 common setpgid sys_setpgid7474+58 common ulimit sys_ni_syscall7575+59 32 oldolduname sys_olduname7676+59 64 oldolduname sys_ni_syscall7777+59 spu oldolduname sys_ni_syscall7878+60 common umask sys_umask7979+61 common chroot sys_chroot8080+62 nospu ustat sys_ustat compat_sys_ustat8181+63 common dup2 sys_dup28282+64 common getppid sys_getppid8383+65 common getpgrp sys_getpgrp8484+66 common setsid sys_setsid8585+67 32 sigaction sys_sigaction compat_sys_sigaction8686+67 64 sigaction sys_ni_syscall8787+67 spu sigaction sys_ni_syscall8888+68 common sgetmask sys_sgetmask8989+69 common ssetmask sys_ssetmask9090+70 common setreuid sys_setreuid9191+71 common setregid sys_setregid9292+72 32 sigsuspend sys_sigsuspend9393+72 64 sigsuspend sys_ni_syscall9494+72 spu sigsuspend sys_ni_syscall9595+73 32 sigpending sys_sigpending compat_sys_sigpending9696+73 64 sigpending sys_ni_syscall9797+73 spu sigpending sys_ni_syscall9898+74 common sethostname sys_sethostname9999+75 common setrlimit sys_setrlimit compat_sys_setrlimit100100+76 32 getrlimit sys_old_getrlimit compat_sys_old_getrlimit101101+76 64 getrlimit sys_ni_syscall102102+76 spu getrlimit sys_ni_syscall103103+77 common getrusage sys_getrusage compat_sys_getrusage104104+78 common gettimeofday sys_gettimeofday compat_sys_gettimeofday105105+79 common settimeofday sys_settimeofday compat_sys_settimeofday106106+80 common getgroups sys_getgroups107107+81 common setgroups sys_setgroups108108+82 32 select ppc_select sys_ni_syscall109109+82 64 select sys_ni_syscall110110+82 spu 
select sys_ni_syscall111111+83 common symlink sys_symlink112112+84 32 oldlstat sys_lstat sys_ni_syscall113113+84 64 oldlstat sys_ni_syscall114114+84 spu oldlstat sys_ni_syscall115115+85 common readlink sys_readlink116116+86 nospu uselib sys_uselib117117+87 nospu swapon sys_swapon118118+88 nospu reboot sys_reboot119119+89 32 readdir sys_old_readdir compat_sys_old_readdir120120+89 64 readdir sys_ni_syscall121121+89 spu readdir sys_ni_syscall122122+90 common mmap sys_mmap123123+91 common munmap sys_munmap124124+92 common truncate sys_truncate compat_sys_truncate125125+93 common ftruncate sys_ftruncate compat_sys_ftruncate126126+94 common fchmod sys_fchmod127127+95 common fchown sys_fchown128128+96 common getpriority sys_getpriority129129+97 common setpriority sys_setpriority130130+98 common profil sys_ni_syscall131131+99 nospu statfs sys_statfs compat_sys_statfs132132+100 nospu fstatfs sys_fstatfs compat_sys_fstatfs133133+101 common ioperm sys_ni_syscall134134+102 common socketcall sys_socketcall compat_sys_socketcall135135+103 common syslog sys_syslog136136+104 common setitimer sys_setitimer compat_sys_setitimer137137+105 common getitimer sys_getitimer compat_sys_getitimer138138+106 common stat sys_newstat compat_sys_newstat139139+107 common lstat sys_newlstat compat_sys_newlstat140140+108 common fstat sys_newfstat compat_sys_newfstat141141+109 32 olduname sys_uname142142+109 64 olduname sys_ni_syscall143143+109 spu olduname sys_ni_syscall144144+110 common iopl sys_ni_syscall145145+111 common vhangup sys_vhangup146146+112 common idle sys_ni_syscall147147+113 common vm86 sys_ni_syscall148148+114 common wait4 sys_wait4 compat_sys_wait4149149+115 nospu swapoff sys_swapoff150150+116 common sysinfo sys_sysinfo compat_sys_sysinfo151151+117 nospu ipc sys_ipc compat_sys_ipc152152+118 common fsync sys_fsync153153+119 32 sigreturn sys_sigreturn compat_sys_sigreturn154154+119 64 sigreturn sys_ni_syscall155155+119 spu sigreturn sys_ni_syscall156156+120 nospu clone 
ppc_clone157157+121 common setdomainname sys_setdomainname158158+122 common uname sys_newuname159159+123 common modify_ldt sys_ni_syscall160160+124 common adjtimex sys_adjtimex compat_sys_adjtimex161161+125 common mprotect sys_mprotect162162+126 32 sigprocmask sys_sigprocmask compat_sys_sigprocmask163163+126 64 sigprocmask sys_ni_syscall164164+126 spu sigprocmask sys_ni_syscall165165+127 common create_module sys_ni_syscall166166+128 nospu init_module sys_init_module167167+129 nospu delete_module sys_delete_module168168+130 common get_kernel_syms sys_ni_syscall169169+131 nospu quotactl sys_quotactl170170+132 common getpgid sys_getpgid171171+133 common fchdir sys_fchdir172172+134 common bdflush sys_bdflush173173+135 common sysfs sys_sysfs174174+136 32 personality sys_personality ppc64_personality175175+136 64 personality ppc64_personality176176+136 spu personality ppc64_personality177177+137 common afs_syscall sys_ni_syscall178178+138 common setfsuid sys_setfsuid179179+139 common setfsgid sys_setfsgid180180+140 common _llseek sys_llseek181181+141 common getdents sys_getdents compat_sys_getdents182182+142 common _newselect sys_select compat_sys_select183183+143 common flock sys_flock184184+144 common msync sys_msync185185+145 common readv sys_readv compat_sys_readv186186+146 common writev sys_writev compat_sys_writev187187+147 common getsid sys_getsid188188+148 common fdatasync sys_fdatasync189189+149 nospu _sysctl sys_sysctl compat_sys_sysctl190190+150 common mlock sys_mlock191191+151 common munlock sys_munlock192192+152 common mlockall sys_mlockall193193+153 common munlockall sys_munlockall194194+154 common sched_setparam sys_sched_setparam195195+155 common sched_getparam sys_sched_getparam196196+156 common sched_setscheduler sys_sched_setscheduler197197+157 common sched_getscheduler sys_sched_getscheduler198198+158 common sched_yield sys_sched_yield199199+159 common sched_get_priority_max sys_sched_get_priority_max200200+160 common sched_get_priority_min 
sys_sched_get_priority_min201201+161 common sched_rr_get_interval sys_sched_rr_get_interval compat_sys_sched_rr_get_interval202202+162 common nanosleep sys_nanosleep compat_sys_nanosleep203203+163 common mremap sys_mremap204204+164 common setresuid sys_setresuid205205+165 common getresuid sys_getresuid206206+166 common query_module sys_ni_syscall207207+167 common poll sys_poll208208+168 common nfsservctl sys_ni_syscall209209+169 common setresgid sys_setresgid210210+170 common getresgid sys_getresgid211211+171 common prctl sys_prctl212212+172 nospu rt_sigreturn sys_rt_sigreturn compat_sys_rt_sigreturn213213+173 nospu rt_sigaction sys_rt_sigaction compat_sys_rt_sigaction214214+174 nospu rt_sigprocmask sys_rt_sigprocmask compat_sys_rt_sigprocmask215215+175 nospu rt_sigpending sys_rt_sigpending compat_sys_rt_sigpending216216+176 nospu rt_sigtimedwait sys_rt_sigtimedwait compat_sys_rt_sigtimedwait217217+177 nospu rt_sigqueueinfo sys_rt_sigqueueinfo compat_sys_rt_sigqueueinfo218218+178 nospu rt_sigsuspend sys_rt_sigsuspend compat_sys_rt_sigsuspend219219+179 common pread64 sys_pread64 compat_sys_pread64220220+180 common pwrite64 sys_pwrite64 compat_sys_pwrite64221221+181 common chown sys_chown222222+182 common getcwd sys_getcwd223223+183 common capget sys_capget224224+184 common capset sys_capset225225+185 nospu sigaltstack sys_sigaltstack compat_sys_sigaltstack226226+186 32 sendfile sys_sendfile compat_sys_sendfile227227+186 64 sendfile sys_sendfile64228228+186 spu sendfile sys_sendfile64229229+187 common getpmsg sys_ni_syscall230230+188 common putpmsg sys_ni_syscall231231+189 nospu vfork ppc_vfork232232+190 common ugetrlimit sys_getrlimit compat_sys_getrlimit233233+191 common readahead sys_readahead compat_sys_readahead234234+192 32 mmap2 sys_mmap2 compat_sys_mmap2235235+193 32 truncate64 sys_truncate64 compat_sys_truncate64236236+194 32 ftruncate64 sys_ftruncate64 compat_sys_ftruncate64237237+195 32 stat64 sys_stat64238238+196 32 lstat64 sys_lstat64239239+197 32 
fstat64 sys_fstat64240240+198 nospu pciconfig_read sys_pciconfig_read241241+199 nospu pciconfig_write sys_pciconfig_write242242+200 nospu pciconfig_iobase sys_pciconfig_iobase243243+201 common multiplexer sys_ni_syscall244244+202 common getdents64 sys_getdents64245245+203 common pivot_root sys_pivot_root246246+204 32 fcntl64 sys_fcntl64 compat_sys_fcntl64247247+205 common madvise sys_madvise248248+206 common mincore sys_mincore249249+207 common gettid sys_gettid250250+208 common tkill sys_tkill251251+209 common setxattr sys_setxattr252252+210 common lsetxattr sys_lsetxattr253253+211 common fsetxattr sys_fsetxattr254254+212 common getxattr sys_getxattr255255+213 common lgetxattr sys_lgetxattr256256+214 common fgetxattr sys_fgetxattr257257+215 common listxattr sys_listxattr258258+216 common llistxattr sys_llistxattr259259+217 common flistxattr sys_flistxattr260260+218 common removexattr sys_removexattr261261+219 common lremovexattr sys_lremovexattr262262+220 common fremovexattr sys_fremovexattr263263+221 common futex sys_futex compat_sys_futex264264+222 common sched_setaffinity sys_sched_setaffinity compat_sys_sched_setaffinity265265+223 common sched_getaffinity sys_sched_getaffinity compat_sys_sched_getaffinity266266+# 224 unused267267+225 common tuxcall sys_ni_syscall268268+226 32 sendfile64 sys_sendfile64 compat_sys_sendfile64269269+227 common io_setup sys_io_setup compat_sys_io_setup270270+228 common io_destroy sys_io_destroy271271+229 common io_getevents sys_io_getevents compat_sys_io_getevents272272+230 common io_submit sys_io_submit compat_sys_io_submit273273+231 common io_cancel sys_io_cancel274274+232 nospu set_tid_address sys_set_tid_address275275+233 common fadvise64 sys_fadvise64 ppc32_fadvise64276276+234 nospu exit_group sys_exit_group277277+235 nospu lookup_dcookie sys_lookup_dcookie compat_sys_lookup_dcookie278278+236 common epoll_create sys_epoll_create279279+237 common epoll_ctl sys_epoll_ctl280280+238 common epoll_wait sys_epoll_wait281281+239 
common remap_file_pages sys_remap_file_pages
282282+240 common timer_create sys_timer_create compat_sys_timer_create
283283+241 common timer_settime sys_timer_settime compat_sys_timer_settime
284284+242 common timer_gettime sys_timer_gettime compat_sys_timer_gettime
285285+243 common timer_getoverrun sys_timer_getoverrun
286286+244 common timer_delete sys_timer_delete
287287+245 common clock_settime sys_clock_settime compat_sys_clock_settime
288288+246 common clock_gettime sys_clock_gettime compat_sys_clock_gettime
289289+247 common clock_getres sys_clock_getres compat_sys_clock_getres
290290+248 common clock_nanosleep sys_clock_nanosleep compat_sys_clock_nanosleep
291291+249 32 swapcontext ppc_swapcontext ppc32_swapcontext
292292+249 64 swapcontext ppc64_swapcontext
293293+249 spu swapcontext sys_ni_syscall
294294+250 common tgkill sys_tgkill
295295+251 common utimes sys_utimes compat_sys_utimes
296296+252 common statfs64 sys_statfs64 compat_sys_statfs64
297297+253 common fstatfs64 sys_fstatfs64 compat_sys_fstatfs64
298298+254 32 fadvise64_64 ppc_fadvise64_64
299299+254 spu fadvise64_64 sys_ni_syscall
300300+255 common rtas sys_rtas
301301+256 32 sys_debug_setcontext sys_debug_setcontext sys_ni_syscall
302302+256 64 sys_debug_setcontext sys_ni_syscall
303303+256 spu sys_debug_setcontext sys_ni_syscall
304304+# 257 reserved for vserver
305305+258 nospu migrate_pages sys_migrate_pages compat_sys_migrate_pages
306306+259 nospu mbind sys_mbind compat_sys_mbind
307307+260 nospu get_mempolicy sys_get_mempolicy compat_sys_get_mempolicy
308308+261 nospu set_mempolicy sys_set_mempolicy compat_sys_set_mempolicy
309309+262 nospu mq_open sys_mq_open compat_sys_mq_open
310310+263 nospu mq_unlink sys_mq_unlink
311311+264 nospu mq_timedsend sys_mq_timedsend compat_sys_mq_timedsend
312312+265 nospu mq_timedreceive sys_mq_timedreceive compat_sys_mq_timedreceive
313313+266 nospu mq_notify sys_mq_notify compat_sys_mq_notify
314314+267 nospu mq_getsetattr sys_mq_getsetattr compat_sys_mq_getsetattr
315315+268 nospu kexec_load sys_kexec_load compat_sys_kexec_load
316316+269 nospu add_key sys_add_key
317317+270 nospu request_key sys_request_key
318318+271 nospu keyctl sys_keyctl compat_sys_keyctl
319319+272 nospu waitid sys_waitid compat_sys_waitid
320320+273 nospu ioprio_set sys_ioprio_set
321321+274 nospu ioprio_get sys_ioprio_get
322322+275 nospu inotify_init sys_inotify_init
323323+276 nospu inotify_add_watch sys_inotify_add_watch
324324+277 nospu inotify_rm_watch sys_inotify_rm_watch
325325+278 nospu spu_run sys_spu_run
326326+279 nospu spu_create sys_spu_create
327327+280 nospu pselect6 sys_pselect6 compat_sys_pselect6
328328+281 nospu ppoll sys_ppoll compat_sys_ppoll
329329+282 common unshare sys_unshare
330330+283 common splice sys_splice
331331+284 common tee sys_tee
332332+285 common vmsplice sys_vmsplice compat_sys_vmsplice
333333+286 common openat sys_openat compat_sys_openat
334334+287 common mkdirat sys_mkdirat
335335+288 common mknodat sys_mknodat
336336+289 common fchownat sys_fchownat
337337+290 common futimesat sys_futimesat compat_sys_futimesat
338338+291 32 fstatat64 sys_fstatat64
339339+291 64 newfstatat sys_newfstatat
340340+291 spu newfstatat sys_newfstatat
341341+292 common unlinkat sys_unlinkat
342342+293 common renameat sys_renameat
343343+294 common linkat sys_linkat
344344+295 common symlinkat sys_symlinkat
345345+296 common readlinkat sys_readlinkat
346346+297 common fchmodat sys_fchmodat
347347+298 common faccessat sys_faccessat
348348+299 common get_robust_list sys_get_robust_list compat_sys_get_robust_list
349349+300 common set_robust_list sys_set_robust_list compat_sys_set_robust_list
350350+301 common move_pages sys_move_pages compat_sys_move_pages
351351+302 common getcpu sys_getcpu
352352+303 nospu epoll_pwait sys_epoll_pwait compat_sys_epoll_pwait
353353+304 common utimensat sys_utimensat compat_sys_utimensat
354354+305 common signalfd sys_signalfd compat_sys_signalfd
355355+306 common timerfd_create sys_timerfd_create
356356+307 common eventfd sys_eventfd
357357+308 common sync_file_range2 sys_sync_file_range2 compat_sys_sync_file_range2
358358+309 nospu fallocate sys_fallocate compat_sys_fallocate
359359+310 nospu subpage_prot sys_subpage_prot
360360+311 common timerfd_settime sys_timerfd_settime compat_sys_timerfd_settime
361361+312 common timerfd_gettime sys_timerfd_gettime compat_sys_timerfd_gettime
362362+313 common signalfd4 sys_signalfd4 compat_sys_signalfd4
363363+314 common eventfd2 sys_eventfd2
364364+315 common epoll_create1 sys_epoll_create1
365365+316 common dup3 sys_dup3
366366+317 common pipe2 sys_pipe2
367367+318 nospu inotify_init1 sys_inotify_init1
368368+319 common perf_event_open sys_perf_event_open
369369+320 common preadv sys_preadv compat_sys_preadv
370370+321 common pwritev sys_pwritev compat_sys_pwritev
371371+322 nospu rt_tgsigqueueinfo sys_rt_tgsigqueueinfo compat_sys_rt_tgsigqueueinfo
372372+323 nospu fanotify_init sys_fanotify_init
373373+324 nospu fanotify_mark sys_fanotify_mark compat_sys_fanotify_mark
374374+325 common prlimit64 sys_prlimit64
375375+326 common socket sys_socket
376376+327 common bind sys_bind
377377+328 common connect sys_connect
378378+329 common listen sys_listen
379379+330 common accept sys_accept
380380+331 common getsockname sys_getsockname
381381+332 common getpeername sys_getpeername
382382+333 common socketpair sys_socketpair
383383+334 common send sys_send
384384+335 common sendto sys_sendto
385385+336 common recv sys_recv compat_sys_recv
386386+337 common recvfrom sys_recvfrom compat_sys_recvfrom
387387+338 common shutdown sys_shutdown
388388+339 common setsockopt sys_setsockopt compat_sys_setsockopt
389389+340 common getsockopt sys_getsockopt compat_sys_getsockopt
390390+341 common sendmsg sys_sendmsg compat_sys_sendmsg
391391+342 common recvmsg sys_recvmsg compat_sys_recvmsg
392392+343 common recvmmsg sys_recvmmsg compat_sys_recvmmsg
393393+344 common accept4 sys_accept4
394394+345 common name_to_handle_at sys_name_to_handle_at
395395+346 common open_by_handle_at sys_open_by_handle_at compat_sys_open_by_handle_at
396396+347 common clock_adjtime sys_clock_adjtime compat_sys_clock_adjtime
397397+348 common syncfs sys_syncfs
398398+349 common sendmmsg sys_sendmmsg compat_sys_sendmmsg
399399+350 common setns sys_setns
400400+351 nospu process_vm_readv sys_process_vm_readv compat_sys_process_vm_readv
401401+352 nospu process_vm_writev sys_process_vm_writev compat_sys_process_vm_writev
402402+353 nospu finit_module sys_finit_module
403403+354 nospu kcmp sys_kcmp
404404+355 common sched_setattr sys_sched_setattr
405405+356 common sched_getattr sys_sched_getattr
406406+357 common renameat2 sys_renameat2
407407+358 common seccomp sys_seccomp
408408+359 common getrandom sys_getrandom
409409+360 common memfd_create sys_memfd_create
410410+361 common bpf sys_bpf
411411+362 nospu execveat sys_execveat compat_sys_execveat
412412+363 32 switch_endian sys_ni_syscall
413413+363 64 switch_endian ppc_switch_endian
414414+363 spu switch_endian sys_ni_syscall
415415+364 common userfaultfd sys_userfaultfd
416416+365 common membarrier sys_membarrier
417417+378 nospu mlock2 sys_mlock2
418418+379 nospu copy_file_range sys_copy_file_range
419419+380 common preadv2 sys_preadv2 compat_sys_preadv2
420420+381 common pwritev2 sys_pwritev2 compat_sys_pwritev2
421421+382 nospu kexec_file_load sys_kexec_file_load
422422+383 nospu statx sys_statx
423423+384 nospu pkey_alloc sys_pkey_alloc
424424+385 nospu pkey_free sys_pkey_free
425425+386 nospu pkey_mprotect sys_pkey_mprotect
426426+387 nospu rseq sys_rseq
427427+388 nospu io_pgetevents sys_io_pgetevents compat_sys_io_pgetevents
···10281028
10291029static int callchain_param__setup_sample_type(struct callchain_param *callchain)
10301030{
10311031- if (!perf_hpp_list.sym) {
10321032- if (callchain->enabled) {
10331033- ui__error("Selected -g but \"sym\" not present in --sort/-s.");
10341034- return -EINVAL;
10351035- }
10361036- } else if (callchain->mode != CHAIN_NONE) {
10311031+ if (callchain->mode != CHAIN_NONE) {
10371032 if (callchain_register_param(callchain) < 0) {
10381033 ui__error("Can't register callchain params.\n");
10391034 return -EINVAL;
···55#define VDSO__MAP_NAME "[vdso]"
66
77/*
88- * Include definition of find_vdso_map() also used in util/vdso.c for
88+ * Include definition of find_map() also used in util/vdso.c for
99 * building perf.
1010 */
1111-#include "util/find-vdso-map.c"
1111+#include "util/find-map.c"
1212
1313int main(void)
1414{
1515 void *start, *end;
1616 size_t size, written;
1717
1818- if (find_vdso_map(&start, &end))
1818+ if (find_map(&start, &end, VDSO__MAP_NAME))
1919 return 1;
2020
2121 size = end - start;
···109109 return ret;
110110 }
111111 len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap_saved);
112112- va_end(ap_saved);
113112 if (len > strbuf_avail(sb)) {
114113 pr_debug("this should not happen, your vsnprintf is broken");
115114 va_end(ap_saved);
···1818#include "debug.h"
1919
2020/*
2121- * Include definition of find_vdso_map() also used in perf-read-vdso.c for
2121+ * Include definition of find_map() also used in perf-read-vdso.c for
2222 * building perf-read-vdso32 and perf-read-vdsox32.
2323 */
2424-#include "find-vdso-map.c"
2424+#include "find-map.c"
2525
2626#define VDSO__TEMP_FILE_NAME "/tmp/perf-vdso.so-XXXXXX"
2727
···7676 if (vdso_file->found)
7777 return vdso_file->temp_file_name;
7878
7979- if (vdso_file->error || find_vdso_map(&start, &end))
7979+ if (vdso_file->error || find_map(&start, &end, VDSO__MAP_NAME))
8080 return NULL;
8181
8282 size = end - start;
···5555 test_flow_dissector.sh \
5656 test_xdp_vlan.sh
5757
5858-TEST_PROGS_EXTENDED := with_addr.sh
5858+TEST_PROGS_EXTENDED := with_addr.sh \
5959+ with_tunnels.sh \
6060+ tcp_client.py \
6161+ tcp_server.py
5962
6063# Compile but not part of 'make run_tests'
6164TEST_GEN_PROGS_EXTENDED = test_libbpf_open test_sock_addr test_skb_cgroup_id_user \
+3-3
tools/testing/selftests/bpf/cgroup_helpers.c
···155155 * This function creates a cgroup under the top level workdir and returns the
156156 * file descriptor. It is idempotent.
157157 *
158158- * On success, it returns the file descriptor. On failure it returns 0.
158158+ * On success, it returns the file descriptor. On failure it returns -1.
159159 * If there is a failure, it prints the error to stderr.
160160 */
161161int create_and_get_cgroup(const char *path)
···166166 format_cgroup_path(cgroup_path, path);
167167 if (mkdir(cgroup_path, 0777) && errno != EEXIST) {
168168 log_err("mkdiring cgroup %s .. %s", path, cgroup_path);
169169- return 0;
169169+ return -1;
170170 }
171171
172172 fd = open(cgroup_path, O_RDONLY);
173173 if (fd < 0) {
174174 log_err("Opening Cgroup");
175175- return 0;
175175+ return -1;
176176 }
177177
178178 return fd;
···8181
8282 /* Create a cgroup, get fd, and join it */
8383 cgroup_fd = create_and_get_cgroup(TEST_CGROUP);
8484- if (!cgroup_fd) {
8484+ if (cgroup_fd < 0) {
8585 printf("Failed to create test cgroup\n");
8686 goto err;
8787 }
+1-1
tools/testing/selftests/bpf/test_dev_cgroup.c
···4343
4444 /* Create a cgroup, get fd, and join it */
4545 cgroup_fd = create_and_get_cgroup(TEST_CGROUP);
4646- if (!cgroup_fd) {
4646+ if (cgroup_fd < 0) {
4747 printf("Failed to create test cgroup\n");
4848 goto err;
4949 }
+1-1
tools/testing/selftests/bpf/test_netcnt.c
···6565
6666 /* Create a cgroup, get fd, and join it */
6767 cgroup_fd = create_and_get_cgroup(TEST_CGROUP);
6868- if (!cgroup_fd) {
6868+ if (cgroup_fd < 0) {
6969 printf("Failed to create test cgroup\n");
7070 goto err;
7171 }
+30
tools/testing/selftests/bpf/test_progs.c
···11881188 int i, j;
11891189 struct bpf_stack_build_id id_offs[PERF_MAX_STACK_DEPTH];
11901190 int build_id_matches = 0;
11911191+ int retry = 1;
11911192
11931193+retry:
11921194 err = bpf_prog_load(file, BPF_PROG_TYPE_TRACEPOINT, &obj, &prog_fd);
11931195 if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
11941196 goto out;
···13031301 previous_key = key;
13041302 } while (bpf_map_get_next_key(stackmap_fd, &previous_key, &key) == 0);
13051303
13041304+ /* stack_map_get_build_id_offset() is racy and sometimes can return
13051305+ * BPF_STACK_BUILD_ID_IP instead of BPF_STACK_BUILD_ID_VALID;
13061306+ * try it one more time.
13071307+ */
13081308+ if (build_id_matches < 1 && retry--) {
13091309+ ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
13101310+ close(pmu_fd);
13111311+ bpf_object__close(obj);
13121312+ printf("%s:WARN:Didn't find expected build ID from the map, retrying\n",
13131313+ __func__);
13141314+ goto retry;
13151315+ }
13161316+
13061317 if (CHECK(build_id_matches < 1, "build id match",
13071318 "Didn't find expected build ID from the map\n"))
13081319 goto disable_pmu;
···13561341 int i, j;
13571342 struct bpf_stack_build_id id_offs[PERF_MAX_STACK_DEPTH];
13581343 int build_id_matches = 0;
13441344+ int retry = 1;
13591345
13461346+retry:
13601347 err = bpf_prog_load(file, BPF_PROG_TYPE_PERF_EVENT, &obj, &prog_fd);
13611348 if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
13621349 return;
···14521435 }
14531436 previous_key = key;
14541437 } while (bpf_map_get_next_key(stackmap_fd, &previous_key, &key) == 0);
14381438+
14391439+ /* stack_map_get_build_id_offset() is racy and sometimes can return
14401440+ * BPF_STACK_BUILD_ID_IP instead of BPF_STACK_BUILD_ID_VALID;
14411441+ * try it one more time.
14421442+ */
14431443+ if (build_id_matches < 1 && retry--) {
14441444+ ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
14451445+ close(pmu_fd);
14461446+ bpf_object__close(obj);
14471447+ printf("%s:WARN:Didn't find expected build ID from the map, retrying\n",
14481448+ __func__);
14491449+ goto retry;
14501450+ }
14551451
14561452 if (CHECK(build_id_matches < 1, "build id match",
14571453 "Didn't find expected build ID from the map\n"))
···2525 lag_unlink_slaves_test
2626 lag_dev_deletion_test
2727 vlan_interface_uppers_test
2828+ bridge_extern_learn_test
2829 devlink_reload_test
2930"
3031NUM_NETIFS=2
···538537 ip link del dev br-test
539538
540539 log_test "vlan interface uppers"
540540+
541541+ ip link del dev br0
542542+}
543543+
544544+bridge_extern_learn_test()
545545+{
546546+ # Test that externally learned entries added from user space are
547547+ # marked as offloaded
548548+ RET=0
549549+
550550+ ip link add name br0 type bridge
551551+ ip link set dev $swp1 master br0
552552+
553553+ bridge fdb add de:ad:be:ef:13:37 dev $swp1 master extern_learn
554554+
555555+ bridge fdb show brport $swp1 | grep de:ad:be:ef:13:37 | grep -q offload
556556+ check_err $? "fdb entry not marked as offloaded when should"
557557+
558558+ log_test "externally learned fdb entry"
541559
542560 ip link del dev br0
543561}
···847847
848848 log_test "vlan-aware - failed enslavement to vlan-aware bridge"
849849
850850+ bridge vlan del vid 10 dev vxlan20
851851+ bridge vlan add vid 20 dev vxlan20 pvid untagged
852852+
853853+ # Test that offloading of an unsupported tunnel fails when it is
854854+ # triggered by addition of VLAN to a local port
855855+ RET=0
856856+
857857+ # TOS must be set to inherit
858858+ ip link set dev vxlan10 type vxlan tos 42
859859+
860860+ ip link set dev $swp1 master br0
861861+ bridge vlan add vid 10 dev $swp1 &> /dev/null
862862+ check_fail $?
863863+
864864+ log_test "vlan-aware - failed vlan addition to a local port"
865865+
866866+ ip link set dev vxlan10 type vxlan tos inherit
867867+
850868 ip link del dev vxlan20
851869 ip link del dev vxlan10
852870 ip link del dev br0
···11#!/bin/bash
22# SPDX-License-Identifier: GPL-2.0
33
44-ALL_TESTS="ping_ipv4 ping_ipv6 learning flooding"
44+ALL_TESTS="ping_ipv4 ping_ipv6 learning flooding vlan_deletion extern_learn"
55NUM_NETIFS=4
66CHECK_TC="yes"
77source lib.sh
···9494flooding()
9595{
9696 flood_test $swp2 $h1 $h2
9797+}
9898+
9999+vlan_deletion()
100100+{
101101+ # Test that the deletion of a VLAN on a bridge port does not affect
102102+ # the PVID VLAN
103103+ log_info "Add and delete a VLAN on bridge port $swp1"
104104+
105105+ bridge vlan add vid 10 dev $swp1
106106+ bridge vlan del vid 10 dev $swp1
107107+
108108+ ping_ipv4
109109+ ping_ipv6
110110+}
111111+
112112+extern_learn()
113113+{
114114+ local mac=de:ad:be:ef:13:37
115115+ local ageing_time
116116+
117117+ # Test that externally learned FDB entries can roam, but not age out
118118+ RET=0
119119+
120120+ bridge fdb add de:ad:be:ef:13:37 dev $swp1 master extern_learn vlan 1
121121+
122122+ bridge fdb show brport $swp1 | grep -q de:ad:be:ef:13:37
123123+ check_err $? "Did not find FDB entry when should"
124124+
125125+ # Wait for 10 seconds after the ageing time to make sure the FDB entry
126126+ # was not aged out
127127+ ageing_time=$(bridge_ageing_time_get br0)
128128+ sleep $((ageing_time + 10))
129129+
130130+ bridge fdb show brport $swp1 | grep -q de:ad:be:ef:13:37
131131+ check_err $? "FDB entry was aged out when should not"
132132+
133133+ $MZ $h2 -c 1 -p 64 -a $mac -t ip -q
134134+
135135+ bridge fdb show brport $swp2 | grep -q de:ad:be:ef:13:37
136136+ check_err $? "FDB entry did not roam when should"
137137+
138138+ log_test "Externally learned FDB entry - ageing & roaming"
139139+
140140+ bridge fdb del de:ad:be:ef:13:37 dev $swp2 master vlan 1 &> /dev/null
141141+ bridge fdb del de:ad:be:ef:13:37 dev $swp1 master vlan 1 &> /dev/null
97142}
98143
99144trap cleanup EXIT
···203203{
204204 struct ip *iphdr = (struct ip *)ip_frame;
205205 struct ip6_hdr *ip6hdr = (struct ip6_hdr *)ip_frame;
206206+ const bool ipv4 = !ipv6;
206207 int res;
207208 int offset;
208209 int frag_len;
···240239 iphdr->ip_sum = 0;
241240 }
242241
242242+ /* Occasionally test in-order fragments. */
243243+ if (!cfg_overlap && (rand() % 100 < 15)) {
244244+ offset = 0;
245245+ while (offset < (UDP_HLEN + payload_len)) {
246246+ send_fragment(fd_raw, addr, alen, offset, ipv6);
247247+ offset += max_frag_len;
248248+ }
249249+ return;
250250+ }
251251+
252252+ /* Occasionally test IPv4 "runs" (see net/ipv4/ip_fragment.c) */
253253+ if (ipv4 && !cfg_overlap && (rand() % 100 < 20) &&
254254+ (payload_len > 9 * max_frag_len)) {
255255+ offset = 6 * max_frag_len;
256256+ while (offset < (UDP_HLEN + payload_len)) {
257257+ send_fragment(fd_raw, addr, alen, offset, ipv6);
258258+ offset += max_frag_len;
259259+ }
260260+ offset = 3 * max_frag_len;
261261+ while (offset < 6 * max_frag_len) {
262262+ send_fragment(fd_raw, addr, alen, offset, ipv6);
263263+ offset += max_frag_len;
264264+ }
265265+ offset = 0;
266266+ while (offset < 3 * max_frag_len) {
267267+ send_fragment(fd_raw, addr, alen, offset, ipv6);
268268+ offset += max_frag_len;
269269+ }
270270+ return;
271271+ }
272272+
243273 /* Odd fragments. */
244274 offset = max_frag_len;
245275 while (offset < (UDP_HLEN + payload_len)) {
246276 send_fragment(fd_raw, addr, alen, offset, ipv6);
277277+ /* IPv4 ignores duplicates, so randomly send a duplicate. */
278278+ if (ipv4 && (1 == rand() % 100))
279279+ send_fragment(fd_raw, addr, alen, offset, ipv6);
247280 offset += 2 * max_frag_len;
248281 }
249282
250283 if (cfg_overlap) {
251284 /* Send an extra random fragment. */
252252- offset = rand() % (UDP_HLEN + payload_len - 1);
253253- /* sendto() returns EINVAL if offset + frag_len is too small. */
254285 if (ipv6) {
255286 struct ip6_frag *fraghdr = (struct ip6_frag *)(ip_frame + IP6_HLEN);
287287+ /* sendto() returns EINVAL if offset + frag_len is too small. */
288288+ offset = rand() % (UDP_HLEN + payload_len - 1);
256289 frag_len = max_frag_len + rand() % 256;
257290 /* In IPv6 if !!(frag_len % 8), the fragment is dropped. */
258291 frag_len &= ~0x7;
···294259 ip6hdr->ip6_plen = htons(frag_len);
295260 frag_len += IP6_HLEN;
296261 } else {
297297- frag_len = IP4_HLEN + UDP_HLEN + rand() % 256;
262262+ /* In IPv4, duplicates and some fragments completely inside
263263+ * previously sent fragments are dropped/ignored. So
264264+ * random offset and frag_len can result in a dropped
265265+ * fragment instead of a dropped queue/packet. So we
266266+ * hard-code offset and frag_len.
267267+ *
268268+ * See ade446403bfb ("net: ipv4: do not handle duplicate
269269+ * fragments as overlapping").
270270+ */
271271+ if (max_frag_len * 4 < payload_len || max_frag_len < 16) {
272272+ /* not enough payload to play with random offset and frag_len. */
273273+ offset = 8;
274274+ frag_len = IP4_HLEN + UDP_HLEN + max_frag_len;
275275+ } else {
276276+ offset = rand() % (payload_len / 2);
277277+ frag_len = 2 * max_frag_len + 1 + rand() % 256;
278278+ }
298279 iphdr->ip_off = htons(offset / 8 | IP4_MF);
299280 iphdr->ip_len = htons(frag_len);
300281 }
301282 res = sendto(fd_raw, ip_frame, frag_len, 0, addr, alen);
302283 if (res < 0)
303303- error(1, errno, "sendto overlap");
284284+ error(1, errno, "sendto overlap: %d", frag_len);
304285 if (res != frag_len)
305286 error(1, 0, "sendto overlap: %d vs %d", (int)res, frag_len);
306287 frag_counter++;
···326275 offset = 0;
327276 while (offset < (UDP_HLEN + payload_len)) {
328277 send_fragment(fd_raw, addr, alen, offset, ipv6);
278278+ /* IPv4 ignores duplicates, so randomly send a duplicate. */
279279+ if (ipv4 && (1 == rand() % 100))
280280+ send_fragment(fd_raw, addr, alen, offset, ipv6);
329281 offset += 2 * max_frag_len;
330282 }
331283}
···336282static void run_test(struct sockaddr *addr, socklen_t alen, bool ipv6)
337283{
338284 int fd_tx_raw, fd_rx_udp;
339339- struct timeval tv = { .tv_sec = 0, .tv_usec = 10 * 1000 };
285285+ /* Frag queue timeout is set to one second in the calling script;
286286+ * socket timeout should be just a bit longer to avoid tests interfering
287287+ * with each other.
288288+ */
289289+ struct timeval tv = { .tv_sec = 1, .tv_usec = 10 };
340290 int idx;
341291 int min_frag_len = ipv6 ? 1280 : 8;
342292
···366308 payload_len += (rand() % 4096)) {
367309 if (cfg_verbose)
368310 printf("payload_len: %d\n", payload_len);
369369- max_frag_len = min_frag_len;
370370- do {
311311+
312312+ if (cfg_overlap) {
313313+ /* With overlaps, one send/receive pair below takes
314314+ * at least one second (== timeout) to run, so there
315315+ * is not enough test time to run a nested loop:
316316+ * the full overlap test takes 20-30 seconds.
317317+ */
318318+ max_frag_len = min_frag_len +
319319+ rand() % (1500 - FRAG_HLEN - min_frag_len);
371320 send_udp_frags(fd_tx_raw, addr, alen, ipv6);
372321 recv_validate_udp(fd_rx_udp);
373373- max_frag_len += 8 * (rand() % 8);
374374- } while (max_frag_len < (1500 - FRAG_HLEN) && max_frag_len <= payload_len);
322322+ } else {
323323+ /* Without overlaps, each packet reassembly (== one
324324+ * send/receive pair below) takes very little time to
325325+ * run, so we can easily afford more thorough testing
326326+ * with a nested loop: the full non-overlap test takes
327327+ * less than one second.
328328+ */
329329+ max_frag_len = min_frag_len;
330330+ do {
331331+ send_udp_frags(fd_tx_raw, addr, alen, ipv6);
332332+ recv_validate_udp(fd_rx_udp);
333333+ max_frag_len += 8 * (rand() % 8);
334334+ } while (max_frag_len < (1500 - FRAG_HLEN) &&
335335+ max_frag_len <= payload_len);
336336+ }
375337 }
376338
377339 /* Cleanup. */
+8-1
tools/testing/selftests/net/ip_defrag.sh
···1111setup() {
1212 ip netns add "${NETNS}"
1313 ip -netns "${NETNS}" link set lo up
1414+
1415 ip netns exec "${NETNS}" sysctl -w net.ipv4.ipfrag_high_thresh=9000000 >/dev/null 2>&1
1516 ip netns exec "${NETNS}" sysctl -w net.ipv4.ipfrag_low_thresh=7000000 >/dev/null 2>&1
1717+ ip netns exec "${NETNS}" sysctl -w net.ipv4.ipfrag_time=1 >/dev/null 2>&1
1818+
1619 ip netns exec "${NETNS}" sysctl -w net.ipv6.ip6frag_high_thresh=9000000 >/dev/null 2>&1
1720 ip netns exec "${NETNS}" sysctl -w net.ipv6.ip6frag_low_thresh=7000000 >/dev/null 2>&1
2121+ ip netns exec "${NETNS}" sysctl -w net.ipv6.ip6frag_time=1 >/dev/null 2>&1
2222+
2323+ # DST cache can get full with a lot of frags, with GC not keeping up with the test.
2424+ ip netns exec "${NETNS}" sysctl -w net.ipv6.route.max_size=65536 >/dev/null 2>&1
1825}
1926
2027cleanup() {
···3427echo "ipv4 defrag"
3528ip netns exec "${NETNS}" ./ip_defrag -4
3629
3737-
3830echo "ipv4 defrag with overlaps"
3931ip netns exec "${NETNS}" ./ip_defrag -4o
4032
···4337echo "ipv6 defrag with overlaps"
4438ip netns exec "${NETNS}" ./ip_defrag -6o
4539
4040+echo "all tests done"