···
+What:		/sys/kernel/debug/pktcdvd/pktcdvd[0-7]
+Date:		Oct. 2006
+KernelVersion:	2.6.20
+Contact:	Thomas Maier <balagi@justmail.de>
+Description:
+
+The pktcdvd module (packet writing driver) creates
+these files in debugfs:
+
+/sys/kernel/debug/pktcdvd/pktcdvd[0-7]/
+
+	==== ====== ====================================
+	info 0444   Lots of driver statistics and infos.
+	==== ====== ====================================
+
+Example::
+
+	cat /sys/kernel/debug/pktcdvd/pktcdvd0/info
+97
Documentation/ABI/testing/sysfs-class-pktcdvd
···
+sysfs interface
+---------------
+The pktcdvd module (packet writing driver) creates the following files in the
+sysfs: (<devid> is in the format major:minor)
+
+What:		/sys/class/pktcdvd/add
+What:		/sys/class/pktcdvd/remove
+What:		/sys/class/pktcdvd/device_map
+Date:		Oct. 2006
+KernelVersion:	2.6.20
+Contact:	Thomas Maier <balagi@justmail.de>
+Description:
+
+		========== ==============================================
+		add        (WO) Write a block device id (major:minor) to
+		           create a new pktcdvd device and map it to the
+		           block device.
+
+		remove     (WO) Write the pktcdvd device id (major:minor)
+		           to remove the pktcdvd device.
+
+		device_map (RO) Shows the device mapping in format:
+		           pktcdvd[0-7] <pktdevid> <blkdevid>
+		========== ==============================================
+
+
+What:		/sys/class/pktcdvd/pktcdvd[0-7]/dev
+What:		/sys/class/pktcdvd/pktcdvd[0-7]/uevent
+Date:		Oct. 2006
+KernelVersion:	2.6.20
+Contact:	Thomas Maier <balagi@justmail.de>
+Description:
+		dev:	(RO) Device id
+
+		uevent:	(WO) To send a uevent
+
+
+What:		/sys/class/pktcdvd/pktcdvd[0-7]/stat/packets_started
+What:		/sys/class/pktcdvd/pktcdvd[0-7]/stat/packets_finished
+What:		/sys/class/pktcdvd/pktcdvd[0-7]/stat/kb_written
+What:		/sys/class/pktcdvd/pktcdvd[0-7]/stat/kb_read
+What:		/sys/class/pktcdvd/pktcdvd[0-7]/stat/kb_read_gather
+What:		/sys/class/pktcdvd/pktcdvd[0-7]/stat/reset
+Date:		Oct. 2006
+KernelVersion:	2.6.20
+Contact:	Thomas Maier <balagi@justmail.de>
+Description:
+		packets_started:	(RO) Number of started packets.
+
+		packets_finished:	(RO) Number of finished packets.
+
+		kb_written:		(RO) kBytes written.
+
+		kb_read:		(RO) kBytes read.
+
+		kb_read_gather:		(RO) kBytes read to fill write packets.
+
+		reset:			(WO) Write any value to it to reset
+					pktcdvd device statistic values, like
+					bytes read/written.
+
+
+What:		/sys/class/pktcdvd/pktcdvd[0-7]/write_queue/size
+What:		/sys/class/pktcdvd/pktcdvd[0-7]/write_queue/congestion_off
+What:		/sys/class/pktcdvd/pktcdvd[0-7]/write_queue/congestion_on
+Date:		Oct. 2006
+KernelVersion:	2.6.20
+Contact:	Thomas Maier <balagi@justmail.de>
+Description:
+		============== ================================================
+		size           (RO) Contains the size of the bio write queue.
+
+		congestion_off (RW) If the bio write queue size is below this
+		               mark, accept new bio requests from the block
+		               layer.
+
+		congestion_on  (RW) If the bio write queue size is higher than
+		               this mark, no longer accept bio write requests
+		               from the block layer and wait until the pktcdvd
+		               device has processed enough bios so that the bio
+		               write queue size is below the congestion off
+		               mark. A value of <= 0 disables congestion
+		               control.
+		============== ================================================
+
+
+Example:
+--------
+To use the pktcdvd sysfs interface directly, you can do::
+
+	# create a new pktcdvd device mapped to /dev/hdc
+	echo "22:0" >/sys/class/pktcdvd/add
+	cat /sys/class/pktcdvd/device_map
+	# assuming device pktcdvd0 was created, look at stat's
+	cat /sys/class/pktcdvd/pktcdvd0/stat/kb_written
+	# print the device id of the mapped block device
+	fgrep pktcdvd0 /sys/class/pktcdvd/device_map
+	# remove device, using pktcdvd0 device id 253:0
+	echo "253:0" >/sys/class/pktcdvd/remove
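The ``device_map`` format documented above (``pktcdvd[0-7] <pktdevid> <blkdevid>``, with both ids as ``major:minor`` strings) is easy to consume from scripts. A minimal Python sketch of a parser for it — illustrative only, not part of the kernel or its tooling:

```python
def parse_device_map(text):
    """Parse /sys/class/pktcdvd/device_map contents.

    Each non-empty line has the documented format:
        pktcdvd[0-7] <pktdevid> <blkdevid>
    where both device ids are "major:minor" strings.
    """
    mapping = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) != 3:
            continue  # skip blank or malformed lines
        name, pktdevid, blkdevid = fields
        mapping[name] = {"pkt": pktdevid, "blk": blkdevid}
    return mapping


# Example using the ids from the Example section above (22:0, 253:0).
sample = "pktcdvd0 253:0 22:0\n"
print(parse_device_map(sample))
```

In practice the input would come from reading ``/sys/class/pktcdvd/device_map`` on a system with the module loaded.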
···
       or applicable for the respective data port.
       More info in MIPI Alliance SoundWire 1.0 Specifications.
     minItems: 3
-    maxItems: 5
+    maxItems: 8

   qcom,ports-sinterval-low:
     $ref: /schemas/types.yaml#/definitions/uint8-array
···
       or applicable for the respective data port.
       More info in MIPI Alliance SoundWire 1.0 Specifications.
     minItems: 3
-    maxItems: 5
+    maxItems: 8

   qcom,ports-block-pack-mode:
     $ref: /schemas/types.yaml#/definitions/uint8-array
···
       or applicable for the respective data port.
       More info in MIPI Alliance SoundWire 1.0 Specifications.
     minItems: 3
-    maxItems: 5
+    maxItems: 8
     items:
       oneOf:
         - minimum: 0
···
       or applicable for the respective data port.
       More info in MIPI Alliance SoundWire 1.0 Specifications.
     minItems: 3
-    maxItems: 5
+    maxItems: 8
     items:
       oneOf:
         - minimum: 0
···
       or applicable for the respective data port.
       More info in MIPI Alliance SoundWire 1.0 Specifications.
     minItems: 3
-    maxItems: 5
+    maxItems: 8
     items:
       oneOf:
         - minimum: 0
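The effect of the ``maxItems`` bump above is simply that the ``qcom,ports-*`` uint8 arrays may now carry up to eight per-port values instead of five. A hedged sketch of the resulting constraint (the function name is ours, not part of the binding or of dtschema):

```python
def ports_array_ok(values, min_items=3, max_items=8):
    """Check a qcom,ports-* uint8-array against the updated schema
    bounds: between minItems (3) and maxItems (8) entries, each a
    valid uint8."""
    return (min_items <= len(values) <= max_items
            and all(isinstance(v, int) and 0 <= v <= 0xFF for v in values))
```

Real validation of the binding is of course done by the dtschema tooling, not by hand-written checks like this.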
···
+.. SPDX-License-Identifier: GPL-2.0
+
+===================================
+Linux NVMe feature and quirk policy
+===================================
+
+This file explains the policy used to decide what is supported by the
+Linux NVMe driver and what is not.
+
+
+Introduction
+============
+
+NVM Express is an open collection of standards and information.
+
+The Linux NVMe host driver in drivers/nvme/host/ supports devices
+implementing the NVM Express (NVMe) family of specifications, which
+currently consists of a number of documents:
+
+ - the NVMe Base specification
+ - various Command Set specifications (e.g. NVM Command Set)
+ - various Transport specifications (e.g. PCIe, Fibre Channel, RDMA, TCP)
+ - the NVMe Management Interface specification
+
+See https://nvmexpress.org/developers/ for the NVMe specifications.
+
+
+Supported features
+==================
+
+NVMe is a large suite of specifications, and contains features that are only
+useful or suitable for specific use-cases. It is important to note that Linux
+does not aim to implement every feature in the specification. Every additional
+feature implemented introduces more code, more maintenance and potentially more
+bugs. Hence there is an inherent tradeoff between functionality and
+maintainability of the NVMe host driver.
+
+Any feature implemented in the Linux NVMe host driver must support the
+following requirements:
+
+  1. The feature is specified in a release version of an official NVMe
+     specification, or in a ratified Technical Proposal (TP) that is
+     available on the NVMe website. Or, if it is not directly related to the
+     on-wire protocol, does not contradict any of the NVMe specifications.
+  2. Does not conflict with the Linux architecture, nor the design of the
+     NVMe host driver.
+  3. Has a clear, indisputable value proposition and a wide consensus across
+     the community.
+
+Vendor specific extensions are generally not supported in the NVMe host
+driver.
+
+It is strongly recommended to work with the Linux NVMe and block layer
+maintainers and get feedback on specification changes that are intended
+to be used by the Linux NVMe host driver in order to avoid conflict at a
+later stage.
+
+
+Quirks
+======
+
+Sometimes implementations of open standards fail to correctly implement parts
+of the standards. Linux uses identifier-based quirks to work around such
+implementation bugs. The intent of quirks is to deal with widely available
+hardware, usually consumer, which Linux users can't use without these quirks.
+Typically these implementations are not or only superficially tested with Linux
+by the hardware manufacturer.
+
+The Linux NVMe maintainers decide ad hoc whether to quirk implementations
+based on the impact of the problem to Linux users and how it impacts
+maintainability of the driver. In general quirks are a last resort, if no
+firmware updates or other workarounds are available from the vendor.
+
+Quirks will not be added to the Linux kernel for hardware that isn't available
+on the mass market. Hardware that fails qualification for enterprise Linux
+distributions, ChromeOS, Android or other consumers of the Linux kernel
+should be fixed before it is shipped instead of relying on Linux quirks.
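The identifier-based quirk mechanism described above can be pictured as a table keyed on device identity, consulted once at probe time. A simplified sketch — the ids and quirk name below are made up for illustration; the real table and matching logic live in drivers/nvme/host:

```python
# Hypothetical (vendor_id, device_id) -> quirk-set table. The real
# driver matches on PCI ids and similar identifiers.
QUIRK_TABLE = {
    (0xabcd, 0x0001): {"NO_DEEP_POWER_STATES"},  # made-up example entry
}


def quirks_for(vendor_id, device_id):
    """Return the set of quirks to apply for a given device identity,
    or the empty set if the device has no known implementation bugs."""
    return QUIRK_TABLE.get((vendor_id, device_id), set())
```

The point of keying on identity rather than behavior is that a quirk only ever affects the specific broken implementation, never conforming devices.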
+205-176
Documentation/process/maintainer-netdev.rst
···
 .. _netdev-FAQ:

-==========
-netdev FAQ
-==========
+=============================
+Networking subsystem (netdev)
+=============================

 tl;dr
 -----
···
  - don't repost your patches within one 24h period
  - reverse xmas tree

-What is netdev?
----------------
-It is a mailing list for all network-related Linux stuff. This
+netdev
+------
+
+netdev is a mailing list for all network-related Linux stuff. This
 includes anything found under net/ (i.e. core code like IPv6) and
 drivers/net (i.e. hardware specific drivers) in the Linux source tree.

 Note that some subsystems (e.g. wireless drivers) which have a high
-volume of traffic have their own specific mailing lists.
+volume of traffic have their own specific mailing lists and trees.

 The netdev list is managed (like many other Linux mailing lists) through
 VGER (http://vger.kernel.org/) with archives available at
···
 Linux development (i.e. RFC, review, comments, etc.) takes place on
 netdev.

-How do the changes posted to netdev make their way into Linux?
---------------------------------------------------------------
-There are always two trees (git repositories) in play. Both are
-driven by David Miller, the main network maintainer. There is the
-``net`` tree, and the ``net-next`` tree. As you can probably guess from
-the names, the ``net`` tree is for fixes to existing code already in the
-mainline tree from Linus, and ``net-next`` is where the new code goes
-for the future release. You can find the trees here:
+Development cycle
+-----------------

-- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
-- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
-
-How do I indicate which tree (net vs. net-next) my patch should be in?
-----------------------------------------------------------------------
-To help maintainers and CI bots you should explicitly mark which tree
-your patch is targeting. Assuming that you use git, use the prefix
-flag::
-
-  git format-patch --subject-prefix='PATCH net-next' start..finish
-
-Use ``net`` instead of ``net-next`` (always lower case) in the above for
-bug-fix ``net`` content.
-
-How often do changes from these trees make it to the mainline Linus tree?
--------------------------------------------------------------------------
-To understand this, you need to know a bit of background information on
+Here is a bit of background information on
 the cadence of Linux development. Each new release starts off with a
 two week "merge window" where the main maintainers feed their new stuff
 to Linus for merging into the mainline tree. After the two weeks, the
···
 state of churn), and a week after the last vX.Y-rcN was done, the
 official vX.Y is released.

-Relating that to netdev: At the beginning of the 2-week merge window,
-the ``net-next`` tree will be closed - no new changes/features. The
-accumulated new content of the past ~10 weeks will be passed onto
+To find out where we are now in the cycle - load the mainline (Linus)
+page here:
+
+  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+
+and note the top of the "tags" section. If it is rc1, it is early in
+the dev cycle. If it was tagged rc7 a week ago, then a release is
+probably imminent. If the most recent tag is a final release tag
+(without an ``-rcN`` suffix) - we are most likely in a merge window
+and ``net-next`` is closed.
+
+git trees and patch flow
+------------------------
+
+There are two networking trees (git repositories) in play. Both are
+driven by David Miller, the main network maintainer. There is the
+``net`` tree, and the ``net-next`` tree. As you can probably guess from
+the names, the ``net`` tree is for fixes to existing code already in the
+mainline tree from Linus, and ``net-next`` is where the new code goes
+for the future release. You can find the trees here:
+
+- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
+- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
+
+Relating that to kernel development: At the beginning of the 2-week
+merge window, the ``net-next`` tree will be closed - no new changes/features.
+The accumulated new content of the past ~10 weeks will be passed onto
 mainline/Linus via a pull request for vX.Y -- at the same time, the
 ``net`` tree will start accumulating fixes for this pulled content
 relating to vX.Y
···

 Finally, the vX.Y gets released, and the whole cycle starts over.

-So where are we now in this cycle?
-----------------------------------
+netdev patch review
+-------------------

-Load the mainline (Linus) page here:
+Patch status
+~~~~~~~~~~~~

- https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
-
-and note the top of the "tags" section. If it is rc1, it is early in
-the dev cycle. If it was tagged rc7 a week ago, then a release is
-probably imminent. If the most recent tag is a final release tag
-(without an ``-rcN`` suffix) - we are most likely in a merge window
-and ``net-next`` is closed.
-
-How can I tell the status of a patch I've sent?
------------------------------------------------
-Start by looking at the main patchworks queue for netdev:
+Status of a patch can be checked by looking at the main patchwork
+queue for netdev:

  https://patchwork.kernel.org/project/netdevbpf/list/
···
 which carried them so if you have trouble finding your patch append
 the value of ``Message-ID`` to the URL above.

-How long before my patch is accepted?
--------------------------------------
+Updating patch status
+~~~~~~~~~~~~~~~~~~~~~
+
+It may be tempting to help the maintainers and update the state of your
+own patches when you post a new version or spot a bug. Please **do not**
+do that.
+Interfering with the patch status on patchwork will only cause confusion. Leave
+it to the maintainer to figure out what is the most recent and current
+version that should be applied. If there is any doubt, the maintainer
+will reply and ask what should be done.
+
+Review timelines
+~~~~~~~~~~~~~~~~
+
 Generally speaking, the patches get triaged quickly (in less than
 48h). But be patient, if your patch is active in patchwork (i.e. it's
 listed on the project's patch list) the chances it was missed are close to zero.
···
 patch is a good way to ensure your patch is ignored or pushed to the
 bottom of the priority list.

-Should I directly update patchwork state of my own patches?
------------------------------------------------------------
-It may be tempting to help the maintainers and update the state of your
-own patches when you post a new version or spot a bug. Please do not do that.
-Interfering with the patch status on patchwork will only cause confusion. Leave
-it to the maintainer to figure out what is the most recent and current
-version that should be applied. If there is any doubt, the maintainer
-will reply and ask what should be done.
+Partial resends
+~~~~~~~~~~~~~~~

-How do I divide my work into patches?
--------------------------------------
-
-Put yourself in the shoes of the reviewer. Each patch is read separately
-and therefore should constitute a comprehensible step towards your stated
-goal.
-
-Avoid sending series longer than 15 patches. Larger series takes longer
-to review as reviewers will defer looking at it until they find a large
-chunk of time. A small series can be reviewed in a short time, so Maintainers
-just do it. As a result, a sequence of smaller series gets merged quicker and
-with better review coverage. Re-posting large series also increases the mailing
-list traffic.
-
-I made changes to only a few patches in a patch series should I resend only those changed?
-------------------------------------------------------------------------------------------
-No, please resend the entire patch series and make sure you do number your
+Please always resend the entire patch series and make sure you do number your
 patches such that it is clear this is the latest and greatest set of patches
-that can be applied.
+that can be applied. Do not try to resend just the patches which changed.

-I have received review feedback, when should I post a revised version of the patches?
--------------------------------------------------------------------------------------
-Allow at least 24 hours to pass between postings. This will ensure reviewers
-from all geographical locations have a chance to chime in. Do not wait
-too long (weeks) between postings either as it will make it harder for reviewers
-to recall all the context.
+Handling misapplied patches
+~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Make sure you address all the feedback in your new posting. Do not post a new
-version of the code if the discussion about the previous version is still
-ongoing, unless directly instructed by a reviewer.
-
-I submitted multiple versions of a patch series and it looks like a version other than the last one has been accepted, what should I do?
-----------------------------------------------------------------------------------------------------------------------------------------
+Occasionally a patch series gets applied before receiving critical feedback,
+or the wrong version of a series gets applied.
 There is no revert possible, once it is pushed out, it stays like that.
 Please send incremental versions on top of what has been merged in order to fix
 the patches the way they would look like if your latest patch series was to be
 merged.

-Are there special rules regarding stable submissions on netdev?
----------------------------------------------------------------
+Stable tree
+~~~~~~~~~~~
+
 While it used to be the case that netdev submissions were not supposed
 to carry explicit ``CC: stable@vger.kernel.org`` tags that is no longer
 the case today. Please follow the standard stable rules in
 :ref:`Documentation/process/stable-kernel-rules.rst <stable_kernel_rules>`,
 and make sure you include appropriate Fixes tags!

-Is the comment style convention different for the networking content?
----------------------------------------------------------------------
-Yes, in a largely trivial way. Instead of this::
+Security fixes
+~~~~~~~~~~~~~~

-  /*
-   * foobar blah blah blah
-   * another line of text
-   */
-
-it is requested that you make it look like this::
-
-  /* foobar blah blah blah
-   * another line of text
-   */
-
-What is "reverse xmas tree"?
-----------------------------
-
-Netdev has a convention for ordering local variables in functions.
-Order the variable declaration lines longest to shortest, e.g.::
-
-  struct scatterlist *sg;
-  struct sk_buff *skb;
-  int err, i;
-
-If there are dependencies between the variables preventing the ordering
-move the initialization out of line.
-
-I am working in existing code which uses non-standard formatting. Which formatting should I use?
-------------------------------------------------------------------------------------------------
-Make your code follow the most recent guidelines, so that eventually all code
-in the domain of netdev is in the preferred format.
-
-I found a bug that might have possible security implications or similar. Should I mail the main netdev maintainer off-list?
----------------------------------------------------------------------------------------------------------------------------
-No. The current netdev maintainer has consistently requested that
+Do not email netdev maintainers directly if you think you discovered
+a bug that might have possible security implications.
+The current netdev maintainer has consistently requested that
 people use the mailing lists and not reach out directly. If you aren't
 OK with that, then perhaps consider mailing security@kernel.org or
 reading about http://oss-security.openwall.org/wiki/mailing-lists/distros
 as possible alternative mechanisms.

-What level of testing is expected before I submit my change?
-------------------------------------------------------------
-At the very minimum your changes must survive an ``allyesconfig`` and an
-``allmodconfig`` build with ``W=1`` set without new warnings or failures.

-Ideally you will have done run-time testing specific to your change,
-and the patch series contains a set of kernel selftest for
-``tools/testing/selftests/net`` or using the KUnit framework.
+Co-posting changes to user space components
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-You are expected to test your changes on top of the relevant networking
-tree (``net`` or ``net-next``) and not e.g. a stable tree or ``linux-next``.
-
-How do I post corresponding changes to user space components?
--------------------------------------------------------------
 User space code exercising kernel features should be posted
 alongside kernel patches. This gives reviewers a chance to see
 how any new interface is used and how well it works.
···
 Posting as one thread is discouraged because it confuses patchwork
 (as of patchwork 2.2.2).

-Can I reproduce the checks from patchwork on my local machine?
---------------------------------------------------------------
+Preparing changes
+-----------------

-Checks in patchwork are mostly simple wrappers around existing kernel
-scripts, the sources are available at:
-
-https://github.com/kuba-moo/nipa/tree/master/tests
-
-Running all the builds and checks locally is a pain, can I post my patches and have the patchwork bot validate them?
---------------------------------------------------------------------------------------------------------------------
-
-No, you must ensure that your patches are ready by testing them locally
-before posting to the mailing list. The patchwork build bot instance
-gets overloaded very easily and netdev@vger really doesn't need more
-traffic if we can help it.
-
-netdevsim is great, can I extend it for my out-of-tree tests?
--------------------------------------------------------------
-
-No, ``netdevsim`` is a test vehicle solely for upstream tests.
-(Please add your tests under ``tools/testing/selftests/``.)
-
-We also give no guarantees that ``netdevsim`` won't change in the future
-in a way which would break what would normally be considered uAPI.
-
-Is netdevsim considered a "user" of an API?
--------------------------------------------
-
-Linux kernel has a long standing rule that no API should be added unless
-it has a real, in-tree user. Mock-ups and tests based on ``netdevsim`` are
-strongly encouraged when adding new APIs, but ``netdevsim`` in itself
-is **not** considered a use case/user.
-
-Any other tips to help ensure my net/net-next patch gets OK'd?
---------------------------------------------------------------
-Attention to detail. Re-read your own work as if you were the
+Attention to detail is important. Re-read your own work as if you were the
 reviewer. You can start with using ``checkpatch.pl``, perhaps even with
 the ``--strict`` flag. But do not be mindlessly robotic in doing so.
 If your change is a bug fix, make sure your commit log indicates the
···
 :ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
 to be sure you are not repeating some common mistake documented there.

-My company uses peer feedback in employee performance reviews. Can I ask netdev maintainers for feedback?
----------------------------------------------------------------------------------------------------------
+Indicating target tree
+~~~~~~~~~~~~~~~~~~~~~~

-Yes, especially if you spend significant amount of time reviewing code
+To help maintainers and CI bots you should explicitly mark which tree
+your patch is targeting. Assuming that you use git, use the prefix
+flag::
+
+  git format-patch --subject-prefix='PATCH net-next' start..finish
+
+Use ``net`` instead of ``net-next`` (always lower case) in the above for
+bug-fix ``net`` content.
+
+Dividing work into patches
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Put yourself in the shoes of the reviewer. Each patch is read separately
+and therefore should constitute a comprehensible step towards your stated
+goal.
+
+Avoid sending series longer than 15 patches. Larger series takes longer
+to review as reviewers will defer looking at it until they find a large
+chunk of time. A small series can be reviewed in a short time, so Maintainers
+just do it. As a result, a sequence of smaller series gets merged quicker and
+with better review coverage. Re-posting large series also increases the mailing
+list traffic.
+
+Multi-line comments
+~~~~~~~~~~~~~~~~~~~
+
+Comment style convention is slightly different for networking and most of
+the tree. Instead of this::
+
+  /*
+   * foobar blah blah blah
+   * another line of text
+   */
+
+it is requested that you make it look like this::
+
+  /* foobar blah blah blah
+   * another line of text
+   */
+
+Local variable ordering ("reverse xmas tree", "RCS")
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Netdev has a convention for ordering local variables in functions.
+Order the variable declaration lines longest to shortest, e.g.::
+
+  struct scatterlist *sg;
+  struct sk_buff *skb;
+  int err, i;
+
+If there are dependencies between the variables preventing the ordering
+move the initialization out of line.
+
+Format precedence
+~~~~~~~~~~~~~~~~~
+
+When working in existing code which uses nonstandard formatting make
+your code follow the most recent guidelines, so that eventually all code
+in the domain of netdev is in the preferred format.
+
+Resending after review
+~~~~~~~~~~~~~~~~~~~~~~
+
+Allow at least 24 hours to pass between postings. This will ensure reviewers
+from all geographical locations have a chance to chime in. Do not wait
+too long (weeks) between postings either as it will make it harder for reviewers
+to recall all the context.
+
+Make sure you address all the feedback in your new posting. Do not post a new
+version of the code if the discussion about the previous version is still
+ongoing, unless directly instructed by a reviewer.
+
+Testing
+-------
+
+Expected level of testing
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+At the very minimum your changes must survive an ``allyesconfig`` and an
+``allmodconfig`` build with ``W=1`` set without new warnings or failures.
+
+Ideally you will have done run-time testing specific to your change,
+and the patch series contains a set of kernel selftest for
+``tools/testing/selftests/net`` or using the KUnit framework.
+
+You are expected to test your changes on top of the relevant networking
+tree (``net`` or ``net-next``) and not e.g. a stable tree or ``linux-next``.
+
+patchwork checks
+~~~~~~~~~~~~~~~~
+
+Checks in patchwork are mostly simple wrappers around existing kernel
+scripts, the sources are available at:
+
+https://github.com/kuba-moo/nipa/tree/master/tests
+
+**Do not** post your patches just to run them through the checks.
+You must ensure that your patches are ready by testing them locally
+before posting to the mailing list. The patchwork build bot instance
+gets overloaded very easily and netdev@vger really doesn't need more
+traffic if we can help it.
+
+netdevsim
+~~~~~~~~~
+
+``netdevsim`` is a test driver which can be used to exercise driver
+configuration APIs without requiring capable hardware.
+Mock-ups and tests based on ``netdevsim`` are strongly encouraged when
+adding new APIs, but ``netdevsim`` in itself is **not** considered
+a use case/user. You must also implement the new APIs in a real driver.
+
+We give no guarantees that ``netdevsim`` won't change in the future
+in a way which would break what would normally be considered uAPI.
+
+``netdevsim`` is reserved for use by upstream tests only, so any
+new ``netdevsim`` features must be accompanied by selftests under
+``tools/testing/selftests/``.
+
+Testimonials / feedback
+-----------------------
+
+Some companies use peer feedback in employee performance reviews.
+Please feel free to request feedback from netdev maintainers,
+especially if you spend significant amount of time reviewing code
 and go out of your way to improve shared infrastructure.

 The feedback must be requested by you, the contributor, and will always
+26-20
Documentation/virt/kvm/api.rst
···
 32 vCPUs in the shared_info page, KVM does not automatically do so
 and instead requires that KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO be used
 explicitly even when the vcpu_info for a given vCPU resides at the
-"default" location in the shared_info page. This is because KVM is
-not aware of the Xen CPU id which is used as the index into the
-vcpu_info[] array, so cannot know the correct default location.
+"default" location in the shared_info page. This is because KVM may
+not be aware of the Xen CPU id which is used as the index into the
+vcpu_info[] array, so may not know the correct default location.

 Note that the shared info page may be constantly written to by KVM;
 it contains the event channel bitmap used to deliver interrupts to
···
 any vCPU has been running or any event channel interrupts can be
 routed to the guest.

+Setting the gfn to KVM_XEN_INVALID_GFN will disable the shared info
+page.
+
KVM_XEN_ATTR_TYPE_UPCALL_VECTOR
 Sets the exception vector used to deliver Xen event channel upcalls.
 This is the HVM-wide vector injected directly by the hypervisor
 (not through the local APIC), typically configured by a guest via
-HVM_PARAM_CALLBACK_IRQ.
+HVM_PARAM_CALLBACK_IRQ. This can be disabled again (e.g. for guest
+SHUTDOWN_soft_reset) by setting it to zero.

KVM_XEN_ATTR_TYPE_EVTCHN
 This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates
 support for KVM_XEN_HVM_CONFIG_EVTCHN_SEND features. It configures
 an outbound port number for interception of EVTCHNOP_send requests
-from the guest. A given sending port number may be directed back
-to a specified vCPU (by APIC ID) / port / priority on the guest,
-or to trigger events on an eventfd. The vCPU and priority can be
-changed by setting KVM_XEN_EVTCHN_UPDATE in a subsequent call,
-but other fields cannot change for a given sending port. A port
-mapping is removed by using KVM_XEN_EVTCHN_DEASSIGN in the flags
-field.
+from the guest. A given sending port number may be directed back to
+a specified vCPU (by APIC ID) / port / priority on the guest, or to
+trigger events on an eventfd. The vCPU and priority can be changed
+by setting KVM_XEN_EVTCHN_UPDATE in a subsequent call, but other
+fields cannot change for a given sending port. A port mapping is
+removed by using KVM_XEN_EVTCHN_DEASSIGN in the flags field. Passing
+KVM_XEN_EVTCHN_RESET in the flags field removes all interception of
+outbound event channels. The values of the flags field are mutually
+exclusive and cannot be combined as a bitmask.

KVM_XEN_ATTR_TYPE_XEN_VERSION
 This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates
···
 support for KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG. It enables the
 XEN_RUNSTATE_UPDATE flag which allows guest vCPUs to safely read
 other vCPUs' vcpu_runstate_info. Xen guests enable this feature via
-the VM_ASST_TYPE_runstate_update_flag of the HYPERVISOR_vm_assist
+the VMASST_TYPE_runstate_update_flag of the HYPERVISOR_vm_assist
 hypercall.

4.127 KVM_XEN_HVM_GET_ATTR
···
 As with the shared_info page for the VM, the corresponding page may be
 dirtied at any time if event channel interrupt delivery is enabled, so
 userspace should always assume that the page is dirty without relying
-on dirty logging.
+on dirty logging. Setting the gpa to KVM_XEN_INVALID_GPA will disable
+the vcpu_info.

KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO
 Sets the guest physical address of an additional pvclock structure
 for a given vCPU. This is typically used for guest vsyscall support.
+Setting the gpa to KVM_XEN_INVALID_GPA will disable the structure.

KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR
 Sets the guest physical address of the vcpu_runstate_info for a given
 vCPU. This is how a Xen guest tracks CPU state such as steal time.
+Setting the gpa to KVM_XEN_INVALID_GPA will disable the runstate area.

KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT
 Sets the runstate (RUNSTATE_running/_runnable/_blocked/_offline) of
···
 This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates
 support for KVM_XEN_HVM_CONFIG_EVTCHN_SEND features. It sets the
 event channel port/priority for the VIRQ_TIMER of the vCPU, as well
-as allowing a pending timer to be saved/restored.
+as allowing a pending timer to be saved/restored. Setting the timer
+port to zero disables kernel handling of the singleshot timer.

KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR
 This attribute is available when the KVM_CAP_XEN_HVM ioctl indicates
···
 per-vCPU local APIC upcall vector, configured by a Xen guest with
 the HVMOP_set_evtchn_upcall_vector hypercall. This is typically
 used by Windows guests, and is distinct from the HVM-wide upcall
-vector configured with HVM_PARAM_CALLBACK_IRQ.
+vector configured with HVM_PARAM_CALLBACK_IRQ. It is disabled by
+setting the vector to zero.

4.129 KVM_XEN_VCPU_GET_ATTR
···
Please note that the kernel is allowed to use the kvm_run structure as the
primary storage for certain register types. Therefore, the kernel may use the
values in kvm_run even if the corresponding bit in kvm_dirty_regs is not set.
-
-::
-
-  };
-

6. Capabilities that can be enabled on vCPUs
+14-5
Documentation/virt/kvm/locking.rst
···
- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare.

-- Unlike kvm->slots_lock, kvm->slots_arch_lock is released before
-  synchronize_srcu(&kvm->srcu). Therefore kvm->slots_arch_lock
-  can be taken inside a kvm->srcu read-side critical section,
-  while kvm->slots_lock cannot.
-
- kvm->mn_active_invalidate_count ensures that pairs of
  invalidate_range_start() and invalidate_range_end() callbacks
  use the same memslots array. kvm->slots_lock and kvm->slots_arch_lock
  are taken on the waiting side in install_new_memslots, so MMU notifiers
  must not take either kvm->slots_lock or kvm->slots_arch_lock.
+
+For SRCU:
+
+- ``synchronize_srcu(&kvm->srcu)`` is called _inside_
+  the kvm->slots_lock critical section, therefore kvm->slots_lock
+  cannot be taken inside a kvm->srcu read-side critical section.
+  Instead, kvm->slots_arch_lock is released before the call
+  to ``synchronize_srcu()`` and _can_ be taken inside a
+  kvm->srcu read-side critical section.
+
+- kvm->lock is taken inside kvm->srcu, therefore
+  ``synchronize_srcu(&kvm->srcu)`` cannot be called inside
+  a kvm->lock critical section. If you cannot delay the
+  call until after kvm->lock is released, use ``call_srcu``.

On x86:
+12-2
MAINTAINERS
···
F:	arch/x86/kvm/kvm_onhyperv.*
F:	arch/x86/kvm/svm/hyperv.*
F:	arch/x86/kvm/svm/svm_onhyperv.*
-F:	arch/x86/kvm/vmx/evmcs.*
+F:	arch/x86/kvm/vmx/hyperv.*

KVM X86 Xen (KVM/Xen)
M:	David Woodhouse <dwmw2@infradead.org>
···
S:	Supported
W:	http://git.infradead.org/nvme.git
T:	git://git.infradead.org/nvme.git
+F:	Documentation/nvme/
F:	drivers/nvme/host/
F:	drivers/nvme/common/
F:	include/linux/nvme*
···
S:	Supported
F:	Documentation/devicetree/bindings/input/pine64,pinephone-keyboard.yaml
F:	drivers/input/keyboard/pinephone-keyboard.c
+
+PKTCDVD DRIVER
+M:	linux-block@vger.kernel.org
+S:	Orphan
+F:	drivers/block/pktcdvd.c
+F:	include/linux/pktcdvd.h
+F:	include/uapi/linux/pktcdvd.h

PLANTOWER PMS7003 AIR POLLUTION SENSOR DRIVER
M:	Tomasz Duszynski <tduszyns@gmail.com>
···
F:	drivers/scsi/vmw_pvscsi.h

VMWARE VIRTUAL PTP CLOCK DRIVER
-M:	Vivek Thampi <vithampi@vmware.com>
+M:	Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu>
+M:	Deep Shah <sdeep@vmware.com>
+R:	Alexey Makhalov <amakhalov@vmware.com>
R:	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
L:	netdev@vger.kernel.org
S:	Supported
···
	dtb = get_fdt();
	__dt_setup_arch(dtb);

-	if (!early_init_dt_scan_memory())
+	if (early_init_dt_scan_memory())
		return;

	if (soc_info.mem_detect)
···
{
	unsigned long *reg, val, vaddr;
	char buffer[MAX_INSN_SIZE];
+	enum insn_mmio_type mmio;
	struct insn insn = {};
-	enum mmio_type mmio;
	int size, extend_size;
	u8 extend_val = 0;
···
		return -EINVAL;

	mmio = insn_decode_mmio(&insn, &size);
-	if (WARN_ON_ONCE(mmio == MMIO_DECODE_FAILED))
+	if (WARN_ON_ONCE(mmio == INSN_MMIO_DECODE_FAILED))
		return -EINVAL;

-	if (mmio != MMIO_WRITE_IMM && mmio != MMIO_MOVS) {
+	if (mmio != INSN_MMIO_WRITE_IMM && mmio != INSN_MMIO_MOVS) {
		reg = insn_get_modrm_reg_ptr(&insn, regs);
		if (!reg)
			return -EINVAL;
···

	/* Handle writes first */
	switch (mmio) {
-	case MMIO_WRITE:
+	case INSN_MMIO_WRITE:
		memcpy(&val, reg, size);
		if (!mmio_write(size, ve->gpa, val))
			return -EIO;
		return insn.length;
-	case MMIO_WRITE_IMM:
+	case INSN_MMIO_WRITE_IMM:
		val = insn.immediate.value;
		if (!mmio_write(size, ve->gpa, val))
			return -EIO;
		return insn.length;
-	case MMIO_READ:
-	case MMIO_READ_ZERO_EXTEND:
-	case MMIO_READ_SIGN_EXTEND:
+	case INSN_MMIO_READ:
+	case INSN_MMIO_READ_ZERO_EXTEND:
+	case INSN_MMIO_READ_SIGN_EXTEND:
		/* Reads are handled below */
		break;
-	case MMIO_MOVS:
-	case MMIO_DECODE_FAILED:
+	case INSN_MMIO_MOVS:
+	case INSN_MMIO_DECODE_FAILED:
		/*
		 * MMIO was accessed with an instruction that could not be
		 * decoded or handled properly. It was likely not using io.h
···
		return -EIO;

	switch (mmio) {
-	case MMIO_READ:
+	case INSN_MMIO_READ:
		/* Zero-extend for 32-bit operation */
		extend_size = size == 4 ? sizeof(*reg) : 0;
		break;
-	case MMIO_READ_ZERO_EXTEND:
+	case INSN_MMIO_READ_ZERO_EXTEND:
		/* Zero extend based on operand size */
		extend_size = insn.opnd_bytes;
		break;
-	case MMIO_READ_SIGN_EXTEND:
+	case INSN_MMIO_READ_SIGN_EXTEND:
		/* Sign extend based on operand size */
		extend_size = insn.opnd_bytes;
		if (size == 1 && val & BIT(7))
+1-1
arch/x86/events/amd/core.c
···
	 * numbered counter following it.
	 */
	for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
-		even_ctr_mask |= 1 << i;
+		even_ctr_mask |= BIT_ULL(i);

	pair_constraint = (struct event_constraint)
		__EVENT_CONSTRAINT(0, even_ctr_mask, 0,
···
		if (ctrl == PR_SPEC_FORCE_DISABLE)
			task_set_spec_ib_force_disable(task);
		task_update_spec_tif(task);
+		if (task == current)
+			indirect_branch_prediction_barrier();
		break;
	default:
		return -ERANGE;
+1-3
arch/x86/kernel/crash.c
···
	kbuf.buf_align = ELF_CORE_HEADER_ALIGN;
	kbuf.mem = KEXEC_BUF_MEM_UNKNOWN;
	ret = kexec_add_buffer(&kbuf);
-	if (ret) {
-		vfree((void *)image->elf_headers);
+	if (ret)
		return ret;
-	}
	image->elf_load_addr = kbuf.mem;
	pr_debug("Loaded ELF headers at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
		 image->elf_load_addr, kbuf.bufsz, kbuf.memsz);
+7-3
arch/x86/kernel/kprobes/core.c
···
#include <linux/extable.h>
#include <linux/kdebug.h>
#include <linux/kallsyms.h>
+#include <linux/kgdb.h>
#include <linux/ftrace.h>
#include <linux/kasan.h>
#include <linux/moduleloader.h>
···
		if (ret < 0)
			return 0;

+#ifdef CONFIG_KGDB
		/*
-		 * Another debugging subsystem might insert this breakpoint.
-		 * In that case, we can't recover it.
+		 * If there is a dynamically installed kgdb sw breakpoint,
+		 * this function should not be probed.
		 */
-		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE)
+		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE &&
+		    kgdb_has_hit_break(addr))
			return 0;
+#endif
		addr += insn.length;
	}
+8-20
arch/x86/kernel/kprobes/opt.c
···
#include <linux/extable.h>
#include <linux/kdebug.h>
#include <linux/kallsyms.h>
+#include <linux/kgdb.h>
#include <linux/ftrace.h>
#include <linux/objtool.h>
#include <linux/pgtable.h>
···
	return ret;
}

-static bool is_padding_int3(unsigned long addr, unsigned long eaddr)
-{
-	unsigned char ops;
-
-	for (; addr < eaddr; addr++) {
-		if (get_kernel_nofault(ops, (void *)addr) < 0 ||
-		    ops != INT3_INSN_OPCODE)
-			return false;
-	}
-
-	return true;
-}
-
/* Decode whole function to ensure any instructions don't jump into target */
static int can_optimize(unsigned long paddr)
{
···
		ret = insn_decode_kernel(&insn, (void *)recovered_insn);
		if (ret < 0)
			return 0;
-
+#ifdef CONFIG_KGDB
		/*
-		 * In the case of detecting unknown breakpoint, this could be
-		 * a padding INT3 between functions. Let's check that all the
-		 * rest of the bytes are also INT3.
+		 * If there is a dynamically installed kgdb sw breakpoint,
+		 * this function should not be probed.
		 */
-		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE)
-			return is_padding_int3(addr, paddr - offset + size) ? 1 : 0;
-
+		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE &&
+		    kgdb_has_hit_break(addr))
+			return 0;
+#endif
		/* Recover address */
		insn.kaddr = (void *)addr;
		insn.next_byte = (void *)(addr + insn.length);
+9-9
arch/x86/kernel/sev.c
···
static enum es_result vc_handle_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
{
	struct insn *insn = &ctxt->insn;
+	enum insn_mmio_type mmio;
	unsigned int bytes = 0;
-	enum mmio_type mmio;
	enum es_result ret;
	u8 sign_byte;
	long *reg_data;

	mmio = insn_decode_mmio(insn, &bytes);
-	if (mmio == MMIO_DECODE_FAILED)
+	if (mmio == INSN_MMIO_DECODE_FAILED)
		return ES_DECODE_FAILED;

-	if (mmio != MMIO_WRITE_IMM && mmio != MMIO_MOVS) {
+	if (mmio != INSN_MMIO_WRITE_IMM && mmio != INSN_MMIO_MOVS) {
		reg_data = insn_get_modrm_reg_ptr(insn, ctxt->regs);
		if (!reg_data)
			return ES_DECODE_FAILED;
	}

	switch (mmio) {
-	case MMIO_WRITE:
+	case INSN_MMIO_WRITE:
		memcpy(ghcb->shared_buffer, reg_data, bytes);
		ret = vc_do_mmio(ghcb, ctxt, bytes, false);
		break;
-	case MMIO_WRITE_IMM:
+	case INSN_MMIO_WRITE_IMM:
		memcpy(ghcb->shared_buffer, insn->immediate1.bytes, bytes);
		ret = vc_do_mmio(ghcb, ctxt, bytes, false);
		break;
-	case MMIO_READ:
+	case INSN_MMIO_READ:
		ret = vc_do_mmio(ghcb, ctxt, bytes, true);
		if (ret)
			break;
···
		memcpy(reg_data, ghcb->shared_buffer, bytes);
		break;
-	case MMIO_READ_ZERO_EXTEND:
+	case INSN_MMIO_READ_ZERO_EXTEND:
		ret = vc_do_mmio(ghcb, ctxt, bytes, true);
		if (ret)
			break;
···
		memset(reg_data, 0, insn->opnd_bytes);
		memcpy(reg_data, ghcb->shared_buffer, bytes);
		break;
-	case MMIO_READ_SIGN_EXTEND:
+	case INSN_MMIO_READ_SIGN_EXTEND:
		ret = vc_do_mmio(ghcb, ctxt, bytes, true);
		if (ret)
			break;
···
		memset(reg_data, sign_byte, insn->opnd_bytes);
		memcpy(reg_data, ghcb->shared_buffer, bytes);
		break;
-	case MMIO_MOVS:
+	case INSN_MMIO_MOVS:
		ret = vc_handle_mmio_movs(ctxt, bytes);
		break;
	default:
+36-27
arch/x86/kvm/hyperv.c
···
}

struct kvm_hv_hcall {
+	/* Hypercall input data */
	u64 param;
	u64 ingpa;
	u64 outgpa;
···
	bool fast;
	bool rep;
	sse128_t xmm[HV_HYPERCALL_MAX_XMM_REGISTERS];
+
+	/*
+	 * Current read offset when KVM reads hypercall input data gradually,
+	 * either offset in bytes from 'ingpa' for regular hypercalls or the
+	 * number of already consumed 'XMM halves' for 'fast' hypercalls.
+	 */
+	union {
+		gpa_t data_offset;
+		int consumed_xmm_halves;
+	};
};

static int kvm_hv_get_hc_data(struct kvm *kvm, struct kvm_hv_hcall *hc,
-			      u16 orig_cnt, u16 cnt_cap, u64 *data,
-			      int consumed_xmm_halves, gpa_t offset)
+			      u16 orig_cnt, u16 cnt_cap, u64 *data)
{
	/*
	 * Preserve the original count when ignoring entries via a "cap", KVM
···
		 * Each XMM holds two sparse banks, but do not count halves that
		 * have already been consumed for hypercall parameters.
		 */
-		if (orig_cnt > 2 * HV_HYPERCALL_MAX_XMM_REGISTERS - consumed_xmm_halves)
+		if (orig_cnt > 2 * HV_HYPERCALL_MAX_XMM_REGISTERS - hc->consumed_xmm_halves)
			return HV_STATUS_INVALID_HYPERCALL_INPUT;

		for (i = 0; i < cnt; i++) {
-			j = i + consumed_xmm_halves;
+			j = i + hc->consumed_xmm_halves;
			if (j % 2)
				data[i] = sse128_hi(hc->xmm[j / 2]);
			else
···
		return 0;
	}

-	return kvm_read_guest(kvm, hc->ingpa + offset, data,
+	return kvm_read_guest(kvm, hc->ingpa + hc->data_offset, data,
			      cnt * sizeof(*data));
}

static u64 kvm_get_sparse_vp_set(struct kvm *kvm, struct kvm_hv_hcall *hc,
-				 u64 *sparse_banks, int consumed_xmm_halves,
-				 gpa_t offset)
+				 u64 *sparse_banks)
{
	if (hc->var_cnt > HV_MAX_SPARSE_VCPU_BANKS)
		return -EINVAL;

	/* Cap var_cnt to ignore banks that cannot contain a legal VP index. */
	return kvm_hv_get_hc_data(kvm, hc, hc->var_cnt, KVM_HV_MAX_SPARSE_VCPU_SET_BITS,
-				  sparse_banks, consumed_xmm_halves, offset);
+				  sparse_banks);
}

-static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc, u64 entries[],
-					int consumed_xmm_halves, gpa_t offset)
+static int kvm_hv_get_tlb_flush_entries(struct kvm *kvm, struct kvm_hv_hcall *hc, u64 entries[])
{
-	return kvm_hv_get_hc_data(kvm, hc, hc->rep_cnt, hc->rep_cnt,
-				  entries, consumed_xmm_halves, offset);
+	return kvm_hv_get_hc_data(kvm, hc, hc->rep_cnt, hc->rep_cnt, entries);
}

static void hv_tlb_flush_enqueue(struct kvm_vcpu *vcpu,
···
	struct kvm_vcpu *v;
	unsigned long i;
	bool all_cpus;
-	int consumed_xmm_halves = 0;
-	gpa_t data_offset;

	/*
	 * The Hyper-V TLFS doesn't allow more than HV_MAX_SPARSE_VCPU_BANKS
···
		flush.address_space = hc->ingpa;
		flush.flags = hc->outgpa;
		flush.processor_mask = sse128_lo(hc->xmm[0]);
-		consumed_xmm_halves = 1;
+		hc->consumed_xmm_halves = 1;
	} else {
		if (unlikely(kvm_read_guest(kvm, hc->ingpa,
					    &flush, sizeof(flush))))
			return HV_STATUS_INVALID_HYPERCALL_INPUT;
-		data_offset = sizeof(flush);
+		hc->data_offset = sizeof(flush);
	}

	trace_kvm_hv_flush_tlb(flush.processor_mask,
···
		flush_ex.flags = hc->outgpa;
		memcpy(&flush_ex.hv_vp_set,
		       &hc->xmm[0], sizeof(hc->xmm[0]));
-		consumed_xmm_halves = 2;
+		hc->consumed_xmm_halves = 2;
	} else {
		if (unlikely(kvm_read_guest(kvm, hc->ingpa, &flush_ex,
					    sizeof(flush_ex))))
			return HV_STATUS_INVALID_HYPERCALL_INPUT;
-		data_offset = sizeof(flush_ex);
+		hc->data_offset = sizeof(flush_ex);
	}

	trace_kvm_hv_flush_tlb_ex(flush_ex.hv_vp_set.valid_bank_mask,
···
		if (!hc->var_cnt)
			goto ret_success;

-		if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks,
-					  consumed_xmm_halves, data_offset))
+		if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks))
			return HV_STATUS_INVALID_HYPERCALL_INPUT;
	}
···
		 * consumed_xmm_halves to make sure TLB flush entries are read
		 * from the correct offset.
		 */
-		data_offset += hc->var_cnt * sizeof(sparse_banks[0]);
-		consumed_xmm_halves += hc->var_cnt;
+		if (hc->fast)
+			hc->consumed_xmm_halves += hc->var_cnt;
+		else
+			hc->data_offset += hc->var_cnt * sizeof(sparse_banks[0]);
	}

	if (hc->code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE ||
···
	    hc->rep_cnt > ARRAY_SIZE(__tlb_flush_entries)) {
		tlb_flush_entries = NULL;
	} else {
-		if (kvm_hv_get_tlb_flush_entries(kvm, hc, __tlb_flush_entries,
-						 consumed_xmm_halves, data_offset))
+		if (kvm_hv_get_tlb_flush_entries(kvm, hc, __tlb_flush_entries))
			return HV_STATUS_INVALID_HYPERCALL_INPUT;
		tlb_flush_entries = __tlb_flush_entries;
	}
···
		if (!hc->var_cnt)
			goto ret_success;

-		if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks, 1,
-					  offsetof(struct hv_send_ipi_ex,
-						   vp_set.bank_contents)))
+		if (!hc->fast)
+			hc->data_offset = offsetof(struct hv_send_ipi_ex,
+						   vp_set.bank_contents);
+		else
+			hc->consumed_xmm_halves = 1;
+
+		if (kvm_get_sparse_vp_set(kvm, hc, sparse_banks))
			return HV_STATUS_INVALID_HYPERCALL_INPUT;
	}
···
 * A shadow-present leaf SPTE may be non-writable for 4 possible reasons:
 *
 * 1. To intercept writes for dirty logging. KVM write-protects huge pages
-*    so that they can be split be split down into the dirty logging
+*    so that they can be split down into the dirty logging
 *    granularity (4KiB) whenever the guest writes to them. KVM also
 *    write-protects 4KiB pages so that writes can be recorded in the dirty log
 *    (e.g. if not using PML). SPTEs are write-protected for dirty logging
+18-7
arch/x86/kvm/mmu/tdp_mmu.c
···
	int ret = RET_PF_FIXED;
	bool wrprot = false;

-	WARN_ON(sp->role.level != fault->goal_level);
+	if (WARN_ON_ONCE(sp->role.level != fault->goal_level))
+		return RET_PF_RETRY;
+
	if (unlikely(!fault->slot))
		new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL);
	else
···
		if (fault->nx_huge_page_workaround_enabled)
			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);

-		if (iter.level == fault->goal_level)
-			break;
-
		/*
		 * If SPTE has been frozen by another thread, just give up and
		 * retry, avoiding unnecessary page table allocation and free.
		 */
		if (is_removed_spte(iter.old_spte))
			goto retry;
+
+		if (iter.level == fault->goal_level)
+			goto map_target_level;

		/* Step down into the lower level page table if it exists. */
		if (is_shadow_present_pte(iter.old_spte) &&
···
		r = tdp_mmu_link_sp(kvm, &iter, sp, true);

		/*
-		 * Also force the guest to retry the access if the upper level SPTEs
-		 * aren't in place.
+		 * Force the guest to retry if installing an upper level SPTE
+		 * failed, e.g. because a different task modified the SPTE.
		 */
		if (r) {
			tdp_mmu_free_sp(sp);
···
		if (fault->huge_page_disallowed &&
		    fault->req_level >= iter.level) {
			spin_lock(&kvm->arch.tdp_mmu_pages_lock);
-			track_possible_nx_huge_page(kvm, sp);
+			if (sp->nx_huge_page_disallowed)
+				track_possible_nx_huge_page(kvm, sp);
			spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
		}
	}

+	/*
+	 * The walk aborted before reaching the target level, e.g. because the
+	 * iterator detected an upper level SPTE was frozen during traversal.
+	 */
+	WARN_ON_ONCE(iter.level == fault->goal_level);
+	goto retry;
+
+map_target_level:
	ret = tdp_mmu_map_handle_target_level(vcpu, fault, &iter);

retry:
+2-1
arch/x86/kvm/pmu.c
···
		return false;

	/* recalibrate sample period and check if it's accepted by perf core */
-	if (perf_event_period(pmc->perf_event,
+	if (is_sampling_event(pmc->perf_event) &&
+	    perf_event_period(pmc->perf_event,
			      get_sample_period(pmc, pmc->counter)))
		return false;
+2-1
arch/x86/kvm/pmu.h
···

static inline void pmc_update_sample_period(struct kvm_pmc *pmc)
{
-	if (!pmc->perf_event || pmc->is_paused)
+	if (!pmc->perf_event || pmc->is_paused ||
+	    !is_sampling_event(pmc->perf_event))
		return;

	perf_event_period(pmc->perf_event,
+15-5
arch/x86/kvm/vmx/nested.c
···
		if (vmptr == vmx->nested.current_vmptr)
			nested_release_vmcs12(vcpu);

-		kvm_vcpu_write_guest(vcpu,
-				     vmptr + offsetof(struct vmcs12,
-						      launch_state),
-				     &zero, sizeof(zero));
+		/*
+		 * Silently ignore memory errors on VMCLEAR, Intel's pseudocode
+		 * for VMCLEAR includes a "ensure that data for VMCS referenced
+		 * by the operand is in memory" clause that guards writes to
+		 * memory, i.e. doing nothing for I/O is architecturally valid.
+		 *
+		 * FIXME: Suppress failures if and only if no memslot is found,
+		 * i.e. exit to userspace if __copy_to_user() fails.
+		 */
+		(void)kvm_vcpu_write_guest(vcpu,
+					   vmptr + offsetof(struct vmcs12,
+							    launch_state),
+					   &zero, sizeof(zero));
	} else if (vmx->nested.hv_evmcs && vmptr == vmx->nested.hv_evmcs_vmptr) {
		nested_release_evmcs(vcpu);
	}
···
			SECONDARY_EXEC_ENABLE_INVPCID |
			SECONDARY_EXEC_RDSEED_EXITING |
			SECONDARY_EXEC_XSAVES |
-			SECONDARY_EXEC_TSC_SCALING;
+			SECONDARY_EXEC_TSC_SCALING |
+			SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE;

	/*
	 * We can emulate "VMCS shadowing," even if the hardware
+7
arch/x86/kvm/vmx/vmx.c
···
	 * controls for features that are/aren't exposed to the guest.
	 */
	if (nested) {
+		/*
+		 * All features that can be added or removed to VMX MSRs must
+		 * be supported in the first place for nested virtualization.
+		 */
+		if (WARN_ON_ONCE(!(vmcs_config.nested.secondary_ctls_high & control)))
+			enabled = false;
+
		if (enabled)
			vmx->nested.msrs.secondary_ctls_high |= control;
		else
···
	*segs = nsegs;
	return NULL;
split:
+	/*
+	 * We can't sanely support splitting for a REQ_NOWAIT bio. End it
+	 * with EAGAIN if splitting is required and return an error pointer.
+	 */
+	if (bio->bi_opf & REQ_NOWAIT) {
+		bio->bi_status = BLK_STS_AGAIN;
+		bio_endio(bio);
+		return ERR_PTR(-EAGAIN);
+	}
+
	*segs = nsegs;

	/*
···
	default:
		split = bio_split_rw(bio, lim, nr_segs, bs,
				get_max_io_size(bio, lim) << SECTOR_SHIFT);
+		if (IS_ERR(split))
+			return NULL;
		break;
	}

	if (split) {
-		/* there isn't chance to merge the splitted bio */
+		/* there isn't chance to merge the split bio */
		split->bi_opf |= REQ_NOMERGE;

		blkcg_bio_issue_init(split);
+4-1
block/blk-mq.c
···
	blk_status_t ret;

	bio = blk_queue_bounce(bio, q);
-	if (bio_may_exceed_limits(bio, &q->limits))
+	if (bio_may_exceed_limits(bio, &q->limits)) {
		bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
+		if (!bio)
+			return;
+	}

	if (!bio_integrity_prep(bio))
		return;
···
static int only_lcd = -1;
module_param(only_lcd, int, 0444);

-/*
- * Display probing is known to take up to 5 seconds, so delay the fallback
- * backlight registration by 5 seconds + 3 seconds for some extra margin.
- */
-static int register_backlight_delay = 8;
+static int register_backlight_delay;
module_param(register_backlight_delay, int, 0444);
MODULE_PARM_DESC(register_backlight_delay,
	"Delay in seconds before doing fallback (non GPU driver triggered) "
···

	return false;
}
+
+/*
+ * At least one graphics driver has reported that no LCD is connected
+ * via the native interface. Cancel the registration for fallback acpi_video0.
+ * If another driver still deems this necessary, it can explicitly register it.
+ */
+void acpi_video_report_nolcd(void)
+{
+	cancel_delayed_work(&video_bus_register_backlight_work);
+}
+EXPORT_SYMBOL(acpi_video_report_nolcd);

int acpi_video_register(void)
{
···
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/platform_data/x86/nvidia-wmi-ec-backlight.h>
+#include <linux/pnp.h>
#include <linux/types.h>
#include <linux/workqueue.h>
#include <acpi/video.h>
···
	return false;
}
#endif
+
+static bool apple_gmux_backlight_present(void)
+{
+	struct acpi_device *adev;
+	struct device *dev;
+
+	adev = acpi_dev_get_first_match_dev(GMUX_ACPI_HID, NULL, -1);
+	if (!adev)
+		return false;
+
+	dev = acpi_get_first_physical_node(adev);
+	if (!dev)
+		return false;
+
+	/*
+	 * drivers/platform/x86/apple-gmux.c only supports old style
+	 * Apple GMUX with an IO-resource.
+	 */
+	return pnp_get_resource(to_pnp_dev(dev), IORESOURCE_IO, 0) != NULL;
+}

/* Force to use vendor driver when the ACPI device is known to be
 * buggy */
···
	if (nvidia_wmi_ec_present)
		return acpi_backlight_nvidia_wmi_ec;

-	if (apple_gmux_present())
+	if (apple_gmux_backlight_present())
		return acpi_backlight_apple_gmux;

	/* Use ACPI video if available, except when native should be preferred. */
···
static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
static void ahci_remove_one(struct pci_dev *dev);
static void ahci_shutdown_one(struct pci_dev *dev);
+static void ahci_intel_pcs_quirk(struct pci_dev *pdev, struct ahci_host_priv *hpriv);
static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class,
				 unsigned long deadline);
static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
···
	ahci_save_initial_config(&pdev->dev, hpriv);
}

+static int ahci_pci_reset_controller(struct ata_host *host)
+{
+	struct pci_dev *pdev = to_pci_dev(host->dev);
+	struct ahci_host_priv *hpriv = host->private_data;
+	int rc;
+
+	rc = ahci_reset_controller(host);
+	if (rc)
+		return rc;
+
+	/*
+	 * If platform firmware failed to enable ports, try to enable
+	 * them here.
+	 */
+	ahci_intel_pcs_quirk(pdev, hpriv);
+
+	return 0;
+}
+
static void ahci_pci_init_controller(struct ata_host *host)
{
	struct ahci_host_priv *hpriv = host->private_data;
···
	struct ata_host *host = pci_get_drvdata(pdev);
	int rc;

-	rc = ahci_reset_controller(host);
+	rc = ahci_pci_reset_controller(host);
	if (rc)
		return rc;
	ahci_pci_init_controller(host);
···
		ahci_mcp89_apple_enable(pdev);

	if (pdev->dev.power.power_state.event == PM_EVENT_SUSPEND) {
-		rc = ahci_reset_controller(host);
+		rc = ahci_pci_reset_controller(host);
		if (rc)
			return rc;
···
	/* save initial config */
	ahci_pci_save_initial_config(pdev, hpriv);

-	/*
-	 * If platform firmware failed to enable ports, try to enable
-	 * them here.
-	 */
-	ahci_intel_pcs_quirk(pdev, hpriv);
-
	/* prepare host */
	if (hpriv->cap & HOST_CAP_NCQ) {
		pi.flags |= ATA_FLAG_NCQ;
···
	if (rc)
		return rc;

-	rc = ahci_reset_controller(host);
+	rc = ahci_pci_reset_controller(host);
	if (rc)
		return rc;
+43
drivers/block/Kconfig
···
	  The default value is 4096 kilobytes. Only change this if you know
	  what you are doing.

+config CDROM_PKTCDVD
+	tristate "Packet writing on CD/DVD media (DEPRECATED)"
+	depends on !UML
+	depends on SCSI
+	select CDROM
+	help
+	  Note: This driver is deprecated and will be removed from the
+	  kernel in the near future!
+
+	  If you have a CDROM/DVD drive that supports packet writing, say
+	  Y to include support. It should work with any MMC/Mt Fuji
+	  compliant ATAPI or SCSI drive, which is just about any newer
+	  DVD/CD writer.
+
+	  Currently only writing to CD-RW, DVD-RW, DVD+RW and DVDRAM discs
+	  is possible.
+	  DVD-RW disks must be in restricted overwrite mode.
+
+	  See the file <file:Documentation/cdrom/packet-writing.rst>
+	  for further information on the use of this driver.
+
+	  To compile this driver as a module, choose M here: the
+	  module will be called pktcdvd.
+
+config CDROM_PKTCDVD_BUFFERS
+	int "Free buffers for data gathering"
+	depends on CDROM_PKTCDVD
+	default "8"
+	help
+	  This controls the maximum number of active concurrent packets. More
+	  concurrent packets can increase write performance, but also require
+	  more memory. Each concurrent packet will require approximately 64Kb
+	  of non-swappable kernel memory, memory which will be allocated when
+	  a disc is opened for writing.
+
+config CDROM_PKTCDVD_WCACHE
+	bool "Enable write caching"
+	depends on CDROM_PKTCDVD
+	help
+	  If enabled, write caching will be set for the CD-R/W device. For now
+	  this option is dangerous unless the CD-RW media is known good, as we
+	  don't do deferred write error handling yet.
+
config ATA_OVER_ETH
	tristate "ATA over Ethernet support"
	depends on NET
···
 	kset_unregister(dma_buf_stats_kset);
 }

-int dma_buf_stats_setup(struct dma_buf *dmabuf)
+int dma_buf_stats_setup(struct dma_buf *dmabuf, struct file *file)
 {
 	struct dma_buf_sysfs_entry *sysfs_entry;
 	int ret;
-
-	if (!dmabuf || !dmabuf->file)
-		return -EINVAL;

 	if (!dmabuf->exp_name) {
 		pr_err("exporter name must not be empty if stats needed\n");
···
 	/* create the directory for buffer stats */
 	ret = kobject_init_and_add(&sysfs_entry->kobj, &dma_buf_ktype, NULL,
-				   "%lu", file_inode(dmabuf->file)->i_ino);
+				   "%lu", file_inode(file)->i_ino);
 	if (ret)
 		goto err_sysfs_dmabuf;
···181181int amdgpu_noretry = -1;182182int amdgpu_force_asic_type = -1;183183int amdgpu_tmz = -1; /* auto */184184+uint amdgpu_freesync_vid_mode;184185int amdgpu_reset_method = -1; /* auto */185186int amdgpu_num_kcq = -1;186187int amdgpu_smartshift_bias;···879878 */880879MODULE_PARM_DESC(tmz, "Enable TMZ feature (-1 = auto (default), 0 = off, 1 = on)");881880module_param_named(tmz, amdgpu_tmz, int, 0444);881881+882882+/**883883+ * DOC: freesync_video (uint)884884+ * Enable the optimization to adjust front porch timing to achieve seamless885885+ * mode change experience when setting a freesync supported mode for which full886886+ * modeset is not needed.887887+ *888888+ * The Display Core will add a set of modes derived from the base FreeSync889889+ * video mode into the corresponding connector's mode list based on commonly890890+ * used refresh rates and VRR range of the connected display, when users enable891891+ * this feature. From the userspace perspective, they can see a seamless mode892892+ * change experience when the change between different refresh rates under the893893+ * same resolution. Additionally, userspace applications such as Video playback894894+ * can read this modeset list and change the refresh rate based on the video895895+ * frame rate. Finally, the userspace can also derive an appropriate mode for a896896+ * particular refresh rate based on the FreeSync Mode and add it to the897897+ * connector's mode list.898898+ *899899+ * Note: This is an experimental feature.900900+ *901901+ * The default value: 0 (off).902902+ */903903+MODULE_PARM_DESC(904904+ freesync_video,905905+ "Enable freesync modesetting optimization feature (0 = off (default), 1 = on)");906906+module_param_named(freesync_video, amdgpu_freesync_vid_mode, uint, 0444);882907883908/**884909 * DOC: reset_method (int)
+1-1
drivers/gpu/drm/amd/amdkfd/kfd_topology.c
···
 	p2plink->attr.name = "properties";
 	p2plink->attr.mode = KFD_SYSFS_FILE_MODE;
-	sysfs_attr_init(&iolink->attr);
+	sysfs_attr_init(&p2plink->attr);
 	ret = sysfs_create_file(p2plink->kobj, &p2plink->attr);
 	if (ret < 0)
 		return ret;
+11-5
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···43614361 amdgpu_set_panel_orientation(&aconnector->base);43624362 }4363436343644364+ /* If we didn't find a panel, notify the acpi video detection */43654365+ if (dm->adev->flags & AMD_IS_APU && dm->num_of_edps == 0)43664366+ acpi_video_report_nolcd();43674367+43644368 /* Software is initialized. Now we can register interrupt handlers. */43654369 switch (adev->asic_type) {43664370#if defined(CONFIG_DRM_AMD_DC_SI)···58355831 */58365832 DRM_DEBUG_DRIVER("No preferred mode found\n");58375833 } else {58385838- recalculate_timing = is_freesync_video_mode(&mode, aconnector);58345834+ recalculate_timing = amdgpu_freesync_vid_mode &&58355835+ is_freesync_video_mode(&mode, aconnector);58395836 if (recalculate_timing) {58405837 freesync_mode = get_highest_refresh_rate_mode(aconnector, false);58415838 drm_mode_copy(&saved_mode, &mode);···69876982 struct amdgpu_dm_connector *amdgpu_dm_connector =69886983 to_amdgpu_dm_connector(connector);6989698469906990- if (!edid)69856985+ if (!(amdgpu_freesync_vid_mode && edid))69916986 return;6992698769936988 if (amdgpu_dm_connector->max_vfreq - amdgpu_dm_connector->min_vfreq > 10)···88518846 * TODO: Refactor this function to allow this check to work88528847 * in all conditions.88538848 */88548854- if (dm_new_crtc_state->stream &&88498849+ if (amdgpu_freesync_vid_mode &&88508850+ dm_new_crtc_state->stream &&88558851 is_timing_unchanged_for_freesync(new_crtc_state, old_crtc_state))88568852 goto skip_modeset;88578853···88878881 if (!dm_old_crtc_state->stream)88888882 goto skip_modeset;8889888388908890- if (dm_new_crtc_state->stream &&88848884+ if (amdgpu_freesync_vid_mode && dm_new_crtc_state->stream &&88918885 is_timing_unchanged_for_freesync(new_crtc_state,88928886 old_crtc_state)) {88938887 new_crtc_state->mode_changed = false;···88998893 set_freesync_fixed_config(dm_new_crtc_state);8900889489018895 goto skip_modeset;89028902- } else if (aconnector &&88968896+ } else if (amdgpu_freesync_vid_mode && aconnector &&89038897 
is_freesync_video_mode(&new_crtc_state->mode,89048898 aconnector)) {89058899 struct drm_display_mode *high_mode;
···
 	double SwathSizePerSurfaceC[DC__NUM_DPP__MAX];
 	bool NotEnoughDETSwathFillLatencyHiding = false;

-	/* calculate sum of single swath size for all pipes in bytes*/
+	/* calculate sum of single swath size for all pipes in bytes */
 	for (k = 0; k < NumberOfActiveSurfaces; k++) {
-		SwathSizePerSurfaceY[k] += SwathHeightY[k] * SwathWidthY[k] * BytePerPixelInDETY[k] * NumOfDPP[k];
+		SwathSizePerSurfaceY[k] = SwathHeightY[k] * SwathWidthY[k] * BytePerPixelInDETY[k] * NumOfDPP[k];

 		if (SwathHeightC[k] != 0)
-			SwathSizePerSurfaceC[k] += SwathHeightC[k] * SwathWidthC[k] * BytePerPixelInDETC[k] * NumOfDPP[k];
+			SwathSizePerSurfaceC[k] = SwathHeightC[k] * SwathWidthC[k] * BytePerPixelInDETC[k] * NumOfDPP[k];
 		else
 			SwathSizePerSurfaceC[k] = 0;
+91-3
drivers/gpu/drm/i915/display/intel_dsi_vbt.c
···41414242#include "i915_drv.h"4343#include "i915_reg.h"4444+#include "intel_de.h"4445#include "intel_display_types.h"4546#include "intel_dsi.h"4647#include "intel_dsi_vbt.h"4848+#include "intel_gmbus_regs.h"4749#include "vlv_dsi.h"4850#include "vlv_dsi_regs.h"4951#include "vlv_sideband.h"···379377 drm_dbg_kms(&dev_priv->drm, "Skipping ICL GPIO element execution\n");380378}381379380380+enum {381381+ MIPI_RESET_1 = 0,382382+ MIPI_AVDD_EN_1,383383+ MIPI_BKLT_EN_1,384384+ MIPI_AVEE_EN_1,385385+ MIPI_VIO_EN_1,386386+ MIPI_RESET_2,387387+ MIPI_AVDD_EN_2,388388+ MIPI_BKLT_EN_2,389389+ MIPI_AVEE_EN_2,390390+ MIPI_VIO_EN_2,391391+};392392+393393+static void icl_native_gpio_set_value(struct drm_i915_private *dev_priv,394394+ int gpio, bool value)395395+{396396+ int index;397397+398398+ if (drm_WARN_ON(&dev_priv->drm, DISPLAY_VER(dev_priv) == 11 && gpio >= MIPI_RESET_2))399399+ return;400400+401401+ switch (gpio) {402402+ case MIPI_RESET_1:403403+ case MIPI_RESET_2:404404+ index = gpio == MIPI_RESET_1 ? HPD_PORT_A : HPD_PORT_B;405405+406406+ /*407407+ * Disable HPD to set the pin to output, and set output408408+ * value. The HPD pin should not be enabled for DSI anyway,409409+ * assuming the board design and VBT are sane, and the pin isn't410410+ * used by a non-DSI encoder.411411+ *412412+ * The locking protects against concurrent SHOTPLUG_CTL_DDI413413+ * modifications in irq setup and handling.414414+ */415415+ spin_lock_irq(&dev_priv->irq_lock);416416+ intel_de_rmw(dev_priv, SHOTPLUG_CTL_DDI,417417+ SHOTPLUG_CTL_DDI_HPD_ENABLE(index) |418418+ SHOTPLUG_CTL_DDI_HPD_OUTPUT_DATA(index),419419+ value ? SHOTPLUG_CTL_DDI_HPD_OUTPUT_DATA(index) : 0);420420+ spin_unlock_irq(&dev_priv->irq_lock);421421+ break;422422+ case MIPI_AVDD_EN_1:423423+ case MIPI_AVDD_EN_2:424424+ index = gpio == MIPI_AVDD_EN_1 ? 0 : 1;425425+426426+ intel_de_rmw(dev_priv, PP_CONTROL(index), PANEL_POWER_ON,427427+ value ? 
PANEL_POWER_ON : 0);428428+ break;429429+ case MIPI_BKLT_EN_1:430430+ case MIPI_BKLT_EN_2:431431+ index = gpio == MIPI_BKLT_EN_1 ? 0 : 1;432432+433433+ intel_de_rmw(dev_priv, PP_CONTROL(index), EDP_BLC_ENABLE,434434+ value ? EDP_BLC_ENABLE : 0);435435+ break;436436+ case MIPI_AVEE_EN_1:437437+ case MIPI_AVEE_EN_2:438438+ index = gpio == MIPI_AVEE_EN_1 ? 1 : 2;439439+440440+ intel_de_rmw(dev_priv, GPIO(dev_priv, index),441441+ GPIO_CLOCK_VAL_OUT,442442+ GPIO_CLOCK_DIR_MASK | GPIO_CLOCK_DIR_OUT |443443+ GPIO_CLOCK_VAL_MASK | (value ? GPIO_CLOCK_VAL_OUT : 0));444444+ break;445445+ case MIPI_VIO_EN_1:446446+ case MIPI_VIO_EN_2:447447+ index = gpio == MIPI_VIO_EN_1 ? 1 : 2;448448+449449+ intel_de_rmw(dev_priv, GPIO(dev_priv, index),450450+ GPIO_DATA_VAL_OUT,451451+ GPIO_DATA_DIR_MASK | GPIO_DATA_DIR_OUT |452452+ GPIO_DATA_VAL_MASK | (value ? GPIO_DATA_VAL_OUT : 0));453453+ break;454454+ default:455455+ MISSING_CASE(gpio);456456+ }457457+}458458+382459static const u8 *mipi_exec_gpio(struct intel_dsi *intel_dsi, const u8 *data)383460{384461 struct drm_device *dev = intel_dsi->base.base.dev;···465384 struct intel_connector *connector = intel_dsi->attached_connector;466385 u8 gpio_source, gpio_index = 0, gpio_number;467386 bool value;468468-469469- drm_dbg_kms(&dev_priv->drm, "\n");387387+ bool native = DISPLAY_VER(dev_priv) >= 11;470388471389 if (connector->panel.vbt.dsi.seq_version >= 3)472390 gpio_index = *data++;···478398 else479399 gpio_source = 0;480400401401+ if (connector->panel.vbt.dsi.seq_version >= 4 && *data & BIT(1))402402+ native = false;403403+481404 /* pull up/down */482405 value = *data++ & 1;483406484484- if (DISPLAY_VER(dev_priv) >= 11)407407+ drm_dbg_kms(&dev_priv->drm, "GPIO index %u, number %u, source %u, native %s, set to %s\n",408408+ gpio_index, gpio_number, gpio_source, str_yes_no(native), str_on_off(value));409409+410410+ if (native)411411+ icl_native_gpio_set_value(dev_priv, gpio_number, value);412412+ else if (DISPLAY_VER(dev_priv) >= 11)485413 
icl_exec_gpio(connector, gpio_source, gpio_index, value);486414 else if (IS_VALLEYVIEW(dev_priv))487415 vlv_exec_gpio(connector, gpio_source, gpio_number, value);
+48-11
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
···730730 bool unpinned;731731732732 /*733733- * Attempt to pin all of the buffers into the GTT.734734- * This is done in 2 phases:733733+ * We have one more buffers that we couldn't bind, which could be due to734734+ * various reasons. To resolve this we have 4 passes, with every next735735+ * level turning the screws tighter:735736 *736736- * 1. Unbind all objects that do not match the GTT constraints for737737- * the execbuffer (fenceable, mappable, alignment etc).738738- * 2. Bind new objects.737737+ * 0. Unbind all objects that do not match the GTT constraints for the738738+ * execbuffer (fenceable, mappable, alignment etc). Bind all new739739+ * objects. This avoids unnecessary unbinding of later objects in order740740+ * to make room for the earlier objects *unless* we need to defragment.739741 *740740- * This avoid unnecessary unbinding of later objects in order to make741741- * room for the earlier objects *unless* we need to defragment.742742+ * 1. Reorder the buffers, where objects with the most restrictive743743+ * placement requirements go first (ignoring fixed location buffers for744744+ * now). For example, objects needing the mappable aperture (the first745745+ * 256M of GTT), should go first vs objects that can be placed just746746+ * about anywhere. Repeat the previous pass.742747 *743743- * Defragmenting is skipped if all objects are pinned at a fixed location.748748+ * 2. Consider buffers that are pinned at a fixed location. Also try to749749+ * evict the entire VM this time, leaving only objects that we were750750+ * unable to lock. Try again to bind the buffers. (still using the new751751+ * buffer order).752752+ *753753+ * 3. We likely have object lock contention for one or more stubborn754754+ * objects in the VM, for which we need to evict to make forward755755+ * progress (perhaps we are fighting the shrinker?). 
When evicting the756756+ * VM this time around, anything that we can't lock we now track using757757+ * the busy_bo, using the full lock (after dropping the vm->mutex to758758+ * prevent deadlocks), instead of trylock. We then continue to evict the759759+ * VM, this time with the stubborn object locked, which we can now760760+ * hopefully unbind (if still bound in the VM). Repeat until the VM is761761+ * evicted. Finally we should be able bind everything.744762 */745745- for (pass = 0; pass <= 2; pass++) {763763+ for (pass = 0; pass <= 3; pass++) {746764 int pin_flags = PIN_USER | PIN_VALIDATE;747765748766 if (pass == 0)749767 pin_flags |= PIN_NONBLOCK;750768751769 if (pass >= 1)752752- unpinned = eb_unbind(eb, pass == 2);770770+ unpinned = eb_unbind(eb, pass >= 2);753771754772 if (pass == 2) {755773 err = mutex_lock_interruptible(&eb->context->vm->mutex);756774 if (!err) {757757- err = i915_gem_evict_vm(eb->context->vm, &eb->ww);775775+ err = i915_gem_evict_vm(eb->context->vm, &eb->ww, NULL);758776 mutex_unlock(&eb->context->vm->mutex);777777+ }778778+ if (err)779779+ return err;780780+ }781781+782782+ if (pass == 3) {783783+retry:784784+ err = mutex_lock_interruptible(&eb->context->vm->mutex);785785+ if (!err) {786786+ struct drm_i915_gem_object *busy_bo = NULL;787787+788788+ err = i915_gem_evict_vm(eb->context->vm, &eb->ww, &busy_bo);789789+ mutex_unlock(&eb->context->vm->mutex);790790+ if (err && busy_bo) {791791+ err = i915_gem_object_lock(busy_bo, &eb->ww);792792+ i915_gem_object_put(busy_bo);793793+ if (!err)794794+ goto retry;795795+ }759796 }760797 if (err)761798 return err;
+1-1
drivers/gpu/drm/i915/gem/i915_gem_mman.c
···
 	if (vma == ERR_PTR(-ENOSPC)) {
 		ret = mutex_lock_interruptible(&ggtt->vm.mutex);
 		if (!ret) {
-			ret = i915_gem_evict_vm(&ggtt->vm, &ww);
+			ret = i915_gem_evict_vm(&ggtt->vm, &ww, NULL);
 			mutex_unlock(&ggtt->vm.mutex);
 		}
 		if (ret)
+7-1
drivers/gpu/drm/i915/gt/intel_gt.c
···
 			continue;

 		if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) {
+			u32 val = BIT(engine->instance);
+
+			if (engine->class == VIDEO_DECODE_CLASS ||
+			    engine->class == VIDEO_ENHANCEMENT_CLASS ||
+			    engine->class == COMPUTE_CLASS)
+				val = _MASKED_BIT_ENABLE(val);
 			intel_gt_mcr_multicast_write_fw(gt,
 							xehp_regs[engine->class],
-							BIT(engine->instance));
+							val);
 		} else {
 			rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num);
 			if (!i915_mmio_reg_offset(rb.reg))
···5555 int idx;5656 bool ret;57575858- if (!vgpu->attached)5858+ if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status))5959 return false;60606161 idx = srcu_read_lock(&kvm->srcu);···11781178 if (!HAS_PAGE_SIZES(vgpu->gvt->gt->i915, I915_GTT_PAGE_SIZE_2M))11791179 return 0;1180118011811181- if (!vgpu->attached)11811181+ if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status))11821182 return -EINVAL;11831183 pfn = gfn_to_pfn(vgpu->vfio_device.kvm, ops->get_pfn(entry));11841184 if (is_error_noslot_pfn(pfn))···12091209 for_each_shadow_entry(sub_spt, &sub_se, sub_index) {12101210 ret = intel_gvt_dma_map_guest_page(vgpu, start_gfn + sub_index,12111211 PAGE_SIZE, &dma_addr);12121212- if (ret) {12131213- ppgtt_invalidate_spt(spt);12141214- return ret;12151215- }12121212+ if (ret)12131213+ goto err;12161214 sub_se.val64 = se->val64;1217121512181216 /* Copy the PAT field from PDE. */···12291231 ops->set_pfn(se, sub_spt->shadow_page.mfn);12301232 ppgtt_set_shadow_entry(spt, se, index);12311233 return 0;12341234+err:12351235+ /* Cancel the existing addess mappings of DMA addr. */12361236+ for_each_present_shadow_entry(sub_spt, &sub_se, sub_index) {12371237+ gvt_vdbg_mm("invalidate 4K entry\n");12381238+ ppgtt_invalidate_pte(sub_spt, &sub_se);12391239+ }12401240+ /* Release the new allocated spt. */12411241+ trace_spt_change(sub_spt->vgpu->id, "release", sub_spt,12421242+ sub_spt->guest_page.gfn, sub_spt->shadow_page.type);12431243+ ppgtt_free_spt(sub_spt);12441244+ return ret;12321245}1233124612341247static int split_64KB_gtt_entry(struct intel_vgpu *vgpu,
+10-5
drivers/gpu/drm/i915/gvt/gvt.h
···172172173173#define KVMGT_DEBUGFS_FILENAME "kvmgt_nr_cache_entries"174174175175+enum {176176+ INTEL_VGPU_STATUS_ATTACHED = 0,177177+ INTEL_VGPU_STATUS_ACTIVE,178178+ INTEL_VGPU_STATUS_NR_BITS,179179+};180180+175181struct intel_vgpu {176182 struct vfio_device vfio_device;177183 struct intel_gvt *gvt;178184 struct mutex vgpu_lock;179185 int id;180180- bool active;181181- bool attached;186186+ DECLARE_BITMAP(status, INTEL_VGPU_STATUS_NR_BITS);182187 bool pv_notified;183188 bool failsafe;184189 unsigned int resetting_eng;···472467473468#define for_each_active_vgpu(gvt, vgpu, id) \474469 idr_for_each_entry((&(gvt)->vgpu_idr), (vgpu), (id)) \475475- for_each_if(vgpu->active)470470+ for_each_if(test_bit(INTEL_VGPU_STATUS_ACTIVE, vgpu->status))476471477472static inline void intel_vgpu_write_pci_bar(struct intel_vgpu *vgpu,478473 u32 offset, u32 val, bool low)···730725static inline int intel_gvt_read_gpa(struct intel_vgpu *vgpu, unsigned long gpa,731726 void *buf, unsigned long len)732727{733733- if (!vgpu->attached)728728+ if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status))734729 return -ESRCH;735730 return vfio_dma_rw(&vgpu->vfio_device, gpa, buf, len, false);736731}···748743static inline int intel_gvt_write_gpa(struct intel_vgpu *vgpu,749744 unsigned long gpa, void *buf, unsigned long len)750745{751751- if (!vgpu->attached)746746+ if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status))752747 return -ESRCH;753748 return vfio_dma_rw(&vgpu->vfio_device, gpa, buf, len, true);754749}
+1-1
drivers/gpu/drm/i915/gvt/interrupt.c
···
 	 * enabled by guest. so if msi_trigger is null, success is still
 	 * returned and don't inject interrupt into guest.
 	 */
-	if (!vgpu->attached)
+	if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status))
 		return -ESRCH;
 	if (vgpu->msi_trigger && eventfd_signal(vgpu->msi_trigger, 1) != 1)
 		return -EFAULT;
+13-22
drivers/gpu/drm/i915/gvt/kvmgt.c
···638638639639 mutex_lock(&vgpu->gvt->lock);640640 for_each_active_vgpu(vgpu->gvt, itr, id) {641641- if (!itr->attached)641641+ if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, itr->status))642642 continue;643643644644 if (vgpu->vfio_device.kvm == itr->vfio_device.kvm) {···655655{656656 struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);657657658658- if (vgpu->attached)659659- return -EEXIST;660660-661658 if (!vgpu->vfio_device.kvm ||662659 vgpu->vfio_device.kvm->mm != current->mm) {663660 gvt_vgpu_err("KVM is required to use Intel vGPU\n");···664667 if (__kvmgt_vgpu_exist(vgpu))665668 return -EEXIST;666669667667- vgpu->attached = true;668668-669670 vgpu->track_node.track_write = kvmgt_page_track_write;670671 vgpu->track_node.track_flush_slot = kvmgt_page_track_flush_slot;671672 kvm_get_kvm(vgpu->vfio_device.kvm);672673 kvm_page_track_register_notifier(vgpu->vfio_device.kvm,673674 &vgpu->track_node);675675+676676+ set_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status);674677675678 debugfs_create_ulong(KVMGT_DEBUGFS_FILENAME, 0444, vgpu->debugfs,676679 &vgpu->nr_cache_entries);···695698{696699 struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);697700698698- if (!vgpu->attached)699699- return;700700-701701 intel_gvt_release_vgpu(vgpu);702702+703703+ clear_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status);702704703705 debugfs_remove(debugfs_lookup(KVMGT_DEBUGFS_FILENAME, vgpu->debugfs));704706···714718 vgpu->dma_addr_cache = RB_ROOT;715719716720 intel_vgpu_release_msi_eventfd_ctx(vgpu);717717-718718- vgpu->attached = false;719721}720722721723static u64 intel_vgpu_get_bar_addr(struct intel_vgpu *vgpu, int bar)···15061512{15071513 struct intel_vgpu *vgpu = dev_get_drvdata(&mdev->dev);1508151415091509- if (WARN_ON_ONCE(vgpu->attached))15101510- return;15111511-15121515 vfio_unregister_group_dev(&vgpu->vfio_device);15131516 vfio_put_device(&vgpu->vfio_device);15141517}···15501559 struct kvm_memory_slot *slot;15511560 int idx;1552156115531553- if (!info->attached)15621562+ if 
(!test_bit(INTEL_VGPU_STATUS_ATTACHED, info->status))15541563 return -ESRCH;1555156415561565 idx = srcu_read_lock(&kvm->srcu);···15801589 struct kvm_memory_slot *slot;15811590 int idx;1582159115831583- if (!info->attached)15841584- return 0;15921592+ if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, info->status))15931593+ return -ESRCH;1585159415861595 idx = srcu_read_lock(&kvm->srcu);15871596 slot = gfn_to_memslot(kvm, gfn);···16591668 struct gvt_dma *entry;16601669 int ret;1661167016621662- if (!vgpu->attached)16711671+ if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status))16631672 return -EINVAL;1664167316651674 mutex_lock(&vgpu->cache_lock);···17051714 struct gvt_dma *entry;17061715 int ret = 0;1707171617081708- if (!vgpu->attached)17091709- return -ENODEV;17171717+ if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status))17181718+ return -EINVAL;1710171917111720 mutex_lock(&vgpu->cache_lock);17121721 entry = __gvt_cache_find_dma_addr(vgpu, dma_addr);···17331742{17341743 struct gvt_dma *entry;1735174417361736- if (!vgpu->attached)17451745+ if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status))17371746 return;1738174717391748 mutex_lock(&vgpu->cache_lock);···17691778 idr_for_each_entry((&(gvt)->vgpu_idr), (vgpu), (id)) {17701779 if (test_and_clear_bit(INTEL_GVT_REQUEST_EMULATE_VBLANK + id,17711780 (void *)&gvt->service_request)) {17721772- if (vgpu->active)17811781+ if (test_bit(INTEL_VGPU_STATUS_ACTIVE, vgpu->status))17731782 intel_vgpu_emulate_vblank(vgpu);17741783 }17751784 }
···
  */
 void intel_gvt_activate_vgpu(struct intel_vgpu *vgpu)
 {
-	mutex_lock(&vgpu->vgpu_lock);
-	vgpu->active = true;
-	mutex_unlock(&vgpu->vgpu_lock);
+	set_bit(INTEL_VGPU_STATUS_ACTIVE, vgpu->status);
 }

 /**
···
 {
 	mutex_lock(&vgpu->vgpu_lock);

-	vgpu->active = false;
+	clear_bit(INTEL_VGPU_STATUS_ACTIVE, vgpu->status);

 	if (atomic_read(&vgpu->submission.running_workload_num)) {
 		mutex_unlock(&vgpu->vgpu_lock);
···
 	struct intel_gvt *gvt = vgpu->gvt;
 	struct drm_i915_private *i915 = gvt->gt->i915;

-	drm_WARN(&i915->drm, vgpu->active, "vGPU is still active!\n");
+	drm_WARN(&i915->drm, test_bit(INTEL_VGPU_STATUS_ACTIVE, vgpu->status),
+		 "vGPU is still active!\n");

 	/*
 	 * remove idr first so later clean can judge if need to stop
···
 	if (ret)
 		goto out_free_vgpu;

-	vgpu->active = false;
-
+	clear_bit(INTEL_VGPU_STATUS_ACTIVE, vgpu->status);
 	return vgpu;

 out_free_vgpu:
+27-10
drivers/gpu/drm/i915/i915_gem_evict.c
···416416 * @vm: Address space to cleanse417417 * @ww: An optional struct i915_gem_ww_ctx. If not NULL, i915_gem_evict_vm418418 * will be able to evict vma's locked by the ww as well.419419+ * @busy_bo: Optional pointer to struct drm_i915_gem_object. If not NULL, then420420+ * in the event i915_gem_evict_vm() is unable to trylock an object for eviction,421421+ * then @busy_bo will point to it. -EBUSY is also returned. The caller must drop422422+ * the vm->mutex, before trying again to acquire the contended lock. The caller423423+ * also owns a reference to the object.419424 *420425 * This function evicts all vmas from a vm.421426 *···430425 * To clarify: This is for freeing up virtual address space, not for freeing431426 * memory in e.g. the shrinker.432427 */433433-int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww)428428+int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww,429429+ struct drm_i915_gem_object **busy_bo)434430{435431 int ret = 0;436432···463457 * the resv is shared among multiple objects, we still464458 * need the object ref.465459 */466466- if (dying_vma(vma) ||460460+ if (!i915_gem_object_get_rcu(vma->obj) ||467461 (ww && (dma_resv_locking_ctx(vma->obj->base.resv) == &ww->ctx))) {468462 __i915_vma_pin(vma);469463 list_add(&vma->evict_link, &locked_eviction_list);470464 continue;471465 }472466473473- if (!i915_gem_object_trylock(vma->obj, ww))467467+ if (!i915_gem_object_trylock(vma->obj, ww)) {468468+ if (busy_bo) {469469+ *busy_bo = vma->obj; /* holds ref */470470+ ret = -EBUSY;471471+ break;472472+ }473473+ i915_gem_object_put(vma->obj);474474 continue;475475+ }475476476477 __i915_vma_pin(vma);477478 list_add(&vma->evict_link, &eviction_list);···486473 if (list_empty(&eviction_list) && list_empty(&locked_eviction_list))487474 break;488475489489- ret = 0;490476 /* Unbind locked objects first, before unlocking the eviction_list */491477 list_for_each_entry_safe(vma, vn, &locked_eviction_list, 
evict_link) {492478 __i915_vma_unpin(vma);493479494494- if (ret == 0)480480+ if (ret == 0) {495481 ret = __i915_vma_unbind(vma);496496- if (ret != -EINTR) /* "Get me out of here!" */497497- ret = 0;482482+ if (ret != -EINTR) /* "Get me out of here!" */483483+ ret = 0;484484+ }485485+ if (!dying_vma(vma))486486+ i915_gem_object_put(vma->obj);498487 }499488500489 list_for_each_entry_safe(vma, vn, &eviction_list, evict_link) {501490 __i915_vma_unpin(vma);502502- if (ret == 0)491491+ if (ret == 0) {503492 ret = __i915_vma_unbind(vma);504504- if (ret != -EINTR) /* "Get me out of here!" */505505- ret = 0;493493+ if (ret != -EINTR) /* "Get me out of here!" */494494+ ret = 0;495495+ }506496507497 i915_gem_object_unlock(vma->obj);498498+ i915_gem_object_put(vma->obj);508499 }509500 } while (ret == 0);510501
···
 			 * locked objects when called from execbuf when pinning
 			 * is removed. This would probably regress badly.
 			 */
-			i915_gem_evict_vm(vm, NULL);
+			i915_gem_evict_vm(vm, NULL, NULL);
 			mutex_unlock(&vm->mutex);
 		}
 	} while (1);
+2-2
drivers/gpu/drm/i915/selftests/i915_gem_evict.c
···

 	/* Everything is pinned, nothing should happen */
 	mutex_lock(&ggtt->vm.mutex);
-	err = i915_gem_evict_vm(&ggtt->vm, NULL);
+	err = i915_gem_evict_vm(&ggtt->vm, NULL, NULL);
 	mutex_unlock(&ggtt->vm.mutex);
 	if (err) {
 		pr_err("i915_gem_evict_vm on a full GGTT returned err=%d]\n",
···

 	for_i915_gem_ww(&ww, err, false) {
 		mutex_lock(&ggtt->vm.mutex);
-		err = i915_gem_evict_vm(&ggtt->vm, &ww);
+		err = i915_gem_evict_vm(&ggtt->vm, &ww, NULL);
 		mutex_unlock(&ggtt->vm.mutex);
 	}
···8282 struct panfrost_gem_object *bo;8383 struct drm_panfrost_create_bo *args = data;8484 struct panfrost_gem_mapping *mapping;8585+ int ret;85868687 if (!args->size || args->pad ||8788 (args->flags & ~(PANFROST_BO_NOEXEC | PANFROST_BO_HEAP)))···9392 !(args->flags & PANFROST_BO_NOEXEC))9493 return -EINVAL;95949696- bo = panfrost_gem_create_with_handle(file, dev, args->size, args->flags,9797- &args->handle);9595+ bo = panfrost_gem_create(dev, args->size, args->flags);9896 if (IS_ERR(bo))9997 return PTR_ERR(bo);100989999+ ret = drm_gem_handle_create(file, &bo->base.base, &args->handle);100100+ if (ret)101101+ goto out;102102+101103 mapping = panfrost_gem_mapping_get(bo, priv);102102- if (!mapping) {103103- drm_gem_object_put(&bo->base.base);104104- return -EINVAL;104104+ if (mapping) {105105+ args->offset = mapping->mmnode.start << PAGE_SHIFT;106106+ panfrost_gem_mapping_put(mapping);107107+ } else {108108+ /* This can only happen if the handle from109109+ * drm_gem_handle_create() has already been guessed and freed110110+ * by user space111111+ */112112+ ret = -EINVAL;105113 }106114107107- args->offset = mapping->mmnode.start << PAGE_SHIFT;108108- panfrost_gem_mapping_put(mapping);109109-110110- return 0;115115+out:116116+ drm_gem_object_put(&bo->base.base);117117+ return ret;111118}112119113120/**
+1-15
drivers/gpu/drm/panfrost/panfrost_gem.c
···235235}236236237237struct panfrost_gem_object *238238-panfrost_gem_create_with_handle(struct drm_file *file_priv,239239- struct drm_device *dev, size_t size,240240- u32 flags,241241- uint32_t *handle)238238+panfrost_gem_create(struct drm_device *dev, size_t size, u32 flags)242239{243243- int ret;244240 struct drm_gem_shmem_object *shmem;245241 struct panfrost_gem_object *bo;246242···251255 bo = to_panfrost_bo(&shmem->base);252256 bo->noexec = !!(flags & PANFROST_BO_NOEXEC);253257 bo->is_heap = !!(flags & PANFROST_BO_HEAP);254254-255255- /*256256- * Allocate an id of idr table where the obj is registered257257- * and handle has the id what user can see.258258- */259259- ret = drm_gem_handle_create(file_priv, &shmem->base, handle);260260- /* drop reference from allocate - handle holds it now. */261261- drm_gem_object_put(&shmem->base);262262- if (ret)263263- return ERR_PTR(ret);264258265259 return bo;266260}
···
 	init_completion(&entity->entity_idle);

 	/* We start in an idle state. */
-	complete(&entity->entity_idle);
+	complete_all(&entity->entity_idle);

 	spin_lock_init(&entity->rq_lock);
 	spsc_queue_init(&entity->job_queue);
···
 	struct virtio_gpu_object_array *objs = NULL;
 	struct drm_gem_shmem_object *shmem_obj;
 	struct virtio_gpu_object *bo;
-	struct virtio_gpu_mem_entry *ents;
+	struct virtio_gpu_mem_entry *ents = NULL;
 	unsigned int nents;
 	int ret;
···
 	ret = -ENOMEM;
 	objs = virtio_gpu_array_alloc(1);
 	if (!objs)
-		goto err_put_id;
+		goto err_free_entry;
 	virtio_gpu_array_add_obj(objs, &bo->base.base);

 	ret = virtio_gpu_array_lock_resv(objs);
···
 err_put_objs:
 	virtio_gpu_array_put_free(objs);
+err_free_entry:
+	kvfree(ents);
 err_put_id:
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 err_free_gem:
+3-3
drivers/infiniband/hw/mlx5/counters.c
···
 	const struct mlx5_ib_counters *cnts = get_counters(dev, port_num - 1);
 	struct mlx5_core_dev *mdev;
 	int ret, num_counters;
-	u32 mdev_port_num;

 	if (!stats)
 		return -EINVAL;
···
 	}

 	if (MLX5_CAP_GEN(dev->mdev, cc_query_allowed)) {
-		mdev = mlx5_ib_get_native_port_mdev(dev, port_num,
-						    &mdev_port_num);
+		if (!port_num)
+			port_num = 1;
+		mdev = mlx5_ib_get_native_port_mdev(dev, port_num, NULL);
 		if (!mdev) {
 			/* If port is not affiliated yet, its in down state
 			 * which doesn't have any counters yet, so it would be
···
 		 * otherwise associated queue_limits won't be imposed.
 		 */
 		bio = bio_split_to_limits(bio);
+		if (!bio)
+			return;
 	}

 	init_clone_info(&ci, md, map, bio, is_abnormal);
+2
drivers/md/md.c
···
 	}

 	bio = bio_split_to_limits(bio);
+	if (!bio)
+		return;

 	if (mddev->ro == MD_RDONLY && unlikely(rw == WRITE)) {
 		if (bio_sectors(bio) != 0)
+1
drivers/net/bonding/bond_3ad.c
···
 			slave_err(bond->dev, port->slave->dev,
 				  "Port %d did not find a suitable aggregator\n",
 				  port->actor_port_number);
+			return;
 		}
 	}
 	/* if all aggregator's ports are READY_N == TRUE, set ready=TRUE
···
 config NET_DSA_MV88E6XXX
 	tristate "Marvell 88E6xxx Ethernet switch fabric support"
 	depends on NET_DSA
-	depends on PTP_1588_CLOCK_OPTIONAL
 	select IRQ_DOMAIN
 	select NET_DSA_TAG_EDSA
 	select NET_DSA_TAG_DSA
···
 config NET_DSA_MV88E6XXX_PTP
 	bool "PTP support for Marvell 88E6xxx"
 	default n
-	depends on NET_DSA_MV88E6XXX && PTP_1588_CLOCK
+	depends on (NET_DSA_MV88E6XXX = y && PTP_1588_CLOCK = y) || \
+		   (NET_DSA_MV88E6XXX = m && PTP_1588_CLOCK)
 	help
 	  Say Y to enable PTP hardware timestamping on Marvell 88E6xxx switch
 	  chips that support it.
+108-58
drivers/net/dsa/qca/qca8k-8xxx.c
···3737}38383939static int4040-qca8k_set_lo(struct qca8k_priv *priv, int phy_id, u32 regnum, u16 lo)4040+qca8k_mii_write_lo(struct mii_bus *bus, int phy_id, u32 regnum, u32 val)4141{4242- u16 *cached_lo = &priv->mdio_cache.lo;4343- struct mii_bus *bus = priv->bus;4442 int ret;4343+ u16 lo;45444646- if (lo == *cached_lo)4747- return 0;4848-4545+ lo = val & 0xffff;4946 ret = bus->write(bus, phy_id, regnum, lo);5047 if (ret < 0)5148 dev_err_ratelimited(&bus->dev,5249 "failed to write qca8k 32bit lo register\n");53505454- *cached_lo = lo;5555- return 0;5151+ return ret;5652}57535854static int5959-qca8k_set_hi(struct qca8k_priv *priv, int phy_id, u32 regnum, u16 hi)5555+qca8k_mii_write_hi(struct mii_bus *bus, int phy_id, u32 regnum, u32 val)6056{6161- u16 *cached_hi = &priv->mdio_cache.hi;6262- struct mii_bus *bus = priv->bus;6357 int ret;5858+ u16 hi;64596565- if (hi == *cached_hi)6666- return 0;6767-6060+ hi = (u16)(val >> 16);6861 ret = bus->write(bus, phy_id, regnum, hi);6962 if (ret < 0)7063 dev_err_ratelimited(&bus->dev,7164 "failed to write qca8k 32bit hi register\n");72657373- *cached_hi = hi;6666+ return ret;6767+}6868+6969+static int7070+qca8k_mii_read_lo(struct mii_bus *bus, int phy_id, u32 regnum, u32 *val)7171+{7272+ int ret;7373+7474+ ret = bus->read(bus, phy_id, regnum);7575+ if (ret < 0)7676+ goto err;7777+7878+ *val = ret & 0xffff;7479 return 0;8080+8181+err:8282+ dev_err_ratelimited(&bus->dev,8383+ "failed to read qca8k 32bit lo register\n");8484+ *val = 0;8585+8686+ return ret;8787+}8888+8989+static int9090+qca8k_mii_read_hi(struct mii_bus *bus, int phy_id, u32 regnum, u32 *val)9191+{9292+ int ret;9393+9494+ ret = bus->read(bus, phy_id, regnum);9595+ if (ret < 0)9696+ goto err;9797+9898+ *val = ret << 16;9999+ return 0;100100+101101+err:102102+ dev_err_ratelimited(&bus->dev,103103+ "failed to read qca8k 32bit hi register\n");104104+ *val = 0;105105+106106+ return ret;75107}7610877109static int78110qca8k_mii_read32(struct mii_bus *bus, int phy_id, u32 
regnum, u32 *val)79111{112112+	u32 hi, lo;80113	int ret;811148282-	ret = bus->read(bus, phy_id, regnum);8383-	if (ret >= 0) {8484-		*val = ret;8585-		ret = bus->read(bus, phy_id, regnum + 1);8686-		*val |= ret << 16;8787-	}115115+	*val = 0;881168989-	if (ret < 0) {9090-		dev_err_ratelimited(&bus->dev,9191-				    "failed to read qca8k 32bit register\n");9292-		*val = 0;9393-		return ret;9494-	}117117+	ret = qca8k_mii_read_lo(bus, phy_id, regnum, &lo);118118+	if (ret < 0)119119+		goto err;951209696-	return 0;121121+	ret = qca8k_mii_read_hi(bus, phy_id, regnum + 1, &hi);122122+	if (ret < 0)123123+		goto err;124124+125125+	*val = lo | hi;126126+127127+err:128128+	return ret;97129}9813099131static void100100-qca8k_mii_write32(struct qca8k_priv *priv, int phy_id, u32 regnum, u32 val)132132+qca8k_mii_write32(struct mii_bus *bus, int phy_id, u32 regnum, u32 val)101133{102102-	u16 lo, hi;103103-	int ret;134134+	if (qca8k_mii_write_lo(bus, phy_id, regnum, val) < 0)135135+		return;104136105105-	lo = val & 0xffff;106106-	hi = (u16)(val >> 16);107107-108108-	ret = qca8k_set_lo(priv, phy_id, regnum, lo);109109-	if (ret >= 0)110110-		ret = qca8k_set_hi(priv, phy_id, regnum + 1, hi);137137+	qca8k_mii_write_hi(bus, phy_id, regnum + 1, val);111138}112139113140static int···173146174147	command = get_unaligned_le32(&mgmt_ethhdr->command);175148	cmd = FIELD_GET(QCA_HDR_MGMT_CMD, command);149149+176150	len = FIELD_GET(QCA_HDR_MGMT_LENGTH, command);151151+	/* Special case for len of 15 as this is the max value for len and needs to152152+	 * be increased before converting it from word to dword.153153+	 */154154+	if (len == 15)155155+		len++;156156+157157+	/* We can ignore odd values, we always round them up in the alloc function.
*/158158+	len *= sizeof(u16);177159178160	/* Make sure the seq match the requested packet */179161	if (get_unaligned_le32(&mgmt_ethhdr->seq) == mgmt_eth_data->seq)···229193	if (!skb)230194		return NULL;231195232232-	/* Max value for len reg is 15 (0xf) but the switch actually return 16 byte233233-	 * Actually for some reason the steps are:234234-	 * 0: nothing235235-	 * 1-4: first 4 byte236236-	 * 5-6: first 12 byte237237-	 * 7-15: all 16 byte196196+	/* Hdr mgmt length value is in steps of word size.197197+	 * As an example, to process 4 bytes of data the correct length to set is 2.198198+	 * To process 8 bytes 4, 12 bytes 6, 16 bytes 8...199199+	 *200200+	 * Odd values will always return the next size on the ack packet.201201+	 * (a length of 3 (6 bytes) will always return 8 bytes of data)202202+	 *203203+	 * This means that a value of 15 (0xf) actually means reading/writing 32 bytes204204+	 * of data.205205+	 *206206+	 * To correctly calculate the length we divide the requested len by word and207207+	 * round up.208208+	 * On the ack function we can skip the odd check as we already handle the209209+	 * case here.238210	 */239239-	if (len == 16)240240-		real_len = 15;241241-	else242242-		real_len = len;211211+	real_len = DIV_ROUND_UP(len, sizeof(u16));212212+213213+	/* We check if the result len is odd and we round up another time to214214+	 * the next size.
(length of 3 will be increased to 4 as switch will always215215+ * return 8 bytes)216216+ */217217+ if (real_len % sizeof(u16) != 0)218218+ real_len++;219219+220220+ /* Max reg value is 0xf(15) but switch will always return the next size (32 byte) */221221+ if (real_len == 16)222222+ real_len--;243223244224 skb_reset_mac_header(skb);245225 skb_set_network_header(skb, skb->len);···469417 if (ret < 0)470418 goto exit;471419472472- qca8k_mii_write32(priv, 0x10 | r2, r1, val);420420+ qca8k_mii_write32(bus, 0x10 | r2, r1, val);473421474422exit:475423 mutex_unlock(&bus->mdio_lock);···502450503451 val &= ~mask;504452 val |= write_val;505505- qca8k_mii_write32(priv, 0x10 | r2, r1, val);453453+ qca8k_mii_write32(bus, 0x10 | r2, r1, val);506454507455exit:508456 mutex_unlock(&bus->mdio_lock);···740688741689 qca8k_split_addr(reg, &r1, &r2, &page);742690743743- ret = read_poll_timeout(qca8k_mii_read32, ret1, !(val & mask), 0,691691+ ret = read_poll_timeout(qca8k_mii_read_hi, ret1, !(val & mask), 0,744692 QCA8K_BUSY_WAIT_TIMEOUT * USEC_PER_MSEC, false,745745- bus, 0x10 | r2, r1, &val);693693+ bus, 0x10 | r2, r1 + 1, &val);746694747695 /* Check if qca8k_read has failed for a different reason748696 * before returnting -ETIMEDOUT···777725 if (ret)778726 goto exit;779727780780- qca8k_mii_write32(priv, 0x10 | r2, r1, val);728728+ qca8k_mii_write32(bus, 0x10 | r2, r1, val);781729782730 ret = qca8k_mdio_busy_wait(bus, QCA8K_MDIO_MASTER_CTRL,783731 QCA8K_MDIO_MASTER_BUSY);784732785733exit:786734 /* even if the busy_wait timeouts try to clear the MASTER_EN */787787- qca8k_mii_write32(priv, 0x10 | r2, r1, 0);735735+ qca8k_mii_write_hi(bus, 0x10 | r2, r1 + 1, 0);788736789737 mutex_unlock(&bus->mdio_lock);790738···814762 if (ret)815763 goto exit;816764817817- qca8k_mii_write32(priv, 0x10 | r2, r1, val);765765+ qca8k_mii_write_hi(bus, 0x10 | r2, r1 + 1, val);818766819767 ret = qca8k_mdio_busy_wait(bus, QCA8K_MDIO_MASTER_CTRL,820768 QCA8K_MDIO_MASTER_BUSY);821769 if (ret)822770 goto 
exit;823771824824- ret = qca8k_mii_read32(bus, 0x10 | r2, r1, &val);772772+ ret = qca8k_mii_read_lo(bus, 0x10 | r2, r1, &val);825773826774exit:827775 /* even if the busy_wait timeouts try to clear the MASTER_EN */828828- qca8k_mii_write32(priv, 0x10 | r2, r1, 0);776776+ qca8k_mii_write_hi(bus, 0x10 | r2, r1 + 1, 0);829777830778 mutex_unlock(&bus->mdio_lock);831779···19951943 }1996194419971945 priv->mdio_cache.page = 0xffff;19981998- priv->mdio_cache.lo = 0xffff;19991999- priv->mdio_cache.hi = 0xffff;2000194620011947 /* Check the detected switch id */20021948 ret = qca8k_read_switch_id(priv);
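The length-field rules spelled out in the comments above (word-granular lengths, odd word counts rounded up by the switch, and the max field value 0xf selecting 32 bytes) can be sketched as two standalone helpers. `mgmt_len_to_reg` and `mgmt_reg_to_bytes` are hypothetical names for illustration only, not functions in the driver:

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Hypothetical helper: convert a requested transfer size in bytes into the
 * value written to the mgmt header length field (counted in 16-bit words). */
static unsigned int mgmt_len_to_reg(unsigned int len_bytes)
{
	unsigned int real_len = DIV_ROUND_UP(len_bytes, 2);

	/* Odd word counts make the switch return the next size, so round up. */
	if (real_len % 2)
		real_len++;
	/* 0xf (15) is the max field value and already selects 32 bytes. */
	if (real_len == 16)
		real_len--;
	return real_len;
}

/* Hypothetical inverse used on the ack path: field value back to bytes. */
static unsigned int mgmt_reg_to_bytes(unsigned int len)
{
	if (len == 15)	/* max value, actually means 16 words */
		len++;
	return len * 2;
}
```

Note how a requested length of 6 bytes (3 words) lands on the 8-byte bucket, matching the ack-side behaviour described in the comment.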
-5
drivers/net/dsa/qca/qca8k.h
···375375 * mdio writes376376 */377377 u16 page;378378-/* lo and hi can also be cached and from Documentation we can skip one379379- * extra mdio write if lo or hi is didn't change.380380- */381381- u16 lo;382382- u16 hi;383378};384379385380struct qca8k_pcs {
+9-20
drivers/net/ethernet/amazon/ena/ena_com.c
···24002400 return -EOPNOTSUPP;24012401 }2402240224032403- switch (func) {24042404- case ENA_ADMIN_TOEPLITZ:24052405- if (key) {24062406- if (key_len != sizeof(hash_key->key)) {24072407- netdev_err(ena_dev->net_device,24082408- "key len (%u) doesn't equal the supported size (%zu)\n",24092409- key_len, sizeof(hash_key->key));24102410- return -EINVAL;24112411- }24122412- memcpy(hash_key->key, key, key_len);24132413- rss->hash_init_val = init_val;24142414- hash_key->key_parts = key_len / sizeof(hash_key->key[0]);24032403+ if ((func == ENA_ADMIN_TOEPLITZ) && key) {24042404+ if (key_len != sizeof(hash_key->key)) {24052405+ netdev_err(ena_dev->net_device,24062406+ "key len (%u) doesn't equal the supported size (%zu)\n",24072407+ key_len, sizeof(hash_key->key));24082408+ return -EINVAL;24152409 }24162416- break;24172417- case ENA_ADMIN_CRC32:24182418- rss->hash_init_val = init_val;24192419- break;24202420- default:24212421- netdev_err(ena_dev->net_device, "Invalid hash function (%d)\n",24222422- func);24232423- return -EINVAL;24102410+ memcpy(hash_key->key, key, key_len);24112411+ hash_key->key_parts = key_len / sizeof(hash_key->key[0]);24242412 }2425241324142414+ rss->hash_init_val = init_val;24262415 old_func = rss->hash_func;24272416 rss->hash_func = func;24282417 rc = ena_com_set_hash_function(ena_dev);
+1-5
drivers/net/ethernet/amazon/ena/ena_ethtool.c
···887887 switch (tuna->id) {888888 case ETHTOOL_RX_COPYBREAK:889889 len = *(u32 *)data;890890- if (len > adapter->netdev->mtu) {891891- ret = -EINVAL;892892- break;893893- }894894- adapter->rx_copybreak = len;890890+ ret = ena_set_rx_copybreak(adapter, len);895891 break;896892 default:897893 ret = -EINVAL;
+60-23
drivers/net/ethernet/amazon/ena/ena_netdev.c
···374374375375static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp)376376{377377+ u32 verdict = ENA_XDP_PASS;377378 struct bpf_prog *xdp_prog;378379 struct ena_ring *xdp_ring;379379- u32 verdict = XDP_PASS;380380 struct xdp_frame *xdpf;381381 u64 *xdp_stat;382382···393393 if (unlikely(!xdpf)) {394394 trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict);395395 xdp_stat = &rx_ring->rx_stats.xdp_aborted;396396- verdict = XDP_ABORTED;396396+ verdict = ENA_XDP_DROP;397397 break;398398 }399399···409409410410 spin_unlock(&xdp_ring->xdp_tx_lock);411411 xdp_stat = &rx_ring->rx_stats.xdp_tx;412412+ verdict = ENA_XDP_TX;412413 break;413414 case XDP_REDIRECT:414415 if (likely(!xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog))) {415416 xdp_stat = &rx_ring->rx_stats.xdp_redirect;417417+ verdict = ENA_XDP_REDIRECT;416418 break;417419 }418420 trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict);419421 xdp_stat = &rx_ring->rx_stats.xdp_aborted;420420- verdict = XDP_ABORTED;422422+ verdict = ENA_XDP_DROP;421423 break;422424 case XDP_ABORTED:423425 trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict);424426 xdp_stat = &rx_ring->rx_stats.xdp_aborted;427427+ verdict = ENA_XDP_DROP;425428 break;426429 case XDP_DROP:427430 xdp_stat = &rx_ring->rx_stats.xdp_drop;431431+ verdict = ENA_XDP_DROP;428432 break;429433 case XDP_PASS:430434 xdp_stat = &rx_ring->rx_stats.xdp_pass;435435+ verdict = ENA_XDP_PASS;431436 break;432437 default:433438 bpf_warn_invalid_xdp_action(rx_ring->netdev, xdp_prog, verdict);434439 xdp_stat = &rx_ring->rx_stats.xdp_invalid;440440+ verdict = ENA_XDP_DROP;435441 }436442437443 ena_increase_stat(xdp_stat, 1, &rx_ring->syncp);···518512 struct bpf_prog *prog,519513 int first, int count)520514{515515+ struct bpf_prog *old_bpf_prog;521516 struct ena_ring *rx_ring;522517 int i = 0;523518524519 for (i = first; i < count; i++) {525520 rx_ring = &adapter->rx_ring[i];526526- xchg(&rx_ring->xdp_bpf_prog, prog);527527- if (prog) {521521+ old_bpf_prog 
= xchg(&rx_ring->xdp_bpf_prog, prog);522522+523523+ if (!old_bpf_prog && prog) {528524 ena_xdp_register_rxq_info(rx_ring);529525 rx_ring->rx_headroom = XDP_PACKET_HEADROOM;530530- } else {526526+ } else if (old_bpf_prog && !prog) {531527 ena_xdp_unregister_rxq_info(rx_ring);532528 rx_ring->rx_headroom = NET_SKB_PAD;533529 }···680672 ring->ena_dev = adapter->ena_dev;681673 ring->per_napi_packets = 0;682674 ring->cpu = 0;675675+ ring->numa_node = 0;683676 ring->no_interrupt_event_cnt = 0;684677 u64_stats_init(&ring->syncp);685678}···784775 tx_ring->next_to_use = 0;785776 tx_ring->next_to_clean = 0;786777 tx_ring->cpu = ena_irq->cpu;778778+ tx_ring->numa_node = node;787779 return 0;788780789781err_push_buf_intermediate_buf:···917907 rx_ring->next_to_clean = 0;918908 rx_ring->next_to_use = 0;919909 rx_ring->cpu = ena_irq->cpu;910910+ rx_ring->numa_node = node;920911921912 return 0;922913}···16301619 * we expect, then we simply drop it16311620 */16321621 if (unlikely(rx_ring->ena_bufs[0].len > ENA_XDP_MAX_MTU))16331633- return XDP_DROP;16221622+ return ENA_XDP_DROP;1634162316351624 ret = ena_xdp_execute(rx_ring, xdp);1636162516371626 /* The xdp program might expand the headers */16381638- if (ret == XDP_PASS) {16271627+ if (ret == ENA_XDP_PASS) {16391628 rx_info->page_offset = xdp->data - xdp->data_hard_start;16401629 rx_ring->ena_bufs[0].len = xdp->data_end - xdp->data;16411630 }···16741663 xdp_init_buff(&xdp, ENA_PAGE_SIZE, &rx_ring->xdp_rxq);1675166416761665 do {16771677- xdp_verdict = XDP_PASS;16661666+ xdp_verdict = ENA_XDP_PASS;16781667 skb = NULL;16791668 ena_rx_ctx.ena_bufs = rx_ring->ena_bufs;16801669 ena_rx_ctx.max_bufs = rx_ring->sgl_size;···17021691 xdp_verdict = ena_xdp_handle_buff(rx_ring, &xdp);1703169217041693 /* allocate skb and fill it */17051705- if (xdp_verdict == XDP_PASS)16941694+ if (xdp_verdict == ENA_XDP_PASS)17061695 skb = ena_rx_skb(rx_ring,17071696 rx_ring->ena_bufs,17081697 ena_rx_ctx.descs,···17201709 /* Packets was passed for transmission, 
unmap it17211710 * from RX side.17221711 */17231723- if (xdp_verdict == XDP_TX || xdp_verdict == XDP_REDIRECT) {17121712+ if (xdp_verdict & ENA_XDP_FORWARDED) {17241713 ena_unmap_rx_buff(rx_ring,17251714 &rx_ring->rx_buffer_info[req_id]);17261715 rx_ring->rx_buffer_info[req_id].page = NULL;17271716 }17281717 }17291729- if (xdp_verdict != XDP_PASS) {17181718+ if (xdp_verdict != ENA_XDP_PASS) {17301719 xdp_flags |= xdp_verdict;17201720+ total_len += ena_rx_ctx.ena_bufs[0].len;17311721 res_budget--;17321722 continue;17331723 }···17721760 ena_refill_rx_bufs(rx_ring, refill_required);17731761 }1774176217751775- if (xdp_flags & XDP_REDIRECT)17631763+ if (xdp_flags & ENA_XDP_REDIRECT)17761764 xdp_do_flush_map();1777176517781766 return work_done;···18261814static void ena_unmask_interrupt(struct ena_ring *tx_ring,18271815 struct ena_ring *rx_ring)18281816{18171817+ u32 rx_interval = tx_ring->smoothed_interval;18291818 struct ena_eth_io_intr_reg intr_reg;18301830- u32 rx_interval = 0;18191819+18311820 /* Rx ring can be NULL when for XDP tx queues which don't have an18321821 * accompanying rx_ring pair.18331822 */···18661853 if (likely(tx_ring->cpu == cpu))18671854 goto out;1868185518561856+ tx_ring->cpu = cpu;18571857+ if (rx_ring)18581858+ rx_ring->cpu = cpu;18591859+18691860 numa_node = cpu_to_node(cpu);18611861+18621862+ if (likely(tx_ring->numa_node == numa_node))18631863+ goto out;18641864+18701865 put_cpu();1871186618721867 if (numa_node != NUMA_NO_NODE) {18731868 ena_com_update_numa_node(tx_ring->ena_com_io_cq, numa_node);18741874- if (rx_ring)18691869+ tx_ring->numa_node = numa_node;18701870+ if (rx_ring) {18711871+ rx_ring->numa_node = numa_node;18751872 ena_com_update_numa_node(rx_ring->ena_com_io_cq,18761873 numa_node);18741874+ }18771875 }18781878-18791879- tx_ring->cpu = cpu;18801880- if (rx_ring)18811881- rx_ring->cpu = cpu;1882187618831877 return;18841878out:···20071987 if (ena_com_get_adaptive_moderation_enabled(rx_ring->ena_dev))20081988 
ena_adjust_adaptive_rx_intr_moderation(ena_napi);2009198919901990+ ena_update_ring_numa_node(tx_ring, rx_ring);20101991 ena_unmask_interrupt(tx_ring, rx_ring);20111992 }20122012-20132013- ena_update_ring_numa_node(tx_ring, rx_ring);2014199320151994 ret = rx_work_done;20161995 } else {···23952376 ctx.mem_queue_type = ena_dev->tx_mem_queue_type;23962377 ctx.msix_vector = msix_vector;23972378 ctx.queue_size = tx_ring->ring_size;23982398- ctx.numa_node = cpu_to_node(tx_ring->cpu);23792379+ ctx.numa_node = tx_ring->numa_node;2399238024002381 rc = ena_com_create_io_queue(ena_dev, &ctx);24012382 if (rc) {···24632444 ctx.mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;24642445 ctx.msix_vector = msix_vector;24652446 ctx.queue_size = rx_ring->ring_size;24662466- ctx.numa_node = cpu_to_node(rx_ring->cpu);24472447+ ctx.numa_node = rx_ring->numa_node;2467244824682449 rc = ena_com_create_io_queue(ena_dev, &ctx);24692450 if (rc) {···28222803 adapter->xdp_num_queues +28232804 adapter->num_io_queues);28242805 return dev_was_up ? ena_up(adapter) : 0;28062806+}28072807+28082808+int ena_set_rx_copybreak(struct ena_adapter *adapter, u32 rx_copybreak)28092809+{28102810+ struct ena_ring *rx_ring;28112811+ int i;28122812+28132813+ if (rx_copybreak > min_t(u16, adapter->netdev->mtu, ENA_PAGE_SIZE))28142814+ return -EINVAL;28152815+28162816+ adapter->rx_copybreak = rx_copybreak;28172817+28182818+ for (i = 0; i < adapter->num_io_queues; i++) {28192819+ rx_ring = &adapter->rx_ring[i];28202820+ rx_ring->rx_copybreak = rx_copybreak;28212821+ }28222822+28232823+ return 0;28252824}2826282528272826int ena_update_queue_count(struct ena_adapter *adapter, u32 new_channel_count)
+15-2
drivers/net/ethernet/amazon/ena/ena_netdev.h
···262262 bool disable_meta_caching;263263 u16 no_interrupt_event_cnt;264264265265- /* cpu for TPH */265265+ /* cpu and NUMA for TPH */266266 int cpu;267267- /* number of tx/rx_buffer_info's entries */267267+ int numa_node;268268+269269+ /* number of tx/rx_buffer_info's entries */268270 int ring_size;269271270272 enum ena_admin_placement_policy_type tx_mem_queue_type;···394392395393int ena_update_queue_count(struct ena_adapter *adapter, u32 new_channel_count);396394395395+int ena_set_rx_copybreak(struct ena_adapter *adapter, u32 rx_copybreak);396396+397397int ena_get_sset_count(struct net_device *netdev, int sset);398398399399static inline void ena_reset_device(struct ena_adapter *adapter,···412408 ENA_XDP_CURRENT_MTU_TOO_LARGE,413409 ENA_XDP_NO_ENOUGH_QUEUES,414410};411411+412412+enum ENA_XDP_ACTIONS {413413+ ENA_XDP_PASS = 0,414414+ ENA_XDP_TX = BIT(0),415415+ ENA_XDP_REDIRECT = BIT(1),416416+ ENA_XDP_DROP = BIT(2)417417+};418418+419419+#define ENA_XDP_FORWARDED (ENA_XDP_TX | ENA_XDP_REDIRECT)415420416421static inline bool ena_xdp_present(struct ena_adapter *adapter)417422{
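The new `ENA_XDP_ACTIONS` enum above encodes verdicts as individual bits, so several verdicts can accumulate in `xdp_flags` over a NAPI budget and the "packet was handed off" case can be tested with a single mask. A minimal sketch of that pattern, mirroring the definitions from the hunk:

```c
#include <assert.h>

#define BIT(n) (1U << (n))

enum ena_xdp_actions {
	ENA_XDP_PASS     = 0,
	ENA_XDP_TX       = BIT(0),
	ENA_XDP_REDIRECT = BIT(1),
	ENA_XDP_DROP     = BIT(2),
};

#define ENA_XDP_FORWARDED (ENA_XDP_TX | ENA_XDP_REDIRECT)

/* Packets that were handed off (TX or redirect) must be unmapped from the
 * RX side; one mask test covers both verdicts. */
static int needs_rx_unmap(unsigned int verdict)
{
	return (verdict & ENA_XDP_FORWARDED) != 0;
}
```

This is why the RX path above could replace `xdp_verdict == XDP_TX || xdp_verdict == XDP_REDIRECT` with `xdp_verdict & ENA_XDP_FORWARDED`.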
···222222	int err;223223224224	list_for_each_entry(flow, flow_list, tmp_list) {225225-		if (!mlx5e_is_offloaded_flow(flow) || flow_flag_test(flow, SLOW))225225+		if (!mlx5e_is_offloaded_flow(flow))226226			continue;227227228228		attr = mlx5e_tc_get_encap_attr(flow);···230230		/* mark the flow's encap dest as non-valid */231231		esw_attr->dests[flow->tmp_entry_index].flags &= ~MLX5_ESW_DEST_ENCAP_VALID;232232		esw_attr->dests[flow->tmp_entry_index].pkt_reformat = NULL;233233+234234+		/* Clear pkt_reformat before checking the slow path flag. In a235235+		 * later iteration the same flow may already have the slow path236236+		 * flag set, but its pkt_reformat still needs to be cleared.237237+		 */238238+		if (flow_flag_test(flow, SLOW))239239+			continue;233240234241		/* update from encap rule to slow path rule */235242		spec = &flow->attr->parse_attr->spec;
···674674	dev = container_of(priv, struct mlx5_core_dev, priv);675675	devlink = priv_to_devlink(dev);676676677677+	mutex_lock(&dev->intf_state_mutex);678678+	if (test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) {679679+		mlx5_core_err(dev, "health works are not permitted at this stage\n");680680+		mutex_unlock(&dev->intf_state_mutex);681681+		return;682682+	}683683+	mutex_unlock(&dev->intf_state_mutex);677684	enter_error_state(dev, false);678685	if (IS_ERR_OR_NULL(health->fw_fatal_reporter)) {679686		devl_lock(devlink);
···7171 params->packet_merge.type = MLX5E_PACKET_MERGE_NONE;7272 params->hard_mtu = MLX5_IB_GRH_BYTES + MLX5_IPOIB_HARD_LEN;7373 params->tunneled_offload_en = false;7474+7575+ /* CQE compression is not supported for IPoIB */7676+ params->rx_cqe_compress_def = false;7777+ MLX5E_SET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS, params->rx_cqe_compress_def);7478}75797680/* Called directly after IPoIB netdevice was created to initialize SW structs */
+1
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
···228228 if (ldev->nb.notifier_call)229229 unregister_netdevice_notifier_net(&init_net, &ldev->nb);230230 mlx5_lag_mp_cleanup(ldev);231231+ cancel_delayed_work_sync(&ldev->bond_work);231232 destroy_workqueue(ldev->wq);232233 mlx5_lag_mpesw_cleanup(ldev);233234 mutex_destroy(&ldev->lock);
···834834 if (err)835835 goto cleanup_config;836836837837- if (!of_get_mac_address(np, sparx5->base_mac)) {837837+ if (of_get_mac_address(np, sparx5->base_mac)) {838838 dev_info(sparx5->dev, "MAC addr was not set, use random MAC\n");839839 eth_random_addr(sparx5->base_mac);840840 sparx5->base_mac[5] = 0;
+7
drivers/net/ethernet/netronome/nfp/nfp_net.h
···617617 * @vnic_no_name: For non-port PF vNIC make ndo_get_phys_port_name return618618 * -EOPNOTSUPP to keep backwards compatibility (set by app)619619 * @port: Pointer to nfp_port structure if vNIC is a port620620+ * @mc_lock: Protect mc_addrs list621621+ * @mc_addrs: List of mc addrs to add/del to HW622622+ * @mc_work: Work to update mc addrs620623 * @app_priv: APP private data for this vNIC621624 */622625struct nfp_net {···720717 bool vnic_no_name;721718722719 struct nfp_port *port;720720+721721+ spinlock_t mc_lock;722722+ struct list_head mc_addrs;723723+ struct work_struct mc_work;723724724725 void *app_priv;725726};
···4141 unsigned long state;4242};43434444-static inline void qlcnic_clear_dcb_ops(struct qlcnic_dcb *dcb)4545-{4646- kfree(dcb);4747-}4848-4944static inline int qlcnic_dcb_get_hw_capability(struct qlcnic_dcb *dcb)5045{5146 if (dcb && dcb->ops->get_hw_capability)···107112 dcb->ops->init_dcbnl_ops(dcb);108113}109114110110-static inline void qlcnic_dcb_enable(struct qlcnic_dcb *dcb)115115+static inline int qlcnic_dcb_enable(struct qlcnic_dcb *dcb)111116{112112- if (dcb && qlcnic_dcb_attach(dcb))113113- qlcnic_clear_dcb_ops(dcb);117117+ return dcb ? qlcnic_dcb_attach(dcb) : 0;114118}115119#endif
+7-1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
···25992599 "Device does not support MSI interrupts\n");2600260026012601 if (qlcnic_82xx_check(adapter)) {26022602- qlcnic_dcb_enable(adapter->dcb);26022602+ err = qlcnic_dcb_enable(adapter->dcb);26032603+ if (err) {26042604+ qlcnic_dcb_free(adapter->dcb);26052605+ dev_err(&pdev->dev, "Failed to enable DCB\n");26062606+ goto err_out_free_hw;26072607+ }26082608+26032609 qlcnic_dcb_get_info(adapter->dcb);26042610 err = qlcnic_setup_intr(adapter);26052611
···1385138513861386 /* loopback, multicast & non-ND link-local traffic; do not push through13871387 * packet taps again. Reset pkt_type for upper layers to process skb.13881388- * For strict packets with a source LLA, determine the dst using the13891389- * original ifindex.13881388+ * For non-loopback strict packets, determine the dst using the original13891389+ * ifindex.13901390 */13911391 if (skb->pkt_type == PACKET_LOOPBACK || (need_strict && !is_ndisc)) {13921392 skb->dev = vrf_dev;···1395139513961396 if (skb->pkt_type == PACKET_LOOPBACK)13971397 skb->pkt_type = PACKET_HOST;13981398- else if (ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)13981398+ else13991399 vrf_ip6_input_dst(skb, vrf_dev, orig_iif);1400140014011401 goto out;
···327327}328328329329#ifdef CONFIG_ATH9K_HTC_DEBUGFS330330-#define __STAT_SAFE(hif_dev, expr) ((hif_dev)->htc_handle->drv_priv ? (expr) : 0)331331-#define CAB_STAT_INC(priv) ((priv)->debug.tx_stats.cab_queued++)332332-#define TX_QSTAT_INC(priv, q) ((priv)->debug.tx_stats.queue_stats[q]++)330330+#define __STAT_SAFE(hif_dev, expr) do { ((hif_dev)->htc_handle->drv_priv ? (expr) : 0); } while (0)331331+#define CAB_STAT_INC(priv) do { ((priv)->debug.tx_stats.cab_queued++); } while (0)332332+#define TX_QSTAT_INC(priv, q) do { ((priv)->debug.tx_stats.queue_stats[q]++); } while (0)333333334334#define TX_STAT_INC(hif_dev, c) \335335 __STAT_SAFE((hif_dev), (hif_dev)->htc_handle->drv_priv->debug.tx_stats.c++)···378378 struct ethtool_stats *stats, u64 *data);379379#else380380381381-#define TX_STAT_INC(hif_dev, c)382382-#define TX_STAT_ADD(hif_dev, c, a)383383-#define RX_STAT_INC(hif_dev, c)384384-#define RX_STAT_ADD(hif_dev, c, a)381381+#define TX_STAT_INC(hif_dev, c) do { } while (0)382382+#define TX_STAT_ADD(hif_dev, c, a) do { } while (0)383383+#define RX_STAT_INC(hif_dev, c) do { } while (0)384384+#define RX_STAT_ADD(hif_dev, c, a) do { } while (0)385385386386#define CAB_STAT_INC(priv)387387#define TX_QSTAT_INC(priv, c)
+5
drivers/net/wireless/intel/iwlwifi/fw/acpi.c
···11061106 int i, j, num_sub_bands;11071107 s8 *gain;1108110811091109+ /* many firmware images for JF lie about this */11101110+ if (CSR_HW_RFID_TYPE(fwrt->trans->hw_rf_id) ==11111111+ CSR_HW_RFID_TYPE(CSR_HW_RF_ID_TYPE_JF))11121112+ return -EOPNOTSUPP;11131113+11091114 if (!fw_has_capa(&fwrt->fw->ucode_capa, IWL_UCODE_TLV_CAPA_SET_PPAG)) {11101115 IWL_DEBUG_RADIO(fwrt,11111116 "PPAG capability not supported by FW, command not sent.\n");
+1
drivers/net/wireless/mediatek/mt76/mt7996/Kconfig
···22config MT7996E33 tristate "MediaTek MT7996 (PCIe) support"44 select MT76_CONNAC_LIB55+ select RELAY56 depends on MAC8021167 depends on PCI78 help
···10741074	return 0;10751075}1076107610771077+static u32 nvme_known_nvm_effects(u8 opcode)10781078+{10791079+	switch (opcode) {10801080+	case nvme_cmd_write:10811081+	case nvme_cmd_write_zeroes:10821082+	case nvme_cmd_write_uncor:10831083+		return NVME_CMD_EFFECTS_LBCC;10841084+	default:10851085+		return 0;10861086+	}10871087+}10881088+10771089u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u8 opcode)10781090{10791091	u32 effects = 0;···10931081	if (ns) {10941082		if (ns->head->effects)10951083			effects = le32_to_cpu(ns->head->effects->iocs[opcode]);10841084+		if (ns->head->ids.csi == NVME_CAP_CSS_NVM)10851085+			effects |= nvme_known_nvm_effects(opcode);10961086		if (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC))10971087			dev_warn_once(ctrl->device,10981098-				      "IO command:%02x has unhandled effects:%08x\n",10881088+				      "IO command:%02x has unusual effects:%08x\n",10991089				      opcode, effects);11001100-		return 0;11011101-	}1102109011031103-	if (ctrl->effects)11041104-		effects = le32_to_cpu(ctrl->effects->acs[opcode]);11051105-	effects |= nvme_known_admin_effects(opcode);10911091+		/*10921092+		 * NVME_CMD_EFFECTS_CSE_MASK causes a freeze of all I/O queues,10931093+		 * which would deadlock when done on an I/O command. Note that10941094+		 * we already warn about an unusual effect above.10951095+		 */10961096+		effects &= ~NVME_CMD_EFFECTS_CSE_MASK;10971097+	} else {10981098+		if (ctrl->effects)10991099+			effects = le32_to_cpu(ctrl->effects->acs[opcode]);11001100+		effects |= nvme_known_admin_effects(opcode);11011101+	}1106110211071103	return effects;11081104}···4946492649474927	memset(set, 0, sizeof(*set));49484928	set->ops = ops;49494949-	set->queue_depth = ctrl->sqsize + 1;49294929+	set->queue_depth = min_t(unsigned, ctrl->sqsize, BLK_MQ_MAX_DEPTH - 1);49504930	/*49514931	 * Some Apple controllers requires tags to be unique across admin and49524932	 * the (only) I/O queue, so reserve the first 32 tags of the I/O queue.
+24-4
drivers/nvme/host/ioctl.c
···1111static bool nvme_cmd_allowed(struct nvme_ns *ns, struct nvme_command *c,1212 fmode_t mode)1313{1414+ u32 effects;1515+1416 if (capable(CAP_SYS_ADMIN))1517 return true;1618···4543 }46444745 /*4848- * Only allow I/O commands that transfer data to the controller if the4949- * special file is open for writing, but always allow I/O commands that5050- * transfer data from the controller.4646+ * Check if the controller provides a Commands Supported and Effects log4747+ * and marks this command as supported. If not reject unprivileged4848+ * passthrough.5149 */5252- if (nvme_is_write(c))5050+ effects = nvme_command_effects(ns->ctrl, ns, c->common.opcode);5151+ if (!(effects & NVME_CMD_EFFECTS_CSUPP))5252+ return false;5353+5454+ /*5555+ * Don't allow passthrough for command that have intrusive (or unknown)5656+ * effects.5757+ */5858+ if (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC |5959+ NVME_CMD_EFFECTS_UUID_SEL |6060+ NVME_CMD_EFFECTS_SCOPE_MASK))6161+ return false;6262+6363+ /*6464+ * Only allow I/O commands that transfer data to the controller or that6565+ * change the logical block contents if the file descriptor is open for6666+ * writing.6767+ */6868+ if (nvme_is_write(c) || (effects & NVME_CMD_EFFECTS_LBCC))5369 return mode & FMODE_WRITE;5470 return true;5571}
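The gate added above can be summarized as three checks: the effects log must mark the command as supported, no effects beyond a harmless set may be present, and anything that changes logical block contents needs a writable file descriptor. A simplified sketch follows; the flag values are placeholders for illustration (real definitions live in `include/linux/nvme.h`), and the UUID-selection and scope bits from the diff are omitted for brevity:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag values only; not the kernel's definitions. */
#define NVME_CMD_EFFECTS_CSUPP	(1U << 0)
#define NVME_CMD_EFFECTS_LBCC	(1U << 1)
#define MODE_WRITE		(1U << 0)

/* Sketch of the unprivileged-passthrough gate: command must be supported,
 * must have no intrusive or unknown effects, and LBCC (or a data-carrying
 * write) requires a writable open mode. */
static bool passthru_allowed(unsigned int effects, bool is_write,
			     unsigned int mode)
{
	if (!(effects & NVME_CMD_EFFECTS_CSUPP))
		return false;
	if (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC))
		return false;	/* intrusive or unknown effects */
	if (is_write || (effects & NVME_CMD_EFFECTS_LBCC))
		return mode & MODE_WRITE;
	return true;
}
```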
+2
drivers/nvme/host/multipath.c
···376376 * pool from the original queue to allocate the bvecs from.377377 */378378 bio = bio_split_to_limits(bio);379379+ if (!bio)380380+ return;379381380382 srcu_idx = srcu_read_lock(&head->srcu);381383 ns = nvme_find_path(head);
+1-1
drivers/nvme/host/nvme.h
···893893{894894 struct nvme_ns *ns = req->q->queuedata;895895896896- if (req->cmd_flags & REQ_NVME_MPATH)896896+ if ((req->cmd_flags & REQ_NVME_MPATH) && req->bio)897897 trace_block_bio_complete(ns->head->disk->queue, req->bio);898898}899899
+24-22
drivers/nvme/host/pci.c
···3636#define SQ_SIZE(q) ((q)->q_depth << (q)->sqes)3737#define CQ_SIZE(q) ((q)->q_depth * sizeof(struct nvme_completion))38383939-#define SGES_PER_PAGE (PAGE_SIZE / sizeof(struct nvme_sgl_desc))3939+#define SGES_PER_PAGE (NVME_CTRL_PAGE_SIZE / sizeof(struct nvme_sgl_desc))40404141/*4242 * These can be higher, but we need to ensure that any command doesn't···144144 mempool_t *iod_mempool;145145146146 /* shadow doorbell buffer support: */147147- u32 *dbbuf_dbs;147147+ __le32 *dbbuf_dbs;148148 dma_addr_t dbbuf_dbs_dma_addr;149149- u32 *dbbuf_eis;149149+ __le32 *dbbuf_eis;150150 dma_addr_t dbbuf_eis_dma_addr;151151152152 /* host memory buffer support: */···208208#define NVMEQ_SQ_CMB 1209209#define NVMEQ_DELETE_ERROR 2210210#define NVMEQ_POLLED 3211211- u32 *dbbuf_sq_db;212212- u32 *dbbuf_cq_db;213213- u32 *dbbuf_sq_ei;214214- u32 *dbbuf_cq_ei;211211+ __le32 *dbbuf_sq_db;212212+ __le32 *dbbuf_cq_db;213213+ __le32 *dbbuf_sq_ei;214214+ __le32 *dbbuf_cq_ei;215215 struct completion delete_done;216216};217217···343343}344344345345/* Update dbbuf and return true if an MMIO is required */346346-static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,347347- volatile u32 *dbbuf_ei)346346+static bool nvme_dbbuf_update_and_check_event(u16 value, __le32 *dbbuf_db,347347+ volatile __le32 *dbbuf_ei)348348{349349 if (dbbuf_db) {350350- u16 old_value;350350+ u16 old_value, event_idx;351351352352 /*353353 * Ensure that the queue is written before updating···355355 */356356 wmb();357357358358- old_value = *dbbuf_db;359359- *dbbuf_db = value;358358+ old_value = le32_to_cpu(*dbbuf_db);359359+ *dbbuf_db = cpu_to_le32(value);360360361361 /*362362 * Ensure that the doorbell is updated before reading the event···366366 */367367 mb();368368369369- if (!nvme_dbbuf_need_event(*dbbuf_ei, value, old_value))369369+ event_idx = le32_to_cpu(*dbbuf_ei);370370+ if (!nvme_dbbuf_need_event(event_idx, value, old_value))370371 return false;371372 }372373···381380 */382381static int 
nvme_pci_npages_prp(void)383382{384384- unsigned nprps = DIV_ROUND_UP(NVME_MAX_KB_SZ + NVME_CTRL_PAGE_SIZE,385385- NVME_CTRL_PAGE_SIZE);386386- return DIV_ROUND_UP(8 * nprps, PAGE_SIZE - 8);383383+ unsigned max_bytes = (NVME_MAX_KB_SZ * 1024) + NVME_CTRL_PAGE_SIZE;384384+ unsigned nprps = DIV_ROUND_UP(max_bytes, NVME_CTRL_PAGE_SIZE);385385+ return DIV_ROUND_UP(8 * nprps, NVME_CTRL_PAGE_SIZE - 8);387386}388387389388/*···393392static int nvme_pci_npages_sgl(void)394393{395394 return DIV_ROUND_UP(NVME_MAX_SEGS * sizeof(struct nvme_sgl_desc),396396- PAGE_SIZE);395395+ NVME_CTRL_PAGE_SIZE);397396}398397399398static int nvme_admin_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,···709708 sge->length = cpu_to_le32(entries * sizeof(*sge));710709 sge->type = NVME_SGL_FMT_LAST_SEG_DESC << 4;711710 } else {712712- sge->length = cpu_to_le32(PAGE_SIZE);711711+ sge->length = cpu_to_le32(NVME_CTRL_PAGE_SIZE);713712 sge->type = NVME_SGL_FMT_SEG_DESC << 4;714713 }715714}···23332332 if (dev->cmb_use_sqes) {23342333 result = nvme_cmb_qdepth(dev, nr_io_queues,23352334 sizeof(struct nvme_command));23362336- if (result > 0)23352335+ if (result > 0) {23372336 dev->q_depth = result;23382338- else23372337+ dev->ctrl.sqsize = result - 1;23382338+ } else {23392339 dev->cmb_use_sqes = false;23402340+ }23402341 }2341234223422343 do {···2539253625402537 dev->q_depth = min_t(u32, NVME_CAP_MQES(dev->ctrl.cap) + 1,25412538 io_queue_depth);25422542- dev->ctrl.sqsize = dev->q_depth - 1; /* 0's based queue depth */25432539 dev->db_stride = 1 << NVME_CAP_STRIDE(dev->ctrl.cap);25442540 dev->dbs = dev->bar + 4096;25452541···25792577 dev_warn(dev->ctrl.device, "IO queue depth clamped to %d\n",25802578 dev->q_depth);25812579 }25822582-25802580+ dev->ctrl.sqsize = dev->q_depth - 1; /* 0's based queue depth */2583258125842582 nvme_map_cmb(dev);25852583
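The hunk above converts the shadow doorbell buffers to `__le32` and reads the event index into a local before comparing. The comparison itself is the standard event-index test from the NVMe shadow doorbell mechanism, which can be sketched on its own (the `uint16_t` arithmetic handles index wraparound):

```c
#include <assert.h>
#include <stdint.h>

/* Standard shadow-doorbell event test: an MMIO doorbell write is only
 * needed when the new index has passed the event index since the last
 * write, with uint16_t arithmetic handling wraparound. */
static int dbbuf_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
{
	return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old);
}
```

This is why the fixed code first does `event_idx = le32_to_cpu(*dbbuf_ei)` before the test: comparing a raw little-endian value on a big-endian host would defeat the wraparound math.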
···334334 }335335336336 /*337337- * If there are effects for the command we are about to execute, or338338- * an end_req function we need to use nvme_execute_passthru_rq()339339- * synchronously in a work item seeing the end_req function and340340- * nvme_passthru_end() can't be called in the request done callback341341- * which is typically in interrupt context.337337+ * If a command needs post-execution fixups, or there are any338338+ * non-trivial effects, make sure to execute the command synchronously339339+ * in a workqueue so that nvme_passthru_end gets called.342340 */343341 effects = nvme_command_effects(ctrl, ns, req->cmd->common.opcode);344344- if (req->p.use_workqueue || effects) {342342+ if (req->p.use_workqueue ||343343+ (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC))) {345344 INIT_WORK(&req->p.work, nvmet_passthru_execute_cmd_work);346345 req->p.rq = rq;347346 queue_work(nvmet_wq, &req->p.work);
+32-28
drivers/of/fdt.c
···10991099 */11001100int __init early_init_dt_scan_memory(void)11011101{11021102- int node;11021102+ int node, found_memory = 0;11031103 const void *fdt = initial_boot_params;1104110411051105 fdt_for_each_subnode(node, fdt, 0) {···1139113911401140 early_init_dt_add_memory_arch(base, size);1141114111421142+ found_memory = 1;11431143+11421144 if (!hotpluggable)11431145 continue;11441146···11491147 base, base + size);11501148 }11511149 }11521152- return 0;11501150+ return found_memory;11531151}1154115211551153int __init early_init_dt_scan_chosen(char *cmdline)···11631161 if (node < 0)11641162 node = fdt_path_offset(fdt, "/chosen@0");11651163 if (node < 0)11661166- return -ENOENT;11641164+ /* Handle the cmdline config options even if no /chosen node */11651165+ goto handle_cmdline;1167116611681167 chosen_node_offset = node;1169116811701169 early_init_dt_check_for_initrd(node);11711170 early_init_dt_check_for_elfcorehdr(node);11721172-11731173- /* Retrieve command line */11741174- p = of_get_flat_dt_prop(node, "bootargs", &l);11751175- if (p != NULL && l > 0)11761176- strscpy(cmdline, p, min(l, COMMAND_LINE_SIZE));1177117111781172 rng_seed = of_get_flat_dt_prop(node, "rng-seed", &l);11791173 if (rng_seed && l > 0) {···11821184 of_fdt_crc32 = crc32_be(~0, initial_boot_params,11831185 fdt_totalsize(initial_boot_params));11841186 }11871187+11881188+ /* Retrieve command line */11891189+ p = of_get_flat_dt_prop(node, "bootargs", &l);11901190+ if (p != NULL && l > 0)11911191+ strscpy(cmdline, p, min(l, COMMAND_LINE_SIZE));11921192+11931193+handle_cmdline:11941194+ /*11951195+ * CONFIG_CMDLINE is meant to be a default in case nothing else11961196+ * managed to set the command line, unless CONFIG_CMDLINE_FORCE11971197+ * is set in which case we override whatever was found earlier.11981198+ */11991199+#ifdef CONFIG_CMDLINE12001200+#if defined(CONFIG_CMDLINE_EXTEND)12011201+ strlcat(cmdline, " ", COMMAND_LINE_SIZE);12021202+ strlcat(cmdline, CONFIG_CMDLINE, 
COMMAND_LINE_SIZE);12031203+#elif defined(CONFIG_CMDLINE_FORCE)12041204+ strscpy(cmdline, CONFIG_CMDLINE, COMMAND_LINE_SIZE);12051205+#else12061206+ /* No arguments from boot loader, use kernel's cmdl*/12071207+ if (!((char *)cmdline)[0])12081208+ strscpy(cmdline, CONFIG_CMDLINE, COMMAND_LINE_SIZE);12091209+#endif12101210+#endif /* CONFIG_CMDLINE */12111211+12121212+ pr_debug("Command line is: %s\n", (char *)cmdline);1185121311861214 return 0;11871215}···13001276 rc = early_init_dt_scan_chosen(boot_command_line);13011277 if (rc)13021278 pr_warn("No chosen node found, continuing without\n");13031303-13041304- /*13051305- * CONFIG_CMDLINE is meant to be a default in case nothing else13061306- * managed to set the command line, unless CONFIG_CMDLINE_FORCE13071307- * is set in which case we override whatever was found earlier.13081308- */13091309-#ifdef CONFIG_CMDLINE13101310-#if defined(CONFIG_CMDLINE_EXTEND)13111311- strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);13121312- strlcat(boot_command_line, CONFIG_CMDLINE, COMMAND_LINE_SIZE);13131313-#elif defined(CONFIG_CMDLINE_FORCE)13141314- strscpy(boot_command_line, CONFIG_CMDLINE, COMMAND_LINE_SIZE);13151315-#else13161316- /* No arguments from boot loader, use kernel's cmdl */13171317- if (!boot_command_line[0])13181318- strscpy(boot_command_line, CONFIG_CMDLINE, COMMAND_LINE_SIZE);13191319-#endif13201320-#endif /* CONFIG_CMDLINE */13211321-13221322- pr_debug("Command line is: %s\n", boot_command_line);1323127913241280 /* Setup memory, calling early_init_dt_add_memory_arch */13251281 early_init_dt_scan_memory();
+2
drivers/s390/block/dcssblk.c
···865865 unsigned long bytes_done;866866867867 bio = bio_split_to_limits(bio);868868+ if (!bio)869869+ return;868870869871 bytes_done = 0;870872 dev_info = bio->bi_bdev->bd_disk->private_data;
···207207 /* Test the interface */208208 ret = ulpi_write(ulpi, ULPI_SCRATCH, 0xaa);209209 if (ret < 0)210210- return ret;210210+ goto err;211211212212 ret = ulpi_read(ulpi, ULPI_SCRATCH);213213 if (ret < 0)
···144144145145static int __init fotg210_init(void)146146{147147- if (usb_disabled())148148- return -ENODEV;149149-150150- if (IS_ENABLED(CONFIG_USB_FOTG210_HCD))147147+ if (IS_ENABLED(CONFIG_USB_FOTG210_HCD) && !usb_disabled())151148 fotg210_hcd_init();152149 return platform_driver_register(&fotg210_driver);153150}
+2
drivers/usb/fotg210/fotg210-udc.c
···12011201 dev_info(dev, "found and initialized PHY\n");12021202 }1203120312041204+ ret = -ENOMEM;12051205+12041206 for (i = 0; i < FOTG210_MAX_NUM_EP; i++) {12051207 fotg210->ep[i] = kzalloc(sizeof(struct fotg210_ep), GFP_KERNEL);12061208 if (!fotg210->ep[i])
···427427 int ret;428428429429 ret = device_register(&vdpasim_blk_mgmtdev);430430- if (ret)430430+ if (ret) {431431+ put_device(&vdpasim_blk_mgmtdev);431432 return ret;433433+ }432434433435 ret = vdpa_mgmtdev_register(&mgmt_dev);434436 if (ret)
+6-1
drivers/vdpa/vdpa_sim/vdpa_sim_net.c
···6262 if (len < ETH_ALEN + hdr_len)6363 return false;64646565+ if (is_broadcast_ether_addr(vdpasim->buffer + hdr_len) ||6666+ is_multicast_ether_addr(vdpasim->buffer + hdr_len))6767+ return true;6568 if (!strncmp(vdpasim->buffer + hdr_len, vio_config->mac, ETH_ALEN))6669 return true;6770···308305 int ret;309306310307 ret = device_register(&vdpasim_net_mgmtdev);311311- if (ret)308308+ if (ret) {309309+ put_device(&vdpasim_net_mgmtdev);312310 return ret;311311+ }313312314313 ret = vdpa_mgmtdev_register(&mgmt_dev);315314 if (ret)
···1515 struct device_attribute *attr, char *buf)1616{1717 struct virtio_device *dev = dev_to_virtio(_d);1818- return sprintf(buf, "0x%04x\n", dev->id.device);1818+ return sysfs_emit(buf, "0x%04x\n", dev->id.device);1919}2020static DEVICE_ATTR_RO(device);2121···2323 struct device_attribute *attr, char *buf)2424{2525 struct virtio_device *dev = dev_to_virtio(_d);2626- return sprintf(buf, "0x%04x\n", dev->id.vendor);2626+ return sysfs_emit(buf, "0x%04x\n", dev->id.vendor);2727}2828static DEVICE_ATTR_RO(vendor);2929···3131 struct device_attribute *attr, char *buf)3232{3333 struct virtio_device *dev = dev_to_virtio(_d);3434- return sprintf(buf, "0x%08x\n", dev->config->get_status(dev));3434+ return sysfs_emit(buf, "0x%08x\n", dev->config->get_status(dev));3535}3636static DEVICE_ATTR_RO(status);3737···3939 struct device_attribute *attr, char *buf)4040{4141 struct virtio_device *dev = dev_to_virtio(_d);4242- return sprintf(buf, "virtio:d%08Xv%08X\n",4242+ return sysfs_emit(buf, "virtio:d%08Xv%08X\n",4343 dev->id.device, dev->id.vendor);4444}4545static DEVICE_ATTR_RO(modalias);···5454 /* We actually represent this as a bitstring, as it could be5555 * arbitrary length in future. */5656 for (i = 0; i < sizeof(dev->features)*8; i++)5757- len += sprintf(buf+len, "%c",5757+ len += sysfs_emit_at(buf, len, "%c",5858 __virtio_test_bit(dev, i) ? '1' : '0');5959- len += sprintf(buf+len, "\n");5959+ len += sysfs_emit_at(buf, len, "\n");6060 return len;6161}6262static DEVICE_ATTR_RO(features);
+2-2
drivers/virtio/virtio_pci_modern.c
···303303 int err;304304305305 if (index >= vp_modern_get_num_queues(mdev))306306- return ERR_PTR(-ENOENT);306306+ return ERR_PTR(-EINVAL);307307308308 /* Check if queue is either not available or already active. */309309 num = vp_modern_get_queue_size(mdev, index);310310 if (!num || vp_modern_get_queue_enable(mdev, index))311311 return ERR_PTR(-ENOENT);312312313313- if (num & (num - 1)) {313313+ if (!is_power_of_2(num)) {314314 dev_warn(&vp_dev->pci_dev->dev, "bad queue size %u", num);315315 return ERR_PTR(-EINVAL);316316 }
+1-1
drivers/virtio/virtio_ring.c
···10521052 dma_addr_t dma_addr;1053105310541054 /* We assume num is a power of 2. */10551055- if (num & (num - 1)) {10551055+ if (!is_power_of_2(num)) {10561056 dev_warn(&vdev->dev, "Bad virtqueue length %u\n", num);10571057 return -EINVAL;10581058 }
···329329 &map_length, &bioc, mirror_num);330330 if (ret)331331 goto out_counter_dec;332332- BUG_ON(mirror_num != bioc->mirror_num);332332+ /*333333+ * This happens when dev-replace is also running, and the334334+ * mirror_num indicates the dev-replace target.335335+ *336336+ * In this case, we don't need to do anything, as the read337337+ * error just means the replace progress hasn't reached our338338+ * read range, and later replace routine would handle it well.339339+ */340340+ if (mirror_num != bioc->mirror_num)341341+ goto out_counter_dec;333342 }334343335344 sector = bioc->stripes[bioc->mirror_num - 1].physical >> 9;
+4-2
fs/btrfs/defrag.c
···358358 goto out;359359360360 path = btrfs_alloc_path();361361- if (!path)362362- return -ENOMEM;361361+ if (!path) {362362+ ret = -ENOMEM;363363+ goto out;364364+ }363365364366 level = btrfs_header_level(root->node);365367
+8-3
fs/btrfs/disk-io.c
···530530 }531531532532 if (found_level != check->level) {533533+ btrfs_err(fs_info,534534+ "level verify failed on logical %llu mirror %u wanted %u found %u",535535+ eb->start, eb->read_mirror, check->level, found_level);533536 ret = -EIO;534537 goto out;535538 }···33843381/*33853382 * Do various sanity and dependency checks of different features.33863383 *33843384+ * @is_rw_mount: If the mount is read-write.33853385+ *33873386 * This is the place for less strict checks (like for subpage or artificial33883387 * feature dependencies).33893388 *···33963391 * (space cache related) can modify on-disk format like free space tree and33973392 * screw up certain feature dependencies.33983393 */33993399-int btrfs_check_features(struct btrfs_fs_info *fs_info, struct super_block *sb)33943394+int btrfs_check_features(struct btrfs_fs_info *fs_info, bool is_rw_mount)34003395{34013396 struct btrfs_super_block *disk_super = fs_info->super_copy;34023397 u64 incompat = btrfs_super_incompat_flags(disk_super);···34353430 if (btrfs_super_nodesize(disk_super) > PAGE_SIZE)34363431 incompat |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA;3437343234383438- if (compat_ro_unsupp && !sb_rdonly(sb)) {34333433+ if (compat_ro_unsupp && is_rw_mount) {34393434 btrfs_err(fs_info,34403435 "cannot mount read-write because of unknown compat_ro features (0x%llx)",34413436 compat_ro);···36383633 goto fail_alloc;36393634 }3640363536413641- ret = btrfs_check_features(fs_info, sb);36363636+ ret = btrfs_check_features(fs_info, !sb_rdonly(sb));36423637 if (ret < 0) {36433638 err = ret;36443639 goto fail_alloc;
···17131713 BUG();17141714 if (ret && insert_reserved)17151715 btrfs_pin_extent(trans, node->bytenr, node->num_bytes, 1);17161716+ if (ret < 0)17171717+ btrfs_err(trans->fs_info,17181718+"failed to run delayed ref for logical %llu num_bytes %llu type %u action %u ref_mod %d: %d",17191719+ node->bytenr, node->num_bytes, node->type,17201720+ node->action, node->ref_mod, ret);17161721 return ret;17171722}17181723···19591954 if (ret) {19601955 unselect_delayed_ref_head(delayed_refs, locked_ref);19611956 btrfs_put_delayed_ref(ref);19621962- btrfs_debug(fs_info, "run_one_delayed_ref returned %d",19631963- ret);19641957 return ret;19651958 }19661959
+25-5
fs/btrfs/extent_io.c
···104104 btrfs_bio_end_io_t end_io_func;105105106106 /*107107+ * This is for metadata read, to provide the extra needed verification108108+ * info. This has to be provided for submit_one_bio(), as109109+ * submit_one_bio() can submit a bio if it ends at stripe boundary. If110110+ * no such parent_check is provided, the metadata can hit false alert at111111+ * endio time.112112+ */113113+ struct btrfs_tree_parent_check *parent_check;114114+115115+ /*107116 * Tell writepage not to lock the state bits for this range, it still108117 * does the unlocking.109118 */···142133143134 btrfs_bio(bio)->file_offset = page_offset(bv->bv_page) + bv->bv_offset;144135145145- if (!is_data_inode(&inode->vfs_inode))136136+ if (!is_data_inode(&inode->vfs_inode)) {137137+ if (btrfs_op(bio) != BTRFS_MAP_WRITE) {138138+ /*139139+ * For metadata read, we should have the parent_check,140140+ * and copy it to bbio for metadata verification.141141+ */142142+ ASSERT(bio_ctrl->parent_check);143143+ memcpy(&btrfs_bio(bio)->parent_check,144144+ bio_ctrl->parent_check,145145+ sizeof(struct btrfs_tree_parent_check));146146+ }146147 btrfs_submit_metadata_bio(inode, bio, mirror_num);147147- else if (btrfs_op(bio) == BTRFS_MAP_WRITE)148148+ } else if (btrfs_op(bio) == BTRFS_MAP_WRITE) {148149 btrfs_submit_data_write_bio(inode, bio, mirror_num);149149- else150150+ } else {150151 btrfs_submit_data_read_bio(inode, bio, mirror_num,151152 bio_ctrl->compress_type);153153+ }152154153155 /* The bio is owned by the end_io handler now */154156 bio_ctrl->bio = NULL;···48494829 struct extent_state *cached_state = NULL;48504830 struct btrfs_bio_ctrl bio_ctrl = {48514831 .mirror_num = mirror_num,48324832+ .parent_check = check,48524833 };48534834 int ret = 0;48544835···48994878 */49004879 atomic_dec(&eb->io_pages);49014880 }49024902- memcpy(&btrfs_bio(bio_ctrl.bio)->parent_check, check, sizeof(*check));49034881 submit_one_bio(&bio_ctrl);49044882 if (ret || wait != WAIT_COMPLETE) {49054883 
free_extent_state(cached_state);···49254905 unsigned long num_reads = 0;49264906 struct btrfs_bio_ctrl bio_ctrl = {49274907 .mirror_num = mirror_num,49084908+ .parent_check = check,49284909 };4929491049304911 if (test_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags))···50174996 }50184997 }5019499850205020- memcpy(&btrfs_bio(bio_ctrl.bio)->parent_check, check, sizeof(*check));50214999 submit_one_bio(&bio_ctrl);5022500050235001 if (ret || wait != WAIT_COMPLETE)
···70927092 * Other members are not utilized for inline extents.70937093 */70947094 ASSERT(em->block_start == EXTENT_MAP_INLINE);70957095- ASSERT(em->len = fs_info->sectorsize);70957095+ ASSERT(em->len == fs_info->sectorsize);7096709670977097 ret = read_inline_extent(inode, path, page);70987098 if (ret < 0)···9377937793789378 if (flags & RENAME_WHITEOUT) {93799379 whiteout_args.inode = new_whiteout_inode(mnt_userns, old_dir);93809380- if (!whiteout_args.inode)93819381- return -ENOMEM;93809380+ if (!whiteout_args.inode) {93819381+ ret = -ENOMEM;93829382+ goto out_fscrypt_names;93839383+ }93829384 ret = btrfs_new_inode_prepare(&whiteout_args, &trans_num_items);93839385 if (ret)93849386 goto out_whiteout_inode;
+1
fs/btrfs/qgroup.c
···27872787 * current root. It's safe inside commit_transaction().27882788 */27892789 ctx.trans = trans;27902790+ ctx.time_seq = BTRFS_SEQ_LAST;27902791 ret = btrfs_find_all_roots(&ctx, false);27912792 if (ret < 0)27922793 goto cleanup;
···74597459 * not fail, but if it does, it's not serious, just bail out and74607460 * mark the log for a full commit.74617461 */74627462- if (WARN_ON_ONCE(ret < 0))74627462+ if (WARN_ON_ONCE(ret < 0)) {74637463+ fscrypt_free_filename(&fname);74637464 goto out;74657465+ }74667466+74647467 log_pinned = true;7465746874667469 path = btrfs_alloc_path();
+1-1
fs/ceph/caps.c
···2913291329142914 while (true) {29152915 flags &= CEPH_FILE_MODE_MASK;29162916- if (atomic_read(&fi->num_locks))29162916+ if (vfs_inode_has_locks(inode))29172917 flags |= CHECK_FILELOCK;29182918 _got = 0;29192919 ret = try_get_cap_refs(inode, need, want, endoff,
+18-6
fs/ceph/locks.c
···32323333static void ceph_fl_copy_lock(struct file_lock *dst, struct file_lock *src)3434{3535- struct ceph_file_info *fi = dst->fl_file->private_data;3635 struct inode *inode = file_inode(dst->fl_file);3736 atomic_inc(&ceph_inode(inode)->i_filelock_ref);3838- atomic_inc(&fi->num_locks);3737+ dst->fl_u.ceph.inode = igrab(inode);3938}40394040+/*4141+ * Do not use the 'fl->fl_file' in release function, which4242+ * is possibly already released by another thread.4343+ */4144static void ceph_fl_release_lock(struct file_lock *fl)4245{4343- struct ceph_file_info *fi = fl->fl_file->private_data;4444- struct inode *inode = file_inode(fl->fl_file);4545- struct ceph_inode_info *ci = ceph_inode(inode);4646- atomic_dec(&fi->num_locks);4646+ struct inode *inode = fl->fl_u.ceph.inode;4747+ struct ceph_inode_info *ci;4848+4949+ /*5050+ * If inode is NULL it should be a request file_lock,5151+ * nothing we can do.5252+ */5353+ if (!inode)5454+ return;5555+5656+ ci = ceph_inode(inode);4757 if (atomic_dec_and_test(&ci->i_filelock_ref)) {4858 /* clear error when all locks are released */4959 spin_lock(&ci->i_ceph_lock);5060 ci->i_ceph_flags &= ~CEPH_I_ERROR_FILELOCK;5161 spin_unlock(&ci->i_ceph_lock);5262 }6363+ fl->fl_u.ceph.inode = NULL;6464+ iput(inode);5365}54665567static const struct file_lock_operations ceph_fl_lock_ops = {
···292292 continue;293293 }294294 kref_get(&iface->refcount);295295+ break;295296 }296297297297- if (!list_entry_is_head(iface, &ses->iface_list, iface_head)) {298298+ if (list_entry_is_head(iface, &ses->iface_list, iface_head)) {298299 rc = 1;299300 iface = NULL;300301 cifs_dbg(FYI, "unable to find a suitable iface\n");
+4-8
fs/cifs/smb2ops.c
···530530 p = buf;531531532532 spin_lock(&ses->iface_lock);533533- ses->iface_count = 0;534533 /*535534 * Go through iface_list and do kref_put to remove536535 * any unused ifaces. ifaces in use will be removed···539540 iface_head) {540541 iface->is_active = 0;541542 kref_put(&iface->refcount, release_iface);543543+ ses->iface_count--;542544 }543545 spin_unlock(&ses->iface_lock);544546···618618 /* just get a ref so that it doesn't get picked/freed */619619 iface->is_active = 1;620620 kref_get(&iface->refcount);621621+ ses->iface_count++;621622 spin_unlock(&ses->iface_lock);622623 goto next_iface;623624 } else if (ret < 0) {···4489448844904489 /* copy pages form the old */44914490 for (j = 0; j < npages; j++) {44924492- char *dst, *src;44934491 unsigned int offset, len;4494449244954493 rqst_page_get_length(new, j, &len, &offset);4496449444974497- dst = kmap_local_page(new->rq_pages[j]) + offset;44984498- src = kmap_local_page(old->rq_pages[j]) + offset;44994499-45004500- memcpy(dst, src, len);45014501- kunmap(new->rq_pages[j]);45024502- kunmap(old->rq_pages[j]);44954495+ memcpy_page(new->rq_pages[j], offset,44964496+ old->rq_pages[j], offset, len);45034497 }45044498 }45054499
+7-4
fs/cifs/smb2pdu.c
···541541assemble_neg_contexts(struct smb2_negotiate_req *req,542542 struct TCP_Server_Info *server, unsigned int *total_len)543543{544544- char *pneg_ctxt;545545- char *hostname = NULL;546544 unsigned int ctxt_len, neg_context_count;545545+ struct TCP_Server_Info *pserver;546546+ char *pneg_ctxt;547547+ char *hostname;547548548549 if (*total_len > 200) {549550 /* In case length corrupted don't want to overrun smb buffer */···575574 * secondary channels don't have the hostname field populated576575 * use the hostname field in the primary channel instead577576 */578578- hostname = CIFS_SERVER_IS_CHAN(server) ?579579- server->primary_server->hostname : server->hostname;577577+ pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server;578578+ cifs_server_lock(pserver);579579+ hostname = pserver->hostname;580580 if (hostname && (hostname[0] != 0)) {581581 ctxt_len = build_netname_ctxt((struct smb2_netname_neg_context *)pneg_ctxt,582582 hostname);···586584 neg_context_count = 3;587585 } else588586 neg_context_count = 2;587587+ cifs_server_unlock(pserver);589588590589 build_posix_ctxt((struct smb2_posix_neg_context *)pneg_ctxt);591590 *total_len += sizeof(struct smb2_posix_neg_context);
+1-1
fs/f2fs/data.c
···21832183 sector_t last_block_in_file;21842184 const unsigned blocksize = blks_to_bytes(inode, 1);21852185 struct decompress_io_ctx *dic = NULL;21862186- struct extent_info ei = {0, };21862186+ struct extent_info ei = {};21872187 bool from_dnode = true;21882188 int i;21892189 int ret = 0;
+18-16
fs/f2fs/extent_cache.c
···546546 struct extent_node *en;547547 bool ret = false;548548549549- f2fs_bug_on(sbi, !et);549549+ if (!et)550550+ return false;550551551552 trace_f2fs_lookup_extent_tree_start(inode, pgofs, type);552553···882881}883882884883/* This returns a new age and allocated blocks in ei */885885-static int __get_new_block_age(struct inode *inode, struct extent_info *ei)884884+static int __get_new_block_age(struct inode *inode, struct extent_info *ei,885885+ block_t blkaddr)886886{887887 struct f2fs_sb_info *sbi = F2FS_I_SB(inode);888888 loff_t f_size = i_size_read(inode);889889 unsigned long long cur_blocks =890890 atomic64_read(&sbi->allocated_data_blocks);891891+ struct extent_info tei = *ei; /* only fofs and len are valid */891892892893 /*893894 * When I/O is not aligned to a PAGE_SIZE, update will happen to the last···897894 * block here.898895 */899896 if ((f_size >> PAGE_SHIFT) == ei->fofs && f_size & (PAGE_SIZE - 1) &&900900- ei->blk == NEW_ADDR)897897+ blkaddr == NEW_ADDR)901898 return -EINVAL;902899903903- if (__lookup_extent_tree(inode, ei->fofs, ei, EX_BLOCK_AGE)) {900900+ if (__lookup_extent_tree(inode, ei->fofs, &tei, EX_BLOCK_AGE)) {904901 unsigned long long cur_age;905902906906- if (cur_blocks >= ei->last_blocks)907907- cur_age = cur_blocks - ei->last_blocks;903903+ if (cur_blocks >= tei.last_blocks)904904+ cur_age = cur_blocks - tei.last_blocks;908905 else909906 /* allocated_data_blocks overflow */910910- cur_age = ULLONG_MAX - ei->last_blocks + cur_blocks;907907+ cur_age = ULLONG_MAX - tei.last_blocks + cur_blocks;911908912912- if (ei->age)913913- ei->age = __calculate_block_age(cur_age, ei->age);909909+ if (tei.age)910910+ ei->age = __calculate_block_age(cur_age, tei.age);914911 else915912 ei->age = cur_age;916913 ei->last_blocks = cur_blocks;···918915 return 0;919916 }920917921921- f2fs_bug_on(sbi, ei->blk == NULL_ADDR);918918+ f2fs_bug_on(sbi, blkaddr == NULL_ADDR);922919923920 /* the data block was allocated for the first time */924924- if (ei->blk == 
NEW_ADDR)921921+ if (blkaddr == NEW_ADDR)925922 goto out;926923927927- if (__is_valid_data_blkaddr(ei->blk) &&928928- !f2fs_is_valid_blkaddr(sbi, ei->blk, DATA_GENERIC_ENHANCE)) {924924+ if (__is_valid_data_blkaddr(blkaddr) &&925925+ !f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE)) {929926 f2fs_bug_on(sbi, 1);930927 return -EINVAL;931928 }···941938942939static void __update_extent_cache(struct dnode_of_data *dn, enum extent_type type)943940{944944- struct extent_info ei;941941+ struct extent_info ei = {};945942946943 if (!__may_extent_tree(dn->inode, type))947944 return;···956953 else957954 ei.blk = dn->data_blkaddr;958955 } else if (type == EX_BLOCK_AGE) {959959- ei.blk = dn->data_blkaddr;960960- if (__get_new_block_age(dn->inode, &ei))956956+ if (__get_new_block_age(dn->inode, &ei, dn->data_blkaddr))961957 return;962958 }963959 __update_extent_tree_range(dn->inode, &ei, type);
+1-1
fs/f2fs/file.c
···25592559 struct f2fs_map_blocks map = { .m_next_extent = NULL,25602560 .m_seg_type = NO_CHECK_TYPE,25612561 .m_may_create = false };25622562- struct extent_info ei = {0, };25622562+ struct extent_info ei = {};25632563 pgoff_t pg_start, pg_end, next_pgofs;25642564 unsigned int blk_per_seg = sbi->blocks_per_seg;25652565 unsigned int total = 0, sec_num;
+5-8
fs/f2fs/segment.c
···663663 if (IS_ERR(fcc->f2fs_issue_flush)) {664664 int err = PTR_ERR(fcc->f2fs_issue_flush);665665666666- kfree(fcc);667667- SM_I(sbi)->fcc_info = NULL;666666+ fcc->f2fs_issue_flush = NULL;668667 return err;669668 }670669···31603161static int __get_age_segment_type(struct inode *inode, pgoff_t pgofs)31613162{31623163 struct f2fs_sb_info *sbi = F2FS_I_SB(inode);31633163- struct extent_info ei;31643164+ struct extent_info ei = {};3164316531653166 if (f2fs_lookup_age_extent_cache(inode, pgofs, &ei)) {31663167 if (!ei.age)···5137513851385139 init_f2fs_rwsem(&sm_info->curseg_lock);5139514051405140- if (!f2fs_readonly(sbi->sb)) {51415141- err = f2fs_create_flush_cmd_control(sbi);51425142- if (err)51435143- return err;51445144- }51415141+ err = f2fs_create_flush_cmd_control(sbi);51425142+ if (err)51435143+ return err;5145514451465145 err = create_discard_cmd_control(sbi);51475146 if (err)
···29572957 const struct cred *cred)29582958{29592959 const struct task_struct *parent;29602960+ const struct cred *pcred;29602961 u64 ret;2961296229622963 rcu_read_lock();29632964 for (;;) {29642965 parent = rcu_dereference(task->real_parent);29652965- if (parent == task || cred_fscmp(parent->cred, cred) != 0)29662966+ pcred = rcu_dereference(parent->cred);29672967+ if (parent == task || cred_fscmp(pcred, cred) != 0)29662968 break;29672969 task = parent;29682970 }···30253023 * but do it without locking.30263024 */30273025 struct nfs_inode *nfsi = NFS_I(inode);30263026+ u64 login_time = nfs_access_login_time(current, cred);30283027 struct nfs_access_entry *cache;30293028 int err = -ECHILD;30303029 struct list_head *lh;···30393036 access_cmp(cred, cache) != 0)30403037 cache = NULL;30413038 if (cache == NULL)30393039+ goto out;30403040+ if ((s64)(login_time - cache->timestamp) > 0)30423041 goto out;30433042 if (nfs_check_cache_invalid(inode, NFS_INO_INVALID_ACCESS))30443043 goto out;
+8
fs/nfs/filelayout/filelayout.c
···783783 return &fl->generic_hdr;784784}785785786786+static bool787787+filelayout_lseg_is_striped(const struct nfs4_filelayout_segment *flseg)788788+{789789+ return flseg->num_fh > 1;790790+}791791+786792/*787793 * filelayout_pg_test(). Called by nfs_can_coalesce_requests()788794 *···809803 size = pnfs_generic_pg_test(pgio, prev, req);810804 if (!size)811805 return 0;806806+ else if (!filelayout_lseg_is_striped(FILELAYOUT_LSEG(pgio->pg_lseg)))807807+ return size;812808813809 /* see if req and prev are in the same stripe */814810 if (prev) {
+11
fs/nfsd/nfs4xdr.c
···36293629 case nfserr_noent:36303630 xdr_truncate_encode(xdr, start_offset);36313631 goto skip_entry;36323632+ case nfserr_jukebox:36333633+ /*36343634+ * The pseudoroot should only display dentries that lead to36353635+ * exports. If we get EJUKEBOX here, then we can't tell whether36363636+ * this entry should be included. Just fail the whole READDIR36373637+ * with NFS4ERR_DELAY in that case, and hope that the situation36383638+ * will resolve itself by the client's next attempt.36393639+ */36403640+ if (cd->rd_fhp->fh_export->ex_flags & NFSEXP_V4ROOT)36413641+ goto fail;36423642+ fallthrough;36323643 default:36333644 /*36343645 * If the client requested the RDATTR_ERROR attribute,
···891891#define PRINTK_INDEX892892#endif893893894894+/*895895+ * Discard .note.GNU-stack, which is emitted as PROGBITS by the compiler.896896+ * Otherwise, the type of .notes section would become PROGBITS instead of NOTES.897897+ */894898#define NOTES \899899+ /DISCARD/ : { *(.note.GNU-stack) } \895900 .notes : AT(ADDR(.notes) - LOAD_OFFSET) { \896901 BOUNDED_SECTION_BY(.note.*, _notes) \897902 } NOTES_HEADERS \
···475475extern void bio_set_pages_dirty(struct bio *bio);476476extern void bio_check_pages_dirty(struct bio *bio);477477478478+extern void bio_copy_data_iter(struct bio *dst, struct bvec_iter *dst_iter,479479+ struct bio *src, struct bvec_iter *src_iter);478480extern void bio_copy_data(struct bio *dst, struct bio *src);479481extern void bio_free_pages(struct bio *bio);480482void guard_bio_eod(struct bio *bio);
+1
include/linux/blkdev.h
···13951395 void (*swap_slot_free_notify) (struct block_device *, unsigned long);13961396 int (*report_zones)(struct gendisk *, sector_t sector,13971397 unsigned int nr_zones, report_zones_cb cb, void *data);13981398+ char *(*devnode)(struct gendisk *disk, umode_t *mode);13981399 /* returns the length of the identifier or a negative errno: */13991400 int (*get_unique_id)(struct gendisk *disk, u8 id[16],14001401 enum blk_unique_id id_type);
+2-2
include/linux/dsa/tag_qca.h
···4545 QCA_HDR_MGMT_COMMAND_LEN + \4646 QCA_HDR_MGMT_DATA1_LEN)47474848-#define QCA_HDR_MGMT_DATA2_LEN 12 /* Other 12 byte for the mdio data */4949-#define QCA_HDR_MGMT_PADDING_LEN 34 /* Padding to reach the min Ethernet packet */4848+#define QCA_HDR_MGMT_DATA2_LEN 28 /* Other 28 byte for the mdio data */4949+#define QCA_HDR_MGMT_PADDING_LEN 18 /* Padding to reach the min Ethernet packet */50505151#define QCA_HDR_MGMT_PKT_LEN (QCA_HDR_MGMT_HEADER_LEN + \5252 QCA_HDR_LEN + \
···197197};198198199199/* Max range where every element is added/deleted in one step */200200-#define IPSET_MAX_RANGE (1<<20)200200+#define IPSET_MAX_RANGE (1<<14)201201202202/* The max revision number supported by any set type + 1 */203203#define IPSET_REVISION_MAX 9
···826826 * whether to advertise lower-speed modes for that interface. It is827827 * assumed that if a rate matching mode is supported on an interface,828828 * then that interface's rate can be adapted to all slower link speeds829829- * supported by the phy. If iface is %PHY_INTERFACE_MODE_NA, and the phy830830- * supports any kind of rate matching for any interface, then it must831831- * return that rate matching mode (preferring %RATE_MATCH_PAUSE to832832- * %RATE_MATCH_CRS). If the interface is not supported, this should829829+ * supported by the phy. If the interface is not supported, this should833830 * return %RATE_MATCH_NONE.834831 */835832 int (*get_rate_matching)(struct phy_device *phydev,
+197
include/linux/pktcdvd.h
···11+/*22+ * Copyright (C) 2000 Jens Axboe <axboe@suse.de>33+ * Copyright (C) 2001-2004 Peter Osterlund <petero2@telia.com>44+ *55+ * May be copied or modified under the terms of the GNU General Public66+ * License. See linux/COPYING for more information.77+ *88+ * Packet writing layer for ATAPI and SCSI CD-R, CD-RW, DVD-R, and99+ * DVD-RW devices.1010+ *1111+ */1212+#ifndef __PKTCDVD_H1313+#define __PKTCDVD_H1414+1515+#include <linux/blkdev.h>1616+#include <linux/completion.h>1717+#include <linux/cdrom.h>1818+#include <linux/kobject.h>1919+#include <linux/sysfs.h>2020+#include <linux/mempool.h>2121+#include <uapi/linux/pktcdvd.h>2222+2323+/* default bio write queue congestion marks */2424+#define PKT_WRITE_CONGESTION_ON 100002525+#define PKT_WRITE_CONGESTION_OFF 90002626+2727+2828+struct packet_settings2929+{3030+ __u32 size; /* packet size in (512 byte) sectors */3131+ __u8 fp; /* fixed packets */3232+ __u8 link_loss; /* the rest is specified3333+ * as per Mt Fuji */3434+ __u8 write_type;3535+ __u8 track_mode;3636+ __u8 block_mode;3737+};3838+3939+/*4040+ * Very crude stats for now4141+ */4242+struct packet_stats4343+{4444+ unsigned long pkt_started;4545+ unsigned long pkt_ended;4646+ unsigned long secs_w;4747+ unsigned long secs_rg;4848+ unsigned long secs_r;4949+};5050+5151+struct packet_cdrw5252+{5353+ struct list_head pkt_free_list;5454+ struct list_head pkt_active_list;5555+ spinlock_t active_list_lock; /* Serialize access to pkt_active_list */5656+ struct task_struct *thread;5757+ atomic_t pending_bios;5858+};5959+6060+/*6161+ * Switch to high speed reading after reading this many kilobytes6262+ * with no interspersed writes.6363+ */6464+#define HI_SPEED_SWITCH 5126565+6666+struct packet_iosched6767+{6868+ atomic_t attention; /* Set to non-zero when queue processing is needed */6969+ int writing; /* Non-zero when writing, zero when reading */7070+ spinlock_t lock; /* Protecting read/write queue manipulations */7171+ struct bio_list read_queue;7272+ struct 
+	bio_list		write_queue;
+	sector_t		last_write;	/* The sector where the last write ended */
+	int			successive_reads;
+};
+
+/*
+ * 32 buffers of 2048 bytes
+ */
+#if (PAGE_SIZE % CD_FRAMESIZE) != 0
+#error "PAGE_SIZE must be a multiple of CD_FRAMESIZE"
+#endif
+#define PACKET_MAX_SIZE		128
+#define FRAMES_PER_PAGE		(PAGE_SIZE / CD_FRAMESIZE)
+#define PACKET_MAX_SECTORS	(PACKET_MAX_SIZE * CD_FRAMESIZE >> 9)
+
+enum packet_data_state {
+	PACKET_IDLE_STATE,		/* Not used at the moment */
+	PACKET_WAITING_STATE,		/* Waiting for more bios to arrive, so */
+					/* we don't have to do as much */
+					/* data gathering */
+	PACKET_READ_WAIT_STATE,		/* Waiting for reads to fill in holes */
+	PACKET_WRITE_WAIT_STATE,	/* Waiting for the write to complete */
+	PACKET_RECOVERY_STATE,		/* Recover after read/write errors */
+	PACKET_FINISHED_STATE,		/* After write has finished */
+
+	PACKET_NUM_STATES		/* Number of possible states */
+};
+
+/*
+ * Information needed for writing a single packet
+ */
+struct pktcdvd_device;
+
+struct packet_data
+{
+	struct list_head	list;
+
+	spinlock_t		lock;		/* Lock protecting state transitions and */
+						/* orig_bios list */
+
+	struct bio_list		orig_bios;	/* Original bios passed to pkt_make_request */
+						/* that will be handled by this packet */
+	int			write_size;	/* Total size of all bios in the orig_bios */
+						/* list, measured in number of frames */
+
+	struct bio		*w_bio;		/* The bio we will send to the real CD */
+						/* device once we have all data for the */
+						/* packet we are going to write */
+	sector_t		sector;		/* First sector in this packet */
+	int			frames;		/* Number of frames in this packet */
+
+	enum packet_data_state	state;		/* Current state */
+	atomic_t		run_sm;		/* Incremented whenever the state */
+						/* machine needs to be run */
+	long			sleep_time;	/* Set this to non-zero to make the state */
+						/* machine run after this many jiffies. */
+
+	atomic_t		io_wait;	/* Number of pending IO operations */
+	atomic_t		io_errors;	/* Number of read/write errors during IO */
+
+	struct bio		*r_bios[PACKET_MAX_SIZE]; /* bios to use during data gathering */
+	struct page		*pages[PACKET_MAX_SIZE / FRAMES_PER_PAGE];
+
+	int			cache_valid;	/* If non-zero, the data for the zone defined */
+						/* by the sector variable is completely cached */
+						/* in the pages[] vector. */
+
+	int			id;		/* ID number for debugging */
+	struct pktcdvd_device	*pd;
+};
+
+struct pkt_rb_node {
+	struct rb_node		rb_node;
+	struct bio		*bio;
+};
+
+struct packet_stacked_data
+{
+	struct bio		*bio;		/* Original read request bio */
+	struct pktcdvd_device	*pd;
+};
+#define PSD_POOL_SIZE		64
+
+struct pktcdvd_device
+{
+	struct block_device	*bdev;		/* dev attached */
+	dev_t			pkt_dev;	/* our dev */
+	char			name[20];
+	struct packet_settings	settings;
+	struct packet_stats	stats;
+	int			refcnt;		/* Open count */
+	int			write_speed;	/* current write speed, kB/s */
+	int			read_speed;	/* current read speed, kB/s */
+	unsigned long		offset;		/* start offset */
+	__u8			mode_offset;	/* 0 / 8 */
+	__u8			type;
+	unsigned long		flags;
+	__u16			mmc3_profile;
+	__u32			nwa;		/* next writable address */
+	__u32			lra;		/* last recorded address */
+	struct packet_cdrw	cdrw;
+	wait_queue_head_t	wqueue;
+
+	spinlock_t		lock;		/* Serialize access to bio_queue */
+	struct rb_root		bio_queue;	/* Work queue of bios we need to handle */
+	int			bio_queue_size;	/* Number of nodes in bio_queue */
+	bool			congested;	/* Someone is waiting for bio_queue_size
+						 * to drop. */
+	sector_t		current_sector;	/* Keep track of where the elevator is */
+	atomic_t		scan_queue;	/* Set to non-zero when pkt_handle_queue */
+						/* needs to be run. */
+	mempool_t		rb_pool;	/* mempool for pkt_rb_node allocations */
+
+	struct packet_iosched	iosched;
+	struct gendisk		*disk;
+
+	int			write_congestion_off;
+	int			write_congestion_on;
+
+	struct device		*dev;		/* sysfs pktcdvd[0-7] dev */
+
+	struct dentry		*dfs_d_root;	/* debugfs: devname directory */
+	struct dentry		*dfs_f_info;	/* debugfs: info file */
+};
+
+#endif /* __PKTCDVD_H */
+5
include/linux/sunrpc/rpc_pipe_fs.h
···
 				char __user *, size_t);
 extern int rpc_queue_upcall(struct rpc_pipe *, struct rpc_pipe_msg *);

+/* returns true if the msg is in-flight, i.e., already eaten by the peer */
+static inline bool rpc_msg_is_inflight(const struct rpc_pipe_msg *msg) {
+	return (msg->copied != 0 && list_empty(&msg->list));
+}
+
 struct rpc_clnt;
 extern struct dentry *rpc_create_client_dir(struct dentry *, const char *, struct rpc_clnt *);
 extern int rpc_remove_client_dir(struct rpc_clnt *);
+4
include/net/inet_hashtables.h
···
 	struct hlist_node	node;
 	/* List of sockets hashed to this bucket */
 	struct hlist_head	owners;
+	/* bhash has twsk in owners, but bhash2 has twsk in
+	 * deathrow not to add a member in struct sock_common.
+	 */
+	struct hlist_head	deathrow;
 };

 static inline struct net *ib_net(const struct inet_bind_bucket *ib)
include/uapi/linux/atmbr2684.h
···
  */
 #define BR2684_ENCAPS_VC	(0)	/* VC-mux */
 #define BR2684_ENCAPS_LLC	(1)
-#define BR2684_ENCAPS_AUTODETECT (2)	/* Unsuported */
+#define BR2684_ENCAPS_AUTODETECT (2)	/* Unsupported */

 /*
  * Is this VC bridged or routed?
+8
include/uapi/linux/io_uring.h
···

 #include <linux/fs.h>
 #include <linux/types.h>
+/*
+ * this file is shared with liburing and that has to autodetect
+ * if linux/time_types.h is available or not, it can
+ * define UAPI_LINUX_IO_URING_H_SKIP_LINUX_TIME_TYPES_H
+ * if linux/time_types.h is not available
+ */
+#ifndef UAPI_LINUX_IO_URING_H_SKIP_LINUX_TIME_TYPES_H
 #include <linux/time_types.h>
+#endif

 #ifdef __cplusplus
 extern "C" {
include/uapi/linux/pktcdvd.h
···
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Copyright (C) 2000 Jens Axboe <axboe@suse.de>
+ * Copyright (C) 2001-2004 Peter Osterlund <petero2@telia.com>
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License.  See linux/COPYING for more information.
+ *
+ * Packet writing layer for ATAPI and SCSI CD-R, CD-RW, DVD-R, and
+ * DVD-RW devices.
+ *
+ */
+#ifndef _UAPI__PKTCDVD_H
+#define _UAPI__PKTCDVD_H
+
+#include <linux/types.h>
+
+/*
+ * 1 for normal debug messages, 2 is very verbose. 0 to turn it off.
+ */
+#define PACKET_DEBUG		1
+
+#define MAX_WRITERS		8
+
+#define PKT_RB_POOL_SIZE	512
+
+/*
+ * How long we should hold a non-full packet before starting data gathering.
+ */
+#define PACKET_WAIT_TIME	(HZ * 5 / 1000)
+
+/*
+ * use drive write caching -- we need deferred error handling to be
+ * able to successfully recover with this option (drive will return good
+ * status as soon as the cdb is validated).
+ */
+#if defined(CONFIG_CDROM_PKTCDVD_WCACHE)
+#define USE_WCACHING		1
+#else
+#define USE_WCACHING		0
+#endif
+
+/*
+ * No user-servicable parts beyond this point ->
+ */
+
+/*
+ * device types
+ */
+#define PACKET_CDR		1
+#define PACKET_CDRW		2
+#define PACKET_DVDR		3
+#define PACKET_DVDRW		4
+
+/*
+ * flags
+ */
+#define PACKET_WRITABLE		1	/* pd is writable */
+#define PACKET_NWA_VALID	2	/* next writable address valid */
+#define PACKET_LRA_VALID	3	/* last recorded address valid */
+#define PACKET_MERGE_SEGS	4	/* perform segment merging to keep */
+					/* underlying cdrom device happy */
+
+/*
+ * Disc status -- from READ_DISC_INFO
+ */
+#define PACKET_DISC_EMPTY	0
+#define PACKET_DISC_INCOMPLETE	1
+#define PACKET_DISC_COMPLETE	2
+#define PACKET_DISC_OTHER	3
+
+/*
+ * write type, and corresponding data block type
+ */
+#define PACKET_MODE1		1
+#define PACKET_MODE2		2
+#define PACKET_BLOCK_MODE1	8
+#define PACKET_BLOCK_MODE2	10
+
+/*
+ * Last session/border status
+ */
+#define PACKET_SESSION_EMPTY		0
+#define PACKET_SESSION_INCOMPLETE	1
+#define PACKET_SESSION_RESERVED		2
+#define PACKET_SESSION_COMPLETE		3
+
+#define PACKET_MCN		"4a656e734178626f65323030300000"
+
+#undef PACKET_USE_LS
+
+#define PKT_CTRL_CMD_SETUP	0
+#define PKT_CTRL_CMD_TEARDOWN	1
+#define PKT_CTRL_CMD_STATUS	2
+
+struct pkt_ctrl_command {
+	__u32 command;		/* in: Setup, teardown, status */
+	__u32 dev_index;	/* in/out: Device index */
+	__u32 dev;		/* in/out: Device nr for cdrw device */
+	__u32 pkt_dev;		/* in/out: Device nr for packet device */
+	__u32 num_devices;	/* out: Largest device index + 1 */
+	__u32 padding;		/* Not used */
+};
+
+/*
+ * packet ioctls
+ */
+#define PACKET_IOCTL_MAGIC	('X')
+#define PACKET_CTRL_CMD		_IOWR(PACKET_IOCTL_MAGIC, 1, struct pkt_ctrl_command)
+
+
+#endif /* _UAPI__PKTCDVD_H */
+1-3
include/uapi/linux/vdpa.h
···
 	VDPA_ATTR_DEV_VENDOR_ATTR_NAME,		/* string */
 	VDPA_ATTR_DEV_VENDOR_ATTR_VALUE,	/* u64 */

+	/* virtio features that are provisioned to the vDPA device */
 	VDPA_ATTR_DEV_FEATURES,			/* u64 */
-
-	/* virtio features that are supported by the vDPA device */
-	VDPA_ATTR_VDPA_DEV_SUPPORTED_FEATURES,	/* u64 */

 	/* new attributes must be added above here */
 	VDPA_ATTR_MAX,
+4-5
io_uring/cancel.c
···

 		ret = __io_sync_cancel(current->io_uring, &cd, sc.fd);

+		mutex_unlock(&ctx->uring_lock);
 		if (ret != -EALREADY)
 			break;

-		mutex_unlock(&ctx->uring_lock);
 		ret = io_run_task_work_sig(ctx);
-		if (ret < 0) {
-			mutex_lock(&ctx->uring_lock);
+		if (ret < 0)
 			break;
-		}
 		ret = schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS);
-		mutex_lock(&ctx->uring_lock);
 		if (!ret) {
 			ret = -ETIME;
 			break;
 		}
+		mutex_lock(&ctx->uring_lock);
 	} while (1);

 	finish_wait(&ctx->cq_wait, &wait);
+	mutex_lock(&ctx->uring_lock);

 	if (ret == -ENOENT || ret > 0)
 		ret = 0;
kernel/bpf/task_iter.c
···
 	 */
 	struct bpf_iter_seq_task_common common;
 	struct task_struct *task;
+	struct mm_struct *mm;
 	struct vm_area_struct *vma;
 	u32 tid;
 	unsigned long prev_vm_start;
···
 	enum bpf_task_vma_iter_find_op op;
 	struct vm_area_struct *curr_vma;
 	struct task_struct *curr_task;
+	struct mm_struct *curr_mm;
 	u32 saved_tid = info->tid;

 	/* If this function returns a non-NULL vma, it holds a reference to
-	 * the task_struct, and holds read lock on vma->mm->mmap_lock.
+	 * the task_struct, holds a refcount on mm->mm_users, and holds
+	 * read lock on vma->mm->mmap_lock.
 	 * If this function returns NULL, it does not hold any reference or
 	 * lock.
 	 */
 	if (info->task) {
 		curr_task = info->task;
 		curr_vma = info->vma;
+		curr_mm = info->mm;
 		/* In case of lock contention, drop mmap_lock to unblock
 		 * the writer.
 		 *
···
 		 * 4.2) VMA2 and VMA2' covers different ranges, process
 		 *      VMA2'.
 		 */
-		if (mmap_lock_is_contended(curr_task->mm)) {
+		if (mmap_lock_is_contended(curr_mm)) {
 			info->prev_vm_start = curr_vma->vm_start;
 			info->prev_vm_end = curr_vma->vm_end;
 			op = task_vma_iter_find_vma;
-			mmap_read_unlock(curr_task->mm);
-			if (mmap_read_lock_killable(curr_task->mm))
+			mmap_read_unlock(curr_mm);
+			if (mmap_read_lock_killable(curr_mm)) {
+				mmput(curr_mm);
 				goto finish;
+			}
 		} else {
 			op = task_vma_iter_next_vma;
 		}
···
 			op = task_vma_iter_find_vma;
 		}

-		if (!curr_task->mm)
+		curr_mm = get_task_mm(curr_task);
+		if (!curr_mm)
 			goto next_task;

-		if (mmap_read_lock_killable(curr_task->mm))
+		if (mmap_read_lock_killable(curr_mm)) {
+			mmput(curr_mm);
 			goto finish;
+		}
 	}

 	switch (op) {
 	case task_vma_iter_first_vma:
-		curr_vma = find_vma(curr_task->mm, 0);
+		curr_vma = find_vma(curr_mm, 0);
 		break;
 	case task_vma_iter_next_vma:
-		curr_vma = find_vma(curr_task->mm, curr_vma->vm_end);
+		curr_vma = find_vma(curr_mm, curr_vma->vm_end);
 		break;
 	case task_vma_iter_find_vma:
 		/* We dropped mmap_lock so it is necessary to use find_vma
 		 * to find the next vma. This is similar to the mechanism
 		 * in show_smaps_rollup().
 		 */
-		curr_vma = find_vma(curr_task->mm, info->prev_vm_end - 1);
+		curr_vma = find_vma(curr_mm, info->prev_vm_end - 1);
 		/* case 1) and 4.2) above just use curr_vma */

 		/* check for case 2) or case 4.1) above */
 		if (curr_vma &&
 		    curr_vma->vm_start == info->prev_vm_start &&
 		    curr_vma->vm_end == info->prev_vm_end)
-			curr_vma = find_vma(curr_task->mm, curr_vma->vm_end);
+			curr_vma = find_vma(curr_mm, curr_vma->vm_end);
 		break;
 	}
 	if (!curr_vma) {
 		/* case 3) above, or case 2) 4.1) with vma->next == NULL */
-		mmap_read_unlock(curr_task->mm);
+		mmap_read_unlock(curr_mm);
+		mmput(curr_mm);
 		goto next_task;
 	}
 	info->task = curr_task;
 	info->vma = curr_vma;
+	info->mm = curr_mm;
 	return curr_vma;

next_task:
···
 	put_task_struct(curr_task);
 	info->task = NULL;
+	info->mm = NULL;
 	info->tid++;
 	goto again;
···
 	put_task_struct(curr_task);
 	info->task = NULL;
 	info->vma = NULL;
+	info->mm = NULL;
 	return NULL;
 }
···
 	 */
 	info->prev_vm_start = ~0UL;
 	info->prev_vm_end = info->vma->vm_end;
-	mmap_read_unlock(info->task->mm);
+	mmap_read_unlock(info->mm);
+	mmput(info->mm);
+	info->mm = NULL;
 	put_task_struct(info->task);
 	info->task = NULL;
 }
+4
kernel/bpf/trampoline.c
···
 		/* reset fops->func and fops->trampoline for re-register */
 		tr->fops->func = NULL;
 		tr->fops->trampoline = 0;
+
+		/* reset im->image memory attr for arch_prepare_bpf_trampoline */
+		set_memory_nx((long)im->image, 1);
+		set_memory_rw((long)im->image, 1);
 		goto again;
 	}
 #endif
+15-6
kernel/bpf/verifier.c
···
  */
 static void *copy_array(void *dst, const void *src, size_t n, size_t size, gfp_t flags)
 {
+	size_t alloc_bytes;
+	void *orig = dst;
 	size_t bytes;

 	if (ZERO_OR_NULL_PTR(src))
···
 	if (unlikely(check_mul_overflow(n, size, &bytes)))
 		return NULL;

-	if (ksize(dst) < ksize(src)) {
-		kfree(dst);
-		dst = kmalloc_track_caller(kmalloc_size_roundup(bytes), flags);
-		if (!dst)
-			return NULL;
+	alloc_bytes = max(ksize(orig), kmalloc_size_roundup(bytes));
+	dst = krealloc(orig, alloc_bytes, flags);
+	if (!dst) {
+		kfree(orig);
+		return NULL;
 	}

 	memcpy(dst, src, bytes);
···
 	 *               register B - not null
 	 * for JNE A, B, ... - A is not null in the false branch;
 	 * for JEQ A, B, ... - A is not null in the true branch.
+	 *
+	 * Since PTR_TO_BTF_ID points to a kernel struct that does
+	 * not need to be null checked by the BPF program, i.e.,
+	 * could be null even without PTR_MAYBE_NULL marking, so
+	 * only propagate nullness when neither reg is that type.
 	 */
 	if (!is_jmp32 && BPF_SRC(insn->code) == BPF_X &&
 	    __is_pointer_value(false, src_reg) && __is_pointer_value(false, dst_reg) &&
-	    type_may_be_null(src_reg->type) != type_may_be_null(dst_reg->type)) {
+	    type_may_be_null(src_reg->type) != type_may_be_null(dst_reg->type) &&
+	    base_type(src_reg->type) != PTR_TO_BTF_ID &&
+	    base_type(dst_reg->type) != PTR_TO_BTF_ID) {
 		eq_branch_regs = NULL;
 		switch (opcode) {
 		case BPF_JEQ:
+17-37
kernel/events/core.c
···

 /*
  * perf_sched_events : >0 events exist
- * perf_cgroup_events: >0 per-cpu cgroup events exist on this cpu
  */

 static void perf_sched_delayed(struct work_struct *work);
···
 static DEFINE_MUTEX(perf_sched_mutex);
 static atomic_t perf_sched_count;

-static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
 static DEFINE_PER_CPU(struct pmu_event_list, pmu_sb_events);

 static atomic_t nr_mmap_events __read_mostly;
···
 	struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
 	struct perf_cgroup *cgrp;

-	cgrp = perf_cgroup_from_task(task, NULL);
+	/*
+	 * cpuctx->cgrp is set when the first cgroup event enabled,
+	 * and is cleared when the last cgroup event disabled.
+	 */
+	if (READ_ONCE(cpuctx->cgrp) == NULL)
+		return;

 	WARN_ON_ONCE(cpuctx->ctx.nr_cgroups == 0);
+
+	cgrp = perf_cgroup_from_task(task, NULL);
 	if (READ_ONCE(cpuctx->cgrp) == cgrp)
 		return;
···
 	 * to check if we have to switch out PMU state.
 	 * cgroup event are system-wide mode only
 	 */
-	if (atomic_read(this_cpu_ptr(&perf_cgroup_events)))
-		perf_cgroup_switch(next);
+	perf_cgroup_switch(next);
 }

 static bool perf_less_group_idx(const void *l, const void *r)
···
 	detach_sb_event(event);
 }

-static void unaccount_event_cpu(struct perf_event *event, int cpu)
-{
-	if (event->parent)
-		return;
-
-	if (is_cgroup_event(event))
-		atomic_dec(&per_cpu(perf_cgroup_events, cpu));
-}
-
 #ifdef CONFIG_NO_HZ_FULL
 static DEFINE_SPINLOCK(nr_freq_lock);
 #endif
···
 		if (!atomic_add_unless(&perf_sched_count, -1, 1))
 			schedule_delayed_work(&perf_sched_work, HZ);
 	}
-
-	unaccount_event_cpu(event, event->cpu);

 	unaccount_pmu_sb_event(event);
 }
···
 	attach_sb_event(event);
 }

-static void account_event_cpu(struct perf_event *event, int cpu)
-{
-	if (event->parent)
-		return;
-
-	if (is_cgroup_event(event))
-		atomic_inc(&per_cpu(perf_cgroup_events, cpu));
-}
-
 /* Freq events need the tick to stay alive (see perf_event_task_tick). */
 static void account_freq_event_nohz(void)
 {
···
 		mutex_unlock(&perf_sched_mutex);
 	}
 enabled:
-
-	account_event_cpu(event, event->cpu);

 	account_pmu_sb_event(event);
 }
···
 	if (flags & ~PERF_FLAG_ALL)
 		return -EINVAL;

-	/* Do we allow access to perf_event_open(2) ? */
-	err = security_perf_event_open(&attr, PERF_SECURITY_OPEN);
+	err = perf_copy_attr(attr_uptr, &attr);
 	if (err)
 		return err;

-	err = perf_copy_attr(attr_uptr, &attr);
+	/* Do we allow access to perf_event_open(2) ? */
+	err = security_perf_event_open(&attr, PERF_SECURITY_OPEN);
 	if (err)
 		return err;
···
 	return event_fd;

 err_context:
-	/* event->pmu_ctx freed by free_event() */
+	put_pmu_ctx(event->pmu_ctx);
+	event->pmu_ctx = NULL; /* _free_event() */
 err_locked:
 	mutex_unlock(&ctx->mutex);
 	perf_unpin_context(ctx);
···

 err_pmu_ctx:
 	put_pmu_ctx(pmu_ctx);
+	event->pmu_ctx = NULL; /* _free_event() */
 err_unlock:
 	mutex_unlock(&ctx->mutex);
 	perf_unpin_context(ctx);
···

 	perf_event_groups_for_cpu_pmu(event, groups, cpu, pmu) {
 		perf_remove_from_context(event, 0);
-		unaccount_event_cpu(event, cpu);
 		put_pmu_ctx(event->pmu_ctx);
 		list_add(&event->migrate_entry, events);

 		for_each_sibling_event(sibling, event) {
 			perf_remove_from_context(sibling, 0);
-			unaccount_event_cpu(sibling, cpu);
 			put_pmu_ctx(sibling->pmu_ctx);
 			list_add(&sibling->migrate_entry, events);
 		}
···

 	if (event->state >= PERF_EVENT_STATE_OFF)
 		event->state = PERF_EVENT_STATE_INACTIVE;
-	account_event_cpu(event, cpu);
 	perf_install_in_context(ctx, event, cpu);
 }
···
 	pmu_ctx = find_get_pmu_context(child_event->pmu, child_ctx, child_event);
 	if (IS_ERR(pmu_ctx)) {
 		free_event(child_event);
-		return NULL;
+		return ERR_CAST(pmu_ctx);
 	}
 	child_event->pmu_ctx = pmu_ctx;
···
 	struct task_struct *task = info;

 	preempt_disable();
-	if (atomic_read(this_cpu_ptr(&perf_cgroup_events)))
-		perf_cgroup_switch(task);
+	perf_cgroup_switch(task);
 	preempt_enable();

 	return 0;
+7-4
kernel/futex/syscalls.c
···
 	}

 	futexv = kcalloc(nr_futexes, sizeof(*futexv), GFP_KERNEL);
-	if (!futexv)
-		return -ENOMEM;
+	if (!futexv) {
+		ret = -ENOMEM;
+		goto destroy_timer;
+	}

 	ret = futex_parse_waitv(futexv, waiters, nr_futexes);
 	if (!ret)
 		ret = futex_wait_multiple(futexv, nr_futexes, timeout ? &to : NULL);

+	kfree(futexv);
+
+destroy_timer:
 	if (timeout) {
 		hrtimer_cancel(&to.timer);
 		destroy_hrtimer_on_stack(&to.timer);
 	}
-
-	kfree(futexv);
 	return ret;
 }
+46-9
kernel/locking/rtmutex.c
···
  * set this bit before looking at the lock.
  */

-static __always_inline void
-rt_mutex_set_owner(struct rt_mutex_base *lock, struct task_struct *owner)
+static __always_inline struct task_struct *
+rt_mutex_owner_encode(struct rt_mutex_base *lock, struct task_struct *owner)
 {
 	unsigned long val = (unsigned long)owner;

 	if (rt_mutex_has_waiters(lock))
 		val |= RT_MUTEX_HAS_WAITERS;

-	WRITE_ONCE(lock->owner, (struct task_struct *)val);
+	return (struct task_struct *)val;
+}
+
+static __always_inline void
+rt_mutex_set_owner(struct rt_mutex_base *lock, struct task_struct *owner)
+{
+	/*
+	 * lock->wait_lock is held but explicit acquire semantics are needed
+	 * for a new lock owner so WRITE_ONCE is insufficient.
+	 */
+	xchg_acquire(&lock->owner, rt_mutex_owner_encode(lock, owner));
+}
+
+static __always_inline void rt_mutex_clear_owner(struct rt_mutex_base *lock)
+{
+	/* lock->wait_lock is held so the unlock provides release semantics. */
+	WRITE_ONCE(lock->owner, rt_mutex_owner_encode(lock, NULL));
 }

 static __always_inline void clear_rt_mutex_waiters(struct rt_mutex_base *lock)
···
 		((unsigned long)lock->owner & ~RT_MUTEX_HAS_WAITERS);
 }

-static __always_inline void fixup_rt_mutex_waiters(struct rt_mutex_base *lock)
+static __always_inline void
+fixup_rt_mutex_waiters(struct rt_mutex_base *lock, bool acquire_lock)
 {
 	unsigned long owner, *p = (unsigned long *) &lock->owner;

···
 	 * still set.
 	 */
 	owner = READ_ONCE(*p);
-	if (owner & RT_MUTEX_HAS_WAITERS)
-		WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS);
+	if (owner & RT_MUTEX_HAS_WAITERS) {
+		/*
+		 * See rt_mutex_set_owner() and rt_mutex_clear_owner() on
+		 * why xchg_acquire() is used for updating owner for
+		 * locking and WRITE_ONCE() for unlocking.
+		 *
+		 * WRITE_ONCE() would work for the acquire case too, but
+		 * in case that the lock acquisition failed it might
+		 * force other lockers into the slow path unnecessarily.
+		 */
+		if (acquire_lock)
+			xchg_acquire(p, owner & ~RT_MUTEX_HAS_WAITERS);
+		else
+			WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS);
+	}
 }

 /*
···
 		owner = *p;
 	} while (cmpxchg_relaxed(p, owner,
 				 owner | RT_MUTEX_HAS_WAITERS) != owner);
+
+	/*
+	 * The cmpxchg loop above is relaxed to avoid back-to-back ACQUIRE
+	 * operations in the event of contention. Ensure the successful
+	 * cmpxchg is visible.
+	 */
+	smp_mb__after_atomic();
 }

 /*
···
 	 * try_to_take_rt_mutex() sets the lock waiters bit
 	 * unconditionally. Clean this up.
 	 */
-	fixup_rt_mutex_waiters(lock);
+	fixup_rt_mutex_waiters(lock, true);

 	return ret;
 }
···
 	 * try_to_take_rt_mutex() sets the waiter bit
 	 * unconditionally. We might have to fix that up.
 	 */
-	fixup_rt_mutex_waiters(lock);
+	fixup_rt_mutex_waiters(lock, true);

 	trace_contention_end(lock, ret);
···
 	 * try_to_take_rt_mutex() sets the waiter bit unconditionally.
 	 * We might have to fix that up:
 	 */
-	fixup_rt_mutex_waiters(lock);
+	fixup_rt_mutex_waiters(lock, true);
 	debug_rt_mutex_free_waiter(&waiter);

 	trace_contention_end(lock, 0);
+3-3
kernel/locking/rtmutex_api.c
···
 void __sched rt_mutex_proxy_unlock(struct rt_mutex_base *lock)
 {
 	debug_rt_mutex_proxy_unlock(lock);
-	rt_mutex_set_owner(lock, NULL);
+	rt_mutex_clear_owner(lock);
 }

 /**
···
 	 * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
 	 * have to fix that up.
 	 */
-	fixup_rt_mutex_waiters(lock);
+	fixup_rt_mutex_waiters(lock, true);
 	raw_spin_unlock_irq(&lock->wait_lock);

 	return ret;
···
 	 * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
 	 * have to fix that up.
 	 */
-	fixup_rt_mutex_waiters(lock);
+	fixup_rt_mutex_waiters(lock, false);

 	raw_spin_unlock_irq(&lock->wait_lock);
+3-1
lib/kunit/string-stream.c
···
 		return ERR_PTR(-ENOMEM);

 	frag->fragment = kunit_kmalloc(test, len, gfp);
-	if (!frag->fragment)
+	if (!frag->fragment) {
+		kunit_kfree(test, frag);
 		return ERR_PTR(-ENOMEM);
+	}

 	return frag;
 }
+1-1
lib/scatterlist.c
···
 		/* Merge contiguous pages into the last SG */
 		prv_len = sgt_append->prv->length;
 		last_pg = sg_page(sgt_append->prv);
-		while (n_pages && pages_are_mergeable(last_pg, pages[0])) {
+		while (n_pages && pages_are_mergeable(pages[0], last_pg)) {
 			if (sgt_append->prv->length + PAGE_SIZE > max_segment)
 				break;
 			sgt_append->prv->length += PAGE_SIZE;
+1-1
mm/memblock.c
···
 * @base: phys starting address of the boot memory block
 * @size: size of the boot memory block in bytes
 *
- * Free boot memory block previously allocated by memblock_alloc_xx() API.
+ * Free boot memory block previously allocated by memblock_phys_alloc_xx() API.
 * The freeing memory will not be released to the buddy allocator.
 */
int __init_memblock memblock_phys_free(phys_addr_t base, phys_addr_t size)
+5-1
net/caif/cfctrl.c
···
 	default:
 		pr_warn("Request setup of bad link type = %d\n",
 			param->linktype);
+		cfpkt_destroy(pkt);
 		return -EINVAL;
 	}
 	req = kzalloc(sizeof(*req), GFP_KERNEL);
-	if (!req)
+	if (!req) {
+		cfpkt_destroy(pkt);
 		return -ENOMEM;
+	}
+
 	req->client_layer = user_layer;
 	req->cmd = CFCTRL_CMD_LINK_SETUP;
 	req->param = *param;
+5-2
net/core/filter.c
···

 static int bpf_skb_generic_pop(struct sk_buff *skb, u32 off, u32 len)
 {
+	void *old_data;
+
 	/* skb_ensure_writable() is not needed here, as we're
 	 * already working on an uncloned skb.
 	 */
 	if (unlikely(!pskb_may_pull(skb, off + len)))
 		return -ENOMEM;

-	skb_postpull_rcsum(skb, skb->data + off, len);
-	memmove(skb->data + len, skb->data, off);
+	old_data = skb->data;
 	__skb_pull(skb, len);
+	skb_postpull_rcsum(skb, old_data + off, len);
+	memmove(skb->data, old_data, off);

 	return 0;
 }
+71-38
net/ethtool/ioctl.c
···
 	return ret;
 }

-static int ethtool_get_phy_stats(struct net_device *dev, void __user *useraddr)
+static int ethtool_vzalloc_stats_array(int n_stats, u64 **data)
 {
-	const struct ethtool_phy_ops *phy_ops = ethtool_phy_ops;
-	const struct ethtool_ops *ops = dev->ethtool_ops;
-	struct phy_device *phydev = dev->phydev;
-	struct ethtool_stats stats;
-	u64 *data;
-	int ret, n_stats;
-
-	if (!phydev && (!ops->get_ethtool_phy_stats || !ops->get_sset_count))
-		return -EOPNOTSUPP;
-
-	if (phydev && !ops->get_ethtool_phy_stats &&
-	    phy_ops && phy_ops->get_sset_count)
-		n_stats = phy_ops->get_sset_count(phydev);
-	else
-		n_stats = ops->get_sset_count(dev, ETH_SS_PHY_STATS);
 	if (n_stats < 0)
 		return n_stats;
 	if (n_stats > S32_MAX / sizeof(u64))
 		return -ENOMEM;
-	WARN_ON_ONCE(!n_stats);
+	if (WARN_ON_ONCE(!n_stats))
+		return -EOPNOTSUPP;
+
+	*data = vzalloc(array_size(n_stats, sizeof(u64)));
+	if (!*data)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static int ethtool_get_phy_stats_phydev(struct phy_device *phydev,
+					struct ethtool_stats *stats,
+					u64 **data)
+{
+	const struct ethtool_phy_ops *phy_ops = ethtool_phy_ops;
+	int n_stats, ret;
+
+	if (!phy_ops || !phy_ops->get_sset_count || !phy_ops->get_stats)
+		return -EOPNOTSUPP;
+
+	n_stats = phy_ops->get_sset_count(phydev);
+
+	ret = ethtool_vzalloc_stats_array(n_stats, data);
+	if (ret)
+		return ret;
+
+	stats->n_stats = n_stats;
+	return phy_ops->get_stats(phydev, stats, *data);
+}
+
+static int ethtool_get_phy_stats_ethtool(struct net_device *dev,
+					 struct ethtool_stats *stats,
+					 u64 **data)
+{
+	const struct ethtool_ops *ops = dev->ethtool_ops;
+	int n_stats, ret;
+
+	if (!ops || !ops->get_sset_count || !ops->get_ethtool_phy_stats)
+		return -EOPNOTSUPP;
+
+	n_stats = ops->get_sset_count(dev, ETH_SS_PHY_STATS);
+
+	ret = ethtool_vzalloc_stats_array(n_stats, data);
+	if (ret)
+		return ret;
+
+	stats->n_stats = n_stats;
+	ops->get_ethtool_phy_stats(dev, stats, *data);
+
+	return 0;
+}
+
+static int ethtool_get_phy_stats(struct net_device *dev, void __user *useraddr)
+{
+	struct phy_device *phydev = dev->phydev;
+	struct ethtool_stats stats;
+	u64 *data = NULL;
+	int ret = -EOPNOTSUPP;

 	if (copy_from_user(&stats, useraddr, sizeof(stats)))
 		return -EFAULT;

-	stats.n_stats = n_stats;
+	if (phydev)
+		ret = ethtool_get_phy_stats_phydev(phydev, &stats, &data);

-	if (n_stats) {
-		data = vzalloc(array_size(n_stats, sizeof(u64)));
-		if (!data)
-			return -ENOMEM;
+	if (ret == -EOPNOTSUPP)
+		ret = ethtool_get_phy_stats_ethtool(dev, &stats, &data);

-		if (phydev && !ops->get_ethtool_phy_stats &&
-		    phy_ops && phy_ops->get_stats) {
-			ret = phy_ops->get_stats(phydev, &stats, data);
-			if (ret < 0)
-				goto out;
-		} else {
-			ops->get_ethtool_phy_stats(dev, &stats, data);
-		}
-	} else {
-		data = NULL;
+	if (ret)
+		goto out;
+
+	if (copy_to_user(useraddr, &stats, sizeof(stats))) {
+		ret = -EFAULT;
+		goto out;
 	}

-	ret = -EFAULT;
-	if (copy_to_user(useraddr, &stats, sizeof(stats)))
-		goto out;
 	useraddr += sizeof(stats);
-	if (n_stats && copy_to_user(useraddr, data, array_size(n_stats, sizeof(u64))))
-		goto out;
-	ret = 0;
+	if (copy_to_user(useraddr, data, array_size(stats.n_stats, sizeof(u64))))
+		ret = -EFAULT;

 out:
 	vfree(data);
+1
net/ipv4/af_inet.c
···
 	if (rc == 0) {
 		*sk = sock->sk;
 		(*sk)->sk_allocation = GFP_ATOMIC;
+		(*sk)->sk_use_task_frag = false;
 		/*
 		 * Unhash it so that IP input processing does not even see it,
 		 * we do not wish this socket to see incoming packets.
net/ipv4/inet_hashtables.c
···
 #endif
 	tb->rcv_saddr = sk->sk_rcv_saddr;
 	INIT_HLIST_HEAD(&tb->owners);
+	INIT_HLIST_HEAD(&tb->deathrow);
 	hlist_add_head(&tb->node, &head->chain);
 }
···
 /* Caller must hold hashbucket lock for this tb with local BH disabled */
 void inet_bind2_bucket_destroy(struct kmem_cache *cachep, struct inet_bind2_bucket *tb)
 {
-	if (hlist_empty(&tb->owners)) {
+	if (hlist_empty(&tb->owners) && hlist_empty(&tb->deathrow)) {
 		__hlist_del(&tb->node);
 		kmem_cache_free(cachep, tb);
 	}
···
 	/* Head lock still held and bh's disabled */
 	inet_bind_hash(sk, tb, tb2, port);

-	spin_unlock(&head2->lock);
-
 	if (sk_unhashed(sk)) {
 		inet_sk(sk)->inet_sport = htons(port);
 		inet_ehash_nolisten(sk, (struct sock *)tw, NULL);
 	}
 	if (tw)
 		inet_twsk_bind_unhash(tw, hinfo);
+
+	spin_unlock(&head2->lock);
 	spin_unlock(&head->lock);
+
 	if (tw)
 		inet_twsk_deschedule_put(tw);
 	local_bh_enable();
+29-2
net/ipv4/inet_timewait_sock.c
···
 void inet_twsk_bind_unhash(struct inet_timewait_sock *tw,
 			   struct inet_hashinfo *hashinfo)
 {
+	struct inet_bind2_bucket *tb2 = tw->tw_tb2;
 	struct inet_bind_bucket *tb = tw->tw_tb;

 	if (!tb)
···
 	__hlist_del(&tw->tw_bind_node);
 	tw->tw_tb = NULL;
 	inet_bind_bucket_destroy(hashinfo->bind_bucket_cachep, tb);
+
+	__hlist_del(&tw->tw_bind2_node);
+	tw->tw_tb2 = NULL;
+	inet_bind2_bucket_destroy(hashinfo->bind2_bucket_cachep, tb2);
+
 	__sock_put((struct sock *)tw);
 }
···
 {
 	struct inet_hashinfo *hashinfo = tw->tw_dr->hashinfo;
 	spinlock_t *lock = inet_ehash_lockp(hashinfo, tw->tw_hash);
-	struct inet_bind_hashbucket *bhead;
+	struct inet_bind_hashbucket *bhead, *bhead2;

 	spin_lock(lock);
 	sk_nulls_del_node_init_rcu((struct sock *)tw);
···
 	/* Disassociate with bind bucket. */
 	bhead = &hashinfo->bhash[inet_bhashfn(twsk_net(tw), tw->tw_num,
 			hashinfo->bhash_size)];
+	bhead2 = inet_bhashfn_portaddr(hashinfo, (struct sock *)tw,
+				       twsk_net(tw), tw->tw_num);

 	spin_lock(&bhead->lock);
+	spin_lock(&bhead2->lock);
 	inet_twsk_bind_unhash(tw, hashinfo);
+	spin_unlock(&bhead2->lock);
 	spin_unlock(&bhead->lock);

 	refcount_dec(&tw->tw_dr->tw_refcount);
···
 	hlist_add_head(&tw->tw_bind_node, list);
 }

+static void inet_twsk_add_bind2_node(struct inet_timewait_sock *tw,
+				     struct hlist_head *list)
+{
+	hlist_add_head(&tw->tw_bind2_node, list);
+}
+
 /*
  * Enter the time wait state. This is called with locally disabled BH.
  * Essentially we whip up a timewait bucket, copy the relevant info into it
···
 	const struct inet_connection_sock *icsk = inet_csk(sk);
 	struct inet_ehash_bucket *ehead = inet_ehash_bucket(hashinfo, sk->sk_hash);
 	spinlock_t *lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
-	struct inet_bind_hashbucket *bhead;
+	struct inet_bind_hashbucket *bhead, *bhead2;
+
 	/* Step 1: Put TW into bind hash. Original socket stays there too.
 	   Note, that any socket with inet->num != 0 MUST be bound in
 	   binding cache, even if it is closed.
 	 */
 	bhead = &hashinfo->bhash[inet_bhashfn(twsk_net(tw), inet->inet_num,
 			hashinfo->bhash_size)];
+	bhead2 = inet_bhashfn_portaddr(hashinfo, sk, twsk_net(tw), inet->inet_num);
+
 	spin_lock(&bhead->lock);
+	spin_lock(&bhead2->lock);
+
 	tw->tw_tb = icsk->icsk_bind_hash;
 	WARN_ON(!icsk->icsk_bind_hash);
 	inet_twsk_add_bind_node(tw, &tw->tw_tb->owners);
+
+	tw->tw_tb2 = icsk->icsk_bind2_hash;
+	WARN_ON(!icsk->icsk_bind2_hash);
+	inet_twsk_add_bind2_node(tw, &tw->tw_tb2->deathrow);
+
+	spin_unlock(&bhead2->lock);
 	spin_unlock(&bhead->lock);

 	spin_lock(lock);
+4
net/ipv4/tcp_ulp.c
···
 	if (sk->sk_socket)
 		clear_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);

+	err = -EINVAL;
+	if (!ulp_ops->clone && sk->sk_state == TCP_LISTEN)
+		goto out_err;
+
 	err = ulp_ops->init(sk);
 	if (err)
 		goto out_err;
+16-4
net/mptcp/protocol.c
···
 	set_bit(MPTCP_NOSPACE, &mptcp_sk(sk)->flags);
 }

+static int mptcp_disconnect(struct sock *sk, int flags);
+
 static int mptcp_sendmsg_fastopen(struct sock *sk, struct sock *ssk, struct msghdr *msg,
 				  size_t len, int *copied_syn)
 {
···
 	lock_sock(ssk);
 	msg->msg_flags |= MSG_DONTWAIT;
 	msk->connect_flags = O_NONBLOCK;
-	msk->is_sendmsg = 1;
+	msk->fastopening = 1;
 	ret = tcp_sendmsg_fastopen(ssk, msg, copied_syn, len, NULL);
-	msk->is_sendmsg = 0;
+	msk->fastopening = 0;
 	msg->msg_flags = saved_flags;
 	release_sock(ssk);
···
 		 */
 		if (ret && ret != -EINPROGRESS && ret != -ERESTARTSYS && ret != -EINTR)
 			*copied_syn = 0;
+	} else if (ret && ret != -EINPROGRESS) {
+		mptcp_disconnect(sk, 0);
 	}

 	return ret;
···
 	/* otherwise tcp will dispose of the ssk and subflow ctx */
 	if (ssk->sk_state == TCP_LISTEN) {
 		tcp_set_state(ssk, TCP_CLOSE);
-		mptcp_subflow_queue_clean(ssk);
+		mptcp_subflow_queue_clean(sk, ssk);
 		inet_csk_listen_stop(ssk);
 		mptcp_event_pm_listener(ssk, MPTCP_EVENT_LISTENER_CLOSED);
 	}
···
 {
 	struct mptcp_sock *msk = mptcp_sk(sk);

+	/* We are on the fastopen error path. We can't call straight into the
+	 * subflows cleanup code due to lock nesting (we are already under
+	 * msk->firstsocket lock). Do nothing and leave the cleanup to the
+	 * caller.
+	 */
+	if (msk->fastopening)
+		return 0;
+
 	inet_sk_state_store(sk, TCP_CLOSE);

 	mptcp_stop_timer(sk);
···
 	/* if reaching here via the fastopen/sendmsg path, the caller already
 	 * acquired the subflow socket lock, too.
 	 */
-	if (msk->is_sendmsg)
+	if (msk->fastopening)
 		err = __inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags, 1);
 	else
 		err = inet_stream_connect(ssock, uaddr, addr_len, msk->connect_flags);
···
 	}
 }

-void mptcp_subflow_queue_clean(struct sock *listener_ssk)
+void mptcp_subflow_queue_clean(struct sock *listener_sk, struct sock *listener_ssk)
 {
 	struct request_sock_queue *queue = &inet_csk(listener_ssk)->icsk_accept_queue;
 	struct mptcp_sock *msk, *next, *head = NULL;
···
 		do_cancel_work = __mptcp_close(sk, 0);
 		release_sock(sk);
-		if (do_cancel_work)
+		if (do_cancel_work) {
+			/* lockdep will report a false positive ABBA deadlock
+			 * between cancel_work_sync and the listener socket.
+			 * The involved locks belong to different sockets WRT
+			 * the existing AB chain.
+			 * Using a per socket key is problematic as key
+			 * deregistration requires process context and must be
+			 * performed at socket disposal time, in atomic
+			 * context.
+			 * Just tell lockdep to consider the listener socket
+			 * released here.
+			 */
+			mutex_release(&listener_sk->sk_lock.dep_map, _RET_IP_);
 			mptcp_cancel_work(sk);
+			mutex_acquire(&listener_sk->sk_lock.dep_map,
+				      SINGLE_DEPTH_NESTING, 0, _RET_IP_);
+		}
 		sock_put(sk);
 	}
···
 		result = tcf_classify(skb, NULL, fl, &res, true);
 		if (result < 0)
 			continue;
+		if (result == TC_ACT_SHOT)
+			goto done;
+
 		flow = (struct atm_flow_data *)res.class;
 		if (!flow)
 			flow = lookup_flow(sch, res.classid);
-		goto done;
+		goto drop;
 	}
 }
 flow = NULL;
+2-2
net/sched/sch_cbq.c
···
 		result = tcf_classify(skb, NULL, fl, &res, true);
 		if (!fl || result < 0)
 			goto fallback;
+		if (result == TC_ACT_SHOT)
+			return NULL;

 		cl = (void *)res.class;
 		if (!cl) {
···
 		case TC_ACT_TRAP:
 			*qerr = NET_XMIT_SUCCESS | __NET_XMIT_STOLEN;
 			fallthrough;
-		case TC_ACT_SHOT:
-			return NULL;
 		case TC_ACT_RECLASSIFY:
 			return cbq_reclassify(skb, cl);
 		}
+6-2
net/sched/sch_htb.c
···
 {
 	return (unsigned long)htb_find(handle, sch);
 }
+
+#define HTB_DIRECT ((struct htb_class *)-1L)
+
 /**
  * htb_classify - classify a packet into class
+ * @skb: the socket buffer
+ * @sch: the active queue discipline
+ * @qerr: pointer for returned status code
  *
  * It returns NULL if the packet should be dropped or -1 if the packet
  * should be passed directly thru. In all other cases leaf class is returned.
···
  * have no valid leaf we try to use MAJOR:default leaf. It still unsuccessful
  * then finish and return direct queue.
  */
-#define HTB_DIRECT ((struct htb_class *)-1L)
-
 static struct htb_class *htb_classify(struct sk_buff *skb, struct Qdisc *sch,
 				      int *qerr)
 {
···
 		$(if $(CONFIG_MODVERSIONS),-m) \
 		$(if $(CONFIG_MODULE_SRCVERSION_ALL),-a) \
 		$(if $(CONFIG_SECTION_MISMATCH_WARN_ONLY),,-E) \
+		$(if $(KBUILD_MODPOST_WARN),-w) \
 		$(if $(KBUILD_NSDEPS),-d $(MODULES_NSDEPS)) \
 		$(if $(CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS)$(KBUILD_NSDEPS),-N) \
 		-o $@
···
 # 'make -i -k' ignores compile errors, and builds as many modules as possible.
 ifneq ($(findstring i,$(filter-out --%,$(MAKEFLAGS))),)
 modpost-args += -n
+endif
+
+# Read out modules.order to pass in modpost.
+# Otherwise, allmodconfig would fail with "Argument list too long".
+ifdef KBUILD_MODULES
+modpost-args += -T $(MODORDER)
+modpost-deps += $(MODORDER)
 endif

 ifeq ($(KBUILD_EXTMOD),)
···
 endif # ($(KBUILD_EXTMOD),)

-ifneq ($(KBUILD_MODPOST_WARN)$(missing-input),)
+ifneq ($(missing-input),)
 modpost-args += -w
 endif

-ifdef KBUILD_MODULES
-modpost-args += -T $(MODORDER)
-modpost-deps += $(MODORDER)
-endif
-
-# Read out modules.order to pass in modpost.
-# Otherwise, allmodconfig would fail with "Argument list too long".
 quiet_cmd_modpost = MODPOST $@
       cmd_modpost = \
 	$(if $(missing-input), \
+1
scripts/Makefile.package
···
 PHONY += help
 help:
 	@echo '  rpm-pkg             - Build both source and binary RPM kernel packages'
+	@echo '  srcrpm-pkg          - Build only the source kernel RPM package'
 	@echo '  binrpm-pkg          - Build only the binary kernel RPM package'
 	@echo '  deb-pkg             - Build both source and binary deb kernel packages'
 	@echo '  bindeb-pkg          - Build only the binary kernel deb package'
···
 "(especially with a larger number of unrolled categories) than the\n"
 "default mode.\n"
 "\n"
+
+"Search\n"
+"-------\n"
+"Pressing the forward-slash (/) anywhere brings up a search dialog box.\n"
+"\n"
+
 "Different color themes available\n"
 "--------------------------------\n"
 "It is possible to select different color themes using the variable\n"
+2-1
scripts/package/mkspec
···
 	URL: https://www.kernel.org
 $S	Source: kernel-$__KERNELRELEASE.tar.gz
 	Provides: $PROVIDES
-$S	BuildRequires: bc binutils bison dwarves elfutils-libelf-devel flex
+$S	BuildRequires: bc binutils bison dwarves
+$S	BuildRequires: (elfutils-libelf-devel or libelf-devel) flex
 $S	BuildRequires: gcc make openssl openssl-devel perl python3 rsync

 # $UTS_MACHINE as a fallback of _arch in case
+19-8
sound/pci/hda/patch_hdmi.c
···
 	struct hdmi_ops ops;

 	bool dyn_pin_out;
+	bool static_pcm_mapping;
 	/* hdmi interrupt trigger control flag for Nvidia codec */
 	bool hdmi_intr_trig_ctrl;
 	bool nv_dp_workaround; /* workaround DP audio infoframe for Nvidia */
···
 	 */
 	pcm_jack = pin_idx_to_pcm_jack(codec, per_pin);

-	if (eld->eld_valid) {
-		hdmi_attach_hda_pcm(spec, per_pin);
-		hdmi_pcm_setup_pin(spec, per_pin);
-	} else {
-		hdmi_pcm_reset_pin(spec, per_pin);
-		hdmi_detach_hda_pcm(spec, per_pin);
+	if (!spec->static_pcm_mapping) {
+		if (eld->eld_valid) {
+			hdmi_attach_hda_pcm(spec, per_pin);
+			hdmi_pcm_setup_pin(spec, per_pin);
+		} else {
+			hdmi_pcm_reset_pin(spec, per_pin);
+			hdmi_detach_hda_pcm(spec, per_pin);
+		}
 	}
+
 	/* if pcm_idx == -1, it means this is in monitor connection event
 	 * we can get the correct pcm_idx now.
 	 */
···
 	struct hdmi_spec *spec = codec->spec;
 	int idx, pcm_num;

-	/* limit the PCM devices to the codec converters */
-	pcm_num = spec->num_cvts;
+	/* limit the PCM devices to the codec converters or available PINs */
+	pcm_num = min(spec->num_cvts, spec->num_pins);
 	codec_dbg(codec, "hdmi: pcm_num set to %d\n", pcm_num);

 	for (idx = 0; idx < pcm_num; idx++) {
···
 	for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
 		struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);
 		struct hdmi_eld *pin_eld = &per_pin->sink_eld;
+
+		if (spec->static_pcm_mapping) {
+			hdmi_attach_hda_pcm(spec, per_pin);
+			hdmi_pcm_setup_pin(spec, per_pin);
+		}

 		pin_eld->eld_valid = false;
 		hdmi_present_sense(per_pin, 0);
···
 	codec->patch_ops.init = atihdmi_init;

 	spec = codec->spec;
+
+	spec->static_pcm_mapping = true;

 	spec->ops.pin_get_eld = atihdmi_pin_get_eld;
 	spec->ops.pin_setup_infoframe = atihdmi_pin_setup_infoframe;
···
 	for (;;) {
 		done =
 			line6_midibuf_read(mb, line6->buffer_message,
-					   LINE6_MIDI_MESSAGE_MAXLEN);
+					   LINE6_MIDI_MESSAGE_MAXLEN,
+					   LINE6_MIDIBUF_READ_RX);

 		if (done <= 0)
 			break;
+4-2
sound/usb/line6/midi.c
···
 	int req, done;

 	for (;;) {
-		req = min(line6_midibuf_bytes_free(mb), line6->max_packet_size);
+		req = min3(line6_midibuf_bytes_free(mb), line6->max_packet_size,
+			   LINE6_FALLBACK_MAXPACKETSIZE);
 		done = snd_rawmidi_transmit_peek(substream, chunk, req);

 		if (done == 0)
···
 	for (;;) {
 		done = line6_midibuf_read(mb, chunk,
-					  LINE6_FALLBACK_MAXPACKETSIZE)
+					  LINE6_FALLBACK_MAXPACKETSIZE,
+					  LINE6_MIDIBUF_READ_TX);

 		if (done == 0)
 			break;
+17-8
sound/usb/line6/midibuf.c
···
 #include "midibuf.h"

+
 static int midibuf_message_length(unsigned char code)
 {
 	int message_length;
···
 		message_length = length[(code >> 4) - 8];
 	} else {
-		/*
-		   Note that according to the MIDI specification 0xf2 is
-		   the "Song Position Pointer", but this is used by Line 6
-		   to send sysex messages to the host.
-		 */
-		static const int length[] = { -1, 2, -1, 2, -1, -1, 1, 1, 1, 1,
+		static const int length[] = { -1, 2, 2, 2, -1, -1, 1, 1, 1, -1,
 			1, 1, 1, -1, 1, 1
 		};
 		message_length = length[code & 0x0f];
···
 }

 int line6_midibuf_read(struct midi_buffer *this, unsigned char *data,
-		       int length)
+		       int length, int read_type)
 {
 	int bytes_used;
 	int length1, length2;
···

 	length1 = this->size - this->pos_read;

-	/* check MIDI command length */
 	command = this->buf[this->pos_read];
+	/*
+	   PODxt always has status byte lower nibble set to 0010,
+	   when it means to send 0000, so we correct it here so
+	   that control/program changes come on channel 1 and
+	   sysex message status byte is correct
+	 */
+	if (read_type == LINE6_MIDIBUF_READ_RX) {
+		if (command == 0xb2 || command == 0xc2 || command == 0xf2) {
+			unsigned char fixed = command & 0xf0;
+			this->buf[this->pos_read] = fixed;
+			command = fixed;
+		}
+	}

+	/* check MIDI command length */
 	if (command & 0x80) {
 		midi_length = midibuf_message_length(command);
 		this->command_prev = command;
+4-1
sound/usb/line6/midibuf.h
···
 #ifndef MIDIBUF_H
 #define MIDIBUF_H

+#define LINE6_MIDIBUF_READ_TX 0
+#define LINE6_MIDIBUF_READ_RX 1
+
 struct midi_buffer {
 	unsigned char *buf;
 	int size;
···
 extern int line6_midibuf_ignore(struct midi_buffer *mb, int length);
 extern int line6_midibuf_init(struct midi_buffer *mb, int size, int split);
 extern int line6_midibuf_read(struct midi_buffer *mb, unsigned char *data,
-			      int length);
+			      int length, int read_type);
 extern void line6_midibuf_reset(struct midi_buffer *mb);
 extern int line6_midibuf_write(struct midi_buffer *mb, unsigned char *data,
 			       int length);
···

 		/* open single copy of the events w/o cgroup */
 		err = evsel__open_per_cpu(evsel, evsel->core.cpus, -1);
-		if (err) {
-			pr_err("Failed to open first cgroup events\n");
-			goto out;
-		}
+		if (err == 0)
+			evsel->supported = true;

 		map_fd = bpf_map__fd(skel->maps.events);
 		perf_cpu_map__for_each_cpu(cpu, j, evsel->core.cpus) {
 			int fd = FD(evsel, j);
 			__u32 idx = evsel->core.idx * total_cpus + cpu.cpu;

-			err = bpf_map_update_elem(map_fd, &idx, &fd,
-						  BPF_ANY);
-			if (err < 0) {
-				pr_err("Failed to update perf_event fd\n");
-				goto out;
-			}
+			bpf_map_update_elem(map_fd, &idx, &fd, BPF_ANY);
 		}

 		evsel->cgrp = leader_cgrp;
 	}
-	evsel->supported = true;

 	if (evsel->cgrp == cgrp)
 		continue;
+18-5
tools/perf/util/cgroup.c
···
 	return 0;
 }

+static int check_and_add_cgroup_name(const char *fpath)
+{
+	struct cgroup_name *cn;
+
+	list_for_each_entry(cn, &cgroup_list, list) {
+		if (!strcmp(cn->name, fpath))
+			return 0;
+	}
+
+	/* pretend if it's added by ftw() */
+	return add_cgroup_name(fpath, NULL, FTW_D, NULL);
+}
+
 static void release_cgroup_list(void)
 {
 	struct cgroup_name *cn;
···
 	struct cgroup_name *cn;
 	char *s;

-	/* use given name as is - for testing purpose */
+	/* use given name as is when no regex is given */
 	for (;;) {
 		p = strchr(str, ',');
 		e = p ? p : eos;
···
 			s = strndup(str, e - str);
 			if (!s)
 				return -1;
-			/* pretend if it's added by ftw() */
-			ret = add_cgroup_name(s, NULL, FTW_D, NULL);
+
+			ret = check_and_add_cgroup_name(s);
 			free(s);
-			if (ret)
+			if (ret < 0)
 				return -1;
 		} else {
-			if (add_cgroup_name("", NULL, FTW_D, NULL) < 0)
+			if (check_and_add_cgroup_name("/") < 0)
 				return -1;
 		}
···
+// SPDX-License-Identifier: GPL-2.0
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, u64);
+	__type(value, u64);
+} m_hash SEC(".maps");
+
+SEC("?raw_tp")
+__failure __msg("R8 invalid mem access 'map_value_or_null")
+int jeq_infer_not_null_ptr_to_btfid(void *ctx)
+{
+	struct bpf_map *map = (struct bpf_map *)&m_hash;
+	struct bpf_map *inner_map = map->inner_map_meta;
+	u64 key = 0, ret = 0, *val;
+
+	val = bpf_map_lookup_elem(map, &key);
+	/* Do not mark ptr as non-null if one of them is
+	 * PTR_TO_BTF_ID (R9), reject because of invalid
+	 * access to map value (R8).
+	 *
+	 * Here, we need to inline those insns to access
+	 * R8 directly, since compiler may use other reg
+	 * once it figures out val==inner_map.
+	 */
+	asm volatile("r8 = %[val];\n"
+		     "r9 = %[inner_map];\n"
+		     "if r8 != r9 goto +1;\n"
+		     "%[ret] = *(u64 *)(r8 +0);\n"
+		     : [ret] "+r"(ret)
+		     : [inner_map] "r"(inner_map), [val] "r"(val)
+		     : "r8", "r9");
+
+	return ret;
+}
···
 include $(top_srcdir)/scripts/subarch.include
 ARCH ?= $(SUBARCH)

-# For cross-builds to work, UNAME_M has to map to ARCH and arch specific
-# directories and targets in this Makefile. "uname -m" doesn't map to
-# arch specific sub-directory names.
-#
-# UNAME_M variable to used to run the compiles pointing to the right arch
-# directories and build the right targets for these supported architectures.
-#
-# TEST_GEN_PROGS and LIBKVM are set using UNAME_M variable.
-# LINUX_TOOL_ARCH_INCLUDE is set using ARCH variable.
-#
-# x86_64 targets are named to include x86_64 as a suffix and directories
-# for includes are in x86_64 sub-directory. s390x and aarch64 follow the
-# same convention. "uname -m" doesn't result in the correct mapping for
-# s390x and aarch64.
-#
-# No change necessary for x86_64
-UNAME_M := $(shell uname -m)
-
-# Set UNAME_M for arm64 compile/install to work
-ifeq ($(ARCH),arm64)
-	UNAME_M := aarch64
-endif
-# Set UNAME_M s390x compile/install to work
-ifeq ($(ARCH),s390)
-	UNAME_M := s390x
-endif
-# Set UNAME_M riscv compile/install to work
-ifeq ($(ARCH),riscv)
-	UNAME_M := riscv
+ifeq ($(ARCH),x86)
+	ARCH_DIR := x86_64
+else ifeq ($(ARCH),arm64)
+	ARCH_DIR := aarch64
+else ifeq ($(ARCH),s390)
+	ARCH_DIR := s390x
+else
+	ARCH_DIR := $(ARCH)
 endif

 LIBKVM += lib/assert.c
···
 TEST_GEN_PROGS_riscv += set_memory_region_test
 TEST_GEN_PROGS_riscv += kvm_binary_stats_test

-TEST_PROGS += $(TEST_PROGS_$(UNAME_M))
-TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M))
-TEST_GEN_PROGS_EXTENDED += $(TEST_GEN_PROGS_EXTENDED_$(UNAME_M))
-LIBKVM += $(LIBKVM_$(UNAME_M))
+TEST_PROGS += $(TEST_PROGS_$(ARCH_DIR))
+TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(ARCH_DIR))
+TEST_GEN_PROGS_EXTENDED += $(TEST_GEN_PROGS_EXTENDED_$(ARCH_DIR))
+LIBKVM += $(LIBKVM_$(ARCH_DIR))
+
+# lib.mk defines $(OUTPUT), prepends $(OUTPUT)/ to $(TEST_GEN_PROGS), and most
+# importantly defines, i.e. overwrites, $(CC) (unless `make -e` or `make CC=`,
+# which causes the environment variable to override the makefile).
+include ../lib.mk

 INSTALL_HDR_PATH = $(top_srcdir)/usr
 LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
···
 LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include
 endif
 CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
+	-Wno-gnu-variable-sized-type-not-at-end \
+	-fno-builtin-memcmp -fno-builtin-memcpy -fno-builtin-memset \
 	-fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \
 	-I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
-	-I$(<D) -Iinclude/$(UNAME_M) -I ../rseq -I.. $(EXTRA_CFLAGS) \
+	-I$(<D) -Iinclude/$(ARCH_DIR) -I ../rseq -I.. $(EXTRA_CFLAGS) \
 	$(KHDR_INCLUDES)

-no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \
-	$(CC) -Werror -no-pie -x c - -o "$$TMP", -no-pie)
+no-pie-option := $(call try-run, echo 'int main(void) { return 0; }' | \
+	$(CC) -Werror $(CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)

 # On s390, build the testcases KVM-enabled
-pgste-option = $(call try-run, echo 'int main() { return 0; }' | \
+pgste-option = $(call try-run, echo 'int main(void) { return 0; }' | \
 	$(CC) -Werror -Wl$(comma)--s390-pgste -x c - -o "$$TMP",-Wl$(comma)--s390-pgste)

 LDLIBS += -ldl
 LDFLAGS += -pthread $(no-pie-option) $(pgste-option)
-
-# After inclusion, $(OUTPUT) is defined and
-# $(TEST_GEN_PROGS) starts with $(OUTPUT)/
-include ../lib.mk

 LIBKVM_C := $(filter %.c,$(LIBKVM))
 LIBKVM_S := $(filter %.S,$(LIBKVM))
···
 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
 	       "Missing new mode params?");

+/*
+ * Initializes vm->vpages_valid to match the canonical VA space of the
+ * architecture.
+ *
+ * The default implementation is valid for architectures which split the
+ * range addressed by a single page table into a low and high region
+ * based on the MSB of the VA. On architectures with this behavior
+ * the VA region spans [0, 2^(va_bits - 1)), [-(2^(va_bits - 1), -1].
+ */
 __weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
 {
 	sparsebit_set_num(vm->vpages_valid,
···
 	while (npages--) {
 		virt_pg_map(vm, vaddr, paddr);
+		sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
+
 		vaddr += page_size;
 		paddr += page_size;
-
-		sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
 	}
 }
+14-2
tools/testing/selftests/kvm/lib/ucall_common.c
···
 #include "linux/bitmap.h"
 #include "linux/atomic.h"

+#define GUEST_UCALL_FAILED	-1
+
 struct ucall_header {
 	DECLARE_BITMAP(in_use, KVM_MAX_VCPUS);
 	struct ucall ucalls[KVM_MAX_VCPUS];
···
 	struct ucall *uc;
 	int i;

-	GUEST_ASSERT(ucall_pool);
+	if (!ucall_pool)
+		goto ucall_failed;

 	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
 		if (!test_and_set_bit(i, ucall_pool->in_use)) {
···
 		}
 	}

-	GUEST_ASSERT(0);
+ucall_failed:
+	/*
+	 * If the vCPU cannot grab a ucall structure, make a bare ucall with a
+	 * magic value to signal to get_ucall() that things went sideways.
+	 * GUEST_ASSERT() depends on ucall_alloc() and so cannot be used here.
+	 */
+	ucall_arch_do_ucall(GUEST_UCALL_FAILED);
 	return NULL;
 }
···
 	addr = ucall_arch_get_ucall(vcpu);
 	if (addr) {
+		TEST_ASSERT(addr != (void *)GUEST_UCALL_FAILED,
+			    "Guest failed to allocate ucall struct");
+
 		memcpy(uc, addr, sizeof(*uc));
 		vcpu_run_complete_io(vcpu);
 	} else {
···
 static void l2_guest_code_int(void)
 {
 	GUEST_ASSERT_1(int_fired == 1, int_fired);
-	vmmcall();
-	ud2();
+
+	/*
+	 * Same as the vmmcall() function, but with a ud2 sneaked after the
+	 * vmmcall. The caller injects an exception with the return address
+	 * increased by 2, so the "pop rbp" must be after the ud2 and we cannot
+	 * use vmmcall() directly.
+	 */
+	__asm__ __volatile__("push %%rbp; vmmcall; ud2; pop %%rbp"
+			     : : "a"(0xdeadbeef), "c"(0xbeefdead)
+			     : "rbx", "rdx", "rsi", "rdi", "r8", "r9",
+			       "r10", "r11", "r12", "r13", "r14", "r15");

 	GUEST_ASSERT_1(bp_fired == 1, bp_fired);
 	hlt();
···
 	long started = 0, completed = 0, next_reset = reset_n;
 	long completed_before, started_before;
 	int r, test = 1;
-	unsigned len;
+	unsigned int len;
 	long long spurious = 0;
 	const bool random_batch = batch == RANDOM_BATCH;