CREDITS
···
 N: Helge Deller
 E: deller@gmx.de
-E: hdeller@redhat.de
-D: PA-RISC Linux hacker, LASI-, ASP-, WAX-, LCD/LED-driver
-S: Schimmelsrain 1
-S: D-69231 Rauenberg
+W: http://www.parisc-linux.org/
+D: PA-RISC Linux architecture maintainer
+D: LASI-, ASP-, WAX-, LCD/LED-driver
 S: Germany

 N: Jean Delvare
···
 S: South Africa

 N: Grant Grundler
-E: grundler@parisc-linux.org
+E: grantgrundler@gmail.com
 W: http://obmouse.sourceforge.net/
 W: http://www.parisc-linux.org/
 D: obmouse - rewrote Olivier Florent's Omnibook 600 "pop-up" mouse driver
···
 S: USA

 N: Kyle McMartin
-E: kyle@parisc-linux.org
+E: kyle@mcmartin.ca
 D: Linux/PARISC hacker
 D: AD1889 sound driver
 S: Ottawa, Canada
···
 S: Cupertino, CA 95014
 S: USA

-N: Thibaut Varene
-E: T-Bone@parisc-linux.org
-W: http://www.parisc-linux.org/~varenet/
-P: 1024D/B7D2F063 E67C 0D43 A75E 12A5 BB1C FA2F 1E32 C3DA B7D2 F063
+N: Thibaut Varène
+E: hacks+kernel@slashdirt.org
+W: http://hacks.slashdirt.org/
 D: PA-RISC port minion, PDC and GSCPS2 drivers, debuglocks and other bits
 D: Some ARM at91rm9200 bits, S1D13XXX FB driver, random patches here and there
 D: AD1889 sound driver
-S: Paris, France
+S: France

 N: Heikki Vatiainen
 E: hessu@cs.tut.fi
+16-16
Documentation/admin-guide/README.rst
···
 .. _readme:

-Linux kernel release 4.x <http://kernel.org/>
+Linux kernel release 5.x <http://kernel.org/>
 =============================================

-These are the release notes for Linux version 4.  Read them carefully,
+These are the release notes for Linux version 5.  Read them carefully,
 as they tell you what this is all about, explain how to install the
 kernel, and what to do if something goes wrong.
···
   directory where you have permissions (e.g. your home directory) and
   unpack it::

-     xz -cd linux-4.X.tar.xz | tar xvf -
+     xz -cd linux-5.x.tar.xz | tar xvf -

   Replace "X" with the version number of the latest kernel.
···
   files.  They should match the library, and not get messed up by
   whatever the kernel-du-jour happens to be.

- - You can also upgrade between 4.x releases by patching.  Patches are
+ - You can also upgrade between 5.x releases by patching.  Patches are
   distributed in the xz format.  To install by patching, get all the
   newer patch files, enter the top level directory of the kernel source
-  (linux-4.X) and execute::
+  (linux-5.x) and execute::

-     xz -cd ../patch-4.x.xz | patch -p1
+     xz -cd ../patch-5.x.xz | patch -p1

-  Replace "x" for all versions bigger than the version "X" of your current
+  Replace "x" for all versions bigger than the version "x" of your current
   source tree, **in_order**, and you should be ok.  You may want to remove
   the backup files (some-file-name~ or some-file-name.orig), and make sure
   that there are no failed patches (some-file-name# or some-file-name.rej).
   If there are, either you or I have made a mistake.

-  Unlike patches for the 4.x kernels, patches for the 4.x.y kernels
+  Unlike patches for the 5.x kernels, patches for the 5.x.y kernels
   (also known as the -stable kernels) are not incremental but instead apply
-  directly to the base 4.x kernel.  For example, if your base kernel is 4.0
-  and you want to apply the 4.0.3 patch, you must not first apply the 4.0.1
-  and 4.0.2 patches. Similarly, if you are running kernel version 4.0.2 and
-  want to jump to 4.0.3, you must first reverse the 4.0.2 patch (that is,
-  patch -R) **before** applying the 4.0.3 patch. You can read more on this in
+  directly to the base 5.x kernel.  For example, if your base kernel is 5.0
+  and you want to apply the 5.0.3 patch, you must not first apply the 5.0.1
+  and 5.0.2 patches. Similarly, if you are running kernel version 5.0.2 and
+  want to jump to 5.0.3, you must first reverse the 5.0.2 patch (that is,
+  patch -R) **before** applying the 5.0.3 patch. You can read more on this in
   :ref:`Documentation/process/applying-patches.rst <applying_patches>`.

   Alternatively, the script patch-kernel can be used to automate this
···
 Software requirements
 ---------------------

-   Compiling and running the 4.x kernels requires up-to-date
+   Compiling and running the 5.x kernels requires up-to-date
    versions of various software packages.  Consult
    :ref:`Documentation/process/changes.rst <changes>` for the minimum version numbers
    required and how to get updates for these packages.  Beware that using
···
    place for the output files (including .config).
    Example::

-     kernel source code: /usr/src/linux-4.X
+     kernel source code: /usr/src/linux-5.x
      build directory:    /home/name/build/kernel

    To configure and build the kernel, use::

-     cd /usr/src/linux-4.X
+     cd /usr/src/linux-5.x
      make O=/home/name/build/kernel menuconfig
      make O=/home/name/build/kernel
      sudo make O=/home/name/build/kernel modules_install install
+3-7
Documentation/networking/dsa/dsa.txt
···
  function that the driver has to call for each VLAN the given port is a member
  of. A switchdev object is used to carry the VID and bridge flags.

-- port_fdb_prepare: bridge layer function invoked when the bridge prepares the
-  installation of a Forwarding Database entry. If the operation is not
-  supported, this function should return -EOPNOTSUPP to inform the bridge code
-  to fallback to a software implementation. No hardware setup must be done in
-  this function. See port_fdb_add for this and details.
-
 - port_fdb_add: bridge layer function invoked when the bridge wants to install a
   Forwarding Database entry, the switch hardware should be programmed with the
   specified address in the specified VLAN Id in the forwarding database
-  associated with this VLAN ID
+  associated with this VLAN ID. If the operation is not supported, this
+  function should return -EOPNOTSUPP to inform the bridge code to fallback to
+  a software implementation.

 Note: VLAN ID 0 corresponds to the port private database, which, in the context
 of DSA, would be the its port-based VLAN, used by the associated bridge device.
+1-1
Documentation/networking/msg_zerocopy.rst
···
 =====

 The MSG_ZEROCOPY flag enables copy avoidance for socket send calls.
-The feature is currently implemented for TCP sockets.
+The feature is currently implemented for TCP and UDP sockets.


 Opportunity and Caveats
+5-5
Documentation/networking/switchdev.txt
···
 Switch ID
 ^^^^^^^^^

-The switchdev driver must implement the switchdev op switchdev_port_attr_get
-for SWITCHDEV_ATTR_ID_PORT_PARENT_ID for each port netdev, returning the same
-physical ID for each port of a switch. The ID must be unique between switches
-on the same system. The ID does not need to be unique between switches on
-different systems.
+The switchdev driver must implement the net_device operation
+ndo_get_port_parent_id for each port netdev, returning the same physical ID for
+each port of a switch. The ID must be unique between switches on the same
+system. The ID does not need to be unique between switches on different
+systems.

 The switch ID is used to locate ports on a switch and to know if aggregated
 ports belong to the same switch.
+61-56
Documentation/process/applying-patches.rst
···
 generate a patch representing the differences between two patches and then
 apply the result.

-This will let you move from something like 4.7.2 to 4.7.3 in a single
+This will let you move from something like 5.7.2 to 5.7.3 in a single
 step. The -z flag to interdiff will even let you feed it patches in gzip or
 bzip2 compressed form directly without the use of zcat or bzcat or manual
 decompression.

-Here's how you'd go from 4.7.2 to 4.7.3 in a single step::
+Here's how you'd go from 5.7.2 to 5.7.3 in a single step::

-	interdiff -z ../patch-4.7.2.gz ../patch-4.7.3.gz | patch -p1
+	interdiff -z ../patch-5.7.2.gz ../patch-5.7.3.gz | patch -p1

 Although interdiff may save you a step or two you are generally advised to
 do the additional steps since interdiff can get things wrong in some cases.
···
 Most recent patches are linked from the front page, but they also have
 specific homes.

-The 4.x.y (-stable) and 4.x patches live at
+The 5.x.y (-stable) and 5.x patches live at

-	https://www.kernel.org/pub/linux/kernel/v4.x/
+	https://www.kernel.org/pub/linux/kernel/v5.x/

-The -rc patches live at
+The -rc patches are not stored on the webserver but are generated on
+demand from git tags such as

-	https://www.kernel.org/pub/linux/kernel/v4.x/testing/
+	https://git.kernel.org/torvalds/p/v5.1-rc1/v5.0
+
+The stable -rc patches live at
+
+	https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/


-The 4.x kernels
+The 5.x kernels
 ===============

 These are the base stable releases released by Linus. The highest numbered
 release is the most recent.

 If regressions or other serious flaws are found, then a -stable fix patch
-will be released (see below) on top of this base. Once a new 4.x base
+will be released (see below) on top of this base. Once a new 5.x base
 kernel is released, a patch is made available that is a delta between the
-previous 4.x kernel and the new one.
+previous 5.x kernel and the new one.

-To apply a patch moving from 4.6 to 4.7, you'd do the following (note
-that such patches do **NOT** apply on top of 4.x.y kernels but on top of the
-base 4.x kernel -- if you need to move from 4.x.y to 4.x+1 you need to
-first revert the 4.x.y patch).
+To apply a patch moving from 5.6 to 5.7, you'd do the following (note
+that such patches do **NOT** apply on top of 5.x.y kernels but on top of the
+base 5.x kernel -- if you need to move from 5.x.y to 5.x+1 you need to
+first revert the 5.x.y patch).

 Here are some examples::

-	# moving from 4.6 to 4.7
+	# moving from 5.6 to 5.7

-	$ cd ~/linux-4.6			# change to kernel source dir
-	$ patch -p1 < ../patch-4.7		# apply the 4.7 patch
+	$ cd ~/linux-5.6			# change to kernel source dir
+	$ patch -p1 < ../patch-5.7		# apply the 5.7 patch
 	$ cd ..
-	$ mv linux-4.6 linux-4.7		# rename source dir
+	$ mv linux-5.6 linux-5.7		# rename source dir

-	# moving from 4.6.1 to 4.7
+	# moving from 5.6.1 to 5.7

-	$ cd ~/linux-4.6.1			# change to kernel source dir
-	$ patch -p1 -R < ../patch-4.6.1		# revert the 4.6.1 patch
-						# source dir is now 4.6
-	$ patch -p1 < ../patch-4.7		# apply new 4.7 patch
+	$ cd ~/linux-5.6.1			# change to kernel source dir
+	$ patch -p1 -R < ../patch-5.6.1		# revert the 5.6.1 patch
+						# source dir is now 5.6
+	$ patch -p1 < ../patch-5.7		# apply new 5.7 patch
 	$ cd ..
-	$ mv linux-4.6.1 linux-4.7		# rename source dir
+	$ mv linux-5.6.1 linux-5.7		# rename source dir


-The 4.x.y kernels
+The 5.x.y kernels
 =================

 Kernels with 3-digit versions are -stable kernels. They contain small(ish)
 critical fixes for security problems or significant regressions discovered
-in a given 4.x kernel.
+in a given 5.x kernel.

 This is the recommended branch for users who want the most recent stable
 kernel and are not interested in helping test development/experimental
 versions.

-If no 4.x.y kernel is available, then the highest numbered 4.x kernel is
+If no 5.x.y kernel is available, then the highest numbered 5.x kernel is
 the current stable kernel.

 .. note::
···
  The -stable team usually do make incremental patches available as well
  as patches against the latest mainline release, but I only cover the
  non-incremental ones below. The incremental ones can be found at
-	https://www.kernel.org/pub/linux/kernel/v4.x/incr/
+	https://www.kernel.org/pub/linux/kernel/v5.x/incr/

-These patches are not incremental, meaning that for example the 4.7.3
-patch does not apply on top of the 4.7.2 kernel source, but rather on top
-of the base 4.7 kernel source.
+These patches are not incremental, meaning that for example the 5.7.3
+patch does not apply on top of the 5.7.2 kernel source, but rather on top
+of the base 5.7 kernel source.

-So, in order to apply the 4.7.3 patch to your existing 4.7.2 kernel
-source you have to first back out the 4.7.2 patch (so you are left with a
-base 4.7 kernel source) and then apply the new 4.7.3 patch.
+So, in order to apply the 5.7.3 patch to your existing 5.7.2 kernel
+source you have to first back out the 5.7.2 patch (so you are left with a
+base 5.7 kernel source) and then apply the new 5.7.3 patch.

 Here's a small example::

-	$ cd ~/linux-4.7.2			# change to the kernel source dir
-	$ patch -p1 -R < ../patch-4.7.2		# revert the 4.7.2 patch
-	$ patch -p1 < ../patch-4.7.3		# apply the new 4.7.3 patch
+	$ cd ~/linux-5.7.2			# change to the kernel source dir
+	$ patch -p1 -R < ../patch-5.7.2		# revert the 5.7.2 patch
+	$ patch -p1 < ../patch-5.7.3		# apply the new 5.7.3 patch
 	$ cd ..
-	$ mv linux-4.7.2 linux-4.7.3		# rename the kernel source dir
+	$ mv linux-5.7.2 linux-5.7.3		# rename the kernel source dir

 The -rc kernels
 ===============
···
 development kernels but do not want to run some of the really experimental
 stuff (such people should see the sections about -next and -mm kernels below).

-The -rc patches are not incremental, they apply to a base 4.x kernel, just
-like the 4.x.y patches described above. The kernel version before the -rcN
+The -rc patches are not incremental, they apply to a base 5.x kernel, just
+like the 5.x.y patches described above. The kernel version before the -rcN
 suffix denotes the version of the kernel that this -rc kernel will eventually
 turn into.

-So, 4.8-rc5 means that this is the fifth release candidate for the 4.8
-kernel and the patch should be applied on top of the 4.7 kernel source.
+So, 5.8-rc5 means that this is the fifth release candidate for the 5.8
+kernel and the patch should be applied on top of the 5.7 kernel source.

 Here are 3 examples of how to apply these patches::

-	# first an example of moving from 4.7 to 4.8-rc3
+	# first an example of moving from 5.7 to 5.8-rc3

-	$ cd ~/linux-4.7			# change to the 4.7 source dir
-	$ patch -p1 < ../patch-4.8-rc3		# apply the 4.8-rc3 patch
+	$ cd ~/linux-5.7			# change to the 5.7 source dir
+	$ patch -p1 < ../patch-5.8-rc3		# apply the 5.8-rc3 patch
 	$ cd ..
-	$ mv linux-4.7 linux-4.8-rc3		# rename the source dir
+	$ mv linux-5.7 linux-5.8-rc3		# rename the source dir

-	# now let's move from 4.8-rc3 to 4.8-rc5
+	# now let's move from 5.8-rc3 to 5.8-rc5

-	$ cd ~/linux-4.8-rc3			# change to the 4.8-rc3 dir
-	$ patch -p1 -R < ../patch-4.8-rc3	# revert the 4.8-rc3 patch
-	$ patch -p1 < ../patch-4.8-rc5		# apply the new 4.8-rc5 patch
+	$ cd ~/linux-5.8-rc3			# change to the 5.8-rc3 dir
+	$ patch -p1 -R < ../patch-5.8-rc3	# revert the 5.8-rc3 patch
+	$ patch -p1 < ../patch-5.8-rc5		# apply the new 5.8-rc5 patch
 	$ cd ..
-	$ mv linux-4.8-rc3 linux-4.8-rc5	# rename the source dir
+	$ mv linux-5.8-rc3 linux-5.8-rc5	# rename the source dir

-	# finally let's try and move from 4.7.3 to 4.8-rc5
+	# finally let's try and move from 5.7.3 to 5.8-rc5

-	$ cd ~/linux-4.7.3			# change to the kernel source dir
-	$ patch -p1 -R < ../patch-4.7.3		# revert the 4.7.3 patch
-	$ patch -p1 < ../patch-4.8-rc5		# apply new 4.8-rc5 patch
+	$ cd ~/linux-5.7.3			# change to the kernel source dir
+	$ patch -p1 -R < ../patch-5.7.3		# revert the 5.7.3 patch
+	$ patch -p1 < ../patch-5.8-rc5		# apply new 5.8-rc5 patch
 	$ cd ..
-	$ mv linux-4.7.3 linux-4.8-rc5		# rename the kernel source dir
+	$ mv linux-5.7.3 linux-5.8-rc5		# rename the kernel source dir


 The -mm patches and the linux-next tree
Documentation/translations/it_IT/admin-guide/README.rst
···
 .. _it_readme:

-Rilascio del kernel Linux 4.x <http://kernel.org/>
+Rilascio del kernel Linux 5.x <http://kernel.org/>
 ===================================================

 .. warning::
+16-6
MAINTAINERS
···
 F:	include/uapi/linux/wmi.h

 AD1889 ALSA SOUND DRIVER
-M:	Thibaut Varene <T-Bone@parisc-linux.org>
-W:	http://wiki.parisc-linux.org/AD1889
+W:	https://parisc.wiki.kernel.org/index.php/AD1889
 L:	linux-parisc@vger.kernel.org
 S:	Maintained
 F:	sound/pci/ad1889.*
···
 R:	Song Liu <songliubraving@fb.com>
 R:	Yonghong Song <yhs@fb.com>
 L:	netdev@vger.kernel.org
-L:	linux-kernel@vger.kernel.org
+L:	bpf@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
 Q:	https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147
···
 BPF JIT for ARM
 M:	Shubham Bansal <illusionist.neo@gmail.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/arm/net/
···
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Zi Shen Lim <zlim.lnx@gmail.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/arm64/net/

 BPF JIT for MIPS (32-BIT AND 64-BIT)
 M:	Paul Burton <paul.burton@mips.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/mips/net/

 BPF JIT for NFP NICs
 M:	Jakub Kicinski <jakub.kicinski@netronome.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/netronome/nfp/bpf/
···
 M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
 M:	Sandipan Das <sandipan@linux.ibm.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/powerpc/net/
···
 M:	Martin Schwidefsky <schwidefsky@de.ibm.com>
 M:	Heiko Carstens <heiko.carstens@de.ibm.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/s390/net/
 X:	arch/s390/net/pnet.c
···
 BPF JIT for SPARC (32-BIT AND 64-BIT)
 M:	David S. Miller <davem@davemloft.net>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/sparc/net/

 BPF JIT for X86 32-BIT
 M:	Wang YanQing <udknight@gmail.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/x86/net/bpf_jit_comp32.c
···
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Daniel Borkmann <daniel@iogearbox.net>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/x86/net/
 X:	arch/x86/net/bpf_jit_comp32.c
···
 F:	drivers/media/platform/marvell-ccic/

 CAIF NETWORK LAYER
-M:	Dmitry Tarnyagin <dmitry.tarnyagin@lockless.no>
 L:	netdev@vger.kernel.org
-S:	Supported
+S:	Orphan
 F:	Documentation/networking/caif/
 F:	drivers/net/caif/
 F:	include/uapi/linux/caif/
···
 M:	John Fastabend <john.fastabend@gmail.com>
 M:	Daniel Borkmann <daniel@iogearbox.net>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	include/linux/skmsg.h
 F:	net/core/skmsg.c
···
 F:	drivers/block/paride/

 PARISC ARCHITECTURE
-M:	"James E.J. Bottomley" <jejb@parisc-linux.org>
+M:	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
 M:	Helge Deller <deller@gmx.de>
 L:	linux-parisc@vger.kernel.org
 W:	http://www.parisc-linux.org/
···
 M:	John Fastabend <john.fastabend@gmail.com>
 L:	netdev@vger.kernel.org
 L:	xdp-newbies@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Supported
 F:	net/core/xdp.c
 F:	include/net/xdp.h
···
 M:	Björn Töpel <bjorn.topel@intel.com>
 M:	Magnus Karlsson <magnus.karlsson@intel.com>
 L:	netdev@vger.kernel.org
+L:	bpf@vger.kernel.org
 S:	Maintained
 F:	kernel/bpf/xskmap.c
 F:	net/xdp/
arch/arc/Kconfig
···
 config ARC_SMP_HALT_ON_RESET
 	bool "Enable Halt-on-reset boot mode"
-	default y if ARC_UBOOT_SUPPORT
 	help
 	  In SMP configuration cores can be configured as Halt-on-reset
 	  or they could all start at same time. For Halt-on-reset, non
···
 	  (also referred to as r58:r59). These can also be used by gcc as GPR so
 	  kernel needs to save/restore per process

+config ARC_IRQ_NO_AUTOSAVE
+	bool "Disable hardware autosave regfile on interrupts"
+	default n
+	help
+	  On HS cores, taken interrupt auto saves the regfile on stack.
+	  This is programmable and can be optionally disabled in which case
+	  software INTERRUPT_PROLOGUE/EPILGUE do the needed work
+
 endif	# ISA_ARCV2

 endmenu	  # "ARC CPU Configuration"
···
 	bool "Paranoia Checks in Low Level TLB Handlers"

 endif
-
-config ARC_UBOOT_SUPPORT
-	bool "Support uboot arg Handling"
-	help
-	  ARC Linux by default checks for uboot provided args as pointers to
-	  external cmdline or DTB. This however breaks in absence of uboot,
-	  when booting from Metaware debugger directly, as the registers are
-	  not zeroed out on reset by mdb and/or ARCv2 based cores. The bogus
-	  registers look like uboot args to kernel which then chokes.
-	  So only enable the uboot arg checking/processing if users are sure
-	  of uboot being in play.

 config ARC_BUILTIN_DTB_NAME
 	string "Built in DTB"
-1
arch/arc/configs/nps_defconfig
···
 # CONFIG_ARC_HAS_LLSC is not set
 CONFIG_ARC_KVADDR_SIZE=402
 CONFIG_ARC_EMUL_UNALIGNED=y
-CONFIG_ARC_UBOOT_SUPPORT=y
 CONFIG_PREEMPT=y
 CONFIG_NET=y
 CONFIG_UNIX=y
arch/arc/configs/vdk_hs38_smp_defconfig
···
 CONFIG_ISA_ARCV2=y
 CONFIG_SMP=y
 # CONFIG_ARC_TIMERS_64BIT is not set
-# CONFIG_ARC_SMP_HALT_ON_RESET is not set
-CONFIG_ARC_UBOOT_SUPPORT=y
 CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38_smp"
 CONFIG_PREEMPT=y
 CONFIG_NET=y
+8
arch/arc/include/asm/arcregs.h
···
 #endif
 };

+struct bcr_uarch_build_arcv2 {
+#ifdef CONFIG_CPU_BIG_ENDIAN
+	unsigned int pad:8, prod:8, maj:8, min:8;
+#else
+	unsigned int min:8, maj:8, prod:8, pad:8;
+#endif
+};
+
 struct bcr_mpy {
 #ifdef CONFIG_CPU_BIG_ENDIAN
 	unsigned int pad:8, x1616:8, dsp:4, cycles:2, type:2, ver:8;
+11
arch/arc/include/asm/cache.h
···
 #define cache_line_size()	SMP_CACHE_BYTES
 #define ARCH_DMA_MINALIGN	SMP_CACHE_BYTES

+/*
+ * Make sure slab-allocated buffers are 64-bit aligned when atomic64_t uses
+ * ARCv2 64-bit atomics (LLOCKD/SCONDD). This guarantess runtime 64-bit
+ * alignment for any atomic64_t embedded in buffer.
+ * Default ARCH_SLAB_MINALIGN is __alignof__(long long) which has a relaxed
+ * value of 4 (and not 8) in ARC ABI.
+ */
+#if defined(CONFIG_ARC_HAS_LL64) && defined(CONFIG_ARC_HAS_LLSC)
+#define ARCH_SLAB_MINALIGN	8
+#endif
+
 extern void arc_cache_init(void);
 extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len);
 extern void read_decode_cache_bcr(void);
+54
arch/arc/include/asm/entry-arcv2.h
···
 	;
 	; Now manually save: r12, sp, fp, gp, r25

+#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
+.ifnc \called_from, exception
+	st.as	r9, [sp, -10]	; save r9 in it's final stack slot
+	sub	sp, sp, 12	; skip JLI, LDI, EI
+
+	PUSH	lp_count
+	PUSHAX	lp_start
+	PUSHAX	lp_end
+	PUSH	blink
+
+	PUSH	r11
+	PUSH	r10
+
+	sub	sp, sp, 4	; skip r9
+
+	PUSH	r8
+	PUSH	r7
+	PUSH	r6
+	PUSH	r5
+	PUSH	r4
+	PUSH	r3
+	PUSH	r2
+	PUSH	r1
+	PUSH	r0
+.endif
+#endif
+
 #ifdef CONFIG_ARC_HAS_ACCL_REGS
 	PUSH	r59
 	PUSH	r58
···
 #ifdef CONFIG_ARC_HAS_ACCL_REGS
 	POP	r58
 	POP	r59
+#endif
+
+#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
+.ifnc \called_from, exception
+	POP	r0
+	POP	r1
+	POP	r2
+	POP	r3
+	POP	r4
+	POP	r5
+	POP	r6
+	POP	r7
+	POP	r8
+	POP	r9
+	POP	r10
+	POP	r11
+
+	POP	blink
+	POPAX	lp_end
+	POPAX	lp_start
+
+	POP	r9
+	mov	lp_count, r9
+
+	add	sp, sp, 12	; skip JLI, LDI, EI
+	ld.as	r9, [sp, -10]	; reload r9 which got clobbered
+.endif
 #endif

 .endm
···
 ;####### Return from Intr #######

 debug_marker_l1:
-	bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot
+	; bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot
+	btst	r0, STATUS_DE_BIT		; Z flag set if bit clear
+	bnz	.Lintr_ret_to_delay_slot	; branch if STATUS_DE_BIT set

 .Lisr_ret_fast_path:
 	; Handle special case #1: (Entry via Exception, Return via IRQ)
+12-4
arch/arc/kernel/head.S
···
 #include <asm/entry.h>
 #include <asm/arcregs.h>
 #include <asm/cache.h>
+#include <asm/irqflags.h>

 .macro CPU_EARLY_SETUP
···
 	sr	r5, [ARC_REG_DC_CTRL]

 1:
+
+#ifdef CONFIG_ISA_ARCV2
+	; Unaligned access is disabled at reset, so re-enable early as
+	; gcc 7.3.1 (ARC GNU 2018.03) onwards generates unaligned access
+	; by default
+	lr	r5, [status32]
+	bset	r5, r5, STATUS_AD_BIT
+	kflag	r5
+#endif
 .endm

 	.section .init.text, "ax",@progbits
···
 	st.ab   0, [r5, 4]
 1:

-#ifdef CONFIG_ARC_UBOOT_SUPPORT
 	; Uboot - kernel ABI
 	;    r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2
-	;    r1 = magic number (board identity, unused as of now
+	;    r1 = magic number (always zero as of now)
 	;    r2 = pointer to uboot provided cmdline or external DTB in mem
-	; These are handled later in setup_arch()
+	; These are handled later in handle_uboot_args()
 	st	r0, [@uboot_tag]
 	st	r2, [@uboot_arg]
-#endif

 	; setup "current" tsk and optionally cache it in dedicated r25
 	mov	r9, @init_task
+2
arch/arc/kernel/intc-arcv2.c
···

 	*(unsigned int *)&ictrl = 0;

+#ifndef CONFIG_ARC_IRQ_NO_AUTOSAVE
 	ictrl.save_nr_gpr_pairs = 6;	/* r0 to r11 (r12 saved manually) */
 	ictrl.save_blink = 1;
 	ictrl.save_lp_regs = 1;		/* LP_COUNT, LP_START, LP_END */
 	ictrl.save_u_to_u = 0;		/* user ctxt saved on kernel stack */
 	ictrl.save_idx_regs = 1;	/* JLI, LDI, EI */
+#endif

 	WRITE_AUX(AUX_IRQ_CTRL, ictrl);
+89-38
arch/arc/kernel/setup.c
···
 	cpu->bpu.ret_stk = 4 << bpu.rse;

 	if (cpu->core.family >= 0x54) {
-		unsigned int exec_ctrl;

-		READ_BCR(AUX_EXEC_CTRL, exec_ctrl);
-		cpu->extn.dual_enb = !(exec_ctrl & 1);
+		struct bcr_uarch_build_arcv2 uarch;

-		/* dual issue always present for this core */
-		cpu->extn.dual = 1;
+		/*
+		 * The first 0x54 core (uarch maj:min 0:1 or 0:2) was
+		 * dual issue only (HS4x). But next uarch rev (1:0)
+		 * allows it be configured for single issue (HS3x)
+		 * Ensure we fiddle with dual issue only on HS4x
+		 */
+		READ_BCR(ARC_REG_MICRO_ARCH_BCR, uarch);
+
+		if (uarch.prod == 4) {
+			unsigned int exec_ctrl;
+
+			/* dual issue hardware always present */
+			cpu->extn.dual = 1;
+
+			READ_BCR(AUX_EXEC_CTRL, exec_ctrl);
+
+			/* dual issue hardware enabled ? */
+			cpu->extn.dual_enb = !(exec_ctrl & 1);
+
+		}
 	}
 }

 	READ_BCR(ARC_REG_AP_BCR, ap);
 	if (ap.ver) {
 		cpu->extn.ap_num = 2 << ap.num;
-		cpu->extn.ap_full = !!ap.min;
+		cpu->extn.ap_full = !ap.min;
 	}

 	READ_BCR(ARC_REG_SMART_BCR, bcr);
···
 	arc_chk_core_config();
 }

-static inline int is_kernel(unsigned long addr)
+static inline bool uboot_arg_invalid(unsigned long addr)
 {
-	if (addr >= (unsigned long)_stext && addr <= (unsigned long)_end)
-		return 1;
-	return 0;
+	/*
+	 * Check that it is a untranslated address (although MMU is not enabled
+	 * yet, it being a high address ensures this is not by fluke)
+	 */
+	if (addr < PAGE_OFFSET)
+		return true;
+
+	/* Check that address doesn't clobber resident kernel image */
+	return addr >= (unsigned long)_stext && addr <= (unsigned long)_end;
+}
+
+#define IGNORE_ARGS		"Ignore U-boot args: "
+
+/* uboot_tag values for U-boot - kernel ABI revision 0; see head.S */
+#define UBOOT_TAG_NONE		0
+#define UBOOT_TAG_CMDLINE	1
+#define UBOOT_TAG_DTB		2
+
+void __init handle_uboot_args(void)
+{
+	bool use_embedded_dtb = true;
+	bool append_cmdline = false;
+
+	/* check that we know this tag */
+	if (uboot_tag != UBOOT_TAG_NONE &&
+	    uboot_tag != UBOOT_TAG_CMDLINE &&
+	    uboot_tag != UBOOT_TAG_DTB) {
+		pr_warn(IGNORE_ARGS "invalid uboot tag: '%08x'\n", uboot_tag);
+		goto ignore_uboot_args;
+	}
+
+	if (uboot_tag != UBOOT_TAG_NONE &&
+	    uboot_arg_invalid((unsigned long)uboot_arg)) {
+		pr_warn(IGNORE_ARGS "invalid uboot arg: '%px'\n", uboot_arg);
+		goto ignore_uboot_args;
+	}
+
+	/* see if U-boot passed an external Device Tree blob */
+	if (uboot_tag == UBOOT_TAG_DTB) {
+		machine_desc = setup_machine_fdt((void *)uboot_arg);
+
+		/* external Device Tree blob is invalid - use embedded one */
+		use_embedded_dtb = !machine_desc;
+	}
+
+	if (uboot_tag == UBOOT_TAG_CMDLINE)
+		append_cmdline = true;
+
+ignore_uboot_args:
+
+	if (use_embedded_dtb) {
+		machine_desc = setup_machine_fdt(__dtb_start);
+		if (!machine_desc)
+			panic("Embedded DT invalid\n");
+	}
+
+	/*
+	 * NOTE: @boot_command_line is populated by setup_machine_fdt() so this
+	 * append processing can only happen after.
+	 */
+	if (append_cmdline) {
+		/* Ensure a whitespace between the 2 cmdlines */
+		strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
+		strlcat(boot_command_line, uboot_arg, COMMAND_LINE_SIZE);
+	}
 }

 void __init setup_arch(char **cmdline_p)
 {
-#ifdef CONFIG_ARC_UBOOT_SUPPORT
-	/* make sure that uboot passed pointer to cmdline/dtb is valid */
-	if (uboot_tag && is_kernel((unsigned long)uboot_arg))
-		panic("Invalid uboot arg\n");
-
-	/* See if u-boot passed an external Device Tree blob */
-	machine_desc = setup_machine_fdt(uboot_arg);	/* uboot_tag == 2 */
-	if (!machine_desc)
-#endif
-	{
-		/* No, so try the embedded one */
-		machine_desc = setup_machine_fdt(__dtb_start);
-		if (!machine_desc)
-			panic("Embedded DT invalid\n");
-
-		/*
-		 * If we are here, it is established that @uboot_arg didn't
-		 * point to DT blob. Instead if u-boot says it is cmdline,
-		 * append to embedded DT cmdline.
-		 * setup_machine_fdt() would have populated @boot_command_line
-		 */
-		if (uboot_tag == 1) {
-			/* Ensure a whitespace between the 2 cmdlines */
-			strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
-			strlcat(boot_command_line, uboot_arg,
-				COMMAND_LINE_SIZE);
-		}
-	}
+	handle_uboot_args();

 	/* Save unparsed command line copy for /proc/cmdline */
 	*cmdline_p = boot_command_line;
···
 	bool "ARC HS Development Kit SOC"
 	depends on ISA_ARCV2
 	select ARC_HAS_ACCL_REGS
+	select ARC_IRQ_NO_AUTOSAVE
 	select CLK_HSDK
 	select RESET_HSDK
 	select HAVE_PCI
+1
arch/arm/Kconfig
···
 config HOTPLUG_CPU
 	bool "Support for hot-pluggable CPUs"
 	depends on SMP
+	select GENERIC_IRQ_MIGRATION
 	help
 	  Say Y here to experiment with turning CPUs off and on.  CPUs
 	  can be controlled through /sys/devices/system/cpu.
···
 		stdout-path = "serial0:115200n8";
 	};
 
-	memory@80000000 {
+	/*
+	 * Note that recent version of the device tree compiler (starting with
+	 * version 1.4.2) warn about this node containing a reg property, but
+	 * missing a unit-address. However, the bootloader on these Chromebook
+	 * devices relies on the full name of this node to be exactly /memory.
+	 * Adding the unit-address causes the bootloader to create a /memory
+	 * node and write the memory bank configuration to that node, which in
+	 * turn leads the kernel to believe that the device has 2 GiB of
+	 * memory instead of the amount detected by the bootloader.
+	 *
+	 * The name of this node is effectively ABI and must not be changed.
+	 */
+	memory {
+		device_type = "memory";
 		reg = <0x0 0x80000000 0x0 0x80000000>;
 	};
+
+	/delete-node/ memory@80000000;
 
 	host1x@50000000 {
 		hdmi@54280000 {
···
 #include <linux/smp.h>
 #include <linux/init.h>
 #include <linux/seq_file.h>
-#include <linux/ratelimit.h>
 #include <linux/errno.h>
 #include <linux/list.h>
 #include <linux/kallsyms.h>
···
 	return nr_irqs;
 }
 #endif
-
-#ifdef CONFIG_HOTPLUG_CPU
-static bool migrate_one_irq(struct irq_desc *desc)
-{
-	struct irq_data *d = irq_desc_get_irq_data(desc);
-	const struct cpumask *affinity = irq_data_get_affinity_mask(d);
-	struct irq_chip *c;
-	bool ret = false;
-
-	/*
-	 * If this is a per-CPU interrupt, or the affinity does not
-	 * include this CPU, then we have nothing to do.
-	 */
-	if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
-		return false;
-
-	if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
-		affinity = cpu_online_mask;
-		ret = true;
-	}
-
-	c = irq_data_get_irq_chip(d);
-	if (!c->irq_set_affinity)
-		pr_debug("IRQ%u: unable to set affinity\n", d->irq);
-	else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret)
-		cpumask_copy(irq_data_get_affinity_mask(d), affinity);
-
-	return ret;
-}
-
-/*
- * The current CPU has been marked offline.  Migrate IRQs off this CPU.
- * If the affinity settings do not allow other CPUs, force them onto any
- * available CPU.
- *
- * Note: we must iterate over all IRQs, whether they have an attached
- * action structure or not, as we need to get chained interrupts too.
- */
-void migrate_irqs(void)
-{
-	unsigned int i;
-	struct irq_desc *desc;
-	unsigned long flags;
-
-	local_irq_save(flags);
-
-	for_each_irq_desc(i, desc) {
-		bool affinity_broken;
-
-		raw_spin_lock(&desc->lock);
-		affinity_broken = migrate_one_irq(desc);
-		raw_spin_unlock(&desc->lock);
-
-		if (affinity_broken)
-			pr_warn_ratelimited("IRQ%u no longer affine to CPU%u\n",
-				i, smp_processor_id());
-	}
-
-	local_irq_restore(flags);
-}
-#endif /* CONFIG_HOTPLUG_CPU */
+1-1
arch/arm/kernel/smp.c
···
 	/*
 	 * OK - migrate IRQs away from this CPU
 	 */
-	migrate_irqs();
+	irq_migrate_all_off_this_cpu();
 
 	/*
 	 * Flush user cache and TLB mappings, and then remove this CPU
+2
arch/arm/mm/dma-mapping.c
···
 		return;
 
 	arm_teardown_iommu_dma_ops(dev);
+	/* Let arch_setup_dma_ops() start again from scratch upon re-probe */
+	set_dma_ops(dev, NULL);
 }
+1-1
arch/arm/probes/kprobes/opt-arm.c
···
 	}
 
 	/* Copy arch-dep-instance from template. */
-	memcpy(code, (unsigned char *)optprobe_template_entry,
+	memcpy(code, (unsigned long *)&optprobe_template_entry,
 	       TMPL_END_IDX * sizeof(kprobe_opcode_t));
 
 	/* Adjust buffer according to instruction. */
···
 }
 
 /*
- * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487C.a
- * We also take into account DIT (bit 24), which is not yet documented, and
- * treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may be
- * allocated an EL0 meaning in future.
+ * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
+ * We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
+ * not described in ARM DDI 0487D.a.
+ * We treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may
+ * be allocated an EL0 meaning in future.
  * Userspace cannot use these until they have an architectural meaning.
  * Note that this follows the SPSR_ELx format, not the AArch32 PSR format.
  * We also reserve IL for the kernel; SS is handled dynamically.
  */
 #define SPSR_EL1_AARCH64_RES0_BITS \
-	(GENMASK_ULL(63,32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
-	 GENMASK_ULL(20, 10) | GENMASK_ULL(5, 5))
+	(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
+	 GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5))
 #define SPSR_EL1_AARCH32_RES0_BITS \
-	(GENMASK_ULL(63,32) | GENMASK_ULL(23, 22) | GENMASK_ULL(20,20))
+	(GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20))
 
 static int valid_compat_regs(struct user_pt_regs *regs)
 {
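The effect of the RES0-mask change above is that AArch64 bit 12 (SSBS) falls out of the reserved range: GENMASK_ULL(20, 10) is split into GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10). A small sketch that re-states GENMASK_ULL in user space (a plausible expansion of the kernel macro, assumed here) and checks the arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* User-space re-statement of the kernel's GENMASK_ULL(): bits h..l set. */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

/* The new AArch64 RES0 mask from the hunk above; bit 12 (SSBS) is carved out. */
#define SPSR_EL1_AARCH64_RES0_BITS \
	(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
	 GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5))
```

With this mask, a ptrace write that sets only SSBS is no longer rejected as touching a reserved bit, while bits 10, 11, and 13 to 20 remain RES0.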
+3
arch/arm64/kernel/setup.c
···
 	smp_init_cpus();
 	smp_build_mpidr_hash();
 
+	/* Init percpu seeds for random tags after cpus are set up. */
+	kasan_init_tags();
+
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	/*
 	 * Make sure init_thread_info.ttbr0 always generates translation
-2
arch/arm64/mm/kasan_init.c
···
 	memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
-	kasan_init_tags();
-
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
 	pr_info("KernelAddressSanitizer initialized\n");
···
 unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
 			      unsigned long new, unsigned int size)
 {
-	u32 mask, old32, new32, load32;
+	u32 mask, old32, new32, load32, load;
 	volatile u32 *ptr32;
 	unsigned int shift;
-	u8 load;
 
 	/* Check that ptr is naturally aligned */
 	WARN_ON((unsigned long)ptr & (size - 1));
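The sub-word cmpxchg above operates on the aligned 32-bit word that contains the byte, using a shift and mask derived from the pointer's low bits; widening `load` to u32 keeps the intermediate value from being truncated before the comparison. A hedged user-space model of that addressing arithmetic (`byte_shift`/`load_byte` are illustrative names, and little-endian byte order is assumed for the demo):

```c
#include <assert.h>
#include <stdint.h>

/* Bit offset of a byte within its naturally aligned 32-bit container word
 * (little-endian assumption for this sketch). */
static uint32_t byte_shift(uintptr_t ptr)
{
	return (uint32_t)(ptr & 0x3) * 8;
}

/* Extract the addressed byte from the container word, keeping the
 * intermediate in 32 bits as the fixed kernel code now does. */
static uint8_t load_byte(uint32_t word, uintptr_t ptr)
{
	uint32_t shift = byte_shift(ptr);
	uint32_t mask = 0xffU << shift;
	uint32_t load = (word & mask) >> shift;

	return (uint8_t)load;
}
```

In the real function the masked 32-bit value is compared against the shifted `old` operand, which is why the declared type of `load` matters.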
+2-1
arch/mips/kernel/setup.c
···
 	init_initrd();
 	reserved_end = (unsigned long) PFN_UP(__pa_symbol(&_end));
 
-	memblock_reserve(PHYS_OFFSET, reserved_end << PAGE_SHIFT);
+	memblock_reserve(PHYS_OFFSET,
+			 (reserved_end << PAGE_SHIFT) - PHYS_OFFSET);
 
 	/*
 	 * max_low_pfn is not a number of pages. The number of pages
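The bug fixed above is a base/size mix-up: memblock_reserve() takes a base address and a *size*, but the old call passed an end address as the size, over-reserving by PHYS_OFFSET bytes whenever PHYS_OFFSET is non-zero. The corrected arithmetic, as a trivial sketch (`reserve_size` is an illustrative helper, and the example addresses are made up):

```c
#include <assert.h>
#include <stdint.h>

/* Size of the region [phys_offset, reserved_end_bytes) to hand to a
 * base+size API such as memblock_reserve(): size = end - base. */
static uint64_t reserve_size(uint64_t phys_offset, uint64_t reserved_end_bytes)
{
	return reserved_end_bytes - phys_offset;
}
```

With PHYS_OFFSET == 0 the old and new calls are identical, which is why the bug only bit platforms with a non-zero physical load offset.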
···
 	REG_64BIT_32BIT,
 	/* 32-bit compatible, need truncation for 64-bit ops. */
 	REG_32BIT,
-	/* 32-bit zero extended. */
-	REG_32BIT_ZERO_EX,
 	/* 32-bit no sign/zero extension needed. */
 	REG_32BIT_POS
 };
···
 	const struct bpf_prog *prog = ctx->skf;
 	int stack_adjust = ctx->stack_size;
 	int store_offset = stack_adjust - 8;
+	enum reg_val_type td;
 	int r0 = MIPS_R_V0;
 
-	if (dest_reg == MIPS_R_RA &&
-	    get_reg_val_type(ctx, prog->len, BPF_REG_0) == REG_32BIT_ZERO_EX)
+	if (dest_reg == MIPS_R_RA) {
 		/* Don't let zero extended value escape. */
-		emit_instr(ctx, sll, r0, r0, 0);
+		td = get_reg_val_type(ctx, prog->len, BPF_REG_0);
+		if (td == REG_64BIT)
+			emit_instr(ctx, sll, r0, r0, 0);
+	}
 
 	if (ctx->flags & EBPF_SAVE_RA) {
 		emit_instr(ctx, ld, MIPS_R_RA, store_offset, MIPS_R_SP);
···
 	if (dst < 0)
 		return dst;
 	td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
-	if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) {
+	if (td == REG_64BIT) {
 		/* sign extend */
 		emit_instr(ctx, sll, dst, dst, 0);
 	}
···
 	if (dst < 0)
 		return dst;
 	td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
-	if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) {
+	if (td == REG_64BIT) {
 		/* sign extend */
 		emit_instr(ctx, sll, dst, dst, 0);
 	}
···
 	if (dst < 0)
 		return dst;
 	td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
-	if (td == REG_64BIT || td == REG_32BIT_ZERO_EX)
+	if (td == REG_64BIT)
 		/* sign extend */
 		emit_instr(ctx, sll, dst, dst, 0);
 	if (insn->imm == 1) {
···
 	if (src < 0 || dst < 0)
 		return -EINVAL;
 	td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
-	if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) {
+	if (td == REG_64BIT) {
 		/* sign extend */
 		emit_instr(ctx, sll, dst, dst, 0);
 	}
 	did_move = false;
 	ts = get_reg_val_type(ctx, this_idx, insn->src_reg);
-	if (ts == REG_64BIT || ts == REG_32BIT_ZERO_EX) {
+	if (ts == REG_64BIT) {
 		int tmp_reg = MIPS_R_AT;
 
 		if (bpf_op == BPF_MOV) {
···
 	if (insn->imm == 64 && td == REG_32BIT)
 		emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32);
 
-	if (insn->imm != 64 &&
-	    (td == REG_64BIT || td == REG_32BIT_ZERO_EX)) {
+	if (insn->imm != 64 && td == REG_64BIT) {
 		/* sign extend */
 		emit_instr(ctx, sll, dst, dst, 0);
 	}
···
 	/* Update the icache */
 	flush_icache_range((unsigned long)ctx.target,
-			   (unsigned long)(ctx.target + ctx.idx * sizeof(u32)));
+			   (unsigned long)&ctx.target[ctx.idx]);
 
 	if (bpf_jit_enable > 1)
 		/* Dump JIT code */
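The `sll dst, dst, 0` the JIT keeps emitting above is the canonical MIPS64 idiom for truncating a register to 32 bits and sign-extending the result back to 64 bits. Its effect can be modelled in C (the helper name `mips_sll0` is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Model of MIPS64 "sll reg, reg, 0": keep the low 32 bits and
 * sign-extend them into the full 64-bit register. */
static int64_t mips_sll0(int64_t reg)
{
	return (int64_t)(int32_t)(uint32_t)reg;
}
```

This is why the JIT only needs the extension when the tracked value type is REG_64BIT: values already known to be properly sign-extended 32-bit quantities are unchanged by the operation.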
+21-8
arch/parisc/kernel/ptrace.c
···
 
 long do_syscall_trace_enter(struct pt_regs *regs)
 {
-	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
-	    tracehook_report_syscall_entry(regs)) {
+	if (test_thread_flag(TIF_SYSCALL_TRACE)) {
+		int rc = tracehook_report_syscall_entry(regs);
+
 		/*
-		 * Tracing decided this syscall should not happen or the
-		 * debugger stored an invalid system call number. Skip
-		 * the system call and the system call restart handling.
+		 * As tracesys_next does not set %r28 to -ENOSYS
+		 * when %r20 is set to -1, initialize it here.
 		 */
-		regs->gr[20] = -1UL;
-		goto out;
+		regs->gr[28] = -ENOSYS;
+
+		if (rc) {
+			/*
+			 * A nonzero return code from
+			 * tracehook_report_syscall_entry() tells us
+			 * to prevent the syscall execution.  Skip
+			 * the syscall call and the syscall restart handling.
+			 *
+			 * Note that the tracer may also just change
+			 * regs->gr[20] to an invalid syscall number,
+			 * that is handled by tracesys_next.
+			 */
+			regs->gr[20] = -1UL;
+			return -1;
+		}
 	}
 
 	/* Do the secure computing check after ptrace. */
···
 			regs->gr[24] & 0xffffffff,
 			regs->gr[23] & 0xffffffff);
 
-out:
 	/*
 	 * Sign extend the syscall number to 64bit since it may have been
 	 * modified by a compat ptrace call
···
  * count is equal with how many entries of union hv_gpa_page_range can
  * be populated into the input parameter page.
  */
-#define HV_MAX_FLUSH_REP_COUNT (PAGE_SIZE - 2 * sizeof(u64) /	\
+#define HV_MAX_FLUSH_REP_COUNT ((PAGE_SIZE - 2 * sizeof(u64)) /	\
 		sizeof(union hv_gpa_page_range))
 
 struct hv_guest_mapping_flush_list {
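The one-character fix above is a pure operator-precedence bug: without the extra parentheses, the division binds to `2 * sizeof(u64)` alone, so the macro computed `PAGE_SIZE - 2` entries instead of the intended `(page minus header) / entry size`. A sketch of both expansions, assuming a 4 KiB page and an 8-byte `union hv_gpa_page_range` (the real union is a single u64 bitfield, so 8 bytes is the expected size, but treat it as a demo assumption):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define DEMO_PAGE_SIZE 4096

/* 8-byte stand-in with the assumed size of union hv_gpa_page_range. */
union hv_gpa_page_range_demo {
	uint64_t as_uint64;
};

/* Buggy: "/" binds tighter than "-", so this is PAGE_SIZE - (16/8). */
#define REP_COUNT_BUGGY (DEMO_PAGE_SIZE - 2 * sizeof(uint64_t) / \
		sizeof(union hv_gpa_page_range_demo))

/* Fixed: header is subtracted first, then divided by the entry size. */
#define REP_COUNT_FIXED ((DEMO_PAGE_SIZE - 2 * sizeof(uint64_t)) / \
		sizeof(union hv_gpa_page_range_demo))
```

The buggy count (4094) would let the flush-list writer run far past the end of the hypercall input page; the fixed count (510) fits exactly after the two u64 header fields.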
+2
arch/x86/include/asm/kvm_host.h
···
 		unsigned int cr4_smap:1;
 		unsigned int cr4_smep:1;
 		unsigned int cr4_la57:1;
+		unsigned int maxphyaddr:6;
 	};
 };
···
 	void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			   u64 *spte, const void *pte);
 	hpa_t root_hpa;
+	gpa_t root_cr3;
 	union kvm_mmu_role mmu_role;
 	u8 root_level;
 	u8 shadow_root_level;
···
 }
 EXPORT_SYMBOL_GPL(ex_handler_fprestore);
 
-/* Helper to check whether a uaccess fault indicates a kernel bug. */
-static bool bogus_uaccess(struct pt_regs *regs, int trapnr,
-			  unsigned long fault_addr)
-{
-	/* This is the normal case: #PF with a fault address in userspace. */
-	if (trapnr == X86_TRAP_PF && fault_addr < TASK_SIZE_MAX)
-		return false;
-
-	/*
-	 * This code can be reached for machine checks, but only if the #MC
-	 * handler has already decided that it looks like a candidate for fixup.
-	 * This e.g. happens when attempting to access userspace memory which
-	 * the CPU can't access because of uncorrectable bad memory.
-	 */
-	if (trapnr == X86_TRAP_MC)
-		return false;
-
-	/*
-	 * There are two remaining exception types we might encounter here:
-	 *  - #PF for faulting accesses to kernel addresses
-	 *  - #GP for faulting accesses to noncanonical addresses
-	 * Complain about anything else.
-	 */
-	if (trapnr != X86_TRAP_PF && trapnr != X86_TRAP_GP) {
-		WARN(1, "unexpected trap %d in uaccess\n", trapnr);
-		return false;
-	}
-
-	/*
-	 * This is a faulting memory access in kernel space, on a kernel
-	 * address, in a usercopy function. This can e.g. be caused by improper
-	 * use of helpers like __put_user and by improper attempts to access
-	 * userspace addresses in KERNEL_DS regions.
-	 * The one (semi-)legitimate exception are probe_kernel_{read,write}(),
-	 * which can be invoked from places like kgdb, /dev/mem (for reading)
-	 * and privileged BPF code (for reading).
-	 * The probe_kernel_*() functions set the kernel_uaccess_faults_ok flag
-	 * to tell us that faulting on kernel addresses, and even noncanonical
-	 * addresses, in a userspace accessor does not necessarily imply a
-	 * kernel bug, root might just be doing weird stuff.
-	 */
-	if (current->kernel_uaccess_faults_ok)
-		return false;
-
-	/* This is bad. Refuse the fixup so that we go into die(). */
-	if (trapnr == X86_TRAP_PF) {
-		pr_emerg("BUG: pagefault on kernel address 0x%lx in non-whitelisted uaccess\n",
-			 fault_addr);
-	} else {
-		pr_emerg("BUG: GPF in non-whitelisted uaccess (non-canonical address?)\n");
-	}
-	return true;
-}
-
 __visible bool ex_handler_uaccess(const struct exception_table_entry *fixup,
 				  struct pt_regs *regs, int trapnr,
 				  unsigned long error_code,
 				  unsigned long fault_addr)
 {
-	if (bogus_uaccess(regs, trapnr, fault_addr))
-		return false;
 	regs->ip = ex_fixup_addr(fixup);
 	return true;
 }
···
 				  unsigned long error_code,
 				  unsigned long fault_addr)
 {
-	if (bogus_uaccess(regs, trapnr, fault_addr))
-		return false;
 	/* Special hack for uaccess_err */
 	current->thread.uaccess_err = 1;
 	regs->ip = ex_fixup_addr(fixup);
+3-1
crypto/af_alg.c
···
 
 int af_alg_release(struct socket *sock)
 {
-	if (sock->sk)
+	if (sock->sk) {
 		sock_put(sock->sk);
+		sock->sk = NULL;
+	}
 	return 0;
 }
 EXPORT_SYMBOL_GPL(af_alg_release);
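Clearing `sock->sk` after the final sock_put() makes a second release call a no-op instead of a double reference drop. A toy refcounting model of the fixed pattern (the `demo_*` types and helpers are illustrative stand-ins for `struct sock`/`struct socket` and sock_put()):

```c
#include <assert.h>
#include <stddef.h>

/* Toy refcounted object standing in for struct sock. */
struct demo_sock {
	int refcnt;
};

struct demo_socket {
	struct demo_sock *sk;
};

static void demo_sock_put(struct demo_sock *sk)
{
	sk->refcnt--;
}

/* Mirrors the fixed af_alg_release(): drop the reference once, then
 * clear the pointer so a repeated release cannot drop it again. */
static int demo_release(struct demo_socket *sock)
{
	if (sock->sk) {
		demo_sock_put(sock->sk);
		sock->sk = NULL;
	}
	return 0;
}
```

Without the `sock->sk = NULL`, calling the release path twice would underflow the refcount and free the object while it may still be in use.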
···
 
 void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
 {
-	int i, n;
+	int i;
 	int last_valid_bit;
 
 	if (adev->kfd.dev) {
···
 			.gpuvm_size = min(adev->vm_manager.max_pfn
 					  << AMDGPU_GPU_PAGE_SHIFT,
 					  AMDGPU_GMC_HOLE_START),
-			.drm_render_minor = adev->ddev->render->index
+			.drm_render_minor = adev->ddev->render->index,
+			.sdma_doorbell_idx = adev->doorbell_index.sdma_engine,
+
 		};
 
 		/* this is going to have a few of the MSBs set that we need to
···
 				&gpu_resources.doorbell_aperture_size,
 				&gpu_resources.doorbell_start_offset);
 
-		if (adev->asic_type < CHIP_VEGA10) {
-			kgd2kfd_device_init(adev->kfd.dev, &gpu_resources);
-			return;
-		}
-
-		n = (adev->asic_type < CHIP_VEGA20) ? 2 : 8;
-
-		for (i = 0; i < n; i += 2) {
-			/* On SOC15 the BIF is involved in routing
-			 * doorbells using the low 12 bits of the
-			 * address. Communicate the assignments to
-			 * KFD. KFD uses two doorbell pages per
-			 * process in case of 64-bit doorbells so we
-			 * can use each doorbell assignment twice.
-			 */
-			gpu_resources.sdma_doorbell[0][i] =
-				adev->doorbell_index.sdma_engine[0] + (i >> 1);
-			gpu_resources.sdma_doorbell[0][i+1] =
-				adev->doorbell_index.sdma_engine[0] + 0x200 + (i >> 1);
-			gpu_resources.sdma_doorbell[1][i] =
-				adev->doorbell_index.sdma_engine[1] + (i >> 1);
-			gpu_resources.sdma_doorbell[1][i+1] =
-				adev->doorbell_index.sdma_engine[1] + 0x200 + (i >> 1);
-		}
-		/* Doorbells 0x0e0-0ff and 0x2e0-2ff are reserved for
-		 * SDMA, IH and VCN. So don't use them for the CP.
+		/* Since SOC15, BIF starts to statically use the
+		 * lower 12 bits of doorbell addresses for routing
+		 * based on settings in registers like
+		 * SDMA0_DOORBELL_RANGE etc..
+		 * In order to route a doorbell to CP engine, the lower
+		 * 12 bits of its address has to be outside the range
+		 * set for SDMA, VCN, and IH blocks.
 		 */
-		gpu_resources.reserved_doorbell_mask = 0x1e0;
-		gpu_resources.reserved_doorbell_val  = 0x0e0;
+		if (adev->asic_type >= CHIP_VEGA10) {
+			gpu_resources.non_cp_doorbells_start =
+				adev->doorbell_index.first_non_cp;
+			gpu_resources.non_cp_doorbells_end =
+				adev->doorbell_index.last_non_cp;
+		}
 
 		kgd2kfd_device_init(adev->kfd.dev, &gpu_resources);
 	}
+13-125
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
···
 }
 
 
-/* amdgpu_amdkfd_remove_eviction_fence - Removes eviction fence(s) from BO's
+/* amdgpu_amdkfd_remove_eviction_fence - Removes eviction fence from BO's
  *  reservation object.
  *
  * @bo: [IN] Remove eviction fence(s) from this BO
- * @ef: [IN] If ef is specified, then this eviction fence is removed if it
+ * @ef: [IN] This eviction fence is removed if it
  *  is present in the shared list.
- * @ef_list: [OUT] Returns list of eviction fences. These fences are removed
- *  from BO's reservation object shared list.
- * @ef_count: [OUT] Number of fences in ef_list.
  *
- * NOTE: If called with ef_list, then amdgpu_amdkfd_add_eviction_fence must be
- * called to restore the eviction fences and to avoid memory leak. This is
- * useful for shared BOs.
  * NOTE: Must be called with BO reserved i.e. bo->tbo.resv->lock held.
  */
 static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo,
-					struct amdgpu_amdkfd_fence *ef,
-					struct amdgpu_amdkfd_fence ***ef_list,
-					unsigned int *ef_count)
+					struct amdgpu_amdkfd_fence *ef)
 {
 	struct reservation_object *resv = bo->tbo.resv;
 	struct reservation_object_list *old, *new;
 	unsigned int i, j, k;
 
-	if (!ef && !ef_list)
+	if (!ef)
 		return -EINVAL;
-
-	if (ef_list) {
-		*ef_list = NULL;
-		*ef_count = 0;
-	}
 
 	old = reservation_object_get_list(resv);
 	if (!old)
···
 		f = rcu_dereference_protected(old->shared[i],
 					      reservation_object_held(resv));
 
-		if ((ef && f->context == ef->base.context) ||
-		    (!ef && to_amdgpu_amdkfd_fence(f)))
+		if (f->context == ef->base.context)
 			RCU_INIT_POINTER(new->shared[--j], f);
 		else
 			RCU_INIT_POINTER(new->shared[k++], f);
 	}
 	new->shared_max = old->shared_max;
 	new->shared_count = k;
-
-	if (!ef) {
-		unsigned int count = old->shared_count - j;
-
-		/* Alloc memory for count number of eviction fence pointers.
-		 * Fill the ef_list array and ef_count
-		 */
-		*ef_list = kcalloc(count, sizeof(**ef_list), GFP_KERNEL);
-		*ef_count = count;
-
-		if (!*ef_list) {
-			kfree(new);
-			return -ENOMEM;
-		}
-	}
 
 	/* Install the new fence list, seqcount provides the barriers */
 	preempt_disable();
···
 		f = rcu_dereference_protected(new->shared[i],
 					      reservation_object_held(resv));
-		if (!ef)
-			(*ef_list)[k++] = to_amdgpu_amdkfd_fence(f);
-		else
-			dma_fence_put(f);
+		dma_fence_put(f);
 	}
 	kfree_rcu(old, rcu);
 
 	return 0;
-}
-
-/* amdgpu_amdkfd_add_eviction_fence - Adds eviction fence(s) back into BO's
- *  reservation object.
- *
- * @bo: [IN] Add eviction fences to this BO
- * @ef_list: [IN] List of eviction fences to be added
- * @ef_count: [IN] Number of fences in ef_list.
- *
- * NOTE: Must call amdgpu_amdkfd_remove_eviction_fence before calling this
- * function.
- */
-static void amdgpu_amdkfd_add_eviction_fence(struct amdgpu_bo *bo,
-				struct amdgpu_amdkfd_fence **ef_list,
-				unsigned int ef_count)
-{
-	int i;
-
-	if (!ef_list || !ef_count)
-		return;
-
-	for (i = 0; i < ef_count; i++) {
-		amdgpu_bo_fence(bo, &ef_list[i]->base, true);
-		/* Re-adding the fence takes an additional reference. Drop that
-		 * reference.
-		 */
-		dma_fence_put(&ef_list[i]->base);
-	}
-
-	kfree(ef_list);
 }
 
 static int amdgpu_amdkfd_bo_validate(struct amdgpu_bo *bo, uint32_t domain,
···
 	ret = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
 	if (ret)
 		goto validate_fail;
-	if (wait) {
-		struct amdgpu_amdkfd_fence **ef_list;
-		unsigned int ef_count;
-
-		ret = amdgpu_amdkfd_remove_eviction_fence(bo, NULL, &ef_list,
-							  &ef_count);
-		if (ret)
-			goto validate_fail;
-
-		ttm_bo_wait(&bo->tbo, false, false);
-		amdgpu_amdkfd_add_eviction_fence(bo, ef_list, ef_count);
-	}
+	if (wait)
+		amdgpu_bo_sync_wait(bo, AMDGPU_FENCE_OWNER_KFD, false);
 
 validate_fail:
 	return ret;
···
 {
 	int ret;
 	struct kfd_bo_va_list *bo_va_entry;
-	struct amdgpu_bo *pd = vm->root.base.bo;
 	struct amdgpu_bo *bo = mem->bo;
 	uint64_t va = mem->va;
 	struct list_head *list_bo_va = &mem->bo_va_list;
···
 	*p_bo_va_entry = bo_va_entry;
 
 	/* Allocate new page tables if needed and validate
-	 * them. Clearing of new page tables and validate need to wait
-	 * on move fences. We don't want that to trigger the eviction
-	 * fence, so remove it temporarily.
+	 * them.
 	 */
-	amdgpu_amdkfd_remove_eviction_fence(pd,
-					vm->process_info->eviction_fence,
-					NULL, NULL);
-
 	ret = amdgpu_vm_alloc_pts(adev, vm, va, amdgpu_bo_size(bo));
 	if (ret) {
 		pr_err("Failed to allocate pts, err=%d\n", ret);
···
 		goto err_alloc_pts;
 	}
 
-	/* Add the eviction fence back */
-	amdgpu_bo_fence(pd, &vm->process_info->eviction_fence->base, true);
-
 	return 0;
 
 err_alloc_pts:
-	amdgpu_bo_fence(pd, &vm->process_info->eviction_fence->base, true);
 	amdgpu_vm_bo_rmv(adev, bo_va_entry->bo_va);
 	list_del(&bo_va_entry->bo_list);
 err_vmadd:
···
 {
 	struct amdgpu_bo_va *bo_va = entry->bo_va;
 	struct amdgpu_vm *vm = bo_va->base.vm;
-	struct amdgpu_bo *pd = vm->root.base.bo;
 
-	/* Remove eviction fence from PD (and thereby from PTs too as
-	 * they share the resv. object). Otherwise during PT update
-	 * job (see amdgpu_vm_bo_update_mapping), eviction fence would
-	 * get added to job->sync object and job execution would
-	 * trigger the eviction fence.
-	 */
-	amdgpu_amdkfd_remove_eviction_fence(pd,
-					vm->process_info->eviction_fence,
-					NULL, NULL);
 	amdgpu_vm_bo_unmap(adev, bo_va, entry->va);
 
 	amdgpu_vm_clear_freed(adev, vm, &bo_va->last_pt_update);
-
-	/* Add the eviction fence back */
-	amdgpu_bo_fence(pd, &vm->process_info->eviction_fence->base, true);
 
 	amdgpu_sync_fence(NULL, sync, bo_va->last_pt_update, false);
···
 		pr_err("validate_pt_pd_bos() failed\n");
 		goto validate_pd_fail;
 	}
-	ret = ttm_bo_wait(&vm->root.base.bo->tbo, false, false);
+	amdgpu_bo_sync_wait(vm->root.base.bo, AMDGPU_FENCE_OWNER_KFD, false);
 	if (ret)
 		goto wait_pd_fail;
 	amdgpu_bo_fence(vm->root.base.bo,
···
 	 * attached
 	 */
 	amdgpu_amdkfd_remove_eviction_fence(mem->bo,
-					process_info->eviction_fence,
-					NULL, NULL);
+					process_info->eviction_fence);
 	pr_debug("Release VA 0x%llx - 0x%llx\n", mem->va,
 		mem->va + bo_size * (1 + mem->aql_queue));
···
 	if (mem->mapped_to_gpu_memory == 0 &&
 	    !amdgpu_ttm_tt_get_usermm(mem->bo->tbo.ttm) && !mem->bo->pin_count)
 		amdgpu_amdkfd_remove_eviction_fence(mem->bo,
-						process_info->eviction_fence,
-						NULL, NULL);
+						process_info->eviction_fence);
 
 unreserve_out:
 	unreserve_bo_and_vms(&ctx, false, false);
···
 	}
 
 	amdgpu_amdkfd_remove_eviction_fence(
-		bo, mem->process_info->eviction_fence, NULL, NULL);
+		bo, mem->process_info->eviction_fence);
 	list_del_init(&mem->validate_list.head);
 
 	if (size)
···
 
 	amdgpu_sync_create(&sync);
 
-	/* Avoid triggering eviction fences when unmapping invalid
-	 * userptr BOs (waits for all fences, doesn't use
-	 * FENCE_OWNER_VM)
-	 */
-	list_for_each_entry(peer_vm, &process_info->vm_list_head,
-			    vm_list_node)
-		amdgpu_amdkfd_remove_eviction_fence(peer_vm->root.base.bo,
-						process_info->eviction_fence,
-						NULL, NULL);
-
 	ret = process_validate_vms(process_info);
 	if (ret)
 		goto unreserve_out;
···
 	ret = process_update_pds(process_info, &sync);
 
 unreserve_out:
-	list_for_each_entry(peer_vm, &process_info->vm_list_head,
-			    vm_list_node)
-		amdgpu_bo_fence(peer_vm->root.base.bo,
-				&process_info->eviction_fence->base, true);
 	ttm_eu_backoff_reservation(&ticket, &resv_list);
 	amdgpu_sync_wait(&sync, false);
 	amdgpu_sync_free(&sync);
+8-3
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
···
 	struct amdgpu_ring *rings[AMDGPU_MAX_RINGS];
 	struct drm_sched_rq *rqs[AMDGPU_MAX_RINGS];
 	unsigned num_rings;
+	unsigned num_rqs = 0;
 
 	switch (i) {
 	case AMDGPU_HW_IP_GFX:
···
 		break;
 	}
 
-	for (j = 0; j < num_rings; ++j)
-		rqs[j] = &rings[j]->sched.sched_rq[priority];
+	for (j = 0; j < num_rings; ++j) {
+		if (!rings[j]->adev)
+			continue;
+
+		rqs[num_rqs++] = &rings[j]->sched.sched_rq[priority];
+	}
 
 	for (j = 0; j < amdgpu_ctx_num_entities[i]; ++j)
 		r = drm_sched_entity_init(&ctx->entities[i][j].entity,
-					  rqs, num_rings, &ctx->guilty);
+					  rqs, num_rqs, &ctx->guilty);
 	if (r)
 		goto error_cleanup_entities;
 }
-3
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
···
 	while (size) {
 		uint32_t value;
 
-		if (*pos > adev->rmmio_size)
-			goto end;
-
 		if (read) {
 			value = RREG32(*pos >> 2);
 			r = put_user(value, (uint32_t *)buf);
···
 	}
 
 	/* Trigger recovery for world switch failure if no TDR */
-	if (amdgpu_device_should_recover_gpu(adev))
+	if (amdgpu_device_should_recover_gpu(adev)
+		&& amdgpu_lockup_timeout == MAX_SCHEDULE_TIMEOUT)
 		amdgpu_device_gpu_recover(adev, NULL);
 }
···
 		si_pi->force_pcie_gen = AMDGPU_PCIE_GEN2;
 		if (current_link_speed == AMDGPU_PCIE_GEN2)
 			break;
+		/* fall through */
 	case AMDGPU_PCIE_GEN2:
 		if (amdgpu_acpi_pcie_performance_request(adev, PCIE_PERF_REQ_PECI_GEN2, false) == 0)
 			break;
 #endif
+		/* fall through */
 	default:
 		si_pi->force_pcie_gen = si_get_current_pcie_speed(adev);
 		break;
···
 		 */
 		q->doorbell_id = q->properties.queue_id;
 	} else if (q->properties.type == KFD_QUEUE_TYPE_SDMA) {
-		/* For SDMA queues on SOC15, use static doorbell
-		 * assignments based on the engine and queue.
+		/* For SDMA queues on SOC15 with 8-byte doorbell, use static
+		 * doorbell assignments based on the engine and queue id.
+		 * The doorbell index distance between RLC (2*i) and (2*i+1)
+		 * for a SDMA engine is 512.
 		 */
-		q->doorbell_id = dev->shared_resources.sdma_doorbell
-			[q->properties.sdma_engine_id]
-			[q->properties.sdma_queue_id];
+		uint32_t *idx_offset =
+			dev->shared_resources.sdma_doorbell_idx;
+
+		q->doorbell_id = idx_offset[q->properties.sdma_engine_id]
+			+ (q->properties.sdma_queue_id & 1)
+			* KFD_QUEUE_DOORBELL_MIRROR_OFFSET
+			+ (q->properties.sdma_queue_id >> 1);
 	} else {
 		/* For CP queues on SOC15 reserve a free doorbell ID */
 		unsigned int found;
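The doorbell-id expression above packs two queues per index slot: even SDMA queue ids land at `idx_offset + qid/2`, odd ids at the mirrored range 512 indices higher. A hedged sketch of that mapping (`sdma_doorbell_id` is an illustrative helper, and the base offsets in the test are made-up values, not the real SOC15 doorbell layout):

```c
#include <assert.h>
#include <stdint.h>

#define KFD_QUEUE_DOORBELL_MIRROR_OFFSET 512

/* Mirrors the SDMA doorbell assignment in the hunk above: even queue
 * ids map to idx_offset[engine] + qid/2, odd ids to the mirrored
 * range KFD_QUEUE_DOORBELL_MIRROR_OFFSET higher. */
static uint32_t sdma_doorbell_id(const uint32_t *idx_offset,
				 uint32_t engine_id, uint32_t queue_id)
{
	return idx_offset[engine_id]
		+ (queue_id & 1) * KFD_QUEUE_DOORBELL_MIRROR_OFFSET
		+ (queue_id >> 1);
}
```

Keeping the odd queues exactly 512 8-byte doorbells away means their low 12 address bits still fall inside the per-engine SDMA doorbell range that the BIF routes on.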
+17-5
drivers/gpu/drm/amd/amdkfd/kfd_priv.h
···
 #define KFD_CWSR_TBA_TMA_SIZE (PAGE_SIZE * 2)
 #define KFD_CWSR_TMA_OFFSET PAGE_SIZE
 
+#define KFD_MAX_NUM_OF_QUEUES_PER_DEVICE		\
+	(KFD_MAX_NUM_OF_PROCESSES *			\
+			KFD_MAX_NUM_OF_QUEUES_PER_PROCESS)
+
+#define KFD_KERNEL_QUEUE_SIZE 2048
+
+/*
+ * 512 = 0x200
+ * The doorbell index distance between SDMA RLC (2*i) and (2*i+1) in the
+ * same SDMA engine on SOC15, which has 8-byte doorbells for SDMA.
+ * 512 8-byte doorbell distance (i.e. one page away) ensures that SDMA RLC
+ * (2*i+1) doorbells (in terms of the lower 12 bit address) lie exactly in
+ * the OFFSET and SIZE set in registers like BIF_SDMA0_DOORBELL_RANGE.
+ */
+#define KFD_QUEUE_DOORBELL_MIRROR_OFFSET 512
+
+
 /*
  * Kernel module parameter to specify maximum number of supported queues per
  * device
  */
 extern int max_num_of_queues_per_device;
 
-#define KFD_MAX_NUM_OF_QUEUES_PER_DEVICE		\
-	(KFD_MAX_NUM_OF_PROCESSES *			\
-			KFD_MAX_NUM_OF_QUEUES_PER_PROCESS)
-
-#define KFD_KERNEL_QUEUE_SIZE 2048
-
 
 /* Kernel module parameter to specify the scheduling policy */
 extern int sched_policy;
+9-5
drivers/gpu/drm/amd/amdkfd/kfd_process.c
···
 	if (!qpd->doorbell_bitmap)
 		return -ENOMEM;
 
-	/* Mask out any reserved doorbells */
-	for (i = 0; i < KFD_MAX_NUM_OF_QUEUES_PER_PROCESS; i++)
-		if ((dev->shared_resources.reserved_doorbell_mask & i) ==
-		    dev->shared_resources.reserved_doorbell_val) {
+	/* Mask out doorbells reserved for SDMA, IH, and VCN on SOC15. */
+	for (i = 0; i < KFD_MAX_NUM_OF_QUEUES_PER_PROCESS / 2; i++) {
+		if (i >= dev->shared_resources.non_cp_doorbells_start
+		    && i <= dev->shared_resources.non_cp_doorbells_end) {
 			set_bit(i, qpd->doorbell_bitmap);
-			pr_debug("reserved doorbell 0x%03x\n", i);
+			set_bit(i + KFD_QUEUE_DOORBELL_MIRROR_OFFSET,
+				qpd->doorbell_bitmap);
+			pr_debug("reserved doorbell 0x%03x and 0x%03x\n", i,
+				i + KFD_QUEUE_DOORBELL_MIRROR_OFFSET);
 		}
+	}
 
 	return 0;
 }
+98-23
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···
 		return;
 	}
 
+	/* Update to correct count(s) if racing with vblank irq */
+	amdgpu_crtc->last_flip_vblank = drm_crtc_accurate_vblank_count(&amdgpu_crtc->base);
 
 	/* wake up userspace */
 	if (amdgpu_crtc->event) {
-		/* Update to correct count(s) if racing with vblank irq */
-		drm_crtc_accurate_vblank_count(&amdgpu_crtc->base);
-
 		drm_crtc_send_vblank_event(&amdgpu_crtc->base, amdgpu_crtc->event);
 
 		/* page flip completed. clean up */
···
 	struct amdgpu_display_manager *dm = &adev->dm;
 	int ret = 0;
 
+	WARN_ON(adev->dm.cached_state);
+	adev->dm.cached_state = drm_atomic_helper_suspend(adev->ddev);
+
 	s3_handle_mst(adev->ddev, true);
 
 	amdgpu_dm_irq_suspend(adev);
 
-	WARN_ON(adev->dm.cached_state);
-	adev->dm.cached_state = drm_atomic_helper_suspend(adev->ddev);
 
 	dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3);
···
  * check will succeed, and let DC implement proper check
  */
 static const uint32_t rgb_formats[] = {
-	DRM_FORMAT_RGB888,
 	DRM_FORMAT_XRGB8888,
 	DRM_FORMAT_ARGB8888,
 	DRM_FORMAT_RGBA8888,
···
 	struct amdgpu_bo *abo;
 	uint64_t tiling_flags, dcc_address;
 	uint32_t target, target_vblank;
+	uint64_t last_flip_vblank;
+	bool vrr_active = acrtc_state->freesync_config.state == VRR_STATE_ACTIVE_VARIABLE;
 
 	struct {
 		struct dc_surface_update surface_updates[MAX_SURFACES];
···
 		struct dc_plane_state *dc_plane;
 		struct dm_plane_state *dm_new_plane_state = to_dm_plane_state(new_plane_state);
 
-		if (plane->type == DRM_PLANE_TYPE_CURSOR) {
-			handle_cursor_update(plane, old_plane_state);
+		/* Cursor plane is handled after stream updates */
+		if (plane->type == DRM_PLANE_TYPE_CURSOR)
 			continue;
-		}
 
 		if (!fb || !crtc || pcrtc != crtc)
 			continue;
···
 		 */
 		abo = gem_to_amdgpu_bo(fb->obj[0]);
 		r = amdgpu_bo_reserve(abo, true);
-		if (unlikely(r != 0)) {
+		if (unlikely(r != 0))
 			DRM_ERROR("failed to reserve buffer before flip\n");
-			WARN_ON(1);
-		}
 
-		/* Wait for all fences on this FB */
-		WARN_ON(reservation_object_wait_timeout_rcu(abo->tbo.resv, true, false,
-							    MAX_SCHEDULE_TIMEOUT) < 0);
+		/*
+		 * Wait for all fences on this FB. Do limited wait to avoid
+		 * deadlock during GPU reset when this fence will not signal
+		 * but we hold reservation lock for the BO.
+		 */
+		r = reservation_object_wait_timeout_rcu(abo->tbo.resv,
+							true, false,
+							msecs_to_jiffies(5000));
+		if (unlikely(r == 0))
+			DRM_ERROR("Waiting for fences timed out.");
+
+
 
 		amdgpu_bo_get_tiling_flags(abo, &tiling_flags);
···
 	 * hopefully eliminating dc_*_update structs in their entirety.
 	 */
 	if (flip_count) {
-		target = (uint32_t)drm_crtc_vblank_count(pcrtc) + *wait_for_vblank;
+		if (!vrr_active) {
+			/* Use old throttling in non-vrr fixed refresh rate mode
+			 * to keep flip scheduling based on target vblank counts
+			 * working in a backwards compatible way, e.g., for
+			 * clients using the GLX_OML_sync_control extension or
+			 * DRI3/Present extension with defined target_msc.
+			 */
+			last_flip_vblank = drm_crtc_vblank_count(pcrtc);
+		}
+		else {
+			/* For variable refresh rate mode only:
+			 * Get vblank of last completed flip to avoid > 1 vrr
+			 * flips per video frame by use of throttling, but allow
+			 * flip programming anywhere in the possibly large
+			 * variable vrr vblank interval for fine-grained flip
+			 * timing control and more opportunity to avoid stutter
+			 * on late submission of flips.
+			 */
+			spin_lock_irqsave(&pcrtc->dev->event_lock, flags);
+			last_flip_vblank = acrtc_attach->last_flip_vblank;
+			spin_unlock_irqrestore(&pcrtc->dev->event_lock, flags);
+		}
+
+		target = (uint32_t)last_flip_vblank + *wait_for_vblank;
+
 		/* Prepare wait for target vblank early - before the fence-waits */
 		target_vblank = target - (uint32_t)drm_crtc_vblank_count(pcrtc) +
 				amdgpu_get_vblank_counter_kms(pcrtc->dev, acrtc_attach->crtc_id);
···
 						     dc_state);
 		mutex_unlock(&dm->dc_lock);
 	}
+
+	for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i)
+		if (plane->type == DRM_PLANE_TYPE_CURSOR)
+			handle_cursor_update(plane, old_plane_state);
 
 cleanup:
 	kfree(flip);
···
 		old_dm_crtc_state = to_dm_crtc_state(old_crtc_state);
 		num_plane = 0;
 
-		if (!new_dm_crtc_state->stream) {
-			if (!new_dm_crtc_state->stream && old_dm_crtc_state->stream) {
-				update_type = UPDATE_TYPE_FULL;
-				goto cleanup;
-			}
-
-			continue;
+		if (new_dm_crtc_state->stream != old_dm_crtc_state->stream) {
+			update_type = UPDATE_TYPE_FULL;
+			goto cleanup;
 		}
+
+		if (!new_dm_crtc_state->stream)
+			continue;
 
 		for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, j) {
 			new_plane_crtc = new_plane_state->crtc;
···
 
 			if (plane->type == DRM_PLANE_TYPE_CURSOR)
 				continue;
+
+			if (new_dm_plane_state->dc_state != old_dm_plane_state->dc_state) {
+				update_type = UPDATE_TYPE_FULL;
+				goto cleanup;
+			}
 
 			if (!state->allow_modeset)
 				continue;
···
 		ret = drm_atomic_add_affected_planes(state, crtc);
 		if (ret)
 			goto fail;
+	}
+
+	/*
+	 * Add all primary and overlay planes on the CRTC to the state
* whenever a plane is enabled to maintain correct z-ordering59615961+ * and to enable fast surface updates.59625962+ */59635963+ drm_for_each_crtc(crtc, dev) {59645964+ bool modified = false;59655965+59665966+ for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) {59675967+ if (plane->type == DRM_PLANE_TYPE_CURSOR)59685968+ continue;59695969+59705970+ if (new_plane_state->crtc == crtc ||59715971+ old_plane_state->crtc == crtc) {59725972+ modified = true;59735973+ break;59745974+ }59755975+ }59765976+59775977+ if (!modified)59785978+ continue;59795979+59805980+ drm_for_each_plane_mask(plane, state->dev, crtc->state->plane_mask) {59815981+ if (plane->type == DRM_PLANE_TYPE_CURSOR)59825982+ continue;59835983+59845984+ new_plane_state =59855985+ drm_atomic_get_plane_state(state, plane);59865986+59875987+ if (IS_ERR(new_plane_state)) {59885988+ ret = PTR_ERR(new_plane_state);59895989+ goto fail;59905990+ }59915991+ }59955992 }5996599359975994 /* Remove exiting planes if they are modified */
···265265 && id.enum_id == obj_id.enum_id)266266 return &bp->object_info_tbl.v1_4->display_path[i];267267 }268268+ /* fall through */268269 case OBJECT_TYPE_CONNECTOR:269270 case OBJECT_TYPE_GENERIC:270271 /* Both Generic and Connector Object ID···278277 && id.enum_id == obj_id.enum_id)279278 return &bp->object_info_tbl.v1_4->display_path[i];280279 }280280+ /* fall through */281281 default:282282 return NULL;283283 }
+9-6
drivers/gpu/drm/amd/display/dc/core/dc.c
···11381138 /* pplib is notified if disp_num changed */11391139 dc->hwss.optimize_bandwidth(dc, context);1140114011411141+ for (i = 0; i < context->stream_count; i++)11421142+ context->streams[i]->mode_changed = false;11431143+11411144 dc_release_state(dc->current_state);1142114511431146 dc->current_state = context;···16261623 stream_update->adjust->v_total_min,16271624 stream_update->adjust->v_total_max);1628162516291629- if (stream_update->periodic_vsync_config && pipe_ctx->stream_res.tg->funcs->program_vline_interrupt)16301630- pipe_ctx->stream_res.tg->funcs->program_vline_interrupt(16311631- pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing, VLINE0, &stream->periodic_vsync_config);16261626+ if (stream_update->periodic_interrupt0 &&16271627+ dc->hwss.setup_periodic_interrupt)16281628+ dc->hwss.setup_periodic_interrupt(pipe_ctx, VLINE0);1632162916331633- if (stream_update->enhanced_sync_config && pipe_ctx->stream_res.tg->funcs->program_vline_interrupt)16341634- pipe_ctx->stream_res.tg->funcs->program_vline_interrupt(16351635- pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing, VLINE1, &stream->enhanced_sync_config);16301630+ if (stream_update->periodic_interrupt1 &&16311631+ dc->hwss.setup_periodic_interrupt)16321632+ dc->hwss.setup_periodic_interrupt(pipe_ctx, VLINE1);1636163316371634 if ((stream_update->hdr_static_metadata && !stream->use_dynamic_meta) ||16381635 stream_update->vrr_infopacket ||
+17-7
drivers/gpu/drm/amd/display/dc/dc_stream.h
···5151 bool dummy;5252};53535454-union vline_config {5555- unsigned int line_number;5656- unsigned long long delta_in_ns;5454+enum vertical_interrupt_ref_point {5555+ START_V_UPDATE = 0,5656+ START_V_SYNC,5757+ INVALID_POINT5858+5959+ //For now, only v_update interrupt is used.6060+ //START_V_BLANK,6161+ //START_V_ACTIVE6262+};6363+6464+struct periodic_interrupt_config {6565+ enum vertical_interrupt_ref_point ref_point;6666+ int lines_offset;5767};58685969···116106 /* DMCU info */117107 unsigned int abm_level;118108119119- union vline_config periodic_vsync_config;120120- union vline_config enhanced_sync_config;109109+ struct periodic_interrupt_config periodic_interrupt0;110110+ struct periodic_interrupt_config periodic_interrupt1;121111122112 /* from core_stream struct */123113 struct dc_context *ctx;···168158 struct dc_info_packet *hdr_static_metadata;169159 unsigned int *abm_level;170160171171- union vline_config *periodic_vsync_config;172172- union vline_config *enhanced_sync_config;161161+ struct periodic_interrupt_config *periodic_interrupt0;162162+ struct periodic_interrupt_config *periodic_interrupt1;173163174164 struct dc_crtc_timing_adjust *adjust;175165 struct dc_info_packet *vrr_infopacket;
···479479 case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F:480480 sign = 1;481481 floating = 1;482482- /* no break */482482+ /* fall through */483483 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F: /* shouldn't this get float too? */484484 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616:485485 grph_depth = 3;
···792792 struct dc *dc,793793 struct dc_state *context)794794{795795- /* TODO implement when needed but for now hardcode max value*/796796- context->bw.dce.dispclk_khz = 681000;797797- context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER_CZ;795795+ int i;796796+ bool at_least_one_pipe = false;797797+798798+ for (i = 0; i < dc->res_pool->pipe_count; i++) {799799+ if (context->res_ctx.pipe_ctx[i].stream)800800+ at_least_one_pipe = true;801801+ }802802+803803+ if (at_least_one_pipe) {804804+ /* TODO implement when needed but for now hardcode max value*/805805+ context->bw.dce.dispclk_khz = 681000;806806+ context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER_CZ;807807+ } else {808808+ context->bw.dce.dispclk_khz = 0;809809+ context->bw.dce.yclk_khz = 0;810810+ }798811799812 return true;800813}
···165165};166166#pragma pack(pop)167167168168-static uint16_t backlight_8_to_16(unsigned int backlight_8bit)169169-{170170- return (uint16_t)(backlight_8bit * 0x101);171171-}172172-173168static void fill_backlight_transform_table(struct dmcu_iram_parameters params,174169 struct iram_table_v_2 *table)175170{176171 unsigned int i;177172 unsigned int num_entries = NUM_BL_CURVE_SEGS;178178- unsigned int query_input_8bit;179179- unsigned int query_output_8bit;180173 unsigned int lut_index;181174182175 table->backlight_thresholds[0] = 0;···187194 * format U4.10.188195 */189196 for (i = 1; i+1 < num_entries; i++) {190190- query_input_8bit = DIV_ROUNDUP((i * 256), num_entries);191191-192197 lut_index = (params.backlight_lut_array_size - 1) * i / (num_entries - 1);193198 ASSERT(lut_index < params.backlight_lut_array_size);194194- query_output_8bit = params.backlight_lut_array[lut_index] >> 8;195199196200 table->backlight_thresholds[i] =197197- backlight_8_to_16(query_input_8bit);201201+ cpu_to_be16(DIV_ROUNDUP((i * 65536), num_entries));198202 table->backlight_offsets[i] =199199- backlight_8_to_16(query_output_8bit);203203+ cpu_to_be16(params.backlight_lut_array[lut_index]);200204 }201205}202206···202212{203213 unsigned int i;204214 unsigned int num_entries = NUM_BL_CURVE_SEGS;205205- unsigned int query_input_8bit;206206- unsigned int query_output_8bit;207215 unsigned int lut_index;208216209217 table->backlight_thresholds[0] = 0;···219231 * format U4.10.220232 */221233 for (i = 1; i+1 < num_entries; i++) {222222- query_input_8bit = DIV_ROUNDUP((i * 256), num_entries);223223-224234 lut_index = (params.backlight_lut_array_size - 1) * i / (num_entries - 1);225235 ASSERT(lut_index < params.backlight_lut_array_size);226226- query_output_8bit = params.backlight_lut_array[lut_index] >> 8;227236228237 table->backlight_thresholds[i] =229229- backlight_8_to_16(query_input_8bit);238238+ cpu_to_be16(DIV_ROUNDUP((i * 65536), num_entries));230239 table->backlight_offsets[i] =231231- 
backlight_8_to_16(query_output_8bit);240240+ cpu_to_be16(params.backlight_lut_array[lut_index]);232241 }233242}234243
+8-11
drivers/gpu/drm/amd/include/kgd_kfd_interface.h
···137137 /* Bit n == 1 means Queue n is available for KFD */138138 DECLARE_BITMAP(queue_bitmap, KGD_MAX_QUEUES);139139140140- /* Doorbell assignments (SOC15 and later chips only). Only140140+ /* SDMA doorbell assignments (SOC15 and later chips only). Only141141 * specific doorbells are routed to each SDMA engine. Others142142 * are routed to IH and VCN. They are not usable by the CP.143143- *144144- * Any doorbell number D that satisfies the following condition145145- * is reserved: (D & reserved_doorbell_mask) == reserved_doorbell_val146146- *147147- * KFD currently uses 1024 (= 0x3ff) doorbells per process. If148148- * doorbells 0x0e0-0x0ff and 0x2e0-0x2ff are reserved, that means149149- * mask would be set to 0x1e0 and val set to 0x0e0.150143 */151151- unsigned int sdma_doorbell[2][8];152152- unsigned int reserved_doorbell_mask;153153- unsigned int reserved_doorbell_val;144144+ uint32_t *sdma_doorbell_idx;145145+146146+ /* From SOC15 onward, the doorbell index range not usable for CP147147+ * queues.148148+ */149149+ uint32_t non_cp_doorbells_start;150150+ uint32_t non_cp_doorbells_end;154151155152 /* Base address of doorbell aperture. */156153 phys_addr_t doorbell_physical_address;
···36813681 data->force_pcie_gen = PP_PCIEGen2;36823682 if (current_link_speed == PP_PCIEGen2)36833683 break;36843684+ /* fall through */36843685 case PP_PCIEGen2:36853686 if (0 == amdgpu_acpi_pcie_performance_request(hwmgr->adev, PCIE_PERF_REQ_GEN2, false))36863687 break;36873688#endif36893689+ /* fall through */36883690 default:36893691 data->force_pcie_gen = smu7_get_current_pcie_speed(hwmgr);36903692 break;
···11+/*22+ * Copyright 2018 Advanced Micro Devices, Inc.33+ *44+ * Permission is hereby granted, free of charge, to any person obtaining a55+ * copy of this software and associated documentation files (the "Software"),66+ * to deal in the Software without restriction, including without limitation77+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,88+ * and/or sell copies of the Software, and to permit persons to whom the99+ * Software is furnished to do so, subject to the following conditions:1010+ *1111+ * The above copyright notice and this permission notice shall be included in1212+ * all copies or substantial portions of the Software.1313+ *1414+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR1515+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,1616+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL1717+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR1818+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,1919+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR2020+ * OTHER DEALINGS IN THE SOFTWARE.2121+ *2222+ */123#include "amdgpu.h"224#include "soc15.h"325#include "soc15_hw_ip.h"···136114 if (soc15_baco_program_registers(hwmgr, pre_baco_tbl,137115 ARRAY_SIZE(pre_baco_tbl))) {138116 if (smum_send_msg_to_smc(hwmgr, PPSMC_MSG_EnterBaco))139139- return -1;117117+ return -EINVAL;140118141119 if (soc15_baco_program_registers(hwmgr, enter_baco_tbl,142120 ARRAY_SIZE(enter_baco_tbl)))···154132 }155133 }156134157157- return -1;135135+ return -EINVAL;158136}
+2-2
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_baco.h
···2020 * OTHER DEALINGS IN THE SOFTWARE.2121 *2222 */2323-#ifndef __VEGA10_BOCO_H__2424-#define __VEGA10_BOCO_H__2323+#ifndef __VEGA10_BACO_H__2424+#define __VEGA10_BACO_H__2525#include "hwmgr.h"2626#include "common_baco.h"2727
+25-3
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_baco.c
···11+/*22+ * Copyright 2018 Advanced Micro Devices, Inc.33+ *44+ * Permission is hereby granted, free of charge, to any person obtaining a55+ * copy of this software and associated documentation files (the "Software"),66+ * to deal in the Software without restriction, including without limitation77+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,88+ * and/or sell copies of the Software, and to permit persons to whom the99+ * Software is furnished to do so, subject to the following conditions:1010+ *1111+ * The above copyright notice and this permission notice shall be included in1212+ * all copies or substantial portions of the Software.1313+ *1414+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR1515+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,1616+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL1717+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR1818+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,1919+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR2020+ * OTHER DEALINGS IN THE SOFTWARE.2121+ *2222+ */123#include "amdgpu.h"224#include "soc15.h"325#include "soc15_hw_ip.h"···896790689169 if(smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_EnterBaco, 0))9292- return -1;7070+ return -EINVAL;93719472 } else if (state == BACO_STATE_OUT) {9573 if (smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ExitBaco))9696- return -1;7474+ return -EINVAL;9775 if (!soc15_baco_program_registers(hwmgr, clean_baco_tbl,9876 ARRAY_SIZE(clean_baco_tbl)))9999- return -1;7777+ return -EINVAL;10078 }1017910280 return 0;
+2-2
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_baco.h
···2020 * OTHER DEALINGS IN THE SOFTWARE.2121 *2222 */2323-#ifndef __VEGA20_BOCO_H__2424-#define __VEGA20_BOCO_H__2323+#ifndef __VEGA20_BACO_H__2424+#define __VEGA20_BACO_H__2525#include "hwmgr.h"2626#include "common_baco.h"2727
···145145 if (IS_ERR(dev))146146 return PTR_ERR(dev);147147148148+ ret = pci_enable_device(pdev);149149+ if (ret)150150+ goto err_free_dev;151151+148152 dev->pdev = pdev;149153 pci_set_drvdata(pdev, dev);150154
+9
drivers/gpu/drm/drm_atomic_helper.c
···16081608 old_plane_state->crtc != new_plane_state->crtc)16091609 return -EINVAL;1610161016111611+ /*16121612+ * FIXME: Since prepare_fb and cleanup_fb are always called on16131613+ * the new_plane_state for async updates we need to block framebuffer16141614+ * changes. This prevents use of a fb that's been cleaned up and16151615+ * double cleanups from occurring.16161616+ */16171617+ if (old_plane_state->fb != new_plane_state->fb)16181618+ return -EINVAL;16191619+16111620 funcs = plane->helper_private;16121621 if (!funcs->atomic_async_update)16131622 return -EINVAL;
+16-6
drivers/gpu/drm/drm_file.c
···262262 kfree(file);263263}264264265265+static void drm_close_helper(struct file *filp)266266+{267267+ struct drm_file *file_priv = filp->private_data;268268+ struct drm_device *dev = file_priv->minor->dev;269269+270270+ mutex_lock(&dev->filelist_mutex);271271+ list_del(&file_priv->lhead);272272+ mutex_unlock(&dev->filelist_mutex);273273+274274+ drm_file_free(file_priv);275275+}276276+265277static int drm_setup(struct drm_device * dev)266278{267279 int ret;···330318 goto err_undo;331319 if (need_setup) {332320 retcode = drm_setup(dev);333333- if (retcode)321321+ if (retcode) {322322+ drm_close_helper(filp);334323 goto err_undo;324324+ }335325 }336326 return 0;337327···487473488474 DRM_DEBUG("open_count = %d\n", dev->open_count);489475490490- mutex_lock(&dev->filelist_mutex);491491- list_del(&file_priv->lhead);492492- mutex_unlock(&dev->filelist_mutex);493493-494494- drm_file_free(file_priv);476476+ drm_close_helper(filp);495477496478 if (!--dev->open_count) {497479 drm_lastclose(dev);
+17-5
drivers/gpu/drm/drm_ioctl.c
···508508 return err;509509}510510511511+static inline bool512512+drm_render_driver_and_ioctl(const struct drm_device *dev, u32 flags)513513+{514514+ return drm_core_check_feature(dev, DRIVER_RENDER) &&515515+ (flags & DRM_RENDER_ALLOW);516516+}517517+511518/**512519 * drm_ioctl_permit - Check ioctl permissions against caller513520 *···529522 */530523int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)531524{525525+ const struct drm_device *dev = file_priv->minor->dev;526526+532527 /* ROOT_ONLY is only for CAP_SYS_ADMIN */533528 if (unlikely((flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)))534529 return -EACCES;535530536536- /* AUTH is only for authenticated or render client */537537- if (unlikely((flags & DRM_AUTH) && !drm_is_render_client(file_priv) &&538538- !file_priv->authenticated))539539- return -EACCES;531531+ /* AUTH is only for master ... */532532+ if (unlikely((flags & DRM_AUTH) && drm_is_primary_client(file_priv))) {533533+ /* authenticated ones, or render capable on DRM_RENDER_ALLOW. */534534+ if (!file_priv->authenticated &&535535+ !drm_render_driver_and_ioctl(dev, flags))536536+ return -EACCES;537537+ }540538541539 /* MASTER is only for master or control clients */542540 if (unlikely((flags & DRM_MASTER) &&···582570 DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_invalid_op, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),583571 DRM_IOCTL_DEF(DRM_IOCTL_BLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),584572 DRM_IOCTL_DEF(DRM_IOCTL_UNBLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),585585- DRM_IOCTL_DEF(DRM_IOCTL_AUTH_MAGIC, drm_authmagic, DRM_AUTH|DRM_UNLOCKED|DRM_MASTER),573573+ DRM_IOCTL_DEF(DRM_IOCTL_AUTH_MAGIC, drm_authmagic, DRM_UNLOCKED|DRM_MASTER),586574587575 DRM_IOCTL_DEF(DRM_IOCTL_ADD_MAP, drm_legacy_addmap_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY),588576 DRM_IOCTL_DEF(DRM_IOCTL_RM_MAP, drm_legacy_rmmap_ioctl, DRM_AUTH),
+2-1
drivers/gpu/drm/imx/Kconfig
···44 select VIDEOMODE_HELPERS55 select DRM_GEM_CMA_HELPER66 select DRM_KMS_CMA_HELPER77- depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM)77+ depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM || COMPILE_TEST)88 depends on IMX_IPUV3_CORE99 help1010 enable i.MX graphics support···1818config DRM_IMX_TVE1919 tristate "Support for TV and VGA displays"2020 depends on DRM_IMX2121+ depends on COMMON_CLK2122 select REGMAP_MMIO2223 help2324 Choose this to enable the internal Television Encoder (TVe)
+2-5
drivers/gpu/drm/imx/imx-drm-core.c
···4949{5050 int ret;51515252- ret = drm_atomic_helper_check_modeset(dev, state);5353- if (ret)5454- return ret;5555-5656- ret = drm_atomic_helper_check_planes(dev, state);5252+ ret = drm_atomic_helper_check(dev, state);5753 if (ret)5854 return ret;5955···225229 drm->mode_config.funcs = &imx_drm_mode_config_funcs;226230 drm->mode_config.helper_private = &imx_drm_mode_config_helpers;227231 drm->mode_config.allow_fb_modifiers = true;232232+ drm->mode_config.normalize_zpos = true;228233229234 drm_mode_config_init(drm);230235
+28-2
drivers/gpu/drm/imx/ipuv3-crtc.c
···3434 struct ipu_dc *dc;3535 struct ipu_di *di;3636 int irq;3737+ struct drm_pending_vblank_event *event;3738};38393940static inline struct ipu_crtc *to_ipu_crtc(struct drm_crtc *crtc)···174173static irqreturn_t ipu_irq_handler(int irq, void *dev_id)175174{176175 struct ipu_crtc *ipu_crtc = dev_id;176176+ struct drm_crtc *crtc = &ipu_crtc->base;177177+ unsigned long flags;178178+ int i;177179178178- drm_crtc_handle_vblank(&ipu_crtc->base);180180+ drm_crtc_handle_vblank(crtc);181181+182182+ if (ipu_crtc->event) {183183+ for (i = 0; i < ARRAY_SIZE(ipu_crtc->plane); i++) {184184+ struct ipu_plane *plane = ipu_crtc->plane[i];185185+186186+ if (!plane)187187+ continue;188188+189189+ if (ipu_plane_atomic_update_pending(&plane->base))190190+ break;191191+ }192192+193193+ if (i == ARRAY_SIZE(ipu_crtc->plane)) {194194+ spin_lock_irqsave(&crtc->dev->event_lock, flags);195195+ drm_crtc_send_vblank_event(crtc, ipu_crtc->event);196196+ ipu_crtc->event = NULL;197197+ drm_crtc_vblank_put(crtc);198198+ spin_unlock_irqrestore(&crtc->dev->event_lock, flags);199199+ }200200+ }179201180202 return IRQ_HANDLED;181203}···247223{248224 spin_lock_irq(&crtc->dev->event_lock);249225 if (crtc->state->event) {226226+ struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);227227+250228 WARN_ON(drm_crtc_vblank_get(crtc));251251- drm_crtc_arm_vblank_event(crtc, crtc->state->event);229229+ ipu_crtc->event = crtc->state->event;252230 crtc->state->event = NULL;253231 }254232 spin_unlock_irq(&crtc->dev->event_lock);
+51-25
drivers/gpu/drm/imx/ipuv3-plane.c
···273273274274static void ipu_plane_state_reset(struct drm_plane *plane)275275{276276+ unsigned int zpos = (plane->type == DRM_PLANE_TYPE_PRIMARY) ? 0 : 1;276277 struct ipu_plane_state *ipu_state;277278278279 if (plane->state) {···285284286285 ipu_state = kzalloc(sizeof(*ipu_state), GFP_KERNEL);287286288288- if (ipu_state)287287+ if (ipu_state) {289288 __drm_atomic_helper_plane_reset(plane, &ipu_state->base);289289+ ipu_state->base.zpos = zpos;290290+ ipu_state->base.normalized_zpos = zpos;291291+ }290292}291293292294static struct drm_plane_state *···564560 if (ipu_plane->dp_flow == IPU_DP_FLOW_SYNC_FG)565561 ipu_dp_set_window_pos(ipu_plane->dp, dst->x1, dst->y1);566562563563+ switch (ipu_plane->dp_flow) {564564+ case IPU_DP_FLOW_SYNC_BG:565565+ if (state->normalized_zpos == 1) {566566+ ipu_dp_set_global_alpha(ipu_plane->dp,567567+ !fb->format->has_alpha, 0xff,568568+ true);569569+ } else {570570+ ipu_dp_set_global_alpha(ipu_plane->dp, true, 0, true);571571+ }572572+ break;573573+ case IPU_DP_FLOW_SYNC_FG:574574+ if (state->normalized_zpos == 1) {575575+ ipu_dp_set_global_alpha(ipu_plane->dp,576576+ !fb->format->has_alpha, 0xff,577577+ false);578578+ }579579+ break;580580+ }581581+567582 eba = drm_plane_state_to_eba(state, 0);568583569584 /*···605582 active = ipu_idmac_get_current_buffer(ipu_plane->ipu_ch);606583 ipu_cpmem_set_buffer(ipu_plane->ipu_ch, !active, eba);607584 ipu_idmac_select_buffer(ipu_plane->ipu_ch, !active);585585+ ipu_plane->next_buf = !active;608586 if (ipu_plane_separate_alpha(ipu_plane)) {609587 active = ipu_idmac_get_current_buffer(ipu_plane->alpha_ch);610588 ipu_cpmem_set_buffer(ipu_plane->alpha_ch, !active,···619595 switch (ipu_plane->dp_flow) {620596 case IPU_DP_FLOW_SYNC_BG:621597 ipu_dp_setup_channel(ipu_plane->dp, ics, IPUV3_COLORSPACE_RGB);622622- ipu_dp_set_global_alpha(ipu_plane->dp, true, 0, true);623598 break;624599 case IPU_DP_FLOW_SYNC_FG:625600 ipu_dp_setup_channel(ipu_plane->dp, ics,626601 IPUV3_COLORSPACE_UNKNOWN);627627- /* 
Enable local alpha on partial plane */628628- switch (fb->format->format) {629629- case DRM_FORMAT_ARGB1555:630630- case DRM_FORMAT_ABGR1555:631631- case DRM_FORMAT_RGBA5551:632632- case DRM_FORMAT_BGRA5551:633633- case DRM_FORMAT_ARGB4444:634634- case DRM_FORMAT_ARGB8888:635635- case DRM_FORMAT_ABGR8888:636636- case DRM_FORMAT_RGBA8888:637637- case DRM_FORMAT_BGRA8888:638638- case DRM_FORMAT_RGB565_A8:639639- case DRM_FORMAT_BGR565_A8:640640- case DRM_FORMAT_RGB888_A8:641641- case DRM_FORMAT_BGR888_A8:642642- case DRM_FORMAT_RGBX8888_A8:643643- case DRM_FORMAT_BGRX8888_A8:644644- ipu_dp_set_global_alpha(ipu_plane->dp, false, 0, false);645645- break;646646- default:647647- ipu_dp_set_global_alpha(ipu_plane->dp, true, 0, true);648648- break;649649- }602602+ break;650603 }651604652605 ipu_dmfc_config_wait4eot(ipu_plane->dmfc, drm_rect_width(dst));···710709 ipu_cpmem_set_buffer(ipu_plane->ipu_ch, 1, eba);711710 ipu_idmac_lock_enable(ipu_plane->ipu_ch, num_bursts);712711 ipu_plane_enable(ipu_plane);712712+ ipu_plane->next_buf = -1;713713}714714715715static const struct drm_plane_helper_funcs ipu_plane_helper_funcs = {···720718 .atomic_update = ipu_plane_atomic_update,721719};722720721721+bool ipu_plane_atomic_update_pending(struct drm_plane *plane)722722+{723723+ struct ipu_plane *ipu_plane = to_ipu_plane(plane);724724+ struct drm_plane_state *state = plane->state;725725+ struct ipu_plane_state *ipu_state = to_ipu_plane_state(state);726726+727727+ /* disabled crtcs must not block the update */728728+ if (!state->crtc)729729+ return false;730730+731731+ if (ipu_state->use_pre)732732+ return ipu_prg_channel_configure_pending(ipu_plane->ipu_ch);733733+ else if (ipu_plane->next_buf >= 0)734734+ return ipu_idmac_get_current_buffer(ipu_plane->ipu_ch) !=735735+ ipu_plane->next_buf;736736+737737+ return false;738738+}723739int ipu_planes_assign_pre(struct drm_device *dev,724740 struct drm_atomic_state *state)725741{···826806{827807 struct ipu_plane *ipu_plane;828808 const 
uint64_t *modifiers = ipu_format_modifiers;809809+ unsigned int zpos = (type == DRM_PLANE_TYPE_PRIMARY) ? 0 : 1;829810 int ret;830811831812 DRM_DEBUG_KMS("channel %d, dp flow %d, possible_crtcs=0x%x\n",···856835 }857836858837 drm_plane_helper_add(&ipu_plane->base, &ipu_plane_helper_funcs);838838+839839+ if (dp == IPU_DP_FLOW_SYNC_BG || dp == IPU_DP_FLOW_SYNC_FG)840840+ drm_plane_create_zpos_property(&ipu_plane->base, zpos, 0, 1);841841+ else842842+ drm_plane_create_zpos_immutable_property(&ipu_plane->base, 0);859843860844 return ipu_plane;861845}
···48694869 pi->force_pcie_gen = RADEON_PCIE_GEN2;48704870 if (current_link_speed == RADEON_PCIE_GEN2)48714871 break;48724872+ /* fall through */48724873 case RADEON_PCIE_GEN2:48734874 if (radeon_acpi_pcie_performance_request(rdev, PCIE_PERF_REQ_PECI_GEN2, false) == 0)48744875 break;48754876#endif48774877+ /* fall through */48764878 default:48774879 pi->force_pcie_gen = ci_get_current_pcie_speed(rdev);48784880 break;
···57625762 si_pi->force_pcie_gen = RADEON_PCIE_GEN2;57635763 if (current_link_speed == RADEON_PCIE_GEN2)57645764 break;57655765+ /* fall through */57655766 case RADEON_PCIE_GEN2:57665767 if (radeon_acpi_pcie_performance_request(rdev, PCIE_PERF_REQ_PECI_GEN2, false) == 0)57675768 break;57685769#endif57705770+ /* fall through */57695771 default:57705772 si_pi->force_pcie_gen = si_get_current_pcie_speed(rdev);57715773 break;
+26-13
drivers/gpu/drm/scheduler/sched_entity.c
···5252{5353 int i;54545555- if (!(entity && rq_list && num_rq_list > 0 && rq_list[0]))5555+ if (!(entity && rq_list && (num_rq_list == 0 || rq_list[0])))5656 return -EINVAL;57575858 memset(entity, 0, sizeof(struct drm_sched_entity));5959 INIT_LIST_HEAD(&entity->list);6060- entity->rq = rq_list[0];6060+ entity->rq = NULL;6161 entity->guilty = guilty;6262 entity->num_rq_list = num_rq_list;6363 entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),···67676868 for (i = 0; i < num_rq_list; ++i)6969 entity->rq_list[i] = rq_list[i];7070+7171+ if (num_rq_list)7272+ entity->rq = rq_list[0];7373+7074 entity->last_scheduled = NULL;71757276 spin_lock_init(&entity->rq_lock);···168164 struct drm_gpu_scheduler *sched;169165 struct task_struct *last_user;170166 long ret = timeout;167167+168168+ if (!entity->rq)169169+ return 0;171170172171 sched = entity->rq->sched;173172 /**···271264 */272265void drm_sched_entity_fini(struct drm_sched_entity *entity)273266{274274- struct drm_gpu_scheduler *sched;267267+ struct drm_gpu_scheduler *sched = NULL;275268276276- sched = entity->rq->sched;277277- drm_sched_rq_remove_entity(entity->rq, entity);269269+ if (entity->rq) {270270+ sched = entity->rq->sched;271271+ drm_sched_rq_remove_entity(entity->rq, entity);272272+ }278273279274 /* Consumption of existing IBs wasn't completed. 
Forcefully280275 * remove them here.281276 */282277 if (spsc_queue_peek(&entity->job_queue)) {283283- /* Park the kernel for a moment to make sure it isn't processing284284- * our enity.285285- */286286- kthread_park(sched->thread);287287- kthread_unpark(sched->thread);278278+ if (sched) {279279+ /* Park the kernel for a moment to make sure it isn't processing280280+ * our entity.281281+ */282282+ kthread_park(sched->thread);283283+ kthread_unpark(sched->thread);284284+ }288285 if (entity->dependency) {289286 dma_fence_remove_callback(entity->dependency,290287 &entity->cb);···373362 for (i = 0; i < entity->num_rq_list; ++i)374363 drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority);375364376376- drm_sched_rq_remove_entity(entity->rq, entity);377377- drm_sched_entity_set_rq_priority(&entity->rq, priority);378378- drm_sched_rq_add_entity(entity->rq, entity);365365+ if (entity->rq) {366366+ drm_sched_rq_remove_entity(entity->rq, entity);367367+ drm_sched_entity_set_rq_priority(&entity->rq, priority);368368+ drm_sched_rq_add_entity(entity->rq, entity);369369+ }379370380371 spin_unlock(&entity->rq_lock);381372}
···212212 * Going to have to check what details I need to set and how to213213 * get them214214 */215215- mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFn", dev->of_node);215215+ mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);216216 mtd->type = MTD_NORFLASH;217217 mtd->flags = MTD_WRITEABLE;218218 mtd->size = size;
···11831183 }11841184 }1185118511861186- /* Link-local multicast packets should be passed to the11871187- * stack on the link they arrive as well as pass them to the11881188- * bond-master device. These packets are mostly usable when11891189- * stack receives it with the link on which they arrive11901190- * (e.g. LLDP) they also must be available on master. Some of11911191- * the use cases include (but are not limited to): LLDP agents11921192- * that must be able to operate both on enslaved interfaces as11931193- * well as on bonds themselves; linux bridges that must be able11941194- * to process/pass BPDUs from attached bonds when any kind of11951195- * STP version is enabled on the network.11861186+ /*11871187+ * For packets determined by bond_should_deliver_exact_match() call to11881188+ * be suppressed we want to make an exception for link-local packets.11891189+ * This is necessary for e.g. LLDP daemons to be able to monitor11901190+ * inactive slave links without being forced to bind to them11911191+ * explicitly.11921192+ *11931193+ * At the same time, packets that are passed to the bonding master11941194+ * (including link-local ones) can have their originating interface11951195+ * determined via PACKET_ORIGDEV socket option.11961196 */11971197- if (is_link_local_ether_addr(eth_hdr(skb)->h_dest)) {11981198- struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);11991199-12001200- if (nskb) {12011201- nskb->dev = bond->dev;12021202- nskb->queue_mapping = 0;12031203- netif_rx(nskb);12041204- }12051205- return RX_HANDLER_PASS;12061206- }12071207- if (bond_should_deliver_exact_match(skb, slave, bond))11971197+ if (bond_should_deliver_exact_match(skb, slave, bond)) {11981198+ if (is_link_local_ether_addr(eth_hdr(skb)->h_dest))11991199+ return RX_HANDLER_PASS;12081200 return RX_HANDLER_EXACT;12011201+ }1209120212101203 skb->dev = bond->dev;12111204
+71-19
drivers/net/dsa/b53/b53_common.c
···
344344 b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, mgmt);
345345}
346346
347347-static void b53_enable_vlan(struct b53_device *dev, bool enable)
347347+static void b53_enable_vlan(struct b53_device *dev, bool enable,
348348+ bool enable_filtering)
348349{
349350 u8 mgmt, vc0, vc1, vc4 = 0, vc5;
350351
···
370369 vc0 |= VC0_VLAN_EN | VC0_VID_CHK_EN | VC0_VID_HASH_VID;
371370 vc1 |= VC1_RX_MCST_UNTAG_EN | VC1_RX_MCST_FWD_EN;
372371 vc4 &= ~VC4_ING_VID_CHECK_MASK;
373373- vc4 |= VC4_ING_VID_VIO_DROP << VC4_ING_VID_CHECK_S;
374374- vc5 |= VC5_DROP_VTABLE_MISS;
372372+ if (enable_filtering) {
373373+ vc4 |= VC4_ING_VID_VIO_DROP << VC4_ING_VID_CHECK_S;
374374+ vc5 |= VC5_DROP_VTABLE_MISS;
375375+ } else {
376376+ vc4 |= VC4_ING_VID_VIO_FWD << VC4_ING_VID_CHECK_S;
377377+ vc5 &= ~VC5_DROP_VTABLE_MISS;
378378+ }
375379
376380 if (is5325(dev))
377381 vc0 &= ~VC0_RESERVED_1;
···
426420 }
427421
428422 b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_MODE, mgmt);
423423+
424424+ dev->vlan_enabled = enable;
425425+ dev->vlan_filtering_enabled = enable_filtering;
429426}
430427
431428static int b53_set_jumbo(struct b53_device *dev, bool enable, bool allow_10_100)
···
641632 b53_write8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, gc);
642633}
643634
635635+static u16 b53_default_pvid(struct b53_device *dev)
636636+{
637637+ if (is5325(dev) || is5365(dev))
638638+ return 1;
639639+ else
640640+ return 0;
641641+}
642642+
644643int b53_configure_vlan(struct dsa_switch *ds)
645644{
646645 struct b53_device *dev = ds->priv;
647646 struct b53_vlan vl = { 0 };
648648- int i;
647647+ int i, def_vid;
648648+
649649+ def_vid = b53_default_pvid(dev);
649650
650651 /* clear all vlan entries */
651652 if (is5325(dev) || is5365(dev)) {
652652- for (i = 1; i < dev->num_vlans; i++)
653653+ for (i = def_vid; i < dev->num_vlans; i++)
653654 b53_set_vlan_entry(dev, i, &vl);
654655 } else {
655656 b53_do_vlan_op(dev, VTA_CMD_CLEAR);
656657 }
657658
658658- b53_enable_vlan(dev, false);
659659+ b53_enable_vlan(dev, false, dev->vlan_filtering_enabled);
659660
660661 b53_for_each_port(dev, i)
661662 b53_write16(dev, B53_VLAN_PAGE,
662662- B53_VLAN_PORT_DEF_TAG(i), 1);
663663+ B53_VLAN_PORT_DEF_TAG(i), def_vid);
663664
664665 if (!is5325(dev) && !is5365(dev))
665666 b53_set_jumbo(dev, dev->enable_jumbo, false);
···
12741255
12751256int b53_vlan_filtering(struct dsa_switch *ds, int port, bool vlan_filtering)
12761257{
12581258+ struct b53_device *dev = ds->priv;
12591259+ struct net_device *bridge_dev;
12601260+ unsigned int i;
12611261+ u16 pvid, new_pvid;
12621262+
12631263+ /* Handle the case were multiple bridges span the same switch device
12641264+ * and one of them has a different setting than what is being requested
12651265+ * which would be breaking filtering semantics for any of the other
12661266+ * bridge devices.
12671267+ */
12681268+ b53_for_each_port(dev, i) {
12691269+ bridge_dev = dsa_to_port(ds, i)->bridge_dev;
12701270+ if (bridge_dev &&
12711271+ bridge_dev != dsa_to_port(ds, port)->bridge_dev &&
12721272+ br_vlan_enabled(bridge_dev) != vlan_filtering) {
12731273+ netdev_err(bridge_dev,
12741274+ "VLAN filtering is global to the switch!\n");
12751275+ return -EINVAL;
12761276+ }
12771277+ }
12781278+
12791279+ b53_read16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port), &pvid);
12801280+ new_pvid = pvid;
12811281+ if (dev->vlan_filtering_enabled && !vlan_filtering) {
12821282+ /* Filtering is currently enabled, use the default PVID since
12831283+ * the bridge does not expect tagging anymore
12841284+ */
12851285+ dev->ports[port].pvid = pvid;
12861286+ new_pvid = b53_default_pvid(dev);
12871287+ } else if (!dev->vlan_filtering_enabled && vlan_filtering) {
12881288+ /* Filtering is currently disabled, restore the previous PVID */
12891289+ new_pvid = dev->ports[port].pvid;
12901290+ }
12911291+
12921292+ if (pvid != new_pvid)
12931293+ b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port),
12941294+ new_pvid);
12951295+
12961296+ b53_enable_vlan(dev, dev->vlan_enabled, vlan_filtering);
12971297+
12771298 return 0;
12781299}
12791300EXPORT_SYMBOL(b53_vlan_filtering);
···
13291270 if (vlan->vid_end > dev->num_vlans)
13301271 return -ERANGE;
13311272
13321332- b53_enable_vlan(dev, true);
12731273+ b53_enable_vlan(dev, true, dev->vlan_filtering_enabled);
13331274
13341275 return 0;
13351276}
···
13591300 b53_fast_age_vlan(dev, vid);
13601301 }
13611302
13621362- if (pvid) {
13031303+ if (pvid && !dsa_is_cpu_port(ds, port)) {
13631304 b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port),
13641305 vlan->vid_end);
13651306 b53_fast_age_vlan(dev, vid);
···
13851326
13861327 vl->members &= ~BIT(port);
13871328
13881388- if (pvid == vid) {
13891389- if (is5325(dev) || is5365(dev))
13901390- pvid = 1;
13911391- else
13921392- pvid = 0;
13931393- }
13291329+ if (pvid == vid)
13301330+ pvid = b53_default_pvid(dev);
13941331
13951332 if (untagged && !dsa_is_cpu_port(ds, port))
13961333 vl->untag &= ~(BIT(port));
···
16991644 b53_write16(dev, B53_PVLAN_PAGE, B53_PVLAN_PORT_MASK(port), pvlan);
17001645 dev->ports[port].vlan_ctl_mask = pvlan;
17011646
17021702- if (is5325(dev) || is5365(dev))
17031703- pvid = 1;
17041704- else
17051705- pvid = 0;
16471647+ pvid = b53_default_pvid(dev);
17061648
17071649 /* Make this port join all VLANs without VLAN entries */
17081650 if (is58xx(dev)) {
···
336336 unsigned int cclk_ps; /* Core clock period in psec */
337337 unsigned short udb_density; /* # of user DB/page */
338338 unsigned short ucq_density; /* # of user CQs/page */
339339+ unsigned int sge_host_page_size; /* SGE host page size */
339340 unsigned short filt_mode; /* filter optional components */
340341 unsigned short tx_modq[NCHAN]; /* maps each tx channel to a */
341342 /* scheduler queue */
···
32893289 i40e_alloc_rx_buffers_zc(ring, I40E_DESC_UNUSED(ring)) :
32903290 !i40e_alloc_rx_buffers(ring, I40E_DESC_UNUSED(ring));
32913291 if (!ok) {
32923292+ /* Log this in case the user has forgotten to give the kernel
32933293+ * any buffers, even later in the application.
32943294+ */
32923295 dev_info(&vsi->back->pdev->dev,
32933293- "Failed allocate some buffers on %sRx ring %d (pf_q %d)\n",
32963296+ "Failed to allocate some buffers on %sRx ring %d (pf_q %d)\n",
32943297 ring->xsk_umem ? "UMEM enabled " : "",
32953298 ring->queue_index, pf_q);
32963299 }
···
67286725
67296726 for (i = 0; i < vsi->num_queue_pairs; i++) {
67306727 i40e_clean_tx_ring(vsi->tx_rings[i]);
67316731- if (i40e_enabled_xdp_vsi(vsi))
67286728+ if (i40e_enabled_xdp_vsi(vsi)) {
67296729+ /* Make sure that in-progress ndo_xdp_xmit
67306730+ * calls are completed.
67316731+ */
67326732+ synchronize_rcu();
67326733 i40e_clean_tx_ring(vsi->xdp_rings[i]);
67346734+ }
67336735 i40e_clean_rx_ring(vsi->rx_rings[i]);
67346736 }
67356737
···
1190311895 if (old_prog)
1190411896 bpf_prog_put(old_prog);
1190511897
1189811898+ /* Kick start the NAPI context if there is an AF_XDP socket open
1189911899+ * on that queue id. This so that receiving will start.
1190011900+ */
1190111901+ if (need_reset && prog)
1190211902+ for (i = 0; i < vsi->num_queue_pairs; i++)
1190311903+ if (vsi->xdp_rings[i]->xsk_umem)
1190411904+ (void)i40e_xsk_async_xmit(vsi->netdev, i);
1190511905+
1190611906 return 0;
1190711907}
1190811908
···
1197111955static void i40e_queue_pair_clean_rings(struct i40e_vsi *vsi, int queue_pair)
1197211956{
1197311957 i40e_clean_tx_ring(vsi->tx_rings[queue_pair]);
1197411974- if (i40e_enabled_xdp_vsi(vsi))
1195811958+ if (i40e_enabled_xdp_vsi(vsi)) {
1195911959+ /* Make sure that in-progress ndo_xdp_xmit calls are
1196011960+ * completed.
1196111961+ */
1196211962+ synchronize_rcu();
1197511963 i40e_clean_tx_ring(vsi->xdp_rings[queue_pair]);
1196411964+ }
1197611965 i40e_clean_rx_ring(vsi->rx_rings[queue_pair]);
1197711966}
1197811967
+3-1
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···
37093709 struct i40e_netdev_priv *np = netdev_priv(dev);
37103710 unsigned int queue_index = smp_processor_id();
37113711 struct i40e_vsi *vsi = np->vsi;
37123712+ struct i40e_pf *pf = vsi->back;
37123713 struct i40e_ring *xdp_ring;
37133714 int drops = 0;
37143715 int i;
···
37173716 if (test_bit(__I40E_VSI_DOWN, vsi->state))
37183717 return -ENETDOWN;
37193718
37203720- if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs)
37193719+ if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs ||
37203720+ test_bit(__I40E_CONFIG_BUSY, pf->state))
37213721 return -ENXIO;
37223722
37233723 if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+5
drivers/net/ethernet/intel/i40e/i40e_xsk.c
···
183183 err = i40e_queue_pair_enable(vsi, qid);
184184 if (err)
185185 return err;
186186+
187187+ /* Kick start the NAPI context so that receiving will start */
188188+ err = i40e_xsk_async_xmit(vsi->netdev, qid);
189189+ if (err)
190190+ return err;
186191 }
187192
188193 return 0;
+16-3
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
···
39533953 else
39543954 mrqc = IXGBE_MRQC_VMDQRSS64EN;
39553955
39563956- /* Enable L3/L4 for Tx Switched packets */
39573957- mrqc |= IXGBE_MRQC_L3L4TXSWEN;
39563956+ /* Enable L3/L4 for Tx Switched packets only for X550,
39573957+ * older devices do not support this feature
39583958+ */
39593959+ if (hw->mac.type >= ixgbe_mac_X550)
39603960+ mrqc |= IXGBE_MRQC_L3L4TXSWEN;
39583961 } else {
39593962 if (tcs > 4)
39603963 mrqc = IXGBE_MRQC_RTRSS8TCEN;
···
1022810225 int i, frame_size = dev->mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
1022910226 struct ixgbe_adapter *adapter = netdev_priv(dev);
1023010227 struct bpf_prog *old_prog;
1022810228+ bool need_reset;
1023110229
1023210230 if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)
1023310231 return -EINVAL;
···
1025110247 return -ENOMEM;
1025210248
1025310249 old_prog = xchg(&adapter->xdp_prog, prog);
1025010250+ need_reset = (!!prog != !!old_prog);
1025410251
1025510252 /* If transitioning XDP modes reconfigure rings */
1025610256- if (!!prog != !!old_prog) {
1025310253+ if (need_reset) {
1025710254 int err = ixgbe_setup_tc(dev, adapter->hw_tcs);
1025810255
1025910256 if (err) {
···
1026910264
1027010265 if (old_prog)
1027110266 bpf_prog_put(old_prog);
1026710267+
1026810268+ /* Kick start the NAPI context if there is an AF_XDP socket open
1026910269+ * on that queue id. This so that receiving will start.
1027010270+ */
1027110271+ if (need_reset && prog)
1027210272+ for (i = 0; i < adapter->num_rx_queues; i++)
1027310273+ if (adapter->xdp_ring[i]->xsk_umem)
1027410274+ (void)ixgbe_xsk_async_xmit(adapter->netdev, i);
1027210275
1027310276 return 0;
1027410277}
+12-3
drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
···
144144 ixgbe_txrx_ring_disable(adapter, qid);
145145
146146 err = ixgbe_add_xsk_umem(adapter, umem, qid);
147147+ if (err)
148148+ return err;
147149
148148- if (if_running)
150150+ if (if_running) {
149151 ixgbe_txrx_ring_enable(adapter, qid);
150152
151151- return err;
153153+ /* Kick start the NAPI context so that receiving will start */
154154+ err = ixgbe_xsk_async_xmit(adapter->netdev, qid);
155155+ if (err)
156156+ return err;
157157+ }
158158+
159159+ return 0;
152160}
153161
154162static int ixgbe_xsk_umem_disable(struct ixgbe_adapter *adapter, u16 qid)
···
642634 dma_addr_t dma;
643635
644636 while (budget-- > 0) {
645645- if (unlikely(!ixgbe_desc_unused(xdp_ring))) {
637637+ if (unlikely(!ixgbe_desc_unused(xdp_ring)) ||
638638+ !netif_carrier_ok(xdp_ring->netdev)) {
646639 work_done = false;
647640 break;
648641 }
+6-1
drivers/net/ethernet/marvell/mv643xx_eth.c
···
28792879
28802880 ret = mv643xx_eth_shared_of_probe(pdev);
28812881 if (ret)
28822882- return ret;
28822882+ goto err_put_clk;
28832883 pd = dev_get_platdata(&pdev->dev);
28842884
28852885 msp->tx_csum_limit = (pd != NULL && pd->tx_csum_limit) ?
···
28872887 infer_hw_params(msp);
28882888
28892889 return 0;
28902890+
28912891+err_put_clk:
28922892+ if (!IS_ERR(msp->clk))
28932893+ clk_disable_unprepare(msp->clk);
28942894+ return ret;
28902895}
28912896
28922897static int mv643xx_eth_shared_remove(struct platform_device *pdev)
···
696696 struct ethtool_eee *edata)
697697{
698698 struct stmmac_priv *priv = netdev_priv(dev);
699699+ int ret;
699700
700700- priv->eee_enabled = edata->eee_enabled;
701701-
702702- if (!priv->eee_enabled)
701701+ if (!edata->eee_enabled) {
703702 stmmac_disable_eee_mode(priv);
704704- else {
703703+ } else {
705704 /* We are asking for enabling the EEE but it is safe
706705 * to verify all by invoking the eee_init function.
707706 * In case of failure it will return an error.
708707 */
709709- priv->eee_enabled = stmmac_eee_init(priv);
710710- if (!priv->eee_enabled)
708708+ edata->eee_enabled = stmmac_eee_init(priv);
709709+ if (!edata->eee_enabled)
711710 return -EOPNOTSUPP;
712712-
713713- /* Do not change tx_lpi_timer in case of failure */
714714- priv->tx_lpi_timer = edata->tx_lpi_timer;
715711 }
716712
717717- return phy_ethtool_set_eee(dev->phydev, edata);
713713+ ret = phy_ethtool_set_eee(dev->phydev, edata);
714714+ if (ret)
715715+ return ret;
716716+
717717+ priv->eee_enabled = edata->eee_enabled;
718718+ priv->tx_lpi_timer = edata->tx_lpi_timer;
719719+ return 0;
718720}
719721
720722static u32 stmmac_usec2riwt(u32 usec, struct stmmac_priv *priv)
+1-1
drivers/net/ethernet/ti/netcp_core.c
···
259259 const char *name;
260260 char node_name[32];
261261
262262- if (of_property_read_string(node, "label", &name) < 0) {
262262+ if (of_property_read_string(child, "label", &name) < 0) {
263263 snprintf(node_name, sizeof(node_name), "%pOFn", child);
264264 name = node_name;
265265 }
+8-3
drivers/net/geneve.c
···
692692static int geneve_open(struct net_device *dev)
693693{
694694 struct geneve_dev *geneve = netdev_priv(dev);
695695- bool ipv6 = !!(geneve->info.mode & IP_TUNNEL_INFO_IPV6);
696695 bool metadata = geneve->collect_md;
696696+ bool ipv4, ipv6;
697697 int ret = 0;
698698
699699+ ipv6 = geneve->info.mode & IP_TUNNEL_INFO_IPV6 || metadata;
700700+ ipv4 = !ipv6 || metadata;
699701#if IS_ENABLED(CONFIG_IPV6)
700700- if (ipv6 || metadata)
702702+ if (ipv6) {
701703 ret = geneve_sock_add(geneve, true);
704704+ if (ret < 0 && ret != -EAFNOSUPPORT)
705705+ ipv4 = false;
706706+ }
702707#endif
703703- if (!ret && (!ipv6 || metadata))
708708+ if (ipv4)
704709 ret = geneve_sock_add(geneve, false);
705710 if (ret < 0)
706711 geneve_sock_release(geneve);
+19-3
drivers/net/hyperv/netvsc_drv.c
···
744744 schedule_delayed_work(&ndev_ctx->dwork, 0);
745745}
746746
747747+static void netvsc_comp_ipcsum(struct sk_buff *skb)
748748+{
749749+ struct iphdr *iph = (struct iphdr *)skb->data;
750750+
751751+ iph->check = 0;
752752+ iph->check = ip_fast_csum(iph, iph->ihl);
753753+}
754754+
747755static struct sk_buff *netvsc_alloc_recv_skb(struct net_device *net,
748756 struct netvsc_channel *nvchan)
749757{
···
778770 /* skb is already created with CHECKSUM_NONE */
779771 skb_checksum_none_assert(skb);
780772
781781- /*
782782- * In Linux, the IP checksum is always checked.
783783- * Do L4 checksum offload if enabled and present.
773773+ /* Incoming packets may have IP header checksum verified by the host.
774774+ * They may not have IP header checksum computed after coalescing.
775775+ * We compute it here if the flags are set, because on Linux, the IP
776776+ * checksum is always checked.
777777+ */
778778+ if (csum_info && csum_info->receive.ip_checksum_value_invalid &&
779779+ csum_info->receive.ip_checksum_succeeded &&
780780+ skb->protocol == htons(ETH_P_IP))
781781+ netvsc_comp_ipcsum(skb);
782782+
783783+ /* Do L4 checksum offload if enabled and present.
784784 */
785785 if (csum_info && (net->features & NETIF_F_RXCSUM)) {
786786 if (csum_info->receive.tcp_checksum_succeeded ||
+4
drivers/net/ipvlan/ipvlan_main.c
···
499499
500500 if (!data)
501501 return 0;
502502+ if (!ns_capable(dev_net(ipvlan->phy_dev)->user_ns, CAP_NET_ADMIN))
503503+ return -EPERM;
502504
503505 if (data[IFLA_IPVLAN_MODE]) {
504506 u16 nmode = nla_get_u16(data[IFLA_IPVLAN_MODE]);
···
603601 struct ipvl_dev *tmp = netdev_priv(phy_dev);
604602
605603 phy_dev = tmp->phy_dev;
604604+ if (!ns_capable(dev_net(phy_dev)->user_ns, CAP_NET_ADMIN))
605605+ return -EPERM;
606606 } else if (!netif_is_ipvlan_port(phy_dev)) {
607607 /* Exit early if the underlying link is invalid or busy */
608608 if (phy_dev->type != ARPHRD_ETHER ||
···
282282 .name = "RTL8366RB Gigabit Ethernet",
283283 .features = PHY_GBIT_FEATURES,
284284 .config_init = &rtl8366rb_config_init,
285285+ /* These interrupts are handled by the irq controller
286286+ * embedded inside the RTL8366RB, they get unmasked when the
287287+ * irq is requested and ACKed by reading the status register,
288288+ * which is done by the irqchip code.
289289+ */
290290+ .ack_interrupt = genphy_no_ack_interrupt,
291291+ .config_intr = genphy_no_config_intr,
285292 .suspend = genphy_suspend,
286293 .resume = genphy_resume,
287294 },
+4-1
drivers/net/phy/xilinx_gmii2rgmii.c
···
4444 u16 val = 0;
4545 int err;
4646
4747- err = priv->phy_drv->read_status(phydev);
4747+ if (priv->phy_drv->read_status)
4848+ err = priv->phy_drv->read_status(phydev);
4949+ else
5050+ err = genphy_read_status(phydev);
4851 if (err < 0)
4952 return err;
5053
+2-2
drivers/net/team/team.c
···
12561256 list_add_tail_rcu(&port->list, &team->port_list);
12571257 team_port_enable(team, port);
12581258 __team_compute_features(team);
12591259- __team_port_change_port_added(port, !!netif_carrier_ok(port_dev));
12591259+ __team_port_change_port_added(port, !!netif_oper_up(port_dev));
12601260 __team_options_change_check(team);
12611261
12621262 netdev_info(dev, "Port device %s added\n", portname);
···
29152915
29162916 switch (event) {
29172917 case NETDEV_UP:
29182918- if (netif_carrier_ok(dev))
29182918+ if (netif_oper_up(dev))
29192919 team_port_change_check(port, true);
29202920 break;
29212921 case NETDEV_DOWN:
···
12011201 {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */
12021202 {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */
12031203 {QMI_FIXED_INTF(0x1199, 0x68a2, 19)}, /* Sierra Wireless MC7710 in QMI mode */
12041204- {QMI_FIXED_INTF(0x1199, 0x68c0, 8)}, /* Sierra Wireless MC7304/MC7354 */
12051205- {QMI_FIXED_INTF(0x1199, 0x68c0, 10)}, /* Sierra Wireless MC7304/MC7354 */
12041204+ {QMI_QUIRK_SET_DTR(0x1199, 0x68c0, 8)}, /* Sierra Wireless MC7304/MC7354, WP76xx */
12051205+ {QMI_QUIRK_SET_DTR(0x1199, 0x68c0, 10)},/* Sierra Wireless MC7304/MC7354 */
12061206 {QMI_FIXED_INTF(0x1199, 0x901c, 8)}, /* Sierra Wireless EM7700 */
12071207 {QMI_FIXED_INTF(0x1199, 0x901f, 8)}, /* Sierra Wireless EM7355 */
12081208 {QMI_FIXED_INTF(0x1199, 0x9041, 8)}, /* Sierra Wireless MC7305/MC7355 */
+3-2
drivers/net/usb/r8152.c
···
557557/* MAC PASSTHRU */
558558#define AD_MASK 0xfee0
559559#define BND_MASK 0x0004
560560+#define BD_MASK 0x0001
560561#define EFUSE 0xcfdb
561562#define PASS_THRU_MASK 0x1
562563
···
11771176 return -ENODEV;
11781177 }
11791178 } else {
11801180- /* test for RTL8153-BND */
11791179+ /* test for RTL8153-BND and RTL8153-BD */
11811180 ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, USB_MISC_1);
11821182- if ((ocp_data & BND_MASK) == 0) {
11811181+ if ((ocp_data & BND_MASK) == 0 && (ocp_data & BD_MASK) == 0) {
11831182 netif_dbg(tp, probe, tp->netdev,
11841183 "Invalid variant for MAC pass through\n");
11851184 return -ENODEV;
+3
drivers/net/vrf.c
···
12731273
12741274 /* default to no qdisc; user can add if desired */
12751275 dev->priv_flags |= IFF_NO_QUEUE;
12761276+
12771277+ dev->min_mtu = 0;
12781278+ dev->max_mtu = 0;
12761279}
12771280
12781281static int vrf_validate(struct nlattr *tb[], struct nlattr *data[],
···
158158 .get_txpower = mt76x02_get_txpower,
159159};
160160
161161+static int mt76x0u_init_hardware(struct mt76x02_dev *dev)
162162+{
163163+ int err;
164164+
165165+ mt76x0_chip_onoff(dev, true, true);
166166+
167167+ if (!mt76x02_wait_for_mac(&dev->mt76))
168168+ return -ETIMEDOUT;
169169+
170170+ err = mt76x0u_mcu_init(dev);
171171+ if (err < 0)
172172+ return err;
173173+
174174+ mt76x0_init_usb_dma(dev);
175175+ err = mt76x0_init_hardware(dev);
176176+ if (err < 0)
177177+ return err;
178178+
179179+ mt76_rmw(dev, MT_US_CYC_CFG, MT_US_CYC_CNT, 0x1e);
180180+ mt76_wr(dev, MT_TXOP_CTRL_CFG,
181181+ FIELD_PREP(MT_TXOP_TRUN_EN, 0x3f) |
182182+ FIELD_PREP(MT_TXOP_EXT_CCA_DLY, 0x58));
183183+
184184+ return 0;
185185+}
186186+
161187static int mt76x0u_register_device(struct mt76x02_dev *dev)
162188{
163189 struct ieee80211_hw *hw = dev->mt76.hw;
···
197171 if (err < 0)
198172 goto out_err;
199173
200200- mt76x0_chip_onoff(dev, true, true);
201201- if (!mt76x02_wait_for_mac(&dev->mt76)) {
202202- err = -ETIMEDOUT;
203203- goto out_err;
204204- }
205205-
206206- err = mt76x0u_mcu_init(dev);
174174+ err = mt76x0u_init_hardware(dev);
207175 if (err < 0)
208176 goto out_err;
209209-
210210- mt76x0_init_usb_dma(dev);
211211- err = mt76x0_init_hardware(dev);
212212- if (err < 0)
213213- goto out_err;
214214-
215215- mt76_rmw(dev, MT_US_CYC_CFG, MT_US_CYC_CNT, 0x1e);
216216- mt76_wr(dev, MT_TXOP_CTRL_CFG,
217217- FIELD_PREP(MT_TXOP_TRUN_EN, 0x3f) |
218218- FIELD_PREP(MT_TXOP_EXT_CCA_DLY, 0x58));
219177
220178 err = mt76x0_register_device(dev);
221179 if (err < 0)
···
311301
312302 mt76u_stop_queues(&dev->mt76);
313303 mt76x0u_mac_stop(dev);
304304+ clear_bit(MT76_STATE_MCU_RUNNING, &dev->mt76.state);
305305+ mt76x0_chip_onoff(dev, false, false);
314306 usb_kill_urb(usb->mcu.res.urb);
315307
316308 return 0;
···
340328 tasklet_enable(&usb->rx_tasklet);
341329 tasklet_enable(&usb->tx_tasklet);
342330
343343- ret = mt76x0_init_hardware(dev);
331331+ ret = mt76x0u_init_hardware(dev);
344332 if (ret)
345333 goto err;
346334
+2
drivers/net/xen-netback/hash.c
···
454454 if (xenvif_hash_cache_size == 0)
455455 return;
456456
457457+ BUG_ON(vif->hash.cache.count);
458458+
457459 spin_lock_init(&vif->hash.cache.lock);
458460 INIT_LIST_HEAD(&vif->hash.cache.list);
459461}
+7
drivers/net/xen-netback/interface.c
···
153153{
154154 struct xenvif *vif = netdev_priv(dev);
155155 unsigned int size = vif->hash.size;
156156+ unsigned int num_queues;
157157+
158158+ /* If queues are not set up internally - always return 0
159159+ * as the packet going to be dropped anyway */
160160+ num_queues = READ_ONCE(vif->num_queues);
161161+ if (num_queues < 1)
162162+ return 0;
156163
157164 if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE)
158165 return fallback(dev, skb, NULL) % dev->real_num_tx_queues;
+5-5
drivers/net/xen-netback/netback.c
···
10721072 skb_frag_size_set(&frags[i], len);
10731073 }
10741074
10751075- /* Copied all the bits from the frag list -- free it. */
10761076- skb_frag_list_init(skb);
10771077- xenvif_skb_zerocopy_prepare(queue, nskb);
10781078- kfree_skb(nskb);
10791079-
10801075 /* Release all the original (foreign) frags. */
10811076 for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
10821077 skb_frag_unref(skb, f);
···
11401145 xenvif_fill_frags(queue, skb);
11411146
11421147 if (unlikely(skb_has_frag_list(skb))) {
11481148+ struct sk_buff *nskb = skb_shinfo(skb)->frag_list;
11491149+ xenvif_skb_zerocopy_prepare(queue, nskb);
11431150 if (xenvif_handle_frag_list(queue, skb)) {
11441151 if (net_ratelimit())
11451152 netdev_err(queue->vif->dev,
···
11501153 kfree_skb(skb);
11511154 continue;
11521155 }
11561156+ /* Copied all the bits from the frag list -- free it. */
11571157+ skb_frag_list_init(skb);
11581158+ kfree_skb(nskb);
11531159 }
11541160
11551161 skb->dev = queue->vif->dev;
···
73617361 unsigned long bar0map_len, bar2map_len;
73627362 int i, hbq_count;
73637363 void *ptr;
73647364- int error = -ENODEV;
73647364+ int error;
73657365
73667366 if (!pdev)
73677367- return error;
73677367+ return -ENODEV;
73687368
73697369 /* Set the device DMA mask size */
73707370- if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) ||
73717371- dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
73707370+ error = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
73717371+ if (error)
73727372+ error = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
73737373+ if (error)
73723374 return error;
73753375+ error = -ENODEV;
73733376
73743377 /* Get the bus address of Bar0 and Bar2 and the number of bytes
73753378 * required by each mapping.
···
97459742 uint32_t if_type;
97469743
97479744 if (!pdev)
97489748- return error;
97459745+ return -ENODEV;
97499746
97509747 /* Set the device DMA mask size */
97519751- if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) ||
97529752- dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)))
97489748+ error = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
97499749+ if (error)
97509750+ error = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
97519751+ if (error)
97539752 return error;
97549753
97559754 /*
+1-1
drivers/scsi/scsi_lib.c
···
655655 set_host_byte(cmd, DID_OK);
656656 return BLK_STS_TARGET;
657657 case DID_NEXUS_FAILURE:
658658+ set_host_byte(cmd, DID_OK);
658659 return BLK_STS_NEXUS;
659660 case DID_ALLOC_FAILURE:
660661 set_host_byte(cmd, DID_OK);
···
25982597 * device deleted during suspend)
25992598 */
26002599 mutex_lock(&sdev->state_mutex);
26012601- WARN_ON_ONCE(!sdev->quiesced_by);
26022600 sdev->quiesced_by = NULL;
26032601 blk_clear_pm_only(sdev->request_queue);
26042602 if (sdev->sdev_state == SDEV_QUIESCE)
+5-3
drivers/scsi/sd_zbc.c
···
142142 return -EOPNOTSUPP;
143143
144144 /*
145145- * Get a reply buffer for the number of requested zones plus a header.
146146- * For ATA, buffers must be aligned to 512B.
145145+ * Get a reply buffer for the number of requested zones plus a header,
146146+ * without exceeding the device maximum command size. For ATA disks,
147147+ * buffers must be aligned to 512B.
147148 */
148148- buflen = roundup((nrz + 1) * 64, 512);
149149+ buflen = min(queue_max_hw_sectors(disk->queue) << 9,
150150+ roundup((nrz + 1) * 64, 512));
149151 buf = kmalloc(buflen, gfp_mask);
150152 if (!buf)
151153 return -ENOMEM;
···
1414#include <linux/err.h>
1515#include <linux/fs.h>
1616
1717+static inline bool spacetab(char c) { return c == ' ' || c == '\t'; }
1818+static inline char *next_non_spacetab(char *first, const char *last)
1919+{
2020+ for (; first <= last; first++)
2121+ if (!spacetab(*first))
2222+ return first;
2323+ return NULL;
2424+}
2525+static inline char *next_terminator(char *first, const char *last)
2626+{
2727+ for (; first <= last; first++)
2828+ if (spacetab(*first) || !*first)
2929+ return first;
3030+ return NULL;
3131+}
3232+
1733static int load_script(struct linux_binprm *bprm)
1834{
1935 const char *i_arg, *i_name;
2020- char *cp;
3636+ char *cp, *buf_end;
2137 struct file *file;
2238 int retval;
2339
4040+ /* Not ours to exec if we don't start with "#!". */
2441 if ((bprm->buf[0] != '#') || (bprm->buf[1] != '!'))
2542 return -ENOEXEC;
2643
···
5033 if (bprm->interp_flags & BINPRM_FLAGS_PATH_INACCESSIBLE)
5134 return -ENOENT;
5235
5353- /*
5454- * This section does the #! interpretation.
5555- * Sorta complicated, but hopefully it will work. -TYT
5656- */
5757-
3636+ /* Release since we are not mapping a binary into memory. */
5837 allow_write_access(bprm->file);
5938 fput(bprm->file);
6039 bprm->file = NULL;
6140
6262- bprm->buf[BINPRM_BUF_SIZE - 1] = '\0';
6363- if ((cp = strchr(bprm->buf, '\n')) == NULL)
6464- cp = bprm->buf+BINPRM_BUF_SIZE-1;
4141+ /*
4242+ * This section handles parsing the #! line into separate
4343+ * interpreter path and argument strings. We must be careful
4444+ * because bprm->buf is not yet guaranteed to be NUL-terminated
4545+ * (though the buffer will have trailing NUL padding when the
4646+ * file size was smaller than the buffer size).
4747+ *
4848+ * We do not want to exec a truncated interpreter path, so either
4949+ * we find a newline (which indicates nothing is truncated), or
5050+ * we find a space/tab/NUL after the interpreter path (which
5151+ * itself may be preceded by spaces/tabs). Truncating the
5252+ * arguments is fine: the interpreter can re-read the script to
5353+ * parse them on its own.
5454+ */
5555+ buf_end = bprm->buf + sizeof(bprm->buf) - 1;
5656+ cp = strnchr(bprm->buf, sizeof(bprm->buf), '\n');
5757+ if (!cp) {
5858+ cp = next_non_spacetab(bprm->buf + 2, buf_end);
5959+ if (!cp)
6060+ return -ENOEXEC; /* Entire buf is spaces/tabs */
6161+ /*
6262+ * If there is no later space/tab/NUL we must assume the
6363+ * interpreter path is truncated.
6464+ */
6565+ if (!next_terminator(cp, buf_end))
6666+ return -ENOEXEC;
6767+ cp = buf_end;
6868+ }
6969+ /* NUL-terminate the buffer and any trailing spaces/tabs. */
6570 *cp = '\0';
6671 while (cp > bprm->buf) {
6772 cp--;
+2-1
fs/ceph/snap.c
···
616616 capsnap->size);
617617
618618 spin_lock(&mdsc->snap_flush_lock);
619619- list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list);
619619+ if (list_empty(&ci->i_snap_flush_item))
620620+ list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list);
620621 spin_unlock(&mdsc->snap_flush_lock);
621622 return 1; /* caller may want to ceph_flush_snaps */
622623}
+12
fs/hugetlbfs/inode.c
···
859859 rc = migrate_huge_page_move_mapping(mapping, newpage, page);
860860 if (rc != MIGRATEPAGE_SUCCESS)
861861 return rc;
862862+
863863+ /*
864864+ * page_private is subpool pointer in hugetlb pages. Transfer to
865865+ * new page. PagePrivate is not associated with page_private for
866866+ * hugetlb pages and can not be set here as only page_huge_active
867867+ * pages can be migrated.
868868+ */
869869+ if (page_private(page)) {
870870+ set_page_private(newpage, page_private(page));
871871+ set_page_private(page, 0);
872872+ }
873873+
862874 if (mode != MIGRATE_SYNC_NO_COPY)
863875 migrate_page_copy(newpage, page);
864876 else
···
10861086
10871087 task_lock(p);
10881088 if (!p->vfork_done && process_shares_mm(p, mm)) {
10891089- pr_info("updating oom_score_adj for %d (%s) from %d to %d because it shares mm with %d (%s). Report if this is unexpected.\n",
10901090- task_pid_nr(p), p->comm,
10911091- p->signal->oom_score_adj, oom_adj,
10921092- task_pid_nr(task), task->comm);
10931089 p->signal->oom_score_adj = oom_adj;
10941090 if (!legacy && has_capability_noaudit(current, CAP_SYS_RESOURCE))
10951091 p->signal->oom_score_adj_min = (short)oom_adj;
···
11+/* request_key authorisation token key type
22+ *
33+ * Copyright (C) 2005 Red Hat, Inc. All Rights Reserved.
44+ * Written by David Howells (dhowells@redhat.com)
55+ *
66+ * This program is free software; you can redistribute it and/or
77+ * modify it under the terms of the GNU General Public Licence
88+ * as published by the Free Software Foundation; either version
99+ * 2 of the Licence, or (at your option) any later version.
1010+ */
1111+
1212+#ifndef _KEYS_REQUEST_KEY_AUTH_TYPE_H
1313+#define _KEYS_REQUEST_KEY_AUTH_TYPE_H
1414+
1515+#include <linux/key.h>
1616+
1717+/*
1818+ * Authorisation record for request_key().
1919+ */
2020+struct request_key_auth {
2121+ struct key *target_key;
2222+ struct key *dest_keyring;
2323+ const struct cred *cred;
2424+ void *callout_info;
2525+ size_t callout_len;
2626+ pid_t pid;
2727+ char op[8];
2828+} __randomize_layout;
2929+
3030+static inline struct request_key_auth *get_request_key_auth(const struct key *key)
3131+{
3232+ return key->payload.data[0];
3333+}
3434+
3535+
3636+#endif /* _KEYS_REQUEST_KEY_AUTH_TYPE_H */
+1-1
include/keys/user-type.h
···
3131struct user_key_payload {
3232 struct rcu_head rcu; /* RCU destructor */
3333 unsigned short datalen; /* length of this data */
3434- char data[0]; /* actual data */
3434+ char data[0] __aligned(__alignof__(u64)); /* actual data */
3535};
3636
3737extern struct key_type key_type_user;
+6-16
include/linux/key-type.h
···
2121struct kernel_pkey_params;
2222
2323/*
2424- * key under-construction record
2525- * - passed to the request_key actor if supplied
2626- */
2727-struct key_construction {
2828- struct key *key; /* key being constructed */
2929- struct key *authkey;/* authorisation for key being constructed */
3030-};
3131-
3232-/*
3324 * Pre-parsed payload, used by key add, update and instantiate.
3425 *
3526 * This struct will be cleared and data and datalen will be set with the data
···
4150 time64_t expiry; /* Expiry time of key */
4251} __randomize_layout;
4352
4444-typedef int (*request_key_actor_t)(struct key_construction *key,
4545- const char *op, void *aux);
5353+typedef int (*request_key_actor_t)(struct key *auth_key, void *aux);
4654
4755/*
4856 * Preparsed matching criterion.
···
171181 const void *data,
172182 size_t datalen,
173183 struct key *keyring,
174174- struct key *instkey);
184184+ struct key *authkey);
175185extern int key_reject_and_link(struct key *key,
176186 unsigned timeout,
177187 unsigned error,
178188 struct key *keyring,
179179- struct key *instkey);
180180-extern void complete_request_key(struct key_construction *cons, int error);
189189+ struct key *authkey);
190190+extern void complete_request_key(struct key *authkey, int error);
181191
182192static inline int key_negate_and_link(struct key *key,
183193 unsigned timeout,
184194 struct key *keyring,
185185- struct key *instkey)
195195+ struct key *authkey)
186196{
187187- return key_reject_and_link(key, timeout, ENOKEY, keyring, instkey);
197197+ return key_reject_and_link(key, timeout, ENOKEY, keyring, authkey);
188198}
189199
190200extern int generic_key_instantiate(struct key *key, struct key_preparsed_payload *prep);
+22-2
include/linux/netdev_features.h
···
 #define _LINUX_NETDEV_FEATURES_H

 #include <linux/types.h>
+#include <linux/bitops.h>
+#include <asm/byteorder.h>

 typedef u64 netdev_features_t;

···
 #define NETIF_F_HW_TLS_TX	__NETIF_F(HW_TLS_TX)
 #define NETIF_F_HW_TLS_RX	__NETIF_F(HW_TLS_RX)

-#define for_each_netdev_feature(mask_addr, bit)	\
-	for_each_set_bit(bit, (unsigned long *)mask_addr, NETDEV_FEATURE_COUNT)
+/* Finds the next feature with the highest number of the range of start till 0.
+ */
+static inline int find_next_netdev_feature(u64 feature, unsigned long start)
+{
+	/* like BITMAP_LAST_WORD_MASK() for u64
+	 * this sets the most significant 64 - start to 0.
+	 */
+	feature &= ~0ULL >> (-start & ((sizeof(feature) * 8) - 1));
+
+	return fls64(feature) - 1;
+}
+
+/* This goes for the MSB to the LSB through the set feature bits,
+ * mask_addr should be a u64 and bit an int
+ */
+#define for_each_netdev_feature(mask_addr, bit)				\
+	for ((bit) = find_next_netdev_feature((mask_addr),		\
+					      NETDEV_FEATURE_COUNT);	\
+	     (bit) >= 0;						\
+	     (bit) = find_next_netdev_feature((mask_addr), (bit) - 1))

 /* Features valid for ethtool to change */
 /* = all defined minus driver/device-class-related */
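The helper added above walks the set feature bits from MSB to LSB. A standalone sketch of its masking logic, with `fls64()` modeled via GCC's `__builtin_clzll` (an assumption; the kernel's `fls64()` differs in implementation but not result):

```c
#include <assert.h>
#include <stdint.h>

/* fls64() model: position of the most significant set bit, 1-based;
 * 0 when no bit is set. */
static int fls64_model(uint64_t x)
{
	return x ? 64 - __builtin_clzll(x) : 0;
}

/* Model of the patch's find_next_netdev_feature(): keep only the low
 * 'start' bits (like BITMAP_LAST_WORD_MASK() for u64), then return the
 * index of the highest bit still set, or -1 if none remain. */
static int find_next_netdev_feature(uint64_t feature, unsigned long start)
{
	feature &= ~0ULL >> (-start & 63);
	return fls64_model(feature) - 1;
}
```

Unlike the old `for_each_set_bit()` over an `unsigned long *`, this never reinterprets the `u64` mask as a long array, which is what broke on big-endian 32-bit systems.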
+1-1
include/linux/netdevice.h
···
 	if (debug_value == 0)	/* no output */
 		return 0;
 	/* set low N bits */
-	return (1 << debug_value) - 1;
+	return (1U << debug_value) - 1;
 }

 static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
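The one-character fix above avoids shifting into the sign bit of a signed `int` when `debug_value` is 31, which is undefined behaviour. A minimal model of the mask computation (the function name is hypothetical; the surrounding zero check is taken from the context lines):

```c
#include <assert.h>
#include <stdint.h>

/* Build a mask with the low debug_value bits set. With a plain signed
 * '1', debug_value == 31 shifts into the sign bit (UB); the unsigned
 * literal keeps the shift well-defined. */
static uint32_t netif_msg_mask(int debug_value)
{
	if (debug_value == 0)		/* no output */
		return 0;
	return (1U << debug_value) - 1;	/* set low N bits */
}
```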
+8
include/linux/phy.h
···
 {
 	return 0;
 }
+static inline int genphy_no_ack_interrupt(struct phy_device *phydev)
+{
+	return 0;
+}
+static inline int genphy_no_config_intr(struct phy_device *phydev)
+{
+	return 0;
+}
 int genphy_read_mmd_unsupported(struct phy_device *phdev, int devad,
 				u16 regnum);
 int genphy_write_mmd_unsupported(struct phy_device *phdev, int devnum,
-6
include/linux/sched.h
···
 	unsigned			use_memdelay:1;
 #endif

-	/*
-	 * May usercopy functions fault on kernel addresses?
-	 * This is not just a single bit because this can potentially nest.
-	 */
-	unsigned int			kernel_uaccess_faults_ok;
-
 	unsigned long			atomic_flags; /* Flags requiring atomic access. */

 	struct restart_block		restart_block;
···

 		if (!skb_partial_csum_set(skb, start, off))
 			return -EINVAL;
+	} else {
+		/* gso packets without NEEDS_CSUM do not set transport_offset.
+		 * probe and drop if does not match one of the above types.
+		 */
+		if (gso_type && skb->network_header) {
+			if (!skb->protocol)
+				virtio_net_hdr_set_proto(skb, hdr);
+retry:
+			skb_probe_transport_header(skb, -1);
+			if (!skb_transport_header_was_set(skb)) {
+				/* UFO does not specify ipv4 or 6: try both */
+				if (gso_type & SKB_GSO_UDP &&
+				    skb->protocol == htons(ETH_P_IP)) {
+					skb->protocol = htons(ETH_P_IPV6);
+					goto retry;
+				}
+				return -EINVAL;
+			}
+		}
 	}

 	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+8-1
include/net/icmp.h
···

 #include <net/inet_sock.h>
 #include <net/snmp.h>
+#include <net/ip.h>

 struct icmp_err {
 	int		errno;
···
 struct sk_buff;
 struct net;

-void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info);
+void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+		 const struct ip_options *opt);
+static inline void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info)
+{
+	__icmp_send(skb_in, type, code, info, &IPCB(skb_in)->opt);
+}
+
 int icmp_rcv(struct sk_buff *skb);
 int icmp_err(struct sk_buff *skb, u32 info);
 int icmp_init(void);
···
 			  unsigned int axi_id,  unsigned int width,
 			  unsigned int height, unsigned int stride,
 			  u32 format, uint64_t modifier, unsigned long *eba);
+bool ipu_prg_channel_configure_pending(struct ipuv3_channel *ipu_chan);

 /*
  * IPU CMOS Sensor Interface (csi) functions
···
 	struct stack_map_irq_work *work;

 	work = container_of(entry, struct stack_map_irq_work, irq_work);
-	up_read(work->sem);
+	up_read_non_owner(work->sem);
 	work->sem = NULL;
 }

···
 	} else {
 		work->sem = &current->mm->mmap_sem;
 		irq_work_queue(&work->irq_work);
+		/*
+		 * The irq_work will release the mmap_sem with
+		 * up_read_non_owner(). The rwsem_release() is called
+		 * here to release the lock from lockdep's perspective.
+		 */
+		rwsem_release(&current->mm->mmap_sem.dep_map, 1, _RET_IP_);
 	}
 }

+3-3
kernel/bpf/syscall.c
···
 	err = bpf_map_new_fd(map, f_flags);
 	if (err < 0) {
 		/* failed to allocate fd.
-		 * bpf_map_put() is needed because the above
+		 * bpf_map_put_with_uref() is needed because the above
 		 * bpf_map_alloc_id() has published the map
 		 * to the userspace and the userspace may
 		 * have refcnt-ed it through BPF_MAP_GET_FD_BY_ID.
 		 */
-		bpf_map_put(map);
+		bpf_map_put_with_uref(map);
 		return err;
 	}

···

 	fd = bpf_map_new_fd(map, f_flags);
 	if (fd < 0)
-		bpf_map_put(map);
+		bpf_map_put_with_uref(map);

 	return fd;
 }
+9-5
kernel/bpf/verifier.c
···
 	return 0;
 }

-static int check_sock_access(struct bpf_verifier_env *env, u32 regno, int off,
-			     int size, enum bpf_access_type t)
+static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
+			     u32 regno, int off, int size,
+			     enum bpf_access_type t)
 {
 	struct bpf_reg_state *regs = cur_regs(env);
 	struct bpf_reg_state *reg = &regs[regno];
-	struct bpf_insn_access_aux info;
+	struct bpf_insn_access_aux info = {};

 	if (reg->smin_value < 0) {
 		verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
···
 			off, size);
 		return -EACCES;
 	}
+
+	env->insn_aux_data[insn_idx].ctx_field_size = info.ctx_field_size;

 	return 0;
 }
···
 			verbose(env, "cannot write into socket\n");
 			return -EACCES;
 		}
-		err = check_sock_access(env, regno, off, size, t);
+		err = check_sock_access(env, insn_idx, regno, off, size, t);
 		if (!err && value_regno >= 0)
 			mark_reg_unknown(env, regs, value_regno);
 	} else {
···
 		u32 off_reg;

 		aux = &env->insn_aux_data[i + delta];
-		if (!aux->alu_state)
+		if (!aux->alu_state ||
+		    aux->alu_state == BPF_ALU_NON_POINTER)
 			continue;

 		isneg = aux->alu_state & BPF_ALU_NEG_VALUE;
+1-1
kernel/sched/psi.c
···
 	expires = group->next_update;
 	if (now < expires)
 		goto out;
-	if (now - expires > psi_period)
+	if (now - expires >= psi_period)
 		missed_periods = div_u64(now - expires, psi_period);

 	/*
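The comparison change above makes an elapsed time of exactly one period count as a missed period. A model of the boundary (the function name is hypothetical; `div_u64` becomes plain 64-bit division here):

```c
#include <assert.h>
#include <stdint.h>

/* With '>', now - expires == period yielded 0 missed periods even
 * though a full period had elapsed; '>=' counts it. */
static uint64_t missed_periods(uint64_t now, uint64_t expires,
			       uint64_t period)
{
	uint64_t missed = 0;

	if (now - expires >= period)
		missed = (now - expires) / period;
	return missed;
}
```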
···
 static nokprobe_inline int
 fetch_store_strlen(unsigned long addr)
 {
-	mm_segment_t old_fs;
 	int ret, len = 0;
 	u8 c;

-	old_fs = get_fs();
-	set_fs(KERNEL_DS);
-	pagefault_disable();
-
 	do {
-		ret = __copy_from_user_inatomic(&c, (u8 *)addr + len, 1);
+		ret = probe_mem_read(&c, (u8 *)addr + len, 1);
 		len++;
 	} while (c && ret == 0 && len < MAX_STRING_SIZE);
-
-	pagefault_enable();
-	set_fs(old_fs);

 	return (ret < 0) ? ret : len;
 }
+22
lib/Kconfig.kasan
···

 endchoice

+config KASAN_STACK_ENABLE
+	bool "Enable stack instrumentation (unsafe)" if CC_IS_CLANG && !COMPILE_TEST
+	default !(CLANG_VERSION < 90000)
+	depends on KASAN
+	help
+	  The LLVM stack address sanitizer has a know problem that
+	  causes excessive stack usage in a lot of functions, see
+	  https://bugs.llvm.org/show_bug.cgi?id=38809
+	  Disabling asan-stack makes it safe to run kernels build
+	  with clang-8 with KASAN enabled, though it loses some of
+	  the functionality.
+	  This feature is always disabled when compile-testing with clang-8
+	  or earlier to avoid cluttering the output in stack overflow
+	  warnings, but clang-8 users can still enable it for builds without
+	  CONFIG_COMPILE_TEST. On gcc and later clang versions it is
+	  assumed to always be safe to use and enabled by default.
+
+config KASAN_STACK
+	int
+	default 1 if KASAN_STACK_ENABLE || CC_IS_GCC
+	default 0
+
 config KASAN_S390_4_LEVEL_PAGING
 	bool "KASan: use 4-level paging"
 	depends on KASAN && S390
+5-3
lib/assoc_array.c
···
 		new_s0->index_key[i] =
 			ops->get_key_chunk(index_key, i * ASSOC_ARRAY_KEY_CHUNK_SIZE);

-	blank = ULONG_MAX << (level & ASSOC_ARRAY_KEY_CHUNK_MASK);
-	pr_devel("blank off [%zu] %d: %lx\n", keylen - 1, level, blank);
-	new_s0->index_key[keylen - 1] &= ~blank;
+	if (level & ASSOC_ARRAY_KEY_CHUNK_MASK) {
+		blank = ULONG_MAX << (level & ASSOC_ARRAY_KEY_CHUNK_MASK);
+		pr_devel("blank off [%zu] %d: %lx\n", keylen - 1, level, blank);
+		new_s0->index_key[keylen - 1] &= ~blank;
+	}

 	/* This now reduces to a node splitting exercise for which we'll need
 	 * to regenerate the disparity table.
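The guard added above matters because `ULONG_MAX << 0` is still all-ones: when the level is exactly chunk-aligned, the old code blanked off the entire last index-key word instead of leaving it intact. A small model of the fixed logic, assuming 64-bit key chunks (so the chunk mask is 63):

```c
#include <assert.h>
#include <stdint.h>

#define CHUNK_MASK 63UL	/* ASSOC_ARRAY_KEY_CHUNK_MASK for 64-bit chunks */

/* Blank off only the insignificant top bits of the last index-key word. */
static uint64_t blank_last_word(uint64_t word, unsigned long level)
{
	if (level & CHUNK_MASK) {
		uint64_t blank = ~0ULL << (level & CHUNK_MASK);
		word &= ~blank;
	}
	/* chunk-aligned level: every bit of the word is significant */
	return word;
}
```

Without the `if`, a chunk-aligned `level` would compute `blank = ~0ULL` and zero the whole word, corrupting the shortcut's index key.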
+3-1
mm/debug.c
···

 void __dump_page(struct page *page, const char *reason)
 {
-	struct address_space *mapping = page_mapping(page);
+	struct address_space *mapping;
 	bool page_poisoned = PagePoisoned(page);
 	int mapcount;

···
 		pr_warn("page:%px is uninitialized and poisoned", page);
 		goto hex_only;
 	}
+
+	mapping = page_mapping(page);

 	/*
 	 * Avoid VM_BUG_ON() in page_mapcount().
+13-3
mm/hugetlb.c
···
 	copy_user_huge_page(new_page, old_page, address, vma,
 			    pages_per_huge_page(h));
 	__SetPageUptodate(new_page);
-	set_page_huge_active(new_page);

 	mmu_notifier_range_init(&range, mm, haddr, haddr + huge_page_size(h));
 	mmu_notifier_invalidate_range_start(&range);
···
 				make_huge_pte(vma, new_page, 1));
 		page_remove_rmap(old_page, true);
 		hugepage_add_new_anon_rmap(new_page, vma, haddr);
+		set_page_huge_active(new_page);
 		/* Make the old page be freed below */
 		new_page = old_page;
 	}
···
 	pte_t new_pte;
 	spinlock_t *ptl;
 	unsigned long haddr = address & huge_page_mask(h);
+	bool new_page = false;

 	/*
 	 * Currently, we are forced to kill the process in the event the
···
 		}
 		clear_huge_page(page, address, pages_per_huge_page(h));
 		__SetPageUptodate(page);
-		set_page_huge_active(page);
+		new_page = true;

 		if (vma->vm_flags & VM_MAYSHARE) {
 			int err = huge_add_to_page_cache(page, mapping, idx);
···
 	}

 	spin_unlock(ptl);
+
+	/*
+	 * Only make newly allocated pages active.  Existing pages found
+	 * in the pagecache could be !page_huge_active() if they have been
+	 * isolated for migration.
+	 */
+	if (new_page)
+		set_page_huge_active(page);
+
 	unlock_page(page);
 out:
 	return ret;
···
 	 * the set_pte_at() write.
 	 */
 	__SetPageUptodate(page);
-	set_page_huge_active(page);

 	mapping = dst_vma->vm_file->f_mapping;
 	idx = vma_hugecache_offset(h, dst_vma, dst_addr);
···
 	update_mmu_cache(dst_vma, dst_addr, dst_pte);

 	spin_unlock(ptl);
+	set_page_huge_active(page);
 	if (vm_shared)
 		unlock_page(page);
 	ret = 0;
+2
mm/kasan/Makefile
···

 CFLAGS_REMOVE_common.o = -pg
 CFLAGS_REMOVE_generic.o = -pg
+CFLAGS_REMOVE_tags.o = -pg
+
 # Function splitter causes unnecessary splits in __asan_load1/__asan_store1
 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533

+17-12
mm/kasan/common.c
···
  * get different tags.
  */
 static u8 assign_tag(struct kmem_cache *cache, const void *object,
-			bool init, bool krealloc)
+			bool init, bool keep_tag)
 {
-	/* Reuse the same tag for krealloc'ed objects. */
-	if (krealloc)
+	/*
+	 * 1. When an object is kmalloc()'ed, two hooks are called:
+	 *    kasan_slab_alloc() and kasan_kmalloc(). We assign the
+	 *    tag only in the first one.
+	 * 2. We reuse the same tag for krealloc'ed objects.
+	 */
+	if (keep_tag)
 		return get_tag(object);

 	/*
···
 				assign_tag(cache, object, true, false));

 	return (void *)object;
-}
-
-void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
-					gfp_t flags)
-{
-	return kasan_kmalloc(cache, object, cache->object_size, flags);
 }

 static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
···
 }

 static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object,
-				size_t size, gfp_t flags, bool krealloc)
+				size_t size, gfp_t flags, bool keep_tag)
 {
 	unsigned long redzone_start;
 	unsigned long redzone_end;
···
 			KASAN_SHADOW_SCALE_SIZE);

 	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
-		tag = assign_tag(cache, object, false, krealloc);
+		tag = assign_tag(cache, object, false, keep_tag);

 	/* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */
 	kasan_unpoison_shadow(set_tag(object, tag), size);
···
 	return set_tag(object, tag);
 }

+void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
+					gfp_t flags)
+{
+	return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
+}
+
 void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
 					size_t size, gfp_t flags)
 {
-	return __kasan_kmalloc(cache, object, size, flags, false);
+	return __kasan_kmalloc(cache, object, size, flags, true);
 }
 EXPORT_SYMBOL(kasan_kmalloc);

···
 	return PageBuddy(page) && page_order(page) >= pageblock_order;
 }

-/* Return the start of the next active pageblock after a given page */
-static struct page *next_active_pageblock(struct page *page)
+/* Return the pfn of the start of the next active pageblock after a given pfn */
+static unsigned long next_active_pageblock(unsigned long pfn)
 {
+	struct page *page = pfn_to_page(pfn);
+
 	/* Ensure the starting page is pageblock-aligned */
-	BUG_ON(page_to_pfn(page) & (pageblock_nr_pages - 1));
+	BUG_ON(pfn & (pageblock_nr_pages - 1));

 	/* If the entire pageblock is free, move to the end of free page */
 	if (pageblock_free(page)) {
···
 		/* be careful. we don't have locks, page_order can be changed.*/
 		order = page_order(page);
 		if ((order < MAX_ORDER) && (order >= pageblock_order))
-			return page + (1 << order);
+			return pfn + (1 << order);
 	}

-	return page + pageblock_nr_pages;
+	return pfn + pageblock_nr_pages;
 }

-static bool is_pageblock_removable_nolock(struct page *page)
+static bool is_pageblock_removable_nolock(unsigned long pfn)
 {
+	struct page *page = pfn_to_page(pfn);
 	struct zone *zone;
-	unsigned long pfn;

 	/*
 	 * We have to be careful here because we are iterating over memory
···
 /* Checks if this range of memory is likely to be hot-removable. */
 bool is_mem_section_removable(unsigned long start_pfn, unsigned long nr_pages)
 {
-	struct page *page = pfn_to_page(start_pfn);
-	unsigned long end_pfn = min(start_pfn + nr_pages, zone_end_pfn(page_zone(page)));
-	struct page *end_page = pfn_to_page(end_pfn);
+	unsigned long end_pfn, pfn;
+
+	end_pfn = min(start_pfn + nr_pages,
+			zone_end_pfn(page_zone(pfn_to_page(start_pfn))));

 	/* Check the starting page of each pageblock within the range */
-	for (; page < end_page; page = next_active_pageblock(page)) {
-		if (!is_pageblock_removable_nolock(page))
+	for (pfn = start_pfn; pfn < end_pfn; pfn = next_active_pageblock(pfn)) {
+		if (!is_pageblock_removable_nolock(pfn))
 			return false;
 		cond_resched();
 	}
+3-3
mm/mempolicy.c
···
 			     nodemask_t *nodes)
 {
 	unsigned long copy = ALIGN(maxnode-1, 64) / 8;
-	const int nbytes = BITS_TO_LONGS(MAX_NUMNODES) * sizeof(long);
+	unsigned int nbytes = BITS_TO_LONGS(nr_node_ids) * sizeof(long);

 	if (copy > nbytes) {
 		if (copy > PAGE_SIZE)
···
 	int uninitialized_var(pval);
 	nodemask_t nodes;

-	if (nmask != NULL && maxnode < MAX_NUMNODES)
+	if (nmask != NULL && maxnode < nr_node_ids)
 		return -EINVAL;

 	err = do_get_mempolicy(&pval, &nodes, addr, flags);
···
 	unsigned long nr_bits, alloc_size;
 	DECLARE_BITMAP(bm, MAX_NUMNODES);

-	nr_bits = min_t(unsigned long, maxnode-1, MAX_NUMNODES);
+	nr_bits = min_t(unsigned long, maxnode-1, nr_node_ids);
 	alloc_size = ALIGN(nr_bits, BITS_PER_LONG) / 8;

 	if (nmask)
+11
mm/migrate.c
···
 		lock_page(hpage);
 	}

+	/*
+	 * Check for pages which are in the process of being freed.  Without
+	 * page_mapping() set, hugetlbfs specific move page routine will not
+	 * be called and we could leak usage counts for subpools.
+	 */
+	if (page_private(hpage) && !page_mapping(hpage)) {
+		rc = -EBUSY;
+		goto out_unlock;
+	}
+
 	if (PageAnon(hpage))
 		anon_vma = page_get_anon_vma(hpage);

···
 		put_new_page = NULL;
 	}

+out_unlock:
 	unlock_page(hpage);
 out:
 	if (rc != -EAGAIN)
+3-4
mm/mmap.c
···
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct vm_area_struct *prev;
-	int error;
+	int error = 0;

 	address &= PAGE_MASK;
-	error = security_mmap_addr(address);
-	if (error)
-		return error;
+	if (address < mmap_min_addr)
+		return -EPERM;

 	/* Enforce stack_guard_gap */
 	prev = vma->vm_prev;
+16-4
mm/page_alloc.c
···

 	max_boost = mult_frac(zone->_watermark[WMARK_HIGH],
 			watermark_boost_factor, 10000);
+
+	/*
+	 * high watermark may be uninitialised if fragmentation occurs
+	 * very early in boot so do not boost. We do not fall
+	 * through and boost by pageblock_nr_pages as failing
+	 * allocations that early means that reclaim is not going
+	 * to help and it may even be impossible to reclaim the
+	 * boosted watermark resulting in a hang.
+	 */
+	if (!max_boost)
+		return;
+
 	max_boost = max(pageblock_nr_pages, max_boost);

 	zone->watermark_boost = min(zone->watermark_boost + pageblock_nr_pages,
···
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
-		page_ref_add(page, size);
+		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);

 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = size + 1;
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		nc->offset = size;
 	}

···
 		size = nc->size;
 #endif
 		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, size + 1);
+		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);

 		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = size + 1;
+		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		offset = size - fragsz;
 	}

+8-4
mm/shmem.c
···
 static int shmem_link(struct dentry *old_dentry, struct inode *dir, struct dentry *dentry)
 {
 	struct inode *inode = d_inode(old_dentry);
-	int ret;
+	int ret = 0;

 	/*
 	 * No ordinary (disk based) filesystem counts links as inodes;
 	 * but each new link needs a new dentry, pinning lowmem, and
 	 * tmpfs dentries cannot be pruned until they are unlinked.
+	 * But if an O_TMPFILE file is linked into the tmpfs, the
+	 * first link must skip that, to get the accounting right.
 	 */
-	ret = shmem_reserve_inode(inode->i_sb);
-	if (ret)
-		goto out;
+	if (inode->i_nlink) {
+		ret = shmem_reserve_inode(inode->i_sb);
+		if (ret)
+			goto out;
+	}

 	dir->i_size += BOGO_DIRENT_SIZE;
 	inode->i_ctime = dir->i_ctime = dir->i_mtime = current_time(inode);
+11-4
mm/slab.c
···
 	void *freelist;
 	void *addr = page_address(page);

-	page->s_mem = kasan_reset_tag(addr) + colour_off;
+	page->s_mem = addr + colour_off;
 	page->active = 0;

 	if (OBJFREELIST_SLAB(cachep))
···
 		/* Slab management obj is off-slab. */
 		freelist = kmem_cache_alloc_node(cachep->freelist_cache,
 					      local_flags, nodeid);
+		freelist = kasan_reset_tag(freelist);
 		if (!freelist)
 			return NULL;
 	} else {
···

 	offset *= cachep->colour_off;

+	/*
+	 * Call kasan_poison_slab() before calling alloc_slabmgmt(), so
+	 * page_address() in the latter returns a non-tagged pointer,
+	 * as it should be for slab pages.
+	 */
+	kasan_poison_slab(page);
+
 	/* Get slab management. */
 	freelist = alloc_slabmgmt(cachep, page, offset,
 			local_flags & ~GFP_CONSTRAINT_MASK, page_node);
···

 	slab_map_pages(cachep, page, freelist);

-	kasan_poison_slab(page);
 	cache_init_objs(cachep, page);

 	if (gfpflags_allow_blocking(local_flags))
···
 {
 	void *ret = slab_alloc(cachep, flags, _RET_IP_);

-	ret = kasan_slab_alloc(cachep, ret, flags);
 	trace_kmem_cache_alloc(_RET_IP_, ret,
 			       cachep->object_size, cachep->size, flags);

···
 {
 	void *ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);

-	ret = kasan_slab_alloc(cachep, ret, flags);
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    cachep->object_size, cachep->size,
 				    flags, nodeid);
···
 	struct kmem_cache *cachep;
 	unsigned int objnr;
 	unsigned long offset;
+
+	ptr = kasan_reset_tag(ptr);

 	/* Find and validate object. */
 	cachep = page->slab_cache;
+3-4
mm/slab.h
···

 	flags &= gfp_allowed_mask;
 	for (i = 0; i < size; i++) {
-		void *object = p[i];
-
-		kmemleak_alloc_recursive(object, s->object_size, 1,
+		p[i] = kasan_slab_alloc(s, p[i], flags);
+		/* As p[i] might get tagged, call kmemleak hook after KASAN. */
+		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
-		p[i] = kasan_slab_alloc(s, object, flags);
 	}

 	if (memcg_kmem_enabled())
+2-1
mm/slab_common.c
···
 	flags |= __GFP_COMP;
 	page = alloc_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
-	kmemleak_alloc(ret, size, 1, flags);
 	ret = kasan_kmalloc_large(ret, size, flags);
+	/* As ret might get tagged, call kmemleak hook after KASAN. */
+	kmemleak_alloc(ret, size, 1, flags);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
+39-20
mm/slub.c
···
 				 unsigned long ptr_addr)
 {
 #ifdef CONFIG_SLAB_FREELIST_HARDENED
-	return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr);
+	/*
+	 * When CONFIG_KASAN_SW_TAGS is enabled, ptr_addr might be tagged.
+	 * Normally, this doesn't cause any issues, as both set_freepointer()
+	 * and get_freepointer() are called with a pointer with the same tag.
+	 * However, there are some issues with CONFIG_SLUB_DEBUG code. For
+	 * example, when __free_slub() iterates over objects in a cache, it
+	 * passes untagged pointers to check_object(). check_object() in turns
+	 * calls get_freepointer() with an untagged pointer, which causes the
+	 * freepointer to be restored incorrectly.
+	 */
+	return (void *)((unsigned long)ptr ^ s->random ^
+			(unsigned long)kasan_reset_tag((void *)ptr_addr));
 #else
 	return ptr;
 #endif
···
 			__p < (__addr) + (__objects) * (__s)->size; \
 			__p += (__s)->size)

-#define for_each_object_idx(__p, __idx, __s, __addr, __objects) \
-	for (__p = fixup_red_left(__s, __addr), __idx = 1; \
-		__idx <= __objects; \
-		__p += (__s)->size, __idx++)
-
 /* Determine object index from a given position */
 static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
 {
-	return (p - addr) / s->size;
+	return (kasan_reset_tag(p) - addr) / s->size;
 }

 static inline unsigned int order_objects(unsigned int order, unsigned int size)
···
 		return 1;

 	base = page_address(page);
+	object = kasan_reset_tag(object);
 	object = restore_red_left(s, object);
 	if (object < base || object >= base + page->objects * s->size ||
 		(object - base) % s->size) {
···
 	init_tracking(s, object);
 }

+static void setup_page_debug(struct kmem_cache *s, void *addr, int order)
+{
+	if (!(s->flags & SLAB_POISON))
+		return;
+
+	metadata_access_enable();
+	memset(addr, POISON_INUSE, PAGE_SIZE << order);
+	metadata_access_disable();
+}
+
 static inline int alloc_consistency_checks(struct kmem_cache *s,
 					struct page *page,
 					void *object, unsigned long addr)
···
 #else /* !CONFIG_SLUB_DEBUG */
 static inline void setup_object_debug(struct kmem_cache *s,
 			struct page *page, void *object) {}
+static inline void setup_page_debug(struct kmem_cache *s,
+			void *addr, int order) {}

 static inline int alloc_debug_processing(struct kmem_cache *s,
 	struct page *page, void *object, unsigned long addr) { return 0; }
···
  */
 static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
+	ptr = kasan_kmalloc_large(ptr, size, flags);
+	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
-	return kasan_kmalloc_large(ptr, size, flags);
+	return ptr;
 }

 static __always_inline void kfree_hook(void *x)
···
 	if (page_is_pfmemalloc(page))
 		SetPageSlabPfmemalloc(page);

+	kasan_poison_slab(page);
+
 	start = page_address(page);

-	if (unlikely(s->flags & SLAB_POISON))
-		memset(start, POISON_INUSE, PAGE_SIZE << order);
-
-	kasan_poison_slab(page);
+	setup_page_debug(s, start, order);

 	shuffle = shuffle_freelist(s, page);

 	if (!shuffle) {
-		for_each_object_idx(p, idx, s, start, page->objects) {
-			if (likely(idx < page->objects)) {
-				next = p + s->size;
-				next = setup_object(s, page, next);
-				set_freepointer(s, p, next);
-			} else
-				set_freepointer(s, p, NULL);
-		}
 		start = fixup_red_left(s, start);
 		start = setup_object(s, page, start);
 		page->freelist = start;
+		for (idx = 0, p = start; idx < page->objects - 1; idx++) {
+			next = p + s->size;
+			next = setup_object(s, page, next);
+			set_freepointer(s, p, next);
+			p = next;
+		}
+		set_freepointer(s, p, NULL);
 	}

 	page->inuse = page->objects;
+10-7
mm/swap.c
···
 {
 }

-static bool need_activate_page_drain(int cpu)
-{
-	return false;
-}
-
 void activate_page(struct page *page)
 {
 	struct zone *zone = page_zone(page);
···
 	put_cpu();
 }

+#ifdef CONFIG_SMP
+
+static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
+
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
 {
 	lru_add_drain();
 }
-
-static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);

 /*
  * Doesn't need any cpu hotplug locking because we do rely on per-cpu
···

 	mutex_unlock(&lock);
 }
+#else
+void lru_add_drain_all(void)
+{
+	lru_add_drain();
+}
+#endif

 /**
  * release_pages - batched put_page()
+1-1
mm/util.c
···
 {
 	void *p;

-	p = kmalloc_track_caller(len, GFP_USER);
+	p = kmalloc_track_caller(len, GFP_USER | __GFP_NOWARN);
 	if (!p)
 		return ERR_PTR(-ENOMEM);

···
 		return;

 	br_multicast_update_query_timer(br, query, max_delay);
-
-	/* Based on RFC4541, section 2.1.1 IGMP Forwarding Rules,
-	 * the arrival port for IGMP Queries where the source address
-	 * is 0.0.0.0 should not be added to router port list.
-	 */
-	if ((saddr->proto == htons(ETH_P_IP) && saddr->u.ip4) ||
-	    saddr->proto == htons(ETH_P_IPV6))
-		br_multicast_mark_router(br, port);
+	br_multicast_mark_router(br, port);
 }

 static void br_ip4_multicast_query(struct net_bridge *br,
+9-6
net/ceph/messenger.c
···
 	dout("process_connect on %p tag %d\n", con, (int)con->in_tag);

 	if (con->auth) {
+		int len = le32_to_cpu(con->in_reply.authorizer_len);
+
 		/*
 		 * Any connection that defines ->get_authorizer()
 		 * should also define ->add_authorizer_challenge() and
···
 		 */
 		if (con->in_reply.tag == CEPH_MSGR_TAG_CHALLENGE_AUTHORIZER) {
 			ret = con->ops->add_authorizer_challenge(
-				    con, con->auth->authorizer_reply_buf,
-				    le32_to_cpu(con->in_reply.authorizer_len));
+				    con, con->auth->authorizer_reply_buf, len);
 			if (ret < 0)
 				return ret;

···
 			return 0;
 		}

-		ret = con->ops->verify_authorizer_reply(con);
-		if (ret < 0) {
-			con->error_msg = "bad authorize reply";
-			return ret;
+		if (len) {
+			ret = con->ops->verify_authorizer_reply(con);
+			if (ret < 0) {
+				con->error_msg = "bad authorize reply";
+				return ret;
+			}
 		}
 	}

+5-1
net/compat.c
···
 				char __user *optval, unsigned int optlen)
 {
 	int err;
-	struct socket *sock = sockfd_lookup(fd, &err);
+	struct socket *sock;

+	if (optlen > INT_MAX)
+		return -EINVAL;
+
+	sock = sockfd_lookup(fd, &err);
 	if (sock) {
 		err = security_socket_setsockopt(sock, level, optname);
 		if (err) {
+2-2
net/core/dev.c
···
 	netdev_features_t feature;
 	int feature_bit;

-	for_each_netdev_feature(&upper_disables, feature_bit) {
+	for_each_netdev_feature(upper_disables, feature_bit) {
 		feature = __NETIF_F_BIT(feature_bit);
 		if (!(upper->wanted_features & feature)
 		    && (features & feature)) {
···
 	netdev_features_t feature;
 	int feature_bit;

-	for_each_netdev_feature(&upper_disables, feature_bit) {
+	for_each_netdev_feature(upper_disables, feature_bit) {
 		feature = __NETIF_F_BIT(feature_bit);
 		if (!(features & feature) && (lower->features & feature)) {
 			netdev_dbg(upper, "Disabling feature %pNF on lower dev %s.\n",
+4-8
net/core/filter.c
···
 	u32 off = skb_mac_header_len(skb);
 	int ret;

-	/* SCTP uses GSO_BY_FRAGS, thus cannot adjust it. */
-	if (skb_is_gso(skb) && unlikely(skb_is_gso_sctp(skb)))
+	if (!skb_is_gso_tcp(skb))
 		return -ENOTSUPP;

 	ret = skb_cow(skb, len_diff);
···
 	u32 off = skb_mac_header_len(skb);
 	int ret;

-	/* SCTP uses GSO_BY_FRAGS, thus cannot adjust it. */
-	if (skb_is_gso(skb) && unlikely(skb_is_gso_sctp(skb)))
+	if (!skb_is_gso_tcp(skb))
 		return -ENOTSUPP;

 	ret = skb_unclone(skb, GFP_ATOMIC);
···
 	u32 off = skb_mac_header_len(skb) + bpf_skb_net_base_len(skb);
 	int ret;

-	/* SCTP uses GSO_BY_FRAGS, thus cannot adjust it. */
-	if (skb_is_gso(skb) && unlikely(skb_is_gso_sctp(skb)))
+	if (!skb_is_gso_tcp(skb))
 		return -ENOTSUPP;

 	ret = skb_cow(skb, len_diff);
···
 	u32 off = skb_mac_header_len(skb) + bpf_skb_net_base_len(skb);
 	int ret;

-	/* SCTP uses GSO_BY_FRAGS, thus cannot adjust it. */
-	if (skb_is_gso(skb) && unlikely(skb_is_gso_sctp(skb)))
+	if (!skb_is_gso_tcp(skb))
 		return -ENOTSUPP;

 	ret = skb_unclone(skb, GFP_ATOMIC);
+4
net/core/skbuff.c
···
  */
 void *netdev_alloc_frag(unsigned int fragsz)
 {
+	fragsz = SKB_DATA_ALIGN(fragsz);
+
 	return __netdev_alloc_frag(fragsz, GFP_ATOMIC);
 }
 EXPORT_SYMBOL(netdev_alloc_frag);
···

 void *napi_alloc_frag(unsigned int fragsz)
 {
+	fragsz = SKB_DATA_ALIGN(fragsz);
+
 	return __napi_alloc_frag(fragsz, GFP_ATOMIC);
 }
 EXPORT_SYMBOL(napi_alloc_frag);
+10-6
net/dsa/dsa2.c
···
 {
 	struct device_node *ports, *port;
 	struct dsa_port *dp;
+	int err = 0;
 	u32 reg;
-	int err;

 	ports = of_get_child_by_name(dn, "ports");
 	if (!ports) {
···
 	for_each_available_child_of_node(ports, port) {
 		err = of_property_read_u32(port, "reg", &reg);
 		if (err)
-			return err;
+			goto out_put_node;

-		if (reg >= ds->num_ports)
-			return -EINVAL;
+		if (reg >= ds->num_ports) {
+			err = -EINVAL;
+			goto out_put_node;
+		}

 		dp = &ds->ports[reg];

 		err = dsa_port_parse_of(dp, port);
 		if (err)
-			return err;
+			goto out_put_node;
 	}

-	return 0;
+out_put_node:
+	of_node_put(ports);
+	return err;
 }

 static int dsa_switch_parse_member_of(struct dsa_switch *ds,
+5-3
net/dsa/port.c
···

 int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy)
 {
-	u8 stp_state = dp->bridge_dev ? BR_STATE_BLOCKING : BR_STATE_FORWARDING;
 	struct dsa_switch *ds = dp->ds;
 	int port = dp->index;
 	int err;
···
 		return err;
 	}

-	dsa_port_set_state_now(dp, stp_state);
+	if (!dp->bridge_dev)
+		dsa_port_set_state_now(dp, BR_STATE_FORWARDING);

 	return 0;
 }
···
 	struct dsa_switch *ds = dp->ds;
 	int port = dp->index;

-	dsa_port_set_state_now(dp, BR_STATE_DISABLED);
+	if (!dp->bridge_dev)
+		dsa_port_set_state_now(dp, BR_STATE_DISABLED);

 	if (ds->ops->port_disable)
 		ds->ops->port_disable(ds, port, phy);
···
 		return ERR_PTR(-EPROBE_DEFER);
 	}

+	of_node_put(phy_dn);
 	return phydev;
 }
+17-3
net/ipv4/cipso_ipv4.c
···
 	case CIPSO_V4_MAP_PASS:
 		return 0;
 	case CIPSO_V4_MAP_TRANS:
-		if (doi_def->map.std->lvl.cipso[level] < CIPSO_V4_INV_LVL)
+		if ((level < doi_def->map.std->lvl.cipso_size) &&
+		    (doi_def->map.std->lvl.cipso[level] < CIPSO_V4_INV_LVL))
 			return 0;
 		break;
 	}
···
  */
 void cipso_v4_error(struct sk_buff *skb, int error, u32 gateway)
 {
+	unsigned char optbuf[sizeof(struct ip_options) + 40];
+	struct ip_options *opt = (struct ip_options *)optbuf;
+
 	if (ip_hdr(skb)->protocol == IPPROTO_ICMP || error != -EACCES)
 		return;

+	/*
+	 * We might be called above the IP layer,
+	 * so we can not use icmp_send and IPCB here.
+	 */
+
+	memset(opt, 0, sizeof(struct ip_options));
+	opt->optlen = ip_hdr(skb)->ihl*4 - sizeof(struct iphdr);
+	if (__ip_options_compile(dev_net(skb->dev), opt, skb, NULL))
+		return;
+
 	if (gateway)
-		icmp_send(skb, ICMP_DEST_UNREACH, ICMP_NET_ANO, 0);
+		__icmp_send(skb, ICMP_DEST_UNREACH, ICMP_NET_ANO, 0, opt);
 	else
-		icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_ANO, 0);
+		__icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_ANO, 0, opt);
 }

 /**
+1-1
net/ipv4/esp4.c
···
 	skb->len += tailen;
 	skb->data_len += tailen;
 	skb->truesize += tailen;
-	if (sk)
+	if (sk && sk_fullsock(sk))
 		refcount_add(tailen, &sk->sk_wmem_alloc);

 	goto out;
+4
net/ipv4/fib_frontend.c
···
 		case RTA_GATEWAY:
 			cfg->fc_gw = nla_get_be32(attr);
 			break;
+		case RTA_VIA:
+			NL_SET_ERR_MSG(extack, "IPv4 does not support RTA_VIA attribute");
+			err = -EINVAL;
+			goto errout;
 		case RTA_PRIORITY:
 			cfg->fc_priority = nla_get_u32(attr);
 			break;
+4-3
net/ipv4/icmp.c
···
  *			MUST reply to only the first fragment.
  */

-void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info)
+void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+		 const struct ip_options *opt)
 {
 	struct iphdr *iph;
 	int room;
···
 				 iph->tos;
 	mark = IP4_REPLY_MARK(net, skb_in->mark);

-	if (ip_options_echo(net, &icmp_param.replyopts.opt.opt, skb_in))
+	if (__ip_options_echo(net, &icmp_param.replyopts.opt.opt, skb_in, opt))
 		goto out_unlock;

···
 	local_bh_enable();
 out:;
 }
-EXPORT_SYMBOL(icmp_send);
+EXPORT_SYMBOL(__icmp_send);


 static void icmp_socket_deliver(struct sk_buff *skb, u32 info)
+17-16
net/ipv4/ip_gre.c
···
 	struct ip_tunnel_parm *p = &t->parms;
 	__be16 o_flags = p->o_flags;

-	if ((t->erspan_ver == 1 || t->erspan_ver == 2) &&
-	    !t->collect_md)
-		o_flags |= TUNNEL_KEY;
+	if (t->erspan_ver == 1 || t->erspan_ver == 2) {
+		if (!t->collect_md)
+			o_flags |= TUNNEL_KEY;
+
+		if (nla_put_u8(skb, IFLA_GRE_ERSPAN_VER, t->erspan_ver))
+			goto nla_put_failure;
+
+		if (t->erspan_ver == 1) {
+			if (nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, t->index))
+				goto nla_put_failure;
+		} else {
+			if (nla_put_u8(skb, IFLA_GRE_ERSPAN_DIR, t->dir))
+				goto nla_put_failure;
+			if (nla_put_u16(skb, IFLA_GRE_ERSPAN_HWID, t->hwid))
+				goto nla_put_failure;
+		}
+	}

 	if (nla_put_u32(skb, IFLA_GRE_LINK, p->link) ||
 	    nla_put_be16(skb, IFLA_GRE_IFLAGS,
···

 	if (t->collect_md) {
 		if (nla_put_flag(skb, IFLA_GRE_COLLECT_METADATA))
-			goto nla_put_failure;
-	}
-
-	if (nla_put_u8(skb, IFLA_GRE_ERSPAN_VER, t->erspan_ver))
-		goto nla_put_failure;
-
-	if (t->erspan_ver == 1) {
-		if (nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, t->index))
-			goto nla_put_failure;
-	} else if (t->erspan_ver == 2) {
-		if (nla_put_u8(skb, IFLA_GRE_ERSPAN_DIR, t->dir))
-			goto nla_put_failure;
-		if (nla_put_u16(skb, IFLA_GRE_ERSPAN_HWID, t->hwid))
 			goto nla_put_failure;
 	}

+5-4
net/ipv4/ip_input.c
···
 }

 static int ip_rcv_finish_core(struct net *net, struct sock *sk,
-			      struct sk_buff *skb)
+			      struct sk_buff *skb, struct net_device *dev)
 {
 	const struct iphdr *iph = ip_hdr(skb);
 	int (*edemux)(struct sk_buff *skb);
-	struct net_device *dev = skb->dev;
 	struct rtable *rt;
 	int err;
···

 static int ip_rcv_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
 {
+	struct net_device *dev = skb->dev;
 	int ret;

 	/* if ingress device is enslaved to an L3 master device pass the
···
 	if (!skb)
 		return NET_RX_SUCCESS;

-	ret = ip_rcv_finish_core(net, sk, skb);
+	ret = ip_rcv_finish_core(net, sk, skb, dev);
 	if (ret != NET_RX_DROP)
 		ret = dst_input(skb);
 	return ret;
···

 	INIT_LIST_HEAD(&sublist);
 	list_for_each_entry_safe(skb, next, head, list) {
+		struct net_device *dev = skb->dev;
 		struct dst_entry *dst;

 		skb_list_del_init(skb);
···
 		skb = l3mdev_ip_rcv(skb);
 		if (!skb)
 			continue;
-		if (ip_rcv_finish_core(net, sk, skb) == NET_RX_DROP)
+		if (ip_rcv_finish_core(net, sk, skb, dev) == NET_RX_DROP)
 			continue;

 		dst = skb_dst(skb);
+17-5
net/ipv4/ip_options.c
···
  *		If opt == NULL, then skb->data should point to IP header.
  */

-int ip_options_compile(struct net *net,
-		       struct ip_options *opt, struct sk_buff *skb)
+int __ip_options_compile(struct net *net,
+			 struct ip_options *opt, struct sk_buff *skb,
+			 __be32 *info)
 {
 	__be32 spec_dst = htonl(INADDR_ANY);
 	unsigned char *pp_ptr = NULL;
···
 	return 0;

 error:
-	if (skb) {
-		icmp_send(skb, ICMP_PARAMETERPROB, 0, htonl((pp_ptr-iph)<<24));
-	}
+	if (info)
+		*info = htonl((pp_ptr-iph)<<24);
 	return -EINVAL;
+}
+
+int ip_options_compile(struct net *net,
+		       struct ip_options *opt, struct sk_buff *skb)
+{
+	int ret;
+	__be32 info;
+
+	ret = __ip_options_compile(net, opt, skb, &info);
+	if (ret != 0 && skb)
+		icmp_send(skb, ICMP_PARAMETERPROB, 0, info);
+	return ret;
 }
 EXPORT_SYMBOL(ip_options_compile);

+14-5
net/ipv4/netlink.c
···
 #include <linux/types.h>
 #include <net/net_namespace.h>
 #include <net/netlink.h>
+#include <linux/in6.h>
 #include <net/ip.h>

-int rtm_getroute_parse_ip_proto(struct nlattr *attr, u8 *ip_proto,
+int rtm_getroute_parse_ip_proto(struct nlattr *attr, u8 *ip_proto, u8 family,
 				struct netlink_ext_ack *extack)
 {
 	*ip_proto = nla_get_u8(attr);
···
 	switch (*ip_proto) {
 	case IPPROTO_TCP:
 	case IPPROTO_UDP:
-	case IPPROTO_ICMP:
 		return 0;
-	default:
-		NL_SET_ERR_MSG(extack, "Unsupported ip proto");
-		return -EOPNOTSUPP;
+	case IPPROTO_ICMP:
+		if (family != AF_INET)
+			break;
+		return 0;
+#if IS_ENABLED(CONFIG_IPV6)
+	case IPPROTO_ICMPV6:
+		if (family != AF_INET6)
+			break;
+		return 0;
+#endif
 	}
+	NL_SET_ERR_MSG(extack, "Unsupported ip proto");
+	return -EOPNOTSUPP;
 }
 EXPORT_SYMBOL_GPL(rtm_getroute_parse_ip_proto);
···
 		/* "skb_mstamp_ns" is used as a start point for the retransmit timer */
 		skb->skb_mstamp_ns = tp->tcp_wstamp_ns = tp->tcp_clock_cache;
 		list_move_tail(&skb->tcp_tsorted_anchor, &tp->tsorted_sent_queue);
+		tcp_init_tso_segs(skb, mss_now);
 		goto repair; /* Skip network transmission */
 	}

+4-2
net/ipv4/udp.c
···

 	for (i = 0; i < MAX_IPTUN_ENCAP_OPS; i++) {
 		int (*handler)(struct sk_buff *skb, u32 info);
+		const struct ip_tunnel_encap_ops *encap;

-		if (!iptun_encaps[i])
+		encap = rcu_dereference(iptun_encaps[i]);
+		if (!encap)
 			continue;
-		handler = rcu_dereference(iptun_encaps[i]->err_handler);
+		handler = encap->err_handler;
 		if (handler && !handler(skb, info))
 			return 0;
 	}
+1-1
net/ipv6/esp6.c
···
 	skb->len += tailen;
 	skb->data_len += tailen;
 	skb->truesize += tailen;
-	if (sk)
+	if (sk && sk_fullsock(sk))
 		refcount_add(tailen, &sk->sk_wmem_alloc);

 	goto out;
+1-1
net/ipv6/fou6.c
···

 static int gue6_err_proto_handler(int proto, struct sk_buff *skb,
 				  struct inet6_skb_parm *opt,
-				  u8 type, u8 code, int offset, u32 info)
+				  u8 type, u8 code, int offset, __be32 info)
 {
 	const struct inet6_protocol *ipprot;

+41-32
net/ipv6/ip6_gre.c
···
 	return 0;
 }

+static void ip6erspan_set_version(struct nlattr *data[],
+				  struct __ip6_tnl_parm *parms)
+{
+	if (!data)
+		return;
+
+	parms->erspan_ver = 1;
+	if (data[IFLA_GRE_ERSPAN_VER])
+		parms->erspan_ver = nla_get_u8(data[IFLA_GRE_ERSPAN_VER]);
+
+	if (parms->erspan_ver == 1) {
+		if (data[IFLA_GRE_ERSPAN_INDEX])
+			parms->index = nla_get_u32(data[IFLA_GRE_ERSPAN_INDEX]);
+	} else if (parms->erspan_ver == 2) {
+		if (data[IFLA_GRE_ERSPAN_DIR])
+			parms->dir = nla_get_u8(data[IFLA_GRE_ERSPAN_DIR]);
+		if (data[IFLA_GRE_ERSPAN_HWID])
+			parms->hwid = nla_get_u16(data[IFLA_GRE_ERSPAN_HWID]);
+	}
+}
+
 static void ip6gre_netlink_parms(struct nlattr *data[],
 				struct __ip6_tnl_parm *parms)
 {
···

 	if (data[IFLA_GRE_COLLECT_METADATA])
 		parms->collect_md = true;
-
-	parms->erspan_ver = 1;
-	if (data[IFLA_GRE_ERSPAN_VER])
-		parms->erspan_ver = nla_get_u8(data[IFLA_GRE_ERSPAN_VER]);
-
-	if (parms->erspan_ver == 1) {
-		if (data[IFLA_GRE_ERSPAN_INDEX])
-			parms->index = nla_get_u32(data[IFLA_GRE_ERSPAN_INDEX]);
-	} else if (parms->erspan_ver == 2) {
-		if (data[IFLA_GRE_ERSPAN_DIR])
-			parms->dir = nla_get_u8(data[IFLA_GRE_ERSPAN_DIR]);
-		if (data[IFLA_GRE_ERSPAN_HWID])
-			parms->hwid = nla_get_u16(data[IFLA_GRE_ERSPAN_HWID]);
-	}
 }

 static int ip6gre_tap_init(struct net_device *dev)
···
 	struct __ip6_tnl_parm *p = &t->parms;
 	__be16 o_flags = p->o_flags;

-	if ((p->erspan_ver == 1 || p->erspan_ver == 2) &&
-	    !p->collect_md)
-		o_flags |= TUNNEL_KEY;
+	if (p->erspan_ver == 1 || p->erspan_ver == 2) {
+		if (!p->collect_md)
+			o_flags |= TUNNEL_KEY;
+
+		if (nla_put_u8(skb, IFLA_GRE_ERSPAN_VER, p->erspan_ver))
+			goto nla_put_failure;
+
+		if (p->erspan_ver == 1) {
+			if (nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, p->index))
+				goto nla_put_failure;
+		} else {
+			if (nla_put_u8(skb, IFLA_GRE_ERSPAN_DIR, p->dir))
+				goto nla_put_failure;
+			if (nla_put_u16(skb, IFLA_GRE_ERSPAN_HWID, p->hwid))
+				goto nla_put_failure;
+		}
+	}

 	if (nla_put_u32(skb, IFLA_GRE_LINK, p->link) ||
 	    nla_put_be16(skb, IFLA_GRE_IFLAGS,
···
 	    nla_put_u8(skb, IFLA_GRE_ENCAP_LIMIT, p->encap_limit) ||
 	    nla_put_be32(skb, IFLA_GRE_FLOWINFO, p->flowinfo) ||
 	    nla_put_u32(skb, IFLA_GRE_FLAGS, p->flags) ||
-	    nla_put_u32(skb, IFLA_GRE_FWMARK, p->fwmark) ||
-	    nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, p->index))
+	    nla_put_u32(skb, IFLA_GRE_FWMARK, p->fwmark))
 		goto nla_put_failure;

 	if (nla_put_u16(skb, IFLA_GRE_ENCAP_TYPE,
···

 	if (p->collect_md) {
 		if (nla_put_flag(skb, IFLA_GRE_COLLECT_METADATA))
-			goto nla_put_failure;
-	}
-
-	if (nla_put_u8(skb, IFLA_GRE_ERSPAN_VER, p->erspan_ver))
-		goto nla_put_failure;
-
-	if (p->erspan_ver == 1) {
-		if (nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, p->index))
-			goto nla_put_failure;
-	} else if (p->erspan_ver == 2) {
-		if (nla_put_u8(skb, IFLA_GRE_ERSPAN_DIR, p->dir))
-			goto nla_put_failure;
-		if (nla_put_u16(skb, IFLA_GRE_ERSPAN_HWID, p->hwid))
 			goto nla_put_failure;
 	}

···
 	int err;

 	ip6gre_netlink_parms(data, &nt->parms);
+	ip6erspan_set_version(data, &nt->parms);
 	ign = net_generic(net, ip6gre_net_id);

 	if (nt->parms.collect_md) {
···
 	if (IS_ERR(t))
 		return PTR_ERR(t);

+	ip6erspan_set_version(data, &p);
 	ip6gre_tunnel_unlink_md(ign, t);
 	ip6gre_tunnel_unlink(ign, t);
 	ip6erspan_tnl_change(t, &p, !tb[IFLA_MTU]);
+30-9
net/ipv6/route.c
···
 static void rt6_remove_exception(struct rt6_exception_bucket *bucket,
 				struct rt6_exception *rt6_ex)
 {
+	struct fib6_info *from;
 	struct net *net;

 	if (!bucket || !rt6_ex)
 		return;

 	net = dev_net(rt6_ex->rt6i->dst.dev);
+	net->ipv6.rt6_stats->fib_rt_cache--;
+
+	/* purge completely the exception to allow releasing the held resources:
+	 * some [sk] cache may keep the dst around for unlimited time
+	 */
+	from = rcu_dereference_protected(rt6_ex->rt6i->from,
+					 lockdep_is_held(&rt6_exception_lock));
+	rcu_assign_pointer(rt6_ex->rt6i->from, NULL);
+	fib6_info_release(from);
+	dst_dev_put(&rt6_ex->rt6i->dst);
+
 	hlist_del_rcu(&rt6_ex->hlist);
 	dst_release(&rt6_ex->rt6i->dst);
 	kfree_rcu(rt6_ex, rcu);
 	WARN_ON_ONCE(!bucket->depth);
 	bucket->depth--;
-	net->ipv6.rt6_stats->fib_rt_cache--;
 }

 /* Remove oldest rt6_ex in bucket and free the memory
···
 static void rt6_update_exception_stamp_rt(struct rt6_info *rt)
 {
 	struct rt6_exception_bucket *bucket;
-	struct fib6_info *from = rt->from;
 	struct in6_addr *src_key = NULL;
 	struct rt6_exception *rt6_ex;
-
-	if (!from ||
-	    !(rt->rt6i_flags & RTF_CACHE))
-		return;
+	struct fib6_info *from;

 	rcu_read_lock();
+	from = rcu_dereference(rt->from);
+	if (!from || !(rt->rt6i_flags & RTF_CACHE))
+		goto unlock;
+
 	bucket = rcu_dereference(from->rt6i_exception_bucket);

 #ifdef CONFIG_IPV6_SUBTREES
···
 	if (rt6_ex)
 		rt6_ex->stamp = jiffies;

+unlock:
 	rcu_read_unlock();
 }

···
 	u32 tbid = l3mdev_fib_table(dev) ? : RT_TABLE_MAIN;
 	const struct in6_addr *gw_addr = &cfg->fc_gateway;
 	u32 flags = RTF_LOCAL | RTF_ANYCAST | RTF_REJECT;
+	struct fib6_info *from;
 	struct rt6_info *grt;
 	int err;

 	err = 0;
 	grt = ip6_nh_lookup_table(net, cfg, gw_addr, tbid, 0);
 	if (grt) {
+		rcu_read_lock();
+		from = rcu_dereference(grt->from);
 		if (!grt->dst.error &&
 		    /* ignore match if it is the default route */
-		    grt->from && !ipv6_addr_any(&grt->from->fib6_dst.addr) &&
+		    from && !ipv6_addr_any(&from->fib6_dst.addr) &&
 		    (grt->rt6i_flags & flags || dev != grt->dst.dev)) {
 			NL_SET_ERR_MSG(extack,
 				       "Nexthop has invalid gateway or device mismatch");
 			err = -EINVAL;
 		}
+		rcu_read_unlock();

 		ip6_rt_put(grt);
 	}
···
 		cfg->fc_gateway = nla_get_in6_addr(tb[RTA_GATEWAY]);
 		cfg->fc_flags |= RTF_GATEWAY;
 	}
+	if (tb[RTA_VIA]) {
+		NL_SET_ERR_MSG(extack, "IPv6 does not support RTA_VIA attribute");
+		goto errout;
+	}

 	if (tb[RTA_DST]) {
 		int plen = (rtm->rtm_dst_len + 7) >> 3;
···
 		table = rt->fib6_table->tb6_id;
 	else
 		table = RT6_TABLE_UNSPEC;
-	rtm->rtm_table = table;
+	rtm->rtm_table = table < 256 ? table : RT_TABLE_COMPAT;
 	if (nla_put_u32(skb, RTA_TABLE, table))
 		goto nla_put_failure;

···

 	if (tb[RTA_IP_PROTO]) {
 		err = rtm_getroute_parse_ip_proto(tb[RTA_IP_PROTO],
-						  &fl6.flowi6_proto, extack);
+						  &fl6.flowi6_proto, AF_INET6,
+						  extack);
 		if (err)
 			goto errout;
 	}
···
 	int peeked, peeking, off;
 	int err;
 	int is_udplite = IS_UDPLITE(sk);
+	struct udp_mib __percpu *mib;
 	bool checksum_valid = false;
-	struct udp_mib *mib;
 	int is_udp4;

 	if (flags & MSG_ERRQUEUE)
···
  */
 static int __udp6_lib_err_encap_no_sk(struct sk_buff *skb,
 				      struct inet6_skb_parm *opt,
-				      u8 type, u8 code, int offset, u32 info)
+				      u8 type, u8 code, int offset, __be32 info)
 {
 	int i;

 	for (i = 0; i < MAX_IPTUN_ENCAP_OPS; i++) {
 		int (*handler)(struct sk_buff *skb, struct inet6_skb_parm *opt,
-			       u8 type, u8 code, int offset, u32 info);
+			       u8 type, u8 code, int offset, __be32 info);
+		const struct ip6_tnl_encap_ops *encap;

-		if (!ip6tun_encaps[i])
+		encap = rcu_dereference(ip6tun_encaps[i]);
+		if (!encap)
 			continue;
-		handler = rcu_dereference(ip6tun_encaps[i]->err_handler);
+		handler = encap->err_handler;
 		if (handler && !handler(skb, opt, type, code, offset, info))
 			return 0;
 	}
+1-1
net/ipv6/xfrm6_tunnel.c
···
 	struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net);
 	unsigned int i;

-	xfrm_state_flush(net, IPSEC_PROTO_ANY, false);
 	xfrm_flush_gc();
+	xfrm_state_flush(net, IPSEC_PROTO_ANY, false, true);

 	for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++)
 		WARN_ON_ONCE(!hlist_empty(&xfrm6_tn->spi_byaddr[i]));
+16-26
net/key/af_key.c
···
 	return 0;
 }

-static int pfkey_broadcast_one(struct sk_buff *skb, struct sk_buff **skb2,
-			       gfp_t allocation, struct sock *sk)
+static int pfkey_broadcast_one(struct sk_buff *skb, gfp_t allocation,
+			       struct sock *sk)
 {
 	int err = -ENOBUFS;

-	sock_hold(sk);
-	if (*skb2 == NULL) {
-		if (refcount_read(&skb->users) != 1) {
-			*skb2 = skb_clone(skb, allocation);
-		} else {
-			*skb2 = skb;
-			refcount_inc(&skb->users);
-		}
+	if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf)
+		return err;
+
+	skb = skb_clone(skb, allocation);
+
+	if (skb) {
+		skb_set_owner_r(skb, sk);
+		skb_queue_tail(&sk->sk_receive_queue, skb);
+		sk->sk_data_ready(sk);
+		err = 0;
 	}
-	if (*skb2 != NULL) {
-		if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf) {
-			skb_set_owner_r(*skb2, sk);
-			skb_queue_tail(&sk->sk_receive_queue, *skb2);
-			sk->sk_data_ready(sk);
-			*skb2 = NULL;
-			err = 0;
-		}
-	}
-	sock_put(sk);
 	return err;
 }
···
 {
 	struct netns_pfkey *net_pfkey = net_generic(net, pfkey_net_id);
 	struct sock *sk;
-	struct sk_buff *skb2 = NULL;
 	int err = -ESRCH;

 	/* XXX Do we need something like netlink_overrun?  I think
···
 		 * socket.
 		 */
 		if (pfk->promisc)
-			pfkey_broadcast_one(skb, &skb2, GFP_ATOMIC, sk);
+			pfkey_broadcast_one(skb, GFP_ATOMIC, sk);

 		/* the exact target will be processed later */
 		if (sk == one_sk)
···
 			continue;
 		}

-		err2 = pfkey_broadcast_one(skb, &skb2, GFP_ATOMIC, sk);
+		err2 = pfkey_broadcast_one(skb, GFP_ATOMIC, sk);

 		/* Error is cleared after successful sending to at least one
 		 * registered KM */
···
 	rcu_read_unlock();

 	if (one_sk != NULL)
-		err = pfkey_broadcast_one(skb, &skb2, allocation, one_sk);
+		err = pfkey_broadcast_one(skb, allocation, one_sk);

-	kfree_skb(skb2);
 	kfree_skb(skb);
 	return err;
 }
···
 	if (proto == 0)
 		return -EINVAL;

-	err = xfrm_state_flush(net, proto, true);
+	err = xfrm_state_flush(net, proto, true, false);
 	err2 = unicast_flush_resp(sk, hdr);
 	if (err || err2) {
 		if (err == -ESRCH) /* empty table - go quietly */
+5-1
net/mac80211/cfg.c
···
 				  BSS_CHANGED_P2P_PS |
 				  BSS_CHANGED_TXPOWER;
 	int err;
+	int prev_beacon_int;

 	old = sdata_dereference(sdata->u.ap.beacon, sdata);
 	if (old)
···

 	sdata->needed_rx_chains = sdata->local->rx_chains;

+	prev_beacon_int = sdata->vif.bss_conf.beacon_int;
 	sdata->vif.bss_conf.beacon_int = params->beacon_interval;

 	if (params->he_cap)
···
 	if (!err)
 		ieee80211_vif_copy_chanctx_to_vlans(sdata, false);
 	mutex_unlock(&local->mtx);
-	if (err)
+	if (err) {
+		sdata->vif.bss_conf.beacon_int = prev_beacon_int;
 		return err;
+	}

 	/*
 	 * Apply control port protocol, this allows us to
+2-2
net/mac80211/main.c
···
 	 * We need a bit of data queued to build aggregates properly, so
 	 * instruct the TCP stack to allow more than a single ms of data
 	 * to be queued in the stack. The value is a bit-shift of 1
-	 * second, so 8 is ~4ms of queued data. Only affects local TCP
+	 * second, so 7 is ~8ms of queued data. Only affects local TCP
 	 * sockets.
 	 * This is the default, anyhow - drivers may need to override it
 	 * for local reasons (longer buffers, longer completion time, or
 	 * similar).
 	 */
-	local->hw.tx_sk_pacing_shift = 8;
+	local->hw.tx_sk_pacing_shift = 7;

 	/* set up some defaults */
 	local->hw.queues = 1;
+6
net/mac80211/mesh.h
···
  * @dst: mesh path destination mac address
  * @mpp: mesh proxy mac address
  * @rhash: rhashtable list pointer
+ * @walk_list: linked list containing all mesh_path objects.
  * @gate_list: list pointer for known gates list
  * @sdata: mesh subif
  * @next_hop: mesh neighbor to which frames for this destination will be
···
 	u8 dst[ETH_ALEN];
 	u8 mpp[ETH_ALEN];	/* used for MPP or MAP */
 	struct rhash_head rhash;
+	struct hlist_node walk_list;
 	struct hlist_node gate_list;
 	struct ieee80211_sub_if_data *sdata;
 	struct sta_info __rcu *next_hop;
···
  *	gate's mpath may or may not be resolved and active.
  * @gates_lock: protects updates to known_gates
  * @rhead: the rhashtable containing struct mesh_paths, keyed by dest addr
+ * @walk_head: linked list containing all mesh_path objects
+ * @walk_lock: lock protecting walk_head
  * @entries: number of entries in the table
  */
 struct mesh_table {
 	struct hlist_head known_gates;
 	spinlock_t gates_lock;
 	struct rhashtable rhead;
+	struct hlist_head walk_head;
+	spinlock_t walk_lock;
 	atomic_t entries; /* Up to MAX_MESH_NEIGHBOURS */
 };

+47-110
net/mac80211/mesh_pathtbl.c
···5959 return NULL;60606161 INIT_HLIST_HEAD(&newtbl->known_gates);6262+ INIT_HLIST_HEAD(&newtbl->walk_head);6263 atomic_set(&newtbl->entries, 0);6364 spin_lock_init(&newtbl->gates_lock);6565+ spin_lock_init(&newtbl->walk_lock);64666567 return newtbl;6668}···251249static struct mesh_path *252250__mesh_path_lookup_by_idx(struct mesh_table *tbl, int idx)253251{254254- int i = 0, ret;255255- struct mesh_path *mpath = NULL;256256- struct rhashtable_iter iter;252252+ int i = 0;253253+ struct mesh_path *mpath;257254258258- ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_ATOMIC);259259- if (ret)260260- return NULL;261261-262262- rhashtable_walk_start(&iter);263263-264264- while ((mpath = rhashtable_walk_next(&iter))) {265265- if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN)266266- continue;267267- if (IS_ERR(mpath))268268- break;255255+ hlist_for_each_entry_rcu(mpath, &tbl->walk_head, walk_list) {269256 if (i++ == idx)270257 break;271258 }272272- rhashtable_walk_stop(&iter);273273- rhashtable_walk_exit(&iter);274259275275- if (IS_ERR(mpath) || !mpath)260260+ if (!mpath)276261 return NULL;277262278263 if (mpath_expired(mpath)) {···421432 return ERR_PTR(-ENOMEM);422433423434 tbl = sdata->u.mesh.mesh_paths;435435+ spin_lock_bh(&tbl->walk_lock);424436 do {425437 ret = rhashtable_lookup_insert_fast(&tbl->rhead,426438 &new_mpath->rhash,···431441 mpath = rhashtable_lookup_fast(&tbl->rhead,432442 dst,433443 mesh_rht_params);434434-444444+ else if (!ret)445445+ hlist_add_head(&new_mpath->walk_list, &tbl->walk_head);435446 } while (unlikely(ret == -EEXIST && !mpath));447447+ spin_unlock_bh(&tbl->walk_lock);436448437437- if (ret && ret != -EEXIST)438438- return ERR_PTR(ret);439439-440440- /* At this point either new_mpath was added, or we found a441441- * matching entry already in the table; in the latter case442442- * free the unnecessary new entry.443443- */444444- if (ret == -EEXIST) {449449+ if (ret) {445450 kfree(new_mpath);451451+452452+ if (ret != -EEXIST)453453+ return 
net/mac80211/mesh_pathtbl.c
···
 				return ERR_PTR(ret);
+
 		new_mpath = mpath;
 	}
+
 	sdata->u.mesh.mesh_paths_generation++;
 	return new_mpath;
}
···
 
 	memcpy(new_mpath->mpp, mpp, ETH_ALEN);
 	tbl = sdata->u.mesh.mpp_paths;
+
+	spin_lock_bh(&tbl->walk_lock);
 	ret = rhashtable_lookup_insert_fast(&tbl->rhead,
 					    &new_mpath->rhash,
 					    mesh_rht_params);
+	if (!ret)
+		hlist_add_head_rcu(&new_mpath->walk_list, &tbl->walk_head);
+	spin_unlock_bh(&tbl->walk_lock);
+
+	if (ret)
+		kfree(new_mpath);
 
 	sdata->u.mesh.mpp_paths_generation++;
 	return ret;
···
 	struct mesh_table *tbl = sdata->u.mesh.mesh_paths;
 	static const u8 bcast[ETH_ALEN] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
 	struct mesh_path *mpath;
-	struct rhashtable_iter iter;
-	int ret;
 
-	ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_ATOMIC);
-	if (ret)
-		return;
-
-	rhashtable_walk_start(&iter);
-
-	while ((mpath = rhashtable_walk_next(&iter))) {
-		if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN)
-			continue;
-		if (IS_ERR(mpath))
-			break;
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(mpath, &tbl->walk_head, walk_list) {
 		if (rcu_access_pointer(mpath->next_hop) == sta &&
 		    mpath->flags & MESH_PATH_ACTIVE &&
 		    !(mpath->flags & MESH_PATH_FIXED)) {
···
 				WLAN_REASON_MESH_PATH_DEST_UNREACHABLE, bcast);
 		}
 	}
-	rhashtable_walk_stop(&iter);
-	rhashtable_walk_exit(&iter);
+	rcu_read_unlock();
}
 
static void mesh_path_free_rcu(struct mesh_table *tbl,
···
 
static void __mesh_path_del(struct mesh_table *tbl, struct mesh_path *mpath)
{
+	hlist_del_rcu(&mpath->walk_list);
 	rhashtable_remove_fast(&tbl->rhead, &mpath->rhash, mesh_rht_params);
 	mesh_path_free_rcu(tbl, mpath);
}
···
 	struct ieee80211_sub_if_data *sdata = sta->sdata;
 	struct mesh_table *tbl = sdata->u.mesh.mesh_paths;
 	struct mesh_path *mpath;
-	struct rhashtable_iter iter;
-	int ret;
+	struct hlist_node *n;
 
-	ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_ATOMIC);
-	if (ret)
-		return;
-
-	rhashtable_walk_start(&iter);
-
-	while ((mpath = rhashtable_walk_next(&iter))) {
-		if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN)
-			continue;
-		if (IS_ERR(mpath))
-			break;
-
+	spin_lock_bh(&tbl->walk_lock);
+	hlist_for_each_entry_safe(mpath, n, &tbl->walk_head, walk_list) {
 		if (rcu_access_pointer(mpath->next_hop) == sta)
 			__mesh_path_del(tbl, mpath);
 	}
-
-	rhashtable_walk_stop(&iter);
-	rhashtable_walk_exit(&iter);
+	spin_unlock_bh(&tbl->walk_lock);
}
 
static void mpp_flush_by_proxy(struct ieee80211_sub_if_data *sdata,
···
{
 	struct mesh_table *tbl = sdata->u.mesh.mpp_paths;
 	struct mesh_path *mpath;
-	struct rhashtable_iter iter;
-	int ret;
+	struct hlist_node *n;
 
-	ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_ATOMIC);
-	if (ret)
-		return;
-
-	rhashtable_walk_start(&iter);
-
-	while ((mpath = rhashtable_walk_next(&iter))) {
-		if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN)
-			continue;
-		if (IS_ERR(mpath))
-			break;
-
+	spin_lock_bh(&tbl->walk_lock);
+	hlist_for_each_entry_safe(mpath, n, &tbl->walk_head, walk_list) {
 		if (ether_addr_equal(mpath->mpp, proxy))
 			__mesh_path_del(tbl, mpath);
 	}
-
-	rhashtable_walk_stop(&iter);
-	rhashtable_walk_exit(&iter);
+	spin_unlock_bh(&tbl->walk_lock);
}
 
static void table_flush_by_iface(struct mesh_table *tbl)
{
 	struct mesh_path *mpath;
-	struct rhashtable_iter iter;
-	int ret;
+	struct hlist_node *n;
 
-	ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_ATOMIC);
-	if (ret)
-		return;
-
-	rhashtable_walk_start(&iter);
-
-	while ((mpath = rhashtable_walk_next(&iter))) {
-		if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN)
-			continue;
-		if (IS_ERR(mpath))
-			break;
+	spin_lock_bh(&tbl->walk_lock);
+	hlist_for_each_entry_safe(mpath, n, &tbl->walk_head, walk_list) {
 		__mesh_path_del(tbl, mpath);
 	}
-
-	rhashtable_walk_stop(&iter);
-	rhashtable_walk_exit(&iter);
+	spin_unlock_bh(&tbl->walk_lock);
}
 
/**
···
{
 	struct mesh_path *mpath;
 
-	rcu_read_lock();
+	spin_lock_bh(&tbl->walk_lock);
 	mpath = rhashtable_lookup_fast(&tbl->rhead, addr, mesh_rht_params);
 	if (!mpath) {
-		rcu_read_unlock();
+		spin_unlock_bh(&tbl->walk_lock);
 		return -ENXIO;
 	}
 
 	__mesh_path_del(tbl, mpath);
-	rcu_read_unlock();
+	spin_unlock_bh(&tbl->walk_lock);
 	return 0;
}
 
···
 			  struct mesh_table *tbl)
{
 	struct mesh_path *mpath;
-	struct rhashtable_iter iter;
-	int ret;
+	struct hlist_node *n;
 
-	ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_KERNEL);
-	if (ret)
-		return;
-
-	rhashtable_walk_start(&iter);
-
-	while ((mpath = rhashtable_walk_next(&iter))) {
-		if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN)
-			continue;
-		if (IS_ERR(mpath))
-			break;
+	spin_lock_bh(&tbl->walk_lock);
+	hlist_for_each_entry_safe(mpath, n, &tbl->walk_head, walk_list) {
 		if ((!(mpath->flags & MESH_PATH_RESOLVING)) &&
 		    (!(mpath->flags & MESH_PATH_FIXED)) &&
 		    time_after(jiffies, mpath->exp_time + MESH_PATH_EXPIRE))
 			__mesh_path_del(tbl, mpath);
 	}
-
-	rhashtable_walk_stop(&iter);
-	rhashtable_walk_exit(&iter);
+	spin_unlock_bh(&tbl->walk_lock);
}
 
void mesh_path_expire(struct ieee80211_sub_if_data *sdata)
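The mesh-table hunks above replace `rhashtable` walks with a plain linked list guarded by `tbl->walk_lock`; `hlist_for_each_entry_safe()` caches the next pointer before the loop body runs, so the current entry can be unlinked and freed mid-iteration. A minimal userspace sketch of the same "safe iteration with deletion" idiom, using a hand-rolled singly linked list in place of the kernel's hlist (names are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdlib.h>

struct node {
	int val;
	struct node *next;
};

/* Push a new node onto the head of the list. */
static void push(struct node **head, int val)
{
	struct node *n = malloc(sizeof(*n));
	n->val = val;
	n->next = *head;
	*head = n;
}

/* Delete every node matching val. As in hlist_for_each_entry_safe(),
 * the next pointer is saved before the current node may be freed,
 * so deletion during traversal is safe. Returns the deletion count. */
static int flush_matching(struct node **head, int val)
{
	struct node **pprev = head, *cur, *next;
	int deleted = 0;

	for (cur = *head; cur; cur = next) {
		next = cur->next;	/* saved before potential free */
		if (cur->val == val) {
			*pprev = next;	/* unlink the current node */
			free(cur);
			deleted++;
		} else {
			pprev = &cur->next;
		}
	}
	return deleted;
}
```

In the kernel version the spinlock plays the role that single-threaded ownership plays here: it keeps concurrent writers from unlinking a neighbour while the walk holds a cached `next`.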
+3
net/mpls/af_mpls.c
···
 				goto errout;
 			break;
 		}
+		case RTA_GATEWAY:
+			NL_SET_ERR_MSG(extack, "MPLS does not support RTA_GATEWAY attribute");
+			goto errout;
 		case RTA_VIA:
 		{
 			if (nla_get_via(nla, &cfg->rc_via_alen,
+2-1
net/netfilter/ipvs/ip_vs_ctl.c
···
{
 	struct ip_vs_dest *dest;
 	unsigned int atype, i;
-	int ret = 0;
 
 	EnterFunction(2);
 
#ifdef CONFIG_IP_VS_IPV6
 	if (udest->af == AF_INET6) {
+		int ret;
+
 		atype = ipv6_addr_type(&udest->addr.in6);
 		if ((!(atype & IPV6_ADDR_UNICAST) ||
 		     atype & IPV6_ADDR_LINKLOCAL) &&
+3
net/netfilter/nf_tables_api.c
···
 	int err;
 
 	list_for_each_entry(rule, &ctx->chain->rules, list) {
+		if (!nft_is_active_next(ctx->net, rule))
+			continue;
+
 		err = nft_delrule(ctx, rule);
 		if (err < 0)
 			return err;
···
 
 	params_new = kzalloc(sizeof(*params_new), GFP_KERNEL);
 	if (unlikely(!params_new)) {
-		if (ret == ACT_P_CREATED)
-			tcf_idr_release(*a, bind);
+		tcf_idr_release(*a, bind);
 		return -ENOMEM;
 	}
 
+2-1
net/sched/act_tunnel_key.c
···
 	return ret;
 
release_tun_meta:
-	dst_release(&metadata->dst);
+	if (metadata)
+		dst_release(&metadata->dst);
 
err_out:
 	if (exists)
+7-3
net/sched/sch_netem.c
···
 	int nb = 0;
 	int count = 1;
 	int rc = NET_XMIT_SUCCESS;
+	int rc_drop = NET_XMIT_DROP;
 
 	/* Do not fool qdisc_drop_all() */
 	skb->prev = NULL;
···
 		q->duplicate = 0;
 		rootq->enqueue(skb2, rootq, to_free);
 		q->duplicate = dupsave;
+		rc_drop = NET_XMIT_SUCCESS;
 	}
 
 	/*
···
 	if (skb_is_gso(skb)) {
 		segs = netem_segment(skb, sch, to_free);
 		if (!segs)
-			return NET_XMIT_DROP;
+			return rc_drop;
 	} else {
 		segs = skb;
 	}
···
 			1<<(prandom_u32() % 8);
 	}
 
-	if (unlikely(sch->q.qlen >= sch->limit))
-		return qdisc_drop_all(skb, sch, to_free);
+	if (unlikely(sch->q.qlen >= sch->limit)) {
+		qdisc_drop_all(skb, sch, to_free);
+		return rc_drop;
+	}
 
 	qdisc_qstats_backlog_inc(sch, skb);
 
+1-1
net/sctp/chunk.c
···
 	if (unlikely(!max_data)) {
 		max_data = sctp_min_frag_point(sctp_sk(asoc->base.sk),
 					       sctp_datachk_len(&asoc->stream));
-		pr_warn_ratelimited("%s: asoc:%p frag_point is zero, forcing max_data to default minimum (%Zu)",
+		pr_warn_ratelimited("%s: asoc:%p frag_point is zero, forcing max_data to default minimum (%zu)",
				    __func__, asoc, max_data);
 	}
 
+2-1
net/sctp/transport.c
···
 
 	/* When a data chunk is sent, reset the heartbeat interval. */
 	expires = jiffies + sctp_transport_timeout(transport);
-	if (time_before(transport->hb_timer.expires, expires) &&
+	if ((time_before(transport->hb_timer.expires, expires) ||
+	     !timer_pending(&transport->hb_timer)) &&
 	    !mod_timer(&transport->hb_timer,
		       expires + prandom_u32_max(transport->rto)))
 		sctp_transport_hold(transport);
+3-3
net/smc/smc.h
···
} __aligned(8);
 
enum smc_urg_state {
-	SMC_URG_VALID,		/* data present */
-	SMC_URG_NOTYET,		/* data pending */
-	SMC_URG_READ		/* data was already read */
+	SMC_URG_VALID	= 1,	/* data present */
+	SMC_URG_NOTYET	= 2,	/* data pending */
+	SMC_URG_READ	= 3,	/* data was already read */
};
 
struct smc_connection {
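The explicit `= 1` starting value in the `smc_urg_state` hunk matters because the enclosing connection state is zero-initialized: with implicit numbering, `SMC_URG_VALID` was 0, so a freshly cleared field was indistinguishable from "urgent data present". A small sketch of the pitfall with invented names (not the SMC code itself):

```c
#include <assert.h>
#include <string.h>

/* Implicit numbering: the first enumerator is 0. */
enum urg_state_bad  { BAD_VALID, BAD_NOTYET, BAD_READ };

/* Explicit numbering starting at 1, as in the fix. */
enum urg_state_good { GOOD_VALID = 1, GOOD_NOTYET = 2, GOOD_READ = 3 };

struct conn_bad  { enum urg_state_bad  urg_state; };
struct conn_good { enum urg_state_good urg_state; };

/* Each returns 1 if the connection claims valid urgent data. */
static int bad_has_urg(const struct conn_bad *c)
{
	return c->urg_state == BAD_VALID;	/* true for a zeroed struct! */
}

static int good_has_urg(const struct conn_good *c)
{
	return c->urg_state == GOOD_VALID;	/* zeroed struct stays distinct */
}
```

The general rule: when 0 is a meaningful "nothing yet" state for memory that starts out zeroed, no real enumerator should share it.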
+1
net/socket.c
···
 	if (inode)
 		inode_lock(inode);
 	sock->ops->release(sock);
+	sock->sk = NULL;
 	if (inode)
 		inode_unlock(inode);
 	sock->ops = NULL;
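Clearing `sock->sk` right after the protocol's `release()` means any later path that tests `sock->sk` sees NULL rather than a dangling pointer to freed memory. A toy userspace sketch of the idiom (types and names are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct sock {
	int data;
};

struct socket {
	struct sock *sk;
};

static int release_calls;

/* Release the sock, then clear the back-reference so a second call
 * (or any later check of sock->sk) is a harmless no-op instead of a
 * use-after-free. */
static void sock_release_once(struct socket *sock)
{
	if (sock->sk) {
		free(sock->sk);
		release_calls++;
		sock->sk = NULL;	/* the fix: drop the dangling reference */
	}
}
```

The NULL store turns "caller must never look at sk again" into a property the code enforces locally.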
+11-6
net/tipc/socket.c
···
 
#define tipc_wait_for_cond(sock_, timeo_, condition_)			       \
({									       \
+	DEFINE_WAIT_FUNC(wait_, woken_wake_function);			       \
 	struct sock *sk_;						       \
 	int rc_;							       \
									       \
 	while ((rc_ = !(condition_))) {					       \
-		DEFINE_WAIT_FUNC(wait_, woken_wake_function);		       \
+		/* coupled with smp_wmb() in tipc_sk_proto_rcv() */	       \
+		smp_rmb();						       \
 		sk_ = (sock_)->sk;					       \
 		rc_ = tipc_sk_sock_err((sock_), timeo_);		       \
 		if (rc_)						       \
 			break;						       \
-		prepare_to_wait(sk_sleep(sk_), &wait_, TASK_INTERRUPTIBLE);    \
+		add_wait_queue(sk_sleep(sk_), &wait_);			       \
 		release_sock(sk_);					       \
 		*(timeo_) = wait_woken(&wait_, TASK_INTERRUPTIBLE, *(timeo_)); \
 		sched_annotate_sleep();					       \
···
static int tipc_wait_for_rcvmsg(struct socket *sock, long *timeop)
{
 	struct sock *sk = sock->sk;
-	DEFINE_WAIT(wait);
+	DEFINE_WAIT_FUNC(wait, woken_wake_function);
 	long timeo = *timeop;
 	int err = sock_error(sk);
···
 		return err;
 
 	for (;;) {
-		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
 		if (timeo && skb_queue_empty(&sk->sk_receive_queue)) {
 			if (sk->sk_shutdown & RCV_SHUTDOWN) {
 				err = -ENOTCONN;
 				break;
 			}
+			add_wait_queue(sk_sleep(sk), &wait);
 			release_sock(sk);
-			timeo = schedule_timeout(timeo);
+			timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
+			sched_annotate_sleep();
 			lock_sock(sk);
+			remove_wait_queue(sk_sleep(sk), &wait);
 		}
 		err = 0;
 		if (!skb_queue_empty(&sk->sk_receive_queue))
···
 		if (err)
 			break;
 	}
-	finish_wait(sk_sleep(sk), &wait);
 	*timeop = timeo;
 	return err;
}
···
 		return;
 	case SOCK_WAKEUP:
 		tipc_dest_del(&tsk->cong_links, msg_orignode(hdr), 0);
+		/* coupled with smp_rmb() in tipc_wait_for_cond() */
+		smp_wmb();
 		tsk->cong_link_cnt--;
 		wakeup = true;
 		break;
+34-25
net/unix/af_unix.c
···
 	addr->hash ^= sk->sk_type;
 
 	__unix_remove_socket(sk);
-	u->addr = addr;
+	smp_store_release(&u->addr, addr);
 	__unix_insert_socket(&unix_socket_table[addr->hash], sk);
 	spin_unlock(&unix_table_lock);
 	err = 0;
···
 
 	err = 0;
 	__unix_remove_socket(sk);
-	u->addr = addr;
+	smp_store_release(&u->addr, addr);
 	__unix_insert_socket(list, sk);
 
out_unlock:
···
 	RCU_INIT_POINTER(newsk->sk_wq, &newu->peer_wq);
 	otheru = unix_sk(other);
 
-	/* copy address information from listening to new sock*/
-	if (otheru->addr) {
-		refcount_inc(&otheru->addr->refcnt);
-		newu->addr = otheru->addr;
-	}
+	/* copy address information from listening to new sock
+	 *
+	 * The contents of *(otheru->addr) and otheru->path
+	 * are seen fully set up here, since we have found
+	 * otheru in hash under unix_table_lock.  Insertion
+	 * into the hash chain we'd found it in had been done
+	 * in an earlier critical area protected by unix_table_lock,
+	 * the same one where we'd set *(otheru->addr) contents,
+	 * as well as otheru->path and otheru->addr itself.
+	 *
+	 * Using smp_store_release() here to set newu->addr
+	 * is enough to make those stores, as well as stores
+	 * to newu->path visible to anyone who gets newu->addr
+	 * by smp_load_acquire().  IOW, the same warranties
+	 * as for unix_sock instances bound in unix_bind() or
+	 * in unix_autobind().
+	 */
 	if (otheru->path.dentry) {
 		path_get(&otheru->path);
 		newu->path = otheru->path;
 	}
+	refcount_inc(&otheru->addr->refcnt);
+	smp_store_release(&newu->addr, otheru->addr);
 
 	/* Set credentials */
 	copy_peercred(sk, other);
···
static int unix_getname(struct socket *sock, struct sockaddr *uaddr, int peer)
{
 	struct sock *sk = sock->sk;
-	struct unix_sock *u;
+	struct unix_address *addr;
 	DECLARE_SOCKADDR(struct sockaddr_un *, sunaddr, uaddr);
 	int err = 0;
···
 		sock_hold(sk);
 	}
 
-	u = unix_sk(sk);
-	unix_state_lock(sk);
-	if (!u->addr) {
+	addr = smp_load_acquire(&unix_sk(sk)->addr);
+	if (!addr) {
 		sunaddr->sun_family = AF_UNIX;
 		sunaddr->sun_path[0] = 0;
 		err = sizeof(short);
 	} else {
-		struct unix_address *addr = u->addr;
-
 		err = addr->len;
 		memcpy(sunaddr, addr->name, addr->len);
 	}
-	unix_state_unlock(sk);
 	sock_put(sk);
out:
 	return err;
···
 
static void unix_copy_addr(struct msghdr *msg, struct sock *sk)
{
-	struct unix_sock *u = unix_sk(sk);
+	struct unix_address *addr = smp_load_acquire(&unix_sk(sk)->addr);
 
-	if (u->addr) {
-		msg->msg_namelen = u->addr->len;
-		memcpy(msg->msg_name, u->addr->name, u->addr->len);
+	if (addr) {
+		msg->msg_namelen = addr->len;
+		memcpy(msg->msg_name, addr->name, addr->len);
 	}
}
···
 	if (!ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN))
 		return -EPERM;
 
-	unix_state_lock(sk);
-	path = unix_sk(sk)->path;
-	if (!path.dentry) {
-		unix_state_unlock(sk);
+	if (!smp_load_acquire(&unix_sk(sk)->addr))
 		return -ENOENT;
-	}
+
+	path = unix_sk(sk)->path;
+	if (!path.dentry)
+		return -ENOENT;
 
 	path_get(&path);
-	unix_state_unlock(sk);
 
 	fd = get_unused_fd_flags(O_CLOEXEC);
 	if (fd < 0)
···
 			(s->sk_state == TCP_ESTABLISHED ? SS_CONNECTING : SS_DISCONNECTING),
 			sock_i_ino(s));
 
-		if (u->addr) {
+		if (u->addr) { // under unix_table_lock here
 			int i, len;
 			seq_putc(seq, ' ');
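The `smp_store_release()`/`smp_load_acquire()` pairing in the af_unix hunks publishes a fully initialized `unix_address`: every store made before the release is guaranteed visible to a reader whose acquire load observes the non-NULL pointer. The same contract can be expressed in portable C11 atomics; the sketch below demonstrates the ordering pair in a single thread (struct fields and names are made up, not from af_unix):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct addr {
	int len;
	char name[16];
};

/* Shared slot: readers see either NULL or a fully built struct addr. */
static _Atomic(struct addr *) published = NULL;

static void publish(struct addr *a)
{
	/* Release: all earlier initialization of *a is ordered before
	 * the pointer store becomes visible. */
	atomic_store_explicit(&published, a, memory_order_release);
}

static struct addr *lookup(void)
{
	/* Acquire: pairs with the release store above, so a non-NULL
	 * result implies *a is fully visible to this reader. */
	return atomic_load_explicit(&published, memory_order_acquire);
}
```

This is exactly why the patch can drop `unix_state_lock()` around readers: the pointer itself carries the "initialization is done" guarantee.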
+2-1
net/unix/diag.c
···
 
static int sk_diag_dump_name(struct sock *sk, struct sk_buff *nlskb)
{
-	struct unix_address *addr = unix_sk(sk)->addr;
+	/* might or might not have unix_table_lock */
+	struct unix_address *addr = smp_load_acquire(&unix_sk(sk)->addr);
 
 	if (!addr)
 		return 0;
+2-2
scripts/kallsyms.c
···
 		fprintf(stderr, "Read error or end of file.\n");
 		return -1;
 	}
-	if (strlen(sym) > KSYM_NAME_LEN) {
-		fprintf(stderr, "Symbol %s too long for kallsyms (%zu vs %d).\n"
+	if (strlen(sym) >= KSYM_NAME_LEN) {
+		fprintf(stderr, "Symbol %s too long for kallsyms (%zu >= %d).\n"
				"Please increase KSYM_NAME_LEN both in kernel and kallsyms.c\n",
			sym, strlen(sym), KSYM_NAME_LEN);
		return -1;
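Changing `>` to `>=` in the kallsyms hunk fixes a classic off-by-one: assuming the destination buffer holds KSYM_NAME_LEN bytes, a NUL-terminated name of exactly KSYM_NAME_LEN characters needs KSYM_NAME_LEN + 1 bytes and must also be rejected. A hedged sketch of the boundary (the buffer size here is illustrative):

```c
#include <assert.h>
#include <string.h>

#define NAME_LEN 8	/* stand-in for KSYM_NAME_LEN */

/* Returns 1 only if sym fits in char buf[NAME_LEN] *including* the
 * terminating NUL; strlen(sym) == NAME_LEN would overflow by one,
 * which is exactly what the corrected >= comparison rejects. */
static int sym_fits(const char *sym)
{
	return strlen(sym) < NAME_LEN;
}
```

A name of NAME_LEN - 1 characters is the longest that fits; one more character and the terminator no longer has room.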
+1-3
security/keys/keyring.c
···
 	BUG_ON((ctx->flags & STATE_CHECKS) == 0 ||
 	       (ctx->flags & STATE_CHECKS) == STATE_CHECKS);
 
-	if (ctx->index_key.description)
-		ctx->index_key.desc_len = strlen(ctx->index_key.description);
-
 	/* Check to see if this top-level keyring is what we are looking for
 	 * and whether it is valid or not.
 	 */
···
 	struct keyring_search_context ctx = {
 		.index_key.type		= type,
 		.index_key.description	= description,
+		.index_key.desc_len	= strlen(description),
 		.cred			= current_cred(),
 		.match_data.cmp		= key_default_cmp,
 		.match_data.raw_data	= description,
+6-4
security/lsm_audit.c
···
 	if (a->u.net->sk) {
 		struct sock *sk = a->u.net->sk;
 		struct unix_sock *u;
+		struct unix_address *addr;
 		int len = 0;
 		char *p = NULL;
···
#endif
 	case AF_UNIX:
 		u = unix_sk(sk);
+		addr = smp_load_acquire(&u->addr);
+		if (!addr)
+			break;
 		if (u->path.dentry) {
 			audit_log_d_path(ab, " path=", &u->path);
 			break;
 		}
-		if (!u->addr)
-			break;
-		len = u->addr->len-sizeof(short);
-		p = &u->addr->name->sun_path[0];
+		len = addr->len-sizeof(short);
+		p = &addr->name->sun_path[0];
 		audit_log_format(ab, " path=");
 		if (*p)
 			audit_log_untrustedstring(ab, p);