···
    gating mechanism in Gaudi. Due to how Gaudi is built, the
    clock gating needs to be disabled in order to access the
    registers of the TPC and MME engines. This is sometimes needed
-   during debug by the user and hence the user needs this option
+   during debug by the user and hence the user needs this option.
+   The user can supply a bitmask value, each bit represents
+   a different engine to disable/enable its clock gating feature.
+   The bitmask is composed of 20 bits:
+   0  -  7 : DMA channels
+   8  - 11 : MME engines
+   12 - 19 : TPC engines
+   The bit's location of a specific engine can be determined
+   using (1 << GAUDI_ENGINE_ID_*). GAUDI_ENGINE_ID_* values
+   are defined in uapi habanalabs.h file in enum gaudi_engine_id

 What:           /sys/kernel/debug/habanalabs/hl<n>/command_buffers
 Date:           Jan 2019
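The bitmask construction described in the hunk above can be sketched briefly. Python is used purely for illustration; the `GAUDI_ENGINE_ID_*` values below are assumptions mirroring the bit ranges quoted above and should be checked against `enum gaudi_engine_id` in the uapi `habanalabs.h` of your tree:

```python
# Bit layout documented above: bits 0-7 DMA, 8-11 MME, 12-19 TPC.
# NOTE: these GAUDI_ENGINE_ID_* values are illustrative assumptions; the
# authoritative values live in enum gaudi_engine_id (uapi habanalabs.h).
GAUDI_ENGINE_ID_DMA_0 = 0   # first DMA channel
GAUDI_ENGINE_ID_MME_0 = 8   # first MME engine
GAUDI_ENGINE_ID_TPC_0 = 12  # first TPC engine

def clock_gating_mask(*engine_ids):
    """Build the 20-bit debugfs value by OR-ing (1 << engine_id) per engine."""
    mask = 0
    for eid in engine_ids:
        mask |= 1 << eid
    return mask

# Disable clock gating for MME0 and TPC0:
print(hex(clock_gating_mask(GAUDI_ENGINE_ID_MME_0, GAUDI_ENGINE_ID_TPC_0)))  # 0x1100
```

The resulting value would then be written to the clock-gating debugfs file as a plain hex number.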
+1 -1
Documentation/admin-guide/README.rst
···
 Compiling the kernel
 --------------------

- - Make sure you have at least gcc 4.6 available.
+ - Make sure you have at least gcc 4.9 available.
   For more information, refer to :ref:`Documentation/process/changes.rst <changes>`.

 Please note that you can still run a.out user programs with this kernel.
+2
Documentation/arm64/cpu-feature-registers.rst
···

 3) ID_AA64PFR1_EL1 - Processor Feature Register 1
+
 +------------------------------+---------+---------+
 | Name                         |  bits   | visible |
 +------------------------------+---------+---------+
···

 4) MIDR_EL1 - Main ID Register
+
 +------------------------------+---------+---------+
 | Name                         |  bits   | visible |
 +------------------------------+---------+---------+
Documentation/block/bfq-iosched.rst
···
 it with auto-tuning. An alternative way to achieve this goal is to
 just increase the value of timeout_sync, leaving max_budget equal to 0.

-weights
--------
-
-Read-only parameter, used to show the weights of the currently active
-BFQ queues.
-
-
 4. Group scheduling with BFQ
 ============================
···
 For each group, there is only the following parameter to set.

 weight (namely blkio.bfq.weight or io.bfq-weight): the weight of the
-group inside its parent. Available values: 1..10000 (default 100). The
+group inside its parent. Available values: 1..1000 (default 100). The
 linear mapping between ioprio and weights, described at the beginning
 of the tunable section, is still valid, but all weights higher than
 IOPRIO_BE_NR*10 are mapped to ioprio 0.
+8
Documentation/core-api/dma-api.rst
···

 ::

+	bool
+	dma_need_sync(struct device *dev, dma_addr_t dma_addr);
+
+Returns %true if dma_sync_single_for_{device,cpu} calls are required to
+transfer memory ownership. Returns %false if those calls can be skipped.
+
+::
+
	unsigned long
	dma_get_merge_boundary(struct device *dev);
+40
Documentation/dev-tools/kunit/faq.rst
···
 kernel by installing a production configuration of the kernel on production
 hardware with a production userspace and then trying to exercise some behavior
 that depends on interactions between the hardware, the kernel, and userspace.
+
+KUnit isn't working, what should I do?
+======================================
+
+Unfortunately, there are a number of things which can break, but here are some
+things to try.
+
+1. Try running ``./tools/testing/kunit/kunit.py run`` with the ``--raw_output``
+   parameter. This might show details or error messages hidden by the kunit_tool
+   parser.
+2. Instead of running ``kunit.py run``, try running ``kunit.py config``,
+   ``kunit.py build``, and ``kunit.py exec`` independently. This can help track
+   down where an issue is occurring. (If you think the parser is at fault, you
+   can run it manually against stdin or a file with ``kunit.py parse``.)
+3. Running the UML kernel directly can often reveal issues or error messages
+   kunit_tool ignores. This should be as simple as running ``./vmlinux`` after
+   building the UML kernel (e.g., by using ``kunit.py build``). Note that UML
+   has some unusual requirements (such as the host having a tmpfs filesystem
+   mounted), and has had issues in the past when built statically and the host
+   has KASLR enabled. (On older host kernels, you may need to run ``setarch
+   `uname -m` -R ./vmlinux`` to disable KASLR.)
+4. Make sure the kernel .config has ``CONFIG_KUNIT=y`` and at least one test
+   (e.g. ``CONFIG_KUNIT_EXAMPLE_TEST=y``). kunit_tool will keep its .config
+   around, so you can see what config was used after running ``kunit.py run``.
+   It also preserves any config changes you might make, so you can
+   enable/disable things with ``make ARCH=um menuconfig`` or similar, and then
+   re-run kunit_tool.
+5. Try to run ``make ARCH=um defconfig`` before running ``kunit.py run``. This
+   may help clean up any residual config items which could be causing problems.
+6. Finally, try running KUnit outside UML. KUnit and KUnit tests can be
+   built into any kernel, or can be built as a module and loaded at runtime.
+   Doing so should allow you to determine if UML is causing the issue you're
+   seeing. When tests are built-in, they will execute when the kernel boots, and
+   modules will automatically execute associated tests when loaded. Test results
+   can be collected from ``/sys/kernel/debug/kunit/<test suite>/results``, and
+   can be parsed with ``kunit.py parse``. For more details, see "KUnit on
+   non-UML architectures" in :doc:`usage`.
+
+If none of the above tricks help, you are always welcome to email any issues to
+kunit-dev@googlegroups.com.
+28 -10
Documentation/devicetree/bindings/Makefile
···
 DT_DOC_CHECKER ?= dt-doc-validate
 DT_EXTRACT_EX ?= dt-extract-example
 DT_MK_SCHEMA ?= dt-mk-schema
-DT_MK_SCHEMA_USERONLY_FLAG := $(if $(DT_SCHEMA_FILES), -u)

 DT_SCHEMA_MIN_VERSION = 2020.5
···
 DT_DOCS = $(shell $(find_cmd) | sed -e 's|^$(srctree)/||')

-DT_SCHEMA_FILES ?= $(DT_DOCS)
-
-extra-$(CHECK_DT_BINDING) += $(patsubst $(src)/%.yaml,%.example.dts, $(DT_SCHEMA_FILES))
-extra-$(CHECK_DT_BINDING) += $(patsubst $(src)/%.yaml,%.example.dt.yaml, $(DT_SCHEMA_FILES))
-extra-$(CHECK_DT_BINDING) += processed-schema-examples.yaml
-
 override DTC_FLAGS := \
	-Wno-avoid_unnecessary_addr_size \
-	-Wno-graph_child_address
+	-Wno-graph_child_address \
+	-Wno-interrupt_provider

 $(obj)/processed-schema-examples.yaml: $(DT_DOCS) check_dtschema_version FORCE
	$(call if_changed,mk_schema)

-$(obj)/processed-schema.yaml: DT_MK_SCHEMA_FLAGS := $(DT_MK_SCHEMA_USERONLY_FLAG)
+ifeq ($(DT_SCHEMA_FILES),)
+
+# Unless DT_SCHEMA_FILES is specified, use the full schema for dtbs_check too.
+# Just copy processed-schema-examples.yaml
+
+$(obj)/processed-schema.yaml: $(obj)/processed-schema-examples.yaml FORCE
+	$(call if_changed,copy)
+
+DT_SCHEMA_FILES = $(DT_DOCS)
+
+else
+
+# If DT_SCHEMA_FILES is specified, use it for processed-schema.yaml
+
+$(obj)/processed-schema.yaml: DT_MK_SCHEMA_FLAGS := -u
 $(obj)/processed-schema.yaml: $(DT_SCHEMA_FILES) check_dtschema_version FORCE
	$(call if_changed,mk_schema)

-extra-y += processed-schema.yaml
+endif
+
+extra-$(CHECK_DT_BINDING) += $(patsubst $(src)/%.yaml,%.example.dts, $(DT_SCHEMA_FILES))
+extra-$(CHECK_DT_BINDING) += $(patsubst $(src)/%.yaml,%.example.dt.yaml, $(DT_SCHEMA_FILES))
+extra-$(CHECK_DT_BINDING) += processed-schema-examples.yaml
+extra-$(CHECK_DTBS) += processed-schema.yaml
+
+# Hack: avoid 'Argument list too long' error for 'make clean'. Remove most of
+# build artifacts here before they are processed by scripts/Makefile.clean
+clean-files = $(shell find $(obj) \( -name '*.example.dts' -o \
+		-name '*.example.dt.yaml' \) -delete 2>/dev/null)
···
		&lsio_mu1 1 2
		&lsio_mu1 1 3
		&lsio_mu1 3 3>;
-	See Documentation/devicetree/bindings/mailbox/fsl,mu.txt
+	See Documentation/devicetree/bindings/mailbox/fsl,mu.yaml
	for detailed mailbox binding.

 Note: Each mu which supports general interrupt should have an alias correctly
···
 title: Clock bindings for Freescale i.MX27

 maintainers:
-  - Fabio Estevam <fabio.estevam@freescale.com>
+  - Fabio Estevam <fabio.estevam@nxp.com>

 description: |
   The clock consumer should specify the desired clock by having the clock
···
 title: Clock bindings for Freescale i.MX31

 maintainers:
-  - Fabio Estevam <fabio.estevam@freescale.com>
+  - Fabio Estevam <fabio.estevam@nxp.com>

 description: |
   The clock consumer should specify the desired clock by having the clock
···
 title: Clock bindings for Freescale i.MX5

 maintainers:
-  - Fabio Estevam <fabio.estevam@freescale.com>
+  - Fabio Estevam <fabio.estevam@nxp.com>

 description: |
   The clock consumer should specify the desired clock by having the clock
···
       simple-card or audio-graph-card binding. See their binding
       documents on how to describe the way the sii902x device is
       connected to the rest of the audio system:
-      Documentation/devicetree/bindings/sound/simple-card.txt
+      Documentation/devicetree/bindings/sound/simple-card.yaml
       Documentation/devicetree/bindings/sound/audio-graph-card.txt
       Note: In case of the audio-graph-card binding the used port
       index should be 3.
···
   datasheet
 - clocks : phandle to the PRE axi clock input, as described
   in Documentation/devicetree/bindings/clock/clock-bindings.txt and
-  Documentation/devicetree/bindings/clock/imx6q-clock.txt.
+  Documentation/devicetree/bindings/clock/imx6q-clock.yaml.
 - clock-names: should be "axi"
 - interrupts: should contain the PRE interrupt
 - fsl,iram: phandle pointing to the mmio-sram device node, that should be
···
   datasheet
 - clocks : phandles to the PRG ipg and axi clock inputs, as described
   in Documentation/devicetree/bindings/clock/clock-bindings.txt and
-  Documentation/devicetree/bindings/clock/imx6q-clock.txt.
+  Documentation/devicetree/bindings/clock/imx6q-clock.yaml.
 - clock-names: should be "ipg" and "axi"
 - fsl,pres: phandles to the PRE units attached to this PRG, with the fixed
   PRE as the first entry and the muxable PREs following.
···
     "di2_sel" - IPU2 DI0 mux
     "di3_sel" - IPU2 DI1 mux
   The needed clock numbers for each are documented in
-  Documentation/devicetree/bindings/clock/imx5-clock.txt, and in
-  Documentation/devicetree/bindings/clock/imx6q-clock.txt.
+  Documentation/devicetree/bindings/clock/imx5-clock.yaml, and in
+  Documentation/devicetree/bindings/clock/imx6q-clock.yaml.

 Optional properties:
 - pinctrl-names : should be "default" on i.MX53, not used on i.MX6q
···
   description: |
     Should contain a list of phandles pointing to display interface port
     of vop devices. vop definitions as defined in
-    Documentation/devicetree/bindings/display/rockchip/rockchip-vop.txt
+    Documentation/devicetree/bindings/display/rockchip/rockchip-vop.yaml

 required:
   - compatible
···
   Only the GPIO_ACTIVE_HIGH and GPIO_ACTIVE_LOW flags are supported.
 - #interrupt-cells : Specifies the number of cells needed to encode an
   interrupt. Should be 2. The first cell defines the interrupt number,
-  the second encodes the triger flags encoded as described in
+  the second encodes the trigger flags encoded as described in
   Documentation/devicetree/bindings/interrupt-controller/interrupts.txt
 - compatible:
   - "mediatek,mt7621-gpio" for Mediatek controllers
···
   16-31 : private irq, and we use 16 as the co-processor timer.
   31-1024: common irq for soc ip.

-Interrupt triger mode: (Defined in dt-bindings/interrupt-controller/irq.h)
+Interrupt trigger mode: (Defined in dt-bindings/interrupt-controller/irq.h)
   IRQ_TYPE_LEVEL_HIGH (default)
   IRQ_TYPE_LEVEL_LOW
   IRQ_TYPE_EDGE_RISING
···
 to receive a transfer (that is, when TX FIFO contains the response data) by
 strobing the ACK pin with the ready signal. See the "ready-gpios" property of the
 SSP binding as documented in:
-<Documentation/devicetree/bindings/spi/spi-pxa2xx.txt>.
+<Documentation/devicetree/bindings/spi/marvell,mmp2-ssp.yaml>.

 Example:
	&ssp3 {
···

 This device is a serial attached device to BTIF device and thus it must be a
 child node of the serial node with BTIF. The dt-bindings details for BTIF
-device can be known via Documentation/devicetree/bindings/serial/8250.txt.
+device can be known via Documentation/devicetree/bindings/serial/8250.yaml.

 Required properties:
···
	 [flags]>

 On other mach-shmobile platforms GPIO is handled by the gpio-rcar driver.
-Please refer to Documentation/devicetree/bindings/gpio/renesas,gpio-rcar.txt
+Please refer to Documentation/devicetree/bindings/gpio/renesas,rcar-gpio.yaml
 for documentation of the GPIO device tree bindings on those platforms.
···
 see ${LINUX}/Documentation/devicetree/bindings/graph.txt

 Basically, Audio Graph Card property is same as Simple Card.
-see ${LINUX}/Documentation/devicetree/bindings/sound/simple-card.txt
+see ${LINUX}/Documentation/devicetree/bindings/sound/simple-card.yaml

 Below are same as Simple-Card.
···

 sti sound drivers allows to expose sti SoC audio interface through the
 generic ASoC simple card. For details about sound card declaration please refer to
-Documentation/devicetree/bindings/sound/simple-card.txt.
+Documentation/devicetree/bindings/sound/simple-card.yaml.

 1) sti-uniperiph-dai: audio dai device.
 ---------------------------------------
···

 SPI Controller nodes must be child of GENI based Qualcomm Universal
 Peripharal. Please refer GENI based QUP wrapper controller node bindings
-described in Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.txt.
+described in Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.yaml.

 SPI slave nodes must be children of the SPI master node and conform to SPI bus
 binding as described in Documentation/devicetree/bindings/spi/spi-bus.txt.
···
 - PTIM_CTLR "cr<0, 14>" Control reg to start reset timer.
 - PTIM_TSR  "cr<1, 14>" Interrupt cleanup status reg.
 - PTIM_CCVR "cr<3, 14>" Current counter value reg.
-- PTIM_LVR  "cr<6, 14>" Window value reg to triger next event.
+- PTIM_LVR  "cr<6, 14>" Window value reg to trigger next event.

 ==============================
 timer node bindings definition
···
-:orphan:
+.. SPDX-License-Identifier: GPL-2.0

 Writing DeviceTree Bindings in json-schema
 ==========================================
···
 libyaml and its headers be installed on the host system. For some distributions
 that involves installing the development package, such as:

-Debian:
+Debian::
+
   apt-get install libyaml-dev
-Fedora:
+
+Fedora::
+
   dnf -y install libyaml-devel

 Running checks
+12
Documentation/driver-api/ptp.rst
···
   + Ancillary clock features
     - Time stamp external events
     - Period output signals configurable from user space
+    - Low Pass Filter (LPF) access from user space
     - Synchronization of the Linux system time via the PPS subsystem

 PTP hardware clock kernel API
···
     - Auxiliary Slave/Master Mode Snapshot (optional interrupt)
     - Target Time (optional interrupt)
+
+  * Renesas (IDT) ClockMatrix™
+
+    - Up to 4 independent PHC channels
+    - Integrated low pass filter (LPF), access via .adjPhase (compliant to ITU-T G.8273.2)
+    - Programmable output periodic signals
+    - Programmable inputs can time stamp external triggers
+    - Driver and/or hardware configuration through firmware (idtcm.bin)
+    - LPF settings (bandwidth, phase limiting, automatic holdover, physical layer assist (per ITU-T G.8273.2))
+    - Programmable output PTP clocks, any frequency up to 1GHz (to other PHY/MAC time stampers, refclk to ASSPs/SoCs/FPGAs)
+    - Lock to GNSS input, automatic switching between GNSS and user-space PHC control (optional)
+2 -2
Documentation/filesystems/overlayfs.rst
···
 verified on mount time to check that upper file handles are not stale.
 This verification may cause significant overhead in some cases.

-Note: the mount options index=off,nfs_export=on are conflicting and will
-result in an error.
+Note: the mount options index=off,nfs_export=on are conflicting for a
+read-write mount and will result in an error.


 Testsuite
+17 -5
Documentation/i2c/slave-eeprom-backend.rst
 ==============================
-Linux I2C slave eeprom backend
+Linux I2C slave EEPROM backend
 ==============================

-by Wolfram Sang <wsa@sang-engineering.com> in 2014-15
+by Wolfram Sang <wsa@sang-engineering.com> in 2014-20

-This is a proof-of-concept backend which acts like an EEPROM on the connected
-I2C bus. The memory contents can be modified from userspace via this file
-located in sysfs::
+This backend simulates an EEPROM on the connected I2C bus. Its memory contents
+can be accessed from userspace via this file located in sysfs::

   /sys/bus/i2c/devices/<device-directory>/slave-eeprom
+
+The following types are available: 24c02, 24c32, 24c64, and 24c512. Read-only
+variants are also supported. The name needed for instantiating has the form
+'slave-<type>[ro]'. Examples follow:
+
+24c02, read/write, address 0x64:
+  # echo slave-24c02 0x1064 > /sys/bus/i2c/devices/i2c-1/new_device
+
+24c512, read-only, address 0x42:
+  # echo slave-24c512ro 0x1042 > /sys/bus/i2c/devices/i2c-1/new_device
+
+You can also preload data during boot if a device-property named
+'firmware-name' contains a valid filename (DT or ACPI only).

 As of 2015, Linux doesn't support poll on binary sysfs files, so there is no
 notification when another master changed the content.
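As an aside to the instantiation examples above, the 0x1064 and 0x1042 values combine the chip's 7-bit I2C address with a flag marking the address as one the adapter answers for as a slave. A minimal sketch (Python, for illustration only; the flag name and its 0x1000 value are assumptions inferred from the examples above, not stated in this document):

```python
# ASSUMPTION: the examples above appear to OR a 0x1000 "own slave address"
# marker with the 7-bit chip address (0x64 -> 0x1064, 0x42 -> 0x1042).
I2C_OWN_SLAVE_FLAG = 0x1000  # hypothetical name for the marker bit

def slave_sysfs_address(addr):
    """Encode a 7-bit chip address for use with new_device."""
    return I2C_OWN_SLAVE_FLAG | addr

print(hex(slave_sysfs_address(0x64)))  # 0x1064
print(hex(slave_sysfs_address(0x42)))  # 0x1042
```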
+4 -3
Documentation/kbuild/modules.rst
···
	8123_pci.c
	8123_bin.o_shipped	<= Binary blob

---- 3.1 Shared Makefile
+3.1 Shared Makefile
+-------------------

   An external module always includes a wrapper makefile that
   supports building the module using "make" with no arguments.
···

   The syntax of the Module.symvers file is::

-	<CRC>       <Symbol>         <Module>                        <Export Type>      <Namespace>
+	<CRC>       <Symbol>         <Module>                        <Export Type>      <Namespace>

-	0xe1cc2a05  usb_stor_suspend drivers/usb/storage/usb-storage EXPORT_SYMBOL_GPL  USB_STORAGE
+	0xe1cc2a05  usb_stor_suspend drivers/usb/storage/usb-storage EXPORT_SYMBOL_GPL  USB_STORAGE

   The fields are separated by tabs and values may be empty (e.g.
   if no namespace is defined for an exported symbol).
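Given the tab-separated layout shown above, where a field such as the namespace may be empty, a Module.symvers line splits cleanly on tabs. The following is a small illustrative parser (Python; `parse_symvers_line` is a hypothetical helper, not part of kbuild):

```python
def parse_symvers_line(line):
    """Split one Module.symvers line into its five tab-separated fields."""
    crc, symbol, module, export_type, namespace = line.rstrip("\n").split("\t")
    return {
        "crc": crc,
        "symbol": symbol,
        "module": module,
        "export_type": export_type,
        "namespace": namespace or None,  # empty namespace field -> None
    }

entry = parse_symvers_line(
    "0xe1cc2a05\tusb_stor_suspend\tdrivers/usb/storage/usb-storage"
    "\tEXPORT_SYMBOL_GPL\tUSB_STORAGE"
)
print(entry["symbol"], entry["namespace"])  # usb_stor_suspend USB_STORAGE
```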
+1 -1
Documentation/kbuild/reproducible-builds.rst
···

 If you enable ``CONFIG_GCC_PLUGIN_RANDSTRUCT``, you will need to
 pre-generate the random seed in
-``scripts/gcc-plgins/randomize_layout_seed.h`` so the same value
+``scripts/gcc-plugins/randomize_layout_seed.h`` so the same value
 is used in rebuilds.

 Debug info conflicts
+1 -1
Documentation/mips/ingenic-tcu.rst
···
 drivers access their registers through the same regmap.

 For more information regarding the devicetree bindings of the TCU drivers,
-have a look at Documentation/devicetree/bindings/timer/ingenic,tcu.txt.
+have a look at Documentation/devicetree/bindings/timer/ingenic,tcu.yaml.
+1 -1
Documentation/networking/arcnet.rst
···
	ifconfig arc0 insight
	route add insight arc0
	route add freedom arc0	/* I would use the subnet here (like I said
-				   to to in "single protocol" above),
+				   to in "single protocol" above),
				   but the rest of the subnet
				   unfortunately lies across the PPP
				   link on freedom, which confuses
+1 -1
Documentation/networking/ax25.rst
···

 To use the amateur radio protocols within Linux you will need to get a
 suitable copy of the AX.25 Utilities. More detailed information about
-AX.25, NET/ROM and ROSE, associated programs and and utilities can be
+AX.25, NET/ROM and ROSE, associated programs and utilities can be
 found on http://www.linux-ax25.org.

 There is an active mailing list for discussing Linux amateur radio matters
+13 -6
Documentation/networking/bareudp.rst
···

 1) Device creation & deletion

-    a) ip link add dev bareudp0 type bareudp dstport 6635 ethertype 0x8847.
+    a) ip link add dev bareudp0 type bareudp dstport 6635 ethertype mpls_uc

     This creates a bareudp tunnel device which tunnels L3 traffic with ethertype
     0x8847 (MPLS traffic). The destination port of the UDP header will be set to
···
     b) ip link delete bareudp0

-2) Device creation with multiple proto mode enabled
+2) Device creation with multiproto mode enabled

-There are two ways to create a bareudp device for MPLS & IP with multiproto mode
-enabled.
+The multiproto mode allows bareudp tunnels to handle several protocols of the
+same family. It is currently only available for IP and MPLS. This mode has to
+be enabled explicitly with the "multiproto" flag.

-    a) ip link add dev bareudp0 type bareudp dstport 6635 ethertype 0x8847 multiproto
+    a) ip link add dev bareudp0 type bareudp dstport 6635 ethertype ipv4 multiproto

-    b) ip link add dev bareudp0 type bareudp dstport 6635 ethertype mpls
+    For an IPv4 tunnel the multiproto mode allows the tunnel to also handle
+    IPv6.
+
+    b) ip link add dev bareudp0 type bareudp dstport 6635 ethertype mpls_uc multiproto
+
+    For MPLS, the multiproto mode allows the tunnel to handle both unicast
+    and multicast MPLS packets.

 3) Device Usage
+2 -2
Documentation/networking/can_ucan_protocol.rst
···

 *Host2Dev; mandatory*

-Setup bittiming by sending the the structure
+Setup bittiming by sending the structure
 ``ucan_ctl_payload_t.cmd_set_bittiming`` (see ``struct bittiming`` for
 details)
···
	zero

 The CAN device has sent a message to the CAN bus. It answers with a
-list of of tuples <echo-ids, flags>.
+list of tuples <echo-ids, flags>.

 The echo-id identifies the frame from (echos the id from a previous
 UCAN_OUT_TX message). The flag indicates the result of the
+1 -1
Documentation/networking/dsa/dsa.rst
···
 Networking stack hooks
 ----------------------

-When a master netdev is used with DSA, a small hook is placed in in the
+When a master netdev is used with DSA, a small hook is placed in the
 networking stack is in order to have the DSA subsystem process the Ethernet
 switch specific tagging protocol. DSA accomplishes this by registering a
 specific (and fake) Ethernet type (later becoming ``skb->protocol``) with the
+1 -1
Documentation/networking/ip-sysctl.rst
···

	Default: 0x1

-	Note that that additional client or server features are only
+	Note that additional client or server features are only
	effective if the basic support (0x1 and 0x2) are enabled respectively.

 tcp_fastopen_blackhole_timeout_sec - INTEGER
+1 -1
Documentation/networking/ipvs-sysctl.rst
···
	modes (when there is no enough available memory, the strategy
	is enabled and the variable is automatically set to 2,
	otherwise the strategy is disabled and the variable is set to
-	1), and 3 means that that the strategy is always enabled.
+	1), and 3 means that the strategy is always enabled.

 drop_packet - INTEGER
	- 0  - disabled (default)
+1 -1
Documentation/networking/rxrpc.rst
···
     time [tunable] after the last connection using it discarded, in case a new
     connection is made that could use it.

- (#) A client-side connection is only shared between calls if they have have
+ (#) A client-side connection is only shared between calls if they have
     the same key struct describing their security (and assuming the calls
     would otherwise share the connection). Non-secured calls would also be
     able to share connections with each other.
+1 -1
Documentation/powerpc/vas-api.rst
···
 updating CSB with the following data:

	csb.flags = CSB_V;
-	csb.cc = CSB_CC_TRANSLATION;
+	csb.cc = CSB_CC_FAULT_ADDRESS;
	csb.ce = CSB_CE_TERMINATION;
	csb.address = fault_address;
+1 -1
Documentation/process/changes.rst
···
 ====================== ===============  ========================================
         Program        Minimal version  Command to check the version
 ====================== ===============  ========================================
-GNU C                  4.8              gcc --version
+GNU C                  4.9              gcc --version
 GNU make               3.81             make --version
 binutils               2.23             ld -v
 flex                   2.5.35           flex --version
+20
Documentation/process/coding-style.rst
···
 problem, which is called the function-growth-hormone-imbalance syndrome.
 See chapter 6 (Functions).

+For symbol names and documentation, avoid introducing new usage of
+'master / slave' (or 'slave' independent of 'master') and 'blacklist /
+whitelist'.
+
+Recommended replacements for 'master / slave' are:
+    '{primary,main} / {secondary,replica,subordinate}'
+    '{initiator,requester} / {target,responder}'
+    '{controller,host} / {device,worker,proxy}'
+    'leader / follower'
+    'director / performer'
+
+Recommended replacements for 'blacklist/whitelist' are:
+    'denylist / allowlist'
+    'blocklist / passlist'
+
+Exceptions for introducing new usage is to maintain a userspace ABI/API,
+or when updating code for an existing (as of 2020) hardware or protocol
+specification that mandates those terms. For new specifications
+translate specification usage of the terminology to the kernel coding
+standard where possible.

 5) Typedefs
 -----------
Makefile
···
 VERSION = 5
 PATCHLEVEL = 8
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc7
 NAME = Kleptomaniac Octopus

 # *DOCUMENTATION*
···
 ifneq ($(CROSS_COMPILE),)
 CLANG_FLAGS	+= --target=$(notdir $(CROSS_COMPILE:%-=%))
 GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)elfedit))
-CLANG_FLAGS	+= --prefix=$(GCC_TOOLCHAIN_DIR)
+CLANG_FLAGS	+= --prefix=$(GCC_TOOLCHAIN_DIR)$(notdir $(CROSS_COMPILE))
 GCC_TOOLCHAIN	:= $(realpath $(GCC_TOOLCHAIN_DIR)/..)
 endif
 ifneq ($(GCC_TOOLCHAIN),)
···
 endif

 # Align the bit size of userspace programs with the kernel
-KBUILD_USERCFLAGS  += $(filter -m32 -m64, $(KBUILD_CFLAGS))
-KBUILD_USERLDFLAGS += $(filter -m32 -m64, $(KBUILD_CFLAGS))
+KBUILD_USERCFLAGS  += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS))
+KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS))

 # make the checker run with the right architecture
 CHECKFLAGS += --arch=$(ARCH)
···
 descend: $(build-dirs)
 $(build-dirs): prepare
	$(Q)$(MAKE) $(build)=$@ \
-	single-build=$(if $(filter-out $@/, $(filter $@/%, $(single-no-ko))),1) \
+	single-build=$(if $(filter-out $@/, $(filter $@/%, $(KBUILD_SINGLE_TARGETS))),1) \
	need-builtin=1 need-modorder=1

 clean-dirs := $(addprefix _clean_, $(clean-dirs))
+15
arch/arc/Kconfig
···

 endchoice

+config ARC_TUNE_MCPU
+	string "Override default -mcpu compiler flag"
+	default ""
+	help
+	  Override default -mcpu=xxx compiler flag (which is set depending on
+	  the ISA version) with the specified value.
+	  NOTE: If specified flag isn't supported by current compiler the
+	  ISA default value will be used as a fallback.
+
 config CPU_BIG_ENDIAN
	bool "Enable Big Endian Mode"
	help
···
	  On HS cores, taken interrupt auto saves the regfile on stack.
	  This is programmable and can be optionally disabled in which case
	  software INTERRUPT_PROLOGUE/EPILGUE do the needed work
+
+config ARC_LPB_DISABLE
+	bool "Disable loop buffer (LPB)"
+	help
+	  On HS cores, loop buffer (LPB) is programmable in runtime and can
+	  be optionally disabled.

 endif # ISA_ARCV2
+19 -2
arch/arc/Makefile
···
 endif

 cflags-y	+= -fno-common -pipe -fno-builtin -mmedium-calls -D__linux__
-cflags-$(CONFIG_ISA_ARCOMPACT)	+= -mA7
-cflags-$(CONFIG_ISA_ARCV2)	+= -mcpu=hs38
+
+tune-mcpu-def-$(CONFIG_ISA_ARCOMPACT)	:= -mcpu=arc700
+tune-mcpu-def-$(CONFIG_ISA_ARCV2)	:= -mcpu=hs38
+
+ifeq ($(CONFIG_ARC_TUNE_MCPU),"")
+cflags-y	+= $(tune-mcpu-def-y)
+else
+tune-mcpu	:= $(shell echo $(CONFIG_ARC_TUNE_MCPU))
+tune-mcpu-ok	:= $(call cc-option-yn, $(tune-mcpu))
+ifeq ($(tune-mcpu-ok),y)
+cflags-y	+= $(tune-mcpu)
+else
+# The flag provided by 'CONFIG_ARC_TUNE_MCPU' option isn't known by this compiler
+# (probably the compiler is too old). Use ISA default mcpu flag instead as a safe option.
+$(warning ** WARNING ** CONFIG_ARC_TUNE_MCPU flag '$(tune-mcpu)' is unknown, fallback to '$(tune-mcpu-def-y)')
+cflags-y	+= $(tune-mcpu-def-y)
+endif
+endif
+

 ifdef CONFIG_ARC_CURR_IN_REG
 # For a global register defintion, make sure it gets passed to every file
+1 -1
arch/arc/include/asm/elf.h
···
 #define  R_ARC_32_PCREL		0x31

 /*to set parameters in the core dumps */
-#define ELF_ARCH		EM_ARCOMPACT
+#define ELF_ARCH		EM_ARC_INUSE
 #define ELF_CLASS		ELFCLASS32

 #ifdef CONFIG_CPU_BIG_ENDIAN
arch/arc/kernel/entry.S
···
 tracesys:
	; save EFA in case tracer wants the PC of traced task
	; using ERET won't work since next-PC has already committed
-	lr  r12, [efa]
	GET_CURR_TASK_FIELD_PTR   TASK_THREAD, r11
	st  r12, [r11, THREAD_FAULT_ADDR]	; thread.fault_address
···
 ; Breakpoint TRAP
 ; ---------------------------------------------
 trap_with_param:
-
-	; stop_pc info by gdb needs this info
-	lr  r0, [efa]
+	mov r0, r12	; EFA in case ptracer/gdb wants stop_pc
	mov r1, sp
-
-	; Now that we have read EFA, it is safe to do "fake" rtie
-	; and get out of CPU exception mode
-	FAKE_RET_FROM_EXCPN

	; Save callee regs in case gdb wants to have a look
	; SP will grow up by size of CALLEE Reg-File
···

	EXCEPTION_PROLOGUE

+	lr  r12, [efa]
+
+	FAKE_RET_FROM_EXCPN
+
	;============ TRAP 1   :breakpoints
	; Check ECR for trap with arg (PROLOGUE ensures r10 has ECR)
	bmsk.f 0, r10, 7
	bnz    trap_with_param

	;============ TRAP  (no param): syscall top level
-
-	; First return from Exception to pure K mode (Exception/IRQs renabled)
-	FAKE_RET_FROM_EXCPN

	; If syscall tracing ongoing, invoke pre-post-hooks
	GET_CURR_THR_INFO_FLAGS   r10
+8
arch/arc/kernel/head.S
···
	bclr	r5, r5, STATUS_AD_BIT
 #endif
	kflag	r5
+
+#ifdef CONFIG_ARC_LPB_DISABLE
+	lr	r5, [ARC_REG_LPB_BUILD]
+	breq	r5, 0, 1f		; LPB doesn't exist
+	mov	r5, 1
+	sr	r5, [ARC_REG_LPB_CTRL]
+1:
+#endif /* CONFIG_ARC_LPB_DISABLE */
 #endif
	; Config DSP_CTRL properly, so kernel may use integer multiply,
	; multiply-accumulate, and divide operations
+7 -12
arch/arc/kernel/setup.c
···
	{ 0x00, NULL   }
 };

-static const struct id_to_str arc_cpu_rel[] = {
+static const struct id_to_str arc_hs_ver54_rel[] = {
	/* UARCH.MAJOR, Release */
	{ 0, "R3.10a"},
	{ 1, "R3.50a"},
+	{ 2, "R3.60a"},
+	{ 3, "R4.00a"},
	{ 0xFF, NULL }
 };
···
	struct bcr_uarch_build_arcv2 uarch;
	const struct id_to_str *tbl;

-	/*
-	 * Up until (including) the first core4 release (0x54) things were
-	 * simple: AUX IDENTITY.ARCVER was sufficient to identify arc family
-	 * and release: 0x50 to 0x53 was HS38, 0x54 was HS48 (dual issue)
-	 */
-
	if (cpu->core.family < 0x54) { /* includes arc700 */

		for (tbl = &arc_legacy_rel[0]; tbl->id != 0; tbl++) {
···
	}

	/*
-	 * However the subsequent HS release (same 0x54) allow HS38 or HS48
-	 * configurations and encode this info in a different BCR.
-	 * The BCR was introduced in 0x54 so can't be read unconditionally.
+	 * Initial HS cores bumped AUX IDENTITY.ARCVER for each release until
+	 * ARCVER 0x54 which introduced AUX MICRO_ARCH_BUILD and subsequent
+	 * releases only update it.
	 */
-
	READ_BCR(ARC_REG_MICRO_ARCH_BCR, uarch);

	if (uarch.prod == 4) {
···
		cpu->name = "HS38";
	}

-	for (tbl = &arc_cpu_rel[0]; tbl->id != 0xFF; tbl++) {
+	for (tbl = &arc_hs_ver54_rel[0]; tbl->id != 0xFF; tbl++) {
		if (uarch.maj == tbl->id) {
			cpu->release = tbl->str;
			break;
@@
 			linux,code = <SW_FRONT_PROXIMITY>;
 			linux,can-disable;
 		};
+
+		machine_cover {
+			label = "Machine Cover";
+			gpios = <&gpio6 0 GPIO_ACTIVE_LOW>; /* 160 */
+			linux,input-type = <EV_SW>;
+			linux,code = <SW_MACHINE_COVER>;
+			linux,can-disable;
+		};
 	};
 
 	isp1707: isp1707 {
@@
 	pinctrl-0 = <&mmc1_pins>;
 	vmmc-supply = <&vmmc1>;
 	bus-width = <4>;
-	/* For debugging, it is often good idea to remove this GPIO.
-	   It means you can remove back cover (to reboot by removing
-	   battery) and still use the MMC card. */
-	cd-gpios = <&gpio6 0 GPIO_ACTIVE_LOW>; /* 160 */
 };
 
 /* most boards use vaux3, only some old versions use vmmc2 instead */
@@
 #if defined(__APCS_26__)
 #error Sorry, your compiler targets APCS-26 but this kernel requires APCS-32
 #endif
-/*
- * GCC 4.8.0-4.8.2: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58854
- * miscompiles find_get_entry(), and can result in EXT3 and EXT4
- * filesystem corruption (possibly other FS too).
- */
-#if defined(GCC_VERSION) && GCC_VERSION >= 40800 && GCC_VERSION < 40803
-#error Your compiler is too buggy; it is known to miscompile kernels
-#error and result in filesystem corruption and oopses.
-#endif
 
 int main(void)
 {
arch/arm/mach-imx/devices/devices-common.h | +1 -1

@@
 		const struct spi_imx_master *pdata);
 
 struct platform_device *imx_add_imx_dma(char *name, resource_size_t iobase,
-	int irq, int irq_err);
+	int irq);
 struct platform_device *imx_add_imx_sdma(char *name,
 	resource_size_t iobase, int irq, struct sdma_platform_data *pdata);
@@
 	mxc_register_gpio("imx21-gpio", 5, MX27_GPIO6_BASE_ADDR, SZ_256, MX27_INT_GPIO, 0);
 
 	pinctrl_provide_dummies();
-	imx_add_imx_dma("imx27-dma", MX27_DMA_BASE_ADDR,
-			MX27_INT_DMACH0, 0); /* No ERR irq */
+	imx_add_imx_dma("imx27-dma", MX27_DMA_BASE_ADDR, MX27_INT_DMACH0);
 	/* imx27 has the imx21 type audmux */
 	platform_device_register_simple("imx21-audmux", 0, imx27_audmux_res,
 					ARRAY_SIZE(imx27_audmux_res));
arch/arm/mach-omap2/omap_hwmod.c | +11 -3

@@
 		regs = ioremap(data->module_pa,
 			       data->module_size);
 		if (!regs)
-			return -ENOMEM;
+			goto out_free_sysc;
 	}
 
 	/*
@@
 	if (oh->class->name && strcmp(oh->class->name, data->name)) {
 		class = kmemdup(oh->class, sizeof(*oh->class), GFP_KERNEL);
 		if (!class)
-			return -ENOMEM;
+			goto out_unmap;
 	}
 
 	if (list_empty(&oh->slave_ports)) {
 		oi = kcalloc(1, sizeof(*oi), GFP_KERNEL);
 		if (!oi)
-			return -ENOMEM;
+			goto out_free_class;
 
 		/*
 		 * Note that we assume interconnect interface clocks will be
@@
 	spin_unlock_irqrestore(&oh->_lock, flags);
 
 	return 0;
+
+out_free_class:
+	kfree(class);
+out_unmap:
+	iounmap(regs);
+out_free_sysc:
+	kfree(sysc);
+	return -ENOMEM;
 }
 
 static const struct omap_hwmod_reset omap24xx_reset_quirks[] = {
arch/arm/xen/enlighten.c | -1

@@
  * see Documentation/devicetree/bindings/arm/xen.txt for the
  * documentation of the Xen Device Tree format.
  */
-#define GRANT_TABLE_PHYSADDR 0
 void __init xen_early_init(void)
 {
 	of_scan_flat_dt(fdt_find_hyper_node, NULL);
@@
 #ifndef __ASM_VDSOCLOCKSOURCE_H
 #define __ASM_VDSOCLOCKSOURCE_H
 
-#define VDSO_ARCH_CLOCKMODES	\
-	VDSO_CLOCKMODE_ARCHTIMER
+#define VDSO_ARCH_CLOCKMODES					\
+	/* vdso clocksource for both 32 and 64bit tasks */	\
+	VDSO_CLOCKMODE_ARCHTIMER,				\
+	/* vdso clocksource for 64bit tasks only */		\
+	VDSO_CLOCKMODE_ARCHTIMER_NOCOMPAT
 
 #endif
arch/arm64/include/asm/vdso/compat_gettimeofday.h | +7 -1

@@
 	 * update. Return something. Core will do another round and then
 	 * see the mode change and fallback to the syscall.
 	 */
-	if (clock_mode == VDSO_CLOCKMODE_NONE)
+	if (clock_mode != VDSO_CLOCKMODE_ARCHTIMER)
 		return 0;
 
 	/*
@@
 
 	return ret;
 }
+
+static inline bool vdso_clocksource_ok(const struct vdso_data *vd)
+{
+	return vd->clock_mode == VDSO_CLOCKMODE_ARCHTIMER;
+}
+#define vdso_clocksource_ok vdso_clocksource_ok
 
 #endif /* !__ASSEMBLY__ */
arch/arm64/kernel/alternative.c | +2 -14

@@
  */
 static bool branch_insn_requires_update(struct alt_instr *alt, unsigned long pc)
 {
-	unsigned long replptr;
-
-	if (kernel_text_address(pc))
-		return true;
-
-	replptr = (unsigned long)ALT_REPL_PTR(alt);
-	if (pc >= replptr && pc <= (replptr + alt->alt_len))
-		return false;
-
-	/*
-	 * Branching into *another* alternate sequence is doomed, and
-	 * we're not even trying to fix it up.
-	 */
-	BUG();
+	unsigned long replptr = (unsigned long)ALT_REPL_PTR(alt);
+	return !(pc >= replptr && pc <= (replptr + alt->alt_len));
 }
 
 #define align_down(x, a)	((unsigned long)(x) & ~(((unsigned long)(a)) - 1))
@@
 /*
  * Single step API and exception handling.
  */
-static void set_regs_spsr_ss(struct pt_regs *regs)
+static void set_user_regs_spsr_ss(struct user_pt_regs *regs)
 {
 	regs->pstate |= DBG_SPSR_SS;
 }
-NOKPROBE_SYMBOL(set_regs_spsr_ss);
+NOKPROBE_SYMBOL(set_user_regs_spsr_ss);
 
-static void clear_regs_spsr_ss(struct pt_regs *regs)
+static void clear_user_regs_spsr_ss(struct user_pt_regs *regs)
 {
 	regs->pstate &= ~DBG_SPSR_SS;
 }
-NOKPROBE_SYMBOL(clear_regs_spsr_ss);
+NOKPROBE_SYMBOL(clear_user_regs_spsr_ss);
+
+#define set_regs_spsr_ss(r)	set_user_regs_spsr_ss(&(r)->user_regs)
+#define clear_regs_spsr_ss(r)	clear_user_regs_spsr_ss(&(r)->user_regs)
 
 static DEFINE_SPINLOCK(debug_hook_lock);
 static LIST_HEAD(user_step_hook);
@@
 	 * If single step is active for this thread, then set SPSR.SS
 	 * to 1 to avoid returning to the active-pending state.
 	 */
-	if (test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP))
+	if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
 		set_regs_spsr_ss(task_pt_regs(task));
 }
 NOKPROBE_SYMBOL(user_rewind_single_step);
 
 void user_fastforward_single_step(struct task_struct *task)
 {
-	if (test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP))
+	if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
 		clear_regs_spsr_ss(task_pt_regs(task));
+}
+
+void user_regs_reset_single_step(struct user_pt_regs *regs,
+				 struct task_struct *task)
+{
+	if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
+		set_user_regs_spsr_ss(regs);
+	else
+		clear_user_regs_spsr_ss(regs);
 }
 
 /* Kernel API */
arch/arm64/kernel/entry-common.c | +1 -1

@@
 	/*
 	 * The CPU masked interrupts, and we are leaving them masked during
 	 * do_debug_exception(). Update PMR as if we had called
-	 * local_mask_daif().
+	 * local_daif_mask().
 	 */
 	if (system_uses_irq_prio_masking())
 		gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);
arch/arm64/kernel/entry.S | +31 -21

@@
 	add	\dst, \dst, #(\sym - .entry.tramp.text)
 	.endm
 
-	// This macro corrupts x0-x3. It is the caller's duty
-	// to save/restore them if required.
+	/*
+	 * This macro corrupts x0-x3. It is the caller's duty to save/restore
+	 * them if required.
+	 */
 	.macro	apply_ssbd, state, tmp1, tmp2
 #ifdef CONFIG_ARM64_SSBD
 alternative_cb	arm64_enable_wa2_handling
@@
 	stp	x28, x29, [sp, #16 * 14]
 
 	.if	\el == 0
+	.if	\regsize == 32
+	/*
+	 * If we're returning from a 32-bit task on a system affected by
+	 * 1418040 then re-enable userspace access to the virtual counter.
+	 */
+#ifdef CONFIG_ARM64_ERRATUM_1418040
+alternative_if ARM64_WORKAROUND_1418040
+	mrs	x0, cntkctl_el1
+	orr	x0, x0, #2	// ARCH_TIMER_USR_VCT_ACCESS_EN
+	msr	cntkctl_el1, x0
+alternative_else_nop_endif
+#endif
+	.endif
 	clear_gp_regs
 	mrs	x21, sp_el0
 	ldr_this_cpu	tsk, __entry_task, x20
 	msr	sp_el0, tsk
 
-	// Ensure MDSCR_EL1.SS is clear, since we can unmask debug exceptions
-	// when scheduling.
+	/*
+	 * Ensure MDSCR_EL1.SS is clear, since we can unmask debug exceptions
+	 * when scheduling.
+	 */
 	ldr	x19, [tsk, #TSK_TI_FLAGS]
 	disable_step_tsk x19, x20
 
@@
 	tst	x22, #PSR_MODE32_BIT		// native task?
 	b.eq	3f
 
+#ifdef CONFIG_ARM64_ERRATUM_1418040
+alternative_if ARM64_WORKAROUND_1418040
+	mrs	x0, cntkctl_el1
+	bic	x0, x0, #2	// ARCH_TIMER_USR_VCT_ACCESS_EN
+	msr	cntkctl_el1, x0
+alternative_else_nop_endif
+#endif
+
 #ifdef CONFIG_ARM64_ERRATUM_845719
 alternative_if ARM64_WORKAROUND_845719
 #ifdef CONFIG_PID_IN_CONTEXTIDR
@@
 alternative_else_nop_endif
 #endif
 3:
-#ifdef CONFIG_ARM64_ERRATUM_1418040
-alternative_if_not ARM64_WORKAROUND_1418040
-	b	4f
-alternative_else_nop_endif
-	/*
-	 * if (x22.mode32 == cntkctl_el1.el0vcten)
-	 *     cntkctl_el1.el0vcten = ~cntkctl_el1.el0vcten
-	 */
-	mrs	x1, cntkctl_el1
-	eon	x0, x1, x22, lsr #3
-	tbz	x0, #1, 4f
-	eor	x1, x1, #2	// ARCH_TIMER_USR_VCT_ACCESS_EN
-	msr	cntkctl_el1, x1
-4:
-#endif
 	scs_save tsk, x0
 
 	/* No kernel C function calls after this as user keys are set. */
@@
 	.if	\el == 0
 alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-	bne	5f
+	bne	4f
 	msr	far_el1, x30
 	tramp_alias	x30, tramp_exit_native
 	br	x30
-5:
+4:
 	tramp_alias	x30, tramp_exit_compat
 	br	x30
 #endif
@@
 {
 	return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END,
 			GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS,
-			NUMA_NO_NODE, __func__);
+			NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 /* arm kprobe: install breakpoint in text */
arch/arm64/kernel/ptrace.c | +37 -12

@@
 	unsigned long saved_reg;
 
 	/*
-	 * A scratch register (ip(r12) on AArch32, x7 on AArch64) is
-	 * used to denote syscall entry/exit:
+	 * We have some ABI weirdness here in the way that we handle syscall
+	 * exit stops because we indicate whether or not the stop has been
+	 * signalled from syscall entry or syscall exit by clobbering a general
+	 * purpose register (ip/r12 for AArch32, x7 for AArch64) in the tracee
+	 * and restoring its old value after the stop. This means that:
+	 *
+	 * - Any writes by the tracer to this register during the stop are
+	 *   ignored/discarded.
+	 *
+	 * - The actual value of the register is not available during the stop,
+	 *   so the tracer cannot save it and restore it later.
+	 *
+	 * - Syscall stops behave differently to seccomp and pseudo-step traps
+	 *   (the latter do not nobble any registers).
 	 */
 	regno = (is_compat_task() ? 12 : 7);
 	saved_reg = regs->regs[regno];
 	regs->regs[regno] = dir;
 
-	if (dir == PTRACE_SYSCALL_EXIT)
+	if (dir == PTRACE_SYSCALL_ENTER) {
+		if (tracehook_report_syscall_entry(regs))
+			forget_syscall(regs);
+		regs->regs[regno] = saved_reg;
+	} else if (!test_thread_flag(TIF_SINGLESTEP)) {
 		tracehook_report_syscall_exit(regs, 0);
-	else if (tracehook_report_syscall_entry(regs))
-		forget_syscall(regs);
+		regs->regs[regno] = saved_reg;
+	} else {
+		regs->regs[regno] = saved_reg;
 
-	regs->regs[regno] = saved_reg;
+		/*
+		 * Signal a pseudo-step exception since we are stepping but
+		 * tracer modifications to the registers may have rewound the
+		 * state machine.
+		 */
+		tracehook_report_syscall_exit(regs, 1);
+	}
 }
 
 int syscall_trace_enter(struct pt_regs *regs)
@@
 	if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
 		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
 		if (!in_syscall(regs) || (flags & _TIF_SYSCALL_EMU))
-			return -1;
+			return NO_SYSCALL;
 	}
 
 	/* Do the secure computing after ptrace; failures should be fast. */
 	if (secure_computing() == -1)
-		return -1;
+		return NO_SYSCALL;
 
 	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
 		trace_sys_enter(regs, regs->syscallno);
@@
 
 void syscall_trace_exit(struct pt_regs *regs)
 {
+	unsigned long flags = READ_ONCE(current_thread_info()->flags);
+
 	audit_syscall_exit(regs);
 
-	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+	if (flags & _TIF_SYSCALL_TRACEPOINT)
 		trace_sys_exit(regs, regs_return_value(regs));
 
-	if (test_thread_flag(TIF_SYSCALL_TRACE))
+	if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
 		tracehook_report_syscall(regs, PTRACE_SYSCALL_EXIT);
 
 	rseq_syscall(regs);
@@
  */
 int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task)
 {
-	if (!test_tsk_thread_flag(task, TIF_SINGLESTEP))
-		regs->pstate &= ~DBG_SPSR_SS;
+	/* https://lore.kernel.org/lkml/20191118131525.GA4180@willie-the-truck */
+	user_regs_reset_single_step(regs, task);
 
 	if (is_compat_thread(task_thread_info(task)))
 		return valid_compat_regs(regs);
arch/arm64/kernel/signal.c | +2 -9

@@
  */
 static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
 {
-	struct task_struct *tsk = current;
 	sigset_t *oldset = sigmask_to_save();
 	int usig = ksig->sig;
 	int ret;
@@
 	 */
 	ret |= !valid_user_regs(&regs->user_regs, current);
 
-	/*
-	 * Fast forward the stepping logic so we step into the signal
-	 * handler.
-	 */
-	if (!ret)
-		user_fastforward_single_step(tsk);
-
-	signal_setup_done(ret, ksig, 0);
+	/* Step into the signal handler if we are stepping */
+	signal_setup_done(ret, ksig, test_thread_flag(TIF_SINGLESTEP));
 }
 
 /*
arch/arm64/kernel/syscall.c | +19 -2

@@
 		ret = do_ni_syscall(regs, scno);
 	}
 
+	if (is_compat_task())
+		ret = lower_32_bits(ret);
+
 	regs->regs[0] = ret;
 }
@@
 	user_exit();
 
 	if (has_syscall_work(flags)) {
-		/* set default errno for user-issued syscall(-1) */
+		/*
+		 * The de-facto standard way to skip a system call using ptrace
+		 * is to set the system call to -1 (NO_SYSCALL) and set x0 to a
+		 * suitable error code for consumption by userspace. However,
+		 * this cannot be distinguished from a user-issued syscall(-1)
+		 * and so we must set x0 to -ENOSYS here in case the tracer doesn't
+		 * issue the skip and we fall into trace_exit with x0 preserved.
+		 *
+		 * This is slightly odd because it also means that if a tracer
+		 * sets the system call number to -1 but does not initialise x0,
+		 * then x0 will be preserved for all system calls apart from a
+		 * user-issued syscall(-1). However, requesting a skip and not
+		 * setting the return value is unlikely to do anything sensible
+		 * anyway.
+		 */
 		if (scno == NO_SYSCALL)
 			regs->regs[0] = -ENOSYS;
 		scno = syscall_trace_enter(regs);
@@
 	if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
 		local_daif_mask();
 		flags = current_thread_info()->flags;
-		if (!has_syscall_work(flags)) {
+		if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) {
 			/*
 			 * We're off to userspace, where interrupts are
 			 * always enabled after we restore the flags from
@@
 
 1:	cmp	x0, #HVC_RESET_VECTORS
 	b.ne	1f
-reset:
+
 	/*
-	 * Reset kvm back to the hyp stub. Do not clobber x0-x4 in
-	 * case we coming via HVC_SOFT_RESTART.
+	 * Set the HVC_RESET_VECTORS return code before entering the common
+	 * path so that we do not clobber x0-x2 in case we are coming via
+	 * HVC_SOFT_RESTART.
 	 */
+	mov	x0, xzr
+reset:
+	/* Reset kvm back to the hyp stub. */
 	mrs	x5, sctlr_el2
 	mov_q	x6, SCTLR_ELx_FLAGS
 	bic	x5, x5, x6		// Clear SCTL_M and etc
@@
 	/* Install stub vectors */
 	adr_l	x5, __hyp_stub_vectors
 	msr	vbar_el2, x5
-	mov	x0, xzr
 	eret
 
 1:	/* Bad stub call */
arch/arm64/kvm/pmu.c | +6 -1

@@
 }
 
 /*
- * On VHE ensure that only guest events have EL0 counting enabled
+ * On VHE ensure that only guest events have EL0 counting enabled.
+ * This is called from both vcpu_{load,put} and the sysreg handling.
+ * Since the latter is preemptible, special care must be taken to
+ * disable preemption.
  */
 void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
 {
@@
 	if (!has_vhe())
 		return;
 
+	preempt_disable();
 	host = this_cpu_ptr(&kvm_host_data);
 	events_guest = host->pmu_events.events_guest;
 	events_host = host->pmu_events.events_host;
 
 	kvm_vcpu_pmu_enable_el0(events_guest);
 	kvm_vcpu_pmu_disable_el0(events_host);
+	preempt_enable();
 }
 
 /*
arch/arm64/kvm/pvtime.c | +12 -3

@@
 
 #include <linux/arm-smccc.h>
 #include <linux/kvm_host.h>
+#include <linux/sched/stat.h>
 
 #include <asm/kvm_mmu.h>
 #include <asm/pvclock-abi.h>
@@
 	return base;
 }
 
+static bool kvm_arm_pvtime_supported(void)
+{
+	return !!sched_info_on();
+}
+
 int kvm_arm_pvtime_set_attr(struct kvm_vcpu *vcpu,
 			    struct kvm_device_attr *attr)
 {
@@
 	int ret = 0;
 	int idx;
 
-	if (attr->attr != KVM_ARM_VCPU_PVTIME_IPA)
+	if (!kvm_arm_pvtime_supported() ||
+	    attr->attr != KVM_ARM_VCPU_PVTIME_IPA)
 		return -ENXIO;
 
 	if (get_user(ipa, user))
@@
 	u64 __user *user = (u64 __user *)attr->addr;
 	u64 ipa;
 
-	if (attr->attr != KVM_ARM_VCPU_PVTIME_IPA)
+	if (!kvm_arm_pvtime_supported() ||
+	    attr->attr != KVM_ARM_VCPU_PVTIME_IPA)
 		return -ENXIO;
 
 	ipa = vcpu->arch.steal.base;
@@
 {
 	switch (attr->attr) {
 	case KVM_ARM_VCPU_PVTIME_IPA:
-		return 0;
+		if (kvm_arm_pvtime_supported())
+			return 0;
 	}
 	return -ENXIO;
 }
arch/arm64/kvm/reset.c | +7 -3

@@
  */
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 {
-	int ret = -EINVAL;
+	int ret;
 	bool loaded;
 	u32 pstate;
@@
 
 	if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
 	    test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
-		if (kvm_vcpu_enable_ptrauth(vcpu))
+		if (kvm_vcpu_enable_ptrauth(vcpu)) {
+			ret = -EINVAL;
 			goto out;
+		}
 	}
 
 	switch (vcpu->arch.target) {
 	default:
 		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
-			if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1))
+			if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1)) {
+				ret = -EINVAL;
 				goto out;
+			}
 			pstate = VCPU_RESET_PSTATE_SVC;
 		} else {
 			pstate = VCPU_RESET_PSTATE_EL1;
arch/arm64/kvm/vgic/vgic-v4.c | +8

@@
 	    !irqd_irq_disabled(&irq_to_desc(irq)->irq_data))
 		disable_irq_nosync(irq);
 
+	/*
+	 * The v4.1 doorbell can fire concurrently with the vPE being
+	 * made non-resident. Ensure we only update pending_last
+	 * *after* the non-residency sequence has completed.
+	 */
+	raw_spin_lock(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vpe_lock);
 	vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last = true;
+	raw_spin_unlock(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vpe_lock);
+
 	kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
 	kvm_vcpu_kick(vcpu);
@@
 									\
 	/*							\
 	 * We can't unroll if the number of iterations isn't	\
-	 * compile-time constant. Unfortunately GCC versions	\
-	 * up until 4.6 tend to miss obvious constants & cause	\
+	 * compile-time constant. Unfortunately clang versions	\
+	 * up until 8.0 tend to miss obvious constants & cause	\
 	 * this check to fail, even though they go on to	\
 	 * generate reasonable code for the switch statement,	\
 	 * so we skip the sanity check for those compilers.	\
 	 */							\
-	BUILD_BUG_ON((CONFIG_GCC_VERSION >= 40700 ||		\
-		      CONFIG_CLANG_VERSION >= 80000) &&		\
-		     !__builtin_constant_p(times));		\
+	BUILD_BUG_ON(!__builtin_constant_p(times));		\
 									\
 	switch (times) {					\
 	case 32: fn(__VA_ARGS__); /* fall through */		\
arch/mips/kernel/traps.c | +6 -3

@@
 		perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS, 1, regs, 0);
 
 		/* Do not emulate on unsupported core models. */
-		if (!loongson3_cpucfg_emulation_enabled(&current_cpu_data))
+		preempt_disable();
+		if (!loongson3_cpucfg_emulation_enabled(&current_cpu_data)) {
+			preempt_enable();
 			return -1;
-
+		}
 		regs->regs[rd] = loongson3_cpucfg_read_synthesized(
 			&current_cpu_data, sel);
-
+		preempt_enable();
 		return 0;
 	}
@@
 
 	change_c0_status(ST0_CU|ST0_MX|ST0_RE|ST0_FR|ST0_BEV|ST0_TS|ST0_KX|ST0_SX|ST0_UX,
 			 status_set);
+	back_to_back_c0_hazard();
 }
 
 unsigned int hwrena;
@@
 extern unsigned long __cmpxchg_u32(volatile unsigned int *m, unsigned int old,
 				   unsigned int new_);
 extern u64 __cmpxchg_u64(volatile u64 *ptr, u64 old, u64 new_);
+extern u8 __cmpxchg_u8(volatile u8 *ptr, u8 old, u8 new_);
 
 /* don't worry...optimizer will get rid of most of this */
 static inline unsigned long
@@
 #endif
 	case 4: return __cmpxchg_u32((unsigned int *)ptr,
 				     (unsigned int)old, (unsigned int)new_);
+	case 1: return __cmpxchg_u8((u8 *)ptr, (u8)old, (u8)new_);
 	}
 	__cmpxchg_called_with_bad_pointer();
 	return old;
@@
 	 * This is very early in boot, so no harm done if the kernel crashes at
 	 * this point.
 	 */
-	BUG_ON(shared_lppaca_size >= shared_lppaca_total_size);
+	BUG_ON(shared_lppaca_size > shared_lppaca_total_size);
 
 	return ptr;
 }
arch/powerpc/mm/book3s64/pkeys.c | +7 -8

@@
 	int pkey_shift;
 	u64 amr;
 
-	if (!is_pkey_enabled(pkey))
-		return true;
-
 	pkey_shift = pkeyshift(pkey);
-	if (execute && !(read_iamr() & (IAMR_EX_BIT << pkey_shift)))
-		return true;
+	if (execute)
+		return !(read_iamr() & (IAMR_EX_BIT << pkey_shift));
 
-	amr = read_amr(); /* Delay reading amr until absolutely needed */
-	return ((!write && !(amr & (AMR_RD_BIT << pkey_shift))) ||
-		(write &&  !(amr & (AMR_WR_BIT << pkey_shift))));
+	amr = read_amr();
+	if (write)
+		return !(amr & (AMR_WR_BIT << pkey_shift));
+
+	return !(amr & (AMR_RD_BIT << pkey_shift));
 }
 
 bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
@@
 	select ARCH_HAS_SET_DIRECT_MAP
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_STRICT_KERNEL_RWX if MMU
+	select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
+	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
 	select ARCH_WANT_FRAME_POINTERS
 	select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
arch/riscv/include/asm/barrier.h | +9 -1

@@
  * The AQ/RL pair provides a RCpc critical section, but there's not really any
  * way we can take advantage of that here because the ordering is only enforced
  * on that one lock.  Thus, we're just doing a full fence.
+ *
+ * Since we allow writeX to be called from preemptive regions we need at least
+ * an "o" in the predecessor set to ensure device writes are visible before the
+ * task is marked as available for scheduling on a new hart.  While I don't see
+ * any concrete reason we need a full IO fence, it seems safer to just upgrade
+ * this in order to avoid any IO crossing a scheduling boundary.  In both
+ * instances the scheduler pairs this with an mb(), so nothing is necessary on
+ * the new hart.
  */
-#define smp_mb__after_spinlock()	RISCV_FENCE(rw,rw)
+#define smp_mb__after_spinlock()	RISCV_FENCE(iorw,iorw)
 
 #include <asm-generic/barrier.h>
@@
 DECLARE_INSN(c_bnez, MATCH_C_BNEZ, MASK_C_BNEZ)
 DECLARE_INSN(sret, MATCH_SRET, MASK_SRET)
 
-int decode_register_index(unsigned long opcode, int offset)
+static int decode_register_index(unsigned long opcode, int offset)
 {
 	return (opcode >> offset) & 0x1F;
 }
 
-int decode_register_index_short(unsigned long opcode, int offset)
+static int decode_register_index_short(unsigned long opcode, int offset)
 {
 	return ((opcode >> offset) & 0x7) + 8;
 }
 
 /* Calculate the new address for after a step */
-int get_step_address(struct pt_regs *regs, unsigned long *next_addr)
+static int get_step_address(struct pt_regs *regs, unsigned long *next_addr)
 {
 	unsigned long pc = regs->epc;
 	unsigned long *regs_ptr = (unsigned long *)regs;
@@
 	return 0;
 }
 
-int do_single_step(struct pt_regs *regs)
+static int do_single_step(struct pt_regs *regs)
 {
 	/* Determine where the target instruction will send us to */
 	unsigned long addr = 0;
@@
 	return err;
 }
 
-int kgdb_riscv_kgdbbreak(unsigned long addr)
+static int kgdb_riscv_kgdbbreak(unsigned long addr)
 {
 	if (stepped_address == addr)
 		return KGDB_SW_SINGLE_STEP;
arch/riscv/mm/init.c | +47 -23

@@
 #ifdef CONFIG_BLK_DEV_INITRD
 static void __init setup_initrd(void)
 {
+	phys_addr_t start;
 	unsigned long size;
 
-	if (initrd_start >= initrd_end) {
-		pr_info("initrd not found or empty");
-		goto disable;
-	}
-	if (__pa_symbol(initrd_end) > PFN_PHYS(max_low_pfn)) {
-		pr_err("initrd extends beyond end of memory");
+	/* Ignore the virtul address computed during device tree parsing */
+	initrd_start = initrd_end = 0;
+
+	if (!phys_initrd_size)
+		return;
+	/*
+	 * Round the memory region to page boundaries as per free_initrd_mem()
+	 * This allows us to detect whether the pages overlapping the initrd
+	 * are in use, but more importantly, reserves the entire set of pages
+	 * as we don't want these pages allocated for other purposes.
+	 */
+	start = round_down(phys_initrd_start, PAGE_SIZE);
+	size = phys_initrd_size + (phys_initrd_start - start);
+	size = round_up(size, PAGE_SIZE);
+
+	if (!memblock_is_region_memory(start, size)) {
+		pr_err("INITRD: 0x%08llx+0x%08lx is not a memory region",
+		       (u64)start, size);
 		goto disable;
 	}
 
-	size = initrd_end - initrd_start;
-	memblock_reserve(__pa_symbol(initrd_start), size);
+	if (memblock_is_region_reserved(start, size)) {
+		pr_err("INITRD: 0x%08llx+0x%08lx overlaps in-use memory region\n",
+		       (u64)start, size);
+		goto disable;
+	}
+
+	memblock_reserve(start, size);
+	/* Now convert initrd to virtual addresses */
+	initrd_start = (unsigned long)__va(phys_initrd_start);
+	initrd_end = initrd_start + phys_initrd_size;
 	initrd_below_start_ok = 1;
 
 	pr_info("Initial ramdisk at: 0x%p (%lu bytes)\n",
@@
 {
 	struct memblock_region *reg;
 	phys_addr_t mem_size = 0;
+	phys_addr_t total_mem = 0;
+	phys_addr_t mem_start, end = 0;
 	phys_addr_t vmlinux_end = __pa_symbol(&_end);
 	phys_addr_t vmlinux_start = __pa_symbol(&_start);
 
 	/* Find the memory region containing the kernel */
 	for_each_memblock(memory, reg) {
-		phys_addr_t end = reg->base + reg->size;
-
-		if (reg->base <= vmlinux_start && vmlinux_end <= end) {
-			mem_size = min(reg->size, (phys_addr_t)-PAGE_OFFSET);
-
-			/*
-			 * Remove memblock from the end of usable area to the
-			 * end of region
-			 */
-			if (reg->base + mem_size < end)
-				memblock_remove(reg->base + mem_size,
-						end - reg->base - mem_size);
-		}
+		end = reg->base + reg->size;
+		if (!total_mem)
+			mem_start = reg->base;
+		if (reg->base <= vmlinux_start && vmlinux_end <= end)
+			BUG_ON(reg->size == 0);
+		total_mem = total_mem + reg->size;
 	}
-	BUG_ON(mem_size == 0);
+
+	/*
+	 * Remove memblock from the end of usable area to the
+	 * end of region
+	 */
+	mem_size = min(total_mem, (phys_addr_t)-PAGE_OFFSET);
+	if (mem_start + mem_size < end)
+		memblock_remove(mem_start + mem_size,
+				end - mem_start - mem_size);
 
 	/* Reserve from the start of the kernel to the end of the kernel */
 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
 
-	set_max_mapnr(PFN_DOWN(mem_size));
 	max_pfn = PFN_DOWN(memblock_end_of_DRAM());
 	max_low_pfn = max_pfn;
+	set_max_mapnr(max_low_pfn);
 
 #ifdef CONFIG_BLK_DEV_INITRD
 	setup_initrd();
@@
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_WATCH_QUEUE=y
 CONFIG_AUDIT=y
 CONFIG_NO_HZ_IDLE=y
 CONFIG_HIGH_RES_TIMERS=y
@@
 CONFIG_IKCONFIG_PROC=y
 CONFIG_NUMA_BALANCING=y
 CONFIG_MEMCG=y
-CONFIG_MEMCG_SWAP=y
 CONFIG_BLK_CGROUP=y
 CONFIG_CFS_BANDWIDTH=y
 CONFIG_RT_GROUP_SCHED=y
@@
 CONFIG_USER_NS=y
 CONFIG_CHECKPOINT_RESTORE=y
 CONFIG_SCHED_AUTOGROUP=y
-CONFIG_BLK_DEV_INITRD=y
 CONFIG_EXPERT=y
 # CONFIG_SYSFS_SYSCALL is not set
+CONFIG_BPF_LSM=y
 CONFIG_BPF_SYSCALL=y
 CONFIG_USERFAULTFD=y
 # CONFIG_COMPAT_BRK is not set
@@
 CONFIG_VFIO_CCW=m
 CONFIG_VFIO_AP=m
 CONFIG_CRASH_DUMP=y
-CONFIG_HIBERNATION=y
-CONFIG_PM_DEBUG=y
 CONFIG_PROTECTED_VIRTUALIZATION_GUEST=y
 CONFIG_CMM=m
 CONFIG_APPLDATA_BASE=y
 CONFIG_KVM=m
-CONFIG_VHOST_NET=m
-CONFIG_VHOST_VSOCK=m
+CONFIG_S390_UNWIND_SELFTEST=y
 CONFIG_OPROFILE=m
 CONFIG_KPROBES=y
 CONFIG_JUMP_LABEL=y
@@
 CONFIG_BLK_WBT=y
 CONFIG_BLK_CGROUP_IOLATENCY=y
 CONFIG_BLK_CGROUP_IOCOST=y
+CONFIG_BLK_INLINE_ENCRYPTION=y
+CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
 CONFIG_PARTITION_ADVANCED=y
 CONFIG_IBM_PARTITION=y
 CONFIG_BSD_DISKLABEL=y
@@
 CONFIG_CMA_DEBUGFS=y
 CONFIG_MEM_SOFT_DIRTY=y
 CONFIG_ZSWAP=y
-CONFIG_ZBUD=m
 CONFIG_ZSMALLOC=m
 CONFIG_ZSMALLOC_STAT=y
 CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
@@
 CONFIG_NET_IPVTI=m
 CONFIG_INET_AH=m
 CONFIG_INET_ESP=m
+CONFIG_INET_ESPINTCP=y
 CONFIG_INET_IPCOMP=m
 CONFIG_INET_DIAG=m
 CONFIG_INET_UDP_DIAG=m
@@
 CONFIG_IPV6_ROUTER_PREF=y
 CONFIG_INET6_AH=m
 CONFIG_INET6_ESP=m
+CONFIG_INET6_ESPINTCP=y
 CONFIG_INET6_IPCOMP=m
 CONFIG_IPV6_MIP6=m
 CONFIG_IPV6_VTI=m
@@
 CONFIG_IPV6_GRE=m
 CONFIG_IPV6_MULTIPLE_TABLES=y
 CONFIG_IPV6_SUBTREES=y
+CONFIG_IPV6_RPL_LWTUNNEL=y
+CONFIG_MPTCP=y
 CONFIG_NETFILTER=y
+CONFIG_BRIDGE_NETFILTER=m
 CONFIG_NF_CONNTRACK=m
 CONFIG_NF_CONNTRACK_SECMARK=y
 CONFIG_NF_CONNTRACK_EVENTS=y
@@
 CONFIG_L2TP_IP=m
 CONFIG_L2TP_ETH=m
 CONFIG_BRIDGE=m
+CONFIG_BRIDGE_MRP=y
 CONFIG_VLAN_8021Q=m
 CONFIG_VLAN_8021Q_GVRP=y
 CONFIG_NET_SCHED=y
@@
 CONFIG_NET_SCH_FQ_CODEL=m
 CONFIG_NET_SCH_INGRESS=m
 CONFIG_NET_SCH_PLUG=m
+CONFIG_NET_SCH_ETS=m
 CONFIG_NET_CLS_BASIC=m
 CONFIG_NET_CLS_TCINDEX=m
 CONFIG_NET_CLS_ROUTE4=m
@@
 CONFIG_NET_ACT_SIMP=m
 CONFIG_NET_ACT_SKBEDIT=m
 CONFIG_NET_ACT_CSUM=m
+CONFIG_NET_ACT_GATE=m
 CONFIG_DNS_RESOLVER=y
 CONFIG_OPENVSWITCH=m
 CONFIG_VSOCKETS=m
@@
 CONFIG_NET_PKTGEN=m
 # CONFIG_NET_DROP_MONITOR is not set
 CONFIG_PCI=y
+# CONFIG_PCIEASPM is not set
 CONFIG_PCI_DEBUG=y
 CONFIG_HOTPLUG_PCI=y
 CONFIG_HOTPLUG_PCI_S390=y
@@
 CONFIG_DM_MULTIPATH=m
 CONFIG_DM_MULTIPATH_QL=m
 CONFIG_DM_MULTIPATH_ST=m
+CONFIG_DM_MULTIPATH_HST=m
 CONFIG_DM_DELAY=m
 CONFIG_DM_UEVENT=y
 CONFIG_DM_FLAKEY=m
@@
 CONFIG_IFB=m
 CONFIG_MACVLAN=m
 CONFIG_MACVTAP=m
+CONFIG_VXLAN=m
+CONFIG_BAREUDP=m
 CONFIG_TUN=m
 CONFIG_VETH=m
 CONFIG_VIRTIO_NET=m
@@
 CONFIG_MLX4_EN=m
 CONFIG_MLX5_CORE=m
 CONFIG_MLX5_CORE_EN=y
-# CONFIG_MLXFW is not set
 # CONFIG_NET_VENDOR_MICREL is not set
 # CONFIG_NET_VENDOR_MICROCHIP is not set
 # CONFIG_NET_VENDOR_MICROSEMI is not set
@@
 # CONFIG_NET_VENDOR_TI is not set
 # CONFIG_NET_VENDOR_VIA is not set
 # CONFIG_NET_VENDOR_WIZNET is not set
+# CONFIG_NET_VENDOR_XILINX is not set
 CONFIG_PPP=m
 CONFIG_PPP_BSDCOMP=m
 CONFIG_PPP_DEFLATE=m
@@
 CONFIG_VIRTIO_PCI=m
 CONFIG_VIRTIO_BALLOON=m
 CONFIG_VIRTIO_INPUT=y
+CONFIG_VHOST_NET=m
+CONFIG_VHOST_VSOCK=m
 CONFIG_S390_CCW_IOMMU=y
 CONFIG_S390_AP_IOMMU=y
 CONFIG_EXT4_FS=y
@@
 CONFIG_UDF_FS=m
 CONFIG_MSDOS_FS=m
 CONFIG_VFAT_FS=m
+CONFIG_EXFAT_FS=m
 CONFIG_NTFS_FS=m
 CONFIG_NTFS_RW=y
 CONFIG_PROC_KCORE=y
@@
 CONFIG_DLM=m
 CONFIG_UNICODE=y
 CONFIG_PERSISTENT_KEYRINGS=y
-CONFIG_BIG_KEYS=y
 CONFIG_ENCRYPTED_KEYS=m
+CONFIG_KEY_NOTIFICATIONS=y
 CONFIG_SECURITY=y
 CONFIG_SECURITY_NETWORK=y
 CONFIG_FORTIFY_SOURCE=y
@@
 CONFIG_CRYPTO_DH=m
 CONFIG_CRYPTO_ECDH=m
 CONFIG_CRYPTO_ECRDSA=m
+CONFIG_CRYPTO_CURVE25519=m
+CONFIG_CRYPTO_GCM=y
 CONFIG_CRYPTO_CHACHA20POLY1305=m
 CONFIG_CRYPTO_AEGIS128=m
+CONFIG_CRYPTO_SEQIV=y
 CONFIG_CRYPTO_CFB=m
 CONFIG_CRYPTO_LRW=m
 CONFIG_CRYPTO_PCBC=m
@@
 CONFIG_CRYPTO_XCBC=m
 CONFIG_CRYPTO_VMAC=m
 CONFIG_CRYPTO_CRC32=m
+CONFIG_CRYPTO_BLAKE2S=m
 CONFIG_CRYPTO_MICHAEL_MIC=m
 CONFIG_CRYPTO_RMD128=m
 CONFIG_CRYPTO_RMD160=m
@@
 CONFIG_CRYPTO_CAMELLIA=m
 CONFIG_CRYPTO_CAST5=m
 CONFIG_CRYPTO_CAST6=m
+CONFIG_CRYPTO_DES=m
 CONFIG_CRYPTO_FCRYPT=m
 CONFIG_CRYPTO_KHAZAD=m
 CONFIG_CRYPTO_SALSA20=m
@@
 CONFIG_CRYPTO_USER_API_RNG=m
 CONFIG_CRYPTO_USER_API_AEAD=m
 CONFIG_CRYPTO_STATS=y
+CONFIG_CRYPTO_LIB_BLAKE2S=m
+CONFIG_CRYPTO_LIB_CURVE25519=m
+CONFIG_CRYPTO_LIB_CHACHA20POLY1305=m
 CONFIG_ZCRYPT=m
 CONFIG_PKEY=m
 CONFIG_CRYPTO_PAES_S390=m
@@
 CONFIG_PANIC_ON_OOPS=y
 CONFIG_DETECT_HUNG_TASK=y
 CONFIG_WQ_WATCHDOG=y
+CONFIG_TEST_LOCKUP=m
 CONFIG_DEBUG_TIMEKEEPING=y
 CONFIG_PROVE_LOCKING=y
 CONFIG_LOCK_STAT=y
@@
 CONFIG_DEBUG_CREDENTIALS=y
 CONFIG_RCU_TORTURE_TEST=m
 CONFIG_RCU_CPU_STALL_TIMEOUT=300
+# CONFIG_RCU_TRACE is not set811790CONFIG_LATENCYTOP=y791791+CONFIG_BOOTTIME_TRACING=y812792CONFIG_FUNCTION_PROFILER=y813793CONFIG_STACK_TRACER=y814794CONFIG_IRQSOFF_TRACER=y···832808CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y833809CONFIG_LKDTM=m834810CONFIG_TEST_LIST_SORT=y811811+CONFIG_TEST_MIN_HEAP=y835812CONFIG_TEST_SORT=y836813CONFIG_KPROBES_SANITY_TEST=y837814CONFIG_RBTREE_TEST=y838815CONFIG_INTERVAL_TREE_TEST=m839816CONFIG_PERCPU_TEST=m840817CONFIG_ATOMIC64_SELFTEST=y818818+CONFIG_TEST_BITOPS=m841819CONFIG_TEST_BPF=m
+33-10
arch/s390/configs/defconfig
···
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_WATCH_QUEUE=y
 CONFIG_AUDIT=y
 CONFIG_NO_HZ_IDLE=y
 CONFIG_HIGH_RES_TIMERS=y
···
 CONFIG_IKCONFIG_PROC=y
 CONFIG_NUMA_BALANCING=y
 CONFIG_MEMCG=y
-CONFIG_MEMCG_SWAP=y
 CONFIG_BLK_CGROUP=y
 CONFIG_CFS_BANDWIDTH=y
 CONFIG_RT_GROUP_SCHED=y
···
 CONFIG_USER_NS=y
 CONFIG_CHECKPOINT_RESTORE=y
 CONFIG_SCHED_AUTOGROUP=y
-CONFIG_BLK_DEV_INITRD=y
 CONFIG_EXPERT=y
 # CONFIG_SYSFS_SYSCALL is not set
+CONFIG_BPF_LSM=y
 CONFIG_BPF_SYSCALL=y
 CONFIG_USERFAULTFD=y
 # CONFIG_COMPAT_BRK is not set
···
 CONFIG_TUNE_ZEC12=y
 CONFIG_NR_CPUS=512
 CONFIG_NUMA=y
-# CONFIG_NUMA_EMU is not set
 CONFIG_HZ_100=y
 CONFIG_KEXEC_FILE=y
 CONFIG_KEXEC_SIG=y
···
 CONFIG_VFIO_CCW=m
 CONFIG_VFIO_AP=m
 CONFIG_CRASH_DUMP=y
-CONFIG_HIBERNATION=y
-CONFIG_PM_DEBUG=y
 CONFIG_PROTECTED_VIRTUALIZATION_GUEST=y
 CONFIG_CMM=m
 CONFIG_APPLDATA_BASE=y
 CONFIG_KVM=m
-CONFIG_VHOST_NET=m
-CONFIG_VHOST_VSOCK=m
+CONFIG_S390_UNWIND_SELFTEST=m
 CONFIG_OPROFILE=m
 CONFIG_KPROBES=y
 CONFIG_JUMP_LABEL=y
···
 CONFIG_BLK_WBT=y
 CONFIG_BLK_CGROUP_IOLATENCY=y
 CONFIG_BLK_CGROUP_IOCOST=y
+CONFIG_BLK_INLINE_ENCRYPTION=y
+CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
 CONFIG_PARTITION_ADVANCED=y
 CONFIG_IBM_PARTITION=y
 CONFIG_BSD_DISKLABEL=y
···
 CONFIG_FRONTSWAP=y
 CONFIG_MEM_SOFT_DIRTY=y
 CONFIG_ZSWAP=y
-CONFIG_ZBUD=m
 CONFIG_ZSMALLOC=m
 CONFIG_ZSMALLOC_STAT=y
 CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
···
 CONFIG_NET_IPVTI=m
 CONFIG_INET_AH=m
 CONFIG_INET_ESP=m
+CONFIG_INET_ESPINTCP=y
 CONFIG_INET_IPCOMP=m
 CONFIG_INET_DIAG=m
 CONFIG_INET_UDP_DIAG=m
···
 CONFIG_IPV6_ROUTER_PREF=y
 CONFIG_INET6_AH=m
 CONFIG_INET6_ESP=m
+CONFIG_INET6_ESPINTCP=y
 CONFIG_INET6_IPCOMP=m
 CONFIG_IPV6_MIP6=m
 CONFIG_IPV6_VTI=m
···
 CONFIG_IPV6_GRE=m
 CONFIG_IPV6_MULTIPLE_TABLES=y
 CONFIG_IPV6_SUBTREES=y
+CONFIG_IPV6_RPL_LWTUNNEL=y
+CONFIG_MPTCP=y
 CONFIG_NETFILTER=y
+CONFIG_BRIDGE_NETFILTER=m
 CONFIG_NF_CONNTRACK=m
 CONFIG_NF_CONNTRACK_SECMARK=y
 CONFIG_NF_CONNTRACK_EVENTS=y
···
 CONFIG_L2TP_IP=m
 CONFIG_L2TP_ETH=m
 CONFIG_BRIDGE=m
+CONFIG_BRIDGE_MRP=y
 CONFIG_VLAN_8021Q=m
 CONFIG_VLAN_8021Q_GVRP=y
 CONFIG_NET_SCHED=y
···
 CONFIG_NET_SCH_FQ_CODEL=m
 CONFIG_NET_SCH_INGRESS=m
 CONFIG_NET_SCH_PLUG=m
+CONFIG_NET_SCH_ETS=m
 CONFIG_NET_CLS_BASIC=m
 CONFIG_NET_CLS_TCINDEX=m
 CONFIG_NET_CLS_ROUTE4=m
···
 CONFIG_NET_ACT_SIMP=m
 CONFIG_NET_ACT_SKBEDIT=m
 CONFIG_NET_ACT_CSUM=m
+CONFIG_NET_ACT_GATE=m
 CONFIG_DNS_RESOLVER=y
 CONFIG_OPENVSWITCH=m
 CONFIG_VSOCKETS=m
···
 CONFIG_NET_PKTGEN=m
 # CONFIG_NET_DROP_MONITOR is not set
 CONFIG_PCI=y
+# CONFIG_PCIEASPM is not set
 CONFIG_HOTPLUG_PCI=y
 CONFIG_HOTPLUG_PCI_S390=y
 CONFIG_UEVENT_HELPER=y
···
 CONFIG_DM_MULTIPATH=m
 CONFIG_DM_MULTIPATH_QL=m
 CONFIG_DM_MULTIPATH_ST=m
+CONFIG_DM_MULTIPATH_HST=m
 CONFIG_DM_DELAY=m
 CONFIG_DM_UEVENT=y
 CONFIG_DM_FLAKEY=m
···
 CONFIG_IFB=m
 CONFIG_MACVLAN=m
 CONFIG_MACVTAP=m
+CONFIG_VXLAN=m
+CONFIG_BAREUDP=m
 CONFIG_TUN=m
 CONFIG_VETH=m
 CONFIG_VIRTIO_NET=m
···
 CONFIG_MLX4_EN=m
 CONFIG_MLX5_CORE=m
 CONFIG_MLX5_CORE_EN=y
-# CONFIG_MLXFW is not set
 # CONFIG_NET_VENDOR_MICREL is not set
 # CONFIG_NET_VENDOR_MICROCHIP is not set
 # CONFIG_NET_VENDOR_MICROSEMI is not set
···
 # CONFIG_NET_VENDOR_TI is not set
 # CONFIG_NET_VENDOR_VIA is not set
 # CONFIG_NET_VENDOR_WIZNET is not set
+# CONFIG_NET_VENDOR_XILINX is not set
 CONFIG_PPP=m
 CONFIG_PPP_BSDCOMP=m
 CONFIG_PPP_DEFLATE=m
···
 CONFIG_VIRTIO_PCI=m
 CONFIG_VIRTIO_BALLOON=m
 CONFIG_VIRTIO_INPUT=y
+CONFIG_VHOST_NET=m
+CONFIG_VHOST_VSOCK=m
 CONFIG_S390_CCW_IOMMU=y
 CONFIG_S390_AP_IOMMU=y
 CONFIG_EXT4_FS=y
···
 CONFIG_UDF_FS=m
 CONFIG_MSDOS_FS=m
 CONFIG_VFAT_FS=m
+CONFIG_EXFAT_FS=m
 CONFIG_NTFS_FS=m
 CONFIG_NTFS_RW=y
 CONFIG_PROC_KCORE=y
···
 CONFIG_DLM=m
 CONFIG_UNICODE=y
 CONFIG_PERSISTENT_KEYRINGS=y
-CONFIG_BIG_KEYS=y
 CONFIG_ENCRYPTED_KEYS=m
+CONFIG_KEY_NOTIFICATIONS=y
 CONFIG_SECURITY=y
 CONFIG_SECURITY_NETWORK=y
 CONFIG_SECURITY_SELINUX=y
···
 CONFIG_CRYPTO_DH=m
 CONFIG_CRYPTO_ECDH=m
 CONFIG_CRYPTO_ECRDSA=m
+CONFIG_CRYPTO_CURVE25519=m
+CONFIG_CRYPTO_GCM=y
 CONFIG_CRYPTO_CHACHA20POLY1305=m
 CONFIG_CRYPTO_AEGIS128=m
+CONFIG_CRYPTO_SEQIV=y
 CONFIG_CRYPTO_CFB=m
 CONFIG_CRYPTO_LRW=m
 CONFIG_CRYPTO_OFB=m
···
 CONFIG_CRYPTO_XCBC=m
 CONFIG_CRYPTO_VMAC=m
 CONFIG_CRYPTO_CRC32=m
+CONFIG_CRYPTO_BLAKE2S=m
 CONFIG_CRYPTO_MICHAEL_MIC=m
 CONFIG_CRYPTO_RMD128=m
 CONFIG_CRYPTO_RMD160=m
···
 CONFIG_CRYPTO_CAMELLIA=m
 CONFIG_CRYPTO_CAST5=m
 CONFIG_CRYPTO_CAST6=m
+CONFIG_CRYPTO_DES=m
 CONFIG_CRYPTO_FCRYPT=m
 CONFIG_CRYPTO_KHAZAD=m
 CONFIG_CRYPTO_SALSA20=m
···
 CONFIG_CRYPTO_USER_API_RNG=m
 CONFIG_CRYPTO_USER_API_AEAD=m
 CONFIG_CRYPTO_STATS=y
+CONFIG_CRYPTO_LIB_BLAKE2S=m
+CONFIG_CRYPTO_LIB_CURVE25519=m
+CONFIG_CRYPTO_LIB_CHACHA20POLY1305=m
 CONFIG_ZCRYPT=m
 CONFIG_PKEY=m
 CONFIG_CRYPTO_PAES_S390=m
···
 CONFIG_CRYPTO_GHASH_S390=m
 CONFIG_CRYPTO_CRC32_S390=y
 CONFIG_CORDIC=m
+CONFIG_PRIME_NUMBERS=m
 CONFIG_CRC4=m
 CONFIG_CRC7=m
 CONFIG_CRC8=m
···
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_MEMORY_INIT=y
 CONFIG_PANIC_ON_OOPS=y
+CONFIG_TEST_LOCKUP=m
 CONFIG_BUG_ON_DATA_CORRUPTION=y
 CONFIG_RCU_TORTURE_TEST=m
 CONFIG_RCU_CPU_STALL_TIMEOUT=60
 CONFIG_LATENCYTOP=y
+CONFIG_BOOTTIME_TRACING=y
 CONFIG_FUNCTION_PROFILER=y
 CONFIG_STACK_TRACER=y
 CONFIG_SCHED_TRACER=y
+5
arch/s390/configs/zfcpdump_defconfig
···
 # CONFIG_BOUNCE is not set
 CONFIG_NET=y
 # CONFIG_IUCV is not set
+# CONFIG_ETHTOOL_NETLINK is not set
 CONFIG_DEVTMPFS=y
 CONFIG_BLK_DEV_RAM=y
 # CONFIG_BLK_DEV_XPRAM is not set
···
 # CONFIG_MONWRITER is not set
 # CONFIG_S390_VMUR is not set
 # CONFIG_HID is not set
+# CONFIG_VIRTIO_MENU is not set
+# CONFIG_VHOST_MENU is not set
 # CONFIG_IOMMU_SUPPORT is not set
 # CONFIG_DNOTIFY is not set
 # CONFIG_INOTIFY_USER is not set
···
 # CONFIG_MISC_FILESYSTEMS is not set
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_LSM="yama,loadpin,safesetid,integrity"
+# CONFIG_ZLIB_DFLTCC is not set
 CONFIG_PRINTK_TIME=y
+# CONFIG_SYMBOLIC_ERRNAME is not set
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_FS=y
 CONFIG_DEBUG_KERNEL=y
+4-4
arch/s390/include/asm/kvm_host.h
···
 #define KVM_USER_MEM_SLOTS 32
 
 /*
- * These seem to be used for allocating ->chip in the routing table,
- * which we don't use. 4096 is an out-of-thin-air value. If we need
- * to look at ->chip later on, we'll need to revisit this.
+ * These seem to be used for allocating ->chip in the routing table, which we
+ * don't use. 1 is as small as we can get to reduce the needed memory. If we
+ * need to look at ->chip later on, we'll need to revisit this.
  */
 #define KVM_NR_IRQCHIPS 1
-#define KVM_IRQCHIP_NUM_PINS 4096
+#define KVM_IRQCHIP_NUM_PINS 1
 #define KVM_HALT_POLL_NS_DEFAULT 50000
 
 /* s390-specific vcpu->requests bit members */
···
         return err;
 }
 
+static bool is_callchain_event(struct perf_event *event)
+{
+        u64 sample_type = event->attr.sample_type;
+
+        return sample_type & (PERF_SAMPLE_CALLCHAIN | PERF_SAMPLE_REGS_USER |
+                              PERF_SAMPLE_STACK_USER);
+}
+
 static int cpumsf_pmu_event_init(struct perf_event *event)
 {
         int err;
 
         /* No support for taken branch sampling */
-        if (has_branch_stack(event))
+        /* No support for callchain, stacks and registers */
+        if (has_branch_stack(event) || is_callchain_event(event))
                 return -EOPNOTSUPP;
 
         switch (event->attr.type) {
···
         }
         zdev->fh = ccdf->fh;
         zdev->state = ZPCI_FN_STATE_CONFIGURED;
-        zpci_create_device(zdev);
+        ret = zpci_enable_device(zdev);
+        if (ret)
+                break;
+
+        pdev = pci_scan_single_device(zdev->zbus->bus, zdev->devfn);
+        if (!pdev)
+                break;
+
+        pci_bus_add_device(pdev);
+        pci_lock_rescan_remove();
+        pci_bus_add_devices(zdev->zbus->bus);
+        pci_unlock_rescan_remove();
         break;
 case 0x0302: /* Reserved -> Standby */
         if (!zdev) {
+2-2
arch/x86/boot/compressed/Makefile
···
 
 vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
 
-vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
+efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a
 
 # The compressed kernel is built with -fPIC/-fPIE so that a boot loader
 # can place it anywhere in memory and it will still run. However, since
···
 quiet_cmd_check-and-link-vmlinux = LD      $@
       cmd_check-and-link-vmlinux = $(cmd_check_data_rel); $(cmd_ld)
 
-$(obj)/vmlinux: $(vmlinux-objs-y) FORCE
+$(obj)/vmlinux: $(vmlinux-objs-y) $(efi-obj-y) FORCE
         $(call if_changed,check-and-link-vmlinux)
 
 OBJCOPYFLAGS_vmlinux.bin := -R .comment -S
···
 #define CREATE_TRACE_POINTS
 #include <trace/events/syscalls.h>
 
+/* Check that the stack and regs on entry from user mode are sane. */
+static noinstr void check_user_regs(struct pt_regs *regs)
+{
+        if (IS_ENABLED(CONFIG_DEBUG_ENTRY)) {
+                /*
+                 * Make sure that the entry code gave us a sensible EFLAGS
+                 * register.  Native because we want to check the actual CPU
+                 * state, not the interrupt state as imagined by Xen.
+                 */
+                unsigned long flags = native_save_fl();
+                WARN_ON_ONCE(flags & (X86_EFLAGS_AC | X86_EFLAGS_DF |
+                                      X86_EFLAGS_NT));
+
+                /* We think we came from user mode. Make sure pt_regs agrees. */
+                WARN_ON_ONCE(!user_mode(regs));
+
+                /*
+                 * All entries from user mode (except #DF) should be on the
+                 * normal thread stack and should have user pt_regs in the
+                 * correct location.
+                 */
+                WARN_ON_ONCE(!on_thread_stack());
+                WARN_ON_ONCE(regs != task_pt_regs(current));
+        }
+}
+
 #ifdef CONFIG_CONTEXT_TRACKING
 /**
  * enter_from_user_mode - Establish state when coming from user mode
···
         struct thread_info *ti = current_thread_info();
         unsigned long ret = 0;
         u32 work;
-
-        if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
-                BUG_ON(regs != task_pt_regs(current));
 
         work = READ_ONCE(ti->flags);
 
···
 #endif
 }
 
-__visible noinstr void prepare_exit_to_usermode(struct pt_regs *regs)
+static noinstr void prepare_exit_to_usermode(struct pt_regs *regs)
 {
         instrumentation_begin();
         __prepare_exit_to_usermode(regs);
···
 {
         struct thread_info *ti;
 
+        check_user_regs(regs);
+
         enter_from_user_mode();
         instrumentation_begin();
···
 /* Handles int $0x80 */
 __visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
 {
+        check_user_regs(regs);
+
         enter_from_user_mode();
         instrumentation_begin();
···
                 vdso_image_32.sym_int80_landing_pad;
         bool success;
 
+        check_user_regs(regs);
+
         /*
          * SYSENTER loses EIP, and even SYSCALL32 needs us to skip forward
          * so that 'regs->ip -= 2' lands back on an int $0x80 instruction.
···
         (regs->flags & (X86_EFLAGS_RF | X86_EFLAGS_TF | X86_EFLAGS_VM)) == 0;
 #endif
 }
+
+/* Returns 0 to return using IRET or 1 to return using SYSEXIT/SYSRETL. */
+__visible noinstr long do_SYSENTER_32(struct pt_regs *regs)
+{
+        /* SYSENTER loses RSP, but the vDSO saved it in RBP. */
+        regs->sp = regs->bp;
+
+        /* SYSENTER clobbers EFLAGS.IF.  Assume it was set in usermode. */
+        regs->flags |= X86_EFLAGS_IF;
+
+        return do_fast_syscall_32(regs);
+}
 #endif
 
 SYSCALL_DEFINE0(ni_syscall)
···
 bool noinstr idtentry_enter_cond_rcu(struct pt_regs *regs)
 {
         if (user_mode(regs)) {
+                check_user_regs(regs);
                 enter_from_user_mode();
                 return false;
         }
···
  */
 void noinstr idtentry_enter_user(struct pt_regs *regs)
 {
+        check_user_regs(regs);
         enter_from_user_mode();
 }
···
 
         movq    PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 
+        /* Construct struct pt_regs on stack */
+        pushq   $__USER32_DS            /* pt_regs->ss */
+        pushq   $0                      /* pt_regs->sp = 0 (placeholder) */
+
+        /*
+         * Push flags.  This is nasty.  First, interrupts are currently
+         * off, but we need pt_regs->flags to have IF set.  Second, if TF
+         * was set in usermode, it's still set, and we're singlestepping
+         * through this code.  do_SYSENTER_32() will fix up IF.
+         */
+        pushfq                          /* pt_regs->flags (except IF = 0) */
+        pushq   $__USER32_CS            /* pt_regs->cs */
+        pushq   $0                      /* pt_regs->ip = 0 (placeholder) */
+SYM_INNER_LABEL(entry_SYSENTER_compat_after_hwframe, SYM_L_GLOBAL)
+
         /*
          * User tracing code (ptrace or signal handlers) might assume that
          * the saved RAX contains a 32-bit number when we're invoking a 32-bit
···
          */
         movl    %eax, %eax
 
-        /* Construct struct pt_regs on stack */
-        pushq   $__USER32_DS            /* pt_regs->ss */
-        pushq   %rbp                    /* pt_regs->sp (stashed in bp) */
-
-        /*
-         * Push flags.  This is nasty.  First, interrupts are currently
-         * off, but we need pt_regs->flags to have IF set.  Second, even
-         * if TF was set when SYSENTER started, it's clear by now.  We fix
-         * that later using TIF_SINGLESTEP.
-         */
-        pushfq                          /* pt_regs->flags (except IF = 0) */
-        orl     $X86_EFLAGS_IF, (%rsp)  /* Fix saved flags */
-        pushq   $__USER32_CS            /* pt_regs->cs */
-        pushq   $0                      /* pt_regs->ip = 0 (placeholder) */
         pushq   %rax                    /* pt_regs->orig_ax */
         pushq   %rdi                    /* pt_regs->di */
         pushq   %rsi                    /* pt_regs->si */
···
 .Lsysenter_flags_fixed:
 
         movq    %rsp, %rdi
-        call    do_fast_syscall_32
+        call    do_SYSENTER_32
         /* XEN PV guests always use IRET path */
         ALTERNATIVE "testl %eax, %eax; jz swapgs_restore_regs_and_return_to_usermode", \
                     "jmp swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
···
 
 #else /* CONFIG_X86_64 */
 
-/* Maps to a regular IDTENTRY on 32bit for now */
-# define DECLARE_IDTENTRY_IST           DECLARE_IDTENTRY
-# define DEFINE_IDTENTRY_IST            DEFINE_IDTENTRY
-
 /**
  * DECLARE_IDTENTRY_DF - Declare functions for double fault 32bit variant
  * @vector:     Vector number (ignored for C)
···
 #endif  /* !CONFIG_X86_64 */
 
 /* C-Code mapping */
+#define DECLARE_IDTENTRY_NMI            DECLARE_IDTENTRY_RAW
+#define DEFINE_IDTENTRY_NMI             DEFINE_IDTENTRY_RAW
+
+#ifdef CONFIG_X86_64
 #define DECLARE_IDTENTRY_MCE            DECLARE_IDTENTRY_IST
 #define DEFINE_IDTENTRY_MCE             DEFINE_IDTENTRY_IST
 #define DEFINE_IDTENTRY_MCE_USER        DEFINE_IDTENTRY_NOIST
 
-#define DECLARE_IDTENTRY_NMI            DECLARE_IDTENTRY_RAW
-#define DEFINE_IDTENTRY_NMI             DEFINE_IDTENTRY_RAW
-
 #define DECLARE_IDTENTRY_DEBUG          DECLARE_IDTENTRY_IST
 #define DEFINE_IDTENTRY_DEBUG           DEFINE_IDTENTRY_IST
 #define DEFINE_IDTENTRY_DEBUG_USER      DEFINE_IDTENTRY_NOIST
-
-/**
- * DECLARE_IDTENTRY_XEN - Declare functions for XEN redirect IDT entry points
- * @vector:     Vector number (ignored for C)
- * @func:       Function name of the entry point
- *
- * Used for xennmi and xendebug redirections. No DEFINE as this is all ASM
- * indirection magic.
- */
-#define DECLARE_IDTENTRY_XEN(vector, func)                              \
-        asmlinkage void xen_asm_exc_xen##func(void);                    \
-        asmlinkage void asm_exc_xen##func(void)
+#endif
 
 #else /* !__ASSEMBLY__ */
 
···
 # define DECLARE_IDTENTRY_MCE(vector, func)                             \
         DECLARE_IDTENTRY(vector, func)
 
-# define DECLARE_IDTENTRY_DEBUG(vector, func)                           \
-        DECLARE_IDTENTRY(vector, func)
-
 /* No ASM emitted for DF as this goes through a C shim */
 # define DECLARE_IDTENTRY_DF(vector, func)
 
···
 
 /* No ASM code emitted for NMI */
 #define DECLARE_IDTENTRY_NMI(vector, func)
-
-/* XEN NMI and DB wrapper */
-#define DECLARE_IDTENTRY_XEN(vector, func)                              \
-        idtentry vector asm_exc_xen##func exc_##func has_error_code=0
 
 /*
  * ASM code to emit the common vector entry stubs where each stub is
···
         .align 8
 SYM_CODE_START(irq_entries_start)
     vector=FIRST_EXTERNAL_VECTOR
-    pos = .
     .rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
         UNWIND_HINT_IRET_REGS
+0 :
         .byte   0x6a, vector
         jmp     asm_common_interrupt
         nop
         /* Ensure that the above is 8 bytes max */
-        . = pos + 8
-    pos=pos+8
-    vector=vector+1
+        . = 0b + 8
+        vector = vector+1
     .endr
 SYM_CODE_END(irq_entries_start)
 
···
         .align 8
 SYM_CODE_START(spurious_entries_start)
     vector=FIRST_SYSTEM_VECTOR
-    pos = .
     .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR)
         UNWIND_HINT_IRET_REGS
+0 :
         .byte   0x6a, vector
         jmp     asm_spurious_interrupt
         nop
         /* Ensure that the above is 8 bytes max */
-        . = pos + 8
-    pos=pos+8
-    vector=vector+1
+        . = 0b + 8
+        vector = vector+1
     .endr
 SYM_CODE_END(spurious_entries_start)
 #endif
···
 DECLARE_IDTENTRY_RAW_ERRORCODE(X86_TRAP_PF,     exc_page_fault);
 
 #ifdef CONFIG_X86_MCE
+#ifdef CONFIG_X86_64
 DECLARE_IDTENTRY_MCE(X86_TRAP_MC,       exc_machine_check);
+#else
+DECLARE_IDTENTRY_RAW(X86_TRAP_MC,       exc_machine_check);
+#endif
 #endif
 
 /* NMI */
 DECLARE_IDTENTRY_NMI(X86_TRAP_NMI,      exc_nmi);
-DECLARE_IDTENTRY_XEN(X86_TRAP_NMI,      nmi);
+#if defined(CONFIG_XEN_PV) && defined(CONFIG_X86_64)
+DECLARE_IDTENTRY_RAW(X86_TRAP_NMI,      xenpv_exc_nmi);
+#endif
 
 /* #DB */
+#ifdef CONFIG_X86_64
 DECLARE_IDTENTRY_DEBUG(X86_TRAP_DB,     exc_debug);
-DECLARE_IDTENTRY_XEN(X86_TRAP_DB,       debug);
+#else
+DECLARE_IDTENTRY_RAW(X86_TRAP_DB,       exc_debug);
+#endif
+#if defined(CONFIG_XEN_PV) && defined(CONFIG_X86_64)
+DECLARE_IDTENTRY_RAW(X86_TRAP_DB,       xenpv_exc_debug);
+#endif
 
 /* #DF */
 DECLARE_IDTENTRY_DF(X86_TRAP_DF,        exc_double_fault);
···
 
 #if IS_ENABLED(CONFIG_HYPERV)
 DECLARE_IDTENTRY_SYSVEC(HYPERVISOR_CALLBACK_VECTOR,     sysvec_hyperv_callback);
-DECLARE_IDTENTRY_SYSVEC(HYPERVISOR_REENLIGHTENMENT_VECTOR,      sysvec_hyperv_reenlightenment);
-DECLARE_IDTENTRY_SYSVEC(HYPERVISOR_STIMER0_VECTOR,      sysvec_hyperv_stimer0);
+DECLARE_IDTENTRY_SYSVEC(HYPERV_REENLIGHTENMENT_VECTOR,  sysvec_hyperv_reenlightenment);
+DECLARE_IDTENTRY_SYSVEC(HYPERV_STIMER0_VECTOR,  sysvec_hyperv_stimer0);
 #endif
 
 #if IS_ENABLED(CONFIG_ACRN_GUEST)
+16
arch/x86/include/asm/io_bitmap.h
···
 void io_bitmap_share(struct task_struct *tsk);
 void io_bitmap_exit(struct task_struct *tsk);
 
+static inline void native_tss_invalidate_io_bitmap(void)
+{
+        /*
+         * Invalidate the I/O bitmap by moving io_bitmap_base outside the
+         * TSS limit so any subsequent I/O access from user space will
+         * trigger a #GP.
+         *
+         * This is correct even when VMEXIT rewrites the TSS limit
+         * to 0x67 as the only requirement is that the base points
+         * outside the limit.
+         */
+        this_cpu_write(cpu_tss_rw.x86_tss.io_bitmap_base,
+                       IO_BITMAP_OFFSET_INVALID);
+}
+
 void native_tss_update_io_bitmap(void);
 
 #ifdef CONFIG_PARAVIRT_XXL
 #include <asm/paravirt.h>
 #else
 #define tss_update_io_bitmap native_tss_update_io_bitmap
+#define tss_invalidate_io_bitmap native_tss_invalidate_io_bitmap
 #endif
 
 #else
···
         ip->irqdomain = irq_domain_create_linear(fn, hwirqs, cfg->ops,
                                                  (void *)(long)ioapic);
 
-        /* Release fw handle if it was allocated above */
-        if (!cfg->dev)
-                irq_domain_free_fwnode(fn);
-
-        if (!ip->irqdomain)
+        if (!ip->irqdomain) {
+                /* Release fw handle if it was allocated above */
+                if (!cfg->dev)
+                        irq_domain_free_fwnode(fn);
                 return -ENOMEM;
+        }
 
         ip->irqdomain->parent = parent;
+12-6
arch/x86/kernel/apic/msi.c
···
                 msi_default_domain =
                         pci_msi_create_irq_domain(fn, &pci_msi_domain_info,
                                                   parent);
-                irq_domain_free_fwnode(fn);
         }
-        if (!msi_default_domain)
+        if (!msi_default_domain) {
+                irq_domain_free_fwnode(fn);
                 pr_warn("failed to initialize irqdomain for MSI/MSI-x.\n");
-        else
+        } else {
                 msi_default_domain->flags |= IRQ_DOMAIN_MSI_NOMASK_QUIRK;
+        }
 }
 
 #ifdef CONFIG_IRQ_REMAP
···
         if (!fn)
                 return NULL;
         d = pci_msi_create_irq_domain(fn, &pci_msi_ir_domain_info, parent);
-        irq_domain_free_fwnode(fn);
+        if (!d)
+                irq_domain_free_fwnode(fn);
         return d;
 }
 #endif
···
         if (fn) {
                 dmar_domain = msi_create_irq_domain(fn, &dmar_msi_domain_info,
                                                     x86_vector_domain);
-                irq_domain_free_fwnode(fn);
+                if (!dmar_domain)
+                        irq_domain_free_fwnode(fn);
         }
 out:
         mutex_unlock(&dmar_lock);
···
         }
 
         d = msi_create_irq_domain(fn, domain_info, parent);
-        irq_domain_free_fwnode(fn);
+        if (!d) {
+                irq_domain_free_fwnode(fn);
+                kfree(domain_info);
+        }
 return d;
 }
+5-18
arch/x86/kernel/apic/vector.c
···
         trace_vector_activate(irqd->irq, apicd->is_managed,
                               apicd->can_reserve, reserve);
 
-        /* Nothing to do for fixed assigned vectors */
-        if (!apicd->can_reserve && !apicd->is_managed)
-                return 0;
-
         raw_spin_lock_irqsave(&vector_lock, flags);
-        if (reserve || irqd_is_managed_and_shutdown(irqd))
+        if (!apicd->can_reserve && !apicd->is_managed)
+                assign_irq_vector_any_locked(irqd);
+        else if (reserve || irqd_is_managed_and_shutdown(irqd))
                 vector_assign_managed_shutdown(irqd);
         else if (apicd->is_managed)
                 ret = activate_managed(irqd);
···
         x86_vector_domain = irq_domain_create_tree(fn, &x86_vector_domain_ops,
                                                    NULL);
         BUG_ON(x86_vector_domain == NULL);
-        irq_domain_free_fwnode(fn);
         irq_set_default_host(x86_vector_domain);
 
         arch_init_msi_domain(x86_vector_domain);
···
 static int apic_set_affinity(struct irq_data *irqd,
                              const struct cpumask *dest, bool force)
 {
-        struct apic_chip_data *apicd = apic_chip_data(irqd);
         int err;
 
-        /*
-         * Core code can call here for inactive interrupts. For inactive
-         * interrupts which use managed or reservation mode there is no
-         * point in going through the vector assignment right now as the
-         * activation will assign a vector which fits the destination
-         * cpumask. Let the core code store the destination mask and be
-         * done with it.
-         */
-        if (!irqd_is_activated(irqd) &&
-            (apicd->is_managed || apicd->can_reserve))
-                return IRQ_SET_MASK_OK;
+        if (WARN_ON_ONCE(!irqd_is_activated(irqd)))
+                return -EIO;
 
         raw_spin_lock(&vector_lock);
         cpumask_and(vector_searchmask, dest, cpu_online_mask);
+10-1
arch/x86/kernel/cpu/intel.c
···
 static u64 msr_test_ctrl_cache __ro_after_init;
 
 /*
+ * With a name like MSR_TEST_CTL it should go without saying, but don't touch
+ * MSR_TEST_CTL unless the CPU is one of the whitelisted models.  Writing it
+ * on CPUs that do not support SLD can cause fireworks, even when writing '0'.
+ */
+static bool cpu_model_supports_sld __ro_after_init;
+
+/*
  * Processors which have self-snooping capability can handle conflicting
  * memory type across CPUs by snooping its own cache. However, there exists
  * CPU models in which having conflicting memory types still leads to
···
 
 static void split_lock_init(void)
 {
-        split_lock_verify_msr(sld_state != sld_off);
+        if (cpu_model_supports_sld)
+                split_lock_verify_msr(sld_state != sld_off);
 }
 
 static void split_lock_warn(unsigned long ip)
···
                 return;
         }
 
+        cpu_model_supports_sld = true;
         split_lock_setup();
 }
+3-1
arch/x86/kernel/cpu/mce/core.c
···
 
 static __always_inline void exc_machine_check_kernel(struct pt_regs *regs)
 {
+        WARN_ON_ONCE(user_mode(regs));
+
         /*
          * Only required when from kernel mode. See
          * mce_check_crashing_cpu() for details.
···
 }
 #else
 /* 32bit unified entry point */
-DEFINE_IDTENTRY_MCE(exc_machine_check)
+DEFINE_IDTENTRY_RAW(exc_machine_check)
 {
         unsigned long dr7;
+17-10
arch/x86/kernel/dumpstack.c
···
         printk("%s %s%pB\n", log_lvl, reliable ? "" : "? ", (void *)address);
 }
 
+static int copy_code(struct pt_regs *regs, u8 *buf, unsigned long src,
+                     unsigned int nbytes)
+{
+        if (!user_mode(regs))
+                return copy_from_kernel_nofault(buf, (u8 *)src, nbytes);
+
+        /*
+         * Make sure userspace isn't trying to trick us into dumping kernel
+         * memory by pointing the userspace instruction pointer at it.
+         */
+        if (__chk_range_not_ok(src, nbytes, TASK_SIZE_MAX))
+                return -EINVAL;
+
+        return copy_from_user_nmi(buf, (void __user *)src, nbytes);
+}
+
 /*
  * There are a couple of reasons for the 2/3rd prologue, courtesy of Linus:
  *
···
 #define OPCODE_BUFSIZE (PROLOGUE_SIZE + 1 + EPILOGUE_SIZE)
         u8 opcodes[OPCODE_BUFSIZE];
         unsigned long prologue = regs->ip - PROLOGUE_SIZE;
-        bool bad_ip;
 
-        /*
-         * Make sure userspace isn't trying to trick us into dumping kernel
-         * memory by pointing the userspace instruction pointer at it.
-         */
-        bad_ip = user_mode(regs) &&
-                __chk_range_not_ok(prologue, OPCODE_BUFSIZE, TASK_SIZE_MAX);
-
-        if (bad_ip || copy_from_kernel_nofault(opcodes, (u8 *)prologue,
-                                               OPCODE_BUFSIZE)) {
+        if (copy_code(regs, opcodes, prologue, sizeof(opcodes))) {
                 printk("%sCode: Bad RIP value.\n", loglvl);
         } else {
                 printk("%sCode: %" __stringify(PROLOGUE_SIZE) "ph <%02x> %"
+6
arch/x86/kernel/fpu/core.c
···
                 copy_fpregs_to_fpstate(&current->thread.fpu);
         }
         __cpu_invalidate_fpregs_state();
+
+        if (boot_cpu_has(X86_FEATURE_XMM))
+                ldmxcsr(MXCSR_DEFAULT);
+
+        if (boot_cpu_has(X86_FEATURE_FPU))
+                asm volatile ("fninit");
 }
 EXPORT_SYMBOL_GPL(kernel_fpu_begin);
+1-1
arch/x86/kernel/fpu/xstate.c
···
         copy_part(offsetof(struct fxregs_state, st_space), 128,
                   &xsave->i387.st_space, &kbuf, &offset_start, &count);
         if (header.xfeatures & XFEATURE_MASK_SSE)
-                copy_part(xstate_offsets[XFEATURE_MASK_SSE], 256,
+                copy_part(xstate_offsets[XFEATURE_SSE], 256,
                           &xsave->i387.xmm_space, &kbuf, &offset_start, &count);
         /*
          * Fill xsave->i387.sw_reserved value for ptrace frame:
+25-1
arch/x86/kernel/ldt.c
···
 #include <asm/mmu_context.h>
 #include <asm/pgtable_areas.h>
 
+#include <xen/xen.h>
+
 /* This is a multiple of PAGE_SIZE. */
 #define LDT_SLOT_STRIDE (LDT_ENTRIES * LDT_ENTRY_SIZE)
 
···
         return bytecount;
 }
 
+static bool allow_16bit_segments(void)
+{
+        if (!IS_ENABLED(CONFIG_X86_16BIT))
+                return false;
+
+#ifdef CONFIG_XEN_PV
+        /*
+         * Xen PV does not implement ESPFIX64, which means that 16-bit
+         * segments will not work correctly.  Until either Xen PV implements
+         * ESPFIX64 and can signal this fact to the guest or unless someone
+         * provides compelling evidence that allowing broken 16-bit segments
+         * is worthwhile, disallow 16-bit segments under Xen PV.
+         */
+        if (xen_pv_domain()) {
+                pr_info_once("Warning: 16-bit segments do not work correctly in a Xen PV guest\n");
+                return false;
+        }
+#endif
+
+        return true;
+}
+
 static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
 {
         struct mm_struct *mm = current->mm;
···
                 /* The user wants to clear the entry. */
                 memset(&ldt, 0, sizeof(ldt));
         } else {
-                if (!IS_ENABLED(CONFIG_X86_16BIT) && !ldt_info.seg_32bit) {
+                if (!ldt_info.seg_32bit && !allow_16bit_segments()) {
                         error = -EINVAL;
                         goto out;
                 }
···
 }
 
 #ifdef CONFIG_X86_IOPL_IOPERM
-static inline void tss_invalidate_io_bitmap(struct tss_struct *tss)
-{
-        /*
-         * Invalidate the I/O bitmap by moving io_bitmap_base outside the
-         * TSS limit so any subsequent I/O access from user space will
-         * trigger a #GP.
-         *
-         * This is correct even when VMEXIT rewrites the TSS limit
-         * to 0x67 as the only requirement is that the base points
-         * outside the limit.
-         */
-        tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET_INVALID;
-}
-
 static inline void switch_to_bitmap(unsigned long tifp)
 {
         /*
···
          * user mode.
          */
         if (tifp & _TIF_IO_BITMAP)
-                tss_invalidate_io_bitmap(this_cpu_ptr(&cpu_tss_rw));
+                tss_invalidate_io_bitmap();
 }
 
 static void tss_copy_io_bitmap(struct tss_struct *tss, struct io_bitmap *iobm)
···
         u16 *base = &tss->x86_tss.io_bitmap_base;
 
         if (!test_thread_flag(TIF_IO_BITMAP)) {
-                tss_invalidate_io_bitmap(tss);
+                native_tss_invalidate_io_bitmap();
                 return;
         }
-5
arch/x86/kernel/stacktrace.c
···
 	 * or a page fault), which can make frame pointers
 	 * unreliable.
 	 */
-
 	if (IS_ENABLED(CONFIG_FRAME_POINTER))
 		return -EINVAL;
 	}
···
 	/* Check for stack corruption */
 	if (unwind_error(&state))
-		return -EINVAL;
-
-	/* Success path for non-user tasks, i.e. kthreads and idle tasks */
-	if (!(task->flags & (PF_KTHREAD | PF_IDLE)))
 		return -EINVAL;

 	return 0;
+15-1
arch/x86/kernel/traps.c
···

 	do_trap(X86_TRAP_AC, SIGBUS, "alignment check", regs,
 		error_code, BUS_ADRALN, NULL);
+
+	local_irq_disable();
 }

 #ifdef CONFIG_VMAP_STACK
···
 	trace_hardirqs_off_finish();

 	/*
+	 * If something gets miswired and we end up here for a user mode
+	 * #DB, we will malfunction.
+	 */
+	WARN_ON_ONCE(user_mode(regs));
+
+	/*
 	 * Catch SYSENTER with TF set and clear DR_STEP. If this hit a
 	 * watchpoint at the same time then that will still be handled.
 	 */
···
 static __always_inline void exc_debug_user(struct pt_regs *regs,
 					   unsigned long dr6)
 {
+	/*
+	 * If something gets miswired and we end up here for a kernel mode
+	 * #DB, we will malfunction.
+	 */
+	WARN_ON_ONCE(!user_mode(regs));
+
 	idtentry_enter_user(regs);
 	instrumentation_begin();
···
 }
 #else
 /* 32 bit does not have separate entry points. */
-DEFINE_IDTENTRY_DEBUG(exc_debug)
+DEFINE_IDTENTRY_RAW(exc_debug)
 {
 	unsigned long dr6, dr7;
+6-2
arch/x86/kernel/unwind_orc.c
···
 	/*
 	 * Find the orc_entry associated with the text address.
 	 *
-	 * Decrement call return addresses by one so they work for sibling
-	 * calls and calls to noreturn functions.
+	 * For a call frame (as opposed to a signal frame), state->ip points to
+	 * the instruction after the call. That instruction's stack layout
+	 * could be different from the call instruction's layout, for example
+	 * if the call was to a noreturn function. So get the ORC data for the
+	 * call instruction itself.
 	 */
 	orc = orc_find(state->signal ? state->ip : state->ip - 1);
 	if (!orc) {
···
 		state->sp = task->thread.sp;
 		state->bp = READ_ONCE_NOCHECK(frame->bp);
 		state->ip = READ_ONCE_NOCHECK(frame->ret_addr);
+		state->signal = (void *)state->ip == ret_from_fork;
 	}

 	if (get_stack_info((unsigned long *)state->sp, state->task,
···
 	if (is_long_mode(vcpu)) {
 		if (!(cr4 & X86_CR4_PAE))
 			return 1;
+		if ((cr4 ^ old_cr4) & X86_CR4_LA57)
+			return 1;
 	} else if (is_paging(vcpu) && (cr4 & X86_CR4_PAE)
 		   && ((cr4 ^ old_cr4) & pdptr_bits)
 		   && !load_pdptrs(vcpu, vcpu->arch.walk_mmu,
···

 	/* Bits 4:5 are reserved, Should be zero */
 	if (data & 0x30)
+		return 1;
+
+	if (!lapic_in_kernel(vcpu))
 		return 1;

 	vcpu->arch.apf.msr_en_val = data;
+1-1
arch/x86/math-emu/wm_sqrt.S
···

 #ifdef PARANOID
 /* It should be possible to get here only if the arg is ffff....ffff */
-	cmp	$0xffffffff,FPU_fsqrt_arg_1
+	cmpl	$0xffffffff,FPU_fsqrt_arg_1
 	jnz	sqrt_stage_2_error
 #endif /* PARANOID */
···
 }

 #ifdef CONFIG_X86_64
+void noist_exc_debug(struct pt_regs *regs);
+
+DEFINE_IDTENTRY_RAW(xenpv_exc_nmi)
+{
+	/* On Xen PV, NMI doesn't use IST. The C part is the same as native. */
+	exc_nmi(regs);
+}
+
+DEFINE_IDTENTRY_RAW(xenpv_exc_debug)
+{
+	/*
+	 * There's no IST on Xen PV, but we still need to dispatch
+	 * to the correct handler.
+	 */
+	if (user_mode(regs))
+		noist_exc_debug(regs);
+	else
+		exc_debug(regs);
+}
+
 struct trap_array_entry {
 	void (*orig)(void);
 	void (*xen)(void);
···
 	.xen		= xen_asm_##func,		\
 	.ist_okay	= ist_ok }

-#define TRAP_ENTRY_REDIR(func, xenfunc, ist_ok) {	\
+#define TRAP_ENTRY_REDIR(func, ist_ok) {		\
 	.orig		= asm_##func,			\
-	.xen		= xen_asm_##xenfunc,		\
+	.xen		= xen_asm_xenpv_##func,		\
 	.ist_okay	= ist_ok }

 static struct trap_array_entry trap_array[] = {
-	TRAP_ENTRY_REDIR(exc_debug, exc_xendebug,	true  ),
+	TRAP_ENTRY_REDIR(exc_debug,			true  ),
 	TRAP_ENTRY(exc_double_fault,			true  ),
 #ifdef CONFIG_X86_MCE
 	TRAP_ENTRY(exc_machine_check,			true  ),
 #endif
-	TRAP_ENTRY_REDIR(exc_nmi, exc_xennmi,		true  ),
+	TRAP_ENTRY_REDIR(exc_nmi,			true  ),
 	TRAP_ENTRY(exc_int3,				false ),
 	TRAP_ENTRY(exc_overflow,			false ),
 #ifdef CONFIG_IA32_EMULATION
···
 }

 #ifdef CONFIG_X86_IOPL_IOPERM
+static void xen_invalidate_io_bitmap(void)
+{
+	struct physdev_set_iobitmap iobitmap = {
+		.bitmap = 0,
+		.nr_ports = 0,
+	};
+
+	native_tss_invalidate_io_bitmap();
+	HYPERVISOR_physdev_op(PHYSDEVOP_set_iobitmap, &iobitmap);
+}
+
 static void xen_update_io_bitmap(void)
 {
 	struct physdev_set_iobitmap iobitmap;
···
 	.load_sp0 = xen_load_sp0,

 #ifdef CONFIG_X86_IOPL_IOPERM
+	.invalidate_io_bitmap = xen_invalidate_io_bitmap,
 	.update_io_bitmap = xen_update_io_bitmap,
 #endif
 	.io_delay = xen_io_delay,
+18-7
arch/x86/xen/xen-asm_64.S
···
 .endm

 xen_pv_trap asm_exc_divide_error
-xen_pv_trap asm_exc_debug
-xen_pv_trap asm_exc_xendebug
+xen_pv_trap asm_xenpv_exc_debug
 xen_pv_trap asm_exc_int3
-xen_pv_trap asm_exc_xennmi
+xen_pv_trap asm_xenpv_exc_nmi
 xen_pv_trap asm_exc_overflow
 xen_pv_trap asm_exc_bounds
 xen_pv_trap asm_exc_invalid_op
···

 /* 32-bit compat sysenter target */
 SYM_FUNC_START(xen_sysenter_target)
-	mov 0*8(%rsp), %rcx
-	mov 1*8(%rsp), %r11
-	mov 5*8(%rsp), %rsp
-	jmp entry_SYSENTER_compat
+	/*
+	 * NB: Xen is polite and clears TF from EFLAGS for us. This means
+	 * that we don't need to guard against single step exceptions here.
+	 */
+	popq %rcx
+	popq %r11
+
+	/*
+	 * Neither Xen nor the kernel really knows what the old SS and
+	 * CS were. The kernel expects __USER32_DS and __USER32_CS, so
+	 * report those values even though Xen will guess its own values.
+	 */
+	movq $__USER32_DS, 4*8(%rsp)
+	movq $__USER32_CS, 1*8(%rsp)
+
+	jmp entry_SYSENTER_compat_after_hwframe
 SYM_FUNC_END(xen_sysenter_target)

 #else /* !CONFIG_IA32_EMULATION */
+1-1
arch/xtensa/include/asm/checksum.h
···
 __wsum csum_and_copy_from_user(const void __user *src, void *dst,
 			       int len, __wsum sum, int *err_ptr)
 {
-	if (access_ok(dst, len))
+	if (access_ok(src, len))
 		return csum_partial_copy_generic((__force const void *)src, dst,
 						 len, sum, err_ptr, NULL);
 	if (len)
+1-3
arch/xtensa/kernel/perf_event.c
···
 	struct xtensa_pmu_events *ev = this_cpu_ptr(&xtensa_pmu_events);
 	unsigned i;

-	for (i = find_first_bit(ev->used_mask, XCHAL_NUM_PERF_COUNTERS);
-	     i < XCHAL_NUM_PERF_COUNTERS;
-	     i = find_next_bit(ev->used_mask, XCHAL_NUM_PERF_COUNTERS, i + 1)) {
+	for_each_set_bit(i, ev->used_mask, XCHAL_NUM_PERF_COUNTERS) {
 		uint32_t v = get_er(XTENSA_PMU_PMSTAT(i));
 		struct perf_event *event = ev->event[i];
 		struct hw_perf_event *hwc = &event->hw;
···
 }
 EXPORT_SYMBOL(__xtensa_libgcc_window_spill);

-unsigned long __sync_fetch_and_and_4(unsigned long *p, unsigned long v)
+unsigned int __sync_fetch_and_and_4(volatile void *p, unsigned int v)
 {
 	BUG();
 }
 EXPORT_SYMBOL(__sync_fetch_and_and_4);

-unsigned long __sync_fetch_and_or_4(unsigned long *p, unsigned long v)
+unsigned int __sync_fetch_and_or_4(volatile void *p, unsigned int v)
 {
 	BUG();
 }
···
 			       void *priv, bool reserved)
 {
 	/*
-	 * If we find a request that is inflight and the queue matches,
+	 * If we find a request that isn't idle and the queue matches,
 	 * we know the queue is busy. Return false to stop the iteration.
 	 */
-	if (rq->state == MQ_RQ_IN_FLIGHT && rq->q == hctx->queue) {
+	if (blk_mq_request_started(rq) && rq->q == hctx->queue) {
 		bool *busy = priv;

 		*busy = true;
···
 static LIST_HEAD(deferred_sync);
 static unsigned int defer_sync_state_count = 1;
 static unsigned int defer_fw_devlink_count;
+static LIST_HEAD(deferred_fw_devlink);
 static DEFINE_MUTEX(defer_fw_devlink_lock);
 static bool fw_devlink_is_permissive(void);
···
 	 */
 	dev->state_synced = true;

-	if (WARN_ON(!list_empty(&dev->links.defer_sync)))
+	if (WARN_ON(!list_empty(&dev->links.defer_hook)))
 		return;

 	get_device(dev);
-	list_add_tail(&dev->links.defer_sync, list);
+	list_add_tail(&dev->links.defer_hook, list);
 }

 /**
···
 {
 	struct device *dev, *tmp;

-	list_for_each_entry_safe(dev, tmp, list, links.defer_sync) {
-		list_del_init(&dev->links.defer_sync);
+	list_for_each_entry_safe(dev, tmp, list, links.defer_hook) {
+		list_del_init(&dev->links.defer_hook);

 		if (dev != dont_lock_dev)
 			device_lock(dev);
···
 	if (defer_sync_state_count)
 		goto out;

-	list_for_each_entry_safe(dev, tmp, &deferred_sync, links.defer_sync) {
+	list_for_each_entry_safe(dev, tmp, &deferred_sync, links.defer_hook) {
 		/*
 		 * Delete from deferred_sync list before queuing it to
-		 * sync_list because defer_sync is used for both lists.
+		 * sync_list because defer_hook is used for both lists.
 		 */
-		list_del_init(&dev->links.defer_sync);
+		list_del_init(&dev->links.defer_hook);
 		__device_links_queue_sync_state(dev, &sync_list);
 	}
 out:
···

 static void __device_links_supplier_defer_sync(struct device *sup)
 {
-	if (list_empty(&sup->links.defer_sync) && dev_has_sync_state(sup))
-		list_add_tail(&sup->links.defer_sync, &deferred_sync);
+	if (list_empty(&sup->links.defer_hook) && dev_has_sync_state(sup))
+		list_add_tail(&sup->links.defer_hook, &deferred_sync);
 }

 static void device_link_drop_managed(struct device_link *link)
···
 		WRITE_ONCE(link->status, DL_STATE_DORMANT);
 	}

-	list_del_init(&dev->links.defer_sync);
+	list_del_init(&dev->links.defer_hook);
 	__device_links_no_driver(dev);

 	device_links_write_unlock();
···
 		fw_ret = -EAGAIN;
 	} else {
 		fw_ret = -ENODEV;
+		/*
+		 * defer_hook is not used to add device to deferred_sync list
+		 * until device is bound. Since deferred fw devlink also blocks
+		 * probing, same list hook can be used for deferred_fw_devlink.
+		 */
+		list_add_tail(&dev->links.defer_hook, &deferred_fw_devlink);
 	}

 	if (fw_ret == -ENODEV)
···
  */
 void fw_devlink_resume(void)
 {
+	struct device *dev, *tmp;
+	LIST_HEAD(probe_list);
+
 	mutex_lock(&defer_fw_devlink_lock);
 	if (!defer_fw_devlink_count) {
 		WARN(true, "Unmatched fw_devlink pause/resume!");
···
 		goto out;

 	device_link_add_missing_supplier_links();
-	driver_deferred_probe_force_trigger();
+	list_splice_tail_init(&deferred_fw_devlink, &probe_list);
 out:
 	mutex_unlock(&defer_fw_devlink_lock);
+
+	/*
+	 * bus_probe_device() can cause new devices to get added and they'll
+	 * try to grab defer_fw_devlink_lock. So, this needs to be done outside
+	 * the defer_fw_devlink_lock.
+	 */
+	list_for_each_entry_safe(dev, tmp, &probe_list, links.defer_hook) {
+		list_del_init(&dev->links.defer_hook);
+		bus_probe_device(dev);
+	}
 }
 /* Device links support end. */
···
 	INIT_LIST_HEAD(&dev->links.consumers);
 	INIT_LIST_HEAD(&dev->links.suppliers);
 	INIT_LIST_HEAD(&dev->links.needs_suppliers);
-	INIT_LIST_HEAD(&dev->links.defer_sync);
+	INIT_LIST_HEAD(&dev->links.defer_hook);
 	dev->links.status = DL_DEV_NO_DRIVER;
 }
 EXPORT_SYMBOL_GPL(device_initialize);
-5
drivers/base/dd.c
···
 	if (!driver_deferred_probe_enable)
 		return;

-	driver_deferred_probe_force_trigger();
-}
-
-void driver_deferred_probe_force_trigger(void)
-{
 	/*
 	 * A successful probe means that all the devices in the pending list
 	 * should be triggered to be reprobed. Move all the deferred devices
+1-1
drivers/base/property.c
···
 		return next;

 	/* When no more children in primary, continue with secondary */
-	if (!IS_ERR_OR_NULL(fwnode->secondary))
+	if (fwnode && !IS_ERR_OR_NULL(fwnode->secondary))
 		next = fwnode_get_next_child_node(fwnode->secondary, child);

 	return next;
+1-1
drivers/base/regmap/Kconfig
···
 # subsystems should select the appropriate symbols.

 config REGMAP
-	default y if (REGMAP_I2C || REGMAP_SPI || REGMAP_SPMI || REGMAP_W1 || REGMAP_AC97 || REGMAP_MMIO || REGMAP_IRQ || REGMAP_SCCB || REGMAP_I3C)
+	default y if (REGMAP_I2C || REGMAP_SPI || REGMAP_SPMI || REGMAP_W1 || REGMAP_AC97 || REGMAP_MMIO || REGMAP_IRQ || REGMAP_SOUNDWIRE || REGMAP_SCCB || REGMAP_I3C)
 	select IRQ_DOMAIN if REGMAP_IRQ
 	bool
+31-25
drivers/base/regmap/regmap-debugfs.c
···
 {
 	struct regmap *map = container_of(file->private_data,
 					 struct regmap, cache_only);
-	ssize_t result;
-	bool was_enabled, require_sync = false;
+	bool new_val, require_sync = false;
 	int err;
+
+	err = kstrtobool_from_user(user_buf, count, &new_val);
+	/* Ignore malforned data like debugfs_write_file_bool() */
+	if (err)
+		return count;
+
+	err = debugfs_file_get(file->f_path.dentry);
+	if (err)
+		return err;

 	map->lock(map->lock_arg);

-	was_enabled = map->cache_only;
-
-	result = debugfs_write_file_bool(file, user_buf, count, ppos);
-	if (result < 0) {
-		map->unlock(map->lock_arg);
-		return result;
-	}
-
-	if (map->cache_only && !was_enabled) {
+	if (new_val && !map->cache_only) {
 		dev_warn(map->dev, "debugfs cache_only=Y forced\n");
 		add_taint(TAINT_USER, LOCKDEP_STILL_OK);
-	} else if (!map->cache_only && was_enabled) {
+	} else if (!new_val && map->cache_only) {
 		dev_warn(map->dev, "debugfs cache_only=N forced: syncing cache\n");
 		require_sync = true;
 	}
+	map->cache_only = new_val;

 	map->unlock(map->lock_arg);
+	debugfs_file_put(file->f_path.dentry);

 	if (require_sync) {
 		err = regcache_sync(map);
···
 			dev_err(map->dev, "Failed to sync cache %d\n", err);
 	}

-	return result;
+	return count;
 }

 static const struct file_operations regmap_cache_only_fops = {
···
 {
 	struct regmap *map = container_of(file->private_data,
 					 struct regmap, cache_bypass);
-	ssize_t result;
-	bool was_enabled;
+	bool new_val;
+	int err;
+
+	err = kstrtobool_from_user(user_buf, count, &new_val);
+	/* Ignore malforned data like debugfs_write_file_bool() */
+	if (err)
+		return count;
+
+	err = debugfs_file_get(file->f_path.dentry);
+	if (err)
+		return err;

 	map->lock(map->lock_arg);

-	was_enabled = map->cache_bypass;
-
-	result = debugfs_write_file_bool(file, user_buf, count, ppos);
-	if (result < 0)
-		goto out;
-
-	if (map->cache_bypass && !was_enabled) {
+	if (new_val && !map->cache_bypass) {
 		dev_warn(map->dev, "debugfs cache_bypass=Y forced\n");
 		add_taint(TAINT_USER, LOCKDEP_STILL_OK);
-	} else if (!map->cache_bypass && was_enabled) {
+	} else if (!new_val && map->cache_bypass) {
 		dev_warn(map->dev, "debugfs cache_bypass=N forced\n");
 	}
+	map->cache_bypass = new_val;

-out:
 	map->unlock(map->lock_arg);
+	debugfs_file_put(file->f_path.dentry);

-	return result;
+	return count;
 }

 static const struct file_operations regmap_cache_bypass_fops = {
···
 		syss_done = ddata->cfg.syss_mask;

 	if (syss_offset >= 0) {
-		error = readx_poll_timeout(sysc_read_sysstatus, ddata, rstval,
-					   (rstval & ddata->cfg.syss_mask) ==
-					   syss_done,
-					   100, MAX_MODULE_SOFTRESET_WAIT);
+		error = readx_poll_timeout_atomic(sysc_read_sysstatus, ddata,
+				rstval, (rstval & ddata->cfg.syss_mask) ==
+				syss_done, 100, MAX_MODULE_SOFTRESET_WAIT);

 	} else if (ddata->cfg.quirks & SYSC_QUIRK_RESET_STATUS) {
-		error = readx_poll_timeout(sysc_read_sysconfig, ddata, rstval,
-					   !(rstval & sysc_mask),
-					   100, MAX_MODULE_SOFTRESET_WAIT);
+		error = readx_poll_timeout_atomic(sysc_read_sysconfig, ddata,
+				rstval, !(rstval & sysc_mask),
+				100, MAX_MODULE_SOFTRESET_WAIT);
 	}

 	return error;
···

 	ddata = dev_get_drvdata(dev);

-	if (ddata->cfg.quirks & SYSC_QUIRK_LEGACY_IDLE)
+	if (ddata->cfg.quirks &
+	    (SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_NO_IDLE))
 		return 0;

 	return pm_runtime_force_suspend(dev);
···

 	ddata = dev_get_drvdata(dev);

-	if (ddata->cfg.quirks & SYSC_QUIRK_LEGACY_IDLE)
+	if (ddata->cfg.quirks &
+	    (SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_NO_IDLE))
 		return 0;

 	return pm_runtime_force_resume(dev);
···

 	local_irq_save(flags);
 	/* RTC_STATUS BUSY bit may stay active for 1/32768 seconds (~30 usec) */
-	error = readl_poll_timeout(ddata->module_va + 0x44, val,
-				   !(val & BIT(0)), 100, 50);
+	error = readl_poll_timeout_atomic(ddata->module_va + 0x44, val,
+					  !(val & BIT(0)), 100, 50);
 	if (error)
 		dev_warn(ddata->dev, "rtc busy timeout\n");
 	/* Now we have ~15 microseconds to read/write various registers */
···
 		return error;
 }

+/*
+ * Ignore timers tagged with no-reset and no-idle. These are likely in use,
+ * for example by drivers/clocksource/timer-ti-dm-systimer.c. If more checks
+ * are needed, we could also look at the timer register configuration.
+ */
+static int sysc_check_active_timer(struct sysc *ddata)
+{
+	if (ddata->cap->type != TI_SYSC_OMAP2_TIMER &&
+	    ddata->cap->type != TI_SYSC_OMAP4_TIMER)
+		return 0;
+
+	if ((ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT) &&
+	    (ddata->cfg.quirks & SYSC_QUIRK_NO_IDLE))
+		return -EBUSY;
+
+	return 0;
+}
+
 static const struct of_device_id sysc_match_table[] = {
 	{ .compatible = "simple-bus", },
 	{ /* sentinel */ },
···
 	sysc_init_early_quirks(ddata);

 	error = sysc_check_disabled_devices(ddata);
+	if (error)
+		return error;
+
+	error = sysc_check_active_timer(ddata);
 	if (error)
 		return error;
+7-3
drivers/char/mem.c
···
 #ifdef CONFIG_IO_STRICT_DEVMEM
 void revoke_devmem(struct resource *res)
 {
-	struct inode *inode = READ_ONCE(devmem_inode);
+	/* pairs with smp_store_release() in devmem_init_inode() */
+	struct inode *inode = smp_load_acquire(&devmem_inode);

 	/*
 	 * Check that the initialization has completed. Losing the race
···
 		return rc;
 	}

-	/* publish /dev/mem initialized */
-	WRITE_ONCE(devmem_inode, inode);
+	/*
+	 * Publish /dev/mem initialized.
+	 * Pairs with smp_load_acquire() in revoke_devmem().
+	 */
+	smp_store_release(&devmem_inode, inode);

 	return 0;
 }
+1-1
drivers/char/tpm/st33zp24/i2c.c
···

 /*
  * st33zp24_i2c_probe initialize the TPM device
- * @param: client, the i2c_client drescription (TPM I2C description).
+ * @param: client, the i2c_client description (TPM I2C description).
  * @param: id, the i2c_device_id struct.
  * @return: 0 in case of success.
  *	 -1 in other case.
+2-2
drivers/char/tpm/st33zp24/spi.c
···

 /*
  * st33zp24_spi_probe initialize the TPM device
- * @param: dev, the spi_device drescription (TPM SPI description).
+ * @param: dev, the spi_device description (TPM SPI description).
  * @return: 0 in case of success.
  * or a negative value describing the error.
  */
···

 /*
  * st33zp24_spi_remove remove the TPM device
- * @param: client, the spi_device drescription (TPM SPI description).
+ * @param: client, the spi_device description (TPM SPI description).
  * @return: 0 in case of success.
  */
 static int st33zp24_spi_remove(struct spi_device *dev)
+1-1
drivers/char/tpm/st33zp24/st33zp24.c
···

 /*
  * st33zp24_probe initialize the TPM device
- * @param: client, the i2c_client drescription (TPM I2C description).
+ * @param: client, the i2c_client description (TPM I2C description).
  * @param: id, the i2c_device_id struct.
  * @return: 0 in case of success.
  *	 -1 in other case.
+9-10
drivers/char/tpm/tpm-dev-common.c
···
 		goto out;
 	}

-	/* atomic tpm command send and result receive. We only hold the ops
-	 * lock during this period so that the tpm can be unregistered even if
-	 * the char dev is held open.
-	 */
-	if (tpm_try_get_ops(priv->chip)) {
-		ret = -EPIPE;
-		goto out;
-	}
-
 	priv->response_length = 0;
 	priv->response_read = false;
 	*off = 0;
···
 	if (file->f_flags & O_NONBLOCK) {
 		priv->command_enqueued = true;
 		queue_work(tpm_dev_wq, &priv->async_work);
-		tpm_put_ops(priv->chip);
 		mutex_unlock(&priv->buffer_mutex);
 		return size;
+	}
+
+	/* atomic tpm command send and result receive. We only hold the ops
+	 * lock during this period so that the tpm can be unregistered even if
+	 * the char dev is held open.
+	 */
+	if (tpm_try_get_ops(priv->chip)) {
+		ret = -EPIPE;
+		goto out;
 	}

 	ret = tpm_dev_transmit(priv->chip, priv->space, priv->data_buffer,
+7-7
drivers/char/tpm/tpm_ibmvtpm.c
···
 	if (rc)
 		goto init_irq_cleanup;

-	if (!strcmp(id->compat, "IBM,vtpm20")) {
-		chip->flags |= TPM_CHIP_FLAG_TPM2;
-		rc = tpm2_get_cc_attrs_tbl(chip);
-		if (rc)
-			goto init_irq_cleanup;
-	}
-
 	if (!wait_event_timeout(ibmvtpm->crq_queue.wq,
 				ibmvtpm->rtce_buf != NULL,
 				HZ)) {
 		dev_err(dev, "CRQ response timed out\n");
 		goto init_irq_cleanup;
+	}
+
+	if (!strcmp(id->compat, "IBM,vtpm20")) {
+		chip->flags |= TPM_CHIP_FLAG_TPM2;
+		rc = tpm2_get_cc_attrs_tbl(chip);
+		if (rc)
+			goto init_irq_cleanup;
 	}

 	return tpm_chip_register(chip);
+7
drivers/char/tpm/tpm_tis.c
···
 	return tpm_tis_init(&pnp_dev->dev, &tpm_info);
 }

+/*
+ * There is a known bug caused by 93e1b7d42e1e ("[PATCH] tpm: add HID module
+ * parameter"). This commit added IFX0102 device ID, which is also used by
+ * tpm_infineon but ignored to add quirks to probe which driver ought to be
+ * used.
+ */
+
 static struct pnp_device_id tpm_pnp_tbl[] = {
 	{"PNP0C31", 0},		/* TPM */
 	{"ATM1200", 0},		/* Atmel */
+1-1
drivers/char/tpm/tpm_tis_core.c
···

 	return 0;
 out_err:
-	if ((chip->ops != NULL) && (chip->ops->clk_enable != NULL))
+	if (chip->ops->clk_enable != NULL)
 		chip->ops->clk_enable(chip, false);

 	tpm_tis_remove(chip);
+5-5
drivers/char/tpm/tpm_tis_spi_main.c
···

 		if ((phy->iobuf[3] & 0x01) == 0) {
 			// handle SPI wait states
-			phy->iobuf[0] = 0;
-
 			for (i = 0; i < TPM_RETRY; i++) {
 				spi_xfer->len = 1;
 				spi_message_init(&m);
···
 		if (ret < 0)
 			goto exit;

+		/* Flow control transfers are receive only */
+		spi_xfer.tx_buf = NULL;
 		ret = phy->flow_control(phy, &spi_xfer);
 		if (ret < 0)
 			goto exit;
···
 		spi_xfer.delay.value = 5;
 		spi_xfer.delay.unit = SPI_DELAY_UNIT_USECS;

-		if (in) {
-			spi_xfer.tx_buf = NULL;
-		} else if (out) {
+		if (out) {
+			spi_xfer.tx_buf = phy->iobuf;
 			spi_xfer.rx_buf = NULL;
 			memcpy(phy->iobuf, out, transfer_len);
 			out += transfer_len;
···
 		.pm = &tpm_tis_pm,
 		.of_match_table = of_match_ptr(of_tis_spi_match),
 		.acpi_match_table = ACPI_PTR(acpi_tis_spi_match),
+		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
 	},
 	.probe = tpm_tis_spi_driver_probe,
 	.remove = tpm_tis_spi_remove,
···
 config CLK_HSDK
 	bool "PLL Driver for HSDK platform"
 	depends on OF || COMPILE_TEST
+	depends on IOMEM
 	help
 	  This driver supports the HSDK core, system, ddr, tunnel and hdmi PLLs
 	  control.
···
 	idxd_unregister_dma_device(idxd);
 	spin_lock_irqsave(&idxd->dev_lock, flags);
 	rc = idxd_device_disable(idxd);
+	for (i = 0; i < idxd->max_wqs; i++) {
+		struct idxd_wq *wq = &idxd->wqs[i];
+
+		idxd_wq_disable_cleanup(wq);
+	}
 	spin_unlock_irqrestore(&idxd->dev_lock, flags);
 	module_put(THIS_MODULE);
 	if (rc < 0)
+4-7
drivers/dma/imx-sdma.c
···

 	sdma_channel_synchronize(chan);

-	if (sdmac->event_id0 >= 0)
-		sdma_event_disable(sdmac, sdmac->event_id0);
+	sdma_event_disable(sdmac, sdmac->event_id0);
 	if (sdmac->event_id1)
 		sdma_event_disable(sdmac, sdmac->event_id1);
···
 	memcpy(&sdmac->slave_config, dmaengine_cfg, sizeof(*dmaengine_cfg));

 	/* Set ENBLn earlier to make sure dma request triggered after that */
-	if (sdmac->event_id0 >= 0) {
-		if (sdmac->event_id0 >= sdmac->sdma->drvdata->num_events)
-			return -EINVAL;
-		sdma_event_enable(sdmac, sdmac->event_id0);
-	}
+	if (sdmac->event_id0 >= sdmac->sdma->drvdata->num_events)
+		return -EINVAL;
+	sdma_event_enable(sdmac, sdmac->event_id0);

 	if (sdmac->event_id1) {
 		if (sdmac->event_id1 >= sdmac->sdma->drvdata->num_events)
···
 		mcf_chan = &mcf_edma->chans[ch];

 		spin_lock(&mcf_chan->vchan.lock);
+
+		if (!mcf_chan->edesc) {
+			/* terminate_all called before */
+			spin_unlock(&mcf_chan->vchan.lock);
+			continue;
+		}
+
 		if (!mcf_chan->edesc->iscyclic) {
 			list_del(&mcf_chan->edesc->vdesc.node);
 			vchan_cookie_complete(&mcf_chan->edesc->vdesc);
+2
drivers/dma/sh/usb-dmac.c
···
 		desc->residue = usb_dmac_get_current_residue(chan, desc,
 							desc->sg_index - 1);
 		desc->done_cookie = desc->vd.tx.cookie;
+		desc->vd.tx_result.result = DMA_TRANS_NOERROR;
+		desc->vd.tx_result.residue = desc->residue;
 		vchan_cookie_complete(&desc->vd);

 		/* Restart the next transfer if this driver has a next desc */
+4-1
drivers/dma/tegra210-adma.c
···

 	ret = pm_runtime_get_sync(tdc2dev(tdc));
 	if (ret < 0) {
+		pm_runtime_put_noidle(tdc2dev(tdc));
 		free_irq(tdc->irq, tdc);
 		return ret;
 	}
···
 	pm_runtime_enable(&pdev->dev);

 	ret = pm_runtime_get_sync(&pdev->dev);
-	if (ret < 0)
+	if (ret < 0) {
+		pm_runtime_put_noidle(&pdev->dev);
 		goto rpm_disable;
+	}

 	ret = tegra_adma_init(tdma);
 	if (ret)
+1
drivers/dma/ti/k3-udma-private.c
···
 	ud = platform_get_drvdata(pdev);
 	if (!ud) {
 		pr_debug("UDMA has not been probed\n");
+		put_device(&pdev->dev);
 		return ERR_PTR(-EPROBE_DEFER);
 	}
+19-20
drivers/dma/ti/k3-udma.c
···
 			dev_err(ud->ddev.dev,
 				"Descriptor pool allocation failed\n");
 			uc->use_dma_pool = false;
-			return -ENOMEM;
+			ret = -ENOMEM;
+			goto err_cleanup;
 		}
 	}
···

 		ret = udma_get_chan_pair(uc);
 		if (ret)
-			return ret;
+			goto err_cleanup;

 		ret = udma_alloc_tx_resources(uc);
-		if (ret)
-			return ret;
+		if (ret) {
+			udma_put_rchan(uc);
+			goto err_cleanup;
+		}

 		ret = udma_alloc_rx_resources(uc);
 		if (ret) {
 			udma_free_tx_resources(uc);
-			return ret;
+			goto err_cleanup;
 		}

 		uc->config.src_thread = ud->psil_base + uc->tchan->id;
···
 			uc->id);

 		ret = udma_alloc_tx_resources(uc);
-		if (ret) {
-			uc->config.remote_thread_id = -1;
-			return ret;
-		}
+		if (ret)
+			goto err_cleanup;

 		uc->config.src_thread = ud->psil_base + uc->tchan->id;
 		uc->config.dst_thread = uc->config.remote_thread_id;
···
 			uc->id);

 		ret = udma_alloc_rx_resources(uc);
-		if (ret) {
-			uc->config.remote_thread_id = -1;
-			return ret;
-		}
+		if (ret)
+			goto err_cleanup;

 		uc->config.src_thread = uc->config.remote_thread_id;
 		uc->config.dst_thread = (ud->psil_base + uc->rchan->id) |
···
 		/* Can not happen */
 		dev_err(uc->ud->dev, "%s: chan%d invalid direction (%u)\n",
 			__func__, uc->id, uc->config.dir);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto err_cleanup;
+
 	}

 	/* check if the channel configuration was successful */
···

 	if (udma_is_chan_running(uc)) {
 		dev_warn(ud->dev, "chan%d: is running!\n", uc->id);
-		udma_stop(uc);
+		udma_reset_chan(uc, false);
 		if (udma_is_chan_running(uc)) {
 			dev_err(ud->dev, "chan%d: won't stop!\n", uc->id);
 			ret = -EBUSY;
···

 	udma_reset_rings(uc);

-	INIT_DELAYED_WORK_ONSTACK(&uc->tx_drain.work,
-				  udma_check_tx_completion);
 	return 0;

 err_irq_free:
···
 err_res_free:
 	udma_free_tx_resources(uc);
 	udma_free_rx_resources(uc);
-
+err_cleanup:
 	udma_reset_uchan(uc);

 	if (uc->use_dma_pool) {
···
 	}

 	cancel_delayed_work_sync(&uc->tx_drain.work);
-	destroy_delayed_work_on_stack(&uc->tx_drain.work);

 	if (uc->irq_num_ring > 0) {
 		free_irq(uc->irq_num_ring, uc);
···
 		return ret;
 	}

-	ret = of_property_read_u32(navss_node, "ti,udma-atype", &ud->atype);
+	ret = of_property_read_u32(dev->of_node, "ti,udma-atype", &ud->atype);
 	if (!ret && ud->atype > 2) {
 		dev_err(dev, "Invalid atype: %u\n", ud->atype);
 		return -EINVAL;
···
 		tasklet_init(&uc->vc.task, udma_vchan_complete,
 			     (unsigned long)&uc->vc);
 		init_completion(&uc->teardown_completed);
+		INIT_DELAYED_WORK(&uc->tx_drain.work, udma_check_tx_completion);
 	}

 	ret = dma_async_device_register(&ud->ddev);
+1-4
drivers/firmware/efi/efi-pstore.c
···
 }

 static __init int efivars_pstore_init(void)
 {
-	if (!efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES))
-		return 0;
-
-	if (!efivars_kobject())
+	if (!efivars_kobject() || !efivar_supports_writes())
 		return 0;

 	if (efivars_pstore_disable)
+8-4
drivers/firmware/efi/efi.c
···
 static int generic_ops_register(void)
 {
 	generic_ops.get_variable = efi.get_variable;
-	generic_ops.set_variable = efi.set_variable;
-	generic_ops.set_variable_nonblocking = efi.set_variable_nonblocking;
 	generic_ops.get_next_variable = efi.get_next_variable;
 	generic_ops.query_variable_store = efi_query_variable_store;

+	if (efi_rt_services_supported(EFI_RT_SUPPORTED_SET_VARIABLE)) {
+		generic_ops.set_variable = efi.set_variable;
+		generic_ops.set_variable_nonblocking = efi.set_variable_nonblocking;
+	}
 	return efivars_register(&generic_efivars, &generic_ops, efi_kobj);
 }
···
 		return -ENOMEM;
 	}

-	if (efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES)) {
+	if (efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE |
+				      EFI_RT_SUPPORTED_GET_NEXT_VARIABLE_NAME)) {
 		efivar_ssdt_load();
 		error = generic_ops_register();
 		if (error)
···
 err_remove_group:
 	sysfs_remove_group(efi_kobj, &efi_subsys_attr_group);
 err_unregister:
-	if (efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES))
+	if (efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE |
+				      EFI_RT_SUPPORTED_GET_NEXT_VARIABLE_NAME))
 		generic_ops_unregister();
 err_put:
 	kobject_put(efi_kobj);
+1-4
drivers/firmware/efi/efivars.c
···680680 struct kobject *parent_kobj = efivars_kobject();681681 int error = 0;682682683683- if (!efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES))684684- return -ENODEV;685685-686683 /* No efivars has been registered yet */687687- if (!parent_kobj)684684+ if (!parent_kobj || !efivar_supports_writes())688685 return 0;689686690687 printk(KERN_INFO "EFI Variables Facility v%s %s\n", EFIVARS_VERSION,
+1-2
drivers/firmware/efi/libstub/Makefile
···66# enabled, even if doing so doesn't break the build.77#88cflags-$(CONFIG_X86_32) := -march=i38699-cflags-$(CONFIG_X86_64) := -mcmodel=small \1010- $(call cc-option,-maccumulate-outgoing-args)99+cflags-$(CONFIG_X86_64) := -mcmodel=small1110cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ \1211 -fPIC -fno-strict-aliasing -mno-red-zone \1312 -mno-mmx -mno-sse -fshort-wchar \
+1-1
drivers/firmware/efi/libstub/alignedmem.c
···4444 *addr = ALIGN((unsigned long)alloc_addr, align);45454646 if (slack > 0) {4747- int l = (alloc_addr % align) / EFI_PAGE_SIZE;4747+ int l = (alloc_addr & (align - 1)) / EFI_PAGE_SIZE;48484949 if (l) {5050 efi_bs_call(free_pages, alloc_addr, slack - l + 1);
+14-11
drivers/firmware/efi/libstub/arm64-stub.c
···3535}36363737/*3838- * Relocatable kernels can fix up the misalignment with respect to3939- * MIN_KIMG_ALIGN, so they only require a minimum alignment of EFI_KIMG_ALIGN4040- * (which accounts for the alignment of statically allocated objects such as4141- * the swapper stack.)3838+ * Although relocatable kernels can fix up the misalignment with respect to3939+ * MIN_KIMG_ALIGN, the resulting virtual text addresses are subtly out of4040+ * sync with those recorded in the vmlinux when kaslr is disabled but the4141+ * image required relocation anyway. Therefore retain 2M alignment unless4242+ * KASLR is in use.4243 */4343-static const u64 min_kimg_align = IS_ENABLED(CONFIG_RELOCATABLE) ? EFI_KIMG_ALIGN4444- : MIN_KIMG_ALIGN;4444+static u64 min_kimg_align(void)4545+{4646+ return efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;4747+}45484649efi_status_t handle_kernel_image(unsigned long *image_addr,4750 unsigned long *image_size,···77747875 kernel_size = _edata - _text;7976 kernel_memsize = kernel_size + (_end - _edata);8080- *reserve_size = kernel_memsize + TEXT_OFFSET % min_kimg_align;7777+ *reserve_size = kernel_memsize + TEXT_OFFSET % min_kimg_align();81788279 if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && phys_seed != 0) {8380 /*8481 * If KASLR is enabled, and we have some randomness available,8582 * locate the kernel at a randomized offset in physical memory.8683 */8787- status = efi_random_alloc(*reserve_size, min_kimg_align,8484+ status = efi_random_alloc(*reserve_size, min_kimg_align(),8885 reserve_addr, phys_seed);8986 } else {9087 status = EFI_OUT_OF_RESOURCES;9188 }92899390 if (status != EFI_SUCCESS) {9494- if (IS_ALIGNED((u64)_text - TEXT_OFFSET, min_kimg_align)) {9191+ if (IS_ALIGNED((u64)_text - TEXT_OFFSET, min_kimg_align())) {9592 /*9693 * Just execute from wherever we were loaded by the9794 * UEFI PE/COFF loader if the alignment is suitable.···10299 }103100104101 status = efi_allocate_pages_aligned(*reserve_size, reserve_addr,105105- ULONG_MAX, 
min_kimg_align);102102+ ULONG_MAX, min_kimg_align());106103107104 if (status != EFI_SUCCESS) {108105 efi_err("Failed to relocate kernel\n");···111108 }112109 }113110114114- *image_addr = *reserve_addr + TEXT_OFFSET % min_kimg_align;111111+ *image_addr = *reserve_addr + TEXT_OFFSET % min_kimg_align();115112 memcpy((void *)*image_addr, _text, kernel_size);116113117114 return EFI_SUCCESS;
···122122}123123124124/*125125- * This function handles the architcture specific differences between arm and126126- * arm64 regarding where the kernel image must be loaded and any memory that127127- * must be reserved. On failure it is required to free all128128- * all allocations it has made.129129- */130130-efi_status_t handle_kernel_image(unsigned long *image_addr,131131- unsigned long *image_size,132132- unsigned long *reserve_addr,133133- unsigned long *reserve_size,134134- unsigned long dram_base,135135- efi_loaded_image_t *image);136136-137137-asmlinkage void __noreturn efi_enter_kernel(unsigned long entrypoint,138138- unsigned long fdt_addr,139139- unsigned long fdt_size);140140-141141-/*142125 * EFI entry point for the arm/arm64 EFI stubs. This is the entrypoint143126 * that is described in the PE/COFF header. Most of the code is the same144127 * for both archictectures, with the arch-specific code provided in the
+16
drivers/firmware/efi/libstub/efistub.h
···776776 unsigned long *load_size,777777 unsigned long soft_limit,778778 unsigned long hard_limit);779779+/*780780+ * This function handles the architecture specific differences between arm and781781+ * arm64 regarding where the kernel image must be loaded and any memory that782782+ * must be reserved. On failure it is required to free783783+ * all allocations it has made.784784+ */785785+efi_status_t handle_kernel_image(unsigned long *image_addr,786786+ unsigned long *image_size,787787+ unsigned long *reserve_addr,788788+ unsigned long *reserve_size,789789+ unsigned long dram_base,790790+ efi_loaded_image_t *image);791791+792792+asmlinkage void __noreturn efi_enter_kernel(unsigned long entrypoint,793793+ unsigned long fdt_addr,794794+ unsigned long fdt_size);779795780796void efi_handle_post_ebs_state(void);781797
+4-4
drivers/firmware/efi/libstub/x86-stub.c
···8899#include <linux/efi.h>1010#include <linux/pci.h>1111+#include <linux/stddef.h>11121213#include <asm/efi.h>1314#include <asm/e820/types.h>···362361 int options_size = 0;363362 efi_status_t status;364363 char *cmdline_ptr;365365- unsigned long ramdisk_addr;366366- unsigned long ramdisk_size;367364368365 efi_system_table = sys_table_arg;369366···389390390391 hdr = &boot_params->hdr;391392392392- /* Copy the second sector to boot_params */393393- memcpy(&hdr->jump, image_base + 512, 512);393393+ /* Copy the setup header from the second sector to boot_params */394394+ memcpy(&hdr->jump, image_base + 512,395395+ sizeof(struct setup_header) - offsetof(struct setup_header, jump));394396395397 /*396398 * Fill out some of the header fields ourselves because the
···157157158158 cpu_groups = kcalloc(nb_available_cpus, sizeof(cpu_groups),159159 GFP_KERNEL);160160- if (!cpu_groups)160160+ if (!cpu_groups) {161161+ free_cpumask_var(tmp);161162 return -ENOMEM;163163+ }162164163165 cpumask_copy(tmp, cpu_online_mask);164166···169167 topology_core_cpumask(cpumask_any(tmp));170168171169 if (!alloc_cpumask_var(&cpu_groups[num_groups], GFP_KERNEL)) {170170+ free_cpumask_var(tmp);172171 free_cpu_groups(num_groups, &cpu_groups);173172 return -ENOMEM;174173 }···199196 if (!page_buf)200197 goto out_free_cpu_groups;201198202202- err = 0;203199 /*204200 * Of course the last CPU cannot be powered down and cpu_down() should205201 * refuse doing that.206202 */207203 pr_info("Trying to turn off and on again all CPUs\n");208208- err += down_and_up_cpus(cpu_online_mask, offlined_cpus);204204+ err = down_and_up_cpus(cpu_online_mask, offlined_cpus);209205210206 /*211207 * Take down CPUs by cpu group this time. When the last CPU is turned
+2-1
drivers/fpga/dfl-afu-main.c
···8383 * on this port and minimum soft reset pulse width has elapsed.8484 * Driver polls port_soft_reset_ack to determine if reset done by HW.8585 */8686- if (readq_poll_timeout(base + PORT_HDR_CTRL, v, v & PORT_CTRL_SFTRST,8686+ if (readq_poll_timeout(base + PORT_HDR_CTRL, v,8787+ v & PORT_CTRL_SFTRST_ACK,8788 RST_POLL_INVL, RST_POLL_TIMEOUT)) {8889 dev_err(&pdev->dev, "timeout, fail to reset device\n");8990 return -ETIMEDOUT;
+2-1
drivers/fpga/dfl-pci.c
···227227{228228 struct cci_drvdata *drvdata = pci_get_drvdata(pcidev);229229 struct dfl_fpga_cdev *cdev = drvdata->cdev;230230- int ret = 0;231230232231 if (!num_vfs) {233232 /*···238239 dfl_fpga_cdev_config_ports_pf(cdev);239240240241 } else {242242+ int ret;243243+241244 /*242245 * before enable SRIOV, put released ports into VF access mode243246 * first of all.
+6-1
drivers/gpio/gpio-arizona.c
···6464 ret = pm_runtime_get_sync(chip->parent);6565 if (ret < 0) {6666 dev_err(chip->parent, "Failed to resume: %d\n", ret);6767+ pm_runtime_put_autosuspend(chip->parent);6768 return ret;6869 }6970···7372 if (ret < 0) {7473 dev_err(chip->parent, "Failed to drop cache: %d\n",7574 ret);7575+ pm_runtime_put_autosuspend(chip->parent);7676 return ret;7777 }78787979 ret = regmap_read(arizona->regmap, reg, &val);8080- if (ret < 0)8080+ if (ret < 0) {8181+ pm_runtime_put_autosuspend(chip->parent);8182 return ret;8383+ }82848385 pm_runtime_mark_last_busy(chip->parent);8486 pm_runtime_put_autosuspend(chip->parent);···110106 ret = pm_runtime_get_sync(chip->parent);111107 if (ret < 0) {112108 dev_err(chip->parent, "Failed to resume: %d\n", ret);109109+ pm_runtime_put(chip->parent);113110 return ret;114111 }115112 }
+94-5
drivers/gpio/gpio-pca953x.c
···107107};108108MODULE_DEVICE_TABLE(i2c, pca953x_id);109109110110+#ifdef CONFIG_GPIO_PCA953X_IRQ111111+112112+#include <linux/dmi.h>113113+#include <linux/gpio.h>114114+#include <linux/list.h>115115+116116+static const struct dmi_system_id pca953x_dmi_acpi_irq_info[] = {117117+ {118118+ /*119119+ * On Intel Galileo Gen 2 board the IRQ pin of one of120120+ * the I²C GPIO expanders, which has GpioInt() resource,121121+ * is provided as an absolute number instead of being122122+ * relative. Since first controller (gpio-sch.c) and123123+ * second (gpio-dwapb.c) are at the fixed bases, we may124124+ * safely refer to the number in the global space to get125125+ * an IRQ out of it.126126+ */127127+ .matches = {128128+ DMI_EXACT_MATCH(DMI_BOARD_NAME, "GalileoGen2"),129129+ },130130+ },131131+ {}132132+};133133+134134+#ifdef CONFIG_ACPI135135+static int pca953x_acpi_get_pin(struct acpi_resource *ares, void *data)136136+{137137+ struct acpi_resource_gpio *agpio;138138+ int *pin = data;139139+140140+ if (acpi_gpio_get_irq_resource(ares, &agpio))141141+ *pin = agpio->pin_table[0];142142+ return 1;143143+}144144+145145+static int pca953x_acpi_find_pin(struct device *dev)146146+{147147+ struct acpi_device *adev = ACPI_COMPANION(dev);148148+ int pin = -ENOENT, ret;149149+ LIST_HEAD(r);150150+151151+ ret = acpi_dev_get_resources(adev, &r, pca953x_acpi_get_pin, &pin);152152+ acpi_dev_free_resource_list(&r);153153+ if (ret < 0)154154+ return ret;155155+156156+ return pin;157157+}158158+#else159159+static inline int pca953x_acpi_find_pin(struct device *dev) { return -ENXIO; }160160+#endif161161+162162+static int pca953x_acpi_get_irq(struct device *dev)163163+{164164+ int pin, ret;165165+166166+ pin = pca953x_acpi_find_pin(dev);167167+ if (pin < 0)168168+ return pin;169169+170170+ dev_info(dev, "Applying ACPI interrupt quirk (GPIO %d)\n", pin);171171+172172+ if (!gpio_is_valid(pin))173173+ return -EINVAL;174174+175175+ ret = gpio_request(pin, "pca953x interrupt");176176+ if 
(ret)177177+ return ret;178178+179179+ ret = gpio_to_irq(pin);180180+181181+ /* When pin is used as an IRQ, no need to keep it requested */182182+ gpio_free(pin);183183+184184+ return ret;185185+}186186+#endif187187+110188static const struct acpi_device_id pca953x_acpi_ids[] = {111189 { "INT3491", 16 | PCA953X_TYPE | PCA_LATCH_INT, },112190 { }···400322 .writeable_reg = pca953x_writeable_register,401323 .volatile_reg = pca953x_volatile_register,402324325325+ .disable_locking = true,403326 .cache_type = REGCACHE_RBTREE,404327 .max_register = 0x7f,405328};···702623 DECLARE_BITMAP(reg_direction, MAX_LINE);703624 int level;704625705705- pca953x_read_regs(chip, chip->regs->direction, reg_direction);706706-707626 if (chip->driver_data & PCA_PCAL) {708627 /* Enable latch on interrupt-enabled inputs */709628 pca953x_write_regs(chip, PCAL953X_IN_LATCH, chip->irq_mask);···712635 pca953x_write_regs(chip, PCAL953X_INT_MASK, irq_mask);713636 }714637638638+ /* Switch direction to input if needed */639639+ pca953x_read_regs(chip, chip->regs->direction, reg_direction);640640+715641 bitmap_or(irq_mask, chip->irq_trig_fall, chip->irq_trig_raise, gc->ngpio);642642+ bitmap_complement(reg_direction, reg_direction, gc->ngpio);716643 bitmap_and(irq_mask, irq_mask, reg_direction, gc->ngpio);717644718645 /* Look for any newly setup interrupt */···815734 struct gpio_chip *gc = &chip->gpio_chip;816735 DECLARE_BITMAP(pending, MAX_LINE);817736 int level;737737+ bool ret;818738819819- if (!pca953x_irq_pending(chip, pending))820820- return IRQ_NONE;739739+ mutex_lock(&chip->i2c_lock);740740+ ret = pca953x_irq_pending(chip, pending);741741+ mutex_unlock(&chip->i2c_lock);821742822743 for_each_set_bit(level, pending, gc->ngpio)823744 handle_nested_irq(irq_find_mapping(gc->irq.domain, level));824745825825- return IRQ_HANDLED;746746+ return IRQ_RETVAL(ret);826747}827748828749static int pca953x_irq_setup(struct pca953x_chip *chip, int irq_base)···834751 DECLARE_BITMAP(reg_direction, MAX_LINE);835752 
DECLARE_BITMAP(irq_stat, MAX_LINE);836753 int ret;754754+755755+ if (dmi_first_match(pca953x_dmi_acpi_irq_info)) {756756+ ret = pca953x_acpi_get_irq(&client->dev);757757+ if (ret > 0)758758+ client->irq = ret;759759+ }837760838761 if (!client->irq)839762 return 0;
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
···204204 (mode_info->atom_context->bios + data_offset);205205 switch (crev) {206206 case 11:207207+ case 12:207208 mem_channel_number = igp_info->v11.umachannelnumber;208209 /* channel width is 64 */209210 if (vram_width)
···974974 /* Update the actual used number of crtc */975975 adev->mode_info.num_crtc = adev->dm.display_indexes_num;976976977977+ /* create fake encoders for MST */978978+ dm_dp_create_fake_mst_encoders(adev);979979+977980 /* TODO: Add_display_info? */978981979982 /* TODO use dynamic cursor width */···10009971001998static void amdgpu_dm_fini(struct amdgpu_device *adev)1002999{10001000+ int i;10011001+10021002+ for (i = 0; i < adev->dm.display_indexes_num; i++) {10031003+ drm_encoder_cleanup(&adev->dm.mst_encoders[i].base);10041004+ }10051005+10031006 amdgpu_dm_audio_fini(adev);1004100710051008 amdgpu_dm_destroy_drm_device(&adev->dm);···13671358 struct dmcu *dmcu = NULL;13681359 bool ret;1369136013701370- if (!adev->dm.fw_dmcu)13611361+ if (!adev->dm.fw_dmcu && !adev->dm.dmub_fw)13711362 return detect_mst_link_for_all_connectors(adev->ddev);1372136313731364 dmcu = adev->dm.dc->res_pool->dmcu;···20192010 struct amdgpu_display_manager *dm;20202011 struct drm_connector *conn_base;20212012 struct amdgpu_device *adev;20132013+ struct dc_link *link = NULL;20222014 static const u8 pre_computed_values[] = {20232015 50, 51, 52, 53, 55, 56, 57, 58, 59, 61, 62, 63, 65, 66, 68, 69,20242016 71, 72, 74, 75, 77, 79, 81, 82, 84, 86, 88, 90, 92, 94, 96, 98};2025201720262018 if (!aconnector || !aconnector->dc_link)20192019+ return;20202020+20212021+ link = aconnector->dc_link;20222022+ if (link->connector_signal != SIGNAL_TYPE_EDP)20272023 return;2028202420292025 conn_base = &aconnector->base;
+10-1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
···4343 */44444545#define AMDGPU_DM_MAX_DISPLAY_INDEX 314646+4747+#define AMDGPU_DM_MAX_CRTC 64848+4649/*4750#include "include/amdgpu_dal_power_if.h"4851#include "amdgpu_dm_irq.h"···331328 * available in FW332329 */333330 const struct gpu_info_soc_bounding_box_v1_0 *soc_bounding_box;331331+332332+ /**333333+ * @mst_encoders:334334+ *335335+ * fake encoders used for DP MST.336336+ */337337+ struct amdgpu_encoder mst_encoders[AMDGPU_DM_MAX_CRTC];334338};335339336340struct amdgpu_dm_connector {···366356 struct amdgpu_dm_dp_aux dm_dp_aux;367357 struct drm_dp_mst_port *port;368358 struct amdgpu_dm_connector *mst_port;369369- struct amdgpu_encoder *mst_encoder;370359 struct drm_dp_aux *dsc_aux;371360372361 /* TODO see if we can merge with ddc_bus or make a dm_connector */
···269269 goto unlock;270270271271 ret = pm_runtime_get_sync(mic->dev);272272- if (ret < 0)272272+ if (ret < 0) {273273+ pm_runtime_put_noidle(mic->dev);273274 goto unlock;275275+ }274276275277 mic_set_path(mic, 1);276278
+3-2
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
···307307 /* reset all the states of crtc/plane/encoder/connector */308308 drm_mode_config_reset(dev);309309310310- drm_fbdev_generic_setup(dev, dev->mode_config.preferred_depth);311311-312310 return 0;313311314312err:···353355 ret);354356 goto err_unload;355357 }358358+359359+ drm_fbdev_generic_setup(dev, dev->mode_config.preferred_depth);360360+356361 return 0;357362358363err_unload:
+11
drivers/gpu/drm/i915/display/intel_display.c
···38223822 return true;38233823}3824382438253825+unsigned int38263826+intel_plane_fence_y_offset(const struct intel_plane_state *plane_state)38273827+{38283828+ int x = 0, y = 0;38293829+38303830+ intel_plane_adjust_aligned_offset(&x, &y, plane_state, 0,38313831+ plane_state->color_plane[0].offset, 0);38323832+38333833+ return y;38343834+}38353835+38253836static int skl_check_main_surface(struct intel_plane_state *plane_state)38263837{38273838 struct drm_i915_private *dev_priv = to_i915(plane_state->uapi.plane->dev);
+1
drivers/gpu/drm/i915/display/intel_display.h
···608608 u32 pixel_format, u64 modifier,609609 unsigned int rotation);610610int bdw_get_pipemisc_bpp(struct intel_crtc *crtc);611611+unsigned int intel_plane_fence_y_offset(const struct intel_plane_state *plane_state);611612612613struct intel_display_error_state *613614intel_display_capture_error_state(struct drm_i915_private *dev_priv);
+36-29
drivers/gpu/drm/i915/display/intel_fbc.c
···4848#include "intel_frontbuffer.h"49495050/*5151- * In some platforms where the CRTC's x:0/y:0 coordinates doesn't match the5252- * frontbuffer's x:0/y:0 coordinates we lie to the hardware about the plane's5353- * origin so the x and y offsets can actually fit the registers. As a5454- * consequence, the fence doesn't really start exactly at the display plane5555- * address we program because it starts at the real start of the buffer, so we5656- * have to take this into consideration here.5757- */5858-static unsigned int get_crtc_fence_y_offset(struct intel_fbc *fbc)5959-{6060- return fbc->state_cache.plane.y - fbc->state_cache.plane.adjusted_y;6161-}6262-6363-/*6451 * For SKL+, the plane source size used by the hardware is based on the value we6552 * write to the PLANE_SIZE register. For BDW-, the hardware looks at the value6653 * we wrote to PIPESRC.···128141 fbc_ctl2 |= FBC_CTL_CPU_FENCE;129142 intel_de_write(dev_priv, FBC_CONTROL2, fbc_ctl2);130143 intel_de_write(dev_priv, FBC_FENCE_OFF,131131- params->crtc.fence_y_offset);144144+ params->fence_y_offset);132145 }133146134147 /* enable it... */···162175 if (params->fence_id >= 0) {163176 dpfc_ctl |= DPFC_CTL_FENCE_EN | params->fence_id;164177 intel_de_write(dev_priv, DPFC_FENCE_YOFF,165165- params->crtc.fence_y_offset);178178+ params->fence_y_offset);166179 } else {167180 intel_de_write(dev_priv, DPFC_FENCE_YOFF, 0);168181 }···230243 intel_de_write(dev_priv, SNB_DPFC_CTL_SA,231244 SNB_CPU_FENCE_ENABLE | params->fence_id);232245 intel_de_write(dev_priv, DPFC_CPU_FENCE_OFFSET,233233- params->crtc.fence_y_offset);246246+ params->fence_y_offset);234247 }235248 } else {236249 if (IS_GEN(dev_priv, 6)) {···240253 }241254242255 intel_de_write(dev_priv, ILK_DPFC_FENCE_YOFF,243243- params->crtc.fence_y_offset);256256+ params->fence_y_offset);244257 /* enable it... 
*/245258 intel_de_write(dev_priv, ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);246259···307320 intel_de_write(dev_priv, SNB_DPFC_CTL_SA,308321 SNB_CPU_FENCE_ENABLE | params->fence_id);309322 intel_de_write(dev_priv, DPFC_CPU_FENCE_OFFSET,310310- params->crtc.fence_y_offset);323323+ params->fence_y_offset);311324 } else if (dev_priv->ggtt.num_fences) {312325 intel_de_write(dev_priv, SNB_DPFC_CTL_SA, 0);313326 intel_de_write(dev_priv, DPFC_CPU_FENCE_OFFSET, 0);···618631/*619632 * For some reason, the hardware tracking starts looking at whatever we620633 * programmed as the display plane base address register. It does not look at621621- * the X and Y offset registers. That's why we look at the crtc->adjusted{x,y}622622- * variables instead of just looking at the pipe/plane size.634634+ * the X and Y offset registers. That's why we include the src x/y offsets635635+ * instead of just looking at the plane size.623636 */624637static bool intel_fbc_hw_tracking_covers_screen(struct intel_crtc *crtc)625638{···692705 cache->plane.src_h = drm_rect_height(&plane_state->uapi.src) >> 16;693706 cache->plane.adjusted_x = plane_state->color_plane[0].x;694707 cache->plane.adjusted_y = plane_state->color_plane[0].y;695695- cache->plane.y = plane_state->uapi.src.y1 >> 16;696708697709 cache->plane.pixel_blend_mode = plane_state->hw.pixel_blend_mode;698710699711 cache->fb.format = fb->format;700712 cache->fb.stride = fb->pitches[0];701713 cache->fb.modifier = fb->modifier;714714+715715+ cache->fence_y_offset = intel_plane_fence_y_offset(plane_state);702716703717 drm_WARN_ON(&dev_priv->drm, plane_state->flags & PLANE_HAS_FENCE &&704718 !plane_state->vma->fence);···717729718730 return intel_fbc_calculate_cfb_size(dev_priv, &fbc->state_cache) >719731 fbc->compressed_fb.size * fbc->threshold;732732+}733733+734734+static u16 intel_fbc_gen9_wa_cfb_stride(struct drm_i915_private *dev_priv)735735+{736736+ struct intel_fbc *fbc = &dev_priv->fbc;737737+ struct intel_fbc_state_cache *cache = 
&fbc->state_cache;738738+739739+ if ((IS_GEN9_BC(dev_priv) || IS_BROXTON(dev_priv)) &&740740+ cache->fb.modifier != I915_FORMAT_MOD_X_TILED)741741+ return DIV_ROUND_UP(cache->plane.src_w, 32 * fbc->threshold) * 8;742742+ else743743+ return 0;744744+}745745+746746+static bool intel_fbc_gen9_wa_cfb_stride_changed(struct drm_i915_private *dev_priv)747747+{748748+ struct intel_fbc *fbc = &dev_priv->fbc;749749+750750+ return fbc->params.gen9_wa_cfb_stride != intel_fbc_gen9_wa_cfb_stride(dev_priv);720751}721752722753static bool intel_fbc_can_enable(struct drm_i915_private *dev_priv)···890883 memset(params, 0, sizeof(*params));891884892885 params->fence_id = cache->fence_id;886886+ params->fence_y_offset = cache->fence_y_offset;893887894888 params->crtc.pipe = crtc->pipe;895889 params->crtc.i9xx_plane = to_intel_plane(crtc->base.primary)->i9xx_plane;896896- params->crtc.fence_y_offset = get_crtc_fence_y_offset(fbc);897890898891 params->fb.format = cache->fb.format;892892+ params->fb.modifier = cache->fb.modifier;899893 params->fb.stride = cache->fb.stride;900894901895 params->cfb_size = intel_fbc_calculate_cfb_size(dev_priv, cache);···924916 return false;925917926918 if (params->fb.format != cache->fb.format)919919+ return false;920920+921921+ if (params->fb.modifier != cache->fb.modifier)927922 return false;928923929924 if (params->fb.stride != cache->fb.stride)···1208119712091198 if (fbc->crtc) {12101199 if (fbc->crtc != crtc ||12111211- !intel_fbc_cfb_size_changed(dev_priv))12001200+ (!intel_fbc_cfb_size_changed(dev_priv) &&12011201+ !intel_fbc_gen9_wa_cfb_stride_changed(dev_priv)))12121202 goto out;1213120312141204 __intel_fbc_disable(dev_priv);···12311219 goto out;12321220 }1233122112341234- if ((IS_GEN9_BC(dev_priv) || IS_BROXTON(dev_priv)) &&12351235- plane_state->hw.fb->modifier != I915_FORMAT_MOD_X_TILED)12361236- cache->gen9_wa_cfb_stride =12371237- DIV_ROUND_UP(cache->plane.src_w, 32 * fbc->threshold) * 8;12381238- else12391239- cache->gen9_wa_cfb_stride = 
0;12221222+ cache->gen9_wa_cfb_stride = intel_fbc_gen9_wa_cfb_stride(dev_priv);1240122312411224 drm_dbg_kms(&dev_priv->drm, "Enabling FBC on pipe %c\n",12421225 pipe_name(crtc->pipe));
···53965396 * typically be the first we inspect for submission.53975397 */53985398 swp = prandom_u32_max(ve->num_siblings);53995399- if (!swp)54005400- return;54015401-54025402- swap(ve->siblings[swp], ve->siblings[0]);54035403- if (!intel_engine_has_relative_mmio(ve->siblings[0]))54045404- virtual_update_register_offsets(ve->context.lrc_reg_state,54055405- ve->siblings[0]);53995399+ if (swp)54005400+ swap(ve->siblings[swp], ve->siblings[0]);54065401}5407540254085403static int virtual_context_alloc(struct intel_context *ce)···54105415static int virtual_context_pin(struct intel_context *ce)54115416{54125417 struct virtual_engine *ve = container_of(ce, typeof(*ve), context);54135413- int err;5414541854155419 /* Note: we must use a real engine class for setting up reg state */54165416- err = __execlists_context_pin(ce, ve->siblings[0]);54175417- if (err)54185418- return err;54195419-54205420- virtual_engine_initial_hint(ve);54215421- return 0;54205420+ return __execlists_context_pin(ce, ve->siblings[0]);54225421}5423542254245423static void virtual_context_enter(struct intel_context *ce)···56775688 intel_engine_init_active(&ve->base, ENGINE_VIRTUAL);56785689 intel_engine_init_breadcrumbs(&ve->base);56795690 intel_engine_init_execlists(&ve->base);56915691+ ve->base.breadcrumbs.irq_armed = true; /* fake HW, used for irq_work */5680569256815693 ve->base.cops = &virtual_context_ops;56825694 ve->base.request_alloc = execlists_request_alloc;···5759576957605770 ve->base.flags |= I915_ENGINE_IS_VIRTUAL;5761577157725772+ virtual_engine_initial_hint(ve);57625773 return &ve->context;5763577457645775err_put:
+4-4
drivers/gpu/drm/i915/gt/selftest_rps.c
···4444{4545 const u64 *a = A, *b = B;46464747- if (a < b)4747+ if (*a < *b)4848 return -1;4949- else if (a > b)4949+ else if (*a > *b)5050 return 1;5151 else5252 return 0;···5656{5757 const u32 *a = A, *b = B;58585959- if (a < b)5959+ if (*a < *b)6060 return -1;6161- else if (a > b)6161+ else if (*a > *b)6262 return 1;6363 else6464 return 0;
+46
drivers/gpu/drm/i915/gt/shaders/README
···11+ASM sources for auto generated shaders22+======================================33+44+The i915/gt/hsw_clear_kernel.c and i915/gt/ivb_clear_kernel.c files contain55+pre-compiled batch chunks that will clear any residual render cache during66+context switch.77+88+They are generated from their respective platform ASM files present in the99+i915/gt/shaders/clear_kernel directory.1010+1111+The generated .c files should never be modified directly. Instead, any modification1212+needs to be made to their respective ASM files and the build instructions below1313+need to be followed.1414+1515+Building1616+========1717+1818+Environment1919+-----------2020+2121+IGT GPU tool scripts and Mesa's i965 instruction assembler tool are used2222+for building.2323+2424+Please make sure your Mesa tool is compiled with "-Dtools=intel" and2525+"-Ddri-drivers=i965", and run this script from the IGT source root directory.2626+2727+The instructions below assume:2828+ * IGT gpu tools source code is located in your home directory (~) as ~/igt2929+ * Mesa source code is located in your home directory (~) as ~/mesa3030+ and built under the ~/mesa/build directory3131+ * Linux kernel source code is under your home directory (~) as ~/linux3232+3333+Instructions3434+------------3535+3636+~ $ cp ~/linux/drivers/gpu/drm/i915/gt/shaders/clear_kernel/ivb.asm \3737+ ~/igt/lib/i915/shaders/clear_kernel/ivb.asm3838+~ $ cd ~/igt3939+igt $ ./scripts/generate_clear_kernel.sh -g ivb \4040+ -m ~/mesa/build/src/intel/tools/i965_asm4141+4242+~ $ cp ~/linux/drivers/gpu/drm/i915/gt/shaders/clear_kernel/hsw.asm \4343+ ~/igt/lib/i915/shaders/clear_kernel/hsw.asm4444+~ $ cd ~/igt4545+igt $ ./scripts/generate_clear_kernel.sh -g hsw \4646+ -m ~/mesa/build/src/intel/tools/i965_asm
···104104 struct i915_address_space *vm,105105 const struct i915_ggtt_view *view)106106{107107+ struct i915_vma *pos = ERR_PTR(-E2BIG);107108 struct i915_vma *vma;108109 struct rb_node *rb, **p;109110···185184 rb = NULL;186185 p = &obj->vma.tree.rb_node;187186 while (*p) {188188- struct i915_vma *pos;189187 long cmp;190188191189 rb = *p;···196196 * and dispose of ours.197197 */198198 cmp = i915_vma_compare(pos, vm, view);199199- if (cmp == 0) {200200- spin_unlock(&obj->vma.lock);201201- i915_vma_free(vma);202202- return pos;203203- }204204-205199 if (cmp < 0)206200 p = &rb->rb_right;207207- else201201+ else if (cmp > 0)208202 p = &rb->rb_left;203203+ else204204+ goto err_unlock;209205 }210206 rb_link_node(&vma->obj_node, rb, p);211207 rb_insert_color(&vma->obj_node, &obj->vma.tree);···224228err_unlock:225229 spin_unlock(&obj->vma.lock);226230err_vma:231231+ i915_vm_put(vm);227232 i915_vma_free(vma);228228- return ERR_PTR(-E2BIG);233233+ return pos;229234}230235231236static struct i915_vma *
+2
drivers/gpu/drm/lima/lima_pp.c
···271271272272int lima_pp_bcast_resume(struct lima_ip *ip)273273{274274+ /* PP has been reset by individual PP resume */275275+ ip->data.async_reset = false;274276 return 0;275277}276278
+1-1
drivers/gpu/drm/mediatek/Kconfig
···66 depends on COMMON_CLK77 depends on HAVE_ARM_SMCCC88 depends on OF99+ depends on MTK_MMSYS910 select DRM_GEM_CMA_HELPER1011 select DRM_KMS_HELPER1112 select DRM_MIPI_DSI1213 select DRM_PANEL1314 select MEMORY1414- select MTK_MMSYS1515 select MTK_SMI1616 select VIDEOMODE_HELPERS1717 help
+2-6
drivers/gpu/drm/mediatek/mtk_drm_crtc.c
···193193 int ret;194194 int i;195195196196- DRM_DEBUG_DRIVER("%s\n", __func__);197196 for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {198197 ret = clk_prepare_enable(mtk_crtc->ddp_comp[i]->clk);199198 if (ret) {···212213{213214 int i;214215215215- DRM_DEBUG_DRIVER("%s\n", __func__);216216 for (i = 0; i < mtk_crtc->ddp_comp_nr; i++)217217 clk_disable_unprepare(mtk_crtc->ddp_comp[i]->clk);218218}···256258 int ret;257259 int i;258260259259- DRM_DEBUG_DRIVER("%s\n", __func__);260261 if (WARN_ON(!crtc->state))261262 return -EINVAL;262263···296299 goto err_mutex_unprepare;297300 }298301299299- DRM_DEBUG_DRIVER("mediatek_ddp_ddp_path_setup\n");300302 for (i = 0; i < mtk_crtc->ddp_comp_nr - 1; i++) {301303 mtk_mmsys_ddp_connect(mtk_crtc->mmsys_dev,302304 mtk_crtc->ddp_comp[i]->id,···345349 struct drm_crtc *crtc = &mtk_crtc->base;346350 int i;347351348348- DRM_DEBUG_DRIVER("%s\n", __func__);349352 for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {350353 mtk_ddp_comp_stop(mtk_crtc->ddp_comp[i]);351354 if (i == 1)···826831827832#if IS_REACHABLE(CONFIG_MTK_CMDQ)828833 mtk_crtc->cmdq_client =829829- cmdq_mbox_create(dev, drm_crtc_index(&mtk_crtc->base),834834+ cmdq_mbox_create(mtk_crtc->mmsys_dev,835835+ drm_crtc_index(&mtk_crtc->base),830836 2000);831837 if (IS_ERR(mtk_crtc->cmdq_client)) {832838 dev_dbg(dev, "mtk_crtc %d failed to create mailbox client, writing register by CPU now\n",
+2-4
drivers/gpu/drm/mediatek/mtk_drm_drv.c
···444444 if (!private)445445 return -ENOMEM;446446447447- private->data = of_device_get_match_data(dev);448447 private->mmsys_dev = dev->parent;449448 if (!private->mmsys_dev) {450449 dev_err(dev, "Failed to get MMSYS device\n");···513514 goto err_node;514515 }515516516516- ret = mtk_ddp_comp_init(dev, node, comp, comp_id, NULL);517517+ ret = mtk_ddp_comp_init(dev->parent, node, comp,518518+ comp_id, NULL);517519 if (ret) {518520 of_node_put(node);519521 goto err_node;···571571 int ret;572572573573 ret = drm_mode_config_helper_suspend(drm);574574- DRM_DEBUG_DRIVER("mtk_drm_sys_suspend\n");575574576575 return ret;577576}···582583 int ret;583584584585 ret = drm_mode_config_helper_resume(drm);585585- DRM_DEBUG_DRIVER("mtk_drm_sys_resume\n");586586587587 return ret;588588}
+15-10
drivers/gpu/drm/mediatek/mtk_drm_plane.c
···164164 true, true);165165}166166167167+static void mtk_plane_atomic_disable(struct drm_plane *plane,168168+ struct drm_plane_state *old_state)169169+{170170+ struct mtk_plane_state *state = to_mtk_plane_state(plane->state);171171+172172+ state->pending.enable = false;173173+ wmb(); /* Make sure the above parameter is set before update */174174+ state->pending.dirty = true;175175+}176176+167177static void mtk_plane_atomic_update(struct drm_plane *plane,168178 struct drm_plane_state *old_state)169179{···187177188178 if (!crtc || WARN_ON(!fb))189179 return;180180+181181+ if (!plane->state->visible) {182182+ mtk_plane_atomic_disable(plane, old_state);183183+ return;184184+ }190185191186 gem = fb->obj[0];192187 mtk_gem = to_mtk_gem_obj(gem);···212197 state->pending.height = drm_rect_height(&plane->state->dst);213198 state->pending.rotation = plane->state->rotation;214199 wmb(); /* Make sure the above parameters are set before update */215215- state->pending.dirty = true;216216-}217217-218218-static void mtk_plane_atomic_disable(struct drm_plane *plane,219219- struct drm_plane_state *old_state)220220-{221221- struct mtk_plane_state *state = to_mtk_plane_state(plane->state);222222-223223- state->pending.enable = false;224224- wmb(); /* Make sure the above parameter is set before update */225200 state->pending.dirty = true;226201}227202
···118118 if (retries)119119 udelay(400);120120121121- /* transaction request, wait up to 1ms for it to complete */121121+ /* transaction request, wait up to 2ms for it to complete */122122 nvkm_wr32(device, 0x00e4e4 + base, 0x00010000 | ctrl);123123124124- timeout = 1000;124124+ timeout = 2000;125125 do {126126 ctrl = nvkm_rd32(device, 0x00e4e4 + base);127127 udelay(1);
···118118 if (retries)119119 udelay(400);120120121121- /* transaction request, wait up to 1ms for it to complete */121121+ /* transaction request, wait up to 2ms for it to complete */122122 nvkm_wr32(device, 0x00d954 + base, 0x00010000 | ctrl);123123124124- timeout = 1000;124124+ timeout = 2000;125125 do {126126 ctrl = nvkm_rd32(device, 0x00d954 + base);127127 udelay(1);
+3-4
drivers/gpu/drm/radeon/ci_dpm.c
···55635563 if (!rdev->pm.dpm.ps)55645564 return -ENOMEM;55655565 power_state_offset = (u8 *)state_array->states;55665566+ rdev->pm.dpm.num_ps = 0;55665567 for (i = 0; i < state_array->ucNumEntries; i++) {55675568 u8 *idx;55685569 power_state = (union pplib_power_state *)power_state_offset;···55735572 if (!rdev->pm.power_state[i].clock_info)55745573 return -EINVAL;55755574 ps = kzalloc(sizeof(struct ci_ps), GFP_KERNEL);55765576- if (ps == NULL) {55775577- kfree(rdev->pm.dpm.ps);55755575+ if (ps == NULL)55785576 return -ENOMEM;55795579- }55805577 rdev->pm.dpm.ps[i].ps_priv = ps;55815578 ci_parse_pplib_non_clock_info(rdev, &rdev->pm.dpm.ps[i],55825579 non_clock_info,···55965597 k++;55975598 }55985599 power_state_offset += 2 + power_state->v2.ucNumDPMLevels;56005600+ rdev->pm.dpm.num_ps = i + 1;55995601 }56005600- rdev->pm.dpm.num_ps = state_array->ucNumEntries;5601560256025603 /* fill in the vce power states */56035604 for (i = 0; i < RADEON_MAX_VCE_LEVELS; i++) {
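The hunk above stops trusting `state_array->ucNumEntries` and instead bumps `rdev->pm.dpm.num_ps` only after each power state is fully parsed, so a failure partway through leaves a count that matches the initialized entries exactly. A minimal userspace sketch of that pattern, with a hypothetical `struct item` and `parse_items()` standing in for the radeon types:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical payload, standing in for a per-state allocation. */
struct item {
	int *priv;
};

/*
 * Initialize up to n items. *num_ok only ever counts fully initialized
 * entries, so a caller's cleanup path can free items[0..*num_ok-1] even
 * when we bail out partway through (fail_at simulates an allocation
 * failure at that index; pass -1 for no failure).
 */
static int parse_items(struct item *items, int n, int fail_at, int *num_ok)
{
	int i;

	*num_ok = 0;
	for (i = 0; i < n; i++) {
		if (i == fail_at)
			return -1;	/* simulated kzalloc() failure */
		items[i].priv = malloc(sizeof(int));
		if (!items[i].priv)
			return -1;
		*num_ok = i + 1;	/* count completed entries only */
	}
	return 0;
}
```

As in the patch, the count is updated at the end of each loop iteration rather than set once up front from the declared number of entries.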
···535535 __set_bit(MSC_RAW, input->mscbit);536536 }537537538538+ /*539539+ * hid-input may mark device as using autorepeat, but neither540540+ * the trackpad, nor the mouse actually want it.541541+ */542542+ __clear_bit(EV_REP, input->evbit);543543+538544 return 0;539545}540546
···13681368 * Write dump contents to the page. No need to synchronize; panic should13691369 * be single-threaded.13701370 */13711371- kmsg_dump_get_buffer(dumper, true, hv_panic_page, HV_HYP_PAGE_SIZE,13711371+ kmsg_dump_get_buffer(dumper, false, hv_panic_page, HV_HYP_PAGE_SIZE,13721372 &bytes_written);13731373 if (bytes_written)13741374 hyperv_report_panic_msg(panic_pa, bytes_written);
+3-1
drivers/hwmon/acpi_power_meter.c
···883883884884 res = setup_attrs(resource);885885 if (res)886886- goto exit_free;886886+ goto exit_free_capability;887887888888 resource->hwmon_dev = hwmon_device_register(&device->dev);889889 if (IS_ERR(resource->hwmon_dev)) {···896896897897exit_remove:898898 remove_attrs(resource);899899+exit_free_capability:900900+ free_capabilities(resource);899901exit_free:900902 kfree(resource);901903exit:
···285285 return err;286286}287287288288+static const char * const sct_avoid_models[] = {289289+/*290290+ * These drives will have WRITE FPDMA QUEUED command timeouts and sometimes just291291+ * freeze until power-cycled under heavy write loads when their temperature is292292+ * getting polled in SCT mode. The SMART mode seems to be fine, though.293293+ *294294+ * While only the 3 TB model (DT01ACA3) was actually caught exhibiting the295295+ * problem let's play safe here to avoid data corruption and ban the whole296296+ * DT01ACAx family.297297+298298+ * The models from this array are prefix-matched.299299+ */300300+ "TOSHIBA DT01ACA",301301+};302302+303303+static bool drivetemp_sct_avoid(struct drivetemp_data *st)304304+{305305+ struct scsi_device *sdev = st->sdev;306306+ unsigned int ctr;307307+308308+ if (!sdev->model)309309+ return false;310310+311311+ /*312312+ * The "model" field contains just the raw SCSI INQUIRY response313313+ * "product identification" field, which has a width of 16 bytes.314314+ * This field is space-filled, but is NOT NULL-terminated.315315+ */316316+ for (ctr = 0; ctr < ARRAY_SIZE(sct_avoid_models); ctr++)317317+ if (!strncmp(sdev->model, sct_avoid_models[ctr],318318+ strlen(sct_avoid_models[ctr])))319319+ return true;320320+321321+ return false;322322+}323323+288324static int drivetemp_identify_sata(struct drivetemp_data *st)289325{290326 struct scsi_device *sdev = st->sdev;···362326 /* bail out if this is not a SATA device */363327 if (!is_ata || !is_sata)364328 return -ENODEV;329329+330330+ if (have_sct && drivetemp_sct_avoid(st)) {331331+ dev_notice(&sdev->sdev_gendev,332332+ "will avoid using SCT for temperature monitoring\n");333333+ have_sct = false;334334+ }335335+365336 if (!have_sct)366337 goto skip_sct;367338
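The deny list above is matched with `strncmp()` against the pattern's own length, which is what makes it safe on the raw, space-padded, non-NUL-terminated 16-byte INQUIRY model field: at most `strlen(pattern)` bytes of the field are ever read. A standalone sketch of the same prefix match, using ordinary C string literals as stand-ins for the raw field:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Prefix-matched deny list, as in the patch. */
static const char * const avoid_models[] = {
	"TOSHIBA DT01ACA",
};

/*
 * model stands in for the 16-byte INQUIRY "product identification"
 * field; strncmp() reads at most strlen(pattern) bytes, so a missing
 * NUL terminator in the real field is harmless.
 */
static bool model_is_avoided(const char *model)
{
	size_t i;

	for (i = 0; i < sizeof(avoid_models) / sizeof(avoid_models[0]); i++)
		if (!strncmp(model, avoid_models[i],
			     strlen(avoid_models[i])))
			return true;
	return false;
}
```

Any model beginning with the 15-byte prefix (DT01ACA1, DT01ACA2, DT01ACA3, ...) matches, which is how the single entry bans the whole family.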
+1-1
drivers/hwmon/emc2103.c
···443443 }444444445445 result = read_u8_from_i2c(client, REG_FAN_CONF1, &conf_reg);446446- if (result) {446446+ if (result < 0) {447447 count = result;448448 goto err;449449 }
+4-3
drivers/hwmon/max6697.c
···3838 * Map device tree / platform data register bit map to chip bit map.3939 * Applies to alert register and over-temperature register.4040 */4141-#define MAX6697_MAP_BITS(reg) ((((reg) & 0x7e) >> 1) | \4141+#define MAX6697_ALERT_MAP_BITS(reg) ((((reg) & 0x7e) >> 1) | \4242 (((reg) & 0x01) << 6) | ((reg) & 0x80))4343+#define MAX6697_OVERT_MAP_BITS(reg) (((reg) >> 1) | (((reg) & 0x01) << 7))43444445#define MAX6697_REG_STAT(n) (0x44 + (n))4546···563562 return ret;564563565564 ret = i2c_smbus_write_byte_data(client, MAX6697_REG_ALERT_MASK,566566- MAX6697_MAP_BITS(pdata->alert_mask));565565+ MAX6697_ALERT_MAP_BITS(pdata->alert_mask));567566 if (ret < 0)568567 return ret;569568570569 ret = i2c_smbus_write_byte_data(client, MAX6697_REG_OVERT_MASK,571571- MAX6697_MAP_BITS(pdata->over_temperature_mask));570570+ MAX6697_OVERT_MAP_BITS(pdata->over_temperature_mask));572571 if (ret < 0)573572 return ret;574573
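The reason the shared macro had to be split is that the MAX6697's ALERT and OVERT mask registers order the same logical channels differently, so one remap cannot serve both. Evaluating the two macros on the same input makes the divergence visible (the mask values below are purely illustrative):

```c
#include <assert.h>

/* Bit-remap macros as introduced by the patch. */
#define MAX6697_ALERT_MAP_BITS(reg)	((((reg) & 0x7e) >> 1) | \
					 (((reg) & 0x01) << 6) | ((reg) & 0x80))
#define MAX6697_OVERT_MAP_BITS(reg)	(((reg) >> 1) | (((reg) & 0x01) << 7))

/*
 * Bit 0 of the platform-data mask lands at bit 6 in the ALERT register
 * but at bit 7 in the OVERT register:
 *   MAX6697_ALERT_MAP_BITS(0x01) == 0x40
 *   MAX6697_OVERT_MAP_BITS(0x01) == 0x80
 * For bits 1..6 the two mappings happen to agree (a plain shift right).
 */
```

With the old single macro, the over-temperature mask written to the chip would have been wrong for the channels whose position differs between the two registers.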
···7171 Infineon IR35221 controller.72727373 This driver can also be built as a module. If so, the module will7474- be called ir35521.7474+ be called ir35221.75757676config SENSORS_IR380647777 tristate "Infineon IR38064"
···747747 return 0;748748}749749750750+static int cti_pm_setup(struct cti_drvdata *drvdata)751751+{752752+ int ret;753753+754754+ if (drvdata->ctidev.cpu == -1)755755+ return 0;756756+757757+ if (nr_cti_cpu)758758+ goto done;759759+760760+ cpus_read_lock();761761+ ret = cpuhp_setup_state_nocalls_cpuslocked(762762+ CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,763763+ "arm/coresight_cti:starting",764764+ cti_starting_cpu, cti_dying_cpu);765765+ if (ret) {766766+ cpus_read_unlock();767767+ return ret;768768+ }769769+770770+ ret = cpu_pm_register_notifier(&cti_cpu_pm_nb);771771+ cpus_read_unlock();772772+ if (ret) {773773+ cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_CTI_STARTING);774774+ return ret;775775+ }776776+777777+done:778778+ nr_cti_cpu++;779779+ cti_cpu_drvdata[drvdata->ctidev.cpu] = drvdata;780780+781781+ return 0;782782+}783783+750784/* release PM registrations */751785static void cti_pm_release(struct cti_drvdata *drvdata)752786{753753- if (drvdata->ctidev.cpu >= 0) {754754- if (--nr_cti_cpu == 0) {755755- cpu_pm_unregister_notifier(&cti_cpu_pm_nb);787787+ if (drvdata->ctidev.cpu == -1)788788+ return;756789757757- cpuhp_remove_state_nocalls(758758- CPUHP_AP_ARM_CORESIGHT_CTI_STARTING);759759- }760760- cti_cpu_drvdata[drvdata->ctidev.cpu] = NULL;790790+ cti_cpu_drvdata[drvdata->ctidev.cpu] = NULL;791791+ if (--nr_cti_cpu == 0) {792792+ cpu_pm_unregister_notifier(&cti_cpu_pm_nb);793793+ cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_CTI_STARTING);761794 }762795}763796···856823857824 /* driver data*/858825 drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);859859- if (!drvdata) {860860- ret = -ENOMEM;861861- dev_info(dev, "%s, mem err\n", __func__);862862- goto err_out;863863- }826826+ if (!drvdata)827827+ return -ENOMEM;864828865829 /* Validity for the resource is already checked by the AMBA core */866830 base = devm_ioremap_resource(dev, res);867867- if (IS_ERR(base)) {868868- ret = PTR_ERR(base);869869- dev_err(dev, "%s, remap err\n", 
__func__);870870- goto err_out;871871- }831831+ if (IS_ERR(base))832832+ return PTR_ERR(base);833833+872834 drvdata->base = base;873835874836 dev_set_drvdata(dev, drvdata);···882854 pdata = coresight_cti_get_platform_data(dev);883855 if (IS_ERR(pdata)) {884856 dev_err(dev, "coresight_cti_get_platform_data err\n");885885- ret = PTR_ERR(pdata);886886- goto err_out;857857+ return PTR_ERR(pdata);887858 }888859889860 /* default to powered - could change on PM notifications */···894867 drvdata->ctidev.cpu);895868 else896869 cti_desc.name = coresight_alloc_device_name(&cti_sys_devs, dev);897897- if (!cti_desc.name) {898898- ret = -ENOMEM;899899- goto err_out;900900- }870870+ if (!cti_desc.name)871871+ return -ENOMEM;901872902873 /* setup CPU power management handling for CPU bound CTI devices. */903903- if (drvdata->ctidev.cpu >= 0) {904904- cti_cpu_drvdata[drvdata->ctidev.cpu] = drvdata;905905- if (!nr_cti_cpu++) {906906- cpus_read_lock();907907- ret = cpuhp_setup_state_nocalls_cpuslocked(908908- CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,909909- "arm/coresight_cti:starting",910910- cti_starting_cpu, cti_dying_cpu);911911-912912- if (!ret)913913- ret = cpu_pm_register_notifier(&cti_cpu_pm_nb);914914- cpus_read_unlock();915915- if (ret)916916- goto err_out;917917- }918918- }874874+ ret = cti_pm_setup(drvdata);875875+ if (ret)876876+ return ret;919877920878 /* create dynamic attributes for connections */921879 ret = cti_create_cons_sysfs(dev, drvdata);922880 if (ret) {923881 dev_err(dev, "%s: create dynamic sysfs entries failed\n",924882 cti_desc.name);925925- goto err_out;883883+ goto pm_release;926884 }927885928886 /* set up coresight component description */···920908 drvdata->csdev = coresight_register(&cti_desc);921909 if (IS_ERR(drvdata->csdev)) {922910 ret = PTR_ERR(drvdata->csdev);923923- goto err_out;911911+ goto pm_release;924912 }925913926914 /* add to list of CTI devices */···939927 dev_info(&drvdata->csdev->dev, "CTI initialized\n");940928 return 
0;941929942942-err_out:930930+pm_release:943931 cti_pm_release(drvdata);944932 return ret;945933}
+54-30
drivers/hwtracing/coresight/coresight-etm4x.c
···13881388 .notifier_call = etm4_cpu_pm_notify,13891389};1390139013911391-static int etm4_cpu_pm_register(void)13911391+/* Setup PM. Called with cpus locked. Deals with error conditions and counts */13921392+static int etm4_pm_setup_cpuslocked(void)13921393{13931393- if (IS_ENABLED(CONFIG_CPU_PM))13941394- return cpu_pm_register_notifier(&etm4_cpu_pm_nb);13941394+ int ret;1395139513961396- return 0;13961396+ if (etm4_count++)13971397+ return 0;13981398+13991399+ ret = cpu_pm_register_notifier(&etm4_cpu_pm_nb);14001400+ if (ret)14011401+ goto reduce_count;14021402+14031403+ ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING,14041404+ "arm/coresight4:starting",14051405+ etm4_starting_cpu, etm4_dying_cpu);14061406+14071407+ if (ret)14081408+ goto unregister_notifier;14091409+14101410+ ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ONLINE_DYN,14111411+ "arm/coresight4:online",14121412+ etm4_online_cpu, NULL);14131413+14141414+ /* HP dyn state ID returned in ret on success */14151415+ if (ret > 0) {14161416+ hp_online = ret;14171417+ return 0;14181418+ }14191419+14201420+ /* failed dyn state - remove others */14211421+ cpuhp_remove_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING);14221422+14231423+unregister_notifier:14241424+ cpu_pm_unregister_notifier(&etm4_cpu_pm_nb);14251425+14261426+reduce_count:14271427+ --etm4_count;14281428+ return ret;13971429}1398143013991399-static void etm4_cpu_pm_unregister(void)14311431+static void etm4_pm_clear(void)14001432{14011401- if (IS_ENABLED(CONFIG_CPU_PM))14021402- cpu_pm_unregister_notifier(&etm4_cpu_pm_nb);14331433+ if (--etm4_count != 0)14341434+ return;14351435+14361436+ cpu_pm_unregister_notifier(&etm4_cpu_pm_nb);14371437+ cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);14381438+ if (hp_online) {14391439+ cpuhp_remove_state_nocalls(hp_online);14401440+ hp_online = 0;14411441+ }14031442}1404144314051444static int etm4_probe(struct amba_device *adev, const struct amba_id 
*id)···14921453 etm4_init_arch_data, drvdata, 1))14931454 dev_err(dev, "ETM arch init failed\n");1494145514951495- if (!etm4_count++) {14961496- cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING,14971497- "arm/coresight4:starting",14981498- etm4_starting_cpu, etm4_dying_cpu);14991499- ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ONLINE_DYN,15001500- "arm/coresight4:online",15011501- etm4_online_cpu, NULL);15021502- if (ret < 0)15031503- goto err_arch_supported;15041504- hp_online = ret;15051505-15061506- ret = etm4_cpu_pm_register();15071507- if (ret)15081508- goto err_arch_supported;15091509- }15101510-14561456+ ret = etm4_pm_setup_cpuslocked();15111457 cpus_read_unlock();14581458+14591459+ /* etm4_pm_setup_cpuslocked() does its own cleanup - exit on error */14601460+ if (ret) {14611461+ etmdrvdata[drvdata->cpu] = NULL;14621462+ return ret;14631463+ }1512146415131465 if (etm4_arch_supported(drvdata->arch) == false) {15141466 ret = -EINVAL;···1547151715481518err_arch_supported:15491519 etmdrvdata[drvdata->cpu] = NULL;15501550- if (--etm4_count == 0) {15511551- etm4_cpu_pm_unregister();15521552-15531553- cpuhp_remove_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING);15541554- if (hp_online)15551555- cpuhp_remove_state_nocalls(hp_online);15561556- }15201520+ etm4_pm_clear();15571521 return ret;15581522}15591523
+18-3
drivers/hwtracing/intel_th/core.c
···10211021{10221022 struct intel_th_device *hub = to_intel_th_hub(thdev);10231023 struct intel_th_driver *hubdrv = to_intel_th_driver(hub->dev.driver);10241024+ int ret;1024102510251026 /* In host mode, this is up to the external debugger, do nothing. */10261027 if (hub->host_mode)10271028 return 0;1028102910291029- if (!hubdrv->set_output)10301030- return -ENOTSUPP;10301030+ /*10311031+ * hub is instantiated together with the source device that10321032+ * calls here, so guaranteed to be present.10331033+ */10341034+ hubdrv = to_intel_th_driver(hub->dev.driver);10351035+ if (!hubdrv || !try_module_get(hubdrv->driver.owner))10361036+ return -EINVAL;1031103710321032- return hubdrv->set_output(hub, master);10381038+ if (!hubdrv->set_output) {10391039+ ret = -ENOTSUPP;10401040+ goto out;10411041+ }10421042+10431043+ ret = hubdrv->set_output(hub, master);10441044+10451045+out:10461046+ module_put(hubdrv->driver.owner);10471047+ return ret;10331048}10341049EXPORT_SYMBOL_GPL(intel_th_set_output);10351050
···113113114114config I2C_SLAVE115115 bool "I2C slave support"116116+ help117117+ This enables Linux to act as an I2C slave device. Note that your I2C118118+ bus master driver also needs to support this functionality. Please119119+ read Documentation/i2c/slave-interface.rst for further details.116120117121if I2C_SLAVE118122119123config I2C_SLAVE_EEPROM120124 tristate "I2C eeprom slave driver"125125+ help126126+ This backend makes Linux behave like an I2C EEPROM. Please read127127+ Documentation/i2c/slave-eeprom-backend.rst for further details.121128122129endif123130
+2-1
drivers/i2c/algos/i2c-algo-pca.c
···314314 DEB2("BUS ERROR - SDA Stuck low\n");315315 pca_reset(adap);316316 goto out;317317- case 0x90: /* Bus error - SCL stuck low */317317+ case 0x78: /* Bus error - SCL stuck low (PCA9665) */318318+ case 0x90: /* Bus error - SCL stuck low (PCA9564) */318319 DEB2("BUS ERROR - SCL Stuck low\n");319320 pca_reset(adap);320321 goto out;
+12-16
drivers/i2c/busses/i2c-cadence.c
···421421 /* Read data if receive data valid is set */422422 while (cdns_i2c_readreg(CDNS_I2C_SR_OFFSET) &423423 CDNS_I2C_SR_RXDV) {424424- /*425425- * Clear hold bit that was set for FIFO control if426426- * RX data left is less than FIFO depth, unless427427- * repeated start is selected.428428- */429429- if ((id->recv_count < CDNS_I2C_FIFO_DEPTH) &&430430- !id->bus_hold_flag)431431- cdns_i2c_clear_bus_hold(id);432432-433424 if (id->recv_count > 0) {434425 *(id->p_recv_buf)++ =435426 cdns_i2c_readreg(CDNS_I2C_DATA_OFFSET);436427 id->recv_count--;437428 id->curr_recv_count--;429429+430430+ /*431431+ * Clear hold bit that was set for FIFO control432432+ * if RX data left is less than or equal to433433+ * FIFO DEPTH unless repeated start is selected434434+ */435435+ if (id->recv_count <= CDNS_I2C_FIFO_DEPTH &&436436+ !id->bus_hold_flag)437437+ cdns_i2c_clear_bus_hold(id);438438+438439 } else {439440 dev_err(id->adap.dev.parent,440441 "xfer_size reg rollover. xfer aborted!\n");···595594 * Check for the message size against FIFO depth and set the596595 * 'hold bus' bit if it is greater than FIFO depth.597596 */598598- if ((id->recv_count > CDNS_I2C_FIFO_DEPTH) || id->bus_hold_flag)597597+ if (id->recv_count > CDNS_I2C_FIFO_DEPTH)599598 ctrl_reg |= CDNS_I2C_CR_HOLD;600600- else601601- ctrl_reg = ctrl_reg & ~CDNS_I2C_CR_HOLD;602599603600 cdns_i2c_writereg(ctrl_reg, CDNS_I2C_CR_OFFSET);604601···653654 * Check for the message size against FIFO depth and set the654655 * 'hold bus' bit if it is greater than FIFO depth.655656 */656656- if ((id->send_count > CDNS_I2C_FIFO_DEPTH) || id->bus_hold_flag)657657+ if (id->send_count > CDNS_I2C_FIFO_DEPTH)657658 ctrl_reg |= CDNS_I2C_CR_HOLD;658658- else659659- ctrl_reg = ctrl_reg & ~CDNS_I2C_CR_HOLD;660660-661659 cdns_i2c_writereg(ctrl_reg, CDNS_I2C_CR_OFFSET);662660663661 /* Clear the interrupts in interrupt status register. */
···869869 /* disable irqs and ensure none is running before clearing ptr */870870 rcar_i2c_write(priv, ICSIER, 0);871871 rcar_i2c_write(priv, ICSCR, 0);872872+ rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */872873873874 synchronize_irq(priv->irq);874875 priv->slave = NULL;···970969 ret = rcar_i2c_clock_calculate(priv);971970 if (ret < 0)972971 goto out_pm_put;972972+973973+ rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */973974974975 if (priv->devtype == I2C_RCAR_GEN3) {975976 priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
···130130 [IIO_MOD_PM2P5] = "pm2p5",131131 [IIO_MOD_PM4] = "pm4",132132 [IIO_MOD_PM10] = "pm10",133133+ [IIO_MOD_ETHANOL] = "ethanol",134134+ [IIO_MOD_H2] = "h2",133135};134136135137/* relies on pairs of these shared then separate */
+17-12
drivers/iio/magnetometer/ak8974.c
···192192 bool drdy_irq;193193 struct completion drdy_complete;194194 bool drdy_active_low;195195+ /* Ensure timestamp is naturally aligned */196196+ struct {197197+ __le16 channels[3];198198+ s64 ts __aligned(8);199199+ } scan;195200};196201197202static const char ak8974_reg_avdd[] = "avdd";···662657{663658 struct ak8974 *ak8974 = iio_priv(indio_dev);664659 int ret;665665- __le16 hw_values[8]; /* Three axes + 64bit padding */666660667661 pm_runtime_get_sync(&ak8974->i2c->dev);668662 mutex_lock(&ak8974->lock);···671667 dev_err(&ak8974->i2c->dev, "error triggering measure\n");672668 goto out_unlock;673669 }674674- ret = ak8974_getresult(ak8974, hw_values);670670+ ret = ak8974_getresult(ak8974, ak8974->scan.channels);675671 if (ret) {676672 dev_err(&ak8974->i2c->dev, "error getting measures\n");677673 goto out_unlock;678674 }679675680680- iio_push_to_buffers_with_timestamp(indio_dev, hw_values,676676+ iio_push_to_buffers_with_timestamp(indio_dev, &ak8974->scan,681677 iio_get_time_ns(indio_dev));682678683679 out_unlock:···866862 ak8974->map = devm_regmap_init_i2c(i2c, &ak8974_regmap_config);867863 if (IS_ERR(ak8974->map)) {868864 dev_err(&i2c->dev, "failed to allocate register map\n");865865+ pm_runtime_put_noidle(&i2c->dev);866866+ pm_runtime_disable(&i2c->dev);869867 return PTR_ERR(ak8974->map);870868 }871869872870 ret = ak8974_set_power(ak8974, AK8974_PWR_ON);873871 if (ret) {874872 dev_err(&i2c->dev, "could not power on\n");875875- goto power_off;873873+ goto disable_pm;876874 }877875878876 ret = ak8974_detect(ak8974);879877 if (ret) {880878 dev_err(&i2c->dev, "neither AK8974 nor AMI30x found\n");881881- goto power_off;879879+ goto disable_pm;882880 }883881884882 ret = ak8974_selftest(ak8974);···890884 ret = ak8974_reset(ak8974);891885 if (ret) {892886 dev_err(&i2c->dev, "AK8974 reset failed\n");893893- goto power_off;887887+ goto disable_pm;894888 }895895-896896- pm_runtime_set_autosuspend_delay(&i2c->dev,897897- AK8974_AUTOSUSPEND_DELAY);898898- 
pm_runtime_use_autosuspend(&i2c->dev);899899- pm_runtime_put(&i2c->dev);900889901890 indio_dev->dev.parent = &i2c->dev;902891 switch (ak8974->variant) {···958957 goto cleanup_buffer;959958 }960959960960+ pm_runtime_set_autosuspend_delay(&i2c->dev,961961+ AK8974_AUTOSUSPEND_DELAY);962962+ pm_runtime_use_autosuspend(&i2c->dev);963963+ pm_runtime_put(&i2c->dev);964964+961965 return 0;962966963967cleanup_buffer:···971965 pm_runtime_put_noidle(&i2c->dev);972966 pm_runtime_disable(&i2c->dev);973967 ak8974_set_power(ak8974, AK8974_PWR_OFF);974974-power_off:975968 regulator_bulk_disable(ARRAY_SIZE(ak8974->regs), ak8974->regs);976969977970 return ret;
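The new anonymous scan struct in the ak8974 patch exists because `iio_push_to_buffers_with_timestamp()` writes an `s64` timestamp directly after the channel data and needs it naturally (8-byte) aligned; the old on-stack `__le16` array gave no such guarantee. The resulting layout can be checked in plain C, with userspace types standing in for `__le16`/`s64`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mirrors the patch's layout: three 16-bit channels + aligned timestamp. */
struct scan {
	uint16_t channels[3];			/* 6 bytes of sample data */
	int64_t ts __attribute__((aligned(8)));	/* padded up to offset 8  */
};
```

The compiler inserts 2 bytes of padding after `channels`, so the timestamp sits at offset 8 and the whole struct is 16 bytes, exactly the "three axes + 64-bit timestamp" record the IIO buffer expects.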
···649649{650650 struct ib_uverbs_file *ufile = attrs->ufile;651651652652- /* alloc_commit consumes the uobj kref */653653- uobj->uapi_object->type_class->alloc_commit(uobj);654654-655652 /* kref is held so long as the uobj is on the uobj list. */656653 uverbs_uobject_get(uobj);657654 spin_lock_irq(&ufile->uobjects_lock);···657660658661 /* matches atomic_set(-1) in alloc_uobj */659662 atomic_set(&uobj->usecnt, 0);663663+664664+ /* alloc_commit consumes the uobj kref */665665+ uobj->uapi_object->type_class->alloc_commit(uobj);660666661667 /* Matches the down_read in rdma_alloc_begin_uobject */662668 up_read(&ufile->hw_destroy_rwsem);
+18-22
drivers/infiniband/core/sa_query.c
···829829 return len;830830}831831832832-static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)832832+static int ib_nl_make_request(struct ib_sa_query *query, gfp_t gfp_mask)833833{834834 struct sk_buff *skb = NULL;835835 struct nlmsghdr *nlh;836836 void *data;837837 struct ib_sa_mad *mad;838838 int len;839839+ unsigned long flags;840840+ unsigned long delay;841841+ gfp_t gfp_flag;842842+ int ret;843843+844844+ INIT_LIST_HEAD(&query->list);845845+ query->seq = (u32)atomic_inc_return(&ib_nl_sa_request_seq);839846840847 mad = query->mad_buf->mad;841848 len = ib_nl_get_path_rec_attrs_len(mad->sa_hdr.comp_mask);···867860 /* Repair the nlmsg header length */868861 nlmsg_end(skb, nlh);869862870870- return rdma_nl_multicast(&init_net, skb, RDMA_NL_GROUP_LS, gfp_mask);871871-}863863+ gfp_flag = ((gfp_mask & GFP_ATOMIC) == GFP_ATOMIC) ? GFP_ATOMIC :864864+ GFP_NOWAIT;872865873873-static int ib_nl_make_request(struct ib_sa_query *query, gfp_t gfp_mask)874874-{875875- unsigned long flags;876876- unsigned long delay;877877- int ret;878878-879879- INIT_LIST_HEAD(&query->list);880880- query->seq = (u32)atomic_inc_return(&ib_nl_sa_request_seq);881881-882882- /* Put the request on the list first.*/883866 spin_lock_irqsave(&ib_nl_request_lock, flags);867867+ ret = rdma_nl_multicast(&init_net, skb, RDMA_NL_GROUP_LS, gfp_flag);868868+869869+ if (ret)870870+ goto out;871871+872872+ /* Put the request on the list.*/884873 delay = msecs_to_jiffies(sa_local_svc_timeout_ms);885874 query->timeout = delay + jiffies;886875 list_add_tail(&query->list, &ib_nl_request_list);887876 /* Start the timeout if this is the only request */888877 if (ib_nl_request_list.next == &query->list)889878 queue_delayed_work(ib_nl_wq, &ib_nl_timed_work, delay);890890- spin_unlock_irqrestore(&ib_nl_request_lock, flags);891879892892- ret = ib_nl_send_msg(query, gfp_mask);893893- if (ret) {894894- ret = -EIO;895895- /* Remove the request */896896- spin_lock_irqsave(&ib_nl_request_lock, flags);897897- 
list_del(&query->list);898898- spin_unlock_irqrestore(&ib_nl_request_lock, flags);899899- }880880+out:881881+ spin_unlock_irqrestore(&ib_nl_request_lock, flags);900882901883 return ret;902884}
+28-9
drivers/infiniband/hw/hfi1/init.c
···831831}832832833833/**834834+ * destroy_workqueues - destroy per port workqueues835835+ * @dd: the hfi1_ib device836836+ */837837+static void destroy_workqueues(struct hfi1_devdata *dd)838838+{839839+ int pidx;840840+ struct hfi1_pportdata *ppd;841841+842842+ for (pidx = 0; pidx < dd->num_pports; ++pidx) {843843+ ppd = dd->pport + pidx;844844+845845+ if (ppd->hfi1_wq) {846846+ destroy_workqueue(ppd->hfi1_wq);847847+ ppd->hfi1_wq = NULL;848848+ }849849+ if (ppd->link_wq) {850850+ destroy_workqueue(ppd->link_wq);851851+ ppd->link_wq = NULL;852852+ }853853+ }854854+}855855+856856+/**834857 * enable_general_intr() - Enable the IRQs that will be handled by the835858 * general interrupt handler.836859 * @dd: valid devdata···11261103 * We can't count on interrupts since we are stopping.11271104 */11281105 hfi1_quiet_serdes(ppd);11291129-11301130- if (ppd->hfi1_wq) {11311131- destroy_workqueue(ppd->hfi1_wq);11321132- ppd->hfi1_wq = NULL;11331133- }11341134- if (ppd->link_wq) {11351135- destroy_workqueue(ppd->link_wq);11361136- ppd->link_wq = NULL;11371137- }11061106+ if (ppd->hfi1_wq)11071107+ flush_workqueue(ppd->hfi1_wq);11081108+ if (ppd->link_wq)11091109+ flush_workqueue(ppd->link_wq);11381110 }11391111 sdma_exit(dd);11401112}···17741756 * clear dma engines, etc.17751757 */17761758 shutdown_device(dd);17591759+ destroy_workqueues(dd);1777176017781761 stop_timers(dd);17791762
···601601 */602602 synchronize_srcu(&dev->odp_srcu);603603604604+ /*605605+ * All work on the prefetch list must be completed, xa_erase() prevented606606+ * new work from being created.607607+ */608608+ wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work));609609+610610+ /*611611+ * At this point it is forbidden for any other thread to enter612612+ * pagefault_mr() on this imr. It is already forbidden to call613613+ * pagefault_mr() on an implicit child. Due to this additions to614614+ * implicit_children are prevented.615615+ */616616+617617+ /*618618+ * Block destroy_unused_implicit_child_mr() from incrementing619619+ * num_deferred_work.620620+ */604621 xa_lock(&imr->implicit_children);605622 xa_for_each (&imr->implicit_children, idx, mtt) {606623 __xa_erase(&imr->implicit_children, idx);···626609 xa_unlock(&imr->implicit_children);627610628611 /*629629- * num_deferred_work can only be incremented inside the odp_srcu, or630630- * under xa_lock while the child is in the xarray. Thus at this point631631- * it is only decreasing, and all work holding it is now on the wq.612612+ * Wait for any concurrent destroy_unused_implicit_child_mr() to613613+ * complete.632614 */633615 wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work));634616
+6-1
drivers/infiniband/hw/mlx5/qp.c
···26682668 if (qp_type == IB_QPT_RAW_PACKET && attr->rwq_ind_tbl)26692669 return (create_flags) ? -EINVAL : 0;2670267026712671+ process_create_flag(dev, &create_flags, IB_QP_CREATE_NETIF_QP,26722672+ mlx5_get_flow_namespace(dev->mdev,26732673+ MLX5_FLOW_NAMESPACE_BYPASS),26742674+ qp);26712675 process_create_flag(dev, &create_flags,26722676 IB_QP_CREATE_INTEGRITY_EN,26732677 MLX5_CAP_GEN(mdev, sho), qp);···30053001 mlx5_ib_destroy_dct(qp);30063002 } else {30073003 /*30083008- * The two lines below are temp solution till QP allocation30043004+ * These lines below are temp solution till QP allocation30093005 * will be moved to be under IB/core responsiblity.30103006 */30113007 qp->ibqp.send_cq = attr->send_cq;30123008 qp->ibqp.recv_cq = attr->recv_cq;30093009+ qp->ibqp.pd = pd;30133010 destroy_qp_common(dev, qp, udata);30143011 }30153012
···563563 Support for the Loongson PCH PIC Controller.564564565565config LOONGSON_PCH_MSI566566- bool "Loongson PCH PIC Controller"566566+ bool "Loongson PCH MSI Controller"567567 depends on MACH_LOONGSON64 || COMPILE_TEST568568 depends on PCI569569 default MACH_LOONGSON64
+12-4
drivers/irqchip/irq-gic-v3-its.c
···37973797 if (!gic_rdists->has_vpend_valid_dirty)37983798 return;3799379938003800- WARN_ON_ONCE(readq_relaxed_poll_timeout(vlpi_base + GICR_VPENDBASER,38013801- val,38023802- !(val & GICR_VPENDBASER_Dirty),38033803- 10, 500));38003800+ WARN_ON_ONCE(readq_relaxed_poll_timeout_atomic(vlpi_base + GICR_VPENDBASER,38013801+ val,38023802+ !(val & GICR_VPENDBASER_Dirty),38033803+ 10, 500));38043804}3805380538063806static void its_vpe_schedule(struct its_vpe *vpe)···40544054 u64 val;4055405540564056 if (info->req_db) {40574057+ unsigned long flags;40584058+40574059 /*40584060 * vPE is going to block: make the vPE non-resident with40594061 * PendingLast clear and DB set. The GIC guarantees that if40604062 * we read-back PendingLast clear, then a doorbell will be40614063 * delivered when an interrupt comes.40644064+ *40654065+ * Note the locking to deal with the concurrent update of40664066+ * pending_last from the doorbell interrupt handler that can40674067+ * run concurrently.40624068 */40694069+ raw_spin_lock_irqsave(&vpe->vpe_lock, flags);40634070 val = its_clear_vpend_valid(vlpi_base,40644071 GICR_VPENDBASER_PendingLast,40654072 GICR_VPENDBASER_4_1_DB);40664073 vpe->pending_last = !!(val & GICR_VPENDBASER_PendingLast);40744074+ raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);40674075 } else {40684076 /*40694077 * We're not blocking, so just make the vPE non-resident
+3-11
drivers/irqchip/irq-gic.c
···329329static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,330330 bool force)331331{332332- void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3);333333- unsigned int cpu, shift = (gic_irq(d) % 4) * 8;334334- u32 val, mask, bit;335335- unsigned long flags;332332+ void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + gic_irq(d);333333+ unsigned int cpu;336334337335 if (!force)338336 cpu = cpumask_any_and(mask_val, cpu_online_mask);···340342 if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids)341343 return -EINVAL;342344343343- gic_lock_irqsave(flags);344344- mask = 0xff << shift;345345- bit = gic_cpu_map[cpu] << shift;346346- val = readl_relaxed(reg) & ~mask;347347- writel_relaxed(val | bit, reg);348348- gic_unlock_irqrestore(flags);349349-345345+ writeb_relaxed(gic_cpu_map[cpu], reg);350346 irq_data_update_effective_affinity(d, cpumask_of(cpu));351347352348 return IRQ_SET_MASK_OK_DONE;
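This rewrite relies on GICD_ITARGETSRn being byte-accessible (the GICv2 architecture permits byte writes to the target registers), so the locked read-modify-write of a 32-bit word collapses into one `writeb_relaxed()`. That the two addressing schemes hit the same byte is plain arithmetic, sketched here without any hardware access:

```c
#include <assert.h>

/* Old addressing: word-aligned register plus an 8-bit lane via a shift. */
static unsigned int old_target_byte(unsigned int irq)
{
	unsigned int word_off = irq & ~3u;	/* GIC_DIST_TARGET + (irq & ~3) */
	unsigned int shift = (irq % 4) * 8;	/* lane within that word        */

	return word_off + shift / 8;
}

/* New addressing: each IRQ's target byte sits directly at offset irq. */
static unsigned int new_target_byte(unsigned int irq)
{
	return irq;
}
```

Because the byte write touches only the one lane, the `gic_lock_irqsave()`/`gic_unlock_irqrestore()` pair that protected the word-wide read-modify-write is no longer needed.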
+1-1
drivers/irqchip/irq-riscv-intc.c
···9999100100 hartid = riscv_of_parent_hartid(node);101101 if (hartid < 0) {102102- pr_warn("unable to fine hart id for %pOF\n", node);102102+ pr_warn("unable to find hart id for %pOF\n", node);103103 return 0;104104 }105105
+2-2
drivers/md/dm-integrity.c
···24202420 unsigned prev_free_sectors;2421242124222422 /* the following test is not needed, but it tests the replay code */24232423- if (unlikely(dm_suspended(ic->ti)) && !ic->meta_dev)24232423+ if (unlikely(dm_post_suspending(ic->ti)) && !ic->meta_dev)24242424 return;2425242524262426 spin_lock_irq(&ic->endio_wait.lock);···2481248124822482next_chunk:2483248324842484- if (unlikely(dm_suspended(ic->ti)))24842484+ if (unlikely(dm_post_suspending(ic->ti)))24852485 goto unlock_ret;2486248624872487 range.logical_sector = le64_to_cpu(ic->sb->recalc_sector);
-4
drivers/md/dm-rq.c
···146146 */147147static void rq_completed(struct mapped_device *md)148148{149149- /* nudge anyone waiting on suspend queue */150150- if (unlikely(wq_has_sleeper(&md->wait)))151151- wake_up(&md->wait);152152-153149 /*154150 * dm_put() must be at the end of this function. See the comment above155151 */
+6
drivers/md/dm-writecache.c
···22662266 }2267226722682268 if (WC_MODE_PMEM(wc)) {22692269+ if (!dax_synchronous(wc->ssd_dev->dax_dev)) {22702270+ r = -EOPNOTSUPP;22712271+ ti->error = "Asynchronous persistent memory not supported as pmem cache";22722272+ goto bad;22732273+ }22742274+22692275 r = persistent_memory_claim(wc);22702276 if (r) {22712277 ti->error = "Unable to map persistent memory for cache";
+8-1
drivers/md/dm-zoned-metadata.c
···22172217{22182218 struct list_head *list;22192219 struct dm_zone *zone;22202220- int i = 0;22202220+ int i;2221222122222222+ /* Schedule reclaim to ensure free zones are available */22232223+ if (!(flags & DMZ_ALLOC_RECLAIM)) {22242224+ for (i = 0; i < zmd->nr_devs; i++)22252225+ dmz_schedule_reclaim(zmd->dev[i].reclaim);22262226+ }22272227+22282228+ i = 0;22222229again:22232230 if (flags & DMZ_ALLOC_CACHE)22242231 list = &zmd->unmap_cache_list;
+3-4
drivers/md/dm-zoned-reclaim.c
···456456 nr_zones = dmz_nr_rnd_zones(zmd, zrc->dev_idx);457457 nr_unmap = dmz_nr_unmap_rnd_zones(zmd, zrc->dev_idx);458458 }459459+ if (nr_unmap <= 1)460460+ return 0;459461 return nr_unmap * 100 / nr_zones;460462}461463···503501{504502 struct dmz_reclaim *zrc = container_of(work, struct dmz_reclaim, work.work);505503 struct dmz_metadata *zmd = zrc->metadata;506506- unsigned int p_unmap, nr_unmap_rnd = 0, nr_rnd = 0;504504+ unsigned int p_unmap;507505 int ret;508506509507 if (dmz_dev_is_dying(zmd))···528526 /* Busy but we still have some random zone: throttle */529527 zrc->kc_throttle.throttle = min(75U, 100U - p_unmap / 2);530528 }531531-532532- nr_unmap_rnd = dmz_nr_unmap_rnd_zones(zmd, zrc->dev_idx);533533- nr_rnd = dmz_nr_rnd_zones(zmd, zrc->dev_idx);534529535530 DMDEBUG("(%s/%u): Reclaim (%u): %s, %u%% free zones (%u/%u cache %u/%u random)",536531 dmz_metadata_label(zmd), zrc->dev_idx,
+1-9
drivers/md/dm-zoned-target.c
···400400 dm_per_bio_data(bio, sizeof(struct dmz_bioctx));401401 struct dmz_metadata *zmd = dmz->metadata;402402 struct dm_zone *zone;403403- int i, ret;404404-405405- /*406406- * Write may trigger a zone allocation. So make sure the407407- * allocation can succeed.408408- */409409- if (bio_op(bio) == REQ_OP_WRITE)410410- for (i = 0; i < dmz->nr_ddevs; i++)411411- dmz_schedule_reclaim(dmz->dev[i].reclaim);403403+ int ret;412404413405 dmz_lock_metadata(zmd);414406
+70-31
drivers/md/dm.c
···1212#include <linux/init.h>1313#include <linux/module.h>1414#include <linux/mutex.h>1515+#include <linux/sched/mm.h>1516#include <linux/sched/signal.h>1617#include <linux/blkpg.h>1718#include <linux/bio.h>···143142#define DMF_NOFLUSH_SUSPENDING 5144143#define DMF_DEFERRED_REMOVE 6145144#define DMF_SUSPENDED_INTERNALLY 7145145+#define DMF_POST_SUSPENDING 8146146147147#define DM_NUMA_NODE NUMA_NO_NODE148148static int dm_numa_node = DM_NUMA_NODE;···654652 if (tio->inside_dm_io)655653 return;656654 bio_put(&tio->clone);657657-}658658-659659-static bool md_in_flight_bios(struct mapped_device *md)660660-{661661- int cpu;662662- struct hd_struct *part = &dm_disk(md)->part0;663663- long sum = 0;664664-665665- for_each_possible_cpu(cpu) {666666- sum += part_stat_local_read_cpu(part, in_flight[0], cpu);667667- sum += part_stat_local_read_cpu(part, in_flight[1], cpu);668668- }669669-670670- return sum != 0;671671-}672672-673673-static bool md_in_flight(struct mapped_device *md)674674-{675675- if (queue_is_mq(md->queue))676676- return blk_mq_queue_inflight(md->queue);677677- else678678- return md_in_flight_bios(md);679655}680656681657u64 dm_start_time_ns_from_clone(struct bio *bio)···14451465 BUG_ON(bio_has_data(ci->bio));14461466 while ((ti = dm_table_get_target(ci->map, target_nr++)))14471467 __send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);14481448-14491449- bio_disassociate_blkg(ci->bio);14501450-14511468 return 0;14521469}14531470···16321655 ci.bio = &flush_bio;16331656 ci.sector_count = 0;16341657 error = __send_empty_flush(&ci);16581658+ bio_uninit(ci.bio);16351659 /* dec_pending submits any data associated with flush */16361660 } else if (op_is_zone_mgmt(bio_op(bio))) {16371661 ci.bio = bio;···17071729 ci.bio = &flush_bio;17081730 ci.sector_count = 0;17091731 error = __send_empty_flush(&ci);17321732+ bio_uninit(ci.bio);17101733 /* dec_pending submits any data associated with flush */17111734 } else {17121735 struct dm_target_io *tio;···24092430 if 
(!dm_suspended_md(md)) {24102431 dm_table_presuspend_targets(map);24112432 set_bit(DMF_SUSPENDED, &md->flags);24332433+ set_bit(DMF_POST_SUSPENDING, &md->flags);24122434 dm_table_postsuspend_targets(map);24132435 }24142436 /* dm_put_live_table must be before msleep, otherwise deadlock is possible */···24502470}24512471EXPORT_SYMBOL_GPL(dm_put);2452247224532453-static int dm_wait_for_completion(struct mapped_device *md, long task_state)24732473+static bool md_in_flight_bios(struct mapped_device *md)24742474+{24752475+ int cpu;24762476+ struct hd_struct *part = &dm_disk(md)->part0;24772477+ long sum = 0;24782478+24792479+ for_each_possible_cpu(cpu) {24802480+ sum += part_stat_local_read_cpu(part, in_flight[0], cpu);24812481+ sum += part_stat_local_read_cpu(part, in_flight[1], cpu);24822482+ }24832483+24842484+ return sum != 0;24852485+}24862486+24872487+static int dm_wait_for_bios_completion(struct mapped_device *md, long task_state)24542488{24552489 int r = 0;24562490 DEFINE_WAIT(wait);2457249124582458- while (1) {24922492+ while (true) {24592493 prepare_to_wait(&md->wait, &wait, task_state);2460249424612461- if (!md_in_flight(md))24952495+ if (!md_in_flight_bios(md))24622496 break;2463249724642498 if (signal_pending_state(task_state, current)) {···24832489 io_schedule();24842490 }24852491 finish_wait(&md->wait, &wait);24922492+24932493+ return r;24942494+}24952495+24962496+static int dm_wait_for_completion(struct mapped_device *md, long task_state)24972497+{24982498+ int r = 0;24992499+25002500+ if (!queue_is_mq(md->queue))25012501+ return dm_wait_for_bios_completion(md, task_state);25022502+25032503+ while (true) {25042504+ if (!blk_mq_queue_inflight(md->queue))25052505+ break;25062506+25072507+ if (signal_pending_state(task_state, current)) {25082508+ r = -EINTR;25092509+ break;25102510+ }25112511+25122512+ msleep(5);25132513+ }2486251424872515 return r;24882516}···27682752 if (r)27692753 goto out_unlock;2770275427552755+ set_bit(DMF_POST_SUSPENDING, 
&md->flags);27712756 dm_table_postsuspend_targets(map);27572757+ clear_bit(DMF_POST_SUSPENDING, &md->flags);2772275827732759out_unlock:27742760 mutex_unlock(&md->suspend_lock);···28672849 (void) __dm_suspend(md, map, suspend_flags, TASK_UNINTERRUPTIBLE,28682850 DMF_SUSPENDED_INTERNALLY);2869285128522852+ set_bit(DMF_POST_SUSPENDING, &md->flags);28702853 dm_table_postsuspend_targets(map);28542854+ clear_bit(DMF_POST_SUSPENDING, &md->flags);28712855}2872285628732857static void __dm_internal_resume(struct mapped_device *md)···29462926int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action,29472927 unsigned cookie)29482928{29292929+ int r;29302930+ unsigned noio_flag;29492931 char udev_cookie[DM_COOKIE_LENGTH];29502932 char *envp[] = { udev_cookie, NULL };2951293329342934+ noio_flag = memalloc_noio_save();29352935+29522936 if (!cookie)29532953- return kobject_uevent(&disk_to_dev(md->disk)->kobj, action);29372937+ r = kobject_uevent(&disk_to_dev(md->disk)->kobj, action);29542938 else {29552939 snprintf(udev_cookie, DM_COOKIE_LENGTH, "%s=%u",29562940 DM_COOKIE_ENV_VAR_NAME, cookie);29572957- return kobject_uevent_env(&disk_to_dev(md->disk)->kobj,29582958- action, envp);29412941+ r = kobject_uevent_env(&disk_to_dev(md->disk)->kobj,29422942+ action, envp);29592943 }29442944+29452945+ memalloc_noio_restore(noio_flag);29462946+29472947+ return r;29602948}2961294929622950uint32_t dm_next_uevent_seq(struct mapped_device *md)···30303002 return test_bit(DMF_SUSPENDED, &md->flags);30313003}3032300430053005+static int dm_post_suspending_md(struct mapped_device *md)30063006+{30073007+ return test_bit(DMF_POST_SUSPENDING, &md->flags);30083008+}30093009+30333010int dm_suspended_internally_md(struct mapped_device *md)30343011{30353012 return test_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);···30503017 return dm_suspended_md(dm_table_get_md(ti->table));30513018}30523019EXPORT_SYMBOL_GPL(dm_suspended);30203020+30213021+int dm_post_suspending(struct dm_target 
*ti)30223022+{30233023+ return dm_post_suspending_md(dm_table_get_md(ti->table));30243024+}30253025+EXPORT_SYMBOL_GPL(dm_post_suspending);3053302630543027int dm_noflush_suspending(struct dm_target *ti)30553028{
···1010#include <linux/clk.h>1111#include <linux/err.h>1212#include <linux/io.h>1313-#include <linux/spinlock.h>1313+#include <linux/mutex.h>1414#include <linux/atmel-ssc.h>1515#include <linux/slab.h>1616#include <linux/module.h>···2020#include "../../sound/soc/atmel/atmel_ssc_dai.h"21212222/* Serialize access to ssc_list and user count */2323-static DEFINE_SPINLOCK(user_lock);2323+static DEFINE_MUTEX(user_lock);2424static LIST_HEAD(ssc_list);25252626struct ssc_device *ssc_request(unsigned int ssc_num)···2828 int ssc_valid = 0;2929 struct ssc_device *ssc;30303131- spin_lock(&user_lock);3131+ mutex_lock(&user_lock);3232 list_for_each_entry(ssc, &ssc_list, list) {3333 if (ssc->pdev->dev.of_node) {3434 if (of_alias_get_id(ssc->pdev->dev.of_node, "ssc")···4444 }45454646 if (!ssc_valid) {4747- spin_unlock(&user_lock);4747+ mutex_unlock(&user_lock);4848 pr_err("ssc: ssc%d platform device is missing\n", ssc_num);4949 return ERR_PTR(-ENODEV);5050 }51515252 if (ssc->user) {5353- spin_unlock(&user_lock);5353+ mutex_unlock(&user_lock);5454 dev_dbg(&ssc->pdev->dev, "module busy\n");5555 return ERR_PTR(-EBUSY);5656 }5757 ssc->user++;5858- spin_unlock(&user_lock);5858+ mutex_unlock(&user_lock);59596060 clk_prepare(ssc->clk);6161···6767{6868 bool disable_clk = true;69697070- spin_lock(&user_lock);7070+ mutex_lock(&user_lock);7171 if (ssc->user)7272 ssc->user--;7373 else {7474 disable_clk = false;7575 dev_dbg(&ssc->pdev->dev, "device already free\n");7676 }7777- spin_unlock(&user_lock);7777+ mutex_unlock(&user_lock);78787979 if (disable_clk)8080 clk_unprepare(ssc->clk);···237237 return -ENXIO;238238 }239239240240- spin_lock(&user_lock);240240+ mutex_lock(&user_lock);241241 list_add_tail(&ssc->list, &ssc_list);242242- spin_unlock(&user_lock);242242+ mutex_unlock(&user_lock);243243244244 platform_set_drvdata(pdev, ssc);245245···258258259259 ssc_sound_dai_remove(ssc);260260261261- spin_lock(&user_lock);261261+ mutex_lock(&user_lock);262262 list_del(&ssc->list);263263- 
spin_unlock(&user_lock);263263+ mutex_unlock(&user_lock);264264265265 return 0;266266}
+11-3
drivers/misc/habanalabs/command_submission.c
···499499 struct asic_fixed_properties *asic = &hdev->asic_prop;500500 struct hw_queue_properties *hw_queue_prop;501501502502+ /* This must be checked here to prevent out-of-bounds access to503503+ * hw_queues_props array504504+ */505505+ if (chunk->queue_index >= HL_MAX_QUEUES) {506506+ dev_err(hdev->dev, "Queue index %d is invalid\n",507507+ chunk->queue_index);508508+ return -EINVAL;509509+ }510510+502511 hw_queue_prop = &asic->hw_queues_props[chunk->queue_index];503512504504- if ((chunk->queue_index >= HL_MAX_QUEUES) ||505505- (hw_queue_prop->type == QUEUE_TYPE_NA)) {506506- dev_err(hdev->dev, "Queue index %d is invalid\n",513513+ if (hw_queue_prop->type == QUEUE_TYPE_NA) {514514+ dev_err(hdev->dev, "Queue index %d is not applicable\n",507515 chunk->queue_index);508516 return -EINVAL;509517 }
···578578 * @mmu_invalidate_cache_range: flush specific MMU STLB cache lines with579579 * ASID-VA-size mask.580580 * @send_heartbeat: send is-alive packet to ArmCP and verify response.581581- * @enable_clock_gating: enable clock gating for reducing power consumption.582582- * @disable_clock_gating: disable clock for accessing registers on HBW.581581+ * @set_clock_gating: enable/disable clock gating per engine according to582582+ * clock gating mask in hdev583583+ * @disable_clock_gating: disable clock gating completely583584 * @debug_coresight: perform certain actions on Coresight for debugging.584585 * @is_device_idle: return true if device is idle, false otherwise.585586 * @soft_reset_late_init: perform certain actions needed after soft reset.···588587 * @hw_queues_unlock: release H/W queues lock.589588 * @get_pci_id: retrieve PCI ID.590589 * @get_eeprom_data: retrieve EEPROM data from F/W.591591- * @send_cpu_message: send buffer to ArmCP.590590+ * @send_cpu_message: send message to F/W. If the message is timedout, the591591+ * driver will eventually reset the device. The timeout can592592+ * be determined by the calling function or it can be 0 and593593+ * then the timeout is the default timeout for the specific594594+ * ASIC592595 * @get_hw_state: retrieve the H/W state593596 * @pci_bars_map: Map PCI BARs.594597 * @set_dram_bar_base: Set DRAM BAR to map specific device address. Returns···685680 int (*mmu_invalidate_cache_range)(struct hl_device *hdev, bool is_hard,686681 u32 asid, u64 va, u64 size);687682 int (*send_heartbeat)(struct hl_device *hdev);688688- void (*enable_clock_gating)(struct hl_device *hdev);683683+ void (*set_clock_gating)(struct hl_device *hdev);689684 void (*disable_clock_gating)(struct hl_device *hdev);690685 int (*debug_coresight)(struct hl_device *hdev, void *data);691686 bool (*is_device_idle)(struct hl_device *hdev, u32 *mask,···14031398 * @max_power: the max power of the device, as configured by the sysadmin. 
This14041399 * value is saved so in case of hard-reset, the driver will restore14051400 * this value and update the F/W after the re-initialization14011401+ * @clock_gating_mask: is clock gating enabled. bitmask that represents the14021402+ * different engines. See debugfs-driver-habanalabs for14031403+ * details.14061404 * @in_reset: is device in reset flow.14071405 * @curr_pll_profile: current PLL profile.14081406 * @cs_active_cnt: number of active command submissions on this device (active···14331425 * @init_done: is the initialization of the device done.14341426 * @mmu_enable: is MMU enabled.14351427 * @mmu_huge_page_opt: is MMU huge pages optimization enabled.14361436- * @clock_gating: is clock gating enabled.14371428 * @device_cpu_disabled: is the device CPU disabled (due to timeouts)14381429 * @dma_mask: the dma mask that was set for this device14391430 * @in_debug: is device under debug. This, together with fpriv_list, enforces···15001493 atomic64_t dram_used_mem;15011494 u64 timeout_jiffies;15021495 u64 max_power;14961496+ u64 clock_gating_mask;15031497 atomic_t in_reset;15041498 enum hl_pll_frequency curr_pll_profile;15051499 int cs_active_cnt;···15221514 u8 dram_default_page_mapping;15231515 u8 pmmu_huge_range;15241516 u8 init_done;15251525- u8 clock_gating;15261517 u8 device_cpu_disabled;15271518 u8 dma_mask;15281519 u8 in_debug;
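The command_submission.c hunk above moves the `queue_index` range check ahead of the `hw_queues_props[]` access, because the old code formed the array element address before validating the index. A minimal userspace sketch of the same validate-before-index pattern (the names, sizes, and table contents here are illustrative stand-ins, not the driver's real types):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_QUEUES 4          /* stand-in for HL_MAX_QUEUES */
#define QUEUE_TYPE_NA 0       /* "not applicable" marker */

struct queue_props { int type; };

static const struct queue_props queues[MAX_QUEUES] = {
    { 1 }, { 2 }, { QUEUE_TYPE_NA }, { 3 },
};

/* Return the queue type, or -1 for an invalid index or an N/A queue.
 * The index must be validated before &queues[index] is even formed;
 * the pre-patch driver computed that address first, which is
 * out of bounds whenever index >= MAX_QUEUES. */
static int queue_type(size_t index)
{
    if (index >= MAX_QUEUES)          /* range check first */
        return -1;
    if (queues[index].type == QUEUE_TYPE_NA)
        return -1;
    return queues[index].type;
}
```

The patch also splits the two failures into distinct error messages ("invalid" vs. "not applicable"), which the combined condition could not distinguish.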
···6868 if (WARN_ON(clock > host->max_clk))6969 clock = host->max_clk;70707171- for (div = 1; div < 256; div *= 2) {7171+ for (div = 2; div < 256; div *= 2) {7272 if ((parent / div) <= clock)7373 break;7474 }
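The one-line fix above starts the power-of-two divider search at 2 instead of 1 (presumably because this controller cannot pass the parent clock through undivided). A small sketch of the resulting selection loop, mirroring the hunk's bounds; the function name and the test frequencies are made up:

```c
#include <assert.h>

/* Pick the smallest power-of-two divider, starting at 2 as in the
 * patch, such that parent / div does not exceed the requested clock.
 * Falls out of the loop at 256 if even /128 is still too fast. */
static unsigned int pick_divider(unsigned int parent, unsigned int clock)
{
    unsigned int div;

    for (div = 2; div < 256; div *= 2) {
        if (parent / div <= clock)
            break;
    }
    return div;
}
```

Note one consequence of the fix: a request for the full parent rate now yields parent/2, trading rate for correctness on hardware without a bypass divider.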
+2-2
drivers/mtd/mtdcore.c
···12731273 return -EROFS;12741274 if (!len)12751275 return 0;12761276- if (!mtd->oops_panic_write)12771277- mtd->oops_panic_write = true;12761276+ if (!master->oops_panic_write)12771277+ master->oops_panic_write = true;1278127812791279 return master->_panic_write(master, mtd_get_master_ofs(mtd, to), len,12801280 retlen, buf);
+1-1
drivers/mtd/nand/raw/nandsim.c
···1761176117621762 NS_DBG("switch_state: operation is unknown, try to find it\n");1763176317641764- if (!ns_find_operation(ns, 0))17641764+ if (ns_find_operation(ns, 0))17651765 return;1766176617671767 if ((ns->state & ACTION_MASK) &&
+1-1
drivers/mtd/nand/raw/xway_nand.c
···224224 struct nand_chip *chip = &data->chip;225225 int ret;226226227227- ret = mtd_device_unregister(mtd);227227+ ret = mtd_device_unregister(nand_to_mtd(chip));228228 WARN_ON(ret);229229 nand_cleanup(chip);230230
···974974 PORT_MIRROR_SNIFFER, false);975975}976976977977-static void ksz9477_phy_setup(struct ksz_device *dev, int port,978978- struct phy_device *phy)979979-{980980- /* Only apply to port with PHY. */981981- if (port >= dev->phy_port_cnt)982982- return;983983-984984- /* The MAC actually cannot run in 1000 half-duplex mode. */985985- phy_remove_link_mode(phy,986986- ETHTOOL_LINK_MODE_1000baseT_Half_BIT);987987-988988- /* PHY does not support gigabit. */989989- if (!(dev->features & GBIT_SUPPORT))990990- phy_remove_link_mode(phy,991991- ETHTOOL_LINK_MODE_1000baseT_Full_BIT);992992-}993993-994977static bool ksz9477_get_gbit(struct ksz_device *dev, u8 data)995978{996979 bool gbit;···15711588 return -ENOMEM;15721589 }1573159015911591+ /* set the real number of ports */15921592+ dev->ds->num_ports = dev->port_cnt;15931593+15741594 return 0;15751595}15761596···15861600 .get_port_addr = ksz9477_get_port_addr,15871601 .cfg_port_member = ksz9477_cfg_port_member,15881602 .flush_dyn_mac_table = ksz9477_flush_dyn_mac_table,15891589- .phy_setup = ksz9477_phy_setup,15901603 .port_setup = ksz9477_port_setup,15911604 .r_mib_cnt = ksz9477_r_mib_cnt,15921605 .r_mib_pkt = ksz9477_r_mib_pkt,···1599161416001615int ksz9477_switch_register(struct ksz_device *dev)16011616{16021602- return ksz_switch_register(dev, &ksz9477_dev_ops);16171617+ int ret, i;16181618+ struct phy_device *phydev;16191619+16201620+ ret = ksz_switch_register(dev, &ksz9477_dev_ops);16211621+ if (ret)16221622+ return ret;16231623+16241624+ for (i = 0; i < dev->phy_port_cnt; ++i) {16251625+ if (!dsa_is_user_port(dev->ds, i))16261626+ continue;16271627+16281628+ phydev = dsa_to_port(dev->ds, i)->slave->phydev;16291629+16301630+ /* The MAC actually cannot run in 1000 half-duplex mode. */16311631+ phy_remove_link_mode(phydev,16321632+ ETHTOOL_LINK_MODE_1000baseT_Half_BIT);16331633+16341634+ /* PHY does not support gigabit. 
*/16351635+ if (!(dev->features & GBIT_SUPPORT))16361636+ phy_remove_link_mode(phydev,16371637+ ETHTOOL_LINK_MODE_1000baseT_Full_BIT);16381638+ }16391639+ return ret;16031640}16041641EXPORT_SYMBOL(ksz9477_switch_register);16051642
···358358359359 /* setup slave port */360360 dev->dev_ops->port_setup(dev, port, false);361361- if (dev->dev_ops->phy_setup)362362- dev->dev_ops->phy_setup(dev, port, phy);363361364362 /* port_stp_state_set() will be called after to enable the port so365363 * there is no need to do anything.
-2
drivers/net/dsa/microchip/ksz_common.h
···119119 u32 (*get_port_addr)(int port, int offset);120120 void (*cfg_port_member)(struct ksz_device *dev, int port, u8 member);121121 void (*flush_dyn_mac_table)(struct ksz_device *dev, int port);122122- void (*phy_setup)(struct ksz_device *dev, int port,123123- struct phy_device *phy);124122 void (*port_cleanup)(struct ksz_device *dev, int port);125123 void (*port_setup)(struct ksz_device *dev, int port, bool cpu_port);126124 void (*r_phy)(struct ksz_device *dev, u16 phy, u16 reg, u16 *val);
+19-3
drivers/net/dsa/mv88e6xxx/chip.c
···664664 const struct phylink_link_state *state)665665{666666 struct mv88e6xxx_chip *chip = ds->priv;667667+ struct mv88e6xxx_port *p;667668 int err;669669+670670+ p = &chip->ports[port];668671669672 /* FIXME: is this the correct test? If we're in fixed mode on an670673 * internal port, why should we process this any different from···678675 return;679676680677 mv88e6xxx_reg_lock(chip);681681- /* FIXME: should we force the link down here - but if we do, how682682- * do we restore the link force/unforce state? The driver layering683683- * gets in the way.678678+ /* In inband mode, the link may come up at any time while the link679679+ * is not forced down. Force the link down while we reconfigure the680680+ * interface mode.684681 */682682+ if (mode == MLO_AN_INBAND && p->interface != state->interface &&683683+ chip->info->ops->port_set_link)684684+ chip->info->ops->port_set_link(chip, port, LINK_FORCED_DOWN);685685+685686 err = mv88e6xxx_port_config_interface(chip, port, state->interface);686687 if (err && err != -EOPNOTSUPP)687688 goto err_unlock;···697690 */698691 if (err > 0)699692 err = 0;693693+694694+ /* Undo the forced down state above after completing configuration695695+ * irrespective of its state on entry, which allows the link to come up.696696+ */697697+ if (mode == MLO_AN_INBAND && p->interface != state->interface &&698698+ chip->info->ops->port_set_link)699699+ chip->info->ops->port_set_link(chip, port, LINK_UNFORCED);700700+701701+ p->interface = state->interface;700702701703err_unlock:702704 mv88e6xxx_reg_unlock(chip);
···415415 self->aq_nic_cfg.aq_hw_caps->media_type == AQ_HW_MEDIA_TYPE_TP) {416416 self->aq_hw->phy_id = HW_ATL_PHY_ID_MAX;417417 err = aq_phy_init(self->aq_hw);418418+419419+ /* Disable the PTP on NICs where it's known to cause datapath420420+ * problems.421421+ * Ideally this should have been done by PHY provisioning, but422422+ * many units have been shipped with enabled PTP block already.423423+ */424424+ if (self->aq_nic_cfg.aq_hw_caps->quirks & AQ_NIC_QUIRK_BAD_PTP)425425+ if (self->aq_hw->phy_id != HW_ATL_PHY_ID_MAX)426426+ aq_phy_disable_ptp(self->aq_hw);418427 }419428420429 for (i = 0U; i < self->aq_vecs; i++) {
···34933493 drv_fw = &fw_info->fw_hdr;3494349434953495 /* Read the header of the firmware on the card */34963496- ret = -t4_read_flash(adap, FLASH_FW_START,34963496+ ret = t4_read_flash(adap, FLASH_FW_START,34973497 sizeof(*card_fw) / sizeof(uint32_t),34983498 (uint32_t *)card_fw, 1);34993499 if (ret == 0) {···35223522 should_install_fs_fw(adap, card_fw_usable,35233523 be32_to_cpu(fs_fw->fw_ver),35243524 be32_to_cpu(card_fw->fw_ver))) {35253525- ret = -t4_fw_upgrade(adap, adap->mbox, fw_data,35263526- fw_size, 0);35253525+ ret = t4_fw_upgrade(adap, adap->mbox, fw_data,35263526+ fw_size, 0);35273527 if (ret != 0) {35283528 dev_err(adap->pdev_dev,35293529 "failed to install firmware: %d\n", ret);···35543554 FW_HDR_FW_VER_MICRO_G(c), FW_HDR_FW_VER_BUILD_G(c),35553555 FW_HDR_FW_VER_MAJOR_G(k), FW_HDR_FW_VER_MINOR_G(k),35563556 FW_HDR_FW_VER_MICRO_G(k), FW_HDR_FW_VER_BUILD_G(k));35573557- ret = EINVAL;35573557+ ret = -EINVAL;35583558 goto bye;35593559 }35603560
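The cxgb4 fix above removes stray negations: `t4_read_flash()` and `t4_fw_upgrade()` already return negative errno values, so `ret = -t4_read_flash(...)` flips a failure into a positive number that `ret < 0` checks ignore (and the bare `ret = EINVAL` had the same sign problem). A sketch of the convention, with a stand-in for the firmware helper:

```c
#include <assert.h>

/* Kernel convention: 0 on success, negative errno on failure.
 * read_flash_stub() is an illustrative stand-in for t4_read_flash(). */
static int read_flash_stub(int fail)
{
    return fail ? -22 /* -EINVAL */ : 0;
}

/* Buggy caller: negating the result turns -EINVAL into +22,
 * so "ret < 0" error checks no longer fire on failure. */
static int caller_buggy(int fail)
{
    return -read_flash_stub(fail);
}

/* Fixed caller: propagate the negative errno unchanged. */
static int caller_fixed(int fail)
{
    return read_flash_stub(fail);
}
```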
+1-1
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
···29382938 DMA_BIT_MASK(40));29392939 if (err) {29402940 netdev_err(net_dev, "dma_coerce_mask_and_coherent() failed\n");29412941- return err;29412941+ goto free_netdev;29422942 }2943294329442944 /* If fsl_fm_max_frm is set to a higher value than the all-common 1500,
+1-1
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
···3632363236333633 dpni_dev = to_fsl_mc_device(priv->net_dev->dev.parent);36343634 dpmac_dev = fsl_mc_get_endpoint(dpni_dev);36353635- if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)36353635+ if (IS_ERR_OR_NULL(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)36363636 return 0;3637363736383638 if (dpaa2_mac_is_type_fixed(dpmac_dev, priv->mc_io))
···180180{181181 struct hns3_enet_tqp_vector *tqp_vector = ring->tqp_vector;182182 unsigned char *packet = skb->data;183183+ u32 len = skb_headlen(skb);183184 u32 i;184185185185- for (i = 0; i < skb->len; i++)186186+ len = min_t(u32, len, HNS3_NIC_LB_TEST_PACKET_SIZE);187187+188188+ for (i = 0; i < len; i++)186189 if (packet[i] != (unsigned char)(i & 0xff))187190 break;188191189192 /* The packet is correctly received */190190- if (i == skb->len)193193+ if (i == HNS3_NIC_LB_TEST_PACKET_SIZE)191194 tqp_vector->rx_group.total_packets++;192195 else193196 print_hex_dump(KERN_ERR, "selftest:", DUMP_PREFIX_OFFSET, 16, 1,194194- skb->data, skb->len, true);197197+ skb->data, len, true);195198196199 dev_kfree_skb_any(skb);197200}
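The hns3 hunk above stops scanning `skb->len` bytes through `skb->data`: only the linear head (`skb_headlen()`) is guaranteed to be directly addressable, so the comparison is clamped to it, and the pass condition checks against the fixed test-packet size rather than the skb length. A userspace sketch of that check (the 128-byte size is an arbitrary stand-in for HNS3_NIC_LB_TEST_PACKET_SIZE):

```c
#include <assert.h>
#include <stddef.h>

#define TEST_PACKET_SIZE 128   /* stand-in for HNS3_NIC_LB_TEST_PACKET_SIZE */

/* Verify a loopback test buffer whose byte i must equal (i & 0xff).
 * Only `len` bytes are linear/readable, so the scan is clamped to
 * min(len, TEST_PACKET_SIZE), as in the patch; the packet counts as
 * received correctly only if all TEST_PACKET_SIZE bytes matched. */
static int lb_packet_ok(const unsigned char *data, size_t len)
{
    size_t i, n = len < TEST_PACKET_SIZE ? len : TEST_PACKET_SIZE;

    for (i = 0; i < n; i++)
        if (data[i] != (unsigned char)(i & 0xff))
            break;
    return i == TEST_PACKET_SIZE;
}
```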
···39593959 /* When at 2.5G, the link partner can send frames with shortened39603960 * preambles.39613961 */39623962- if (state->speed == SPEED_2500)39623962+ if (state->interface == PHY_INTERFACE_MODE_2500BASEX)39633963 new_ctrl4 |= MVNETA_GMAC4_SHORT_PREAMBLE_ENABLE;3964396439653965 if (pp->phy_interface != state->interface) {
···20082008 enum protocol_type proto;2009200920102010 if (p_hwfn->mcp_info->func_info.protocol == QED_PCI_ETH_RDMA) {20112011- DP_NOTICE(p_hwfn,20122012- "Current day drivers don't support RoCE & iWARP simultaneously on the same PF. Default to RoCE-only\n");20112011+ DP_VERBOSE(p_hwfn, QED_MSG_SP,20122012+ "Current day drivers don't support RoCE & iWARP simultaneously on the same PF. Default to RoCE-only\n");20132013 p_hwfn->hw_info.personality = QED_PCI_ETH_ROCE;20142014 }20152015
+16-1
drivers/net/ethernet/qlogic/qed/qed_debug.c
···75067506 if (p_hwfn->cdev->print_dbg_data)75077507 qed_dbg_print_feature(text_buf, text_size_bytes);7508750875097509+ /* Just return the original binary buffer if requested */75107510+ if (p_hwfn->cdev->dbg_bin_dump) {75117511+ vfree(text_buf);75127512+ return DBG_STATUS_OK;75137513+ }75147514+75097515 /* Free the old dump_buf and point the dump_buf to the newly allocagted75107516 * and formatted text buffer.75117517 */···77397733#define REGDUMP_HEADER_SIZE_SHIFT 077407734#define REGDUMP_HEADER_SIZE_MASK 0xffffff77417735#define REGDUMP_HEADER_FEATURE_SHIFT 2477427742-#define REGDUMP_HEADER_FEATURE_MASK 0x3f77367736+#define REGDUMP_HEADER_FEATURE_MASK 0x1f77377737+#define REGDUMP_HEADER_BIN_DUMP_SHIFT 2977387738+#define REGDUMP_HEADER_BIN_DUMP_MASK 0x177437739#define REGDUMP_HEADER_OMIT_ENGINE_SHIFT 3077447740#define REGDUMP_HEADER_OMIT_ENGINE_MASK 0x177457741#define REGDUMP_HEADER_ENGINE_SHIFT 31···77797771 feature, feature_size);7780777277817773 SET_FIELD(res, REGDUMP_HEADER_FEATURE, feature);77747774+ SET_FIELD(res, REGDUMP_HEADER_BIN_DUMP, 1);77827775 SET_FIELD(res, REGDUMP_HEADER_OMIT_ENGINE, omit_engine);77837776 SET_FIELD(res, REGDUMP_HEADER_ENGINE, engine);77847777···78037794 omit_engine = 1;7804779578057796 mutex_lock(&qed_dbg_lock);77977797+ cdev->dbg_bin_dump = true;7806779878077799 org_engine = qed_get_debug_engine(cdev);78087800 for (cur_engine = 0; cur_engine < cdev->num_hwfns; cur_engine++) {···79417931 DP_ERR(cdev, "qed_dbg_mcp_trace failed. rc = %d\n", rc);79427932 }7943793379347934+ /* Re-populate nvm attribute info */79357935+ qed_mcp_nvm_info_free(p_hwfn);79367936+ qed_mcp_nvm_info_populate(p_hwfn);79377937+79447938 /* nvm cfg1 */79457939 rc = qed_dbg_nvm_image(cdev,79467940 (u8 *)buffer + offset +···80077993 QED_NVM_IMAGE_MDUMP, "QED_NVM_IMAGE_MDUMP", rc);80087994 }8009799579967996+ cdev->dbg_bin_dump = false;80107997 mutex_unlock(&qed_dbg_lock);8011799880127999 return 0;
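The REGDUMP header changes above narrow the FEATURE mask from 0x3f to 0x1f to free bit 29 for the new BIN_DUMP flag. The sketch below shows the resulting 32-bit layout with simplified helpers; note these are stand-ins with a different shape from the driver's real SET_FIELD()/GET_FIELD() macros, which take a field name and operate on a destination variable:

```c
#include <assert.h>
#include <stdint.h>

/* Field layout from the patch: mask applied before shifting */
#define HDR_SIZE_SHIFT        0
#define HDR_SIZE_MASK         0xffffffu
#define HDR_FEATURE_SHIFT     24
#define HDR_FEATURE_MASK      0x1fu   /* narrowed to free bit 29 */
#define HDR_BIN_DUMP_SHIFT    29
#define HDR_BIN_DUMP_MASK     0x1u
#define HDR_OMIT_ENGINE_SHIFT 30
#define HDR_OMIT_ENGINE_MASK  0x1u
#define HDR_ENGINE_SHIFT      31
#define HDR_ENGINE_MASK       0x1u

/* Simplified pack/unpack helpers (not the qed macros' signatures) */
#define PACK_FIELD(val, name)  (((uint32_t)(val) & name##_MASK) << name##_SHIFT)
#define UNPACK_FIELD(hdr, name) (((hdr) >> name##_SHIFT) & name##_MASK)

static uint32_t regdump_header(uint32_t size, uint32_t feature,
                               int bin_dump, int omit_engine, int engine)
{
    return PACK_FIELD(size, HDR_SIZE) |
           PACK_FIELD(feature, HDR_FEATURE) |
           PACK_FIELD(bin_dump, HDR_BIN_DUMP) |
           PACK_FIELD(omit_engine, HDR_OMIT_ENGINE) |
           PACK_FIELD(engine, HDR_ENGINE);
}
```

The narrowing is safe here because bits 24-28 still give 32 feature values, while bit 29 gains a dedicated meaning (dump is raw binary rather than formatted text).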
+4-10
drivers/net/ethernet/qlogic/qed/qed_dev.c
···31023102 }3103310331043104 /* Log and clear previous pglue_b errors if such exist */31053105- qed_pglueb_rbc_attn_handler(p_hwfn, p_hwfn->p_main_ptt);31053105+ qed_pglueb_rbc_attn_handler(p_hwfn, p_hwfn->p_main_ptt, true);3106310631073107 /* Enable the PF's internal FID_enable in the PXP */31083108 rc = qed_pglueb_set_pfid_enable(p_hwfn, p_hwfn->p_main_ptt,···44724472 return 0;44734473}4474447444754475-static void qed_nvm_info_free(struct qed_hwfn *p_hwfn)44764476-{44774477- kfree(p_hwfn->nvm_info.image_att);44784478- p_hwfn->nvm_info.image_att = NULL;44794479-}44804480-44814475static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,44824476 void __iomem *p_regview,44834477 void __iomem *p_doorbells,···45564562 return rc;45574563err3:45584564 if (IS_LEAD_HWFN(p_hwfn))45594559- qed_nvm_info_free(p_hwfn);45654565+ qed_mcp_nvm_info_free(p_hwfn);45604566err2:45614567 if (IS_LEAD_HWFN(p_hwfn))45624568 qed_iov_free_hw_info(p_hwfn->cdev);···46174623 if (rc) {46184624 if (IS_PF(cdev)) {46194625 qed_init_free(p_hwfn);46204620- qed_nvm_info_free(p_hwfn);46264626+ qed_mcp_nvm_info_free(p_hwfn);46214627 qed_mcp_free(p_hwfn);46224628 qed_hw_hwfn_free(p_hwfn);46234629 }···4651465746524658 qed_iov_free_hw_info(cdev);4653465946544654- qed_nvm_info_free(p_hwfn);46604660+ qed_mcp_nvm_info_free(p_hwfn);46554661}4656466246574663static void qed_chain_free_next_ptr(struct qed_dev *cdev,
···4747 return 0;4848}49495050-static int rmnet_register_real_device(struct net_device *real_dev)5050+static int rmnet_register_real_device(struct net_device *real_dev,5151+ struct netlink_ext_ack *extack)5152{5253 struct rmnet_port *port;5354 int rc, entry;54555556 ASSERT_RTNL();56575757- if (rmnet_is_real_dev_registered(real_dev))5858+ if (rmnet_is_real_dev_registered(real_dev)) {5959+ port = rmnet_get_port_rtnl(real_dev);6060+ if (port->rmnet_mode != RMNET_EPMODE_VND) {6161+ NL_SET_ERR_MSG_MOD(extack, "bridge device already exists");6262+ return -EINVAL;6363+ }6464+5865 return 0;6666+ }59676068 port = kzalloc(sizeof(*port), GFP_KERNEL);6169 if (!port)···141133142134 mux_id = nla_get_u16(data[IFLA_RMNET_MUX_ID]);143135144144- err = rmnet_register_real_device(real_dev);136136+ err = rmnet_register_real_device(real_dev, extack);145137 if (err)146138 goto err0;147139···430422 }431423432424 if (port->rmnet_mode != RMNET_EPMODE_VND) {433433- NL_SET_ERR_MSG_MOD(extack, "bridge device already exists");425425+ NL_SET_ERR_MSG_MOD(extack, "more than one bridge dev attached");434426 return -EINVAL;435427 }436428···441433 return -EBUSY;442434 }443435444444- err = rmnet_register_real_device(slave_dev);436436+ err = rmnet_register_real_device(slave_dev, extack);445437 if (err)446438 return -EBUSY;447439
+24-2
drivers/net/ethernet/renesas/ravb_main.c
···14501450 struct ravb_private *priv = container_of(work, struct ravb_private,14511451 work);14521452 struct net_device *ndev = priv->ndev;14531453+ int error;1453145414541455 netif_tx_stop_all_queues(ndev);14551456···14591458 ravb_ptp_stop(ndev);1460145914611460 /* Wait for DMA stopping */14621462- ravb_stop_dma(ndev);14611461+ if (ravb_stop_dma(ndev)) {14621462+ /* If ravb_stop_dma() fails, the hardware is still operating14631463+ * for TX and/or RX. So, this should not call the following14641464+ * functions because ravb_dmac_init() is possible to fail too.14651465+ * Also, this should not retry ravb_stop_dma() again and again14661466+ * here because it's possible to wait forever. So, this just14671467+ * re-enables the TX and RX and skip the following14681468+ * re-initialization procedure.14691469+ */14701470+ ravb_rcv_snd_enable(ndev);14711471+ goto out;14721472+ }1463147314641474 ravb_ring_free(ndev, RAVB_BE);14651475 ravb_ring_free(ndev, RAVB_NC);1466147614671477 /* Device init */14681468- ravb_dmac_init(ndev);14781478+ error = ravb_dmac_init(ndev);14791479+ if (error) {14801480+ /* If ravb_dmac_init() fails, descriptors are freed. So, this14811481+ * should return here to avoid re-enabling the TX and RX in14821482+ * ravb_emac_init().14831483+ */14841484+ netdev_err(ndev, "%s: ravb_dmac_init() failed, error %d\n",14851485+ __func__, error);14861486+ return;14871487+ }14691488 ravb_emac_init(ndev);1470148914901490+out:14711491 /* Initialise PTP Clock driver */14721492 if (priv->chip_id == RCAR_GEN2)14731493 ravb_ptp_init(ndev, priv->pdev);
+2-2
drivers/net/ethernet/smsc/smc91x.c
···22742274 ret = try_toggle_control_gpio(&pdev->dev, &lp->power_gpio,22752275 "power", 0, 0, 100);22762276 if (ret)22772277- return ret;22772277+ goto out_free_netdev;2278227822792279 /*22802280 * Optional reset GPIO configured? Minimum 100 ns reset needed···22832283 ret = try_toggle_control_gpio(&pdev->dev, &lp->reset_gpio,22842284 "reset", 0, 0, 100);22852285 if (ret)22862286- return ret;22862286+ goto out_free_netdev;2287228722882288 /*22892289 * Need to wait for optional EEPROM to load, max 750 us according
+1-1
drivers/net/ethernet/socionext/sni_ave.c
···11911191 ret = regmap_update_bits(priv->regmap, SG_ETPINMODE,11921192 priv->pinmode_mask, priv->pinmode_val);11931193 if (ret)11941194- return ret;11941194+ goto out_reset_assert;1195119511961196 ave_global_reset(ndev);11971197
···44 *55 * Copyright 2009-2017 Analog Devices Inc.66 *77- * http://www.analog.com/ADF724277+ * https://www.analog.com/ADF724288 */991010#include <linux/kernel.h>···12621262 WQ_MEM_RECLAIM);12631263 if (unlikely(!lp->wqueue)) {12641264 ret = -ENOMEM;12651265- goto err_hw_init;12651265+ goto err_alloc_wq;12661266 }1267126712681268 ret = adf7242_hw_init(lp);···12941294 return ret;1295129512961296err_hw_init:12971297+ destroy_workqueue(lp->wqueue);12981298+err_alloc_wq:12971299 mutex_destroy(&lp->bmux);12981300 ieee802154_free_hw(lp->hw);12991301
+7-9
drivers/net/ipa/gsi.c
···500500 int ret;501501502502 state = gsi_channel_state(channel);503503+504504+ /* Channel could have entered STOPPED state since last call505505+ * if it timed out. If so, we're done.506506+ */507507+ if (state == GSI_CHANNEL_STATE_STOPPED)508508+ return 0;509509+503510 if (state != GSI_CHANNEL_STATE_STARTED &&504511 state != GSI_CHANNEL_STATE_STOP_IN_PROC)505512 return -EINVAL;···796789int gsi_channel_stop(struct gsi *gsi, u32 channel_id)797790{798791 struct gsi_channel *channel = &gsi->channel[channel_id];799799- enum gsi_channel_state state;800792 u32 retries;801793 int ret;802794803795 gsi_channel_freeze(channel);804804-805805- /* Channel could have entered STOPPED state since last call if the806806- * STOP command timed out. We won't stop a channel if stopping it807807- * was successful previously (so we still want the freeze above).808808- */809809- state = gsi_channel_state(channel);810810- if (state == GSI_CHANNEL_STATE_STOPPED)811811- return 0;812796813797 /* RX channels might require a little time to enter STOPPED state */814798 retries = channel->toward_ipa ? 0 : GSI_CHANNEL_STOP_RX_RETRIES;
···172172u32 ipa_cmd_tag_process_count(void);
173173
174174/**
175175+ * ipa_cmd_tag_process() - Perform a tag process
176176+ *
177177+ * @Return: The number of elements to allocate in a transaction
178178+ * to hold tag process commands
179179+ */
180180+void ipa_cmd_tag_process(struct ipa *ipa);
181181+
182182+/**
175183 * ipa_cmd_trans_alloc() - Allocate a transaction for the command TX endpoint
176184 * @ipa: IPA pointer
177185 * @tre_count: Number of elements in the transaction
···12871287
12881288 /* Init all registers */
12891289 ret = smsc95xx_reset(dev);
12901290+ if (ret)
12911291+ goto free_pdata;
12901292
12911293 /* detect device revision as different features may be available */
12921294 ret = smsc95xx_read_reg(dev, ID_REV, &val);
12931295 if (ret < 0)
12941294- return ret;
12961296+ goto free_pdata;
12971297+
12951298 val >>= 16;
12961299 pdata->chip_id = val;
12971300 pdata->mdix_ctrl = get_mdix_status(dev->net);
···13201317 schedule_delayed_work(&pdata->carrier_check, CARRIER_CHECK_DELAY);
13211318
13221319 return 0;
13201320+
13211321+free_pdata:
13221322+ kfree(pdata);
13231323+ return ret;
13231324}
13241325
13251326static void smsc95xx_unbind(struct usbnet *dev, struct usb_interface *intf)
+3-1
drivers/net/wan/hdlc_x25.c
···7171{
7272 unsigned char *ptr;
7373
7474- if (skb_cow(skb, 1))
7474+ if (skb_cow(skb, 1)) {
7575+ kfree_skb(skb);
7576 return NET_RX_DROP;
7777+ }
7678
7779 skb_push(skb, 1);
7880 skb_reset_network_header(skb);
+13-4
drivers/net/wan/lapbether.c
···128128{
129129 unsigned char *ptr;
130130
131131- skb_push(skb, 1);
132132-
133133- if (skb_cow(skb, 1))
131131+ if (skb_cow(skb, 1)) {
132132+ kfree_skb(skb);
134133 return NET_RX_DROP;
134134+ }
135135+
136136+ skb_push(skb, 1);
135137
136138 ptr = skb->data;
137139 *ptr = X25_IFACE_DATA;
···305303 dev->netdev_ops = &lapbeth_netdev_ops;
306304 dev->needs_free_netdev = true;
307305 dev->type = ARPHRD_X25;
308308- dev->hard_header_len = 3;
309306 dev->mtu = 1000;
310307 dev->addr_len = 0;
311308}
···324323 lapbeth_setup);
325324 if (!ndev)
326325 goto out;
326326+
327327+ /* When transmitting data:
328328+ * first this driver removes a pseudo header of 1 byte,
329329+ * then the lapb module prepends an LAPB header of at most 3 bytes,
330330+ * then this driver prepends a length field of 2 bytes,
331331+ * then the underlying Ethernet device prepends its own header.
332332+ */
333333+ ndev->hard_header_len = -1 + 3 + 2 + dev->hard_header_len;
327334
328335 lapbeth = netdev_priv(ndev);
329336 lapbeth->axdev = ndev;
+14-7
drivers/net/wan/x25_asy.c
···183183 netif_wake_queue(sl->dev);
184184}
185185
186186-/* Send one completely decapsulated IP datagram to the IP layer. */
186186+/* Send an LAPB frame to the LAPB module to process. */
187187
188188static void x25_asy_bump(struct x25_asy *sl)
189189{
···195195 count = sl->rcount;
196196 dev->stats.rx_bytes += count;
197197
198198- skb = dev_alloc_skb(count+1);
198198+ skb = dev_alloc_skb(count);
199199 if (skb == NULL) {
200200 netdev_warn(sl->dev, "memory squeeze, dropping packet\n");
201201 dev->stats.rx_dropped++;
202202 return;
203203 }
204204- skb_push(skb, 1); /* LAPB internal control */
205204 skb_put_data(skb, sl->rbuff, count);
206205 skb->protocol = x25_type_trans(skb, sl->dev);
207206 err = lapb_data_received(skb->dev, skb);
···208209 kfree_skb(skb);
209210 printk(KERN_DEBUG "x25_asy: data received err - %d\n", err);
210211 } else {
211211- netif_rx(skb);
212212 dev->stats.rx_packets++;
213213 }
214214}
···354356 */
355357
356358/*
357357- * Called when I frame data arrives. We did the work above - throw it
358358- * at the net layer.
359359+ * Called when I frame data arrive. We add a pseudo header for upper
360360+ * layers and pass it to upper layers.
359361 */
360362
361363static int x25_asy_data_indication(struct net_device *dev, struct sk_buff *skb)
362364{
365365+ if (skb_cow(skb, 1)) {
366366+ kfree_skb(skb);
367367+ return NET_RX_DROP;
368368+ }
369369+ skb_push(skb, 1);
370370+ skb->data[0] = X25_IFACE_DATA;
371371+
372372+ skb->protocol = x25_type_trans(skb, dev);
373373+
363374 return netif_rx(skb);
364375}
365376
···664657 switch (s) {
665658 case X25_END:
666659 if (!test_and_clear_bit(SLF_ERROR, &sl->flags) &&
667667- sl->rcount > 2)
660660- sl->rcount >= 2)
668661 x25_asy_bump(sl);
669662 clear_bit(SLF_ESCAPE, &sl->flags);
670663 sl->rcount = 0;
···7272{
7373 int ret;
7474
7575- ret = mt76_eeprom_init(&dev->mt76, MT7615_EEPROM_SIZE +
7676- MT7615_EEPROM_EXTRA_DATA);
7575+ ret = mt76_eeprom_init(&dev->mt76, MT7615_EEPROM_FULL_SIZE);
7776 if (ret < 0)
7877 return ret;
7978
···11161116 dev_warn(ctrl->device,
11171117 "Identify Descriptors failed (%d)\n", status);
11181118 /*
11191119- * Don't treat an error as fatal, as we potentially already
11201120- * have a NGUID or EUI-64.
11191119+ * Don't treat non-retryable errors as fatal, as we potentially
11201120+ * already have a NGUID or EUI-64. If we failed with DNR set,
11211121+ * we want to silently ignore the error as we can still
11221122+ * identify the device, but if the status has DNR set, we want
11231123+ * to propagate the error back specifically for the disk
11241124+ * revalidation flow to make sure we don't abandon the
11251125+ * device just because of a temporal retry-able error (such
11261126+ * as path of transport errors).
11211127 */
11221122- if (status > 0 && !(status & NVME_SC_DNR))
11281128+ if (status > 0 && (status & NVME_SC_DNR))
11231129 status = 0;
11241130 goto free_data;
11251131 }
···19801974 if (ns->head->disk) {
19811975 nvme_update_disk_info(ns->head->disk, ns, id);
19821976 blk_queue_stack_limits(ns->head->disk->queue, ns->queue);
19771977+ nvme_mpath_update_disk_size(ns->head->disk);
19831978 }
19841979#endif
19851980 return 0;
···46384638 * pcie_wait_for_link_delay - Wait until link is active or inactive
46394639 * @pdev: Bridge device
46404640 * @active: waiting for active or inactive?
46414641- * @delay: Delay to wait after link has become active (in ms). Specify %0
46424642- * for no delay.
46414641+ * @delay: Delay to wait after link has become active (in ms)
46434642 *
46444643 * Use this to wait till link becomes active or inactive.
46454644 */
···46794680 msleep(10);
46804681 timeout -= 10;
46814682 }
46824682- if (active && ret && delay)
46834683+ if (active && ret)
46834684 msleep(delay);
46844685 else if (ret != active)
46854686 pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
···48004801 if (!pcie_downstream_port(dev))
48014802 return;
48024803
48034803- /*
48044804- * Per PCIe r5.0, sec 6.6.1, for downstream ports that support
48054805- * speeds > 5 GT/s, we must wait for link training to complete
48064806- * before the mandatory delay.
48074807- *
48084808- * We can only tell when link training completes via DLL Link
48094809- * Active, which is required for downstream ports that support
48104810- * speeds > 5 GT/s (sec 7.5.3.6). Unfortunately some common
48114811- * devices do not implement Link Active reporting even when it's
48124812- * required, so we'll check for that directly instead of checking
48134813- * the supported link speed. We assume devices without Link Active
48144814- * reporting can train in 100 ms regardless of speed.
48154815- */
48164816- if (dev->link_active_reporting) {
48174817- pci_dbg(dev, "waiting for link to train\n");
48184818- if (!pcie_wait_for_link_delay(dev, true, 0)) {
48044804+ if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
48054805+ pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
48064806+ msleep(delay);
48074807+ } else {
48084808+ pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
48094809+ delay);
48104810+ if (!pcie_wait_for_link_delay(dev, true, delay)) {
48194811 /* Did not train, no need to wait any further */
48204812 return;
48214813 }
48224814 }
48234823- pci_dbg(child, "waiting %d ms to become accessible\n", delay);
48244824- msleep(delay);
48254815
48264816 if (!pci_device_is_present(child)) {
48274817 pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
···800800 pm_runtime_put(vg->dev);
801801}
802802
803803+static void byt_gpio_direct_irq_check(struct intel_pinctrl *vg,
804804+ unsigned int offset)
805805+{
806806+ void __iomem *conf_reg = byt_gpio_reg(vg, offset, BYT_CONF0_REG);
807807+
808808+ /*
809809+ * Before making any direction modifications, do a check if gpio is set
810810+ * for direct IRQ. On Bay Trail, setting GPIO to output does not make
811811+ * sense, so let's at least inform the caller before they shoot
812812+ * themselves in the foot.
813813+ */
814814+ if (readl(conf_reg) & BYT_DIRECT_IRQ_EN)
815815+ dev_info_once(vg->dev, "Potential Error: Setting GPIO with direct_irq_en to output");
816816+}
817817+
803818static int byt_gpio_set_direction(struct pinctrl_dev *pctl_dev,
804819 struct pinctrl_gpio_range *range,
805820 unsigned int offset,
···822807{
823808 struct intel_pinctrl *vg = pinctrl_dev_get_drvdata(pctl_dev);
824809 void __iomem *val_reg = byt_gpio_reg(vg, offset, BYT_VAL_REG);
825825- void __iomem *conf_reg = byt_gpio_reg(vg, offset, BYT_CONF0_REG);
826810 unsigned long flags;
827811 u32 value;
···831817 value &= ~BYT_DIR_MASK;
832818 if (input)
833819 value |= BYT_OUTPUT_EN;
834834- else if (readl(conf_reg) & BYT_DIRECT_IRQ_EN)
835835- /*
836836- * Before making any direction modifications, do a check if gpio
837837- * is set for direct IRQ. On baytrail, setting GPIO to output
838838- * does not make sense, so let's at least inform the caller before
839839- * they shoot themselves in the foot.
840840- */
841841- dev_info_once(vg->dev, "Potential Error: Setting GPIO with direct_irq_en to output");
820820+ else
821821+ byt_gpio_direct_irq_check(vg, offset);
842822
843823 writel(value, val_reg);
844824
···11731165
11741166static int byt_gpio_direction_input(struct gpio_chip *chip, unsigned int offset)
11751167{
11761176- return pinctrl_gpio_direction_input(chip->base + offset);
11681168+ struct intel_pinctrl *vg = gpiochip_get_data(chip);
11691169+ void __iomem *val_reg = byt_gpio_reg(vg, offset, BYT_VAL_REG);
11701170+ unsigned long flags;
11711171+ u32 reg;
11721172+
11731173+ raw_spin_lock_irqsave(&byt_lock, flags);
11741174+
11751175+ reg = readl(val_reg);
11761176+ reg &= ~BYT_DIR_MASK;
11771177+ reg |= BYT_OUTPUT_EN;
11781178+ writel(reg, val_reg);
11791179+
11801180+ raw_spin_unlock_irqrestore(&byt_lock, flags);
11811181+ return 0;
11771182}
11781183
11841184+/*
11851185+ * Note despite the temptation this MUST NOT be converted into a call to
11861186+ * pinctrl_gpio_direction_output() + byt_gpio_set() that does not work this
11871187+ * MUST be done as a single BYT_VAL_REG register write.
11881188+ * See the commit message of the commit adding this comment for details.
11891189+ */
11791190static int byt_gpio_direction_output(struct gpio_chip *chip,
11801191 unsigned int offset, int value)
11811192{
11821182- int ret = pinctrl_gpio_direction_output(chip->base + offset);
11931193+ struct intel_pinctrl *vg = gpiochip_get_data(chip);
11941194+ void __iomem *val_reg = byt_gpio_reg(vg, offset, BYT_VAL_REG);
11951195+ unsigned long flags;
11961196+ u32 reg;
11831197
11841184- if (ret)
11851185- return ret;
11981198+ raw_spin_lock_irqsave(&byt_lock, flags);
11861199
11871187- byt_gpio_set(chip, offset, value);
12001200+ byt_gpio_direct_irq_check(vg, offset);
11881201
12021202+ reg = readl(val_reg);
12031203+ reg &= ~BYT_DIR_MASK;
12041204+ if (value)
12051205+ reg |= BYT_LEVEL;
12061206+ else
12071207+ reg &= ~BYT_LEVEL;
12081208+
12091209+ writel(reg, val_reg);
12101210+
12111211+ raw_spin_unlock_irqrestore(&byt_lock, flags);
11891212 return 0;
11901213}
11911214
···1313#define INTEL_RAPL_PRIO_DEVID_0 0x3451
1414#define INTEL_CFG_MBOX_DEVID_0 0x3459
1515
1616+#define INTEL_RAPL_PRIO_DEVID_1 0x3251
1717+#define INTEL_CFG_MBOX_DEVID_1 0x3259
1818+
1619/*
1720 * Validate maximum commands in a single request.
1821 * This is enough to handle command to every core in one ioctl, or all
···332332 case COMEDI_DIGITAL_TRIG_ENABLE_EDGES:
333333 /* check shift amount */
334334 shift = data[3];
335335- if (shift >= s->n_chan) {
335335+ if (shift >= 32) {
336336 mask = 0;
337337 rising = 0;
338338 falling = 0;
+1-1
drivers/staging/media/atomisp/Kconfig
···2222 module will be called atomisp
2323
2424config VIDEO_ATOMISP_ISP2401
2525- bool "VIDEO_ATOMISP_ISP2401"
2525+ bool "Use Intel Atom ISP on Cherrytail/Anniedale (ISP2401)"
2626 depends on VIDEO_ATOMISP
2727 help
2828 Enable support for Atom ISP2401-based boards.
···495495 ret = ov2680_read_reg(client, 1, OV2680_MIRROR_REG, &val);
496496 if (ret)
497497 return ret;
498498- if (value) {
498498+ if (value)
499499 val |= OV2680_FLIP_MIRROR_BIT_ENABLE;
500500- } else {
500500+ else
501501 val &= ~OV2680_FLIP_MIRROR_BIT_ENABLE;
502502- }
502502+
503503 ret = ov2680_write_reg(client, 1,
504504 OV2680_MIRROR_REG, val);
505505 if (ret)
···250250#define IS_MFLD __IS_SOC(INTEL_FAM6_ATOM_SALTWELL_MID)
251251#define IS_BYT __IS_SOC(INTEL_FAM6_ATOM_SILVERMONT)
252252#define IS_CHT __IS_SOC(INTEL_FAM6_ATOM_AIRMONT)
253253+#define IS_MRFD __IS_SOC(INTEL_FAM6_ATOM_SILVERMONT_MID)
253254#define IS_MOFD __IS_SOC(INTEL_FAM6_ATOM_AIRMONT_MID)
254255
255256/* Both CHT and MOFD come with ISP2401 */
···355355
356356 pgnr = DIV_ROUND_UP(map->length, PAGE_SIZE);
357357 if (pgnr < ((PAGE_ALIGN(map->length)) >> PAGE_SHIFT)) {
358358- dev_err(atomisp_dev,
358358+ dev_err(asd->isp->dev,
359359 "user space memory size is less than the expected size..\n");
360360 return -ENOMEM;
361361 } else if (pgnr > ((PAGE_ALIGN(map->length)) >> PAGE_SHIFT)) {
362362- dev_err(atomisp_dev,
362362+ dev_err(asd->isp->dev,
363363 "user space memory size is large than the expected size..\n");
364364 return -ENOMEM;
365365 }
···2626#define CLK_RATE_19_2MHZ 19200000
2727#define CLK_RATE_25_0MHZ 25000000
2828
2929+/* Valid clock number range from 0 to 5 */
3030+#define MAX_CLK_COUNT 5
3131+
2932/* X-Powers AXP288 register set */
3033#define ALDO1_SEL_REG 0x28
3134#define ALDO1_CTRL3_REG 0x13
···6461
6562struct gmin_subdev {
6663 struct v4l2_subdev *subdev;
6767- int clock_num;
6864 enum clock_rate clock_src;
6969- bool clock_on;
7065 struct clk *pmc_clk;
7166 struct gpio_desc *gpio0;
7267 struct gpio_desc *gpio1;
···7675 unsigned int csi_lanes;
7776 enum atomisp_input_format csi_fmt;
7877 enum atomisp_bayer_order csi_bayer;
7878+
7979+ bool clock_on;
7980 bool v1p8_on;
8081 bool v2p8_on;
8182 bool v1p2_on;
8283 bool v2p8_vcm_on;
8484+
8585+ int v1p8_gpio;
8686+ int v2p8_gpio;
8387
8488 u8 pwm_i2c_addr;
8589
···9690static struct gmin_subdev gmin_subdevs[MAX_SUBDEVS];
9791
9892/* ACPI HIDs for the PMICs that could be used by this driver */
9999-#define PMIC_ACPI_AXP "INT33F4:00" /* XPower AXP288 PMIC */
100100-#define PMIC_ACPI_TI "INT33F5:00" /* Dollar Cove TI PMIC */
101101-#define PMIC_ACPI_CRYSTALCOVE "INT33FD:00" /* Crystal Cove PMIC */
9393+#define PMIC_ACPI_AXP "INT33F4" /* XPower AXP288 PMIC */
9494+#define PMIC_ACPI_TI "INT33F5" /* Dollar Cove TI PMIC */
9595+#define PMIC_ACPI_CRYSTALCOVE "INT33FD" /* Crystal Cove PMIC */
10296
10397#define PMIC_PLATFORM_TI "intel_soc_pmic_chtdc_ti"
10498
···111105} pmic_id;
112106
113107static const char *pmic_name[] = {
114114- [PMIC_UNSET] = "unset",
108108+ [PMIC_UNSET] = "ACPI device PM",
115109 [PMIC_REGULATOR] = "regulator driver",
116110 [PMIC_AXP] = "XPower AXP288 PMIC",
117111 [PMIC_TI] = "Dollar Cove TI PMIC",
···124118static const struct atomisp_platform_data pdata = {
125119 .subdevs = pdata_subdevs,
126120};
127127-
128128-/*
129129- * Something of a hack. The ECS E7 board drives camera 2.8v from an
130130- * external regulator instead of the PMIC. There's a gmin_CamV2P8
131131- * config variable that specifies the GPIO to handle this particular
132132- * case, but this needs a broader architecture for handling camera
133133- * power.
134134- */
135135-enum { V2P8_GPIO_UNSET = -2, V2P8_GPIO_NONE = -1 };
136136-static int v2p8_gpio = V2P8_GPIO_UNSET;
137137-
138138-/*
139139- * Something of a hack. The CHT RVP board drives camera 1.8v from an
140140- * external regulator instead of the PMIC just like ECS E7 board, see the
141141- * comments above.
142142- */
143143-enum { V1P8_GPIO_UNSET = -2, V1P8_GPIO_NONE = -1 };
144144-static int v1p8_gpio = V1P8_GPIO_UNSET;
145121
146122static LIST_HEAD(vcm_devices);
147123static DEFINE_MUTEX(vcm_lock);
···187199 * gmin_subdev struct is already initialized for us.
188200 */
189201 gs = find_gmin_subdev(subdev);
202202+ if (!gs)
203203+ return -ENODEV;
190204
191205 pdata.subdevs[i].type = type;
192206 pdata.subdevs[i].port = gs->csi_port;
···284294 {"INT33F8:00_CsiFmt", "13"},
285295 {"INT33F8:00_CsiBayer", "0"},
286296 {"INT33F8:00_CamClk", "0"},
297297+
287298 {"INT33F9:00_CamType", "1"},
288299 {"INT33F9:00_CsiPort", "0"},
289300 {"INT33F9:00_CsiLanes", "1"},
···300309 {"INT33BE:00_CsiFmt", "13"},
301310 {"INT33BE:00_CsiBayer", "2"},
302311 {"INT33BE:00_CamClk", "0"},
312312+
303313 {"INT33F0:00_CsiPort", "0"},
304314 {"INT33F0:00_CsiLanes", "1"},
305315 {"INT33F0:00_CsiFmt", "13"},
···314322 {"XXOV2680:00_CsiPort", "1"},
315323 {"XXOV2680:00_CsiLanes", "1"},
316324 {"XXOV2680:00_CamClk", "0"},
325325+
317326 {"XXGC0310:00_CsiPort", "0"},
318327 {"XXGC0310:00_CsiLanes", "1"},
319328 {"XXGC0310:00_CamClk", "1"},
···374381#define GMIN_PMC_CLK_NAME 14 /* "pmc_plt_clk_[0..5]" */
375382static char gmin_pmc_clk_name[GMIN_PMC_CLK_NAME];
376383
377377-static int gmin_i2c_match_one(struct device *dev, const void *data)
378378-{
379379- const char *name = data;
380380- struct i2c_client *client;
381381-
382382- if (dev->type != &i2c_client_type)
383383- return 0;
384384-
385385- client = to_i2c_client(dev);
386386-
387387- return (!strcmp(name, client->name));
388388-}
389389-
390384static struct i2c_client *gmin_i2c_dev_exists(struct device *dev, char *name,
391385 struct i2c_client **client)
392386{
387387+ struct acpi_device *adev;
393388 struct device *d;
394389
395395- while ((d = bus_find_device(&i2c_bus_type, NULL, name,
396396- gmin_i2c_match_one))) {
397397- *client = to_i2c_client(d);
398398- dev_dbg(dev, "found '%s' at address 0x%02x, adapter %d\n",
399399- (*client)->name, (*client)->addr,
400400- (*client)->adapter->nr);
401401- return *client;
402402- }
390390+ adev = acpi_dev_get_first_match_dev(name, NULL, -1);
391391+ if (!adev)
392392+ return NULL;
403393
404404- return NULL;
394394+ d = bus_find_device_by_acpi_dev(&i2c_bus_type, adev);
395395+ acpi_dev_put(adev);
396396+ if (!d)
397397+ return NULL;
398398+
399399+ *client = i2c_verify_client(d);
400400+ put_device(d);
401401+
402402+ dev_dbg(dev, "found '%s' at address 0x%02x, adapter %d\n",
403403+ (*client)->name, (*client)->addr, (*client)->adapter->nr);
404404+ return *client;
405405}
406406
407407static int gmin_i2c_write(struct device *dev, u16 i2c_addr, u8 reg,
···413427 "I2C write, addr: 0x%02x, reg: 0x%02x, value: 0x%02x, mask: 0x%02x\n",
414428 i2c_addr, reg, value, mask);
415429
416416- ret = intel_soc_pmic_exec_mipi_pmic_seq_element(i2c_addr, reg,
417417- value, mask);
418418-
419419- if (ret == -EOPNOTSUPP) {
430430+ ret = intel_soc_pmic_exec_mipi_pmic_seq_element(i2c_addr, reg, value, mask);
431431+ if (ret == -EOPNOTSUPP)
420432 dev_err(dev,
421433 "ACPI didn't mapped the OpRegion needed to access I2C address 0x%02x.\n"
422422- "Need to compile the Kernel using CONFIG_*_PMIC_OPREGION settings\n",
434434+ "Need to compile the kernel using CONFIG_*_PMIC_OPREGION settings\n",
423435 i2c_addr);
424424- return ret;
425425- }
426436
427437 return ret;
428438}
429439
430430-static struct gmin_subdev *gmin_subdev_add(struct v4l2_subdev *subdev)
440440+static int atomisp_get_acpi_power(struct device *dev)
431441{
432432- struct i2c_client *power = NULL, *client = v4l2_get_subdevdata(subdev);
433433- struct acpi_device *adev;
434434- acpi_handle handle;
435435- struct device *dev;
436436- int i, ret;
442442+ char name[5];
443443+ struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
444444+ struct acpi_buffer b_name = { sizeof(name), name };
445445+ union acpi_object *package, *element;
446446+ acpi_handle handle = ACPI_HANDLE(dev);
447447+ acpi_handle rhandle;
448448+ acpi_status status;
449449+ int clock_num = -1;
450450+ int i;
437451
438438- if (!client)
439439- return NULL;
452452+ status = acpi_evaluate_object(handle, "_PR0", NULL, &buffer);
453453+ if (!ACPI_SUCCESS(status))
454454+ return -1;
440455
441441- dev = &client->dev;
456456+ package = buffer.pointer;
442457
443443- handle = ACPI_HANDLE(dev);
458458+ if (!buffer.length || !package
459459+ || package->type != ACPI_TYPE_PACKAGE
460460+ || !package->package.count)
461461+ goto fail;
444462
445445- // FIXME: may need to release resources allocated by acpi_bus_get_device()
446446- if (!handle || acpi_bus_get_device(handle, &adev)) {
447447- dev_err(dev, "Error could not get ACPI device\n");
448448- return NULL;
463463+ for (i = 0; i < package->package.count; i++) {
464464+ element = &package->package.elements[i];
465465+
466466+ if (element->type != ACPI_TYPE_LOCAL_REFERENCE)
467467+ continue;
468468+
469469+ rhandle = element->reference.handle;
470470+ if (!rhandle)
471471+ goto fail;
472472+
473473+ acpi_get_name(rhandle, ACPI_SINGLE_NAME, &b_name);
474474+
475475+ dev_dbg(dev, "Found PM resource '%s'\n", name);
476476+ if (strlen(name) == 4 && !strncmp(name, "CLK", 3)) {
477477+ if (name[3] >= '0' && name[3] <= '4')
478478+ clock_num = name[3] - '0';
479479+#if 0
480480+ /*
481481+ * We could abort here, but let's parse all resources,
482482+ * as this is helpful for debugging purposes
483483+ */
484484+ if (clock_num >= 0)
485485+ break;
486486+#endif
487487+ }
449488 }
450489
451451- dev_info(&client->dev, "%s: ACPI detected it on bus ID=%s, HID=%s\n",
452452- __func__, acpi_device_bid(adev), acpi_device_hid(adev));
490490+fail:
491491+ ACPI_FREE(buffer.pointer);
453492
454454- if (!pmic_id) {
455455- if (gmin_i2c_dev_exists(dev, PMIC_ACPI_TI, &power))
456456- pmic_id = PMIC_TI;
457457- else if (gmin_i2c_dev_exists(dev, PMIC_ACPI_AXP, &power))
458458- pmic_id = PMIC_AXP;
459459- else if (gmin_i2c_dev_exists(dev, PMIC_ACPI_CRYSTALCOVE, &power))
460460- pmic_id = PMIC_CRYSTALCOVE;
461461- else
462462- pmic_id = PMIC_REGULATOR;
463463- }
493493+ return clock_num;
494494+}
464495
465465- for (i = 0; i < MAX_SUBDEVS && gmin_subdevs[i].subdev; i++)
466466- ;
467467- if (i >= MAX_SUBDEVS)
468468- return NULL;
496496+static u8 gmin_get_pmic_id_and_addr(struct device *dev)
497497+{
498498+ struct i2c_client *power;
499499+ static u8 pmic_i2c_addr;
469500
470470- if (power) {
471471- gmin_subdevs[i].pwm_i2c_addr = power->addr;
472472- dev_info(dev,
473473- "gmin: power management provided via %s (i2c addr 0x%02x)\n",
474474- pmic_name[pmic_id], power->addr);
475475- } else {
476476- dev_info(dev, "gmin: power management provided via %s\n",
477477- pmic_name[pmic_id]);
478478- }
501501+ if (pmic_id)
502502+ return pmic_i2c_addr;
479503
480480- gmin_subdevs[i].subdev = subdev;
481481- gmin_subdevs[i].clock_num = gmin_get_var_int(dev, false, "CamClk", 0);
504504+ if (gmin_i2c_dev_exists(dev, PMIC_ACPI_TI, &power))
505505+ pmic_id = PMIC_TI;
506506+ else if (gmin_i2c_dev_exists(dev, PMIC_ACPI_AXP, &power))
507507+ pmic_id = PMIC_AXP;
508508+ else if (gmin_i2c_dev_exists(dev, PMIC_ACPI_CRYSTALCOVE, &power))
509509+ pmic_id = PMIC_CRYSTALCOVE;
510510+ else
511511+ pmic_id = PMIC_REGULATOR;
512512+
513513+ pmic_i2c_addr = power ? power->addr : 0;
514514+ return pmic_i2c_addr;
515515+}
516516+
517517+static int gmin_detect_pmic(struct v4l2_subdev *subdev)
518518+{
519519+ struct i2c_client *client = v4l2_get_subdevdata(subdev);
520520+ struct device *dev = &client->dev;
521521+ u8 pmic_i2c_addr;
522522+
523523+ pmic_i2c_addr = gmin_get_pmic_id_and_addr(dev);
524524+ dev_info(dev, "gmin: power management provided via %s (i2c addr 0x%02x)\n",
525525+ pmic_name[pmic_id], pmic_i2c_addr);
526526+ return pmic_i2c_addr;
527527+}
528528+
529529+static int gmin_subdev_add(struct gmin_subdev *gs)
530530+{
531531+ struct i2c_client *client = v4l2_get_subdevdata(gs->subdev);
532532+ struct device *dev = &client->dev;
533533+ struct acpi_device *adev = ACPI_COMPANION(dev);
534534+ int ret, clock_num = -1;
535535+
536536+ dev_info(dev, "%s: ACPI path is %pfw\n", __func__, dev_fwnode(dev));
537537+
482538 /*WA:CHT requires XTAL clock as PLL is not stable.*/
483483- gmin_subdevs[i].clock_src = gmin_get_var_int(dev, false, "ClkSrc",
484484- VLV2_CLK_PLL_19P2MHZ);
485485- gmin_subdevs[i].csi_port = gmin_get_var_int(dev, false, "CsiPort", 0);
486486- gmin_subdevs[i].csi_lanes = gmin_get_var_int(dev, false, "CsiLanes", 1);
539539+ gs->clock_src = gmin_get_var_int(dev, false, "ClkSrc",
540540+ VLV2_CLK_PLL_19P2MHZ);
487541
488488- /* get PMC clock with clock framework */
489489- snprintf(gmin_pmc_clk_name,
490490- sizeof(gmin_pmc_clk_name),
491491- "%s_%d", "pmc_plt_clk", gmin_subdevs[i].clock_num);
542542+ gs->csi_port = gmin_get_var_int(dev, false, "CsiPort", 0);
543543+ gs->csi_lanes = gmin_get_var_int(dev, false, "CsiLanes", 1);
492544
493493- gmin_subdevs[i].pmc_clk = devm_clk_get(dev, gmin_pmc_clk_name);
494494- if (IS_ERR(gmin_subdevs[i].pmc_clk)) {
495495- ret = PTR_ERR(gmin_subdevs[i].pmc_clk);
545545+ gs->gpio0 = gpiod_get_index(dev, NULL, 0, GPIOD_OUT_LOW);
546546+ if (IS_ERR(gs->gpio0))
547547+ gs->gpio0 = NULL;
548548+ else
549549+ dev_info(dev, "will handle gpio0 via ACPI\n");
496550
497497- dev_err(dev,
498498- "Failed to get clk from %s : %d\n",
499499- gmin_pmc_clk_name,
500500- ret);
551551+ gs->gpio1 = gpiod_get_index(dev, NULL, 1, GPIOD_OUT_LOW);
552552+ if (IS_ERR(gs->gpio1))
553553+ gs->gpio1 = NULL;
554554+ else
555555+ dev_info(dev, "will handle gpio1 via ACPI\n");
501556
502502- return NULL;
557557+ /*
558558+ * Those are used only when there is an external regulator apart
559559+ * from the PMIC that would be providing power supply, like on the
560560+ * two cases below:
561561+ *
562562+ * The ECS E7 board drives camera 2.8v from an external regulator
563563+ * instead of the PMIC. There's a gmin_CamV2P8 config variable
564564+ * that specifies the GPIO to handle this particular case,
565565+ * but this needs a broader architecture for handling camera power.
566566+ *
567567+ * The CHT RVP board drives camera 1.8v from an* external regulator
568568+ * instead of the PMIC just like ECS E7 board.
569569+ */
570570+
571571+ gs->v1p8_gpio = gmin_get_var_int(dev, true, "V1P8GPIO", -1);
572572+ gs->v2p8_gpio = gmin_get_var_int(dev, true, "V2P8GPIO", -1);
573573+
574574+ /*
575575+ * FIXME:
576576+ *
577577+ * The ACPI handling code checks for the _PR? tables in order to
578578+ * know what is required to switch the device from power state
579579+ * D0 (_PR0) up to D3COLD (_PR3).
580580+ *
581581+ * The adev->flags.power_manageable is set to true if the device
582582+ * has a _PR0 table, which can be checked by calling
583583+ * acpi_device_power_manageable(adev).
584584+ *
585585+ * However, this only says that the device can be set to power off
586586+ * mode.
587587+ *
588588+ * At least on the DSDT tables we've seen so far, there's no _PR3,
589589+ * nor _PS3 (which would have a somewhat similar effect).
590590+ * So, using ACPI for power management won't work, except if adding
591591+ * an ACPI override logic somewhere.
592592+ *
593593+ * So, at least for the existing devices we know, the check below
594594+ * will always be false.
595595+ */
596596+ if (acpi_device_can_wakeup(adev) &&
597597+ acpi_device_can_poweroff(adev)) {
598598+ dev_info(dev,
599599+ "gmin: power management provided via device PM\n");
600600+ return 0;
503601 }
602602+
603603+ /*
604604+ * The code below is here due to backward compatibility with devices
605605+ * whose ACPI BIOS may not contain everything that would be needed
606606+ * in order to set clocks and do power management.
607607+ */
608608+
609609+ /*
610610+ * According with :
611611+ * https://github.com/projectceladon/hardware-intel-kernelflinger/blob/master/doc/fastboot.md
612612+ *
613613+ * The "CamClk" EFI var is set via fastboot on some Android devices,
614614+ * and seems to contain the number of the clock used to feed the
615615+ * sensor.
616616+ *
617617+ * On systems with a proper ACPI table, this is given via the _PR0
618618+ * power resource table. The logic below should first check if there
619619+ * is a power resource already, falling back to the EFI vars detection
620620+ * otherwise.
621621+ */
622622+
623623+ /* Try first to use ACPI to get the clock resource */
624624+ if (acpi_device_power_manageable(adev))
625625+ clock_num = atomisp_get_acpi_power(dev);
626626+
627627+ /* Fall-back use EFI and/or DMI match */
628628+ if (clock_num < 0)
629629+ clock_num = gmin_get_var_int(dev, false, "CamClk", 0);
630630+
631631+ if (clock_num < 0 || clock_num > MAX_CLK_COUNT) {
632632+ dev_err(dev, "Invalid clock number\n");
633633+ return -EINVAL;
634634+ }
635635+
636636+ snprintf(gmin_pmc_clk_name, sizeof(gmin_pmc_clk_name),
637637+ "%s_%d", "pmc_plt_clk", clock_num);
638638+
639639+ gs->pmc_clk = devm_clk_get(dev, gmin_pmc_clk_name);
640640+ if (IS_ERR(gs->pmc_clk)) {
641641+ ret = PTR_ERR(gs->pmc_clk);
642642+ dev_err(dev, "Failed to get clk from %s: %d\n", gmin_pmc_clk_name, ret);
643643+ return ret;
644644+ }
645645+ dev_info(dev, "Will use CLK%d (%s)\n", clock_num, gmin_pmc_clk_name);
504646
505647 /*
506648 * The firmware might enable the clock at
···640526 * to disable a clock that has not been enabled,
641527 * we need to enable the clock first.
642528 */
643643- ret = clk_prepare_enable(gmin_subdevs[i].pmc_clk);
529529+ ret = clk_prepare_enable(gs->pmc_clk);
644530 if (!ret)
645645- clk_disable_unprepare(gmin_subdevs[i].pmc_clk);
646646-
647647- gmin_subdevs[i].gpio0 = gpiod_get_index(dev, NULL, 0, GPIOD_OUT_LOW);
648648- if (IS_ERR(gmin_subdevs[i].gpio0))
649649- gmin_subdevs[i].gpio0 = NULL;
650650-
651651- gmin_subdevs[i].gpio1 = gpiod_get_index(dev, NULL, 1, GPIOD_OUT_LOW);
652652- if (IS_ERR(gmin_subdevs[i].gpio1))
653653- gmin_subdevs[i].gpio1 = NULL;
531531+ clk_disable_unprepare(gs->pmc_clk);
654532
655533 switch (pmic_id) {
656534 case PMIC_REGULATOR:
657657- gmin_subdevs[i].v1p8_reg = regulator_get(dev, "V1P8SX");
658658- gmin_subdevs[i].v2p8_reg = regulator_get(dev, "V2P8SX");
535535+ gs->v1p8_reg = regulator_get(dev, "V1P8SX");
536536+ gs->v2p8_reg = regulator_get(dev, "V2P8SX");
659537
660660- gmin_subdevs[i].v1p2_reg = regulator_get(dev, "V1P2A");
661661- gmin_subdevs[i].v2p8_vcm_reg = regulator_get(dev, "VPROG4B");
538538+ gs->v1p2_reg = regulator_get(dev, "V1P2A");
539539+ gs->v2p8_vcm_reg = regulator_get(dev, "VPROG4B");
662540
663541 /* Note: ideally we would initialize v[12]p8_on to the
664542 * output of regulator_is_enabled(), but sadly that
···662556 break;
663557
664558 case PMIC_AXP:
665665- gmin_subdevs[i].eldo1_1p8v = gmin_get_var_int(dev, false,
666666- "eldo1_1p8v",
667667- ELDO1_1P8V);
668668- gmin_subdevs[i].eldo1_sel_reg = gmin_get_var_int(dev, false,
669669- "eldo1_sel_reg",
670670- ELDO1_SEL_REG);
671671- gmin_subdevs[i].eldo1_ctrl_shift = gmin_get_var_int(dev, false,
672672- "eldo1_ctrl_shift",
673673- ELDO1_CTRL_SHIFT);
674674- gmin_subdevs[i].eldo2_1p8v = gmin_get_var_int(dev, false,
675675- "eldo2_1p8v",
676676- ELDO2_1P8V);
677677- gmin_subdevs[i].eldo2_sel_reg = gmin_get_var_int(dev, false,
678678- "eldo2_sel_reg",
679679- ELDO2_SEL_REG);
680680- gmin_subdevs[i].eldo2_ctrl_shift = gmin_get_var_int(dev, false,
681681- "eldo2_ctrl_shift",
682682- ELDO2_CTRL_SHIFT);
683683- gmin_subdevs[i].pwm_i2c_addr = power->addr;
559559+ gs->eldo1_1p8v = gmin_get_var_int(dev, false,
560560+ "eldo1_1p8v",
561561+ ELDO1_1P8V);
562562+ gs->eldo1_sel_reg = gmin_get_var_int(dev, false,
563563+ "eldo1_sel_reg",
564564+ ELDO1_SEL_REG);
565565+ gs->eldo1_ctrl_shift = gmin_get_var_int(dev, false,
566566+ "eldo1_ctrl_shift",
567567+ ELDO1_CTRL_SHIFT);
568568+ gs->eldo2_1p8v = gmin_get_var_int(dev, false,
569569+ "eldo2_1p8v",
570570+ ELDO2_1P8V);
571571+ gs->eldo2_sel_reg = gmin_get_var_int(dev, false,
572572+ "eldo2_sel_reg",
573573+ ELDO2_SEL_REG);
574574+ gs->eldo2_ctrl_shift = gmin_get_var_int(dev, false,
575575+ "eldo2_ctrl_shift",
576576+ ELDO2_CTRL_SHIFT);
684577 break;
685578
686579 default:
687580 break;
688581 }
689582
690690- return &gmin_subdevs[i];
583583+ return 0;
691584}
692585
693586static struct gmin_subdev *find_gmin_subdev(struct v4l2_subdev *subdev)
···696591 for (i = 0; i < MAX_SUBDEVS; i++)
697592 if (gmin_subdevs[i].subdev == subdev)
698593 return &gmin_subdevs[i];
699699- return gmin_subdev_add(subdev);
594594+ return NULL;
595595+}
596596+
597597+static struct gmin_subdev *find_free_gmin_subdev_slot(void)
598598+{
599599+ unsigned int i;
600600+
601601+ for (i = 0; i < MAX_SUBDEVS; i++)
602602+ if (gmin_subdevs[i].subdev == NULL)
603603+ return &gmin_subdevs[i];
604604+ return NULL;
700605}
701606
702607static int axp_regulator_set(struct device *dev, struct gmin_subdev *gs,
···815700{
816701 struct gmin_subdev *gs = find_gmin_subdev(subdev);
817702 int ret;
818818- struct device *dev;
819819- struct i2c_client *client = v4l2_get_subdevdata(subdev);
820703 int value;
821704
822822- dev = &client->dev;
823823-
824824- if (v1p8_gpio == V1P8_GPIO_UNSET) {
825825- v1p8_gpio = gmin_get_var_int(dev, true,
826826- "V1P8GPIO", V1P8_GPIO_NONE);
827827- if (v1p8_gpio != V1P8_GPIO_NONE) {
828828- pr_info("atomisp_gmin_platform: 1.8v power on GPIO %d\n",
829829- v1p8_gpio);
830830- ret = gpio_request(v1p8_gpio, "camera_v1p8_en");
831831- if (!ret)
832832- ret = gpio_direction_output(v1p8_gpio, 0);
833833- if (ret)
834834- pr_err("V1P8 GPIO initialization failed\n");
835835- }
705705+ if (gs->v1p8_gpio >= 0) {
706706+ pr_info("atomisp_gmin_platform: 1.8v power on GPIO %d\n",
707707+ gs->v1p8_gpio);
708708+ ret = gpio_request(gs->v1p8_gpio, "camera_v1p8_en");
709709+ if (!ret)
710710+ ret = gpio_direction_output(gs->v1p8_gpio, 0);
711711+ if (ret)
712712+ pr_err("V1P8 GPIO initialization failed\n");
836713 }
837714
838715 if (!gs || gs->v1p8_on == on)
839716 return 0;
840717 gs->v1p8_on = on;
841718
842842- if (v1p8_gpio >= 0)
843843- gpio_set_value(v1p8_gpio, on);
719719+ if (gs->v1p8_gpio >= 0)
720720+ gpio_set_value(gs->v1p8_gpio, on);
844721
845722 if (gs->v1p8_reg) {
846723 regulator_set_voltage(gs->v1p8_reg, 1800000, 1800000);
···869762{
870763 struct gmin_subdev *gs = find_gmin_subdev(subdev);
871764 int ret;
872872- struct device *dev;
873873- struct i2c_client *client = v4l2_get_subdevdata(subdev);
874765 int value;
875766
876876- dev = &client->dev;
877877-
878878- if (v2p8_gpio == V2P8_GPIO_UNSET) {
879879- v2p8_gpio = gmin_get_var_int(dev, true,
880880- "V2P8GPIO", V2P8_GPIO_NONE);
881881- if (v2p8_gpio != V2P8_GPIO_NONE) {
882882- pr_info("atomisp_gmin_platform: 2.8v power on GPIO %d\n",
883883- v2p8_gpio);
884884- ret = gpio_request(v2p8_gpio, "camera_v2p8");
885885- if (!ret)
886886- ret = gpio_direction_output(v2p8_gpio, 0);
887887- if (ret)
888888- pr_err("V2P8 GPIO initialization failed\n");
889889- }
767767+ if (gs->v2p8_gpio >= 0) {
768768+ pr_info("atomisp_gmin_platform: 2.8v power on GPIO %d\n",
769769+ gs->v2p8_gpio);
770770+ ret = gpio_request(gs->v2p8_gpio, "camera_v2p8");
771771+ if (!ret)
772772+ ret = gpio_direction_output(gs->v2p8_gpio, 0);
773773+ if (ret)
774774+ pr_err("V2P8 GPIO initialization failed\n");
890775 }
891776
892777 if (!gs || gs->v2p8_on == on)
893778 return 0;
894779 gs->v2p8_on = on;
895780
896896- if (v2p8_gpio >= 0)
897897- gpio_set_value(v2p8_gpio, on);
781781+ if (gs->v2p8_gpio >= 0)
782782+ gpio_set_value(gs->v2p8_gpio, on);
898783
899784 if (gs->v2p8_reg) {
900785 regulator_set_voltage(gs->v2p8_reg, 2900000, 2900000);
···916817 }
917818
918819 return -EINVAL;
820820+}
821821+
822822+static int gmin_acpi_pm_ctrl(struct v4l2_subdev *subdev, int on)
823823+{
824824+ int ret = 0;
825825+ struct gmin_subdev *gs = find_gmin_subdev(subdev);
826826+ struct i2c_client *client = v4l2_get_subdevdata(subdev);
827827+ struct acpi_device *adev = ACPI_COMPANION(&client->dev);
828828+
829829+ /* Use the ACPI power management to control it */
830830+ on = !!on;
831831+ if (gs->clock_on == on)
832832+ return 0;
833833+
834834+ dev_dbg(subdev->dev, "Setting power state to %s\n",
835835+ on ? "on" : "off");
836836+
837837+ if (on)
838838+ ret = acpi_device_set_power(adev,
839839+ ACPI_STATE_D0);
840840+ else
841841+ ret = acpi_device_set_power(adev,
842842+ ACPI_STATE_D3_COLD);
843843+
844844+ if (!ret)
845845+ gs->clock_on = on;
846846+ else
847847+ dev_err(subdev->dev, "Couldn't set power state to %s\n",
848848+ on ? "on" : "off");
849849+
850850+ return ret;
919851}
920852
921853static int gmin_flisclk_ctrl(struct v4l2_subdev *subdev, int on)
···1014884 return NULL;
1015885}
1016886
10171017-static struct camera_sensor_platform_data gmin_plat = {
887887+static struct camera_sensor_platform_data pmic_gmin_plat = {
1018888 .gpio0_ctrl = gmin_gpio0_ctrl,
1019889 .gpio1_ctrl = gmin_gpio1_ctrl,
1020890 .v1p8_ctrl = gmin_v1p8_ctrl,
···1025895 .get_vcm_ctrl = gmin_get_vcm_ctrl,
1026896};
1027897
898898+static struct camera_sensor_platform_data acpi_gmin_plat = {
899899+ .gpio0_ctrl = gmin_gpio0_ctrl,
900900+ .gpio1_ctrl = gmin_gpio1_ctrl,
901901+ .v1p8_ctrl = gmin_acpi_pm_ctrl,
902902+ .v2p8_ctrl = gmin_acpi_pm_ctrl,
903903+ .v1p2_ctrl = gmin_acpi_pm_ctrl,
904904+ .flisclk_ctrl = gmin_acpi_pm_ctrl,
905905+ .csi_cfg = gmin_csi_cfg,
906906+ .get_vcm_ctrl = gmin_get_vcm_ctrl,
907907+};
908908+
1028909struct camera_sensor_platform_data *gmin_camera_platform_data(
1029910 struct v4l2_subdev *subdev,
1030911 enum atomisp_input_format csi_format,
1031912 enum atomisp_bayer_order csi_bayer)
1032913{
10331033- struct gmin_subdev *gs = find_gmin_subdev(subdev);
914914+ u8 pmic_i2c_addr = gmin_detect_pmic(subdev);
915915+ struct gmin_subdev *gs;
1034916
917917+ gs = find_free_gmin_subdev_slot();
918918+ gs->subdev = subdev;
1035919 gs->csi_fmt = csi_format;
1036920 gs->csi_bayer = csi_bayer;
921921+ gs->pwm_i2c_addr = pmic_i2c_addr;
1037922
10381038- return &gmin_plat;
923923+ gmin_subdev_add(gs);
924924+ if (gs->pmc_clk)
925925+ return &pmic_gmin_plat;
926926+ else
927927+ return &acpi_gmin_plat;
1039928}
1040929EXPORT_SYMBOL_GPL(gmin_camera_platform_data);
1041930
···1106957 union acpi_object *obj, *cur = NULL;
1107958 int
i;1108959960960+ /*961961+ * The data reported by "CamClk" seems to be either 0 or 1 at the962962+ * _DSM table.963963+ *964964+ * At the ACPI tables we looked so far, this is not related to the965965+ * actual clock source for the sensor, which is given by the966966+ * _PR0 ACPI table. So, ignore it, as otherwise this will be967967+ * set to a wrong value.968968+ */969969+ if (!strcmp(var, "CamClk"))970970+ return -EINVAL;971971+1109972 obj = acpi_evaluate_dsm(handle, &atomisp_dsm_guid, 0, 0, NULL);1110973 if (!obj) {1111974 dev_info_once(dev, "Didn't find ACPI _DSM table.\n");1112975 return -EINVAL;1113976 }977977+978978+ /* Return on unexpected object type */979979+ if (obj->type != ACPI_TYPE_PACKAGE)980980+ return -EINVAL;11149811115982#if 0 /* Just for debugging purposes */1116983 for (i = 0; i < obj->package.count; i++) {···13201155 * trying. The driver itself does direct calls to the PUNIT to manage13211156 * ISP power.13221157 */13231323-static void isp_pm_cap_fixup(struct pci_dev *dev)11581158+static void isp_pm_cap_fixup(struct pci_dev *pdev)13241159{13251325- dev_info(&dev->dev, "Disabling PCI power management on camera ISP\n");13261326- dev->pm_cap = 0;11601160+ dev_info(&pdev->dev, "Disabling PCI power management on camera ISP\n");11611161+ pdev->pm_cap = 0;13271162}13281163DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0f38, isp_pm_cap_fixup);13291164
···
  * more details.
  */
 
-#ifndef __SYSTEM_GLOBAL_H_INCLUDED__
-#define __SYSTEM_GLOBAL_H_INCLUDED__
-
-#include <hive_isp_css_defs.h>
-#include <type_support.h>
-
-/*
- * The longest allowed (uninteruptible) bus transfer, does not
- * take stalling into account
- */
-#define HIVE_ISP_MAX_BURST_LENGTH	1024
-
-/*
- * Maximum allowed burst length in words for the ISP DMA
- */
-#define ISP_DMA_MAX_BURST_LENGTH	128
-
-/*
- * Create a list of HAS and IS properties that defines the system
- *
- * The configuration assumes the following
- * - The system is hetereogeneous; Multiple cells and devices classes
- * - The cell and device instances are homogeneous, each device type
- *   belongs to the same class
- * - Device instances supporting a subset of the class capabilities are
- *   allowed
- *
- * We could manage different device classes through the enumerated
- * lists (C) or the use of classes (C++), but that is presently not
- * fully supported
- *
- * N.B. the 3 input formatters are of 2 different classess
- */
-
 #define USE_INPUT_SYSTEM_VERSION_2
-
-#define HAS_MMU_VERSION_2
-#define HAS_DMA_VERSION_2
-#define HAS_GDC_VERSION_2
-#define HAS_VAMEM_VERSION_2
-#define HAS_HMEM_VERSION_1
-#define HAS_BAMEM_VERSION_2
-#define HAS_IRQ_VERSION_2
-#define HAS_IRQ_MAP_VERSION_2
-#define HAS_INPUT_FORMATTER_VERSION_2
-/* 2401: HAS_INPUT_SYSTEM_VERSION_2401 */
-#define HAS_INPUT_SYSTEM_VERSION_2
-#define HAS_BUFFERED_SENSOR
-#define HAS_FIFO_MONITORS_VERSION_2
-/* #define HAS_GP_REGS_VERSION_2 */
-#define HAS_GP_DEVICE_VERSION_2
-#define HAS_GPIO_VERSION_1
-#define HAS_TIMED_CTRL_VERSION_1
-#define HAS_RX_VERSION_2
-
-#define DMA_DDR_TO_VAMEM_WORKAROUND
-#define DMA_DDR_TO_HMEM_WORKAROUND
-
-/*
- * Semi global. "HRT" is accessible from SP, but the HRT types do not fully apply
- */
-#define HRT_VADDRESS_WIDTH	32
-//#define HRT_ADDRESS_WIDTH	64		/* Surprise, this is a local property*/
-#define HRT_DATA_WIDTH		32
-
-#define SIZEOF_HRT_REG		(HRT_DATA_WIDTH >> 3)
-#define HIVE_ISP_CTRL_DATA_BYTES (HIVE_ISP_CTRL_DATA_WIDTH / 8)
-
-/* The main bus connecting all devices */
-#define HRT_BUS_WIDTH		HIVE_ISP_CTRL_DATA_WIDTH
-#define HRT_BUS_BYTES		HIVE_ISP_CTRL_DATA_BYTES
-
-/* per-frame parameter handling support */
-#define SH_CSS_ENABLE_PER_FRAME_PARAMS
-
-typedef u32 hrt_bus_align_t;
-
-/*
- * Enumerate the devices, device access through the API is by ID, through the DLI by address
- * The enumerator terminators are used to size the wiring arrays and as an exception value.
- */
-typedef enum {
-	DDR0_ID = 0,
-	N_DDR_ID
-} ddr_ID_t;
-
-typedef enum {
-	ISP0_ID = 0,
-	N_ISP_ID
-} isp_ID_t;
-
-typedef enum {
-	SP0_ID = 0,
-	N_SP_ID
-} sp_ID_t;
-
-typedef enum {
-	MMU0_ID = 0,
-	MMU1_ID,
-	N_MMU_ID
-} mmu_ID_t;
-
-typedef enum {
-	DMA0_ID = 0,
-	N_DMA_ID
-} dma_ID_t;
-
-typedef enum {
-	GDC0_ID = 0,
-	GDC1_ID,
-	N_GDC_ID
-} gdc_ID_t;
-
-#define N_GDC_ID_CPP 2 // this extra define is needed because we want to use it also in the preprocessor, and that doesn't work with enums.
-
-typedef enum {
-	VAMEM0_ID = 0,
-	VAMEM1_ID,
-	VAMEM2_ID,
-	N_VAMEM_ID
-} vamem_ID_t;
-
-typedef enum {
-	BAMEM0_ID = 0,
-	N_BAMEM_ID
-} bamem_ID_t;
-
-typedef enum {
-	HMEM0_ID = 0,
-	N_HMEM_ID
-} hmem_ID_t;
-
-/*
-typedef enum {
-	IRQ0_ID = 0,
-	N_IRQ_ID
-} irq_ID_t;
-*/
-
-typedef enum {
-	IRQ0_ID = 0,	// GP IRQ block
-	IRQ1_ID,	// Input formatter
-	IRQ2_ID,	// input system
-	IRQ3_ID,	// input selector
-	N_IRQ_ID
-} irq_ID_t;
-
-typedef enum {
-	FIFO_MONITOR0_ID = 0,
-	N_FIFO_MONITOR_ID
-} fifo_monitor_ID_t;
-
-/*
- * Deprecated: Since all gp_reg instances are different
- * and put in the address maps of other devices we cannot
- * enumerate them as that assumes the instrances are the
- * same.
- *
- * We define a single GP_DEVICE containing all gp_regs
- * w.r.t. a single base address
- *
-typedef enum {
-	GP_REGS0_ID = 0,
-	N_GP_REGS_ID
-} gp_regs_ID_t;
- */
-typedef enum {
-	GP_DEVICE0_ID = 0,
-	N_GP_DEVICE_ID
-} gp_device_ID_t;
-
-typedef enum {
-	GP_TIMER0_ID = 0,
-	GP_TIMER1_ID,
-	GP_TIMER2_ID,
-	GP_TIMER3_ID,
-	GP_TIMER4_ID,
-	GP_TIMER5_ID,
-	GP_TIMER6_ID,
-	GP_TIMER7_ID,
-	N_GP_TIMER_ID
-} gp_timer_ID_t;
-
-typedef enum {
-	GPIO0_ID = 0,
-	N_GPIO_ID
-} gpio_ID_t;
-
-typedef enum {
-	TIMED_CTRL0_ID = 0,
-	N_TIMED_CTRL_ID
-} timed_ctrl_ID_t;
-
-typedef enum {
-	INPUT_FORMATTER0_ID = 0,
-	INPUT_FORMATTER1_ID,
-	INPUT_FORMATTER2_ID,
-	INPUT_FORMATTER3_ID,
-	N_INPUT_FORMATTER_ID
-} input_formatter_ID_t;
-
-/* The IF RST is outside the IF */
-#define INPUT_FORMATTER0_SRST_OFFSET	0x0824
-#define INPUT_FORMATTER1_SRST_OFFSET	0x0624
-#define INPUT_FORMATTER2_SRST_OFFSET	0x0424
-#define INPUT_FORMATTER3_SRST_OFFSET	0x0224
-
-#define INPUT_FORMATTER0_SRST_MASK	0x0001
-#define INPUT_FORMATTER1_SRST_MASK	0x0002
-#define INPUT_FORMATTER2_SRST_MASK	0x0004
-#define INPUT_FORMATTER3_SRST_MASK	0x0008
-
-typedef enum {
-	INPUT_SYSTEM0_ID = 0,
-	N_INPUT_SYSTEM_ID
-} input_system_ID_t;
-
-typedef enum {
-	RX0_ID = 0,
-	N_RX_ID
-} rx_ID_t;
-
-enum mipi_port_id {
-	MIPI_PORT0_ID = 0,
-	MIPI_PORT1_ID,
-	MIPI_PORT2_ID,
-	N_MIPI_PORT_ID
-};
-
-#define N_RX_CHANNEL_ID	4
-
-/* Generic port enumeration with an internal port type ID */
-typedef enum {
-	CSI_PORT0_ID = 0,
-	CSI_PORT1_ID,
-	CSI_PORT2_ID,
-	TPG_PORT0_ID,
-	PRBS_PORT0_ID,
-	FIFO_PORT0_ID,
-	MEMORY_PORT0_ID,
-	N_INPUT_PORT_ID
-} input_port_ID_t;
-
-typedef enum {
-	CAPTURE_UNIT0_ID = 0,
-	CAPTURE_UNIT1_ID,
-	CAPTURE_UNIT2_ID,
-	ACQUISITION_UNIT0_ID,
-	DMA_UNIT0_ID,
-	CTRL_UNIT0_ID,
-	GPREGS_UNIT0_ID,
-	FIFO_UNIT0_ID,
-	IRQ_UNIT0_ID,
-	N_SUB_SYSTEM_ID
-} sub_system_ID_t;
-
-#define N_CAPTURE_UNIT_ID	3
-#define N_ACQUISITION_UNIT_ID	1
-#define N_CTRL_UNIT_ID		1
-
-enum ia_css_isp_memories {
-	IA_CSS_ISP_PMEM0 = 0,
-	IA_CSS_ISP_DMEM0,
-	IA_CSS_ISP_VMEM0,
-	IA_CSS_ISP_VAMEM0,
-	IA_CSS_ISP_VAMEM1,
-	IA_CSS_ISP_VAMEM2,
-	IA_CSS_ISP_HMEM0,
-	IA_CSS_SP_DMEM0,
-	IA_CSS_DDR,
-	N_IA_CSS_MEMORIES
-};
-
-#define IA_CSS_NUM_MEMORIES 9
-/* For driver compatibility */
-#define N_IA_CSS_ISP_MEMORIES	IA_CSS_NUM_MEMORIES
-#define IA_CSS_NUM_ISP_MEMORIES	IA_CSS_NUM_MEMORIES
-
-#if 0
-typedef enum {
-	dev_chn, /* device channels, external resource */
-	ext_mem, /* external memories */
-	int_mem, /* internal memories */
-	int_chn /* internal channels, user defined */
-} resource_type_t;
-
-/* if this enum is extended with other memory resources, pls also extend the function resource_to_memptr() */
-typedef enum {
-	vied_nci_dev_chn_dma_ext0,
-	int_mem_vmem0,
-	int_mem_dmem0
-} resource_id_t;
-
-/* enum listing the different memories within a program group.
-   This enum is used in the mem_ptr_t type */
-typedef enum {
-	buf_mem_invalid = 0,
-	buf_mem_vmem_prog0,
-	buf_mem_dmem_prog0
-} buf_mem_t;
-
-#endif
-#endif /* __SYSTEM_GLOBAL_H_INCLUDED__ */
···
  * more details.
  */
 
-#ifndef __SYSTEM_GLOBAL_H_INCLUDED__
-#define __SYSTEM_GLOBAL_H_INCLUDED__
-
-#include <hive_isp_css_defs.h>
-#include <type_support.h>
-
-/*
- * The longest allowed (uninteruptible) bus transfer, does not
- * take stalling into account
- */
-#define HIVE_ISP_MAX_BURST_LENGTH	1024
-
-/*
- * Maximum allowed burst length in words for the ISP DMA
- * This value is set to 2 to prevent the ISP DMA from blocking
- * the bus for too long; as the input system can only buffer
- * 2 lines on Moorefield and Cherrytrail, the input system buffers
- * may overflow if blocked for too long (BZ 2726).
- */
-#define ISP_DMA_MAX_BURST_LENGTH	2
-
-/*
- * Create a list of HAS and IS properties that defines the system
- *
- * The configuration assumes the following
- * - The system is hetereogeneous; Multiple cells and devices classes
- * - The cell and device instances are homogeneous, each device type
- *   belongs to the same class
- * - Device instances supporting a subset of the class capabilities are
- *   allowed
- *
- * We could manage different device classes through the enumerated
- * lists (C) or the use of classes (C++), but that is presently not
- * fully supported
- *
- * N.B. the 3 input formatters are of 2 different classess
- */
-
-#define USE_INPUT_SYSTEM_VERSION_2401
-
-#define HAS_MMU_VERSION_2
-#define HAS_DMA_VERSION_2
-#define HAS_GDC_VERSION_2
-#define HAS_VAMEM_VERSION_2
-#define HAS_HMEM_VERSION_1
-#define HAS_BAMEM_VERSION_2
-#define HAS_IRQ_VERSION_2
-#define HAS_IRQ_MAP_VERSION_2
-#define HAS_INPUT_FORMATTER_VERSION_2
-/* 2401: HAS_INPUT_SYSTEM_VERSION_3 */
-/* 2400: HAS_INPUT_SYSTEM_VERSION_2 */
-#define HAS_INPUT_SYSTEM_VERSION_2
-#define HAS_INPUT_SYSTEM_VERSION_2401
-#define HAS_BUFFERED_SENSOR
-#define HAS_FIFO_MONITORS_VERSION_2
-/* #define HAS_GP_REGS_VERSION_2 */
-#define HAS_GP_DEVICE_VERSION_2
-#define HAS_GPIO_VERSION_1
-#define HAS_TIMED_CTRL_VERSION_1
-#define HAS_RX_VERSION_2
 #define HAS_NO_INPUT_FORMATTER
-/*#define HAS_NO_PACKED_RAW_PIXELS*/
-/*#define HAS_NO_DVS_6AXIS_CONFIG_UPDATE*/
-
-#define DMA_DDR_TO_VAMEM_WORKAROUND
-#define DMA_DDR_TO_HMEM_WORKAROUND
-
-/*
- * Semi global. "HRT" is accessible from SP, but
- * the HRT types do not fully apply
- */
-#define HRT_VADDRESS_WIDTH	32
-/* Surprise, this is a local property*/
-/*#define HRT_ADDRESS_WIDTH	64 */
-#define HRT_DATA_WIDTH		32
-
-#define SIZEOF_HRT_REG		(HRT_DATA_WIDTH >> 3)
-#define HIVE_ISP_CTRL_DATA_BYTES (HIVE_ISP_CTRL_DATA_WIDTH / 8)
-
-/* The main bus connecting all devices */
-#define HRT_BUS_WIDTH		HIVE_ISP_CTRL_DATA_WIDTH
-#define HRT_BUS_BYTES		HIVE_ISP_CTRL_DATA_BYTES
-
+#define USE_INPUT_SYSTEM_VERSION_2401
+#define HAS_INPUT_SYSTEM_VERSION_2401
 #define CSI2P_DISABLE_ISYS2401_ONLINE_MODE
-
-/* per-frame parameter handling support */
-#define SH_CSS_ENABLE_PER_FRAME_PARAMS
-
-typedef u32 hrt_bus_align_t;
-
-/*
- * Enumerate the devices, device access through the API is by ID,
- * through the DLI by address. The enumerator terminators are used
- * to size the wiring arrays and as an exception value.
- */
-typedef enum {
-	DDR0_ID = 0,
-	N_DDR_ID
-} ddr_ID_t;
-
-typedef enum {
-	ISP0_ID = 0,
-	N_ISP_ID
-} isp_ID_t;
-
-typedef enum {
-	SP0_ID = 0,
-	N_SP_ID
-} sp_ID_t;
-
-typedef enum {
-	MMU0_ID = 0,
-	MMU1_ID,
-	N_MMU_ID
-} mmu_ID_t;
-
-typedef enum {
-	DMA0_ID = 0,
-	N_DMA_ID
-} dma_ID_t;
-
-typedef enum {
-	GDC0_ID = 0,
-	GDC1_ID,
-	N_GDC_ID
-} gdc_ID_t;
-
-/* this extra define is needed because we want to use it also
-   in the preprocessor, and that doesn't work with enums.
- */
-#define N_GDC_ID_CPP 2
-
-typedef enum {
-	VAMEM0_ID = 0,
-	VAMEM1_ID,
-	VAMEM2_ID,
-	N_VAMEM_ID
-} vamem_ID_t;
-
-typedef enum {
-	BAMEM0_ID = 0,
-	N_BAMEM_ID
-} bamem_ID_t;
-
-typedef enum {
-	HMEM0_ID = 0,
-	N_HMEM_ID
-} hmem_ID_t;
-
-typedef enum {
-	ISYS_IRQ0_ID = 0,	/* port a */
-	ISYS_IRQ1_ID,		/* port b */
-	ISYS_IRQ2_ID,		/* port c */
-	N_ISYS_IRQ_ID
-} isys_irq_ID_t;
-
-typedef enum {
-	IRQ0_ID = 0,	/* GP IRQ block */
-	IRQ1_ID,	/* Input formatter */
-	IRQ2_ID,	/* input system */
-	IRQ3_ID,	/* input selector */
-	N_IRQ_ID
-} irq_ID_t;
-
-typedef enum {
-	FIFO_MONITOR0_ID = 0,
-	N_FIFO_MONITOR_ID
-} fifo_monitor_ID_t;
-
-/*
- * Deprecated: Since all gp_reg instances are different
- * and put in the address maps of other devices we cannot
- * enumerate them as that assumes the instrances are the
- * same.
- *
- * We define a single GP_DEVICE containing all gp_regs
- * w.r.t. a single base address
- *
-typedef enum {
-	GP_REGS0_ID = 0,
-	N_GP_REGS_ID
-} gp_regs_ID_t;
- */
-typedef enum {
-	GP_DEVICE0_ID = 0,
-	N_GP_DEVICE_ID
-} gp_device_ID_t;
-
-typedef enum {
-	GP_TIMER0_ID = 0,
-	GP_TIMER1_ID,
-	GP_TIMER2_ID,
-	GP_TIMER3_ID,
-	GP_TIMER4_ID,
-	GP_TIMER5_ID,
-	GP_TIMER6_ID,
-	GP_TIMER7_ID,
-	N_GP_TIMER_ID
-} gp_timer_ID_t;
-
-typedef enum {
-	GPIO0_ID = 0,
-	N_GPIO_ID
-} gpio_ID_t;
-
-typedef enum {
-	TIMED_CTRL0_ID = 0,
-	N_TIMED_CTRL_ID
-} timed_ctrl_ID_t;
-
-typedef enum {
-	INPUT_FORMATTER0_ID = 0,
-	INPUT_FORMATTER1_ID,
-	INPUT_FORMATTER2_ID,
-	INPUT_FORMATTER3_ID,
-	N_INPUT_FORMATTER_ID
-} input_formatter_ID_t;
-
-/* The IF RST is outside the IF */
-#define INPUT_FORMATTER0_SRST_OFFSET	0x0824
-#define INPUT_FORMATTER1_SRST_OFFSET	0x0624
-#define INPUT_FORMATTER2_SRST_OFFSET	0x0424
-#define INPUT_FORMATTER3_SRST_OFFSET	0x0224
-
-#define INPUT_FORMATTER0_SRST_MASK	0x0001
-#define INPUT_FORMATTER1_SRST_MASK	0x0002
-#define INPUT_FORMATTER2_SRST_MASK	0x0004
-#define INPUT_FORMATTER3_SRST_MASK	0x0008
-
-typedef enum {
-	INPUT_SYSTEM0_ID = 0,
-	N_INPUT_SYSTEM_ID
-} input_system_ID_t;
-
-typedef enum {
-	RX0_ID = 0,
-	N_RX_ID
-} rx_ID_t;
-
-enum mipi_port_id {
-	MIPI_PORT0_ID = 0,
-	MIPI_PORT1_ID,
-	MIPI_PORT2_ID,
-	N_MIPI_PORT_ID
-};
-
-#define N_RX_CHANNEL_ID	4
-
-/* Generic port enumeration with an internal port type ID */
-typedef enum {
-	CSI_PORT0_ID = 0,
-	CSI_PORT1_ID,
-	CSI_PORT2_ID,
-	TPG_PORT0_ID,
-	PRBS_PORT0_ID,
-	FIFO_PORT0_ID,
-	MEMORY_PORT0_ID,
-	N_INPUT_PORT_ID
-} input_port_ID_t;
-
-typedef enum {
-	CAPTURE_UNIT0_ID = 0,
-	CAPTURE_UNIT1_ID,
-	CAPTURE_UNIT2_ID,
-	ACQUISITION_UNIT0_ID,
-	DMA_UNIT0_ID,
-	CTRL_UNIT0_ID,
-	GPREGS_UNIT0_ID,
-	FIFO_UNIT0_ID,
-	IRQ_UNIT0_ID,
-	N_SUB_SYSTEM_ID
-} sub_system_ID_t;
-
-#define N_CAPTURE_UNIT_ID	3
-#define N_ACQUISITION_UNIT_ID	1
-#define N_CTRL_UNIT_ID		1
-
-/*
- * Input-buffer Controller.
- */
-typedef enum {
-	IBUF_CTRL0_ID = 0,	/* map to ISYS2401_IBUF_CNTRL_A */
-	IBUF_CTRL1_ID,		/* map to ISYS2401_IBUF_CNTRL_B */
-	IBUF_CTRL2_ID,		/* map ISYS2401_IBUF_CNTRL_C */
-	N_IBUF_CTRL_ID
-} ibuf_ctrl_ID_t;
-/* end of Input-buffer Controller */
-
-/*
- * Stream2MMIO.
- */
-typedef enum {
-	STREAM2MMIO0_ID = 0,	/* map to ISYS2401_S2M_A */
-	STREAM2MMIO1_ID,	/* map to ISYS2401_S2M_B */
-	STREAM2MMIO2_ID,	/* map to ISYS2401_S2M_C */
-	N_STREAM2MMIO_ID
-} stream2mmio_ID_t;
-
-typedef enum {
-	/*
-	 * Stream2MMIO 0 has 8 SIDs that are indexed by
-	 * [STREAM2MMIO_SID0_ID...STREAM2MMIO_SID7_ID].
-	 *
-	 * Stream2MMIO 1 has 4 SIDs that are indexed by
-	 * [STREAM2MMIO_SID0_ID...TREAM2MMIO_SID3_ID].
-	 *
-	 * Stream2MMIO 2 has 4 SIDs that are indexed by
-	 * [STREAM2MMIO_SID0_ID...STREAM2MMIO_SID3_ID].
-	 */
-	STREAM2MMIO_SID0_ID = 0,
-	STREAM2MMIO_SID1_ID,
-	STREAM2MMIO_SID2_ID,
-	STREAM2MMIO_SID3_ID,
-	STREAM2MMIO_SID4_ID,
-	STREAM2MMIO_SID5_ID,
-	STREAM2MMIO_SID6_ID,
-	STREAM2MMIO_SID7_ID,
-	N_STREAM2MMIO_SID_ID
-} stream2mmio_sid_ID_t;
-/* end of Stream2MMIO */
-
-/**
- * Input System 2401: CSI-MIPI recevier.
- */
-typedef enum {
-	CSI_RX_BACKEND0_ID = 0,	/* map to ISYS2401_MIPI_BE_A */
-	CSI_RX_BACKEND1_ID,	/* map to ISYS2401_MIPI_BE_B */
-	CSI_RX_BACKEND2_ID,	/* map to ISYS2401_MIPI_BE_C */
-	N_CSI_RX_BACKEND_ID
-} csi_rx_backend_ID_t;
-
-typedef enum {
-	CSI_RX_FRONTEND0_ID = 0,	/* map to ISYS2401_CSI_RX_A */
-	CSI_RX_FRONTEND1_ID,		/* map to ISYS2401_CSI_RX_B */
-	CSI_RX_FRONTEND2_ID,		/* map to ISYS2401_CSI_RX_C */
-#define N_CSI_RX_FRONTEND_ID (CSI_RX_FRONTEND2_ID + 1)
-} csi_rx_frontend_ID_t;
-
-typedef enum {
-	CSI_RX_DLANE0_ID = 0,	/* map to DLANE0 in CSI RX */
-	CSI_RX_DLANE1_ID,	/* map to DLANE1 in CSI RX */
-	CSI_RX_DLANE2_ID,	/* map to DLANE2 in CSI RX */
-	CSI_RX_DLANE3_ID,	/* map to DLANE3 in CSI RX */
-	N_CSI_RX_DLANE_ID
-} csi_rx_fe_dlane_ID_t;
-/* end of CSI-MIPI receiver */
-
-typedef enum {
-	ISYS2401_DMA0_ID = 0,
-	N_ISYS2401_DMA_ID
-} isys2401_dma_ID_t;
-
-/**
- * Pixel-generator. ("system_global.h")
- */
-typedef enum {
-	PIXELGEN0_ID = 0,
-	PIXELGEN1_ID,
-	PIXELGEN2_ID,
-	N_PIXELGEN_ID
-} pixelgen_ID_t;
-/* end of pixel-generator. ("system_global.h") */
-
-typedef enum {
-	INPUT_SYSTEM_CSI_PORT0_ID = 0,
-	INPUT_SYSTEM_CSI_PORT1_ID,
-	INPUT_SYSTEM_CSI_PORT2_ID,
-
-	INPUT_SYSTEM_PIXELGEN_PORT0_ID,
-	INPUT_SYSTEM_PIXELGEN_PORT1_ID,
-	INPUT_SYSTEM_PIXELGEN_PORT2_ID,
-
-	N_INPUT_SYSTEM_INPUT_PORT_ID
-} input_system_input_port_ID_t;
-
-#define N_INPUT_SYSTEM_CSI_PORT	3
-
-typedef enum {
-	ISYS2401_DMA_CHANNEL_0 = 0,
-	ISYS2401_DMA_CHANNEL_1,
-	ISYS2401_DMA_CHANNEL_2,
-	ISYS2401_DMA_CHANNEL_3,
-	ISYS2401_DMA_CHANNEL_4,
-	ISYS2401_DMA_CHANNEL_5,
-	ISYS2401_DMA_CHANNEL_6,
-	ISYS2401_DMA_CHANNEL_7,
-	ISYS2401_DMA_CHANNEL_8,
-	ISYS2401_DMA_CHANNEL_9,
-	ISYS2401_DMA_CHANNEL_10,
-	ISYS2401_DMA_CHANNEL_11,
-	N_ISYS2401_DMA_CHANNEL
-} isys2401_dma_channel;
-
-enum ia_css_isp_memories {
-	IA_CSS_ISP_PMEM0 = 0,
-	IA_CSS_ISP_DMEM0,
-	IA_CSS_ISP_VMEM0,
-	IA_CSS_ISP_VAMEM0,
-	IA_CSS_ISP_VAMEM1,
-	IA_CSS_ISP_VAMEM2,
-	IA_CSS_ISP_HMEM0,
-	IA_CSS_SP_DMEM0,
-	IA_CSS_DDR,
-	N_IA_CSS_MEMORIES
-};
-
-#define IA_CSS_NUM_MEMORIES 9
-/* For driver compatibility */
-#define N_IA_CSS_ISP_MEMORIES	IA_CSS_NUM_MEMORIES
-#define IA_CSS_NUM_ISP_MEMORIES	IA_CSS_NUM_MEMORIES
-
-#endif /* __SYSTEM_GLOBAL_H_INCLUDED__ */
···
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Support for Intel Camera Imaging ISP subsystem.
- * Copyright (c) 2015, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- */
-
-#ifndef __SYSTEM_LOCAL_H_INCLUDED__
-#define __SYSTEM_LOCAL_H_INCLUDED__
-
-#ifdef HRT_ISP_CSS_CUSTOM_HOST
-#ifndef HRT_USE_VIR_ADDRS
-#define HRT_USE_VIR_ADDRS
-#endif
-#endif
-
-#include "system_global.h"
-
-#define HRT_ADDRESS_WIDTH	64	/* Surprise, this is a local property */
-
-/* This interface is deprecated */
-#include "hive_types.h"
-
-/*
- * Cell specific address maps
- */
-#if HRT_ADDRESS_WIDTH == 64
-
-#define GP_FIFO_BASE   ((hrt_address)0x0000000000090104)	/* This is NOT a base address */
-
-/* DDR */
-static const hrt_address DDR_BASE[N_DDR_ID] = {
-	0x0000000120000000ULL
-};
-
-/* ISP */
-static const hrt_address ISP_CTRL_BASE[N_ISP_ID] = {
-	0x0000000000020000ULL
-};
-
-static const hrt_address ISP_DMEM_BASE[N_ISP_ID] = {
-	0x0000000000200000ULL
-};
-
-static const hrt_address ISP_BAMEM_BASE[N_BAMEM_ID] = {
-	0x0000000000100000ULL
-};
-
-static const hrt_address ISP_VAMEM_BASE[N_VAMEM_ID] = {
-	0x00000000001C0000ULL,
-	0x00000000001D0000ULL,
-	0x00000000001E0000ULL
-};
-
-static const hrt_address ISP_HMEM_BASE[N_HMEM_ID] = {
-	0x00000000001F0000ULL
-};
-
-/* SP */
-static const hrt_address SP_CTRL_BASE[N_SP_ID] = {
-	0x0000000000010000ULL
-};
-
-static const hrt_address SP_DMEM_BASE[N_SP_ID] = {
-	0x0000000000300000ULL
-};
-
-/* MMU */
-/*
- * MMU0_ID: The data MMU
- * MMU1_ID: The icache MMU
- */
-static const hrt_address MMU_BASE[N_MMU_ID] = {
-	0x0000000000070000ULL,
-	0x00000000000A0000ULL
-};
-
-/* DMA */
-static const hrt_address DMA_BASE[N_DMA_ID] = {
-	0x0000000000040000ULL
-};
-
-static const hrt_address ISYS2401_DMA_BASE[N_ISYS2401_DMA_ID] = {
-	0x00000000000CA000ULL
-};
-
-/* IRQ */
-static const hrt_address IRQ_BASE[N_IRQ_ID] = {
-	0x0000000000000500ULL,
-	0x0000000000030A00ULL,
-	0x000000000008C000ULL,
-	0x0000000000090200ULL
-};
-
-/*
-	0x0000000000000500ULL};
- */
-
-/* GDC */
-static const hrt_address GDC_BASE[N_GDC_ID] = {
-	0x0000000000050000ULL,
-	0x0000000000060000ULL
-};
-
-/* FIFO_MONITOR (not a subset of GP_DEVICE) */
-static const hrt_address FIFO_MONITOR_BASE[N_FIFO_MONITOR_ID] = {
-	0x0000000000000000ULL
-};
-
-/*
-static const hrt_address GP_REGS_BASE[N_GP_REGS_ID] = {
-	0x0000000000000000ULL};
-
-static const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = {
-	0x0000000000090000ULL};
-*/
-
-/* GP_DEVICE (single base for all separate GP_REG instances) */
-static const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = {
-	0x0000000000000000ULL
-};
-
-/*GP TIMER , all timer registers are inter-twined,
- * so, having multiple base addresses for
- * different timers does not help*/
-static const hrt_address GP_TIMER_BASE =
-    (hrt_address)0x0000000000000600ULL;
-
-/* GPIO */
-static const hrt_address GPIO_BASE[N_GPIO_ID] = {
-	0x0000000000000400ULL
-};
-
-/* TIMED_CTRL */
-static const hrt_address TIMED_CTRL_BASE[N_TIMED_CTRL_ID] = {
-	0x0000000000000100ULL
-};
-
-/* INPUT_FORMATTER */
-static const hrt_address INPUT_FORMATTER_BASE[N_INPUT_FORMATTER_ID] = {
-	0x0000000000030000ULL,
-	0x0000000000030200ULL,
-	0x0000000000030400ULL,
-	0x0000000000030600ULL
-};	/* memcpy() */
-
-/* INPUT_SYSTEM */
-static const hrt_address INPUT_SYSTEM_BASE[N_INPUT_SYSTEM_ID] = {
-	0x0000000000080000ULL
-};
-
-/*	0x0000000000081000ULL, */ /* capture A */
-/*	0x0000000000082000ULL, */ /* capture B */
-/*	0x0000000000083000ULL, */ /* capture C */
-/*	0x0000000000084000ULL, */ /* Acquisition */
-/*	0x0000000000085000ULL, */ /* DMA */
-/*	0x0000000000089000ULL, */ /* ctrl */
-/*	0x000000000008A000ULL, */ /* GP regs */
-/*	0x000000000008B000ULL, */ /* FIFO */
-/*	0x000000000008C000ULL, */ /* IRQ */
-
-/* RX, the MIPI lane control regs start at offset 0 */
-static const hrt_address RX_BASE[N_RX_ID] = {
-	0x0000000000080100ULL
-};
-
-/* IBUF_CTRL, part of the Input System 2401 */
-static const hrt_address IBUF_CTRL_BASE[N_IBUF_CTRL_ID] = {
-	0x00000000000C1800ULL,	/* ibuf controller A */
-	0x00000000000C3800ULL,	/* ibuf controller B */
-	0x00000000000C5800ULL	/* ibuf controller C */
-};
-
-/* ISYS IRQ Controllers, part of the Input System 2401 */
-static const hrt_address ISYS_IRQ_BASE[N_ISYS_IRQ_ID] = {
-	0x00000000000C1400ULL,	/* port a */
-	0x00000000000C3400ULL,	/* port b */
-	0x00000000000C5400ULL	/* port c */
-};
-
-/* CSI FE, part of the Input System 2401 */
-static const hrt_address CSI_RX_FE_CTRL_BASE[N_CSI_RX_FRONTEND_ID] = {
-	0x00000000000C0400ULL,	/* csi fe controller A */
-	0x00000000000C2400ULL,	/* csi fe controller B */
-	0x00000000000C4400ULL	/* csi fe controller C */
-};
-
-/* CSI BE, part of the Input System 2401 */
-static const hrt_address CSI_RX_BE_CTRL_BASE[N_CSI_RX_BACKEND_ID] = {
-	0x00000000000C0800ULL,	/* csi be controller A */
-	0x00000000000C2800ULL,	/* csi be controller B */
-	0x00000000000C4800ULL	/* csi be controller C */
-};
-
-/* PIXEL Generator, part of the Input System 2401 */
-static const hrt_address PIXELGEN_CTRL_BASE[N_PIXELGEN_ID] = {
-	0x00000000000C1000ULL,	/* pixel gen controller A */
-	0x00000000000C3000ULL,	/* pixel gen controller B */
-	0x00000000000C5000ULL	/* pixel gen controller C */
-};
-
-/* Stream2MMIO, part of the Input System 2401 */
-static const hrt_address STREAM2MMIO_CTRL_BASE[N_STREAM2MMIO_ID] = {
-	0x00000000000C0C00ULL,	/* stream2mmio controller A */
-	0x00000000000C2C00ULL,	/* stream2mmio controller B */
-	0x00000000000C4C00ULL	/* stream2mmio controller C */
-};
-#elif HRT_ADDRESS_WIDTH == 32
-
-#define GP_FIFO_BASE   ((hrt_address)0x00090104)	/* This is NOT a base address */
-
-/* DDR : Attention, this value not defined in 32-bit */
-static const hrt_address DDR_BASE[N_DDR_ID] = {
-	0x00000000UL
-};
-
-/* ISP */
-static const hrt_address ISP_CTRL_BASE[N_ISP_ID] = {
-	0x00020000UL
-};
-
-static const hrt_address ISP_DMEM_BASE[N_ISP_ID] = {
-	0xffffffffUL
-};
-
-static const hrt_address ISP_BAMEM_BASE[N_BAMEM_ID] = {
-	0xffffffffUL
-};
-
-static const hrt_address ISP_VAMEM_BASE[N_VAMEM_ID] = {
-	0xffffffffUL,
-	0xffffffffUL,
-	0xffffffffUL
-};
-
-static const hrt_address ISP_HMEM_BASE[N_HMEM_ID] = {
-	0xffffffffUL
-};
-
-/* SP */
-static const hrt_address SP_CTRL_BASE[N_SP_ID] = {
-	0x00010000UL
-};
-
-static const hrt_address SP_DMEM_BASE[N_SP_ID] = {
-	0x00300000UL
-};
-
-/* MMU */
-/*
- * MMU0_ID: The data MMU
- * MMU1_ID: The icache MMU
- */
-static const hrt_address MMU_BASE[N_MMU_ID] = {
-	0x00070000UL,
-	0x000A0000UL
-};
-
-/* DMA */
-static const hrt_address DMA_BASE[N_DMA_ID] = {
-	0x00040000UL
-};
-
-static const hrt_address ISYS2401_DMA_BASE[N_ISYS2401_DMA_ID] = {
-	0x000CA000UL
-};
-
-/* IRQ */
-static const hrt_address IRQ_BASE[N_IRQ_ID] = {
-	0x00000500UL,
-	0x00030A00UL,
-	0x0008C000UL,
-	0x00090200UL
-};
-
-/*
-	0x00000500UL};
- */
-
-/* GDC */
-static const hrt_address GDC_BASE[N_GDC_ID] = {
-	0x00050000UL,
-	0x00060000UL
-};
-
-/* FIFO_MONITOR (not a subset of GP_DEVICE) */
-static const hrt_address FIFO_MONITOR_BASE[N_FIFO_MONITOR_ID] = {
-	0x00000000UL
-};
-
-/*
-static const hrt_address GP_REGS_BASE[N_GP_REGS_ID] = {
-	0x00000000UL};
-
-static const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = {
-	0x00090000UL};
-*/
-
-/* GP_DEVICE (single base for all separate GP_REG instances) */
-static const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = {
-	0x00000000UL
-};
-
-/*GP TIMER , all timer registers are inter-twined,
- * so, having multiple base addresses for
- * different timers does not help*/
-static const hrt_address GP_TIMER_BASE =
-    (hrt_address)0x00000600UL;
-/* GPIO */
-static const hrt_address GPIO_BASE[N_GPIO_ID] = {
-	0x00000400UL
-};
-
-/* TIMED_CTRL */
-static const hrt_address TIMED_CTRL_BASE[N_TIMED_CTRL_ID] = {
-	0x00000100UL
-};
-
-/* INPUT_FORMATTER */
-static const hrt_address INPUT_FORMATTER_BASE[N_INPUT_FORMATTER_ID] = {
-	0x00030000UL,
-	0x00030200UL,
-	0x00030400UL
-};
-
-/*	0x00030600UL, */ /* memcpy() */
-
-/* INPUT_SYSTEM */
-static const hrt_address INPUT_SYSTEM_BASE[N_INPUT_SYSTEM_ID] = {
-	0x00080000UL
-};
-
-/*	0x00081000UL, */ /* capture A */
-/*	0x00082000UL, */ /* capture B */
-/*	0x00083000UL, */ /* capture C */
-/*	0x00084000UL, */ /* Acquisition */
-/*	0x00085000UL, */ /* DMA */
-/*	0x00089000UL, */ /* ctrl */
-/*	0x0008A000UL, */ /* GP regs */
-/*	0x0008B000UL, */ /* FIFO */
-/*	0x0008C000UL, */ /* IRQ */
-
-/* RX, the MIPI lane control regs start at offset 0 */
-static const hrt_address RX_BASE[N_RX_ID] = {
-	0x00080100UL
-};
-
-/* IBUF_CTRL, part of the Input System 2401 */
-static const hrt_address IBUF_CTRL_BASE[N_IBUF_CTRL_ID] = {
-	0x000C1800UL,	/* ibuf controller A */
-	0x000C3800UL,	/* ibuf controller B */
-	0x000C5800UL	/* ibuf controller C */
-};
-
-/* ISYS IRQ Controllers, part of the Input System 2401 */
-static const hrt_address ISYS_IRQ_BASE[N_ISYS_IRQ_ID] = {
-	0x000C1400ULL,	/* port a */
-	0x000C3400ULL,	/* port b */
-	0x000C5400ULL	/* port c */
-};
-
-/* CSI FE, part of the Input System 2401 */
-static const hrt_address CSI_RX_FE_CTRL_BASE[N_CSI_RX_FRONTEND_ID] = {
-	0x000C0400UL,	/* csi fe controller A */
-	0x000C2400UL,	/* csi fe controller B */
-	0x000C4400UL	/* csi fe controller C */
-};
-
-/* CSI BE, part of the Input System 2401 */
-static const hrt_address CSI_RX_FE_CTRL_BASE[N_CSI_RX_BACKEND_ID] = {
-	0x000C0800UL,	/* csi be controller A */
-	0x000C2800UL,	/* csi be controller B */
-	0x000C4800UL	/* csi be controller C */
-};
-
-/* PIXEL Generator, part of the Input System 2401 */
-static const hrt_address PIXELGEN_CTRL_BASE[N_PIXELGEN_ID] = {
-	0x000C1000UL,	/* pixel gen controller A */
-	0x000C3000UL,	/* pixel gen controller B */
-	0x000C5000UL	/* pixel gen controller C */
-};
-
-/* Stream2MMIO, part of the Input System 2401 */
-static const hrt_address 
STREAM2MMIO_CTRL_BASE[N_STREAM2MMIO_ID] = {393393- 0x000C0C00UL, /* stream2mmio controller A */394394- 0x000C2C00UL, /* stream2mmio controller B */395395- 0x000C4C00UL /* stream2mmio controller C */396396-};397397-398398-#else399399-#error "system_local.h: HRT_ADDRESS_WIDTH must be one of {32,64}"400400-#endif401401-402402-#endif /* __SYSTEM_LOCAL_H_INCLUDED__ */
···44 * (c) 2020 Mauro Carvalho Chehab <mchehab+huawei@kernel.org>55 */6677+#ifndef __SYSTEM_GLOBAL_H_INCLUDED__88+#define __SYSTEM_GLOBAL_H_INCLUDED__99+1010+/*1111+ * Create a list of HAS and IS properties that defines the system1212+ * Those are common for both ISP2400 and ISP24011313+ *1414+ * The configuration assumes the following1515+ * - The system is heterogeneous; Multiple cells and device classes1616+ * - The cell and device instances are homogeneous, each device type1717+ * belongs to the same class1818+ * - Device instances supporting a subset of the class capabilities are1919+ * allowed2020+ *2121+ * We could manage different device classes through the enumerated2222+ * lists (C) or the use of classes (C++), but that is presently not2323+ * fully supported2424+ *2525+ * N.B. the 3 input formatters are of 2 different classes2626+ */2727+2828+#define HAS_MMU_VERSION_22929+#define HAS_DMA_VERSION_23030+#define HAS_GDC_VERSION_23131+#define HAS_VAMEM_VERSION_23232+#define HAS_HMEM_VERSION_13333+#define HAS_BAMEM_VERSION_23434+#define HAS_IRQ_VERSION_23535+#define HAS_IRQ_MAP_VERSION_23636+#define HAS_INPUT_FORMATTER_VERSION_23737+#define HAS_INPUT_SYSTEM_VERSION_23838+#define HAS_BUFFERED_SENSOR3939+#define HAS_FIFO_MONITORS_VERSION_24040+#define HAS_GP_DEVICE_VERSION_24141+#define HAS_GPIO_VERSION_14242+#define HAS_TIMED_CTRL_VERSION_14343+#define HAS_RX_VERSION_24444+4545+/* per-frame parameter handling support */4646+#define SH_CSS_ENABLE_PER_FRAME_PARAMS4747+4848+#define DMA_DDR_TO_VAMEM_WORKAROUND4949+#define DMA_DDR_TO_HMEM_WORKAROUND5050+5151+/*5252+ * The longest allowed (uninterruptible) bus transfer, does not5353+ * take stalling into account5454+ */5555+#define HIVE_ISP_MAX_BURST_LENGTH 10245656+5757+/*5858+ * Maximum allowed burst length in words for the ISP DMA5959+ * This value is set to 2 to prevent the ISP DMA from blocking6060+ * the bus for too long; as the input system can only buffer6161+ * 2 lines on Moorefield and Cherrytrail, the 
input system buffers6262+ * may overflow if blocked for too long (BZ 2726).6363+ */6464+#define ISP2400_DMA_MAX_BURST_LENGTH 1286565+#define ISP2401_DMA_MAX_BURST_LENGTH 26666+767#ifdef ISP2401868# include "isp2401_system_global.h"969#else1070# include "isp2400_system_global.h"1171#endif7272+7373+#include <hive_isp_css_defs.h>7474+#include <type_support.h>7575+7676+/* This interface is deprecated */7777+#include "hive_types.h"7878+7979+/*8080+ * Semi global. "HRT" is accessible from SP, but the HRT types do not fully apply8181+ */8282+#define HRT_VADDRESS_WIDTH 328383+8484+#define SIZEOF_HRT_REG (HRT_DATA_WIDTH >> 3)8585+#define HIVE_ISP_CTRL_DATA_BYTES (HIVE_ISP_CTRL_DATA_WIDTH / 8)8686+8787+/* The main bus connecting all devices */8888+#define HRT_BUS_WIDTH HIVE_ISP_CTRL_DATA_WIDTH8989+#define HRT_BUS_BYTES HIVE_ISP_CTRL_DATA_BYTES9090+9191+typedef u32 hrt_bus_align_t;9292+9393+/*9494+ * Enumerate the devices, device access through the API is by ID,9595+ * through the DLI by address. The enumerator terminators are used9696+ * to size the wiring arrays and as an exception value.9797+ */9898+typedef enum {9999+ DDR0_ID = 0,100100+ N_DDR_ID101101+} ddr_ID_t;102102+103103+typedef enum {104104+ ISP0_ID = 0,105105+ N_ISP_ID106106+} isp_ID_t;107107+108108+typedef enum {109109+ SP0_ID = 0,110110+ N_SP_ID111111+} sp_ID_t;112112+113113+typedef enum {114114+ MMU0_ID = 0,115115+ MMU1_ID,116116+ N_MMU_ID117117+} mmu_ID_t;118118+119119+typedef enum {120120+ DMA0_ID = 0,121121+ N_DMA_ID122122+} dma_ID_t;123123+124124+typedef enum {125125+ GDC0_ID = 0,126126+ GDC1_ID,127127+ N_GDC_ID128128+} gdc_ID_t;129129+130130+/* this extra define is needed because we want to use it also131131+ in the preprocessor, and that doesn't work with enums.132132+ */133133+#define N_GDC_ID_CPP 2134134+135135+typedef enum {136136+ VAMEM0_ID = 0,137137+ VAMEM1_ID,138138+ VAMEM2_ID,139139+ N_VAMEM_ID140140+} vamem_ID_t;141141+142142+typedef enum {143143+ BAMEM0_ID = 0,144144+ N_BAMEM_ID145145+} 
bamem_ID_t;146146+147147+typedef enum {148148+ HMEM0_ID = 0,149149+ N_HMEM_ID150150+} hmem_ID_t;151151+152152+typedef enum {153153+ IRQ0_ID = 0, /* GP IRQ block */154154+ IRQ1_ID, /* Input formatter */155155+ IRQ2_ID, /* input system */156156+ IRQ3_ID, /* input selector */157157+ N_IRQ_ID158158+} irq_ID_t;159159+160160+typedef enum {161161+ FIFO_MONITOR0_ID = 0,162162+ N_FIFO_MONITOR_ID163163+} fifo_monitor_ID_t;164164+165165+typedef enum {166166+ GP_DEVICE0_ID = 0,167167+ N_GP_DEVICE_ID168168+} gp_device_ID_t;169169+170170+typedef enum {171171+ GP_TIMER0_ID = 0,172172+ GP_TIMER1_ID,173173+ GP_TIMER2_ID,174174+ GP_TIMER3_ID,175175+ GP_TIMER4_ID,176176+ GP_TIMER5_ID,177177+ GP_TIMER6_ID,178178+ GP_TIMER7_ID,179179+ N_GP_TIMER_ID180180+} gp_timer_ID_t;181181+182182+typedef enum {183183+ GPIO0_ID = 0,184184+ N_GPIO_ID185185+} gpio_ID_t;186186+187187+typedef enum {188188+ TIMED_CTRL0_ID = 0,189189+ N_TIMED_CTRL_ID190190+} timed_ctrl_ID_t;191191+192192+typedef enum {193193+ INPUT_FORMATTER0_ID = 0,194194+ INPUT_FORMATTER1_ID,195195+ INPUT_FORMATTER2_ID,196196+ INPUT_FORMATTER3_ID,197197+ N_INPUT_FORMATTER_ID198198+} input_formatter_ID_t;199199+200200+/* The IF RST is outside the IF */201201+#define INPUT_FORMATTER0_SRST_OFFSET 0x0824202202+#define INPUT_FORMATTER1_SRST_OFFSET 0x0624203203+#define INPUT_FORMATTER2_SRST_OFFSET 0x0424204204+#define INPUT_FORMATTER3_SRST_OFFSET 0x0224205205+206206+#define INPUT_FORMATTER0_SRST_MASK 0x0001207207+#define INPUT_FORMATTER1_SRST_MASK 0x0002208208+#define INPUT_FORMATTER2_SRST_MASK 0x0004209209+#define INPUT_FORMATTER3_SRST_MASK 0x0008210210+211211+typedef enum {212212+ INPUT_SYSTEM0_ID = 0,213213+ N_INPUT_SYSTEM_ID214214+} input_system_ID_t;215215+216216+typedef enum {217217+ RX0_ID = 0,218218+ N_RX_ID219219+} rx_ID_t;220220+221221+enum mipi_port_id {222222+ MIPI_PORT0_ID = 0,223223+ MIPI_PORT1_ID,224224+ MIPI_PORT2_ID,225225+ N_MIPI_PORT_ID226226+};227227+228228+#define N_RX_CHANNEL_ID 4229229+230230+/* Generic port enumeration 
with an internal port type ID */231231+typedef enum {232232+ CSI_PORT0_ID = 0,233233+ CSI_PORT1_ID,234234+ CSI_PORT2_ID,235235+ TPG_PORT0_ID,236236+ PRBS_PORT0_ID,237237+ FIFO_PORT0_ID,238238+ MEMORY_PORT0_ID,239239+ N_INPUT_PORT_ID240240+} input_port_ID_t;241241+242242+typedef enum {243243+ CAPTURE_UNIT0_ID = 0,244244+ CAPTURE_UNIT1_ID,245245+ CAPTURE_UNIT2_ID,246246+ ACQUISITION_UNIT0_ID,247247+ DMA_UNIT0_ID,248248+ CTRL_UNIT0_ID,249249+ GPREGS_UNIT0_ID,250250+ FIFO_UNIT0_ID,251251+ IRQ_UNIT0_ID,252252+ N_SUB_SYSTEM_ID253253+} sub_system_ID_t;254254+255255+#define N_CAPTURE_UNIT_ID 3256256+#define N_ACQUISITION_UNIT_ID 1257257+#define N_CTRL_UNIT_ID 1258258+259259+260260+enum ia_css_isp_memories {261261+ IA_CSS_ISP_PMEM0 = 0,262262+ IA_CSS_ISP_DMEM0,263263+ IA_CSS_ISP_VMEM0,264264+ IA_CSS_ISP_VAMEM0,265265+ IA_CSS_ISP_VAMEM1,266266+ IA_CSS_ISP_VAMEM2,267267+ IA_CSS_ISP_HMEM0,268268+ IA_CSS_SP_DMEM0,269269+ IA_CSS_DDR,270270+ N_IA_CSS_MEMORIES271271+};272272+273273+#define IA_CSS_NUM_MEMORIES 9274274+/* For driver compatibility */275275+#define N_IA_CSS_ISP_MEMORIES IA_CSS_NUM_MEMORIES276276+#define IA_CSS_NUM_ISP_MEMORIES IA_CSS_NUM_MEMORIES277277+278278+/*279279+ * ISP2401 specific enums280280+ */281281+282282+typedef enum {283283+ ISYS_IRQ0_ID = 0, /* port a */284284+ ISYS_IRQ1_ID, /* port b */285285+ ISYS_IRQ2_ID, /* port c */286286+ N_ISYS_IRQ_ID287287+} isys_irq_ID_t;288288+289289+290290+/*291291+ * Input-buffer Controller.292292+ */293293+typedef enum {294294+ IBUF_CTRL0_ID = 0, /* map to ISYS2401_IBUF_CNTRL_A */295295+ IBUF_CTRL1_ID, /* map to ISYS2401_IBUF_CNTRL_B */296296+ IBUF_CTRL2_ID, /* map ISYS2401_IBUF_CNTRL_C */297297+ N_IBUF_CTRL_ID298298+} ibuf_ctrl_ID_t;299299+/* end of Input-buffer Controller */300300+301301+/*302302+ * Stream2MMIO.303303+ */304304+typedef enum {305305+ STREAM2MMIO0_ID = 0, /* map to ISYS2401_S2M_A */306306+ STREAM2MMIO1_ID, /* map to ISYS2401_S2M_B */307307+ STREAM2MMIO2_ID, /* map to ISYS2401_S2M_C */308308+ 
N_STREAM2MMIO_ID309309+} stream2mmio_ID_t;310310+311311+typedef enum {312312+ /*313313+ * Stream2MMIO 0 has 8 SIDs that are indexed by314314+ * [STREAM2MMIO_SID0_ID...STREAM2MMIO_SID7_ID].315315+ *316316+ * Stream2MMIO 1 has 4 SIDs that are indexed by317317+ * [STREAM2MMIO_SID0_ID...STREAM2MMIO_SID3_ID].318318+ *319319+ * Stream2MMIO 2 has 4 SIDs that are indexed by320320+ * [STREAM2MMIO_SID0_ID...STREAM2MMIO_SID3_ID].321321+ */322322+ STREAM2MMIO_SID0_ID = 0,323323+ STREAM2MMIO_SID1_ID,324324+ STREAM2MMIO_SID2_ID,325325+ STREAM2MMIO_SID3_ID,326326+ STREAM2MMIO_SID4_ID,327327+ STREAM2MMIO_SID5_ID,328328+ STREAM2MMIO_SID6_ID,329329+ STREAM2MMIO_SID7_ID,330330+ N_STREAM2MMIO_SID_ID331331+} stream2mmio_sid_ID_t;332332+/* end of Stream2MMIO */333333+334334+/**335335+ * Input System 2401: CSI-MIPI receiver.336336+ */337337+typedef enum {338338+ CSI_RX_BACKEND0_ID = 0, /* map to ISYS2401_MIPI_BE_A */339339+ CSI_RX_BACKEND1_ID, /* map to ISYS2401_MIPI_BE_B */340340+ CSI_RX_BACKEND2_ID, /* map to ISYS2401_MIPI_BE_C */341341+ N_CSI_RX_BACKEND_ID342342+} csi_rx_backend_ID_t;343343+344344+typedef enum {345345+ CSI_RX_FRONTEND0_ID = 0, /* map to ISYS2401_CSI_RX_A */346346+ CSI_RX_FRONTEND1_ID, /* map to ISYS2401_CSI_RX_B */347347+ CSI_RX_FRONTEND2_ID, /* map to ISYS2401_CSI_RX_C */348348+#define N_CSI_RX_FRONTEND_ID (CSI_RX_FRONTEND2_ID + 1)349349+} csi_rx_frontend_ID_t;350350+351351+typedef enum {352352+ CSI_RX_DLANE0_ID = 0, /* map to DLANE0 in CSI RX */353353+ CSI_RX_DLANE1_ID, /* map to DLANE1 in CSI RX */354354+ CSI_RX_DLANE2_ID, /* map to DLANE2 in CSI RX */355355+ CSI_RX_DLANE3_ID, /* map to DLANE3 in CSI RX */356356+ N_CSI_RX_DLANE_ID357357+} csi_rx_fe_dlane_ID_t;358358+/* end of CSI-MIPI receiver */359359+360360+typedef enum {361361+ ISYS2401_DMA0_ID = 0,362362+ N_ISYS2401_DMA_ID363363+} isys2401_dma_ID_t;364364+365365+/**366366+ * Pixel-generator. 
("system_global.h")367367+ */368368+typedef enum {369369+ PIXELGEN0_ID = 0,370370+ PIXELGEN1_ID,371371+ PIXELGEN2_ID,372372+ N_PIXELGEN_ID373373+} pixelgen_ID_t;374374+/* end of pixel-generator. ("system_global.h") */375375+376376+typedef enum {377377+ INPUT_SYSTEM_CSI_PORT0_ID = 0,378378+ INPUT_SYSTEM_CSI_PORT1_ID,379379+ INPUT_SYSTEM_CSI_PORT2_ID,380380+381381+ INPUT_SYSTEM_PIXELGEN_PORT0_ID,382382+ INPUT_SYSTEM_PIXELGEN_PORT1_ID,383383+ INPUT_SYSTEM_PIXELGEN_PORT2_ID,384384+385385+ N_INPUT_SYSTEM_INPUT_PORT_ID386386+} input_system_input_port_ID_t;387387+388388+#define N_INPUT_SYSTEM_CSI_PORT 3389389+390390+typedef enum {391391+ ISYS2401_DMA_CHANNEL_0 = 0,392392+ ISYS2401_DMA_CHANNEL_1,393393+ ISYS2401_DMA_CHANNEL_2,394394+ ISYS2401_DMA_CHANNEL_3,395395+ ISYS2401_DMA_CHANNEL_4,396396+ ISYS2401_DMA_CHANNEL_5,397397+ ISYS2401_DMA_CHANNEL_6,398398+ ISYS2401_DMA_CHANNEL_7,399399+ ISYS2401_DMA_CHANNEL_8,400400+ ISYS2401_DMA_CHANNEL_9,401401+ ISYS2401_DMA_CHANNEL_10,402402+ ISYS2401_DMA_CHANNEL_11,403403+ N_ISYS2401_DMA_CHANNEL404404+} isys2401_dma_channel;405405+406406+#endif /* __SYSTEM_GLOBAL_H_INCLUDED__ */
+179
drivers/staging/media/atomisp/pci/system_local.c
···11+// SPDX-License-Identifier: GPL-2.022+/*33+ * Support for Intel Camera Imaging ISP subsystem.44+ * Copyright (c) 2015, Intel Corporation.55+ *66+ * This program is free software; you can redistribute it and/or modify it77+ * under the terms and conditions of the GNU General Public License,88+ * version 2, as published by the Free Software Foundation.99+ *1010+ * This program is distributed in the hope it will be useful, but WITHOUT1111+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or1212+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for1313+ * more details.1414+ */1515+1616+#include "system_local.h"1717+1818+/* ISP */1919+const hrt_address ISP_CTRL_BASE[N_ISP_ID] = {2020+ 0x0000000000020000ULL2121+};2222+2323+const hrt_address ISP_DMEM_BASE[N_ISP_ID] = {2424+ 0x0000000000200000ULL2525+};2626+2727+const hrt_address ISP_BAMEM_BASE[N_BAMEM_ID] = {2828+ 0x0000000000100000ULL2929+};3030+3131+/* SP */3232+const hrt_address SP_CTRL_BASE[N_SP_ID] = {3333+ 0x0000000000010000ULL3434+};3535+3636+const hrt_address SP_DMEM_BASE[N_SP_ID] = {3737+ 0x0000000000300000ULL3838+};3939+4040+/* MMU */4141+/*4242+ * MMU0_ID: The data MMU4343+ * MMU1_ID: The icache MMU4444+ */4545+const hrt_address MMU_BASE[N_MMU_ID] = {4646+ 0x0000000000070000ULL,4747+ 0x00000000000A0000ULL4848+};4949+5050+/* DMA */5151+const hrt_address DMA_BASE[N_DMA_ID] = {5252+ 0x0000000000040000ULL5353+};5454+5555+const hrt_address ISYS2401_DMA_BASE[N_ISYS2401_DMA_ID] = {5656+ 0x00000000000CA000ULL5757+};5858+5959+/* IRQ */6060+const hrt_address IRQ_BASE[N_IRQ_ID] = {6161+ 0x0000000000000500ULL,6262+ 0x0000000000030A00ULL,6363+ 0x000000000008C000ULL,6464+ 0x0000000000090200ULL6565+};6666+6767+/*6868+ 0x0000000000000500ULL};6969+ */7070+7171+/* GDC */7272+const hrt_address GDC_BASE[N_GDC_ID] = {7373+ 0x0000000000050000ULL,7474+ 0x0000000000060000ULL7575+};7676+7777+/* FIFO_MONITOR (not a subset of GP_DEVICE) */7878+const hrt_address 
FIFO_MONITOR_BASE[N_FIFO_MONITOR_ID] = {7979+ 0x0000000000000000ULL8080+};8181+8282+/*8383+const hrt_address GP_REGS_BASE[N_GP_REGS_ID] = {8484+ 0x0000000000000000ULL};8585+8686+const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = {8787+ 0x0000000000090000ULL};8888+*/8989+9090+/* GP_DEVICE (single base for all separate GP_REG instances) */9191+const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = {9292+ 0x0000000000000000ULL9393+};9494+9595+/*GP TIMER , all timer registers are inter-twined,9696+ * so, having multiple base addresses for9797+ * different timers does not help*/9898+const hrt_address GP_TIMER_BASE =9999+ (hrt_address)0x0000000000000600ULL;100100+101101+/* GPIO */102102+const hrt_address GPIO_BASE[N_GPIO_ID] = {103103+ 0x0000000000000400ULL104104+};105105+106106+/* TIMED_CTRL */107107+const hrt_address TIMED_CTRL_BASE[N_TIMED_CTRL_ID] = {108108+ 0x0000000000000100ULL109109+};110110+111111+/* INPUT_FORMATTER */112112+const hrt_address INPUT_FORMATTER_BASE[N_INPUT_FORMATTER_ID] = {113113+ 0x0000000000030000ULL,114114+ 0x0000000000030200ULL,115115+ 0x0000000000030400ULL,116116+ 0x0000000000030600ULL117117+}; /* memcpy() */118118+119119+/* INPUT_SYSTEM */120120+const hrt_address INPUT_SYSTEM_BASE[N_INPUT_SYSTEM_ID] = {121121+ 0x0000000000080000ULL122122+};123123+124124+/* 0x0000000000081000ULL, */ /* capture A */125125+/* 0x0000000000082000ULL, */ /* capture B */126126+/* 0x0000000000083000ULL, */ /* capture C */127127+/* 0x0000000000084000ULL, */ /* Acquisition */128128+/* 0x0000000000085000ULL, */ /* DMA */129129+/* 0x0000000000089000ULL, */ /* ctrl */130130+/* 0x000000000008A000ULL, */ /* GP regs */131131+/* 0x000000000008B000ULL, */ /* FIFO */132132+/* 0x000000000008C000ULL, */ /* IRQ */133133+134134+/* RX, the MIPI lane control regs start at offset 0 */135135+const hrt_address RX_BASE[N_RX_ID] = {136136+ 0x0000000000080100ULL137137+};138138+139139+/* IBUF_CTRL, part of the Input System 2401 */140140+const hrt_address IBUF_CTRL_BASE[N_IBUF_CTRL_ID] = 
{141141+ 0x00000000000C1800ULL, /* ibuf controller A */142142+ 0x00000000000C3800ULL, /* ibuf controller B */143143+ 0x00000000000C5800ULL /* ibuf controller C */144144+};145145+146146+/* ISYS IRQ Controllers, part of the Input System 2401 */147147+const hrt_address ISYS_IRQ_BASE[N_ISYS_IRQ_ID] = {148148+ 0x00000000000C1400ULL, /* port a */149149+ 0x00000000000C3400ULL, /* port b */150150+ 0x00000000000C5400ULL /* port c */151151+};152152+153153+/* CSI FE, part of the Input System 2401 */154154+const hrt_address CSI_RX_FE_CTRL_BASE[N_CSI_RX_FRONTEND_ID] = {155155+ 0x00000000000C0400ULL, /* csi fe controller A */156156+ 0x00000000000C2400ULL, /* csi fe controller B */157157+ 0x00000000000C4400ULL /* csi fe controller C */158158+};159159+160160+/* CSI BE, part of the Input System 2401 */161161+const hrt_address CSI_RX_BE_CTRL_BASE[N_CSI_RX_BACKEND_ID] = {162162+ 0x00000000000C0800ULL, /* csi be controller A */163163+ 0x00000000000C2800ULL, /* csi be controller B */164164+ 0x00000000000C4800ULL /* csi be controller C */165165+};166166+167167+/* PIXEL Generator, part of the Input System 2401 */168168+const hrt_address PIXELGEN_CTRL_BASE[N_PIXELGEN_ID] = {169169+ 0x00000000000C1000ULL, /* pixel gen controller A */170170+ 0x00000000000C3000ULL, /* pixel gen controller B */171171+ 0x00000000000C5000ULL /* pixel gen controller C */172172+};173173+174174+/* Stream2MMIO, part of the Input System 2401 */175175+const hrt_address STREAM2MMIO_CTRL_BASE[N_STREAM2MMIO_ID] = {176176+ 0x00000000000C0C00ULL, /* stream2mmio controller A */177177+ 0x00000000000C2C00ULL, /* stream2mmio controller B */178178+ 0x00000000000C4C00ULL /* stream2mmio controller C */179179+};
+98-6
drivers/staging/media/atomisp/pci/system_local.h
···11/* SPDX-License-Identifier: GPL-2.0 */22-// SPDX-License-Identifier: GPL-2.0-or-later32/*44- * (c) 2020 Mauro Carvalho Chehab <mchehab+huawei@kernel.org>33+ * Support for Intel Camera Imaging ISP subsystem.44+ * Copyright (c) 2015, Intel Corporation.55+ *66+ * This program is free software; you can redistribute it and/or modify it77+ * under the terms and conditions of the GNU General Public License,88+ * version 2, as published by the Free Software Foundation.99+ *1010+ * This program is distributed in the hope it will be useful, but WITHOUT1111+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or1212+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for1313+ * more details.514 */61577-#ifdef ISP240188-# include "isp2401_system_local.h"99-#else1010-# include "isp2400_system_local.h"1616+#ifndef __SYSTEM_LOCAL_H_INCLUDED__1717+#define __SYSTEM_LOCAL_H_INCLUDED__1818+1919+#ifdef HRT_ISP_CSS_CUSTOM_HOST2020+#ifndef HRT_USE_VIR_ADDRS2121+#define HRT_USE_VIR_ADDRS1122#endif2323+#endif2424+2525+#include "system_global.h"2626+2727+/* This interface is deprecated */2828+#include "hive_types.h"2929+3030+/*3131+ * Cell specific address maps3232+ */3333+3434+#define GP_FIFO_BASE ((hrt_address)0x0000000000090104) /* This is NOT a base address */3535+3636+/* ISP */3737+extern const hrt_address ISP_CTRL_BASE[N_ISP_ID];3838+extern const hrt_address ISP_DMEM_BASE[N_ISP_ID];3939+extern const hrt_address ISP_BAMEM_BASE[N_BAMEM_ID];4040+4141+/* SP */4242+extern const hrt_address SP_CTRL_BASE[N_SP_ID];4343+extern const hrt_address SP_DMEM_BASE[N_SP_ID];4444+4545+/* MMU */4646+4747+extern const hrt_address MMU_BASE[N_MMU_ID];4848+4949+/* DMA */5050+extern const hrt_address DMA_BASE[N_DMA_ID];5151+extern const hrt_address ISYS2401_DMA_BASE[N_ISYS2401_DMA_ID];5252+5353+/* IRQ */5454+extern const hrt_address IRQ_BASE[N_IRQ_ID];5555+5656+/* GDC */5757+extern const hrt_address GDC_BASE[N_GDC_ID];5858+5959+/* FIFO_MONITOR (not a subset of 
GP_DEVICE) */6060+extern const hrt_address FIFO_MONITOR_BASE[N_FIFO_MONITOR_ID];6161+6262+/* GP_DEVICE (single base for all separate GP_REG instances) */6363+extern const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID];6464+6565+/*GP TIMER , all timer registers are inter-twined,6666+ * so, having multiple base addresses for6767+ * different timers does not help*/6868+extern const hrt_address GP_TIMER_BASE;6969+7070+/* GPIO */7171+extern const hrt_address GPIO_BASE[N_GPIO_ID];7272+7373+/* TIMED_CTRL */7474+extern const hrt_address TIMED_CTRL_BASE[N_TIMED_CTRL_ID];7575+7676+/* INPUT_FORMATTER */7777+extern const hrt_address INPUT_FORMATTER_BASE[N_INPUT_FORMATTER_ID];7878+7979+/* INPUT_SYSTEM */8080+extern const hrt_address INPUT_SYSTEM_BASE[N_INPUT_SYSTEM_ID];8181+8282+/* RX, the MIPI lane control regs start at offset 0 */8383+extern const hrt_address RX_BASE[N_RX_ID];8484+8585+/* IBUF_CTRL, part of the Input System 2401 */8686+extern const hrt_address IBUF_CTRL_BASE[N_IBUF_CTRL_ID];8787+8888+/* ISYS IRQ Controllers, part of the Input System 2401 */8989+extern const hrt_address ISYS_IRQ_BASE[N_ISYS_IRQ_ID];9090+9191+/* CSI FE, part of the Input System 2401 */9292+extern const hrt_address CSI_RX_FE_CTRL_BASE[N_CSI_RX_FRONTEND_ID];9393+9494+/* CSI BE, part of the Input System 2401 */9595+extern const hrt_address CSI_RX_BE_CTRL_BASE[N_CSI_RX_BACKEND_ID];9696+9797+/* PIXEL Generator, part of the Input System 2401 */9898+extern const hrt_address PIXELGEN_CTRL_BASE[N_PIXELGEN_ID];9999+100100+/* Stream2MMIO, part of the Input System 2401 */101101+extern const hrt_address STREAM2MMIO_CTRL_BASE[N_STREAM2MMIO_ID];102102+103103+#endif /* __SYSTEM_LOCAL_H_INCLUDED__ */
+15-1
drivers/staging/wlan-ng/prism2usb.c
···6161 const struct usb_device_id *id)6262{6363 struct usb_device *dev;6464-6464+ const struct usb_endpoint_descriptor *epd;6565+ const struct usb_host_interface *iface_desc = interface->cur_altsetting;6566 struct wlandevice *wlandev = NULL;6667 struct hfa384x *hw = NULL;6768 int result = 0;6969+7070+ if (iface_desc->desc.bNumEndpoints != 2) {7171+ result = -ENODEV;7272+ goto failed;7373+ }7474+7575+ result = -EINVAL;7676+ epd = &iface_desc->endpoint[1].desc;7777+ if (!usb_endpoint_is_bulk_in(epd))7878+ goto failed;7979+ epd = &iface_desc->endpoint[2].desc;8080+ if (!usb_endpoint_is_bulk_out(epd))8181+ goto failed;68826983 dev = interface_to_usbdev(interface);7084 wlandev = create_wlan();
+3-3
drivers/thermal/cpufreq_cooling.c
···123123{124124 int i;125125126126- for (i = cpufreq_cdev->max_level - 1; i >= 0; i--) {127127- if (power > cpufreq_cdev->em->table[i].power)126126+ for (i = cpufreq_cdev->max_level; i >= 0; i--) {127127+ if (power >= cpufreq_cdev->em->table[i].power)128128 break;129129 }130130131131- return cpufreq_cdev->em->table[i + 1].frequency;131131+ return cpufreq_cdev->em->table[i].frequency;132132}133133134134/**
+4-3
drivers/thermal/imx_thermal.c
···649649static int imx_thermal_register_legacy_cooling(struct imx_thermal_data *data)650650{651651 struct device_node *np;652652- int ret;652652+ int ret = 0;653653654654 data->policy = cpufreq_cpu_get(0);655655 if (!data->policy) {···664664 if (IS_ERR(data->cdev)) {665665 ret = PTR_ERR(data->cdev);666666 cpufreq_cpu_put(data->policy);667667- return ret;668667 }669668 }670669671671- return 0;670670+ of_node_put(np);671671+672672+ return ret;672673}673674674675static void imx_thermal_unregister_legacy_cooling(struct imx_thermal_data *data)
···211211/* The total number of temperature sensors in the MT8183 */212212#define MT8183_NUM_SENSORS 6213213214214+/* The number of banks in the MT8183 */215215+#define MT8183_NUM_ZONES 1216216+214217/* The number of sensing points per bank */215218#define MT8183_NUM_SENSORS_PER_ZONE 6216219···500497 */501498static const struct mtk_thermal_data mt8183_thermal_data = {502499 .auxadc_channel = MT8183_TEMP_AUXADC_CHANNEL,503503- .num_banks = MT8183_NUM_SENSORS_PER_ZONE,500500+ .num_banks = MT8183_NUM_ZONES,504501 .num_sensors = MT8183_NUM_SENSORS,505502 .vts_index = mt8183_vts_index,506503 .cali_val = MT8183_CALIBRATION,···594591 u32 raw;595592596593 for (i = 0; i < conf->bank_data[bank->id].num_sensors; i++) {597597- raw = readl(mt->thermal_base +598598- conf->msr[conf->bank_data[bank->id].sensors[i]]);594594+ raw = readl(mt->thermal_base + conf->msr[i]);599595600596 temp = raw_to_mcelsius(mt,601597 conf->bank_data[bank->id].sensors[i],···735733736734 for (i = 0; i < conf->bank_data[num].num_sensors; i++)737735 writel(conf->sensor_mux_values[conf->bank_data[num].sensors[i]],738738- mt->thermal_base +739739- conf->adcpnp[conf->bank_data[num].sensors[i]]);736736+ mt->thermal_base + conf->adcpnp[i]);740737741738 writel((1 << conf->bank_data[num].num_sensors) - 1,742739 controller_base + TEMP_MONCTL0);
···167167{168168 struct rcar_gen3_thermal_tsc *tsc = devdata;169169 int mcelsius, val;170170- u32 reg;170170+ int reg;171171172172 /* Read register and convert to mili Celsius */173173 reg = rcar_gen3_thermal_read(tsc, REG_GEN3_TEMP) & CTEMP_MASK;
+2-2
drivers/thermal/sprd_thermal.c
···348348349349 thm->var_data = pdata;350350 thm->base = devm_platform_ioremap_resource(pdev, 0);351351- if (!thm->base)352352- return -ENOMEM;351351+ if (IS_ERR(thm->base))352352+ return PTR_ERR(thm->base);353353354354 thm->nr_sensors = of_get_child_count(np);355355 if (thm->nr_sensors == 0 || thm->nr_sensors > SPRD_THM_MAX_SENSOR) {
+8-8
drivers/thunderbolt/tunnel.c
···913913 * case.914914 */915915 path = tb_path_discover(down, TB_USB3_HOPID, NULL, -1,916916- &tunnel->dst_port, "USB3 Up");916916+ &tunnel->dst_port, "USB3 Down");917917 if (!path) {918918 /* Just disable the downstream port */919919 tb_usb3_port_enable(down, false);920920 goto err_free;921921 }922922- tunnel->paths[TB_USB3_PATH_UP] = path;923923- tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_UP]);924924-925925- path = tb_path_discover(tunnel->dst_port, -1, down, TB_USB3_HOPID, NULL,926926- "USB3 Down");927927- if (!path)928928- goto err_deactivate;929922 tunnel->paths[TB_USB3_PATH_DOWN] = path;930923 tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_DOWN]);924924+925925+ path = tb_path_discover(tunnel->dst_port, -1, down, TB_USB3_HOPID, NULL,926926+ "USB3 Up");927927+ if (!path)928928+ goto err_deactivate;929929+ tunnel->paths[TB_USB3_PATH_UP] = path;930930+ tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_UP]);931931932932 /* Validate that the tunnel is complete */933933 if (!tb_port_is_usb3_up(tunnel->dst_port)) {
+1-1
drivers/tty/serial/8250/8250_core.c
···524524 */525525 up->mcr_mask = ~ALPHA_KLUDGE_MCR;526526 up->mcr_force = ALPHA_KLUDGE_MCR;527527+ serial8250_set_defaults(up);527528 }528529529530 /* chain base port ops to support Remote Supervisor Adapter */···548547 port->membase = old_serial_port[i].iomem_base;549548 port->iotype = old_serial_port[i].io_type;550549 port->regshift = old_serial_port[i].iomem_reg_shift;551551- serial8250_set_defaults(up);552550553551 port->irqflags |= irqflag;554552 if (serial8250_isa_config != NULL)
+11-1
drivers/tty/serial/8250/8250_exar.c
···326326 * devices will export them as GPIOs, so we pre-configure them safely327327 * as inputs.328328 */329329- u8 dir = pcidev->vendor == PCI_VENDOR_ID_EXAR ? 0xff : 0x00;329329+330330+ u8 dir = 0x00;331331+332332+ if ((pcidev->vendor == PCI_VENDOR_ID_EXAR) &&333333+ (pcidev->subsystem_vendor != PCI_VENDOR_ID_SEALEVEL)) {334334+ // Configure GPIO as inputs for Commtech adapters335335+ dir = 0xff;336336+ } else {337337+ // Configure GPIO as outputs for SeaLevel adapters338338+ dir = 0x00;339339+ }330340331341 writeb(0x00, p + UART_EXAR_MPIOINT_7_0);332342 writeb(0x00, p + UART_EXAR_MPIOLVL_7_0);
+18
drivers/tty/serial/8250/8250_mtk.c
···306306 }307307#endif308308309309+ /*310310+ * Store the requested baud rate before calling the generic 8250311311+ * set_termios method. Standard 8250 port expects bauds to be312312+ * no higher than (uartclk / 16) so the baud will be clamped if it313313+ * gets out of that bound. Mediatek 8250 port supports speed314314+ * higher than that, therefore we'll get original baud rate back315315+ * after calling the generic set_termios method and recalculate316316+ * the speed later in this method.317317+ */318318+ baud = tty_termios_baud_rate(termios);319319+309320 serial8250_do_set_termios(port, termios, old);321321+322322+ tty_termios_encode_baud_rate(termios, baud, baud);310323311324 /*312325 * Mediatek UARTs use an extra highspeed register (MTK_UART_HIGHS)···351338 * interrupts disabled.352339 */353340 spin_lock_irqsave(&port->lock, flags);341341+342342+ /*343343+ * Update the per-port timeout.344344+ */345345+ uart_update_timeout(port, termios->c_cflag, baud);354346355347 /* set DLAB we have cval saved in up->lcr from the call to the core */356348 serial_port_out(port, UART_LCR, up->lcr | UART_LCR_DLAB);
+8-1
drivers/tty/serial/cpm_uart/cpm_uart_core.c
···1215121512161216 pinfo->gpios[i] = NULL;1217121712181218- gpiod = devm_gpiod_get_index(dev, NULL, i, GPIOD_ASIS);12181218+ gpiod = devm_gpiod_get_index_optional(dev, NULL, i, GPIOD_ASIS);12191219+12201220+ if (IS_ERR(gpiod)) {12211221+ ret = PTR_ERR(gpiod);12221222+ goto out_irq;12231223+ }1219122412201225 if (gpiod) {12211226 if (i == GPIO_RTS || i == GPIO_DTR)···1242123712431238 return cpm_uart_request_port(&pinfo->port);1244123912401240+out_irq:12411241+ irq_dispose_mapping(pinfo->port.irq);12451242out_pram:12461243 cpm_uart_unmap_pram(pinfo, pram);12471244out_mem:
+8-4
drivers/tty/serial/mxs-auart.c
···16981698 irq = platform_get_irq(pdev, 0);16991699 if (irq < 0) {17001700 ret = irq;17011701- goto out_disable_clks;17011701+ goto out_iounmap;17021702 }1703170317041704 s->port.irq = irq;17051705 ret = devm_request_irq(&pdev->dev, irq, mxs_auart_irq_handle, 0,17061706 dev_name(&pdev->dev), s);17071707 if (ret)17081708- goto out_disable_clks;17081708+ goto out_iounmap;1709170917101710 platform_set_drvdata(pdev, s);1711171117121712 ret = mxs_auart_init_gpios(s, &pdev->dev);17131713 if (ret) {17141714 dev_err(&pdev->dev, "Failed to initialize GPIOs.\n");17151715- goto out_disable_clks;17151715+ goto out_iounmap;17161716 }1717171717181718 /*···17201720 */17211721 ret = mxs_auart_request_gpio_irq(s);17221722 if (ret)17231723- goto out_disable_clks;17231723+ goto out_iounmap;1724172417251725 auart_port[s->port.line] = s;17261726···17461746 mxs_auart_free_gpio_irq(s);17471747 auart_port[pdev->id] = NULL;1748174817491749+out_iounmap:17501750+ iounmap(s->port.membase);17511751+17491752out_disable_clks:17501753 if (is_asm9260_auart(s)) {17511754 clk_disable_unprepare(s->clk);···17641761 uart_remove_one_port(&auart_driver, &s->port);17651762 auart_port[pdev->id] = NULL;17661763 mxs_auart_free_gpio_irq(s);17641764+ iounmap(s->port.membase);17671765 if (is_asm9260_auart(s)) {17681766 clk_disable_unprepare(s->clk);17691767 clk_disable_unprepare(s->clk_ahb);
+7-9
drivers/tty/serial/serial-tegra.c
···635635}636636637637static void tegra_uart_handle_rx_pio(struct tegra_uart_port *tup,638638- struct tty_port *tty)638638+ struct tty_port *port)639639{640640 do {641641 char flag = TTY_NORMAL;···653653 ch = (unsigned char) tegra_uart_read(tup, UART_RX);654654 tup->uport.icount.rx++;655655656656- if (!uart_handle_sysrq_char(&tup->uport, ch) && tty)657657- tty_insert_flip_char(tty, ch, flag);656656+ if (uart_handle_sysrq_char(&tup->uport, ch))657657+ continue;658658659659 if (tup->uport.ignore_status_mask & UART_LSR_DR)660660 continue;661661+662662+ tty_insert_flip_char(port, ch, flag);661663 } while (1);662664}663665664666static void tegra_uart_copy_rx_to_tty(struct tegra_uart_port *tup,665665- struct tty_port *tty,667667+ struct tty_port *port,666668 unsigned int count)667669{668670 int copied;···674672 return;675673676674 tup->uport.icount.rx += count;677677- if (!tty) {678678- dev_err(tup->uport.dev, "No tty port\n");679679- return;680680- }681675682676 if (tup->uport.ignore_status_mask & UART_LSR_DR)683677 return;684678685679 dma_sync_single_for_cpu(tup->uport.dev, tup->rx_dma_buf_phys,686680 count, DMA_FROM_DEVICE);687687- copied = tty_insert_flip_string(tty,681681+ copied = tty_insert_flip_string(port,688682 ((unsigned char *)(tup->rx_dma_buf_virt)), count);689683 if (copied != count) {690684 WARN_ON(1);
+17-98
drivers/tty/serial/serial_core.c
···41414242#define HIGH_BITS_OFFSET ((sizeof(long)-sizeof(int))*8)43434444-#define SYSRQ_TIMEOUT (HZ * 5)4545-4644static void uart_change_speed(struct tty_struct *tty, struct uart_state *state,4745 struct ktermios *old_termios);4846static void uart_wait_until_sent(struct tty_struct *tty, int timeout);···19141916 return uart_console(port) && (port->cons->flags & CON_ENABLED);19151917}1916191819191919+static void __uart_port_spin_lock_init(struct uart_port *port)19201920+{19211921+ spin_lock_init(&port->lock);19221922+ lockdep_set_class(&port->lock, &port_lock_key);19231923+}19241924+19171925/*19181926 * Ensure that the serial console lock is initialised early.19191927 * If this port is a console, then the spinlock is already initialised.···19291925 if (uart_console(port))19301926 return;1931192719321932- spin_lock_init(&port->lock);19331933- lockdep_set_class(&port->lock, &port_lock_key);19281928+ __uart_port_spin_lock_init(port);19341929}1935193019361931#if defined(CONFIG_SERIAL_CORE_CONSOLE) || defined(CONFIG_CONSOLE_POLL)···2374237123752372 /* Power up port for set_mctrl() */23762373 uart_change_pm(state, UART_PM_STATE_ON);23742374+23752375+ /*23762376+ * If this driver supports console, and it hasn't been23772377+ * successfully registered yet, initialise spin lock for it.23782378+ */23792379+ if (port->cons && !(port->cons->flags & CON_ENABLED))23802380+ __uart_port_spin_lock_init(port);2377238123782382 /*23792383 * Ensure that the modem control lines are de-activated.···31733163 * Returns false if @ch is out of enabling sequence and should be31743164 * handled some other way, true if @ch was consumed.31753165 */31763176-static bool uart_try_toggle_sysrq(struct uart_port *port, unsigned int ch)31663166+bool uart_try_toggle_sysrq(struct uart_port *port, unsigned int ch)31773167{31783168 int sysrq_toggle_seq_len = strlen(sysrq_toggle_seq);31793169···31963186 port->sysrq = 0;31973187 return true;31983188}31993199-#else32003200-static inline bool 
uart_try_toggle_sysrq(struct uart_port *port, unsigned int ch)32013201-{32023202- return false;32033203-}31893189+EXPORT_SYMBOL_GPL(uart_try_toggle_sysrq);32043190#endif32053205-32063206-int uart_handle_sysrq_char(struct uart_port *port, unsigned int ch)32073207-{32083208- if (!IS_ENABLED(CONFIG_MAGIC_SYSRQ_SERIAL))32093209- return 0;32103210-32113211- if (!port->has_sysrq || !port->sysrq)32123212- return 0;32133213-32143214- if (ch && time_before(jiffies, port->sysrq)) {32153215- if (sysrq_mask()) {32163216- handle_sysrq(ch);32173217- port->sysrq = 0;32183218- return 1;32193219- }32203220- if (uart_try_toggle_sysrq(port, ch))32213221- return 1;32223222- }32233223- port->sysrq = 0;32243224-32253225- return 0;32263226-}32273227-EXPORT_SYMBOL_GPL(uart_handle_sysrq_char);32283228-32293229-int uart_prepare_sysrq_char(struct uart_port *port, unsigned int ch)32303230-{32313231- if (!IS_ENABLED(CONFIG_MAGIC_SYSRQ_SERIAL))32323232- return 0;32333233-32343234- if (!port->has_sysrq || !port->sysrq)32353235- return 0;32363236-32373237- if (ch && time_before(jiffies, port->sysrq)) {32383238- if (sysrq_mask()) {32393239- port->sysrq_ch = ch;32403240- port->sysrq = 0;32413241- return 1;32423242- }32433243- if (uart_try_toggle_sysrq(port, ch))32443244- return 1;32453245- }32463246- port->sysrq = 0;32473247-32483248- return 0;32493249-}32503250-EXPORT_SYMBOL_GPL(uart_prepare_sysrq_char);32513251-32523252-void uart_unlock_and_check_sysrq(struct uart_port *port, unsigned long flags)32533253-__releases(&port->lock)32543254-{32553255- if (port->has_sysrq) {32563256- int sysrq_ch = port->sysrq_ch;32573257-32583258- port->sysrq_ch = 0;32593259- spin_unlock_irqrestore(&port->lock, flags);32603260- if (sysrq_ch)32613261- handle_sysrq(sysrq_ch);32623262- } else {32633263- spin_unlock_irqrestore(&port->lock, flags);32643264- }32653265-}32663266-EXPORT_SYMBOL_GPL(uart_unlock_and_check_sysrq);32673267-32683268-/*32693269- * We do the SysRQ and SAK checking like this...32703270- */32713271-int 
uart_handle_break(struct uart_port *port)32723272-{32733273- struct uart_state *state = port->state;32743274-32753275- if (port->handle_break)32763276- port->handle_break(port);32773277-32783278- if (port->has_sysrq && uart_console(port)) {32793279- if (!port->sysrq) {32803280- port->sysrq = jiffies + SYSRQ_TIMEOUT;32813281- return 1;32823282- }32833283- port->sysrq = 0;32843284- }32853285-32863286- if (port->flags & UPF_SAK)32873287- do_SAK(state->port.tty);32883288- return 0;32893289-}32903290-EXPORT_SYMBOL_GPL(uart_handle_break);3291319132923192EXPORT_SYMBOL(uart_write_wakeup);32933193EXPORT_SYMBOL(uart_register_driver);···3209328932103290/**32113291 * uart_get_rs485_mode() - retrieve rs485 properties for given uart32123212- * @dev: uart device32133213- * @rs485conf: output parameter32923292+ * @port: uart device's target port32143293 *32153294 * This function implements the device tree binding described in32163295 * Documentation/devicetree/bindings/serial/rs485.txt.
+3
drivers/tty/serial/sh-sci.c
···33013301 sciport->port.flags |= UPF_HARD_FLOW;33023302 }3303330333043304+ if (sci_uart_driver.cons->index == sciport->port.line)33053305+ spin_lock_init(&sciport->port.lock);33063306+33043307 ret = uart_add_one_port(&sci_uart_driver, &sciport->port);33053308 if (ret) {33063309 sci_cleanup_single(sciport);
+6-3
drivers/tty/serial/xilinx_uartps.c
···14651465 cdns_uart_uart_driver.nr = CDNS_UART_NR_PORTS;14661466#ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE14671467 cdns_uart_uart_driver.cons = &cdns_uart_console;14681468- cdns_uart_console.index = id;14691468#endif1470146914711470 rc = uart_register_driver(&cdns_uart_uart_driver);···15801581 * If register_console() doesn't assign a value, then the console_port15811582 * pointer is cleaned up.15821583 */15831583- if (!console_port)15841584+ if (!console_port) {15851585+ cdns_uart_console.index = id;15841586 console_port = port;15871587+ }15851588#endif1586158915871590 rc = uart_add_one_port(&cdns_uart_uart_driver, port);···15961595#ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE15971596 /* This is not the port used for the console, so clean it up */15981597 if (console_port == port &&15991599- !(cdns_uart_uart_driver.cons->flags & CON_ENABLED))15981598+ !(cdns_uart_uart_driver.cons->flags & CON_ENABLED)) {16001599 console_port = NULL;16001600+ cdns_uart_console.index = -1;16011601+ }16011602#endif1602160316031604 cdns_uart_data->cts_override = of_property_read_bool(pdev->dev.of_node,
+18-11
drivers/tty/vt/vt.c
···10921092 .destruct = vc_port_destruct,10931093};1094109410951095+/*10961096+ * Change # of rows and columns (0 means unchanged/the size of fg_console)10971097+ * [this is to be used together with some user program10981098+ * like resize that changes the hardware videomode]10991099+ */11001100+#define VC_MAXCOL (32767)11011101+#define VC_MAXROW (32767)11021102+10951103int vc_allocate(unsigned int currcons) /* return 0 on success */10961104{10971105 struct vt_notifier_param param;10981106 struct vc_data *vc;11071107+ int err;1099110811001109 WARN_CONSOLE_UNLOCKED();11011110···11341125 if (!*vc->vc_uni_pagedir_loc)11351126 con_set_default_unimap(vc);1136112711281128+ err = -EINVAL;11291129+ if (vc->vc_cols > VC_MAXCOL || vc->vc_rows > VC_MAXROW ||11301130+ vc->vc_screenbuf_size > KMALLOC_MAX_SIZE || !vc->vc_screenbuf_size)11311131+ goto err_free;11321132+ err = -ENOMEM;11371133 vc->vc_screenbuf = kzalloc(vc->vc_screenbuf_size, GFP_KERNEL);11381134 if (!vc->vc_screenbuf)11391135 goto err_free;···11571143 visual_deinit(vc);11581144 kfree(vc);11591145 vc_cons[currcons].d = NULL;11601160- return -ENOMEM;11461146+ return err;11611147}1162114811631149static inline int resize_screen(struct vc_data *vc, int width, int height,···1171115711721158 return err;11731159}11741174-11751175-/*11761176- * Change # of rows and columns (0 means unchanged/the size of fg_console)11771177- * [this is to be used together with some user program11781178- * like resize that changes the hardware videomode]11791179- */11801180-#define VC_RESIZE_MAXCOL (32767)11811181-#define VC_RESIZE_MAXROW (32767)1182116011831161/**11841162 * vc_do_resize - resizing method for the tty···12071201 user = vc->vc_resize_user;12081202 vc->vc_resize_user = 0;1209120312101210- if (cols > VC_RESIZE_MAXCOL || lines > VC_RESIZE_MAXROW)12041204+ if (cols > VC_MAXCOL || lines > VC_MAXROW)12111205 return -EINVAL;1212120612131207 new_cols = (cols ? 
cols : vc->vc_cols);···12181212 if (new_cols == vc->vc_cols && new_rows == vc->vc_rows)12191213 return 0;1220121412211221- if (new_screen_size > KMALLOC_MAX_SIZE)12151215+ if (new_screen_size > KMALLOC_MAX_SIZE || !new_screen_size)12221216 return -EINVAL;12231217 newscreen = kzalloc(new_screen_size, GFP_USER);12241218 if (!newscreen)···33993393 INIT_WORK(&vc_cons[currcons].SAK_work, vc_SAK);34003394 tty_port_init(&vc->port);34013395 visual_init(vc, currcons, 1);33963396+ /* Assuming vc->vc_{cols,rows,screenbuf_size} are sane here. */34023397 vc->vc_screenbuf = kzalloc(vc->vc_screenbuf_size, GFP_NOWAIT);34033398 vc_init(vc, vc->vc_rows, vc->vc_cols,34043399 currcons || !vc->vc_sw->con_save_screen);
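The vt.c hunk moves the geometry and size checks ahead of the allocation and distinguishes EINVAL (bad request) from ENOMEM (allocation failure). A userspace sketch of that ordering, under stated assumptions: `MAXCOL`/`MAXROW` mirror the `VC_MAXCOL`/`VC_MAXROW` values in the patch, while `ALLOC_CAP` is an arbitrary stand-in for `KMALLOC_MAX_SIZE`.

```c
#include <stdlib.h>

#define MAXCOL 32767
#define MAXROW 32767
#define ALLOC_CAP (4u << 20)        /* stand-in for KMALLOC_MAX_SIZE */

/* Validate geometry and buffer size *before* allocating, rejecting a
 * zero-sized or oversized screen buffer with EINVAL rather than
 * reporting ENOMEM for a request that was never valid. */
static void *alloc_screenbuf(unsigned int cols, unsigned int rows,
                             size_t cell, int *err)
{
    size_t size = (size_t)cols * rows * cell;

    *err = -22;                     /* EINVAL */
    if (cols > MAXCOL || rows > MAXROW || !size || size > ALLOC_CAP)
        return NULL;

    *err = -12;                     /* ENOMEM: only after validation */
    void *buf = calloc(1, size);
    if (buf)
        *err = 0;
    return buf;
}
```

The same split is what lets `vc_allocate()` in the patch return `err` instead of a hard-coded `-ENOMEM`.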
+2-2
drivers/uio/uio_pdrv_genirq.c
···159159 priv->pdev = pdev;160160161161 if (!uioinfo->irq) {162162- ret = platform_get_irq(pdev, 0);162162+ ret = platform_get_irq_optional(pdev, 0);163163 uioinfo->irq = ret;164164- if (ret == -ENXIO && pdev->dev.of_node)164164+ if (ret == -ENXIO)165165 uioinfo->irq = UIO_IRQ_NONE;166166 else if (ret == -EPROBE_DEFER)167167 return ret;
···12431243 enable_irq(ci->irq);12441244}1245124512461246+/*12471247+ * Handle the wakeup interrupt triggered by the extcon connector.12481248+ * We need to call ci_irq again for extcon since the first12491249+ * interrupt (wakeup int) only brings the controller out of12501250+ * low power mode but does not handle any interrupts.12511251+ */12521252+static void ci_extcon_wakeup_int(struct ci_hdrc *ci)12531253+{12541254+ struct ci_hdrc_cable *cable_id, *cable_vbus;12551255+ u32 otgsc = hw_read_otgsc(ci, ~0);12561256+12571257+ cable_id = &ci->platdata->id_extcon;12581258+ cable_vbus = &ci->platdata->vbus_extcon;12591259+12601260+ if (!IS_ERR(cable_id->edev) && ci->is_otg &&12611261+ (otgsc & OTGSC_IDIE) && (otgsc & OTGSC_IDIS))12621262+ ci_irq(ci->irq, ci);12631263+12641264+ if (!IS_ERR(cable_vbus->edev) && ci->is_otg &&12651265+ (otgsc & OTGSC_BSVIE) && (otgsc & OTGSC_BSVIS))12661266+ ci_irq(ci->irq, ci);12671267+}12681268+12461269static int ci_controller_resume(struct device *dev)12471270{12481271 struct ci_hdrc *ci = dev_get_drvdata(dev);···12981275 enable_irq(ci->irq);12991276 if (ci_otg_is_fsm_mode(ci))13001277 ci_otg_fsm_wakeup_by_srp(ci);12781278+ ci_extcon_wakeup_int(ci);13011279 }1302128013031281 return 0;
···676676677677 if (!ep->ep.desc) {678678 spin_unlock_irqrestore(&udc->lock, flags);679679- /* REVISIT because this driver disables endpoints in680680- * reset_all_endpoints() before calling disconnect(),681681- * most gadget drivers would trigger this non-error ...682682- */683683- if (udc->gadget.speed != USB_SPEED_UNKNOWN)684684- DBG(DBG_ERR, "ep_disable: %s not enabled\n",685685- ep->ep.name);679679+ DBG(DBG_ERR, "ep_disable: %s not enabled\n", ep->ep.name);686680 return -EINVAL;687681 }688682 ep->ep.desc = NULL;···865871 u32 status;866872867873 DBG(DBG_GADGET | DBG_QUEUE, "ep_dequeue: %s, req %p\n",868868- ep->ep.name, req);874874+ ep->ep.name, _req);869875870876 spin_lock_irqsave(&udc->lock, flags);871877
+5-2
drivers/usb/gadget/udc/gr_udc.c
···1980198019811981 if (num == 0) {19821982 _req = gr_alloc_request(&ep->ep, GFP_ATOMIC);19831983+ if (!_req)19841984+ return -ENOMEM;19851985+19831986 buf = devm_kzalloc(dev->dev, PAGE_SIZE, GFP_DMA | GFP_ATOMIC);19841984- if (!_req || !buf) {19851985- /* possible _req freed by gr_probe via gr_remove */19871987+ if (!buf) {19881988+ gr_free_request(&ep->ep, _req);19861989 return -ENOMEM;19871990 }19881991
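The gr_udc fix replaces a combined `if (!_req || !buf)` check, which leaked `_req` when only the second allocation failed, with step-by-step unwinding. A self-contained sketch of that pattern with hypothetical names (`pair_setup` and friends are illustrations, not driver functions):

```c
#include <stdlib.h>

struct pair { void *req; void *buf; };

/* Two-step setup mirroring the hunk: if the second allocation fails,
 * the first must be released before returning, instead of testing
 * both results with one combined check (which leaks the first). */
static int pair_setup(struct pair *p, size_t n)
{
    p->req = malloc(n);
    if (!p->req)
        return -1;                  /* -ENOMEM in the kernel code */

    p->buf = calloc(1, n);
    if (!p->buf) {
        free(p->req);               /* undo the first step: no leak */
        p->req = NULL;
        return -1;
    }
    return 0;
}

static void pair_teardown(struct pair *p)
{
    free(p->buf);
    free(p->req);
}
```

Checking each allocation immediately keeps the error path of every step responsible only for the steps that already succeeded.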
+1-1
drivers/usb/gadget/usbstring.c
···68686969/**7070 * usb_validate_langid - validate usb language identifiers7171- * @lang: usb language identifier7171+ * @langid: usb language identifier7272 *7373 * Returns true for valid language identifier, otherwise false.7474 */
+4
drivers/usb/host/xhci-mtk-sch.c
···557557 if (is_fs_or_ls(speed) && !has_tt)558558 return false;559559560560+ /* skip endpoint with zero maxpkt */561561+ if (usb_endpoint_maxp(&ep->desc) == 0)562562+ return false;563563+560564 return true;561565}562566
···14441444 or_mask = caps->u.in.or_mask;14451445 not_mask = caps->u.in.not_mask;1446144614471447- if ((or_mask | not_mask) & ~VMMDEV_EVENT_VALID_EVENT_MASK)14471447+ if ((or_mask | not_mask) & ~VMMDEV_GUEST_CAPABILITIES_MASK)14481448 return -EINVAL;1449144914501450 ret = vbg_set_session_capabilities(gdev, session, or_mask, not_mask,···1520152015211521 /* For VMMDEV_REQUEST hdr->type != VBG_IOCTL_HDR_TYPE_DEFAULT */15221522 if (req_no_size == VBG_IOCTL_VMMDEV_REQUEST(0) ||15231523- req == VBG_IOCTL_VMMDEV_REQUEST_BIG)15231523+ req == VBG_IOCTL_VMMDEV_REQUEST_BIG ||15241524+ req == VBG_IOCTL_VMMDEV_REQUEST_BIG_ALT)15241525 return vbg_ioctl_vmmrequest(gdev, session, data);1525152615261527 if (hdr->type != VBG_IOCTL_HDR_TYPE_DEFAULT)···15591558 case VBG_IOCTL_HGCM_CALL(0):15601559 return vbg_ioctl_hgcm_call(gdev, session, f32bit, data);15611560 case VBG_IOCTL_LOG(0):15611561+ case VBG_IOCTL_LOG_ALT(0):15621562 return vbg_ioctl_log(data);15631563 }15641564
+15
drivers/virt/vboxguest/vboxguest_core.h
···1515#include <linux/vboxguest.h>1616#include "vmmdev.h"17171818+/*1919+ * The mainline kernel version (this version) of the vboxguest module2020+ * contained a bug where it defined VBGL_IOCTL_VMMDEV_REQUEST_BIG and2121+ * VBGL_IOCTL_LOG using _IOC(_IOC_READ | _IOC_WRITE, 'V', ...) instead2222+ * of _IO(V, ...) as the out of tree VirtualBox upstream version does.2323+ *2424+ * These _ALT definitions keep compatibility with the wrong defines the2525+ * mainline kernel version used for a while.2626+ * Note the VirtualBox userspace bits have always been built against2727+ * VirtualBox upstream's headers, so this is likely not necessary. But2828+ * we must never break our ABI so we keep these around to be 100% sure.2929+ */3030+#define VBG_IOCTL_VMMDEV_REQUEST_BIG_ALT _IOC(_IOC_READ | _IOC_WRITE, 'V', 3, 0)3131+#define VBG_IOCTL_LOG_ALT(s) _IOC(_IOC_READ | _IOC_WRITE, 'V', 9, s)3232+1833struct vbg_session;19342035/** VBox guest memory balloon. */
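The `_ALT` defines above exist because `_IOC(_IOC_READ | _IOC_WRITE, ...)` and `_IO(...)` encode different request numbers for the same command. The sketch below is a simplified re-creation of the asm-generic ioctl number layout used on most architectures (8-bit nr, 8-bit type, 14-bit size, 2-bit dir); that layout is an assumption here, not taken from the patch, and the `MY_`/`LOG_` names are invented for illustration.

```c
#define MY_IOC_NRBITS   8
#define MY_IOC_TYPEBITS 8
#define MY_IOC_SIZEBITS 14

#define MY_IOC_NRSHIFT   0
#define MY_IOC_TYPESHIFT (MY_IOC_NRSHIFT + MY_IOC_NRBITS)
#define MY_IOC_SIZESHIFT (MY_IOC_TYPESHIFT + MY_IOC_TYPEBITS)
#define MY_IOC_DIRSHIFT  (MY_IOC_SIZESHIFT + MY_IOC_SIZEBITS)

#define MY_IOC_NONE  0U
#define MY_IOC_WRITE 1U
#define MY_IOC_READ  2U

/* Pack dir/type/nr/size into one request number. */
#define MY_IOC(dir, type, nr, size) \
    (((dir) << MY_IOC_DIRSHIFT) | ((type) << MY_IOC_TYPESHIFT) | \
     ((nr) << MY_IOC_NRSHIFT) | ((size) << MY_IOC_SIZESHIFT))

#define MY_IO(type, nr) MY_IOC(MY_IOC_NONE, (type), (nr), 0)

/* Two spellings of "request 9 on 'V'": the direction bits (and, for
 * _IO, the size field) differ, so the numbers are distinct and a
 * compatible driver must accept both. */
#define LOG_UPSTREAM(s) MY_IO('V', 9)
#define LOG_ALT(s)      MY_IOC(MY_IOC_READ | MY_IOC_WRITE, 'V', 9, (s))
```

Because the two encodings never collide, dispatching on both request numbers (as the `vbg_misc_device_ioctl` hunk does) is safe.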
+2-1
drivers/virt/vboxguest/vboxguest_linux.c
···131131 * the need for a bounce-buffer and another copy later on.132132 */133133 is_vmmdev_req = (req & ~IOCSIZE_MASK) == VBG_IOCTL_VMMDEV_REQUEST(0) ||134134- req == VBG_IOCTL_VMMDEV_REQUEST_BIG;134134+ req == VBG_IOCTL_VMMDEV_REQUEST_BIG ||135135+ req == VBG_IOCTL_VMMDEV_REQUEST_BIG_ALT;135136136137 if (is_vmmdev_req)137138 buf = vbg_req_alloc(size, VBG_IOCTL_HDR_TYPE_DEFAULT,
+2
drivers/virt/vboxguest/vmmdev.h
···206206 * not.207207 */208208#define VMMDEV_GUEST_SUPPORTS_GRAPHICS BIT(2)209209+/* The mask of valid capabilities, for sanity checking. */210210+#define VMMDEV_GUEST_CAPABILITIES_MASK 0x00000007U209211210212/** struct vmmdev_hypervisorinfo - Hypervisor info structure. */211213struct vmmdev_hypervisorinfo {
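The new `VMMDEV_GUEST_CAPABILITIES_MASK` is used with the classic reject-unknown-bits idiom shown in the vboxguest_core.c hunk. A tiny standalone version (the mask value 0x7 is taken from the patch; `check_caps` is an invented name):

```c
#include <stdint.h>

#define GUEST_CAPABILITIES_MASK 0x00000007U

/* Reject the request if any bit outside the documented capability
 * mask is set in either the set-mask or the clear-mask. */
static int check_caps(uint32_t or_mask, uint32_t not_mask)
{
    if ((or_mask | not_mask) & ~GUEST_CAPABILITIES_MASK)
        return -1;                  /* -EINVAL in the kernel code */
    return 0;
}
```

ORing both masks first means one comparison covers every bit the caller tried to touch, set or clear.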
+2-2
drivers/virtio/virtio_mmio.c
···641641 &vm_cmdline_id, &consumed);642642643643 /*644644- * sscanf() must processes at least 2 chunks; also there644644+ * sscanf() must process at least 2 chunks; also there645645 * must be no extra characters after the last chunk, so646646 * str[consumed] must be '\0'647647 */648648- if (processed < 2 || str[consumed])648648+ if (processed < 2 || str[consumed] || irq == 0)649649 return -EINVAL;650650651651 resources[0].flags = IORESOURCE_MEM;
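The virtio_mmio hunk tightens a `sscanf()` parse: enough fields must match, `%n` must account for every input character, and (per the added check) a zero IRQ is rejected. A userspace sketch of the same checks; the `size@base:irq` format is modeled on the virtio_mmio command-line syntax, and `parse_device` is an invented helper.

```c
#include <stdio.h>

/* Parse "size@base:irq", rejecting partial matches, trailing garbage,
 * and a zero IRQ. %n records how many characters were consumed so the
 * str[consumed] check can detect anything left over. */
static int parse_device(const char *str, unsigned long long *base,
                        unsigned int *size, unsigned int *irq)
{
    int consumed = 0;
    int processed;

    *irq = 0;                       /* so the irq check is well-defined */
    processed = sscanf(str, "%u@%llu:%u%n", size, base, irq, &consumed);

    /* sscanf() must process at least 2 chunks, nothing may follow the
     * last chunk, and irq == 0 is invalid (mirrors the patch). */
    if (processed < 2 || str[consumed] != '\0' || *irq == 0)
        return -1;                  /* -EINVAL in the kernel code */
    return 0;
}
```

Note that when fewer than three fields match, `%n` is never reached, so `consumed` stays 0 and the trailing-garbage check fails naturally.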
+82-89
drivers/xen/xenbus/xenbus_client.c
···6969 unsigned int nr_handles;7070};71717272+struct map_ring_valloc {7373+ struct xenbus_map_node *node;7474+7575+ /* Why do we need two arrays? See comment of __xenbus_map_ring */7676+ union {7777+ unsigned long addrs[XENBUS_MAX_RING_GRANTS];7878+ pte_t *ptes[XENBUS_MAX_RING_GRANTS];7979+ };8080+ phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];8181+8282+ struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];8383+ struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];8484+8585+ unsigned int idx; /* HVM only. */8686+};8787+7288static DEFINE_SPINLOCK(xenbus_valloc_lock);7389static LIST_HEAD(xenbus_valloc_pages);74907591struct xenbus_ring_ops {7676- int (*map)(struct xenbus_device *dev,9292+ int (*map)(struct xenbus_device *dev, struct map_ring_valloc *info,7793 grant_ref_t *gnt_refs, unsigned int nr_grefs,7894 void **vaddr);7995 int (*unmap)(struct xenbus_device *dev, void *vaddr);···456440 * Map @nr_grefs pages of memory into this domain from another457441 * domain's grant table. xenbus_map_ring_valloc allocates @nr_grefs458442 * pages of virtual address space, maps the pages to that address, and459459- * sets *vaddr to that address. Returns 0 on success, and GNTST_*460460- * (see xen/include/interface/grant_table.h) or -ENOMEM / -EINVAL on443443+ * sets *vaddr to that address. Returns 0 on success, and -errno on461444 * error. If an error is returned, device will switch to462445 * XenbusStateClosing and the error message will be saved in XenStore.463446 */···464449 unsigned int nr_grefs, void **vaddr)465450{466451 int err;452452+ struct map_ring_valloc *info;467453468468- err = ring_ops->map(dev, gnt_refs, nr_grefs, vaddr);469469- /* Some hypervisors are buggy and can return 1. 
*/470470- if (err > 0)471471- err = GNTST_general_error;454454+ *vaddr = NULL;472455456456+ if (nr_grefs > XENBUS_MAX_RING_GRANTS)457457+ return -EINVAL;458458+459459+ info = kzalloc(sizeof(*info), GFP_KERNEL);460460+ if (!info)461461+ return -ENOMEM;462462+463463+ info->node = kzalloc(sizeof(*info->node), GFP_KERNEL);464464+ if (!info->node)465465+ err = -ENOMEM;466466+ else467467+ err = ring_ops->map(dev, info, gnt_refs, nr_grefs, vaddr);468468+469469+ kfree(info->node);470470+ kfree(info);473471 return err;474472}475473EXPORT_SYMBOL_GPL(xenbus_map_ring_valloc);···494466 grant_ref_t *gnt_refs,495467 unsigned int nr_grefs,496468 grant_handle_t *handles,497497- phys_addr_t *addrs,469469+ struct map_ring_valloc *info,498470 unsigned int flags,499471 bool *leaked)500472{501501- struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];502502- struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];503473 int i, j;504504- int err = GNTST_okay;505474506475 if (nr_grefs > XENBUS_MAX_RING_GRANTS)507476 return -EINVAL;508477509478 for (i = 0; i < nr_grefs; i++) {510510- memset(&map[i], 0, sizeof(map[i]));511511- gnttab_set_map_op(&map[i], addrs[i], flags, gnt_refs[i],512512- dev->otherend_id);479479+ gnttab_set_map_op(&info->map[i], info->phys_addrs[i], flags,480480+ gnt_refs[i], dev->otherend_id);513481 handles[i] = INVALID_GRANT_HANDLE;514482 }515483516516- gnttab_batch_map(map, i);484484+ gnttab_batch_map(info->map, i);517485518486 for (i = 0; i < nr_grefs; i++) {519519- if (map[i].status != GNTST_okay) {520520- err = map[i].status;521521- xenbus_dev_fatal(dev, map[i].status,487487+ if (info->map[i].status != GNTST_okay) {488488+ xenbus_dev_fatal(dev, info->map[i].status,522489 "mapping in shared page %d from domain %d",523490 gnt_refs[i], dev->otherend_id);524491 goto fail;525492 } else526526- handles[i] = map[i].handle;493493+ handles[i] = info->map[i].handle;527494 }528495529529- return GNTST_okay;496496+ return 0;530497531498 fail:532499 for (i = j = 0; i < 
nr_grefs; i++) {533500 if (handles[i] != INVALID_GRANT_HANDLE) {534534- memset(&unmap[j], 0, sizeof(unmap[j]));535535- gnttab_set_unmap_op(&unmap[j], (phys_addr_t)addrs[i],501501+ gnttab_set_unmap_op(&info->unmap[j],502502+ info->phys_addrs[i],536503 GNTMAP_host_map, handles[i]);537504 j++;538505 }539506 }540507541541- if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap, j))508508+ if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, info->unmap, j))542509 BUG();543510544511 *leaked = false;545512 for (i = 0; i < j; i++) {546546- if (unmap[i].status != GNTST_okay) {513513+ if (info->unmap[i].status != GNTST_okay) {547514 *leaked = true;548515 break;549516 }550517 }551518552552- return err;519519+ return -ENOENT;553520}554521555522/**···589566 return err;590567}591568592592-struct map_ring_valloc_hvm593593-{594594- unsigned int idx;595595-596596- /* Why do we need two arrays? See comment of __xenbus_map_ring */597597- phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];598598- unsigned long addrs[XENBUS_MAX_RING_GRANTS];599599-};600600-601569static void xenbus_map_ring_setup_grant_hvm(unsigned long gfn,602570 unsigned int goffset,603571 unsigned int len,604572 void *data)605573{606606- struct map_ring_valloc_hvm *info = data;574574+ struct map_ring_valloc *info = data;607575 unsigned long vaddr = (unsigned long)gfn_to_virt(gfn);608576609577 info->phys_addrs[info->idx] = vaddr;···603589 info->idx++;604590}605591606606-static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,607607- grant_ref_t *gnt_ref,608608- unsigned int nr_grefs,609609- void **vaddr)592592+static int xenbus_map_ring_hvm(struct xenbus_device *dev,593593+ struct map_ring_valloc *info,594594+ grant_ref_t *gnt_ref,595595+ unsigned int nr_grefs,596596+ void **vaddr)610597{611611- struct xenbus_map_node *node;598598+ struct xenbus_map_node *node = info->node;612599 int err;613600 void *addr;614601 bool leaked = false;615615- struct map_ring_valloc_hvm info = {616616- .idx = 0,617617- 
};618602 unsigned int nr_pages = XENBUS_PAGES(nr_grefs);619619-620620- if (nr_grefs > XENBUS_MAX_RING_GRANTS)621621- return -EINVAL;622622-623623- *vaddr = NULL;624624-625625- node = kzalloc(sizeof(*node), GFP_KERNEL);626626- if (!node)627627- return -ENOMEM;628603629604 err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);630605 if (err)···621618622619 gnttab_foreach_grant(node->hvm.pages, nr_grefs,623620 xenbus_map_ring_setup_grant_hvm,624624- &info);621621+ info);625622626623 err = __xenbus_map_ring(dev, gnt_ref, nr_grefs, node->handles,627627- info.phys_addrs, GNTMAP_host_map, &leaked);624624+ info, GNTMAP_host_map, &leaked);628625 node->nr_handles = nr_grefs;629626630627 if (err)···644641 spin_unlock(&xenbus_valloc_lock);645642646643 *vaddr = addr;644644+ info->node = NULL;645645+647646 return 0;648647649648 out_xenbus_unmap_ring:650649 if (!leaked)651651- xenbus_unmap_ring(dev, node->handles, nr_grefs, info.addrs);650650+ xenbus_unmap_ring(dev, node->handles, nr_grefs, info->addrs);652651 else653652 pr_alert("leaking %p size %u page(s)",654653 addr, nr_pages);···658653 if (!leaked)659654 free_xenballooned_pages(nr_pages, node->hvm.pages);660655 out_err:661661- kfree(node);662656 return err;663657}664658···680676EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);681677682678#ifdef CONFIG_XEN_PV683683-static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,684684- grant_ref_t *gnt_refs,685685- unsigned int nr_grefs,686686- void **vaddr)679679+static int xenbus_map_ring_pv(struct xenbus_device *dev,680680+ struct map_ring_valloc *info,681681+ grant_ref_t *gnt_refs,682682+ unsigned int nr_grefs,683683+ void **vaddr)687684{688688- struct xenbus_map_node *node;685685+ struct xenbus_map_node *node = info->node;689686 struct vm_struct *area;690690- pte_t *ptes[XENBUS_MAX_RING_GRANTS];691691- phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];692687 int err = GNTST_okay;693688 int i;694689 bool leaked;695690696696- *vaddr = NULL;697697-698698- if (nr_grefs > 
XENBUS_MAX_RING_GRANTS)699699- return -EINVAL;700700-701701- node = kzalloc(sizeof(*node), GFP_KERNEL);702702- if (!node)691691+ area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, info->ptes);692692+ if (!area)703693 return -ENOMEM;704704-705705- area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, ptes);706706- if (!area) {707707- kfree(node);708708- return -ENOMEM;709709- }710694711695 for (i = 0; i < nr_grefs; i++)712712- phys_addrs[i] = arbitrary_virt_to_machine(ptes[i]).maddr;696696+ info->phys_addrs[i] =697697+ arbitrary_virt_to_machine(info->ptes[i]).maddr;713698714699 err = __xenbus_map_ring(dev, gnt_refs, nr_grefs, node->handles,715715- phys_addrs,716716- GNTMAP_host_map | GNTMAP_contains_pte,700700+ info, GNTMAP_host_map | GNTMAP_contains_pte,717701 &leaked);718702 if (err)719703 goto failed;···714722 spin_unlock(&xenbus_valloc_lock);715723716724 *vaddr = area->addr;725725+ info->node = NULL;726726+717727 return 0;718728719729failed:···724730 else725731 pr_alert("leaking VM area %p size %u page(s)", area, nr_grefs);726732727727- kfree(node);728733 return err;729734}730735731731-static int xenbus_unmap_ring_vfree_pv(struct xenbus_device *dev, void *vaddr)736736+static int xenbus_unmap_ring_pv(struct xenbus_device *dev, void *vaddr)732737{733738 struct xenbus_map_node *node;734739 struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];···791798}792799793800static const struct xenbus_ring_ops ring_ops_pv = {794794- .map = xenbus_map_ring_valloc_pv,795795- .unmap = xenbus_unmap_ring_vfree_pv,801801+ .map = xenbus_map_ring_pv,802802+ .unmap = xenbus_unmap_ring_pv,796803};797804#endif798805799799-struct unmap_ring_vfree_hvm806806+struct unmap_ring_hvm800807{801808 unsigned int idx;802809 unsigned long addrs[XENBUS_MAX_RING_GRANTS];···807814 unsigned int len,808815 void *data)809816{810810- struct unmap_ring_vfree_hvm *info = data;817817+ struct unmap_ring_hvm *info = data;811818812819 info->addrs[info->idx] = (unsigned long)gfn_to_virt(gfn);813820814821 
info->idx++;815822}816823817817-static int xenbus_unmap_ring_vfree_hvm(struct xenbus_device *dev, void *vaddr)824824+static int xenbus_unmap_ring_hvm(struct xenbus_device *dev, void *vaddr)818825{819826 int rv;820827 struct xenbus_map_node *node;821828 void *addr;822822- struct unmap_ring_vfree_hvm info = {829829+ struct unmap_ring_hvm info = {823830 .idx = 0,824831 };825832 unsigned int nr_pages;···880887EXPORT_SYMBOL_GPL(xenbus_read_driver_state);881888882889static const struct xenbus_ring_ops ring_ops_hvm = {883883- .map = xenbus_map_ring_valloc_hvm,884884- .unmap = xenbus_unmap_ring_vfree_hvm,890890+ .map = xenbus_map_ring_hvm,891891+ .unmap = xenbus_unmap_ring_hvm,885892};886893887894void __init xenbus_ring_ops_init(void)
···25932593 !extent_buffer_uptodate(tree_root->node)) {25942594 handle_error = true;2595259525962596- if (IS_ERR(tree_root->node))25962596+ if (IS_ERR(tree_root->node)) {25972597 ret = PTR_ERR(tree_root->node);25982598- else if (!extent_buffer_uptodate(tree_root->node))25982598+ tree_root->node = NULL;25992599+ } else if (!extent_buffer_uptodate(tree_root->node)) {25992600 ret = -EUCLEAN;26012601+ }2600260226012603 btrfs_warn(fs_info, "failed to read tree root");26022604 continue;
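The disk-io.c fix nulls `tree_root->node` after extracting the errno, so later cleanup never treats an error-encoded pointer as a valid buffer. Below is a minimal userspace re-creation of the kernel's `ERR_PTR` convention (small negative errno values stored in the top of the address space); the encoding trick relies on implementation-defined pointer casts that work on common platforms, and `struct root`/`load_node` are invented for illustration.

```c
#include <stddef.h>
#include <errno.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long err)     { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
    return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

struct root { void *node; };

/* The shape of the fix: when a load fails, record the errno and reset
 * the cached pointer to NULL so nothing later dereferences or frees
 * an error-encoded value. */
static int load_node(struct root *r, void *result)
{
    r->node = result;
    if (IS_ERR(r->node)) {
        int ret = (int)PTR_ERR(r->node);

        r->node = NULL;             /* the line the patch adds */
        return ret;
    }
    return 0;
}
```

Without the reset, a `continue`-and-retry loop like the one in the hunk would leave a dangling `ERR_PTR` value behind for cleanup code to trip over.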
+26-17
fs/btrfs/extent_io.c
···19991999 if (!PageDirty(pages[i]) ||20002000 pages[i]->mapping != mapping) {20012001 unlock_page(pages[i]);20022002- put_page(pages[i]);20022002+ for (; i < ret; i++)20032003+ put_page(pages[i]);20032004 err = -EAGAIN;20042005 goto out;20052006 }···50595058static void check_buffer_tree_ref(struct extent_buffer *eb)50605059{50615060 int refs;50625062- /* the ref bit is tricky. We have to make sure it is set50635063- * if we have the buffer dirty. Otherwise the50645064- * code to free a buffer can end up dropping a dirty50655065- * page50615061+ /*50625062+ * The TREE_REF bit is first set when the extent_buffer is added50635063+ * to the radix tree. It is also reset, if unset, when a new reference50645064+ * is created by find_extent_buffer.50665065 *50675067- * Once the ref bit is set, it won't go away while the50685068- * buffer is dirty or in writeback, and it also won't50695069- * go away while we have the reference count on the50705070- * eb bumped.50665066+ * It is only cleared in two cases: freeing the last non-tree50675067+ * reference to the extent_buffer when its STALE bit is set or50685068+ * calling releasepage when the tree reference is the only reference.50715069 *50725072- * We can't just set the ref bit without bumping the50735073- * ref on the eb because free_extent_buffer might50745074- * see the ref bit and try to clear it. If this happens50755075- * free_extent_buffer might end up dropping our original50765076- * ref by mistake and freeing the page before we are able50775077- * to add one more ref.50705070+ * In both cases, care is taken to ensure that the extent_buffer's50715071+ * pages are not under io. However, releasepage can be concurrently50725072+ * called with creating new references, which is prone to race50735073+ * conditions between the calls to check_buffer_tree_ref in those50745074+ * codepaths and clearing TREE_REF in try_release_extent_buffer.50785075 *50795079- * So bump the ref count first, then set the bit. 
If someone50805080- * beat us to it, drop the ref we added.50765076+ * The actual lifetime of the extent_buffer in the radix tree is50775077+ * adequately protected by the refcount, but the TREE_REF bit and50785078+ * its corresponding reference are not. To protect against this50795079+ * class of races, we call check_buffer_tree_ref from the codepaths50805080+ * which trigger io after they set eb->io_pages. Note that once io is50815081+ * initiated, TREE_REF can no longer be cleared, so that is the50825082+ * moment at which any such race is best fixed.50815083 */50825084 refs = atomic_read(&eb->refs);50835085 if (refs >= 2 && test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))···55315527 clear_bit(EXTENT_BUFFER_READ_ERR, &eb->bflags);55325528 eb->read_mirror = 0;55335529 atomic_set(&eb->io_pages, num_reads);55305530+ /*55315531+ * It is possible for releasepage to clear the TREE_REF bit before we55325532+ * set io_pages. See check_buffer_tree_ref for a more detailed comment.55335533+ */55345534+ check_buffer_tree_ref(eb);55345535 for (i = 0; i < num_pages; i++) {55355536 page = eb->pages[i];55365537
@@ -1690,11 +1690,7 @@
 			ret = fallback_to_cow(inode, locked_page, cow_start,
 					      found_key.offset - 1,
 					      page_started, nr_written);
-			if (ret) {
-				if (nocow)
-					btrfs_dec_nocow_writers(fs_info,
-								disk_bytenr);
+			if (ret)
 				goto error;
-			}
 			cow_start = (u64)-1;
 		}
@@ -1707,9 +1711,6 @@
 						  ram_bytes, BTRFS_COMPRESS_NONE,
 						  BTRFS_ORDERED_PREALLOC);
 			if (IS_ERR(em)) {
-				if (nocow)
-					btrfs_dec_nocow_writers(fs_info,
-								disk_bytenr);
 				ret = PTR_ERR(em);
 				goto error;
 			}
@@ -8123,20 +8130,17 @@
 	/*
	 * Qgroup reserved space handler
	 * Page here will be either
-	 * 1) Already written to disk
-	 *    In this case, its reserved space is released from data rsv map
-	 *    and will be freed by delayed_ref handler finally.
-	 *    So even we call qgroup_free_data(), it won't decrease reserved
-	 *    space.
-	 * 2) Not written to disk
-	 *    This means the reserved space should be freed here. However,
-	 *    if a truncate invalidates the page (by clearing PageDirty)
-	 *    and the page is accounted for while allocating extent
-	 *    in btrfs_check_data_free_space() we let delayed_ref to
-	 *    free the entire extent.
+	 * 1) Already written to disk or ordered extent already submitted
+	 *    Then its QGROUP_RESERVED bit in io_tree is already cleaned.
+	 *    Qgroup will be handled by its qgroup_record then.
+	 *    btrfs_qgroup_free_data() call will do nothing here.
+	 *
+	 * 2) Not written to disk yet
+	 *    Then btrfs_qgroup_free_data() call will clear the QGROUP_RESERVED
+	 *    bit of its io_tree, and free the qgroup reserved data space.
+	 *    Since the IO will never happen for this page.
	 */
-	if (PageDirty(page))
-		btrfs_qgroup_free_data(inode, NULL, page_start, PAGE_SIZE);
+	btrfs_qgroup_free_data(inode, NULL, page_start, PAGE_SIZE);
 	if (!inode_evicting) {
 		clear_extent_bit(tree, page_start, page_end, EXTENT_LOCKED |
 				 EXTENT_DELALLOC | EXTENT_DELALLOC_NEW |
+1-1
fs/btrfs/ref-verify.c
@@ -509,7 +509,7 @@
 	switch (key.type) {
 	case BTRFS_EXTENT_ITEM_KEY:
 		*num_bytes = key.offset;
-		/* fall through */
+		fallthrough;
 	case BTRFS_METADATA_ITEM_KEY:
 		*bytenr = key.objectid;
 		ret = process_extent_item(fs_info, path, &key, i,
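Several of the hunks in this listing replace `/* fall through */` comments with the kernel's `fallthrough` pseudo-keyword, which the compiler can actually check. A minimal standalone sketch of the idea (the macro definition below is an illustrative approximation, not the kernel's exact one):

```c
#include <assert.h>

/* Illustrative stand-in for the kernel's fallthrough pseudo-keyword: on
 * compilers that support the statement attribute it silences
 * -Wimplicit-fallthrough at exactly the marked spots; elsewhere it is a
 * harmless no-op statement. */
#if defined(__has_attribute)
# if __has_attribute(__fallthrough__)
#  define fallthrough __attribute__((__fallthrough__))
# endif
#endif
#ifndef fallthrough
# define fallthrough do {} while (0)
#endif

/* A deliberate fall-through: op 2 implies everything op 1 does. */
static int classify(int op)
{
	int flags = 0;

	switch (op) {
	case 2:
		flags |= 4;
		fallthrough;
	case 1:
		flags |= 1;
		break;
	default:
		flags = -1;
	}
	return flags;
}
```

Unlike the old comment convention, a misplaced `fallthrough;` (one not immediately followed by a case label) is a compile-time diagnostic on supporting compilers.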
@@ -523,7 +523,7 @@
 	case Opt_compress_force:
 	case Opt_compress_force_type:
 		compress_force = true;
-		/* Fallthrough */
+		fallthrough;
 	case Opt_compress:
 	case Opt_compress_type:
 		saved_compress_type = btrfs_test_opt(info,
@@ -622,7 +622,7 @@
 		btrfs_set_opt(info->mount_opt, NOSSD);
 		btrfs_clear_and_info(info, SSD,
 				     "not using ssd optimizations");
-		/* Fallthrough */
+		fallthrough;
 	case Opt_nossd_spread:
 		btrfs_clear_and_info(info, SSD_SPREAD,
 				     "not using spread ssd allocation scheme");
@@ -793,7 +793,7 @@
 	case Opt_recovery:
 		btrfs_warn(info,
 			   "'recovery' is deprecated, use 'usebackuproot' instead");
-		/* fall through */
+		fallthrough;
 	case Opt_usebackuproot:
 		btrfs_info(info,
 			   "trying to use backup root at mount time");
+8
fs/btrfs/volumes.c
@@ -7052,6 +7052,14 @@
 	mutex_lock(&fs_info->chunk_mutex);
 
 	/*
+	 * It is possible for mount and umount to race in such a way that
+	 * we execute this code path, but open_fs_devices failed to clear
+	 * total_rw_bytes. We certainly want it cleared before reading the
+	 * device items, so clear it here.
+	 */
+	fs_info->fs_devices->total_rw_bytes = 0;
+
+	/*
	 * Read all device items, and then all the chunk items. All
	 * device items are found before any chunk item (their object id
	 * is smaller than the lowest possible object id for a chunk
+1-1
fs/btrfs/volumes.h
@@ -408,7 +408,7 @@
 		return BTRFS_MAP_WRITE;
 	default:
 		WARN_ON_ONCE(1);
-		/* fall through */
+		fallthrough;
 	case REQ_OP_READ:
 		return BTRFS_MAP_READ;
 	}
+1-1
fs/cachefiles/rdwr.c
@@ -937,7 +937,7 @@
 	}
 
 	data = kmap(page);
-	ret = __kernel_write(file, data, len, &pos);
+	ret = kernel_write(file, data, len, &pos);
 	kunmap(page);
 	fput(file);
 	if (ret != len)
@@ -5306,9 +5306,15 @@
 	vol_info->nocase = master_tcon->nocase;
 	vol_info->nohandlecache = master_tcon->nohandlecache;
 	vol_info->local_lease = master_tcon->local_lease;
+	vol_info->no_lease = master_tcon->no_lease;
+	vol_info->resilient = master_tcon->use_resilient;
+	vol_info->persistent = master_tcon->use_persistent;
+	vol_info->handle_timeout = master_tcon->handle_timeout;
 	vol_info->no_linux_ext = !master_tcon->unix_ext;
+	vol_info->linux_ext = master_tcon->posix_extensions;
 	vol_info->sectype = master_tcon->ses->sectype;
 	vol_info->sign = master_tcon->ses->sign;
+	vol_info->seal = master_tcon->seal;
 
 	rc = cifs_set_vol_auth(vol_info, master_tcon->ses);
 	if (rc) {
@@ -5339,10 +5333,6 @@
 		cifs_put_smb_ses(ses);
 		goto out;
 	}
-
-	/* if new SMB3.11 POSIX extensions are supported do not remap / and \ */
-	if (tcon->posix_extensions)
-		cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_POSIX_PATHS;
 
 	if (cap_unix(ses))
 		reset_cifs_unix_caps(0, tcon, NULL, vol_info);
+6-13
fs/cifs/file.c
@@ -1149,20 +1149,20 @@
 
 /*
  * Set the byte-range lock (posix style). Returns:
- * 1) 0, if we set the lock and don't need to request to the server;
- * 2) 1, if we need to request to the server;
- * 3) <0, if the error occurs while setting the lock.
+ * 1) <0, if the error occurs while setting the lock;
+ * 2) 0, if we set the lock and don't need to request to the server;
+ * 3) FILE_LOCK_DEFERRED, if we will wait for some other file_lock;
+ * 4) FILE_LOCK_DEFERRED + 1, if we need to request to the server.
 */
static int
cifs_posix_lock_set(struct file *file, struct file_lock *flock)
{
 	struct cifsInodeInfo *cinode = CIFS_I(file_inode(file));
-	int rc = 1;
+	int rc = FILE_LOCK_DEFERRED + 1;
 
 	if ((flock->fl_flags & FL_POSIX) == 0)
 		return rc;
 
-try_again:
 	cifs_down_write(&cinode->lock_sem);
 	if (!cinode->can_cache_brlcks) {
 		up_write(&cinode->lock_sem);
@@ -1171,13 +1171,6 @@
 
 	rc = posix_lock_file(file, flock, NULL);
 	up_write(&cinode->lock_sem);
-	if (rc == FILE_LOCK_DEFERRED) {
-		rc = wait_event_interruptible(flock->fl_wait,
-					list_empty(&flock->fl_blocked_member));
-		if (!rc)
-			goto try_again;
-		locks_delete_block(flock);
-	}
 	return rc;
}
 
@@ -1645,7 +1652,7 @@
 		int posix_lock_type;
 
 		rc = cifs_posix_lock_set(file, flock);
-		if (!rc || rc < 0)
+		if (rc <= FILE_LOCK_DEFERRED)
 			return rc;
 
 		if (type & server->vals->shared_lock_type)
@@ -523,7 +523,7 @@
 		      const int timeout, const int flags,
 		      unsigned int *instance)
{
-	int rc;
+	long rc;
 	int *credits;
 	int optype;
 	long int t;
+3-3
fs/efivarfs/super.c
@@ -201,6 +201,9 @@
 	sb->s_d_op = &efivarfs_d_ops;
 	sb->s_time_gran = 1;
 
+	if (!efivar_supports_writes())
+		sb->s_flags |= SB_RDONLY;
+
 	inode = efivarfs_get_inode(sb, NULL, S_IFDIR | 0755, 0, true);
 	if (!inode)
 		return -ENOMEM;
@@ -255,9 +252,6 @@
 
static __init int efivarfs_init(void)
{
-	if (!efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES))
-		return -ENODEV;
-
 	if (!efivars_kobject())
 		return -ENODEV;
 
+8-6
fs/exfat/dir.c
@@ -309,7 +309,7 @@
 	.llseek = generic_file_llseek,
 	.read = generic_read_dir,
 	.iterate = exfat_iterate,
-	.fsync = generic_file_fsync,
+	.fsync = exfat_file_fsync,
};
 
int exfat_alloc_new_dir(struct inode *inode, struct exfat_chain *clu)
@@ -425,10 +425,12 @@
 	ep->dentry.name.flags = 0x0;
 
 	for (i = 0; i < EXFAT_FILE_NAME_LEN; i++) {
-		ep->dentry.name.unicode_0_14[i] = cpu_to_le16(*uniname);
-		if (*uniname == 0x0)
-			break;
-		uniname++;
+		if (*uniname != 0x0) {
+			ep->dentry.name.unicode_0_14[i] = cpu_to_le16(*uniname);
+			uniname++;
+		} else {
+			ep->dentry.name.unicode_0_14[i] = 0x0;
+		}
 	}
}
 
@@ -1112,7 +1110,7 @@
 		ret = exfat_get_next_cluster(sb, &clu.dir);
 	}
 
-	if (ret || clu.dir != EXFAT_EOF_CLUSTER) {
+	if (ret || clu.dir == EXFAT_EOF_CLUSTER) {
 		/* just initialized hint_stat */
 		hint_stat->clu = p_dir->dir;
 		hint_stat->eidx = 0;
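The `unicode_0_14` change above stops breaking out of the loop at the NUL terminator and instead zero-fills the rest of the fixed-width field, so stale bytes are never written to disk. The same pattern can be sketched standalone (names and sizes here are illustrative, not exFAT's on-disk layout):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NAME_FIELD_LEN 15	/* illustrative, like EXFAT_FILE_NAME_LEN */

/* Copy a NUL-terminated UTF-16 name into a fixed-width field, padding
 * the tail with zeros. The buggy version stopped at the terminator and
 * left whatever was previously in dst[] beyond it. */
static void copy_name_padded(uint16_t dst[NAME_FIELD_LEN], const uint16_t *src)
{
	int i;

	for (i = 0; i < NAME_FIELD_LEN; i++) {
		if (*src != 0) {
			dst[i] = *src;	/* a real on-disk format would also
					 * convert to little-endian here */
			src++;
		} else {
			dst[i] = 0;	/* zero-fill the remainder */
		}
	}
}
```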
@@ -18,6 +18,7 @@
#include <linux/swap.h>
#include <linux/falloc.h>
#include <linux/uio.h>
+#include <linux/fs.h>
 
static struct page **fuse_pages_alloc(unsigned int npages, gfp_t flags,
				      struct fuse_page_desc **desc)
@@ -1587,7 +1586,6 @@
 	struct backing_dev_info *bdi = inode_to_bdi(inode);
 	int i;
 
-	rb_erase(&wpa->writepages_entry, &fi->writepages);
 	for (i = 0; i < ap->num_pages; i++) {
 		dec_wb_stat(&bdi->wb, WB_WRITEBACK);
 		dec_node_page_state(ap->pages[i], NR_WRITEBACK_TEMP);
@@ -1637,6 +1637,7 @@
 
 out_free:
 	fi->writectr--;
+	rb_erase(&wpa->writepages_entry, &fi->writepages);
 	fuse_writepage_finish(fc, wpa);
 	spin_unlock(&fi->lock);
 
@@ -1675,8 +1674,9 @@
 	}
}
 
-static void tree_insert(struct rb_root *root, struct fuse_writepage_args *wpa)
+static struct fuse_writepage_args *fuse_insert_writeback(struct rb_root *root,
+					struct fuse_writepage_args *wpa)
{
 	pgoff_t idx_from = wpa->ia.write.in.offset >> PAGE_SHIFT;
 	pgoff_t idx_to = idx_from + wpa->ia.ap.num_pages - 1;
@@ -1699,10 +1697,16 @@
 		else if (idx_to < curr_index)
 			p = &(*p)->rb_left;
 		else
-			return (void) WARN_ON(true);
+			return curr;
 	}
 
 	rb_link_node(&wpa->writepages_entry, parent, p);
 	rb_insert_color(&wpa->writepages_entry, root);
+	return NULL;
+}
+
+static void tree_insert(struct rb_root *root, struct fuse_writepage_args *wpa)
+{
+	WARN_ON(fuse_insert_writeback(root, wpa));
}
 
static void fuse_writepage_end(struct fuse_conn *fc, struct fuse_args *args,
@@ -1722,6 +1714,7 @@
 
 	mapping_set_error(inode->i_mapping, error);
 	spin_lock(&fi->lock);
+	rb_erase(&wpa->writepages_entry, &fi->writepages);
 	while (wpa->next) {
 		struct fuse_conn *fc = get_fuse_conn(inode);
 		struct fuse_write_in *inarg = &wpa->ia.write.in;
@@ -1961,13 +1952,14 @@
}
 
/*
- * First recheck under fi->lock if the offending offset is still under
- * writeback. If yes, then iterate auxiliary write requests, to see if there's
+ * Check under fi->lock if the page is under writeback, and insert it onto the
+ * rb_tree if not. Otherwise iterate auxiliary write requests, to see if there's
 * one already added for a page at this offset. If there's none, then insert
 * this new request onto the auxiliary list, otherwise reuse the existing one by
- * copying the new page contents over to the old temporary page.
+ * swapping the new temp page with the old one.
 */
-static bool fuse_writepage_in_flight(struct fuse_writepage_args *new_wpa,
-				     struct page *page)
+static bool fuse_writepage_add(struct fuse_writepage_args *new_wpa,
+			       struct page *page)
{
 	struct fuse_inode *fi = get_fuse_inode(new_wpa->inode);
 	struct fuse_writepage_args *tmp;
@@ -1976,17 +1967,14 @@
 	struct fuse_args_pages *new_ap = &new_wpa->ia.ap;
 
 	WARN_ON(new_ap->num_pages != 0);
+	new_ap->num_pages = 1;
 
 	spin_lock(&fi->lock);
-	rb_erase(&new_wpa->writepages_entry, &fi->writepages);
-	old_wpa = fuse_find_writeback(fi, page->index, page->index);
+	old_wpa = fuse_insert_writeback(&fi->writepages, new_wpa);
 	if (!old_wpa) {
-		tree_insert(&fi->writepages, new_wpa);
 		spin_unlock(&fi->lock);
-		return false;
+		return true;
 	}
 
-	new_ap->num_pages = 1;
 	for (tmp = old_wpa->next; tmp; tmp = tmp->next) {
 		pgoff_t curr_index;
 
@@ -2013,7 +2006,40 @@
 		fuse_writepage_free(new_wpa);
 	}
 
-	return true;
+	return false;
+}
+
+static bool fuse_writepage_need_send(struct fuse_conn *fc, struct page *page,
+				     struct fuse_args_pages *ap,
+				     struct fuse_fill_wb_data *data)
+{
+	WARN_ON(!ap->num_pages);
+
+	/*
+	 * Being under writeback is unlikely but possible. For example direct
+	 * read to an mmaped fuse file will set the page dirty twice; once when
+	 * the pages are faulted with get_user_pages(), and then after the read
+	 * completed.
+	 */
+	if (fuse_page_is_writeback(data->inode, page->index))
+		return true;
+
+	/* Reached max pages */
+	if (ap->num_pages == fc->max_pages)
+		return true;
+
+	/* Reached max write bytes */
+	if ((ap->num_pages + 1) * PAGE_SIZE > fc->max_write)
+		return true;
+
+	/* Discontinuity */
+	if (data->orig_pages[ap->num_pages - 1]->index + 1 != page->index)
+		return true;
+
+	/* Need to grow the pages array? If so, did the expansion fail? */
+	if (ap->num_pages == data->max_pages && !fuse_pages_realloc(data))
+		return true;
+
+	return false;
}
 
static int fuse_writepages_fill(struct page *page,
@@ -2060,7 +2019,6 @@
 	struct fuse_inode *fi = get_fuse_inode(inode);
 	struct fuse_conn *fc = get_fuse_conn(inode);
 	struct page *tmp_page;
-	bool is_writeback;
 	int err;
 
 	if (!data->ff) {
@@ -2069,25 +2029,8 @@
 		goto out_unlock;
 	}
 
-	/*
-	 * Being under writeback is unlikely but possible. For example direct
-	 * read to an mmaped fuse file will set the page dirty twice; once when
-	 * the pages are faulted with get_user_pages(), and then after the read
-	 * completed.
-	 */
-	is_writeback = fuse_page_is_writeback(inode, page->index);
-
-	if (wpa && ap->num_pages &&
-	    (is_writeback || ap->num_pages == fc->max_pages ||
-	     (ap->num_pages + 1) * PAGE_SIZE > fc->max_write ||
-	     data->orig_pages[ap->num_pages - 1]->index + 1 != page->index)) {
+	if (wpa && fuse_writepage_need_send(fc, page, ap, data)) {
 		fuse_writepages_send(data);
 		data->wpa = NULL;
-	} else if (wpa && ap->num_pages == data->max_pages) {
-		if (!fuse_pages_realloc(data)) {
-			fuse_writepages_send(data);
-			data->wpa = NULL;
-		}
 	}
 
 	err = -ENOMEM;
@@ -2109,12 +2085,6 @@
 		ap->args.end = fuse_writepage_end;
 		ap->num_pages = 0;
 		wpa->inode = inode;
-
-		spin_lock(&fi->lock);
-		tree_insert(&fi->writepages, wpa);
-		spin_unlock(&fi->lock);
-
-		data->wpa = wpa;
 	}
 	set_page_writeback(page);
 
@@ -2116,25 +2098,24 @@
 	ap->pages[ap->num_pages] = tmp_page;
 	ap->descs[ap->num_pages].offset = 0;
 	ap->descs[ap->num_pages].length = PAGE_SIZE;
+	data->orig_pages[ap->num_pages] = page;
 
 	inc_wb_stat(&inode_to_bdi(inode)->wb, WB_WRITEBACK);
 	inc_node_page_state(tmp_page, NR_WRITEBACK_TEMP);
 
 	err = 0;
-	if (is_writeback && fuse_writepage_in_flight(wpa, page)) {
+	if (data->wpa) {
+		/*
+		 * Protected by fi->lock against concurrent access by
+		 * fuse_page_is_writeback().
+		 */
+		spin_lock(&fi->lock);
+		ap->num_pages++;
+		spin_unlock(&fi->lock);
+	} else if (fuse_writepage_add(wpa, page)) {
+		data->wpa = wpa;
+	} else {
 		end_page_writeback(page);
-		data->wpa = NULL;
-		goto out_unlock;
 	}
-	data->orig_pages[ap->num_pages] = page;
-
-	/*
-	 * Protected by fi->lock against concurrent access by
-	 * fuse_page_is_writeback().
-	 */
-	spin_lock(&fi->lock);
-	ap->num_pages++;
-	spin_unlock(&fi->lock);
-
out_unlock:
 	unlock_page(page);
 
@@ -2166,11 +2149,9 @@
 
 	err = write_cache_pages(mapping, wbc, fuse_writepages_fill, &data);
 	if (data.wpa) {
-		/* Ignore errors if we can write at least one page */
 		WARN_ON(!data.wpa->ia.ap.num_pages);
 		fuse_writepages_send(&data);
-		err = 0;
 	}
 	if (data.ff)
 		fuse_file_put(data.ff, false, false);
@@ -2776,7 +2761,16 @@
 		struct iovec *iov = iov_page;
 
 		iov->iov_base = (void __user *)arg;
-		iov->iov_len = _IOC_SIZE(cmd);
+
+		switch (cmd) {
+		case FS_IOC_GETFLAGS:
+		case FS_IOC_SETFLAGS:
+			iov->iov_len = sizeof(int);
+			break;
+		default:
+			iov->iov_len = _IOC_SIZE(cmd);
+			break;
+		}
 
 		if (_IOC_DIR(cmd) & _IOC_WRITE) {
 			in_iov = iov;
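The fuse change above factors the batching decision out of fuse_writepages_fill() into a single predicate. The shape of that predicate can be sketched standalone (the struct fields and limits below are illustrative, not fuse's actual types):

```c
#include <assert.h>
#include <stdbool.h>

struct wb_batch {
	unsigned int num_pages;		/* pages already queued in this batch */
	unsigned long last_index;	/* page index of the last queued page */
};

/* Decide whether the current batch must be flushed before the page at
 * next_index can be appended, mirroring the structure of
 * fuse_writepage_need_send(): send when the batch is full, when adding
 * one more page would exceed the byte limit, or when the next page is
 * not contiguous with the previous one. */
static bool need_send(const struct wb_batch *b, unsigned long next_index,
		      unsigned int max_pages, unsigned long max_write,
		      unsigned long page_size)
{
	if (b->num_pages == max_pages)
		return true;				/* reached max pages */
	if ((b->num_pages + 1) * page_size > max_write)
		return true;				/* reached max write bytes */
	if (b->last_index + 1 != next_index)
		return true;				/* discontiguous page */
	return false;
}
```

Pulling the condition out of the fill callback keeps each early-return reason separately commented, which is exactly what the original compound `if` made hard to read.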
+16-3
fs/fuse/inode.c
@@ -121,10 +121,12 @@
 	}
}
 
-static int fuse_remount_fs(struct super_block *sb, int *flags, char *data)
+static int fuse_reconfigure(struct fs_context *fc)
{
+	struct super_block *sb = fc->root->d_sb;
+
 	sync_filesystem(sb);
-	if (*flags & SB_MANDLOCK)
+	if (fc->sb_flags & SB_MANDLOCK)
 		return -EINVAL;
 
 	return 0;
@@ -477,6 +475,17 @@
 	struct fuse_fs_context *ctx = fc->fs_private;
 	int opt;
 
+	if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE) {
+		/*
+		 * Ignore options coming from mount(MS_REMOUNT) for backward
+		 * compatibility.
+		 */
+		if (fc->oldapi)
+			return 0;
+
+		return invalfc(fc, "No changes allowed in reconfigure");
+	}
+
 	opt = fs_parse(fc, fuse_fs_parameters, param, &result);
 	if (opt < 0)
 		return opt;
@@ -830,7 +817,6 @@
 	.evict_inode	= fuse_evict_inode,
 	.write_inode	= fuse_write_inode,
 	.drop_inode	= generic_delete_inode,
-	.remount_fs	= fuse_remount_fs,
 	.put_super	= fuse_put_super,
 	.umount_begin	= fuse_umount_begin,
 	.statfs		= fuse_statfs,
@@ -1308,6 +1296,7 @@
static const struct fs_context_operations fuse_context_ops = {
 	.free		= fuse_free_fc,
 	.parse_param	= fuse_parse_param,
+	.reconfigure	= fuse_reconfigure,
 	.get_tree	= fuse_get_tree,
};
 
+1-44
fs/gfs2/aops.c
@@ -468,21 +468,10 @@
}
 
 
-/**
- * __gfs2_readpage - readpage
- * @file: The file to read a page for
- * @page: The page to read
- *
- * This is the core of gfs2's readpage. It's used by the internal file
- * reading code as in that case we already hold the glock. Also it's
- * called by gfs2_readpage() once the required lock has been granted.
- */
-
static int __gfs2_readpage(void *file, struct page *page)
{
 	struct gfs2_inode *ip = GFS2_I(page->mapping->host);
 	struct gfs2_sbd *sdp = GFS2_SB(page->mapping->host);
-
 	int error;
 
 	if (i_blocksize(page->mapping->host) == PAGE_SIZE &&
@@ -494,34 +505,9 @@
 * gfs2_readpage - read a page of a file
 * @file: The file to read
 * @page: The page of the file
- *
- * This deals with the locking required. We have to unlock and
- * relock the page in order to get the locking in the right
- * order.
 */
 
static int gfs2_readpage(struct file *file, struct page *page)
{
-	struct address_space *mapping = page->mapping;
-	struct gfs2_inode *ip = GFS2_I(mapping->host);
-	struct gfs2_holder gh;
-	int error;
-
-	unlock_page(page);
-	gfs2_holder_init(ip->i_gl, LM_ST_SHARED, 0, &gh);
-	error = gfs2_glock_nq(&gh);
-	if (unlikely(error))
-		goto out;
-	error = AOP_TRUNCATED_PAGE;
-	lock_page(page);
-	if (page->mapping == mapping && !PageUptodate(page))
-		error = __gfs2_readpage(file, page);
-	else
-		unlock_page(page);
-	gfs2_glock_dq(&gh);
-out:
-	gfs2_holder_uninit(&gh);
-	if (error && error != AOP_TRUNCATED_PAGE)
-		lock_page(page);
-	return error;
+	return __gfs2_readpage(file, page);
}
 
/**
@@ -562,16 +598,8 @@
{
 	struct inode *inode = rac->mapping->host;
 	struct gfs2_inode *ip = GFS2_I(inode);
-	struct gfs2_holder gh;
 
-	gfs2_holder_init(ip->i_gl, LM_ST_SHARED, 0, &gh);
-	if (gfs2_glock_nq(&gh))
-		goto out_uninit;
 	if (!gfs2_is_stuffed(ip))
 		mpage_readahead(rac, gfs2_block_map);
-	gfs2_glock_dq(&gh);
-out_uninit:
-	gfs2_holder_uninit(&gh);
}
 
/**
+50-2
fs/gfs2/file.c
@@ -558,8 +558,29 @@
 	return block_page_mkwrite_return(ret);
}
 
+static vm_fault_t gfs2_fault(struct vm_fault *vmf)
+{
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	struct gfs2_inode *ip = GFS2_I(inode);
+	struct gfs2_holder gh;
+	vm_fault_t ret;
+	int err;
+
+	gfs2_holder_init(ip->i_gl, LM_ST_SHARED, 0, &gh);
+	err = gfs2_glock_nq(&gh);
+	if (err) {
+		ret = block_page_mkwrite_return(err);
+		goto out_uninit;
+	}
+	ret = filemap_fault(vmf);
+	gfs2_glock_dq(&gh);
+out_uninit:
+	gfs2_holder_uninit(&gh);
+	return ret;
+}
+
static const struct vm_operations_struct gfs2_vm_ops = {
-	.fault = filemap_fault,
+	.fault = gfs2_fault,
 	.map_pages = filemap_map_pages,
 	.page_mkwrite = gfs2_page_mkwrite,
};
@@ -845,6 +824,9 @@
 
static ssize_t gfs2_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
+	struct gfs2_inode *ip;
+	struct gfs2_holder gh;
+	size_t written = 0;
 	ssize_t ret;
 
 	if (iocb->ki_flags & IOCB_DIRECT) {
@@ -856,7 +832,29 @@
 			return ret;
 		iocb->ki_flags &= ~IOCB_DIRECT;
 	}
-	return generic_file_read_iter(iocb, to);
+	iocb->ki_flags |= IOCB_NOIO;
+	ret = generic_file_read_iter(iocb, to);
+	iocb->ki_flags &= ~IOCB_NOIO;
+	if (ret >= 0) {
+		if (!iov_iter_count(to))
+			return ret;
+		written = ret;
+	} else {
+		if (ret != -EAGAIN)
+			return ret;
+		if (iocb->ki_flags & IOCB_NOWAIT)
+			return ret;
+	}
+	ip = GFS2_I(iocb->ki_filp->f_mapping->host);
+	gfs2_holder_init(ip->i_gl, LM_ST_SHARED, 0, &gh);
+	ret = gfs2_glock_nq(&gh);
+	if (ret)
+		goto out_uninit;
+	ret = generic_file_read_iter(iocb, to);
+	if (ret > 0)
+		written += ret;
+	gfs2_glock_dq(&gh);
+out_uninit:
+	gfs2_holder_uninit(&gh);
+	return written ? written : ret;
}
 
/**
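The new gfs2_file_read_iter() above follows a common shape: try a cheap no-I/O path first, keep any partial progress, and only fall back to the expensive locked path for the remainder. A minimal sketch of that shape (all names and the demo callbacks below are illustrative, not gfs2's):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

typedef long (*read_fn)(char *buf, size_t len);

/* Try the fast path (e.g. page-cache only, as with IOCB_NOIO); on a
 * short read or -EAGAIN, fall back to the slow path (where gfs2 would
 * take the glock) and accumulate the total. */
static long read_with_fallback(char *buf, size_t len, read_fn fast, read_fn slow)
{
	long written = 0;
	long ret = fast(buf, len);

	if (ret >= 0) {
		if ((size_t)ret == len)
			return ret;	/* fully satisfied without I/O */
		written = ret;
	} else if (ret != -EAGAIN) {
		return ret;		/* hard error: don't retry */
	}
	ret = slow(buf + written, len - written);
	if (ret > 0)
		written += ret;
	return written ? written : ret;
}

static const char demo_data[] = "hello, world";

/* demo fast path: pretend only the first 5 bytes are cached */
static long demo_fast(char *buf, size_t len)
{
	size_t n = len < 5 ? len : 5;

	memcpy(buf, demo_data, n);
	return (long)n;
}

/* demo slow path: serves the remainder, starting at offset 5 */
static long demo_slow(char *buf, size_t len)
{
	memcpy(buf, demo_data + 5, len);
	return (long)len;
}
```

Returning `written ? written : ret` mirrors the kernel convention of reporting partial progress in preference to a late error.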
@@ -207,10 +207,11 @@
 
 	if (no_formal_ino && ip->i_no_formal_ino &&
 	    no_formal_ino != ip->i_no_formal_ino) {
+		error = -ESTALE;
 		if (inode->i_state & I_NEW)
 			goto fail;
 		iput(inode);
-		return ERR_PTR(-ESTALE);
+		return ERR_PTR(error);
 	}
 
 	if (inode->i_state & I_NEW)
+19-6
fs/gfs2/log.c
@@ -613,6 +613,12 @@
 	return 0;
}
 
+static void __ordered_del_inode(struct gfs2_inode *ip)
+{
+	if (!list_empty(&ip->i_ordered))
+		list_del_init(&ip->i_ordered);
+}
+
static void gfs2_ordered_write(struct gfs2_sbd *sdp)
{
 	struct gfs2_inode *ip;
@@ -629,8 +623,7 @@
 	while (!list_empty(&sdp->sd_log_ordered)) {
 		ip = list_first_entry(&sdp->sd_log_ordered, struct gfs2_inode, i_ordered);
 		if (ip->i_inode.i_mapping->nrpages == 0) {
-			test_and_clear_bit(GIF_ORDERED, &ip->i_flags);
-			list_del(&ip->i_ordered);
+			__ordered_del_inode(ip);
 			continue;
 		}
 		list_move(&ip->i_ordered, &written);
@@ -648,8 +643,7 @@
 	spin_lock(&sdp->sd_ordered_lock);
 	while (!list_empty(&sdp->sd_log_ordered)) {
 		ip = list_first_entry(&sdp->sd_log_ordered, struct gfs2_inode, i_ordered);
-		list_del(&ip->i_ordered);
-		WARN_ON(!test_and_clear_bit(GIF_ORDERED, &ip->i_flags));
+		__ordered_del_inode(ip);
 		if (ip->i_inode.i_mapping->nrpages == 0)
 			continue;
 		spin_unlock(&sdp->sd_ordered_lock);
@@ -663,8 +659,7 @@
 	struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
 
 	spin_lock(&sdp->sd_ordered_lock);
-	if (test_and_clear_bit(GIF_ORDERED, &ip->i_flags))
-		list_del(&ip->i_ordered);
+	__ordered_del_inode(ip);
 	spin_unlock(&sdp->sd_ordered_lock);
}
 
@@ -1005,6 +1002,16 @@
 
out:
 	if (gfs2_withdrawn(sdp)) {
+		/**
+		 * If the tr_list is empty, we're withdrawing during a log
+		 * flush that targets a transaction, but the transaction was
+		 * never queued onto any of the ail lists. Here we add it to
+		 * ail1 just so that ail_drain() will find and free it.
+		 */
+		spin_lock(&sdp->sd_ail_lock);
+		if (tr && list_empty(&tr->tr_list))
+			list_add(&tr->tr_list, &sdp->sd_ail1_list);
+		spin_unlock(&sdp->sd_ail_lock);
 		ail_drain(sdp); /* frees all transactions */
 		tr = NULL;
 	}
+2-2
fs/gfs2/log.h
@@ -53,9 +53,9 @@
 	if (gfs2_is_jdata(ip) || !gfs2_is_ordered(sdp))
 		return;
 
-	if (!test_bit(GIF_ORDERED, &ip->i_flags)) {
+	if (list_empty(&ip->i_ordered)) {
 		spin_lock(&sdp->sd_ordered_lock);
-		if (!test_and_set_bit(GIF_ORDERED, &ip->i_flags))
+		if (list_empty(&ip->i_ordered))
 			list_add(&ip->i_ordered, &sdp->sd_log_ordered);
 		spin_unlock(&sdp->sd_ordered_lock);
 	}
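The gfs2 hunks above replace the separate GIF_ORDERED flag with `list_empty()` on an always-initialized `i_ordered` list head: membership on the ordered list becomes its own flag, so flag and list can no longer drift out of sync. A minimal sketch of the idiom with hand-rolled circular-list helpers (modeled on, but not copied from, the kernel's list.h):

```c
#include <assert.h>

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h->prev = h;
}

/* An initialized, unlinked node points at itself, so emptiness doubles
 * as a "not on any list" membership test. */
static int list_empty(const struct list_head *h)
{
	return h->next == h;
}

static void list_add(struct list_head *n, struct list_head *head)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

/* Unlink and re-initialize, so list_empty() is true again afterwards;
 * this is what makes the idiom safe to apply repeatedly, as in the
 * __ordered_del_inode() helper above. */
static void list_del_init(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	INIT_LIST_HEAD(n);
}
```

The key requirement is that every node is initialized with INIT_LIST_HEAD before first use and only ever removed with list_del_init(), never plain list_del(), so the emptiness test stays meaningful.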
@@ -774,6 +774,14 @@
 	slot->seq_nr_last_acked = seqnr;
}
 
+static void nfs4_probe_sequence(struct nfs_client *client, const struct cred *cred,
+				struct nfs4_slot *slot)
+{
+	struct rpc_task *task = _nfs41_proc_sequence(client, cred, slot, true);
+	if (!IS_ERR(task))
+		rpc_put_task_async(task);
+}
+
static int nfs41_sequence_process(struct rpc_task *task,
		struct nfs4_sequence_res *res)
{
@@ -798,6 +790,7 @@
 		goto out;
 
 	session = slot->table->session;
+	clp = session->clp;
 
 	trace_nfs4_sequence_done(session, res);
 
@@ -813,7 +804,6 @@
 		nfs4_slot_sequence_acked(slot, slot->seq_nr);
 		/* Update the slot's sequence and clientid lease timer */
 		slot->seq_done = 1;
-		clp = session->clp;
 		do_renew_lease(clp, res->sr_timestamp);
 		/* Check sequence flags */
 		nfs41_handle_sequence_flag_errors(clp, res->sr_status_flags,
@@ -860,10 +852,18 @@
 		/*
 		 * Were one or more calls using this slot interrupted?
 		 * If the server never received the request, then our
-		 * transmitted slot sequence number may be too high.
+		 * transmitted slot sequence number may be too high. However,
+		 * if the server did receive the request then it might
+		 * accidentally give us a reply with a mismatched operation.
+		 * We can sort this out by sending a lone sequence operation
+		 * to the server on the same slot.
		 */
 		if ((s32)(slot->seq_nr - slot->seq_nr_last_acked) > 1) {
 			slot->seq_nr--;
+			if (task->tk_msg.rpc_proc != &nfs4_procedures[NFSPROC4_CLNT_SEQUENCE]) {
+				nfs4_probe_sequence(clp, task->tk_msg.rpc_cred, slot);
+				res->sr_slot = NULL;
+			}
 			goto retry_nowait;
 		}
 		/*
@@ -895,7 +895,7 @@
 	return err;
}
 
-int ovl_copy_up_flags(struct dentry *dentry, int flags)
+static int ovl_copy_up_flags(struct dentry *dentry, int flags)
{
 	int err = 0;
 	const struct cred *old_cred = ovl_override_creds(dentry->d_sb);
+1-1
fs/overlayfs/export.c
@@ -476,7 +476,7 @@
 	if (IS_ERR_OR_NULL(this))
 		return this;
 
-	if (WARN_ON(ovl_dentry_real_at(this, layer->idx) != real)) {
+	if (ovl_dentry_real_at(this, layer->idx) != real) {
 		dput(this);
 		this = ERR_PTR(-EIO);
 	}
+6-4
fs/overlayfs/file.c
@@ -33,12 +33,15 @@
 	return 'm';
}
 
+/* No atime modificaton nor notify on underlying */
+#define OVL_OPEN_FLAGS (O_NOATIME | FMODE_NONOTIFY)
+
static struct file *ovl_open_realfile(const struct file *file,
				      struct inode *realinode)
{
 	struct inode *inode = file_inode(file);
 	struct file *realfile;
 	const struct cred *old_cred;
-	int flags = file->f_flags | O_NOATIME | FMODE_NONOTIFY;
+	int flags = file->f_flags | OVL_OPEN_FLAGS;
 	int acc_mode = ACC_MODE(flags);
 	int err;
 
@@ -75,8 +72,7 @@
 	struct inode *inode = file_inode(file);
 	int err;
 
-	/* No atime modificaton on underlying */
-	flags |= O_NOATIME | FMODE_NONOTIFY;
+	flags |= OVL_OPEN_FLAGS;
 
 	/* If some flag changed that cannot be changed then something's amiss */
 	if (WARN_ON((file->f_flags ^ flags) & ~OVL_SETFL_MASK))
@@ -128,7 +126,7 @@
 	}
 
 	/* Did the flags change since open? */
-	if (unlikely((file->f_flags ^ real->file->f_flags) & ~O_NOATIME))
+	if (unlikely((file->f_flags ^ real->file->f_flags) & ~OVL_OPEN_FLAGS))
 		return ovl_change_flags(real->file, file->f_flags);
 
 	return 0;
+6-9
fs/overlayfs/namei.c
@@ -389,7 +389,7 @@
}
 
static int ovl_check_origin(struct ovl_fs *ofs, struct dentry *upperdentry,
-			    struct ovl_path **stackp, unsigned int *ctrp)
+			    struct ovl_path **stackp)
{
 	struct ovl_fh *fh = ovl_get_fh(upperdentry, OVL_XATTR_ORIGIN);
 	int err;
@@ -406,10 +406,6 @@
 		return err;
 	}
 
-	if (WARN_ON(*ctrp))
-		return -EIO;
-
-	*ctrp = 1;
 	return 0;
}
 
@@ -857,8 +861,6 @@
 			goto out;
 		}
 		if (upperdentry && !d.is_dir) {
-			unsigned int origin_ctr = 0;
-
 			/*
			 * Lookup copy up origin by decoding origin file handle.
			 * We may get a disconnected dentry, which is fine,
@@ -867,8 +873,7 @@
			 * number - it's the same as if we held a reference
			 * to a dentry in lower layer that was moved under us.
			 */
-			err = ovl_check_origin(ofs, upperdentry, &origin_path,
-					       &origin_ctr);
+			err = ovl_check_origin(ofs, upperdentry, &origin_path);
 			if (err)
 				goto out_put_upper;
 
@@ -1066,6 +1073,10 @@
 			upperredirect = NULL;
 			goto out_free_oe;
 		}
+		err = ovl_check_metacopy_xattr(upperdentry);
+		if (err < 0)
+			goto out_free_oe;
+		uppermetacopy = err;
 	}
 
 	if (upperdentry || ctr) {
@@ -580,12 +580,19 @@
 		}
 	}
 
-	/* Workdir is useless in non-upper mount */
-	if (!config->upperdir && config->workdir) {
-		pr_info("option \"workdir=%s\" is useless in a non-upper mount, ignore\n",
-			config->workdir);
-		kfree(config->workdir);
-		config->workdir = NULL;
+	/* Workdir/index are useless in non-upper mount */
+	if (!config->upperdir) {
+		if (config->workdir) {
+			pr_info("option \"workdir=%s\" is useless in a non-upper mount, ignore\n",
+				config->workdir);
+			kfree(config->workdir);
+			config->workdir = NULL;
+		}
+		if (config->index && index_opt) {
+			pr_info("option \"index=on\" is useless in a non-upper mount, ignore\n");
+			index_opt = false;
+		}
+		config->index = false;
 	}
 
 	err = ovl_parse_redirect_mode(config, config->redirect_mode);
@@ -629,13 +622,14 @@
 
 	/* Resolve nfs_export -> index dependency */
 	if (config->nfs_export && !config->index) {
-		if (nfs_export_opt && index_opt) {
+		if (!config->upperdir && config->redirect_follow) {
+			pr_info("NFS export requires \"redirect_dir=nofollow\" on non-upper mount, falling back to nfs_export=off.\n");
+			config->nfs_export = false;
+		} else if (nfs_export_opt && index_opt) {
 			pr_err("conflicting options: nfs_export=on,index=off\n");
 			return -EINVAL;
-		}
-		if (index_opt) {
+		} else if (index_opt) {
 			/*
			 * There was an explicit index=off that resulted
			 * in this conflict.
@@ -1361,8 +1352,15 @@
 		goto out;
 	}
 
+	/* index dir will act also as workdir */
+	iput(ofs->workdir_trap);
+	ofs->workdir_trap = NULL;
+	dput(ofs->workdir);
+	ofs->workdir = NULL;
 	ofs->indexdir = ovl_workdir_create(ofs, OVL_INDEXDIR_NAME, true);
 	if (ofs->indexdir) {
+		ofs->workdir = dget(ofs->indexdir);
+
 		err = ovl_setup_trap(sb, ofs->indexdir, &ofs->indexdir_trap,
 				     "indexdir");
 		if (err)
@@ -1411,6 +1395,18 @@
 
 	if (!ofs->config.nfs_export && !ovl_upper_mnt(ofs))
 		return true;
+
+	/*
+	 * We allow using single lower with null uuid for index and nfs_export
+	 * for example to support those features with single lower squashfs.
+	 * To avoid regressions in setups of overlay with re-formatted lower
+	 * squashfs, do not allow decoding origin with lower null uuid unless
+	 * user opted-in to one of the new features that require following the
+	 * lower inode of non-dir upper.
+	 */
+	if (!ofs->config.index && !ofs->config.metacopy && !ofs->config.xino &&
+	    uuid_is_null(uuid))
+		return false;
 
 	for (i = 0; i < ofs->numfs; i++) {
 		/*
@@ -1521,14 +1493,23 @@
 		if (err < 0)
 			goto out;
 
+		/*
+		 * Check if lower root conflicts with this overlay layers before
+		 * checking if it is in-use as upperdir/workdir of "another"
+		 * mount, because we do not bother to check in ovl_is_inuse() if
+		 * the upperdir/workdir is in fact in-use by our
+		 * upperdir/workdir.
+		 */
 		err = ovl_setup_trap(sb, stack[i].dentry, &trap, "lowerdir");
 		if (err)
 			goto out;
 
 		if (ovl_is_inuse(stack[i].dentry)) {
 			err = ovl_report_in_use(ofs, "lowerdir");
-			if (err)
+			if (err) {
+				iput(trap);
 				goto out;
+			}
 		}
 
 		mnt = clone_private_mount(&stack[i]);
@@ -1612,10 +1575,6 @@
 	if (!ofs->config.upperdir && numlower == 1) {
 		pr_err("at least 2 lowerdir are needed while upperdir nonexistent\n");
 		return ERR_PTR(-EINVAL);
-	} else if (!ofs->config.upperdir && ofs->config.nfs_export &&
-		   ofs->config.redirect_follow) {
-		pr_warn("NFS export requires \"redirect_dir=nofollow\" on non-upper mount, falling back to nfs_export=off.\n");
-		ofs->config.nfs_export = false;
 	}
 
 	stack = kcalloc(numlower, sizeof(struct
path), GFP_KERNEL);···18751842 if (!ovl_upper_mnt(ofs))18761843 sb->s_flags |= SB_RDONLY;1877184418781878- if (!(ovl_force_readonly(ofs)) && ofs->config.index) {18791879- /* index dir will act also as workdir */18801880- dput(ofs->workdir);18811881- ofs->workdir = NULL;18821882- iput(ofs->workdir_trap);18831883- ofs->workdir_trap = NULL;18841884-18451845+ if (!ovl_force_readonly(ofs) && ofs->config.index) {18851846 err = ovl_get_indexdir(sb, ofs, oe, &upperpath);18861847 if (err)18871848 goto out_free_oe;1888184918891850 /* Force r/o mount with no index dir */18901890- if (ofs->indexdir)18911891- ofs->workdir = dget(ofs->indexdir);18921892- else18511851+ if (!ofs->indexdir)18931852 sb->s_flags |= SB_RDONLY;18941853 }18951854
+3-3
fs/proc/proc_sysctl.c
···566566 goto out;567567568568 /* don't even try if the size is too large */569569- if (count > KMALLOC_MAX_SIZE)570570- return -ENOMEM;569569+ error = -ENOMEM;570570+ if (count >= KMALLOC_MAX_SIZE)571571+ goto out;571572572573 if (write) {573574 kbuf = memdup_user_nul(ubuf, count);···577576 goto out;578577 }579578 } else {580580- error = -ENOMEM;581579 kbuf = kzalloc(count, GFP_KERNEL);582580 if (!kbuf)583581 goto out;
+77-58
fs/read_write.c
···419419 return ret;420420}421421422422-ssize_t __vfs_read(struct file *file, char __user *buf, size_t count,423423- loff_t *pos)422422+ssize_t __kernel_read(struct file *file, void *buf, size_t count, loff_t *pos)424423{425425- if (file->f_op->read)426426- return file->f_op->read(file, buf, count, pos);427427- else if (file->f_op->read_iter)428428- return new_sync_read(file, buf, count, pos);429429- else424424+ mm_segment_t old_fs = get_fs();425425+ ssize_t ret;426426+427427+ if (WARN_ON_ONCE(!(file->f_mode & FMODE_READ)))430428 return -EINVAL;429429+ if (!(file->f_mode & FMODE_CAN_READ))430430+ return -EINVAL;431431+432432+ if (count > MAX_RW_COUNT)433433+ count = MAX_RW_COUNT;434434+ set_fs(KERNEL_DS);435435+ if (file->f_op->read)436436+ ret = file->f_op->read(file, (void __user *)buf, count, pos);437437+ else if (file->f_op->read_iter)438438+ ret = new_sync_read(file, (void __user *)buf, count, pos);439439+ else440440+ ret = -EINVAL;441441+ set_fs(old_fs);442442+ if (ret > 0) {443443+ fsnotify_access(file);444444+ add_rchar(current, ret);445445+ }446446+ inc_syscr(current);447447+ return ret;431448}432449433450ssize_t kernel_read(struct file *file, void *buf, size_t count, loff_t *pos)434451{435435- mm_segment_t old_fs;436436- ssize_t result;452452+ ssize_t ret;437453438438- old_fs = get_fs();439439- set_fs(KERNEL_DS);440440- /* The cast to a user pointer is valid due to the set_fs() */441441- result = vfs_read(file, (void __user *)buf, count, pos);442442- set_fs(old_fs);443443- return result;454454+ ret = rw_verify_area(READ, file, pos, count);455455+ if (ret)456456+ return ret;457457+ return __kernel_read(file, buf, count, pos);444458}445459EXPORT_SYMBOL(kernel_read);446460···470456 return -EFAULT;471457472458 ret = rw_verify_area(READ, file, pos, count);473473- if (!ret) {474474- if (count > MAX_RW_COUNT)475475- count = MAX_RW_COUNT;476476- ret = __vfs_read(file, buf, count, pos);477477- if (ret > 0) {478478- fsnotify_access(file);479479- add_rchar(current, 
ret);480480- }481481- inc_syscr(current);482482- }459459+ if (ret)460460+ return ret;461461+ if (count > MAX_RW_COUNT)462462+ count = MAX_RW_COUNT;483463464464+ if (file->f_op->read)465465+ ret = file->f_op->read(file, buf, count, pos);466466+ else if (file->f_op->read_iter)467467+ ret = new_sync_read(file, buf, count, pos);468468+ else469469+ ret = -EINVAL;470470+ if (ret > 0) {471471+ fsnotify_access(file);472472+ add_rchar(current, ret);473473+ }474474+ inc_syscr(current);484475 return ret;485476}486477···507488 return ret;508489}509490510510-static ssize_t __vfs_write(struct file *file, const char __user *p,511511- size_t count, loff_t *pos)512512-{513513- if (file->f_op->write)514514- return file->f_op->write(file, p, count, pos);515515- else if (file->f_op->write_iter)516516- return new_sync_write(file, p, count, pos);517517- else518518- return -EINVAL;519519-}520520-491491+/* caller is responsible for file_start_write/file_end_write */521492ssize_t __kernel_write(struct file *file, const void *buf, size_t count, loff_t *pos)522493{523494 mm_segment_t old_fs;524495 const char __user *p;525496 ssize_t ret;526497498498+ if (WARN_ON_ONCE(!(file->f_mode & FMODE_WRITE)))499499+ return -EBADF;527500 if (!(file->f_mode & FMODE_CAN_WRITE))528501 return -EINVAL;529502···524513 p = (__force const char __user *)buf;525514 if (count > MAX_RW_COUNT)526515 count = MAX_RW_COUNT;527527- ret = __vfs_write(file, p, count, pos);516516+ if (file->f_op->write)517517+ ret = file->f_op->write(file, p, count, pos);518518+ else if (file->f_op->write_iter)519519+ ret = new_sync_write(file, p, count, pos);520520+ else521521+ ret = -EINVAL;528522 set_fs(old_fs);529523 if (ret > 0) {530524 fsnotify_modify(file);···538522 inc_syscw(current);539523 return ret;540524}541541-EXPORT_SYMBOL(__kernel_write);542525543526ssize_t kernel_write(struct file *file, const void *buf, size_t count,544527 loff_t *pos)545528{546546- mm_segment_t old_fs;547547- ssize_t res;529529+ ssize_t ret;548530549549- 
old_fs = get_fs();550550- set_fs(KERNEL_DS);551551- /* The cast to a user pointer is valid due to the set_fs() */552552- res = vfs_write(file, (__force const char __user *)buf, count, pos);553553- set_fs(old_fs);531531+ ret = rw_verify_area(WRITE, file, pos, count);532532+ if (ret)533533+ return ret;554534555555- return res;535535+ file_start_write(file);536536+ ret = __kernel_write(file, buf, count, pos);537537+ file_end_write(file);538538+ return ret;556539}557540EXPORT_SYMBOL(kernel_write);558541···567552 return -EFAULT;568553569554 ret = rw_verify_area(WRITE, file, pos, count);570570- if (!ret) {571571- if (count > MAX_RW_COUNT)572572- count = MAX_RW_COUNT;573573- file_start_write(file);574574- ret = __vfs_write(file, buf, count, pos);575575- if (ret > 0) {576576- fsnotify_modify(file);577577- add_wchar(current, ret);578578- }579579- inc_syscw(current);580580- file_end_write(file);555555+ if (ret)556556+ return ret;557557+ if (count > MAX_RW_COUNT)558558+ count = MAX_RW_COUNT;559559+ file_start_write(file);560560+ if (file->f_op->write)561561+ ret = file->f_op->write(file, buf, count, pos);562562+ else if (file->f_op->write_iter)563563+ ret = new_sync_write(file, buf, count, pos);564564+ else565565+ ret = -EINVAL;566566+ if (ret > 0) {567567+ fsnotify_modify(file);568568+ add_wchar(current, ret);581569 }582582-570570+ inc_syscw(current);571571+ file_end_write(file);583572 return ret;584573}585574
+1-1
fs/squashfs/block.c
···175175 /* Extract the length of the metadata block */176176 data = page_address(bvec->bv_page) + bvec->bv_offset;177177 length = data[offset];178178- if (offset <= bvec->bv_len - 1) {178178+ if (offset < bvec->bv_len - 1) {179179 length |= data[offset + 1] << 8;180180 } else {181181 if (WARN_ON_ONCE(!bio_next_segment(bio, &iter_all))) {
+5-5
fs/xfs/xfs_log_cil.c
···671671 /*672672 * Wake up any background push waiters now this context is being pushed.673673 */674674- wake_up_all(&ctx->push_wait);674674+ if (ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log))675675+ wake_up_all(&cil->xc_push_wait);675676676677 /*677678 * Check if we've anything to push. If there is nothing, then we don't···744743745744 /*746745 * initialise the new context and attach it to the CIL. Then attach747747- * the current context to the CIL committing lsit so it can be found746746+ * the current context to the CIL committing list so it can be found748747 * during log forces to extract the commit lsn of the sequence that749748 * needs to be forced.750749 */751750 INIT_LIST_HEAD(&new_ctx->committing);752751 INIT_LIST_HEAD(&new_ctx->busy_extents);753753- init_waitqueue_head(&new_ctx->push_wait);754752 new_ctx->sequence = ctx->sequence + 1;755753 new_ctx->cil = cil;756754 cil->xc_ctx = new_ctx;···937937 if (cil->xc_ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log)) {938938 trace_xfs_log_cil_wait(log, cil->xc_ctx->ticket);939939 ASSERT(cil->xc_ctx->space_used < log->l_logsize);940940- xlog_wait(&cil->xc_ctx->push_wait, &cil->xc_push_lock);940940+ xlog_wait(&cil->xc_push_wait, &cil->xc_push_lock);941941 return;942942 }943943···12161216 INIT_LIST_HEAD(&cil->xc_committing);12171217 spin_lock_init(&cil->xc_cil_lock);12181218 spin_lock_init(&cil->xc_push_lock);12191219+ init_waitqueue_head(&cil->xc_push_wait);12191220 init_rwsem(&cil->xc_ctx_lock);12201221 init_waitqueue_head(&cil->xc_commit_wait);1221122212221223 INIT_LIST_HEAD(&ctx->committing);12231224 INIT_LIST_HEAD(&ctx->busy_extents);12241224- init_waitqueue_head(&ctx->push_wait);12251225 ctx->sequence = 1;12261226 ctx->cil = cil;12271227 cil->xc_ctx = ctx;
···607607 int nr_pages;608608 ssize_t ret;609609610610- nr_pages = iov_iter_npages(from, BIO_MAX_PAGES);611611- if (!nr_pages)612612- return 0;613613-614610 max = queue_max_zone_append_sectors(bdev_get_queue(bdev));615611 max = ALIGN_DOWN(max << SECTOR_SHIFT, inode->i_sb->s_blocksize);616612 iov_iter_truncate(from, max);613613+614614+ nr_pages = iov_iter_npages(from, BIO_MAX_PAGES);615615+ if (!nr_pages)616616+ return 0;617617618618 bio = bio_alloc_bioset(GFP_NOFS, nr_pages, &fs_bio_set);619619 if (!bio)···11191119 char *file_name;11201120 struct dentry *dir;11211121 unsigned int n = 0;11221122- int ret = -ENOMEM;11221122+ int ret;1123112311241124 /* If the group is empty, there is nothing to do */11251125 if (!zd->nr_zones[type])···11351135 zgroup_name = "seq";1136113611371137 dir = zonefs_create_inode(sb->s_root, zgroup_name, NULL, type);11381138- if (!dir)11381138+ if (!dir) {11391139+ ret = -ENOMEM;11391140 goto free;11411141+ }1140114211411143 /*11421144 * The first zone contains the super block: skip it.···11761174 * Use the file number within its group as file name.11771175 */11781176 snprintf(file_name, ZONEFS_NAME_MAX - 1, "%u", n);11791179- if (!zonefs_create_inode(dir, file_name, zone, type))11771177+ if (!zonefs_create_inode(dir, file_name, zone, type)) {11781178+ ret = -ENOMEM;11801179 goto free;11801180+ }1181118111821182 n++;11831183 }
···29293030 struct sock *parent;31313232- unsigned int refcnt;3333- unsigned int nokey_refcnt;3232+ atomic_t refcnt;3333+ atomic_t nokey_refcnt;34343535 const struct af_alg_type *type;3636 void *private;
+1-2
include/linux/bits.h
···1818 * position @h. For example1919 * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.2020 */2121-#if !defined(__ASSEMBLY__) && \2222- (!defined(CONFIG_CC_IS_GCC) || CONFIG_GCC_VERSION >= 49000)2121+#if !defined(__ASSEMBLY__)2322#include <linux/build_bug.h>2423#define GENMASK_INPUT_CHECK(h, l) \2524 (BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
+1
include/linux/blkdev.h
···590590 u64 write_hints[BLK_MAX_WRITE_HINTS];591591};592592593593+/* Keep blk_queue_flag_name[] in sync with the definitions below */593594#define QUEUE_FLAG_STOPPED 0 /* queue is stopped */594595#define QUEUE_FLAG_DYING 1 /* queue being torn down */595596#define QUEUE_FLAG_NOMERGES 3 /* disable merge attempts */
+3-2
include/linux/bpf-netns.h
···3333 union bpf_attr __user *uattr);3434int netns_bpf_prog_attach(const union bpf_attr *attr,3535 struct bpf_prog *prog);3636-int netns_bpf_prog_detach(const union bpf_attr *attr);3636+int netns_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype);3737int netns_bpf_link_create(const union bpf_attr *attr,3838 struct bpf_prog *prog);3939#else···4949 return -EOPNOTSUPP;5050}51515252-static inline int netns_bpf_prog_detach(const union bpf_attr *attr)5252+static inline int netns_bpf_prog_detach(const union bpf_attr *attr,5353+ enum bpf_prog_type ptype)5354{5455 return -EOPNOTSUPP;5556}
···1111 + __GNUC_PATCHLEVEL__)12121313/* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 */1414-#if GCC_VERSION < 408001414+#if GCC_VERSION < 409001515# error Sorry, your compiler is too old - please upgrade it.1616#endif1717
+1-26
include/linux/compiler_types.h
···252252 * __unqual_scalar_typeof(x) - Declare an unqualified scalar type, leaving253253 * non-scalar types unchanged.254254 */255255-#if (defined(CONFIG_CC_IS_GCC) && CONFIG_GCC_VERSION < 40900) || defined(__CHECKER__)256255/*257257- * We build this out of a couple of helper macros in a vain attempt to258258- * help you keep your lunch down while reading it.259259- */260260-#define __pick_scalar_type(x, type, otherwise) \261261- __builtin_choose_expr(__same_type(x, type), (type)0, otherwise)262262-263263-/*264264- * 'char' is not type-compatible with either 'signed char' or 'unsigned char',265265- * so we include the naked type here as well as the signed/unsigned variants.266266- */267267-#define __pick_integer_type(x, type, otherwise) \268268- __pick_scalar_type(x, type, \269269- __pick_scalar_type(x, unsigned type, \270270- __pick_scalar_type(x, signed type, otherwise)))271271-272272-#define __unqual_scalar_typeof(x) typeof( \273273- __pick_integer_type(x, char, \274274- __pick_integer_type(x, short, \275275- __pick_integer_type(x, int, \276276- __pick_integer_type(x, long, \277277- __pick_integer_type(x, long long, x))))))278278-#else279279-/*280280- * If supported, prefer C11 _Generic for better compile-times. As above, 'char'256256+ * Prefer C11 _Generic for better compile-times and simpler code. Note: 'char'281257 * is not type-compatible with 'signed char', and we define a separate case.282258 */283259#define __scalar_type_to_expr_cases(type) \···269293 __scalar_type_to_expr_cases(long), \270294 __scalar_type_to_expr_cases(long long), \271295 default: (x)))272272-#endif273296274297/* Is this type a native word size -- useful for atomic operations */275298#define __native_word(t) \
···433433 * @suppliers: List of links to supplier devices.434434 * @consumers: List of links to consumer devices.435435 * @needs_suppliers: Hook to global list of devices waiting for suppliers.436436- * @defer_sync: Hook to global list of devices that have deferred sync_state.436436+ * @defer_hook: Hook to global list of devices that have deferred sync_state or437437+ * deferred fw_devlink.437438 * @need_for_probe: If needs_suppliers is on a list, this indicates if the438439 * suppliers are needed for probe or not.439440 * @status: Driver status information.···443442 struct list_head suppliers;444443 struct list_head consumers;445444 struct list_head needs_suppliers;446446- struct list_head defer_sync;445445+ struct list_head defer_hook;447446 bool need_for_probe;448447 enum dl_dev_state status;449448};
···109109 enum fs_context_phase phase:8; /* The phase the context is in */110110 bool need_free:1; /* Need to call ops->free() */111111 bool global:1; /* Goes into &init_user_ns */112112+ bool oldapi:1; /* Coming from mount(2) */112113};113114114115struct fs_context_operations {
+1-1
include/linux/i2c.h
···5656 * on a bus (or read from them). Apart from two basic transfer functions to5757 * transmit one message at a time, a more complex version can be used to5858 * transmit an arbitrary number of messages without interruption.5959- * @count must be be less than 64k since msg.len is u16.5959+ * @count must be less than 64k since msg.len is u16.6060 */6161int i2c_transfer_buffer_flags(const struct i2c_client *client,6262 char *buf, int count, u16 flags);
···107107 resource_size_t base,108108 unsigned long size)109109{110110+ iomap->iomem = ioremap_wc(base, size);111111+ if (!iomap->iomem)112112+ return NULL;113113+110114 iomap->base = base;111115 iomap->size = size;112112- iomap->iomem = ioremap_wc(base, size);113116#if defined(pgprot_noncached_wc) /* archs can't agree on a name ... */114117 iomap->prot = pgprot_noncached_wc(PAGE_KERNEL);115118#elif defined(pgprot_writecombine)
+3-2
include/linux/kallsyms.h
···1818#define KSYM_SYMBOL_LEN (sizeof("%s+%#lx/%#lx [%s]") + (KSYM_NAME_LEN - 1) + \1919 2*(BITS_PER_LONG*3/10) + (MODULE_NAME_LEN - 1) + 1)20202121+struct cred;2122struct module;22232324static inline int is_kernel_inittext(unsigned long addr)···9998int lookup_symbol_attrs(unsigned long addr, unsigned long *size, unsigned long *offset, char *modname, char *name);10099101100/* How and when do we show kallsyms values? */102102-extern int kallsyms_show_value(void);101101+extern bool kallsyms_show_value(const struct cred *cred);103102104103#else /* !CONFIG_KALLSYMS */105104···159158 return -ERANGE;160159}161160162162-static inline int kallsyms_show_value(void)161161+static inline bool kallsyms_show_value(const struct cred *cred)163162{164163 return false;165164}
+12
include/linux/kgdb.h
···177177 struct pt_regs *regs);178178179179/**180180+ * kgdb_arch_handle_qxfer_pkt - Handle architecture specific GDB XML181181+ * packets.182182+ * @remcom_in_buffer: The buffer of the packet we have read.183183+ * @remcom_out_buffer: The buffer of %BUFMAX bytes to write a packet into.184184+ */185185+186186+extern void187187+kgdb_arch_handle_qxfer_pkt(char *remcom_in_buffer,188188+ char *remcom_out_buffer);189189+190190+/**180191 * kgdb_call_nmi_hook - Call kgdb_nmicallback() on the current CPU181192 * @ignored: This parameter is only here to match the prototype.182193 *···325314326315extern int kgdb_isremovedbreak(unsigned long addr);327316extern void kgdb_schedule_breakpoint(void);317317+extern int kgdb_has_hit_break(unsigned long addr);328318329319extern int330320kgdb_handle_exception(int ex_vector, int signo, int err_code,
···21692169 */21702170static inline struct pci_dev *pcie_find_root_port(struct pci_dev *dev)21712171{21722172- struct pci_dev *bridge = pci_upstream_bridge(dev);21732173-21742174- while (bridge) {21752175- if (pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT)21762176- return bridge;21772177- bridge = pci_upstream_bridge(bridge);21722172+ while (dev) {21732173+ if (pci_is_pcie(dev) &&21742174+ pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)21752175+ return dev;21762176+ dev = pci_upstream_bridge(dev);21782177 }2179217821802179 return NULL;
+1-1
include/linux/rhashtable.h
···3333 * of two or more hash tables when the rhashtable is being resized.3434 * The end of the chain is marked with a special nulls marks which has3535 * the least significant bit set but otherwise stores the address of3636- * the hash bucket. This allows us to be be sure we've found the end3636+ * the hash bucket. This allows us to be sure we've found the end3737 * of the right list.3838 * The value stored in the hash bucket has BIT(0) used as a lock bit.3939 * This bit must be atomically set before any changes are made to
+4-4
include/linux/scatterlist.h
···155155 * Loop over each sg element in the given sg_table object.156156 */157157#define for_each_sgtable_sg(sgt, sg, i) \158158- for_each_sg(sgt->sgl, sg, sgt->orig_nents, i)158158+ for_each_sg((sgt)->sgl, sg, (sgt)->orig_nents, i)159159160160/*161161 * Loop over each sg element in the given *DMA mapped* sg_table object.···163163 * of the each element.164164 */165165#define for_each_sgtable_dma_sg(sgt, sg, i) \166166- for_each_sg(sgt->sgl, sg, sgt->nents, i)166166+ for_each_sg((sgt)->sgl, sg, (sgt)->nents, i)167167168168/**169169 * sg_chain - Chain two sglists together···451451 * See also for_each_sg_page(). In each loop it operates on PAGE_SIZE unit.452452 */453453#define for_each_sgtable_page(sgt, piter, pgoffset) \454454- for_each_sg_page(sgt->sgl, piter, sgt->orig_nents, pgoffset)454454+ for_each_sg_page((sgt)->sgl, piter, (sgt)->orig_nents, pgoffset)455455456456/**457457 * for_each_sgtable_dma_page - iterate over the DMA mapped sg_table object···465465 * unit.466466 */467467#define for_each_sgtable_dma_page(sgt, dma_iter, pgoffset) \468468- for_each_sg_dma_page(sgt->sgl, dma_iter, sgt->nents, pgoffset)468468+ for_each_sg_dma_page((sgt)->sgl, dma_iter, (sgt)->nents, pgoffset)469469470470471471/*
···220220 } rack;221221 u16 advmss; /* Advertised MSS */222222 u8 compressed_ack;223223- u8 dup_ack_counter;223223+ u8 dup_ack_counter:2,224224+ tlp_retrans:1, /* TLP is a retransmission */225225+ unused:5;224226 u32 chrono_start; /* Start time in jiffies of a TCP chrono */225227 u32 chrono_stat[3]; /* Time in jiffies for chrono_stat stats */226228 u8 chrono_type:2, /* current chronograph type */···245243 save_syn:1, /* Save headers of SYN packet */246244 is_cwnd_limited:1,/* forward progress limited by snd_cwnd? */247245 syn_smc:1; /* SYN includes SMC */248248- u32 tlp_high_seq; /* snd_nxt at the time of TLP retransmit. */246246+ u32 tlp_high_seq; /* snd_nxt at the time of TLP */249247250248 u32 tcp_tx_delay; /* delay (in usec) added to TX packets */251249 u64 tcp_wstamp_ns; /* departure time for next sent data packet */
···3535 * do additional, common, filtering and return an error3636 * @post_doit: called after an operation's doit callback, it may3737 * undo operations done by pre_doit, for example release locks3838- * @mcast_bind: a socket bound to the given multicast group (which3939- * is given as the offset into the groups array)4040- * @mcast_unbind: a socket was unbound from the given multicast group.4141- * Note that unbind() will not be called symmetrically if the4242- * generic netlink family is removed while there are still open4343- * sockets.4444- * @attrbuf: buffer to store parsed attributes (private)4538 * @mcgrps: multicast groups used by this family4639 * @n_mcgrps: number of multicast groups4740 * @mcgrp_offset: starting number of multicast group IDs in this family···5764 void (*post_doit)(const struct genl_ops *ops,5865 struct sk_buff *skb,5966 struct genl_info *info);6060- int (*mcast_bind)(struct net *net, int group);6161- void (*mcast_unbind)(struct net *net, int group);6262- struct nlattr ** attrbuf; /* private */6367 const struct genl_ops * ops;6468 const struct genl_multicast_group *mcgrps;6569 unsigned int n_ops;
+17-8
include/net/inet_ecn.h
···4455#include <linux/ip.h>66#include <linux/skbuff.h>77+#include <linux/if_vlan.h>7889#include <net/inet_sock.h>910#include <net/dsfield.h>···173172174173static inline int INET_ECN_set_ce(struct sk_buff *skb)175174{176176- switch (skb->protocol) {175175+ switch (skb_protocol(skb, true)) {177176 case cpu_to_be16(ETH_P_IP):178177 if (skb_network_header(skb) + sizeof(struct iphdr) <=179178 skb_tail_pointer(skb))···192191193192static inline int INET_ECN_set_ect1(struct sk_buff *skb)194193{195195- switch (skb->protocol) {194194+ switch (skb_protocol(skb, true)) {196195 case cpu_to_be16(ETH_P_IP):197196 if (skb_network_header(skb) + sizeof(struct iphdr) <=198197 skb_tail_pointer(skb))···273272{274273 __u8 inner;275274276276- if (skb->protocol == htons(ETH_P_IP))275275+ switch (skb_protocol(skb, true)) {276276+ case htons(ETH_P_IP):277277 inner = ip_hdr(skb)->tos;278278- else if (skb->protocol == htons(ETH_P_IPV6))278278+ break;279279+ case htons(ETH_P_IPV6):279280 inner = ipv6_get_dsfield(ipv6_hdr(skb));280280- else281281+ break;282282+ default:281283 return 0;284284+ }282285283286 return INET_ECN_decapsulate(skb, oiph->tos, inner);284287}···292287{293288 __u8 inner;294289295295- if (skb->protocol == htons(ETH_P_IP))290290+ switch (skb_protocol(skb, true)) {291291+ case htons(ETH_P_IP):296292 inner = ip_hdr(skb)->tos;297297- else if (skb->protocol == htons(ETH_P_IPV6))293293+ break;294294+ case htons(ETH_P_IPV6):298295 inner = ipv6_get_dsfield(ipv6_hdr(skb));299299- else296296+ break;297297+ default:300298 return 0;299299+ }301300302301 return INET_ECN_decapsulate(skb, ipv6_get_dsfield(oipv6h), inner);303302}
···99#include <linux/bpf-netns.h>10101111struct bpf_prog;1212+struct bpf_prog_array;12131314struct netns_bpf {1414- struct bpf_prog __rcu *progs[MAX_NETNS_BPF_ATTACH_TYPE];1515- struct bpf_link *links[MAX_NETNS_BPF_ATTACH_TYPE];1515+ /* Array of programs to run compiled from progs or links */1616+ struct bpf_prog_array __rcu *run_array[MAX_NETNS_BPF_ATTACH_TYPE];1717+ struct bpf_prog *progs[MAX_NETNS_BPF_ATTACH_TYPE];1818+ struct list_head links[MAX_NETNS_BPF_ATTACH_TYPE];1619};17201821#endif /* __NETNS_BPF_H__ */
-11
include/net/pkt_sched.h
···136136 }137137}138138139139-static inline __be16 tc_skb_protocol(const struct sk_buff *skb)140140-{141141- /* We need to take extra care in case the skb came via142142- * vlan accelerated path. In that case, use skb->vlan_proto143143- * as the original vlan header was already stripped.144144- */145145- if (skb_vlan_tag_present(skb))146146- return skb->vlan_proto;147147- return skb->protocol;148148-}149149-150139/* Calculate maximal size of packet seen by hard_start_xmit151140 routine of this device.152141 */
+2-1
include/net/sock.h
···533533 * be copied.534534 */535535#define SK_USER_DATA_NOCOPY 1UL536536-#define SK_USER_DATA_PTRMASK ~(SK_USER_DATA_NOCOPY)536536+#define SK_USER_DATA_BPF 2UL /* Managed by BPF */537537+#define SK_USER_DATA_PTRMASK ~(SK_USER_DATA_NOCOPY | SK_USER_DATA_BPF)537538538539/**539540 * sk_user_data_is_nocopy - Test if sk_user_data pointer must not be copied
···6666 * @direction: stream direction, playback/recording6767 * @metadata_set: metadata set flag, true when set6868 * @next_track: has userspace signal next track transition, true when set6969+ * @partial_drain: undergoing partial_drain for stream, true when set6970 * @private_data: pointer to DSP private data7071 * @dma_buffer: allocated buffer if any7172 */···7978 enum snd_compr_direction direction;8079 bool metadata_set;8180 bool next_track;8181+ bool partial_drain;8282 void *private_data;8383 struct snd_dma_buffer dma_buffer;8484};···184182 if (snd_BUG_ON(!stream))185183 return;186184187187- stream->runtime->state = SNDRV_PCM_STATE_SETUP;185185+ /* for partial_drain case we are back to running state on success */186186+ if (stream->partial_drain) {187187+ stream->runtime->state = SNDRV_PCM_STATE_RUNNING;188188+ stream->partial_drain = false; /* clear this flag as well */189189+ } else {190190+ stream->runtime->state = SNDRV_PCM_STATE_SETUP;191191+ }188192189193 wake_up(&stream->runtime->sleep);190194}
+1
include/sound/rt5670.h
···1212 int jd_mode;1313 bool in2_diff;1414 bool dev_gpio;1515+ bool gpio1_is_ext_spk_en;15161617 bool dmic_en;1718 unsigned int dmic1_data_pin;
+1
include/sound/soc-dai.h
···161161int snd_soc_dai_compress_new(struct snd_soc_dai *dai,162162 struct snd_soc_pcm_runtime *rtd, int num);163163bool snd_soc_dai_stream_valid(struct snd_soc_dai *dai, int stream);164164+void snd_soc_dai_link_set_capabilities(struct snd_soc_dai_link *dai_link);164165void snd_soc_dai_action(struct snd_soc_dai *dai,165166 int stream, int action);166167static inline void snd_soc_dai_activate(struct snd_soc_dai *dai,
···31713171 * int bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags)31723172 * Description31733173 * Copy *size* bytes from *data* into a ring buffer *ringbuf*.31743174- * If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of31753175- * new data availability is sent.31763176- * IF BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of31773177- * new data availability is sent unconditionally.31743174+ * If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification31753175+ * of new data availability is sent.31763176+ * If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification31773177+ * of new data availability is sent unconditionally.31783178 * Return31793179- * 0, on success;31803180- * < 0, on error.31793179+ * 0 on success, or a negative error in case of failure.31813180 *31823181 * void *bpf_ringbuf_reserve(void *ringbuf, u64 size, u64 flags)31833182 * Description···31883189 * void bpf_ringbuf_submit(void *data, u64 flags)31893190 * Description31903191 * Submit reserved ring buffer sample, pointed to by *data*.31913191- * If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of31923192- * new data availability is sent.31933193- * IF BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of31943194- * new data availability is sent unconditionally.31923192+ * If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification31933193+ * of new data availability is sent.31943194+ * If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification31953195+ * of new data availability is sent unconditionally.31953196 * Return31963197 * Nothing. 
Always succeeds.31973198 *31983199 * void bpf_ringbuf_discard(void *data, u64 flags)31993200 * Description32003201 * Discard reserved ring buffer sample, pointed to by *data*.32013201- * If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of32023202- * new data availability is sent.32033203- * IF BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of32043204- * new data availability is sent unconditionally.32023202+ * If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification32033203+ * of new data availability is sent.32043204+ * If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification32053205+ * of new data availability is sent unconditionally.32053206 * Return32063207 * Nothing. Always succeeds.32073208 *···32093210 * Description32103211 * Query various characteristics of provided ring buffer. What32113212 * exactly is queries is determined by *flags*:32123212- * - BPF_RB_AVAIL_DATA - amount of data not yet consumed;32133213- * - BPF_RB_RING_SIZE - the size of ring buffer;32143214- * - BPF_RB_CONS_POS - consumer position (can wrap around);32153215- * - BPF_RB_PROD_POS - producer(s) position (can wrap around);32163216- * Data returned is just a momentary snapshots of actual values32133213+ *32143214+ * * **BPF_RB_AVAIL_DATA**: Amount of data not yet consumed.32153215+ * * **BPF_RB_RING_SIZE**: The size of ring buffer.32163216+ * * **BPF_RB_CONS_POS**: Consumer position (can wrap around).32173217+ * * **BPF_RB_PROD_POS**: Producer(s) position (can wrap around).32183218+ *32193219+ * Data returned is just a momentary snapshot of actual values32173220 * and could be inaccurate, so this facility should be used to32183221 * power heuristics and for reporting, not to make 100% correct32193222 * calculation.32203223 * Return32213221- * Requested value, or 0, if flags are not recognized.32243224+ * Requested value, or 0, if *flags* are not recognized.32223225 *32233226 * int bpf_csum_level(struct sk_buff *skb, u64 level)32243227 * Description
···37463746 return false;3747374737483748 t = btf_type_skip_modifiers(btf, t->type, NULL);37493749- if (!btf_type_is_int(t)) {37493749+ if (!btf_type_is_small_int(t)) {37503750 bpf_log(log,37513751 "ret type %s not allowed for fmod_ret\n",37523752 btf_kind_str[BTF_INFO_KIND(t->info)]);···37683768 /* skip modifiers */37693769 while (btf_type_is_modifier(t))37703770 t = btf_type_by_id(btf, t->type);37713771- if (btf_type_is_int(t) || btf_type_is_enum(t))37713771+ if (btf_type_is_small_int(t) || btf_type_is_enum(t))37723772 /* accessing a scalar */37733773 return true;37743774 if (!btf_type_is_ptr(t)) {
+134-60
kernel/bpf/net_namespace.c
···1919 * with netns_bpf_mutex held.2020 */2121 struct net *net;2222+ struct list_head node; /* node in list of links attached to net */2223};23242425/* Protects updates to netns_bpf */2526DEFINE_MUTEX(netns_bpf_mutex);26272728/* Must be called with netns_bpf_mutex held. */2828-static void __net_exit bpf_netns_link_auto_detach(struct bpf_link *link)2929+static void netns_bpf_run_array_detach(struct net *net,3030+ enum netns_bpf_attach_type type)2931{3030- struct bpf_netns_link *net_link =3131- container_of(link, struct bpf_netns_link, link);3232+ struct bpf_prog_array *run_array;32333333- net_link->net = NULL;3434+ run_array = rcu_replace_pointer(net->bpf.run_array[type], NULL,3535+ lockdep_is_held(&netns_bpf_mutex));3636+ bpf_prog_array_free(run_array);3437}35383639static void bpf_netns_link_release(struct bpf_link *link)···4340 enum netns_bpf_attach_type type = net_link->netns_type;4441 struct net *net;45424646- /* Link auto-detached by dying netns. */4747- if (!net_link->net)4848- return;4949-5043 mutex_lock(&netns_bpf_mutex);51445252- /* Recheck after potential sleep. 
We can race with cleanup_net5353- * here, but if we see a non-NULL struct net pointer pre_exit5454- * has not happened yet and will block on netns_bpf_mutex.4545+ /* We can race with cleanup_net, but if we see a non-NULL4646+ * struct net pointer, pre_exit has not run yet and wait for4747+ * netns_bpf_mutex.5548 */5649 net = net_link->net;5750 if (!net)5851 goto out_unlock;59526060- net->bpf.links[type] = NULL;6161- RCU_INIT_POINTER(net->bpf.progs[type], NULL);5353+ netns_bpf_run_array_detach(net, type);5454+ list_del(&net_link->node);62556356out_unlock:6457 mutex_unlock(&netns_bpf_mutex);···7576 struct bpf_netns_link *net_link =7677 container_of(link, struct bpf_netns_link, link);7778 enum netns_bpf_attach_type type = net_link->netns_type;7979+ struct bpf_prog_array *run_array;7880 struct net *net;7981 int ret = 0;8082···9393 goto out_unlock;9494 }95959696+ run_array = rcu_dereference_protected(net->bpf.run_array[type],9797+ lockdep_is_held(&netns_bpf_mutex));9898+ WRITE_ONCE(run_array->items[0].prog, new_prog);9999+96100 old_prog = xchg(&link->prog, new_prog);9797- rcu_assign_pointer(net->bpf.progs[type], new_prog);98101 bpf_prog_put(old_prog);99102100103out_unlock:···145142 .show_fdinfo = bpf_netns_link_show_fdinfo,146143};147144145145+/* Must be called with netns_bpf_mutex held. 
*/146146+static int __netns_bpf_prog_query(const union bpf_attr *attr,147147+ union bpf_attr __user *uattr,148148+ struct net *net,149149+ enum netns_bpf_attach_type type)150150+{151151+ __u32 __user *prog_ids = u64_to_user_ptr(attr->query.prog_ids);152152+ struct bpf_prog_array *run_array;153153+ u32 prog_cnt = 0, flags = 0;154154+155155+ run_array = rcu_dereference_protected(net->bpf.run_array[type],156156+ lockdep_is_held(&netns_bpf_mutex));157157+ if (run_array)158158+ prog_cnt = bpf_prog_array_length(run_array);159159+160160+ if (copy_to_user(&uattr->query.attach_flags, &flags, sizeof(flags)))161161+ return -EFAULT;162162+ if (copy_to_user(&uattr->query.prog_cnt, &prog_cnt, sizeof(prog_cnt)))163163+ return -EFAULT;164164+ if (!attr->query.prog_cnt || !prog_ids || !prog_cnt)165165+ return 0;166166+167167+ return bpf_prog_array_copy_to_user(run_array, prog_ids,168168+ attr->query.prog_cnt);169169+}170170+148171int netns_bpf_prog_query(const union bpf_attr *attr,149172 union bpf_attr __user *uattr)150173{151151- __u32 __user *prog_ids = u64_to_user_ptr(attr->query.prog_ids);152152- u32 prog_id, prog_cnt = 0, flags = 0;153174 enum netns_bpf_attach_type type;154154- struct bpf_prog *attached;155175 struct net *net;176176+ int ret;156177157178 if (attr->query.query_flags)158179 return -EINVAL;···189162 if (IS_ERR(net))190163 return PTR_ERR(net);191164192192- rcu_read_lock();193193- attached = rcu_dereference(net->bpf.progs[type]);194194- if (attached) {195195- prog_cnt = 1;196196- prog_id = attached->aux->id;197197- }198198- rcu_read_unlock();165165+ mutex_lock(&netns_bpf_mutex);166166+ ret = __netns_bpf_prog_query(attr, uattr, net, type);167167+ mutex_unlock(&netns_bpf_mutex);199168200169 put_net(net);201201-202202- if (copy_to_user(&uattr->query.attach_flags, &flags, sizeof(flags)))203203- return -EFAULT;204204- if (copy_to_user(&uattr->query.prog_cnt, &prog_cnt, sizeof(prog_cnt)))205205- return -EFAULT;206206-207207- if (!attr->query.prog_cnt || !prog_ids || 
!prog_cnt)208208- return 0;209209-210210- if (copy_to_user(prog_ids, &prog_id, sizeof(u32)))211211- return -EFAULT;212212-213213- return 0;170170+ return ret;214171}215172216173int netns_bpf_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)217174{175175+ struct bpf_prog_array *run_array;218176 enum netns_bpf_attach_type type;177177+ struct bpf_prog *attached;219178 struct net *net;220179 int ret;180180+181181+ if (attr->target_fd || attr->attach_flags || attr->replace_bpf_fd)182182+ return -EINVAL;221183222184 type = to_netns_bpf_attach_type(attr->attach_type);223185 if (type < 0)···216200 mutex_lock(&netns_bpf_mutex);217201218202 /* Attaching prog directly is not compatible with links */219219- if (net->bpf.links[type]) {203203+ if (!list_empty(&net->bpf.links[type])) {220204 ret = -EEXIST;221205 goto out_unlock;222206 }223207224208 switch (type) {225209 case NETNS_BPF_FLOW_DISSECTOR:226226- ret = flow_dissector_bpf_prog_attach(net, prog);210210+ ret = flow_dissector_bpf_prog_attach_check(net, prog);227211 break;228212 default:229213 ret = -EINVAL;230214 break;231215 }216216+ if (ret)217217+ goto out_unlock;218218+219219+ attached = net->bpf.progs[type];220220+ if (attached == prog) {221221+ /* The same program cannot be attached twice */222222+ ret = -EINVAL;223223+ goto out_unlock;224224+ }225225+226226+ run_array = rcu_dereference_protected(net->bpf.run_array[type],227227+ lockdep_is_held(&netns_bpf_mutex));228228+ if (run_array) {229229+ WRITE_ONCE(run_array->items[0].prog, prog);230230+ } else {231231+ run_array = bpf_prog_array_alloc(1, GFP_KERNEL);232232+ if (!run_array) {233233+ ret = -ENOMEM;234234+ goto out_unlock;235235+ }236236+ run_array->items[0].prog = prog;237237+ rcu_assign_pointer(net->bpf.run_array[type], run_array);238238+ }239239+240240+ net->bpf.progs[type] = prog;241241+ if (attached)242242+ bpf_prog_put(attached);243243+232244out_unlock:233245 mutex_unlock(&netns_bpf_mutex);234246···265221266222/* Must be called with 
netns_bpf_mutex held. */267223static int __netns_bpf_prog_detach(struct net *net,268268- enum netns_bpf_attach_type type)224224+ enum netns_bpf_attach_type type,225225+ struct bpf_prog *old)269226{270227 struct bpf_prog *attached;271228272229 /* Progs attached via links cannot be detached */273273- if (net->bpf.links[type])230230+ if (!list_empty(&net->bpf.links[type]))274231 return -EINVAL;275232276276- attached = rcu_dereference_protected(net->bpf.progs[type],277277- lockdep_is_held(&netns_bpf_mutex));278278- if (!attached)233233+ attached = net->bpf.progs[type];234234+ if (!attached || attached != old)279235 return -ENOENT;280280- RCU_INIT_POINTER(net->bpf.progs[type], NULL);236236+ netns_bpf_run_array_detach(net, type);237237+ net->bpf.progs[type] = NULL;281238 bpf_prog_put(attached);282239 return 0;283240}284241285285-int netns_bpf_prog_detach(const union bpf_attr *attr)242242+int netns_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)286243{287244 enum netns_bpf_attach_type type;245245+ struct bpf_prog *prog;288246 int ret;247247+248248+ if (attr->target_fd)249249+ return -EINVAL;289250290251 type = to_netns_bpf_attach_type(attr->attach_type);291252 if (type < 0)292253 return -EINVAL;293254255255+ prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);256256+ if (IS_ERR(prog))257257+ return PTR_ERR(prog);258258+294259 mutex_lock(&netns_bpf_mutex);295295- ret = __netns_bpf_prog_detach(current->nsproxy->net_ns, type);260260+ ret = __netns_bpf_prog_detach(current->nsproxy->net_ns, type, prog);296261 mutex_unlock(&netns_bpf_mutex);262262+263263+ bpf_prog_put(prog);297264298265 return ret;299266}···312257static int netns_bpf_link_attach(struct net *net, struct bpf_link *link,313258 enum netns_bpf_attach_type type)314259{315315- struct bpf_prog *prog;260260+ struct bpf_netns_link *net_link =261261+ container_of(link, struct bpf_netns_link, link);262262+ struct bpf_prog_array *run_array;316263 int err;317264318265 
mutex_lock(&netns_bpf_mutex);319266320267 /* Allow attaching only one prog or link for now */321321- if (net->bpf.links[type]) {268268+ if (!list_empty(&net->bpf.links[type])) {322269 err = -E2BIG;323270 goto out_unlock;324271 }325272 /* Links are not compatible with attaching prog directly */326326- prog = rcu_dereference_protected(net->bpf.progs[type],327327- lockdep_is_held(&netns_bpf_mutex));328328- if (prog) {273273+ if (net->bpf.progs[type]) {329274 err = -EEXIST;330275 goto out_unlock;331276 }332277333278 switch (type) {334279 case NETNS_BPF_FLOW_DISSECTOR:335335- err = flow_dissector_bpf_prog_attach(net, link->prog);280280+ err = flow_dissector_bpf_prog_attach_check(net, link->prog);336281 break;337282 default:338283 err = -EINVAL;···341286 if (err)342287 goto out_unlock;343288344344- net->bpf.links[type] = link;289289+ run_array = bpf_prog_array_alloc(1, GFP_KERNEL);290290+ if (!run_array) {291291+ err = -ENOMEM;292292+ goto out_unlock;293293+ }294294+ run_array->items[0].prog = link->prog;295295+ rcu_assign_pointer(net->bpf.run_array[type], run_array);296296+297297+ list_add_tail(&net_link->node, &net->bpf.links[type]);345298346299out_unlock:347300 mutex_unlock(&netns_bpf_mutex);···408345 return err;409346}410347348348+static int __net_init netns_bpf_pernet_init(struct net *net)349349+{350350+ int type;351351+352352+ for (type = 0; type < MAX_NETNS_BPF_ATTACH_TYPE; type++)353353+ INIT_LIST_HEAD(&net->bpf.links[type]);354354+355355+ return 0;356356+}357357+411358static void __net_exit netns_bpf_pernet_pre_exit(struct net *net)412359{413360 enum netns_bpf_attach_type type;414414- struct bpf_link *link;361361+ struct bpf_netns_link *net_link;415362416363 mutex_lock(&netns_bpf_mutex);417364 for (type = 0; type < MAX_NETNS_BPF_ATTACH_TYPE; type++) {418418- link = net->bpf.links[type];419419- if (link)420420- bpf_netns_link_auto_detach(link);421421- else422422- __netns_bpf_prog_detach(net, type);365365+ netns_bpf_run_array_detach(net, type);366366+ 
list_for_each_entry(net_link, &net->bpf.links[type], node)367367+ net_link->net = NULL; /* auto-detach link */368368+ if (net->bpf.progs[type])369369+ bpf_prog_put(net->bpf.progs[type]);423370 }424371 mutex_unlock(&netns_bpf_mutex);425372}426373427374static struct pernet_operations netns_bpf_pernet_ops __net_initdata = {375375+ .init = netns_bpf_pernet_init,428376 .pre_exit = netns_bpf_pernet_pre_exit,429377};430378
+10-4
kernel/bpf/reuseport_array.c
···2020/* The caller must hold the reuseport_lock */2121void bpf_sk_reuseport_detach(struct sock *sk)2222{2323- struct sock __rcu **socks;2323+ uintptr_t sk_user_data;24242525 write_lock_bh(&sk->sk_callback_lock);2626- socks = sk->sk_user_data;2727- if (socks) {2626+ sk_user_data = (uintptr_t)sk->sk_user_data;2727+ if (sk_user_data & SK_USER_DATA_BPF) {2828+ struct sock __rcu **socks;2929+3030+ socks = (void *)(sk_user_data & SK_USER_DATA_PTRMASK);2831 WRITE_ONCE(sk->sk_user_data, NULL);2932 /*3033 * Do not move this NULL assignment outside of···255252 struct sock *free_osk = NULL, *osk, *nsk;256253 struct sock_reuseport *reuse;257254 u32 index = *(u32 *)key;255255+ uintptr_t sk_user_data;258256 struct socket *socket;259257 int err, fd;260258···309305 if (err)310306 goto put_file_unlock;311307312312- WRITE_ONCE(nsk->sk_user_data, &array->ptrs[index]);308308+ sk_user_data = (uintptr_t)&array->ptrs[index] | SK_USER_DATA_NOCOPY |309309+ SK_USER_DATA_BPF;310310+ WRITE_ONCE(nsk->sk_user_data, (void *)sk_user_data);313311 rcu_assign_pointer(array->ptrs[index], nsk);314312 free_osk = osk;315313 err = 0;
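The reuseport_array change above works by tagging the `sk_user_data` pointer: flag bits (`SK_USER_DATA_NOCOPY`, `SK_USER_DATA_BPF`) are stored in the low bits of the aligned pointer and masked off with `SK_USER_DATA_PTRMASK` before use. The following is a minimal userspace sketch of that pointer-tagging idea; the `demo_*` names and flag values are illustrative stand-ins, not the kernel's definitions (those live in include/net/sock.h).

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag values and mask, mirroring the idea behind
 * SK_USER_DATA_NOCOPY / SK_USER_DATA_BPF / SK_USER_DATA_PTRMASK.
 * They only work because the tagged pointer is suitably aligned,
 * so its low bits are known to be zero. */
#define DEMO_NOCOPY  1UL
#define DEMO_BPF     2UL
#define DEMO_PTRMASK (~(DEMO_NOCOPY | DEMO_BPF))

/* Tag an aligned pointer with flag bits in its low bits. */
static inline void *demo_tag(void *ptr, uintptr_t flags)
{
	return (void *)((uintptr_t)ptr | flags);
}

/* Recover the original pointer by masking the flag bits off. */
static inline void *demo_untag(void *tagged)
{
	return (void *)((uintptr_t)tagged & DEMO_PTRMASK);
}

/* Test a single flag bit, as bpf_sk_reuseport_detach() does with
 * SK_USER_DATA_BPF before touching the stored pointer. */
static inline int demo_has_flag(void *tagged, uintptr_t flag)
{
	return ((uintptr_t)tagged & flag) != 0;
}
```

This matches the shape of the patch: the detach path first checks the `SK_USER_DATA_BPF` bit, and only then masks with the pointer mask to obtain the real `struct sock __rcu **` slot.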
+8-10
kernel/bpf/ringbuf.c
···132132{133133 struct bpf_ringbuf *rb;134134135135- if (!data_sz || !PAGE_ALIGNED(data_sz))136136- return ERR_PTR(-EINVAL);137137-138138-#ifdef CONFIG_64BIT139139- /* on 32-bit arch, it's impossible to overflow record's hdr->pgoff */140140- if (data_sz > RINGBUF_MAX_DATA_SZ)141141- return ERR_PTR(-E2BIG);142142-#endif143143-144135 rb = bpf_ringbuf_area_alloc(data_sz, numa_node);145136 if (!rb)146137 return ERR_PTR(-ENOMEM);···157166 return ERR_PTR(-EINVAL);158167159168 if (attr->key_size || attr->value_size ||160160- attr->max_entries == 0 || !PAGE_ALIGNED(attr->max_entries))169169+ !is_power_of_2(attr->max_entries) ||170170+ !PAGE_ALIGNED(attr->max_entries))161171 return ERR_PTR(-EINVAL);172172+173173+#ifdef CONFIG_64BIT174174+ /* on 32-bit arch, it's impossible to overflow record's hdr->pgoff */175175+ if (attr->max_entries > RINGBUF_MAX_DATA_SZ)176176+ return ERR_PTR(-E2BIG);177177+#endif162178163179 rb_map = kzalloc(sizeof(*rb_map), GFP_USER);164180 if (!rb_map)
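The ringbuf change above tightens the map-size check: `max_entries` must now be a power of two in addition to being page-aligned, because the ring buffer's mask-based position arithmetic assumes a power-of-two size (previously a size of, say, three pages passed the page-alignment check alone). A minimal sketch of the combined check, with a hypothetical `demo_` prefix and an assumed 4 KiB page size:

```c
#include <assert.h>

#define DEMO_PAGE_SIZE 4096UL  /* assumption: 4 KiB pages */

/* Classic bit trick used by the kernel's is_power_of_2(). */
static int demo_is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Mirrors the stricter validation in ringbuf_map_alloc(): the data
 * area must be page-aligned AND a power of two. */
static int demo_ringbuf_size_ok(unsigned long max_entries)
{
	return demo_is_power_of_2(max_entries) &&
	       (max_entries % DEMO_PAGE_SIZE) == 0;
}
```

Note how a three-page size (12288 bytes) is page-aligned yet rejected, which is precisely the case the patch closes.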
+24-21
kernel/bpf/syscall.c
···21212121 !bpf_capable())21222122 return -EPERM;2123212321242124- if (is_net_admin_prog_type(type) && !capable(CAP_NET_ADMIN))21242124+ if (is_net_admin_prog_type(type) && !capable(CAP_NET_ADMIN) && !capable(CAP_SYS_ADMIN))21252125 return -EPERM;21262126 if (is_perfmon_prog_type(type) && !perfmon_capable())21272127 return -EPERM;···28932893 switch (ptype) {28942894 case BPF_PROG_TYPE_SK_MSG:28952895 case BPF_PROG_TYPE_SK_SKB:28962896- return sock_map_get_from_fd(attr, NULL);28962896+ return sock_map_prog_detach(attr, ptype);28972897 case BPF_PROG_TYPE_LIRC_MODE2:28982898 return lirc_prog_detach(attr);28992899 case BPF_PROG_TYPE_FLOW_DISSECTOR:29002900- if (!capable(CAP_NET_ADMIN))29012901- return -EPERM;29022902- return netns_bpf_prog_detach(attr);29002900+ return netns_bpf_prog_detach(attr, ptype);29032901 case BPF_PROG_TYPE_CGROUP_DEVICE:29042902 case BPF_PROG_TYPE_CGROUP_SKB:29052903 case BPF_PROG_TYPE_CGROUP_SOCK:···31373139 return NULL;31383140}3139314131403140-static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog)31423142+static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog,31433143+ const struct cred *f_cred)31413144{31423145 const struct bpf_map *map;31433146 struct bpf_insn *insns;···31643165 code == (BPF_JMP | BPF_CALL_ARGS)) {31653166 if (code == (BPF_JMP | BPF_CALL_ARGS))31663167 insns[i].code = BPF_JMP | BPF_CALL;31673167- if (!bpf_dump_raw_ok())31683168+ if (!bpf_dump_raw_ok(f_cred))31683169 insns[i].imm = 0;31693170 continue;31703171 }···32203221 return 0;32213222}3222322332233223-static int bpf_prog_get_info_by_fd(struct bpf_prog *prog,32243224+static int bpf_prog_get_info_by_fd(struct file *file,32253225+ struct bpf_prog *prog,32243226 const union bpf_attr *attr,32253227 union bpf_attr __user *uattr)32263228{···32903290 struct bpf_insn *insns_sanitized;32913291 bool fault;3292329232933293- if (prog->blinded && !bpf_dump_raw_ok()) {32933293+ if (prog->blinded && !bpf_dump_raw_ok(file->f_cred)) {32943294 
info.xlated_prog_insns = 0;32953295 goto done;32963296 }32973297- insns_sanitized = bpf_insn_prepare_dump(prog);32973297+ insns_sanitized = bpf_insn_prepare_dump(prog, file->f_cred);32983298 if (!insns_sanitized)32993299 return -ENOMEM;33003300 uinsns = u64_to_user_ptr(info.xlated_prog_insns);···33283328 }3329332933303330 if (info.jited_prog_len && ulen) {33313331- if (bpf_dump_raw_ok()) {33313331+ if (bpf_dump_raw_ok(file->f_cred)) {33323332 uinsns = u64_to_user_ptr(info.jited_prog_insns);33333333 ulen = min_t(u32, info.jited_prog_len, ulen);33343334···33633363 ulen = info.nr_jited_ksyms;33643364 info.nr_jited_ksyms = prog->aux->func_cnt ? : 1;33653365 if (ulen) {33663366- if (bpf_dump_raw_ok()) {33663366+ if (bpf_dump_raw_ok(file->f_cred)) {33673367 unsigned long ksym_addr;33683368 u64 __user *user_ksyms;33693369 u32 i;···33943394 ulen = info.nr_jited_func_lens;33953395 info.nr_jited_func_lens = prog->aux->func_cnt ? : 1;33963396 if (ulen) {33973397- if (bpf_dump_raw_ok()) {33973397+ if (bpf_dump_raw_ok(file->f_cred)) {33983398 u32 __user *user_lens;33993399 u32 func_len, i;34003400···34513451 else34523452 info.nr_jited_line_info = 0;34533453 if (info.nr_jited_line_info && ulen) {34543454- if (bpf_dump_raw_ok()) {34543454+ if (bpf_dump_raw_ok(file->f_cred)) {34553455 __u64 __user *user_linfo;34563456 u32 i;34573457···34973497 return 0;34983498}3499349935003500-static int bpf_map_get_info_by_fd(struct bpf_map *map,35003500+static int bpf_map_get_info_by_fd(struct file *file,35013501+ struct bpf_map *map,35013502 const union bpf_attr *attr,35023503 union bpf_attr __user *uattr)35033504{···35413540 return 0;35423541}3543354235443544-static int bpf_btf_get_info_by_fd(struct btf *btf,35433543+static int bpf_btf_get_info_by_fd(struct file *file,35443544+ struct btf *btf,35453545 const union bpf_attr *attr,35463546 union bpf_attr __user *uattr)35473547{···35573555 return btf_get_info_by_fd(btf, attr, uattr);35583556}3559355735603560-static int 
bpf_link_get_info_by_fd(struct bpf_link *link,35583558+static int bpf_link_get_info_by_fd(struct file *file,35593559+ struct bpf_link *link,35613560 const union bpf_attr *attr,35623561 union bpf_attr __user *uattr)35633562{···36113608 return -EBADFD;3612360936133610 if (f.file->f_op == &bpf_prog_fops)36143614- err = bpf_prog_get_info_by_fd(f.file->private_data, attr,36113611+ err = bpf_prog_get_info_by_fd(f.file, f.file->private_data, attr,36153612 uattr);36163613 else if (f.file->f_op == &bpf_map_fops)36173617- err = bpf_map_get_info_by_fd(f.file->private_data, attr,36143614+ err = bpf_map_get_info_by_fd(f.file, f.file->private_data, attr,36183615 uattr);36193616 else if (f.file->f_op == &btf_fops)36203620- err = bpf_btf_get_info_by_fd(f.file->private_data, attr, uattr);36173617+ err = bpf_btf_get_info_by_fd(f.file, f.file->private_data, attr, uattr);36213618 else if (f.file->f_op == &bpf_link_fops)36223622- err = bpf_link_get_info_by_fd(f.file->private_data,36193619+ err = bpf_link_get_info_by_fd(f.file, f.file->private_data,36233620 attr, uattr);36243621 else36253622 err = -EINVAL;
+10-3
kernel/bpf/verifier.c
···399399 return type == PTR_TO_SOCKET ||400400 type == PTR_TO_TCP_SOCK ||401401 type == PTR_TO_MAP_VALUE ||402402- type == PTR_TO_SOCK_COMMON ||403403- type == PTR_TO_BTF_ID;402402+ type == PTR_TO_SOCK_COMMON;404403}405404406405static bool reg_type_may_be_null(enum bpf_reg_type type)···98009801 int i, j, subprog_start, subprog_end = 0, len, subprog;98019802 struct bpf_insn *insn;98029803 void *old_bpf_func;98039803- int err;98049804+ int err, num_exentries;9804980598059806 if (env->subprog_cnt <= 1)98069807 return 0;···98759876 func[i]->aux->nr_linfo = prog->aux->nr_linfo;98769877 func[i]->aux->jited_linfo = prog->aux->jited_linfo;98779878 func[i]->aux->linfo_idx = env->subprog_info[i].linfo_idx;98799879+ num_exentries = 0;98809880+ insn = func[i]->insnsi;98819881+ for (j = 0; j < func[i]->len; j++, insn++) {98829882+ if (BPF_CLASS(insn->code) == BPF_LDX &&98839883+ BPF_MODE(insn->code) == BPF_PROBE_MEM)98849884+ num_exentries++;98859885+ }98869886+ func[i]->aux->num_exentries = num_exentries;98789887 func[i] = bpf_int_jit_compile(func[i]);98799888 if (!func[i]->jited) {98809889 err = -ENOTSUPP;
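The verifier hunk above walks each subprogram's instructions and counts `BPF_LDX | BPF_PROBE_MEM` loads so that `num_exentries` (the per-subprog exception-table size) is set before JITing. The counting pattern can be sketched as follows; the `DEMO_*` encoding constants are simplified stand-ins for the real `BPF_CLASS()`/`BPF_MODE()` macros from include/uapi/linux/bpf.h:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified opcode split: low bits select the class, high bits the
 * mode. The real values come from the BPF instruction encoding. */
#define DEMO_CLASS(code) ((code) & 0x07)
#define DEMO_MODE(code)  ((code) & 0xe0)
#define DEMO_LDX         0x01
#define DEMO_PROBE_MEM   0x20

struct demo_insn { uint8_t code; };

/* Count instructions needing an exception table entry, as the patch
 * does for each subprogram before bpf_int_jit_compile(). */
static int demo_count_exentries(const struct demo_insn *insn, int len)
{
	int i, n = 0;

	for (i = 0; i < len; i++)
		if (DEMO_CLASS(insn[i].code) == DEMO_LDX &&
		    DEMO_MODE(insn[i].code) == DEMO_PROBE_MEM)
			n++;
	return n;
}
```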
+19-12
kernel/cgroup/cgroup.c
···6439643964406440void cgroup_sk_alloc(struct sock_cgroup_data *skcd)64416441{64426442- if (cgroup_sk_alloc_disabled)64436443- return;64446444-64456445- /* Socket clone path */64466446- if (skcd->val) {64476447- /*64486448- * We might be cloning a socket which is left in an empty64496449- * cgroup and the cgroup might have already been rmdir'd.64506450- * Don't use cgroup_get_live().64516451- */64526452- cgroup_get(sock_cgroup_ptr(skcd));64536453- cgroup_bpf_get(sock_cgroup_ptr(skcd));64426442+ if (cgroup_sk_alloc_disabled) {64436443+ skcd->no_refcnt = 1;64546444 return;64556445 }64566446···64656475 rcu_read_unlock();64666476}6467647764786478+void cgroup_sk_clone(struct sock_cgroup_data *skcd)64796479+{64806480+ if (skcd->val) {64816481+ if (skcd->no_refcnt)64826482+ return;64836483+ /*64846484+ * We might be cloning a socket which is left in an empty64856485+ * cgroup and the cgroup might have already been rmdir'd.64866486+ * Don't use cgroup_get_live().64876487+ */64886488+ cgroup_get(sock_cgroup_ptr(skcd));64896489+ cgroup_bpf_get(sock_cgroup_ptr(skcd));64906490+ }64916491+}64926492+64686493void cgroup_sk_free(struct sock_cgroup_data *skcd)64696494{64706495 struct cgroup *cgrp = sock_cgroup_ptr(skcd);6471649664976497+ if (skcd->no_refcnt)64986498+ return;64726499 cgroup_bpf_put(cgrp);64736500 cgroup_put(cgrp);64746501}
+13
kernel/debug/gdbstub.c
···792792 }793793 break;794794#endif795795+#ifdef CONFIG_HAVE_ARCH_KGDB_QXFER_PKT796796+ case 'S':797797+ if (!strncmp(remcom_in_buffer, "qSupported:", 11))798798+ strcpy(remcom_out_buffer, kgdb_arch_gdb_stub_feature);799799+ break;800800+ case 'X':801801+ if (!strncmp(remcom_in_buffer, "qXfer:", 6))802802+ kgdb_arch_handle_qxfer_pkt(remcom_in_buffer,803803+ remcom_out_buffer);804804+ break;805805+#endif806806+ default:807807+ break;795808 }796809}797810
···66#include <linux/debugfs.h>77#include <linux/dma-direct.h>88#include <linux/dma-noncoherent.h>99-#include <linux/dma-contiguous.h>109#include <linux/init.h>1110#include <linux/genalloc.h>1211#include <linux/set_memory.h>···68696970 do {7071 pool_size = 1 << (PAGE_SHIFT + order);7171-7272- if (dev_get_cma_area(NULL))7373- page = dma_alloc_from_contiguous(NULL, 1 << order,7474- order, false);7575- else7676- page = alloc_pages(gfp, order);7272+ page = alloc_pages(gfp, order);7773 } while (!page && order-- > 0);7874 if (!page)7975 goto out;···112118 dma_common_free_remap(addr, pool_size);113119#endif114120free_page: __maybe_unused115115- if (!dma_release_from_contiguous(NULL, page, 1 << order))116116- __free_pages(page, order);121121+ __free_pages(page, order);117122out:118123 return ret;119124}···196203}197204postcore_initcall(dma_atomic_pool_init);198205199199-static inline struct gen_pool *dev_to_pool(struct device *dev)206206+static inline struct gen_pool *dma_guess_pool_from_device(struct device *dev)200207{201208 u64 phys_mask;202209 gfp_t gfp;···210217 return atomic_pool_kernel;211218}212219213213-static bool dma_in_atomic_pool(struct device *dev, void *start, size_t size)220220+static inline struct gen_pool *dma_get_safer_pool(struct gen_pool *bad_pool)214221{215215- struct gen_pool *pool = dev_to_pool(dev);222222+ if (bad_pool == atomic_pool_kernel)223223+ return atomic_pool_dma32 ? 
: atomic_pool_dma;216224217217- if (unlikely(!pool))218218- return false;219219- return gen_pool_has_addr(pool, (unsigned long)start, size);225225+ if (bad_pool == atomic_pool_dma32)226226+ return atomic_pool_dma;227227+228228+ return NULL;229229+}230230+231231+static inline struct gen_pool *dma_guess_pool(struct device *dev,232232+ struct gen_pool *bad_pool)233233+{234234+ if (bad_pool)235235+ return dma_get_safer_pool(bad_pool);236236+237237+ return dma_guess_pool_from_device(dev);220238}221239222240void *dma_alloc_from_pool(struct device *dev, size_t size,223241 struct page **ret_page, gfp_t flags)224242{225225- struct gen_pool *pool = dev_to_pool(dev);226226- unsigned long val;243243+ struct gen_pool *pool = NULL;244244+ unsigned long val = 0;227245 void *ptr = NULL;246246+ phys_addr_t phys;228247229229- if (!pool) {230230- WARN(1, "%pGg atomic pool not initialised!\n", &flags);231231- return NULL;248248+ while (1) {249249+ pool = dma_guess_pool(dev, pool);250250+ if (!pool) {251251+ WARN(1, "Failed to get suitable pool for %s\n",252252+ dev_name(dev));253253+ break;254254+ }255255+256256+ val = gen_pool_alloc(pool, size);257257+ if (!val)258258+ continue;259259+260260+ phys = gen_pool_virt_to_phys(pool, val);261261+ if (dma_coherent_ok(dev, phys, size))262262+ break;263263+264264+ gen_pool_free(pool, val, size);265265+ val = 0;232266 }233267234234- val = gen_pool_alloc(pool, size);235235- if (val) {236236- phys_addr_t phys = gen_pool_virt_to_phys(pool, val);237268269269+ if (val) {238270 *ret_page = pfn_to_page(__phys_to_pfn(phys));239271 ptr = (void *)val;240272 memset(ptr, 0, size);273273+274274+ if (gen_pool_avail(pool) < atomic_pool_size)275275+ schedule_work(&atomic_pool_work);241276 }242242- if (gen_pool_avail(pool) < atomic_pool_size)243243- schedule_work(&atomic_pool_work);244277245278 return ptr;246279}247280248281bool dma_free_from_pool(struct device *dev, void *start, size_t size)249282{250250- struct gen_pool *pool = dev_to_pool(dev);283283+ struct 
gen_pool *pool = NULL;251284252252- if (!dma_in_atomic_pool(dev, start, size))253253- return false;254254- gen_pool_free(pool, (unsigned long)start, size);255255- return true;285285+ while (1) {286286+ pool = dma_guess_pool(dev, pool);287287+ if (!pool)288288+ return false;289289+290290+ if (gen_pool_has_addr(pool, (unsigned long)start, size)) {291291+ gen_pool_free(pool, (unsigned long)start, size);292292+ return true;293293+ }294294+ }256295}
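The DMA pool rework above replaces a single per-device pool lookup with a retry loop: guess a pool, and on failure fall back to a "safer" (more restrictive) one until none remain. A minimal sketch of that fallback chain, using plain stand-in objects rather than real `struct gen_pool` instances, and simplifying away the case where the DMA32 pool is absent (the patch handles that with `atomic_pool_dma32 ?: atomic_pool_dma`):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for atomic_pool_kernel / atomic_pool_dma32 /
 * atomic_pool_dma; any distinct addresses will do for the demo. */
static int pool_kernel, pool_dma32, pool_dma;

/* After a failed allocation, fall back to the next more restrictive
 * pool, mirroring dma_get_safer_pool() in the patch. */
static void *demo_safer_pool(void *bad_pool)
{
	if (bad_pool == &pool_kernel)
		return &pool_dma32;
	if (bad_pool == &pool_dma32)
		return &pool_dma;
	return NULL; /* nothing safer than the DMA pool: give up */
}
```

The alloc path in the patch then wraps this in a loop: try `gen_pool_alloc()`, verify the result with `dma_coherent_ok()`, and if either step fails, ask for the next pool until `NULL` terminates the search.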
+1-1
kernel/events/uprobes.c
···21992199 if (!uprobe) {22002200 if (is_swbp > 0) {22012201 /* No matching uprobe; signal SIGTRAP. */22022202- send_sig(SIGTRAP, current, 0);22022202+ force_sig(SIGTRAP);22032203 } else {22042204 /*22052205 * Either we raced with uprobe_unregister() or we can't
+1-1
kernel/fork.c
···19771977 * to stop root fork bombs.19781978 */19791979 retval = -EAGAIN;19801980- if (nr_threads >= max_threads)19801980+ if (data_race(nr_threads >= max_threads))19811981 goto bad_fork_cleanup_count;1982198219831983 delayacct_tsk_init(p); /* Must remain after dup_task_struct() */
+35-2
kernel/irq/manage.c
···195195 set_bit(IRQTF_AFFINITY, &action->thread_flags);196196}197197198198+#ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK198199static void irq_validate_effective_affinity(struct irq_data *data)199200{200200-#ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK201201 const struct cpumask *m = irq_data_get_effective_affinity_mask(data);202202 struct irq_chip *chip = irq_data_get_irq_chip(data);203203···205205 return;206206 pr_warn_once("irq_chip %s did not update eff. affinity mask of irq %u\n",207207 chip->name, data->irq);208208-#endif209208}209209+210210+static inline void irq_init_effective_affinity(struct irq_data *data,211211+ const struct cpumask *mask)212212+{213213+ cpumask_copy(irq_data_get_effective_affinity_mask(data), mask);214214+}215215+#else216216+static inline void irq_validate_effective_affinity(struct irq_data *data) { }217217+static inline void irq_init_effective_affinity(struct irq_data *data,218218+ const struct cpumask *mask) { }219219+#endif210220211221int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,212222 bool force)···314304 return ret;315305}316306307307+static bool irq_set_affinity_deactivated(struct irq_data *data,308308+ const struct cpumask *mask, bool force)309309+{310310+ struct irq_desc *desc = irq_data_to_desc(data);311311+312312+ /*313313+ * If the interrupt is not yet activated, just store the affinity314314+ * mask and do not call the chip driver at all. 
On activation the315315+ * driver has to make sure anyway that the interrupt is in a316316+ * useable state so startup works.317317+ */318318+ if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) || irqd_is_activated(data))319319+ return false;320320+321321+ cpumask_copy(desc->irq_common_data.affinity, mask);322322+ irq_init_effective_affinity(data, mask);323323+ irqd_set(data, IRQD_AFFINITY_SET);324324+ return true;325325+}326326+317327int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,318328 bool force)319329{···343313344314 if (!chip || !chip->irq_set_affinity)345315 return -EINVAL;316316+317317+ if (irq_set_affinity_deactivated(data, mask, force))318318+ return 0;346319347320 if (irq_can_move_pcntxt(data) && !irqd_is_setaffinity_pending(data)) {348321 ret = irq_try_set_affinity(data, mask, force);
+11-6
kernel/kallsyms.c
···644644 * Otherwise, require CAP_SYSLOG (assuming kptr_restrict isn't set to645645 * block even that).646646 */647647-int kallsyms_show_value(void)647647+bool kallsyms_show_value(const struct cred *cred)648648{649649 switch (kptr_restrict) {650650 case 0:651651 if (kallsyms_for_perf())652652- return 1;652652+ return true;653653 /* fallthrough */654654 case 1:655655- if (has_capability_noaudit(current, CAP_SYSLOG))656656- return 1;655655+ if (security_capable(cred, &init_user_ns, CAP_SYSLOG,656656+ CAP_OPT_NOAUDIT) == 0)657657+ return true;657658 /* fallthrough */658659 default:659659- return 0;660660+ return false;660661 }661662}662663···674673 return -ENOMEM;675674 reset_iter(iter, 0);676675677677- iter->show_value = kallsyms_show_value();676676+ /*677677+ * Instead of checking this on every s_show() call, cache678678+ * the result here at open time.679679+ */680680+ iter->show_value = kallsyms_show_value(file->f_cred);678681 return 0;679682}680683
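The kallsyms change above keeps the deliberate switch fallthrough: each lower `kptr_restrict` level accepts every check of the stricter levels below it (level 0 accepts the perf check *or* the CAP_SYSLOG check, level 1 only CAP_SYSLOG, anything higher nothing). A simplified model of that policy, with hypothetical boolean inputs standing in for the real credential checks:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of kallsyms_show_value(): the fallthroughs make
 * each kptr_restrict level inherit the stricter levels' checks.
 * for_perf and has_syslog stand in for kallsyms_for_perf() and the
 * CAP_SYSLOG credential check on the opener's f_cred. */
static bool demo_show_value(int kptr_restrict, bool for_perf,
			    bool has_syslog)
{
	switch (kptr_restrict) {
	case 0:
		if (for_perf)
			return true;
		/* fallthrough */
	case 1:
		if (has_syslog)
			return true;
		/* fallthrough */
	default:
		return false;
	}
}
```

Note the other half of the patch: the result is now computed once at open time from `file->f_cred` and cached in `iter->show_value`, rather than re-checking the *current* task's capabilities on every `s_show()` call.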
+2-2
kernel/kprobes.c
···24482448 else24492449 kprobe_type = "k";2450245024512451- if (!kallsyms_show_value())24512451+ if (!kallsyms_show_value(pi->file->f_cred))24522452 addr = NULL;2453245324542454 if (sym)···25402540 * If /proc/kallsyms is not showing kernel address, we won't25412541 * show them here either.25422542 */25432543- if (!kallsyms_show_value())25432543+ if (!kallsyms_show_value(m->file->f_cred))25442544 seq_printf(m, "0x%px-0x%px\t%ps\n", NULL, NULL,25452545 (void *)ent->start_addr);25462546 else
···335335 *336336 * Ensure reorder queue is read after pd->lock is dropped so we see337337 * new objects from another task in padata_do_serial. Pairs with338338- * smp_mb__after_atomic in padata_do_serial.338338+ * smp_mb in padata_do_serial.339339 */340340 smp_mb();341341···418418 * with the trylock of pd->lock in padata_reorder. Pairs with smp_mb419419 * in padata_reorder.420420 */421421- smp_mb__after_atomic();421421+ smp_mb();422422423423 padata_reorder(pd);424424}
kernel/sched/core.c

···
 void activate_task(struct rq *rq, struct task_struct *p, int flags)
 {
-	if (task_contributes_to_load(p))
-		rq->nr_uninterruptible--;
-
	enqueue_task(rq, p, flags);

	p->on_rq = TASK_ON_RQ_QUEUED;
···
 void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
 {
	p->on_rq = (flags & DEQUEUE_SLEEP) ? 0 : TASK_ON_RQ_MIGRATING;
-
-	if (task_contributes_to_load(p))
-		rq->nr_uninterruptible++;

	dequeue_task(rq, p, flags);
 }
···
	lockdep_assert_held(&rq->lock);

-#ifdef CONFIG_SMP
	if (p->sched_contributes_to_load)
		rq->nr_uninterruptible--;

+#ifdef CONFIG_SMP
	if (wake_flags & WF_MIGRATED)
		en_flags |= ENQUEUE_MIGRATED;
 #endif
···
	 * A similar smb_rmb() lives in try_invoke_on_locked_down_task().
	 */
	smp_rmb();
-	if (p->on_rq && ttwu_remote(p, wake_flags))
+	if (READ_ONCE(p->on_rq) && ttwu_remote(p, wake_flags))
		goto unlock;

	if (p->in_iowait) {
···
	}

 #ifdef CONFIG_SMP
-	p->sched_contributes_to_load = !!task_contributes_to_load(p);
-	p->state = TASK_WAKING;
-
	/*
	 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
	 * possible to, falsely, observe p->on_cpu == 0.
···
	 *
	 * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
	 * __schedule(). See the comment for smp_mb__after_spinlock().
+	 *
+	 * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
+	 * schedule()'s deactivate_task() has 'happened' and p will no longer
+	 * care about it's own p->state. See the comment in __schedule().
	 */
-	smp_rmb();
+	smp_acquire__after_ctrl_dep();
+
+	/*
+	 * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
+	 * == 0), which means we need to do an enqueue, change p->state to
+	 * TASK_WAKING such that we can unlock p->pi_lock before doing the
+	 * enqueue, such as ttwu_queue_wakelist().
+	 */
+	p->state = TASK_WAKING;

	/*
	 * If the owning (remote) CPU is still in the middle of schedule() with
···
	 * Silence PROVE_RCU.
	 */
	raw_spin_lock_irqsave(&p->pi_lock, flags);
+	rseq_migrate(p);
	/*
	 * We're setting the CPU for the first time, we don't migrate,
	 * so use __set_task_cpu().
···
	 * as we're not fully set-up yet.
	 */
	p->recent_used_cpu = task_cpu(p);
+	rseq_migrate(p);
	__set_task_cpu(p, select_task_rq(p, task_cpu(p), SD_BALANCE_FORK, 0));
 #endif
	rq = __task_rq_lock(p, &rf);
···
 {
	struct task_struct *prev, *next;
	unsigned long *switch_count;
+	unsigned long prev_state;
	struct rq_flags rf;
	struct rq *rq;
	int cpu;
···
	/*
	 * Make sure that signal_pending_state()->signal_pending() below
	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
-	 * done by the caller to avoid the race with signal_wake_up().
+	 * done by the caller to avoid the race with signal_wake_up():
	 *
-	 * The membarrier system call requires a full memory barrier
+	 * __set_current_state(@state)		signal_wake_up()
+	 * schedule()				  set_tsk_thread_flag(p, TIF_SIGPENDING)
+	 *					  wake_up_state(p, state)
+	 *   LOCK rq->lock			    LOCK p->pi_state
+	 *   smp_mb__after_spinlock()		    smp_mb__after_spinlock()
+	 *     if (signal_pending_state())	    if (p->state & @state)
+	 *
+	 * Also, the membarrier system call requires a full memory barrier
	 * after coming from user-space, before storing to rq->curr.
	 */
	rq_lock(rq, &rf);
···
	update_rq_clock(rq);

	switch_count = &prev->nivcsw;
-	if (!preempt && prev->state) {
-		if (signal_pending_state(prev->state, prev)) {
+
+	/*
+	 * We must load prev->state once (task_struct::state is volatile), such
+	 * that:
+	 *
+	 * - we form a control dependency vs deactivate_task() below.
+	 * - ptrace_{,un}freeze_traced() can change ->state underneath us.
+	 */
+	prev_state = prev->state;
+	if (!preempt && prev_state) {
+		if (signal_pending_state(prev_state, prev)) {
			prev->state = TASK_RUNNING;
		} else {
+			prev->sched_contributes_to_load =
+				(prev_state & TASK_UNINTERRUPTIBLE) &&
+				!(prev_state & TASK_NOLOAD) &&
+				!(prev->flags & PF_FROZEN);
+
+			if (prev->sched_contributes_to_load)
+				rq->nr_uninterruptible++;
+
+			/*
+			 * __schedule()			ttwu()
+			 *   prev_state = prev->state;	  if (p->on_rq && ...)
+			 *   if (prev_state)		    goto out;
+			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
+			 *				  p->state = TASK_WAKING
+			 *
+			 * Where __schedule() and ttwu() have matching control dependencies.
+			 *
+			 * After this, schedule() must not care about p->state any more.
+			 */
			deactivate_task(rq, prev, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);

			if (prev->in_iowait) {
···
 int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
			  void *key)
 {
+	WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~WF_SYNC);
	return try_to_wake_up(curr->private, mode, wake_flags);
 }
 EXPORT_SYMBOL(default_wake_function);
+13-2
kernel/sched/fair.c
···
		return;
	}

-	rq->misfit_task_load = task_h_load(p);
+	/*
+	 * Make sure that misfit_task_load will not be null even if
+	 * task_h_load() returns 0.
+	 */
+	rq->misfit_task_load = max_t(unsigned long, task_h_load(p), 1);
 }

 #else /* CONFIG_SMP */
···
	switch (env->migration_type) {
	case migrate_load:
-		load = task_h_load(p);
+		/*
+		 * Depending of the number of CPUs and tasks and the
+		 * cgroup hierarchy, task_h_load() can return a null
+		 * value. Make sure that env->imbalance decreases
+		 * otherwise detach_tasks() will stop only after
+		 * detaching up to loop_max tasks.
+		 */
+		load = max_t(unsigned long, task_h_load(p), 1);

		if (sched_feat(LB_MIN) &&
		    load < 16 && !env->sd->nr_balance_failed)
+7-3
kernel/signal.c
···
	struct signal_struct *signal = current->signal;
	int signr;

-	if (unlikely(current->task_works))
-		task_work_run();
-
	if (unlikely(uprobe_deny_signal()))
		return false;
···
 relock:
	spin_lock_irq(&sighand->siglock);
+	current->jobctl &= ~JOBCTL_TASK_WORK;
+	if (unlikely(current->task_works)) {
+		spin_unlock_irq(&sighand->siglock);
+		task_work_run();
+		goto relock;
+	}
+
	/*
	 * Every stopped thread goes here after wakeup. Check to see if
	 * we should notify the parent, prepare_signal(SIGCONT) encodes
+14-2
kernel/task_work.c
···
  * 0 if succeeds or -ESRCH.
  */
 int
-task_work_add(struct task_struct *task, struct callback_head *work, bool notify)
+task_work_add(struct task_struct *task, struct callback_head *work, int notify)
 {
	struct callback_head *head;
+	unsigned long flags;

	do {
		head = READ_ONCE(task->task_works);
···
		work->next = head;
	} while (cmpxchg(&task->task_works, head, work) != head);

-	if (notify)
+	switch (notify) {
+	case TWA_RESUME:
		set_notify_resume(task);
+		break;
+	case TWA_SIGNAL:
+		if (lock_task_sighand(task, &flags)) {
+			task->jobctl |= JOBCTL_TASK_WORK;
+			signal_wake_up(task, 0);
+			unlock_task_sighand(task, &flags);
+		}
+		break;
+	}
+
	return 0;
 }
+16-5
kernel/time/timer.c
···
		 * Force expire obscene large timeouts to expire at the
		 * capacity limit of the wheel.
		 */
-		if (expires >= WHEEL_TIMEOUT_CUTOFF)
-			expires = WHEEL_TIMEOUT_MAX;
+		if (delta >= WHEEL_TIMEOUT_CUTOFF)
+			expires = clk + WHEEL_TIMEOUT_MAX;

		idx = calc_index(expires, LVL_DEPTH - 1);
	}
···
	 * Set the next expiry time and kick the CPU so it can reevaluate the
	 * wheel:
	 */
-	base->next_expiry = timer->expires;
+	if (time_before(timer->expires, base->clk)) {
+		/*
+		 * Prevent from forward_timer_base() moving the base->clk
+		 * backward
+		 */
+		base->next_expiry = base->clk;
+	} else {
+		base->next_expiry = timer->expires;
+	}
	wake_up_nohz_cpu(base->cpu);
 }
···
	 * If the next expiry value is > jiffies, then we fast forward to
	 * jiffies otherwise we forward to the next expiry value.
	 */
-	if (time_after(base->next_expiry, jnow))
+	if (time_after(base->next_expiry, jnow)) {
		base->clk = jnow;
-	else
+	} else {
+		if (WARN_ON_ONCE(time_before(base->next_expiry, base->clk)))
+			return;
		base->clk = base->next_expiry;
+	}
 #endif
 }
+5
lib/Kconfig.kgdb
···
 config HAVE_ARCH_KGDB
	bool

+# set if architecture has the its kgdb_arch_handle_qxfer_pkt
+# function to enable gdb stub to address XML packet sent from GDB.
+config HAVE_ARCH_KGDB_QXFER_PKT
+	bool
+
 menuconfig KGDB
	bool "KGDB: kernel debugger"
	depends on HAVE_ARCH_KGDB
+1
lib/packing.c
···
  * @endbit: The index (in logical notation, compensated for quirks) where
  *	    the packed value ends within pbuf. Must be smaller than, or equal
  *	    to, startbit.
+ * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf.
  * @op: If PACK, then uval will be treated as const pointer and copied (packed)
  *	into pbuf, between startbit and endbit.
  *	If UNPACK, then pbuf will be treated as const pointer and the logical
mm/filemap.c

···
	page = find_get_page(mapping, index);
	if (!page) {
-		if (iocb->ki_flags & IOCB_NOWAIT)
+		if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_NOIO))
			goto would_block;
		page_cache_sync_readahead(mapping,
				ra, filp,
···
		goto no_cached_page;
	}
	if (PageReadahead(page)) {
+		if (iocb->ki_flags & IOCB_NOIO) {
+			put_page(page);
+			goto out;
+		}
		page_cache_async_readahead(mapping,
				ra, filp, page,
				index, last_index - index);
···
	}

 readpage:
+	if (iocb->ki_flags & IOCB_NOIO) {
+		unlock_page(page);
+		put_page(page);
+		goto would_block;
+	}
	/*
	 * A previous I/O error may have been due to temporary
	 * failures, eg. multipath errors.
···
 *
 * This is the "read_iter()" routine for all filesystems
 * that can use the page cache directly.
+ *
+ * The IOCB_NOWAIT flag in iocb->ki_flags indicates that -EAGAIN shall
+ * be returned when no data can be read without waiting for I/O requests
+ * to complete; it doesn't prevent readahead.
+ *
+ * The IOCB_NOIO flag in iocb->ki_flags indicates that no new I/O
+ * requests shall be made for the read or for readahead. When no data
+ * can be read, -EAGAIN shall be returned. When readahead would be
+ * triggered, a partial, possibly empty read shall be returned.
+ *
 * Return:
 * * number of bytes copied, even for partial reads
- * * negative error code if nothing was read
+ * * negative error code (or 0 if IOCB_NOIO) if nothing was read
 */
 ssize_t
 generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
+11-6
mm/hugetlb.c
···
 unsigned int default_hstate_idx;
 struct hstate hstates[HUGE_MAX_HSTATE];

+#ifdef CONFIG_CMA
 static struct cma *hugetlb_cma[MAX_NUMNODES];
+#endif
+static unsigned long hugetlb_cma_size __initdata;

 /*
  * Minimum page order among possible hugepage sizes, set to a proper value
···
	 * If the page isn't allocated using the cma allocator,
	 * cma_release() returns false.
	 */
-	if (IS_ENABLED(CONFIG_CMA) &&
-	    cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
+#ifdef CONFIG_CMA
+	if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
		return;
+#endif

	free_contig_range(page_to_pfn(page), 1 << order);
 }
···
 {
	unsigned long nr_pages = 1UL << huge_page_order(h);

-	if (IS_ENABLED(CONFIG_CMA)) {
+#ifdef CONFIG_CMA
+	{
		struct page *page;
		int node;
···
			return page;
		}
	}
+#endif

	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
 }
···
	/* Use first found vma */
	pgoff_start = page_to_pgoff(hpage);
-	pgoff_end = pgoff_start + hpage_nr_pages(hpage) - 1;
+	pgoff_end = pgoff_start + pages_per_huge_page(page_hstate(hpage)) - 1;
	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
			pgoff_start, pgoff_end) {
		struct vm_area_struct *vma = avc->vma;
···
	for (i = 0; i < h->max_huge_pages; ++i) {
		if (hstate_is_gigantic(h)) {
-			if (IS_ENABLED(CONFIG_CMA) && hugetlb_cma[0]) {
+			if (hugetlb_cma_size) {
				pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation\n");
				break;
			}
···
 }

 #ifdef CONFIG_CMA
-static unsigned long hugetlb_cma_size __initdata;
 static bool cma_reserve_called __initdata;

 static int __init cmdline_parse_hugetlb_cma(char *p)
mm/memcontrol.c

···
	if (!mem_cgroup_is_root(mc.to))
		page_counter_uncharge(&mc.to->memory, mc.moved_swap);

-	mem_cgroup_id_get_many(mc.to, mc.moved_swap);
	css_put_many(&mc.to->css, mc.moved_swap);

	mc.moved_swap = 0;
···
			ent = target.ent;
			if (!mem_cgroup_move_swap_account(ent, mc.from, mc.to)) {
				mc.precharge--;
-				/* we fixup refcnts and charges later. */
+				mem_cgroup_id_get_many(mc.to, 1);
+				/* we fixup other refcnts and charges later. */
				mc.moved_swap++;
			}
			break;
···
	{ },	/* terminate */
 };

+/*
+ * If mem_cgroup_swap_init() is implemented as a subsys_initcall()
+ * instead of a core_initcall(), this could mean cgroup_memory_noswap still
+ * remains set to false even when memcg is disabled via "cgroup_disable=memory"
+ * boot parameter. This may result in premature OOPS inside
+ * mem_cgroup_get_nr_swap_pages() function in corner cases.
+ */
 static int __init mem_cgroup_swap_init(void)
 {
	/* No memory control -> no swap control */
···
	return 0;
 }
-subsys_initcall(mem_cgroup_swap_init);
+core_initcall(mem_cgroup_swap_init);

 #endif /* CONFIG_MEMCG_SWAP */
+1-1
mm/memory.c
···
	return insert_pages(vma, addr, pages, num, vma->vm_page_prot);
 #else
	unsigned long idx = 0, pgcount = *num;
-	int err;
+	int err = -EINVAL;

	for (; idx < pgcount; ++idx) {
		err = vm_insert_page(vma, addr + (PAGE_SIZE * idx), pages[idx]);
+1-12
mm/migrate.c
···
 }

 /*
- * gcc 4.7 and 4.8 on arm get an ICEs when inlining unmap_and_move(). Work
- * around it.
- */
-#if defined(CONFIG_ARM) && \
-	defined(GCC_VERSION) && GCC_VERSION < 40900 && GCC_VERSION >= 40700
-#define ICE_noinline noinline
-#else
-#define ICE_noinline
-#endif
-
-/*
  * Obtain the lock on page, remove all ptes and migrate the page
  * to the newly allocated page in newpage.
  */
-static ICE_noinline int unmap_and_move(new_page_t get_new_page,
+static int unmap_and_move(new_page_t get_new_page,
				   free_page_t put_new_page,
				   unsigned long private, struct page *page,
				   int force, enum migrate_mode mode,
+14-2
mm/mmap.c
···
  * Create a list of vma's touched by the unmap, removing them from the mm's
  * vma list as we go..
  */
-static void
+static bool
 detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
	struct vm_area_struct *prev, unsigned long end)
 {
···
	/* Kill the cache */
	vmacache_invalidate(mm);
+
+	/*
+	 * Do not downgrade mmap_lock if we are next to VM_GROWSDOWN or
+	 * VM_GROWSUP VMA. Such VMAs can change their size under
+	 * down_read(mmap_lock) and collide with the VMA we are about to unmap.
+	 */
+	if (vma && (vma->vm_flags & VM_GROWSDOWN))
+		return false;
+	if (prev && (prev->vm_flags & VM_GROWSUP))
+		return false;
+	return true;
 }

 /*
···
	}

	/* Detach vmas from rbtree */
-	detach_vmas_to_be_unmapped(mm, vma, prev, end);
+	if (!detach_vmas_to_be_unmapped(mm, vma, prev, end))
+		downgrade = false;

	if (downgrade)
		mmap_write_downgrade(mm);
+21-2
mm/mremap.c
···
	/*
	 * The destination pmd shouldn't be established, free_pgtables()
-	 * should have release it.
+	 * should have released it.
+	 *
+	 * However, there's a case during execve() where we use mremap
+	 * to move the initial stack, and in that case the target area
+	 * may overlap the source area (always moving down).
+	 *
+	 * If everything is PMD-aligned, that works fine, as moving
+	 * each pmd down will clear the source pmd. But if we first
+	 * have a few 4kB-only pages that get moved down, and then
+	 * hit the "now the rest is PMD-aligned, let's do everything
+	 * one pmd at a time", we will still have the old (now empty
+	 * of any 4kB pages, but still there) PMD in the page table
+	 * tree.
+	 *
+	 * Warn on it once - because we really should try to figure
+	 * out how to do this better - but then say "I won't move
+	 * this pmd".
+	 *
+	 * One alternative might be to just unmap the target pmd at
+	 * this point, and verify that it really is empty. We'll see.
	 */
-	if (WARN_ON(!pmd_none(*new_pmd)))
+	if (WARN_ON_ONCE(!pmd_none(*new_pmd)))
		return false;

	/*
+1-1
mm/page_alloc.c
···
 * Initialise min_free_kbytes.
 *
 * For small machines we want it small (128k min). For large machines
- * we want it large (64MB max). But it is not linear, because network
+ * we want it large (256MB max). But it is not linear, because network
 * bandwidth does not increase linearly with machine size. We use
 *
 *	min_free_kbytes = 4 * sqrt(lowmem_kbytes), for better accuracy:
net/core/dev_addr_lists.c

···
	if (to->addr_len != from->addr_len)
		return;

+	/* netif_addr_lock_bh() uses lockdep subclass 0, this is okay for two
+	 * reasons:
+	 * 1) This is always called without any addr_list_lock, so as the
+	 *    outermost one here, it must be 0.
+	 * 2) This is called by some callers after unlinking the upper device,
+	 *    so the dev->lower_level becomes 1 again.
+	 * Therefore, the subclass for 'from' is 0, for 'to' is either 1 or
+	 * larger.
+	 */
	netif_addr_lock_bh(from);
	netif_addr_lock_nested(to);
	__hw_addr_unsync(&to->uc, &from->uc, to->addr_len);
···
	if (to->addr_len != from->addr_len)
		return;

+	/* See the above comments inside dev_uc_unsync(). */
	netif_addr_lock_bh(from);
	netif_addr_lock_nested(to);
	__hw_addr_unsync(&to->mc, &from->mc, to->addr_len);
+7-3
net/core/filter.c
···
 {
	unsigned int iphdr_len;

-	if (skb->protocol == cpu_to_be16(ETH_P_IP))
+	switch (skb_protocol(skb, true)) {
+	case cpu_to_be16(ETH_P_IP):
		iphdr_len = sizeof(struct iphdr);
-	else if (skb->protocol == cpu_to_be16(ETH_P_IPV6))
+		break;
+	case cpu_to_be16(ETH_P_IPV6):
		iphdr_len = sizeof(struct ipv6hdr);
-	else
+		break;
+	default:
		return 0;
+	}

	if (skb_headlen(skb) < iphdr_len)
		return 0;
+12-20
net/core/flow_dissector.c
···
 EXPORT_SYMBOL(skb_flow_dissector_init);

 #ifdef CONFIG_BPF_SYSCALL
-int flow_dissector_bpf_prog_attach(struct net *net, struct bpf_prog *prog)
+int flow_dissector_bpf_prog_attach_check(struct net *net,
+					 struct bpf_prog *prog)
 {
	enum netns_bpf_attach_type type = NETNS_BPF_FLOW_DISSECTOR;
-	struct bpf_prog *attached;

	if (net == &init_net) {
		/* BPF flow dissector in the root namespace overrides
···
		for_each_net(ns) {
			if (ns == &init_net)
				continue;
-			if (rcu_access_pointer(ns->bpf.progs[type]))
+			if (rcu_access_pointer(ns->bpf.run_array[type]))
				return -EEXIST;
		}
	} else {
		/* Make sure root flow dissector is not attached
		 * when attaching to the non-root namespace.
		 */
-		if (rcu_access_pointer(init_net.bpf.progs[type]))
+		if (rcu_access_pointer(init_net.bpf.run_array[type]))
			return -EEXIST;
	}

-	attached = rcu_dereference_protected(net->bpf.progs[type],
-					     lockdep_is_held(&netns_bpf_mutex));
-	if (attached == prog)
-		/* The same program cannot be attached twice */
-		return -EINVAL;
-
-	rcu_assign_pointer(net->bpf.progs[type], prog);
-	if (attached)
-		bpf_prog_put(attached);
	return 0;
 }
 #endif /* CONFIG_BPF_SYSCALL */
···
	struct flow_dissector_key_addrs *key_addrs;
	struct flow_dissector_key_tags *key_tags;
	struct flow_dissector_key_vlan *key_vlan;
-	struct bpf_prog *attached = NULL;
	enum flow_dissect_ret fdret;
	enum flow_dissector_key_id dissector_vlan = FLOW_DISSECTOR_KEY_MAX;
	bool mpls_el = false;
···
	WARN_ON_ONCE(!net);
	if (net) {
		enum netns_bpf_attach_type type = NETNS_BPF_FLOW_DISSECTOR;
+		struct bpf_prog_array *run_array;

		rcu_read_lock();
-		attached = rcu_dereference(init_net.bpf.progs[type]);
+		run_array = rcu_dereference(init_net.bpf.run_array[type]);
+		if (!run_array)
+			run_array = rcu_dereference(net->bpf.run_array[type]);

-		if (!attached)
-			attached = rcu_dereference(net->bpf.progs[type]);
-
-		if (attached) {
+		if (run_array) {
			struct bpf_flow_keys flow_keys;
			struct bpf_flow_dissector ctx = {
				.flow_keys = &flow_keys,
···
				.data_end = data + hlen,
			};
			__be16 n_proto = proto;
+			struct bpf_prog *prog;

			if (skb) {
				ctx.skb = skb;
···
				n_proto = skb->protocol;
			}

-			ret = bpf_flow_dissect(attached, &ctx, n_proto, nhoff,
+			prog = READ_ONCE(run_array->items[0].prog);
+			ret = bpf_flow_dissect(prog, &ctx, n_proto, nhoff,
					       hlen, flags);
			__skb_flow_bpf_to_target(&flow_keys, flow_dissector,
						 target_container);
+1
net/core/flow_offload.c
···
 #include <net/flow_offload.h>
 #include <linux/rtnetlink.h>
 #include <linux/mutex.h>
+#include <linux/rhashtable.h>

 struct flow_rule *flow_rule_alloc(unsigned int num_actions)
 {
net/core/rtnetlink.c

···
	 */
	if (err < 0) {
		/* If device is not registered at all, free it now */
-		if (dev->reg_state == NETREG_UNINITIALIZED)
+		if (dev->reg_state == NETREG_UNINITIALIZED ||
+		    dev->reg_state == NETREG_UNREGISTERED)
			free_netdev(dev);
		goto out;
	}
+15-8
net/core/skmsg.c
···
	return container_of(parser, struct sk_psock, parser);
 }

-static void sk_psock_skb_redirect(struct sk_psock *psock, struct sk_buff *skb)
+static void sk_psock_skb_redirect(struct sk_buff *skb)
 {
	struct sk_psock *psock_other;
	struct sock *sk_other;
···
	}
 }

-static void sk_psock_tls_verdict_apply(struct sk_psock *psock,
-				       struct sk_buff *skb, int verdict)
+static void sk_psock_tls_verdict_apply(struct sk_buff *skb, int verdict)
 {
	switch (verdict) {
	case __SK_REDIRECT:
-		sk_psock_skb_redirect(psock, skb);
+		sk_psock_skb_redirect(skb);
		break;
	case __SK_PASS:
	case __SK_DROP:
···
		ret = sk_psock_bpf_run(psock, prog, skb);
		ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
	}
+	sk_psock_tls_verdict_apply(skb, ret);
	rcu_read_unlock();
-	sk_psock_tls_verdict_apply(psock, skb, ret);
	return ret;
 }
 EXPORT_SYMBOL_GPL(sk_psock_tls_strp_read);
···
		}
		goto out_free;
	case __SK_REDIRECT:
-		sk_psock_skb_redirect(psock, skb);
+		sk_psock_skb_redirect(skb);
		break;
	case __SK_DROP:
		/* fall-through */
···
 static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb)
 {
-	struct sk_psock *psock = sk_psock_from_strp(strp);
+	struct sk_psock *psock;
	struct bpf_prog *prog;
	int ret = __SK_DROP;
+	struct sock *sk;

	rcu_read_lock();
+	sk = strp->sk;
+	psock = sk_psock(sk);
+	if (unlikely(!psock)) {
+		kfree_skb(skb);
+		goto out;
+	}
	prog = READ_ONCE(psock->progs.skb_verdict);
	if (likely(prog)) {
		skb_orphan(skb);
···
		ret = sk_psock_bpf_run(psock, prog, skb);
		ret = sk_psock_map_verd(ret, tcp_skb_bpf_redirect_fetch(skb));
	}
-	rcu_read_unlock();
	sk_psock_verdict_apply(psock, skb, ret);
+out:
+	rcu_read_unlock();
 }

 static int sk_psock_strp_read_done(struct strparser *strp, int err)
+1-1
net/core/sock.c
···
		/* sk->sk_memcg will be populated at accept() time */
		newsk->sk_memcg = NULL;

-		cgroup_sk_alloc(&newsk->sk_cgrp_data);
+		cgroup_sk_clone(&newsk->sk_cgrp_data);

		rcu_read_lock();
		filter = rcu_dereference(sk->sk_filter);
+48-5
net/core/sock_map.c
···
	struct fd f;
	int ret;

+	if (attr->attach_flags || attr->replace_bpf_fd)
+		return -EINVAL;
+
	f = fdget(ufd);
	map = __bpf_map_get(f);
	if (IS_ERR(map))
		return PTR_ERR(map);
-	ret = sock_map_prog_update(map, prog, attr->attach_type);
+	ret = sock_map_prog_update(map, prog, NULL, attr->attach_type);
+	fdput(f);
+	return ret;
+}
+
+int sock_map_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)
+{
+	u32 ufd = attr->target_fd;
+	struct bpf_prog *prog;
+	struct bpf_map *map;
+	struct fd f;
+	int ret;
+
+	if (attr->attach_flags || attr->replace_bpf_fd)
+		return -EINVAL;
+
+	f = fdget(ufd);
+	map = __bpf_map_get(f);
+	if (IS_ERR(map))
+		return PTR_ERR(map);
+
+	prog = bpf_prog_get(attr->attach_bpf_fd);
+	if (IS_ERR(prog)) {
+		ret = PTR_ERR(prog);
+		goto put_map;
+	}
+
+	if (prog->type != ptype) {
+		ret = -EINVAL;
+		goto put_prog;
+	}
+
+	ret = sock_map_prog_update(map, NULL, prog, attr->attach_type);
+put_prog:
+	bpf_prog_put(prog);
+put_map:
	fdput(f);
	return ret;
 }
···
 }

 int sock_map_prog_update(struct bpf_map *map, struct bpf_prog *prog,
-			 u32 which)
+			 struct bpf_prog *old, u32 which)
 {
	struct sk_psock_progs *progs = sock_map_progs(map);
+	struct bpf_prog **pprog;

	if (!progs)
		return -EOPNOTSUPP;

	switch (which) {
	case BPF_SK_MSG_VERDICT:
-		psock_set_prog(&progs->msg_parser, prog);
+		pprog = &progs->msg_parser;
		break;
	case BPF_SK_SKB_STREAM_PARSER:
-		psock_set_prog(&progs->skb_parser, prog);
+		pprog = &progs->skb_parser;
		break;
	case BPF_SK_SKB_STREAM_VERDICT:
-		psock_set_prog(&progs->skb_verdict, prog);
+		pprog = &progs->skb_verdict;
		break;
	default:
		return -EOPNOTSUPP;
	}

+	if (old)
+		return psock_replace_prog(pprog, prog, old);
+
+	psock_set_prog(pprog, prog);
	return 0;
 }
net/core/sysctl_net_core.c

···
	ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos);
	if (write && !ret) {
		if (jit_enable < 2 ||
-		    (jit_enable == 2 && bpf_dump_raw_ok())) {
+		    (jit_enable == 2 && bpf_dump_raw_ok(current_cred()))) {
			*(int *)table->data = jit_enable;
			if (jit_enable == 2)
				pr_warn("bpf_jit_enable = 2 was set! NEVER use this in production, only for JIT debugging!\n");
+13-14
net/ethtool/netlink.c
···
 }

 static int ethnl_default_dump_one(struct sk_buff *skb, struct net_device *dev,
-				  const struct ethnl_dump_ctx *ctx)
+				  const struct ethnl_dump_ctx *ctx,
+				  struct netlink_callback *cb)
 {
+	void *ehdr;
	int ret;

+	ehdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+			   &ethtool_genl_family, 0, ctx->ops->reply_cmd);
+	if (!ehdr)
+		return -EMSGSIZE;

	ethnl_init_reply_data(ctx->reply_data, ctx->ops, dev);
	rtnl_lock();
···
	if (ctx->ops->cleanup_data)
		ctx->ops->cleanup_data(ctx->reply_data);
	ctx->reply_data->dev = NULL;
+	if (ret < 0)
+		genlmsg_cancel(skb, ehdr);
+	else
+		genlmsg_end(skb, ehdr);
	return ret;
 }
···
	int s_idx = ctx->pos_idx;
	int h, idx = 0;
	int ret = 0;
-	void *ehdr;

	rtnl_lock();
	for (h = ctx->pos_hash; h < NETDEV_HASHENTRIES; h++, s_idx = 0) {
···
			dev_hold(dev);
			rtnl_unlock();

-			ehdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid,
-					   cb->nlh->nlmsg_seq,
-					   &ethtool_genl_family, 0,
-					   ctx->ops->reply_cmd);
-			if (!ehdr) {
-				dev_put(dev);
-				ret = -EMSGSIZE;
-				goto out;
-			}
-			ret = ethnl_default_dump_one(skb, dev, ctx);
+			ret = ethnl_default_dump_one(skb, dev, ctx, cb);
			dev_put(dev);
			if (ret < 0) {
-				genlmsg_cancel(skb, ehdr);
				if (ret == -EOPNOTSUPP)
					goto lock_and_cont;
				if (likely(skb->len))
					ret = skb->len;
				goto out;
			}
-			genlmsg_end(skb, ehdr);
 lock_and_cont:
			rtnl_lock();
			if (net->dev_base_seq != seq) {
+7-4
net/hsr/hsr_device.c
···
		     unsigned char multicast_spec, u8 protocol_version,
		     struct netlink_ext_ack *extack)
 {
+	bool unregister = false;
	struct hsr_priv *hsr;
	int res;
···
	if (res)
		goto err_unregister;

+	unregister = true;
+
	res = hsr_add_port(hsr, slave[0], HSR_PT_SLAVE_A, extack);
	if (res)
-		goto err_add_slaves;
+		goto err_unregister;

	res = hsr_add_port(hsr, slave[1], HSR_PT_SLAVE_B, extack);
	if (res)
-		goto err_add_slaves;
+		goto err_unregister;

	hsr_debugfs_init(hsr, hsr_dev);
	mod_timer(&hsr->prune_timer, jiffies + msecs_to_jiffies(PRUNE_PERIOD));

	return 0;

-err_add_slaves:
-	unregister_netdevice(hsr_dev);
 err_unregister:
	hsr_del_ports(hsr);
 err_add_master:
	hsr_del_self_node(hsr);

+	if (unregister)
+		unregister_netdevice(hsr_dev);
	return res;
 }
+13-5
net/hsr/hsr_forward.c
···
	return skb_clone(frame->skb_std, GFP_ATOMIC);
 }

-static void hsr_fill_tag(struct sk_buff *skb, struct hsr_frame_info *frame,
-			 struct hsr_port *port, u8 proto_version)
+static struct sk_buff *hsr_fill_tag(struct sk_buff *skb,
+				    struct hsr_frame_info *frame,
+				    struct hsr_port *port, u8 proto_version)
 {
	struct hsr_ethhdr *hsr_ethhdr;
	int lane_id;
	int lsdu_size;
+
+	/* pad to minimum packet size which is 60 + 6 (HSR tag) */
+	if (skb_put_padto(skb, ETH_ZLEN + HSR_HLEN))
+		return NULL;

	if (port->type == HSR_PT_SLAVE_A)
		lane_id = 0;
···
	hsr_ethhdr->hsr_tag.encap_proto = hsr_ethhdr->ethhdr.h_proto;
	hsr_ethhdr->ethhdr.h_proto = htons(proto_version ?
			ETH_P_HSR : ETH_P_PRP);
+
+	return skb;
 }

 static struct sk_buff *create_tagged_skb(struct sk_buff *skb_o,
···
	memmove(dst, src, movelen);
	skb_reset_mac_header(skb);

-	hsr_fill_tag(skb, frame, port, port->hsr->prot_version);
-
-	return skb;
+	/* skb_put_padto free skb on error and hsr_fill_tag returns NULL in
+	 * that case
+	 */
+	return hsr_fill_tag(skb, frame, port, port->hsr->prot_version);
 }

 /* If the original frame was an HSR tagged frame, just clone it to be sent
+2-1
net/hsr/hsr_framereg.c
···
	if (port->type != node_dst->addr_B_port)
		return;

-	ether_addr_copy(eth_hdr(skb)->h_dest, node_dst->macaddress_B);
+	if (is_valid_ether_addr(node_dst->macaddress_B))
+		ether_addr_copy(eth_hdr(skb)->h_dest, node_dst->macaddress_B);
 }

 void hsr_register_frame_in(struct hsr_node *node, struct hsr_port *port,
net/ipv4/tcp.c

···
	tp->window_clamp = 0;
	tp->delivered = 0;
	tp->delivered_ce = 0;
+	if (icsk->icsk_ca_ops->release)
+		icsk->icsk_ca_ops->release(sk);
+	memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
	tcp_set_ca_state(sk, TCP_CA_Open);
	tp->is_sack_reneg = 0;
	tcp_clear_retrans(tp);
···
 #ifdef CONFIG_TCP_MD5SIG
	case TCP_MD5SIG:
	case TCP_MD5SIG_EXT:
-		if ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))
-			err = tp->af_specific->md5_parse(sk, optname, optval, optlen);
-		else
-			err = -EINVAL;
+		err = tp->af_specific->md5_parse(sk, optname, optval, optlen);
		break;
 #endif
	case TCP_USER_TIMEOUT:
···
 int tcp_md5_hash_key(struct tcp_md5sig_pool *hp, const struct tcp_md5sig_key *key)
 {
+	u8 keylen = READ_ONCE(key->keylen); /* paired with WRITE_ONCE() in tcp_md5_do_add */
	struct scatterlist sg;

-	sg_init_one(&sg, key->key, key->keylen);
-	ahash_request_set_crypt(hp->md5_req, &sg, NULL, key->keylen);
-	return crypto_ahash_update(hp->md5_req);
+	sg_init_one(&sg, key->key, keylen);
+	ahash_request_set_crypt(hp->md5_req, &sg, NULL, keylen);
+
+	/* We use data_race() because tcp_md5_do_add() might change key->key under us */
+	return data_race(crypto_ahash_update(hp->md5_req));
 }
 EXPORT_SYMBOL(tcp_md5_hash_key);
+1-1
net/ipv4/tcp_cong.c
···197197 icsk->icsk_ca_setsockopt = 1;198198 memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));199199200200- if (sk->sk_state != TCP_CLOSE)200200+ if (!((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)))201201 tcp_init_congestion_control(sk);202202}203203
+8-5
net/ipv4/tcp_input.c
···34883488 }34893489}3490349034913491-/* This routine deals with acks during a TLP episode.34923492- * We mark the end of a TLP episode on receiving TLP dupack or when34933493- * ack is after tlp_high_seq.34943494- * Ref: loss detection algorithm in draft-dukkipati-tcpm-tcp-loss-probe.34913491+/* This routine deals with acks during a TLP episode and ends an episode by34923492+ * resetting tlp_high_seq. Ref: TLP algorithm in draft-ietf-tcpm-rack34953493 */34963494static void tcp_process_tlp_ack(struct sock *sk, u32 ack, int flag)34973495{···34983500 if (before(ack, tp->tlp_high_seq))34993501 return;3500350235013501- if (flag & FLAG_DSACKING_ACK) {35033503+ if (!tp->tlp_retrans) {35043504+ /* TLP of new data has been acknowledged */35053505+ tp->tlp_high_seq = 0;35063506+ } else if (flag & FLAG_DSACKING_ACK) {35023507 /* This DSACK means original and TLP probe arrived; no loss */35033508 tp->tlp_high_seq = 0;35043509 } else if (after(ack, tp->tlp_high_seq)) {···4583458245844583 if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {45854584 NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPOFODROP);45854585+ sk->sk_data_ready(sk);45864586 tcp_drop(sk, skb);45874587 return;45884588 }···48304828 sk_forced_mem_schedule(sk, skb->truesize);48314829 else if (tcp_try_rmem_schedule(sk, skb, skb->truesize)) {48324830 NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRCVQDROP);48314831+ sk->sk_data_ready(sk);48334832 goto drop;48344833 }48354834
+16-4
net/ipv4/tcp_ipv4.c
···1111111111121112 key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index);11131113 if (key) {11141114- /* Pre-existing entry - just update that one. */11151115- memcpy(key->key, newkey, newkeylen);11161116- key->keylen = newkeylen;11141114+ /* Pre-existing entry - just update that one.11151115+ * Note that the key might be used concurrently.11161116+ * data_race() is telling kcsan that we do not care of11171117+ * key mismatches, since changing MD5 key on live flows11181118+ * can lead to packet drops.11191119+ */11201120+ data_race(memcpy(key->key, newkey, newkeylen));11211121+11221122+ /* Pairs with READ_ONCE() in tcp_md5_hash_key().11231123+ * Also note that a reader could catch new key->keylen value11241124+ * but old key->key[], this is the reason we use __GFP_ZERO11251125+ * at sock_kmalloc() time below these lines.11261126+ */11271127+ WRITE_ONCE(key->keylen, newkeylen);11281128+11171129 return 0;11181130 }11191131···11411129 rcu_assign_pointer(tp->md5sig_info, md5sig);11421130 }1143113111441144- key = sock_kmalloc(sk, sizeof(*key), gfp);11321132+ key = sock_kmalloc(sk, sizeof(*key), gfp | __GFP_ZERO);11451133 if (!key)11461134 return -ENOMEM;11471135 if (!tcp_alloc_md5sig_pool()) {
+13-8
net/ipv4/tcp_output.c
···700700 unsigned int mss, struct sk_buff *skb,701701 struct tcp_out_options *opts,702702 const struct tcp_md5sig_key *md5,703703- struct tcp_fastopen_cookie *foc)703703+ struct tcp_fastopen_cookie *foc,704704+ enum tcp_synack_type synack_type)704705{705706 struct inet_request_sock *ireq = inet_rsk(req);706707 unsigned int remaining = MAX_TCP_OPTION_SPACE;···716715 * rather than TS in order to fit in better with old,717716 * buggy kernels, but that was deemed to be unnecessary.718717 */719719- ireq->tstamp_ok &= !ireq->sack_ok;718718+ if (synack_type != TCP_SYNACK_COOKIE)719719+ ireq->tstamp_ok &= !ireq->sack_ok;720720 }721721#endif722722···26242622 int pcount;26252623 int mss = tcp_current_mss(sk);2626262426252625+ /* At most one outstanding TLP */26262626+ if (tp->tlp_high_seq)26272627+ goto rearm_timer;26282628+26292629+ tp->tlp_retrans = 0;26272630 skb = tcp_send_head(sk);26282631 if (skb && tcp_snd_wnd_test(tp, skb, mss)) {26292632 pcount = tp->packets_out;···26452638 inet_csk(sk)->icsk_pending = 0;26462639 return;26472640 }26482648-26492649- /* At most one outstanding TLP retransmission. */26502650- if (tp->tlp_high_seq)26512651- goto rearm_timer;2652264126532642 if (skb_still_in_host_queue(sk, skb))26542643 goto rearm_timer;···26672664 if (__tcp_retransmit_skb(sk, skb, 1))26682665 goto rearm_timer;2669266626672667+ tp->tlp_retrans = 1;26682668+26692669+probe_sent:26702670 /* Record snd_nxt for loss detection. */26712671 tp->tlp_high_seq = tp->snd_nxt;2672267226732673-probe_sent:26742673 NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPLOSSPROBES);26752674 /* Reset s.t. tcp_rearm_rto will restart timer from now */26762675 inet_csk(sk)->icsk_pending = 0;···33993394#endif34003395 skb_set_hash(skb, tcp_rsk(req)->txhash, PKT_HASH_TYPE_L4);34013396 tcp_header_size = tcp_synack_options(sk, req, mss, skb, &opts, md5,34023402- foc) + sizeof(*th);33973397+ foc, synack_type) + sizeof(*th);3403339834043399 skb_push(skb, tcp_header_size);34053400 skb_reset_transport_header(skb);
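The tcp_output.c and tcp_input.c hunks above add a tlp_retrans flag so an ACK can distinguish a loss probe that carried new data (episode ends immediately) from one that retransmitted old data (a DSACK is still required to rule out loss). A simplified, hypothetical model of that bookkeeping — the kernel compares sequence numbers with before()/after() to handle wraparound; plain `<` keeps this sketch short:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct tlp_state {
	uint32_t tlp_high_seq;	/* 0 = no TLP episode in progress */
	bool tlp_retrans;	/* probe carried retransmitted data */
};

/* Returns true when the ACK ends the episode without counting a loss. */
static bool tlp_process_ack(struct tlp_state *t, uint32_t ack, bool dsack)
{
	if (!t->tlp_high_seq || ack < t->tlp_high_seq)
		return false;
	if (!t->tlp_retrans || dsack) {
		/* TLP of new data was acked, or the DSACK shows both the
		 * original and the probe arrived: no loss. */
		t->tlp_high_seq = 0;
		return true;
	}
	return false;	/* retransmitted probe: still need a DSACK */
}
```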
+10-7
net/ipv4/udp.c
···416416 struct udp_hslot *hslot2,417417 struct sk_buff *skb)418418{419419- struct sock *sk, *result;419419+ struct sock *sk, *result, *reuseport_result;420420 int score, badness;421421 u32 hash = 0;422422···426426 score = compute_score(sk, net, saddr, sport,427427 daddr, hnum, dif, sdif);428428 if (score > badness) {429429+ reuseport_result = NULL;430430+429431 if (sk->sk_reuseport &&430432 sk->sk_state != TCP_ESTABLISHED) {431433 hash = udp_ehashfn(net, daddr, hnum,432434 saddr, sport);433433- result = reuseport_select_sock(sk, hash, skb,434434- sizeof(struct udphdr));435435- if (result && !reuseport_has_conns(sk, false))436436- return result;435435+ reuseport_result = reuseport_select_sock(sk, hash, skb,436436+ sizeof(struct udphdr));437437+ if (reuseport_result && !reuseport_has_conns(sk, false))438438+ return reuseport_result;437439 }440440+441441+ result = reuseport_result ? : sk;438442 badness = score;439439- result = sk;440443 }441444 }442445 return result;···20542051 /*20552052 * UDP-Lite specific tests, ignored on UDP sockets20562053 */20572057- if ((is_udplite & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) {20542054+ if ((up->pcflag & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) {2058205520592056 /*20602057 * MIB statistics other than incrementing the error count are
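Both udp.c lookup hunks above change the best-score bookkeeping: when reuseport_select_sock() returns a socket but the scan must continue (connected reuseport sockets), the remembered result has to be the reuseport pick rather than the candidate itself — the patch spells this `result = reuseport_result ? : sk;` using GCC's binary `?:` extension. A hedged, flattened sketch of that scan (hypothetical `cand` table standing in for the hash slot):

```c
#include <assert.h>
#include <stddef.h>

/* reuseport_pick is the id returned by a stand-in for
 * reuseport_select_sock() for that candidate, 0 when there is none. */
struct cand {
	int id;
	int score;
	int reuseport_pick;
};

/* Track the best score seen so far; when the best candidate produced a
 * reuseport pick, remember the pick instead of the candidate itself. */
static int best_lookup(const struct cand *tbl, int n)
{
	int badness = 0, result = 0;

	for (int i = 0; i < n; i++) {
		if (tbl[i].score > badness) {
			int pick = tbl[i].reuseport_pick;

			result = pick ? pick : tbl[i].id;
			badness = tbl[i].score;
		}
	}
	return result;
}
```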
···15621562static int __net_init ip6gre_init_net(struct net *net)15631563{15641564 struct ip6gre_net *ign = net_generic(net, ip6gre_net_id);15651565+ struct net_device *ndev;15651566 int err;1566156715671568 if (!net_has_fallback_tunnels(net))15681569 return 0;15691569- ign->fb_tunnel_dev = alloc_netdev(sizeof(struct ip6_tnl), "ip6gre0",15701570- NET_NAME_UNKNOWN,15711571- ip6gre_tunnel_setup);15721572- if (!ign->fb_tunnel_dev) {15701570+ ndev = alloc_netdev(sizeof(struct ip6_tnl), "ip6gre0",15711571+ NET_NAME_UNKNOWN, ip6gre_tunnel_setup);15721572+ if (!ndev) {15731573 err = -ENOMEM;15741574 goto err_alloc_dev;15751575 }15761576+ ign->fb_tunnel_dev = ndev;15761577 dev_net_set(ign->fb_tunnel_dev, net);15771578 /* FB netdevice is special: we have one, and only one per netns.15781579 * Allowing to move it to another netns is clearly unsafe.···15931592 return 0;1594159315951594err_reg_dev:15961596- free_netdev(ign->fb_tunnel_dev);15951595+ free_netdev(ndev);15971596err_alloc_dev:15981597 return err;15991598}
net/ipv6/route.c
···431431 struct fib6_info *sibling, *next_sibling;432432 struct fib6_info *match = res->f6i;433433434434- if ((!match->fib6_nsiblings && !match->nh) || have_oif_match)434434+ if (!match->nh && (!match->fib6_nsiblings || have_oif_match))435435 goto out;436436+437437+ if (match->nh && have_oif_match && res->nh)438438+ return;436439437440 /* We might have already computed the hash for ICMPv6 errors. In such438441 * case it will always be non-zero. Otherwise now is the time to do it.···34053402 if ((flags & RTF_REJECT) ||34063403 (dev && (dev->flags & IFF_LOOPBACK) &&34073404 !(addr_type & IPV6_ADDR_LOOPBACK) &&34083408- !(flags & RTF_LOCAL)))34053405+ !(flags & (RTF_ANYCAST | RTF_LOCAL))))34093406 return true;3410340734113408 return false;
net/ipv6/udp.c
···148148 int dif, int sdif, struct udp_hslot *hslot2,149149 struct sk_buff *skb)150150{151151- struct sock *sk, *result;151151+ struct sock *sk, *result, *reuseport_result;152152 int score, badness;153153 u32 hash = 0;154154···158158 score = compute_score(sk, net, saddr, sport,159159 daddr, hnum, dif, sdif);160160 if (score > badness) {161161+ reuseport_result = NULL;162162+161163 if (sk->sk_reuseport &&162164 sk->sk_state != TCP_ESTABLISHED) {163165 hash = udp6_ehashfn(net, daddr, hnum,164166 saddr, sport);165167166166- result = reuseport_select_sock(sk, hash, skb,167167- sizeof(struct udphdr));168168- if (result && !reuseport_has_conns(sk, false))169169- return result;168168+ reuseport_result = reuseport_select_sock(sk, hash, skb,169169+ sizeof(struct udphdr));170170+ if (reuseport_result && !reuseport_has_conns(sk, false))171171+ return reuseport_result;170172 }171171- result = sk;173173+174174+ result = reuseport_result ? : sk;172175 badness = score;173176 }174177 }···646643 /*647644 * UDP-Lite specific tests, ignored on UDP sockets (see net/ipv4/udp.c).648645 */649649- if ((is_udplite & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) {646646+ if ((up->pcflag & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) {650647651648 if (up->pcrlen == 0) { /* full coverage was set */652649 net_dbg_ratelimited("UDPLITE6: partial coverage %d while full coverage %d requested\n",
+1-4
net/l2tp/l2tp_core.c
···1028102810291029 /* Queue the packet to IP for output */10301030 skb->ignore_df = 1;10311031+ skb_dst_drop(skb);10311032#if IS_ENABLED(CONFIG_IPV6)10321033 if (l2tp_sk_is_v6(tunnel->sock))10331034 error = inet6_csk_xmit(tunnel->sock, skb, NULL);···10991098 ret = NET_XMIT_DROP;11001099 goto out_unlock;11011100 }11021102-11031103- /* Get routing info from the tunnel socket */11041104- skb_dst_drop(skb);11051105- skb_dst_set(skb, sk_dst_check(sk, 0));1106110111071102 inet = inet_sk(sk);11081103 fl = &inet->cork.fl;
+7-3
net/llc/af_llc.c
···273273274274 if (!sock_flag(sk, SOCK_ZAPPED))275275 goto out;276276+ if (!addr->sllc_arphrd)277277+ addr->sllc_arphrd = ARPHRD_ETHER;278278+ if (addr->sllc_arphrd != ARPHRD_ETHER)279279+ goto out;276280 rc = -ENODEV;277281 if (sk->sk_bound_dev_if) {278282 llc->dev = dev_get_by_index(&init_net, sk->sk_bound_dev_if);···332328 if (unlikely(!sock_flag(sk, SOCK_ZAPPED) || addrlen != sizeof(*addr)))333329 goto out;334330 rc = -EAFNOSUPPORT;335335- if (unlikely(addr->sllc_family != AF_LLC))331331+ if (!addr->sllc_arphrd)332332+ addr->sllc_arphrd = ARPHRD_ETHER;333333+ if (unlikely(addr->sllc_family != AF_LLC || addr->sllc_arphrd != ARPHRD_ETHER))336334 goto out;337335 dprintk("%s: binding %02X\n", __func__, addr->sllc_sap);338336 rc = -ENODEV;···342336 if (sk->sk_bound_dev_if) {343337 llc->dev = dev_get_by_index_rcu(&init_net, sk->sk_bound_dev_if);344338 if (llc->dev) {345345- if (!addr->sllc_arphrd)346346- addr->sllc_arphrd = llc->dev->type;347339 if (is_zero_ether_addr(addr->sllc_mac))348340 memcpy(addr->sllc_mac, llc->dev->dev_addr,349341 IFHWADDRLEN);
net/mac80211/tx.c
···39963996 skb_list_walk_safe(skb, skb, next) {39973997 skb_mark_not_on_list(skb);3998399839993999+ if (skb->protocol == sdata->control_port_protocol)40004000+ ctrl_flags |= IEEE80211_TX_CTRL_SKIP_MPATH_LOOKUP;40014001+39994002 skb = ieee80211_build_hdr(sdata, skb, info_flags,40004003 sta, ctrl_flags, cookie);40014004 if (IS_ERR(skb)) {···42094206 (!sta || !test_sta_flag(sta, WLAN_STA_TDLS_PEER)))42104207 ra = sdata->u.mgd.bssid;4211420842124212- if (!is_valid_ether_addr(ra))42094209+ if (is_zero_ether_addr(ra))42134210 goto out_free;4214421142154212 multicast = is_multicast_ether_addr(ra);···53745371 return -EINVAL;5375537253765373 if (proto == sdata->control_port_protocol)53775377- ctrl_flags |= IEEE80211_TX_CTRL_PORT_CTRL_PROTO;53745374+ ctrl_flags |= IEEE80211_TX_CTRL_PORT_CTRL_PROTO |53755375+ IEEE80211_TX_CTRL_SKIP_MPATH_LOOKUP;5378537653795377 if (unencrypted)53805378 flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT;
+3-3
net/mptcp/options.c
···449449}450450451451static void mptcp_write_data_fin(struct mptcp_subflow_context *subflow,452452- struct mptcp_ext *ext)452452+ struct sk_buff *skb, struct mptcp_ext *ext)453453{454454- if (!ext->use_map) {454454+ if (!ext->use_map || !skb->len) {455455 /* RFC6824 requires a DSS mapping with specific values456456 * if DATA_FIN is set but no data payload is mapped457457 */···503503 opts->ext_copy = *mpext;504504505505 if (skb && tcp_fin && subflow->data_fin_tx_enable)506506- mptcp_write_data_fin(subflow, &opts->ext_copy);506506+ mptcp_write_data_fin(subflow, skb, &opts->ext_copy);507507 ret = true;508508 }509509
net/rds/connection.c
···905905}906906EXPORT_SYMBOL_GPL(rds_conn_path_connect_if_down);907907908908+/* Check connectivity of all paths909909+ */910910+void rds_check_all_paths(struct rds_connection *conn)911911+{912912+ int i = 0;913913+914914+ do {915915+ rds_conn_path_connect_if_down(&conn->c_path[i]);916916+ } while (++i < conn->c_npaths);917917+}918918+908919void rds_conn_connect_if_down(struct rds_connection *conn)909920{910921 WARN_ON(conn->c_trans->t_mp_capable);
···304304 /* this should be in poll */305305 sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);306306307307- if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN))307307+ if (sk->sk_shutdown & SEND_SHUTDOWN)308308 return -EPIPE;309309310310 more = msg->msg_flags & MSG_MORE;
+6-3
net/sched/act_connmark.c
···4343 tcf_lastuse_update(&ca->tcf_tm);4444 bstats_update(&ca->tcf_bstats, skb);45454646- if (skb->protocol == htons(ETH_P_IP)) {4646+ switch (skb_protocol(skb, true)) {4747+ case htons(ETH_P_IP):4748 if (skb->len < sizeof(struct iphdr))4849 goto out;49505051 proto = NFPROTO_IPV4;5151- } else if (skb->protocol == htons(ETH_P_IPV6)) {5252+ break;5353+ case htons(ETH_P_IPV6):5254 if (skb->len < sizeof(struct ipv6hdr))5355 goto out;54565557 proto = NFPROTO_IPV6;5656- } else {5858+ break;5959+ default:5760 goto out;5861 }5962
net/sched/cls_flower.c
···313313 /* skb_flow_dissect() does not set n_proto in case an unknown314314 * protocol, so do it rather here.315315 */316316- skb_key.basic.n_proto = skb->protocol;316316+ skb_key.basic.n_proto = skb_protocol(skb, false);317317 skb_flow_dissect_tunnel_info(skb, &mask->dissector, &skb_key);318318 skb_flow_dissect_ct(skb, &mask->dissector, &skb_key,319319 fl_ct_info_to_flower_map,
+1-1
net/sched/em_ipset.c
···5959 };6060 int ret, network_offset;61616262- switch (tc_skb_protocol(skb)) {6262+ switch (skb_protocol(skb, true)) {6363 case htons(ETH_P_IP):6464 state.pf = NFPROTO_IPV4;6565 if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
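The act_connmark and em_ipset changes above dispatch on skb_protocol(skb, true) instead of skb->protocol, so VLAN-tagged traffic is classified by its inner ethertype. A user-space sketch of dispatching on an ethertype carried in network byte order (the kernel puts htons() in the case labels; converting to host order here keeps the labels plain integer constants in portable C):

```c
#include <arpa/inet.h>	/* htons, ntohs */
#include <assert.h>

/* Ethertype values matching the kernel's ETH_P_IP / ETH_P_IPV6 */
#define ETH_P_IP   0x0800
#define ETH_P_IPV6 0x86DD

enum { PROTO_NONE, PROTO_IPV4, PROTO_IPV6 };

/* skb_protocol() yields a network-byte-order ethertype; classify it. */
static int classify(unsigned short proto_be)
{
	switch (ntohs(proto_be)) {
	case ETH_P_IP:
		return PROTO_IPV4;
	case ETH_P_IPV6:
		return PROTO_IPV6;
	default:
		return PROTO_NONE;
	}
}
```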
+1-1
net/sched/em_ipt.c
···212212 struct nf_hook_state state;213213 int ret;214214215215- switch (tc_skb_protocol(skb)) {215215+ switch (skb_protocol(skb, true)) {216216 case htons(ETH_P_IP):217217 if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))218218 return 0;
+1-1
net/sched/em_meta.c
···195195META_COLLECTOR(int_protocol)196196{197197 /* Let userspace take care of the byte ordering */198198- dst->value = tc_skb_protocol(skb);198198+ dst->value = skb_protocol(skb, false);199199}200200201201META_COLLECTOR(int_pkttype)
net/sctp/stream.c
···2222#include <net/sctp/sm.h>2323#include <net/sctp/stream_sched.h>24242525-/* Migrates chunks from stream queues to new stream queues if needed,2626- * but not across associations. Also, removes those chunks to streams2727- * higher than the new max.2828- */2929-static void sctp_stream_outq_migrate(struct sctp_stream *stream,3030- struct sctp_stream *new, __u16 outcnt)2525+static void sctp_stream_shrink_out(struct sctp_stream *stream, __u16 outcnt)3126{3227 struct sctp_association *asoc;3328 struct sctp_chunk *ch, *temp;3429 struct sctp_outq *outq;3535- int i;36303731 asoc = container_of(stream, struct sctp_association, stream);3832 outq = &asoc->outqueue;···50565157 sctp_chunk_free(ch);5258 }5959+}6060+6161+/* Migrates chunks from stream queues to new stream queues if needed,6262+ * but not across associations. Also, removes those chunks to streams6363+ * higher than the new max.6464+ */6565+static void sctp_stream_outq_migrate(struct sctp_stream *stream,6666+ struct sctp_stream *new, __u16 outcnt)6767+{6868+ int i;6969+7070+ if (stream->outcnt > outcnt)7171+ sctp_stream_shrink_out(stream, outcnt);53725473 if (new) {5574 /* Here we actually move the old ext stuff into the new···10441037 nums = ntohs(addstrm->number_of_streams);10451038 number = stream->outcnt - nums;1046103910471047- if (result == SCTP_STRRESET_PERFORMED)10401040+ if (result == SCTP_STRRESET_PERFORMED) {10481041 for (i = number; i < stream->outcnt; i++)10491042 SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;10501050- else10431043+ } else {10441044+ sctp_stream_shrink_out(stream, number);10511045 stream->outcnt = number;10461046+ }1052104710531048 *evp = sctp_ulpevent_make_stream_change_event(asoc, flags,10541049 0, nums, GFP_ATOMIC);
+8-4
net/smc/af_smc.c
···126126127127static void smc_restore_fallback_changes(struct smc_sock *smc)128128{129129- smc->clcsock->file->private_data = smc->sk.sk_socket;130130- smc->clcsock->file = NULL;129129+ if (smc->clcsock->file) { /* non-accepted sockets have no file yet */130130+ smc->clcsock->file->private_data = smc->sk.sk_socket;131131+ smc->clcsock->file = NULL;132132+ }131133}132134133135static int __smc_release(struct smc_sock *smc)···354352 */355353 mutex_lock(&lgr->llc_conf_mutex);356354 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {357357- if (lgr->lnk[i].state != SMC_LNK_ACTIVE)355355+ if (!smc_link_active(&lgr->lnk[i]))358356 continue;359357 rc = smcr_link_reg_rmb(&lgr->lnk[i], rmb_desc);360358 if (rc)···634632 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {635633 struct smc_link *l = &smc->conn.lgr->lnk[i];636634637637- if (l->peer_qpn == ntoh24(aclc->qpn)) {635635+ if (l->peer_qpn == ntoh24(aclc->qpn) &&636636+ !memcmp(l->peer_gid, &aclc->lcl.gid, SMC_GID_SIZE) &&637637+ !memcmp(l->peer_mac, &aclc->lcl.mac, sizeof(l->peer_mac))) {638638 link = l;639639 break;640640 }
net/smc/smc_clc.h
···2525#define SMC_CLC_V1 0x1 /* SMC version */2626#define SMC_TYPE_R 0 /* SMC-R only */2727#define SMC_TYPE_D 1 /* SMC-D only */2828+#define SMC_TYPE_N 2 /* neither SMC-R nor SMC-D */2829#define SMC_TYPE_B 3 /* SMC-R and SMC-D */2930#define CLC_WAIT_TIME (6 * HZ) /* max. wait time on clcsock */3031#define CLC_WAIT_TIME_SHORT HZ /* short wait time on clcsock */···4746#define SMC_CLC_DECL_ISMVLANERR 0x03090000 /* err to reg vlan id on ism dev */4847#define SMC_CLC_DECL_NOACTLINK 0x030a0000 /* no active smc-r link in lgr */4948#define SMC_CLC_DECL_NOSRVLINK 0x030b0000 /* SMC-R link from srv not found */4949+#define SMC_CLC_DECL_VERSMISMAT 0x030c0000 /* SMC version mismatch */5050#define SMC_CLC_DECL_SYNCERR 0x04000000 /* synchronization error */5151#define SMC_CLC_DECL_PEERDECL 0x05000000 /* peer declined during handshake */5252#define SMC_CLC_DECL_INTERR 0x09990000 /* internal error */
+41-95
net/smc/smc_core.c
···1515#include <linux/workqueue.h>1616#include <linux/wait.h>1717#include <linux/reboot.h>1818+#include <linux/mutex.h>1819#include <net/tcp.h>1920#include <net/sock.h>2021#include <rdma/ib_verbs.h>···4544static atomic_t lgr_cnt = ATOMIC_INIT(0); /* number of existing link groups */4645static DECLARE_WAIT_QUEUE_HEAD(lgrs_deleted);47464848-struct smc_ib_up_work {4949- struct work_struct work;5050- struct smc_link_group *lgr;5151- struct smc_ib_device *smcibdev;5252- u8 ibport;5353-};5454-5547static void smc_buf_free(struct smc_link_group *lgr, bool is_rmb,5648 struct smc_buf_desc *buf_desc);5749static void __smc_lgr_terminate(struct smc_link_group *lgr, bool soft);58505959-static void smc_link_up_work(struct work_struct *work);6051static void smc_link_down_work(struct work_struct *work);61526253/* return head of link group list and its lock for a given link group */···240247 if (smc_link_usable(lnk))241248 lnk->state = SMC_LNK_INACTIVE;242249 }243243- wake_up_interruptible_all(&lgr->llc_waiter);250250+ wake_up_all(&lgr->llc_msg_waiter);251251+ wake_up_all(&lgr->llc_flow_waiter);244252}245253246254static void smc_lgr_free(struct smc_link_group *lgr);···318324319325 get_device(&ini->ib_dev->ibdev->dev);320326 atomic_inc(&ini->ib_dev->lnk_cnt);321321- lnk->state = SMC_LNK_ACTIVATING;322327 lnk->link_id = smcr_next_link_id(lgr);323328 lnk->lgr = lgr;324329 lnk->link_idx = link_idx;···353360 rc = smc_wr_create_link(lnk);354361 if (rc)355362 goto destroy_qp;363363+ lnk->state = SMC_LNK_ACTIVATING;356364 return 0;357365358366destroy_qp:···444450 }445451 smc->conn.lgr = lgr;446452 spin_lock_bh(lgr_lock);447447- list_add(&lgr->list, lgr_list);453453+ list_add_tail(&lgr->list, lgr_list);448454 spin_unlock_bh(lgr_lock);449455 return 0;450456···542548 smc_wr_wakeup_tx_wait(from_lnk);543549544550 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {545545- if (lgr->lnk[i].state != SMC_LNK_ACTIVE ||546546- i == from_lnk->link_idx)551551+ if (!smc_link_active(&lgr->lnk[i]) || i == 
from_lnk->link_idx)547552 continue;548553 if (is_dev_err && from_lnk->smcibdev == lgr->lnk[i].smcibdev &&549554 from_lnk->ibport == lgr->lnk[i].ibport) {···10971104 sock_put(&smc->sk); /* sock_hold done by schedulers of abort_work */10981105}1099110611001100-/* link is up - establish alternate link if applicable */11011101-static void smcr_link_up(struct smc_link_group *lgr,11021102- struct smc_ib_device *smcibdev, u8 ibport)11031103-{11041104- struct smc_link *link = NULL;11051105-11061106- if (list_empty(&lgr->list) ||11071107- lgr->type == SMC_LGR_SYMMETRIC ||11081108- lgr->type == SMC_LGR_ASYMMETRIC_PEER)11091109- return;11101110-11111111- if (lgr->role == SMC_SERV) {11121112- /* trigger local add link processing */11131113- link = smc_llc_usable_link(lgr);11141114- if (!link)11151115- return;11161116- smc_llc_srv_add_link_local(link);11171117- } else {11181118- /* invite server to start add link processing */11191119- u8 gid[SMC_GID_SIZE];11201120-11211121- if (smc_ib_determine_gid(smcibdev, ibport, lgr->vlan_id, gid,11221122- NULL))11231123- return;11241124- if (lgr->llc_flow_lcl.type != SMC_LLC_FLOW_NONE) {11251125- /* some other llc task is ongoing */11261126- wait_event_interruptible_timeout(lgr->llc_waiter,11271127- (lgr->llc_flow_lcl.type == SMC_LLC_FLOW_NONE),11281128- SMC_LLC_WAIT_TIME);11291129- }11301130- if (list_empty(&lgr->list) ||11311131- !smc_ib_port_active(smcibdev, ibport))11321132- return; /* lgr or device no longer active */11331133- link = smc_llc_usable_link(lgr);11341134- if (!link)11351135- return;11361136- smc_llc_send_add_link(link, smcibdev->mac[ibport - 1], gid,11371137- NULL, SMC_LLC_REQ);11381138- }11391139-}11401140-11411107void smcr_port_add(struct smc_ib_device *smcibdev, u8 ibport)11421108{11431143- struct smc_ib_up_work *ib_work;11441109 struct smc_link_group *lgr, *n;1145111011461111 list_for_each_entry_safe(lgr, n, &smc_lgr_list.list, list) {11121112+ struct smc_link *link;11131113+11471114 if 
(strncmp(smcibdev->pnetid[ibport - 1], lgr->pnet_id,11481115 SMC_MAX_PNETID_LEN) ||11491116 lgr->type == SMC_LGR_SYMMETRIC ||11501117 lgr->type == SMC_LGR_ASYMMETRIC_PEER)11511118 continue;11521152- ib_work = kmalloc(sizeof(*ib_work), GFP_KERNEL);11531153- if (!ib_work)11541154- continue;11551155- INIT_WORK(&ib_work->work, smc_link_up_work);11561156- ib_work->lgr = lgr;11571157- ib_work->smcibdev = smcibdev;11581158- ib_work->ibport = ibport;11591159- schedule_work(&ib_work->work);11191119+11201120+ /* trigger local add link processing */11211121+ link = smc_llc_usable_link(lgr);11221122+ if (link)11231123+ smc_llc_add_link_local(link);11601124 }11611125}11621126···11451195 if (lgr->llc_flow_lcl.type != SMC_LLC_FLOW_NONE) {11461196 /* another llc task is ongoing */11471197 mutex_unlock(&lgr->llc_conf_mutex);11481148- wait_event_interruptible_timeout(lgr->llc_waiter,11491149- (lgr->llc_flow_lcl.type == SMC_LLC_FLOW_NONE),11981198+ wait_event_timeout(lgr->llc_flow_waiter,11991199+ (list_empty(&lgr->list) ||12001200+ lgr->llc_flow_lcl.type == SMC_LLC_FLOW_NONE),11501201 SMC_LLC_WAIT_TIME);11511202 mutex_lock(&lgr->llc_conf_mutex);11521203 }11531153- smc_llc_send_delete_link(to_lnk, del_link_id, SMC_LLC_REQ, true,11541154- SMC_LLC_DEL_LOST_PATH);12041204+ if (!list_empty(&lgr->list)) {12051205+ smc_llc_send_delete_link(to_lnk, del_link_id,12061206+ SMC_LLC_REQ, true,12071207+ SMC_LLC_DEL_LOST_PATH);12081208+ smcr_link_clear(lnk, true);12091209+ }12101210+ wake_up(&lgr->llc_flow_waiter); /* wake up next waiter */11551211 }11561212}11571213···11961240 }11971241}1198124211991199-static void smc_link_up_work(struct work_struct *work)12001200-{12011201- struct smc_ib_up_work *ib_work = container_of(work,12021202- struct smc_ib_up_work,12031203- work);12041204- struct smc_link_group *lgr = ib_work->lgr;12051205-12061206- if (list_empty(&lgr->list))12071207- goto out;12081208- smcr_link_up(lgr, ib_work->smcibdev, ib_work->ibport);12091209-out:12101210- 
kfree(ib_work);12111211-}12121212-12131243static void smc_link_down_work(struct work_struct *work)12141244{12151245 struct smc_link *link = container_of(work, struct smc_link,···1204126212051263 if (list_empty(&lgr->list))12061264 return;12071207- wake_up_interruptible_all(&lgr->llc_waiter);12651265+ wake_up_all(&lgr->llc_msg_waiter);12081266 mutex_lock(&lgr->llc_conf_mutex);12091267 smcr_link_down(link);12101268 mutex_unlock(&lgr->llc_conf_mutex);···12681326 return false;1269132712701328 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {12711271- if (lgr->lnk[i].state != SMC_LNK_ACTIVE)13291329+ if (!smc_link_active(&lgr->lnk[i]))12721330 continue;12731331 if ((lgr->role == SMC_SERV || lgr->lnk[i].peer_qpn == clcqpn) &&12741332 !memcmp(lgr->lnk[i].peer_gid, &lcl->gid, SMC_GID_SIZE) &&···13111369 smcr_lgr_match(lgr, ini->ib_lcl, role, ini->ib_clcqpn)) &&13121370 !lgr->sync_err &&13131371 lgr->vlan_id == ini->vlan_id &&13141314- (role == SMC_CLNT ||13721372+ (role == SMC_CLNT || ini->is_smcd ||13151373 lgr->conns_num < SMC_RMBS_PER_LGR_MAX)) {13161374 /* link group found */13171375 ini->cln_first_contact = SMC_REUSE_CONTACT;···1716177417171775void smc_sndbuf_sync_sg_for_cpu(struct smc_connection *conn)17181776{17191719- if (!conn->lgr || conn->lgr->is_smcd || !smc_link_usable(conn->lnk))17771777+ if (!conn->lgr || conn->lgr->is_smcd || !smc_link_active(conn->lnk))17201778 return;17211779 smc_ib_sync_sg_for_cpu(conn->lnk, conn->sndbuf_desc, DMA_TO_DEVICE);17221780}1723178117241782void smc_sndbuf_sync_sg_for_device(struct smc_connection *conn)17251783{17261726- if (!conn->lgr || conn->lgr->is_smcd || !smc_link_usable(conn->lnk))17841784+ if (!conn->lgr || conn->lgr->is_smcd || !smc_link_active(conn->lnk))17271785 return;17281786 smc_ib_sync_sg_for_device(conn->lnk, conn->sndbuf_desc, DMA_TO_DEVICE);17291787}···17351793 if (!conn->lgr || conn->lgr->is_smcd)17361794 return;17371795 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {17381738- if 
(!smc_link_usable(&conn->lgr->lnk[i]))17961796+ if (!smc_link_active(&conn->lgr->lnk[i]))17391797 continue;17401798 smc_ib_sync_sg_for_cpu(&conn->lgr->lnk[i], conn->rmb_desc,17411799 DMA_FROM_DEVICE);···17491807 if (!conn->lgr || conn->lgr->is_smcd)17501808 return;17511809 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {17521752- if (!smc_link_usable(&conn->lgr->lnk[i]))18101810+ if (!smc_link_active(&conn->lgr->lnk[i]))17531811 continue;17541812 smc_ib_sync_sg_for_device(&conn->lgr->lnk[i], conn->rmb_desc,17551813 DMA_FROM_DEVICE);···17721830 return rc;17731831 /* create rmb */17741832 rc = __smc_buf_create(smc, is_smcd, true);17751775- if (rc)18331833+ if (rc) {18341834+ mutex_lock(&smc->conn.lgr->sndbufs_lock);18351835+ list_del(&smc->conn.sndbuf_desc->list);18361836+ mutex_unlock(&smc->conn.lgr->sndbufs_lock);17761837 smc_buf_free(smc->conn.lgr, false, smc->conn.sndbuf_desc);18381838+ }17771839 return rc;17781840}17791841···19011955 struct smc_ib_device *smcibdev;19021956 struct smcd_dev *smcd;1903195719041904- spin_lock(&smc_ib_devices.lock);19581958+ mutex_lock(&smc_ib_devices.mutex);19051959 list_for_each_entry(smcibdev, &smc_ib_devices.list, list) {19061960 int i;1907196119081962 for (i = 0; i < SMC_MAX_PORTS; i++)19091963 set_bit(i, smcibdev->ports_going_away);19101964 }19111911- spin_unlock(&smc_ib_devices.lock);19651965+ mutex_unlock(&smc_ib_devices.mutex);1912196619131913- spin_lock(&smcd_dev_list.lock);19671967+ mutex_lock(&smcd_dev_list.mutex);19141968 list_for_each_entry(smcd, &smcd_dev_list.list, list) {19151969 smcd->going_away = 1;19161970 }19171917- spin_unlock(&smcd_dev_list.lock);19711971+ mutex_unlock(&smcd_dev_list.mutex);19181972}1919197319201974/* Clean up all SMC link groups */···1926198019271981 smc_smcr_terminate_all(NULL);1928198219291929- spin_lock(&smcd_dev_list.lock);19831983+ mutex_lock(&smcd_dev_list.mutex);19301984 list_for_each_entry(smcd, &smcd_dev_list.list, list)19311985 smc_smcd_terminate_all(smcd);19321932- 
spin_unlock(&smcd_dev_list.lock);19861986+ mutex_unlock(&smcd_dev_list.mutex);19331987}1934198819351989static int smc_core_reboot_event(struct notifier_block *this,
scripts/decode_stacktrace.sh
···8787 return8888 fi89899090- # Strip out the base of the path9191- code=${code#$basepath/}9090+ # Strip out the base of the path on each line9191+ code=$(while read -r line; do echo "${line#$basepath/}"; done <<< "$code")92929393 # In the case of inlines, move everything to same line9494 code=${code//$'\n'/' '}
+36-3
scripts/dtc/checks.c
···10221022}10231023WARNING(i2c_bus_bridge, check_i2c_bus_bridge, NULL, &addr_size_cells);1024102410251025+#define I2C_OWN_SLAVE_ADDRESS (1U << 30)10261026+#define I2C_TEN_BIT_ADDRESS (1U << 31)10271027+10251028static void check_i2c_bus_reg(struct check *c, struct dt_info *dti, struct node *node)10261029{10271030 struct property *prop;···10471044 }1048104510491046 reg = fdt32_to_cpu(*cells);10471047+ /* Ignore I2C_OWN_SLAVE_ADDRESS */10481048+ reg &= ~I2C_OWN_SLAVE_ADDRESS;10501049 snprintf(unit_addr, sizeof(unit_addr), "%x", reg);10511050 if (!streq(unitname, unit_addr))10521051 FAIL(c, dti, node, "I2C bus unit address format error, expected \"%s\"",···1056105110571052 for (len = prop->val.len; len > 0; len -= 4) {10581053 reg = fdt32_to_cpu(*(cells++));10591059- if (reg > 0x3ff)10541054+ /* Ignore I2C_OWN_SLAVE_ADDRESS */10551055+ reg &= ~I2C_OWN_SLAVE_ADDRESS;10561056+10571057+ if ((reg & I2C_TEN_BIT_ADDRESS) && ((reg & ~I2C_TEN_BIT_ADDRESS) > 0x3ff))10601058 FAIL_PROP(c, dti, node, prop, "I2C address must be less than 10-bits, got \"0x%x\"",10611059 reg);10621062-10601060+ else if (reg > 0x7f)10611061+ FAIL_PROP(c, dti, node, prop, "I2C address must be less than 7-bits, got \"0x%x\". 
Set I2C_TEN_BIT_ADDRESS for 10 bit addresses or fix the property",10621062+ reg);10631063 }10641064}10651065WARNING(i2c_bus_reg, check_i2c_bus_reg, NULL, ®_format, &i2c_bus_bridge);···1557154715581548 return false;15591549}15501550+15511551+static void check_interrupt_provider(struct check *c,15521552+ struct dt_info *dti,15531553+ struct node *node)15541554+{15551555+ struct property *prop;15561556+15571557+ if (!node_is_interrupt_provider(node))15581558+ return;15591559+15601560+ prop = get_property(node, "#interrupt-cells");15611561+ if (!prop)15621562+ FAIL(c, dti, node,15631563+ "Missing #interrupt-cells in interrupt provider");15641564+15651565+ prop = get_property(node, "#address-cells");15661566+ if (!prop)15671567+ FAIL(c, dti, node,15681568+ "Missing #address-cells in interrupt provider");15691569+}15701570+WARNING(interrupt_provider, check_interrupt_provider, NULL);15711571+15601572static void check_interrupts_property(struct check *c,15611573 struct dt_info *dti,15621574 struct node *node)···1636160416371605 prop = get_property(irq_node, "#interrupt-cells");16381606 if (!prop) {16391639- FAIL(c, dti, irq_node, "Missing #interrupt-cells in interrupt-parent");16071607+ /* We warn about that already in another test. */16401608 return;16411609 }16421610···18601828 &deprecated_gpio_property,18611829 &gpios_property,18621830 &interrupts_property,18311831+ &interrupt_provider,1863183218641833 &alias_paths,18651834
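The checks.c hunk above first strips I2C_OWN_SLAVE_ADDRESS (bit 30) from the unit address, then validates what remains: with I2C_TEN_BIT_ADDRESS (bit 31) set the address must fit in 10 bits, otherwise in 7 bits. A stand-alone sketch of the intended validation (a predicate rather than dtc's FAIL() reporting):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define I2C_OWN_SLAVE_ADDRESS	(1U << 30)
#define I2C_TEN_BIT_ADDRESS	(1U << 31)

/* Return true when reg encodes a valid I2C unit address:
 * <= 0x3ff with the ten-bit flag, <= 0x7f without it. */
static bool i2c_reg_valid(uint32_t reg)
{
	reg &= ~I2C_OWN_SLAVE_ADDRESS;	/* flag is not part of the address */
	if (reg & I2C_TEN_BIT_ADDRESS)
		return (reg & ~I2C_TEN_BIT_ADDRESS) <= 0x3ff;
	return reg <= 0x7f;
}
```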
···
 		return struct_size;
 	}
 
-	if (can_assume(LIBFDT_ORDER) |
+	if (can_assume(LIBFDT_ORDER) ||
 	    !fdt_blocks_misordered_(fdt, mem_rsv_size, struct_size)) {
 		/* no further work necessary */
 		err = fdt_move(fdt, buf, bufsize);
+1-1
scripts/dtc/libfdt/fdt_sw.c
···
 /* 'memrsv' state:	Initial state after fdt_create()
  *
  * Allowed functions:
- *	fdt_add_reservmap_entry()
+ *	fdt_add_reservemap_entry()
  *	fdt_finish_reservemap()	[moves to 'struct' state]
  */
 static int fdt_sw_probe_memrsv_(void *fdt)
···
 	  source tree isn't cleaned after kernel installation).
 
 	  The seed used for compilation is located at
-	  scripts/gcc-plgins/randomize_layout_seed.h.  It remains after
+	  scripts/gcc-plugins/randomize_layout_seed.h.  It remains after
 	  a make clean to allow for external modules to be compiled with
 	  the existing seed and will be removed by a make mrproper or
 	  make distclean.
+1-1
scripts/gdb/linux/symbols.py
···
         return ""
     attrs = sect_attrs['attrs']
     section_name_to_address = {
-        attrs[n]['name'].string(): attrs[n]['address']
+        attrs[n]['battr']['attr']['name'].string(): attrs[n]['address']
         for n in range(int(sect_attrs['nsections']))}
     args = []
     for section_name in [".data", ".data..read_mostly", ".rodata", ".bss",
+106-67
scripts/kconfig/qconf.cc
···
  * Copyright (C) 2015 Boris Barbulovski <bbarbulovski@gmail.com>
  */
 
-#include <qglobal.h>
-
-#include <QMainWindow>
-#include <QList>
-#include <qtextbrowser.h>
 #include <QAction>
+#include <QApplication>
+#include <QCloseEvent>
+#include <QDebug>
+#include <QDesktopWidget>
 #include <QFileDialog>
+#include <QLabel>
+#include <QLayout>
+#include <QList>
 #include <QMenu>
-
-#include <qapplication.h>
-#include <qdesktopwidget.h>
-#include <qtoolbar.h>
-#include <qlayout.h>
-#include <qsplitter.h>
-#include <qlineedit.h>
-#include <qlabel.h>
-#include <qpushbutton.h>
-#include <qmenubar.h>
-#include <qmessagebox.h>
-#include <qregexp.h>
-#include <qevent.h>
+#include <QMenuBar>
+#include <QMessageBox>
+#include <QToolBar>
 
 #include <stdlib.h>
···
 	if (rootEntry != &rootmenu && (mode == singleMode ||
 	    (mode == symbolMode && rootEntry->parent != &rootmenu))) {
 		item = (ConfigItem *)topLevelItem(0);
-		if (!item)
+		if (!item && mode != symbolMode) {
 			item = new ConfigItem(this, 0, true);
-		last = item;
+			last = item;
+		}
 	}
 	if ((mode == singleMode || (mode == symbolMode && !(rootEntry->flags & MENU_ROOT))) &&
 	    rootEntry->sym && rootEntry->prompt) {
···
 	rootEntry = menu;
 	updateListAll();
 	if (currentItem()) {
-		currentItem()->setSelected(hasFocus());
+		setSelected(currentItem(), hasFocus());
 		scrollToItem(currentItem());
 	}
 }
···
 
 	ConfigItem* item = (ConfigItem *)currentItem();
 	if (item) {
-		item->setSelected(true);
+		setSelected(item, true);
 		menu = item->menu;
 	}
 	emit gotFocus(menu);
···
 	: Parent(parent), sym(0), _menu(0)
 {
 	setObjectName(name);
-
+	setOpenLinks(false);
 
 	if (!objectName().isEmpty()) {
 		configSettings->beginGroup(objectName());
···
 	if (sym->name) {
 		head += " (";
 		if (showDebug())
-			head += QString().sprintf("<a href=\"s%p\">", sym);
+			head += QString().sprintf("<a href=\"s%s\">", sym->name);
 		head += print_filter(sym->name);
 		if (showDebug())
 			head += "</a>";
···
 	} else if (sym->name) {
 		head += "<big><b>";
 		if (showDebug())
-			head += QString().sprintf("<a href=\"s%p\">", sym);
+			head += QString().sprintf("<a href=\"s%s\">", sym->name);
 		head += print_filter(sym->name);
 		if (showDebug())
 			head += "</a>";
···
 		switch (prop->type) {
 		case P_PROMPT:
 		case P_MENU:
-			debug += QString().sprintf("prompt: <a href=\"m%p\">", prop->menu);
+			debug += QString().sprintf("prompt: <a href=\"m%s\">", sym->name);
 			debug += print_filter(prop->text);
 			debug += "</a><br>";
 			break;
 		case P_DEFAULT:
 		case P_SELECT:
 		case P_RANGE:
+		case P_COMMENT:
+		case P_IMPLY:
+		case P_SYMBOL:
 			debug += prop_get_type_name(prop->type);
 			debug += ": ";
 			expr_print(prop->expr, expr_print_help, &debug, E_NONE);
···
 	QString str2 = print_filter(str);
 
 	if (sym && sym->name && !(sym->flags & SYMBOL_CONST)) {
-		*text += QString().sprintf("<a href=\"s%p\">", sym);
+		*text += QString().sprintf("<a href=\"s%s\">", sym->name);
 		*text += str2;
 		*text += "</a>";
 	} else
 		*text += str2;
 }
+
+void ConfigInfoView::clicked(const QUrl &url)
+{
+	QByteArray str = url.toEncoded();
+	const std::size_t count = str.size();
+	char *data = new char[count + 1];
+	struct symbol **result;
+	struct menu *m = NULL;
+
+	if (count < 1) {
+		qInfo() << "Clicked link is empty";
+		delete data;
+		return;
+	}
+
+	memcpy(data, str.constData(), count);
+	data[count] = '\0';
+
+	/* Seek for exact match */
+	data[0] = '^';
+	strcat(data, "$");
+	result = sym_re_search(data);
+	if (!result) {
+		qInfo() << "Clicked symbol is invalid:" << data;
+		delete data;
+		return;
+	}
+
+	sym = *result;
+
+	/* Seek for the menu which holds the symbol */
+	for (struct property *prop = sym->prop; prop; prop = prop->next) {
+		if (prop->type != P_PROMPT && prop->type != P_MENU)
+			continue;
+		m = prop->menu;
+		break;
+	}
+
+	if (!m) {
+		/* Symbol is not visible as a menu */
+		symbolInfo();
+		emit showDebugChanged(true);
+	} else {
+		emit menuSelected(m);
+	}
+
+	free(result);
+	delete data;
 }
 
 QMenu* ConfigInfoView::createStandardContextMenu(const QPoint & pos)
···
 	addToolBar(toolBar);
 
 	backAction = new QAction(QPixmap(xpm_back), "Back", this);
-	connect(backAction, SIGNAL(triggered(bool)), SLOT(goBack()));
-	backAction->setEnabled(false);
+	connect(backAction, SIGNAL(triggered(bool)), SLOT(goBack()));
+
 	QAction *quitAction = new QAction("&Quit", this);
 	quitAction->setShortcut(Qt::CTRL + Qt::Key_Q);
-	connect(quitAction, SIGNAL(triggered(bool)), SLOT(close()));
+	connect(quitAction, SIGNAL(triggered(bool)), SLOT(close()));
+
 	QAction *loadAction = new QAction(QPixmap(xpm_load), "&Load", this);
 	loadAction->setShortcut(Qt::CTRL + Qt::Key_L);
-	connect(loadAction, SIGNAL(triggered(bool)), SLOT(loadConfig()));
+	connect(loadAction, SIGNAL(triggered(bool)), SLOT(loadConfig()));
+
 	saveAction = new QAction(QPixmap(xpm_save), "&Save", this);
 	saveAction->setShortcut(Qt::CTRL + Qt::Key_S);
-	connect(saveAction, SIGNAL(triggered(bool)), SLOT(saveConfig()));
+	connect(saveAction, SIGNAL(triggered(bool)), SLOT(saveConfig()));
+
 	conf_set_changed_callback(conf_changed);
+
 	// Set saveAction's initial state
 	conf_changed();
 	configname = xstrdup(conf_get_configname());
···
 	QMenu* helpMenu = menu->addMenu("&Help");
 	helpMenu->addAction(showIntroAction);
 	helpMenu->addAction(showAboutAction);
+
+	connect (helpText, SIGNAL (anchorClicked (const QUrl &)),
+		 helpText, SLOT (clicked (const QUrl &)) );
 
 	connect(configList, SIGNAL(menuChanged(struct menu *)),
 		helpText, SLOT(setInfo(struct menu *)));
···
 void ConfigMainWindow::changeItens(struct menu *menu)
 {
 	configList->setRootMenu(menu);
-
-	if (configList->rootEntry->parent == &rootmenu)
-		backAction->setEnabled(false);
-	else
-		backAction->setEnabled(true);
 }
 
 void ConfigMainWindow::changeMenu(struct menu *menu)
 {
 	menuList->setRootMenu(menu);
-
-	if (menuList->rootEntry->parent == &rootmenu)
-		backAction->setEnabled(false);
-	else
-		backAction->setEnabled(true);
 }
 
 void ConfigMainWindow::setMenuLink(struct menu *menu)
···
 			return;
 		list->setRootMenu(parent);
 		break;
-	case symbolMode:
+	case menuMode:
 		if (menu->flags & MENU_ROOT) {
-			configList->setRootMenu(menu);
+			menuList->setRootMenu(menu);
 			configList->clearSelection();
-			list = menuList;
-		} else {
 			list = configList;
+		} else {
 			parent = menu_get_parent_menu(menu->parent);
 			if (!parent)
 				return;
-			item = menuList->findConfigItem(parent);
+
+			/* Select the config view */
+			item = configList->findConfigItem(parent);
 			if (item) {
-				item->setSelected(true);
-				menuList->scrollToItem(item);
+				configList->setSelected(item, true);
+				configList->scrollToItem(item);
 			}
-			list->setRootMenu(parent);
+
+			menuList->setRootMenu(parent);
+			menuList->clearSelection();
+			list = menuList;
 		}
 		break;
 	case fullMode:
···
 	if (list) {
 		item = list->findConfigItem(menu);
 		if (item) {
-			item->setSelected(true);
+			list->setSelected(item, true);
 			list->scrollToItem(item);
 			list->setFocus();
+			helpText->setInfo(menu);
 		}
 	}
 }
···
 
 void ConfigMainWindow::goBack(void)
 {
-	ConfigItem* item, *oldSelection;
-
-	configList->setParentMenu();
+	qInfo() << __FUNCTION__;
 	if (configList->rootEntry == &rootmenu)
-		backAction->setEnabled(false);
-
-	if (menuList->selectedItems().count() == 0)
 		return;
 
-	item = (ConfigItem*)menuList->selectedItems().first();
-	oldSelection = item;
-	while (item) {
-		if (item->menu == configList->rootEntry) {
-			oldSelection->setSelected(false);
-			item->setSelected(true);
-			break;
-		}
-		item = (ConfigItem*)item->parent();
-	}
+	configList->setParentMenu();
 }
 
 void ConfigMainWindow::showSingleView(void)
···
 	splitViewAction->setChecked(false);
 	fullViewAction->setEnabled(true);
 	fullViewAction->setChecked(false);
+
+	backAction->setEnabled(true);
 
 	menuView->hide();
 	menuList->setRootMenu(0);
···
 	splitViewAction->setChecked(true);
 	fullViewAction->setEnabled(true);
 	fullViewAction->setChecked(false);
+
+	backAction->setEnabled(false);
 
 	configList->mode = menuMode;
 	if (configList->rootEntry == &rootmenu)
···
 	splitViewAction->setChecked(false);
 	fullViewAction->setEnabled(false);
 	fullViewAction->setChecked(true);
+
+	backAction->setEnabled(false);
 
 	menuView->hide();
 	menuList->setRootMenu(0);
···
 
 char *get_line(char **stringp)
 {
+	char *orig = *stringp, *next;
+
 	/* do not return the unwanted extra line at EOF */
-	if (*stringp && **stringp == '\0')
+	if (!orig || *orig == '\0')
 		return NULL;
 
-	return strsep(stringp, "\n");
+	next = strchr(orig, '\n');
+	if (next)
+		*next++ = '\0';
+
+	*stringp = next;
+
+	return orig;
 }
 
 /* A list of all modules we processed */
···
 	if (rc != 0)
 		return rc;
 
-	/* cumulative sha1 over tpm registers 0-7 */
+	/* cumulative digest over TPM registers 0-7 */
 	for (i = TPM_PCR0; i < TPM_PCR8; i++) {
 		ima_pcrread(i, &d);
 		/* now accumulate with current aggregate */
 		rc = crypto_shash_update(shash, d.digest,
 					 crypto_shash_digestsize(tfm));
+	}
+	/*
+	 * Extend cumulative digest over TPM registers 8-9, which contain
+	 * measurement for the kernel command line (reg. 8) and image (reg. 9)
+	 * in a typical PCR allocation. Registers 8-9 are only included in
+	 * non-SHA1 boot_aggregate digests to avoid ambiguity.
+	 */
+	if (alg_id != TPM_ALG_SHA1) {
+		for (i = TPM_PCR8; i < TPM_PCR10; i++) {
+			ima_pcrread(i, &d);
+			rc = crypto_shash_update(shash, d.digest,
+						 crypto_shash_digestsize(tfm));
+		}
 	}
 	if (!rc)
 		crypto_shash_final(shash, digest);
+16-1
security/security.c
···
 
 int security_inode_copy_up_xattr(const char *name)
 {
-	return call_int_hook(inode_copy_up_xattr, -EOPNOTSUPP, name);
+	struct security_hook_list *hp;
+	int rc;
+
+	/*
+	 * The implementation can return 0 (accept the xattr), 1 (discard the
+	 * xattr), -EOPNOTSUPP if it does not know anything about the xattr or
+	 * any other error code in case of an error.
+	 */
+	hlist_for_each_entry(hp,
+			     &security_hook_heads.inode_copy_up_xattr, list) {
+		rc = hp->hook.inode_copy_up_xattr(name);
+		if (rc != LSM_RET_DEFAULT(inode_copy_up_xattr))
+			return rc;
+	}
+
+	return LSM_RET_DEFAULT(inode_copy_up_xattr);
 }
 EXPORT_SYMBOL(security_inode_copy_up_xattr);
+4
sound/core/compress_offload.c
···
 
 	retval = stream->ops->trigger(stream, SNDRV_PCM_TRIGGER_STOP);
 	if (!retval) {
+		/* clear flags and stop any drain wait */
+		stream->partial_drain = false;
+		stream->metadata_set = false;
 		snd_compr_drain_notify(stream);
 		stream->runtime->total_bytes_available = 0;
 		stream->runtime->total_bytes_transferred = 0;
···
 	if (stream->next_track == false)
 		return -EPERM;
 
+	stream->partial_drain = true;
 	retval = stream->ops->trigger(stream, SND_COMPR_TRIGGER_PARTIAL_DRAIN);
 	if (retval) {
 		pr_debug("Partial drain returned failure\n");
+3-1
sound/core/info.c
···
 {
 	int c;
 
-	if (snd_BUG_ON(!buffer || !buffer->buffer))
+	if (snd_BUG_ON(!buffer))
+		return 1;
+	if (!buffer->buffer)
 		return 1;
 	if (len <= 0 || buffer->stop || buffer->error)
 		return 1;
···
 	if (a->type != b->type)
 		return (int)(a->type - b->type);
 
+	/* If has both hs_mic and hp_mic, pick the hs_mic ahead of hp_mic. */
+	if (a->is_headset_mic && b->is_headphone_mic)
+		return -1; /* don't swap */
+	else if (a->is_headphone_mic && b->is_headset_mic)
+		return 1; /* swap */
+
 	/* In case one has boost and the other one has not,
 	   pick the one with boost first. */
 	return (int)(b->has_boost_on_pin - a->has_boost_on_pin);
+26-15
sound/pci/hda/patch_hdmi.c
···
 		if (get_pcm_rec(spec, pcm_idx)->stream == hinfo)
 			return pcm_idx;
 
-	codec_warn(codec, "HDMI: hinfo %p not registered\n", hinfo);
+	codec_warn(codec, "HDMI: hinfo %p not tied to a PCM\n", hinfo);
 	return -EINVAL;
 }
···
 			return pin_idx;
 	}
 
-	codec_dbg(codec, "HDMI: hinfo %p not registered\n", hinfo);
+	codec_dbg(codec, "HDMI: hinfo %p (pcm %d) not registered\n", hinfo,
+		  hinfo_to_pcm_index(codec, hinfo));
 	return -EINVAL;
 }
···
 
 static int hdmi_parse_codec(struct hda_codec *codec)
 {
-	hda_nid_t nid;
+	hda_nid_t start_nid;
+	unsigned int caps;
 	int i, nodes;
 
-	nodes = snd_hda_get_sub_nodes(codec, codec->core.afg, &nid);
-	if (!nid || nodes < 0) {
+	nodes = snd_hda_get_sub_nodes(codec, codec->core.afg, &start_nid);
+	if (!start_nid || nodes < 0) {
 		codec_warn(codec, "HDMI: failed to get afg sub nodes\n");
 		return -EINVAL;
 	}
 
-	for (i = 0; i < nodes; i++, nid++) {
-		unsigned int caps;
-		unsigned int type;
+	/*
+	 * hdmi_add_pin() assumes total amount of converters to
+	 * be known, so first discover all converters
+	 */
+	for (i = 0; i < nodes; i++) {
+		hda_nid_t nid = start_nid + i;
 
 		caps = get_wcaps(codec, nid);
-		type = get_wcaps_type(caps);
 
 		if (!(caps & AC_WCAP_DIGITAL))
 			continue;
 
-		switch (type) {
-		case AC_WID_AUD_OUT:
+		if (get_wcaps_type(caps) == AC_WID_AUD_OUT)
 			hdmi_add_cvt(codec, nid);
-			break;
-		case AC_WID_PIN:
+	}
+
+	/* discover audio pins */
+	for (i = 0; i < nodes; i++) {
+		hda_nid_t nid = start_nid + i;
+
+		caps = get_wcaps(codec, nid);
+
+		if (!(caps & AC_WCAP_DIGITAL))
+			continue;
+
+		if (get_wcaps_type(caps) == AC_WID_PIN)
 			hdmi_add_pin(codec, nid);
-			break;
-		}
 	}
 
 	return 0;
···
 
 	if (cnt) {
 		ret = device_add_properties(codec_dev, props);
-		if (ret)
+		if (ret) {
+			put_device(codec_dev);
 			return ret;
+		}
 	}
 
 	devm_acpi_dev_add_driver_gpios(codec_dev, byt_cht_es8316_gpios);
+11-12
sound/soc/intel/boards/cht_bsw_rt5672.c
···
 	params_set_format(params, SNDRV_PCM_FORMAT_S24_LE);
 
 	/*
-	 * Default mode for SSP configuration is TDM 4 slot
+	 * Default mode for SSP configuration is TDM 4 slot. One board/design,
+	 * the Lenovo Miix 2 10 uses not 1 but 2 codecs connected to SSP2. The
+	 * second piggy-backed, output-only codec is inside the keyboard-dock
+	 * (which has extra speakers). Unlike the main rt5672 codec, we cannot
+	 * configure this codec, it is hard coded to use 2 channel 24 bit I2S.
+	 * Since we only support 2 channels anyways, there is no need for TDM
+	 * on any cht-bsw-rt5672 designs. So we simply use I2S 2ch everywhere.
 	 */
-	ret = snd_soc_dai_set_fmt(asoc_rtd_to_codec(rtd, 0),
-				  SND_SOC_DAIFMT_DSP_B |
-				  SND_SOC_DAIFMT_IB_NF |
+	ret = snd_soc_dai_set_fmt(asoc_rtd_to_cpu(rtd, 0),
+				  SND_SOC_DAIFMT_I2S |
+				  SND_SOC_DAIFMT_NB_NF |
 				  SND_SOC_DAIFMT_CBS_CFS);
 	if (ret < 0) {
-		dev_err(rtd->dev, "can't set format to TDM %d\n", ret);
-		return ret;
-	}
-
-	/* TDM 4 slots 24 bit, set Rx & Tx bitmask to 4 active slots */
-	ret = snd_soc_dai_set_tdm_slot(asoc_rtd_to_codec(rtd, 0), 0xF, 0xF, 4, 24);
-	if (ret < 0) {
-		dev_err(rtd->dev, "can't set codec TDM slot %d\n", ret);
+		dev_err(rtd->dev, "can't set format to I2S, err %d\n", ret);
 		return ret;
 	}
+1-1
sound/soc/qcom/Kconfig
···
 
 config SND_SOC_QDSP6
 	tristate "SoC ALSA audio driver for QDSP6"
-	depends on QCOM_APR && HAS_DMA
+	depends on QCOM_APR
 	select SND_SOC_QDSP6_COMMON
 	select SND_SOC_QDSP6_CORE
 	select SND_SOC_QDSP6_AFE
···
 EXPORT_SYMBOL_GPL(snd_soc_register_component);
 
 /**
+ * snd_soc_unregister_component_by_driver - Unregister component using a given driver
+ * from the ASoC core
+ *
+ * @dev: The device to unregister
+ * @component_driver: The component driver to unregister
+ */
+void snd_soc_unregister_component_by_driver(struct device *dev,
+					    const struct snd_soc_component_driver *component_driver)
+{
+	struct snd_soc_component *component;
+
+	if (!component_driver)
+		return;
+
+	mutex_lock(&client_mutex);
+	component = snd_soc_lookup_component_nolocked(dev, component_driver->name);
+	if (!component)
+		goto out;
+
+	snd_soc_del_component_unlocked(component);
+
+out:
+	mutex_unlock(&client_mutex);
+}
+EXPORT_SYMBOL_GPL(snd_soc_unregister_component_by_driver);
+
+/**
  * snd_soc_unregister_component - Unregister all related component
  * from the ASoC core
  *
+38
sound/soc/soc-dai.c
···
 	return stream->channels_min;
 }
 
+/*
+ * snd_soc_dai_link_set_capabilities() - set dai_link properties based on its DAIs
+ */
+void snd_soc_dai_link_set_capabilities(struct snd_soc_dai_link *dai_link)
+{
+	struct snd_soc_dai_link_component *cpu;
+	struct snd_soc_dai_link_component *codec;
+	struct snd_soc_dai *dai;
+	bool supported[SNDRV_PCM_STREAM_LAST + 1];
+	int direction;
+	int i;
+
+	for_each_pcm_streams(direction) {
+		supported[direction] = true;
+
+		for_each_link_cpus(dai_link, i, cpu) {
+			dai = snd_soc_find_dai(cpu);
+			if (!dai || !snd_soc_dai_stream_valid(dai, direction)) {
+				supported[direction] = false;
+				break;
+			}
+		}
+		if (!supported[direction])
+			continue;
+
+		for_each_link_codecs(dai_link, i, codec) {
+			dai = snd_soc_find_dai(codec);
+			if (!dai || !snd_soc_dai_stream_valid(dai, direction)) {
+				supported[direction] = false;
+				break;
+			}
+		}
+	}
+
+	dai_link->dpcm_playback = supported[SNDRV_PCM_STREAM_PLAYBACK];
+	dai_link->dpcm_capture = supported[SNDRV_PCM_STREAM_CAPTURE];
+}
+EXPORT_SYMBOL_GPL(snd_soc_dai_link_set_capabilities);
+
 void snd_soc_dai_action(struct snd_soc_dai *dai,
 			int stream, int action)
 {
···
 		list_add(&routes[i]->dobj.list, &tplg->comp->dobj_list);
 
 		ret = soc_tplg_add_route(tplg, routes[i]);
-		if (ret < 0)
+		if (ret < 0) {
+			/*
+			 * this route was added to the list, it will
+			 * be freed in remove_route() so increment the
+			 * counter to skip it in the error handling
+			 * below.
+			 */
+			i++;
 			break;
+		}
 
 		/* add route, but keep going if some fail */
 		snd_soc_dapm_add_routes(dapm, routes[i], 1);
 	}
 
-	/* free memory allocated for all dapm routes in case of error */
-	if (ret < 0)
-		for (i = 0; i < count ; i++)
-			kfree(routes[i]);
+	/*
+	 * free memory allocated for all dapm routes not added to the
+	 * list in case of error
+	 */
+	if (ret < 0) {
+		while (i < count)
+			kfree(routes[i++]);
+	}
 
 	/*
 	 * free pointer to array of dapm routes as this is no longer needed.
···
 		if (err < 0) {
 			dev_err(tplg->dev, "ASoC: failed to init %s\n",
 				mc->hdr.name);
-			soc_tplg_free_tlv(tplg, &kc[i]);
 			goto err_sm;
 		}
 	}
···
 
 err_sm:
 	for (; i >= 0; i--) {
+		soc_tplg_free_tlv(tplg, &kc[i]);
 		sm = (struct soc_mixer_control *)kc[i].private_value;
 		kfree(sm);
 		kfree(kc[i].name);
+5-5
sound/soc/sof/core.c
···
 	struct snd_sof_pdata *pdata = sdev->pdata;
 	int ret;
 
-	ret = snd_sof_dsp_power_down_notify(sdev);
-	if (ret < 0)
-		dev_warn(dev, "error: %d failed to prepare DSP for device removal",
-			 ret);
-
 	if (IS_ENABLED(CONFIG_SND_SOC_SOF_PROBE_WORK_QUEUE))
 		cancel_work_sync(&sdev->probe_work);
 
 	if (sdev->fw_state > SOF_FW_BOOT_NOT_STARTED) {
+		ret = snd_sof_dsp_power_down_notify(sdev);
+		if (ret < 0)
+			dev_warn(dev, "error: %d failed to prepare DSP for device removal",
+				 ret);
+
 		snd_sof_fw_unload(sdev);
 		snd_sof_ipc_free(sdev);
 		snd_sof_free_debug(sdev);
···
 	dma_addr_t sync_dma;		/* DMA address of syncbuf */
 
 	unsigned int pipe;		/* the data i/o pipe */
-	unsigned int framesize[2];	/* small/large frame sizes in samples */
-	unsigned int sample_rem;	/* remainder from division fs/fps */
+	unsigned int packsize[2];	/* small/large packet sizes in samples */
+	unsigned int sample_rem;	/* remainder from division fs/pps */
 	unsigned int sample_accum;	/* sample accumulator */
-	unsigned int fps;		/* frames per second */
+	unsigned int pps;		/* packets per second */
 	unsigned int freqn;		/* nominal sampling rate in fs/fps in Q16.16 format */
 	unsigned int freqm;		/* momentary sampling rate in fs/fps in Q16.16 format */
 	int	   freqshift;		/* how much to shift the feedback value to get Q16.16 */
···
 	spin_unlock_irq(&umidi->disc_lock);
 	up_write(&umidi->disc_rwsem);
 
+	del_timer_sync(&umidi->error_timer);
+
 	for (i = 0; i < MIDI_MAX_ENDPOINTS; ++i) {
 		struct snd_usb_midi_endpoint *ep = &umidi->endpoints[i];
 		if (ep->out)
···
 			ep->in = NULL;
 		}
 	}
-	del_timer_sync(&umidi->error_timer);
 }
 EXPORT_SYMBOL(snd_usbmidi_disconnect);
···
 }
 EXPORT_SYMBOL(snd_usbmidi_input_stop);
 
-static void snd_usbmidi_input_start_ep(struct snd_usb_midi_in_endpoint *ep)
+static void snd_usbmidi_input_start_ep(struct snd_usb_midi *umidi,
+				       struct snd_usb_midi_in_endpoint *ep)
 {
 	unsigned int i;
+	unsigned long flags;
 
 	if (!ep)
 		return;
 	for (i = 0; i < INPUT_URBS; ++i) {
 		struct urb *urb = ep->urbs[i];
-		urb->dev = ep->umidi->dev;
-		snd_usbmidi_submit_urb(urb, GFP_KERNEL);
+		spin_lock_irqsave(&umidi->disc_lock, flags);
+		if (!atomic_read(&urb->use_count)) {
+			urb->dev = ep->umidi->dev;
+			snd_usbmidi_submit_urb(urb, GFP_ATOMIC);
+		}
+		spin_unlock_irqrestore(&umidi->disc_lock, flags);
 	}
 }
···
 	if (umidi->input_running || !umidi->opened[1])
 		return;
 	for (i = 0; i < MIDI_MAX_ENDPOINTS; ++i)
-		snd_usbmidi_input_start_ep(umidi->endpoints[i].in);
+		snd_usbmidi_input_start_ep(umidi, umidi->endpoints[i].in);
 	umidi->input_running = 1;
 }
 EXPORT_SYMBOL(snd_usbmidi_input_start);
+1
sound/usb/pcm.c
···
 		goto add_sync_ep_from_ifnum;
 	case USB_ID(0x07fd, 0x0008): /* MOTU M Series */
 	case USB_ID(0x31e9, 0x0002): /* Solid State Logic SSL2+ */
+	case USB_ID(0x0d9a, 0x00df): /* RTX6001 */
 		ep = 0x81;
 		ifnum = 2;
 		goto add_sync_ep_from_ifnum;
+52
sound/usb/quirks-table.h
···
 	}
 },
 
+/*
+ * MacroSilicon MS2109 based HDMI capture cards
+ *
+ * These claim 96kHz 1ch in the descriptors, but are actually 48kHz 2ch.
+ * They also need QUIRK_AUDIO_ALIGN_TRANSFER, which makes one wonder if
+ * they pretend to be 96kHz mono as a workaround for stereo being broken
+ * by that...
+ *
+ * They also have swapped L-R channels, but that's for userspace to deal
+ * with.
+ */
+{
+	USB_DEVICE(0x534d, 0x2109),
+	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
+		.vendor_name = "MacroSilicon",
+		.product_name = "MS2109",
+		.ifnum = QUIRK_ANY_INTERFACE,
+		.type = QUIRK_COMPOSITE,
+		.data = &(const struct snd_usb_audio_quirk[]) {
+			{
+				.ifnum = 2,
+				.type = QUIRK_AUDIO_ALIGN_TRANSFER,
+			},
+			{
+				.ifnum = 2,
+				.type = QUIRK_AUDIO_STANDARD_MIXER,
+			},
+			{
+				.ifnum = 3,
+				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
+				.data = &(const struct audioformat) {
+					.formats = SNDRV_PCM_FMTBIT_S16_LE,
+					.channels = 2,
+					.iface = 3,
+					.altsetting = 1,
+					.altset_idx = 1,
+					.attributes = 0,
+					.endpoint = 0x82,
+					.ep_attr = USB_ENDPOINT_XFER_ISOC |
+						USB_ENDPOINT_SYNC_ASYNC,
+					.rates = SNDRV_PCM_RATE_CONTINUOUS,
+					.rate_min = 48000,
+					.rate_max = 48000,
+				}
+			},
+			{
+				.ifnum = -1
+			}
+		}
+	}
+},
+
 #undef USB_DEVICE_VENDOR_SPEC
···
 #include <asm/alternative-asm.h>
 #include <asm/export.h>
 
+.pushsection .noinstr.text, "ax"
+
 /*
  * We build a jump to memcpy_orig by default which gets NOPped out on
  * the majority of x86 CPUs which set REP_GOOD. In addition, CPUs which
···
 .Lend:
 	retq
 SYM_FUNC_END(memcpy_orig)
+
+.popsection
 
 #ifndef CONFIG_UML
+1-2
tools/include/linux/bits.h
···
  * position @h. For example
  * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.
  */
-#if !defined(__ASSEMBLY__) && \
-	(!defined(CONFIG_CC_IS_GCC) || CONFIG_GCC_VERSION >= 49000)
+#if !defined(__ASSEMBLY__)
 #include <linux/build_bug.h>
 #define GENMASK_INPUT_CHECK(h, l) \
 	(BUILD_BUG_ON_ZERO(__builtin_choose_expr( \
+21-20
tools/include/uapi/linux/bpf.h
···
 * int bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags)
 *	Description
 *		Copy *size* bytes from *data* into a ring buffer *ringbuf*.
-*		If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
-*		new data availability is sent.
-*		IF BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
-*		new data availability is sent unconditionally.
+*		If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification
+*		of new data availability is sent.
+*		If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification
+*		of new data availability is sent unconditionally.
 *	Return
-*		0, on success;
-*		< 0, on error.
+*		0 on success, or a negative error in case of failure.
 *
 * void *bpf_ringbuf_reserve(void *ringbuf, u64 size, u64 flags)
 *	Description
···
 * void bpf_ringbuf_submit(void *data, u64 flags)
 *	Description
 *		Submit reserved ring buffer sample, pointed to by *data*.
-*		If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
-*		new data availability is sent.
-*		IF BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
-*		new data availability is sent unconditionally.
+*		If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification
+*		of new data availability is sent.
+*		If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification
+*		of new data availability is sent unconditionally.
 *	Return
 *		Nothing. Always succeeds.
 *
 * void bpf_ringbuf_discard(void *data, u64 flags)
 *	Description
 *		Discard reserved ring buffer sample, pointed to by *data*.
-*		If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
-*		new data availability is sent.
-*		IF BPF_RB_FORCE_WAKEUP is specified in *flags*, notification of
-*		new data availability is sent unconditionally.
+*		If **BPF_RB_NO_WAKEUP** is specified in *flags*, no notification
+*		of new data availability is sent.
+*		If **BPF_RB_FORCE_WAKEUP** is specified in *flags*, notification
+*		of new data availability is sent unconditionally.
 *	Return
 *		Nothing. Always succeeds.
 *
···
 *	Description
 *		Query various characteristics of provided ring buffer. What
 *		exactly is queried is determined by *flags*:
-*		- BPF_RB_AVAIL_DATA - amount of data not yet consumed;
-*		- BPF_RB_RING_SIZE - the size of ring buffer;
-*		- BPF_RB_CONS_POS - consumer position (can wrap around);
-*		- BPF_RB_PROD_POS - producer(s) position (can wrap around);
-*		Data returned is just a momentary snapshots of actual values
+*
+*		* **BPF_RB_AVAIL_DATA**: Amount of data not yet consumed.
+*		* **BPF_RB_RING_SIZE**: The size of ring buffer.
+*		* **BPF_RB_CONS_POS**: Consumer position (can wrap around).
+*		* **BPF_RB_PROD_POS**: Producer(s) position (can wrap around).
+*
+*		Data returned is just a momentary snapshot of actual values
 *		and could be inaccurate, so this facility should be used to
 *		power heuristics and for reporting, not to make 100% correct
 *		calculation.
 *	Return
-*		Requested value, or 0, if flags are not recognized.
+*		Requested value, or 0, if *flags* are not recognized.
 *
 * int bpf_csum_level(struct sk_buff *skb, u64 level)
 *	Description
+2
tools/lib/bpf/bpf.h
···
LIBBPF_API int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf,
				 __u32 *buf_len, __u32 *prog_id, __u32 *fd_type,
				 __u64 *probe_offset, __u64 *probe_addr);
+
+enum bpf_stats_type; /* defined in up-to-date linux/bpf.h */
LIBBPF_API int bpf_enable_stats(enum bpf_stats_type type);

#ifdef __cplusplus
···
			err = -EINVAL;
			goto out;
		}
-		prog = bpf_object__find_program_by_title(obj, sec_name);
+		prog = NULL;
+		for (i = 0; i < obj->nr_programs; i++) {
+			if (!strcmp(obj->programs[i].section_name, sec_name)) {
+				prog = &obj->programs[i];
+				break;
+			}
+		}
		if (!prog) {
			pr_warn("failed to find program '%s' for CO-RE offset relocation\n",
				sec_name);
···
	.expected_attach_type = BPF_TRACE_ITER,
	.is_attach_btf = true,
	.attach_fn = attach_iter),
-	BPF_EAPROG_SEC("xdp_devmap",		BPF_PROG_TYPE_XDP,
+	BPF_EAPROG_SEC("xdp_devmap/",		BPF_PROG_TYPE_XDP,
						BPF_XDP_DEVMAP),
	BPF_PROG_SEC("xdp",			BPF_PROG_TYPE_XDP),
	BPF_PROG_SEC("perf_event",		BPF_PROG_TYPE_PERF_EVENT),
+3
tools/lib/subcmd/parse-options.c
···
		return err;

	case OPTION_CALLBACK:
+		if (opt->set)
+			*(bool *)opt->set = true;
+
		if (unset)
			return (*opt->callback)(opt, NULL, 1) ? (-1) : 0;
		if (opt->flags & PARSE_OPT_NOARG)
+37-6
tools/lib/traceevent/kbuffer-parse.c
···
		break;

	case KBUFFER_TYPE_TIME_EXTEND:
+	case KBUFFER_TYPE_TIME_STAMP:
		extend = read_4(kbuf, data);
		data += 4;
		extend <<= TS_SHIFT;
···
		*length = 0;
		break;

-	case KBUFFER_TYPE_TIME_STAMP:
-		data += 12;
-		*length = 0;
-		break;
	case 0:
		*length = read_4(kbuf, data) - 4;
		*length = (*length + 3) & ~3;
···

	type_len = translate_data(kbuf, ptr, &ptr, &delta, &length);

-	kbuf->timestamp += delta;
+	if (type_len == KBUFFER_TYPE_TIME_STAMP)
+		kbuf->timestamp = delta;
+	else
+		kbuf->timestamp += delta;
+
	kbuf->index = calc_index(kbuf, ptr);
	kbuf->next = kbuf->index + length;
···
		if (kbuf->next >= kbuf->size)
			return -1;
		type = update_pointers(kbuf);
-	} while (type == KBUFFER_TYPE_TIME_EXTEND || type == KBUFFER_TYPE_PADDING);
+	} while (type == KBUFFER_TYPE_TIME_EXTEND ||
+		 type == KBUFFER_TYPE_TIME_STAMP ||
+		 type == KBUFFER_TYPE_PADDING);

	return 0;
}
···

	return 0;
}
+
+/**
+ * kbuffer_subbuf_timestamp - read the timestamp from a sub buffer
+ * @kbuf:	The kbuffer to load
+ * @subbuf:	The subbuffer to read from.
+ *
+ * Return the timestamp from a subbuffer.
+ */
+unsigned long long kbuffer_subbuf_timestamp(struct kbuffer *kbuf, void *subbuf)
+{
+	return kbuf->read_8(subbuf);
+}
+
+/**
+ * kbuffer_ptr_delta - read the delta field from a record
+ * @kbuf:	The kbuffer to load
+ * @ptr:	The record in the buffer.
+ *
+ * Return the timestamp delta from a record
+ */
+unsigned int kbuffer_ptr_delta(struct kbuffer *kbuf, void *ptr)
+{
+	unsigned int type_len_ts;
+
+	type_len_ts = read_4(kbuf, ptr);
+	return ts4host(kbuf, type_len_ts);
+}
+

/**
 * kbuffer_read_event - read the next event in the kbuffer subbuffer
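The heart of the kbuffer-parse.c change is that a TIME_STAMP record now carries an absolute timestamp, while a TIME_EXTEND record still carries a delta, and both are handled by the same decode path. An illustrative Python sketch of that update rule (the constants and function name are hypothetical stand-ins, not the traceevent API):

```python
# Hypothetical stand-ins for the ring-buffer record types; the real
# values are defined by the kernel's ring-buffer binary format.
KBUFFER_TYPE_TIME_EXTEND = 30
KBUFFER_TYPE_TIME_STAMP = 31

def advance_timestamp(current_ts, type_len, delta):
    # TIME_STAMP records hold an absolute time, so they replace the
    # running timestamp; every other record type (TIME_EXTEND included)
    # holds a delta that is accumulated onto it.
    if type_len == KBUFFER_TYPE_TIME_STAMP:
        return delta
    return current_ts + delta
```

This mirrors why the old code, which unconditionally did `kbuf->timestamp += delta`, produced wrong times whenever an absolute TIME_STAMP record appeared in the stream.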
+2
tools/lib/traceevent/kbuffer.h
···
void *kbuffer_read_event(struct kbuffer *kbuf, unsigned long long *ts);
void *kbuffer_next_event(struct kbuffer *kbuf, unsigned long long *ts);
unsigned long long kbuffer_timestamp(struct kbuffer *kbuf);
+unsigned long long kbuffer_subbuf_timestamp(struct kbuffer *kbuf, void *subbuf);
+unsigned int kbuffer_ptr_delta(struct kbuffer *kbuf, void *ptr);

void *kbuffer_translate_data(int swap, void *data, unsigned int *size);
···
	 * event synthesis.
	 */
	if (opts->initial_delay || target__has_cpu(&opts->target)) {
-		if (perf_evlist__add_dummy(evlist))
-			return -ENOMEM;
+		pos = perf_evlist__get_tracking_event(evlist);
+		if (!evsel__is_dummy_event(pos)) {
+			/* Set up dummy event. */
+			if (perf_evlist__add_dummy(evlist))
+				return -ENOMEM;
+			pos = evlist__last(evlist);
+			perf_evlist__set_tracking_event(evlist, pos);
+		}

-		/* Disable tracking of mmaps on lead event. */
-		pos = evlist__first(evlist);
-		pos->tracking = 0;
-		/* Set up dummy event. */
-		pos = evlist__last(evlist);
-		pos->tracking = 1;
		/*
		 * Enable the dummy event when the process is forked for
		 * initial_delay, immediately for system wide.
		 */
-		if (opts->initial_delay)
+		if (opts->initial_delay && !pos->immediate)
			pos->core.attr.enable_on_exec = 1;
		else
			pos->immediate = 1;
+1-1
tools/perf/builtin-script.c
···
		return -EINVAL;

	if (PRINT_FIELD(IREGS) &&
-	    evsel__check_stype(evsel, PERF_SAMPLE_REGS_INTR, "IREGS", PERF_OUTPUT_IREGS))
+	    evsel__do_check_stype(evsel, PERF_SAMPLE_REGS_INTR, "IREGS", PERF_OUTPUT_IREGS, allow_user_set))
		return -EINVAL;

	if (PRINT_FIELD(UREGS) &&
···
  {
    "Unit": "CPU-M-CF",
    "EventCode": "265",
-    "EventName": "DFLT_CCERROR",
+    "EventName": "DFLT_CCFINISH",
    "BriefDescription": "Increments by one for every DEFLATE CONVERSION CALL instruction executed that ended in Condition Codes 0, 1 or 2",
    "PublicDescription": "Increments by one for every DEFLATE CONVERSION CALL instruction executed that ended in Condition Codes 0, 1 or 2"
  },
···
			" FROM calls"
			" INNER JOIN call_paths ON calls.call_path_id = call_paths.id"
			" INNER JOIN symbols ON call_paths.symbol_id = symbols.id"
-			" WHERE symbols.name" + match +
+			" WHERE calls.id <> 0"
+			" AND symbols.name" + match +
			" GROUP BY comm_id, thread_id, call_path_id"
			" ORDER BY comm_id, thread_id, call_path_id")
···
			" FROM calls"
			" INNER JOIN call_paths ON calls.call_path_id = call_paths.id"
			" INNER JOIN symbols ON call_paths.symbol_id = symbols.id"
-			" WHERE symbols.name" + match +
+			" WHERE calls.id <> 0"
+			" AND symbols.name" + match +
			" ORDER BY comm_id, thread_id, call_time, calls.id")

	def FindPath(self, query):
···
				child = self.model.index(row, 0, parent)
				if child.internalPointer().dbid == dbid:
					found = True
+					self.view.setExpanded(parent, True)
					self.view.setCurrentIndex(child)
					parent = child
					break
···
				child = self.model.index(row, 0, parent)
				if child.internalPointer().dbid == dbid:
					found = True
+					self.view.setExpanded(parent, True)
					self.view.setCurrentIndex(child)
					parent = child
					break
···
				return
			last_child = None
			for row in xrange(n):
+				self.view.setExpanded(parent, True)
				child = self.model.index(row, 0, parent)
				child_call_time = child.internalPointer().call_time
				if child_call_time < time:
···
			if not last_child:
				if not found:
					child = self.model.index(0, 0, parent)
+					self.view.setExpanded(parent, True)
					self.view.setCurrentIndex(child)
				return
			found = True
+			self.view.setExpanded(parent, True)
			self.view.setCurrentIndex(last_child)
			parent = last_child
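The added `calls.id <> 0` predicate keeps the synthetic root call record out of the aggregated views. A minimal sqlite3 sketch of why the filter matters, using a simplified, hypothetical stand-in for the exported schema (the real database has many more columns and joins):

```python
import sqlite3

# Simplified stand-in for the calls table: row id 0 plays the role of
# the synthetic root entry that should not contribute to aggregates.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE calls (id INTEGER PRIMARY KEY, branch_count INTEGER)")
con.executemany("INSERT INTO calls VALUES (?, ?)",
                [(0, 999), (1, 10), (2, 20)])

# With "WHERE calls.id <> 0" only real calls are summed; dropping the
# predicate would let the root row inflate the total.
total = con.execute(
    "SELECT SUM(branch_count) FROM calls WHERE calls.id <> 0").fetchone()[0]
```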
+5-3
tools/perf/scripts/python/flamegraph.py
···
from __future__ import print_function
import sys
import os
+import io
import argparse
import json
···
		if self.args.format == "html":
			try:
-				with open(self.args.template) as f:
+				with io.open(self.args.template, encoding="utf-8") as f:
					output_str = f.read().replace("/** @flamegraph_json **/",
								      json_str)
			except IOError as e:
···
		output_fn = self.args.output or "stacks.json"

		if output_fn == "-":
-			sys.stdout.write(output_str)
+			with io.open(sys.stdout.fileno(), "w", encoding="utf-8", closefd=False) as out:
+				out.write(output_str)
		else:
			print("dumping data to {}".format(output_fn))
			try:
-				with open(output_fn, "w") as out:
+				with io.open(output_fn, "w", encoding="utf-8") as out:
					out.write(output_str)
			except IOError as e:
				print("Error writing output file: {}".format(e), file=sys.stderr)
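The switch from the builtin `open()` to `io.open()` with an explicit encoding is what keeps the script bilingual: on Python 2 the builtin returns a byte stream, while `io.open()` returns a text stream accepting unicode on both major versions. A self-contained sketch of the pattern (the helper names and the `tempfile`-based path are illustrative, not part of flamegraph.py):

```python
import io

def write_text(path, text):
    # io.open() with encoding="utf-8" accepts unicode text on both
    # Python 2 and Python 3; the bare py2 open() would need pre-encoded
    # bytes and can raise UnicodeEncodeError for non-ASCII data.
    with io.open(path, "w", encoding="utf-8") as out:
        out.write(text)

def read_text(path):
    with io.open(path, encoding="utf-8") as f:
        return f.read()
```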
···
			   request.make_options)
	build_end = time.time()
	if not success:
-		return KunitResult(KunitStatus.BUILD_FAILURE, 'could not build kernel')
+		return KunitResult(KunitStatus.BUILD_FAILURE,
+				   'could not build kernel',
+				   build_end - build_start)
	if not success:
		return KunitResult(KunitStatus.BUILD_FAILURE,
				   'could not build kernel',
+1-1
tools/testing/kunit/kunit_config.py
···
import re

CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_(\w+) is not set$'
-CONFIG_PATTERN = r'^CONFIG_(\w+)=(\S+)$'
+CONFIG_PATTERN = r'^CONFIG_(\w+)=(\S+|".*")$'

KconfigEntryBase = collections.namedtuple('KconfigEntry', ['name', 'value'])
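The widened pattern adds a second alternative for double-quoted values, which may contain spaces that `\S+` alone would reject. A quick sketch of the difference (`parse_config_line` is an illustrative helper, not the kunit_config API):

```python
import re

# The two alternatives: \S+ for unquoted values, ".*" for quoted
# strings such as CONFIG_LOCALVERSION="-kunit build", whose embedded
# space the old \S+-only pattern could not match.
CONFIG_PATTERN = r'^CONFIG_(\w+)=(\S+|".*")$'

def parse_config_line(line):
    m = re.match(CONFIG_PATTERN, line)
    return m.groups() if m else None
```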
+4-4
tools/testing/kunit/kunit_parser.py
···
	return bubble_up_errors(lambda x: x.status, test_suite_list)

def parse_test_result(lines: List[str]) -> TestResult:
-	if not lines:
-		return TestResult(TestStatus.NO_TESTS, [], lines)
	consume_non_diagnositic(lines)
-	if not parse_tap_header(lines):
-		return None
+	if not lines or not parse_tap_header(lines):
+		return TestResult(TestStatus.NO_TESTS, [], lines)
	test_suites = []
	test_suite = parse_test_suite(lines)
	while test_suite:
···
	failed_tests = 0
	crashed_tests = 0
	test_result = parse_test_result(list(isolate_kunit_output(kernel_output)))
+	if test_result.status == TestStatus.NO_TESTS:
+		print_with_timestamp(red('[ERROR] ') + 'no kunit output detected')
	for test_suite in test_result.suites:
		if test_suite.status == TestStatus.SUCCESS:
			print_suite_divider(green('[PASSED] ') + test_suite.name)
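The kunit_parser.py change folds both failure modes, an empty stream and a stream with no TAP header, into the single NO_TESTS result instead of returning None, which the caller would then crash on while iterating `result.suites`. A reduced Python sketch of the fixed guard (function names here are illustrative, not the kunit_parser API):

```python
def classify_output(lines):
    # Minimal stand-in for a TAP header check; the real parser does
    # more (version matching, diagnostic consumption).
    def has_tap_header(ls):
        return bool(ls) and ls[0].startswith('TAP version')

    # Both "no output at all" and "output without a TAP header" now
    # collapse into one well-formed NO_TESTS outcome, rather than one
    # path returning None.
    if not lines or not has_tap_header(lines):
        return 'NO_TESTS'
    return 'PARSE'
```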
···
/* valid program on DEVMAP entry via SEC name;
 * has access to egress and ingress ifindex
 */
-SEC("xdp_devmap")
+SEC("xdp_devmap/map_prog")
int xdp_dummy_dm(struct xdp_md *ctx)
{
	char fmt[] = "devmap redirect: dev %u -> dev %u len %u\n";
···
	switch (cc) {

-	case ERR_NX_TRANSLATION:
+	case ERR_NX_AT_FAULT:

		/* We touched the pages ahead of time. In the most common case
		 * we shouldn't be here. But may be some pages were paged out.
		 * Kernel should have placed the faulting address to fsaddr.
		 */
-		NXPRT(fprintf(stderr, "ERR_NX_TRANSLATION %p\n",
+		NXPRT(fprintf(stderr, "ERR_NX_AT_FAULT %p\n",
			      (void *)cmdp->crb.csb.fsaddr));

		if (pgfault_retries == NX_MAX_FAULTS) {
···
#include <signal.h>
#include <err.h>
#include <sys/syscall.h>
-#include <asm/processor-flags.h>

-#ifdef __x86_64__
-# define WIDTH "q"
-#else
-# define WIDTH "l"
-#endif
+#include "helpers.h"

static unsigned int nerrs;
-
-static unsigned long get_eflags(void)
-{
-	unsigned long eflags;
-	asm volatile ("pushf" WIDTH "\n\tpop" WIDTH " %0" : "=rm" (eflags));
-	return eflags;
-}
-
-static void set_eflags(unsigned long eflags)
-{
-	asm volatile ("push" WIDTH " %0\n\tpopf" WIDTH
-		      : : "rm" (eflags) : "flags");
-}

static void sethandler(int sig, void (*handler)(int, siginfo_t *, void *),
		       int flags)
···
	set_eflags(get_eflags() | extraflags);
	syscall(SYS_getpid);
	flags = get_eflags();
+	set_eflags(X86_EFLAGS_IF | X86_EFLAGS_FIXED);
	if ((flags & extraflags) == extraflags) {
		printf("[OK]\tThe syscall worked and flags are still set\n");
	} else {
···
	printf("[RUN]\tSet NT and issue a syscall\n");
	do_it(X86_EFLAGS_NT);

+	printf("[RUN]\tSet AC and issue a syscall\n");
+	do_it(X86_EFLAGS_AC);
+
+	printf("[RUN]\tSet NT|AC and issue a syscall\n");
+	do_it(X86_EFLAGS_NT | X86_EFLAGS_AC);
+
	/*
	 * Now try it again with TF set -- TF forces returns via IRET in all
	 * cases except non-ptregs-using 64-bit full fast path syscalls.
···

	sethandler(SIGTRAP, sigtrap, 0);

+	printf("[RUN]\tSet TF and issue a syscall\n");
+	do_it(X86_EFLAGS_TF);
+
	printf("[RUN]\tSet NT|TF and issue a syscall\n");
	do_it(X86_EFLAGS_NT | X86_EFLAGS_TF);
+
+	printf("[RUN]\tSet AC|TF and issue a syscall\n");
+	do_it(X86_EFLAGS_AC | X86_EFLAGS_TF);
+
+	printf("[RUN]\tSet NT|AC|TF and issue a syscall\n");
+	do_it(X86_EFLAGS_NT | X86_EFLAGS_AC | X86_EFLAGS_TF);
+
+	/*
+	 * Now try DF. This is evil and it's plausible that we will crash
+	 * glibc, but glibc would have to do something rather surprising
+	 * for this to happen.
+	 */
+	printf("[RUN]\tSet DF and issue a syscall\n");
+	do_it(X86_EFLAGS_DF);
+
+	printf("[RUN]\tSet TF|DF and issue a syscall\n");
+	do_it(X86_EFLAGS_TF | X86_EFLAGS_DF);

	return nerrs == 0 ? 0 : 1;
}