Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Trivial conflict in CAN, keep the net-next + the byteswap wrapper.

Conflicts:
drivers/net/can/usb/gs_usb.c

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+6423 -3000
+1
.mailmap
···
 Sarangdhar Joshi <spjoshi@codeaurora.org>
 Sascha Hauer <s.hauer@pengutronix.de>
 S.Çağlar Onur <caglar@pardus.org.tr>
+Sean Christopherson <seanjc@google.com> <sean.j.christopherson@intel.com>
 Sean Nyekjaer <sean@geanix.com> <sean.nyekjaer@prevas.dk>
 Sebastian Reichel <sre@kernel.org> <sebastian.reichel@collabora.co.uk>
 Sebastian Reichel <sre@kernel.org> <sre@debian.org>
-24
Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
···
 		When counting down the counter start from preset value
 		and fire event when reach 0.
 
-What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
-KernelVersion:	4.12
-Contact:	benjamin.gaignard@st.com
-Description:
-		Reading returns the list possible quadrature modes.
-
-What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
-KernelVersion:	4.12
-Contact:	benjamin.gaignard@st.com
-Description:
-		Configure the device counter quadrature modes:
-
-		channel_A:
-			Encoder A input servers as the count input and B as
-			the UP/DOWN direction control input.
-
-		channel_B:
-			Encoder B input serves as the count input and A as
-			the UP/DOWN direction control input.
-
-		quadrature:
-			Encoder A and B inputs are mixed to get direction
-			and count with a scale of 0.25.
-
 What:		/sys/bus/iio/devices/iio:deviceX/in_count_enable_mode_available
 KernelVersion:	4.12
 Contact:	benjamin.gaignard@st.com
+6
Documentation/devicetree/bindings/display/brcm,bcm2711-hdmi.yaml
···
   resets:
     maxItems: 1
 
+  wifi-2.4ghz-coexistence:
+    type: boolean
+    description: >
+      Should the pixel frequencies in the WiFi frequencies range be
+      avoided?
+
 required:
   - compatible
   - reg
+6
Documentation/devicetree/bindings/sound/rt1015.txt
···
 
 - reg : The I2C address of the device.
 
+Optional properties:
+
+- realtek,power-up-delay-ms
+  Set a delay time for flush work to be completed,
+  this value is adjustable depending on platform.
 
 Example:
 
 rt1015: codec@28 {
 	compatible = "realtek,rt1015";
 	reg = <0x28>;
+	realtek,power-up-delay-ms = <50>;
 };
+104 -16
Documentation/driver-api/media/drivers/vidtv.rst
···
 Because the generator is implemented in a separate file, it can be
 reused elsewhere in the media subsystem.
 
-Currently vidtv supports working with 3 PSI tables: PAT, PMT and
-SDT.
+Currently vidtv supports working with 5 PSI tables: PAT, PMT,
+SDT, NIT and EIT.
 
 The specification for PAT and PMT can be found in *ISO 13818-1:
-Systems*, while the specification for the SDT can be found in *ETSI
+Systems*, while the specification for the SDT, NIT, EIT can be found in *ETSI
 EN 300 468: Specification for Service Information (SI) in DVB
 systems*.
···
 #. Their services will be concatenated to populate the SDT.
 
 #. Their programs will be concatenated to populate the PAT
+
+#. Their events will be concatenated to populate the EIT
 
 #. For each program in the PAT, a PMT section will be created
···
 The first step to check whether the demod loaded successfully is to run::
 
 	$ dvb-fe-tool
+	Device Dummy demod for DVB-T/T2/C/S/S2 (/dev/dvb/adapter0/frontend0) capabilities:
+	     CAN_FEC_1_2
+	     CAN_FEC_2_3
+	     CAN_FEC_3_4
+	     CAN_FEC_4_5
+	     CAN_FEC_5_6
+	     CAN_FEC_6_7
+	     CAN_FEC_7_8
+	     CAN_FEC_8_9
+	     CAN_FEC_AUTO
+	     CAN_GUARD_INTERVAL_AUTO
+	     CAN_HIERARCHY_AUTO
+	     CAN_INVERSION_AUTO
+	     CAN_QAM_16
+	     CAN_QAM_32
+	     CAN_QAM_64
+	     CAN_QAM_128
+	     CAN_QAM_256
+	     CAN_QAM_AUTO
+	     CAN_QPSK
+	     CAN_TRANSMISSION_MODE_AUTO
+	DVB API Version 5.11, Current v5 delivery system: DVBC/ANNEX_A
+	Supported delivery systems:
+	    DVBT
+	    DVBT2
+	    [DVBC/ANNEX_A]
+	    DVBS
+	    DVBS2
+	Frequency range for the current standard:
+	From:            51.0 MHz
+	To:              2.15 GHz
+	Step:            62.5 kHz
+	Tolerance:       29.5 MHz
+	Symbol rate ranges for the current standard:
+	From:            1.00 MBauds
+	To:              45.0 MBauds
 
 This should return what is currently set up at the demod struct, i.e.::
···
 here's an example::
 
 	[Channel]
-	FREQUENCY = 330000000
+	FREQUENCY = 474000000
 	MODULATION = QAM/AUTO
 	SYMBOL_RATE = 6940000
 	INNER_FEC = AUTO
···
 Assuming this channel is named 'channel.conf', you can then run::
 
 	$ dvbv5-scan channel.conf
+	dvbv5-scan ~/vidtv.conf
+	ERROR    command BANDWIDTH_HZ (5) not found during retrieve
+	Cannot calc frequency shift. Either bandwidth/symbol-rate is unavailable (yet).
+	Scanning frequency #1 330000000
+	    (0x00) Signal= -68.00dBm
+	Scanning frequency #2 474000000
+	Lock   (0x1f) Signal= -34.45dBm C/N= 33.74dB UCB= 0
+	Service Beethoven, provider LinuxTV.org: digital television
 
 For more information on dvb-scan, check its documentation online here:
 `dvb-scan Documentation <https://www.linuxtv.org/wiki/index.php/Dvbscan>`_.
···
 dvbv5-zap is a command line tool that can be used to record MPEG-TS to disk. The
 typical use is to tune into a channel and put it into record mode. The example
-below - which is taken from the documentation - illustrates that::
+below - which is taken from the documentation - illustrates that\ [1]_::
 
-	$ dvbv5-zap -c dvb_channel.conf "trilhas sonoras" -r
-	using demux '/dev/dvb/adapter0/demux0'
+	$ dvbv5-zap -c dvb_channel.conf "beethoven" -o music.ts -P -t 10
+	using demux 'dvb0.demux0'
 	reading channels from file 'dvb_channel.conf'
-	service has pid type 05:  204
-	tuning to 573000000 Hz
-	audio pid 104
-	dvb_set_pesfilter 104
-	Lock   (0x1f) Quality= Good Signal= 100.00% C/N= -13.80dB UCB= 70 postBER= 3.14x10^-3 PER= 0
-	DVR interface '/dev/dvb/adapter0/dvr0' can now be opened
+	tuning to 474000000 Hz
+	pass all PID's to TS
+	dvb_set_pesfilter 8192
+	dvb_dev_set_bufsize: buffer set to 6160384
+	Lock   (0x1f) Quality= Good Signal= -34.66dBm C/N= 33.41dB UCB= 0 postBER= 0 preBER= 1.05x10^-3 PER= 0
+	Lock   (0x1f) Quality= Good Signal= -34.57dBm C/N= 33.46dB UCB= 0 postBER= 0 preBER= 1.05x10^-3 PER= 0
+	Record to file 'music.ts' started
+	received 24587768 bytes (2401 Kbytes/sec)
+	Lock   (0x1f) Quality= Good Signal= -34.42dBm C/N= 33.89dB UCB= 0 postBER= 0 preBER= 2.44x10^-3 PER= 0
 
-The channel can be watched by playing the contents of the DVR interface, with
-some player that recognizes the MPEG-TS format, such as *mplayer* or *vlc*.
+.. [1] In this example, it records 10 seconds with all program ID's stored
+       at the music.ts file.
+
+
+The channel can be watched by playing the contents of the stream with some
+player that recognizes the MPEG-TS format, such as ``mplayer`` or ``vlc``.
 
 By playing the contents of the stream one can visually inspect the workings of
-vidtv, e.g.::
+vidtv, e.g., to play a recorded TS file with::
+
+	$ mplayer music.ts
+
+or, alternatively, running this command on one terminal::
+
+	$ dvbv5-zap -c dvb_channel.conf "beethoven" -P -r &
+
+And, on a second terminal, playing the contents from DVR interface with::
 
 	$ mplayer /dev/dvb/adapter0/dvr0
···
 - Updating the error statistics accordingly (e.g. BER, etc).
 
 - Simulating some noise in the encoded data.
+
+Functions and structs used within vidtv
+---------------------------------------
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_bridge.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_channel.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_demod.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_encoder.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_mux.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_pes.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_psi.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_s302m.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_ts.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_tuner.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_common.c
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_tuner.c
+26
Documentation/networking/netdev-FAQ.rst
···
 minimum, your changes should survive an ``allyesconfig`` and an
 ``allmodconfig`` build without new warnings or failures.
 
+Q: How do I post corresponding changes to user space components?
+----------------------------------------------------------------
+A: User space code exercising kernel features should be posted
+alongside kernel patches. This gives reviewers a chance to see
+how any new interface is used and how well it works.
+
+When user space tools reside in the kernel repo itself all changes
+should generally come as one series. If series becomes too large
+or the user space project is not reviewed on netdev include a link
+to a public repo where user space patches can be seen.
+
+In case user space tooling lives in a separate repository but is
+reviewed on netdev (e.g. patches to `iproute2` tools) kernel and
+user space patches should form separate series (threads) when posted
+to the mailing list, e.g.::
+
+  [PATCH net-next 0/3] net: some feature cover letter
+    └─ [PATCH net-next 1/3] net: some feature prep
+    └─ [PATCH net-next 2/3] net: some feature do it
+    └─ [PATCH net-next 3/3] selftest: net: some feature
+
+  [PATCH iproute2-next] ip: add support for some feature
+
+Posting as one thread is discouraged because it confuses patchwork
+(as of patchwork 2.2.2).
+
 Q: Any other tips to help ensure my net/net-next patch gets OK'd?
 -----------------------------------------------------------------
 A: Attention to detail. Re-read your own work as if you were the
+11 -8
MAINTAINERS
···
 ARM/LPC32XX SOC SUPPORT
 M:	Vladimir Zapolskiy <vz@mleia.com>
-M:	Sylvain Lemieux <slemieux.tyco@gmail.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 T:	git git://github.com/vzapolskiy/linux-lpc32xx.git
···
 M:	Arend van Spriel <arend.vanspriel@broadcom.com>
 M:	Franky Lin <franky.lin@broadcom.com>
 M:	Hante Meuleman <hante.meuleman@broadcom.com>
-M:	Chi-Hsien Lin <chi-hsien.lin@cypress.com>
-M:	Wright Feng <wright.feng@cypress.com>
+M:	Chi-hsien Lin <chi-hsien.lin@infineon.com>
+M:	Wright Feng <wright.feng@infineon.com>
+M:	Chung-hsien Hsu <chung-hsien.hsu@infineon.com>
 L:	linux-wireless@vger.kernel.org
 L:	brcm80211-dev-list.pdl@broadcom.com
-L:	brcm80211-dev-list@cypress.com
+L:	SHA-cyfmac-dev-list@infineon.com
 S:	Supported
 F:	drivers/net/wireless/broadcom/brcm80211/
···
 IOMMU DRIVERS
 M:	Joerg Roedel <joro@8bytes.org>
+M:	Will Deacon <will@kernel.org>
 L:	iommu@lists.linux-foundation.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
···
 F:	arch/s390/include/asm/gmap.h
 F:	arch/s390/include/asm/kvm*
 F:	arch/s390/include/uapi/asm/kvm*
+F:	arch/s390/kernel/uv.c
 F:	arch/s390/kvm/
 F:	arch/s390/mm/gmap.c
 F:	tools/testing/selftests/kvm/*/s390x/
···
 M:	Ilias Apalodimas <ilias.apalodimas@linaro.org>
 L:	netdev@vger.kernel.org
 S:	Supported
+F:	Documentation/networking/page_pool.rst
 F:	include/net/page_pool.h
+F:	include/trace/events/page_pool.h
 F:	net/core/page_pool.c
 
 PANASONIC LAPTOP ACPI EXTRAS DRIVER
···
 F:	drivers/net/wireless/realtek/rtlwifi/
 
 REALTEK WIRELESS DRIVER (rtw88)
-M:	Yan-Hsuan Chuang <yhchuang@realtek.com>
+M:	Yan-Hsuan Chuang <tony0620emma@gmail.com>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
 F:	drivers/net/wireless/realtek/rtw88/
···
 F:	include/linux/slimbus.h
 
 SFC NETWORK DRIVER
-M:	Solarflare linux maintainers <linux-net-drivers@solarflare.com>
-M:	Edward Cree <ecree@solarflare.com>
-M:	Martin Habets <mhabets@solarflare.com>
+M:	Edward Cree <ecree.xilinx@gmail.com>
+M:	Martin Habets <habetsm.xilinx@gmail.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/sfc/
+1 -1
Makefile
···
 VERSION = 5
 PATCHLEVEL = 10
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = Kleptomaniac Octopus
 
 # *DOCUMENTATION*
+1 -3
arch/arc/include/asm/bitops.h
···
 		x <<= 2;
 		r -= 2;
 	}
-	if (!(x & 0x80000000u)) {
-		x <<= 1;
+	if (!(x & 0x80000000u))
 		r -= 1;
-	}
 	return r;
 }
 
+2
arch/arc/include/asm/pgtable.h
···
 
 #ifdef CONFIG_ARC_HAS_PAE40
 #define PTE_BITS_NON_RWX_IN_PD1	(0xff00000000 | PAGE_MASK | _PAGE_CACHEABLE)
+#define MAX_POSSIBLE_PHYSMEM_BITS	40
 #else
 #define PTE_BITS_NON_RWX_IN_PD1	(PAGE_MASK | _PAGE_CACHEABLE)
+#define MAX_POSSIBLE_PHYSMEM_BITS	32
 #endif
 
 /**************************************************************************
+31 -25
arch/arc/kernel/stacktrace.c
···
 
 #ifdef CONFIG_ARC_DW2_UNWIND
 
-static void seed_unwind_frame_info(struct task_struct *tsk,
-				   struct pt_regs *regs,
-				   struct unwind_frame_info *frame_info)
+static int
+seed_unwind_frame_info(struct task_struct *tsk, struct pt_regs *regs,
+		       struct unwind_frame_info *frame_info)
 {
-	/*
-	 * synchronous unwinding (e.g. dump_stack)
-	 * - uses current values of SP and friends
-	 */
-	if (tsk == NULL && regs == NULL) {
+	if (regs) {
+		/*
+		 * Asynchronous unwinding of intr/exception
+		 *  - Just uses the pt_regs passed
+		 */
+		frame_info->task = tsk;
+
+		frame_info->regs.r27 = regs->fp;
+		frame_info->regs.r28 = regs->sp;
+		frame_info->regs.r31 = regs->blink;
+		frame_info->regs.r63 = regs->ret;
+		frame_info->call_frame = 0;
+	} else if (tsk == NULL || tsk == current) {
+		/*
+		 * synchronous unwinding (e.g. dump_stack)
+		 *  - uses current values of SP and friends
+		 */
 		unsigned long fp, sp, blink, ret;
 		frame_info->task = current;
 
···
 		frame_info->regs.r31 = blink;
 		frame_info->regs.r63 = ret;
 		frame_info->call_frame = 0;
-	} else if (regs == NULL) {
+	} else {
 		/*
-		 * Asynchronous unwinding of sleeping task
-		 *  - Gets SP etc from task's pt_regs (saved bottom of kernel
-		 *    mode stack of task)
+		 * Asynchronous unwinding of a likely sleeping task
+		 *  - first ensure it is actually sleeping
+		 *  - if so, it will be in __switch_to, kernel mode SP of task
+		 *    is safe-kept and BLINK at a well known location in there
 		 */
+
+		if (tsk->state == TASK_RUNNING)
+			return -1;
 
 		frame_info->task = tsk;
 
···
 		frame_info->regs.r28 += 60;
 		frame_info->call_frame = 0;
 
-	} else {
-		/*
-		 * Asynchronous unwinding of intr/exception
-		 *  - Just uses the pt_regs passed
-		 */
-		frame_info->task = tsk;
-
-		frame_info->regs.r27 = regs->fp;
-		frame_info->regs.r28 = regs->sp;
-		frame_info->regs.r31 = regs->blink;
-		frame_info->regs.r63 = regs->ret;
-		frame_info->call_frame = 0;
 	}
+	return 0;
 }
 
 #endif
···
 	unsigned int address;
 	struct unwind_frame_info frame_info;
 
-	seed_unwind_frame_info(tsk, regs, &frame_info);
+	if (seed_unwind_frame_info(tsk, regs, &frame_info))
+		return 0;
 
 	while (1) {
 		address = UNW_PC(&frame_info);
+12 -12
arch/arc/mm/tlb.c
···
  * -Changes related to MMU v2 (Rel 4.8)
  *
  * Vineetg: Aug 29th 2008
- *  -In TLB Flush operations (Metal Fix MMU) there is a explict command to
+ *  -In TLB Flush operations (Metal Fix MMU) there is a explicit command to
  *   flush Micro-TLBS. If TLB Index Reg is invalid prior to TLBIVUTLB cmd,
  *   it fails. Thus need to load it with ANY valid value before invoking
  *   TLBIVUTLB cmd
  *
  * Vineetg: Aug 21th 2008:
  *  -Reduced the duration of IRQ lockouts in TLB Flush routines
- *  -Multiple copies of TLB erase code seperated into a "single" function
+ *  -Multiple copies of TLB erase code separated into a "single" function
  *  -In TLB Flush routines, interrupt disabling moved UP to retrieve ASID
  *   in interrupt-safe region.
  *
···
  *
  * Although J-TLB is 2 way set assoc, ARC700 caches J-TLB into uTLBS which has
  * much higher associativity. u-D-TLB is 8 ways, u-I-TLB is 4 ways.
- * Given this, the thrasing problem should never happen because once the 3
+ * Given this, the thrashing problem should never happen because once the 3
  * J-TLB entries are created (even though 3rd will knock out one of the prev
  * two), the u-D-TLB and u-I-TLB will have what is required to accomplish memcpy
  *
···
 	 * There was however an obscure hardware bug, where uTLB flush would
 	 * fail when a prior probe for J-TLB (both totally unrelated) would
 	 * return lkup err - because the entry didn't exist in MMU.
-	 * The Workround was to set Index reg with some valid value, prior to
+	 * The Workaround was to set Index reg with some valid value, prior to
 	 * flush. This was fixed in MMU v3
 	 */
 	unsigned int idx;
···
 }
 
 /*
- * Flush the entrie MM for userland. The fastest way is to move to Next ASID
+ * Flush the entire MM for userland. The fastest way is to move to Next ASID
  */
 noinline void local_flush_tlb_mm(struct mm_struct *mm)
 {
···
  * Difference between this and Kernel Range Flush is
  *  -Here the fastest way (if range is too large) is to move to next ASID
  *   without doing any explicit Shootdown
- *  -In case of kernel Flush, entry has to be shot down explictly
+ *  -In case of kernel Flush, entry has to be shot down explicitly
  */
 void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			   unsigned long end)
···
  * Super Page size is configurable in hardware (4K to 16M), but fixed once
  * RTL builds.
  *
- * The exact THP size a Linx configuration will support is a function of:
+ * The exact THP size a Linux configuration will support is a function of:
  *  - MMU page size (typical 8K, RTL fixed)
  *  - software page walker address split between PGD:PTE:PFN (typical
  *    11:8:13, but can be changed with 1 line)
···
 
 #endif
 
-/* Read the Cache Build Confuration Registers, Decode them and save into
+/* Read the Cache Build Configuration Registers, Decode them and save into
  * the cpuinfo structure for later use.
  * No Validation is done here, simply read/convert the BCRs
  */
···
 	pr_info("%s", arc_mmu_mumbojumbo(0, str, sizeof(str)));
 
 	/*
-	 * Can't be done in processor.h due to header include depenedencies
+	 * Can't be done in processor.h due to header include dependencies
 	 */
 	BUILD_BUG_ON(!IS_ALIGNED((CONFIG_ARC_KVADDR_SIZE << 20), PMD_SIZE));
 
 	/*
 	 * stack top size sanity check,
-	 * Can't be done in processor.h due to header include depenedencies
+	 * Can't be done in processor.h due to header include dependencies
 	 */
 	BUILD_BUG_ON(!IS_ALIGNED(STACK_TOP, PMD_SIZE));
 
···
  *   the duplicate one.
  * -Knob to be verbose abt it.(TODO: hook them up to debugfs)
  */
-volatile int dup_pd_silent; /* Be slient abt it or complain (default) */
+volatile int dup_pd_silent; /* Be silent abt it or complain (default) */
 
 void do_tlb_overlap_fault(unsigned long cause, unsigned long address,
 			  struct pt_regs *regs)
···
 
 /***********************************************************************
  * Diagnostic Routines
  *  -Called from Low Level TLB Hanlders if things don;t look good
  **********************************************************************/
 
 #ifdef CONFIG_ARC_DBG_TLB_PARANOIA
+3
arch/arm/boot/compressed/head.S
···
 		@ issued from HYP mode take us to the correct handler code. We
 		@ will disable the MMU before jumping to the kernel proper.
 		@
+ ARM(		bic	r1, r1, #(1 << 30)	)	@ clear HSCTLR.TE
+ THUMB(		orr	r1, r1, #(1 << 30)	)	@ set HSCTLR.TE
+		mcr	p15, 4, r1, c1, c0, 0
 		adr	r0, __hyp_reentry_vectors
 		mcr	p15, 4, r0, c12, c0, 0	@ set HYP vector base (HVBAR)
 		isb
+1 -1
arch/arm/boot/dts/am437x-l4.dtsi
···
 			ranges = <0x0 0x100000 0x8000>;
 
 			mac_sw: switch@0 {
-				compatible = "ti,am4372-cpsw","ti,cpsw-switch";
+				compatible = "ti,am4372-cpsw-switch", "ti,cpsw-switch";
 				reg = <0x0 0x4000>;
 				ranges = <0 0 0x4000>;
 				clocks = <&cpsw_125mhz_gclk>, <&dpll_clksel_mac_clk>;
+2 -2
arch/arm/boot/dts/dra76x.dtsi
···
 			interrupts = <GIC_SPI 67 IRQ_TYPE_LEVEL_HIGH>,
 				     <GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>;
 			interrupt-names = "int0", "int1";
-			clocks = <&mcan_clk>, <&l3_iclk_div>;
-			clock-names = "cclk", "hclk";
+			clocks = <&l3_iclk_div>, <&mcan_clk>;
+			clock-names = "hclk", "cclk";
 			bosch,mram-cfg = <0x0 0 0 32 0 0 1 1>;
 		};
 	};
+2
arch/arm/include/asm/pgtable-2level.h
···
 #define PTE_HWTABLE_OFF		(PTE_HWTABLE_PTRS * sizeof(pte_t))
 #define PTE_HWTABLE_SIZE	(PTRS_PER_PTE * sizeof(u32))
 
+#define MAX_POSSIBLE_PHYSMEM_BITS	32
+
 /*
  * PMD_SHIFT determines the size of the area a second-level page table can map
  * PGDIR_SHIFT determines what a third-level page table entry can map
+2
arch/arm/include/asm/pgtable-3level.h
···
 #define PTE_HWTABLE_OFF		(0)
 #define PTE_HWTABLE_SIZE	(PTRS_PER_PTE * sizeof(u64))
 
+#define MAX_POSSIBLE_PHYSMEM_BITS	40
+
 /*
  * PGDIR_SHIFT determines the size a top-level page table entry can map.
  */
+2 -1
arch/arm/mach-omap2/Kconfig
···
 	depends on ARCH_MULTI_V6
 	select ARCH_OMAP2PLUS
 	select CPU_V6
-	select PM_GENERIC_DOMAINS if PM
 	select SOC_HAS_OMAP2_SDRC
 
 config ARCH_OMAP3
···
 	select OMAP_DM_TIMER
 	select OMAP_GPMC
 	select PINCTRL
+	select PM_GENERIC_DOMAINS if PM
+	select PM_GENERIC_DOMAINS_OF if PM
 	select RESET_CONTROLLER
 	select SOC_BUS
 	select TI_SYSC
+5 -3
arch/arm/mach-omap2/cpuidle44xx.c
···
 		if (mpuss_can_lose_context) {
 			error = cpu_cluster_pm_enter();
 			if (error) {
-				omap_set_pwrdm_state(mpu_pd, PWRDM_POWER_ON);
-				goto cpu_cluster_pm_out;
+				index = 0;
+				cx = state_ptr + index;
+				pwrdm_set_logic_retst(mpu_pd, cx->mpu_logic_state);
+				omap_set_pwrdm_state(mpu_pd, cx->mpu_state);
+				mpuss_can_lose_context = 0;
 			}
 		}
 	}
···
 	omap4_enter_lowpower(dev->cpu, cx->cpu_state);
 	cpu_done[dev->cpu] = true;
 
-cpu_cluster_pm_out:
 	/* Wakeup CPU1 only if it is not offlined */
 	if (dev->cpu == 0 && cpumask_test_cpu(1, cpu_online_mask)) {
+10 -10
arch/arm64/boot/dts/broadcom/stingray/stingray-usb.dtsi
···
 	usb {
 		compatible = "simple-bus";
 		dma-ranges;
-		#address-cells = <1>;
-		#size-cells = <1>;
-		ranges = <0x0 0x0 0x68500000 0x00400000>;
+		#address-cells = <2>;
+		#size-cells = <2>;
+		ranges = <0x0 0x0 0x0 0x68500000 0x0 0x00400000>;
 
 		usbphy0: usb-phy@0 {
 			compatible = "brcm,sr-usb-combo-phy";
-			reg = <0x00000000 0x100>;
+			reg = <0x0 0x00000000 0x0 0x100>;
 			#phy-cells = <1>;
 			status = "disabled";
 		};
 
 		xhci0: usb@1000 {
 			compatible = "generic-xhci";
-			reg = <0x00001000 0x1000>;
+			reg = <0x0 0x00001000 0x0 0x1000>;
 			interrupts = <GIC_SPI 256 IRQ_TYPE_LEVEL_HIGH>;
 			phys = <&usbphy0 1>, <&usbphy0 0>;
 			phy-names = "phy0", "phy1";
···
 
 		bdc0: usb@2000 {
 			compatible = "brcm,bdc-v0.16";
-			reg = <0x00002000 0x1000>;
+			reg = <0x0 0x00002000 0x0 0x1000>;
 			interrupts = <GIC_SPI 259 IRQ_TYPE_LEVEL_HIGH>;
 			phys = <&usbphy0 0>, <&usbphy0 1>;
 			phy-names = "phy0", "phy1";
···
 
 		usbphy1: usb-phy@10000 {
 			compatible = "brcm,sr-usb-combo-phy";
-			reg = <0x00010000 0x100>;
+			reg = <0x0 0x00010000 0x0 0x100>;
 			#phy-cells = <1>;
 			status = "disabled";
 		};
 
 		usbphy2: usb-phy@20000 {
 			compatible = "brcm,sr-usb-hs-phy";
-			reg = <0x00020000 0x100>;
+			reg = <0x0 0x00020000 0x0 0x100>;
 			#phy-cells = <0>;
 			status = "disabled";
 		};
 
 		xhci1: usb@11000 {
 			compatible = "generic-xhci";
-			reg = <0x00011000 0x1000>;
+			reg = <0x0 0x00011000 0x0 0x1000>;
 			interrupts = <GIC_SPI 263 IRQ_TYPE_LEVEL_HIGH>;
 			phys = <&usbphy1 1>, <&usbphy2>, <&usbphy1 0>;
 			phy-names = "phy0", "phy1", "phy2";
···
 
 		bdc1: usb@21000 {
 			compatible = "brcm,bdc-v0.16";
-			reg = <0x00021000 0x1000>;
+			reg = <0x0 0x00021000 0x0 0x1000>;
 			interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
 			phys = <&usbphy2>;
 			phy-names = "phy0";
-12
arch/arm64/boot/dts/nvidia/tegra186-p2771-0000.dts
···
 	model = "NVIDIA Jetson TX2 Developer Kit";
 	compatible = "nvidia,p2771-0000", "nvidia,tegra186";
 
-	aconnect {
-		status = "okay";
-
-		dma-controller@2930000 {
-			status = "okay";
-		};
-
-		interrupt-controller@2a40000 {
-			status = "okay";
-		};
-	};
-
 	i2c@3160000 {
 		power-monitor@42 {
 			compatible = "ti,ina3221";
+1 -1
arch/arm64/boot/dts/nvidia/tegra194-p3668-0000.dtsi
···
 		status = "okay";
 	};
 
-	serial@c280000 {
+	serial@3100000 {
 		status = "okay";
 	};
 
+1 -1
arch/arm64/boot/dts/nvidia/tegra194.dtsi
···
 
 		hsp_aon: hsp@c150000 {
 			compatible = "nvidia,tegra194-hsp", "nvidia,tegra186-hsp";
-			reg = <0x0c150000 0xa0000>;
+			reg = <0x0c150000 0x90000>;
 			interrupts = <GIC_SPI 133 IRQ_TYPE_LEVEL_HIGH>,
 				     <GIC_SPI 134 IRQ_TYPE_LEVEL_HIGH>,
 				     <GIC_SPI 135 IRQ_TYPE_LEVEL_HIGH>,
+10 -10
arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
···
 		vin-supply = <&vdd_5v0_sys>;
 	};
 
-	vdd_usb_vbus_otg: regulator@11 {
-		compatible = "regulator-fixed";
-		regulator-name = "USB_VBUS_EN0";
-		regulator-min-microvolt = <5000000>;
-		regulator-max-microvolt = <5000000>;
-		gpio = <&gpio TEGRA_GPIO(CC, 4) GPIO_ACTIVE_HIGH>;
-		enable-active-high;
-		vin-supply = <&vdd_5v0_sys>;
-	};
-
 	vdd_hdmi: regulator@10 {
 		compatible = "regulator-fixed";
 		regulator-name = "VDD_HDMI_5V0";
···
 		gpio = <&exp2 9 GPIO_ACTIVE_HIGH>;
 		enable-active-high;
 		vin-supply = <&vdd_3v3_sys>;
+	};
+
+	vdd_usb_vbus_otg: regulator@14 {
+		compatible = "regulator-fixed";
+		regulator-name = "USB_VBUS_EN0";
+		regulator-min-microvolt = <5000000>;
+		regulator-max-microvolt = <5000000>;
+		gpio = <&gpio TEGRA_GPIO(CC, 4) GPIO_ACTIVE_HIGH>;
+		enable-active-high;
+		vin-supply = <&vdd_5v0_sys>;
 	};
 };
+3 -3
arch/arm64/boot/dts/nvidia/tegra234-sim-vdk.dts
···
 	compatible = "nvidia,tegra234-vdk", "nvidia,tegra234";
 
 	aliases {
-		sdhci3 = "/cbb@0/sdhci@3460000";
+		mmc3 = "/bus@0/mmc@3460000";
 		serial0 = &uarta;
 	};
 
···
 		stdout-path = "serial0:115200n8";
 	};
 
-	cbb@0 {
+	bus@0 {
 		serial@3100000 {
 			status = "okay";
 		};
 
-		sdhci@3460000 {
+		mmc@3460000 {
 			status = "okay";
 			bus-width = <8>;
 			non-removable;
+36 -36
arch/arm64/boot/dts/qcom/ipq6018.dtsi
···
 	};
 
 	soc: soc {
-		#address-cells = <1>;
-		#size-cells = <1>;
-		ranges = <0 0 0 0xffffffff>;
+		#address-cells = <2>;
+		#size-cells = <2>;
+		ranges = <0 0 0 0 0x0 0xffffffff>;
 		dma-ranges;
 		compatible = "simple-bus";
 
 		prng: qrng@e1000 {
 			compatible = "qcom,prng-ee";
-			reg = <0xe3000 0x1000>;
+			reg = <0x0 0xe3000 0x0 0x1000>;
 			clocks = <&gcc GCC_PRNG_AHB_CLK>;
 			clock-names = "core";
 		};
 
 		cryptobam: dma@704000 {
 			compatible = "qcom,bam-v1.7.0";
-			reg = <0x00704000 0x20000>;
+			reg = <0x0 0x00704000 0x0 0x20000>;
 			interrupts = <GIC_SPI 207 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&gcc GCC_CRYPTO_AHB_CLK>;
 			clock-names = "bam_clk";
···
 
 		crypto: crypto@73a000 {
 			compatible = "qcom,crypto-v5.1";
-			reg = <0x0073a000 0x6000>;
+			reg = <0x0 0x0073a000 0x0 0x6000>;
 			clocks = <&gcc GCC_CRYPTO_AHB_CLK>,
 				 <&gcc GCC_CRYPTO_AXI_CLK>,
 				 <&gcc GCC_CRYPTO_CLK>;
···
 
 		tlmm: pinctrl@1000000 {
 			compatible = "qcom,ipq6018-pinctrl";
-			reg = <0x01000000 0x300000>;
+			reg = <0x0 0x01000000 0x0 0x300000>;
 			interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>;
 			gpio-controller;
 			#gpio-cells = <2>;
···
 
 		gcc: gcc@1800000 {
 			compatible = "qcom,gcc-ipq6018";
-			reg = <0x01800000 0x80000>;
+			reg = <0x0 0x01800000 0x0 0x80000>;
 			clocks = <&xo>, <&sleep_clk>;
 			clock-names = "xo", "sleep_clk";
 			#clock-cells = <1>;
···
 
 		tcsr_mutex_regs: syscon@1905000 {
 			compatible = "syscon";
-			reg = <0x01905000 0x8000>;
+			reg = <0x0 0x01905000 0x0 0x8000>;
 		};
 
 		tcsr_q6: syscon@1945000 {
 			compatible = "syscon";
-			reg = <0x01945000 0xe000>;
+			reg = <0x0 0x01945000 0x0 0xe000>;
 		};
 
 		blsp_dma: dma@7884000 {
 			compatible = "qcom,bam-v1.7.0";
-			reg = <0x07884000 0x2b000>;
+			reg = <0x0 0x07884000 0x0 0x2b000>;
 			interrupts = <GIC_SPI 238 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&gcc GCC_BLSP1_AHB_CLK>;
 			clock-names = "bam_clk";
···
 
 		blsp1_uart3: serial@78b1000 {
 			compatible = "qcom,msm-uartdm-v1.4", "qcom,msm-uartdm";
-			reg = <0x078b1000 0x200>;
+			reg = <0x0 0x078b1000 0x0 0x200>;
 			interrupts = <GIC_SPI 306 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&gcc GCC_BLSP1_UART3_APPS_CLK>,
 				 <&gcc GCC_BLSP1_AHB_CLK>;
···
 			compatible = "qcom,spi-qup-v2.2.1";
 			#address-cells = <1>;
 			#size-cells = <0>;
-			reg = <0x078b5000 0x600>;
+			reg = <0x0 0x078b5000 0x0 0x600>;
 			interrupts = <GIC_SPI 95 IRQ_TYPE_LEVEL_HIGH>;
 			spi-max-frequency = <50000000>;
 			clocks = <&gcc GCC_BLSP1_QUP1_SPI_APPS_CLK>,
···
 			compatible = "qcom,spi-qup-v2.2.1";
 			#address-cells = <1>;
 			#size-cells = <0>;
-			reg = <0x078b6000 0x600>;
+			reg = <0x0 0x078b6000 0x0 0x600>;
 			interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
 			spi-max-frequency = <50000000>;
 			clocks = <&gcc GCC_BLSP1_QUP2_SPI_APPS_CLK>,
···
 			compatible = "qcom,i2c-qup-v2.2.1";
 			#address-cells = <1>;
 			#size-cells = <0>;
-			reg = <0x078b6000 0x600>;
+			reg = <0x0 0x078b6000 0x0 0x600>;
 			interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&gcc GCC_BLSP1_AHB_CLK>,
 				 <&gcc GCC_BLSP1_QUP2_I2C_APPS_CLK>;
···
 			compatible = "qcom,i2c-qup-v2.2.1";
 			#address-cells = <1>;
 			#size-cells = <0>;
-			reg = <0x078b7000 0x600>;
+			reg = <0x0 0x078b7000 0x0 0x600>;
 			interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&gcc GCC_BLSP1_AHB_CLK>,
 				 <&gcc GCC_BLSP1_QUP3_I2C_APPS_CLK>;
···
 			compatible = "qcom,msm-qgic2";
 			interrupt-controller;
 			#interrupt-cells = <0x3>;
-			reg = <0x0b000000 0x1000>, /*GICD*/
-			      <0x0b002000 0x1000>, /*GICC*/
-			      <0x0b001000 0x1000>, /*GICH*/
-			      <0x0b004000 0x1000>; /*GICV*/
+			reg = <0x0 0x0b000000 0x0 0x1000>, /*GICD*/
+			      <0x0 0x0b002000 0x0 0x1000>, /*GICC*/
+			      <0x0 0x0b001000 0x0 0x1000>, /*GICH*/
+			      <0x0 0x0b004000 0x0 0x1000>; /*GICV*/
 			interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
 		};
 
 		watchdog@b017000 {
 			compatible = "qcom,kpss-wdt";
 			interrupts = <GIC_SPI 3 IRQ_TYPE_EDGE_RISING>;
-			reg = <0x0b017000 0x40>;
+			reg = <0x0 0x0b017000 0x0 0x40>;
 			clocks = <&sleep_clk>;
 			timeout-sec = <10>;
 		};
 
 		apcs_glb: mailbox@b111000 {
 			compatible = "qcom,ipq6018-apcs-apps-global";
-			reg = <0x0b111000 0x1000>;
+			reg = <0x0 0x0b111000 0x0 0x1000>;
 			#clock-cells = <1>;
 			clocks = <&a53pll>, <&xo>;
 			clock-names = "pll", "xo";
···
 
 		a53pll: clock@b116000 {
 			compatible = "qcom,ipq6018-a53pll";
-			reg = <0x0b116000 0x40>;
+			reg = <0x0 0x0b116000 0x0 0x40>;
 			#clock-cells = <0>;
 			clocks = <&xo>;
 			clock-names = "xo";
···
 		};
 
 		timer@b120000 {
-			#address-cells = <1>;
-			#size-cells = <1>;
+			#address-cells = <2>;
+			#size-cells = <2>;
 			ranges;
 			compatible = "arm,armv7-timer-mem";
-			reg = <0x0b120000 0x1000>;
+			reg = <0x0 0x0b120000 0x0 0x1000>;
 			clock-frequency = <19200000>;
 
 			frame@b120000 {
 				frame-number = <0>;
 				interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>,
 					     <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>;
-				reg = <0x0b121000 0x1000>,
-				      <0x0b122000 0x1000>;
+				reg = <0x0 0x0b121000 0x0 0x1000>,
+				      <0x0 0x0b122000 0x0 0x1000>;
 			};
 
 			frame@b123000 {
 				frame-number = <1>;
 				interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
-				reg = <0xb123000 0x1000>;
+				reg = <0x0 0xb123000 0x0 0x1000>;
 				status = "disabled";
 			};
 
 			frame@b124000 {
 				frame-number = <2>;
 				interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
-				reg = <0x0b124000 0x1000>;
+				reg = <0x0 0x0b124000 0x0 0x1000>;
 				status = "disabled";
 			};
 
 			frame@b125000 {
 				frame-number = <3>;
 				interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
-				reg = <0x0b125000 0x1000>;
+				reg = <0x0 0x0b125000 0x0 0x1000>;
 				status = "disabled";
 			};
 
 			frame@b126000 {
 				frame-number = <4>;
 				interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
-				reg = <0x0b126000 0x1000>;
+				reg = <0x0 0x0b126000 0x0 0x1000>;
 				status = "disabled";
 			};
 
 			frame@b127000 {
 				frame-number = <5>;
 				interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
-				reg = <0x0b127000 0x1000>;
+				reg = <0x0 0x0b127000 0x0 0x1000>;
 				status = "disabled";
 			};
 
 			frame@b128000 {
 				frame-number = <6>;
 				interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
-				reg = <0x0b128000 0x1000>;
+				reg = <0x0 0x0b128000 0x0 0x1000>;
 				status = "disabled";
 			};
 		};
 
 		q6v5_wcss: remoteproc@cd00000 {
 			compatible = "qcom,ipq8074-wcss-pil";
-			reg = <0x0cd00000 0x4040>,
-			      <0x004ab000 0x20>;
+			reg = <0x0 0x0cd00000 0x0 0x4040>,
+			      <0x0 0x004ab000 0x0 0x20>;
 			reg-names = "qdsp6",
 				    "rmb";
 			interrupts-extended = <&intc GIC_SPI 325 IRQ_TYPE_EDGE_RISING>,
-1
arch/arm64/boot/dts/rockchip/rk3326-odroid-go2.dts
··· 243 243 interrupts = <RK_PB2 IRQ_TYPE_LEVEL_LOW>; 244 244 pinctrl-names = "default"; 245 245 pinctrl-0 = <&pmic_int>; 246 - rockchip,system-power-controller; 247 246 wakeup-source; 248 247 #clock-cells = <1>; 249 248 clock-output-names = "rk808-clkout1", "xin32k";
+1 -1
arch/arm64/boot/dts/rockchip/rk3328-nanopi-r2s.dts
··· 20 20 gmac_clk: gmac-clock { 21 21 compatible = "fixed-clock"; 22 22 clock-frequency = <125000000>; 23 - clock-output-names = "gmac_clk"; 23 + clock-output-names = "gmac_clkin"; 24 24 #clock-cells = <0>; 25 25 }; 26 26
+2 -2
arch/arm64/boot/dts/rockchip/rk3399-roc-pc.dtsi
··· 74 74 label = "red:diy"; 75 75 gpios = <&gpio0 RK_PB5 GPIO_ACTIVE_HIGH>; 76 76 default-state = "off"; 77 - linux,default-trigger = "mmc1"; 77 + linux,default-trigger = "mmc2"; 78 78 }; 79 79 80 80 yellow_led: led-2 { 81 81 label = "yellow:yellow-led"; 82 82 gpios = <&gpio0 RK_PA2 GPIO_ACTIVE_HIGH>; 83 83 default-state = "off"; 84 - linux,default-trigger = "mmc0"; 84 + linux,default-trigger = "mmc1"; 85 85 }; 86 86 }; 87 87
+3
arch/arm64/boot/dts/rockchip/rk3399.dtsi
··· 29 29 i2c6 = &i2c6; 30 30 i2c7 = &i2c7; 31 31 i2c8 = &i2c8; 32 + mmc0 = &sdio0; 33 + mmc1 = &sdmmc; 34 + mmc2 = &sdhci; 32 35 serial0 = &uart0; 33 36 serial1 = &uart1; 34 37 serial2 = &uart2;
+18 -16
arch/arm64/include/asm/pgtable.h
··· 115 115 #define pte_valid(pte) (!!(pte_val(pte) & PTE_VALID)) 116 116 #define pte_valid_not_user(pte) \ 117 117 ((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID) 118 - #define pte_valid_young(pte) \ 119 - ((pte_val(pte) & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF)) 120 118 #define pte_valid_user(pte) \ 121 119 ((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER)) 122 120 ··· 122 124 * Could the pte be present in the TLB? We must check mm_tlb_flush_pending 123 125 * so that we don't erroneously return false for pages that have been 124 126 * remapped as PROT_NONE but are yet to be flushed from the TLB. 127 + * Note that we can't make any assumptions based on the state of the access 128 + * flag, since ptep_clear_flush_young() elides a DSB when invalidating the 129 + * TLB. 125 130 */ 126 131 #define pte_accessible(mm, pte) \ 127 - (mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid_young(pte)) 132 + (mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte)) 128 133 129 134 /* 130 135 * p??_access_permitted() is true for valid user mappings (subject to the ··· 165 164 return pmd; 166 165 } 167 166 168 - static inline pte_t pte_wrprotect(pte_t pte) 169 - { 170 - pte = clear_pte_bit(pte, __pgprot(PTE_WRITE)); 171 - pte = set_pte_bit(pte, __pgprot(PTE_RDONLY)); 172 - return pte; 173 - } 174 - 175 167 static inline pte_t pte_mkwrite(pte_t pte) 176 168 { 177 169 pte = set_pte_bit(pte, __pgprot(PTE_WRITE)); ··· 187 193 if (pte_write(pte)) 188 194 pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY)); 189 195 196 + return pte; 197 + } 198 + 199 + static inline pte_t pte_wrprotect(pte_t pte) 200 + { 201 + /* 202 + * If hardware-dirty (PTE_WRITE/DBM bit set and PTE_RDONLY 203 + * clear), set the PTE_DIRTY bit. 
204 + */ 205 + if (pte_hw_dirty(pte)) 206 + pte = pte_mkdirty(pte); 207 + 208 + pte = clear_pte_bit(pte, __pgprot(PTE_WRITE)); 209 + pte = set_pte_bit(pte, __pgprot(PTE_RDONLY)); 190 210 return pte; 191 211 } 192 212 ··· 853 845 pte = READ_ONCE(*ptep); 854 846 do { 855 847 old_pte = pte; 856 - /* 857 - * If hardware-dirty (PTE_WRITE/DBM bit set and PTE_RDONLY 858 - * clear), set the PTE_DIRTY bit. 859 - */ 860 - if (pte_hw_dirty(pte)) 861 - pte = pte_mkdirty(pte); 862 848 pte = pte_wrprotect(pte); 863 849 pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep), 864 850 pte_val(old_pte), pte_val(pte));
+2
arch/arm64/include/asm/probes.h
··· 7 7 #ifndef _ARM_PROBES_H 8 8 #define _ARM_PROBES_H 9 9 10 + #include <asm/insn.h> 11 + 10 12 typedef u32 probe_opcode_t; 11 13 typedef void (probes_handler_t) (u32 opcode, long addr, struct pt_regs *); 12 14
+5
arch/arm64/kvm/hyp/nvhe/hyp.lds.S
··· 13 13 14 14 SECTIONS { 15 15 HYP_SECTION(.text) 16 + /* 17 + * .hyp..data..percpu needs to be page aligned to maintain the same 18 + * alignment for when linking into vmlinux. 19 + */ 20 + . = ALIGN(PAGE_SIZE); 16 21 HYP_SECTION_NAME(.data..percpu) : { 17 22 PERCPU_INPUT(L1_CACHE_BYTES) 18 23 }
+20 -2
arch/arm64/kvm/vgic/vgic-mmio-v3.c
··· 273 273 return extract_bytes(value, addr & 7, len); 274 274 } 275 275 276 + static unsigned long vgic_uaccess_read_v3r_typer(struct kvm_vcpu *vcpu, 277 + gpa_t addr, unsigned int len) 278 + { 279 + unsigned long mpidr = kvm_vcpu_get_mpidr_aff(vcpu); 280 + int target_vcpu_id = vcpu->vcpu_id; 281 + u64 value; 282 + 283 + value = (u64)(mpidr & GENMASK(23, 0)) << 32; 284 + value |= ((target_vcpu_id & 0xffff) << 8); 285 + 286 + if (vgic_has_its(vcpu->kvm)) 287 + value |= GICR_TYPER_PLPIS; 288 + 289 + /* reporting of the Last bit is not supported for userspace */ 290 + return extract_bytes(value, addr & 7, len); 291 + } 292 + 276 293 static unsigned long vgic_mmio_read_v3r_iidr(struct kvm_vcpu *vcpu, 277 294 gpa_t addr, unsigned int len) 278 295 { ··· 610 593 REGISTER_DESC_WITH_LENGTH(GICR_IIDR, 611 594 vgic_mmio_read_v3r_iidr, vgic_mmio_write_wi, 4, 612 595 VGIC_ACCESS_32bit), 613 - REGISTER_DESC_WITH_LENGTH(GICR_TYPER, 614 - vgic_mmio_read_v3r_typer, vgic_mmio_write_wi, 8, 596 + REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_TYPER, 597 + vgic_mmio_read_v3r_typer, vgic_mmio_write_wi, 598 + vgic_uaccess_read_v3r_typer, vgic_mmio_uaccess_write_wi, 8, 615 599 VGIC_ACCESS_64bit | VGIC_ACCESS_32bit), 616 600 REGISTER_DESC_WITH_LENGTH(GICR_WAKER, 617 601 vgic_mmio_read_raz, vgic_mmio_write_wi, 4,
+6
arch/ia64/include/asm/sparsemem.h
··· 18 18 #endif 19 19 20 20 #endif /* CONFIG_SPARSEMEM */ 21 + 22 + #ifdef CONFIG_MEMORY_HOTPLUG 23 + int memory_add_physaddr_to_nid(u64 addr); 24 + #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid 25 + #endif 26 + 21 27 #endif /* _ASM_IA64_SPARSEMEM_H */
+3
arch/mips/include/asm/pgtable-32.h
··· 154 154 155 155 #if defined(CONFIG_XPA) 156 156 157 + #define MAX_POSSIBLE_PHYSMEM_BITS 40 157 158 #define pte_pfn(x) (((unsigned long)((x).pte_high >> _PFN_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT)) 158 159 static inline pte_t 159 160 pfn_pte(unsigned long pfn, pgprot_t prot) ··· 170 169 171 170 #elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) 172 171 172 + #define MAX_POSSIBLE_PHYSMEM_BITS 36 173 173 #define pte_pfn(x) ((unsigned long)((x).pte_high >> 6)) 174 174 175 175 static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot) ··· 185 183 186 184 #else 187 185 186 + #define MAX_POSSIBLE_PHYSMEM_BITS 32 188 187 #ifdef CONFIG_CPU_VR41XX 189 188 #define pte_pfn(x) ((unsigned long)((x).pte >> (PAGE_SHIFT + 2))) 190 189 #define pfn_pte(pfn, prot) __pte(((pfn) << (PAGE_SHIFT + 2)) | pgprot_val(prot))
-1
arch/powerpc/Makefile
··· 248 248 cpu-as-$(CONFIG_40x) += -Wa,-m405 249 249 cpu-as-$(CONFIG_44x) += -Wa,-m440 250 250 cpu-as-$(CONFIG_ALTIVEC) += $(call as-option,-Wa$(comma)-maltivec) 251 - cpu-as-$(CONFIG_E200) += -Wa,-me200 252 251 cpu-as-$(CONFIG_E500) += -Wa,-me500 253 252 254 253 # When using '-many -mpower4' gas will first try and find a matching power4
+2
arch/powerpc/include/asm/book3s/32/pgtable.h
··· 36 36 */ 37 37 #ifdef CONFIG_PTE_64BIT 38 38 #define PTE_RPN_MASK (~((1ULL << PTE_RPN_SHIFT) - 1)) 39 + #define MAX_POSSIBLE_PHYSMEM_BITS 36 39 40 #else 40 41 #define PTE_RPN_MASK (~((1UL << PTE_RPN_SHIFT) - 1)) 42 + #define MAX_POSSIBLE_PHYSMEM_BITS 32 41 43 #endif 42 44 43 45 /*
+2
arch/powerpc/include/asm/book3s/64/kup-radix.h
··· 63 63 64 64 #else /* !__ASSEMBLY__ */ 65 65 66 + #include <linux/jump_label.h> 67 + 66 68 DECLARE_STATIC_KEY_FALSE(uaccess_flush_key); 67 69 68 70 #ifdef CONFIG_PPC_KUAP
+5
arch/powerpc/include/asm/mmzone.h
··· 46 46 #define __HAVE_ARCH_RESERVED_KERNEL_PAGES 47 47 #endif 48 48 49 + #ifdef CONFIG_MEMORY_HOTPLUG 50 + extern int create_section_mapping(unsigned long start, unsigned long end, 51 + int nid, pgprot_t prot); 52 + #endif 53 + 49 54 #endif /* __KERNEL__ */ 50 55 #endif /* _ASM_MMZONE_H_ */
+2
arch/powerpc/include/asm/nohash/32/pgtable.h
··· 153 153 */ 154 154 #if defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT) 155 155 #define PTE_RPN_MASK (~((1ULL << PTE_RPN_SHIFT) - 1)) 156 + #define MAX_POSSIBLE_PHYSMEM_BITS 36 156 157 #else 157 158 #define PTE_RPN_MASK (~((1UL << PTE_RPN_SHIFT) - 1)) 159 + #define MAX_POSSIBLE_PHYSMEM_BITS 32 158 160 #endif 159 161 160 162 /*
+2 -3
arch/powerpc/include/asm/sparsemem.h
··· 13 13 #endif /* CONFIG_SPARSEMEM */ 14 14 15 15 #ifdef CONFIG_MEMORY_HOTPLUG 16 - extern int create_section_mapping(unsigned long start, unsigned long end, 17 - int nid, pgprot_t prot); 18 16 extern int remove_section_mapping(unsigned long start, unsigned long end); 17 + extern int memory_add_physaddr_to_nid(u64 start); 18 + #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid 19 19 20 20 #ifdef CONFIG_NUMA 21 21 extern int hot_add_scn_to_nid(unsigned long scn_addr); ··· 26 26 } 27 27 #endif /* CONFIG_NUMA */ 28 28 #endif /* CONFIG_MEMORY_HOTPLUG */ 29 - 30 29 #endif /* __KERNEL__ */ 31 30 #endif /* _ASM_POWERPC_SPARSEMEM_H */
+7 -6
arch/powerpc/kernel/exceptions-64s.S
··· 1000 1000 * Vectors for the FWNMI option. Share common code. 1001 1001 */ 1002 1002 TRAMP_REAL_BEGIN(system_reset_fwnmi) 1003 - /* XXX: fwnmi guest could run a nested/PR guest, so why no test? */ 1004 - __IKVM_REAL(system_reset)=0 1005 1003 GEN_INT_ENTRY system_reset, virt=0 1006 1004 1007 1005 #endif /* CONFIG_PPC_PSERIES */ ··· 1410 1412 * If none is found, do a Linux page fault. Linux page faults can happen in 1411 1413 * kernel mode due to user copy operations of course. 1412 1414 * 1415 + * KVM: The KVM HDSI handler may perform a load with MSR[DR]=1 in guest 1416 + * MMU context, which may cause a DSI in the host, which must go to the 1417 + * KVM handler. MSR[IR] is not enabled, so the real-mode handler will 1418 + * always be used regardless of AIL setting. 1419 + * 1413 1420 * - Radix MMU 1414 1421 * The hardware loads from the Linux page table directly, so a fault goes 1415 1422 * immediately to Linux page fault. ··· 1425 1422 IVEC=0x300 1426 1423 IDAR=1 1427 1424 IDSISR=1 1428 - #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE 1429 1425 IKVM_SKIP=1 1430 1426 IKVM_REAL=1 1431 - #endif 1432 1427 INT_DEFINE_END(data_access) 1433 1428 1434 1429 EXC_REAL_BEGIN(data_access, 0x300, 0x80) ··· 1465 1464 * ppc64_bolted_size (first segment). The kernel handler must avoid stomping 1466 1465 * on user-handler data structures. 1467 1466 * 1467 + * KVM: Same as 0x300, DSLB must test for KVM guest. 1468 + * 1468 1469 * A dedicated save area EXSLB is used (XXX: but it actually need not be 1469 1470 * these days, we could use EXGEN). 1470 1471 */ ··· 1475 1472 IAREA=PACA_EXSLB 1476 1473 IRECONCILE=0 1477 1474 IDAR=1 1478 - #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE 1479 1475 IKVM_SKIP=1 1480 1476 IKVM_REAL=1 1481 - #endif 1482 1477 INT_DEFINE_END(data_access_slb) 1483 1478 1484 1479 EXC_REAL_BEGIN(data_access_slb, 0x380, 0x80)
+2 -1
arch/powerpc/kernel/head_book3s_32.S
··· 156 156 bl initial_bats 157 157 bl load_segment_registers 158 158 BEGIN_MMU_FTR_SECTION 159 + bl reloc_offset 159 160 bl early_hash_table 160 161 END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE) 161 162 #if defined(CONFIG_BOOTX_TEXT) ··· 921 920 ori r6, r6, 3 /* 256kB table */ 922 921 mtspr SPRN_SDR1, r6 923 922 lis r6, early_hash@h 924 - lis r3, Hash@ha 923 + addis r3, r3, Hash@ha 925 924 stw r6, Hash@l(r3) 926 925 blr 927 926
+7
arch/powerpc/kvm/book3s_xive_native.c
··· 251 251 } 252 252 253 253 state = &sb->irq_state[src]; 254 + 255 + /* Some sanity checking */ 256 + if (!state->valid) { 257 + pr_devel("%s: source %lx invalid !\n", __func__, irq); 258 + return VM_FAULT_SIGBUS; 259 + } 260 + 254 261 kvmppc_xive_select_irq(state, &hw_num, &xd); 255 262 256 263 arch_spin_lock(&sb->lock);
+1
arch/powerpc/mm/mem.c
··· 50 50 #include <asm/rtas.h> 51 51 #include <asm/kasan.h> 52 52 #include <asm/svm.h> 53 + #include <asm/mmzone.h> 53 54 54 55 #include <mm/mmu_decl.h> 55 56
+2
arch/riscv/include/asm/pgtable-32.h
··· 14 14 #define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT) 15 15 #define PGDIR_MASK (~(PGDIR_SIZE - 1)) 16 16 17 + #define MAX_POSSIBLE_PHYSMEM_BITS 34 18 + 17 19 #endif /* _ASM_RISCV_PGTABLE_32_H */
+5 -5
arch/s390/kernel/asm-offsets.c
··· 53 53 /* stack_frame offsets */ 54 54 OFFSET(__SF_BACKCHAIN, stack_frame, back_chain); 55 55 OFFSET(__SF_GPRS, stack_frame, gprs); 56 - OFFSET(__SF_EMPTY, stack_frame, empty1); 57 - OFFSET(__SF_SIE_CONTROL, stack_frame, empty1[0]); 58 - OFFSET(__SF_SIE_SAVEAREA, stack_frame, empty1[1]); 59 - OFFSET(__SF_SIE_REASON, stack_frame, empty1[2]); 60 - OFFSET(__SF_SIE_FLAGS, stack_frame, empty1[3]); 56 + OFFSET(__SF_EMPTY, stack_frame, empty1[0]); 57 + OFFSET(__SF_SIE_CONTROL, stack_frame, empty1[1]); 58 + OFFSET(__SF_SIE_SAVEAREA, stack_frame, empty1[2]); 59 + OFFSET(__SF_SIE_REASON, stack_frame, empty1[3]); 60 + OFFSET(__SF_SIE_FLAGS, stack_frame, empty1[4]); 61 61 BLANK(); 62 62 OFFSET(__VDSO_GETCPU_VAL, vdso_per_cpu_data, getcpu_val); 63 63 BLANK();
+2
arch/s390/kernel/entry.S
··· 1068 1068 * %r4 1069 1069 */ 1070 1070 load_fpu_regs: 1071 + stnsm __SF_EMPTY(%r15),0xfc 1071 1072 lg %r4,__LC_CURRENT 1072 1073 aghi %r4,__TASK_thread 1073 1074 TSTMSK __LC_CPU_FLAGS,_CIF_FPU ··· 1100 1099 .Lload_fpu_regs_done: 1101 1100 ni __LC_CPU_FLAGS+7,255-_CIF_FPU 1102 1101 .Lload_fpu_regs_exit: 1102 + ssm __SF_EMPTY(%r15) 1103 1103 BR_EX %r14 1104 1104 .Lload_fpu_regs_end: 1105 1105 ENDPROC(load_fpu_regs)
+8 -1
arch/s390/kernel/uv.c
··· 129 129 .paddr = paddr 130 130 }; 131 131 132 - if (uv_call(0, (u64)&uvcb)) 132 + if (uv_call(0, (u64)&uvcb)) { 133 + /* 134 + * Older firmware uses 107/d as an indication of a non secure 135 + * page. Let us emulate the newer variant (no-op). 136 + */ 137 + if (uvcb.header.rc == 0x107 && uvcb.header.rrc == 0xd) 138 + return 0; 133 139 return -EINVAL; 140 + } 134 141 return 0; 135 142 } 136 143
+1 -3
arch/s390/kvm/kvm-s390.c
··· 2312 2312 struct kvm_s390_pv_unp unp = {}; 2313 2313 2314 2314 r = -EINVAL; 2315 - if (!kvm_s390_pv_is_protected(kvm)) 2315 + if (!kvm_s390_pv_is_protected(kvm) || !mm_is_protected(kvm->mm)) 2316 2316 break; 2317 2317 2318 2318 r = -EFAULT; ··· 3564 3564 vcpu->arch.sie_block->pp = 0; 3565 3565 vcpu->arch.sie_block->fpf &= ~FPF_BPBC; 3566 3566 vcpu->arch.sie_block->todpr = 0; 3567 - vcpu->arch.sie_block->cpnc = 0; 3568 3567 } 3569 3568 } 3570 3569 ··· 3581 3582 3582 3583 regs->etoken = 0; 3583 3584 regs->etoken_extension = 0; 3584 - regs->diag318 = 0; 3585 3585 } 3586 3586 3587 3587 int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+2 -1
arch/s390/kvm/pv.c
··· 208 208 return -EIO; 209 209 } 210 210 kvm->arch.gmap->guest_handle = uvcb.guest_handle; 211 - atomic_set(&kvm->mm->context.is_protected, 1); 212 211 return 0; 213 212 } 214 213 ··· 227 228 *rrc = uvcb.header.rrc; 228 229 KVM_UV_EVENT(kvm, 3, "PROTVIRT VM SET PARMS: rc %x rrc %x", 229 230 *rc, *rrc); 231 + if (!cc) 232 + atomic_set(&kvm->mm->context.is_protected, 1); 230 233 return cc ? -EINVAL : 0; 231 234 } 232 235
+2
arch/s390/mm/gmap.c
··· 2690 2690 #include <linux/sched/mm.h> 2691 2691 void s390_reset_acc(struct mm_struct *mm) 2692 2692 { 2693 + if (!mm_is_protected(mm)) 2694 + return; 2693 2695 /* 2694 2696 * we might be called during 2695 2697 * reset: we walk the pages and clear
+3 -3
arch/x86/events/intel/cstate.c
··· 107 107 MODULE_LICENSE("GPL"); 108 108 109 109 #define DEFINE_CSTATE_FORMAT_ATTR(_var, _name, _format) \ 110 - static ssize_t __cstate_##_var##_show(struct kobject *kobj, \ 111 - struct kobj_attribute *attr, \ 110 + static ssize_t __cstate_##_var##_show(struct device *dev, \ 111 + struct device_attribute *attr, \ 112 112 char *page) \ 113 113 { \ 114 114 BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE); \ 115 115 return sprintf(page, _format "\n"); \ 116 116 } \ 117 - static struct kobj_attribute format_attr_##_var = \ 117 + static struct device_attribute format_attr_##_var = \ 118 118 __ATTR(_name, 0444, __cstate_##_var##_show, NULL) 119 119 120 120 static ssize_t cstate_get_attr_cpumask(struct device *dev,
+2 -2
arch/x86/events/intel/uncore.c
··· 94 94 return map; 95 95 } 96 96 97 - ssize_t uncore_event_show(struct kobject *kobj, 98 - struct kobj_attribute *attr, char *buf) 97 + ssize_t uncore_event_show(struct device *dev, 98 + struct device_attribute *attr, char *buf) 99 99 { 100 100 struct uncore_event_desc *event = 101 101 container_of(attr, struct uncore_event_desc, attr);
+6 -6
arch/x86/events/intel/uncore.h
··· 157 157 #define UNCORE_BOX_FLAG_CFL8_CBOX_MSR_OFFS 2 158 158 159 159 struct uncore_event_desc { 160 - struct kobj_attribute attr; 160 + struct device_attribute attr; 161 161 const char *config; 162 162 }; 163 163 ··· 179 179 struct pci2phy_map *__find_pci2phy_map(int segment); 180 180 int uncore_pcibus_to_physid(struct pci_bus *bus); 181 181 182 - ssize_t uncore_event_show(struct kobject *kobj, 183 - struct kobj_attribute *attr, char *buf); 182 + ssize_t uncore_event_show(struct device *dev, 183 + struct device_attribute *attr, char *buf); 184 184 185 185 static inline struct intel_uncore_pmu *dev_to_uncore_pmu(struct device *dev) 186 186 { ··· 201 201 } 202 202 203 203 #define DEFINE_UNCORE_FORMAT_ATTR(_var, _name, _format) \ 204 - static ssize_t __uncore_##_var##_show(struct kobject *kobj, \ 205 - struct kobj_attribute *attr, \ 204 + static ssize_t __uncore_##_var##_show(struct device *dev, \ 205 + struct device_attribute *attr, \ 206 206 char *page) \ 207 207 { \ 208 208 BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE); \ 209 209 return sprintf(page, _format "\n"); \ 210 210 } \ 211 - static struct kobj_attribute format_attr_##_var = \ 211 + static struct device_attribute format_attr_##_var = \ 212 212 __ATTR(_name, 0444, __uncore_##_var##_show, NULL) 213 213 214 214 static inline bool uncore_pmc_fixed(int idx)
+1 -13
arch/x86/events/rapl.c
··· 93 93 * any other bit is reserved 94 94 */ 95 95 #define RAPL_EVENT_MASK 0xFFULL 96 - 97 - #define DEFINE_RAPL_FORMAT_ATTR(_var, _name, _format) \ 98 - static ssize_t __rapl_##_var##_show(struct kobject *kobj, \ 99 - struct kobj_attribute *attr, \ 100 - char *page) \ 101 - { \ 102 - BUILD_BUG_ON(sizeof(_format) >= PAGE_SIZE); \ 103 - return sprintf(page, _format "\n"); \ 104 - } \ 105 - static struct kobj_attribute format_attr_##_var = \ 106 - __ATTR(_name, 0444, __rapl_##_var##_show, NULL) 107 - 108 96 #define RAPL_CNTR_WIDTH 32 109 97 110 98 #define RAPL_EVENT_ATTR_STR(_name, v, str) \ ··· 429 441 .attrs = attrs_empty, 430 442 }; 431 443 432 - DEFINE_RAPL_FORMAT_ATTR(event, event, "config:0-7"); 444 + PMU_FORMAT_ATTR(event, "config:0-7"); 433 445 static struct attribute *rapl_formats_attr[] = { 434 446 &format_attr_event.attr, 435 447 NULL,
+1
arch/x86/include/asm/kvm_host.h
··· 1656 1656 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte); 1657 1657 int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v); 1658 1658 int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu); 1659 + int kvm_cpu_has_extint(struct kvm_vcpu *v); 1659 1660 int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu); 1660 1661 int kvm_cpu_get_interrupt(struct kvm_vcpu *v); 1661 1662 void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+10
arch/x86/include/asm/sparsemem.h
··· 28 28 #endif 29 29 30 30 #endif /* CONFIG_SPARSEMEM */ 31 + 32 + #ifndef __ASSEMBLY__ 33 + #ifdef CONFIG_NUMA_KEEP_MEMINFO 34 + extern int phys_to_target_node(phys_addr_t start); 35 + #define phys_to_target_node phys_to_target_node 36 + extern int memory_add_physaddr_to_nid(u64 start); 37 + #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid 38 + #endif 39 + #endif /* __ASSEMBLY__ */ 40 + 31 41 #endif /* _ASM_X86_SPARSEMEM_H */
+10 -53
arch/x86/kernel/cpu/microcode/intel.c
··· 100 100 return find_matching_signature(mc, csig, cpf); 101 101 } 102 102 103 - /* 104 - * Given CPU signature and a microcode patch, this function finds if the 105 - * microcode patch has matching family and model with the CPU. 106 - * 107 - * %true - if there's a match 108 - * %false - otherwise 109 - */ 110 - static bool microcode_matches(struct microcode_header_intel *mc_header, 111 - unsigned long sig) 112 - { 113 - unsigned long total_size = get_totalsize(mc_header); 114 - unsigned long data_size = get_datasize(mc_header); 115 - struct extended_sigtable *ext_header; 116 - unsigned int fam_ucode, model_ucode; 117 - struct extended_signature *ext_sig; 118 - unsigned int fam, model; 119 - int ext_sigcount, i; 120 - 121 - fam = x86_family(sig); 122 - model = x86_model(sig); 123 - 124 - fam_ucode = x86_family(mc_header->sig); 125 - model_ucode = x86_model(mc_header->sig); 126 - 127 - if (fam == fam_ucode && model == model_ucode) 128 - return true; 129 - 130 - /* Look for ext. headers: */ 131 - if (total_size <= data_size + MC_HEADER_SIZE) 132 - return false; 133 - 134 - ext_header = (void *) mc_header + data_size + MC_HEADER_SIZE; 135 - ext_sig = (void *)ext_header + EXT_HEADER_SIZE; 136 - ext_sigcount = ext_header->count; 137 - 138 - for (i = 0; i < ext_sigcount; i++) { 139 - fam_ucode = x86_family(ext_sig->sig); 140 - model_ucode = x86_model(ext_sig->sig); 141 - 142 - if (fam == fam_ucode && model == model_ucode) 143 - return true; 144 - 145 - ext_sig++; 146 - } 147 - return false; 148 - } 149 - 150 103 static struct ucode_patch *memdup_patch(void *data, unsigned int size) 151 104 { 152 105 struct ucode_patch *p; ··· 117 164 return p; 118 165 } 119 166 120 - static void save_microcode_patch(void *data, unsigned int size) 167 + static void save_microcode_patch(struct ucode_cpu_info *uci, void *data, unsigned int size) 121 168 { 122 169 struct microcode_header_intel *mc_hdr, *mc_saved_hdr; 123 170 struct ucode_patch *iter, *tmp, *p = NULL; ··· 161 208 } 162 209 
163 210 if (!p) 211 + return; 212 + 213 + if (!find_matching_signature(p->data, uci->cpu_sig.sig, uci->cpu_sig.pf)) 164 214 return; 165 215 166 216 /* ··· 300 344 301 345 size -= mc_size; 302 346 303 - if (!microcode_matches(mc_header, uci->cpu_sig.sig)) { 347 + if (!find_matching_signature(data, uci->cpu_sig.sig, 348 + uci->cpu_sig.pf)) { 304 349 data += mc_size; 305 350 continue; 306 351 } 307 352 308 353 if (save) { 309 - save_microcode_patch(data, mc_size); 354 + save_microcode_patch(uci, data, mc_size); 310 355 goto next; 311 356 } 312 357 ··· 440 483 * Save this microcode patch. It will be loaded early when a CPU is 441 484 * hot-added or resumes. 442 485 */ 443 - static void save_mc_for_early(u8 *mc, unsigned int size) 486 + static void save_mc_for_early(struct ucode_cpu_info *uci, u8 *mc, unsigned int size) 444 487 { 445 488 /* Synchronization during CPU hotplug. */ 446 489 static DEFINE_MUTEX(x86_cpu_microcode_mutex); 447 490 448 491 mutex_lock(&x86_cpu_microcode_mutex); 449 492 450 - save_microcode_patch(mc, size); 493 + save_microcode_patch(uci, mc, size); 451 494 show_saved_mc(); 452 495 453 496 mutex_unlock(&x86_cpu_microcode_mutex); ··· 892 935 * permanent memory. So it will be loaded early when a CPU is hot added 893 936 * or resumes. 894 937 */ 895 - save_mc_for_early(new_mc, new_mc_size); 938 + save_mc_for_early(uci, new_mc, new_mc_size); 896 939 897 940 pr_debug("CPU%d found a matching microcode update with version 0x%x (current=0x%x)\n", 898 941 cpu, new_rev, uci->cpu_sig.rev);
+19 -4
arch/x86/kernel/dumpstack.c
··· 78 78 if (!user_mode(regs)) 79 79 return copy_from_kernel_nofault(buf, (u8 *)src, nbytes); 80 80 81 + /* The user space code from other tasks cannot be accessed. */ 82 + if (regs != task_pt_regs(current)) 83 + return -EPERM; 81 84 /* 82 85 * Make sure userspace isn't trying to trick us into dumping kernel 83 86 * memory by pointing the userspace instruction pointer at it. ··· 88 85 if (__chk_range_not_ok(src, nbytes, TASK_SIZE_MAX)) 89 86 return -EINVAL; 90 87 88 + /* 89 + * Even if named copy_from_user_nmi() this can be invoked from 90 + * other contexts and will not try to resolve a pagefault, which is 91 + * the correct thing to do here as this code can be called from any 92 + * context. 93 + */ 91 94 return copy_from_user_nmi(buf, (void __user *)src, nbytes); 92 95 } 93 96 ··· 124 115 u8 opcodes[OPCODE_BUFSIZE]; 125 116 unsigned long prologue = regs->ip - PROLOGUE_SIZE; 126 117 127 - if (copy_code(regs, opcodes, prologue, sizeof(opcodes))) { 128 - printk("%sCode: Unable to access opcode bytes at RIP 0x%lx.\n", 129 - loglvl, prologue); 130 - } else { 118 + switch (copy_code(regs, opcodes, prologue, sizeof(opcodes))) { 119 + case 0: 131 120 printk("%sCode: %" __stringify(PROLOGUE_SIZE) "ph <%02x> %" 132 121 __stringify(EPILOGUE_SIZE) "ph\n", loglvl, opcodes, 133 122 opcodes[PROLOGUE_SIZE], opcodes + PROLOGUE_SIZE + 1); 123 + break; 124 + case -EPERM: 125 + /* No access to the user space stack of other tasks. Ignore. */ 126 + break; 127 + default: 128 + printk("%sCode: Unable to access opcode bytes at RIP 0x%lx.\n", 129 + loglvl, prologue); 130 + break; 134 131 } 135 132 } 136 133
+1 -7
arch/x86/kernel/tboot.c
··· 514 514 if (!tboot_enabled()) 515 515 return 0; 516 516 517 - if (intel_iommu_tboot_noforce) 518 - return 1; 519 - 520 - if (no_iommu || swiotlb || dmar_disabled) 517 + if (no_iommu || dmar_disabled) 521 518 pr_warn("Forcing Intel-IOMMU to enabled\n"); 522 519 523 520 dmar_disabled = 0; 524 - #ifdef CONFIG_SWIOTLB 525 - swiotlb = 0; 526 - #endif 527 521 no_iommu = 0; 528 522 529 523 return 1;
+34 -51
arch/x86/kvm/irq.c
··· 40 40 * check if there is pending interrupt from 41 41 * non-APIC source without intack. 42 42 */ 43 - static int kvm_cpu_has_extint(struct kvm_vcpu *v) 44 - { 45 - u8 accept = kvm_apic_accept_pic_intr(v); 46 - 47 - if (accept) { 48 - if (irqchip_split(v->kvm)) 49 - return pending_userspace_extint(v); 50 - else 51 - return v->kvm->arch.vpic->output; 52 - } else 53 - return 0; 54 - } 55 - 56 - /* 57 - * check if there is injectable interrupt: 58 - * when virtual interrupt delivery enabled, 59 - * interrupt from apic will handled by hardware, 60 - * we don't need to check it here. 61 - */ 62 - int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v) 43 + int kvm_cpu_has_extint(struct kvm_vcpu *v) 63 44 { 64 45 /* 65 - * FIXME: interrupt.injected represents an interrupt that it's 46 + * FIXME: interrupt.injected represents an interrupt whose 66 47 * side-effects have already been applied (e.g. bit from IRR 67 48 * already moved to ISR). Therefore, it is incorrect to rely 68 49 * on interrupt.injected to know if there is a pending ··· 56 75 if (!lapic_in_kernel(v)) 57 76 return v->arch.interrupt.injected; 58 77 78 + if (!kvm_apic_accept_pic_intr(v)) 79 + return 0; 80 + 81 + if (irqchip_split(v->kvm)) 82 + return pending_userspace_extint(v); 83 + else 84 + return v->kvm->arch.vpic->output; 85 + } 86 + 87 + /* 88 + * check if there is injectable interrupt: 89 + * when virtual interrupt delivery enabled, 90 + * interrupt from apic will handled by hardware, 91 + * we don't need to check it here. 92 + */ 93 + int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v) 94 + { 59 95 if (kvm_cpu_has_extint(v)) 60 96 return 1; 61 97 ··· 89 91 */ 90 92 int kvm_cpu_has_interrupt(struct kvm_vcpu *v) 91 93 { 92 - /* 93 - * FIXME: interrupt.injected represents an interrupt that it's 94 - * side-effects have already been applied (e.g. bit from IRR 95 - * already moved to ISR). 
Therefore, it is incorrect to rely 96 - * on interrupt.injected to know if there is a pending 97 - * interrupt in the user-mode LAPIC. 98 - * This leads to nVMX/nSVM not be able to distinguish 99 - * if it should exit from L2 to L1 on EXTERNAL_INTERRUPT on 100 - * pending interrupt or should re-inject an injected 101 - * interrupt. 102 - */ 103 - if (!lapic_in_kernel(v)) 104 - return v->arch.interrupt.injected; 105 - 106 94 if (kvm_cpu_has_extint(v)) 107 95 return 1; 108 96 ··· 102 118 */ 103 119 static int kvm_cpu_get_extint(struct kvm_vcpu *v) 104 120 { 105 - if (kvm_cpu_has_extint(v)) { 106 - if (irqchip_split(v->kvm)) { 107 - int vector = v->arch.pending_external_vector; 108 - 109 - v->arch.pending_external_vector = -1; 110 - return vector; 111 - } else 112 - return kvm_pic_read_irq(v->kvm); /* PIC */ 113 - } else 121 + if (!kvm_cpu_has_extint(v)) { 122 + WARN_ON(!lapic_in_kernel(v)); 114 123 return -1; 124 + } 125 + 126 + if (!lapic_in_kernel(v)) 127 + return v->arch.interrupt.nr; 128 + 129 + if (irqchip_split(v->kvm)) { 130 + int vector = v->arch.pending_external_vector; 131 + 132 + v->arch.pending_external_vector = -1; 133 + return vector; 134 + } else 135 + return kvm_pic_read_irq(v->kvm); /* PIC */ 115 136 } 116 137 117 138 /* ··· 124 135 */ 125 136 int kvm_cpu_get_interrupt(struct kvm_vcpu *v) 126 137 { 127 - int vector; 128 - 129 - if (!lapic_in_kernel(v)) 130 - return v->arch.interrupt.nr; 131 - 132 - vector = kvm_cpu_get_extint(v); 133 - 138 + int vector = kvm_cpu_get_extint(v); 134 139 if (vector != -1) 135 140 return vector; /* PIC */ 136 141
+1 -1
arch/x86/kvm/lapic.c
··· 2465 2465 struct kvm_lapic *apic = vcpu->arch.apic; 2466 2466 u32 ppr; 2467 2467 2468 - if (!kvm_apic_hw_enabled(apic)) 2468 + if (!kvm_apic_present(vcpu)) 2469 2469 return -1; 2470 2470 2471 2471 __apic_update_ppr(apic, &ppr);
+1 -1
arch/x86/kvm/mmu/mmu.c
··· 3517 3517 { 3518 3518 u64 sptes[PT64_ROOT_MAX_LEVEL]; 3519 3519 struct rsvd_bits_validate *rsvd_check; 3520 - int root = vcpu->arch.mmu->root_level; 3520 + int root = vcpu->arch.mmu->shadow_root_level; 3521 3521 int leaf; 3522 3522 int level; 3523 3523 bool reserved = false;
+1 -1
arch/x86/kvm/svm/sev.c
··· 642 642 * Its safe to read more than we are asked, caller should ensure that 643 643 * destination has enough space. 644 644 */ 645 - src_paddr = round_down(src_paddr, 16); 646 645 offset = src_paddr & 15; 646 + src_paddr = round_down(src_paddr, 16); 647 647 sz = round_up(sz + offset, 16); 648 648 649 649 return __sev_issue_dbg_cmd(kvm, src_paddr, dst_paddr, sz, err, false);
+3 -1
arch/x86/kvm/svm/svm.c
··· 1309 1309 svm->avic_is_running = true; 1310 1310 1311 1311 svm->msrpm = svm_vcpu_alloc_msrpm(); 1312 - if (!svm->msrpm) 1312 + if (!svm->msrpm) { 1313 + err = -ENOMEM; 1313 1314 goto error_free_vmcb_page; 1315 + } 1314 1316 1315 1317 svm_vcpu_init_msrpm(vcpu, svm->msrpm); 1316 1318
+10 -8
arch/x86/kvm/x86.c
··· 4051 4051 4052 4052 static int kvm_cpu_accept_dm_intr(struct kvm_vcpu *vcpu) 4053 4053 { 4054 + /* 4055 + * We can accept userspace's request for interrupt injection 4056 + * as long as we have a place to store the interrupt number. 4057 + * The actual injection will happen when the CPU is able to 4058 + * deliver the interrupt. 4059 + */ 4060 + if (kvm_cpu_has_extint(vcpu)) 4061 + return false; 4062 + 4063 + /* Acknowledging ExtINT does not happen if LINT0 is masked. */ 4054 4064 return (!lapic_in_kernel(vcpu) || 4055 4065 kvm_apic_accept_pic_intr(vcpu)); 4056 4066 } 4057 4067 4058 - /* 4059 - * if userspace requested an interrupt window, check that the 4060 - * interrupt window is open. 4061 - * 4062 - * No need to exit to userspace if we already have an interrupt queued. 4063 - */ 4064 4068 static int kvm_vcpu_ready_for_interrupt_injection(struct kvm_vcpu *vcpu) 4065 4069 { 4066 4070 return kvm_arch_interrupt_allowed(vcpu) && 4067 - !kvm_cpu_has_interrupt(vcpu) && 4068 - !kvm_event_needs_reinjection(vcpu) && 4069 4071 kvm_cpu_accept_dm_intr(vcpu); 4070 4072 } 4071 4073
+2
arch/x86/mm/numa.c
··· 938 938 939 939 return meminfo_to_nid(&numa_reserved_meminfo, start); 940 940 } 941 + EXPORT_SYMBOL_GPL(phys_to_target_node); 941 942 942 943 int memory_add_physaddr_to_nid(u64 start) 943 944 { ··· 948 947 nid = numa_meminfo.blk[0].nid; 949 948 return nid; 950 949 } 950 + EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid); 951 951 #endif
+13 -11
arch/x86/platform/efi/efi_64.c
··· 78 78 gfp_mask = GFP_KERNEL | __GFP_ZERO; 79 79 efi_pgd = (pgd_t *)__get_free_pages(gfp_mask, PGD_ALLOCATION_ORDER); 80 80 if (!efi_pgd) 81 - return -ENOMEM; 81 + goto fail; 82 82 83 83 pgd = efi_pgd + pgd_index(EFI_VA_END); 84 84 p4d = p4d_alloc(&init_mm, pgd, EFI_VA_END); 85 - if (!p4d) { 86 - free_page((unsigned long)efi_pgd); 87 - return -ENOMEM; 88 - } 85 + if (!p4d) 86 + goto free_pgd; 89 87 90 88 pud = pud_alloc(&init_mm, p4d, EFI_VA_END); 91 - if (!pud) { 92 - if (pgtable_l5_enabled()) 93 - free_page((unsigned long) pgd_page_vaddr(*pgd)); 94 - free_pages((unsigned long)efi_pgd, PGD_ALLOCATION_ORDER); 95 - return -ENOMEM; 96 - } 89 + if (!pud) 90 + goto free_p4d; 97 91 98 92 efi_mm.pgd = efi_pgd; 99 93 mm_init_cpumask(&efi_mm); 100 94 init_new_context(NULL, &efi_mm); 101 95 102 96 return 0; 97 + 98 + free_p4d: 99 + if (pgtable_l5_enabled()) 100 + free_page((unsigned long)pgd_page_vaddr(*pgd)); 101 + free_pgd: 102 + free_pages((unsigned long)efi_pgd, PGD_ALLOCATION_ORDER); 103 + fail: 104 + return -ENOMEM; 103 105 } 104 106 105 107 /*
+11 -1
arch/x86/xen/spinlock.c
··· 93 93 94 94 void xen_uninit_lock_cpu(int cpu) 95 95 { 96 + int irq; 97 + 96 98 if (!xen_pvspin) 97 99 return; 98 100 99 - unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL); 101 + /* 102 + * When booting the kernel with 'mitigations=auto,nosmt', the secondary 103 + * CPUs are not activated, and lock_kicker_irq is not initialized. 104 + */ 105 + irq = per_cpu(lock_kicker_irq, cpu); 106 + if (irq == -1) 107 + return; 108 + 109 + unbind_from_irqhandler(irq, NULL); 100 110 per_cpu(lock_kicker_irq, cpu) = -1; 101 111 kfree(per_cpu(irq_name, cpu)); 102 112 per_cpu(irq_name, cpu) = NULL;
+1
block/blk-cgroup.c
··· 849 849 blkg_iostat_set(&blkg->iostat.cur, &tmp); 850 850 u64_stats_update_end(&blkg->iostat.sync); 851 851 } 852 + disk_put_part(part); 852 853 } 853 854 } 854 855
+6 -1
block/blk-flush.c
··· 225 225 /* release the tag's ownership to the req cloned from */ 226 226 spin_lock_irqsave(&fq->mq_flush_lock, flags); 227 227 228 - WRITE_ONCE(flush_rq->state, MQ_RQ_IDLE); 229 228 if (!refcount_dec_and_test(&flush_rq->ref)) { 230 229 fq->rq_status = error; 231 230 spin_unlock_irqrestore(&fq->mq_flush_lock, flags); 232 231 return; 233 232 } 234 233 234 + /* 235 + * Flush request has to be marked as IDLE when it is really ended 236 + * because its .end_io() is called from timeout code path too for 237 + * avoiding use-after-free. 238 + */ 239 + WRITE_ONCE(flush_rq->state, MQ_RQ_IDLE); 235 240 if (fq->rq_status != BLK_STS_OK) 236 241 error = fq->rq_status; 237 242
+7
block/keyslot-manager.c
··· 103 103 spin_lock_init(&ksm->idle_slots_lock); 104 104 105 105 slot_hashtable_size = roundup_pow_of_two(num_slots); 106 + /* 107 + * hash_ptr() assumes bits != 0, so ensure the hash table has at least 2 108 + * buckets. This only makes a difference when there is only 1 keyslot. 109 + */ 110 + if (slot_hashtable_size < 2) 111 + slot_hashtable_size = 2; 112 + 106 113 ksm->log_slot_ht_size = ilog2(slot_hashtable_size); 107 114 ksm->slot_hashtable = kvmalloc_array(slot_hashtable_size, 108 115 sizeof(ksm->slot_hashtable[0]),
+11 -1
drivers/accessibility/speakup/spk_ttyio.c
··· 49 49 50 50 if (!tty->ops->write) 51 51 return -EOPNOTSUPP; 52 + 53 + mutex_lock(&speakup_tty_mutex); 54 + if (speakup_tty) { 55 + mutex_unlock(&speakup_tty_mutex); 56 + return -EBUSY; 57 + } 52 58 speakup_tty = tty; 53 59 54 60 ldisc_data = kmalloc(sizeof(*ldisc_data), GFP_KERNEL); 55 - if (!ldisc_data) 61 + if (!ldisc_data) { 62 + speakup_tty = NULL; 63 + mutex_unlock(&speakup_tty_mutex); 56 64 return -ENOMEM; 65 + } 57 66 58 67 init_completion(&ldisc_data->completion); 59 68 ldisc_data->buf_free = true; 60 69 speakup_tty->disc_data = ldisc_data; 70 + mutex_unlock(&speakup_tty_mutex); 61 71 62 72 return 0; 63 73 }
+5 -3
drivers/acpi/arm64/iort.c
··· 44 44 * iort_set_fwnode() - Create iort_fwnode and use it to register 45 45 * iommu data in the iort_fwnode_list 46 46 * 47 - * @node: IORT table node associated with the IOMMU 47 + * @iort_node: IORT table node associated with the IOMMU 48 48 * @fwnode: fwnode associated with the IORT node 49 49 * 50 50 * Returns: 0 on success ··· 673 673 /** 674 674 * iort_get_device_domain() - Find MSI domain related to a device 675 675 * @dev: The device. 676 - * @req_id: Requester ID for the device. 676 + * @id: Requester ID for the device. 677 + * @bus_token: irq domain bus token. 677 678 * 678 679 * Returns: the MSI domain for this device, NULL otherwise 679 680 */ ··· 1137 1136 * 1138 1137 * @dev: device to configure 1139 1138 * @dma_addr: device DMA address result pointer 1140 - * @size: DMA range size result pointer 1139 + * @dma_size: DMA range size result pointer 1141 1140 */ 1142 1141 void iort_dma_setup(struct device *dev, u64 *dma_addr, u64 *dma_size) 1143 1142 { ··· 1527 1526 /** 1528 1527 * iort_add_platform_device() - Allocate a platform device for IORT node 1529 1528 * @node: Pointer to device ACPI IORT node 1529 + * @ops: Pointer to IORT device config struct 1530 1530 * 1531 1531 * Returns: 0 on success, <0 failure 1532 1532 */
+19 -10
drivers/bus/ti-sysc.c
··· 227 227 u32 sysc_mask, syss_done, rstval; 228 228 int syss_offset, error = 0; 229 229 230 + if (ddata->cap->regbits->srst_shift < 0) 231 + return 0; 232 + 230 233 syss_offset = ddata->offsets[SYSC_SYSSTATUS]; 231 234 sysc_mask = BIT(ddata->cap->regbits->srst_shift); 232 235 ··· 973 970 return error; 974 971 } 975 972 } 976 - error = sysc_wait_softreset(ddata); 977 - if (error) 978 - dev_warn(ddata->dev, "OCP softreset timed out\n"); 973 + /* 974 + * Some modules like i2c and hdq1w have unusable reset status unless 975 + * the module reset quirk is enabled. Skip status check on enable. 976 + */ 977 + if (!(ddata->cfg.quirks & SYSC_MODULE_QUIRK_ENA_RESETDONE)) { 978 + error = sysc_wait_softreset(ddata); 979 + if (error) 980 + dev_warn(ddata->dev, "OCP softreset timed out\n"); 981 + } 979 982 if (ddata->cfg.quirks & SYSC_QUIRK_OPT_CLKS_IN_RESET) 980 983 sysc_disable_opt_clocks(ddata); 981 984 ··· 1382 1373 SYSC_QUIRK("hdmi", 0, 0, 0x10, -ENODEV, 0x50030200, 0xffffffff, 1383 1374 SYSC_QUIRK_OPT_CLKS_NEEDED), 1384 1375 SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x00000006, 0xffffffff, 1385 - SYSC_MODULE_QUIRK_HDQ1W), 1376 + SYSC_MODULE_QUIRK_HDQ1W | SYSC_MODULE_QUIRK_ENA_RESETDONE), 1386 1377 SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x0000000a, 0xffffffff, 1387 - SYSC_MODULE_QUIRK_HDQ1W), 1378 + SYSC_MODULE_QUIRK_HDQ1W | SYSC_MODULE_QUIRK_ENA_RESETDONE), 1388 1379 SYSC_QUIRK("i2c", 0, 0, 0x20, 0x10, 0x00000036, 0x000000ff, 1389 - SYSC_MODULE_QUIRK_I2C), 1380 + SYSC_MODULE_QUIRK_I2C | SYSC_MODULE_QUIRK_ENA_RESETDONE), 1390 1381 SYSC_QUIRK("i2c", 0, 0, 0x20, 0x10, 0x0000003c, 0x000000ff, 1391 - SYSC_MODULE_QUIRK_I2C), 1382 + SYSC_MODULE_QUIRK_I2C | SYSC_MODULE_QUIRK_ENA_RESETDONE), 1392 1383 SYSC_QUIRK("i2c", 0, 0, 0x20, 0x10, 0x00000040, 0x000000ff, 1393 - SYSC_MODULE_QUIRK_I2C), 1384 + SYSC_MODULE_QUIRK_I2C | SYSC_MODULE_QUIRK_ENA_RESETDONE), 1394 1385 SYSC_QUIRK("i2c", 0, 0, 0x10, 0x90, 0x5040000a, 0xfffff0f0, 1395 - SYSC_MODULE_QUIRK_I2C), 1386 + SYSC_MODULE_QUIRK_I2C | 
SYSC_MODULE_QUIRK_ENA_RESETDONE), 1396 1387 SYSC_QUIRK("gpu", 0x50000000, 0x14, -ENODEV, -ENODEV, 0x00010201, 0xffffffff, 0), 1397 1388 SYSC_QUIRK("gpu", 0x50000000, 0xfe00, 0xfe10, -ENODEV, 0x40000000 , 0xffffffff, 1398 1389 SYSC_MODULE_QUIRK_SGX), ··· 2889 2880 2890 2881 if ((ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT) && 2891 2882 (ddata->cfg.quirks & SYSC_QUIRK_NO_IDLE)) 2892 - return -EBUSY; 2883 + return -ENXIO; 2893 2884 2894 2885 return 0; 2895 2886 }
+2 -2
drivers/counter/ti-eqep.c
··· 368 368 .reg_bits = 32, 369 369 .val_bits = 32, 370 370 .reg_stride = 4, 371 - .max_register = 0x24, 371 + .max_register = QUPRD, 372 372 }; 373 373 374 374 static const struct regmap_config ti_eqep_regmap16_config = { ··· 376 376 .reg_bits = 16, 377 377 .val_bits = 16, 378 378 .reg_stride = 2, 379 - .max_register = 0x1e, 379 + .max_register = QCPRDLAT, 380 380 }; 381 381 382 382 static int ti_eqep_probe(struct platform_device *pdev)
+3 -1
drivers/cpufreq/scmi-cpufreq.c
··· 236 236 if (!handle || !handle->perf_ops) 237 237 return -ENODEV; 238 238 239 + #ifdef CONFIG_COMMON_CLK 239 240 /* dummy clock provider as needed by OPP if clocks property is used */ 240 241 if (of_find_property(dev->of_node, "#clock-cells", NULL)) 241 242 devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, NULL); 243 + #endif 242 244 243 245 ret = cpufreq_register_driver(&scmi_cpufreq_driver); 244 246 if (ret) { 245 - dev_err(&sdev->dev, "%s: registering cpufreq failed, err: %d\n", 247 + dev_err(dev, "%s: registering cpufreq failed, err: %d\n", 246 248 __func__, ret); 247 249 } 248 250
-1
drivers/dax/Kconfig
··· 50 50 Say M if unsure. 51 51 52 52 config DEV_DAX_HMEM_DEVICES 53 - depends on NUMA_KEEP_MEMINFO # for phys_to_target_node() 54 53 depends on DEV_DAX_HMEM && DAX=y 55 54 def_bool y 56 55
+9 -8
drivers/dma/dmaengine.c
··· 1039 1039 static int __dma_async_device_channel_register(struct dma_device *device, 1040 1040 struct dma_chan *chan) 1041 1041 { 1042 - int rc = 0; 1042 + int rc; 1043 1043 1044 1044 chan->local = alloc_percpu(typeof(*chan->local)); 1045 1045 if (!chan->local) 1046 - goto err_out; 1046 + return -ENOMEM; 1047 1047 chan->dev = kzalloc(sizeof(*chan->dev), GFP_KERNEL); 1048 1048 if (!chan->dev) { 1049 - free_percpu(chan->local); 1050 - chan->local = NULL; 1051 - goto err_out; 1049 + rc = -ENOMEM; 1050 + goto err_free_local; 1052 1051 } 1053 1052 1054 1053 /* ··· 1060 1061 if (chan->chan_id < 0) { 1061 1062 pr_err("%s: unable to alloc ida for chan: %d\n", 1062 1063 __func__, chan->chan_id); 1063 - goto err_out; 1064 + rc = chan->chan_id; 1065 + goto err_free_dev; 1064 1066 } 1065 1067 1066 1068 chan->dev->device.class = &dma_devclass; ··· 1082 1082 mutex_lock(&device->chan_mutex); 1083 1083 ida_free(&device->chan_ida, chan->chan_id); 1084 1084 mutex_unlock(&device->chan_mutex); 1085 - err_out: 1086 - free_percpu(chan->local); 1085 + err_free_dev: 1087 1086 kfree(chan->dev); 1087 + err_free_local: 1088 + free_percpu(chan->local); 1088 1089 return rc; 1089 1090 } 1090 1091
+15 -16
drivers/dma/idxd/device.c
··· 271 271 resource_size_t start; 272 272 273 273 start = pci_resource_start(pdev, IDXD_WQ_BAR); 274 - start = start + wq->id * IDXD_PORTAL_SIZE; 274 + start += idxd_get_wq_portal_full_offset(wq->id, IDXD_PORTAL_LIMITED); 275 275 276 276 wq->dportal = devm_ioremap(dev, start, IDXD_PORTAL_SIZE); 277 277 if (!wq->dportal) ··· 295 295 int i, wq_offset; 296 296 297 297 lockdep_assert_held(&idxd->dev_lock); 298 - memset(&wq->wqcfg, 0, sizeof(wq->wqcfg)); 298 + memset(wq->wqcfg, 0, idxd->wqcfg_size); 299 299 wq->type = IDXD_WQT_NONE; 300 300 wq->size = 0; 301 301 wq->group = NULL; ··· 304 304 clear_bit(WQ_FLAG_DEDICATED, &wq->flags); 305 305 memset(wq->name, 0, WQ_NAME_SIZE); 306 306 307 - for (i = 0; i < 8; i++) { 308 - wq_offset = idxd->wqcfg_offset + wq->id * 32 + i * sizeof(u32); 307 + for (i = 0; i < WQCFG_STRIDES(idxd); i++) { 308 + wq_offset = WQCFG_OFFSET(idxd, wq->id, i); 309 309 iowrite32(0, idxd->reg_base + wq_offset); 310 310 dev_dbg(dev, "WQ[%d][%d][%#x]: %#x\n", 311 311 wq->id, i, wq_offset, ··· 539 539 if (!wq->group) 540 540 return 0; 541 541 542 - memset(&wq->wqcfg, 0, sizeof(union wqcfg)); 542 + memset(wq->wqcfg, 0, idxd->wqcfg_size); 543 543 544 544 /* byte 0-3 */ 545 - wq->wqcfg.wq_size = wq->size; 545 + wq->wqcfg->wq_size = wq->size; 546 546 547 547 if (wq->size == 0) { 548 548 dev_warn(dev, "Incorrect work queue size: 0\n"); ··· 550 550 } 551 551 552 552 /* bytes 4-7 */ 553 - wq->wqcfg.wq_thresh = wq->threshold; 553 + wq->wqcfg->wq_thresh = wq->threshold; 554 554 555 555 /* byte 8-11 */ 556 - wq->wqcfg.priv = !!(wq->type == IDXD_WQT_KERNEL); 557 - wq->wqcfg.mode = 1; 558 - 559 - wq->wqcfg.priority = wq->priority; 556 + wq->wqcfg->priv = !!(wq->type == IDXD_WQT_KERNEL); 557 + wq->wqcfg->mode = 1; 558 + wq->wqcfg->priority = wq->priority; 560 559 561 560 /* bytes 12-15 */ 562 - wq->wqcfg.max_xfer_shift = ilog2(wq->max_xfer_bytes); 563 - wq->wqcfg.max_batch_shift = ilog2(wq->max_batch_size); 561 + wq->wqcfg->max_xfer_shift = ilog2(wq->max_xfer_bytes); 
562 + wq->wqcfg->max_batch_shift = ilog2(wq->max_batch_size); 564 563 565 564 dev_dbg(dev, "WQ %d CFGs\n", wq->id); 566 - for (i = 0; i < 8; i++) { 567 - wq_offset = idxd->wqcfg_offset + wq->id * 32 + i * sizeof(u32); 568 - iowrite32(wq->wqcfg.bits[i], idxd->reg_base + wq_offset); 565 + for (i = 0; i < WQCFG_STRIDES(idxd); i++) { 566 + wq_offset = WQCFG_OFFSET(idxd, wq->id, i); 567 + iowrite32(wq->wqcfg->bits[i], idxd->reg_base + wq_offset); 569 568 dev_dbg(dev, "WQ[%d][%d][%#x]: %#x\n", 570 569 wq->id, i, wq_offset, 571 570 ioread32(idxd->reg_base + wq_offset));
+2 -1
drivers/dma/idxd/idxd.h
··· 103 103 u32 priority; 104 104 enum idxd_wq_state state; 105 105 unsigned long flags; 106 - union wqcfg wqcfg; 106 + union wqcfg *wqcfg; 107 107 u32 vec_ptr; /* interrupt steering */ 108 108 struct dsa_hw_desc **hw_descs; 109 109 int num_descs; ··· 183 183 int max_wq_size; 184 184 int token_limit; 185 185 int nr_tokens; /* non-reserved tokens */ 186 + unsigned int wqcfg_size; 186 187 187 188 union sw_err_reg sw_err; 188 189 wait_queue_head_t cmd_waitq;
+5
drivers/dma/idxd/init.c
··· 178 178 wq->idxd_cdev.minor = -1; 179 179 wq->max_xfer_bytes = idxd->max_xfer_bytes; 180 180 wq->max_batch_size = idxd->max_batch_size; 181 + wq->wqcfg = devm_kzalloc(dev, idxd->wqcfg_size, GFP_KERNEL); 182 + if (!wq->wqcfg) 183 + return -ENOMEM; 181 184 } 182 185 183 186 for (i = 0; i < idxd->max_engines; i++) { ··· 254 251 dev_dbg(dev, "total workqueue size: %u\n", idxd->max_wq_size); 255 252 idxd->max_wqs = idxd->hw.wq_cap.num_wqs; 256 253 dev_dbg(dev, "max workqueues: %u\n", idxd->max_wqs); 254 + idxd->wqcfg_size = 1 << (idxd->hw.wq_cap.wqcfg_size + IDXD_WQCFG_MIN); 255 + dev_dbg(dev, "wqcfg size: %u\n", idxd->wqcfg_size); 257 256 258 257 /* reading operation capabilities */ 259 258 for (i = 0; i < 4; i++) {
+23 -2
drivers/dma/idxd/registers.h
··· 8 8 9 9 #define IDXD_MMIO_BAR 0 10 10 #define IDXD_WQ_BAR 2 11 - #define IDXD_PORTAL_SIZE 0x4000 11 + #define IDXD_PORTAL_SIZE PAGE_SIZE 12 12 13 13 /* MMIO Device BAR0 Registers */ 14 14 #define IDXD_VER_OFFSET 0x00 ··· 43 43 struct { 44 44 u64 total_wq_size:16; 45 45 u64 num_wqs:8; 46 - u64 rsvd:24; 46 + u64 wqcfg_size:4; 47 + u64 rsvd:20; 47 48 u64 shared_mode:1; 48 49 u64 dedicated_mode:1; 49 50 u64 rsvd2:1; ··· 56 55 u64 bits; 57 56 } __packed; 58 57 #define IDXD_WQCAP_OFFSET 0x20 58 + #define IDXD_WQCFG_MIN 5 59 59 60 60 union group_cap_reg { 61 61 struct { ··· 335 333 }; 336 334 u32 bits[8]; 337 335 } __packed; 336 + 337 + /* 338 + * This macro calculates the offset into the WQCFG register 339 + * idxd - struct idxd * 340 + * n - wq id 341 + * ofs - the index of the 32b dword for the config register 342 + * 343 + * The WQCFG register block is divided into groups per each wq. The n index 344 + * allows us to move to the register group that's for that particular wq. 345 + * Each register is 32bits. The ofs gives us the number of register to access. 346 + */ 347 + #define WQCFG_OFFSET(_idxd_dev, n, ofs) \ 348 + ({\ 349 + typeof(_idxd_dev) __idxd_dev = (_idxd_dev); \ 350 + (__idxd_dev)->wqcfg_offset + (n) * (__idxd_dev)->wqcfg_size + sizeof(u32) * (ofs); \ 351 + }) 352 + 353 + #define WQCFG_STRIDES(_idxd_dev) ((_idxd_dev)->wqcfg_size / sizeof(u32)) 354 + 338 355 #endif
+1 -1
drivers/dma/idxd/submit.c
··· 74 74 if (idxd->state != IDXD_DEV_ENABLED) 75 75 return -EIO; 76 76 77 - portal = wq->dportal + idxd_get_wq_portal_offset(IDXD_PORTAL_UNLIMITED); 77 + portal = wq->dportal; 78 78 /* 79 79 * The wmb() flushes writes to coherent DMA data before possibly 80 80 * triggering a DMA read. The wmb() is necessary even on UP because
-10
drivers/dma/ioat/dca.c
··· 40 40 #define DCA2_TAG_MAP_BYTE3 0x82 41 41 #define DCA2_TAG_MAP_BYTE4 0x82 42 42 43 - /* verify if tag map matches expected values */ 44 - static inline int dca2_tag_map_valid(u8 *tag_map) 45 - { 46 - return ((tag_map[0] == DCA2_TAG_MAP_BYTE0) && 47 - (tag_map[1] == DCA2_TAG_MAP_BYTE1) && 48 - (tag_map[2] == DCA2_TAG_MAP_BYTE2) && 49 - (tag_map[3] == DCA2_TAG_MAP_BYTE3) && 50 - (tag_map[4] == DCA2_TAG_MAP_BYTE4)); 51 - } 52 - 53 43 /* 54 44 * "Legacy" DCA systems do not implement the DCA register set in the 55 45 * I/OAT device. Software needs direct support for their tag mappings.
+1 -1
drivers/dma/pl330.c
··· 2799 2799 * If burst size is smaller than bus width then make sure we only 2800 2800 * transfer one at a time to avoid a burst stradling an MFIFO entry. 2801 2801 */ 2802 - if (desc->rqcfg.brst_size * 8 < pl330->pcfg.data_bus_width) 2802 + if (burst * 8 < pl330->pcfg.data_bus_width) 2803 2803 desc->rqcfg.brst_len = 1; 2804 2804 2805 2805 desc->bytes_requested = len;
+1 -1
drivers/dma/ti/k3-udma-private.c
··· 83 83 #define XUDMA_GET_PUT_RESOURCE(res) \ 84 84 struct udma_##res *xudma_##res##_get(struct udma_dev *ud, int id) \ 85 85 { \ 86 - return __udma_reserve_##res(ud, false, id); \ 86 + return __udma_reserve_##res(ud, UDMA_TP_NORMAL, id); \ 87 87 } \ 88 88 EXPORT_SYMBOL(xudma_##res##_get); \ 89 89 \
+24 -13
drivers/dma/ti/omap-dma.c
··· 1522 1522 } 1523 1523 } 1524 1524 1525 + /* Currently used by omap2 & 3 to block deeper SoC idle states */ 1526 + static bool omap_dma_busy(struct omap_dmadev *od) 1527 + { 1528 + struct omap_chan *c; 1529 + int lch = -1; 1530 + 1531 + while (1) { 1532 + lch = find_next_bit(od->lch_bitmap, od->lch_count, lch + 1); 1533 + if (lch >= od->lch_count) 1534 + break; 1535 + c = od->lch_map[lch]; 1536 + if (!c) 1537 + continue; 1538 + if (omap_dma_chan_read(c, CCR) & CCR_ENABLE) 1539 + return true; 1540 + } 1541 + 1542 + return false; 1543 + } 1544 + 1525 1545 /* Currently only used for omap2. For omap1, also a check for lcd_dma is needed */ 1526 1546 static int omap_dma_busy_notifier(struct notifier_block *nb, 1527 1547 unsigned long cmd, void *v) 1528 1548 { 1529 1549 struct omap_dmadev *od; 1530 - struct omap_chan *c; 1531 - int lch = -1; 1532 1550 1533 1551 od = container_of(nb, struct omap_dmadev, nb); 1534 1552 1535 1553 switch (cmd) { 1536 1554 case CPU_CLUSTER_PM_ENTER: 1537 - while (1) { 1538 - lch = find_next_bit(od->lch_bitmap, od->lch_count, 1539 - lch + 1); 1540 - if (lch >= od->lch_count) 1541 - break; 1542 - c = od->lch_map[lch]; 1543 - if (!c) 1544 - continue; 1545 - if (omap_dma_chan_read(c, CCR) & CCR_ENABLE) 1546 - return NOTIFY_BAD; 1547 - } 1555 + if (omap_dma_busy(od)) 1556 + return NOTIFY_BAD; 1548 1557 break; 1549 1558 case CPU_CLUSTER_PM_ENTER_FAILED: 1550 1559 case CPU_CLUSTER_PM_EXIT: ··· 1604 1595 1605 1596 switch (cmd) { 1606 1597 case CPU_CLUSTER_PM_ENTER: 1598 + if (omap_dma_busy(od)) 1599 + return NOTIFY_BAD; 1607 1600 omap_dma_context_save(od); 1608 1601 break; 1609 1602 case CPU_CLUSTER_PM_ENTER_FAILED:
+30 -10
drivers/dma/xilinx/xilinx_dma.c
··· 517 517 #define to_dma_tx_descriptor(tx) \ 518 518 container_of(tx, struct xilinx_dma_tx_descriptor, async_tx) 519 519 #define xilinx_dma_poll_timeout(chan, reg, val, cond, delay_us, timeout_us) \ 520 - readl_poll_timeout(chan->xdev->regs + chan->ctrl_offset + reg, val, \ 521 - cond, delay_us, timeout_us) 520 + readl_poll_timeout_atomic(chan->xdev->regs + chan->ctrl_offset + reg, \ 521 + val, cond, delay_us, timeout_us) 522 522 523 523 /* IO accessors */ 524 524 static inline u32 dma_read(struct xilinx_dma_chan *chan, u32 reg) ··· 948 948 { 949 949 struct xilinx_cdma_tx_segment *cdma_seg; 950 950 struct xilinx_axidma_tx_segment *axidma_seg; 951 + struct xilinx_aximcdma_tx_segment *aximcdma_seg; 951 952 struct xilinx_cdma_desc_hw *cdma_hw; 952 953 struct xilinx_axidma_desc_hw *axidma_hw; 954 + struct xilinx_aximcdma_desc_hw *aximcdma_hw; 953 955 struct list_head *entry; 954 956 u32 residue = 0; 955 957 ··· 963 961 cdma_hw = &cdma_seg->hw; 964 962 residue += (cdma_hw->control - cdma_hw->status) & 965 963 chan->xdev->max_buffer_len; 966 - } else { 964 + } else if (chan->xdev->dma_config->dmatype == 965 + XDMA_TYPE_AXIDMA) { 967 966 axidma_seg = list_entry(entry, 968 967 struct xilinx_axidma_tx_segment, 969 968 node); 970 969 axidma_hw = &axidma_seg->hw; 971 970 residue += (axidma_hw->control - axidma_hw->status) & 972 971 chan->xdev->max_buffer_len; 972 + } else { 973 + aximcdma_seg = 974 + list_entry(entry, 975 + struct xilinx_aximcdma_tx_segment, 976 + node); 977 + aximcdma_hw = &aximcdma_seg->hw; 978 + residue += 979 + (aximcdma_hw->control - aximcdma_hw->status) & 980 + chan->xdev->max_buffer_len; 973 981 } 974 982 } 975 983 ··· 1147 1135 upper_32_bits(chan->seg_p + sizeof(*chan->seg_mv) * 1148 1136 ((i + 1) % XILINX_DMA_NUM_DESCS)); 1149 1137 chan->seg_mv[i].phys = chan->seg_p + 1150 - sizeof(*chan->seg_v) * i; 1138 + sizeof(*chan->seg_mv) * i; 1151 1139 list_add_tail(&chan->seg_mv[i].node, 1152 1140 &chan->free_seg_list); 1153 1141 } ··· 1572 1560 static 
void xilinx_mcdma_start_transfer(struct xilinx_dma_chan *chan) 1573 1561 { 1574 1562 struct xilinx_dma_tx_descriptor *head_desc, *tail_desc; 1575 - struct xilinx_axidma_tx_segment *tail_segment; 1563 + struct xilinx_aximcdma_tx_segment *tail_segment; 1576 1564 u32 reg; 1577 1565 1578 1566 /* ··· 1594 1582 tail_desc = list_last_entry(&chan->pending_list, 1595 1583 struct xilinx_dma_tx_descriptor, node); 1596 1584 tail_segment = list_last_entry(&tail_desc->segments, 1597 - struct xilinx_axidma_tx_segment, node); 1585 + struct xilinx_aximcdma_tx_segment, node); 1598 1586 1599 1587 reg = dma_ctrl_read(chan, XILINX_MCDMA_CHAN_CR_OFFSET(chan->tdest)); 1600 1588 ··· 1876 1864 struct xilinx_vdma_tx_segment *tail_segment; 1877 1865 struct xilinx_dma_tx_descriptor *tail_desc; 1878 1866 struct xilinx_axidma_tx_segment *axidma_tail_segment; 1867 + struct xilinx_aximcdma_tx_segment *aximcdma_tail_segment; 1879 1868 struct xilinx_cdma_tx_segment *cdma_tail_segment; 1880 1869 1881 1870 if (list_empty(&chan->pending_list)) ··· 1898 1885 struct xilinx_cdma_tx_segment, 1899 1886 node); 1900 1887 cdma_tail_segment->hw.next_desc = (u32)desc->async_tx.phys; 1901 - } else { 1888 + } else if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) { 1902 1889 axidma_tail_segment = list_last_entry(&tail_desc->segments, 1903 1890 struct xilinx_axidma_tx_segment, 1904 1891 node); 1905 1892 axidma_tail_segment->hw.next_desc = (u32)desc->async_tx.phys; 1893 + } else { 1894 + aximcdma_tail_segment = 1895 + list_last_entry(&tail_desc->segments, 1896 + struct xilinx_aximcdma_tx_segment, 1897 + node); 1898 + aximcdma_tail_segment->hw.next_desc = (u32)desc->async_tx.phys; 1906 1899 } 1907 1900 1908 1901 /* ··· 2855 2836 chan->stop_transfer = xilinx_dma_stop_transfer; 2856 2837 } 2857 2838 2858 - /* check if SG is enabled (only for AXIDMA and CDMA) */ 2839 + /* check if SG is enabled (only for AXIDMA, AXIMCDMA, and CDMA) */ 2859 2840 if (xdev->dma_config->dmatype != XDMA_TYPE_VDMA) { 2860 - if 
(dma_ctrl_read(chan, XILINX_DMA_REG_DMASR) & 2861 - XILINX_DMA_DMASR_SG_MASK) 2841 + if (xdev->dma_config->dmatype == XDMA_TYPE_AXIMCDMA || 2842 + dma_ctrl_read(chan, XILINX_DMA_REG_DMASR) & 2843 + XILINX_DMA_DMASR_SG_MASK) 2862 2844 chan->has_sg = true; 2863 2845 dev_dbg(chan->dev, "ch %d: SG %s\n", chan->id, 2864 2846 chan->has_sg ? "enabled" : "disabled");
+50 -15
drivers/firmware/xilinx/zynqmp.c
··· 20 20 #include <linux/of_platform.h> 21 21 #include <linux/slab.h> 22 22 #include <linux/uaccess.h> 23 + #include <linux/hashtable.h> 23 24 24 25 #include <linux/firmware/xlnx-zynqmp.h> 25 26 #include "zynqmp-debug.h" 26 27 28 + /* Max HashMap Order for PM API feature check (1<<7 = 128) */ 29 + #define PM_API_FEATURE_CHECK_MAX_ORDER 7 30 + 27 31 static bool feature_check_enabled; 28 - static u32 zynqmp_pm_features[PM_API_MAX]; 32 + DEFINE_HASHTABLE(pm_api_features_map, PM_API_FEATURE_CHECK_MAX_ORDER); 33 + 34 + /** 35 + * struct pm_api_feature_data - PM API Feature data 36 + * @pm_api_id: PM API Id, used as key to index into hashmap 37 + * @feature_status: status of PM API feature: valid, invalid 38 + * @hentry: hlist_node that hooks this entry into hashtable 39 + */ 40 + struct pm_api_feature_data { 41 + u32 pm_api_id; 42 + int feature_status; 43 + struct hlist_node hentry; 44 + }; 29 45 30 46 static const struct mfd_cell firmware_devs[] = { 31 47 { ··· 158 142 int ret; 159 143 u32 ret_payload[PAYLOAD_ARG_CNT]; 160 144 u64 smc_arg[2]; 145 + struct pm_api_feature_data *feature_data; 161 146 162 147 if (!feature_check_enabled) 163 148 return 0; 164 149 165 - /* Return value if feature is already checked */ 166 - if (api_id > ARRAY_SIZE(zynqmp_pm_features)) 167 - return PM_FEATURE_INVALID; 150 + /* Check for existing entry in hash table for given api */ 151 + hash_for_each_possible(pm_api_features_map, feature_data, hentry, 152 + api_id) { 153 + if (feature_data->pm_api_id == api_id) 154 + return feature_data->feature_status; 155 + } 168 156 169 - if (zynqmp_pm_features[api_id] != PM_FEATURE_UNCHECKED) 170 - return zynqmp_pm_features[api_id]; 157 + /* Add new entry if not present */ 158 + feature_data = kmalloc(sizeof(*feature_data), GFP_KERNEL); 159 + if (!feature_data) 160 + return -ENOMEM; 171 161 162 + feature_data->pm_api_id = api_id; 172 163 smc_arg[0] = PM_SIP_SVC | PM_FEATURE_CHECK; 173 164 smc_arg[1] = api_id; 174 165 175 166 ret = do_fw_call(smc_arg[0], 
smc_arg[1], 0, ret_payload); 176 - if (ret) { 177 - zynqmp_pm_features[api_id] = PM_FEATURE_INVALID; 178 - return PM_FEATURE_INVALID; 179 - } 167 + if (ret) 168 + ret = -EOPNOTSUPP; 169 + else 170 + ret = ret_payload[1]; 180 171 181 - zynqmp_pm_features[api_id] = ret_payload[1]; 172 + feature_data->feature_status = ret; 173 + hash_add(pm_api_features_map, &feature_data->hentry, api_id); 182 174 183 - return zynqmp_pm_features[api_id]; 175 + return ret; 184 176 } 185 177 186 178 /** ··· 224 200 * Make sure to stay in x0 register 225 201 */ 226 202 u64 smc_arg[4]; 203 + int ret; 227 204 228 - if (zynqmp_pm_feature(pm_api_id) == PM_FEATURE_INVALID) 229 - return -ENOTSUPP; 205 + /* Check if feature is supported or not */ 206 + ret = zynqmp_pm_feature(pm_api_id); 207 + if (ret < 0) 208 + return ret; 230 209 231 210 smc_arg[0] = PM_SIP_SVC | pm_api_id; 232 211 smc_arg[1] = ((u64)arg1 << 32) | arg0; ··· 642 615 */ 643 616 int zynqmp_pm_sd_dll_reset(u32 node_id, u32 type) 644 617 { 645 - return zynqmp_pm_invoke_fn(PM_IOCTL, node_id, IOCTL_SET_SD_TAPDELAY, 618 + return zynqmp_pm_invoke_fn(PM_IOCTL, node_id, IOCTL_SD_DLL_RESET, 646 619 type, 0, NULL); 647 620 } 648 621 EXPORT_SYMBOL_GPL(zynqmp_pm_sd_dll_reset); ··· 1279 1252 1280 1253 static int zynqmp_firmware_remove(struct platform_device *pdev) 1281 1254 { 1255 + struct pm_api_feature_data *feature_data; 1256 + int i; 1257 + 1282 1258 mfd_remove_devices(&pdev->dev); 1283 1259 zynqmp_pm_api_debugfs_exit(); 1260 + 1261 + hash_for_each(pm_api_features_map, i, feature_data, hentry) { 1262 + hash_del(&feature_data->hentry); 1263 + kfree(feature_data); 1264 + } 1284 1265 1285 1266 return 0; 1286 1267 }
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 4852 4852 if (!amdgpu_device_supports_baco(adev_to_drm(adev))) 4853 4853 return -ENOTSUPP; 4854 4854 4855 - if (ras && ras->supported) 4855 + if (ras && ras->supported && adev->nbio.funcs->enable_doorbell_interrupt) 4856 4856 adev->nbio.funcs->enable_doorbell_interrupt(adev, false); 4857 4857 4858 4858 return amdgpu_dpm_baco_enter(adev); ··· 4871 4871 if (ret) 4872 4872 return ret; 4873 4873 4874 - if (ras && ras->supported) 4874 + if (ras && ras->supported && adev->nbio.funcs->enable_doorbell_interrupt) 4875 4875 adev->nbio.funcs->enable_doorbell_interrupt(adev, true); 4876 4876 4877 4877 return 0;
+4 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 1055 1055 {0x1002, 0x15dd, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RAVEN|AMD_IS_APU}, 1056 1056 {0x1002, 0x15d8, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RAVEN|AMD_IS_APU}, 1057 1057 /* Arcturus */ 1058 - {0x1002, 0x738C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS|AMD_EXP_HW_SUPPORT}, 1059 - {0x1002, 0x7388, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS|AMD_EXP_HW_SUPPORT}, 1060 - {0x1002, 0x738E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS|AMD_EXP_HW_SUPPORT}, 1061 - {0x1002, 0x7390, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS|AMD_EXP_HW_SUPPORT}, 1058 + {0x1002, 0x738C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS}, 1059 + {0x1002, 0x7388, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS}, 1060 + {0x1002, 0x738E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS}, 1061 + {0x1002, 0x7390, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARCTURUS}, 1062 1062 /* Navi10 */ 1063 1063 {0x1002, 0x7310, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10}, 1064 1064 {0x1002, 0x7312, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_NAVI10},
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 69 69 70 70 static int amdgpu_ttm_init_on_chip(struct amdgpu_device *adev, 71 71 unsigned int type, 72 - uint64_t size) 72 + uint64_t size_in_page) 73 73 { 74 74 return ttm_range_man_init(&adev->mman.bdev, type, 75 - false, size >> PAGE_SHIFT); 75 + false, size_in_page); 76 76 } 77 77 78 78 /**
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
··· 67 67 unsigned harvest_config; 68 68 /* store image width to adjust nb memory state */ 69 69 unsigned decode_image_width; 70 + uint32_t keyselect; 70 71 }; 71 72 72 73 int amdgpu_uvd_sw_init(struct amdgpu_device *adev);
+2
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
··· 3105 3105 SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG3, 0xffffffff, 0x00000280),
3106 3106 SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x00800000),
3107 3107 SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_EXCEPTION_CONTROL, 0x7fff0f1f, 0x00b80000),
3108 + SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCEA_SDP_TAG_RESERVE0, 0xffffffff, 0x10100100),
3109 + SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCEA_SDP_TAG_RESERVE1, 0xffffffff, 0x17000088),
3108 3110 SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL_Sienna_Cichlid, 0x1ff1ffff, 0x00000500),
3109 3111 SOC15_REG_GOLDEN_VALUE(GC, 0, mmGE_PC_CNTL, 0x003fffff, 0x00280400),
3110 3112 SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2A_ADDR_MATCH_MASK, 0xffffffff, 0xffffffcf),
+11 -9
drivers/gpu/drm/amd/amdgpu/uvd_v3_1.c
··· 277 277 */
278 278 static int uvd_v3_1_fw_validate(struct amdgpu_device *adev)
279 279 {
280 - void *ptr;
281 - uint32_t ucode_len, i;
282 - uint32_t keysel;
283 -
284 - ptr = adev->uvd.inst[0].cpu_addr;
285 - ptr += 192 + 16;
286 - memcpy(&ucode_len, ptr, 4);
287 - ptr += ucode_len;
288 - memcpy(&keysel, ptr, 4);
280 + int i;
281 + uint32_t keysel = adev->uvd.keyselect;
289 282
290 283 WREG32(mmUVD_FW_START, keysel);
291 284
··· 543 550 struct amdgpu_ring *ring;
544 551 struct amdgpu_device *adev = (struct amdgpu_device *)handle;
545 552 int r;
553 + void *ptr;
554 + uint32_t ucode_len;
546 555
547 556 /* UVD TRAP */
548 557 r = amdgpu_irq_add_id(adev, AMDGPU_IRQ_CLIENTID_LEGACY, 124, &adev->uvd.inst->irq);
··· 565 570 r = amdgpu_uvd_resume(adev);
566 571 if (r)
567 572 return r;
573 +
574 + /* Retrieve the firmware validate key */
575 + ptr = adev->uvd.inst[0].cpu_addr;
576 + ptr += 192 + 16;
577 + memcpy(&ucode_len, ptr, 4);
578 + ptr += ucode_len;
579 + memcpy(&adev->uvd.keyselect, ptr, 4);
568 580
569 581 r = amdgpu_uvd_entity_init(adev);
570 582
+3 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 1041 1041 amdgpu_dm_init_color_mod(); 1042 1042 1043 1043 #ifdef CONFIG_DRM_AMD_DC_HDCP 1044 - if (adev->asic_type >= CHIP_RAVEN) { 1044 + if (adev->dm.dc->caps.max_links > 0 && adev->asic_type >= CHIP_RAVEN) { 1045 1045 adev->dm.hdcp_workqueue = hdcp_create_workqueue(adev, &init_params.cp_psp, adev->dm.dc); 1046 1046 1047 1047 if (!adev->dm.hdcp_workqueue) ··· 7506 7506 bool mode_set_reset_required = false; 7507 7507 7508 7508 drm_atomic_helper_update_legacy_modeset_state(dev, state); 7509 - drm_atomic_helper_calc_timestamping_constants(state); 7510 7509 7511 7510 dm_state = dm_atomic_get_new_state(state); 7512 7511 if (dm_state && dm_state->context) { ··· 7531 7532 dc_stream_release(dm_old_crtc_state->stream); 7532 7533 } 7533 7534 } 7535 + 7536 + drm_atomic_helper_calc_timestamping_constants(state); 7534 7537 7535 7538 /* update changed items */ 7536 7539 for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+2 -2
drivers/gpu/drm/amd/display/dc/irq/dcn20/irq_service_dcn20.c
··· 299 299 pflip_int_entry(1), 300 300 pflip_int_entry(2), 301 301 pflip_int_entry(3), 302 - [DC_IRQ_SOURCE_PFLIP5] = dummy_irq_entry(), 303 - [DC_IRQ_SOURCE_PFLIP6] = dummy_irq_entry(), 302 + pflip_int_entry(4), 303 + pflip_int_entry(5), 304 304 [DC_IRQ_SOURCE_PFLIP_UNDERLAY0] = dummy_irq_entry(), 305 305 gpio_pad_int_entry(0), 306 306 gpio_pad_int_entry(1),
+16 -1
drivers/gpu/drm/ast/ast_mode.c
··· 742 742 case DRM_MODE_DPMS_SUSPEND: 743 743 if (ast->tx_chip_type == AST_TX_DP501) 744 744 ast_set_dp501_video_output(crtc->dev, 1); 745 - ast_crtc_load_lut(ast, crtc); 746 745 break; 747 746 case DRM_MODE_DPMS_OFF: 748 747 if (ast->tx_chip_type == AST_TX_DP501) ··· 774 775 return -EINVAL; 775 776 776 777 return 0; 778 + } 779 + 780 + static void 781 + ast_crtc_helper_atomic_flush(struct drm_crtc *crtc, struct drm_crtc_state *old_crtc_state) 782 + { 783 + struct ast_private *ast = to_ast_private(crtc->dev); 784 + struct ast_crtc_state *ast_crtc_state = to_ast_crtc_state(crtc->state); 785 + struct ast_crtc_state *old_ast_crtc_state = to_ast_crtc_state(old_crtc_state); 786 + 787 + /* 788 + * The gamma LUT has to be reloaded after changing the primary 789 + * plane's color format. 790 + */ 791 + if (old_ast_crtc_state->format != ast_crtc_state->format) 792 + ast_crtc_load_lut(ast, crtc); 777 793 } 778 794 779 795 static void ··· 844 830 845 831 static const struct drm_crtc_helper_funcs ast_crtc_helper_funcs = { 846 832 .atomic_check = ast_crtc_helper_atomic_check, 833 + .atomic_flush = ast_crtc_helper_atomic_flush, 847 834 .atomic_enable = ast_crtc_helper_atomic_enable, 848 835 .atomic_disable = ast_crtc_helper_atomic_disable, 849 836 };
-6
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
··· 2327 2327 { 2328 2328 enum drm_connector_status result; 2329 2329 2330 - mutex_lock(&hdmi->mutex); 2331 - hdmi->force = DRM_FORCE_UNSPECIFIED; 2332 - dw_hdmi_update_power(hdmi); 2333 - dw_hdmi_update_phy_mask(hdmi); 2334 - mutex_unlock(&hdmi->mutex); 2335 - 2336 2330 result = hdmi->phy.ops->read_hpd(hdmi, hdmi->phy.data); 2337 2331 2338 2332 mutex_lock(&hdmi->mutex);
+1 -1
drivers/gpu/drm/drm_gem_vram_helper.c
··· 140 140 unsigned int c = 0; 141 141 142 142 if (pl_flag & DRM_GEM_VRAM_PL_FLAG_TOPDOWN) 143 - pl_flag = TTM_PL_FLAG_TOPDOWN; 143 + invariant_flags = TTM_PL_FLAG_TOPDOWN; 144 144 145 145 gbo->placement.placement = gbo->placements; 146 146 gbo->placement.busy_placement = gbo->placements;
+2 -1
drivers/gpu/drm/exynos/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 config DRM_EXYNOS 3 3 tristate "DRM Support for Samsung SoC Exynos Series" 4 - depends on OF && DRM && (ARCH_S3C64XX || ARCH_S5PV210 || ARCH_EXYNOS || ARCH_MULTIPLATFORM || COMPILE_TEST) 4 + depends on OF && DRM && COMMON_CLK 5 + depends on ARCH_S3C64XX || ARCH_S5PV210 || ARCH_EXYNOS || ARCH_MULTIPLATFORM || COMPILE_TEST 5 6 depends on MMU 6 7 select DRM_KMS_HELPER 7 8 select VIDEOMODE_HELPERS
+2 -1
drivers/gpu/drm/i915/display/intel_display.c
··· 12878 12878 case 10 ... 11: 12879 12879 bpp = 10 * 3; 12880 12880 break; 12881 - case 12: 12881 + case 12 ... 16: 12882 12882 bpp = 12 * 3; 12883 12883 break; 12884 12884 default: 12885 + MISSING_CASE(conn_state->max_bpc); 12885 12886 return -EINVAL; 12886 12887 } 12887 12888
+92 -51
drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
··· 30 30 #include "i915_trace.h"
31 31 #include "intel_breadcrumbs.h"
32 32 #include "intel_context.h"
33 + #include "intel_engine_pm.h"
33 34 #include "intel_gt_pm.h"
34 35 #include "intel_gt_requests.h"
35 36
36 - static void irq_enable(struct intel_engine_cs *engine)
37 + static bool irq_enable(struct intel_engine_cs *engine)
37 38 {
38 39 if (!engine->irq_enable)
39 - return;
40 + return false;
40 41
41 42 /* Caller disables interrupts */
42 43 spin_lock(&engine->gt->irq_lock);
43 44 engine->irq_enable(engine);
44 45 spin_unlock(&engine->gt->irq_lock);
46 +
47 + return true;
45 48 }
46 49
47 50 static void irq_disable(struct intel_engine_cs *engine)
··· 60 57
61 58 static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)
62 59 {
63 - lockdep_assert_held(&b->irq_lock);
64 -
65 - if (!b->irq_engine || b->irq_armed)
66 - return;
67 -
68 - if (!intel_gt_pm_get_if_awake(b->irq_engine->gt))
60 + /*
61 + * Since we are waiting on a request, the GPU should be busy
62 + * and should have its own rpm reference.
63 + */
64 + if (GEM_WARN_ON(!intel_gt_pm_get_if_awake(b->irq_engine->gt)))
69 65 return;
70 66
71 67 /*
··· 75 73 */
76 74 WRITE_ONCE(b->irq_armed, true);
77 75
78 - /*
79 - * Since we are waiting on a request, the GPU should be busy
80 - * and should have its own rpm reference. This is tracked
81 - * by i915->gt.awake, we can forgo holding our own wakref
82 - * for the interrupt as before i915->gt.awake is released (when
83 - * the driver is idle) we disarm the breadcrumbs.
84 - */
76 + /* Requests may have completed before we could enable the interrupt. */
77 + if (!b->irq_enabled++ && irq_enable(b->irq_engine))
78 + irq_work_queue(&b->irq_work);
79 + }
85 80
86 - if (!b->irq_enabled++)
87 - irq_enable(b->irq_engine);
81 + static void intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)
82 + {
83 + if (!b->irq_engine)
84 + return;
85 +
86 + spin_lock(&b->irq_lock);
87 + if (!b->irq_armed)
88 + __intel_breadcrumbs_arm_irq(b);
89 + spin_unlock(&b->irq_lock);
88 90 }
89 91
90 92 static void __intel_breadcrumbs_disarm_irq(struct intel_breadcrumbs *b)
91 93 {
92 - lockdep_assert_held(&b->irq_lock);
93 -
94 - if (!b->irq_engine || !b->irq_armed)
95 - return;
96 -
97 94 GEM_BUG_ON(!b->irq_enabled);
98 95 if (!--b->irq_enabled)
99 96 irq_disable(b->irq_engine);
··· 106 105 {
107 106 intel_context_get(ce);
108 107 list_add_tail(&ce->signal_link, &b->signalers);
109 - if (list_is_first(&ce->signal_link, &b->signalers))
110 - __intel_breadcrumbs_arm_irq(b);
111 108 }
112 109
113 110 static void remove_signaling_context(struct intel_breadcrumbs *b,
··· 173 174 intel_engine_add_retire(b->irq_engine, tl);
174 175 }
175 176
176 - static bool __signal_request(struct i915_request *rq, struct list_head *signals)
177 + static bool __signal_request(struct i915_request *rq)
177 178 {
178 - clear_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags);
179 -
180 179 if (!__dma_fence_signal(&rq->fence)) {
181 180 i915_request_put(rq);
182 181 return false;
183 182 }
184 183
185 - list_add_tail(&rq->signal_link, signals);
186 184 return true;
185 + }
186 +
187 + static struct llist_node *
188 + slist_add(struct llist_node *node, struct llist_node *head)
189 + {
190 + node->next = head;
191 + return node;
187 192 }
188 193
189 194 static void signal_irq_work(struct irq_work *work)
190 195 {
191 196 struct intel_breadcrumbs *b = container_of(work, typeof(*b), irq_work);
192 197 const ktime_t timestamp = ktime_get();
198 + struct llist_node *signal, *sn;
193 199 struct intel_context *ce, *cn;
194 200 struct list_head *pos, *next;
195 - LIST_HEAD(signal);
201 +
202 + signal = NULL;
203 + if (unlikely(!llist_empty(&b->signaled_requests)))
204 + signal = llist_del_all(&b->signaled_requests);
196 205
197 206 spin_lock(&b->irq_lock);
198 207
199 - if (list_empty(&b->signalers))
208 + /*
209 + * Keep the irq armed until the interrupt after all listeners are gone.
210 + *
211 + * Enabling/disabling the interrupt is rather costly, roughly a couple
212 + * of hundred microseconds. If we are proactive and enable/disable
213 + * the interrupt around every request that wants a breadcrumb, we
214 + * quickly drown in the extra orders of magnitude of latency imposed
215 + * on request submission.
216 + *
217 + * So we try to be lazy, and keep the interrupts enabled until no
218 + * more listeners appear within a breadcrumb interrupt interval (that
219 + * is until a request completes that no one cares about). The
220 + * observation is that listeners come in batches, and will often
221 + * listen to a bunch of requests in succession. Though note on icl+,
222 + * interrupts are always enabled due to concerns with rc6 being
223 + * dysfunctional with per-engine interrupt masking.
224 + *
225 + * We also try to avoid raising too many interrupts, as they may
226 + * be generated by userspace batches and it is unfortunately rather
227 + * too easy to drown the CPU under a flood of GPU interrupts. Thus
228 + * whenever no one appears to be listening, we turn off the interrupts.
229 + * Fewer interrupts should conserve power -- at the very least, fewer
230 + * interrupts draw less ire from other users of the system and tools
231 + * like powertop.
232 + */
233 + if (!signal && b->irq_armed && list_empty(&b->signalers))
200 234 __intel_breadcrumbs_disarm_irq(b);
201 -
202 - list_splice_init(&b->signaled_requests, &signal);
203 235
204 236 list_for_each_entry_safe(ce, cn, &b->signalers, signal_link) {
205 237 GEM_BUG_ON(list_empty(&ce->signals));
··· 248 218 * spinlock as the callback chain may end up adding
249 219 * more signalers to the same context or engine.
250 220 */
251 - __signal_request(rq, &signal);
221 + clear_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags);
222 + if (__signal_request(rq))
223 + /* We own signal_node now, xfer to local list */
224 + signal = slist_add(&rq->signal_node, signal);
252 225
253 226 /*
··· 271 238
272 239 spin_unlock(&b->irq_lock);
273 240
274 - list_for_each_safe(pos, next, &signal) {
241 + llist_for_each_safe(signal, sn, signal) {
275 242 struct i915_request *rq =
276 - list_entry(pos, typeof(*rq), signal_link);
243 + llist_entry(signal, typeof(*rq), signal_node);
277 244 struct list_head cb_list;
278 245
279 246 spin_lock(&rq->lock);
··· 284 251
285 252 i915_request_put(rq);
286 253 }
254 +
255 + if (!READ_ONCE(b->irq_armed) && !list_empty(&b->signalers))
256 + intel_breadcrumbs_arm_irq(b);
287 257 }
288 258
289 259 struct intel_breadcrumbs *
··· 300 264
301 265 spin_lock_init(&b->irq_lock);
302 266 INIT_LIST_HEAD(&b->signalers);
303 - INIT_LIST_HEAD(&b->signaled_requests);
267 + init_llist_head(&b->signaled_requests);
304 268
305 269 init_irq_work(&b->irq_work, signal_irq_work);
306 270
··· 328 292
329 293 void intel_breadcrumbs_park(struct intel_breadcrumbs *b)
330 294 {
331 - unsigned long flags;
332 -
333 - if (!READ_ONCE(b->irq_armed))
334 - return;
335 -
336 - spin_lock_irqsave(&b->irq_lock, flags);
337 - __intel_breadcrumbs_disarm_irq(b);
338 - spin_unlock_irqrestore(&b->irq_lock, flags);
339 -
340 - if (!list_empty(&b->signalers))
341 - irq_work_queue(&b->irq_work);
295 + /* Kick the work once more to drain the signalers */
296 + irq_work_sync(&b->irq_work);
297 + while (unlikely(READ_ONCE(b->irq_armed))) {
298 + local_irq_disable();
299 + signal_irq_work(&b->irq_work);
300 + local_irq_enable();
301 + cond_resched();
302 + }
303 + GEM_BUG_ON(!list_empty(&b->signalers));
342 304 }
343 305
344 306 void intel_breadcrumbs_free(struct intel_breadcrumbs *b)
345 307 {
308 + irq_work_sync(&b->irq_work);
309 + GEM_BUG_ON(!list_empty(&b->signalers));
310 + GEM_BUG_ON(b->irq_armed);
346 311 kfree(b);
347 312 }
348 313
··· 364 327 * its signal completion.
365 328 */
366 329 if (__request_completed(rq)) {
367 - if (__signal_request(rq, &b->signaled_requests))
330 + if (__signal_request(rq) &&
331 + llist_add(&rq->signal_node, &b->signaled_requests))
368 332 irq_work_queue(&b->irq_work);
369 333 return;
370 334 }
··· 400 362 GEM_BUG_ON(!check_signal_order(ce, rq));
401 363 set_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags);
402 364
403 - /* Check after attaching to irq, interrupt may have already fired. */
404 - if (__request_completed(rq))
405 - irq_work_queue(&b->irq_work);
365 + /*
366 + * Defer enabling the interrupt to after HW submission and recheck
367 + * the request as it may have completed and raised the interrupt as
368 + * we were attaching it into the lists.
369 + */
370 + irq_work_queue(&b->irq_work);
406 371 }
407 372
408 373 bool i915_request_enable_breadcrumb(struct i915_request *rq)
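The intel_breadcrumbs change above moves signaled requests from a spinlock-protected `list_head` onto a lock-free `llist`: producers push with `llist_add()` (which reports whether the list was empty, so the irq worker can be kicked exactly once), and the worker detaches everything at once with `llist_del_all()`. A minimal userspace sketch of that pattern, using C11 atomics rather than the kernel's `llist` API (the `lnode_*` names are illustrative, not kernel functions):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* One node of an intrusive, singly linked lock-free list. */
struct lnode {
	struct lnode *next;
	int value;
};

/* Push a node; safe against concurrent pushers. Returns 1 if the
 * list was empty before the add (the caller may then need to kick
 * a worker), mirroring llist_add()'s contract. */
static int lnode_add(struct lnode *n, struct lnode *_Atomic *head)
{
	struct lnode *first = atomic_load(head);

	do {
		n->next = first;	/* link behind the current head */
	} while (!atomic_compare_exchange_weak(head, &first, n));

	return first == NULL;
}

/* Detach every queued node in one atomic step, like llist_del_all();
 * the consumer then walks the detached chain without any lock. */
static struct lnode *lnode_del_all(struct lnode *_Atomic *head)
{
	return atomic_exchange(head, NULL);
}
```

Note the detached chain comes back in LIFO order, which is why the worker can walk the signaled requests without ever taking `b->irq_lock` for the handoff.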
+1 -1
drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
··· 35 35 struct intel_engine_cs *irq_engine; 36 36 37 37 struct list_head signalers; 38 - struct list_head signaled_requests; 38 + struct llist_head signaled_requests; 39 39 40 40 struct irq_work irq_work; /* for use from inside irq_lock */ 41 41
+54 -7
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 182 182 struct virtual_engine {
183 183 struct intel_engine_cs base;
184 184 struct intel_context context;
185 + struct rcu_work rcu;
185 186
186 187 /*
187 188 * We allow only a single request through the virtual engine at a time
··· 5426 5425 return &ve->base.execlists.default_priolist.requests[0];
5427 5426 }
5428 5427
5429 - static void virtual_context_destroy(struct kref *kref)
5428 + static void rcu_virtual_context_destroy(struct work_struct *wrk)
5430 5429 {
5431 5430 struct virtual_engine *ve =
5432 - container_of(kref, typeof(*ve), context.ref);
5431 + container_of(wrk, typeof(*ve), rcu.work);
5433 5432 unsigned int n;
5434 5433
5435 - GEM_BUG_ON(!list_empty(virtual_queue(ve)));
5436 - GEM_BUG_ON(ve->request);
5437 5434 GEM_BUG_ON(ve->context.inflight);
5438 5435
5436 + /* Preempt-to-busy may leave a stale request behind. */
5437 + if (unlikely(ve->request)) {
5438 + struct i915_request *old;
5439 +
5440 + spin_lock_irq(&ve->base.active.lock);
5441 +
5442 + old = fetch_and_zero(&ve->request);
5443 + if (old) {
5444 + GEM_BUG_ON(!i915_request_completed(old));
5445 + __i915_request_submit(old);
5446 + i915_request_put(old);
5447 + }
5448 +
5449 + spin_unlock_irq(&ve->base.active.lock);
5450 + }
5451 +
5452 + /*
5453 + * Flush the tasklet in case it is still running on another core.
5454 + *
5455 + * This needs to be done before we remove ourselves from the siblings'
5456 + * rbtrees as in the case it is running in parallel, it may reinsert
5457 + * the rb_node into a sibling.
5458 + */
5459 + tasklet_kill(&ve->base.execlists.tasklet);
5460 +
5461 + /* Decouple ourselves from the siblings, no more access allowed. */
5439 5462 for (n = 0; n < ve->num_siblings; n++) {
5440 5463 struct intel_engine_cs *sibling = ve->siblings[n];
5441 5464 struct rb_node *node = &ve->nodes[sibling->id].rb;
5442 - unsigned long flags;
5443 5465
5444 5466 if (RB_EMPTY_NODE(node))
5445 5467 continue;
5446 5468
5447 - spin_lock_irqsave(&sibling->active.lock, flags);
5469 + spin_lock_irq(&sibling->active.lock);
5448 5470
5449 5471 /* Detachment is lazily performed in the execlists tasklet */
5450 5472 if (!RB_EMPTY_NODE(node))
5451 5473 rb_erase_cached(node, &sibling->execlists.virtual);
5452 5474
5453 - spin_unlock_irqrestore(&sibling->active.lock, flags);
5475 + spin_unlock_irq(&sibling->active.lock);
5454 5476 }
5455 5477 GEM_BUG_ON(__tasklet_is_scheduled(&ve->base.execlists.tasklet));
5478 + GEM_BUG_ON(!list_empty(virtual_queue(ve)));
5456 5479
5457 5480 if (ve->context.state)
5458 5481 __execlists_context_fini(&ve->context);
5459 5482 intel_context_fini(&ve->context);
5460 5483
5484 + intel_breadcrumbs_free(ve->base.breadcrumbs);
5461 5485 intel_engine_free_request_pool(&ve->base);
5462 5486
5463 5487 kfree(ve->bonds);
5464 5488 kfree(ve);
5489 + }
5490 +
5491 + static void virtual_context_destroy(struct kref *kref)
5492 + {
5493 + struct virtual_engine *ve =
5494 + container_of(kref, typeof(*ve), context.ref);
5495 +
5496 + GEM_BUG_ON(!list_empty(&ve->context.signals));
5497 +
5498 + /*
5499 + * When destroying the virtual engine, we have to be aware that
5500 + * it may still be in use from a hardirq/softirq context causing
5501 + * the resubmission of a completed request (background completion
5502 + * due to preempt-to-busy). Before we can free the engine, we need
5503 + * to flush the submission code and tasklets that are still potentially
5504 + * accessing the engine. Flushing the tasklets requires process context,
5505 + * and since we can guard the resubmit onto the engine with an RCU read
5506 + * lock, we can delegate the free of the engine to an RCU worker.
5507 + */
5508 + INIT_RCU_WORK(&ve->rcu, rcu_virtual_context_destroy);
5509 + queue_rcu_work(system_wq, &ve->rcu);
5465 5510 }
5466 5511
5467 5512 static void virtual_engine_initial_hint(struct virtual_engine *ve)
+3 -2
drivers/gpu/drm/i915/gt/intel_mocs.c
··· 243 243 * only, __init_mocs_table() take care to program unused index with 244 244 * this entry. 245 245 */ 246 - MOCS_ENTRY(1, LE_3_WB | LE_TC_1_LLC | LE_LRUM(3), 247 - L3_3_WB), 246 + MOCS_ENTRY(I915_MOCS_PTE, 247 + LE_0_PAGETABLE | LE_TC_0_PAGETABLE, 248 + L3_1_UC), 248 249 GEN11_MOCS_ENTRIES, 249 250 250 251 /* Implicitly enable L1 - HDC:L1 + L3 + LLC */
+17 -5
drivers/gpu/drm/i915/gt/intel_rc6.c
··· 56 56 57 57 static void gen11_rc6_enable(struct intel_rc6 *rc6) 58 58 { 59 - struct intel_uncore *uncore = rc6_to_uncore(rc6); 59 + struct intel_gt *gt = rc6_to_gt(rc6); 60 + struct intel_uncore *uncore = gt->uncore; 60 61 struct intel_engine_cs *engine; 61 62 enum intel_engine_id id; 63 + u32 pg_enable; 64 + int i; 62 65 63 66 /* 2b: Program RC6 thresholds.*/ 64 67 set(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 54 << 16 | 85); ··· 105 102 GEN6_RC_CTL_RC6_ENABLE | 106 103 GEN6_RC_CTL_EI_MODE(1); 107 104 108 - set(uncore, GEN9_PG_ENABLE, 109 - GEN9_RENDER_PG_ENABLE | 110 - GEN9_MEDIA_PG_ENABLE | 111 - GEN11_MEDIA_SAMPLER_PG_ENABLE); 105 + pg_enable = 106 + GEN9_RENDER_PG_ENABLE | 107 + GEN9_MEDIA_PG_ENABLE | 108 + GEN11_MEDIA_SAMPLER_PG_ENABLE; 109 + 110 + if (INTEL_GEN(gt->i915) >= 12) { 111 + for (i = 0; i < I915_MAX_VCS; i++) 112 + if (HAS_ENGINE(gt, _VCS(i))) 113 + pg_enable |= (VDN_HCP_POWERGATE_ENABLE(i) | 114 + VDN_MFX_POWERGATE_ENABLE(i)); 115 + } 116 + 117 + set(uncore, GEN9_PG_ENABLE, pg_enable); 112 118 } 113 119 114 120 static void gen9_rc6_enable(struct intel_rc6 *rc6)
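The Gen12 branch added to gen11_rc6_enable() above ORs a pair of bits into `pg_enable` for every video-decode engine present, using the `VDN_*_POWERGATE_ENABLE(n)` macros (bits 3+2n and 4+2n, per the i915_reg.h hunk elsewhere in this commit). A small sketch of that bit math; the `present` bitmap is a stand-in for the driver's `HAS_ENGINE(gt, _VCS(i))` check, and `vdn_pg_mask` is an illustrative helper, not an i915 function:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Paired powergate-enable bits for video-decode engine n, matching
 * the VDN_*_POWERGATE_ENABLE() definitions in this commit. */
#define VDN_HCP_POWERGATE_ENABLE(n)	BIT(3 + 2 * (n))
#define VDN_MFX_POWERGATE_ENABLE(n)	BIT(4 + 2 * (n))

/* Build the Gen12 contribution to GEN9_PG_ENABLE for the VCS
 * engines that are present in the bitmap. */
static uint32_t vdn_pg_mask(uint32_t present, int max_vcs)
{
	uint32_t mask = 0;
	int i;

	for (i = 0; i < max_vcs; i++)
		if (present & BIT(i))
			mask |= VDN_HCP_POWERGATE_ENABLE(i) |
				VDN_MFX_POWERGATE_ENABLE(i);

	return mask;
}
```

For example, with VCS0 and VCS2 present the mask covers bits 3, 4, 7 and 8, which is what the loop in the patch writes on top of the render/media/sampler bits.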
+3 -1
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 131 131 return; 132 132 } 133 133 134 - if (wal->list) 134 + if (wal->list) { 135 135 memcpy(list, wal->list, sizeof(*wa) * wal->count); 136 + kfree(wal->list); 137 + } 136 138 137 139 wal->list = list; 138 140 }
+1 -1
drivers/gpu/drm/i915/gvt/display.c
··· 164 164 165 165 /* let the virtual display supports DP1.2 */ 166 166 static u8 dpcd_fix_data[DPCD_HEADER_SIZE] = { 167 - 0x12, 0x014, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 167 + 0x12, 0x014, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 168 168 }; 169 169 170 170 static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
+1 -1
drivers/gpu/drm/i915/gvt/gvt.h
··· 255 255 #define F_CMD_ACCESS (1 << 3) 256 256 /* This reg has been accessed by a VM */ 257 257 #define F_ACCESSED (1 << 4) 258 - /* This reg has been accessed through GPU commands */ 258 + /* This reg could be accessed by unaligned address */ 259 259 #define F_UNALIGN (1 << 6) 260 260 /* This reg is in GVT's mmio save-restor list and in hardware 261 261 * logical context image
+3 -1
drivers/gpu/drm/i915/gvt/kvmgt.c
··· 829 829 /* Take a module reference as mdev core doesn't take 830 830 * a reference for vendor driver. 831 831 */ 832 - if (!try_module_get(THIS_MODULE)) 832 + if (!try_module_get(THIS_MODULE)) { 833 + ret = -ENODEV; 833 834 goto undo_group; 835 + } 834 836 835 837 ret = kvmgt_guest_init(mdev); 836 838 if (ret)
+2 -1
drivers/gpu/drm/i915/gvt/vgpu.c
··· 439 439 440 440 if (IS_BROADWELL(dev_priv)) 441 441 ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_B); 442 - else 442 + /* FixMe: Re-enable APL/BXT once vfio_edid enabled */ 443 + else if (!IS_BROXTON(dev_priv)) 443 444 ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D); 444 445 if (ret) 445 446 goto out_clean_sched_policy;
+7 -2
drivers/gpu/drm/i915/i915_perf.c
··· 909 909 DRM_I915_PERF_RECORD_OA_REPORT_LOST); 910 910 if (ret) 911 911 return ret; 912 - intel_uncore_write(uncore, oastatus_reg, 913 - oastatus & ~GEN8_OASTATUS_REPORT_LOST); 912 + 913 + intel_uncore_rmw(uncore, oastatus_reg, 914 + GEN8_OASTATUS_COUNTER_OVERFLOW | 915 + GEN8_OASTATUS_REPORT_LOST, 916 + IS_GEN_RANGE(uncore->i915, 8, 10) ? 917 + (GEN8_OASTATUS_HEAD_POINTER_WRAP | 918 + GEN8_OASTATUS_TAIL_POINTER_WRAP) : 0); 914 919 } 915 920 916 921 return gen8_append_oa_reports(stream, buf, count, offset);
+7 -7
drivers/gpu/drm/i915/i915_reg.h
··· 676 676 #define GEN7_OASTATUS2_MEM_SELECT_GGTT (1 << 0) /* 0: PPGTT, 1: GGTT */ 677 677 678 678 #define GEN8_OASTATUS _MMIO(0x2b08) 679 + #define GEN8_OASTATUS_TAIL_POINTER_WRAP (1 << 17) 680 + #define GEN8_OASTATUS_HEAD_POINTER_WRAP (1 << 16) 679 681 #define GEN8_OASTATUS_OVERRUN_STATUS (1 << 3) 680 682 #define GEN8_OASTATUS_COUNTER_OVERFLOW (1 << 2) 681 683 #define GEN8_OASTATUS_OABUFFER_OVERFLOW (1 << 1) ··· 8973 8971 #define GEN9_PWRGT_MEDIA_STATUS_MASK (1 << 0) 8974 8972 #define GEN9_PWRGT_RENDER_STATUS_MASK (1 << 1) 8975 8973 8976 - #define POWERGATE_ENABLE _MMIO(0xa210) 8977 - #define VDN_HCP_POWERGATE_ENABLE(n) BIT(((n) * 2) + 3) 8978 - #define VDN_MFX_POWERGATE_ENABLE(n) BIT(((n) * 2) + 4) 8979 - 8980 8974 #define GTFIFODBG _MMIO(0x120000) 8981 8975 #define GT_FIFO_SBDEDICATE_FREE_ENTRY_CHV (0x1f << 20) 8982 8976 #define GT_FIFO_FREE_ENTRIES_CHV (0x7f << 13) ··· 9112 9114 #define GEN9_MEDIA_PG_IDLE_HYSTERESIS _MMIO(0xA0C4) 9113 9115 #define GEN9_RENDER_PG_IDLE_HYSTERESIS _MMIO(0xA0C8) 9114 9116 #define GEN9_PG_ENABLE _MMIO(0xA210) 9115 - #define GEN9_RENDER_PG_ENABLE REG_BIT(0) 9116 - #define GEN9_MEDIA_PG_ENABLE REG_BIT(1) 9117 - #define GEN11_MEDIA_SAMPLER_PG_ENABLE REG_BIT(2) 9117 + #define GEN9_RENDER_PG_ENABLE REG_BIT(0) 9118 + #define GEN9_MEDIA_PG_ENABLE REG_BIT(1) 9119 + #define GEN11_MEDIA_SAMPLER_PG_ENABLE REG_BIT(2) 9120 + #define VDN_HCP_POWERGATE_ENABLE(n) REG_BIT(3 + 2 * (n)) 9121 + #define VDN_MFX_POWERGATE_ENABLE(n) REG_BIT(4 + 2 * (n)) 9118 9122 #define GEN8_PUSHBUS_CONTROL _MMIO(0xA248) 9119 9123 #define GEN8_PUSHBUS_ENABLE _MMIO(0xA250) 9120 9124 #define GEN8_PUSHBUS_SHIFT _MMIO(0xA25C)
+5 -1
drivers/gpu/drm/i915/i915_request.h
··· 176 176 struct intel_context *context; 177 177 struct intel_ring *ring; 178 178 struct intel_timeline __rcu *timeline; 179 - struct list_head signal_link; 179 + 180 + union { 181 + struct list_head signal_link; 182 + struct llist_node signal_node; 183 + }; 180 184 181 185 /* 182 186 * The rcu epoch of when this request was allocated. Used to judiciously
-13
drivers/gpu/drm/i915/intel_pm.c
··· 7118 7118 7119 7119 static void tgl_init_clock_gating(struct drm_i915_private *dev_priv) 7120 7120 { 7121 - u32 vd_pg_enable = 0; 7122 - unsigned int i; 7123 - 7124 7121 /* Wa_1409120013:tgl */ 7125 7122 I915_WRITE(ILK_DPFC_CHICKEN, 7126 7123 ILK_DPFC_CHICKEN_COMP_DUMMY_PIXEL); 7127 - 7128 - /* This is not a WA. Enable VD HCP & MFX_ENC powergate */ 7129 - for (i = 0; i < I915_MAX_VCS; i++) { 7130 - if (HAS_ENGINE(&dev_priv->gt, _VCS(i))) 7131 - vd_pg_enable |= VDN_HCP_POWERGATE_ENABLE(i) | 7132 - VDN_MFX_POWERGATE_ENABLE(i); 7133 - } 7134 - 7135 - I915_WRITE(POWERGATE_ENABLE, 7136 - I915_READ(POWERGATE_ENABLE) | vd_pg_enable); 7137 7124 7138 7125 /* Wa_1409825376:tgl (pre-prod)*/ 7139 7126 if (IS_TGL_DISP_REVID(dev_priv, TGL_REVID_A0, TGL_REVID_B1))
+6 -2
drivers/gpu/drm/i915/selftests/i915_request.c
··· 2293 2293 struct intel_context *ce; 2294 2294 2295 2295 ce = intel_context_create(engine); 2296 - if (IS_ERR(ce)) 2296 + if (IS_ERR(ce)) { 2297 + err = PTR_ERR(ce); 2297 2298 goto out; 2299 + } 2298 2300 2299 2301 err = intel_context_pin(ce); 2300 2302 if (err) { ··· 2469 2467 struct intel_context *ce; 2470 2468 2471 2469 ce = intel_context_create(engine); 2472 - if (IS_ERR(ce)) 2470 + if (IS_ERR(ce)) { 2471 + err = PTR_ERR(ce); 2473 2472 goto out; 2473 + } 2474 2474 2475 2475 err = intel_context_pin(ce); 2476 2476 if (err) {
-9
drivers/gpu/drm/mediatek/mtk_dpi.c
··· 522 522 return 0; 523 523 } 524 524 525 - static void mtk_dpi_encoder_destroy(struct drm_encoder *encoder) 526 - { 527 - drm_encoder_cleanup(encoder); 528 - } 529 - 530 - static const struct drm_encoder_funcs mtk_dpi_encoder_funcs = { 531 - .destroy = mtk_dpi_encoder_destroy, 532 - }; 533 - 534 525 static int mtk_dpi_bridge_attach(struct drm_bridge *bridge, 535 526 enum drm_bridge_attach_flags flags) 536 527 {
+20 -37
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 444 444 u32 horizontal_sync_active_byte;
445 445 u32 horizontal_backporch_byte;
446 446 u32 horizontal_frontporch_byte;
447 + u32 horizontal_front_back_byte;
448 + u32 data_phy_cycles_byte;
447 449 u32 dsi_tmp_buf_bpp, data_phy_cycles;
450 + u32 delta;
448 451 struct mtk_phy_timing *timing = &dsi->phy_timing;
449 452
450 453 struct videomode *vm = &dsi->vm;
··· 469 466 horizontal_sync_active_byte = (vm->hsync_len * dsi_tmp_buf_bpp - 10);
470 467
471 468 if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE)
472 - horizontal_backporch_byte = vm->hback_porch * dsi_tmp_buf_bpp;
469 + horizontal_backporch_byte = vm->hback_porch * dsi_tmp_buf_bpp - 10;
473 470 else
474 471 horizontal_backporch_byte = (vm->hback_porch + vm->hsync_len) *
475 472 dsi_tmp_buf_bpp - 10;
476 473
477 474 data_phy_cycles = timing->lpx + timing->da_hs_prepare +
478 - timing->da_hs_zero + timing->da_hs_exit;
475 + timing->da_hs_zero + timing->da_hs_exit + 3;
479 476
480 - if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_BURST) {
481 - if ((vm->hfront_porch + vm->hback_porch) * dsi_tmp_buf_bpp >
482 - data_phy_cycles * dsi->lanes + 18) {
483 - horizontal_frontporch_byte =
484 - vm->hfront_porch * dsi_tmp_buf_bpp -
485 - (data_phy_cycles * dsi->lanes + 18) *
486 - vm->hfront_porch /
487 - (vm->hfront_porch + vm->hback_porch);
477 + delta = dsi->mode_flags & MIPI_DSI_MODE_VIDEO_BURST ? 18 : 12;
488 478
489 - horizontal_backporch_byte =
490 - horizontal_backporch_byte -
491 - (data_phy_cycles * dsi->lanes + 18) *
492 - vm->hback_porch /
493 - (vm->hfront_porch + vm->hback_porch);
494 - } else {
495 - DRM_WARN("HFP less than d-phy, FPS will under 60Hz\n");
496 - horizontal_frontporch_byte = vm->hfront_porch *
497 - dsi_tmp_buf_bpp;
498 - }
479 + horizontal_frontporch_byte = vm->hfront_porch * dsi_tmp_buf_bpp;
480 + horizontal_front_back_byte = horizontal_frontporch_byte + horizontal_backporch_byte;
481 + data_phy_cycles_byte = data_phy_cycles * dsi->lanes + delta;
482 +
483 + if (horizontal_front_back_byte > data_phy_cycles_byte) {
484 + horizontal_frontporch_byte -= data_phy_cycles_byte *
485 + horizontal_frontporch_byte /
486 + horizontal_front_back_byte;
487 +
488 + horizontal_backporch_byte -= data_phy_cycles_byte *
489 + horizontal_backporch_byte /
490 + horizontal_front_back_byte;
491 + } else {
500 - if ((vm->hfront_porch + vm->hback_porch) * dsi_tmp_buf_bpp >
501 - data_phy_cycles * dsi->lanes + 12) {
502 - horizontal_frontporch_byte =
503 - vm->hfront_porch * dsi_tmp_buf_bpp -
504 - (data_phy_cycles * dsi->lanes + 12) *
505 - vm->hfront_porch /
506 - (vm->hfront_porch + vm->hback_porch);
507 - horizontal_backporch_byte = horizontal_backporch_byte -
508 - (data_phy_cycles * dsi->lanes + 12) *
509 - vm->hback_porch /
510 - (vm->hfront_porch + vm->hback_porch);
511 - } else {
512 - DRM_WARN("HFP less than d-phy, FPS will under 60Hz\n");
513 - horizontal_frontporch_byte = vm->hfront_porch *
514 - dsi_tmp_buf_bpp;
515 - }
492 + DRM_WARN("HFP + HBP less than d-phy, FPS will be under 60Hz\n");
516 493 }
517 494
518 495 writel(horizontal_sync_active_byte, dsi->regs + DSI_HSA_WC);
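The rewritten mtk_dsi branch above collapses two near-duplicate blocks into one proportional split: the D-PHY overhead (in bytes) is subtracted from the front and back porch in proportion to their sizes, and only when the two porches together can absorb it. The integer arithmetic can be sketched as follows (the helper name is illustrative, not the driver's):

```c
#include <assert.h>

/* Subtract 'overhead' bytes from the front and back porch in
 * proportion to their sizes, as the rewritten mtk_dsi code does.
 * Returns 0 on success, -1 when the porches together cannot absorb
 * the overhead (the driver then warns that the refresh rate drops). */
static int split_dphy_overhead(unsigned int *front, unsigned int *back,
			       unsigned int overhead)
{
	unsigned int total = *front + *back;

	if (total <= overhead)
		return -1;

	/* Integer division truncates, so at most 'overhead' bytes are
	 * removed in total and each porch keeps at least one byte. */
	*front -= overhead * *front / total;
	*back -= overhead * *back / total;
	return 0;
}
```

For instance, an overhead of 40 bytes against HFP 300 and HBP 100 removes 30 from the front porch and 10 from the back, matching the `data_phy_cycles_byte * porch / horizontal_front_back_byte` terms in the patch.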
+5 -3
drivers/gpu/drm/nouveau/nouveau_gem.c
··· 558 558 NV_PRINTK(err, cli, "validating bo list\n"); 559 559 validate_fini(op, chan, NULL, NULL); 560 560 return ret; 561 + } else if (ret > 0) { 562 + *apply_relocs = true; 561 563 } 562 - *apply_relocs = ret; 564 + 563 565 return 0; 564 566 } 565 567 ··· 664 662 nouveau_bo_wr32(nvbo, r->reloc_bo_offset >> 2, data); 665 663 } 666 664 667 - u_free(reloc); 668 665 return ret; 669 666 } 670 667 ··· 873 872 break; 874 873 } 875 874 } 876 - u_free(reloc); 877 875 } 878 876 out_prevalid: 877 + if (!IS_ERR(reloc)) 878 + u_free(reloc); 879 879 u_free(bo); 880 880 u_free(push); 881 881
+7 -1
drivers/gpu/drm/sun4i/sun4i_backend.c
··· 814 814 *
815 815 * XXX(hch): this has no business in a driver and needs to move
816 816 * to the device tree.
817 + *
818 + * A second call to dma_direct_set_offset returns -EINVAL.
819 + * Unfortunately, this happens when we have two
820 + * backends in the system, and will result in the driver
821 + * reporting an error while it has been set up properly before.
822 + * Ignore EINVAL, but it should really be removed eventually.
817 823 */
818 824 ret = dma_direct_set_offset(drm->dev, PHYS_OFFSET, 0, SZ_4G);
819 - if (ret)
825 + if (ret && ret != -EINVAL)
820 826 return ret;
821 827 }
822 828
+1
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
··· 208 208 phy_node = of_parse_phandle(dev->of_node, "phys", 0);
209 209 if (!phy_node) {
210 210 dev_err(dev, "Can't find PHY phandle\n");
211 + ret = -EINVAL;
211 212 goto err_disable_clk_tmds;
212 213 }
213 214
+4
drivers/gpu/drm/vc4/vc4_drv.h
··· 219 219 220 220 struct drm_modeset_lock ctm_state_lock; 221 221 struct drm_private_obj ctm_manager; 222 + struct drm_private_obj hvs_channels; 222 223 struct drm_private_obj load_tracker; 223 224 224 225 /* List of vc4_debugfs_info_entry for adding to debugfs once ··· 532 531 unsigned int top; 533 532 unsigned int bottom; 534 533 } margins; 534 + 535 + /* Transitional state below, only valid during atomic commits */ 536 + bool update_muxing; 535 537 }; 536 538 537 539 #define VC4_HVS_CHANNEL_DISABLED ((unsigned int)-1)
+48
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 760 760 { 761 761 } 762 762 763 + #define WIFI_2_4GHz_CH1_MIN_FREQ 2400000000ULL 764 + #define WIFI_2_4GHz_CH1_MAX_FREQ 2422000000ULL 765 + 766 + static int vc4_hdmi_encoder_atomic_check(struct drm_encoder *encoder, 767 + struct drm_crtc_state *crtc_state, 768 + struct drm_connector_state *conn_state) 769 + { 770 + struct drm_display_mode *mode = &crtc_state->adjusted_mode; 771 + struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder); 772 + unsigned long long pixel_rate = mode->clock * 1000; 773 + unsigned long long tmds_rate; 774 + 775 + if (vc4_hdmi->variant->unsupported_odd_h_timings && 776 + ((mode->hdisplay % 2) || (mode->hsync_start % 2) || 777 + (mode->hsync_end % 2) || (mode->htotal % 2))) 778 + return -EINVAL; 779 + 780 + /* 781 + * The 1440p@60 pixel rate is in the same range than the first 782 + * WiFi channel (between 2.4GHz and 2.422GHz with 22MHz 783 + * bandwidth). Slightly lower the frequency to bring it out of 784 + * the WiFi range. 785 + */ 786 + tmds_rate = pixel_rate * 10; 787 + if (vc4_hdmi->disable_wifi_frequencies && 788 + (tmds_rate >= WIFI_2_4GHz_CH1_MIN_FREQ && 789 + tmds_rate <= WIFI_2_4GHz_CH1_MAX_FREQ)) { 790 + mode->clock = 238560; 791 + pixel_rate = mode->clock * 1000; 792 + } 793 + 794 + if (pixel_rate > vc4_hdmi->variant->max_pixel_clock) 795 + return -EINVAL; 796 + 797 + return 0; 798 + } 799 + 763 800 static enum drm_mode_status 764 801 vc4_hdmi_encoder_mode_valid(struct drm_encoder *encoder, 765 802 const struct drm_display_mode *mode) 766 803 { 767 804 struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder); 805 + 806 + if (vc4_hdmi->variant->unsupported_odd_h_timings && 807 + ((mode->hdisplay % 2) || (mode->hsync_start % 2) || 808 + (mode->hsync_end % 2) || (mode->htotal % 2))) 809 + return MODE_H_ILLEGAL; 768 810 769 811 if ((mode->clock * 1000) > vc4_hdmi->variant->max_pixel_clock) 770 812 return MODE_CLOCK_HIGH; ··· 815 773 } 816 774 817 775 static const struct drm_encoder_helper_funcs vc4_hdmi_encoder_helper_funcs = 
{ 776 + .atomic_check = vc4_hdmi_encoder_atomic_check, 818 777 .mode_valid = vc4_hdmi_encoder_mode_valid, 819 778 .disable = vc4_hdmi_encoder_disable, 820 779 .enable = vc4_hdmi_encoder_enable, ··· 1737 1694 vc4_hdmi->hpd_active_low = hpd_gpio_flags & OF_GPIO_ACTIVE_LOW; 1738 1695 } 1739 1696 1697 + vc4_hdmi->disable_wifi_frequencies = 1698 + of_property_read_bool(dev->of_node, "wifi-2.4ghz-coexistence"); 1699 + 1740 1700 pm_runtime_enable(dev); 1741 1701 1742 1702 drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS); ··· 1863 1817 PHY_LANE_2, 1864 1818 PHY_LANE_CK, 1865 1819 }, 1820 + .unsupported_odd_h_timings = true, 1866 1821 1867 1822 .init_resources = vc5_hdmi_init_resources, 1868 1823 .csc_setup = vc5_hdmi_csc_setup, ··· 1889 1842 PHY_LANE_CK, 1890 1843 PHY_LANE_2, 1891 1844 }, 1845 + .unsupported_odd_h_timings = true, 1892 1846 1893 1847 .init_resources = vc5_hdmi_init_resources, 1894 1848 .csc_setup = vc5_hdmi_csc_setup,
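The `atomic_check` added above lowers the pixel clock when the resulting TMDS rate would land inside WiFi channel 1. The arithmetic can be checked in isolation (a sketch; the constants are copied from the patch, the function name is mine):

```c
#include <assert.h>

#define WIFI_2_4GHZ_CH1_MIN_FREQ 2400000000ULL
#define WIFI_2_4GHZ_CH1_MAX_FREQ 2422000000ULL

/* mode->clock is in kHz; the TMDS rate is ten times the pixel rate. */
static int clock_in_wifi_ch1(int clock_khz)
{
	unsigned long long pixel_rate = (unsigned long long)clock_khz * 1000;
	unsigned long long tmds_rate = pixel_rate * 10;

	return tmds_rate >= WIFI_2_4GHZ_CH1_MIN_FREQ &&
	       tmds_rate <= WIFI_2_4GHZ_CH1_MAX_FREQ;
}
```

A 1440p@60 mode clock of 241500 kHz gives a 2.415 GHz TMDS rate, inside the band, while the 238560 kHz replacement chosen by the patch falls just below it.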
+11
drivers/gpu/drm/vc4/vc4_hdmi.h
··· 62 62 */ 63 63 enum vc4_hdmi_phy_channel phy_lane_mapping[4]; 64 64 65 + /* The BCM2711 cannot deal with odd horizontal pixel timings */ 66 + bool unsupported_odd_h_timings; 67 + 65 68 /* Callback to get the resources (memory region, interrupts, 66 69 * clocks, etc) for that variant. 67 70 */ ··· 141 138 142 139 int hpd_gpio; 143 140 bool hpd_active_low; 141 + 142 + /* 143 + * On some systems (like the RPi4), some modes are in the same 144 + * frequency range than the WiFi channels (1440p@60Hz for 145 + * example). Should we take evasive actions because that system 146 + * has a wifi adapter? 147 + */ 148 + bool disable_wifi_frequencies; 144 149 145 150 struct cec_adapter *cec_adap; 146 151 struct cec_msg cec_rx_msg;
+182 -64
drivers/gpu/drm/vc4/vc4_kms.c
··· 24 24 #include "vc4_drv.h" 25 25 #include "vc4_regs.h" 26 26 27 + #define HVS_NUM_CHANNELS 3 28 + 27 29 struct vc4_ctm_state { 28 30 struct drm_private_state base; 29 31 struct drm_color_ctm *ctm; ··· 35 33 static struct vc4_ctm_state *to_vc4_ctm_state(struct drm_private_state *priv) 36 34 { 37 35 return container_of(priv, struct vc4_ctm_state, base); 36 + } 37 + 38 + struct vc4_hvs_state { 39 + struct drm_private_state base; 40 + unsigned int unassigned_channels; 41 + }; 42 + 43 + static struct vc4_hvs_state * 44 + to_vc4_hvs_state(struct drm_private_state *priv) 45 + { 46 + return container_of(priv, struct vc4_hvs_state, base); 38 47 } 39 48 40 49 struct vc4_load_tracker_state { ··· 126 113 drm_atomic_private_obj_init(&vc4->base, &vc4->ctm_manager, &ctm_state->base, 127 114 &vc4_ctm_state_funcs); 128 115 129 - return drmm_add_action(&vc4->base, vc4_ctm_obj_fini, NULL); 116 + return drmm_add_action_or_reset(&vc4->base, vc4_ctm_obj_fini, NULL); 130 117 } 131 118 132 119 /* Converts a DRM S31.32 value to the HW S0.9 format. 
*/ ··· 182 169 VC4_SET_FIELD(ctm_state->fifo, SCALER_OLEDOFFS_DISPFIFO)); 183 170 } 184 171 172 + static struct vc4_hvs_state * 173 + vc4_hvs_get_global_state(struct drm_atomic_state *state) 174 + { 175 + struct vc4_dev *vc4 = to_vc4_dev(state->dev); 176 + struct drm_private_state *priv_state; 177 + 178 + priv_state = drm_atomic_get_private_obj_state(state, &vc4->hvs_channels); 179 + if (IS_ERR(priv_state)) 180 + return ERR_CAST(priv_state); 181 + 182 + return to_vc4_hvs_state(priv_state); 183 + } 184 + 185 185 static void vc4_hvs_pv_muxing_commit(struct vc4_dev *vc4, 186 186 struct drm_atomic_state *state) 187 187 { ··· 239 213 { 240 214 struct drm_crtc_state *crtc_state; 241 215 struct drm_crtc *crtc; 242 - unsigned char dsp2_mux = 0; 243 - unsigned char dsp3_mux = 3; 244 - unsigned char dsp4_mux = 3; 245 - unsigned char dsp5_mux = 3; 216 + unsigned char mux; 246 217 unsigned int i; 247 218 u32 reg; 248 219 ··· 247 224 struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(crtc_state); 248 225 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 249 226 250 - if (!crtc_state->active) 227 + if (!vc4_state->update_muxing) 251 228 continue; 252 229 253 230 switch (vc4_crtc->data->hvs_output) { 254 231 case 2: 255 - dsp2_mux = (vc4_state->assigned_channel == 2) ? 0 : 1; 232 + mux = (vc4_state->assigned_channel == 2) ? 
0 : 1; 233 + reg = HVS_READ(SCALER_DISPECTRL); 234 + HVS_WRITE(SCALER_DISPECTRL, 235 + (reg & ~SCALER_DISPECTRL_DSP2_MUX_MASK) | 236 + VC4_SET_FIELD(mux, SCALER_DISPECTRL_DSP2_MUX)); 256 237 break; 257 238 258 239 case 3: 259 - dsp3_mux = vc4_state->assigned_channel; 240 + if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED) 241 + mux = 3; 242 + else 243 + mux = vc4_state->assigned_channel; 244 + 245 + reg = HVS_READ(SCALER_DISPCTRL); 246 + HVS_WRITE(SCALER_DISPCTRL, 247 + (reg & ~SCALER_DISPCTRL_DSP3_MUX_MASK) | 248 + VC4_SET_FIELD(mux, SCALER_DISPCTRL_DSP3_MUX)); 260 249 break; 261 250 262 251 case 4: 263 - dsp4_mux = vc4_state->assigned_channel; 252 + if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED) 253 + mux = 3; 254 + else 255 + mux = vc4_state->assigned_channel; 256 + 257 + reg = HVS_READ(SCALER_DISPEOLN); 258 + HVS_WRITE(SCALER_DISPEOLN, 259 + (reg & ~SCALER_DISPEOLN_DSP4_MUX_MASK) | 260 + VC4_SET_FIELD(mux, SCALER_DISPEOLN_DSP4_MUX)); 261 + 264 262 break; 265 263 266 264 case 5: 267 - dsp5_mux = vc4_state->assigned_channel; 265 + if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED) 266 + mux = 3; 267 + else 268 + mux = vc4_state->assigned_channel; 269 + 270 + reg = HVS_READ(SCALER_DISPDITHER); 271 + HVS_WRITE(SCALER_DISPDITHER, 272 + (reg & ~SCALER_DISPDITHER_DSP5_MUX_MASK) | 273 + VC4_SET_FIELD(mux, SCALER_DISPDITHER_DSP5_MUX)); 268 274 break; 269 275 270 276 default: 271 277 break; 272 278 } 273 279 } 274 - 275 - reg = HVS_READ(SCALER_DISPECTRL); 276 - HVS_WRITE(SCALER_DISPECTRL, 277 - (reg & ~SCALER_DISPECTRL_DSP2_MUX_MASK) | 278 - VC4_SET_FIELD(dsp2_mux, SCALER_DISPECTRL_DSP2_MUX)); 279 - 280 - reg = HVS_READ(SCALER_DISPCTRL); 281 - HVS_WRITE(SCALER_DISPCTRL, 282 - (reg & ~SCALER_DISPCTRL_DSP3_MUX_MASK) | 283 - VC4_SET_FIELD(dsp3_mux, SCALER_DISPCTRL_DSP3_MUX)); 284 - 285 - reg = HVS_READ(SCALER_DISPEOLN); 286 - HVS_WRITE(SCALER_DISPEOLN, 287 - (reg & ~SCALER_DISPEOLN_DSP4_MUX_MASK) | 288 - VC4_SET_FIELD(dsp4_mux, 
SCALER_DISPEOLN_DSP4_MUX)); 289 - 290 - reg = HVS_READ(SCALER_DISPDITHER); 291 - HVS_WRITE(SCALER_DISPDITHER, 292 - (reg & ~SCALER_DISPDITHER_DSP5_MUX_MASK) | 293 - VC4_SET_FIELD(dsp5_mux, SCALER_DISPDITHER_DSP5_MUX)); 294 280 } 295 281 296 282 static void ··· 689 657 &load_state->base, 690 658 &vc4_load_tracker_state_funcs); 691 659 692 - return drmm_add_action(&vc4->base, vc4_load_tracker_obj_fini, NULL); 660 + return drmm_add_action_or_reset(&vc4->base, vc4_load_tracker_obj_fini, NULL); 693 661 } 694 662 695 - #define NUM_OUTPUTS 6 696 - #define NUM_CHANNELS 3 697 - 698 - static int 699 - vc4_atomic_check(struct drm_device *dev, struct drm_atomic_state *state) 663 + static struct drm_private_state * 664 + vc4_hvs_channels_duplicate_state(struct drm_private_obj *obj) 700 665 { 701 - unsigned long unassigned_channels = GENMASK(NUM_CHANNELS - 1, 0); 666 + struct vc4_hvs_state *old_state = to_vc4_hvs_state(obj->state); 667 + struct vc4_hvs_state *state; 668 + 669 + state = kzalloc(sizeof(*state), GFP_KERNEL); 670 + if (!state) 671 + return NULL; 672 + 673 + __drm_atomic_helper_private_obj_duplicate_state(obj, &state->base); 674 + 675 + state->unassigned_channels = old_state->unassigned_channels; 676 + 677 + return &state->base; 678 + } 679 + 680 + static void vc4_hvs_channels_destroy_state(struct drm_private_obj *obj, 681 + struct drm_private_state *state) 682 + { 683 + struct vc4_hvs_state *hvs_state = to_vc4_hvs_state(state); 684 + 685 + kfree(hvs_state); 686 + } 687 + 688 + static const struct drm_private_state_funcs vc4_hvs_state_funcs = { 689 + .atomic_duplicate_state = vc4_hvs_channels_duplicate_state, 690 + .atomic_destroy_state = vc4_hvs_channels_destroy_state, 691 + }; 692 + 693 + static void vc4_hvs_channels_obj_fini(struct drm_device *dev, void *unused) 694 + { 695 + struct vc4_dev *vc4 = to_vc4_dev(dev); 696 + 697 + drm_atomic_private_obj_fini(&vc4->hvs_channels); 698 + } 699 + 700 + static int vc4_hvs_channels_obj_init(struct vc4_dev *vc4) 701 + { 702 + 
struct vc4_hvs_state *state; 703 + 704 + state = kzalloc(sizeof(*state), GFP_KERNEL); 705 + if (!state) 706 + return -ENOMEM; 707 + 708 + state->unassigned_channels = GENMASK(HVS_NUM_CHANNELS - 1, 0); 709 + drm_atomic_private_obj_init(&vc4->base, &vc4->hvs_channels, 710 + &state->base, 711 + &vc4_hvs_state_funcs); 712 + 713 + return drmm_add_action_or_reset(&vc4->base, vc4_hvs_channels_obj_fini, NULL); 714 + } 715 + 716 + /* 717 + * The BCM2711 HVS has up to 7 outputs connected to the pixelvalves and 718 + * the TXP (and therefore all the CRTCs found on that platform). 719 + * 720 + * The naive (and our initial) implementation would just iterate over 721 + * all the active CRTCs, try to find a suitable FIFO, and then remove it 722 + * from the pool of available FIFOs. However, there are a few corner 723 + * cases that need to be considered: 724 + * 725 + * - When running in a dual-display setup (so with two CRTCs involved), 726 + * we can update the state of a single CRTC (for example by changing 727 + * its mode using xrandr under X11) without affecting the other. In 728 + * this case, the other CRTC wouldn't be in the state at all, so we 729 + * need to consider all the running CRTCs in the DRM device to assign 730 + * a FIFO, not just the one in the state. 731 + * 732 + * - To fix the above, we can't use drm_atomic_get_crtc_state on all 733 + * enabled CRTCs to pull their CRTC state into the global state, since 734 + * a page flip would start considering their vblank to complete. Since 735 + * we don't have a guarantee that they are actually active, that 736 + * vblank might never happen, and shouldn't even be considered if we 737 + * want to do a page flip on a single CRTC. That can be tested by 738 + * doing a modetest -v first on HDMI1 and then on HDMI0. 
739 + * 740 + * - Since we need the pixelvalve to be disabled and enabled back when 741 + * the FIFO is changed, we should keep the FIFO assigned for as long 742 + * as the CRTC is enabled, only considering it free again once that 743 + * CRTC has been disabled. This can be tested by booting X11 on a 744 + * single display, and changing the resolution down and then back up. 745 + */ 746 + static int vc4_pv_muxing_atomic_check(struct drm_device *dev, 747 + struct drm_atomic_state *state) 748 + { 749 + struct vc4_hvs_state *hvs_new_state; 702 750 struct drm_crtc_state *old_crtc_state, *new_crtc_state; 703 751 struct drm_crtc *crtc; 704 - int i, ret; 752 + unsigned int i; 705 753 706 - /* 707 - * Since the HVS FIFOs are shared across all the pixelvalves and 708 - * the TXP (and thus all the CRTCs), we need to pull the current 709 - * state of all the enabled CRTCs so that an update to a single 710 - * CRTC still keeps the previous FIFOs enabled and assigned to 711 - * the same CRTCs, instead of evaluating only the CRTC being 712 - * modified. 
713 - */ 714 - list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 715 - struct drm_crtc_state *crtc_state; 716 - 717 - if (!crtc->state->enable) 718 - continue; 719 - 720 - crtc_state = drm_atomic_get_crtc_state(state, crtc); 721 - if (IS_ERR(crtc_state)) 722 - return PTR_ERR(crtc_state); 723 - } 754 + hvs_new_state = vc4_hvs_get_global_state(state); 755 + if (!hvs_new_state) 756 + return -EINVAL; 724 757 725 758 for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { 759 + struct vc4_crtc_state *old_vc4_crtc_state = 760 + to_vc4_crtc_state(old_crtc_state); 726 761 struct vc4_crtc_state *new_vc4_crtc_state = 727 762 to_vc4_crtc_state(new_crtc_state); 728 763 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 729 764 unsigned int matching_channels; 730 765 731 - if (old_crtc_state->enable && !new_crtc_state->enable) 732 - new_vc4_crtc_state->assigned_channel = VC4_HVS_CHANNEL_DISABLED; 733 - 734 - if (!new_crtc_state->enable) 766 + /* Nothing to do here, let's skip it */ 767 + if (old_crtc_state->enable == new_crtc_state->enable) 735 768 continue; 736 769 737 - if (new_vc4_crtc_state->assigned_channel != VC4_HVS_CHANNEL_DISABLED) { 738 - unassigned_channels &= ~BIT(new_vc4_crtc_state->assigned_channel); 770 + /* Muxing will need to be modified, mark it as such */ 771 + new_vc4_crtc_state->update_muxing = true; 772 + 773 + /* If we're disabling our CRTC, we put back our channel */ 774 + if (!new_crtc_state->enable) { 775 + hvs_new_state->unassigned_channels |= BIT(old_vc4_crtc_state->assigned_channel); 776 + new_vc4_crtc_state->assigned_channel = VC4_HVS_CHANNEL_DISABLED; 739 777 continue; 740 778 } 741 779 ··· 833 731 * the future, we will need to have something smarter, 834 732 * but it works so far. 
835 733 */ 836 - matching_channels = unassigned_channels & vc4_crtc->data->hvs_available_channels; 734 + matching_channels = hvs_new_state->unassigned_channels & vc4_crtc->data->hvs_available_channels; 837 735 if (matching_channels) { 838 736 unsigned int channel = ffs(matching_channels) - 1; 839 737 840 738 new_vc4_crtc_state->assigned_channel = channel; 841 - unassigned_channels &= ~BIT(channel); 739 + hvs_new_state->unassigned_channels &= ~BIT(channel); 842 740 } else { 843 741 return -EINVAL; 844 742 } 845 743 } 744 + 745 + return 0; 746 + } 747 + 748 + static int 749 + vc4_atomic_check(struct drm_device *dev, struct drm_atomic_state *state) 750 + { 751 + int ret; 752 + 753 + ret = vc4_pv_muxing_atomic_check(dev, state); 754 + if (ret) 755 + return ret; 846 756 847 757 ret = vc4_ctm_atomic_check(dev, state); 848 758 if (ret < 0) ··· 919 805 return ret; 920 806 921 807 ret = vc4_load_tracker_obj_init(vc4); 808 + if (ret) 809 + return ret; 810 + 811 + ret = vc4_hvs_channels_obj_init(vc4); 922 812 if (ret) 923 813 return ret; 924 814
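The FIFO assignment in `vc4_pv_muxing_atomic_check()` above keeps a bitmask of free channels in the shared HVS state and hands each enabling CRTC the lowest set bit it can reach. The core allocation step, sketched standalone (GENMASK and the kernel's `ffs()` replaced by portable equivalents; `assign_channel` is a name of mine):

```c
#include <assert.h>
#include <strings.h>	/* ffs() */

#define HVS_NUM_CHANNELS 3
#define CHANNEL_DISABLED (-1)

/* Pick the lowest free channel among those the CRTC can use and clear
 * it from the pool; mirrors the matching_channels logic above. */
static int assign_channel(unsigned int *unassigned, unsigned int available)
{
	unsigned int matching = *unassigned & available;
	int channel;

	if (!matching)
		return CHANNEL_DISABLED;

	channel = ffs(matching) - 1;
	*unassigned &= ~(1u << channel);
	return channel;
}
```

Disabling a CRTC corresponds to OR-ing its bit back into the pool, which is what the `unassigned_channels |= BIT(...)` branch of the patch does.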
+39 -5
drivers/hid/hid-cypress.c
··· 23 23 #define CP_2WHEEL_MOUSE_HACK 0x02 24 24 #define CP_2WHEEL_MOUSE_HACK_ON 0x04 25 25 26 + #define VA_INVAL_LOGICAL_BOUNDARY 0x08 27 + 26 28 /* 27 29 * Some USB barcode readers from cypress have usage min and usage max in 28 30 * the wrong order 29 31 */ 30 - static __u8 *cp_report_fixup(struct hid_device *hdev, __u8 *rdesc, 32 + static __u8 *cp_rdesc_fixup(struct hid_device *hdev, __u8 *rdesc, 31 33 unsigned int *rsize) 32 34 { 33 - unsigned long quirks = (unsigned long)hid_get_drvdata(hdev); 34 35 unsigned int i; 35 - 36 - if (!(quirks & CP_RDESC_SWAPPED_MIN_MAX)) 37 - return rdesc; 38 36 39 37 if (*rsize < 4) 40 38 return rdesc; ··· 43 45 rdesc[i + 2] = 0x29; 44 46 swap(rdesc[i + 3], rdesc[i + 1]); 45 47 } 48 + return rdesc; 49 + } 50 + 51 + static __u8 *va_logical_boundary_fixup(struct hid_device *hdev, __u8 *rdesc, 52 + unsigned int *rsize) 53 + { 54 + /* 55 + * Varmilo VA104M (with VID Cypress and device ID 07B1) incorrectly 56 + * reports Logical Minimum of its Consumer Control device as 572 57 + * (0x02 0x3c). Fix this by setting its Logical Minimum to zero. 
58 + */ 59 + if (*rsize == 25 && 60 + rdesc[0] == 0x05 && rdesc[1] == 0x0c && 61 + rdesc[2] == 0x09 && rdesc[3] == 0x01 && 62 + rdesc[6] == 0x19 && rdesc[7] == 0x00 && 63 + rdesc[11] == 0x16 && rdesc[12] == 0x3c && rdesc[13] == 0x02) { 64 + hid_info(hdev, 65 + "fixing up varmilo VA104M consumer control report descriptor\n"); 66 + rdesc[12] = 0x00; 67 + rdesc[13] = 0x00; 68 + } 69 + return rdesc; 70 + } 71 + 72 + static __u8 *cp_report_fixup(struct hid_device *hdev, __u8 *rdesc, 73 + unsigned int *rsize) 74 + { 75 + unsigned long quirks = (unsigned long)hid_get_drvdata(hdev); 76 + 77 + if (quirks & CP_RDESC_SWAPPED_MIN_MAX) 78 + rdesc = cp_rdesc_fixup(hdev, rdesc, rsize); 79 + if (quirks & VA_INVAL_LOGICAL_BOUNDARY) 80 + rdesc = va_logical_boundary_fixup(hdev, rdesc, rsize); 81 + 46 82 return rdesc; 47 83 } 48 84 ··· 160 128 .driver_data = CP_RDESC_SWAPPED_MIN_MAX }, 161 129 { HID_USB_DEVICE(USB_VENDOR_ID_CYPRESS, USB_DEVICE_ID_CYPRESS_MOUSE), 162 130 .driver_data = CP_2WHEEL_MOUSE_HACK }, 131 + { HID_USB_DEVICE(USB_VENDOR_ID_CYPRESS, USB_DEVICE_ID_CYPRESS_VARMILO_VA104M_07B1), 132 + .driver_data = VA_INVAL_LOGICAL_BOUNDARY }, 163 133 { } 164 134 }; 165 135 MODULE_DEVICE_TABLE(hid, cp_devices);
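The Varmilo fixup above matches an exact byte pattern in the 25-byte consumer-control descriptor before zeroing the bogus 16-bit Logical Minimum (`0x16 0x3c 0x02`, i.e. 572). The guard-then-patch shape can be sketched standalone (only the checked offsets are populated in the example descriptor; the function name is mine):

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned char u8;

/* Patch a bogus 16-bit Logical Minimum to zero, but only when the
 * descriptor matches the known layout, as the fixup above does. */
static int fix_logical_min(u8 *rdesc, size_t rsize)
{
	if (rsize == 25 &&
	    rdesc[0] == 0x05 && rdesc[1] == 0x0c &&	/* Usage Page (Consumer) */
	    rdesc[2] == 0x09 && rdesc[3] == 0x01 &&	/* Usage (Consumer Control) */
	    rdesc[6] == 0x19 && rdesc[7] == 0x00 &&	/* Usage Minimum (0) */
	    rdesc[11] == 0x16 && rdesc[12] == 0x3c && rdesc[13] == 0x02) {
		rdesc[12] = 0x00;
		rdesc[13] = 0x00;
		return 1;
	}
	return 0;
}
```

Pinning the match to the exact descriptor length and surrounding items keeps the byte surgery from touching any other Cypress device sharing the quirk table.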
+9
drivers/hid/hid-ids.h
··· 331 331 #define USB_DEVICE_ID_CYPRESS_BARCODE_4 0xed81 332 332 #define USB_DEVICE_ID_CYPRESS_TRUETOUCH 0xc001 333 333 334 + #define USB_DEVICE_ID_CYPRESS_VARMILO_VA104M_07B1 0X07b1 335 + 334 336 #define USB_VENDOR_ID_DATA_MODUL 0x7374 335 337 #define USB_VENDOR_ID_DATA_MODUL_EASYMAXTOUCH 0x1201 336 338 ··· 445 443 #define USB_VENDOR_ID_FRUCTEL 0x25B6 446 444 #define USB_DEVICE_ID_GAMETEL_MT_MODE 0x0002 447 445 446 + #define USB_VENDOR_ID_GAMEVICE 0x27F8 447 + #define USB_DEVICE_ID_GAMEVICE_GV186 0x0BBE 448 + #define USB_DEVICE_ID_GAMEVICE_KISHI 0x0BBF 449 + 448 450 #define USB_VENDOR_ID_GAMERON 0x0810 449 451 #define USB_DEVICE_ID_GAMERON_DUAL_PSX_ADAPTOR 0x0001 450 452 #define USB_DEVICE_ID_GAMERON_DUAL_PCS_ADAPTOR 0x0002 ··· 491 485 #define USB_DEVICE_ID_PENPOWER 0x00f4 492 486 493 487 #define USB_VENDOR_ID_GREENASIA 0x0e8f 488 + #define USB_DEVICE_ID_GREENASIA_DUAL_SAT_ADAPTOR 0x3010 494 489 #define USB_DEVICE_ID_GREENASIA_DUAL_USB_JOYPAD 0x3013 495 490 496 491 #define USB_VENDOR_ID_GRETAGMACBETH 0x0971 ··· 750 743 #define USB_VENDOR_ID_LOGITECH 0x046d 751 744 #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e 752 745 #define USB_DEVICE_ID_LOGITECH_T651 0xb00c 746 + #define USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD 0xb309 753 747 #define USB_DEVICE_ID_LOGITECH_C007 0xc007 754 748 #define USB_DEVICE_ID_LOGITECH_C077 0xc077 755 749 #define USB_DEVICE_ID_LOGITECH_RECEIVER 0xc101 ··· 1306 1298 1307 1299 #define USB_VENDOR_ID_UGTIZER 0x2179 1308 1300 #define USB_DEVICE_ID_UGTIZER_TABLET_GP0610 0x0053 1301 + #define USB_DEVICE_ID_UGTIZER_TABLET_GT5040 0x0077 1309 1302 1310 1303 #define USB_VENDOR_ID_VIEWSONIC 0x0543 1311 1304 #define USB_DEVICE_ID_VIEWSONIC_PD1011 0xe621
+3
drivers/hid/hid-input.c
··· 319 319 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ASUSTEK, 320 320 USB_DEVICE_ID_ASUSTEK_T100CHI_KEYBOARD), 321 321 HID_BATTERY_QUIRK_IGNORE }, 322 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 323 + USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD), 324 + HID_BATTERY_QUIRK_IGNORE }, 322 325 {} 323 326 }; 324 327
+60 -1
drivers/hid/hid-ite.c
··· 11 11 12 12 #include "hid-ids.h" 13 13 14 + #define QUIRK_TOUCHPAD_ON_OFF_REPORT BIT(0) 15 + 16 + static __u8 *ite_report_fixup(struct hid_device *hdev, __u8 *rdesc, unsigned int *rsize) 17 + { 18 + unsigned long quirks = (unsigned long)hid_get_drvdata(hdev); 19 + 20 + if (quirks & QUIRK_TOUCHPAD_ON_OFF_REPORT) { 21 + if (*rsize == 188 && rdesc[162] == 0x81 && rdesc[163] == 0x02) { 22 + hid_info(hdev, "Fixing up ITE keyboard report descriptor\n"); 23 + rdesc[163] = HID_MAIN_ITEM_RELATIVE; 24 + } 25 + } 26 + 27 + return rdesc; 28 + } 29 + 30 + static int ite_input_mapping(struct hid_device *hdev, 31 + struct hid_input *hi, struct hid_field *field, 32 + struct hid_usage *usage, unsigned long **bit, 33 + int *max) 34 + { 35 + 36 + unsigned long quirks = (unsigned long)hid_get_drvdata(hdev); 37 + 38 + if ((quirks & QUIRK_TOUCHPAD_ON_OFF_REPORT) && 39 + (usage->hid & HID_USAGE_PAGE) == 0x00880000) { 40 + if (usage->hid == 0x00880078) { 41 + /* Touchpad on, userspace expects F22 for this */ 42 + hid_map_usage_clear(hi, usage, bit, max, EV_KEY, KEY_F22); 43 + return 1; 44 + } 45 + if (usage->hid == 0x00880079) { 46 + /* Touchpad off, userspace expects F23 for this */ 47 + hid_map_usage_clear(hi, usage, bit, max, EV_KEY, KEY_F23); 48 + return 1; 49 + } 50 + return -1; 51 + } 52 + 53 + return 0; 54 + } 55 + 14 56 static int ite_event(struct hid_device *hdev, struct hid_field *field, 15 57 struct hid_usage *usage, __s32 value) 16 58 { ··· 79 37 return 0; 80 38 } 81 39 40 + static int ite_probe(struct hid_device *hdev, const struct hid_device_id *id) 41 + { 42 + int ret; 43 + 44 + hid_set_drvdata(hdev, (void *)id->driver_data); 45 + 46 + ret = hid_open_report(hdev); 47 + if (ret) 48 + return ret; 49 + 50 + return hid_hw_start(hdev, HID_CONNECT_DEFAULT); 51 + } 52 + 82 53 static const struct hid_device_id ite_devices[] = { 83 54 { HID_USB_DEVICE(USB_VENDOR_ID_ITE, USB_DEVICE_ID_ITE8595) }, 84 55 { HID_USB_DEVICE(USB_VENDOR_ID_258A, USB_DEVICE_ID_258A_6A88) }, 85 56 /* 
ITE8595 USB kbd ctlr, with Synaptics touchpad connected to it. */ 86 57 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 87 58 USB_VENDOR_ID_SYNAPTICS, 88 - USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012) }, 59 + USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012), 60 + .driver_data = QUIRK_TOUCHPAD_ON_OFF_REPORT }, 89 61 /* ITE8910 USB kbd ctlr, with Synaptics touchpad connected to it. */ 90 62 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 91 63 USB_VENDOR_ID_SYNAPTICS, ··· 111 55 static struct hid_driver ite_driver = { 112 56 .name = "itetech", 113 57 .id_table = ite_devices, 58 + .probe = ite_probe, 59 + .report_fixup = ite_report_fixup, 60 + .input_mapping = ite_input_mapping, 114 61 .event = ite_event, 115 62 }; 116 63 module_hid_driver(ite_driver);
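The ITE input mapping above keys off the vendor usage page 0x0088 by masking the combined 32-bit usage value: the usage page lives in the top 16 bits and the usage ID in the bottom 16. A sketch of the split (mask values as in the HID core; helper names are mine):

```c
#include <assert.h>

#define HID_USAGE_PAGE 0xffff0000u
#define HID_USAGE      0x0000ffffu

/* Split a combined 32-bit HID usage into its page and ID halves. */
static unsigned int usage_page(unsigned int hid)
{
	return (hid & HID_USAGE_PAGE) >> 16;
}

static unsigned int usage_id(unsigned int hid)
{
	return hid & HID_USAGE;
}
```

With this split, the patch's comparisons read naturally: page 0x0088 with IDs 0x0078 and 0x0079 are the touchpad-on and touchpad-off events remapped to KEY_F22 and KEY_F23.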
+21 -1
drivers/hid/hid-logitech-dj.c
··· 328 328 0x25, 0x01, /* LOGICAL_MAX (1) */ 329 329 0x75, 0x01, /* REPORT_SIZE (1) */ 330 330 0x95, 0x04, /* REPORT_COUNT (4) */ 331 - 0x81, 0x06, /* INPUT */ 331 + 0x81, 0x02, /* INPUT (Data,Var,Abs) */ 332 332 0xC0, /* END_COLLECTION */ 333 333 0xC0, /* END_COLLECTION */ 334 334 }; ··· 866 866 schedule_work(&djrcv_dev->work); 867 867 } 868 868 869 + /* 870 + * Some quad/bluetooth keyboards have a builtin touchpad in this case we see 871 + * only 1 paired device with a device_type of REPORT_TYPE_KEYBOARD. For the 872 + * touchpad to work we must also forward mouse input reports to the dj_hiddev 873 + * created for the keyboard (instead of forwarding them to a second paired 874 + * device with a device_type of REPORT_TYPE_MOUSE as we normally would). 875 + */ 876 + static const u16 kbd_builtin_touchpad_ids[] = { 877 + 0xb309, /* Dinovo Edge */ 878 + 0xb30c, /* Dinovo Mini */ 879 + }; 880 + 869 881 static void logi_hidpp_dev_conn_notif_equad(struct hid_device *hdev, 870 882 struct hidpp_event *hidpp_report, 871 883 struct dj_workitem *workitem) 872 884 { 873 885 struct dj_receiver_dev *djrcv_dev = hid_get_drvdata(hdev); 886 + int i, id; 874 887 875 888 workitem->type = WORKITEM_TYPE_PAIRED; 876 889 workitem->device_type = hidpp_report->params[HIDPP_PARAM_DEVICE_INFO] & ··· 895 882 workitem->reports_supported |= STD_KEYBOARD | MULTIMEDIA | 896 883 POWER_KEYS | MEDIA_CENTER | 897 884 HIDPP; 885 + id = (workitem->quad_id_msb << 8) | workitem->quad_id_lsb; 886 + for (i = 0; i < ARRAY_SIZE(kbd_builtin_touchpad_ids); i++) { 887 + if (id == kbd_builtin_touchpad_ids[i]) { 888 + workitem->reports_supported |= STD_MOUSE; 889 + break; 890 + } 891 + } 898 892 break; 899 893 case REPORT_TYPE_MOUSE: 900 894 workitem->reports_supported |= STD_MOUSE | HIDPP;
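The receiver-side change above reassembles the 16-bit device ID from the two quad-ID bytes before scanning the small built-in-touchpad table. The lookup, sketched standalone (table contents copied from the patch; the function name is mine):

```c
#include <assert.h>
#include <stddef.h>

static const unsigned short kbd_builtin_touchpad_ids[] = {
	0xb309,	/* Dinovo Edge */
	0xb30c,	/* Dinovo Mini */
};

/* Rebuild the device id from the quad id bytes and check the table,
 * mirroring the loop added in logi_hidpp_dev_conn_notif_equad(). */
static int has_builtin_touchpad(unsigned char msb, unsigned char lsb)
{
	int id = (msb << 8) | lsb;
	size_t i;

	for (i = 0; i < sizeof(kbd_builtin_touchpad_ids) /
			sizeof(kbd_builtin_touchpad_ids[0]); i++)
		if (id == kbd_builtin_touchpad_ids[i])
			return 1;
	return 0;
}
```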
+32
drivers/hid/hid-logitech-hidpp.c
··· 93 93 #define HIDPP_CAPABILITY_BATTERY_LEVEL_STATUS BIT(3) 94 94 #define HIDPP_CAPABILITY_BATTERY_VOLTAGE BIT(4) 95 95 96 + #define lg_map_key_clear(c) hid_map_usage_clear(hi, usage, bit, max, EV_KEY, (c)) 97 + 96 98 /* 97 99 * There are two hidpp protocols in use, the first version hidpp10 is known 98 100 * as register access protocol or RAP, the second version hidpp20 is known as ··· 2953 2951 } 2954 2952 2955 2953 /* -------------------------------------------------------------------------- */ 2954 + /* Logitech Dinovo Mini keyboard with builtin touchpad */ 2955 + /* -------------------------------------------------------------------------- */ 2956 + #define DINOVO_MINI_PRODUCT_ID 0xb30c 2957 + 2958 + static int lg_dinovo_input_mapping(struct hid_device *hdev, struct hid_input *hi, 2959 + struct hid_field *field, struct hid_usage *usage, 2960 + unsigned long **bit, int *max) 2961 + { 2962 + if ((usage->hid & HID_USAGE_PAGE) != HID_UP_LOGIVENDOR) 2963 + return 0; 2964 + 2965 + switch (usage->hid & HID_USAGE) { 2966 + case 0x00d: lg_map_key_clear(KEY_MEDIA); break; 2967 + default: 2968 + return 0; 2969 + } 2970 + return 1; 2971 + } 2972 + 2973 + /* -------------------------------------------------------------------------- */ 2956 2974 /* HID++1.0 devices which use HID++ reports for their wheels */ 2957 2975 /* -------------------------------------------------------------------------- */ 2958 2976 static int hidpp10_wheel_connect(struct hidpp_device *hidpp) ··· 3206 3184 else if (hidpp->quirks & HIDPP_QUIRK_CLASS_M560 && 3207 3185 field->application != HID_GD_MOUSE) 3208 3186 return m560_input_mapping(hdev, hi, field, usage, bit, max); 3187 + 3188 + if (hdev->product == DINOVO_MINI_PRODUCT_ID) 3189 + return lg_dinovo_input_mapping(hdev, hi, field, usage, bit, max); 3209 3190 3210 3191 return 0; 3211 3192 } ··· 3972 3947 LDJ_DEVICE(0x405e), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3973 3948 { /* Mouse Logitech MX Anywhere 2 */ 3974 3949 
LDJ_DEVICE(0x404a), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3950 + { LDJ_DEVICE(0x4072), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3975 3951 { LDJ_DEVICE(0xb013), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3976 3952 { LDJ_DEVICE(0xb018), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, 3977 3953 { LDJ_DEVICE(0xb01f), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 }, ··· 3996 3970 .driver_data = HIDPP_QUIRK_CLASS_K750 }, 3997 3971 { /* Keyboard MX5000 (Bluetooth-receiver in HID proxy mode) */ 3998 3972 LDJ_DEVICE(0xb305), 3973 + .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS }, 3974 + { /* Dinovo Edge (Bluetooth-receiver in HID proxy mode) */ 3975 + LDJ_DEVICE(0xb309), 3999 3976 .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS }, 4000 3977 { /* Keyboard MX5500 (Bluetooth-receiver in HID proxy mode) */ 4001 3978 LDJ_DEVICE(0xb30b), ··· 4041 4012 4042 4013 { /* MX5000 keyboard over Bluetooth */ 4043 4014 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb305), 4015 + .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS }, 4016 + { /* Dinovo Edge keyboard over Bluetooth */ 4017 + HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb309), 4044 4018 .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS }, 4045 4019 { /* MX5500 keyboard over Bluetooth */ 4046 4020 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb30b),
+39 -9
drivers/hid/hid-mcp2221.c
··· 49 49 MCP2221_ALT_F_NOT_GPIOD = 0xEF, 50 50 }; 51 51 52 + /* MCP GPIO direction encoding */ 53 + enum { 54 + MCP2221_DIR_OUT = 0x00, 55 + MCP2221_DIR_IN = 0x01, 56 + }; 57 + 58 + #define MCP_NGPIO 4 59 + 60 + /* MCP GPIO set command layout */ 61 + struct mcp_set_gpio { 62 + u8 cmd; 63 + u8 dummy; 64 + struct { 65 + u8 change_value; 66 + u8 value; 67 + u8 change_direction; 68 + u8 direction; 69 + } gpio[MCP_NGPIO]; 70 + } __packed; 71 + 72 + /* MCP GPIO get command layout */ 73 + struct mcp_get_gpio { 74 + u8 cmd; 75 + u8 dummy; 76 + struct { 77 + u8 direction; 78 + u8 value; 79 + } gpio[MCP_NGPIO]; 80 + } __packed; 81 + 52 82 /* 53 83 * There is no way to distinguish responses. Therefore next command 54 84 * is sent only after response to previous has been received. Mutex ··· 572 542 573 543 mcp->txbuf[0] = MCP2221_GPIO_GET; 574 544 575 - mcp->gp_idx = (offset + 1) * 2; 545 + mcp->gp_idx = offsetof(struct mcp_get_gpio, gpio[offset].value); 576 546 577 547 mutex_lock(&mcp->lock); 578 548 ret = mcp_send_data_req_status(mcp, mcp->txbuf, 1); ··· 589 559 memset(mcp->txbuf, 0, 18); 590 560 mcp->txbuf[0] = MCP2221_GPIO_SET; 591 561 592 - mcp->gp_idx = ((offset + 1) * 4) - 1; 562 + mcp->gp_idx = offsetof(struct mcp_set_gpio, gpio[offset].value); 593 563 594 564 mcp->txbuf[mcp->gp_idx - 1] = 1; 595 565 mcp->txbuf[mcp->gp_idx] = !!value; ··· 605 575 memset(mcp->txbuf, 0, 18); 606 576 mcp->txbuf[0] = MCP2221_GPIO_SET; 607 577 608 - mcp->gp_idx = (offset + 1) * 5; 578 + mcp->gp_idx = offsetof(struct mcp_set_gpio, gpio[offset].direction); 609 579 610 580 mcp->txbuf[mcp->gp_idx - 1] = 1; 611 581 mcp->txbuf[mcp->gp_idx] = val; ··· 620 590 struct mcp2221 *mcp = gpiochip_get_data(gc); 621 591 622 592 mutex_lock(&mcp->lock); 623 - ret = mcp_gpio_dir_set(mcp, offset, 0); 593 + ret = mcp_gpio_dir_set(mcp, offset, MCP2221_DIR_IN); 624 594 mutex_unlock(&mcp->lock); 625 595 626 596 return ret; ··· 633 603 struct mcp2221 *mcp = gpiochip_get_data(gc); 634 604 635 605 
mutex_lock(&mcp->lock); 636 - ret = mcp_gpio_dir_set(mcp, offset, 1); 606 + ret = mcp_gpio_dir_set(mcp, offset, MCP2221_DIR_OUT); 637 607 mutex_unlock(&mcp->lock); 638 608 639 609 /* Can't configure as output, bailout early */ ··· 653 623 654 624 mcp->txbuf[0] = MCP2221_GPIO_GET; 655 625 656 - mcp->gp_idx = (offset + 1) * 2; 626 + mcp->gp_idx = offsetof(struct mcp_get_gpio, gpio[offset].direction); 657 627 658 628 mutex_lock(&mcp->lock); 659 629 ret = mcp_send_data_req_status(mcp, mcp->txbuf, 1); ··· 662 632 if (ret) 663 633 return ret; 664 634 665 - if (mcp->gpio_dir) 635 + if (mcp->gpio_dir == MCP2221_DIR_IN) 666 636 return GPIO_LINE_DIRECTION_IN; 667 637 668 638 return GPIO_LINE_DIRECTION_OUT; ··· 788 758 mcp->status = -ENOENT; 789 759 } else { 790 760 mcp->status = !!data[mcp->gp_idx]; 791 - mcp->gpio_dir = !!data[mcp->gp_idx + 1]; 761 + mcp->gpio_dir = data[mcp->gp_idx + 1]; 792 762 } 793 763 break; 794 764 default: ··· 890 860 mcp->gc->get_direction = mcp_gpio_get_direction; 891 861 mcp->gc->set = mcp_gpio_set; 892 862 mcp->gc->get = mcp_gpio_get; 893 - mcp->gc->ngpio = 4; 863 + mcp->gc->ngpio = MCP_NGPIO; 894 864 mcp->gc->base = -1; 895 865 mcp->gc->can_sleep = 1; 896 866 mcp->gc->parent = &hdev->dev;
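The point of the mcp2221 change above is easiest to see by comparing the old magic arithmetic against the packed layouts the patch introduces: for the set command, the old value index `((offset + 1) * 4) - 1` happens to equal the real field position, but the old direction index `(offset + 1) * 5` only agrees with `offsetof()` for GPIO 0. A sketch reproducing the layouts from the patch (struct definitions copied; the assertions below are my reading of the offsets):

```c
#include <assert.h>
#include <stddef.h>

#define MCP_NGPIO 4

/* Same packed report layouts as the patch; offsetof() makes each
 * field's byte position in the HID report explicit. */
struct mcp_set_gpio {
	unsigned char cmd;
	unsigned char dummy;
	struct {
		unsigned char change_value;
		unsigned char value;
		unsigned char change_direction;
		unsigned char direction;
	} gpio[MCP_NGPIO];
} __attribute__((packed));

struct mcp_get_gpio {
	unsigned char cmd;
	unsigned char dummy;
	struct {
		unsigned char direction;
		unsigned char value;
	} gpio[MCP_NGPIO];
} __attribute__((packed));
```

For GPIO 1 the direction field sits at byte 9, while the old `(offset + 1) * 5` arithmetic would have written byte 10, so the `offsetof()` form is not just cosmetic.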
+5
drivers/hid/hid-quirks.c
··· 83 83 { HID_USB_DEVICE(USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER), HID_QUIRK_NO_INIT_REPORTS }, 84 84 { HID_USB_DEVICE(USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28), HID_QUIRK_NOGET }, 85 85 { HID_USB_DEVICE(USB_VENDOR_ID_FUTABA, USB_DEVICE_ID_LED_DISPLAY), HID_QUIRK_NO_INIT_REPORTS }, 86 + { HID_USB_DEVICE(USB_VENDOR_ID_GREENASIA, USB_DEVICE_ID_GREENASIA_DUAL_SAT_ADAPTOR), HID_QUIRK_MULTI_INPUT }, 86 87 { HID_USB_DEVICE(USB_VENDOR_ID_GREENASIA, USB_DEVICE_ID_GREENASIA_DUAL_USB_JOYPAD), HID_QUIRK_MULTI_INPUT }, 88 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_GAMEVICE, USB_DEVICE_ID_GAMEVICE_GV186), 89 + HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, 90 + { HID_USB_DEVICE(USB_VENDOR_ID_GAMEVICE, USB_DEVICE_ID_GAMEVICE_KISHI), 91 + HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, 87 92 { HID_USB_DEVICE(USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_DRIVING), HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT }, 88 93 { HID_USB_DEVICE(USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_FIGHTING), HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT }, 89 94 { HID_USB_DEVICE(USB_VENDOR_ID_HAPP, USB_DEVICE_ID_UGCI_FLYING), HID_QUIRK_BADPAD | HID_QUIRK_MULTI_INPUT },
+2 -1
drivers/hid/hid-sensor-hub.c
··· 483 483 return 1; 484 484 485 485 ptr = raw_data; 486 - ptr++; /* Skip report id */ 486 + if (report->id) 487 + ptr++; /* Skip report id */ 487 488 488 489 spin_lock_irqsave(&pdata->lock, flags); 489 490
+2
drivers/hid/hid-uclogic-core.c
··· 385 385 USB_DEVICE_ID_UCLOGIC_DRAWIMAGE_G3) }, 386 386 { HID_USB_DEVICE(USB_VENDOR_ID_UGTIZER, 387 387 USB_DEVICE_ID_UGTIZER_TABLET_GP0610) }, 388 + { HID_USB_DEVICE(USB_VENDOR_ID_UGTIZER, 389 + USB_DEVICE_ID_UGTIZER_TABLET_GT5040) }, 388 390 { HID_USB_DEVICE(USB_VENDOR_ID_UGEE, 389 391 USB_DEVICE_ID_UGEE_TABLET_G5) }, 390 392 { HID_USB_DEVICE(USB_VENDOR_ID_UGEE,
+2
drivers/hid/hid-uclogic-params.c
··· 997 997 break; 998 998 case VID_PID(USB_VENDOR_ID_UGTIZER, 999 999 USB_DEVICE_ID_UGTIZER_TABLET_GP0610): 1000 + case VID_PID(USB_VENDOR_ID_UGTIZER, 1001 + USB_DEVICE_ID_UGTIZER_TABLET_GT5040): 1000 1002 case VID_PID(USB_VENDOR_ID_UGEE, 1001 1003 USB_DEVICE_ID_UGEE_XPPEN_TABLET_G540): 1002 1004 case VID_PID(USB_VENDOR_ID_UGEE,
+9
drivers/hid/i2c-hid/i2c-hid-core.c
··· 943 943 } 944 944 } 945 945 946 + static void i2c_hid_acpi_shutdown(struct device *dev) 947 + { 948 + acpi_device_set_power(ACPI_COMPANION(dev), ACPI_STATE_D3_COLD); 949 + } 950 + 946 951 static const struct acpi_device_id i2c_hid_acpi_match[] = { 947 952 {"ACPI0C50", 0 }, 948 953 {"PNP0C50", 0 }, ··· 964 959 static inline void i2c_hid_acpi_fix_up_power(struct device *dev) {} 965 960 966 961 static inline void i2c_hid_acpi_enable_wakeup(struct device *dev) {} 962 + 963 + static inline void i2c_hid_acpi_shutdown(struct device *dev) {} 967 964 #endif 968 965 969 966 #ifdef CONFIG_OF ··· 1182 1175 1183 1176 i2c_hid_set_power(client, I2C_HID_PWR_SLEEP); 1184 1177 free_irq(client->irq, ihid); 1178 + 1179 + i2c_hid_acpi_shutdown(&client->dev); 1185 1180 } 1186 1181 1187 1182 #ifdef CONFIG_PM_SLEEP
+46 -5
drivers/iio/accel/kxcjk-1013.c
··· 126 126 KX_MAX_CHIPS /* this must be last */ 127 127 }; 128 128 129 + enum kx_acpi_type { 130 + ACPI_GENERIC, 131 + ACPI_SMO8500, 132 + ACPI_KIOX010A, 133 + }; 134 + 129 135 struct kxcjk1013_data { 130 136 struct i2c_client *client; 131 137 struct iio_trigger *dready_trig; ··· 149 143 bool motion_trigger_on; 150 144 int64_t timestamp; 151 145 enum kx_chipset chipset; 152 - bool is_smo8500_device; 146 + enum kx_acpi_type acpi_type; 153 147 }; 154 148 155 149 enum kxcjk1013_axis { ··· 276 270 {19163, 1, 0}, 277 271 {38326, 0, 1} }; 278 272 273 + #ifdef CONFIG_ACPI 274 + enum kiox010a_fn_index { 275 + KIOX010A_SET_LAPTOP_MODE = 1, 276 + KIOX010A_SET_TABLET_MODE = 2, 277 + }; 278 + 279 + static int kiox010a_dsm(struct device *dev, int fn_index) 280 + { 281 + acpi_handle handle = ACPI_HANDLE(dev); 282 + guid_t kiox010a_dsm_guid; 283 + union acpi_object *obj; 284 + 285 + if (!handle) 286 + return -ENODEV; 287 + 288 + guid_parse("1f339696-d475-4e26-8cad-2e9f8e6d7a91", &kiox010a_dsm_guid); 289 + 290 + obj = acpi_evaluate_dsm(handle, &kiox010a_dsm_guid, 1, fn_index, NULL); 291 + if (!obj) 292 + return -EIO; 293 + 294 + ACPI_FREE(obj); 295 + return 0; 296 + } 297 + #endif 298 + 279 299 static int kxcjk1013_set_mode(struct kxcjk1013_data *data, 280 300 enum kxcjk1013_mode mode) 281 301 { ··· 378 346 static int kxcjk1013_chip_init(struct kxcjk1013_data *data) 379 347 { 380 348 int ret; 349 + 350 + #ifdef CONFIG_ACPI 351 + if (data->acpi_type == ACPI_KIOX010A) { 352 + /* Make sure the kbd and touchpad on 2-in-1s using 2 KXCJ91008-s work */ 353 + kiox010a_dsm(&data->client->dev, KIOX010A_SET_LAPTOP_MODE); 354 + } 355 + #endif 381 356 382 357 ret = i2c_smbus_read_byte_data(data->client, KXCJK1013_REG_WHO_AM_I); 383 358 if (ret < 0) { ··· 1286 1247 1287 1248 static const char *kxcjk1013_match_acpi_device(struct device *dev, 1288 1249 enum kx_chipset *chipset, 1289 - bool *is_smo8500_device) 1250 + enum kx_acpi_type *acpi_type) 1290 1251 { 1291 1252 const struct acpi_device_id *id; 1292 1253 ··· 1295 1256 return NULL; 1296 1257 1297 1258 if (strcmp(id->id, "SMO8500") == 0) 1298 - *is_smo8500_device = true; 1259 + *acpi_type = ACPI_SMO8500; 1260 + else if (strcmp(id->id, "KIOX010A") == 0) 1261 + *acpi_type = ACPI_KIOX010A; 1299 1262 1300 1263 *chipset = (enum kx_chipset)id->driver_data; 1301 1264 ··· 1340 1299 } else if (ACPI_HANDLE(&client->dev)) { 1341 1300 name = kxcjk1013_match_acpi_device(&client->dev, 1342 1301 &data->chipset, 1343 - &data->is_smo8500_device); 1302 + &data->acpi_type); 1344 1303 } else 1345 1304 return -ENODEV; 1346 1305 ··· 1357 1316 indio_dev->modes = INDIO_DIRECT_MODE; 1358 1317 indio_dev->info = &kxcjk1013_info; 1359 1318 1360 - if (client->irq > 0 && !data->is_smo8500_device) { 1319 + if (client->irq > 0 && data->acpi_type != ACPI_SMO8500) { 1361 1320 ret = devm_request_threaded_irq(&client->dev, client->irq, 1362 1321 kxcjk1013_data_rdy_trig_poll, 1363 1322 kxcjk1013_event_handler,
+27 -7
drivers/iio/adc/ingenic-adc.c
··· 71 71 #define JZ4725B_ADC_BATTERY_HIGH_VREF_BITS 10 72 72 #define JZ4740_ADC_BATTERY_HIGH_VREF (7500 * 0.986) 73 73 #define JZ4740_ADC_BATTERY_HIGH_VREF_BITS 12 74 - #define JZ4770_ADC_BATTERY_VREF 6600 74 + #define JZ4770_ADC_BATTERY_VREF 1200 75 75 #define JZ4770_ADC_BATTERY_VREF_BITS 12 76 76 77 77 #define JZ_ADC_IRQ_AUX BIT(0) ··· 177 177 mutex_unlock(&adc->lock); 178 178 } 179 179 180 - static void ingenic_adc_enable(struct ingenic_adc *adc, 181 - int engine, 182 - bool enabled) 180 + static void ingenic_adc_enable_unlocked(struct ingenic_adc *adc, 181 + int engine, 182 + bool enabled) 183 183 { 184 184 u8 val; 185 185 186 - mutex_lock(&adc->lock); 187 186 val = readb(adc->base + JZ_ADC_REG_ENABLE); 188 187 189 188 if (enabled) ··· 191 192 val &= ~BIT(engine); 192 193 193 194 writeb(val, adc->base + JZ_ADC_REG_ENABLE); 195 + } 196 + 197 + static void ingenic_adc_enable(struct ingenic_adc *adc, 198 + int engine, 199 + bool enabled) 200 + { 201 + mutex_lock(&adc->lock); 202 + ingenic_adc_enable_unlocked(adc, engine, enabled); 194 203 mutex_unlock(&adc->lock); 195 204 } 196 205 197 206 static int ingenic_adc_capture(struct ingenic_adc *adc, 198 207 int engine) 199 208 { 209 + u32 cfg; 200 210 u8 val; 201 211 int ret; 202 212 203 - ingenic_adc_enable(adc, engine, true); 213 + /* 214 + * Disable CMD_SEL temporarily, because it causes wrong VBAT readings, 215 + * probably due to the switch of VREF. We must keep the lock here to 216 + * avoid races with the buffer enable/disable functions. 217 + */ 218 + mutex_lock(&adc->lock); 219 + cfg = readl(adc->base + JZ_ADC_REG_CFG); 220 + writel(cfg & ~JZ_ADC_REG_CFG_CMD_SEL, adc->base + JZ_ADC_REG_CFG); 221 + 222 + ingenic_adc_enable_unlocked(adc, engine, true); 204 223 ret = readb_poll_timeout(adc->base + JZ_ADC_REG_ENABLE, val, 205 224 !(val & BIT(engine)), 250, 1000); 206 225 if (ret) 207 226 ingenic_adc_enable_unlocked(adc, engine, false); 227 + 228 + writel(cfg, adc->base + JZ_ADC_REG_CFG); 229 + mutex_unlock(&adc->lock); 208 230 209 231 return ret; 210 232 }
+4 -2
drivers/iio/adc/mt6577_auxadc.c
··· 9 9 #include <linux/err.h> 10 10 #include <linux/kernel.h> 11 11 #include <linux/module.h> 12 - #include <linux/of.h> 13 - #include <linux/of_device.h> 12 + #include <linux/mod_devicetable.h> 14 13 #include <linux/platform_device.h> 14 + #include <linux/property.h> 15 15 #include <linux/iopoll.h> 16 16 #include <linux/io.h> 17 17 #include <linux/iio/iio.h> ··· 275 275 dev_err(&pdev->dev, "null clock rate\n"); 276 276 goto err_disable_clk; 277 277 } 278 + 279 + adc_dev->dev_comp = device_get_match_data(&pdev->dev); 278 280 279 281 mutex_init(&adc_dev->lock); 280 282
+17 -24
drivers/iio/adc/stm32-adc-core.c
··· 41 41 * struct stm32_adc_common_regs - stm32 common registers 42 42 * @csr: common status register offset 43 43 * @ccr: common control register offset 44 - * @eoc1_msk: adc1 end of conversion flag in @csr 45 - * @eoc2_msk: adc2 end of conversion flag in @csr 46 - * @eoc3_msk: adc3 end of conversion flag in @csr 44 + * @eoc_msk: array of eoc (end of conversion flag) masks in csr for adc1..n 45 + * @ovr_msk: array of ovr (overrun flag) masks in csr for adc1..n 47 46 * @ier: interrupt enable register offset for each adc 48 47 * @eocie_msk: end of conversion interrupt enable mask in @ier 49 48 */ 50 49 struct stm32_adc_common_regs { 51 50 u32 csr; 52 51 u32 ccr; 53 - u32 eoc1_msk; 54 - u32 eoc2_msk; 55 - u32 eoc3_msk; 52 + u32 eoc_msk[STM32_ADC_MAX_ADCS]; 53 + u32 ovr_msk[STM32_ADC_MAX_ADCS]; 56 54 u32 ier; 57 55 u32 eocie_msk; 58 56 }; ··· 280 282 static const struct stm32_adc_common_regs stm32f4_adc_common_regs = { 281 283 .csr = STM32F4_ADC_CSR, 282 284 .ccr = STM32F4_ADC_CCR, 283 - .eoc1_msk = STM32F4_EOC1 | STM32F4_OVR1, 284 - .eoc2_msk = STM32F4_EOC2 | STM32F4_OVR2, 285 - .eoc3_msk = STM32F4_EOC3 | STM32F4_OVR3, 285 + .eoc_msk = { STM32F4_EOC1, STM32F4_EOC2, STM32F4_EOC3}, 286 + .ovr_msk = { STM32F4_OVR1, STM32F4_OVR2, STM32F4_OVR3}, 286 287 .ier = STM32F4_ADC_CR1, 287 - .eocie_msk = STM32F4_EOCIE | STM32F4_OVRIE, 288 + .eocie_msk = STM32F4_EOCIE, 288 289 }; 289 290 290 291 /* STM32H7 common registers definitions */ 291 292 static const struct stm32_adc_common_regs stm32h7_adc_common_regs = { 292 293 .csr = STM32H7_ADC_CSR, 293 294 .ccr = STM32H7_ADC_CCR, 294 - .eoc1_msk = STM32H7_EOC_MST | STM32H7_OVR_MST, 295 - .eoc2_msk = STM32H7_EOC_SLV | STM32H7_OVR_SLV, 295 + .eoc_msk = { STM32H7_EOC_MST, STM32H7_EOC_SLV}, 296 + .ovr_msk = { STM32H7_OVR_MST, STM32H7_OVR_SLV}, 296 297 .ier = STM32H7_ADC_IER, 297 - .eocie_msk = STM32H7_EOCIE | STM32H7_OVRIE, 298 + .eocie_msk = STM32H7_EOCIE, 298 299 }; 299 300 300 301 static const unsigned int stm32_adc_offset[STM32_ADC_MAX_ADCS] = { ··· 315 318 { 316 319 struct stm32_adc_priv *priv = irq_desc_get_handler_data(desc); 317 320 struct irq_chip *chip = irq_desc_get_chip(desc); 321 + int i; 318 322 u32 status; 319 323 320 324 chained_irq_enter(chip, desc); ··· 333 335 * before invoking the interrupt handler (e.g. call ISR only for 334 336 * IRQ-enabled ADCs). 335 337 */ 336 - if (status & priv->cfg->regs->eoc1_msk && 337 - stm32_adc_eoc_enabled(priv, 0)) 338 - generic_handle_irq(irq_find_mapping(priv->domain, 0)); 339 - 340 - if (status & priv->cfg->regs->eoc2_msk && 341 - stm32_adc_eoc_enabled(priv, 1)) 342 - generic_handle_irq(irq_find_mapping(priv->domain, 1)); 343 - 344 - if (status & priv->cfg->regs->eoc3_msk && 345 - stm32_adc_eoc_enabled(priv, 2)) 346 - generic_handle_irq(irq_find_mapping(priv->domain, 2)); 338 + for (i = 0; i < priv->cfg->num_irqs; i++) { 339 + if ((status & priv->cfg->regs->eoc_msk[i] && 340 + stm32_adc_eoc_enabled(priv, i)) || 341 + (status & priv->cfg->regs->ovr_msk[i])) 342 + generic_handle_irq(irq_find_mapping(priv->domain, i)); 343 + } 347 344 348 345 chained_irq_exit(chip, desc); 349 346 };
+48 -2
drivers/iio/adc/stm32-adc.c
··· 154 154 * @start_conv: routine to start conversions 155 155 * @stop_conv: routine to stop conversions 156 156 * @unprepare: optional unprepare routine (disable, power-down) 157 + * @irq_clear: routine to clear irqs 157 158 * @smp_cycles: programmable sampling time (ADC clock cycles) 158 159 */ 159 160 struct stm32_adc_cfg { ··· 167 166 void (*start_conv)(struct iio_dev *, bool dma); 168 167 void (*stop_conv)(struct iio_dev *); 169 168 void (*unprepare)(struct iio_dev *); 169 + void (*irq_clear)(struct iio_dev *indio_dev, u32 msk); 170 170 const unsigned int *smp_cycles; 171 171 }; 172 172 ··· 623 621 STM32F4_ADON | STM32F4_DMA | STM32F4_DDS); 624 622 } 625 623 624 + static void stm32f4_adc_irq_clear(struct iio_dev *indio_dev, u32 msk) 625 + { 626 + struct stm32_adc *adc = iio_priv(indio_dev); 627 + 628 + stm32_adc_clr_bits(adc, adc->cfg->regs->isr_eoc.reg, msk); 629 + } 630 + 626 631 static void stm32h7_adc_start_conv(struct iio_dev *indio_dev, bool dma) 627 632 { 628 633 struct stm32_adc *adc = iio_priv(indio_dev); ··· 666 657 dev_warn(&indio_dev->dev, "stop failed\n"); 667 658 668 659 stm32_adc_clr_bits(adc, STM32H7_ADC_CFGR, STM32H7_DMNGT_MASK); 660 + } 661 + 662 + static void stm32h7_adc_irq_clear(struct iio_dev *indio_dev, u32 msk) 663 + { 664 + struct stm32_adc *adc = iio_priv(indio_dev); 665 + /* On STM32H7 IRQs are cleared by writing 1 into ISR register */ 666 + stm32_adc_set_bits(adc, adc->cfg->regs->isr_eoc.reg, msk); 669 667 } 670 668 671 669 static int stm32h7_adc_exit_pwr_down(struct iio_dev *indio_dev) ··· 1251 1235 } 1252 1236 1253 1237 1238 + static void stm32_adc_irq_clear(struct iio_dev *indio_dev, u32 msk) 1239 + { 1240 + struct stm32_adc *adc = iio_priv(indio_dev); 1241 + 1242 + adc->cfg->irq_clear(indio_dev, msk); 1243 + } 1244 + 1254 1245 static irqreturn_t stm32_adc_threaded_isr(int irq, void *data) 1255 1246 { 1256 1247 struct iio_dev *indio_dev = data; 1257 1248 struct stm32_adc *adc = iio_priv(indio_dev); 1258 1249 const struct stm32_adc_regspec *regs = adc->cfg->regs; 1259 1250 u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg); 1251 + u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg); 1260 1252 1261 - if (status & regs->isr_ovr.mask) 1253 + /* Check ovr status right now, as ovr mask should be already disabled */ 1254 + if (status & regs->isr_ovr.mask) { 1255 + /* 1256 + * Clear ovr bit to avoid subsequent calls to IRQ handler. 1257 + * This requires to stop ADC first. OVR bit state in ISR, 1258 + * is propaged to CSR register by hardware. 1259 + */ 1260 + adc->cfg->stop_conv(indio_dev); 1261 + stm32_adc_irq_clear(indio_dev, regs->isr_ovr.mask); 1262 1262 dev_err(&indio_dev->dev, "Overrun, stopping: restart needed\n"); 1263 + return IRQ_HANDLED; 1264 + } 1263 1265 1264 - return IRQ_HANDLED; 1266 + if (!(status & mask)) 1267 + dev_err_ratelimited(&indio_dev->dev, 1268 + "Unexpected IRQ: IER=0x%08x, ISR=0x%08x\n", 1269 + mask, status); 1270 + 1271 + return IRQ_NONE; 1265 1272 } 1266 1273 1267 1274 static irqreturn_t stm32_adc_isr(int irq, void *data) ··· 1293 1254 struct stm32_adc *adc = iio_priv(indio_dev); 1294 1255 const struct stm32_adc_regspec *regs = adc->cfg->regs; 1295 1256 u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg); 1257 + u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg); 1258 + 1259 + if (!(status & mask)) 1260 + return IRQ_WAKE_THREAD; 1296 1261 1297 1262 if (status & regs->isr_ovr.mask) { 1298 1263 /* ··· 2089 2046 .start_conv = stm32f4_adc_start_conv, 2090 2047 .stop_conv = stm32f4_adc_stop_conv, 2091 2048 .smp_cycles = stm32f4_adc_smp_cycles, 2049 + .irq_clear = stm32f4_adc_irq_clear, 2092 2050 }; 2093 2051 2094 2052 static const struct stm32_adc_cfg stm32h7_adc_cfg = { ··· 2101 2057 .prepare = stm32h7_adc_prepare, 2102 2058 .unprepare = stm32h7_adc_unprepare, 2103 2059 .smp_cycles = stm32h7_adc_smp_cycles, 2060 + .irq_clear = stm32h7_adc_irq_clear, 2104 2061 }; 2105 2062 2106 2063 static const struct stm32_adc_cfg stm32mp1_adc_cfg = { ··· 2114 2069 .prepare = stm32h7_adc_prepare, 2115 2070 .unprepare = stm32h7_adc_unprepare, 2116 2071 .smp_cycles = stm32h7_adc_smp_cycles, 2072 + .irq_clear = stm32h7_adc_irq_clear, 2117 2073 }; 2118 2074 2119 2075 static const struct of_device_id stm32_adc_of_match[] = {
+11 -5
drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c
··· 256 256 struct cros_ec_sensorhub *sensor_hub = dev_get_drvdata(dev->parent); 257 257 struct cros_ec_dev *ec = sensor_hub->ec; 258 258 struct cros_ec_sensor_platform *sensor_platform = dev_get_platdata(dev); 259 - u32 ver_mask; 259 + u32 ver_mask, temp; 260 260 int frequencies[ARRAY_SIZE(state->frequencies) / 2] = { 0 }; 261 261 int ret, i; 262 262 ··· 311 311 &frequencies[2], 312 312 &state->fifo_max_event_count); 313 313 } else { 314 - frequencies[1] = state->resp->info_3.min_frequency; 315 - frequencies[2] = state->resp->info_3.max_frequency; 316 - state->fifo_max_event_count = 317 - state->resp->info_3.fifo_max_event_count; 314 + if (state->resp->info_3.max_frequency == 0) { 315 + get_default_min_max_freq(state->resp->info.type, 316 + &frequencies[1], 317 + &frequencies[2], 318 + &temp); 319 + } else { 320 + frequencies[1] = state->resp->info_3.min_frequency; 321 + frequencies[2] = state->resp->info_3.max_frequency; 322 + } 323 + state->fifo_max_event_count = state->resp->info_3.fifo_max_event_count; 318 324 } 319 325 for (i = 0; i < ARRAY_SIZE(frequencies); i++) { 320 326 state->frequencies[2 * i] = frequencies[i] / 1000;
+4 -2
drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
··· 156 156 static void st_lsm6dsx_shub_wait_complete(struct st_lsm6dsx_hw *hw) 157 157 { 158 158 struct st_lsm6dsx_sensor *sensor; 159 - u32 odr; 159 + u32 odr, timeout; 160 160 161 161 sensor = iio_priv(hw->iio_devs[ST_LSM6DSX_ID_ACC]); 162 162 odr = (hw->enable_mask & BIT(ST_LSM6DSX_ID_ACC)) ? sensor->odr : 12500; 163 - msleep((2000000U / odr) + 1); 163 + /* set 10ms as minimum timeout for i2c slave configuration */ 164 + timeout = max_t(u32, 2000000U / odr + 1, 10); 165 + msleep(timeout); 164 166 } 165 167 166 168 /*
+1
drivers/iio/light/Kconfig
··· 544 544 545 545 config VCNL4035 546 546 tristate "VCNL4035 combined ALS and proximity sensor" 547 + select IIO_BUFFER 547 548 select IIO_TRIGGERED_BUFFER 548 549 select REGMAP_I2C 549 550 depends on I2C
+1 -3
drivers/infiniband/hw/hfi1/file_ops.c
··· 1 1 /* 2 + * Copyright(c) 2020 Cornelis Networks, Inc. 2 3 * Copyright(c) 2015-2020 Intel Corporation. 3 4 * 4 5 * This file is provided under a dual BSD/GPLv2 license. When using or ··· 207 206 spin_lock_init(&fd->tid_lock); 208 207 spin_lock_init(&fd->invalid_lock); 209 208 fd->rec_cpu_num = -1; /* no cpu affinity by default */ 210 - fd->mm = current->mm; 211 - mmgrab(fd->mm); 212 209 fd->dd = dd; 213 210 fp->private_data = fd; 214 211 return 0; ··· 710 711 711 712 deallocate_ctxt(uctxt); 712 713 done: 713 - mmdrop(fdata->mm); 714 714 715 715 if (atomic_dec_and_test(&dd->user_refcount)) 716 716 complete(&dd->user_comp);
+1 -1
drivers/infiniband/hw/hfi1/hfi.h
··· 1 1 #ifndef _HFI1_KERNEL_H 2 2 #define _HFI1_KERNEL_H 3 3 /* 4 + * Copyright(c) 2020 Cornelis Networks, Inc. 4 5 * Copyright(c) 2015-2020 Intel Corporation. 5 6 * 6 7 * This file is provided under a dual BSD/GPLv2 license. When using or ··· 1452 1451 u32 invalid_tid_idx; 1453 1452 /* protect invalid_tids array and invalid_tid_idx */ 1454 1453 spinlock_t invalid_lock; 1455 - struct mm_struct *mm; 1456 1454 }; 1457 1455 1458 1456 extern struct xarray hfi1_dev_table;
+34 -32
drivers/infiniband/hw/hfi1/mmu_rb.c
··· 1 1 /* 2 + * Copyright(c) 2020 Cornelis Networks, Inc. 2 3 * Copyright(c) 2016 - 2017 Intel Corporation. 3 4 * 4 5 * This file is provided under a dual BSD/GPLv2 license. When using or ··· 49 48 #include <linux/rculist.h> 50 49 #include <linux/mmu_notifier.h> 51 50 #include <linux/interval_tree_generic.h> 51 + #include <linux/sched/mm.h> 52 52 53 53 #include "mmu_rb.h" 54 54 #include "trace.h" 55 - 56 - struct mmu_rb_handler { 57 - struct mmu_notifier mn; 58 - struct rb_root_cached root; 59 - void *ops_arg; 60 - spinlock_t lock; /* protect the RB tree */ 61 - struct mmu_rb_ops *ops; 62 - struct mm_struct *mm; 63 - struct list_head lru_list; 64 - struct work_struct del_work; 65 - struct list_head del_list; 66 - struct workqueue_struct *wq; 67 - }; 68 55 69 56 static unsigned long mmu_node_start(struct mmu_rb_node *); 70 57 static unsigned long mmu_node_last(struct mmu_rb_node *); ··· 81 92 return PAGE_ALIGN(node->addr + node->len) - 1; 82 93 } 83 94 84 - int hfi1_mmu_rb_register(void *ops_arg, struct mm_struct *mm, 95 + int hfi1_mmu_rb_register(void *ops_arg, 85 96 struct mmu_rb_ops *ops, 86 97 struct workqueue_struct *wq, 87 98 struct mmu_rb_handler **handler) 88 99 { 89 - struct mmu_rb_handler *handlr; 100 + struct mmu_rb_handler *h; 90 101 int ret; 91 102 92 - handlr = kmalloc(sizeof(*handlr), GFP_KERNEL); 93 - if (!handlr) 103 + h = kmalloc(sizeof(*h), GFP_KERNEL); 104 + if (!h) 94 105 return -ENOMEM; 95 106 96 - handlr->root = RB_ROOT_CACHED; 97 - handlr->ops = ops; 98 - handlr->ops_arg = ops_arg; 99 - INIT_HLIST_NODE(&handlr->mn.hlist); 100 - spin_lock_init(&handlr->lock); 101 - handlr->mn.ops = &mn_opts; 102 - handlr->mm = mm; 103 - INIT_WORK(&handlr->del_work, handle_remove); 104 - INIT_LIST_HEAD(&handlr->del_list); 105 - INIT_LIST_HEAD(&handlr->lru_list); 106 - handlr->wq = wq; 107 + h->root = RB_ROOT_CACHED; 108 + h->ops = ops; 109 + h->ops_arg = ops_arg; 110 + INIT_HLIST_NODE(&h->mn.hlist); 111 + spin_lock_init(&h->lock); 112 + h->mn.ops = &mn_opts; 113 + INIT_WORK(&h->del_work, handle_remove); 114 + INIT_LIST_HEAD(&h->del_list); 115 + INIT_LIST_HEAD(&h->lru_list); 116 + h->wq = wq; 107 117 108 - ret = mmu_notifier_register(&handlr->mn, handlr->mm); 118 + ret = mmu_notifier_register(&h->mn, current->mm); 109 119 if (ret) { 110 - kfree(handlr); 120 + kfree(h); 111 121 return ret; 112 122 } 113 123 114 - *handler = handlr; 124 + *handler = h; 115 125 return 0; 116 126 } 117 127 ··· 122 134 struct list_head del_list; 123 135 124 136 /* Unregister first so we don't get any more notifications. */ 125 - mmu_notifier_unregister(&handler->mn, handler->mm); 137 + mmu_notifier_unregister(&handler->mn, handler->mn.mm); 126 138 127 139 /* 128 140 * Make sure the wq delete handler is finished running. It will not ··· 154 166 int ret = 0; 155 167 156 168 trace_hfi1_mmu_rb_insert(mnode->addr, mnode->len); 169 + 170 + if (current->mm != handler->mn.mm) 171 + return -EPERM; 172 + 157 173 spin_lock_irqsave(&handler->lock, flags); 158 174 node = __mmu_rb_search(handler, mnode->addr, mnode->len); 159 175 if (node) { ··· 172 180 __mmu_int_rb_remove(mnode, &handler->root); 173 181 list_del(&mnode->list); /* remove from LRU list */ 174 182 } 183 + mnode->handler = handler; 175 184 unlock: 176 185 spin_unlock_irqrestore(&handler->lock, flags); 177 186 return ret; ··· 210 217 unsigned long flags; 211 218 bool ret = false; 212 219 220 + if (current->mm != handler->mn.mm) 221 + return ret; 222 + 213 223 spin_lock_irqsave(&handler->lock, flags); 214 224 node = __mmu_rb_search(handler, addr, len); 215 225 if (node) { ··· 234 238 struct list_head del_list; 235 239 unsigned long flags; 236 240 bool stop = false; 241 + 242 + if (current->mm != handler->mn.mm) 243 + return; 237 244 238 245 INIT_LIST_HEAD(&del_list); 239 246 ··· 270 271 struct mmu_rb_node *node) 271 272 { 272 273 unsigned long flags; 274 + 275 + if (current->mm != handler->mn.mm) 276 + return; 273 277 274 278 /* Validity of handler and node pointers has been checked by caller. */ 275 279 trace_hfi1_mmu_rb_remove(node->addr, node->len);
+15 -1
drivers/infiniband/hw/hfi1/mmu_rb.h
··· 1 1 /* 2 + * Copyright(c) 2020 Cornelis Networks, Inc. 2 3 * Copyright(c) 2016 Intel Corporation. 3 4 * 4 5 * This file is provided under a dual BSD/GPLv2 license. When using or ··· 55 54 unsigned long len; 56 55 unsigned long __last; 57 56 struct rb_node node; 57 + struct mmu_rb_handler *handler; 58 58 struct list_head list; 59 59 }; 60 60 ··· 73 71 void *evict_arg, bool *stop); 74 72 }; 75 73 76 - int hfi1_mmu_rb_register(void *ops_arg, struct mm_struct *mm, 74 + struct mmu_rb_handler { 75 + struct mmu_notifier mn; 76 + struct rb_root_cached root; 77 + void *ops_arg; 78 + spinlock_t lock; /* protect the RB tree */ 79 + struct mmu_rb_ops *ops; 80 + struct list_head lru_list; 81 + struct work_struct del_work; 82 + struct list_head del_list; 83 + struct workqueue_struct *wq; 84 + }; 85 + 86 + int hfi1_mmu_rb_register(void *ops_arg, 77 87 struct mmu_rb_ops *ops, 78 88 struct workqueue_struct *wq, 79 89 struct mmu_rb_handler **handler);
+8 -4
drivers/infiniband/hw/hfi1/user_exp_rcv.c
··· 1 1 /* 2 + * Copyright(c) 2020 Cornelis Networks, Inc. 2 3 * Copyright(c) 2015-2018 Intel Corporation. 3 4 * 4 5 * This file is provided under a dual BSD/GPLv2 license. When using or ··· 174 173 { 175 174 struct page **pages; 176 175 struct hfi1_devdata *dd = fd->uctxt->dd; 176 + struct mm_struct *mm; 177 177 178 178 if (mapped) { 179 179 pci_unmap_single(dd->pcidev, node->dma_addr, 180 180 node->npages * PAGE_SIZE, PCI_DMA_FROMDEVICE); 181 181 pages = &node->pages[idx]; 182 + mm = mm_from_tid_node(node); 182 183 } else { 183 184 pages = &tidbuf->pages[idx]; 185 + mm = current->mm; 184 186 } 185 - hfi1_release_user_pages(fd->mm, pages, npages, mapped); 187 + hfi1_release_user_pages(mm, pages, npages, mapped); 186 188 fd->tid_n_pinned -= npages; 187 189 } 188 190 ··· 220 216 * pages, accept the amount pinned so far and program only that. 221 217 * User space knows how to deal with partially programmed buffers. 222 218 */ 223 - if (!hfi1_can_pin_pages(dd, fd->mm, fd->tid_n_pinned, npages)) { 219 + if (!hfi1_can_pin_pages(dd, current->mm, fd->tid_n_pinned, npages)) { 224 220 kfree(pages); 225 221 return -ENOMEM; 226 222 } 227 223 228 - pinned = hfi1_acquire_user_pages(fd->mm, vaddr, npages, true, pages); 224 + pinned = hfi1_acquire_user_pages(current->mm, vaddr, npages, true, pages); 229 225 if (pinned <= 0) { 230 226 kfree(pages); 231 227 return pinned; ··· 760 756 761 757 if (fd->use_mn) { 762 758 ret = mmu_interval_notifier_insert( 763 - &node->notifier, fd->mm, 759 + &node->notifier, current->mm, 764 760 tbuf->vaddr + (pageidx * PAGE_SIZE), npages * PAGE_SIZE, 765 761 &tid_mn_ops); 766 762 if (ret)
+6
drivers/infiniband/hw/hfi1/user_exp_rcv.h
··· 1 1 #ifndef _HFI1_USER_EXP_RCV_H 2 2 #define _HFI1_USER_EXP_RCV_H 3 3 /* 4 + * Copyright(c) 2020 - Cornelis Networks, Inc. 4 5 * Copyright(c) 2015 - 2017 Intel Corporation. 5 6 * 6 7 * This file is provided under a dual BSD/GPLv2 license. When using or ··· 95 94 struct hfi1_tid_info *tinfo); 96 95 int hfi1_user_exp_rcv_invalid(struct hfi1_filedata *fd, 97 96 struct hfi1_tid_info *tinfo); 97 + 98 + static inline struct mm_struct *mm_from_tid_node(struct tid_rb_node *node) 99 + { 100 + return node->notifier.mm; 101 + } 98 102 99 103 #endif /* _HFI1_USER_EXP_RCV_H */
+7 -6
drivers/infiniband/hw/hfi1/user_sdma.c
··· 1 1 /* 2 + * Copyright(c) 2020 - Cornelis Networks, Inc. 2 3 * Copyright(c) 2015 - 2018 Intel Corporation. 3 4 * 4 5 * This file is provided under a dual BSD/GPLv2 license. When using or ··· 189 188 atomic_set(&pq->n_reqs, 0); 190 189 init_waitqueue_head(&pq->wait); 191 190 atomic_set(&pq->n_locked, 0); 192 - pq->mm = fd->mm; 193 191 194 192 iowait_init(&pq->busy, 0, NULL, NULL, defer_packet_queue, 195 193 activate_packet_queue, NULL, NULL); ··· 230 230 231 231 cq->nentries = hfi1_sdma_comp_ring_size; 232 232 233 - ret = hfi1_mmu_rb_register(pq, pq->mm, &sdma_rb_ops, dd->pport->hfi1_wq, 233 + ret = hfi1_mmu_rb_register(pq, &sdma_rb_ops, dd->pport->hfi1_wq, 234 234 &pq->handler); 235 235 if (ret) { 236 236 dd_dev_err(dd, "Failed to register with MMU %d", ret); ··· 980 980 981 981 npages -= node->npages; 982 982 retry: 983 - if (!hfi1_can_pin_pages(pq->dd, pq->mm, 983 + if (!hfi1_can_pin_pages(pq->dd, current->mm, 984 984 atomic_read(&pq->n_locked), npages)) { 985 985 cleared = sdma_cache_evict(pq, npages); 986 986 if (cleared >= npages) 987 987 goto retry; 988 988 } 989 - pinned = hfi1_acquire_user_pages(pq->mm, 989 + pinned = hfi1_acquire_user_pages(current->mm, 990 990 ((unsigned long)iovec->iov.iov_base + 991 991 (node->npages * PAGE_SIZE)), npages, 0, 992 992 pages + node->npages); ··· 995 995 return pinned; 996 996 } 997 997 if (pinned != npages) { 998 - unpin_vector_pages(pq->mm, pages, node->npages, pinned); 998 + unpin_vector_pages(current->mm, pages, node->npages, pinned); 999 999 return -EFAULT; 1000 1000 } 1001 1001 kfree(node->pages); ··· 1008 1008 static void unpin_sdma_pages(struct sdma_mmu_node *node) 1009 1009 { 1010 1010 if (node->npages) { 1011 - unpin_vector_pages(node->pq->mm, node->pages, 0, node->npages); 1011 + unpin_vector_pages(mm_from_sdma_node(node), node->pages, 0, 1012 + node->npages); 1012 1013 atomic_sub(node->npages, &node->pq->n_locked); 1013 1014 } 1014 1015 }
+6 -1
drivers/infiniband/hw/hfi1/user_sdma.h
··· 1 1 #ifndef _HFI1_USER_SDMA_H 2 2 #define _HFI1_USER_SDMA_H 3 3 /* 4 + * Copyright(c) 2020 - Cornelis Networks, Inc. 4 5 * Copyright(c) 2015 - 2018 Intel Corporation. 5 6 * 6 7 * This file is provided under a dual BSD/GPLv2 license. When using or ··· 134 133 unsigned long unpinned; 135 134 struct mmu_rb_handler *handler; 136 135 atomic_t n_locked; 137 - struct mm_struct *mm; 138 136 }; 139 137 140 138 struct hfi1_user_sdma_comp_q { ··· 249 249 int hfi1_user_sdma_process_request(struct hfi1_filedata *fd, 250 250 struct iovec *iovec, unsigned long dim, 251 251 unsigned long *count); 252 + 253 + static inline struct mm_struct *mm_from_sdma_node(struct sdma_mmu_node *node) 254 + { 255 + return node->rb.handler->mn.mm; 256 + } 252 257 253 258 #endif /* _HFI1_USER_SDMA_H */
+5 -4
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 2936 2936 2937 2937 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_R_INV_EN_S, 1); 2938 2938 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_L_INV_EN_S, 1); 2939 + roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_LW_EN_S, 1); 2939 2940 2940 2941 roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_PA_S, 0); 2941 2942 roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_MR_MW_S, 1); ··· 4990 4989 V2_QPC_BYTE_28_AT_M, 4991 4990 V2_QPC_BYTE_28_AT_S); 4992 4991 qp_attr->retry_cnt = roce_get_field(context.byte_212_lsn, 4993 - V2_QPC_BYTE_212_RETRY_CNT_M, 4994 - V2_QPC_BYTE_212_RETRY_CNT_S); 4992 + V2_QPC_BYTE_212_RETRY_NUM_INIT_M, 4993 + V2_QPC_BYTE_212_RETRY_NUM_INIT_S); 4995 4994 qp_attr->rnr_retry = roce_get_field(context.byte_244_rnr_rxack, 4996 - V2_QPC_BYTE_244_RNR_CNT_M, 4997 - V2_QPC_BYTE_244_RNR_CNT_S); 4995 + V2_QPC_BYTE_244_RNR_NUM_INIT_M, 4996 + V2_QPC_BYTE_244_RNR_NUM_INIT_S); 4998 4997 4999 4998 done: 5000 4999 qp_attr->cur_qp_state = qp_attr->qp_state;
+1 -1
drivers/infiniband/hw/hns/hns_roce_hw_v2.h
··· 1661 1661 __le32 rsv_uars_rsv_qps; 1662 1662 }; 1663 1663 #define V2_QUERY_PF_CAPS_D_NUM_SRQS_S 0 1664 - #define V2_QUERY_PF_CAPS_D_NUM_SRQS_M GENMASK(20, 0) 1664 + #define V2_QUERY_PF_CAPS_D_NUM_SRQS_M GENMASK(19, 0) 1665 1665 1666 1666 #define V2_QUERY_PF_CAPS_D_RQWQE_HOP_NUM_S 20 1667 1667 #define V2_QUERY_PF_CAPS_D_RQWQE_HOP_NUM_M GENMASK(21, 20)
-5
drivers/infiniband/hw/i40iw/i40iw_main.c
··· 54 54 #define DRV_VERSION __stringify(DRV_VERSION_MAJOR) "." \ 55 55 __stringify(DRV_VERSION_MINOR) "." __stringify(DRV_VERSION_BUILD) 56 56 57 - static int push_mode; 58 - module_param(push_mode, int, 0644); 59 - MODULE_PARM_DESC(push_mode, "Low latency mode: 0=disabled (default), 1=enabled)"); 60 - 61 57 static int debug; 62 58 module_param(debug, int, 0644); 63 59 MODULE_PARM_DESC(debug, "debug flags: 0=disabled (default), 0x7fffffff=all"); ··· 1576 1580 if (status) 1577 1581 goto exit; 1578 1582 iwdev->obj_next = iwdev->obj_mem; 1579 - iwdev->push_mode = push_mode; 1580 1583 1581 1584 init_waitqueue_head(&iwdev->vchnl_waitq); 1582 1585 init_waitqueue_head(&dev->vf_reqs);
+7 -30
drivers/infiniband/hw/i40iw/i40iw_verbs.c
··· 167 167 */ 168 168 static int i40iw_mmap(struct ib_ucontext *context, struct vm_area_struct *vma) 169 169 { 170 - struct i40iw_ucontext *ucontext; 171 - u64 db_addr_offset, push_offset, pfn; 170 + struct i40iw_ucontext *ucontext = to_ucontext(context); 171 + u64 dbaddr; 172 172 173 - ucontext = to_ucontext(context); 174 - if (ucontext->iwdev->sc_dev.is_pf) { 175 - db_addr_offset = I40IW_DB_ADDR_OFFSET; 176 - push_offset = I40IW_PUSH_OFFSET; 177 - if (vma->vm_pgoff) 178 - vma->vm_pgoff += I40IW_PF_FIRST_PUSH_PAGE_INDEX - 1; 179 - } else { 180 - db_addr_offset = I40IW_VF_DB_ADDR_OFFSET; 181 - push_offset = I40IW_VF_PUSH_OFFSET; 182 - if (vma->vm_pgoff) 183 - vma->vm_pgoff += I40IW_VF_FIRST_PUSH_PAGE_INDEX - 1; 184 - } 173 + if (vma->vm_pgoff || vma->vm_end - vma->vm_start != PAGE_SIZE) 174 + return -EINVAL; 185 175 186 - vma->vm_pgoff += db_addr_offset >> PAGE_SHIFT; 176 + dbaddr = I40IW_DB_ADDR_OFFSET + pci_resource_start(ucontext->iwdev->ldev->pcidev, 0); 187 177 188 - if (vma->vm_pgoff == (db_addr_offset >> PAGE_SHIFT)) { 189 - vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 190 - } else { 191 - if ((vma->vm_pgoff - (push_offset >> PAGE_SHIFT)) % 2) 192 - vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 193 - else 194 - vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot); 195 - } 196 - 197 - pfn = vma->vm_pgoff + 198 - (pci_resource_start(ucontext->iwdev->ldev->pcidev, 0) >> 199 - PAGE_SHIFT); 200 - 201 - return rdma_user_mmap_io(context, vma, pfn, PAGE_SIZE, 202 - vma->vm_page_prot, NULL); 178 + return rdma_user_mmap_io(context, vma, dbaddr >> PAGE_SHIFT, PAGE_SIZE, 179 + pgprot_noncached(vma->vm_page_prot), NULL); 203 180 } 204 181 205 182 /**
+6 -4
drivers/infiniband/hw/mthca/mthca_cq.c
··· 803 803 } 804 804 805 805 mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL); 806 - if (IS_ERR(mailbox)) 806 + if (IS_ERR(mailbox)) { 807 + err = PTR_ERR(mailbox); 807 808 goto err_out_arm; 809 + } 808 810 809 811 cq_context = mailbox->buf; 810 812 ··· 848 846 } 849 847 850 848 spin_lock_irq(&dev->cq_table.lock); 851 - if (mthca_array_set(&dev->cq_table.cq, 852 - cq->cqn & (dev->limits.num_cqs - 1), 853 - cq)) { 849 + err = mthca_array_set(&dev->cq_table.cq, 850 + cq->cqn & (dev->limits.num_cqs - 1), cq); 851 + if (err) { 854 852 spin_unlock_irq(&dev->cq_table.lock); 855 853 goto err_out_free_mr; 856 854 }
+22 -5
drivers/iommu/amd/init.c
··· 29 29 #include <asm/iommu_table.h> 30 30 #include <asm/io_apic.h> 31 31 #include <asm/irq_remapping.h> 32 + #include <asm/set_memory.h> 32 33 33 34 #include <linux/crash_dump.h> 34 35 ··· 673 672 free_pages((unsigned long)iommu->cmd_buf, get_order(CMD_BUFFER_SIZE)); 674 673 } 675 674 675 + static void *__init iommu_alloc_4k_pages(struct amd_iommu *iommu, 676 + gfp_t gfp, size_t size) 677 + { 678 + int order = get_order(size); 679 + void *buf = (void *)__get_free_pages(gfp, order); 680 + 681 + if (buf && 682 + iommu_feature(iommu, FEATURE_SNP) && 683 + set_memory_4k((unsigned long)buf, (1 << order))) { 684 + free_pages((unsigned long)buf, order); 685 + buf = NULL; 686 + } 687 + 688 + return buf; 689 + } 690 + 676 691 /* allocates the memory where the IOMMU will log its events to */ 677 692 static int __init alloc_event_buffer(struct amd_iommu *iommu) 678 693 { 679 - iommu->evt_buf = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 680 - get_order(EVT_BUFFER_SIZE)); 694 + iommu->evt_buf = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO, 695 + EVT_BUFFER_SIZE); 681 696 682 697 return iommu->evt_buf ? 0 : -ENOMEM; 683 698 } ··· 732 715 /* allocates the memory where the IOMMU will log its events to */ 733 716 static int __init alloc_ppr_log(struct amd_iommu *iommu) 734 717 { 735 - iommu->ppr_log = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 736 - get_order(PPR_LOG_SIZE)); 718 + iommu->ppr_log = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO, 719 + PPR_LOG_SIZE); 737 720 738 721 return iommu->ppr_log ? 0 : -ENOMEM; 739 722 } ··· 855 838 856 839 static int __init alloc_cwwb_sem(struct amd_iommu *iommu) 857 840 { 858 - iommu->cmd_sem = (void *)get_zeroed_page(GFP_KERNEL); 841 + iommu->cmd_sem = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO, 1); 859 842 860 843 return iommu->cmd_sem ? 0 : -ENOMEM; 861 844 }
+4
drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
··· 69 69 { 70 70 struct qcom_smmu *qsmmu; 71 71 72 + /* Check to make sure qcom_scm has finished probing */ 73 + if (!qcom_scm_is_available()) 74 + return ERR_PTR(-EPROBE_DEFER); 75 + 72 76 qsmmu = devm_kzalloc(smmu->dev, sizeof(*qsmmu), GFP_KERNEL); 73 77 if (!qsmmu) 74 78 return ERR_PTR(-ENOMEM);
+5 -2
drivers/iommu/intel/dmar.c
··· 335 335 336 336 static inline void vf_inherit_msi_domain(struct pci_dev *pdev) 337 337 { 338 - dev_set_msi_domain(&pdev->dev, dev_get_msi_domain(&pdev->physfn->dev)); 338 + struct pci_dev *physfn = pci_physfn(pdev); 339 + 340 + dev_set_msi_domain(&pdev->dev, dev_get_msi_domain(&physfn->dev)); 339 341 } 340 342 341 343 static int dmar_pci_bus_notifier(struct notifier_block *nb, ··· 986 984 warn_invalid_dmar(phys_addr, " returns all ones"); 987 985 goto unmap; 988 986 } 989 - iommu->vccap = dmar_readq(iommu->reg + DMAR_VCCAP_REG); 987 + if (ecap_vcs(iommu->ecap)) 988 + iommu->vccap = dmar_readq(iommu->reg + DMAR_VCCAP_REG); 990 989 991 990 /* the registers might be more than one page */ 992 991 map_size = max_t(int, ecap_max_iotlb_offset(iommu->ecap),
+5 -4
drivers/iommu/intel/iommu.c
··· 179 179 * (used when kernel is launched w/ TXT) 180 180 */ 181 181 static int force_on = 0; 182 - int intel_iommu_tboot_noforce; 182 + static int intel_iommu_tboot_noforce; 183 183 static int no_platform_optin; 184 184 185 185 #define ROOT_ENTRY_NR (VTD_PAGE_SIZE/sizeof(struct root_entry)) ··· 1833 1833 if (ecap_prs(iommu->ecap)) 1834 1834 intel_svm_finish_prq(iommu); 1835 1835 } 1836 - if (ecap_vcs(iommu->ecap) && vccap_pasid(iommu->vccap)) 1836 + if (vccap_pasid(iommu->vccap)) 1837 1837 ioasid_unregister_allocator(&iommu->pasid_allocator); 1838 1838 1839 1839 #endif ··· 3212 3212 * is active. All vIOMMU allocators will eventually be calling the same 3213 3213 * host allocator. 3214 3214 */ 3215 - if (!ecap_vcs(iommu->ecap) || !vccap_pasid(iommu->vccap)) 3215 + if (!vccap_pasid(iommu->vccap)) 3216 3216 return; 3217 3217 3218 3218 pr_info("Register custom PASID allocator\n"); ··· 4884 4884 * Intel IOMMU is required for a TXT/tboot launch or platform 4885 4885 * opt in, so enforce that. 4886 4886 */ 4887 - force_on = tboot_force_iommu() || platform_optin_force_iommu(); 4887 + force_on = (!intel_iommu_tboot_noforce && tboot_force_iommu()) || 4888 + platform_optin_force_iommu(); 4888 4889 4889 4890 if (iommu_init_mempool()) { 4890 4891 if (force_on)
+6 -4
drivers/iommu/iommu.c
··· 264 264 */ 265 265 iommu_alloc_default_domain(group, dev); 266 266 267 - if (group->default_domain) 267 + if (group->default_domain) { 268 268 ret = __iommu_attach_device(group->default_domain, dev); 269 + if (ret) { 270 + iommu_group_put(group); 271 + goto err_release; 272 + } 273 + } 269 274 270 275 iommu_create_device_direct_mappings(group, dev); 271 276 272 277 iommu_group_put(group); 273 - 274 - if (ret) 275 - goto err_release; 276 278 277 279 if (ops->probe_finalize) 278 280 ops->probe_finalize(dev);
+21 -7
drivers/media/platform/Kconfig
··· 253 253 depends on MTK_IOMMU || COMPILE_TEST 254 254 depends on VIDEO_DEV && VIDEO_V4L2 255 255 depends on ARCH_MEDIATEK || COMPILE_TEST 256 + depends on VIDEO_MEDIATEK_VPU || MTK_SCP 257 + # The two following lines ensure we have the same state ("m" or "y") as 258 + # our dependencies, to avoid missing symbols during link. 259 + depends on VIDEO_MEDIATEK_VPU || !VIDEO_MEDIATEK_VPU 260 + depends on MTK_SCP || !MTK_SCP 256 261 select VIDEOBUF2_DMA_CONTIG 257 262 select V4L2_MEM2MEM_DEV 258 - select VIDEO_MEDIATEK_VPU 259 - select MTK_SCP 263 + select VIDEO_MEDIATEK_VCODEC_VPU if VIDEO_MEDIATEK_VPU 264 + select VIDEO_MEDIATEK_VCODEC_SCP if MTK_SCP 260 265 help 261 - Mediatek video codec driver provides HW capability to 262 - encode and decode in a range of video formats 263 - This driver rely on VPU driver to communicate with VPU. 266 + Mediatek video codec driver provides HW capability to 267 + encode and decode in a range of video formats on MT8173 268 + and MT8183. 264 269 265 - To compile this driver as modules, choose M here: the 266 - modules will be called mtk-vcodec-dec and mtk-vcodec-enc. 270 + Note that support for MT8173 requires VIDEO_MEDIATEK_VPU to 271 + also be selected. Support for MT8183 depends on MTK_SCP. 272 + 273 + To compile this driver as modules, choose M here: the 274 + modules will be called mtk-vcodec-dec and mtk-vcodec-enc. 275 + 276 + config VIDEO_MEDIATEK_VCODEC_VPU 277 + bool 278 + 279 + config VIDEO_MEDIATEK_VCODEC_SCP 280 + bool 267 281 268 282 config VIDEO_MEM2MEM_DEINTERLACE 269 283 tristate "Deinterlace support"
+2
drivers/media/platform/marvell-ccic/mmp-driver.c
··· 307 307 * Suspend/resume support. 308 308 */ 309 309 310 + #ifdef CONFIG_PM 310 311 static int mmpcam_runtime_resume(struct device *dev) 311 312 { 312 313 struct mmp_camera *cam = dev_get_drvdata(dev); ··· 353 352 return mccic_resume(&cam->mcam); 354 353 return 0; 355 354 } 355 + #endif 356 356 357 357 static const struct dev_pm_ops mmpcam_pm_ops = { 358 358 SET_RUNTIME_PM_OPS(mmpcam_runtime_suspend, mmpcam_runtime_resume, NULL)
+9 -1
drivers/media/platform/mtk-vcodec/Makefile
··· 24 24 25 25 mtk-vcodec-common-y := mtk_vcodec_intr.o \ 26 26 mtk_vcodec_util.o \ 27 - mtk_vcodec_fw.o 27 + mtk_vcodec_fw.o \ 28 + 29 + ifneq ($(CONFIG_VIDEO_MEDIATEK_VCODEC_VPU),) 30 + mtk-vcodec-common-y += mtk_vcodec_fw_vpu.o 31 + endif 32 + 33 + ifneq ($(CONFIG_VIDEO_MEDIATEK_VCODEC_SCP),) 34 + mtk-vcodec-common-y += mtk_vcodec_fw_scp.o 35 + endif
+1 -1
drivers/media/platform/mtk-vcodec/mtk_vcodec_dec_drv.c
··· 241 241 } 242 242 dma_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32)); 243 243 244 - dev->fw_handler = mtk_vcodec_fw_select(dev, fw_type, VPU_RST_DEC); 244 + dev->fw_handler = mtk_vcodec_fw_select(dev, fw_type, DECODER); 245 245 if (IS_ERR(dev->fw_handler)) 246 246 return PTR_ERR(dev->fw_handler); 247 247
+1 -1
drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c
··· 293 293 } 294 294 dma_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32)); 295 295 296 - dev->fw_handler = mtk_vcodec_fw_select(dev, fw_type, VPU_RST_ENC); 296 + dev->fw_handler = mtk_vcodec_fw_select(dev, fw_type, ENCODER); 297 297 if (IS_ERR(dev->fw_handler)) 298 298 return PTR_ERR(dev->fw_handler); 299 299
+5 -169
drivers/media/platform/mtk-vcodec/mtk_vcodec_fw.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 3 #include "mtk_vcodec_fw.h" 4 + #include "mtk_vcodec_fw_priv.h" 4 5 #include "mtk_vcodec_util.h" 5 6 #include "mtk_vcodec_drv.h" 6 7 7 - struct mtk_vcodec_fw_ops { 8 - int (*load_firmware)(struct mtk_vcodec_fw *fw); 9 - unsigned int (*get_vdec_capa)(struct mtk_vcodec_fw *fw); 10 - unsigned int (*get_venc_capa)(struct mtk_vcodec_fw *fw); 11 - void * (*map_dm_addr)(struct mtk_vcodec_fw *fw, u32 dtcm_dmem_addr); 12 - int (*ipi_register)(struct mtk_vcodec_fw *fw, int id, 13 - mtk_vcodec_ipi_handler handler, const char *name, void *priv); 14 - int (*ipi_send)(struct mtk_vcodec_fw *fw, int id, void *buf, 15 - unsigned int len, unsigned int wait); 16 - }; 17 - 18 - struct mtk_vcodec_fw { 19 - enum mtk_vcodec_fw_type type; 20 - const struct mtk_vcodec_fw_ops *ops; 21 - struct platform_device *pdev; 22 - struct mtk_scp *scp; 23 - }; 24 - 25 - static int mtk_vcodec_vpu_load_firmware(struct mtk_vcodec_fw *fw) 26 - { 27 - return vpu_load_firmware(fw->pdev); 28 - } 29 - 30 - static unsigned int mtk_vcodec_vpu_get_vdec_capa(struct mtk_vcodec_fw *fw) 31 - { 32 - return vpu_get_vdec_hw_capa(fw->pdev); 33 - } 34 - 35 - static unsigned int mtk_vcodec_vpu_get_venc_capa(struct mtk_vcodec_fw *fw) 36 - { 37 - return vpu_get_venc_hw_capa(fw->pdev); 38 - } 39 - 40 - static void *mtk_vcodec_vpu_map_dm_addr(struct mtk_vcodec_fw *fw, 41 - u32 dtcm_dmem_addr) 42 - { 43 - return vpu_mapping_dm_addr(fw->pdev, dtcm_dmem_addr); 44 - } 45 - 46 - static int mtk_vcodec_vpu_set_ipi_register(struct mtk_vcodec_fw *fw, int id, 47 - mtk_vcodec_ipi_handler handler, 48 - const char *name, void *priv) 49 - { 50 - /* 51 - * The handler we receive takes a void * as its first argument. We 52 - * cannot change this because it needs to be passed down to the rproc 53 - * subsystem when SCP is used. VPU takes a const argument, which is 54 - * more constrained, so the conversion below is safe. 
55 - */ 56 - ipi_handler_t handler_const = (ipi_handler_t)handler; 57 - 58 - return vpu_ipi_register(fw->pdev, id, handler_const, name, priv); 59 - } 60 - 61 - static int mtk_vcodec_vpu_ipi_send(struct mtk_vcodec_fw *fw, int id, void *buf, 62 - unsigned int len, unsigned int wait) 63 - { 64 - return vpu_ipi_send(fw->pdev, id, buf, len); 65 - } 66 - 67 - static const struct mtk_vcodec_fw_ops mtk_vcodec_vpu_msg = { 68 - .load_firmware = mtk_vcodec_vpu_load_firmware, 69 - .get_vdec_capa = mtk_vcodec_vpu_get_vdec_capa, 70 - .get_venc_capa = mtk_vcodec_vpu_get_venc_capa, 71 - .map_dm_addr = mtk_vcodec_vpu_map_dm_addr, 72 - .ipi_register = mtk_vcodec_vpu_set_ipi_register, 73 - .ipi_send = mtk_vcodec_vpu_ipi_send, 74 - }; 75 - 76 - static int mtk_vcodec_scp_load_firmware(struct mtk_vcodec_fw *fw) 77 - { 78 - return rproc_boot(scp_get_rproc(fw->scp)); 79 - } 80 - 81 - static unsigned int mtk_vcodec_scp_get_vdec_capa(struct mtk_vcodec_fw *fw) 82 - { 83 - return scp_get_vdec_hw_capa(fw->scp); 84 - } 85 - 86 - static unsigned int mtk_vcodec_scp_get_venc_capa(struct mtk_vcodec_fw *fw) 87 - { 88 - return scp_get_venc_hw_capa(fw->scp); 89 - } 90 - 91 - static void *mtk_vcodec_vpu_scp_dm_addr(struct mtk_vcodec_fw *fw, 92 - u32 dtcm_dmem_addr) 93 - { 94 - return scp_mapping_dm_addr(fw->scp, dtcm_dmem_addr); 95 - } 96 - 97 - static int mtk_vcodec_scp_set_ipi_register(struct mtk_vcodec_fw *fw, int id, 98 - mtk_vcodec_ipi_handler handler, 99 - const char *name, void *priv) 100 - { 101 - return scp_ipi_register(fw->scp, id, handler, priv); 102 - } 103 - 104 - static int mtk_vcodec_scp_ipi_send(struct mtk_vcodec_fw *fw, int id, void *buf, 105 - unsigned int len, unsigned int wait) 106 - { 107 - return scp_ipi_send(fw->scp, id, buf, len, wait); 108 - } 109 - 110 - static const struct mtk_vcodec_fw_ops mtk_vcodec_rproc_msg = { 111 - .load_firmware = mtk_vcodec_scp_load_firmware, 112 - .get_vdec_capa = mtk_vcodec_scp_get_vdec_capa, 113 - .get_venc_capa = mtk_vcodec_scp_get_venc_capa,
114 - .map_dm_addr = mtk_vcodec_vpu_scp_dm_addr, 115 - .ipi_register = mtk_vcodec_scp_set_ipi_register, 116 - .ipi_send = mtk_vcodec_scp_ipi_send, 117 - }; 118 - 119 - static void mtk_vcodec_reset_handler(void *priv) 120 - { 121 - struct mtk_vcodec_dev *dev = priv; 122 - struct mtk_vcodec_ctx *ctx; 123 - 124 - mtk_v4l2_err("Watchdog timeout!!"); 125 - 126 - mutex_lock(&dev->dev_mutex); 127 - list_for_each_entry(ctx, &dev->ctx_list, list) { 128 - ctx->state = MTK_STATE_ABORT; 129 - mtk_v4l2_debug(0, "[%d] Change to state MTK_STATE_ABORT", 130 - ctx->id); 131 - } 132 - mutex_unlock(&dev->dev_mutex); 133 - } 134 - 135 8 struct mtk_vcodec_fw *mtk_vcodec_fw_select(struct mtk_vcodec_dev *dev, 136 9 enum mtk_vcodec_fw_type type, 137 - enum rst_id rst_id) 10 + enum mtk_vcodec_fw_use fw_use) 138 11 { 139 - const struct mtk_vcodec_fw_ops *ops; 140 - struct mtk_vcodec_fw *fw; 141 - struct platform_device *fw_pdev = NULL; 142 - struct mtk_scp *scp = NULL; 143 - 144 12 switch (type) { 145 13 case VPU: 146 - ops = &mtk_vcodec_vpu_msg; 147 - fw_pdev = vpu_get_plat_device(dev->plat_dev); 148 - if (!fw_pdev) { 149 - mtk_v4l2_err("firmware device is not ready"); 150 - return ERR_PTR(-EINVAL); 151 - } 152 - vpu_wdt_reg_handler(fw_pdev, mtk_vcodec_reset_handler, 153 - dev, rst_id); 154 - break; 14 + return mtk_vcodec_fw_vpu_init(dev, fw_use); 155 15 case SCP: 156 - ops = &mtk_vcodec_rproc_msg; 157 - scp = scp_get(dev->plat_dev); 158 - if (!scp) { 159 - mtk_v4l2_err("could not get vdec scp handle"); 160 - return ERR_PTR(-EPROBE_DEFER); 161 - } 162 - break; 16 + return mtk_vcodec_fw_scp_init(dev); 163 17 default: 164 18 mtk_v4l2_err("invalid vcodec fw type"); 165 19 return ERR_PTR(-EINVAL); 166 20 } 167 - 168 - fw = devm_kzalloc(&dev->plat_dev->dev, sizeof(*fw), GFP_KERNEL); 169 - if (!fw) 170 - return ERR_PTR(-EINVAL); 171 - 172 - fw->type = type; 173 - fw->ops = ops; 174 - fw->pdev = fw_pdev; 175 - fw->scp = scp; 176 - 177 - return fw; 178 21 }
179 22 EXPORT_SYMBOL_GPL(mtk_vcodec_fw_select); 180 23 181 24 void mtk_vcodec_fw_release(struct mtk_vcodec_fw *fw) 182 25 { 183 - switch (fw->type) { 184 - case VPU: 185 - put_device(&fw->pdev->dev); 186 - break; 187 - case SCP: 188 - scp_put(fw->scp); 189 - break; 190 - } 26 + fw->ops->release(fw); 191 27 } 192 28 EXPORT_SYMBOL_GPL(mtk_vcodec_fw_release); 193 29
+6 -1
drivers/media/platform/mtk-vcodec/mtk_vcodec_fw.h
··· 15 15 SCP, 16 16 }; 17 17 18 + enum mtk_vcodec_fw_use { 19 + DECODER, 20 + ENCODER, 21 + }; 22 + 18 23 struct mtk_vcodec_fw; 19 24 20 25 typedef void (*mtk_vcodec_ipi_handler) (void *data, ··· 27 22 28 23 struct mtk_vcodec_fw *mtk_vcodec_fw_select(struct mtk_vcodec_dev *dev, 29 24 enum mtk_vcodec_fw_type type, 30 - enum rst_id rst_id); 25 + enum mtk_vcodec_fw_use fw_use); 31 26 void mtk_vcodec_fw_release(struct mtk_vcodec_fw *fw); 32 27 33 28 int mtk_vcodec_fw_load_firmware(struct mtk_vcodec_fw *fw);
+52
drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_priv.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef _MTK_VCODEC_FW_PRIV_H_ 4 + #define _MTK_VCODEC_FW_PRIV_H_ 5 + 6 + #include "mtk_vcodec_fw.h" 7 + 8 + struct mtk_vcodec_dev; 9 + 10 + struct mtk_vcodec_fw { 11 + enum mtk_vcodec_fw_type type; 12 + const struct mtk_vcodec_fw_ops *ops; 13 + struct platform_device *pdev; 14 + struct mtk_scp *scp; 15 + }; 16 + 17 + struct mtk_vcodec_fw_ops { 18 + int (*load_firmware)(struct mtk_vcodec_fw *fw); 19 + unsigned int (*get_vdec_capa)(struct mtk_vcodec_fw *fw); 20 + unsigned int (*get_venc_capa)(struct mtk_vcodec_fw *fw); 21 + void *(*map_dm_addr)(struct mtk_vcodec_fw *fw, u32 dtcm_dmem_addr); 22 + int (*ipi_register)(struct mtk_vcodec_fw *fw, int id, 23 + mtk_vcodec_ipi_handler handler, const char *name, 24 + void *priv); 25 + int (*ipi_send)(struct mtk_vcodec_fw *fw, int id, void *buf, 26 + unsigned int len, unsigned int wait); 27 + void (*release)(struct mtk_vcodec_fw *fw); 28 + }; 29 + 30 + #if IS_ENABLED(CONFIG_VIDEO_MEDIATEK_VCODEC_VPU) 31 + struct mtk_vcodec_fw *mtk_vcodec_fw_vpu_init(struct mtk_vcodec_dev *dev, 32 + enum mtk_vcodec_fw_use fw_use); 33 + #else 34 + static inline struct mtk_vcodec_fw * 35 + mtk_vcodec_fw_vpu_init(struct mtk_vcodec_dev *dev, 36 + enum mtk_vcodec_fw_use fw_use) 37 + { 38 + return ERR_PTR(-ENODEV); 39 + } 40 + #endif /* CONFIG_VIDEO_MEDIATEK_VCODEC_VPU */ 41 + 42 + #if IS_ENABLED(CONFIG_VIDEO_MEDIATEK_VCODEC_SCP) 43 + struct mtk_vcodec_fw *mtk_vcodec_fw_scp_init(struct mtk_vcodec_dev *dev); 44 + #else 45 + static inline struct mtk_vcodec_fw * 46 + mtk_vcodec_fw_scp_init(struct mtk_vcodec_dev *dev) 47 + { 48 + return ERR_PTR(-ENODEV); 49 + } 50 + #endif /* CONFIG_VIDEO_MEDIATEK_VCODEC_SCP */ 51 + 52 + #endif /* _MTK_VCODEC_FW_PRIV_H_ */
+73
drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_scp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include "mtk_vcodec_fw_priv.h" 4 + #include "mtk_vcodec_util.h" 5 + #include "mtk_vcodec_drv.h" 6 + 7 + static int mtk_vcodec_scp_load_firmware(struct mtk_vcodec_fw *fw) 8 + { 9 + return rproc_boot(scp_get_rproc(fw->scp)); 10 + } 11 + 12 + static unsigned int mtk_vcodec_scp_get_vdec_capa(struct mtk_vcodec_fw *fw) 13 + { 14 + return scp_get_vdec_hw_capa(fw->scp); 15 + } 16 + 17 + static unsigned int mtk_vcodec_scp_get_venc_capa(struct mtk_vcodec_fw *fw) 18 + { 19 + return scp_get_venc_hw_capa(fw->scp); 20 + } 21 + 22 + static void *mtk_vcodec_vpu_scp_dm_addr(struct mtk_vcodec_fw *fw, 23 + u32 dtcm_dmem_addr) 24 + { 25 + return scp_mapping_dm_addr(fw->scp, dtcm_dmem_addr); 26 + } 27 + 28 + static int mtk_vcodec_scp_set_ipi_register(struct mtk_vcodec_fw *fw, int id, 29 + mtk_vcodec_ipi_handler handler, 30 + const char *name, void *priv) 31 + { 32 + return scp_ipi_register(fw->scp, id, handler, priv); 33 + } 34 + 35 + static int mtk_vcodec_scp_ipi_send(struct mtk_vcodec_fw *fw, int id, void *buf, 36 + unsigned int len, unsigned int wait) 37 + { 38 + return scp_ipi_send(fw->scp, id, buf, len, wait); 39 + } 40 + 41 + static void mtk_vcodec_scp_release(struct mtk_vcodec_fw *fw) 42 + { 43 + scp_put(fw->scp); 44 + } 45 + 46 + static const struct mtk_vcodec_fw_ops mtk_vcodec_rproc_msg = { 47 + .load_firmware = mtk_vcodec_scp_load_firmware, 48 + .get_vdec_capa = mtk_vcodec_scp_get_vdec_capa, 49 + .get_venc_capa = mtk_vcodec_scp_get_venc_capa, 50 + .map_dm_addr = mtk_vcodec_vpu_scp_dm_addr, 51 + .ipi_register = mtk_vcodec_scp_set_ipi_register, 52 + .ipi_send = mtk_vcodec_scp_ipi_send, 53 + .release = mtk_vcodec_scp_release, 54 + }; 55 + 56 + struct mtk_vcodec_fw *mtk_vcodec_fw_scp_init(struct mtk_vcodec_dev *dev) 57 + { 58 + struct mtk_vcodec_fw *fw; 59 + struct mtk_scp *scp; 60 + 61 + scp = scp_get(dev->plat_dev); 62 + if (!scp) { 63 + mtk_v4l2_err("could not get vdec scp handle"); 64 + return ERR_PTR(-EPROBE_DEFER); 65 + } 
66 + 67 + fw = devm_kzalloc(&dev->plat_dev->dev, sizeof(*fw), GFP_KERNEL); 68 + fw->type = SCP; 69 + fw->ops = &mtk_vcodec_rproc_msg; 70 + fw->scp = scp; 71 + 72 + return fw; 73 + }
+110
drivers/media/platform/mtk-vcodec/mtk_vcodec_fw_vpu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include "mtk_vcodec_fw_priv.h" 4 + #include "mtk_vcodec_util.h" 5 + #include "mtk_vcodec_drv.h" 6 + 7 + static int mtk_vcodec_vpu_load_firmware(struct mtk_vcodec_fw *fw) 8 + { 9 + return vpu_load_firmware(fw->pdev); 10 + } 11 + 12 + static unsigned int mtk_vcodec_vpu_get_vdec_capa(struct mtk_vcodec_fw *fw) 13 + { 14 + return vpu_get_vdec_hw_capa(fw->pdev); 15 + } 16 + 17 + static unsigned int mtk_vcodec_vpu_get_venc_capa(struct mtk_vcodec_fw *fw) 18 + { 19 + return vpu_get_venc_hw_capa(fw->pdev); 20 + } 21 + 22 + static void *mtk_vcodec_vpu_map_dm_addr(struct mtk_vcodec_fw *fw, 23 + u32 dtcm_dmem_addr) 24 + { 25 + return vpu_mapping_dm_addr(fw->pdev, dtcm_dmem_addr); 26 + } 27 + 28 + static int mtk_vcodec_vpu_set_ipi_register(struct mtk_vcodec_fw *fw, int id, 29 + mtk_vcodec_ipi_handler handler, 30 + const char *name, void *priv) 31 + { 32 + /* 33 + * The handler we receive takes a void * as its first argument. We 34 + * cannot change this because it needs to be passed down to the rproc 35 + * subsystem when SCP is used. VPU takes a const argument, which is 36 + * more constrained, so the conversion below is safe. 
37 + */ 38 + ipi_handler_t handler_const = (ipi_handler_t)handler; 39 + 40 + return vpu_ipi_register(fw->pdev, id, handler_const, name, priv); 41 + } 42 + 43 + static int mtk_vcodec_vpu_ipi_send(struct mtk_vcodec_fw *fw, int id, void *buf, 44 + unsigned int len, unsigned int wait) 45 + { 46 + return vpu_ipi_send(fw->pdev, id, buf, len); 47 + } 48 + 49 + static void mtk_vcodec_vpu_release(struct mtk_vcodec_fw *fw) 50 + { 51 + put_device(&fw->pdev->dev); 52 + } 53 + 54 + static void mtk_vcodec_vpu_reset_handler(void *priv) 55 + { 56 + struct mtk_vcodec_dev *dev = priv; 57 + struct mtk_vcodec_ctx *ctx; 58 + 59 + mtk_v4l2_err("Watchdog timeout!!"); 60 + 61 + mutex_lock(&dev->dev_mutex); 62 + list_for_each_entry(ctx, &dev->ctx_list, list) { 63 + ctx->state = MTK_STATE_ABORT; 64 + mtk_v4l2_debug(0, "[%d] Change to state MTK_STATE_ABORT", 65 + ctx->id); 66 + } 67 + mutex_unlock(&dev->dev_mutex); 68 + } 69 + 70 + static const struct mtk_vcodec_fw_ops mtk_vcodec_vpu_msg = { 71 + .load_firmware = mtk_vcodec_vpu_load_firmware, 72 + .get_vdec_capa = mtk_vcodec_vpu_get_vdec_capa, 73 + .get_venc_capa = mtk_vcodec_vpu_get_venc_capa, 74 + .map_dm_addr = mtk_vcodec_vpu_map_dm_addr, 75 + .ipi_register = mtk_vcodec_vpu_set_ipi_register, 76 + .ipi_send = mtk_vcodec_vpu_ipi_send, 77 + .release = mtk_vcodec_vpu_release, 78 + }; 79 + 80 + struct mtk_vcodec_fw *mtk_vcodec_fw_vpu_init(struct mtk_vcodec_dev *dev, 81 + enum mtk_vcodec_fw_use fw_use) 82 + { 83 + struct platform_device *fw_pdev; 84 + struct mtk_vcodec_fw *fw; 85 + enum rst_id rst_id; 86 + 87 + switch (fw_use) { 88 + case ENCODER: 89 + rst_id = VPU_RST_ENC; 90 + break; 91 + case DECODER: 92 + default: 93 + rst_id = VPU_RST_DEC; 94 + break; 95 + } 96 + 97 + fw_pdev = vpu_get_plat_device(dev->plat_dev); 98 + if (!fw_pdev) { 99 + mtk_v4l2_err("firmware device is not ready"); 100 + return ERR_PTR(-EINVAL); 101 + } 102 + vpu_wdt_reg_handler(fw_pdev, mtk_vcodec_vpu_reset_handler, dev, rst_id); 103 +
104 + fw = devm_kzalloc(&dev->plat_dev->dev, sizeof(*fw), GFP_KERNEL); 105 + fw->type = VPU; 106 + fw->ops = &mtk_vcodec_vpu_msg; 107 + fw->pdev = fw_pdev; 108 + 109 + return fw; 110 + }
+13 -2
drivers/media/platform/qcom/venus/core.h
··· 243 243 244 244 u32 header_mode; 245 245 246 - u32 profile; 247 - u32 level; 246 + struct { 247 + u32 h264; 248 + u32 mpeg4; 249 + u32 hevc; 250 + u32 vp8; 251 + u32 vp9; 252 + } profile; 253 + struct { 254 + u32 h264; 255 + u32 mpeg4; 256 + u32 hevc; 257 + u32 vp9; 258 + } level; 248 259 }; 249 260 250 261 struct venus_buffer {
+2 -2
drivers/media/platform/qcom/venus/pm_helpers.c
··· 794 794 return 0; 795 795 796 796 opp_dl_add_err: 797 - dev_pm_domain_detach(core->opp_pmdomain, true); 797 + dev_pm_opp_detach_genpd(core->opp_table); 798 798 opp_attach_err: 799 799 if (core->pd_dl_venus) { 800 800 device_link_del(core->pd_dl_venus); ··· 832 832 if (core->opp_dl_venus) 833 833 device_link_del(core->opp_dl_venus); 834 834 835 - dev_pm_domain_detach(core->opp_pmdomain, true); 835 + dev_pm_opp_detach_genpd(core->opp_table); 836 836 } 837 837 838 838 static int core_get_v4(struct device *dev)
+30 -1
drivers/media/platform/qcom/venus/venc.c
··· 537 537 struct hfi_quantization quant; 538 538 struct hfi_quantization_range quant_range; 539 539 u32 ptype, rate_control, bitrate; 540 + u32 profile, level; 540 541 int ret; 541 542 542 543 ret = venus_helper_set_work_mode(inst, VIDC_WORK_MODE_2); ··· 685 684 if (ret) 686 685 return ret; 687 686 688 - ret = venus_helper_set_profile_level(inst, ctr->profile, ctr->level); 687 + switch (inst->hfi_codec) { 688 + case HFI_VIDEO_CODEC_H264: 689 + profile = ctr->profile.h264; 690 + level = ctr->level.h264; 691 + break; 692 + case HFI_VIDEO_CODEC_MPEG4: 693 + profile = ctr->profile.mpeg4; 694 + level = ctr->level.mpeg4; 695 + break; 696 + case HFI_VIDEO_CODEC_VP8: 697 + profile = ctr->profile.vp8; 698 + level = 0; 699 + break; 700 + case HFI_VIDEO_CODEC_VP9: 701 + profile = ctr->profile.vp9; 702 + level = ctr->level.vp9; 703 + break; 704 + case HFI_VIDEO_CODEC_HEVC: 705 + profile = ctr->profile.hevc; 706 + level = ctr->level.hevc; 707 + break; 708 + case HFI_VIDEO_CODEC_MPEG2: 709 + default: 710 + profile = 0; 711 + level = 0; 712 + break; 713 + } 714 + 715 + ret = venus_helper_set_profile_level(inst, profile, level); 689 716 if (ret) 690 717 return ret; 691 718
+12 -2
drivers/media/platform/qcom/venus/venc_ctrls.c
··· 103 103 ctr->h264_entropy_mode = ctrl->val; 104 104 break; 105 105 case V4L2_CID_MPEG_VIDEO_MPEG4_PROFILE: 106 + ctr->profile.mpeg4 = ctrl->val; 107 + break; 106 108 case V4L2_CID_MPEG_VIDEO_H264_PROFILE: 109 + ctr->profile.h264 = ctrl->val; 110 + break; 107 111 case V4L2_CID_MPEG_VIDEO_HEVC_PROFILE: 112 + ctr->profile.hevc = ctrl->val; 113 + break; 108 114 case V4L2_CID_MPEG_VIDEO_VP8_PROFILE: 109 - ctr->profile = ctrl->val; 115 + ctr->profile.vp8 = ctrl->val; 110 116 break; 111 117 case V4L2_CID_MPEG_VIDEO_MPEG4_LEVEL: 118 + ctr->level.mpeg4 = ctrl->val; 119 + break; 112 120 case V4L2_CID_MPEG_VIDEO_H264_LEVEL: 121 + ctr->level.h264 = ctrl->val; 122 + break; 113 123 case V4L2_CID_MPEG_VIDEO_HEVC_LEVEL: 114 - ctr->level = ctrl->val; 124 + ctr->level.hevc = ctrl->val; 115 125 break; 116 126 case V4L2_CID_MPEG_VIDEO_H264_I_FRAME_QP: 117 127 ctr->h264_i_qp = ctrl->val;
+66 -52
drivers/media/test-drivers/vidtv/vidtv_bridge.c
··· 4 4 * validate the existing APIs in the media subsystem. It can also aid 5 5 * developers working on userspace applications. 6 6 * 7 - * When this module is loaded, it will attempt to modprobe 'dvb_vidtv_tuner' and 'dvb_vidtv_demod'. 7 + * When this module is loaded, it will attempt to modprobe 'dvb_vidtv_tuner' 8 + * and 'dvb_vidtv_demod'. 8 9 * 9 10 * Copyright (C) 2020 Daniel W. S. Almeida 10 11 */ 11 12 13 + #include <linux/dev_printk.h> 12 14 #include <linux/moduleparam.h> 13 15 #include <linux/mutex.h> 14 16 #include <linux/platform_device.h> 15 - #include <linux/dev_printk.h> 16 17 #include <linux/time.h> 17 18 #include <linux/types.h> 18 19 #include <linux/workqueue.h> 19 20 20 21 #include "vidtv_bridge.h" 21 - #include "vidtv_demod.h" 22 - #include "vidtv_tuner.h" 23 - #include "vidtv_ts.h" 24 - #include "vidtv_mux.h" 25 22 #include "vidtv_common.h" 23 + #include "vidtv_demod.h" 24 + #include "vidtv_mux.h" 25 + #include "vidtv_ts.h" 26 + #include "vidtv_tuner.h" 26 27 27 - //#define MUX_BUF_MAX_SZ 28 - //#define MUX_BUF_MIN_SZ 28 + #define MUX_BUF_MIN_SZ 90164 29 + #define MUX_BUF_MAX_SZ (MUX_BUF_MIN_SZ * 10) 29 30 #define TUNER_DEFAULT_ADDR 0x68 30 31 #define DEMOD_DEFAULT_ADDR 0x60 32 + #define VIDTV_DEFAULT_NETWORK_ID 0xff44 33 + #define VIDTV_DEFAULT_NETWORK_NAME "LinuxTV.org" 34 + #define VIDTV_DEFAULT_TS_ID 0x4081 31 35 32 - /* LNBf fake parameters: ranges used by an Universal (extended) European LNBf */ 33 - #define LNB_CUT_FREQUENCY 11700000 34 - #define LNB_LOW_FREQ 9750000 35 - #define LNB_HIGH_FREQ 10600000 36 - 36 + /* 37 + * The LNBf fake parameters here are the ranges used by an 38 + * Universal (extended) European LNBf, which is likely the most common LNBf 39 + * found on Satellite digital TV system nowadays. 
40 + */ 41 + #define LNB_CUT_FREQUENCY 11700000 /* high IF frequency */ 42 + #define LNB_LOW_FREQ 9750000 /* low IF frequency */ 43 + #define LNB_HIGH_FREQ 10600000 /* transition frequency */ 37 44 38 45 static unsigned int drop_tslock_prob_on_low_snr; 39 46 module_param(drop_tslock_prob_on_low_snr, uint, 0); ··· 99 92 100 93 static unsigned int pcr_period_msec = 40; 101 94 module_param(pcr_period_msec, uint, 0); 102 - MODULE_PARM_DESC(pcr_period_msec, "How often to send PCR packets. Default: 40ms"); 95 + MODULE_PARM_DESC(pcr_period_msec, 96 + "How often to send PCR packets. Default: 40ms"); 103 97 104 98 static unsigned int mux_rate_kbytes_sec = 4096; 105 99 module_param(mux_rate_kbytes_sec, uint, 0); ··· 112 104 113 105 static unsigned int mux_buf_sz_pkts; 114 106 module_param(mux_buf_sz_pkts, uint, 0); 115 - MODULE_PARM_DESC(mux_buf_sz_pkts, "Size for the internal mux buffer in multiples of 188 bytes"); 116 - 117 - #define MUX_BUF_MIN_SZ 90164 118 - #define MUX_BUF_MAX_SZ (MUX_BUF_MIN_SZ * 10) 107 + MODULE_PARM_DESC(mux_buf_sz_pkts, 108 + "Size for the internal mux buffer in multiples of 188 bytes"); 119 109 120 110 static u32 vidtv_bridge_mux_buf_sz_for_mux_rate(void) 121 111 { 122 112 u32 max_elapsed_time_msecs = VIDTV_MAX_SLEEP_USECS / USEC_PER_MSEC; 123 - u32 nbytes_expected; 124 113 u32 mux_buf_sz = mux_buf_sz_pkts * TS_PACKET_LEN; 114 + u32 nbytes_expected; 125 115 126 116 nbytes_expected = mux_rate_kbytes_sec; 127 117 nbytes_expected *= max_elapsed_time_msecs; ··· 149 143 FE_HAS_LOCK); 150 144 } 151 145 152 - static void 153 - vidtv_bridge_on_new_pkts_avail(void *priv, u8 *buf, u32 npkts) 146 + /* 147 + * called on a separate thread by the mux when new packets become available 148 + */ 149 + static void vidtv_bridge_on_new_pkts_avail(void *priv, u8 *buf, u32 npkts) 154 150 { 155 - /* 156 - * called on a separate thread by the mux when new packets become 157 - * available 158 - */ 159 - struct vidtv_dvb *dvb = (struct vidtv_dvb *)priv;
151 + struct vidtv_dvb *dvb = priv; 160 152 161 153 /* drop packets if we lose the lock */ 162 154 if (vidtv_bridge_check_demod_lock(dvb, 0)) ··· 163 159 164 160 static int vidtv_start_streaming(struct vidtv_dvb *dvb) 165 161 { 166 - struct vidtv_mux_init_args mux_args = {0}; 162 + struct vidtv_mux_init_args mux_args = { 163 + .mux_rate_kbytes_sec = mux_rate_kbytes_sec, 164 + .on_new_packets_available_cb = vidtv_bridge_on_new_pkts_avail, 165 + .pcr_period_usecs = pcr_period_msec * USEC_PER_MSEC, 166 + .si_period_usecs = si_period_msec * USEC_PER_MSEC, 167 + .pcr_pid = pcr_pid, 168 + .transport_stream_id = VIDTV_DEFAULT_TS_ID, 169 + .network_id = VIDTV_DEFAULT_NETWORK_ID, 170 + .network_name = VIDTV_DEFAULT_NETWORK_NAME, 171 + .priv = dvb, 172 + }; 167 173 struct device *dev = &dvb->pdev->dev; 168 174 u32 mux_buf_sz; 169 175 ··· 182 168 return 0; 183 169 } 184 170 185 - mux_buf_sz = (mux_buf_sz_pkts) ? mux_buf_sz_pkts : vidtv_bridge_mux_buf_sz_for_mux_rate(); 171 + if (mux_buf_sz_pkts) 172 + mux_buf_sz = mux_buf_sz_pkts; 173 + else 174 + mux_buf_sz = vidtv_bridge_mux_buf_sz_for_mux_rate(); 186 175 187 - mux_args.mux_rate_kbytes_sec = mux_rate_kbytes_sec; 188 - mux_args.on_new_packets_available_cb = vidtv_bridge_on_new_pkts_avail; 189 - mux_args.mux_buf_sz = mux_buf_sz; 190 - mux_args.pcr_period_usecs = pcr_period_msec * 1000; 191 - mux_args.si_period_usecs = si_period_msec * 1000; 192 - mux_args.pcr_pid = pcr_pid; 193 - mux_args.transport_stream_id = VIDTV_DEFAULT_TS_ID; 194 - mux_args.priv = dvb; 176 + mux_args.mux_buf_sz = mux_buf_sz; 195 177 196 178 dvb->streaming = true; 197 - dvb->mux = vidtv_mux_init(dvb->fe[0], dev, mux_args); 179 + dvb->mux = vidtv_mux_init(dvb->fe[0], dev, &mux_args); 180 + if (!dvb->mux) 181 + return -ENOMEM; 198 182 vidtv_mux_start_thread(dvb->mux); 199 183 200 184 dev_dbg_ratelimited(dev, "Started streaming\n"); ··· 216 204 { 217 205 struct dvb_demux *demux = feed->demux; 218 206 struct vidtv_dvb *dvb = demux->priv; 219 - int rc; 220 207 int ret;
208 + int rc; 221 209 222 210 if (!demux->dmx.frontend) 223 211 return -EINVAL; ··· 255 243 256 244 static struct dvb_frontend *vidtv_get_frontend_ptr(struct i2c_client *c) 257 245 { 258 - /* the demod will set this when its probe function runs */ 259 246 struct vidtv_demod_state *state = i2c_get_clientdata(c); 260 247 248 + /* the demod will set this when its probe function runs */ 261 249 return &state->frontend; 262 250 } 263 251 ··· 265 253 struct i2c_msg msgs[], 266 254 int num) 267 255 { 256 + /* 257 + * Right now, this virtual driver doesn't really send or receive 258 + * messages from I2C. A real driver will require an implementation 259 + * here. 260 + */ 268 261 return 0; 269 262 } 270 263 ··· 337 320 338 321 static int vidtv_bridge_probe_demod(struct vidtv_dvb *dvb, u32 n) 339 322 { 340 - struct vidtv_demod_config cfg = {}; 341 - 342 - cfg.drop_tslock_prob_on_low_snr = drop_tslock_prob_on_low_snr; 343 - cfg.recover_tslock_prob_on_good_snr = recover_tslock_prob_on_good_snr; 344 - 323 + struct vidtv_demod_config cfg = { 324 + .drop_tslock_prob_on_low_snr = drop_tslock_prob_on_low_snr, 325 + .recover_tslock_prob_on_good_snr = recover_tslock_prob_on_good_snr, 326 + }; 345 327 dvb->i2c_client_demod[n] = dvb_module_probe("dvb_vidtv_demod", 346 328 NULL, 347 329 &dvb->i2c_adapter, ··· 359 343 360 344 static int vidtv_bridge_probe_tuner(struct vidtv_dvb *dvb, u32 n) 361 345 { 362 - struct vidtv_tuner_config cfg = {}; 346 + struct vidtv_tuner_config cfg = { 347 + .fe = dvb->fe[n], 348 + .mock_power_up_delay_msec = mock_power_up_delay_msec, 349 + .mock_tune_delay_msec = mock_tune_delay_msec, 350 + }; 363 351 u32 freq; 364 352 int i; 365 - 366 - cfg.fe = dvb->fe[n]; 367 - cfg.mock_power_up_delay_msec = mock_power_up_delay_msec; 368 - cfg.mock_tune_delay_msec = mock_tune_delay_msec; 369 353 370 354 /* TODO: check if the frequencies are at a valid range */ 371 355 ··· 405 389 406 390 static int vidtv_bridge_dvb_init(struct vidtv_dvb *dvb) 407 391 {
408 - int ret; 409 - int i; 410 - int j; 392 + int ret, i, j; 411 393 412 394 ret = vidtv_bridge_i2c_register_adap(dvb); 413 395 if (ret < 0)
+3 -1
drivers/media/test-drivers/vidtv/vidtv_bridge.h
··· 20 20 #include <linux/i2c.h> 21 21 #include <linux/platform_device.h> 22 22 #include <linux/types.h> 23 + 23 24 #include <media/dmxdev.h> 24 25 #include <media/dvb_demux.h> 25 26 #include <media/dvb_frontend.h> 27 + 26 28 #include "vidtv_mux.h" 27 29 28 30 /** ··· 34 32 * @adapter: Represents a DTV adapter. See 'dvb_register_adapter'. 35 33 * @demux: The demux used by the dvb_dmx_swfilter_packets() call. 36 34 * @dmx_dev: Represents a demux device. 37 - * @dmx_frontend: The frontends associated with the demux. 35 + * @dmx_fe: The frontends associated with the demux. 38 36 * @i2c_adapter: The i2c_adapter associated with the bridge driver. 39 37 * @i2c_client_demod: The i2c_clients associated with the demodulator modules. 40 38 * @i2c_client_tuner: The i2c_clients associated with the tuner modules.
+274 -38
drivers/media/test-drivers/vidtv/vidtv_channel.c
··· 9 9 * When vidtv boots, it will create some hardcoded channels. 10 10 * Their services will be concatenated to populate the SDT. 11 11 * Their programs will be concatenated to populate the PAT 12 + * Their events will be concatenated to populate the EIT 12 13 * For each program in the PAT, a PMT section will be created 13 14 * The PMT section for a channel will be assigned its streams. 14 15 * Every stream will have its corresponding encoder polled to produce TS packets ··· 19 18 * Copyright (C) 2020 Daniel W. S. Almeida 20 19 */ 21 20 22 - #include <linux/types.h> 23 - #include <linux/slab.h> 24 21 #include <linux/dev_printk.h> 25 22 #include <linux/ratelimit.h> 23 + #include <linux/slab.h> 24 + #include <linux/types.h> 26 25 27 26 #include "vidtv_channel.h" 28 - #include "vidtv_psi.h" 27 + #include "vidtv_common.h" 29 28 #include "vidtv_encoder.h" 30 29 #include "vidtv_mux.h" 31 - #include "vidtv_common.h" 30 + #include "vidtv_psi.h" 32 31 #include "vidtv_s302m.h" 33 32 34 33 static void vidtv_channel_encoder_destroy(struct vidtv_encoder *e) 35 34 { 36 - struct vidtv_encoder *curr = e; 37 35 struct vidtv_encoder *tmp = NULL; 36 + struct vidtv_encoder *curr = e; 38 37 39 38 while (curr) { 40 39 /* forward the call to the derived type */ ··· 45 44 } 46 45 47 46 #define ENCODING_ISO8859_15 "\x0b" 47 + #define TS_NIT_PID 0x10 48 48 49 + /* 50 + * init an audio only channel with a s302m encoder 51 + */ 49 52 struct vidtv_channel 50 53 *vidtv_channel_s302m_init(struct vidtv_channel *head, u16 transport_stream_id) 51 54 { 52 - /* 53 - * init an audio only channel with a s302m encoder 54 - */ 55 + const __be32 s302m_fid = cpu_to_be32(VIDTV_S302M_FORMAT_IDENTIFIER); 56 + char *event_text = ENCODING_ISO8859_15 "Bagatelle No. 
25 in A minor for solo piano, also known as F\xfcr Elise, composed by Ludwig van Beethoven"; 57 + char *event_name = ENCODING_ISO8859_15 "Ludwig van Beethoven: F\xfcr Elise"; 58 + struct vidtv_s302m_encoder_init_args encoder_args = {}; 59 + char *iso_language_code = ENCODING_ISO8859_15 "eng"; 60 + char *provider = ENCODING_ISO8859_15 "LinuxTV.org"; 61 + char *name = ENCODING_ISO8859_15 "Beethoven"; 62 + const u16 s302m_es_pid = 0x111; /* packet id for the ES */ 63 + const u16 s302m_program_pid = 0x101; /* packet id for PMT*/ 55 64 const u16 s302m_service_id = 0x880; 56 65 const u16 s302m_program_num = 0x880; 57 - const u16 s302m_program_pid = 0x101; /* packet id for PMT*/ 58 - const u16 s302m_es_pid = 0x111; /* packet id for the ES */ 59 - const __be32 s302m_fid = cpu_to_be32(VIDTV_S302M_FORMAT_IDENTIFIER); 66 + const u16 s302m_beethoven_event_id = 1; 67 + struct vidtv_channel *s302m; 60 68 61 - char *name = ENCODING_ISO8859_15 "Beethoven"; 62 - char *provider = ENCODING_ISO8859_15 "LinuxTV.org"; 63 - 64 - struct vidtv_channel *s302m = kzalloc(sizeof(*s302m), GFP_KERNEL); 65 - struct vidtv_s302m_encoder_init_args encoder_args = {}; 69 + s302m = kzalloc(sizeof(*s302m), GFP_KERNEL); 70 + if (!s302m) 71 + return NULL; 66 72 67 73 s302m->name = kstrdup(name, GFP_KERNEL); 74 + if (!s302m->name) 75 + goto free_s302m; 68 76 69 - s302m->service = vidtv_psi_sdt_service_init(NULL, s302m_service_id); 77 + s302m->service = vidtv_psi_sdt_service_init(NULL, s302m_service_id, false, true); 78 + if (!s302m->service) 79 + goto free_name; 70 80 71 81 s302m->service->descriptor = (struct vidtv_psi_desc *) 72 82 vidtv_psi_service_desc_init(NULL, 73 - DIGITAL_TELEVISION_SERVICE, 83 + DIGITAL_RADIO_SOUND_SERVICE, 74 84 name, 75 85 provider); 86 + if (!s302m->service->descriptor) 87 + goto free_service; 76 88 77 89 s302m->transport_stream_id = transport_stream_id; 78 90 79 91 s302m->program = vidtv_psi_pat_program_init(NULL, 80 92 s302m_service_id, 81 93 s302m_program_pid); 94 + if 
(!s302m->program) 95 + goto free_service; 82 96 83 97 s302m->program_num = s302m_program_num; 84 98 85 99 s302m->streams = vidtv_psi_pmt_stream_init(NULL, 86 100 STREAM_PRIVATE_DATA, 87 101 s302m_es_pid); 102 + if (!s302m->streams) 103 + goto free_program; 88 104 89 105 s302m->streams->descriptor = (struct vidtv_psi_desc *) 90 106 vidtv_psi_registration_desc_init(NULL, 91 107 s302m_fid, 92 108 NULL, 93 109 0); 110 + if (!s302m->streams->descriptor) 111 + goto free_streams; 112 + 94 113 encoder_args.es_pid = s302m_es_pid; 95 114 96 115 s302m->encoders = vidtv_s302m_encoder_init(encoder_args); 116 + if (!s302m->encoders) 117 + goto free_streams; 118 + 119 + s302m->events = vidtv_psi_eit_event_init(NULL, s302m_beethoven_event_id); 120 + if (!s302m->events) 121 + goto free_encoders; 122 + s302m->events->descriptor = (struct vidtv_psi_desc *) 123 + vidtv_psi_short_event_desc_init(NULL, 124 + iso_language_code, 125 + event_name, 126 + event_text); 127 + if (!s302m->events->descriptor) 128 + goto free_events; 97 129 98 130 if (head) { 99 131 while (head->next) ··· 136 102 } 137 103 138 104 return s302m; 105 + 106 + free_events: 107 + vidtv_psi_eit_event_destroy(s302m->events); 108 + free_encoders: 109 + vidtv_s302m_encoder_destroy(s302m->encoders); 110 + free_streams: 111 + vidtv_psi_pmt_stream_destroy(s302m->streams); 112 + free_program: 113 + vidtv_psi_pat_program_destroy(s302m->program); 114 + free_service: 115 + vidtv_psi_sdt_service_destroy(s302m->service); 116 + free_name: 117 + kfree(s302m->name); 118 + free_s302m: 119 + kfree(s302m); 120 + 121 + return NULL; 122 + } 123 + 124 + static struct vidtv_psi_table_eit_event 125 + *vidtv_channel_eit_event_cat_into_new(struct vidtv_mux *m) 126 + { 127 + /* Concatenate the events */ 128 + const struct vidtv_channel *cur_chnl = m->channels; 129 + struct vidtv_psi_table_eit_event *curr = NULL; 130 + struct vidtv_psi_table_eit_event *head = NULL; 131 + struct vidtv_psi_table_eit_event *tail = NULL; 132 + struct vidtv_psi_desc 
*desc = NULL; 133 + u16 event_id; 134 + 135 + if (!cur_chnl) 136 + return NULL; 137 + 138 + while (cur_chnl) { 139 + curr = cur_chnl->events; 140 + 141 + if (!curr) 142 + dev_warn_ratelimited(m->dev, 143 + "No events found for channel %s\n", 144 + cur_chnl->name); 145 + 146 + while (curr) { 147 + event_id = be16_to_cpu(curr->event_id); 148 + tail = vidtv_psi_eit_event_init(tail, event_id); 149 + if (!tail) { 150 + vidtv_psi_eit_event_destroy(head); 151 + return NULL; 152 + } 153 + 154 + desc = vidtv_psi_desc_clone(curr->descriptor); 155 + vidtv_psi_desc_assign(&tail->descriptor, desc); 156 + 157 + if (!head) 158 + head = tail; 159 + 160 + curr = curr->next; 161 + } 162 + 163 + cur_chnl = cur_chnl->next; 164 + } 165 + 166 + return head; 139 167 } 140 168 141 169 static struct vidtv_psi_table_sdt_service ··· 221 125 222 126 if (!curr) 223 127 dev_warn_ratelimited(m->dev, 224 - "No services found for channel %s\n", cur_chnl->name); 128 + "No services found for channel %s\n", 129 + cur_chnl->name); 225 130 226 131 while (curr) { 227 132 service_id = be16_to_cpu(curr->service_id); 228 - tail = vidtv_psi_sdt_service_init(tail, service_id); 133 + tail = vidtv_psi_sdt_service_init(tail, 134 + service_id, 135 + curr->EIT_schedule, 136 + curr->EIT_present_following); 137 + if (!tail) 138 + goto free; 229 139 230 140 desc = vidtv_psi_desc_clone(curr->descriptor); 141 + if (!desc) 142 + goto free_tail; 231 143 vidtv_psi_desc_assign(&tail->descriptor, desc); 232 144 233 145 if (!head) ··· 248 144 } 249 145 250 146 return head; 147 + 148 + free_tail: 149 + vidtv_psi_sdt_service_destroy(tail); 150 + free: 151 + vidtv_psi_sdt_service_destroy(head); 152 + return NULL; 251 153 } 252 154 253 155 static struct vidtv_psi_table_pat_program* ··· 284 174 tail = vidtv_psi_pat_program_init(tail, 285 175 serv_id, 286 176 pid); 177 + if (!tail) { 178 + vidtv_psi_pat_program_destroy(head); 179 + return NULL; 180 + } 287 181 288 182 if (!head) 289 183 head = tail; ··· 297 183 298 184 cur_chnl = 
cur_chnl->next; 299 185 } 186 + /* Add the NIT table */ 187 + vidtv_psi_pat_program_init(tail, 0, TS_NIT_PID); 300 188 301 189 return head; 302 190 } 303 191 192 + /* 193 + * Match channels to their respective PMT sections, then assign the 194 + * streams 195 + */ 304 196 static void 305 197 vidtv_channel_pmt_match_sections(struct vidtv_channel *channels, 306 198 struct vidtv_psi_table_pmt **sections, 307 199 u32 nsections) 308 200 { 309 - /* 310 - * Match channels to their respective PMT sections, then assign the 311 - * streams 312 - */ 313 201 struct vidtv_psi_table_pmt *curr_section = NULL; 314 - struct vidtv_channel *cur_chnl = channels; 315 - 316 - struct vidtv_psi_table_pmt_stream *s = NULL; 317 202 struct vidtv_psi_table_pmt_stream *head = NULL; 318 203 struct vidtv_psi_table_pmt_stream *tail = NULL; 319 - 204 + struct vidtv_psi_table_pmt_stream *s = NULL; 205 + struct vidtv_channel *cur_chnl = channels; 320 206 struct vidtv_psi_desc *desc = NULL; 321 - u32 j; 322 - u16 curr_id; 323 207 u16 e_pid; /* elementary stream pid */ 208 + u16 curr_id; 209 + u32 j; 324 210 325 211 while (cur_chnl) { 326 212 for (j = 0; j < nsections; ++j) { ··· 346 232 head = tail; 347 233 348 234 desc = vidtv_psi_desc_clone(s->descriptor); 349 - vidtv_psi_desc_assign(&tail->descriptor, desc); 235 + vidtv_psi_desc_assign(&tail->descriptor, 236 + desc); 350 237 351 238 s = s->next; 352 239 } ··· 361 246 } 362 247 } 363 248 364 - void vidtv_channel_si_init(struct vidtv_mux *m) 249 + static void 250 + vidtv_channel_destroy_service_list(struct vidtv_psi_desc_service_list_entry *e) 365 251 { 252 + struct vidtv_psi_desc_service_list_entry *tmp; 253 + 254 + while (e) { 255 + tmp = e; 256 + e = e->next; 257 + kfree(tmp); 258 + } 259 + } 260 + 261 + static struct vidtv_psi_desc_service_list_entry 262 + *vidtv_channel_build_service_list(struct vidtv_psi_table_sdt_service *s) 263 + { 264 + struct vidtv_psi_desc_service_list_entry *curr_e = NULL; 265 + struct vidtv_psi_desc_service_list_entry 
*head_e = NULL; 266 + struct vidtv_psi_desc_service_list_entry *prev_e = NULL; 267 + struct vidtv_psi_desc *desc = s->descriptor; 268 + struct vidtv_psi_desc_service *s_desc; 269 + 270 + while (s) { 271 + while (desc) { 272 + if (s->descriptor->type != SERVICE_DESCRIPTOR) 273 + goto next_desc; 274 + 275 + s_desc = (struct vidtv_psi_desc_service *)desc; 276 + 277 + curr_e = kzalloc(sizeof(*curr_e), GFP_KERNEL); 278 + if (!curr_e) { 279 + vidtv_channel_destroy_service_list(head_e); 280 + return NULL; 281 + } 282 + 283 + curr_e->service_id = s->service_id; 284 + curr_e->service_type = s_desc->service_type; 285 + 286 + if (!head_e) 287 + head_e = curr_e; 288 + if (prev_e) 289 + prev_e->next = curr_e; 290 + 291 + prev_e = curr_e; 292 + 293 + next_desc: 294 + desc = desc->next; 295 + } 296 + s = s->next; 297 + } 298 + return head_e; 299 + } 300 + 301 + int vidtv_channel_si_init(struct vidtv_mux *m) 302 + { 303 + struct vidtv_psi_desc_service_list_entry *service_list = NULL; 366 304 struct vidtv_psi_table_pat_program *programs = NULL; 367 305 struct vidtv_psi_table_sdt_service *services = NULL; 306 + struct vidtv_psi_table_eit_event *events = NULL; 368 307 369 308 m->si.pat = vidtv_psi_pat_table_init(m->transport_stream_id); 309 + if (!m->si.pat) 310 + return -ENOMEM; 370 311 371 - m->si.sdt = vidtv_psi_sdt_table_init(m->transport_stream_id); 312 + m->si.sdt = vidtv_psi_sdt_table_init(m->network_id, 313 + m->transport_stream_id); 314 + if (!m->si.sdt) 315 + goto free_pat; 372 316 373 317 programs = vidtv_channel_pat_prog_cat_into_new(m); 318 + if (!programs) 319 + goto free_sdt; 374 320 services = vidtv_channel_sdt_serv_cat_into_new(m); 321 + if (!services) 322 + goto free_programs; 323 + 324 + events = vidtv_channel_eit_event_cat_into_new(m); 325 + if (!events) 326 + goto free_services; 327 + 328 + /* look for a service descriptor for every service */ 329 + service_list = vidtv_channel_build_service_list(services); 330 + if (!service_list) 331 + goto free_events; 332 + 
333 + /* use these descriptors to build the NIT */ 334 + m->si.nit = vidtv_psi_nit_table_init(m->network_id, 335 + m->transport_stream_id, 336 + m->network_name, 337 + service_list); 338 + if (!m->si.nit) 339 + goto free_service_list; 340 + 341 + m->si.eit = vidtv_psi_eit_table_init(m->network_id, 342 + m->transport_stream_id, 343 + programs->service_id); 344 + if (!m->si.eit) 345 + goto free_nit; 375 346 376 347 /* assemble all programs and assign to PAT */ 377 348 vidtv_psi_pat_program_assign(m->si.pat, programs); ··· 465 264 /* assemble all services and assign to SDT */ 466 265 vidtv_psi_sdt_service_assign(m->si.sdt, services); 467 266 468 - m->si.pmt_secs = vidtv_psi_pmt_create_sec_for_each_pat_entry(m->si.pat, m->pcr_pid); 267 + /* assemble all events and assign to EIT */ 268 + vidtv_psi_eit_event_assign(m->si.eit, events); 269 + 270 + m->si.pmt_secs = vidtv_psi_pmt_create_sec_for_each_pat_entry(m->si.pat, 271 + m->pcr_pid); 272 + if (!m->si.pmt_secs) 273 + goto free_eit; 469 274 470 275 vidtv_channel_pmt_match_sections(m->channels, 471 276 m->si.pmt_secs, 472 - m->si.pat->programs); 277 + m->si.pat->num_pmt); 278 + 279 + vidtv_channel_destroy_service_list(service_list); 280 + 281 + return 0; 282 + 283 + free_eit: 284 + vidtv_psi_eit_table_destroy(m->si.eit); 285 + free_nit: 286 + vidtv_psi_nit_table_destroy(m->si.nit); 287 + free_service_list: 288 + vidtv_channel_destroy_service_list(service_list); 289 + free_events: 290 + vidtv_psi_eit_event_destroy(events); 291 + free_services: 292 + vidtv_psi_sdt_service_destroy(services); 293 + free_programs: 294 + vidtv_psi_pat_program_destroy(programs); 295 + free_sdt: 296 + vidtv_psi_sdt_table_destroy(m->si.sdt); 297 + free_pat: 298 + vidtv_psi_pat_table_destroy(m->si.pat); 299 + return -ENOMEM; 473 300 } 474 301 475 302 void vidtv_channel_si_destroy(struct vidtv_mux *m) 476 303 { 477 304 u32 i; 478 - u16 num_programs = m->si.pat->programs; 479 305 480 306 vidtv_psi_pat_table_destroy(m->si.pat); 481 307 482 - for (i = 0; i <
num_programs; ++i) 308 + for (i = 0; i < m->si.pat->num_pmt; ++i) 483 309 vidtv_psi_pmt_table_destroy(m->si.pmt_secs[i]); 484 310 485 311 kfree(m->si.pmt_secs); 486 312 vidtv_psi_sdt_table_destroy(m->si.sdt); 313 + vidtv_psi_nit_table_destroy(m->si.nit); 314 + vidtv_psi_eit_table_destroy(m->si.eit); 487 315 } 488 316 489 - void vidtv_channels_init(struct vidtv_mux *m) 317 + int vidtv_channels_init(struct vidtv_mux *m) 490 318 { 491 319 /* this is the place to add new 'channels' for vidtv */ 492 320 m->channels = vidtv_channel_s302m_init(NULL, m->transport_stream_id); 321 + 322 + if (!m->channels) 323 + return -ENOMEM; 324 + 325 + return 0; 493 326 } 494 327 495 328 void vidtv_channels_destroy(struct vidtv_mux *m) ··· 537 302 vidtv_psi_pat_program_destroy(curr->program); 538 303 vidtv_psi_pmt_stream_destroy(curr->streams); 539 304 vidtv_channel_encoder_destroy(curr->encoders); 305 + vidtv_psi_eit_event_destroy(curr->events); 540 306 541 307 tmp = curr; 542 308 curr = curr->next;
+8 -3
drivers/media/test-drivers/vidtv/vidtv_channel.h
··· 9 9 * When vidtv boots, it will create some hardcoded channels. 10 10 * Their services will be concatenated to populate the SDT. 11 11 * Their programs will be concatenated to populate the PAT 12 + * Their events will be concatenated to populate the EIT 12 13 * For each program in the PAT, a PMT section will be created 13 14 * The PMT section for a channel will be assigned its streams. 14 15 * Every stream will have its corresponding encoder polled to produce TS packets ··· 23 22 #define VIDTV_CHANNEL_H 24 23 25 24 #include <linux/types.h> 26 - #include "vidtv_psi.h" 25 + 27 26 #include "vidtv_encoder.h" 28 27 #include "vidtv_mux.h" 28 + #include "vidtv_psi.h" 29 29 30 30 /** 31 31 * struct vidtv_channel - A 'channel' abstraction ··· 39 37 * Every stream will have its corresponding encoder polled to produce TS packets 40 38 * These packets may be interleaved by the mux and then delivered to the bridge 41 39 * 40 + * @name: name of the channel 42 41 * @transport_stream_id: a number to identify the TS, chosen at will. 43 42 * @service: A _single_ service. Will be concatenated into the SDT. 44 43 * @program_num: The link between PAT, PMT and SDT. ··· 47 44 * Will be concatenated into the PAT. 48 45 * @streams: A stream loop used to populate the PMT section for 'program' 49 46 * @encoders: A encoder loop. There must be one encoder for each stream. 47 + * @events: Optional event information. This will feed into the EIT. 50 48 * @next: Optionally chain this channel. 51 49 */ 52 50 struct vidtv_channel { ··· 58 54 struct vidtv_psi_table_pat_program *program; 59 55 struct vidtv_psi_table_pmt_stream *streams; 60 56 struct vidtv_encoder *encoders; 57 + struct vidtv_psi_table_eit_event *events; 61 58 struct vidtv_channel *next; 62 59 }; 63 60 ··· 66 61 * vidtv_channel_si_init - Init the PSI tables from the channels in the mux 67 62 * @m: The mux containing the channels. 
68 63 */ 69 - void vidtv_channel_si_init(struct vidtv_mux *m); 64 + int vidtv_channel_si_init(struct vidtv_mux *m); 70 65 void vidtv_channel_si_destroy(struct vidtv_mux *m); 71 66 72 67 /** 73 68 * vidtv_channels_init - Init hardcoded, fake 'channels'. 74 69 * @m: The mux to store the channels into. 75 70 */ 76 - void vidtv_channels_init(struct vidtv_mux *m); 71 + int vidtv_channels_init(struct vidtv_mux *m); 77 72 struct vidtv_channel 78 73 *vidtv_channel_s302m_init(struct vidtv_channel *head, u16 transport_stream_id); 79 74 void vidtv_channels_destroy(struct vidtv_mux *m);
-1
drivers/media/test-drivers/vidtv/vidtv_common.h
··· 16 16 #define CLOCK_UNIT_27MHZ 27000000 17 17 #define VIDTV_SLEEP_USECS 10000 18 18 #define VIDTV_MAX_SLEEP_USECS (2 * VIDTV_SLEEP_USECS) 19 - #define VIDTV_DEFAULT_TS_ID 0x744 20 19 21 20 u32 vidtv_memcpy(void *to, 22 21 size_t to_offset,
+1 -1
drivers/media/test-drivers/vidtv/vidtv_demod.c
··· 19 19 #include <linux/slab.h> 20 20 #include <linux/string.h> 21 21 #include <linux/workqueue.h> 22 + 22 23 #include <media/dvb_frontend.h> 23 24 24 25 #include "vidtv_demod.h" ··· 193 192 194 193 c->cnr.stat[0].svalue = state->tuner_cnr; 195 194 c->cnr.stat[0].svalue -= prandom_u32_max(state->tuner_cnr / 50); 196 - 197 195 } 198 196 199 197 static int vidtv_demod_read_status(struct dvb_frontend *fe,
+5 -6
drivers/media/test-drivers/vidtv/vidtv_demod.h
··· 12 12 #define VIDTV_DEMOD_H 13 13 14 14 #include <linux/dvb/frontend.h> 15 + 15 16 #include <media/dvb_frontend.h> 16 17 17 18 /** ··· 20 19 * modulation and fec_inner 21 20 * @modulation: see enum fe_modulation 22 21 * @fec: see enum fe_fec_rate 22 + * @cnr_ok: S/N threshold to consider the signal as OK. Below that, there's 23 + * a chance of losing sync. 24 + * @cnr_good: S/N threshold to consider the signal strong. 23 25 * 24 26 * This struct matches values for 'good' and 'ok' CNRs given the combination 25 27 * of modulation and fec_inner in use. We might simulate some noise if the ··· 56 52 * struct vidtv_demod_state - The demodulator state 57 53 * @frontend: The frontend structure allocated by the demod. 58 54 * @config: The config used to init the demod. 59 - * @poll_snr: The task responsible for periodically checking the simulated 60 - * signal quality, eventually dropping or reacquiring the TS lock. 61 55 * @status: the demod status. 62 - * @cold_start: Whether the demod has not been init yet. 63 - * @poll_snr_thread_running: Whether the task responsible for periodically 64 - * checking the simulated signal quality is running. 65 - * @poll_snr_thread_restart: Whether we should restart the poll_snr task. 56 + * @tuner_cnr: current S/N ratio for the signal carrier 66 57 */ 67 58 struct vidtv_demod_state { 68 59 struct dvb_frontend frontend;
+4 -5
drivers/media/test-drivers/vidtv/vidtv_encoder.h
··· 28 28 struct vidtv_access_unit *next; 29 29 }; 30 30 31 - /* Some musical notes, used by a tone generator */ 31 + /* Some musical notes, used by a tone generator. Values are in Hz */ 32 32 enum musical_notes { 33 33 NOTE_SILENT = 0, 34 34 ··· 103 103 * @encoder_buf_sz: The encoder buffer size, in bytes 104 104 * @encoder_buf_offset: Our byte position in the encoder buffer. 105 105 * @sample_count: How many samples we have encoded in total. 106 + * @access_units: encoder payload units, used for clock references 106 107 * @src_buf: The source of raw data to be encoded, encoder might set a 107 108 * default if null. 109 + * @src_buf_sz: size of @src_buf. 108 110 * @src_buf_offset: Our position in the source buffer. 109 111 * @is_video_encoder: Whether this a video encoder (as opposed to audio) 110 112 * @ctx: Encoder-specific state. 111 113 * @stream_id: Examples: Audio streams (0xc0-0xdf), Video streams 112 114 * (0xe0-0xef). 113 - * @es_id: The TS PID to use for the elementary stream in this encoder. 115 + * @es_pid: The TS PID to use for the elementary stream in this encoder. 114 116 * @encode: Prepare enough AUs for the given amount of time. 115 117 * @clear: Clear the encoder output. 116 118 * @sync: Attempt to synchronize with this encoder. ··· 133 131 u32 encoder_buf_offset; 134 132 135 133 u64 sample_count; 136 - int last_duration; 137 - int note_offset; 138 - enum musical_notes last_tone; 139 134 140 135 struct vidtv_access_unit *access_units; 141 136
+161 -89
drivers/media/test-drivers/vidtv/vidtv_mux.c
··· 12 12 * Copyright (C) 2020 Daniel W. S. Almeida 13 13 */ 14 14 15 - #include <linux/types.h> 16 - #include <linux/slab.h> 15 + #include <linux/delay.h> 16 + #include <linux/dev_printk.h> 17 17 #include <linux/jiffies.h> 18 18 #include <linux/kernel.h> 19 - #include <linux/dev_printk.h> 20 - #include <linux/ratelimit.h> 21 - #include <linux/delay.h> 22 - #include <linux/vmalloc.h> 23 19 #include <linux/math64.h> 20 + #include <linux/ratelimit.h> 21 + #include <linux/slab.h> 22 + #include <linux/types.h> 23 + #include <linux/vmalloc.h> 24 24 25 - #include "vidtv_mux.h" 26 - #include "vidtv_ts.h" 27 - #include "vidtv_pes.h" 28 - #include "vidtv_encoder.h" 29 25 #include "vidtv_channel.h" 30 26 #include "vidtv_common.h" 27 + #include "vidtv_encoder.h" 28 + #include "vidtv_mux.h" 29 + #include "vidtv_pes.h" 31 30 #include "vidtv_psi.h" 31 + #include "vidtv_ts.h" 32 32 33 33 static struct vidtv_mux_pid_ctx 34 34 *vidtv_mux_get_pid_ctx(struct vidtv_mux *m, u16 pid) ··· 47 47 struct vidtv_mux_pid_ctx *ctx; 48 48 49 49 ctx = vidtv_mux_get_pid_ctx(m, pid); 50 - 51 50 if (ctx) 52 - goto end; 51 + return ctx; 53 52 54 - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); 53 + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); 54 + if (!ctx) 55 + return NULL; 56 + 55 57 ctx->pid = pid; 56 58 ctx->cc = 0; 57 59 hash_add(m->pid_ctx, &ctx->h, pid); 58 60 59 - end: 60 61 return ctx; 61 62 } 62 63 63 - static void vidtv_mux_pid_ctx_init(struct vidtv_mux *m) 64 + static void vidtv_mux_pid_ctx_destroy(struct vidtv_mux *m) 65 + { 66 + struct vidtv_mux_pid_ctx *ctx; 67 + struct hlist_node *tmp; 68 + int bkt; 69 + 70 + hash_for_each_safe(m->pid_ctx, bkt, tmp, ctx, h) { 71 + hash_del(&ctx->h); 72 + kfree(ctx); 73 + } 74 + } 75 + 76 + static int vidtv_mux_pid_ctx_init(struct vidtv_mux *m) 64 77 { 65 78 struct vidtv_psi_table_pat_program *p = m->si.pat->program; 66 79 u16 pid; 67 80 68 81 hash_init(m->pid_ctx); 69 82 /* push the pcr pid ctx */ 70 - vidtv_mux_create_pid_ctx_once(m, m->pcr_pid); 71 - /* push 
the null packet pid ctx */ 72 - vidtv_mux_create_pid_ctx_once(m, TS_NULL_PACKET_PID); 83 + if (!vidtv_mux_create_pid_ctx_once(m, m->pcr_pid)) 84 + return -ENOMEM; 85 + /* push the NULL packet pid ctx */ 86 + if (!vidtv_mux_create_pid_ctx_once(m, TS_NULL_PACKET_PID)) 87 + goto free; 73 88 /* push the PAT pid ctx */ 74 - vidtv_mux_create_pid_ctx_once(m, VIDTV_PAT_PID); 89 + if (!vidtv_mux_create_pid_ctx_once(m, VIDTV_PAT_PID)) 90 + goto free; 75 91 /* push the SDT pid ctx */ 76 - vidtv_mux_create_pid_ctx_once(m, VIDTV_SDT_PID); 92 + if (!vidtv_mux_create_pid_ctx_once(m, VIDTV_SDT_PID)) 93 + goto free; 94 + /* push the NIT pid ctx */ 95 + if (!vidtv_mux_create_pid_ctx_once(m, VIDTV_NIT_PID)) 96 + goto free; 97 + /* push the EIT pid ctx */ 98 + if (!vidtv_mux_create_pid_ctx_once(m, VIDTV_EIT_PID)) 99 + goto free; 77 100 78 101 /* add a ctx for all PMT sections */ 79 102 while (p) { ··· 104 81 vidtv_mux_create_pid_ctx_once(m, pid); 105 82 p = p->next; 106 83 } 107 - } 108 84 109 - static void vidtv_mux_pid_ctx_destroy(struct vidtv_mux *m) 110 - { 111 - int bkt; 112 - struct vidtv_mux_pid_ctx *ctx; 113 - struct hlist_node *tmp; 85 + return 0; 114 86 115 - hash_for_each_safe(m->pid_ctx, bkt, tmp, ctx, h) { 116 - hash_del(&ctx->h); 117 - kfree(ctx); 118 - } 87 + free: 88 + vidtv_mux_pid_ctx_destroy(m); 89 + return -ENOMEM; 119 90 } 120 91 121 92 static void vidtv_mux_update_clk(struct vidtv_mux *m) ··· 129 112 130 113 static u32 vidtv_mux_push_si(struct vidtv_mux *m) 131 114 { 132 - u32 initial_offset = m->mux_buf_offset; 115 + struct vidtv_psi_pat_write_args pat_args = { 116 + .buf = m->mux_buf, 117 + .buf_sz = m->mux_buf_sz, 118 + .pat = m->si.pat, 119 + }; 120 + struct vidtv_psi_pmt_write_args pmt_args = { 121 + .buf = m->mux_buf, 122 + .buf_sz = m->mux_buf_sz, 123 + .pcr_pid = m->pcr_pid, 124 + }; 125 + struct vidtv_psi_sdt_write_args sdt_args = { 126 + .buf = m->mux_buf, 127 + .buf_sz = m->mux_buf_sz, 128 + .sdt = m->si.sdt, 129 + }; 130 + struct 
vidtv_psi_nit_write_args nit_args = { 131 + .buf = m->mux_buf, 132 + .buf_sz = m->mux_buf_sz, 133 + .nit = m->si.nit, 133 134 135 + }; 136 + struct vidtv_psi_eit_write_args eit_args = { 137 + .buf = m->mux_buf, 138 + .buf_sz = m->mux_buf_sz, 139 + .eit = m->si.eit, 140 + }; 141 + u32 initial_offset = m->mux_buf_offset; 134 142 struct vidtv_mux_pid_ctx *pat_ctx; 135 143 struct vidtv_mux_pid_ctx *pmt_ctx; 136 144 struct vidtv_mux_pid_ctx *sdt_ctx; 137 - 138 - struct vidtv_psi_pat_write_args pat_args = {}; 139 - struct vidtv_psi_pmt_write_args pmt_args = {}; 140 - struct vidtv_psi_sdt_write_args sdt_args = {}; 141 - 142 - u32 nbytes; /* the number of bytes written by this function */ 145 + struct vidtv_mux_pid_ctx *nit_ctx; 146 + struct vidtv_mux_pid_ctx *eit_ctx; 147 + u32 nbytes; 143 148 u16 pmt_pid; 144 149 u32 i; 145 150 146 151 pat_ctx = vidtv_mux_get_pid_ctx(m, VIDTV_PAT_PID); 147 152 sdt_ctx = vidtv_mux_get_pid_ctx(m, VIDTV_SDT_PID); 153 + nit_ctx = vidtv_mux_get_pid_ctx(m, VIDTV_NIT_PID); 154 + eit_ctx = vidtv_mux_get_pid_ctx(m, VIDTV_EIT_PID); 148 155 149 - pat_args.buf = m->mux_buf; 150 156 pat_args.offset = m->mux_buf_offset; 151 - pat_args.pat = m->si.pat; 152 - pat_args.buf_sz = m->mux_buf_sz; 153 157 pat_args.continuity_counter = &pat_ctx->cc; 154 158 155 - m->mux_buf_offset += vidtv_psi_pat_write_into(pat_args); 159 + m->mux_buf_offset += vidtv_psi_pat_write_into(&pat_args); 156 160 157 - for (i = 0; i < m->si.pat->programs; ++i) { 161 + for (i = 0; i < m->si.pat->num_pmt; ++i) { 158 162 pmt_pid = vidtv_psi_pmt_get_pid(m->si.pmt_secs[i], 159 163 m->si.pat); 160 164 ··· 187 149 188 150 pmt_ctx = vidtv_mux_get_pid_ctx(m, pmt_pid); 189 151 190 - pmt_args.buf = m->mux_buf; 191 152 pmt_args.offset = m->mux_buf_offset; 192 153 pmt_args.pmt = m->si.pmt_secs[i]; 193 154 pmt_args.pid = pmt_pid; 194 - pmt_args.buf_sz = m->mux_buf_sz; 195 155 pmt_args.continuity_counter = &pmt_ctx->cc; 196 - pmt_args.pcr_pid = m->pcr_pid; 197 156 198 157 /* write each section into 
buffer */ 199 - m->mux_buf_offset += vidtv_psi_pmt_write_into(pmt_args); 158 + m->mux_buf_offset += vidtv_psi_pmt_write_into(&pmt_args); 200 159 } 201 160 202 - sdt_args.buf = m->mux_buf; 203 161 sdt_args.offset = m->mux_buf_offset; 204 - sdt_args.sdt = m->si.sdt; 205 - sdt_args.buf_sz = m->mux_buf_sz; 206 162 sdt_args.continuity_counter = &sdt_ctx->cc; 207 163 208 - m->mux_buf_offset += vidtv_psi_sdt_write_into(sdt_args); 164 + m->mux_buf_offset += vidtv_psi_sdt_write_into(&sdt_args); 165 + 166 + nit_args.offset = m->mux_buf_offset; 167 + nit_args.continuity_counter = &nit_ctx->cc; 168 + 169 + m->mux_buf_offset += vidtv_psi_nit_write_into(&nit_args); 170 + 171 + eit_args.offset = m->mux_buf_offset; 172 + eit_args.continuity_counter = &eit_ctx->cc; 173 + 174 + m->mux_buf_offset += vidtv_psi_eit_write_into(&eit_args); 209 175 210 176 nbytes = m->mux_buf_offset - initial_offset; 211 177 ··· 272 230 static u32 vidtv_mux_packetize_access_units(struct vidtv_mux *m, 273 231 struct vidtv_encoder *e) 274 232 { 275 - u32 nbytes = 0; 276 - 277 - struct pes_write_args args = {}; 278 - u32 initial_offset = m->mux_buf_offset; 233 + struct pes_write_args args = { 234 + .dest_buf = m->mux_buf, 235 + .dest_buf_sz = m->mux_buf_sz, 236 + .pid = be16_to_cpu(e->es_pid), 237 + .encoder_id = e->id, 238 + .stream_id = be16_to_cpu(e->stream_id), 239 + .send_pts = true, /* forbidden value '01'... 
*/ 240 + .send_dts = false, /* ...for PTS_DTS flags */ 241 + }; 279 242 struct vidtv_access_unit *au = e->access_units; 280 - 243 + u32 initial_offset = m->mux_buf_offset; 244 + struct vidtv_mux_pid_ctx *pid_ctx; 245 + u32 nbytes = 0; 281 246 u8 *buf = NULL; 282 - struct vidtv_mux_pid_ctx *pid_ctx = vidtv_mux_create_pid_ctx_once(m, 283 - be16_to_cpu(e->es_pid)); 284 247 285 - args.dest_buf = m->mux_buf; 286 - args.dest_buf_sz = m->mux_buf_sz; 287 - args.pid = be16_to_cpu(e->es_pid); 288 - args.encoder_id = e->id; 248 + /* see SMPTE 302M clause 6.4 */ 249 + if (args.encoder_id == S302M) { 250 + args.send_dts = false; 251 + args.send_pts = true; 252 + } 253 + 254 + pid_ctx = vidtv_mux_create_pid_ctx_once(m, be16_to_cpu(e->es_pid)); 289 255 args.continuity_counter = &pid_ctx->cc; 290 - args.stream_id = be16_to_cpu(e->stream_id); 291 - args.send_pts = true; 292 256 293 257 while (au) { 294 258 buf = e->encoder_buf + au->offset; ··· 304 256 args.pts = au->pts; 305 257 args.pcr = m->timing.clk; 306 258 307 - m->mux_buf_offset += vidtv_pes_write_into(args); 259 + m->mux_buf_offset += vidtv_pes_write_into(&args); 308 260 309 261 au = au->next; 310 262 } ··· 321 273 322 274 static u32 vidtv_mux_poll_encoders(struct vidtv_mux *m) 323 275 { 324 - u32 nbytes = 0; 325 - u32 au_nbytes; 326 276 struct vidtv_channel *cur_chnl = m->channels; 327 277 struct vidtv_encoder *e = NULL; 278 + u32 nbytes = 0; 279 + u32 au_nbytes; 328 280 329 281 while (cur_chnl) { 330 282 e = cur_chnl->encoders; ··· 348 300 349 301 static u32 vidtv_mux_pad_with_nulls(struct vidtv_mux *m, u32 npkts) 350 302 { 351 - struct null_packet_write_args args = {}; 303 + struct null_packet_write_args args = { 304 + .dest_buf = m->mux_buf, 305 + .buf_sz = m->mux_buf_sz, 306 + .dest_offset = m->mux_buf_offset, 307 + }; 352 308 u32 initial_offset = m->mux_buf_offset; 353 - u32 nbytes; /* the number of bytes written by this function */ 354 - u32 i; 355 309 struct vidtv_mux_pid_ctx *ctx; 310 + u32 nbytes; 311 + u32 i; 
356 312 357 313 ctx = vidtv_mux_get_pid_ctx(m, TS_NULL_PACKET_PID); 358 314 359 - args.dest_buf = m->mux_buf; 360 - args.buf_sz = m->mux_buf_sz; 361 315 args.continuity_counter = &ctx->cc; 362 - args.dest_offset = m->mux_buf_offset; 363 316 364 317 for (i = 0; i < npkts; ++i) { 365 318 m->mux_buf_offset += vidtv_ts_null_write_into(args); ··· 392 343 struct vidtv_mux, 393 344 mpeg_thread); 394 345 struct dtv_frontend_properties *c = &m->fe->dtv_property_cache; 346 + u32 tot_bits = 0; 395 347 u32 nbytes; 396 348 u32 npkts; 397 - u32 tot_bits = 0; 398 349 399 350 while (m->streaming) { 400 351 nbytes = 0; ··· 476 427 477 428 struct vidtv_mux *vidtv_mux_init(struct dvb_frontend *fe, 478 429 struct device *dev, 479 - struct vidtv_mux_init_args args) 430 + struct vidtv_mux_init_args *args) 480 431 { 481 - struct vidtv_mux *m = kzalloc(sizeof(*m), GFP_KERNEL); 432 + struct vidtv_mux *m; 433 + 434 + m = kzalloc(sizeof(*m), GFP_KERNEL); 435 + if (!m) 436 + return NULL; 482 437 483 438 m->dev = dev; 484 439 m->fe = fe; 485 - m->timing.pcr_period_usecs = args.pcr_period_usecs; 486 - m->timing.si_period_usecs = args.si_period_usecs; 440 + m->timing.pcr_period_usecs = args->pcr_period_usecs; 441 + m->timing.si_period_usecs = args->si_period_usecs; 487 442 488 - m->mux_rate_kbytes_sec = args.mux_rate_kbytes_sec; 443 + m->mux_rate_kbytes_sec = args->mux_rate_kbytes_sec; 489 444 490 - m->on_new_packets_available_cb = args.on_new_packets_available_cb; 445 + m->on_new_packets_available_cb = args->on_new_packets_available_cb; 491 446 492 - m->mux_buf = vzalloc(args.mux_buf_sz); 493 - m->mux_buf_sz = args.mux_buf_sz; 447 + m->mux_buf = vzalloc(args->mux_buf_sz); 448 + if (!m->mux_buf) 449 + goto free_mux; 494 450 495 - m->pcr_pid = args.pcr_pid; 496 - m->transport_stream_id = args.transport_stream_id; 497 - m->priv = args.priv; 451 + m->mux_buf_sz = args->mux_buf_sz; 452 + 453 + m->pcr_pid = args->pcr_pid; 454 + m->transport_stream_id = args->transport_stream_id; 455 + m->priv = 
args->priv; 456 + m->network_id = args->network_id; 457 + m->network_name = kstrdup(args->network_name, GFP_KERNEL); 498 458 m->timing.current_jiffies = get_jiffies_64(); 499 459 500 - if (args.channels) 501 - m->channels = args.channels; 460 + if (args->channels) 461 + m->channels = args->channels; 502 462 else 503 - vidtv_channels_init(m); 463 + if (vidtv_channels_init(m) < 0) 464 + goto free_mux_buf; 504 465 505 466 /* will alloc data for pmt_sections after initializing pat */ 506 - vidtv_channel_si_init(m); 467 + if (vidtv_channel_si_init(m) < 0) 468 + goto free_channels; 507 469 508 470 INIT_WORK(&m->mpeg_thread, vidtv_mux_tick); 509 471 510 - vidtv_mux_pid_ctx_init(m); 472 + if (vidtv_mux_pid_ctx_init(m) < 0) 473 + goto free_channel_si; 511 474 512 475 return m; 476 + 477 + free_channel_si: 478 + vidtv_channel_si_destroy(m); 479 + free_channels: 480 + vidtv_channels_destroy(m); 481 + free_mux_buf: 482 + vfree(m->mux_buf); 483 + free_mux: 484 + kfree(m); 485 + return NULL; 513 486 } 514 487 515 488 void vidtv_mux_destroy(struct vidtv_mux *m) ··· 540 469 vidtv_mux_pid_ctx_destroy(m); 541 470 vidtv_channel_si_destroy(m); 542 471 vidtv_channels_destroy(m); 472 + kfree(m->network_name); 543 473 vfree(m->mux_buf); 544 474 kfree(m); 545 475 }
+18 -3
drivers/media/test-drivers/vidtv/vidtv_mux.h
··· 15 15 #ifndef VIDTV_MUX_H 16 16 #define VIDTV_MUX_H 17 17 18 - #include <linux/types.h> 19 18 #include <linux/hashtable.h> 19 + #include <linux/types.h> 20 20 #include <linux/workqueue.h> 21 + 21 22 #include <media/dvb_frontend.h> 22 23 23 24 #include "vidtv_psi.h" ··· 59 58 * @pat: The PAT in use by the muxer. 60 59 * @pmt_secs: The PMT sections in use by the muxer. One for each program in the PAT. 61 60 * @sdt: The SDT in use by the muxer. 61 + * @nit: The NIT in use by the muxer. 62 + * @eit: the EIT in use by the muxer. 62 63 */ 63 64 struct vidtv_mux_si { 64 65 /* the SI tables */ 65 66 struct vidtv_psi_table_pat *pat; 66 67 struct vidtv_psi_table_pmt **pmt_secs; /* the PMT sections */ 67 68 struct vidtv_psi_table_sdt *sdt; 69 + struct vidtv_psi_table_nit *nit; 70 + struct vidtv_psi_table_eit *eit; 68 71 }; 69 72 70 73 /** ··· 87 82 88 83 /** 89 84 * struct vidtv_mux - A muxer abstraction loosely based in libavcodec/mpegtsenc.c 90 - * @mux_rate_kbytes_sec: The bit rate for the TS, in kbytes. 85 + * @fe: The frontend structure allocated by the muxer. 86 + * @dev: pointer to struct device. 91 87 * @timing: Keeps track of timing related information. 88 + * @mux_rate_kbytes_sec: The bit rate for the TS, in kbytes. 92 89 * @pid_ctx: A hash table to keep track of per-PID metadata. 93 90 * @on_new_packets_available_cb: A callback to inform of new TS packets ready. 94 91 * @mux_buf: A pointer to a buffer for this muxer. TS packets are stored there ··· 106 99 * @pcr_pid: The TS PID used for the PSI packets. All channels will share the 107 100 * same PCR. 108 101 * @transport_stream_id: The transport stream ID 102 + * @network_id: The network ID 103 + * @network_name: The network name 109 104 * @priv: Private data. 110 105 */ 111 106 struct vidtv_mux { ··· 137 128 138 129 u16 pcr_pid; 139 130 u16 transport_stream_id; 131 + u16 network_id; 132 + char *network_name; 140 133 void *priv; 141 134 }; 142 135 ··· 153 142 * same PCR. 
154 143 * @transport_stream_id: The transport stream ID 155 144 * @channels: an optional list of channels to use 145 + * @network_id: The network ID 146 + * @network_name: The network name 156 147 * @priv: Private data. 157 148 */ 158 149 struct vidtv_mux_init_args { ··· 166 153 u16 pcr_pid; 167 154 u16 transport_stream_id; 168 155 struct vidtv_channel *channels; 156 + u16 network_id; 157 + char *network_name; 169 158 void *priv; 170 159 }; 171 160 172 161 struct vidtv_mux *vidtv_mux_init(struct dvb_frontend *fe, 173 162 struct device *dev, 174 - struct vidtv_mux_init_args args); 163 + struct vidtv_mux_init_args *args); 175 164 void vidtv_mux_destroy(struct vidtv_mux *m); 176 165 177 166 void vidtv_mux_start_thread(struct vidtv_mux *m);
+83 -96
drivers/media/test-drivers/vidtv/vidtv_pes.c
··· 16 16 #include <linux/types.h> 17 17 #include <linux/printk.h> 18 18 #include <linux/ratelimit.h> 19 - #include <asm/byteorder.h> 20 19 21 20 #include "vidtv_pes.h" 22 21 #include "vidtv_common.h" ··· 56 57 return len; 57 58 } 58 59 59 - static u32 vidtv_pes_write_header_stuffing(struct pes_header_write_args args) 60 + static u32 vidtv_pes_write_header_stuffing(struct pes_header_write_args *args) 60 61 { 61 62 /* 62 63 * This is a fixed 8-bit value equal to '0xFF' that can be inserted ··· 64 65 * It is discarded by the decoder. No more than 32 stuffing bytes shall 65 66 * be present in one PES packet header. 66 67 */ 67 - if (args.n_pes_h_s_bytes > PES_HEADER_MAX_STUFFING_BYTES) { 68 + if (args->n_pes_h_s_bytes > PES_HEADER_MAX_STUFFING_BYTES) { 68 69 pr_warn_ratelimited("More than %d stuffing bytes in PES packet header\n", 69 70 PES_HEADER_MAX_STUFFING_BYTES); 70 - args.n_pes_h_s_bytes = PES_HEADER_MAX_STUFFING_BYTES; 71 + args->n_pes_h_s_bytes = PES_HEADER_MAX_STUFFING_BYTES; 71 72 } 72 73 73 - return vidtv_memset(args.dest_buf, 74 - args.dest_offset, 75 - args.dest_buf_sz, 74 + return vidtv_memset(args->dest_buf, 75 + args->dest_offset, 76 + args->dest_buf_sz, 76 77 TS_FILL_BYTE, 77 - args.n_pes_h_s_bytes); 78 + args->n_pes_h_s_bytes); 78 79 } 79 80 80 - static u32 vidtv_pes_write_pts_dts(struct pes_header_write_args args) 81 + static u32 vidtv_pes_write_pts_dts(struct pes_header_write_args *args) 81 82 { 82 83 u32 nbytes = 0; /* the number of bytes written by this function */ 83 84 ··· 89 90 u64 mask2; 90 91 u64 mask3; 91 92 92 - if (!args.send_pts && args.send_dts) 93 + if (!args->send_pts && args->send_dts) 93 94 return 0; 94 95 95 96 mask1 = GENMASK_ULL(32, 30); ··· 97 98 mask3 = GENMASK_ULL(14, 0); 98 99 99 100 /* see ISO/IEC 13818-1 : 2000 p. 
32 */ 100 - if (args.send_pts && args.send_dts) { 101 - pts_dts.pts1 = (0x3 << 4) | ((args.pts & mask1) >> 29) | 0x1; 102 - pts_dts.pts2 = cpu_to_be16(((args.pts & mask2) >> 14) | 0x1); 103 - pts_dts.pts3 = cpu_to_be16(((args.pts & mask3) << 1) | 0x1); 101 + if (args->send_pts && args->send_dts) { 102 + pts_dts.pts1 = (0x3 << 4) | ((args->pts & mask1) >> 29) | 0x1; 103 + pts_dts.pts2 = cpu_to_be16(((args->pts & mask2) >> 14) | 0x1); 104 + pts_dts.pts3 = cpu_to_be16(((args->pts & mask3) << 1) | 0x1); 104 105 105 - pts_dts.dts1 = (0x1 << 4) | ((args.dts & mask1) >> 29) | 0x1; 106 - pts_dts.dts2 = cpu_to_be16(((args.dts & mask2) >> 14) | 0x1); 107 - pts_dts.dts3 = cpu_to_be16(((args.dts & mask3) << 1) | 0x1); 106 + pts_dts.dts1 = (0x1 << 4) | ((args->dts & mask1) >> 29) | 0x1; 107 + pts_dts.dts2 = cpu_to_be16(((args->dts & mask2) >> 14) | 0x1); 108 + pts_dts.dts3 = cpu_to_be16(((args->dts & mask3) << 1) | 0x1); 108 109 109 110 op = &pts_dts; 110 111 op_sz = sizeof(pts_dts); 111 112 112 - } else if (args.send_pts) { 113 - pts.pts1 = (0x1 << 5) | ((args.pts & mask1) >> 29) | 0x1; 114 - pts.pts2 = cpu_to_be16(((args.pts & mask2) >> 14) | 0x1); 115 - pts.pts3 = cpu_to_be16(((args.pts & mask3) << 1) | 0x1); 113 + } else if (args->send_pts) { 114 + pts.pts1 = (0x1 << 5) | ((args->pts & mask1) >> 29) | 0x1; 115 + pts.pts2 = cpu_to_be16(((args->pts & mask2) >> 14) | 0x1); 116 + pts.pts3 = cpu_to_be16(((args->pts & mask3) << 1) | 0x1); 116 117 117 118 op = &pts; 118 119 op_sz = sizeof(pts); 119 120 } 120 121 121 122 /* copy PTS/DTS optional */ 122 - nbytes += vidtv_memcpy(args.dest_buf, 123 - args.dest_offset + nbytes, 124 - args.dest_buf_sz, 123 + nbytes += vidtv_memcpy(args->dest_buf, 124 + args->dest_offset + nbytes, 125 + args->dest_buf_sz, 125 126 op, 126 127 op_sz); 127 128 128 129 return nbytes; 129 130 } 130 131 131 - static u32 vidtv_pes_write_h(struct pes_header_write_args args) 132 + static u32 vidtv_pes_write_h(struct pes_header_write_args *args) 132 133 { 133 134 
u32 nbytes = 0; /* the number of bytes written by this function */ 134 135 135 136 struct vidtv_mpeg_pes pes_header = {}; 136 137 struct vidtv_pes_optional pes_optional = {}; 137 - struct pes_header_write_args pts_dts_args = args; 138 - u32 stream_id = (args.encoder_id == S302M) ? PRIVATE_STREAM_1_ID : args.stream_id; 138 + struct pes_header_write_args pts_dts_args; 139 + u32 stream_id = (args->encoder_id == S302M) ? PRIVATE_STREAM_1_ID : args->stream_id; 139 140 u16 pes_opt_bitfield = 0x01 << 15; 140 141 141 142 pes_header.bitfield = cpu_to_be32((PES_START_CODE_PREFIX << 8) | stream_id); 142 143 143 - pes_header.length = cpu_to_be16(vidtv_pes_op_get_len(args.send_pts, 144 - args.send_dts) + 145 - args.access_unit_len); 144 + pes_header.length = cpu_to_be16(vidtv_pes_op_get_len(args->send_pts, 145 + args->send_dts) + 146 + args->access_unit_len); 146 147 147 - if (args.send_pts && args.send_dts) 148 + if (args->send_pts && args->send_dts) 148 149 pes_opt_bitfield |= (0x3 << 6); 149 - else if (args.send_pts) 150 + else if (args->send_pts) 150 151 pes_opt_bitfield |= (0x1 << 7); 151 152 152 153 pes_optional.bitfield = cpu_to_be16(pes_opt_bitfield); 153 - pes_optional.length = vidtv_pes_op_get_len(args.send_pts, args.send_dts) + 154 - args.n_pes_h_s_bytes - 154 + pes_optional.length = vidtv_pes_op_get_len(args->send_pts, args->send_dts) + 155 + args->n_pes_h_s_bytes - 155 156 sizeof(struct vidtv_pes_optional); 156 157 157 158 /* copy header */ 158 - nbytes += vidtv_memcpy(args.dest_buf, 159 - args.dest_offset + nbytes, 160 - args.dest_buf_sz, 159 + nbytes += vidtv_memcpy(args->dest_buf, 160 + args->dest_offset + nbytes, 161 + args->dest_buf_sz, 161 162 &pes_header, 162 163 sizeof(pes_header)); 163 164 164 165 /* copy optional header bits */ 165 - nbytes += vidtv_memcpy(args.dest_buf, 166 - args.dest_offset + nbytes, 167 - args.dest_buf_sz, 166 + nbytes += vidtv_memcpy(args->dest_buf, 167 + args->dest_offset + nbytes, 168 + args->dest_buf_sz, 168 169 &pes_optional, 169 
170 sizeof(pes_optional)); 170 171 171 172 /* copy the timing information */ 172 - pts_dts_args.dest_offset = args.dest_offset + nbytes; 173 - nbytes += vidtv_pes_write_pts_dts(pts_dts_args); 173 + pts_dts_args = *args; 174 + pts_dts_args.dest_offset = args->dest_offset + nbytes; 175 + nbytes += vidtv_pes_write_pts_dts(&pts_dts_args); 174 176 175 177 /* write any PES header stuffing */ 176 178 nbytes += vidtv_pes_write_header_stuffing(args); ··· 300 300 return nbytes; 301 301 } 302 302 303 - u32 vidtv_pes_write_into(struct pes_write_args args) 303 + u32 vidtv_pes_write_into(struct pes_write_args *args) 304 304 { 305 - u32 unaligned_bytes = (args.dest_offset % TS_PACKET_LEN); 306 - struct pes_ts_header_write_args ts_header_args = {}; 307 - struct pes_header_write_args pes_header_args = {}; 308 - u32 remaining_len = args.access_unit_len; 305 + u32 unaligned_bytes = (args->dest_offset % TS_PACKET_LEN); 306 + struct pes_ts_header_write_args ts_header_args = { 307 + .dest_buf = args->dest_buf, 308 + .dest_buf_sz = args->dest_buf_sz, 309 + .pid = args->pid, 310 + .pcr = args->pcr, 311 + .continuity_counter = args->continuity_counter, 312 + }; 313 + struct pes_header_write_args pes_header_args = { 314 + .dest_buf = args->dest_buf, 315 + .dest_buf_sz = args->dest_buf_sz, 316 + .encoder_id = args->encoder_id, 317 + .send_pts = args->send_pts, 318 + .pts = args->pts, 319 + .send_dts = args->send_dts, 320 + .dts = args->dts, 321 + .stream_id = args->stream_id, 322 + .n_pes_h_s_bytes = args->n_pes_h_s_bytes, 323 + .access_unit_len = args->access_unit_len, 324 + }; 325 + u32 remaining_len = args->access_unit_len; 309 326 bool wrote_pes_header = false; 310 - u64 last_pcr = args.pcr; 327 + u64 last_pcr = args->pcr; 311 328 bool need_pcr = true; 312 329 u32 available_space; 313 330 u32 payload_size; ··· 335 318 pr_warn_ratelimited("buffer is misaligned, while starting PES\n"); 336 319 337 320 /* forcibly align and hope for the best */ 338 - nbytes += vidtv_memset(args.dest_buf, 
339 - args.dest_offset + nbytes, 340 - args.dest_buf_sz, 321 + nbytes += vidtv_memset(args->dest_buf, 322 + args->dest_offset + nbytes, 323 + args->dest_buf_sz, 341 324 TS_FILL_BYTE, 342 325 TS_PACKET_LEN - unaligned_bytes); 343 - } 344 - 345 - if (args.send_dts && !args.send_pts) { 346 - pr_warn_ratelimited("forbidden value '01' for PTS_DTS flags\n"); 347 - args.send_pts = true; 348 - args.pts = args.dts; 349 - } 350 - 351 - /* see SMPTE 302M clause 6.4 */ 352 - if (args.encoder_id == S302M) { 353 - args.send_dts = false; 354 - args.send_pts = true; 355 326 } 356 327 357 328 while (remaining_len) { ··· 350 345 * the space needed for the TS header _and_ for the PES header 351 346 */ 352 347 if (!wrote_pes_header) 353 - available_space -= vidtv_pes_h_get_len(args.send_pts, 354 - args.send_dts); 348 + available_space -= vidtv_pes_h_get_len(args->send_pts, 349 + args->send_dts); 355 350 356 351 /* 357 352 * if the encoder has inserted stuffing bytes in the PES 358 353 * header, account for them. 
359 354 */ 360 - available_space -= args.n_pes_h_s_bytes; 355 + available_space -= args->n_pes_h_s_bytes; 361 356 362 357 /* Take the extra adaptation into account if need to send PCR */ 363 358 if (need_pcr) { ··· 392 387 } 393 388 394 389 /* write ts header */ 395 - ts_header_args.dest_buf = args.dest_buf; 396 - ts_header_args.dest_offset = args.dest_offset + nbytes; 397 - ts_header_args.dest_buf_sz = args.dest_buf_sz; 398 - ts_header_args.pid = args.pid; 399 - ts_header_args.pcr = args.pcr; 400 - ts_header_args.continuity_counter = args.continuity_counter; 401 - ts_header_args.wrote_pes_header = wrote_pes_header; 402 - ts_header_args.n_stuffing_bytes = stuff_bytes; 390 + ts_header_args.dest_offset = args->dest_offset + nbytes; 391 + ts_header_args.wrote_pes_header = wrote_pes_header; 392 + ts_header_args.n_stuffing_bytes = stuff_bytes; 403 393 404 394 nbytes += vidtv_pes_write_ts_h(ts_header_args, need_pcr, 405 395 &last_pcr); ··· 403 403 404 404 if (!wrote_pes_header) { 405 405 /* write the PES header only once */ 406 - pes_header_args.dest_buf = args.dest_buf; 407 - 408 - pes_header_args.dest_offset = args.dest_offset + 409 - nbytes; 410 - 411 - pes_header_args.dest_buf_sz = args.dest_buf_sz; 412 - pes_header_args.encoder_id = args.encoder_id; 413 - pes_header_args.send_pts = args.send_pts; 414 - pes_header_args.pts = args.pts; 415 - pes_header_args.send_dts = args.send_dts; 416 - pes_header_args.dts = args.dts; 417 - pes_header_args.stream_id = args.stream_id; 418 - pes_header_args.n_pes_h_s_bytes = args.n_pes_h_s_bytes; 419 - pes_header_args.access_unit_len = args.access_unit_len; 420 - 421 - nbytes += vidtv_pes_write_h(pes_header_args); 422 - wrote_pes_header = true; 406 + pes_header_args.dest_offset = args->dest_offset + 407 + nbytes; 408 + nbytes += vidtv_pes_write_h(&pes_header_args); 409 + wrote_pes_header = true; 423 410 } 424 411 425 412 /* write as much of the payload as we possibly can */ 426 - nbytes += vidtv_memcpy(args.dest_buf, 427 - 
args.dest_offset + nbytes, 428 - args.dest_buf_sz, 429 - args.from, 413 + nbytes += vidtv_memcpy(args->dest_buf, 414 + args->dest_offset + nbytes, 415 + args->dest_buf_sz, 416 + args->from, 430 417 payload_size); 431 418 432 - args.from += payload_size; 419 + args->from += payload_size; 433 420 434 421 remaining_len -= payload_size; 435 422 }
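The three GENMASK splits in vidtv_pes_write_pts_dts() implement the standard 33-bit timestamp layout from ISO/IEC 13818-1: bits 32..30, 29..15 and 14..0 of the PTS each land in their own field, and each field ends in a '1' marker bit. A standalone sketch of the PTS-only case ('0010' prefix); `pack_pts` is an illustrative helper, not a driver function:

```c
#include <stdint.h>

/* Pack a 33-bit PTS into the 5-byte PES optional field, with the
 * '0010' prefix used when only a PTS (no DTS) is sent. */
static void pack_pts(uint64_t pts, uint8_t out[5])
{
	out[0] = (0x2 << 4) | (uint8_t)(((pts >> 30) & 0x7) << 1) | 0x1; /* bits 32..30 */
	out[1] = (uint8_t)(pts >> 22);                        /* bits 29..22 */
	out[2] = (uint8_t)(((pts >> 15) & 0x7f) << 1) | 0x1;  /* bits 21..15 */
	out[3] = (uint8_t)(pts >> 7);                         /* bits 14..7  */
	out[4] = (uint8_t)((pts & 0x7f) << 1) | 0x1;          /* bits 6..0   */
}
```

This matches the driver's byte-level result: its pts1/pts2/pts3 fields are the same bit groups, with pts2/pts3 stored via cpu_to_be16() so the marker bits end up in the low bit of each odd byte.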
+5 -3
drivers/media/test-drivers/vidtv/vidtv_pes.h
··· 14 14 #ifndef VIDTV_PES_H 15 15 #define VIDTV_PES_H 16 16 17 - #include <asm/byteorder.h> 18 17 #include <linux/types.h> 19 18 20 19 #include "vidtv_common.h" ··· 113 114 * @dest_buf_sz: The size of the dest_buffer 114 115 * @pid: The PID to use for the TS packets. 115 116 * @continuity_counter: Incremented on every new TS packet. 116 - * @n_pes_h_s_bytes: Padding bytes. Might be used by an encoder if needed, gets 117 + * @wrote_pes_header: Flag to indicate that the PES header was written 118 + * @n_stuffing_bytes: Padding bytes. Might be used by an encoder if needed, gets 117 119 * discarded by the decoder. 120 + * @pcr: counter driven by a 27Mhz clock. 118 121 */ 119 122 struct pes_ts_header_write_args { 120 123 void *dest_buf; ··· 147 146 * @dts: DTS value to send. 148 147 * @n_pes_h_s_bytes: Padding bytes. Might be used by an encoder if needed, gets 149 148 * discarded by the decoder. 149 + * @pcr: counter driven by a 27Mhz clock. 150 150 */ 151 151 struct pes_write_args { 152 152 void *dest_buf; ··· 188 186 * equal to the size of the access unit, since we need space for PES headers, TS headers 189 187 * and padding bytes, if any. 190 188 */ 191 - u32 vidtv_pes_write_into(struct pes_write_args args); 189 + u32 vidtv_pes_write_into(struct pes_write_args *args); 192 190 193 191 #endif // VIDTV_PES_H
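The vidtv_psi.c hunks below keep the table-driven dvb_crc32() (MPEG-2 CRC-32: polynomial 0x04C11DB7, initial value 0xffffffff per the new INITIAL_CRC define, no bit reflection, no final XOR). A table-free bitwise sketch of the same checksum, for reference:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise MPEG-2 CRC-32, computing the same value as the driver's
 * table-driven dvb_crc32(): shift each message byte into the top of
 * the register, then reduce bit by bit with the 0x04C11DB7 polynomial. */
static uint32_t crc32_mpeg2(uint32_t crc, const uint8_t *data, size_t len)
{
	while (len--) {
		crc ^= (uint32_t)*data++ << 24;
		for (int bit = 0; bit < 8; bit++)
			crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u
						  : (crc << 1);
	}
	return crc;
}
```

The lookup table in the driver is just this loop precomputed per byte; both produce the 4-byte trailer that table_section_crc32_write_into() appends to each PSI section.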
+1110 -407
drivers/media/test-drivers/vidtv/vidtv_psi.c
··· 6 6 * technically be broken into one or more sections, we do not do this here, 7 7 * hence 'table' and 'section' are interchangeable for vidtv. 8 8 * 9 - * This code currently supports three tables: PAT, PMT and SDT. These are the 10 - * bare minimum to get userspace to recognize our MPEG transport stream. It can 11 - * be extended to support more PSI tables in the future. 12 - * 13 9 * Copyright (C) 2020 Daniel W. S. Almeida 14 10 */ 15 11 16 12 #define pr_fmt(fmt) KBUILD_MODNAME ":%s, %d: " fmt, __func__, __LINE__ 17 13 18 - #include <linux/kernel.h> 19 - #include <linux/types.h> 20 - #include <linux/slab.h> 14 + #include <linux/bcd.h> 21 15 #include <linux/crc32.h> 22 - #include <linux/string.h> 16 + #include <linux/kernel.h> 17 + #include <linux/ktime.h> 23 18 #include <linux/printk.h> 24 19 #include <linux/ratelimit.h> 20 + #include <linux/slab.h> 25 21 #include <linux/string.h> 26 - #include <asm/byteorder.h> 22 + #include <linux/string.h> 23 + #include <linux/time.h> 24 + #include <linux/types.h> 27 25 28 - #include "vidtv_psi.h" 29 26 #include "vidtv_common.h" 27 + #include "vidtv_psi.h" 30 28 #include "vidtv_ts.h" 31 29 32 30 #define CRC_SIZE_IN_BYTES 4 33 31 #define MAX_VERSION_NUM 32 32 + #define INITIAL_CRC 0xffffffff 33 + #define ISO_LANGUAGE_CODE_LEN 3 34 34 35 35 static const u32 CRC_LUT[256] = { 36 36 /* from libdvbv5 */ ··· 79 79 0xbcb4666d, 0xb8757bda, 0xb5365d03, 0xb1f740b4 80 80 }; 81 81 82 - static inline u32 dvb_crc32(u32 crc, u8 *data, u32 len) 82 + static u32 dvb_crc32(u32 crc, u8 *data, u32 len) 83 83 { 84 84 /* from libdvbv5 */ 85 85 while (len--) ··· 92 92 h->version++; 93 93 } 94 94 95 - static inline u16 vidtv_psi_sdt_serv_get_desc_loop_len(struct vidtv_psi_table_sdt_service *s) 96 - { 97 - u16 mask; 98 - u16 ret; 99 - 100 - mask = GENMASK(11, 0); 101 - 102 - ret = be16_to_cpu(s->bitfield) & mask; 103 - return ret; 104 - } 105 - 106 - static inline u16 vidtv_psi_pmt_stream_get_desc_loop_len(struct vidtv_psi_table_pmt_stream *s) 107 
- { 108 - u16 mask; 109 - u16 ret; 110 - 111 - mask = GENMASK(9, 0); 112 - 113 - ret = be16_to_cpu(s->bitfield2) & mask; 114 - return ret; 115 - } 116 - 117 - static inline u16 vidtv_psi_pmt_get_desc_loop_len(struct vidtv_psi_table_pmt *p) 118 - { 119 - u16 mask; 120 - u16 ret; 121 - 122 - mask = GENMASK(9, 0); 123 - 124 - ret = be16_to_cpu(p->bitfield2) & mask; 125 - return ret; 126 - } 127 - 128 - static inline u16 vidtv_psi_get_sec_len(struct vidtv_psi_table_header *h) 95 + static u16 vidtv_psi_get_sec_len(struct vidtv_psi_table_header *h) 129 96 { 130 97 u16 mask; 131 98 u16 ret; ··· 103 136 return ret; 104 137 } 105 138 106 - inline u16 vidtv_psi_get_pat_program_pid(struct vidtv_psi_table_pat_program *p) 139 + u16 vidtv_psi_get_pat_program_pid(struct vidtv_psi_table_pat_program *p) 107 140 { 108 141 u16 mask; 109 142 u16 ret; ··· 114 147 return ret; 115 148 } 116 149 117 - inline u16 vidtv_psi_pmt_stream_get_elem_pid(struct vidtv_psi_table_pmt_stream *s) 150 + u16 vidtv_psi_pmt_stream_get_elem_pid(struct vidtv_psi_table_pmt_stream *s) 118 151 { 119 152 u16 mask; 120 153 u16 ret; ··· 125 158 return ret; 126 159 } 127 160 128 - static inline void vidtv_psi_set_desc_loop_len(__be16 *bitfield, u16 new_len, u8 desc_len_nbits) 161 + static void vidtv_psi_set_desc_loop_len(__be16 *bitfield, u16 new_len, 162 + u8 desc_len_nbits) 129 163 { 130 - u16 mask; 131 164 __be16 new; 165 + u16 mask; 132 166 133 167 mask = GENMASK(15, desc_len_nbits); 134 168 ··· 156 188 h->bitfield = new; 157 189 } 158 190 159 - static u32 vidtv_psi_ts_psi_write_into(struct psi_write_args args) 191 + /* 192 + * Packetize PSI sections into TS packets: 193 + * push a TS header (4bytes) every 184 bytes 194 + * manage the continuity_counter 195 + * add stuffing (i.e. 
padding bytes) after the CRC 196 + */ 197 + static u32 vidtv_psi_ts_psi_write_into(struct psi_write_args *args) 160 198 { 161 - /* 162 - * Packetize PSI sections into TS packets: 163 - * push a TS header (4bytes) every 184 bytes 164 - * manage the continuity_counter 165 - * add stuffing (i.e. padding bytes) after the CRC 166 - */ 167 - 168 - u32 nbytes_past_boundary = (args.dest_offset % TS_PACKET_LEN); 199 + struct vidtv_mpeg_ts ts_header = { 200 + .sync_byte = TS_SYNC_BYTE, 201 + .bitfield = cpu_to_be16((args->new_psi_section << 14) | args->pid), 202 + .scrambling = 0, 203 + .payload = 1, 204 + .adaptation_field = 0, /* no adaptation field */ 205 + }; 206 + u32 nbytes_past_boundary = (args->dest_offset % TS_PACKET_LEN); 169 207 bool aligned = (nbytes_past_boundary == 0); 170 - struct vidtv_mpeg_ts ts_header = {}; 171 - 172 - /* number of bytes written by this function */ 173 - u32 nbytes = 0; 174 - /* how much there is left to write */ 175 - u32 remaining_len = args.len; 176 - /* how much we can be written in this packet */ 208 + u32 remaining_len = args->len; 177 209 u32 payload_write_len = 0; 178 - /* where we are in the source */ 179 210 u32 payload_offset = 0; 211 + u32 nbytes = 0; 180 212 181 - const u16 PAYLOAD_START = args.new_psi_section; 182 - 183 - if (!args.crc && !args.is_crc) 213 + if (!args->crc && !args->is_crc) 184 214 pr_warn_ratelimited("Missing CRC for chunk\n"); 185 215 186 - if (args.crc) 187 - *args.crc = dvb_crc32(*args.crc, args.from, args.len); 216 + if (args->crc) 217 + *args->crc = dvb_crc32(*args->crc, args->from, args->len); 188 218 189 - if (args.new_psi_section && !aligned) { 219 + if (args->new_psi_section && !aligned) { 190 220 pr_warn_ratelimited("Cannot write a new PSI section in a misaligned buffer\n"); 191 221 192 222 /* forcibly align and hope for the best */ 193 - nbytes += vidtv_memset(args.dest_buf, 194 - args.dest_offset + nbytes, 195 - args.dest_buf_sz, 223 + nbytes += vidtv_memset(args->dest_buf, 224 + args->dest_offset 
+ nbytes, 225 + args->dest_buf_sz, 196 226 TS_FILL_BYTE, 197 227 TS_PACKET_LEN - nbytes_past_boundary); 198 228 } 199 229 200 230 while (remaining_len) { 201 - nbytes_past_boundary = (args.dest_offset + nbytes) % TS_PACKET_LEN; 231 + nbytes_past_boundary = (args->dest_offset + nbytes) % TS_PACKET_LEN; 202 232 aligned = (nbytes_past_boundary == 0); 203 233 204 234 if (aligned) { 205 235 /* if at a packet boundary, write a new TS header */ 206 - ts_header.sync_byte = TS_SYNC_BYTE; 207 - ts_header.bitfield = cpu_to_be16((PAYLOAD_START << 14) | args.pid); 208 - ts_header.scrambling = 0; 209 - ts_header.continuity_counter = *args.continuity_counter; 210 - ts_header.payload = 1; 211 - /* no adaptation field */ 212 - ts_header.adaptation_field = 0; 236 + ts_header.continuity_counter = *args->continuity_counter; 213 237 214 - /* copy the header */ 215 - nbytes += vidtv_memcpy(args.dest_buf, 216 - args.dest_offset + nbytes, 217 - args.dest_buf_sz, 238 + nbytes += vidtv_memcpy(args->dest_buf, 239 + args->dest_offset + nbytes, 240 + args->dest_buf_sz, 218 241 &ts_header, 219 242 sizeof(ts_header)); 220 243 /* 221 244 * This will trigger a discontinuity if the buffer is full, 222 245 * effectively dropping the packet. 
223 246 */ 224 - vidtv_ts_inc_cc(args.continuity_counter); 247 + vidtv_ts_inc_cc(args->continuity_counter); 225 248 } 226 249 227 250 /* write the pointer_field in the first byte of the payload */ 228 - if (args.new_psi_section) 229 - nbytes += vidtv_memset(args.dest_buf, 230 - args.dest_offset + nbytes, 231 - args.dest_buf_sz, 251 + if (args->new_psi_section) 252 + nbytes += vidtv_memset(args->dest_buf, 253 + args->dest_offset + nbytes, 254 + args->dest_buf_sz, 232 255 0x0, 233 256 1); 234 257 235 258 /* write as much of the payload as possible */ 236 - nbytes_past_boundary = (args.dest_offset + nbytes) % TS_PACKET_LEN; 259 + nbytes_past_boundary = (args->dest_offset + nbytes) % TS_PACKET_LEN; 237 260 payload_write_len = min(TS_PACKET_LEN - nbytes_past_boundary, remaining_len); 238 261 239 - nbytes += vidtv_memcpy(args.dest_buf, 240 - args.dest_offset + nbytes, 241 - args.dest_buf_sz, 242 - args.from + payload_offset, 262 + nbytes += vidtv_memcpy(args->dest_buf, 263 + args->dest_offset + nbytes, 264 + args->dest_buf_sz, 265 + args->from + payload_offset, 243 266 payload_write_len); 244 267 245 268 /* 'payload_write_len' written from a total of 'len' requested*/ ··· 242 283 * fill the rest of the packet if there is any remaining space unused 243 284 */ 244 285 245 - nbytes_past_boundary = (args.dest_offset + nbytes) % TS_PACKET_LEN; 286 + nbytes_past_boundary = (args->dest_offset + nbytes) % TS_PACKET_LEN; 246 287 247 - if (args.is_crc) 248 - nbytes += vidtv_memset(args.dest_buf, 249 - args.dest_offset + nbytes, 250 - args.dest_buf_sz, 288 + if (args->is_crc) 289 + nbytes += vidtv_memset(args->dest_buf, 290 + args->dest_offset + nbytes, 291 + args->dest_buf_sz, 251 292 TS_FILL_BYTE, 252 293 TS_PACKET_LEN - nbytes_past_boundary); 253 294 254 295 return nbytes; 255 296 } 256 297 257 - static u32 table_section_crc32_write_into(struct crc32_write_args args) 298 + static u32 table_section_crc32_write_into(struct crc32_write_args *args) 258 299 { 300 + struct 
psi_write_args psi_args = { 301 + .dest_buf = args->dest_buf, 302 + .from = &args->crc, 303 + .len = CRC_SIZE_IN_BYTES, 304 + .dest_offset = args->dest_offset, 305 + .pid = args->pid, 306 + .new_psi_section = false, 307 + .continuity_counter = args->continuity_counter, 308 + .is_crc = true, 309 + .dest_buf_sz = args->dest_buf_sz, 310 + }; 311 + 259 312 /* the CRC is the last entry in the section */ 260 - u32 nbytes = 0; 261 - struct psi_write_args psi_args = {}; 262 313 263 - psi_args.dest_buf = args.dest_buf; 264 - psi_args.from = &args.crc; 265 - psi_args.len = CRC_SIZE_IN_BYTES; 266 - psi_args.dest_offset = args.dest_offset; 267 - psi_args.pid = args.pid; 268 - psi_args.new_psi_section = false; 269 - psi_args.continuity_counter = args.continuity_counter; 270 - psi_args.is_crc = true; 271 - psi_args.dest_buf_sz = args.dest_buf_sz; 314 + return vidtv_psi_ts_psi_write_into(&psi_args); 315 + } 272 316 273 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 317 + static void vidtv_psi_desc_chain(struct vidtv_psi_desc *head, struct vidtv_psi_desc *desc) 318 + { 319 + if (head) { 320 + while (head->next) 321 + head = head->next; 274 322 275 - return nbytes; 323 + head->next = desc; 324 + } 276 325 } 277 326 278 327 struct vidtv_psi_desc_service *vidtv_psi_service_desc_init(struct vidtv_psi_desc *head, ··· 293 326 u32 provider_name_len = provider_name ? 
strlen(provider_name) : 0; 294 327 295 328 desc = kzalloc(sizeof(*desc), GFP_KERNEL); 329 + if (!desc) 330 + return NULL; 296 331 297 332 desc->type = SERVICE_DESCRIPTOR; 298 333 ··· 316 347 if (provider_name && provider_name_len) 317 348 desc->provider_name = kstrdup(provider_name, GFP_KERNEL); 318 349 319 - if (head) { 320 - while (head->next) 321 - head = head->next; 322 - 323 - head->next = (struct vidtv_psi_desc *)desc; 324 - } 350 + vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc); 325 351 return desc; 326 352 } 327 353 ··· 329 365 struct vidtv_psi_desc_registration *desc; 330 366 331 367 desc = kzalloc(sizeof(*desc) + sizeof(format_id) + additional_info_len, GFP_KERNEL); 368 + if (!desc) 369 + return NULL; 332 370 333 371 desc->type = REGISTRATION_DESCRIPTOR; 334 372 ··· 344 378 additional_ident_info, 345 379 additional_info_len); 346 380 347 - if (head) { 348 - while (head->next) 349 - head = head->next; 381 + vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc); 382 + return desc; 383 + } 350 384 351 - head->next = (struct vidtv_psi_desc *)desc; 385 + struct vidtv_psi_desc_network_name 386 + *vidtv_psi_network_name_desc_init(struct vidtv_psi_desc *head, char *network_name) 387 + { 388 + u32 network_name_len = network_name ? 
strlen(network_name) : 0; 389 + struct vidtv_psi_desc_network_name *desc; 390 + 391 + desc = kzalloc(sizeof(*desc), GFP_KERNEL); 392 + if (!desc) 393 + return NULL; 394 + 395 + desc->type = NETWORK_NAME_DESCRIPTOR; 396 + 397 + desc->length = network_name_len; 398 + 399 + if (network_name && network_name_len) 400 + desc->network_name = kstrdup(network_name, GFP_KERNEL); 401 + 402 + vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc); 403 + return desc; 404 + } 405 + 406 + struct vidtv_psi_desc_service_list 407 + *vidtv_psi_service_list_desc_init(struct vidtv_psi_desc *head, 408 + struct vidtv_psi_desc_service_list_entry *entry) 409 + { 410 + struct vidtv_psi_desc_service_list_entry *curr_e = NULL; 411 + struct vidtv_psi_desc_service_list_entry *head_e = NULL; 412 + struct vidtv_psi_desc_service_list_entry *prev_e = NULL; 413 + struct vidtv_psi_desc_service_list *desc; 414 + u16 length = 0; 415 + 416 + desc = kzalloc(sizeof(*desc), GFP_KERNEL); 417 + if (!desc) 418 + return NULL; 419 + 420 + desc->type = SERVICE_LIST_DESCRIPTOR; 421 + 422 + while (entry) { 423 + curr_e = kzalloc(sizeof(*curr_e), GFP_KERNEL); 424 + if (!curr_e) { 425 + while (head_e) { 426 + curr_e = head_e; 427 + head_e = head_e->next; 428 + kfree(curr_e); 429 + } 430 + kfree(desc); 431 + return NULL; 432 + } 433 + 434 + curr_e->service_id = entry->service_id; 435 + curr_e->service_type = entry->service_type; 436 + 437 + length += sizeof(struct vidtv_psi_desc_service_list_entry) - 438 + sizeof(struct vidtv_psi_desc_service_list_entry *); 439 + 440 + if (!head_e) 441 + head_e = curr_e; 442 + if (prev_e) 443 + prev_e->next = curr_e; 444 + 445 + prev_e = curr_e; 446 + entry = entry->next; 352 447 } 353 448 449 + desc->length = length; 450 + desc->service_list = head_e; 451 + 452 + vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc); 453 + return desc; 454 + } 455 + 456 + struct vidtv_psi_desc_short_event 457 + *vidtv_psi_short_event_desc_init(struct vidtv_psi_desc *head, 458 + char 
*iso_language_code, 459 + char *event_name, 460 + char *text) 461 + { 462 + u32 iso_len = iso_language_code ? strlen(iso_language_code) : 0; 463 + u32 event_name_len = event_name ? strlen(event_name) : 0; 464 + struct vidtv_psi_desc_short_event *desc; 465 + u32 text_len = text ? strlen(text) : 0; 466 + 467 + desc = kzalloc(sizeof(*desc), GFP_KERNEL); 468 + if (!desc) 469 + return NULL; 470 + 471 + desc->type = SHORT_EVENT_DESCRIPTOR; 472 + 473 + desc->length = ISO_LANGUAGE_CODE_LEN + 474 + sizeof_field(struct vidtv_psi_desc_short_event, event_name_len) + 475 + event_name_len + 476 + sizeof_field(struct vidtv_psi_desc_short_event, text_len) + 477 + text_len; 478 + 479 + desc->event_name_len = event_name_len; 480 + desc->text_len = text_len; 481 + 482 + if (iso_len != ISO_LANGUAGE_CODE_LEN) 483 + iso_language_code = "eng"; 484 + 485 + desc->iso_language_code = kstrdup(iso_language_code, GFP_KERNEL); 486 + 487 + if (event_name && event_name_len) 488 + desc->event_name = kstrdup(event_name, GFP_KERNEL); 489 + 490 + if (text && text_len) 491 + desc->text = kstrdup(text, GFP_KERNEL); 492 + 493 + vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc); 354 494 return desc; 355 495 } 356 496 357 497 struct vidtv_psi_desc *vidtv_psi_desc_clone(struct vidtv_psi_desc *desc) 358 498 { 499 + struct vidtv_psi_desc_network_name *desc_network_name; 500 + struct vidtv_psi_desc_service_list *desc_service_list; 501 + struct vidtv_psi_desc_short_event *desc_short_event; 502 + struct vidtv_psi_desc_service *service; 359 503 struct vidtv_psi_desc *head = NULL; 360 504 struct vidtv_psi_desc *prev = NULL; 361 505 struct vidtv_psi_desc *curr = NULL; 362 - 363 - struct vidtv_psi_desc_service *service; 364 506 365 507 while (desc) { 366 508 switch (desc->type) { 367 509 case SERVICE_DESCRIPTOR: 368 510 service = (struct vidtv_psi_desc_service *)desc; 369 511 curr = (struct vidtv_psi_desc *) 370 - vidtv_psi_service_desc_init(head, 371 - service->service_type, 372 - service->service_name, 
373 - service->provider_name); 512 + vidtv_psi_service_desc_init(head, 513 + service->service_type, 514 + service->service_name, 515 + service->provider_name); 516 + break; 517 + 518 + case NETWORK_NAME_DESCRIPTOR: 519 + desc_network_name = (struct vidtv_psi_desc_network_name *)desc; 520 + curr = (struct vidtv_psi_desc *) 521 + vidtv_psi_network_name_desc_init(head, 522 + desc_network_name->network_name); 523 + break; 524 + 525 + case SERVICE_LIST_DESCRIPTOR: 526 + desc_service_list = (struct vidtv_psi_desc_service_list *)desc; 527 + curr = (struct vidtv_psi_desc *) 528 + vidtv_psi_service_list_desc_init(head, 529 + desc_service_list->service_list); 530 + break; 531 + 532 + case SHORT_EVENT_DESCRIPTOR: 533 + desc_short_event = (struct vidtv_psi_desc_short_event *)desc; 534 + curr = (struct vidtv_psi_desc *) 535 + vidtv_psi_short_event_desc_init(head, 536 + desc_short_event->iso_language_code, 537 + desc_short_event->event_name, 538 + desc_short_event->text); 374 539 break; 375 540 376 541 case REGISTRATION_DESCRIPTOR: 377 542 default: 378 543 curr = kzalloc(sizeof(*desc) + desc->length, GFP_KERNEL); 544 + if (!curr) 545 + return NULL; 379 546 memcpy(curr, desc, sizeof(*desc) + desc->length); 380 - break; 381 - } 547 + } 382 548 383 - if (curr) 384 - curr->next = NULL; 549 + if (!curr) 550 + return NULL; 551 + 552 + curr->next = NULL; 385 553 if (!head) 386 554 head = curr; 387 555 if (prev) ··· 530 430 531 431 void vidtv_psi_desc_destroy(struct vidtv_psi_desc *desc) 532 432 { 433 + struct vidtv_psi_desc_service_list_entry *sl_entry_tmp = NULL; 434 + struct vidtv_psi_desc_service_list_entry *sl_entry = NULL; 533 435 struct vidtv_psi_desc *curr = desc; 534 436 struct vidtv_psi_desc *tmp = NULL; 535 437 ··· 548 446 case REGISTRATION_DESCRIPTOR: 549 447 /* nothing to do */ 550 448 break; 449 + 450 + case NETWORK_NAME_DESCRIPTOR: 451 + kfree(((struct vidtv_psi_desc_network_name *)tmp)->network_name); 452 + break; 453 + 454 + case SERVICE_LIST_DESCRIPTOR: 455 + sl_entry 
= ((struct vidtv_psi_desc_service_list *)tmp)->service_list; 456 + while (sl_entry) { 457 + sl_entry_tmp = sl_entry; 458 + sl_entry = sl_entry->next; 459 + kfree(sl_entry_tmp); 460 + } 461 + break; 462 + 463 + case SHORT_EVENT_DESCRIPTOR: 464 + kfree(((struct vidtv_psi_desc_short_event *)tmp)->iso_language_code); 465 + kfree(((struct vidtv_psi_desc_short_event *)tmp)->event_name); 466 + kfree(((struct vidtv_psi_desc_short_event *)tmp)->text); 467 + break; 551 468 552 469 default: 553 470 pr_warn_ratelimited("Possible leak: not handling descriptor type %d\n", ··· 634 513 vidtv_psi_update_version_num(&sdt->header); 635 514 } 636 515 637 - static u32 vidtv_psi_desc_write_into(struct desc_write_args args) 516 + static u32 vidtv_psi_desc_write_into(struct desc_write_args *args) 638 517 { 639 - /* the number of bytes written by this function */ 518 + struct psi_write_args psi_args = { 519 + .dest_buf = args->dest_buf, 520 + .from = &args->desc->type, 521 + .pid = args->pid, 522 + .new_psi_section = false, 523 + .continuity_counter = args->continuity_counter, 524 + .is_crc = false, 525 + .dest_buf_sz = args->dest_buf_sz, 526 + .crc = args->crc, 527 + .len = sizeof_field(struct vidtv_psi_desc, type) + 528 + sizeof_field(struct vidtv_psi_desc, length), 529 + }; 530 + struct vidtv_psi_desc_service_list_entry *serv_list_entry = NULL; 640 531 u32 nbytes = 0; 641 - struct psi_write_args psi_args = {}; 642 532 643 - psi_args.dest_buf = args.dest_buf; 644 - psi_args.from = &args.desc->type; 533 + psi_args.dest_offset = args->dest_offset + nbytes; 645 534 646 - psi_args.len = sizeof_field(struct vidtv_psi_desc, type) + 647 - sizeof_field(struct vidtv_psi_desc, length); 535 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 648 536 649 - psi_args.dest_offset = args.dest_offset + nbytes; 650 - psi_args.pid = args.pid; 651 - psi_args.new_psi_section = false; 652 - psi_args.continuity_counter = args.continuity_counter; 653 - psi_args.is_crc = false; 654 - psi_args.dest_buf_sz = 
args.dest_buf_sz; 655 - psi_args.crc = args.crc; 656 - 657 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 658 - 659 - switch (args.desc->type) { 537 + switch (args->desc->type) { 660 538 case SERVICE_DESCRIPTOR: 661 - psi_args.dest_offset = args.dest_offset + nbytes; 539 + psi_args.dest_offset = args->dest_offset + nbytes; 662 540 psi_args.len = sizeof_field(struct vidtv_psi_desc_service, service_type) + 663 541 sizeof_field(struct vidtv_psi_desc_service, provider_name_len); 664 - psi_args.from = &((struct vidtv_psi_desc_service *)args.desc)->service_type; 542 + psi_args.from = &((struct vidtv_psi_desc_service *)args->desc)->service_type; 665 543 666 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 544 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 667 545 668 - psi_args.dest_offset = args.dest_offset + nbytes; 669 - psi_args.len = ((struct vidtv_psi_desc_service *)args.desc)->provider_name_len; 670 - psi_args.from = ((struct vidtv_psi_desc_service *)args.desc)->provider_name; 546 + psi_args.dest_offset = args->dest_offset + nbytes; 547 + psi_args.len = ((struct vidtv_psi_desc_service *)args->desc)->provider_name_len; 548 + psi_args.from = ((struct vidtv_psi_desc_service *)args->desc)->provider_name; 671 549 672 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 550 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 673 551 674 - psi_args.dest_offset = args.dest_offset + nbytes; 552 + psi_args.dest_offset = args->dest_offset + nbytes; 675 553 psi_args.len = sizeof_field(struct vidtv_psi_desc_service, service_name_len); 676 - psi_args.from = &((struct vidtv_psi_desc_service *)args.desc)->service_name_len; 554 + psi_args.from = &((struct vidtv_psi_desc_service *)args->desc)->service_name_len; 677 555 678 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 556 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 679 557 680 - psi_args.dest_offset = args.dest_offset + nbytes; 681 - psi_args.len = ((struct vidtv_psi_desc_service *)args.desc)->service_name_len; 
682 - psi_args.from = ((struct vidtv_psi_desc_service *)args.desc)->service_name; 558 + psi_args.dest_offset = args->dest_offset + nbytes; 559 + psi_args.len = ((struct vidtv_psi_desc_service *)args->desc)->service_name_len; 560 + psi_args.from = ((struct vidtv_psi_desc_service *)args->desc)->service_name; 683 561 684 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 562 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 563 + break; 564 + 565 + case NETWORK_NAME_DESCRIPTOR: 566 + psi_args.dest_offset = args->dest_offset + nbytes; 567 + psi_args.len = args->desc->length; 568 + psi_args.from = ((struct vidtv_psi_desc_network_name *)args->desc)->network_name; 569 + 570 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 571 + break; 572 + 573 + case SERVICE_LIST_DESCRIPTOR: 574 + serv_list_entry = ((struct vidtv_psi_desc_service_list *)args->desc)->service_list; 575 + while (serv_list_entry) { 576 + psi_args.dest_offset = args->dest_offset + nbytes; 577 + psi_args.len = sizeof(struct vidtv_psi_desc_service_list_entry) - 578 + sizeof(struct vidtv_psi_desc_service_list_entry *); 579 + psi_args.from = serv_list_entry; 580 + 581 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 582 + 583 + serv_list_entry = serv_list_entry->next; 584 + } 585 + break; 586 + 587 + case SHORT_EVENT_DESCRIPTOR: 588 + psi_args.dest_offset = args->dest_offset + nbytes; 589 + psi_args.len = ISO_LANGUAGE_CODE_LEN; 590 + psi_args.from = ((struct vidtv_psi_desc_short_event *) 591 + args->desc)->iso_language_code; 592 + 593 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 594 + 595 + psi_args.dest_offset = args->dest_offset + nbytes; 596 + psi_args.len = sizeof_field(struct vidtv_psi_desc_short_event, event_name_len); 597 + psi_args.from = &((struct vidtv_psi_desc_short_event *) 598 + args->desc)->event_name_len; 599 + 600 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 601 + 602 + psi_args.dest_offset = args->dest_offset + nbytes; 603 + psi_args.len = ((struct vidtv_psi_desc_short_event 
*)args->desc)->event_name_len; 604 + psi_args.from = ((struct vidtv_psi_desc_short_event *)args->desc)->event_name; 605 + 606 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 607 + 608 + psi_args.dest_offset = args->dest_offset + nbytes; 609 + psi_args.len = sizeof_field(struct vidtv_psi_desc_short_event, text_len); 610 + psi_args.from = &((struct vidtv_psi_desc_short_event *)args->desc)->text_len; 611 + 612 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 613 + 614 + psi_args.dest_offset = args->dest_offset + nbytes; 615 + psi_args.len = ((struct vidtv_psi_desc_short_event *)args->desc)->text_len; 616 + psi_args.from = ((struct vidtv_psi_desc_short_event *)args->desc)->text; 617 + 618 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 619 + 685 620 break; 686 621 687 622 case REGISTRATION_DESCRIPTOR: 688 623 default: 689 - psi_args.dest_offset = args.dest_offset + nbytes; 690 - psi_args.len = args.desc->length; 691 - psi_args.from = &args.desc->data; 624 + psi_args.dest_offset = args->dest_offset + nbytes; 625 + psi_args.len = args->desc->length; 626 + psi_args.from = &args->desc->data; 692 627 693 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 628 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 694 629 break; 695 630 } 696 631 ··· 754 577 } 755 578 756 579 static u32 757 - vidtv_psi_table_header_write_into(struct header_write_args args) 580 + vidtv_psi_table_header_write_into(struct header_write_args *args) 758 581 { 759 - /* the number of bytes written by this function */ 760 - u32 nbytes = 0; 761 - struct psi_write_args psi_args = {}; 582 + struct psi_write_args psi_args = { 583 + .dest_buf = args->dest_buf, 584 + .from = args->h, 585 + .len = sizeof(struct vidtv_psi_table_header), 586 + .dest_offset = args->dest_offset, 587 + .pid = args->pid, 588 + .new_psi_section = true, 589 + .continuity_counter = args->continuity_counter, 590 + .is_crc = false, 591 + .dest_buf_sz = args->dest_buf_sz, 592 + .crc = args->crc, 593 + }; 762 594 763 - 
psi_args.dest_buf = args.dest_buf; 764 - psi_args.from = args.h; 765 - psi_args.len = sizeof(struct vidtv_psi_table_header); 766 - psi_args.dest_offset = args.dest_offset; 767 - psi_args.pid = args.pid; 768 - psi_args.new_psi_section = true; 769 - psi_args.continuity_counter = args.continuity_counter; 770 - psi_args.is_crc = false; 771 - psi_args.dest_buf_sz = args.dest_buf_sz; 772 - psi_args.crc = args.crc; 773 - 774 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 775 - 776 - return nbytes; 595 + return vidtv_psi_ts_psi_write_into(&psi_args); 777 596 } 778 597 779 598 void 780 599 vidtv_psi_pat_table_update_sec_len(struct vidtv_psi_table_pat *pat) 781 600 { 782 - /* see ISO/IEC 13818-1 : 2000 p.43 */ 783 601 u16 length = 0; 784 602 u32 i; 603 + 604 + /* see ISO/IEC 13818-1 : 2000 p.43 */ 785 605 786 606 /* from immediately after 'section_length' until 'last_section_number'*/ 787 607 length += PAT_LEN_UNTIL_LAST_SECTION_NUMBER; 788 608 789 609 /* do not count the pointer */ 790 - for (i = 0; i < pat->programs; ++i) 610 + for (i = 0; i < pat->num_pat; ++i) 791 611 length += sizeof(struct vidtv_psi_table_pat_program) - 792 612 sizeof(struct vidtv_psi_table_pat_program *); 793 613 ··· 795 621 796 622 void vidtv_psi_pmt_table_update_sec_len(struct vidtv_psi_table_pmt *pmt) 797 623 { 798 - /* see ISO/IEC 13818-1 : 2000 p.46 */ 799 - u16 length = 0; 800 624 struct vidtv_psi_table_pmt_stream *s = pmt->stream; 801 625 u16 desc_loop_len; 626 + u16 length = 0; 627 + 628 + /* see ISO/IEC 13818-1 : 2000 p.46 */ 802 629 803 630 /* from immediately after 'section_length' until 'program_info_length'*/ 804 631 length += PMT_LEN_UNTIL_PROGRAM_INFO_LENGTH; ··· 830 655 831 656 void vidtv_psi_sdt_table_update_sec_len(struct vidtv_psi_table_sdt *sdt) 832 657 { 833 - /* see ETSI EN 300 468 V 1.10.1 p.24 */ 834 - u16 length = 0; 835 658 struct vidtv_psi_table_sdt_service *s = sdt->service; 836 659 u16 desc_loop_len; 660 + u16 length = 0; 661 + 662 + /* see ETSI EN 300 468 V 1.10.1 
p.24 */ 837 663 838 664 /* 839 665 * from immediately after 'section_length' until ··· 857 681 } 858 682 859 683 length += CRC_SIZE_IN_BYTES; 860 - 861 684 vidtv_psi_set_sec_len(&sdt->header, length); 862 685 } 863 686 ··· 869 694 const u16 RESERVED = 0x07; 870 695 871 696 program = kzalloc(sizeof(*program), GFP_KERNEL); 697 + if (!program) 698 + return NULL; 872 699 873 700 program->service_id = cpu_to_be16(service_id); 874 701 ··· 891 714 void 892 715 vidtv_psi_pat_program_destroy(struct vidtv_psi_table_pat_program *p) 893 716 { 894 - struct vidtv_psi_table_pat_program *curr = p; 895 717 struct vidtv_psi_table_pat_program *tmp = NULL; 718 + struct vidtv_psi_table_pat_program *curr = p; 896 719 897 720 while (curr) { 898 721 tmp = curr; ··· 901 724 } 902 725 } 903 726 727 + /* This function transfers ownership of p to the table */ 904 728 void 905 729 vidtv_psi_pat_program_assign(struct vidtv_psi_table_pat *pat, 906 730 struct vidtv_psi_table_pat_program *p) 907 731 { 908 - /* This function transfers ownership of p to the table */ 732 + struct vidtv_psi_table_pat_program *program; 733 + u16 program_count; 909 734 910 - u16 program_count = 0; 911 - struct vidtv_psi_table_pat_program *program = p; 735 + do { 736 + program_count = 0; 737 + program = p; 912 738 913 - if (p == pat->program) 914 - return; 739 + if (p == pat->program) 740 + return; 915 741 916 - while (program) { 917 - ++program_count; 918 - program = program->next; 919 - } 742 + while (program) { 743 + ++program_count; 744 + program = program->next; 745 + } 920 746 921 - pat->programs = program_count; 922 - pat->program = p; 747 + pat->num_pat = program_count; 748 + pat->program = p; 923 749 924 - /* Recompute section length */ 925 - vidtv_psi_pat_table_update_sec_len(pat); 750 + /* Recompute section length */ 751 + vidtv_psi_pat_table_update_sec_len(pat); 926 752 927 - if (vidtv_psi_get_sec_len(&pat->header) > MAX_SECTION_LEN) 928 - vidtv_psi_pat_program_assign(pat, NULL); 753 + p = NULL; 754 + } while 
(vidtv_psi_get_sec_len(&pat->header) > MAX_SECTION_LEN); 929 755 930 756 vidtv_psi_update_version_num(&pat->header); 931 757 } 932 758 933 759 struct vidtv_psi_table_pat *vidtv_psi_pat_table_init(u16 transport_stream_id) 934 760 { 935 - struct vidtv_psi_table_pat *pat = kzalloc(sizeof(*pat), GFP_KERNEL); 761 + struct vidtv_psi_table_pat *pat; 936 762 const u16 SYNTAX = 0x1; 937 763 const u16 ZERO = 0x0; 938 764 const u16 ONES = 0x03; 765 + 766 + pat = kzalloc(sizeof(*pat), GFP_KERNEL); 767 + if (!pat) 768 + return NULL; 939 769 940 770 pat->header.table_id = 0x0; 941 771 ··· 956 772 pat->header.section_id = 0x0; 957 773 pat->header.last_section = 0x0; 958 774 959 - pat->programs = 0; 960 - 961 775 vidtv_psi_pat_table_update_sec_len(pat); 962 776 963 777 return pat; 964 778 } 965 779 966 - u32 vidtv_psi_pat_write_into(struct vidtv_psi_pat_write_args args) 780 + u32 vidtv_psi_pat_write_into(struct vidtv_psi_pat_write_args *args) 967 781 { 968 - /* the number of bytes written by this function */ 782 + struct vidtv_psi_table_pat_program *p = args->pat->program; 783 + struct header_write_args h_args = { 784 + .dest_buf = args->buf, 785 + .dest_offset = args->offset, 786 + .pid = VIDTV_PAT_PID, 787 + .h = &args->pat->header, 788 + .continuity_counter = args->continuity_counter, 789 + .dest_buf_sz = args->buf_sz, 790 + }; 791 + struct psi_write_args psi_args = { 792 + .dest_buf = args->buf, 793 + .pid = VIDTV_PAT_PID, 794 + .new_psi_section = false, 795 + .continuity_counter = args->continuity_counter, 796 + .is_crc = false, 797 + .dest_buf_sz = args->buf_sz, 798 + }; 799 + struct crc32_write_args c_args = { 800 + .dest_buf = args->buf, 801 + .pid = VIDTV_PAT_PID, 802 + .dest_buf_sz = args->buf_sz, 803 + }; 804 + u32 crc = INITIAL_CRC; 969 805 u32 nbytes = 0; 970 - const u16 pat_pid = VIDTV_PAT_PID; 971 - u32 crc = 0xffffffff; 972 806 973 - struct vidtv_psi_table_pat_program *p = args.pat->program; 807 + vidtv_psi_pat_table_update_sec_len(args->pat); 974 808 975 - struct 
header_write_args h_args = {}; 976 - struct psi_write_args psi_args = {}; 977 - struct crc32_write_args c_args = {}; 978 - 979 - vidtv_psi_pat_table_update_sec_len(args.pat); 980 - 981 - h_args.dest_buf = args.buf; 982 - h_args.dest_offset = args.offset; 983 - h_args.h = &args.pat->header; 984 - h_args.pid = pat_pid; 985 - h_args.continuity_counter = args.continuity_counter; 986 - h_args.dest_buf_sz = args.buf_sz; 987 809 h_args.crc = &crc; 988 810 989 - nbytes += vidtv_psi_table_header_write_into(h_args); 811 + nbytes += vidtv_psi_table_header_write_into(&h_args); 990 812 991 813 /* note that the field 'u16 programs' is not really part of the PAT */ 992 814 993 - psi_args.dest_buf = args.buf; 994 - psi_args.pid = pat_pid; 995 - psi_args.new_psi_section = false; 996 - psi_args.continuity_counter = args.continuity_counter; 997 - psi_args.is_crc = false; 998 - psi_args.dest_buf_sz = args.buf_sz; 999 - psi_args.crc = &crc; 815 + psi_args.crc = &crc; 1000 816 1001 817 while (p) { 1002 818 /* copy the PAT programs */ 1003 819 psi_args.from = p; 1004 820 /* skip the pointer */ 1005 821 psi_args.len = sizeof(*p) - 1006 - sizeof(struct vidtv_psi_table_pat_program *); 1007 - psi_args.dest_offset = args.offset + nbytes; 822 + sizeof(struct vidtv_psi_table_pat_program *); 823 + psi_args.dest_offset = args->offset + nbytes; 824 + psi_args.continuity_counter = args->continuity_counter; 1008 825 1009 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 826 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 1010 827 1011 828 p = p->next; 1012 829 } 1013 830 1014 - c_args.dest_buf = args.buf; 1015 - c_args.dest_offset = args.offset + nbytes; 831 + c_args.dest_offset = args->offset + nbytes; 832 + c_args.continuity_counter = args->continuity_counter; 1016 833 c_args.crc = cpu_to_be32(crc); 1017 - c_args.pid = pat_pid; 1018 - c_args.continuity_counter = args.continuity_counter; 1019 - c_args.dest_buf_sz = args.buf_sz; 1020 834 1021 835 /* Write the CRC32 at the end */ 1022 - nbytes 
+= table_section_crc32_write_into(c_args); 836 + nbytes += table_section_crc32_write_into(&c_args); 1023 837 1024 838 return nbytes; 1025 839 } ··· 1041 859 u16 desc_loop_len; 1042 860 1043 861 stream = kzalloc(sizeof(*stream), GFP_KERNEL); 862 + if (!stream) 863 + return NULL; 1044 864 1045 865 stream->type = stream_type; 1046 866 ··· 1067 883 1068 884 void vidtv_psi_pmt_stream_destroy(struct vidtv_psi_table_pmt_stream *s) 1069 885 { 1070 - struct vidtv_psi_table_pmt_stream *curr_stream = s; 1071 886 struct vidtv_psi_table_pmt_stream *tmp_stream = NULL; 887 + struct vidtv_psi_table_pmt_stream *curr_stream = s; 1072 888 1073 889 while (curr_stream) { 1074 890 tmp_stream = curr_stream; ··· 1081 897 void vidtv_psi_pmt_stream_assign(struct vidtv_psi_table_pmt *pmt, 1082 898 struct vidtv_psi_table_pmt_stream *s) 1083 899 { 1084 - /* This function transfers ownership of s to the table */ 1085 - if (s == pmt->stream) 1086 - return; 900 + do { 901 + /* This function transfers ownership of s to the table */ 902 + if (s == pmt->stream) 903 + return; 1087 904 1088 - pmt->stream = s; 1089 - vidtv_psi_pmt_table_update_sec_len(pmt); 905 + pmt->stream = s; 906 + vidtv_psi_pmt_table_update_sec_len(pmt); 1090 907 1091 - if (vidtv_psi_get_sec_len(&pmt->header) > MAX_SECTION_LEN) 1092 - vidtv_psi_pmt_stream_assign(pmt, NULL); 908 + s = NULL; 909 + } while (vidtv_psi_get_sec_len(&pmt->header) > MAX_SECTION_LEN); 1093 910 1094 911 vidtv_psi_update_version_num(&pmt->header); 1095 912 } ··· 1118 933 struct vidtv_psi_table_pmt *vidtv_psi_pmt_table_init(u16 program_number, 1119 934 u16 pcr_pid) 1120 935 { 1121 - struct vidtv_psi_table_pmt *pmt = kzalloc(sizeof(*pmt), GFP_KERNEL); 1122 - const u16 SYNTAX = 0x1; 1123 - const u16 ZERO = 0x0; 1124 - const u16 ONES = 0x03; 936 + struct vidtv_psi_table_pmt *pmt; 1125 937 const u16 RESERVED1 = 0x07; 1126 938 const u16 RESERVED2 = 0x0f; 939 + const u16 SYNTAX = 0x1; 940 + const u16 ONES = 0x03; 941 + const u16 ZERO = 0x0; 1127 942 u16 
desc_loop_len; 943 + 944 + pmt = kzalloc(sizeof(*pmt), GFP_KERNEL); 945 + if (!pmt) 946 + return NULL; 1128 947 1129 948 if (!pcr_pid) 1130 949 pcr_pid = 0x1fff; ··· 1159 970 return pmt; 1160 971 } 1161 972 1162 - u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args args) 973 + u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args *args) 1163 974 { 1164 - /* the number of bytes written by this function */ 975 + struct vidtv_psi_desc *table_descriptor = args->pmt->descriptor; 976 + struct vidtv_psi_table_pmt_stream *stream = args->pmt->stream; 977 + struct vidtv_psi_desc *stream_descriptor; 978 + struct header_write_args h_args = { 979 + .dest_buf = args->buf, 980 + .dest_offset = args->offset, 981 + .h = &args->pmt->header, 982 + .pid = args->pid, 983 + .continuity_counter = args->continuity_counter, 984 + .dest_buf_sz = args->buf_sz, 985 + }; 986 + struct psi_write_args psi_args = { 987 + .dest_buf = args->buf, 988 + .from = &args->pmt->bitfield, 989 + .len = sizeof_field(struct vidtv_psi_table_pmt, bitfield) + 990 + sizeof_field(struct vidtv_psi_table_pmt, bitfield2), 991 + .pid = args->pid, 992 + .new_psi_section = false, 993 + .is_crc = false, 994 + .dest_buf_sz = args->buf_sz, 995 + }; 996 + struct desc_write_args d_args = { 997 + .dest_buf = args->buf, 998 + .desc = table_descriptor, 999 + .pid = args->pid, 1000 + .dest_buf_sz = args->buf_sz, 1001 + }; 1002 + struct crc32_write_args c_args = { 1003 + .dest_buf = args->buf, 1004 + .pid = args->pid, 1005 + .dest_buf_sz = args->buf_sz, 1006 + }; 1007 + u32 crc = INITIAL_CRC; 1165 1008 u32 nbytes = 0; 1166 - u32 crc = 0xffffffff; 1167 1009 1168 - struct vidtv_psi_desc *table_descriptor = args.pmt->descriptor; 1169 - struct vidtv_psi_table_pmt_stream *stream = args.pmt->stream; 1170 - struct vidtv_psi_desc *stream_descriptor = (stream) ? 
1171 - args.pmt->stream->descriptor : 1172 - NULL; 1010 + vidtv_psi_pmt_table_update_sec_len(args->pmt); 1173 1011 1174 - struct header_write_args h_args = {}; 1175 - struct psi_write_args psi_args = {}; 1176 - struct desc_write_args d_args = {}; 1177 - struct crc32_write_args c_args = {}; 1178 - 1179 - vidtv_psi_pmt_table_update_sec_len(args.pmt); 1180 - 1181 - h_args.dest_buf = args.buf; 1182 - h_args.dest_offset = args.offset; 1183 - h_args.h = &args.pmt->header; 1184 - h_args.pid = args.pid; 1185 - h_args.continuity_counter = args.continuity_counter; 1186 - h_args.dest_buf_sz = args.buf_sz; 1187 1012 h_args.crc = &crc; 1188 1013 1189 - nbytes += vidtv_psi_table_header_write_into(h_args); 1014 + nbytes += vidtv_psi_table_header_write_into(&h_args); 1190 1015 1191 1016 /* write the two bitfields */ 1192 - psi_args.dest_buf = args.buf; 1193 - psi_args.from = &args.pmt->bitfield; 1194 - psi_args.len = sizeof_field(struct vidtv_psi_table_pmt, bitfield) + 1195 - sizeof_field(struct vidtv_psi_table_pmt, bitfield2); 1196 - 1197 - psi_args.dest_offset = args.offset + nbytes; 1198 - psi_args.pid = args.pid; 1199 - psi_args.new_psi_section = false; 1200 - psi_args.continuity_counter = args.continuity_counter; 1201 - psi_args.is_crc = false; 1202 - psi_args.dest_buf_sz = args.buf_sz; 1203 - psi_args.crc = &crc; 1204 - 1205 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 1017 + psi_args.dest_offset = args->offset + nbytes; 1018 + psi_args.continuity_counter = args->continuity_counter; 1019 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 1206 1020 1207 1021 while (table_descriptor) { 1208 1022 /* write the descriptors, if any */ 1209 - d_args.dest_buf = args.buf; 1210 - d_args.dest_offset = args.offset + nbytes; 1211 - d_args.desc = table_descriptor; 1212 - d_args.pid = args.pid; 1213 - d_args.continuity_counter = args.continuity_counter; 1214 - d_args.dest_buf_sz = args.buf_sz; 1023 + d_args.dest_offset = args->offset + nbytes; 1024 + d_args.continuity_counter = 
args->continuity_counter; 1215 1025 d_args.crc = &crc; 1216 1026 1217 - nbytes += vidtv_psi_desc_write_into(d_args); 1027 + nbytes += vidtv_psi_desc_write_into(&d_args); 1218 1028 1219 1029 table_descriptor = table_descriptor->next; 1220 1030 } 1221 1031 1032 + psi_args.len += sizeof_field(struct vidtv_psi_table_pmt_stream, type); 1222 1033 while (stream) { 1223 1034 /* write the streams, if any */ 1224 1035 psi_args.from = stream; 1225 - psi_args.len = sizeof_field(struct vidtv_psi_table_pmt_stream, type) + 1226 - sizeof_field(struct vidtv_psi_table_pmt_stream, bitfield) + 1227 - sizeof_field(struct vidtv_psi_table_pmt_stream, bitfield2); 1228 - psi_args.dest_offset = args.offset + nbytes; 1036 + psi_args.dest_offset = args->offset + nbytes; 1037 + psi_args.continuity_counter = args->continuity_counter; 1229 1038 1230 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 1039 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 1040 + 1041 + stream_descriptor = stream->descriptor; 1231 1042 1232 1043 while (stream_descriptor) { 1233 1044 /* write the stream descriptors, if any */ 1234 - d_args.dest_buf = args.buf; 1235 - d_args.dest_offset = args.offset + nbytes; 1045 + d_args.dest_offset = args->offset + nbytes; 1236 1046 d_args.desc = stream_descriptor; 1237 - d_args.pid = args.pid; 1238 - d_args.continuity_counter = args.continuity_counter; 1239 - d_args.dest_buf_sz = args.buf_sz; 1047 + d_args.continuity_counter = args->continuity_counter; 1240 1048 d_args.crc = &crc; 1241 1049 1242 - nbytes += vidtv_psi_desc_write_into(d_args); 1050 + nbytes += vidtv_psi_desc_write_into(&d_args); 1243 1051 1244 1052 stream_descriptor = stream_descriptor->next; 1245 1053 } ··· 1244 1058 stream = stream->next; 1245 1059 } 1246 1060 1247 - c_args.dest_buf = args.buf; 1248 - c_args.dest_offset = args.offset + nbytes; 1061 + c_args.dest_offset = args->offset + nbytes; 1249 1062 c_args.crc = cpu_to_be32(crc); 1250 - c_args.pid = args.pid; 1251 - c_args.continuity_counter = 
args.continuity_counter; 1252 - c_args.dest_buf_sz = args.buf_sz; 1063 + c_args.continuity_counter = args->continuity_counter; 1253 1064 1254 1065 /* Write the CRC32 at the end */ 1255 - nbytes += table_section_crc32_write_into(c_args); 1066 + nbytes += table_section_crc32_write_into(&c_args); 1256 1067 1257 1068 return nbytes; 1258 1069 } ··· 1261 1078 kfree(pmt); 1262 1079 } 1263 1080 1264 - struct vidtv_psi_table_sdt *vidtv_psi_sdt_table_init(u16 transport_stream_id) 1081 + struct vidtv_psi_table_sdt *vidtv_psi_sdt_table_init(u16 network_id, 1082 + u16 transport_stream_id) 1265 1083 { 1266 - struct vidtv_psi_table_sdt *sdt = kzalloc(sizeof(*sdt), GFP_KERNEL); 1267 - const u16 SYNTAX = 0x1; 1268 - const u16 ONE = 0x1; 1269 - const u16 ONES = 0x03; 1084 + struct vidtv_psi_table_sdt *sdt; 1270 1085 const u16 RESERVED = 0xff; 1086 + const u16 SYNTAX = 0x1; 1087 + const u16 ONES = 0x03; 1088 + const u16 ONE = 0x1; 1089 + 1090 + sdt = kzalloc(sizeof(*sdt), GFP_KERNEL); 1091 + if (!sdt) 1092 + return NULL; 1271 1093 1272 1094 sdt->header.table_id = 0x42; 1273 - 1274 1095 sdt->header.bitfield = cpu_to_be16((SYNTAX << 15) | (ONE << 14) | (ONES << 12)); 1275 1096 1276 1097 /* ··· 1298 1111 * This can be changed to something more useful, when support for 1299 1112 * NIT gets added 1300 1113 */ 1301 - sdt->network_id = cpu_to_be16(0xff01); 1114 + sdt->network_id = cpu_to_be16(network_id); 1302 1115 sdt->reserved = RESERVED; 1303 1116 1304 1117 vidtv_psi_sdt_table_update_sec_len(sdt); ··· 1306 1119 return sdt; 1307 1120 } 1308 1121 1309 - u32 vidtv_psi_sdt_write_into(struct vidtv_psi_sdt_write_args args) 1122 + u32 vidtv_psi_sdt_write_into(struct vidtv_psi_sdt_write_args *args) 1310 1123 { 1124 + struct header_write_args h_args = { 1125 + .dest_buf = args->buf, 1126 + .dest_offset = args->offset, 1127 + .h = &args->sdt->header, 1128 + .pid = VIDTV_SDT_PID, 1129 + .dest_buf_sz = args->buf_sz, 1130 + }; 1131 + struct psi_write_args psi_args = { 1132 + .dest_buf = args->buf, 
1133 + .len = sizeof_field(struct vidtv_psi_table_sdt, network_id) + 1134 + sizeof_field(struct vidtv_psi_table_sdt, reserved), 1135 + .pid = VIDTV_SDT_PID, 1136 + .new_psi_section = false, 1137 + .is_crc = false, 1138 + .dest_buf_sz = args->buf_sz, 1139 + }; 1140 + struct desc_write_args d_args = { 1141 + .dest_buf = args->buf, 1142 + .pid = VIDTV_SDT_PID, 1143 + .dest_buf_sz = args->buf_sz, 1144 + }; 1145 + struct crc32_write_args c_args = { 1146 + .dest_buf = args->buf, 1147 + .pid = VIDTV_SDT_PID, 1148 + .dest_buf_sz = args->buf_sz, 1149 + }; 1150 + struct vidtv_psi_table_sdt_service *service = args->sdt->service; 1151 + struct vidtv_psi_desc *service_desc; 1311 1152 u32 nbytes = 0; 1312 - u16 sdt_pid = VIDTV_SDT_PID; /* see ETSI EN 300 468 v1.15.1 p. 11 */ 1153 + u32 crc = INITIAL_CRC; 1313 1154 1314 - u32 crc = 0xffffffff; 1155 + /* see ETSI EN 300 468 v1.15.1 p. 11 */ 1315 1156 1316 - struct vidtv_psi_table_sdt_service *service = args.sdt->service; 1317 - struct vidtv_psi_desc *service_desc = (args.sdt->service) ? 
1318 - args.sdt->service->descriptor : 1319 - NULL; 1157 + vidtv_psi_sdt_table_update_sec_len(args->sdt); 1320 1158 1321 - struct header_write_args h_args = {}; 1322 - struct psi_write_args psi_args = {}; 1323 - struct desc_write_args d_args = {}; 1324 - struct crc32_write_args c_args = {}; 1325 - 1326 - vidtv_psi_sdt_table_update_sec_len(args.sdt); 1327 - 1328 - h_args.dest_buf = args.buf; 1329 - h_args.dest_offset = args.offset; 1330 - h_args.h = &args.sdt->header; 1331 - h_args.pid = sdt_pid; 1332 - h_args.continuity_counter = args.continuity_counter; 1333 - h_args.dest_buf_sz = args.buf_sz; 1159 + h_args.continuity_counter = args->continuity_counter; 1334 1160 h_args.crc = &crc; 1335 1161 1336 - nbytes += vidtv_psi_table_header_write_into(h_args); 1162 + nbytes += vidtv_psi_table_header_write_into(&h_args); 1337 1163 1338 - psi_args.dest_buf = args.buf; 1339 - psi_args.from = &args.sdt->network_id; 1340 - 1341 - psi_args.len = sizeof_field(struct vidtv_psi_table_sdt, network_id) + 1342 - sizeof_field(struct vidtv_psi_table_sdt, reserved); 1343 - 1344 - psi_args.dest_offset = args.offset + nbytes; 1345 - psi_args.pid = sdt_pid; 1346 - psi_args.new_psi_section = false; 1347 - psi_args.continuity_counter = args.continuity_counter; 1348 - psi_args.is_crc = false; 1349 - psi_args.dest_buf_sz = args.buf_sz; 1164 + psi_args.from = &args->sdt->network_id; 1165 + psi_args.dest_offset = args->offset + nbytes; 1166 + psi_args.continuity_counter = args->continuity_counter; 1350 1167 psi_args.crc = &crc; 1351 1168 1352 1169 /* copy u16 network_id + u8 reserved)*/ 1353 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 1170 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 1171 + 1172 + /* skip both pointers at the end */ 1173 + psi_args.len = sizeof(struct vidtv_psi_table_sdt_service) - 1174 + sizeof(struct vidtv_psi_desc *) - 1175 + sizeof(struct vidtv_psi_table_sdt_service *); 1354 1176 1355 1177 while (service) { 1356 1178 /* copy the services, if any */ 1357 1179 
psi_args.from = service; 1358 - /* skip both pointers at the end */ 1359 - psi_args.len = sizeof(struct vidtv_psi_table_sdt_service) - 1360 - sizeof(struct vidtv_psi_desc *) - 1361 - sizeof(struct vidtv_psi_table_sdt_service *); 1362 - psi_args.dest_offset = args.offset + nbytes; 1180 + psi_args.dest_offset = args->offset + nbytes; 1181 + psi_args.continuity_counter = args->continuity_counter; 1363 1182 1364 - nbytes += vidtv_psi_ts_psi_write_into(psi_args); 1183 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 1184 + 1185 + service_desc = service->descriptor; 1365 1186 1366 1187 while (service_desc) { 1367 1188 /* copy the service descriptors, if any */ 1368 - d_args.dest_buf = args.buf; 1369 - d_args.dest_offset = args.offset + nbytes; 1189 + d_args.dest_offset = args->offset + nbytes; 1370 1190 d_args.desc = service_desc; 1371 - d_args.pid = sdt_pid; 1372 - d_args.continuity_counter = args.continuity_counter; 1373 - d_args.dest_buf_sz = args.buf_sz; 1191 + d_args.continuity_counter = args->continuity_counter; 1374 1192 d_args.crc = &crc; 1375 1193 1376 - nbytes += vidtv_psi_desc_write_into(d_args); 1194 + nbytes += vidtv_psi_desc_write_into(&d_args); 1377 1195 1378 1196 service_desc = service_desc->next; 1379 1197 } ··· 1386 1194 service = service->next; 1387 1195 } 1388 1196 1389 - c_args.dest_buf = args.buf; 1390 - c_args.dest_offset = args.offset + nbytes; 1197 + c_args.dest_offset = args->offset + nbytes; 1391 1198 c_args.crc = cpu_to_be32(crc); 1392 - c_args.pid = sdt_pid; 1393 - c_args.continuity_counter = args.continuity_counter; 1394 - c_args.dest_buf_sz = args.buf_sz; 1199 + c_args.continuity_counter = args->continuity_counter; 1395 1200 1396 1201 /* Write the CRC at the end */ 1397 - nbytes += table_section_crc32_write_into(c_args); 1202 + nbytes += table_section_crc32_write_into(&c_args); 1398 1203 1399 1204 return nbytes; 1400 1205 } ··· 1404 1215 1405 1216 struct vidtv_psi_table_sdt_service 1406 1217 *vidtv_psi_sdt_service_init(struct 
vidtv_psi_table_sdt_service *head, 1407 - u16 service_id) 1218 + u16 service_id, 1219 + bool eit_schedule, 1220 + bool eit_present_following) 1408 1221 { 1409 1222 struct vidtv_psi_table_sdt_service *service; 1410 1223 1411 1224 service = kzalloc(sizeof(*service), GFP_KERNEL); 1225 + if (!service) 1226 + return NULL; 1412 1227 1413 1228 /* 1414 1229 * ETSI 300 468: this is a 16bit field which serves as a label to ··· 1421 1228 * corresponding program_map_section 1422 1229 */ 1423 1230 service->service_id = cpu_to_be16(service_id); 1424 - service->EIT_schedule = 0x0; 1425 - service->EIT_present_following = 0x0; 1231 + service->EIT_schedule = eit_schedule; 1232 + service->EIT_present_following = eit_present_following; 1426 1233 service->reserved = 0x3f; 1427 1234 1428 1235 service->bitfield = cpu_to_be16(RUNNING << 13); ··· 1455 1262 vidtv_psi_sdt_service_assign(struct vidtv_psi_table_sdt *sdt, 1456 1263 struct vidtv_psi_table_sdt_service *service) 1457 1264 { 1458 - if (service == sdt->service) 1459 - return; 1265 + do { 1266 + if (service == sdt->service) 1267 + return; 1460 1268 1461 - sdt->service = service; 1269 + sdt->service = service; 1462 1270 1463 - /* recompute section length */ 1464 - vidtv_psi_sdt_table_update_sec_len(sdt); 1271 + /* recompute section length */ 1272 + vidtv_psi_sdt_table_update_sec_len(sdt); 1465 1273 1466 - if (vidtv_psi_get_sec_len(&sdt->header) > MAX_SECTION_LEN) 1467 - vidtv_psi_sdt_service_assign(sdt, NULL); 1274 + service = NULL; 1275 + } while (vidtv_psi_get_sec_len(&sdt->header) > MAX_SECTION_LEN); 1468 1276 1469 1277 vidtv_psi_update_version_num(&sdt->header); 1470 1278 } 1471 1279 1280 + /* 1281 + * PMTs contain information about programs. For each program, 1282 + * there is one PMT section. 
This function will create a section 1283 + * for each program found in the PAT 1284 + */ 1472 1285 struct vidtv_psi_table_pmt** 1473 - vidtv_psi_pmt_create_sec_for_each_pat_entry(struct vidtv_psi_table_pat *pat, u16 pcr_pid) 1286 + vidtv_psi_pmt_create_sec_for_each_pat_entry(struct vidtv_psi_table_pat *pat, 1287 + u16 pcr_pid) 1474 1288 1475 1289 { 1476 - /* 1477 - * PMTs contain information about programs. For each program, 1478 - * there is one PMT section. This function will create a section 1479 - * for each program found in the PAT 1480 - */ 1481 - struct vidtv_psi_table_pat_program *program = pat->program; 1290 + struct vidtv_psi_table_pat_program *program; 1482 1291 struct vidtv_psi_table_pmt **pmt_secs; 1483 - u32 i = 0; 1292 + u32 i = 0, num_pmt = 0; 1484 1293 1485 - /* a section for each program_id */ 1486 - pmt_secs = kcalloc(pat->programs, 1487 - sizeof(struct vidtv_psi_table_pmt *), 1488 - GFP_KERNEL); 1489 - 1294 + /* 1295 + * The number of PMT entries is the number of PAT entries 1296 + * that contain service_id. 
That exclude special tables, like NIT 1297 + */ 1298 + program = pat->program; 1490 1299 while (program) { 1491 - pmt_secs[i] = vidtv_psi_pmt_table_init(be16_to_cpu(program->service_id), pcr_pid); 1492 - ++i; 1300 + if (program->service_id) 1301 + num_pmt++; 1493 1302 program = program->next; 1494 1303 } 1304 + 1305 + pmt_secs = kcalloc(num_pmt, 1306 + sizeof(struct vidtv_psi_table_pmt *), 1307 + GFP_KERNEL); 1308 + if (!pmt_secs) 1309 + return NULL; 1310 + 1311 + for (program = pat->program; program; program = program->next) { 1312 + if (!program->service_id) 1313 + continue; 1314 + pmt_secs[i] = vidtv_psi_pmt_table_init(be16_to_cpu(program->service_id), 1315 + pcr_pid); 1316 + 1317 + if (!pmt_secs[i]) { 1318 + while (i > 0) { 1319 + i--; 1320 + vidtv_psi_pmt_table_destroy(pmt_secs[i]); 1321 + } 1322 + return NULL; 1323 + } 1324 + i++; 1325 + } 1326 + pat->num_pmt = num_pmt; 1495 1327 1496 1328 return pmt_secs; 1497 1329 } 1498 1330 1331 + /* find the PMT section associated with 'program_num' */ 1499 1332 struct vidtv_psi_table_pmt 1500 1333 *vidtv_psi_find_pmt_sec(struct vidtv_psi_table_pmt **pmt_sections, 1501 1334 u16 nsections, 1502 1335 u16 program_num) 1503 1336 { 1504 - /* find the PMT section associated with 'program_num' */ 1505 1337 struct vidtv_psi_table_pmt *sec = NULL; 1506 1338 u32 i; 1507 1339 ··· 1537 1319 } 1538 1320 1539 1321 return NULL; /* not found */ 1322 + } 1323 + 1324 + static void vidtv_psi_nit_table_update_sec_len(struct vidtv_psi_table_nit *nit) 1325 + { 1326 + u16 length = 0; 1327 + struct vidtv_psi_table_transport *t = nit->transport; 1328 + u16 desc_loop_len; 1329 + u16 transport_loop_len = 0; 1330 + 1331 + /* 1332 + * from immediately after 'section_length' until 1333 + * 'network_descriptor_length' 1334 + */ 1335 + length += NIT_LEN_UNTIL_NETWORK_DESCRIPTOR_LEN; 1336 + 1337 + desc_loop_len = vidtv_psi_desc_comp_loop_len(nit->descriptor); 1338 + vidtv_psi_set_desc_loop_len(&nit->bitfield, desc_loop_len, 12); 1339 + 1340 + length += 
desc_loop_len; 1341 + 1342 + length += sizeof_field(struct vidtv_psi_table_nit, bitfield2); 1343 + 1344 + while (t) { 1345 + /* skip both pointers at the end */ 1346 + transport_loop_len += sizeof(struct vidtv_psi_table_transport) - 1347 + sizeof(struct vidtv_psi_desc *) - 1348 + sizeof(struct vidtv_psi_table_transport *); 1349 + 1350 + length += transport_loop_len; 1351 + 1352 + desc_loop_len = vidtv_psi_desc_comp_loop_len(t->descriptor); 1353 + vidtv_psi_set_desc_loop_len(&t->bitfield, desc_loop_len, 12); 1354 + 1355 + length += desc_loop_len; 1356 + 1357 + t = t->next; 1358 + } 1359 + 1360 + // Actually sets the transport stream loop len, maybe rename this function later 1361 + vidtv_psi_set_desc_loop_len(&nit->bitfield2, transport_loop_len, 12); 1362 + length += CRC_SIZE_IN_BYTES; 1363 + 1364 + vidtv_psi_set_sec_len(&nit->header, length); 1365 + } 1366 + 1367 + struct vidtv_psi_table_nit 1368 + *vidtv_psi_nit_table_init(u16 network_id, 1369 + u16 transport_stream_id, 1370 + char *network_name, 1371 + struct vidtv_psi_desc_service_list_entry *service_list) 1372 + { 1373 + struct vidtv_psi_table_transport *transport; 1374 + struct vidtv_psi_table_nit *nit; 1375 + const u16 SYNTAX = 0x1; 1376 + const u16 ONES = 0x03; 1377 + const u16 ONE = 0x1; 1378 + 1379 + nit = kzalloc(sizeof(*nit), GFP_KERNEL); 1380 + if (!nit) 1381 + return NULL; 1382 + 1383 + transport = kzalloc(sizeof(*transport), GFP_KERNEL); 1384 + if (!transport) 1385 + goto free_nit; 1386 + 1387 + nit->header.table_id = 0x40; // ACTUAL_NETWORK 1388 + 1389 + nit->header.bitfield = cpu_to_be16((SYNTAX << 15) | (ONE << 14) | (ONES << 12)); 1390 + 1391 + nit->header.id = cpu_to_be16(network_id); 1392 + nit->header.current_next = ONE; 1393 + 1394 + nit->header.version = 0x1f; 1395 + 1396 + nit->header.one2 = ONES; 1397 + nit->header.section_id = 0; 1398 + nit->header.last_section = 0; 1399 + 1400 + nit->bitfield = cpu_to_be16(0xf); 1401 + nit->bitfield2 = cpu_to_be16(0xf); 1402 + 1403 + nit->descriptor = 
(struct vidtv_psi_desc *) 1404 + vidtv_psi_network_name_desc_init(NULL, network_name); 1405 + if (!nit->descriptor) 1406 + goto free_transport; 1407 + 1408 + transport->transport_id = cpu_to_be16(transport_stream_id); 1409 + transport->network_id = cpu_to_be16(network_id); 1410 + transport->bitfield = cpu_to_be16(0xf); 1411 + transport->descriptor = (struct vidtv_psi_desc *) 1412 + vidtv_psi_service_list_desc_init(NULL, service_list); 1413 + if (!transport->descriptor) 1414 + goto free_nit_desc; 1415 + 1416 + nit->transport = transport; 1417 + 1418 + vidtv_psi_nit_table_update_sec_len(nit); 1419 + 1420 + return nit; 1421 + 1422 + free_nit_desc: 1423 + vidtv_psi_desc_destroy((struct vidtv_psi_desc *)nit->descriptor); 1424 + 1425 + free_transport: 1426 + kfree(transport); 1427 + free_nit: 1428 + kfree(nit); 1429 + return NULL; 1430 + } 1431 + 1432 + u32 vidtv_psi_nit_write_into(struct vidtv_psi_nit_write_args *args) 1433 + { 1434 + struct header_write_args h_args = { 1435 + .dest_buf = args->buf, 1436 + .dest_offset = args->offset, 1437 + .h = &args->nit->header, 1438 + .pid = VIDTV_NIT_PID, 1439 + .dest_buf_sz = args->buf_sz, 1440 + }; 1441 + struct psi_write_args psi_args = { 1442 + .dest_buf = args->buf, 1443 + .from = &args->nit->bitfield, 1444 + .len = sizeof_field(struct vidtv_psi_table_nit, bitfield), 1445 + .pid = VIDTV_NIT_PID, 1446 + .new_psi_section = false, 1447 + .is_crc = false, 1448 + .dest_buf_sz = args->buf_sz, 1449 + }; 1450 + struct desc_write_args d_args = { 1451 + .dest_buf = args->buf, 1452 + .pid = VIDTV_NIT_PID, 1453 + .dest_buf_sz = args->buf_sz, 1454 + }; 1455 + struct crc32_write_args c_args = { 1456 + .dest_buf = args->buf, 1457 + .pid = VIDTV_NIT_PID, 1458 + .dest_buf_sz = args->buf_sz, 1459 + }; 1460 + struct vidtv_psi_desc *table_descriptor = args->nit->descriptor; 1461 + struct vidtv_psi_table_transport *transport = args->nit->transport; 1462 + struct vidtv_psi_desc *transport_descriptor; 1463 + u32 crc = INITIAL_CRC; 1464 + u32 nbytes 
= 0; 1465 + 1466 + vidtv_psi_nit_table_update_sec_len(args->nit); 1467 + 1468 + h_args.continuity_counter = args->continuity_counter; 1469 + h_args.crc = &crc; 1470 + 1471 + nbytes += vidtv_psi_table_header_write_into(&h_args); 1472 + 1473 + /* write the bitfield */ 1474 + 1475 + psi_args.dest_offset = args->offset + nbytes; 1476 + psi_args.continuity_counter = args->continuity_counter; 1477 + psi_args.crc = &crc; 1478 + 1479 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 1480 + 1481 + while (table_descriptor) { 1482 + /* write the descriptors, if any */ 1483 + d_args.dest_offset = args->offset + nbytes; 1484 + d_args.desc = table_descriptor; 1485 + d_args.continuity_counter = args->continuity_counter; 1486 + d_args.crc = &crc; 1487 + 1488 + nbytes += vidtv_psi_desc_write_into(&d_args); 1489 + 1490 + table_descriptor = table_descriptor->next; 1491 + } 1492 + 1493 + /* write the second bitfield */ 1494 + psi_args.from = &args->nit->bitfield2; 1495 + psi_args.len = sizeof_field(struct vidtv_psi_table_nit, bitfield2); 1496 + psi_args.dest_offset = args->offset + nbytes; 1497 + 1498 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 1499 + 1500 + psi_args.len = sizeof_field(struct vidtv_psi_table_transport, transport_id) + 1501 + sizeof_field(struct vidtv_psi_table_transport, network_id) + 1502 + sizeof_field(struct vidtv_psi_table_transport, bitfield); 1503 + while (transport) { 1504 + /* write the transport sections, if any */ 1505 + psi_args.from = transport; 1506 + psi_args.dest_offset = args->offset + nbytes; 1507 + 1508 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 1509 + 1510 + transport_descriptor = transport->descriptor; 1511 + 1512 + while (transport_descriptor) { 1513 + /* write the transport descriptors, if any */ 1514 + d_args.dest_offset = args->offset + nbytes; 1515 + d_args.desc = transport_descriptor; 1516 + d_args.continuity_counter = args->continuity_counter; 1517 + d_args.crc = &crc; 1518 + 1519 + nbytes += 
vidtv_psi_desc_write_into(&d_args); 1520 + 1521 + transport_descriptor = transport_descriptor->next; 1522 + } 1523 + 1524 + transport = transport->next; 1525 + } 1526 + 1527 + c_args.dest_offset = args->offset + nbytes; 1528 + c_args.crc = cpu_to_be32(crc); 1529 + c_args.continuity_counter = args->continuity_counter; 1530 + 1531 + /* Write the CRC32 at the end */ 1532 + nbytes += table_section_crc32_write_into(&c_args); 1533 + 1534 + return nbytes; 1535 + } 1536 + 1537 + static void vidtv_psi_transport_destroy(struct vidtv_psi_table_transport *t) 1538 + { 1539 + struct vidtv_psi_table_transport *tmp_t = NULL; 1540 + struct vidtv_psi_table_transport *curr_t = t; 1541 + 1542 + while (curr_t) { 1543 + tmp_t = curr_t; 1544 + curr_t = curr_t->next; 1545 + vidtv_psi_desc_destroy(tmp_t->descriptor); 1546 + kfree(tmp_t); 1547 + } 1548 + } 1549 + 1550 + void vidtv_psi_nit_table_destroy(struct vidtv_psi_table_nit *nit) 1551 + { 1552 + vidtv_psi_desc_destroy(nit->descriptor); 1553 + vidtv_psi_transport_destroy(nit->transport); 1554 + kfree(nit); 1555 + } 1556 + 1557 + void vidtv_psi_eit_table_update_sec_len(struct vidtv_psi_table_eit *eit) 1558 + { 1559 + struct vidtv_psi_table_eit_event *e = eit->event; 1560 + u16 desc_loop_len; 1561 + u16 length = 0; 1562 + 1563 + /* 1564 + * from immediately after 'section_length' until 1565 + * 'last_table_id' 1566 + */ 1567 + length += EIT_LEN_UNTIL_LAST_TABLE_ID; 1568 + 1569 + while (e) { 1570 + /* skip both pointers at the end */ 1571 + length += sizeof(struct vidtv_psi_table_eit_event) - 1572 + sizeof(struct vidtv_psi_desc *) - 1573 + sizeof(struct vidtv_psi_table_eit_event *); 1574 + 1575 + desc_loop_len = vidtv_psi_desc_comp_loop_len(e->descriptor); 1576 + vidtv_psi_set_desc_loop_len(&e->bitfield, desc_loop_len, 12); 1577 + 1578 + length += desc_loop_len; 1579 + 1580 + e = e->next; 1581 + } 1582 + 1583 + length += CRC_SIZE_IN_BYTES; 1584 + 1585 + vidtv_psi_set_sec_len(&eit->header, length); 1586 + } 1587 + 1588 + void 
vidtv_psi_eit_event_assign(struct vidtv_psi_table_eit *eit, 1589 + struct vidtv_psi_table_eit_event *e) 1590 + { 1591 + do { 1592 + if (e == eit->event) 1593 + return; 1594 + 1595 + eit->event = e; 1596 + vidtv_psi_eit_table_update_sec_len(eit); 1597 + 1598 + e = NULL; 1599 + } while (vidtv_psi_get_sec_len(&eit->header) > EIT_MAX_SECTION_LEN); 1600 + 1601 + vidtv_psi_update_version_num(&eit->header); 1602 + } 1603 + 1604 + struct vidtv_psi_table_eit 1605 + *vidtv_psi_eit_table_init(u16 network_id, 1606 + u16 transport_stream_id, 1607 + __be16 service_id) 1608 + { 1609 + struct vidtv_psi_table_eit *eit; 1610 + const u16 SYNTAX = 0x1; 1611 + const u16 ONE = 0x1; 1612 + const u16 ONES = 0x03; 1613 + 1614 + eit = kzalloc(sizeof(*eit), GFP_KERNEL); 1615 + if (!eit) 1616 + return NULL; 1617 + 1618 + eit->header.table_id = 0x4e; //actual_transport_stream: present/following 1619 + 1620 + eit->header.bitfield = cpu_to_be16((SYNTAX << 15) | (ONE << 14) | (ONES << 12)); 1621 + 1622 + eit->header.id = service_id; 1623 + eit->header.current_next = ONE; 1624 + 1625 + eit->header.version = 0x1f; 1626 + 1627 + eit->header.one2 = ONES; 1628 + eit->header.section_id = 0; 1629 + eit->header.last_section = 0; 1630 + 1631 + eit->transport_id = cpu_to_be16(transport_stream_id); 1632 + eit->network_id = cpu_to_be16(network_id); 1633 + 1634 + eit->last_segment = eit->header.last_section; /* not implemented */ 1635 + eit->last_table_id = eit->header.table_id; /* not implemented */ 1636 + 1637 + vidtv_psi_eit_table_update_sec_len(eit); 1638 + 1639 + return eit; 1640 + } 1641 + 1642 + u32 vidtv_psi_eit_write_into(struct vidtv_psi_eit_write_args *args) 1643 + { 1644 + struct header_write_args h_args = { 1645 + .dest_buf = args->buf, 1646 + .dest_offset = args->offset, 1647 + .h = &args->eit->header, 1648 + .pid = VIDTV_EIT_PID, 1649 + .dest_buf_sz = args->buf_sz, 1650 + }; 1651 + struct psi_write_args psi_args = { 1652 + .dest_buf = args->buf, 1653 + .len = sizeof_field(struct 
vidtv_psi_table_eit, transport_id) + 1654 + sizeof_field(struct vidtv_psi_table_eit, network_id) + 1655 + sizeof_field(struct vidtv_psi_table_eit, last_segment) + 1656 + sizeof_field(struct vidtv_psi_table_eit, last_table_id), 1657 + .pid = VIDTV_EIT_PID, 1658 + .new_psi_section = false, 1659 + .is_crc = false, 1660 + .dest_buf_sz = args->buf_sz, 1661 + }; 1662 + struct desc_write_args d_args = { 1663 + .dest_buf = args->buf, 1664 + .pid = VIDTV_EIT_PID, 1665 + .dest_buf_sz = args->buf_sz, 1666 + }; 1667 + struct crc32_write_args c_args = { 1668 + .dest_buf = args->buf, 1669 + .pid = VIDTV_EIT_PID, 1670 + .dest_buf_sz = args->buf_sz, 1671 + }; 1672 + struct vidtv_psi_table_eit_event *event = args->eit->event; 1673 + struct vidtv_psi_desc *event_descriptor; 1674 + u32 crc = INITIAL_CRC; 1675 + u32 nbytes = 0; 1676 + 1677 + vidtv_psi_eit_table_update_sec_len(args->eit); 1678 + 1679 + h_args.continuity_counter = args->continuity_counter; 1680 + h_args.crc = &crc; 1681 + 1682 + nbytes += vidtv_psi_table_header_write_into(&h_args); 1683 + 1684 + psi_args.from = &args->eit->transport_id; 1685 + psi_args.dest_offset = args->offset + nbytes; 1686 + psi_args.continuity_counter = args->continuity_counter; 1687 + psi_args.crc = &crc; 1688 + 1689 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 1690 + 1691 + /* skip both pointers at the end */ 1692 + psi_args.len = sizeof(struct vidtv_psi_table_eit_event) - 1693 + sizeof(struct vidtv_psi_desc *) - 1694 + sizeof(struct vidtv_psi_table_eit_event *); 1695 + while (event) { 1696 + /* copy the events, if any */ 1697 + psi_args.from = event; 1698 + psi_args.dest_offset = args->offset + nbytes; 1699 + 1700 + nbytes += vidtv_psi_ts_psi_write_into(&psi_args); 1701 + 1702 + event_descriptor = event->descriptor; 1703 + 1704 + while (event_descriptor) { 1705 + /* copy the event descriptors, if any */ 1706 + d_args.dest_offset = args->offset + nbytes; 1707 + d_args.desc = event_descriptor; 1708 + d_args.continuity_counter = 
args->continuity_counter; 1709 + d_args.crc = &crc; 1710 + 1711 + nbytes += vidtv_psi_desc_write_into(&d_args); 1712 + 1713 + event_descriptor = event_descriptor->next; 1714 + } 1715 + 1716 + event = event->next; 1717 + } 1718 + 1719 + c_args.dest_offset = args->offset + nbytes; 1720 + c_args.crc = cpu_to_be32(crc); 1721 + c_args.continuity_counter = args->continuity_counter; 1722 + 1723 + /* Write the CRC at the end */ 1724 + nbytes += table_section_crc32_write_into(&c_args); 1725 + 1726 + return nbytes; 1727 + } 1728 + 1729 + struct vidtv_psi_table_eit_event 1730 + *vidtv_psi_eit_event_init(struct vidtv_psi_table_eit_event *head, u16 event_id) 1731 + { 1732 + const u8 DURATION[] = {0x23, 0x59, 0x59}; /* BCD encoded */ 1733 + struct vidtv_psi_table_eit_event *e; 1734 + struct timespec64 ts; 1735 + struct tm time; 1736 + int mjd, l; 1737 + __be16 mjd_be; 1738 + 1739 + e = kzalloc(sizeof(*e), GFP_KERNEL); 1740 + if (!e) 1741 + return NULL; 1742 + 1743 + e->event_id = cpu_to_be16(event_id); 1744 + 1745 + ts = ktime_to_timespec64(ktime_get_real()); 1746 + time64_to_tm(ts.tv_sec, 0, &time); 1747 + 1748 + /* Convert date to Modified Julian Date - per EN 300 468 Annex C */ 1749 + if (time.tm_mon < 2) 1750 + l = 1; 1751 + else 1752 + l = 0; 1753 + 1754 + mjd = 14956 + time.tm_mday; 1755 + mjd += (time.tm_year - l) * 36525 / 100; 1756 + mjd += (time.tm_mon + 2 + l * 12) * 306001 / 10000; 1757 + mjd_be = cpu_to_be16(mjd); 1758 + 1759 + /* 1760 + * Store MJD and hour/min/sec to the event. 1761 + * 1762 + * Let's make the event to start on a full hour 1763 + */ 1764 + memcpy(e->start_time, &mjd_be, sizeof(mjd_be)); 1765 + e->start_time[2] = bin2bcd(time.tm_hour); 1766 + e->start_time[3] = 0; 1767 + e->start_time[4] = 0; 1768 + 1769 + /* 1770 + * TODO: for now, the event will last for a day. Should be 1771 + * enough for testing purposes, but if one runs the driver 1772 + * for more than that, the current event will become invalid. 
1773 + * So, we need a better code here in order to change the start 1774 + * time once the event expires. 1775 + */ 1776 + memcpy(e->duration, DURATION, sizeof(e->duration)); 1777 + 1778 + e->bitfield = cpu_to_be16(RUNNING << 13); 1779 + 1780 + if (head) { 1781 + while (head->next) 1782 + head = head->next; 1783 + 1784 + head->next = e; 1785 + } 1786 + 1787 + return e; 1788 + } 1789 + 1790 + void vidtv_psi_eit_event_destroy(struct vidtv_psi_table_eit_event *e) 1791 + { 1792 + struct vidtv_psi_table_eit_event *tmp_e = NULL; 1793 + struct vidtv_psi_table_eit_event *curr_e = e; 1794 + 1795 + while (curr_e) { 1796 + tmp_e = curr_e; 1797 + curr_e = curr_e->next; 1798 + vidtv_psi_desc_destroy(tmp_e->descriptor); 1799 + kfree(tmp_e); 1800 + } 1801 + } 1802 + 1803 + void vidtv_psi_eit_table_destroy(struct vidtv_psi_table_eit *eit) 1804 + { 1805 + vidtv_psi_eit_event_destroy(eit->event); 1806 + kfree(eit); 1540 1807 }
+257 -25
drivers/media/test-drivers/vidtv/vidtv_psi.h
··· 6 6 * technically be broken into one or more sections, we do not do this here, 7 7 * hence 'table' and 'section' are interchangeable for vidtv. 8 8 * 9 - * This code currently supports three tables: PAT, PMT and SDT. These are the 10 - * bare minimum to get userspace to recognize our MPEG transport stream. It can 11 - * be extended to support more PSI tables in the future. 12 - * 13 9 * Copyright (C) 2020 Daniel W. S. Almeida 14 10 */ 15 11 ··· 13 17 #define VIDTV_PSI_H 14 18 15 19 #include <linux/types.h> 16 - #include <asm/byteorder.h> 17 20 18 21 /* 19 22 * all section lengths start immediately after the 'section_length' field ··· 22 27 #define PAT_LEN_UNTIL_LAST_SECTION_NUMBER 5 23 28 #define PMT_LEN_UNTIL_PROGRAM_INFO_LENGTH 9 24 29 #define SDT_LEN_UNTIL_RESERVED_FOR_FUTURE_USE 8 30 + #define NIT_LEN_UNTIL_NETWORK_DESCRIPTOR_LEN 7 31 + #define EIT_LEN_UNTIL_LAST_TABLE_ID 11 25 32 #define MAX_SECTION_LEN 1021 33 + #define EIT_MAX_SECTION_LEN 4093 /* see ETSI 300 468 v.1.10.1 p. 26 */ 26 34 #define VIDTV_PAT_PID 0 /* mandated by the specs */ 27 35 #define VIDTV_SDT_PID 0x0011 /* mandated by the specs */ 36 + #define VIDTV_NIT_PID 0x0010 /* mandated by the specs */ 37 + #define VIDTV_EIT_PID 0x0012 /*mandated by the specs */ 28 38 29 39 enum vidtv_psi_descriptors { 30 40 REGISTRATION_DESCRIPTOR = 0x05, /* See ISO/IEC 13818-1 section 2.6.8 */ 41 + NETWORK_NAME_DESCRIPTOR = 0x40, /* See ETSI EN 300 468 section 6.2.27 */ 42 + SERVICE_LIST_DESCRIPTOR = 0x41, /* See ETSI EN 300 468 section 6.2.35 */ 31 43 SERVICE_DESCRIPTOR = 0x48, /* See ETSI EN 300 468 section 6.2.33 */ 44 + SHORT_EVENT_DESCRIPTOR = 0x4d, /* See ETSI EN 300 468 section 6.2.37 */ 32 45 }; 33 46 34 47 enum vidtv_psi_stream_types { 35 48 STREAM_PRIVATE_DATA = 0x06, /* see ISO/IEC 13818-1 2000 p. 48 */ 36 49 }; 37 50 38 - /** 51 + /* 39 52 * struct vidtv_psi_desc - A generic PSI descriptor type. 
40 53 * The descriptor length is an 8-bit field specifying the total number of bytes of the data portion 41 54 * of the descriptor following the byte defining the value of this field. ··· 55 52 u8 data[]; 56 53 } __packed; 57 54 58 - /** 55 + /* 59 56 * struct vidtv_psi_desc_service - Service descriptor. 60 57 * See ETSI EN 300 468 section 6.2.33. 61 58 */ ··· 71 68 char *service_name; 72 69 } __packed; 73 70 74 - /** 71 + /* 75 72 * struct vidtv_psi_desc_registration - A registration descriptor. 76 73 * See ISO/IEC 13818-1 section 2.6.8 77 74 */ ··· 93 90 u8 additional_identification_info[]; 94 91 } __packed; 95 92 96 - /** 93 + /* 94 + * struct vidtv_psi_desc_network_name - A network name descriptor 95 + * see ETSI EN 300 468 v1.15.1 section 6.2.27 96 + */ 97 + struct vidtv_psi_desc_network_name { 98 + struct vidtv_psi_desc *next; 99 + u8 type; 100 + u8 length; 101 + char *network_name; 102 + } __packed; 103 + 104 + struct vidtv_psi_desc_service_list_entry { 105 + __be16 service_id; 106 + u8 service_type; 107 + struct vidtv_psi_desc_service_list_entry *next; 108 + } __packed; 109 + 110 + /* 111 + * struct vidtv_psi_desc_service_list - A service list descriptor 112 + * see ETSI EN 300 468 v1.15.1 section 6.2.35 113 + */ 114 + struct vidtv_psi_desc_service_list { 115 + struct vidtv_psi_desc *next; 116 + u8 type; 117 + u8 length; 118 + struct vidtv_psi_desc_service_list_entry *service_list; 119 + } __packed; 120 + 121 + /* 122 + * struct vidtv_psi_desc_short_event - A short event descriptor 123 + * see ETSI EN 300 468 v1.15.1 section 6.2.37 124 + */ 125 + struct vidtv_psi_desc_short_event { 126 + struct vidtv_psi_desc *next; 127 + u8 type; 128 + u8 length; 129 + char *iso_language_code; 130 + u8 event_name_len; 131 + char *event_name; 132 + u8 text_len; 133 + char *text; 134 + } __packed; 135 + 136 + struct vidtv_psi_desc_short_event 137 + *vidtv_psi_short_event_desc_init(struct vidtv_psi_desc *head, 138 + char *iso_language_code, 139 + char *event_name, 140 + char 
*text); 141 + 142 + /* 97 143 * struct vidtv_psi_table_header - A header that is present for all PSI tables. 98 144 */ 99 145 struct vidtv_psi_table_header { ··· 158 106 u8 last_section; /* last_section_number */ 159 107 } __packed; 160 108 161 - /** 109 + /* 162 110 * struct vidtv_psi_table_pat_program - A single program in the PAT 163 111 * See ISO/IEC 13818-1 : 2000 p.43 164 112 */ ··· 168 116 struct vidtv_psi_table_pat_program *next; 169 117 } __packed; 170 118 171 - /** 119 + /* 172 120 * struct vidtv_psi_table_pat - The Program Allocation Table (PAT) 173 121 * See ISO/IEC 13818-1 : 2000 p.43 174 122 */ 175 123 struct vidtv_psi_table_pat { 176 124 struct vidtv_psi_table_header header; 177 - u16 programs; /* Included by libdvbv5, not part of the table and not actually serialized */ 125 + u16 num_pat; 126 + u16 num_pmt; 178 127 struct vidtv_psi_table_pat_program *program; 179 128 } __packed; 180 129 181 - /** 130 + /* 182 131 * struct vidtv_psi_table_sdt_service - Represents a service in the SDT. 183 132 * see ETSI EN 300 468 v1.15.1 section 5.2.3. 184 133 */ ··· 193 140 struct vidtv_psi_table_sdt_service *next; 194 141 } __packed; 195 142 196 - /** 143 + /* 197 144 * struct vidtv_psi_table_sdt - Represents the Service Description Table 198 145 * see ETSI EN 300 468 v1.15.1 section 5.2.3. 199 146 */ ··· 205 152 struct vidtv_psi_table_sdt_service *service; 206 153 } __packed; 207 154 208 - /** 155 + /* 209 156 * enum service_running_status - Status of a SDT service. 210 157 * see ETSI EN 300 468 v1.15.1 section 5.2.3 table 6. 211 158 */ ··· 213 160 RUNNING = 0x4, 214 161 }; 215 162 216 - /** 163 + /* 217 164 * enum service_type - The type of a SDT service. 218 165 * see ETSI EN 300 468 v1.15.1 section 6.2.33, table 81. 219 166 */ 220 167 enum service_type { 221 168 /* see ETSI EN 300 468 v1.15.1 p. 
77 */ 222 169 DIGITAL_TELEVISION_SERVICE = 0x1, 170 + DIGITAL_RADIO_SOUND_SERVICE = 0X2, 223 171 }; 224 172 225 - /** 173 + /* 226 174 * struct vidtv_psi_table_pmt_stream - A single stream in the PMT. 227 175 * See ISO/IEC 13818-1 : 2000 p.46. 228 176 */ ··· 235 181 struct vidtv_psi_table_pmt_stream *next; 236 182 } __packed; 237 183 238 - /** 184 + /* 239 185 * struct vidtv_psi_table_pmt - The Program Map Table (PMT). 240 186 * See ISO/IEC 13818-1 : 2000 p.46. 241 187 */ ··· 344 290 u8 *additional_ident_info, 345 291 u32 additional_info_len); 346 292 293 + struct vidtv_psi_desc_network_name 294 + *vidtv_psi_network_name_desc_init(struct vidtv_psi_desc *head, char *network_name); 295 + 296 + struct vidtv_psi_desc_service_list 297 + *vidtv_psi_service_list_desc_init(struct vidtv_psi_desc *head, 298 + struct vidtv_psi_desc_service_list_entry *entry); 299 + 347 300 struct vidtv_psi_table_pat_program 348 301 *vidtv_psi_pat_program_init(struct vidtv_psi_table_pat_program *head, 349 302 u16 service_id, ··· 366 305 struct vidtv_psi_table_pmt *vidtv_psi_pmt_table_init(u16 program_number, 367 306 u16 pcr_pid); 368 307 369 - struct vidtv_psi_table_sdt *vidtv_psi_sdt_table_init(u16 transport_stream_id); 308 + struct vidtv_psi_table_sdt *vidtv_psi_sdt_table_init(u16 network_id, 309 + u16 transport_stream_id); 370 310 371 311 struct vidtv_psi_table_sdt_service* 372 312 vidtv_psi_sdt_service_init(struct vidtv_psi_table_sdt_service *head, 373 - u16 service_id); 313 + u16 service_id, 314 + bool eit_schedule, 315 + bool eit_present_following); 374 316 375 317 void 376 318 vidtv_psi_desc_destroy(struct vidtv_psi_desc *desc); ··· 477 413 * vidtv_psi_create_sec_for_each_pat_entry - Create a PMT section for each 478 414 * program found in the PAT 479 415 * @pat: The PAT to look for programs. 
480 - * @s: The stream loop (one or more streams) 481 416 * @pcr_pid: packet ID for the PCR to be used for the program described in this 482 417 * PMT section 483 418 */ ··· 555 492 * equal to the size of the PAT, since more space is needed for TS headers during TS 556 493 * encapsulation. 557 494 */ 558 - u32 vidtv_psi_pat_write_into(struct vidtv_psi_pat_write_args args); 495 + u32 vidtv_psi_pat_write_into(struct vidtv_psi_pat_write_args *args); 559 496 560 497 /** 561 498 * struct vidtv_psi_sdt_write_args - Arguments for writing a SDT table ··· 587 524 * equal to the size of the SDT, since more space is needed for TS headers during TS 588 525 * encapsulation. 589 526 */ 590 - u32 vidtv_psi_sdt_write_into(struct vidtv_psi_sdt_write_args args); 527 + u32 vidtv_psi_sdt_write_into(struct vidtv_psi_sdt_write_args *args); 591 528 592 529 /** 593 530 * struct vidtv_psi_pmt_write_args - Arguments for writing a PMT section 594 531 * @buf: The destination buffer. 595 532 * @offset: The offset into the destination buffer. 596 533 * @pmt: A pointer to the PMT. 534 + * @pid: Program ID 597 535 * @buf_sz: The size of the destination buffer. 598 536 * @continuity_counter: A pointer to the CC. Incremented on every new packet. 599 - * 537 + * @pcr_pid: The TS PID used for the PSI packets. All channels will share the 538 + * same PCR. 600 539 */ 601 540 struct vidtv_psi_pmt_write_args { 602 541 char *buf; ··· 622 557 * equal to the size of the PMT section, since more space is needed for TS headers 623 558 * during TS encapsulation. 
624 559 */ 625 - u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args args); 560 + u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args *args); 626 561 627 562 /** 628 563 * vidtv_psi_find_pmt_sec - Finds the PMT section for 'program_num' ··· 638 573 639 574 u16 vidtv_psi_get_pat_program_pid(struct vidtv_psi_table_pat_program *p); 640 575 u16 vidtv_psi_pmt_stream_get_elem_pid(struct vidtv_psi_table_pmt_stream *s); 576 + 577 + /** 578 + * struct vidtv_psi_table_transport - A entry in the TS loop for the NIT and/or other tables. 579 + * See ETSI 300 468 section 5.2.1 580 + * @transport_id: The TS ID being described 581 + * @network_id: The network_id that contains the TS ID 582 + * @bitfield: Contains the descriptor loop length 583 + * @descriptor: A descriptor loop 584 + * @next: Pointer to the next entry 585 + * 586 + */ 587 + struct vidtv_psi_table_transport { 588 + __be16 transport_id; 589 + __be16 network_id; 590 + __be16 bitfield; /* desc_len: 12, reserved: 4 */ 591 + struct vidtv_psi_desc *descriptor; 592 + struct vidtv_psi_table_transport *next; 593 + } __packed; 594 + 595 + /** 596 + * struct vidtv_psi_table_nit - A Network Information Table (NIT). 
See ETSI 300 597 + * 468 section 5.2.1 598 + * @header: A PSI table header 599 + * @bitfield: Contains the network descriptor length 600 + * @descriptor: A descriptor loop describing the network 601 + * @bitfield2: Contains the transport stream loop length 602 + * @transport: The transport stream loop 603 + * 604 + */ 605 + struct vidtv_psi_table_nit { 606 + struct vidtv_psi_table_header header; 607 + __be16 bitfield; /* network_desc_len: 12, reserved:4 */ 608 + struct vidtv_psi_desc *descriptor; 609 + __be16 bitfield2; /* ts_loop_len: 12, reserved: 4 */ 610 + struct vidtv_psi_table_transport *transport; 611 + } __packed; 612 + 613 + struct vidtv_psi_table_nit 614 + *vidtv_psi_nit_table_init(u16 network_id, 615 + u16 transport_stream_id, 616 + char *network_name, 617 + struct vidtv_psi_desc_service_list_entry *service_list); 618 + 619 + /** 620 + * struct vidtv_psi_nit_write_args - Arguments for writing a NIT section 621 + * @buf: The destination buffer. 622 + * @offset: The offset into the destination buffer. 623 + * @nit: A pointer to the NIT 624 + * @buf_sz: The size of the destination buffer. 625 + * @continuity_counter: A pointer to the CC. Incremented on every new packet. 626 + * 627 + */ 628 + struct vidtv_psi_nit_write_args { 629 + char *buf; 630 + u32 offset; 631 + struct vidtv_psi_table_nit *nit; 632 + u32 buf_sz; 633 + u8 *continuity_counter; 634 + }; 635 + 636 + /** 637 + * vidtv_psi_nit_write_into - Write NIT as MPEG-TS packets into a buffer. 638 + * @args: an instance of struct vidtv_psi_nit_write_args 639 + * 640 + * This function writes the MPEG TS packets for a NIT table into a buffer. 641 + * Calling code will usually generate the NIT via a call to its init function 642 + * and thus is responsible for freeing it. 643 + * 644 + * Return: The number of bytes written into the buffer. This is NOT 645 + * equal to the size of the NIT, since more space is needed for TS headers during TS 646 + * encapsulation. 
647 + */ 648 + u32 vidtv_psi_nit_write_into(struct vidtv_psi_nit_write_args *args); 649 + 650 + void vidtv_psi_nit_table_destroy(struct vidtv_psi_table_nit *nit); 651 + 652 + /* 653 + * struct vidtv_psi_table_eit_event - An EIT event entry, carrying a short event descriptor 654 + * see ETSI EN 300 468 v1.15.1 section 6.2.37 655 + */ 656 + struct vidtv_psi_table_eit_event { 657 + __be16 event_id; 658 + u8 start_time[5]; 659 + u8 duration[3]; 660 + __be16 bitfield; /* desc_length: 12, free_CA_mode: 1, running_status: 3 */ 661 + struct vidtv_psi_desc *descriptor; 662 + struct vidtv_psi_table_eit_event *next; 663 + } __packed; 664 + 665 + /* 666 + * struct vidtv_psi_table_eit - An Event Information Table (EIT) 667 + * See ETSI 300 468 section 5.2.4 668 + */ 669 + struct vidtv_psi_table_eit { 670 + struct vidtv_psi_table_header header; 671 + __be16 transport_id; 672 + __be16 network_id; 673 + u8 last_segment; 674 + u8 last_table_id; 675 + struct vidtv_psi_table_eit_event *event; 676 + } __packed; 677 + 678 + struct vidtv_psi_table_eit 679 + *vidtv_psi_eit_table_init(u16 network_id, 680 + u16 transport_stream_id, 681 + u16 service_id); 682 + 683 + /** 684 + * struct vidtv_psi_eit_write_args - Arguments for writing an EIT section 685 + * @buf: The destination buffer. 686 + * @offset: The offset into the destination buffer. 687 + * @eit: A pointer to the EIT 688 + * @buf_sz: The size of the destination buffer. 689 + * @continuity_counter: A pointer to the CC. Incremented on every new packet. 690 + * 691 + */ 692 + struct vidtv_psi_eit_write_args { 693 + char *buf; 694 + u32 offset; 695 + struct vidtv_psi_table_eit *eit; 696 + u32 buf_sz; 697 + u8 *continuity_counter; 698 + }; 699 + 700 + /** 701 + * vidtv_psi_eit_write_into - Write EIT as MPEG-TS packets into a buffer. 702 + * @args: an instance of struct vidtv_psi_eit_write_args 703 + * 704 + * This function writes the MPEG TS packets for an EIT table into a buffer. 
705 + * Calling code will usually generate the EIT via a call to its init function 706 + * and thus is responsible for freeing it. 707 + * 708 + * Return: The number of bytes written into the buffer. This is NOT 709 + * equal to the size of the EIT, since more space is needed for TS headers during TS 710 + * encapsulation. 711 + */ 712 + u32 vidtv_psi_eit_write_into(struct vidtv_psi_eit_write_args *args); 713 + 714 + void vidtv_psi_eit_table_destroy(struct vidtv_psi_table_eit *eit); 715 + 716 + /** 717 + * vidtv_psi_eit_table_update_sec_len - Recompute and update the EIT section length. 718 + * @eit: The EIT whose length is to be updated. 719 + * 720 + * This will traverse the table and accumulate the length of its components, 721 + * which is then used to replace the 'section_length' field. 722 + * 723 + * If section_length > EIT_MAX_SECTION_LEN, the operation fails. 724 + */ 725 + void vidtv_psi_eit_table_update_sec_len(struct vidtv_psi_table_eit *eit); 726 + 727 + /** 728 + * vidtv_psi_eit_event_assign - Assigns the event loop to the EIT. 729 + * @eit: The EIT to assign to. 730 + * @e: The event loop 731 + * 732 + * This will free the previous event loop in the table. 733 + * This will assign ownership of the event loop to the table, i.e. the table 734 + * will free this event loop when a call to its destroy function is made. 735 + */ 736 + void vidtv_psi_eit_event_assign(struct vidtv_psi_table_eit *eit, 737 + struct vidtv_psi_table_eit_event *e); 738 + 739 + struct vidtv_psi_table_eit_event 740 + *vidtv_psi_eit_event_init(struct vidtv_psi_table_eit_event *head, u16 event_id); 741 + 742 + void vidtv_psi_eit_event_destroy(struct vidtv_psi_table_eit_event *e); 641 743 642 744 #endif // VIDTV_PSI_H
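The NIT/EIT structs above pack a 12-bit length plus reserved bits into a 16-bit `bitfield` word. A minimal userspace sketch of that packing, assuming the ETSI EN 300 468 convention of reserved bits set to 1 (the helper names here are hypothetical, not kernel APIs):

```c
#include <assert.h>
#include <stdint.h>

/* Pack a 12-bit descriptor-loop length into the low bits of a 16-bit
 * word, with the 4 reserved bits set to 1 (hypothetical helpers; the
 * kernel builds these words in its own PSI code). */
static uint16_t psi_pack_desc_loop_len(uint16_t len)
{
	return (uint16_t)(0xF000u | (len & 0x0FFFu));
}

/* Recover the 12-bit length, discarding the reserved bits. */
static uint16_t psi_unpack_desc_loop_len(uint16_t bitfield)
{
	return (uint16_t)(bitfield & 0x0FFFu);
}
```

On the wire these words are big-endian (`__be16` in the structs), so the kernel additionally byte-swaps with `cpu_to_be16()`; the sketch deals only with the host-order bit layout.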
+68 -57
drivers/media/test-drivers/vidtv/vidtv_s302m.c
··· 17 17 18 18 #define pr_fmt(fmt) KBUILD_MODNAME ":%s, %d: " fmt, __func__, __LINE__ 19 19 20 - #include <linux/types.h> 21 - #include <linux/slab.h> 20 + #include <linux/bug.h> 22 21 #include <linux/crc32.h> 23 - #include <linux/vmalloc.h> 24 - #include <linux/string.h> 25 - #include <linux/kernel.h> 22 + #include <linux/fixp-arith.h> 26 23 #include <linux/jiffies.h> 24 + #include <linux/kernel.h> 25 + #include <linux/math64.h> 27 26 #include <linux/printk.h> 28 27 #include <linux/ratelimit.h> 29 - #include <linux/fixp-arith.h> 28 + #include <linux/slab.h> 29 + #include <linux/string.h> 30 + #include <linux/types.h> 31 + #include <linux/vmalloc.h> 30 32 31 - #include <linux/math64.h> 32 - #include <asm/byteorder.h> 33 - 34 - #include "vidtv_s302m.h" 35 - #include "vidtv_encoder.h" 36 33 #include "vidtv_common.h" 34 + #include "vidtv_encoder.h" 35 + #include "vidtv_s302m.h" 37 36 38 37 #define S302M_SAMPLING_RATE_HZ 48000 39 38 #define PES_PRIVATE_STREAM_1 0xbd /* PES: private_stream_1 */ ··· 78 79 int duration; 79 80 }; 80 81 81 - #define COMPASS 120 /* beats per minute (Allegro) */ 82 - static const struct tone_duration beethoven_5th_symphony[] = { 82 + #define COMPASS 100 /* beats per minute */ 83 + static const struct tone_duration beethoven_fur_elise[] = { 84 + { NOTE_SILENT, 512}, 83 85 { NOTE_E_6, 128}, { NOTE_DS_6, 128}, { NOTE_E_6, 128}, 84 86 { NOTE_DS_6, 128}, { NOTE_E_6, 128}, { NOTE_B_5, 128}, 85 87 { NOTE_D_6, 128}, { NOTE_C_6, 128}, { NOTE_A_3, 128}, ··· 121 121 { NOTE_E_5, 128}, { NOTE_D_5, 128}, { NOTE_A_3, 128}, 122 122 { NOTE_E_4, 128}, { NOTE_A_4, 128}, { NOTE_E_4, 128}, 123 123 { NOTE_D_5, 128}, { NOTE_C_5, 128}, { NOTE_E_3, 128}, 124 - { NOTE_E_4, 128}, { NOTE_E_5, 255}, { NOTE_E_6, 128}, 125 - { NOTE_E_5, 128}, { NOTE_E_6, 128}, { NOTE_E_5, 255}, 124 + { NOTE_E_4, 128}, { NOTE_E_5, 128}, { NOTE_E_5, 128}, 125 + { NOTE_E_6, 128}, { NOTE_E_5, 128}, { NOTE_E_6, 128}, 126 + { NOTE_E_5, 128}, { NOTE_E_5, 128}, { NOTE_DS_5, 128}, 127 + { 
NOTE_E_5, 128}, { NOTE_DS_6, 128}, { NOTE_E_6, 128}, 126 128 { NOTE_DS_5, 128}, { NOTE_E_5, 128}, { NOTE_DS_6, 128}, 127 - { NOTE_E_6, 128}, { NOTE_DS_5, 128}, { NOTE_E_5, 128}, 128 - { NOTE_DS_6, 128}, { NOTE_E_6, 128}, { NOTE_DS_6, 128}, 129 129 { NOTE_E_6, 128}, { NOTE_DS_6, 128}, { NOTE_E_6, 128}, 130 - { NOTE_B_5, 128}, { NOTE_D_6, 128}, { NOTE_C_6, 128}, 131 - { NOTE_A_3, 128}, { NOTE_E_4, 128}, { NOTE_A_4, 128}, 132 - { NOTE_C_5, 128}, { NOTE_E_5, 128}, { NOTE_A_5, 128}, 133 - { NOTE_E_3, 128}, { NOTE_E_4, 128}, { NOTE_GS_4, 128}, 134 - { NOTE_E_5, 128}, { NOTE_GS_5, 128}, { NOTE_B_5, 128}, 135 - { NOTE_A_3, 128}, { NOTE_E_4, 128}, { NOTE_A_4, 128}, 136 - { NOTE_E_5, 128}, { NOTE_E_6, 128}, { NOTE_DS_6, 128}, 130 + { NOTE_DS_6, 128}, { NOTE_E_6, 128}, { NOTE_B_5, 128}, 131 + { NOTE_D_6, 128}, { NOTE_C_6, 128}, { NOTE_A_3, 128}, 132 + { NOTE_E_4, 128}, { NOTE_A_4, 128}, { NOTE_C_5, 128}, 133 + { NOTE_E_5, 128}, { NOTE_A_5, 128}, { NOTE_E_3, 128}, 134 + { NOTE_E_4, 128}, { NOTE_GS_4, 128}, { NOTE_E_5, 128}, 135 + { NOTE_GS_5, 128}, { NOTE_B_5, 128}, { NOTE_A_3, 128}, 136 + { NOTE_E_4, 128}, { NOTE_A_4, 128}, { NOTE_E_5, 128}, 137 137 { NOTE_E_6, 128}, { NOTE_DS_6, 128}, { NOTE_E_6, 128}, 138 - { NOTE_B_5, 128}, { NOTE_D_6, 128}, { NOTE_C_6, 128}, 139 - { NOTE_A_3, 128}, { NOTE_E_4, 128}, { NOTE_A_4, 128}, 140 - { NOTE_C_5, 128}, { NOTE_E_5, 128}, { NOTE_A_5, 128}, 141 - { NOTE_E_3, 128}, { NOTE_E_4, 128}, { NOTE_GS_4, 128}, 142 - { NOTE_E_5, 128}, { NOTE_C_6, 128}, { NOTE_B_5, 128}, 143 - { NOTE_C_5, 255}, { NOTE_C_5, 255}, { NOTE_SILENT, 512}, 138 + { NOTE_DS_6, 128}, { NOTE_E_6, 128}, { NOTE_B_5, 128}, 139 + { NOTE_D_6, 128}, { NOTE_C_6, 128}, { NOTE_A_3, 128}, 140 + { NOTE_E_4, 128}, { NOTE_A_4, 128}, { NOTE_C_5, 128}, 141 + { NOTE_E_5, 128}, { NOTE_A_5, 128}, { NOTE_E_3, 128}, 142 + { NOTE_E_4, 128}, { NOTE_GS_4, 128}, { NOTE_E_5, 128}, 143 + { NOTE_C_6, 128}, { NOTE_B_5, 128}, { NOTE_A_5, 512}, 144 + { NOTE_SILENT, 256}, 144 145 }; 145 146 146 147 static 
struct vidtv_access_unit *vidtv_s302m_access_unit_init(struct vidtv_access_unit *head) 147 148 { 148 - struct vidtv_access_unit *au = kzalloc(sizeof(*au), GFP_KERNEL); 149 + struct vidtv_access_unit *au; 150 + 151 + au = kzalloc(sizeof(*au), GFP_KERNEL); 152 + if (!au) 153 + return NULL; 149 154 150 155 if (head) { 151 156 while (head->next) ··· 201 196 static void 202 197 vidtv_s302m_compute_sample_count_from_video(struct vidtv_encoder *e) 203 198 { 204 - struct vidtv_access_unit *au = e->access_units; 205 199 struct vidtv_access_unit *sync_au = e->sync->access_units; 206 - u32 vau_duration_usecs; 200 + struct vidtv_access_unit *au = e->access_units; 207 201 u32 sample_duration_usecs; 202 + u32 vau_duration_usecs; 208 203 u32 s; 209 204 210 205 vau_duration_usecs = USEC_PER_SEC / e->sync->sampling_rate_hz; ··· 235 230 { 236 231 u16 sample; 237 232 int pos; 233 + struct vidtv_s302m_ctx *ctx = e->ctx; 238 234 239 235 if (!e->src_buf) { 240 236 /* 241 237 * Simple tone generator: play the tones at the 242 - * beethoven_5th_symphony array. 238 + * beethoven_fur_elise array. 
243 239 */ 244 - if (e->last_duration <= 0) { 245 - if (e->src_buf_offset >= ARRAY_SIZE(beethoven_5th_symphony)) 240 + if (ctx->last_duration <= 0) { 241 + if (e->src_buf_offset >= ARRAY_SIZE(beethoven_fur_elise)) 246 242 e->src_buf_offset = 0; 247 243 248 - e->last_tone = beethoven_5th_symphony[e->src_buf_offset].note; 249 - e->last_duration = beethoven_5th_symphony[e->src_buf_offset].duration * S302M_SAMPLING_RATE_HZ / COMPASS / 5; 244 + ctx->last_tone = beethoven_fur_elise[e->src_buf_offset].note; 245 + ctx->last_duration = beethoven_fur_elise[e->src_buf_offset].duration * 246 + S302M_SAMPLING_RATE_HZ / COMPASS / 5; 250 247 e->src_buf_offset++; 251 - e->note_offset = 0; 248 + ctx->note_offset = 0; 252 249 } else { 253 - e->last_duration--; 250 + ctx->last_duration--; 254 251 } 255 252 256 - /* Handle silent */ 257 - if (!e->last_tone) { 258 - e->src_buf_offset = 0; 253 + /* Handle pause notes */ 254 + if (!ctx->last_tone) 259 255 return 0x8000; 260 - } 261 256 262 - pos = (2 * PI * e->note_offset * e->last_tone / S302M_SAMPLING_RATE_HZ); 263 - 264 - if (pos == 360) 265 - e->note_offset = 0; 266 - else 267 - e->note_offset++; 257 + pos = (2 * PI * ctx->note_offset * ctx->last_tone) / S302M_SAMPLING_RATE_HZ; 258 + ctx->note_offset++; 268 259 269 260 return (fixp_sin32(pos % (2 * PI)) >> 16) + 0x8000; 270 261 } ··· 290 289 static u32 vidtv_s302m_write_frame(struct vidtv_encoder *e, 291 290 u16 sample) 292 291 { 293 - u32 nbytes = 0; 294 - struct vidtv_s302m_frame_16 f = {}; 295 292 struct vidtv_s302m_ctx *ctx = e->ctx; 293 + struct vidtv_s302m_frame_16 f = {}; 294 + u32 nbytes = 0; 296 295 297 296 /* from ffmpeg: see s302enc.c */ 298 297 ··· 389 388 390 389 static void *vidtv_s302m_encode(struct vidtv_encoder *e) 391 390 { 391 + struct vidtv_s302m_ctx *ctx = e->ctx; 392 + 392 393 /* 393 394 * According to SMPTE 302M, an audio access unit is specified as those 394 395 * AES3 words that are associated with a corresponding video frame. 
··· 403 400 * is created with values for 'num_samples' and 'pts' taken empirically from 404 401 * ffmpeg 405 402 */ 406 - 407 - struct vidtv_s302m_ctx *ctx = e->ctx; 408 403 409 404 vidtv_s302m_access_unit_destroy(e); 410 405 vidtv_s302m_alloc_au(e); ··· 441 440 struct vidtv_encoder 442 441 *vidtv_s302m_encoder_init(struct vidtv_s302m_encoder_init_args args) 443 442 { 444 - struct vidtv_encoder *e = kzalloc(sizeof(*e), GFP_KERNEL); 445 443 u32 priv_sz = sizeof(struct vidtv_s302m_ctx); 444 + struct vidtv_s302m_ctx *ctx; 445 + struct vidtv_encoder *e; 446 + 447 + e = kzalloc(sizeof(*e), GFP_KERNEL); 448 + if (!e) 449 + return NULL; 446 450 447 451 e->id = S302M; 448 452 ··· 459 453 e->encoder_buf_offset = 0; 460 454 461 455 e->sample_count = 0; 462 - e->last_duration = 0; 463 456 464 457 e->src_buf = (args.src_buf) ? args.src_buf : NULL; 465 458 e->src_buf_sz = (args.src_buf) ? args.src_buf_sz : 0; 466 459 e->src_buf_offset = 0; 467 460 468 461 e->is_video_encoder = false; 469 - e->ctx = kzalloc(priv_sz, GFP_KERNEL); 462 + 463 + ctx = kzalloc(priv_sz, GFP_KERNEL); 464 + if (!ctx) 465 + return NULL; 466 + 467 + e->ctx = ctx; 468 + ctx->last_duration = 0; 470 469 471 470 e->encode = vidtv_s302m_encode; 472 471 e->clear = vidtv_s302m_clear;
+7 -2
drivers/media/test-drivers/vidtv/vidtv_s302m.h
··· 19 19 #define VIDTV_S302M_H 20 20 21 21 #include <linux/types.h> 22 - #include <asm/byteorder.h> 23 22 24 23 #include "vidtv_encoder.h" 25 24 ··· 33 34 * @enc: A pointer to the containing encoder structure. 34 35 * @frame_index: The current frame in a block 35 36 * @au_count: The total number of access units encoded up to now 37 + * @last_duration: Remaining duration, in samples, of the tone being played 38 + * @note_offset: Position in the music tone array 39 + * @last_tone: Tone currently being played 36 40 */ 37 41 struct vidtv_s302m_ctx { 38 42 struct vidtv_encoder *enc; 39 43 u32 frame_index; 40 44 u32 au_count; 45 + int last_duration; 46 + unsigned int note_offset; 47 + enum musical_notes last_tone; 41 48 }; 42 49 43 - /** 50 + /* 44 51 * struct vidtv_smpte_s302m_es - s302m MPEG Elementary Stream header. 45 52 * 46 53 * See SMPTE 302M 2007 table 1.
+2 -3
drivers/media/test-drivers/vidtv/vidtv_ts.c
··· 9 9 10 10 #define pr_fmt(fmt) KBUILD_MODNAME ":%s, %d: " fmt, __func__, __LINE__ 11 11 12 + #include <linux/math64.h> 12 13 #include <linux/printk.h> 13 14 #include <linux/ratelimit.h> 14 15 #include <linux/types.h> 15 - #include <linux/math64.h> 16 - #include <asm/byteorder.h> 17 16 18 - #include "vidtv_ts.h" 19 17 #include "vidtv_common.h" 18 + #include "vidtv_ts.h" 20 19 21 20 static u32 vidtv_ts_write_pcr_bits(u8 *to, u32 to_offset, u64 pcr) 22 21 {
+2 -3
drivers/media/test-drivers/vidtv/vidtv_ts.h
··· 11 11 #define VIDTV_TS_H 12 12 13 13 #include <linux/types.h> 14 - #include <asm/byteorder.h> 15 14 16 15 #define TS_SYNC_BYTE 0x47 17 16 #define TS_PACKET_LEN 188 ··· 53 54 * @dest_offset: The byte offset into the buffer. 54 55 * @pid: The TS PID for the PCR packets. 55 56 * @buf_sz: The size of the buffer in bytes. 56 - * @countinuity_counter: The TS continuity_counter. 57 + * @continuity_counter: The TS continuity_counter. 57 58 * @pcr: A sample from the system clock. 58 59 */ 59 60 struct pcr_write_args { ··· 70 71 * @dest_buf: The buffer to write into. 71 72 * @dest_offset: The byte offset into the buffer. 72 73 * @buf_sz: The size of the buffer in bytes. 73 - * @countinuity_counter: The TS continuity_counter. 74 + * @continuity_counter: The TS continuity_counter. 74 75 */ 75 76 struct null_packet_write_args { 76 77 void *dest_buf;
+4 -3
drivers/media/test-drivers/vidtv/vidtv_tuner.c
··· 13 13 #include <linux/errno.h> 14 14 #include <linux/i2c.h> 15 15 #include <linux/module.h> 16 - #include <linux/slab.h> 17 - #include <linux/types.h> 18 - #include <media/dvb_frontend.h> 19 16 #include <linux/printk.h> 20 17 #include <linux/ratelimit.h> 18 + #include <linux/slab.h> 19 + #include <linux/types.h> 20 + 21 + #include <media/dvb_frontend.h> 21 22 22 23 #include "vidtv_tuner.h" 23 24
+1
drivers/media/test-drivers/vidtv/vidtv_tuner.h
··· 11 11 #define VIDTV_TUNER_H 12 12 13 13 #include <linux/types.h> 14 + 14 15 #include <media/dvb_frontend.h> 15 16 16 17 #define NUM_VALID_TUNER_FREQS 8
+19 -32
drivers/mmc/host/sdhci-of-arasan.c
··· 30 30 #define SDHCI_ARASAN_VENDOR_REGISTER 0x78 31 31 32 32 #define SDHCI_ARASAN_ITAPDLY_REGISTER 0xF0F8 33 + #define SDHCI_ARASAN_ITAPDLY_SEL_MASK 0xFF 34 + 33 35 #define SDHCI_ARASAN_OTAPDLY_REGISTER 0xF0FC 36 + #define SDHCI_ARASAN_OTAPDLY_SEL_MASK 0x3F 34 37 35 38 #define SDHCI_ARASAN_CQE_BASE_ADDR 0x200 36 39 #define VENDOR_ENHANCED_STROBE BIT(0) ··· 603 600 u8 tap_delay, tap_max = 0; 604 601 int ret; 605 602 606 - /* 607 - * This is applicable for SDHCI_SPEC_300 and above 608 - * ZynqMP does not set phase for <=25MHz clock. 609 - * If degrees is zero, no need to do anything. 610 - */ 611 - if (host->version < SDHCI_SPEC_300 || 612 - host->timing == MMC_TIMING_LEGACY || 613 - host->timing == MMC_TIMING_UHS_SDR12 || !degrees) 603 + /* This is applicable for SDHCI_SPEC_300 and above */ 604 + if (host->version < SDHCI_SPEC_300) 614 605 return 0; 615 606 616 607 switch (host->timing) { ··· 634 637 ret = zynqmp_pm_set_sd_tapdelay(node_id, PM_TAPDELAY_OUTPUT, tap_delay); 635 638 if (ret) 636 639 pr_err("Error setting Output Tap Delay\n"); 640 + 641 + /* Release DLL Reset */ 642 + zynqmp_pm_sd_dll_reset(node_id, PM_DLL_RESET_RELEASE); 637 643 638 644 return ret; 639 645 } ··· 668 668 u8 tap_delay, tap_max = 0; 669 669 int ret; 670 670 671 - /* 672 - * This is applicable for SDHCI_SPEC_300 and above 673 - * ZynqMP does not set phase for <=25MHz clock. 674 - * If degrees is zero, no need to do anything. 
675 - */ 676 - if (host->version < SDHCI_SPEC_300 || 677 - host->timing == MMC_TIMING_LEGACY || 678 - host->timing == MMC_TIMING_UHS_SDR12 || !degrees) 671 + /* This is applicable for SDHCI_SPEC_300 and above */ 672 + if (host->version < SDHCI_SPEC_300) 679 673 return 0; 674 + 675 + /* Assert DLL Reset */ 676 + zynqmp_pm_sd_dll_reset(node_id, PM_DLL_RESET_ASSERT); 680 677 681 678 switch (host->timing) { 682 679 case MMC_TIMING_MMC_HS: ··· 730 733 struct sdhci_host *host = sdhci_arasan->host; 731 734 u8 tap_delay, tap_max = 0; 732 735 733 - /* 734 - * This is applicable for SDHCI_SPEC_300 and above 735 - * Versal does not set phase for <=25MHz clock. 736 - * If degrees is zero, no need to do anything. 737 - */ 738 - if (host->version < SDHCI_SPEC_300 || 739 - host->timing == MMC_TIMING_LEGACY || 740 - host->timing == MMC_TIMING_UHS_SDR12 || !degrees) 736 + /* This is applicable for SDHCI_SPEC_300 and above */ 737 + if (host->version < SDHCI_SPEC_300) 741 738 return 0; 742 739 743 740 switch (host->timing) { ··· 764 773 regval = sdhci_readl(host, SDHCI_ARASAN_OTAPDLY_REGISTER); 765 774 regval |= SDHCI_OTAPDLY_ENABLE; 766 775 sdhci_writel(host, regval, SDHCI_ARASAN_OTAPDLY_REGISTER); 776 + regval &= ~SDHCI_ARASAN_OTAPDLY_SEL_MASK; 767 777 regval |= tap_delay; 768 778 sdhci_writel(host, regval, SDHCI_ARASAN_OTAPDLY_REGISTER); 769 779 } ··· 796 804 struct sdhci_host *host = sdhci_arasan->host; 797 805 u8 tap_delay, tap_max = 0; 798 806 799 - /* 800 - * This is applicable for SDHCI_SPEC_300 and above 801 - * Versal does not set phase for <=25MHz clock. 802 - * If degrees is zero, no need to do anything. 
803 - */ 804 - if (host->version < SDHCI_SPEC_300 || 805 - host->timing == MMC_TIMING_LEGACY || 806 - host->timing == MMC_TIMING_UHS_SDR12 || !degrees) 807 + /* This is applicable for SDHCI_SPEC_300 and above */ 808 + if (host->version < SDHCI_SPEC_300) 807 809 return 0; 808 810 809 811 switch (host->timing) { ··· 832 846 sdhci_writel(host, regval, SDHCI_ARASAN_ITAPDLY_REGISTER); 833 847 regval |= SDHCI_ITAPDLY_ENABLE; 834 848 sdhci_writel(host, regval, SDHCI_ARASAN_ITAPDLY_REGISTER); 849 + regval &= ~SDHCI_ARASAN_ITAPDLY_SEL_MASK; 835 850 regval |= tap_delay; 836 851 sdhci_writel(host, regval, SDHCI_ARASAN_ITAPDLY_REGISTER); 837 852 regval &= ~SDHCI_ITAPDLY_CHGWIN;
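The `regval &= ~SDHCI_ARASAN_..._SEL_MASK` lines added above are a classic read-modify-write fix: without clearing the select field first, a second call would OR the new tap delay over the stale one. A minimal sketch of the pattern, using the 6-bit OTAPDLY mask from the diff but a placeholder enable-bit position (not the real `SDHCI_OTAPDLY_ENABLE` value):

```c
#include <assert.h>
#include <stdint.h>

#define OTAPDLY_SEL_MASK 0x3Fu     /* 6-bit select field, from the diff */
#define OTAPDLY_ENABLE   (1u << 6) /* placeholder bit position */

/* Update the tap-delay select field without disturbing other bits.
 * The "&= ~mask" step is the fix: it drops the previous selection
 * before OR-ing in the new one. */
static uint32_t set_otap_delay(uint32_t regval, uint32_t tap_delay)
{
	regval |= OTAPDLY_ENABLE;
	regval &= ~OTAPDLY_SEL_MASK;
	regval |= tap_delay & OTAPDLY_SEL_MASK;
	return regval;
}
```

With the old code, programming tap delay 0x3F and then 0x01 would leave the field reading 0x3F (0x3F | 0x01); with the clear in place, the second write lands cleanly.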
+11 -2
drivers/mmc/host/sdhci-pci-core.c
··· 665 665 } 666 666 } 667 667 668 + static void sdhci_intel_set_uhs_signaling(struct sdhci_host *host, 669 + unsigned int timing) 670 + { 671 + /* Set UHS timing to SDR25 for High Speed mode */ 672 + if (timing == MMC_TIMING_MMC_HS || timing == MMC_TIMING_SD_HS) 673 + timing = MMC_TIMING_UHS_SDR25; 674 + sdhci_set_uhs_signaling(host, timing); 675 + } 676 + 668 677 #define INTEL_HS400_ES_REG 0x78 669 678 #define INTEL_HS400_ES_BIT BIT(0) 670 679 ··· 730 721 .enable_dma = sdhci_pci_enable_dma, 731 722 .set_bus_width = sdhci_set_bus_width, 732 723 .reset = sdhci_reset, 733 - .set_uhs_signaling = sdhci_set_uhs_signaling, 724 + .set_uhs_signaling = sdhci_intel_set_uhs_signaling, 734 725 .hw_reset = sdhci_pci_hw_reset, 735 726 }; 736 727 ··· 740 731 .enable_dma = sdhci_pci_enable_dma, 741 732 .set_bus_width = sdhci_set_bus_width, 742 733 .reset = sdhci_cqhci_reset, 743 - .set_uhs_signaling = sdhci_set_uhs_signaling, 734 + .set_uhs_signaling = sdhci_intel_set_uhs_signaling, 744 735 .hw_reset = sdhci_pci_hw_reset, 745 736 .irq = sdhci_cqhci_irq, 746 737 };
+9 -3
drivers/mtd/nand/raw/ams-delta.c
··· 215 215 return 0; 216 216 } 217 217 218 + static int gpio_nand_attach_chip(struct nand_chip *chip) 219 + { 220 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 221 + chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 222 + 223 + return 0; 224 + } 225 + 218 226 static const struct nand_controller_ops gpio_nand_ops = { 219 227 .exec_op = gpio_nand_exec_op, 228 + .attach_chip = gpio_nand_attach_chip, 220 229 .setup_interface = gpio_nand_setup_interface, 221 230 }; 222 231 ··· 268 259 dev_warn(&pdev->dev, "RDY GPIO request failed (%d)\n", err); 269 260 return err; 270 261 } 271 - 272 - this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 273 - this->ecc.algo = NAND_ECC_ALGO_HAMMING; 274 262 275 263 platform_set_drvdata(pdev, priv); 276 264
+9 -2
drivers/mtd/nand/raw/au1550nd.c
··· 236 236 return ret; 237 237 } 238 238 239 + static int au1550nd_attach_chip(struct nand_chip *chip) 240 + { 241 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 242 + chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 243 + 244 + return 0; 245 + } 246 + 239 247 static const struct nand_controller_ops au1550nd_ops = { 240 248 .exec_op = au1550nd_exec_op, 249 + .attach_chip = au1550nd_attach_chip, 241 250 }; 242 251 243 252 static int au1550nd_probe(struct platform_device *pdev) ··· 303 294 nand_controller_init(&ctx->controller); 304 295 ctx->controller.ops = &au1550nd_ops; 305 296 this->controller = &ctx->controller; 306 - this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 307 - this->ecc.algo = NAND_ECC_ALGO_HAMMING; 308 297 309 298 if (pd->devwidth) 310 299 this->options |= NAND_BUSWIDTH_16;
+16 -8
drivers/mtd/nand/raw/cs553x_nand.c
··· 243 243 244 244 static struct cs553x_nand_controller *controllers[4]; 245 245 246 + static int cs553x_attach_chip(struct nand_chip *chip) 247 + { 248 + if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) 249 + return 0; 250 + 251 + chip->ecc.size = 256; 252 + chip->ecc.bytes = 3; 253 + chip->ecc.hwctl = cs_enable_hwecc; 254 + chip->ecc.calculate = cs_calculate_ecc; 255 + chip->ecc.correct = nand_correct_data; 256 + chip->ecc.strength = 1; 257 + 258 + return 0; 259 + } 260 + 246 261 static const struct nand_controller_ops cs553x_nand_controller_ops = { 247 262 .exec_op = cs553x_exec_op, 263 + .attach_chip = cs553x_attach_chip, 248 264 }; 249 265 250 266 static int __init cs553x_init_one(int cs, int mmio, unsigned long adr) ··· 301 285 err = -EIO; 302 286 goto out_mtd; 303 287 } 304 - 305 - this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 306 - this->ecc.size = 256; 307 - this->ecc.bytes = 3; 308 - this->ecc.hwctl = cs_enable_hwecc; 309 - this->ecc.calculate = cs_calculate_ecc; 310 - this->ecc.correct = nand_correct_data; 311 - this->ecc.strength = 1; 312 288 313 289 /* Enable the following for a flash based bad block table */ 314 290 this->bbt_options = NAND_BBT_USE_FLASH;
+4 -4
drivers/mtd/nand/raw/davinci_nand.c
··· 585 585 if (IS_ERR(pdata)) 586 586 return PTR_ERR(pdata); 587 587 588 + /* Use board-specific ECC config */ 589 + info->chip.ecc.engine_type = pdata->engine_type; 590 + info->chip.ecc.placement = pdata->ecc_placement; 591 + 588 592 switch (info->chip.ecc.engine_type) { 589 593 case NAND_ECC_ENGINE_TYPE_NONE: 590 594 pdata->ecc_bits = 0; ··· 853 849 /* use nandboot-capable ALE/CLE masks by default */ 854 850 info->mask_ale = pdata->mask_ale ? : MASK_ALE; 855 851 info->mask_cle = pdata->mask_cle ? : MASK_CLE; 856 - 857 - /* Use board-specific ECC config */ 858 - info->chip.ecc.engine_type = pdata->engine_type; 859 - info->chip.ecc.placement = pdata->ecc_placement; 860 852 861 853 spin_lock_irq(&davinci_nand_lock); 862 854
+19 -10
drivers/mtd/nand/raw/diskonchip.c
··· 1269 1269 return 1; 1270 1270 } 1271 1271 1272 + static int doc200x_attach_chip(struct nand_chip *chip) 1273 + { 1274 + if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) 1275 + return 0; 1276 + 1277 + chip->ecc.placement = NAND_ECC_PLACEMENT_INTERLEAVED; 1278 + chip->ecc.size = 512; 1279 + chip->ecc.bytes = 6; 1280 + chip->ecc.strength = 2; 1281 + chip->ecc.options = NAND_ECC_GENERIC_ERASED_CHECK; 1282 + chip->ecc.hwctl = doc200x_enable_hwecc; 1283 + chip->ecc.calculate = doc200x_calculate_ecc; 1284 + chip->ecc.correct = doc200x_correct_data; 1285 + 1286 + return 0; 1287 + } 1288 + 1272 1289 static const struct nand_controller_ops doc200x_ops = { 1273 1290 .exec_op = doc200x_exec_op, 1291 + .attach_chip = doc200x_attach_chip, 1274 1292 }; 1275 1293 1276 1294 static const struct nand_controller_ops doc2001plus_ops = { 1277 1295 .exec_op = doc2001plus_exec_op, 1296 + .attach_chip = doc200x_attach_chip, 1278 1297 }; 1279 1298 1280 1299 static int __init doc_probe(unsigned long physadr) ··· 1471 1452 1472 1453 nand->controller = &doc->base; 1473 1454 nand_set_controller_data(nand, doc); 1474 - nand->ecc.hwctl = doc200x_enable_hwecc; 1475 - nand->ecc.calculate = doc200x_calculate_ecc; 1476 - nand->ecc.correct = doc200x_correct_data; 1477 - 1478 - nand->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 1479 - nand->ecc.placement = NAND_ECC_PLACEMENT_INTERLEAVED; 1480 - nand->ecc.size = 512; 1481 - nand->ecc.bytes = 6; 1482 - nand->ecc.strength = 2; 1483 - nand->ecc.options = NAND_ECC_GENERIC_ERASED_CHECK; 1484 1455 nand->bbt_options = NAND_BBT_USE_FLASH; 1485 1456 /* Skip the automatic BBT scan so we can run it manually */ 1486 1457 nand->options |= NAND_SKIP_BBTSCAN | NAND_NO_BBM_QUIRK;
+15 -15
drivers/mtd/nand/raw/fsmc_nand.c
··· 880 880 struct mtd_info *mtd = nand_to_mtd(nand); 881 881 struct fsmc_nand_data *host = nand_to_fsmc(nand); 882 882 883 + if (nand->ecc.engine_type == NAND_ECC_ENGINE_TYPE_INVALID) 884 + nand->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 885 + 886 + if (!nand->ecc.size) 887 + nand->ecc.size = 512; 888 + 889 + if (AMBA_REV_BITS(host->pid) >= 8) { 890 + nand->ecc.read_page = fsmc_read_page_hwecc; 891 + nand->ecc.calculate = fsmc_read_hwecc_ecc4; 892 + nand->ecc.correct = fsmc_bch8_correct_data; 893 + nand->ecc.bytes = 13; 894 + nand->ecc.strength = 8; 895 + } 896 + 883 897 if (AMBA_REV_BITS(host->pid) >= 8) { 884 898 switch (mtd->oobsize) { 885 899 case 16: ··· 919 905 dev_info(host->dev, "Using 1-bit HW ECC scheme\n"); 920 906 nand->ecc.calculate = fsmc_read_hwecc_ecc1; 921 907 nand->ecc.correct = nand_correct_data; 908 + nand->ecc.hwctl = fsmc_enable_hwecc; 922 909 nand->ecc.bytes = 3; 923 910 nand->ecc.strength = 1; 924 911 nand->ecc.options |= NAND_ECC_SOFT_HAMMING_SM_ORDER; ··· 1070 1055 1071 1056 mtd->dev.parent = &pdev->dev; 1072 1057 1073 - /* 1074 - * Setup default ECC mode. nand_dt_init() called from nand_scan_ident() 1075 - * can overwrite this value if the DT provides a different value. 1076 - */ 1077 - nand->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 1078 - nand->ecc.hwctl = fsmc_enable_hwecc; 1079 - nand->ecc.size = 512; 1080 1058 nand->badblockbits = 7; 1081 1059 1082 1060 if (host->mode == USE_DMA_ACCESS) { ··· 1090 1082 if (host->dev_timings) { 1091 1083 fsmc_nand_setup(host, host->dev_timings); 1092 1084 nand->options |= NAND_KEEP_TIMINGS; 1093 - } 1094 - 1095 - if (AMBA_REV_BITS(host->pid) >= 8) { 1096 - nand->ecc.read_page = fsmc_read_page_hwecc; 1097 - nand->ecc.calculate = fsmc_read_hwecc_ecc4; 1098 - nand->ecc.correct = fsmc_bch8_correct_data; 1099 - nand->ecc.bytes = 13; 1100 - nand->ecc.strength = 8; 1101 1085 } 1102 1086 1103 1087 nand_controller_init(&host->base);
+9 -2
drivers/mtd/nand/raw/gpio.c
··· 161 161 return ret; 162 162 } 163 163 164 + static int gpio_nand_attach_chip(struct nand_chip *chip) 165 + { 166 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 167 + chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 168 + 169 + return 0; 170 + } 171 + 164 172 static const struct nand_controller_ops gpio_nand_ops = { 165 173 .exec_op = gpio_nand_exec_op, 174 + .attach_chip = gpio_nand_attach_chip, 166 175 }; 167 176 168 177 #ifdef CONFIG_OF ··· 351 342 gpiomtd->base.ops = &gpio_nand_ops; 352 343 353 344 nand_set_flash_node(chip, pdev->dev.of_node); 354 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 355 - chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 356 345 chip->options = gpiomtd->plat.options; 357 346 chip->controller = &gpiomtd->base; 358 347
+13 -10
drivers/mtd/nand/raw/lpc32xx_mlc.c
··· 648 648 struct lpc32xx_nand_host *host = nand_get_controller_data(chip); 649 649 struct device *dev = &host->pdev->dev; 650 650 651 + if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) 652 + return 0; 653 + 651 654 host->dma_buf = devm_kzalloc(dev, mtd->writesize, GFP_KERNEL); 652 655 if (!host->dma_buf) 653 656 return -ENOMEM; ··· 659 656 if (!host->dummy_buf) 660 657 return -ENOMEM; 661 658 662 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 663 659 chip->ecc.size = 512; 660 + chip->ecc.hwctl = lpc32xx_ecc_enable; 661 + chip->ecc.read_page_raw = lpc32xx_read_page; 662 + chip->ecc.read_page = lpc32xx_read_page; 663 + chip->ecc.write_page_raw = lpc32xx_write_page_lowlevel; 664 + chip->ecc.write_page = lpc32xx_write_page_lowlevel; 665 + chip->ecc.write_oob = lpc32xx_write_oob; 666 + chip->ecc.read_oob = lpc32xx_read_oob; 667 + chip->ecc.strength = 4; 668 + chip->ecc.bytes = 10; 669 + 664 670 mtd_set_ooblayout(mtd, &lpc32xx_ooblayout_ops); 665 671 host->mlcsubpages = mtd->writesize / 512; 666 672 ··· 753 741 platform_set_drvdata(pdev, host); 754 742 755 743 /* Initialize function pointers */ 756 - nand_chip->ecc.hwctl = lpc32xx_ecc_enable; 757 - nand_chip->ecc.read_page_raw = lpc32xx_read_page; 758 - nand_chip->ecc.read_page = lpc32xx_read_page; 759 - nand_chip->ecc.write_page_raw = lpc32xx_write_page_lowlevel; 760 - nand_chip->ecc.write_page = lpc32xx_write_page_lowlevel; 761 - nand_chip->ecc.write_oob = lpc32xx_write_oob; 762 - nand_chip->ecc.read_oob = lpc32xx_read_oob; 763 - nand_chip->ecc.strength = 4; 764 - nand_chip->ecc.bytes = 10; 765 744 nand_chip->legacy.waitfunc = lpc32xx_waitfunc; 766 745 767 746 nand_chip->options = NAND_NO_SUBPAGE_WRITE;
+14 -12
drivers/mtd/nand/raw/lpc32xx_slc.c
··· 775 775 struct mtd_info *mtd = nand_to_mtd(chip); 776 776 struct lpc32xx_nand_host *host = nand_get_controller_data(chip); 777 777 778 + if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) 779 + return 0; 780 + 778 781 /* OOB and ECC CPU and DMA work areas */ 779 782 host->ecc_buf = (uint32_t *)(host->data_buf + LPC32XX_DMA_DATA_SIZE); 780 783 ··· 789 786 if (mtd->writesize <= 512) 790 787 mtd_set_ooblayout(mtd, &lpc32xx_ooblayout_ops); 791 788 789 + chip->ecc.placement = NAND_ECC_PLACEMENT_INTERLEAVED; 792 790 /* These sizes remain the same regardless of page size */ 793 791 chip->ecc.size = 256; 792 + chip->ecc.strength = 1; 794 793 chip->ecc.bytes = LPC32XX_SLC_DEV_ECC_BYTES; 795 794 chip->ecc.prepad = 0; 796 795 chip->ecc.postpad = 0; 796 + chip->ecc.read_page_raw = lpc32xx_nand_read_page_raw_syndrome; 797 + chip->ecc.read_page = lpc32xx_nand_read_page_syndrome; 798 + chip->ecc.write_page_raw = lpc32xx_nand_write_page_raw_syndrome; 799 + chip->ecc.write_page = lpc32xx_nand_write_page_syndrome; 800 + chip->ecc.write_oob = lpc32xx_nand_write_oob_syndrome; 801 + chip->ecc.read_oob = lpc32xx_nand_read_oob_syndrome; 802 + chip->ecc.calculate = lpc32xx_nand_ecc_calculate; 803 + chip->ecc.correct = nand_correct_data; 804 + chip->ecc.hwctl = lpc32xx_nand_ecc_enable; 797 805 798 806 /* 799 807 * Use a custom BBT marker setup for small page FLASH that ··· 895 881 platform_set_drvdata(pdev, host); 896 882 897 883 /* NAND callbacks for LPC32xx SLC hardware */ 898 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 899 - chip->ecc.placement = NAND_ECC_PLACEMENT_INTERLEAVED; 900 884 chip->legacy.read_byte = lpc32xx_nand_read_byte; 901 885 chip->legacy.read_buf = lpc32xx_nand_read_buf; 902 886 chip->legacy.write_buf = lpc32xx_nand_write_buf; 903 - chip->ecc.read_page_raw = lpc32xx_nand_read_page_raw_syndrome; 904 - chip->ecc.read_page = lpc32xx_nand_read_page_syndrome; 905 - chip->ecc.write_page_raw = lpc32xx_nand_write_page_raw_syndrome; 906 - 
chip->ecc.write_page = lpc32xx_nand_write_page_syndrome; 907 - chip->ecc.write_oob = lpc32xx_nand_write_oob_syndrome; 908 - chip->ecc.read_oob = lpc32xx_nand_read_oob_syndrome; 909 - chip->ecc.calculate = lpc32xx_nand_ecc_calculate; 910 - chip->ecc.correct = nand_correct_data; 911 - chip->ecc.strength = 1; 912 - chip->ecc.hwctl = lpc32xx_nand_ecc_enable; 913 887 914 888 /* 915 889 * Allocate a large enough buffer for a single huge page plus
+17 -2
drivers/mtd/nand/raw/mpc5121_nfc.c
··· 104 104 #define NFC_TIMEOUT (HZ / 10) /* 1/10 s */ 105 105 106 106 struct mpc5121_nfc_prv { 107 + struct nand_controller controller; 107 108 struct nand_chip chip; 108 109 int irq; 109 110 void __iomem *regs; ··· 603 602 iounmap(prv->csreg); 604 603 } 605 604 605 + static int mpc5121_nfc_attach_chip(struct nand_chip *chip) 606 + { 607 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 608 + chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 609 + 610 + return 0; 611 + } 612 + 613 + static const struct nand_controller_ops mpc5121_nfc_ops = { 614 + .attach_chip = mpc5121_nfc_attach_chip, 615 + }; 616 + 606 617 static int mpc5121_nfc_probe(struct platform_device *op) 607 618 { 608 619 struct device_node *dn = op->dev.of_node; ··· 646 633 647 634 chip = &prv->chip; 648 635 mtd = nand_to_mtd(chip); 636 + 637 + nand_controller_init(&prv->controller); 638 + prv->controller.ops = &mpc5121_nfc_ops; 639 + chip->controller = &prv->controller; 649 640 650 641 mtd->dev.parent = dev; 651 642 nand_set_controller_data(chip, prv); ··· 705 688 chip->legacy.set_features = nand_get_set_features_notsupp; 706 689 chip->legacy.get_features = nand_get_set_features_notsupp; 707 690 chip->bbt_options = NAND_BBT_USE_FLASH; 708 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 709 - chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 710 691 711 692 /* Support external chip-select logic on ADS5121 board */ 712 693 if (of_machine_is_compatible("fsl,mpc5121ads")) {
+17 -2
drivers/mtd/nand/raw/orion_nand.c
··· 22 22 #include <linux/platform_data/mtd-orion_nand.h> 23 23 24 24 struct orion_nand_info { 25 + struct nand_controller controller; 25 26 struct nand_chip chip; 26 27 struct clk *clk; 27 28 }; ··· 83 82 buf[i++] = readb(io_base); 84 83 } 85 84 85 + static int orion_nand_attach_chip(struct nand_chip *chip) 86 + { 87 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 88 + chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 89 + 90 + return 0; 91 + } 92 + 93 + static const struct nand_controller_ops orion_nand_ops = { 94 + .attach_chip = orion_nand_attach_chip, 95 + }; 96 + 86 97 static int __init orion_nand_probe(struct platform_device *pdev) 87 98 { 88 99 struct orion_nand_info *info; ··· 113 100 return -ENOMEM; 114 101 nc = &info->chip; 115 102 mtd = nand_to_mtd(nc); 103 + 104 + nand_controller_init(&info->controller); 105 + info->controller.ops = &orion_nand_ops; 106 + nc->controller = &info->controller; 116 107 117 108 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 118 109 io_base = devm_ioremap_resource(&pdev->dev, res); ··· 156 139 nc->legacy.IO_ADDR_R = nc->legacy.IO_ADDR_W = io_base; 157 140 nc->legacy.cmd_ctrl = orion_nand_cmd_ctrl; 158 141 nc->legacy.read_buf = orion_nand_read_buf; 159 - nc->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 160 - nc->ecc.algo = NAND_ECC_ALGO_HAMMING; 161 142 162 143 if (board->chip_delay) 163 144 nc->legacy.chip_delay = board->chip_delay;
+17 -2
drivers/mtd/nand/raw/pasemi_nand.c
··· 29 29 30 30 static unsigned int lpcctl; 31 31 static struct mtd_info *pasemi_nand_mtd; 32 + static struct nand_controller controller; 32 33 static const char driver_name[] = "pasemi-nand"; 33 34 34 35 static void pasemi_read_buf(struct nand_chip *chip, u_char *buf, int len) ··· 74 73 return !!(inl(lpcctl) & LBICTRL_LPCCTL_NR); 75 74 } 76 75 76 + static int pasemi_attach_chip(struct nand_chip *chip) 77 + { 78 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 79 + chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 80 + 81 + return 0; 82 + } 83 + 84 + static const struct nand_controller_ops pasemi_ops = { 85 + .attach_chip = pasemi_attach_chip, 86 + }; 87 + 77 88 static int pasemi_nand_probe(struct platform_device *ofdev) 78 89 { 79 90 struct device *dev = &ofdev->dev; ··· 112 99 err = -ENOMEM; 113 100 goto out; 114 101 } 102 + 103 + controller.ops = &pasemi_ops; 104 + nand_controller_init(&controller); 105 + chip->controller = &controller; 115 106 116 107 pasemi_nand_mtd = nand_to_mtd(chip); 117 108 ··· 149 132 chip->legacy.read_buf = pasemi_read_buf; 150 133 chip->legacy.write_buf = pasemi_write_buf; 151 134 chip->legacy.chip_delay = 0; 152 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 153 - chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 154 135 155 136 /* Enable the following for a flash based bad block table */ 156 137 chip->bbt_options = NAND_BBT_USE_FLASH;
+17 -3
drivers/mtd/nand/raw/plat_nand.c
··· 14 14 #include <linux/mtd/platnand.h> 15 15 16 16 struct plat_nand_data { 17 + struct nand_controller controller; 17 18 struct nand_chip chip; 18 19 void __iomem *io_base; 20 + }; 21 + 22 + static int plat_nand_attach_chip(struct nand_chip *chip) 23 + { 24 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 25 + chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 26 + 27 + return 0; 28 + } 29 + 30 + static const struct nand_controller_ops plat_nand_ops = { 31 + .attach_chip = plat_nand_attach_chip, 19 32 }; 20 33 21 34 /* ··· 59 46 if (!data) 60 47 return -ENOMEM; 61 48 49 + data->controller.ops = &plat_nand_ops; 50 + nand_controller_init(&data->controller); 51 + data->chip.controller = &data->controller; 52 + 62 53 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 63 54 data->io_base = devm_ioremap_resource(&pdev->dev, res); 64 55 if (IS_ERR(data->io_base)) ··· 82 65 data->chip.legacy.chip_delay = pdata->chip.chip_delay; 83 66 data->chip.options |= pdata->chip.options; 84 67 data->chip.bbt_options |= pdata->chip.bbt_options; 85 - 86 - data->chip.ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 87 - data->chip.ecc.algo = NAND_ECC_ALGO_HAMMING; 88 68 89 69 platform_set_drvdata(pdev, data); 90 70
+27 -13
drivers/mtd/nand/raw/r852.c
··· 817 817 return ret; 818 818 } 819 819 820 + static int r852_attach_chip(struct nand_chip *chip) 821 + { 822 + if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) 823 + return 0; 824 + 825 + chip->ecc.placement = NAND_ECC_PLACEMENT_INTERLEAVED; 826 + chip->ecc.size = R852_DMA_LEN; 827 + chip->ecc.bytes = SM_OOB_SIZE; 828 + chip->ecc.strength = 2; 829 + chip->ecc.hwctl = r852_ecc_hwctl; 830 + chip->ecc.calculate = r852_ecc_calculate; 831 + chip->ecc.correct = r852_ecc_correct; 832 + 833 + /* TODO: hack */ 834 + chip->ecc.read_oob = r852_read_oob; 835 + 836 + return 0; 837 + } 838 + 839 + static const struct nand_controller_ops r852_ops = { 840 + .attach_chip = r852_attach_chip, 841 + }; 842 + 820 843 static int r852_probe(struct pci_dev *pci_dev, const struct pci_device_id *id) 821 844 { 822 845 int error; ··· 881 858 chip->legacy.read_buf = r852_read_buf; 882 859 chip->legacy.write_buf = r852_write_buf; 883 860 884 - /* ecc */ 885 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 886 - chip->ecc.placement = NAND_ECC_PLACEMENT_INTERLEAVED; 887 - chip->ecc.size = R852_DMA_LEN; 888 - chip->ecc.bytes = SM_OOB_SIZE; 889 - chip->ecc.strength = 2; 890 - chip->ecc.hwctl = r852_ecc_hwctl; 891 - chip->ecc.calculate = r852_ecc_calculate; 892 - chip->ecc.correct = r852_ecc_correct; 893 - 894 - /* TODO: hack */ 895 - chip->ecc.read_oob = r852_read_oob; 896 - 897 861 /* init our device structure */ 898 862 dev = kzalloc(sizeof(struct r852_device), GFP_KERNEL); 899 863 ··· 891 881 dev->chip = chip; 892 882 dev->pci_dev = pci_dev; 893 883 pci_set_drvdata(pci_dev, dev); 884 + 885 + nand_controller_init(&dev->controller); 886 + dev->controller.ops = &r852_ops; 887 + chip->controller = &dev->controller; 894 888 895 889 dev->bounce_buffer = dma_alloc_coherent(&pci_dev->dev, R852_DMA_LEN, 896 890 &dev->phys_bounce_buffer, GFP_KERNEL);
+1
drivers/mtd/nand/raw/r852.h
··· 104 104 #define DMA_MEMORY 1
105 105
106 106 struct r852_device {
107 + struct nand_controller controller;
107 108 void __iomem *mmio; /* mmio */
108 109 struct nand_chip *chip; /* nand chip backpointer */
109 110 struct pci_dev *pci_dev; /* pci backpointer */
+24 -8
drivers/mtd/nand/raw/sharpsl.c
··· 20 20 #include <linux/io.h> 21 21 22 22 struct sharpsl_nand { 23 + struct nand_controller controller; 23 24 struct nand_chip chip; 24 25 25 26 void __iomem *io; ··· 97 96 return readb(sharpsl->io + ECCCNTR) != 0; 98 97 } 99 98 99 + static int sharpsl_attach_chip(struct nand_chip *chip) 100 + { 101 + if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) 102 + return 0; 103 + 104 + chip->ecc.size = 256; 105 + chip->ecc.bytes = 3; 106 + chip->ecc.strength = 1; 107 + chip->ecc.hwctl = sharpsl_nand_enable_hwecc; 108 + chip->ecc.calculate = sharpsl_nand_calculate_ecc; 109 + chip->ecc.correct = nand_correct_data; 110 + 111 + return 0; 112 + } 113 + 114 + static const struct nand_controller_ops sharpsl_ops = { 115 + .attach_chip = sharpsl_attach_chip, 116 + }; 117 + 100 118 /* 101 119 * Main initialization routine 102 120 */ ··· 156 136 /* Get pointer to private data */ 157 137 this = (struct nand_chip *)(&sharpsl->chip); 158 138 139 + nand_controller_init(&sharpsl->controller); 140 + sharpsl->controller.ops = &sharpsl_ops; 141 + this->controller = &sharpsl->controller; 142 + 159 143 /* Link the private data with the MTD structure */ 160 144 mtd = nand_to_mtd(this); 161 145 mtd->dev.parent = &pdev->dev; ··· 180 156 this->legacy.dev_ready = sharpsl_nand_dev_ready; 181 157 /* 15 us command delay time */ 182 158 this->legacy.chip_delay = 15; 183 - /* set eccmode using hardware ECC */ 184 - this->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 185 - this->ecc.size = 256; 186 - this->ecc.bytes = 3; 187 - this->ecc.strength = 1; 188 159 this->badblock_pattern = data->badblock_pattern; 189 - this->ecc.hwctl = sharpsl_nand_enable_hwecc; 190 - this->ecc.calculate = sharpsl_nand_calculate_ecc; 191 - this->ecc.correct = nand_correct_data; 192 160 193 161 /* Scan to find existence of the device */ 194 162 err = nand_scan(this, 1);
+17 -4
drivers/mtd/nand/raw/socrates_nand.c
··· 22 22 #define FPGA_NAND_DATA_SHIFT 16 23 23 24 24 struct socrates_nand_host { 25 + struct nand_controller controller; 25 26 struct nand_chip nand_chip; 26 27 void __iomem *io_base; 27 28 struct device *dev; ··· 117 116 return 1; 118 117 } 119 118 119 + static int socrates_attach_chip(struct nand_chip *chip) 120 + { 121 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 122 + chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 123 + 124 + return 0; 125 + } 126 + 127 + static const struct nand_controller_ops socrates_ops = { 128 + .attach_chip = socrates_attach_chip, 129 + }; 130 + 120 131 /* 121 132 * Probe for the NAND device. 122 133 */ ··· 154 141 mtd = nand_to_mtd(nand_chip); 155 142 host->dev = &ofdev->dev; 156 143 144 + nand_controller_init(&host->controller); 145 + host->controller.ops = &socrates_ops; 146 + nand_chip->controller = &host->controller; 147 + 157 148 /* link the private data structures */ 158 149 nand_set_controller_data(nand_chip, host); 159 150 nand_set_flash_node(nand_chip, ofdev->dev.of_node); ··· 169 152 nand_chip->legacy.write_buf = socrates_nand_write_buf; 170 153 nand_chip->legacy.read_buf = socrates_nand_read_buf; 171 154 nand_chip->legacy.dev_ready = socrates_nand_device_ready; 172 - 173 - /* enable ECC */ 174 - nand_chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 175 - nand_chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 176 155 177 156 /* TODO: I have no idea what real delay is. */ 178 157 nand_chip->legacy.chip_delay = 20; /* 20us command delay time */
+24 -9
drivers/mtd/nand/raw/tmio_nand.c
··· 103 103 /*--------------------------------------------------------------------------*/ 104 104 105 105 struct tmio_nand { 106 + struct nand_controller controller; 106 107 struct nand_chip chip; 107 108 struct completion comp; 108 109 ··· 356 355 cell->disable(dev); 357 356 } 358 357 358 + static int tmio_attach_chip(struct nand_chip *chip) 359 + { 360 + if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) 361 + return 0; 362 + 363 + chip->ecc.size = 512; 364 + chip->ecc.bytes = 6; 365 + chip->ecc.strength = 2; 366 + chip->ecc.hwctl = tmio_nand_enable_hwecc; 367 + chip->ecc.calculate = tmio_nand_calculate_ecc; 368 + chip->ecc.correct = tmio_nand_correct_data; 369 + 370 + return 0; 371 + } 372 + 373 + static const struct nand_controller_ops tmio_ops = { 374 + .attach_chip = tmio_attach_chip, 375 + }; 376 + 359 377 static int tmio_probe(struct platform_device *dev) 360 378 { 361 379 struct tmio_nand_data *data = dev_get_platdata(&dev->dev); ··· 405 385 mtd->name = "tmio-nand"; 406 386 mtd->dev.parent = &dev->dev; 407 387 388 + nand_controller_init(&tmio->controller); 389 + tmio->controller.ops = &tmio_ops; 390 + nand_chip->controller = &tmio->controller; 391 + 408 392 tmio->ccr = devm_ioremap(&dev->dev, ccr->start, resource_size(ccr)); 409 393 if (!tmio->ccr) 410 394 return -EIO; ··· 432 408 nand_chip->legacy.read_byte = tmio_nand_read_byte; 433 409 nand_chip->legacy.write_buf = tmio_nand_write_buf; 434 410 nand_chip->legacy.read_buf = tmio_nand_read_buf; 435 - 436 - /* set eccmode using hardware ECC */ 437 - nand_chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 438 - nand_chip->ecc.size = 512; 439 - nand_chip->ecc.bytes = 6; 440 - nand_chip->ecc.strength = 2; 441 - nand_chip->ecc.hwctl = tmio_nand_enable_hwecc; 442 - nand_chip->ecc.calculate = tmio_nand_calculate_ecc; 443 - nand_chip->ecc.correct = tmio_nand_correct_data; 444 411 445 412 if (data) 446 413 nand_chip->badblock_pattern = data->badblock_pattern;
+9 -5
drivers/mtd/nand/raw/txx9ndfmc.c
··· 253 253 { 254 254 struct mtd_info *mtd = nand_to_mtd(chip); 255 255 256 + if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_ON_HOST) 257 + return 0; 258 + 259 + chip->ecc.strength = 1; 260 + 256 261 if (mtd->writesize >= 512) { 257 262 chip->ecc.size = 512; 258 263 chip->ecc.bytes = 6; ··· 265 260 chip->ecc.size = 256; 266 261 chip->ecc.bytes = 3; 267 262 } 263 + 264 + chip->ecc.calculate = txx9ndfmc_calculate_ecc; 265 + chip->ecc.correct = txx9ndfmc_correct_data; 266 + chip->ecc.hwctl = txx9ndfmc_enable_hwecc; 268 267 269 268 return 0; 270 269 } ··· 335 326 chip->legacy.write_buf = txx9ndfmc_write_buf; 336 327 chip->legacy.cmd_ctrl = txx9ndfmc_cmd_ctrl; 337 328 chip->legacy.dev_ready = txx9ndfmc_dev_ready; 338 - chip->ecc.calculate = txx9ndfmc_calculate_ecc; 339 - chip->ecc.correct = txx9ndfmc_correct_data; 340 - chip->ecc.hwctl = txx9ndfmc_enable_hwecc; 341 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_ON_HOST; 342 - chip->ecc.strength = 1; 343 329 chip->legacy.chip_delay = 100; 344 330 chip->controller = &drvdata->controller; 345 331
+16 -2
drivers/mtd/nand/raw/xway_nand.c
··· 62 62 #define NAND_CON_NANDM 1 63 63 64 64 struct xway_nand_data { 65 + struct nand_controller controller; 65 66 struct nand_chip chip; 66 67 unsigned long csflags; 67 68 void __iomem *nandaddr; ··· 146 145 xway_writeb(nand_to_mtd(chip), NAND_WRITE_DATA, buf[i]); 147 146 } 148 147 148 + static int xway_attach_chip(struct nand_chip *chip) 149 + { 150 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 151 + chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 152 + 153 + return 0; 154 + } 155 + 156 + static const struct nand_controller_ops xway_nand_ops = { 157 + .attach_chip = xway_attach_chip, 158 + }; 159 + 149 160 /* 150 161 * Probe for the NAND device. 151 162 */ ··· 193 180 data->chip.legacy.read_byte = xway_read_byte; 194 181 data->chip.legacy.chip_delay = 30; 195 182 196 - data->chip.ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 197 - data->chip.ecc.algo = NAND_ECC_ALGO_HAMMING; 183 + nand_controller_init(&data->controller); 184 + data->controller.ops = &xway_nand_ops; 185 + data->chip.controller = &data->controller; 198 186 199 187 platform_set_drvdata(pdev, data); 200 188 nand_set_controller_data(&data->chip, data);
+52 -27
drivers/net/bonding/bond_main.c
··· 1459 1459 slave->dev->flags &= ~IFF_SLAVE; 1460 1460 } 1461 1461 1462 - static struct slave *bond_alloc_slave(struct bonding *bond) 1462 + static void slave_kobj_release(struct kobject *kobj) 1463 1463 { 1464 - struct slave *slave = NULL; 1465 - 1466 - slave = kzalloc(sizeof(*slave), GFP_KERNEL); 1467 - if (!slave) 1468 - return NULL; 1469 - 1470 - if (BOND_MODE(bond) == BOND_MODE_8023AD) { 1471 - SLAVE_AD_INFO(slave) = kzalloc(sizeof(struct ad_slave_info), 1472 - GFP_KERNEL); 1473 - if (!SLAVE_AD_INFO(slave)) { 1474 - kfree(slave); 1475 - return NULL; 1476 - } 1477 - } 1478 - INIT_DELAYED_WORK(&slave->notify_work, bond_netdev_notify_work); 1479 - 1480 - return slave; 1481 - } 1482 - 1483 - static void bond_free_slave(struct slave *slave) 1484 - { 1464 + struct slave *slave = to_slave(kobj); 1485 1465 struct bonding *bond = bond_get_bond_by_slave(slave); 1486 1466 1487 1467 cancel_delayed_work_sync(&slave->notify_work); ··· 1469 1489 kfree(SLAVE_AD_INFO(slave)); 1470 1490 1471 1491 kfree(slave); 1492 + } 1493 + 1494 + static struct kobj_type slave_ktype = { 1495 + .release = slave_kobj_release, 1496 + #ifdef CONFIG_SYSFS 1497 + .sysfs_ops = &slave_sysfs_ops, 1498 + #endif 1499 + }; 1500 + 1501 + static int bond_kobj_init(struct slave *slave) 1502 + { 1503 + int err; 1504 + 1505 + err = kobject_init_and_add(&slave->kobj, &slave_ktype, 1506 + &(slave->dev->dev.kobj), "bonding_slave"); 1507 + if (err) 1508 + kobject_put(&slave->kobj); 1509 + 1510 + return err; 1511 + } 1512 + 1513 + static struct slave *bond_alloc_slave(struct bonding *bond, 1514 + struct net_device *slave_dev) 1515 + { 1516 + struct slave *slave = NULL; 1517 + 1518 + slave = kzalloc(sizeof(*slave), GFP_KERNEL); 1519 + if (!slave) 1520 + return NULL; 1521 + 1522 + slave->bond = bond; 1523 + slave->dev = slave_dev; 1524 + 1525 + if (bond_kobj_init(slave)) 1526 + return NULL; 1527 + 1528 + if (BOND_MODE(bond) == BOND_MODE_8023AD) { 1529 + SLAVE_AD_INFO(slave) = kzalloc(sizeof(struct ad_slave_info), 
1530 + GFP_KERNEL); 1531 + if (!SLAVE_AD_INFO(slave)) { 1532 + kobject_put(&slave->kobj); 1533 + return NULL; 1534 + } 1535 + } 1536 + INIT_DELAYED_WORK(&slave->notify_work, bond_netdev_notify_work); 1537 + 1538 + return slave; 1472 1539 } 1473 1540 1474 1541 static void bond_fill_ifbond(struct bonding *bond, struct ifbond *info) ··· 1704 1677 goto err_undo_flags; 1705 1678 } 1706 1679 1707 - new_slave = bond_alloc_slave(bond); 1680 + new_slave = bond_alloc_slave(bond, slave_dev); 1708 1681 if (!new_slave) { 1709 1682 res = -ENOMEM; 1710 1683 goto err_undo_flags; 1711 1684 } 1712 1685 1713 - new_slave->bond = bond; 1714 - new_slave->dev = slave_dev; 1715 1686 /* Set the new_slave's queue_id to be zero. Queue ID mapping 1716 1687 * is set via sysfs or module option if desired. 1717 1688 */ ··· 2031 2006 dev_set_mtu(slave_dev, new_slave->original_mtu); 2032 2007 2033 2008 err_free: 2034 - bond_free_slave(new_slave); 2009 + kobject_put(&new_slave->kobj); 2035 2010 2036 2011 err_undo_flags: 2037 2012 /* Enslave of first slave has failed and we need to fix master's mac */ ··· 2211 2186 if (!netif_is_bond_master(slave_dev)) 2212 2187 slave_dev->priv_flags &= ~IFF_BONDING; 2213 2188 2214 - bond_free_slave(slave); 2189 + kobject_put(&slave->kobj); 2215 2190 2216 2191 return 0; 2217 2192 }
+1 -17
drivers/net/bonding/bond_sysfs_slave.c
··· 121 121 }; 122 122 123 123 #define to_slave_attr(_at) container_of(_at, struct slave_attribute, attr) 124 - #define to_slave(obj) container_of(obj, struct slave, kobj) 125 124 126 125 static ssize_t slave_show(struct kobject *kobj, 127 126 struct attribute *attr, char *buf) ··· 131 132 return slave_attr->show(slave, buf); 132 133 } 133 134 134 - static const struct sysfs_ops slave_sysfs_ops = { 135 + const struct sysfs_ops slave_sysfs_ops = { 135 136 .show = slave_show, 136 - }; 137 - 138 - static struct kobj_type slave_ktype = { 139 - #ifdef CONFIG_SYSFS 140 - .sysfs_ops = &slave_sysfs_ops, 141 - #endif 142 137 }; 143 138 144 139 int bond_sysfs_slave_add(struct slave *slave) 145 140 { 146 141 const struct slave_attribute **a; 147 142 int err; 148 - 149 - err = kobject_init_and_add(&slave->kobj, &slave_ktype, 150 - &(slave->dev->dev.kobj), "bonding_slave"); 151 - if (err) { 152 - kobject_put(&slave->kobj); 153 - return err; 154 - } 155 143 156 144 for (a = slave_attrs; *a; ++a) { 157 145 err = sysfs_create_file(&slave->kobj, &((*a)->attr)); ··· 157 171 158 172 for (a = slave_attrs; *a; ++a) 159 173 sysfs_remove_file(&slave->kobj, &((*a)->attr)); 160 - 161 - kobject_put(&slave->kobj); 162 174 }
+4 -2
drivers/net/can/m_can/m_can.c
··· 1033 1033 .name = KBUILD_MODNAME,
1034 1034 .tseg1_min = 2, /* Time segment 1 = prop_seg + phase_seg1 */
1035 1035 .tseg1_max = 256,
1036 - .tseg2_min = 1, /* Time segment 2 = phase_seg2 */
1036 + .tseg2_min = 2, /* Time segment 2 = phase_seg2 */
1037 1037 .tseg2_max = 128,
1038 1038 .sjw_max = 128,
1039 1039 .brp_min = 1,
··· 1385 1385 &m_can_data_bittiming_const_31X;
1386 1386 break;
1387 1387 case 32:
1388 + case 33:
1389 + /* Support both MCAN version v3.2.x and v3.3.0 */
1388 1390 m_can_dev->can.bittiming_const = m_can_dev->bit_timing ?
1389 1391 m_can_dev->bit_timing : &m_can_bittiming_const_31X;
1390 1392
··· 1655 1653 INIT_WORK(&cdev->tx_work, m_can_tx_work_queue);
1656 1654
1657 1655 err = request_threaded_irq(dev->irq, NULL, m_can_isr,
1658 - IRQF_ONESHOT | IRQF_TRIGGER_FALLING,
1656 + IRQF_ONESHOT,
1659 1657 dev->name, dev);
1660 1658 } else {
1661 1659 err = request_irq(dev->irq, m_can_isr, IRQF_SHARED, dev->name,
+4
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
··· 2735 2735 u32 freq;
2736 2736 int err;
2737 2737
2738 + if (!spi->irq)
2739 + return dev_err_probe(&spi->dev, -ENXIO,
2740 + "No IRQ specified (maybe node \"interrupts-extended\" in DT missing)!\n");
2741 +
2738 2742 rx_int = devm_gpiod_get_optional(&spi->dev, "microchip,rx-int",
2739 2743 GPIOD_IN);
2740 2744 if (PTR_ERR(rx_int) == -EPROBE_DEFER)
+71 -62
drivers/net/can/usb/gs_usb.c
··· 64 64 }; 65 65 66 66 /* data types passed between host and device */ 67 - struct gs_host_config { 68 - u32 byte_order; 69 - } __packed; 70 - /* All data exchanged between host and device is exchanged in host byte order, 71 - * thanks to the struct gs_host_config byte_order member, which is sent first 72 - * to indicate the desired byte order. 67 + 68 + /* The firmware on the original USB2CAN by Geschwister Schneider 69 + * Technologie Entwicklungs- und Vertriebs UG exchanges all data 70 + * between the host and the device in host byte order. This is done 71 + * with the struct gs_host_config::byte_order member, which is sent 72 + * first to indicate the desired byte order. 73 + * 74 + * The widely used open source firmware candleLight doesn't support 75 + * this feature and exchanges the data in little endian byte order. 73 76 */ 77 + struct gs_host_config { 78 + __le32 byte_order; 79 + } __packed; 74 80 75 81 struct gs_device_config { 76 82 u8 reserved1; 77 83 u8 reserved2; 78 84 u8 reserved3; 79 85 u8 icount; 80 - u32 sw_version; 81 - u32 hw_version; 86 + __le32 sw_version; 87 + __le32 hw_version; 82 88 } __packed; 83 89 84 90 #define GS_CAN_MODE_NORMAL 0 ··· 94 88 #define GS_CAN_MODE_ONE_SHOT BIT(3) 95 89 96 90 struct gs_device_mode { 97 - u32 mode; 98 - u32 flags; 91 + __le32 mode; 92 + __le32 flags; 99 93 } __packed; 100 94 101 95 struct gs_device_state { 102 - u32 state; 103 - u32 rxerr; 104 - u32 txerr; 96 + __le32 state; 97 + __le32 rxerr; 98 + __le32 txerr; 105 99 } __packed; 106 100 107 101 struct gs_device_bittiming { 108 - u32 prop_seg; 109 - u32 phase_seg1; 110 - u32 phase_seg2; 111 - u32 sjw; 112 - u32 brp; 102 + __le32 prop_seg; 103 + __le32 phase_seg1; 104 + __le32 phase_seg2; 105 + __le32 sjw; 106 + __le32 brp; 113 107 } __packed; 114 108 115 109 struct gs_identify_mode { 116 - u32 mode; 110 + __le32 mode; 117 111 } __packed; 118 112 119 113 #define GS_CAN_FEATURE_LISTEN_ONLY BIT(0) ··· 124 118 #define GS_CAN_FEATURE_IDENTIFY BIT(5) 125 119 126 
120 struct gs_device_bt_const { 127 - u32 feature; 128 - u32 fclk_can; 129 - u32 tseg1_min; 130 - u32 tseg1_max; 131 - u32 tseg2_min; 132 - u32 tseg2_max; 133 - u32 sjw_max; 134 - u32 brp_min; 135 - u32 brp_max; 136 - u32 brp_inc; 121 + __le32 feature; 122 + __le32 fclk_can; 123 + __le32 tseg1_min; 124 + __le32 tseg1_max; 125 + __le32 tseg2_min; 126 + __le32 tseg2_max; 127 + __le32 sjw_max; 128 + __le32 brp_min; 129 + __le32 brp_max; 130 + __le32 brp_inc; 137 131 } __packed; 138 132 139 133 #define GS_CAN_FLAG_OVERFLOW 1 140 134 141 135 struct gs_host_frame { 142 136 u32 echo_id; 143 - u32 can_id; 137 + __le32 can_id; 144 138 145 139 u8 can_dlc; 146 140 u8 channel; ··· 336 330 if (!skb) 337 331 return; 338 332 339 - cf->can_id = hf->can_id; 333 + cf->can_id = le32_to_cpu(hf->can_id); 340 334 341 335 can_frame_set_cc_len(cf, hf->can_dlc, dev->can.ctrlmode); 342 336 memcpy(cf->data, hf->data, 8); 343 337 344 338 /* ERROR frames tell us information about the controller */ 345 - if (hf->can_id & CAN_ERR_FLAG) 339 + if (le32_to_cpu(hf->can_id) & CAN_ERR_FLAG) 346 340 gs_update_state(dev, cf); 347 341 348 342 netdev->stats.rx_packets++; ··· 425 419 if (!dbt) 426 420 return -ENOMEM; 427 421 428 - dbt->prop_seg = bt->prop_seg; 429 - dbt->phase_seg1 = bt->phase_seg1; 430 - dbt->phase_seg2 = bt->phase_seg2; 431 - dbt->sjw = bt->sjw; 432 - dbt->brp = bt->brp; 422 + dbt->prop_seg = cpu_to_le32(bt->prop_seg); 423 + dbt->phase_seg1 = cpu_to_le32(bt->phase_seg1); 424 + dbt->phase_seg2 = cpu_to_le32(bt->phase_seg2); 425 + dbt->sjw = cpu_to_le32(bt->sjw); 426 + dbt->brp = cpu_to_le32(bt->brp); 433 427 434 428 /* request bit timings */ 435 429 rc = usb_control_msg(interface_to_usbdev(intf), ··· 510 504 511 505 cf = (struct can_frame *)skb->data; 512 506 513 - hf->can_id = cf->can_id; 507 + hf->can_id = cpu_to_le32(cf->can_id); 514 508 hf->can_dlc = can_get_cc_dlc(cf, dev->can.ctrlmode); 515 509 516 510 memcpy(hf->data, cf->data, cf->len); ··· 581 575 int rc, i; 582 576 struct 
gs_device_mode *dm; 583 577 u32 ctrlmode; 578 + u32 flags = 0; 584 579 585 580 rc = open_candev(netdev); 586 581 if (rc) ··· 649 642 650 643 /* flags */ 651 644 ctrlmode = dev->can.ctrlmode; 652 - dm->flags = 0; 653 645 654 646 if (ctrlmode & CAN_CTRLMODE_LOOPBACK) 655 - dm->flags |= GS_CAN_MODE_LOOP_BACK; 647 + flags |= GS_CAN_MODE_LOOP_BACK; 656 648 else if (ctrlmode & CAN_CTRLMODE_LISTENONLY) 657 - dm->flags |= GS_CAN_MODE_LISTEN_ONLY; 649 + flags |= GS_CAN_MODE_LISTEN_ONLY; 658 650 659 651 /* Controller is not allowed to retry TX 660 652 * this mode is unavailable on atmels uc3c hardware 661 653 */ 662 654 if (ctrlmode & CAN_CTRLMODE_ONE_SHOT) 663 - dm->flags |= GS_CAN_MODE_ONE_SHOT; 655 + flags |= GS_CAN_MODE_ONE_SHOT; 664 656 665 657 if (ctrlmode & CAN_CTRLMODE_3_SAMPLES) 666 - dm->flags |= GS_CAN_MODE_TRIPLE_SAMPLE; 658 + flags |= GS_CAN_MODE_TRIPLE_SAMPLE; 667 659 668 660 /* finally start device */ 669 - dm->mode = GS_CAN_MODE_START; 661 + dm->mode = cpu_to_le32(GS_CAN_MODE_START); 662 + dm->flags = cpu_to_le32(flags); 670 663 rc = usb_control_msg(interface_to_usbdev(dev->iface), 671 664 usb_sndctrlpipe(interface_to_usbdev(dev->iface), 0), 672 665 GS_USB_BREQ_MODE, ··· 746 739 return -ENOMEM; 747 740 748 741 if (do_identify) 749 - imode->mode = GS_CAN_IDENTIFY_ON; 742 + imode->mode = cpu_to_le32(GS_CAN_IDENTIFY_ON); 750 743 else 751 - imode->mode = GS_CAN_IDENTIFY_OFF; 744 + imode->mode = cpu_to_le32(GS_CAN_IDENTIFY_OFF); 752 745 753 746 rc = usb_control_msg(interface_to_usbdev(dev->iface), 754 747 usb_sndctrlpipe(interface_to_usbdev(dev->iface), ··· 799 792 struct net_device *netdev; 800 793 int rc; 801 794 struct gs_device_bt_const *bt_const; 795 + u32 feature; 802 796 803 797 bt_const = kmalloc(sizeof(*bt_const), GFP_KERNEL); 804 798 if (!bt_const) ··· 840 832 841 833 /* dev setup */ 842 834 strcpy(dev->bt_const.name, "gs_usb"); 843 - dev->bt_const.tseg1_min = bt_const->tseg1_min; 844 - dev->bt_const.tseg1_max = bt_const->tseg1_max; 845 - 
dev->bt_const.tseg2_min = bt_const->tseg2_min; 846 - dev->bt_const.tseg2_max = bt_const->tseg2_max; 847 - dev->bt_const.sjw_max = bt_const->sjw_max; 848 - dev->bt_const.brp_min = bt_const->brp_min; 849 - dev->bt_const.brp_max = bt_const->brp_max; 850 - dev->bt_const.brp_inc = bt_const->brp_inc; 835 + dev->bt_const.tseg1_min = le32_to_cpu(bt_const->tseg1_min); 836 + dev->bt_const.tseg1_max = le32_to_cpu(bt_const->tseg1_max); 837 + dev->bt_const.tseg2_min = le32_to_cpu(bt_const->tseg2_min); 838 + dev->bt_const.tseg2_max = le32_to_cpu(bt_const->tseg2_max); 839 + dev->bt_const.sjw_max = le32_to_cpu(bt_const->sjw_max); 840 + dev->bt_const.brp_min = le32_to_cpu(bt_const->brp_min); 841 + dev->bt_const.brp_max = le32_to_cpu(bt_const->brp_max); 842 + dev->bt_const.brp_inc = le32_to_cpu(bt_const->brp_inc); 851 843 852 844 dev->udev = interface_to_usbdev(intf); 853 845 dev->iface = intf; ··· 864 856 865 857 /* can setup */ 866 858 dev->can.state = CAN_STATE_STOPPED; 867 - dev->can.clock.freq = bt_const->fclk_can; 859 + dev->can.clock.freq = le32_to_cpu(bt_const->fclk_can); 868 860 dev->can.bittiming_const = &dev->bt_const; 869 861 dev->can.do_set_bittiming = gs_usb_set_bittiming; 870 862 871 863 dev->can.ctrlmode_supported = CAN_CTRLMODE_CC_LEN8_DLC; 872 864 873 - if (bt_const->feature & GS_CAN_FEATURE_LISTEN_ONLY) 865 + feature = le32_to_cpu(bt_const->feature); 866 + if (feature & GS_CAN_FEATURE_LISTEN_ONLY) 874 867 dev->can.ctrlmode_supported |= CAN_CTRLMODE_LISTENONLY; 875 868 876 - if (bt_const->feature & GS_CAN_FEATURE_LOOP_BACK) 869 + if (feature & GS_CAN_FEATURE_LOOP_BACK) 877 870 dev->can.ctrlmode_supported |= CAN_CTRLMODE_LOOPBACK; 878 871 879 - if (bt_const->feature & GS_CAN_FEATURE_TRIPLE_SAMPLE) 872 + if (feature & GS_CAN_FEATURE_TRIPLE_SAMPLE) 880 873 dev->can.ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES; 881 874 882 - if (bt_const->feature & GS_CAN_FEATURE_ONE_SHOT) 875 + if (feature & GS_CAN_FEATURE_ONE_SHOT) 883 876 dev->can.ctrlmode_supported |= 
CAN_CTRLMODE_ONE_SHOT; 884 877 885 878 SET_NETDEV_DEV(netdev, &intf->dev); 886 879 887 - if (dconf->sw_version > 1) 888 - if (bt_const->feature & GS_CAN_FEATURE_IDENTIFY) 880 + if (le32_to_cpu(dconf->sw_version) > 1) 881 + if (feature & GS_CAN_FEATURE_IDENTIFY) 889 882 netdev->ethtool_ops = &gs_usb_ethtool_ops; 890 883 891 884 kfree(bt_const); ··· 921 912 if (!hconf) 922 913 return -ENOMEM; 923 914 924 - hconf->byte_order = 0x0000beef; 915 + hconf->byte_order = cpu_to_le32(0x0000beef); 925 916 926 917 /* send host config */ 927 918 rc = usb_control_msg(interface_to_usbdev(intf),
+3
drivers/net/ethernet/amazon/ena/ena_eth_com.c
··· 516 516 {
517 517 struct ena_com_rx_buf_info *ena_buf = &ena_rx_ctx->ena_bufs[0];
518 518 struct ena_eth_io_rx_cdesc_base *cdesc = NULL;
519 + u16 q_depth = io_cq->q_depth;
519 520 u16 cdesc_idx = 0;
520 521 u16 nb_hw_desc;
521 522 u16 i = 0;
··· 544 543 do {
545 544 ena_buf[i].len = cdesc->length;
546 545 ena_buf[i].req_id = cdesc->req_id;
546 + if (unlikely(ena_buf[i].req_id >= q_depth))
547 + return -EIO;
547 548
548 549 if (++i >= nb_hw_desc)
549 550 break;
+30 -50
drivers/net/ethernet/amazon/ena/ena_netdev.c
··· 789 789 adapter->num_io_queues); 790 790 } 791 791 792 - static int validate_rx_req_id(struct ena_ring *rx_ring, u16 req_id) 793 - { 794 - if (likely(req_id < rx_ring->ring_size)) 795 - return 0; 796 - 797 - netif_err(rx_ring->adapter, rx_err, rx_ring->netdev, 798 - "Invalid rx req_id: %hu\n", req_id); 799 - 800 - u64_stats_update_begin(&rx_ring->syncp); 801 - rx_ring->rx_stats.bad_req_id++; 802 - u64_stats_update_end(&rx_ring->syncp); 803 - 804 - /* Trigger device reset */ 805 - rx_ring->adapter->reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID; 806 - set_bit(ENA_FLAG_TRIGGER_RESET, &rx_ring->adapter->flags); 807 - return -EFAULT; 808 - } 809 - 810 792 /* ena_setup_rx_resources - allocate I/O Rx resources (Descriptors) 811 793 * @adapter: network interface device structure 812 794 * @qid: queue index ··· 908 926 static int ena_alloc_rx_page(struct ena_ring *rx_ring, 909 927 struct ena_rx_buffer *rx_info, gfp_t gfp) 910 928 { 929 + int headroom = rx_ring->rx_headroom; 911 930 struct ena_com_buf *ena_buf; 912 931 struct page *page; 913 932 dma_addr_t dma; 933 + 934 + /* restore page offset value in case it has been changed by device */ 935 + rx_info->page_offset = headroom; 914 936 915 937 /* if previous allocated page is not used */ 916 938 if (unlikely(rx_info->page)) ··· 945 959 "Allocate page %p, rx_info %p\n", page, rx_info); 946 960 947 961 rx_info->page = page; 948 - rx_info->page_offset = 0; 949 962 ena_buf = &rx_info->ena_buf; 950 - ena_buf->paddr = dma + rx_ring->rx_headroom; 951 - ena_buf->len = ENA_PAGE_SIZE - rx_ring->rx_headroom; 963 + ena_buf->paddr = dma + headroom; 964 + ena_buf->len = ENA_PAGE_SIZE - headroom; 952 965 953 966 return 0; 954 967 } ··· 1341 1356 struct ena_rx_buffer *rx_info; 1342 1357 u16 len, req_id, buf = 0; 1343 1358 void *va; 1344 - int rc; 1345 1359 1346 1360 len = ena_bufs[buf].len; 1347 1361 req_id = ena_bufs[buf].req_id; 1348 - 1349 - rc = validate_rx_req_id(rx_ring, req_id); 1350 - if (unlikely(rc < 0)) 1351 - return NULL; 
1352 1362 1353 1363 rx_info = &rx_ring->rx_buffer_info[req_id]; 1354 1364 ··· 1359 1379 1360 1380 /* save virt address of first buffer */ 1361 1381 va = page_address(rx_info->page) + rx_info->page_offset; 1362 - prefetch(va + NET_IP_ALIGN); 1382 + 1383 + prefetch(va); 1363 1384 1364 1385 if (len <= rx_ring->rx_copybreak) { 1365 1386 skb = ena_alloc_skb(rx_ring, false); ··· 1401 1420 1402 1421 skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_info->page, 1403 1422 rx_info->page_offset, len, ENA_PAGE_SIZE); 1404 - /* The offset is non zero only for the first buffer */ 1405 - rx_info->page_offset = 0; 1406 1423 1407 1424 netif_dbg(rx_ring->adapter, rx_status, rx_ring->netdev, 1408 1425 "RX skb updated. len %d. data_len %d\n", ··· 1418 1439 buf++; 1419 1440 len = ena_bufs[buf].len; 1420 1441 req_id = ena_bufs[buf].req_id; 1421 - 1422 - rc = validate_rx_req_id(rx_ring, req_id); 1423 - if (unlikely(rc < 0)) 1424 - return NULL; 1425 1442 1426 1443 rx_info = &rx_ring->rx_buffer_info[req_id]; 1427 1444 } while (1); ··· 1519 1544 int ret; 1520 1545 1521 1546 rx_info = &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id]; 1522 - xdp->data = page_address(rx_info->page) + 1523 - rx_info->page_offset + rx_ring->rx_headroom; 1547 + xdp->data = page_address(rx_info->page) + rx_info->page_offset; 1524 1548 xdp_set_data_meta_invalid(xdp); 1525 1549 xdp->data_hard_start = page_address(rx_info->page); 1526 1550 xdp->data_end = xdp->data + rx_ring->ena_bufs[0].len; ··· 1586 1612 if (unlikely(ena_rx_ctx.descs == 0)) 1587 1613 break; 1588 1614 1615 + /* First descriptor might have an offset set by the device */ 1589 1616 rx_info = &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id]; 1590 - rx_info->page_offset = ena_rx_ctx.pkt_offset; 1617 + rx_info->page_offset += ena_rx_ctx.pkt_offset; 1591 1618 1592 1619 netif_dbg(rx_ring->adapter, rx_status, rx_ring->netdev, 1593 1620 "rx_poll: q %d got packet from ena. 
descs #: %d l3 proto %d l4 proto %d hash: %x\n", ··· 1672 1697 error: 1673 1698 adapter = netdev_priv(rx_ring->netdev); 1674 1699 1675 - u64_stats_update_begin(&rx_ring->syncp); 1676 - rx_ring->rx_stats.bad_desc_num++; 1677 - u64_stats_update_end(&rx_ring->syncp); 1700 + if (rc == -ENOSPC) { 1701 + u64_stats_update_begin(&rx_ring->syncp); 1702 + rx_ring->rx_stats.bad_desc_num++; 1703 + u64_stats_update_end(&rx_ring->syncp); 1704 + adapter->reset_reason = ENA_REGS_RESET_TOO_MANY_RX_DESCS; 1705 + } else { 1706 + u64_stats_update_begin(&rx_ring->syncp); 1707 + rx_ring->rx_stats.bad_req_id++; 1708 + u64_stats_update_end(&rx_ring->syncp); 1709 + adapter->reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID; 1710 + } 1678 1711 1679 - /* Too many desc from the device. Trigger reset */ 1680 - adapter->reset_reason = ENA_REGS_RESET_TOO_MANY_RX_DESCS; 1681 1712 set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags); 1682 1713 1683 1714 return 0; ··· 3369 3388 goto err_mmio_read_less; 3370 3389 } 3371 3390 3372 - rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(dma_width)); 3391 + rc = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(dma_width)); 3373 3392 if (rc) { 3374 - dev_err(dev, "pci_set_dma_mask failed 0x%x\n", rc); 3375 - goto err_mmio_read_less; 3376 - } 3377 - 3378 - rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(dma_width)); 3379 - if (rc) { 3380 - dev_err(dev, "err_pci_set_consistent_dma_mask failed 0x%x\n", 3381 - rc); 3393 + dev_err(dev, "dma_set_mask_and_coherent failed %d\n", rc); 3382 3394 goto err_mmio_read_less; 3383 3395 } 3384 3396 ··· 4139 4165 if (rc) { 4140 4166 dev_err(&pdev->dev, "pci_enable_device_mem() failed!\n"); 4141 4167 return rc; 4168 + } 4169 + 4170 + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(ENA_MAX_PHYS_ADDR_SIZE_BITS)); 4171 + if (rc) { 4172 + dev_err(&pdev->dev, "dma_set_mask_and_coherent failed %d\n", rc); 4173 + goto err_disable_device; 4142 4174 } 4143 4175 4144 4176 pci_set_master(pdev);
+50 -72
drivers/net/ethernet/aquantia/atlantic/aq_ring.c
··· 413 413 buff->rxdata.pg_off, 414 414 buff->len, DMA_FROM_DEVICE); 415 415 416 - /* for single fragment packets use build_skb() */ 417 - if (buff->is_eop && 418 - buff->len <= AQ_CFG_RX_FRAME_MAX - AQ_SKB_ALIGN) { 419 - skb = build_skb(aq_buf_vaddr(&buff->rxdata), 416 + skb = napi_alloc_skb(napi, AQ_CFG_RX_HDR_SIZE); 417 + if (unlikely(!skb)) { 418 + u64_stats_update_begin(&self->stats.rx.syncp); 419 + self->stats.rx.skb_alloc_fails++; 420 + u64_stats_update_end(&self->stats.rx.syncp); 421 + err = -ENOMEM; 422 + goto err_exit; 423 + } 424 + if (is_ptp_ring) 425 + buff->len -= 426 + aq_ptp_extract_ts(self->aq_nic, skb, 427 + aq_buf_vaddr(&buff->rxdata), 428 + buff->len); 429 + 430 + hdr_len = buff->len; 431 + if (hdr_len > AQ_CFG_RX_HDR_SIZE) 432 + hdr_len = eth_get_headlen(skb->dev, 433 + aq_buf_vaddr(&buff->rxdata), 434 + AQ_CFG_RX_HDR_SIZE); 435 + 436 + memcpy(__skb_put(skb, hdr_len), aq_buf_vaddr(&buff->rxdata), 437 + ALIGN(hdr_len, sizeof(long))); 438 + 439 + if (buff->len - hdr_len > 0) { 440 + skb_add_rx_frag(skb, 0, buff->rxdata.page, 441 + buff->rxdata.pg_off + hdr_len, 442 + buff->len - hdr_len, 420 443 AQ_CFG_RX_FRAME_MAX); 421 - if (unlikely(!skb)) { 422 - u64_stats_update_begin(&self->stats.rx.syncp); 423 - self->stats.rx.skb_alloc_fails++; 424 - u64_stats_update_end(&self->stats.rx.syncp); 425 - err = -ENOMEM; 426 - goto err_exit; 427 - } 428 - if (is_ptp_ring) 429 - buff->len -= 430 - aq_ptp_extract_ts(self->aq_nic, skb, 431 - aq_buf_vaddr(&buff->rxdata), 432 - buff->len); 433 - skb_put(skb, buff->len); 434 444 page_ref_inc(buff->rxdata.page); 435 - } else { 436 - skb = napi_alloc_skb(napi, AQ_CFG_RX_HDR_SIZE); 437 - if (unlikely(!skb)) { 438 - u64_stats_update_begin(&self->stats.rx.syncp); 439 - self->stats.rx.skb_alloc_fails++; 440 - u64_stats_update_end(&self->stats.rx.syncp); 441 - err = -ENOMEM; 442 - goto err_exit; 443 - } 444 - if (is_ptp_ring) 445 - buff->len -= 446 - aq_ptp_extract_ts(self->aq_nic, skb, 447 - aq_buf_vaddr(&buff->rxdata), 
448 - buff->len); 445 + } 449 446 450 - hdr_len = buff->len; 451 - if (hdr_len > AQ_CFG_RX_HDR_SIZE) 452 - hdr_len = eth_get_headlen(skb->dev, 453 - aq_buf_vaddr(&buff->rxdata), 454 - AQ_CFG_RX_HDR_SIZE); 447 + if (!buff->is_eop) { 448 + buff_ = buff; 449 + i = 1U; 450 + do { 451 + next_ = buff_->next; 452 + buff_ = &self->buff_ring[next_]; 455 453 456 - memcpy(__skb_put(skb, hdr_len), aq_buf_vaddr(&buff->rxdata), 457 - ALIGN(hdr_len, sizeof(long))); 458 - 459 - if (buff->len - hdr_len > 0) { 460 - skb_add_rx_frag(skb, 0, buff->rxdata.page, 461 - buff->rxdata.pg_off + hdr_len, 462 - buff->len - hdr_len, 454 + dma_sync_single_range_for_cpu(aq_nic_get_dev(self->aq_nic), 455 + buff_->rxdata.daddr, 456 + buff_->rxdata.pg_off, 457 + buff_->len, 458 + DMA_FROM_DEVICE); 459 + skb_add_rx_frag(skb, i++, 460 + buff_->rxdata.page, 461 + buff_->rxdata.pg_off, 462 + buff_->len, 463 463 AQ_CFG_RX_FRAME_MAX); 464 - page_ref_inc(buff->rxdata.page); 465 - } 464 + page_ref_inc(buff_->rxdata.page); 465 + buff_->is_cleaned = 1; 466 466 467 - if (!buff->is_eop) { 468 - buff_ = buff; 469 - i = 1U; 470 - do { 471 - next_ = buff_->next, 472 - buff_ = &self->buff_ring[next_]; 467 + buff->is_ip_cso &= buff_->is_ip_cso; 468 + buff->is_udp_cso &= buff_->is_udp_cso; 469 + buff->is_tcp_cso &= buff_->is_tcp_cso; 470 + buff->is_cso_err |= buff_->is_cso_err; 473 471 474 - dma_sync_single_range_for_cpu( 475 - aq_nic_get_dev(self->aq_nic), 476 - buff_->rxdata.daddr, 477 - buff_->rxdata.pg_off, 478 - buff_->len, 479 - DMA_FROM_DEVICE); 480 - skb_add_rx_frag(skb, i++, 481 - buff_->rxdata.page, 482 - buff_->rxdata.pg_off, 483 - buff_->len, 484 - AQ_CFG_RX_FRAME_MAX); 485 - page_ref_inc(buff_->rxdata.page); 486 - buff_->is_cleaned = 1; 487 - 488 - buff->is_ip_cso &= buff_->is_ip_cso; 489 - buff->is_udp_cso &= buff_->is_udp_cso; 490 - buff->is_tcp_cso &= buff_->is_tcp_cso; 491 - buff->is_cso_err |= buff_->is_cso_err; 492 - 493 - } while (!buff_->is_eop); 494 - } 472 + } while (!buff_->is_eop); 495 473 } 
496 474 497 475 if (buff->is_vlan)
+3 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 11590 11590 if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) != 0 && 11591 11591 dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)) != 0) { 11592 11592 dev_err(&pdev->dev, "System does not support DMA, aborting\n"); 11593 - goto init_err_disable; 11593 + rc = -EIO; 11594 + goto init_err_release; 11594 11595 } 11595 11596 11596 11597 pci_set_master(pdev); ··· 12675 12674 create_singlethread_workqueue("bnxt_pf_wq"); 12676 12675 if (!bnxt_pf_wq) { 12677 12676 dev_err(&pdev->dev, "Unable to create workqueue.\n"); 12677 + rc = -ENOMEM; 12678 12678 goto init_err_pci_clean; 12679 12679 } 12680 12680 }
+1 -1
drivers/net/ethernet/chelsio/Kconfig
··· 68 68 69 69 config CHELSIO_T4 70 70 tristate "Chelsio Communications T4/T5/T6 Ethernet support" 71 - depends on PCI && (IPV6 || IPV6=n) 71 + depends on PCI && (IPV6 || IPV6=n) && (TLS || TLS=n) 72 72 select FW_LOADER 73 73 select MDIO 74 74 select ZLIB_DEFLATE
+2 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
··· 880 880 FW_FILTER_WR_OVLAN_VLD_V(f->fs.val.ovlan_vld) | 881 881 FW_FILTER_WR_IVLAN_VLDM_V(f->fs.mask.ivlan_vld) | 882 882 FW_FILTER_WR_OVLAN_VLDM_V(f->fs.mask.ovlan_vld)); 883 - fwr->smac_sel = f->smt->idx; 883 + if (f->fs.newsmac) 884 + fwr->smac_sel = f->smt->idx; 884 885 fwr->rx_chan_rx_rpl_iq = 885 886 htons(FW_FILTER_WR_RX_CHAN_V(0) | 886 887 FW_FILTER_WR_RX_RPL_IQ_V(adapter->sge.fw_evtq.abs_id));
+3 -1
drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
··· 544 544 /* need to wait for hw response, can't free tx_info yet. */ 545 545 if (tx_info->open_state == CH_KTLS_OPEN_PENDING) 546 546 tx_info->pending_close = true; 547 - /* free the lock after the cleanup */ 547 + else 548 + spin_unlock_bh(&tx_info->lock); 549 + /* if in pending close, free the lock after the cleanup */ 548 550 goto put_module; 549 551 } 550 552 spin_unlock_bh(&tx_info->lock);
+2
drivers/net/ethernet/freescale/dpaa2/Kconfig
··· 4 4 depends on FSL_MC_BUS && FSL_MC_DPIO 5 5 select PHYLINK 6 6 select PCS_LYNX 7 + select FSL_XGMAC_MDIO 8 + select NET_DEVLINK 7 9 help 8 10 This is the DPAA2 Ethernet driver supporting Freescale SoCs 9 11 with DPAA2 (DataPath Acceleration Architecture v2).
+2 -12
drivers/net/ethernet/freescale/enetc/enetc_qos.c
··· 92 92 gcl_config->atc = 0xff; 93 93 gcl_config->acl_len = cpu_to_le16(gcl_len); 94 94 95 - if (!admin_conf->base_time) { 96 - gcl_data->btl = 97 - cpu_to_le32(enetc_rd(&priv->si->hw, ENETC_SICTR0)); 98 - gcl_data->bth = 99 - cpu_to_le32(enetc_rd(&priv->si->hw, ENETC_SICTR1)); 100 - } else { 101 - gcl_data->btl = 102 - cpu_to_le32(lower_32_bits(admin_conf->base_time)); 103 - gcl_data->bth = 104 - cpu_to_le32(upper_32_bits(admin_conf->base_time)); 105 - } 106 - 95 + gcl_data->btl = cpu_to_le32(lower_32_bits(admin_conf->base_time)); 96 + gcl_data->bth = cpu_to_le32(upper_32_bits(admin_conf->base_time)); 107 97 gcl_data->ct = cpu_to_le32(admin_conf->cycle_time); 108 98 gcl_data->cte = cpu_to_le32(admin_conf->cycle_time_extension); 109 99
+20 -3
drivers/net/ethernet/ibm/ibmvnic.c
··· 2156 2156 for (i = 0; i < adapter->req_rx_queues; i++) 2157 2157 napi_schedule(&adapter->napi[i]); 2158 2158 2159 - if (adapter->reset_reason != VNIC_RESET_FAILOVER) 2159 + if (adapter->reset_reason == VNIC_RESET_FAILOVER || 2160 + adapter->reset_reason == VNIC_RESET_MOBILITY) { 2160 2161 call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, netdev); 2162 + call_netdevice_notifiers(NETDEV_RESEND_IGMP, netdev); 2163 + } 2161 2164 2162 2165 rc = 0; 2163 2166 ··· 2230 2227 if (rc) 2231 2228 return IBMVNIC_OPEN_FAILED; 2232 2229 2230 + call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, netdev); 2231 + call_netdevice_notifiers(NETDEV_RESEND_IGMP, netdev); 2232 + 2233 2233 return 0; 2234 2234 } 2235 2235 ··· 2297 2291 2298 2292 if (!saved_state) { 2299 2293 reset_state = adapter->state; 2300 - adapter->state = VNIC_RESETTING; 2301 2294 saved_state = true; 2302 2295 } 2303 2296 spin_unlock_irqrestore(&adapter->state_lock, flags); ··· 2436 2431 static void ibmvnic_tx_timeout(struct net_device *dev, unsigned int txqueue) 2437 2432 { 2438 2433 struct ibmvnic_adapter *adapter = netdev_priv(dev); 2434 + 2435 + if (test_bit(0, &adapter->resetting)) { 2436 + netdev_err(adapter->netdev, 2437 + "Adapter is resetting, skip timeout reset\n"); 2438 + return; 2439 + } 2439 2440 2440 2441 ibmvnic_reset(adapter, VNIC_RESET_TIMEOUT); 2441 2442 } ··· 2973 2962 static int reset_sub_crq_queues(struct ibmvnic_adapter *adapter) 2974 2963 { 2975 2964 int i, rc; 2965 + 2966 + if (!adapter->tx_scrq || !adapter->rx_scrq) 2967 + return -EINVAL; 2976 2968 2977 2969 for (i = 0; i < adapter->req_tx_queues; i++) { 2978 2970 netdev_dbg(adapter->netdev, "Re-setting tx_scrq[%d]\n", i); ··· 5058 5044 } while (rc == H_BUSY || H_IS_LONG_BUSY(rc)); 5059 5045 5060 5046 /* Clean out the queue */ 5047 + if (!crq->msgs) 5048 + return -EINVAL; 5049 + 5061 5050 memset(crq->msgs, 0, PAGE_SIZE); 5062 5051 crq->cur = 0; 5063 5052 crq->active = false; ··· 5365 5348 unsigned long flags; 5366 5349 5367 5350 
spin_lock_irqsave(&adapter->state_lock, flags); 5368 - if (adapter->state == VNIC_RESETTING) { 5351 + if (test_bit(0, &adapter->resetting)) { 5369 5352 spin_unlock_irqrestore(&adapter->state_lock, flags); 5370 5353 return -EBUSY; 5371 5354 }
+1 -2
drivers/net/ethernet/ibm/ibmvnic.h
··· 943 943 VNIC_CLOSING, 944 944 VNIC_CLOSED, 945 945 VNIC_REMOVING, 946 - VNIC_REMOVED, 947 - VNIC_RESETTING}; 946 + VNIC_REMOVED}; 948 947 949 948 enum ibmvnic_reset_reason {VNIC_RESET_FAILOVER = 1, 950 949 VNIC_RESET_MOBILITY,
+1
drivers/net/ethernet/intel/i40e/i40e.h
··· 140 140 __I40E_CLIENT_RESET, 141 141 __I40E_VIRTCHNL_OP_PENDING, 142 142 __I40E_RECOVERY_MODE, 143 + __I40E_VF_RESETS_DISABLED, /* disable resets during i40e_remove */ 143 144 /* This must be last as it determines the size of the BITMAP */ 144 145 __I40E_STATE_SIZE__, 145 146 };
+15 -7
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 4010 4010 } 4011 4011 4012 4012 if (icr0 & I40E_PFINT_ICR0_VFLR_MASK) { 4013 - ena_mask &= ~I40E_PFINT_ICR0_ENA_VFLR_MASK; 4014 - set_bit(__I40E_VFLR_EVENT_PENDING, pf->state); 4013 + /* disable any further VFLR event notifications */ 4014 + if (test_bit(__I40E_VF_RESETS_DISABLED, pf->state)) { 4015 + u32 reg = rd32(hw, I40E_PFINT_ICR0_ENA); 4016 + 4017 + reg &= ~I40E_PFINT_ICR0_VFLR_MASK; 4018 + wr32(hw, I40E_PFINT_ICR0_ENA, reg); 4019 + } else { 4020 + ena_mask &= ~I40E_PFINT_ICR0_ENA_VFLR_MASK; 4021 + set_bit(__I40E_VFLR_EVENT_PENDING, pf->state); 4022 + } 4015 4023 } 4016 4024 4017 4025 if (icr0 & I40E_PFINT_ICR0_GRST_MASK) { ··· 15319 15311 while (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state)) 15320 15312 usleep_range(1000, 2000); 15321 15313 15314 + if (pf->flags & I40E_FLAG_SRIOV_ENABLED) { 15315 + set_bit(__I40E_VF_RESETS_DISABLED, pf->state); 15316 + i40e_free_vfs(pf); 15317 + pf->flags &= ~I40E_FLAG_SRIOV_ENABLED; 15318 + } 15322 15319 /* no more scheduling of any task */ 15323 15320 set_bit(__I40E_SUSPENDED, pf->state); 15324 15321 set_bit(__I40E_DOWN, pf->state); ··· 15349 15336 * has been stopped. 15350 15337 */ 15351 15338 i40e_notify_client_of_netdev_close(pf->vsi[pf->lan_vsi], false); 15352 - 15353 - if (pf->flags & I40E_FLAG_SRIOV_ENABLED) { 15354 - i40e_free_vfs(pf); 15355 - pf->flags &= ~I40E_FLAG_SRIOV_ENABLED; 15356 - } 15357 15339 15358 15340 i40e_fdir_teardown(pf); 15359 15341
+15 -11
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 1403 1403 * @vf: pointer to the VF structure 1404 1404 * @flr: VFLR was issued or not 1405 1405 * 1406 - * Returns true if the VF is reset, false otherwise. 1406 + * Returns true if the VF is in reset, resets successfully, or resets 1407 + * are disabled and false otherwise. 1407 1408 **/ 1408 1409 bool i40e_reset_vf(struct i40e_vf *vf, bool flr) 1409 1410 { ··· 1414 1413 u32 reg; 1415 1414 int i; 1416 1415 1416 + if (test_bit(__I40E_VF_RESETS_DISABLED, pf->state)) 1417 + return true; 1418 + 1417 1419 /* If the VFs have been disabled, this means something else is 1418 1420 * resetting the VF, so we shouldn't continue. 1419 1421 */ 1420 1422 if (test_and_set_bit(__I40E_VF_DISABLE, pf->state)) 1421 - return false; 1423 + return true; 1422 1424 1423 1425 i40e_trigger_vf_reset(vf, flr); 1424 1426 ··· 1585 1581 1586 1582 i40e_notify_client_of_vf_enable(pf, 0); 1587 1583 1584 + /* Disable IOV before freeing resources. This lets any VF drivers 1585 + * running in the host get themselves cleaned up before we yank 1586 + * the carpet out from underneath their feet. 1587 + */ 1588 + if (!pci_vfs_assigned(pf->pdev)) 1589 + pci_disable_sriov(pf->pdev); 1590 + else 1591 + dev_warn(&pf->pdev->dev, "VFs are assigned - not disabling SR-IOV\n"); 1592 + 1588 1593 /* Amortize wait time by stopping all VFs at the same time */ 1589 1594 for (i = 0; i < pf->num_alloc_vfs; i++) { 1590 1595 if (test_bit(I40E_VF_STATE_INIT, &pf->vf[i].vf_states)) ··· 1608 1595 1609 1596 i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[i].lan_vsi_idx]); 1610 1597 } 1611 - 1612 - /* Disable IOV before freeing resources. This lets any VF drivers 1613 - * running in the host get themselves cleaned up before we yank 1614 - * the carpet out from underneath their feet. 
1615 - */ 1616 - if (!pci_vfs_assigned(pf->pdev)) 1617 - pci_disable_sriov(pf->pdev); 1618 - else 1619 - dev_warn(&pf->pdev->dev, "VFs are assigned - not disabling SR-IOV\n"); 1620 1598 1621 1599 /* free up VF resources */ 1622 1600 tmp = pf->num_alloc_vfs;
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
··· 1193 1193 .pcs_get_adv_lp = dwmac4_get_adv_lp, 1194 1194 .debug = dwmac4_debug, 1195 1195 .set_filter = dwmac4_set_filter, 1196 - .flex_pps_config = dwmac5_flex_pps_config, 1197 1196 .set_mac_loopback = dwmac4_set_mac_loopback, 1198 1197 .update_vlan_hash = dwmac4_update_vlan_hash, 1199 1198 .sarc_configure = dwmac4_sarc_configure, ··· 1235 1236 .pcs_get_adv_lp = dwmac4_get_adv_lp, 1236 1237 .debug = dwmac4_debug, 1237 1238 .set_filter = dwmac4_set_filter, 1239 + .flex_pps_config = dwmac5_flex_pps_config, 1238 1240 .set_mac_loopback = dwmac4_set_mac_loopback, 1239 1241 .update_vlan_hash = dwmac4_update_vlan_hash, 1240 1242 .sarc_configure = dwmac4_sarc_configure,
+11 -3
drivers/net/tun.c
··· 1921 1921 struct tun_file *tfile = file->private_data; 1922 1922 struct tun_struct *tun = tun_get(tfile); 1923 1923 ssize_t result; 1924 + int noblock = 0; 1924 1925 1925 1926 if (!tun) 1926 1927 return -EBADFD; 1927 1928 1928 - result = tun_get_user(tun, tfile, NULL, from, 1929 - file->f_flags & O_NONBLOCK, false); 1929 + if ((file->f_flags & O_NONBLOCK) || (iocb->ki_flags & IOCB_NOWAIT)) 1930 + noblock = 1; 1931 + 1932 + result = tun_get_user(tun, tfile, NULL, from, noblock, false); 1930 1933 1931 1934 tun_put(tun); 1932 1935 return result; ··· 2140 2137 struct tun_file *tfile = file->private_data; 2141 2138 struct tun_struct *tun = tun_get(tfile); 2142 2139 ssize_t len = iov_iter_count(to), ret; 2140 + int noblock = 0; 2143 2141 2144 2142 if (!tun) 2145 2143 return -EBADFD; 2146 - ret = tun_do_read(tun, tfile, to, file->f_flags & O_NONBLOCK, NULL); 2144 + 2145 + if ((file->f_flags & O_NONBLOCK) || (iocb->ki_flags & IOCB_NOWAIT)) 2146 + noblock = 1; 2147 + 2148 + ret = tun_do_read(tun, tfile, to, noblock, NULL); 2147 2149 ret = min_t(ssize_t, ret, len); 2148 2150 if (ret > 0) 2149 2151 iocb->ki_pos = ret;
+1 -1
drivers/net/usb/ipheth.c
··· 59 59 #define IPHETH_USBINTF_SUBCLASS 253 60 60 #define IPHETH_USBINTF_PROTO 1 61 61 62 - #define IPHETH_BUF_SIZE 1516 62 + #define IPHETH_BUF_SIZE 1514 63 63 #define IPHETH_IP_ALIGN 2 /* padding at front of URB */ 64 64 #define IPHETH_TX_TIMEOUT (5 * HZ) 65 65
+5 -5
drivers/net/wireless/intel/iwlwifi/fw/api/sta.h
··· 5 5 * 6 6 * GPL LICENSE SUMMARY 7 7 * 8 - * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 9 8 * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH 10 9 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 11 - * Copyright(c) 2018 - 2019 Intel Corporation 10 + * Copyright(c) 2012-2014, 2018 - 2020 Intel Corporation 12 11 * 13 12 * This program is free software; you can redistribute it and/or modify 14 13 * it under the terms of version 2 of the GNU General Public License as ··· 27 28 * 28 29 * BSD LICENSE 29 30 * 30 - * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 31 31 * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH 32 32 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 33 - * Copyright(c) 2018 - 2019 Intel Corporation 33 + * Copyright(c) 2012-2014, 2018 - 2020 Intel Corporation 34 34 * All rights reserved. 35 35 * 36 36 * Redistribution and use in source and binary forms, with or without ··· 126 128 STA_FLG_MAX_AGG_SIZE_256K = (5 << STA_FLG_MAX_AGG_SIZE_SHIFT), 127 129 STA_FLG_MAX_AGG_SIZE_512K = (6 << STA_FLG_MAX_AGG_SIZE_SHIFT), 128 130 STA_FLG_MAX_AGG_SIZE_1024K = (7 << STA_FLG_MAX_AGG_SIZE_SHIFT), 129 - STA_FLG_MAX_AGG_SIZE_MSK = (7 << STA_FLG_MAX_AGG_SIZE_SHIFT), 131 + STA_FLG_MAX_AGG_SIZE_2M = (8 << STA_FLG_MAX_AGG_SIZE_SHIFT), 132 + STA_FLG_MAX_AGG_SIZE_4M = (9 << STA_FLG_MAX_AGG_SIZE_SHIFT), 133 + STA_FLG_MAX_AGG_SIZE_MSK = (0xf << STA_FLG_MAX_AGG_SIZE_SHIFT), 130 134 131 135 STA_FLG_AGG_MPDU_DENS_SHIFT = 23, 132 136 STA_FLG_AGG_MPDU_DENS_2US = (4 << STA_FLG_AGG_MPDU_DENS_SHIFT),
+5 -3
drivers/net/wireless/intel/iwlwifi/fw/api/time-event.h
··· 8 8 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 9 9 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 10 10 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 11 - * Copyright(c) 2018 - 2019 Intel Corporation 11 + * Copyright(c) 2018 - 2020 Intel Corporation 12 12 * 13 13 * This program is free software; you can redistribute it and/or modify 14 14 * it under the terms of version 2 of the GNU General Public License as ··· 31 31 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 32 32 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 33 33 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 34 - * Copyright(c) 2018 - 2019 Intel Corporation 34 + * Copyright(c) 2018 - 2020 Intel Corporation 35 35 * All rights reserved. 36 36 * 37 37 * Redistribution and use in source and binary forms, with or without ··· 421 421 * able to run the GO Negotiation. Will not be fragmented and not 422 422 * repetitive. Valid only on the P2P Device MAC. Only the duration will 423 423 * be taken into account. 424 + * @SESSION_PROTECT_CONF_MAX_ID: not used 424 425 */ 425 426 enum iwl_mvm_session_prot_conf_id { 426 427 SESSION_PROTECT_CONF_ASSOC, 427 428 SESSION_PROTECT_CONF_GO_CLIENT_ASSOC, 428 429 SESSION_PROTECT_CONF_P2P_DEVICE_DISCOV, 429 430 SESSION_PROTECT_CONF_P2P_GO_NEGOTIATION, 431 + SESSION_PROTECT_CONF_MAX_ID, 430 432 }; /* SESSION_PROTECTION_CONF_ID_E_VER_1 */ 431 433 432 434 /** ··· 461 459 * @mac_id: the mac id for which the session protection started / ended 462 460 * @status: 1 means success, 0 means failure 463 461 * @start: 1 means the session protection started, 0 means it ended 464 - * @conf_id: the configuration id of the session that started / eneded 462 + * @conf_id: see &enum iwl_mvm_session_prot_conf_id 465 463 * 466 464 * Note that any session protection will always get two notifications: start 467 465 * and end even the firmware could not schedule it.
+10
drivers/net/wireless/intel/iwlwifi/iwl-csr.h
··· 147 147 #define CSR_MAC_SHADOW_REG_CTL2 (CSR_BASE + 0x0AC) 148 148 #define CSR_MAC_SHADOW_REG_CTL2_RX_WAKE 0xFFFF 149 149 150 + /* LTR control (since IWL_DEVICE_FAMILY_22000) */ 151 + #define CSR_LTR_LONG_VAL_AD (CSR_BASE + 0x0D4) 152 + #define CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ 0x80000000 153 + #define CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE 0x1c000000 154 + #define CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL 0x03ff0000 155 + #define CSR_LTR_LONG_VAL_AD_SNOOP_REQ 0x00008000 156 + #define CSR_LTR_LONG_VAL_AD_SNOOP_SCALE 0x00001c00 157 + #define CSR_LTR_LONG_VAL_AD_SNOOP_VAL 0x000003ff 158 + #define CSR_LTR_LONG_VAL_AD_SCALE_USEC 2 159 + 150 160 /* GIO Chicken Bits (PCI Express bus link power management) */ 151 161 #define CSR_GIO_CHICKEN_BITS (CSR_BASE+0x100) 152 162
+4 -1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 3080 3080 3081 3081 /* this would be a mac80211 bug ... but don't crash */ 3082 3082 if (WARN_ON_ONCE(!mvmvif->phy_ctxt)) 3083 - return -EINVAL; 3083 + return test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status) ? 0 : -EINVAL; 3084 3084 3085 3085 /* 3086 3086 * If we are in a STA removal flow and in DQA mode: ··· 3126 3126 ret = -EINVAL; 3127 3127 goto out_unlock; 3128 3128 } 3129 + 3130 + if (vif->type == NL80211_IFTYPE_STATION) 3131 + vif->bss_conf.he_support = sta->he_cap.has_he; 3129 3132 3130 3133 if (sta->tdls && 3131 3134 (vif->p2p ||
+18
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
··· 196 196 mpdu_dens = sta->ht_cap.ampdu_density; 197 197 } 198 198 199 + 199 200 if (sta->vht_cap.vht_supported) { 200 201 agg_size = sta->vht_cap.cap & 201 202 IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK; ··· 205 204 } else if (sta->ht_cap.ht_supported) { 206 205 agg_size = sta->ht_cap.ampdu_factor; 207 206 } 207 + 208 + /* D6.0 10.12.2 A-MPDU length limit rules 209 + * A STA indicates the maximum length of the A-MPDU preEOF padding 210 + * that it can receive in an HE PPDU in the Maximum A-MPDU Length 211 + * Exponent field in its HT Capabilities, VHT Capabilities, 212 + * and HE 6 GHz Band Capabilities elements (if present) and the 213 + * Maximum AMPDU Length Exponent Extension field in its HE 214 + * Capabilities element 215 + */ 216 + if (sta->he_cap.has_he) 217 + agg_size += u8_get_bits(sta->he_cap.he_cap_elem.mac_cap_info[3], 218 + IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK); 219 + 220 + /* Limit to max A-MPDU supported by FW */ 221 + if (agg_size > (STA_FLG_MAX_AGG_SIZE_4M >> STA_FLG_MAX_AGG_SIZE_SHIFT)) 222 + agg_size = (STA_FLG_MAX_AGG_SIZE_4M >> 223 + STA_FLG_MAX_AGG_SIZE_SHIFT); 208 224 209 225 add_sta_cmd.station_flags |= 210 226 cpu_to_le32(agg_size << STA_FLG_MAX_AGG_SIZE_SHIFT);
+69 -34
drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
··· 641 641 } 642 642 } 643 643 644 + static void iwl_mvm_cancel_session_protection(struct iwl_mvm *mvm, 645 + struct iwl_mvm_vif *mvmvif) 646 + { 647 + struct iwl_mvm_session_prot_cmd cmd = { 648 + .id_and_color = 649 + cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id, 650 + mvmvif->color)), 651 + .action = cpu_to_le32(FW_CTXT_ACTION_REMOVE), 652 + .conf_id = cpu_to_le32(mvmvif->time_event_data.id), 653 + }; 654 + int ret; 655 + 656 + ret = iwl_mvm_send_cmd_pdu(mvm, iwl_cmd_id(SESSION_PROTECTION_CMD, 657 + MAC_CONF_GROUP, 0), 658 + 0, sizeof(cmd), &cmd); 659 + if (ret) 660 + IWL_ERR(mvm, 661 + "Couldn't send the SESSION_PROTECTION_CMD: %d\n", ret); 662 + } 663 + 644 664 static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm, 645 665 struct iwl_mvm_time_event_data *te_data, 646 666 u32 *uid) 647 667 { 648 668 u32 id; 669 + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif); 649 670 650 671 /* 651 672 * It is possible that by the time we got to this point the time ··· 684 663 iwl_mvm_te_clear_data(mvm, te_data); 685 664 spin_unlock_bh(&mvm->time_event_lock); 686 665 687 - /* 688 - * It is possible that by the time we try to remove it, the time event 689 - * has already ended and removed. In such a case there is no need to 690 - * send a removal command. 666 + /* When session protection is supported, the te_data->id field 667 + * is reused to save session protection's configuration. 691 668 */ 692 - if (id == TE_MAX) { 693 - IWL_DEBUG_TE(mvm, "TE 0x%x has already ended\n", *uid); 669 + if (fw_has_capa(&mvm->fw->ucode_capa, 670 + IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD)) { 671 + if (mvmvif && id < SESSION_PROTECT_CONF_MAX_ID) { 672 + /* Session protection is still ongoing. 
Cancel it */ 673 + iwl_mvm_cancel_session_protection(mvm, mvmvif); 674 + if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE) { 675 + set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status); 676 + iwl_mvm_roc_finished(mvm); 677 + } 678 + } 694 679 return false; 680 + } else { 681 + /* It is possible that by the time we try to remove it, the 682 + * time event has already ended and removed. In such a case 683 + * there is no need to send a removal command. 684 + */ 685 + if (id == TE_MAX) { 686 + IWL_DEBUG_TE(mvm, "TE 0x%x has already ended\n", *uid); 687 + return false; 688 + } 695 689 } 696 690 697 691 return true; ··· 807 771 struct iwl_rx_packet *pkt = rxb_addr(rxb); 808 772 struct iwl_mvm_session_prot_notif *notif = (void *)pkt->data; 809 773 struct ieee80211_vif *vif; 774 + struct iwl_mvm_vif *mvmvif; 810 775 811 776 rcu_read_lock(); 812 777 vif = iwl_mvm_rcu_dereference_vif_id(mvm, le32_to_cpu(notif->mac_id), ··· 816 779 if (!vif) 817 780 goto out_unlock; 818 781 782 + mvmvif = iwl_mvm_vif_from_mac80211(vif); 783 + 819 784 /* The vif is not a P2P_DEVICE, maintain its time_event_data */ 820 785 if (vif->type != NL80211_IFTYPE_P2P_DEVICE) { 821 - struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); 822 786 struct iwl_mvm_time_event_data *te_data = 823 787 &mvmvif->time_event_data; 824 788 ··· 854 816 855 817 if (!le32_to_cpu(notif->status) || !le32_to_cpu(notif->start)) { 856 818 /* End TE, notify mac80211 */ 819 + mvmvif->time_event_data.id = SESSION_PROTECT_CONF_MAX_ID; 857 820 ieee80211_remain_on_channel_expired(mvm->hw); 858 821 set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status); 859 822 iwl_mvm_roc_finished(mvm); 860 823 } else if (le32_to_cpu(notif->start)) { 824 + if (WARN_ON(mvmvif->time_event_data.id != 825 + le32_to_cpu(notif->conf_id))) 826 + goto out_unlock; 861 827 set_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status); 862 828 ieee80211_ready_on_channel(mvm->hw); /* Start TE */ 863 829 } ··· 887 845 888 846 lockdep_assert_held(&mvm->mutex); 889 
847 848 + /* The time_event_data.id field is reused to save session 849 + * protection's configuration. 850 + */ 890 851 switch (type) { 891 852 case IEEE80211_ROC_TYPE_NORMAL: 892 - cmd.conf_id = 893 - cpu_to_le32(SESSION_PROTECT_CONF_P2P_DEVICE_DISCOV); 853 + mvmvif->time_event_data.id = 854 + SESSION_PROTECT_CONF_P2P_DEVICE_DISCOV; 894 855 break; 895 856 case IEEE80211_ROC_TYPE_MGMT_TX: 896 - cmd.conf_id = 897 - cpu_to_le32(SESSION_PROTECT_CONF_P2P_GO_NEGOTIATION); 857 + mvmvif->time_event_data.id = 858 + SESSION_PROTECT_CONF_P2P_GO_NEGOTIATION; 898 859 break; 899 860 default: 900 861 WARN_ONCE(1, "Got an invalid ROC type\n"); 901 862 return -EINVAL; 902 863 } 903 864 865 + cmd.conf_id = cpu_to_le32(mvmvif->time_event_data.id); 904 866 return iwl_mvm_send_cmd_pdu(mvm, iwl_cmd_id(SESSION_PROTECTION_CMD, 905 867 MAC_CONF_GROUP, 0), 906 868 0, sizeof(cmd), &cmd); ··· 1006 960 __iwl_mvm_remove_time_event(mvm, te_data, &uid); 1007 961 } 1008 962 1009 - static void iwl_mvm_cancel_session_protection(struct iwl_mvm *mvm, 1010 - struct iwl_mvm_vif *mvmvif) 1011 - { 1012 - struct iwl_mvm_session_prot_cmd cmd = { 1013 - .id_and_color = 1014 - cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id, 1015 - mvmvif->color)), 1016 - .action = cpu_to_le32(FW_CTXT_ACTION_REMOVE), 1017 - }; 1018 - int ret; 1019 - 1020 - ret = iwl_mvm_send_cmd_pdu(mvm, iwl_cmd_id(SESSION_PROTECTION_CMD, 1021 - MAC_CONF_GROUP, 0), 1022 - 0, sizeof(cmd), &cmd); 1023 - if (ret) 1024 - IWL_ERR(mvm, 1025 - "Couldn't send the SESSION_PROTECTION_CMD: %d\n", ret); 1026 - } 1027 - 1028 963 void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif) 1029 964 { 1030 965 struct iwl_mvm_vif *mvmvif; ··· 1015 988 IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD)) { 1016 989 mvmvif = iwl_mvm_vif_from_mac80211(vif); 1017 990 1018 - iwl_mvm_cancel_session_protection(mvm, mvmvif); 1019 - 1020 - if (vif->type == NL80211_IFTYPE_P2P_DEVICE) 991 + if (vif->type == NL80211_IFTYPE_P2P_DEVICE) { 992 + 
iwl_mvm_cancel_session_protection(mvm, mvmvif); 1021 993 set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status); 994 + } else { 995 + iwl_mvm_remove_aux_roc_te(mvm, mvmvif, 996 + &mvmvif->time_event_data); 997 + } 1022 998 1023 999 iwl_mvm_roc_finished(mvm); 1024 1000 ··· 1156 1126 cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id, 1157 1127 mvmvif->color)), 1158 1128 .action = cpu_to_le32(FW_CTXT_ACTION_ADD), 1159 - .conf_id = cpu_to_le32(SESSION_PROTECT_CONF_ASSOC), 1160 1129 .duration_tu = cpu_to_le32(MSEC_TO_TU(duration)), 1161 1130 }; 1131 + 1132 + /* The time_event_data.id field is reused to save session 1133 + * protection's configuration. 1134 + */ 1135 + mvmvif->time_event_data.id = SESSION_PROTECT_CONF_ASSOC; 1136 + cmd.conf_id = cpu_to_le32(mvmvif->time_event_data.id); 1162 1137 1163 1138 lockdep_assert_held(&mvm->mutex); 1164 1139
+20
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
··· 252 252 253 253 iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL, 254 254 CSR_AUTO_FUNC_BOOT_ENA); 255 + 256 + if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210) { 257 + /* 258 + * The firmware initializes this again later (to a smaller 259 + * value), but for the boot process initialize the LTR to 260 + * ~250 usec. 261 + */ 262 + u32 val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ | 263 + u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 264 + CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) | 265 + u32_encode_bits(250, 266 + CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) | 267 + CSR_LTR_LONG_VAL_AD_SNOOP_REQ | 268 + u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 269 + CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) | 270 + u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL); 271 + 272 + iwl_write32(trans, CSR_LTR_LONG_VAL_AD, val); 273 + } 274 + 255 275 if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) 256 276 iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1); 257 277 else
+27 -9
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
··· 2156 2156 void *buf, int dwords) 2157 2157 { 2158 2158 unsigned long flags; 2159 - int offs, ret = 0; 2159 + int offs = 0; 2160 2160 u32 *vals = buf; 2161 2161 2162 - if (iwl_trans_grab_nic_access(trans, &flags)) { 2163 - iwl_write32(trans, HBUS_TARG_MEM_RADDR, addr); 2164 - for (offs = 0; offs < dwords; offs++) 2165 - vals[offs] = iwl_read32(trans, HBUS_TARG_MEM_RDAT); 2166 - iwl_trans_release_nic_access(trans, &flags); 2167 - } else { 2168 - ret = -EBUSY; 2162 + while (offs < dwords) { 2163 + /* limit the time we spin here under lock to 1/2s */ 2164 + ktime_t timeout = ktime_add_us(ktime_get(), 500 * USEC_PER_MSEC); 2165 + 2166 + if (iwl_trans_grab_nic_access(trans, &flags)) { 2167 + iwl_write32(trans, HBUS_TARG_MEM_RADDR, 2168 + addr + 4 * offs); 2169 + 2170 + while (offs < dwords) { 2171 + vals[offs] = iwl_read32(trans, 2172 + HBUS_TARG_MEM_RDAT); 2173 + offs++; 2174 + 2175 + /* calling ktime_get is expensive so 2176 + * do it once in 128 reads 2177 + */ 2178 + if (offs % 128 == 0 && ktime_after(ktime_get(), 2179 + timeout)) 2180 + break; 2181 + } 2182 + iwl_trans_release_nic_access(trans, &flags); 2183 + } else { 2184 + return -EBUSY; 2185 + } 2169 2186 } 2170 - return ret; 2187 + 2188 + return 0; 2171 2189 } 2172 2190 2173 2191 static int iwl_trans_pcie_write_mem(struct iwl_trans *trans, u32 addr,
+1 -1
drivers/net/wireless/realtek/rtw88/fw.c
··· 1482 1482 int rtw_fw_dump_fifo(struct rtw_dev *rtwdev, u8 fifo_sel, u32 addr, u32 size, 1483 1483 u32 *buffer) 1484 1484 { 1485 - if (!rtwdev->chip->fw_fifo_addr) { 1485 + if (!rtwdev->chip->fw_fifo_addr[0]) { 1486 1486 rtw_dbg(rtwdev, RTW_DBG_FW, "chip not support dump fw fifo\n"); 1487 1487 return -ENOTSUPP; 1488 1488 }
+2 -2
drivers/nfc/s3fwrn5/i2c.c
··· 25 25 struct i2c_client *i2c_dev; 26 26 struct nci_dev *ndev; 27 27 28 - unsigned int gpio_en; 29 - unsigned int gpio_fw_wake; 28 + int gpio_en; 29 + int gpio_fw_wake; 30 30 31 31 struct mutex mutex; 32 32
+18 -7
drivers/nvme/host/core.c
··· 2929 2929 static int nvme_get_effects_log(struct nvme_ctrl *ctrl, u8 csi, 2930 2930 struct nvme_effects_log **log) 2931 2931 { 2932 - struct nvme_cel *cel = xa_load(&ctrl->cels, csi); 2932 + struct nvme_effects_log *cel = xa_load(&ctrl->cels, csi); 2933 2933 int ret; 2934 2934 2935 2935 if (cel) ··· 2940 2940 return -ENOMEM; 2941 2941 2942 2942 ret = nvme_get_log(ctrl, 0x00, NVME_LOG_CMD_EFFECTS, 0, csi, 2943 - &cel->log, sizeof(cel->log), 0); 2943 + cel, sizeof(*cel), 0); 2944 2944 if (ret) { 2945 2945 kfree(cel); 2946 2946 return ret; 2947 2947 } 2948 2948 2949 - cel->csi = csi; 2950 - xa_store(&ctrl->cels, cel->csi, cel, GFP_KERNEL); 2949 + xa_store(&ctrl->cels, csi, cel, GFP_KERNEL); 2951 2950 out: 2952 - *log = &cel->log; 2951 + *log = cel; 2953 2952 return 0; 2954 2953 } 2955 2954 ··· 4373 4374 } 4374 4375 EXPORT_SYMBOL_GPL(nvme_uninit_ctrl); 4375 4376 4377 + static void nvme_free_cels(struct nvme_ctrl *ctrl) 4378 + { 4379 + struct nvme_effects_log *cel; 4380 + unsigned long i; 4381 + 4382 + xa_for_each (&ctrl->cels, i, cel) { 4383 + xa_erase(&ctrl->cels, i); 4384 + kfree(cel); 4385 + } 4386 + 4387 + xa_destroy(&ctrl->cels); 4388 + } 4389 + 4376 4390 static void nvme_free_ctrl(struct device *dev) 4377 4391 { 4378 4392 struct nvme_ctrl *ctrl = ··· 4395 4383 if (!subsys || ctrl->instance != subsys->instance) 4396 4384 ida_simple_remove(&nvme_instance_ida, ctrl->instance); 4397 4385 4398 - xa_destroy(&ctrl->cels); 4399 - 4386 + nvme_free_cels(ctrl); 4400 4387 nvme_mpath_uninit(ctrl); 4401 4388 __free_page(ctrl->discard_page); 4402 4389
-6
drivers/nvme/host/nvme.h
··· 226 226 #endif 227 227 }; 228 228 229 - struct nvme_cel { 230 - struct list_head entry; 231 - struct nvme_effects_log log; 232 - u8 csi; 233 - }; 234 - 235 229 struct nvme_ctrl { 236 230 bool comp_seen; 237 231 enum nvme_ctrl_state state;
+15
drivers/nvme/host/pci.c
··· 292 292 nvmeq->dbbuf_cq_ei = &dev->dbbuf_eis[cq_idx(qid, dev->db_stride)]; 293 293 } 294 294 295 + static void nvme_dbbuf_free(struct nvme_queue *nvmeq) 296 + { 297 + if (!nvmeq->qid) 298 + return; 299 + 300 + nvmeq->dbbuf_sq_db = NULL; 301 + nvmeq->dbbuf_cq_db = NULL; 302 + nvmeq->dbbuf_sq_ei = NULL; 303 + nvmeq->dbbuf_cq_ei = NULL; 304 + } 305 + 295 306 static void nvme_dbbuf_set(struct nvme_dev *dev) 296 307 { 297 308 struct nvme_command c; 309 + unsigned int i; 298 310 299 311 if (!dev->dbbuf_dbs) 300 312 return; ··· 320 308 dev_warn(dev->ctrl.device, "unable to set dbbuf\n"); 321 309 /* Free memory and continue on */ 322 310 nvme_dbbuf_dma_free(dev); 311 + 312 + for (i = 1; i <= dev->online_queues; i++) 313 + nvme_dbbuf_free(&dev->queues[i]); 323 314 } 324 315 } 325 316
+1
drivers/platform/x86/acer-wmi.c
··· 111 111 {KE_KEY, 0x64, {KEY_SWITCHVIDEOMODE} }, /* Display Switch */ 112 112 {KE_IGNORE, 0x81, {KEY_SLEEP} }, 113 113 {KE_KEY, 0x82, {KEY_TOUCHPAD_TOGGLE} }, /* Touch Pad Toggle */ 114 + {KE_IGNORE, 0x84, {KEY_KBDILLUMTOGGLE} }, /* Automatic Keyboard background light toggle */ 114 115 {KE_KEY, KEY_TOUCHPAD_ON, {KEY_TOUCHPAD_ON} }, 115 116 {KE_KEY, KEY_TOUCHPAD_OFF, {KEY_TOUCHPAD_OFF} }, 116 117 {KE_IGNORE, 0x83, {KEY_TOUCHPAD_TOGGLE} },
+6
drivers/platform/x86/intel-vbtn.c
··· 206 206 DMI_MATCH(DMI_PRODUCT_NAME, "HP Stream x360 Convertible PC 11"), 207 207 }, 208 208 }, 209 + { 210 + .matches = { 211 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 212 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion 13 x360 PC"), 213 + }, 214 + }, 209 215 {} /* Array terminator */ 210 216 }; 211 217
+12 -1
drivers/platform/x86/thinkpad_acpi.c
··· 3218 3218 3219 3219 in_tablet_mode = hotkey_gmms_get_tablet_mode(res, 3220 3220 &has_tablet_mode); 3221 - if (has_tablet_mode) 3221 + /* 3222 + * The Yoga 11e series has 2 accelerometers described by a 3223 + * BOSC0200 ACPI node. This setup relies on a Windows service 3224 + * which calls special ACPI methods on this node to report 3225 + * the laptop/tent/tablet mode to the EC. The bmc150 iio driver 3226 + * does not support this, so skip the hotkey on these models. 3227 + */ 3228 + if (has_tablet_mode && !acpi_dev_present("BOSC0200", "1", -1)) 3222 3229 tp_features.hotkey_tablet = TP_HOTKEY_TABLET_USES_GMMS; 3223 3230 type = "GMMS"; 3224 3231 } else if (acpi_evalf(hkey_handle, &res, "MHKG", "qd")) { ··· 4235 4228 pr_err("error while attempting to reset the event firmware interface\n"); 4236 4229 4237 4230 tpacpi_send_radiosw_update(); 4231 + tpacpi_input_send_tabletsw(); 4238 4232 hotkey_tablet_mode_notify_change(); 4239 4233 hotkey_wakeup_reason_notify_change(); 4240 4234 hotkey_wakeup_hotunplug_complete_notify_change(); ··· 8784 8776 TPACPI_Q_LNV3('N', '2', 'C', TPACPI_FAN_2CTL), /* P52 / P72 */ 8785 8777 TPACPI_Q_LNV3('N', '2', 'E', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (1st gen) */ 8786 8778 TPACPI_Q_LNV3('N', '2', 'O', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (2nd gen) */ 8779 + TPACPI_Q_LNV3('N', '2', 'V', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (3nd gen) */ 8780 + TPACPI_Q_LNV3('N', '3', '0', TPACPI_FAN_2CTL), /* P15 (1st gen) / P15v (1st gen) */ 8787 8781 }; 8788 8782 8789 8783 static int __init fan_init(struct ibm_init_struct *iibm) ··· 9713 9703 TPACPI_Q_LNV3('R', '0', 'B', true), /* Thinkpad 11e gen 3 */ 9714 9704 TPACPI_Q_LNV3('R', '0', 'C', true), /* Thinkpad 13 */ 9715 9705 TPACPI_Q_LNV3('R', '0', 'J', true), /* Thinkpad 13 gen 2 */ 9706 + TPACPI_Q_LNV3('R', '0', 'K', true), /* Thinkpad 11e gen 4 celeron BIOS */ 9716 9707 }; 9717 9708 9718 9709 static int __init tpacpi_battery_init(struct ibm_init_struct *ibm)
+1 -2
drivers/platform/x86/toshiba_acpi.c
··· 1478 1478 struct toshiba_acpi_dev *dev = PDE_DATA(file_inode(file)); 1479 1479 char *buffer; 1480 1480 char *cmd; 1481 - int lcd_out, crt_out, tv_out; 1481 + int lcd_out = -1, crt_out = -1, tv_out = -1; 1482 1482 int remain = count; 1483 1483 int value; 1484 1484 int ret; ··· 1510 1510 1511 1511 kfree(cmd); 1512 1512 1513 - lcd_out = crt_out = tv_out = -1; 1514 1513 ret = get_video_status(dev, &video_out); 1515 1514 if (!ret) { 1516 1515 unsigned int new_video_out = video_out;
+50
drivers/platform/x86/touchscreen_dmi.c
··· 295 295 .properties = irbis_tw90_props, 296 296 }; 297 297 298 + static const struct property_entry irbis_tw118_props[] = { 299 + PROPERTY_ENTRY_U32("touchscreen-min-x", 20), 300 + PROPERTY_ENTRY_U32("touchscreen-min-y", 30), 301 + PROPERTY_ENTRY_U32("touchscreen-size-x", 1960), 302 + PROPERTY_ENTRY_U32("touchscreen-size-y", 1510), 303 + PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-irbis-tw118.fw"), 304 + PROPERTY_ENTRY_U32("silead,max-fingers", 10), 305 + { } 306 + }; 307 + 308 + static const struct ts_dmi_data irbis_tw118_data = { 309 + .acpi_name = "MSSL1680:00", 310 + .properties = irbis_tw118_props, 311 + }; 312 + 298 313 static const struct property_entry itworks_tw891_props[] = { 299 314 PROPERTY_ENTRY_U32("touchscreen-min-x", 1), 300 315 PROPERTY_ENTRY_U32("touchscreen-min-y", 5), ··· 638 623 .properties = pov_mobii_wintab_p1006w_v10_props, 639 624 }; 640 625 626 + static const struct property_entry predia_basic_props[] = { 627 + PROPERTY_ENTRY_U32("touchscreen-min-x", 3), 628 + PROPERTY_ENTRY_U32("touchscreen-min-y", 10), 629 + PROPERTY_ENTRY_U32("touchscreen-size-x", 1728), 630 + PROPERTY_ENTRY_U32("touchscreen-size-y", 1144), 631 + PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 632 + PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-predia-basic.fw"), 633 + PROPERTY_ENTRY_U32("silead,max-fingers", 10), 634 + PROPERTY_ENTRY_BOOL("silead,home-button"), 635 + { } 636 + }; 637 + 638 + static const struct ts_dmi_data predia_basic_data = { 639 + .acpi_name = "MSSL1680:00", 640 + .properties = predia_basic_props, 641 + }; 642 + 641 643 static const struct property_entry schneider_sct101ctm_props[] = { 642 644 PROPERTY_ENTRY_U32("touchscreen-size-x", 1715), 643 645 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), ··· 969 937 }, 970 938 }, 971 939 { 940 + /* Irbis TW118 */ 941 + .driver_data = (void *)&irbis_tw118_data, 942 + .matches = { 943 + DMI_MATCH(DMI_SYS_VENDOR, "IRBIS"), 944 + DMI_MATCH(DMI_PRODUCT_NAME, "TW118"), 945 + }, 946 + }, 947 + { 972 948 
/* I.T.Works TW891 */ 973 949 .driver_data = (void *)&itworks_tw891_data, 974 950 .matches = { ··· 1147 1107 DMI_MATCH(DMI_BIOS_VERSION, "3BAIR1014"), 1148 1108 /* Above matches are too generic, add bios-date match */ 1149 1109 DMI_MATCH(DMI_BIOS_DATE, "10/24/2014"), 1110 + }, 1111 + }, 1112 + { 1113 + /* Predia Basic tablet) */ 1114 + .driver_data = (void *)&predia_basic_data, 1115 + .matches = { 1116 + DMI_MATCH(DMI_SYS_VENDOR, "Insyde"), 1117 + DMI_MATCH(DMI_PRODUCT_NAME, "CherryTrail"), 1118 + /* Above matches are too generic, add bios-version match */ 1119 + DMI_MATCH(DMI_BIOS_VERSION, "Mx.WT107.KUBNGEA"), 1150 1120 }, 1151 1121 }, 1152 1122 {
+16 -33
drivers/ptp/ptp_clockmatrix.c
··· 103 103 return 0; 104 104 } 105 105 106 - static int idtcm_strverscmp(const char *ver1, const char *ver2) 106 + static int idtcm_strverscmp(const char *version1, const char *version2) 107 107 { 108 - u8 num1; 109 - u8 num2; 110 - int result = 0; 108 + u8 ver1[3], ver2[3]; 109 + int i; 111 110 112 - /* loop through each level of the version string */ 113 - while (result == 0) { 114 - /* extract leading version numbers */ 115 - if (kstrtou8(ver1, 10, &num1) < 0) 111 + if (sscanf(version1, "%hhu.%hhu.%hhu", 112 + &ver1[0], &ver1[1], &ver1[2]) != 3) 113 + return -1; 114 + if (sscanf(version2, "%hhu.%hhu.%hhu", 115 + &ver2[0], &ver2[1], &ver2[2]) != 3) 116 + return -1; 117 + 118 + for (i = 0; i < 3; i++) { 119 + if (ver1[i] > ver2[i]) 120 + return 1; 121 + if (ver1[i] < ver2[i]) 116 122 return -1; 117 - 118 - if (kstrtou8(ver2, 10, &num2) < 0) 119 - return -1; 120 - 121 - /* if numbers differ, then set the result */ 122 - if (num1 < num2) 123 - result = -1; 124 - else if (num1 > num2) 125 - result = 1; 126 - else { 127 - /* if numbers are the same, go to next level */ 128 - ver1 = strchr(ver1, '.'); 129 - ver2 = strchr(ver2, '.'); 130 - if (!ver1 && !ver2) 131 - break; 132 - else if (!ver1) 133 - result = -1; 134 - else if (!ver2) 135 - result = 1; 136 - else { 137 - ver1++; 138 - ver2++; 139 - } 140 - } 141 123 } 142 - return result; 124 + 125 + return 0; 143 126 } 144 127 145 128 static int idtcm_xfer_read(struct idtcm *idtcm,
+6
drivers/s390/block/dasd.c
··· 2980 2980 2981 2981 if (!block) 2982 2982 return -EINVAL; 2983 + /* 2984 + * If the request is an ERP request there is nothing to requeue. 2985 + * This will be done with the remaining original request. 2986 + */ 2987 + if (cqr->refers) 2988 + return 0; 2983 2989 spin_lock_irq(&cqr->dq->lock); 2984 2990 req = (struct request *) cqr->callback_data; 2985 2991 blk_mq_requeue_request(req, false);
+6 -3
drivers/s390/net/qeth_core.h
··· 417 417 QETH_QDIO_BUF_EMPTY, 418 418 /* Filled by driver; owned by hardware in order to be sent. */ 419 419 QETH_QDIO_BUF_PRIMED, 420 - /* Identified to be pending in TPQ. */ 420 + /* Discovered by the TX completion code: */ 421 421 QETH_QDIO_BUF_PENDING, 422 - /* Found in completion queue. */ 423 - QETH_QDIO_BUF_IN_CQ, 422 + /* Finished by the TX completion code: */ 423 + QETH_QDIO_BUF_NEED_QAOB, 424 + /* Received QAOB notification on CQ: */ 425 + QETH_QDIO_BUF_QAOB_OK, 426 + QETH_QDIO_BUF_QAOB_ERROR, 424 427 /* Handled via transfer pending / completion queue. */ 425 428 QETH_QDIO_BUF_HANDLED_DELAYED, 426 429 };
+54 -28
drivers/s390/net/qeth_core_main.c
··· 33 33 34 34 #include <net/iucv/af_iucv.h> 35 35 #include <net/dsfield.h> 36 + #include <net/sock.h> 36 37 37 38 #include <asm/ebcdic.h> 38 39 #include <asm/chpid.h> ··· 500 499 501 500 } 502 501 } 503 - if (forced_cleanup && (atomic_read(&(q->bufs[bidx]->state)) == 504 - QETH_QDIO_BUF_HANDLED_DELAYED)) { 505 - /* for recovery situations */ 506 - qeth_init_qdio_out_buf(q, bidx); 507 - QETH_CARD_TEXT(q->card, 2, "clprecov"); 508 - } 509 502 } 510 503 511 504 static void qeth_qdio_handle_aob(struct qeth_card *card, 512 505 unsigned long phys_aob_addr) 513 506 { 507 + enum qeth_qdio_out_buffer_state new_state = QETH_QDIO_BUF_QAOB_OK; 514 508 struct qaob *aob; 515 509 struct qeth_qdio_out_buffer *buffer; 516 510 enum iucv_tx_notify notification; ··· 516 520 QETH_CARD_TEXT_(card, 5, "%lx", phys_aob_addr); 517 521 buffer = (struct qeth_qdio_out_buffer *) aob->user1; 518 522 QETH_CARD_TEXT_(card, 5, "%lx", aob->user1); 519 - 520 - if (atomic_cmpxchg(&buffer->state, QETH_QDIO_BUF_PRIMED, 521 - QETH_QDIO_BUF_IN_CQ) == QETH_QDIO_BUF_PRIMED) { 522 - notification = TX_NOTIFY_OK; 523 - } else { 524 - WARN_ON_ONCE(atomic_read(&buffer->state) != 525 - QETH_QDIO_BUF_PENDING); 526 - atomic_set(&buffer->state, QETH_QDIO_BUF_IN_CQ); 527 - notification = TX_NOTIFY_DELAYED_OK; 528 - } 529 - 530 - if (aob->aorc != 0) { 531 - QETH_CARD_TEXT_(card, 2, "aorc%02X", aob->aorc); 532 - notification = qeth_compute_cq_notification(aob->aorc, 1); 533 - } 534 - qeth_notify_skbs(buffer->q, buffer, notification); 535 523 536 524 /* Free dangling allocations. The attached skbs are handled by 537 525 * qeth_cleanup_handled_pending(). 
··· 528 548 if (data && buffer->is_header[i]) 529 549 kmem_cache_free(qeth_core_header_cache, data); 530 550 } 531 - atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED); 551 + 552 + if (aob->aorc) { 553 + QETH_CARD_TEXT_(card, 2, "aorc%02X", aob->aorc); 554 + new_state = QETH_QDIO_BUF_QAOB_ERROR; 555 + } 556 + 557 + switch (atomic_xchg(&buffer->state, new_state)) { 558 + case QETH_QDIO_BUF_PRIMED: 559 + /* Faster than TX completion code. */ 560 + notification = qeth_compute_cq_notification(aob->aorc, 0); 561 + qeth_notify_skbs(buffer->q, buffer, notification); 562 + atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED); 563 + break; 564 + case QETH_QDIO_BUF_PENDING: 565 + /* TX completion code is active and will handle the async 566 + * completion for us. 567 + */ 568 + break; 569 + case QETH_QDIO_BUF_NEED_QAOB: 570 + /* TX completion code is already finished. */ 571 + notification = qeth_compute_cq_notification(aob->aorc, 1); 572 + qeth_notify_skbs(buffer->q, buffer, notification); 573 + atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED); 574 + break; 575 + default: 576 + WARN_ON_ONCE(1); 577 + } 532 578 533 579 qdio_release_aob(aob); 534 580 } ··· 1411 1405 skb_queue_walk(&buf->skb_list, skb) { 1412 1406 QETH_CARD_TEXT_(q->card, 5, "skbn%d", notification); 1413 1407 QETH_CARD_TEXT_(q->card, 5, "%lx", (long) skb); 1414 - if (skb->protocol == htons(ETH_P_AF_IUCV) && skb->sk) 1408 + if (skb->sk && skb->sk->sk_family == PF_IUCV) 1415 1409 iucv_sk(skb->sk)->sk_txnotify(skb, notification); 1416 1410 } 1417 1411 } ··· 1421 1415 { 1422 1416 struct qeth_qdio_out_q *queue = buf->q; 1423 1417 struct sk_buff *skb; 1424 - 1425 - /* release may never happen from within CQ tasklet scope */ 1426 - WARN_ON_ONCE(atomic_read(&buf->state) == QETH_QDIO_BUF_IN_CQ); 1427 1418 1428 1419 if (atomic_read(&buf->state) == QETH_QDIO_BUF_PENDING) 1429 1420 qeth_notify_skbs(queue, buf, TX_NOTIFY_GENERALERROR); ··· 6081 6078 6082 6079 if (atomic_cmpxchg(&buffer->state, 
QETH_QDIO_BUF_PRIMED, 6083 6080 QETH_QDIO_BUF_PENDING) == 6084 - QETH_QDIO_BUF_PRIMED) 6081 + QETH_QDIO_BUF_PRIMED) { 6085 6082 qeth_notify_skbs(queue, buffer, TX_NOTIFY_PENDING); 6083 + 6084 + /* Handle race with qeth_qdio_handle_aob(): */ 6085 + switch (atomic_xchg(&buffer->state, 6086 + QETH_QDIO_BUF_NEED_QAOB)) { 6087 + case QETH_QDIO_BUF_PENDING: 6088 + /* No concurrent QAOB notification. */ 6089 + break; 6090 + case QETH_QDIO_BUF_QAOB_OK: 6091 + qeth_notify_skbs(queue, buffer, 6092 + TX_NOTIFY_DELAYED_OK); 6093 + atomic_set(&buffer->state, 6094 + QETH_QDIO_BUF_HANDLED_DELAYED); 6095 + break; 6096 + case QETH_QDIO_BUF_QAOB_ERROR: 6097 + qeth_notify_skbs(queue, buffer, 6098 + TX_NOTIFY_DELAYED_GENERALERROR); 6099 + atomic_set(&buffer->state, 6100 + QETH_QDIO_BUF_HANDLED_DELAYED); 6101 + break; 6102 + default: 6103 + WARN_ON_ONCE(1); 6104 + } 6105 + } 6086 6106 6087 6107 QETH_CARD_TEXT_(card, 5, "pel%u", bidx); 6088 6108
+2 -16
drivers/s390/net/qeth_l2_main.c
··· 983 983 * change notification' and thus can support the learning_sync bridgeport 984 984 * attribute 985 985 * @card: qeth_card structure pointer 986 - * 987 - * This is a destructive test and must be called before dev2br or 988 - * bridgeport address notification is enabled! 989 986 */ 990 987 static void qeth_l2_detect_dev2br_support(struct qeth_card *card) 991 988 { 992 989 struct qeth_priv *priv = netdev_priv(card->dev); 993 990 bool dev2br_supported; 994 - int rc; 995 991 996 992 QETH_CARD_TEXT(card, 2, "d2brsup"); 997 993 if (!IS_IQD(card)) 998 994 return; 999 995 1000 996 /* dev2br requires valid cssid,iid,chid */ 1001 - if (!card->info.ids_valid) { 1002 - dev2br_supported = false; 1003 - } else if (css_general_characteristics.enarf) { 1004 - dev2br_supported = true; 1005 - } else { 1006 - /* Old machines don't have the feature bit: 1007 - * Probe by testing whether a disable succeeds 1008 - */ 1009 - rc = qeth_l2_pnso(card, PNSO_OC_NET_ADDR_INFO, 0, NULL, NULL); 1010 - dev2br_supported = !rc; 1011 - } 997 + dev2br_supported = card->info.ids_valid && 998 + css_general_characteristics.enarf; 1012 999 QETH_CARD_TEXT_(card, 2, "D2Bsup%02x", dev2br_supported); 1013 1000 1014 1001 if (dev2br_supported) ··· 2220 2233 struct net_device *dev = card->dev; 2221 2234 int rc = 0; 2222 2235 2223 - /* query before bridgeport_notification may be enabled */ 2224 2236 qeth_l2_detect_dev2br_support(card); 2225 2237 2226 2238 mutex_lock(&card->sbp_lock);
+15 -8
drivers/scsi/libiscsi.c
··· 533 533 if (conn->task == task) 534 534 conn->task = NULL; 535 535 536 - if (conn->ping_task == task) 537 - conn->ping_task = NULL; 536 + if (READ_ONCE(conn->ping_task) == task) 537 + WRITE_ONCE(conn->ping_task, NULL); 538 538 539 539 /* release get from queueing */ 540 540 __iscsi_put_task(task); ··· 737 737 task->hdr->itt = build_itt(task->itt, 738 738 task->conn->session->age); 739 739 } 740 + 741 + if (unlikely(READ_ONCE(conn->ping_task) == INVALID_SCSI_TASK)) 742 + WRITE_ONCE(conn->ping_task, task); 740 743 741 744 if (!ihost->workq) { 742 745 if (iscsi_prep_mgmt_task(conn, task)) ··· 944 941 struct iscsi_nopout hdr; 945 942 struct iscsi_task *task; 946 943 947 - if (!rhdr && conn->ping_task) 948 - return -EINVAL; 944 + if (!rhdr) { 945 + if (READ_ONCE(conn->ping_task)) 946 + return -EINVAL; 947 + WRITE_ONCE(conn->ping_task, INVALID_SCSI_TASK); 948 + } 949 949 950 950 memset(&hdr, 0, sizeof(struct iscsi_nopout)); 951 951 hdr.opcode = ISCSI_OP_NOOP_OUT | ISCSI_OP_IMMEDIATE; ··· 963 957 964 958 task = __iscsi_conn_send_pdu(conn, (struct iscsi_hdr *)&hdr, NULL, 0); 965 959 if (!task) { 960 + if (!rhdr) 961 + WRITE_ONCE(conn->ping_task, NULL); 966 962 iscsi_conn_printk(KERN_ERR, conn, "Could not send nopout\n"); 967 963 return -EIO; 968 964 } else if (!rhdr) { 969 965 /* only track our nops */ 970 - conn->ping_task = task; 971 966 conn->last_ping = jiffies; 972 967 } 973 968 ··· 991 984 struct iscsi_conn *conn = task->conn; 992 985 int rc = 0; 993 986 994 - if (conn->ping_task != task) { 987 + if (READ_ONCE(conn->ping_task) != task) { 995 988 /* 996 989 * If this is not in response to one of our 997 990 * nops then it must be from userspace. 
··· 1930 1923 */ 1931 1924 static int iscsi_has_ping_timed_out(struct iscsi_conn *conn) 1932 1925 { 1933 - if (conn->ping_task && 1926 + if (READ_ONCE(conn->ping_task) && 1934 1927 time_before_eq(conn->last_recv + (conn->recv_timeout * HZ) + 1935 1928 (conn->ping_timeout * HZ), jiffies)) 1936 1929 return 1; ··· 2065 2058 * Checking the transport already or nop from a cmd timeout still 2066 2059 * running 2067 2060 */ 2068 - if (conn->ping_task) { 2061 + if (READ_ONCE(conn->ping_task)) { 2069 2062 task->have_checked_conn = true; 2070 2063 rc = BLK_EH_RESET_TIMER; 2071 2064 goto done;
+23 -14
drivers/scsi/ufs/ufshcd.c
··· 1294 1294 } 1295 1295 spin_unlock_irqrestore(hba->host->host_lock, irq_flags); 1296 1296 1297 + pm_runtime_get_noresume(hba->dev); 1298 + if (!pm_runtime_active(hba->dev)) { 1299 + pm_runtime_put_noidle(hba->dev); 1300 + ret = -EAGAIN; 1301 + goto out; 1302 + } 1297 1303 start = ktime_get(); 1298 1304 ret = ufshcd_devfreq_scale(hba, scale_up); 1305 + pm_runtime_put(hba->dev); 1299 1306 1300 1307 trace_ufshcd_profile_clk_scaling(dev_name(hba->dev), 1301 1308 (scale_up ? "up" : "down"), ··· 3199 3192 /* Get the length of descriptor */ 3200 3193 ufshcd_map_desc_id_to_length(hba, desc_id, &buff_len); 3201 3194 if (!buff_len) { 3202 - dev_err(hba->dev, "%s: Failed to get desc length", __func__); 3195 + dev_err(hba->dev, "%s: Failed to get desc length\n", __func__); 3196 + return -EINVAL; 3197 + } 3198 + 3199 + if (param_offset >= buff_len) { 3200 + dev_err(hba->dev, "%s: Invalid offset 0x%x in descriptor IDN 0x%x, length 0x%x\n", 3201 + __func__, param_offset, desc_id, buff_len); 3203 3202 return -EINVAL; 3204 3203 } 3205 3204 3206 3205 /* Check whether we need temp memory */ 3207 3206 if (param_offset != 0 || param_size < buff_len) { 3208 - desc_buf = kmalloc(buff_len, GFP_KERNEL); 3207 + desc_buf = kzalloc(buff_len, GFP_KERNEL); 3209 3208 if (!desc_buf) 3210 3209 return -ENOMEM; 3211 3210 } else { ··· 3225 3212 desc_buf, &buff_len); 3226 3213 3227 3214 if (ret) { 3228 - dev_err(hba->dev, "%s: Failed reading descriptor. desc_id %d, desc_index %d, param_offset %d, ret %d", 3215 + dev_err(hba->dev, "%s: Failed reading descriptor. 
desc_id %d, desc_index %d, param_offset %d, ret %d\n", 3229 3216 __func__, desc_id, desc_index, param_offset, ret); 3230 3217 goto out; 3231 3218 } 3232 3219 3233 3220 /* Sanity check */ 3234 3221 if (desc_buf[QUERY_DESC_DESC_TYPE_OFFSET] != desc_id) { 3235 - dev_err(hba->dev, "%s: invalid desc_id %d in descriptor header", 3222 + dev_err(hba->dev, "%s: invalid desc_id %d in descriptor header\n", 3236 3223 __func__, desc_buf[QUERY_DESC_DESC_TYPE_OFFSET]); 3237 3224 ret = -EINVAL; 3238 3225 goto out; ··· 3242 3229 buff_len = desc_buf[QUERY_DESC_LENGTH_OFFSET]; 3243 3230 ufshcd_update_desc_length(hba, desc_id, desc_index, buff_len); 3244 3231 3245 - /* Check wherher we will not copy more data, than available */ 3246 - if (is_kmalloc && (param_offset + param_size) > buff_len) 3247 - param_size = buff_len - param_offset; 3248 - 3249 - if (is_kmalloc) 3232 + if (is_kmalloc) { 3233 + /* Make sure we don't copy more data than available */ 3234 + if (param_offset + param_size > buff_len) 3235 + param_size = buff_len - param_offset; 3250 3236 memcpy(param_read_buf, &desc_buf[param_offset], param_size); 3237 + } 3251 3238 out: 3252 3239 if (is_kmalloc) 3253 3240 kfree(desc_buf); ··· 8913 8900 if (ufshcd_is_ufs_dev_poweroff(hba) && ufshcd_is_link_off(hba)) 8914 8901 goto out; 8915 8902 8916 - if (pm_runtime_suspended(hba->dev)) { 8917 - ret = ufshcd_runtime_resume(hba); 8918 - if (ret) 8919 - goto out; 8920 - } 8903 + pm_runtime_get_sync(hba->dev); 8921 8904 8922 8905 ret = ufshcd_suspend(hba, UFS_SHUTDOWN_PM); 8923 8906 out:
+1 -4
drivers/soc/fsl/dpio/dpio-driver.c
··· 95 95 { 96 96 int error; 97 97 struct fsl_mc_device_irq *irq; 98 - cpumask_t mask; 99 98 100 99 irq = dpio_dev->irqs[0]; 101 100 error = devm_request_irq(&dpio_dev->dev, ··· 111 112 } 112 113 113 114 /* set the affinity hint */ 114 - cpumask_clear(&mask); 115 - cpumask_set_cpu(cpu, &mask); 116 - if (irq_set_affinity_hint(irq->msi_desc->irq, &mask)) 115 + if (irq_set_affinity_hint(irq->msi_desc->irq, cpumask_of(cpu))) 117 116 dev_err(&dpio_dev->dev, 118 117 "irq_set_affinity failed irq %d cpu %d\n", 119 118 irq->msi_desc->irq, cpu);
+2 -1
drivers/spi/spi-dw-core.c
··· 875 875 master->set_cs = dw_spi_set_cs; 876 876 master->transfer_one = dw_spi_transfer_one; 877 877 master->handle_err = dw_spi_handle_err; 878 - master->mem_ops = &dws->mem_ops; 878 + if (dws->mem_ops.exec_op) 879 + master->mem_ops = &dws->mem_ops; 879 880 master->max_speed_hz = dws->max_freq; 880 881 master->dev.of_node = dev->of_node; 881 882 master->dev.fwnode = dev->fwnode;
+1
drivers/spi/spi-imx.c
··· 1686 1686 1687 1687 pm_runtime_set_autosuspend_delay(spi_imx->dev, MXC_RPM_TIMEOUT); 1688 1688 pm_runtime_use_autosuspend(spi_imx->dev); 1689 + pm_runtime_get_noresume(spi_imx->dev); 1689 1690 pm_runtime_set_active(spi_imx->dev); 1690 1691 pm_runtime_enable(spi_imx->dev); 1691 1692
+7
drivers/spi/spi-nxp-fspi.c
··· 1001 1001 struct resource *res; 1002 1002 struct nxp_fspi *f; 1003 1003 int ret; 1004 + u32 reg; 1004 1005 1005 1006 ctlr = spi_alloc_master(&pdev->dev, sizeof(*f)); 1006 1007 if (!ctlr) ··· 1032 1031 ret = PTR_ERR(f->iobase); 1033 1032 goto err_put_ctrl; 1034 1033 } 1034 + 1035 + /* Clear potential interrupts */ 1036 + reg = fspi_readl(f, f->iobase + FSPI_INTR); 1037 + if (reg) 1038 + fspi_writel(f, reg, f->iobase + FSPI_INTR); 1039 + 1035 1040 1036 1041 /* find the resources - controller memory mapped space */ 1037 1042 if (is_acpi_node(f->dev->fwnode))
+5
drivers/spi/spi.c
··· 3372 3372 if (!spi->max_speed_hz) 3373 3373 spi->max_speed_hz = spi->controller->max_speed_hz; 3374 3374 3375 + mutex_lock(&spi->controller->io_mutex); 3376 + 3375 3377 if (spi->controller->setup) 3376 3378 status = spi->controller->setup(spi); 3377 3379 3378 3380 if (spi->controller->auto_runtime_pm && spi->controller->set_cs) { 3379 3381 status = pm_runtime_get_sync(spi->controller->dev.parent); 3380 3382 if (status < 0) { 3383 + mutex_unlock(&spi->controller->io_mutex); 3381 3384 pm_runtime_put_noidle(spi->controller->dev.parent); 3382 3385 dev_err(&spi->controller->dev, "Failed to power device: %d\n", 3383 3386 status); ··· 3401 3398 } else { 3402 3399 spi_set_cs(spi, false); 3403 3400 } 3401 + 3402 + mutex_unlock(&spi->controller->io_mutex); 3404 3403 3405 3404 if (spi->rt && !spi->controller->rt) { 3406 3405 spi->controller->rt = true;
+1 -1
drivers/staging/media/sunxi/cedrus/cedrus_h264.c
··· 446 446 reg |= (pps->second_chroma_qp_index_offset & 0x3f) << 16; 447 447 reg |= (pps->chroma_qp_index_offset & 0x3f) << 8; 448 448 reg |= (pps->pic_init_qp_minus26 + 26 + slice->slice_qp_delta) & 0x3f; 449 - if (pps->flags & V4L2_H264_PPS_FLAG_SCALING_MATRIX_PRESENT) 449 + if (!(pps->flags & V4L2_H264_PPS_FLAG_SCALING_MATRIX_PRESENT)) 450 450 reg |= VE_H264_SHS_QP_SCALING_MATRIX_DEFAULT; 451 451 cedrus_write(dev, VE_H264_SHS_QP, reg); 452 452
+3 -12
drivers/staging/mt7621-pci/pci-mt7621.c
··· 653 653 return 0; 654 654 } 655 655 656 - static int mt7621_pcie_request_resources(struct mt7621_pcie *pcie, 657 - struct list_head *res) 656 + static void mt7621_pcie_add_resources(struct mt7621_pcie *pcie, 657 + struct list_head *res) 658 658 { 659 - struct device *dev = pcie->dev; 660 - 661 659 pci_add_resource_offset(res, &pcie->io, pcie->offset.io); 662 660 pci_add_resource_offset(res, &pcie->mem, pcie->offset.mem); 663 - pci_add_resource(res, &pcie->busn); 664 - 665 - return devm_request_pci_bus_resources(dev, res); 666 661 } 667 662 668 663 static int mt7621_pcie_register_host(struct pci_host_bridge *host, ··· 733 738 734 739 setup_cm_memory_region(pcie); 735 740 736 - err = mt7621_pcie_request_resources(pcie, &res); 737 - if (err) { 738 - dev_err(dev, "Error requesting resources\n"); 739 - return err; 740 - } 741 + mt7621_pcie_add_resources(pcie, &res); 741 742 742 743 err = mt7621_pcie_register_host(bridge, &res); 743 744 if (err) {
+1
drivers/staging/rtl8723bs/os_dep/sdio_intf.c
··· 20 20 { SDIO_DEVICE(0x024c, 0x0525), }, 21 21 { SDIO_DEVICE(0x024c, 0x0623), }, 22 22 { SDIO_DEVICE(0x024c, 0x0626), }, 23 + { SDIO_DEVICE(0x024c, 0x0627), }, 23 24 { SDIO_DEVICE(0x024c, 0xb723), }, 24 25 { /* end: all zeroes */ }, 25 26 };
+13 -4
drivers/target/iscsi/iscsi_target.c
··· 483 483 void iscsit_aborted_task(struct iscsi_conn *conn, struct iscsi_cmd *cmd) 484 484 { 485 485 spin_lock_bh(&conn->cmd_lock); 486 - if (!list_empty(&cmd->i_conn_node) && 487 - !(cmd->se_cmd.transport_state & CMD_T_FABRIC_STOP)) 486 + if (!list_empty(&cmd->i_conn_node)) 488 487 list_del_init(&cmd->i_conn_node); 489 488 spin_unlock_bh(&conn->cmd_lock); 490 489 ··· 4082 4083 spin_lock_bh(&conn->cmd_lock); 4083 4084 list_splice_init(&conn->conn_cmd_list, &tmp_list); 4084 4085 4085 - list_for_each_entry(cmd, &tmp_list, i_conn_node) { 4086 + list_for_each_entry_safe(cmd, cmd_tmp, &tmp_list, i_conn_node) { 4086 4087 struct se_cmd *se_cmd = &cmd->se_cmd; 4087 4088 4088 4089 if (se_cmd->se_tfo != NULL) { 4089 4090 spin_lock_irq(&se_cmd->t_state_lock); 4090 - se_cmd->transport_state |= CMD_T_FABRIC_STOP; 4091 + if (se_cmd->transport_state & CMD_T_ABORTED) { 4092 + /* 4093 + * LIO's abort path owns the cleanup for this, 4094 + * so put it back on the list and let 4095 + * aborted_task handle it. 4096 + */ 4097 + list_move_tail(&cmd->i_conn_node, 4098 + &conn->conn_cmd_list); 4099 + } else { 4100 + se_cmd->transport_state |= CMD_T_FABRIC_STOP; 4101 + } 4091 4102 spin_unlock_irq(&se_cmd->t_state_lock); 4092 4103 } 4093 4104 }
+2 -1
drivers/tee/optee/call.c
··· 534 534 static bool is_normal_memory(pgprot_t p) 535 535 { 536 536 #if defined(CONFIG_ARM) 537 - return (pgprot_val(p) & L_PTE_MT_MASK) == L_PTE_MT_WRITEALLOC; 537 + return (((pgprot_val(p) & L_PTE_MT_MASK) == L_PTE_MT_WRITEALLOC) || 538 + ((pgprot_val(p) & L_PTE_MT_MASK) == L_PTE_MT_WRITEBACK)); 538 539 #elif defined(CONFIG_ARM64) 539 540 return (pgprot_val(p) & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL); 540 541 #else
+4 -2
drivers/tty/serial/ar933x_uart.c
··· 789 789 goto err_disable_clk; 790 790 791 791 up->gpios = mctrl_gpio_init(port, 0); 792 - if (IS_ERR(up->gpios) && PTR_ERR(up->gpios) != -ENOSYS) 793 - return PTR_ERR(up->gpios); 792 + if (IS_ERR(up->gpios) && PTR_ERR(up->gpios) != -ENOSYS) { 793 + ret = PTR_ERR(up->gpios); 794 + goto err_disable_clk; 795 + } 794 796 795 797 up->rts_gpiod = mctrl_gpio_to_gpiod(up->gpios, UART_GPIO_RTS); 796 798
+11 -19
drivers/tty/serial/imx.c
··· 942 942 struct imx_port *sport = dev_id; 943 943 unsigned int usr1, usr2, ucr1, ucr2, ucr3, ucr4; 944 944 irqreturn_t ret = IRQ_NONE; 945 + unsigned long flags = 0; 945 946 946 - spin_lock(&sport->port.lock); 947 + /* 948 + * IRQs might not be disabled upon entering this interrupt handler, 949 + * e.g. when interrupt handlers are forced to be threaded. To support 950 + * this scenario as well, disable IRQs when acquiring the spinlock. 951 + */ 952 + spin_lock_irqsave(&sport->port.lock, flags); 947 953 948 954 usr1 = imx_uart_readl(sport, USR1); 949 955 usr2 = imx_uart_readl(sport, USR2); ··· 1019 1013 ret = IRQ_HANDLED; 1020 1014 } 1021 1015 1022 - spin_unlock(&sport->port.lock); 1016 + spin_unlock_irqrestore(&sport->port.lock, flags); 1023 1017 1024 1018 return ret; 1025 1019 } ··· 2008 2002 unsigned int ucr1; 2009 2003 unsigned long flags = 0; 2010 2004 int locked = 1; 2011 - int retval; 2012 - 2013 - retval = clk_enable(sport->clk_per); 2014 - if (retval) 2015 - return; 2016 - retval = clk_enable(sport->clk_ipg); 2017 - if (retval) { 2018 - clk_disable(sport->clk_per); 2019 - return; 2020 - } 2021 2005 2022 2006 if (sport->port.sysrq) 2023 2007 locked = 0; ··· 2043 2047 2044 2048 if (locked) 2045 2049 spin_unlock_irqrestore(&sport->port.lock, flags); 2046 - 2047 - clk_disable(sport->clk_ipg); 2048 - clk_disable(sport->clk_per); 2049 2050 } 2050 2051 2051 2052 /* ··· 2143 2150 2144 2151 retval = uart_set_options(&sport->port, co, baud, parity, bits, flow); 2145 2152 2146 - clk_disable(sport->clk_ipg); 2147 2153 if (retval) { 2148 - clk_unprepare(sport->clk_ipg); 2154 + clk_disable_unprepare(sport->clk_ipg); 2149 2155 goto error_console; 2150 2156 } 2151 2157 2152 - retval = clk_prepare(sport->clk_per); 2158 + retval = clk_prepare_enable(sport->clk_per); 2153 2159 if (retval) 2154 - clk_unprepare(sport->clk_ipg); 2160 + clk_disable_unprepare(sport->clk_ipg); 2155 2161 2156 2162 error_console: 2157 2163 return retval;
+6 -1
drivers/video/fbdev/hyperv_fb.c
··· 1093 1093 goto err1; 1094 1094 } 1095 1095 1096 - fb_virt = ioremap(par->mem->start, screen_fb_size); 1096 + /* 1097 + * Map the VRAM cacheable for performance. This is also required for 1098 + * VM Connect to display properly for ARM64 Linux VM, as the host also 1099 + * maps the VRAM cacheable. 1100 + */ 1101 + fb_virt = ioremap_cache(par->mem->start, screen_fb_size); 1097 1102 if (!fb_virt) 1098 1103 goto err2; 1099 1104
+1
fs/afs/dir.c
··· 823 823 vp->cb_break_before = afs_calc_vnode_cb_break(vnode); 824 824 vp->vnode = vnode; 825 825 vp->put_vnode = true; 826 + vp->speculative = true; /* vnode not locked */ 826 827 } 827 828 } 828 829 }
+8
fs/afs/inode.c
··· 294 294 op->flags &= ~AFS_OPERATION_DIR_CONFLICT; 295 295 } 296 296 } else if (vp->scb.have_status) { 297 + if (vp->dv_before + vp->dv_delta != vp->scb.status.data_version && 298 + vp->speculative) 299 + /* Ignore the result of a speculative bulk status fetch 300 + * if it splits around a modification op, thereby 301 + * appearing to regress the data version. 302 + */ 303 + goto out; 297 304 afs_apply_status(op, vp); 298 305 if (vp->scb.have_cb) 299 306 afs_apply_callback(op, vp); ··· 312 305 } 313 306 } 314 307 308 + out: 315 309 write_sequnlock(&vnode->cb_lock); 316 310 317 311 if (vp->scb.have_status)
+1
fs/afs/internal.h
··· 755 755 bool update_ctime:1; /* Need to update the ctime */ 756 756 bool set_size:1; /* Must update i_size */ 757 757 bool op_unlinked:1; /* True if file was unlinked by op */ 758 + bool speculative:1; /* T if speculative status fetch (no vnode lock) */ 758 759 }; 759 760 760 761 /*
+4 -1
fs/btrfs/ctree.h
··· 878 878 */ 879 879 struct ulist *qgroup_ulist; 880 880 881 - /* protect user change for quota operations */ 881 + /* 882 + * Protect user change for quota operations. If a transaction is needed, 883 + * it must be started before locking this lock. 884 + */ 882 885 struct mutex qgroup_ioctl_lock; 883 886 884 887 /* list of dirty qgroups to be written at next commit */
-57
fs/btrfs/file.c
··· 452 452 } 453 453 } 454 454 455 - static int btrfs_find_new_delalloc_bytes(struct btrfs_inode *inode, 456 - const u64 start, 457 - const u64 len, 458 - struct extent_state **cached_state) 459 - { 460 - u64 search_start = start; 461 - const u64 end = start + len - 1; 462 - 463 - while (search_start < end) { 464 - const u64 search_len = end - search_start + 1; 465 - struct extent_map *em; 466 - u64 em_len; 467 - int ret = 0; 468 - 469 - em = btrfs_get_extent(inode, NULL, 0, search_start, search_len); 470 - if (IS_ERR(em)) 471 - return PTR_ERR(em); 472 - 473 - if (em->block_start != EXTENT_MAP_HOLE) 474 - goto next; 475 - 476 - em_len = em->len; 477 - if (em->start < search_start) 478 - em_len -= search_start - em->start; 479 - if (em_len > search_len) 480 - em_len = search_len; 481 - 482 - ret = set_extent_bit(&inode->io_tree, search_start, 483 - search_start + em_len - 1, 484 - EXTENT_DELALLOC_NEW, 485 - NULL, cached_state, GFP_NOFS); 486 - next: 487 - search_start = extent_map_end(em); 488 - free_extent_map(em); 489 - if (ret) 490 - return ret; 491 - } 492 - return 0; 493 - } 494 - 495 455 /* 496 456 * after copy_from_user, pages need to be dirtied and we need to make 497 457 * sure holes are created between the current EOF and the start of ··· 487 527 clear_extent_bit(&inode->io_tree, start_pos, end_of_last_block, 488 528 EXTENT_DELALLOC | EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG, 489 529 0, 0, cached); 490 - 491 - if (!btrfs_is_free_space_inode(inode)) { 492 - if (start_pos >= isize && 493 - !(inode->flags & BTRFS_INODE_PREALLOC)) { 494 - /* 495 - * There can't be any extents following eof in this case 496 - * so just set the delalloc new bit for the range 497 - * directly. 
498 - */ 499 - extra_bits |= EXTENT_DELALLOC_NEW; 500 - } else { 501 - err = btrfs_find_new_delalloc_bytes(inode, start_pos, 502 - num_bytes, cached); 503 - if (err) 504 - return err; 505 - } 506 - } 507 530 508 531 err = btrfs_set_extent_delalloc(inode, start_pos, end_of_last_block, 509 532 extra_bits, cached);
+58
fs/btrfs/inode.c
··· 2253 2253 return 0; 2254 2254 } 2255 2255 2256 + static int btrfs_find_new_delalloc_bytes(struct btrfs_inode *inode, 2257 + const u64 start, 2258 + const u64 len, 2259 + struct extent_state **cached_state) 2260 + { 2261 + u64 search_start = start; 2262 + const u64 end = start + len - 1; 2263 + 2264 + while (search_start < end) { 2265 + const u64 search_len = end - search_start + 1; 2266 + struct extent_map *em; 2267 + u64 em_len; 2268 + int ret = 0; 2269 + 2270 + em = btrfs_get_extent(inode, NULL, 0, search_start, search_len); 2271 + if (IS_ERR(em)) 2272 + return PTR_ERR(em); 2273 + 2274 + if (em->block_start != EXTENT_MAP_HOLE) 2275 + goto next; 2276 + 2277 + em_len = em->len; 2278 + if (em->start < search_start) 2279 + em_len -= search_start - em->start; 2280 + if (em_len > search_len) 2281 + em_len = search_len; 2282 + 2283 + ret = set_extent_bit(&inode->io_tree, search_start, 2284 + search_start + em_len - 1, 2285 + EXTENT_DELALLOC_NEW, 2286 + NULL, cached_state, GFP_NOFS); 2287 + next: 2288 + search_start = extent_map_end(em); 2289 + free_extent_map(em); 2290 + if (ret) 2291 + return ret; 2292 + } 2293 + return 0; 2294 + } 2295 + 2256 2296 int btrfs_set_extent_delalloc(struct btrfs_inode *inode, u64 start, u64 end, 2257 2297 unsigned int extra_bits, 2258 2298 struct extent_state **cached_state) 2259 2299 { 2260 2300 WARN_ON(PAGE_ALIGNED(end)); 2301 + 2302 + if (start >= i_size_read(&inode->vfs_inode) && 2303 + !(inode->flags & BTRFS_INODE_PREALLOC)) { 2304 + /* 2305 + * There can't be any extents following eof in this case so just 2306 + * set the delalloc new bit for the range directly. 2307 + */ 2308 + extra_bits |= EXTENT_DELALLOC_NEW; 2309 + } else { 2310 + int ret; 2311 + 2312 + ret = btrfs_find_new_delalloc_bytes(inode, start, 2313 + end + 1 - start, 2314 + cached_state); 2315 + if (ret) 2316 + return ret; 2317 + } 2318 + 2261 2319 return set_extent_delalloc(&inode->io_tree, start, end, extra_bits, 2262 2320 cached_state); 2263 2321 }
+78 -10
fs/btrfs/qgroup.c
··· 11 11 #include <linux/slab.h> 12 12 #include <linux/workqueue.h> 13 13 #include <linux/btrfs.h> 14 + #include <linux/sched/mm.h> 14 15 15 16 #include "ctree.h" 16 17 #include "transaction.h" ··· 498 497 break; 499 498 } 500 499 out: 500 + btrfs_free_path(path); 501 501 fs_info->qgroup_flags |= flags; 502 502 if (!(fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_ON)) 503 503 clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags); 504 504 else if (fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN && 505 505 ret >= 0) 506 506 ret = qgroup_rescan_init(fs_info, rescan_progress, 0); 507 - btrfs_free_path(path); 508 507 509 508 if (ret < 0) { 510 509 ulist_free(fs_info->qgroup_ulist); ··· 937 936 struct btrfs_key found_key; 938 937 struct btrfs_qgroup *qgroup = NULL; 939 938 struct btrfs_trans_handle *trans = NULL; 939 + struct ulist *ulist = NULL; 940 940 int ret = 0; 941 941 int slot; 942 942 ··· 945 943 if (fs_info->quota_root) 946 944 goto out; 947 945 948 - fs_info->qgroup_ulist = ulist_alloc(GFP_KERNEL); 949 - if (!fs_info->qgroup_ulist) { 946 + ulist = ulist_alloc(GFP_KERNEL); 947 + if (!ulist) { 950 948 ret = -ENOMEM; 951 949 goto out; 952 950 } ··· 954 952 ret = btrfs_sysfs_add_qgroups(fs_info); 955 953 if (ret < 0) 956 954 goto out; 955 + 956 + /* 957 + * Unlock qgroup_ioctl_lock before starting the transaction. This is to 958 + * avoid lock acquisition inversion problems (reported by lockdep) between 959 + * qgroup_ioctl_lock and the vfs freeze semaphores, acquired when we 960 + * start a transaction. 961 + * After we started the transaction lock qgroup_ioctl_lock again and 962 + * check if someone else created the quota root in the meanwhile. If so, 963 + * just return success and release the transaction handle. 964 + * 965 + * Also we don't need to worry about someone else calling 966 + * btrfs_sysfs_add_qgroups() after we unlock and getting an error because 967 + * that function returns 0 (success) when the sysfs entries already exist. 
968 + */ 969 + mutex_unlock(&fs_info->qgroup_ioctl_lock); 970 + 957 971 /* 958 972 * 1 for quota root item 959 973 * 1 for BTRFS_QGROUP_STATUS item ··· 979 961 * would be a lot of overkill. 980 962 */ 981 963 trans = btrfs_start_transaction(tree_root, 2); 964 + 965 + mutex_lock(&fs_info->qgroup_ioctl_lock); 982 966 if (IS_ERR(trans)) { 983 967 ret = PTR_ERR(trans); 984 968 trans = NULL; 985 969 goto out; 986 970 } 971 + 972 + if (fs_info->quota_root) 973 + goto out; 974 + 975 + fs_info->qgroup_ulist = ulist; 976 + ulist = NULL; 987 977 988 978 /* 989 979 * initially create the quota tree ··· 1150 1124 if (ret) { 1151 1125 ulist_free(fs_info->qgroup_ulist); 1152 1126 fs_info->qgroup_ulist = NULL; 1153 - if (trans) 1154 - btrfs_end_transaction(trans); 1155 1127 btrfs_sysfs_del_qgroups(fs_info); 1156 1128 } 1157 1129 mutex_unlock(&fs_info->qgroup_ioctl_lock); 1130 + if (ret && trans) 1131 + btrfs_end_transaction(trans); 1132 + else if (trans) 1133 + ret = btrfs_end_transaction(trans); 1134 + ulist_free(ulist); 1158 1135 return ret; 1159 1136 } 1160 1137 ··· 1170 1141 mutex_lock(&fs_info->qgroup_ioctl_lock); 1171 1142 if (!fs_info->quota_root) 1172 1143 goto out; 1144 + mutex_unlock(&fs_info->qgroup_ioctl_lock); 1173 1145 1174 1146 /* 1175 1147 * 1 For the root item 1176 1148 * 1177 1149 * We should also reserve enough items for the quota tree deletion in 1178 1150 * btrfs_clean_quota_tree but this is not done. 1151 + * 1152 + * Also, we must always start a transaction without holding the mutex 1153 + * qgroup_ioctl_lock, see btrfs_quota_enable(). 
1179 1154 */ 1180 1155 trans = btrfs_start_transaction(fs_info->tree_root, 1); 1156 + 1157 + mutex_lock(&fs_info->qgroup_ioctl_lock); 1181 1158 if (IS_ERR(trans)) { 1182 1159 ret = PTR_ERR(trans); 1160 + trans = NULL; 1183 1161 goto out; 1184 1162 } 1163 + 1164 + if (!fs_info->quota_root) 1165 + goto out; 1185 1166 1186 1167 clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags); 1187 1168 btrfs_qgroup_wait_for_completion(fs_info, false); ··· 1206 1167 ret = btrfs_clean_quota_tree(trans, quota_root); 1207 1168 if (ret) { 1208 1169 btrfs_abort_transaction(trans, ret); 1209 - goto end_trans; 1170 + goto out; 1210 1171 } 1211 1172 1212 1173 ret = btrfs_del_root(trans, &quota_root->root_key); 1213 1174 if (ret) { 1214 1175 btrfs_abort_transaction(trans, ret); 1215 - goto end_trans; 1176 + goto out; 1216 1177 } 1217 1178 1218 1179 list_del(&quota_root->dirty_list); ··· 1224 1185 1225 1186 btrfs_put_root(quota_root); 1226 1187 1227 - end_trans: 1228 - ret = btrfs_end_transaction(trans); 1229 1188 out: 1230 1189 mutex_unlock(&fs_info->qgroup_ioctl_lock); 1190 + if (ret && trans) 1191 + btrfs_end_transaction(trans); 1192 + else if (trans) 1193 + ret = btrfs_end_transaction(trans); 1194 + 1231 1195 return ret; 1232 1196 } 1233 1197 ··· 1366 1324 struct btrfs_qgroup *member; 1367 1325 struct btrfs_qgroup_list *list; 1368 1326 struct ulist *tmp; 1327 + unsigned int nofs_flag; 1369 1328 int ret = 0; 1370 1329 1371 1330 /* Check the level of src and dst first */ 1372 1331 if (btrfs_qgroup_level(src) >= btrfs_qgroup_level(dst)) 1373 1332 return -EINVAL; 1374 1333 1334 + /* We hold a transaction handle open, must do a NOFS allocation. 
*/ 1335 + 	nofs_flag = memalloc_nofs_save(); 1375 1336 	tmp = ulist_alloc(GFP_KERNEL); 1337 + 	memalloc_nofs_restore(nofs_flag); 1376 1338 	if (!tmp) 1377 1339 		return -ENOMEM; 1378 1340  ··· 1433 1387 	struct btrfs_qgroup_list *list; 1434 1388 	struct ulist *tmp; 1435 1389 	bool found = false; 1390 + 	unsigned int nofs_flag; 1436 1391 	int ret = 0; 1437 1392 	int ret2; 1438 1393  1394 + 	/* We hold a transaction handle open, must do a NOFS allocation. */ 1395 + 	nofs_flag = memalloc_nofs_save(); 1439 1396 	tmp = ulist_alloc(GFP_KERNEL); 1397 + 	memalloc_nofs_restore(nofs_flag); 1440 1398 	if (!tmp) 1441 1399 		return -ENOMEM; 1442 1400  ··· 3562 3512 { 3563 3513 	struct btrfs_trans_handle *trans; 3564 3514 	int ret; 3515 + 	bool can_commit = true; 3565 3516  3566 3517 	/* 3567 3518 	 * We don't want to run flush again and again, so if there is a running ··· 3573 3522 		       !test_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state)); 3574 3523 		return 0; 3575 3524 	} 3525 +  3526 + 	/* 3527 + 	 * If current process holds a transaction, we shouldn't flush, as we 3528 + 	 * assume all space reservation happens before a transaction handle is 3529 + 	 * held. 3530 + 	 * 3531 + 	 * But there are cases like btrfs_delayed_item_reserve_metadata() where 3532 + 	 * we try to reserve space with one transaction handle already held. 3533 + 	 * In that case we can't commit transaction, but at least try to end it 3534 + 	 * and hope the started data writes can free some space. 3535 + 	 */ 3536 + 	if (current->journal_info && 3537 + 	    current->journal_info != BTRFS_SEND_TRANS_STUB) 3538 + 		can_commit = false; 3576 3539  3577 3540 	ret = btrfs_start_delalloc_snapshot(root); 3578 3541 	if (ret < 0) ··· 3599 3534 		goto out; 3600 3535 	} 3601 3536  3602 - 	ret = btrfs_commit_transaction(trans); 3537 + 	if (can_commit) 3538 + 		ret = btrfs_commit_transaction(trans); 3539 + 	else 3540 + 		ret = btrfs_end_transaction(trans); 3603 3541 out: 3604 3542 	clear_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state); 3605 3543 	wake_up(&root->qgroup_flush_wait);
+8 -4
fs/btrfs/tests/inode-tests.c
··· 983 983 ret = clear_extent_bit(&BTRFS_I(inode)->io_tree, 984 984 BTRFS_MAX_EXTENT_SIZE >> 1, 985 985 (BTRFS_MAX_EXTENT_SIZE >> 1) + sectorsize - 1, 986 - EXTENT_DELALLOC | EXTENT_UPTODATE, 0, 0, NULL); 986 + EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | 987 + EXTENT_UPTODATE, 0, 0, NULL); 987 988 if (ret) { 988 989 test_err("clear_extent_bit returned %d", ret); 989 990 goto out; ··· 1051 1050 ret = clear_extent_bit(&BTRFS_I(inode)->io_tree, 1052 1051 BTRFS_MAX_EXTENT_SIZE + sectorsize, 1053 1052 BTRFS_MAX_EXTENT_SIZE + 2 * sectorsize - 1, 1054 - EXTENT_DELALLOC | EXTENT_UPTODATE, 0, 0, NULL); 1053 + EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | 1054 + EXTENT_UPTODATE, 0, 0, NULL); 1055 1055 if (ret) { 1056 1056 test_err("clear_extent_bit returned %d", ret); 1057 1057 goto out; ··· 1084 1082 1085 1083 /* Empty */ 1086 1084 ret = clear_extent_bit(&BTRFS_I(inode)->io_tree, 0, (u64)-1, 1087 - EXTENT_DELALLOC | EXTENT_UPTODATE, 0, 0, NULL); 1085 + EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | 1086 + EXTENT_UPTODATE, 0, 0, NULL); 1088 1087 if (ret) { 1089 1088 test_err("clear_extent_bit returned %d", ret); 1090 1089 goto out; ··· 1100 1097 out: 1101 1098 if (ret) 1102 1099 clear_extent_bit(&BTRFS_I(inode)->io_tree, 0, (u64)-1, 1103 - EXTENT_DELALLOC | EXTENT_UPTODATE, 0, 0, NULL); 1100 + EXTENT_DELALLOC | EXTENT_DELALLOC_NEW | 1101 + EXTENT_UPTODATE, 0, 0, NULL); 1104 1102 iput(inode); 1105 1103 btrfs_free_dummy_root(root); 1106 1104 btrfs_free_dummy_fs_info(fs_info);
+3
fs/btrfs/tree-checker.c
··· 1068 1068 "invalid root item size, have %u expect %zu or %u", 1069 1069 btrfs_item_size_nr(leaf, slot), sizeof(ri), 1070 1070 btrfs_legacy_root_item_size()); 1071 + return -EUCLEAN; 1071 1072 } 1072 1073 1073 1074 /* ··· 1424 1423 "invalid item size, have %u expect aligned to %zu for key type %u", 1425 1424 btrfs_item_size_nr(leaf, slot), 1426 1425 sizeof(*dref), key->type); 1426 + return -EUCLEAN; 1427 1427 } 1428 1428 if (!IS_ALIGNED(key->objectid, leaf->fs_info->sectorsize)) { 1429 1429 generic_err(leaf, slot, ··· 1453 1451 extent_err(leaf, slot, 1454 1452 "invalid extent data backref offset, have %llu expect aligned to %u", 1455 1453 offset, leaf->fs_info->sectorsize); 1454 + return -EUCLEAN; 1456 1455 } 1457 1456 } 1458 1457 return 0;
+7 -1
fs/btrfs/volumes.c
··· 940 940 if (device->bdev != path_bdev) { 941 941 bdput(path_bdev); 942 942 mutex_unlock(&fs_devices->device_list_mutex); 943 - btrfs_warn_in_rcu(device->fs_info, 943 + /* 944 + * device->fs_info may not be reliable here, so 945 + * pass in a NULL instead. This avoids a 946 + * possible use-after-free when the fs_info and 947 + * fs_info->sb are already torn down. 948 + */ 949 + btrfs_warn_in_rcu(NULL, 944 950 "duplicate device %s devid %llu generation %llu scanned by %s (%d)", 945 951 path, devid, found_transid, 946 952 current->comm,
+1
fs/cifs/cifsacl.c
··· 1266 1266 cifs_dbg(VFS, "%s: error %d getting sec desc\n", __func__, rc); 1267 1267 } else if (mode_from_special_sid) { 1268 1268 rc = parse_sec_desc(cifs_sb, pntsd, acllen, fattr, true); 1269 + kfree(pntsd); 1269 1270 } else { 1270 1271 /* get approximated mode from ACL */ 1271 1272 rc = parse_sec_desc(cifs_sb, pntsd, acllen, fattr, false);
+73 -15
fs/cifs/smb2ops.c
··· 264 264 } 265 265 266 266 static struct mid_q_entry * 267 - smb2_find_mid(struct TCP_Server_Info *server, char *buf) 267 + __smb2_find_mid(struct TCP_Server_Info *server, char *buf, bool dequeue) 268 268 { 269 269 struct mid_q_entry *mid; 270 270 struct smb2_sync_hdr *shdr = (struct smb2_sync_hdr *)buf; ··· 281 281 (mid->mid_state == MID_REQUEST_SUBMITTED) && 282 282 (mid->command == shdr->Command)) { 283 283 kref_get(&mid->refcount); 284 + if (dequeue) { 285 + list_del_init(&mid->qhead); 286 + mid->mid_flags |= MID_DELETED; 287 + } 284 288 spin_unlock(&GlobalMid_Lock); 285 289 return mid; 286 290 } 287 291 } 288 292 spin_unlock(&GlobalMid_Lock); 289 293 return NULL; 294 + } 295 + 296 + static struct mid_q_entry * 297 + smb2_find_mid(struct TCP_Server_Info *server, char *buf) 298 + { 299 + return __smb2_find_mid(server, buf, false); 300 + } 301 + 302 + static struct mid_q_entry * 303 + smb2_find_dequeue_mid(struct TCP_Server_Info *server, char *buf) 304 + { 305 + return __smb2_find_mid(server, buf, true); 290 306 } 291 307 292 308 static void ··· 4372 4356 static int 4373 4357 handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, 4374 4358 char *buf, unsigned int buf_len, struct page **pages, 4375 - unsigned int npages, unsigned int page_data_size) 4359 + unsigned int npages, unsigned int page_data_size, 4360 + bool is_offloaded) 4376 4361 { 4377 4362 unsigned int data_offset; 4378 4363 unsigned int data_len; ··· 4395 4378 4396 4379 if (server->ops->is_session_expired && 4397 4380 server->ops->is_session_expired(buf)) { 4398 - cifs_reconnect(server); 4381 + if (!is_offloaded) 4382 + cifs_reconnect(server); 4399 4383 return -1; 4400 4384 } 4401 4385 ··· 4420 4402 cifs_dbg(FYI, "%s: server returned error %d\n", 4421 4403 __func__, rdata->result); 4422 4404 /* normal error on read response */ 4423 - dequeue_mid(mid, false); 4405 + if (is_offloaded) 4406 + mid->mid_state = MID_RESPONSE_RECEIVED; 4407 + else 4408 + dequeue_mid(mid, false); 4424 
4409 return 0; 4425 4410 } 4426 4411 ··· 4447 4426 cifs_dbg(FYI, "%s: data offset (%u) beyond end of smallbuf\n", 4448 4427 __func__, data_offset); 4449 4428 rdata->result = -EIO; 4450 - dequeue_mid(mid, rdata->result); 4429 + if (is_offloaded) 4430 + mid->mid_state = MID_RESPONSE_MALFORMED; 4431 + else 4432 + dequeue_mid(mid, rdata->result); 4451 4433 return 0; 4452 4434 } 4453 4435 ··· 4466 4442 cifs_dbg(FYI, "%s: data offset (%u) beyond 1st page of response\n", 4467 4443 __func__, data_offset); 4468 4444 rdata->result = -EIO; 4469 - dequeue_mid(mid, rdata->result); 4445 + if (is_offloaded) 4446 + mid->mid_state = MID_RESPONSE_MALFORMED; 4447 + else 4448 + dequeue_mid(mid, rdata->result); 4470 4449 return 0; 4471 4450 } 4472 4451 4473 4452 if (data_len > page_data_size - pad_len) { 4474 4453 /* data_len is corrupt -- discard frame */ 4475 4454 rdata->result = -EIO; 4476 - dequeue_mid(mid, rdata->result); 4455 + if (is_offloaded) 4456 + mid->mid_state = MID_RESPONSE_MALFORMED; 4457 + else 4458 + dequeue_mid(mid, rdata->result); 4477 4459 return 0; 4478 4460 } 4479 4461 4480 4462 rdata->result = init_read_bvec(pages, npages, page_data_size, 4481 4463 cur_off, &bvec); 4482 4464 if (rdata->result != 0) { 4483 - dequeue_mid(mid, rdata->result); 4465 + if (is_offloaded) 4466 + mid->mid_state = MID_RESPONSE_MALFORMED; 4467 + else 4468 + dequeue_mid(mid, rdata->result); 4484 4469 return 0; 4485 4470 } 4486 4471 ··· 4504 4471 /* read response payload cannot be in both buf and pages */ 4505 4472 WARN_ONCE(1, "buf can not contain only a part of read data"); 4506 4473 rdata->result = -EIO; 4507 - dequeue_mid(mid, rdata->result); 4474 + if (is_offloaded) 4475 + mid->mid_state = MID_RESPONSE_MALFORMED; 4476 + else 4477 + dequeue_mid(mid, rdata->result); 4508 4478 return 0; 4509 4479 } 4510 4480 ··· 4518 4482 if (length < 0) 4519 4483 return length; 4520 4484 4521 - dequeue_mid(mid, false); 4485 + if (is_offloaded) 4486 + mid->mid_state = MID_RESPONSE_RECEIVED; 4487 + else 4488 
+ dequeue_mid(mid, false); 4522 4489 return length; 4523 4490 } 4524 4491 ··· 4550 4511 } 4551 4512 4552 4513 dw->server->lstrp = jiffies; 4553 - mid = smb2_find_mid(dw->server, dw->buf); 4514 + mid = smb2_find_dequeue_mid(dw->server, dw->buf); 4554 4515 if (mid == NULL) 4555 4516 cifs_dbg(FYI, "mid not found\n"); 4556 4517 else { 4557 4518 mid->decrypted = true; 4558 4519 rc = handle_read_data(dw->server, mid, dw->buf, 4559 4520 dw->server->vals->read_rsp_size, 4560 - dw->ppages, dw->npages, dw->len); 4561 - mid->callback(mid); 4521 + dw->ppages, dw->npages, dw->len, 4522 + true); 4523 + if (rc >= 0) { 4524 + #ifdef CONFIG_CIFS_STATS2 4525 + mid->when_received = jiffies; 4526 + #endif 4527 + mid->callback(mid); 4528 + } else { 4529 + spin_lock(&GlobalMid_Lock); 4530 + if (dw->server->tcpStatus == CifsNeedReconnect) { 4531 + mid->mid_state = MID_RETRY_NEEDED; 4532 + spin_unlock(&GlobalMid_Lock); 4533 + mid->callback(mid); 4534 + } else { 4535 + mid->mid_state = MID_REQUEST_SUBMITTED; 4536 + mid->mid_flags &= ~(MID_DELETED); 4537 + list_add_tail(&mid->qhead, 4538 + &dw->server->pending_mid_q); 4539 + spin_unlock(&GlobalMid_Lock); 4540 + } 4541 + } 4562 4542 cifs_mid_q_entry_release(mid); 4563 4543 } 4564 4544 ··· 4680 4622 (*mid)->decrypted = true; 4681 4623 rc = handle_read_data(server, *mid, buf, 4682 4624 server->vals->read_rsp_size, 4683 - pages, npages, len); 4625 + pages, npages, len, false); 4684 4626 } 4685 4627 4686 4628 free_pages: ··· 4823 4765 char *buf = server->large_buf ? server->bigbuf : server->smallbuf; 4824 4766 4825 4767 return handle_read_data(server, mid, buf, server->pdu_size, 4826 - NULL, 0, 0); 4768 + NULL, 0, 0, false); 4827 4769 } 4828 4770 4829 4771 static int
+1
fs/efivarfs/super.c
··· 21 21 static void efivarfs_evict_inode(struct inode *inode) 22 22 { 23 23 clear_inode(inode); 24 + kfree(inode->i_private); 24 25 } 25 26 26 27 static const struct super_operations efivarfs_ops = {
+2 -1
fs/ext4/ext4.h
··· 2695 2695 struct ext4_filename *fname); 2696 2696 static inline void ext4_update_dx_flag(struct inode *inode) 2697 2697 { 2698 - if (!ext4_has_feature_dir_index(inode->i_sb)) { 2698 + if (!ext4_has_feature_dir_index(inode->i_sb) && 2699 + ext4_test_inode_flag(inode, EXT4_INODE_INDEX)) { 2699 2700 /* ext4_iget() should have caught this... */ 2700 2701 WARN_ON_ONCE(ext4_has_feature_metadata_csum(inode->i_sb)); 2701 2702 ext4_clear_inode_flag(inode, EXT4_INODE_INDEX);
-4
fs/ext4/super.c
··· 2638 2638 } else if (test_opt2(sb, DAX_INODE)) { 2639 2639 SEQ_OPTS_PUTS("dax=inode"); 2640 2640 } 2641 - 2642 - if (test_opt2(sb, JOURNAL_FAST_COMMIT)) 2643 - SEQ_OPTS_PUTS("fast_commit"); 2644 - 2645 2641 ext4_show_quota_options(seq, sb); 2646 2642 return 0; 2647 2643 }
+62 -34
fs/io_uring.c
··· 205 205 struct list_head file_list; 206 206 struct fixed_file_data *file_data; 207 207 struct llist_node llist; 208 + bool done; 208 209 }; 209 210 210 211 struct fixed_file_data { ··· 479 478 struct io_open { 480 479 struct file *file; 481 480 int dfd; 481 + bool ignore_nonblock; 482 482 struct filename *filename; 483 483 struct open_how how; 484 484 unsigned long nofile; ··· 1313 1311 return false; 1314 1312 req->work.flags |= IO_WQ_WORK_FSIZE; 1315 1313 } 1316 - 1317 - if (!(req->work.flags & IO_WQ_WORK_FILES) && 1318 - (def->work_flags & IO_WQ_WORK_FILES) && 1319 - !(req->flags & REQ_F_NO_FILE_TABLE)) { 1320 - if (id->files != current->files || 1321 - id->nsproxy != current->nsproxy) 1322 - return false; 1323 - atomic_inc(&id->files->count); 1324 - get_nsproxy(id->nsproxy); 1325 - req->flags |= REQ_F_INFLIGHT; 1326 - 1327 - spin_lock_irq(&ctx->inflight_lock); 1328 - list_add(&req->inflight_entry, &ctx->inflight_list); 1329 - spin_unlock_irq(&ctx->inflight_lock); 1330 - req->work.flags |= IO_WQ_WORK_FILES; 1331 - } 1332 1314 #ifdef CONFIG_BLK_CGROUP 1333 1315 if (!(req->work.flags & IO_WQ_WORK_BLKCG) && 1334 1316 (def->work_flags & IO_WQ_WORK_BLKCG)) { ··· 1353 1367 req->work.flags |= IO_WQ_WORK_CANCEL; 1354 1368 } 1355 1369 spin_unlock(&current->fs->lock); 1370 + } 1371 + if (!(req->work.flags & IO_WQ_WORK_FILES) && 1372 + (def->work_flags & IO_WQ_WORK_FILES) && 1373 + !(req->flags & REQ_F_NO_FILE_TABLE)) { 1374 + if (id->files != current->files || 1375 + id->nsproxy != current->nsproxy) 1376 + return false; 1377 + atomic_inc(&id->files->count); 1378 + get_nsproxy(id->nsproxy); 1379 + req->flags |= REQ_F_INFLIGHT; 1380 + 1381 + spin_lock_irq(&ctx->inflight_lock); 1382 + list_add(&req->inflight_entry, &ctx->inflight_list); 1383 + spin_unlock_irq(&ctx->inflight_lock); 1384 + req->work.flags |= IO_WQ_WORK_FILES; 1356 1385 } 1357 1386 1358 1387 return true; ··· 2578 2577 } 2579 2578 end_req: 2580 2579 req_set_fail_links(req); 2581 - io_req_complete(req, ret); 
2582 2580 return false; 2583 2581 } 2584 2582 #endif ··· 3192 3192 rw->free_iovec = iovec; 3193 3193 rw->bytes_done = 0; 3194 3194 /* can only be fixed buffers, no need to do anything */ 3195 - if (iter->type == ITER_BVEC) 3195 + if (iov_iter_is_bvec(iter)) 3196 3196 return; 3197 3197 if (!iovec) { 3198 3198 unsigned iov_off = 0; ··· 3795 3795 return ret; 3796 3796 } 3797 3797 req->open.nofile = rlimit(RLIMIT_NOFILE); 3798 + req->open.ignore_nonblock = false; 3798 3799 req->flags |= REQ_F_NEED_CLEANUP; 3799 3800 return 0; 3800 3801 } ··· 3839 3838 struct file *file; 3840 3839 int ret; 3841 3840 3842 - if (force_nonblock) 3841 + if (force_nonblock && !req->open.ignore_nonblock) 3843 3842 return -EAGAIN; 3844 3843 3845 3844 ret = build_open_flags(&req->open.how, &op); ··· 3854 3853 if (IS_ERR(file)) { 3855 3854 put_unused_fd(ret); 3856 3855 ret = PTR_ERR(file); 3856 + /* 3857 + * A work-around to ensure that /proc/self works that way 3858 + * that it should - if we get -EOPNOTSUPP back, then assume 3859 + * that proc_self_get_link() failed us because we're in async 3860 + * context. We should be safe to retry this from the task 3861 + * itself with force_nonblock == false set, as it should not 3862 + * block on lookup. Would be nice to know this upfront and 3863 + * avoid the async dance, but doesn't seem feasible. 
3864 + */ 3865 + if (ret == -EOPNOTSUPP && io_wq_current_is_worker()) { 3866 + req->open.ignore_nonblock = true; 3867 + refcount_inc(&req->refs); 3868 + io_req_task_queue(req); 3869 + return 0; 3870 + } 3857 3871 } else { 3858 3872 fsnotify_open(file); 3859 3873 fd_install(ret, file); ··· 6973 6957 return -ENXIO; 6974 6958 6975 6959 spin_lock(&data->lock); 6976 - if (!list_empty(&data->ref_list)) 6977 - ref_node = list_first_entry(&data->ref_list, 6978 - struct fixed_file_ref_node, node); 6960 + ref_node = data->node; 6979 6961 spin_unlock(&data->lock); 6980 6962 if (ref_node) 6981 6963 percpu_ref_kill(&ref_node->refs); ··· 7322 7308 kfree(pfile); 7323 7309 } 7324 7310 7325 - spin_lock(&file_data->lock); 7326 - list_del(&ref_node->node); 7327 - spin_unlock(&file_data->lock); 7328 - 7329 7311 percpu_ref_exit(&ref_node->refs); 7330 7312 kfree(ref_node); 7331 7313 percpu_ref_put(&file_data->refs); ··· 7348 7338 static void io_file_data_ref_zero(struct percpu_ref *ref) 7349 7339 { 7350 7340 struct fixed_file_ref_node *ref_node; 7341 + struct fixed_file_data *data; 7351 7342 struct io_ring_ctx *ctx; 7352 - bool first_add; 7343 + bool first_add = false; 7353 7344 int delay = HZ; 7354 7345 7355 7346 ref_node = container_of(ref, struct fixed_file_ref_node, refs); 7356 - ctx = ref_node->file_data->ctx; 7347 + data = ref_node->file_data; 7348 + ctx = data->ctx; 7357 7349 7358 - if (percpu_ref_is_dying(&ctx->file_data->refs)) 7350 + spin_lock(&data->lock); 7351 + ref_node->done = true; 7352 + 7353 + while (!list_empty(&data->ref_list)) { 7354 + ref_node = list_first_entry(&data->ref_list, 7355 + struct fixed_file_ref_node, node); 7356 + /* recycle ref nodes in order */ 7357 + if (!ref_node->done) 7358 + break; 7359 + list_del(&ref_node->node); 7360 + first_add |= llist_add(&ref_node->llist, &ctx->file_put_llist); 7361 + } 7362 + spin_unlock(&data->lock); 7363 + 7364 + if (percpu_ref_is_dying(&data->refs)) 7359 7365 delay = 0; 7360 7366 7361 - first_add = 
llist_add(&ref_node->llist, &ctx->file_put_llist); 7362 7367 if (!delay) 7363 7368 mod_delayed_work(system_wq, &ctx->file_put_work, 0); 7364 7369 else if (first_add) ··· 7397 7372 INIT_LIST_HEAD(&ref_node->node); 7398 7373 INIT_LIST_HEAD(&ref_node->file_list); 7399 7374 ref_node->file_data = ctx->file_data; 7375 + ref_node->done = false; 7400 7376 return ref_node; 7401 7377 } 7402 7378 ··· 7493 7467 7494 7468 file_data->node = ref_node; 7495 7469 spin_lock(&file_data->lock); 7496 - list_add(&ref_node->node, &file_data->ref_list); 7470 + list_add_tail(&ref_node->node, &file_data->ref_list); 7497 7471 spin_unlock(&file_data->lock); 7498 7472 percpu_ref_get(&file_data->refs); 7499 7473 return ret; ··· 7652 7626 if (needs_switch) { 7653 7627 percpu_ref_kill(&data->node->refs); 7654 7628 spin_lock(&data->lock); 7655 - list_add(&ref_node->node, &data->ref_list); 7629 + list_add_tail(&ref_node->node, &data->ref_list); 7656 7630 data->node = ref_node; 7657 7631 spin_unlock(&data->lock); 7658 7632 percpu_ref_get(&ctx->file_data->refs); ··· 9251 9225 * to a power-of-two, if it isn't already. We do NOT impose 9252 9226 * any cq vs sq ring sizing. 9253 9227 */ 9254 - p->cq_entries = roundup_pow_of_two(p->cq_entries); 9255 - if (p->cq_entries < p->sq_entries) 9228 + if (!p->cq_entries) 9256 9229 return -EINVAL; 9257 9230 if (p->cq_entries > IORING_MAX_CQ_ENTRIES) { 9258 9231 if (!(p->flags & IORING_SETUP_CLAMP)) 9259 9232 return -EINVAL; 9260 9233 p->cq_entries = IORING_MAX_CQ_ENTRIES; 9261 9234 } 9235 + p->cq_entries = roundup_pow_of_two(p->cq_entries); 9236 + if (p->cq_entries < p->sq_entries) 9237 + return -EINVAL; 9262 9238 } else { 9263 9239 p->cq_entries = 2 * p->sq_entries; 9264 9240 }
+18 -16
fs/jbd2/journal.c
··· 566 566 } 567 567 568 568 /** 569 - * Force and wait upon a commit if the calling process is not within 570 - * transaction. This is used for forcing out undo-protected data which contains 571 - * bitmaps, when the fs is running out of space. 569 + * jbd2_journal_force_commit_nested - Force and wait upon a commit if the 570 + * calling process is not within transaction. 572 571 * 573 572 * @journal: journal to force 574 573 * Returns true if progress was made. 574 + * 575 + * This is used for forcing out undo-protected data which contains 576 + * bitmaps, when the fs is running out of space. 575 577 */ 576 578 int jbd2_journal_force_commit_nested(journal_t *journal) 577 579 { ··· 584 582 } 585 583 586 584 /** 587 - * int journal_force_commit() - force any uncommitted transactions 585 + * jbd2_journal_force_commit() - force any uncommitted transactions 588 586 * @journal: journal to force 589 587 * 590 588 * Caller want unconditional commit. We can only force the running transaction ··· 1883 1881 1884 1882 1885 1883 /** 1886 - * int jbd2_journal_load() - Read journal from disk. 1884 + * jbd2_journal_load() - Read journal from disk. 1887 1885 * @journal: Journal to act on. 1888 1886 * 1889 1887 * Given a journal_t structure which tells us which disk blocks contain ··· 1953 1951 } 1954 1952 1955 1953 /** 1956 - * void jbd2_journal_destroy() - Release a journal_t structure. 1954 + * jbd2_journal_destroy() - Release a journal_t structure. 1957 1955 * @journal: Journal to act on. 1958 1956 * 1959 1957 * Release a journal_t structure once it is no longer in use by the ··· 2030 2028 2031 2029 2032 2030 /** 2033 - *int jbd2_journal_check_used_features() - Check if features specified are used. 2031 + * jbd2_journal_check_used_features() - Check if features specified are used. 2034 2032 * @journal: Journal to check. 
2035 2033 * @compat: bitmask of compatible features 2036 2034 * @ro: bitmask of features that force read-only mount ··· 2065 2063 } 2066 2064 2067 2065 /** 2068 - * int jbd2_journal_check_available_features() - Check feature set in journalling layer 2066 + * jbd2_journal_check_available_features() - Check feature set in journalling layer 2069 2067 * @journal: Journal to check. 2070 2068 * @compat: bitmask of compatible features 2071 2069 * @ro: bitmask of features that force read-only mount ··· 2128 2126 } 2129 2127 2130 2128 /** 2131 - * int jbd2_journal_set_features() - Mark a given journal feature in the superblock 2129 + * jbd2_journal_set_features() - Mark a given journal feature in the superblock 2132 2130 * @journal: Journal to act on. 2133 2131 * @compat: bitmask of compatible features 2134 2132 * @ro: bitmask of features that force read-only mount ··· 2219 2217 } 2220 2218 2221 2219 /* 2222 - * jbd2_journal_clear_features () - Clear a given journal feature in the 2220 + * jbd2_journal_clear_features() - Clear a given journal feature in the 2223 2221 * superblock 2224 2222 * @journal: Journal to act on. 2225 2223 * @compat: bitmask of compatible features ··· 2248 2246 EXPORT_SYMBOL(jbd2_journal_clear_features); 2249 2247 2250 2248 /** 2251 - * int jbd2_journal_flush () - Flush journal 2249 + * jbd2_journal_flush() - Flush journal 2252 2250 * @journal: Journal to act on. 2253 2251 * 2254 2252 * Flush all data for a given journal to disk and empty the journal. ··· 2323 2321 } 2324 2322 2325 2323 /** 2326 - * int jbd2_journal_wipe() - Wipe journal contents 2324 + * jbd2_journal_wipe() - Wipe journal contents 2327 2325 * @journal: Journal to act on. 2328 2326 * @write: flag (see below) 2329 2327 * ··· 2364 2362 } 2365 2363 2366 2364 /** 2367 - * void jbd2_journal_abort () - Shutdown the journal immediately. 2365 + * jbd2_journal_abort () - Shutdown the journal immediately. 2368 2366 * @journal: the journal to shutdown. 
2369 2367 * @errno: an error number to record in the journal indicating 2370 2368 * the reason for the shutdown. ··· 2455 2453 } 2456 2454 2457 2455 /** 2458 - * int jbd2_journal_errno () - returns the journal's error state. 2456 + * jbd2_journal_errno() - returns the journal's error state. 2459 2457 * @journal: journal to examine. 2460 2458 * 2461 2459 * This is the errno number set with jbd2_journal_abort(), the last ··· 2479 2477 } 2480 2478 2481 2479 /** 2482 - * int jbd2_journal_clear_err () - clears the journal's error state 2480 + * jbd2_journal_clear_err() - clears the journal's error state 2483 2481 * @journal: journal to act on. 2484 2482 * 2485 2483 * An error must be cleared or acked to take a FS out of readonly ··· 2499 2497 } 2500 2498 2501 2499 /** 2502 - * void jbd2_journal_ack_err() - Ack journal err. 2500 + * jbd2_journal_ack_err() - Ack journal err. 2503 2501 * @journal: journal to act on. 2504 2502 * 2505 2503 * An error must be cleared or acked to take a FS out of readonly
+16 -15
fs/jbd2/transaction.c
··· 519 519 520 520 521 521 /** 522 - * handle_t *jbd2_journal_start() - Obtain a new handle. 522 + * jbd2_journal_start() - Obtain a new handle. 523 523 * @journal: Journal to start transaction on. 524 524 * @nblocks: number of block buffer we might modify 525 525 * ··· 566 566 EXPORT_SYMBOL(jbd2_journal_free_reserved); 567 567 568 568 /** 569 - * int jbd2_journal_start_reserved() - start reserved handle 569 + * jbd2_journal_start_reserved() - start reserved handle 570 570 * @handle: handle to start 571 571 * @type: for handle statistics 572 572 * @line_no: for handle statistics ··· 620 620 EXPORT_SYMBOL(jbd2_journal_start_reserved); 621 621 622 622 /** 623 - * int jbd2_journal_extend() - extend buffer credits. 623 + * jbd2_journal_extend() - extend buffer credits. 624 624 * @handle: handle to 'extend' 625 625 * @nblocks: nr blocks to try to extend by. 626 626 * @revoke_records: number of revoke records to try to extend by. ··· 745 745 } 746 746 747 747 /** 748 - * int jbd2_journal_restart() - restart a handle . 748 + * jbd2__journal_restart() - restart a handle . 749 749 * @handle: handle to restart 750 750 * @nblocks: nr credits requested 751 751 * @revoke_records: number of revoke record credits requested ··· 815 815 EXPORT_SYMBOL(jbd2_journal_restart); 816 816 817 817 /** 818 - * void jbd2_journal_lock_updates () - establish a transaction barrier. 818 + * jbd2_journal_lock_updates () - establish a transaction barrier. 819 819 * @journal: Journal to establish a barrier on. 820 820 * 821 821 * This locks out any further updates from being started, and blocks ··· 874 874 } 875 875 876 876 /** 877 - * void jbd2_journal_unlock_updates (journal_t* journal) - release barrier 877 + * jbd2_journal_unlock_updates () - release barrier 878 878 * @journal: Journal to release the barrier on. 879 879 * 880 880 * Release a transaction barrier obtained with jbd2_journal_lock_updates(). 
··· 1182 1182 } 1183 1183 1184 1184 /** 1185 - * int jbd2_journal_get_write_access() - notify intent to modify a buffer for metadata (not data) update. 1185 + * jbd2_journal_get_write_access() - notify intent to modify a buffer 1186 + * for metadata (not data) update. 1186 1187 * @handle: transaction to add buffer modifications to 1187 1188 * @bh: bh to be used for metadata writes 1188 1189 * ··· 1227 1226 * unlocked buffer beforehand. */ 1228 1227 1229 1228 /** 1230 - * int jbd2_journal_get_create_access () - notify intent to use newly created bh 1229 + * jbd2_journal_get_create_access () - notify intent to use newly created bh 1231 1230 * @handle: transaction to new buffer to 1232 1231 * @bh: new buffer. 1233 1232 * ··· 1307 1306 } 1308 1307 1309 1308 /** 1310 - * int jbd2_journal_get_undo_access() - Notify intent to modify metadata with 1309 + * jbd2_journal_get_undo_access() - Notify intent to modify metadata with 1311 1310 * non-rewindable consequences 1312 1311 * @handle: transaction 1313 1312 * @bh: buffer to undo ··· 1384 1383 } 1385 1384 1386 1385 /** 1387 - * void jbd2_journal_set_triggers() - Add triggers for commit writeout 1386 + * jbd2_journal_set_triggers() - Add triggers for commit writeout 1388 1387 * @bh: buffer to trigger on 1389 1388 * @type: struct jbd2_buffer_trigger_type containing the trigger(s). 1390 1389 * ··· 1426 1425 } 1427 1426 1428 1427 /** 1429 - * int jbd2_journal_dirty_metadata() - mark a buffer as containing dirty metadata 1428 + * jbd2_journal_dirty_metadata() - mark a buffer as containing dirty metadata 1430 1429 * @handle: transaction to add buffer to. 1431 1430 * @bh: buffer to mark 1432 1431 * ··· 1594 1593 } 1595 1594 1596 1595 /** 1597 - * void jbd2_journal_forget() - bforget() for potentially-journaled buffers. 1596 + * jbd2_journal_forget() - bforget() for potentially-journaled buffers. 
1598 1597 * @handle: transaction handle 1599 1598 * @bh: bh to 'forget' 1600 1599 * ··· 1763 1762 } 1764 1763 1765 1764 /** 1766 - * int jbd2_journal_stop() - complete a transaction 1765 + * jbd2_journal_stop() - complete a transaction 1767 1766 * @handle: transaction to complete. 1768 1767 * 1769 1768 * All done for a particular handle. ··· 2081 2080 } 2082 2081 2083 2082 /** 2084 - * int jbd2_journal_try_to_free_buffers() - try to free page buffers. 2083 + * jbd2_journal_try_to_free_buffers() - try to free page buffers. 2085 2084 * @journal: journal for operation 2086 2085 * @page: to try and free 2087 2086 * ··· 2412 2411 } 2413 2412 2414 2413 /** 2415 - * void jbd2_journal_invalidatepage() 2414 + * jbd2_journal_invalidatepage() 2416 2415 * @journal: journal to use for flush... 2417 2416 * @page: page to flush 2418 2417 * @offset: start of the range to invalidate
+4 -2
fs/libfs.c
··· 959 959 size_t len, loff_t *ppos) 960 960 { 961 961 struct simple_attr *attr; 962 - u64 val; 962 + unsigned long long val; 963 963 size_t size; 964 964 ssize_t ret; 965 965 ··· 977 977 goto out; 978 978 979 979 attr->set_buf[size] = '\0'; 980 - val = simple_strtoll(attr->set_buf, NULL, 0); 980 + ret = kstrtoull(attr->set_buf, 0, &val); 981 + if (ret) 982 + goto out; 981 983 ret = attr->set(attr->data, val); 982 984 if (ret == 0) 983 985 ret = len; /* on success, claim we got the whole input */
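The libfs.c hunk swaps `simple_strtoll()`, which silently returns 0 on malformed input, for `kstrtoull()`, which reports an error the caller can propagate. A userspace model of that contract, built on `strtoull()` (the `parse_u64()` helper is invented for illustration):

```c
#include <errno.h>
#include <stdlib.h>

/* Mimics kstrtoull()'s contract: 0 on success, -EINVAL on empty
 * input or trailing junk, instead of silently yielding 0. */
static int parse_u64(const char *s, unsigned long long *out)
{
	char *end;

	errno = 0;
	*out = strtoull(s, &end, 0);	/* base 0: accepts 0x/0 prefixes */
	if (errno || end == s || *end != '\0')
		return -EINVAL;
	return 0;
}
```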
+7 -5
fs/notify/fsnotify.c
··· 178 178 struct inode *inode = d_inode(dentry); 179 179 struct dentry *parent; 180 180 bool parent_watched = dentry->d_flags & DCACHE_FSNOTIFY_PARENT_WATCHED; 181 + bool parent_needed, parent_interested; 181 182 __u32 p_mask; 182 183 struct inode *p_inode = NULL; 183 184 struct name_snapshot name; ··· 194 193 return 0; 195 194 196 195 parent = NULL; 197 - if (!parent_watched && !fsnotify_event_needs_parent(inode, mnt, mask)) 196 + parent_needed = fsnotify_event_needs_parent(inode, mnt, mask); 197 + if (!parent_watched && !parent_needed) 198 198 goto notify; 199 199 200 200 /* Does parent inode care about events on children? */ ··· 207 205 208 206 /* 209 207 * Include parent/name in notification either if some notification 210 - * groups require parent info (!parent_watched case) or the parent is 211 - * interested in this event. 208 + * groups require parent info or the parent is interested in this event. 212 209 */ 213 - if (!parent_watched || (mask & p_mask & ALL_FSNOTIFY_EVENTS)) { 210 + parent_interested = mask & p_mask & ALL_FSNOTIFY_EVENTS; 211 + if (parent_needed || parent_interested) { 214 212 /* When notifying parent, child should be passed as data */ 215 213 WARN_ON_ONCE(inode != fsnotify_data_inode(data, data_type)); 216 214 217 215 /* Notify both parent and child with child name info */ 218 216 take_dentry_name_snapshot(&name, dentry); 219 217 file_name = &name.name; 220 - if (parent_watched) 218 + if (parent_interested) 221 219 mask |= FS_EVENT_ON_CHILD; 222 220 } 223 221
+7
fs/proc/self.c
··· 16 16 pid_t tgid = task_tgid_nr_ns(current, ns); 17 17 char *name; 18 18 19 + /* 20 + * Not currently supported. Once we can inherit all of struct pid, 21 + * we can allow this. 22 + */ 23 + if (current->flags & PF_KTHREAD) 24 + return ERR_PTR(-EOPNOTSUPP); 25 + 19 26 if (!tgid) 20 27 return ERR_PTR(-ENOENT); 21 28 /* max length of unsigned int in decimal + NULL term */
+7 -1
fs/xfs/libxfs/xfs_attr_leaf.c
··· 515 515 *========================================================================*/ 516 516 517 517 /* 518 - * Query whether the requested number of additional bytes of extended 518 + * Query whether the total requested number of attr fork bytes of extended 519 519 * attribute space will be able to fit inline. 520 520 * 521 521 * Returns zero if not, else the di_forkoff fork offset to be used in the ··· 534 534 int minforkoff; 535 535 int maxforkoff; 536 536 int offset; 537 + 538 + /* 539 + * Check if the new size could fit at all first: 540 + */ 541 + if (bytes > XFS_LITINO(mp)) 542 + return 0; 537 543 538 544 /* rounded down */ 539 545 offset = (XFS_LITINO(mp) - bytes) >> 3;
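The xfs_attr_leaf.c hunk adds an early bail-out before `(XFS_LITINO(mp) - bytes) >> 3`: without it, `bytes > XFS_LITINO(mp)` makes the unsigned subtraction wrap to a huge bogus offset. A minimal model of the guard (the literal-area size used below is made up):

```c
#include <assert.h>

/* Sketch of the xfs_attr_shortform_bytesfit() fix: bail out before
 * the unsigned subtraction can wrap. Returns 0 when the attributes
 * cannot fit inline, matching the kernel function's convention. */
static unsigned int forkoff_for(unsigned int litino, unsigned int bytes)
{
	if (bytes > litino)		/* the new early check */
		return 0;
	return (litino - bytes) >> 3;	/* rounded down, as in the kernel */
}
```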
+8 -8
fs/xfs/libxfs/xfs_rmap_btree.c
··· 243 243 else if (y > x) 244 244 return -1; 245 245 246 - x = be64_to_cpu(kp->rm_offset); 247 - y = xfs_rmap_irec_offset_pack(rec); 246 + x = XFS_RMAP_OFF(be64_to_cpu(kp->rm_offset)); 247 + y = rec->rm_offset; 248 248 if (x > y) 249 249 return 1; 250 250 else if (y > x) ··· 275 275 else if (y > x) 276 276 return -1; 277 277 278 - x = be64_to_cpu(kp1->rm_offset); 279 - y = be64_to_cpu(kp2->rm_offset); 278 + x = XFS_RMAP_OFF(be64_to_cpu(kp1->rm_offset)); 279 + y = XFS_RMAP_OFF(be64_to_cpu(kp2->rm_offset)); 280 280 if (x > y) 281 281 return 1; 282 282 else if (y > x) ··· 390 390 return 1; 391 391 else if (a > b) 392 392 return 0; 393 - a = be64_to_cpu(k1->rmap.rm_offset); 394 - b = be64_to_cpu(k2->rmap.rm_offset); 393 + a = XFS_RMAP_OFF(be64_to_cpu(k1->rmap.rm_offset)); 394 + b = XFS_RMAP_OFF(be64_to_cpu(k2->rmap.rm_offset)); 395 395 if (a <= b) 396 396 return 1; 397 397 return 0; ··· 420 420 return 1; 421 421 else if (a > b) 422 422 return 0; 423 - a = be64_to_cpu(r1->rmap.rm_offset); 424 - b = be64_to_cpu(r2->rmap.rm_offset); 423 + a = XFS_RMAP_OFF(be64_to_cpu(r1->rmap.rm_offset)); 424 + b = XFS_RMAP_OFF(be64_to_cpu(r2->rmap.rm_offset)); 425 425 if (a <= b) 426 426 return 1; 427 427 return 0;
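The rmap btree hunk wraps every `rm_offset` comparison in `XFS_RMAP_OFF()`, so the flag bits packed into the high bits of the field no longer pollute key ordering. A sketch of the idea (the mask width and flag bit below are illustrative assumptions, not the real on-disk layout):

```c
#include <assert.h>
#include <stdint.h>

#define RMAP_OFF_MASK	((1ULL << 54) - 1)	/* assumed low-bit offset mask */
#define RMAP_UNWRITTEN	(1ULL << 63)		/* example flag packed on top */

/* Mirrors the role of XFS_RMAP_OFF(): strip flags before comparing. */
static uint64_t rmap_off(uint64_t raw)
{
	return raw & RMAP_OFF_MASK;
}

/* Three-way compare like the kernel's key diff: flags must not
 * influence the ordering of two records at the same offset. */
static int cmp_off(uint64_t a, uint64_t b)
{
	uint64_t x = rmap_off(a), y = rmap_off(b);

	return x > y ? 1 : (x < y ? -1 : 0);
}
```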
+4 -4
fs/xfs/scrub/bmap.c
··· 218 218 * which doesn't track unwritten state. 219 219 */ 220 220 if (owner != XFS_RMAP_OWN_COW && 221 - irec->br_state == XFS_EXT_UNWRITTEN && 222 - !(rmap.rm_flags & XFS_RMAP_UNWRITTEN)) 221 + !!(irec->br_state == XFS_EXT_UNWRITTEN) != 222 + !!(rmap.rm_flags & XFS_RMAP_UNWRITTEN)) 223 223 xchk_fblock_xref_set_corrupt(info->sc, info->whichfork, 224 224 irec->br_startoff); 225 225 226 - if (info->whichfork == XFS_ATTR_FORK && 227 - !(rmap.rm_flags & XFS_RMAP_ATTR_FORK)) 226 + if (!!(info->whichfork == XFS_ATTR_FORK) != 227 + !!(rmap.rm_flags & XFS_RMAP_ATTR_FORK)) 228 228 xchk_fblock_xref_set_corrupt(info->sc, info->whichfork, 229 229 irec->br_startoff); 230 230 if (rmap.rm_flags & XFS_RMAP_BMBT_BLOCK)
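The scrub/bmap.c hunk replaces one-sided tests of the form `a && !b` with `!!a != !!b`: the old check only flagged an unwritten extent missing its rmap flag, not the reverse mismatch. A compact model of the corrected idiom:

```c
#include <assert.h>
#include <stdbool.h>

/* Normalize both sides to 0/1 with !! so any nonzero flag value
 * compares equal to "set", then flag a disagreement in either
 * direction -- the logical XOR the scrubber now wants. */
static bool flags_disagree(int ext_unwritten, int rmap_unwritten)
{
	return !!ext_unwritten != !!rmap_unwritten;
}
```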
+28 -19
fs/xfs/scrub/btree.c
··· 452 452 int level, 453 453 struct xfs_btree_block *block) 454 454 { 455 - unsigned int numrecs; 456 - int ok_level; 457 - 458 - numrecs = be16_to_cpu(block->bb_numrecs); 455 + struct xfs_btree_cur *cur = bs->cur; 456 + unsigned int root_level = cur->bc_nlevels - 1; 457 + unsigned int numrecs = be16_to_cpu(block->bb_numrecs); 459 458 460 459 /* More records than minrecs means the block is ok. */ 461 - if (numrecs >= bs->cur->bc_ops->get_minrecs(bs->cur, level)) 460 + if (numrecs >= cur->bc_ops->get_minrecs(cur, level)) 462 461 return; 463 462 464 463 /* 465 - * Certain btree blocks /can/ have fewer than minrecs records. Any 466 - * level greater than or equal to the level of the highest dedicated 467 - * btree block are allowed to violate this constraint. 468 - * 469 - * For a btree rooted in a block, the btree root can have fewer than 470 - * minrecs records. If the btree is rooted in an inode and does not 471 - * store records in the root, the direct children of the root and the 472 - * root itself can have fewer than minrecs records. 464 + * For btrees rooted in the inode, it's possible that the root block 465 + * contents spilled into a regular ondisk block because there wasn't 466 + * enough space in the inode root. The number of records in that 467 + * child block might be less than the standard minrecs, but that's ok 468 + * provided that there's only one direct child of the root. 
473 469 */ 474 - ok_level = bs->cur->bc_nlevels - 1; 475 - if (bs->cur->bc_flags & XFS_BTREE_ROOT_IN_INODE) 476 - ok_level--; 477 - if (level >= ok_level) 478 - return; 470 + if ((cur->bc_flags & XFS_BTREE_ROOT_IN_INODE) && 471 + level == cur->bc_nlevels - 2) { 472 + struct xfs_btree_block *root_block; 473 + struct xfs_buf *root_bp; 474 + int root_maxrecs; 479 475 480 - xchk_btree_set_corrupt(bs->sc, bs->cur, level); 476 + root_block = xfs_btree_get_block(cur, root_level, &root_bp); 477 + root_maxrecs = cur->bc_ops->get_dmaxrecs(cur, root_level); 478 + if (be16_to_cpu(root_block->bb_numrecs) != 1 || 479 + numrecs <= root_maxrecs) 480 + xchk_btree_set_corrupt(bs->sc, cur, level); 481 + return; 482 + } 483 + 484 + /* 485 + * Otherwise, only the root level is allowed to have fewer than minrecs 486 + * records or keyptrs. 487 + */ 488 + if (level < root_level) 489 + xchk_btree_set_corrupt(bs->sc, cur, level); 481 490 } 482 491 483 492 /*
+17 -4
fs/xfs/scrub/dir.c
··· 558 558 /* Check all the bestfree entries. */ 559 559 for (i = 0; i < bestcount; i++, bestp++) { 560 560 best = be16_to_cpu(*bestp); 561 - if (best == NULLDATAOFF) 562 - continue; 563 561 error = xfs_dir3_data_read(sc->tp, sc->ip, 564 - i * args->geo->fsbcount, 0, &dbp); 562 + xfs_dir2_db_to_da(args->geo, i), 563 + XFS_DABUF_MAP_HOLE_OK, 564 + &dbp); 565 565 if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk, 566 566 &error)) 567 567 break; 568 - xchk_directory_check_freesp(sc, lblk, dbp, best); 568 + 569 + if (!dbp) { 570 + if (best != NULLDATAOFF) { 571 + xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, 572 + lblk); 573 + break; 574 + } 575 + continue; 576 + } 577 + 578 + if (best == NULLDATAOFF) 579 + xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk); 580 + else 581 + xchk_directory_check_freesp(sc, lblk, dbp, best); 569 582 xfs_trans_brelse(sc->tp, dbp); 570 583 if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT) 571 584 break;
+29
fs/xfs/xfs_iomap.c
··· 706 706 return 0; 707 707 } 708 708 709 + /* 710 + * Check that the imap we are going to return to the caller spans the entire 711 + * range that the caller requested for the IO. 712 + */ 713 + static bool 714 + imap_spans_range( 715 + struct xfs_bmbt_irec *imap, 716 + xfs_fileoff_t offset_fsb, 717 + xfs_fileoff_t end_fsb) 718 + { 719 + if (imap->br_startoff > offset_fsb) 720 + return false; 721 + if (imap->br_startoff + imap->br_blockcount < end_fsb) 722 + return false; 723 + return true; 724 + } 725 + 709 726 static int 710 727 xfs_direct_write_iomap_begin( 711 728 struct inode *inode, ··· 782 765 783 766 if (imap_needs_alloc(inode, flags, &imap, nimaps)) 784 767 goto allocate_blocks; 768 + 769 + /* 770 + * NOWAIT IO needs to span the entire requested IO with a single map so 771 + * that we avoid partial IO failures due to the rest of the IO range not 772 + * covered by this map triggering an EAGAIN condition when it is 773 + * subsequently mapped and aborting the IO. 774 + */ 775 + if ((flags & IOMAP_NOWAIT) && 776 + !imap_spans_range(&imap, offset_fsb, end_fsb)) { 777 + error = -EAGAIN; 778 + goto out_unlock; 779 + } 785 780 786 781 xfs_iunlock(ip, lockmode); 787 782 trace_xfs_iomap_found(ip, offset, length, XFS_DATA_FORK, &imap);
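The new `imap_spans_range()` helper above rejects a NOWAIT direct write whose mapping does not cover the whole request. Its boundary semantics are worth pinning down: the mapping may end exactly at `end_fsb`, since the kernel test uses `<`, not `<=`. A userspace mirror of the check:

```c
#include <stdbool.h>
#include <stdint.h>

/* A mapping [start, start + count) covers the request [off, end)
 * iff it starts at or before off and extends at least to end. */
static bool spans_range(uint64_t start, uint64_t count,
			uint64_t off, uint64_t end)
{
	if (start > off)
		return false;
	if (start + count < end)	/* ending exactly at end is fine */
		return false;
	return true;
}
```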
+24 -3
fs/xfs/xfs_iwalk.c
··· 55 55 /* Where do we start the traversal? */ 56 56 xfs_ino_t startino; 57 57 58 + /* What was the last inode number we saw when iterating the inobt? */ 59 + xfs_ino_t lastino; 60 + 58 61 /* Array of inobt records we cache. */ 59 62 struct xfs_inobt_rec_incore *recs; 60 63 ··· 304 301 if (XFS_IS_CORRUPT(mp, *has_more != 1)) 305 302 return -EFSCORRUPTED; 306 303 304 + iwag->lastino = XFS_AGINO_TO_INO(mp, agno, 305 + irec->ir_startino + XFS_INODES_PER_CHUNK - 1); 306 + 307 307 /* 308 308 * If the LE lookup yielded an inobt record before the cursor position, 309 309 * skip it and see if there's another one after it. ··· 353 347 struct xfs_mount *mp = iwag->mp; 354 348 struct xfs_trans *tp = iwag->tp; 355 349 struct xfs_inobt_rec_incore *irec; 356 - xfs_agino_t restart; 350 + xfs_agino_t next_agino; 357 351 int error; 352 + 353 + next_agino = XFS_INO_TO_AGINO(mp, iwag->lastino) + 1; 358 354 359 355 ASSERT(iwag->nr_recs > 0); 360 356 361 357 /* Delete cursor but remember the last record we cached... */ 362 358 xfs_iwalk_del_inobt(tp, curpp, agi_bpp, 0); 363 359 irec = &iwag->recs[iwag->nr_recs - 1]; 364 - restart = irec->ir_startino + XFS_INODES_PER_CHUNK - 1; 360 + ASSERT(next_agino == irec->ir_startino + XFS_INODES_PER_CHUNK); 365 361 366 362 error = xfs_iwalk_ag_recs(iwag); 367 363 if (error) ··· 380 372 if (error) 381 373 return error; 382 374 383 - return xfs_inobt_lookup(*curpp, restart, XFS_LOOKUP_GE, has_more); 375 + return xfs_inobt_lookup(*curpp, next_agino, XFS_LOOKUP_GE, has_more); 384 376 } 385 377 386 378 /* Walk all inodes in a single AG, from @iwag->startino to the end of the AG. */ ··· 404 396 405 397 while (!error && has_more) { 406 398 struct xfs_inobt_rec_incore *irec; 399 + xfs_ino_t rec_fsino; 407 400 408 401 cond_resched(); 409 402 if (xfs_pwork_want_abort(&iwag->pwork)) ··· 415 406 error = xfs_inobt_get_rec(cur, irec, &has_more); 416 407 if (error || !has_more) 417 408 break; 409 + 410 + /* Make sure that we always move forward. 
*/ 411 + rec_fsino = XFS_AGINO_TO_INO(mp, agno, irec->ir_startino); 412 + if (iwag->lastino != NULLFSINO && 413 + XFS_IS_CORRUPT(mp, iwag->lastino >= rec_fsino)) { 414 + error = -EFSCORRUPTED; 415 + goto out; 416 + } 417 + iwag->lastino = rec_fsino + XFS_INODES_PER_CHUNK - 1; 418 418 419 419 /* No allocated inodes in this chunk; skip it. */ 420 420 if (iwag->skip_empty && irec->ir_freecount == irec->ir_count) { ··· 553 535 .trim_start = 1, 554 536 .skip_empty = 1, 555 537 .pwork = XFS_PWORK_SINGLE_THREADED, 538 + .lastino = NULLFSINO, 556 539 }; 557 540 xfs_agnumber_t agno = XFS_INO_TO_AGNO(mp, startino); 558 541 int error; ··· 642 623 iwag->data = data; 643 624 iwag->startino = startino; 644 625 iwag->sz_recs = xfs_iwalk_prefetch(inode_records); 626 + iwag->lastino = NULLFSINO; 645 627 xfs_pwork_queue(&pctl, &iwag->pwork); 646 628 startino = XFS_AGINO_TO_INO(mp, agno + 1, 0); 647 629 if (flags & XFS_INOBT_WALK_SAME_AG) ··· 716 696 .startino = startino, 717 697 .sz_recs = xfs_inobt_walk_prefetch(inobt_records), 718 698 .pwork = XFS_PWORK_SINGLE_THREADED, 699 + .lastino = NULLFSINO, 719 700 }; 720 701 xfs_agnumber_t agno = XFS_INO_TO_AGNO(mp, startino); 721 702 int error;
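The xfs_iwalk.c hunk records the last inode seen (`lastino`) and treats any inobt record that fails to move the cursor forward as corruption, so a cyclic or mis-ordered on-disk btree cannot livelock the walk. A minimal model of that guard (`NULLINO` and `CHUNK` are stand-ins for `NULLFSINO` and `XFS_INODES_PER_CHUNK`):

```c
#include <assert.h>
#include <stdint.h>

#define NULLINO	UINT64_MAX	/* "no inode seen yet" sentinel */
#define CHUNK	64u		/* inodes covered by one inobt record */

/* Each record must start past everything already visited; otherwise
 * report corruption (-1, modeling -EFSCORRUPTED) instead of looping. */
static int check_forward(uint64_t *lastino, uint64_t rec_start)
{
	if (*lastino != NULLINO && *lastino >= rec_start)
		return -1;
	*lastino = rec_start + CHUNK - 1;	/* advance the cursor */
	return 0;
}
```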
+8 -3
fs/xfs/xfs_mount.c
··· 194 194 } 195 195 196 196 pag = kmem_zalloc(sizeof(*pag), KM_MAYFAIL); 197 - if (!pag) 197 + if (!pag) { 198 + error = -ENOMEM; 198 199 goto out_unwind_new_pags; 200 + } 199 201 pag->pag_agno = index; 200 202 pag->pag_mount = mp; 201 203 spin_lock_init(&pag->pag_ici_lock); 202 204 INIT_RADIX_TREE(&pag->pag_ici_root, GFP_ATOMIC); 203 - if (xfs_buf_hash_init(pag)) 205 + 206 + error = xfs_buf_hash_init(pag); 207 + if (error) 204 208 goto out_free_pag; 205 209 init_waitqueue_head(&pag->pagb_wait); 206 210 spin_lock_init(&pag->pagb_lock); 207 211 pag->pagb_count = 0; 208 212 pag->pagb_tree = RB_ROOT; 209 213 210 - if (radix_tree_preload(GFP_NOFS)) 214 + error = radix_tree_preload(GFP_NOFS); 215 + if (error) 211 216 goto out_hash_destroy; 212 217 213 218 spin_lock(&mp->m_perag_lock);
+2
include/linux/compiler-clang.h
··· 8 8 + __clang_patchlevel__) 9 9 10 10 #if CLANG_VERSION < 100001 11 + #ifndef __BPF_TRACING__ 11 12 # error Sorry, your version of Clang is too old - please use 10.0.1 or newer. 13 + #endif 12 14 #endif 13 15 14 16 /* Compiler specific definitions for Clang compiler */
-4
include/linux/firmware/xlnx-zynqmp.h
··· 50 50 #define ZYNQMP_PM_CAPABILITY_WAKEUP 0x4U 51 51 #define ZYNQMP_PM_CAPABILITY_UNUSABLE 0x8U 52 52 53 - /* Feature check status */ 54 - #define PM_FEATURE_INVALID -1 55 - #define PM_FEATURE_UNCHECKED 0 56 - 57 53 /* 58 54 * Firmware FPGA Manager flags 59 55 * XILINX_ZYNQMP_PM_FPGA_FULL: FPGA full reconfiguration
-1
include/linux/intel-iommu.h
··· 798 798 extern int iommu_calculate_max_sagaw(struct intel_iommu *iommu); 799 799 extern int dmar_disabled; 800 800 extern int intel_iommu_enabled; 801 - extern int intel_iommu_tboot_noforce; 802 801 extern int intel_iommu_gfx_mapped; 803 802 #else 804 803 static inline int iommu_calculate_agaw(struct intel_iommu *iommu)
+1 -1
include/linux/jbd2.h
··· 401 401 #define JI_WAIT_DATA (1 << __JI_WAIT_DATA) 402 402 403 403 /** 404 - * struct jbd_inode - The jbd_inode type is the structure linking inodes in 404 + * struct jbd2_inode - The jbd_inode type is the structure linking inodes in 405 405 * ordered mode present in a transaction so that we can sync them during commit. 406 406 */ 407 407 struct jbd2_inode {
+14 -14
include/linux/memcontrol.h
··· 282 282 283 283 MEMCG_PADDING(_pad1_); 284 284 285 - /* 286 - * set > 0 if pages under this cgroup are moving to other cgroup. 287 - */ 288 - atomic_t moving_account; 289 - struct task_struct *move_lock_task; 290 - 291 - /* Legacy local VM stats and events */ 292 - struct memcg_vmstats_percpu __percpu *vmstats_local; 293 - 294 - /* Subtree VM stats and events (batched updates) */ 295 - struct memcg_vmstats_percpu __percpu *vmstats_percpu; 296 - 297 - MEMCG_PADDING(_pad2_); 298 - 299 285 atomic_long_t vmstats[MEMCG_NR_STAT]; 300 286 atomic_long_t vmevents[NR_VM_EVENT_ITEMS]; 301 287 ··· 302 316 struct obj_cgroup __rcu *objcg; 303 317 struct list_head objcg_list; /* list of inherited objcgs */ 304 318 #endif 319 + 320 + MEMCG_PADDING(_pad2_); 321 + 322 + /* 323 + * set > 0 if pages under this cgroup are moving to other cgroup. 324 + */ 325 + atomic_t moving_account; 326 + struct task_struct *move_lock_task; 327 + 328 + /* Legacy local VM stats and events */ 329 + struct memcg_vmstats_percpu __percpu *vmstats_local; 330 + 331 + /* Subtree VM stats and events (batched updates) */ 332 + struct memcg_vmstats_percpu __percpu *vmstats_percpu; 305 333 306 334 #ifdef CONFIG_CGROUP_WRITEBACK 307 335 struct list_head cgwb_list;
-14
include/linux/memory_hotplug.h
··· 281 281 } 282 282 #endif /* ! CONFIG_MEMORY_HOTPLUG */ 283 283 284 - #ifdef CONFIG_NUMA 285 - extern int memory_add_physaddr_to_nid(u64 start); 286 - extern int phys_to_target_node(u64 start); 287 - #else 288 - static inline int memory_add_physaddr_to_nid(u64 start) 289 - { 290 - return 0; 291 - } 292 - static inline int phys_to_target_node(u64 start) 293 - { 294 - return 0; 295 - } 296 - #endif 297 - 298 284 #if defined(CONFIG_MEMORY_HOTPLUG) || defined(CONFIG_DEFERRED_STRUCT_PAGE_INIT) 299 285 /* 300 286 * pgdat resizing functions
+5
include/linux/netdevice.h
··· 3163 3163 return false; 3164 3164 } 3165 3165 3166 + static inline bool dev_has_header(const struct net_device *dev) 3167 + { 3168 + return dev->header_ops && dev->header_ops->create; 3169 + } 3170 + 3166 3171 typedef int gifconf_func_t(struct net_device * dev, char __user * bufptr, 3167 3172 int len, int size); 3168 3173 int register_gifconf(unsigned int family, gifconf_func_t *gifconf);
+29 -1
include/linux/numa.h
··· 21 21 #endif 22 22 23 23 #ifdef CONFIG_NUMA 24 + #include <linux/printk.h> 25 + #include <asm/sparsemem.h> 26 + 24 27 /* Generic implementation available */ 25 28 int numa_map_to_online_node(int node); 26 - #else 29 + 30 + #ifndef memory_add_physaddr_to_nid 31 + static inline int memory_add_physaddr_to_nid(u64 start) 32 + { 33 + pr_info_once("Unknown online node for memory at 0x%llx, assuming node 0\n", 34 + start); 35 + return 0; 36 + } 37 + #endif 38 + #ifndef phys_to_target_node 39 + static inline int phys_to_target_node(u64 start) 40 + { 41 + pr_info_once("Unknown target node for memory at 0x%llx, assuming node 0\n", 42 + start); 43 + return 0; 44 + } 45 + #endif 46 + #else /* !CONFIG_NUMA */ 27 47 static inline int numa_map_to_online_node(int node) 28 48 { 29 49 return NUMA_NO_NODE; 50 + } 51 + static inline int memory_add_physaddr_to_nid(u64 start) 52 + { 53 + return 0; 54 + } 55 + static inline int phys_to_target_node(u64 start) 56 + { 57 + return 0; 30 58 } 31 59 #endif 32 60
+2
include/linux/pagemap.h
··· 906 906 xas_set(&xas, rac->_index); 907 907 rcu_read_lock(); 908 908 xas_for_each(&xas, page, rac->_index + rac->_nr_pages - 1) { 909 + if (xas_retry(&xas, page)) 910 + continue; 909 911 VM_BUG_ON_PAGE(!PageLocked(page), page); 910 912 VM_BUG_ON_PAGE(PageTail(page), page); 911 913 array[i++] = page;
+13
include/linux/pgtable.h
··· 1427 1427 1428 1428 #endif /* !__ASSEMBLY__ */ 1429 1429 1430 + #if !defined(MAX_POSSIBLE_PHYSMEM_BITS) && !defined(CONFIG_64BIT) 1431 + #ifdef CONFIG_PHYS_ADDR_T_64BIT 1432 + /* 1433 + * ZSMALLOC needs to know the highest PFN on 32-bit architectures 1434 + * with physical address space extension, but falls back to 1435 + * BITS_PER_LONG otherwise. 1436 + */ 1437 + #error Missing MAX_POSSIBLE_PHYSMEM_BITS definition 1438 + #else 1439 + #define MAX_POSSIBLE_PHYSMEM_BITS 32 1440 + #endif 1441 + #endif 1442 + 1430 1443 #ifndef has_transparent_hugepage 1431 1444 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 1432 1445 #define has_transparent_hugepage() 1
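The pgtable.h hunk gives 32-bit architectures a default `MAX_POSSIBLE_PHYSMEM_BITS` of 32 and turns a missing definition under 64-bit-phys-addr configs into a build error, using the "#ifndef, then default" idiom. The same pattern in a compilable userspace form (the `_DEMO` macro name is invented so as not to shadow the real one):

```c
#include <limits.h>

/* An arch header would #define this first; the fallback only fires
 * when it didn't, exactly as in the pgtable.h hunk. */
#ifndef MAX_POSSIBLE_PHYSMEM_BITS_DEMO
# if ULONG_MAX > 0xffffffffUL
#  define MAX_POSSIBLE_PHYSMEM_BITS_DEMO 64	/* 64-bit word size */
# else
#  define MAX_POSSIBLE_PHYSMEM_BITS_DEMO 32	/* 32-bit default */
# endif
#endif

static int max_physmem_bits(void)
{
	return MAX_POSSIBLE_PHYSMEM_BITS_DEMO;
}
```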
+1
include/linux/platform_data/ti-sysc.h
··· 50 50 s8 emufree_shift; 51 51 }; 52 52 53 + #define SYSC_MODULE_QUIRK_ENA_RESETDONE BIT(25) 53 54 #define SYSC_MODULE_QUIRK_PRUSS BIT(24) 54 55 #define SYSC_MODULE_QUIRK_DSS_RESET BIT(23) 55 56 #define SYSC_MODULE_QUIRK_RTC_UNLOCK BIT(22)
+24 -2
include/linux/sched.h
··· 552 552 * overruns. 553 553 */ 554 554 unsigned int dl_throttled : 1; 555 - unsigned int dl_boosted : 1; 556 555 unsigned int dl_yielded : 1; 557 556 unsigned int dl_non_contending : 1; 558 557 unsigned int dl_overrun : 1; ··· 570 571 * time. 571 572 */ 572 573 struct hrtimer inactive_timer; 574 + 575 + #ifdef CONFIG_RT_MUTEXES 576 + /* 577 + * Priority Inheritance. When a DEADLINE scheduling entity is boosted 578 + * pi_se points to the donor, otherwise points to the dl_se it belongs 579 + * to (the original one/itself). 580 + */ 581 + struct sched_dl_entity *pi_se; 582 + #endif 573 583 }; 574 584 575 585 #ifdef CONFIG_UCLAMP_TASK ··· 778 770 unsigned sched_reset_on_fork:1; 779 771 unsigned sched_contributes_to_load:1; 780 772 unsigned sched_migrated:1; 781 - unsigned sched_remote_wakeup:1; 782 773 #ifdef CONFIG_PSI 783 774 unsigned sched_psi_wake_requeue:1; 784 775 #endif ··· 786 779 unsigned :0; 787 780 788 781 /* Unserialized, strictly 'current' */ 782 + 783 + /* 784 + * This field must not be in the scheduler word above due to wakelist 785 + * queueing no longer being serialized by p->on_cpu. However: 786 + * 787 + * p->XXX = X; ttwu() 788 + * schedule() if (p->on_rq && ..) // false 789 + * smp_mb__after_spinlock(); if (smp_load_acquire(&p->on_cpu) && //true 790 + * deactivate_task() ttwu_queue_wakelist()) 791 + * p->on_rq = 0; p->sched_remote_wakeup = Y; 792 + * 793 + * guarantees all stores of 'current' are visible before 794 + * ->sched_remote_wakeup gets used, so it can be in this word. 795 + */ 796 + unsigned sched_remote_wakeup:1; 789 797 790 798 /* Bit to tell LSMs we're in execve(): */ 791 799 unsigned in_execve:1;
+8
include/net/bonding.h
··· 185 185 struct rtnl_link_stats64 slave_stats; 186 186 }; 187 187 188 + static inline struct slave *to_slave(struct kobject *kobj) 189 + { 190 + return container_of(kobj, struct slave, kobj); 191 + } 192 + 188 193 struct bond_up_slave { 189 194 unsigned int count; 190 195 struct rcu_head rcu; ··· 754 749 755 750 /* exported from bond_netlink.c */ 756 751 extern struct rtnl_link_ops bond_link_ops; 752 + 753 + /* exported from bond_sysfs_slave.c */ 754 + extern const struct sysfs_ops slave_sysfs_ops; 757 755 758 756 static inline netdev_tx_t bond_tx_drop(struct net_device *dev, struct sk_buff *skb) 759 757 {
+3 -2
include/net/inet_hashtables.h
··· 247 247 unsigned long high_limit); 248 248 int inet_hashinfo2_init_mod(struct inet_hashinfo *h); 249 249 250 - bool inet_ehash_insert(struct sock *sk, struct sock *osk); 251 - bool inet_ehash_nolisten(struct sock *sk, struct sock *osk); 250 + bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk); 251 + bool inet_ehash_nolisten(struct sock *sk, struct sock *osk, 252 + bool *found_dup_sk); 252 253 int __inet_hash(struct sock *sk, struct sock *osk); 253 254 int inet_hash(struct sock *sk); 254 255 void inet_unhash(struct sock *sk);
+6
include/net/tls.h
··· 199 199 * to be atomic. 200 200 */ 201 201 TLS_TX_SYNC_SCHED = 1, 202 + /* tls_dev_del was called for the RX side, device state was released, 203 + * but tls_ctx->netdev might still be kept, because TX-side driver 204 + * resources might not be released yet. Used to prevent the second 205 + * tls_dev_del call in tls_device_down if it happens simultaneously. 206 + */ 207 + TLS_RX_DEV_CLOSED = 2, 202 208 }; 203 209 204 210 struct cipher_context {
+3
include/scsi/libiscsi.h
··· 132 132 void *dd_data; /* driver/transport data */ 133 133 }; 134 134 135 + /* invalid scsi_task pointer */ 136 + #define INVALID_SCSI_TASK (struct iscsi_task *)-1l 137 + 135 138 static inline int iscsi_task_has_unsol_data(struct iscsi_task *task) 136 139 { 137 140 return task->unsol_r2t.data_length > task->unsol_r2t.sent;
+15
include/sound/rt1015.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * linux/sound/rt1015.h -- Platform data for RT1015 4 + * 5 + * Copyright 2020 Realtek Microelectronics 6 + */ 7 + 8 + #ifndef __LINUX_SND_RT1015_H 9 + #define __LINUX_SND_RT1015_H 10 + 11 + struct rt1015_platform_data { 12 + unsigned int power_up_delay_ms; 13 + }; 14 + 15 + #endif
+4 -4
include/trace/events/writeback.h
··· 190 190 ), 191 191 192 192 TP_fast_assign( 193 - strncpy(__entry->name, bdi_dev_name(inode_to_bdi(inode)), 32); 193 + strscpy_pad(__entry->name, bdi_dev_name(inode_to_bdi(inode)), 32); 194 194 __entry->ino = inode->i_ino; 195 195 __entry->cgroup_ino = __trace_wbc_assign_cgroup(wbc); 196 196 __entry->history = history; ··· 219 219 ), 220 220 221 221 TP_fast_assign( 222 - strncpy(__entry->name, bdi_dev_name(old_wb->bdi), 32); 222 + strscpy_pad(__entry->name, bdi_dev_name(old_wb->bdi), 32); 223 223 __entry->ino = inode->i_ino; 224 224 __entry->old_cgroup_ino = __trace_wb_assign_cgroup(old_wb); 225 225 __entry->new_cgroup_ino = __trace_wb_assign_cgroup(new_wb); ··· 252 252 struct address_space *mapping = page_mapping(page); 253 253 struct inode *inode = mapping ? mapping->host : NULL; 254 254 255 - strncpy(__entry->name, bdi_dev_name(wb->bdi), 32); 255 + strscpy_pad(__entry->name, bdi_dev_name(wb->bdi), 32); 256 256 __entry->bdi_id = wb->bdi->id; 257 257 __entry->ino = inode ? inode->i_ino : 0; 258 258 __entry->memcg_id = wb->memcg_css->id; ··· 285 285 ), 286 286 287 287 TP_fast_assign( 288 - strncpy(__entry->name, bdi_dev_name(wb->bdi), 32); 288 + strscpy_pad(__entry->name, bdi_dev_name(wb->bdi), 32); 289 289 __entry->cgroup_ino = __trace_wb_assign_cgroup(wb); 290 290 __entry->frn_bdi_id = frn_bdi_id; 291 291 __entry->frn_memcg_id = frn_memcg_id;
+2
include/uapi/linux/devlink.h
··· 526 526 DEVLINK_ATTR_RELOAD_STATS_LIMIT, /* u8 */ 527 527 DEVLINK_ATTR_RELOAD_STATS_VALUE, /* u32 */ 528 528 DEVLINK_ATTR_REMOTE_RELOAD_STATS, /* nested */ 529 + DEVLINK_ATTR_RELOAD_ACTION_INFO, /* nested */ 530 + DEVLINK_ATTR_RELOAD_ACTION_STATS, /* nested */ 529 531 530 532 /* add new attributes above here, update the policy in devlink.c */ 531 533
+2
include/uapi/linux/openvswitch.h
··· 1058 1058 __OVS_DEC_TTL_ATTR_MAX 1059 1059 }; 1060 1060 1061 + #define OVS_DEC_TTL_ATTR_MAX (__OVS_DEC_TTL_ATTR_MAX - 1) 1062 + 1061 1063 #endif /* _LINUX_OPENVSWITCH_H */
+1 -1
init/Kconfig
··· 719 719 with more CPUs. Therefore this value is used only when the sum of 720 720 contributions is greater than the half of the default kernel ring 721 721 buffer as defined by LOG_BUF_SHIFT. The default values are set 722 - so that more than 64 CPUs are needed to trigger the allocation. 722 + so that more than 16 CPUs are needed to trigger the allocation. 723 723 724 724 Also this option is ignored when "log_buf_len" kernel parameter is 725 725 used as it forces an exact (power of two) size of the ring buffer.
+4 -2
kernel/locking/lockdep.c
··· 108 108 { 109 109 DEBUG_LOCKS_WARN_ON(!irqs_disabled()); 110 110 111 + __this_cpu_inc(lockdep_recursion); 111 112 arch_spin_lock(&__lock); 112 113 __owner = current; 113 - __this_cpu_inc(lockdep_recursion); 114 114 } 115 115 116 116 static inline void lockdep_unlock(void) 117 117 { 118 + DEBUG_LOCKS_WARN_ON(!irqs_disabled()); 119 + 118 120 if (debug_locks && DEBUG_LOCKS_WARN_ON(__owner != current)) 119 121 return; 120 122 121 - __this_cpu_dec(lockdep_recursion); 122 123 __owner = NULL; 123 124 arch_spin_unlock(&__lock); 125 + __this_cpu_dec(lockdep_recursion); 124 126 } 125 127 126 128 static inline bool lockdep_assert_locked(void)
+2 -2
kernel/printk/printk.c
··· 528 528 if (dev_info) 529 529 memcpy(&r.info->dev_info, dev_info, sizeof(r.info->dev_info)); 530 530 531 - /* insert message */ 532 - if ((flags & LOG_CONT) || !(flags & LOG_NEWLINE)) 531 + /* A message without a trailing newline can be continued. */ 532 + if (!(flags & LOG_NEWLINE)) 533 533 prb_commit(&e); 534 534 else 535 535 prb_final_commit(&e);
-2
kernel/printk/printk_ringbuffer.c
··· 882 882 head_id = atomic_long_read(&desc_ring->head_id); /* LMM(desc_reserve:A) */ 883 883 884 884 do { 885 - desc = to_desc(desc_ring, head_id); 886 - 887 885 id = DESC_ID(head_id + 1); 888 886 id_prev_wrap = DESC_ID_PREV_WRAP(desc_ring, id); 889 887
+5 -11
kernel/ptrace.c
··· 264 264 return ret; 265 265 } 266 266 267 - static bool ptrace_has_cap(const struct cred *cred, struct user_namespace *ns, 268 - unsigned int mode) 267 + static bool ptrace_has_cap(struct user_namespace *ns, unsigned int mode) 269 268 { 270 - int ret; 271 - 272 269 if (mode & PTRACE_MODE_NOAUDIT) 273 - ret = security_capable(cred, ns, CAP_SYS_PTRACE, CAP_OPT_NOAUDIT); 274 - else 275 - ret = security_capable(cred, ns, CAP_SYS_PTRACE, CAP_OPT_NONE); 276 - 277 - return ret == 0; 270 + return ns_capable_noaudit(ns, CAP_SYS_PTRACE); 271 + return ns_capable(ns, CAP_SYS_PTRACE); 278 272 } 279 273 280 274 /* Returns 0 on success, -errno on denial. */ ··· 320 326 gid_eq(caller_gid, tcred->sgid) && 321 327 gid_eq(caller_gid, tcred->gid)) 322 328 goto ok; 323 - if (ptrace_has_cap(cred, tcred->user_ns, mode)) 329 + if (ptrace_has_cap(tcred->user_ns, mode)) 324 330 goto ok; 325 331 rcu_read_unlock(); 326 332 return -EPERM; ··· 339 345 mm = task->mm; 340 346 if (mm && 341 347 ((get_dumpable(mm) != SUID_DUMP_USER) && 342 - !ptrace_has_cap(cred, mm->user_ns, mode))) 348 + !ptrace_has_cap(mm->user_ns, mode))) 343 349 return -EPERM; 344 350 345 351 return security_ptrace_access_check(task, mode);
+16 -10
kernel/sched/core.c
··· 2501 2501 #ifdef CONFIG_SMP 2502 2502 if (wake_flags & WF_MIGRATED) 2503 2503 en_flags |= ENQUEUE_MIGRATED; 2504 + else 2504 2505 #endif 2506 + if (p->in_iowait) { 2507 + delayacct_blkio_end(p); 2508 + atomic_dec(&task_rq(p)->nr_iowait); 2509 + } 2505 2510 2506 2511 activate_task(rq, p, en_flags); 2507 2512 ttwu_do_wakeup(rq, p, wake_flags, rf); ··· 2893 2888 if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags)) 2894 2889 goto unlock; 2895 2890 2896 - if (p->in_iowait) { 2897 - delayacct_blkio_end(p); 2898 - atomic_dec(&task_rq(p)->nr_iowait); 2899 - } 2900 - 2901 2891 #ifdef CONFIG_SMP 2902 2892 /* 2903 2893 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be ··· 2963 2963 2964 2964 cpu = select_task_rq(p, p->wake_cpu, SD_BALANCE_WAKE, wake_flags); 2965 2965 if (task_cpu(p) != cpu) { 2966 + if (p->in_iowait) { 2967 + delayacct_blkio_end(p); 2968 + atomic_dec(&task_rq(p)->nr_iowait); 2969 + } 2970 + 2966 2971 wake_flags |= WF_MIGRATED; 2967 2972 psi_ttwu_dequeue(p); 2968 2973 set_task_cpu(p, cpu); ··· 4912 4907 if (!dl_prio(p->normal_prio) || 4913 4908 (pi_task && dl_prio(pi_task->prio) && 4914 4909 dl_entity_preempt(&pi_task->dl, &p->dl))) { 4915 - p->dl.dl_boosted = 1; 4910 + p->dl.pi_se = pi_task->dl.pi_se; 4916 4911 queue_flag |= ENQUEUE_REPLENISH; 4917 - } else 4918 - p->dl.dl_boosted = 0; 4912 + } else { 4913 + p->dl.pi_se = &p->dl; 4914 + } 4919 4915 p->sched_class = &dl_sched_class; 4920 4916 } else if (rt_prio(prio)) { 4921 4917 if (dl_prio(oldprio)) 4922 - p->dl.dl_boosted = 0; 4918 + p->dl.pi_se = &p->dl; 4923 4919 if (oldprio < prio) 4924 4920 queue_flag |= ENQUEUE_HEAD; 4925 4921 p->sched_class = &rt_sched_class; 4926 4922 } else { 4927 4923 if (dl_prio(oldprio)) 4928 - p->dl.dl_boosted = 0; 4924 + p->dl.pi_se = &p->dl; 4929 4925 if (rt_prio(oldprio)) 4930 4926 p->rt.timeout = 0; 4931 4927 p->sched_class = &fair_sched_class;
+53 -44
kernel/sched/deadline.c
··· 43 43 return !RB_EMPTY_NODE(&dl_se->rb_node); 44 44 } 45 45 46 + #ifdef CONFIG_RT_MUTEXES 47 + static inline struct sched_dl_entity *pi_of(struct sched_dl_entity *dl_se) 48 + { 49 + return dl_se->pi_se; 50 + } 51 + 52 + static inline bool is_dl_boosted(struct sched_dl_entity *dl_se) 53 + { 54 + return pi_of(dl_se) != dl_se; 55 + } 56 + #else 57 + static inline struct sched_dl_entity *pi_of(struct sched_dl_entity *dl_se) 58 + { 59 + return dl_se; 60 + } 61 + 62 + static inline bool is_dl_boosted(struct sched_dl_entity *dl_se) 63 + { 64 + return false; 65 + } 66 + #endif 67 + 46 68 #ifdef CONFIG_SMP 47 69 static inline struct dl_bw *dl_bw_of(int i) 48 70 { ··· 720 698 struct dl_rq *dl_rq = dl_rq_of_se(dl_se); 721 699 struct rq *rq = rq_of_dl_rq(dl_rq); 722 700 723 - WARN_ON(dl_se->dl_boosted); 701 + WARN_ON(is_dl_boosted(dl_se)); 724 702 WARN_ON(dl_time_before(rq_clock(rq), dl_se->deadline)); 725 703 726 704 /* ··· 758 736 * could happen are, typically, a entity voluntarily trying to overcome its 759 737 * runtime, or it just underestimated it during sched_setattr(). 760 738 */ 761 - static void replenish_dl_entity(struct sched_dl_entity *dl_se, 762 - struct sched_dl_entity *pi_se) 739 + static void replenish_dl_entity(struct sched_dl_entity *dl_se) 763 740 { 764 741 struct dl_rq *dl_rq = dl_rq_of_se(dl_se); 765 742 struct rq *rq = rq_of_dl_rq(dl_rq); 766 743 767 - BUG_ON(pi_se->dl_runtime <= 0); 744 + BUG_ON(pi_of(dl_se)->dl_runtime <= 0); 768 745 769 746 /* 770 747 * This could be the case for a !-dl task that is boosted. 771 748 * Just go with full inherited parameters. 772 749 */ 773 750 if (dl_se->dl_deadline == 0) { 774 - dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline; 775 - dl_se->runtime = pi_se->dl_runtime; 751 + dl_se->deadline = rq_clock(rq) + pi_of(dl_se)->dl_deadline; 752 + dl_se->runtime = pi_of(dl_se)->dl_runtime; 776 753 } 777 754 778 755 if (dl_se->dl_yielded && dl_se->runtime > 0) ··· 784 763 * arbitrary large. 
785 764 */ 786 765 while (dl_se->runtime <= 0) { 787 - dl_se->deadline += pi_se->dl_period; 788 - dl_se->runtime += pi_se->dl_runtime; 766 + dl_se->deadline += pi_of(dl_se)->dl_period; 767 + dl_se->runtime += pi_of(dl_se)->dl_runtime; 789 768 } 790 769 791 770 /* ··· 799 778 */ 800 779 if (dl_time_before(dl_se->deadline, rq_clock(rq))) { 801 780 printk_deferred_once("sched: DL replenish lagged too much\n"); 802 - dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline; 803 - dl_se->runtime = pi_se->dl_runtime; 781 + dl_se->deadline = rq_clock(rq) + pi_of(dl_se)->dl_deadline; 782 + dl_se->runtime = pi_of(dl_se)->dl_runtime; 804 783 } 805 784 806 785 if (dl_se->dl_yielded) ··· 833 812 * task with deadline equal to period this is the same of using 834 813 * dl_period instead of dl_deadline in the equation above. 835 814 */ 836 - static bool dl_entity_overflow(struct sched_dl_entity *dl_se, 837 - struct sched_dl_entity *pi_se, u64 t) 815 + static bool dl_entity_overflow(struct sched_dl_entity *dl_se, u64 t) 838 816 { 839 817 u64 left, right; 840 818 ··· 855 835 * of anything below microseconds resolution is actually fiction 856 836 * (but still we want to give the user that illusion >;). 857 837 */ 858 - left = (pi_se->dl_deadline >> DL_SCALE) * (dl_se->runtime >> DL_SCALE); 838 + left = (pi_of(dl_se)->dl_deadline >> DL_SCALE) * (dl_se->runtime >> DL_SCALE); 859 839 right = ((dl_se->deadline - t) >> DL_SCALE) * 860 - (pi_se->dl_runtime >> DL_SCALE); 840 + (pi_of(dl_se)->dl_runtime >> DL_SCALE); 861 841 862 842 return dl_time_before(right, left); 863 843 } ··· 942 922 * Please refer to the comments update_dl_revised_wakeup() function to find 943 923 * more about the Revised CBS rule. 
944 924 */ 945 - static void update_dl_entity(struct sched_dl_entity *dl_se, 946 - struct sched_dl_entity *pi_se) 925 + static void update_dl_entity(struct sched_dl_entity *dl_se) 947 926 { 948 927 struct dl_rq *dl_rq = dl_rq_of_se(dl_se); 949 928 struct rq *rq = rq_of_dl_rq(dl_rq); 950 929 951 930 if (dl_time_before(dl_se->deadline, rq_clock(rq)) || 952 - dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) { 931 + dl_entity_overflow(dl_se, rq_clock(rq))) { 953 932 954 933 if (unlikely(!dl_is_implicit(dl_se) && 955 934 !dl_time_before(dl_se->deadline, rq_clock(rq)) && 956 - !dl_se->dl_boosted)){ 935 + !is_dl_boosted(dl_se))) { 957 936 update_dl_revised_wakeup(dl_se, rq); 958 937 return; 959 938 } 960 939 961 - dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline; 962 - dl_se->runtime = pi_se->dl_runtime; 940 + dl_se->deadline = rq_clock(rq) + pi_of(dl_se)->dl_deadline; 941 + dl_se->runtime = pi_of(dl_se)->dl_runtime; 963 942 } 964 943 } 965 944 ··· 1057 1038 * The task might have been boosted by someone else and might be in the 1058 1039 * boosting/deboosting path, its not throttled. 1059 1040 */ 1060 - if (dl_se->dl_boosted) 1041 + if (is_dl_boosted(dl_se)) 1061 1042 goto unlock; 1062 1043 1063 1044 /* ··· 1085 1066 * but do not enqueue -- wait for our wakeup to do that. 
1086 1067 */ 1087 1068 if (!task_on_rq_queued(p)) { 1088 - replenish_dl_entity(dl_se, dl_se); 1069 + replenish_dl_entity(dl_se); 1089 1070 goto unlock; 1090 1071 } 1091 1072 ··· 1175 1156 1176 1157 if (dl_time_before(dl_se->deadline, rq_clock(rq)) && 1177 1158 dl_time_before(rq_clock(rq), dl_next_period(dl_se))) { 1178 - if (unlikely(dl_se->dl_boosted || !start_dl_timer(p))) 1159 + if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(p))) 1179 1160 return; 1180 1161 dl_se->dl_throttled = 1; 1181 1162 if (dl_se->runtime > 0) ··· 1306 1287 dl_se->dl_overrun = 1; 1307 1288 1308 1289 __dequeue_task_dl(rq, curr, 0); 1309 - if (unlikely(dl_se->dl_boosted || !start_dl_timer(curr))) 1290 + if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(curr))) 1310 1291 enqueue_task_dl(rq, curr, ENQUEUE_REPLENISH); 1311 1292 1312 1293 if (!is_leftmost(curr, &rq->dl)) ··· 1500 1481 } 1501 1482 1502 1483 static void 1503 - enqueue_dl_entity(struct sched_dl_entity *dl_se, 1504 - struct sched_dl_entity *pi_se, int flags) 1484 + enqueue_dl_entity(struct sched_dl_entity *dl_se, int flags) 1505 1485 { 1506 1486 BUG_ON(on_dl_rq(dl_se)); 1507 1487 ··· 1511 1493 */ 1512 1494 if (flags & ENQUEUE_WAKEUP) { 1513 1495 task_contending(dl_se, flags); 1514 - update_dl_entity(dl_se, pi_se); 1496 + update_dl_entity(dl_se); 1515 1497 } else if (flags & ENQUEUE_REPLENISH) { 1516 - replenish_dl_entity(dl_se, pi_se); 1498 + replenish_dl_entity(dl_se); 1517 1499 } else if ((flags & ENQUEUE_RESTORE) && 1518 1500 dl_time_before(dl_se->deadline, 1519 1501 rq_clock(rq_of_dl_rq(dl_rq_of_se(dl_se))))) { ··· 1530 1512 1531 1513 static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags) 1532 1514 { 1533 - struct task_struct *pi_task = rt_mutex_get_top_task(p); 1534 - struct sched_dl_entity *pi_se = &p->dl; 1535 - 1536 - /* 1537 - * Use the scheduling parameters of the top pi-waiter task if: 1538 - * - we have a top pi-waiter which is a SCHED_DEADLINE task AND 1539 - * - our dl_boosted is set (i.e. 
the pi-waiter's (absolute) deadline is 1540 - * smaller than our deadline OR we are a !SCHED_DEADLINE task getting 1541 - * boosted due to a SCHED_DEADLINE pi-waiter). 1542 - * Otherwise we keep our runtime and deadline. 1543 - */ 1544 - if (pi_task && dl_prio(pi_task->normal_prio) && p->dl.dl_boosted) { 1545 - pi_se = &pi_task->dl; 1515 + if (is_dl_boosted(&p->dl)) { 1546 1516 /* 1547 1517 * Because of delays in the detection of the overrun of a 1548 1518 * thread's runtime, it might be the case that a thread ··· 1563 1557 * the throttle. 1564 1558 */ 1565 1559 p->dl.dl_throttled = 0; 1566 - BUG_ON(!p->dl.dl_boosted || flags != ENQUEUE_REPLENISH); 1560 + BUG_ON(!is_dl_boosted(&p->dl) || flags != ENQUEUE_REPLENISH); 1567 1561 return; 1568 1562 } 1569 1563 ··· 1600 1594 return; 1601 1595 } 1602 1596 1603 - enqueue_dl_entity(&p->dl, pi_se, flags); 1597 + enqueue_dl_entity(&p->dl, flags); 1604 1598 1605 1599 if (!task_current(rq, p) && p->nr_cpus_allowed > 1) 1606 1600 enqueue_pushable_dl_task(rq, p); ··· 2793 2787 dl_se->dl_bw = 0; 2794 2788 dl_se->dl_density = 0; 2795 2789 2796 - dl_se->dl_boosted = 0; 2797 2790 dl_se->dl_throttled = 0; 2798 2791 dl_se->dl_yielded = 0; 2799 2792 dl_se->dl_non_contending = 0; 2800 2793 dl_se->dl_overrun = 0; 2794 + 2795 + #ifdef CONFIG_RT_MUTEXES 2796 + dl_se->pi_se = dl_se; 2797 + #endif 2801 2798 } 2802 2799 2803 2800 bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
+2 -1
kernel/sched/fair.c
··· 5477 5477 struct cfs_rq *cfs_rq; 5478 5478 struct sched_entity *se = &p->se; 5479 5479 int idle_h_nr_running = task_has_idle_policy(p); 5480 + int task_new = !(flags & ENQUEUE_WAKEUP); 5480 5481 5481 5482 /* 5482 5483 * The code below (indirectly) updates schedutil which looks at ··· 5550 5549 * into account, but that is not straightforward to implement, 5551 5550 * and the following generally works well enough in practice. 5552 5551 */ 5553 - if (flags & ENQUEUE_WAKEUP) 5552 + if (!task_new) 5554 5553 update_overutilized_status(rq); 5555 5554 5556 5555 enqueue_throttle:
+2 -3
kernel/seccomp.c
··· 38 38 #include <linux/filter.h> 39 39 #include <linux/pid.h> 40 40 #include <linux/ptrace.h> 41 - #include <linux/security.h> 41 + #include <linux/capability.h> 42 42 #include <linux/tracehook.h> 43 43 #include <linux/uaccess.h> 44 44 #include <linux/anon_inodes.h> ··· 558 558 * behavior of privileged children. 559 559 */ 560 560 if (!task_no_new_privs(current) && 561 - security_capable(current_cred(), current_user_ns(), 562 - CAP_SYS_ADMIN, CAP_OPT_NOAUDIT) != 0) 561 + !ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN)) 563 562 return ERR_PTR(-EACCES); 564 563 565 564 /* Allocate a new seccomp_filter */
+22 -4
mm/filemap.c
··· 1484 1484 rotate_reclaimable_page(page); 1485 1485 } 1486 1486 1487 + /* 1488 + * Writeback does not hold a page reference of its own, relying 1489 + * on truncation to wait for the clearing of PG_writeback. 1490 + * But here we must make sure that the page is not freed and 1491 + * reused before the wake_up_page(). 1492 + */ 1493 + get_page(page); 1487 1494 if (!test_clear_page_writeback(page)) 1488 1495 BUG(); 1489 1496 1490 1497 smp_mb__after_atomic(); 1491 1498 wake_up_page(page, PG_writeback); 1499 + put_page(page); 1492 1500 } 1493 1501 EXPORT_SYMBOL(end_page_writeback); 1494 1502 ··· 2355 2347 2356 2348 page_not_up_to_date: 2357 2349 /* Get exclusive access to the page ... */ 2358 - if (iocb->ki_flags & IOCB_WAITQ) 2350 + if (iocb->ki_flags & IOCB_WAITQ) { 2351 + if (written) { 2352 + put_page(page); 2353 + goto out; 2354 + } 2359 2355 error = lock_page_async(page, iocb->ki_waitq); 2360 - else 2356 + } else { 2361 2357 error = lock_page_killable(page); 2358 + } 2362 2359 if (unlikely(error)) 2363 2360 goto readpage_error; 2364 2361 ··· 2406 2393 } 2407 2394 2408 2395 if (!PageUptodate(page)) { 2409 - if (iocb->ki_flags & IOCB_WAITQ) 2396 + if (iocb->ki_flags & IOCB_WAITQ) { 2397 + if (written) { 2398 + put_page(page); 2399 + goto out; 2400 + } 2410 2401 error = lock_page_async(page, iocb->ki_waitq); 2411 - else 2402 + } else { 2412 2403 error = lock_page_killable(page); 2404 + } 2413 2405 2414 2406 if (unlikely(error)) 2415 2407 goto readpage_error;
+4 -5
mm/huge_memory.c
··· 710 710 transparent_hugepage_use_zero_page()) { 711 711 pgtable_t pgtable; 712 712 struct page *zero_page; 713 - bool set; 714 713 vm_fault_t ret; 715 714 pgtable = pte_alloc_one(vma->vm_mm); 716 715 if (unlikely(!pgtable)) ··· 722 723 } 723 724 vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); 724 725 ret = 0; 725 - set = false; 726 726 if (pmd_none(*vmf->pmd)) { 727 727 ret = check_stable_address_space(vma->vm_mm); 728 728 if (ret) { 729 729 spin_unlock(vmf->ptl); 730 + pte_free(vma->vm_mm, pgtable); 730 731 } else if (userfaultfd_missing(vma)) { 731 732 spin_unlock(vmf->ptl); 733 + pte_free(vma->vm_mm, pgtable); 732 734 ret = handle_userfault(vmf, VM_UFFD_MISSING); 733 735 VM_BUG_ON(ret & VM_FAULT_FALLBACK); 734 736 } else { 735 737 set_huge_zero_page(pgtable, vma->vm_mm, vma, 736 738 haddr, vmf->pmd, zero_page); 737 739 spin_unlock(vmf->ptl); 738 - set = true; 739 740 } 740 - } else 741 + } else { 741 742 spin_unlock(vmf->ptl); 742 - if (!set) 743 743 pte_free(vma->vm_mm, pgtable); 744 + } 744 745 return ret; 745 746 } 746 747 gfp = alloc_hugepage_direct_gfpmask(vma);
+1 -3
mm/madvise.c
··· 226 226 struct address_space *mapping) 227 227 { 228 228 XA_STATE(xas, &mapping->i_pages, linear_page_index(vma, start)); 229 - pgoff_t end_index = end / PAGE_SIZE; 229 + pgoff_t end_index = linear_page_index(vma, end + PAGE_SIZE - 1); 230 230 struct page *page; 231 231 232 232 rcu_read_lock(); ··· 1231 1231 ret = total_len - iov_iter_count(&iter); 1232 1232 1233 1233 mmput(mm); 1234 - return ret; 1235 - 1236 1234 release_task: 1237 1235 put_task_struct(task); 1238 1236 put_pid:
+7 -2
mm/memcontrol.c
··· 867 867 rcu_read_lock(); 868 868 memcg = mem_cgroup_from_obj(p); 869 869 870 - /* Untracked pages have no memcg, no lruvec. Update only the node */ 871 - if (!memcg || memcg == root_mem_cgroup) { 870 + /* 871 + * Untracked pages have no memcg, no lruvec. Update only the 872 + * node. If we reparent the slab objects to the root memcg, 873 + * when we free the slab object, we need to update the per-memcg 874 + * vmstats to keep it correct for the root memcg. 875 + */ 876 + if (!memcg) { 872 877 __mod_node_page_state(pgdat, idx, val); 873 878 } else { 874 879 lruvec = mem_cgroup_lruvec(memcg, pgdat);
-18
mm/memory_hotplug.c
··· 350 350 return err; 351 351 } 352 352 353 - #ifdef CONFIG_NUMA 354 - int __weak memory_add_physaddr_to_nid(u64 start) 355 - { 356 - pr_info_once("Unknown online node for memory at 0x%llx, assuming node 0\n", 357 - start); 358 - return 0; 359 - } 360 - EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid); 361 - 362 - int __weak phys_to_target_node(u64 start) 363 - { 364 - pr_info_once("Unknown target node for memory at 0x%llx, assuming node 0\n", 365 - start); 366 - return 0; 367 - } 368 - EXPORT_SYMBOL_GPL(phys_to_target_node); 369 - #endif 370 - 371 353 /* find the smallest valid pfn in the range [start_pfn, end_pfn) */ 372 354 static unsigned long find_smallest_section_pfn(int nid, struct zone *zone, 373 355 unsigned long start_pfn,
-6
mm/page-writeback.c
··· 2754 2754 } else { 2755 2755 ret = TestClearPageWriteback(page); 2756 2756 } 2757 - /* 2758 - * NOTE: Page might be free now! Writeback doesn't hold a page 2759 - * reference on its own, it relies on truncation to wait for 2760 - * the clearing of PG_writeback. The below can only access 2761 - * page state that is static across allocation cycles. 2762 - */ 2763 2757 if (ret) { 2764 2758 dec_lruvec_state(lruvec, NR_WRITEBACK); 2765 2759 dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);
+1
net/batman-adv/log.c
··· 180 180 .read = batadv_log_read, 181 181 .poll = batadv_log_poll, 182 182 .llseek = no_llseek, 183 + .owner = THIS_MODULE, 183 184 }; 184 185 185 186 /**
+5 -2
net/can/af_can.c
··· 541 541 
 542 542 /* Check for bugs in CAN protocol implementations using af_can.c: 
 543 543 * 'rcv' will be NULL if no matching list item was found for removal. 
 544 + * As this case may potentially happen when closing a socket while 
 545 + * the notifier for removing the CAN netdev is running, we just print 
 546 + * a warning here. 
 544 547 */ 
 545 548 if (!rcv) { 
 546 - WARN(1, "BUG: receive list entry not found for dev %s, id %03X, mask %03X\n", 
 547 - DNAME(dev), can_id, mask); 
 549 + pr_warn("can: receive list entry not found for dev %s, id %03X, mask %03X\n", 
 550 + DNAME(dev), can_id, mask); 
 548 551 goto out; 
 549 552 } 
 550 553 
+39 -17
net/core/devlink.c
··· 517 517 return test_bit(limit, &devlink->ops->reload_limits); 518 518 } 519 519 520 - static int devlink_reload_stat_put(struct sk_buff *msg, enum devlink_reload_action action, 520 + static int devlink_reload_stat_put(struct sk_buff *msg, 521 521 enum devlink_reload_limit limit, u32 value) 522 522 { 523 523 struct nlattr *reload_stats_entry; ··· 526 526 if (!reload_stats_entry) 527 527 return -EMSGSIZE; 528 528 529 - if (nla_put_u8(msg, DEVLINK_ATTR_RELOAD_ACTION, action) || 530 - nla_put_u8(msg, DEVLINK_ATTR_RELOAD_STATS_LIMIT, limit) || 529 + if (nla_put_u8(msg, DEVLINK_ATTR_RELOAD_STATS_LIMIT, limit) || 531 530 nla_put_u32(msg, DEVLINK_ATTR_RELOAD_STATS_VALUE, value)) 532 531 goto nla_put_failure; 533 532 nla_nest_end(msg, reload_stats_entry); ··· 539 540 540 541 static int devlink_reload_stats_put(struct sk_buff *msg, struct devlink *devlink, bool is_remote) 541 542 { 542 - struct nlattr *reload_stats_attr; 543 + struct nlattr *reload_stats_attr, *act_info, *act_stats; 543 544 int i, j, stat_idx; 544 545 u32 value; 545 546 ··· 551 552 if (!reload_stats_attr) 552 553 return -EMSGSIZE; 553 554 554 - for (j = 0; j <= DEVLINK_RELOAD_LIMIT_MAX; j++) { 555 - /* Remote stats are shown even if not locally supported. Stats 556 - * of actions with unspecified limit are shown though drivers 557 - * don't need to register unspecified limit. 
558 - */ 559 - if (!is_remote && j != DEVLINK_RELOAD_LIMIT_UNSPEC && 560 - !devlink_reload_limit_is_supported(devlink, j)) 555 + for (i = 0; i <= DEVLINK_RELOAD_ACTION_MAX; i++) { 556 + if ((!is_remote && 557 + !devlink_reload_action_is_supported(devlink, i)) || 558 + i == DEVLINK_RELOAD_ACTION_UNSPEC) 561 559 continue; 562 - for (i = 0; i <= DEVLINK_RELOAD_ACTION_MAX; i++) { 563 - if ((!is_remote && !devlink_reload_action_is_supported(devlink, i)) || 564 - i == DEVLINK_RELOAD_ACTION_UNSPEC || 560 + act_info = nla_nest_start(msg, DEVLINK_ATTR_RELOAD_ACTION_INFO); 561 + if (!act_info) 562 + goto nla_put_failure; 563 + 564 + if (nla_put_u8(msg, DEVLINK_ATTR_RELOAD_ACTION, i)) 565 + goto action_info_nest_cancel; 566 + act_stats = nla_nest_start(msg, DEVLINK_ATTR_RELOAD_ACTION_STATS); 567 + if (!act_stats) 568 + goto action_info_nest_cancel; 569 + 570 + for (j = 0; j <= DEVLINK_RELOAD_LIMIT_MAX; j++) { 571 + /* Remote stats are shown even if not locally supported. 572 + * Stats of actions with unspecified limit are shown 573 + * though drivers don't need to register unspecified 574 + * limit. 
575 + */ 576 + if ((!is_remote && j != DEVLINK_RELOAD_LIMIT_UNSPEC && 577 + !devlink_reload_limit_is_supported(devlink, j)) || 565 578 devlink_reload_combination_is_invalid(i, j)) 566 579 continue; 567 580 ··· 582 571 value = devlink->stats.reload_stats[stat_idx]; 583 572 else 584 573 value = devlink->stats.remote_reload_stats[stat_idx]; 585 - if (devlink_reload_stat_put(msg, i, j, value)) 586 - goto nla_put_failure; 574 + if (devlink_reload_stat_put(msg, j, value)) 575 + goto action_stats_nest_cancel; 587 576 } 577 + nla_nest_end(msg, act_stats); 578 + nla_nest_end(msg, act_info); 588 579 } 589 580 nla_nest_end(msg, reload_stats_attr); 590 581 return 0; 591 582 583 + action_stats_nest_cancel: 584 + nla_nest_cancel(msg, act_stats); 585 + action_info_nest_cancel: 586 + nla_nest_cancel(msg, act_info); 592 587 nla_put_failure: 593 588 nla_nest_cancel(msg, reload_stats_attr); 594 589 return -EMSGSIZE; ··· 772 755 if (nla_put_u32(msg, DEVLINK_ATTR_PORT_INDEX, devlink_port->index)) 773 756 goto nla_put_failure; 774 757 758 + /* Hold rtnl lock while accessing port's netdev attributes. 
*/ 759 + rtnl_lock(); 775 760 spin_lock_bh(&devlink_port->type_lock); 776 761 if (nla_put_u16(msg, DEVLINK_ATTR_PORT_TYPE, devlink_port->type)) 777 762 goto nla_put_failure_type_locked; ··· 782 763 devlink_port->desired_type)) 783 764 goto nla_put_failure_type_locked; 784 765 if (devlink_port->type == DEVLINK_PORT_TYPE_ETH) { 766 + struct net *net = devlink_net(devlink_port->devlink); 785 767 struct net_device *netdev = devlink_port->type_dev; 786 768 787 - if (netdev && 769 + if (netdev && net_eq(net, dev_net(netdev)) && 788 770 (nla_put_u32(msg, DEVLINK_ATTR_PORT_NETDEV_IFINDEX, 789 771 netdev->ifindex) || 790 772 nla_put_string(msg, DEVLINK_ATTR_PORT_NETDEV_NAME, ··· 801 781 goto nla_put_failure_type_locked; 802 782 } 803 783 spin_unlock_bh(&devlink_port->type_lock); 784 + rtnl_unlock(); 804 785 if (devlink_nl_port_attrs_put(msg, devlink_port)) 805 786 goto nla_put_failure; 806 787 if (devlink_nl_port_function_attrs_put(msg, devlink_port, extack)) ··· 812 791 813 792 nla_put_failure_type_locked: 814 793 spin_unlock_bh(&devlink_port->type_lock); 794 + rtnl_unlock(); 815 795 nla_put_failure: 816 796 genlmsg_cancel(msg, hdr); 817 797 return -EMSGSIZE;
+6 -1
net/core/gro_cells.c
··· 99 99 struct gro_cell *cell = per_cpu_ptr(gcells->cells, i); 100 100 101 101 napi_disable(&cell->napi); 102 - netif_napi_del(&cell->napi); 102 + __netif_napi_del(&cell->napi); 103 103 __skb_queue_purge(&cell->napi_skbs); 104 104 } 105 + /* This barrier is needed because netpoll could access dev->napi_list 106 + * under rcu protection. 107 + */ 108 + synchronize_net(); 109 + 105 110 free_percpu(gcells->cells); 106 111 gcells->cells = NULL; 107 112 }
+1 -1
net/core/skbuff.c
··· 4562 4562 if (skb && (skb_next = skb_peek(q))) { 4563 4563 icmp_next = is_icmp_err_skb(skb_next); 4564 4564 if (icmp_next) 4565 - sk->sk_err = SKB_EXT_ERR(skb_next)->ee.ee_origin; 4565 + sk->sk_err = SKB_EXT_ERR(skb_next)->ee.ee_errno; 4566 4566 } 4567 4567 spin_unlock_irqrestore(&q->lock, flags); 4568 4568
+1 -1
net/dccp/ipv4.c
··· 427 427 428 428 if (__inet_inherit_port(sk, newsk) < 0) 429 429 goto put_and_exit; 430 - *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash)); 430 + *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), NULL); 431 431 if (*own_req) 432 432 ireq->ireq_opt = NULL; 433 433 else
+1 -1
net/dccp/ipv6.c
··· 533 533 dccp_done(newsk); 534 534 goto out; 535 535 } 536 - *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash)); 536 + *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), NULL); 537 537 /* Clone pktoptions received with SYN, if we own the req */ 538 538 if (*own_req && ireq->pktopts) { 539 539 newnp->pktoptions = skb_clone(ireq->pktopts, GFP_ATOMIC);
+1 -1
net/ipv4/inet_connection_sock.c
··· 787 787 timer_setup(&req->rsk_timer, reqsk_timer_handler, TIMER_PINNED); 788 788 mod_timer(&req->rsk_timer, jiffies + timeout); 789 789 790 - inet_ehash_insert(req_to_sk(req), NULL); 790 + inet_ehash_insert(req_to_sk(req), NULL, NULL); 791 791 /* before letting lookups find us, make sure all req fields 792 792 * are committed to memory and refcnt initialized. 793 793 */
+60 -8
net/ipv4/inet_hashtables.c
··· 20 20 #include <net/addrconf.h> 
 21 21 #include <net/inet_connection_sock.h> 
 22 22 #include <net/inet_hashtables.h> 
 23 + #if IS_ENABLED(CONFIG_IPV6) 
 24 + #include <net/inet6_hashtables.h> 
 25 + #endif 
 23 26 #include <net/secure_seq.h> 
 24 27 #include <net/ip.h> 
 25 28 #include <net/tcp.h> 
 ··· 511 508 inet->inet_dport); 
 512 509 } 
 513 510 
 514 - /* insert a socket into ehash, and eventually remove another one 
 515 - * (The another one can be a SYN_RECV or TIMEWAIT 
 511 + /* Searches for an existing socket in the ehash bucket list. 
 512 + * Returns true if found, false otherwise. 
 516 513 */ 
 517 - bool inet_ehash_insert(struct sock *sk, struct sock *osk) 
 514 + static bool inet_ehash_lookup_by_sk(struct sock *sk, 
 515 + struct hlist_nulls_head *list) 
 516 + { 
 517 + const __portpair ports = INET_COMBINED_PORTS(sk->sk_dport, sk->sk_num); 
 518 + const int sdif = sk->sk_bound_dev_if; 
 519 + const int dif = sk->sk_bound_dev_if; 
 520 + const struct hlist_nulls_node *node; 
 521 + struct net *net = sock_net(sk); 
 522 + struct sock *esk; 
 523 + 
 524 + INET_ADDR_COOKIE(acookie, sk->sk_daddr, sk->sk_rcv_saddr); 
 525 + 
 526 + sk_nulls_for_each_rcu(esk, node, list) { 
 527 + if (esk->sk_hash != sk->sk_hash) 
 528 + continue; 
 529 + if (sk->sk_family == AF_INET) { 
 530 + if (unlikely(INET_MATCH(esk, net, acookie, 
 531 + sk->sk_daddr, 
 532 + sk->sk_rcv_saddr, 
 533 + ports, dif, sdif))) { 
 534 + return true; 
 535 + } 
 536 + } 
 537 + #if IS_ENABLED(CONFIG_IPV6) 
 538 + else if (sk->sk_family == AF_INET6) { 
 539 + if (unlikely(INET6_MATCH(esk, net, 
 540 + &sk->sk_v6_daddr, 
 541 + &sk->sk_v6_rcv_saddr, 
 542 + ports, dif, sdif))) { 
 543 + return true; 
 544 + } 
 545 + } 
 546 + #endif 
 547 + } 
 548 + return false; 
 549 + } 
 550 + 
 551 + /* Insert a socket into ehash, and eventually remove another one 
 552 + * (the other one can be a SYN_RECV or TIMEWAIT) 
 553 + * If a matching socket already exists, socket sk is not inserted, 
 554 + * and the found_dup_sk parameter is set to true. 
555 + */ 556 + bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk) 518 557 { 519 558 struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo; 520 559 struct hlist_nulls_head *list; ··· 575 530 if (osk) { 576 531 WARN_ON_ONCE(sk->sk_hash != osk->sk_hash); 577 532 ret = sk_nulls_del_node_init_rcu(osk); 533 + } else if (found_dup_sk) { 534 + *found_dup_sk = inet_ehash_lookup_by_sk(sk, list); 535 + if (*found_dup_sk) 536 + ret = false; 578 537 } 538 + 579 539 if (ret) 580 540 __sk_nulls_add_node_rcu(sk, list); 541 + 581 542 spin_unlock(lock); 543 + 582 544 return ret; 583 545 } 584 546 585 - bool inet_ehash_nolisten(struct sock *sk, struct sock *osk) 547 + bool inet_ehash_nolisten(struct sock *sk, struct sock *osk, bool *found_dup_sk) 586 548 { 587 - bool ok = inet_ehash_insert(sk, osk); 549 + bool ok = inet_ehash_insert(sk, osk, found_dup_sk); 588 550 589 551 if (ok) { 590 552 sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1); ··· 635 583 int err = 0; 636 584 637 585 if (sk->sk_state != TCP_LISTEN) { 638 - inet_ehash_nolisten(sk, osk); 586 + inet_ehash_nolisten(sk, osk, NULL); 639 587 return 0; 640 588 } 641 589 WARN_ON(!sk_unhashed(sk)); ··· 731 679 tb = inet_csk(sk)->icsk_bind_hash; 732 680 spin_lock_bh(&head->lock); 733 681 if (sk_head(&tb->owners) == sk && !sk->sk_bind_node.next) { 734 - inet_ehash_nolisten(sk, NULL); 682 + inet_ehash_nolisten(sk, NULL, NULL); 735 683 spin_unlock_bh(&head->lock); 736 684 return 0; 737 685 } ··· 810 758 inet_bind_hash(sk, tb, port); 811 759 if (sk_unhashed(sk)) { 812 760 inet_sk(sk)->inet_sport = htons(port); 813 - inet_ehash_nolisten(sk, (struct sock *)tw); 761 + inet_ehash_nolisten(sk, (struct sock *)tw, NULL); 814 762 } 815 763 if (tw) 816 764 inet_twsk_bind_unhash(tw, hinfo);
+5
net/ipv4/tcp_cong.c
··· 198 198 icsk->icsk_ca_setsockopt = 1; 199 199 memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv)); 200 200 201 + if (ca->flags & TCP_CONG_NEEDS_ECN) 202 + INET_ECN_xmit(sk); 203 + else 204 + INET_ECN_dontxmit(sk); 205 + 201 206 if (!((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN))) 202 207 tcp_init_congestion_control(sk); 203 208 }
+22 -6
net/ipv4/tcp_ipv4.c
··· 980 980 981 981 skb = tcp_make_synack(sk, dst, req, foc, synack_type, syn_skb); 982 982 983 - tos = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ? 984 - tcp_rsk(req)->syn_tos : inet_sk(sk)->tos; 985 - 986 983 if (skb) { 987 984 __tcp_v4_send_check(skb, ireq->ir_loc_addr, ireq->ir_rmt_addr); 985 + 986 + tos = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ? 987 + tcp_rsk(req)->syn_tos & ~INET_ECN_MASK : 988 + inet_sk(sk)->tos; 989 + 990 + if (!INET_ECN_is_capable(tos) && 991 + tcp_bpf_ca_needs_ecn((struct sock *)req)) 992 + tos |= INET_ECN_ECT_0; 988 993 989 994 rcu_read_lock(); 990 995 err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr, 991 996 ireq->ir_rmt_addr, 992 997 rcu_dereference(ireq->ireq_opt), 993 - tos & ~INET_ECN_MASK); 998 + tos); 994 999 rcu_read_unlock(); 995 1000 err = net_xmit_eval(err); 996 1001 } ··· 1503 1498 bool *own_req) 1504 1499 { 1505 1500 struct inet_request_sock *ireq; 1501 + bool found_dup_sk = false; 1506 1502 struct inet_sock *newinet; 1507 1503 struct tcp_sock *newtp; 1508 1504 struct sock *newsk; ··· 1581 1575 1582 1576 if (__inet_inherit_port(sk, newsk) < 0) 1583 1577 goto put_and_exit; 1584 - *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash)); 1578 + *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), 1579 + &found_dup_sk); 1585 1580 if (likely(*own_req)) { 1586 1581 tcp_move_syn(newtp, req); 1587 1582 ireq->ireq_opt = NULL; 1588 1583 } else { 1589 - newinet->inet_opt = NULL; 1584 + if (!req_unhash && found_dup_sk) { 1585 + /* This code path should only be executed in the 1586 + * syncookie case 1587 + */ 1588 + bh_unlock_sock(newsk); 1589 + sock_put(newsk); 1590 + newsk = NULL; 1591 + } else { 1592 + newinet->inet_opt = NULL; 1593 + } 1590 1594 } 1591 1595 return newsk; 1592 1596
+17 -9
net/ipv6/addrlabel.c
··· 306 306 /* add default label */ 307 307 static int __net_init ip6addrlbl_net_init(struct net *net) 308 308 { 309 - int err = 0; 309 + struct ip6addrlbl_entry *p = NULL; 310 + struct hlist_node *n; 311 + int err; 310 312 int i; 311 313 312 314 ADDRLABEL(KERN_DEBUG "%s\n", __func__); ··· 317 315 INIT_HLIST_HEAD(&net->ipv6.ip6addrlbl_table.head); 318 316 319 317 for (i = 0; i < ARRAY_SIZE(ip6addrlbl_init_table); i++) { 320 - int ret = ip6addrlbl_add(net, 321 - ip6addrlbl_init_table[i].prefix, 322 - ip6addrlbl_init_table[i].prefixlen, 323 - 0, 324 - ip6addrlbl_init_table[i].label, 0); 325 - /* XXX: should we free all rules when we catch an error? */ 326 - if (ret && (!err || err != -ENOMEM)) 327 - err = ret; 318 + err = ip6addrlbl_add(net, 319 + ip6addrlbl_init_table[i].prefix, 320 + ip6addrlbl_init_table[i].prefixlen, 321 + 0, 322 + ip6addrlbl_init_table[i].label, 0); 323 + if (err) 324 + goto err_ip6addrlbl_add; 325 + } 326 + return 0; 327 + 328 + err_ip6addrlbl_add: 329 + hlist_for_each_entry_safe(p, n, &net->ipv6.ip6addrlbl_table.head, list) { 330 + hlist_del_rcu(&p->list); 331 + kfree_rcu(p, rcu); 328 332 } 329 333 return err; 330 334 }
+21 -5
net/ipv6/tcp_ipv6.c
··· 527 527 if (np->repflow && ireq->pktopts) 528 528 fl6->flowlabel = ip6_flowlabel(ipv6_hdr(ireq->pktopts)); 529 529 530 + tclass = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ? 531 + tcp_rsk(req)->syn_tos & ~INET_ECN_MASK : 532 + np->tclass; 533 + 534 + if (!INET_ECN_is_capable(tclass) && 535 + tcp_bpf_ca_needs_ecn((struct sock *)req)) 536 + tclass |= INET_ECN_ECT_0; 537 + 530 538 rcu_read_lock(); 531 539 opt = ireq->ipv6_opt; 532 - tclass = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ? 533 - tcp_rsk(req)->syn_tos : np->tclass; 534 540 if (!opt) 535 541 opt = rcu_dereference(np->opt); 536 542 err = ip6_xmit(sk, skb, fl6, sk->sk_mark, opt, 537 - tclass & ~INET_ECN_MASK, 538 - sk->sk_priority); 543 + tclass, sk->sk_priority); 539 544 rcu_read_unlock(); 540 545 err = net_xmit_eval(err); 541 546 } ··· 1198 1193 const struct ipv6_pinfo *np = tcp_inet6_sk(sk); 1199 1194 struct ipv6_txoptions *opt; 1200 1195 struct inet_sock *newinet; 1196 + bool found_dup_sk = false; 1201 1197 struct tcp_sock *newtp; 1202 1198 struct sock *newsk; 1203 1199 #ifdef CONFIG_TCP_MD5SIG ··· 1374 1368 tcp_done(newsk); 1375 1369 goto out; 1376 1370 } 1377 - *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash)); 1371 + *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), 1372 + &found_dup_sk); 1378 1373 if (*own_req) { 1379 1374 tcp_move_syn(newtp, req); 1380 1375 ··· 1389 1382 tcp_v6_restore_cb(newnp->pktoptions); 1390 1383 skb_set_owner_r(newnp->pktoptions, newsk); 1391 1384 } 1385 + } 1386 + } else { 1387 + if (!req_unhash && found_dup_sk) { 1388 + /* This code path should only be executed in the 1389 + * syncookie case 1390 + */ 1391 + bh_unlock_sock(newsk); 1392 + sock_put(newsk); 1393 + newsk = NULL; 1392 1394 } 1393 1395 } 1394 1396
+2 -2
net/iucv/af_iucv.c
··· 1645 1645 } 1646 1646 1647 1647 /* Create the new socket */ 1648 - nsk = iucv_sock_alloc(NULL, sk->sk_type, GFP_ATOMIC, 0); 1648 + nsk = iucv_sock_alloc(NULL, sk->sk_protocol, GFP_ATOMIC, 0); 1649 1649 if (!nsk) { 1650 1650 err = pr_iucv->path_sever(path, user_data); 1651 1651 iucv_path_free(path); ··· 1851 1851 goto out; 1852 1852 } 1853 1853 1854 - nsk = iucv_sock_alloc(NULL, sk->sk_type, GFP_ATOMIC, 0); 1854 + nsk = iucv_sock_alloc(NULL, sk->sk_protocol, GFP_ATOMIC, 0); 1855 1855 bh_lock_sock(sk); 1856 1856 if ((sk->sk_state != IUCV_LISTEN) || 1857 1857 sk_acceptq_is_full(sk) ||
+2 -3
net/mptcp/subflow.c
··· 543 543 fallback = true; 544 544 } else if (subflow_req->mp_join) { 545 545 mptcp_get_options(skb, &mp_opt); 546 - if (!mp_opt.mp_join || 547 - !mptcp_can_accept_new_subflow(subflow_req->msk) || 548 - !subflow_hmac_valid(req, &mp_opt)) { 546 + if (!mp_opt.mp_join || !subflow_hmac_valid(req, &mp_opt) || 547 + !mptcp_can_accept_new_subflow(subflow_req->msk)) { 549 548 SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINACKMAC); 550 549 fallback = true; 551 550 }
+3 -4
net/openvswitch/actions.c
··· 958 958 { 959 959 /* The first action is always 'OVS_DEC_TTL_ATTR_ARG'. */ 960 960 struct nlattr *dec_ttl_arg = nla_data(attr); 961 - int rem = nla_len(attr); 962 961 963 962 if (nla_len(dec_ttl_arg)) { 964 - struct nlattr *actions = nla_next(dec_ttl_arg, &rem); 963 + struct nlattr *actions = nla_data(dec_ttl_arg); 965 964 966 965 if (actions) 967 - return clone_execute(dp, skb, key, 0, actions, rem, 968 - last, false); 966 + return clone_execute(dp, skb, key, 0, nla_data(actions), 967 + nla_len(actions), last, false); 969 968 } 970 969 consume_skb(skb); 971 970 return 0;
+55 -19
net/openvswitch/flow_netlink.c
··· 2503 2503 __be16 eth_type, __be16 vlan_tci, 2504 2504 u32 mpls_label_count, bool log) 2505 2505 { 2506 - int start, err; 2507 - u32 nested = true; 2506 + const struct nlattr *attrs[OVS_DEC_TTL_ATTR_MAX + 1]; 2507 + int start, action_start, err, rem; 2508 + const struct nlattr *a, *actions; 2508 2509 2509 - if (!nla_len(attr)) 2510 - return ovs_nla_add_action(sfa, OVS_ACTION_ATTR_DEC_TTL, 2511 - NULL, 0, log); 2510 + memset(attrs, 0, sizeof(attrs)); 2511 + nla_for_each_nested(a, attr, rem) { 2512 + int type = nla_type(a); 2513 + 2514 + /* Ignore unknown attributes to be future proof. */ 2515 + if (type > OVS_DEC_TTL_ATTR_MAX) 2516 + continue; 2517 + 2518 + if (!type || attrs[type]) 2519 + return -EINVAL; 2520 + 2521 + attrs[type] = a; 2522 + } 2523 + 2524 + actions = attrs[OVS_DEC_TTL_ATTR_ACTION]; 2525 + if (rem || !actions || (nla_len(actions) && nla_len(actions) < NLA_HDRLEN)) 2526 + return -EINVAL; 2512 2527 2513 2528 start = add_nested_action_start(sfa, OVS_ACTION_ATTR_DEC_TTL, log); 2514 2529 if (start < 0) 2515 2530 return start; 2516 2531 2517 - err = ovs_nla_add_action(sfa, OVS_DEC_TTL_ATTR_ACTION, &nested, 2518 - sizeof(nested), log); 2532 + action_start = add_nested_action_start(sfa, OVS_DEC_TTL_ATTR_ACTION, log); 2533 + if (action_start < 0) 2534 + return action_start; 2519 2535 2520 - if (err) 2521 - return err; 2522 - 2523 - err = __ovs_nla_copy_actions(net, attr, key, sfa, eth_type, 2536 + err = __ovs_nla_copy_actions(net, actions, key, sfa, eth_type, 2524 2537 vlan_tci, mpls_label_count, log); 2525 2538 if (err) 2526 2539 return err; 2527 2540 2541 + add_nested_action_end(*sfa, action_start); 2528 2542 add_nested_action_end(*sfa, start); 2529 2543 return 0; 2530 2544 } ··· 3501 3487 static int dec_ttl_action_to_attr(const struct nlattr *attr, 3502 3488 struct sk_buff *skb) 3503 3489 { 3504 - int err = 0, rem = nla_len(attr); 3505 - struct nlattr *start; 3490 + struct nlattr *start, *action_start; 3491 + const struct nlattr *a; 3492 + int err = 0, rem; 
3506 3493 3507 3494 start = nla_nest_start_noflag(skb, OVS_ACTION_ATTR_DEC_TTL); 3508 - 3509 3495 if (!start) 3510 3496 return -EMSGSIZE; 3511 3497 3512 - err = ovs_nla_put_actions(nla_data(attr), rem, skb); 3513 - if (err) 3514 - nla_nest_cancel(skb, start); 3515 - else 3516 - nla_nest_end(skb, start); 3498 + nla_for_each_attr(a, nla_data(attr), nla_len(attr), rem) { 3499 + switch (nla_type(a)) { 3500 + case OVS_DEC_TTL_ATTR_ACTION: 3517 3501 3502 + action_start = nla_nest_start_noflag(skb, OVS_DEC_TTL_ATTR_ACTION); 3503 + if (!action_start) { 3504 + err = -EMSGSIZE; 3505 + goto out; 3506 + } 3507 + 3508 + err = ovs_nla_put_actions(nla_data(a), nla_len(a), skb); 3509 + if (err) 3510 + goto out; 3511 + 3512 + nla_nest_end(skb, action_start); 3513 + break; 3514 + 3515 + default: 3516 + /* Ignore all other options to be future compatible */ 3517 + break; 3518 + } 3519 + } 3520 + 3521 + nla_nest_end(skb, start); 3522 + return 0; 3523 + 3524 + out: 3525 + nla_nest_cancel(skb, start); 3518 3526 return err; 3519 3527 } 3520 3528
+9 -9
net/packet/af_packet.c
··· 94 94 95 95 /* 96 96 Assumptions: 97 - - If the device has no dev->header_ops, there is no LL header visible 98 - above the device. In this case, its hard_header_len should be 0. 97 + - If the device has no dev->header_ops->create, there is no LL header 98 + visible above the device. In this case, its hard_header_len should be 0. 99 99 The device may prepend its own header internally. In this case, its 100 100 needed_headroom should be set to the space needed for it to add its 101 101 internal header. ··· 109 109 On receive: 110 110 ----------- 111 111 112 - Incoming, dev->header_ops != NULL 112 + Incoming, dev_has_header(dev) == true 113 113 mac_header -> ll header 114 114 data -> data 115 115 116 - Outgoing, dev->header_ops != NULL 116 + Outgoing, dev_has_header(dev) == true 117 117 mac_header -> ll header 118 118 data -> ll header 119 119 120 - Incoming, dev->header_ops == NULL 120 + Incoming, dev_has_header(dev) == false 121 121 mac_header -> data 122 122 However drivers often make it point to the ll header. 123 123 This is incorrect because the ll header should be invisible to us. 124 124 data -> data 125 125 126 - Outgoing, dev->header_ops == NULL 126 + Outgoing, dev_has_header(dev) == false 127 127 mac_header -> data. ll header is invisible to us. 128 128 data -> data 129 129 130 130 Resume 131 - If dev->header_ops == NULL we are unable to restore the ll header, 131 + If dev_has_header(dev) == false we are unable to restore the ll header, 132 132 because it is invisible to us. 133 133 134 134 ··· 2083 2083 2084 2084 skb->dev = dev; 2085 2085 2086 - if (dev->header_ops) { 2086 + if (dev_has_header(dev)) { 2087 2087 /* The device has an explicit notion of ll header, 2088 2088 * exported to higher levels. 
2089 2089 * ··· 2212 2212 if (!net_eq(dev_net(dev), sock_net(sk))) 2213 2213 goto drop; 2214 2214 2215 - if (dev->header_ops) { 2215 + if (dev_has_header(dev)) { 2216 2216 if (sk->sk_type != SOCK_DGRAM) 2217 2217 skb_push(skb, skb->data - skb_mac_header(skb)); 2218 2218 else if (skb->pkt_type == PACKET_OUTGOING) {
+13 -4
net/rose/rose_loopback.c
··· 96 96 } 97 97 98 98 if (frametype == ROSE_CALL_REQUEST) { 99 - if ((dev = rose_dev_get(dest)) != NULL) { 100 - if (rose_rx_call_request(skb, dev, rose_loopback_neigh, lci_o) == 0) 101 - kfree_skb(skb); 102 - } else { 99 + if (!rose_loopback_neigh->dev) { 100 + kfree_skb(skb); 101 + continue; 102 + } 103 + 104 + dev = rose_dev_get(dest); 105 + if (!dev) { 106 + kfree_skb(skb); 107 + continue; 108 + } 109 + 110 + if (rose_rx_call_request(skb, dev, rose_loopback_neigh, lci_o) == 0) { 111 + dev_put(dev); 103 112 kfree_skb(skb); 104 113 } 105 114 } else {
+4 -1
net/tls/tls_device.c
··· 1262 1262 if (tls_ctx->tx_conf != TLS_HW) { 1263 1263 dev_put(netdev); 1264 1264 tls_ctx->netdev = NULL; 1265 + } else { 1266 + set_bit(TLS_RX_DEV_CLOSED, &tls_ctx->flags); 1265 1267 } 1266 1268 out: 1267 1269 up_read(&device_offload_lock); ··· 1293 1291 if (ctx->tx_conf == TLS_HW) 1294 1292 netdev->tlsdev_ops->tls_dev_del(netdev, ctx, 1295 1293 TLS_OFFLOAD_CTX_DIR_TX); 1296 - if (ctx->rx_conf == TLS_HW) 1294 + if (ctx->rx_conf == TLS_HW && 1295 + !test_bit(TLS_RX_DEV_CLOSED, &ctx->flags)) 1297 1296 netdev->tlsdev_ops->tls_dev_del(netdev, ctx, 1298 1297 TLS_OFFLOAD_CTX_DIR_RX); 1299 1298 WRITE_ONCE(ctx->netdev, NULL);
+6
net/tls/tls_sw.c
··· 1294 1294 return NULL; 1295 1295 } 1296 1296 1297 + if (!skb_queue_empty(&sk->sk_receive_queue)) { 1298 + __strp_unpause(&ctx->strp); 1299 + if (ctx->recv_pkt) 1300 + return ctx->recv_pkt; 1301 + } 1302 + 1297 1303 if (sk->sk_shutdown & RCV_SHUTDOWN) 1298 1304 return NULL; 1299 1305
+5 -3
net/vmw_vsock/virtio_transport_common.c
··· 841 841 virtio_transport_free_pkt(pkt); 842 842 } 843 843 844 - if (remove_sock) 844 + if (remove_sock) { 845 + sock_set_flag(sk, SOCK_DONE); 845 846 vsock_remove_sock(vsk); 847 + } 846 848 } 847 849 EXPORT_SYMBOL_GPL(virtio_transport_release); 848 850 ··· 1134 1132 1135 1133 lock_sock(sk); 1136 1134 1137 - /* Check if sk has been released before lock_sock */ 1138 - if (sk->sk_shutdown == SHUTDOWN_MASK) { 1135 + /* Check if sk has been closed before lock_sock */ 1136 + if (sock_flag(sk, SOCK_DONE)) { 1139 1137 (void)virtio_transport_reset_no_sock(t, pkt); 1140 1138 release_sock(sk); 1141 1139 sock_put(sk);
+1 -1
sound/core/control.c
··· 1539 1539 1540 1540 unlock: 1541 1541 up_write(&card->controls_rwsem); 1542 - return 0; 1542 + return err; 1543 1543 } 1544 1544 1545 1545 static int snd_ctl_elem_add_user(struct snd_ctl_file *file,
+2 -2
sound/firewire/fireworks/fireworks_transaction.c
··· 123 123 t = (struct snd_efw_transaction *)data; 124 124 length = min_t(size_t, be32_to_cpu(t->length) * sizeof(u32), length); 125 125 126 - spin_lock_irq(&efw->lock); 126 + spin_lock(&efw->lock); 127 127 128 128 if (efw->push_ptr < efw->pull_ptr) 129 129 capacity = (unsigned int)(efw->pull_ptr - efw->push_ptr); ··· 190 190 191 191 copy_resp_to_buf(efw, data, length, rcode); 192 192 end: 193 - spin_unlock_irq(&instances_lock); 193 + spin_unlock(&instances_lock); 194 194 } 195 195 196 196 static void
+3
sound/pci/hda/hda_intel.c
··· 2506 2506 /* DG1 */ 2507 2507 { PCI_DEVICE(0x8086, 0x490d), 2508 2508 .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2509 + /* Alderlake-S */ 2510 + { PCI_DEVICE(0x8086, 0x7ad0), 2511 + .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2509 2512 /* Elkhart Lake */ 2510 2513 { PCI_DEVICE(0x8086, 0x4b55), 2511 2514 .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+2
sound/pci/hda/patch_ca0132.c
··· 9183 9183 case QUIRK_AE5: 9184 9184 ca0132_mmio_init_ae5(codec); 9185 9185 break; 9186 + default: 9187 + break; 9186 9188 } 9187 9189 } 9188 9190
+1
sound/pci/hda/patch_hdmi.c
··· 4274 4274 HDA_CODEC_ENTRY(0x8086280f, "Icelake HDMI", patch_i915_icl_hdmi), 4275 4275 HDA_CODEC_ENTRY(0x80862812, "Tigerlake HDMI", patch_i915_tgl_hdmi), 4276 4276 HDA_CODEC_ENTRY(0x80862814, "DG1 HDMI", patch_i915_tgl_hdmi), 4277 + HDA_CODEC_ENTRY(0x80862815, "Alderlake HDMI", patch_i915_tgl_hdmi), 4277 4278 HDA_CODEC_ENTRY(0x80862816, "Rocketlake HDMI", patch_i915_tgl_hdmi), 4278 4279 HDA_CODEC_ENTRY(0x8086281a, "Jasperlake HDMI", patch_i915_icl_hdmi), 4279 4280 HDA_CODEC_ENTRY(0x8086281b, "Elkhartlake HDMI", patch_i915_icl_hdmi),
+85 -2
sound/pci/hda/patch_realtek.c
··· 2522 2522 SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3), 2523 2523 SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX), 2524 2524 SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950), 2525 + SND_PCI_QUIRK(0x1558, 0x9506, "Clevo P955HQ", ALC1220_FIXUP_CLEVO_P950), 2526 + SND_PCI_QUIRK(0x1558, 0x950A, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950), 2525 2527 SND_PCI_QUIRK(0x1558, 0x95e1, "Clevo P95xER", ALC1220_FIXUP_CLEVO_P950), 2526 2528 SND_PCI_QUIRK(0x1558, 0x95e2, "Clevo P950ER", ALC1220_FIXUP_CLEVO_P950), 2529 + SND_PCI_QUIRK(0x1558, 0x95e3, "Clevo P955[ER]T", ALC1220_FIXUP_CLEVO_P950), 2530 + SND_PCI_QUIRK(0x1558, 0x95e4, "Clevo P955ER", ALC1220_FIXUP_CLEVO_P950), 2531 + SND_PCI_QUIRK(0x1558, 0x95e5, "Clevo P955EE6", ALC1220_FIXUP_CLEVO_P950), 2532 + SND_PCI_QUIRK(0x1558, 0x95e6, "Clevo P950R[CDF]", ALC1220_FIXUP_CLEVO_P950), 2527 2533 SND_PCI_QUIRK(0x1558, 0x96e1, "Clevo P960[ER][CDFN]-K", ALC1220_FIXUP_CLEVO_P950), 2528 2534 SND_PCI_QUIRK(0x1558, 0x97e1, "Clevo P970[ER][CDFN]", ALC1220_FIXUP_CLEVO_P950), 2529 - SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2530 - SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2535 + SND_PCI_QUIRK(0x1558, 0x97e2, "Clevo P970RC-M", ALC1220_FIXUP_CLEVO_P950), 2531 2536 SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2537 + SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2538 + SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2539 + SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2540 + SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2541 + SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2532 2542 SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", 
ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2533 2543 SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2534 2544 SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD), ··· 4224 4214 const struct hda_fixup *fix, int action) 4225 4215 { 4226 4216 alc_fixup_hp_gpio_led(codec, action, 0x02, 0x20); 4217 + } 4218 + 4219 + static void alc287_fixup_hp_gpio_led(struct hda_codec *codec, 4220 + const struct hda_fixup *fix, int action) 4221 + { 4222 + alc_fixup_hp_gpio_led(codec, action, 0x10, 0); 4227 4223 } 4228 4224 4229 4225 /* turn on/off mic-mute LED per capture hook via VREF change */ ··· 6317 6301 ALC274_FIXUP_HP_MIC, 6318 6302 ALC274_FIXUP_HP_HEADSET_MIC, 6319 6303 ALC256_FIXUP_ASUS_HPE, 6304 + ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK, 6305 + ALC287_FIXUP_HP_GPIO_LED, 6306 + ALC256_FIXUP_HP_HEADSET_MIC, 6320 6307 }; 6321 6308 6322 6309 static const struct hda_fixup alc269_fixups[] = { ··· 7724 7705 .chained = true, 7725 7706 .chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC 7726 7707 }, 7708 + [ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK] = { 7709 + .type = HDA_FIXUP_FUNC, 7710 + .v.func = alc_fixup_headset_jack, 7711 + .chained = true, 7712 + .chain_id = ALC269_FIXUP_THINKPAD_ACPI 7713 + }, 7714 + [ALC287_FIXUP_HP_GPIO_LED] = { 7715 + .type = HDA_FIXUP_FUNC, 7716 + .v.func = alc287_fixup_hp_gpio_led, 7717 + }, 7718 + [ALC256_FIXUP_HP_HEADSET_MIC] = { 7719 + .type = HDA_FIXUP_FUNC, 7720 + .v.func = alc274_fixup_hp_headset_mic, 7721 + }, 7727 7722 }; 7728 7723 7729 7724 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 7892 7859 SND_PCI_QUIRK(0x103c, 0x8760, "HP", ALC285_FIXUP_HP_MUTE_LED), 7893 7860 SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED), 7894 7861 SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED), 7862 + SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED), 7863 + SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED), 7895 7864 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", 
ALC256_FIXUP_ASUS_MIC), 7896 7865 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), 7897 7866 SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), ··· 7959 7924 SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC), 7960 7925 SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC), 7961 7926 SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC), 7927 + SND_PCI_QUIRK(0x1558, 0x1323, "Clevo N130ZU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7962 7928 SND_PCI_QUIRK(0x1558, 0x1325, "System76 Darter Pro (darp5)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7929 + SND_PCI_QUIRK(0x1558, 0x1401, "Clevo L140[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7930 + SND_PCI_QUIRK(0x1558, 0x1403, "Clevo N140CU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7931 + SND_PCI_QUIRK(0x1558, 0x1404, "Clevo N150CU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7932 + SND_PCI_QUIRK(0x1558, 0x14a1, "Clevo L141MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7933 + SND_PCI_QUIRK(0x1558, 0x4018, "Clevo NV40M[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7934 + SND_PCI_QUIRK(0x1558, 0x4019, "Clevo NV40MZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7935 + SND_PCI_QUIRK(0x1558, 0x4020, "Clevo NV40MB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7936 + SND_PCI_QUIRK(0x1558, 0x40a1, "Clevo NL40GU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7937 + SND_PCI_QUIRK(0x1558, 0x40c1, "Clevo NL40[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7938 + SND_PCI_QUIRK(0x1558, 0x40d1, "Clevo NL41DU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7939 + SND_PCI_QUIRK(0x1558, 0x50a3, "Clevo NJ51GU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7940 + SND_PCI_QUIRK(0x1558, 0x50b3, "Clevo NK50S[BEZ]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7941 + SND_PCI_QUIRK(0x1558, 0x50b6, "Clevo NK50S5", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7942 + SND_PCI_QUIRK(0x1558, 0x50b8, "Clevo NK50SZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7943 + 
SND_PCI_QUIRK(0x1558, 0x50d5, "Clevo NP50D5", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7944 + SND_PCI_QUIRK(0x1558, 0x50f0, "Clevo NH50A[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7945 + SND_PCI_QUIRK(0x1558, 0x50f3, "Clevo NH58DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7946 + SND_PCI_QUIRK(0x1558, 0x5101, "Clevo S510WU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7947 + SND_PCI_QUIRK(0x1558, 0x5157, "Clevo W517GU1", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7948 + SND_PCI_QUIRK(0x1558, 0x51a1, "Clevo NS50MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7949 + SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7950 + SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7951 + SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7952 + SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7953 + SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7954 + SND_PCI_QUIRK(0x1558, 0x8535, "Clevo NH50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7955 + SND_PCI_QUIRK(0x1558, 0x8536, "Clevo NH79D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7963 7956 SND_PCI_QUIRK(0x1558, 0x8550, "System76 Gazelle (gaze14)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7964 7957 SND_PCI_QUIRK(0x1558, 0x8551, "System76 Gazelle (gaze14)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7965 7958 SND_PCI_QUIRK(0x1558, 0x8560, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC), 7966 7959 SND_PCI_QUIRK(0x1558, 0x8561, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC), 7960 + SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7961 + SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7962 + SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7963 + SND_PCI_QUIRK(0x1558, 0x8a20, "Clevo NH55DCQ-Y", 
ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7964 + SND_PCI_QUIRK(0x1558, 0x8a51, "Clevo NH70RCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7965 + SND_PCI_QUIRK(0x1558, 0x8d50, "Clevo NH55RCQ-M", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7966 + SND_PCI_QUIRK(0x1558, 0x951d, "Clevo N950T[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7967 + SND_PCI_QUIRK(0x1558, 0x961d, "Clevo N960S[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7968 + SND_PCI_QUIRK(0x1558, 0x971d, "Clevo N970T[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7969 + SND_PCI_QUIRK(0x1558, 0xa500, "Clevo NL53RU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 7967 7970 SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS), 7968 7971 SND_PCI_QUIRK(0x17aa, 0x1048, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), 7969 7972 SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE), ··· 8039 7966 SND_PCI_QUIRK(0x17aa, 0x225d, "Thinkpad T480", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 8040 7967 SND_PCI_QUIRK(0x17aa, 0x2292, "Thinkpad X1 Carbon 7th", ALC285_FIXUP_THINKPAD_HEADSET_JACK), 8041 7968 SND_PCI_QUIRK(0x17aa, 0x22be, "Thinkpad X1 Carbon 8th", ALC285_FIXUP_THINKPAD_HEADSET_JACK), 7969 + SND_PCI_QUIRK(0x17aa, 0x22c1, "Thinkpad P1 Gen 3", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK), 7970 + SND_PCI_QUIRK(0x17aa, 0x22c2, "Thinkpad X1 Extreme Gen 3", ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK), 8042 7971 SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 8043 7972 SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 8044 7973 SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), ··· 8353 8278 {0x19, 0x02a11020}, 8354 8279 {0x1a, 0x02a11030}, 8355 8280 {0x21, 0x0221101f}), 8281 + SND_HDA_PIN_QUIRK(0x10ec0236, 0x103c, "HP", ALC256_FIXUP_HP_HEADSET_MIC, 8282 + {0x14, 0x90170110}, 8283 + {0x19, 0x02a11020}, 8284 + {0x21, 0x02211030}), 8356 8285 SND_HDA_PIN_QUIRK(0x10ec0255, 
0x1028, "Dell", ALC255_FIXUP_DELL2_MIC_NO_PRESENCE, 8357 8286 {0x14, 0x90170110}, 8358 8287 {0x21, 0x02211020}), ··· 8459 8380 {0x1a, 0x90a70130}, 8460 8381 {0x1b, 0x90170110}, 8461 8382 {0x21, 0x03211020}), 8383 + SND_HDA_PIN_QUIRK(0x10ec0256, 0x103c, "HP", ALC256_FIXUP_HP_HEADSET_MIC, 8384 + {0x14, 0x90170110}, 8385 + {0x19, 0x02a11020}, 8386 + {0x21, 0x0221101f}), 8462 8387 SND_HDA_PIN_QUIRK(0x10ec0274, 0x103c, "HP", ALC274_FIXUP_HP_HEADSET_MIC, 8463 8388 {0x17, 0x90170110}, 8464 8389 {0x19, 0x03a11030},
+2 -3
sound/pci/mixart/mixart_core.c
··· 70 70 unsigned int i; 71 71 #endif 72 72 73 - mutex_lock(&mgr->msg_lock); 74 73 err = 0; 75 74 76 75 /* copy message descriptor from miXart to driver */ ··· 118 119 writel_be(headptr, MIXART_MEM(mgr, MSG_OUTBOUND_FREE_HEAD)); 119 120 120 121 _clean_exit: 121 - mutex_unlock(&mgr->msg_lock); 122 - 123 122 return err; 124 123 } 125 124 ··· 255 258 resp.data = resp_data; 256 259 resp.size = max_resp_size; 257 260 261 + mutex_lock(&mgr->msg_lock); 258 262 err = get_msg(mgr, &resp, msg_frame); 263 + mutex_unlock(&mgr->msg_lock); 259 264 260 265 if( request->message_id != resp.message_id ) 261 266 dev_err(&mgr->pci->dev, "RESPONSE ERROR!\n");
+21 -1
sound/soc/codecs/rt1015.c
··· 27 27 #include <sound/soc-dapm.h> 28 28 #include <sound/soc.h> 29 29 #include <sound/tlv.h> 30 + #include <sound/rt1015.h> 30 31 31 32 #include "rl6231.h" 32 33 #include "rt1015.h" 34 + 35 + static const struct rt1015_platform_data i2s_default_platform_data = { 36 + .power_up_delay_ms = 50, 37 + }; 33 38 34 39 static const struct reg_default rt1015_reg[] = { 35 40 { 0x0000, 0x0000 }, ··· 544 539 struct rt1015_priv *rt1015 = container_of(work, struct rt1015_priv, 545 540 flush_work.work); 546 541 struct snd_soc_component *component = rt1015->component; 547 - unsigned int val, i = 0, count = 20; 542 + unsigned int val, i = 0, count = 200; 548 543 549 544 while (i < count) { 550 545 usleep_range(1000, 1500); ··· 655 650 case SND_SOC_DAPM_POST_PMU: 656 651 if (rt1015->hw_config == RT1015_HW_28) 657 652 schedule_delayed_work(&rt1015->flush_work, msecs_to_jiffies(10)); 653 + msleep(rt1015->pdata.power_up_delay_ms); 658 654 break; 659 655 default: 660 656 break; ··· 1073 1067 MODULE_DEVICE_TABLE(acpi, rt1015_acpi_match); 1074 1068 #endif 1075 1069 1070 + static void rt1015_parse_dt(struct rt1015_priv *rt1015, struct device *dev) 1071 + { 1072 + device_property_read_u32(dev, "realtek,power-up-delay-ms", 1073 + &rt1015->pdata.power_up_delay_ms); 1074 + } 1075 + 1076 1076 static int rt1015_i2c_probe(struct i2c_client *i2c, 1077 1077 const struct i2c_device_id *id) 1078 1078 { 1079 + struct rt1015_platform_data *pdata = dev_get_platdata(&i2c->dev); 1079 1080 struct rt1015_priv *rt1015; 1080 1081 int ret; 1081 1082 unsigned int val; ··· 1093 1080 return -ENOMEM; 1094 1081 1095 1082 i2c_set_clientdata(i2c, rt1015); 1083 + 1084 + rt1015->pdata = i2s_default_platform_data; 1085 + 1086 + if (pdata) 1087 + rt1015->pdata = *pdata; 1088 + else 1089 + rt1015_parse_dt(rt1015, &i2c->dev); 1096 1090 1097 1091 rt1015->regmap = devm_regmap_init_i2c(i2c, &rt1015_regmap); 1098 1092 if (IS_ERR(rt1015->regmap)) {
+2
sound/soc/codecs/rt1015.h
··· 12 12 13 13 #ifndef __RT1015_H__ 14 14 #define __RT1015_H__ 15 + #include <sound/rt1015.h> 15 16 16 17 #define RT1015_DEVICE_ID_VAL 0x1011 17 18 #define RT1015_DEVICE_ID_VAL2 0x1015 ··· 381 380 382 381 struct rt1015_priv { 383 382 struct snd_soc_component *component; 383 + struct rt1015_platform_data pdata; 384 384 struct regmap *regmap; 385 385 int sysclk; 386 386 int sysclk_src;
+2
sound/soc/intel/boards/kbl_rt5663_rt5514_max98927.c
··· 700 700 switch (level) { 701 701 case SND_SOC_BIAS_PREPARE: 702 702 if (dapm->bias_level == SND_SOC_BIAS_ON) { 703 + if (!__clk_is_enabled(priv->mclk)) 704 + return 0; 703 705 dev_dbg(card->dev, "Disable mclk"); 704 706 clk_disable_unprepare(priv->mclk); 705 707 } else {
+4 -5
sound/soc/intel/catpt/pcm.c
··· 458 458 if (ret) 459 459 return CATPT_IPC_ERROR(ret); 460 460 461 - ret = catpt_dsp_update_lpclock(cdev); 462 - if (ret) 463 - return ret; 464 - 465 461 ret = catpt_dai_apply_usettings(dai, stream); 466 462 if (ret) 467 463 return ret; ··· 496 500 case SNDRV_PCM_TRIGGER_RESUME: 497 501 case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: 498 502 resume_stream: 503 + catpt_dsp_update_lpclock(cdev); 499 504 ret = catpt_ipc_resume_stream(cdev, stream->info.stream_hw_id); 500 505 if (ret) 501 506 return CATPT_IPC_ERROR(ret); ··· 504 507 505 508 case SNDRV_PCM_TRIGGER_STOP: 506 509 stream->prepared = false; 507 - catpt_dsp_update_lpclock(cdev); 508 510 fallthrough; 509 511 case SNDRV_PCM_TRIGGER_SUSPEND: 510 512 case SNDRV_PCM_TRIGGER_PAUSE_PUSH: 511 513 ret = catpt_ipc_pause_stream(cdev, stream->info.stream_hw_id); 514 + catpt_dsp_update_lpclock(cdev); 512 515 if (ret) 513 516 return CATPT_IPC_ERROR(ret); 514 517 break; ··· 531 534 532 535 dsppos = bytes_to_frames(r, pos->stream_position); 533 536 537 + if (!stream->prepared) 538 + goto exit; 534 539 /* only offload is set_write_pos driven */ 535 540 if (stream->template->type != CATPT_STRM_TYPE_RENDER) 536 541 goto exit;
+3 -3
sound/soc/intel/keembay/kmb_platform.c
··· 487 487 kmb_i2s->xfer_resolution = 0x02; 488 488 break; 489 489 case SNDRV_PCM_FORMAT_S24_LE: 490 - config->data_width = 24; 491 - kmb_i2s->ccr = 0x08; 492 - kmb_i2s->xfer_resolution = 0x04; 490 + config->data_width = 32; 491 + kmb_i2s->ccr = 0x14; 492 + kmb_i2s->xfer_resolution = 0x05; 493 493 break; 494 494 case SNDRV_PCM_FORMAT_S32_LE: 495 495 config->data_width = 32;
+4 -1
sound/soc/qcom/lpass-platform.c
··· 122 122 else 123 123 dma_ch = 0; 124 124 125 - if (dma_ch < 0) 125 + if (dma_ch < 0) { 126 + kfree(data); 126 127 return dma_ch; 128 + } 127 129 128 130 if (cpu_dai->driver->id == LPASS_DP_RX) { 129 131 map = drvdata->hdmiif_map; ··· 149 147 ret = snd_pcm_hw_constraint_integer(runtime, 150 148 SNDRV_PCM_HW_PARAM_PERIODS); 151 149 if (ret < 0) { 150 + kfree(data); 152 151 dev_err(soc_runtime->dev, "setting constraints failed: %d\n", 153 152 ret); 154 153 return -EINVAL;
+4
sound/usb/card.c
··· 379 379 380 380 DEVICE_NAME(0x046d, 0x0990, "Logitech, Inc.", "QuickCam Pro 9000"), 381 381 382 + /* ASUS ROG Strix */ 383 + PROFILE_NAME(0x0b05, 0x1917, 384 + "Realtek", "ALC1220-VB-DT", "Realtek-ALC1220-VB-Desktop"), 385 + 382 386 /* Dell WD15 Dock */ 383 387 PROFILE_NAME(0x0bda, 0x4014, "Dell", "WD15 Dock", "Dell-WD15-Dock"), 384 388 /* Dell WD19 Dock */
+2 -1
sound/usb/mixer_maps.c
··· 561 561 }, 562 562 { /* ASUS ROG Strix */ 563 563 .id = USB_ID(0x0b05, 0x1917), 564 - .map = asus_rog_map, 564 + .map = trx40_mobo_map, 565 + .connector_map = trx40_mobo_connector_map, 565 566 }, 566 567 { /* MSI TRX40 Creator */ 567 568 .id = USB_ID(0x0db0, 0x0d64),
+5 -5
sound/usb/quirks.c
··· 1672 1672 && (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS) 1673 1673 msleep(20); 1674 1674 1675 - /* Zoom R16/24, Logitech H650e/H570e, Jabra 550a, Kingston HyperX 1676 - * needs a tiny delay here, otherwise requests like get/set 1677 - * frequency return as failed despite actually succeeding. 1675 + /* Zoom R16/24, many Logitech(at least H650e/H570e/BCC950), 1676 + * Jabra 550a, Kingston HyperX needs a tiny delay here, 1677 + * otherwise requests like get/set frequency return 1678 + * as failed despite actually succeeding. 1678 1679 */ 1679 1680 if ((chip->usb_id == USB_ID(0x1686, 0x00dd) || 1680 - chip->usb_id == USB_ID(0x046d, 0x0a46) || 1681 - chip->usb_id == USB_ID(0x046d, 0x0a56) || 1681 + USB_ID_VENDOR(chip->usb_id) == 0x046d || /* Logitech */ 1682 1682 chip->usb_id == USB_ID(0x0b0e, 0x0349) || 1683 1683 chip->usb_id == USB_ID(0x0951, 0x16ad)) && 1684 1684 (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
+4 -4
tools/testing/selftests/seccomp/seccomp_bpf.c
··· 1758 1758 * and the code is stored as a positive value. \ 1759 1759 */ \ 1760 1760 if (_result < 0) { \ 1761 - SYSCALL_RET(_regs) = -result; \ 1761 + SYSCALL_RET(_regs) = -_result; \ 1762 1762 (_regs).ccr |= 0x10000000; \ 1763 1763 } else { \ 1764 - SYSCALL_RET(_regs) = result; \ 1764 + SYSCALL_RET(_regs) = _result; \ 1765 1765 (_regs).ccr &= ~0x10000000; \ 1766 1766 } \ 1767 1767 } while (0) ··· 1804 1804 #define SYSCALL_RET(_regs) (_regs).a[(_regs).windowbase * 4 + 2] 1805 1805 #elif defined(__sh__) 1806 1806 # define ARCH_REGS struct pt_regs 1807 - # define SYSCALL_NUM(_regs) (_regs).gpr[3] 1808 - # define SYSCALL_RET(_regs) (_regs).gpr[0] 1807 + # define SYSCALL_NUM(_regs) (_regs).regs[3] 1808 + # define SYSCALL_RET(_regs) (_regs).regs[0] 1809 1809 #else 1810 1810 # error "Do not know how to find your architecture's registers and syscalls" 1811 1811 #endif